Kernel

[openeuler:OLK-5.10 3227/3227] arch/x86/kvm/x86.c:8661:5: warning: no previous prototype for function '__kvm_vcpu_halt'
by kernel test robot 13 Oct '25
tree: https://gitee.com/openeuler/kernel.git OLK-5.10
head: 990be7492566e0cfb03f18b813b905a67bf29ef1
commit: c499a3120204c834ff963c43d90a5ff33194b34c [3227/3227] KVM: SVM: Add support for booting APs in an SEV-ES guest
config: x86_64-rhel-9.4-rust (https://download.01.org/0day-ci/archive/20251013/202510132118.GCFTQYLt-lkp@…)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 39f292ffa13d7ca0d1edff27ac8fd55024bb4d19)
rustc: rustc 1.58.0 (02072b482 2022-01-11)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251013/202510132118.GCFTQYLt-lkp@…)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510132118.GCFTQYLt-lkp@intel.com/
All warnings (new ones prefixed by >>):
arch/x86/kvm/x86.c:809:5: warning: no previous prototype for function 'kvm_read_guest_page_mmu' [-Wmissing-prototypes]
809 | int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
| ^
arch/x86/kvm/x86.c:809:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
809 | int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
| ^
| static
>> arch/x86/kvm/x86.c:8661:5: warning: no previous prototype for function '__kvm_vcpu_halt' [-Wmissing-prototypes]
8661 | int __kvm_vcpu_halt(struct kvm_vcpu *vcpu, int state, int reason)
| ^
arch/x86/kvm/x86.c:8661:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
8661 | int __kvm_vcpu_halt(struct kvm_vcpu *vcpu, int state, int reason)
| ^
| static
2 warnings generated.
vim +/__kvm_vcpu_halt +8661 arch/x86/kvm/x86.c
8660
> 8661 int __kvm_vcpu_halt(struct kvm_vcpu *vcpu, int state, int reason)
8662 {
8663 ++vcpu->stat.halt_exits;
8664 if (lapic_in_kernel(vcpu)) {
8665 vcpu->arch.mp_state = state;
8666 return 1;
8667 } else {
8668 vcpu->run->exit_reason = reason;
8669 return 0;
8670 }
8671 }
8672
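
For context, the compiler note above points at the two standard ways to silence -Wmissing-prototypes. A minimal standalone sketch (illustrative file and function names, not taken from arch/x86/kvm/x86.c):

/* proto_demo.c - build check: gcc -Wall -Wmissing-prototypes -c proto_demo.c */
int shared_helper(int x);	/* fix 1: a visible prototype, normally placed in a header */

static int local_helper(int x)	/* fix 2: mark translation-unit-local functions static */
{
	return x - 1;
}

int shared_helper(int x)	/* no warning: a previous prototype is now in scope */
{
	return local_helper(x) + 2;
}

Which of the two applies to __kvm_vcpu_halt depends on how the OLK-5.10 backport uses it: if other files call it, the missing piece is a declaration in a shared header; if not, it should be static.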
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

[PATCH OLK-6.6] tracing: Silence warning when chunk allocation fails in trace_pid_write
by Tengda Wu 13 Oct '25
From: Pu Lehui <pulehui(a)huawei.com>
stable inclusion
from stable-v6.6.111
commit 1262bda871dace8c6efae25f3b6a2d34f6f06d54
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ID0R4J
CVE: CVE-2025-39914
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit cd4453c5e983cf1fd5757e9acb915adb1e4602b6 ]
Syzkaller triggered a fault-injection warning:
WARNING: CPU: 1 PID: 12326 at tracepoint_add_func+0xbfc/0xeb0
Modules linked in:
CPU: 1 UID: 0 PID: 12326 Comm: syz.6.10325 Tainted: G U 6.14.0-rc5-syzkaller #0
Tainted: [U]=USER
Hardware name: Google Compute Engine/Google Compute Engine
RIP: 0010:tracepoint_add_func+0xbfc/0xeb0 kernel/tracepoint.c:294
Code: 09 fe ff 90 0f 0b 90 0f b6 74 24 43 31 ff 41 bc ea ff ff ff
RSP: 0018:ffffc9000414fb48 EFLAGS: 00010283
RAX: 00000000000012a1 RBX: ffffffff8e240ae0 RCX: ffffc90014b78000
RDX: 0000000000080000 RSI: ffffffff81bbd78b RDI: 0000000000000001
RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000001 R12: ffffffffffffffef
R13: 0000000000000000 R14: dffffc0000000000 R15: ffffffff81c264f0
FS: 00007f27217f66c0(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2e80dff8 CR3: 00000000268f8000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
tracepoint_probe_register_prio+0xc0/0x110 kernel/tracepoint.c:464
register_trace_prio_sched_switch include/trace/events/sched.h:222 [inline]
register_pid_events kernel/trace/trace_events.c:2354 [inline]
event_pid_write.isra.0+0x439/0x7a0 kernel/trace/trace_events.c:2425
vfs_write+0x24c/0x1150 fs/read_write.c:677
ksys_write+0x12b/0x250 fs/read_write.c:731
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
We can reproduce the warning with the following steps:
1. echo 8 >> set_event_notrace_pid. This lets tr->filtered_pids own one pid
and registers the sched_switch tracepoint.
2. echo ' ' >> set_event_pid, and perform fault injection during the chunk
allocation in trace_pid_list_alloc. This leaves pid_list with no pid, and
it is assigned to tr->filtered_pids.
3. echo ' ' >> set_event_pid. This makes pid_list NULL, and NULL is assigned
to tr->filtered_pids.
4. echo 9 >> set_event_pid, which triggers the double-register
sched_switch tracepoint warning.
The reason is that syzkaller injects a fault into the chunk allocation
in trace_pid_list_alloc, causing a failure in trace_pid_list_set, which
may trigger a double register of the same tracepoint. This only occurs
when the system is about to crash, but to suppress the warning, let's
add failure-handling logic to trace_pid_list_set.
Link: https://lore.kernel.org/20250908024658.2390398-1-pulehui@huaweicloud.com
Fixes: 8d6e90983ade ("tracing: Create a sparse bitmask for pid filtering")
Reported-by: syzbot+161412ccaeff20ce4dde(a)syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/67cb890e.050a0220.d8275.022e.GAE@google.com
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt(a)goodmis.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Tengda Wu <wutengda2(a)huawei.com>
---
kernel/trace/trace.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index e2881fb585ff..645b52ede612 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -750,7 +750,10 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
/* copy the current bits to the new max */
ret = trace_pid_list_first(filtered_pids, &pid);
while (!ret) {
- trace_pid_list_set(pid_list, pid);
+ ret = trace_pid_list_set(pid_list, pid);
+ if (ret < 0)
+ goto out;
+
ret = trace_pid_list_next(filtered_pids, pid + 1, &pid);
nr_pids++;
}
@@ -787,6 +790,7 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
trace_parser_clear(&parser);
ret = 0;
}
+ out:
trace_parser_put(&parser);
if (ret < 0) {
--
2.34.1
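
The core of the fix above is that a failed trace_pid_list_set() must abort the copy loop instead of being ignored. A small userspace sketch of the same pattern (illustrative names, not the kernel API), with fault injection simulated by a counter:

/* copy_fail_demo.c - build: gcc -Wall -o copy_fail_demo copy_fail_demo.c && ./copy_fail_demo */
#include <stdio.h>

static int fail_after;				/* simulates fault injection in the allocator */

static int pid_list_set(int *list, int n, int pid)
{
	if (n >= fail_after)
		return -1;			/* "chunk allocation" failed */
	list[n] = pid;
	return 0;
}

static int copy_pids(const int *src, int cnt, int *dst)
{
	int i, ret = 0;

	for (i = 0; i < cnt; i++) {
		ret = pid_list_set(dst, i, src[i]);
		if (ret < 0)
			goto out;		/* mirrors the new goto out in trace_pid_write() */
	}
out:
	return ret;
}

int main(void)
{
	int src[3] = { 8, 9, 11 };
	int dst[3];

	fail_after = 2;				/* third set() fails */
	printf("copy_pids -> %d (the failure is propagated, not swallowed)\n",
	       copy_pids(src, 3, dst));
	return 0;
}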
This patch series contains small bugfixes for the ftrace hist feature.
Mohamed Khalfella (2):
tracing/histograms: Add histograms to hist_vars if they have
referenced variables
tracing/histograms: Return an error if we fail to add histogram to
hist_vars list
kernel/trace/trace_events_hist.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
--
2.34.1

[PATCH openEuler-1.0-LTS] binfmt_misc: fix shift-out-of-bounds in check_special_flags
by Tengda Wu 13 Oct '25
From: Liu Shixin <liushixin2(a)huawei.com>
stable inclusion
from stable-v4.19.270
commit 97382a2639b1cd9631f6069061e9d7062cd2b098
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ID0UC9
CVE: CVE-2022-50497
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 6a46bf558803dd2b959ca7435a5c143efe837217 ]
UBSAN reported a shift-out-of-bounds warning:
left shift of 1 by 31 places cannot be represented in type 'int'
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x8d/0xcf lib/dump_stack.c:106
ubsan_epilogue+0xa/0x44 lib/ubsan.c:151
__ubsan_handle_shift_out_of_bounds+0x1e7/0x208 lib/ubsan.c:322
check_special_flags fs/binfmt_misc.c:241 [inline]
create_entry fs/binfmt_misc.c:456 [inline]
bm_register_write+0x9d3/0xa20 fs/binfmt_misc.c:654
vfs_write+0x11e/0x580 fs/read_write.c:582
ksys_write+0xcf/0x120 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x34/0x80 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x4194e1
Since the type of Node's flags is unsigned long, we should define these
macros with the same type too.
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Signed-off-by: Kees Cook <keescook(a)chromium.org>
Link: https://lore.kernel.org/r/20221102025123.1117184-1-liushixin2@huawei.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Tengda Wu <wutengda2(a)huawei.com>
---
fs/binfmt_misc.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
index aa4a7a23ff99..217217a64616 100644
--- a/fs/binfmt_misc.c
+++ b/fs/binfmt_misc.c
@@ -42,10 +42,10 @@ static LIST_HEAD(entries);
static int enabled = 1;
enum {Enabled, Magic};
-#define MISC_FMT_PRESERVE_ARGV0 (1 << 31)
-#define MISC_FMT_OPEN_BINARY (1 << 30)
-#define MISC_FMT_CREDENTIALS (1 << 29)
-#define MISC_FMT_OPEN_FILE (1 << 28)
+#define MISC_FMT_PRESERVE_ARGV0 (1UL << 31)
+#define MISC_FMT_OPEN_BINARY (1UL << 30)
+#define MISC_FMT_CREDENTIALS (1UL << 29)
+#define MISC_FMT_OPEN_FILE (1UL << 28)
typedef struct {
struct list_head list;
--
2.34.1
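
To see why the macro change matters, here is a tiny standalone illustration (not kernel code) of computing a bit-31 flag in unsigned long instead of int; with the plain int form, UBSAN reports exactly the "left shift of 1 by 31 places" message quoted above:

/* shift_demo.c - build: gcc -Wall -o shift_demo shift_demo.c && ./shift_demo */
#include <stdio.h>

#define GOOD_FLAG (1UL << 31)	/* computed in unsigned long, matching the flags field */

int main(void)
{
	unsigned long flags = 0;

	/* (1 << 31) would shift into the sign bit of a 32-bit int, which is
	 * undefined behaviour; 1UL << 31 keeps the whole expression unsigned. */
	flags |= GOOD_FLAG;
	printf("flags = %#lx, bit 31 set: %d\n", flags, (flags & GOOD_FLAG) != 0);
	return 0;
}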
Add XCALL_SMT_QOS support.
Jinjie Ruan (3):
arm64: Add XCALL_SMT_QOS VDSO WFxt support
irqchip/gic-v3: Support PPI interrupt to be xint
arm64: Support pmu irq inject when select online/offline task
arch/arm64/Kconfig.turbo | 12 ++
arch/arm64/include/asm/smt_qos.h | 100 ++++++++++
arch/arm64/include/asm/thread_info.h | 5 +
arch/arm64/include/asm/vdso.h | 6 +-
arch/arm64/include/uapi/asm/unistd.h | 7 +
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/entry-common.c | 53 +++++-
arch/arm64/kernel/smt_qos.c | 100 ++++++++++
arch/arm64/kernel/vdso.c | 25 +++
arch/arm64/kernel/vdso/Makefile | 10 +
arch/arm64/kernel/vdso/smt_qos.c | 45 +++++
arch/arm64/kernel/vdso/smt_qos_trampoline.S | 53 ++++++
arch/arm64/kernel/vdso/vdso.lds.S | 7 +
drivers/irqchip/irq-gic-v3.c | 3 +-
drivers/perf/arm_pmu.c | 11 ++
include/linux/mm_types.h | 4 +
include/linux/syscalls.h | 4 +
include/vdso/datapage.h | 5 +
kernel/fork.c | 14 ++
kernel/sched/core.c | 55 +++++-
kernel/sched/fair.c | 194 +++++++++++++++++++-
kernel/sched/sched.h | 6 +-
kernel/sysctl.c | 33 ++++
23 files changed, 738 insertions(+), 15 deletions(-)
create mode 100644 arch/arm64/include/asm/smt_qos.h
create mode 100644 arch/arm64/kernel/smt_qos.c
create mode 100644 arch/arm64/kernel/vdso/smt_qos.c
create mode 100644 arch/arm64/kernel/vdso/smt_qos_trampoline.S
--
2.34.1

13 Oct '25
From: zhoubin <zhoubin120(a)h-partners.com>
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/ID1L0L?from=project-issue
CVE: NA
--------------------------------
Add Huawei Intelligent Network Card Driver: hinic3
Signed-off-by: zhoubin <zhoubin120(a)h-partners.com>
Signed-off-by: zhuyikai <zhuyikai1(a)h-partners.com>
Signed-off-by: zhengjiezhen <zhengjiezhen(a)h-partners.com>
Signed-off-by: shijing <shijing34(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic3/Kconfig | 13 +
drivers/net/ethernet/huawei/hinic3/Makefile | 66 +
drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.c | 1125 ++++++++
drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.h | 99 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.c | 2015 ++++++++++++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.h | 216 ++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.c | 1516 ++++++++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.h | 67 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.c | 506 ++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.h | 53 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.c | 182 ++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.h | 37 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.c | 444 +++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.h | 36 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_define.h | 50 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.c | 1674 +++++++++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.h | 381 +++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.c | 682 +++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.h | 23 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.c | 1493 ++++++++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.h | 714 +++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.c | 1389 ++++++++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.h | 93 +
drivers/net/ethernet/huawei/hinic3/cqm/readme.txt | 3 +
drivers/net/ethernet/huawei/hinic3/hinic3_crm.h | 1280 +++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c | 1108 ++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c | 482 ++++
drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h | 75 +
drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c | 1464 ++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c | 1320 +++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_filter.c | 483 ++++
drivers/net/ethernet/huawei/hinic3/hinic3_hw.h | 877 ++++++
drivers/net/ethernet/huawei/hinic3/hinic3_irq.c | 194 ++
drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c | 1737 ++++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_main.c | 1469 ++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h | 1298 +++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_mt.h | 864 ++++++
drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c | 2125 ++++++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic.h | 221 ++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c | 1894 +++++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h | 664 +++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c | 726 +++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c | 159 ++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h | 21 +
drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h | 462 ++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c | 649 +++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c | 1130 ++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h | 325 +++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c | 47 +
drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h | 59 +
drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h | 384 +++
drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c | 909 ++++++
drivers/net/ethernet/huawei/hinic3/hinic3_rss.c | 1003 +++++++
drivers/net/ethernet/huawei/hinic3/hinic3_rss.h | 95 +
drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c | 413 +++
drivers/net/ethernet/huawei/hinic3/hinic3_rx.c | 1523 ++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_rx.h | 164 ++
drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h | 220 ++
drivers/net/ethernet/huawei/hinic3/hinic3_tx.c | 1051 +++++++
drivers/net/ethernet/huawei/hinic3/hinic3_tx.h | 157 ++
drivers/net/ethernet/huawei/hinic3/hinic3_wq.h | 130 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c | 1214 ++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h | 286 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c | 1575 +++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h | 204 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c | 93 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h | 188 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c | 997 +++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h | 118 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c | 432 +++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h | 173 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c | 1422 ++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h | 164 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c | 495 ++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h | 141 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c | 1632 +++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h | 346 +++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c | 1681 +++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h | 51 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c | 530 ++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h | 49 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c | 2222 +++++++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h | 234 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c | 1050 +++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h | 113 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c | 2460 +++++++++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c | 1884 +++++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h | 281 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c | 1571 +++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h | 182 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.c | 1259 +++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.h | 124 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c | 1021 +++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h | 39 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h | 74 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c | 44 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h | 109 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h | 160 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c | 160 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c | 279 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h | 35 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c | 159 ++
drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c | 119 +
drivers/net/ethernet/huawei/hinic3/include/bond/bond_common_defs.h | 73 +
drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd.h | 12 +
.../include/cfg_mgmt/cfg_mgmt_mpu_cmd_defs.h | 221 ++
drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd.h | 31 +
drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd_defs.h | 61 +
drivers/net/ethernet/huawei/hinic3/include/hinic3_common.h | 145 +
drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm.h | 353 +++
drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm_define.h | 48 +
drivers/net/ethernet/huawei/hinic3/include/hinic3_lld.h | 225 ++
drivers/net/ethernet/huawei/hinic3/include/hinic3_profile.h | 148 +
drivers/net/ethernet/huawei/hinic3/include/mpu/mag_mpu_cmd.h | 74 +
drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_board_defs.h | 135 +
drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_cmd_base_defs.h | 165 ++
drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd.h | 192 ++
drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd_defs.h | 1104 ++++++++
.../include/mpu/mpu_outband_ncsi_cmd_defs.h | 213 ++
drivers/net/ethernet/huawei/hinic3/include/mpu/nic_cfg_comm.h | 67 +
drivers/net/ethernet/huawei/hinic3/include/ossl_types.h | 144 +
drivers/net/ethernet/huawei/hinic3/include/public/npu_cmdq_base_defs.h | 232 ++
drivers/net/ethernet/huawei/hinic3/include/readme.txt | 1 +
drivers/net/ethernet/huawei/hinic3/include/vmsec/vmsec_mpu_common.h | 107 +
drivers/net/ethernet/huawei/hinic3/include/vram_common.h | 74 +
drivers/net/ethernet/huawei/hinic3/mag_mpu_cmd_defs.h | 1143 ++++++++
drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd.h | 174 ++
drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd_defs.h | 1420 ++++++++++
drivers/net/ethernet/huawei/hinic3/nic_npu_cmd.h | 36 +
drivers/net/ethernet/huawei/hinic3/ossl_knl.h | 39 +
drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h | 371 +++
131 files changed, 72437 insertions(+)
create mode 100644 drivers/net/ethernet/huawei/hinic3/Kconfig
create mode 100644 drivers/net/ethernet/huawei/hinic3/Makefile
create mode 100644 drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_define.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/readme.txt
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_crm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_filter.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_hw.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_main.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_mt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rx.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_tx.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_tx.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_wq.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/bond/bond_common_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/hinic3_common.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm_define.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/hinic3_lld.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/hinic3_profile.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mag_mpu_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_board_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_cmd_base_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_outband_ncsi_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/nic_cfg_comm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/ossl_types.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/public/npu_cmdq_base_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/readme.txt
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/vmsec/vmsec_mpu_common.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/vram_common.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/mag_mpu_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/nic_npu_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/ossl_knl.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h
diff --git a/drivers/net/ethernet/huawei/hinic3/Kconfig b/drivers/net/ethernet/huawei/hinic3/Kconfig
new file mode 100644
index 0000000..7208864
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Huawei driver configuration
+#
+
+config HINIC3
+ tristate "Huawei Intelligent Network Interface Card 3rd"
+ depends on PCI_MSI && NUMA && PCI_IOV && DCB && (X86 || ARM64)
+ help
+ This driver supports HiNIC PCIE Ethernet cards.
+ To compile this driver as part of the kernel, choose Y here.
+ If unsure, choose N.
+ The default is N.
diff --git a/drivers/net/ethernet/huawei/hinic3/Makefile b/drivers/net/ethernet/huawei/hinic3/Makefile
new file mode 100644
index 0000000..21d8093
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/Makefile
@@ -0,0 +1,66 @@
+# SPDX-License-Identifier: GPL-2.0-only
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/hw/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/bond/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/cqm/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/cqm/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/public/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/mpu/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/bond/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/vmsec/
+
+obj-$(CONFIG_HINIC3) += hinic3.o
+hinic3-objs := hw/hinic3_hwdev.o \
+ hw/hinic3_hw_cfg.o \
+ hw/hinic3_hw_comm.o \
+ hw/hinic3_prof_adap.o \
+ hw/hinic3_sriov.o \
+ hw/hinic3_lld.o \
+ hw/hinic3_dev_mgmt.o \
+ hw/hinic3_common.o \
+ hw/hinic3_hwif.o \
+ hw/hinic3_wq.o \
+ hw/hinic3_cmdq.o \
+ hw/hinic3_eqs.o \
+ hw/hinic3_mbox.o \
+ hw/hinic3_mgmt.o \
+ hw/hinic3_api_cmd.o \
+ hw/hinic3_hw_api.o \
+ hw/hinic3_sml_lt.o \
+ hw/hinic3_hw_mt.o \
+ hw/hinic3_nictool.o \
+ hw/hinic3_devlink.o \
+ hw/ossl_knl_linux.o \
+ hw/hinic3_multi_host_mgmt.o \
+ bond/hinic3_bond.o \
+ hinic3_main.o \
+ hinic3_tx.o \
+ hinic3_rx.o \
+ hinic3_rss.o \
+ hinic3_ntuple.o \
+ hinic3_dcb.o \
+ hinic3_ethtool.o \
+ hinic3_ethtool_stats.o \
+ hinic3_dbg.o \
+ hinic3_irq.o \
+ hinic3_filter.o \
+ hinic3_netdev_ops.o \
+ hinic3_nic_prof.o \
+ hinic3_nic_cfg.o \
+ hinic3_mag_cfg.o \
+ hinic3_nic_cfg_vf.o \
+ hinic3_rss_cfg.o \
+ hinic3_nic_event.o \
+ hinic3_nic_io.o \
+ hinic3_nic_dbg.o \
+ cqm/cqm_bat_cla.o \
+ cqm/cqm_bitmap_table.o \
+ cqm/cqm_object_intern.o \
+ cqm/cqm_bloomfilter.o \
+ cqm/cqm_cmd.o \
+ cqm/cqm_db.o \
+ cqm/cqm_object.o \
+ cqm/cqm_main.o \
+ cqm/cqm_memsec.o
diff --git a/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.c b/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.c
new file mode 100644
index 0000000..a252e09
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.c
@@ -0,0 +1,1125 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <net/sock.h>
+#include <net/bonding.h>
+#include <linux/rtnetlink.h>
+#include <linux/net.h>
+#include <linux/mutex.h>
+#include <linux/netdevice.h>
+#include <linux/version.h>
+
+#include "ossl_knl.h"
+#include "hinic3_lld.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_hw.h"
+#include "hinic3_bond.h"
+#include "hinic3_hwdev.h"
+
+#include "bond_common_defs.h"
+#include "vram_common.h"
+
+#define PORT_INVALID_ID 0xFF
+
+#define STATE_SYNCHRONIZATION_INDEX 3
+
+struct hinic3_bond_dev {
+ char name[BOND_NAME_MAX_LEN];
+ struct bond_attr bond_attr;
+ struct bond_attr new_attr;
+ struct bonding *bond;
+ void *ppf_hwdev;
+ struct kref ref;
+#define BOND_DEV_STATUS_IDLE 0x0
+#define BOND_DEV_STATUS_ACTIVATED 0x1
+ u8 status;
+ u8 slot_used[HINIC3_BOND_USER_NUM];
+ struct workqueue_struct *wq;
+ struct delayed_work bond_work;
+ struct bond_tracker tracker;
+ spinlock_t lock; /* lock for change status */
+};
+
+typedef void (*bond_service_func)(const char *bond_name, void *bond_attr,
+ enum bond_service_proc_pos pos);
+
+static DEFINE_MUTEX(g_bond_service_func_mutex);
+
+static bond_service_func g_bond_service_func[HINIC3_BOND_USER_NUM];
+
+struct hinic3_bond_mngr {
+ u32 cnt;
+ struct hinic3_bond_dev *bond_dev[BOND_MAX_NUM];
+ struct socket *rtnl_sock;
+};
+
+static struct hinic3_bond_mngr bond_mngr = { .cnt = 0 };
+static DEFINE_MUTEX(g_bond_mutex);
+
+static bool bond_dev_is_activated(const struct hinic3_bond_dev *bdev)
+{
+ return bdev->status == BOND_DEV_STATUS_ACTIVATED;
+}
+
+#define PCI_DBDF(dom, bus, dev, func) \
+ (((dom) << 16) | ((bus) << 8) | ((dev) << 3) | ((func) & 0x7))
+
+#ifdef __PCLINT__
+static inline bool netif_is_bond_master(const struct net_device *dev)
+{
+ return (dev->flags & IFF_MASTER) && (dev->priv_flags & IFF_BONDING);
+}
+#endif
+
+static u32 bond_gen_uplink_id(struct hinic3_bond_dev *bdev)
+{
+ u32 uplink_id = 0;
+ u8 i;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct pci_dev *pdev = NULL;
+ u32 domain, bus, dev, func;
+
+ spin_lock(&bdev->lock);
+ for (i = 0; i < BOND_PORT_MAX_NUM; i++) {
+ if (BITMAP_JUDGE(bdev->bond_attr.slaves, i)) {
+ if (!bdev->tracker.ndev[i])
+ continue;
+ nic_dev = netdev_priv(bdev->tracker.ndev[i]);
+ pdev = nic_dev->pdev;
+ domain = (u32)pci_domain_nr(pdev->bus);
+ bus = pdev->bus->number;
+ dev = PCI_SLOT(pdev->devfn);
+ func = PCI_FUNC(pdev->devfn);
+ uplink_id = PCI_DBDF(domain, bus, dev, func);
+ break;
+ }
+ }
+ spin_unlock(&bdev->lock);
+
+ return uplink_id;
+}
+
+static struct hinic3_nic_dev *get_nic_dev_safe(struct net_device *ndev)
+{
+ struct hinic3_lld_dev *lld_dev = NULL;
+
+ lld_dev = hinic3_get_lld_dev_by_netdev(ndev);
+ if (!lld_dev)
+ return NULL;
+
+ return netdev_priv(ndev);
+}
+
+static u8 bond_get_slaves_bitmap(struct hinic3_bond_dev *bdev, struct bonding *bond)
+{
+ struct slave *slave = NULL;
+ struct list_head *iter = NULL;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ u8 bitmap = 0;
+ u8 port_id;
+
+ rcu_read_lock();
+ bond_for_each_slave_rcu(bond, slave, iter) {
+ nic_dev = get_nic_dev_safe(slave->dev);
+ if (!nic_dev)
+ continue;
+
+ port_id = hinic3_physical_port_id(nic_dev->hwdev);
+ BITMAP_SET(bitmap, port_id);
+ (void)iter;
+ }
+ rcu_read_unlock();
+
+ return bitmap;
+}
+
+static void bond_update_attr(struct hinic3_bond_dev *bdev, struct bonding *bond)
+{
+ spin_lock(&bdev->lock);
+
+ bdev->new_attr.bond_mode = (u16)bond->params.mode;
+ bdev->new_attr.bond_id = bdev->bond_attr.bond_id;
+ bdev->new_attr.up_delay = (u16)bond->params.updelay;
+ bdev->new_attr.down_delay = (u16)bond->params.downdelay;
+ bdev->new_attr.slaves = 0;
+ bdev->new_attr.active_slaves = 0;
+ bdev->new_attr.lacp_collect_slaves = 0;
+ bdev->new_attr.first_roce_func = DEFAULT_ROCE_BOND_FUNC;
+
+ /* Only support L2/L34/L23 three policy */
+ if (bond->params.xmit_policy <= BOND_XMIT_POLICY_LAYER23)
+ bdev->new_attr.xmit_hash_policy = (u8)bond->params.xmit_policy;
+ else
+ bdev->new_attr.xmit_hash_policy = BOND_XMIT_POLICY_LAYER2;
+
+ bdev->new_attr.slaves = bond_get_slaves_bitmap(bdev, bond);
+
+ spin_unlock(&bdev->lock);
+}
+
+static u8 bond_get_netdev_idx(const struct hinic3_bond_dev *bdev,
+ const struct net_device *ndev)
+{
+ u8 i;
+
+ for (i = 0; i < BOND_PORT_MAX_NUM; i++) {
+ if (bdev->tracker.ndev[i] == ndev)
+ return i;
+ }
+
+ return PORT_INVALID_ID;
+}
+
+static u8 bond_dev_track_port(struct hinic3_bond_dev *bdev,
+ struct net_device *ndev)
+{
+ u8 port_id;
+ void *ppf_hwdev = NULL;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct hinic3_lld_dev *ppf_lld_dev = NULL;
+
+ nic_dev = get_nic_dev_safe(ndev);
+ if (!nic_dev) {
+ pr_warn("hinic3_bond: invalid slave: %s\n", ndev->name);
+ return PORT_INVALID_ID;
+ }
+
+ ppf_lld_dev = hinic3_get_ppf_lld_dev_unsafe(nic_dev->lld_dev);
+ if (ppf_lld_dev)
+ ppf_hwdev = ppf_lld_dev->hwdev;
+
+ pr_info("hinic3_bond: track ndev:%s", ndev->name);
+ port_id = hinic3_physical_port_id(nic_dev->hwdev);
+
+ spin_lock(&bdev->lock);
+ /* attach netdev to the port position associated with it */
+ if (bdev->tracker.ndev[port_id]) {
+ pr_warn("hinic3_bond: Old ndev:%s is replaced\n",
+ bdev->tracker.ndev[port_id]->name);
+ } else {
+ bdev->tracker.cnt++;
+ }
+ bdev->tracker.ndev[port_id] = ndev;
+ bdev->tracker.netdev_state[port_id].link_up = 0;
+ bdev->tracker.netdev_state[port_id].tx_enabled = 0;
+ if (!bdev->ppf_hwdev)
+ bdev->ppf_hwdev = ppf_hwdev;
+ pr_info("TRACK cnt: %d, slave_name(%s)\n", bdev->tracker.cnt, ndev->name);
+ spin_unlock(&bdev->lock);
+
+ return port_id;
+}
+
+static void bond_dev_untrack_port(struct hinic3_bond_dev *bdev, u8 idx)
+{
+ spin_lock(&bdev->lock);
+
+ if (bdev->tracker.ndev[idx]) {
+ pr_info("hinic3_bond: untrack port:%u ndev:%s cnt:%d\n", idx,
+ bdev->tracker.ndev[idx]->name, bdev->tracker.cnt - 1);
+ bdev->tracker.ndev[idx] = NULL;
+ bdev->tracker.cnt--;
+ }
+
+ spin_unlock(&bdev->lock);
+}
+
+static void bond_slave_event(struct hinic3_bond_dev *bdev, struct slave *slave)
+{
+ u8 idx;
+
+ idx = bond_get_netdev_idx(bdev, slave->dev);
+ if (idx == PORT_INVALID_ID)
+ idx = bond_dev_track_port(bdev, slave->dev);
+ if (idx == PORT_INVALID_ID)
+ return;
+
+ spin_lock(&bdev->lock);
+ bdev->tracker.netdev_state[idx].link_up = bond_slave_is_up(slave);
+ bdev->tracker.netdev_state[idx].tx_enabled = bond_slave_is_up(slave) &&
+ bond_is_active_slave(slave);
+ spin_unlock(&bdev->lock);
+
+ queue_delayed_work(bdev->wq, &bdev->bond_work, 0);
+}
+
+static bool bond_eval_bonding_stats(const struct hinic3_bond_dev *bdev,
+ struct bonding *bond)
+{
+ int mode;
+
+ mode = BOND_MODE(bond);
+ if (mode != BOND_MODE_8023AD &&
+ mode != BOND_MODE_XOR &&
+ mode != BOND_MODE_ACTIVEBACKUP) {
+ pr_err("hinic3_bond: Wrong mode:%d\n", mode);
+ return false;
+ }
+
+ return bdev->tracker.cnt > 0;
+}
+
+static void bond_master_event(struct hinic3_bond_dev *bdev,
+ struct bonding *bond)
+{
+ spin_lock(&bdev->lock);
+ bdev->tracker.is_bonded = bond_eval_bonding_stats(bdev, bond);
+ spin_unlock(&bdev->lock);
+
+ queue_delayed_work(bdev->wq, &bdev->bond_work, 0);
+}
+
+static struct hinic3_bond_dev *bond_get_bdev(struct bonding *bond)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ int bid;
+
+ if (bond == NULL) {
+ pr_err("hinic3_bond: bond is NULL\n");
+ return NULL;
+ }
+
+ mutex_lock(&g_bond_mutex);
+ for (bid = BOND_FIRST_ID; bid <= BOND_MAX_ID; bid++) {
+ bdev = bond_mngr.bond_dev[bid];
+ if (!bdev)
+ continue;
+
+ if (bond == bdev->bond) {
+ mutex_unlock(&g_bond_mutex);
+ return bdev;
+ }
+
+ if (strncmp(bond->dev->name, bdev->name, BOND_NAME_MAX_LEN) == 0) {
+ bdev->bond = bond;
+ return bdev;
+ }
+ }
+ mutex_unlock(&g_bond_mutex);
+ return NULL;
+}
+
+static struct bonding *get_bonding_by_netdev(struct net_device *ndev)
+{
+ struct bonding *bond = NULL;
+ struct slave *slave = NULL;
+
+ if (netif_is_bond_master(ndev)) {
+ bond = netdev_priv(ndev);
+ } else if (netif_is_bond_slave(ndev)) {
+ slave = bond_slave_get_rtnl(ndev);
+ if (slave) {
+ bond = bond_get_bond_by_slave(slave);
+ }
+ }
+
+ return bond;
+}
+/*lint -e580 -e546*/
+bool hinic3_is_bond_dev_status_actived(struct net_device *ndev)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ struct bonding *bond = NULL;
+
+ if (!ndev) {
+ pr_err("hinic3_bond: netdev is NULL\n");
+ return false;
+ }
+
+ bond = get_bonding_by_netdev(ndev);
+ bdev = bond_get_bdev(bond);
+ if (!bdev)
+ return false;
+
+ return bdev->status == BOND_DEV_STATUS_ACTIVATED;
+}
+EXPORT_SYMBOL(hinic3_is_bond_dev_status_actived);
+/*lint +e580 +e546*/
+
+static void bond_handle_rtnl_event(struct net_device *ndev)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ struct bonding *bond = NULL;
+ struct slave *slave = NULL;
+
+ bond = get_bonding_by_netdev(ndev);
+ bdev = bond_get_bdev(bond);
+ if (!bdev)
+ return;
+
+ bond_update_attr(bdev, bond);
+
+ if (netif_is_bond_slave(ndev)) {
+ slave = bond_slave_get_rtnl(ndev);
+ bond_slave_event(bdev, slave);
+ } else {
+ bond_master_event(bdev, bond);
+ }
+}
+
+static void bond_rtnl_data_ready(struct sock *sk)
+{
+ struct net_device *ndev = NULL;
+ struct ifinfomsg *ifinfo = NULL;
+ struct nlmsghdr *hdr = NULL;
+ struct sk_buff *skb = NULL;
+ int err = 0;
+
+ skb = skb_recv_datagram(sk, 0, 0, &err);
+ if (err != 0 || !skb)
+ return;
+
+ hdr = (struct nlmsghdr *)skb->data;
+ if (!hdr ||
+ !NLMSG_OK(hdr, skb->len) ||
+ hdr->nlmsg_type != RTM_NEWLINK ||
+ !rtnl_is_locked()) {
+ goto free_skb;
+ }
+
+ ifinfo = nlmsg_data(hdr);
+ ndev = dev_get_by_index(&init_net, ifinfo->ifi_index);
+ if (ndev) {
+ bond_handle_rtnl_event(ndev);
+ dev_put(ndev);
+ }
+
+free_skb:
+ kfree_skb(skb);
+}
+
+static int bond_enable_netdev_event(void)
+{
+ struct sockaddr_nl addr = {
+ .nl_family = AF_NETLINK,
+ .nl_groups = RTNLGRP_LINK,
+ };
+ int err;
+ struct socket **rtnl_sock = &bond_mngr.rtnl_sock;
+
+ err = sock_create_kern(&init_net, AF_NETLINK, SOCK_DGRAM, NETLINK_ROUTE,
+ rtnl_sock);
+ if (err) {
+ pr_err("hinic3_bond: Couldn't create rtnl socket.\n");
+ *rtnl_sock = NULL;
+ return err;
+ }
+
+ (*rtnl_sock)->sk->sk_data_ready = bond_rtnl_data_ready;
+ (*rtnl_sock)->sk->sk_allocation = GFP_KERNEL;
+
+ err = kernel_bind(*rtnl_sock, (struct sockaddr *)(u8 *)&addr, sizeof(addr));
+ if (err) {
+ pr_err("hinic3_bond: Couldn't bind rtnl socket.\n");
+ sock_release(*rtnl_sock);
+ *rtnl_sock = NULL;
+ }
+
+ return err;
+}
+
+static void bond_disable_netdev_event(void)
+{
+ if (bond_mngr.rtnl_sock)
+ sock_release(bond_mngr.rtnl_sock);
+}
+
+static int bond_send_upcmd(struct hinic3_bond_dev *bdev, struct bond_attr *attr,
+ u8 cmd_type)
+{
+ int err, len;
+ struct hinic3_bond_cmd cmd = {0};
+ u16 out_size = sizeof(cmd);
+
+ cmd.sub_cmd = 0;
+ cmd.ret_status = 0;
+
+ if (attr) {
+ memcpy(&cmd.attr, attr, sizeof(*attr));
+ } else {
+ cmd.attr.bond_id = bdev->bond_attr.bond_id;
+ cmd.attr.slaves = bdev->bond_attr.slaves;
+ }
+
+ len = sizeof(cmd.bond_name);
+ if (cmd_type == MPU_CMD_BOND_CREATE) {
+ strscpy(cmd.bond_name, bdev->name, len);
+ cmd.bond_name[sizeof(cmd.bond_name) - 1] = '\0';
+ }
+
+ err = hinic3_msg_to_mgmt_sync(bdev->ppf_hwdev, HINIC3_MOD_OVS, cmd_type,
+ &cmd, sizeof(cmd), &cmd, &out_size, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err != 0 || !out_size || cmd.ret_status != 0) {
+ pr_err("hinic3_bond: uP cmd: %u failed, err: %d, sts: %u, out size: %u\n",
+ cmd_type, err, cmd.ret_status, out_size);
+ err = -EIO;
+ }
+
+ return err;
+}
+
+static int bond_upcmd_deactivate(struct hinic3_bond_dev *bdev)
+{
+ int err;
+ u16 id_tmp;
+
+ if (bdev->status == BOND_DEV_STATUS_IDLE)
+ return 0;
+
+ pr_info("hinic3_bond: deactivate bond: %u\n", bdev->bond_attr.bond_id);
+
+ err = bond_send_upcmd(bdev, NULL, MPU_CMD_BOND_DELETE);
+ if (err == 0) {
+ id_tmp = bdev->bond_attr.bond_id;
+ memset(&bdev->bond_attr, 0, sizeof(bdev->bond_attr));
+ bdev->status = BOND_DEV_STATUS_IDLE;
+ bdev->bond_attr.bond_id = id_tmp;
+ if (!bdev->tracker.cnt)
+ bdev->ppf_hwdev = NULL;
+ }
+
+ return err;
+}
+
+static void bond_pf_bitmap_set(struct hinic3_bond_dev *bdev, u8 index)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+ u8 pf_id;
+
+ nic_dev = netdev_priv(bdev->tracker.ndev[index]);
+ if (!nic_dev)
+ return;
+
+ pf_id = hinic3_pf_id_of_vf(nic_dev->hwdev);
+ BITMAP_SET(bdev->new_attr.bond_pf_bitmap, pf_id);
+}
+
+static void bond_update_slave_info(struct hinic3_bond_dev *bdev,
+ struct bond_attr *attr)
+{
+ struct net_device *ndev = NULL;
+ u8 i;
+
+ if (!netif_running(bdev->bond->dev))
+ return;
+
+ if (attr->bond_mode == BOND_MODE_ACTIVEBACKUP) {
+ rcu_read_lock();
+ ndev = bond_option_active_slave_get_rcu(bdev->bond);
+ rcu_read_unlock();
+ }
+
+ for (i = 0; i < BOND_PORT_MAX_NUM; i++) {
+ if (!BITMAP_JUDGE(attr->slaves, i)) {
+ if (BITMAP_JUDGE(bdev->bond_attr.slaves, i))
+ bond_dev_untrack_port(bdev, i);
+
+ continue;
+ }
+
+ if (!bdev->tracker.ndev[i])
+ continue;
+
+ bond_pf_bitmap_set(bdev, i);
+
+ if (!bdev->tracker.netdev_state[i].tx_enabled)
+ continue;
+
+ if (attr->bond_mode == BOND_MODE_8023AD) {
+ BITMAP_SET(attr->active_slaves, i);
+ BITMAP_SET(attr->lacp_collect_slaves, i);
+ } else if (attr->bond_mode == BOND_MODE_XOR) {
+ BITMAP_SET(attr->active_slaves, i);
+ } else if (ndev && (ndev == bdev->tracker.ndev[i])) {
+ /* BOND_MODE_ACTIVEBACKUP */
+ BITMAP_SET(attr->active_slaves, i);
+ break;
+ }
+ }
+}
+
+static int bond_upcmd_config(struct hinic3_bond_dev *bdev,
+ struct bond_attr *attr)
+{
+ int err;
+
+ bond_update_slave_info(bdev, attr);
+ attr->bond_pf_bitmap = bdev->new_attr.bond_pf_bitmap;
+
+ if (memcmp(&bdev->bond_attr, attr, sizeof(struct bond_attr)) == 0)
+ return 0;
+
+ pr_info("hinic3_bond: Config bond: %u\n", attr->bond_id);
+ pr_info("mode:%u, up_d:%u, down_d:%u, hash:%u, slaves:%u, ap:%u, cs:%u\n",
+ attr->bond_mode,
+ attr->up_delay,
+ attr->down_delay,
+ attr->xmit_hash_policy,
+ attr->slaves,
+ attr->active_slaves,
+ attr->lacp_collect_slaves);
+ pr_info("bond_pf_bitmap: 0x%x\n", attr->bond_pf_bitmap);
+ pr_info("bond user_bitmap 0x%x\n", attr->user_bitmap);
+
+ err = bond_send_upcmd(bdev, attr, MPU_CMD_BOND_SET_ATTR);
+ if (!err)
+ memcpy(&bdev->bond_attr, attr, sizeof(*attr));
+
+ return err;
+}
+
+static int bond_upcmd_activate(struct hinic3_bond_dev *bdev,
+ struct bond_attr *attr)
+{
+ int err;
+
+ if (bond_dev_is_activated(bdev))
+ return 0;
+
+ pr_info("hinic3_bond: active bond: %u\n", bdev->bond_attr.bond_id);
+
+ err = bond_send_upcmd(bdev, attr, MPU_CMD_BOND_CREATE);
+ if (err == 0) {
+ bdev->status = BOND_DEV_STATUS_ACTIVATED;
+ bdev->bond_attr.bond_mode = attr->bond_mode;
+ err = bond_upcmd_config(bdev, attr);
+ }
+
+ return err;
+}
+
+static void bond_call_service_func(struct hinic3_bond_dev *bdev, struct bond_attr *attr,
+ enum bond_service_proc_pos pos, int bond_status)
+{
+ int i;
+
+ if (bond_status)
+ return;
+
+ mutex_lock(&g_bond_service_func_mutex);
+ for (i = 0; i < HINIC3_BOND_USER_NUM; i++) {
+ if (g_bond_service_func[i])
+ g_bond_service_func[i](bdev->name, (void *)attr, pos);
+ }
+ mutex_unlock(&g_bond_service_func_mutex);
+}
+
+static u32 bond_get_user_bitmap(struct hinic3_bond_dev *bdev)
+{
+ u32 user_bitmap = 0;
+ u8 user;
+
+ for (user = HINIC3_BOND_USER_OVS; user < HINIC3_BOND_USER_NUM; user++) {
+ if (bdev->slot_used[user] == 1)
+ BITMAP_SET(user_bitmap, user);
+ }
+ return user_bitmap;
+}
+
+static void bond_do_work(struct hinic3_bond_dev *bdev)
+{
+ bool is_bonded = 0;
+ struct bond_attr attr;
+ int is_in_kexec;
+ int err = 0;
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ pr_info("Skip changing bond status during os replace\n");
+ return;
+ }
+
+ spin_lock(&bdev->lock);
+ is_bonded = bdev->tracker.is_bonded;
+ attr = bdev->new_attr;
+ spin_unlock(&bdev->lock);
+ attr.user_bitmap = bond_get_user_bitmap(bdev);
+
+ /* is_bonded indicates whether bond should be activated. */
+ if (is_bonded && !bond_dev_is_activated(bdev)) {
+ bond_call_service_func(bdev, &attr, BOND_BEFORE_ACTIVE, 0);
+ err = bond_upcmd_activate(bdev, &attr);
+ bond_call_service_func(bdev, &attr, BOND_AFTER_ACTIVE, err);
+ } else if (is_bonded && bond_dev_is_activated(bdev)) {
+ bond_call_service_func(bdev, &attr, BOND_BEFORE_MODIFY, 0);
+ err = bond_upcmd_config(bdev, &attr);
+ bond_call_service_func(bdev, &attr, BOND_AFTER_MODIFY, err);
+ } else if (!is_bonded && bond_dev_is_activated(bdev)) {
+ bond_call_service_func(bdev, &attr, BOND_BEFORE_DEACTIVE, 0);
+ err = bond_upcmd_deactivate(bdev);
+ bond_call_service_func(bdev, &attr, BOND_AFTER_DEACTIVE, err);
+ }
+
+ if (err)
+ pr_err("hinic3_bond: Do bond failed\n");
+}
+
+static void bond_try_do_work(struct work_struct *work)
+{
+ struct delayed_work *delayed_work = to_delayed_work(work);
+ struct hinic3_bond_dev *bdev =
+ container_of(delayed_work, struct hinic3_bond_dev, bond_work);
+ int status;
+
+ status = mutex_trylock(&g_bond_mutex);
+ if (status == 0) {
+ /* Delay 1 sec and retry */
+ queue_delayed_work(bdev->wq, &bdev->bond_work, HZ);
+ } else {
+ bond_do_work(bdev);
+ mutex_unlock(&g_bond_mutex);
+ }
+}
+
+static int bond_dev_init(struct hinic3_bond_dev *bdev, const char *name)
+{
+ bdev->wq = create_singlethread_workqueue("hinic3_bond_wq");
+ if (!bdev->wq) {
+ pr_err("hinic3_bond: Failed to create workqueue\n");
+ return -ENODEV;
+ }
+
+ INIT_DELAYED_WORK(&bdev->bond_work, bond_try_do_work);
+ bdev->status = BOND_DEV_STATUS_IDLE;
+ strscpy(bdev->name, name, sizeof(bdev->name));
+
+ spin_lock_init(&bdev->lock);
+
+ return 0;
+}
+
+static int bond_dev_release(struct hinic3_bond_dev *bdev)
+{
+ int err;
+ u8 i;
+ u32 bond_cnt;
+
+ err = bond_upcmd_deactivate(bdev);
+ if (err) {
+ pr_err("hinic3_bond: Failed to deactivate dev\n");
+ mutex_unlock(&g_bond_mutex);
+ return err;
+ }
+
+ for (i = BOND_FIRST_ID; i <= BOND_MAX_ID; i++) {
+ if (bond_mngr.bond_dev[i] == bdev) {
+ bond_mngr.bond_dev[i] = NULL;
+ bond_mngr.cnt--;
+ pr_info("hinic3_bond: Free bond, id: %u mngr_cnt:%u\n", i, bond_mngr.cnt);
+ break;
+ }
+ }
+
+ bond_cnt = bond_mngr.cnt;
+ mutex_unlock(&g_bond_mutex);
+ if (!bond_cnt)
+ bond_disable_netdev_event();
+
+ cancel_delayed_work_sync(&bdev->bond_work);
+ destroy_workqueue(bdev->wq);
+ kfree(bdev);
+
+ return err;
+}
+
+static void bond_dev_free(struct kref *ref)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+
+ bdev = container_of(ref, struct hinic3_bond_dev, ref);
+ bond_dev_release(bdev);
+}
+
+static struct hinic3_bond_dev *bond_dev_alloc(const char *name)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ u16 i;
+ int err;
+
+ bdev = kzalloc(sizeof(*bdev), GFP_KERNEL);
+ if (!bdev) {
+ mutex_unlock(&g_bond_mutex);
+ return NULL;
+ }
+
+ err = bond_dev_init(bdev, name);
+ if (err) {
+ kfree(bdev);
+ mutex_unlock(&g_bond_mutex);
+ return NULL;
+ }
+
+ if (!bond_mngr.cnt) {
+ err = bond_enable_netdev_event();
+ if (err) {
+ bond_dev_release(bdev);
+ return NULL;
+ }
+ }
+
+ for (i = BOND_FIRST_ID; i <= BOND_MAX_ID; i++) {
+ if (!bond_mngr.bond_dev[i]) {
+ bdev->bond_attr.bond_id = i;
+ bond_mngr.bond_dev[i] = bdev;
+ bond_mngr.cnt++;
+ pr_info("hinic3_bond: Create bond dev, id:%u cnt:%u\n", i, bond_mngr.cnt);
+ break;
+ }
+ }
+
+ if (i > BOND_MAX_ID) {
+ bond_dev_release(bdev);
+ bdev = NULL;
+ pr_err("hinic3_bond: Failed to get free bond id\n");
+ }
+
+ return bdev;
+}
+
+static void update_bond_info(struct hinic3_bond_dev *bdev, struct bonding *bond)
+{
+ struct slave *slave = NULL;
+ struct list_head *iter = NULL;
+ struct net_device *ndev[BOND_PORT_MAX_NUM];
+ int i = 0;
+
+ bdev->bond = bond;
+
+ rtnl_lock();
+ bond_for_each_slave(bond, slave, iter) {
+ if (bond_dev_track_port(bdev, slave->dev) == PORT_INVALID_ID)
+ continue;
+ ndev[i] = slave->dev;
+ dev_hold(ndev[i++]);
+ if (i >= BOND_PORT_MAX_NUM)
+ break;
+ (void)iter;
+ }
+
+ bond_for_each_slave(bond, slave, iter) {
+ bond_handle_rtnl_event(slave->dev);
+ (void)iter;
+ }
+
+ bond_handle_rtnl_event(bond->dev);
+
+ rtnl_unlock();
+ /* In case user queries info before bonding is complete */
+ flush_delayed_work(&bdev->bond_work);
+
+ rtnl_lock();
+ while (i)
+ dev_put(ndev[--i]);
+ rtnl_unlock();
+}
+
+static struct hinic3_bond_dev *bond_dev_by_name(const char *name)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ int i;
+
+ for (i = BOND_FIRST_ID; i <= BOND_MAX_ID; i++) {
+ if (bond_mngr.bond_dev[i] &&
+ (strcmp(bond_mngr.bond_dev[i]->name, name) == 0)) {
+ bdev = bond_mngr.bond_dev[i];
+ break;
+ }
+ }
+
+ return bdev;
+}
+
+static void bond_dev_user_attach(struct hinic3_bond_dev *bdev,
+ enum hinic3_bond_user user)
+{
+ u32 user_bitmap;
+
+ if (user < 0 || user >= HINIC3_BOND_USER_NUM)
+ return;
+
+ if (bdev->slot_used[user])
+ return;
+
+ bdev->slot_used[user] = 1;
+ if (!kref_get_unless_zero(&bdev->ref))
+ kref_init(&bdev->ref);
+ else {
+ user_bitmap = bond_get_user_bitmap(bdev);
+ pr_info("hinic3_bond: user %u attach bond %s, user_bitmap %#x\n",
+ user, bdev->name, user_bitmap);
+ queue_delayed_work(bdev->wq, &bdev->bond_work, 0);
+ }
+}
+
+static void bond_dev_user_detach(struct hinic3_bond_dev *bdev,
+ enum hinic3_bond_user user, bool *freed)
+{
+ if (bdev->slot_used[user]) {
+ bdev->slot_used[user] = 0;
+ if (kref_read(&bdev->ref) == 1)
+ *freed = true;
+ kref_put(&bdev->ref, bond_dev_free);
+ }
+}
+
+static struct bonding *bond_get_knl_bonding(const char *name)
+{
+ struct net_device *ndev_tmp = NULL;
+
+ rcu_read_lock();
+ for_each_netdev(&init_net, ndev_tmp) {
+ if (netif_is_bond_master(ndev_tmp) &&
+ !strcmp(ndev_tmp->name, name)) {
+ rcu_read_unlock();
+ return netdev_priv(ndev_tmp);
+ }
+ }
+ rcu_read_unlock();
+ return NULL;
+}
+
+void hinic3_bond_set_user_bitmap(struct bond_attr *attr, enum hinic3_bond_user user)
+{
+ if (!BITMAP_JUDGE(attr->user_bitmap, user))
+ BITMAP_SET(attr->user_bitmap, user);
+}
+EXPORT_SYMBOL(hinic3_bond_set_user_bitmap);
+
+int hinic3_bond_attach(const char *name, enum hinic3_bond_user user,
+ u16 *bond_id)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ struct bonding *bond = NULL;
+ bool new_dev = false;
+
+ if (!name || !bond_id)
+ return -EINVAL;
+
+ bond = bond_get_knl_bonding(name);
+ if (!bond) {
+ pr_warn("hinic3_bond: Kernel bond %s not exist.\n", name);
+ return -ENODEV;
+ }
+
+ mutex_lock(&g_bond_mutex);
+ bdev = bond_dev_by_name(name);
+ if (!bdev) {
+ bdev = bond_dev_alloc(name);
+ new_dev = true;
+ } else {
+ pr_info("hinic3_bond: %s already exist\n", name);
+ }
+
+ if (!bdev) {
+ // lock has been released in bond_dev_alloc
+ return -ENODEV;
+ }
+
+ bond_dev_user_attach(bdev, user);
+ mutex_unlock(&g_bond_mutex);
+
+ if (new_dev)
+ update_bond_info(bdev, bond);
+
+ *bond_id = bdev->bond_attr.bond_id;
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_bond_attach);
+
+int hinic3_bond_detach(u16 bond_id, enum hinic3_bond_user user)
+{
+ int err = 0;
+ bool lock_freed = false;
+
+ if (!BOND_ID_IS_VALID(bond_id) || user >= HINIC3_BOND_USER_NUM) {
+ pr_warn("hinic3_bond: Invalid bond id or user, bond_id: %u, user: %d\n",
+ bond_id, user);
+ return -EINVAL;
+ }
+
+ mutex_lock(&g_bond_mutex);
+ if (!bond_mngr.bond_dev[bond_id])
+ err = -ENODEV;
+ else
+ bond_dev_user_detach(bond_mngr.bond_dev[bond_id], user, &lock_freed);
+
+ if (!lock_freed)
+ mutex_unlock(&g_bond_mutex);
+ return err;
+}
+EXPORT_SYMBOL(hinic3_bond_detach);
+
+void hinic3_bond_clean_user(enum hinic3_bond_user user)
+{
+ int i = 0;
+ bool lock_freed = false;
+
+ mutex_lock(&g_bond_mutex);
+ for (i = BOND_FIRST_ID; i <= BOND_MAX_ID; i++) {
+ if (bond_mngr.bond_dev[i]) {
+ bond_dev_user_detach(bond_mngr.bond_dev[i], user, &lock_freed);
+ if (lock_freed) {
+ mutex_lock(&g_bond_mutex);
+ lock_freed = false;
+ }
+ }
+ }
+ if (!lock_freed)
+ mutex_unlock(&g_bond_mutex);
+}
+EXPORT_SYMBOL(hinic3_bond_clean_user);
+
+int hinic3_bond_get_uplink_id(u16 bond_id, u32 *uplink_id)
+{
+ if (!BOND_ID_IS_VALID(bond_id) || !uplink_id) {
+ pr_warn("hinic3_bond: Invalid args, id: %u, uplink: %d\n",
+ bond_id, !!uplink_id);
+ return -EINVAL;
+ }
+
+ mutex_lock(&g_bond_mutex);
+ if (bond_mngr.bond_dev[bond_id])
+ *uplink_id = bond_gen_uplink_id(bond_mngr.bond_dev[bond_id]);
+ mutex_unlock(&g_bond_mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_bond_get_uplink_id);
+
+int hinic3_bond_register_service_func(enum hinic3_bond_user user, void (*func)
+ (const char *bond_name, void *bond_attr,
+ enum bond_service_proc_pos pos))
+{
+ if (user >= HINIC3_BOND_USER_NUM)
+ return -EINVAL;
+
+ mutex_lock(&g_bond_service_func_mutex);
+ g_bond_service_func[user] = func;
+ mutex_unlock(&g_bond_service_func_mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_bond_register_service_func);
+
+int hinic3_bond_unregister_service_func(enum hinic3_bond_user user)
+{
+ if (user >= HINIC3_BOND_USER_NUM)
+ return -EINVAL;
+
+ mutex_lock(&g_bond_service_func_mutex);
+ g_bond_service_func[user] = NULL;
+ mutex_unlock(&g_bond_service_func_mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_bond_unregister_service_func);
+
+int hinic3_bond_get_slaves(u16 bond_id, struct hinic3_bond_info_s *info)
+{
+ struct bond_tracker *tracker = NULL;
+ int size;
+ int i;
+ int len;
+
+ if (!info || !BOND_ID_IS_VALID(bond_id)) {
+ pr_warn("hinic3_bond: Invalid args, info: %d,id: %u\n",
+ !!info, bond_id);
+ return -EINVAL;
+ }
+
+ size = ARRAY_LEN(info->slaves_name);
+ if (size < BOND_PORT_MAX_NUM) {
+ pr_warn("hinic3_bond: Invalid args, size: %u\n",
+ size);
+ return -EINVAL;
+ }
+
+ mutex_lock(&g_bond_mutex);
+ if (bond_mngr.bond_dev[bond_id]) {
+ info->slaves = bond_mngr.bond_dev[bond_id]->bond_attr.slaves;
+ tracker = &bond_mngr.bond_dev[bond_id]->tracker;
+ info->cnt = 0;
+ for (i = 0; i < BOND_PORT_MAX_NUM; i++) {
+ if (BITMAP_JUDGE(info->slaves, i) && tracker->ndev[i]) {
+ len = sizeof(info->slaves_name[0]);
+ strscpy(info->slaves_name[info->cnt], tracker->ndev[i]->name, len);
+ info->cnt++;
+ }
+ }
+ }
+ mutex_unlock(&g_bond_mutex);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_bond_get_slaves);
+
+struct net_device *hinic3_bond_get_netdev_by_portid(const char *bond_name, u8 port_id)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ struct net_device *ndev = NULL;
+
+ if (port_id >= BOND_PORT_MAX_NUM)
+ return NULL;
+ mutex_lock(&g_bond_mutex);
+ bdev = bond_dev_by_name(bond_name);
+ if (!bdev) {
+ mutex_unlock(&g_bond_mutex);
+ return NULL;
+ }
+ /* read the slave pointer before dropping the lock that protects bdev */
+ ndev = bdev->tracker.ndev[port_id];
+ mutex_unlock(&g_bond_mutex);
+ return ndev;
+}
+EXPORT_SYMBOL(hinic3_bond_get_netdev_by_portid);
+
+int hinic3_get_hw_bond_infos(void *hwdev, struct hinic3_hw_bond_infos *infos, u16 channel)
+{
+ struct comm_cmd_hw_bond_infos bond_infos;
+ u16 out_size = sizeof(bond_infos);
+ int err;
+
+ if (!hwdev || !infos)
+ return -EINVAL;
+
+ memset(&bond_infos, 0, sizeof(bond_infos));
+
+ bond_infos.infos.bond_id = infos->bond_id;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, COMM_MGMT_CMD_GET_HW_BOND,
+ &bond_infos, sizeof(bond_infos),
+ &bond_infos, &out_size, 0, channel);
+ if (bond_infos.head.status || err || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get hw bond information, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, bond_infos.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ memcpy(infos, &bond_infos.infos, sizeof(*infos));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_hw_bond_infos);
+
+int hinic3_get_bond_tracker_by_name(const char *name, struct bond_tracker *tracker)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ int i;
+
+ mutex_lock(&g_bond_mutex);
+ for (i = BOND_FIRST_ID; i <= BOND_MAX_ID; i++) {
+ if (bond_mngr.bond_dev[i] &&
+ (strcmp(bond_mngr.bond_dev[i]->name, name) == 0)) {
+ bdev = bond_mngr.bond_dev[i];
+ spin_lock(&bdev->lock);
+ *tracker = bdev->tracker;
+ spin_unlock(&bdev->lock);
+ mutex_unlock(&g_bond_mutex);
+ return 0;
+ }
+ }
+ mutex_unlock(&g_bond_mutex);
+ return -ENODEV;
+}
+EXPORT_SYMBOL(hinic3_get_bond_tracker_by_name);
diff --git a/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.h b/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.h
new file mode 100644
index 0000000..5ab36f7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_BOND_H
+#define HINIC3_BOND_H
+
+#include <linux/netdevice.h>
+#include <linux/types.h>
+#include "mpu_inband_cmd_defs.h"
+#include "bond_common_defs.h"
+
+enum hinic3_bond_user {
+ HINIC3_BOND_USER_OVS,
+ HINIC3_BOND_USER_TOE,
+ HINIC3_BOND_USER_ROCE,
+ HINIC3_BOND_USER_NUM
+};
+
+enum bond_service_proc_pos {
+ BOND_BEFORE_ACTIVE,
+ BOND_AFTER_ACTIVE,
+ BOND_BEFORE_MODIFY,
+ BOND_AFTER_MODIFY,
+ BOND_BEFORE_DEACTIVE,
+ BOND_AFTER_DEACTIVE,
+ BOND_POS_MAX
+};
+
+#define BITMAP_SET(bm, bit) ((bm) |= (typeof(bm))(1U << (bit)))
+#define BITMAP_CLR(bm, bit) ((bm) &= ~((typeof(bm))(1U << (bit))))
+#define BITMAP_JUDGE(bm, bit) ((bm) & (typeof(bm))(1U << (bit)))
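+/* Example: hinic3_bond_set_user_bitmap() uses BITMAP_SET() to mark a
+ * hinic3_bond_user as active in attr->user_bitmap, and BITMAP_JUDGE() to
+ * test whether that bit is already set.
+ */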
+
+#define MPU_CMD_BOND_CREATE 17
+#define MPU_CMD_BOND_DELETE 18
+#define MPU_CMD_BOND_SET_ATTR 19
+#define MPU_CMD_BOND_GET_ATTR 20
+
+#define HINIC3_MAX_PORT 4
+#define HINIC3_IFNAMSIZ 16
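+/* Snapshot of a bond's members as filled by hinic3_bond_get_slaves():
+ * @slaves is a per-port bitmap and @cnt counts the names copied into
+ * @slaves_name.
+ */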
+struct hinic3_bond_info_s {
+ u8 slaves;
+ u8 cnt;
+ u8 srv[2];
+ char slaves_name[HINIC3_MAX_PORT][HINIC3_IFNAMSIZ];
+};
+
+#pragma pack(push, 1)
+struct netdev_lower_state_info {
+ u8 link_up : 1;
+ u8 tx_enabled : 1;
+ u8 rsvd : 6;
+};
+
+#pragma pack(pop)
+
+struct bond_tracker {
+ struct netdev_lower_state_info netdev_state[BOND_PORT_MAX_NUM];
+ struct net_device *ndev[BOND_PORT_MAX_NUM];
+ u8 cnt;
+ bool is_bonded;
+};
+
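+/* Bond attributes carried in struct hinic3_bond_cmd below; @slaves is a
+ * port bitmap (see hinic3_bond_get_slaves()) and @user_bitmap records the
+ * hinic3_bond_user owners set via hinic3_bond_set_user_bitmap().
+ */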
+struct bond_attr {
+ u16 bond_mode;
+ u16 bond_id;
+ u16 up_delay;
+ u16 down_delay;
+ u8 active_slaves;
+ u8 slaves;
+ u8 lacp_collect_slaves;
+ u8 xmit_hash_policy;
+ u32 first_roce_func;
+ u32 bond_pf_bitmap;
+ u32 user_bitmap;
+};
+
+struct hinic3_bond_cmd {
+ u8 ret_status;
+ u8 version;
+ u16 sub_cmd;
+ struct bond_attr attr;
+ char bond_name[16];
+};
+
+bool hinic3_is_bond_dev_status_actived(struct net_device *ndev);
+void hinic3_bond_set_user_bitmap(struct bond_attr *attr, enum hinic3_bond_user user);
+int hinic3_bond_attach(const char *name, enum hinic3_bond_user user, u16 *bond_id);
+int hinic3_bond_detach(u16 bond_id, enum hinic3_bond_user user);
+void hinic3_bond_clean_user(enum hinic3_bond_user user);
+int hinic3_bond_get_uplink_id(u16 bond_id, u32 *uplink_id);
+int hinic3_bond_register_service_func(enum hinic3_bond_user user, void (*func)
+ (const char *bond_name, void *bond_attr,
+ enum bond_service_proc_pos pos));
+int hinic3_bond_unregister_service_func(enum hinic3_bond_user user);
+int hinic3_bond_get_slaves(u16 bond_id, struct hinic3_bond_info_s *info);
+struct net_device *hinic3_bond_get_netdev_by_portid(const char *bond_name, u8 port_id);
+int hinic3_get_hw_bond_infos(void *hwdev, struct hinic3_hw_bond_infos *infos, u16 channel);
+int hinic3_get_bond_tracker_by_name(const char *name, struct bond_tracker *tracker);
+#endif /* HINIC3_BOND_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.c
new file mode 100644
index 0000000..1562c59
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.c
@@ -0,0 +1,2015 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/kernel.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_cmd.h"
+#include "cqm_object_intern.h"
+#include "cqm_main.h"
+#include "cqm_memsec.h"
+#include "cqm_bat_cla.h"
+
+#include "cqm_npu_cmd.h"
+#include "cqm_npu_cmd_defs.h"
+
+#include "vram_common.h"
+
+static void cqm_bat_fill_cla_common_gpa(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ struct tag_cqm_bat_entry_standerd *bat_entry_standerd)
+{
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ struct hinic3_func_attr *func_attr = NULL;
+ struct tag_cqm_bat_entry_vf2pf gpa = {0};
+ u32 cla_gpa_h = 0;
+ dma_addr_t pa;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0)
+ pa = cla_table->cla_z_buf.buf_list[0].pa;
+ else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
+ pa = cla_table->cla_y_buf.buf_list[0].pa;
+ else
+ pa = cla_table->cla_x_buf.buf_list[0].pa;
+
+ gpa.cla_gpa_h = CQM_ADDR_HI(pa) & CQM_CHIP_GPA_HIMASK;
+
+ /* On the SPU, the value of spu_en in the GPA address
+ * in the BAT is determined by the host ID and func ID.
+ */
+ if (hinic3_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
+ func_attr = &cqm_handle->func_attribute;
+ gpa.acs_spu_en = func_attr->func_global_idx & 0x1;
+ } else {
+ gpa.acs_spu_en = 0;
+ }
+
+ /* In fake mode, fake_vf_en in the GPA address of the BAT
+ * must be set to 1.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD) {
+ gpa.fake_vf_en = 1;
+ func_attr = &cqm_handle->parent_cqm_handle->func_attribute;
+ gpa.pf_id = func_attr->func_global_idx;
+ } else {
+ gpa.fake_vf_en = 0;
+ }
+
+ memcpy(&cla_gpa_h, &gpa, sizeof(u32));
+ bat_entry_standerd->cla_gpa_h = cla_gpa_h;
+
+ /* GPA is valid when gpa[0] = 1.
+ * CQM_BAT_ENTRY_T_REORDER does not support GPA validity check.
+ */
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa);
+ else
+ bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa) |
+ gpa_check_enable;
+
+ cqm_info(handle->dev_hdl, "Cla type %u, pa 0x%llx, gpa 0x%x-0x%x, level %u\n",
+ cla_table->type, pa, bat_entry_standerd->cla_gpa_h, bat_entry_standerd->cla_gpa_l,
+ bat_entry_standerd->cla_level);
+}
+
+static void cqm_bat_fill_cla_common(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u8 *entry_base_addr)
+{
+ struct tag_cqm_bat_entry_standerd *bat_entry_standerd = NULL;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 cache_line = 0;
+
+ /* The cacheline of the timer is changed to 512. */
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't init bat entry\n",
+ cla_table->type);
+ return;
+ }
+
+ bat_entry_standerd = (struct tag_cqm_bat_entry_standerd *)entry_base_addr;
+
+ /* The QPC value is 256/512/1024 and the timer value is 512.
+ * The other cacheline value is 256B.
+ * The conversion operation is performed inside the chip.
+ */
+ if (cla_table->obj_size > cache_line) {
+ if (cla_table->obj_size == CQM_OBJECT_512)
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
+ else
+ bat_entry_standerd->entry_size =
+ CQM_BAT_ENTRY_SIZE_1024;
+ bat_entry_standerd->max_number = cla_table->max_buffer_size /
+ cla_table->obj_size;
+ } else {
+ if (cache_line == CQM_CHIP_CACHELINE) {
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_256;
+ bat_entry_standerd->max_number =
+ cla_table->max_buffer_size / cache_line;
+ } else {
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
+ bat_entry_standerd->max_number =
+ cla_table->max_buffer_size / cache_line;
+ }
+ }
+
+ bat_entry_standerd->max_number = bat_entry_standerd->max_number - 1;
+
+ bat_entry_standerd->bypass = CQM_BAT_NO_BYPASS_CACHE;
+ bat_entry_standerd->z = cla_table->cacheline_z;
+ bat_entry_standerd->y = cla_table->cacheline_y;
+ bat_entry_standerd->x = cla_table->cacheline_x;
+ bat_entry_standerd->cla_level = cla_table->cla_lvl;
+
+ cqm_bat_fill_cla_common_gpa(cqm_handle, cla_table, bat_entry_standerd);
+}
+
+static void cqm_bat_fill_cla_cfg(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct tag_cqm_bat_entry_cfg *bat_entry_cfg = NULL;
+
+ bat_entry_cfg = (struct tag_cqm_bat_entry_cfg *)(*entry_base_addr);
+ bat_entry_cfg->cur_conn_cache = 0;
+ bat_entry_cfg->max_conn_cache =
+ func_cap->flow_table_based_conn_cache_number;
+ bat_entry_cfg->cur_conn_num_h_4 = 0;
+ bat_entry_cfg->cur_conn_num_l_16 = 0;
+ bat_entry_cfg->max_conn_num = func_cap->flow_table_based_conn_number;
+
+ /* Aligned to 64 buckets, the hash number is shifted right by 6 bits.
+ * The field is at most 16 bits wide, so up to 4M buckets are
+ * supported. The value is decremented by 1 so that it can be used
+ * directly as a mask (&) for the hash value.
+ */
+ if ((func_cap->hash_number >> CQM_HASH_NUMBER_UNIT) != 0) {
+ bat_entry_cfg->bucket_num = ((func_cap->hash_number >>
+ CQM_HASH_NUMBER_UNIT) - 1);
+ }
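+ /* Worked example (assuming CQM_HASH_NUMBER_UNIT is the 64-bucket
+ * shift of 6 described above): hash_number = 64K gives
+ * 64K >> 6 = 1024 buckets, so bucket_num is set to 1023 and can be
+ * ANDed directly with the hash value.
+ */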
+ if (func_cap->bloomfilter_length != 0) {
+ bat_entry_cfg->bloom_filter_len = func_cap->bloomfilter_length -
+ 1;
+ bat_entry_cfg->bloom_filter_addr = func_cap->bloomfilter_addr;
+ }
+
+ (*entry_base_addr) += sizeof(struct tag_cqm_bat_entry_cfg);
+}
+
+static void cqm_bat_fill_cla_other(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ cqm_bat_fill_cla_common(cqm_handle, cla_table, *entry_base_addr);
+
+ (*entry_base_addr) += sizeof(struct tag_cqm_bat_entry_standerd);
+}
+
+static void cqm_bat_fill_cla_taskmap(struct tag_cqm_handle *cqm_handle,
+ const struct tag_cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ struct tag_cqm_bat_entry_taskmap *bat_entry_taskmap = NULL;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ int i;
+
+ if (cqm_handle->func_capability.taskmap_number != 0) {
+ bat_entry_taskmap =
+ (struct tag_cqm_bat_entry_taskmap *)(*entry_base_addr);
+ for (i = 0; i < CQM_BAT_ENTRY_TASKMAP_NUM; i++) {
+ bat_entry_taskmap->addr[i].gpa_h =
+ (u32)(cla_table->cla_z_buf.buf_list[i].pa >>
+ CQM_CHIP_GPA_HSHIFT);
+ bat_entry_taskmap->addr[i].gpa_l =
+ (u32)(cla_table->cla_z_buf.buf_list[i].pa &
+ CQM_CHIP_GPA_LOMASK);
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: taskmap bat entry: 0x%x 0x%x\n",
+ bat_entry_taskmap->addr[i].gpa_h,
+ bat_entry_taskmap->addr[i].gpa_l);
+ }
+ }
+
+ (*entry_base_addr) += sizeof(struct tag_cqm_bat_entry_taskmap);
+}
+
+static void cqm_bat_fill_cla_timer(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ /* Only the PPF allocates timer resources. */
+ if (cqm_handle->func_attribute.func_type != CQM_PPF) {
+ (*entry_base_addr) += CQM_BAT_ENTRY_SIZE;
+ } else {
+ cqm_bat_fill_cla_common(cqm_handle, cla_table,
+ *entry_base_addr);
+
+ (*entry_base_addr) += sizeof(struct tag_cqm_bat_entry_standerd);
+ }
+}
+
+static void cqm_bat_fill_cla_invalid(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ (*entry_base_addr) += CQM_BAT_ENTRY_SIZE;
+}
+
+/**
+ * cqm_bat_fill_cla - Fill the base address of the CLA table into the BAT table
+ * @cqm_handle: CQM handle
+ */
+static void cqm_bat_fill_cla(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
+ u8 *entry_base_addr = NULL;
+ u32 i = 0;
+
+ /* Fills each item in the BAT table according to the BAT format. */
+ entry_base_addr = bat_table->bat;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cqm_dbg("entry_base_addr = %p\n", entry_base_addr);
+ entry_type = bat_table->bat_entry_type[i];
+ cla_table = &bat_table->entry[i];
+
+ if (entry_type == CQM_BAT_ENTRY_T_CFG) {
+ cqm_bat_fill_cla_cfg(cqm_handle, cla_table, &entry_base_addr);
+ } else if (entry_type == CQM_BAT_ENTRY_T_TASKMAP) {
+ cqm_bat_fill_cla_taskmap(cqm_handle, cla_table, &entry_base_addr);
+ } else if (entry_type == CQM_BAT_ENTRY_T_INVALID) {
+ cqm_bat_fill_cla_invalid(cqm_handle, cla_table, &entry_base_addr);
+ } else if (entry_type == CQM_BAT_ENTRY_T_TIMER) {
+ if (cqm_handle->func_attribute.func_type == CQM_PPF &&
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
+ entry_base_addr += sizeof(struct tag_cqm_bat_entry_standerd);
+ continue;
+ }
+
+ cqm_bat_fill_cla_timer(cqm_handle, cla_table,
+ &entry_base_addr);
+ } else {
+ cqm_bat_fill_cla_other(cqm_handle, cla_table, &entry_base_addr);
+ }
+
+ /* Check whether entry_base_addr is out-of-bounds array. */
+ if (entry_base_addr >=
+ (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
+ break;
+ }
+}
+
+u32 cqm_funcid2smfid(const struct tag_cqm_handle *cqm_handle)
+{
+ u32 funcid = 0;
+ u32 smf_sel = 0;
+ u32 smf_id = 0;
+ u32 smf_pg_partial = 0;
+ /* SMF_Selection is selected based on
+ * the lower two bits of the function id
+ */
+ u32 lbf_smfsel[4] = {0, 2, 1, 3};
+ /* SMFID is selected based on SMF_PG[1:0] and SMF_Selection(0-1) */
+ u32 smfsel_smfid01[4][2] = { {0, 0}, {0, 0}, {1, 1}, {0, 1} };
+ /* SMFID is selected based on SMF_PG[3:2] and SMF_Selection(2-4) */
+ u32 smfsel_smfid23[4][2] = { {2, 2}, {2, 2}, {3, 3}, {2, 3} };
+
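+ /* Worked example with LB enabled: func_global_idx = 5 gives
+ * funcid = 5 & 0x3 = 1, smf_sel = lbf_smfsel[1] = 2, so the
+ * smfsel_smfid23 table is consulted and SMF_PG[3:2] selects
+ * SMF 2 or SMF 3.
+ */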
+ /* When the LB mode is disabled, SMF0 is always returned. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL) {
+ smf_id = 0;
+ } else {
+ funcid = cqm_handle->func_attribute.func_global_idx & 0x3;
+ smf_sel = lbf_smfsel[funcid];
+
+ if (smf_sel < 0x2) {
+ smf_pg_partial = cqm_handle->func_capability.smf_pg &
+ 0x3;
+ smf_id = smfsel_smfid01[smf_pg_partial][smf_sel];
+ } else {
+ smf_pg_partial =
+ /* shift to right by 2 bits */
+ (cqm_handle->func_capability.smf_pg >> 2) & 0x3;
+ smf_id = smfsel_smfid23[smf_pg_partial][smf_sel - 0x2];
+ }
+ }
+
+ return smf_id;
+}
+
+/* This function is used in LB mode 1/2. The timer spoker info
+ * of independent space needs to be configured for 4 SMFs.
+ */
+static void cqm_update_timer_gpa(struct tag_cqm_handle *cqm_handle, u32 smf_id)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
+ u8 *entry_base_addr = NULL;
+ u32 i = 0;
+
+ if (cqm_handle->func_attribute.func_type != CQM_PPF)
+ return;
+
+ if (cqm_handle->func_capability.lb_mode != CQM_LB_MODE_1 &&
+ cqm_handle->func_capability.lb_mode != CQM_LB_MODE_2)
+ return;
+
+ cla_table = &bat_table->timer_entry[smf_id];
+ entry_base_addr = bat_table->bat;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+
+ if (entry_type == CQM_BAT_ENTRY_T_TIMER) {
+ cqm_bat_fill_cla_timer(cqm_handle, cla_table,
+ &entry_base_addr);
+ break;
+ }
+
+ if (entry_type == CQM_BAT_ENTRY_T_TASKMAP)
+ entry_base_addr += sizeof(struct tag_cqm_bat_entry_taskmap);
+ else
+ entry_base_addr += CQM_BAT_ENTRY_SIZE;
+
+ /* Check whether entry_base_addr is out-of-bounds array. */
+ if (entry_base_addr >=
+ (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
+ break;
+ }
+}
+
+static s32 cqm_bat_update_cmd(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cmd_buf *buf_in,
+ u32 smf_id, u32 func_id)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cmdq_bat_update *bat_update_cmd = NULL;
+ s32 ret = CQM_FAIL;
+
+ int is_in_kexec;
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ cqm_info(handle->dev_hdl, "Skip updating the cqm_bat to chip during kexec!\n");
+ return CQM_SUCCESS;
+ }
+
+ bat_update_cmd = (struct tag_cqm_cmdq_bat_update *)(buf_in->buf);
+ bat_update_cmd->offset = 0;
+
+ if (cqm_handle->bat_table.bat_size > CQM_BAT_MAX_SIZE) {
+ cqm_err(handle->dev_hdl,
+ "bat_size = %u, which is more than %d.\n",
+ cqm_handle->bat_table.bat_size, CQM_BAT_MAX_SIZE);
+ return CQM_FAIL;
+ }
+ bat_update_cmd->byte_len = cqm_handle->bat_table.bat_size;
+
+ memcpy(bat_update_cmd->data, cqm_handle->bat_table.bat, bat_update_cmd->byte_len);
+
+#ifdef __CQM_DEBUG__
+ cqm_byte_print((u32 *)(cqm_handle->bat_table.bat),
+ CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE);
+#endif
+
+ bat_update_cmd->smf_id = smf_id;
+ bat_update_cmd->func_id = func_id;
+
+ cqm_info(handle->dev_hdl, "Bat update: smf_id=%u\n",
+ bat_update_cmd->smf_id);
+ cqm_info(handle->dev_hdl, "Bat update: func_id=%u\n",
+ bat_update_cmd->func_id);
+
+ cqm_swab32((u8 *)bat_update_cmd,
+ sizeof(struct tag_cqm_cmdq_bat_update) >> CQM_DW_SHIFT);
+
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_BAT_UPDATE, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, HINIC3_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl, "%s: send_cmd_box ret=%d\n", __func__,
+ ret);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_bat_update - Send a command to tile to update the BAT table through cmdq
+ * @cqm_handle: CQM handle
+ */
+static s32 cqm_bat_update(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cmd_buf *buf_in = NULL;
+ s32 ret = CQM_FAIL;
+ u32 smf_id = 0;
+ u32 func_id = 0;
+ u32 i = 0;
+
+ buf_in = cqm_cmd_alloc((void *)(cqm_handle->ex_handle));
+ if (!buf_in)
+ return CQM_FAIL;
+ buf_in->size = sizeof(struct tag_cqm_cmdq_bat_update);
+
+ /* In non-fake mode, func_id is set to 0xffff, indicating the current
+ * func. In fake mode, the value of func_id is specified. This is a fake
+ * func_id.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD)
+ func_id = cqm_handle->func_attribute.func_global_idx;
+ else
+ func_id = 0xffff;
+
+ /* The LB scenario is supported.
+ * Normal mode is the traditional mode and is configured on SMF0.
+ * In mode 0, load is balanced to the four SMFs based on the func ID
+ * (except for the PPF func ID). The PPF in mode 0 must be configured
+ * on all four SMFs so that the timer resources can be shared by the
+ * four timer engines. Modes 1/2 are load balanced to the four SMFs
+ * by flow, so one function must be configured on all four SMFs.
+ */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_bat_update_cmd(cqm_handle, buf_in, smf_id, func_id);
+ } else if ((cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1) ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2) ||
+ ((cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0) &&
+ (cqm_handle->func_attribute.func_type == CQM_PPF))) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cqm_update_timer_gpa(cqm_handle, i);
+
+ /* The smf_pg variable stores the currently
+ * enabled SMF.
+ */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ smf_id = i;
+ ret = cqm_bat_update_cmd(cqm_handle, buf_in,
+ smf_id, func_id);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+ cqm_err(handle->dev_hdl, "Bat update: unsupport lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+static s32 cqm_bat_init_ft(struct tag_cqm_handle *cqm_handle, struct tag_cqm_bat_table *bat_table,
+ enum func_type function_type)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i = 0;
+
+ bat_table->bat_entry_type[CQM_BAT_INDEX0] = CQM_BAT_ENTRY_T_CFG;
+ bat_table->bat_entry_type[CQM_BAT_INDEX1] = CQM_BAT_ENTRY_T_HASH;
+ bat_table->bat_entry_type[CQM_BAT_INDEX2] = CQM_BAT_ENTRY_T_QPC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX3] = CQM_BAT_ENTRY_T_SCQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX4] = CQM_BAT_ENTRY_T_LUN;
+ bat_table->bat_entry_type[CQM_BAT_INDEX5] = CQM_BAT_ENTRY_T_TASKMAP;
+
+ if (function_type == CQM_PF || function_type == CQM_PPF) {
+ bat_table->bat_entry_type[CQM_BAT_INDEX6] = CQM_BAT_ENTRY_T_L3I;
+ bat_table->bat_entry_type[CQM_BAT_INDEX7] = CQM_BAT_ENTRY_T_CHILDC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX8] = CQM_BAT_ENTRY_T_TIMER;
+ bat_table->bat_entry_type[CQM_BAT_INDEX9] = CQM_BAT_ENTRY_T_XID2CID;
+ bat_table->bat_entry_type[CQM_BAT_INDEX10] = CQM_BAT_ENTRY_T_REORDER;
+ bat_table->bat_size = CQM_BAT_SIZE_FT_PF;
+ } else if (function_type == CQM_VF) {
+ bat_table->bat_size = CQM_BAT_SIZE_FT_VF;
+ } else {
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(function_type));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_bat_init_rdma(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_bat_table *bat_table,
+ enum func_type function_type)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i = 0;
+
+ bat_table->bat_entry_type[CQM_BAT_INDEX0] = CQM_BAT_ENTRY_T_QPC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX1] = CQM_BAT_ENTRY_T_SCQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX2] = CQM_BAT_ENTRY_T_SRQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX3] = CQM_BAT_ENTRY_T_MPT;
+ bat_table->bat_entry_type[CQM_BAT_INDEX4] = CQM_BAT_ENTRY_T_GID;
+
+ if (function_type == CQM_PF || function_type == CQM_PPF) {
+ bat_table->bat_entry_type[CQM_BAT_INDEX5] = CQM_BAT_ENTRY_T_L3I;
+ bat_table->bat_entry_type[CQM_BAT_INDEX6] =
+ CQM_BAT_ENTRY_T_CHILDC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX7] =
+ CQM_BAT_ENTRY_T_TIMER;
+ bat_table->bat_entry_type[CQM_BAT_INDEX8] =
+ CQM_BAT_ENTRY_T_XID2CID;
+ bat_table->bat_entry_type[CQM_BAT_INDEX9] =
+ CQM_BAT_ENTRY_T_REORDER;
+ bat_table->bat_size = CQM_BAT_SIZE_RDMA_PF;
+ } else if (function_type == CQM_VF) {
+ bat_table->bat_size = CQM_BAT_SIZE_RDMA_VF;
+ } else {
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(function_type));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_bat_init_ft_rdma(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_bat_table *bat_table,
+ enum func_type function_type)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i = 0;
+
+ bat_table->bat_entry_type[CQM_BAT_INDEX0] = CQM_BAT_ENTRY_T_CFG;
+ bat_table->bat_entry_type[CQM_BAT_INDEX1] = CQM_BAT_ENTRY_T_HASH;
+ bat_table->bat_entry_type[CQM_BAT_INDEX2] = CQM_BAT_ENTRY_T_QPC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX3] = CQM_BAT_ENTRY_T_SCQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX4] = CQM_BAT_ENTRY_T_SRQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX5] = CQM_BAT_ENTRY_T_MPT;
+ bat_table->bat_entry_type[CQM_BAT_INDEX6] = CQM_BAT_ENTRY_T_GID;
+ bat_table->bat_entry_type[CQM_BAT_INDEX7] = CQM_BAT_ENTRY_T_LUN;
+ bat_table->bat_entry_type[CQM_BAT_INDEX8] = CQM_BAT_ENTRY_T_TASKMAP;
+
+ if (function_type == CQM_PF || function_type == CQM_PPF) {
+ bat_table->bat_entry_type[CQM_BAT_INDEX9] = CQM_BAT_ENTRY_T_L3I;
+ bat_table->bat_entry_type[CQM_BAT_INDEX10] =
+ CQM_BAT_ENTRY_T_CHILDC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX11] =
+ CQM_BAT_ENTRY_T_TIMER;
+ bat_table->bat_entry_type[CQM_BAT_INDEX12] =
+ CQM_BAT_ENTRY_T_XID2CID;
+ bat_table->bat_entry_type[CQM_BAT_INDEX13] =
+ CQM_BAT_ENTRY_T_REORDER;
+ bat_table->bat_size = CQM_BAT_SIZE_FT_RDMA_PF;
+ } else if (function_type == CQM_VF) {
+ bat_table->bat_size = CQM_BAT_SIZE_FT_RDMA_VF;
+ } else {
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(function_type));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_bat_init - Initialize the BAT table. Only the items to be initialized and
+ * the entry sequence are selected. The content of the BAT entry
+ * is filled after the CLA is allocated.
+ * @cqm_handle: CQM handle
+ */
+s32 cqm_bat_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *capability = &cqm_handle->func_capability;
+ enum func_type function_type = cqm_handle->func_attribute.func_type;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ u32 i;
+
+ memset(bat_table, 0, sizeof(struct tag_cqm_bat_table));
+
+ /* Initialize the type of each bat entry. */
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ /* Select BATs based on service types. Currently,
+ * feature-related resources of the VF are stored in the BATs of the VF.
+ */
+ if (capability->ft_enable && capability->rdma_enable)
+ return cqm_bat_init_ft_rdma(cqm_handle, bat_table, function_type);
+ else if (capability->ft_enable)
+ return cqm_bat_init_ft(cqm_handle, bat_table, function_type);
+ else if (capability->rdma_enable)
+ return cqm_bat_init_rdma(cqm_handle, bat_table, function_type);
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_bat_uninit - Deinitialize the BAT table
+ * @cqm_handle: CQM handle
+ */
+void cqm_bat_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ memset(bat_table->bat, 0, CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE);
+
+ /* Instruct the chip to update the BAT table. */
+ if (cqm_bat_update(cqm_handle) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
+}
+
+static s32 cqm_cla_fill_buf(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *cla_base_buf,
+ struct tag_cqm_buf *cla_sub_buf, u8 gpa_check_enable)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct hinic3_func_attr *func_attr = NULL;
+ dma_addr_t *base = NULL;
+ u64 fake_en = 0;
+ u64 spu_en = 0;
+ u64 pf_id = 0;
+ u32 i = 0;
+ u32 addr_num;
+ u32 buf_index = 0;
+
+ /* Apply for space for base_buf */
+ if (!cla_base_buf->buf_list) {
+ if (cqm_buf_alloc(cqm_handle, cla_base_buf, false) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(cla_base_buf));
+ return CQM_FAIL;
+ }
+ }
+
+ /* Apply for space for sub_buf */
+ if (!cla_sub_buf->buf_list) {
+ if (cqm_buf_alloc(cqm_handle, cla_sub_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(cla_sub_buf));
+ cqm_buf_free(cla_base_buf, cqm_handle);
+ return CQM_FAIL;
+ }
+ }
+
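+ /* Each 64-bit entry written below carries, besides the DMA address:
+ * bit 63 = acs_spu_en, bit 62 = fake_vf_en, bits 61:57 = parent PF id
+ * (fake child functions only) and bit 0 = the GPA-valid check flag.
+ */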
+ /* Fill base_buff with the gpa of sub_buf */
+ addr_num = cla_base_buf->buf_size / sizeof(dma_addr_t);
+ base = (dma_addr_t *)(cla_base_buf->buf_list[0].va);
+ for (i = 0; i < cla_sub_buf->buf_number; i++) {
+ /* The SPU SMF supports load balancing from the SMF to the CPI,
+ * depending on the host ID and func ID.
+ */
+ if (hinic3_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
+ func_attr = &cqm_handle->func_attribute;
+ spu_en = (u64)(func_attr->func_global_idx & 0x1) << 0x3F;
+ } else {
+ spu_en = 0;
+ }
+
+ /* fake enable */
+ if (cqm_handle->func_capability.fake_func_type ==
+ CQM_FAKE_FUNC_CHILD) {
+ fake_en = 1ULL << 0x3E;
+ func_attr =
+ &cqm_handle->parent_cqm_handle->func_attribute;
+ pf_id = func_attr->func_global_idx;
+ pf_id = (pf_id & 0x1f) << 0x39;
+ } else {
+ fake_en = 0;
+ pf_id = 0;
+ }
+
+ *base = (dma_addr_t)((((((u64)(cla_sub_buf->buf_list[i].pa) & CQM_CHIP_GPA_MASK) |
+ spu_en) |
+ fake_en) |
+ pf_id) |
+ gpa_check_enable);
+
+ cqm_swab64((u8 *)base, 1);
+ if ((i + 1) % addr_num == 0) {
+ buf_index++;
+ if (buf_index < cla_base_buf->buf_number)
+ base = cla_base_buf->buf_list[buf_index].va;
+ } else {
+ base++;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_xyz_lvl1(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u32 trunk_size)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *cla_y_buf = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ s32 shift = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ u32 cache_line = 0;
+
+ /* The cacheline of the timer is changed to 512. */
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ gpa_check_enable = 0;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_1;
+
+ shift = cqm_shift(trunk_size / cla_table->obj_size);
+ cla_table->z = (u32)(shift ? (shift - 1) : (shift));
+ cla_table->y = CQM_MAX_INDEX_BIT;
+ cla_table->x = 0;
+
+ cqm_dbg("cla_table->obj_size = %d, cache_line = %d",
+ cla_table->obj_size, cache_line);
+ if (cla_table->obj_size >= cache_line) {
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+ } else {
+ shift = cqm_shift(trunk_size / cache_line);
+ cla_table->cacheline_z = (u32)(shift ? (shift - 1) : (shift));
+ cla_table->cacheline_y = CQM_MAX_INDEX_BIT;
+ cla_table->cacheline_x = 0;
+ }
+
+ /* Applying for CLA_Y_BUF Space */
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_y_buf->buf_size = trunk_size;
+ cla_y_buf->buf_number = 1;
+ cla_y_buf->page_number = cla_y_buf->buf_number <<
+ cla_table->trunk_order;
+
+ ret = cqm_buf_alloc(cqm_handle, cla_y_buf, false);
+ if (ret != CQM_SUCCESS)
+ return CQM_FAIL;
+
+ /* Applying for CLA_Z_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number =
+ (ALIGN(cla_table->max_buffer_size, trunk_size)) / trunk_size;
+ cla_z_buf->page_number = cla_z_buf->buf_number <<
+ cla_table->trunk_order;
+
+ /* All buffer space must be statically allocated. */
+ if (cla_table->alloc_static) {
+ ret = cqm_cla_fill_buf(cqm_handle, cla_y_buf, cla_z_buf,
+ gpa_check_enable);
+ if (unlikely(ret != CQM_SUCCESS)) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_fill_buf));
+ return CQM_FAIL;
+ }
+ } else { /* Only the buffer list space is initialized. The buffer space
+ * is dynamically allocated in services.
+ */
+ cla_z_buf->buf_list = vzalloc(cla_z_buf->buf_number *
+ sizeof(struct tag_cqm_buf_list));
+ if (!cla_z_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_1_z_buf));
+ cqm_buf_free(cla_y_buf, cqm_handle);
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+static void cqm_cla_xyz_lvl2_param_init(struct tag_cqm_cla_table *cla_table, u32 trunk_size)
+{
+ s32 shift = 0;
+ u32 cache_line = 0;
+
+ /* The cacheline of the timer is changed to 512. */
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_2;
+
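+ /* x/y/z appear to record index-bit boundaries: bits [z:0] select an
+ * object inside a Z-level trunk, bits [y:z+1] select the Z-trunk
+ * pointer inside a Y-level trunk and the remaining bits up to x
+ * select the Y-trunk pointer. E.g. (assuming cqm_shift() returns
+ * log2, a 4 KB trunk and 256 B objects): z = 3 (16 objects per
+ * trunk) and y = 3 + 9 = 12 (512 pointers per trunk).
+ */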
+ shift = cqm_shift(trunk_size / cla_table->obj_size);
+ cla_table->z = (u32)(shift ? (shift - 1) : (shift));
+ shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
+ cla_table->y = cla_table->z + shift;
+ cla_table->x = CQM_MAX_INDEX_BIT;
+
+ if (cla_table->obj_size >= cache_line) {
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+ } else {
+ shift = cqm_shift(trunk_size / cache_line);
+ cla_table->cacheline_z = (u32)(shift ? (shift - 1) : (shift));
+ shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
+ cla_table->cacheline_y = cla_table->cacheline_z + shift;
+ cla_table->cacheline_x = CQM_MAX_INDEX_BIT;
+ }
+}
+
+static s32 cqm_cla_xyz_lvl2_xyz_apply(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table, u32 trunk_size)
+{
+ struct tag_cqm_buf *cla_x_buf = NULL;
+ struct tag_cqm_buf *cla_y_buf = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ s32 ret = CQM_FAIL;
+
+ /* Apply for CLA_X_BUF Space */
+ cla_x_buf = &cla_table->cla_x_buf;
+ cla_x_buf->buf_size = trunk_size;
+ cla_x_buf->buf_number = 1;
+ cla_x_buf->page_number = cla_x_buf->buf_number << cla_table->trunk_order;
+ cla_x_buf->buf_info.use_vram = get_use_vram_flag();
+ ret = cqm_buf_alloc(cqm_handle, cla_x_buf, false);
+ if (ret != CQM_SUCCESS)
+ return CQM_FAIL;
+
+ /* Apply for CLA_Z_BUF and CLA_Y_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number = (ALIGN(cla_table->max_buffer_size, trunk_size)) / trunk_size;
+ cla_z_buf->page_number = cla_z_buf->buf_number << cla_table->trunk_order;
+
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_y_buf->buf_size = trunk_size;
+ cla_y_buf->buf_number =
+ (u32)(ALIGN(cla_z_buf->buf_number * sizeof(dma_addr_t), trunk_size)) / trunk_size;
+ cla_y_buf->page_number = cla_y_buf->buf_number << cla_table->trunk_order;
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_xyz_vram_name_init(struct tag_cqm_cla_table *cla_table,
+ struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_buf *cla_x_buf = NULL;
+ struct tag_cqm_buf *cla_y_buf = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+
+ cla_x_buf = &cla_table->cla_x_buf;
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_x_buf->buf_info.use_vram = get_use_vram_flag();
+ snprintf(cla_x_buf->buf_info.buf_vram_name,
+ VRAM_NAME_APPLY_LEN, "%s%s", cla_table->name,
+ VRAM_CQM_CLA_COORD_X);
+
+ cla_y_buf->buf_info.use_vram = get_use_vram_flag();
+ snprintf(cla_y_buf->buf_info.buf_vram_name,
+ VRAM_NAME_APPLY_LEN, "%s%s", cla_table->name,
+ VRAM_CQM_CLA_COORD_Y);
+
+ cla_z_buf->buf_info.use_vram = get_use_vram_flag();
+ snprintf(cla_z_buf->buf_info.buf_vram_name,
+ VRAM_NAME_APPLY_LEN, "%s%s",
+ cla_table->name, VRAM_CQM_CLA_COORD_Z);
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_xyz_lvl2(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table, u32 trunk_size)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *cla_x_buf = NULL;
+ struct tag_cqm_buf *cla_y_buf = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+
+ cqm_cla_xyz_lvl2_param_init(cla_table, trunk_size);
+
+ ret = cqm_cla_xyz_lvl2_xyz_apply(cqm_handle, cla_table, trunk_size);
+ if (ret)
+ return ret;
+
+ cla_x_buf = &cla_table->cla_x_buf;
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_y_buf = &cla_table->cla_y_buf;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ gpa_check_enable = 0;
+
+ /* All buffer space must be statically allocated. */
+ if (cla_table->alloc_static) {
+ /* Apply for y buf and z buf, and fill the gpa of z buf list in y buf */
+ if (cqm_cla_fill_buf(cqm_handle, cla_y_buf, cla_z_buf,
+ gpa_check_enable) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_fill_buf));
+ cqm_buf_free(cla_x_buf, cqm_handle);
+ return CQM_FAIL;
+ }
+
+ /* Fill the gpa of the y buf list into the x buf.
+ * After the x and y bufs are applied for, this function will not fail.
+ * The return value is therefore deliberately cast to void.
+ */
+ (void)cqm_cla_fill_buf(cqm_handle, cla_x_buf, cla_y_buf, gpa_check_enable);
+ } else { /* Only the buffer list space is initialized. The buffer space
+ * is dynamically allocated in services.
+ */
+ cla_z_buf->buf_list = vzalloc(cla_z_buf->buf_number *
+ sizeof(struct tag_cqm_buf_list));
+ if (!cla_z_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_z_buf));
+ cqm_buf_free(cla_x_buf, cqm_handle);
+ return CQM_FAIL;
+ }
+
+ cla_y_buf->buf_list = vzalloc(cla_y_buf->buf_number *
+ sizeof(struct tag_cqm_buf_list));
+ if (!cla_y_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_y_buf));
+ cqm_buf_free(cla_z_buf, cqm_handle);
+ cqm_buf_free(cla_x_buf, cqm_handle);
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_xyz_check(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table, u32 *size)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 trunk_size = 0;
+
+ /* If the capability (obj_num) is set to 0, the CLA does not need to
+ * be initialized and the function returns directly.
+ */
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't alloc buffer\n",
+ cla_table->type);
+ return CQM_SUCCESS;
+ }
+
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0x%x, gpa_check_enable=%d\n",
+ cla_table->type, cla_table->obj_num,
+ cqm_handle->func_capability.gpa_check_enable);
+
+ /* Check whether obj_size is 2^n-aligned. An error is reported when
+ * obj_size is 0 or 1.
+ */
+ if (!cqm_check_align(cla_table->obj_size)) {
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_size 0x%x is not align on 2^n\n",
+ cla_table->type, cla_table->obj_size);
+ return CQM_FAIL;
+ }
+
+ trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+
+ if (trunk_size < cla_table->obj_size) {
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla type %u, obj_size 0x%x is out of trunk size\n",
+ cla_table->type, cla_table->obj_size);
+ return CQM_FAIL;
+ }
+
+ *size = trunk_size;
+
+ return CQM_CONTINUE;
+}
+
+static s32 cqm_cla_xyz_lvl0(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table, u32 trunk_size)
+{
+ struct tag_cqm_buf *cla_z_buf = NULL;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_0;
+
+ cla_table->z = CQM_MAX_INDEX_BIT;
+ cla_table->y = 0;
+ cla_table->x = 0;
+
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+
+ /* Applying for CLA_Z_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number = 1;
+ cla_z_buf->page_number = cla_z_buf->buf_number << cla_table->trunk_order;
+ cla_z_buf->bat_entry_type = cla_table->type;
+
+ return cqm_buf_alloc(cqm_handle, cla_z_buf, false);
+}
+
+/**
+ * cqm_cla_xyz - Calculate the number of levels of CLA tables and allocate
+ * space for each level of CLA table.
+ * @cqm_handle: CQM handle
+ * @cla_table: CLA table
+ */
+static s32 cqm_cla_xyz(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 trunk_size = 0;
+ s32 ret = CQM_FAIL;
+
+ ret = cqm_cla_xyz_check(cqm_handle, cla_table, &trunk_size);
+ if (ret != CQM_CONTINUE)
+ return ret;
+
+ ret = cqm_cla_xyz_vram_name_init(cla_table, handle);
+ if (ret != CQM_SUCCESS)
+ return ret;
+
+ /* Level-0 CLA occupies a small space.
+ * Only CLA_Z_BUF can be allocated during initialization.
+ */
+ cqm_dbg("cla_table->max_buffer_size = %d trunk_size = %d\n",
+ cla_table->max_buffer_size, trunk_size);
+
+ if (cla_table->max_buffer_size > trunk_size &&
+ cqm_need_secure_mem((void *)handle)) {
+ trunk_size = roundup(cla_table->max_buffer_size, CQM_SECURE_MEM_ALIGNED_SIZE);
+ cqm_dbg("[memsec]reset trunk_size = %u\n", trunk_size);
+ }
+
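+ /* Rough capacity per level, e.g. with 4 KB trunks (order 0) and
+ * 8-byte GPAs: level 0 covers up to 4 KB of table, level 1 up to
+ * 4 KB * 512 = 2 MB and level 2 up to 2 MB * 512 = 1 GB.
+ */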
+ if (cla_table->max_buffer_size <= trunk_size) {
+ ret = cqm_cla_xyz_lvl0(cqm_handle, cla_table, trunk_size);
+ if (ret != CQM_SUCCESS)
+ return CQM_FAIL;
+ /* Level-1 CLA
+ * Allocates CLA_Y_BUF and CLA_Z_BUF during initialization.
+ */
+ } else if (cla_table->max_buffer_size <=
+ (trunk_size * (trunk_size / sizeof(dma_addr_t)))) {
+ if (cqm_cla_xyz_lvl1(cqm_handle, cla_table, trunk_size) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl1));
+ return CQM_FAIL;
+ }
+ /* Level-2 CLA
+ * Allocates CLA_X_BUF, CLA_Y_BUF, and CLA_Z_BUF during initialization.
+ */
+ } else if (cla_table->max_buffer_size <= (trunk_size * (trunk_size / sizeof(dma_addr_t)) *
+ (trunk_size / sizeof(dma_addr_t)))) {
+ if (cqm_cla_xyz_lvl2(cqm_handle, cla_table, trunk_size) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl2));
+ return CQM_FAIL;
+ }
+ } else { /* The current memory management mode does not support
+ * addressing such a large buffer; the order value needs
+ * to be increased.
+ */
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla max_buffer_size 0x%x exceeds support range\n",
+ cla_table->max_buffer_size);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static void cqm_cla_init_entry_normal(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ struct tag_cqm_func_capability *capability)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_HASH:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->hash_number * capability->hash_basic_size;
+ cla_table->obj_size = capability->hash_basic_size;
+ cla_table->obj_num = capability->hash_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_QPC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->qpc_number * capability->qpc_basic_size;
+ cla_table->obj_size = capability->qpc_basic_size;
+ cla_table->obj_num = capability->qpc_number;
+ cla_table->alloc_static = capability->qpc_alloc_static;
+ cqm_info(handle->dev_hdl, "Cla alloc: qpc alloc_static=%d\n",
+ cla_table->alloc_static);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->mpt_number *
+ capability->mpt_basic_size;
+ cla_table->obj_size = capability->mpt_basic_size;
+ cla_table->obj_num = capability->mpt_number;
+ cla_table->alloc_static = true; /* Per CCB decision, MPT uses
+ * only static allocation.
+ */
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->scqc_number * capability->scqc_basic_size;
+ cla_table->obj_size = capability->scqc_basic_size;
+ cla_table->obj_num = capability->scqc_number;
+ cla_table->alloc_static = capability->scqc_alloc_static;
+ cqm_info(handle->dev_hdl, "Cla alloc: scqc alloc_static=%d\n",
+ cla_table->alloc_static);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->srqc_number * capability->srqc_basic_size;
+ cla_table->obj_size = capability->srqc_basic_size;
+ cla_table->obj_num = capability->srqc_number;
+ cla_table->alloc_static = false;
+ break;
+ default:
+ break;
+ }
+}
+
+static void cqm_cla_init_entry_extern(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ struct tag_cqm_func_capability *capability)
+{
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_GID:
+ /* Level-0 CLA table required */
+ cla_table->max_buffer_size = capability->gid_number *
+ capability->gid_basic_size;
+ cla_table->trunk_order =
+ (u32)cqm_shift(ALIGN(cla_table->max_buffer_size, PAGE_SIZE) / PAGE_SIZE);
+ cla_table->obj_size = capability->gid_basic_size;
+ cla_table->obj_num = capability->gid_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_LUN:
+ cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->lun_number *
+ capability->lun_basic_size;
+ cla_table->obj_size = capability->lun_basic_size;
+ cla_table->obj_num = capability->lun_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_TASKMAP:
+ cla_table->trunk_order = CQM_4K_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->taskmap_number *
+ capability->taskmap_basic_size;
+ cla_table->obj_size = capability->taskmap_basic_size;
+ cla_table->obj_num = capability->taskmap_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_L3I:
+ cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->l3i_number *
+ capability->l3i_basic_size;
+ cla_table->obj_size = capability->l3i_basic_size;
+ cla_table->obj_num = capability->l3i_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_CHILDC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->childc_number *
+ capability->childc_basic_size;
+ cla_table->obj_size = capability->childc_basic_size;
+ cla_table->obj_num = capability->childc_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_TIMER:
+ /* Ensure that the basic size of the timer buffer page does not
+ * exceed 128 x 4 KB. Otherwise, clearing the timer buffer of
+ * the function is complex.
+ */
+ cla_table->trunk_order = CQM_8K_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->timer_number *
+ capability->timer_basic_size;
+ cla_table->obj_size = capability->timer_basic_size;
+ cla_table->obj_num = capability->timer_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_XID2CID:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->xid2cid_number *
+ capability->xid2cid_basic_size;
+ cla_table->obj_size = capability->xid2cid_basic_size;
+ cla_table->obj_num = capability->xid2cid_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_REORDER:
+ /* This entry supports only IWARP and does not support GPA
+ * validity check.
+ */
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->reorder_number *
+ capability->reorder_basic_size;
+ cla_table->obj_size = capability->reorder_basic_size;
+ cla_table->obj_num = capability->reorder_number;
+ cla_table->alloc_static = true;
+ break;
+ default:
+ break;
+ }
+}
+
+static s32 cqm_cla_init_entry_condition(struct tag_cqm_handle *cqm_handle, u32 entry_type)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = &bat_table->entry[entry_type];
+ struct tag_cqm_cla_table *cla_table_timer = NULL;
+ u32 i;
+
+ /* When the timer is in LB mode 1 or 2, the timer needs to be
+ * configured for four SMFs and the address space is independent.
+ */
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER &&
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cla_table_timer = &bat_table->timer_entry[i];
+ memcpy(cla_table_timer, cla_table, sizeof(struct tag_cqm_cla_table));
+
+ snprintf(cla_table_timer->name,
+ VRAM_NAME_APPLY_LEN, "%s%s%01u", cla_table->name,
+ VRAM_CQM_CLA_SMF_BASE, i);
+
+ if (cqm_cla_xyz(cqm_handle, cla_table_timer) ==
+ CQM_FAIL) {
+ cqm_cla_uninit(cqm_handle, entry_type);
+ return CQM_FAIL;
+ }
+ }
+ return CQM_SUCCESS;
+ }
+
+ if (cqm_cla_xyz(cqm_handle, cla_table) == CQM_FAIL) {
+ cqm_cla_uninit(cqm_handle, entry_type);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_init_entry(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_func_capability *capability)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ s32 ret;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ cla_table->type = bat_table->bat_entry_type[i];
+ snprintf(cla_table->name, VRAM_NAME_APPLY_LEN,
+ "%s%s%s%02u", cqm_handle->name, VRAM_CQM_CLA_BASE,
+ VRAM_CQM_CLA_TYPE_BASE, cla_table->type);
+
+ cqm_cla_init_entry_normal(cqm_handle, cla_table, capability);
+ cqm_cla_init_entry_extern(cqm_handle, cla_table, capability);
+
+ /* Allocate CLA entry space at each level. */
+ if (cla_table->type < CQM_BAT_ENTRY_T_HASH ||
+ cla_table->type > CQM_BAT_ENTRY_T_REORDER) {
+ mutex_init(&cla_table->lock);
+ continue;
+ }
+
+ /* For the PPF, timer resources (8 wheels x 2k scale x 32 B x
+ * func_num) need to be allocated, and the timer entry in the
+ * BAT table needs to be filled. For the PF, no timer resources
+ * are needed and the timer entry in the BAT table is left
+ * unfilled.
+ */
+ if (!(cla_table->type == CQM_BAT_ENTRY_T_TIMER &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ ret = cqm_cla_init_entry_condition(cqm_handle, i);
+ if (ret != CQM_SUCCESS)
+ return CQM_FAIL;
+ cqm_dbg("~~~~cla_table->type = %d\n", cla_table->type);
+ }
+ cqm_dbg("****cla_table->type = %d\n", cla_table->type);
+ mutex_init(&cla_table->lock);
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_cla_init - Initialize the CLA table
+ * @cqm_handle: CQM handle
+ */
+s32 cqm_cla_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ s32 ret;
+
+ /* Applying for CLA Entries */
+ ret = cqm_cla_init_entry(cqm_handle, capability);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init_entry));
+ return ret;
+ }
+
+ /* After the CLA entry is applied, the address is filled
+ * in the BAT table.
+ */
+ cqm_bat_fill_cla(cqm_handle);
+
+ /* Instruct the chip to update the BAT table. */
+ ret = cqm_bat_update(cqm_handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
+ goto err;
+ }
+
+ cqm_info(handle->dev_hdl, "Timer start: func_type=%d, timer_enable=%u\n",
+ cqm_handle->func_attribute.func_type,
+ cqm_handle->func_capability.timer_enable);
+
+ if (cqm_handle->func_attribute.func_type == CQM_PPF) {
+ ret = hinic3_ppf_ht_gpa_init(handle);
+ if (ret) {
+ cqm_err(handle->dev_hdl, "PPF ht gpa init fail!\n");
+ goto err;
+ }
+
+ if (cqm_handle->func_capability.timer_enable ==
+ CQM_TIMER_ENABLE) {
+ /* Enable the timer after the timer resources are applied for */
+ cqm_info(handle->dev_hdl, "PPF timer start\n");
+ ret = hinic3_ppf_tmr_start(handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "PPF timer start, ret=%d\n", ret);
+ goto err1;
+ }
+ }
+ }
+
+ return CQM_SUCCESS;
+err1:
+ hinic3_ppf_ht_gpa_deinit(handle);
+err:
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_cla_uninit - Deinitialize the CLA table
+ * @cqm_handle: CQM handle
+ * @entry_numb: entry number
+ */
+void cqm_cla_uninit(struct tag_cqm_handle *cqm_handle, u32 entry_numb)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ s32 inv_flag = 0;
+ u32 i;
+
+ for (i = 0; i < entry_numb; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_x_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_y_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_z_buf,
+ &inv_flag);
+ }
+ mutex_deinit(&cla_table->lock);
+ }
+
+ /* When the lb mode is 1/2, the timer space allocated to the 4 SMFs
+ * needs to be released.
+ */
+ if (cqm_handle->func_attribute.func_type == CQM_PPF &&
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cla_table = &bat_table->timer_entry[i];
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_x_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_y_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_z_buf,
+ &inv_flag);
+ mutex_deinit(&cla_table->lock);
+ }
+ }
+}
+
+static s32 cqm_cla_update_cmd(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cmd_buf *buf_in,
+ struct tag_cqm_cla_update_cmd *cmd)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cla_update_cmd *cla_update_cmd = NULL;
+ s32 ret = CQM_FAIL;
+
+ cla_update_cmd = (struct tag_cqm_cla_update_cmd *)(buf_in->buf);
+
+ cla_update_cmd->gpa_h = cmd->gpa_h;
+ cla_update_cmd->gpa_l = cmd->gpa_l;
+ cla_update_cmd->value_h = cmd->value_h;
+ cla_update_cmd->value_l = cmd->value_l;
+ cla_update_cmd->smf_id = cmd->smf_id;
+ cla_update_cmd->func_id = cmd->func_id;
+
+ cqm_swab32((u8 *)cla_update_cmd,
+ (sizeof(struct tag_cqm_cla_update_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_CLA_UPDATE, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, HINIC3_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Cla alloc: cqm_cla_update, cqm_send_cmd_box_ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cqm_cla_update, cla_update_cmd: 0x%x 0x%x 0x%x 0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->value_h, cmd->value_l);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_cla_update - Send a command to update the CLA table
+ * @cqm_handle: CQM handle
+ * @buf_node_parent: parent node of the content to be updated
+ * @buf_node_child: child node for which the buffer is applied
+ * @child_index: index of the child node
+ * @cla_update_mode: CLA update mode
+ */
+static s32 cqm_cla_update(struct tag_cqm_handle *cqm_handle,
+ const struct tag_cqm_buf_list *buf_node_parent,
+ const struct tag_cqm_buf_list *buf_node_child,
+ u32 child_index, u8 cla_update_mode)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cmd_buf *buf_in = NULL;
+ struct tag_cqm_cla_update_cmd cmd;
+ dma_addr_t pa = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ u32 i = 0;
+ u64 spu_en;
+
+ buf_in = cqm_cmd_alloc(cqm_handle->ex_handle);
+ if (!buf_in)
+ return CQM_FAIL;
+ buf_in->size = sizeof(struct tag_cqm_cla_update_cmd);
+
+ /* Fill command format, convert to big endian. */
+ /* SPU function sets bit63: acs_spu_en based on function id. */
+ if (hinic3_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID)
+ spu_en = ((u64)(cqm_handle->func_attribute.func_global_idx &
+ 0x1)) << 0x3F;
+ else
+ spu_en = 0;
+
+ pa = ((buf_node_parent->pa + (child_index * sizeof(dma_addr_t))) |
+ spu_en);
+ cmd.gpa_h = CQM_ADDR_HI(pa);
+ cmd.gpa_l = CQM_ADDR_LW(pa);
+
+ pa = (buf_node_child->pa | spu_en);
+ cmd.value_h = CQM_ADDR_HI(pa);
+ cmd.value_l = CQM_ADDR_LW(pa);
+
+ cqm_dbg("Cla alloc: %s, gpa=0x%x 0x%x, value=0x%x 0x%x, cla_update_mode=0x%x\n",
+ __func__, cmd.gpa_h, cmd.gpa_l, cmd.value_h, cmd.value_l,
+ cla_update_mode);
+
+ /* current CLA GPA CHECK */
+ if (gpa_check_enable) {
+ switch (cla_update_mode) {
+ /* gpa[0]=1 means this GPA is valid */
+ case CQM_CLA_RECORD_NEW_GPA:
+ cmd.value_l |= 1;
+ break;
+ /* gpa[0]=0 means this GPA is invalid */
+ case CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID:
+ case CQM_CLA_DEL_GPA_WITH_CACHE_INVALID:
+ cmd.value_l &= (~1);
+ break;
+ default:
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: %s, wrong cla_update_mode=%u\n",
+ __func__, cla_update_mode);
+ break;
+ }
+ }
+
+ /* TODO: the following code duplicates the logic in the BAT update
+ * path and should be refactored.
+ */
+ /* In non-fake mode, set func_id to 0xffff. In fake child mode,
+ * set func_id to the child function's global index, which is a
+ * fake func_id.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD)
+ cmd.func_id = cqm_handle->func_attribute.func_global_idx;
+ else
+ cmd.func_id = 0xffff;
+
+ /* Normal mode is 1822 traditional mode and is configured on SMF0. */
+ /* Mode 0 is hashed to 4 SMF engines (excluding PPF) by func ID. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ cmd.smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_cla_update_cmd(cqm_handle, buf_in, &cmd);
+ /* Modes 1/2 distribute traffic across the four SMF engines by flow,
+ * so a single function must be configured on all four engines.
+ */
+ /* The mode-0 PPF also needs to be configured on the 4 engines,
+ * because the timer resources are shared by the 4 engines.
+ */
+ } else if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2 ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type == CQM_PPF)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ /* The smf_pg variable stores the currently enabled SMF engines. */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ cmd.smf_id = i;
+ ret = cqm_cla_update_cmd(cqm_handle, buf_in,
+ &cmd);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+ cqm_err(handle->dev_hdl, "Cla update: unsupport lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+/**
+ * cqm_cla_alloc - Allocate a trunk page for a CLA table
+ * @cqm_handle: CQM handle
+ * @cla_table: BAT table entry
+ * @buf_node_parent: parent node of the content to be updated
+ * @buf_node_child: child node for which the trunk page is allocated
+ * @child_index: index of the child node
+ */
+static s32 cqm_cla_alloc(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ struct tag_cqm_buf_list *buf_node_parent,
+ struct tag_cqm_buf_list *buf_node_child, u32 child_index)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ s32 ret = CQM_FAIL;
+
+ /* Apply for trunk page */
+ buf_node_child->va = (u8 *)ossl_get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ cla_table->trunk_order);
+ if (!buf_node_child->va)
+ return CQM_FAIL;
+
+ /* PCI mapping */
+ buf_node_child->pa = pci_map_single(cqm_handle->dev, buf_node_child->va,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, buf_node_child->pa)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_node_child->pa));
+ goto err1;
+ }
+
+ /* Notify the chip of trunk_pa so that the chip fills in cla entry */
+ ret = cqm_cla_update(cqm_handle, buf_node_parent, buf_node_child,
+ child_index, CQM_CLA_RECORD_NEW_GPA);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
+ goto err2;
+ }
+
+ return CQM_SUCCESS;
+
+err2:
+ pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+err1:
+ free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
+ buf_node_child->va = NULL;
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_cla_free - Release trunk page of a CLA
+ * @cqm_handle: CQM handle
+ * @cla_table: BAT table entry
+ * @buf_node_parent: parent node of the content to be updated
+ * @buf_node_child: child node whose trunk page is released
+ * @child_index: index of the child node
+ * @cla_update_mode: the update mode of CLA
+ */
+static void cqm_cla_free(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ struct tag_cqm_buf_list *buf_node_parent,
+ struct tag_cqm_buf_list *buf_node_child,
+ u32 child_index, u8 cla_update_mode)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 trunk_size;
+
+ cqm_dbg("Cla free: cla_update_mode=%u\n", cla_update_mode);
+
+ if (cqm_cla_update(cqm_handle, buf_node_parent, buf_node_child,
+ child_index, cla_update_mode) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
+ return;
+ }
+
+ if (cla_update_mode == CQM_CLA_DEL_GPA_WITH_CACHE_INVALID) {
+ trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ if (cqm_cla_cache_invalid(cqm_handle, buf_node_child->pa,
+ trunk_size) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_cache_invalid));
+ return;
+ }
+ }
+
+ /* Remove PCI mapping from the trunk page */
+ pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+
+ /* Release the trunk page */
+ free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
+ buf_node_child->va = NULL;
+}
+
+static u8 *cqm_cla_get_unlock_lvl0(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct tag_cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+
+ /* Level 0 CLA pages are statically allocated. */
+ offset = index * cla_table->obj_size;
+ ret_addr = (u8 *)(cla_z_buf->buf_list->va) + offset;
+ *pa = cla_z_buf->buf_list->pa + offset;
+
+ return ret_addr;
+}
+
+static u8 *cqm_cla_get_unlock_lvl1(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct tag_cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct tag_cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf_list *buf_node_y = NULL;
+ struct tag_cqm_buf_list *buf_node_z = NULL;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+
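+
+ /* Level-1 index split: the low (z + 1) bits select the object inside
+ * a z-level trunk page, the remaining high bits select which z trunk
+ * page (and hence which entry of the y-level table) is used.
+ */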
+ z_index = index & ((1U << (cla_table->z + 1)) - 1);
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla get: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ return NULL;
+ }
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ buf_node_y = cla_y_buf->buf_list;
+
+ /* If the z buf node does not exist, allocate a trunk page for it first. */
+ if (!buf_node_z->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_y, buf_node_z,
+ y_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ cqm_err(handle->dev_hdl,
+ "Cla get: cla_table->type=%u\n",
+ cla_table->type);
+ return NULL;
+ }
+ }
+
+ cqm_dbg("Cla get: 1L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+
+ return ret_addr;
+}
+
+static u8 *cqm_cla_get_unlock_lvl2(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct tag_cqm_buf *cla_x_buf = &cla_table->cla_x_buf;
+ struct tag_cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct tag_cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf_list *buf_node_x = NULL;
+ struct tag_cqm_buf_list *buf_node_y = NULL;
+ struct tag_cqm_buf_list *buf_node_z = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u32 trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+ u64 tmp;
+
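+
+ /* Level-2 index split: the low (z + 1) bits select the object inside
+ * a z-level trunk page, the next (y - z) bits select the entry inside
+ * a y-level trunk page, and the top bits select the x-level entry.
+ * 'tmp' is the flat position of the z node in cla_z_buf.
+ */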
+ z_index = index & ((1U << (cla_table->z + 1)) - 1);
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1U << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+ tmp = x_index * (trunk_size / sizeof(dma_addr_t)) + y_index;
+
+ if (x_index >= cla_y_buf->buf_number || tmp >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla get: index exceeds buf_number, x %u, y %u, y_buf_n %u, z_buf_n %u\n",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ return NULL;
+ }
+
+ buf_node_x = cla_x_buf->buf_list;
+ buf_node_y = &cla_y_buf->buf_list[x_index];
+ buf_node_z = &cla_z_buf->buf_list[tmp];
+
+ /* If the y buf node does not exist, allocate a trunk page for it. */
+ if (!buf_node_y->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_x, buf_node_y,
+ x_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ return NULL;
+ }
+ }
+
+ /* If the z buf node does not exist, allocate a trunk page for it. */
+ if (!buf_node_z->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_y, buf_node_z,
+ y_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ if (buf_node_y->refcount == 0)
+ /* To release node Y, cache_invalid is
+ * required.
+ */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_x, buf_node_y, x_index,
+ CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
+ return NULL;
+ }
+
+ cqm_dbg("Cla get: 2L: y_refcount=0x%x\n", buf_node_y->refcount);
+ /* The reference count of the y buffer node increases by 1. */
+ buf_node_y->refcount++;
+ }
+
+ cqm_dbg("Cla get: 2L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+
+ return ret_addr;
+}
+
+/**
+ * cqm_cla_get_unlock - Allocate 'count' block buffers starting at 'index' in the
+ * cla table. This unlocked path is used for static buffer
+ * allocation.
+ * @cqm_handle: CQM handle
+ * @cla_table: BAT table entry
+ * @index: the index position in the cla table
+ * @count: number of block buffer
+ * @pa: dma physical address
+ */
+u8 *cqm_cla_get_unlock(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ u8 *ret_addr = NULL;
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0)
+ ret_addr = cqm_cla_get_unlock_lvl0(cqm_handle, cla_table, index,
+ count, pa);
+ else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
+ ret_addr = cqm_cla_get_unlock_lvl1(cqm_handle, cla_table, index,
+ count, pa);
+ else
+ ret_addr = cqm_cla_get_unlock_lvl2(cqm_handle, cla_table, index,
+ count, pa);
+
+ return ret_addr;
+}
+
+/**
+ * cqm_cla_get_lock - Allocate 'count' block buffers starting at 'index' in the cla
+ * table. This locked path is used for dynamic buffer allocation.
+ * @cqm_handle: CQM handle
+ * @cla_table: BAT table entry
+ * @index: the index position in the cla table
+ * @count: number of block buffer
+ * @pa: dma physical address
+ */
+u8 *cqm_cla_get_lock(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ u8 *ret_addr = NULL;
+
+ mutex_lock(&cla_table->lock);
+
+ ret_addr = cqm_cla_get_unlock(cqm_handle, cla_table, index, count, pa);
+
+ mutex_unlock(&cla_table->lock);
+
+ return ret_addr;
+}
+
+/**
+ * cqm_cla_put - Decrease the reference count of the trunk page; if it drops to 0,
+ * the trunk page is released.
+ * @cqm_handle: CQM handle
+ * @cla_table: BAT table entry
+ * @index: the index position in the cla table
+ * @count: number of block buffer
+ */
+void cqm_cla_put(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count)
+{
+ struct tag_cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct tag_cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct tag_cqm_buf *cla_x_buf = &cla_table->cla_x_buf;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf_list *buf_node_z = NULL;
+ struct tag_cqm_buf_list *buf_node_y = NULL;
+ struct tag_cqm_buf_list *buf_node_x = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ u64 tmp;
+
+ /* The buffer is allocated statically, so the reference count
+ * does not need to be maintained.
+ */
+ if (cla_table->alloc_static)
+ return;
+
+ mutex_lock(&cla_table->lock);
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_1) {
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla put: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ cqm_err(handle->dev_hdl,
+ "Cla put: cla_table->type=%u\n",
+ cla_table->type);
+ mutex_unlock(&cla_table->lock);
+ return;
+ }
+
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ buf_node_y = cla_y_buf->buf_list;
+
+ /* Release the z node page when its reference count drops to 0. */
+ cqm_dbg("Cla put: 1L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+ buf_node_z->refcount -= count;
+ if (buf_node_z->refcount == 0)
+ /* Cache invalidation is not required for the z node. */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_y,
+ buf_node_z, y_index,
+ CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
+ } else if (cla_table->cla_lvl == CQM_CLA_LVL_2) {
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1U << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+ tmp = x_index * (trunk_size / sizeof(dma_addr_t)) + y_index;
+
+ if (x_index >= cla_y_buf->buf_number || tmp >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla put: index exceeds buf, x %u, y %u, y_buf_n %u, z_buf_n %u\n",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ mutex_unlock(&cla_table->lock);
+ return;
+ }
+
+ buf_node_x = cla_x_buf->buf_list;
+ buf_node_y = &cla_y_buf->buf_list[x_index];
+ buf_node_z = &cla_z_buf->buf_list[tmp];
+ cqm_dbg("Cla put: 2L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+
+ /* Release the z node page when its reference count drops to 0. */
+ buf_node_z->refcount -= count;
+ if (buf_node_z->refcount == 0) {
+ cqm_cla_free(cqm_handle, cla_table, buf_node_y,
+ buf_node_z, y_index,
+ CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
+
+ /* Release the y node page when its reference count drops to 0. */
+ cqm_dbg("Cla put: 2L: y_refcount=0x%x\n",
+ buf_node_y->refcount);
+ buf_node_y->refcount--;
+ if (buf_node_y->refcount == 0)
+ /* Node y requires cache invalidation before release. */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_x, buf_node_y,
+ x_index, CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
+ }
+ }
+
+ mutex_unlock(&cla_table->lock);
+}
+
+/**
+ * cqm_cla_table_get - Searches for the CLA table data structure corresponding to a BAT entry
+ * @bat_table: bat entry
+ * @entry_type: cla table type
+ *
+ * RETURNS:
+ * Pointer to the matching CLA table, or NULL if none is found
+ */
+struct tag_cqm_cla_table *cqm_cla_table_get(struct tag_cqm_bat_table *bat_table,
+ u32 entry_type)
+{
+ struct tag_cqm_cla_table *cla_table = NULL;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if ((cla_table != NULL) && (entry_type == cla_table->type))
+ return cla_table;
+ }
+
+ return NULL;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.h
new file mode 100644
index 0000000..a51c1dc
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.h
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_BAT_CLA_H
+#define CQM_BAT_CLA_H
+
+#include <linux/types.h>
+#include <linux/mutex.h>
+
+#include "cqm_bitmap_table.h"
+#include "cqm_object.h"
+#include "vram_common.h"
+
+/* When the connection check is enabled, the maximum number of connections
+ * supported by the chip is 1M - 63, so the full 1M cannot be reached.
+ */
+#define CQM_BAT_MAX_CONN_NUM (0x100000 - 63)
+#define CQM_BAT_MAX_CACHE_CONN_NUM (0x100000 - 63)
+
+#define CLA_TABLE_PAGE_ORDER 0
+#define CQM_4K_PAGE_ORDER 0
+#define CQM_4K_PAGE_SIZE 4096
+#define CQM_8K_PAGE_ORDER 1
+
+#define CQM_BAT_ENTRY_MAX 16
+#define CQM_BAT_ENTRY_SIZE 16
+#define CQM_BAT_STORE_API_SIZE 16
+
+#define CQM_BAT_SIZE_FT_RDMA_PF 240
+#define CQM_BAT_SIZE_FT_RDMA_VF 160
+#define CQM_BAT_SIZE_FT_PF 192
+#define CQM_BAT_SIZE_FT_VF 112
+#define CQM_BAT_SIZE_RDMA_PF 160
+#define CQM_BAT_SIZE_RDMA_VF 80
+
+#define CQM_BAT_INDEX0 0
+#define CQM_BAT_INDEX1 1
+#define CQM_BAT_INDEX2 2
+#define CQM_BAT_INDEX3 3
+#define CQM_BAT_INDEX4 4
+#define CQM_BAT_INDEX5 5
+#define CQM_BAT_INDEX6 6
+#define CQM_BAT_INDEX7 7
+#define CQM_BAT_INDEX8 8
+#define CQM_BAT_INDEX9 9
+#define CQM_BAT_INDEX10 10
+#define CQM_BAT_INDEX11 11
+#define CQM_BAT_INDEX12 12
+#define CQM_BAT_INDEX13 13
+#define CQM_BAT_INDEX14 14
+#define CQM_BAT_INDEX15 15
+
+enum cqm_bat_entry_type {
+ CQM_BAT_ENTRY_T_CFG = 0,
+ CQM_BAT_ENTRY_T_HASH = 1,
+ CQM_BAT_ENTRY_T_QPC = 2,
+ CQM_BAT_ENTRY_T_SCQC = 3,
+ CQM_BAT_ENTRY_T_SRQC = 4,
+ CQM_BAT_ENTRY_T_MPT = 5,
+ CQM_BAT_ENTRY_T_GID = 6,
+ CQM_BAT_ENTRY_T_LUN = 7,
+ CQM_BAT_ENTRY_T_TASKMAP = 8,
+ CQM_BAT_ENTRY_T_L3I = 9,
+ CQM_BAT_ENTRY_T_CHILDC = 10,
+ CQM_BAT_ENTRY_T_TIMER = 11,
+ CQM_BAT_ENTRY_T_XID2CID = 12,
+ CQM_BAT_ENTRY_T_REORDER = 13,
+ CQM_BAT_ENTRY_T_INVALID = 14,
+ CQM_BAT_ENTRY_T_MAX = 15,
+};
+
+/* CLA update mode */
+#define CQM_CLA_RECORD_NEW_GPA 0
+#define CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID 1
+#define CQM_CLA_DEL_GPA_WITH_CACHE_INVALID 2
+
+#define CQM_CLA_LVL_0 0
+#define CQM_CLA_LVL_1 1
+#define CQM_CLA_LVL_2 2
+
+#define CQM_MAX_INDEX_BIT 19
+
+#define CQM_CHIP_CACHELINE 256
+#define CQM_CHIP_TIMER_CACHELINE 512
+#define CQM_OBJECT_256 256
+#define CQM_OBJECT_512 512
+#define CQM_OBJECT_1024 1024
+#define CQM_CHIP_GPA_MASK 0x1ffffffffffffff
+#define CQM_CHIP_GPA_HIMASK 0x1ffffff
+#define CQM_CHIP_GPA_LOMASK 0xffffffff
+#define CQM_CHIP_GPA_HSHIFT 32
+
+/* Aligns with 64 buckets and shifts rightward by 6 bits */
+#define CQM_HASH_NUMBER_UNIT 6
+
+struct tag_cqm_cla_table {
+ u32 type;
+ u32 max_buffer_size;
+ u32 obj_num;
+ bool alloc_static; /* Whether the buffer is statically allocated */
+ u32 cla_lvl;
+ u32 cacheline_x; /* x value calculated based on cacheline,
+ * used by the chip
+ */
+ u32 cacheline_y; /* y value calculated based on cacheline,
+ * used by the chip
+ */
+ u32 cacheline_z; /* z value calculated based on cacheline,
+ * used by the chip
+ */
+ u32 x; /* x value calculated based on obj_size, used by software */
+ u32 y; /* y value calculated based on obj_size, used by software */
+ u32 z; /* z value calculated based on obj_size, used by software */
+ struct tag_cqm_buf cla_x_buf;
+ struct tag_cqm_buf cla_y_buf;
+ struct tag_cqm_buf cla_z_buf;
+ u32 trunk_order; /* A contiguous trunk contains 2^order physical pages */
+ u32 obj_size;
+ struct mutex lock; /* Lock for cla buffer allocation and free */
+
+ struct tag_cqm_bitmap bitmap;
+
+ struct tag_cqm_object_table obj_table; /* Mapping table between
+ * indexes and objects
+ */
+ char name[VRAM_NAME_APPLY_LEN];
+};
+
+struct tag_cqm_bat_entry_cfg {
+ u32 cur_conn_num_h_4 : 4;
+ u32 rsv1 : 4;
+ u32 max_conn_num : 20;
+ u32 rsv2 : 4;
+
+ u32 max_conn_cache : 10;
+ u32 rsv3 : 6;
+ u32 cur_conn_num_l_16 : 16;
+
+ u32 bloom_filter_addr : 16;
+ u32 cur_conn_cache : 10;
+ u32 rsv4 : 6;
+
+ u32 bucket_num : 16;
+ u32 bloom_filter_len : 16;
+};
+
+#define CQM_BAT_NO_BYPASS_CACHE 0
+#define CQM_BAT_BYPASS_CACHE 1
+
+#define CQM_BAT_ENTRY_SIZE_256 0
+#define CQM_BAT_ENTRY_SIZE_512 1
+#define CQM_BAT_ENTRY_SIZE_1024 2
+
+struct tag_cqm_bat_entry_standerd {
+ u32 entry_size : 2;
+ u32 rsv1 : 6;
+ u32 max_number : 20;
+ u32 rsv2 : 4;
+
+ u32 cla_gpa_h : 32;
+
+ u32 cla_gpa_l : 32;
+
+ u32 rsv3 : 8;
+ u32 z : 5;
+ u32 y : 5;
+ u32 x : 5;
+ u32 rsv24 : 1;
+ u32 bypass : 1;
+ u32 cla_level : 2;
+ u32 rsv5 : 5;
+};
+
+struct tag_cqm_bat_entry_vf2pf {
+ u32 cla_gpa_h : 25;
+ u32 pf_id : 5;
+ u32 fake_vf_en : 1;
+ u32 acs_spu_en : 1;
+};
+
+#define CQM_BAT_ENTRY_TASKMAP_NUM 4
+struct tag_cqm_bat_entry_taskmap_addr {
+ u32 gpa_h;
+ u32 gpa_l;
+};
+
+struct tag_cqm_bat_entry_taskmap {
+ struct tag_cqm_bat_entry_taskmap_addr addr[CQM_BAT_ENTRY_TASKMAP_NUM];
+};
+
+struct tag_cqm_bat_table {
+ u32 bat_entry_type[CQM_BAT_ENTRY_MAX];
+ u8 bat[CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE];
+ struct tag_cqm_cla_table entry[CQM_BAT_ENTRY_MAX];
+ /* In LB mode 1, the timer needs to be configured in 4 SMFs,
+ * and the GPAs must be different and independent.
+ */
+ struct tag_cqm_cla_table timer_entry[4];
+ u32 bat_size;
+};
+
+s32 cqm_bat_init(struct tag_cqm_handle *cqm_handle);
+void cqm_bat_uninit(struct tag_cqm_handle *cqm_handle);
+s32 cqm_cla_init(struct tag_cqm_handle *cqm_handle);
+void cqm_cla_uninit(struct tag_cqm_handle *cqm_handle, u32 entry_numb);
+u8 *cqm_cla_get_unlock(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa);
+u8 *cqm_cla_get_lock(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa);
+void cqm_cla_put(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count);
+struct tag_cqm_cla_table *cqm_cla_table_get(struct tag_cqm_bat_table *bat_table,
+ u32 entry_type);
+u32 cqm_funcid2smfid(const struct tag_cqm_handle *cqm_handle);
+
+#endif /* CQM_BAT_CLA_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.c
new file mode 100644
index 0000000..86b268c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.c
@@ -0,0 +1,1516 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/mm.h>
+#include <linux/gfp.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "cqm_memsec.h"
+#include "cqm_object.h"
+#include "cqm_bat_cla.h"
+#include "cqm_cmd.h"
+#include "cqm_object_intern.h"
+#include "cqm_main.h"
+
+#include "cqm_npu_cmd.h"
+#include "cqm_npu_cmd_defs.h"
+#include "vram_common.h"
+
+#define common_section
+
+struct malloc_memory {
+ bool (*check_alloc_mode)(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf);
+ s32 (*malloc_func)(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf);
+};
+
+struct free_memory {
+ bool (*check_alloc_mode)(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf);
+ void (*free_func)(struct tag_cqm_buf *buf);
+};
+
+/**
+ * Prototype : cqm_swab64(Encapsulation of __swab64)
+ * Description : Perform big-endian conversion for a memory block (8 bytes).
+ * Input : u8 *addr: Start address of the memory block
+ * u32 cnt: Number of 8-byte units in the memory block
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_swab64(u8 *addr, u32 cnt)
+{
+ u64 *temp = (u64 *)addr;
+ u64 value = 0;
+ u32 i;
+
+ for (i = 0; i < cnt; i++) {
+ value = __swab64(*temp);
+ *temp = value;
+ temp++;
+ }
+}
+
+/**
+ * Prototype : cqm_swab32(Encapsulation of __swab32)
+ * Description : Perform big-endian conversion for a memory block (4 bytes).
+ * Input : u8 *addr: Start address of the memory block
+ * u32 cnt: Number of 4-byte units in the memory block
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/7/23
+ * Modification : Created function
+ */
+void cqm_swab32(u8 *addr, u32 cnt)
+{
+ u32 *temp = (u32 *)addr;
+ u32 value = 0;
+ u32 i;
+
+ for (i = 0; i < cnt; i++) {
+ value = __swab32(*temp);
+ *temp = value;
+ temp++;
+ }
+}
+
+/**
+ * Prototype : cqm_shift
+ * Description : Calculate n for a power-of-two value (i.e. the log2 of data).
+ * Input : u32 data
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+s32 cqm_shift(u32 data)
+{
+ u32 data_num = data;
+ s32 shift = -1;
+
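+
+ /* Example: cqm_shift(0x1000) returns 12; cqm_shift(0) returns 0. */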
+ do {
+ data_num >>= 1;
+ shift++;
+ } while (data_num);
+
+ return shift;
+}
+
+/**
+ * Prototype : cqm_check_align
+ * Description : Check whether the value is a power of two (2^n aligned).
+ * For 0 or 1, false is returned.
+ * Input : u32 data
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/9/15
+ * Modification : Created function
+ */
+bool cqm_check_align(u32 data)
+{
+ u32 data_num = data;
+
+ if (data == 0)
+ return false;
+
+ /* Todo: (n & (n - 1) == 0) can be used to determine the value. */
+ do {
+ /* When the value can be exactly divided by 2,
+ * the value of data is shifted right by one bit, that is,
+ * divided by 2.
+ */
+ if ((data_num & 0x1) == 0)
+ data_num >>= 1;
+ /* If the value cannot be divisible by 2, the value is
+ * not 2^n-aligned and false is returned.
+ */
+ else
+ return false;
+ } while (data_num != 1);
+
+ return true;
+}
+
+/**
+ * Prototype : cqm_kmalloc_align
+ * Description : Allocate memory whose start address is aligned to 2^align_order bytes.
+ * Input : size_t size
+ * gfp_t flags
+ * u16 align_order
+ * Output : None
+ * Return Value : void *
+ * 1.Date : 2017/9/22
+ * Modification : Created function
+ */
+void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order)
+{
+ void *orig_addr = NULL;
+ void *align_addr = NULL;
+ void *index_addr = NULL;
+
+ orig_addr = kmalloc(size + ((u64)1 << align_order) + sizeof(void *),
+ flags);
+ if (!orig_addr)
+ return NULL;
+
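+
+ /* Layout: | padding | original pointer | aligned block ... |
+ * The original kmalloc() address is stored in the pointer-sized slot
+ * just before the aligned address so that cqm_kfree_align() can
+ * recover and free it.
+ */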
+ index_addr = (void *)((char *)orig_addr + sizeof(void *));
+ align_addr =
+ (void *)((((u64)index_addr + ((u64)1 << align_order) - 1) >>
+ align_order) << align_order);
+
+ /* Record the original memory address for memory release. */
+ index_addr = (void *)((char *)align_addr - sizeof(void *));
+ *(void **)index_addr = orig_addr;
+
+ return align_addr;
+}
+
+/**
+ * Prototype : cqm_kfree_align
+ * Description : Release the memory allocated for starting address alignment.
+ * Input : void *addr
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2017/9/22
+ * Modification : Created function
+ */
+void cqm_kfree_align(void *addr)
+{
+ void *index_addr = NULL;
+
+ /* Release the original memory address. */
+ index_addr = (void *)((char *)addr - sizeof(void *));
+
+ cqm_dbg("free aligned address: %p, original address: %p\n", addr,
+ *(void **)index_addr);
+
+ kfree(*(void **)index_addr);
+}
+
+static void cqm_write_lock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ write_lock_bh(lock);
+ else
+ write_lock(lock);
+}
+
+static void cqm_write_unlock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ write_unlock_bh(lock);
+ else
+ write_unlock(lock);
+}
+
+static void cqm_read_lock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ read_lock_bh(lock);
+ else
+ read_lock(lock);
+}
+
+static void cqm_read_unlock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ read_unlock_bh(lock);
+ else
+ read_unlock(lock);
+}
+
+static inline bool cqm_bat_entry_in_secure_mem(void *handle, u32 type)
+{
+ if (!cqm_need_secure_mem(handle))
+ return false;
+
+ if (type == CQM_BAT_ENTRY_T_QPC || type == CQM_BAT_ENTRY_T_SCQC ||
+ type == CQM_BAT_ENTRY_T_SRQC || type == CQM_BAT_ENTRY_T_MPT)
+ return true;
+
+ return false;
+}
+
+s32 cqm_buf_alloc_direct(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf, bool direct)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct page **pages = NULL;
+ u32 i, j, order;
+
+ order = (u32)get_order(buf->buf_size);
+
+ if (!direct) {
+ buf->direct.va = NULL;
+ return CQM_SUCCESS;
+ }
+
+ pages = vmalloc(sizeof(struct page *) * buf->page_number);
+ if (!pages) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(pages));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < buf->buf_number; i++) {
+ for (j = 0; j < ((u32)1 << order); j++)
+ pages[(ulong)(unsigned int)((i << order) + j)] =
+ (void *)virt_to_page((u8 *)(buf->buf_list[i].va) + (PAGE_SIZE * j));
+ }
+
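+
+ /* Map every physical page of the buffer into one contiguous kernel
+ * virtual range so that callers can access the buffer linearly.
+ */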
+ buf->direct.va = vmap(pages, buf->page_number, VM_MAP, PAGE_KERNEL);
+ vfree(pages);
+ if (!buf->direct.va) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf->direct.va));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static bool check_use_vram(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ return buf->buf_info.use_vram ? true : false;
+}
+
+static bool check_use_non_vram(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ return buf->buf_info.use_vram ? false : true;
+}
+
+static bool check_for_use_node_alloc(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ if (buf->buf_info.use_vram == 0 && handle->board_info.service_mode == 0)
+ return true;
+
+ return false;
+}
+
+static bool check_for_nouse_node_alloc(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ if (buf->buf_info.use_vram == 0 && handle->board_info.service_mode != 0)
+ return true;
+
+ return false;
+}
+
+static s32 cqm_buf_vram_kalloc(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ void *vaddr = NULL;
+ int i;
+
+ vaddr = hi_vram_kalloc(buf->buf_info.buf_vram_name, (u64)buf->buf_size * buf->buf_number);
+ if (!vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(buf_page));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (s32)buf->buf_number; i++)
+ buf->buf_list[i].va = (void *)((char *)vaddr + i * (u64)buf->buf_size);
+
+ return CQM_SUCCESS;
+}
+
+static void cqm_buf_vram_free(struct tag_cqm_buf *buf)
+{
+ s32 i;
+
+ if (!buf->buf_list)
+ return;
+
+ if (buf->buf_list[0].va)
+ hi_vram_kfree(buf->buf_list[0].va, buf->buf_info.buf_vram_name,
+ (u64)buf->buf_size * buf->buf_number);
+
+ for (i = 0; i < (s32)buf->buf_number; i++)
+ buf->buf_list[i].va = NULL;
+}
+
+static void cqm_buf_free_page_common(struct tag_cqm_buf *buf)
+{
+ u32 order;
+ s32 i;
+
+ if (!buf->buf_list)
+ return;
+
+ order = (u32)get_order(buf->buf_size);
+
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ if (buf->buf_list[i].va) {
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+ }
+}
+
+static s32 cqm_buf_use_node_alloc_page(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ struct page *newpage = NULL;
+ u32 order;
+ void *va = NULL;
+ s32 i, node;
+
+ order = (u32)get_order(buf->buf_size);
+ node = dev_to_node(handle->dev_hdl);
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ newpage = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, order);
+ if (!newpage) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(buf_page));
+ break;
+ }
+ va = (void *)page_address(newpage);
+ /* Initialize the page after the page is applied for.
+ * If hash entries are involved, the initialization
+ * value must be 0.
+ */
+ memset(va, 0, buf->buf_size);
+ buf->buf_list[i].va = va;
+ }
+
+ if (i != buf->buf_number) {
+ cqm_buf_free_page_common(buf);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_buf_unused_node_alloc_page(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ u32 order;
+ void *va = NULL;
+ s32 i;
+
+ order = (u32)get_order(buf->buf_size);
+
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ va = (void *)ossl_get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
+ if (!va) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(buf_page));
+ break;
+ }
+ /* Initialize the page after the page is applied for.
+ * If hash entries are involved, the initialization
+ * value must be 0.
+ */
+ memset(va, 0, buf->buf_size);
+ buf->buf_list[i].va = va;
+ }
+
+ if (i != buf->buf_number) {
+ cqm_buf_free_page_common(buf);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
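+
+/* The allocation strategy is selected by the first matching checker:
+ * VRAM-backed memory, NUMA-node-local pages, or plain free pages.
+ */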
+static const struct malloc_memory g_malloc_funcs[] = {
+ {check_use_vram, cqm_buf_vram_kalloc},
+ {check_for_use_node_alloc, cqm_buf_use_node_alloc_page},
+ {check_for_nouse_node_alloc, cqm_buf_unused_node_alloc_page}
+};
+
+static const struct free_memory g_free_funcs[] = {
+ {check_use_vram, cqm_buf_vram_free},
+ {check_use_non_vram, cqm_buf_free_page_common}
+};
+
+static s32 cqm_buf_alloc_page(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 malloc_funcs_num = ARRAY_SIZE(g_malloc_funcs);
+ u32 i;
+
+ for (i = 0; i < malloc_funcs_num; i++) {
+ if (g_malloc_funcs[i].check_alloc_mode &&
+ g_malloc_funcs[i].malloc_func &&
+ g_malloc_funcs[i].check_alloc_mode(handle, buf))
+ return g_malloc_funcs[i].malloc_func(handle, buf);
+ }
+
+ cqm_err(handle->dev_hdl, "Unknown alloc mode\n");
+
+ return CQM_FAIL;
+}
+
+static void cqm_buf_free_page(struct tag_cqm_buf *buf)
+{
+ u32 free_funcs_num = ARRAY_SIZE(g_free_funcs);
+ u32 i;
+
+ for (i = 0; i < free_funcs_num; i++) {
+ if (g_free_funcs[i].check_alloc_mode &&
+ g_free_funcs[i].free_func &&
+ g_free_funcs[i].check_alloc_mode(NULL, buf))
+ return g_free_funcs[i].free_func(buf);
+ }
+}
+
+static s32 cqm_buf_alloc_map(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ void *va = NULL;
+ s32 i;
+
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ va = buf->buf_list[i].va;
+ buf->buf_list[i].pa = pci_map_single(dev, va, buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(dev, buf->buf_list[i].pa)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_list));
+ break;
+ }
+ }
+
+ if (i != buf->buf_number) {
+ i--;
+ for (; i >= 0; i--)
+ pci_unmap_single(dev, buf->buf_list[i].pa,
+ buf->buf_size, PCI_DMA_BIDIRECTIONAL);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_buf_get_secure_mem_pages(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ for (i = 0; i < buf->buf_number; i++) {
+ buf->buf_list[i].va =
+ cqm_get_secure_mem_pages(handle,
+ (u32)get_order(buf->buf_size),
+ &buf->buf_list[i].pa);
+ if (!buf->buf_list[i].va) {
+ cqm_err(handle->dev_hdl,
+ CQM_ALLOC_FAIL(cqm_get_secure_mem_pages));
+ break;
+ }
+ }
+
+ if (i != buf->buf_number) {
+ cqm_free_secure_mem_pages(handle, buf->buf_list[0].va,
+ (u32)get_order(buf->buf_size));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * Prototype : cqm_buf_alloc
+ * Description : Apply for buffer space and DMA mapping for the struct tag_cqm_buf
+ * structure.
+ * Input : struct tag_cqm_handle *cqm_handle
+ * struct tag_cqm_buf *buf
+ * bool direct: Whether direct remapping is required
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+s32 cqm_buf_alloc(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf, bool direct)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ s32 i;
+ s32 ret;
+
+ /* Applying for the buffer list descriptor space */
+ buf->buf_list = vmalloc(buf->buf_number * sizeof(struct tag_cqm_buf_list));
+ if (!buf->buf_list)
+ return CQM_FAIL;
+ memset(buf->buf_list, 0, buf->buf_number * sizeof(struct tag_cqm_buf_list));
+
+ /* Allocate pages for each buffer */
+ if (cqm_bat_entry_in_secure_mem((void *)handle, buf->bat_entry_type))
+ ret = cqm_buf_get_secure_mem_pages(cqm_handle, buf);
+ else
+ ret = cqm_buf_alloc_page(cqm_handle, buf);
+
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(linux_cqm_buf_alloc_page));
+ goto err1;
+ }
+
+ /* PCI mapping of the buffer */
+ if (!cqm_bat_entry_in_secure_mem((void *)handle, buf->bat_entry_type)) {
+ if (cqm_buf_alloc_map(cqm_handle, buf) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(linux_cqm_buf_alloc_map));
+ goto err2;
+ }
+ }
+
+ /* direct remapping */
+ if (cqm_buf_alloc_direct(cqm_handle, buf, direct) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_buf_alloc_direct));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ if (!cqm_bat_entry_in_secure_mem((void *)handle, buf->bat_entry_type)) {
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ pci_unmap_single(dev, buf->buf_list[i].pa, buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ }
+ }
+err2:
+ if (cqm_bat_entry_in_secure_mem((void *)handle, buf->bat_entry_type))
+ cqm_free_secure_mem_pages(handle, buf->buf_list[0].va,
+ (u32)get_order(buf->buf_size));
+ else
+ cqm_buf_free_page(buf);
+err1:
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+ return CQM_FAIL;
+}
+
+/**
+ * Prototype : cqm_buf_free
+ * Description : Release the buffer space and DMA mapping for the struct tag_cqm_buf
+ * structure.
+ * Input : struct tag_cqm_buf *buf
+ * struct tag_cqm_handle *cqm_handle
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_buf_free(struct tag_cqm_buf *buf, struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ s32 i;
+
+ if (buf->direct.va) {
+ vunmap(buf->direct.va);
+ buf->direct.va = NULL;
+ }
+
+ if (!buf->buf_list)
+ return;
+
+ if (cqm_bat_entry_in_secure_mem(handle, buf->bat_entry_type)) {
+ cqm_free_secure_mem_pages(handle, buf->buf_list[0].va,
+ (u32)get_order(buf->buf_size));
+ goto free;
+ }
+
+ for (i = 0; i < (s32)(buf->buf_number); i++) {
+ if (buf->buf_list[i].va)
+ pci_unmap_single(dev, buf->buf_list[i].pa,
+ buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ }
+ cqm_buf_free_page(buf);
+
+free:
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+}
+
+static s32 cqm_cla_cache_invalid_cmd(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cmd_buf *buf_in,
+ struct tag_cqm_cla_cache_invalid_cmd *cmd)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cla_cache_invalid_cmd *cla_cache_invalid_cmd = NULL;
+ s32 ret;
+
+ cla_cache_invalid_cmd = (struct tag_cqm_cla_cache_invalid_cmd *)(buf_in->buf);
+ cla_cache_invalid_cmd->gpa_h = cmd->gpa_h;
+ cla_cache_invalid_cmd->gpa_l = cmd->gpa_l;
+ cla_cache_invalid_cmd->cache_size = cmd->cache_size;
+ cla_cache_invalid_cmd->smf_id = cmd->smf_id;
+ cla_cache_invalid_cmd->func_id = cmd->func_id;
+
+ cqm_swab32((u8 *)cla_cache_invalid_cmd,
+ /* shift right by 2 bits to get the length in dwords (4 bytes) */
+ (sizeof(struct tag_cqm_cla_cache_invalid_cmd) >> 2));
+
+ /* Send the cmdq command. */
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_CLA_CACHE_INVALID, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, HINIC3_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl,
+ "Cla cache invalid: cqm_send_cmd_box_ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl,
+ "Cla cache invalid: cla_cache_invalid_cmd: 0x%x 0x%x 0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->cache_size);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_cache_invalid(struct tag_cqm_handle *cqm_handle, dma_addr_t pa, u32 cache_size)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cmd_buf *buf_in = NULL;
+ struct hinic3_func_attr *func_attr = NULL;
+ struct tag_cqm_bat_entry_vf2pf gpa = {0};
+ struct tag_cqm_cla_cache_invalid_cmd cmd;
+ u32 cla_gpa_h = 0;
+ s32 ret = CQM_FAIL;
+ u32 i;
+
+ buf_in = cqm_cmd_alloc((void *)(cqm_handle->ex_handle));
+ if (!buf_in)
+ return CQM_FAIL;
+ buf_in->size = sizeof(struct tag_cqm_cla_cache_invalid_cmd);
+
+ gpa.cla_gpa_h = CQM_ADDR_HI(pa) & CQM_CHIP_GPA_HIMASK;
+
+ /* On the SPU, the value of spu_en in the GPA address
+ * in the BAT is determined by the host ID and func ID.
+ */
+ if (hinic3_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
+ func_attr = &cqm_handle->func_attribute;
+ gpa.acs_spu_en = func_attr->func_global_idx & 0x1;
+ } else {
+ gpa.acs_spu_en = 0;
+ }
+
+ /* In non-fake mode, set func_id to 0xffff. In fake child mode,
+ * set func_id to the child function's global index, which is a
+ * fake func_id.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD) {
+ cmd.func_id = cqm_handle->func_attribute.func_global_idx;
+ func_attr = &cqm_handle->parent_cqm_handle->func_attribute;
+ gpa.fake_vf_en = 1;
+ gpa.pf_id = func_attr->func_global_idx;
+ } else {
+ cmd.func_id = 0xffff;
+ }
+
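+
+ /* Pack the vf2pf bit-fields (cla_gpa_h/pf_id/fake_vf_en/acs_spu_en)
+ * into the high 32 bits of the GPA carried by the command.
+ */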
+ memcpy(&cla_gpa_h, &gpa, sizeof(u32));
+
+ /* Fill command and convert it to big endian */
+ cmd.cache_size = cache_size;
+ cmd.gpa_l = CQM_ADDR_LW(pa);
+ cmd.gpa_h = cla_gpa_h;
+
+ /* The normal mode is the traditional 1822 mode and is configured
+ * entirely on SMF0.
+ */
+ /* Mode 0 is hashed to 4 SMF engines (excluding PPF) by func ID. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ cmd.smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_cla_cache_invalid_cmd(cqm_handle, buf_in, &cmd);
+ /* Modes 1/2 distribute traffic across the 4 SMF engines by flow,
+ * so a single function must be configured on all 4 engines.
+ */
+ /* The PPF in mode 0 also needs to be configured on the 4 engines,
+ * because the timer resources are shared by the 4 engines.
+ */
+ } else if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2 ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type == CQM_PPF)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ /* The smf_pg field stores the currently enabled SMF engines. */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ cmd.smf_id = i;
+ ret = cqm_cla_cache_invalid_cmd(cqm_handle,
+ buf_in, &cmd);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+ cqm_err(handle->dev_hdl, "Cla cache invalid: unsupport lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+static void free_cache_inv(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf,
+ s32 *inv_flag)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 order;
+ s32 i;
+
+ order = (u32)get_order(buf->buf_size);
+
+ if (!handle->chip_present_flag)
+ return;
+
+ if (!buf->buf_list)
+ return;
+
+ for (i = 0; i < (s32)(buf->buf_number); i++) {
+ if (!buf->buf_list[i].va)
+ continue;
+
+ if (*inv_flag != CQM_SUCCESS)
+ continue;
+
+ /* In the Pangea environment, if the cmdq times out,
+ * no subsequent message is sent.
+ */
+ *inv_flag = cqm_cla_cache_invalid(cqm_handle, buf->buf_list[i].pa,
+ (u32)(PAGE_SIZE << order));
+ if (*inv_flag != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl,
+ "Buffer free: fail to invalid buf_list pa cache, inv_flag=%d\n",
+ *inv_flag);
+ }
+}
+
+void cqm_buf_free_cache_inv(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf,
+ s32 *inv_flag)
+{
+ /* Send a command to the chip to kick out the cache. */
+ free_cache_inv(cqm_handle, buf, inv_flag);
+
+ /* Clear host resources */
+ cqm_buf_free(buf, cqm_handle);
+}
+
+void cqm_byte_print(u32 *ptr, u32 len)
+{
+ u32 i;
+ u32 len_num = len;
+
+ len_num = (len_num >> 0x2);
+ for (i = 0; i < len_num; i = i + 0x4) {
+ cqm_dbg("%.8x %.8x %.8x %.8x\n", ptr[i], ptr[i + 1],
+ ptr[i + 2], /* index increases by 2 */
+ ptr[i + 3]); /* index increases by 3 */
+ }
+}
+
+#define bitmap_section
+
+/**
+ * Prototype : cqm_single_bitmap_init
+ * Description : Initialize a bitmap.
+ * Input : struct tag_cqm_bitmap *bitmap
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/9/9
+ * Modification : Created function
+ */
+static s32 cqm_single_bitmap_init(struct tag_cqm_bitmap *bitmap)
+{
+ u32 bit_number;
+
+ spin_lock_init(&bitmap->lock);
+
+ /* Max_num of the bitmap is 8-aligned and then
+ * shifted rightward by 3 bits to obtain the number of bytes required.
+ */
+ bit_number = (ALIGN(bitmap->max_num, CQM_NUM_BIT_BYTE) >>
+ CQM_BYTE_BIT_SHIFT);
+ if (bitmap->bitmap_info.use_vram != 0)
+ bitmap->table = hi_vram_kalloc(bitmap->bitmap_info.buf_vram_name, bit_number);
+ else
+ bitmap->table = vmalloc(bit_number);
+ if (!bitmap->table)
+ return CQM_FAIL;
+ memset(bitmap->table, 0, bit_number);
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_bitmap_toe_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_bitmap *bitmap = NULL;
+
+ /* SRQC of TOE services is not managed through the CLA table,
+ * but the bitmap is required to manage SRQid.
+ */
+ if (cqm_handle->service[CQM_SERVICE_T_TOE].valid) {
+ bitmap = &cqm_handle->toe_own_capability.srqc_bitmap;
+ bitmap->max_num =
+ cqm_handle->toe_own_capability.toe_srqc_number;
+ bitmap->reserved_top = 0;
+ bitmap->reserved_back = 0;
+ bitmap->last = 0;
+ if (bitmap->max_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: toe_srqc_number=0, don't init bitmap\n");
+ return CQM_SUCCESS;
+ }
+
+ if (cqm_single_bitmap_init(bitmap) != CQM_SUCCESS)
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static void cqm_bitmap_toe_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bitmap *bitmap = NULL;
+
+ if (cqm_handle->service[CQM_SERVICE_T_TOE].valid) {
+ bitmap = &cqm_handle->toe_own_capability.srqc_bitmap;
+ if (bitmap->table) {
+ spin_lock_deinit(&bitmap->lock);
+ vfree(bitmap->table);
+ bitmap->table = NULL;
+ }
+ }
+}
+
+/**
+ * Prototype : cqm_bitmap_init
+ * Description : Initialize the bitmap.
+ * Input : struct tag_cqm_handle *cqm_handle
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+s32 cqm_bitmap_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ s32 ret = CQM_SUCCESS;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't init bitmap\n",
+ cla_table->type);
+ continue;
+ }
+
+ bitmap = &cla_table->bitmap;
+ snprintf(bitmap->bitmap_info.buf_vram_name, VRAM_NAME_APPLY_LEN,
+ "%s%s%02d", cla_table->name,
+ VRAM_CQM_BITMAP_BASE, cla_table->type);
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_QPC:
+ bitmap->max_num = capability->qpc_number;
+ bitmap->reserved_top = capability->qpc_reserved;
+ bitmap->reserved_back = capability->qpc_reserved_back;
+ bitmap->last = capability->qpc_reserved;
+ bitmap->bitmap_info.use_vram = get_use_vram_flag();
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ bitmap->max_num = capability->mpt_number;
+ bitmap->reserved_top = capability->mpt_reserved;
+ bitmap->reserved_back = 0;
+ bitmap->last = capability->mpt_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ bitmap->max_num = capability->scqc_number;
+ bitmap->reserved_top = capability->scq_reserved;
+ bitmap->reserved_back = 0;
+ bitmap->last = capability->scq_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ bitmap->max_num = capability->srqc_number;
+ bitmap->reserved_top = capability->srq_reserved;
+ bitmap->reserved_back = 0;
+ bitmap->last = capability->srq_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ default:
+ break;
+ }
+
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Bitmap init: failed to init cla_table_type=%u, obj_num=0x%x\n",
+ cla_table->type, cla_table->obj_num);
+ goto err;
+ }
+ }
+
+ if (cqm_bitmap_toe_init(cqm_handle) != CQM_SUCCESS)
+ goto err;
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_bitmap_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * Prototype : cqm_bitmap_uninit
+ * Description : Deinitialize the bitmap.
+ * Input : struct tag_cqm_handle *cqm_handle
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_bitmap_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ bitmap = &cla_table->bitmap;
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID &&
+ bitmap->table) {
+ spin_lock_deinit(&bitmap->lock);
+ if (bitmap->bitmap_info.use_vram != 0)
+ hi_vram_kfree(bitmap->table, bitmap->bitmap_info.buf_vram_name,
+ ALIGN(bitmap->max_num, CQM_NUM_BIT_BYTE) >>
+ CQM_BYTE_BIT_SHIFT);
+ else
+ vfree(bitmap->table);
+ bitmap->table = NULL;
+ }
+ }
+
+ cqm_bitmap_toe_uninit(cqm_handle);
+}
+
+/**
+ * Prototype : cqm_bitmap_check_range
+ * Description : Starting from begin, check whether 'count' consecutive bits
+ * in the table are free. Requirements:
+ * 1. The group of bits must not cross a step boundary.
+ * 2. All bits in the group must be 0.
+ * Input : const ulong *table,
+ * u32 step,
+ * u32 max_num,
+ * u32 begin,
+ * u32 count
+ * Output : None
+ * Return Value : u32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+static u32 cqm_bitmap_check_range(const ulong *table, u32 step, u32 max_num, u32 begin,
+ u32 count)
+{
+ u32 end = (begin + (count - 1));
+ u32 i;
+
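+
+ /* Return contract: 'begin' means the whole range is free; any other
+ * value is the position from which the caller should continue the
+ * scan (max_num means the end of the table was exceeded).
+ */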
+ /* No range check is needed for a single bit. */
+ if (count == 1)
+ return begin;
+
+ /* The end value exceeds the threshold. */
+ if (end >= max_num)
+ return max_num;
+
+ /* Bit check, the next bit is returned when a non-zero bit is found. */
+ for (i = (begin + 1); i <= end; i++) {
+ if (test_bit((int)i, table))
+ return i + 1;
+ }
+
+ /* Check whether it's in different steps. */
+ if ((begin & (~(step - 1))) != (end & (~(step - 1))))
+ return (end & (~(step - 1)));
+
+ /* If the check succeeds, begin is returned. */
+ return begin;
+}
+
+static void cqm_bitmap_find(struct tag_cqm_bitmap *bitmap, u32 *index, u32 last,
+ u32 step, u32 count)
+{
+ u32 last_num = last;
+ u32 max_num = bitmap->max_num - bitmap->reserved_back;
+ ulong *table = bitmap->table;
+
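+
+ /* Keep scanning until cqm_bitmap_check_range() confirms the candidate,
+ * i.e. it returns the candidate index itself.
+ */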
+ do {
+ *index = (u32)find_next_zero_bit(table, max_num, last_num);
+ if (*index < max_num)
+ last_num = cqm_bitmap_check_range(table, step, max_num,
+ *index, count);
+ else
+ break;
+ } while (last_num != *index);
+}
+
+static void cqm_bitmap_find_with_low2bit_align(struct tag_cqm_bitmap *bitmap, u32 *index,
+ u32 max_num, u32 last, u32 low2bit)
+{
+ ulong *table = bitmap->table;
+ u32 offset = last;
+
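+
+ /* Walk the free bits until one whose two lowest bits match 'low2bit'
+ * is found, or the end of the usable range is reached.
+ */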
+ while (offset < max_num) {
+ *index = (u32)find_next_zero_bit(table, max_num, offset);
+ if (*index >= max_num)
+ break;
+
+ if ((*index & 0x3) == (low2bit & 0x3)) /* 0x3 used for low2bit align */
+ break;
+
+ offset = *index + 1;
+ if (offset == max_num)
+ *index = max_num;
+ }
+}
+
+/**
+ * Prototype : cqm_bitmap_alloc
+ * Description : Allocate a bitmap index; indexes 0 and 1 must be left unused.
+ * The scan starts from the position of the last allocation.
+ * A group of consecutive indexes is allocated together and must
+ * not cross a trunk boundary.
+ * Input : struct tag_cqm_bitmap *bitmap,
+ * u32 step,
+ * u32 count
+ * Output : None
+ * Return Value : u32
+ * The obtained index is returned.
+ * If a failure occurs, the value of max is returned.
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+u32 cqm_bitmap_alloc(struct tag_cqm_bitmap *bitmap, u32 step, u32 count, bool update_last)
+{
+ u32 index = 0;
+ u32 max_num = bitmap->max_num - bitmap->reserved_back;
+ u32 last = bitmap->last;
+ ulong *table = bitmap->table;
+ u32 i;
+
+ spin_lock(&bitmap->lock);
+
+ /* Search for an idle bit from the last position. */
+ cqm_bitmap_find(bitmap, &index, last, step, count);
+
+ /* The preceding search fails. Search for an idle bit
+ * from the beginning.
+ */
+ if (index >= max_num) {
+ last = bitmap->reserved_top;
+ cqm_bitmap_find(bitmap, &index, last, step, count);
+ }
+
+ /* Set the found bit to 1 and reset last. */
+ if (index < max_num) {
+ for (i = index; i < (index + count); i++)
+ set_bit(i, table);
+
+ if (update_last) {
+ bitmap->last = (index + count);
+ if (bitmap->last >= max_num)
+ bitmap->last = bitmap->reserved_top;
+ }
+ }
+
+ spin_unlock(&bitmap->lock);
+ return index;
+}
+
+/**
+ * Prototype : cqm_bitmap_alloc_low2bit_align
+ * Description : Allocate a bitmap index whose two lowest bits match 'low2bit';
+ * indexes 0 and 1 must be left unused.
+ * The scan starts from the position of the last allocation.
+ * Input : struct tag_cqm_bitmap *bitmap,
+ * u32 low2bit,
+ * bool update_last
+ * Output : None
+ * Return Value : u32
+ * The obtained index is returned.
+ * If a failure occurs, the value of max is returned.
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+u32 cqm_bitmap_alloc_low2bit_align(struct tag_cqm_bitmap *bitmap, u32 low2bit, bool update_last)
+{
+ u32 index = 0;
+ u32 max_num = bitmap->max_num - bitmap->reserved_back;
+ u32 last = bitmap->last;
+ ulong *table = bitmap->table;
+
+ spin_lock(&bitmap->lock);
+
+ /* Search for an idle bit from the last position. */
+ cqm_bitmap_find_with_low2bit_align(bitmap, &index, max_num, last, low2bit);
+
+ /* The preceding search fails. Search for an idle bit from the beginning. */
+ if (index >= max_num) {
+ last = bitmap->reserved_top;
+ cqm_bitmap_find_with_low2bit_align(bitmap, &index, max_num, last, low2bit);
+ }
+
+ /* Set the found bit to 1 and reset last. */
+ if (index < max_num) {
+ set_bit(index, table);
+
+ if (update_last) {
+ bitmap->last = index;
+ if (bitmap->last >= max_num)
+ bitmap->last = bitmap->reserved_top;
+ }
+ }
+
+ spin_unlock(&bitmap->lock);
+ return index;
+}
+
+/**
+ * Prototype : cqm_bitmap_alloc_reserved
+ * Description : Allocate a specific reserved bit identified by index.
+ * Input : struct tag_cqm_bitmap *bitmap,
+ * u32 count,
+ * u32 index
+ * Output : None
+ * Return Value : u32
+ * The obtained index is returned.
+ * If a failure occurs, the value of max is returned.
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+u32 cqm_bitmap_alloc_reserved(struct tag_cqm_bitmap *bitmap, u32 count, u32 index)
+{
+ ulong *table = bitmap->table;
+ u32 ret_index;
+
+ if (index >= bitmap->max_num || count != 1)
+ return CQM_INDEX_INVALID;
+
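+
+ /* Only indexes inside the reserved regions (below reserved_top or in
+ * the reserved_back tail) may be claimed through this interface.
+ */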
+ if (index >= bitmap->reserved_top && (index < bitmap->max_num - bitmap->reserved_back))
+ return CQM_INDEX_INVALID;
+
+ spin_lock(&bitmap->lock);
+
+ if (test_bit((int)index, table)) {
+ ret_index = CQM_INDEX_INVALID;
+ } else {
+ set_bit(index, table);
+ ret_index = index;
+ }
+
+ spin_unlock(&bitmap->lock);
+ return ret_index;
+}
+
+/**
+ * Prototype : cqm_bitmap_free
+ * Description : Releases a bitmap index.
+ * Input : struct tag_cqm_bitmap *bitmap,
+ * u32 index,
+ * u32 count
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_bitmap_free(struct tag_cqm_bitmap *bitmap, u32 index, u32 count)
+{
+ u32 i;
+
+ spin_lock(&bitmap->lock);
+
+ for (i = index; i < (index + count); i++)
+ clear_bit((s32)i, bitmap->table);
+
+ spin_unlock(&bitmap->lock);
+}
+
+#define obj_table_section
+
+/**
+ * Prototype : cqm_single_object_table_init
+ * Description : Initialize an object table.
+ * Input : struct tag_cqm_object_table *obj_table
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/9/9
+ * Modification : Created function
+ */
+static s32 cqm_single_object_table_init(struct tag_cqm_object_table *obj_table)
+{
+ rwlock_init(&obj_table->lock);
+
+ obj_table->table = vmalloc(obj_table->max_num * sizeof(void *));
+ if (!obj_table->table)
+ return CQM_FAIL;
+ memset(obj_table->table, 0, obj_table->max_num * sizeof(void *));
+ return CQM_SUCCESS;
+}
+
+/**
+ * Prototype : cqm_object_table_init
+ * Description : Initialize the association table between objects and indexes.
+ * Input : struct tag_cqm_handle *cqm_handle
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+s32 cqm_object_table_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *obj_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ s32 ret = CQM_SUCCESS;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Obj table init: cla_table_type %u, obj_num=0, don't init obj table\n",
+ cla_table->type);
+ continue;
+ }
+
+ obj_table = &cla_table->obj_table;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_QPC:
+ obj_table->max_num = capability->qpc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ obj_table->max_num = capability->mpt_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ obj_table->max_num = capability->scqc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ obj_table->max_num = capability->srqc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ default:
+ break;
+ }
+
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Obj table init: failed to init cla_table_type=%u, obj_num=0x%x\n",
+ cla_table->type, cla_table->obj_num);
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_object_table_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * Prototype : cqm_object_table_uninit
+ * Description : Deinitialize the association table between objects and
+ * indexes.
+ * Input : struct tag_cqm_handle *cqm_handle
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_object_table_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_object_table *obj_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ obj_table = &cla_table->obj_table;
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ if (obj_table->table) {
+ rwlock_deinit(&obj_table->lock);
+ vfree(obj_table->table);
+ obj_table->table = NULL;
+ }
+ }
+ }
+}
+
+/**
+ * Prototype : cqm_object_table_insert
+ * Description : Insert an object
+ * Input : struct tag_cqm_handle *cqm_handle
+ * struct tag_cqm_object_table *object_table
+ * u32 index
+ * struct tag_cqm_object *obj
+ * bool bh
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+s32 cqm_object_table_insert(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, struct tag_cqm_object *obj, bool bh)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table insert: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return CQM_FAIL;
+ }
+
+ cqm_write_lock(&object_table->lock, bh);
+
+ if (!object_table->table[index]) {
+ object_table->table[index] = obj;
+ cqm_write_unlock(&object_table->lock, bh);
+ return CQM_SUCCESS;
+ }
+
+ cqm_write_unlock(&object_table->lock, bh);
+ cqm_err(handle->dev_hdl,
+ "Obj table insert: object_table->table[0x%x] has been inserted\n",
+ index);
+
+ return CQM_FAIL;
+}
+
+/**
+ * Prototype : cqm_object_table_remove
+ * Description : Remove an object
+ * Input : struct tag_cqm_handle *cqm_handle
+ * struct tag_cqm_object_table *object_table
+ * u32 index
+ * const struct tag_cqm_object *obj
+ * bool bh
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_object_table_remove(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, const struct tag_cqm_object *obj, bool bh)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table remove: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return;
+ }
+
+ cqm_write_lock(&object_table->lock, bh);
+
+ if (object_table->table[index] && object_table->table[index] == obj)
+ object_table->table[index] = NULL;
+ else
+ cqm_err(handle->dev_hdl,
+ "Obj table remove: object_table->table[0x%x] has been removed\n",
+ index);
+
+ cqm_write_unlock(&object_table->lock, bh);
+}
+
+/**
+ * Prototype : cqm_object_table_get
+ * Description : Get an object by index and increment its reference count.
+ * Input : struct tag_cqm_handle *cqm_handle
+ * struct tag_cqm_object_table *object_table
+ * u32 index
+ * bool bh
+ * Output : None
+ * Return Value : struct tag_cqm_object *obj
+ * 1.Date : 2018/6/20
+ * Modification : Created function
+ */
+struct tag_cqm_object *cqm_object_table_get(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, bool bh)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object *obj = NULL;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table get: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return NULL;
+ }
+
+ cqm_read_lock(&object_table->lock, bh);
+
+ obj = object_table->table[index];
+ if (obj)
+ atomic_inc(&obj->refcount);
+
+ cqm_read_unlock(&object_table->lock, bh);
+
+ return obj;
+}
+
+u32 cqm_bitmap_alloc_by_xid(struct tag_cqm_bitmap *bitmap, u32 count, u32 index)
+{
+ ulong *table = bitmap->table;
+ u32 ret_index;
+
+ if (index >= bitmap->max_num || count != 1)
+ return CQM_INDEX_INVALID;
+
+ spin_lock(&bitmap->lock);
+
+ if (test_bit((int)index, table)) {
+ ret_index = CQM_INDEX_INVALID;
+ } else {
+ set_bit(index, table);
+ ret_index = index;
+ }
+
+ spin_unlock(&bitmap->lock);
+ return ret_index;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.h
new file mode 100644
index 0000000..06b8661
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_BITMAP_TABLE_H
+#define CQM_BITMAP_TABLE_H
+
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+
+#include "cqm_object.h"
+#include "vram_common.h"
+
+struct tag_cqm_bitmap {
+ ulong *table;
+ u32 max_num;
+ u32 last;
+ u32 reserved_top; /* reserved index */
+ u32 reserved_back;
+ spinlock_t lock; /* lock for cqm */
+ struct vram_buf_info bitmap_info;
+};
+
+struct tag_cqm_object_table {
+ /* Now is big array. Later will be optimized as a red-black tree. */
+ struct tag_cqm_object **table;
+ u32 max_num;
+ rwlock_t lock;
+};
+
+struct tag_cqm_handle;
+
+s32 cqm_bitmap_init(struct tag_cqm_handle *cqm_handle);
+void cqm_bitmap_uninit(struct tag_cqm_handle *cqm_handle);
+u32 cqm_bitmap_alloc(struct tag_cqm_bitmap *bitmap, u32 step, u32 count, bool update_last);
+u32 cqm_bitmap_alloc_low2bit_align(struct tag_cqm_bitmap *bitmap, u32 low2bit, bool update_last);
+u32 cqm_bitmap_alloc_reserved(struct tag_cqm_bitmap *bitmap, u32 count, u32 index);
+void cqm_bitmap_free(struct tag_cqm_bitmap *bitmap, u32 index, u32 count);
+s32 cqm_object_table_init(struct tag_cqm_handle *cqm_handle);
+void cqm_object_table_uninit(struct tag_cqm_handle *cqm_handle);
+s32 cqm_object_table_insert(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, struct tag_cqm_object *obj, bool bh);
+void cqm_object_table_remove(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, const struct tag_cqm_object *obj, bool bh);
+struct tag_cqm_object *cqm_object_table_get(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, bool bh);
+u32 cqm_bitmap_alloc_by_xid(struct tag_cqm_bitmap *bitmap, u32 count, u32 index);
+
+void cqm_swab64(u8 *addr, u32 cnt);
+void cqm_swab32(u8 *addr, u32 cnt);
+bool cqm_check_align(u32 data);
+s32 cqm_shift(u32 data);
+s32 cqm_buf_alloc(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf, bool direct);
+s32 cqm_buf_alloc_direct(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf, bool direct);
+void cqm_buf_free(struct tag_cqm_buf *buf, struct tag_cqm_handle *cqm_handle);
+void cqm_buf_free_cache_inv(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf,
+ s32 *inv_flag);
+s32 cqm_cla_cache_invalid(struct tag_cqm_handle *cqm_handle, dma_addr_t gpa,
+ u32 cache_size);
+void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order);
+void cqm_kfree_align(void *addr);
+void cqm_byte_print(u32 *ptr, u32 len);
+
+#endif /* CQM_BITMAP_TABLE_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.c
new file mode 100644
index 0000000..1d9198f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.c
@@ -0,0 +1,506 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_cmd.h"
+#include "cqm_main.h"
+#include "cqm_bloomfilter.h"
+
+#include "cqm_npu_cmd.h"
+#include "cqm_npu_cmd_defs.h"
+
+/**
+ * bloomfilter_init_cmd - host sends a cmd to the ucode to initialize the bloomfilter memory
+ * @cqm_handle: CQM handle
+ */
+static s32 bloomfilter_init_cmd(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct tag_cqm_bloomfilter_init_cmd *cmd = NULL;
+ struct tag_cqm_cmd_buf *buf_in = NULL;
+ s32 ret;
+
+ buf_in = cqm_cmd_alloc((void *)(cqm_handle->ex_handle));
+ if (!buf_in)
+ return CQM_FAIL;
+
+ /* Fill the command format and convert it to big-endian. */
+ buf_in->size = sizeof(struct tag_cqm_bloomfilter_init_cmd);
+ cmd = (struct tag_cqm_bloomfilter_init_cmd *)(buf_in->buf);
+ cmd->bloom_filter_addr = capability->bloomfilter_addr;
+ cmd->bloom_filter_len = capability->bloomfilter_length;
+
+ cqm_swab32((u8 *)cmd,
+ (sizeof(struct tag_cqm_bloomfilter_init_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle),
+ CQM_MOD_CQM, CQM_CMD_T_BLOOMFILTER_INIT, buf_in,
+ NULL, NULL, CQM_CMD_TIMEOUT,
+ HINIC3_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(cqm_handle->ex_handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(cqm_handle->ex_handle->dev_hdl, "Bloomfilter: %s ret=%d\n", __func__,
+ ret);
+ cqm_err(cqm_handle->ex_handle->dev_hdl, "Bloomfilter: %s: 0x%x 0x%x\n",
+ __func__, cmd->bloom_filter_addr,
+ cmd->bloom_filter_len);
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_FAIL;
+ }
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_SUCCESS;
+}
+
+static void cqm_func_bloomfilter_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bloomfilter_table *bloomfilter_table = &cqm_handle->bloomfilter_table;
+
+ if (bloomfilter_table->table) {
+ mutex_deinit(&bloomfilter_table->lock);
+ vfree(bloomfilter_table->table);
+ bloomfilter_table->table = NULL;
+ }
+}
+
+static s32 cqm_func_bloomfilter_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bloomfilter_table *bloomfilter_table = NULL;
+ struct tag_cqm_func_capability *capability = NULL;
+ u32 array_size;
+ s32 ret;
+
+ bloomfilter_table = &cqm_handle->bloomfilter_table;
+ capability = &cqm_handle->func_capability;
+
+ if (capability->bloomfilter_length == 0) {
+ cqm_info(cqm_handle->ex_handle->dev_hdl,
+ "Bloomfilter: bf_length=0, don't need to init bloomfilter\n");
+ return CQM_SUCCESS;
+ }
+
+ /* The unit of bloomfilter_length is 64B (512 bits). Each bit is a table
+ * node. Therefore the value must be shifted 9 bits to the left.
+ */
+ bloomfilter_table->table_size = capability->bloomfilter_length <<
+ CQM_BF_LENGTH_UNIT;
+ /* The unit of bloomfilter_length is 64B. The unit of an array entry is 32B.
+ */
+ array_size = capability->bloomfilter_length << 1;
+ if (array_size == 0 || array_size > CQM_BF_BITARRAY_MAX) {
+ cqm_err(cqm_handle->ex_handle->dev_hdl, CQM_WRONG_VALUE(array_size));
+ return CQM_FAIL;
+ }
+
+ bloomfilter_table->array_mask = array_size - 1;
+ /* This table is not a bitmap; each element is the reference counter
+ * of the corresponding bit.
+ */
+ bloomfilter_table->table = vmalloc(bloomfilter_table->table_size *
+ (sizeof(u32)));
+ if (!bloomfilter_table->table)
+ return CQM_FAIL;
+
+ memset(bloomfilter_table->table, 0, (bloomfilter_table->table_size * sizeof(u32)));
+
+ /* The bloomfilter must be initialized to 0 by the ucode,
+ * because the bloomfilter is in mem mode
+ */
+ if (cqm_handle->func_capability.bloomfilter_enable) {
+ ret = bloomfilter_init_cmd(cqm_handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(cqm_handle->ex_handle->dev_hdl,
+ "Bloomfilter: bloomfilter_init_cmd ret=%d\n",
+ ret);
+ vfree(bloomfilter_table->table);
+ bloomfilter_table->table = NULL;
+ return CQM_FAIL;
+ }
+ }
+
+ mutex_init(&bloomfilter_table->lock);
+ cqm_dbg("Bloomfilter: table_size=0x%x, array_size=0x%x\n",
+ bloomfilter_table->table_size, array_size);
+ return CQM_SUCCESS;
+}
+
+static void cqm_fake_bloomfilter_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type != CQM_FAKE_FUNC_PARENT)
+ return;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+ cqm_func_bloomfilter_uninit(fake_cqm_handle);
+ }
+}
+
+static s32 cqm_fake_bloomfilter_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type != CQM_FAKE_FUNC_PARENT)
+ return CQM_SUCCESS;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+ if (cqm_func_bloomfilter_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_func_bloomfilter_init));
+ goto bloomfilter_init_err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+bloomfilter_init_err:
+ cqm_fake_bloomfilter_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_bloomfilter_init - initialize the bloomfilter of cqm
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_bloomfilter_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ if (cqm_fake_bloomfilter_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_fake_bloomfilter_init));
+ return CQM_FAIL;
+ }
+
+ if (cqm_func_bloomfilter_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_func_bloomfilter_init));
+ goto bloomfilter_init_err;
+ }
+
+ return CQM_SUCCESS;
+
+bloomfilter_init_err:
+ cqm_fake_bloomfilter_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_bloomfilter_uninit - uninitialize the bloomfilter of cqm
+ * @ex_handle: device pointer that represents the PF
+ */
+void cqm_bloomfilter_uninit(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ cqm_fake_bloomfilter_uninit(cqm_handle);
+ cqm_func_bloomfilter_uninit(cqm_handle);
+}
+
+/**
+ * cqm_bloomfilter_cmd - host send bloomfilter api cmd to ucode
+ * @ex_handle: device pointer that represents the PF
+ * @func_id: function id
+ * @op: operation code
+ * @k_flag: k_en[3:0] bitmask selecting which bloomfilter sections to update
+ * @id: the ID of the bloomfilter
+ */
+s32 cqm_bloomfilter_cmd(void *ex_handle, u16 func_id, u32 op, u32 k_flag, u64 id)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_cmd_buf *buf_in = NULL;
+ struct tag_cqm_bloomfilter_cmd *cmd = NULL;
+ s32 ret;
+
+ buf_in = cqm_cmd_alloc(ex_handle);
+ if (!buf_in)
+ return CQM_FAIL;
+
+ /* Fill the command format and convert it to big-endian. */
+ buf_in->size = sizeof(struct tag_cqm_bloomfilter_cmd);
+ cmd = (struct tag_cqm_bloomfilter_cmd *)(buf_in->buf);
+ memset((void *)cmd, 0, sizeof(struct tag_cqm_bloomfilter_cmd));
+ cmd->func_id = func_id;
+ cmd->k_en = k_flag;
+ cmd->index_h = (u32)(id >> CQM_DW_OFFSET);
+ cmd->index_l = (u32)(id & CQM_DW_MASK);
+
+ cqm_swab32((u8 *)cmd, (sizeof(struct tag_cqm_bloomfilter_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm_send_cmd_box(ex_handle, CQM_MOD_CQM, (u8)op, buf_in, NULL,
+ NULL, CQM_CMD_TIMEOUT, HINIC3_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Bloomfilter: bloomfilter_cmd ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl, "Bloomfilter: op=0x%x, cmd: 0x%x 0x%x 0x%x 0x%x\n",
+ op, *((u32 *)cmd), *(((u32 *)cmd) + CQM_DW_INDEX1),
+ *(((u32 *)cmd) + CQM_DW_INDEX2),
+ *(((u32 *)cmd) + CQM_DW_INDEX3));
+ cqm_cmd_free(ex_handle, buf_in);
+ return CQM_FAIL;
+ }
+
+ cqm_cmd_free(ex_handle, buf_in);
+
+ return CQM_SUCCESS;
+}
+
+static struct tag_cqm_handle *cqm_get_func_cqm_handle(struct hinic3_hwdev *ex_handle, u16 func_id)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_func_capability *func_cap = NULL;
+ s32 child_func_start, child_func_number;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(ex_handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ /* function id is PF/VF */
+ if (func_id == hinic3_global_func_id(ex_handle))
+ return cqm_handle;
+
+ func_cap = &cqm_handle->func_capability;
+ if (func_cap->fake_func_type != CQM_FAKE_FUNC_PARENT) {
+ cqm_err(ex_handle->dev_hdl, CQM_WRONG_VALUE(func_cap->fake_func_type));
+ return NULL;
+ }
+
+ child_func_start = cqm_get_child_func_start(cqm_handle);
+ if (child_func_start == CQM_FAIL) {
+ cqm_err(ex_handle->dev_hdl, CQM_WRONG_VALUE(child_func_start));
+ return NULL;
+ }
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(ex_handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return NULL;
+ }
+
+ /* function id is fake vf */
+ if (func_id >= child_func_start && (func_id < (child_func_start + child_func_number)))
+ return cqm_handle->fake_cqm_handle[func_id - (u16)child_func_start];
+
+ return NULL;
+}
+
+/**
+ * cqm_bloomfilter_inc - Increment the bloomfilter reference counters for the given ID
+ * @ex_handle: device pointer that represents the PF
+ * @func_id: function id
+ * @id: the ID of the bloomfilter
+ */
+s32 cqm_bloomfilter_inc(void *ex_handle, u16 func_id, u64 id)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_bloomfilter_table *bloomfilter_table = NULL;
+ u32 array_tmp[CQM_BF_SECTION_NUMBER] = {0};
+ struct tag_cqm_handle *cqm_handle = NULL;
+ u32 array_index, array_bit, i;
+ u32 k_flag = 0;
+
+ cqm_dbg("Bloomfilter: func_id: %d, inc id=0x%llx\n", func_id, id);
+
+ cqm_handle = cqm_get_func_cqm_handle(ex_handle, func_id);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle_bf_inc is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ if (cqm_handle->func_capability.bloomfilter_enable == 0) {
+ cqm_info(handle->dev_hdl, "Bloomfilter inc: bloomfilter is disable\n");
+ return CQM_SUCCESS;
+ }
+
+ /* |(array_index=0)32B(array_bit:256bits)|(array_index=1)32B(256bits)|
+ * array_index = 0~bloomfilter_table->table_size/256bit
+ * array_bit = 0~255
+ */
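+ /* Example with array_mask = 0xff for illustration:
+ * id = 0x0000000300001100 -> bits[47:32] = 3, so the entry index is 3
+ * and the entry starts at counter 3 * 256 = 768;
+ * section 0 uses bits[13:8] = 0x11 -> counter 768 + 17 + 0 = 785;
+ * section 1 uses bits[19:14] = 0x00 -> counter 768 + 0 + 64 = 832.
+ */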
+ cqm_dbg("Bloomfilter: inc id=0x%llx\n", id);
+ bloomfilter_table = &cqm_handle->bloomfilter_table;
+
+ /* The array index identifies a 32-byte entry. */
+ array_index = (u32)CQM_BF_BITARRAY_INDEX(id, bloomfilter_table->array_mask);
+ /* convert the unit of array_index to bit */
+ array_index = array_index << CQM_BF_ENTRY_SIZE_UNIT;
+ cqm_dbg("Bloomfilter: inc array_index=0x%x\n", array_index);
+
+ mutex_lock(&bloomfilter_table->lock);
+ for (i = 0; i < CQM_BF_SECTION_NUMBER; i++) {
+ /* the position of the bit in 64-bit section */
+ array_bit =
+ (id >> (CQM_BF_SECTION_BASE + i * CQM_BF_SECTION_SIZE)) &
+ CQM_BF_SECTION_MASK;
+ /* bit position within the section, plus the bit offset of the
+ * 32-byte entry, plus the bits of the preceding 64-bit sections
+ */
+ array_bit = array_bit + array_index +
+ (i * CQM_BF_SECTION_BIT_NUMBER);
+
+ /* array_tmp[i] records the index of the bloomfilter.
+ * It is used to roll back the reference counting of the
+ * bitarray.
+ */
+ array_tmp[i] = array_bit;
+ cqm_dbg("Bloomfilter: inc array_bit=0x%x\n", array_bit);
+
+ /* Increment the counter of the corresponding bit in the bloomfilter
+ * table. If the value changes from 0 to 1, set the corresponding
+ * bit in k_flag.
+ */
+ (bloomfilter_table->table[array_bit])++;
+ cqm_dbg("Bloomfilter: inc bloomfilter_table->table[%d]=0x%x\n",
+ array_bit, bloomfilter_table->table[array_bit]);
+ if (bloomfilter_table->table[array_bit] == 1)
+ k_flag |= (1U << i);
+ }
+
+ if (k_flag != 0) {
+ /* send cmd to ucode and set corresponding bit. */
+ if (cqm_bloomfilter_cmd(ex_handle, func_id, CQM_CMD_T_BLOOMFILTER_SET,
+ k_flag, id) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bloomfilter_cmd_inc));
+ for (i = 0; i < CQM_BF_SECTION_NUMBER; i++) {
+ array_bit = array_tmp[i];
+ (bloomfilter_table->table[array_bit])--;
+ }
+ mutex_unlock(&bloomfilter_table->lock);
+ return CQM_FAIL;
+ }
+ }
+
+ mutex_unlock(&bloomfilter_table->lock);
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_bloomfilter_inc);
+
+/**
+ * cqm_bloomfilter_dec - Decrement the bloomfilter reference counters for the given ID
+ * @ex_handle: device pointer that represents the PF
+ * @func_id: function id
+ * @id: the ID of the bloomfilter
+ */
+s32 cqm_bloomfilter_dec(void *ex_handle, u16 func_id, u64 id)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_bloomfilter_table *bloomfilter_table = NULL;
+ u32 array_tmp[CQM_BF_SECTION_NUMBER] = {0};
+ struct tag_cqm_handle *cqm_handle = NULL;
+ u32 array_index, array_bit, i;
+ u32 k_flag = 0;
+
+ cqm_handle = cqm_get_func_cqm_handle(ex_handle, func_id);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle_bf_dec is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ if (cqm_handle->func_capability.bloomfilter_enable == 0) {
+ cqm_info(handle->dev_hdl, "Bloomfilter dec: bloomfilter is disable\n");
+ return CQM_SUCCESS;
+ }
+
+ cqm_dbg("Bloomfilter: dec id=0x%llx\n", id);
+ bloomfilter_table = &cqm_handle->bloomfilter_table;
+
+ /* The array index identifies a 32-byte entry. */
+ array_index = (u32)CQM_BF_BITARRAY_INDEX(id, bloomfilter_table->array_mask);
+ cqm_dbg("Bloomfilter: dec array_index=0x%x\n", array_index);
+ mutex_lock(&bloomfilter_table->lock);
+ for (i = 0; i < CQM_BF_SECTION_NUMBER; i++) {
+ /* the position of the bit in 64-bit section */
+ array_bit =
+ (id >> (CQM_BF_SECTION_BASE + i * CQM_BF_SECTION_SIZE)) &
+ CQM_BF_SECTION_MASK;
+ /* bit position within the section, plus the bit offset of the
+ * 32-byte entry, plus the bits of the preceding 64-bit sections
+ */
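+ /* 0x8 is CQM_BF_ENTRY_SIZE_UNIT (256 counters per 32B entry) and
+ * 0x40 is CQM_BF_SECTION_BIT_NUMBER, matching the computation in
+ * cqm_bloomfilter_inc().
+ */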
+ array_bit = array_bit + (array_index << 0x8) + (i * 0x40);
+
+ /* array_tmp[i] records the index of the bloomfilter.
+ * It is used to roll back the reference counting of the
+ * bitarray.
+ */
+ array_tmp[i] = array_bit;
+
+ /* Decrement the counter of the corresponding bit in the bloomfilter
+ * table. If the value changes from 1 to 0, set the corresponding
+ * bit in k_flag. Do not decrement further once the counter
+ * has already reached 0.
+ */
+ if (bloomfilter_table->table[array_bit] != 0) {
+ (bloomfilter_table->table[array_bit])--;
+ cqm_dbg("Bloomfilter: dec bloomfilter_table->table[%d]=0x%x\n",
+ array_bit, (bloomfilter_table->table[array_bit]));
+ if (bloomfilter_table->table[array_bit] == 0)
+ k_flag |= (1U << i);
+ }
+ }
+
+ if (k_flag != 0) {
+ /* send cmd to ucode and clear corresponding bit. */
+ if (cqm_bloomfilter_cmd(ex_handle, func_id, CQM_CMD_T_BLOOMFILTER_CLEAR,
+ k_flag, id) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bloomfilter_cmd_dec));
+ for (i = 0; i < CQM_BF_SECTION_NUMBER; i++) {
+ array_bit = array_tmp[i];
+ (bloomfilter_table->table[array_bit])++;
+ }
+ mutex_unlock(&bloomfilter_table->lock);
+ return CQM_FAIL;
+ }
+ }
+
+ mutex_unlock(&bloomfilter_table->lock);
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_bloomfilter_dec);
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.h
new file mode 100644
index 0000000..8fd446c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_BLOOMFILTER_H
+#define CQM_BLOOMFILTER_H
+
+#include <linux/types.h>
+#include <linux/mutex.h>
+
+/* The bloomfilter entry size is 32B (256 bits); the entry index is bits [47:32]
+ * of the hash. Bits |31~26|25~20|19~14|13~8| locate the 4 bloomfilter sections
+ * within one entry. k_en[3:0] specifies which sections to operate on.
+ */
+#define CQM_BF_ENTRY_SIZE 32
+#define CQM_BF_ENTRY_SIZE_UNIT 8
+#define CQM_BF_BITARRAY_MAX BIT(17)
+
+#define CQM_BF_SECTION_NUMBER 4
+#define CQM_BF_SECTION_BASE 8
+#define CQM_BF_SECTION_SIZE 6
+#define CQM_BF_SECTION_MASK 0x3f
+#define CQM_BF_SECTION_BIT_NUMBER 64
+
+#define CQM_BF_ARRAY_INDEX_OFFSET 32
+#define CQM_BF_BITARRAY_INDEX(id, mask) \
+ (((id) >> CQM_BF_ARRAY_INDEX_OFFSET) & (mask))
+
+/* The unit of bloomfilter_length is 64B(512bits). */
+#define CQM_BF_LENGTH_UNIT 9
+
+#define CQM_DW_MASK 0xffffffff
+#define CQM_DW_OFFSET 32
+#define CQM_DW_INDEX0 0
+#define CQM_DW_INDEX1 1
+#define CQM_DW_INDEX2 2
+#define CQM_DW_INDEX3 3
+
+struct tag_cqm_bloomfilter_table {
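+ /* Each element is a per-bit reference counter rather than a bitmap;
+ * see cqm_func_bloomfilter_init().
+ */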
+ u32 *table;
+ u32 table_size; /* The unit is bit */
+ u32 array_mask; /* the unit of an array entry is 32B; used to address entries */
+ struct mutex lock;
+};
+
+/* only for test */
+s32 cqm_bloomfilter_cmd(void *ex_handle, u16 func_id, u32 op, u32 k_flag, u64 id);
+s32 cqm_bloomfilter_init(void *ex_handle);
+void cqm_bloomfilter_uninit(void *ex_handle);
+s32 cqm_bloomfilter_inc(void *ex_handle, u16 func_id, u64 id);
+s32 cqm_bloomfilter_dec(void *ex_handle, u16 func_id, u64 id);
+
+#endif /* CQM_BLOOMFILTER_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.c
new file mode 100644
index 0000000..3d38edc
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.c
@@ -0,0 +1,182 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_hwdev.h"
+
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_main.h"
+
+/**
+ * cqm_cmd_alloc - Apply for a cmd buffer. The buffer size is fixed at 2 KB.
+ * The buffer content is not cleared and must be cleared by the service.
+ * @ex_handle: device pointer that represents the PF
+ */
+struct tag_cqm_cmd_buf *cqm_cmd_alloc(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_alloc_cnt);
+
+ return (struct tag_cqm_cmd_buf *)hinic3_alloc_cmd_buf(ex_handle);
+}
+EXPORT_SYMBOL(cqm_cmd_alloc);
+
+/**
+ * cqm_cmd_free - Release a cmd buffer
+ * @ex_handle: device pointer that represents the PF
+ * @cmd_buf: command buffer
+ */
+void cqm_cmd_free(void *ex_handle, struct tag_cqm_cmd_buf *cmd_buf)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+ if (unlikely(!cmd_buf)) {
+ pr_err("[CQM]%s: cmd_buf is null\n", __func__);
+ return;
+ }
+ if (unlikely(!cmd_buf->buf)) {
+ pr_err("[CQM]%s: buf is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_free_cnt);
+
+ hinic3_free_cmd_buf(ex_handle, (struct hinic3_cmd_buf *)cmd_buf);
+}
+EXPORT_SYMBOL(cqm_cmd_free);
+
+/**
+ * cqm_send_cmd_box - Send a cmd message in box mode.
+ * This interface blocks on a completion and may sleep.
+ * @ex_handle: device pointer that represents the PF
+ * @mod: command module
+ * @cmd: command type
+ * @buf_in: data buffer in address
+ * @buf_out: data buffer out address
+ * @out_param: output parameter
+ * @timeout: timeout in milliseconds
+ * @channel: mailbox channel
+ */
+s32 cqm_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, struct tag_cqm_cmd_buf *buf_in,
+ struct tag_cqm_cmd_buf *buf_out, u64 *out_param, u32 timeout,
+ u16 channel)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in)) {
+ pr_err("[CQM]%s: buf_in is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in->buf)) {
+ pr_err("[CQM]%s: buf is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_send_cmd_box_cnt);
+
+ return hinic3_cmdq_detail_resp(ex_handle, mod, cmd,
+ (struct hinic3_cmd_buf *)buf_in,
+ (struct hinic3_cmd_buf *)buf_out,
+ out_param, timeout, channel);
+}
+EXPORT_SYMBOL(cqm_send_cmd_box);
+
+/**
+ * cqm_lb_send_cmd_box - Send a cmd message in box mode with a cos_id.
+ * This interface blocks on a completion and may sleep.
+ * @ex_handle: device pointer that represents the PF
+ * @mod: command module
+ * @cmd: command type
+ * @cos_id: cos id
+ * @buf_in: data buffer in address
+ * @buf_out: data buffer out address
+ * @out_param: output parameter
+ * @timeout: timeout in milliseconds
+ * @channel: mailbox channel
+ */
+s32 cqm_lb_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, u8 cos_id,
+ struct tag_cqm_cmd_buf *buf_in, struct tag_cqm_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in)) {
+ pr_err("[CQM]%s: buf_in is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in->buf)) {
+ pr_err("[CQM]%s: buf is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_send_cmd_box_cnt);
+
+ return hinic3_cos_id_detail_resp(ex_handle, mod, cmd, cos_id,
+ (struct hinic3_cmd_buf *)buf_in,
+ (struct hinic3_cmd_buf *)buf_out,
+ out_param, timeout, channel);
+}
+EXPORT_SYMBOL(cqm_lb_send_cmd_box);
+
+/**
+ * cqm_lb_send_cmd_box_async - Send a cmd message in box mode with a cos_id.
+ * This interface does not wait for completion.
+ * @ex_handle: device pointer that represents the PF
+ * @mod: command module
+ * @cmd: command type
+ * @cos_id: cos id
+ * @buf_in: data buffer in address
+ * @channel: mailbox channel
+ */
+s32 cqm_lb_send_cmd_box_async(void *ex_handle, u8 mod, u8 cmd,
+ u8 cos_id, struct tag_cqm_cmd_buf *buf_in,
+ u16 channel)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in)) {
+ pr_err("[CQM]%s: buf_in is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in->buf)) {
+ pr_err("[CQM]%s: buf is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_send_cmd_box_cnt);
+
+ return hinic3_cmdq_async_cos(ex_handle, mod, cmd, cos_id,
+ (struct hinic3_cmd_buf *)buf_in, channel);
+}
+EXPORT_SYMBOL(cqm_lb_send_cmd_box_async);
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.h
new file mode 100644
index 0000000..46eb8ec
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_CMD_H
+#define CQM_CMD_H
+
+#include <linux/types.h>
+
+#include "cqm_object.h"
+
+#ifdef __cplusplus
+#if __cplusplus
+extern "C" {
+#endif
+#endif /* __cplusplus */
+
+#define CQM_CMD_TIMEOUT 10000 /* ms */
+
+struct tag_cqm_cmd_buf *cqm_cmd_alloc(void *ex_handle);
+void cqm_cmd_free(void *ex_handle, struct tag_cqm_cmd_buf *cmd_buf);
+s32 cqm_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, struct tag_cqm_cmd_buf *buf_in,
+ struct tag_cqm_cmd_buf *buf_out, u64 *out_param, u32 timeout,
+ u16 channel);
+s32 cqm_lb_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, u8 cos_id,
+ struct tag_cqm_cmd_buf *buf_in, struct tag_cqm_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+s32 cqm_lb_send_cmd_box_async(void *ex_handle, u8 mod, u8 cmd,
+ u8 cos_id, struct tag_cqm_cmd_buf *buf_in,
+ u16 channel);
+
+#ifdef __cplusplus
+#if __cplusplus
+}
+#endif
+#endif /* __cplusplus */
+
+#endif /* CQM_CMD_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.c
new file mode 100644
index 0000000..db65c8b
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.c
@@ -0,0 +1,444 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_hwdev.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_object_intern.h"
+#include "cqm_main.h"
+#include "cqm_db.h"
+
+/**
+ * cqm_db_addr_alloc - Apply for a page of hardware doorbell and DWQE.
+ * Both use the same page index. The returned addresses are mapped
+ * (__iomem) addresses. Each function has at most 1K doorbell addresses.
+ * @ex_handle: device pointer that represents the PF
+ * @db_addr: doorbell address
+ * @dwqe_addr: DWQE address
+ */
+s32 cqm_db_addr_alloc(void *ex_handle, void __iomem **db_addr,
+ void __iomem **dwqe_addr)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!db_addr)) {
+ pr_err("[CQM]%s: db_addr is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!dwqe_addr)) {
+ pr_err("[CQM]%s: dwqe_addr is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_alloc_cnt);
+
+ return hinic3_alloc_db_addr(ex_handle, db_addr, dwqe_addr);
+}
+
+s32 cqm_db_phy_addr_alloc(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr)
+{
+ return hinic3_alloc_db_phy_addr(ex_handle, db_paddr, dwqe_addr);
+}
+
+/**
+ * cqm_db_addr_free - Release a page of hardware doorbell and dwqe
+ * @ex_handle: device pointer that represents the PF
+ * @db_addr: doorbell address
+ * @dwqe_addr: DWQE address
+ */
+static void cqm_db_addr_free(void *ex_handle, const void __iomem *db_addr, void __iomem *dwqe_addr)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_free_cnt);
+
+ hinic3_free_db_addr(ex_handle, db_addr, dwqe_addr);
+}
+
+static void cqm_db_phy_addr_free(void *ex_handle, u64 *db_paddr, const u64 *dwqe_addr)
+{
+ hinic3_free_db_phy_addr(ex_handle, *db_paddr, *dwqe_addr);
+}
+
+static bool cqm_need_db_init(s32 service)
+{
+ bool need_db_init = false;
+
+ switch (service) {
+ case CQM_SERVICE_T_NIC:
+ case CQM_SERVICE_T_OVS:
+ case CQM_SERVICE_T_IPSEC:
+ case CQM_SERVICE_T_VIRTIO:
+ case CQM_SERVICE_T_PPA:
+ need_db_init = false;
+ break;
+ default:
+ need_db_init = true;
+ }
+
+ return need_db_init;
+}
+
+/**
+ * cqm_db_init - Initialize the doorbell of the CQM
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_db_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ s32 i;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ /* Allocate hardware doorbells to services. */
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ service = &cqm_handle->service[i];
+ if (!cqm_need_db_init(i) || !service->valid)
+ continue;
+
+ if (cqm_db_addr_alloc(ex_handle, &service->hardware_db_vaddr,
+ &service->dwqe_vaddr) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_db_addr_alloc));
+ break;
+ }
+
+ if (cqm_db_phy_addr_alloc(handle, &service->hardware_db_paddr,
+ &service->dwqe_paddr) !=
+ CQM_SUCCESS) {
+ cqm_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_db_phy_addr_alloc));
+ break;
+ }
+ }
+
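+ /* A service failed above: roll back the doorbell and DWQE
+ * addresses already allocated for lower-numbered services.
+ */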
+ if (i != CQM_SERVICE_T_MAX) {
+ i--;
+ for (; i >= 0; i--) {
+ service = &cqm_handle->service[i];
+ if (!cqm_need_db_init(i) || !service->valid)
+ continue;
+
+ cqm_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ cqm_db_phy_addr_free(ex_handle,
+ &service->hardware_db_paddr,
+ &service->dwqe_paddr);
+ }
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_db_uninit - Deinitialize the doorbell of the CQM
+ * @ex_handle: device pointer that represents the PF
+ */
+void cqm_db_uninit(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ s32 i;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ /* Release hardware doorbell. */
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ service = &cqm_handle->service[i];
+ if (service->valid && cqm_need_db_init(i)) {
+ cqm_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ cqm_db_phy_addr_free(ex_handle, &service->hardware_db_paddr,
+ &service->dwqe_paddr);
+ }
+ }
+}
+
+/**
+ * cqm_get_db_addr - Return hardware DB vaddr
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ */
+void *cqm_get_db_addr(void *ex_handle, u32 service_type)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ pr_err("%s service_type = %d state is error\n", __func__,
+ service_type);
+ return NULL;
+ }
+ handle = (struct hinic3_hwdev *)ex_handle;
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ return (void *)service->hardware_db_vaddr;
+}
+EXPORT_SYMBOL(cqm_get_db_addr);
+
+/**
+ * cqm_ring_hardware_db - Ring hardware DB to chip
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: Each kernel-mode service is allocated a hardware db page
+ * @db_count: bit[7:0] of the PI, which cannot be stored in the 64-bit db
+ * @db: the db content, organized by the service,
+ * including big-endian conversion.
+ */
+s32 cqm_ring_hardware_db(void *ex_handle, u32 service_type, u8 db_count, u64 db)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ pr_err("%s service_type = %d state is error\n", __func__,
+ service_type);
+ return CQM_FAIL;
+ }
+
+ handle = (struct hinic3_hwdev *)ex_handle;
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
+ *((u64 *)service->hardware_db_vaddr + db_count) = db;
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_ring_hardware_db);
+
+/**
+ * cqm_ring_hardware_db_fc - Ring fake vf hardware DB to chip
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: Each kernel-mode service is allocated a hardware db page
+ * @db_count: bit[7:0] of the PI, which cannot be stored in the 64-bit db
+ * @pagenum: Indicates the doorbell address offset of the fake VFID
+ * @db: the db content, organized by the service,
+ * including big-endian conversion.
+ */
+s32 cqm_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count,
+ u8 pagenum, u64 db)
+{
+#define HIFC_DB_FAKE_VF_OFFSET 32
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+ void *dbaddr = NULL;
+
+ handle = (struct hinic3_hwdev *)ex_handle;
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
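+ /* Fake VF doorbell pages start HIFC_DB_FAKE_VF_OFFSET (32) pages
+ * after the service's doorbell area; pagenum selects the page
+ * within that region.
+ */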
+ dbaddr = (u8 *)service->hardware_db_vaddr +
+ ((pagenum + HIFC_DB_FAKE_VF_OFFSET) * HINIC3_DB_PAGE_SIZE);
+ *((u64 *)dbaddr + db_count) = db;
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_ring_direct_wqe_db - Ring direct wqe hardware DB to chip
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: Each kernel-mode service is allocated a hardware db page
+ * @db_count: bit[7:0] of the PI, which cannot be stored in the 64-bit db
+ * @direct_wqe: The content of direct_wqe
+ */
+s32 cqm_ring_direct_wqe_db(void *ex_handle, u32 service_type, u8 db_count,
+ void *direct_wqe)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+ u64 *tmp = (u64 *)direct_wqe;
+ int i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ pr_err("%s service_type = %d state is error\n", __func__,
+ service_type);
+ return CQM_FAIL;
+ }
+
+ handle = (struct hinic3_hwdev *)ex_handle;
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
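+ /* Copy the 128-byte (0x80) direct WQE, one u64 at a time, into the
+ * DWQE page at qword offset 0x40 (byte offset 0x200).
+ */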
+ for (i = 0; i < 0x80 / 0x8; i++)
+ *((u64 *)service->dwqe_vaddr + 0x40 + i) = *tmp++;
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_ring_direct_wqe_db);
+
+s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type,
+ void *direct_wqe)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+ u64 *tmp = (u64 *)direct_wqe;
+ int i;
+
+ handle = (struct hinic3_hwdev *)ex_handle;
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
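+ /* Write the first 32B with the two 16B halves swapped (qwords 2,3
+ * then 0,1), then copy the remaining 224B of the 256B WQE in order.
+ */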
+ *((u64 *)service->dwqe_vaddr + 0x0) = tmp[0x2];
+ *((u64 *)service->dwqe_vaddr + 0x1) = tmp[0x3];
+ *((u64 *)service->dwqe_vaddr + 0x2) = tmp[0x0];
+ *((u64 *)service->dwqe_vaddr + 0x3) = tmp[0x1];
+ tmp += 0x4;
+
+ /* The FC use 256B WQE. The directwqe is written at block0,
+ * and the length is 256B
+ */
+ for (i = 0x4; i < 0x20; i++)
+ *((u64 *)service->dwqe_vaddr + i) = *tmp++;
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_ring_hardware_db_update_pri - Doorbell interface on which the CQM converts the PRI
+ * to the CoS. The doorbell passed by the service must be in
+ * host byte order; this interface converts it to network
+ * byte order.
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: Each kernel-mode service is allocated a hardware db page
+ * @db_count: bit[7:0] of the PI, which cannot be stored in the 64-bit db
+ * @db: the db content, organized by the service,
+ * including big-endian conversion.
+ */
+s32 cqm_ring_hardware_db_update_pri(void *ex_handle, u32 service_type,
+ u8 db_count, u64 db)
+{
+ struct tag_cqm_db_common *db_common = (struct tag_cqm_db_common *)(&db);
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ handle = (struct hinic3_hwdev *)ex_handle;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ /* the CQM converts the PRI to the CoS */
+ db_common->cos = 0x7 - db_common->cos;
+
+ cqm_swab32((u8 *)db_common, sizeof(u64) >> CQM_DW_SHIFT);
+
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
+ *((u64 *)service->hardware_db_vaddr + db_count) = db;
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_ring_software_db - Ring software db
+ * @object: CQM object
+ * @db_record: the db content, organized by the service,
+ * including big-endian conversion. For RQ/SQ: This field is filled
+ * with the doorbell_record area of queue_header. For CQ: This field
+ * is filled with the value of ci_record in queue_header.
+ */
+s32 cqm_ring_software_db(struct tag_cqm_object *object, u64 db_record)
+{
+ struct tag_cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
+ struct tag_cqm_rdma_qinfo *rdma_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ handle = cqm_handle->ex_handle;
+
+ if (object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_RQ ||
+ object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_SQ ||
+ object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+ nonrdma_qinfo = (struct tag_cqm_nonrdma_qinfo *)(void *)object;
+ nonrdma_qinfo->common.q_header_vaddr->doorbell_record =
+ db_record;
+ } else if ((object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_CQ) ||
+ (object->object_type == CQM_OBJECT_NONRDMA_SCQ)) {
+ nonrdma_qinfo = (struct tag_cqm_nonrdma_qinfo *)(void *)object;
+ nonrdma_qinfo->common.q_header_vaddr->ci_record = db_record;
+ } else if ((object->object_type == CQM_OBJECT_RDMA_QP) ||
+ (object->object_type == CQM_OBJECT_RDMA_SRQ)) {
+ rdma_qinfo = (struct tag_cqm_rdma_qinfo *)(void *)object;
+ rdma_qinfo->common.q_header_vaddr->doorbell_record = db_record;
+ } else if (object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ rdma_qinfo = (struct tag_cqm_rdma_qinfo *)(void *)object;
+ rdma_qinfo->common.q_header_vaddr->ci_record = db_record;
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ }
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_ring_software_db);
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.h
new file mode 100644
index 0000000..954f62b
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_DB_H
+#define CQM_DB_H
+
+#include <linux/types.h>
+
+struct tag_cqm_db_common {
+#if (BYTE_ORDER == LITTLE_ENDIAN)
+ u32 rsvd1 : 23;
+ u32 c : 1;
+ u32 cos : 3;
+ u32 service_type : 5;
+#else
+ u32 service_type : 5;
+ u32 cos : 3;
+ u32 c : 1;
+ u32 rsvd1 : 23;
+#endif
+
+ u32 rsvd2;
+};
+
+/* Only for test */
+s32 cqm_db_addr_alloc(void *ex_handle, void __iomem **db_addr,
+ void __iomem **dwqe_addr);
+s32 cqm_db_phy_addr_alloc(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr);
+
+s32 cqm_db_init(void *ex_handle);
+void cqm_db_uninit(void *ex_handle);
+
+s32 cqm_ring_hardware_db(void *ex_handle, u32 service_type, u8 db_count,
+ u64 db);
+
+#endif /* CQM_DB_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_define.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_define.h
new file mode 100644
index 0000000..2c227ae
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_define.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_DEFINE_H
+#define CQM_DEFINE_H
+#ifndef HIUDK_SDK
+#define cqm_init cqm3_init
+#define cqm_uninit cqm3_uninit
+#define cqm_service_register cqm3_service_register
+#define cqm_service_unregister cqm3_service_unregister
+#define cqm_bloomfilter_dec cqm3_bloomfilter_dec
+#define cqm_bloomfilter_inc cqm3_bloomfilter_inc
+#define cqm_cmd_alloc cqm3_cmd_alloc
+#define cqm_cmd_free cqm3_cmd_free
+#define cqm_send_cmd_box cqm3_send_cmd_box
+#define cqm_lb_send_cmd_box cqm3_lb_send_cmd_box
+#define cqm_lb_send_cmd_box_async cqm3_lb_send_cmd_box_async
+#define cqm_db_addr_alloc cqm3_db_addr_alloc
+#define cqm_db_addr_free cqm3_db_addr_free
+#define cqm_ring_hardware_db cqm3_ring_hardware_db
+#define cqm_ring_software_db cqm3_ring_software_db
+#define cqm_object_fc_srq_create cqm3_object_fc_srq_create
+#define cqm_object_share_recv_queue_create cqm3_object_share_recv_queue_create
+#define cqm_object_share_recv_queue_add_container \
+ cqm3_object_share_recv_queue_add_container
+#define cqm_object_srq_add_container_free cqm3_object_srq_add_container_free
+#define cqm_object_recv_queue_create cqm3_object_recv_queue_create
+#define cqm_object_qpc_mpt_create cqm3_object_qpc_mpt_create
+#define cqm_object_nonrdma_queue_create cqm3_object_nonrdma_queue_create
+#define cqm_object_rdma_queue_create cqm3_object_rdma_queue_create
+#define cqm_object_rdma_table_get cqm3_object_rdma_table_get
+#define cqm_object_delete cqm3_object_delete
+#define cqm_object_offset_addr cqm3_object_offset_addr
+#define cqm_object_get cqm3_object_get
+#define cqm_object_put cqm3_object_put
+#define cqm_object_funcid cqm3_object_funcid
+#define cqm_object_resize_alloc_new cqm3_object_resize_alloc_new
+#define cqm_object_resize_free_new cqm3_object_resize_free_new
+#define cqm_object_resize_free_old cqm3_object_resize_free_old
+#define cqm_function_timer_clear cqm3_function_timer_clear
+#define cqm_function_hash_buf_clear cqm3_function_hash_buf_clear
+#define cqm_srq_used_rq_container_delete cqm3_srq_used_rq_container_delete
+#define cqm_timer_base cqm3_timer_base
+#define cqm_get_db_addr cqm3_get_db_addr
+#define cqm_ring_direct_wqe_db cqm3_ring_direct_wqe_db
+#define cqm_fake_vf_num_set cqm3_fake_vf_num_set
+#define cqm_need_secure_mem cqm3_need_secure_mem
+
+#endif
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.c
new file mode 100644
index 0000000..0e8a579
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.c
@@ -0,0 +1,1674 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_hw_cfg.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_bloomfilter.h"
+#include "cqm_db.h"
+#include "cqm_memsec.h"
+#include "cqm_main.h"
+
+#include "vram_common.h"
+
+static unsigned char roce_qpc_rsv_mode = CQM_QPC_ROCE_NORMAL;
+module_param(roce_qpc_rsv_mode, byte, 0644);
+MODULE_PARM_DESC(roce_qpc_rsv_mode,
+ "for roce reserve 4k qpc(qpn) (default=0, 0-rsv:2, 1-rsv:4k, 2-rsv:200k+2)");
+
+static s32 cqm_set_fake_vf_child_timer(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_handle *fake_cqm_handle, bool en)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)cqm_handle->ex_handle;
+ u16 func_global_idx;
+ s32 ret;
+
+ if (fake_cqm_handle->func_capability.timer_enable == 0)
+ return CQM_SUCCESS;
+
+ func_global_idx = fake_cqm_handle->func_attribute.func_global_idx;
+ ret = hinic3_func_tmr_bitmap_set(cqm_handle->ex_handle, func_global_idx, en);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "func_id %u Timer %s timer bitmap failed\n",
+ func_global_idx, en ? "enable" : "disable");
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_unset_fake_vf_timer(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)cqm_handle->ex_handle;
+ s32 child_func_number;
+ u32 i;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++)
+ (void)cqm_set_fake_vf_child_timer(cqm_handle,
+ cqm_handle->fake_cqm_handle[i], false);
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_set_fake_vf_timer(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)cqm_handle->ex_handle;
+ s32 child_func_number;
+ u32 i;
+ s32 ret;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ ret = cqm_set_fake_vf_child_timer(cqm_handle,
+ cqm_handle->fake_cqm_handle[i], true);
+ if (ret != CQM_SUCCESS)
+ goto err;
+ }
+
+ return CQM_SUCCESS;
+err:
+ (void)cqm_unset_fake_vf_timer(cqm_handle);
+ return CQM_FAIL;
+}
+
+static s32 cqm_set_timer_enable(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ int is_in_kexec;
+
+ if (!ex_handle)
+ return CQM_FAIL;
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ cqm_info(handle->dev_hdl, "Skip starting cqm timer during kexec\n");
+ return CQM_SUCCESS;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_PARENT &&
+ cqm_set_fake_vf_timer(cqm_handle) != CQM_SUCCESS)
+ return CQM_FAIL;
+
+ /* The timer bitmap is set directly at the beginning of the CQM.
+ * The ifconfig up/down command is not used to set or clear the bitmap.
+ */
+ if (hinic3_func_tmr_bitmap_set(ex_handle, hinic3_global_func_id(ex_handle),
+ true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "func_id %u Timer start: enable timer bitmap failed\n",
+ hinic3_global_func_id(ex_handle));
+ goto err;
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_unset_fake_vf_timer(cqm_handle);
+ return CQM_FAIL;
+}
+
+static s32 cqm_set_timer_disable(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ if (!ex_handle)
+ return CQM_FAIL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ if (cqm_handle->func_capability.fake_func_type != CQM_FAKE_FUNC_CHILD_CONFLICT &&
+ hinic3_func_tmr_bitmap_set(ex_handle, hinic3_global_func_id(ex_handle),
+ false) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, "func_id %u Timer stop: disable timer bitmap failed\n",
+ hinic3_global_func_id(ex_handle));
+
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_PARENT &&
+ cqm_unset_fake_vf_timer(cqm_handle) != CQM_SUCCESS)
+ return CQM_FAIL;
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_init_all(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ /* Initialize secure memory. */
+ if (cqm_secure_mem_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_secure_mem_init));
+ return CQM_FAIL;
+ }
+
+ /* Initialize memory entries such as BAT, CLA, and bitmap. */
+ if (cqm_mem_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_mem_init));
+ goto err1;
+ }
+
+ /* Event callback initialization */
+ if (cqm_event_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_event_init));
+ goto err2;
+ }
+
+ /* Doorbell initiation */
+ if (cqm_db_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_db_init));
+ goto err3;
+ }
+
+ /* Initialize the bloom filter. */
+ if (cqm_bloomfilter_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bloomfilter_init));
+ goto err4;
+ }
+
+ if (cqm_set_timer_enable(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_set_timer_enable));
+ goto err5;
+ }
+
+ return CQM_SUCCESS;
+err5:
+ cqm_bloomfilter_uninit(ex_handle);
+err4:
+ cqm_db_uninit(ex_handle);
+err3:
+ cqm_event_uninit(ex_handle);
+err2:
+ cqm_mem_uninit(ex_handle);
+err1:
+ cqm_secure_mem_deinit(ex_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_init - Complete CQM initialization.
+ *            If the function is a fake parent, the fake child handles are
+ *            copied (cqm_init->cqm_mem_init->cqm_fake_init).
+ *            If it is a fake child, fake_en is set in the BAT/CLA table
+ *            (done in the fake copy path, not in this function).
+ *            If a fake child conflict is detected, resources are not
+ *            initialized, but the timer must still be enabled.
+ *            A normal function follows the normal initialization flow.
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ cqm_handle = kmalloc(sizeof(*cqm_handle), GFP_KERNEL | __GFP_ZERO);
+ if (!cqm_handle)
+ return CQM_FAIL;
+
+	/* Clear the handle explicitly in case the allocation flags
+	 * did not zero the memory.
+	 */
+	memset(cqm_handle, 0, sizeof(struct tag_cqm_handle));
+
+ cqm_handle->ex_handle = handle;
+ cqm_handle->dev = (struct pci_dev *)(handle->pcidev_hdl);
+ handle->cqm_hdl = (void *)cqm_handle;
+
+ /* Clearing Statistics */
+ memset(&handle->hw_stats.cqm_stats, 0, sizeof(struct cqm_stats));
+
+ /* Reads VF/PF information. */
+ cqm_handle->func_attribute = handle->hwif->attr;
+ cqm_info(handle->dev_hdl, "Func init: function[%u] type %d(0:PF,1:VF,2:PPF)\n",
+ cqm_handle->func_attribute.func_global_idx,
+ cqm_handle->func_attribute.func_type);
+
+ /* Read capability from configuration management module */
+ ret = cqm_capability_init(ex_handle);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_capability_init));
+ goto err;
+ }
+
+	/* For a conflicting fake child function, only the function's timer
+	 * bitmap is enabled and no resources are initialized; initializing
+	 * them would overwrite the fake configuration.
+	 */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD_CONFLICT) {
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+ return CQM_SUCCESS;
+ }
+
+ ret = cqm_init_all(ex_handle);
+ if (ret == CQM_FAIL)
+ goto err;
+
+ return CQM_SUCCESS;
+err:
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_uninit - Deinitializes the CQM module. This function is called once each time
+ * a function is removed.
+ * @ex_handle: device pointer that represents the PF
+ */
+void cqm_uninit(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+
+ cqm_set_timer_disable(ex_handle);
+
+	/* After the TMR timer is stopped, wait one to two milliseconds
+	 * before releasing resources.
+	 */
+ if (cqm_handle->func_attribute.func_type == CQM_PPF) {
+ if (cqm_handle->func_capability.timer_enable ==
+ CQM_TIMER_ENABLE) {
+ cqm_info(handle->dev_hdl, "PPF timer stop\n");
+ ret = hinic3_ppf_tmr_stop(handle);
+ if (ret != CQM_SUCCESS)
+				/* Failing to stop the timer does not
+				 * affect the resource release.
+				 */
+ cqm_info(handle->dev_hdl, "PPF timer stop, ret=%d\n", ret);
+ }
+
+ hinic3_ppf_ht_gpa_deinit(handle);
+
+		usleep_range(0x384, 0x3E8); /* Wait roughly 1 ms (900-1000 us);
+					     * the exact delay is not critical.
+					     */
+ }
+
+ /* Release Bloom Filter Table */
+ cqm_bloomfilter_uninit(ex_handle);
+
+ /* Release hardware doorbell */
+ cqm_db_uninit(ex_handle);
+
+ /* Cancel the callback of the event */
+ cqm_event_uninit(ex_handle);
+
+ /* Release various memory tables and require the service
+ * to release all objects.
+ */
+ cqm_mem_uninit(ex_handle);
+
+ cqm_secure_mem_deinit(ex_handle);
+
+ /* Release cqm_handle */
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+}
+
+static void cqm_test_mode_init(struct tag_cqm_handle *cqm_handle,
+ struct service_cap *service_capability)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ if (service_capability->test_mode == 0)
+ return;
+
+ cqm_info(handle->dev_hdl, "Enter CQM test mode\n");
+
+ func_cap->qpc_number = service_capability->test_qpc_num;
+ func_cap->qpc_reserved =
+ GET_MAX(func_cap->qpc_reserved,
+ service_capability->test_qpc_resvd_num);
+ func_cap->xid_alloc_mode = service_capability->test_xid_alloc_mode;
+ func_cap->gpa_check_enable = service_capability->test_gpa_check_enable;
+ func_cap->pagesize_reorder = service_capability->test_page_size_reorder;
+ func_cap->qpc_alloc_static =
+ (bool)(service_capability->test_qpc_alloc_mode);
+ func_cap->scqc_alloc_static =
+ (bool)(service_capability->test_scqc_alloc_mode);
+ func_cap->flow_table_based_conn_number =
+ service_capability->test_max_conn_num;
+ func_cap->flow_table_based_conn_cache_number =
+ service_capability->test_max_cache_conn_num;
+ func_cap->scqc_number = service_capability->test_scqc_num;
+ func_cap->mpt_number = service_capability->test_mpt_num;
+ func_cap->mpt_reserved = service_capability->test_mpt_recvd_num;
+ func_cap->reorder_number = service_capability->test_reorder_num;
+ /* 256K buckets, 256K*64B = 16MB */
+ func_cap->hash_number = service_capability->test_hash_num;
+}
+
+static void cqm_service_capability_update(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+
+ func_cap->qpc_number = GET_MIN(CQM_MAX_QPC_NUM, func_cap->qpc_number);
+ func_cap->scqc_number = GET_MIN(CQM_MAX_SCQC_NUM,
+ func_cap->scqc_number);
+ func_cap->srqc_number = GET_MIN(CQM_MAX_SRQC_NUM,
+ func_cap->srqc_number);
+ func_cap->childc_number = GET_MIN(CQM_MAX_CHILDC_NUM,
+ func_cap->childc_number);
+}
+
+static void cqm_service_valid_init(struct tag_cqm_handle *cqm_handle,
+ const struct service_cap *service_capability)
+{
+ u16 type = service_capability->chip_svc_type;
+ struct tag_cqm_service *svc = cqm_handle->service;
+
+	svc[CQM_SERVICE_T_NIC].valid = (type & CFG_SERVICE_MASK_NIC) != 0;
+	svc[CQM_SERVICE_T_OVS].valid = (type & CFG_SERVICE_MASK_OVS) != 0;
+	svc[CQM_SERVICE_T_ROCE].valid = (type & CFG_SERVICE_MASK_ROCE) != 0;
+	svc[CQM_SERVICE_T_TOE].valid = (type & CFG_SERVICE_MASK_TOE) != 0;
+	svc[CQM_SERVICE_T_FC].valid = (type & CFG_SERVICE_MASK_FC) != 0;
+	svc[CQM_SERVICE_T_IPSEC].valid = (type & CFG_SERVICE_MASK_IPSEC) != 0;
+	svc[CQM_SERVICE_T_VBS].valid = (type & CFG_SERVICE_MASK_VBS) != 0;
+	svc[CQM_SERVICE_T_VIRTIO].valid = (type & CFG_SERVICE_MASK_VIRTIO) != 0;
+	svc[CQM_SERVICE_T_IOE].valid = false;
+	svc[CQM_SERVICE_T_PPA].valid = (type & CFG_SERVICE_MASK_PPA) != 0;
+}
+
+static void cqm_service_capability_init_nic(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: nic is valid, but nic need not be init by cqm\n");
+}
+
+static void cqm_service_capability_init_ovs(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct ovs_service_cap *ovs_cap = &service_capability->ovs_cap;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: ovs is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: ovs qpc 0x%x\n",
+ ovs_cap->dev_ovs_cap.max_pctxs);
+ func_cap->hash_number += ovs_cap->dev_ovs_cap.max_pctxs;
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ func_cap->qpc_number += ovs_cap->dev_ovs_cap.max_pctxs;
+ func_cap->qpc_basic_size = GET_MAX(ovs_cap->pctx_sz,
+ func_cap->qpc_basic_size);
+ func_cap->qpc_reserved += ovs_cap->dev_ovs_cap.max_pctxs;
+ func_cap->qpc_alloc_static = true;
+ func_cap->pagesize_reorder = CQM_OVS_PAGESIZE_ORDER;
+}
+
+static void cqm_service_capability_init_roce(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct hinic3_board_info *board_info = &handle->board_info;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct rdma_service_cap *rdma_cap = &service_capability->rdma_cap;
+ struct dev_roce_svc_own_cap *roce_own_cap =
+ &rdma_cap->dev_rdma_cap.roce_own_cap;
+
+ cqm_info(handle->dev_hdl, "Cap init: roce is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: roce qpc 0x%x, scqc 0x%x, srqc 0x%x, drc_qp 0x%x\n",
+ roce_own_cap->max_qps, roce_own_cap->max_cqs,
+ roce_own_cap->max_srqs, roce_own_cap->max_drc_qps);
+ cqm_info(handle->dev_hdl, "Cap init: board_type 0x%x, scenes_id:0x%x, qpc_rsv_mode:0x%x, srv_bmp:0x%x\n",
+ board_info->board_type, board_info->scenes_id,
+ roce_qpc_rsv_mode, board_info->service_en_bitmap);
+
+ if (roce_qpc_rsv_mode == CQM_QPC_ROCE_VBS_MODE) {
+ func_cap->qpc_reserved += CQM_QPC_ROCE_RSVD;
+ func_cap->qpc_reserved_back += CQM_QPC_ROCE_VBS_RSVD_BACK;
+ } else if ((service_capability->chip_svc_type & CFG_SERVICE_MASK_ROCEAA) != 0) {
+ func_cap->qpc_reserved += CQM_QPC_ROCEAA_RSVD;
+ func_cap->scq_reserved += CQM_CQ_ROCEAA_RSVD;
+ func_cap->srq_reserved += CQM_SRQ_ROCEAA_RSVD;
+ } else {
+ func_cap->qpc_reserved += CQM_QPC_ROCE_RSVD;
+ }
+ func_cap->qpc_number += roce_own_cap->max_qps;
+ func_cap->qpc_basic_size = GET_MAX(roce_own_cap->qpc_entry_sz,
+ func_cap->qpc_basic_size);
+ if (cqm_handle->func_attribute.func_type == CQM_PF && (IS_MASTER_HOST(handle))) {
+ func_cap->hash_number = roce_own_cap->max_qps;
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ }
+ func_cap->qpc_alloc_static = true;
+ func_cap->scqc_number += roce_own_cap->max_cqs;
+ func_cap->scqc_basic_size = GET_MAX(rdma_cap->cqc_entry_sz,
+ func_cap->scqc_basic_size);
+ func_cap->srqc_number += roce_own_cap->max_srqs;
+ func_cap->srqc_basic_size = GET_MAX(roce_own_cap->srqc_entry_sz,
+ func_cap->srqc_basic_size);
+ func_cap->mpt_number += roce_own_cap->max_mpts;
+ func_cap->mpt_reserved += rdma_cap->reserved_mrws;
+ func_cap->mpt_basic_size = GET_MAX(rdma_cap->mpt_entry_sz,
+ func_cap->mpt_basic_size);
+ func_cap->gid_number = CQM_GID_RDMA_NUM;
+ func_cap->gid_basic_size = CQM_GID_SIZE_32;
+ func_cap->childc_number += CQM_CHILDC_ROCE_NUM;
+ func_cap->childc_basic_size = GET_MAX(CQM_CHILDC_SIZE_256,
+ func_cap->childc_basic_size);
+}
+
+static void cqm_service_capability_init_toe(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_toe_private_capability *toe_own_cap = &cqm_handle->toe_own_capability;
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct rdma_service_cap *rdma_cap = &service_capability->rdma_cap;
+ struct toe_service_cap *toe_cap = &service_capability->toe_cap;
+ struct dev_toe_svc_cap *dev_toe_cap = &toe_cap->dev_toe_cap;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: toe is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: toe qpc 0x%x, scqc 0x%x, srqc 0x%x\n",
+ dev_toe_cap->max_pctxs, dev_toe_cap->max_cqs,
+ dev_toe_cap->max_srqs);
+ func_cap->hash_number += dev_toe_cap->max_pctxs;
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ func_cap->qpc_number += dev_toe_cap->max_pctxs;
+ func_cap->qpc_basic_size = GET_MAX(toe_cap->pctx_sz,
+ func_cap->qpc_basic_size);
+ func_cap->qpc_alloc_static = true;
+ func_cap->scqc_number += dev_toe_cap->max_cqs;
+ func_cap->scqc_basic_size = GET_MAX(toe_cap->scqc_sz,
+ func_cap->scqc_basic_size);
+ func_cap->scqc_alloc_static = true;
+
+ toe_own_cap->toe_srqc_number = dev_toe_cap->max_srqs;
+ toe_own_cap->toe_srqc_start_id = dev_toe_cap->srq_id_start;
+ toe_own_cap->toe_srqc_basic_size = CQM_SRQC_SIZE_64;
+ func_cap->childc_number += dev_toe_cap->max_cctxt;
+ func_cap->childc_basic_size = GET_MAX(CQM_CHILDC_SIZE_256,
+ func_cap->childc_basic_size);
+ func_cap->mpt_number += dev_toe_cap->max_mpts;
+ func_cap->mpt_reserved = 0;
+ func_cap->mpt_basic_size = GET_MAX(rdma_cap->mpt_entry_sz,
+ func_cap->mpt_basic_size);
+}
+
+static void cqm_service_capability_init_ioe(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: ioe is valid\n");
+}
+
+static void cqm_service_capability_init_fc(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct fc_service_cap *fc_cap = &service_capability->fc_cap;
+ struct dev_fc_svc_cap *dev_fc_cap = &fc_cap->dev_fc_cap;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: fc is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: fc qpc 0x%x, scqc 0x%x, srqc 0x%x\n",
+ dev_fc_cap->max_parent_qpc_num, dev_fc_cap->scq_num,
+ dev_fc_cap->srq_num);
+ func_cap->hash_number += dev_fc_cap->max_parent_qpc_num;
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ func_cap->qpc_number += dev_fc_cap->max_parent_qpc_num;
+ func_cap->qpc_basic_size = GET_MAX(fc_cap->parent_qpc_size,
+ func_cap->qpc_basic_size);
+ func_cap->qpc_alloc_static = true;
+ func_cap->scqc_number += dev_fc_cap->scq_num;
+ func_cap->scqc_basic_size = GET_MAX(fc_cap->scqc_size,
+ func_cap->scqc_basic_size);
+ func_cap->srqc_number += dev_fc_cap->srq_num;
+ func_cap->srqc_basic_size = GET_MAX(fc_cap->srqc_size,
+ func_cap->srqc_basic_size);
+ func_cap->lun_number = CQM_LUN_FC_NUM;
+ func_cap->lun_basic_size = CQM_LUN_SIZE_8;
+ func_cap->taskmap_number = CQM_TASKMAP_FC_NUM;
+ func_cap->taskmap_basic_size = PAGE_SIZE;
+ func_cap->childc_number += dev_fc_cap->max_child_qpc_num;
+ func_cap->childc_basic_size = GET_MAX(fc_cap->child_qpc_size,
+ func_cap->childc_basic_size);
+ func_cap->pagesize_reorder = CQM_FC_PAGESIZE_ORDER;
+}
+
+static void cqm_service_capability_init_vbs(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct vbs_service_cap *vbs_cap = &service_capability->vbs_cap;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: vbs is valid\n");
+
+ /* If the entry size is greater than the cache line (256 bytes),
+ * align the entries by cache line.
+ */
+ func_cap->xid2cid_number +=
+ (CQM_XID2CID_VBS_NUM * service_capability->virtio_vq_size) / CQM_CHIP_CACHELINE;
+ func_cap->xid2cid_basic_size = CQM_CHIP_CACHELINE;
+ func_cap->qpc_number += (vbs_cap->vbs_max_volq * 2); // VOLQ group * 2
+ func_cap->qpc_basic_size = GET_MAX(CQM_VBS_QPC_SIZE,
+ func_cap->qpc_basic_size);
+ func_cap->qpc_alloc_static = true;
+}
+
+static void cqm_service_capability_init_ipsec(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct ipsec_service_cap *ipsec_cap = &service_capability->ipsec_cap;
+ struct dev_ipsec_svc_cap *ipsec_srvcap = &ipsec_cap->dev_ipsec_cap;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ func_cap->childc_number += ipsec_srvcap->max_sactxs;
+ func_cap->childc_basic_size = GET_MAX(CQM_CHILDC_SIZE_256,
+ func_cap->childc_basic_size);
+ func_cap->scqc_number += ipsec_srvcap->max_cqs;
+ func_cap->scqc_basic_size = GET_MAX(CQM_SCQC_SIZE_64,
+ func_cap->scqc_basic_size);
+ func_cap->scqc_alloc_static = true;
+ cqm_info(handle->dev_hdl, "Cap init: ipsec is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: ipsec childc_num 0x%x, childc_bsize %d, scqc_num 0x%x, scqc_bsize %d\n",
+ ipsec_srvcap->max_sactxs, func_cap->childc_basic_size,
+ ipsec_srvcap->max_cqs, func_cap->scqc_basic_size);
+}
+
+static void cqm_service_capability_init_virtio(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+
+ cqm_info(handle->dev_hdl, "Cap init: virtio is valid\n");
+ /* If the entry size is greater than the cache line (256 bytes),
+ * align the entries by cache line.
+ */
+ cqm_handle->func_capability.xid2cid_number +=
+ (CQM_XID2CID_VIRTIO_NUM * service_capability->virtio_vq_size) / CQM_CHIP_CACHELINE;
+ cqm_handle->func_capability.xid2cid_basic_size = CQM_CHIP_CACHELINE;
+}
+
+static void cqm_service_capability_init_ppa(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct ppa_service_cap *ppa_cap = &service_capability->ppa_cap;
+
+ cqm_info(handle->dev_hdl, "Cap init: ppa is valid\n");
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ func_cap->qpc_alloc_static = true;
+ func_cap->pagesize_reorder = CQM_PPA_PAGESIZE_ORDER;
+ func_cap->qpc_basic_size = GET_MAX(ppa_cap->pctx_sz,
+ func_cap->qpc_basic_size);
+}
+
+struct cqm_srv_cap_init serv_cap_init_list[] = {
+ {CQM_SERVICE_T_NIC, cqm_service_capability_init_nic},
+ {CQM_SERVICE_T_OVS, cqm_service_capability_init_ovs},
+ {CQM_SERVICE_T_ROCE, cqm_service_capability_init_roce},
+ {CQM_SERVICE_T_TOE, cqm_service_capability_init_toe},
+ {CQM_SERVICE_T_IOE, cqm_service_capability_init_ioe},
+ {CQM_SERVICE_T_FC, cqm_service_capability_init_fc},
+ {CQM_SERVICE_T_VBS, cqm_service_capability_init_vbs},
+ {CQM_SERVICE_T_IPSEC, cqm_service_capability_init_ipsec},
+ {CQM_SERVICE_T_VIRTIO, cqm_service_capability_init_virtio},
+ {CQM_SERVICE_T_PPA, cqm_service_capability_init_ppa},
+};
+
+static void cqm_service_capability_init(struct tag_cqm_handle *cqm_handle,
+ struct service_cap *service_capability)
+{
+ u32 list_size = ARRAY_SIZE(serv_cap_init_list);
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ cqm_handle->service[i].valid = false;
+ cqm_handle->service[i].has_register = false;
+ cqm_handle->service[i].buf_order = 0;
+ }
+
+ cqm_service_valid_init(cqm_handle, service_capability);
+
+ cqm_info(handle->dev_hdl, "Cap init: service type %d\n",
+ service_capability->chip_svc_type);
+
+ for (i = 0; i < list_size; i++) {
+ if (cqm_handle->service[serv_cap_init_list[i].service_type].valid &&
+ serv_cap_init_list[i].serv_cap_proc) {
+ serv_cap_init_list[i].serv_cap_proc(cqm_handle,
+ (void *)service_capability);
+ }
+ }
+}
+
+s32 cqm_get_fake_func_type(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ u32 parent_func, child_func_start, child_func_number, i;
+ u32 idx = cqm_handle->func_attribute.func_global_idx;
+
+ /* Currently, only one set of fake configurations is implemented.
+ * fake_cfg_number = 1
+ */
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ parent_func = func_cap->fake_cfg[i].parent_func;
+ child_func_start = func_cap->fake_cfg[i].child_func_start;
+ child_func_number = func_cap->fake_cfg[i].child_func_number;
+
+ if (idx == parent_func) {
+ return CQM_FAKE_FUNC_PARENT;
+ } else if ((idx >= child_func_start) &&
+ (idx < (child_func_start + child_func_number))) {
+ return CQM_FAKE_FUNC_CHILD_CONFLICT;
+ }
+ }
+
+ return CQM_FAKE_FUNC_NORMAL;
+}
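+
+/* Editor's illustration of the range check above, using hypothetical values
+ * (not taken from firmware): given fake_cfg[0] = { .parent_func = 4,
+ * .child_func_start = 8, .child_func_number = 4 }, func_global_idx 4 maps to
+ * CQM_FAKE_FUNC_PARENT, indexes 8..11 map to CQM_FAKE_FUNC_CHILD_CONFLICT,
+ * and every other index maps to CQM_FAKE_FUNC_NORMAL.
+ */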
+
+s32 cqm_get_child_func_start(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_func_attr *func_attr = &cqm_handle->func_attribute;
+ u32 i;
+
+ /* Currently, only one set of fake configurations is implemented.
+ * fake_cfg_number = 1
+ */
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ if (func_attr->func_global_idx ==
+ func_cap->fake_cfg[i].parent_func)
+ return (s32)(func_cap->fake_cfg[i].child_func_start);
+ }
+
+ return CQM_FAIL;
+}
+
+s32 cqm_get_child_func_number(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_func_attr *func_attr = &cqm_handle->func_attribute;
+ u32 i;
+
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ if (func_attr->func_global_idx ==
+ func_cap->fake_cfg[i].parent_func)
+ return (s32)(func_cap->fake_cfg[i].child_func_number);
+ }
+
+ return CQM_FAIL;
+}
+
+/* Set func_type in fake_cqm_handle to ppf, pf, or vf. */
+static void cqm_set_func_type(struct tag_cqm_handle *cqm_handle)
+{
+ u32 idx = cqm_handle->func_attribute.func_global_idx;
+
+ if (idx == 0)
+ cqm_handle->func_attribute.func_type = CQM_PPF;
+ else if (idx < CQM_MAX_PF_NUM)
+ cqm_handle->func_attribute.func_type = CQM_PF;
+ else
+ cqm_handle->func_attribute.func_type = CQM_VF;
+}
+
+static void cqm_lb_fake_mode_init(struct hinic3_hwdev *handle, struct service_cap *svc_cap)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct tag_cqm_fake_cfg *cfg = func_cap->fake_cfg;
+
+ func_cap->lb_mode = svc_cap->lb_mode;
+
+ /* Initializing the LB Mode */
+ if (func_cap->lb_mode == CQM_LB_MODE_NORMAL)
+ func_cap->smf_pg = 0;
+ else
+ func_cap->smf_pg = svc_cap->smf_pg;
+
+ /* Initializing the FAKE Mode */
+ if (svc_cap->fake_vf_num == 0) {
+ func_cap->fake_cfg_number = 0;
+ func_cap->fake_func_type = CQM_FAKE_FUNC_NORMAL;
+ func_cap->fake_vf_qpc_number = 0;
+ } else {
+ func_cap->fake_cfg_number = 1;
+
+		/* When configuring fake mode, the parent function must not
+		 * fall inside the child function range; otherwise it would be
+		 * initialized twice. The following configuration is used to
+		 * verify the OVS fake configuration on the FPGA.
+		 */
+ cfg[0].parent_func = cqm_handle->func_attribute.port_to_port_idx;
+ cfg[0].child_func_start = svc_cap->fake_vf_start_id;
+ cfg[0].child_func_number = svc_cap->fake_vf_num_cfg;
+
+ func_cap->fake_func_type = (u32)cqm_get_fake_func_type(cqm_handle);
+ func_cap->fake_vf_qpc_number = svc_cap->fake_vf_max_pctx;
+ }
+
+ cqm_info(handle->dev_hdl, "Cap init: lb_mode=%u\n", func_cap->lb_mode);
+ cqm_info(handle->dev_hdl, "Cap init: smf_pg=%u\n", func_cap->smf_pg);
+ cqm_info(handle->dev_hdl, "Cap init: fake_func_type=%u\n", func_cap->fake_func_type);
+ cqm_info(handle->dev_hdl, "Cap init: fake_cfg_number=%u\n", func_cap->fake_cfg_number);
+}
+
+static int cqm_capability_init_bloomfilter(struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+
+ func_cap->bloomfilter_enable = service_capability->bloomfilter_en;
+ cqm_info(handle->dev_hdl, "Cap init: bloomfilter_enable %u (1: enable; 0: disable)\n",
+ func_cap->bloomfilter_enable);
+
+ if (func_cap->bloomfilter_enable != 0) {
+ func_cap->bloomfilter_length = service_capability->bfilter_len;
+ func_cap->bloomfilter_addr = service_capability->bfilter_start_addr;
+ if (func_cap->bloomfilter_length != 0 &&
+ !cqm_check_align(func_cap->bloomfilter_length)) {
+ cqm_err(handle->dev_hdl, "Cap init: bloomfilter_length %u is not the power of 2\n",
+ func_cap->bloomfilter_length);
+
+ return CQM_FAIL;
+ }
+ }
+
+ cqm_info(handle->dev_hdl, "Cap init: bloomfilter_length 0x%x, bloomfilter_addr 0x%x\n",
+ func_cap->bloomfilter_length, func_cap->bloomfilter_addr);
+
+ return 0;
+}
+
+static void cqm_capability_init_part_cap(struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+
+ func_cap->flow_table_based_conn_number = service_capability->max_connect_num;
+ func_cap->flow_table_based_conn_cache_number = service_capability->max_stick2cache_num;
+ cqm_info(handle->dev_hdl, "Cap init: cfg max_conn_num 0x%x, max_cache_conn_num 0x%x\n",
+ func_cap->flow_table_based_conn_number,
+ func_cap->flow_table_based_conn_cache_number);
+
+ func_cap->qpc_reserved = 0;
+ func_cap->qpc_reserved_back = 0;
+ func_cap->mpt_reserved = 0;
+ func_cap->scq_reserved = 0;
+ func_cap->srq_reserved = 0;
+ func_cap->qpc_alloc_static = false;
+ func_cap->scqc_alloc_static = false;
+
+ func_cap->l3i_number = 0;
+ func_cap->l3i_basic_size = CQM_L3I_SIZE_8;
+
+ func_cap->xid_alloc_mode = true; /* xid alloc do not reuse */
+ func_cap->gpa_check_enable = true;
+}
+
+static int cqm_capability_init_timer(struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+ struct hinic3_func_attr *func_attr = &cqm_handle->func_attribute;
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ u32 total_timer_num = 0;
+ int err;
+
+ /* Initializes the PPF capabilities: include timer, pf, vf. */
+ if (func_attr->func_type == CQM_PPF && service_capability->timer_en) {
+ func_cap->pf_num = service_capability->pf_num;
+ func_cap->pf_id_start = service_capability->pf_id_start;
+ func_cap->vf_num = service_capability->vf_num;
+ func_cap->vf_id_start = service_capability->vf_id_start;
+ cqm_info(handle->dev_hdl, "Cap init: total function num 0x%x\n",
+ service_capability->host_total_function);
+ cqm_info(handle->dev_hdl,
+ "Cap init: pf_num 0x%x, pf_start 0x%x, vf_num 0x%x, vf_start 0x%x\n",
+ func_cap->pf_num, func_cap->pf_id_start,
+ func_cap->vf_num, func_cap->vf_id_start);
+
+ err = hinic3_get_ppf_timer_cfg(handle);
+ if (err != 0)
+ return err;
+
+ func_cap->timer_pf_num = service_capability->timer_pf_num;
+ func_cap->timer_pf_id_start = service_capability->timer_pf_id_start;
+ func_cap->timer_vf_num = service_capability->timer_vf_num;
+ func_cap->timer_vf_id_start = service_capability->timer_vf_id_start;
+ cqm_info(handle->dev_hdl,
+ "host timer init: timer_pf_num 0x%x, timer_pf_id_start 0x%x, timer_vf_num 0x%x, timer_vf_id_start 0x%x\n",
+ func_cap->timer_pf_num, func_cap->timer_pf_id_start,
+ func_cap->timer_vf_num, func_cap->timer_vf_id_start);
+
+ total_timer_num = func_cap->timer_pf_num + func_cap->timer_vf_num;
+ if (IS_SLAVE_HOST(handle)) {
+ total_timer_num *= CQM_TIMER_NUM_MULTI;
+ cqm_info(handle->dev_hdl,
+ "host timer init: need double tw resources, total_timer_num=0x%x\n",
+ total_timer_num);
+ }
+ }
+
+ func_cap->timer_enable = service_capability->timer_en;
+ cqm_info(handle->dev_hdl, "Cap init: timer_enable %u (1: enable; 0: disable)\n",
+ func_cap->timer_enable);
+
+ func_cap->timer_number = CQM_TIMER_ALIGN_SCALE_NUM * total_timer_num;
+ func_cap->timer_basic_size = CQM_TIMER_SIZE_32;
+
+ return 0;
+}
+
+static void cqm_capability_init_cap_print(struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+
+ func_cap->ft_enable = service_capability->sf_svc_attr.ft_en;
+ func_cap->rdma_enable = service_capability->sf_svc_attr.rdma_en;
+
+ cqm_info(handle->dev_hdl, "Cap init: pagesize_reorder %u\n", func_cap->pagesize_reorder);
+ cqm_info(handle->dev_hdl, "Cap init: xid_alloc_mode %d, gpa_check_enable %d\n",
+ func_cap->xid_alloc_mode, func_cap->gpa_check_enable);
+ cqm_info(handle->dev_hdl, "Cap init: qpc_alloc_mode %d, scqc_alloc_mode %d\n",
+ func_cap->qpc_alloc_static, func_cap->scqc_alloc_static);
+ cqm_info(handle->dev_hdl, "Cap init: hash_number 0x%x\n", func_cap->hash_number);
+ cqm_info(handle->dev_hdl, "Cap init: qpc_num 0x%x, qpc_rsvd 0x%x, qpc_basic_size 0x%x\n",
+ func_cap->qpc_number, func_cap->qpc_reserved, func_cap->qpc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: scqc_num 0x%x, scqc_rsvd 0x%x, scqc_basic 0x%x\n",
+ func_cap->scqc_number, func_cap->scq_reserved, func_cap->scqc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: srqc_num 0x%x, srqc_rsvd 0x%x, srqc_basic 0x%x\n",
+ func_cap->srqc_number, func_cap->srq_reserved, func_cap->srqc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: mpt_number 0x%x, mpt_reserved 0x%x\n",
+ func_cap->mpt_number, func_cap->mpt_reserved);
+ cqm_info(handle->dev_hdl, "Cap init: gid_number 0x%x, lun_number 0x%x\n",
+ func_cap->gid_number, func_cap->lun_number);
+ cqm_info(handle->dev_hdl, "Cap init: taskmap_number 0x%x, l3i_number 0x%x\n",
+ func_cap->taskmap_number, func_cap->l3i_number);
+ cqm_info(handle->dev_hdl, "Cap init: timer_number 0x%x, childc_number 0x%x\n",
+ func_cap->timer_number, func_cap->childc_number);
+ cqm_info(handle->dev_hdl, "Cap init: childc_basic_size 0x%x\n",
+ func_cap->childc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: xid2cid_number 0x%x, reorder_number 0x%x\n",
+ func_cap->xid2cid_number, func_cap->reorder_number);
+ cqm_info(handle->dev_hdl, "Cap init: ft_enable %d, rdma_enable %d\n",
+ func_cap->ft_enable, func_cap->rdma_enable);
+}
+
+/**
+ * cqm_capability_init - Initializes the function and service capabilities of the CQM.
+ * Information needs to be read from the configuration management module.
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_capability_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+ struct hinic3_func_attr *func_attr = &cqm_handle->func_attribute;
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ int err = 0;
+
+ err = cqm_capability_init_timer(handle);
+ if (err != 0)
+ goto out;
+
+ err = cqm_capability_init_bloomfilter(handle);
+ if (err != 0)
+ goto out;
+
+ cqm_capability_init_part_cap(handle);
+
+ cqm_lb_fake_mode_init(handle, service_capability);
+
+ cqm_service_capability_init(cqm_handle, service_capability);
+
+ cqm_test_mode_init(cqm_handle, service_capability);
+
+ cqm_service_capability_update(cqm_handle);
+
+ cqm_capability_init_cap_print(handle);
+
+ return CQM_SUCCESS;
+
+out:
+ if (func_attr->func_type == CQM_PPF)
+ func_cap->timer_enable = 0;
+
+ return err;
+}
+
+static void cqm_fake_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type !=
+ CQM_FAKE_FUNC_PARENT)
+ return;
+
+ for (i = 0; i < CQM_FAKE_FUNC_MAX; i++) {
+ kfree(cqm_handle->fake_cqm_handle[i]);
+ cqm_handle->fake_cqm_handle[i] = NULL;
+ }
+}
+
+static void set_fake_cqm_attr(struct hinic3_hwdev *handle, struct tag_cqm_handle *fake_cqm_handle,
+ s32 child_func_start, u32 i)
+{
+ struct tag_cqm_func_capability *func_cap = NULL;
+ struct hinic3_func_attr *func_attr = NULL;
+ struct service_cap *cap = &handle->cfg_mgmt->svc_cap;
+
+ func_attr = &fake_cqm_handle->func_attribute;
+ func_cap = &fake_cqm_handle->func_capability;
+ func_attr->func_global_idx = (u16)(child_func_start + i);
+ cqm_set_func_type(fake_cqm_handle);
+ func_cap->fake_func_type = CQM_FAKE_FUNC_CHILD;
+ cqm_info(handle->dev_hdl, "Fake func init: function[%u] type %d(0:PF,1:VF,2:PPF)\n",
+ func_attr->func_global_idx, func_attr->func_type);
+
+ func_cap->qpc_number = cap->fake_vf_max_pctx;
+ func_cap->qpc_number = GET_MIN(CQM_MAX_QPC_NUM, func_cap->qpc_number);
+ func_cap->hash_number = cap->fake_vf_max_pctx;
+ func_cap->qpc_reserved = cap->fake_vf_max_pctx;
+
+ if (cap->fake_vf_bfilter_len != 0) {
+ func_cap->bloomfilter_enable = true;
+ func_cap->bloomfilter_addr = cap->fake_vf_bfilter_start_addr +
+ cap->fake_vf_bfilter_len * i;
+ func_cap->bloomfilter_length = cap->fake_vf_bfilter_len;
+ }
+}
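+
+/* Editor's illustration with hypothetical values: if fake_vf_bfilter_start_addr
+ * is 0x1000 and fake_vf_bfilter_len is 0x100, child 0 gets bloomfilter_addr
+ * 0x1000, child 1 gets 0x1100, child 2 gets 0x1200, and so on, each with
+ * bloomfilter_length 0x100.
+ */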
+
+/**
+ * cqm_fake_init - When the fake VF mode is supported, the CQM handles of the fake VFs
+ * need to be copied.
+ * @cqm_handle: Parent CQM handle of the current PF
+ */
+static s32 cqm_fake_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_handle *fake_cqm_handle = NULL;
+ struct tag_cqm_func_capability *func_cap = NULL;
+ s32 child_func_start, child_func_number;
+ u32 i;
+
+ func_cap = &cqm_handle->func_capability;
+ if (func_cap->fake_func_type != CQM_FAKE_FUNC_PARENT)
+ return CQM_SUCCESS;
+
+ child_func_start = cqm_get_child_func_start(cqm_handle);
+ if (child_func_start == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_start));
+ return CQM_FAIL;
+ }
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = kmalloc(sizeof(*fake_cqm_handle), GFP_KERNEL | __GFP_ZERO);
+ if (!fake_cqm_handle) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(fake_cqm_handle));
+ goto err;
+ }
+
+		/* Copy the attributes of the parent CQM handle to the child
+		 * CQM handle, then adjust the per-function values.
+		 */
+ memcpy(fake_cqm_handle, cqm_handle, sizeof(struct tag_cqm_handle));
+ set_fake_cqm_attr(handle, fake_cqm_handle, child_func_start, i);
+
+ fake_cqm_handle->parent_cqm_handle = cqm_handle;
+ cqm_handle->fake_cqm_handle[i] = fake_cqm_handle;
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_fake_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+static void cqm_fake_mem_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type !=
+ CQM_FAKE_FUNC_PARENT)
+ return;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+
+ cqm_object_table_uninit(fake_cqm_handle);
+ cqm_bitmap_uninit(fake_cqm_handle);
+ cqm_cla_uninit(fake_cqm_handle, CQM_BAT_ENTRY_MAX);
+ cqm_bat_uninit(fake_cqm_handle);
+ }
+}
+
+/**
+ * cqm_fake_mem_init - Initialize resources of the extended fake function
+ * @cqm_handle: Parent CQM handle of the current PF
+ */
+static s32 cqm_fake_mem_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type !=
+ CQM_FAKE_FUNC_PARENT)
+ return CQM_SUCCESS;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+ snprintf(fake_cqm_handle->name, VRAM_NAME_APPLY_LEN,
+ "%s%s%02u", cqm_handle->name, VRAM_CQM_FAKE_MEM_BASE, i);
+
+ if (cqm_bat_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bat_init));
+ goto err;
+ }
+
+ if (cqm_cla_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_init));
+ goto err;
+ }
+
+ if (cqm_bitmap_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_init));
+ goto err;
+ }
+
+ if (cqm_object_table_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_init));
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_fake_mem_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_mem_init - Initialize CQM memory, including tables at different levels.
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_mem_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ snprintf(cqm_handle->name, VRAM_NAME_APPLY_LEN,
+ "%s%02u", VRAM_CQM_GLB_FUNC_BASE, hinic3_global_func_id(handle));
+
+ if (cqm_fake_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fake_init));
+ return CQM_FAIL;
+ }
+
+ if (cqm_fake_mem_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fake_mem_init));
+ goto err1;
+ }
+
+ if (cqm_bat_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_init));
+ goto err2;
+ }
+
+ if (cqm_cla_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init));
+ goto err3;
+ }
+
+ if (cqm_bitmap_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_init));
+ goto err4;
+ }
+
+ if (cqm_object_table_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_init));
+ goto err5;
+ }
+
+ return CQM_SUCCESS;
+
+err5:
+ cqm_bitmap_uninit(cqm_handle);
+err4:
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+err3:
+ cqm_bat_uninit(cqm_handle);
+err2:
+ cqm_fake_mem_uninit(cqm_handle);
+err1:
+ cqm_fake_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_mem_uninit - Deinitialize CQM memory, including tables at different levels
+ * @ex_handle: device pointer that represents the PF
+ */
+void cqm_mem_uninit(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ cqm_object_table_uninit(cqm_handle);
+ cqm_bitmap_uninit(cqm_handle);
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+ cqm_bat_uninit(cqm_handle);
+ cqm_fake_mem_uninit(cqm_handle);
+ cqm_fake_uninit(cqm_handle);
+}
+
+/**
+ * cqm_event_init - Initialize CQM event callback
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_event_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ /* Registers the CEQ and AEQ callback functions. */
+ if (hinic3_ceq_register_cb(ex_handle, ex_handle, HINIC3_NON_L2NIC_SCQ,
+ cqm_scq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register scq callback\n");
+ return CQM_FAIL;
+ }
+
+ if (hinic3_ceq_register_cb(ex_handle, ex_handle, HINIC3_NON_L2NIC_ECQ,
+ cqm_ecq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register ecq callback\n");
+ goto err1;
+ }
+
+ if (hinic3_ceq_register_cb(ex_handle, ex_handle, HINIC3_NON_L2NIC_NO_CQ_EQ,
+ cqm_nocq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register nocq callback\n");
+ goto err2;
+ }
+
+ if (hinic3_aeq_register_swe_cb(ex_handle, ex_handle, HINIC3_STATEFUL_EVENT,
+ cqm_aeq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register aeq callback\n");
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_NO_CQ_EQ);
+err2:
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_ECQ);
+err1:
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_SCQ);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_event_uninit - Deinitialize CQM event callback
+ * @ex_handle: device pointer that represents the PF
+ */
+void cqm_event_uninit(void *ex_handle)
+{
+ hinic3_aeq_unregister_swe_cb(ex_handle, HINIC3_STATEFUL_EVENT);
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_NO_CQ_EQ);
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_ECQ);
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_SCQ);
+}
+
+/**
+ * cqm_scq_callback - CQM module callback processing for the ceq, which processes NON_L2NIC_SCQ
+ * @ex_handle: device pointer that represents the PF
+ * @ceqe_data: CEQE data
+ */
+void cqm_scq_callback(void *ex_handle, u32 ceqe_data)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_service_register_template *service_template = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct tag_cqm_queue *cqm_queue = NULL;
+ struct tag_cqm_object *obj = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: scq_callback_ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_scq_callback_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: scq_callback_cqm_handle is null\n", __func__);
+ return;
+ }
+
+ cqm_dbg("Event: %s, ceqe_data=0x%x\n", __func__, ceqe_data);
+ obj = cqm_object_get(ex_handle, CQM_OBJECT_NONRDMA_SCQ,
+ CQM_CQN_FROM_CEQE(ceqe_data), true);
+ if (unlikely(!obj)) {
+ pr_err("[CQM]%s: scq_callback_obj is null\n", __func__);
+ return;
+ }
+
+ if (unlikely(obj->service_type >= CQM_SERVICE_T_MAX)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(obj->service_type));
+ cqm_object_put(obj);
+ return;
+ }
+
+ service = &cqm_handle->service[obj->service_type];
+ service_template = &service->service_template;
+ if (service_template->shared_cq_ceq_callback) {
+ cqm_queue = (struct tag_cqm_queue *)obj;
+ service_template->shared_cq_ceq_callback(service_template->service_handle,
+ CQM_CQN_FROM_CEQE(ceqe_data),
+ cqm_queue->priv);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_PTR_NULL(shared_cq_ceq_callback));
+ }
+
+ cqm_object_put(obj);
+}
+
+/**
+ * cqm_ecq_callback - CQM module callback processing for the ceq, which processes NON_L2NIC_ECQ.
+ * @ex_handle: device pointer that represents the PF
+ * @ceqe_data: CEQE data
+ */
+void cqm_ecq_callback(void *ex_handle, u32 ceqe_data)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_service_register_template *service_template = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct tag_cqm_qpc_mpt *qpc = NULL;
+ struct tag_cqm_object *obj = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ecq_callback_ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_ecq_callback_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: ecq_callback_cqm_handle is null\n", __func__);
+ return;
+ }
+
+ obj = cqm_object_get(ex_handle, CQM_OBJECT_SERVICE_CTX,
+ CQM_XID_FROM_CEQE(ceqe_data), true);
+ if (unlikely(!obj)) {
+ pr_err("[CQM]%s: ecq_callback_obj is null\n", __func__);
+ return;
+ }
+
+ if (unlikely(obj->service_type >= CQM_SERVICE_T_MAX)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(obj->service_type));
+ cqm_object_put(obj);
+ return;
+ }
+
+ service = &cqm_handle->service[obj->service_type];
+ service_template = &service->service_template;
+ if (service_template->embedded_cq_ceq_callback) {
+ qpc = (struct tag_cqm_qpc_mpt *)obj;
+ service_template->embedded_cq_ceq_callback(service_template->service_handle,
+ CQM_XID_FROM_CEQE(ceqe_data),
+ qpc->priv);
+ } else {
+ cqm_err(handle->dev_hdl,
+ CQM_PTR_NULL(embedded_cq_ceq_callback));
+ }
+
+ cqm_object_put(obj);
+}
+
+/**
+ * cqm_nocq_callback - CQM module callback processing for the ceq,
+ * which processes NON_L2NIC_NO_CQ_EQ
+ * @ex_handle: device pointer that represents the PF
+ * @ceqe_data: CEQE data
+ */
+void cqm_nocq_callback(void *ex_handle, u32 ceqe_data)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_service_register_template *service_template = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct tag_cqm_qpc_mpt *qpc = NULL;
+ struct tag_cqm_object *obj = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: nocq_callback_ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nocq_callback_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: nocq_callback_cqm_handle is null\n", __func__);
+ return;
+ }
+
+ obj = cqm_object_get(ex_handle, CQM_OBJECT_SERVICE_CTX,
+ CQM_XID_FROM_CEQE(ceqe_data), true);
+ if (unlikely(!obj)) {
+ pr_err("[CQM]%s: nocq_callback_obj is null\n", __func__);
+ return;
+ }
+
+ if (unlikely(obj->service_type >= CQM_SERVICE_T_MAX)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(obj->service_type));
+ cqm_object_put(obj);
+ return;
+ }
+
+ service = &cqm_handle->service[obj->service_type];
+ service_template = &service->service_template;
+ if (service_template->no_cq_ceq_callback) {
+ qpc = (struct tag_cqm_qpc_mpt *)obj;
+ service_template->no_cq_ceq_callback(service_template->service_handle,
+ CQM_XID_FROM_CEQE(ceqe_data),
+ CQM_QID_FROM_CEQE(ceqe_data),
+ qpc->priv);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_PTR_NULL(no_cq_ceq_callback));
+ }
+
+ cqm_object_put(obj);
+}
+
+static u32 cqm_aeq_event2type(u8 event)
+{
+ u32 service_type;
+
+ /* Distributes events to different service modules
+ * based on the event type.
+ */
+ if (event < CQM_AEQ_BASE_T_ROCE)
+ service_type = CQM_SERVICE_T_NIC;
+ else if (event < CQM_AEQ_BASE_T_FC)
+ service_type = CQM_SERVICE_T_ROCE;
+ else if (event < CQM_AEQ_BASE_T_IOE)
+ service_type = CQM_SERVICE_T_FC;
+ else if (event < CQM_AEQ_BASE_T_TOE)
+ service_type = CQM_SERVICE_T_IOE;
+ else if (event < CQM_AEQ_BASE_T_VBS)
+ service_type = CQM_SERVICE_T_TOE;
+ else if (event < CQM_AEQ_BASE_T_IPSEC)
+ service_type = CQM_SERVICE_T_VBS;
+ else if (event < CQM_AEQ_BASE_T_MAX)
+ service_type = CQM_SERVICE_T_IPSEC;
+ else
+ service_type = CQM_SERVICE_T_MAX;
+
+ return service_type;
+}
+
+/**
+ * cqm_aeq_callback - CQM module callback processing for the aeq
+ * @ex_handle: device pointer that represents the PF
+ * @event: AEQ event
+ * @data: callback private data
+ */
+u8 cqm_aeq_callback(void *ex_handle, u8 event, u8 *data)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_service_register_template *service_template = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ u8 event_level = FAULT_LEVEL_MAX;
+ u32 service_type;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: aeq_callback_ex_handle is null\n", __func__);
+ return event_level;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_aeq_callback_cnt[event]);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: aeq_callback_cqm_handle is null\n", __func__);
+ return event_level;
+ }
+
+ /* Distributes events to different service modules
+ * based on the event type.
+ */
+ service_type = cqm_aeq_event2type(event);
+ if (service_type == CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(event));
+ return event_level;
+ }
+
+ service = &cqm_handle->service[service_type];
+ service_template = &service->service_template;
+
+ if (!service_template->aeq_level_callback)
+ cqm_err(handle->dev_hdl,
+ "Event: service_type %u aeq_level_callback unregistered, event %u\n",
+ service_type, event);
+ else
+ event_level =
+ service_template->aeq_level_callback(service_template->service_handle,
+ event, data);
+
+ if (!service_template->aeq_callback)
+ cqm_err(handle->dev_hdl, "Event: service_type %u aeq_callback unregistered\n",
+ service_type);
+ else
+ service_template->aeq_callback(service_template->service_handle,
+ event, data);
+
+ return event_level;
+}
+
+/**
+ * cqm_service_register - Callback template for the service driver to register with the CQM
+ * @ex_handle: device pointer that represents the PF
+ * @service_template: CQM service template
+ */
+s32 cqm_service_register(void *ex_handle, struct tag_service_register_template *service_template)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!service_template)) {
+ pr_err("[CQM]%s: service_template is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ if (service_template->service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl,
+ CQM_WRONG_VALUE(service_template->service_type));
+ return CQM_FAIL;
+ }
+ service = &cqm_handle->service[service_template->service_type];
+ if (!service->valid) {
+ cqm_err(handle->dev_hdl, "Service register: service_type %u is invalid\n",
+ service_template->service_type);
+ return CQM_FAIL;
+ }
+
+ if (service->has_register) {
+ cqm_err(handle->dev_hdl, "Service register: service_type %u has registered\n",
+ service_template->service_type);
+ return CQM_FAIL;
+ }
+
+ service->has_register = true;
+ memcpy((void *)(&service->service_template), (void *)service_template,
+ sizeof(struct tag_service_register_template));
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_service_register);
+
+/**
+ * cqm_service_unregister - The service driver deregisters the callback function from the CQM
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ */
+void cqm_service_unregister(void *ex_handle, u32 service_type)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return;
+ }
+
+ service = &cqm_handle->service[service_type];
+ if (!service->valid)
+		cqm_err(handle->dev_hdl, "Service unregister: service_type %u is disabled\n",
+ service_type);
+
+ service->has_register = false;
+ memset(&service->service_template, 0, sizeof(struct tag_service_register_template));
+}
+EXPORT_SYMBOL(cqm_service_unregister);
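+
+/* Editor's note: a minimal usage sketch of the registration API above. The
+ * service handle, callback names and error handling are hypothetical; only
+ * template fields referenced elsewhere in this file are used.
+ *
+ *	static struct tag_service_register_template tmpl = {
+ *		.service_type		= CQM_SERVICE_T_TOE,
+ *		.service_handle		= my_toe_dev,
+ *		.shared_cq_ceq_callback	= my_toe_scq_handler,
+ *		.aeq_callback		= my_toe_aeq_handler,
+ *	};
+ *
+ *	if (cqm_service_register(hwdev, &tmpl) != CQM_SUCCESS)
+ *		return -EINVAL;
+ *	...
+ *	cqm_service_unregister(hwdev, CQM_SERVICE_T_TOE);
+ */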
+
+s32 cqm_fake_vf_num_set(void *ex_handle, u16 fake_vf_num_cfg)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct service_cap *svc_cap = NULL;
+
+ if (!ex_handle)
+ return CQM_FAIL;
+
+ svc_cap = &handle->cfg_mgmt->svc_cap;
+
+ if (fake_vf_num_cfg > svc_cap->fake_vf_num) {
+		cqm_err(handle->dev_hdl, "fake_vf_num_cfg is invalid, fw fake_vf_num is %u\n",
+ svc_cap->fake_vf_num);
+ return CQM_FAIL;
+ }
+
+ /* fake_vf_num_cfg is valid when func type is CQM_FAKE_FUNC_PARENT */
+ svc_cap->fake_vf_num_cfg = fake_vf_num_cfg;
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_fake_vf_num_set);
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.h
new file mode 100644
index 0000000..8d1e481
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.h
@@ -0,0 +1,381 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_MAIN_H
+#define CQM_MAIN_H
+
+#include <linux/pci.h>
+
+#include "hinic3_crm.h"
+#include "cqm_bloomfilter.h"
+#include "hinic3_hwif.h"
+#include "cqm_bat_cla.h"
+
+#define GET_MAX max
+#define GET_MIN min
+#define CQM_DW_SHIFT 2
+#define CQM_QW_SHIFT 3
+#define CQM_BYTE_BIT_SHIFT 3
+#define CQM_NUM_BIT_BYTE 8
+
+#define CHIPIF_SUCCESS 0
+#define CHIPIF_FAIL (-1)
+
+#define CQM_TIMER_ENABLE 1
+#define CQM_TIMER_DISABLE 0
+
+#define CQM_TIMER_NUM_MULTI 2
+
+/* The value must be the same as that of hinic3_service_type in hinic3_crm.h. */
+#define CQM_SERVICE_T_NIC SERVICE_T_NIC
+#define CQM_SERVICE_T_OVS SERVICE_T_OVS
+#define CQM_SERVICE_T_ROCE SERVICE_T_ROCE
+#define CQM_SERVICE_T_TOE SERVICE_T_TOE
+#define CQM_SERVICE_T_IOE SERVICE_T_IOE
+#define CQM_SERVICE_T_FC SERVICE_T_FC
+#define CQM_SERVICE_T_VBS SERVICE_T_VBS
+#define CQM_SERVICE_T_IPSEC SERVICE_T_IPSEC
+#define CQM_SERVICE_T_VIRTIO SERVICE_T_VIRTIO
+#define CQM_SERVICE_T_PPA SERVICE_T_PPA
+#define CQM_SERVICE_T_MAX SERVICE_T_MAX
+
+struct tag_cqm_service {
+ bool valid; /* Whether to enable this service on the function. */
+ bool has_register; /* Registered or Not */
+ u64 hardware_db_paddr;
+ void __iomem *hardware_db_vaddr;
+ u64 dwqe_paddr;
+ void __iomem *dwqe_vaddr;
+ u32 buf_order; /* The size of each buf node is 2^buf_order pages. */
+ struct tag_service_register_template service_template;
+};
+
+struct tag_cqm_fake_cfg {
+ u32 parent_func; /* The parent func_id of the fake vfs. */
+ u32 child_func_start; /* The start func_id of the child fake vfs. */
+ u32 child_func_number; /* The number of the child fake vfs. */
+};
+
+#define CQM_MAX_FACKVF_GROUP 4
+
+struct tag_cqm_func_capability {
+ /* BAT_PTR table(SMLC) */
+ bool ft_enable; /* BAT for flow table enable: support toe/ioe/fc service
+ */
+ bool rdma_enable; /* BAT for rdma enable: support RoCE */
+ /* VAT table(SMIR) */
+ bool ft_pf_enable; /* Same as ft_enable. BAT entry for toe/ioe/fc on pf
+ */
+ bool rdma_pf_enable; /* Same as rdma_enable. BAT entry for rdma on pf */
+
+	/* Whether QPC/SCQC memory for each service is allocated statically
+	 * or dynamically when the context is requested.
+	 */
+ bool qpc_alloc_static;
+ bool scqc_alloc_static;
+
+ u8 timer_enable; /* Whether the timer function is enabled */
+	u8 bloomfilter_enable; /* Whether the bloomfilter function is enabled
+				*/
+	u32 flow_table_based_conn_number; /* Maximum number of connections for
+					   * toe/ioe/fc, which cannot exceed
+					   * qpc_number
+					   */
+ u32 flow_table_based_conn_cache_number; /* Maximum number of sticky
+ * caches
+ */
+ u32 bloomfilter_length; /* Size of the bloomfilter table, 64-byte
+ * aligned
+ */
+ u32 bloomfilter_addr; /* Start position of the bloomfilter table in the
+ * SMF main cache.
+ */
+ u32 qpc_reserved; /* Reserved bit in bitmap */
+ u32 qpc_reserved_back; /* Reserved back bit in bitmap */
+ u32 mpt_reserved; /* The ROCE/IWARP MPT also has a reserved bit. */
+
+ /* All basic_size must be 2^n-aligned. */
+	u32 hash_number; /* Number of hash buckets. The BAT table size is
+			  * aligned to 64 buckets; at least 64 buckets are
+			  * required.
+			  */
+	u32 hash_basic_size; /* The basic hash bucket size is 64B, holding
+			      * 5 valid entries and one next entry.
+			      */
+ u32 qpc_number;
+ u32 fake_vf_qpc_number;
+ u32 qpc_basic_size;
+
+	/* Number of PFs/VFs on the current host, used only for timer resources */
+ u32 pf_num;
+ u32 pf_id_start;
+ u32 vf_num;
+ u32 vf_id_start;
+
+ u8 timer_pf_num;
+ u8 timer_pf_id_start;
+ u16 timer_vf_num;
+ u16 timer_vf_id_start;
+
+ u32 lb_mode;
+	/* Only the lower 4 bits are valid, indicating which SMFs are enabled.
+ * For example, 0101B indicates that SMF0 and SMF2 are enabled.
+ */
+ u32 smf_pg;
+
+ u32 fake_mode;
+ u32 fake_func_type; /* Whether the current function belongs to the fake
+ * group (parent or child)
+ */
+ u32 fake_cfg_number; /* Number of current configuration groups */
+ struct tag_cqm_fake_cfg fake_cfg[CQM_MAX_FACKVF_GROUP];
+
+	/* Note: for cqm special test */
+ u32 pagesize_reorder;
+ bool xid_alloc_mode;
+ bool gpa_check_enable;
+ u32 scq_reserved;
+ u32 srq_reserved;
+
+ u32 mpt_number;
+ u32 mpt_basic_size;
+ u32 scqc_number;
+ u32 scqc_basic_size;
+ u32 srqc_number;
+ u32 srqc_basic_size;
+
+ u32 gid_number;
+ u32 gid_basic_size;
+ u32 lun_number;
+ u32 lun_basic_size;
+ u32 taskmap_number;
+ u32 taskmap_basic_size;
+ u32 l3i_number;
+ u32 l3i_basic_size;
+ u32 childc_number;
+ u32 childc_basic_size;
+ u32 child_qpc_id_start; /* FC service Child CTX is global addressing. */
+ u32 childc_number_all_function; /* The chip supports a maximum of 8096
+ * child CTXs.
+ */
+ u32 timer_number;
+ u32 timer_basic_size;
+ u32 xid2cid_number;
+ u32 xid2cid_basic_size;
+ u32 reorder_number;
+ u32 reorder_basic_size;
+};
+
+#define CQM_PF TYPE_PF
+#define CQM_VF TYPE_VF
+#define CQM_PPF TYPE_PPF
+#define CQM_UNKNOWN TYPE_UNKNOWN
+#define CQM_MAX_PF_NUM 32
+
+#define CQM_LB_MODE_NORMAL 0xff
+#define CQM_LB_MODE_0 0
+#define CQM_LB_MODE_1 1
+#define CQM_LB_MODE_2 2
+
+#define CQM_LB_SMF_MAX 4
+
+#define CQM_FPGA_MODE 0
+#define CQM_EMU_MODE 1
+
+#define CQM_FAKE_FUNC_NORMAL 0
+#define CQM_FAKE_FUNC_PARENT 1
+#define CQM_FAKE_FUNC_CHILD 2
+#define CQM_FAKE_FUNC_CHILD_CONFLICT 3 /* The detected function is the
+ * function that is faked.
+ */
+
+#define CQM_FAKE_FUNC_MAX 32
+
+#define CQM_SPU_HOST_ID 4
+
+#define CQM_QPC_ROCE_PER_DRCT 12
+#define CQM_QPC_ROCE_NORMAL 0
+#define CQM_QPC_ROCE_VBS_MODE 2
+
+struct tag_cqm_toe_private_capability {
+ /* TOE srq is different from other services
+ * and does not need to be managed by the CLA table.
+ */
+ u32 toe_srqc_number;
+ u32 toe_srqc_basic_size;
+ u32 toe_srqc_start_id;
+
+ struct tag_cqm_bitmap srqc_bitmap;
+};
+
+struct tag_cqm_secure_mem {
+ u16 func_id;
+ bool need_secure_mem;
+
+ u32 mode;
+ u32 gpa_len0;
+
+ void __iomem *va_base;
+ void __iomem *va_end;
+ u64 pa_base;
+ u32 page_num;
+
+ /* bitmap mgmt */
+ spinlock_t bitmap_lock;
+ unsigned long *bitmap;
+ u32 bits_nr;
+ u32 alloc_cnt;
+ u32 free_cnt;
+};
+
+struct tag_cqm_handle {
+ struct hinic3_hwdev *ex_handle;
+ struct pci_dev *dev;
+ struct hinic3_func_attr func_attribute; /* vf/pf attributes */
+ struct tag_cqm_func_capability func_capability; /* function capability set */
+ struct tag_cqm_service service[CQM_SERVICE_T_MAX]; /* Service-related structure */
+ struct tag_cqm_bat_table bat_table;
+ struct tag_cqm_bloomfilter_table bloomfilter_table;
+ /* fake-vf-related structure */
+ struct tag_cqm_handle *fake_cqm_handle[CQM_FAKE_FUNC_MAX];
+ struct tag_cqm_handle *parent_cqm_handle;
+
+ struct tag_cqm_toe_private_capability toe_own_capability; /* TOE service-related
+ * capability set
+ */
+ struct tag_cqm_secure_mem secure_mem;
+ struct list_head node;
+ char name[VRAM_NAME_APPLY_LEN];
+};
+
+#define CQM_CQN_FROM_CEQE(data) ((data) & 0xfffff)
+#define CQM_XID_FROM_CEQE(data) ((data) & 0xfffff)
+#define CQM_QID_FROM_CEQE(data) (((data) >> 20) & 0x7)
+#define CQM_TYPE_FROM_CEQE(data) (((data) >> 23) & 0x7)
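+
+/* Illustrative note (inferred from the four macros above, not from separate
+ * hardware documentation): a CEQE data word is assumed to be laid out as
+ *   bits [19:0]  - xid/cqn
+ *   bits [22:20] - qid
+ *   bits [25:23] - type
+ * e.g. data = 0x00B12345 decodes to cqn = 0x12345, qid = 3, type = 1.
+ */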
+
+#define CQM_HASH_BUCKET_SIZE_64 64
+
+#define CQM_MAX_QPC_NUM 0x100000U
+#define CQM_MAX_SCQC_NUM 0x100000U
+#define CQM_MAX_SRQC_NUM 0x100000U
+#define CQM_MAX_CHILDC_NUM 0x100000U
+
+#define CQM_QPC_SIZE_256 256U
+#define CQM_QPC_SIZE_512 512U
+#define CQM_QPC_SIZE_1024 1024U
+
+#define CQM_SCQC_SIZE_32 32U
+#define CQM_SCQC_SIZE_64 64U
+#define CQM_SCQC_SIZE_128 128U
+
+#define CQM_SRQC_SIZE_32 32
+#define CQM_SRQC_SIZE_64 64
+#define CQM_SRQC_SIZE_128 128
+
+#define CQM_MPT_SIZE_64 64
+
+#define CQM_GID_SIZE_32 32
+
+#define CQM_LUN_SIZE_8 8
+
+#define CQM_L3I_SIZE_8 8
+
+#define CQM_TIMER_SIZE_32 32
+
+#define CQM_XID2CID_SIZE_8 8
+
+#define CQM_REORDER_SIZE_256 256
+
+#define CQM_CHILDC_SIZE_256 256U
+
+#define CQM_XID2CID_VBS_NUM (2 * 1024) /* 2K nvme Q */
+
+#define CQM_VBS_QPC_SIZE 512U
+
+#define CQM_XID2CID_VIRTIO_NUM (16 * 1024) /* 16K virt Q */
+
+#define CQM_GID_RDMA_NUM 128
+
+#define CQM_LUN_FC_NUM 64
+
+#define CQM_TASKMAP_FC_NUM 4
+
+#define CQM_L3I_COMM_NUM 64
+
+#define CQM_CHILDC_ROCE_NUM (8 * 1024)
+#define CQM_CHILDC_OVS_VBS_NUM (8 * 1024)
+
+#define CQM_TIMER_SCALE_NUM (2 * 1024)
+#define CQM_TIMER_ALIGN_WHEEL_NUM 8
+#define CQM_TIMER_ALIGN_SCALE_NUM \
+ (CQM_TIMER_SCALE_NUM * CQM_TIMER_ALIGN_WHEEL_NUM)
+
+#define CQM_QPC_OVS_RSVD (1024 * 1024)
+#define CQM_QPC_ROCE_RSVD 2
+#define CQM_QPC_ROCEAA_SWITCH_QP_NUM 4
+#define CQM_QPC_ROCEAA_RSVD \
+ (4 * 1024 + CQM_QPC_ROCEAA_SWITCH_QP_NUM) /* 4096 Normal QP +
+ * 4 Switch QP
+ */
+
+#define CQM_CQ_ROCEAA_RSVD 64
+#define CQM_SRQ_ROCEAA_RSVD 64
+#define CQM_QPC_ROCE_VBS_RSVD_BACK 204800 /* 200K */
+
+#define CQM_OVS_PAGESIZE_ORDER 9
+#define CQM_OVS_MAX_TIMER_FUNC 48
+
+#define CQM_PPA_PAGESIZE_ORDER 8
+
+#define CQM_FC_PAGESIZE_ORDER 0
+
+#define CQM_QHEAD_ALIGN_ORDER 6
+
+typedef void (*serv_cap_init_cb)(struct tag_cqm_handle *, void *);
+
+struct cqm_srv_cap_init {
+ u32 service_type;
+ serv_cap_init_cb serv_cap_proc;
+};
+
+/* Only for llt test */
+s32 cqm_capability_init(void *ex_handle);
+/* Can be defined as static */
+s32 cqm_mem_init(void *ex_handle);
+void cqm_mem_uninit(void *ex_handle);
+s32 cqm_event_init(void *ex_handle);
+void cqm_event_uninit(void *ex_handle);
+void cqm_scq_callback(void *ex_handle, u32 ceqe_data);
+void cqm_ecq_callback(void *ex_handle, u32 ceqe_data);
+void cqm_nocq_callback(void *ex_handle, u32 ceqe_data);
+u8 cqm_aeq_callback(void *ex_handle, u8 event, u8 *data);
+s32 cqm_get_fake_func_type(struct tag_cqm_handle *cqm_handle);
+s32 cqm_get_child_func_start(struct tag_cqm_handle *cqm_handle);
+s32 cqm_get_child_func_number(struct tag_cqm_handle *cqm_handle);
+
+s32 cqm_init(void *ex_handle);
+void cqm_uninit(void *ex_handle);
+s32 cqm_service_register(void *ex_handle, struct tag_service_register_template *service_template);
+void cqm_service_unregister(void *ex_handle, u32 service_type);
+
+s32 cqm_fake_vf_num_set(void *ex_handle, u16 fake_vf_num_cfg);
+#define CQM_LOG_ID 0
+
+#define CQM_PTR_NULL(x) "%s: " #x " is null\n", __func__
+#define CQM_ALLOC_FAIL(x) "%s: " #x " alloc fail\n", __func__
+#define CQM_MAP_FAIL(x) "%s: " #x " map fail\n", __func__
+#define CQM_FUNCTION_FAIL(x) "%s: " #x " return failure\n", __func__
+#define CQM_WRONG_VALUE(x) "%s: " #x " %u is wrong\n", __func__, (u32)(x)
+
+#define cqm_err(dev, format, ...) dev_err(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_warn(dev, format, ...) dev_warn(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_notice(dev, format, ...) \
+ dev_notice(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_info(dev, format, ...) dev_info(dev, "[CQM]" format, ##__VA_ARGS__)
+#ifdef __CQM_DEBUG__
+#define cqm_dbg(format, ...) pr_info("[CQM]" format, ##__VA_ARGS__)
+#else
+#define cqm_dbg(format, ...)
+#endif
+
+#endif /* CQM_MAIN_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.c
new file mode 100644
index 0000000..258c10a
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.c
@@ -0,0 +1,682 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_hwif.h"
+#include "hinic3_hw_cfg.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_bloomfilter.h"
+#include "cqm_db.h"
+#include "cqm_main.h"
+#include "vram_common.h"
+#include "vmsec_mpu_common.h"
+#include "cqm_memsec.h"
+
+#define SECURE_VA_TO_IDX(va, base) (((va) - (base)) / PAGE_SIZE)
+#define PCI_PROC_NAME_LEN 32
+#define U8_BIT 8
+#define MEM_SEC_PROC_DIR "driver/memsec"
+#define BITS_TO_MB(bits) ((bits) * PAGE_SIZE / 1024 / 1024)
+#define MEM_SEC_UNNECESSARY 1
+#define MEMSEC_TMP_LEN 32
+#define STD_INPUT_ONE_PARA 1
+#define STD_INPUT_TWO_PARA 2
+#define MR_KEY_2_INDEX_SHIFT 8
+#define IS_ADDR_IN_MEMSEC(va, len, start, end) \
+ ((va) >= (start) && (va) + (len) < (end))
+
+static int memsec_proc_show(struct seq_file *seq, void *offset);
+static int memsec_proc_open(struct inode *inode, struct file *file);
+static int memsec_proc_release(struct inode *inode, struct file *file);
+static void memsec_info_print(struct seq_file *seq, struct tag_cqm_secure_mem *secure_mem);
+static int hinic3_secure_mem_proc_ent_init(void *hwdev);
+static void hinic3_secure_mem_proc_ent_deinit(void);
+static int hinic3_secure_mem_proc_node_remove(void *hwdev);
+static int hinic3_secure_mem_proc_node_add(void *hwdev);
+static ssize_t memsec_proc_write(struct file *file, const char __user *data, size_t len,
+ loff_t *pff);
+
+static struct proc_dir_entry *g_hinic3_memsec_proc_ent; /* proc dir */
+static atomic_t g_memsec_proc_refcnt = ATOMIC_INIT(0);
+
+#if KERNEL_VERSION(5, 10, 0) > LINUX_VERSION_CODE
+static const struct file_operations memsec_proc_fops = {
+ .open = memsec_proc_open,
+ .read = seq_read,
+ .write = memsec_proc_write,
+ .release = memsec_proc_release,
+};
+#else
+static const struct proc_ops memsec_proc_fops = {
+ .proc_open = memsec_proc_open,
+ .proc_read = seq_read,
+ .proc_write = memsec_proc_write,
+ .proc_release = memsec_proc_release,
+};
+#endif
+
+bool cqm_need_secure_mem(void *hwdev)
+{
+ struct tag_cqm_secure_mem *info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)hwdev;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (cqm_handle == NULL) {
+ return false;
+ }
+ info = &cqm_handle->secure_mem;
+ return ((info->need_secure_mem) && hinic3_is_guest_vmsec_enable(hwdev));
+}
+EXPORT_SYMBOL(cqm_need_secure_mem);
+
+static int memsec_proc_open(struct inode *inode, struct file *file)
+{
+ struct hinic3_hwdev *handle = PDE_DATA(inode);
+ int ret;
+
+ if (!try_module_get(THIS_MODULE))
+ return -ENODEV;
+
+ ret = single_open(file, memsec_proc_show, handle);
+ if (ret)
+ module_put(THIS_MODULE);
+
+ return ret;
+}
+
+static int memsec_proc_release(struct inode *inode, struct file *file)
+{
+ module_put(THIS_MODULE);
+ return single_release(inode, file);
+}
+
+static void memsec_info_print(struct seq_file *seq, struct tag_cqm_secure_mem *secure_mem)
+{
+ int i, j;
+
+ seq_printf(seq, "Secure MemPageSize: %lu\n", PAGE_SIZE);
+ seq_printf(seq, "Secure MemTotal: %u pages\n", secure_mem->bits_nr);
+ seq_printf(seq, "Secure MemTotal: %lu MB\n", BITS_TO_MB(secure_mem->bits_nr));
+ seq_printf(seq, "Secure MemUsed: %d pages\n",
+ bitmap_weight(secure_mem->bitmap, secure_mem->bits_nr));
+ seq_printf(seq, "Secure MemAvailable: %d pages\n",
+ secure_mem->bits_nr - bitmap_weight(secure_mem->bitmap, secure_mem->bits_nr));
+ seq_printf(seq, "Secure MemFirstAvailableIdx: %lu\n",
+ find_first_zero_bit(secure_mem->bitmap, secure_mem->bits_nr));
+ seq_printf(seq, "Secure MemVirtualAddrStart: 0x%p\n", secure_mem->va_base);
+ seq_printf(seq, "Secure MemVirtualAddrEnd: 0x%p\n", secure_mem->va_end);
+ seq_printf(seq, "Secure MemPhysicalAddrStart: 0x%llx\n", secure_mem->pa_base);
+ seq_printf(seq, "Secure MemPhysicalAddrEnd: 0x%llx\n",
+ secure_mem->pa_base + secure_mem->gpa_len0);
+ seq_printf(seq, "Secure MemAllocCnt: %d\n", secure_mem->alloc_cnt);
+ seq_printf(seq, "Secure MemFreeCnt: %d\n", secure_mem->free_cnt);
+ seq_printf(seq, "Secure MemProcRefCnt: %d\n", atomic_read(&g_memsec_proc_refcnt));
+ seq_puts(seq, "Secure MemBitmap:");
+
+ for (i = 0, j = 0; i < (secure_mem->bits_nr / U8_BIT); i++) {
+ if (i % U8_BIT == 0) {
+ seq_printf(seq, "\n [%05d-%05d]: ", j, j + (U8_BIT * U8_BIT) - 0x1);
+ j += U8_BIT * U8_BIT;
+ }
+ seq_printf(seq, "0x%x ", *(u8 *)((u8 *)secure_mem->bitmap + i));
+ }
+
+ seq_puts(seq, "\nSecure MemBitmap info end\n");
+}
+
+static struct tag_cqm_secure_mem *memsec_proc_get_secure_mem(struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_secure_mem *info = NULL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (!cqm_handle) {
+ cqm_err(handle->dev_hdl, "[memsec]cqm not inited yet\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ info = &cqm_handle->secure_mem;
+ if (!info || !info->bitmap) {
+ cqm_err(handle->dev_hdl, "[memsec]secure mem not inited yet\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ return info;
+}
+
+static int memsec_proc_show(struct seq_file *seq, void *offset)
+{
+ struct hinic3_hwdev *handle = seq->private;
+ struct tag_cqm_secure_mem *info = NULL;
+
+ info = memsec_proc_get_secure_mem(handle);
+ if (IS_ERR(info))
+ return -EINVAL;
+
+ memsec_info_print(seq, info);
+
+ return 0;
+}
+
+static int test_read_secure_mem(struct hinic3_hwdev *handle, char *data, size_t len)
+{
+ u64 mem_ptr;
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_secure_mem *info = &cqm_handle->secure_mem;
+
+ if (sscanf(data, "r %llx", &mem_ptr) != STD_INPUT_ONE_PARA) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] read info format unknown!\n");
+ return -EINVAL;
+ }
+
+ if (mem_ptr < (u64)(info->va_base) || mem_ptr >= (u64)(info->va_end)) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] addr 0x%llx invalid!\n", mem_ptr);
+ return -EINVAL;
+ }
+
+ cqm_info(handle->dev_hdl, "[memsec_dfx] read addr 0x%llx val 0x%llx\n",
+ mem_ptr, *(u64 *)mem_ptr);
+ return 0;
+}
+
+static int test_write_secure_mem(struct hinic3_hwdev *handle, char *data, size_t len)
+{
+ u64 mem_ptr;
+ u64 val;
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_secure_mem *info = &cqm_handle->secure_mem;
+
+ if (sscanf(data, "w %llx %llx", &mem_ptr, &val) != STD_INPUT_TWO_PARA) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] read info format unknown!\n");
+ return -EINVAL;
+ }
+
+ if (mem_ptr < (u64)(info->va_base) || mem_ptr >= (u64)(info->va_end)) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] addr 0x%llx invalid!\n", mem_ptr);
+ return -EINVAL;
+ }
+
+ *(u64 *)mem_ptr = val;
+
+ cqm_info(handle->dev_hdl, "[memsec_dfx] write addr 0x%llx val 0x%llx now val 0x%llx\n",
+ mem_ptr, val, *(u64 *)mem_ptr);
+ return 0;
+}
+
+static void test_query_usage(struct hinic3_hwdev *handle)
+{
+ cqm_info(handle->dev_hdl, "\t[memsec_dfx]Usage: q <query_type> <index>\n");
+ cqm_info(handle->dev_hdl, "\t[memsec_dfx]Check whether roce context is in secure memory\n");
+ cqm_info(handle->dev_hdl, "\t[memsec_dfx]Options:\n");
+ cqm_info(handle->dev_hdl, "\t[memsec_dfx]query_type: qpc, mpt, srqc, scqc\n");
+ cqm_info(handle->dev_hdl, "\t[memsec_dfx]index: valid index, e.g. 0x3\n");
+}
+
+static int test_query_parse_type(struct hinic3_hwdev *handle, char *data,
+ enum cqm_object_type *type, u32 *index)
+{
+ char query_type[MEMSEC_TMP_LEN] = {'\0'};
+
+ if (sscanf(data, "q %s %x", query_type, index) != STD_INPUT_TWO_PARA) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] parse query cmd fail!\n");
+ return -1;
+ }
+ query_type[MEMSEC_TMP_LEN - 1] = '\0';
+
+ if (*index <= 0) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] query index 0x%x is invalid\n", *index);
+ return -1;
+ }
+
+ if (strcmp(query_type, "qpc") == 0) {
+ *type = CQM_OBJECT_SERVICE_CTX;
+ } else if (strcmp(query_type, "mpt") == 0) {
+ *type = CQM_OBJECT_MPT;
+ *index = (*index >> MR_KEY_2_INDEX_SHIFT) & 0xFFFFFF;
+ } else if (strcmp(query_type, "srqc") == 0) {
+ *type = CQM_OBJECT_RDMA_SRQ;
+ } else if (strcmp(query_type, "scqc") == 0) {
+ *type = CQM_OBJECT_RDMA_SCQ;
+ } else {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] query type is invalid\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int test_query_context(struct hinic3_hwdev *handle, char *data, size_t len)
+{
+ int ret;
+ u32 index = 0;
+ bool in_secmem = false;
+ struct tag_cqm_object *cqm_obj = NULL;
+ struct tag_cqm_qpc_mpt *qpc_mpt = NULL;
+ struct tag_cqm_queue *cqm_queue = NULL;
+ struct tag_cqm_secure_mem *info = NULL;
+ enum cqm_object_type query_type;
+
+ ret = test_query_parse_type(handle, data, &query_type, &index);
+ if (ret < 0) {
+ test_query_usage(handle);
+ return -EINVAL;
+ }
+
+ info = memsec_proc_get_secure_mem(handle);
+ if (IS_ERR(info))
+ return -EINVAL;
+
+ cqm_obj = cqm_object_get((void *)handle, query_type, index, false);
+ if (!cqm_obj) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] get cmq obj fail!\n");
+ return -EINVAL;
+ }
+
+ switch (query_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ case CQM_OBJECT_MPT:
+ qpc_mpt = (struct tag_cqm_qpc_mpt *)cqm_obj;
+ in_secmem = IS_ADDR_IN_MEMSEC(qpc_mpt->vaddr,
+ cqm_obj->object_size,
+ (u8 *)info->va_base,
+ (u8 *)info->va_end);
+ cqm_info(handle->dev_hdl,
+ "[memsec_dfx]Query %s:0x%x, va=%p %sin secure mem\n",
+ query_type == CQM_OBJECT_MPT ? "MPT, mpt_index" : "QPC, qpn",
+ index, qpc_mpt->vaddr, in_secmem ? "" : "not ");
+ break;
+ case CQM_OBJECT_RDMA_SRQ:
+ case CQM_OBJECT_RDMA_SCQ:
+ cqm_queue = (struct tag_cqm_queue *)cqm_obj;
+ in_secmem = IS_ADDR_IN_MEMSEC(cqm_queue->q_ctx_vaddr,
+ cqm_obj->object_size,
+ (u8 *)info->va_base,
+ (u8 *)info->va_end);
+ cqm_info(handle->dev_hdl,
+ "[memsec_dfx]Query %s:0x%x, va=%p %sin secure mem\n",
+ query_type == CQM_OBJECT_RDMA_SRQ ? "SRQC, srqn " : "SCQC, scqn",
+ index, cqm_queue->q_ctx_vaddr, in_secmem ? "" : "not ");
+ break;
+ default:
+ cqm_err(handle->dev_hdl, "[memsec_dfx] not support query type!\n");
+ break;
+ }
+
+ cqm_object_put(cqm_obj);
+ return 0;
+}
+
+static ssize_t memsec_proc_write(struct file *file, const char __user *data,
+ size_t len, loff_t *off)
+{
+ int ret = -EINVAL;
+ struct hinic3_hwdev *handle = PDE_DATA(file->f_inode);
+ char tmp[MEMSEC_TMP_LEN] = {0};
+
+ if (!handle)
+ return -EIO;
+
+ if (len >= MEMSEC_TMP_LEN)
+ return -EFBIG;
+
+ if (copy_from_user(tmp, data, len))
+ return -EIO;
+
+ switch (tmp[0]) {
+ case 'r':
+ ret = test_read_secure_mem(handle, tmp, len);
+ break;
+ case 'w':
+ ret = test_write_secure_mem(handle, tmp, len);
+ break;
+ case 'q':
+ ret = test_query_context(handle, tmp, len);
+ break;
+ default:
+ cqm_err(handle->dev_hdl, "[memsec_dfx] not support cmd!\n");
+ }
+
+ return (ret == 0) ? len : ret;
+}
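+
+/* Usage sketch for the debug proc interface above (illustrative only; the
+ * proc path assumes the default MEM_SEC_PROC_DIR and a PCI name such as
+ * "03:00:0", and the addresses are hypothetical placeholders):
+ *   cat /proc/driver/memsec/03:00:0                  - dump bitmap statistics
+ *   echo "r 0xffff888000000000" > .../03:00:0        - read a u64 in the secure range
+ *   echo "w 0xffff888000000000 0x1234" > .../03:00:0 - write a u64 in the secure range
+ *   echo "q qpc 0x3" > .../03:00:0                   - check whether qpn 0x3 sits in secure memory
+ * Results of 'r'/'w'/'q' are reported through cqm_info/cqm_err in the kernel log.
+ */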
+
+static int hinic3_secure_mem_proc_ent_init(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (g_hinic3_memsec_proc_ent)
+ return 0;
+
+ g_hinic3_memsec_proc_ent = proc_mkdir(MEM_SEC_PROC_DIR, NULL);
+ if (!g_hinic3_memsec_proc_ent) {
+ /* try again */
+ remove_proc_entry(MEM_SEC_PROC_DIR, NULL);
+ g_hinic3_memsec_proc_ent = proc_mkdir(MEM_SEC_PROC_DIR, NULL);
+ if (!g_hinic3_memsec_proc_ent) {
+ cqm_err(dev->dev_hdl, "[memsec]create secure mem proc fail!\n");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static void hinic3_secure_mem_proc_ent_deinit(void)
+{
+ if (g_hinic3_memsec_proc_ent && !atomic_read(&g_memsec_proc_refcnt)) {
+ remove_proc_entry(MEM_SEC_PROC_DIR, NULL);
+ g_hinic3_memsec_proc_ent = NULL;
+ }
+}
+
+static int hinic3_secure_mem_proc_node_remove(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct pci_dev *pdev = dev->pcidev_hdl;
+ char pci_name[PCI_PROC_NAME_LEN] = {0};
+
+ if (!g_hinic3_memsec_proc_ent) {
+ sdk_info(dev->dev_hdl, "[memsec]proc_ent_null!\n");
+ return 0;
+ }
+
+ atomic_dec(&g_memsec_proc_refcnt);
+
+ snprintf(pci_name, PCI_PROC_NAME_LEN,
+ "%02x:%02x:%x", pdev->bus->number, pdev->slot->number,
+ PCI_FUNC(pdev->devfn));
+
+ remove_proc_entry(pci_name, g_hinic3_memsec_proc_ent);
+
+ return 0;
+}
+
+static int hinic3_secure_mem_proc_node_add(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct pci_dev *pdev = dev->pcidev_hdl;
+ struct proc_dir_entry *res = NULL;
+ char pci_name[PCI_PROC_NAME_LEN] = {0};
+
+ if (!g_hinic3_memsec_proc_ent) {
+ cqm_err(dev->dev_hdl, "[memsec]proc_ent_null!\n");
+ return -EINVAL;
+ }
+
+ atomic_inc(&g_memsec_proc_refcnt);
+
+ snprintf(pci_name, PCI_PROC_NAME_LEN,
+ "%02x:%02x:%x", pdev->bus->number, pdev->slot->number,
+ PCI_FUNC(pdev->devfn));
+ /* 0400 Read by owner */
+ res = proc_create_data(pci_name, 0400, g_hinic3_memsec_proc_ent, &memsec_proc_fops,
+ hwdev);
+ if (!res) {
+ cqm_err(dev->dev_hdl, "[memsec]proc_create_data fail!\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+void hinic3_memsec_proc_init(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int ret;
+
+ ret = hinic3_secure_mem_proc_ent_init(hwdev);
+ if (ret != 0) {
+ cqm_err(dev->dev_hdl, "[memsec]proc ent init fail!\n");
+ return;
+ }
+
+ ret = hinic3_secure_mem_proc_node_add(hwdev);
+ if (ret != 0) {
+ cqm_err(dev->dev_hdl, "[memsec]proc node add fail!\n");
+ return;
+ }
+}
+
+void hinic3_memsec_proc_deinit(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int ret;
+
+ if (!cqm_need_secure_mem(hwdev))
+ return;
+
+ ret = hinic3_secure_mem_proc_node_remove(hwdev);
+ if (ret != 0) {
+ cqm_err(dev->dev_hdl, "[memsec]proc node remove fail!\n");
+ return;
+ }
+
+ hinic3_secure_mem_proc_ent_deinit();
+}
+
+static int cqm_get_secure_mem_cfg(void *dev, struct tag_cqm_secure_mem *info)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+ struct vmsec_cfg_ctx_gpa_entry_cmd mem_info;
+ u16 out_size = sizeof(struct vmsec_cfg_ctx_gpa_entry_cmd);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&mem_info, 0, sizeof(mem_info));
+ mem_info.entry.func_id = info->func_id;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_VMSEC, VMSEC_MPU_CMD_CTX_GPA_SHOW,
+ &mem_info, sizeof(mem_info), &mem_info,
+ &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err || !out_size || mem_info.head.status) {
+ cqm_err(hwdev->dev_hdl, "failed to get memsec info, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, mem_info.head.status, out_size);
+ return -EINVAL;
+ }
+
+ info->gpa_len0 = mem_info.entry.gpa_len0;
+ info->mode = mem_info.entry.mode;
+ info->pa_base = (u64)((((u64)mem_info.entry.gpa_addr0_hi) << CQM_INT_ADDR_SHIFT) |
+ mem_info.entry.gpa_addr0_lo);
+
+ return 0;
+}
+
+static int cqm_secure_mem_param_check(void *ex_handle, struct tag_cqm_secure_mem *info)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (!info->pa_base || !info->gpa_len0)
+ goto no_need_secure_mem;
+
+ if (!IS_ALIGNED(info->pa_base, CQM_SECURE_MEM_ALIGNED_SIZE) ||
+ !IS_ALIGNED(info->gpa_len0, CQM_SECURE_MEM_ALIGNED_SIZE)) {
+ cqm_err(handle->dev_hdl, "func_id %u secure mem not 2M aligned\n",
+ info->func_id);
+ return -EINVAL;
+ }
+
+ if (info->mode == VM_GPA_INFO_MODE_NMIG)
+ goto no_need_secure_mem;
+
+ return 0;
+
+no_need_secure_mem:
+ cqm_info(handle->dev_hdl, "func_id %u no need secure mem gpa 0x%llx len0 0x%x mode 0x%x\n",
+ info->func_id, info->pa_base, info->gpa_len0, info->mode);
+ info->need_secure_mem = false;
+ return MEM_SEC_UNNECESSARY;
+}
+
+int cqm_secure_mem_init(void *ex_handle)
+{
+ int err;
+ struct tag_cqm_secure_mem *info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (!handle)
+ return -EINVAL;
+
+ // only vf in vm need secure mem
+ if (!hinic3_is_guest_vmsec_enable(ex_handle)) {
+ cqm_info(handle->dev_hdl, "no need secure mem\n");
+ return 0;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ info = &cqm_handle->secure_mem;
+ info->func_id = hinic3_global_func_id(ex_handle);
+
+ // get gpa info from mpu
+ err = cqm_get_secure_mem_cfg(ex_handle, info);
+ if (err) {
+ cqm_err(handle->dev_hdl, "func_id %u get secure mem failed, ret %d\n",
+ info->func_id, err);
+ return err;
+ }
+
+ // remap gpa
+ err = cqm_secure_mem_param_check(ex_handle, info);
+ if (err) {
+ cqm_info(handle->dev_hdl, "func_id %u cqm_secure_mem_param_check failed\n",
+ info->func_id);
+ return (err == MEM_SEC_UNNECESSARY) ? 0 : err;
+ }
+
+ info->va_base = ioremap(info->pa_base, info->gpa_len0);
+ info->va_end = info->va_base + info->gpa_len0;
+ info->page_num = info->gpa_len0 / PAGE_SIZE;
+ info->need_secure_mem = true;
+ info->bits_nr = info->page_num;
+ info->bitmap = bitmap_zalloc(info->bits_nr, GFP_KERNEL);
+ if (!info->bitmap) {
+ cqm_err(handle->dev_hdl, "func_id %u bitmap_zalloc failed\n",
+ info->func_id);
+ iounmap(info->va_base);
+ return -ENOMEM;
+ }
+
+ hinic3_memsec_proc_init(ex_handle);
+ return err;
+}
+
+int cqm_secure_mem_deinit(void *ex_handle)
+{
+ struct tag_cqm_secure_mem *info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (!handle)
+ return -EINVAL;
+
+ // only vf in vm need secure mem
+ if (!cqm_need_secure_mem(ex_handle))
+ return 0;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ info = &cqm_handle->secure_mem;
+
+ if (info && info->va_base)
+ iounmap(info->va_base);
+
+ if (info && info->bitmap)
+ kfree(info->bitmap);
+
+ hinic3_memsec_proc_deinit(ex_handle);
+ return 0;
+}
+
+void *cqm_get_secure_mem_pages(struct hinic3_hwdev *handle, u32 order, dma_addr_t *pa_base)
+{
+ struct tag_cqm_secure_mem *info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ unsigned int nr;
+ unsigned long *bitmap = NULL;
+ unsigned long index;
+ unsigned long flags;
+
+ if (!handle || !(handle->cqm_hdl)) {
+ pr_err("[memsec]%s null pointer\n", __func__);
+ return NULL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ info = &cqm_handle->secure_mem;
+ bitmap = info->bitmap;
+ nr = 1 << order;
+
+ if (!bitmap) {
+ cqm_err(handle->dev_hdl, "[memsec] %s bitmap null\n", __func__);
+ return NULL;
+ }
+
+ spin_lock_irqsave(&info->bitmap_lock, flags);
+
+ index = (order) ? bitmap_find_next_zero_area(bitmap, info->bits_nr, 0, nr, 0) :
+ find_first_zero_bit(bitmap, info->bits_nr);
+ if (index >= info->bits_nr) {
+ spin_unlock_irqrestore(&info->bitmap_lock, flags);
+ cqm_err(handle->dev_hdl,
+ "can not find continuous memory, size %d pages, weight %d\n",
+ nr, bitmap_weight(bitmap, info->bits_nr));
+ return NULL;
+ }
+
+ bitmap_set(bitmap, index, nr);
+ info->alloc_cnt++;
+ spin_unlock_irqrestore(&info->bitmap_lock, flags);
+
+ *pa_base = info->pa_base + index * PAGE_SIZE;
+ return (void *)(info->va_base + index * PAGE_SIZE);
+}
+
+void cqm_free_secure_mem_pages(struct hinic3_hwdev *handle, void *va, u32 order)
+{
+ struct tag_cqm_secure_mem *info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ unsigned int nr;
+ unsigned long *bitmap = NULL;
+ unsigned long index;
+ unsigned long flags;
+
+ if (!handle || !(handle->cqm_hdl)) {
+ pr_err("%s null pointer\n", __func__);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ info = &cqm_handle->secure_mem;
+ bitmap = info->bitmap;
+ nr = 1UL << order;
+
+ if (!bitmap) {
+ cqm_err(handle->dev_hdl, "%s bitmap null\n", __func__);
+ return;
+ }
+
+ if (va < info->va_base || (va > (info->va_end - PAGE_SIZE)) ||
+ !PAGE_ALIGNED((va - info->va_base)))
+ cqm_err(handle->dev_hdl, "%s va wrong value\n", __func__);
+
+ index = SECURE_VA_TO_IDX(va, info->va_base);
+ spin_lock_irqsave(&info->bitmap_lock, flags);
+ bitmap_clear(bitmap, index, nr);
+ info->free_cnt++;
+ spin_unlock_irqrestore(&info->bitmap_lock, flags);
+}
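+
+/* Illustrative pairing of the two page helpers above (a sketch, not taken
+ * from a caller in this patch; 'handle' is an assumed hinic3_hwdev pointer):
+ *
+ *   dma_addr_t pa;
+ *   void *va = cqm_get_secure_mem_pages(handle, 1, &pa);  // 2^1 = 2 pages
+ *
+ *   if (va) {
+ *           // ... use va/pa as a context buffer ...
+ *           cqm_free_secure_mem_pages(handle, va, 1);     // same order as alloc
+ *   }
+ */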
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.h
new file mode 100644
index 0000000..7d4a422
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) Huawei Technologies Co., Ltd. 2023-2023. All rights reserved. */
+#ifndef CQM_MEMSEC_H
+#define CQM_MEMSEC_H
+
+#include <linux/pci.h>
+#include "hinic3_hwdev.h"
+#include "hinic3_crm.h"
+#include "cqm_define.h"
+
+#define CQM_GET_MEMSEC_CTX_GPA 19
+#define CQM_INT_ADDR_SHIFT 32
+#define CQM_SECURE_MEM_ALIGNED_SIZE (2 * 1024 * 1024)
+
+bool cqm_need_secure_mem(void *hwdev);
+void *cqm_get_secure_mem_pages(struct hinic3_hwdev *handle, u32 order, dma_addr_t *pa_base);
+void cqm_free_secure_mem_pages(struct hinic3_hwdev *handle, void *va, u32 order);
+int cqm_secure_mem_init(void *ex_handle);
+int cqm_secure_mem_deinit(void *ex_handle);
+void hinic3_memsec_proc_init(void *hwdev);
+void hinic3_memsec_proc_deinit(void *hwdev);
+
+#endif /* CQM_MEMSEC_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.c
new file mode 100644
index 0000000..86359c0
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.c
@@ -0,0 +1,1493 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/mm.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_hwdev.h"
+
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_object_intern.h"
+#include "cqm_main.h"
+#include "cqm_object.h"
+
+/**
+ * cqm_object_qpc_mpt_create - create QPC/MPT
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type, must be MPT or CTX.
+ * @object_size: object size, unit is Byte
+ * @object_priv: private structure of the service layer, it can be NULL.
+ * @index: apply for a reserved qpn based on this value; pass CQM_INDEX_INVALID
+ * for automatic allocation.
+ */
+struct tag_cqm_qpc_mpt *cqm_object_qpc_mpt_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv, u32 index,
+ bool low2bit_align_en)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_qpc_mpt_info *qpc_mpt_info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ s32 ret = CQM_FAIL;
+ u32 relative_index;
+ u32 fake_func_id;
+ u32 index_num = index;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (service_type >= CQM_SERVICE_T_MAX || !cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ if (object_type != CQM_OBJECT_SERVICE_CTX && object_type != CQM_OBJECT_MPT) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ /* fake vf adaption, switch to corresponding VF. */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_PARENT) {
+ fake_func_id = index_num / cqm_handle->func_capability.fake_vf_qpc_number;
+ relative_index = index_num % cqm_handle->func_capability.fake_vf_qpc_number;
+
+ if ((s32)fake_func_id >= cqm_get_child_func_number(cqm_handle)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(fake_func_id));
+ return NULL;
+ }
+
+ index_num = relative_index;
+ cqm_handle = cqm_handle->fake_cqm_handle[fake_func_id];
+ }
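+ /* Worked example (illustrative numbers): with fake_vf_qpc_number = 1024
+ * and index = 2050, fake_func_id = 2 and relative_index = 2, so the
+ * request is redirected to child fake_cqm_handle[2] with xid 2.
+ */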
+
+ qpc_mpt_info = kmalloc(sizeof(*qpc_mpt_info), GFP_ATOMIC | __GFP_ZERO);
+ if (!qpc_mpt_info)
+ return NULL;
+
+ qpc_mpt_info->common.object.service_type = service_type;
+ qpc_mpt_info->common.object.object_type = object_type;
+ qpc_mpt_info->common.object.object_size = object_size;
+ atomic_set(&qpc_mpt_info->common.object.refcount, 1);
+ init_completion(&qpc_mpt_info->common.object.free);
+ qpc_mpt_info->common.object.cqm_handle = cqm_handle;
+ qpc_mpt_info->common.xid = index_num;
+
+ qpc_mpt_info->common.priv = object_priv;
+
+ ret = cqm_qpc_mpt_create(&qpc_mpt_info->common.object, low2bit_align_en);
+ if (ret == CQM_SUCCESS)
+ return &qpc_mpt_info->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_qpc_mpt_create));
+ kfree(qpc_mpt_info);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_qpc_mpt_create);
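+
+/* Usage sketch (illustrative; 'hwdev' and the 256-byte context size are
+ * assumptions, not taken from a caller in this patch): create a QPC with an
+ * automatically allocated index and release it again.
+ *
+ *   struct tag_cqm_qpc_mpt *qpc;
+ *
+ *   qpc = cqm_object_qpc_mpt_create(hwdev, CQM_SERVICE_T_ROCE,
+ *                                   CQM_OBJECT_SERVICE_CTX, 256, NULL,
+ *                                   CQM_INDEX_INVALID, false);
+ *   if (qpc)
+ *           cqm_object_delete(&qpc->object);
+ */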
+
+/**
+ * cqm_object_recv_queue_create - create an RQ when SRQ mode is used
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type
+ * @init_rq_num: init RQ number
+ * @container_size: CQM queue container size
+ * @wqe_size: CQM WQE size
+ * @object_priv: RQ private data
+ */
+struct tag_cqm_queue *cqm_object_recv_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 init_rq_num, u32 container_size,
+ u32 wqe_size, void *object_priv)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_nonrdma_qinfo *rq_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ s32 ret;
+ u32 i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_rq_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (object_type != CQM_OBJECT_NONRDMA_EMBEDDED_RQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ if (service_type != CQM_SERVICE_T_TOE) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, "Rq create: service_type %u has not registered\n",
+ service_type);
+ return NULL;
+ }
+
+ /* 1. create rq qinfo */
+ rq_qinfo = kmalloc(sizeof(*rq_qinfo), GFP_KERNEL | __GFP_ZERO);
+ if (!rq_qinfo)
+ return NULL;
+
+ /* 2. init rq qinfo */
+ rq_qinfo->container_size = container_size;
+ rq_qinfo->wqe_size = wqe_size;
+ rq_qinfo->wqe_per_buf = container_size / wqe_size - 1;
+
+ rq_qinfo->common.queue_link_mode = CQM_QUEUE_TOE_SRQ_LINK_MODE;
+ rq_qinfo->common.priv = object_priv;
+ rq_qinfo->common.object.cqm_handle = cqm_handle;
+ /* this object_size is used as container num */
+ rq_qinfo->common.object.object_size = init_rq_num;
+ rq_qinfo->common.object.service_type = service_type;
+ rq_qinfo->common.object.object_type = object_type;
+ atomic_set(&rq_qinfo->common.object.refcount, 1);
+ init_completion(&rq_qinfo->common.object.free);
+
+ /* 3. create queue header */
+ rq_qinfo->common.q_header_vaddr =
+ cqm_kmalloc_align(sizeof(struct tag_cqm_queue_header),
+ GFP_KERNEL | __GFP_ZERO, CQM_QHEAD_ALIGN_ORDER);
+ if (!rq_qinfo->common.q_header_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_header_vaddr));
+ goto err1;
+ }
+
+ rq_qinfo->common.q_header_paddr =
+ pci_map_single(cqm_handle->dev, rq_qinfo->common.q_header_vaddr,
+ sizeof(struct tag_cqm_queue_header), PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev,
+ rq_qinfo->common.q_header_paddr) != 0) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_header_vaddr));
+ goto err2;
+ }
+
+ /* 4. create rq */
+ for (i = 0; i < init_rq_num; i++) {
+ ret = cqm_container_create(&rq_qinfo->common.object, NULL,
+ true);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_container_create));
+ goto err3;
+ }
+ if (!rq_qinfo->common.head_container)
+ rq_qinfo->common.head_container =
+ rq_qinfo->common.tail_container;
+ }
+
+ return &rq_qinfo->common;
+
+err3:
+ cqm_container_free(rq_qinfo->common.head_container, NULL,
+ &rq_qinfo->common);
+err2:
+ cqm_kfree_align(rq_qinfo->common.q_header_vaddr);
+ rq_qinfo->common.q_header_vaddr = NULL;
+err1:
+ kfree(rq_qinfo);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_recv_queue_create);
+
+/**
+ * cqm_object_share_recv_queue_add_container - allocate new container for srq
+ * @common: queue structure pointer
+ */
+s32 cqm_object_share_recv_queue_add_container(struct tag_cqm_queue *common)
+{
+ if (unlikely(!common)) {
+ pr_err("[CQM]%s: common is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ return cqm_container_create(&common->object, NULL, true);
+}
+EXPORT_SYMBOL(cqm_object_share_recv_queue_add_container);
+
+s32 cqm_object_srq_add_container_free(struct tag_cqm_queue *common, u8 **container_addr)
+{
+ if (unlikely(!common)) {
+ pr_err("[CQM]%s: common is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ return cqm_container_create(&common->object, container_addr, false);
+}
+EXPORT_SYMBOL(cqm_object_srq_add_container_free);
+
+static bool cqm_object_share_recv_queue_param_check(struct hinic3_hwdev *handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 container_size, u32 wqe_size)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ /* service_type must be CQM_SERVICE_T_TOE */
+ if (service_type != CQM_SERVICE_T_TOE) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return false;
+ }
+
+ /* exception of service registration check */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return false;
+ }
+
+ /* container size must be 2^N aligned */
+ if (!cqm_check_align(container_size)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(container_size));
+ return false;
+ }
+
+ /* external parameter check: object_type must be
+ * CQM_OBJECT_NONRDMA_SRQ
+ */
+ if (object_type != CQM_OBJECT_NONRDMA_SRQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return false;
+ }
+
+ /* wqe_size, the divisor, cannot be 0 */
+ if (wqe_size == 0) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * cqm_object_share_recv_queue_create - create srq
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type
+ * @container_number: CQM queue container number
+ * @container_size: CQM queue container size
+ * @wqe_size: CQM WQE size
+ */
+struct tag_cqm_queue *cqm_object_share_recv_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 container_number, u32 container_size,
+ u32 wqe_size)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_nonrdma_qinfo *srq_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_srq_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (!cqm_object_share_recv_queue_param_check(handle, service_type, object_type,
+ container_size, wqe_size))
+ return NULL;
+
+ /* 2. create and initialize srq info */
+ srq_qinfo = kmalloc(sizeof(*srq_qinfo), GFP_KERNEL | __GFP_ZERO);
+ if (!srq_qinfo)
+ return NULL;
+
+ srq_qinfo->common.object.cqm_handle = cqm_handle;
+ srq_qinfo->common.object.object_size = container_number;
+ srq_qinfo->common.object.object_type = object_type;
+ srq_qinfo->common.object.service_type = service_type;
+ atomic_set(&srq_qinfo->common.object.refcount, 1);
+ init_completion(&srq_qinfo->common.object.free);
+
+ srq_qinfo->common.queue_link_mode = CQM_QUEUE_TOE_SRQ_LINK_MODE;
+ srq_qinfo->common.priv = NULL;
+ srq_qinfo->wqe_per_buf = container_size / wqe_size - 1;
+ srq_qinfo->wqe_size = wqe_size;
+ srq_qinfo->container_size = container_size;
+ service = &cqm_handle->service[service_type];
+ srq_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+
+ /* 3. create srq and srq ctx */
+ ret = cqm_share_recv_queue_create(&srq_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &srq_qinfo->common;
+
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_share_recv_queue_create));
+ kfree(srq_qinfo);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_share_recv_queue_create);
+
+/**
+ * cqm_object_fc_srq_create - RQ creation temporarily provided for the FC service.
+ * Special requirement: the queue must contain at least the requested
+ * number of valid WQEs. Because a link WQE can only be placed at the
+ * end of a page, the actual number of valid WQEs may exceed the
+ * request; in that case the service is informed of the additional
+ * WQEs that were created.
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type
+ * @wqe_number: number of valid WQEs
+ * @wqe_size: size of valid WQEs
+ * @object_priv: private structure of the service layer, it can be NULL
+ */
+struct tag_cqm_queue *cqm_object_fc_srq_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ u32 valid_wqe_per_buffer;
+ u32 wqe_sum; /* include linkwqe, normal wqe */
+ u32 buf_size;
+ u32 buf_num;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_fc_srq_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ /* service_type must be fc */
+ if (service_type != CQM_SERVICE_T_FC) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ /* exception of service registration check */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ /* wqe_size cannot exceed PAGE_SIZE and must be 2^n aligned. */
+ if (wqe_size >= PAGE_SIZE || (!cqm_check_align(wqe_size))) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return NULL;
+ }
+
+ /* FC RQ is SRQ. (Different from the SRQ concept of TOE, FC indicates
+ * that packets received by all flows are placed on the same RQ.
+ * The SRQ of TOE is similar to the RQ resource pool.)
+ */
+ if (object_type != CQM_OBJECT_NONRDMA_SRQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ service = &cqm_handle->service[service_type];
+ buf_size = (u32)(PAGE_SIZE << (service->buf_order));
+ /* subtract 1 link wqe */
+ valid_wqe_per_buffer = buf_size / wqe_size - 1;
+ buf_num = wqe_number / valid_wqe_per_buffer;
+ if (wqe_number % valid_wqe_per_buffer != 0)
+ buf_num++;
+
+ /* calculate the total number of WQEs */
+ wqe_sum = buf_num * (valid_wqe_per_buffer + 1);
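+ /* Worked example (illustrative numbers): with buf_size = 4096 and
+ * wqe_size = 64, valid_wqe_per_buffer = 63; a request of wqe_number = 100
+ * gives buf_num = 2 and wqe_sum = 128 (126 valid WQEs plus 2 link WQEs).
+ */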
+ nonrdma_qinfo = kmalloc(sizeof(*nonrdma_qinfo), GFP_KERNEL | __GFP_ZERO);
+ if (!nonrdma_qinfo)
+ return NULL;
+
+ /* initialize object member */
+ nonrdma_qinfo->common.object.service_type = service_type;
+ nonrdma_qinfo->common.object.object_type = object_type;
+ /* total number of WQEs */
+ nonrdma_qinfo->common.object.object_size = wqe_sum;
+ atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
+ init_completion(&nonrdma_qinfo->common.object.free);
+ nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
+
+ /* Initialize the doorbell used by the current queue.
+ * The default doorbell is the hardware doorbell.
+ */
+ nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+ /* Currently, the connection mode is fixed. In the future,
+ * the service needs to transfer the connection mode.
+ */
+ nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
+
+ /* initialize public members */
+ nonrdma_qinfo->common.priv = object_priv;
+ nonrdma_qinfo->common.valid_wqe_num = wqe_sum - buf_num;
+
+ /* initialize internal private members */
+ nonrdma_qinfo->wqe_size = wqe_size;
+ /* RQ (also called SRQ of FC) created by FC services,
+ * CTX needs to be created.
+ */
+ nonrdma_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+
+ ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &nonrdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fc_queue_create));
+ kfree(nonrdma_qinfo);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_fc_srq_create);
+
+static bool cqm_object_nonrdma_queue_param_check(struct hinic3_hwdev *handle, u32 service_type,
+ enum cqm_object_type object_type, u32 wqe_size)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ /* exception of service registration check */
+ if (service_type >= CQM_SERVICE_T_MAX ||
+ !cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return false;
+ }
+ /* wqe_size can't be more than PAGE_SIZE, can't be zero, and must be a
+ * power of 2; cqm_check_align performs all of these checks
+ */
+ if (wqe_size >= PAGE_SIZE || (!cqm_check_align(wqe_size))) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return false;
+ }
+
+ /* nonrdma supports: RQ, SQ, SRQ, CQ, SCQ */
+ if (object_type < CQM_OBJECT_NONRDMA_EMBEDDED_RQ ||
+ object_type > CQM_OBJECT_NONRDMA_SCQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * cqm_object_nonrdma_queue_create - create nonrdma queue
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type, can be embedded RQ/SQ/CQ and SRQ/SCQ
+ * @wqe_number: include link wqe
+ * @wqe_size: fixed length, must be power of 2
+ * @object_priv: private structure of the service layer, it can be NULL
+ */
+struct tag_cqm_queue *cqm_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (!cqm_object_nonrdma_queue_param_check(handle, service_type, object_type, wqe_size))
+ return NULL;
+
+ nonrdma_qinfo = kmalloc(sizeof(*nonrdma_qinfo), GFP_KERNEL | __GFP_ZERO);
+ if (!nonrdma_qinfo)
+ return NULL;
+
+ nonrdma_qinfo->common.object.service_type = service_type;
+ nonrdma_qinfo->common.object.object_type = object_type;
+ nonrdma_qinfo->common.object.object_size = wqe_number;
+ atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
+ init_completion(&nonrdma_qinfo->common.object.free);
+ nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
+
+ /* Initialize the doorbell used by the current queue.
+ * The default value is hardware doorbell
+ */
+ nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+ /* Currently, the link mode is hardcoded and needs to be transferred by
+ * the service side.
+ */
+ nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
+
+ nonrdma_qinfo->common.priv = object_priv;
+
+ /* Initialize internal private members */
+ nonrdma_qinfo->wqe_size = wqe_size;
+ service = &cqm_handle->service[service_type];
+ switch (object_type) {
+ case CQM_OBJECT_NONRDMA_SCQ:
+ nonrdma_qinfo->q_ctx_size = service->service_template.scq_ctx_size;
+ break;
+ case CQM_OBJECT_NONRDMA_SRQ:
+ /* Currently, the SRQ of the service is created through a
+ * dedicated interface.
+ */
+ nonrdma_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+ break;
+ default:
+ break;
+ }
+
+ ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &nonrdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_nonrdma_queue_create));
+ kfree(nonrdma_qinfo);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_nonrdma_queue_create);
+
+static bool cqm_object_rdma_queue_param_check(struct hinic3_hwdev *handle, u32 service_type,
+ enum cqm_object_type object_type)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ /* service_type must be CQM_SERVICE_T_ROCE */
+ if (service_type != CQM_SERVICE_T_ROCE) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return false;
+ }
+ /* exception of service registration check */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return false;
+ }
+
+ /* rdma supports: QP, SRQ, SCQ */
+ if (object_type > CQM_OBJECT_RDMA_SCQ || object_type < CQM_OBJECT_RDMA_QP) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * cqm_object_rdma_queue_create - create rdma queue
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type, can be QP and SRQ/SCQ
+ * @object_size: object size
+ * @object_priv: private structure of the service layer, it can be NULL
+ * @room_header_alloc: Whether to apply for queue room and header space
+ * @xid: common index
+ */
+struct tag_cqm_queue *cqm_object_rdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ bool room_header_alloc, u32 xid)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_rdma_qinfo *rdma_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_rdma_queue_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (!cqm_object_rdma_queue_param_check(handle, service_type, object_type))
+ return NULL;
+
+ rdma_qinfo = kmalloc(sizeof(*rdma_qinfo), GFP_KERNEL | __GFP_ZERO);
+ if (!rdma_qinfo)
+ return NULL;
+
+ rdma_qinfo->common.object.service_type = service_type;
+ rdma_qinfo->common.object.object_type = object_type;
+ rdma_qinfo->common.object.object_size = object_size;
+ atomic_set(&rdma_qinfo->common.object.refcount, 1);
+ init_completion(&rdma_qinfo->common.object.free);
+ rdma_qinfo->common.object.cqm_handle = cqm_handle;
+ rdma_qinfo->common.queue_link_mode = CQM_QUEUE_RDMA_QUEUE_MODE;
+ rdma_qinfo->common.priv = object_priv;
+ rdma_qinfo->common.current_q_room = CQM_RDMA_Q_ROOM_1;
+ rdma_qinfo->room_header_alloc = room_header_alloc;
+ rdma_qinfo->common.index = xid;
+
+ /* Initializes the doorbell used by the current queue.
+ * The default value is hardware doorbell
+ */
+ rdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+
+ service = &cqm_handle->service[service_type];
+ switch (object_type) {
+ case CQM_OBJECT_RDMA_SCQ:
+ rdma_qinfo->q_ctx_size = service->service_template.scq_ctx_size;
+ break;
+ case CQM_OBJECT_RDMA_SRQ:
+ rdma_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+ break;
+ default:
+ break;
+ }
+
+ ret = cqm_rdma_queue_create(&rdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &rdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_rdma_queue_create));
+ kfree(rdma_qinfo);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_rdma_queue_create);
+
+/**
+ * cqm_object_rdma_table_get - create mtt and rdmarc of the rdma service
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type
+ * @index_base: start of index
+ * @index_number: number of created
+ */
+struct tag_cqm_mtt_rdmarc *cqm_object_rdma_table_get(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 index_base, u32 index_number)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_rdma_table *rdma_table = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_rdma_table_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ /* service_type must be CQM_SERVICE_T_ROCE */
+ if (service_type != CQM_SERVICE_T_ROCE) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ /* exception of service registration check */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ if (object_type != CQM_OBJECT_MTT &&
+ object_type != CQM_OBJECT_RDMARC) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ rdma_table = kmalloc(sizeof(*rdma_table), GFP_KERNEL | __GFP_ZERO);
+ if (!rdma_table)
+ return NULL;
+
+ rdma_table->common.object.service_type = service_type;
+ rdma_table->common.object.object_type = object_type;
+ rdma_table->common.object.object_size = (u32)(index_number *
+ sizeof(dma_addr_t));
+ atomic_set(&rdma_table->common.object.refcount, 1);
+ init_completion(&rdma_table->common.object.free);
+ rdma_table->common.object.cqm_handle = cqm_handle;
+ rdma_table->common.index_base = index_base;
+ rdma_table->common.index_number = index_number;
+
+ ret = cqm_rdma_table_create(&rdma_table->common.object);
+ if (ret == CQM_SUCCESS)
+ return &rdma_table->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_rdma_table_create));
+ kfree(rdma_table);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_rdma_table_get);
+
+static s32 cqm_qpc_mpt_delete_ret(struct tag_cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ case CQM_OBJECT_MPT:
+ cqm_qpc_mpt_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+static s32 cqm_nonrdma_queue_delete_ret(struct tag_cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_NONRDMA_EMBEDDED_RQ:
+ case CQM_OBJECT_NONRDMA_EMBEDDED_SQ:
+ case CQM_OBJECT_NONRDMA_EMBEDDED_CQ:
+ case CQM_OBJECT_NONRDMA_SCQ:
+ cqm_nonrdma_queue_delete(object);
+ return CQM_SUCCESS;
+ case CQM_OBJECT_NONRDMA_SRQ:
+ if (object->service_type == CQM_SERVICE_T_TOE)
+ cqm_share_recv_queue_delete(object);
+ else
+ cqm_nonrdma_queue_delete(object);
+
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+static s32 cqm_rdma_queue_delete_ret(struct tag_cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_RDMA_QP:
+ case CQM_OBJECT_RDMA_SRQ:
+ case CQM_OBJECT_RDMA_SCQ:
+ cqm_rdma_queue_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+static s32 cqm_rdma_table_delete_ret(struct tag_cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_MTT:
+ case CQM_OBJECT_RDMARC:
+ cqm_rdma_table_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+/**
+ * cqm_object_delete - Deletes a created object. This function may sleep and waits
+ * for all operations on this object to complete.
+ * @object: CQM object
+ */
+void cqm_object_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return;
+ }
+ if (!object->cqm_handle) {
+ pr_err("[CQM]object del: cqm_handle is null, service type %u, refcount %d\n",
+ object->service_type, (int)object->refcount.counter);
+ kfree(object);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+
+ if (!cqm_handle->ex_handle) {
+ pr_err("[CQM]object del: ex_handle is null, service type %u, refcount %d\n",
+ object->service_type, (int)object->refcount.counter);
+ kfree(object);
+ return;
+ }
+
+ handle = cqm_handle->ex_handle;
+
+ if (object->service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->service_type));
+ kfree(object);
+ return;
+ }
+
+ if (cqm_qpc_mpt_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ if (cqm_nonrdma_queue_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ if (cqm_rdma_queue_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ if (cqm_rdma_table_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ kfree(object);
+}
+EXPORT_SYMBOL(cqm_object_delete);
+
+/**
+ * cqm_object_offset_addr - Obtain the PA and VA at the specified offset of the
+ * object buffer; only rdma tables (MTT/RDMARC) can be searched.
+ * @object: CQM object
+ * @offset: For a rdma table, the offset is the absolute index number
+ * @paddr: physical address
+ */
+u8 *cqm_object_offset_addr(struct tag_cqm_object *object, u32 offset, dma_addr_t *paddr)
+{
+ u32 object_type = object->object_type;
+
+ /* The data flow path takes performance into consideration and
+ * does not check input parameters.
+ */
+ switch (object_type) {
+ case CQM_OBJECT_MTT:
+ case CQM_OBJECT_RDMARC:
+ return cqm_rdma_table_offset_addr(object, offset, paddr);
+ default:
+ break;
+ }
+
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_offset_addr);
+
+/**
+ * cqm_object_get - Obtain an object based on the index
+ * @ex_handle: device pointer that represents the PF
+ * @object_type: object type
+ * @index: supports qpn, mptn, scqn, srqn (n = number)
+ * @bh: barrier or not
+ */
+struct tag_cqm_object *cqm_object_get(void *ex_handle, enum cqm_object_type object_type,
+ u32 index, bool bh)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_object *object = NULL;
+
+ /* The data flow path takes performance into consideration and
+ * does not check input parameters.
+ */
+ switch (object_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ break;
+ case CQM_OBJECT_MPT:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_MPT);
+ break;
+ case CQM_OBJECT_RDMA_SRQ:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SRQC);
+ break;
+ case CQM_OBJECT_RDMA_SCQ:
+ case CQM_OBJECT_NONRDMA_SCQ:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ break;
+ default:
+ return NULL;
+ }
+
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_table_get));
+ return NULL;
+ }
+
+ object_table = &cla_table->obj_table;
+ object = cqm_object_table_get(cqm_handle, object_table, index, bh);
+ return object;
+}
+EXPORT_SYMBOL(cqm_object_get);
+
+/**
+ * cqm_object_put - This function must be called after the cqm_object_get function.
+ * Otherwise, the object cannot be released.
+ * @object: CQM object
+ */
+void cqm_object_put(struct tag_cqm_object *object)
+{
+ /* The data flow path takes performance into consideration and
+ * does not check input parameters.
+ */
+ if (atomic_dec_and_test(&object->refcount) != 0)
+ complete(&object->free);
+}
+EXPORT_SYMBOL(cqm_object_put);
+
+/**
+ * cqm_object_funcid - Obtain the ID of the function to which the object belongs
+ * @object: CQM object
+ */
+s32 cqm_object_funcid(struct tag_cqm_object *object)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!object->cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+
+ return cqm_handle->func_attribute.func_global_idx;
+}
+EXPORT_SYMBOL(cqm_object_funcid);
+
+/**
+ * cqm_object_resize_alloc_new - Currently this function is only used for RoCE. The CQ buffer
+ * is adjusted, but the cqn and cqc remain unchanged.
+ * This function allocates the new buffer but does not release the old buffer;
+ * the valid buffer is still the old one.
+ * @object: CQM object
+ * @object_size: new buffer size
+ */
+s32 cqm_object_resize_alloc_new(struct tag_cqm_object *object, u32 object_size)
+{
+ struct tag_cqm_rdma_qinfo *qinfo = (struct tag_cqm_rdma_qinfo *)(void *)object;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct tag_cqm_buf *q_room_buf = NULL;
+ struct hinic3_hwdev *handle = NULL;
+ u32 order, buf_size;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ handle = cqm_handle->ex_handle;
+
+ /* This interface is used only for the CQ of RoCE service. */
+ if (object->service_type == CQM_SERVICE_T_ROCE &&
+ object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ service = cqm_handle->service + object->service_type;
+ order = service->buf_order;
+ buf_size = (u32)(PAGE_SIZE << order);
+
+ if (qinfo->common.current_q_room == CQM_RDMA_Q_ROOM_1)
+ q_room_buf = &qinfo->common.q_room_buf_2;
+ else
+ q_room_buf = &qinfo->common.q_room_buf_1;
+
+ if (qinfo->room_header_alloc) {
+ q_room_buf->buf_number = ALIGN(object_size, buf_size) /
+ buf_size;
+ q_room_buf->page_number = q_room_buf->buf_number <<
+ order;
+ q_room_buf->buf_size = buf_size;
+ if (cqm_buf_alloc(cqm_handle, q_room_buf, true) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+
+ qinfo->new_object_size = object_size;
+ return CQM_SUCCESS;
+ }
+
+ cqm_err(handle->dev_hdl,
+ CQM_WRONG_VALUE(qinfo->room_header_alloc));
+ return CQM_FAIL;
+ }
+
+ cqm_err(handle->dev_hdl,
+ "Cq resize alloc: service_type %u object_type %u do not support resize\n",
+ object->service_type, object->object_type);
+ return CQM_FAIL;
+}
+EXPORT_SYMBOL(cqm_object_resize_alloc_new);
+
+/**
+ * cqm_object_resize_free_new - Currently this function is only used for RoCE. The CQ buffer
+ * is adjusted, but the cqn and cqc remain unchanged. This function
+ * frees the new buffer and is used for exception handling.
+ * @object: CQM object
+ */
+void cqm_object_resize_free_new(struct tag_cqm_object *object)
+{
+ struct tag_cqm_rdma_qinfo *qinfo = (struct tag_cqm_rdma_qinfo *)(void *)object;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_buf *q_room_buf = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+ handle = cqm_handle->ex_handle;
+
+ /* This interface is used only for the CQ of RoCE service. */
+ if (object->service_type == CQM_SERVICE_T_ROCE &&
+ object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ if (qinfo->common.current_q_room == CQM_RDMA_Q_ROOM_1)
+ q_room_buf = &qinfo->common.q_room_buf_2;
+ else
+ q_room_buf = &qinfo->common.q_room_buf_1;
+
+ qinfo->new_object_size = 0;
+
+ cqm_buf_free(q_room_buf, cqm_handle);
+ } else {
+ cqm_err(handle->dev_hdl,
+ "Cq resize free: service_type %u object_type %u do not support resize\n",
+ object->service_type, object->object_type);
+ }
+}
+EXPORT_SYMBOL(cqm_object_resize_free_new);
+
+/**
+ * cqm_object_resize_free_old - Currently this function is only used for RoCE. The CQ buffer
+ * is adjusted, but the cqn and cqc remain unchanged. This function
+ * frees the old buffer and switches the valid buffer to the new one.
+ * @object: CQM object
+ */
+void cqm_object_resize_free_old(struct tag_cqm_object *object)
+{
+ struct tag_cqm_rdma_qinfo *qinfo = (struct tag_cqm_rdma_qinfo *)(void *)object;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_buf *q_room_buf = NULL;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+
+ /* This interface is used only for the CQ of RoCE service. */
+ if (object->service_type == CQM_SERVICE_T_ROCE &&
+ object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ if (qinfo->common.current_q_room == CQM_RDMA_Q_ROOM_1) {
+ q_room_buf = &qinfo->common.q_room_buf_1;
+ qinfo->common.current_q_room = CQM_RDMA_Q_ROOM_2;
+ } else {
+ q_room_buf = &qinfo->common.q_room_buf_2;
+ qinfo->common.current_q_room = CQM_RDMA_Q_ROOM_1;
+ }
+
+ object->object_size = qinfo->new_object_size;
+
+ cqm_buf_free(q_room_buf, cqm_handle);
+ }
+}
+EXPORT_SYMBOL(cqm_object_resize_free_old);
+
+/**
+ * cqm_gid_base - Obtain the base virtual address of the gid table for FT debug
+ * @ex_handle: device pointer that represents the PF
+ */
+void *cqm_gid_base(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bat_table *bat_table = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ u32 entry_type, i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ bat_table = &cqm_handle->bat_table;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+ if (entry_type == CQM_BAT_ENTRY_T_GID) {
+ cla_table = &bat_table->entry[i];
+ cla_z_buf = &cla_table->cla_z_buf;
+ if (cla_z_buf->buf_list)
+ return cla_z_buf->buf_list->va;
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * cqm_timer_base - Obtain the base virtual address of the timer for live migration
+ * @ex_handle: device pointer that represents the PF
+ */
+void *cqm_timer_base(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bat_table *bat_table = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ u32 entry_type, i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ /* Timer resource is configured on PPF. */
+ if (handle->hwif->attr.func_type != CQM_PPF) {
+ cqm_err(handle->dev_hdl, "%s: wrong function type:%d\n",
+ __func__, handle->hwif->attr.func_type);
+ return NULL;
+ }
+
+ bat_table = &cqm_handle->bat_table;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+ if (entry_type != CQM_BAT_ENTRY_T_TIMER)
+ continue;
+
+ cla_table = &bat_table->entry[i];
+ cla_z_buf = &cla_table->cla_z_buf;
+
+ if (!cla_z_buf->direct.va) {
+ if (cqm_buf_alloc_direct(cqm_handle, cla_z_buf, true) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc_direct));
+ return NULL;
+ }
+ }
+
+ return cla_z_buf->direct.va;
+ }
+
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_timer_base);
+
+static s32 cqm_function_timer_clear_getindex(struct hinic3_hwdev *ex_handle, u32 *buffer_index,
+ u32 function_id, u32 timer_page_num,
+ const struct tag_cqm_buf *cla_z_buf)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(ex_handle->cqm_hdl);
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ u32 index;
+
+ /* Convert the function id to a buffer index and check that it does not
+ * exceed the value range of the timer buffer.
+ */
+ if (function_id < (func_cap->timer_pf_id_start + func_cap->timer_pf_num) &&
+ function_id >= func_cap->timer_pf_id_start) {
+ index = function_id - func_cap->timer_pf_id_start;
+ } else if (function_id < (func_cap->timer_vf_id_start + func_cap->timer_vf_num) &&
+ function_id >= func_cap->timer_vf_id_start) {
+ index = (function_id - func_cap->timer_vf_id_start) +
+ func_cap->timer_pf_num;
+ } else {
+ cqm_err(ex_handle->dev_hdl, "Timer clear: wrong function_id=0x%x\n",
+ function_id);
+ return CQM_FAIL;
+ }
+
+ if ((index * timer_page_num + timer_page_num) > cla_z_buf->buf_number) {
+ cqm_err(ex_handle->dev_hdl,
+ "Timer clear: over cla_z_buf_num, buffer_i=0x%x, zbuf_num=0x%x\n",
+ index, cla_z_buf->buf_number);
+ return CQM_FAIL;
+ }
+
+ *buffer_index = index;
+ return CQM_SUCCESS;
+}
+
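+/* Worked example for the index mapping above, using assumed capability values
+ * timer_pf_id_start = 0, timer_pf_num = 4, timer_vf_id_start = 16:
+ *
+ *	function_id 2 (a PF) -> buffer_index = 2 - 0 = 2
+ *	function_id 18 (a VF) -> buffer_index = (18 - 16) + 4 = 6
+ */
+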
+static void cqm_clear_timer(void *ex_handle, u32 function_id, struct hinic3_hwdev *handle,
+ struct tag_cqm_cla_table *cla_table)
+{
+ u32 timer_buffer_size = CQM_TIMER_ALIGN_SCALE_NUM * CQM_TIMER_SIZE_32;
+ struct tag_cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ u32 timer_page_num, i;
+ u32 buffer_index = 0;
+ s32 ret;
+
+ /* During CQM capability initialization, ensure that the basic size of
+ * the timer buffer page does not exceed 128 x 4 KB. Otherwise,
+ * clearing the timer buffer of the function is complex.
+ */
+ timer_page_num = timer_buffer_size /
+ (PAGE_SIZE << cla_table->trunk_order);
+ if (timer_page_num == 0) {
+ cqm_err(handle->dev_hdl,
+ "Timer clear: fail to clear timer, buffer_size=0x%x, trunk_order=0x%x\n",
+ timer_buffer_size, cla_table->trunk_order);
+ return;
+ }
+
+ /* Convert the function id to a buffer index and check that it does not
+ * exceed the value range of the timer buffer.
+ */
+ ret = cqm_function_timer_clear_getindex(ex_handle, &buffer_index,
+ function_id, timer_page_num,
+ cla_z_buf);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_function_timer_clear_getindex));
+ return;
+ }
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_1 ||
+ cla_table->cla_lvl == CQM_CLA_LVL_2) {
+ for (i = buffer_index * timer_page_num;
+ i < (buffer_index * timer_page_num + timer_page_num); i++)
+ memset((u8 *)(cla_z_buf->buf_list[i].va), 0,
+ (PAGE_SIZE << cla_table->trunk_order));
+ } else {
+ cqm_err(handle->dev_hdl, "Timer clear: timer cla lvl: %u, cla_z_buf_num=0x%x\n",
+ cla_table->cla_lvl, cla_z_buf->buf_number);
+ cqm_err(handle->dev_hdl,
+ "Timer clear: buf_i=0x%x, buf_size=0x%x, page_num=0x%x, order=0x%x\n",
+ buffer_index, timer_buffer_size, timer_page_num,
+ cla_table->trunk_order);
+ }
+}
+
+/**
+ * cqm_function_timer_clear - Clear the timer buffer based on the function ID.
+ * The function ID starts from 0 and the timer buffer is arranged
+ * in sequence by function ID.
+ * @ex_handle: device pointer that represents the PF
+ * @function_id: the function id of CQM timer
+ */
+void cqm_function_timer_clear(void *ex_handle, u32 function_id)
+{
+ /* The timer buffer of one function is 32B*8wheel*2048spoke=128*4k */
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ int loop, i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_func_timer_clear_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2) {
+ cla_table = &cqm_handle->bat_table.timer_entry[0];
+ loop = CQM_LB_SMF_MAX;
+ } else {
+ cla_table = cqm_cla_table_get(&cqm_handle->bat_table, CQM_BAT_ENTRY_T_TIMER);
+ loop = 1;
+ }
+
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cla_table is null\n", __func__);
+ return;
+ }
+ for (i = 0; i < loop; i++) {
+ cqm_clear_timer(ex_handle, function_id, handle, cla_table);
+ cla_table++;
+ }
+}
+EXPORT_SYMBOL(cqm_function_timer_clear);
+
+/**
+ * cqm_function_hash_buf_clear - clear hash buffer based on global function_id
+ * @ex_handle: device pointer that represents the PF
+ * @global_funcid: the function id of clear hash buf
+ */
+void cqm_function_hash_buf_clear(void *ex_handle, s32 global_funcid)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_func_capability *func_cap = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ s32 fake_funcid;
+ u32 i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_func_hash_buf_clear_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+ func_cap = &cqm_handle->func_capability;
+
+ /* fake vf adaption, switch to corresponding VF. */
+ if (func_cap->fake_func_type == CQM_FAKE_FUNC_PARENT) {
+ fake_funcid = global_funcid -
+ (s32)(func_cap->fake_cfg[0].child_func_start);
+ cqm_info(handle->dev_hdl, "fake_funcid =%d\n", fake_funcid);
+ if (fake_funcid < 0 || fake_funcid >= CQM_FAKE_FUNC_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(fake_funcid));
+ return;
+ }
+
+ cqm_handle = cqm_handle->fake_cqm_handle[fake_funcid];
+ }
+
+ cla_table = cqm_cla_table_get(&cqm_handle->bat_table,
+ CQM_BAT_ENTRY_T_HASH);
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cla_table is null\n", __func__);
+ return;
+ }
+ cla_z_buf = &cla_table->cla_z_buf;
+
+ for (i = 0; i < cla_z_buf->buf_number; i++)
+ memset(cla_z_buf->buf_list[i].va, 0, cla_z_buf->buf_size);
+}
+EXPORT_SYMBOL(cqm_function_hash_buf_clear);
+
+void cqm_srq_used_rq_container_delete(struct tag_cqm_object *object, u8 *container)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ u32 link_wqe_offset = qinfo->wqe_per_buf * qinfo->wqe_size;
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(common->object.cqm_handle);
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_srq_linkwqe *srq_link_wqe = NULL;
+ dma_addr_t addr;
+
+ /* 1. Obtain the current container pa through link wqe table,
+ * unmap pa
+ */
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(container + link_wqe_offset);
+ /* shift right by 2 bits to get the length of dw(4B) */
+ cqm_swab32((u8 *)(srq_link_wqe), sizeof(struct tag_cqm_linkwqe) >> 2);
+
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_gpa_h,
+ srq_link_wqe->current_buffer_gpa_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Rq container del: buffer physical addr is null\n");
+ return;
+ }
+ pci_unmap_single(cqm_handle->dev, addr, qinfo->container_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ /* 2. Obtain the current container va through link wqe table, free va */
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_addr_h,
+ srq_link_wqe->current_buffer_addr_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Rq container del: buffer virtual addr is null\n");
+ return;
+ }
+ kfree((void *)addr);
+}
+EXPORT_SYMBOL(cqm_srq_used_rq_container_delete);
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.h
new file mode 100644
index 0000000..ba61828
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.h
@@ -0,0 +1,714 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_OBJECT_H
+#define CQM_OBJECT_H
+
+#include "cqm_define.h"
+#include "vram_common.h"
+
+#define CQM_LINKWQE_128B 128
+#define CQM_MOD_TOE HINIC3_MOD_TOE
+#define CQM_MOD_CQM HINIC3_MOD_CQM
+
+#ifdef __cplusplus
+#if __cplusplus
+extern "C" {
+#endif
+#endif /* __cplusplus */
+
+#ifndef HIUDK_SDK
+
+#define CQM_SUCCESS 0
+#define CQM_FAIL (-1)
+/* Ignore the return value and continue */
+#define CQM_CONTINUE 1
+
+/* type of WQE is LINK WQE */
+#define CQM_WQE_WF_LINK 1
+/* type of WQE is common WQE */
+#define CQM_WQE_WF_NORMAL 0
+
+/* chain queue mode */
+#define CQM_QUEUE_LINK_MODE 0
+/* RING queue mode */
+#define CQM_QUEUE_RING_MODE 1
+/* SRQ queue mode */
+#define CQM_QUEUE_TOE_SRQ_LINK_MODE 2
+/* RDMA queue mode */
+#define CQM_QUEUE_RDMA_QUEUE_MODE 3
+
+/* generic linkwqe structure */
+struct tag_cqm_linkwqe {
+ u32 rsv1 : 14; /* <reserved field */
+ u32 wf : 1; /* <wf */
+ u32 rsv2 : 14; /* <reserved field */
+ u32 ctrlsl : 2; /* <ctrlsl */
+ u32 o : 1; /* <o bit */
+
+ u32 rsv3 : 31; /* <reserved field */
+ u32 lp : 1; /* The lp field determines whether the o-bit
+ * meaning is reversed.
+ */
+
+ u32 next_page_gpa_h; /* <record the upper 32b physical address of the
+ * next page for the chip
+ */
+ u32 next_page_gpa_l; /* <record the lower 32b physical address of the
+ * next page for the chip
+ */
+
+ u32 next_buffer_addr_h; /* <record the upper 32b virtual address of the
+ * next page for the driver
+ */
+ u32 next_buffer_addr_l; /* <record the lower 32b virtual address of the
+ * next page for the driver
+ */
+};
+
+/* SRQ linkwqe structure. The wqe size must not exceed the common RQE size. */
+struct tag_cqm_srq_linkwqe {
+ struct tag_cqm_linkwqe linkwqe; /* <generic linkwqe structure */
+ u32 current_buffer_gpa_h; /* <Record the upper 32b physical address of
+ * the current page, which is used when the
+ * driver releases the container and cancels
+ * the mapping.
+ */
+ u32 current_buffer_gpa_l; /* <Record the lower 32b physical address of
+ * the current page, which is used when the
+ * driver releases the container and cancels
+ * the mapping.
+ */
+ u32 current_buffer_addr_h; /* <Record the upper 32b of the virtual
+ * address of the current page, which is used
+ * when the driver releases the container.
+ */
+ u32 current_buffer_addr_l; /* <Record the lower 32b of the virtual
+ * address of the current page, which is used
+ * when the driver releases the container.
+ */
+
+ u32 fast_link_page_addr_h; /* <Record the upper 32b of the virtual
+ * address of the fastlink page where the
+ * container address is recorded. It is used
+ * when the driver releases the fastlink.
+ */
+ u32 fast_link_page_addr_l; /* <Record the lower 32b virtual address of
+ * the fastlink page where the container
+ * address is recorded. It is used when the
+ * driver releases the fastlink.
+ */
+
+ u32 fixed_next_buffer_addr_h; /* <Record the upper 32b virtual address
+ * of the next container, which is used to
+ * release driver resources and must not be
+ * modified by the driver.
+ */
+ u32 fixed_next_buffer_addr_l; /* <Record the lower 32b virtual address
+ * of the next container, which is used to
+ * release driver resources and must not be
+ * modified by the driver.
+ */
+};
+
+/* first 64B of standard 128B WQE */
+union tag_cqm_linkwqe_first64B {
+ struct tag_cqm_linkwqe basic_linkwqe; /* <generic linkwqe structure */
+ struct tag_cqm_srq_linkwqe toe_srq_linkwqe; /* <SRQ linkwqe structure */
+ u32 value[16]; /* <reserved field */
+};
+
+/* second 64B of standard 128B WQE */
+struct tag_cqm_linkwqe_second64B {
+ u32 rsvd0[4]; /* <first 16B reserved field */
+ u32 rsvd1[4]; /* <second 16B reserved field */
+ union {
+ struct {
+ u32 rsvd0[3];
+ u32 rsvd1 : 29;
+ u32 toe_o : 1; /* <o bit of toe */
+ u32 resvd2 : 2;
+ } bs;
+ u32 value[4];
+ } third_16B; /* <third 16B */
+
+ union {
+ struct {
+ u32 rsvd0[2];
+ u32 rsvd1 : 31;
+ u32 ifoe_o : 1; /* <o bit of ifoe */
+ u32 rsvd2;
+ } bs;
+ u32 value[4];
+ } forth_16B; /* <fourth 16B */
+};
+
+/* standard 128B WQE structure */
+struct tag_cqm_linkwqe_128B {
+ union tag_cqm_linkwqe_first64B first64B; /* <first 64B of standard 128B WQE */
+ struct tag_cqm_linkwqe_second64B second64B; /* <back 64B of standard 128B WQE */
+};
+
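+/* Layout note: the structures above are expected to pack to exactly
+ * 64B + 64B = CQM_LINKWQE_128B bytes. An illustrative build-time check that a
+ * service driver could keep next to its own code:
+ *
+ *	BUILD_BUG_ON(sizeof(union tag_cqm_linkwqe_first64B) != 64);
+ *	BUILD_BUG_ON(sizeof(struct tag_cqm_linkwqe_128B) != CQM_LINKWQE_128B);
+ */
+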
+/* AEQ type definition */
+enum cqm_aeq_event_type {
+ CQM_AEQ_BASE_T_NIC = 0, /* <NIC consists of 16 events:0~15 */
+ CQM_AEQ_BASE_T_ROCE = 16, /* <ROCE consists of 32 events:16~47 */
+ CQM_AEQ_BASE_T_FC = 48, /* <FC consists of 8 events:48~55 */
+ CQM_AEQ_BASE_T_IOE = 56, /* <IOE consists of 8 events:56~63 */
+ CQM_AEQ_BASE_T_TOE = 64, /* <TOE consists of 16 events:64~95 */
+ CQM_AEQ_BASE_T_VBS = 96, /* <VBS consists of 16 events:96~111 */
+ CQM_AEQ_BASE_T_IPSEC = 112, /* <IPSEC consists of 16 events:112~127 */
+ CQM_AEQ_BASE_T_MAX = 128 /* <maximum of 128 events can be defined */
+};
+
+/* service registration template */
+struct tag_service_register_template {
+ u32 service_type; /* <service type */
+ u32 srq_ctx_size; /* <SRQ context size */
+ u32 scq_ctx_size; /* <SCQ context size */
+ void *service_handle; /* <pointer to the service driver when the
+ * ceq/aeq function is called back
+ */
+ /* <ceq callback:shared cq */
+ void (*shared_cq_ceq_callback)(void *service_handle, u32 cqn,
+ void *cq_priv);
+ /* <ceq callback:embedded cq */
+ void (*embedded_cq_ceq_callback)(void *service_handle, u32 xid,
+ void *qpc_priv);
+ /* <ceq callback:no cq */
+ void (*no_cq_ceq_callback)(void *service_handle, u32 xid, u32 qid,
+ void *qpc_priv);
+ /* <aeq level callback */
+ u8 (*aeq_level_callback)(void *service_handle, u8 event_type, u8 *val);
+ /* <aeq callback */
+ void (*aeq_callback)(void *service_handle, u8 event_type, u8 *val);
+};
+
+/* object operation type definition */
+enum cqm_object_type {
+ CQM_OBJECT_ROOT_CTX = 0, /* <0:root context, which is compatible with
+ * root CTX management
+ */
+ CQM_OBJECT_SERVICE_CTX, /* <1:QPC, connection management object */
+ CQM_OBJECT_MPT, /* <2:RDMA service usage */
+
+ CQM_OBJECT_NONRDMA_EMBEDDED_RQ = 10, /* <10:RQ of non-RDMA services,
+ * managed by LINKWQE
+ */
+ CQM_OBJECT_NONRDMA_EMBEDDED_SQ, /* <11:SQ of non-RDMA services,
+ * managed by LINKWQE
+ */
+ CQM_OBJECT_NONRDMA_SRQ, /* <12:SRQ of non-RDMA services,
+ * managed by MTT, but the CQM
+ * needs to apply for MTT.
+ */
+ CQM_OBJECT_NONRDMA_EMBEDDED_CQ, /* <13:Embedded CQ for non-RDMA
+ * services, managed by LINKWQE
+ */
+ CQM_OBJECT_NONRDMA_SCQ, /* <14:SCQ of non-RDMA services,
+ * managed by LINKWQE
+ */
+
+ CQM_OBJECT_RESV = 20,
+
+ CQM_OBJECT_RDMA_QP = 30, /* <30:QP of RDMA services, managed by MTT */
+ CQM_OBJECT_RDMA_SRQ, /* <31:SRQ of RDMA services, managed by MTT */
+ CQM_OBJECT_RDMA_SCQ, /* <32:SCQ of RDMA services, managed by MTT */
+
+ CQM_OBJECT_MTT = 50, /* <50:MTT table of the RDMA service */
+ CQM_OBJECT_RDMARC, /* <51:RC of the RDMA service */
+};
+
+/* return value of the failure to apply for the BITMAP table */
+#define CQM_INDEX_INVALID (~(0U))
+/* Return value of the reserved bit applied for in the BITMAP table,
+ * indicating that the index is allocated by the CQM and
+ * cannot be specified by the driver.
+ */
+#define CQM_INDEX_RESERVED 0xfffff
+
+/* to support ROCE Q buffer resize, the first Q buffer space */
+#define CQM_RDMA_Q_ROOM_1 1
+/* to support the Q buffer resize of ROCE, the second Q buffer space */
+#define CQM_RDMA_Q_ROOM_2 2
+
+/* doorbell mode selected by the current Q, hardware doorbell */
+#define CQM_HARDWARE_DOORBELL 1
+/* doorbell mode selected by the current Q, software doorbell */
+#define CQM_SOFTWARE_DOORBELL 2
+
+/* single-node structure of the CQM buffer */
+struct tag_cqm_buf_list {
+ void *va; /* <virtual address */
+ dma_addr_t pa; /* <physical address */
+ u32 refcount; /* <reference counting of the buf,
+ * which is used for internal buf management.
+ */
+};
+
+/* common management structure of the CQM buffer */
+struct tag_cqm_buf {
+ struct tag_cqm_buf_list *buf_list; /* <buffer list */
+ struct tag_cqm_buf_list direct; /* <map the discrete buffer list to a group
+ * of consecutive addresses
+ */
+ u32 page_number; /* <total number of pages: buf_number multiplied by 2^n */
+ u32 buf_number; /* <number of buf_list nodes */
+ u32 buf_size; /* <size of each buf: PAGE_SIZE multiplied by 2^n */
+ struct vram_buf_info buf_info;
+ u32 bat_entry_type;
+};
+
+/* CQM object structure, which can be considered
+ * as the base class abstracted from all queues/CTX.
+ */
+struct tag_cqm_object {
+ u32 service_type; /* <service type */
+ u32 object_type; /* <object type, such as context, queue, mpt,
+ * and mtt, etc
+ */
+ u32 object_size; /* <object Size, for queue/CTX/MPT,
+ * the unit is Byte, for MTT/RDMARC,
+ * the unit is the number of entries,
+ * for containers, the unit is the number of
+ * containers.
+ */
+ atomic_t refcount; /* <reference counting */
+ struct completion free; /* <completion used to wait for object release */
+ void *cqm_handle; /* <cqm_handle */
+};
+
+/* structure of the QPC and MPT objects of the CQM */
+struct tag_cqm_qpc_mpt {
+ struct tag_cqm_object object; /* <object base class */
+ u32 xid; /* <xid */
+ dma_addr_t paddr; /* <physical address of the QPC/MTT memory */
+ void *priv; /* <private information about the object of
+ * the service driver.
+ */
+ u8 *vaddr; /* <virtual address of the QPC/MTT memory */
+};
+
+/* queue header structure */
+struct tag_cqm_queue_header {
+ u64 doorbell_record; /* <SQ/RQ DB content */
+ u64 ci_record; /* <CQ DB content */
+ u64 rsv1; /* <This area is a user-defined area for driver
+ * and microcode information transfer.
+ */
+ u64 rsv2; /* <This area is a user-defined area for driver
+ * and microcode information transfer.
+ */
+};
+
+/* queue management structure: for queues of non-RDMA services, embedded queues
+ * are managed by LinkWQE, SRQ and SCQ are managed by MTT, but MTT needs to be
+ * applied by CQM; the queue of the RDMA service is managed by the MTT.
+ */
+struct tag_cqm_queue {
+ struct tag_cqm_object object; /* <object base class */
+ u32 index; /* <The embedded queue and QP do not have
+ * indexes, but the SRQ and SCQ do.
+ */
+ void *priv; /* <private information about the object of
+ * the service driver
+ */
+ u32 current_q_doorbell; /* <doorbell type selected by the current
+ * queue. HW/SW are used for the roce QP.
+ */
+ u32 current_q_room; /* <roce:current valid room buf */
+ struct tag_cqm_buf q_room_buf_1; /* <nonrdma:only q_room_buf_1 can be set to
+ * q_room_buf
+ */
+ struct tag_cqm_buf q_room_buf_2; /* <The CQ of RDMA reallocates the size of
+ * the queue room.
+ */
+ struct tag_cqm_queue_header *q_header_vaddr; /* <queue header virtual address */
+ dma_addr_t q_header_paddr; /* <physical address of the queue header */
+ u8 *q_ctx_vaddr; /* <CTX virtual addresses of SRQ and SCQ */
+ dma_addr_t q_ctx_paddr; /* <CTX physical addresses of SRQ and SCQ */
+ u32 valid_wqe_num; /* <number of valid WQEs that are
+ * successfully created
+ */
+ u8 *tail_container; /* <tail pointer of the SRQ container */
+ u8 *head_container; /* <head pointer of SRQ container */
+ u8 queue_link_mode; /* <Determine the connection mode during
+ * queue creation, such as link and ring.
+ */
+};
+
+/* MTT/RDMARC management structure */
+struct tag_cqm_mtt_rdmarc {
+ struct tag_cqm_object object; /* <object base class */
+ u32 index_base; /* <index_base */
+ u32 index_number; /* <index_number */
+ u8 *vaddr; /* <buffer virtual address */
+};
+
+/* sending command structure */
+struct tag_cqm_cmd_buf {
+ void *buf; /* <command buffer virtual address */
+ dma_addr_t dma; /* <physical address of the command buffer */
+ u16 size; /* <command buffer size */
+};
+
+/* definition of sending ACK mode */
+enum cqm_cmd_ack_type {
+ CQM_CMD_ACK_TYPE_CMDQ = 0, /* <ack is written back to cmdq */
+ CQM_CMD_ACK_TYPE_SHARE_CQN = 1, /* <ack is reported through the SCQ of
+ * the root CTX.
+ */
+ CQM_CMD_ACK_TYPE_APP_CQN = 2 /* <ack is reported through the SCQ of
+ * service
+ */
+};
+
+#endif
+/**
+ * @brief: create FC SRQ.
+ * @details: The number of valid WQEs in the queue must meet the number
+ * requested. Because a link WQE can only be placed at the end of a
+ * page, the actual number of valid WQEs may exceed the requirement,
+ * and the service needs to be informed of how many extra WQEs were
+ * created.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param wqe_number: number of WQEs
+ * @param wqe_size: wqe size
+ * @param object_priv: pointer to object private information
+ * @retval struct tag_cqm_queue*: queue structure pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_queue *cqm_object_fc_srq_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+
+/**
+ * @brief: create RQ.
+ * @details: When SRQ is used, the RQ queue is created.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param init_rq_num: number of containers
+ * @param container_size: container size
+ * @param wqe_size: wqe size
+ * @param object_priv: pointer to object private information
+ * @retval struct tag_cqm_queue*: queue structure pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_queue *cqm_object_recv_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 init_rq_num, u32 container_size,
+ u32 wqe_size, void *object_priv);
+
+/**
+ * @brief: SRQ applies for a new container and is linked after the container
+ * is created.
+ * @details: SRQ applies for a new container and is linked after the container
+ * is created.
+ * @param common: queue structure pointer
+ * @retval 0: success
+ * @retval -1: fail
+ * @date: 2019-5-4
+ */
+s32 cqm_object_share_recv_queue_add_container(struct tag_cqm_queue *common);
+
+/**
+ * @brief: SRQ applies for a new container. After the container is created,
+ * it is not linked into the SRQ; the service attaches the container
+ * itself.
+ * @details: SRQ applies for a new container. After the container is created,
+ * it is not linked into the SRQ; the service attaches the container
+ * itself.
+ * @param common: queue structure pointer
+ * @param container_addr: returned container address
+ * @retval 0: success
+ * @retval -1: fail
+ * @date: 2019-5-4
+ */
+s32 cqm_object_srq_add_container_free(struct tag_cqm_queue *common, u8 **container_addr);
+
+/**
+ * @brief: create SRQ for TOE services.
+ * @details: create SRQ for TOE services.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param container_number: number of containers
+ * @param container_size: container size
+ * @param wqe_size: wqe size
+ * @retval struct tag_cqm_queue*: queue structure pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_queue *cqm_object_share_recv_queue_create(void *ex_handle,
+ u32 service_type,
+ enum cqm_object_type object_type,
+ u32 container_number,
+ u32 container_size,
+ u32 wqe_size);
+
+/**
+ * @brief: create QPC and MPT.
+ * @details: When QPC and MPT are created, the interface sleeps.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param object_size: object size, in bytes.
+ * @param object_priv: private structure of the service layer.
+ * The value can be NULL.
+ * @param index: apply for reserved qpn based on the value. If automatic
+ * allocation is required, fill CQM_INDEX_INVALID.
+ * @retval struct tag_cqm_qpc_mpt *: pointer to the QPC/MPT structure
+ * @date: 2019-5-4
+ */
+struct tag_cqm_qpc_mpt *cqm_object_qpc_mpt_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ u32 index, bool low2bit_align_en);
+
+/**
+ * @brief: create a queue for non-RDMA services.
+ * @details: create a queue for non-RDMA services. The interface sleeps.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param wqe_number: number of Link WQEs
+ * @param wqe_size: fixed length, size 2^n
+ * @param object_priv: private structure of the service layer.
+ * The value can be NULL.
+ * @retval struct tag_cqm_queue *: queue structure pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_queue *cqm_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+
+/**
+ * @brief: create a RDMA service queue.
+ * @details: create a queue for the RDMA service. The interface sleeps.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param object_size: object size
+ * @param object_priv: private structure of the service layer.
+ * The value can be NULL.
+ * @param room_header_alloc: whether to apply for the queue room and header
+ * space
+ * @retval struct tag_cqm_queue *: queue structure pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_queue *cqm_object_rdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ bool room_header_alloc, u32 xid);
+
+/**
+ * @brief: create the MTT and RDMARC of the RDMA service.
+ * @details: create the MTT and RDMARC of the RDMA service.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param index_base: start index number
+ * @param index_number: index number
+ * @retval struct tag_cqm_mtt_rdmarc *: pointer to the MTT/RDMARC structure
+ * @date: 2019-5-4
+ */
+struct tag_cqm_mtt_rdmarc *cqm_object_rdma_table_get(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 index_base, u32 index_number);
+
+/**
+ * @brief: delete created objects.
+ * @details: delete the created object. This function does not return until all
+ * operations on the object are complete.
+ * @param object: object pointer
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_object_delete(struct tag_cqm_object *object);
+
+/**
+ * @brief: obtains the physical address and virtual address at the specified
+ * offset of the object buffer.
+ * @details: Only RDMA table query is supported to obtain the physical address
+ * and virtual address at the specified offset of the object buffer.
+ * @param object: object pointer
+ * @param offset: for a rdma table, offset is the absolute index number.
+ * @param paddr: The physical address is returned only for the rdma table.
+ * @retval u8 *: virtual address at the specified offset of the buffer
+ * @date: 2019-5-4
+ */
+u8 *cqm_object_offset_addr(struct tag_cqm_object *object, u32 offset, dma_addr_t *paddr);
+
+/**
+ * @brief: obtain object according index.
+ * @details: obtain object according index.
+ * @param ex_handle: device pointer that represents the PF
+ * @param object_type: object type
+ * @param index: support qpn,mptn,scqn,srqn
+ * @param bh: whether to disable the bottom half of the interrupt
+ * @retval struct tag_cqm_object *: object pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_object *cqm_object_get(void *ex_handle, enum cqm_object_type object_type,
+ u32 index, bool bh);
+
+/**
+ * @brief: object reference counting release
+ * @details: After cqm_object_get is invoked, this API must be called to put
+ * the reference. Otherwise, the object cannot be released.
+ * @param object: object pointer
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_object_put(struct tag_cqm_object *object);
+
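+/* Illustrative usage sketch: every successful cqm_object_get() must be
+ * balanced by one cqm_object_put(), otherwise the object cannot be freed.
+ * Assuming the caller already holds a valid ex_handle and a qpn:
+ *
+ *	struct tag_cqm_object *obj;
+ *
+ *	obj = cqm_object_get(ex_handle, CQM_OBJECT_SERVICE_CTX, qpn, false);
+ *	if (!obj)
+ *		return;			// index not found
+ *	// ... use the object while the reference is held ...
+ *	cqm_object_put(obj);		// allows cqm_object_delete() to complete
+ */
+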
+/**
+ * @brief: obtain the ID of the function where the object resides.
+ * @details: obtain the ID of the function where the object resides.
+ * @param object: object pointer
+ * @retval >=0: ID of function
+ * @retval -1: fail
+ * @date: 2020-4-15
+ */
+s32 cqm_object_funcid(struct tag_cqm_object *object);
+
+/**
+ * @brief: apply for a new space for an object.
+ * @details: Currently, this parameter is valid only for the ROCE service.
+ * The CQ buffer size is adjusted, but the CQN and CQC remain
+ * unchanged. New buffer space is applied for, and the old buffer
+ * space is not released. The current valid buffer is still the old
+ * buffer.
+ * @param object: object pointer
+ * @param object_size: new buffer size
+ * @retval 0: success
+ * @retval -1: fail
+ * @date: 2019-5-4
+ */
+s32 cqm_object_resize_alloc_new(struct tag_cqm_object *object, u32 object_size);
+
+/**
+ * @brief: release the newly applied buffer space for the object.
+ * @details: This function is used to release the newly applied buffer space for
+ * service exception handling.
+ * @param object: object pointer
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_object_resize_free_new(struct tag_cqm_object *object);
+
+/**
+ * @brief: release old buffer space for objects.
+ * @details: This function releases the old buffer and sets the current valid
+ * buffer to the new buffer.
+ * @param object: object pointer
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_object_resize_free_old(struct tag_cqm_object *object);
+
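+/* Illustrative flow sketch for the three resize helpers above: a RoCE CQ
+ * resize is driven as alloc-new, then free-old on success or free-new on
+ * failure. Here "cq_obj" stands for the tag_cqm_object of an RDMA SCQ and
+ * "update_cqc_on_chip()" is a placeholder for the service's own CQC update:
+ *
+ *	if (cqm_object_resize_alloc_new(cq_obj, new_size) != CQM_SUCCESS)
+ *		return CQM_FAIL;	// nothing allocated, old buffer still valid
+ *	if (update_cqc_on_chip() != 0) {
+ *		cqm_object_resize_free_new(cq_obj);	// roll back, keep old buffer
+ *		return CQM_FAIL;
+ *	}
+ *	cqm_object_resize_free_old(cq_obj);	// switch valid buffer to the new one
+ */
+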
+/**
+ * @brief: release container.
+ * @details: release container.
+ * @param object: object pointer
+ * @param container: container pointer to be released
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_srq_used_rq_container_delete(struct tag_cqm_object *object, u8 *container);
+
+void *cqm_get_db_addr(void *ex_handle, u32 service_type);
+
+s32 cqm_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count,
+ u8 pagenum, u64 db);
+
+/**
+ * @brief: provide the interface for ringing the hardware doorbell.
+ * The CQM converts the pri to cos.
+ * @details: provide the doorbell interface; the CQM converts the pri to cos.
+ * The doorbell content passed by the service must be in host byte
+ * order; this interface converts it to network byte order.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: Each kernel-mode service is allocated a hardware
+ * doorbell page.
+ * @param db_count: PI[7:0] beyond 64b in the doorbell
+ * @param db: The doorbell content is organized by the service. If there is
+ * endian conversion, the service needs to complete the conversion.
+ * @retval 0: success
+ * @retval -1: fail
+ * @date: 2019-5-4
+ */
+s32 cqm_ring_hardware_db_update_pri(void *ex_handle, u32 service_type,
+ u8 db_count, u64 db);
+
+/**
+ * @brief: ring the software doorbell.
+ * @details: ring the software doorbell.
+ * @param object: object pointer
+ * @param db_record: software doorbell content. If there is big-endian
+ * conversion, the service needs to complete the conversion.
+ * @retval 0: success
+ * @retval -1: fail
+ * @date: 2019-5-4
+ */
+s32 cqm_ring_software_db(struct tag_cqm_object *object, u64 db_record);
+
+/**
+ * @brief: obtain the base virtual address of the gid table.
+ * @details: obtain the base virtual address of the gid table for FT debug.
+ * @param ex_handle: device pointer that represents the PF
+ * @retval void *: base virtual address of the gid table
+ * @date: 2019-5-4
+ */
+void *cqm_gid_base(void *ex_handle);
+
+/**
+ * @brief: obtain the base virtual address of the timer.
+ * @details: obtain the base virtual address of the timer.
+ * @param ex_handle: device pointer that represents the PF
+ * @retval void *: base virtual address of the timer
+ * @date: 2020-5-21
+ */
+void *cqm_timer_base(void *ex_handle);
+
+/**
+ * @brief: clear timer buffer.
+ * @details: clear the timer buffer based on the function ID. Function IDs start
+ * from 0, and timer buffers are arranged by function ID.
+ * @param ex_handle: device pointer that represents the PF
+ * @param function_id: function id
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_function_timer_clear(void *ex_handle, u32 function_id);
+
+/**
+ * @brief: clear hash buffer.
+ * @details: clear the hash buffer based on the function ID.
+ * @param ex_handle: device pointer that represents the PF
+ * @param global_funcid: global function id whose hash buffer is to be cleared
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_function_hash_buf_clear(void *ex_handle, s32 global_funcid);
+
+s32 cqm_ring_direct_wqe_db(void *ex_handle, u32 service_type, u8 db_count,
+ void *direct_wqe);
+s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type,
+ void *direct_wqe);
+
+#ifdef __cplusplus
+#if __cplusplus
+}
+#endif
+#endif /* __cplusplus */
+
+#endif /* CQM_OBJECT_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.c
new file mode 100644
index 0000000..1007b44
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.c
@@ -0,0 +1,1389 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/mm.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_main.h"
+#include "cqm_object_intern.h"
+
+#define srq_obj_intern_if_section
+
+/**
+ * cqm_container_free - Only the container buffer is released. The buffers in the WQE
+ * and fast link tables are not affected. Containers can be released
+ * from head to tail, including head and tail. This function does not
+ * modify the start and end pointers of qinfo records.
+ * @srq_head_container: head pointer of the containers be released
+ * @srq_tail_container: If it is NULL, it means to release container from head to tail
+ * @common: CQM nonrdma queue info
+ */
+void cqm_container_free(u8 *srq_head_container, u8 *srq_tail_container,
+ struct tag_cqm_queue *common)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(common->object.cqm_handle);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ u32 link_wqe_offset = qinfo->wqe_per_buf * qinfo->wqe_size;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_srq_linkwqe *srq_link_wqe = NULL;
+ u32 container_size = qinfo->container_size;
+ struct pci_dev *dev = cqm_handle->dev;
+ u64 addr;
+ u8 *srqhead_container = srq_head_container;
+ u8 *srqtail_container = srq_tail_container;
+
+ if (unlikely(!srqhead_container)) {
+ pr_err("[CQM]%s: srqhead_container is null\n", __func__);
+ return;
+ }
+
+ /* 1. The range is released cyclically from the head to the tail, i.e.
+ * [head:tail]. If the tail is null, the range is [head:null]. Otherwise,
+ * [head:tail->next).
+ */
+ if (srqtail_container) {
+ /* [head:tail->next): Update srqtail_container to the next
+ * container va.
+ */
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(srqtail_container +
+ link_wqe_offset);
+ /* Only the link wqe part needs to be converted. */
+ cqm_swab32((u8 *)(srq_link_wqe), sizeof(struct tag_cqm_linkwqe) >> CQM_DW_SHIFT);
+ srqtail_container = (u8 *)CQM_ADDR_COMBINE(srq_link_wqe->fixed_next_buffer_addr_h,
+ srq_link_wqe->fixed_next_buffer_addr_l);
+ }
+
+ do {
+ /* 2. Obtain the link wqe of the current container */
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(srqhead_container +
+ link_wqe_offset);
+ /* Only the link wqe part needs to be converted. */
+ cqm_swab32((u8 *)(srq_link_wqe), sizeof(struct tag_cqm_linkwqe) >> CQM_DW_SHIFT);
+ /* Obtain the va of the next container using the link wqe. */
+ srqhead_container = (u8 *)CQM_ADDR_COMBINE(srq_link_wqe->fixed_next_buffer_addr_h,
+ srq_link_wqe->fixed_next_buffer_addr_l);
+
+ /* 3. Obtain the current container pa from the link wqe,
+ * and cancel the mapping
+ */
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_gpa_h,
+ srq_link_wqe->current_buffer_gpa_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Container free: buffer physical addr is null\n");
+ return;
+ }
+ pci_unmap_single(dev, (dma_addr_t)addr, container_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ /* 4. Obtain the container va through linkwqe and release the
+ * container va.
+ */
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_addr_h,
+ srq_link_wqe->current_buffer_addr_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Container free: buffer virtual addr is null\n");
+ return;
+ }
+ kfree((void *)addr);
+ } while (srqhead_container != srqtail_container);
+}
+
+/**
+ * cqm_container_create - Create a container for the RQ or SRQ, link it to the tail of the queue,
+ * and update the tail container pointer of the queue.
+ * @object: CQM object
+ * @container_addr: the pointer of container created
+ * @link: if the SRQ is not empty, update the linkwqe of the tail container
+ */
+s32 cqm_container_create(struct tag_cqm_object *object, u8 **container_addr, bool link)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(object->cqm_handle);
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ u32 link_wqe_offset = qinfo->wqe_per_buf * qinfo->wqe_size;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_srq_linkwqe *srq_link_wqe = NULL;
+ struct tag_cqm_linkwqe *link_wqe = NULL;
+ dma_addr_t new_container_pa;
+ u8 *new_container = NULL;
+
+ /* 1. Applying for Container Space and Initializing Invalid/Normal WQE
+ * of the Container.
+ */
+ new_container = kmalloc(qinfo->container_size, GFP_ATOMIC | __GFP_ZERO);
+ if (!new_container) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(new_container));
+ return CQM_FAIL;
+ }
+
+ /* Container PCI mapping */
+ new_container_pa = pci_map_single(cqm_handle->dev, new_container,
+ qinfo->container_size,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, new_container_pa) != 0) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(new_container_pa));
+ goto map_fail;
+ }
+
+ /* 2. The container is linked to the SRQ, and the link wqe of
+ * tail_container and new_container is updated.
+ */
+ /* If the SRQ is not empty, update the linkwqe of the tail container. */
+ if (link) {
+ if (common->tail_container) {
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(common->tail_container +
+ link_wqe_offset);
+ link_wqe = &srq_link_wqe->linkwqe;
+ link_wqe->next_page_gpa_h =
+ __swab32((u32)CQM_ADDR_HI(new_container_pa));
+ link_wqe->next_page_gpa_l =
+ __swab32((u32)CQM_ADDR_LW(new_container_pa));
+ link_wqe->next_buffer_addr_h =
+ __swab32((u32)CQM_ADDR_HI(new_container));
+ link_wqe->next_buffer_addr_l =
+ __swab32((u32)CQM_ADDR_LW(new_container));
+ /* make sure next page gpa and next buffer addr of
+ * the link wqe are updated first
+ */
+ wmb();
+ /* The SRQ tail container may be accessed by the chip.
+ * Therefore, the o bit must be set to 1 last.
+ */
+ (*(u32 *)link_wqe) |= 0x80;
+ /* make sure the o bit is set before the fixed next buffer
+ * addr of the srq link wqe is updated
+ */
+ wmb();
+ srq_link_wqe->fixed_next_buffer_addr_h =
+ (u32)CQM_ADDR_HI(new_container);
+ srq_link_wqe->fixed_next_buffer_addr_l =
+ (u32)CQM_ADDR_LW(new_container);
+ }
+ }
+
+ /* Update the Invalid WQE of a New Container */
+ clear_bit(0x1F, (ulong *)new_container);
+ /* Update the link wqe of the new container. */
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(new_container + link_wqe_offset);
+ link_wqe = &srq_link_wqe->linkwqe;
+ link_wqe->o = CQM_LINK_WQE_OWNER_INVALID;
+ link_wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+ link_wqe->lp = CQM_LINK_WQE_LP_INVALID;
+ link_wqe->wf = CQM_WQE_WF_LINK;
+ srq_link_wqe->current_buffer_gpa_h = CQM_ADDR_HI(new_container_pa);
+ srq_link_wqe->current_buffer_gpa_l = CQM_ADDR_LW(new_container_pa);
+ srq_link_wqe->current_buffer_addr_h = CQM_ADDR_HI(new_container);
+ srq_link_wqe->current_buffer_addr_l = CQM_ADDR_LW(new_container);
+ /* Convert only the area accessed by the chip to the network sequence */
+ cqm_swab32((u8 *)link_wqe, sizeof(struct tag_cqm_linkwqe) >> CQM_DW_SHIFT);
+ if (link)
+ /* Update the tail pointer of a queue. */
+ common->tail_container = new_container;
+ else
+ *container_addr = new_container;
+
+ return CQM_SUCCESS;
+
+map_fail:
+ kfree(new_container);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_srq_container_init - Initialize the SRQ to create all containers and link them
+ * @object: CQM object
+ */
+static s32 cqm_srq_container_init(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 container_num = object->object_size;
+ s32 ret;
+ u32 i;
+
+ if (common->head_container || common->tail_container) {
+ cqm_err(handle->dev_hdl, "Srq container init: srq tail/head container not null\n");
+ return CQM_FAIL;
+ }
+
+ /* Applying for a Container
+ * During initialization, the head/tail pointer is null.
+ * After the first application is successful, head=tail.
+ */
+ ret = cqm_container_create(&qinfo->common.object, NULL, true);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, "Srq container init: cqm_srq_container_add fail\n");
+ return CQM_FAIL;
+ }
+ common->head_container = common->tail_container;
+
+ /* The container is dynamically created and the tail pointer is updated.
+ * If the container fails to be created, release the containers from
+ * head to null.
+ */
+ for (i = 1; i < container_num; i++) {
+ ret = cqm_container_create(&qinfo->common.object, NULL, true);
+ if (ret == CQM_FAIL) {
+ cqm_container_free(common->head_container, NULL,
+ &qinfo->common);
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_share_recv_queue_create - Create SRQ(share receive queue)
+ * @object: CQM object
+ */
+s32 cqm_share_recv_queue_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_toe_private_capability *toe_own_cap = &cqm_handle->toe_own_capability;
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 step;
+ s32 ret;
+
+ /* 1. Create srq container, including initializing the link wqe. */
+ ret = cqm_srq_container_init(object);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_srq_container_init));
+ return CQM_FAIL;
+ }
+
+ /* 2. Create srq ctx: SRQ CTX is directly delivered by the driver to the
+ * chip memory area through the cmdq channel, and no CLA table
+ * management is required. Therefore, the CQM applies for only one empty
+ * buffer for the driver.
+ */
+ /* bitmap applies for index */
+ bitmap = &toe_own_cap->srqc_bitmap;
+ qinfo->index_count = (ALIGN(qinfo->q_ctx_size,
+ toe_own_cap->toe_srqc_basic_size)) /
+ toe_own_cap->toe_srqc_basic_size;
+ /* align with 2 as the upper bound */
+ step = ALIGN(toe_own_cap->toe_srqc_number, 2);
+ qinfo->common.index = cqm_bitmap_alloc(bitmap, step, qinfo->index_count,
+ func_cap->xid_alloc_mode);
+ if (qinfo->common.index >= bitmap->max_num) {
+ cqm_err(handle->dev_hdl, "Srq create: queue index %u exceeds max_num %u\n",
+ qinfo->common.index, bitmap->max_num);
+ goto err1;
+ }
+ qinfo->common.index += toe_own_cap->toe_srqc_start_id;
+
+ /* apply for buffer for SRQC */
+ common->q_ctx_vaddr = kmalloc(qinfo->q_ctx_size,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_ctx_vaddr));
+ goto err2;
+ }
+ return CQM_SUCCESS;
+
+err2:
+ cqm_bitmap_free(bitmap,
+ qinfo->common.index - toe_own_cap->toe_srqc_start_id,
+ qinfo->index_count);
+err1:
+ cqm_container_free(common->head_container, common->tail_container,
+ &qinfo->common);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_srq_used_rq_delete - Delete RQ in TOE SRQ mode
+ * @object: CQM object
+ */
+static void cqm_srq_used_rq_delete(const struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(common->object.cqm_handle);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ u32 link_wqe_offset = qinfo->wqe_per_buf * qinfo->wqe_size;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_srq_linkwqe *srq_link_wqe = NULL;
+ dma_addr_t addr;
+
+ /* Currently, the SRQ solution does not support RQ initialization
+ * without mounting a container.
+ * As a result, RQ resources are released incorrectly.
+ * Temporary workaround: Only one container is mounted during RQ
+ * initialization and only one container is released
+ * during resource release.
+ */
+ if (unlikely(!common->head_container)) {
+ pr_err("[CQM]%s: Rq del: rq has no contianer to release\n", __func__);
+ return;
+ }
+
+ /* 1. Obtain current container pa from the link wqe table and
+ * cancel the mapping.
+ */
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(common->head_container + link_wqe_offset);
+ /* Only the link wqe part needs to be converted. */
+ cqm_swab32((u8 *)(srq_link_wqe), sizeof(struct tag_cqm_linkwqe) >> CQM_DW_SHIFT);
+
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_gpa_h,
+ srq_link_wqe->current_buffer_gpa_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Rq del: buffer physical addr is null\n");
+ return;
+ }
+ pci_unmap_single(cqm_handle->dev, addr, qinfo->container_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ /* 2. Obtain the container va through the linkwqe and release. */
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_addr_h,
+ srq_link_wqe->current_buffer_addr_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Rq del: buffer virtual addr is null\n");
+ return;
+ }
+ kfree((void *)addr);
+}
+
+/**
+ * cqm_share_recv_queue_delete - The SRQ object is deleted. Delete only containers that are not
+ * used by SRQ, that is, containers from the head to the tail.
+ * The RQ releases containers that have been used by the RQ.
+ * @object: CQM object
+ */
+void cqm_share_recv_queue_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bitmap *bitmap = &cqm_handle->toe_own_capability.srqc_bitmap;
+ u32 index = common->index - cqm_handle->toe_own_capability.toe_srqc_start_id;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ /* 1. Wait for completion and ensure that all references to the object
+ * have been released.
+ */
+ if (atomic_dec_and_test(&object->refcount) != 0)
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Srq del: object is referred by others, has to wait for completion\n");
+
+ wait_for_completion(&object->free);
+ destroy_completion(&object->free);
+ /* 2. The corresponding index in the bitmap is cleared. */
+ cqm_bitmap_free(bitmap, index, qinfo->index_count);
+
+ /* 3. SRQC resource release */
+ if (unlikely(!common->q_ctx_vaddr)) {
+ pr_err("[CQM]%s: Srq del: srqc kfree, context virtual addr is null\n", __func__);
+ return;
+ }
+ kfree(common->q_ctx_vaddr);
+
+ /* 4. The SRQ queue is released. */
+ cqm_container_free(common->head_container, NULL, &qinfo->common);
+}
+
+#define obj_intern_if_section
+
+#define CQM_INDEX_INVALID_MASK 0x1FFFFFFFU
+#define CQM_IDX_VALID_SHIFT 29
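+
+/* Note on the xid encoding, as inferred from cqm_qpc_mpt_bitmap_alloc()
+ * below (illustrative summary, not a hardware specification): when the low
+ * 29 bits of xid are all ones, the caller has no preassigned index and one
+ * is allocated from the bitmap (with xid >> CQM_IDX_VALID_SHIFT used as the
+ * hint when low 2-bit alignment is enabled); otherwise xid is treated as a
+ * preassigned index and must be obtained exactly (reserved, or by xid for
+ * the vroce control function).
+ */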
+
+/**
+ * cqm_qpc_mpt_bitmap_alloc - Apply for index from the bitmap when creating QPC or MPT
+ * @object: CQM object
+ * @cla_table: CLA table entry
+ * @low2bit_align_en: enable alignment of the lower two bits
+ */
+static s32 cqm_qpc_mpt_bitmap_alloc(struct tag_cqm_object *object,
+ struct tag_cqm_cla_table *cla_table, bool low2bit_align_en)
+{
+ struct tag_cqm_qpc_mpt *common = container_of(object, struct tag_cqm_qpc_mpt, object);
+ struct tag_cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct tag_cqm_qpc_mpt_info,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_bitmap *bitmap = &cla_table->bitmap;
+ u32 index, count;
+ u32 xid = qpc_mpt_info->common.xid;
+
+ count = (ALIGN(object->object_size, cla_table->obj_size)) / cla_table->obj_size;
+ qpc_mpt_info->index_count = count;
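+ /* e.g. (illustrative only) a 1KB context with cla_table->obj_size = 256B
+ * takes count = ALIGN(1024, 256) / 256 = 4 consecutive bitmap indexes.
+ */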
+
+ if ((xid & CQM_INDEX_INVALID_MASK) == CQM_INDEX_INVALID_MASK) {
+ if (low2bit_align_en) {
+ if (count > 1) {
+ cqm_err(handle->dev_hdl, "Not support alloc multiple bits.");
+ return CQM_FAIL;
+ }
+
+ index = cqm_bitmap_alloc_low2bit_align(bitmap, xid >> CQM_IDX_VALID_SHIFT,
+ func_cap->xid_alloc_mode);
+ } else {
+ /* apply for an index normally */
+ index = cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
+ count, func_cap->xid_alloc_mode);
+ }
+
+ if (index < bitmap->max_num - bitmap->reserved_back) {
+ qpc_mpt_info->common.xid = index;
+ } else {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+ } else {
+ if ((hinic3_func_type((void *)handle) != TYPE_PPF) &&
+ (hinic3_support_roce((void *)handle, NULL))) {
+ /* If PF is vroce control function, apply for index by xid */
+ index = cqm_bitmap_alloc_by_xid(bitmap, count, xid);
+ } else {
+ /* apply for index to be reserved */
+ index = cqm_bitmap_alloc_reserved(bitmap, count, xid);
+ }
+
+ if (index != xid) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_alloc_reserved));
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_qpc_mpt_create - Create QPC or MPT
+ * @object: CQM object
+ * @low2bit_align_en: enable alignment of the lower two bits
+ */
+s32 cqm_qpc_mpt_create(struct tag_cqm_object *object, bool low2bit_align_en)
+{
+ struct tag_cqm_qpc_mpt *common = container_of(object, struct tag_cqm_qpc_mpt, object);
+ struct tag_cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct tag_cqm_qpc_mpt_info,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 index, count;
+
+ /* find the corresponding cla table */
+ if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ } else if (object->object_type == CQM_OBJECT_MPT) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_MPT);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ return CQM_FAIL;
+ }
+
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cqm_cla_table_get is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ /* Bitmap applies for index. */
+ if (cqm_qpc_mpt_bitmap_alloc(object, cla_table, low2bit_align_en) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_qpc_mpt_bitmap_alloc));
+ return CQM_FAIL;
+ }
+
+ bitmap = &cla_table->bitmap;
+ index = qpc_mpt_info->common.xid;
+ count = qpc_mpt_info->index_count;
+
+ /* Find the trunk page from the BAT/CLA and allocate the buffer.
+ * Ensure that the released buffer has been cleared.
+ */
+ if (cla_table->alloc_static)
+ qpc_mpt_info->common.vaddr = cqm_cla_get_unlock(cqm_handle,
+ cla_table,
+ index, count,
+ &common->paddr);
+ else
+ qpc_mpt_info->common.vaddr = cqm_cla_get_lock(cqm_handle,
+ cla_table, index,
+ count,
+ &common->paddr);
+
+ if (!qpc_mpt_info->common.vaddr) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_get_lock));
+ cqm_err(handle->dev_hdl, "Qpc mpt init: qpc mpt vaddr is null, cla_table->alloc_static=%d\n",
+ cla_table->alloc_static);
+ goto err1;
+ }
+
+ /* Indexes are associated with objects, and FC is executed
+ * in the interrupt context.
+ */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC) {
+ if (cqm_object_table_insert(cqm_handle, object_table, index,
+ object, false) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_insert));
+ goto err2;
+ }
+ } else {
+ if (cqm_object_table_insert(cqm_handle, object_table, index,
+ object, true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_insert));
+ goto err2;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err2:
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+err1:
+ cqm_bitmap_free(bitmap, index, count);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_qpc_mpt_delete - Delete QPC or MPT
+ * @object: CQM object
+ */
+void cqm_qpc_mpt_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_qpc_mpt *common = container_of(object, struct tag_cqm_qpc_mpt, object);
+ struct tag_cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct tag_cqm_qpc_mpt_info,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ u32 count = qpc_mpt_info->index_count;
+ u32 index = qpc_mpt_info->common.xid;
+ struct tag_cqm_bitmap *bitmap = NULL;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_delete_cnt);
+
+ /* find the corresponding cla table */
+ /* Todo */
+ if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ } else if (object->object_type == CQM_OBJECT_MPT) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_MPT);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ return;
+ }
+
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cqm_cla_table_get_qpc return failure\n", __func__);
+ return;
+ }
+
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC)
+ cqm_object_table_remove(cqm_handle, object_table, index, object,
+ false);
+ else
+ cqm_object_table_remove(cqm_handle, object_table, index, object,
+ true);
+
+ /* wait for completion to ensure that all references to
+ * the QPC are complete
+ */
+ if (atomic_dec_and_test(&object->refcount) != 0)
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Qpc mpt del: object is referred by others, has to wait for completion\n");
+
+ /* Static QPC allocation must be non-blocking.
+ * Services ensure that the QPC is referenced
+ * when the QPC is deleted.
+ */
+ if (!cla_table->alloc_static)
+ wait_for_completion(&object->free);
+
+ /* VMware FC need explicitly deinit spin_lock in completion */
+ destroy_completion(&object->free);
+
+ /* release qpc buffer */
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* release the index to the bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+}
+
+/**
+ * cqm_linkwqe_fill - Used to organize the queue buffer of non-RDMA services and fill the link wqe
+ * @buf: CQM queue buffer
+ * @wqe_per_buf: Number of WQEs per buffer, excluding the link WQE
+ * @wqe_size: Size of a single WQE
+ * @wqe_number: Total number of WQEs, excluding link WQEs
+ * @tail: true - The linkwqe must be at the end of the page;
+ * false - The linkwqe can be not at the end of the page.
+ * @link_mode: Link mode
+ */
+static void cqm_linkwqe_fill(struct tag_cqm_buf *buf, u32 wqe_per_buf, u32 wqe_size,
+ u32 wqe_number, bool tail, u8 link_mode)
+{
+ struct tag_cqm_linkwqe_128B *linkwqe = NULL;
+ struct tag_cqm_linkwqe *wqe = NULL;
+ dma_addr_t addr;
+ u8 *tmp = NULL;
+ u8 *va = NULL;
+ u32 i;
+
+ /* For every buffer except the last one, the link wqe is filled
+ * directly at the tail of the buffer.
+ */
+ for (i = 0; i < buf->buf_number; i++) {
+ va = (u8 *)(buf->buf_list[i].va);
+
+ if (i != (buf->buf_number - 1)) {
+ wqe = (struct tag_cqm_linkwqe *)(va + (u32)(wqe_size * wqe_per_buf));
+ wqe->wf = CQM_WQE_WF_LINK;
+ wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+ wqe->lp = CQM_LINK_WQE_LP_INVALID;
+ /* The valid value of link wqe needs to be set to 1.
+ * Each service ensures that o-bit=1 indicates that
+ * link wqe is valid and o-bit=0 indicates that
+ * link wqe is invalid.
+ */
+ wqe->o = CQM_LINK_WQE_OWNER_VALID;
+ addr = buf->buf_list[(u32)(i + 1)].pa;
+ wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
+ wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
+ } else { /* linkwqe special padding of the last buffer */
+ if (tail) {
+ /* must be filled at the end of the page */
+ tmp = va + (u32)(wqe_size * wqe_per_buf);
+ wqe = (struct tag_cqm_linkwqe *)tmp;
+ } else {
+ /* The last linkwqe is filled
+ * following the last wqe.
+ */
+ tmp = va + (u32)(wqe_size * (wqe_number - wqe_per_buf *
+ (buf->buf_number - 1)));
+ wqe = (struct tag_cqm_linkwqe *)tmp;
+ }
+ wqe->wf = CQM_WQE_WF_LINK;
+ wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+
+ /* In link mode, the last link WQE is invalid;
+ * In ring mode, the last link wqe is valid, pointing to
+ * the home page, and the lp is set.
+ */
+ if (link_mode == CQM_QUEUE_LINK_MODE) {
+ wqe->o = CQM_LINK_WQE_OWNER_INVALID;
+ } else {
+ /* The lp field of the last link_wqe is set to
+ * 1, indicating that the meaning of the o-bit
+ * is reversed.
+ */
+ wqe->lp = CQM_LINK_WQE_LP_VALID;
+ wqe->o = CQM_LINK_WQE_OWNER_VALID;
+ addr = buf->buf_list[0].pa;
+ wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
+ wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
+ }
+ }
+
+ if (wqe_size == CQM_LINKWQE_128B) {
+ /* Since the B800 version, the WQE o-bit scheme has changed:
+ * both 64B halves of the 128B WQE must be assigned a value.
+ * For ifoe, the o-bit is the 63rd bit from the end of the
+ * last 64B; for toe, it is the 157th bit from the end of the
+ * last 64B.
+ */
+ linkwqe = (struct tag_cqm_linkwqe_128B *)wqe;
+ linkwqe->second64B.third_16B.bs.toe_o = CQM_LINK_WQE_OWNER_VALID;
+ linkwqe->second64B.forth_16B.bs.ifoe_o = CQM_LINK_WQE_OWNER_VALID;
+
+ /* shift 2 bits by right to get length of dw(4B) */
+ cqm_swab32((u8 *)wqe, sizeof(struct tag_cqm_linkwqe_128B) >> 2);
+ } else {
+ /* shift 2 bits by right to get length of dw(4B) */
+ cqm_swab32((u8 *)wqe, sizeof(struct tag_cqm_linkwqe) >> 2);
+ }
+ }
+}
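+
+/* Illustrative example of the resulting layout (not driver logic): with
+ * three buffers in ring mode, the link wqe at the tail of buffer 0 points
+ * to buffer 1, buffer 1 points to buffer 2, and the last link wqe points
+ * back to buffer 0 with the lp bit set; in link mode the last link wqe is
+ * simply marked invalid instead.
+ */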
+
+static int cqm_nonrdma_queue_ctx_create_scq(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ bool bh = false;
+
+ /* find the corresponding cla table */
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(nonrdma_cqm_cla_table_get));
+ return CQM_FAIL;
+ }
+
+ /* bitmap applies for index */
+ bitmap = &cla_table->bitmap;
+ qinfo->index_count = (ALIGN(qinfo->q_ctx_size, cla_table->obj_size)) / cla_table->obj_size;
+ qinfo->common.index = cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
+ qinfo->index_count,
+ cqm_handle->func_capability.xid_alloc_mode);
+ if (qinfo->common.index >= bitmap->max_num) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(nonrdma_cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+
+ /* find the trunk page from BAT/CLA and allocate the buffer */
+ common->q_ctx_vaddr = cqm_cla_get_lock(cqm_handle, cla_table, qinfo->common.index,
+ qinfo->index_count, &common->q_ctx_paddr);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(nonrdma_cqm_cla_get_lock));
+ cqm_bitmap_free(bitmap, qinfo->common.index, qinfo->index_count);
+ return CQM_FAIL;
+ }
+
+ /* index and object association */
+ object_table = &cla_table->obj_table;
+ bh = ((object->service_type == CQM_SERVICE_T_FC) ? false : true);
+ if (cqm_object_table_insert(cqm_handle, object_table, qinfo->common.index, object,
+ bh) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(nonrdma_cqm_object_table_insert));
+ cqm_cla_put(cqm_handle, cla_table, qinfo->common.index, qinfo->index_count);
+ cqm_bitmap_free(bitmap, qinfo->common.index, qinfo->index_count);
+
+ return CQM_FAIL;
+ }
+
+ return 0;
+}
+
+static s32 cqm_nonrdma_queue_ctx_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ s32 shift;
+ int ret;
+
+ if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+ shift = cqm_shift(qinfo->q_ctx_size);
+ common->q_ctx_vaddr = cqm_kmalloc_align(qinfo->q_ctx_size,
+ GFP_KERNEL | __GFP_ZERO,
+ (u16)shift);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_ctx_vaddr));
+ return CQM_FAIL;
+ }
+
+ common->q_ctx_paddr = pci_map_single(cqm_handle->dev, common->q_ctx_vaddr,
+ qinfo->q_ctx_size, PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, common->q_ctx_paddr) != 0) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_ctx_vaddr));
+ cqm_kfree_align(common->q_ctx_vaddr);
+ common->q_ctx_vaddr = NULL;
+ return CQM_FAIL;
+ }
+ } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ ret = cqm_nonrdma_queue_ctx_create_scq(object);
+ if (ret != 0)
+ return ret;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_nonrdma_queue_create - Create a queue for non-RDMA services
+ * @object: CQM object
+ */
+s32 cqm_nonrdma_queue_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_service *service = cqm_handle->service + object->service_type;
+ struct tag_cqm_buf *q_room_buf = &common->q_room_buf_1;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 wqe_number = qinfo->common.object.object_size;
+ u32 wqe_size = qinfo->wqe_size;
+ u32 order = service->buf_order;
+ u32 buf_number, buf_size;
+ bool tail = false; /* determine whether the linkwqe is at the end of the page */
+
+ /* When creating a CQ/SCQ queue, the page size is 4 KB and
+ * the linkwqe must be at the end of the page.
+ */
+ if (object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_CQ ||
+ object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ /* depth: 2^n-aligned; depth range: 256-32 K */
+ if (wqe_number < CQM_CQ_DEPTH_MIN ||
+ wqe_number > CQM_CQ_DEPTH_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_number));
+ return CQM_FAIL;
+ }
+ if (!cqm_check_align(wqe_number)) {
+ cqm_err(handle->dev_hdl, "Nonrdma queue alloc: wqe_number is not align on 2^n\n");
+ return CQM_FAIL;
+ }
+
+ order = CQM_4K_PAGE_ORDER; /* wqe page 4k */
+ tail = true; /* The linkwqe must be at the end of the page. */
+ buf_size = CQM_4K_PAGE_SIZE;
+ } else {
+ buf_size = (u32)(PAGE_SIZE << order);
+ }
+
+ /* Number of usable WQEs in each buffer; the -1 deducts the
+ * link wqe carried by each buffer.
+ */
+ qinfo->wqe_per_buf = (buf_size / wqe_size) - 1;
+ /* Total number of buffers required; this also equals the number of
+ * link wqes included in the depth transferred by the service.
+ */
+ buf_number = ALIGN((wqe_size * wqe_number), buf_size) / buf_size;
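+ /* Worked example (illustrative only, assuming wqe_size = 64B and
+ * buf_size = 4KB): wqe_per_buf = 4096 / 64 - 1 = 63, and for
+ * wqe_number = 256 the queue needs buf_number =
+ * ALIGN(64 * 256, 4096) / 4096 = 4 buffers, i.e. 4 link wqes and
+ * 252 usable WQEs.
+ */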
+
+ /* apply for buffer */
+ q_room_buf->buf_number = buf_number;
+ q_room_buf->buf_size = buf_size;
+ q_room_buf->page_number = buf_number << order;
+ if (cqm_buf_alloc(cqm_handle, q_room_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+ /* Fill the link wqe; wqe_number - buf_number is the number of WQEs
+ * excluding link wqes.
+ */
+ cqm_linkwqe_fill(q_room_buf, qinfo->wqe_per_buf, wqe_size,
+ wqe_number - buf_number, tail,
+ common->queue_link_mode);
+
+ /* create queue header */
+ qinfo->common.q_header_vaddr = cqm_kmalloc_align(sizeof(struct tag_cqm_queue_header),
+ GFP_KERNEL | __GFP_ZERO,
+ CQM_QHEAD_ALIGN_ORDER);
+ if (!qinfo->common.q_header_vaddr)
+ goto err1;
+
+ common->q_header_paddr = pci_map_single(cqm_handle->dev,
+ qinfo->common.q_header_vaddr,
+ sizeof(struct tag_cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, common->q_header_paddr) != 0) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_header_vaddr));
+ goto err2;
+ }
+
+ /* create queue ctx */
+ if (cqm_nonrdma_queue_ctx_create(object) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_nonrdma_queue_ctx_create));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
+ sizeof(struct tag_cqm_queue_header), PCI_DMA_BIDIRECTIONAL);
+err2:
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+err1:
+ cqm_buf_free(q_room_buf, cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_nonrdma_queue_delete - Delete the queues of non-RDMA services
+ * @object: CQM object
+ */
+void cqm_nonrdma_queue_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_buf *q_room_buf = &common->q_room_buf_1;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 index = qinfo->common.index;
+ u32 count = qinfo->index_count;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_delete_cnt);
+
+ /* The SCQ has an independent SCQN association. */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cqm_cla_table_get_queue return failure\n", __func__);
+ return;
+ }
+
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC)
+ cqm_object_table_remove(cqm_handle, object_table, index,
+ object, false);
+ else
+ cqm_object_table_remove(cqm_handle, object_table, index,
+ object, true);
+ }
+
+ /* wait for completion to ensure that all references to
+ * the QPC are complete
+ */
+ if (atomic_dec_and_test(&object->refcount) != 0)
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Nonrdma queue del: object is referred by others, has to wait for completion\n");
+
+ wait_for_completion(&object->free);
+ destroy_completion(&object->free);
+
+ /* If the q header exists, release. */
+ if (qinfo->common.q_header_vaddr) {
+ pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
+ sizeof(struct tag_cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+ }
+
+ /* RQ deletion in TOE SRQ mode */
+ if (common->queue_link_mode == CQM_QUEUE_TOE_SRQ_LINK_MODE) {
+ cqm_dbg("Nonrdma queue del: delete srq used rq\n");
+ cqm_srq_used_rq_delete(&common->object);
+ } else {
+ /* If q room exists, release. */
+ cqm_buf_free(q_room_buf, cqm_handle);
+ }
+ /* SRQ and SCQ have independent CTXs; release them here. */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+ /* The CTX of the nonrdma SRQ is
+ * allocated independently.
+ */
+ if (common->q_ctx_vaddr) {
+ pci_unmap_single(cqm_handle->dev, common->q_ctx_paddr,
+ qinfo->q_ctx_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(common->q_ctx_vaddr);
+ common->q_ctx_vaddr = NULL;
+ }
+ } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ /* The CTX of the nonrdma SCQ is managed by BAT/CLA. */
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* release the index to the bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+ }
+}
+
+static s32 cqm_rdma_queue_ctx_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_rdma_qinfo *qinfo = container_of(common, struct tag_cqm_rdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 index;
+
+ if (object->object_type == CQM_OBJECT_RDMA_SRQ ||
+ object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ if (object->object_type == CQM_OBJECT_RDMA_SRQ)
+ cla_table = cqm_cla_table_get(bat_table,
+ CQM_BAT_ENTRY_T_SRQC);
+ else
+ cla_table = cqm_cla_table_get(bat_table,
+ CQM_BAT_ENTRY_T_SCQC);
+
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(rdma_cqm_cla_table_get));
+ return CQM_FAIL;
+ }
+
+ /* bitmap applies for index */
+ bitmap = &cla_table->bitmap;
+ if (qinfo->common.index == CQM_INDEX_INVALID) {
+ qinfo->index_count = (ALIGN(qinfo->q_ctx_size,
+ cla_table->obj_size)) /
+ cla_table->obj_size;
+ qinfo->common.index =
+ cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
+ qinfo->index_count,
+ cqm_handle->func_capability.xid_alloc_mode);
+ if (qinfo->common.index >= bitmap->max_num) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(rdma_cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+ } else {
+ /* apply for reserved index */
+ qinfo->index_count = (ALIGN(qinfo->q_ctx_size, cla_table->obj_size)) /
+ cla_table->obj_size;
+ index = cqm_bitmap_alloc_reserved(bitmap, qinfo->index_count,
+ qinfo->common.index);
+ if (index != qinfo->common.index) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_alloc_reserved));
+ return CQM_FAIL;
+ }
+ }
+
+ /* find the trunk page from BAT/CLA and allocate the buffer */
+ qinfo->common.q_ctx_vaddr =
+ cqm_cla_get_lock(cqm_handle, cla_table, qinfo->common.index,
+ qinfo->index_count, &qinfo->common.q_ctx_paddr);
+ if (!qinfo->common.q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(rdma_cqm_cla_get_lock));
+ cqm_bitmap_free(bitmap, qinfo->common.index, qinfo->index_count);
+ return CQM_FAIL;
+ }
+
+ /* associate index and object */
+ object_table = &cla_table->obj_table;
+ if (cqm_object_table_insert(cqm_handle, object_table, qinfo->common.index, object,
+ true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(rdma_cqm_object_table_insert));
+ cqm_cla_put(cqm_handle, cla_table, qinfo->common.index,
+ qinfo->index_count);
+ cqm_bitmap_free(bitmap, qinfo->common.index, qinfo->index_count);
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_rdma_queue_create - Create rdma queue
+ * @object: CQM object
+ */
+s32 cqm_rdma_queue_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_rdma_qinfo *qinfo = container_of(common, struct tag_cqm_rdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_service *service = cqm_handle->service + object->service_type;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *q_room_buf = NULL;
+ u32 order = service->buf_order;
+ u32 buf_size = (u32)(PAGE_SIZE << order);
+
+ if (qinfo->room_header_alloc) {
+ /* apply for queue room buffer */
+ if (qinfo->common.current_q_room == CQM_RDMA_Q_ROOM_1)
+ q_room_buf = &qinfo->common.q_room_buf_1;
+ else
+ q_room_buf = &qinfo->common.q_room_buf_2;
+
+ q_room_buf->buf_number = ALIGN(object->object_size, buf_size) /
+ buf_size;
+ q_room_buf->page_number = (q_room_buf->buf_number << order);
+ q_room_buf->buf_size = buf_size;
+ if (cqm_buf_alloc(cqm_handle, q_room_buf, true) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+
+ /* queue header */
+ qinfo->common.q_header_vaddr =
+ cqm_kmalloc_align(sizeof(struct tag_cqm_queue_header),
+ GFP_KERNEL | __GFP_ZERO,
+ CQM_QHEAD_ALIGN_ORDER);
+ if (!qinfo->common.q_header_vaddr)
+ goto err1;
+
+ qinfo->common.q_header_paddr =
+ pci_map_single(cqm_handle->dev,
+ qinfo->common.q_header_vaddr,
+ sizeof(struct tag_cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev,
+ qinfo->common.q_header_paddr) != 0) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_header_vaddr));
+ goto err2;
+ }
+ }
+
+ /* queue ctx */
+ if (cqm_rdma_queue_ctx_create(object) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_rdma_queue_ctx_create));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ if (qinfo->room_header_alloc)
+ pci_unmap_single(cqm_handle->dev, qinfo->common.q_header_paddr,
+ sizeof(struct tag_cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+err2:
+ if (qinfo->room_header_alloc) {
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+ }
+err1:
+ if (qinfo->room_header_alloc)
+ cqm_buf_free(q_room_buf, cqm_handle);
+
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_rdma_queue_delete - Delete rdma queue
+ * @object: CQM object
+ */
+void cqm_rdma_queue_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_rdma_qinfo *qinfo = container_of(common, struct tag_cqm_rdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_buf *q_room_buf = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 index = qinfo->common.index;
+ u32 count = qinfo->index_count;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_rdma_queue_delete_cnt);
+
+ if (qinfo->common.current_q_room == CQM_RDMA_Q_ROOM_1)
+ q_room_buf = &qinfo->common.q_room_buf_1;
+ else
+ q_room_buf = &qinfo->common.q_room_buf_2;
+
+ /* SCQ and SRQ are associated with independent SCQN and SRQN. */
+ if (object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cqm_cla_table_get return failure\n", __func__);
+ return;
+ }
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ cqm_object_table_remove(cqm_handle, object_table, index, object, true);
+ } else if (object->object_type == CQM_OBJECT_RDMA_SRQ) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SRQC);
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cqm_cla_table_get return failure\n", __func__);
+ return;
+ }
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ cqm_object_table_remove(cqm_handle, object_table, index, object, true);
+ }
+
+ /* wait for completion to make sure all references are complete */
+ if (atomic_dec_and_test(&object->refcount) != 0)
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Rdma queue del: object is referred by others, has to wait for completion\n");
+
+ wait_for_completion(&object->free);
+ destroy_completion(&object->free);
+
+ /* If the q header exists, release. */
+ if (qinfo->room_header_alloc && qinfo->common.q_header_vaddr) {
+ pci_unmap_single(cqm_handle->dev, qinfo->common.q_header_paddr,
+ sizeof(struct tag_cqm_queue_header), PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+ }
+
+ /* If q room exists, release. */
+ cqm_buf_free(q_room_buf, cqm_handle);
+
+ /* SRQ and SCQ have independent CTXs; release them here. */
+ if (object->object_type == CQM_OBJECT_RDMA_SRQ ||
+ object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* release the index to the bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+ }
+}
+
+/**
+ * cqm_rdma_table_create - Create RDMA-related entries
+ * @object: CQM object
+ */
+s32 cqm_rdma_table_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_mtt_rdmarc *common = container_of(object, struct tag_cqm_mtt_rdmarc,
+ object);
+ struct tag_cqm_rdma_table *rdma_table = container_of(common, struct tag_cqm_rdma_table,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *buf = &rdma_table->buf;
+
+ /* Buffers of one page or less are allocated at their actual size.
+ * RDMARC additionally requires physically contiguous memory.
+ */
+ if (object->object_size <= PAGE_SIZE ||
+ object->object_type == CQM_OBJECT_RDMARC) {
+ buf->buf_number = 1;
+ buf->page_number = buf->buf_number;
+ buf->buf_size = object->object_size;
+ buf->direct.va = pci_alloc_consistent(cqm_handle->dev,
+ buf->buf_size,
+ &buf->direct.pa);
+ if (!buf->direct.va)
+ return CQM_FAIL;
+ } else { /* page-by-page alignment greater than one page */
+ buf->buf_number = ALIGN(object->object_size, PAGE_SIZE) /
+ PAGE_SIZE;
+ buf->page_number = buf->buf_number;
+ buf->buf_size = PAGE_SIZE;
+ if (cqm_buf_alloc(cqm_handle, buf, true) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+ }
+
+ rdma_table->common.vaddr = (u8 *)(buf->direct.va);
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_rdma_table_delete - Delete RDMA-related Entries
+ * @object: CQM object
+ */
+void cqm_rdma_table_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_mtt_rdmarc *common = container_of(object, struct tag_cqm_mtt_rdmarc,
+ object);
+ struct tag_cqm_rdma_table *rdma_table = container_of(common, struct tag_cqm_rdma_table,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *buf = &rdma_table->buf;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_rdma_table_delete_cnt);
+
+ if (buf->buf_number == 1) {
+ if (buf->direct.va) {
+ pci_free_consistent(cqm_handle->dev, buf->buf_size,
+ buf->direct.va, buf->direct.pa);
+ buf->direct.va = NULL;
+ }
+ } else {
+ cqm_buf_free(buf, cqm_handle);
+ }
+}
+
+/**
+ * cqm_rdma_table_offset_addr - Obtain the address of the RDMA entry based on the offset
+ * @object: CQM object
+ * @offset: The offset is the index
+ * @paddr: dma physical addr
+ */
+u8 *cqm_rdma_table_offset_addr(struct tag_cqm_object *object, u32 offset, dma_addr_t *paddr)
+{
+ struct tag_cqm_mtt_rdmarc *common = container_of(object, struct tag_cqm_mtt_rdmarc,
+ object);
+ struct tag_cqm_rdma_table *rdma_table = container_of(common, struct tag_cqm_rdma_table,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *buf = &rdma_table->buf;
+ struct tag_cqm_buf_list *buf_node = NULL;
+ u32 buf_id, buf_offset;
+
+ if (offset < rdma_table->common.index_base ||
+ ((offset - rdma_table->common.index_base) >=
+ rdma_table->common.index_number)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(offset));
+ return NULL;
+ }
+
+ if (buf->buf_number == 1) {
+ buf_offset = (u32)((offset - rdma_table->common.index_base) *
+ (sizeof(dma_addr_t)));
+
+ *paddr = buf->direct.pa + buf_offset;
+ return ((u8 *)(buf->direct.va)) + buf_offset;
+ }
+
+ buf_id = (offset - rdma_table->common.index_base) /
+ (PAGE_SIZE / sizeof(dma_addr_t));
+ buf_offset = (u32)((offset - rdma_table->common.index_base) -
+ (buf_id * (PAGE_SIZE / sizeof(dma_addr_t))));
+ buf_offset = (u32)(buf_offset * sizeof(dma_addr_t));
+
+ if (buf_id >= buf->buf_number) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(buf_id));
+ return NULL;
+ }
+ buf_node = buf->buf_list + buf_id;
+ *paddr = buf_node->pa + buf_offset;
+
+ return ((u8 *)(buf->direct.va)) +
+ (offset - rdma_table->common.index_base) * (sizeof(dma_addr_t));
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.h
new file mode 100644
index 0000000..f82fda2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_OBJECT_INTERN_H
+#define CQM_OBJECT_INTERN_H
+
+#include "ossl_knl.h"
+#include "cqm_object.h"
+
+#define CQM_CQ_DEPTH_MAX 32768
+#define CQM_CQ_DEPTH_MIN 256
+
+/* linkwqe */
+#define CQM_LINK_WQE_CTRLSL_VALUE 2
+#define CQM_LINK_WQE_LP_VALID 1
+#define CQM_LINK_WQE_LP_INVALID 0
+#define CQM_LINK_WQE_OWNER_VALID 1
+#define CQM_LINK_WQE_OWNER_INVALID 0
+
+#define CQM_ADDR_COMBINE(high_addr, low_addr) \
+ ((((dma_addr_t)(high_addr)) << 32) + ((dma_addr_t)(low_addr)))
+#define CQM_ADDR_HI(addr) ((u32)((u64)(addr) >> 32))
+#define CQM_ADDR_LW(addr) ((u32)((u64)(addr) & 0xffffffff))
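+
+/* Illustrative round trip (not driver logic): for a 64-bit DMA address such
+ * as 0xAB12345678, CQM_ADDR_HI() yields 0xAB and CQM_ADDR_LW() yields
+ * 0x12345678, and CQM_ADDR_COMBINE(0xAB, 0x12345678) rebuilds the original
+ * address.
+ */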
+
+#define CQM_QPC_LAYOUT_TABLE_SIZE 16
+struct tag_cqm_qpc_layout_table_node {
+ u32 type;
+ u32 size;
+ u32 offset;
+ struct tag_cqm_object *object;
+};
+
+struct tag_cqm_qpc_mpt_info {
+ struct tag_cqm_qpc_mpt common;
+ /* Different services have different QPC sizes.
+ * A large QPC/MPT occupies several consecutive indexes in the bitmap.
+ */
+ u32 index_count;
+ struct tag_cqm_qpc_layout_table_node qpc_layout_table[CQM_QPC_LAYOUT_TABLE_SIZE];
+};
+
+struct tag_cqm_nonrdma_qinfo {
+ struct tag_cqm_queue common;
+ u32 wqe_size;
+ /* Number of WQEs in each buffer (excluding link WQEs).
+ * For SRQ, the value is the number of WQEs contained in a container.
+ */
+ u32 wqe_per_buf;
+ u32 q_ctx_size;
+ /* When different services use CTXs of different sizes,
+ * a large CTX occupies multiple consecutive indexes in the bitmap.
+ */
+ u32 index_count;
+
+ /* add for srq */
+ u32 container_size;
+};
+
+struct tag_cqm_rdma_qinfo {
+ struct tag_cqm_queue common;
+ bool room_header_alloc;
+ /* This field is used to temporarily record the new object_size during
+ * CQ resize.
+ */
+ u32 new_object_size;
+ u32 q_ctx_size;
+ /* When different services use CTXs of different sizes,
+ * a large CTX occupies multiple consecutive indexes in the bitmap.
+ */
+ u32 index_count;
+};
+
+struct tag_cqm_rdma_table {
+ struct tag_cqm_mtt_rdmarc common;
+ struct tag_cqm_buf buf;
+};
+
+void cqm_container_free(u8 *srq_head_container, u8 *srq_tail_container,
+ struct tag_cqm_queue *common);
+s32 cqm_container_create(struct tag_cqm_object *object, u8 **container_addr, bool link);
+s32 cqm_share_recv_queue_create(struct tag_cqm_object *object);
+void cqm_share_recv_queue_delete(struct tag_cqm_object *object);
+s32 cqm_qpc_mpt_create(struct tag_cqm_object *object, bool low2bit_align_en);
+void cqm_qpc_mpt_delete(struct tag_cqm_object *object);
+s32 cqm_nonrdma_queue_create(struct tag_cqm_object *object);
+void cqm_nonrdma_queue_delete(struct tag_cqm_object *object);
+s32 cqm_rdma_queue_create(struct tag_cqm_object *object);
+void cqm_rdma_queue_delete(struct tag_cqm_object *object);
+s32 cqm_rdma_table_create(struct tag_cqm_object *object);
+void cqm_rdma_table_delete(struct tag_cqm_object *object);
+u8 *cqm_rdma_table_offset_addr(struct tag_cqm_object *object, u32 offset, dma_addr_t *paddr);
+
+#endif /* CQM_OBJECT_INTERN_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/readme.txt b/drivers/net/ethernet/huawei/hinic3/cqm/readme.txt
new file mode 100644
index 0000000..1e21b66
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/readme.txt
@@ -0,0 +1,3 @@
+
+2021/02/25/10:35 gf ovs fake vf hash clear support, change comment
+2019/03/28/15:17 wss provide stateful service queue and context management
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_crm.h b/drivers/net/ethernet/huawei/hinic3/hinic3_crm.h
new file mode 100644
index 0000000..5a11331
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_crm.h
@@ -0,0 +1,1280 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CRM_H
+#define HINIC3_CRM_H
+
+#include <linux/pci.h>
+
+#include "mpu_cmd_base_defs.h"
+
+#define HINIC3_DRV_VERSION "17.7.8.101"
+#define HINIC3_DRV_DESC "Intelligent Network Interface Card Driver"
+#define HIUDK_DRV_DESC "Intelligent Network Unified Driver"
+
+#define ARRAY_LEN(arr) ((int)((int)sizeof(arr) / (int)sizeof((arr)[0])))
+
+#define HINIC3_MGMT_VERSION_MAX_LEN 32
+
+#define HINIC3_FW_VERSION_NAME 16
+#define HINIC3_FW_VERSION_SECTION_CNT 4
+#define HINIC3_FW_VERSION_SECTION_BORDER 0xFF
+struct hinic3_fw_version {
+ u8 mgmt_ver[HINIC3_FW_VERSION_NAME];
+ u8 microcode_ver[HINIC3_FW_VERSION_NAME];
+ u8 boot_ver[HINIC3_FW_VERSION_NAME];
+};
+
+#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF
+
+/* Each driver is shown only its own capability structure, such as
+ * nic_service_cap or toe_service_cap, but not the full service_cap.
+ */
+enum hinic3_service_type {
+ SERVICE_T_NIC = 0,
+ SERVICE_T_OVS,
+ SERVICE_T_ROCE,
+ SERVICE_T_TOE,
+ SERVICE_T_IOE,
+ SERVICE_T_FC,
+ SERVICE_T_VBS,
+ SERVICE_T_IPSEC,
+ SERVICE_T_VIRTIO,
+ SERVICE_T_MIGRATE,
+ SERVICE_T_PPA,
+ SERVICE_T_CUSTOM,
+ SERVICE_T_VROCE,
+ SERVICE_T_CRYPT,
+ SERVICE_T_VSOCK,
+ SERVICE_T_BIFUR,
+ SERVICE_T_MAX,
+
+ /* Only used for interrupt resource management,
+ * to mark the requesting module
+ */
+ SERVICE_T_INTF = (1 << 15),
+ SERVICE_T_CQM = (1 << 16),
+};
+
+enum hinic3_ppf_flr_type {
+ STATELESS_FLR_TYPE,
+ STATEFUL_FLR_TYPE,
+};
+
+struct nic_service_cap {
+ u16 max_sqs;
+ u16 max_rqs;
+ u16 default_num_queues;
+ u16 outband_vlan_cfg_en;
+ u8 lro_enable;
+ u8 rsvd1[3];
+};
+
+struct ppa_service_cap {
+ u16 qpc_fake_vf_start;
+ u16 qpc_fake_vf_num;
+ u32 qpc_fake_vf_ctx_num;
+ u32 pctx_sz; /* 512B */
+ u32 bloomfilter_length;
+ u8 bloomfilter_en;
+ u8 rsvd;
+ u16 rsvd1;
+};
+
+struct bifur_service_cap {
+ u8 rsvd;
+};
+
+struct vbs_service_cap {
+ u16 vbs_max_volq;
+ u8 vbs_main_pf_enable;
+ u8 vbs_vsock_pf_enable;
+ u8 vbs_fushion_queue_pf_enable;
+};
+
+struct migr_service_cap {
+ u8 master_host_id;
+ u8 rsvd[3];
+};
+
+/* PF/VF ToE service resource structure */
+struct dev_toe_svc_cap {
+ /* PF resources */
+ u32 max_pctxs; /* Parent Context: max specifications 1M */
+ u32 max_cctxt;
+ u32 max_cqs;
+ u16 max_srqs;
+ u32 srq_id_start;
+ u32 max_mpts;
+};
+
+/* ToE services */
+struct toe_service_cap {
+ struct dev_toe_svc_cap dev_toe_cap;
+
+ bool alloc_flag;
+ u32 pctx_sz; /* 1KB */
+ u32 scqc_sz; /* 64B */
+};
+
+/* PF FC service resource structure defined */
+struct dev_fc_svc_cap {
+ /* PF Parent QPC */
+ u32 max_parent_qpc_num; /* max number is 2048 */
+
+ /* PF Child QPC */
+ u32 max_child_qpc_num; /* max number is 2048 */
+ u32 child_qpc_id_start;
+
+ /* PF SCQ */
+ u32 scq_num; /* 16 */
+
+ /* PF supports SRQ */
+ u32 srq_num; /* Number of SRQ is 2 */
+
+ u8 vp_id_start;
+ u8 vp_id_end;
+};
+
+/* FC services */
+struct fc_service_cap {
+ struct dev_fc_svc_cap dev_fc_cap;
+
+ /* Parent QPC */
+ u32 parent_qpc_size; /* 256B */
+
+ /* Child QPC */
+ u32 child_qpc_size; /* 256B */
+
+ /* SQ */
+ u32 sqe_size; /* 128B(in linked list mode) */
+
+ /* SCQ */
+ u32 scqc_size; /* Size of the Context 32B */
+ u32 scqe_size; /* 64B */
+
+ /* SRQ */
+ u32 srqc_size; /* Size of SRQ Context (64B) */
+ u32 srqe_size; /* 32B */
+};
+
+struct dev_roce_svc_own_cap {
+ u32 max_qps;
+ u32 max_cqs;
+ u32 max_srqs;
+ u32 max_mpts;
+ u32 max_drc_qps;
+
+ u32 cmtt_cl_start;
+ u32 cmtt_cl_end;
+ u32 cmtt_cl_sz;
+
+ u32 dmtt_cl_start;
+ u32 dmtt_cl_end;
+ u32 dmtt_cl_sz;
+
+ u32 wqe_cl_start;
+ u32 wqe_cl_end;
+ u32 wqe_cl_sz;
+
+ u32 qpc_entry_sz;
+ u32 max_wqes;
+ u32 max_rq_sg;
+ u32 max_sq_inline_data_sz;
+ u32 max_rq_desc_sz;
+
+ u32 rdmarc_entry_sz;
+ u32 max_qp_init_rdma;
+ u32 max_qp_dest_rdma;
+
+ u32 max_srq_wqes;
+ u32 reserved_srqs;
+ u32 max_srq_sge;
+ u32 srqc_entry_sz;
+
+ u32 max_msg_sz; /* Message size 2GB */
+};
+
+/* RDMA service capability structure */
+struct dev_rdma_svc_cap {
+ /* ROCE service unique parameter structure */
+ struct dev_roce_svc_own_cap roce_own_cap;
+};
+
+/* Defines the RDMA service capability flag */
+enum {
+ RDMA_BMME_FLAG_LOCAL_INV = (1 << 0),
+ RDMA_BMME_FLAG_REMOTE_INV = (1 << 1),
+ RDMA_BMME_FLAG_FAST_REG_WR = (1 << 2),
+ RDMA_BMME_FLAG_RESERVED_LKEY = (1 << 3),
+ RDMA_BMME_FLAG_TYPE_2_WIN = (1 << 4),
+ RDMA_BMME_FLAG_WIN_TYPE_2B = (1 << 5),
+
+ RDMA_DEV_CAP_FLAG_XRC = (1 << 6),
+ RDMA_DEV_CAP_FLAG_MEM_WINDOW = (1 << 7),
+ RDMA_DEV_CAP_FLAG_ATOMIC = (1 << 8),
+ RDMA_DEV_CAP_FLAG_APM = (1 << 9),
+};
+
+/* RDMA services */
+struct rdma_service_cap {
+ struct dev_rdma_svc_cap dev_rdma_cap;
+
+ u8 log_mtt; /* 1. the number of MTT PAs must be an integer power of 2
+ * 2. expressed as a logarithm. Each MTT table can
+ * contain 1, 2, 4, 8 or 16 PAs
+ */
+ /* todo: need to check whether related to max_mtt_seg */
+ u32 num_mtts; /* Number of MTT table (4M),
+ * is actually MTT seg number
+ */
+ u32 log_mtt_seg;
+ u32 mtt_entry_sz; /* MTT table size 8B, including 1 PA(64bits) */
+ u32 mpt_entry_sz; /* MPT table size (64B) */
+
+ u32 dmtt_cl_start;
+ u32 dmtt_cl_end;
+ u32 dmtt_cl_sz;
+
+ u8 log_rdmarc; /* 1. the number of RDMArc PAs must be an integer power of 2
+ * 2. expressed as a logarithm. Each table can
+ * contain 1, 2, 4, 8 or 16 PAs
+ */
+
+ u32 reserved_qps; /* Number of reserved QP */
+ u32 max_sq_sg; /* Maximum SGE number of SQ (8) */
+ u32 max_sq_desc_sz; /* Maximum WQE size of SQ (1024B); maximum inline
+ * size is 960B (944B aligned up to 960B),
+ * 960B=>wqebb alignment=>1024B
+ */
+ u32 wqebb_size; /* Currently 64B and 128B are supported;
+ * defined as 64 bytes
+ */
+
+ u32 max_cqes; /* Size of the depth of the CQ (64K-1) */
+ u32 reserved_cqs; /* Number of reserved CQ */
+ u32 cqc_entry_sz; /* Size of the CQC (64B/128B) */
+ u32 cqe_size; /* Size of CQE (32B) */
+
+ u32 reserved_mrws; /* Number of reserved MR/MR Window */
+
+ u32 max_fmr_maps; /* max MAP of FMR,
+ * (1 << (32-ilog2(num_mpt)))-1;
+ */
+
+ /* todo: max value needs to be confirmed */
+ /* MTT table number of Each MTT seg(3) */
+
+ u32 log_rdmarc_seg; /* table number of each RDMArc seg(3) */
+
+ /* Timeout time. Formula: Tr = 4.096us * 2^local_ca_ack_delay, range [Tr, 4Tr] */
+ u32 local_ca_ack_delay;
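+ /* e.g. (illustrative) local_ca_ack_delay = 14 gives Tr = 4.096us * 2^14 ~= 67ms */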
+ u32 num_ports; /* Physical port number */
+
+ u32 db_page_size; /* Size of the DB (4KB) */
+ u32 direct_wqe_size; /* Size of the DWQE (256B) */
+
+ u32 num_pds; /* Maximum number of PD (128K) */
+ u32 reserved_pds; /* Number of reserved PD */
+ u32 max_xrcds; /* Maximum number of xrcd (64K) */
+ u32 reserved_xrcds; /* Number of reserved xrcd */
+
+ u32 max_gid_per_port; /* gid number (16) of each port */
+ u32 gid_entry_sz; /* RoCE v2 GID entry is 32B,
+ * compatible with the RoCE v1 expansion
+ */
+
+ u32 reserved_lkey; /* local_dma_lkey */
+ u32 num_comp_vectors; /* Number of complete vector (32) */
+ u32 page_size_cap; /* Supports 4K,8K,64K,256K,1M and 4M page_size */
+
+ u32 flags; /* RDMA some identity */
+ u32 max_frpl_len; /* Maximum number of pages frmr registration */
+ u32 max_pkeys; /* Number of supported pkey group */
+};
+
+/* PF OVS service resource structure defined */
+struct dev_ovs_svc_cap {
+ u32 max_pctxs; /* Parent Context: max specifications 1M */
+ u32 fake_vf_max_pctx;
+ u16 fake_vf_num;
+ u16 fake_vf_start_id;
+ u8 dynamic_qp_en;
+};
+
+/* OVS services */
+struct ovs_service_cap {
+ struct dev_ovs_svc_cap dev_ovs_cap;
+
+ u32 pctx_sz; /* 512B */
+};
+
+/* PF IPsec service resource structure defined */
+struct dev_ipsec_svc_cap {
+ u32 max_sactxs; /* max IPsec SA context num */
+ u16 max_cqs; /* max IPsec SCQC num */
+ u16 rsvd0;
+};
+
+/* IPsec services */
+struct ipsec_service_cap {
+ struct dev_ipsec_svc_cap dev_ipsec_cap;
+ u32 sactx_sz; /* 512B */
+};
+
+/* Defines the IRQ information structure */
+struct irq_info {
+ u16 msix_entry_idx; /* IRQ corresponding index number */
+ u32 irq_id; /* the IRQ number from OS */
+};
+
+struct interrupt_info {
+ u32 lli_set;
+ u32 interrupt_coalesc_set;
+ u16 msix_index;
+ u8 lli_credit_limit;
+ u8 lli_timer_cfg;
+ u8 pending_limt;
+ u8 coalesc_timer_cfg;
+ u8 resend_timer_cfg;
+};
+
+enum hinic3_msix_state {
+ HINIC3_MSIX_ENABLE,
+ HINIC3_MSIX_DISABLE,
+};
+
+enum hinic3_msix_auto_mask {
+ HINIC3_CLR_MSIX_AUTO_MASK,
+ HINIC3_SET_MSIX_AUTO_MASK,
+};
+
+enum func_type {
+ TYPE_PF,
+ TYPE_VF,
+ TYPE_PPF,
+ TYPE_UNKNOWN,
+};
+
+enum func_nic_state {
+ HINIC3_FUNC_NIC_DEL,
+ HINIC3_FUNC_NIC_ADD,
+};
+
+struct hinic3_init_para {
+ /* Record hinic_pcidev or NDIS_Adapter pointer address */
+ void *adapter_hdl;
+ /* Record pcidev or Handler pointer address
+ * for example: ioremap interface input parameter
+ */
+ void *pcidev_hdl;
+ /* Record pcidev->dev or Handler pointer address, which is used for
+ * DMA address allocation or as the dev_err print parameter
+ */
+ void *dev_hdl;
+
+ /* Configure virtual address, PF is bar1, VF is bar0/1 */
+ void *cfg_reg_base;
+ /* interrupt configuration register address, PF is bar2, VF is bar2/3
+ */
+ void *intr_reg_base;
+ /* for PF bar3 virtual address, if function is VF should set to NULL */
+ void *mgmt_reg_base;
+
+ u64 db_dwqe_len;
+ u64 db_base_phy;
+ /* the doorbell address, bar4/5 higher 4M space */
+ void *db_base;
+ /* direct wqe 4M, follow the doorbell address space */
+ void *dwqe_mapping;
+ void **hwdev;
+ void *chip_node;
+ /* if use polling mode, set it true */
+ bool poll;
+
+ u16 probe_fault_level;
+};
+
+/* B200 config BAR45 4MB, DB & DWQE both 2MB */
+#define HINIC3_DB_DWQE_SIZE 0x00400000
+
+/* db/dwqe page size: 4K */
+#define HINIC3_DB_PAGE_SIZE 0x00001000ULL
+#define HINIC3_DWQE_OFFSET 0x00000800ULL
+
+#define HINIC3_DB_MAX_AREAS (HINIC3_DB_DWQE_SIZE / HINIC3_DB_PAGE_SIZE)
+
+#ifndef IFNAMSIZ
+#define IFNAMSIZ 16
+#endif
+#define MAX_FUNCTION_NUM 4096
+
+struct card_node {
+ struct list_head node;
+ struct list_head func_list;
+ char chip_name[IFNAMSIZ];
+ int chip_id;
+ void *log_info;
+ void *dbgtool_info;
+ void *func_handle_array[MAX_FUNCTION_NUM];
+ unsigned char bus_num;
+ u16 func_num;
+ u32 rsvd1;
+ atomic_t channel_busy_cnt;
+ void *priv_data;
+ u64 rsvd2;
+};
+
+#define HINIC3_SYNFW_TIME_PERIOD (60 * 60 * 1000)
+#define HINIC3_SYNC_YEAR_OFFSET 1900
+#define HINIC3_SYNC_MONTH_OFFSET 1
+
+#define FAULT_SHOW_STR_LEN 16
+
+enum hinic3_fault_source_type {
+ /* same as FAULT_TYPE_CHIP */
+ HINIC3_FAULT_SRC_HW_MGMT_CHIP = 0,
+ /* same as FAULT_TYPE_UCODE */
+ HINIC3_FAULT_SRC_HW_MGMT_UCODE,
+ /* same as FAULT_TYPE_MEM_RD_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_MEM_RD_TIMEOUT,
+ /* same as FAULT_TYPE_MEM_WR_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_MEM_WR_TIMEOUT,
+ /* same as FAULT_TYPE_REG_RD_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_REG_RD_TIMEOUT,
+ /* same as FAULT_TYPE_REG_WR_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_REG_WR_TIMEOUT,
+ HINIC3_FAULT_SRC_SW_MGMT_UCODE,
+ HINIC3_FAULT_SRC_MGMT_WATCHDOG,
+ HINIC3_FAULT_SRC_MGMT_RESET = 8,
+ HINIC3_FAULT_SRC_HW_PHY_FAULT,
+ HINIC3_FAULT_SRC_TX_PAUSE_EXCP,
+ HINIC3_FAULT_SRC_PCIE_LINK_DOWN = 20,
+ HINIC3_FAULT_SRC_HOST_HEARTBEAT_LOST = 21,
+ HINIC3_FAULT_SRC_TX_TIMEOUT,
+ HINIC3_FAULT_SRC_TYPE_MAX,
+};
+
+union hinic3_fault_hw_mgmt {
+ u32 val[4];
+ /* valid only if type == FAULT_TYPE_CHIP */
+ struct {
+ u8 node_id;
+ /* enum hinic_fault_err_level */
+ u8 err_level;
+ u16 err_type;
+ u32 err_csr_addr;
+ u32 err_csr_value;
+ /* func_id valid only if err_level == FAULT_LEVEL_SERIOUS_FLR */
+ u8 rsvd1;
+ u8 host_id;
+ u16 func_id;
+ } chip;
+
+ /* valid only if type == FAULT_TYPE_UCODE */
+ struct {
+ u8 cause_id;
+ u8 core_id;
+ u8 c_id;
+ u8 rsvd3;
+ u32 epc;
+ u32 rsvd4;
+ u32 rsvd5;
+ } ucode;
+
+ /* valid only if type == FAULT_TYPE_MEM_RD_TIMEOUT ||
+ * FAULT_TYPE_MEM_WR_TIMEOUT
+ */
+ struct {
+ u32 err_csr_ctrl;
+ u32 err_csr_data;
+ u32 ctrl_tab;
+ u32 mem_index;
+ } mem_timeout;
+
+ /* valid only if type == FAULT_TYPE_REG_RD_TIMEOUT ||
+ * FAULT_TYPE_REG_WR_TIMEOUT
+ */
+ struct {
+ u32 err_csr;
+ u32 rsvd6;
+ u32 rsvd7;
+ u32 rsvd8;
+ } reg_timeout;
+
+ struct {
+ /* 0: read; 1: write */
+ u8 op_type;
+ u8 port_id;
+ u8 dev_ad;
+ u8 rsvd9;
+ u32 csr_addr;
+ u32 op_data;
+ u32 rsvd10;
+ } phy_fault;
+};
+
+/* defined by chip */
+struct hinic3_fault_event {
+ /* enum hinic_fault_type */
+ u8 type;
+ u8 fault_level; /* sdk write fault level for uld event */
+ u8 rsvd0[2];
+ union hinic3_fault_hw_mgmt event;
+};
+
+struct hinic3_cmd_fault_event {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+ struct hinic3_fault_event event;
+};
+
+struct hinic3_sriov_state_info {
+ u8 enable;
+ u16 num_vfs;
+};
+
+enum hinic3_comm_event_type {
+ EVENT_COMM_PCIE_LINK_DOWN,
+ EVENT_COMM_HEART_LOST,
+ EVENT_COMM_FAULT,
+ EVENT_COMM_SRIOV_STATE_CHANGE,
+ EVENT_COMM_CARD_REMOVE,
+ EVENT_COMM_MGMT_WATCHDOG,
+ EVENT_COMM_MULTI_HOST_MGMT,
+};
+
+enum hinic3_event_service_type {
+ EVENT_SRV_COMM = 0,
+#define SERVICE_EVENT_BASE (EVENT_SRV_COMM + 1)
+ EVENT_SRV_NIC = SERVICE_EVENT_BASE + SERVICE_T_NIC,
+ EVENT_SRV_MIGRATE = SERVICE_EVENT_BASE + SERVICE_T_MIGRATE,
+};
+
+#define HINIC3_SRV_EVENT_TYPE(svc, type) ((((u32)(svc)) << 16) | (type))
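+/* Example (illustrative): HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM,
+ * EVENT_COMM_FAULT) packs the service id into the upper 16 bits and the
+ * event type into the lower 16 bits of a single u32.
+ */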
+#ifndef HINIC3_EVENT_DATA_SIZE
+#define HINIC3_EVENT_DATA_SIZE 104
+#endif
+struct hinic3_event_info {
+ u16 service; /* enum hinic3_event_service_type */
+ u16 type;
+ u8 event_data[HINIC3_EVENT_DATA_SIZE];
+};
+
+typedef void (*hinic3_event_handler)(void *handle, struct hinic3_event_info *event);
+
+struct hinic3_func_nic_state {
+ u8 state;
+ u8 rsvd0;
+ u16 func_idx;
+
+ u8 vroce_flag;
+ u8 rsvd1[15];
+};
+
+/* *
+ * @brief hinic3_event_register - register hardware event
+ * @param dev: device pointer to hwdev
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ */
+void hinic3_event_register(void *dev, void *pri_handle,
+ hinic3_event_handler callback);
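+
+/* Usage sketch (hypothetical caller, not part of this driver):
+ *
+ *	static void my_event_cb(void *priv, struct hinic3_event_info *event)
+ *	{
+ *		if (event->service == EVENT_SRV_COMM &&
+ *		    event->type == EVENT_COMM_FAULT)
+ *			handle_fault(priv, event->event_data);
+ *	}
+ *
+ *	hinic3_event_register(hwdev, priv, my_event_cb);
+ *	...
+ *	hinic3_event_unregister(hwdev);
+ */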
+
+/* *
+ * @brief hinic3_event_unregister - unregister hardware event
+ * @param dev: device pointer to hwdev
+ */
+void hinic3_event_unregister(void *dev);
+
+/* *
+ * @brief hinic3_set_msix_auto_mask_state - set msix auto mask state
+ * @param hwdev: device pointer to hwdev
+ * @param msix_idx: msix id
+ * @param flag: msix auto_mask flag, 0-clear, 1-set
+ */
+void hinic3_set_msix_auto_mask_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_auto_mask flag);
+
+/* *
+ * @brief hinic3_set_msix_state - set msix state
+ * @param hwdev: device pointer to hwdev
+ * @param msix_idx: msix id
+ * @param flag: msix state flag, 0-enable, 1-disable
+ */
+void hinic3_set_msix_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_state flag);
+
+/* *
+ * @brief hinic3_misx_intr_clear_resend_bit - clear msix resend bit
+ * @param hwdev: device pointer to hwdev
+ * @param msix_idx: msix id
+ * @param clear_resend_en: 1-clear
+ */
+void hinic3_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx,
+ u8 clear_resend_en);
+
+/* *
+ * @brief hinic3_set_interrupt_cfg_direct - set interrupt cfg
+ * @param hwdev: device pointer to hwdev
+ * @param interrupt_para: interrupt info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_interrupt_cfg_direct(void *hwdev,
+ struct interrupt_info *info,
+ u16 channel);
+
+int hinic3_set_interrupt_cfg(void *dev, struct interrupt_info info,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_interrupt_cfg - get interrupt cfg
+ * @param dev: device pointer to hwdev
+ * @param info: interrupt info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_interrupt_cfg(void *dev, struct interrupt_info *info,
+ u16 channel);
+
+/* *
+ * @brief hinic3_alloc_irqs - alloc irq
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param num: alloc number
+ * @param irq_info_array: alloc irq info
+ * @param act_num: alloc actual number
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_alloc_irqs(void *hwdev, enum hinic3_service_type type, u16 num,
+ struct irq_info *irq_info_array, u16 *act_num);
+
+/* *
+ * @brief hinic3_free_irq - free irq
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param irq_id: irq id
+ */
+void hinic3_free_irq(void *hwdev, enum hinic3_service_type type, u32 irq_id);
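+
+/* Usage sketch (hypothetical caller, not part of this driver):
+ *
+ *	struct irq_info irqs[4];
+ *	u16 act_num = 0;
+ *	u16 i;
+ *
+ *	if (hinic3_alloc_irqs(hwdev, SERVICE_T_NIC, 4, irqs, &act_num) == 0) {
+ *		// request_irq() on irqs[i].irq_id, program irqs[i].msix_entry_idx
+ *		for (i = 0; i < act_num; i++)
+ *			hinic3_free_irq(hwdev, SERVICE_T_NIC, irqs[i].irq_id);
+ *	}
+ */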
+
+/* *
+ * @brief hinic3_alloc_ceqs - alloc ceqs
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param num: alloc ceq number
+ * @param ceq_id_array: alloc ceq_id_array
+ * @param act_num: alloc actual number
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_alloc_ceqs(void *hwdev, enum hinic3_service_type type, int num,
+ int *ceq_id_array, int *act_num);
+
+/* *
+ * @brief hinic3_free_ceq - free ceq
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param ceq_id: ceq id
+ */
+void hinic3_free_ceq(void *hwdev, enum hinic3_service_type type, int ceq_id);
+
+/* *
+ * @brief hinic3_get_pcidev_hdl - get pcidev_hdl
+ * @param hwdev: device pointer to hwdev
+ * @retval non-null: success
+ * @retval null: failure
+ */
+void *hinic3_get_pcidev_hdl(void *hwdev);
+
+/* *
+ * @brief hinic3_ppf_idx - get ppf id
+ * @param hwdev: device pointer to hwdev
+ * @retval ppf id
+ */
+u8 hinic3_ppf_idx(void *hwdev);
+
+/* *
+ * @brief hinic3_get_chip_present_flag - get chip present flag
+ * @param hwdev: device pointer to hwdev
+ * @retval 1: chip is present
+ * @retval 0: chip is absent
+ */
+int hinic3_get_chip_present_flag(const void *hwdev);
+
+/* *
+ * @brief hinic3_get_heartbeat_status - get heartbeat status
+ * @param hwdev: device pointer to hwdev
+ * @retval heartbeat status
+ */
+u32 hinic3_get_heartbeat_status(void *hwdev);
+
+/* *
+ * @brief hinic3_support_nic - function support nic
+ * @param hwdev: device pointer to hwdev
+ * @param cap: nic service capability
+ * @retval true: function support nic
+ * @retval false: function not support nic
+ */
+bool hinic3_support_nic(void *hwdev, struct nic_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_ipsec - function support ipsec
+ * @param hwdev: device pointer to hwdev
+ * @param cap: ipsec service capability
+ * @retval true: function support ipsec
+ * @retval false: function not support ipsec
+ */
+bool hinic3_support_ipsec(void *hwdev, struct ipsec_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_roce - function support roce
+ * @param hwdev: device pointer to hwdev
+ * @param cap: roce service capability
+ * @retval true: function support roce
+ * @retval false: function not support roce
+ */
+bool hinic3_support_roce(void *hwdev, struct rdma_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_fc - function support fc
+ * @param hwdev: device pointer to hwdev
+ * @param cap: fc service capability
+ * @retval true: function support fc
+ * @retval false: function not support fc
+ */
+bool hinic3_support_fc(void *hwdev, struct fc_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_rdma - function support rdma
+ * @param hwdev: device pointer to hwdev
+ * @param cap: rdma service capability
+ * @retval true: function support rdma
+ * @retval false: function not support rdma
+ */
+bool hinic3_support_rdma(void *hwdev, struct rdma_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_ovs - function support ovs
+ * @param hwdev: device pointer to hwdev
+ * @param cap: ovs service capability
+ * @retval true: function support ovs
+ * @retval false: function not support ovs
+ */
+bool hinic3_support_ovs(void *hwdev, struct ovs_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_vbs - function support vbs
+ * @param hwdev: device pointer to hwdev
+ * @param cap: vbs service capability
+ * @retval true: function support vbs
+ * @retval false: function not support vbs
+ */
+bool hinic3_support_vbs(void *hwdev, struct vbs_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_toe - function support toe
+ * @param hwdev: device pointer to hwdev
+ * @param cap: toe service capability
+ * @retval true: function support toe
+ * @retval false: function not support toe
+ */
+bool hinic3_support_toe(void *hwdev, struct toe_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_ppa - function support ppa
+ * @param hwdev: device pointer to hwdev
+ * @param cap: ppa service capability
+ * @retval true: function support ppa
+ * @retval false: function not support ppa
+ */
+bool hinic3_support_ppa(void *hwdev, struct ppa_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_bifur - function support bifur
+ * @param hwdev: device pointer to hwdev
+ * @param cap: bifur service capability
+ * @retval true: function support bifur
+ * @retval false: function not support bifur
+ */
+bool hinic3_support_bifur(void *hwdev, struct bifur_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_migr - function support migrate
+ * @param hwdev: device pointer to hwdev
+ * @param cap: migrate service capability
+ * @retval true: function support migrate
+ * @retval false: function not support migrate
+ */
+bool hinic3_support_migr(void *hwdev, struct migr_service_cap *cap);
+
+/* *
+ * @brief hinic3_sync_time - sync time to hardware
+ * @param hwdev: device pointer to hwdev
+ * @param time: time to sync
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_sync_time(void *hwdev, u64 time);
+
+/* *
+ * @brief hinic3_disable_mgmt_msg_report - disable mgmt report msg
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_disable_mgmt_msg_report(void *hwdev);
+
+/* *
+ * @brief hinic3_func_for_mgmt - get function service type
+ * @param hwdev: device pointer to hwdev
+ * @retval true: function for mgmt
+ * @retval false: function is not for mgmt
+ */
+bool hinic3_func_for_mgmt(void *hwdev);
+
+/* *
+ * @brief hinic3_set_pcie_order_cfg - set pcie order cfg
+ * @param handle: device pointer to hwdev
+ */
+void hinic3_set_pcie_order_cfg(void *handle);
+
+/* *
+ * @brief hinic3_init_hwdev - call to init hwdev
+ * @param para: device pointer to para
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_init_hwdev(struct hinic3_init_para *para);
+
+/* *
+ * @brief hinic3_free_hwdev - free hwdev
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_hwdev(void *hwdev);
+
+/* *
+ * @brief hinic3_detect_hw_present - detect hardware present
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_detect_hw_present(void *hwdev);
+
+/* *
+ * @brief hinic3_record_pcie_error - record pcie error
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_record_pcie_error(void *hwdev);
+
+/* *
+ * @brief hinic3_shutdown_hwdev - shutdown hwdev
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_shutdown_hwdev(void *hwdev);
+
+/* *
+ * @brief hinic3_set_ppf_flr_type - set ppf flr type
+ * @param hwdev: device pointer to hwdev
+ * @param flr_type: ppf flr type
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_ppf_flr_type(void *hwdev, enum hinic3_ppf_flr_type flr_type);
+
+/* *
+ * @brief hinic3_set_ppf_tbl_hotreplace_flag - set os hotreplace flag in ppf function table
+ * @param hwdev: device pointer to hwdev
+ * @param flag: os hotreplace flag, 0 - not in os hotreplace, 1 - in os hotreplace
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_ppf_tbl_hotreplace_flag(void *hwdev, u8 flag);
+
+/* *
+ * @brief hinic3_get_mgmt_version - get management cpu version
+ * @param hwdev: device pointer to hwdev
+ * @param mgmt_ver: output management version
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_mgmt_version(void *hwdev, u8 *mgmt_ver, u8 version_size,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_fw_version - get firmware version
+ * @param hwdev: device pointer to hwdev
+ * @param fw_ver: firmware version
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_fw_version(void *hwdev, struct hinic3_fw_version *fw_ver,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_bond_create_mode - get bond create mode
+ * @param hwdev: device pointer to hwdev
+ * @retval bond create mode
+ */
+u8 hinic3_get_bond_create_mode(void *hwdev);
+
+/* *
+ * @brief hinic3_global_func_id - get global function id
+ * @param hwdev: device pointer to hwdev
+ * @retval global function id
+ */
+u16 hinic3_global_func_id(void *hwdev);
+
+/* *
+ * @brief hinic3_vector_to_eqn - vector to eq id
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param vector: vector
+ * @retval eq id
+ */
+int hinic3_vector_to_eqn(void *hwdev, enum hinic3_service_type type,
+ int vector);
+
+/* *
+ * @brief hinic3_glb_pf_vf_offset - get vf offset id of pf
+ * @param hwdev: device pointer to hwdev
+ * @retval vf offset id
+ */
+u16 hinic3_glb_pf_vf_offset(void *hwdev);
+
+/* *
+ * @brief hinic3_pf_id_of_vf - get pf id of vf
+ * @param hwdev: device pointer to hwdev
+ * @retval pf id
+ */
+u8 hinic3_pf_id_of_vf(void *hwdev);
+
+/* *
+ * @brief hinic3_func_type - get function type
+ * @param hwdev: device pointer to hwdev
+ * @retval function type
+ */
+enum func_type hinic3_func_type(void *hwdev);
+
+/* *
+ * @brief hinic3_get_stateful_enable - get stateful status
+ * @param hwdev: device pointer to hwdev
+ * @retval stateful enable status
+ */
+bool hinic3_get_stateful_enable(void *hwdev);
+
+/* *
+ * @brief hinic3_get_timer_enable - get timer status
+ * @param hwdev: device pointer to hwdev
+ * @retval timer enable status
+ */
+bool hinic3_get_timer_enable(void *hwdev);
+
+/* *
+ * @brief hinic3_host_oq_id_mask - get host oq id mask
+ * @param hwdev: device pointer to hwdev
+ * @retval oq id mask
+ */
+u8 hinic3_host_oq_id_mask(void *hwdev);
+
+/* *
+ * @brief hinic3_host_id - get host id
+ * @param hwdev: device pointer to hwdev
+ * @retval host id
+ */
+u8 hinic3_host_id(void *hwdev);
+
+/* *
+ * @brief hinic3_host_total_func - get host total function number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: host total function number
+ * @retval zero: failure
+ */
+u16 hinic3_host_total_func(void *hwdev);
+
+/* *
+ * @brief hinic3_func_max_nic_qnum - get max nic queue number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: max nic queue number
+ * @retval zero: failure
+ */
+u16 hinic3_func_max_nic_qnum(void *hwdev);
+
+/* *
+ * @brief hinic3_func_max_qnum - get max queue number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: max queue number
+ * @retval zero: failure
+ */
+u16 hinic3_func_max_qnum(void *hwdev);
+
+/* *
+ * @brief hinic3_ep_id - get ep id
+ * @param hwdev: device pointer to hwdev
+ * @retval ep id
+ */
+u8 hinic3_ep_id(void *hwdev); /* Obtain service_cap.ep_id */
+
+/* *
+ * @brief hinic3_er_id - get er id
+ * @param hwdev: device pointer to hwdev
+ * @retval er id
+ */
+u8 hinic3_er_id(void *hwdev); /* Obtain service_cap.er_id */
+
+/* *
+ * @brief hinic3_physical_port_id - get physical port id
+ * @param hwdev: device pointer to hwdev
+ * @retval physical port id
+ */
+u8 hinic3_physical_port_id(void *hwdev); /* Obtain service_cap.port_id */
+
+/* *
+ * @brief hinic3_func_max_vf - get vf number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: vf number
+ * @retval zero: failure
+ */
+u16 hinic3_func_max_vf(void *hwdev); /* Obtain service_cap.max_vf */
+
+/* *
+ * @brief hinic3_max_pf_num - get global max pf number
+ */
+u8 hinic3_max_pf_num(void *hwdev);
+
+/* *
+ * @brief hinic3_host_pf_num - get current host pf number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: pf number
+ * @retval zero: failure
+ */
+u32 hinic3_host_pf_num(void *hwdev); /* Obtain service_cap.pf_num */
+
+/* *
+ * @brief hinic3_host_pf_id_start - get current host pf id start
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: pf id start
+ * @retval zero: failure
+ */
+u32 hinic3_host_pf_id_start(void *hwdev); /* Obtain service_cap.pf_id_start */
+
+/* *
+ * @brief hinic3_pcie_itf_id - get pcie port id
+ * @param hwdev: device pointer to hwdev
+ * @retval pcie port id
+ */
+u8 hinic3_pcie_itf_id(void *hwdev);
+
+/* *
+ * @brief hinic3_vf_in_pf - get vf offset in pf
+ * @param hwdev: device pointer to hwdev
+ * @retval vf offset in pf
+ */
+u8 hinic3_vf_in_pf(void *hwdev);
+
+/* *
+ * @brief hinic3_cos_valid_bitmap - get cos valid bitmap
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cos_valid_bitmap(void *hwdev, u8 *func_dft_cos, u8 *port_cos_bitmap);
+
+/* *
+ * @brief hinic3_stateful_init - init stateful resource
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_stateful_init(void *hwdev);
+
+/* *
+ * @brief hinic3_stateful_deinit - deinit stateful resource
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_stateful_deinit(void *hwdev);
+
+/* *
+ * @brief hinic3_free_stateful - sdk remove free stateful resource
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_stateful(void *hwdev);
+
+/* *
+ * @brief hinic3_need_init_stateful_default - check whether default stateful init is needed
+ * @param hwdev: device pointer to hwdev
+ */
+bool hinic3_need_init_stateful_default(void *hwdev);
+
+/* *
+ * @brief hinic3_get_card_present_state - get card present state
+ * @param hwdev: device pointer to hwdev
+ * @param card_present_state: return card present state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_card_present_state(void *hwdev, bool *card_present_state);
+
+/* *
+ * @brief hinic3_func_rx_tx_flush - function flush
+ * @param hwdev: device pointer to hwdev
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_func_rx_tx_flush(void *hwdev, u16 channel, bool wait_io);
+
+/* *
+ * @brief hinic3_flush_mgmt_workq - flush the mgmt work queue when removing the function
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_flush_mgmt_workq(void *hwdev);
+
+/* *
+ * @brief hinic3_ceq_num - get toe ceq num
+ */
+u8 hinic3_ceq_num(void *hwdev);
+
+/* *
+ * @brief hinic3_intr_num - get intr num
+ */
+u16 hinic3_intr_num(void *hwdev);
+
+/* *
+ * @brief hinic3_flexq_en - get flexq en
+ */
+u8 hinic3_flexq_en(void *hwdev);
+
+/* *
+ * @brief hinic3_get_fake_vf_info - get fake_vf info
+ */
+int hinic3_get_fake_vf_info(void *hwdev, u8 *fake_vf_vld,
+ u8 *page_bit, u8 *pf_start_bit, u8 *map_host_id);
+
+/* *
+ * @brief hinic3_fault_event_report - report fault event
+ * @param hwdev: device pointer to hwdev
+ * @param src: fault event source, reference to enum hinic3_fault_source_type
+ * @param level: fault level, reference to enum hinic3_fault_err_level
+ */
+void hinic3_fault_event_report(void *hwdev, u16 src, u16 level);
+
+/* *
+ * @brief hinic3_probe_success - notify device probe successful
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_probe_success(void *hwdev);
+
+/* *
+ * @brief hinic3_set_func_svc_used_state - set function service used state
+ * @param hwdev: device pointer to hwdev
+ * @param svc_type: service type
+ * @param state: function used state
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_func_svc_used_state(void *hwdev, u16 svc_type, u8 state,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_self_test_result - get self test result
+ * @param hwdev: device pointer to hwdev
+ * @retval self test result
+ */
+u32 hinic3_get_self_test_result(void *hwdev);
+
+/* *
+ * @brief set_slave_host_enable - set slave host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id to set
+ * @param enable: true to enable the slave host, false to disable it
+ */
+void set_slave_host_enable(void *hwdev, u8 host_id, bool enable);
+
+/* *
+ * @brief hinic3_get_slave_bitmap - get slave host bitmap
+ * @param hwdev: device pointer to hwdev
+ * @param slave_host_bitmap: output slave host bitmap
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_slave_bitmap(void *hwdev, u8 *slave_host_bitmap);
+
+/* *
+ * @brief hinic3_get_slave_host_enable - get slave host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id to query
+ * @param slave_en: output slave host enable state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_slave_host_enable(void *hwdev, u8 host_id, u8 *slave_en);
+
+/* *
+ * @brief hinic3_set_host_migrate_enable - set migrate host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id to set
+ * @param enable: true to enable host migration, false to disable it
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_host_migrate_enable(void *hwdev, u8 host_id, bool enable);
+
+/* *
+ * @brief hinic3_get_host_migrate_enable - get migrate host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id to query
+ * @param migrate_en: output host migrate enable state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_host_migrate_enable(void *hwdev, u8 host_id, u8 *migrate_en);
+
+/* *
+ * @brief hinic3_is_slave_func - hwdev is slave func
+ * @param hwdev: device pointer to hwdev
+ * @param is_slave_func: output whether the function is a slave function
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_is_slave_func(const void *hwdev, bool *is_slave_func);
+
+/* *
+ * @brief hinic3_is_master_func - hwdev is master func
+ * @param hwdev: device pointer to hwdev
+ * @param is_master_func: output whether the function is a master function
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_is_master_func(const void *hwdev, bool *is_master_func);
+
+bool hinic3_is_multi_bm(void *hwdev);
+
+bool hinic3_is_slave_host(void *hwdev);
+
+bool hinic3_is_vm_slave_host(void *hwdev);
+
+bool hinic3_is_bm_slave_host(void *hwdev);
+
+bool hinic3_is_guest_vmsec_enable(void *hwdev);
+
+int hinic3_get_vfid_by_vfpci(void *hwdev, struct pci_dev *pdev, u16 *global_func_id);
+
+int hinic3_set_func_nic_state(void *hwdev, struct hinic3_func_nic_state *state);
+
+int hinic3_get_netdev_state(void *hwdev, u16 func_idx, int *opened);
+
+int hinic3_get_mhost_func_nic_enable(void *hwdev, u16 func_id, bool *en);
+
+int hinic3_get_dev_cap(void *hwdev);
+
+int hinic3_mbox_to_host_sync(void *hwdev, enum hinic3_mod_type mod,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout, u16 channel);
+
+int hinic3_get_func_vroce_enable(void *hwdev, u16 glb_func_idx, u8 *en);
+
+void hinic3_module_get(void *hwdev, enum hinic3_service_type type);
+void hinic3_module_put(void *hwdev, enum hinic3_service_type type);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c
new file mode 100644
index 0000000..9b5f017
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c
@@ -0,0 +1,1108 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/semaphore.h>
+
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_nic_dbg.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_rx.h"
+#include "hinic3_tx.h"
+#include "hinic3_dcb.h"
+#include "hinic3_nic.h"
+#include "hinic3_bond.h"
+#include "nic_mpu_cmd_defs.h"
+
+typedef int (*nic_driv_module)(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+struct nic_drv_module_handle {
+ enum driver_cmd_type driv_cmd_name;
+ nic_driv_module driv_func;
+};
+
+static int get_nic_drv_version(void *buf_out, const u32 *out_size)
+{
+ struct drv_version_info *ver_info = buf_out;
+ int err;
+
+ if (!buf_out) {
+ pr_err("Buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(*ver_info)) {
+ pr_err("Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, sizeof(*ver_info));
+ return -EINVAL;
+ }
+
+ err = snprintf(ver_info->ver, sizeof(ver_info->ver), "%s %s",
+ HINIC3_NIC_DRV_VERSION, "2025-05-01_00:00:03");
+ if (err < 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int get_tx_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u16 q_id;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get tx info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(u32)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect in buf size from user :%u, expect: %lu\n",
+ in_size, sizeof(u32));
+ return -EINVAL;
+ }
+
+ q_id = (u16)(*((u32 *)buf_in));
+
+ return hinic3_dbg_get_sq_info(nic_dev->hwdev, q_id, buf_out, *out_size);
+}
+
+static int get_q_num(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get queue number\n");
+ return -EFAULT;
+ }
+
+ if (!buf_out || !out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Get queue number para buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(u16)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(u16));
+ return -EINVAL;
+ }
+
+ *((u16 *)buf_out) = nic_dev->q_params.num_qps;
+
+ return 0;
+}
+
+static int get_tx_wqe_info(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ const struct wqe_info *info = buf_in;
+ u16 wqebb_cnt = 1;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get tx wqe info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(struct wqe_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(struct wqe_info));
+ return -EINVAL;
+ }
+
+ return hinic3_dbg_get_wqe_info(nic_dev->hwdev, (u16)info->q_id,
+ (u16)info->wqe_id, wqebb_cnt,
+ buf_out, (u16 *)out_size, HINIC3_SQ);
+}
+
+static int get_rx_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct nic_rq_info *rq_info = buf_out;
+ u16 q_id;
+ int err;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get rx info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(u32)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(u32));
+ return -EINVAL;
+ }
+
+ q_id = (u16)(*((u32 *)buf_in));
+
+ err = hinic3_dbg_get_rq_info(nic_dev->hwdev, q_id, buf_out, *out_size);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Get rq info failed, ret is %d.\n", err);
+ return err;
+ }
+
+ rq_info->delta = (u16)nic_dev->rxqs[q_id].delta;
+ rq_info->ci = (u16)(nic_dev->rxqs[q_id].cons_idx &
+ nic_dev->rxqs[q_id].q_mask);
+ rq_info->sw_pi = nic_dev->rxqs[q_id].next_to_update;
+ rq_info->msix_vector = nic_dev->rxqs[q_id].irq_id;
+
+ rq_info->coalesc_timer_cfg = nic_dev->rxqs[q_id].last_coalesc_timer_cfg;
+ rq_info->pending_limt = nic_dev->rxqs[q_id].last_pending_limt;
+
+ return 0;
+}
+
+static int get_rx_wqe_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct wqe_info *info = buf_in;
+ u16 wqebb_cnt = 1;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get rx wqe info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(struct wqe_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(struct wqe_info));
+ return -EINVAL;
+ }
+
+ return hinic3_dbg_get_wqe_info(nic_dev->hwdev, (u16)info->q_id,
+ (u16)info->wqe_id, wqebb_cnt,
+ buf_out, (u16 *)out_size, HINIC3_RQ);
+}
+
+static int get_rx_cqe_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct wqe_info *info = buf_in;
+ u16 q_id = 0;
+ u16 idx = 0;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get rx cqe info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out || !out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (in_size != sizeof(struct wqe_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(struct wqe_info));
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(struct hinic3_rq_cqe)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, sizeof(struct hinic3_rq_cqe));
+ return -EINVAL;
+ }
+ q_id = (u16)info->q_id;
+ idx = (u16)info->wqe_id;
+
+ if (q_id >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid q_id[%u] >= %u.\n", q_id,
+ nic_dev->q_params.num_qps);
+ return -EFAULT;
+ }
+ if (idx >= nic_dev->rxqs[q_id].q_depth) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid wqe idx[%u] >= %u.\n", idx,
+ nic_dev->rxqs[q_id].q_depth);
+ return -EFAULT;
+ }
+
+ memcpy(buf_out, nic_dev->rxqs[q_id].rx_info[idx].cqe,
+ sizeof(struct hinic3_rq_cqe));
+
+ return 0;
+}
+
+static void clean_nicdev_stats(struct hinic3_nic_dev *nic_dev)
+{
+ u64_stats_update_begin(&nic_dev->stats.syncp);
+ nic_dev->stats.netdev_tx_timeout = 0;
+ nic_dev->stats.tx_carrier_off_drop = 0;
+ nic_dev->stats.tx_invalid_qid = 0;
+ nic_dev->stats.rsvd1 = 0;
+ nic_dev->stats.rsvd2 = 0;
+ u64_stats_update_end(&nic_dev->stats.syncp);
+}
+
+static int clear_func_static(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ int i;
+
+ *out_size = 0;
+#ifndef HAVE_NETDEV_STATS_IN_NETDEV
+ memset(&nic_dev->net_stats, 0, sizeof(nic_dev->net_stats));
+#endif
+ clean_nicdev_stats(nic_dev);
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ hinic3_rxq_clean_stats(&nic_dev->rxqs[i].rxq_stats);
+ hinic3_txq_clean_stats(&nic_dev->txqs[i].txq_stats);
+ }
+
+ return 0;
+}
+
+static int get_loopback_mode(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_nic_loop_mode *mode = buf_out;
+
+ if (!out_size || !mode)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*mode)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*mode));
+ return -EINVAL;
+ }
+
+ return hinic3_get_loopback_mode(nic_dev->hwdev, (u8 *)&mode->loop_mode,
+ (u8 *)&mode->loop_ctrl);
+}
+
+static int set_loopback_mode(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct hinic3_nic_loop_mode *mode = buf_in;
+ int err;
+
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't set loopback mode\n");
+ return -EFAULT;
+ }
+
+ if (!mode || !out_size || in_size != sizeof(*mode))
+ return -EINVAL;
+
+ if (*out_size != sizeof(*mode)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*mode));
+ return -EINVAL;
+ }
+
+ err = hinic3_set_loopback_mode(nic_dev->hwdev, (u8)mode->loop_mode,
+ (u8)mode->loop_ctrl);
+ if (err == 0)
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set loopback mode %u en %u succeed\n",
+ mode->loop_mode, mode->loop_ctrl);
+
+ return err;
+}
+
+enum hinic3_nic_link_mode {
+ HINIC3_LINK_MODE_AUTO = 0,
+ HINIC3_LINK_MODE_UP,
+ HINIC3_LINK_MODE_DOWN,
+ HINIC3_LINK_MODE_MAX,
+};
+
+static int set_link_mode_param_valid(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ const u32 *out_size)
+{
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't set link mode\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !out_size ||
+ in_size != sizeof(enum hinic3_nic_link_mode))
+ return -EINVAL;
+
+ if (*out_size != sizeof(enum hinic3_nic_link_mode)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(enum hinic3_nic_link_mode));
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int set_link_mode(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const enum hinic3_nic_link_mode *link = buf_in;
+ u8 link_status;
+
+ if (set_link_mode_param_valid(nic_dev, buf_in, in_size, out_size))
+ return -EFAULT;
+
+ switch (*link) {
+ case HINIC3_LINK_MODE_AUTO:
+ if (hinic3_get_link_state(nic_dev->hwdev, &link_status))
+ link_status = false;
+ hinic3_link_status_change(nic_dev, (bool)link_status);
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set link mode: auto succeed, now is link %s\n",
+ (link_status ? "up" : "down"));
+ break;
+ case HINIC3_LINK_MODE_UP:
+ hinic3_link_status_change(nic_dev, true);
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set link mode: up succeed\n");
+ break;
+ case HINIC3_LINK_MODE_DOWN:
+ hinic3_link_status_change(nic_dev, false);
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set link mode: down succeed\n");
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid link mode %d to set\n", *link);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int set_pf_bw_limit(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u32 pf_bw_limit;
+ int err;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct net_device *net_dev = nic_dev->netdev;
+
+ if (hinic3_support_roce(nic_dev->hwdev, NULL) &&
+ hinic3_is_bond_dev_status_actived(net_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "The rate limit func is not supported when RoCE bonding is enabled\n");
+ return -EINVAL;
+ }
+
+ if (HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "To set VF bandwidth rate, please use ip link cmd\n");
+ return -EINVAL;
+ }
+
+ if (!buf_in || !buf_out || in_size != sizeof(u32) ||
+ !out_size || *out_size != sizeof(u8))
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(nic_dev->hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ nic_io->direct = HINIC3_NIC_TX;
+ pf_bw_limit = *((u32 *)buf_in);
+
+ err = hinic3_set_pf_bw_limit(nic_dev->hwdev, pf_bw_limit);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to set pf bandwidth limit to %u%%\n",
+ pf_bw_limit);
+ if (err < 0)
+ return err;
+ }
+
+ *((u8 *)buf_out) = (u8)err;
+
+ return 0;
+}
+
+static int set_rx_pf_bw_limit(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u32 pf_bw_limit;
+ int err;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct net_device *net_dev = nic_dev->netdev;
+
+ if (hinic3_support_roce(nic_dev->hwdev, NULL) &&
+ hinic3_is_bond_dev_status_actived(net_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "The rate limit func is not supported when RoCE bonding is enabled\n");
+ return -EINVAL;
+ }
+
+ if (HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "To set VF bandwidth rate, please use ip link cmd\n");
+ return -EINVAL;
+ }
+
+ if (!buf_in || !buf_out || in_size != sizeof(u32) || !out_size || *out_size != sizeof(u8))
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(nic_dev->hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ nic_io->direct = HINIC3_NIC_RX;
+ pf_bw_limit = *((u32 *)buf_in);
+
+ err = hinic3_set_pf_bw_limit(nic_dev->hwdev, pf_bw_limit);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to set pf bandwidth limit to %d%%\n",
+ pf_bw_limit);
+ if (err < 0)
+ return err;
+ }
+
+ *((u8 *)buf_out) = (u8)err;
+
+ return 0;
+}
+
+static int get_pf_bw_limit(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 *rate_limit = (u32 *)buf_out;
+
+ if (HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "To get VF bandwidth rate, please use ip link cmd\n");
+ return -EINVAL;
+ }
+
+ if (!buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(u32) * 2) { /* 2: TX and RX rates, each a u32 */
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user: %d, expect: %lu\n",
+ *out_size, sizeof(u32) * 2);
+ return -EFAULT;
+ }
+
+ nic_io = hinic3_get_service_adapter(nic_dev->hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ rate_limit[HINIC3_NIC_RX] = nic_io->nic_cfg.pf_bw_rx_limit;
+ rate_limit[HINIC3_NIC_TX] = nic_io->nic_cfg.pf_bw_tx_limit;
+
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "read rate cfg success rx rate is: %u, tx rate is : %u\n",
+ rate_limit[HINIC3_NIC_RX], rate_limit[HINIC3_NIC_TX]);
+ return 0;
+}
+
+static int get_sset_count(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u32 count;
+
+ if (!buf_in || in_size != sizeof(u32) || !out_size ||
+ *out_size != sizeof(u32) || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid parameters, in_size: %u\n", in_size);
+ return -EINVAL;
+ }
+
+ switch (*((u32 *)buf_in)) {
+ case HINIC3_SHOW_SSET_IO_STATS:
+ count = hinic3_get_io_stats_size(nic_dev);
+ break;
+ default:
+ count = 0;
+ break;
+ }
+
+ *((u32 *)buf_out) = count;
+
+ return 0;
+}
+
+static int get_sset_stats(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_show_item *items = buf_out;
+ u32 sset, count, size;
+ int err;
+
+ if (!buf_in || in_size != sizeof(u32) || !out_size || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid parameters, in_size: %u\n", in_size);
+ return -EINVAL;
+ }
+
+ size = sizeof(u32);
+ err = get_sset_count(nic_dev, buf_in, in_size, &count, &size);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Get sset count failed, ret=%d\n", err);
+ return -EINVAL;
+ }
+ if (count * sizeof(*items) != *out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, count * sizeof(*items));
+ return -EINVAL;
+ }
+
+ sset = *((u32 *)buf_in);
+
+ switch (sset) {
+ case HINIC3_SHOW_SSET_IO_STATS:
+ err = hinic3_get_io_stats(nic_dev, items);
+ if (err < 0)
+ return -EINVAL;
+ break;
+
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unknown %u to get stats\n", sset);
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+static int update_pcp_dscp_cfg(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dcb_config *wanted_dcb_cfg,
+ const struct hinic3_mt_qos_dev_cfg *qos_in)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int i;
+ u8 cos_num = 0, valid_cos_bitmap = 0;
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_PCP2COS) {
+ for (i = 0; i < NIC_DCB_UP_MAX; i++) {
+ if (!(dcb->func_dft_cos_bitmap &
+ BIT(qos_in->pcp2cos[i]))) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid cos=%u, func cos valid map is %u",
+ qos_in->pcp2cos[i],
+ dcb->func_dft_cos_bitmap);
+ return -EINVAL;
+ }
+
+ if ((BIT(qos_in->pcp2cos[i]) & valid_cos_bitmap) == 0) {
+ valid_cos_bitmap |= (u8)BIT(qos_in->pcp2cos[i]);
+ cos_num++;
+ }
+ }
+
+ memcpy(wanted_dcb_cfg->pcp2cos, qos_in->pcp2cos,
+ sizeof(qos_in->pcp2cos));
+ wanted_dcb_cfg->pcp_user_cos_num = cos_num;
+ wanted_dcb_cfg->pcp_valid_cos_map = valid_cos_bitmap;
+ }
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_DSCP2COS) {
+ cos_num = 0;
+ valid_cos_bitmap = 0;
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++) {
+ u8 cos = qos_in->dscp2cos[i] == DBG_DFLT_DSCP_VAL ?
+ dcb->wanted_dcb_cfg.dscp2cos[i] :
+ qos_in->dscp2cos[i];
+
+ if (cos >= NIC_DCB_UP_MAX ||
+ !(dcb->func_dft_cos_bitmap & BIT(cos))) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid cos=%u, func cos valid map is %u",
+ cos, dcb->func_dft_cos_bitmap);
+ return -EINVAL;
+ }
+
+ if ((BIT(cos) & valid_cos_bitmap) == 0) {
+ valid_cos_bitmap |= (u8)BIT(cos);
+ cos_num++;
+ }
+ }
+
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++)
+ wanted_dcb_cfg->dscp2cos[i] =
+ qos_in->dscp2cos[i] == DBG_DFLT_DSCP_VAL ?
+ dcb->hw_dcb_cfg.dscp2cos[i] :
+ qos_in->dscp2cos[i];
+ wanted_dcb_cfg->dscp_user_cos_num = cos_num;
+ wanted_dcb_cfg->dscp_valid_cos_map = valid_cos_bitmap;
+ }
+
+ return 0;
+}
+
+static int update_wanted_qos_cfg(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dcb_config *wanted_dcb_cfg,
+ const struct hinic3_mt_qos_dev_cfg *qos_in)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int ret;
+ u8 cos_num, valid_cos_bitmap;
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_TRUST) {
+ if (qos_in->trust > HINIC3_DCB_DSCP) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid trust=%u\n", qos_in->trust);
+ return -EINVAL;
+ }
+
+ wanted_dcb_cfg->trust = qos_in->trust;
+ }
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_DFT_COS) {
+ if (!(BIT(qos_in->dft_cos) & dcb->func_dft_cos_bitmap)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid dft_cos=%u\n", qos_in->dft_cos);
+ return -EINVAL;
+ }
+
+ wanted_dcb_cfg->default_cos = qos_in->dft_cos;
+ }
+
+ ret = update_pcp_dscp_cfg(nic_dev, wanted_dcb_cfg, qos_in);
+ if (ret)
+ return ret;
+
+ if (wanted_dcb_cfg->trust == HINIC3_DCB_PCP) {
+ cos_num = wanted_dcb_cfg->pcp_user_cos_num;
+ valid_cos_bitmap = wanted_dcb_cfg->pcp_valid_cos_map;
+ } else {
+ cos_num = wanted_dcb_cfg->dscp_user_cos_num;
+ valid_cos_bitmap = wanted_dcb_cfg->dscp_valid_cos_map;
+ }
+
+ if (!(BIT(wanted_dcb_cfg->default_cos) & valid_cos_bitmap)) {
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Current default_cos=%u, change to %u\n",
+ wanted_dcb_cfg->default_cos,
+ (u8)fls(valid_cos_bitmap) - 1);
+ wanted_dcb_cfg->default_cos = (u8)fls(valid_cos_bitmap) - 1;
+ }
+
+ return 0;
+}
+
+static int dcb_mt_qos_map(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ const struct hinic3_mt_qos_dev_cfg *qos_in = buf_in;
+ struct hinic3_mt_qos_dev_cfg *qos_out = buf_out;
+ u8 i;
+ int err;
+
+ if (!buf_out || !out_size || !buf_in)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*qos_out) || in_size != sizeof(*qos_in)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*qos_in));
+ return -EINVAL;
+ }
+
+ memcpy(qos_out, qos_in, sizeof(*qos_in));
+ qos_out->head.status = 0;
+ if (qos_in->op_code & MT_DCB_OPCODE_WR) {
+ memcpy(&dcb->wanted_dcb_cfg, &dcb->hw_dcb_cfg,
+ sizeof(struct hinic3_dcb_config));
+ err = update_wanted_qos_cfg(nic_dev, &dcb->wanted_dcb_cfg,
+ qos_in);
+ if (err) {
+ qos_out->head.status = MT_EINVAL;
+ return 0;
+ }
+
+ err = hinic3_dcbcfg_set_up_bitmap(nic_dev);
+ if (err)
+ qos_out->head.status = MT_EIO;
+ } else {
+ qos_out->dft_cos = dcb->hw_dcb_cfg.default_cos;
+ qos_out->trust = dcb->hw_dcb_cfg.trust;
+ for (i = 0; i < NIC_DCB_UP_MAX; i++)
+ qos_out->pcp2cos[i] = dcb->hw_dcb_cfg.pcp2cos[i];
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++)
+ qos_out->dscp2cos[i] = dcb->hw_dcb_cfg.dscp2cos[i];
+ }
+
+ return 0;
+}
+
+static int dcb_mt_dcb_state(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct hinic3_mt_dcb_state *dcb_in = buf_in;
+ struct hinic3_mt_dcb_state *dcb_out = buf_out;
+ int err;
+ u8 user_cos_num;
+ u8 netif_run = 0;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*dcb_out) || in_size != sizeof(*dcb_in)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*dcb_in));
+ return -EINVAL;
+ }
+
+ user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+ memcpy(dcb_out, dcb_in, sizeof(*dcb_in));
+ dcb_out->head.status = 0;
+ if (dcb_in->op_code & MT_DCB_OPCODE_WR) {
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ==
+ dcb_in->state)
+ return 0;
+
+ if (netif_running(nic_dev->netdev)) {
+ netif_run = 1;
+ hinic3_vport_down(nic_dev);
+ }
+
+ err = hinic3_setup_cos(nic_dev->netdev,
+ dcb_in->state ? user_cos_num : 0,
+ netif_run);
+ if (err)
+ goto setup_cos_fail;
+
+ if (netif_run) {
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_fail;
+ }
+ } else {
+ dcb_out->state = !!test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ }
+
+ return 0;
+
+vport_up_fail:
+ hinic3_setup_cos(nic_dev->netdev, dcb_in->state ? 0 : user_cos_num,
+ netif_run);
+
+setup_cos_fail:
+ if (netif_run)
+ hinic3_vport_up(nic_dev);
+
+ return err;
+}
+
+static int dcb_mt_hw_qos_get(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ const struct hinic3_mt_qos_cos_cfg *cos_cfg_in = buf_in;
+ struct hinic3_mt_qos_cos_cfg *cos_cfg_out = buf_out;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*cos_cfg_out) ||
+ in_size != sizeof(*cos_cfg_in)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*cos_cfg_in));
+ return -EINVAL;
+ }
+
+ memcpy(cos_cfg_out, cos_cfg_in, sizeof(*cos_cfg_in));
+ cos_cfg_out->head.status = 0;
+
+ cos_cfg_out->port_id = hinic3_physical_port_id(nic_dev->hwdev);
+ cos_cfg_out->func_cos_bitmap = (u8)dcb->func_dft_cos_bitmap;
+ cos_cfg_out->port_cos_bitmap = (u8)dcb->port_dft_cos_bitmap;
+ cos_cfg_out->func_max_cos_num = dcb->cos_config_num_max;
+
+ return 0;
+}
+
+static int get_inter_num(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u16 intr_num;
+
+ intr_num = hinic3_intr_num(nic_dev->hwdev);
+
+ if (!buf_out || !out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_out or out_size is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(u16)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, sizeof(u16));
+ return -EFAULT;
+ }
+ *(u16 *)buf_out = intr_num;
+
+ return 0;
+}
+
+static int get_netdev_name(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ if (!buf_out || !out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_out or out_size is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != IFNAMSIZ) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user :%u, expect: %u\n",
+ *out_size, IFNAMSIZ);
+ return -EFAULT;
+ }
+
+ strscpy(buf_out, nic_dev->netdev->name, IFNAMSIZ);
+
+ return 0;
+}
+
+static int get_netdev_tx_timeout(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct net_device *net_dev = nic_dev->netdev;
+ int *tx_timeout = buf_out;
+
+ if (!buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(int)) {
+ nicif_err(nic_dev, drv, net_dev,
+ "Unexpect buf size from user, out_size: %u, expect: %lu\n",
+ *out_size, sizeof(int));
+ return -EINVAL;
+ }
+
+ *tx_timeout = net_dev->watchdog_timeo;
+
+ return 0;
+}
+
+static int set_netdev_tx_timeout(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct net_device *net_dev = nic_dev->netdev;
+ const int *tx_timeout = buf_in;
+
+ if (!buf_in)
+ return -EINVAL;
+
+ if (in_size != sizeof(int)) {
+ nicif_err(nic_dev, drv, net_dev,
+ "Unexpect buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(int));
+ return -EINVAL;
+ }
+
+ net_dev->watchdog_timeo = *tx_timeout * HZ;
+ nicif_info(nic_dev, drv, net_dev,
+ "Set tx timeout check period to %ds\n", *tx_timeout);
+
+ return 0;
+}
+
+static int get_xsfp_present(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct mag_cmd_get_xsfp_present *sfp_abs = buf_out;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*sfp_abs) || in_size != sizeof(*sfp_abs)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*sfp_abs));
+ return -EINVAL;
+ }
+
+ sfp_abs->head.status = 0;
+ sfp_abs->abs_status = hinic3_if_sfp_absent(nic_dev->hwdev);
+
+ return 0;
+}
+
+static int get_xsfp_tlv_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct drv_mag_cmd_get_xsfp_tlv_rsp *sfp_tlv_info = buf_out;
+ const struct mag_cmd_get_xsfp_tlv_req *sfp_tlv_info_req = buf_in;
+ int err;
+
+ if ((buf_in == NULL) || (buf_out == NULL) || (out_size == NULL))
+ return -EINVAL;
+
+ if (*out_size != sizeof(*sfp_tlv_info) ||
+ in_size != sizeof(*sfp_tlv_info_req)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*sfp_tlv_info));
+ return -EINVAL;
+ }
+
+ err = hinic3_get_sfp_tlv_info(nic_dev->hwdev,
+ sfp_tlv_info, sfp_tlv_info_req);
+ if (err != 0) {
+ sfp_tlv_info->head.status = MT_EIO;
+ return 0;
+ }
+
+ return 0;
+}
+
+static int get_xsfp_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct mag_cmd_get_xsfp_info *sfp_info = buf_out;
+ int err;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*sfp_info) || in_size != sizeof(*sfp_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*sfp_info));
+ return -EINVAL;
+ }
+
+ err = hinic3_get_sfp_info(nic_dev->hwdev, sfp_info);
+ if (err) {
+ sfp_info->head.status = MT_EIO;
+ return 0;
+ }
+
+ return 0;
+}
+
+static const struct nic_drv_module_handle nic_driv_module_cmd_handle[] = {
+ {TX_INFO, get_tx_info},
+ {Q_NUM, get_q_num},
+ {TX_WQE_INFO, get_tx_wqe_info},
+ {RX_INFO, get_rx_info},
+ {RX_WQE_INFO, get_rx_wqe_info},
+ {RX_CQE_INFO, get_rx_cqe_info},
+ {GET_INTER_NUM, get_inter_num},
+ {CLEAR_FUNC_STASTIC, clear_func_static},
+ {GET_LOOPBACK_MODE, get_loopback_mode},
+ {SET_LOOPBACK_MODE, set_loopback_mode},
+ {SET_LINK_MODE, set_link_mode},
+ {SET_TX_PF_BW_LIMIT, set_pf_bw_limit},
+ {GET_PF_BW_LIMIT, get_pf_bw_limit},
+ {GET_SSET_COUNT, get_sset_count},
+ {GET_SSET_ITEMS, get_sset_stats},
+ {DCB_STATE, dcb_mt_dcb_state},
+ {QOS_DEV, dcb_mt_qos_map},
+ {GET_QOS_COS, dcb_mt_hw_qos_get},
+ {GET_ULD_DEV_NAME, get_netdev_name},
+ {GET_TX_TIMEOUT, get_netdev_tx_timeout},
+ {SET_TX_TIMEOUT, set_netdev_tx_timeout},
+ {GET_XSFP_PRESENT, get_xsfp_present},
+ {GET_XSFP_INFO, get_xsfp_info},
+ {GET_XSFP_INFO_COMP_CMIS, get_xsfp_tlv_info},
+ {SET_RX_PF_BW_LIMIT, set_rx_pf_bw_limit}
+};
+
+static int send_to_nic_driver(struct hinic3_nic_dev *nic_dev,
+ u32 cmd, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ int index, num_cmds = (int)(sizeof(nic_driv_module_cmd_handle) /
+ sizeof(nic_driv_module_cmd_handle[0]));
+ enum driver_cmd_type cmd_type = (enum driver_cmd_type)cmd;
+ int err = 0;
+
+ if (cmd_type == DCB_STATE || cmd_type == QOS_DEV)
+ rtnl_lock();
+
+ mutex_lock(&nic_dev->nic_mutex);
+ for (index = 0; index < num_cmds; index++) {
+ if (cmd_type ==
+ nic_driv_module_cmd_handle[index].driv_cmd_name) {
+ err = nic_driv_module_cmd_handle[index].driv_func
+ (nic_dev, buf_in,
+ in_size, buf_out, out_size);
+ break;
+ }
+ }
+ mutex_unlock(&nic_dev->nic_mutex);
+
+ if (cmd_type == DCB_STATE || cmd_type == QOS_DEV)
+ rtnl_unlock();
+
+ if (index == num_cmds) {
+ pr_err("Can't find callback for %d\n", cmd_type);
+ return -EINVAL;
+ }
+
+ return err;
+}
+
+int nic_ioctl(void *uld_dev, u32 cmd, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ if (cmd == GET_DRV_VERSION)
+ return get_nic_drv_version(buf_out, out_size);
+ else if (!uld_dev)
+ return -EINVAL;
+
+ return send_to_nic_driver(uld_dev, cmd, buf_in,
+ in_size, buf_out, out_size);
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c
new file mode 100644
index 0000000..aa53c19
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c
@@ -0,0 +1,482 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+
+#include "hinic3_crm.h"
+#include "hinic3_lld.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_dcb.h"
+
+#define MAX_BW_PERCENT 100
+
+u8 hinic3_get_dev_user_cos_num(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+
+ if (dcb->hw_dcb_cfg.trust == HINIC3_DCB_PCP)
+ return dcb->hw_dcb_cfg.pcp_user_cos_num;
+ if (dcb->hw_dcb_cfg.trust == HINIC3_DCB_DSCP)
+ return dcb->hw_dcb_cfg.dscp_user_cos_num;
+ return 0;
+}
+
+u8 hinic3_get_dev_valid_cos_map(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+
+ if (dcb->hw_dcb_cfg.trust == HINIC3_DCB_PCP)
+ return dcb->hw_dcb_cfg.pcp_valid_cos_map;
+ if (dcb->hw_dcb_cfg.trust == HINIC3_DCB_DSCP)
+ return dcb->hw_dcb_cfg.dscp_valid_cos_map;
+ return 0;
+}
+
+void hinic3_update_qp_cos_cfg(struct hinic3_nic_dev *nic_dev, u8 num_cos)
+{
+ struct hinic3_dcb_config *hw_dcb_cfg = &nic_dev->dcb->hw_dcb_cfg;
+ struct hinic3_dcb_config *wanted_dcb_cfg =
+ &nic_dev->dcb->wanted_dcb_cfg;
+ u8 valid_cos_map = hinic3_get_dev_valid_cos_map(nic_dev);
+ u8 cos_qp_num, cos_qp_offset = 0;
+ u8 i, remainder, num_qp_per_cos;
+
+ if (num_cos == 0 || nic_dev->q_params.num_qps == 0)
+ return;
+
+ num_qp_per_cos = (u8)(nic_dev->q_params.num_qps / num_cos);
+ remainder = nic_dev->q_params.num_qps % num_cos;
+
+ memset(hw_dcb_cfg->cos_qp_offset, 0, sizeof(hw_dcb_cfg->cos_qp_offset));
+ memset(hw_dcb_cfg->cos_qp_num, 0, sizeof(hw_dcb_cfg->cos_qp_num));
+
+ for (i = 0; i < PCP_MAX_UP; i++) {
+ if (BIT(i) & valid_cos_map) {
+ cos_qp_num = num_qp_per_cos + ((remainder > 0) ?
+ (remainder--, 1) : 0);
+
+ hw_dcb_cfg->cos_qp_offset[i] = cos_qp_offset;
+ hw_dcb_cfg->cos_qp_num[i] = cos_qp_num;
+ hinic3_info(nic_dev, drv, "cos %u, cos_qp_offset=%u cos_qp_num=%u\n",
+ i, cos_qp_offset, cos_qp_num);
+
+ cos_qp_offset += cos_qp_num;
+ valid_cos_map -= (int)BIT(i);
+ }
+ }
+
+ memcpy(wanted_dcb_cfg->cos_qp_offset, hw_dcb_cfg->cos_qp_offset,
+ sizeof(hw_dcb_cfg->cos_qp_offset));
+ memcpy(wanted_dcb_cfg->cos_qp_num, hw_dcb_cfg->cos_qp_num,
+ sizeof(hw_dcb_cfg->cos_qp_num));
+}
+
+void hinic3_update_tx_db_cos(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
+{
+ struct hinic3_dcb_config *hw_dcb_cfg = &nic_dev->dcb->hw_dcb_cfg;
+ u8 i;
+ u16 start_qid, q_num;
+
+ hinic3_set_txq_cos(nic_dev, 0, nic_dev->q_params.num_qps,
+ hw_dcb_cfg->default_cos);
+ if (!dcb_en)
+ return;
+
+ for (i = 0; i < NIC_DCB_COS_MAX; i++) {
+ q_num = (u16)hw_dcb_cfg->cos_qp_num[i];
+ if (q_num) {
+ start_qid = (u16)hw_dcb_cfg->cos_qp_offset[i];
+
+ hinic3_set_txq_cos(nic_dev, start_qid, q_num, i);
+ hinic3_info(nic_dev, drv, "update tx db cos, start_qid %u, q_num=%u cos=%u\n",
+ start_qid, q_num, i);
+ }
+ }
+}
+
+static int hinic3_set_tx_cos_state(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ struct hinic3_dcb_config *hw_dcb_cfg = &dcb->hw_dcb_cfg;
+ struct hinic3_dcb_state dcb_state = {0};
+ u8 i;
+ int err;
+
+ u32 pcp2cos_size = sizeof(dcb_state.pcp2cos);
+ u32 dscp2cos_size = sizeof(dcb_state.dscp2cos);
+
+ dcb_state.dcb_on = dcb_en;
+ dcb_state.default_cos = hw_dcb_cfg->default_cos;
+ dcb_state.trust = hw_dcb_cfg->trust;
+
+ if (dcb_en) {
+ for (i = 0; i < NIC_DCB_COS_MAX; i++)
+ dcb_state.pcp2cos[i] = hw_dcb_cfg->pcp2cos[i];
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++)
+ dcb_state.dscp2cos[i] = hw_dcb_cfg->dscp2cos[i];
+ } else {
+ memset(dcb_state.pcp2cos, hw_dcb_cfg->default_cos,
+ pcp2cos_size);
+ memset(dcb_state.dscp2cos, hw_dcb_cfg->default_cos,
+ dscp2cos_size);
+ }
+
+ err = hinic3_set_dcb_state(nic_dev->hwdev, &dcb_state);
+ if (err)
+ hinic3_err(nic_dev, drv, "Failed to set dcb state\n");
+
+ return err;
+}
+
+int hinic3_configure_dcb_hw(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
+{
+ int err;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ err = hinic3_sync_dcb_state(nic_dev->hwdev, 1, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set dcb state failed\n");
+ return err;
+ }
+
+ hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
+ hinic3_update_tx_db_cos(nic_dev, dcb_en);
+
+ err = hinic3_set_tx_cos_state(nic_dev, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set tx cos state failed\n");
+ goto set_tx_cos_fail;
+ }
+
+ err = hinic3_rx_configure(nic_dev->netdev, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "rx configure failed\n");
+ goto rx_configure_fail;
+ }
+
+ if (dcb_en) {
+ set_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ set_bit(HINIC3_DCB_ENABLE, &nic_dev->nic_vram->flags);
+ } else {
+ clear_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ clear_bit(HINIC3_DCB_ENABLE, &nic_dev->nic_vram->flags);
+ }
+ return 0;
+rx_configure_fail:
+ hinic3_set_tx_cos_state(nic_dev, dcb_en ? 0 : 1);
+
+set_tx_cos_fail:
+ hinic3_update_tx_db_cos(nic_dev, dcb_en ? 0 : 1);
+ hinic3_sync_dcb_state(nic_dev->hwdev, 1, dcb_en ? 0 : 1);
+
+ return err;
+}
+
+int hinic3_setup_cos(struct net_device *netdev, u8 cos, u8 netif_run)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int err;
+
+ if (cos && test_bit(HINIC3_SAME_RXTX, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev, "Failed to enable DCB while Symmetric RSS is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (cos > dcb->cos_config_num_max) {
+ nicif_err(nic_dev, drv, netdev,
+ "Invalid num_tc: %u, max cos: %u\n",
+ cos, dcb->cos_config_num_max);
+ return -EINVAL;
+ }
+
+ err = hinic3_configure_dcb_hw(nic_dev, cos ? 1 : 0);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static u8 get_cos_num(u8 hw_valid_cos_bitmap)
+{
+ u8 support_cos = 0;
+ u8 i;
+
+ for (i = 0; i < NIC_DCB_COS_MAX; i++)
+ if (hw_valid_cos_bitmap & BIT(i))
+ support_cos++;
+
+ return support_cos;
+}
+
+static void hinic3_sync_dcb_cfg(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_dcb_config *dcb_cfg)
+{
+ struct hinic3_dcb_config *hw_dcb_cfg = &nic_dev->dcb->hw_dcb_cfg;
+
+ memcpy(hw_dcb_cfg, dcb_cfg, sizeof(struct hinic3_dcb_config));
+}
+
+static int init_default_dcb_cfg(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dcb_config *dcb_cfg)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ u8 i, hw_dft_cos_map, port_cos_bitmap, dscp_ind;
+ int err;
+ int is_in_kexec;
+
+ err = hinic3_cos_valid_bitmap(nic_dev->hwdev,
+ &hw_dft_cos_map, &port_cos_bitmap);
+ if (err) {
+ hinic3_err(nic_dev, drv, "None cos supported\n");
+ return -EFAULT;
+ }
+
+ is_in_kexec = vram_get_kexec_flag();
+
+ dcb->func_dft_cos_bitmap = hw_dft_cos_map;
+ dcb->port_dft_cos_bitmap = port_cos_bitmap;
+
+ dcb->cos_config_num_max = get_cos_num(hw_dft_cos_map);
+
+ if (is_in_kexec == 0) {
+ dcb_cfg->trust = HINIC3_DCB_PCP;
+ dcb_cfg->default_cos = (u8)fls(dcb->func_dft_cos_bitmap) - 1;
+ } else {
+ dcb_cfg->trust = nic_dev->dcb->hw_dcb_cfg.trust;
+ dcb_cfg->default_cos = nic_dev->dcb->hw_dcb_cfg.default_cos;
+ }
+ dcb_cfg->pcp_user_cos_num = dcb->cos_config_num_max;
+ dcb_cfg->dscp_user_cos_num = dcb->cos_config_num_max;
+ dcb_cfg->pcp_valid_cos_map = hw_dft_cos_map;
+ dcb_cfg->dscp_valid_cos_map = hw_dft_cos_map;
+
+ for (i = 0; i < NIC_DCB_COS_MAX; i++) {
+ dcb_cfg->pcp2cos[i] = hw_dft_cos_map & BIT(i)
+ ? i : (u8)fls(dcb->func_dft_cos_bitmap) - 1;
+ for (dscp_ind = 0; dscp_ind < NIC_DCB_COS_MAX; dscp_ind++)
+ dcb_cfg->dscp2cos[i * NIC_DCB_DSCP_NUM + dscp_ind] = dcb_cfg->pcp2cos[i];
+ }
+
+ return 0;
+}
+
+void hinic3_dcb_reset_hw_config(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb_config dft_cfg = {0};
+
+ init_default_dcb_cfg(nic_dev, &dft_cfg);
+ hinic3_sync_dcb_cfg(nic_dev, &dft_cfg);
+
+ hinic3_info(nic_dev, drv, "Reset DCB configuration done\n");
+}
+
+int hinic3_configure_dcb(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ err = hinic3_sync_dcb_state(nic_dev->hwdev, 1,
+ test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)
+ ? 1 : 0);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set dcb state failed\n");
+ return err;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
+ hinic3_sync_dcb_cfg(nic_dev, &nic_dev->dcb->wanted_dcb_cfg);
+ else
+ hinic3_dcb_reset_hw_config(nic_dev);
+
+ return 0;
+}
+
+static int hinic3_dcb_alloc(struct hinic3_nic_dev *nic_dev)
+{
+ u16 func_id;
+ int is_use_vram;
+ int ret;
+
+ is_use_vram = get_use_vram_flag();
+ if (is_use_vram) {
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ ret = snprintf(nic_dev->dcb_name, VRAM_NAME_MAX_LEN,
+ "%s%u%s", VRAM_CQM_GLB_FUNC_BASE, func_id,
+ VRAM_NIC_DCB);
+ if (ret < 0) {
+ hinic3_err(nic_dev, drv, "Nic dcb snprintf failed, ret:%d.\n", ret);
+ return ret;
+ }
+
+ nic_dev->dcb = (struct hinic3_dcb *)hi_vram_kalloc(nic_dev->dcb_name,
+ sizeof(*nic_dev->dcb));
+ if (!nic_dev->dcb) {
+ hinic3_err(nic_dev, drv, "Failed to vram alloc dcb.\n");
+ return -EFAULT;
+ }
+ } else {
+ nic_dev->dcb = kzalloc(sizeof(*nic_dev->dcb), GFP_KERNEL);
+ if (!nic_dev->dcb) {
+ hinic3_err(nic_dev, drv, "Failed to create dcb.\n");
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static void hinic3_dcb_free(struct hinic3_nic_dev *nic_dev)
+{
+ int is_use_vram;
+
+ is_use_vram = get_use_vram_flag();
+ if (is_use_vram)
+ hi_vram_kfree((void *)nic_dev->dcb, nic_dev->dcb_name, sizeof(*nic_dev->dcb));
+ else
+ kfree(nic_dev->dcb);
+ nic_dev->dcb = NULL;
+}
+
+void hinic3_dcb_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
+ hinic3_sync_dcb_state(nic_dev->hwdev, 1, 0);
+
+ hinic3_dcb_free(nic_dev);
+}
+
+int hinic3_dcb_init(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb_config *hw_dcb_cfg = NULL;
+ int err;
+ u8 dcb_en = test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0;
+
+ err = hinic3_dcb_alloc(nic_dev);
+ if (err != 0) {
+ hinic3_err(nic_dev, drv, "Dcb alloc failed.\n");
+ return err;
+ }
+
+ hw_dcb_cfg = &nic_dev->dcb->hw_dcb_cfg;
+ err = init_default_dcb_cfg(nic_dev, hw_dcb_cfg);
+ if (err) {
+ hinic3_err(nic_dev, drv,
+ "Initialize dcb configuration failed\n");
+ hinic3_dcb_free(nic_dev);
+ return err;
+ }
+
+ memcpy(&nic_dev->dcb->wanted_dcb_cfg, hw_dcb_cfg,
+ sizeof(struct hinic3_dcb_config));
+
+ hinic3_info(nic_dev, drv, "Support num cos %u, default cos %u\n",
+ nic_dev->dcb->cos_config_num_max, hw_dcb_cfg->default_cos);
+
+ err = hinic3_set_tx_cos_state(nic_dev, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set tx cos state failed\n");
+ hinic3_dcb_free(nic_dev);
+ return err;
+ }
+
+ return 0;
+}
+
+static int change_qos_cfg(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_dcb_config *dcb_cfg)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int err = 0;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ if (test_and_set_bit(HINIC3_DCB_UP_COS_SETTING, &dcb->dcb_flags)) {
+ nicif_warn(nic_dev, drv, netdev,
+ "Cos_up map setting in inprocess, please try again later\n");
+ return -EFAULT;
+ }
+
+ hinic3_sync_dcb_cfg(nic_dev, dcb_cfg);
+
+ hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
+
+ clear_bit(HINIC3_DCB_UP_COS_SETTING, &dcb->dcb_flags);
+
+ return err;
+}
+
+int hinic3_dcbcfg_set_up_bitmap(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int err, rollback_err;
+ u8 netif_run = 0;
+ struct hinic3_dcb_config old_dcb_cfg;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ memcpy(&old_dcb_cfg, &dcb->hw_dcb_cfg,
+ sizeof(struct hinic3_dcb_config));
+
+ if (!memcmp(&dcb->wanted_dcb_cfg, &old_dcb_cfg,
+ sizeof(struct hinic3_dcb_config))) {
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Same valid up bitmap, don't need to change anything\n");
+ return 0;
+ }
+
+ if (netif_running(nic_dev->netdev)) {
+ netif_run = 1;
+ hinic3_vport_down(nic_dev);
+ }
+
+ err = change_qos_cfg(nic_dev, &dcb->wanted_dcb_cfg);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Set cos_up map to hw failed\n");
+ goto change_qos_cfg_fail;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ err = hinic3_setup_cos(nic_dev->netdev,
+ user_cos_num, netif_run);
+ if (err)
+ goto set_err;
+ }
+
+ if (netif_run) {
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_fail;
+ }
+
+ return 0;
+
+vport_up_fail:
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
+ hinic3_setup_cos(nic_dev->netdev, user_cos_num
+ ? 0 : user_cos_num, netif_run);
+
+set_err:
+ rollback_err = change_qos_cfg(nic_dev, &old_dcb_cfg);
+ if (rollback_err)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to rollback qos configure\n");
+
+change_qos_cfg_fail:
+ if (netif_run)
+ hinic3_vport_up(nic_dev);
+
+ return err;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h
new file mode 100644
index 0000000..e0b35cb
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_DCB_H
+#define HINIC3_DCB_H
+
+#include "ossl_knl.h"
+
+enum HINIC3_DCB_FLAGS {
+ HINIC3_DCB_UP_COS_SETTING,
+ HINIC3_DCB_TRAFFIC_STOPPED,
+};
+
+struct hinic3_cos_cfg {
+ u8 up;
+ u8 bw_pct;
+ u8 tc_id;
+ u8 prio_sp; /* 0 - DWRR, 1 - SP */
+};
+
+struct hinic3_tc_cfg {
+ u8 bw_pct;
+ u8 prio_sp; /* 0 - DWRR, 1 - SP */
+ u16 rsvd;
+};
+
+#define PCP_MAX_UP 8
+#define DSCP_MAC_UP 64
+#define DBG_DFLT_DSCP_VAL 0xFF
+
+struct hinic3_dcb_config {
+ u8 trust; /* pcp, dscp */
+ u8 default_cos;
+ u8 pcp_user_cos_num;
+ u8 pcp_valid_cos_map;
+ u8 dscp_user_cos_num;
+ u8 dscp_valid_cos_map;
+ u8 pcp2cos[PCP_MAX_UP];
+ u8 dscp2cos[DSCP_MAC_UP];
+
+ u8 cos_qp_offset[NIC_DCB_COS_MAX];
+ u8 cos_qp_num[NIC_DCB_COS_MAX];
+};
+
+u8 hinic3_get_dev_user_cos_num(struct hinic3_nic_dev *nic_dev);
+u8 hinic3_get_dev_valid_cos_map(struct hinic3_nic_dev *nic_dev);
+int hinic3_dcb_init(struct hinic3_nic_dev *nic_dev);
+void hinic3_dcb_deinit(struct hinic3_nic_dev *nic_dev);
+void hinic3_dcb_reset_hw_config(struct hinic3_nic_dev *nic_dev);
+int hinic3_configure_dcb(struct net_device *netdev);
+int hinic3_setup_cos(struct net_device *netdev, u8 cos, u8 netif_run);
+void hinic3_dcbcfg_set_pfc_state(struct hinic3_nic_dev *nic_dev, u8 pfc_state);
+u8 hinic3_dcbcfg_get_pfc_state(struct hinic3_nic_dev *nic_dev);
+void hinic3_dcbcfg_set_pfc_pri_en(struct hinic3_nic_dev *nic_dev,
+ u8 pfc_en_bitmap);
+u8 hinic3_dcbcfg_get_pfc_pri_en(struct hinic3_nic_dev *nic_dev);
+int hinic3_dcbcfg_set_ets_up_tc_map(struct hinic3_nic_dev *nic_dev,
+ const u8 *up_tc_map);
+void hinic3_dcbcfg_get_ets_up_tc_map(struct hinic3_nic_dev *nic_dev,
+ u8 *up_tc_map);
+int hinic3_dcbcfg_set_ets_tc_bw(struct hinic3_nic_dev *nic_dev,
+ const u8 *tc_bw);
+void hinic3_dcbcfg_get_ets_tc_bw(struct hinic3_nic_dev *nic_dev, u8 *tc_bw);
+void hinic3_dcbcfg_set_ets_tc_prio_type(struct hinic3_nic_dev *nic_dev,
+ u8 tc_prio_bitmap);
+void hinic3_dcbcfg_get_ets_tc_prio_type(struct hinic3_nic_dev *nic_dev,
+ u8 *tc_prio_bitmap);
+int hinic3_dcbcfg_set_up_bitmap(struct hinic3_nic_dev *nic_dev);
+void hinic3_update_tx_db_cos(struct hinic3_nic_dev *nic_dev, u8 dcb_en);
+
+void hinic3_update_qp_cos_cfg(struct hinic3_nic_dev *nic_dev, u8 num_cos);
+void hinic3_vport_down(struct hinic3_nic_dev *nic_dev);
+int hinic3_vport_up(struct hinic3_nic_dev *nic_dev);
+int hinic3_configure_dcb_hw(struct hinic3_nic_dev *nic_dev, u8 dcb_en);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
new file mode 100644
index 0000000..548d67d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
@@ -0,0 +1,1464 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_rss.h"
+
+#define COALESCE_ALL_QUEUE 0xFFFF
+#define COALESCE_PENDING_LIMIT_UNIT 8
+#define COALESCE_TIMER_CFG_UNIT 5
+#define COALESCE_MAX_PENDING_LIMIT (255 * COALESCE_PENDING_LIMIT_UNIT)
+#define COALESCE_MAX_TIMER_CFG (255 * COALESCE_TIMER_CFG_UNIT)
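+/* The timer and pending limit are stored as u8 values in units of 5 us and
+ * 8 frames respectively (see init_intr_coal_params() below), which is where
+ * the 255 * unit maxima come from.
+ */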
+#define HINIC3_WAIT_PKTS_TO_RX_BUFFER 200
+#define HINIC3_WAIT_CLEAR_LP_TEST 100
+
+#ifndef SET_ETHTOOL_OPS
+#define SET_ETHTOOL_OPS(netdev, ops) \
+ ((netdev)->ethtool_ops = (ops))
+#endif
+
+static void hinic3_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *info)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct pci_dev *pdev = nic_dev->pdev;
+ u8 mgmt_ver[HINIC3_MGMT_VERSION_MAX_LEN] = {0};
+ int err;
+
+ strscpy(info->driver, HINIC3_NIC_DRV_NAME, sizeof(info->driver));
+ strscpy(info->version, HINIC3_NIC_DRV_VERSION, sizeof(info->version));
+ strscpy(info->bus_info, pci_name(pdev), sizeof(info->bus_info));
+
+ err = hinic3_get_mgmt_version(nic_dev->hwdev, mgmt_ver,
+ HINIC3_MGMT_VERSION_MAX_LEN,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to get fw version\n");
+ return;
+ }
+
+ err = snprintf(info->fw_version, sizeof(info->fw_version), "%s", mgmt_ver);
+ if (err < 0)
+ nicif_err(nic_dev, drv, netdev, "Failed to snprintf fw version\n");
+}
+
+static u32 hinic3_get_msglevel(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ return nic_dev->msg_enable;
+}
+
+static void hinic3_set_msglevel(struct net_device *netdev, u32 data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ nic_dev->msg_enable = data;
+
+ nicif_info(nic_dev, drv, netdev, "Set message level: 0x%x\n", data);
+}
+
+static int hinic3_nway_reset(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_port_info port_info = {0};
+ int err;
+
+ while (test_and_set_bit(HINIC3_AUTONEG_RESET, &nic_dev->flags))
+ msleep(100); /* sleep 100 ms, wait for another in-progress autoneg restart to finish */
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Get port info failed\n");
+ err = -EFAULT;
+ goto reset_err;
+ }
+
+ if (port_info.autoneg_state != PORT_CFG_AN_ON) {
+ nicif_err(nic_dev, drv, netdev, "Autonegotiation is not on, restarting it is not supported\n");
+ err = -EOPNOTSUPP;
+ goto reset_err;
+ }
+
+ err = hinic3_set_autoneg(nic_dev->hwdev, false);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Set autonegotiation off failed\n");
+ err = -EFAULT;
+ goto reset_err;
+ }
+
+ msleep(200); /* sleep 200 ms, wait for status polling to finish */
+
+ err = hinic3_set_autoneg(nic_dev->hwdev, true);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Set autonegotiation on failed\n");
+ err = -EFAULT;
+ goto reset_err;
+ }
+
+ msleep(200); /* sleep 200 ms, wait for status polling to finish */
+ nicif_info(nic_dev, drv, netdev, "Restart autonegotiation successfully\n");
+
+reset_err:
+ clear_bit(HINIC3_AUTONEG_RESET, &nic_dev->flags);
+ return err;
+}
+
+#ifdef HAVE_ETHTOOL_RINGPARAM_EXTACK
+static void hinic3_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring,
+ struct kernel_ethtool_ringparam *kernel_ring,
+ struct netlink_ext_ack *extack)
+#else
+static void hinic3_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ ring->rx_max_pending = HINIC3_MAX_RX_QUEUE_DEPTH;
+ ring->tx_max_pending = HINIC3_MAX_TX_QUEUE_DEPTH;
+ ring->rx_pending = nic_dev->rxqs[0].q_depth;
+ ring->tx_pending = nic_dev->txqs[0].q_depth;
+}
+
+static void hinic3_update_qp_depth(struct hinic3_nic_dev *nic_dev,
+ u32 sq_depth, u32 rq_depth)
+{
+ u16 i;
+
+ nic_dev->q_params.sq_depth = sq_depth;
+ nic_dev->q_params.rq_depth = rq_depth;
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ nic_dev->txqs[i].q_depth = sq_depth;
+ nic_dev->txqs[i].q_mask = sq_depth - 1;
+ nic_dev->rxqs[i].q_depth = rq_depth;
+ nic_dev->rxqs[i].q_mask = rq_depth - 1;
+ }
+}
+
+static int check_ringparam_valid(struct net_device *netdev,
+ const struct ethtool_ringparam *ring)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (ring->rx_jumbo_pending || ring->rx_mini_pending) {
+ nicif_err(nic_dev, drv, netdev,
+ "Unsupported rx_jumbo_pending/rx_mini_pending\n");
+ return -EINVAL;
+ }
+
+ if (ring->tx_pending > HINIC3_MAX_TX_QUEUE_DEPTH ||
+ ring->tx_pending < HINIC3_MIN_QUEUE_DEPTH ||
+ ring->rx_pending > HINIC3_MAX_RX_QUEUE_DEPTH ||
+ ring->rx_pending < HINIC3_MIN_QUEUE_DEPTH) {
+ nicif_err(nic_dev, drv, netdev,
+ "Queue depth out of range, tx[%d-%d] rx[%d-%d]\n",
+ HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_TX_QUEUE_DEPTH,
+ HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_RX_QUEUE_DEPTH);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#ifdef HAVE_ETHTOOL_RINGPARAM_EXTACK
+static int hinic3_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring,
+ struct kernel_ethtool_ringparam *kernel_ring,
+ struct netlink_ext_ack *extack)
+#else
+static int hinic3_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_txrxq_params q_params = {0};
+ u32 new_sq_depth, new_rq_depth;
+ int err;
+
+ err = check_ringparam_valid(netdev, ring);
+ if (err)
+ return err;
+
+ new_sq_depth = (u32)(1U << (u16)ilog2(ring->tx_pending));
+ new_rq_depth = (u32)(1U << (u16)ilog2(ring->rx_pending));
+ if (new_sq_depth == nic_dev->q_params.sq_depth &&
+ new_rq_depth == nic_dev->q_params.rq_depth)
+ return 0; /* nothing to do */
+
+ nicif_info(nic_dev, drv, netdev,
+ "Change Tx/Rx ring depth from %u/%u to %u/%u\n",
+ nic_dev->q_params.sq_depth, nic_dev->q_params.rq_depth,
+ new_sq_depth, new_rq_depth);
+
+ if (!netif_running(netdev)) {
+ hinic3_update_qp_depth(nic_dev, new_sq_depth, new_rq_depth);
+ } else {
+ q_params = nic_dev->q_params;
+ q_params.sq_depth = new_sq_depth;
+ q_params.rq_depth = new_rq_depth;
+ q_params.txqs_res = NULL;
+ q_params.rxqs_res = NULL;
+ q_params.irq_cfg = NULL;
+
+ nicif_info(nic_dev, drv, netdev, "Restarting channel\n");
+ err = hinic3_change_channel_settings(nic_dev, &q_params,
+ NULL, NULL);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to change channel settings\n");
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal, u16 queue)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_intr_coal_info *interrupt_info = NULL;
+
+ if (queue == COALESCE_ALL_QUEUE) {
+ /* get tx/rx irq0 as default parameters */
+ interrupt_info = &nic_dev->intr_coalesce[0];
+ } else {
+ if (queue >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, netdev,
+ "Invalid queue_id: %u\n", queue);
+ return -EINVAL;
+ }
+ interrupt_info = &nic_dev->intr_coalesce[queue];
+ }
+
+ /* the coalesce timer is in units of 5 us */
+ coal->rx_coalesce_usecs = interrupt_info->coalesce_timer_cfg *
+ COALESCE_TIMER_CFG_UNIT;
+ /* the coalesce frame count is in units of 8 */
+ coal->rx_max_coalesced_frames = interrupt_info->pending_limt *
+ COALESCE_PENDING_LIMIT_UNIT;
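+ /* e.g. coalesce_timer_cfg = 4 reads back as rx-usecs = 20 and
+ * pending_limt = 2 reads back as rx-frames = 16
+ */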
+
+ /* tx/rx use the same interrupt */
+ coal->tx_coalesce_usecs = coal->rx_coalesce_usecs;
+ coal->tx_max_coalesced_frames = coal->rx_max_coalesced_frames;
+ coal->use_adaptive_rx_coalesce = nic_dev->adaptive_rx_coal;
+
+ coal->pkt_rate_high = (u32)interrupt_info->pkt_rate_high;
+ coal->rx_coalesce_usecs_high = interrupt_info->rx_usecs_high *
+ COALESCE_TIMER_CFG_UNIT;
+ coal->rx_max_coalesced_frames_high =
+ interrupt_info->rx_pending_limt_high *
+ COALESCE_PENDING_LIMIT_UNIT;
+
+ coal->pkt_rate_low = (u32)interrupt_info->pkt_rate_low;
+ coal->rx_coalesce_usecs_low = interrupt_info->rx_usecs_low *
+ COALESCE_TIMER_CFG_UNIT;
+ coal->rx_max_coalesced_frames_low =
+ interrupt_info->rx_pending_limt_low *
+ COALESCE_PENDING_LIMIT_UNIT;
+
+ return 0;
+}
+
+static int set_queue_coalesce(struct hinic3_nic_dev *nic_dev, u16 q_id,
+ struct hinic3_intr_coal_info *coal)
+{
+ struct hinic3_intr_coal_info *intr_coal = NULL;
+ struct interrupt_info info = {0};
+ struct net_device *netdev = nic_dev->netdev;
+ int err;
+
+ intr_coal = &nic_dev->intr_coalesce[q_id];
+ if (intr_coal->coalesce_timer_cfg != coal->coalesce_timer_cfg ||
+ intr_coal->pending_limt != coal->pending_limt)
+ intr_coal->user_set_intr_coal_flag = 1;
+
+ intr_coal->coalesce_timer_cfg = coal->coalesce_timer_cfg;
+ intr_coal->pending_limt = coal->pending_limt;
+ intr_coal->pkt_rate_low = coal->pkt_rate_low;
+ intr_coal->rx_usecs_low = coal->rx_usecs_low;
+ intr_coal->rx_pending_limt_low = coal->rx_pending_limt_low;
+ intr_coal->pkt_rate_high = coal->pkt_rate_high;
+ intr_coal->rx_usecs_high = coal->rx_usecs_high;
+ intr_coal->rx_pending_limt_high = coal->rx_pending_limt_high;
+
+ /* netdev is not running or the qp is not in use,
+ * so there is no need to write the coalesce settings to hw
+ */
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags) ||
+ q_id >= nic_dev->q_params.num_qps || nic_dev->adaptive_rx_coal)
+ return 0;
+
+ info.msix_index = nic_dev->q_params.irq_cfg[q_id].msix_entry_idx;
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.coalesc_timer_cfg = intr_coal->coalesce_timer_cfg;
+ info.pending_limt = intr_coal->pending_limt;
+ info.resend_timer_cfg = intr_coal->resend_timer_cfg;
+ nic_dev->rxqs[q_id].last_coalesc_timer_cfg =
+ intr_coal->coalesce_timer_cfg;
+ nic_dev->rxqs[q_id].last_pending_limt = intr_coal->pending_limt;
+ err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info,
+ HINIC3_CHANNEL_NIC);
+ if (err)
+ nicif_warn(nic_dev, drv, netdev,
+ "Failed to set queue%u coalesce", q_id);
+
+ return err;
+}
+
+static int is_coalesce_exceed_limit(struct net_device *netdev,
+ const struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (coal->rx_coalesce_usecs > COALESCE_MAX_TIMER_CFG) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_coalesce_usecs out of range[%d-%d]\n", 0,
+ COALESCE_MAX_TIMER_CFG);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames > COALESCE_MAX_PENDING_LIMIT) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_max_coalesced_frames out of range[%d-%d]\n", 0,
+ COALESCE_MAX_PENDING_LIMIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_coalesce_usecs_low > COALESCE_MAX_TIMER_CFG) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_coalesce_usecs_low out of range[%d-%d]\n", 0,
+ COALESCE_MAX_TIMER_CFG);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames_low > COALESCE_MAX_PENDING_LIMIT) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_max_coalesced_frames_low out of range[%d-%d]\n",
+ 0, COALESCE_MAX_PENDING_LIMIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_coalesce_usecs_high > COALESCE_MAX_TIMER_CFG) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_coalesce_usecs_high out of range[%d-%d]\n", 0,
+ COALESCE_MAX_TIMER_CFG);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames_high > COALESCE_MAX_PENDING_LIMIT) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_max_coalesced_frames_high out of range[%d-%d]\n",
+ 0, COALESCE_MAX_PENDING_LIMIT);
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int is_coalesce_allowed_change(struct net_device *netdev,
+ const struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct ethtool_coalesce tmp_coal = {0};
+
+ tmp_coal.cmd = coal->cmd;
+ tmp_coal.rx_coalesce_usecs = coal->rx_coalesce_usecs;
+ tmp_coal.rx_max_coalesced_frames = coal->rx_max_coalesced_frames;
+ tmp_coal.tx_coalesce_usecs = coal->tx_coalesce_usecs;
+ tmp_coal.tx_max_coalesced_frames = coal->tx_max_coalesced_frames;
+ tmp_coal.use_adaptive_rx_coalesce = coal->use_adaptive_rx_coalesce;
+
+ tmp_coal.pkt_rate_low = coal->pkt_rate_low;
+ tmp_coal.rx_coalesce_usecs_low = coal->rx_coalesce_usecs_low;
+ tmp_coal.rx_max_coalesced_frames_low =
+ coal->rx_max_coalesced_frames_low;
+
+ tmp_coal.pkt_rate_high = coal->pkt_rate_high;
+ tmp_coal.rx_coalesce_usecs_high = coal->rx_coalesce_usecs_high;
+ tmp_coal.rx_max_coalesced_frames_high =
+ coal->rx_max_coalesced_frames_high;
+
+ if (memcmp(coal, &tmp_coal, sizeof(struct ethtool_coalesce))) {
+ nicif_err(nic_dev, drv, netdev,
+ "Only rx/tx-usecs and rx/tx-frames can be changed\n");
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int is_coalesce_legal(struct net_device *netdev,
+ const struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ if (coal->rx_coalesce_usecs != coal->tx_coalesce_usecs) {
+ nicif_err(nic_dev, drv, netdev,
+ "tx-usecs must be equal to rx-usecs\n");
+ return -EINVAL;
+ }
+
+ if (coal->rx_max_coalesced_frames != coal->tx_max_coalesced_frames) {
+ nicif_err(nic_dev, drv, netdev,
+ "tx-frames must be equal to rx-frames\n");
+ return -EINVAL;
+ }
+
+ err = is_coalesce_allowed_change(netdev, coal);
+ if (err)
+ return err;
+
+ err = is_coalesce_exceed_limit(netdev, coal);
+ if (err)
+ return err;
+
+ if (coal->rx_coalesce_usecs_low / COALESCE_TIMER_CFG_UNIT >=
+ coal->rx_coalesce_usecs_high / COALESCE_TIMER_CFG_UNIT) {
+ nicif_err(nic_dev, drv, netdev,
+ "coalesce_usecs_high(%u) must be greater than coalesce_usecs_low(%u) after dividing by the %d usecs unit\n",
+ coal->rx_coalesce_usecs_high,
+ coal->rx_coalesce_usecs_low,
+ COALESCE_TIMER_CFG_UNIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames_low / COALESCE_PENDING_LIMIT_UNIT >=
+ coal->rx_max_coalesced_frames_high / COALESCE_PENDING_LIMIT_UNIT) {
+ nicif_err(nic_dev, drv, netdev,
+ "coalesced_frames_high(%u) must be greater than coalesced_frames_low(%u) after dividing by the %d frames unit\n",
+ coal->rx_max_coalesced_frames_high,
+ coal->rx_max_coalesced_frames_low,
+ COALESCE_PENDING_LIMIT_UNIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->pkt_rate_low >= coal->pkt_rate_high) {
+ nicif_err(nic_dev, drv, netdev,
+ "pkt_rate_high(%u) must be greater than pkt_rate_low(%u)\n",
+ coal->pkt_rate_high,
+ coal->pkt_rate_low);
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+#define CHECK_COALESCE_ALIGN(coal, item, unit) \
+do { \
+ if ((coal)->item % (unit)) \
+ nicif_warn(nic_dev, drv, netdev, \
+ "%s in %d units, change to %u\n", \
+ #item, (unit), ((coal)->item - \
+ (coal)->item % (unit))); \
+} while (0)
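+/* Example: rx-usecs 12 only triggers the warning above; the value is
+ * effectively applied as 10 once init_intr_coal_params() divides it by
+ * the 5 us unit.
+ */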
+
+#define CHECK_COALESCE_CHANGED(coal, item, unit, ori_val, obj_str) \
+do { \
+ if (((coal)->item / (unit)) != (ori_val)) \
+ nicif_info(nic_dev, drv, netdev, \
+ "Change %s from %d to %u %s\n", \
+ #item, (ori_val) * (unit), \
+ ((coal)->item - (coal)->item % (unit)), \
+ (obj_str)); \
+} while (0)
+
+#define CHECK_PKT_RATE_CHANGED(coal, item, ori_val, obj_str) \
+do { \
+ if ((coal)->item != (ori_val)) \
+ nicif_info(nic_dev, drv, netdev, \
+ "Change %s from %llu to %u %s\n", \
+ #item, (ori_val), (coal)->item, (obj_str)); \
+} while (0)
+
+static int set_hw_coal_param(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_intr_coal_info *intr_coal, u16 queue)
+{
+ u16 i;
+
+ if (queue == COALESCE_ALL_QUEUE) {
+ for (i = 0; i < nic_dev->max_qps; i++)
+ set_queue_coalesce(nic_dev, i, intr_coal);
+ } else {
+ if (queue >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid queue_id: %u\n", queue);
+ return -EINVAL;
+ }
+ set_queue_coalesce(nic_dev, queue, intr_coal);
+ }
+
+ return 0;
+}
+
+static void check_coalesce_align(struct net_device *netdev,
+ struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ CHECK_COALESCE_ALIGN(coal, rx_coalesce_usecs, COALESCE_TIMER_CFG_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_max_coalesced_frames,
+ COALESCE_PENDING_LIMIT_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_coalesce_usecs_high,
+ COALESCE_TIMER_CFG_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_max_coalesced_frames_high,
+ COALESCE_PENDING_LIMIT_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_coalesce_usecs_low,
+ COALESCE_TIMER_CFG_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_max_coalesced_frames_low,
+ COALESCE_PENDING_LIMIT_UNIT);
+}
+
+static int check_coalesce_change(struct net_device *netdev,
+ u16 queue, struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_intr_coal_info *ori_intr_coal = NULL;
+ char obj_str[32] = {0};
+
+ if (queue == COALESCE_ALL_QUEUE) {
+ ori_intr_coal = &nic_dev->intr_coalesce[0];
+ snprintf(obj_str, sizeof(obj_str), "for netdev");
+ } else {
+ ori_intr_coal = &nic_dev->intr_coalesce[queue];
+ snprintf(obj_str, sizeof(obj_str), "for queue %u", queue);
+ }
+
+ CHECK_COALESCE_CHANGED(coal, rx_coalesce_usecs, COALESCE_TIMER_CFG_UNIT,
+ ori_intr_coal->coalesce_timer_cfg, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_max_coalesced_frames,
+ COALESCE_PENDING_LIMIT_UNIT,
+ ori_intr_coal->pending_limt, obj_str);
+ CHECK_PKT_RATE_CHANGED(coal, pkt_rate_high,
+ ori_intr_coal->pkt_rate_high, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_coalesce_usecs_high,
+ COALESCE_TIMER_CFG_UNIT,
+ ori_intr_coal->rx_usecs_high, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_max_coalesced_frames_high,
+ COALESCE_PENDING_LIMIT_UNIT,
+ ori_intr_coal->rx_pending_limt_high, obj_str);
+ CHECK_PKT_RATE_CHANGED(coal, pkt_rate_low,
+ ori_intr_coal->pkt_rate_low, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_coalesce_usecs_low,
+ COALESCE_TIMER_CFG_UNIT,
+ ori_intr_coal->rx_usecs_low, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_max_coalesced_frames_low,
+ COALESCE_PENDING_LIMIT_UNIT,
+ ori_intr_coal->rx_pending_limt_low, obj_str);
+ return 0;
+}
+
+static void init_intr_coal_params(struct hinic3_intr_coal_info *intr_coal,
+ struct ethtool_coalesce *coal)
+{
+ intr_coal->coalesce_timer_cfg =
+ (u8)(coal->rx_coalesce_usecs / COALESCE_TIMER_CFG_UNIT);
+ intr_coal->pending_limt = (u8)(coal->rx_max_coalesced_frames /
+ COALESCE_PENDING_LIMIT_UNIT);
+
+ intr_coal->pkt_rate_high = coal->pkt_rate_high;
+ intr_coal->rx_usecs_high =
+ (u8)(coal->rx_coalesce_usecs_high / COALESCE_TIMER_CFG_UNIT);
+ intr_coal->rx_pending_limt_high =
+ (u8)(coal->rx_max_coalesced_frames_high /
+ COALESCE_PENDING_LIMIT_UNIT);
+
+ intr_coal->pkt_rate_low = coal->pkt_rate_low;
+ intr_coal->rx_usecs_low =
+ (u8)(coal->rx_coalesce_usecs_low / COALESCE_TIMER_CFG_UNIT);
+ intr_coal->rx_pending_limt_low =
+ (u8)(coal->rx_max_coalesced_frames_low /
+ COALESCE_PENDING_LIMIT_UNIT);
+}
+
+static int set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal, u16 queue)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_intr_coal_info intr_coal = {0};
+ u32 last_adaptive_rx;
+ int err = 0;
+
+ err = is_coalesce_legal(netdev, coal);
+ if (err)
+ return err;
+
+ check_coalesce_align(netdev, coal);
+
+ check_coalesce_change(netdev, queue, coal);
+
+ init_intr_coal_params(&intr_coal, coal);
+
+ last_adaptive_rx = nic_dev->adaptive_rx_coal;
+ nic_dev->adaptive_rx_coal = coal->use_adaptive_rx_coalesce;
+
+ /* setting the coalesce timer or pending limit to zero disables coalescing */
+ if (!nic_dev->adaptive_rx_coal &&
+ (!intr_coal.coalesce_timer_cfg || !intr_coal.pending_limt))
+ nicif_warn(nic_dev, drv, netdev, "Coalesce will be disabled\n");
+
+ /* ensure the coalesce parameters will not be changed by the auto
+ * moderation work
+ */
+ if (HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ if (!nic_dev->adaptive_rx_coal)
+ cancel_delayed_work_sync(&nic_dev->moderation_task);
+ else if (!last_adaptive_rx)
+ queue_delayed_work(nic_dev->workq,
+ &nic_dev->moderation_task,
+ HINIC3_MODERATONE_DELAY);
+ }
+
+ return set_hw_coal_param(nic_dev, &intr_coal, queue);
+}
+
+#ifdef HAVE_ETHTOOL_COALESCE_EXTACK
+static int hinic3_get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal,
+ struct kernel_ethtool_coalesce *kernel_coal,
+ struct netlink_ext_ack *extack)
+#else
+static int hinic3_get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal)
+#endif
+{
+ return get_coalesce(netdev, coal, COALESCE_ALL_QUEUE);
+}
+
+#ifdef HAVE_ETHTOOL_COALESCE_EXTACK
+static int hinic3_set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal,
+ struct kernel_ethtool_coalesce *kernel_coal,
+ struct netlink_ext_ack *extack)
+#else
+static int hinic3_set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal)
+#endif
+{
+ return set_coalesce(netdev, coal, COALESCE_ALL_QUEUE);
+}
+
+#if defined(ETHTOOL_PERQUEUE) && defined(ETHTOOL_GCOALESCE)
+static int hinic3_get_per_queue_coalesce(struct net_device *netdev, u32 queue,
+ struct ethtool_coalesce *coal)
+{
+ return get_coalesce(netdev, coal, (u16)queue);
+}
+
+static int hinic3_set_per_queue_coalesce(struct net_device *netdev, u32 queue,
+ struct ethtool_coalesce *coal)
+{
+ return set_coalesce(netdev, coal, (u16)queue);
+}
+#endif
+
+#ifdef HAVE_ETHTOOL_SET_PHYS_ID
+static int hinic3_set_phys_id(struct net_device *netdev,
+ enum ethtool_phys_id_state state)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ switch (state) {
+ case ETHTOOL_ID_ACTIVE:
+ err = hinic3_set_led_status(nic_dev->hwdev,
+ MAG_CMD_LED_TYPE_ALARM,
+ MAG_CMD_LED_MODE_FORCE_BLINK_2HZ);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+ "Set LED blinking in 2HZ failed\n");
+ else
+ nicif_info(nic_dev, drv, netdev,
+ "Set LED blinking in 2HZ success\n");
+ break;
+
+ case ETHTOOL_ID_INACTIVE:
+ err = hinic3_set_led_status(nic_dev->hwdev,
+ MAG_CMD_LED_TYPE_ALARM,
+ MAG_CMD_LED_MODE_DEFAULT);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+ "Reset LED to original status failed\n");
+ else
+ nicif_info(nic_dev, drv, netdev,
+ "Reset LED to original status success\n");
+ break;
+
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return err;
+}
+#else
+static int hinic3_phys_id(struct net_device *netdev, u32 data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ nicif_err(nic_dev, drv, netdev, "Setting phys id is not supported\n");
+
+ return -EOPNOTSUPP;
+}
+#endif
+
+static void hinic3_get_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_pause_config nic_pause = {0};
+ int err;
+
+ err = hinic3_get_pause_info(nic_dev->hwdev, &nic_pause);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to get pauseparam from hw\n");
+ } else {
+ pause->autoneg = nic_pause.auto_neg == PORT_CFG_AN_ON ?
+ AUTONEG_ENABLE : AUTONEG_DISABLE;
+ pause->rx_pause = nic_pause.rx_pause;
+ pause->tx_pause = nic_pause.tx_pause;
+ }
+}
+
+static int hinic3_set_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_pause_config nic_pause = {0};
+ struct nic_port_info port_info = {0};
+ u32 auto_neg;
+ int err;
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to get auto-negotiation state\n");
+ return -EFAULT;
+ }
+
+ auto_neg = port_info.autoneg_state == PORT_CFG_AN_ON ? AUTONEG_ENABLE : AUTONEG_DISABLE;
+ if (pause->autoneg != auto_neg) {
+ nicif_err(nic_dev, drv, netdev,
+ "To change autoneg please use: ethtool -s <dev> autoneg <on|off>\n");
+ return -EOPNOTSUPP;
+ }
+
+ nic_pause.auto_neg = pause->autoneg == AUTONEG_ENABLE ? PORT_CFG_AN_ON : PORT_CFG_AN_OFF;
+ nic_pause.rx_pause = (u8)pause->rx_pause;
+ nic_pause.tx_pause = (u8)pause->tx_pause;
+
+ err = hinic3_set_pause_info(nic_dev->hwdev, nic_pause);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set pauseparam\n");
+ return err;
+ }
+
+ nicif_info(nic_dev, drv, netdev, "Set pause options, tx: %s, rx: %s\n",
+ pause->tx_pause ? "on" : "off",
+ pause->rx_pause ? "on" : "off");
+
+ return 0;
+}
+
+#ifdef ETHTOOL_GMODULEEEPROM
+static int hinic3_get_module_info(struct net_device *netdev,
+ struct ethtool_modinfo *modinfo)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 sfp_type = 0;
+ u8 sfp_type_ext = 0;
+ int err;
+
+ err = hinic3_get_sfp_type(nic_dev->hwdev, &sfp_type, &sfp_type_ext);
+ if (err)
+ return err;
+
+ switch (sfp_type) {
+ case MODULE_TYPE_SFP:
+ modinfo->type = ETH_MODULE_SFF_8472;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+ break;
+ case MODULE_TYPE_QSFP:
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN;
+ break;
+ case MODULE_TYPE_QSFP_PLUS:
+ if (sfp_type_ext >= 0x3) {
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;
+ } else {
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN;
+ }
+ break;
+ case MODULE_TYPE_QSFP28:
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;
+ break;
+ case MODULE_TYPE_DSFP:
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;
+ break;
+ case MODULE_TYPE_QSFP_CMIS:
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;
+ break;
+ default:
+ nicif_warn(nic_dev, drv, netdev,
+ "Optical module unknown: 0x%x\n", sfp_type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_get_module_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *ee, u8 *data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 sfp_data[STD_SFP_INFO_MAX_SIZE];
+ int err;
+
+ if (!ee->len || ((ee->len + ee->offset) > STD_SFP_INFO_MAX_SIZE))
+ return -EINVAL;
+
+ memset(data, 0, ee->len);
+
+ err = hinic3_get_sfp_eeprom(nic_dev->hwdev, (u8 *)sfp_data, ee->len);
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ err = hinic3_get_tlv_xsfp_eeprom(nic_dev->hwdev, (u8 *)sfp_data, sizeof(sfp_data));
+
+ if (err)
+ return err;
+
+ memcpy(data, sfp_data + ee->offset, ee->len);
+
+ return 0;
+}
+#endif /* ETHTOOL_GMODULEEEPROM */
+
+#define HINIC3_PRIV_FLAGS_SYMM_RSS BIT(0)
+#define HINIC3_PRIV_FLAGS_LINK_UP BIT(1)
+#define HINIC3_PRIV_FLAGS_RXQ_RECOVERY BIT(2)
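+/* bit order must match g_hinic_priv_flags_strings[] in hinic3_ethtool_stats.c */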
+
+static u32 hinic3_get_priv_flags(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u32 priv_flags = 0;
+
+ if (test_bit(HINIC3_SAME_RXTX, &nic_dev->flags))
+ priv_flags |= HINIC3_PRIV_FLAGS_SYMM_RSS;
+
+ if (test_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ priv_flags |= HINIC3_PRIV_FLAGS_LINK_UP;
+
+ if (test_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ priv_flags |= HINIC3_PRIV_FLAGS_RXQ_RECOVERY;
+
+ return priv_flags;
+}
+
+static int hinic3_set_rxq_recovery_flag(struct net_device *netdev, u32 priv_flags)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (priv_flags & HINIC3_PRIV_FLAGS_RXQ_RECOVERY) {
+ if (!HINIC3_SUPPORT_RXQ_RECOVERY(nic_dev->hwdev)) {
+ nicif_info(nic_dev, drv, netdev, "Rxq recovery is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_and_set_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ return 0;
+ queue_delayed_work(nic_dev->workq, &nic_dev->rxq_check_work, HZ);
+ nicif_info(nic_dev, drv, netdev, "rxq recovery enabled\n");
+ } else {
+ if (!test_and_clear_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ return 0;
+ cancel_delayed_work_sync(&nic_dev->rxq_check_work);
+ nicif_info(nic_dev, drv, netdev, "rxq recovery disabled\n");
+ }
+
+ return 0;
+}
+
+static int hinic3_set_symm_rss_flag(struct net_device *netdev, u32 priv_flags)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (priv_flags & HINIC3_PRIV_FLAGS_SYMM_RSS) {
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to enable Symmetric RSS while DCB is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to enable Symmetric RSS while RSS is disabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ set_bit(HINIC3_SAME_RXTX, &nic_dev->flags);
+ } else {
+ clear_bit(HINIC3_SAME_RXTX, &nic_dev->flags);
+ }
+
+ return 0;
+}
+
+static int hinic3_set_force_link_flag(struct net_device *netdev, u32 priv_flags)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 link_status = 0;
+ int err;
+
+ if (priv_flags & HINIC3_PRIV_FLAGS_LINK_UP) {
+ if (test_and_set_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ return 0;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev))
+ return 0;
+
+ if (netif_carrier_ok(netdev))
+ return 0;
+
+ nic_dev->link_status = true;
+ netif_carrier_on(netdev);
+ nicif_info(nic_dev, link, netdev, "Set link up\n");
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, nic_dev->link_status);
+ } else {
+ if (!test_and_clear_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ return 0;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev))
+ return 0;
+
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_status);
+ if (err) {
+ nicif_err(nic_dev, link, netdev, "Get link state err: %d\n", err);
+ return err;
+ }
+
+ nic_dev->link_status = link_status;
+
+ if (link_status) {
+ if (netif_carrier_ok(netdev))
+ return 0;
+
+ netif_carrier_on(netdev);
+ nicif_info(nic_dev, link, netdev, "Link state is up\n");
+ } else {
+ if (!netif_carrier_ok(netdev))
+ return 0;
+
+ netif_carrier_off(netdev);
+ nicif_info(nic_dev, link, netdev, "Link state is down\n");
+ }
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, nic_dev->link_status);
+ }
+
+ return 0;
+}
+
+static int hinic3_set_priv_flags(struct net_device *netdev, u32 priv_flags)
+{
+ int err;
+
+ err = hinic3_set_symm_rss_flag(netdev, priv_flags);
+ if (err)
+ return err;
+
+ err = hinic3_set_rxq_recovery_flag(netdev, priv_flags);
+ if (err)
+ return err;
+
+ return hinic3_set_force_link_flag(netdev, priv_flags);
+}
+
+#define PORT_DOWN_ERR_IDX 0
+#define LP_DEFAULT_TIME 5 /* seconds */
+#define LP_PKT_LEN 60
+
+#define TEST_TIME_MULTIPLE 5
+static int hinic3_run_lp_test(struct hinic3_nic_dev *nic_dev, u32 test_time)
+{
+ u8 *lb_test_rx_buf = nic_dev->lb_test_rx_buf;
+ struct net_device *netdev = nic_dev->netdev;
+ u32 cnt = test_time * TEST_TIME_MULTIPLE;
+ struct sk_buff *skb_tmp = NULL;
+ struct ethhdr *eth_hdr = NULL;
+ struct sk_buff *skb = NULL;
+ u8 *test_data = NULL;
+ u32 i;
+ u8 j;
+
+ skb_tmp = alloc_skb(LP_PKT_LEN, GFP_ATOMIC);
+ if (!skb_tmp) {
+ nicif_err(nic_dev, drv, netdev,
+ "Alloc xmit skb template failed for loopback test\n");
+ return -ENOMEM;
+ }
+
+ eth_hdr = __skb_put(skb_tmp, ETH_HLEN);
+ eth_hdr->h_proto = htons(ETH_P_ARP);
+ ether_addr_copy(eth_hdr->h_dest, nic_dev->netdev->dev_addr);
+ eth_zero_addr(eth_hdr->h_source);
+ skb_reset_mac_header(skb_tmp);
+
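+ /* fill the payload with an offset-based byte pattern; the last byte of
+ * each transmitted copy is overwritten with its packet index below and
+ * both are checked when the packets are looped back
+ */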
+ test_data = __skb_put(skb_tmp, LP_PKT_LEN - ETH_HLEN);
+ for (i = ETH_HLEN; i < LP_PKT_LEN; i++)
+ test_data[i] = i & 0xFF;
+
+ skb_tmp->queue_mapping = 0;
+ skb_tmp->dev = netdev;
+ skb_tmp->protocol = htons(ETH_P_ARP);
+
+ for (i = 0; i < cnt; i++) {
+ nic_dev->lb_test_rx_idx = 0;
+ memset(lb_test_rx_buf, 0, LP_PKT_CNT * LP_PKT_LEN);
+
+ for (j = 0; j < LP_PKT_CNT; j++) {
+ skb = pskb_copy(skb_tmp, GFP_ATOMIC);
+ if (!skb) {
+ dev_kfree_skb_any(skb_tmp);
+ nicif_err(nic_dev, drv, netdev,
+ "Copy skb failed for loopback test\n");
+ return -ENOMEM;
+ }
+
+ /* mark index for every pkt */
+ skb->data[LP_PKT_LEN - 1] = j;
+
+ if (hinic3_lb_xmit_frame(skb, netdev)) {
+ dev_kfree_skb_any(skb);
+ dev_kfree_skb_any(skb_tmp);
+ nicif_err(nic_dev, drv, netdev,
+ "Xmit pkt failed for loopback test\n");
+ return -EBUSY;
+ }
+ }
+
+ /* wait until all pkts have been received into the RX buffer */
+ msleep(HINIC3_WAIT_PKTS_TO_RX_BUFFER);
+
+ for (j = 0; j < LP_PKT_CNT; j++) {
+ if (memcmp((lb_test_rx_buf + (j * LP_PKT_LEN)),
+ skb_tmp->data, (LP_PKT_LEN - 1)) ||
+ (*(lb_test_rx_buf + ((j * LP_PKT_LEN) +
+ (LP_PKT_LEN - 1))) != j)) {
+ dev_kfree_skb_any(skb_tmp);
+ nicif_err(nic_dev, drv, netdev,
+ "Compare pkt failed in loopback test(index=0x%02x, data[%d]=0x%02x)\n",
+ (j + (i * LP_PKT_CNT)),
+ (LP_PKT_LEN - 1),
+ *(lb_test_rx_buf +
+ (((j * LP_PKT_LEN) +
+ (LP_PKT_LEN - 1)))));
+ return -EIO;
+ }
+ }
+ }
+
+ dev_kfree_skb_any(skb_tmp);
+ nicif_info(nic_dev, drv, netdev, "Loopback test succeeded\n");
+ return 0;
+}
+
+enum diag_test_index {
+ INTERNAL_LP_TEST = 0,
+ EXTERNAL_LP_TEST = 1,
+ DIAG_TEST_MAX = 2,
+};
+
+#define HINIC3_INTERNAL_LP_MODE 5
+static int do_lp_test(struct hinic3_nic_dev *nic_dev, u32 *flags, u32 test_time,
+ enum diag_test_index *test_index)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 *lb_test_rx_buf = NULL;
+ int err = 0;
+
+ if (!(*flags & ETH_TEST_FL_EXTERNAL_LB)) {
+ *test_index = INTERNAL_LP_TEST;
+ if (hinic3_set_loopback_mode(nic_dev->hwdev,
+ HINIC3_INTERNAL_LP_MODE, true)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to set port loopback mode before loopback test\n");
+ return -EFAULT;
+ }
+
+ /* sleep 5000 ms, wait for the port to stop receiving frames */
+ msleep(5000);
+ } else {
+ *test_index = EXTERNAL_LP_TEST;
+ }
+
+ lb_test_rx_buf = vmalloc(LP_PKT_CNT * LP_PKT_LEN);
+ if (!lb_test_rx_buf) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to alloc RX buffer for loopback test\n");
+ err = -ENOMEM;
+ } else {
+ nic_dev->lb_test_rx_buf = lb_test_rx_buf;
+ nic_dev->lb_pkt_len = LP_PKT_LEN;
+ set_bit(HINIC3_LP_TEST, &nic_dev->flags);
+
+ if (hinic3_run_lp_test(nic_dev, test_time))
+ err = -EFAULT;
+
+ clear_bit(HINIC3_LP_TEST, &nic_dev->flags);
+ msleep(HINIC3_WAIT_CLEAR_LP_TEST);
+ vfree(lb_test_rx_buf);
+ nic_dev->lb_test_rx_buf = NULL;
+ }
+
+ if (!(*flags & ETH_TEST_FL_EXTERNAL_LB)) {
+ if (hinic3_set_loopback_mode(nic_dev->hwdev,
+ HINIC3_INTERNAL_LP_MODE, false)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to cancel port loopback mode after loopback test\n");
+ err = -EFAULT;
+ }
+ } else {
+ *flags |= ETH_TEST_FL_EXTERNAL_LB_DONE;
+ }
+
+ return err;
+}
+
+static void hinic3_lp_test(struct net_device *netdev, struct ethtool_test *eth_test,
+ u64 *data, u32 test_time)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ enum diag_test_index test_index = 0;
+ u8 link_status = 0;
+ int err;
+ u32 test_time_real = test_time;
+
+ /* loopback test is not supported when the netdev is closed */
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Loopback test is not supported when netdev is closed\n");
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ data[PORT_DOWN_ERR_IDX] = 1;
+ return;
+ }
+ if (test_time_real == 0)
+ test_time_real = LP_DEFAULT_TIME;
+
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+
+ err = do_lp_test(nic_dev, ð_test->flags, test_time_real, &test_index);
+ if (err) {
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ data[test_index] = 1;
+ }
+
+ netif_tx_wake_all_queues(netdev);
+
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_status);
+ if (!err && link_status)
+ netif_carrier_on(netdev);
+}
+
+static void hinic3_diag_test(struct net_device *netdev,
+ struct ethtool_test *eth_test, u64 *data)
+{
+ memset(data, 0, DIAG_TEST_MAX * sizeof(u64));
+
+ hinic3_lp_test(netdev, eth_test, data, 0);
+}
+
+#if defined(ETHTOOL_GFECPARAM) && defined(ETHTOOL_SFECPARAM)
+static int hinic3_get_fecparam(struct net_device *netdev, struct ethtool_fecparam *fecparam)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 advertised_fec = 0;
+ u8 supported_fec = 0;
+ int err;
+
+ if (fecparam->cmd != ETHTOOL_GFECPARAM) {
+ nicif_err(nic_dev, drv, netdev,
+ "Get fecparam cmd error, expected: 0x%x, actual: 0x%x\n",
+ ETHTOOL_GFECPARAM, fecparam->cmd);
+ return -EINVAL;
+ }
+
+ err = get_fecparam(nic_dev->hwdev, &advertised_fec, &supported_fec);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Get fec param failed\n");
+ return err;
+ }
+ fecparam->active_fec = (u32)advertised_fec;
+ fecparam->fec = (u32)supported_fec;
+
+ nicif_info(nic_dev, drv, netdev, "Get fec param success\n");
+ return 0;
+}
+
+static int hinic3_set_fecparam(struct net_device *netdev, struct ethtool_fecparam *fecparam)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ if (fecparam->cmd != ETHTOOL_SFECPARAM) {
+ nicif_err(nic_dev, drv, netdev,
+ "Set fecparam cmd error, expected: 0x%x, actual: 0x%x\n",
+ ETHTOOL_SFECPARAM, fecparam->cmd);
+ return -EINVAL;
+ }
+
+ err = set_fecparam(nic_dev->hwdev, (u8)fecparam->fec);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Set fec param failed\n");
+ return err;
+ }
+
+ nicif_info(nic_dev, drv, netdev, "Set fec param success\n");
+ return 0;
+}
+#endif
+
+static const struct ethtool_ops hinic3_ethtool_ops = {
+#ifdef SUPPORTED_COALESCE_PARAMS
+ .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
+ ETHTOOL_COALESCE_PKT_RATE_RX_USECS |
+ ETHTOOL_COALESCE_MAX_FRAMES |
+ ETHTOOL_COALESCE_USECS_LOW_HIGH |
+ ETHTOOL_COALESCE_MAX_FRAMES_LOW_HIGH,
+#endif
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+ .get_link_ksettings = hinic3_get_link_ksettings,
+ .set_link_ksettings = hinic3_set_link_ksettings,
+#endif
+#endif
+#ifndef HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+ .get_settings = hinic3_get_settings,
+ .set_settings = hinic3_set_settings,
+#endif
+
+ .get_drvinfo = hinic3_get_drvinfo,
+ .get_msglevel = hinic3_get_msglevel,
+ .set_msglevel = hinic3_set_msglevel,
+ .nway_reset = hinic3_nway_reset,
+#ifdef CONFIG_MODULE_PROF
+ .get_link = hinic3_get_link,
+#else
+ .get_link = ethtool_op_get_link,
+#endif
+ .get_ringparam = hinic3_get_ringparam,
+ .set_ringparam = hinic3_set_ringparam,
+ .get_pauseparam = hinic3_get_pauseparam,
+ .set_pauseparam = hinic3_set_pauseparam,
+ .get_sset_count = hinic3_get_sset_count,
+ .get_ethtool_stats = hinic3_get_ethtool_stats,
+ .get_strings = hinic3_get_strings,
+
+ .self_test = hinic3_diag_test,
+
+#ifndef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+#ifdef HAVE_ETHTOOL_SET_PHYS_ID
+ .set_phys_id = hinic3_set_phys_id,
+#else
+ .phys_id = hinic3_phys_id,
+#endif
+#endif
+
+ .get_coalesce = hinic3_get_coalesce,
+ .set_coalesce = hinic3_set_coalesce,
+#if defined(ETHTOOL_PERQUEUE) && defined(ETHTOOL_GCOALESCE)
+ .get_per_queue_coalesce = hinic3_get_per_queue_coalesce,
+ .set_per_queue_coalesce = hinic3_set_per_queue_coalesce,
+#endif
+
+#if defined(ETHTOOL_GFECPARAM) && defined(ETHTOOL_SFECPARAM)
+ .get_fecparam = hinic3_get_fecparam,
+ .set_fecparam = hinic3_set_fecparam,
+#endif
+
+ .get_rxnfc = hinic3_get_rxnfc,
+ .set_rxnfc = hinic3_set_rxnfc,
+ .get_priv_flags = hinic3_get_priv_flags,
+ .set_priv_flags = hinic3_set_priv_flags,
+
+#ifndef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+
+#ifdef ETHTOOL_GMODULEEEPROM
+ .get_module_info = hinic3_get_module_info,
+ .get_module_eeprom = hinic3_get_module_eeprom,
+#endif
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+};
+
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+static const struct ethtool_ops_ext hinic3_ethtool_ops_ext = {
+ .size = sizeof(struct ethtool_ops_ext),
+ .set_phys_id = hinic3_set_phys_id,
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+#ifdef ETHTOOL_GMODULEEEPROM
+ .get_module_info = hinic3_get_module_info,
+ .get_module_eeprom = hinic3_get_module_eeprom,
+#endif
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+};
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+
+static const struct ethtool_ops hinic3vf_ethtool_ops = {
+#ifdef SUPPORTED_COALESCE_PARAMS
+ .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
+ ETHTOOL_COALESCE_PKT_RATE_RX_USECS |
+ ETHTOOL_COALESCE_MAX_FRAMES |
+ ETHTOOL_COALESCE_USECS_LOW_HIGH |
+ ETHTOOL_COALESCE_MAX_FRAMES_LOW_HIGH,
+#endif
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+ .get_link_ksettings = hinic3_get_link_ksettings,
+#endif
+#else
+ .get_settings = hinic3_get_settings,
+#endif
+ .get_drvinfo = hinic3_get_drvinfo,
+ .get_msglevel = hinic3_get_msglevel,
+ .set_msglevel = hinic3_set_msglevel,
+ .get_link = ethtool_op_get_link,
+ .get_ringparam = hinic3_get_ringparam,
+
+ .set_ringparam = hinic3_set_ringparam,
+ .get_sset_count = hinic3_get_sset_count,
+ .get_ethtool_stats = hinic3_get_ethtool_stats,
+ .get_strings = hinic3_get_strings,
+
+ .get_coalesce = hinic3_get_coalesce,
+ .set_coalesce = hinic3_set_coalesce,
+#if defined(ETHTOOL_PERQUEUE) && defined(ETHTOOL_GCOALESCE)
+ .get_per_queue_coalesce = hinic3_get_per_queue_coalesce,
+ .set_per_queue_coalesce = hinic3_set_per_queue_coalesce,
+#endif
+
+#if defined(ETHTOOL_GFECPARAM) && defined(ETHTOOL_SFECPARAM)
+ .get_fecparam = hinic3_get_fecparam,
+ .set_fecparam = hinic3_set_fecparam,
+#endif
+
+ .get_rxnfc = hinic3_get_rxnfc,
+ .set_rxnfc = hinic3_set_rxnfc,
+ .get_priv_flags = hinic3_get_priv_flags,
+ .set_priv_flags = hinic3_set_priv_flags,
+
+#ifndef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+};
+
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+static const struct ethtool_ops_ext hinic3vf_ethtool_ops_ext = {
+ .size = sizeof(struct ethtool_ops_ext),
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+};
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+
+void hinic3_set_ethtool_ops(struct net_device *netdev)
+{
+ SET_ETHTOOL_OPS(netdev, &hinic3_ethtool_ops);
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ set_ethtool_ops_ext(netdev, &hinic3_ethtool_ops_ext);
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+}
+
+void hinic3vf_set_ethtool_ops(struct net_device *netdev)
+{
+ SET_ETHTOOL_OPS(netdev, &hinic3vf_ethtool_ops);
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ set_ethtool_ops_ext(netdev, &hinic3vf_ethtool_ops_ext);
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c
new file mode 100644
index 0000000..ec89f62
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c
@@ -0,0 +1,1320 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+
+#define HINIC_SET_LINK_STR_LEN 128
+#define HINIC_ETHTOOL_FEC_INFO_LEN 6
+#define HINIC_SUPPORTED_FEC_CMD 0
+#define HINIC_ADVERTISED_FEC_CMD 1
+
+struct hinic3_ethtool_fec {
+ u8 hinic_fec_offset;
+ u8 ethtool_bit_offset;
+};
+
+static struct hinic3_ethtool_fec hinic3_ethtool_fec_info[HINIC_ETHTOOL_FEC_INFO_LEN] = {
+ /* ethtool has no corresponding enumeration value */
+ {PORT_FEC_NOT_SET, 0xFF},
+ {PORT_FEC_RSFEC, 0x32}, /* ETHTOOL_LINK_MODE_FEC_RS_BIT */
+ {PORT_FEC_BASEFEC, 0x33}, /* ETHTOOL_LINK_MODE_FEC_BASER_BIT */
+ {PORT_FEC_NOFEC, 0x31}, /* ETHTOOL_LINK_MODE_FEC_NONE_BIT */
+ /* ETHTOOL_LINK_MODE_FEC_LLRS_BIT: Available only in later versions */
+ {PORT_FEC_LLRSFEC, 0x4A},
+ {PORT_FEC_AUTO, 0XFF} /* ethtool has no corresponding enumeration value */
+};
+
+struct hinic3_stats {
+ char name[ETH_GSTRING_LEN];
+ u32 size;
+ int offset;
+};
+
+struct hinic3_netdev_link_count_str {
+ u64 link_down_events_phy;
+};
+
+#define HINIC3_NETDEV_LINK_COUNT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_netdev_link_count_str, _stat_item), \
+ .offset = offsetof(struct hinic3_netdev_link_count_str, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_netdev_link_count[] = {
+ HINIC3_NETDEV_LINK_COUNT(link_down_events_phy),
+};
+
+#define HINIC3_NETDEV_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct rtnl_link_stats64, _stat_item), \
+ .offset = offsetof(struct rtnl_link_stats64, _stat_item) \
+}
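+/* Each *_STAT() entry records a counter's name together with its field width
+ * and offset so the value can later be copied out generically.
+ */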
+
+static struct hinic3_stats hinic3_netdev_stats[] = {
+ HINIC3_NETDEV_STAT(rx_packets),
+ HINIC3_NETDEV_STAT(tx_packets),
+ HINIC3_NETDEV_STAT(rx_bytes),
+ HINIC3_NETDEV_STAT(tx_bytes),
+ HINIC3_NETDEV_STAT(rx_errors),
+ HINIC3_NETDEV_STAT(tx_errors),
+ HINIC3_NETDEV_STAT(rx_dropped),
+ HINIC3_NETDEV_STAT(tx_dropped),
+ HINIC3_NETDEV_STAT(multicast),
+ HINIC3_NETDEV_STAT(collisions),
+ HINIC3_NETDEV_STAT(rx_length_errors),
+ HINIC3_NETDEV_STAT(rx_over_errors),
+ HINIC3_NETDEV_STAT(rx_crc_errors),
+ HINIC3_NETDEV_STAT(rx_frame_errors),
+ HINIC3_NETDEV_STAT(rx_fifo_errors),
+ HINIC3_NETDEV_STAT(rx_missed_errors),
+ HINIC3_NETDEV_STAT(tx_aborted_errors),
+ HINIC3_NETDEV_STAT(tx_carrier_errors),
+ HINIC3_NETDEV_STAT(tx_fifo_errors),
+ HINIC3_NETDEV_STAT(tx_heartbeat_errors),
+};
+
+#define HINIC3_NIC_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_nic_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_nic_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_nic_dev_stats[] = {
+ HINIC3_NIC_STAT(netdev_tx_timeout),
+};
+
+static struct hinic3_stats hinic3_nic_dev_stats_extern[] = {
+ HINIC3_NIC_STAT(tx_carrier_off_drop),
+ HINIC3_NIC_STAT(tx_invalid_qid),
+ HINIC3_NIC_STAT(rsvd1),
+ HINIC3_NIC_STAT(rsvd2),
+};
+
+#define HINIC3_RXQ_STAT(_stat_item) { \
+ .name = "rxq%d_"#_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_rxq_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_rxq_stats, _stat_item) \
+}
+
+#define HINIC3_TXQ_STAT(_stat_item) { \
+ .name = "txq%d_"#_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_txq_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_txq_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_rx_queue_stats[] = {
+ HINIC3_RXQ_STAT(packets),
+ HINIC3_RXQ_STAT(bytes),
+ HINIC3_RXQ_STAT(errors),
+ HINIC3_RXQ_STAT(csum_errors),
+ HINIC3_RXQ_STAT(other_errors),
+ HINIC3_RXQ_STAT(dropped),
+#ifdef HAVE_XDP_SUPPORT
+ HINIC3_RXQ_STAT(xdp_dropped),
+#endif
+ HINIC3_RXQ_STAT(rx_buf_empty),
+};
+
+static struct hinic3_stats hinic3_rx_queue_stats_extern[] = {
+ HINIC3_RXQ_STAT(alloc_skb_err),
+ HINIC3_RXQ_STAT(alloc_rx_buf_err),
+ HINIC3_RXQ_STAT(xdp_large_pkt),
+ HINIC3_RXQ_STAT(restore_drop_sge),
+ HINIC3_RXQ_STAT(rsvd2),
+};
+
+static struct hinic3_stats hinic3_tx_queue_stats[] = {
+ HINIC3_TXQ_STAT(packets),
+ HINIC3_TXQ_STAT(bytes),
+ HINIC3_TXQ_STAT(busy),
+ HINIC3_TXQ_STAT(wake),
+ HINIC3_TXQ_STAT(dropped),
+};
+
+static struct hinic3_stats hinic3_tx_queue_stats_extern[] = {
+ HINIC3_TXQ_STAT(skb_pad_err),
+ HINIC3_TXQ_STAT(frag_len_overflow),
+ HINIC3_TXQ_STAT(offload_cow_skb_err),
+ HINIC3_TXQ_STAT(map_frag_err),
+ HINIC3_TXQ_STAT(unknown_tunnel_pkt),
+ HINIC3_TXQ_STAT(frag_size_err),
+ HINIC3_TXQ_STAT(rsvd1),
+ HINIC3_TXQ_STAT(rsvd2),
+};
+
+#define HINIC3_FUNC_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_vport_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_vport_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_function_stats[] = {
+ HINIC3_FUNC_STAT(tx_unicast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_unicast_bytes_vport),
+ HINIC3_FUNC_STAT(tx_multicast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_multicast_bytes_vport),
+ HINIC3_FUNC_STAT(tx_broadcast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_broadcast_bytes_vport),
+
+ HINIC3_FUNC_STAT(rx_unicast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_unicast_bytes_vport),
+ HINIC3_FUNC_STAT(rx_multicast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_multicast_bytes_vport),
+ HINIC3_FUNC_STAT(rx_broadcast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_broadcast_bytes_vport),
+
+ HINIC3_FUNC_STAT(tx_discard_vport),
+ HINIC3_FUNC_STAT(rx_discard_vport),
+ HINIC3_FUNC_STAT(tx_err_vport),
+ HINIC3_FUNC_STAT(rx_err_vport),
+};
+
+#define HINIC3_PORT_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct mag_cmd_port_stats, _stat_item), \
+ .offset = offsetof(struct mag_cmd_port_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_port_stats[] = {
+ HINIC3_PORT_STAT(mac_tx_fragment_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_undersize_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_undermin_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_64_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_65_127_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_128_255_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_256_511_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_512_1023_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1024_1518_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_2047_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_2048_4095_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_4096_8191_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_8192_9216_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_9217_12287_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_12288_16383_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_max_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_max_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_oversize_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_jabber_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_bad_oct_num),
+ HINIC3_PORT_STAT(mac_tx_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_good_oct_num),
+ HINIC3_PORT_STAT(mac_tx_total_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_total_oct_num),
+ HINIC3_PORT_STAT(mac_tx_uni_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_multi_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_broad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pause_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri0_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri1_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri2_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri3_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri4_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri5_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri6_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri7_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_control_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_err_all_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_from_app_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_from_app_bad_pkt_num),
+
+ HINIC3_PORT_STAT(mac_rx_fragment_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_undersize_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_undermin_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_64_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_65_127_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_128_255_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_256_511_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_512_1023_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1024_1518_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_2047_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_2048_4095_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_4096_8191_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_8192_9216_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_9217_12287_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_12288_16383_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_max_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_max_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_oversize_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_jabber_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_bad_oct_num),
+ HINIC3_PORT_STAT(mac_rx_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_good_oct_num),
+ HINIC3_PORT_STAT(mac_rx_total_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_total_oct_num),
+ HINIC3_PORT_STAT(mac_rx_uni_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_multi_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_broad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pause_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri0_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri1_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri2_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri3_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri4_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri5_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri6_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri7_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_control_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_sym_err_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_fcs_err_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_send_app_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_send_app_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_unfilter_pkt_num),
+};
+
+#define HINIC3_RSFEC_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct mag_cmd_rsfec_stats, _stat_item), \
+ .offset = offsetof(struct mag_cmd_rsfec_stats, _stat_item) \
+}
+
+static struct hinic3_stats g_hinic3_rsfec_stats[] = {
+ HINIC3_RSFEC_STAT(rx_err_lane_phy),
+};
+
+#define HINIC3_FGPA_PORT_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_phy_fpga_port_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_phy_fpga_port_stats, _stat_item) \
+}
+
+static char g_hinic_priv_flags_strings[][ETH_GSTRING_LEN] = {
+ "Symmetric-RSS",
+ "Force-Link-up",
+ "Rxq_Recovery",
+};
+
+u32 hinic3_get_io_stats_size(const struct hinic3_nic_dev *nic_dev)
+{
+ u32 count;
+
+ count = (u32)(ARRAY_LEN(hinic3_nic_dev_stats) +
+ ARRAY_LEN(hinic3_nic_dev_stats_extern) +
+ (ARRAY_LEN(hinic3_tx_queue_stats) +
+ ARRAY_LEN(hinic3_tx_queue_stats_extern) +
+ ARRAY_LEN(hinic3_rx_queue_stats) +
+ ARRAY_LEN(hinic3_rx_queue_stats_extern)) * nic_dev->max_qps);
+
+ return count;
+}
+
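+/* Dereference ptr as a u64/u32/u16/u8 according to size and widen it to u64. */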
+#define GET_VALUE_OF_PTR(size, ptr) ( \
+ (size) == sizeof(u64) ? *(u64 *)(ptr) : \
+ (size) == sizeof(u32) ? *(u32 *)(ptr) : \
+ (size) == sizeof(u16) ? *(u16 *)(ptr) : *(u8 *)(ptr) \
+)
+
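+/* Pack every entry of a stats descriptor array into show items, filling name
+ * and value and advancing item_idx.
+ */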
+#define DEV_STATS_PACK(items, item_idx, array, stats_ptr) do { \
+ int j; \
+ for (j = 0; j < ARRAY_LEN(array); j++) { \
+ memcpy((items)[item_idx].name, (array)[j].name, \
+ HINIC3_SHOW_ITEM_LEN); \
+ (items)[item_idx].hexadecimal = 0; \
+ (items)[item_idx].value = \
+ GET_VALUE_OF_PTR((array)[j].size, \
+ (char *)(stats_ptr) + (array)[j].offset); \
+ (item_idx)++; \
+ } \
+} while (0)
+
+int hinic3_rx_queue_stat_pack(struct hinic3_show_item *item,
+ struct hinic3_stats *stat, struct hinic3_rxq_stats *rxq_stats, u16 qid)
+{
+ int ret;
+
+ ret = snprintf(item->name, HINIC3_SHOW_ITEM_LEN, stat->name, qid);
+ if (ret < 0)
+ return -EINVAL;
+
+ item->hexadecimal = 0;
+ item->value = GET_VALUE_OF_PTR(stat->size, (char *)(rxq_stats) + stat->offset);
+
+ return 0;
+}
+
+int hinic3_tx_queue_stat_pack(struct hinic3_show_item *item,
+ struct hinic3_stats *stat, struct hinic3_txq_stats *txq_stats, u16 qid)
+{
+ int ret;
+
+ ret = snprintf(item->name, HINIC3_SHOW_ITEM_LEN, stat->name, qid);
+ if (ret < 0)
+ return -EINVAL;
+
+ item->hexadecimal = 0;
+ item->value = GET_VALUE_OF_PTR(stat->size, (char *)(txq_stats) + stat->offset);
+
+ return 0;
+}
+
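+/* Pack device-level counters and per-queue tx/rx counters into
+ * hinic3_show_item entries.
+ */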
+int hinic3_get_io_stats(const struct hinic3_nic_dev *nic_dev, void *stats)
+{
+ struct hinic3_show_item *items = stats;
+ int item_idx = 0;
+ u16 qid;
+ int idx;
+ int ret;
+
+ DEV_STATS_PACK(items, item_idx, hinic3_nic_dev_stats, &nic_dev->stats);
+ DEV_STATS_PACK(items, item_idx, hinic3_nic_dev_stats_extern, &nic_dev->stats);
+
+ for (qid = 0; qid < nic_dev->max_qps; qid++) {
+ for (idx = 0; idx < ARRAY_LEN(hinic3_tx_queue_stats); idx++) {
+ ret = hinic3_tx_queue_stat_pack(&items[item_idx++], &hinic3_tx_queue_stats[idx],
+ &nic_dev->txqs[qid].txq_stats, qid);
+ if (ret != 0)
+ return -EINVAL;
+ }
+
+ for (idx = 0; idx < ARRAY_LEN(hinic3_tx_queue_stats_extern); idx++) {
+ ret = hinic3_tx_queue_stat_pack(&items[item_idx++], &hinic3_tx_queue_stats_extern[idx],
+ &nic_dev->txqs[qid].txq_stats, qid);
+ if (ret != 0)
+ return -EINVAL;
+ }
+ }
+
+ for (qid = 0; qid < nic_dev->max_qps; qid++) {
+ for (idx = 0; idx < ARRAY_LEN(hinic3_rx_queue_stats); idx++) {
+ ret = hinic3_rx_queue_stat_pack(&items[item_idx++], &hinic3_rx_queue_stats[idx],
+ &nic_dev->rxqs[qid].rxq_stats, qid);
+ if (ret != 0)
+ return -EINVAL;
+ }
+
+ for (idx = 0; idx < ARRAY_LEN(hinic3_rx_queue_stats_extern); idx++) {
+ ret = hinic3_rx_queue_stat_pack(&items[item_idx++], &hinic3_rx_queue_stats_extern[idx],
+ &nic_dev->rxqs[qid].rxq_stats, qid);
+ if (ret != 0)
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static char g_hinic3_test_strings[][ETH_GSTRING_LEN] = {
+ "Internal lb test (on/offline)",
+ "External lb test (external_lb)",
+};
+
+int hinic3_get_sset_count(struct net_device *netdev, int sset)
+{
+ int count = 0, q_num = 0;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ switch (sset) {
+ case ETH_SS_TEST:
+ return ARRAY_LEN(g_hinic3_test_strings);
+ case ETH_SS_STATS:
+ q_num = nic_dev->q_params.num_qps;
+ count = ARRAY_LEN(hinic3_netdev_stats) +
+ ARRAY_LEN(hinic3_nic_dev_stats) +
+ ARRAY_LEN(hinic3_netdev_link_count) +
+ ARRAY_LEN(hinic3_function_stats) +
+ (ARRAY_LEN(hinic3_tx_queue_stats) +
+ ARRAY_LEN(hinic3_rx_queue_stats)) * q_num;
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ count += ARRAY_LEN(hinic3_port_stats);
+ count += ARRAY_LEN(g_hinic3_rsfec_stats);
+ }
+
+ return count;
+ case ETH_SS_PRIV_FLAGS:
+ return ARRAY_LEN(g_hinic_priv_flags_strings);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
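+/* Copy per-queue tx/rx software counters into the ethtool stats data array. */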
+static void get_drv_queue_stats(struct hinic3_nic_dev *nic_dev, u64 *data)
+{
+ struct hinic3_txq_stats txq_stats;
+ struct hinic3_rxq_stats rxq_stats;
+ u16 i = 0, j = 0, qid = 0;
+ char *p = NULL;
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ if (!nic_dev->txqs)
+ break;
+
+ hinic3_txq_get_stats(&nic_dev->txqs[qid], &txq_stats);
+ for (j = 0; j < ARRAY_LEN(hinic3_tx_queue_stats); j++, i++) {
+ p = (char *)(&txq_stats) +
+ hinic3_tx_queue_stats[j].offset;
+ data[i] = (hinic3_tx_queue_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+ }
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ if (!nic_dev->rxqs)
+ break;
+
+ hinic3_rxq_get_stats(&nic_dev->rxqs[qid], &rxq_stats);
+ for (j = 0; j < ARRAY_LEN(hinic3_rx_queue_stats); j++, i++) {
+ p = (char *)(&rxq_stats) +
+ hinic3_rx_queue_stats[j].offset;
+ data[i] = (hinic3_rx_queue_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+ }
+}
+
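+/* Query MAC port counters from firmware and append them to the ethtool data
+ * array; returns the number of entries written.
+ */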
+static u16 get_ethtool_port_stats(struct hinic3_nic_dev *nic_dev, u64 *data)
+{
+ struct mag_cmd_port_stats *port_stats = NULL;
+ char *p = NULL;
+ u16 i = 0, j = 0;
+ int err;
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to malloc port stats\n");
+ memset(&data[i], 0,
+ ARRAY_LEN(hinic3_port_stats) * sizeof(*data));
+ i += ARRAY_LEN(hinic3_port_stats);
+ return i;
+ }
+
+ err = hinic3_get_phy_port_stats(nic_dev->hwdev, port_stats);
+ if (err)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get port stats from fw\n");
+
+ for (j = 0; j < ARRAY_LEN(hinic3_port_stats); j++, i++) {
+ p = (char *)(port_stats) + hinic3_port_stats[j].offset;
+ data[i] = (hinic3_port_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ kfree(port_stats);
+
+ return i;
+}
+
+static u16 get_ethtool_rsfec_stats(struct hinic3_nic_dev *nic_dev, u64 *data)
+{
+ struct mag_cmd_rsfec_stats *port_stats = NULL;
+ char *p = NULL;
+ u16 i = 0, j = 0;
+ int err;
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Failed to malloc rsfec stats\n");
+ memset(&data[i], 0,
+ ARRAY_LEN(g_hinic3_rsfec_stats) * sizeof(*data));
+ i += ARRAY_LEN(g_hinic3_rsfec_stats);
+ return i;
+ }
+
+ err = hinic3_get_phy_rsfec_stats(nic_dev->hwdev, port_stats);
+ if (err)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get rsfec stats from fw\n");
+
+ for (j = 0; j < ARRAY_LEN(g_hinic3_rsfec_stats); j++, i++) {
+ p = (char *)(port_stats) + g_hinic3_rsfec_stats[j].offset;
+ data[i] = (g_hinic3_rsfec_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ kfree(port_stats);
+
+ return i;
+}
+
+void hinic3_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+#ifdef HAVE_NDO_GET_STATS64
+ struct rtnl_link_stats64 temp;
+ const struct rtnl_link_stats64 *net_stats = NULL;
+#else
+ const struct net_device_stats *net_stats = NULL;
+#endif
+ struct hinic3_nic_stats *nic_stats = NULL;
+
+ struct hinic3_vport_stats vport_stats = {0};
+ u16 i = 0, j = 0;
+ char *p = NULL;
+ int err;
+ int link_down_events_phy_tmp = 0;
+ struct hinic3_netdev_link_count_str link_count = {0};
+
+#ifdef HAVE_NDO_GET_STATS64
+ net_stats = dev_get_stats(netdev, &temp);
+#else
+ net_stats = dev_get_stats(netdev);
+#endif
+ for (j = 0; j < ARRAY_LEN(hinic3_netdev_stats); j++, i++) {
+ p = (char *)(net_stats) + hinic3_netdev_stats[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_netdev_stats[j].size, p);
+ }
+
+ nic_stats = &nic_dev->stats;
+ for (j = 0; j < ARRAY_LEN(hinic3_nic_dev_stats); j++, i++) {
+ p = (char *)(nic_stats) + hinic3_nic_dev_stats[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_nic_dev_stats[j].size, p);
+ }
+
+ err = hinic3_get_link_event_stats(nic_dev->hwdev, &link_down_events_phy_tmp);
+
+ link_count.link_down_events_phy = (u64)link_down_events_phy_tmp;
+ for (j = 0; j < ARRAY_LEN(hinic3_netdev_link_count); j++, i++) {
+ p = (char *)(&link_count) + hinic3_netdev_link_count[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_netdev_link_count[j].size, p);
+ }
+
+ err = hinic3_get_vport_stats(nic_dev->hwdev, hinic3_global_func_id(nic_dev->hwdev),
+ &vport_stats);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to get function stats from fw\n");
+
+ for (j = 0; j < ARRAY_LEN(hinic3_function_stats); j++, i++) {
+ p = (char *)(&vport_stats) + hinic3_function_stats[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_function_stats[j].size, p);
+ }
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ i += get_ethtool_port_stats(nic_dev, data + i);
+ i += get_ethtool_rsfec_stats(nic_dev, data + i);
+ }
+
+ get_drv_queue_stats(nic_dev, data + i);
+}
+
+static u16 get_drv_dev_strings(struct hinic3_nic_dev *nic_dev, char *p)
+{
+ u16 i, cnt = 0;
+
+ for (i = 0; i < ARRAY_LEN(hinic3_netdev_stats); i++) {
+ memcpy(p, hinic3_netdev_stats[i].name,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ for (i = 0; i < ARRAY_LEN(hinic3_nic_dev_stats); i++) {
+ memcpy(p, hinic3_nic_dev_stats[i].name, ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ for (i = 0; i < ARRAY_LEN(hinic3_netdev_link_count); i++) {
+ memcpy(p, hinic3_netdev_link_count[i].name, ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ return cnt;
+}
+
+static u16 get_hw_stats_strings(struct hinic3_nic_dev *nic_dev, char *p)
+{
+ u16 i, cnt = 0;
+
+ for (i = 0; i < ARRAY_LEN(hinic3_function_stats); i++) {
+ memcpy(p, hinic3_function_stats[i].name,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ for (i = 0; i < ARRAY_LEN(hinic3_port_stats); i++) {
+ memcpy(p, hinic3_port_stats[i].name, ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ for (i = 0; i < ARRAY_LEN(g_hinic3_rsfec_stats); i++) {
+ memcpy(p, g_hinic3_rsfec_stats[i].name,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ }
+
+ return cnt;
+}
+
+static u16 get_qp_stats_strings(const struct hinic3_nic_dev *nic_dev, char *p)
+{
+ u16 i = 0, j = 0, cnt = 0;
+ int err;
+
+ for (i = 0; i < nic_dev->q_params.num_qps; i++) {
+ for (j = 0; j < ARRAY_LEN(hinic3_tx_queue_stats); j++) {
+ err = sprintf(p, hinic3_tx_queue_stats[j].name, i);
+ if (err < 0)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to sprintf tx queue stats name, idx_qps: %u, idx_stats: %u\n",
+ i, j);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ }
+
+ for (i = 0; i < nic_dev->q_params.num_qps; i++) {
+ for (j = 0; j < ARRAY_LEN(hinic3_rx_queue_stats); j++) {
+ err = sprintf(p, hinic3_rx_queue_stats[j].name, i);
+ if (err < 0)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to sprintf rx queue stats name, idx_qps: %u, idx_stats: %u\n",
+ i, j);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ }
+
+ return cnt;
+}
+
+void hinic3_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ char *p = (char *)data;
+ u16 offset = 0;
+
+ switch (stringset) {
+ case ETH_SS_TEST:
+ memcpy(data, *g_hinic3_test_strings, sizeof(g_hinic3_test_strings));
+ return;
+ case ETH_SS_STATS:
+ offset = get_drv_dev_strings(nic_dev, p);
+ offset += get_hw_stats_strings(nic_dev,
+ p + offset * ETH_GSTRING_LEN);
+ get_qp_stats_strings(nic_dev, p + offset * ETH_GSTRING_LEN);
+
+ return;
+ case ETH_SS_PRIV_FLAGS:
+ memcpy(data, g_hinic_priv_flags_strings,
+ sizeof(g_hinic_priv_flags_strings));
+ return;
+ default:
+ nicif_err(nic_dev, drv, netdev,
+ "Invalid string set %u.", stringset);
+ return;
+ }
+}
+
+static const u32 hinic3_mag_link_mode_ge[] = {
+ ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
+ ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
+ ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_10ge_base_r[] = {
+ ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseR_FEC_BIT,
+ ETHTOOL_LINK_MODE_10000baseCR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseLR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseLRM_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_25ge_base_r[] = {
+ ETHTOOL_LINK_MODE_25000baseCR_Full_BIT,
+ ETHTOOL_LINK_MODE_25000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_40ge_base_r4[] = {
+ ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_50ge_base_r[] = {
+ ETHTOOL_LINK_MODE_50000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseCR_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_50ge_base_r2[] = {
+ ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_100ge_base_r[] = {
+ ETHTOOL_LINK_MODE_100000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseCR_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_100ge_base_r2[] = {
+ ETHTOOL_LINK_MODE_100000baseKR2_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR2_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseCR2_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_100ge_base_r4[] = {
+ ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_200ge_base_r2[] = {
+ ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseSR2_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseCR2_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_200ge_base_r4[] = {
+ ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT,
+};
+
+struct hw2ethtool_link_mode {
+ const u32 *link_mode_bit_arr;
+ u32 arr_size;
+ u32 speed;
+};
+
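+/* Map each firmware link mode (LINK_MODE_*) to its ethtool link-mode bits and
+ * nominal speed.
+ */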
+static const struct hw2ethtool_link_mode
+ hw2ethtool_link_mode_table[LINK_MODE_MAX_NUMBERS] = {
+ [LINK_MODE_GE] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_ge,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_ge),
+ .speed = SPEED_1000,
+ },
+ [LINK_MODE_10GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_10ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_10ge_base_r),
+ .speed = SPEED_10000,
+ },
+ [LINK_MODE_25GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_25ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_25ge_base_r),
+ .speed = SPEED_25000,
+ },
+ [LINK_MODE_40GE_BASE_R4] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_40ge_base_r4,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_40ge_base_r4),
+ .speed = SPEED_40000,
+ },
+ [LINK_MODE_50GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_50ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_50ge_base_r),
+ .speed = SPEED_50000,
+ },
+ [LINK_MODE_50GE_BASE_R2] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_50ge_base_r2,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_50ge_base_r2),
+ .speed = SPEED_50000,
+ },
+ [LINK_MODE_100GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_100ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_100ge_base_r),
+ .speed = SPEED_100000,
+ },
+ [LINK_MODE_100GE_BASE_R2] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_100ge_base_r2,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_100ge_base_r2),
+ .speed = SPEED_100000,
+ },
+ [LINK_MODE_100GE_BASE_R4] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_100ge_base_r4,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_100ge_base_r4),
+ .speed = SPEED_100000,
+ },
+ [LINK_MODE_200GE_BASE_R2] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_200ge_base_r2,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_200ge_base_r2),
+ .speed = SPEED_200000,
+ },
+ [LINK_MODE_200GE_BASE_R4] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_200ge_base_r4,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_200ge_base_r4),
+ .speed = SPEED_200000,
+ },
+};
+
+#define GET_SUPPORTED_MODE 0
+#define GET_ADVERTISED_MODE 1
+
+struct cmd_link_settings {
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(supported);
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
+
+ u32 speed;
+ u8 duplex;
+ u8 port;
+ u8 autoneg;
+};
+
+#define ETHTOOL_ADD_SUPPORTED_LINK_MODE(ecmd, mode) \
+ set_bit(ETHTOOL_LINK_MODE_##mode##_BIT, (ecmd)->supported)
+#define ETHTOOL_ADD_ADVERTISED_LINK_MODE(ecmd, mode) \
+ set_bit(ETHTOOL_LINK_MODE_##mode##_BIT, (ecmd)->advertising)
+
+static void ethtool_add_supported_speed_link_mode(struct cmd_link_settings *link_settings,
+ u32 mode)
+{
+ u32 i;
+
+ for (i = 0; i < hw2ethtool_link_mode_table[mode].arr_size; i++) {
+ if (hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i] >=
+ __ETHTOOL_LINK_MODE_MASK_NBITS)
+ continue;
+ set_bit(hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i],
+ link_settings->supported);
+ }
+}
+
+static void ethtool_add_advertised_speed_link_mode(struct cmd_link_settings *link_settings,
+ u32 mode)
+{
+ u32 i;
+
+ for (i = 0; i < hw2ethtool_link_mode_table[mode].arr_size; i++) {
+ if (hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i] >=
+ __ETHTOOL_LINK_MODE_MASK_NBITS)
+ continue;
+ set_bit(hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i],
+ link_settings->advertising);
+ }
+}
+
+/* Related to enum mag_cmd_port_speed */
+static u32 hw_to_ethtool_speed[] = {
+ (u32)SPEED_UNKNOWN, SPEED_10, SPEED_100, SPEED_1000, SPEED_10000,
+ SPEED_25000, SPEED_40000, SPEED_50000, SPEED_100000, SPEED_200000
+};
+
+static int hinic3_ethtool_to_hw_speed_level(u32 speed)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_LEN(hw_to_ethtool_speed); i++) {
+ if (hw_to_ethtool_speed[i] == speed)
+ break;
+ }
+
+ return i;
+}
+
+static void hinic3_add_ethtool_link_mode(struct cmd_link_settings *link_settings,
+ u32 hw_link_mode, u32 name)
+{
+ u32 link_mode;
+
+ for (link_mode = 0; link_mode < LINK_MODE_MAX_NUMBERS; link_mode++) {
+ if (hw_link_mode & BIT(link_mode)) {
+ if (name == GET_SUPPORTED_MODE)
+ ethtool_add_supported_speed_link_mode(
+ link_settings, link_mode);
+ else
+ ethtool_add_advertised_speed_link_mode(
+ link_settings, link_mode);
+ }
+ }
+}
+
+static int hinic3_link_speed_set(struct hinic3_nic_dev *nic_dev,
+ struct cmd_link_settings *link_settings,
+ struct nic_port_info *port_info)
+{
+ u8 link_state = 0;
+ int err;
+
+ if (port_info->supported_mode != LINK_MODE_UNKNOWN)
+ hinic3_add_ethtool_link_mode(link_settings,
+ port_info->supported_mode,
+ GET_SUPPORTED_MODE);
+ if (port_info->advertised_mode != LINK_MODE_UNKNOWN)
+ hinic3_add_ethtool_link_mode(link_settings,
+ port_info->advertised_mode,
+ GET_ADVERTISED_MODE);
+
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_state);
+ if (!err && link_state) {
+ if (hinic3_get_bond_create_mode(nic_dev->hwdev)) {
+ link_settings->speed = port_info->bond_speed;
+ } else {
+ link_settings->speed =
+ port_info->speed <
+ ARRAY_LEN(hw_to_ethtool_speed) ?
+ hw_to_ethtool_speed[port_info->speed] :
+ (u32)SPEED_UNKNOWN;
+ }
+
+ link_settings->duplex = port_info->duplex;
+ } else {
+ link_settings->speed = (u32)SPEED_UNKNOWN;
+ link_settings->duplex = DUPLEX_UNKNOWN;
+ }
+
+ return 0;
+}
+
+static void hinic3_link_port_type(struct cmd_link_settings *link_settings,
+ u8 port_type)
+{
+ switch (port_type) {
+ case MAG_CMD_WIRE_TYPE_ELECTRIC:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, TP);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, TP);
+ link_settings->port = PORT_TP;
+ break;
+
+ case MAG_CMD_WIRE_TYPE_AOC:
+ case MAG_CMD_WIRE_TYPE_MM:
+ case MAG_CMD_WIRE_TYPE_SM:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, FIBRE);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, FIBRE);
+ link_settings->port = PORT_FIBRE;
+ break;
+
+ case MAG_CMD_WIRE_TYPE_COPPER:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, FIBRE);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, FIBRE);
+ link_settings->port = PORT_DA;
+ break;
+
+ case MAG_CMD_WIRE_TYPE_BACKPLANE:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, Backplane);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Backplane);
+ link_settings->port = PORT_NONE;
+ break;
+
+ default:
+ link_settings->port = PORT_OTHER;
+ break;
+ }
+}
+
+static int get_link_pause_settings(struct hinic3_nic_dev *nic_dev,
+ struct cmd_link_settings *link_settings)
+{
+ struct nic_pause_config nic_pause = {0};
+ int err;
+
+ err = hinic3_get_pause_info(nic_dev->hwdev, &nic_pause);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get pauseparam from hw\n");
+ return err;
+ }
+
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, Pause);
+ if (nic_pause.rx_pause && nic_pause.tx_pause) {
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Pause);
+ } else if (nic_pause.tx_pause) {
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings,
+ Asym_Pause);
+ } else if (nic_pause.rx_pause) {
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Pause);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings,
+ Asym_Pause);
+ }
+
+ return 0;
+}
+
+static bool is_bit_offset_defined(u8 bit_offset)
+{
+ if (bit_offset < __ETHTOOL_LINK_MODE_MASK_NBITS)
+ return true;
+ return false;
+}
+
+static void
+ethtool_add_supported_advertised_fec(struct cmd_link_settings *link_settings,
+ u32 fec, u8 cmd)
+{
+	u8 i;
+
+	for (i = 0; i < HINIC_ETHTOOL_FEC_INFO_LEN; i++) {
+ if ((fec & BIT(hinic3_ethtool_fec_info[i].hinic_fec_offset)) == 0)
+ continue;
+ if ((is_bit_offset_defined(hinic3_ethtool_fec_info[i].ethtool_bit_offset) == true) &&
+ (cmd == HINIC_ADVERTISED_FEC_CMD)) {
+ set_bit(hinic3_ethtool_fec_info[i].ethtool_bit_offset, link_settings->advertising);
+ return; /* There can be only one advertised fec mode. */
+ }
+ if ((is_bit_offset_defined(hinic3_ethtool_fec_info[i].ethtool_bit_offset) == true) &&
+ (cmd == HINIC_SUPPORTED_FEC_CMD))
+ set_bit(hinic3_ethtool_fec_info[i].ethtool_bit_offset, link_settings->supported);
+ }
+}
+
+static void hinic3_link_fec_type(struct cmd_link_settings *link_settings,
+ u32 fec, u32 supported_fec)
+{
+ ethtool_add_supported_advertised_fec(link_settings, supported_fec, HINIC_SUPPORTED_FEC_CMD);
+ ethtool_add_supported_advertised_fec(link_settings, fec, HINIC_ADVERTISED_FEC_CMD);
+}
+
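+/* Collect speed/duplex, port type, FEC, autoneg and (for PF) pause settings
+ * from firmware into a cmd_link_settings structure.
+ */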
+static int get_link_settings(struct net_device *netdev,
+ struct cmd_link_settings *link_settings)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_port_info port_info = {0};
+ int err;
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to get port info\n");
+ return err;
+ }
+
+ err = hinic3_link_speed_set(nic_dev, link_settings, &port_info);
+ if (err)
+ return err;
+
+ hinic3_link_port_type(link_settings, port_info.port_type);
+
+ hinic3_link_fec_type(link_settings, BIT(port_info.fec),
+ port_info.supported_fec_mode);
+
+ link_settings->autoneg = port_info.autoneg_state == PORT_CFG_AN_ON ?
+ AUTONEG_ENABLE : AUTONEG_DISABLE;
+ if (port_info.autoneg_cap)
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, Autoneg);
+ if (port_info.autoneg_state == PORT_CFG_AN_ON)
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Autoneg);
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ err = get_link_pause_settings(nic_dev, link_settings);
+
+ return err;
+}
+
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+int hinic3_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *link_settings)
+{
+ struct cmd_link_settings settings = { { 0 } };
+ struct ethtool_link_settings *base = &link_settings->base;
+ int err;
+
+ ethtool_link_ksettings_zero_link_mode(link_settings, supported);
+ ethtool_link_ksettings_zero_link_mode(link_settings, advertising);
+
+ err = get_link_settings(netdev, &settings);
+ if (err)
+ return err;
+
+ bitmap_copy(link_settings->link_modes.supported, settings.supported,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+ bitmap_copy(link_settings->link_modes.advertising, settings.advertising,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+
+ base->autoneg = settings.autoneg;
+ base->speed = settings.speed;
+ base->duplex = settings.duplex;
+ base->port = settings.port;
+
+ return 0;
+}
+#endif
+#endif
+
+static bool hinic3_is_support_speed(u32 supported_link, u32 speed)
+{
+ u32 link_mode;
+
+ for (link_mode = 0; link_mode < LINK_MODE_MAX_NUMBERS; link_mode++) {
+ if (!(supported_link & BIT(link_mode)))
+ continue;
+
+ if (hw2ethtool_link_mode_table[link_mode].speed == speed)
+ return true;
+ }
+
+ return false;
+}
+
+static int hinic3_is_speed_legal(struct hinic3_nic_dev *nic_dev,
+ struct nic_port_info *port_info, u32 speed)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int speed_level = 0;
+
+ if (port_info->supported_mode == LINK_MODE_UNKNOWN ||
+ port_info->advertised_mode == LINK_MODE_UNKNOWN) {
+ nicif_err(nic_dev, drv, netdev, "Unknown supported link modes\n");
+ return -EAGAIN;
+ }
+
+ speed_level = hinic3_ethtool_to_hw_speed_level(speed);
+ if (speed_level >= PORT_SPEED_UNKNOWN ||
+ !hinic3_is_support_speed(port_info->supported_mode, speed)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not supported speed: %u\n", speed);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
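+/* Validate the requested autoneg/speed against the port capabilities and build
+ * the HILINK_LINK_SET_* bitmap to apply.
+ */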
+static int get_link_settings_type(struct hinic3_nic_dev *nic_dev,
+ u8 autoneg, u32 speed, u32 *set_settings)
+{
+ struct nic_port_info port_info = {0};
+ int err;
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to get current settings\n");
+ return -EAGAIN;
+ }
+
+	/* Always set autonegotiation */
+ if (port_info.autoneg_cap)
+ *set_settings |= HILINK_LINK_SET_AUTONEG;
+
+ if (autoneg == AUTONEG_ENABLE) {
+ if (!port_info.autoneg_cap) {
+			nicif_err(nic_dev, drv, nic_dev->netdev, "Autoneg not supported\n");
+ return -EOPNOTSUPP;
+ }
+ } else if (speed != (u32)SPEED_UNKNOWN) {
+		/* Set speed only when autoneg is disabled */
+ err = hinic3_is_speed_legal(nic_dev, &port_info, speed);
+ if (err)
+ return err;
+
+ *set_settings |= HILINK_LINK_SET_SPEED;
+ } else {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Need to set speed when autoneg is off\n");
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_settings_to_hw(struct hinic3_nic_dev *nic_dev,
+ u32 set_settings, u8 autoneg, u32 speed)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_link_ksettings settings = {0};
+ int speed_level = 0;
+ char set_link_str[HINIC_SET_LINK_STR_LEN] = {0};
+ char link_info[HINIC_SET_LINK_STR_LEN] = {0};
+ int err = 0;
+
+ err = snprintf(link_info, sizeof(link_info), "%s",
+ (bool)(set_settings & HILINK_LINK_SET_AUTONEG) ?
+		       ((bool)autoneg ? "autoneg enable " : "autoneg disable ") : "");
+ if (err < 0)
+ return -EINVAL;
+
+ if (set_settings & HILINK_LINK_SET_SPEED) {
+ speed_level = hinic3_ethtool_to_hw_speed_level(speed);
+ err = snprintf(set_link_str, sizeof(set_link_str),
+ "%sspeed %u ", link_info, speed);
+ if (err < 0)
+ return -EINVAL;
+ }
+
+ settings.valid_bitmap = set_settings;
+ settings.autoneg = (bool)autoneg ? PORT_CFG_AN_ON : PORT_CFG_AN_OFF;
+ settings.speed = (u8)speed_level;
+
+ err = hinic3_set_link_settings(nic_dev->hwdev, &settings);
+ if (err)
+ nicif_err(nic_dev, drv, netdev, "Set %sfailed\n",
+ set_link_str);
+ else
+ nicif_info(nic_dev, drv, netdev, "Set %ssuccess\n",
+ set_link_str);
+
+ return err;
+}
+
+static int set_link_settings(struct net_device *netdev, u8 autoneg, u32 speed)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u32 set_settings = 0;
+ int err = 0;
+
+ err = get_link_settings_type(nic_dev, autoneg, speed, &set_settings);
+ if (err)
+ return err;
+
+ if (set_settings)
+ err = hinic3_set_settings_to_hw(nic_dev, set_settings,
+ autoneg, speed);
+ else
+ nicif_info(nic_dev, drv, netdev, "Nothing changed, exiting without setting anything\n");
+
+ return err;
+}
+
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+int hinic3_set_link_ksettings(struct net_device *netdev,
+ const struct ethtool_link_ksettings *link_settings)
+{
+ /* Only support to set autoneg and speed */
+ return set_link_settings(netdev, link_settings->base.autoneg,
+ link_settings->base.speed);
+}
+#endif
+#endif
+
+#ifndef HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+int hinic3_get_settings(struct net_device *netdev, struct ethtool_cmd *ep)
+{
+ struct cmd_link_settings settings = { { 0 } };
+ int err;
+
+ err = get_link_settings(netdev, &settings);
+ if (err)
+ return err;
+
+ ep->supported = settings.supported[0] & ((u32)~0);
+ ep->advertising = settings.advertising[0] & ((u32)~0);
+
+ ep->autoneg = settings.autoneg;
+ ethtool_cmd_speed_set(ep, settings.speed);
+ ep->duplex = settings.duplex;
+ ep->port = settings.port;
+ ep->transceiver = XCVR_INTERNAL;
+
+ return 0;
+}
+
+int hinic3_set_settings(struct net_device *netdev,
+ struct ethtool_cmd *link_settings)
+{
+ /* Only support to set autoneg and speed */
+ return set_link_settings(netdev, link_settings->autoneg,
+ ethtool_cmd_speed(link_settings));
+}
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_filter.c b/drivers/net/ethernet/huawei/hinic3/hinic3_filter.c
new file mode 100644
index 0000000..2daa7f9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_filter.c
@@ -0,0 +1,483 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/debugfs.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_srv_nic.h"
+
+static unsigned char set_filter_state = 1;
+module_param(set_filter_state, byte, 0444);
+MODULE_PARM_DESC(set_filter_state, "Set mac filter config state: 0 - disable, 1 - enable (default=1)");
+
+static int hinic3_uc_sync(struct net_device *netdev, u8 *addr)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ return hinic3_set_mac(nic_dev->hwdev, addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+}
+
+static int hinic3_uc_unsync(struct net_device *netdev, u8 *addr)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ /* The addr is in use */
+ if (ether_addr_equal(addr, netdev->dev_addr))
+ return 0;
+
+ return hinic3_del_mac(nic_dev->hwdev, addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+}
+
+void hinic3_clean_mac_list_filter(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, &nic_dev->uc_filter_list, list) {
+ if (f->state == HINIC3_MAC_HW_SYNCED)
+ hinic3_uc_unsync(netdev, f->addr);
+ list_del(&f->list);
+ kfree(f);
+ }
+
+ list_for_each_entry_safe(f, ftmp, &nic_dev->mc_filter_list, list) {
+ if (f->state == HINIC3_MAC_HW_SYNCED)
+ hinic3_uc_unsync(netdev, f->addr);
+ list_del(&f->list);
+ kfree(f);
+ }
+}
+
+static struct hinic3_mac_filter *hinic3_find_mac(const struct list_head *filter_list,
+ u8 *addr)
+{
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry(f, filter_list, list) {
+ if (ether_addr_equal(addr, f->addr))
+ return f;
+ }
+ return NULL;
+}
+
+static struct hinic3_mac_filter *hinic3_add_filter(struct hinic3_nic_dev *nic_dev,
+ struct list_head *mac_filter_list,
+ u8 *addr)
+{
+ struct hinic3_mac_filter *f = NULL;
+
+ f = kzalloc(sizeof(*f), GFP_ATOMIC);
+ if (!f)
+ goto out;
+
+ ether_addr_copy(f->addr, addr);
+
+ INIT_LIST_HEAD(&f->list);
+ list_add_tail(&f->list, mac_filter_list);
+
+ f->state = HINIC3_MAC_WAIT_HW_SYNC;
+ set_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
+
+out:
+ return f;
+}
+
+static void hinic3_del_filter(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_mac_filter *f)
+{
+ set_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
+
+ if (f->state == HINIC3_MAC_WAIT_HW_SYNC) {
+ /* have not added to hw, delete it directly */
+ list_del(&f->list);
+ kfree(f);
+ return;
+ }
+
+ f->state = HINIC3_MAC_WAIT_HW_UNSYNC;
+}
+
+static struct hinic3_mac_filter *hinic3_mac_filter_entry_clone(const struct hinic3_mac_filter *src)
+{
+ struct hinic3_mac_filter *f = NULL;
+
+ f = kzalloc(sizeof(*f), GFP_ATOMIC);
+ if (!f)
+ return NULL;
+
+ *f = *src;
+ INIT_LIST_HEAD(&f->list);
+
+ return f;
+}
+
+static void hinic3_undo_del_filter_entries(struct list_head *filter_list,
+ const struct list_head *from)
+{
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, from, list) {
+ if (hinic3_find_mac(filter_list, f->addr))
+ continue;
+
+ if (f->state == HINIC3_MAC_HW_SYNCED)
+ f->state = HINIC3_MAC_WAIT_HW_UNSYNC;
+
+ list_move_tail(&f->list, filter_list);
+ }
+}
+
+static void hinic3_undo_add_filter_entries(struct list_head *filter_list,
+ const struct list_head *from)
+{
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *tmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, from, list) {
+ tmp = hinic3_find_mac(filter_list, f->addr);
+ if (tmp && tmp->state == HINIC3_MAC_HW_SYNCED)
+ tmp->state = HINIC3_MAC_WAIT_HW_SYNC;
+ }
+}
+
+static void hinic3_cleanup_filter_list(const struct list_head *head)
+{
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, head, list) {
+ list_del(&f->list);
+ kfree(f);
+ }
+}
+
+static int hinic3_mac_filter_sync_hw(struct hinic3_nic_dev *nic_dev,
+ struct list_head *del_list,
+ struct list_head *add_list)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ int err = 0, add_count = 0;
+
+ if (!list_empty(del_list)) {
+ list_for_each_entry_safe(f, ftmp, del_list, list) {
+ err = hinic3_uc_unsync(netdev, f->addr);
+			if (err) { /* ignore errors when deleting mac */
+ nic_err(&nic_dev->pdev->dev, "Failed to delete mac\n");
+ }
+
+ list_del(&f->list);
+ kfree(f);
+ }
+ }
+
+ if (!list_empty(add_list)) {
+ list_for_each_entry_safe(f, ftmp, add_list, list) {
+ err = hinic3_uc_sync(netdev, f->addr);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to add mac\n");
+ return err;
+ }
+
+ add_count++;
+ list_del(&f->list);
+ kfree(f);
+ }
+ }
+
+ return add_count;
+}
+
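+/* Move entries waiting for sync/unsync onto temporary lists and push the
+ * changes to hardware. Returns the number of added MACs on success or a
+ * negative value when the caller must fall back to promisc/allmulti mode.
+ */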
+static int hinic3_mac_filter_sync(struct hinic3_nic_dev *nic_dev,
+ struct list_head *mac_filter_list, bool uc)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct list_head tmp_del_list, tmp_add_list;
+ struct hinic3_mac_filter *fclone = NULL;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ int err = 0, add_count = 0;
+
+ INIT_LIST_HEAD(&tmp_del_list);
+ INIT_LIST_HEAD(&tmp_add_list);
+
+ list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
+ if (f->state != HINIC3_MAC_WAIT_HW_UNSYNC)
+ continue;
+
+ f->state = HINIC3_MAC_HW_UNSYNCED;
+ list_move_tail(&f->list, &tmp_del_list);
+ }
+
+ list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
+ if (f->state != HINIC3_MAC_WAIT_HW_SYNC)
+ continue;
+
+ fclone = hinic3_mac_filter_entry_clone(f);
+ if (!fclone) {
+ err = -ENOMEM;
+ break;
+ }
+
+ f->state = HINIC3_MAC_HW_SYNCED;
+ list_add_tail(&fclone->list, &tmp_add_list);
+ }
+
+ if (err) {
+ hinic3_undo_del_filter_entries(mac_filter_list, &tmp_del_list);
+ hinic3_undo_add_filter_entries(mac_filter_list, &tmp_add_list);
+ nicif_err(nic_dev, drv, netdev, "Failed to clone mac_filter_entry\n");
+
+ hinic3_cleanup_filter_list(&tmp_del_list);
+ hinic3_cleanup_filter_list(&tmp_add_list);
+ return -ENOMEM;
+ }
+
+ add_count = hinic3_mac_filter_sync_hw(nic_dev, &tmp_del_list,
+ &tmp_add_list);
+ if (list_empty(&tmp_add_list))
+ return add_count;
+
+	/* there were errors when adding MACs to hw, delete all MACs in hw */
+	hinic3_undo_add_filter_entries(mac_filter_list, &tmp_add_list);
+	/* VF doesn't support entering promisc mode,
+	 * so we can't delete any other uc mac
+	 */
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev) || !uc) {
+ list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
+ if (f->state != HINIC3_MAC_HW_SYNCED)
+ continue;
+
+ fclone = hinic3_mac_filter_entry_clone(f);
+ if (!fclone)
+ break;
+
+ f->state = HINIC3_MAC_WAIT_HW_SYNC;
+ list_add_tail(&fclone->list, &tmp_del_list);
+ }
+ }
+
+ hinic3_cleanup_filter_list(&tmp_add_list);
+ hinic3_mac_filter_sync_hw(nic_dev, &tmp_del_list, &tmp_add_list);
+
+ /* need to enter promisc/allmulti mode */
+ return -ENOMEM;
+}
+
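+/* Sync the uc and mc filter lists to hardware and force promisc/allmulti on
+ * when the hardware cannot hold all requested addresses.
+ */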
+static void hinic3_mac_filter_sync_all(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int add_count;
+
+ if (test_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags)) {
+ clear_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
+ add_count = hinic3_mac_filter_sync(nic_dev,
+ &nic_dev->uc_filter_list,
+ true);
+ if (add_count < 0 && HINIC3_SUPPORT_PROMISC(nic_dev->hwdev)) {
+ set_bit(HINIC3_PROMISC_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ nicif_info(nic_dev, drv, netdev, "Promisc mode forced on\n");
+ } else if (add_count) {
+ clear_bit(HINIC3_PROMISC_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ }
+
+ add_count = hinic3_mac_filter_sync(nic_dev,
+ &nic_dev->mc_filter_list,
+ false);
+ if (add_count < 0 && HINIC3_SUPPORT_ALLMULTI(nic_dev->hwdev)) {
+ set_bit(HINIC3_ALLMULTI_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ nicif_info(nic_dev, drv, netdev, "All multicast mode forced on\n");
+ } else if (add_count) {
+ clear_bit(HINIC3_ALLMULTI_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ }
+ }
+}
+
+#define HINIC3_DEFAULT_RX_MODE (NIC_RX_MODE_UC | NIC_RX_MODE_MC | \
+ NIC_RX_MODE_BC)
+
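+/* Reconcile the driver filter list with the netdev address list: add new
+ * addresses and mark entries no longer present for removal from hardware.
+ */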
+static void hinic3_update_mac_filter(struct hinic3_nic_dev *nic_dev,
+ const struct netdev_hw_addr_list *src_list,
+ struct list_head *filter_list)
+{
+ struct hinic3_mac_filter *filter = NULL;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ struct netdev_hw_addr *ha = NULL;
+
+ /* add addr if not already in the filter list */
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_hw_addr_list_for_each(ha, src_list) {
+ filter = hinic3_find_mac(filter_list, ha->addr);
+ if (!filter)
+ hinic3_add_filter(nic_dev, filter_list, ha->addr);
+ else if (filter->state == HINIC3_MAC_WAIT_HW_UNSYNC)
+ filter->state = HINIC3_MAC_HW_SYNCED;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+
+ /* delete addr if not in netdev list */
+ list_for_each_entry_safe(f, ftmp, filter_list, list) {
+ bool found = false;
+
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_hw_addr_list_for_each(ha, src_list)
+ if (ether_addr_equal(ha->addr, f->addr)) {
+ found = true;
+ break;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+
+ if (found)
+ continue;
+
+ hinic3_del_filter(nic_dev, f);
+ }
+}
+
+#ifndef NETDEV_HW_ADDR_T_MULTICAST
+static void hinic3_update_mc_filter(struct hinic3_nic_dev *nic_dev,
+ struct list_head *filter_list)
+{
+ struct hinic3_mac_filter *filter = NULL;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ struct dev_mc_list *ha = NULL;
+
+ /* add addr if not already in the filter list */
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_for_each_mc_addr(ha, nic_dev->netdev) {
+ filter = hinic3_find_mac(filter_list, ha->da_addr);
+ if (!filter)
+ hinic3_add_filter(nic_dev, filter_list, ha->da_addr);
+ else if (filter->state == HINIC3_MAC_WAIT_HW_UNSYNC)
+ filter->state = HINIC3_MAC_HW_SYNCED;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+ /* delete addr if not in netdev list */
+ list_for_each_entry_safe(f, ftmp, filter_list, list) {
+ bool found = false;
+
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_for_each_mc_addr(ha, nic_dev->netdev)
+ if (ether_addr_equal(ha->da_addr, f->addr)) {
+ found = true;
+ break;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+
+ if (found)
+ continue;
+
+ hinic3_del_filter(nic_dev, f);
+ }
+}
+#endif
+
+static void update_mac_filter(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+
+ if (test_and_clear_bit(HINIC3_UPDATE_MAC_FILTER, &nic_dev->flags)) {
+ hinic3_update_mac_filter(nic_dev, &netdev->uc,
+ &nic_dev->uc_filter_list);
+		/* FPGA mc table has only 12 entries, mc is disabled by default */
+ if (set_filter_state) {
+#ifdef NETDEV_HW_ADDR_T_MULTICAST
+ hinic3_update_mac_filter(nic_dev, &netdev->mc,
+ &nic_dev->mc_filter_list);
+#else
+ hinic3_update_mc_filter(nic_dev,
+ &nic_dev->mc_filter_list);
+#endif
+ }
+ }
+}
+
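+/* Program the hardware rx mode (uc/mc/bc plus optional promisc/allmulti) and
+ * record the resulting state in rx_mod_state.
+ */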
+static void sync_rx_mode_to_hw(struct hinic3_nic_dev *nic_dev, int promisc_en,
+ int allmulti_en)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u32 rx_mod = HINIC3_DEFAULT_RX_MODE;
+ int err;
+
+ rx_mod |= (promisc_en ? NIC_RX_MODE_PROMISC : 0);
+ rx_mod |= (allmulti_en ? NIC_RX_MODE_MC_ALL : 0);
+
+ if (promisc_en != test_bit(HINIC3_HW_PROMISC_ON,
+ &nic_dev->rx_mod_state))
+ nicif_info(nic_dev, drv, netdev,
+ "%s promisc mode\n",
+			   promisc_en ? "Enter" : "Leave");
+ if (allmulti_en !=
+ test_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state))
+ nicif_info(nic_dev, drv, netdev,
+ "%s all_multi mode\n",
+			   allmulti_en ? "Enter" : "Leave");
+
+ err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mod);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set rx_mode\n");
+ return;
+ }
+
+ promisc_en ? set_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state) :
+ clear_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state);
+
+ allmulti_en ? set_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state) :
+ clear_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state);
+}
+
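+/* Deferred rx-mode work: refresh the MAC filter lists and update the
+ * promisc/allmulti state in hardware when it changes.
+ */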
+void hinic3_set_rx_mode_work(struct work_struct *work)
+{
+ struct hinic3_nic_dev *nic_dev =
+ container_of(work, struct hinic3_nic_dev, rx_mode_work);
+ struct net_device *netdev = nic_dev->netdev;
+ int promisc_en = 0, allmulti_en = 0;
+
+ update_mac_filter(nic_dev);
+
+ hinic3_mac_filter_sync_all(nic_dev);
+
+ if (HINIC3_SUPPORT_PROMISC(nic_dev->hwdev))
+ promisc_en = !!(netdev->flags & IFF_PROMISC) ||
+ test_bit(HINIC3_PROMISC_FORCE_ON,
+ &nic_dev->rx_mod_state);
+
+ if (HINIC3_SUPPORT_ALLMULTI(nic_dev->hwdev))
+ allmulti_en = !!(netdev->flags & IFF_ALLMULTI) ||
+ test_bit(HINIC3_ALLMULTI_FORCE_ON,
+ &nic_dev->rx_mod_state);
+
+ if (promisc_en !=
+ test_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state) ||
+ allmulti_en !=
+ test_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state))
+ sync_rx_mode_to_hw(nic_dev, promisc_en, allmulti_en);
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_hw.h b/drivers/net/ethernet/huawei/hinic3/hinic3_hw.h
new file mode 100644
index 0000000..7fed1c1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_hw.h
@@ -0,0 +1,877 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_H
+#define HINIC3_HW_H
+
+#include "mpu_inband_cmd.h"
+#include "mpu_inband_cmd_defs.h"
+
+#include "hinic3_crm.h"
+
+#ifndef BIG_ENDIAN
+#define BIG_ENDIAN 0x4321
+#endif
+
+#ifndef LITTLE_ENDIAN
+#define LITTLE_ENDIAN 0x1234
+#endif
+
+#ifdef BYTE_ORDER
+#undef BYTE_ORDER
+#endif
+/* X86 */
+#define BYTE_ORDER LITTLE_ENDIAN
+
+/* to use 0-level CLA, page size must be: SQ 16B(wqe) * 64k(max_q_depth) */
+#define HINIC3_DEFAULT_WQ_PAGE_SIZE 0x100000
+#define HINIC3_HW_WQ_PAGE_SIZE 0x1000
+#define HINIC3_MAX_WQ_PAGE_SIZE_ORDER 8
+#define SPU_HOST_ID 4
+
+enum hinic3_channel_id {
+ HINIC3_CHANNEL_DEFAULT,
+ HINIC3_CHANNEL_COMM,
+ HINIC3_CHANNEL_NIC,
+ HINIC3_CHANNEL_ROCE,
+ HINIC3_CHANNEL_TOE,
+ HINIC3_CHANNEL_FC,
+ HINIC3_CHANNEL_OVS,
+ HINIC3_CHANNEL_DSW,
+ HINIC3_CHANNEL_MIG,
+ HINIC3_CHANNEL_CRYPT,
+ HINIC3_CHANNEL_VROCE,
+
+ HINIC3_CHANNEL_MAX = 32,
+};
+
+struct hinic3_cmd_buf {
+ void *buf;
+ dma_addr_t dma_addr;
+ u16 size;
+ /* Usage count, USERS DO NOT USE */
+ atomic_t ref_cnt;
+};
+
+enum hinic3_aeq_type {
+ HINIC3_HW_INTER_INT = 0,
+ HINIC3_MBX_FROM_FUNC = 1,
+ HINIC3_MSG_FROM_MGMT_CPU = 2,
+ HINIC3_API_RSP = 3,
+ HINIC3_API_CHAIN_STS = 4,
+ HINIC3_MBX_SEND_RSLT = 5,
+ HINIC3_MAX_AEQ_EVENTS
+};
+
+enum hinic3_aeq_sw_type {
+ HINIC3_STATELESS_EVENT = 0,
+ HINIC3_STATEFUL_EVENT = 1,
+ HINIC3_MAX_AEQ_SW_EVENTS
+};
+
+enum hinic3_hwdev_init_state {
+ HINIC3_HWDEV_NONE_INITED = 0,
+ HINIC3_HWDEV_MGMT_INITED,
+ HINIC3_HWDEV_MBOX_INITED,
+ HINIC3_HWDEV_CMDQ_INITED,
+};
+
+enum hinic3_ceq_event {
+ HINIC3_NON_L2NIC_SCQ,
+ HINIC3_NON_L2NIC_ECQ,
+ HINIC3_NON_L2NIC_NO_CQ_EQ,
+ HINIC3_CMDQ,
+ HINIC3_L2NIC_SQ,
+ HINIC3_L2NIC_RQ,
+ HINIC3_MAX_CEQ_EVENTS,
+};
+
+enum hinic3_mbox_seg_errcode {
+ MBOX_ERRCODE_NO_ERRORS = 0,
+	/* VF sends the mailbox data to the wrong destination function */
+	MBOX_ERRCODE_VF_TO_WRONG_FUNC = 0x100,
+	/* PPF sends the mailbox data to the wrong destination function */
+	MBOX_ERRCODE_PPF_TO_WRONG_FUNC = 0x200,
+	/* PF sends the mailbox data to the wrong destination function */
+ MBOX_ERRCODE_PF_TO_WRONG_FUNC = 0x300,
+ /* The mailbox data size is set to all zero */
+ MBOX_ERRCODE_ZERO_DATA_SIZE = 0x400,
+ /* The sender function attribute has not been learned by hardware */
+ MBOX_ERRCODE_UNKNOWN_SRC_FUNC = 0x500,
+ /* The receiver function attr has not been learned by hardware */
+ MBOX_ERRCODE_UNKNOWN_DES_FUNC = 0x600,
+};
+
+struct hinic3_ceq_info {
+ u32 q_len;
+ u32 page_size;
+ u16 elem_size;
+ u16 num_pages;
+ u32 num_elem_in_pg;
+};
+
+typedef void (*hinic3_aeq_hwe_cb)(void *pri_handle, u8 *data, u8 size);
+typedef u8 (*hinic3_aeq_swe_cb)(void *pri_handle, u8 event, u8 *data);
+typedef void (*hinic3_ceq_event_cb)(void *pri_handle, u32 ceqe_data);
+
+typedef int (*hinic3_vf_mbox_cb)(void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+typedef int (*hinic3_pf_mbox_cb)(void *pri_handle,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+typedef int (*hinic3_ppf_mbox_cb)(void *pri_handle, u16 pf_idx,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+typedef int (*hinic3_pf_recv_from_ppf_mbox_cb)(void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+/**
+ * @brief hinic3_aeq_register_hw_cb - register aeq hardware callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ * @param hwe_cb: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_aeq_register_hw_cb(void *hwdev, void *pri_handle,
+ enum hinic3_aeq_type event, hinic3_aeq_hwe_cb hwe_cb);
+
+/**
+ * @brief hinic3_aeq_unregister_hw_cb - unregister aeq hardware callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ **/
+void hinic3_aeq_unregister_hw_cb(void *hwdev, enum hinic3_aeq_type event);
+
+/**
+ * @brief hinic3_aeq_register_swe_cb - register aeq soft event callback
+ * @param hwdev: device pointer to hwdev
+ * @param pri_handle: the pointer to private invoker device
+ * @param event: event type
+ * @param aeq_swe_cb: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_aeq_register_swe_cb(void *hwdev, void *pri_handle, enum hinic3_aeq_sw_type event,
+ hinic3_aeq_swe_cb aeq_swe_cb);
+
+/**
+ * @brief hinic3_aeq_unregister_swe_cb - unregister aeq soft event callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ **/
+void hinic3_aeq_unregister_swe_cb(void *hwdev, enum hinic3_aeq_sw_type event);
+
+/**
+ * @brief hinic3_ceq_register_cb - register ceq callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_ceq_register_cb(void *hwdev, void *pri_handle, enum hinic3_ceq_event event,
+ hinic3_ceq_event_cb callback);
+/**
+ * @brief hinic3_ceq_unregister_cb - unregister ceq callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ **/
+void hinic3_ceq_unregister_cb(void *hwdev, enum hinic3_ceq_event event);
+
+/**
+ * @brief hinic3_register_ppf_mbox_cb - ppf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_ppf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_ppf_mbox_cb callback);
+
+/**
+ * @brief hinic3_register_pf_mbox_cb - pf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_pf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_pf_mbox_cb callback);
+/**
+ * @brief hinic3_register_vf_mbox_cb - vf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_vf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_vf_mbox_cb callback);
+
+/**
+ * @brief hinic3_unregister_ppf_mbox_cb - ppf unregister mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_ppf_mbox_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_unregister_pf_mbox_cb - pf unregister mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_pf_mbox_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_unregister_vf_mbox_cb - vf unregister mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_vf_mbox_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_unregister_ppf_to_pf_mbox_cb - unregister mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_ppf_to_pf_mbox_cb(void *hwdev, u8 mod);
+
+typedef void (*hinic3_mgmt_msg_cb)(void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+/**
+ * @brief hinic3_register_mgmt_msg_cb - register mgmt msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_mgmt_msg_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_mgmt_msg_cb callback);
+
+/**
+ * @brief hinic3_unregister_mgmt_msg_cb - unregister mgmt msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_mgmt_msg_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_register_service_adapter - register service adapter
+ * @param hwdev: device pointer to hwdev
+ * @param service_adapter: service adapter
+ * @param type: service type
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_service_adapter(void *hwdev, void *service_adapter,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_unregister_service_adapter - unregister service adapter
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ **/
+void hinic3_unregister_service_adapter(void *hwdev,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_service_adapter - get service adapter
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @retval non-zero: success
+ * @retval null: failure
+ **/
+void *hinic3_get_service_adapter(void *hwdev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_alloc_db_phy_addr - alloc doorbell & direct wqe physical addr
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to alloc doorbell base address
+ * @param dwqe_base: pointer to alloc direct base address
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_alloc_db_phy_addr(void *hwdev, u64 *db_base, u64 *dwqe_base);
+
+/**
+ * @brief hinic3_free_db_phy_addr - free doorbell & direct wqe physical address
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to free doorbell base address
+ * @param dwqe_base: pointer to free direct base address
+ **/
+void hinic3_free_db_phy_addr(void *hwdev, u64 db_base, u64 dwqe_base);
+
+/**
+ * @brief hinic3_alloc_db_addr - alloc doorbell & direct wqe
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to alloc doorbell base address
+ * @param dwqe_base: pointer to alloc direct base address
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_alloc_db_addr(void *hwdev, void __iomem **db_base,
+ void __iomem **dwqe_base);
+
+/**
+ * @brief hinic3_free_db_addr - free doorbell & direct wqe
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to free doorbell base address
+ * @param dwqe_base: pointer to free direct base address
+ **/
+void hinic3_free_db_addr(void *hwdev, const void __iomem *db_base,
+ void __iomem *dwqe_base);
+
+/**
+ * @brief hinic3_alloc_db_phy_addr - alloc physical doorbell & direct wqe
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to alloc doorbell base address
+ * @param dwqe_base: pointer to alloc direct base address
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_alloc_db_phy_addr(void *hwdev, u64 *db_base, u64 *dwqe_base);
+
+/**
+ * @brief hinic3_free_db_phy_addr - free physical doorbell & direct wqe
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: free doorbell base address
+ * @param dwqe_base: free direct base address
+ **/
+
+void hinic3_free_db_phy_addr(void *hwdev, u64 db_base, u64 dwqe_base);
+
+/**
+ * @brief hinic3_set_root_ctxt - set root context
+ * @param hwdev: device pointer to hwdev
+ * @param rq_depth: rq depth
+ * @param sq_depth: sq depth
+ * @param rx_buf_sz: rx buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_set_root_ctxt(void *hwdev, u32 rq_depth, u32 sq_depth,
+ int rx_buf_sz, u16 channel);
+
+/**
+ * @brief hinic3_clean_root_ctxt - clean root context
+ * @param hwdev: device pointer to hwdev
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_clean_root_ctxt(void *hwdev, u16 channel);
+
+/**
+ * @brief hinic3_alloc_cmd_buf - alloc cmd buffer
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: success
+ * @retval null: failure
+ **/
+struct hinic3_cmd_buf *hinic3_alloc_cmd_buf(void *hwdev);
+
+/**
+ * @brief hinic3_free_cmd_buf - free cmd buffer
+ * @param hwdev: device pointer to hwdev
+ * @param cmd_buf: cmd buffer to free
+ **/
+void hinic3_free_cmd_buf(void *hwdev, struct hinic3_cmd_buf *cmd_buf);
+
+/**
+ * hinic3_sm_ctr_rd16 - small single 16 counter read
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd16(void *hwdev, u8 node, u8 instance, u32 ctr_id, u16 *value);
+
+/**
+ * hinic3_sm_ctr_rd16_clear - small single 16 counter read clear
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd16_clear(void *hwdev, u8 node, u8 instance, u32 ctr_id, u16 *value);
+
+/**
+ * @brief hinic3_sm_ctr_rd32 - small single 32 counter read
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd32(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u32 *value);
+/**
+ * @brief hinic3_sm_ctr_rd32_clear - small single 32 counter read clear
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd32_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u32 *value);
+
+/**
+ * @brief hinic3_sm_ctr_rd64_pair - big pair 128 counter read
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value1: read counter value ptr
+ * @param value2: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd64_pair(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2);
+
+/**
+ * hinic3_sm_ctr_rd64_pair_clear - big pair 128 counter read clear
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value1: read counter value ptr
+ * @value2: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_pair_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2);
+
+/**
+ * @brief hinic3_sm_ctr_rd64 - big counter 64 read
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd64(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value);
+
+/**
+ * hinic3_sm_ctr_rd64_clear - big counter 64 read clear
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value);
+
+/**
+ * @brief hinic3_api_csr_rd32 - read 32 bit csr
+ * @param hwdev: device pointer to hwdev
+ * @param dest: hardware node id
+ * @param addr: reg address
+ * @param val: reg value
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_api_csr_rd32(void *hwdev, u8 dest, u32 addr, u32 *val);
+
+/**
+ * @brief hinic3_api_csr_wr32 - write 32 bit csr
+ * @param hwdev: device pointer to hwdev
+ * @param dest: hardware node id
+ * @param addr: reg address
+ * @param val: reg value
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_api_csr_wr32(void *hwdev, u8 dest, u32 addr, u32 val);
+
+/**
+ * @brief hinic3_api_csr_rd64 - read 64 bit csr
+ * @param hwdev: device pointer to hwdev
+ * @param dest: hardware node id
+ * @param addr: reg address
+ * @param val: reg value
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_api_csr_rd64(void *hwdev, u8 dest, u32 addr, u64 *val);
+
+/**
+ * @brief hinic3_dbg_get_hw_stats - get hardware stats
+ * @param hwdev: device pointer to hwdev
+ * @param hw_stats: pointer to caller-allocated memory for the stats
+ * @param out_size: out size
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_dbg_get_hw_stats(const void *hwdev, u8 *hw_stats, const u32 *out_size);
+
+/**
+ * @brief hinic3_dbg_clear_hw_stats - clear hardware stats
+ * @param hwdev: device pointer to hwdev
+ * @retval size of the cleared hardware stats
+ */
+u16 hinic3_dbg_clear_hw_stats(void *hwdev);
+
+/**
+ * @brief hinic3_get_chip_fault_stats - get chip fault stats
+ * @param hwdev: device pointer to hwdev
+ * @param chip_fault_stats: pointer to caller-allocated memory for the fault stats
+ * @param offset: offset
+ */
+void hinic3_get_chip_fault_stats(const void *hwdev, u8 *chip_fault_stats,
+ u32 offset);
+
+/**
+ * @brief hinic3_msg_to_mgmt_sync - msg to management cpu
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel);
+
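+/*
+ * Typical synchronous management command pattern (sketch only; the request
+ * structure and command id below are placeholders, not definitions from this
+ * patch, and a timeout of 0 is assumed to select the default timeout):
+ *
+ *	struct example_cmd cmd = {0};
+ *	u16 out_size = sizeof(cmd);
+ *	int err;
+ *
+ *	err = hinic3_msg_to_mgmt_sync(hwdev, mod, cmd_id, &cmd, sizeof(cmd),
+ *				      &cmd, &out_size, 0, HINIC3_CHANNEL_NIC);
+ *	if (err || !out_size || cmd.head.status)
+ *		... treat as failure ...
+ */
+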
+/**
+ * @brief hinic3_msg_to_mgmt_async - msg to management cpu async
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ *
+ * This function does not sleep, so it can be called from IRQ context
+ */
+int hinic3_msg_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel);
+
+/**
+ * @brief hinic3_msg_to_mgmt_no_ack - msg to management cpu, no ack required
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ *
+ * This function may sleep, so it must not be called from interrupt
+ * context
+ */
+int hinic3_msg_to_mgmt_no_ack(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel);
+
+int hinic3_msg_to_mgmt_api_chain_async(void *hwdev, u8 mod, u16 cmd,
+ const void *buf_in, u16 in_size);
+
+int hinic3_msg_to_mgmt_api_chain_sync(void *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout);
+
+/**
+ * @brief hinic3_mbox_to_pf - vf mbox message to pf
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_mbox_to_pf(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_mbox_to_vf - mbox message to vf
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf index
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_mbox_to_vf(void *hwdev, u16 vf_id, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u32 timeout,
+ u16 channel);
+
+/**
+ * @brief hinic3_mbox_to_vf_no_ack - mbox message to vf no ack
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf index
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_mbox_to_vf_no_ack(void *hwdev, u16 vf_id, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u16 channel);
+
+int hinic3_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+/**
+ * @brief hinic3_cmdq_async - cmdq asynchronous message
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_async(void *hwdev, u8 mod, u8 cmd, struct hinic3_cmd_buf *buf_in, u16 channel);
+
+/**
+ * @brief hinic3_cmdq_async_cos - cmdq asynchronous message by cos
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param cos_id: cos id
+ * @param buf_in: message buffer in
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_async_cos(void *hwdev, u8 mod, u8 cmd, u8 cos_id,
+ struct hinic3_cmd_buf *buf_in, u16 channel);
+
+/**
+ * @brief hinic3_cmdq_direct_resp - cmdq direct message response
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param out_param: message out
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_direct_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout, u16 channel);
+
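+/*
+ * Command-queue direct response sketch (illustrative only; how the command
+ * payload is placed into the hinic3_cmd_buf depends on fields not shown in
+ * this header, mod/cmd are placeholders, and a timeout of 0 is assumed to
+ * select the default):
+ *
+ *	struct hinic3_cmd_buf *buf = hinic3_alloc_cmd_buf(hwdev);
+ *	u64 out_param = 0;
+ *	int err;
+ *
+ *	if (!buf)
+ *		return -ENOMEM;
+ *	... fill the command payload into buf ...
+ *	err = hinic3_cmdq_direct_resp(hwdev, mod, cmd, buf, &out_param,
+ *				      0, HINIC3_CHANNEL_NIC);
+ *	hinic3_free_cmd_buf(hwdev, buf);
+ */
+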
+/**
+ * @brief hinic3_cmdq_detail_resp - cmdq detail message response
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param buf_out: message buffer out
+ * @param out_param: inline output data
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_detail_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_cos_id_detail_resp - cmdq detail message response by cos id
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param cos_id: cos id
+ * @param buf_in: message buffer in
+ * @param buf_out: message buffer out
+ * @param out_param: inline output data
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cos_id_detail_resp(void *hwdev, u8 mod, u8 cmd, u8 cos_id,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_ppf_tmr_start - start ppf timer
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_ppf_tmr_start(void *hwdev);
+
+/**
+ * @brief hinic3_ppf_tmr_stop - stop ppf timer
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_ppf_tmr_stop(void *hwdev);
+
+/**
+ * @brief hinic3_func_tmr_bitmap_set - set timer bitmap status
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: global function index
+ * @param en: false - disable, true - enable
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_func_tmr_bitmap_set(void *hwdev, u16 func_id, bool en);
+
+/**
+ * @brief hinic3_get_board_info - get board info
+ * @param hwdev: device pointer to hwdev
+ * @param info: board info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_board_info(void *hwdev, struct hinic3_board_info *info,
+ u16 channel);
+
+/**
+ * @brief hinic3_set_wq_page_size - set work queue page size
+ * @param hwdev: device pointer to hwdev
+ * @param func_idx: function id
+ * @param page_size: page size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size,
+ u16 channel);
+
+/**
+ * @brief hinic3_event_callback - event callback to notify service driver
+ * @param hwdev: device pointer to hwdev
+ * @param event: event info to service driver
+ */
+void hinic3_event_callback(void *hwdev, struct hinic3_event_info *event);
+
+/**
+ * @brief hinic3_dbg_lt_rd_16byte - linear table read
+ * @param hwdev: device pointer to hwdev
+ * @param dest: destination id
+ * @param instance: instance id
+ * @param lt_index: linear table index id
+ * @param data: data
+ */
+int hinic3_dbg_lt_rd_16byte(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data);
+
+/**
+ * @brief hinic3_dbg_lt_wr_16byte_mask - linear table write
+ * @param hwdev: device pointer to hwdev
+ * @param dest: destination id
+ * @param instance: instance id
+ * @param lt_index: linear table index id
+ * @param data: data
+ * @param mask: mask
+ */
+int hinic3_dbg_lt_wr_16byte_mask(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data, u16 mask);
+
+/**
+ * @brief hinic3_link_event_stats - link event stats
+ * @param dev: device pointer to hwdev
+ * @param link: link status
+ */
+void hinic3_link_event_stats(void *dev, u8 link);
+
+/**
+ * @brief hinic3_get_link_event_stats - link event stats
+ * @param dev: device pointer to hwdev
+ * @param link_state: pointer to store the link status
+ */
+int hinic3_get_link_event_stats(void *dev, int *link_state);
+
+/**
+ * @brief hinic3_get_hw_pf_infos - get pf infos
+ * @param hwdev: device pointer to hwdev
+ * @param infos: pf infos
+ * @param channel: channel id
+ */
+int hinic3_get_hw_pf_infos(void *hwdev, struct hinic3_hw_pf_infos *infos,
+ u16 channel);
+
+/**
+ * @brief hinic3_func_reset - reset func
+ * @param dev: device pointer to hwdev
+ * @param func_id: global function index
+ * @param reset_flag: reset flag
+ * @param channel: channel id
+ */
+int hinic3_func_reset(void *dev, u16 func_id, u64 reset_flag, u16 channel);
+
+int hinic3_get_ppf_timer_cfg(void *hwdev);
+
+int hinic3_set_bdf_ctxt(void *hwdev, u8 bus, u8 device, u8 function);
+
+int hinic3_init_func_mbox_msg_channel(void *hwdev, u16 num_func);
+
+int hinic3_ppf_ht_gpa_init(void *dev);
+
+void hinic3_ppf_ht_gpa_deinit(void *dev);
+
+int hinic3_get_sml_table_info(void *hwdev, u32 tbl_id, u8 *node_id, u8 *instance_id);
+
+int hinic3_mbox_ppf_to_host(void *hwdev, u8 mod, u16 cmd, u8 host_id,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel);
+
+void hinic3_force_complete_all(void *dev);
+int hinic3_get_ceq_page_phy_addr(void *hwdev, u16 q_id,
+ u16 page_idx, u64 *page_phy_addr);
+int hinic3_set_ceq_irq_disable(void *hwdev, u16 q_id);
+int hinic3_get_ceq_info(void *hwdev, u16 q_id, struct hinic3_ceq_info *ceq_info);
+
+int hinic3_init_single_ceq_status(void *hwdev, u16 q_id);
+void hinic3_set_api_stop(void *hwdev);
+
+int hinic3_activate_firmware(void *hwdev, u8 cfg_index);
+int hinic3_switch_config(void *hwdev, u8 cfg_index);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c b/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
new file mode 100644
index 0000000..7a2644c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
@@ -0,0 +1,194 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/debugfs.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+
+int hinic3_poll(struct napi_struct *napi, int budget)
+{
+ int tx_pkts, rx_pkts;
+ struct hinic3_irq *irq_cfg =
+ container_of(napi, struct hinic3_irq, napi);
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+
+ rx_pkts = hinic3_rx_poll(irq_cfg->rxq, budget);
+
+ tx_pkts = hinic3_tx_poll(irq_cfg->txq, budget);
+ if (tx_pkts >= budget || rx_pkts >= budget)
+ return budget;
+
+ napi_complete(napi);
+
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_MSIX_ENABLE);
+
+ return max(tx_pkts, rx_pkts);
+}
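+
+/*
+ * Worked example of the poll contract above (numbers are illustrative):
+ * with budget = 64, rx_pkts = 10 and tx_pkts = 3, the napi context is
+ * completed, the MSI-X entry is re-enabled and 10 is returned; if either
+ * queue consumes the full budget, 64 is returned and the napi core polls
+ * again without re-enabling the interrupt.
+ */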
+
+static void qp_add_napi(struct hinic3_irq *irq_cfg)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+
+ netif_napi_add(nic_dev->netdev, &irq_cfg->napi,
+ hinic3_poll, nic_dev->poll_weight);
+ napi_enable(&irq_cfg->napi);
+ irq_cfg->napi_reign = NAPI_IS_REGIN;
+}
+
+void qp_del_napi(struct hinic3_irq *irq_cfg)
+{
+ if (irq_cfg->napi_reign == NAPI_IS_REGIN) {
+ napi_disable(&irq_cfg->napi);
+ netif_napi_del(&irq_cfg->napi);
+ irq_cfg->napi_reign = NAPI_NOT_REGIN;
+ }
+}
+
+static irqreturn_t qp_irq(int irq, void *data)
+{
+ struct hinic3_irq *irq_cfg = (struct hinic3_irq *)data;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+
+ hinic3_misx_intr_clear_resend_bit(nic_dev->hwdev, irq_cfg->msix_entry_idx, 1);
+
+ napi_schedule(&irq_cfg->napi);
+
+ return IRQ_HANDLED;
+}
+
+static int hinic3_request_irq(struct hinic3_irq *irq_cfg, u16 q_id)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+ struct interrupt_info info = {0};
+ int err;
+
+ qp_add_napi(irq_cfg);
+
+ info.msix_index = irq_cfg->msix_entry_idx;
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = nic_dev->intr_coalesce[q_id].pending_limt;
+ info.coalesc_timer_cfg =
+ nic_dev->intr_coalesce[q_id].coalesce_timer_cfg;
+ info.resend_timer_cfg = nic_dev->intr_coalesce[q_id].resend_timer_cfg;
+ nic_dev->rxqs[q_id].last_coalesc_timer_cfg =
+ nic_dev->intr_coalesce[q_id].coalesce_timer_cfg;
+ nic_dev->rxqs[q_id].last_pending_limt =
+ nic_dev->intr_coalesce[q_id].pending_limt;
+ err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, irq_cfg->netdev,
+ "Failed to set RX interrupt coalescing attribute.\n");
+ qp_del_napi(irq_cfg);
+ return err;
+ }
+
+ err = request_irq(irq_cfg->irq_id, &qp_irq, 0, irq_cfg->irq_name, irq_cfg);
+ if (err) {
+ nicif_err(nic_dev, drv, irq_cfg->netdev, "Failed to request Rx irq\n");
+ qp_del_napi(irq_cfg);
+ return err;
+ }
+
+ irq_set_affinity_hint(irq_cfg->irq_id, &irq_cfg->affinity_mask);
+
+ return 0;
+}
+
+static void hinic3_release_irq(struct hinic3_irq *irq_cfg)
+{
+ irq_set_affinity_hint(irq_cfg->irq_id, NULL);
+ synchronize_irq(irq_cfg->irq_id);
+ free_irq(irq_cfg->irq_id, irq_cfg);
+ qp_del_napi(irq_cfg);
+}
+
+int hinic3_qps_irq_init(struct hinic3_nic_dev *nic_dev)
+{
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct irq_info *qp_irq_info = NULL;
+ struct hinic3_irq *irq_cfg = NULL;
+ u16 q_id, i;
+ u32 local_cpu;
+ int err;
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ qp_irq_info = &nic_dev->qps_irq_info[q_id];
+ irq_cfg = &nic_dev->q_params.irq_cfg[q_id];
+
+ irq_cfg->irq_id = qp_irq_info->irq_id;
+ irq_cfg->msix_entry_idx = qp_irq_info->msix_entry_idx;
+ irq_cfg->netdev = nic_dev->netdev;
+ irq_cfg->txq = &nic_dev->txqs[q_id];
+ irq_cfg->rxq = &nic_dev->rxqs[q_id];
+ nic_dev->rxqs[q_id].irq_cfg = irq_cfg;
+
+ local_cpu = cpumask_local_spread(q_id, dev_to_node(&pdev->dev));
+ cpumask_set_cpu(local_cpu, &irq_cfg->affinity_mask);
+
+ err = snprintf(irq_cfg->irq_name, sizeof(irq_cfg->irq_name),
+ "%s_qp%u", nic_dev->netdev->name, q_id);
+ if (err < 0) {
+ err = -EINVAL;
+ goto req_tx_irq_err;
+ }
+
+ err = hinic3_request_irq(irq_cfg, q_id);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to request Rx irq\n");
+ goto req_tx_irq_err;
+ }
+
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_SET_MSIX_AUTO_MASK);
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx, HINIC3_MSIX_ENABLE);
+ }
+
+ INIT_DELAYED_WORK(&nic_dev->moderation_task, hinic3_auto_moderation_work);
+
+ return 0;
+
+req_tx_irq_err:
+ for (i = 0; i < q_id; i++) {
+ irq_cfg = &nic_dev->q_params.irq_cfg[i];
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx, HINIC3_MSIX_DISABLE);
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_CLR_MSIX_AUTO_MASK);
+ hinic3_release_irq(irq_cfg);
+ }
+
+ return err;
+}
+
+void hinic3_qps_irq_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_irq *irq_cfg = NULL;
+ u16 q_id;
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ irq_cfg = &nic_dev->q_params.irq_cfg[q_id];
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev,
+ irq_cfg->msix_entry_idx,
+ HINIC3_CLR_MSIX_AUTO_MASK);
+ hinic3_release_irq(irq_cfg);
+ }
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c
new file mode 100644
index 0000000..8cd891e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c
@@ -0,0 +1,1737 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "mag_mpu_cmd.h"
+#include "mag_mpu_cmd_defs.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "hinic3_common.h"
+
+#define BIFUR_RESOURCE_PF_SSID 0x5a1
+#define CAP_INFO_MAX_LEN 512
+#define DEVICE_VENDOR_MAX_LEN 17
+#define READ_RSFEC_REGISTER_DELAY_TIME_MS 500
+
+struct parse_tlv_info g_page_info = {0};
+struct drv_mag_cmd_get_xsfp_tlv_rsp g_xsfp_tlv_info = {0};
+
+static int mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+static int mag_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel);
+
+int hinic3_set_port_enable(void *hwdev, bool enable, u16 channel)
+{
+ struct mag_cmd_set_port_enable en_state;
+ u16 out_size = sizeof(en_state);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ memset(&en_state, 0, sizeof(en_state));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ en_state.function_id = hinic3_global_func_id(hwdev);
+ en_state.state = enable ? MAG_CMD_TX_ENABLE | MAG_CMD_RX_ENABLE :
+ MAG_CMD_PORT_DISABLE;
+
+ err = mag_msg_to_mgmt_sync_ch(hwdev, MAG_CMD_SET_PORT_ENABLE, &en_state,
+ sizeof(en_state), &en_state, &out_size,
+ channel);
+ if (err || !out_size || en_state.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set port state, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, en_state.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_port_enable);
+
+int hinic3_get_phy_port_stats(void *hwdev, struct mag_cmd_port_stats *stats)
+{
+ struct mag_cmd_get_port_stat *port_stats = NULL;
+ struct mag_cmd_port_stats_info stats_info;
+ u16 out_size = sizeof(*port_stats);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats)
+ return -ENOMEM;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ stats_info.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_PORT_STAT,
+ &stats_info, sizeof(stats_info),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+
+ memcpy(stats, &port_stats->counter, sizeof(*stats));
+
+out:
+ kfree(port_stats);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_get_phy_port_stats);
+
+int hinic3_get_phy_rsfec_stats(void *hwdev, struct mag_cmd_rsfec_stats *stats)
+{
+ struct mag_cmd_get_mag_cnt *port_stats = NULL;
+ struct mag_cmd_get_mag_cnt stats_info;
+ u16 out_size = sizeof(*port_stats);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !stats)
+ return -EINVAL;
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats)
+ return -ENOMEM;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ stats_info.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_MAG_CNT,
+ &stats_info, sizeof(stats_info),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get rsfec statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+ /* Read twice to clear residual error counts */
+ msleep(READ_RSFEC_REGISTER_DELAY_TIME_MS);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_MAG_CNT, &stats_info,
+ sizeof(stats_info),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get rsfec statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+
+ memcpy(stats, &port_stats->mag_csr[MAG_RX_RSFEC_ERR_CW_CNT],
+ sizeof(u32));
+
+out:
+ kfree(port_stats);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_get_phy_rsfec_stats);
+
+int hinic3_set_port_funcs_state(void *hwdev, bool enable)
+{
+ return 0;
+}
+
+int hinic3_reset_port_link_cfg(void *hwdev)
+{
+ return 0;
+}
+
+int hinic3_force_port_relink(void *hwdev)
+{
+ return 0;
+}
+
+int hinic3_set_autoneg(void *hwdev, bool enable)
+{
+ struct hinic3_link_ksettings settings = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 set_settings = 0;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ set_settings |= HILINK_LINK_SET_AUTONEG;
+ settings.valid_bitmap = set_settings;
+ settings.autoneg = enable ? PORT_CFG_AN_ON : PORT_CFG_AN_OFF;
+
+ return hinic3_set_link_settings(hwdev, &settings);
+}
+
+static int hinic3_cfg_loopback_mode(struct hinic3_nic_io *nic_io, u8 opcode,
+ u8 *mode, u8 *enable)
+{
+ struct mag_cmd_cfg_loopback_mode lp;
+ u16 out_size = sizeof(lp);
+ int err;
+
+ memset(&lp, 0, sizeof(lp));
+ lp.port_id = hinic3_physical_port_id(nic_io->hwdev);
+ lp.opcode = opcode;
+ if (opcode == MGMT_MSG_CMD_OP_SET) {
+ lp.lp_mode = *mode;
+ lp.lp_en = *enable;
+ }
+
+ err = mag_msg_to_mgmt_sync(nic_io->hwdev, MAG_CMD_CFG_LOOPBACK_MODE,
+ &lp, sizeof(lp), &lp, &out_size);
+ if (err || !out_size || lp.head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to %s loopback mode, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == MGMT_MSG_CMD_OP_SET ? "set" : "get",
+ err, lp.head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == MGMT_MSG_CMD_OP_GET) {
+ *mode = lp.lp_mode;
+ *enable = lp.lp_en;
+ }
+
+ return 0;
+}
+
+int hinic3_get_loopback_mode(void *hwdev, u8 *mode, u8 *enable)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !mode || !enable)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return hinic3_cfg_loopback_mode(nic_io, MGMT_MSG_CMD_OP_GET, mode,
+ enable);
+}
+
+#define LOOP_MODE_MIN 1
+#define LOOP_MODE_MAX 6
+int hinic3_set_loopback_mode(void *hwdev, u8 mode, u8 enable)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (mode < LOOP_MODE_MIN || mode > LOOP_MODE_MAX) {
+ nic_err(nic_io->dev_hdl, "Invalid loopback mode %u to set\n",
+ mode);
+ return -EINVAL;
+ }
+
+ return hinic3_cfg_loopback_mode(nic_io, MGMT_MSG_CMD_OP_SET, &mode,
+ &enable);
+}
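+
+/*
+ * Example call (sketch only): the numeric loopback modes 1..6 are defined by
+ * the MAG management interface, not by this file; the last argument enables
+ * (1) or disables (0) the selected mode:
+ *
+ *	err = hinic3_set_loopback_mode(hwdev, lp_mode, 1);
+ */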
+
+int hinic3_set_led_status(void *hwdev, enum mag_led_type type,
+ enum mag_led_mode mode)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct mag_cmd_set_led_cfg led_info;
+ u16 out_size = sizeof(led_info);
+ int err;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&led_info, 0, sizeof(led_info));
+
+ led_info.function_id = hinic3_global_func_id(hwdev);
+ led_info.type = type;
+ led_info.mode = mode;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_LED_CFG, &led_info,
+ sizeof(led_info), &led_info, &out_size);
+ if (err || led_info.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to set led status, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, led_info.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_port_info(void *hwdev, struct nic_port_info *port_info,
+ u16 channel)
+{
+ struct mag_cmd_get_port_info port_msg;
+ u16 out_size = sizeof(port_msg);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !port_info)
+ return -EINVAL;
+
+ memset(&port_msg, 0, sizeof(port_msg));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ port_msg.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync_ch(hwdev, MAG_CMD_GET_PORT_INFO, &port_msg,
+ sizeof(port_msg), &port_msg, &out_size,
+ channel);
+ if (err || !out_size || port_msg.head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port info, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, port_msg.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ port_info->autoneg_cap = port_msg.an_support;
+ port_info->autoneg_state = port_msg.an_en;
+ port_info->duplex = port_msg.duplex;
+ port_info->port_type = port_msg.wire_type;
+ port_info->speed = port_msg.speed;
+ port_info->fec = port_msg.fec;
+ port_info->lanes = port_msg.lanes;
+ port_info->supported_mode = port_msg.supported_mode;
+ port_info->advertised_mode = port_msg.advertised_mode;
+ port_info->supported_fec_mode = port_msg.supported_fec_mode;
+ /* convert Gbps to Mbps */
+ port_info->bond_speed = (u32)port_msg.bond_speed * RATE_MBPS_TO_GBPS;
+ return 0;
+}
+
+int hinic3_get_speed(void *hwdev, enum mag_cmd_port_speed *speed, u16 channel)
+{
+ struct nic_port_info port_info = {0};
+ int err;
+
+ if (!hwdev || !speed)
+ return -EINVAL;
+
+ err = hinic3_get_port_info(hwdev, &port_info, channel);
+ if (err)
+ return err;
+
+ *speed = port_info.speed;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_speed);
+
+int hinic3_set_link_settings(void *hwdev,
+ struct hinic3_link_ksettings *settings)
+{
+ struct mag_cmd_set_port_cfg info;
+ u16 out_size = sizeof(info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !settings)
+ return -EINVAL;
+
+ memset(&info, 0, sizeof(info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ info.port_id = hinic3_physical_port_id(hwdev);
+ info.config_bitmap = settings->valid_bitmap;
+ info.autoneg = settings->autoneg;
+ info.speed = settings->speed;
+ info.fec = settings->fec;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_PORT_CFG, &info,
+ sizeof(info), &info, &out_size);
+ if (err || !out_size || info.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set link settings, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, info.head.status, out_size);
+ return -EIO;
+ }
+
+ return info.head.status;
+}
+
+int hinic3_get_link_state(void *hwdev, u8 *link_state)
+{
+ struct mag_cmd_get_link_status get_link;
+ u16 out_size = sizeof(get_link);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !link_state)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&get_link, 0, sizeof(get_link));
+ get_link.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_LINK_STATUS, &get_link,
+ sizeof(get_link), &get_link, &out_size);
+ if (err || !out_size || get_link.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to get link state, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, get_link.head.status, out_size);
+ return -EIO;
+ }
+
+ *link_state = get_link.status;
+
+ return 0;
+}
+
+void hinic3_notify_vf_link_status(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u8 link_status)
+{
+ struct mag_cmd_get_link_status link;
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ u16 out_size = sizeof(link);
+ int err;
+
+ memset(&link, 0, sizeof(link));
+ if (vf_infos[HW_VF_ID_TO_OS(vf_id)].registered) {
+ link.status = link_status;
+ link.port_id = hinic3_physical_port_id(nic_io->hwdev);
+ err = hinic3_mbox_to_vf_no_ack(nic_io->hwdev, vf_id,
+ HINIC3_MOD_HILINK,
+ MAG_CMD_GET_LINK_STATUS, &link,
+ sizeof(link), &link, &out_size,
+ HINIC3_CHANNEL_NIC);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+ nic_warn(nic_io->dev_hdl, "VF%d not initialized, disconnect it\n",
+ HW_VF_ID_TO_OS(vf_id));
+ hinic3_unregister_vf(nic_io, vf_id);
+ return;
+ }
+ if (err || !out_size || link.head.status)
+ nic_err(nic_io->dev_hdl,
+ "Send link change event to VF %d failed, err: %d, status: 0x%x, out_size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err, link.head.status, out_size);
+ }
+}
+
+void hinic3_notify_all_vfs_link_changed(void *hwdev, u8 link_status)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 i;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ nic_io->link_status = link_status;
+ for (i = 1; i <= nic_io->max_vfs; i++) {
+ if (!nic_io->vf_infos[HW_VF_ID_TO_OS(i)].link_forced)
+ hinic3_notify_vf_link_status(nic_io, i, link_status);
+ }
+}
+
+static char *g_hw_to_char_fec[HILINK_FEC_MAX_TYPE] = {
+ "not set", "rsfec", "basefec",
+ "nofec", "llrsfec"};
+static char *g_hw_to_speed_info[PORT_SPEED_UNKNOWN] = {
+ "not set", "10MB", "100MB", "1GB", "10GB",
+ "25GB", "40GB", "50GB", "100GB", "200GB"};
+static char *g_hw_to_an_state_info[PORT_CFG_AN_OFF + 1] = {
+ "not set", "on", "off"};
+
+struct port_type_table {
+ u32 port_type;
+ char *port_type_name;
+};
+
+static const struct port_type_table port_optical_type_table_s[] = {
+ {LINK_PORT_UNKNOWN, "UNKNOWN"},
+ {LINK_PORT_OPTICAL_MM, "optical_sr"},
+ {LINK_PORT_OPTICAL_SM, "optical_lr"},
+ {LINK_PORT_PAS_COPPER, "copper"},
+ {LINK_PORT_ACC, "ACC"},
+ {LINK_PORT_BASET, "baset"},
+ {LINK_PORT_AOC, "AOC"},
+ {LINK_PORT_ELECTRIC, "electric"},
+ {LINK_PORT_BACKBOARD_INTERFACE, "interface"},
+};
+
+static char *get_port_type_name(u32 type)
+{
+ u32 i;
+
+ for (i = 0; i < ARRAY_SIZE(port_optical_type_table_s); i++) {
+ if (type == port_optical_type_table_s[i].port_type)
+ return port_optical_type_table_s[i].port_type_name;
+ }
+ return "UNKNOWN TYPE";
+}
+
+static void get_port_type(struct hinic3_nic_io *nic_io,
+ struct mag_cmd_event_port_info *info,
+ char **port_type)
+{
+ if (info->port_type <= LINK_PORT_BACKBOARD_INTERFACE)
+ *port_type = get_port_type_name(info->port_type);
+ else
+ sdk_info(nic_io->dev_hdl, "Unknown port type: %u\n",
+ info->port_type);
+}
+
+static int get_port_temperature_power(struct mag_cmd_event_port_info *info,
+ char *str)
+{
+ char cap_info[CAP_INFO_MAX_LEN];
+
+ memset(cap_info, 0, sizeof(cap_info));
+ snprintf(cap_info, CAP_INFO_MAX_LEN, "%s, %s, Temperature: %u", str,
+ info->sfp_type ? "QSFP" : "SFP", info->cable_temp);
+
+ if (info->sfp_type)
+ snprintf(str, CAP_INFO_MAX_LEN, "%s, rx power: %uuw %uuW %uuW %uuW",
+ cap_info, info->power[0x0], info->power[0x1],
+ info->power[0x2], info->power[0x3]);
+ else
+ snprintf(str, CAP_INFO_MAX_LEN, "%s, rx power: %uuW, tx power: %uuW",
+ cap_info, info->power[0x0], info->power[0x1]);
+
+ return 0;
+}
+
+static void print_cable_info(struct hinic3_nic_io *nic_io,
+ struct mag_cmd_event_port_info *info)
+{
+ char tmp_str[CAP_INFO_MAX_LEN] = {0};
+ char tmp_vendor[DEVICE_VENDOR_MAX_LEN] = {0};
+ char *port_type = "Unknown port type";
+ int i;
+ int err = 0;
+
+ if (info->gpio_insert) {
+ sdk_info(nic_io->dev_hdl, "Cable unpresent\n");
+ return;
+ }
+
+ get_port_type(nic_io, info, &port_type);
+
+ for (i = sizeof(info->vendor_name) - 1; i >= 0; i--) {
+ if (info->vendor_name[i] == ' ')
+ info->vendor_name[i] = '\0';
+ else
+ break;
+ }
+
+ memcpy(tmp_vendor, info->vendor_name, sizeof(info->vendor_name));
+ snprintf(tmp_str, CAP_INFO_MAX_LEN, "Vendor: %s, %s, length: %um, max_speed: %uGbps",
+ tmp_vendor, port_type, info->cable_length, info->max_speed);
+
+ if (info->port_type == LINK_PORT_OPTICAL_MM ||
+ info->port_type == LINK_PORT_AOC) {
+ err = get_port_temperature_power(info, tmp_str);
+ if (err)
+ return;
+ }
+
+ sdk_info(nic_io->dev_hdl, "Cable information: %s\n", tmp_str);
+}
+
+static void print_link_info(struct hinic3_nic_io *nic_io,
+ struct mag_cmd_event_port_info *info,
+ enum hinic3_nic_event_type type)
+{
+ char *fec = "None";
+ char *speed = "None";
+ char *an_state = "None";
+
+ if (info->fec < HILINK_FEC_MAX_TYPE)
+ fec = g_hw_to_char_fec[info->fec];
+ else
+ sdk_info(nic_io->dev_hdl, "Unknown fec type: %u\n", info->fec);
+
+ if (info->an_state > PORT_CFG_AN_OFF) {
+ sdk_info(nic_io->dev_hdl, "an_state %u is invalid",
+ info->an_state);
+ return;
+ }
+
+ an_state = g_hw_to_an_state_info[info->an_state];
+
+ if (info->speed >= PORT_SPEED_UNKNOWN) {
+ sdk_info(nic_io->dev_hdl, "speed %u is invalid", info->speed);
+ return;
+ }
+
+ speed = g_hw_to_speed_info[info->speed];
+ sdk_info(nic_io->dev_hdl, "Link information: speed %s, %s, autoneg %s",
+ speed, fec, an_state);
+}
+
+void print_port_info(struct hinic3_nic_io *nic_io,
+ struct mag_cmd_event_port_info *port_info,
+ enum hinic3_nic_event_type type)
+{
+ print_cable_info(nic_io, port_info);
+
+ print_link_info(nic_io, port_info, type);
+
+ if (type == EVENT_NIC_LINK_UP)
+ return;
+
+ sdk_info(nic_io->dev_hdl, "PMA ctrl: %s, tx %s, rx %s, PMA fifo reg: 0x%x, PMA signal ok reg: 0x%x, RF/LF status reg: 0x%x\n",
+ port_info->pma_ctrl == 1 ? "off" : "on",
+ port_info->tx_enable ? "enable" : "disable",
+ port_info->rx_enable ? "enable" : "disable", port_info->pma_fifo_reg,
+ port_info->pma_signal_ok_reg, port_info->rf_lf);
+ sdk_info(nic_io->dev_hdl, "alos: %u, rx_los: %u, PCS 64 66b reg: 0x%x, PCS link: 0x%x, MAC link: 0x%x PCS_err_cnt: 0x%x\n",
+ port_info->alos, port_info->rx_los, port_info->pcs_64_66b_reg,
+ port_info->pcs_link, port_info->pcs_mac_link,
+ port_info->pcs_err_cnt);
+ sdk_info(nic_io->dev_hdl, "his_link_machine_state = 0x%08x, cur_link_machine_state = 0x%08x\n",
+ port_info->his_link_machine_state,
+ port_info->cur_link_machine_state);
+}
+
+static int hinic3_get_vf_link_status_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf_id, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ struct mag_cmd_get_link_status *get_link = buf_out;
+ bool link_forced, link_up;
+
+ link_forced = vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced;
+ link_up = vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up;
+
+ if (link_forced)
+ get_link->status = link_up ?
+ HINIC3_LINK_UP : HINIC3_LINK_DOWN;
+ else
+ get_link->status = nic_io->link_status;
+
+ get_link->head.status = 0;
+ *out_size = sizeof(*get_link);
+
+ return 0;
+}
+
+int hinic3_refresh_nic_cfg(void *hwdev, struct nic_port_info *port_info)
+{
+ /* TO DO */
+ return 0;
+}
+
+static void get_port_info(void *hwdev,
+ const struct mag_cmd_get_link_status *link_status,
+ struct hinic3_event_link_info *link_info)
+{
+ struct nic_port_info port_info = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+ if (hinic3_func_type(hwdev) != TYPE_VF && link_status->status) {
+ err = hinic3_get_port_info(hwdev, &port_info, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_warn(nic_io->dev_hdl, "Failed to get port info\n");
+ } else {
+ link_info->valid = 1;
+ link_info->port_type = port_info.port_type;
+ link_info->autoneg_cap = port_info.autoneg_cap;
+ link_info->autoneg_state = port_info.autoneg_state;
+ link_info->duplex = port_info.duplex;
+ link_info->speed = port_info.speed;
+ hinic3_refresh_nic_cfg(hwdev, &port_info);
+ }
+ }
+}
+
+static void link_status_event_handler(void *hwdev, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_link_status *link_status = NULL;
+ struct mag_cmd_get_link_status *ret_link_status = NULL;
+ struct hinic3_event_info event_info = {0};
+ struct hinic3_event_link_info *link_info = (void *)event_info.event_data;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct pci_dev *pdev = NULL;
+
+ /* Ignore link change event */
+ if (hinic3_is_bm_slave_host(hwdev))
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ link_status = buf_in;
+ sdk_info(nic_io->dev_hdl, "Link status report received, func_id: %u, status: %u\n",
+ hinic3_global_func_id(hwdev), link_status->status);
+
+ hinic3_link_event_stats(hwdev, link_status->status);
+
+ /* link event reported only after set vport enable */
+ get_port_info(hwdev, link_status, link_info);
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = link_status->status ?
+ EVENT_NIC_LINK_UP : EVENT_NIC_LINK_DOWN;
+
+ hinic3_event_callback(hwdev, &event_info);
+
+ if (nic_io->pcidev_hdl != NULL) {
+ pdev = nic_io->pcidev_hdl;
+ if (pdev->subsystem_device == BIFUR_RESOURCE_PF_SSID) {
+ return;
+ }
+ }
+
+ if (hinic3_func_type(hwdev) != TYPE_VF) {
+ hinic3_notify_all_vfs_link_changed(hwdev, link_status->status);
+ ret_link_status = buf_out;
+ ret_link_status->head.status = 0;
+ *out_size = sizeof(*ret_link_status);
+ }
+}
+
+static void port_info_event_printf(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_event_port_info *port_info = buf_in;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_event_info event_info;
+ enum hinic3_nic_event_type type;
+
+ if (!hwdev) {
+ pr_err("hwdev is NULL\n");
+ return;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ if (in_size != sizeof(*port_info)) {
+ sdk_info(nic_io->dev_hdl, "Invalid port info message size %u, should be %lu\n",
+ in_size, sizeof(*port_info));
+ return;
+ }
+
+ ((struct mag_cmd_event_port_info *)buf_out)->head.status = 0;
+
+ type = port_info->event_type;
+ if (type < EVENT_NIC_LINK_DOWN || type > EVENT_NIC_LINK_UP) {
+ sdk_info(nic_io->dev_hdl, "Invalid hilink info report, type: %d\n",
+ type);
+ return;
+ }
+
+ print_port_info(nic_io, port_info, type);
+
+ memset(&event_info, 0, sizeof(event_info));
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = type;
+
+ *out_size = sizeof(*port_info);
+
+ hinic3_event_callback(hwdev, &event_info);
+}
+
+void hinic3_notify_vf_bond_status(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u8 bond_status)
+{
+ struct mag_cmd_get_bond_status bond;
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ u16 out_size = sizeof(bond);
+ int err;
+
+ memset(&bond, 0, sizeof(bond));
+ if (vf_infos[HW_VF_ID_TO_OS(vf_id)].registered) {
+ bond.status = bond_status;
+ err = hinic3_mbox_to_vf_no_ack(nic_io->hwdev, vf_id,
+ HINIC3_MOD_HILINK,
+ MAG_CMD_GET_BOND_STATUS, &bond,
+ sizeof(bond), &bond, &out_size,
+ HINIC3_CHANNEL_NIC);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+ nic_warn(nic_io->dev_hdl, "VF %hu not initialized, disconnect it\n",
+ HW_VF_ID_TO_OS(vf_id));
+ hinic3_unregister_vf(nic_io, vf_id);
+ return;
+ }
+ if (err || !out_size || bond.head.status)
+ nic_err(nic_io->dev_hdl,
+ "Send bond change event to VF %hu failed, err: %d, status: 0x%x, out_size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err, bond.head.status,
+ out_size);
+ }
+}
+
+void hinic3_notify_all_vfs_bond_changed(void *hwdev, u8 bond_status)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 i;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ nic_io->link_status = bond_status;
+ for (i = 1; i <= nic_io->max_vfs; i++)
+ hinic3_notify_vf_bond_status(nic_io, i, bond_status);
+}
+
+static void bond_status_event_handler(void *hwdev, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_bond_status *bond_status = NULL;
+ struct hinic3_event_info event_info = {};
+ struct hinic3_nic_io *nic_io = NULL;
+ struct mag_cmd_get_bond_status *ret_bond_status = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ bond_status = (struct mag_cmd_get_bond_status *)buf_in;
+ sdk_info(nic_io->dev_hdl, "bond status report received, func_id: %u, status: %u\n",
+ hinic3_global_func_id(hwdev), bond_status->status);
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = bond_status->status ?
+ EVENT_NIC_BOND_UP : EVENT_NIC_BOND_DOWN;
+
+ hinic3_event_callback(hwdev, &event_info);
+
+ if (hinic3_func_type(hwdev) != TYPE_VF) {
+ hinic3_notify_all_vfs_bond_changed(hwdev, bond_status->status);
+ ret_bond_status = buf_out;
+ ret_bond_status->head.status = 0;
+ *out_size = sizeof(*ret_bond_status);
+ }
+}
+
+static void cable_plug_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_wire_event *plug_event = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_port_routine_cmd_extern *rt_cmd_ext = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_event_info event_info;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ rt_cmd_ext = &nic_io->nic_cfg.rt_cmd_ext;
+
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ rt_cmd->mpu_send_sfp_abs = false;
+ rt_cmd->mpu_send_sfp_info = false;
+ rt_cmd_ext->mpu_send_xsfp_tlv_info = false;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ memset(&event_info, 0, sizeof(event_info));
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = EVENT_NIC_PORT_MODULE_EVENT;
+ ((struct hinic3_port_module_event *)(void *)event_info.event_data)->type =
+ plug_event->status ? HINIC3_PORT_MODULE_CABLE_PLUGGED :
+ HINIC3_PORT_MODULE_CABLE_UNPLUGGED;
+
+ *out_size = sizeof(*plug_event);
+ plug_event = buf_out;
+ plug_event->head.status = 0;
+
+ hinic3_event_callback(hwdev, &event_info);
+}
+
+static void port_sfp_info_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_xsfp_info *sfp_info = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_port_routine_cmd_extern *rt_cmd_ext = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+ if (in_size != sizeof(*sfp_info)) {
+ sdk_err(nic_io->dev_hdl, "Invalid sfp info cmd, length: %u, should be %lu\n",
+ in_size, sizeof(*sfp_info));
+ return;
+ }
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ rt_cmd_ext = &nic_io->nic_cfg.rt_cmd_ext;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ memcpy(&rt_cmd->std_sfp_info, sfp_info,
+ sizeof(struct mag_cmd_get_xsfp_info));
+ rt_cmd->mpu_send_sfp_info = true;
+ rt_cmd_ext->mpu_send_xsfp_tlv_info = false;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+}
+
+static void port_xsfp_tlv_info_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_xsfp_tlv_rsp *xsfp_tlv_info = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_port_routine_cmd_extern *rt_cmd_ext = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ size_t cpy_len = in_size - sizeof(struct mgmt_msg_head) -
+ XSFP_TLV_PRE_INFO_LEN;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (nic_io == NULL)
+ return;
+
+ if (cpy_len > XSFP_CMIS_INFO_MAX_SIZE) {
+ sdk_err(nic_io->dev_hdl, "invalid cpy_len(%lu)\n", cpy_len);
+ return;
+ }
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ rt_cmd_ext = &nic_io->nic_cfg.rt_cmd_ext;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ rt_cmd_ext->std_xsfp_tlv_info.port_id = xsfp_tlv_info->port_id;
+ memcpy(&(rt_cmd_ext->std_xsfp_tlv_info.tlv_buf[0]),
+ &(xsfp_tlv_info->tlv_buf[0]), cpy_len);
+ rt_cmd->mpu_send_sfp_info = false;
+ rt_cmd_ext->mpu_send_xsfp_tlv_info = true;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+}
+
+static void port_sfp_abs_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_xsfp_present *sfp_abs = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+ if (in_size != sizeof(*sfp_abs)) {
+ sdk_err(nic_io->dev_hdl, "Invalid sfp absent cmd, length: %u, should be %lu\n",
+ in_size, sizeof(*sfp_abs));
+ return;
+ }
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ memcpy(&rt_cmd->abs, sfp_abs, sizeof(struct mag_cmd_get_xsfp_present));
+ rt_cmd->mpu_send_sfp_abs = true;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+}
+
+bool hinic3_if_sfp_absent(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct mag_cmd_get_xsfp_present sfp_abs;
+ u8 port_id = hinic3_physical_port_id(hwdev);
+ u16 out_size = sizeof(sfp_abs);
+ int err;
+ bool sfp_abs_status = false;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return true;
+ memset(&sfp_abs, 0, sizeof(sfp_abs));
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ if (rt_cmd->mpu_send_sfp_abs) {
+ if (rt_cmd->abs.head.status) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return true;
+ }
+
+ sfp_abs_status = (bool)rt_cmd->abs.abs_status;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return sfp_abs_status;
+ }
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ sfp_abs.port_id = port_id;
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_XSFP_PRESENT,
+ &sfp_abs, sizeof(sfp_abs), &sfp_abs,
+ &out_size);
+ if (sfp_abs.head.status || err || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port%u sfp absent status, err: %d, status: 0x%x, out size: 0x%x\n",
+ port_id, err, sfp_abs.head.status, out_size);
+ return true;
+ }
+
+ return (sfp_abs.abs_status == 0 ? false : true);
+}
+
+int hinic3_get_sfp_tlv_info(void *hwdev, struct drv_mag_cmd_get_xsfp_tlv_rsp
+ *sfp_tlv_info,
+ const struct mag_cmd_get_xsfp_tlv_req
+ *sfp_tlv_info_req)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd_extern *rt_cmd_ext = NULL;
+ u16 out_size = sizeof(*sfp_tlv_info);
+ int err;
+
+ if ((hwdev == NULL) || (sfp_tlv_info == NULL))
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (nic_io == NULL)
+ return -EINVAL;
+
+ rt_cmd_ext = &nic_io->nic_cfg.rt_cmd_ext;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ if (rt_cmd_ext->mpu_send_xsfp_tlv_info == true) {
+ if (rt_cmd_ext->std_xsfp_tlv_info.head.status != 0) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return -EIO;
+ }
+
+ memcpy(sfp_tlv_info, &rt_cmd_ext->std_xsfp_tlv_info,
+ sizeof(*sfp_tlv_info));
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return 0;
+ }
+
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_XSFP_TLV_INFO,
+ (void *)sfp_tlv_info_req,
+ sizeof(*sfp_tlv_info_req),
+ sfp_tlv_info, &out_size);
+ if ((sfp_tlv_info->head.status != 0) || (err != 0) || (out_size == 0)) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port%u tlv sfp eeprom information, err: %d, status: 0x%x, out size: 0x%x\n",
+ hinic3_physical_port_id(hwdev), err,
+ sfp_tlv_info->head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
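+/*
+ * CMIS page to flat-buffer layout implemented below (summary; assumes
+ * QSFP_CMIS_PAGE_04H/05H evaluate to 4 and 5): pages 00h-03h map to slots
+ * 0-3, page 11h to slot 4 and page 12h to slot 5, each slot content_len
+ * bytes wide. For example, with content_len = 128, page 11h starts at
+ * offset 4 * 128 = 512 in tlv_page_info.
+ */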
+static int hinic3_trans_cmis_get_page_pos(u32 page_id, u32 content_len, u32 *pos)
+{
+ if (page_id <= QSFP_CMIS_PAGE_03H) {
+ *pos = (page_id * content_len);
+ return 0;
+ }
+
+ if (page_id == QSFP_CMIS_PAGE_11H) {
+ *pos = (QSFP_CMIS_PAGE_04H * content_len);
+ return 0;
+ }
+
+ if (page_id == QSFP_CMIS_PAGE_12H) {
+ *pos = (QSFP_CMIS_PAGE_05H * content_len);
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int hinic3_get_page_key_info(struct mgmt_tlv_info *tlv_info,
+ struct parse_tlv_info *page_info, u8 idx,
+ u32 *total_len)
+{
+ u8 *src_addr = NULL;
+ u8 *dst_addr = NULL;
+ u8 *tmp_addr = NULL;
+ u32 page_id = 0;
+ u32 content_len = 0;
+ u32 src_pos = 0;
+ int ret;
+
+ page_id = MGMT_TLV_GET_U32(tlv_info->value);
+ content_len = tlv_info->length - MGMT_TLV_U32_SIZE;
+ if (page_id == QSFP_CMIS_PAGE_00H) {
+ tmp_addr = (u8 *)(tlv_info + 1);
+ page_info->id = *(tmp_addr + MGMT_TLV_U32_SIZE);
+ }
+
+ ret = hinic3_trans_cmis_get_page_pos(page_id, content_len, &src_pos);
+ if (ret != 0)
+ return ret;
+
+ src_addr = page_info->tlv_page_info + src_pos;
+ tmp_addr = (u8 *)(tlv_info + 1);
+ dst_addr = tmp_addr + MGMT_TLV_U32_SIZE;
+ memcpy(src_addr, dst_addr, content_len);
+
+ if (idx < XSFP_CMIS_PARSE_PAGE_NUM)
+ page_info->tlv_page_num[idx] = page_id;
+
+ *total_len += content_len;
+
+ return 0;
+}
+
+static int hinic3_trans_cmis_tlv_info_to_buf(u8 *sfp_tlv_info,
+ struct parse_tlv_info *page_info)
+{
+ struct mgmt_tlv_info *tlv_info = NULL;
+ u8 *tlv_buf = sfp_tlv_info;
+ u8 idx = 0;
+ u32 total_len = 0;
+ int ret = 0;
+ bool need_continue = true;
+
+ if ((sfp_tlv_info == NULL) || (page_info == NULL))
+ return -EIO;
+
+ while (need_continue) {
+ tlv_info = (struct mgmt_tlv_info *)tlv_buf;
+ switch (tlv_info->type) {
+ case MAG_XSFP_TYPE_PAGE:
+ ret = hinic3_get_page_key_info(
+ tlv_info, page_info, idx, &total_len);
+ if (ret != 0) {
+ pr_err("lib_get_page_key_info fail,ret:0x%x.\n",
+ ret);
+ break;
+ }
+ idx++;
+ break;
+
+ case MAG_XSFP_TYPE_WIRE_TYPE:
+ page_info->wire_type =
+ MGMT_TLV_GET_U32(&(tlv_info->value[0]));
+ break;
+
+ case MAG_XSFP_TYPE_END:
+ need_continue = false;
+ break;
+
+ default:
+ break;
+ }
+
+ tlv_buf += (sizeof(struct mgmt_tlv_info) + tlv_info->length);
+ }
+
+ page_info->tlv_page_info_len = total_len;
+
+ return 0;
+}
+
+int hinic3_get_tlv_xsfp_eeprom(void *hwdev, u8 *data, u32 len)
+{
+ int err = 0;
+ struct mag_cmd_get_xsfp_tlv_req xsfp_tlv_info_req = {0};
+
+ xsfp_tlv_info_req.rsp_buf_len = XSFP_CMIS_INFO_MAX_SIZE;
+ xsfp_tlv_info_req.port_id = hinic3_physical_port_id(hwdev);
+ err = hinic3_get_sfp_tlv_info(hwdev, &g_xsfp_tlv_info,
+ &xsfp_tlv_info_req);
+ if (err != 0)
+ return err;
+
+ err = hinic3_trans_cmis_tlv_info_to_buf(g_xsfp_tlv_info.tlv_buf,
+ &g_page_info);
+ if (err)
+ return -ENOMEM;
+
+ memcpy(data, g_page_info.tlv_page_info, len);
+
+ return 0;
+}
+
+int hinic3_get_sfp_info(void *hwdev, struct mag_cmd_get_xsfp_info *sfp_info)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ u8 sfp_info_status = 0;
+ u16 out_size = sizeof(*sfp_info);
+ int err;
+
+ if (!hwdev || !sfp_info)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ sfp_info_status = rt_cmd->std_sfp_info.head.status;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ if (rt_cmd->mpu_send_sfp_info) {
+ if (sfp_info_status != 0) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return (sfp_info_status == HINIC3_MGMT_CMD_UNSUPPORTED)
+ ? HINIC3_MGMT_CMD_UNSUPPORTED : -EIO;
+ }
+
+ memcpy(sfp_info, &rt_cmd->std_sfp_info, sizeof(*sfp_info));
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return 0;
+ }
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ sfp_info->port_id = hinic3_physical_port_id(hwdev);
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_XSFP_INFO, sfp_info,
+ sizeof(*sfp_info), sfp_info, &out_size);
+ if (sfp_info->head.status == HINIC3_MGMT_CMD_UNSUPPORTED) {
+ return HINIC3_MGMT_CMD_UNSUPPORTED;
+ }
+
+ if ((sfp_info->head.status != 0) || (err != 0) || (out_size == 0)) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port%u sfp eeprom information, err: %d, status: 0x%x, out size: 0x%x\n",
+ hinic3_physical_port_id(hwdev), err,
+ sfp_info->head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_sfp_eeprom(void *hwdev, u8 *data, u32 len)
+{
+ struct mag_cmd_get_xsfp_info sfp_info;
+ int err;
+
+ if (!hwdev || !data || len > PAGE_SIZE)
+ return -EINVAL;
+
+ if (hinic3_if_sfp_absent(hwdev))
+ return -ENXIO;
+
+ memset(&sfp_info, 0, sizeof(sfp_info));
+
+ err = hinic3_get_sfp_info(hwdev, &sfp_info);
+ if (err)
+ return err;
+
+ memcpy(data, sfp_info.sfp_info, sizeof(sfp_info.sfp_info));
+
+ return 0;
+}
+
+int hinic3_get_sfp_type(void *hwdev, u8 *sfp_type, u8 *sfp_type_ext)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ u8 sfp_data[STD_SFP_INFO_MAX_SIZE];
+ int err = 0;
+
+ if (!hwdev || !sfp_type || !sfp_type_ext)
+ return -EINVAL;
+
+ if (hinic3_if_sfp_absent(hwdev))
+ return -ENXIO;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ if (rt_cmd->mpu_send_sfp_info) {
+ if (rt_cmd->std_sfp_info.head.status == 0) {
+ *sfp_type = rt_cmd->std_sfp_info.sfp_info[0];
+ *sfp_type_ext = rt_cmd->std_sfp_info.sfp_info[1];
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return 0;
+ }
+
+ if (rt_cmd->std_sfp_info.head.status != HINIC3_MGMT_CMD_UNSUPPORTED) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return -EIO;
+ }
+
+ err = HINIC3_MGMT_CMD_UNSUPPORTED; /* cmis */
+ }
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ if (err == 0) {
+ err = hinic3_get_sfp_eeprom(hwdev, (u8 *)sfp_data,
+ STD_SFP_INFO_MAX_SIZE);
+ } else {
+ /* standard eeprom info unsupported (CMIS module), read tlv format */
+ err = hinic3_get_tlv_xsfp_eeprom(hwdev, (u8 *)sfp_data,
+ STD_SFP_INFO_MAX_SIZE);
+ }
+
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ err = hinic3_get_tlv_xsfp_eeprom(hwdev, (u8 *)sfp_data,
+ STD_SFP_INFO_MAX_SIZE);
+
+ if (err)
+ return err;
+
+ *sfp_type = sfp_data[0];
+ *sfp_type_ext = sfp_data[1];
+
+ return 0;
+}
+
+int hinic3_set_link_status_follow(void *hwdev, enum hinic3_link_follow_status status)
+{
+ struct mag_cmd_set_link_follow follow;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(follow);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (status >= HINIC3_LINK_FOLLOW_STATUS_MAX) {
+ nic_err(nic_io->dev_hdl, "Invalid link follow status: %d\n", status);
+ return -EINVAL;
+ }
+
+ memset(&follow, 0, sizeof(follow));
+ follow.function_id = hinic3_global_func_id(hwdev);
+ follow.follow = status;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_LINK_FOLLOW, &follow,
+ sizeof(follow), &follow, &out_size);
+ if ((follow.head.status != HINIC3_MGMT_CMD_UNSUPPORTED && follow.head.status) ||
+ err || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to set link status follow port status, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, follow.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return follow.head.status;
+}
+
+int hinic3_update_pf_bw(void *hwdev)
+{
+ struct nic_port_info port_info = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF || !HINIC3_SUPPORT_RATE_LIMIT(hwdev))
+ return 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ err = hinic3_get_port_info(hwdev, &port_info, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get port info\n");
+ return -EIO;
+ }
+
+ err = hinic3_set_pf_rate(hwdev, port_info.speed);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set pf bandwidth\n");
+ return err;
+ }
+
+ return 0;
+}
+
+int hinic3_set_pf_bw_limit(void *hwdev, u32 bw_limit)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 old_bw_limit;
+ u8 link_state = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (bw_limit > MAX_LIMIT_BW) {
+ nic_err(nic_io->dev_hdl, "Invalid bandwidth: %u\n", bw_limit);
+ return -EINVAL;
+ }
+
+ err = hinic3_get_link_state(hwdev, &link_state);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get link state\n");
+ return -EIO;
+ }
+
+ if (!link_state) {
+ nic_err(nic_io->dev_hdl, "Link status must be up when setting pf tx rate\n");
+ return -EINVAL;
+ }
+
+ if (nic_io->direct == HINIC3_NIC_TX) {
+ old_bw_limit = nic_io->nic_cfg.pf_bw_tx_limit;
+ nic_io->nic_cfg.pf_bw_tx_limit = bw_limit;
+ } else {
+ old_bw_limit = nic_io->nic_cfg.pf_bw_rx_limit;
+ nic_io->nic_cfg.pf_bw_rx_limit = bw_limit;
+ }
+
+ err = hinic3_update_pf_bw(hwdev);
+ if (err) {
+ if (nic_io->direct == HINIC3_NIC_TX)
+ nic_io->nic_cfg.pf_bw_tx_limit = old_bw_limit;
+ else
+ nic_io->nic_cfg.pf_bw_rx_limit = old_bw_limit;
+ return err;
+ }
+
+ return 0;
+}
+
+static const struct vf_msg_handler vf_mag_cmd_handler[] = {
+ {
+ .cmd = MAG_CMD_GET_LINK_STATUS,
+ .handler = hinic3_get_vf_link_status_msg_handler,
+ },
+};
+
+/* pf/ppf handler mbox msg from vf */
+int hinic3_pf_mag_mbox_handler(void *hwdev, u16 vf_id,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 index, cmd_size = ARRAY_LEN(vf_mag_cmd_handler);
+ struct hinic3_nic_io *nic_io = NULL;
+ const struct vf_msg_handler *handler = NULL;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ for (index = 0; index < cmd_size; index++) {
+ handler = &vf_mag_cmd_handler[index];
+ if (cmd == handler->cmd)
+ return handler->handler(nic_io, vf_id, buf_in, in_size,
+ buf_out, out_size);
+ }
+
+ nic_warn(nic_io->dev_hdl, "NO handler for mag cmd: %u received from vf id: %u\n",
+ cmd, vf_id);
+
+ return -EINVAL;
+}
+
+static struct nic_event_handler mag_cmd_handler[] = {
+ {
+ .cmd = MAG_CMD_GET_LINK_STATUS,
+ .handler = link_status_event_handler,
+ },
+
+ {
+ .cmd = MAG_CMD_EVENT_PORT_INFO,
+ .handler = port_info_event_printf,
+ },
+
+ {
+ .cmd = MAG_CMD_WIRE_EVENT,
+ .handler = cable_plug_event,
+ },
+
+ {
+ .cmd = MAG_CMD_GET_XSFP_INFO,
+ .handler = port_sfp_info_event,
+ },
+
+ {
+ .cmd = MAG_CMD_GET_XSFP_PRESENT,
+ .handler = port_sfp_abs_event,
+ },
+
+ {
+ .cmd = MAG_CMD_GET_BOND_STATUS,
+ .handler = bond_status_event_handler,
+ },
+
+ {
+ .cmd = MAG_CMD_GET_XSFP_TLV_INFO,
+ .handler = port_xsfp_tlv_info_event,
+ },
+};
+
+static int hinic3_mag_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 size = ARRAY_LEN(mag_cmd_handler);
+ u32 i;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ *out_size = 0;
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ for (i = 0; i < size; i++) {
+ if (cmd == mag_cmd_handler[i].cmd) {
+ mag_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ return 0;
+ }
+ }
+
+ /* can't find this event cmd */
+ sdk_warn(nic_io->dev_hdl, "Unsupported mag event, cmd: %u\n", cmd);
+ *out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+
+ return 0;
+}
+
+int hinic3_vf_mag_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ return hinic3_mag_event_handler(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+}
+
+/* pf/ppf handler mgmt cpu report hilink event */
+void hinic3_pf_mag_event_handler(void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ hinic3_mag_event_handler(pri_handle, cmd, buf_in, in_size,
+ buf_out, out_size);
+}
+
+static int _mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel)
+{
+ u32 i, cmd_cnt = ARRAY_LEN(vf_mag_cmd_handler);
+ bool cmd_to_pf = false;
+
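+	/* a VF (when not on a slave host) forwards supported commands to its
+	 * PF via the mailbox; otherwise the command goes directly to the MPU
+	 */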
+ if (hinic3_func_type(hwdev) == TYPE_VF &&
+ !hinic3_is_slave_host(hwdev)) {
+ for (i = 0; i < cmd_cnt; i++) {
+ if (cmd == vf_mag_cmd_handler[i].cmd) {
+ cmd_to_pf = true;
+ break;
+ }
+ }
+ }
+
+ if (cmd_to_pf)
+ return hinic3_mbox_to_pf(hwdev, HINIC3_MOD_HILINK, cmd, buf_in,
+ in_size, buf_out, out_size, 0,
+ channel);
+
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_HILINK, cmd, buf_in,
+ in_size, buf_out, out_size, 0, channel);
+}
+
+static int mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _mag_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, HINIC3_CHANNEL_NIC);
+}
+
+static int mag_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel)
+{
+ return _mag_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, channel);
+}
+
+#if defined(ETHTOOL_GFECPARAM) && defined(ETHTOOL_SFECPARAM)
+struct fecparam_value_map {
+ u8 hinic3_fec_offset;
+ u8 hinic3_fec_value;
+ u8 ethtool_fec_value;
+};
+
+static void fecparam_convert(u32 opcode, u8 in_fec_param, u8 *out_fec_param)
+{
+ u8 i;
+	u8 fec_value_table_length;
+ struct fecparam_value_map fec_value_table[] = {
+ {PORT_FEC_NOT_SET, BIT(PORT_FEC_NOT_SET), ETHTOOL_FEC_NONE},
+ {PORT_FEC_RSFEC, BIT(PORT_FEC_RSFEC), ETHTOOL_FEC_RS},
+ {PORT_FEC_BASEFEC, BIT(PORT_FEC_BASEFEC), ETHTOOL_FEC_BASER},
+ {PORT_FEC_NOFEC, BIT(PORT_FEC_NOFEC), ETHTOOL_FEC_OFF},
+#ifdef ETHTOOL_FEC_LLRS
+ {PORT_FEC_LLRSFEC, BIT(PORT_FEC_LLRSFEC), ETHTOOL_FEC_LLRS},
+#endif
+ {PORT_FEC_AUTO, BIT(PORT_FEC_AUTO), ETHTOOL_FEC_AUTO}
+ };
+
+ *out_fec_param = 0;
+	fec_value_table_length = (u8)(sizeof(fec_value_table) / sizeof(struct fecparam_value_map));
+
+ if (opcode == MAG_CMD_OPCODE_SET) {
+		for (i = 0; i < fec_value_table_length; i++) {
+ if ((in_fec_param &
+ fec_value_table[i].ethtool_fec_value) != 0)
+ /* The MPU uses the offset to determine the FEC mode. */
+ *out_fec_param =
+ fec_value_table[i].hinic3_fec_offset;
+ }
+ }
+
+ if (opcode == MAG_CMD_OPCODE_GET) {
+		for (i = 0; i < fec_value_table_length; i++) {
+ if ((in_fec_param &
+ fec_value_table[i].hinic3_fec_value) != 0)
+ *out_fec_param |=
+ fec_value_table[i].ethtool_fec_value;
+ }
+ }
+}
+
+/* When the ethtool is used to set the FEC mode */
+static bool check_fecparam_is_valid(u8 fec_param)
+{
+ if (
+#ifdef ETHTOOL_FEC_LLRS
+ (fec_param == ETHTOOL_FEC_LLRS) ||
+#endif
+ (fec_param == ETHTOOL_FEC_RS) ||
+ (fec_param == ETHTOOL_FEC_BASER) ||
+ (fec_param == ETHTOOL_FEC_OFF)) {
+ return true;
+ }
+ return false;
+}
+
+int set_fecparam(void *hwdev, u8 fecparam)
+{
+ struct mag_cmd_cfg_fec_mode fec_msg = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(fec_msg);
+ u8 advertised_fec = 0;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+	if (!check_fecparam_is_valid(fecparam)) {
+ nic_err(nic_io->dev_hdl, "fec param is invalid, failed to set fec param\n");
+ return -EINVAL;
+ }
+ fecparam_convert(MAG_CMD_OPCODE_SET, fecparam, &advertised_fec);
+ fec_msg.opcode = MAG_CMD_OPCODE_SET;
+ fec_msg.port_id = hinic3_physical_port_id(hwdev);
+ fec_msg.advertised_fec = advertised_fec;
+ err = mag_msg_to_mgmt_sync_ch(hwdev, MAG_CMD_CFG_FEC_MODE,
+ &fec_msg, sizeof(fec_msg),
+ &fec_msg, &out_size, HINIC3_CHANNEL_NIC);
+ if ((err != 0) || (fec_msg.head.status != 0))
+ return -EINVAL;
+ return 0;
+}
+
+int get_fecparam(void *hwdev, u8 *advertised_fec, u8 *supported_fec)
+{
+ struct mag_cmd_cfg_fec_mode fec_msg = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(fec_msg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ fec_msg.opcode = MAG_CMD_OPCODE_GET;
+ fec_msg.port_id = hinic3_physical_port_id(hwdev);
+ err = mag_msg_to_mgmt_sync_ch(hwdev, MAG_CMD_CFG_FEC_MODE,
+ &fec_msg, sizeof(fec_msg),
+ &fec_msg, &out_size, HINIC3_CHANNEL_NIC);
+ if ((err != 0) || (fec_msg.head.status != 0))
+ return -EINVAL;
+
+ /* fec_msg.advertised_fec: bit offset,
+ *value is BIT(fec_msg.advertised_fec); fec_msg.supported_fec: value
+ */
+ fecparam_convert(MAG_CMD_OPCODE_GET, BIT(fec_msg.advertised_fec),
+ advertised_fec);
+ fecparam_convert(MAG_CMD_OPCODE_GET, fec_msg.supported_fec,
+ supported_fec);
+ return 0;
+}
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_main.c b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
new file mode 100644
index 0000000..7790ae2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
@@ -0,0 +1,1469 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/dcbnl.h>
+#include <linux/tcp.h>
+#include <linux/ip.h>
+#include <linux/debugfs.h>
+
+#include "ossl_knl.h"
+#if defined(HAVE_NDO_UDP_TUNNEL_ADD) || defined(HAVE_UDP_TUNNEL_NIC_INFO)
+#include <net/udp_tunnel.h>
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD || HAVE_UDP_TUNNEL_NIC_INFO */
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_lld.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_rss.h"
+#include "hinic3_dcb.h"
+#include "hinic3_nic_prof.h"
+#include "hinic3_profile.h"
+#include "hinic3_bond.h"
+
+#define DEFAULT_POLL_WEIGHT 64
+static unsigned int poll_weight = DEFAULT_POLL_WEIGHT;
+module_param(poll_weight, uint, 0444);
+MODULE_PARM_DESC(poll_weight, "Number of packets for NAPI budget (default=64)");
+
+#define HINIC3_DEAULT_TXRX_MSIX_PENDING_LIMIT 2
+#define HINIC3_DEAULT_TXRX_MSIX_COALESC_TIMER_CFG 25
+#define HINIC3_DEAULT_TXRX_MSIX_RESEND_TIMER_CFG 7
+
+static unsigned char qp_pending_limit = HINIC3_DEAULT_TXRX_MSIX_PENDING_LIMIT;
+module_param(qp_pending_limit, byte, 0444);
+MODULE_PARM_DESC(qp_pending_limit, "QP MSI-X Interrupt coalescing parameter pending_limit (default=2)");
+
+static unsigned char qp_coalesc_timer_cfg =
+ HINIC3_DEAULT_TXRX_MSIX_COALESC_TIMER_CFG;
+module_param(qp_coalesc_timer_cfg, byte, 0444);
+MODULE_PARM_DESC(qp_coalesc_timer_cfg, "QP MSI-X Interrupt coalescing parameter coalesc_timer_cfg (default=25)");
+
+#define DEFAULT_RX_BUFF_LEN 2
+u16 rx_buff = DEFAULT_RX_BUFF_LEN;
+module_param(rx_buff, ushort, 0444);
+MODULE_PARM_DESC(rx_buff, "Set rx_buff size in KB, must be a power of 2 in the range 2 - 16, default is 2KB");
+
+static unsigned int lro_replenish_thld = 256;
+module_param(lro_replenish_thld, uint, 0444);
+MODULE_PARM_DESC(lro_replenish_thld, "Number of wqes for lro replenish buffer (default=256)");
+
+static unsigned char set_link_status_follow = HINIC3_LINK_FOLLOW_STATUS_MAX;
+module_param(set_link_status_follow, byte, 0444);
+MODULE_PARM_DESC(set_link_status_follow, "Set link status follow port status (0=default,1=follow,2=separate,3=unset)");
+
+static bool page_pool_enabled = true;
+module_param(page_pool_enabled, bool, 0444);
+MODULE_PARM_DESC(page_pool_enabled, "enable/disable page_pool feature for rxq page management (default enable)");
+
+#define HINIC3_NIC_DEV_WQ_NAME "hinic3_nic_dev_wq"
+
+#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_LINK)
+
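+/* queue id mask, assumes num_qps is a power of 2 */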
+#define QID_MASKED(q_id, nic_dev) ((q_id) & ((nic_dev)->num_qps - 1))
+#define WATCHDOG_TIMEOUT 5
+
+#define HINIC3_SQ_DEPTH 1024
+#define HINIC3_RQ_DEPTH 1024
+
+#define LRO_ENABLE 1
+
+enum hinic3_rx_buff_len {
+ RX_BUFF_VALID_2KB = 2,
+ RX_BUFF_VALID_4KB = 4,
+ RX_BUFF_VALID_8KB = 8,
+ RX_BUFF_VALID_16KB = 16,
+};
+
+#define CONVERT_UNIT 1024
+#define NIC_MAX_PF_NUM 32
+
+#define BIFUR_RESOURCE_PF_SSID 0x5a1
+
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+static int hinic3_netdev_event(struct notifier_block *notifier, unsigned long event, void *ptr);
+
+/* used for netdev notifier register/unregister */
+static DEFINE_MUTEX(hinic3_netdev_notifiers_mutex);
+static int hinic3_netdev_notifiers_ref_cnt;
+static struct notifier_block hinic3_netdev_notifier = {
+ .notifier_call = hinic3_netdev_event,
+};
+
+#ifdef HAVE_UDP_TUNNEL_NIC_INFO
+static const struct udp_tunnel_nic_info hinic3_udp_tunnels = {
+ .set_port = hinic3_udp_tunnel_set_port,
+ .unset_port = hinic3_udp_tunnel_unset_port,
+ .flags = UDP_TUNNEL_NIC_INFO_MAY_SLEEP,
+ .tables = {
+ { .n_entries = 1, .tunnel_types = UDP_TUNNEL_TYPE_VXLAN, },
+ },
+};
+#endif /* HAVE_UDP_TUNNEL_NIC_INFO */
+
+static void hinic3_register_notifier(struct hinic3_nic_dev *nic_dev)
+{
+ int err;
+
+ mutex_lock(&hinic3_netdev_notifiers_mutex);
+ hinic3_netdev_notifiers_ref_cnt++;
+ if (hinic3_netdev_notifiers_ref_cnt == 1) {
+ err = register_netdevice_notifier(&hinic3_netdev_notifier);
+ if (err) {
+ nic_info(&nic_dev->pdev->dev, "Register netdevice notifier failed, err: %d\n",
+ err);
+ hinic3_netdev_notifiers_ref_cnt--;
+ }
+ }
+ mutex_unlock(&hinic3_netdev_notifiers_mutex);
+}
+
+static void hinic3_unregister_notifier(struct hinic3_nic_dev *nic_dev)
+{
+ mutex_lock(&hinic3_netdev_notifiers_mutex);
+ if (hinic3_netdev_notifiers_ref_cnt == 1)
+ unregister_netdevice_notifier(&hinic3_netdev_notifier);
+
+ if (hinic3_netdev_notifiers_ref_cnt)
+ hinic3_netdev_notifiers_ref_cnt--;
+ mutex_unlock(&hinic3_netdev_notifiers_mutex);
+}
+
+#define HINIC3_MAX_VLAN_DEPTH_OFFLOAD_SUPPORT 1
+#define HINIC3_VLAN_CLEAR_OFFLOAD (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | \
+ NETIF_F_SCTP_CRC | NETIF_F_RXCSUM | \
+ NETIF_F_ALL_TSO)
+
+static int hinic3_netdev_event(struct notifier_block *notifier, unsigned long event, void *ptr)
+{
+ struct net_device *ndev = netdev_notifier_info_to_dev(ptr);
+ struct net_device *real_dev = NULL;
+ struct net_device *ret = NULL;
+ u16 vlan_depth;
+
+ if (!is_vlan_dev(ndev))
+ return NOTIFY_DONE;
+
+ dev_hold(ndev);
+
+ switch (event) {
+ case NETDEV_REGISTER:
+ real_dev = vlan_dev_real_dev(ndev);
+ if (!hinic3_is_netdev_ops_match(real_dev))
+ goto out;
+
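+		/* walk the vlan stack down to the real device to count the nesting depth */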
+ vlan_depth = 1;
+ ret = vlan_dev_priv(ndev)->real_dev;
+ while (is_vlan_dev(ret)) {
+ ret = vlan_dev_priv(ret)->real_dev;
+ vlan_depth++;
+ }
+
+ if (vlan_depth == HINIC3_MAX_VLAN_DEPTH_OFFLOAD_SUPPORT) {
+ ndev->vlan_features &= (~HINIC3_VLAN_CLEAR_OFFLOAD);
+ } else if (vlan_depth > HINIC3_MAX_VLAN_DEPTH_OFFLOAD_SUPPORT) {
+#ifdef HAVE_NDO_SET_FEATURES
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_hw_features(ndev,
+ get_netdev_hw_features(ndev) &
+ (~HINIC3_VLAN_CLEAR_OFFLOAD));
+#else
+ ndev->hw_features &= (~HINIC3_VLAN_CLEAR_OFFLOAD);
+#endif
+#endif
+ ndev->features &= (~HINIC3_VLAN_CLEAR_OFFLOAD);
+ }
+
+ break;
+
+ default:
+ break;
+	}
+
+out:
+ dev_put(ndev);
+
+ return NOTIFY_DONE;
+}
+#endif
+
+void hinic3_link_status_change(struct hinic3_nic_dev *nic_dev, bool status)
+{
+ struct net_device *netdev = nic_dev->netdev;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev) ||
+ test_bit(HINIC3_LP_TEST, &nic_dev->flags) ||
+ test_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ return;
+
+ if (status) {
+ if (netif_carrier_ok(netdev))
+ return;
+
+ nic_dev->link_status = status;
+ netif_carrier_on(netdev);
+ nicif_info(nic_dev, link, netdev, "Link is up\n");
+ } else {
+ if (!netif_carrier_ok(netdev))
+ return;
+
+ nic_dev->link_status = status;
+ netif_carrier_off(netdev);
+ nicif_info(nic_dev, link, netdev, "Link is down\n");
+ }
+}
+
+static void netdev_feature_init(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ netdev_features_t dft_fts = 0;
+ netdev_features_t cso_fts = 0;
+ netdev_features_t vlan_fts = 0;
+ netdev_features_t tso_fts = 0;
+ netdev_features_t hw_features = 0;
+
+ dft_fts |= NETIF_F_SG | NETIF_F_HIGHDMA;
+
+ if (HINIC3_SUPPORT_CSUM(nic_dev->hwdev))
+ cso_fts |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM;
+ if (HINIC3_SUPPORT_SCTP_CRC(nic_dev->hwdev))
+ cso_fts |= NETIF_F_SCTP_CRC;
+
+ if (HINIC3_SUPPORT_TSO(nic_dev->hwdev))
+ tso_fts |= NETIF_F_TSO | NETIF_F_TSO6;
+
+ if (HINIC3_SUPPORT_VLAN_OFFLOAD(nic_dev->hwdev)) {
+#if defined(NETIF_F_HW_VLAN_CTAG_TX)
+ vlan_fts |= NETIF_F_HW_VLAN_CTAG_TX;
+#elif defined(NETIF_F_HW_VLAN_TX)
+ vlan_fts |= NETIF_F_HW_VLAN_TX;
+#endif
+
+#if defined(NETIF_F_HW_VLAN_CTAG_RX)
+ vlan_fts |= NETIF_F_HW_VLAN_CTAG_RX;
+#elif defined(NETIF_F_HW_VLAN_RX)
+ vlan_fts |= NETIF_F_HW_VLAN_RX;
+#endif
+ }
+
+ if (HINIC3_SUPPORT_RXVLAN_FILTER(nic_dev->hwdev)) {
+#if defined(NETIF_F_HW_VLAN_CTAG_FILTER)
+ vlan_fts |= NETIF_F_HW_VLAN_CTAG_FILTER;
+#elif defined(NETIF_F_HW_VLAN_FILTER)
+ vlan_fts |= NETIF_F_HW_VLAN_FILTER;
+#endif
+ }
+
+#ifdef HAVE_ENCAPSULATION_TSO
+ if (HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev->hwdev))
+ tso_fts |= NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_UDP_TUNNEL_CSUM;
+#endif /* HAVE_ENCAPSULATION_TSO */
+
+	/* LRO is disabled by default, only set hw features */
+ if (HINIC3_SUPPORT_LRO(nic_dev->hwdev))
+ hw_features |= NETIF_F_LRO;
+
+ netdev->features |= dft_fts | cso_fts | tso_fts | vlan_fts;
+ netdev->vlan_features |= dft_fts | cso_fts | tso_fts;
+
+ if (nic_dev->nic_cap.lro_enable == LRO_ENABLE) {
+ netdev->features |= NETIF_F_LRO;
+ netdev->vlan_features |= NETIF_F_LRO;
+ }
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ hw_features |= get_netdev_hw_features(netdev);
+#else
+ hw_features |= netdev->hw_features;
+#endif
+
+ hw_features |= netdev->features;
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_hw_features(netdev, hw_features);
+#else
+ netdev->hw_features = hw_features;
+#endif
+
+#ifdef IFF_UNICAST_FLT
+ netdev->priv_flags |= IFF_UNICAST_FLT;
+#endif
+
+#ifdef HAVE_ENCAPSULATION_CSUM
+ netdev->hw_enc_features |= dft_fts;
+ if (HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev->hwdev)) {
+ netdev->hw_enc_features |= cso_fts;
+#ifdef HAVE_ENCAPSULATION_TSO
+ netdev->hw_enc_features |= tso_fts | NETIF_F_TSO_ECN;
+#endif /* HAVE_ENCAPSULATION_TSO */
+ }
+#endif /* HAVE_ENCAPSULATION_CSUM */
+}
+
+static void init_intr_coal_param(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_intr_coal_info *info = NULL;
+ u16 i;
+
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ info = &nic_dev->intr_coalesce[i];
+
+ info->pending_limt = qp_pending_limit;
+ info->coalesce_timer_cfg = qp_coalesc_timer_cfg;
+
+ info->resend_timer_cfg = HINIC3_DEAULT_TXRX_MSIX_RESEND_TIMER_CFG;
+
+ info->pkt_rate_high = HINIC3_RX_RATE_HIGH;
+ info->rx_usecs_high = HINIC3_RX_COAL_TIME_HIGH;
+ info->rx_pending_limt_high = HINIC3_RX_PENDING_LIMIT_HIGH;
+
+ info->pkt_rate_low = HINIC3_RX_RATE_LOW;
+ info->rx_usecs_low = HINIC3_RX_COAL_TIME_LOW;
+ info->rx_pending_limt_low = HINIC3_RX_PENDING_LIMIT_LOW;
+ }
+}
+
+static int hinic3_init_intr_coalesce(struct hinic3_nic_dev *nic_dev)
+{
+ u64 size;
+
+ if (qp_pending_limit != HINIC3_DEAULT_TXRX_MSIX_PENDING_LIMIT ||
+ qp_coalesc_timer_cfg != HINIC3_DEAULT_TXRX_MSIX_COALESC_TIMER_CFG)
+ nic_dev->intr_coal_set_flag = 1;
+ else
+ nic_dev->intr_coal_set_flag = 0;
+
+ size = sizeof(*nic_dev->intr_coalesce) * nic_dev->max_qps;
+ if (!size) {
+ nic_err(&nic_dev->pdev->dev, "Cannot allocate zero size intr coalesce\n");
+ return -EINVAL;
+ }
+ nic_dev->intr_coalesce = kzalloc(size, GFP_KERNEL);
+ if (!nic_dev->intr_coalesce) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc intr coalesce\n");
+ return -ENOMEM;
+ }
+
+ init_intr_coal_param(nic_dev);
+
+ if (test_bit(HINIC3_INTR_ADAPT, &nic_dev->flags))
+ nic_dev->adaptive_rx_coal = 1;
+ else
+ nic_dev->adaptive_rx_coal = 0;
+
+ return 0;
+}
+
+static void hinic3_free_intr_coalesce(struct hinic3_nic_dev *nic_dev)
+{
+ kfree(nic_dev->intr_coalesce);
+ nic_dev->intr_coalesce = NULL;
+}
+
+static int hinic3_alloc_txrxqs(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int err;
+
+ err = hinic3_alloc_txqs(netdev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc txqs\n");
+ return err;
+ }
+
+ err = hinic3_alloc_rxqs(netdev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc rxqs\n");
+ goto alloc_rxqs_err;
+ }
+
+ err = hinic3_init_intr_coalesce(nic_dev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to init_intr_coalesce\n");
+ goto init_intr_err;
+ }
+
+ return 0;
+
+init_intr_err:
+ hinic3_free_rxqs(netdev);
+
+alloc_rxqs_err:
+ hinic3_free_txqs(netdev);
+
+ return err;
+}
+
+static void hinic3_free_txrxqs(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_free_intr_coalesce(nic_dev);
+ hinic3_free_rxqs(nic_dev->netdev);
+ hinic3_free_txqs(nic_dev->netdev);
+}
+
+static void hinic3_sw_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_free_txrxqs(nic_dev);
+
+ hinic3_clean_mac_list_filter(nic_dev);
+
+ hinic3_del_mac(nic_dev->hwdev, nic_dev->netdev->dev_addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_clear_rss_config(nic_dev);
+ hinic3_dcb_deinit(nic_dev);
+}
+
+static void hinic3_netdev_mtu_init(struct net_device *netdev)
+{
+ /* MTU range: 384 - 9600 */
+#ifdef HAVE_NETDEVICE_MIN_MAX_MTU
+ netdev->min_mtu = HINIC3_MIN_MTU_SIZE;
+ netdev->max_mtu = HINIC3_MAX_JUMBO_FRAME_SIZE;
+#endif
+
+#ifdef HAVE_NETDEVICE_EXTENDED_MIN_MAX_MTU
+ netdev->extended->min_mtu = HINIC3_MIN_MTU_SIZE;
+ netdev->extended->max_mtu = HINIC3_MAX_JUMBO_FRAME_SIZE;
+#endif
+}
+
+static int hinic3_set_default_mac(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 mac_addr[ETH_ALEN];
+ int err = 0;
+
+ err = hinic3_get_default_mac(nic_dev->hwdev, mac_addr);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to get MAC address\n");
+ return err;
+ }
+
+ ether_addr_copy(netdev->dev_addr, mac_addr);
+
+ if (!is_valid_ether_addr(netdev->dev_addr)) {
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nic_err(&nic_dev->pdev->dev,
+ "Invalid MAC address %pM\n",
+ netdev->dev_addr);
+ return -EIO;
+ }
+
+ nic_info(&nic_dev->pdev->dev,
+ "Invalid MAC address %pM, using random\n",
+ netdev->dev_addr);
+ eth_hw_addr_random(netdev);
+ }
+
+ err = hinic3_set_mac(nic_dev->hwdev, netdev->dev_addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+	/* For a VF the PF may have already set the VF MAC address, so that
+	 * case must not be treated as an error during driver probe.
+	 */
+	if (err && err != HINIC3_PF_SET_VF_ALREADY) {
+		nic_err(&nic_dev->pdev->dev, "Failed to set default MAC\n");
+		return err;
+	}
+
+	return 0;
+}
+
+static void hinic3_outband_cfg_init(struct hinic3_nic_dev *nic_dev)
+{
+ u16 outband_default_vid = 0;
+ int err = 0;
+
+ if (!nic_dev->nic_cap.outband_vlan_cfg_en)
+ return;
+
+ err = hinic3_get_outband_vlan_cfg(nic_dev->hwdev, &outband_default_vid);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to get_outband_cfg, err: %d\n", err);
+ return;
+ }
+
+ nic_dev->outband_cfg.outband_default_vid = outband_default_vid;
+}
+
+static int hinic3_sw_init(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u64 nic_features;
+ int err = 0;
+
+ nic_features = hinic3_get_feature_cap(nic_dev->hwdev);
+ /* You can update the features supported by the driver according to the
+ * scenario here
+ */
+ nic_features &= NIC_DRV_DEFAULT_FEATURE;
+ hinic3_update_nic_feature(nic_dev->hwdev, nic_features);
+
+ err = hinic3_dcb_init(nic_dev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to init dcb\n");
+ return -EFAULT;
+ }
+
+ nic_dev->q_params.sq_depth = HINIC3_SQ_DEPTH;
+ nic_dev->q_params.rq_depth = HINIC3_RQ_DEPTH;
+
+ hinic3_try_to_enable_rss(nic_dev);
+
+ err = hinic3_set_default_mac(nic_dev);
+ if (err) {
+ goto set_mac_err;
+ }
+
+ hinic3_netdev_mtu_init(netdev);
+
+ err = hinic3_alloc_txrxqs(nic_dev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc qps\n");
+ goto alloc_qps_err;
+ }
+
+ hinic3_outband_cfg_init(nic_dev);
+
+ return 0;
+
+alloc_qps_err:
+ hinic3_del_mac(nic_dev->hwdev, netdev->dev_addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+
+set_mac_err:
+ hinic3_clear_rss_config(nic_dev);
+
+ return err;
+}
+
+static void hinic3_assign_netdev_ops(struct hinic3_nic_dev *adapter)
+{
+ hinic3_set_netdev_ops(adapter);
+ if (!HINIC3_FUNC_IS_VF(adapter->hwdev))
+ hinic3_set_ethtool_ops(adapter->netdev);
+ else
+ hinic3vf_set_ethtool_ops(adapter->netdev);
+
+ adapter->netdev->watchdog_timeo = WATCHDOG_TIMEOUT * HZ;
+}
+
+static int hinic3_validate_parameters(struct hinic3_lld_dev *lld_dev)
+{
+ struct pci_dev *pdev = lld_dev->pdev;
+
+ /* If weight exceeds the queue depth, the queue resources will be
+ * exhausted, and increasing it has no effect.
+ */
+ if (!poll_weight || poll_weight > HINIC3_MAX_RX_QUEUE_DEPTH) {
+ nic_warn(&pdev->dev, "Module Parameter poll_weight is out of range: [1, %d], resetting to %d\n",
+ HINIC3_MAX_RX_QUEUE_DEPTH, DEFAULT_POLL_WEIGHT);
+ poll_weight = DEFAULT_POLL_WEIGHT;
+ }
+
+	/* check the rx_buff value, default rx_buff is 2KB.
+	 * Valid rx_buff values are 2KB/4KB/8KB/16KB.
+	 */
+ if (rx_buff != RX_BUFF_VALID_2KB && rx_buff != RX_BUFF_VALID_4KB &&
+ rx_buff != RX_BUFF_VALID_8KB && rx_buff != RX_BUFF_VALID_16KB) {
+		nic_warn(&pdev->dev, "Module Parameter rx_buff value %u is out of range, must be 2^n. Valid range is 2 - 16, resetting to %dKB\n",
+ rx_buff, DEFAULT_RX_BUFF_LEN);
+ rx_buff = DEFAULT_RX_BUFF_LEN;
+ }
+
+ return 0;
+}
+
+static void decide_intr_cfg(struct hinic3_nic_dev *nic_dev)
+{
+ set_bit(HINIC3_INTR_ADAPT, &nic_dev->flags);
+}
+
+static void adaptive_configuration_init(struct hinic3_nic_dev *nic_dev)
+{
+ decide_intr_cfg(nic_dev);
+}
+
+static int set_interrupt_moder(struct hinic3_nic_dev *nic_dev, u16 q_id,
+ u8 coalesc_timer_cfg, u8 pending_limt)
+{
+ struct interrupt_info info;
+ int err;
+
+ memset(&info, 0, sizeof(info));
+
+ if (coalesc_timer_cfg == nic_dev->rxqs[q_id].last_coalesc_timer_cfg &&
+ pending_limt == nic_dev->rxqs[q_id].last_pending_limt)
+ return 0;
+
+	/* the netdev is not running or the qp is not in use,
+	 * so there is no need to write the coalesce setting to hw
+	 */
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev) ||
+ q_id >= nic_dev->q_params.num_qps)
+ return 0;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.coalesc_timer_cfg = coalesc_timer_cfg;
+ info.pending_limt = pending_limt;
+ info.msix_index = nic_dev->q_params.irq_cfg[q_id].msix_entry_idx;
+ info.resend_timer_cfg =
+ nic_dev->intr_coalesce[q_id].resend_timer_cfg;
+
+ err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to modify moderation for Queue: %u\n", q_id);
+ } else {
+ nic_dev->rxqs[q_id].last_coalesc_timer_cfg = coalesc_timer_cfg;
+ nic_dev->rxqs[q_id].last_pending_limt = pending_limt;
+ }
+
+ return err;
+}
+
+static void calc_coal_para(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_intr_coal_info *q_coal, u64 rx_rate,
+ u8 *coalesc_timer_cfg, u8 *pending_limt)
+{
+ if (rx_rate < q_coal->pkt_rate_low) {
+ *coalesc_timer_cfg = q_coal->rx_usecs_low;
+ *pending_limt = q_coal->rx_pending_limt_low;
+ } else if (rx_rate > q_coal->pkt_rate_high) {
+ *coalesc_timer_cfg = q_coal->rx_usecs_high;
+ *pending_limt = q_coal->rx_pending_limt_high;
+ } else {
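+		/* interpolate linearly between the low and high settings based on rx_rate */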
+ *coalesc_timer_cfg =
+ (u8)((rx_rate - q_coal->pkt_rate_low) *
+ (q_coal->rx_usecs_high - q_coal->rx_usecs_low) /
+ (q_coal->pkt_rate_high - q_coal->pkt_rate_low) +
+ q_coal->rx_usecs_low);
+
+ *pending_limt =
+ (u8)((rx_rate - q_coal->pkt_rate_low) *
+ (q_coal->rx_pending_limt_high - q_coal->rx_pending_limt_low) /
+ (q_coal->pkt_rate_high - q_coal->pkt_rate_low) +
+ q_coal->rx_pending_limt_low);
+ }
+}
+
+static void update_queue_coal(struct hinic3_nic_dev *nic_dev, u16 qid,
+ u64 rx_rate, u64 avg_pkt_size, u64 tx_rate)
+{
+ struct hinic3_intr_coal_info *q_coal = NULL;
+ u8 coalesc_timer_cfg, pending_limt;
+
+ q_coal = &nic_dev->intr_coalesce[qid];
+
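+	/* adapt coalescing only under sustained load with large packets; otherwise favour low latency */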
+ if (rx_rate > HINIC3_RX_RATE_THRESH && avg_pkt_size > HINIC3_AVG_PKT_SMALL) {
+ calc_coal_para(nic_dev, q_coal, rx_rate, &coalesc_timer_cfg, &pending_limt);
+ } else {
+ coalesc_timer_cfg = HINIC3_LOWEST_LATENCY;
+ pending_limt = q_coal->rx_pending_limt_low;
+ }
+
+ set_interrupt_moder(nic_dev, qid, coalesc_timer_cfg, pending_limt);
+}
+
+void hinic3_auto_moderation_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_nic_dev *nic_dev = container_of(delay,
+ struct hinic3_nic_dev,
+ moderation_task);
+ unsigned long period = (unsigned long)(jiffies -
+ nic_dev->last_moder_jiffies);
+ u64 rx_packets, rx_bytes, rx_pkt_diff, rx_rate, avg_pkt_size;
+ u64 tx_packets, tx_bytes, tx_pkt_diff, tx_rate;
+ u16 qid;
+
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags))
+ return;
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->moderation_task,
+ HINIC3_MODERATONE_DELAY);
+
+ if (!nic_dev->adaptive_rx_coal || !period)
+ return;
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ rx_packets = nic_dev->rxqs[qid].rxq_stats.packets;
+ rx_bytes = nic_dev->rxqs[qid].rxq_stats.bytes;
+ tx_packets = nic_dev->txqs[qid].txq_stats.packets;
+ tx_bytes = nic_dev->txqs[qid].txq_stats.bytes;
+
+ rx_pkt_diff =
+ rx_packets - nic_dev->rxqs[qid].last_moder_packets;
+ avg_pkt_size = rx_pkt_diff ?
+ ((unsigned long)(rx_bytes -
+ nic_dev->rxqs[qid].last_moder_bytes)) /
+ rx_pkt_diff : 0;
+
+ rx_rate = rx_pkt_diff * HZ / period;
+ tx_pkt_diff =
+ tx_packets - nic_dev->txqs[qid].last_moder_packets;
+ tx_rate = tx_pkt_diff * HZ / period;
+
+ update_queue_coal(nic_dev, qid, rx_rate, avg_pkt_size,
+ tx_rate);
+
+ nic_dev->rxqs[qid].last_moder_packets = rx_packets;
+ nic_dev->rxqs[qid].last_moder_bytes = rx_bytes;
+ nic_dev->txqs[qid].last_moder_packets = tx_packets;
+ nic_dev->txqs[qid].last_moder_bytes = tx_bytes;
+ }
+
+ nic_dev->last_moder_jiffies = jiffies;
+}
+
+static void hinic3_periodic_work_handler(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_nic_dev *nic_dev = container_of(delay, struct hinic3_nic_dev, periodic_work);
+
+ if (test_and_clear_bit(EVENT_WORK_TX_TIMEOUT, &nic_dev->event_flag))
+ hinic3_fault_event_report(nic_dev->hwdev, HINIC3_FAULT_SRC_TX_TIMEOUT,
+ FAULT_LEVEL_SERIOUS_FLR);
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->periodic_work, HZ);
+}
+
+static void hinic3_vport_stats_work_handler(struct work_struct *work)
+{
+	int err;
+	struct hinic3_vport_stats vport_stats = {0};
+	struct delayed_work *delay = to_delayed_work(work);
+	struct hinic3_nic_dev *nic_dev = container_of(delay, struct hinic3_nic_dev,
+						      vport_stats_work);
+
+	err = hinic3_get_vport_stats(nic_dev->hwdev,
+				     hinic3_global_func_id(nic_dev->hwdev),
+				     &vport_stats);
+	if (err)
+		nic_err(&nic_dev->pdev->dev, "Failed to get dropped stats from fw\n");
+	else
+		nic_dev->vport_stats.rx_discard_vport = vport_stats.rx_discard_vport;
+
+	queue_delayed_work(nic_dev->workq, &nic_dev->vport_stats_work, HZ);
+}
+
+static void free_nic_dev_vram(struct hinic3_nic_dev *nic_dev)
+{
+	int is_use_vram = get_use_vram_flag();
+
+ if (is_use_vram != 0)
+ hi_vram_kfree((void *)nic_dev->nic_vram, nic_dev->nic_vram_name,
+ sizeof(struct hinic3_vram));
+ else
+ kfree(nic_dev->nic_vram);
+ nic_dev->nic_vram = NULL;
+}
+
+static void free_nic_dev(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_deinit_nic_prof_adapter(nic_dev);
+ destroy_workqueue(nic_dev->workq);
+ kfree(nic_dev->vlan_bitmap);
+ nic_dev->vlan_bitmap = NULL;
+ free_nic_dev_vram(nic_dev);
+}
+
+static int setup_nic_dev(struct net_device *netdev,
+ struct hinic3_lld_dev *lld_dev)
+{
+ struct pci_dev *pdev = lld_dev->pdev;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ char *netdev_name_fmt = NULL;
+ u32 page_num;
+ u16 func_id;
+ int ret;
+ int is_in_kexec = vram_get_kexec_flag();
+ int is_use_vram = get_use_vram_flag();
+
+ nic_dev = (struct hinic3_nic_dev *)netdev_priv(netdev);
+ nic_dev->netdev = netdev;
+ SET_NETDEV_DEV(netdev, &pdev->dev);
+ nic_dev->lld_dev = lld_dev;
+ nic_dev->hwdev = lld_dev->hwdev;
+ nic_dev->pdev = pdev;
+ nic_dev->poll_weight = (int)poll_weight;
+ nic_dev->msg_enable = DEFAULT_MSG_ENABLE;
+ nic_dev->lro_replenish_thld = lro_replenish_thld;
+ nic_dev->rx_buff_len = (u16)(rx_buff * CONVERT_UNIT);
+ nic_dev->dma_rx_buff_size = RX_BUFF_NUM_PER_PAGE * nic_dev->rx_buff_len;
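+	/* page_order is log2 of the pages needed for one DMA rx buffer (0 when it fits in a single page) */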
+ page_num = nic_dev->dma_rx_buff_size / PAGE_SIZE;
+ nic_dev->page_order = page_num > 0 ? ilog2(page_num) : 0;
+ nic_dev->page_pool_enabled = page_pool_enabled;
+ nic_dev->outband_cfg.outband_default_vid = 0;
+
+	/* a value other than 0 indicates hot replace */
+ if (is_use_vram != 0) {
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ ret = snprintf(nic_dev->nic_vram_name,
+ VRAM_NAME_MAX_LEN,
+ "%s%u", VRAM_NIC_VRAM, func_id);
+ if (ret < 0) {
+ nic_err(&pdev->dev, "NIC vram name snprintf failed, ret:%d.\n",
+ ret);
+ return -EINVAL;
+ }
+
+ nic_dev->nic_vram = (struct hinic3_vram *)hi_vram_kalloc(nic_dev->nic_vram_name,
+ sizeof(struct hinic3_vram));
+ if (!nic_dev->nic_vram) {
+ nic_err(&pdev->dev, "Failed to allocate nic vram\n");
+ return -ENOMEM;
+ }
+
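+		/* on a normal probe save the current MTU to vram; when coming back from kexec (hot replace) restore it */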
+ if (is_in_kexec == 0)
+ nic_dev->nic_vram->vram_mtu = netdev->mtu;
+ else
+ netdev->mtu = nic_dev->nic_vram->vram_mtu;
+ } else {
+ nic_dev->nic_vram = kzalloc(sizeof(struct hinic3_vram),
+ GFP_KERNEL);
+ if (!nic_dev->nic_vram) {
+ nic_err(&pdev->dev, "Failed to allocate nic vram\n");
+ return -ENOMEM;
+ }
+ nic_dev->nic_vram->vram_mtu = netdev->mtu;
+ }
+
+ mutex_init(&nic_dev->nic_mutex);
+
+ nic_dev->vlan_bitmap = kzalloc(VLAN_BITMAP_SIZE(nic_dev), GFP_KERNEL);
+ if (!nic_dev->vlan_bitmap) {
+ nic_err(&pdev->dev, "Failed to allocate vlan bitmap\n");
+ ret = -ENOMEM;
+ goto vlan_bitmap_error;
+ }
+
+ nic_dev->workq = create_singlethread_workqueue(HINIC3_NIC_DEV_WQ_NAME);
+ if (!nic_dev->workq) {
+ nic_err(&pdev->dev, "Failed to initialize nic workqueue\n");
+ ret = -ENOMEM;
+ goto create_workq_error;
+ }
+
+ INIT_DELAYED_WORK(&nic_dev->periodic_work,
+ hinic3_periodic_work_handler);
+ INIT_DELAYED_WORK(&nic_dev->rxq_check_work,
+ hinic3_rxq_check_work_handler);
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ INIT_DELAYED_WORK(&nic_dev->vport_stats_work,
+ hinic3_vport_stats_work_handler);
+
+ INIT_LIST_HEAD(&nic_dev->uc_filter_list);
+ INIT_LIST_HEAD(&nic_dev->mc_filter_list);
+ INIT_WORK(&nic_dev->rx_mode_work, hinic3_set_rx_mode_work);
+
+ INIT_LIST_HEAD(&nic_dev->rx_flow_rule.rules);
+ INIT_LIST_HEAD(&nic_dev->tcam.tcam_list);
+ INIT_LIST_HEAD(&nic_dev->tcam.tcam_dynamic_info.tcam_dynamic_list);
+
+ hinic3_init_nic_prof_adapter(nic_dev);
+
+ netdev_name_fmt = hinic3_get_dft_netdev_name_fmt(nic_dev);
+ if (netdev_name_fmt) {
+ ret = strscpy(netdev->name, netdev_name_fmt, IFNAMSIZ);
+ if (ret < 0)
+ goto get_netdev_name_error;
+ }
+
+ return 0;
+
+get_netdev_name_error:
+ hinic3_deinit_nic_prof_adapter(nic_dev);
+ destroy_workqueue(nic_dev->workq);
+create_workq_error:
+ kfree(nic_dev->vlan_bitmap);
+ nic_dev->vlan_bitmap = NULL;
+vlan_bitmap_error:
+ free_nic_dev_vram(nic_dev);
+ return ret;
+}
+
+static int hinic3_set_default_hw_feature(struct hinic3_nic_dev *nic_dev)
+{
+ int err;
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ hinic3_dcb_reset_hw_config(nic_dev);
+
+ if (set_link_status_follow < HINIC3_LINK_FOLLOW_STATUS_MAX) {
+ err = hinic3_set_link_status_follow(nic_dev->hwdev,
+ set_link_status_follow);
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ nic_warn(&nic_dev->pdev->dev,
+ "Current version of firmware doesn't support to set link status follow port status\n");
+ }
+ }
+
+ err = hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to set nic features\n");
+ return err;
+ }
+
+ /* enable all hw features in netdev->features */
+ err = hinic3_set_hw_features(nic_dev);
+ if (err) {
+ hinic3_update_nic_feature(nic_dev->hwdev, 0);
+ hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+ return err;
+ }
+
+ if (HINIC3_SUPPORT_RXQ_RECOVERY(nic_dev->hwdev))
+ set_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags);
+
+ return 0;
+}
+
+static void hinic3_bond_init(struct hinic3_nic_dev *nic_dev)
+{
+	u32 bond_id = HINIC3_INVALID_BOND_ID;
+	int err;
+
+	err = hinic3_create_bond(nic_dev->hwdev, &bond_id);
+	if (err != 0)
+		goto bond_init_failed;
+
+	/* an unchanged bond id means this pf is not the bond active pf, nothing to log */
+	if (bond_id == HINIC3_INVALID_BOND_ID)
+		return;
+
+	err = hinic3_open_close_bond(nic_dev->hwdev, true);
+	if (err != 0) {
+		hinic3_delete_bond(nic_dev->hwdev);
+		goto bond_init_failed;
+	}
+
+	nic_info(&nic_dev->pdev->dev, "Bond %u init success\n", bond_id);
+	return;
+
+bond_init_failed:
+	nic_err(&nic_dev->pdev->dev, "Bond init failed\n");
+}
+
+static int nic_probe(struct hinic3_lld_dev *lld_dev, void **uld_dev,
+ char *uld_dev_name)
+{
+ struct pci_dev *pdev = lld_dev->pdev;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct net_device *netdev = NULL;
+ u16 max_qps, glb_func_id;
+ int err;
+
+ if (!hinic3_support_nic(lld_dev->hwdev, NULL)) {
+		nic_info(&pdev->dev, "HW doesn't support nic\n");
+ return 0;
+ }
+
+ nic_info(&pdev->dev, "NIC service probe begin\n");
+
+ err = hinic3_validate_parameters(lld_dev);
+ if (err) {
+ err = -EINVAL;
+ goto err_out;
+ }
+
+ glb_func_id = hinic3_global_func_id(lld_dev->hwdev);
+ err = hinic3_func_reset(lld_dev->hwdev, glb_func_id, HINIC3_NIC_RES,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(&pdev->dev, "Failed to reset function\n");
+ goto err_out;
+ }
+
+ err = hinic3_get_dev_cap(lld_dev->hwdev);
+ if (err != 0) {
+ nic_err(&pdev->dev, "Failed to get dev cap\n");
+ goto err_out;
+ }
+
+ max_qps = hinic3_func_max_nic_qnum(lld_dev->hwdev);
+ netdev = alloc_etherdev_mq(sizeof(*nic_dev), max_qps);
+ if (!netdev) {
+ nic_err(&pdev->dev, "Failed to allocate ETH device\n");
+ err = -ENOMEM;
+ goto err_out;
+ }
+
+ nic_dev = (struct hinic3_nic_dev *)netdev_priv(netdev);
+ err = setup_nic_dev(netdev, lld_dev);
+ if (err)
+ goto setup_dev_err;
+
+ adaptive_configuration_init(nic_dev);
+
+ /* get nic cap from hw */
+ hinic3_support_nic(lld_dev->hwdev, &nic_dev->nic_cap);
+
+ err = hinic3_init_nic_hwdev(nic_dev->hwdev, pdev, &pdev->dev,
+ nic_dev->rx_buff_len);
+ if (err) {
+ nic_err(&pdev->dev, "Failed to init nic hwdev\n");
+ goto init_nic_hwdev_err;
+ }
+
+ err = hinic3_sw_init(nic_dev);
+ if (err)
+ goto sw_init_err;
+
+ hinic3_assign_netdev_ops(nic_dev);
+ netdev_feature_init(netdev);
+#ifdef HAVE_UDP_TUNNEL_NIC_INFO
+ netdev->udp_tunnel_nic_info = &hinic3_udp_tunnels;
+#endif /* HAVE_UDP_TUNNEL_NIC_INFO */
+ err = hinic3_set_default_hw_feature(nic_dev);
+ if (err)
+ goto set_features_err;
+
+ if (hinic3_get_bond_create_mode(lld_dev->hwdev) != 0) {
+ hinic3_bond_init(nic_dev);
+ }
+
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+ hinic3_register_notifier(nic_dev);
+#endif
+
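+	/* PFs with the bifurcation resource subsystem id are not registered as netdevs */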
+ if (pdev->subsystem_device != BIFUR_RESOURCE_PF_SSID) {
+ err = register_netdev(netdev);
+ if (err) {
+ nic_err(&pdev->dev, "Failed to register netdev\n");
+ err = -ENOMEM;
+ goto netdev_err;
+ }
+ }
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->periodic_work, HZ);
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ queue_delayed_work(nic_dev->workq,
+ &nic_dev->vport_stats_work, HZ);
+
+ netif_carrier_off(netdev);
+
+ *uld_dev = nic_dev;
+ nicif_info(nic_dev, probe, netdev, "Register netdev succeed\n");
+ nic_info(&pdev->dev, "NIC service probed\n");
+
+ return 0;
+
+netdev_err:
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+ hinic3_unregister_notifier(nic_dev);
+#endif
+ hinic3_update_nic_feature(nic_dev->hwdev, 0);
+ hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+
+set_features_err:
+ hinic3_sw_deinit(nic_dev);
+
+sw_init_err:
+ hinic3_free_nic_hwdev(nic_dev->hwdev);
+
+init_nic_hwdev_err:
+ free_nic_dev(nic_dev);
+setup_dev_err:
+ free_netdev(netdev);
+
+err_out:
+ nic_err(&pdev->dev, "NIC service probe failed\n");
+
+ return err;
+}
+
+static void hinic3_bond_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ int ret = 0;
+
+ ret = hinic3_open_close_bond(nic_dev->hwdev, false);
+ if (ret != 0) {
+ goto bond_deinit_failed;
+ }
+
+ ret = hinic3_delete_bond(nic_dev->hwdev);
+ if (ret != 0) {
+ goto bond_deinit_failed;
+ }
+
+ nic_info(&nic_dev->pdev->dev, "Bond deinit success\n");
+ return;
+
+bond_deinit_failed:
+ nic_err(&nic_dev->pdev->dev, "Bond deinit failed\n");
+}
+
+static void nic_remove(struct hinic3_lld_dev *lld_dev, void *adapter)
+{
+ struct hinic3_nic_dev *nic_dev = adapter;
+ struct net_device *netdev = NULL;
+
+ if (!nic_dev || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return;
+
+ nic_info(&lld_dev->pdev->dev, "NIC service remove begin\n");
+
+ netdev = nic_dev->netdev;
+
+ if (lld_dev->pdev->subsystem_device != BIFUR_RESOURCE_PF_SSID) {
+ unregister_netdev(netdev);
+ }
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+ hinic3_unregister_notifier(nic_dev);
+#endif
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ cancel_delayed_work_sync(&nic_dev->vport_stats_work);
+
+ cancel_delayed_work_sync(&nic_dev->periodic_work);
+ cancel_delayed_work_sync(&nic_dev->rxq_check_work);
+ cancel_work_sync(&nic_dev->rx_mode_work);
+ destroy_workqueue(nic_dev->workq);
+
+ hinic3_flush_rx_flow_rule(nic_dev);
+
+ if (hinic3_get_bond_create_mode(lld_dev->hwdev) != 0) {
+ hinic3_bond_deinit(nic_dev);
+ }
+
+ hinic3_update_nic_feature(nic_dev->hwdev, 0);
+ hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+
+ hinic3_sw_deinit(nic_dev);
+
+ hinic3_free_nic_hwdev(nic_dev->hwdev);
+
+ hinic3_deinit_nic_prof_adapter(nic_dev);
+ kfree(nic_dev->vlan_bitmap);
+ nic_dev->vlan_bitmap = NULL;
+
+ free_netdev(netdev);
+
+ nic_info(&lld_dev->pdev->dev, "NIC service removed\n");
+}
+
+static void sriov_state_change(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_sriov_state_info *info)
+{
+ if (!info->enable)
+ hinic3_clear_vfs_info(nic_dev->hwdev);
+}
+
+static void hinic3_port_module_event_handler(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_event_info *event)
+{
+ const char *g_hinic3_module_link_err[LINK_ERR_NUM] = { "Unrecognized module" };
+ struct hinic3_port_module_event *module_event = (void *)event->event_data;
+ enum port_module_event_type type = module_event->type;
+ enum link_err_type err_type = module_event->err_type;
+
+ switch (type) {
+ case HINIC3_PORT_MODULE_CABLE_PLUGGED:
+ case HINIC3_PORT_MODULE_CABLE_UNPLUGGED:
+ nicif_info(nic_dev, link, nic_dev->netdev,
+ "Port module event: Cable %s\n",
+ type == HINIC3_PORT_MODULE_CABLE_PLUGGED ?
+ "plugged" : "unplugged");
+ break;
+ case HINIC3_PORT_MODULE_LINK_ERR:
+ if (err_type >= LINK_ERR_NUM) {
+ nicif_info(nic_dev, link, nic_dev->netdev,
+ "Link failed, Unknown error type: 0x%x\n",
+ err_type);
+ } else {
+ nicif_info(nic_dev, link, nic_dev->netdev,
+ "Link failed, error type: 0x%x: %s\n",
+ err_type,
+ g_hinic3_module_link_err[err_type]);
+ }
+ break;
+ default:
+ nicif_err(nic_dev, link, nic_dev->netdev,
+ "Unknown port module type %d\n", type);
+ break;
+ }
+}
+
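+/* link events are skipped only for a slave function with vroce enabled and a bond already created */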
+bool hinic3_need_proc_link_event(struct hinic3_lld_dev *lld_dev)
+{
+ int ret = 0;
+ u16 func_id;
+ u8 roce_enable = false;
+ bool is_slave_func = false;
+ struct hinic3_hw_bond_infos hw_bond_infos = {0};
+
+ if (!lld_dev)
+ return false;
+
+	/* non-slave functions must handle link down events */
+ ret = hinic3_is_slave_func(lld_dev->hwdev, &is_slave_func);
+ if (ret != 0) {
+		nic_err(&lld_dev->pdev->dev, "NIC failed to get slave function info\n");
+ return true;
+ }
+
+ if (!is_slave_func)
+ return true;
+
+	/* if vroce is not enabled, link down events must be handled */
+ func_id = hinic3_global_func_id(lld_dev->hwdev);
+ ret = hinic3_get_func_vroce_enable(lld_dev->hwdev, func_id,
+ &roce_enable);
+ if (ret != 0)
+ return true;
+
+ if (!roce_enable)
+ return true;
+
+	/* if no bond has been created, link down events must be handled */
+ hw_bond_infos.bond_id = HINIC_OVS_BOND_DEFAULT_ID;
+
+ ret = hinic3_get_hw_bond_infos(lld_dev->hwdev, &hw_bond_infos,
+ HINIC3_CHANNEL_COMM);
+ if (ret != 0) {
+ pr_err("[ROCE, ERR] Get chipf bond info failed (%d)\n", ret);
+ return true;
+ }
+
+ if (!hw_bond_infos.valid)
+ return true;
+
+ return false;
+}
+
+bool hinic3_need_proc_bond_event(struct hinic3_lld_dev *lld_dev)
+{
+ return !hinic3_need_proc_link_event(lld_dev);
+}
+
+static void hinic_porc_bond_state_change(struct hinic3_lld_dev *lld_dev,
+ void *adapter,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_nic_dev *nic_dev = adapter;
+
+ if (!nic_dev || !event || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return;
+
+ switch (HINIC3_SRV_EVENT_TYPE(event->service, event->type)) {
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_BOND_DOWN):
+ if (!hinic3_need_proc_bond_event(lld_dev)) {
+			nic_info(&lld_dev->pdev->dev, "NIC doesn't need to process bond event\n");
+ return;
+ }
+ nic_info(&lld_dev->pdev->dev, "NIC proc bond down\n");
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_BOND_UP):
+ if (!hinic3_need_proc_bond_event(lld_dev)) {
+			nic_info(&lld_dev->pdev->dev, "NIC doesn't need to process bond event\n");
+ return;
+ }
+ nic_info(&lld_dev->pdev->dev, "NIC proc bond up\n");
+ hinic3_link_status_change(nic_dev, true);
+ break;
+ default:
+ break;
+ }
+}
+
+static void hinic3_outband_cfg_event_handler(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_outband_cfg_info *info)
+{
+	int err = 0;
+
+ if (!nic_dev || !info || !hinic3_support_nic(nic_dev->hwdev, NULL)) {
+ pr_err("Outband cfg event invalid param\n");
+ return;
+ }
+
+ if (hinic3_func_type(nic_dev->hwdev) != TYPE_VF &&
+ info->func_id >= NIC_MAX_PF_NUM) {
+ err = hinic3_notify_vf_outband_cfg(nic_dev->hwdev,
+ info->func_id,
+ info->outband_default_vid);
+		if (err)
+			nic_err(&nic_dev->pdev->dev,
+				"Outband cfg event notify vf err: %d, func_id: 0x%x, vid: 0x%x\n",
+				err, info->func_id, info->outband_default_vid);
+ return;
+ }
+
+ nic_info(&nic_dev->pdev->dev,
+ "Change outband default vid from %u to %u\n",
+ nic_dev->outband_cfg.outband_default_vid,
+ info->outband_default_vid);
+
+ nic_dev->outband_cfg.outband_default_vid = info->outband_default_vid;
+}
+
+static void nic_event(struct hinic3_lld_dev *lld_dev, void *adapter,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_nic_dev *nic_dev = adapter;
+ struct hinic3_fault_event *fault = NULL;
+
+ if (!nic_dev || !event || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return;
+
+ switch (HINIC3_SRV_EVENT_TYPE(event->service, event->type)) {
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_LINK_DOWN):
+ if (!hinic3_need_proc_link_event(lld_dev)) {
+			nic_info(&lld_dev->pdev->dev, "NIC doesn't need to process link event\n");
+ return;
+ }
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_LINK_UP):
+ hinic3_link_status_change(nic_dev, true);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_BOND_DOWN):
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_BOND_UP):
+ hinic_porc_bond_state_change(lld_dev, adapter, event);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_PORT_MODULE_EVENT):
+ hinic3_port_module_event_handler(nic_dev, event);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_OUTBAND_CFG):
+ hinic3_outband_cfg_event_handler(nic_dev, (void *)event->event_data);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_SRIOV_STATE_CHANGE):
+ sriov_state_change(nic_dev, (void *)event->event_data);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_FAULT):
+ fault = (void *)event->event_data;
+ if (fault->fault_level == FAULT_LEVEL_SERIOUS_FLR &&
+ fault->event.chip.func_id == hinic3_global_func_id(lld_dev->hwdev))
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_PCIE_LINK_DOWN):
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_HEART_LOST):
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_MGMT_WATCHDOG):
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ default:
+ break;
+ }
+}
+
+struct net_device *hinic3_get_netdev_by_lld(struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+
+ if (!lld_dev || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return NULL;
+
+ nic_dev = hinic3_get_uld_dev_unsafe(lld_dev, SERVICE_T_NIC);
+ if (!nic_dev) {
+ nic_err(&lld_dev->pdev->dev,
+ "There's no net device attached on the pci device");
+ return NULL;
+ }
+
+ return nic_dev->netdev;
+}
+EXPORT_SYMBOL(hinic3_get_netdev_by_lld);
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_netdev(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+
+ if (!netdev || !hinic3_is_netdev_ops_match(netdev))
+ return NULL;
+
+ nic_dev = netdev_priv(netdev);
+ if (!nic_dev)
+ return NULL;
+
+ return nic_dev->lld_dev;
+}
+EXPORT_SYMBOL(hinic3_get_lld_dev_by_netdev);
+
+struct hinic3_uld_info g_nic_uld_info = {
+ .probe = nic_probe,
+ .remove = nic_remove,
+ .suspend = NULL,
+ .resume = NULL,
+ .event = nic_event,
+ .ioctl = nic_ioctl,
+};
+
+struct hinic3_uld_info *get_nic_uld_info(void)
+{
+ return &g_nic_uld_info;
+}
+
+#define HINIC3_NIC_DRV_DESC "Intelligent Network Interface Card Driver"
+
+static __init int hinic3_nic_lld_init(void)
+{
+ int err;
+
+ pr_info("%s - version %s\n", HINIC3_NIC_DRV_DESC,
+ HINIC3_NIC_DRV_VERSION);
+
+ err = hinic3_lld_init();
+ if (err) {
+ pr_err("SDK init failed.\n");
+ return err;
+ }
+
+ err = hinic3_register_uld(SERVICE_T_NIC, &g_nic_uld_info);
+ if (err) {
+ pr_err("Register hinic3 uld failed\n");
+ hinic3_lld_exit();
+ return err;
+ }
+
+ err = hinic3_module_pre_init();
+ if (err) {
+ pr_err("Init custom failed\n");
+ hinic3_unregister_uld(SERVICE_T_NIC);
+ hinic3_lld_exit();
+ return err;
+ }
+
+ return 0;
+}
+
+static __exit void hinic3_nic_lld_exit(void)
+{
+ hinic3_unregister_uld(SERVICE_T_NIC);
+
+ hinic3_module_post_exit();
+
+ hinic3_lld_exit();
+}
+
+module_init(hinic3_nic_lld_init);
+module_exit(hinic3_nic_lld_exit);
+
+MODULE_AUTHOR("Huawei Technologies CO., Ltd");
+MODULE_DESCRIPTION(HINIC3_NIC_DRV_DESC);
+MODULE_VERSION(HINIC3_NIC_DRV_VERSION);
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h b/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
new file mode 100644
index 0000000..522518d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
@@ -0,0 +1,1298 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef NIC_MPU_CMD_DEFS_H
+#define NIC_MPU_CMD_DEFS_H
+
+#include "nic_cfg_comm.h"
+#include "mpu_cmd_base_defs.h"
+
+#ifndef ETH_ALEN
+#define ETH_ALEN 6
+#endif
+
+#define HINIC3_CMD_OP_SET 1
+#define HINIC3_CMD_OP_GET 0
+
+#define HINIC3_CMD_OP_ADD 1
+#define HINIC3_CMD_OP_DEL 0
+
+#define NIC_TCAM_BLOCK_LARGE_NUM 256
+#define NIC_TCAM_BLOCK_LARGE_SIZE 16
+
+#ifndef BIT
+#define BIT(n) (1UL << (n))
+#endif
+
+enum nic_feature_cap {
+ NIC_F_CSUM = BIT(0),
+ NIC_F_SCTP_CRC = BIT(1),
+ NIC_F_TSO = BIT(2),
+ NIC_F_LRO = BIT(3),
+ NIC_F_UFO = BIT(4),
+ NIC_F_RSS = BIT(5),
+ NIC_F_RX_VLAN_FILTER = BIT(6),
+ NIC_F_RX_VLAN_STRIP = BIT(7),
+ NIC_F_TX_VLAN_INSERT = BIT(8),
+ NIC_F_VXLAN_OFFLOAD = BIT(9),
+ NIC_F_IPSEC_OFFLOAD = BIT(10),
+ NIC_F_FDIR = BIT(11),
+ NIC_F_PROMISC = BIT(12),
+ NIC_F_ALLMULTI = BIT(13),
+ NIC_F_XSFP_REPORT = BIT(14),
+ NIC_F_VF_MAC = BIT(15),
+ NIC_F_RATE_LIMIT = BIT(16),
+ NIC_F_RXQ_RECOVERY = BIT(17),
+};
+
+#define NIC_F_ALL_MASK 0x3FFFF /* enable all features */
+
+struct hinic3_mgmt_msg_head {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+};
+
+#define NIC_MAX_FEATURE_QWORD 4
+struct hinic3_cmd_feature_nego {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: set, 0: get */
+ u8 rsvd;
+ u64 s_feature[NIC_MAX_FEATURE_QWORD];
+};
+
+struct hinic3_port_mac_set {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 mac[ETH_ALEN];
+};
+
+struct hinic3_port_mac_update {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 old_mac[ETH_ALEN];
+ u16 rsvd2;
+ u8 new_mac[ETH_ALEN];
+};
+
+struct hinic3_vport_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+struct hinic3_port_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+#define HINIC3_SET_PORT_CAR_PROFILE 0
+#define HINIC3_SET_PORT_CAR_STATE 1
+
+struct hinic3_port_car_info {
+ u32 cir; /* unit: kbps, range:[1,400*1000*1000], i.e. 1Kbps~400Gbps(400M*kbps) */
+ u32 xir; /* unit: kbps, range:[1,400*1000*1000], i.e. 1Kbps~400Gbps(400M*kbps) */
+ u32 cbs; /* unit: Byte, range:[1,320*1000*1000], i.e. 1byte~2560Mbit */
+ u32 xbs; /* unit: Byte, range:[1,320*1000*1000], i.e. 1byte~2560Mbit */
+};
+
+struct hinic3_cmd_set_port_car {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode; /* 0--set car profile, 1--set car state */
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd;
+
+ struct hinic3_port_car_info car;
+};
+
+struct hinic3_cmd_clear_qp_resource {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_cmd_cache_out_qp_resource {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_port_stats_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_vport_stats {
+ u64 tx_unicast_pkts_vport;
+ u64 tx_unicast_bytes_vport;
+ u64 tx_multicast_pkts_vport;
+ u64 tx_multicast_bytes_vport;
+ u64 tx_broadcast_pkts_vport;
+ u64 tx_broadcast_bytes_vport;
+
+ u64 rx_unicast_pkts_vport;
+ u64 rx_unicast_bytes_vport;
+ u64 rx_multicast_pkts_vport;
+ u64 rx_multicast_bytes_vport;
+ u64 rx_broadcast_pkts_vport;
+ u64 rx_broadcast_bytes_vport;
+
+ u64 tx_discard_vport;
+ u64 rx_discard_vport;
+ u64 tx_err_vport;
+ u64 rx_err_vport;
+};
+
+struct hinic3_phy_fpga_port_stats {
+ u64 mac_rx_total_octs_port;
+ u64 mac_tx_total_octs_port;
+ u64 mac_rx_under_frame_pkts_port;
+ u64 mac_rx_frag_pkts_port;
+ u64 mac_rx_64_oct_pkts_port;
+ u64 mac_rx_127_oct_pkts_port;
+ u64 mac_rx_255_oct_pkts_port;
+ u64 mac_rx_511_oct_pkts_port;
+ u64 mac_rx_1023_oct_pkts_port;
+ u64 mac_rx_max_oct_pkts_port;
+ u64 mac_rx_over_oct_pkts_port;
+ u64 mac_tx_64_oct_pkts_port;
+ u64 mac_tx_127_oct_pkts_port;
+ u64 mac_tx_255_oct_pkts_port;
+ u64 mac_tx_511_oct_pkts_port;
+ u64 mac_tx_1023_oct_pkts_port;
+ u64 mac_tx_max_oct_pkts_port;
+ u64 mac_tx_over_oct_pkts_port;
+ u64 mac_rx_good_pkts_port;
+ u64 mac_rx_crc_error_pkts_port;
+ u64 mac_rx_broadcast_ok_port;
+ u64 mac_rx_multicast_ok_port;
+ u64 mac_rx_mac_frame_ok_port;
+ u64 mac_rx_length_err_pkts_port;
+ u64 mac_rx_vlan_pkts_port;
+ u64 mac_rx_pause_pkts_port;
+ u64 mac_rx_unknown_mac_frame_port;
+ u64 mac_tx_good_pkts_port;
+ u64 mac_tx_broadcast_ok_port;
+ u64 mac_tx_multicast_ok_port;
+ u64 mac_tx_underrun_pkts_port;
+ u64 mac_tx_mac_frame_ok_port;
+ u64 mac_tx_vlan_pkts_port;
+ u64 mac_tx_pause_pkts_port;
+};
+
+struct hinic3_port_stats {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ struct hinic3_phy_fpga_port_stats stats;
+};
+
+struct hinic3_cmd_vport_stats {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u32 stats_size;
+ u32 rsvd1;
+ struct hinic3_vport_stats stats;
+ u64 rsvd2[6];
+};
+
+struct hinic3_cmd_qpn {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 base_qpn;
+};
+
+enum hinic3_func_tbl_cfg_bitmap {
+ FUNC_CFG_INIT,
+ FUNC_CFG_RX_BUF_SIZE,
+ FUNC_CFG_MTU,
+};
+
+struct hinic3_func_tbl_cfg {
+ u16 rx_wqe_buf_size;
+ u16 mtu;
+ u32 rsvd[9];
+};
+
+struct hinic3_cmd_set_func_tbl {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+
+ u32 cfg_bitmap;
+ struct hinic3_func_tbl_cfg tbl_cfg;
+};
+
+struct hinic3_cmd_cons_idx_attr {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_idx;
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u32 rsvd;
+ u64 ci_addr;
+};
+
+union sm_tbl_args {
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } mac_table_arg;
+ struct {
+ u32 er_id;
+ u32 vlan_id;
+ } vlan_elb_table_arg;
+ struct {
+ u32 func_id;
+ } vlan_filter_arg;
+ struct {
+ u32 mc_id;
+ } mc_elb_arg;
+ struct {
+ u32 func_id;
+ } func_tbl_arg;
+ struct {
+ u32 port_id;
+ } port_tbl_arg;
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } fdir_io_table_arg;
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } flexq_table_arg;
+ u32 args[4];
+};
+
+#define DFX_SM_TBL_BUF_MAX (768)
+
+struct nic_cmd_dfx_sm_table {
+ struct hinic3_mgmt_msg_head msg_head;
+ u32 tbl_type;
+ union sm_tbl_args args;
+ u8 tbl_buf[DFX_SM_TBL_BUF_MAX];
+};
+
+struct hinic3_cmd_vlan_offload {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 vlan_offload;
+ u8 rsvd1[5];
+};
+
+/* ucode capture cfg info */
+struct nic_cmd_capture_info {
+ struct hinic3_mgmt_msg_head msg_head;
+ u32 op_type;
+ u32 func_port;
+ u32 is_en_trx;
+ u32 offset_cos;
+ u32 data_vlan;
+};
+
+struct hinic3_cmd_lro_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 lro_ipv4_en;
+ u8 lro_ipv6_en;
+ u8 lro_max_pkt_len; /* unit is 1K */
+ u8 resv2[13];
+};
+
+struct hinic3_cmd_lro_timer {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 opcode; /* 1: set timer value, 0: get timer value */
+ u8 rsvd1;
+ u16 rsvd2;
+ u32 timer;
+};
+
+struct hinic3_cmd_local_lro_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 0: get state, 1: set state */
+ u8 state; /* 0: disable, 1: enable */
+};
+
+struct hinic3_cmd_vf_vlan_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 vlan_id;
+ u8 qos;
+ u8 rsvd2[5];
+};
+
+struct hinic3_cmd_spoofchk_set {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 state;
+ u8 rsvd1;
+};
+
+struct hinic3_cmd_tx_rate_cfg {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 min_rate;
+ u32 max_rate;
+ u8 rsvd2[8];
+};
+
+struct hinic3_cmd_port_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+ u16 rsvd2;
+ u32 rsvd3[4];
+};
+
+struct hinic3_cmd_register_vf {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 op_register; /* 0 - unregister, 1 - register */
+ u8 rsvd1[3];
+ u32 support_extra_feature;
+ u8 rsvd2[32];
+};
+
+struct hinic3_cmd_link_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 state;
+ u16 rsvd1;
+};
+
+struct hinic3_cmd_vlan_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 vlan_id;
+ u16 rsvd2;
+};
+
+/* set vlan filter */
+struct hinic3_cmd_set_vlan_filter {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 resvd[2];
+ u32 vlan_filter_ctrl; /* bit0:vlan filter en; bit1:broadcast_filter_en */
+};
+
+struct hinic3_cmd_link_ksettings_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+
+ u32 valid_bitmap;
+ u8 speed; /* enum nic_speed_level */
+ u8 autoneg; /* 0 - off, 1 - on */
+ u8 fec; /* 0 - RSFEC, 1 - BASEFEC, 2 - NOFEC */
+ u8 rsvd2[21]; /* reserved for duplex, port, etc. */
+};
+
+struct mpu_lt_info {
+ u8 node;
+ u8 inst;
+ u8 entry_size;
+ u8 rsvd;
+ u32 lt_index;
+ u32 offset;
+ u32 len;
+};
+
+struct nic_mpu_lt_opera {
+ struct hinic3_mgmt_msg_head msg_head;
+ struct mpu_lt_info net_lt_cmd;
+ u8 data[100];
+};
+
+struct hinic3_force_pkt_drop {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port;
+ u8 rsvd1[3];
+};
+
+struct hinic3_rx_mode_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 rx_mode;
+};
+
+/* rss */
+struct hinic3_rss_context_table {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 context;
+};
+
+struct hinic3_cmd_rss_engine_type {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 hash_engine;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_hash_key {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 key[NIC_RSS_KEY_SIZE];
+};
+
+struct hinic3_rss_indir_table {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 indir[NIC_RSS_INDIR_SIZE];
+};
+
+#define NIC_RSS_CMD_TEMP_ALLOC 0x01
+#define NIC_RSS_CMD_TEMP_FREE 0x02
+
+struct hinic3_rss_template_mgmt {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 cmd;
+ u8 template_id;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 rss_en;
+ u8 rq_priority_number;
+ u8 prio_tc[NIC_DCB_COS_MAX];
+ u16 num_qps;
+ u16 rsvd1;
+};
+
+struct hinic3_dcb_state {
+ u8 dcb_on;
+ u8 default_cos;
+ u8 trust;
+ u8 rsvd1;
+ u8 pcp2cos[NIC_DCB_UP_MAX];
+ u8 dscp2cos[64];
+ u32 rsvd2[7];
+};
+
+struct hinic3_cmd_vf_dcb_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ struct hinic3_dcb_state state;
+};
+
+struct hinic3_up_ets_cfg { /* delete */
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+
+ u8 cos_tc[NIC_DCB_COS_MAX];
+ u8 tc_bw[NIC_DCB_TC_MAX];
+ u8 cos_prio[NIC_DCB_COS_MAX];
+ u8 cos_bw[NIC_DCB_COS_MAX];
+ u8 tc_prio[NIC_DCB_TC_MAX];
+};
+
+#define CMD_QOS_ETS_COS_TC BIT(0)
+#define CMD_QOS_ETS_TC_BW BIT(1)
+#define CMD_QOS_ETS_COS_PRIO BIT(2)
+#define CMD_QOS_ETS_COS_BW BIT(3)
+#define CMD_QOS_ETS_TC_PRIO BIT(4)
+struct hinic3_cmd_ets_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 1 - set, 0 - get */
+ /* bit0 - cos_tc, bit1 - tc_bw, bit2 - cos_prio, bit3 - cos_bw, bit4 - tc_prio */
+ u8 cfg_bitmap;
+ u8 rsvd;
+
+ u8 cos_tc[NIC_DCB_COS_MAX];
+ u8 tc_bw[NIC_DCB_TC_MAX];
+ u8 cos_prio[NIC_DCB_COS_MAX]; /* 0 - DWRR, 1 - STRICT */
+ u8 cos_bw[NIC_DCB_COS_MAX];
+ u8 tc_prio[NIC_DCB_TC_MAX]; /* 0 - DWRR, 1 - STRICT */
+};
+
+struct hinic3_cmd_set_dcb_state {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 op_code; /* 0 - get dcb state, 1 - set dcb state */
+ u8 state; /* 0 - disable, 1 - enable dcb */
+ u8 port_state; /* 0 - disable, 1 - enable dcb */
+ u8 rsvd[7];
+};
+
+#define PFC_BIT_MAP_NUM 8
+struct hinic3_cmd_set_pfc {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 0:get 1: set pfc_en 2: set pfc_bitmap 3: set all */
+ u8 pfc_en; /* pfc_en and pfc_bitmap must be set together */
+ u8 pfc_bitmap;
+ u8 rsvd[4];
+};
+
+#define CMD_QOS_PORT_TRUST BIT(0)
+#define CMD_QOS_PORT_DFT_COS BIT(1)
+struct hinic3_cmd_qos_port_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 0 - get, 1 - set */
+ u8 cfg_bitmap; /* bit0 - trust, bit1 - dft_cos */
+ u8 rsvd0;
+
+ u8 trust;
+ u8 dft_cos;
+ u8 rsvd1[18];
+};
+
+#define MAP_COS_MAX_NUM 8
+#define CMD_QOS_MAP_PCP2COS BIT(0)
+#define CMD_QOS_MAP_DSCP2COS BIT(1)
+struct hinic3_cmd_qos_map_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 op_code;
+ u8 cfg_bitmap; /* bit0 - pcp2cos, bit1 - dscp2cos */
+ u16 rsvd0;
+
+ u8 pcp2cos[8]; /* 8 must be configured together */
+ /* If a dscp2cos entry is set to 0xFF, the MPU ignores that DSCP priority.
+ * Multiple mappings between DSCP values and CoS values can be configured at a time.
+ */
+ u8 dscp2cos[64];
+ u32 rsvd1[4];
+};
+
+struct hinic3_cos_up_map {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 cos_valid_mask; /* each bit indicates whether the map entry at that index is valid (1) or not (0) */
+ u16 rsvd1;
+
+ /* user priority in cos(index:cos, value: up pri) */
+ u8 map[NIC_DCB_UP_MAX];
+};
+
+struct hinic3_cmd_pause_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u16 rsvd1;
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+ u8 rsvd2[5];
+};
+
+struct nic_cmd_pause_inquiry_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 valid;
+
+ u32 type; /* 1: set, 2: get */
+
+ u32 rx_inquiry_pause_drop_pkts_en;
+ u32 rx_inquiry_pause_period_ms;
+ u32 rx_inquiry_pause_times;
+ /* rx pause Detection Threshold, Default PAUSE_FRAME_THD_10G/25G/40G/100 */
+ u32 rx_inquiry_pause_frame_thd;
+ u32 rx_inquiry_tx_total_pkts;
+
+ u32 tx_inquiry_pause_en; /* tx pause detect enable */
+ u32 tx_inquiry_pause_period_ms; /* tx pause Default Detection Period 200ms */
+ u32 tx_inquiry_pause_times; /* tx pause Default Times Period 5 */
+ u32 tx_inquiry_pause_frame_thd; /* tx pause Detection Threshold */
+ u32 tx_inquiry_rx_total_pkts;
+ u32 rsvd[4];
+};
+
+/* pfc/pause Storm TX exception reporting */
+struct nic_cmd_tx_pause_notice {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 tx_pause_except; /* 1: abnormality,0: normal */
+ u32 except_level;
+ u32 rsvd;
+};
+
+#define HINIC3_CMD_OP_FREE 0
+#define HINIC3_CMD_OP_ALLOC 1
+
+struct hinic3_cmd_cfg_qps {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: alloc qp, 0: free qp */
+ u8 rsvd1;
+ u16 num_qps;
+ u16 rsvd2;
+};
+
+struct hinic3_cmd_led_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port;
+ u8 type;
+ u8 mode;
+ u8 rsvd1;
+};
+
+struct hinic3_cmd_port_loopback {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u8 mode;
+ u8 en;
+ u32 rsvd1[2];
+};
+
+struct hinic3_cmd_get_light_module_abs {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 abs_status; /* 0:present, 1:absent */
+ u8 rsv[2];
+};
+
+#define STD_SFP_INFO_MAX_SIZE 640
+struct hinic3_cmd_get_std_sfp_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 wire_type;
+ u16 eeprom_len;
+ u32 rsvd;
+ u8 sfp_info[STD_SFP_INFO_MAX_SIZE];
+};
+
+struct hinic3_cable_plug_event {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 plugged; /* 0: unplugged, 1: plugged */
+ u8 port_id;
+};
+
+struct nic_cmd_mac_info {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 valid_bitmap;
+ u16 rsvd;
+
+ u8 host_id[32];
+ u8 port_id[32];
+ u8 mac_addr[192];
+};
+
+struct nic_cmd_set_tcam_enable {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 tcam_enable;
+ u8 rsvd1;
+ u32 rsvd2;
+};
+
+struct nic_cmd_set_fdir_status {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 pkt_type_en;
+ u8 pkt_type;
+ u8 qid;
+ u8 rsvd2;
+};
+
+#define HINIC3_TCAM_BLOCK_ENABLE 1
+#define HINIC3_TCAM_BLOCK_DISABLE 0
+#define HINIC3_MAX_TCAM_RULES_NUM 4096
+
+/* tcam block type, according to tcam block size */
+enum {
+ NIC_TCAM_BLOCK_TYPE_LARGE = 0, /* block_size: 16 */
+ NIC_TCAM_BLOCK_TYPE_SMALL, /* block_size: 0 */
+ NIC_TCAM_BLOCK_TYPE_MAX
+};
+
+/* alloc tcam block input struct */
+struct nic_cmd_ctrl_tcam_block_in {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u8 alloc_en; /* 0: Releases the allocated TCAM block. 1: Applies for a new TCAM block */
+ /* 0: 16 size tcam block, 1: 0 size tcam block, other reserved. */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ /* Number of TCAM blocks that the driver wants to allocate;
+ * the UP returns the block size it actually supports
+ * to the driver in the output structure
+ */
+ u16 alloc_block_num;
+};
+
+/* alloc tcam block output struct */
+struct nic_cmd_ctrl_tcam_block_out {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u8 alloc_en; /* 0: Releases the allocated TCAM block. 1: Applies for a new TCAM block */
+ /* 0: 16 size tcam block, 1: 0 size tcam block, other reserved. */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ /* Size of the allocated TCAM block,
+ * returned by the UP to the driver,
+ * indicating the block size that the UP supports
+ */
+ u16 mpu_alloc_block_size;
+};
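+/* Example (a minimal sketch; the mailbox send path is not shown): requesting
+ * a new 16-entry TCAM block with the input structure above. On success the
+ * granted size is reported back in nic_cmd_ctrl_tcam_block_out.mpu_alloc_block_size.
+ *
+ *   struct nic_cmd_ctrl_tcam_block_in req = { 0 };
+ *
+ *   req.func_id = func_id;
+ *   req.alloc_en = 1;                          // 1: allocate a new block
+ *   req.tcam_type = NIC_TCAM_BLOCK_TYPE_LARGE; // block_size: 16
+ */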
+
+struct nic_cmd_flush_tcam_rules {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u16 rsvd;
+};
+
+struct nic_cmd_dfx_fdir_tcam_block_table {
+ struct hinic3_mgmt_msg_head head;
+ u8 tcam_type;
+ u8 valid;
+ u16 tcam_block_index;
+ u16 use_function_id;
+ u16 rsvd;
+};
+
+struct tcam_result {
+ u32 qid;
+ u32 rsvd;
+};
+
+#define TCAM_FLOW_KEY_SIZE (44)
+
+struct tcam_key_x_y {
+ u8 x[TCAM_FLOW_KEY_SIZE];
+ u8 y[TCAM_FLOW_KEY_SIZE];
+};
+
+struct nic_tcam_cfg_rule {
+ u32 index;
+ struct tcam_result data;
+ struct tcam_key_x_y key;
+};
+
+#define TCAM_RULE_FDIR_TYPE 0
+#define TCAM_RULE_PPA_TYPE 1
+
+struct nic_cmd_fdir_add_rule {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type;
+ u8 fdir_ext; /* 0x1: flow bifur en bit */
+ struct nic_tcam_cfg_rule rule;
+};
+
+struct nic_cmd_fdir_del_rules {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type;
+ u8 rsvd;
+ u32 index_start;
+ u32 index_num;
+};
+
+struct nic_cmd_fdir_get_rule {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 index;
+ u8 valid;
+ u8 type;
+ u16 rsvd;
+ struct tcam_key_x_y key;
+ struct tcam_result data;
+ u64 packet_count;
+ u64 byte_count;
+};
+
+struct nic_cmd_fdir_get_block_rules {
+ struct hinic3_mgmt_msg_head head;
+ u8 tcam_block_type; /* only NIC_TCAM_BLOCK_TYPE_LARGE */
+ u8 tcam_table_type; /* TCAM_RULE_PPA_TYPE or TCAM_RULE_FDIR_TYPE */
+ u16 tcam_block_index;
+ u8 valid[NIC_TCAM_BLOCK_LARGE_SIZE];
+ struct tcam_key_x_y key[NIC_TCAM_BLOCK_LARGE_SIZE];
+ struct tcam_result data[NIC_TCAM_BLOCK_LARGE_SIZE];
+};
+
+struct hinic3_tcam_key_ipv4_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+ u32 sipv4_h : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+ u32 dipv4_h : 16;
+ u32 sipv4_l : 16;
+ u32 vlan_id : 15;
+ u32 vlan_flag : 1;
+ u32 dipv4_l : 16;
+ u32 rsvd3;
+ u32 dport : 16;
+ u32 rsvd4 : 16;
+ u32 rsvd5 : 16;
+ u32 sport : 16;
+ u32 outer_sipv4_h : 16;
+ u32 rsvd6 : 16;
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+ u32 rsvd7 : 16;
+ u32 vni_l : 16;
+};
+
+union hinic3_tag_tcam_ext_info {
+ struct {
+ u32 id : 16; /* id */
+ u32 type : 4; /* type: 0-func, 1-vmdq, 2-port, 3-rsvd, 4-trunk, 5-dp, 6-mc */
+ u32 host_id : 3;
+ u32 rsv : 8;
+ u32 ext : 1;
+ } bs;
+ u32 value;
+};
+
+struct hinic3_tcam_key_ipv6_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+ u32 sipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+ u32 sipv6_key2 : 16;
+ u32 sipv6_key1 : 16;
+ u32 sipv6_key4 : 16;
+ u32 sipv6_key3 : 16;
+ u32 sipv6_key6 : 16;
+ u32 sipv6_key5 : 16;
+ u32 dport : 16;
+ u32 sipv6_key7 : 16;
+ u32 dipv6_key0 : 16;
+ u32 sport : 16;
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+ u32 rsvd2 : 16;
+ u32 dipv6_key7 : 16;
+};
+
+struct hinic3_tcam_key_vxlan_ipv6_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+
+ u32 dipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+
+ u32 dport : 16;
+ u32 dipv6_key7 : 16;
+
+ u32 rsvd2 : 16;
+ u32 sport : 16;
+
+ u32 outer_sipv4_h : 16;
+ u32 rsvd3 : 16;
+
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+
+ u32 rsvd4 : 16;
+ u32 vni_l : 16;
+};
+
+struct tag_tcam_key {
+ union {
+ struct hinic3_tcam_key_ipv4_mem key_info;
+ struct hinic3_tcam_key_ipv6_mem key_info_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6;
+ };
+
+ union {
+ struct hinic3_tcam_key_ipv4_mem key_mask;
+ struct hinic3_tcam_key_ipv6_mem key_mask_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6;
+ };
+};
+
+enum {
+ PPA_TABLE_ID_CLEAN_CMD = 0,
+ PPA_TABLE_ID_ADD_CMD,
+ PPA_TABLE_ID_DEL_CMD,
+ FDIR_TABLE_ID_ADD_CMD,
+ FDIR_TABLE_ID_DEL_CMD,
+ PPA_TABEL_ID_MAX
+};
+
+struct hinic3_ppa_cfg_table_id_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u16 cmd;
+ u16 table_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_cfg_ppa_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 ppa_en;
+ u8 rsvd;
+};
+
+struct hinic3_func_flow_bifur_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 func_id;
+ u8 flow_bifur_en;
+ u8 rsvd[5];
+};
+
+struct hinic3_port_flow_bifur_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 port_id;
+ u8 flow_bifur_en;
+ u8 rsvd[5];
+};
+
+struct hinic3_bond_mask_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 func_id;
+ u8 bond_mask;
+ u8 bond_en;
+ u8 func_valid;
+ u8 rsvd[3];
+};
+
+#define HINIC3_TX_SET_PROMISC_SKIP 0
+#define HINIC3_TX_GET_PROMISC_SKIP 1
+
+struct hinic3_tx_promisc_cfg {
+ struct hinic3_mgmt_msg_head msg_head;
+ u8 port_id;
+ u8 promisc_skip_en; /* 0: disable tx promisc replication, 1: enable */
+ u8 opcode; /* 0: set, 1: get */
+ u8 rsvd1;
+};
+
+struct hinic3_ppa_cfg_mode_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 ppa_mode;
+ u8 qpc_func_nums;
+ u16 base_qpc_func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_flush_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 flush_en; /* 0 flush done, 1 in flush operation */
+ u8 rsvd1;
+};
+
+struct hinic3_ppa_fdir_query_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u32 index;
+ u32 rsvd;
+ u64 pkt_nums;
+ u64 pkt_bytes;
+};
+
+/* BIOS CONF */
+enum {
+ NIC_NVM_DATA_SET = BIT(0), /* 1-save, 0-read */
+ NIC_NVM_DATA_PXE = BIT(1),
+ NIC_NVM_DATA_VLAN = BIT(2),
+ NIC_NVM_DATA_VLAN_PRI = BIT(3),
+ NIC_NVM_DATA_VLAN_ID = BIT(4),
+ NIC_NVM_DATA_WORK_MODE = BIT(5),
+ NIC_NVM_DATA_PF_SPEED_LIMIT = BIT(6),
+ NIC_NVM_DATA_GE_MODE = BIT(7),
+ NIC_NVM_DATA_AUTO_NEG = BIT(8),
+ NIC_NVM_DATA_LINK_FEC = BIT(9),
+ NIC_NVM_DATA_PF_ADAPTIVE_LINK = BIT(10),
+ NIC_NVM_DATA_SRIOV_CONTROL = BIT(11),
+ NIC_NVM_DATA_EXTEND_MODE = BIT(12),
+ NIC_NVM_DATA_RESET = BIT(31),
+};
+
+#define BIOS_CFG_SIGNATURE 0x1923E518
+#define BIOS_OP_CFG_ALL(op_code_val) ((((op_code_val) >> 1) & (0xFFFFFFFF)) != 0)
+#define BIOS_OP_CFG_WRITE(op_code_val) ((((op_code_val) & NIC_NVM_DATA_SET)) != 0)
+#define BIOS_OP_CFG_PXE_EN(op_code_val) (((op_code_val) & NIC_NVM_DATA_PXE) != 0)
+#define BIOS_OP_CFG_VLAN_EN(op_code_val) (((op_code_val) & NIC_NVM_DATA_VLAN) != 0)
+#define BIOS_OP_CFG_VLAN_PRI(op_code_val) (((op_code_val) & NIC_NVM_DATA_VLAN_PRI) != 0)
+#define BIOS_OP_CFG_VLAN_ID(op_code_val) (((op_code_val) & NIC_NVM_DATA_VLAN_ID) != 0)
+#define BIOS_OP_CFG_WORK_MODE(op_code_val) (((op_code_val) & NIC_NVM_DATA_WORK_MODE) != 0)
+#define BIOS_OP_CFG_PF_BW(op_code_val) (((op_code_val) & NIC_NVM_DATA_PF_SPEED_LIMIT) != 0)
+#define BIOS_OP_CFG_GE_SPEED(op_code_val) (((op_code_val) & NIC_NVM_DATA_GE_MODE) != 0)
+#define BIOS_OP_CFG_AUTO_NEG(op_code_val) (((op_code_val) & NIC_NVM_DATA_AUTO_NEG) != 0)
+#define BIOS_OP_CFG_LINK_FEC(op_code_val) (((op_code_val) & NIC_NVM_DATA_LINK_FEC) != 0)
+#define BIOS_OP_CFG_AUTO_ADPAT(op_code_val) (((op_code_val) & NIC_NVM_DATA_PF_ADAPTIVE_LINK) != 0)
+#define BIOS_OP_CFG_SRIOV_ENABLE(op_code_val) (((op_code_val) & NIC_NVM_DATA_SRIOV_CONTROL) != 0)
+#define BIOS_OP_CFG_EXTEND_MODE(op_code_val) (((op_code_val) & NIC_NVM_DATA_EXTEND_MODE) != 0)
+#define BIOS_OP_CFG_RST_DEF_SET(op_code_val) (((op_code_val) & (u32)NIC_NVM_DATA_RESET) != 0)
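+/* Example (a minimal sketch; apply_pxe_cfg() is a hypothetical handler): how
+ * the BIOS_OP_CFG_* helpers above decode an op_code value.
+ *
+ *   u32 op_code = NIC_NVM_DATA_SET | NIC_NVM_DATA_PXE;
+ *
+ *   if (BIOS_OP_CFG_WRITE(op_code) && BIOS_OP_CFG_PXE_EN(op_code))
+ *       apply_pxe_cfg(&cmd->bios_cfg); // write request carrying the PXE field
+ */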
+
+#define NIC_BIOS_CFG_MAX_PF_BW 100
+/* Note: This structure must be 4-byte aligned. */
+struct nic_bios_cfg {
+ u32 signature;
+ u8 pxe_en; /* PXE enable: 0 - disable 1 - enable */
+ u8 extend_mode;
+ u8 rsvd0[2];
+ u8 pxe_vlan_en; /* PXE VLAN enable: 0 - disable 1 - enable */
+ u8 pxe_vlan_pri; /* PXE VLAN priority: 0-7 */
+ u16 pxe_vlan_id; /* PXE VLAN ID 1-4094 */
+ u32 service_mode; /* @See CHIPIF_SERVICE_MODE_x */
+ u32 pf_bw; /* PF rate, in percentage. The value ranges from 0 to 100. */
+ u8 speed; /* enum of port speed */
+ u8 auto_neg; /* Auto-negotiation switch: 0 - invalid field, 1 - on, 2 - off */
+ u8 lanes; /* lane num */
+ u8 fec; /* FEC mode, @See enum mag_cmd_port_fec */
+ u8 auto_adapt; /* Adaptive mode configuration: 0 - invalid configuration, 1 - on, 2 - off */
+ u8 func_valid; /* Whether func_id is valid; 0: invalid; other: valid */
+ u8 func_id; /* This member is valid only when func_valid is not set to 0. */
+ u8 sriov_en; /* SRIOV-EN: 0 - Invalid configuration, 1 - On, 2 - Off */
+};
+
+struct nic_cmd_bios_cfg {
+ struct hinic3_mgmt_msg_head head;
+ u32 op_code; /* Operation code: bit0 - 0: read, 1: write; bit1-6: cfg_mask */
+ struct nic_bios_cfg bios_cfg;
+};
+
+struct nic_cmd_vhd_config {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 vhd_type;
+ u8 virtio_small_enable; /* 0: mergeable mode, 1: small mode */
+};
+
+/* BOND */
+struct hinic3_create_bond_info {
+ u32 bond_id;
+ u32 master_slave_port_id;
+ u32 slave_bitmap; /* bond port id bitmap */
+ u32 poll_timeout; /* Bond device link check time */
+ u32 up_delay; /* Temporarily reserved */
+ u32 down_delay; /* Temporarily reserved */
+ u32 bond_mode; /* Temporarily reserved */
+ u32 active_pf; /* bond use active pf id */
+ u32 active_port_max_num; /* Maximum number of active bond member interfaces */
+ u32 active_port_min_num; /* Minimum number of active bond member interfaces */
+ u32 xmit_hash_policy;
+ u32 rsvd[2];
+};
+
+struct hinic3_cmd_create_bond {
+ struct hinic3_mgmt_msg_head head;
+ struct hinic3_create_bond_info create_bond_info;
+};
+
+struct hinic3_cmd_delete_bond {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 rsvd[2];
+};
+
+struct hinic3_open_close_bond_info {
+ u32 bond_id;
+ u32 open_close_flag; /* Bond flag. 1: open; 0: close. */
+ u32 rsvd[2];
+};
+
+struct hinic3_cmd_open_close_bond {
+ struct hinic3_mgmt_msg_head head;
+ struct hinic3_open_close_bond_info open_close_bond_info;
+};
+
+struct lacp_port_params {
+ u16 port_number;
+ u16 port_priority;
+ u16 key;
+ u16 system_priority;
+ u8 system[ETH_ALEN];
+ u8 port_state;
+ u8 rsvd;
+};
+
+struct lacp_port_info {
+ u32 selected;
+ u32 aggregator_port_id;
+
+ struct lacp_port_params actor;
+ struct lacp_port_params partner;
+
+ u64 tx_lacp_pkts;
+ u64 rx_lacp_pkts;
+ u64 rx_8023ad_drop;
+ u64 tx_8023ad_drop;
+ u64 unknown_pkt_drop;
+ u64 rx_marker_pkts;
+ u64 tx_marker_pkts;
+};
+
+struct hinic3_bond_status_info {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 bon_mmi_status;
+ u32 active_bitmap;
+ u32 port_count;
+
+ struct lacp_port_info port_info[4];
+
+ u64 success_report_cnt[4];
+ u64 fail_report_cnt[4];
+
+ u64 poll_timeout;
+ u64 fast_periodic_timeout;
+ u64 slow_periodic_timeout;
+ u64 short_timeout;
+ u64 long_timeout;
+ u64 aggregate_wait_timeout;
+ u64 tx_period_timeout;
+ u64 rx_marker_timer;
+};
+
+struct hinic3_bond_active_report_info {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 bon_mmi_status;
+ u32 active_bitmap;
+
+ u8 rsvd[16];
+};
+
+/* IP checksum error packets, enable rss quadruple hash. */
+struct hinic3_ipcs_err_rss_enable_operation_s {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 en_tag;
+ u8 type; /* 1: set 0: get */
+ u8 rsvd[2];
+};
+
+struct hinic3_smac_check_state {
+ struct hinic3_mgmt_msg_head head;
+ u8 smac_check_en; /* 1: enable 0: disable */
+ u8 op_code; /* 1: set 0: get */
+ u8 rsvd[2];
+};
+
+struct hinic3_clear_log_state {
+ struct hinic3_mgmt_msg_head head;
+ u32 type;
+};
+
+#endif /* HINIC_MGMT_INTERFACE_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mt.h b/drivers/net/ethernet/huawei/hinic3/hinic3_mt.h
new file mode 100644
index 0000000..4df82ff
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mt.h
@@ -0,0 +1,864 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MT_H
+#define HINIC3_MT_H
+
+#define HINIC3_DRV_NAME "hinic3"
+#define HINIC3_CHIP_NAME "hinic"
+/* Interrupts are recorded in the FFM, up to a maximum number of records */
+
+#define NICTOOL_CMD_TYPE (0x18)
+#define HINIC3_CARD_NAME_MAX_LEN (128)
+
+struct api_cmd_rd {
+ u32 pf_id;
+ u8 dest;
+ u8 *cmd;
+ u16 size;
+ void *ack;
+ u16 ack_size;
+};
+
+struct api_cmd_wr {
+ u32 pf_id;
+ u8 dest;
+ u8 *cmd;
+ u16 size;
+};
+
+#define PF_DEV_INFO_NUM 32
+
+struct pf_dev_info {
+ u64 bar0_size;
+ u8 bus;
+ u8 slot;
+ u8 func;
+ u64 phy_addr;
+};
+
+/* Indicates the maximum number of interrupts that can be recorded.
+ * Subsequent interrupts are not recorded in FFM.
+ */
+#define FFM_RECORD_NUM_MAX 64
+
+struct ffm_intr_info {
+ u8 node_id;
+ /* error level of the interrupt source */
+ u8 err_level;
+ /* Classification by interrupt source properties */
+ u16 err_type;
+ u32 err_csr_addr;
+ u32 err_csr_value;
+};
+
+struct ffm_intr_tm_info {
+ struct ffm_intr_info intr_info;
+ u8 times;
+ u8 sec;
+ u8 min;
+ u8 hour;
+ u8 mday;
+ u8 mon;
+ u16 year;
+};
+
+struct ffm_record_info {
+ u32 ffm_num;
+ u32 last_err_csr_addr;
+ u32 last_err_csr_value;
+ struct ffm_intr_tm_info ffm[FFM_RECORD_NUM_MAX];
+};
+
+struct dbgtool_k_glb_info {
+ struct semaphore dbgtool_sem;
+ struct ffm_record_info *ffm;
+};
+
+struct msg_2_up {
+ u8 pf_id;
+ u8 mod;
+ u8 cmd;
+ void *buf_in;
+ u16 in_size;
+ void *buf_out;
+ u16 *out_size;
+};
+
+struct dbgtool_param {
+ union {
+ struct api_cmd_rd api_rd;
+ struct api_cmd_wr api_wr;
+ struct pf_dev_info *dev_info;
+ struct ffm_record_info *ffm_rd;
+ struct msg_2_up msg2up;
+ } param;
+ char chip_name[16];
+};
+
+/* dbgtool command type */
+/* You can add commands as required. The dbgtool command can be
+ * used to invoke all interfaces of the kernel-mode x86 driver.
+ */
+enum dbgtool_cmd {
+ DBGTOOL_CMD_API_RD = 0,
+ DBGTOOL_CMD_API_WR,
+ DBGTOOL_CMD_FFM_RD,
+ DBGTOOL_CMD_FFM_CLR,
+ DBGTOOL_CMD_PF_DEV_INFO_GET,
+ DBGTOOL_CMD_MSG_2_UP,
+ DBGTOOL_CMD_FREE_MEM,
+ DBGTOOL_CMD_NUM
+};
+
+#define HINIC_PF_MAX_SIZE (16)
+#define HINIC_VF_MAX_SIZE (4096)
+#define BUSINFO_LEN (32)
+
+enum module_name {
+ SEND_TO_NPU = 1,
+ SEND_TO_MPU,
+ SEND_TO_SM,
+
+ SEND_TO_HW_DRIVER,
+#define SEND_TO_SRV_DRV_BASE (SEND_TO_HW_DRIVER + 1)
+ SEND_TO_NIC_DRIVER = SEND_TO_SRV_DRV_BASE,
+ SEND_TO_OVS_DRIVER,
+ SEND_TO_ROCE_DRIVER,
+ SEND_TO_TOE_DRIVER,
+ SEND_TO_IOE_DRIVER,
+ SEND_TO_FC_DRIVER,
+ SEND_TO_VBS_DRIVER,
+ SEND_TO_IPSEC_DRIVER,
+ SEND_TO_VIRTIO_DRIVER,
+ SEND_TO_MIGRATE_DRIVER,
+ SEND_TO_PPA_DRIVER,
+ SEND_TO_CUSTOM_DRIVER = SEND_TO_SRV_DRV_BASE + 11,
+ SEND_TO_VSOCK_DRIVER = SEND_TO_SRV_DRV_BASE + 14,
+ SEND_TO_BIFUR_DRIVER,
+ SEND_TO_DRIVER_MAX = SEND_TO_SRV_DRV_BASE + 16, /* reserved */
+};
+
+enum driver_cmd_type {
+ TX_INFO = 1,
+ Q_NUM,
+ TX_WQE_INFO,
+ TX_MAPPING,
+ RX_INFO,
+ RX_WQE_INFO,
+ RX_CQE_INFO,
+ UPRINT_FUNC_EN,
+ UPRINT_FUNC_RESET,
+ UPRINT_SET_PATH,
+ UPRINT_GET_STATISTICS,
+ FUNC_TYPE,
+ GET_FUNC_IDX,
+ GET_INTER_NUM,
+ CLOSE_TX_STREAM,
+ GET_DRV_VERSION,
+ CLEAR_FUNC_STASTIC,
+ GET_HW_STATS,
+ CLEAR_HW_STATS,
+ GET_SELF_TEST_RES,
+ GET_CHIP_FAULT_STATS,
+ NIC_RSVD1,
+ NIC_RSVD2,
+ GET_OS_HOT_REPLACE_INFO,
+ GET_CHIP_ID,
+ GET_SINGLE_CARD_INFO,
+ GET_FIRMWARE_ACTIVE_STATUS,
+ ROCE_DFX_FUNC,
+ GET_DEVICE_ID,
+ GET_PF_DEV_INFO,
+ CMD_FREE_MEM,
+ GET_LOOPBACK_MODE = 32,
+ SET_LOOPBACK_MODE,
+ SET_LINK_MODE,
+ SET_TX_PF_BW_LIMIT,
+ GET_PF_BW_LIMIT,
+ ROCE_CMD,
+ GET_POLL_WEIGHT,
+ SET_POLL_WEIGHT,
+ GET_HOMOLOGUE,
+ SET_HOMOLOGUE,
+ GET_SSET_COUNT,
+ GET_SSET_ITEMS,
+ IS_DRV_IN_VM,
+ LRO_ADPT_MGMT,
+ SET_INTER_COAL_PARAM,
+ GET_INTER_COAL_PARAM,
+ GET_CHIP_INFO,
+ GET_NIC_STATS_LEN,
+ GET_NIC_STATS_STRING,
+ GET_NIC_STATS_INFO,
+ GET_PF_ID,
+ GET_MBOX_CNT,
+ NIC_RSVD4,
+ NIC_RSVD5,
+ DCB_QOS_INFO,
+ DCB_PFC_STATE,
+ DCB_ETS_STATE,
+ DCB_STATE,
+ QOS_DEV,
+ GET_QOS_COS,
+ GET_ULD_DEV_NAME,
+ GET_TX_TIMEOUT,
+ SET_TX_TIMEOUT,
+
+ RSS_CFG = 0x40,
+ RSS_INDIR,
+ PORT_ID,
+
+ SET_RX_PF_BW_LIMIT = 0x43,
+
+ GET_FUNC_CAP = 0x50,
+ GET_XSFP_PRESENT = 0x51,
+ GET_XSFP_INFO = 0x52,
+ DEV_NAME_TEST = 0x53,
+ GET_XSFP_INFO_COMP_CMIS = 0x54,
+
+ GET_WIN_STAT = 0x60,
+ WIN_CSR_READ = 0x61,
+ WIN_CSR_WRITE = 0x62,
+ WIN_API_CMD_RD = 0x63,
+
+ GET_FUSION_Q = 0x64,
+
+ ROCE_CMD_BOND_HASH_TYPE_SET = 0xb2,
+
+ BIFUR_SET_ENABLE = 0xc0,
+ BIFUR_GET_ENABLE = 0xc1,
+
+ VM_COMPAT_TEST = 0xFF
+};
+
+enum api_chain_cmd_type {
+ API_CSR_READ,
+ API_CSR_WRITE
+};
+
+enum sm_cmd_type {
+ SM_CTR_RD16 = 1,
+ SM_CTR_RD32,
+ SM_CTR_RD64_PAIR,
+ SM_CTR_RD64,
+ SM_CTR_RD32_CLEAR,
+ SM_CTR_RD64_PAIR_CLEAR,
+ SM_CTR_RD64_CLEAR,
+ SM_CTR_RD16_CLEAR,
+};
+
+struct cqm_stats {
+ atomic_t cqm_cmd_alloc_cnt;
+ atomic_t cqm_cmd_free_cnt;
+ atomic_t cqm_send_cmd_box_cnt;
+ atomic_t cqm_send_cmd_imm_cnt;
+ atomic_t cqm_db_addr_alloc_cnt;
+ atomic_t cqm_db_addr_free_cnt;
+ atomic_t cqm_fc_srq_create_cnt;
+ atomic_t cqm_srq_create_cnt;
+ atomic_t cqm_rq_create_cnt;
+ atomic_t cqm_qpc_mpt_create_cnt;
+ atomic_t cqm_nonrdma_queue_create_cnt;
+ atomic_t cqm_rdma_queue_create_cnt;
+ atomic_t cqm_rdma_table_create_cnt;
+ atomic_t cqm_qpc_mpt_delete_cnt;
+ atomic_t cqm_nonrdma_queue_delete_cnt;
+ atomic_t cqm_rdma_queue_delete_cnt;
+ atomic_t cqm_rdma_table_delete_cnt;
+ atomic_t cqm_func_timer_clear_cnt;
+ atomic_t cqm_func_hash_buf_clear_cnt;
+ atomic_t cqm_scq_callback_cnt;
+ atomic_t cqm_ecq_callback_cnt;
+ atomic_t cqm_nocq_callback_cnt;
+ atomic_t cqm_aeq_callback_cnt[112];
+};
+
+struct link_event_stats {
+ atomic_t link_down_stats;
+ atomic_t link_up_stats;
+};
+
+enum hinic3_fault_err_level {
+ FAULT_LEVEL_FATAL,
+ FAULT_LEVEL_SERIOUS_RESET,
+ FAULT_LEVEL_HOST,
+ FAULT_LEVEL_SERIOUS_FLR,
+ FAULT_LEVEL_GENERAL,
+ FAULT_LEVEL_SUGGESTION,
+ FAULT_LEVEL_MAX,
+};
+
+enum hinic3_fault_type {
+ FAULT_TYPE_CHIP,
+ FAULT_TYPE_UCODE,
+ FAULT_TYPE_MEM_RD_TIMEOUT,
+ FAULT_TYPE_MEM_WR_TIMEOUT,
+ FAULT_TYPE_REG_RD_TIMEOUT,
+ FAULT_TYPE_REG_WR_TIMEOUT,
+ FAULT_TYPE_PHY_FAULT,
+ FAULT_TYPE_TSENSOR_FAULT,
+ FAULT_TYPE_MAX,
+};
+
+struct fault_event_stats {
+ /* TODO: HINIC_NODE_ID_MAX: temporarily use the 1822 value (22) */
+ atomic_t chip_fault_stats[22][FAULT_LEVEL_MAX];
+ atomic_t fault_type_stat[FAULT_TYPE_MAX];
+ atomic_t pcie_fault_stats;
+};
+
+enum hinic3_ucode_event_type {
+ HINIC3_INTERNAL_OTHER_FATAL_ERROR = 0x0,
+ HINIC3_CHANNEL_BUSY = 0x7,
+ HINIC3_NIC_FATAL_ERROR_MAX = 0x8,
+};
+
+struct hinic3_hw_stats {
+ atomic_t heart_lost_stats;
+ struct cqm_stats cqm_stats;
+ struct link_event_stats link_event_stats;
+ struct fault_event_stats fault_event_stats;
+ atomic_t nic_ucode_event_stats[HINIC3_NIC_FATAL_ERROR_MAX];
+};
+
+#ifndef IFNAMSIZ
+#define IFNAMSIZ 16
+#endif
+
+struct pf_info {
+ char name[IFNAMSIZ];
+ char bus_info[BUSINFO_LEN];
+ u32 pf_type;
+};
+
+struct card_info {
+ struct pf_info pf[HINIC_PF_MAX_SIZE];
+ u32 pf_num;
+};
+
+struct func_mbox_cnt_info {
+ char bus_info[BUSINFO_LEN];
+ u64 send_cnt;
+ u64 ack_cnt;
+};
+
+struct card_mbox_cnt_info {
+ struct func_mbox_cnt_info func_info[HINIC_PF_MAX_SIZE +
+ HINIC_VF_MAX_SIZE];
+ u32 func_num;
+};
+
+struct hinic3_nic_loop_mode {
+ u32 loop_mode;
+ u32 loop_ctrl;
+};
+
+struct hinic3_pf_info {
+ u32 isvalid;
+ u32 pf_id;
+};
+
+enum hinic3_show_set {
+ HINIC3_SHOW_SSET_IO_STATS = 1,
+};
+
+#define HINIC3_SHOW_ITEM_LEN 32
+struct hinic3_show_item {
+ char name[HINIC3_SHOW_ITEM_LEN];
+ u8 hexadecimal; /* 0: decimal, 1: hexadecimal */
+ u8 rsvd[7];
+ u64 value;
+};
+
+#define HINIC3_CHIP_FAULT_SIZE (110 * 1024)
+#define MAX_DRV_BUF_SIZE 4096
+
+struct nic_cmd_chip_fault_stats {
+ u32 offset;
+ u8 chip_fault_stats[MAX_DRV_BUF_SIZE];
+};
+
+#define NIC_TOOL_MAGIC 'x'
+
+#define CARD_MAX_SIZE (64)
+
+struct nic_card_id {
+ u32 id[CARD_MAX_SIZE];
+ u32 num;
+};
+
+struct func_pdev_info {
+ u64 bar0_phy_addr;
+ u64 bar0_size;
+ u64 bar1_phy_addr;
+ u64 bar1_size;
+ u64 bar3_phy_addr;
+ u64 bar3_size;
+ u64 rsvd1[4];
+};
+
+struct hinic3_card_func_info {
+ u32 num_pf;
+ u32 rsvd0;
+ u64 usr_api_phy_addr;
+ struct func_pdev_info pdev_info[CARD_MAX_SIZE];
+};
+
+struct wqe_info {
+ int q_id;
+ void *slq_handle;
+ unsigned int wqe_id;
+};
+
+#define MAX_VER_INFO_LEN 128
+struct drv_version_info {
+ char ver[MAX_VER_INFO_LEN];
+};
+
+struct hinic3_tx_hw_page {
+ u64 phy_addr;
+ u64 *map_addr;
+};
+
+struct nic_sq_info {
+ u16 q_id;
+ u16 pi;
+ u16 ci; /* sw_ci */
+ u16 fi; /* hw_ci */
+ u32 q_depth;
+ u16 pi_reverse; /* TODO: what is this? */
+ u16 wqebb_size;
+ u8 priority;
+ u16 *ci_addr;
+ u64 cla_addr;
+ void *slq_handle;
+ /* TODO: NIC doesn't use direct wqe */
+ struct hinic3_tx_hw_page direct_wqe;
+ struct hinic3_tx_hw_page doorbell;
+ u32 page_idx;
+ u32 glb_sq_id;
+};
+
+struct nic_rq_info {
+ u16 q_id;
+ u16 delta;
+ u16 hw_pi;
+ u16 ci; /* sw_ci */
+ u16 sw_pi;
+ u16 wqebb_size;
+ u16 q_depth;
+ u16 buf_len;
+
+ void *slq_handle;
+ u64 ci_wqe_page_addr;
+ u64 ci_cla_tbl_addr;
+
+ u8 coalesc_timer_cfg;
+ u8 pending_limt;
+ u16 msix_idx;
+ u32 msix_vector;
+};
+
+#define MT_EPERM 1 /* Operation not permitted */
+#define MT_EIO 2 /* I/O error */
+#define MT_EINVAL 3 /* Invalid argument */
+#define MT_EBUSY 4 /* Device or resource busy */
+#define MT_EOPNOTSUPP 0xFF /* Operation not supported */
+
+struct mt_msg_head {
+ u8 status;
+ u8 rsvd1[3];
+};
+
+#define MT_DCB_OPCODE_WR BIT(0) /* 1 - write, 0 - read */
+struct hinic3_mt_qos_info { /* delete */
+ struct mt_msg_head head;
+
+ u16 op_code;
+ u8 valid_cos_bitmap;
+ u8 valid_up_bitmap;
+ u32 rsvd1;
+};
+
+struct hinic3_mt_dcb_state {
+ struct mt_msg_head head;
+
+ u16 op_code; /* 0 - get dcb state, 1 - set dcb state */
+ u8 state; /* 0 - disable, 1 - enable dcb */
+ u8 rsvd;
+};
+
+#define MT_DCB_ETS_UP_TC BIT(1)
+#define MT_DCB_ETS_UP_BW BIT(2)
+#define MT_DCB_ETS_UP_PRIO BIT(3)
+#define MT_DCB_ETS_TC_BW BIT(4)
+#define MT_DCB_ETS_TC_PRIO BIT(5)
+
+#define DCB_UP_TC_NUM 0x8
+struct hinic3_mt_ets_state { /* delete */
+ struct mt_msg_head head;
+
+ u16 op_code;
+ u8 up_tc[DCB_UP_TC_NUM];
+ u8 up_bw[DCB_UP_TC_NUM];
+ u8 tc_bw[DCB_UP_TC_NUM];
+ u8 up_prio_bitmap;
+ u8 tc_prio_bitmap;
+ u32 rsvd;
+};
+
+#define MT_DCB_PFC_PFC_STATE BIT(1)
+#define MT_DCB_PFC_PFC_PRI_EN BIT(2)
+
+struct hinic3_mt_pfc_state { /* delete */
+ struct mt_msg_head head;
+
+ u16 op_code;
+ u8 state;
+ u8 pfc_en_bitpamp;
+ u32 rsvd;
+};
+
+#define CMD_QOS_DEV_TRUST BIT(0)
+#define CMD_QOS_DEV_DFT_COS BIT(1)
+#define CMD_QOS_DEV_PCP2COS BIT(2)
+#define CMD_QOS_DEV_DSCP2COS BIT(3)
+
+struct hinic3_mt_qos_dev_cfg {
+ struct mt_msg_head head;
+
+ u8 op_code; /* 0:get 1: set */
+ u8 rsvd0;
+ /* bit0 - trust, bit1 - dft_cos, bit2 - pcp2cos, bit3 - dscp2cos */
+ u16 cfg_bitmap;
+
+ u8 trust; /* 0 - pcp, 1 - dscp */
+ u8 dft_cos;
+ u16 rsvd1;
+ u8 pcp2cos[8]; /* all 8 values must be configured together */
+ /* When configuring dscp2cos, if a cos value is set to 0xFF, the driver
+ * ignores the configuration for that dscp priority; multiple dscp-to-cos
+ * mappings can be configured at a time
+ */
+ u8 dscp2cos[64];
+ u32 rsvd2[4];
+};
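+/* Example (a minimal sketch; the values below are illustrative only): using
+ * the 0xFF convention documented above so that only the explicitly set
+ * dscp-to-cos mappings take effect.
+ *
+ *   struct hinic3_mt_qos_dev_cfg cfg = { 0 };
+ *
+ *   cfg.op_code = 1;                       // set
+ *   cfg.cfg_bitmap = CMD_QOS_DEV_DSCP2COS;
+ *   memset(cfg.dscp2cos, 0xFF, sizeof(cfg.dscp2cos));
+ *   cfg.dscp2cos[46] = 5;                  // map DSCP 46 (EF) to cos 5
+ */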
+
+enum mt_api_type {
+ API_TYPE_MBOX = 1,
+ API_TYPE_API_CHAIN_BYPASS,
+ API_TYPE_API_CHAIN_TO_MPU,
+ API_TYPE_CLP,
+};
+
+struct npu_cmd_st {
+ u32 mod : 8;
+ u32 cmd : 8;
+ u32 ack_type : 3;
+ u32 direct_resp : 1;
+ u32 len : 12;
+};
+
+struct mpu_cmd_st {
+ u32 api_type : 8;
+ u32 mod : 8;
+ u32 cmd : 16;
+};
+
+struct msg_module {
+ char device_name[IFNAMSIZ];
+ u32 module;
+ union {
+ u32 msg_formate; /* for driver */
+ struct npu_cmd_st npu_cmd;
+ struct mpu_cmd_st mpu_cmd;
+ };
+ u32 timeout; /* for mpu/npu cmd */
+ u32 func_idx;
+ u32 buf_in_size;
+ u32 buf_out_size;
+ void *in_buf;
+ void *out_buf;
+ int bus_num;
+ u8 port_id;
+ u8 rsvd1[3];
+ u32 rsvd2[4];
+};
+
+struct hinic3_mt_qos_cos_cfg {
+ struct mt_msg_head head;
+
+ u8 port_id;
+ u8 func_cos_bitmap;
+ u8 port_cos_bitmap;
+ u8 func_max_cos_num;
+ u32 rsvd2[4];
+};
+
+#define MAX_NETDEV_NUM 4
+
+enum hinic3_bond_cmd_to_custom_e {
+ CMD_CUSTOM_BOND_DEV_CREATE = 1,
+ CMD_CUSTOM_BOND_DEV_DELETE,
+ CMD_CUSTOM_BOND_GET_CHIP_NAME,
+ CMD_CUSTOM_BOND_GET_CARD_INFO
+};
+
+enum xmit_hash_policy {
+ HASH_POLICY_L2 = 0, /* SMAC_DMAC */
+ HASH_POLICY_L23 = 1, /* SIP_DIP_SPORT_DPORT */
+ HASH_POLICY_L34 = 2, /* SMAC_DMAC_SIP_DIP */
+ HASH_POLICY_MAX = 3 /* MAX */
+};
+
+/* bond mode */
+enum tag_bond_mode {
+ BOND_MODE_NONE = 0, /**< bond disable */
+ BOND_MODE_BACKUP = 1, /**< 1 for active-backup */
+ BOND_MODE_BALANCE = 2, /**< 2 for balance-xor */
+ BOND_MODE_LACP = 4, /**< 4 for 802.3ad */
+ BOND_MODE_MAX
+};
+
+struct add_bond_dev_s {
+ struct mt_msg_head head;
+ /* input can be empty, which indicates that the value
+ * is assigned by the driver
+ */
+ char bond_name[IFNAMSIZ];
+ u8 slave_cnt;
+ u8 rsvd[3];
+ char slave_name[MAX_NETDEV_NUM][IFNAMSIZ];
+ u32 poll_timeout; /* unit: ms, default value = 100 */
+ u32 up_delay; /* default value = 0 */
+ u32 down_delay; /* default value = 0 */
+ u32 bond_mode; /* default value = BOND_MODE_LACP */
+
+ /* maximum number of active bond member interfaces,
+ * default value = 0
+ */
+ u32 active_port_max_num;
+ /* minimum number of active bond member interfaces,
+ * default value = 0
+ */
+ u32 active_port_min_num;
+ /* hash policy, which is used for microcode routing logic,
+ * default value = 0
+ */
+ enum xmit_hash_policy xmit_hash_policy;
+};
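+/* Example (a minimal sketch; the interface names are placeholders): filling
+ * add_bond_dev_s with the defaults noted in the field comments above.
+ *
+ *   struct add_bond_dev_s bond = { 0 };
+ *
+ *   bond.slave_cnt = 2;
+ *   strscpy(bond.slave_name[0], "eth0", IFNAMSIZ);
+ *   strscpy(bond.slave_name[1], "eth1", IFNAMSIZ);
+ *   bond.poll_timeout = 100;               // unit: ms
+ *   bond.bond_mode = BOND_MODE_LACP;
+ *   bond.xmit_hash_policy = HASH_POLICY_L2;
+ */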
+
+struct del_bond_dev_s {
+ struct mt_msg_head head;
+ char bond_name[IFNAMSIZ];
+};
+
+struct get_bond_chip_name_s {
+ char bond_name[IFNAMSIZ];
+ char chip_name[IFNAMSIZ];
+};
+
+struct bond_drv_msg_s {
+ u32 bond_id;
+ u32 slave_cnt;
+ u32 master_slave_index;
+ char bond_name[IFNAMSIZ];
+ char slave_name[MAX_NETDEV_NUM][IFNAMSIZ];
+};
+
+#define MAX_BONDING_CNT_PER_CARD (2)
+
+struct bond_negotiate_status {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+ u32 bond_id;
+ u32 bond_mmi_status; /* link status of this bond sub-device */
+ u32 active_bitmap; /* slave port status of this bond sub-device */
+
+ u8 rsvd[16];
+};
+
+struct bond_all_msg_s {
+ struct bond_drv_msg_s drv_msg;
+ struct bond_negotiate_status active_info;
+};
+
+struct get_card_bond_msg_s {
+ u32 bond_cnt;
+ struct bond_all_msg_s all_msg[MAX_BONDING_CNT_PER_CARD];
+};
+
+#define MAX_FUSION_Q_STATS_STR_LEN 16
+#define MAX_FUSION_Q_NUM 256
+struct queue_status_s {
+ pid_t tgid;
+ char status[MAX_FUSION_Q_STATS_STR_LEN];
+};
+struct fusion_q_status_s {
+ u16 queue_num;
+ struct queue_status_s queue[MAX_FUSION_Q_NUM];
+};
+
+struct fusion_q_tx_hw_page {
+ u64 phy_addr;
+ u64 *map_addr;
+};
+
+struct fusion_sq_info {
+ u16 q_id;
+ u16 pi;
+ u16 ci; /* sw_ci */
+ u16 fi; /* hw_ci */
+ u32 q_depth;
+ u16 pi_reverse;
+ u16 wqebb_size;
+ u8 priority;
+ u16 *ci_addr;
+ u64 cla_addr;
+ void *slq_handle;
+ struct fusion_q_tx_hw_page direct_wqe;
+ struct fusion_q_tx_hw_page doorbell;
+ u32 page_idx;
+ u32 glb_sq_id;
+};
+
+struct fusion_q_tx_wqe {
+ u32 data[4];
+};
+
+struct fusion_rq_info {
+ u16 q_id;
+ u16 delta;
+ u16 hw_pi;
+ u16 ci; /* sw_ci */
+ u16 sw_pi;
+ u16 wqebb_size;
+ u16 q_depth;
+ u16 buf_len;
+
+ void *slq_handle;
+ u64 ci_wqe_page_addr;
+ u64 ci_cla_tbl_addr;
+
+ u8 coalesc_timer_cfg;
+ u8 pending_limt;
+ u16 msix_idx;
+ u32 msix_vector;
+};
+
+struct fusion_q_rx_wqe {
+ u32 data[8];
+};
+
+struct fusion_q_rx_cqe {
+ union {
+ struct {
+ unsigned int checksum_err : 16;
+ unsigned int lro_num : 8;
+ unsigned int rsvd1 : 7;
+ unsigned int rx_done : 1;
+ } bs;
+ unsigned int value;
+ } dw0;
+
+ union {
+ struct {
+ unsigned int vlan : 16;
+ unsigned int length : 16;
+ } bs;
+ unsigned int value;
+ } dw1;
+
+ union {
+ struct {
+ unsigned int pkt_types : 12;
+ unsigned int rsvd : 4;
+ unsigned int udp_0 : 1;
+ unsigned int ipv6_ex_add : 1;
+ unsigned int loopback : 1;
+ unsigned int umbcast : 2;
+ unsigned int vlan_offload_en : 1;
+ unsigned int tag_num : 2;
+ unsigned int rss_type : 8;
+ } bs;
+ unsigned int value;
+ } dw2;
+
+ union {
+ struct {
+ unsigned int rss_hash_value;
+ } bs;
+ unsigned int value;
+ } dw3;
+
+ union {
+ struct {
+ unsigned int tx_ts_seq : 16;
+ unsigned int message_1588_offset : 8;
+ unsigned int message_1588_type : 4;
+ unsigned int rsvd : 1;
+ unsigned int if_rx_ts : 1;
+ unsigned int if_tx_ts : 1;
+ unsigned int if_1588 : 1;
+ } bs;
+ unsigned int value;
+ } dw4;
+
+ union {
+ struct {
+ unsigned int ts;
+ } bs;
+ unsigned int value;
+ } dw5;
+
+ union {
+ struct {
+ unsigned int lro_ts;
+ } bs;
+ unsigned int value;
+ } dw6;
+
+ union {
+ struct {
+ unsigned int rsvd0;
+ } bs;
+ unsigned int value;
+ } dw7; /* 16Bytes Align */
+};
+
+struct os_hot_repalce_func_info {
+ char card_name[HINIC3_CARD_NAME_MAX_LEN];
+ int bus_num;
+ int valid;
+ int bdf;
+ int partition;
+ int backup_pf;
+ int pf_idx;
+ int port_id;
+};
+
+#define ALL_CARD_PF_NUM 2048 /* 64 card * 32 pf */
+struct os_hot_replace_info {
+ struct os_hot_repalce_func_info func_infos[ALL_CARD_PF_NUM];
+ u32 func_cnt;
+};
+
+int alloc_buff_in(void *hwdev, struct msg_module *nt_msg, u32 in_size, void **buf_in);
+
+int alloc_buff_out(void *hwdev, struct msg_module *nt_msg, u32 out_size, void **buf_out);
+
+void free_buff_in(void *hwdev, const struct msg_module *nt_msg, void *buf_in);
+
+void free_buff_out(void *hwdev, struct msg_module *nt_msg, void *buf_out);
+
+int copy_buf_out_to_user(struct msg_module *nt_msg, u32 out_size, void *buf_out);
+
+int send_to_mpu(void *hwdev, struct msg_module *nt_msg, void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+int send_to_npu(void *hwdev, struct msg_module *nt_msg, void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size);
+int send_to_sm(void *hwdev, struct msg_module *nt_msg, void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+#endif /* HINIC3_MT_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c b/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
new file mode 100644
index 0000000..7cd9e4d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
@@ -0,0 +1,2125 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <net/dsfield.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/netlink.h>
+#include <linux/debugfs.h>
+#include <linux/ip.h>
+
+#include "ossl_knl.h"
+#if defined(HAVE_NDO_UDP_TUNNEL_ADD) || defined(HAVE_UDP_TUNNEL_NIC_INFO)
+#include <net/udp_tunnel.h>
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD || HAVE_UDP_TUNNEL_NIC_INFO */
+#ifdef HAVE_XDP_SUPPORT
+#include <linux/bpf.h>
+#endif
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_dcb.h"
+#include "hinic3_nic_prof.h"
+
+#include "nic_npu_cmd.h"
+
+#include "vram_common.h"
+
+#define HINIC3_DEFAULT_RX_CSUM_OFFLOAD 0xFFF
+
+#define HINIC3_LRO_DEFAULT_COAL_PKT_SIZE 32
+#define HINIC3_LRO_DEFAULT_TIME_LIMIT 16
+#define HINIC3_WAIT_FLUSH_QP_RESOURCE_TIMEOUT 100
+static void hinic3_nic_set_rx_mode(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (netdev_uc_count(netdev) != nic_dev->netdev_uc_cnt ||
+ netdev_mc_count(netdev) != nic_dev->netdev_mc_cnt) {
+ set_bit(HINIC3_UPDATE_MAC_FILTER, &nic_dev->flags);
+ nic_dev->netdev_uc_cnt = netdev_uc_count(netdev);
+ nic_dev->netdev_mc_cnt = netdev_mc_count(netdev);
+ }
+
+ queue_work(nic_dev->workq, &nic_dev->rx_mode_work);
+}
+
+static void hinic3_free_irq_vram(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *in_q_params)
+{
+ u32 size;
+ int is_use_vram = get_use_vram_flag();
+ struct hinic3_dyna_txrxq_params q_params = nic_dev->q_params;
+
+ if (q_params.irq_cfg == NULL)
+ return;
+
+ size = sizeof(struct hinic3_irq) * (q_params.num_qps);
+
+ if (is_use_vram != 0) {
+ hi_vram_kfree((void *)q_params.irq_cfg, q_params.irq_cfg_vram_name, size);
+ q_params.irq_cfg = NULL;
+ } else {
+ kfree(in_q_params->irq_cfg);
+ in_q_params->irq_cfg = NULL;
+ }
+}
+
+static int hinic3_alloc_irq_vram(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params, bool is_up_eth)
+{
+ u32 size;
+ int is_use_vram = get_use_vram_flag();
+ u16 func_id;
+
+ size = sizeof(struct hinic3_irq) * q_params->num_qps;
+
+ if (is_use_vram != 0) {
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ snprintf(q_params->irq_cfg_vram_name,
+ VRAM_NAME_MAX_LEN, "%s%u",
+ VRAM_NIC_IRQ_VRAM, func_id);
+ q_params->irq_cfg = (struct hinic3_irq *)hi_vram_kalloc(
+ q_params->irq_cfg_vram_name, size);
+ if (q_params->irq_cfg == NULL) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "NIC irq vram alloc failed.\n");
+ return -ENOMEM;
+ }
+ /* the irq cfg must be re-initialized when eth is brought up, in order to clear the napi stored in vram */
+ if (is_up_eth) {
+ memset(q_params->irq_cfg, 0, size);
+ }
+ } else {
+ q_params->irq_cfg = kzalloc(size, GFP_KERNEL);
+ if (q_params->irq_cfg == NULL) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "NIC irq alloc failed.\n");
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+static int hinic3_alloc_txrxq_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params,
+ bool is_up_eth)
+{
+ u32 size;
+ int err;
+
+ size = sizeof(*q_params->txqs_res) * q_params->num_qps;
+ q_params->txqs_res = kzalloc(size, GFP_KERNEL);
+ if (!q_params->txqs_res) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txqs resources array\n");
+ return -ENOMEM;
+ }
+
+ size = sizeof(*q_params->rxqs_res) * q_params->num_qps;
+ q_params->rxqs_res = kzalloc(size, GFP_KERNEL);
+ if (!q_params->rxqs_res) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxqs resource array\n");
+ err = -ENOMEM;
+ goto alloc_rxqs_res_arr_err;
+ }
+
+ err = hinic3_alloc_irq_vram(nic_dev, q_params, is_up_eth);
+ if (err != 0) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc irq resource array\n");
+ goto alloc_irq_cfg_err;
+ }
+
+ err = hinic3_alloc_txqs_res(nic_dev, q_params->num_qps,
+ q_params->sq_depth, q_params->txqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txqs resource\n");
+ goto alloc_txqs_res_err;
+ }
+
+ err = hinic3_alloc_rxqs_res(nic_dev, q_params->num_qps,
+ q_params->rq_depth, q_params->rxqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxqs resource\n");
+ goto alloc_rxqs_res_err;
+ }
+
+ return 0;
+
+alloc_rxqs_res_err:
+ hinic3_free_txqs_res(nic_dev, q_params->num_qps, q_params->sq_depth,
+ q_params->txqs_res);
+
+alloc_txqs_res_err:
+ hinic3_free_irq_vram(nic_dev, q_params);
+
+alloc_irq_cfg_err:
+ kfree(q_params->rxqs_res);
+ q_params->rxqs_res = NULL;
+
+alloc_rxqs_res_arr_err:
+ kfree(q_params->txqs_res);
+ q_params->txqs_res = NULL;
+
+ return err;
+}
+
+static void hinic3_free_txrxq_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params)
+{
+ int is_in_kexec = vram_get_kexec_flag();
+ hinic3_free_rxqs_res(nic_dev, q_params->num_qps, q_params->rq_depth,
+ q_params->rxqs_res);
+ hinic3_free_txqs_res(nic_dev, q_params->num_qps, q_params->sq_depth,
+ q_params->txqs_res);
+
+ if (is_in_kexec == 0)
+ hinic3_free_irq_vram(nic_dev, q_params);
+
+ kfree(q_params->rxqs_res);
+ q_params->rxqs_res = NULL;
+
+ kfree(q_params->txqs_res);
+ q_params->txqs_res = NULL;
+}
+
+static int hinic3_configure_txrxqs(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params)
+{
+ int err;
+
+ err = hinic3_configure_txqs(nic_dev, q_params->num_qps,
+ q_params->sq_depth, q_params->txqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to configure txqs\n");
+ return err;
+ }
+
+ err = hinic3_configure_rxqs(nic_dev, q_params->num_qps,
+ q_params->rq_depth, q_params->rxqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to configure rxqs\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static void config_dcb_qps_map(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ u8 num_cos;
+
+ if (!test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ hinic3_update_tx_db_cos(nic_dev, 0);
+ return;
+ }
+
+ num_cos = hinic3_get_dev_user_cos_num(nic_dev);
+ hinic3_update_qp_cos_cfg(nic_dev, num_cos);
+ /* For now, we don't support changing num_cos */
+ if (num_cos > dcb->cos_config_num_max ||
+ nic_dev->q_params.num_qps < num_cos) {
+ nicif_err(nic_dev, drv, netdev, "Invalid num_cos: %u or num_qps: %u, disable DCB\n",
+ num_cos, nic_dev->q_params.num_qps);
+ nic_dev->q_params.num_cos = 0;
+ clear_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ clear_bit(HINIC3_DCB_ENABLE, &nic_dev->nic_vram->flags);
+ /* if we can't enable rss or get enough num_qps,
+ * we need to sync the default configuration to hw
+ */
+ hinic3_configure_dcb(netdev);
+ }
+
+ hinic3_update_tx_db_cos(nic_dev, 1);
+}
+
+static int hinic3_configure(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int err;
+ int is_in_kexec = vram_get_kexec_flag();
+
+ if (is_in_kexec == 0) {
+ err = hinic3_set_port_mtu(nic_dev->hwdev, (u16)netdev->mtu);
+ if (err != 0) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set mtu\n");
+ return err;
+ }
+ }
+
+ config_dcb_qps_map(nic_dev);
+
+ /* rx rss init */
+ err = hinic3_rx_configure(netdev, test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to configure rx\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static void hinic3_remove_configure(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_rx_remove_configure(nic_dev->netdev);
+}
+
+/* try to modify the number of irqs to the target number,
+ * and return the actual number of irqs.
+ */
+static u16 hinic3_qp_irq_change(struct hinic3_nic_dev *nic_dev,
+ u16 dst_num_qp_irq)
+{
+ struct irq_info *qps_irq_info = nic_dev->qps_irq_info;
+ u16 resp_irq_num, irq_num_gap, i;
+ u16 idx;
+ int err;
+
+ if (dst_num_qp_irq > nic_dev->num_qp_irq) {
+ irq_num_gap = dst_num_qp_irq - nic_dev->num_qp_irq;
+ err = hinic3_alloc_irqs(nic_dev->hwdev, SERVICE_T_NIC,
+ irq_num_gap,
+ &qps_irq_info[nic_dev->num_qp_irq],
+ &resp_irq_num);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc irqs\n");
+ return nic_dev->num_qp_irq;
+ }
+
+ nic_dev->num_qp_irq += resp_irq_num;
+ } else if (dst_num_qp_irq < nic_dev->num_qp_irq) {
+ irq_num_gap = nic_dev->num_qp_irq - dst_num_qp_irq;
+ for (i = 0; i < irq_num_gap; i++) {
+ idx = (nic_dev->num_qp_irq - i) - 1;
+ hinic3_free_irq(nic_dev->hwdev, SERVICE_T_NIC,
+ qps_irq_info[idx].irq_id);
+ qps_irq_info[idx].irq_id = 0;
+ qps_irq_info[idx].msix_entry_idx = 0;
+ }
+ nic_dev->num_qp_irq = dst_num_qp_irq;
+ }
+
+ return nic_dev->num_qp_irq;
+}
+
+static void config_dcb_num_qps(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_dyna_txrxq_params *q_params,
+ u16 max_qps)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ u8 num_cos = q_params->num_cos;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ if (!num_cos || num_cos > dcb->cos_config_num_max || num_cos > max_qps)
+ return; /* will disable DCB in config_dcb_qps_map() */
+
+ hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
+}
+
+static void hinic3_config_num_qps(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params)
+{
+ u16 alloc_num_irq, cur_num_irq;
+ u16 dst_num_irq;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags))
+ q_params->num_qps = 1;
+
+ config_dcb_num_qps(nic_dev, q_params, q_params->num_qps);
+
+ if (nic_dev->num_qp_irq >= q_params->num_qps)
+ goto out;
+
+ cur_num_irq = nic_dev->num_qp_irq;
+
+ alloc_num_irq = hinic3_qp_irq_change(nic_dev, q_params->num_qps);
+ if (alloc_num_irq < q_params->num_qps) {
+ q_params->num_qps = alloc_num_irq;
+ config_dcb_num_qps(nic_dev, q_params, q_params->num_qps);
+ nicif_warn(nic_dev, drv, nic_dev->netdev,
+ "Can not get enough irqs, adjust num_qps to %u\n",
+ q_params->num_qps);
+
+ /* The current irq may be in use, we must keep it */
+ dst_num_irq = (u16)max_t(u16, cur_num_irq, q_params->num_qps);
+ hinic3_qp_irq_change(nic_dev, dst_num_irq);
+ }
+
+out:
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Finally num_qps: %u\n",
+ q_params->num_qps);
+}
+
+/* determine num_qps from rss_tmpl_id/irq_num/dcb_en */
+static int hinic3_setup_num_qps(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u32 irq_size;
+
+ nic_dev->num_qp_irq = 0;
+
+ irq_size = sizeof(*nic_dev->qps_irq_info) * nic_dev->max_qps;
+ if (!irq_size) {
+ nicif_err(nic_dev, drv, netdev, "Cannot allocate zero size entries\n");
+ return -EINVAL;
+ }
+ nic_dev->qps_irq_info = kzalloc(irq_size, GFP_KERNEL);
+ if (!nic_dev->qps_irq_info) {
+ nicif_err(nic_dev, drv, netdev, "Failed to alloc qps_irq_info\n");
+ return -ENOMEM;
+ }
+
+ hinic3_config_num_qps(nic_dev, &nic_dev->q_params);
+
+ return 0;
+}
+
+static void hinic3_destroy_num_qps(struct hinic3_nic_dev *nic_dev)
+{
+ u16 i;
+
+ for (i = 0; i < nic_dev->num_qp_irq; i++)
+ hinic3_free_irq(nic_dev->hwdev, SERVICE_T_NIC,
+ nic_dev->qps_irq_info[i].irq_id);
+
+ kfree(nic_dev->qps_irq_info);
+}
+
+int hinic3_maybe_set_port_state(struct hinic3_nic_dev *nic_dev, bool enable)
+{
+ return hinic3_set_port_enable(nic_dev->hwdev, enable,
+ HINIC3_CHANNEL_NIC);
+}
+
+static void hinic3_print_link_message(struct hinic3_nic_dev *nic_dev,
+ u8 link_status)
+{
+ if (nic_dev->link_status == link_status)
+ return;
+
+ nic_dev->link_status = link_status;
+
+ nicif_info(nic_dev, link, nic_dev->netdev, "Link is %s\n",
+ (link_status ? "up" : "down"));
+}
+
+static int hinic3_alloc_channel_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params,
+ struct hinic3_dyna_txrxq_params *trxq_params,
+ bool is_up_eth)
+{
+ int err;
+
+ qp_params->num_qps = trxq_params->num_qps;
+ qp_params->sq_depth = trxq_params->sq_depth;
+ qp_params->rq_depth = trxq_params->rq_depth;
+
+ err = hinic3_alloc_qps(nic_dev->hwdev, nic_dev->qps_irq_info,
+ qp_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc qps\n");
+ return err;
+ }
+
+ err = hinic3_alloc_txrxq_resources(nic_dev, trxq_params, is_up_eth);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc txrxq resources\n");
+ hinic3_free_qps(nic_dev->hwdev, qp_params);
+ return err;
+ }
+
+ return 0;
+}
+
+static void hinic3_free_channel_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params,
+ struct hinic3_dyna_txrxq_params *trxq_params)
+{
+ mutex_lock(&nic_dev->nic_mutex);
+ hinic3_free_txrxq_resources(nic_dev, trxq_params);
+ hinic3_free_qps(nic_dev->hwdev, qp_params);
+ mutex_unlock(&nic_dev->nic_mutex);
+}
+
+static int hinic3_open_channel(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params,
+ struct hinic3_dyna_txrxq_params *trxq_params)
+{
+ int err;
+
+ err = hinic3_init_qps(nic_dev->hwdev, qp_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to init qps\n");
+ return err;
+ }
+
+ err = hinic3_configure_txrxqs(nic_dev, trxq_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to configure txrxqs\n");
+ goto cfg_txrxqs_err;
+ }
+
+ err = hinic3_qps_irq_init(nic_dev);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to init txrxq irq\n");
+ goto init_qp_irq_err;
+ }
+
+ err = hinic3_configure(nic_dev);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to configure nic\n");
+ goto configure_err;
+ }
+
+ return 0;
+
+configure_err:
+ hinic3_qps_irq_deinit(nic_dev);
+
+init_qp_irq_err:
+cfg_txrxqs_err:
+ hinic3_deinit_qps(nic_dev->hwdev, qp_params);
+
+ return err;
+}
+
+static void hinic3_close_channel(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params)
+{
+ hinic3_remove_configure(nic_dev);
+ hinic3_qps_irq_deinit(nic_dev);
+ hinic3_deinit_qps(nic_dev->hwdev, qp_params);
+}
+
+int hinic3_vport_up(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 link_status = 0;
+ u16 glb_func_id;
+ int err;
+
+ glb_func_id = hinic3_global_func_id(nic_dev->hwdev);
+ err = hinic3_set_vport_enable(nic_dev->hwdev, glb_func_id, true,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to enable vport\n");
+ goto vport_enable_err;
+ }
+
+ err = hinic3_maybe_set_port_state(nic_dev, true);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to enable port\n");
+ goto port_enable_err;
+ }
+
+ netif_set_real_num_tx_queues(netdev, nic_dev->q_params.num_qps);
+ netif_set_real_num_rx_queues(netdev, nic_dev->q_params.num_qps);
+ netif_tx_wake_all_queues(netdev);
+
+ if (test_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags)) {
+ link_status = true;
+ netif_carrier_on(netdev);
+ } else {
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_status);
+ if (!err && link_status)
+ netif_carrier_on(netdev);
+ }
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->moderation_task,
+ HINIC3_MODERATONE_DELAY);
+ if (test_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ queue_delayed_work(nic_dev->workq, &nic_dev->rxq_check_work, HZ);
+
+ hinic3_print_link_message(nic_dev, link_status);
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, link_status);
+
+ return 0;
+
+port_enable_err:
+ hinic3_set_vport_enable(nic_dev->hwdev, glb_func_id, false,
+ HINIC3_CHANNEL_NIC);
+
+vport_enable_err:
+ hinic3_flush_qps_res(nic_dev->hwdev);
+	/* After the vport is disabled, wait 100ms so that no more packets are sent to the host */
+ msleep(100);
+
+ return err;
+}
+
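+/* During hot replace (kexec) teardown the driver does not wait a fixed time
+ * for the queues to drain; instead each RQ is checked through the
+ * HINIC3_UCODE_CHK_RQ_STOP cmdq command to confirm it has stopped before the
+ * queue resources are flushed.
+ */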
+static int hinic3_flush_rq_and_check(struct hinic3_nic_dev *nic_dev,
+ u16 glb_func_id)
+{
+ struct hinic3_flush_rq *rq_flush_msg = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ int out_buf_len = sizeof(struct hinic3_flush_rq);
+ u16 rq_id;
+ u64 out_param = 0;
+ int ret;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev);
+ if (!cmd_buf) {
+ nic_err(&nic_dev->pdev->dev, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct hinic3_flush_rq);
+ rq_flush_msg = (struct hinic3_flush_rq *)cmd_buf->buf;
+ rq_flush_msg->dw.bs.func_id = glb_func_id;
+ for (rq_id = 0; rq_id < nic_dev->q_params.num_qps; rq_id++) {
+ rq_flush_msg->dw.bs.rq_id = rq_id;
+ hinic3_cpu_to_be32(rq_flush_msg, out_buf_len);
+ ret = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CHK_RQ_STOP,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (ret != 0 || out_param != 0) {
+ nic_err(&nic_dev->pdev->dev, "Failed to flush rq, ret:%d, func:%u, rq:%u\n",
+ ret, glb_func_id, rq_id);
+ goto err;
+ }
+ hinic3_be32_to_cpu(rq_flush_msg, out_buf_len);
+ }
+
+ nic_info(&nic_dev->pdev->dev, "Func:%u rq_num:%u flush rq success\n",
+ glb_func_id, nic_dev->q_params.num_qps);
+ hinic3_free_cmd_buf(nic_dev->hwdev, cmd_buf);
+ return 0;
+err:
+ hinic3_free_cmd_buf(nic_dev->hwdev, cmd_buf);
+ return -1;
+}
+
+void hinic3_vport_down(struct hinic3_nic_dev *nic_dev)
+{
+ u16 glb_func_id;
+ int is_in_kexec = vram_get_kexec_flag();
+
+ netif_carrier_off(nic_dev->netdev);
+ netif_tx_disable(nic_dev->netdev);
+
+ cancel_delayed_work_sync(&nic_dev->rxq_check_work);
+
+ cancel_delayed_work_sync(&nic_dev->moderation_task);
+
+ if (hinic3_get_chip_present_flag(nic_dev->hwdev)) {
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, 0);
+
+ if (is_in_kexec != 0)
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Skip changing mag status!\n");
+ else
+ hinic3_maybe_set_port_state(nic_dev, false);
+
+ glb_func_id = hinic3_global_func_id(nic_dev->hwdev);
+ hinic3_set_vport_enable(nic_dev->hwdev, glb_func_id, false,
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_flush_txqs(nic_dev->netdev);
+ if (is_in_kexec == 0) {
+ msleep(HINIC3_WAIT_FLUSH_QP_RESOURCE_TIMEOUT);
+ } else {
+ (void)hinic3_flush_rq_and_check(nic_dev, glb_func_id);
+ }
+ hinic3_flush_qps_res(nic_dev->hwdev);
+ }
+}
+
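+/* Channel reconfiguration allocates the new queue resources first, then
+ * tears down the running channel, applies the new parameters and brings the
+ * vport back up; on failure the freshly allocated resources are released and
+ * the error is returned to the caller.
+ */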
+int hinic3_change_channel_settings(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *trxq_params,
+ hinic3_reopen_handler reopen_handler,
+ const void *priv_data)
+{
+ struct hinic3_dyna_qp_params new_qp_params = {0};
+ struct hinic3_dyna_qp_params cur_qp_params = {0};
+ int err;
+ bool is_free_resources = false;
+
+ hinic3_config_num_qps(nic_dev, trxq_params);
+
+ err = hinic3_alloc_channel_resources(nic_dev, &new_qp_params,
+ trxq_params, false);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc channel resources\n");
+ return err;
+ }
+
+ if (!test_and_set_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags)) {
+ hinic3_vport_down(nic_dev);
+ hinic3_close_channel(nic_dev, &cur_qp_params);
+ hinic3_free_channel_resources(nic_dev, &cur_qp_params,
+ &nic_dev->q_params);
+ is_free_resources = true;
+ }
+
+ if (nic_dev->num_qp_irq > trxq_params->num_qps)
+ hinic3_qp_irq_change(nic_dev, trxq_params->num_qps);
+
+ if (is_free_resources) {
+ err = hinic3_alloc_irq_vram(nic_dev, trxq_params, false);
+ if (err != 0) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Change chl alloc irq failed\n");
+ goto alloc_irq_err;
+ }
+ }
+ nic_dev->q_params = *trxq_params;
+
+ if (reopen_handler)
+ reopen_handler(nic_dev, priv_data);
+
+ err = hinic3_open_channel(nic_dev, &new_qp_params, trxq_params);
+ if (err)
+ goto open_channel_err;
+
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_err;
+
+ clear_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags);
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Change channel settings success\n");
+
+ return 0;
+
+vport_up_err:
+ hinic3_close_channel(nic_dev, &new_qp_params);
+alloc_irq_err:
+open_channel_err:
+ hinic3_free_channel_resources(nic_dev, &new_qp_params, trxq_params);
+
+ return err;
+}
+
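+/* ndo_open: reserve nicio resources, size the queue set, allocate and open
+ * the channel, then bring the vport up; failures unwind in reverse order.
+ */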
+int hinic3_open(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_qp_params qp_params = {0};
+ int err;
+
+ if (test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_info(nic_dev, drv, netdev, "Netdev already open, do nothing\n");
+ return 0;
+ }
+
+ err = hinic3_init_nicio_res(nic_dev->hwdev);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to init nicio resources\n");
+ return err;
+ }
+
+ err = hinic3_setup_num_qps(nic_dev);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to setup num_qps\n");
+ goto setup_qps_err;
+ }
+
+ err = hinic3_alloc_channel_resources(nic_dev, &qp_params,
+ &nic_dev->q_params, true);
+ if (err)
+ goto alloc_channel_res_err;
+
+ err = hinic3_open_channel(nic_dev, &qp_params, &nic_dev->q_params);
+ if (err)
+ goto open_channel_err;
+
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_err;
+
+ err = hinic3_set_master_dev_state(nic_dev, true);
+ if (err)
+ goto set_master_dev_err;
+
+ set_bit(HINIC3_INTF_UP, &nic_dev->flags);
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Netdev is up\n");
+
+ return 0;
+
+set_master_dev_err:
+ hinic3_vport_down(nic_dev);
+
+vport_up_err:
+ hinic3_close_channel(nic_dev, &qp_params);
+
+open_channel_err:
+ hinic3_free_channel_resources(nic_dev, &qp_params, &nic_dev->q_params);
+
+alloc_channel_res_err:
+ hinic3_destroy_num_qps(nic_dev);
+
+setup_qps_err:
+ hinic3_deinit_nicio_res(nic_dev->hwdev);
+
+ return err;
+}
+
+static void hinic3_delete_napi(struct hinic3_nic_dev *nic_dev)
+{
+ u16 q_id;
+ int is_in_kexec = vram_get_kexec_flag();
+ struct hinic3_irq *irq_cfg = NULL;
+
+ if (is_in_kexec == 0 || nic_dev->q_params.irq_cfg == NULL)
+ return;
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ irq_cfg = &(nic_dev->q_params.irq_cfg[q_id]);
+ qp_del_napi(irq_cfg);
+ }
+
+ hinic3_free_irq_vram(nic_dev, &nic_dev->q_params);
+}
+
+int hinic3_close(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_qp_params qp_params = {0};
+
+ if (!test_and_clear_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ /* delete napi in os hotreplace rollback */
+ hinic3_delete_napi(nic_dev);
+		nicif_info(nic_dev, drv, netdev, "Netdev already closed, do nothing\n");
+ return 0;
+ }
+
+ if (test_and_clear_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags))
+ goto out;
+
+ hinic3_set_master_dev_state(nic_dev, false);
+
+ hinic3_vport_down(nic_dev);
+ hinic3_close_channel(nic_dev, &qp_params);
+ hinic3_free_channel_resources(nic_dev, &qp_params, &nic_dev->q_params);
+
+out:
+ hinic3_deinit_nicio_res(nic_dev->hwdev);
+ hinic3_destroy_num_qps(nic_dev);
+
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Netdev is down\n");
+
+ return 0;
+}
+
+#define IPV6_ADDR_LEN 4
+#define PKT_INFO_LEN 9
+#define BITS_PER_TUPLE 32
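+/* XOR hash engine: fold all bytes of the RSS tuple into a single value. */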
+static u32 calc_xor_rss(u8 *rss_tunple, u32 len)
+{
+ u32 hash_value;
+ u32 i;
+
+ hash_value = rss_tunple[0];
+ for (i = 1; i < len; i++)
+ hash_value = hash_value ^ rss_tunple[i];
+
+ return hash_value;
+}
+
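+/* Toeplitz hash engine: for every set bit in the input tuple, XOR in the
+ * 32-bit window of the hash key that starts at that bit position.
+ */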
+static u32 calc_toep_rss(const u32 *rss_tunple, u32 len, const u32 *rss_key)
+{
+ u32 rss = 0;
+ u32 i, j;
+
+ for (i = 1; i <= len; i++) {
+ for (j = 0; j < BITS_PER_TUPLE; j++)
+ if (rss_tunple[i - 1] & ((u32)1 <<
+ (u32)((BITS_PER_TUPLE - 1) - j)))
+ rss ^= (rss_key[i - 1] << j) |
+ (u32)((u64)rss_key[i] >>
+ (BITS_PER_TUPLE - j));
+ }
+
+ return rss;
+}
+
+#define RSS_VAL(val, type) \
+ (((type) == HINIC3_RSS_HASH_ENGINE_TYPE_TOEP) ? ntohl(val) : (val))
+
+static u8 parse_ipv6_info(struct sk_buff *skb, u32 *rss_tunple,
+ u8 hash_engine, u32 *len)
+{
+ struct ipv6hdr *ipv6hdr = ipv6_hdr(skb);
+ u32 *saddr = (u32 *)&ipv6hdr->saddr;
+ u32 *daddr = (u32 *)&ipv6hdr->daddr;
+ u8 i;
+
+ for (i = 0; i < IPV6_ADDR_LEN; i++) {
+ rss_tunple[i] = RSS_VAL(daddr[i], hash_engine);
+		/* The source address words follow the destination address words (offset IPV6_ADDR_LEN) */
+ rss_tunple[(u32)(i + IPV6_ADDR_LEN)] =
+ RSS_VAL(saddr[i], hash_engine);
+ }
+ *len = IPV6_ADDR_LEN + IPV6_ADDR_LEN;
+
+ if (skb_network_header(skb) + sizeof(*ipv6hdr) ==
+ skb_transport_header(skb))
+ return ipv6hdr->nexthdr;
+ return 0;
+}
+
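+/* Select the tx queue with the same RSS hash and indirection table that the
+ * hardware uses on rx; if the skb already records an rx queue, that queue
+ * index is reused directly.
+ */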
+static u16 select_queue_by_hash_func(struct net_device *dev, struct sk_buff *skb,
+ unsigned int num_tx_queues)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(dev);
+ struct nic_rss_type rss_type = nic_dev->rss_type;
+ struct iphdr *iphdr = NULL;
+ u32 rss_tunple[PKT_INFO_LEN] = {0};
+ u32 len = 0;
+ u32 hash = 0;
+ u8 hash_engine = nic_dev->rss_hash_engine;
+ u8 l4_proto;
+ unsigned char *l4_hdr = NULL;
+
+ if (skb_rx_queue_recorded(skb)) {
+ hash = skb_get_rx_queue(skb);
+ if (unlikely(hash >= num_tx_queues))
+ hash %= num_tx_queues;
+
+ return (u16)hash;
+ }
+
+ iphdr = ip_hdr(skb);
+ if (iphdr->version == IPV4_VERSION) {
+ rss_tunple[len++] = RSS_VAL(iphdr->daddr, hash_engine);
+ rss_tunple[len++] = RSS_VAL(iphdr->saddr, hash_engine);
+ l4_proto = iphdr->protocol;
+ } else if (iphdr->version == IPV6_VERSION) {
+ l4_proto = parse_ipv6_info(skb, (u32 *)rss_tunple,
+ hash_engine, &len);
+ } else {
+ return (u16)hash;
+ }
+
+ if ((iphdr->version == IPV4_VERSION &&
+ ((l4_proto == IPPROTO_UDP && rss_type.udp_ipv4) ||
+ (l4_proto == IPPROTO_TCP && rss_type.tcp_ipv4))) ||
+ (iphdr->version == IPV6_VERSION &&
+ ((l4_proto == IPPROTO_UDP && rss_type.udp_ipv6) ||
+ (l4_proto == IPPROTO_TCP && rss_type.tcp_ipv6)))) {
+ l4_hdr = skb_transport_header(skb);
+ /* High 16 bits are dport, low 16 bits are sport. */
+ rss_tunple[len++] = ((u32)ntohs(*((u16 *)l4_hdr + 1U)) << 16) |
+ ntohs(*(u16 *)l4_hdr);
+	} /* rss_type.ipv4 and rss_type.ipv6 are enabled by default. */
+
+ if (hash_engine == HINIC3_RSS_HASH_ENGINE_TYPE_TOEP)
+ hash = calc_toep_rss((u32 *)rss_tunple, len,
+ nic_dev->rss_hkey_be);
+ else
+ hash = calc_xor_rss((u8 *)rss_tunple, len * (u32)sizeof(u32));
+
+ return (u16)nic_dev->rss_indir[hash & 0xFF];
+}
+
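+/* DSCP occupies the upper 6 bits of the IP DS field; shifting right by 2
+ * drops the ECN bits before the dscp2cos lookup.
+ */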
+#define GET_DSCP_PRI_OFFSET 2
+static u8 hinic3_get_dscp_up(struct hinic3_nic_dev *nic_dev, struct sk_buff *skb)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int dscp_cp;
+
+ if (skb->protocol == htons(ETH_P_IP))
+ dscp_cp = ipv4_get_dsfield(ip_hdr(skb)) >> GET_DSCP_PRI_OFFSET;
+ else if (skb->protocol == htons(ETH_P_IPV6))
+ dscp_cp = ipv6_get_dsfield(ipv6_hdr(skb)) >> GET_DSCP_PRI_OFFSET;
+ else
+ return dcb->hw_dcb_cfg.default_cos;
+ return dcb->hw_dcb_cfg.dscp2cos[dscp_cp];
+}
+
+#if defined(HAVE_NDO_SELECT_QUEUE_SB_DEV_ONLY)
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ struct net_device *sb_dev)
+#elif defined(HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK)
+#if defined(HAVE_NDO_SELECT_QUEUE_SB_DEV)
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ struct net_device *sb_dev,
+ select_queue_fallback_t fallback)
+#else
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ __always_unused void *accel,
+ select_queue_fallback_t fallback)
+#endif
+
+#elif defined(HAVE_NDO_SELECT_QUEUE_ACCEL)
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ __always_unused void *accel)
+
+#else
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb)
+#endif /* end of HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK */
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ u16 txq;
+ u8 cos, qp_num;
+
+ if (test_bit(HINIC3_SAME_RXTX, &nic_dev->flags))
+ return select_queue_by_hash_func(netdev, skb, netdev->real_num_tx_queues);
+
+ txq =
+#if defined(HAVE_NDO_SELECT_QUEUE_SB_DEV_ONLY)
+ netdev_pick_tx(netdev, skb, NULL);
+#elif defined(HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK)
+#ifdef HAVE_NDO_SELECT_QUEUE_SB_DEV
+ fallback(netdev, skb, sb_dev);
+#else
+ fallback(netdev, skb);
+#endif
+#else
+ skb_tx_hash(netdev, skb);
+#endif
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ if (dcb->hw_dcb_cfg.trust == HINIC3_DCB_PCP) {
+ if (skb->vlan_tci)
+ cos = dcb->hw_dcb_cfg.pcp2cos[skb->vlan_tci >>
+ VLAN_PRIO_SHIFT];
+ else
+ cos = dcb->hw_dcb_cfg.default_cos;
+ } else {
+ cos = hinic3_get_dscp_up(nic_dev, skb);
+ }
+
+ qp_num = dcb->hw_dcb_cfg.cos_qp_num[cos] ?
+ txq % dcb->hw_dcb_cfg.cos_qp_num[cos] : 0;
+ txq = dcb->hw_dcb_cfg.cos_qp_offset[cos] + qp_num;
+ }
+
+ return txq;
+}
+
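+/* Aggregate the per-queue tx/rx counters into the netdev stats; each queue
+ * is read under a u64_stats retry loop so the 64-bit counters are sampled
+ * consistently.
+ */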
+#ifdef HAVE_NDO_GET_STATS64
+#ifdef HAVE_VOID_NDO_GET_STATS64
+static void hinic3_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+#else
+static struct rtnl_link_stats64
+ *hinic3_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+#endif
+
+#else /* !HAVE_NDO_GET_STATS64 */
+static struct net_device_stats *hinic3_get_stats(struct net_device *netdev)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+#ifndef HAVE_NDO_GET_STATS64
+#ifdef HAVE_NETDEV_STATS_IN_NETDEV
+ struct net_device_stats *stats = &netdev->stats;
+#else
+ struct net_device_stats *stats = &nic_dev->net_stats;
+#endif /* HAVE_NETDEV_STATS_IN_NETDEV */
+#endif /* HAVE_NDO_GET_STATS64 */
+ struct hinic3_txq_stats *txq_stats = NULL;
+ struct hinic3_rxq_stats *rxq_stats = NULL;
+ struct hinic3_txq *txq = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ u64 bytes, packets, dropped, errors;
+ unsigned int start;
+ int i;
+
+ bytes = 0;
+ packets = 0;
+ dropped = 0;
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ if (!nic_dev->txqs)
+ break;
+
+ txq = &nic_dev->txqs[i];
+ txq_stats = &txq->txq_stats;
+ do {
+ start = u64_stats_fetch_begin(&txq_stats->syncp);
+ bytes += txq_stats->bytes;
+ packets += txq_stats->packets;
+ dropped += txq_stats->dropped;
+ } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+ }
+ stats->tx_packets = packets;
+ stats->tx_bytes = bytes;
+ stats->tx_dropped = dropped;
+
+ bytes = 0;
+ packets = 0;
+ errors = 0;
+ dropped = 0;
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ if (!nic_dev->rxqs)
+ break;
+
+ rxq = &nic_dev->rxqs[i];
+ rxq_stats = &rxq->rxq_stats;
+ do {
+ start = u64_stats_fetch_begin(&rxq_stats->syncp);
+ bytes += rxq_stats->bytes;
+ packets += rxq_stats->packets;
+ errors += rxq_stats->csum_errors +
+ rxq_stats->other_errors;
+ dropped += rxq_stats->dropped;
+ } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+ }
+ stats->rx_packets = packets;
+ stats->rx_bytes = bytes;
+ stats->rx_errors = errors;
+ stats->rx_dropped = dropped + nic_dev->vport_stats.rx_discard_vport;
+
+#ifndef HAVE_VOID_NDO_GET_STATS64
+ return stats;
+#endif
+}
+
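+/* On a tx timeout, dump the software producer index and hardware consumer
+ * index of every stopped queue; a mismatch means the hardware has stalled,
+ * so the tx-timeout event is flagged for the recovery work.
+ */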
+#ifdef HAVE_NDO_TX_TIMEOUT_TXQ
+static void hinic3_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+#else
+static void hinic3_tx_timeout(struct net_device *netdev)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_io_queue *sq = NULL;
+ bool hw_err = false;
+ u32 sw_pi, hw_ci;
+ u8 q_id;
+
+ HINIC3_NIC_STATS_INC(nic_dev, netdev_tx_timeout);
+ nicif_err(nic_dev, drv, netdev, "Tx timeout\n");
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ if (!netif_xmit_stopped(netdev_get_tx_queue(netdev, q_id)))
+ continue;
+
+ sq = nic_dev->txqs[q_id].sq;
+ sw_pi = hinic3_get_sq_local_pi(sq);
+ hw_ci = hinic3_get_sq_hw_ci(sq);
+ nicif_info(nic_dev, drv, netdev,
+ "txq%u: sw_pi: %hu, hw_ci: %u, sw_ci: %u, napi->state: 0x%lx.\n",
+ q_id, sw_pi, hw_ci, hinic3_get_sq_local_ci(sq),
+ nic_dev->q_params.irq_cfg[q_id].napi.state);
+
+ if (sw_pi != hw_ci)
+ hw_err = true;
+ }
+
+ if (hw_err)
+ set_bit(EVENT_WORK_TX_TIMEOUT, &nic_dev->event_flag);
+}
+
+static int hinic3_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u32 mtu = (u32)new_mtu;
+ int err = 0;
+ int is_in_kexec = vram_get_kexec_flag();
+#ifdef HAVE_XDP_SUPPORT
+ u32 xdp_max_mtu;
+#endif
+
+ if (is_in_kexec != 0) {
+		nicif_info(nic_dev, drv, netdev, "Skip MTU change during hot replace\n");
+ return err;
+ }
+
+#ifdef HAVE_XDP_SUPPORT
+ if (hinic3_is_xdp_enable(nic_dev)) {
+ xdp_max_mtu = hinic3_xdp_max_mtu(nic_dev);
+ if (mtu > xdp_max_mtu) {
+ nicif_err(nic_dev, drv, netdev,
+ "Max MTU for xdp usage is %d\n", xdp_max_mtu);
+ return -EINVAL;
+ }
+ }
+#endif
+
+ err = hinic3_config_port_mtu(nic_dev, mtu);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to change port mtu to %d\n",
+ new_mtu);
+ } else {
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Change mtu from %u to %d\n",
+ netdev->mtu, new_mtu);
+ netdev->mtu = mtu;
+ nic_dev->nic_vram->vram_mtu = mtu;
+ }
+
+ return err;
+}
+
+static int hinic3_set_mac_addr(struct net_device *netdev, void *addr)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct sockaddr *saddr = addr;
+ int err;
+
+ if (!is_valid_ether_addr(saddr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ if (ether_addr_equal(netdev->dev_addr, saddr->sa_data)) {
+ nicif_info(nic_dev, drv, netdev,
+ "Already using mac address %pM\n",
+ saddr->sa_data);
+ return 0;
+ }
+
+ err = hinic3_config_port_mac(nic_dev, saddr);
+ if (err)
+ return err;
+
+ ether_addr_copy(netdev->dev_addr, saddr->sa_data);
+
+ nicif_info(nic_dev, drv, netdev, "Set new mac address %pM\n",
+ saddr->sa_data);
+
+ return 0;
+}
+
+#if defined(HAVE_NDO_UDP_TUNNEL_ADD) || defined(HAVE_UDP_TUNNEL_NIC_INFO)
+static int hinic3_udp_tunnel_port_config(struct net_device *netdev,
+ struct udp_tunnel_info *ti,
+ u8 action)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 func_id = hinic3_global_func_id(nic_dev->hwdev);
+ u16 dst_port;
+ int ret = 0;
+
+ switch (ti->type) {
+ case UDP_TUNNEL_TYPE_VXLAN:
+ dst_port = ntohs(ti->port);
+ ret = hinic3_vlxan_port_config(nic_dev->hwdev, func_id,
+ dst_port, action);
+ if (ret != 0) {
+ nicif_warn(nic_dev, drv, netdev,
+ "Failed to set vxlan port %u to device(%d)\n",
+ dst_port, ret);
+ break;
+ }
+ nicif_info(nic_dev, link, netdev, "Vxlan dst port set to %u\n",
+ action == HINIC3_CMD_OP_ADD ?
+ dst_port : ntohs(VXLAN_OFFLOAD_PORT_LE));
+ break;
+ default:
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to add port, only vxlan dst port is supported\n");
+ ret = -EINVAL;
+ }
+ return ret;
+}
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD || HAVE_UDP_TUNNEL_NIC_INFO */
+#ifdef HAVE_NDO_UDP_TUNNEL_ADD
+static void hinic3_udp_tunnel_add(struct net_device *netdev, struct udp_tunnel_info *ti)
+{
+ if (ti->sa_family != AF_INET && ti->sa_family != AF_INET6)
+ return;
+
+ hinic3_udp_tunnel_port_config(netdev, ti, HINIC3_CMD_OP_ADD);
+}
+
+static void hinic3_udp_tunnel_del(struct net_device *netdev, struct udp_tunnel_info *ti)
+{
+ if (ti->sa_family != AF_INET && ti->sa_family != AF_INET6)
+ return;
+
+ hinic3_udp_tunnel_port_config(netdev, ti, HINIC3_CMD_OP_DEL);
+}
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD */
+
+#ifdef HAVE_UDP_TUNNEL_NIC_INFO
+int hinic3_udp_tunnel_set_port(struct net_device *netdev, __attribute__((unused)) unsigned int table,
+			       __attribute__((unused)) unsigned int entry, struct udp_tunnel_info *ti)
+{
+ return hinic3_udp_tunnel_port_config(netdev, ti, HINIC3_CMD_OP_ADD);
+}
+
+int hinic3_udp_tunnel_unset_port(struct net_device *netdev, __attribute__((unused)) unsigned int table,
+ __attribute__((unused)) unsigned int entry, struct udp_tunnel_info *ti)
+{
+ return hinic3_udp_tunnel_port_config(netdev, ti, HINIC3_CMD_OP_DEL);
+}
+#endif /* HAVE_UDP_TUNNEL_NIC_INFO */
+
+static int
+hinic3_vlan_rx_add_vid(struct net_device *netdev,
+ __always_unused __be16 proto,
+ u16 vid)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ unsigned long *vlan_bitmap = nic_dev->vlan_bitmap;
+ u16 func_id;
+ u32 col, line;
+ int err = 0;
+
+	/* VLAN 0 must not be added; treat it the same as deleting VLAN 0. */
+ if (vid == 0)
+ goto end;
+
+ col = VID_COL(nic_dev, vid);
+ line = VID_LINE(nic_dev, vid);
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+ err = hinic3_add_vlan(nic_dev->hwdev, vid, func_id);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to add vlan %u\n", vid);
+ goto end;
+ }
+
+ set_bit(col, &vlan_bitmap[line]);
+
+ nicif_info(nic_dev, drv, netdev, "Add vlan %u\n", vid);
+
+end:
+ return err;
+}
+
+static int
+hinic3_vlan_rx_kill_vid(struct net_device *netdev,
+ __always_unused __be16 proto,
+ u16 vid)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ unsigned long *vlan_bitmap = nic_dev->vlan_bitmap;
+ u16 func_id;
+ int col, line;
+ int err = 0;
+
+ col = VID_COL(nic_dev, vid);
+ line = (int)VID_LINE(nic_dev, vid);
+
+	/* In the broadcast scenario, the microcode finds the corresponding
+	 * function through VLAN 0 of the vlan table. Deleting VLAN 0 would
+	 * therefore break VLAN functionality.
+	 */
+ if (vid == 0)
+ goto end;
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ err = hinic3_del_vlan(nic_dev->hwdev, vid, func_id);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to delete vlan\n");
+ goto end;
+ }
+
+ clear_bit(col, &vlan_bitmap[line]);
+
+ nicif_info(nic_dev, drv, netdev, "Remove vlan %u\n", vid);
+
+end:
+ return err;
+}
+
+#ifdef NEED_VLAN_RESTORE
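+/* Re-sync the hardware vlan filter with the vlan devices currently stacked
+ * on the netdev: vlans set in the bitmap but without a vlan device are
+ * removed, and vlan devices missing from the bitmap are re-added.
+ */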
+static int hinic3_vlan_restore(struct net_device *netdev)
+{
+ int err = 0;
+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
+ struct net_device *vlandev = NULL;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ unsigned long *vlan_bitmap = nic_dev->vlan_bitmap;
+ u32 col, line;
+ u16 i;
+
+ if (!netdev->netdev_ops->ndo_vlan_rx_add_vid)
+ return -EFAULT;
+ rcu_read_lock();
+ for (i = 0; i < VLAN_N_VID; i++) {
+#ifdef HAVE_VLAN_FIND_DEV_DEEP_RCU
+ vlandev =
+ __vlan_find_dev_deep_rcu(netdev, htons(ETH_P_8021Q), i);
+#else
+ vlandev = __vlan_find_dev_deep(netdev, htons(ETH_P_8021Q), i);
+#endif
+ col = VID_COL(nic_dev, i);
+ line = VID_LINE(nic_dev, i);
+ if (!vlandev && (vlan_bitmap[line] & (1UL << col)) != 0) {
+ err = netdev->netdev_ops->ndo_vlan_rx_kill_vid(netdev,
+ htons(ETH_P_8021Q), i);
+ if (err) {
+ hinic3_err(nic_dev, drv, "delete vlan %u failed, err code %d\n",
+ i, err);
+ break;
+ }
+ } else if (vlandev && (vlan_bitmap[line] & (1UL << col)) == 0) {
+ err = netdev->netdev_ops->ndo_vlan_rx_add_vid(netdev,
+ htons(ETH_P_8021Q), i);
+ if (err) {
+ hinic3_err(nic_dev, drv, "restore vlan %u failed, err code %d\n",
+ i, err);
+ break;
+ }
+ }
+ }
+ rcu_read_unlock();
+#endif
+
+ return err;
+}
+#endif
+
+#define SET_FEATURES_OP_STR(op) ((op) ? "Enable" : "Disable")
+
+static int set_feature_rx_csum(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+
+ if (changed & NETIF_F_RXCSUM)
+ hinic3_info(nic_dev, drv, "%s rx csum success\n",
+ SET_FEATURES_OP_STR(wanted_features &
+ NETIF_F_RXCSUM));
+
+ return 0;
+}
+
+static int set_feature_tso(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+
+ if (changed & NETIF_F_TSO)
+ hinic3_info(nic_dev, drv, "%s tso success\n",
+ SET_FEATURES_OP_STR(wanted_features & NETIF_F_TSO));
+
+ return 0;
+}
+
+#ifdef NETIF_F_UFO
+static int set_feature_ufo(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+
+ if (changed & NETIF_F_UFO)
+ hinic3_info(nic_dev, drv, "%s ufo success\n",
+ SET_FEATURES_OP_STR(wanted_features & NETIF_F_UFO));
+
+ return 0;
+}
+#endif
+
+static int set_feature_lro(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+ bool en = !!(wanted_features & NETIF_F_LRO);
+ int err;
+
+ if (!(changed & NETIF_F_LRO))
+ return 0;
+
+#ifdef HAVE_XDP_SUPPORT
+ if (en && hinic3_is_xdp_enable(nic_dev)) {
+		hinic3_err(nic_dev, drv, "Cannot enable LRO when XDP is enabled\n");
+ *failed_features |= NETIF_F_LRO;
+ return -EINVAL;
+ }
+#endif
+
+ err = hinic3_set_rx_lro_state(nic_dev->hwdev, en,
+ HINIC3_LRO_DEFAULT_TIME_LIMIT,
+ HINIC3_LRO_DEFAULT_COAL_PKT_SIZE);
+ if (err) {
+ hinic3_err(nic_dev, drv, "%s lro failed\n",
+ SET_FEATURES_OP_STR(en));
+ *failed_features |= NETIF_F_LRO;
+ } else {
+ hinic3_info(nic_dev, drv, "%s lro success\n",
+ SET_FEATURES_OP_STR(en));
+ }
+
+ return err;
+}
+
+static int set_feature_rx_cvlan(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+#ifdef NETIF_F_HW_VLAN_CTAG_RX
+ netdev_features_t vlan_feature = NETIF_F_HW_VLAN_CTAG_RX;
+#else
+ netdev_features_t vlan_feature = NETIF_F_HW_VLAN_RX;
+#endif
+ bool en = !!(wanted_features & vlan_feature);
+ int err;
+
+ if (!(changed & vlan_feature))
+ return 0;
+
+ err = hinic3_set_rx_vlan_offload(nic_dev->hwdev, en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "%s rxvlan failed\n",
+ SET_FEATURES_OP_STR(en));
+ *failed_features |= vlan_feature;
+ } else {
+ hinic3_info(nic_dev, drv, "%s rxvlan success\n",
+ SET_FEATURES_OP_STR(en));
+ }
+
+ return err;
+}
+
+static int set_feature_vlan_filter(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+#if defined(NETIF_F_HW_VLAN_CTAG_FILTER)
+ netdev_features_t vlan_filter_feature = NETIF_F_HW_VLAN_CTAG_FILTER;
+#elif defined(NETIF_F_HW_VLAN_FILTER)
+ netdev_features_t vlan_filter_feature = NETIF_F_HW_VLAN_FILTER;
+#endif
+ bool en = !!(wanted_features & vlan_filter_feature);
+ int err = 0;
+
+ if (!(changed & vlan_filter_feature))
+ return 0;
+
+#ifdef NEED_VLAN_RESTORE
+ if (en) {
+ err = hinic3_vlan_restore(nic_dev->netdev);
+ if (err) {
+ hinic3_err(nic_dev, drv, "vlan restore failed\n");
+ *failed_features |= vlan_filter_feature;
+ return err;
+ }
+ }
+#endif
+
+ err = hinic3_set_vlan_fliter(nic_dev->hwdev, en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "%s rx vlan filter failed\n",
+ SET_FEATURES_OP_STR(en));
+ *failed_features |= vlan_filter_feature;
+ } else {
+ hinic3_info(nic_dev, drv, "%s rx vlan filter success\n",
+ SET_FEATURES_OP_STR(en));
+ }
+
+ return err;
+}
+
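+/* Apply each changed feature bit through its handler; bits that could not be
+ * applied are collected in failed_features and cleared from the requested
+ * feature set before returning an error.
+ */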
+static int set_features(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t pre_features,
+ netdev_features_t features)
+{
+ netdev_features_t failed_features = 0;
+ u32 err = 0;
+
+ err |= (u32)set_feature_rx_csum(nic_dev, features, pre_features,
+ &failed_features);
+ err |= (u32)set_feature_tso(nic_dev, features, pre_features,
+ &failed_features);
+ err |= (u32)set_feature_lro(nic_dev, features, pre_features,
+ &failed_features);
+#ifdef NETIF_F_UFO
+ err |= (u32)set_feature_ufo(nic_dev, features, pre_features,
+ &failed_features);
+#endif
+ err |= (u32)set_feature_rx_cvlan(nic_dev, features, pre_features,
+ &failed_features);
+ err |= (u32)set_feature_vlan_filter(nic_dev, features, pre_features,
+ &failed_features);
+ if (err) {
+ nic_dev->netdev->features = features ^ failed_features;
+ return -EIO;
+ }
+
+ return 0;
+}
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+static int hinic3_set_features(struct net_device *netdev, u32 features)
+#else
+static int hinic3_set_features(struct net_device *netdev,
+ netdev_features_t features)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ return set_features(nic_dev, nic_dev->netdev->features,
+ features);
+}
+
+int hinic3_set_hw_features(struct hinic3_nic_dev *nic_dev)
+{
+ /* enable all hw features in netdev->features */
+ return set_features(nic_dev, ~nic_dev->netdev->features,
+ nic_dev->netdev->features);
+}
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+static u32 hinic3_fix_features(struct net_device *netdev, u32 features)
+#else
+static netdev_features_t hinic3_fix_features(struct net_device *netdev,
+ netdev_features_t features)
+#endif
+{
+ netdev_features_t features_tmp = features;
+
+ /* If Rx checksum is disabled, then LRO should also be disabled */
+ if (!(features_tmp & NETIF_F_RXCSUM))
+ features_tmp &= ~NETIF_F_LRO;
+
+ return features_tmp;
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void hinic3_netpoll(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 i;
+
+ for (i = 0; i < nic_dev->q_params.num_qps; i++)
+ napi_schedule(&nic_dev->q_params.irq_cfg[i].napi);
+}
+#endif /* CONFIG_NET_POLL_CONTROLLER */
+
+static int hinic3_ndo_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err;
+
+ if (is_multicast_ether_addr(mac) ||
+ vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ err = hinic3_set_vf_mac(adapter->hwdev, OS_VF_ID_TO_HW(vf), mac);
+ if (err)
+ return err;
+
+ if (!is_zero_ether_addr(mac))
+ nic_info(&adapter->pdev->dev, "Setting MAC %pM on VF %d\n",
+ mac, vf);
+ else
+ nic_info(&adapter->pdev->dev, "Deleting MAC on VF %d\n", vf);
+
+ nic_info(&adapter->pdev->dev, "Please reload the VF driver to make this change effective.");
+
+ return 0;
+}
+
+#ifdef IFLA_VF_MAX
+static int set_hw_vf_vlan(void *hwdev, u16 cur_vlanprio, int vf,
+ u16 vlan, u8 qos)
+{
+ int err = 0;
+ u16 old_vlan = cur_vlanprio & VLAN_VID_MASK;
+
+ if (vlan || qos) {
+ if (cur_vlanprio) {
+ err = hinic3_kill_vf_vlan(hwdev, OS_VF_ID_TO_HW(vf));
+ if (err)
+ return err;
+ }
+ err = hinic3_add_vf_vlan(hwdev, OS_VF_ID_TO_HW(vf), vlan, qos);
+ } else {
+ err = hinic3_kill_vf_vlan(hwdev, OS_VF_ID_TO_HW(vf));
+ }
+
+ err = hinic3_update_mac_vlan(hwdev, old_vlan, vlan, OS_VF_ID_TO_HW(vf));
+ return err;
+}
+
+#define HINIC3_MAX_VLAN_ID 4094
+#define HINIC3_MAX_QOS_NUM 7
+
+#ifdef IFLA_VF_VLAN_INFO_MAX
+static int hinic3_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan,
+ u8 qos, __be16 vlan_proto)
+#else
+static int hinic3_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan,
+ u8 qos)
+#endif
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ u16 vlanprio, cur_vlanprio;
+
+ if (vf >= pci_num_vf(adapter->pdev) ||
+ vlan > HINIC3_MAX_VLAN_ID || qos > HINIC3_MAX_QOS_NUM)
+ return -EINVAL;
+#ifdef IFLA_VF_VLAN_INFO_MAX
+ if (vlan_proto != htons(ETH_P_8021Q))
+ return -EPROTONOSUPPORT;
+#endif
+ vlanprio = vlan | (qos << HINIC3_VLAN_PRIORITY_SHIFT);
+ cur_vlanprio = hinic3_vf_info_vlanprio(adapter->hwdev,
+ OS_VF_ID_TO_HW(vf));
+ /* duplicate request, so just return success */
+ if (vlanprio == cur_vlanprio)
+ return 0;
+
+ return set_hw_vf_vlan(adapter->hwdev, cur_vlanprio, vf, vlan, qos);
+}
+#endif
+
+#ifdef HAVE_VF_SPOOFCHK_CONFIGURE
+static int hinic3_ndo_set_vf_spoofchk(struct net_device *netdev, int vf,
+ bool setting)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err = 0;
+ bool cur_spoofchk = false;
+
+ if (vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ cur_spoofchk = hinic3_vf_info_spoofchk(adapter->hwdev,
+ OS_VF_ID_TO_HW(vf));
+ /* same request, so just return success */
+ if ((setting && cur_spoofchk) || (!setting && !cur_spoofchk))
+ return 0;
+
+ err = hinic3_set_vf_spoofchk(adapter->hwdev,
+ (u16)OS_VF_ID_TO_HW(vf), setting);
+ if (!err)
+ nicif_info(adapter, drv, netdev, "Set VF %d spoofchk %s\n",
+ vf, setting ? "on" : "off");
+
+ return err;
+}
+#endif
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+static int hinic3_ndo_set_vf_trust(struct net_device *netdev, int vf, bool setting)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err;
+ bool cur_trust;
+
+ if (vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ cur_trust = hinic3_get_vf_trust(adapter->hwdev,
+ OS_VF_ID_TO_HW(vf));
+ /* same request, so just return success */
+ if ((setting && cur_trust) || (!setting && !cur_trust))
+ return 0;
+
+ err = hinic3_set_vf_trust(adapter->hwdev,
+ (u16)OS_VF_ID_TO_HW(vf), setting);
+ if (!err)
+ nicif_info(adapter, drv, netdev, "Set VF %d trusted %s successfully\n",
+ vf, setting ? "on" : "off");
+ else
+		nicif_err(adapter, drv, netdev, "Failed to set VF %d trusted %s\n",
+ vf, setting ? "on" : "off");
+
+ return err;
+}
+#endif
+
+static int hinic3_ndo_get_vf_config(struct net_device *netdev,
+ int vf, struct ifla_vf_info *ivi)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+
+ if (vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ hinic3_get_vf_config(adapter->hwdev, (u16)OS_VF_ID_TO_HW(vf), ivi);
+
+ return 0;
+}
+
+/**
+ * hinic3_ndo_set_vf_link_state
+ * @netdev: network interface device structure
+ * @vf_id: VF identifier
+ * @link: required link state
+ *
+ * Set the link state of a specified VF, regardless of physical link state
+ **/
+int hinic3_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link)
+{
+ static const char * const vf_link[] = {"auto", "enable", "disable"};
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err;
+
+ /* validate the request */
+ if (vf_id >= pci_num_vf(adapter->pdev)) {
+ nicif_err(adapter, drv, netdev,
+ "Invalid VF Identifier %d\n", vf_id);
+ return -EINVAL;
+ }
+
+ err = hinic3_set_vf_link_state(adapter->hwdev,
+ (u16)OS_VF_ID_TO_HW(vf_id), link);
+ if (!err)
+ nicif_info(adapter, drv, netdev, "Set VF %d link state: %s\n",
+ vf_id, vf_link[link]);
+
+ return err;
+}
+
+static int is_set_vf_bw_param_valid(const struct hinic3_nic_dev *adapter,
+ int vf, int min_tx_rate, int max_tx_rate)
+{
+ if (!HINIC3_SUPPORT_RATE_LIMIT(adapter->hwdev)) {
+		nicif_err(adapter, drv, adapter->netdev, "Current function doesn't support setting the VF rate limit\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* verify VF is active */
+ if (vf >= pci_num_vf(adapter->pdev)) {
+ nicif_err(adapter, drv, adapter->netdev, "VF number must be less than %d\n",
+ pci_num_vf(adapter->pdev));
+ return -EINVAL;
+ }
+
+ if (max_tx_rate < min_tx_rate) {
+		nicif_err(adapter, drv, adapter->netdev, "Invalid rate, max rate %d must not be less than min rate %d\n",
+ max_tx_rate, min_tx_rate);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define HINIC3_TX_RATE_TABLE_FULL 12
+
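+/* The VF tx rate limit is validated against the current link speed; the
+ * speeds[] table maps the firmware port speed enum to Mbit/s values.
+ */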
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+static int hinic3_ndo_set_vf_bw(struct net_device *netdev,
+ int vf, int min_tx_rate, int max_tx_rate)
+#else
+static int hinic3_ndo_set_vf_bw(struct net_device *netdev, int vf,
+ int max_tx_rate)
+#endif /* HAVE_NDO_SET_VF_MIN_MAX_TX_RATE */
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ struct nic_port_info port_info = {0};
+#ifndef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ int min_tx_rate = 0;
+#endif
+ u8 link_status = 0;
+ u32 speeds[] = {0, SPEED_10, SPEED_100, SPEED_1000, SPEED_10000,
+ SPEED_25000, SPEED_40000, SPEED_50000, SPEED_100000,
+ SPEED_200000};
+ int err = 0;
+
+ err = is_set_vf_bw_param_valid(adapter, vf, min_tx_rate, max_tx_rate);
+ if (err)
+ return err;
+
+ err = hinic3_get_link_state(adapter->hwdev, &link_status);
+ if (err) {
+ nicif_err(adapter, drv, netdev,
+			  "Failed to get link status when setting VF tx rate\n");
+ return -EIO;
+ }
+
+ if (!link_status) {
+ nicif_err(adapter, drv, netdev,
+			  "Link must be up when setting VF tx rate\n");
+ return -EINVAL;
+ }
+
+ err = hinic3_get_port_info(adapter->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err || port_info.speed >= PORT_SPEED_UNKNOWN)
+ return -EIO;
+
+ /* rate limit cannot be less than 0 and greater than link speed */
+ if (max_tx_rate < 0 || max_tx_rate > (int)(speeds[port_info.speed])) {
+ nicif_err(adapter, drv, netdev, "Set vf max tx rate must be in [0 - %u]\n",
+ speeds[port_info.speed]);
+ return -EINVAL;
+ }
+
+ err = hinic3_set_vf_tx_rate(adapter->hwdev, (u16)OS_VF_ID_TO_HW(vf),
+ (u32)max_tx_rate, (u32)min_tx_rate);
+ if (err) {
+ nicif_err(adapter, drv, netdev,
+ "Unable to set VF %d max rate %d min rate %d%s\n",
+ vf, max_tx_rate, min_tx_rate,
+ err == HINIC3_TX_RATE_TABLE_FULL ?
+ ", tx rate profile is full" : "");
+ return -EIO;
+ }
+
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ nicif_info(adapter, drv, netdev,
+ "Set VF %d max tx rate %d min tx rate %d successfully\n",
+ vf, max_tx_rate, min_tx_rate);
+#else
+ nicif_info(adapter, drv, netdev,
+ "Set VF %d tx rate %d successfully\n",
+ vf, max_tx_rate);
+#endif
+
+ return 0;
+}
+
+#ifdef HAVE_XDP_SUPPORT
+bool hinic3_is_xdp_enable(struct hinic3_nic_dev *nic_dev)
+{
+ return !!nic_dev->xdp_prog;
+}
+
+int hinic3_xdp_max_mtu(struct hinic3_nic_dev *nic_dev)
+{
+ return nic_dev->rx_buff_len - (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN);
+}
+
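+/* Swap in the new XDP program with xchg() and propagate it to every rx
+ * queue before dropping the reference on the old program.
+ */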
+static int hinic3_xdp_setup(struct hinic3_nic_dev *nic_dev,
+ struct bpf_prog *prog,
+ struct netlink_ext_ack *extack)
+{
+ struct bpf_prog *old_prog = NULL;
+ int max_mtu = hinic3_xdp_max_mtu(nic_dev);
+ int q_id;
+
+ if (nic_dev->netdev->mtu > (u32)max_mtu) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to setup xdp program, the current MTU %d is larger than max allowed MTU %d\n",
+ nic_dev->netdev->mtu, max_mtu);
+ NL_SET_ERR_MSG_MOD(extack,
+ "MTU too large for loading xdp program");
+ return -EINVAL;
+ }
+
+ if (nic_dev->netdev->features & NETIF_F_LRO) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to setup xdp program while LRO is on\n");
+ NL_SET_ERR_MSG_MOD(extack,
+ "Failed to setup xdp program while LRO is on\n");
+ return -EINVAL;
+ }
+
+ old_prog = xchg(&nic_dev->xdp_prog, prog);
+ for (q_id = 0; q_id < nic_dev->max_qps; q_id++)
+ xchg(&nic_dev->rxqs[q_id].xdp_prog, nic_dev->xdp_prog);
+
+ if (old_prog)
+ bpf_prog_put(old_prog);
+
+ return 0;
+}
+
+#ifdef HAVE_NDO_BPF_NETDEV_BPF
+static int hinic3_xdp(struct net_device *netdev, struct netdev_bpf *xdp)
+#else
+static int hinic3_xdp(struct net_device *netdev, struct netdev_xdp *xdp)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ switch (xdp->command) {
+ case XDP_SETUP_PROG:
+ return hinic3_xdp_setup(nic_dev, xdp->prog, xdp->extack);
+#ifdef HAVE_XDP_QUERY_PROG
+ case XDP_QUERY_PROG:
+ xdp->prog_id = nic_dev->xdp_prog ?
+ nic_dev->xdp_prog->aux->id : 0;
+ return 0;
+#endif
+ default:
+ return -EINVAL;
+ }
+}
+#endif
+
+static const struct net_device_ops hinic3_netdev_ops = {
+ .ndo_open = hinic3_open,
+ .ndo_stop = hinic3_close,
+ .ndo_start_xmit = hinic3_xmit_frame,
+
+#ifdef HAVE_NDO_GET_STATS64
+ .ndo_get_stats64 = hinic3_get_stats64,
+#else
+ .ndo_get_stats = hinic3_get_stats,
+#endif /* HAVE_NDO_GET_STATS64 */
+
+ .ndo_tx_timeout = hinic3_tx_timeout,
+ .ndo_select_queue = hinic3_select_queue,
+#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_CHANGE_MTU
+ .extended.ndo_change_mtu = hinic3_change_mtu,
+#else
+ .ndo_change_mtu = hinic3_change_mtu,
+#endif
+ .ndo_set_mac_address = hinic3_set_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+
+#if defined(NETIF_F_HW_VLAN_TX) || defined(NETIF_F_HW_VLAN_CTAG_TX)
+ .ndo_vlan_rx_add_vid = hinic3_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = hinic3_vlan_rx_kill_vid,
+#endif
+
+#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
+ /* RHEL7 requires this to be defined to enable extended ops. RHEL7
+ * uses the function get_ndo_ext to retrieve offsets for extended
+	 * fields from within the net_device_ops struct and ndo_size is checked
+ * to determine whether or not the offset is valid.
+ */
+ .ndo_size = sizeof(const struct net_device_ops),
+#endif
+
+#ifdef IFLA_VF_MAX
+ .ndo_set_vf_mac = hinic3_ndo_set_vf_mac,
+#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SET_VF_VLAN
+ .extended.ndo_set_vf_vlan = hinic3_ndo_set_vf_vlan,
+#else
+ .ndo_set_vf_vlan = hinic3_ndo_set_vf_vlan,
+#endif
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ .ndo_set_vf_rate = hinic3_ndo_set_vf_bw,
+#else
+ .ndo_set_vf_tx_rate = hinic3_ndo_set_vf_bw,
+#endif /* HAVE_NDO_SET_VF_MIN_MAX_TX_RATE */
+#ifdef HAVE_VF_SPOOFCHK_CONFIGURE
+ .ndo_set_vf_spoofchk = hinic3_ndo_set_vf_spoofchk,
+#endif
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
+ .extended.ndo_set_vf_trust = hinic3_ndo_set_vf_trust,
+#else
+ .ndo_set_vf_trust = hinic3_ndo_set_vf_trust,
+#endif /* HAVE_RHEL7_NET_DEVICE_OPS_EXT */
+#endif /* HAVE_NDO_SET_VF_TRUST */
+
+ .ndo_get_vf_config = hinic3_ndo_get_vf_config,
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = hinic3_netpoll,
+#endif /* CONFIG_NET_POLL_CONTROLLER */
+
+ .ndo_set_rx_mode = hinic3_nic_set_rx_mode,
+
+#ifdef HAVE_XDP_SUPPORT
+#ifdef HAVE_NDO_BPF_NETDEV_BPF
+ .ndo_bpf = hinic3_xdp,
+#else
+ .ndo_xdp = hinic3_xdp,
+#endif
+#endif
+#ifdef HAVE_NDO_UDP_TUNNEL_ADD
+ .ndo_udp_tunnel_add = hinic3_udp_tunnel_add,
+ .ndo_udp_tunnel_del = hinic3_udp_tunnel_del,
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD */
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+};
+
+/* RHEL6 keeps these operations in a separate structure */
+static const struct net_device_ops_ext hinic3_netdev_ops_ext = {
+ .size = sizeof(struct net_device_ops_ext),
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+
+#ifdef HAVE_NDO_SET_VF_LINK_STATE
+ .ndo_set_vf_link_state = hinic3_ndo_set_vf_link_state,
+#endif
+
+#ifdef HAVE_NDO_SET_FEATURES
+ .ndo_fix_features = hinic3_fix_features,
+ .ndo_set_features = hinic3_set_features,
+#endif /* HAVE_NDO_SET_FEATURES */
+};
+
+static const struct net_device_ops hinic3vf_netdev_ops = {
+ .ndo_open = hinic3_open,
+ .ndo_stop = hinic3_close,
+ .ndo_start_xmit = hinic3_xmit_frame,
+
+#ifdef HAVE_NDO_GET_STATS64
+ .ndo_get_stats64 = hinic3_get_stats64,
+#else
+ .ndo_get_stats = hinic3_get_stats,
+#endif /* HAVE_NDO_GET_STATS64 */
+
+ .ndo_tx_timeout = hinic3_tx_timeout,
+ .ndo_select_queue = hinic3_select_queue,
+
+#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
+ /* RHEL7 requires this to be defined to enable extended ops. RHEL7
+ * uses the function get_ndo_ext to retrieve offsets for extended
+	 * fields from within the net_device_ops struct and ndo_size is checked
+ * to determine whether or not the offset is valid.
+ */
+ .ndo_size = sizeof(const struct net_device_ops),
+#endif
+
+#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_CHANGE_MTU
+ .extended.ndo_change_mtu = hinic3_change_mtu,
+#else
+ .ndo_change_mtu = hinic3_change_mtu,
+#endif
+ .ndo_set_mac_address = hinic3_set_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+
+#if defined(NETIF_F_HW_VLAN_TX) || defined(NETIF_F_HW_VLAN_CTAG_TX)
+ .ndo_vlan_rx_add_vid = hinic3_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = hinic3_vlan_rx_kill_vid,
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = hinic3_netpoll,
+#endif /* CONFIG_NET_POLL_CONTROLLER */
+
+ .ndo_set_rx_mode = hinic3_nic_set_rx_mode,
+
+#ifdef HAVE_XDP_SUPPORT
+#ifdef HAVE_NDO_BPF_NETDEV_BPF
+ .ndo_bpf = hinic3_xdp,
+#else
+ .ndo_xdp = hinic3_xdp,
+#endif
+#endif
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+};
+
+/* RHEL6 keeps these operations in a separate structure */
+static const struct net_device_ops_ext hinic3vf_netdev_ops_ext = {
+ .size = sizeof(struct net_device_ops_ext),
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+
+#ifdef HAVE_NDO_SET_FEATURES
+ .ndo_fix_features = hinic3_fix_features,
+ .ndo_set_features = hinic3_set_features,
+#endif /* HAVE_NDO_SET_FEATURES */
+};
+
+void hinic3_set_netdev_ops(struct hinic3_nic_dev *nic_dev)
+{
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nic_dev->netdev->netdev_ops = &hinic3_netdev_ops;
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_ops_ext(nic_dev->netdev, &hinic3_netdev_ops_ext);
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+ } else {
+ nic_dev->netdev->netdev_ops = &hinic3vf_netdev_ops;
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_ops_ext(nic_dev->netdev, &hinic3vf_netdev_ops_ext);
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+ }
+}
+
+bool hinic3_is_netdev_ops_match(const struct net_device *netdev)
+{
+ return netdev->netdev_ops == &hinic3_netdev_ops ||
+ netdev->netdev_ops == &hinic3vf_netdev_ops;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic.h
new file mode 100644
index 0000000..1bc6a14
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic.h
@@ -0,0 +1,221 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_H
+#define HINIC3_NIC_H
+
+#include <linux/types.h>
+#include <linux/semaphore.h>
+
+#include "hinic3_common.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "mag_mpu_cmd.h"
+#include "mag_mpu_cmd_defs.h"
+
+/* ************************ array index define ********************* */
+#define ARRAY_INDEX_0 0
+#define ARRAY_INDEX_1 1
+#define ARRAY_INDEX_2 2
+#define ARRAY_INDEX_3 3
+#define ARRAY_INDEX_4 4
+#define ARRAY_INDEX_5 5
+#define ARRAY_INDEX_6 6
+#define ARRAY_INDEX_7 7
+
+#define XSFP_TLV_PRE_INFO_LEN 4
+
+enum hinic3_link_port_type {
+ LINK_PORT_UNKNOWN,
+ LINK_PORT_OPTICAL_MM,
+ LINK_PORT_OPTICAL_SM,
+ LINK_PORT_PAS_COPPER,
+ LINK_PORT_ACC,
+ LINK_PORT_BASET,
+ LINK_PORT_AOC = 0x40,
+ LINK_PORT_ELECTRIC,
+ LINK_PORT_BACKBOARD_INTERFACE,
+};
+
+enum hilink_fibre_subtype {
+ FIBRE_SUBTYPE_SR = 1,
+ FIBRE_SUBTYPE_LR,
+ FIBRE_SUBTYPE_MAX,
+};
+
+enum hilink_fec_type {
+ HILINK_FEC_NOT_SET,
+ HILINK_FEC_RSFEC,
+ HILINK_FEC_BASEFEC,
+ HILINK_FEC_NOFEC,
+ HILINK_FEC_LLRSFE,
+ HILINK_FEC_MAX_TYPE,
+};
+
+struct hinic3_sq_attr {
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u64 ci_dma_base;
+};
+
+struct vf_data_storage {
+ u8 drv_mac_addr[ETH_ALEN];
+ u8 user_mac_addr[ETH_ALEN];
+ bool registered;
+ bool use_specified_mac;
+ u16 pf_vlan;
+ u8 pf_qos;
+ u8 rsvd2;
+ u32 max_rate;
+ u32 min_rate;
+
+ bool link_forced;
+ bool link_up; /* only valid if VF link is forced */
+ bool spoofchk;
+ bool trust;
+ u16 num_qps;
+ u32 support_extra_feature;
+};
+
+struct hinic3_port_routine_cmd {
+ bool mpu_send_sfp_info;
+ bool mpu_send_sfp_abs;
+
+ struct mag_cmd_get_xsfp_info std_sfp_info;
+ struct mag_cmd_get_xsfp_present abs;
+};
+
+struct hinic3_port_routine_cmd_extern {
+ bool mpu_send_xsfp_tlv_info;
+
+ struct drv_mag_cmd_get_xsfp_tlv_rsp std_xsfp_tlv_info;
+};
+
+struct hinic3_nic_cfg {
+ struct semaphore cfg_lock;
+
+	/* Valid only when PFC is disabled */
+ bool pause_set;
+ struct nic_pause_config nic_pause;
+
+ u8 pfc_en;
+ u8 pfc_bitmap;
+
+ struct nic_port_info port_info;
+
+ /* percentage of pf link bandwidth */
+ u32 pf_bw_tx_limit;
+ u32 pf_bw_rx_limit;
+
+ struct hinic3_port_routine_cmd rt_cmd;
+ struct hinic3_port_routine_cmd_extern rt_cmd_ext;
+ /* mutex used for copy sfp info */
+ struct mutex sfp_mutex;
+};
+
+struct hinic3_nic_io {
+ void *hwdev;
+ void *pcidev_hdl;
+ void *dev_hdl;
+
+ u8 link_status;
+ u8 direct;
+ u32 rsvd2;
+
+ struct hinic3_io_queue *sq;
+ struct hinic3_io_queue *rq;
+
+ u16 num_qps;
+ u16 max_qps;
+
+ void *ci_vaddr_base;
+ dma_addr_t ci_dma_base;
+
+ u8 __iomem *sqs_db_addr;
+ u8 __iomem *rqs_db_addr;
+
+ u16 max_vfs;
+ u16 rsvd3;
+ u32 rsvd4;
+
+ struct vf_data_storage *vf_infos;
+ struct hinic3_dcb_state dcb_state;
+ struct hinic3_nic_cfg nic_cfg;
+
+ u16 rx_buff_len;
+ u16 rsvd5;
+ u32 rsvd6;
+ u64 feature_cap;
+ u64 rsvd7;
+};
+
+struct vf_msg_handler {
+ u16 cmd;
+ int (*handler)(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+};
+
+struct nic_event_handler {
+ u16 cmd;
+ void (*handler)(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+};
+
+int hinic3_set_ci_table(void *hwdev, struct hinic3_sq_attr *attr);
+
+int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+int l2nic_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u16 channel);
+
+int hinic3_cfg_vf_vlan(struct hinic3_nic_io *nic_io, u8 opcode, u16 vid,
+ u8 qos, int vf_id);
+
+int hinic3_vf_event_handler(void *hwdev,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+void hinic3_pf_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+int hinic3_pf_mbox_handler(void *hwdev,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+u8 hinic3_nic_sw_aeqe_handler(void *hwdev, u8 event, u8 *data);
+
+int hinic3_vf_func_init(struct hinic3_nic_io *nic_io);
+
+void hinic3_vf_func_free(struct hinic3_nic_io *nic_io);
+
+void hinic3_notify_dcb_state_event(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state);
+
+int hinic3_save_dcb_state(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state);
+
+void hinic3_notify_vf_link_status(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u8 link_status);
+
+int hinic3_vf_mag_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+void hinic3_pf_mag_event_handler(void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+int hinic3_pf_mag_mbox_handler(void *hwdev, u16 vf_id,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+void hinic3_unregister_vf(struct hinic3_nic_io *nic_io, u16 vf_id);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c
new file mode 100644
index 0000000..525a353
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c
@@ -0,0 +1,1894 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "nic_mpu_cmd.h"
+#include "nic_npu_cmd.h"
+#include "hinic3_common.h"
+#include "hinic3_nic_cfg.h"
+
+#include "vram_common.h"
+
+int hinic3_delete_bond(void *hwdev)
+{
+ struct hinic3_cmd_delete_bond cmd_delete_bond;
+ u16 out_size = sizeof(cmd_delete_bond);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err = 0;
+
+ if (!hwdev) {
+ pr_err("hwdev is null.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is null.\n");
+ return -EINVAL;
+ }
+
+ memset(&cmd_delete_bond, 0, sizeof(cmd_delete_bond));
+ cmd_delete_bond.bond_id = HINIC3_INVALID_BOND_ID;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_BOND_DEV_DELETE,
+ &cmd_delete_bond, sizeof(cmd_delete_bond),
+ &cmd_delete_bond, &out_size);
+ if (err || !out_size || cmd_delete_bond.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to delete bond, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cmd_delete_bond.head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (cmd_delete_bond.bond_id != HINIC3_INVALID_BOND_ID) {
+ nic_info(nic_io->dev_hdl, "Delete bond success\n");
+ }
+
+ return 0;
+}
+
+int hinic3_open_close_bond(void *hwdev, u32 bond_en)
+{
+ struct hinic3_cmd_open_close_bond cmd_open_close_bond;
+ u16 out_size = sizeof(cmd_open_close_bond);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err = 0;
+
+ if (!hwdev) {
+ pr_err("hwdev is null.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is null.\n");
+ return -EINVAL;
+ }
+
+ memset(&cmd_open_close_bond, 0, sizeof(cmd_open_close_bond));
+ cmd_open_close_bond.open_close_bond_info.bond_id = HINIC3_INVALID_BOND_ID;
+ cmd_open_close_bond.open_close_bond_info.open_close_flag = bond_en;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_BOND_DEV_OPEN_CLOSE,
+ &cmd_open_close_bond, sizeof(cmd_open_close_bond),
+ &cmd_open_close_bond, &out_size);
+ if (err || !out_size || cmd_open_close_bond.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to %s bond, err: %d, status: 0x%x, out_size: 0x%x\n",
+ bond_en == true ? "open" : "close", err, cmd_open_close_bond.head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (cmd_open_close_bond.open_close_bond_info.bond_id != HINIC3_INVALID_BOND_ID) {
+ nic_info(nic_io->dev_hdl, "%s bond success\n", bond_en == true ? "Open" : "Close");
+ }
+
+ return 0;
+}
+
+int hinic3_create_bond(void *hwdev, u32 *bond_id)
+{
+ struct hinic3_cmd_create_bond cmd_create_bond;
+ u16 out_size = sizeof(cmd_create_bond);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err = 0;
+
+ if (!hwdev) {
+ pr_err("hwdev is null.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is null.\n");
+ return -EINVAL;
+ }
+
+ memset(&cmd_create_bond, 0, sizeof(cmd_create_bond));
+ cmd_create_bond.create_bond_info.default_param_flag = true;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_BOND_DEV_CREATE,
+ &cmd_create_bond, sizeof(cmd_create_bond),
+ &cmd_create_bond, &out_size);
+ if (err || !out_size || cmd_create_bond.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to create default bond, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cmd_create_bond.head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (cmd_create_bond.create_bond_info.bond_id != HINIC3_INVALID_BOND_ID) {
+ *bond_id = cmd_create_bond.create_bond_info.bond_id;
+ nic_info(nic_io->dev_hdl, "Create bond success\n");
+ }
+
+ return 0;
+}
+
+int hinic3_set_ci_table(void *hwdev, struct hinic3_sq_attr *attr)
+{
+ struct hinic3_cmd_cons_idx_attr cons_idx_attr;
+ u16 out_size = sizeof(cons_idx_attr);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !attr)
+ return -EINVAL;
+
+ memset(&cons_idx_attr, 0, sizeof(cons_idx_attr));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ cons_idx_attr.func_idx = hinic3_global_func_id(hwdev);
+
+ cons_idx_attr.dma_attr_off = attr->dma_attr_off;
+ cons_idx_attr.pending_limit = attr->pending_limit;
+ cons_idx_attr.coalescing_time = attr->coalescing_time;
+
+ if (attr->intr_en) {
+ cons_idx_attr.intr_en = attr->intr_en;
+ cons_idx_attr.intr_idx = attr->intr_idx;
+ }
+
+ cons_idx_attr.l2nic_sqn = attr->l2nic_sqn;
+ cons_idx_attr.ci_addr = attr->ci_dma_base;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SQ_CI_ATTR_SET,
+ &cons_idx_attr, sizeof(cons_idx_attr),
+ &cons_idx_attr, &out_size);
+ if (err || !out_size || cons_idx_attr.msg_head.status) {
+ sdk_err(nic_io->dev_hdl,
+ "Failed to set ci attribute table, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cons_idx_attr.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+#define PF_SET_VF_MAC(hwdev, status) \
+ (hinic3_func_type(hwdev) == TYPE_VF && \
+ (status) == HINIC3_PF_SET_VF_ALREADY)
+
+static int hinic3_check_mac_info(void *hwdev, u8 status, u16 vlan_id)
+{
+ if ((status && status != HINIC3_MGMT_STATUS_EXIST) ||
+ ((vlan_id & CHECK_IPSU_15BIT) &&
+ status == HINIC3_MGMT_STATUS_EXIST)) {
+ if (PF_SET_VF_MAC(hwdev, status))
+ return 0;
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define HINIC_VLAN_ID_MASK 0x7FFF
+
+int hinic3_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id,
+ u16 channel)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if ((vlan_id & HINIC_VLAN_ID_MASK) >= VLAN_N_VID) {
+ nic_err(nic_io->dev_hdl, "Invalid VLAN number: %d\n",
+ (vlan_id & HINIC_VLAN_ID_MASK));
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ ether_addr_copy(mac_info.mac, mac_addr);
+
+ err = l2nic_msg_to_mgmt_sync_ch(hwdev, HINIC3_NIC_CMD_SET_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size, channel);
+ if (err || !out_size ||
+ hinic3_check_mac_info(hwdev, mac_info.msg_head.status,
+ mac_info.vlan_id)) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to update MAC, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, mac_info.msg_head.status, out_size, channel);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF mac, Ignore set operation\n");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ if (mac_info.msg_head.status == HINIC3_MGMT_STATUS_EXIST) {
+ nic_warn(nic_io->dev_hdl, "MAC is repeated. Ignore update operation\n");
+ return 0;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_mac);
+
+int hinic3_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id,
+ u16 channel)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if ((vlan_id & HINIC_VLAN_ID_MASK) >= VLAN_N_VID) {
+ nic_err(nic_io->dev_hdl, "Invalid VLAN number: %d\n",
+ (vlan_id & HINIC_VLAN_ID_MASK));
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ ether_addr_copy(mac_info.mac, mac_addr);
+
+ err = l2nic_msg_to_mgmt_sync_ch(hwdev, HINIC3_NIC_CMD_DEL_MAC,
+ &mac_info, sizeof(mac_info), &mac_info,
+ &out_size, channel);
+ if (err || !out_size ||
+ (mac_info.msg_head.status && !PF_SET_VF_MAC(hwdev, mac_info.msg_head.status))) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to delete MAC, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, mac_info.msg_head.status, out_size, channel);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF mac, Ignore delete operation.\n");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_del_mac);
+
+int hinic3_update_mac(void *hwdev, const u8 *old_mac, u8 *new_mac, u16 vlan_id,
+ u16 func_id)
+{
+ struct hinic3_port_mac_update mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !old_mac || !new_mac)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if ((vlan_id & HINIC_VLAN_ID_MASK) >= VLAN_N_VID) {
+ nic_err(nic_io->dev_hdl, "Invalid VLAN number: %d\n",
+ (vlan_id & HINIC_VLAN_ID_MASK));
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ ether_addr_copy(mac_info.old_mac, old_mac);
+ ether_addr_copy(mac_info.new_mac, new_mac);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_UPDATE_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size);
+ if (err || !out_size ||
+ hinic3_check_mac_info(hwdev, mac_info.msg_head.status,
+ mac_info.vlan_id)) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to update MAC, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, mac_info.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF MAC. Ignore update operation\n");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ if (mac_info.msg_head.status == HINIC3_MGMT_STATUS_EXIST) {
+ nic_warn(nic_io->dev_hdl, "MAC is repeated. Ignore update operation\n");
+ return 0;
+ }
+
+ return 0;
+}
+
+int hinic3_get_default_mac(void *hwdev, u8 *mac_addr)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ mac_info.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size);
+ if (err || !out_size || mac_info.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get mac, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, mac_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ ether_addr_copy(mac_addr, mac_info.mac);
+
+ return 0;
+}
+
+static int hinic3_config_vlan(struct hinic3_nic_io *nic_io, u8 opcode,
+ u16 vlan_id, u16 func_id)
+{
+ struct hinic3_cmd_vlan_config vlan_info;
+ u16 out_size = sizeof(vlan_info);
+ int err;
+
+ memset(&vlan_info, 0, sizeof(vlan_info));
+ vlan_info.opcode = opcode;
+ vlan_info.func_id = func_id;
+ vlan_info.vlan_id = vlan_id;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_FUNC_VLAN,
+ &vlan_info, sizeof(vlan_info),
+ &vlan_info, &out_size);
+ if (err || !out_size || vlan_info.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to %s vlan, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_ADD ? "add" : "delete",
+ err, vlan_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#if defined(HAVE_NDO_UDP_TUNNEL_ADD) || defined(HAVE_UDP_TUNNEL_NIC_INFO)
+int hinic3_vlxan_port_config(void *hwdev, u16 func_id, u16 port, u8 action)
+{
+ struct hinic3_cmd_vxlan_port_info vxlan_port_info;
+ u16 out_size = sizeof(vxlan_port_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&vxlan_port_info, 0, sizeof(vxlan_port_info));
+ vxlan_port_info.opcode = action;
+	vxlan_port_info.cfg_mode = 0; /* other ethtool set */
+ vxlan_port_info.func_id = func_id;
+ vxlan_port_info.vxlan_port = port;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_VXLAN_PORT,
+ &vxlan_port_info, sizeof(vxlan_port_info),
+ &vxlan_port_info, &out_size);
+ if (err || !out_size || vxlan_port_info.msg_head.status) {
+ if (vxlan_port_info.msg_head.status == 0x2) {
+ nic_warn(nic_io->dev_hdl,
+ "Failed to %s vxlan dst port because it has already been set by hinicadm\n",
+ action == HINIC3_CMD_OP_ADD ? "add" : "delete");
+ } else {
+ nic_err(nic_io->dev_hdl,
+ "Failed to %s vxlan dst port, err: %d, status: 0x%x, out size: 0x%x\n",
+ action == HINIC3_CMD_OP_ADD ? "add" : "delete",
+ err, vxlan_port_info.msg_head.status, out_size);
+ }
+ return -EINVAL;
+ }
+
+ return 0;
+}
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD || HAVE_UDP_TUNNEL_NIC_INFO */
+
+int hinic3_add_vlan(void *hwdev, u16 vlan_id, u16 func_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return hinic3_config_vlan(nic_io, HINIC3_CMD_OP_ADD, vlan_id, func_id);
+}
+
+int hinic3_del_vlan(void *hwdev, u16 vlan_id, u16 func_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return hinic3_config_vlan(nic_io, HINIC3_CMD_OP_DEL, vlan_id, func_id);
+}
+
+int hinic3_set_vport_enable(void *hwdev, u16 func_id, bool enable, u16 channel)
+{
+ struct hinic3_vport_state en_state;
+ u16 out_size = sizeof(en_state);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&en_state, 0, sizeof(en_state));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ en_state.func_id = func_id;
+ en_state.state = enable ? 1 : 0;
+
+ err = l2nic_msg_to_mgmt_sync_ch(hwdev, HINIC3_NIC_CMD_SET_VPORT_ENABLE,
+ &en_state, sizeof(en_state),
+ &en_state, &out_size, channel);
+ if (err || !out_size || en_state.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set vport state, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, en_state.msg_head.status, out_size, channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(hinic3_set_vport_enable);
+
+int hinic3_set_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !dcb_state)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (!memcmp(&nic_io->dcb_state, dcb_state, sizeof(nic_io->dcb_state)))
+ return 0;
+
+ /* save in sdk, vf will get dcb state when probing */
+ hinic3_save_dcb_state(nic_io, dcb_state);
+
+	/* notify stateful in pf, then notify all vf */
+ hinic3_notify_dcb_state_event(nic_io, dcb_state);
+
+ return 0;
+}
+
+int hinic3_get_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !dcb_state)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memcpy(dcb_state, &nic_io->dcb_state, sizeof(*dcb_state));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_dcb_state);
+
+int hinic3_get_cos_by_pri(void *hwdev, u8 pri, u8 *cos)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !cos)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (pri >= NIC_DCB_UP_MAX && nic_io->dcb_state.trust == HINIC3_DCB_PCP)
+ return -EINVAL;
+
+ if (pri >= NIC_DCB_IP_PRI_MAX && nic_io->dcb_state.trust == HINIC3_DCB_DSCP)
+ return -EINVAL;
+
+/*lint -e662*/
+/*lint -e661*/
+ if (nic_io->dcb_state.dcb_on) {
+ if (nic_io->dcb_state.trust == HINIC3_DCB_PCP)
+ *cos = nic_io->dcb_state.pcp2cos[pri];
+ else
+ *cos = nic_io->dcb_state.dscp2cos[pri];
+ } else {
+ *cos = nic_io->dcb_state.default_cos;
+ }
+/*lint +e662*/
+/*lint +e661*/
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_cos_by_pri);
+
+int hinic3_save_dcb_state(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state)
+{
+ memcpy(&nic_io->dcb_state, dcb_state, sizeof(*dcb_state));
+
+ return 0;
+}
+
+int hinic3_get_pf_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_cmd_vf_dcb_state vf_dcb;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(vf_dcb);
+ int err;
+
+ if (!hwdev || !dcb_state)
+ return -EINVAL;
+
+ memset(&vf_dcb, 0, sizeof(vf_dcb));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF) {
+		nic_err(nic_io->dev_hdl, "Only VF needs to get PF dcb state\n");
+ return -EINVAL;
+ }
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_VF_COS, &vf_dcb,
+ sizeof(vf_dcb), &vf_dcb, &out_size);
+ if (err || !out_size || vf_dcb.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to get vf default cos, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vf_dcb.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ memcpy(dcb_state, &vf_dcb.state, sizeof(*dcb_state));
+ /* Save dcb_state in hw for stateful module */
+ hinic3_save_dcb_state(nic_io, dcb_state);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_pf_dcb_state);
+
+#define UNSUPPORT_SET_PAUSE 0x10
+static int hinic3_cfg_hw_pause(struct hinic3_nic_io *nic_io, u8 opcode,
+ struct nic_pause_config *nic_pause)
+{
+ struct hinic3_cmd_pause_config pause_info;
+ u16 out_size = sizeof(pause_info);
+ int err;
+
+ memset(&pause_info, 0, sizeof(pause_info));
+
+ pause_info.port_id = hinic3_physical_port_id(nic_io->hwdev);
+ pause_info.opcode = opcode;
+ if (opcode == HINIC3_CMD_OP_SET) {
+ pause_info.auto_neg = nic_pause->auto_neg;
+ pause_info.rx_pause = nic_pause->rx_pause;
+ pause_info.tx_pause = nic_pause->tx_pause;
+ }
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_PAUSE_INFO,
+ &pause_info, sizeof(pause_info),
+ &pause_info, &out_size);
+ if (err || !out_size || pause_info.msg_head.status) {
+ if (pause_info.msg_head.status == UNSUPPORT_SET_PAUSE) {
+ err = -EOPNOTSUPP;
+			nic_err(nic_io->dev_hdl, "Cannot set pause when pfc is enabled\n");
+ } else {
+ err = -EFAULT;
+ nic_err(nic_io->dev_hdl, "Failed to %s pause info, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_SET ? "set" : "get",
+ err, pause_info.msg_head.status, out_size);
+ }
+ return err;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET) {
+ nic_pause->auto_neg = pause_info.auto_neg;
+ nic_pause->rx_pause = pause_info.rx_pause;
+ nic_pause->tx_pause = pause_info.tx_pause;
+ }
+
+ return 0;
+}
+
+int hinic3_set_pause_info(void *hwdev, struct nic_pause_config nic_pause)
+{
+ struct hinic3_nic_cfg *nic_cfg = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ nic_cfg = &nic_io->nic_cfg;
+
+ down(&nic_cfg->cfg_lock);
+
+ err = hinic3_cfg_hw_pause(nic_io, HINIC3_CMD_OP_SET, &nic_pause);
+ if (err) {
+ up(&nic_cfg->cfg_lock);
+ return err;
+ }
+
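+	/* global pause and PFC are mutually exclusive: clear PFC state and record the new pause settings */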
+ nic_cfg->pfc_en = 0;
+ nic_cfg->pfc_bitmap = 0;
+ nic_cfg->pause_set = true;
+ nic_cfg->nic_pause.auto_neg = nic_pause.auto_neg;
+ nic_cfg->nic_pause.rx_pause = nic_pause.rx_pause;
+ nic_cfg->nic_pause.tx_pause = nic_pause.tx_pause;
+
+ up(&nic_cfg->cfg_lock);
+
+ return 0;
+}
+
+int hinic3_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ int err = 0;
+
+ if (!hwdev || !nic_pause)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ err = hinic3_cfg_hw_pause(nic_io, HINIC3_CMD_OP_GET, nic_pause);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+int hinic3_sync_dcb_state(void *hwdev, u8 op_code, u8 state)
+{
+ struct hinic3_cmd_set_dcb_state dcb_state;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(dcb_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&dcb_state, 0, sizeof(dcb_state));
+
+ dcb_state.op_code = op_code;
+ dcb_state.state = state;
+ dcb_state.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_QOS_DCB_STATE,
+ &dcb_state, sizeof(dcb_state), &dcb_state, &out_size);
+ if (err || dcb_state.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to set dcb state, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, dcb_state.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_dcb_set_rq_iq_mapping(void *hwdev, u32 num_rqs, u8 *map,
+ u32 max_map_num)
+{
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_dcb_set_rq_iq_mapping);
+
+int hinic3_flush_qps_res(void *hwdev)
+{
+ struct hinic3_cmd_clear_qp_resource sq_res;
+ u16 out_size = sizeof(sq_res);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&sq_res, 0, sizeof(sq_res));
+
+ sq_res.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CLEAR_QP_RESOURCE,
+ &sq_res, sizeof(sq_res), &sq_res,
+ &out_size);
+ if (err || !out_size || sq_res.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to clear sq resources, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, sq_res.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_flush_qps_res);
+
+int hinic3_cache_out_qps_res(void *hwdev)
+{
+ struct hinic3_cmd_cache_out_qp_resource qp_res;
+ u16 out_size = sizeof(qp_res);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&qp_res, 0, sizeof(qp_res));
+
+ qp_res.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CACHE_OUT_QP_RES,
+ &qp_res, sizeof(qp_res), &qp_res, &out_size);
+ if (err || !out_size || qp_res.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to cache out qp resources, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, qp_res.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_vport_stats(void *hwdev, u16 func_id, struct hinic3_vport_stats *stats)
+{
+ struct hinic3_port_stats_info stats_info;
+ struct hinic3_cmd_vport_stats vport_stats;
+ u16 out_size = sizeof(vport_stats);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !stats)
+ return -EINVAL;
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ memset(&vport_stats, 0, sizeof(vport_stats));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ stats_info.func_id = func_id;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_VPORT_STAT,
+ &stats_info, sizeof(stats_info),
+ &vport_stats, &out_size);
+ if (err || !out_size || vport_stats.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get function statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vport_stats.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ memcpy(stats, &vport_stats.stats, sizeof(*stats));
+
+ return 0;
+}
+
+static int hinic3_set_function_table(struct hinic3_nic_io *nic_io, u32 cfg_bitmap,
+ const struct hinic3_func_tbl_cfg *cfg)
+{
+ struct hinic3_cmd_set_func_tbl cmd_func_tbl;
+ u16 out_size = sizeof(cmd_func_tbl);
+ int err;
+
+ memset(&cmd_func_tbl, 0, sizeof(cmd_func_tbl));
+ cmd_func_tbl.func_id = hinic3_global_func_id(nic_io->hwdev);
+ cmd_func_tbl.cfg_bitmap = cfg_bitmap;
+ cmd_func_tbl.tbl_cfg = *cfg;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_SET_FUNC_TBL,
+ &cmd_func_tbl, sizeof(cmd_func_tbl),
+ &cmd_func_tbl, &out_size);
+ if (err || cmd_func_tbl.msg_head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to set func table, bitmap: 0x%x, err: %d, status: 0x%x, out size: 0x%x\n",
+ cfg_bitmap, err, cmd_func_tbl.msg_head.status,
+ out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hinic3_init_function_table(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_func_tbl_cfg func_tbl_cfg = {0};
+ u32 cfg_bitmap = BIT(FUNC_CFG_INIT) | BIT(FUNC_CFG_MTU) |
+ BIT(FUNC_CFG_RX_BUF_SIZE);
+
+ func_tbl_cfg.mtu = 0x3FFF; /* default, max mtu */
+ func_tbl_cfg.rx_wqe_buf_size = nic_io->rx_buff_len;
+
+ return hinic3_set_function_table(nic_io, cfg_bitmap, &func_tbl_cfg);
+}
+
+int hinic3_set_port_mtu(void *hwdev, u16 new_mtu)
+{
+ struct hinic3_func_tbl_cfg func_tbl_cfg = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (new_mtu < HINIC3_MIN_MTU_SIZE) {
+ nic_err(nic_io->dev_hdl,
+ "Invalid mtu size: %ubytes, mtu size < %ubytes",
+ new_mtu, HINIC3_MIN_MTU_SIZE);
+ return -EINVAL;
+ }
+
+ if (new_mtu > HINIC3_MAX_JUMBO_FRAME_SIZE) {
+ nic_err(nic_io->dev_hdl, "Invalid mtu size: %ubytes, mtu size > %ubytes",
+ new_mtu, HINIC3_MAX_JUMBO_FRAME_SIZE);
+ return -EINVAL;
+ }
+
+ func_tbl_cfg.mtu = new_mtu;
+ return hinic3_set_function_table(nic_io, BIT(FUNC_CFG_MTU),
+ &func_tbl_cfg);
+}
+
+static int nic_feature_nego(void *hwdev, u8 opcode, u64 *s_feature, u16 size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_feature_nego feature_nego;
+ u16 out_size = sizeof(feature_nego);
+ int err;
+
+ if (!hwdev || !s_feature || size > NIC_MAX_FEATURE_QWORD)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&feature_nego, 0, sizeof(feature_nego));
+ feature_nego.func_id = hinic3_global_func_id(hwdev);
+ feature_nego.opcode = opcode;
+ if (opcode == HINIC3_CMD_OP_SET)
+ memcpy(feature_nego.s_feature, s_feature, size * sizeof(u64));
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FEATURE_NEGO,
+ &feature_nego, sizeof(feature_nego),
+ &feature_nego, &out_size);
+ if (err || !out_size || feature_nego.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to negotiate nic feature, err:%d, status: 0x%x, out_size: 0x%x\n",
+ err, feature_nego.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ memcpy(s_feature, feature_nego.s_feature, size * sizeof(u64));
+
+ return 0;
+}
+
+static int hinic3_get_bios_pf_bw_tx_limit(void *hwdev, struct hinic3_nic_io *nic_io, u16 func_id, u32 *pf_rate)
+{
+	int err = 0; /* default success */
+ struct nic_cmd_bios_cfg cfg = {{0}};
+ u16 out_size = sizeof(cfg);
+
+ cfg.bios_cfg.func_id = (u8)func_id;
+ cfg.bios_cfg.func_valid = 1;
+ cfg.op_code = 0 | NIC_NVM_DATA_PF_TX_SPEED_LIMIT;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_BIOS_CFG, &cfg, sizeof(cfg),
+ &cfg, &out_size);
+ if (err || !out_size || cfg.head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get bios pf bandwidth tx limit, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, cfg.head.status, out_size);
+ return -EIO;
+ }
+
+ /* check data is valid or not */
+ if (cfg.bios_cfg.signature != BIOS_CFG_SIGNATURE)
+ nic_warn(nic_io->dev_hdl, "Invalid bios configuration data, signature: 0x%x\n",
+ cfg.bios_cfg.signature);
+
+ if (cfg.bios_cfg.pf_tx_bw > MAX_LIMIT_BW) {
+ nic_err(nic_io->dev_hdl, "Invalid bios cfg pf bandwidth limit: %u\n",
+ cfg.bios_cfg.pf_tx_bw);
+ return -EINVAL;
+ }
+
+ (*pf_rate) = cfg.bios_cfg.pf_tx_bw;
+ return err;
+}
+
+static int hinic3_get_bios_pf_bw_rx_limit(void *hwdev, struct hinic3_nic_io *nic_io, u16 func_id, u32 *pf_rate)
+{
+	int err = 0; /* default success */
+ struct nic_rx_rate_bios_cfg rx_bios_conf = {{0}};
+ u16 out_size = sizeof(rx_bios_conf);
+
+ rx_bios_conf.func_id = (u8)func_id;
+ rx_bios_conf.op_code = 0; /* 1-save, 0-read */
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_RX_RATE_CFG, &rx_bios_conf, sizeof(rx_bios_conf),
+ &rx_bios_conf, &out_size);
+	if (rx_bios_conf.msg_head.status == HINIC3_MGMT_CMD_UNSUPPORTED && err == 0) { /* compatible with older firmware */
+		nic_warn(nic_io->dev_hdl, "Getting bios pf bandwidth rx limit is not supported\n");
+ return 0;
+ } else if (err || !out_size || rx_bios_conf.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get bios pf bandwidth rx limit, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rx_bios_conf.msg_head.status, out_size);
+ return -EIO;
+ }
+ if (rx_bios_conf.rx_rate_limit > MAX_LIMIT_BW) {
+ nic_err(nic_io->dev_hdl, "Invalid bios cfg pf bandwidth limit: %u\n",
+ rx_bios_conf.rx_rate_limit);
+ return -EINVAL;
+ }
+
+ (*pf_rate) = rx_bios_conf.rx_rate_limit;
+ return err;
+}
+
+static int hinic3_get_bios_pf_bw_limit(void *hwdev, u32 *pf_bw_limit, u8 direct)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 pf_rate = 0;
+ int err = 0;
+	u16 func_id;
+
+	if (!hwdev || !pf_bw_limit)
+		return -EINVAL;
+
+	if (hinic3_func_type(hwdev) == TYPE_VF || !HINIC3_SUPPORT_RATE_LIMIT(hwdev))
+		return 0;
+
+	nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+	if (!nic_io)
+		return -EINVAL;
+
+	func_id = hinic3_global_func_id(hwdev);
+
+ if (direct == HINIC3_NIC_TX) {
+ err = hinic3_get_bios_pf_bw_tx_limit(hwdev, nic_io, func_id, &pf_rate);
+ } else if (direct == HINIC3_NIC_RX) {
+ err = hinic3_get_bios_pf_bw_rx_limit(hwdev, nic_io, func_id, &pf_rate);
+ }
+
+ if (err != 0)
+ return err;
+
+ if (pf_rate > MAX_LIMIT_BW) {
+ nic_err(nic_io->dev_hdl, "Invalid bios cfg pf bandwidth limit: %u\n", pf_rate);
+ return -EINVAL;
+ }
+ *pf_bw_limit = pf_rate;
+
+ return 0;
+}
+
+int hinic3_set_pf_rate(void *hwdev, u8 speed_level)
+{
+ struct hinic3_cmd_tx_rate_cfg rate_cfg = {{0}};
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 rate_limit;
+ u16 out_size = sizeof(rate_cfg);
+ u32 pf_rate = 0;
+ int err;
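+	/* link speed for each speed level, in Mbps */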
+ u32 speed_convert[PORT_SPEED_UNKNOWN] = {
+ 0, 10, 100, 1000, 10000, 25000, 40000, 50000, 100000, 200000
+ };
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (speed_level >= PORT_SPEED_UNKNOWN) {
+ nic_err(nic_io->dev_hdl, "Invalid speed level: %hhu\n", speed_level);
+ return -EINVAL;
+ }
+
+ rate_limit = (nic_io->direct == HINIC3_NIC_TX) ?
+ nic_io->nic_cfg.pf_bw_tx_limit : nic_io->nic_cfg.pf_bw_rx_limit;
+
+ if (rate_limit != MAX_LIMIT_BW) {
+ /* divided by 100 to convert to percentage */
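+		/* e.g. a 25000 Mbps link with rate_limit 40 gives (25000 / 100) * 40 = 10000 Mbps */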
+ pf_rate = (speed_convert[speed_level] / 100) * rate_limit;
+		/* bandwidth limit is very small but not unlimited in this case */
+ if ((pf_rate == 0) && (speed_level != PORT_SPEED_NOT_SET))
+ pf_rate = 1;
+ }
+
+ rate_cfg.func_id = hinic3_global_func_id(hwdev);
+ rate_cfg.min_rate = 0;
+ rate_cfg.max_rate = pf_rate;
+ rate_cfg.direct = nic_io->direct;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_MAX_MIN_RATE, &rate_cfg,
+ sizeof(rate_cfg), &rate_cfg, &out_size);
+ if (err || !out_size || rate_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rate(%u), err: %d, status: 0x%x, out size: 0x%x\n",
+ pf_rate, err, rate_cfg.msg_head.status, out_size);
+ return rate_cfg.msg_head.status ? rate_cfg.msg_head.status : -EIO;
+ }
+
+ return 0;
+}
+
+static int hinic3_get_nic_feature_from_hw(void *hwdev, u64 *s_feature, u16 size)
+{
+ return nic_feature_nego(hwdev, HINIC3_CMD_OP_GET, s_feature, size);
+}
+
+int hinic3_set_nic_feature_to_hw(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return nic_feature_nego(hwdev, HINIC3_CMD_OP_SET, &nic_io->feature_cap, 1);
+}
+
+u64 hinic3_get_feature_cap(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return 0;
+
+ return nic_io->feature_cap;
+}
+
+void hinic3_update_nic_feature(void *hwdev, u64 s_feature)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ nic_io->feature_cap = s_feature;
+
+ nic_info(nic_io->dev_hdl, "Update nic feature to 0x%llx\n", nic_io->feature_cap);
+}
+
+static inline int init_nic_hwdev_param_valid(const void *hwdev, const void *pcidev_hdl,
+ const void *dev_hdl)
+{
+ if (!hwdev || !pcidev_hdl || !dev_hdl)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int hinic3_init_nic_io(void *hwdev, void *pcidev_hdl, void *dev_hdl,
+ struct hinic3_nic_io **nic_io)
+{
+ if (init_nic_hwdev_param_valid(hwdev, pcidev_hdl, dev_hdl))
+ return -EINVAL;
+
+ *nic_io = kzalloc(sizeof(**nic_io), GFP_KERNEL);
+ if (!(*nic_io))
+ return -ENOMEM;
+
+ (*nic_io)->dev_hdl = dev_hdl;
+ (*nic_io)->pcidev_hdl = pcidev_hdl;
+ (*nic_io)->hwdev = hwdev;
+
+ sema_init(&((*nic_io)->nic_cfg.cfg_lock), 1);
+ mutex_init(&((*nic_io)->nic_cfg.sfp_mutex));
+
+ (*nic_io)->nic_cfg.rt_cmd.mpu_send_sfp_abs = false;
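+/* a VF tried to change a MAC address that the PF has already assigned to it */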
+ (*nic_io)->nic_cfg.rt_cmd.mpu_send_sfp_info = false;
+ (*nic_io)->nic_cfg.rt_cmd_ext.mpu_send_xsfp_tlv_info = false;
+
+ return 0;
+}
+
+/* *
+ * hinic3_init_nic_hwdev - init nic hwdev
+ * @hwdev: pointer to hwdev
+ * @pcidev_hdl: pointer to pcidev or handler
+ * @dev_hdl: pointer to pcidev->dev or handler, for sdk_err() or dma_alloc()
+ * @rx_buff_len: receive buffer length
+ */
+int hinic3_init_nic_hwdev(void *hwdev, void *pcidev_hdl, void *dev_hdl,
+ u16 rx_buff_len)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+ int is_in_kexec = vram_get_kexec_flag();
+
+ err = hinic3_init_nic_io(hwdev, pcidev_hdl, dev_hdl, &nic_io);
+ if (err)
+ return err;
+
+ nic_io->rx_buff_len = rx_buff_len;
+
+ err = hinic3_register_service_adapter(hwdev, nic_io, SERVICE_T_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to register service adapter\n");
+ goto register_sa_err;
+ }
+
+ err = hinic3_set_func_svc_used_state(hwdev, SVC_T_NIC, 1, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set function svc used state\n");
+ goto set_used_state_err;
+ }
+
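+	/* the function table is only initialized on a normal start; it is skipped after kexec */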
+ if (is_in_kexec == 0) {
+ err = hinic3_init_function_table(nic_io);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to init function table\n");
+ goto err_out;
+ }
+ }
+
+ err = hinic3_get_nic_feature_from_hw(hwdev, &nic_io->feature_cap, 1);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get nic features\n");
+ goto err_out;
+ }
+
+ sdk_info(dev_hdl, "nic features: 0x%llx\n", nic_io->feature_cap);
+
+ err = hinic3_get_bios_pf_bw_limit(hwdev,
+ &nic_io->nic_cfg.pf_bw_tx_limit,
+ HINIC3_NIC_TX);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to get pf tx bandwidth limit\n");
+ goto err_out;
+ }
+
+ err = hinic3_get_bios_pf_bw_limit(hwdev,
+ &nic_io->nic_cfg.pf_bw_rx_limit,
+ HINIC3_NIC_RX);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to get pf rx bandwidth limit\n");
+ goto err_out;
+ }
+
+ err = hinic3_vf_func_init(nic_io);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to init vf info\n");
+ goto err_out;
+ }
+
+ return 0;
+
+err_out:
+ if (hinic3_set_func_svc_used_state(hwdev, SVC_T_NIC, 0,
+ HINIC3_CHANNEL_NIC) != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set function svc used state\n");
+ }
+
+set_used_state_err:
+ hinic3_unregister_service_adapter(hwdev, SERVICE_T_NIC);
+
+register_sa_err:
+ mutex_deinit(&nic_io->nic_cfg.sfp_mutex);
+ sema_deinit(&nic_io->nic_cfg.cfg_lock);
+
+ kfree(nic_io);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_init_nic_hwdev);
+
+void hinic3_free_nic_hwdev(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ hinic3_vf_func_free(nic_io);
+
+ hinic3_set_func_svc_used_state(hwdev, SVC_T_NIC, 0, HINIC3_CHANNEL_NIC);
+
+ hinic3_unregister_service_adapter(hwdev, SERVICE_T_NIC);
+
+ mutex_deinit(&nic_io->nic_cfg.sfp_mutex);
+ sema_deinit(&nic_io->nic_cfg.cfg_lock);
+
+ kfree(nic_io);
+}
+EXPORT_SYMBOL(hinic3_free_nic_hwdev);
+
+int hinic3_force_drop_tx_pkt(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_force_pkt_drop pkt_drop;
+ u16 out_size = sizeof(pkt_drop);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&pkt_drop, 0, sizeof(pkt_drop));
+ pkt_drop.port = hinic3_physical_port_id(hwdev);
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FORCE_PKT_DROP,
+ &pkt_drop, sizeof(pkt_drop),
+ &pkt_drop, &out_size);
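+	/* an unsupported status from older firmware is not treated as an error; it is returned to the caller */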
+ if ((pkt_drop.msg_head.status != HINIC3_MGMT_CMD_UNSUPPORTED &&
+ pkt_drop.msg_head.status) || err || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to set force tx packets drop, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, pkt_drop.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return pkt_drop.msg_head.status;
+}
+
+int hinic3_set_rx_mode(void *hwdev, u32 enable)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_rx_mode_config rx_mode_cfg;
+ u16 out_size = sizeof(rx_mode_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&rx_mode_cfg, 0, sizeof(rx_mode_cfg));
+ rx_mode_cfg.func_id = hinic3_global_func_id(hwdev);
+ rx_mode_cfg.rx_mode = enable;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RX_MODE,
+ &rx_mode_cfg, sizeof(rx_mode_cfg),
+ &rx_mode_cfg, &out_size);
+ if (err || !out_size || rx_mode_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rx mode, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rx_mode_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_rx_vlan_offload(void *hwdev, u8 en)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_vlan_offload vlan_cfg;
+ u16 out_size = sizeof(vlan_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&vlan_cfg, 0, sizeof(vlan_cfg));
+ vlan_cfg.func_id = hinic3_global_func_id(hwdev);
+ vlan_cfg.vlan_offload = en;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD,
+ &vlan_cfg, sizeof(vlan_cfg),
+ &vlan_cfg, &out_size);
+ if (err || !out_size || vlan_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rx vlan offload, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vlan_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_update_mac_vlan(void *hwdev, u16 old_vlan, u16 new_vlan, int vf_id)
+{
+ struct vf_data_storage *vf_info = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 func_id;
+ int err;
+
+ if (!hwdev || old_vlan >= VLAN_N_VID || new_vlan >= VLAN_N_VID)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+	if (!nic_io->vf_infos)
+		return 0;
+
+	vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+	if (is_zero_ether_addr(vf_info->drv_mac_addr))
+		return 0;
+
+ func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+
+ err = hinic3_del_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ old_vlan, func_id, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to delete VF %d MAC %pM vlan %u\n",
+ HW_VF_ID_TO_OS(vf_id), vf_info->drv_mac_addr, old_vlan);
+ return err;
+ }
+
+ err = hinic3_set_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ new_vlan, func_id, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to add VF %d MAC %pM vlan %u\n",
+ HW_VF_ID_TO_OS(vf_id), vf_info->drv_mac_addr, new_vlan);
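+		/* restore the MAC under the old vlan so the VF does not lose its entry */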
+ hinic3_set_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ old_vlan, func_id, HINIC3_CHANNEL_NIC);
+ return err;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_rx_lro(void *hwdev, u8 ipv4_en, u8 ipv6_en,
+ u8 lro_max_pkt_len)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_lro_config lro_cfg;
+ u16 out_size = sizeof(lro_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&lro_cfg, 0, sizeof(lro_cfg));
+ lro_cfg.func_id = hinic3_global_func_id(hwdev);
+ lro_cfg.opcode = HINIC3_CMD_OP_SET;
+ lro_cfg.lro_ipv4_en = ipv4_en;
+ lro_cfg.lro_ipv6_en = ipv6_en;
+ lro_cfg.lro_max_pkt_len = lro_max_pkt_len;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_RX_LRO,
+ &lro_cfg, sizeof(lro_cfg),
+ &lro_cfg, &out_size);
+ if (err || !out_size || lro_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set lro offload, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, lro_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_rx_lro_timer(void *hwdev, u32 timer_value)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_lro_timer lro_timer;
+ u16 out_size = sizeof(lro_timer);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&lro_timer, 0, sizeof(lro_timer));
+ lro_timer.opcode = HINIC3_CMD_OP_SET;
+ lro_timer.timer = timer_value;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_LRO_TIMER,
+ &lro_timer, sizeof(lro_timer),
+ &lro_timer, &out_size);
+ if (err || !out_size || lro_timer.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set lro timer, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, lro_timer.msg_head.status, out_size);
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_rx_lro_state(void *hwdev, u8 lro_en, u32 lro_timer,
+ u32 lro_max_pkt_len)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u8 ipv4_en = 0, ipv6_en = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ ipv4_en = lro_en ? 1 : 0;
+ ipv6_en = lro_en ? 1 : 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ nic_info(nic_io->dev_hdl, "Set LRO max coalesce packet size to %uK\n",
+ lro_max_pkt_len);
+
+ err = hinic3_set_rx_lro(hwdev, ipv4_en, ipv6_en, (u8)lro_max_pkt_len);
+ if (err)
+ return err;
+
+ /* we don't set LRO timer for VF */
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ nic_info(nic_io->dev_hdl, "Set LRO timer to %u\n", lro_timer);
+
+ return hinic3_set_rx_lro_timer(hwdev, lro_timer);
+}
+
+int hinic3_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_set_vlan_filter vlan_filter;
+ u16 out_size = sizeof(vlan_filter);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.func_id = hinic3_global_func_id(hwdev);
+ vlan_filter.vlan_filter_ctrl = vlan_filter_ctrl;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN,
+ &vlan_filter, sizeof(vlan_filter),
+ &vlan_filter, &out_size);
+ if (err || !out_size || vlan_filter.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set vlan filter, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vlan_filter.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_func_capture_en(void *hwdev, u16 func_id, bool cap_en)
+{
+ struct nic_cmd_capture_info cap_info = {{0}};
+ u16 out_size = sizeof(cap_info);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ /* 2 function capture types */
+ cap_info.is_en_trx = cap_en;
+ cap_info.func_port = func_id;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_UCAPTURE_OPT,
+ &cap_info, sizeof(cap_info),
+ &cap_info, &out_size);
+ if (err || !out_size || cap_info.msg_head.status)
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_func_capture_en);
+
+int hinic3_add_tcam_rule(void *hwdev, struct nic_tcam_cfg_rule *tcam_rule)
+{
+ u16 out_size = sizeof(struct nic_cmd_fdir_add_rule);
+ struct nic_cmd_fdir_add_rule tcam_cmd;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !tcam_rule)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ if (tcam_rule->index >= HINIC3_MAX_TCAM_RULES_NUM) {
+		nic_err(nic_io->dev_hdl, "Tcam rule index to add is invalid\n");
+ return -EINVAL;
+ }
+
+ memset(&tcam_cmd, 0, sizeof(struct nic_cmd_fdir_add_rule));
+ memcpy((void *)&tcam_cmd.rule, (void *)tcam_rule,
+ sizeof(struct nic_tcam_cfg_rule));
+ tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ tcam_cmd.type = TCAM_RULE_FDIR_TYPE;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_ADD_TC_FLOW,
+ &tcam_cmd, sizeof(tcam_cmd),
+ &tcam_cmd, &out_size);
+ if (err || tcam_cmd.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_cmd.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_del_tcam_rule(void *hwdev, u32 index)
+{
+ u16 out_size = sizeof(struct nic_cmd_fdir_del_rules);
+ struct nic_cmd_fdir_del_rules tcam_cmd;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ if (index >= HINIC3_MAX_TCAM_RULES_NUM) {
+		nic_err(nic_io->dev_hdl, "Tcam rule index to delete is invalid\n");
+ return -EINVAL;
+ }
+
+ memset(&tcam_cmd, 0, sizeof(struct nic_cmd_fdir_del_rules));
+ tcam_cmd.index_start = index;
+ tcam_cmd.index_num = 1;
+ tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ tcam_cmd.type = TCAM_RULE_FDIR_TYPE;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_DEL_TC_FLOW,
+ &tcam_cmd, sizeof(tcam_cmd),
+ &tcam_cmd, &out_size);
+ if (err || tcam_cmd.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Del tcam rule failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_cmd.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * hinic3_mgmt_tcam_block - alloc or free tcam block for IO packet.
+ *
+ * @param hwdev
+ * The hardware interface of a nic device.
+ * @param alloc_en
+ * 1 alloc block.
+ * 0 free block.
+ * @param index
+ * block index from firmware.
+ * @return
+ * 0 on success,
+ * negative error value otherwise.
+ */
+static int hinic3_mgmt_tcam_block(void *hwdev, u8 alloc_en, u16 *index)
+{
+ struct nic_cmd_ctrl_tcam_block_out tcam_block_info;
+ u16 out_size = sizeof(struct nic_cmd_ctrl_tcam_block_out);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !index)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ memset(&tcam_block_info, 0,
+ sizeof(struct nic_cmd_ctrl_tcam_block_out));
+
+ tcam_block_info.func_id = hinic3_global_func_id(hwdev);
+ tcam_block_info.alloc_en = alloc_en;
+ tcam_block_info.tcam_type = NIC_TCAM_BLOCK_TYPE_LARGE;
+ tcam_block_info.tcam_block_index = *index;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_TCAM_BLOCK,
+ &tcam_block_info, sizeof(tcam_block_info),
+ &tcam_block_info, &out_size);
+ if (err || tcam_block_info.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Set tcam block failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_block_info.head.status, out_size);
+ return -EIO;
+ }
+
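+	/* on allocation, return the block index selected by the firmware */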
+ if (alloc_en)
+ *index = tcam_block_info.tcam_block_index;
+
+ return 0;
+}
+
+int hinic3_alloc_tcam_block(void *hwdev, u16 *index)
+{
+ return hinic3_mgmt_tcam_block(hwdev, HINIC3_TCAM_BLOCK_ENABLE, index);
+}
+
+int hinic3_free_tcam_block(void *hwdev, u16 *index)
+{
+ return hinic3_mgmt_tcam_block(hwdev, HINIC3_TCAM_BLOCK_DISABLE, index);
+}
+
+int hinic3_set_fdir_tcam_rule_filter(void *hwdev, bool enable)
+{
+ struct nic_cmd_set_tcam_enable port_tcam_cmd;
+ u16 out_size = sizeof(port_tcam_cmd);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ memset(&port_tcam_cmd, 0, sizeof(port_tcam_cmd));
+ port_tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ port_tcam_cmd.tcam_enable = (u8)enable;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_ENABLE_TCAM,
+ &port_tcam_cmd, sizeof(port_tcam_cmd),
+ &port_tcam_cmd, &out_size);
+ if (err || port_tcam_cmd.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl, "Set fdir tcam filter failed, err: %d, status: 0x%x, out size: 0x%x, enable: 0x%x\n",
+ err, port_tcam_cmd.head.status, out_size,
+ enable);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_flush_tcam_rule(void *hwdev)
+{
+ struct nic_cmd_flush_tcam_rules tcam_flush;
+ u16 out_size = sizeof(struct nic_cmd_flush_tcam_rules);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&tcam_flush, 0, sizeof(struct nic_cmd_flush_tcam_rules));
+ tcam_flush.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FLUSH_TCAM,
+ &tcam_flush,
+ sizeof(struct nic_cmd_flush_tcam_rules),
+ &tcam_flush, &out_size);
+ if (err || tcam_flush.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Flush tcam fdir rules failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_flush.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_rxq_hw_info(void *hwdev, struct rxq_check_info *rxq_info, u16 num_qps, u16 wqe_type)
+{
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_rxq_hw *rxq_hw = NULL;
+ struct rxq_check_info *rxq_info_out = NULL;
+ int err;
+ u16 i;
+
+ if (!hwdev || !rxq_info)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd_buf.\n");
+ return -ENOMEM;
+ }
+
+ rxq_hw = cmd_buf->buf;
+ rxq_hw->func_id = hinic3_global_func_id(hwdev);
+ rxq_hw->num_queues = num_qps;
+
+ hinic3_cpu_to_be32(rxq_hw, sizeof(struct hinic3_rxq_hw));
+
+ cmd_buf->size = sizeof(struct hinic3_rxq_hw);
+
+ err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, HINIC3_UCODE_CMD_RXQ_INFO_GET,
+ cmd_buf, cmd_buf, NULL, 0, HINIC3_CHANNEL_NIC);
+ if (err)
+ goto get_rxq_info_failed;
+
+ rxq_info_out = cmd_buf->buf;
+ for (i = 0; i < num_qps; i++) {
+ rxq_info[i].hw_pi = rxq_info_out[i].hw_pi >> wqe_type;
+ rxq_info[i].hw_ci = rxq_info_out[i].hw_ci >> wqe_type;
+ }
+
+get_rxq_info_failed:
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+
+ return err;
+}
+
+int hinic3_pf_set_vf_link_state(void *hwdev, bool vf_link_forced, bool link_state)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct vf_data_storage *vf_infos = NULL;
+ int vf_id;
+
+ if (!hwdev) {
+ pr_err("hwdev is null.\n");
+ return -EINVAL;
+ }
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+		pr_err("Setting link state is not supported on VF.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is null.\n");
+ return -EINVAL;
+ }
+
+ vf_infos = nic_io->vf_infos;
+ for (vf_id = 0; vf_id < nic_io->max_vfs; vf_id++) {
+ vf_infos[vf_id].link_up = link_state;
+ vf_infos[vf_id].link_forced = vf_link_forced;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_pf_set_vf_link_state);
+
+int hinic3_get_outband_vlan_cfg(void *hwdev, u16 *outband_default_vid)
+{
+ struct hinic3_outband_cfg_info outband_cfg_info;
+ u16 out_size = sizeof(outband_cfg_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !outband_default_vid)
+ return -EINVAL;
+
+ memset(&outband_cfg_info, 0, sizeof(outband_cfg_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_OUTBAND_CFG,
+ &outband_cfg_info,
+ sizeof(outband_cfg_info),
+ &outband_cfg_info, &out_size);
+ if (err || !out_size || outband_cfg_info.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get outband cfg, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, outband_cfg_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ *outband_default_vid = outband_cfg_info.outband_default_vid;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h
new file mode 100644
index 0000000..0fe7b9f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h
@@ -0,0 +1,664 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_CFG_H
+#define HINIC3_NIC_CFG_H
+
+#include <linux/types.h>
+#include <linux/netdevice.h>
+
+#include "nic_mpu_cmd_defs.h"
+#include "mag_mpu_cmd.h"
+#include "mag_mpu_cmd_defs.h"
+
+#define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1)
+#define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1)
+
+#define HINIC3_VLAN_PRIORITY_SHIFT 13
+
+#define HINIC3_RSS_INDIR_4B_UNIT 3
+#define HINIC3_RSS_INDIR_NUM 2
+
+#define HINIC3_RSS_KEY_RSV_NUM 2
+#define HINIC3_MAX_NUM_RQ 256
+
+#define HINIC3_MIN_MTU_SIZE 256
+#define HINIC3_MAX_JUMBO_FRAME_SIZE 9600
+
+#define HINIC3_PF_SET_VF_ALREADY 0x4
+#define HINIC3_MGMT_STATUS_EXIST 0x6
+#define CHECK_IPSU_15BIT 0x8000
+
+#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB /* Table empty */
+#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC /* Table full */
+
+#define HINIC3_LOWEST_LATENCY 3
+#define HINIC3_MULTI_VM_LATENCY 32
+#define HINIC3_MULTI_VM_PENDING_LIMIT 4
+
+#define HINIC3_RX_RATE_LOW 200000
+#define HINIC3_RX_COAL_TIME_LOW 25
+#define HINIC3_RX_PENDING_LIMIT_LOW 2
+
+#define HINIC3_RX_RATE_HIGH 700000
+#define HINIC3_RX_COAL_TIME_HIGH 225
+#define HINIC3_RX_PENDING_LIMIT_HIGH 8
+
+#define HINIC3_RX_RATE_THRESH 50000
+#define HINIC3_TX_RATE_THRESH 50000
+#define HINIC3_RX_RATE_LOW_VM 100000
+#define HINIC3_RX_PENDING_LIMIT_HIGH_VM 87
+
+#define HINIC3_DCB_PCP 0
+#define HINIC3_DCB_DSCP 1
+
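+/* pf bandwidth limits are expressed as a percentage of the port speed */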
+#define MAX_LIMIT_BW 100
+
+#define HINIC3_INVALID_BOND_ID 0xffffffff
+
+enum hinic3_valid_link_settings {
+ HILINK_LINK_SET_SPEED = 0x1,
+ HILINK_LINK_SET_AUTONEG = 0x2,
+ HILINK_LINK_SET_FEC = 0x4,
+};
+
+enum hinic3_link_follow_status {
+ HINIC3_LINK_FOLLOW_DEFAULT,
+ HINIC3_LINK_FOLLOW_PORT,
+ HINIC3_LINK_FOLLOW_SEPARATE,
+ HINIC3_LINK_FOLLOW_STATUS_MAX,
+};
+
+enum hinic3_nic_pf_direct {
+ HINIC3_NIC_RX = 0,
+ HINIC3_NIC_TX,
+};
+
+struct hinic3_link_ksettings {
+ u32 valid_bitmap;
+ u8 speed; /* enum nic_speed_level */
+ u8 autoneg; /* 0 - off; 1 - on */
+ u8 fec; /* 0 - RSFEC; 1 - BASEFEC; 2 - NOFEC */
+};
+
+u64 hinic3_get_feature_cap(void *hwdev);
+
+#define HINIC3_SUPPORT_FEATURE(hwdev, feature) \
+ (hinic3_get_feature_cap(hwdev) & NIC_F_##feature)
+#define HINIC3_SUPPORT_CSUM(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, CSUM)
+#define HINIC3_SUPPORT_SCTP_CRC(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, SCTP_CRC)
+#define HINIC3_SUPPORT_TSO(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, TSO)
+#define HINIC3_SUPPORT_UFO(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, UFO)
+#define HINIC3_SUPPORT_LRO(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, LRO)
+#define HINIC3_SUPPORT_RSS(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, RSS)
+#define HINIC3_SUPPORT_RXVLAN_FILTER(hwdev) \
+ HINIC3_SUPPORT_FEATURE(hwdev, RX_VLAN_FILTER)
+#define HINIC3_SUPPORT_VLAN_OFFLOAD(hwdev) \
+ (HINIC3_SUPPORT_FEATURE(hwdev, RX_VLAN_STRIP) && \
+ HINIC3_SUPPORT_FEATURE(hwdev, TX_VLAN_INSERT))
+#define HINIC3_SUPPORT_VXLAN_OFFLOAD(hwdev) \
+ HINIC3_SUPPORT_FEATURE(hwdev, VXLAN_OFFLOAD)
+#define HINIC3_SUPPORT_IPSEC_OFFLOAD(hwdev) \
+ HINIC3_SUPPORT_FEATURE(hwdev, IPSEC_OFFLOAD)
+#define HINIC3_SUPPORT_FDIR(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, FDIR)
+#define HINIC3_SUPPORT_PROMISC(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, PROMISC)
+#define HINIC3_SUPPORT_ALLMULTI(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, ALLMULTI)
+#define HINIC3_SUPPORT_VF_MAC(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, VF_MAC)
+#define HINIC3_SUPPORT_RATE_LIMIT(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, RATE_LIMIT)
+
+#define HINIC3_SUPPORT_RXQ_RECOVERY(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, RXQ_RECOVERY)
+
+struct nic_rss_type {
+ u8 tcp_ipv6_ext;
+ u8 ipv6_ext;
+ u8 tcp_ipv6;
+ u8 ipv6;
+ u8 tcp_ipv4;
+ u8 ipv4;
+ u8 udp_ipv6;
+ u8 udp_ipv4;
+};
+
+enum hinic3_rss_hash_type {
+ HINIC3_RSS_HASH_ENGINE_TYPE_XOR = 0,
+ HINIC3_RSS_HASH_ENGINE_TYPE_TOEP,
+ HINIC3_RSS_HASH_ENGINE_TYPE_MAX,
+};
+
+/* rss */
+struct nic_rss_indirect_tbl {
+	u32 rsvd[4]; /* reserve 16B before entry[] */
+ u16 entry[NIC_RSS_INDIR_SIZE];
+};
+
+struct nic_rss_context_tbl {
+ u32 rsvd[4];
+ u32 ctx;
+};
+
+#define NIC_CONFIG_ALL_QUEUE_VLAN_CTX 0xFFFF
+struct nic_vlan_ctx {
+ u32 func_id;
+ u32 qid; /* if qid = 0xFFFF, config current function all queue */
+ u32 vlan_tag;
+ u32 vlan_mode;
+ u32 vlan_sel;
+};
+
+enum hinic3_link_status {
+ HINIC3_LINK_DOWN = 0,
+ HINIC3_LINK_UP
+};
+
+struct nic_port_info {
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+ u8 lanes;
+ u8 rsvd;
+ u32 supported_mode;
+ u32 advertised_mode;
+ u32 supported_fec_mode;
+ u32 bond_speed;
+};
+
+struct nic_pause_config {
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+};
+
+struct rxq_check_info {
+ u16 hw_pi;
+ u16 hw_ci;
+};
+
+struct hinic3_rxq_hw {
+ u32 func_id;
+ u32 num_queues;
+
+ u32 rsvd[14];
+};
+
+#define MODULE_TYPE_SFP 0x3
+#define MODULE_TYPE_QSFP28 0x11
+#define MODULE_TYPE_QSFP 0x0C
+#define MODULE_TYPE_QSFP_PLUS 0x0D
+#define MODULE_TYPE_DSFP 0x1B
+#define MODULE_TYPE_QSFP_CMIS 0x1E
+
+#define TCAM_IP_TYPE_MASK 0x1
+#define TCAM_TUNNEL_TYPE_MASK 0xF
+#define TCAM_FUNC_ID_MASK 0x7FFF
+
+int hinic3_delete_bond(void *hwdev);
+int hinic3_open_close_bond(void *hwdev, u32 bond_en);
+int hinic3_create_bond(void *hwdev, u32 *bond_id);
+
+int hinic3_add_tcam_rule(void *hwdev, struct nic_tcam_cfg_rule *tcam_rule);
+int hinic3_del_tcam_rule(void *hwdev, u32 index);
+
+int hinic3_alloc_tcam_block(void *hwdev, u16 *index);
+int hinic3_free_tcam_block(void *hwdev, u16 *index);
+
+int hinic3_set_fdir_tcam_rule_filter(void *hwdev, bool enable);
+
+int hinic3_flush_tcam_rule(void *hwdev);
+
+/* *
+ * @brief hinic3_update_mac - update mac address to hardware
+ * @param hwdev: device pointer to hwdev
+ * @param old_mac: old mac to delete
+ * @param new_mac: new mac to update
+ * @param vlan_id: vlan id
+ * @param func_id: function index
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_update_mac(void *hwdev, const u8 *old_mac, u8 *new_mac, u16 vlan_id,
+ u16 func_id);
+
+/* *
+ * @brief hinic3_get_default_mac - get default mac address
+ * @param hwdev: device pointer to hwdev
+ * @param mac_addr: mac address from hardware
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_default_mac(void *hwdev, u8 *mac_addr);
+
+/* *
+ * @brief hinic3_set_port_mtu - set function mtu
+ * @param hwdev: device pointer to hwdev
+ * @param new_mtu: mtu
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_port_mtu(void *hwdev, u16 new_mtu);
+
+/* *
+ * @brief hinic3_get_link_state - get link state
+ * @param hwdev: device pointer to hwdev
+ * @param link_state: link state, 0-link down, 1-link up
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_link_state(void *hwdev, u8 *link_state);
+
+/* *
+ * @brief hinic3_get_vport_stats - get function stats
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: function index
+ * @param stats: function stats
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_vport_stats(void *hwdev, u16 func_id, struct hinic3_vport_stats *stats);
+
+/* *
+ * @brief hinic3_notify_all_vfs_link_changed - notify all vfs of the link change
+ * @param hwdev: device pointer to hwdev
+ * @param link_status: link state, 0-link down, 1-link up
+ */
+void hinic3_notify_all_vfs_link_changed(void *hwdev, u8 link_status);
+
+/* *
+ * @brief hinic3_force_drop_tx_pkt - force drop tx packet
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_force_drop_tx_pkt(void *hwdev);
+
+/* *
+ * @brief hinic3_set_rx_mode - set function rx mode
+ * @param hwdev: device pointer to hwdev
+ * @param enable: rx mode state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rx_mode(void *hwdev, u32 enable);
+
+/* *
+ * @brief hinic3_set_rx_vlan_offload - set function vlan offload valid state
+ * @param hwdev: device pointer to hwdev
+ * @param en: 0-disable, 1-enable
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rx_vlan_offload(void *hwdev, u8 en);
+
+/* *
+ * @brief hinic3_set_rx_lro_state - set rx LRO configuration
+ * @param hwdev: device pointer to hwdev
+ * @param lro_en: 0-disable, 1-enable
+ * @param lro_timer: LRO aggregation timeout
+ * @param lro_max_pkt_len: LRO coalesce packet size(unit is 1K)
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rx_lro_state(void *hwdev, u8 lro_en, u32 lro_timer,
+ u32 lro_max_pkt_len);
+
+/* *
+ * @brief hinic3_set_vf_spoofchk - set vf spoofchk
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param spoofchk: spoofchk
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_spoofchk(void *hwdev, u16 vf_id, bool spoofchk);
+
+/* *
+ * @brief hinic3_vf_info_spoofchk - get vf spoofchk info
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @retval spoofchk state
+ */
+bool hinic3_vf_info_spoofchk(void *hwdev, int vf_id);
+
+/* *
+ * @brief hinic3_add_vf_vlan - add vf vlan id
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param vlan: vlan id
+ * @param qos: qos
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_add_vf_vlan(void *hwdev, int vf_id, u16 vlan, u8 qos);
+
+/* *
+ * @brief hinic3_kill_vf_vlan - kill vf vlan
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_kill_vf_vlan(void *hwdev, int vf_id);
+
+/* *
+ * @brief hinic3_set_vf_mac - set vf mac
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param mac_addr: vf mac address
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_mac(void *hwdev, int vf_id, const unsigned char *mac_addr);
+
+/* *
+ * @brief hinic3_vf_info_vlanprio - get vf vlan priority
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @retval zero: vlan priority
+ */
+u16 hinic3_vf_info_vlanprio(void *hwdev, int vf_id);
+
+/* *
+ * @brief hinic3_set_vf_tx_rate - set vf tx rate
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param max_rate: max rate
+ * @param min_rate: min rate
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_tx_rate(void *hwdev, u16 vf_id, u32 max_rate, u32 min_rate);
+
+/* *
+ * @brief hinic3_get_vf_config - get vf configuration
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param ivi: vf info
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+void hinic3_get_vf_config(void *hwdev, u16 vf_id, struct ifla_vf_info *ivi);
+
+/* *
+ * @brief hinic3_set_vf_link_state - set vf link state
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param link: link state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_link_state(void *hwdev, u16 vf_id, int link);
+
+/* *
+ * @brief hinic3_get_port_info - get port info
+ * @param hwdev: device pointer to hwdev
+ * @param port_info: port info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_port_info(void *hwdev, struct nic_port_info *port_info,
+ u16 channel);
+
+/* *
+ * @brief hinic3_set_rss_type - set rss type
+ * @param hwdev: device pointer to hwdev
+ * @param rss_type: rss type
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rss_type(void *hwdev, struct nic_rss_type rss_type);
+
+/* *
+ * @brief hinic3_get_rss_type - get rss type
+ * @param hwdev: device pointer to hwdev
+ * @param rss_type: rss type
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_rss_type(void *hwdev, struct nic_rss_type *rss_type);
+
+/* *
+ * @brief hinic3_rss_get_hash_engine - get rss hash engine
+ * @param hwdev: device pointer to hwdev
+ * @param type: hash engine
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_get_hash_engine(void *hwdev, u8 *type);
+
+/* *
+ * @brief hinic3_rss_set_hash_engine - set rss hash engine
+ * @param hwdev: device pointer to hwdev
+ * @param type: hash engine
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_set_hash_engine(void *hwdev, u8 type);
+
+/* *
+ * @brief hinic3_rss_cfg - set rss configuration
+ * @param hwdev: device pointer to hwdev
+ * @param rss_en: enable rss flag
+ * @param cos_num: number of cos
+ * @param prio_tc: priority to tc mapping
+ * @param num_qps: number of queues
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_cfg(void *hwdev, u8 rss_en, u8 cos_num, u8 *prio_tc,
+ u16 num_qps);
+
+/* *
+ * @brief hinic3_rss_set_hash_key - set rss hash key
+ * @param hwdev: device pointer to hwdev
+ * @param key: rss key
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_set_hash_key(void *hwdev, const u8 *key);
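+
+/* Illustrative sketch: programming a Toeplitz-style hash key (the 40-byte key
+ * length is only an assumption here; the real size comes from the NIC RSS
+ * definitions):
+ *
+ *	u8 hkey[40] = { 0x6d, 0x5a, ... };
+ *
+ *	err = hinic3_rss_set_hash_key(hwdev, hkey);
+ */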
+
+/* *
+ * @brief hinic3_rss_get_hash_key - get rss hash key
+ * @param hwdev: device pointer to hwdev
+ * @param key: rss key
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_get_hash_key(void *hwdev, u8 *key);
+
+/* *
+ * @brief hinic3_refresh_nic_cfg - refresh port cfg
+ * @param hwdev: device pointer to hwdev
+ * @param port_info: port information
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_refresh_nic_cfg(void *hwdev, struct nic_port_info *port_info);
+
+/* *
+ * @brief hinic3_add_vlan - add vlan
+ * @param hwdev: device pointer to hwdev
+ * @param vlan_id: vlan id
+ * @param func_id: function id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_add_vlan(void *hwdev, u16 vlan_id, u16 func_id);
+
+/* *
+ * @brief hinic3_del_vlan - delete vlan
+ * @param hwdev: device pointer to hwdev
+ * @param vlan_id: vlan id
+ * @param func_id: function id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_del_vlan(void *hwdev, u16 vlan_id, u16 func_id);
+
+/* *
+ * @brief hinic3_rss_set_indir_tbl - set rss indirect table
+ * @param hwdev: device pointer to hwdev
+ * @param indir_table: rss indirect table
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_set_indir_tbl(void *hwdev, const u32 *indir_table);
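+
+/* Illustrative sketch: spreading the active queues evenly across the RSS
+ * indirection table (the table size of 256 and the num_qps variable are
+ * assumptions for the example, not the driver's real constants):
+ *
+ *	u32 indir[256];
+ *	u32 i;
+ *
+ *	for (i = 0; i < 256; i++)
+ *		indir[i] = i % num_qps;
+ *	err = hinic3_rss_set_indir_tbl(hwdev, indir);
+ */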
+
+/* *
+ * @brief hinic3_rss_get_indir_tbl - get rss indirect table
+ * @param hwdev: device pointer to hwdev
+ * @param indir_table: rss indirect table
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_get_indir_tbl(void *hwdev, u32 *indir_table);
+
+/* *
+ * @brief hinic3_get_phy_port_stats - get port stats
+ * @param hwdev: device pointer to hwdev
+ * @param stats: port stats
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_phy_port_stats(void *hwdev, struct mag_cmd_port_stats *stats);
+
+/* *
+ * @brief hinic3_get_phy_rsfec_stats - get rsfec stats
+ * @param hwdev: device pointer to hwdev
+ * @param stats: rsfec (Reed-Solomon Forward Error Correction) stats
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_phy_rsfec_stats(void *hwdev, struct mag_cmd_rsfec_stats *stats);
+
+int hinic3_set_port_funcs_state(void *hwdev, bool enable);
+
+int hinic3_reset_port_link_cfg(void *hwdev);
+
+int hinic3_force_port_relink(void *hwdev);
+
+int hinic3_set_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state);
+
+int hinic3_dcb_set_pfc(void *hwdev, u8 pfc_en, u8 pfc_bitmap);
+
+int hinic3_dcb_get_pfc(void *hwdev, u8 *pfc_en_bitmap);
+
+int hinic3_dcb_set_ets(void *hwdev, u8 *cos_tc, u8 *cos_bw, u8 *cos_prio,
+ u8 *tc_bw, u8 *tc_prio);
+
+int hinic3_dcb_set_cos_up_map(void *hwdev, u8 cos_valid_bitmap, u8 *cos_up,
+ u8 max_cos_num);
+
+int hinic3_dcb_set_rq_iq_mapping(void *hwdev, u32 num_rqs, u8 *map,
+ u32 max_map_num);
+
+int hinic3_sync_dcb_state(void *hwdev, u8 op_code, u8 state);
+
+int hinic3_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause);
+
+int hinic3_set_pause_info(void *hwdev, struct nic_pause_config nic_pause);
+
+int hinic3_set_link_settings(void *hwdev,
+ struct hinic3_link_ksettings *settings);
+
+int hinic3_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl);
+
+void hinic3_clear_vfs_info(void *hwdev);
+
+int hinic3_notify_vf_outband_cfg(void *hwdev, u16 func_id, u16 vlan_id);
+
+int hinic3_update_mac_vlan(void *hwdev, u16 old_vlan, u16 new_vlan, int vf_id);
+
+int hinic3_set_led_status(void *hwdev, enum mag_led_type type,
+ enum mag_led_mode mode);
+
+int hinic3_set_func_capture_en(void *hwdev, u16 func_id, bool cap_en);
+
+int hinic3_set_loopback_mode(void *hwdev, u8 mode, u8 enable);
+int hinic3_get_loopback_mode(void *hwdev, u8 *mode, u8 *enable);
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+bool hinic3_get_vf_trust(void *hwdev, int vf_id);
+int hinic3_set_vf_trust(void *hwdev, u16 vf_id, bool trust);
+#endif
+
+int hinic3_set_autoneg(void *hwdev, bool enable);
+
+int hinic3_get_sfp_type(void *hwdev, u8 *sfp_type, u8 *sfp_type_ext);
+int hinic3_get_sfp_eeprom(void *hwdev, u8 *data, u32 len);
+int hinic3_get_tlv_xsfp_eeprom(void *hwdev, u8 *data, u32 len);
+
+bool hinic3_if_sfp_absent(void *hwdev);
+int hinic3_get_sfp_info(void *hwdev, struct mag_cmd_get_xsfp_info *sfp_info);
+int hinic3_get_sfp_tlv_info(void *hwdev,
+ struct drv_mag_cmd_get_xsfp_tlv_rsp *sfp_tlv_info,
+ const struct mag_cmd_get_xsfp_tlv_req *sfp_tlv_info_req);
+/* *
+ * @brief hinic3_set_nic_feature_to_hw - sync nic feature to hardware
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_nic_feature_to_hw(void *hwdev);
+
+/* *
+ * @brief hinic3_update_nic_feature - update nic feature
+ * @param hwdev: device pointer to hwdev
+ * @param s_feature: nic features
+ */
+void hinic3_update_nic_feature(void *hwdev, u64 s_feature);
+
+/* *
+ * @brief hinic3_set_link_status_follow - set link follow status
+ * @param hwdev: device pointer to hwdev
+ * @param status: link follow status
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_link_status_follow(void *hwdev, enum hinic3_link_follow_status status);
+
+/* *
+ * @brief hinic3_update_pf_bw - update pf bandwidth
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_update_pf_bw(void *hwdev);
+
+/* *
+ * @brief hinic3_set_pf_bw_limit - set pf bandwidth limit
+ * @param hwdev: device pointer to hwdev
+ * @param bw_limit: pf bandwidth limit
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_pf_bw_limit(void *hwdev, u32 bw_limit);
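+
+/* Illustrative sketch (a minimal example; the unit of bw_limit is left to the
+ * firmware definition and is not asserted here):
+ *
+ *	err = hinic3_set_pf_bw_limit(hwdev, 50);
+ *	if (err != 0)
+ *		return err;
+ */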
+
+/* *
+ * @brief hinic3_set_pf_rate - set pf rate
+ * @param hwdev: device pointer to hwdev
+ * @param speed_level: speed level
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_pf_rate(void *hwdev, u8 speed_level);
+
+int hinic3_get_rxq_hw_info(void *hwdev, struct rxq_check_info *rxq_info, u16 num_qps, u16 wqe_type);
+
+#if defined(HAVE_NDO_UDP_TUNNEL_ADD) || defined(HAVE_UDP_TUNNEL_NIC_INFO)
+/* *
+ * @brief hinic3_vlxan_port_config - add/del vxlan dst port
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: function id
+ * @param port: vxlan dst port
+ * @param action: add or del; on del, the port is reset to the default value (0x12B5)
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_vlxan_port_config(void *hwdev, u16 func_id, u16 port, u8 action);
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD || HAVE_UDP_TUNNEL_NIC_INFO */
+
+int hinic3_get_outband_vlan_cfg(void *hwdev, u16 *outband_default_vid);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c
new file mode 100644
index 0000000..654673f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c
@@ -0,0 +1,726 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "nic_mpu_cmd.h"
+#include "nic_npu_cmd.h"
+
+/*lint -e806*/
+static unsigned char set_vf_link_state;
+module_param(set_vf_link_state, byte, 0444);
+MODULE_PARM_DESC(set_vf_link_state, "Set vf link state: 0 represents link auto, 1 represents link always up, 2 represents link always down. Default is 0.");
+/*lint +e806*/
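+
+/* Example (the module name hinic3 is assumed from the driver name):
+ * loading with "modprobe hinic3 set_vf_link_state=2" starts every VF with its
+ * link forced down until changed through the ndo_set_vf_link_state path.
+ */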
+
+/* In order to adapt different linux version */
+enum {
+ HINIC3_IFLA_VF_LINK_STATE_AUTO, /* link state of the uplink */
+ HINIC3_IFLA_VF_LINK_STATE_ENABLE, /* link always up */
+ HINIC3_IFLA_VF_LINK_STATE_DISABLE, /* link always down */
+};
+
+#define NIC_CVLAN_INSERT_ENABLE 0x1
+#define NIC_QINQ_INSERT_ENABLE 0X3
+static int hinic3_set_vlan_ctx(struct hinic3_nic_io *nic_io, u16 func_id,
+ u16 vlan_tag, u16 q_id, bool add)
+{
+ struct nic_vlan_ctx *vlan_ctx = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ u64 out_param = 0;
+ int err;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_vlan_ctx);
+ vlan_ctx = (struct nic_vlan_ctx *)cmd_buf->buf;
+
+ vlan_ctx->func_id = func_id;
+ vlan_ctx->qid = q_id;
+ vlan_ctx->vlan_tag = vlan_tag;
+ vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */
+ vlan_ctx->vlan_mode = add ?
+ NIC_QINQ_INSERT_ENABLE : NIC_CVLAN_INSERT_ENABLE;
+
+ hinic3_cpu_to_be32(vlan_ctx, sizeof(struct nic_vlan_ctx));
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_VLAN_CTX,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set vlan context, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+ return -EFAULT;
+ }
+
+ return err;
+}
+
+int hinic3_cfg_vf_vlan(struct hinic3_nic_io *nic_io, u8 opcode, u16 vid,
+ u8 qos, int vf_id)
+{
+ struct hinic3_cmd_vf_vlan_config vf_vlan;
+ u16 out_size = sizeof(vf_vlan);
+ u16 glb_func_id;
+ int err;
+ u16 vlan_tag;
+
+ /* VLAN 0 is a special case, don't allow it to be removed */
+ if (!vid && opcode == HINIC3_CMD_OP_DEL)
+ return 0;
+
+ memset(&vf_vlan, 0, sizeof(vf_vlan));
+
+ vf_vlan.opcode = opcode;
+ vf_vlan.func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+ vf_vlan.vlan_id = vid;
+ vf_vlan.qos = qos;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_CFG_VF_VLAN,
+ &vf_vlan, sizeof(vf_vlan),
+ &vf_vlan, &out_size);
+ if (err || !out_size || vf_vlan.msg_head.status) {
+		nic_err(nic_io->dev_hdl, "Failed to set VF %d vlan, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err, vf_vlan.msg_head.status,
+ out_size);
+ return -EFAULT;
+ }
+
+ vlan_tag = vid + (u16)(qos << VLAN_PRIO_SHIFT);
+
+ glb_func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+ err = hinic3_set_vlan_ctx(nic_io, glb_func_id, vlan_tag,
+ NIC_CONFIG_ALL_QUEUE_VLAN_CTX,
+ opcode == HINIC3_CMD_OP_ADD);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d vlan ctx, err: %d\n",
+ HW_VF_ID_TO_OS(vf_id), err);
+
+ /* rollback vlan config */
+ if (opcode == HINIC3_CMD_OP_DEL)
+ vf_vlan.opcode = HINIC3_CMD_OP_ADD;
+ else
+ vf_vlan.opcode = HINIC3_CMD_OP_DEL;
+ l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_VF_VLAN, &vf_vlan,
+ sizeof(vf_vlan), &vf_vlan, &out_size);
+ return err;
+ }
+
+ return 0;
+}
+
+/* This function is only called by hinic3_ndo_set_vf_mac;
+ * other callers are not permitted.
+ */
+int hinic3_set_vf_mac(void *hwdev, int vf_id, const unsigned char *mac_addr)
+{
+ struct vf_data_storage *vf_info = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+
+ /* duplicate request, so just return success */
+ if (ether_addr_equal(vf_info->user_mac_addr, mac_addr))
+ return 0;
+
+ ether_addr_copy(vf_info->user_mac_addr, mac_addr);
+
+ return 0;
+}
+
+int hinic3_add_vf_vlan(void *hwdev, int vf_id, u16 vlan, u8 qos)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ err = hinic3_cfg_vf_vlan(nic_io, HINIC3_CMD_OP_ADD, vlan, qos, vf_id);
+ if (err != 0)
+ return err;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan = vlan;
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos = qos;
+
+ nic_info(nic_io->dev_hdl, "Setting VLAN %u, QOS 0x%x on VF %d\n",
+ vlan, qos, HW_VF_ID_TO_OS(vf_id));
+
+ return 0;
+}
+
+int hinic3_kill_vf_vlan(void *hwdev, int vf_id)
+{
+ struct vf_data_storage *vf_infos = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ vf_infos = nic_io->vf_infos;
+
+ err = hinic3_cfg_vf_vlan(nic_io, HINIC3_CMD_OP_DEL,
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan,
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos, vf_id);
+ if (err != 0)
+ return err;
+
+ nic_info(nic_io->dev_hdl, "Remove VLAN %u on VF %d\n",
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan,
+ HW_VF_ID_TO_OS(vf_id));
+
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan = 0;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos = 0;
+
+ return 0;
+}
+
+u16 hinic3_vf_info_vlanprio(void *hwdev, int vf_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 pf_vlan, vlanprio;
+ u8 pf_qos;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return 0;
+
+ pf_vlan = nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan;
+ pf_qos = nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos;
+ vlanprio = (u16)(pf_vlan | (pf_qos << HINIC3_VLAN_PRIORITY_SHIFT));
+
+ return vlanprio;
+}
+
+int hinic3_set_vf_link_state(void *hwdev, u16 vf_id, int link)
+{
+ struct hinic3_nic_io *nic_io =
+ hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ struct vf_data_storage *vf_infos = NULL;
+ u8 link_status = 0;
+
+ if (!nic_io)
+ return -EINVAL;
+
+ vf_infos = nic_io->vf_infos;
+
+ switch (link) {
+ case HINIC3_IFLA_VF_LINK_STATE_AUTO:
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced = false;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up = nic_io->link_status ?
+ true : false;
+ link_status = nic_io->link_status;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_ENABLE:
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced = true;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up = true;
+ link_status = HINIC3_LINK_UP;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_DISABLE:
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced = true;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up = false;
+ link_status = HINIC3_LINK_DOWN;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* Notify the VF of its new link state */
+ hinic3_notify_vf_link_status(nic_io, vf_id, link_status);
+
+ return 0;
+}
+
+int hinic3_set_vf_spoofchk(void *hwdev, u16 vf_id, bool spoofchk)
+{
+ struct hinic3_cmd_spoofchk_set spoofchk_cfg;
+ struct vf_data_storage *vf_infos = NULL;
+ u16 out_size = sizeof(spoofchk_cfg);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ vf_infos = nic_io->vf_infos;
+
+ memset(&spoofchk_cfg, 0, sizeof(spoofchk_cfg));
+
+ spoofchk_cfg.func_id = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+ spoofchk_cfg.state = spoofchk ? 1 : 0;
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_SPOOPCHK_STATE,
+ &spoofchk_cfg,
+ sizeof(spoofchk_cfg), &spoofchk_cfg,
+ &out_size);
+ if (err || !out_size || spoofchk_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF(%d) spoofchk, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err,
+ spoofchk_cfg.msg_head.status, out_size);
+ err = -EINVAL;
+ }
+
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].spoofchk = spoofchk;
+
+ return err;
+}
+
+bool hinic3_vf_info_spoofchk(void *hwdev, int vf_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return false;
+
+ return nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].spoofchk;
+}
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+int hinic3_set_vf_trust(void *hwdev, u16 vf_id, bool trust)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io || vf_id > nic_io->max_vfs)
+ return -EINVAL;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].trust = trust;
+
+ return 0;
+}
+
+bool hinic3_get_vf_trust(void *hwdev, int vf_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return false;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io || vf_id > nic_io->max_vfs)
+ return false;
+
+ return nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].trust;
+}
+#endif
+
+static int hinic3_set_vf_tx_rate_max_min(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u32 max_rate, u32 min_rate)
+{
+ struct hinic3_cmd_tx_rate_cfg rate_cfg;
+ u16 out_size = sizeof(rate_cfg);
+ int err;
+
+ memset(&rate_cfg, 0, sizeof(rate_cfg));
+
+ rate_cfg.func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf_id;
+ rate_cfg.max_rate = max_rate;
+ rate_cfg.min_rate = min_rate;
+ rate_cfg.direct = HINIC3_NIC_TX;
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_SET_MAX_MIN_RATE,
+ &rate_cfg, sizeof(rate_cfg), &rate_cfg,
+ &out_size);
+ if (rate_cfg.msg_head.status || err || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d max rate %u, min rate %u, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), max_rate, min_rate, err,
+ rate_cfg.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_set_vf_tx_rate(void *hwdev, u16 vf_id, u32 max_rate, u32 min_rate)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (!HINIC3_SUPPORT_RATE_LIMIT(hwdev)) {
+		nic_err(nic_io->dev_hdl, "Current function doesn't support setting vf rate limit\n");
+ return -EOPNOTSUPP;
+ }
+
+ err = hinic3_set_vf_tx_rate_max_min(nic_io, vf_id, max_rate, min_rate);
+ if (err != 0)
+ return err;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].max_rate = max_rate;
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].min_rate = min_rate;
+
+ return 0;
+}
+
+void hinic3_get_vf_config(void *hwdev, u16 vf_id, struct ifla_vf_info *ivi)
+{
+ struct vf_data_storage *vfinfo = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ vfinfo = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ if (!vfinfo)
+ return;
+
+ ivi->vf = HW_VF_ID_TO_OS(vf_id);
+ ether_addr_copy(ivi->mac, vfinfo->user_mac_addr);
+ ivi->vlan = vfinfo->pf_vlan;
+ ivi->qos = vfinfo->pf_qos;
+
+#ifdef HAVE_VF_SPOOFCHK_CONFIGURE
+ ivi->spoofchk = vfinfo->spoofchk;
+#endif
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+ ivi->trusted = vfinfo->trust;
+#endif
+
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ ivi->max_tx_rate = vfinfo->max_rate;
+ ivi->min_tx_rate = vfinfo->min_rate;
+#else
+ ivi->tx_rate = vfinfo->max_rate;
+#endif /* HAVE_NDO_SET_VF_MIN_MAX_TX_RATE */
+
+#ifdef HAVE_NDO_SET_VF_LINK_STATE
+ if (!vfinfo->link_forced)
+ ivi->linkstate = IFLA_VF_LINK_STATE_AUTO;
+ else if (vfinfo->link_up)
+ ivi->linkstate = IFLA_VF_LINK_STATE_ENABLE;
+ else
+ ivi->linkstate = IFLA_VF_LINK_STATE_DISABLE;
+#endif
+}
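+
+/* Illustrative sketch: a PF netdev op could forward .ndo_get_vf_config to the
+ * helper above (hinic3_ndo_get_vf_config is a hypothetical wrapper name):
+ *
+ *	static int hinic3_ndo_get_vf_config(struct net_device *netdev,
+ *					    int vf, struct ifla_vf_info *ivi)
+ *	{
+ *		struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ *
+ *		hinic3_get_vf_config(nic_dev->hwdev,
+ *				     (u16)OS_VF_ID_TO_HW(vf), ivi);
+ *		return 0;
+ *	}
+ */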
+
+static int hinic3_init_vf_infos(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ u8 vf_link_state;
+
+ if (set_vf_link_state > HINIC3_IFLA_VF_LINK_STATE_DISABLE) {
+ nic_warn(nic_io->dev_hdl, "Module Parameter set_vf_link_state value %u is out of range, resetting to %d\n",
+ set_vf_link_state, HINIC3_IFLA_VF_LINK_STATE_AUTO);
+ set_vf_link_state = HINIC3_IFLA_VF_LINK_STATE_AUTO;
+ }
+
+ vf_link_state = set_vf_link_state;
+
+ switch (vf_link_state) {
+ case HINIC3_IFLA_VF_LINK_STATE_AUTO:
+ vf_infos[vf_id].link_forced = false;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_ENABLE:
+ vf_infos[vf_id].link_forced = true;
+ vf_infos[vf_id].link_up = true;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_DISABLE:
+ vf_infos[vf_id].link_forced = true;
+ vf_infos[vf_id].link_up = false;
+ break;
+ default:
+ nic_err(nic_io->dev_hdl, "Input parameter set_vf_link_state error: %u\n",
+ vf_link_state);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int vf_func_register(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_cmd_register_vf register_info;
+ u16 out_size = sizeof(register_info);
+ int err;
+
+ err = hinic3_register_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ nic_io->hwdev, hinic3_vf_event_handler);
+ if (err != 0)
+ return err;
+
+ err = hinic3_register_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK,
+ nic_io->hwdev, hinic3_vf_mag_event_handler);
+ if (err != 0)
+ goto reg_hilink_err;
+
+ memset(®ister_info, 0, sizeof(register_info));
+ register_info.op_register = 1;
+ register_info.support_extra_feature = 0;
+ err = hinic3_mbox_to_pf(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_NIC_CMD_VF_REGISTER,
+ ®ister_info, sizeof(register_info),
+ ®ister_info, &out_size, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || !out_size || register_info.msg_head.status) {
+ if (hinic3_is_slave_host(nic_io->hwdev)) {
+ nic_warn(nic_io->dev_hdl,
+ "Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, register_info.msg_head.status, out_size);
+ return 0;
+ }
+ nic_err(nic_io->dev_hdl, "Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, register_info.msg_head.status, out_size);
+ err = -EIO;
+ goto register_err;
+ }
+
+ return 0;
+
+register_err:
+ hinic3_unregister_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+
+reg_hilink_err:
+ hinic3_unregister_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+
+ return err;
+}
+
+static int pf_init_vf_infos(struct hinic3_nic_io *nic_io)
+{
+ u32 size;
+ int err;
+ u16 i;
+
+ nic_io->max_vfs = hinic3_func_max_vf(nic_io->hwdev);
+ size = sizeof(*nic_io->vf_infos) * nic_io->max_vfs;
+ if (!size)
+ return 0;
+
+ nic_io->vf_infos = kzalloc(size, GFP_KERNEL);
+ if (!nic_io->vf_infos)
+ return -ENOMEM;
+
+ for (i = 0; i < nic_io->max_vfs; i++) {
+ err = hinic3_init_vf_infos(nic_io, i);
+ if (err != 0)
+ goto init_vf_infos_err;
+ }
+
+ err = hinic3_register_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ nic_io->hwdev, hinic3_pf_mbox_handler);
+ if (err != 0)
+ goto register_pf_mbox_cb_err;
+
+ err = hinic3_register_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK,
+ nic_io->hwdev, hinic3_pf_mag_mbox_handler);
+ if (err != 0)
+ goto register_pf_mag_mbox_cb_err;
+
+ return 0;
+
+register_pf_mag_mbox_cb_err:
+ hinic3_unregister_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+register_pf_mbox_cb_err:
+init_vf_infos_err:
+ kfree(nic_io->vf_infos);
+
+ return err;
+}
+
+int hinic3_vf_func_init(struct hinic3_nic_io *nic_io)
+{
+ int err;
+
+ if (hinic3_func_type(nic_io->hwdev) == TYPE_VF)
+ return vf_func_register(nic_io);
+
+ err = hinic3_register_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ nic_io->hwdev, hinic3_pf_event_handler);
+ if (err != 0)
+ return err;
+
+ err = hinic3_register_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_HILINK,
+ nic_io->hwdev, hinic3_pf_mag_event_handler);
+ if (err != 0)
+ goto register_mgmt_msg_cb_err;
+
+ err = pf_init_vf_infos(nic_io);
+ if (err != 0)
+ goto pf_init_vf_infos_err;
+
+ return 0;
+
+pf_init_vf_infos_err:
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+register_mgmt_msg_cb_err:
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+
+ return err;
+}
+
+void hinic3_vf_func_free(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_cmd_register_vf unregister;
+ u16 out_size = sizeof(unregister);
+ int err;
+
+ memset(&unregister, 0, sizeof(unregister));
+ unregister.op_register = 0;
+ if (hinic3_func_type(nic_io->hwdev) == TYPE_VF) {
+ err = hinic3_mbox_to_pf(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_NIC_CMD_VF_REGISTER,
+ &unregister, sizeof(unregister),
+ &unregister, &out_size, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || !out_size || unregister.msg_head.status) {
+ if (hinic3_is_slave_host(nic_io->hwdev))
+ nic_info(nic_io->dev_hdl,
+					 "vRoCE VF failed to notify PF, which is allowed\n");
+ else
+ nic_err(nic_io->dev_hdl,
+ "Failed to unregister VF, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, unregister.msg_head.status, out_size);
+ }
+
+ hinic3_unregister_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+ } else {
+ if (nic_io->vf_infos) {
+ hinic3_unregister_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+ hinic3_unregister_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+ hinic3_clear_vfs_info(nic_io->hwdev);
+ kfree(nic_io->vf_infos);
+ nic_io->vf_infos = NULL;
+ }
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+ }
+}
+
+static void clear_vf_infos(void *hwdev, u16 vf_id)
+{
+ struct vf_data_storage *vf_infos = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 func_id;
+
+		nic_err(nic_io->dev_hdl, "Failed to set VF %d MAC address, err: %d, status: 0x%x, out size: 0x%x\n",
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ func_id = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+ vf_infos = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ if (vf_infos->use_specified_mac)
+ hinic3_del_mac(hwdev, vf_infos->drv_mac_addr,
+ vf_infos->pf_vlan, func_id, HINIC3_CHANNEL_NIC);
+
+ if (hinic3_vf_info_vlanprio(hwdev, vf_id))
+ hinic3_kill_vf_vlan(hwdev, vf_id);
+
+ if (vf_infos->max_rate)
+ hinic3_set_vf_tx_rate(hwdev, vf_id, 0, 0);
+
+ if (vf_infos->spoofchk)
+ hinic3_set_vf_spoofchk(hwdev, vf_id, false);
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+ if (vf_infos->trust)
+ hinic3_set_vf_trust(hwdev, vf_id, false);
+#endif
+
+ memset(vf_infos, 0, sizeof(*vf_infos));
+ /* set vf_infos to default */
+ hinic3_init_vf_infos(nic_io, HW_VF_ID_TO_OS(vf_id));
+}
+
+void hinic3_clear_vfs_info(void *hwdev)
+{
+ u16 i;
+ struct hinic3_nic_io *nic_io =
+ hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ for (i = 0; i < nic_io->max_vfs; i++)
+ clear_vf_infos(hwdev, OS_VF_ID_TO_HW(i));
+}
+
+int hinic3_notify_vf_outband_cfg(void *hwdev, u16 func_id, u16 vlan_id)
+{
+ int err = 0;
+ struct hinic3_outband_cfg_info outband_cfg_info;
+ struct vf_data_storage *vf_infos = NULL;
+ u16 out_size = sizeof(outband_cfg_info);
+ u16 vf_id;
+ struct hinic3_nic_io *nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return 0;
+ }
+
+ vf_id = func_id - hinic3_glb_pf_vf_offset(nic_io->hwdev);
+ vf_infos = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+
+ memset(&outband_cfg_info, 0, sizeof(outband_cfg_info));
+ if (vf_infos->registered) {
+ outband_cfg_info.func_id = func_id;
+ outband_cfg_info.outband_default_vid = vlan_id;
+ err = hinic3_mbox_to_vf_no_ack(nic_io->hwdev, vf_id,
+ HINIC3_MOD_L2NIC,
+ HINIC3_NIC_CMD_OUTBAND_CFG_NOTICE,
+ &outband_cfg_info,
+ sizeof(outband_cfg_info),
+ &outband_cfg_info, &out_size,
+ HINIC3_CHANNEL_NIC);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+			nic_warn(nic_io->dev_hdl, "VF %d not initialized, disconnecting it\n",
+ HW_VF_ID_TO_OS(vf_id));
+ hinic3_unregister_vf(nic_io, vf_id);
+ return 0;
+ }
+ if (err || !out_size || outband_cfg_info.msg_head.status)
+ nic_err(nic_io->dev_hdl,
+ "outband cfg event to VF %d failed, err: %d, status: 0x%x, out_size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err,
+ outband_cfg_info.msg_head.status, out_size);
+ }
+
+ return err;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c
new file mode 100644
index 0000000..2878f66
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c
@@ -0,0 +1,159 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+
+int hinic3_dbg_get_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
+ u8 *wqe, const u16 *wqe_size, enum hinic3_queue_type q_type)
+{
+ struct hinic3_io_queue *queue = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ void *src_wqebb = NULL;
+ u32 i, offset;
+
+ if (!hwdev) {
+ pr_err("hwdev is NULL.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (q_id >= nic_io->num_qps) {
+ pr_err("q_id[%u] > num_qps_cfg[%u].\n", q_id, nic_io->num_qps);
+ return -EINVAL;
+ }
+
+ queue = (q_type == HINIC3_SQ) ? &nic_io->sq[q_id] : &nic_io->rq[q_id];
+
+ if ((idx + wqebb_cnt) > queue->wq.q_depth) {
+		pr_err("(idx[%u] + wqebb_cnt[%u]) > q_depth[%u].\n", idx, wqebb_cnt, queue->wq.q_depth);
+ return -EINVAL;
+ }
+
+ if (*wqe_size != (queue->wq.wqebb_size * wqebb_cnt)) {
+		pr_err("Unexpected out buf size from user: %u, expect: %d\n",
+ *wqe_size, (queue->wq.wqebb_size * wqebb_cnt));
+ return -EINVAL;
+ }
+
+ for (i = 0; i < wqebb_cnt; i++) {
+ src_wqebb = hinic3_wq_wqebb_addr(&queue->wq, (u16)WQ_MASK_IDX(&queue->wq, idx + i));
+ offset = queue->wq.wqebb_size * i;
+ memcpy(wqe + offset, src_wqebb, queue->wq.wqebb_size);
+ }
+
+ return 0;
+}
+
+int hinic3_dbg_get_sq_info(void *hwdev, u16 q_id, struct nic_sq_info *sq_info,
+ u32 msg_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_io_queue *sq = NULL;
+
+ if (!hwdev || !sq_info) {
+ pr_err("hwdev or sq_info is NULL.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (q_id >= nic_io->num_qps) {
+ nic_err(nic_io->dev_hdl, "Input queue id(%u) is larger than the actual queue number\n",
+ q_id);
+ return -EINVAL;
+ }
+
+ if (msg_size != sizeof(*sq_info)) {
+		nic_err(nic_io->dev_hdl, "Unexpected out buf size from user: %u, expect: %lu\n",
+ msg_size, sizeof(*sq_info));
+ return -EINVAL;
+ }
+
+ sq = &nic_io->sq[q_id];
+ if (!sq)
+ return -EINVAL;
+
+ sq_info->q_id = q_id;
+ sq_info->pi = hinic3_get_sq_local_pi(sq);
+ sq_info->ci = hinic3_get_sq_local_ci(sq);
+ sq_info->fi = hinic3_get_sq_hw_ci(sq);
+ sq_info->q_depth = sq->wq.q_depth;
+ sq_info->wqebb_size = sq->wq.wqebb_size;
+
+ sq_info->ci_addr = sq->tx.cons_idx_addr;
+
+ sq_info->cla_addr = sq->wq.wq_block_paddr;
+ sq_info->slq_handle = sq;
+
+ sq_info->doorbell.map_addr = (u64 *)sq->db_addr;
+
+ return 0;
+}
+
+int hinic3_dbg_get_rq_info(void *hwdev, u16 q_id, struct nic_rq_info *rq_info,
+ u32 msg_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_io_queue *rq = NULL;
+
+ if (!hwdev || !rq_info) {
+ pr_err("hwdev or rq_info is NULL.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (q_id >= nic_io->num_qps) {
+ nic_err(nic_io->dev_hdl, "Input queue id(%u) is larger than the actual queue number\n",
+ q_id);
+ return -EINVAL;
+ }
+
+ if (msg_size != sizeof(*rq_info)) {
+		nic_err(nic_io->dev_hdl, "Unexpected out buf size from user: %u, expect: %lu\n",
+ msg_size, sizeof(*rq_info));
+ return -EINVAL;
+ }
+
+ rq = &nic_io->rq[q_id];
+ if (!rq)
+ return -EINVAL;
+
+ rq_info->q_id = q_id;
+
+ rq_info->hw_pi = cpu_to_be16(*rq->rx.pi_virt_addr);
+
+ rq_info->wqebb_size = rq->wq.wqebb_size;
+ rq_info->q_depth = (u16)rq->wq.q_depth;
+
+ rq_info->buf_len = nic_io->rx_buff_len;
+
+ rq_info->slq_handle = rq;
+
+ rq_info->ci_wqe_page_addr = hinic3_wq_get_first_wqe_page_addr(&rq->wq);
+ rq_info->ci_cla_tbl_addr = rq->wq.wq_block_paddr;
+
+ rq_info->msix_idx = rq->msix_entry_idx;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h
new file mode 100644
index 0000000..4ba96d5
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_DBG_H
+#define HINIC3_NIC_DBG_H
+
+#include "hinic3_mt.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+
+int hinic3_dbg_get_sq_info(void *hwdev, u16 q_id, struct nic_sq_info *sq_info,
+ u32 msg_size);
+
+int hinic3_dbg_get_rq_info(void *hwdev, u16 q_id, struct nic_rq_info *rq_info,
+ u32 msg_size);
+
+int hinic3_dbg_get_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
+ u8 *wqe, const u16 *wqe_size,
+ enum hinic3_queue_type q_type);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
new file mode 100644
index 0000000..3d2a962
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
@@ -0,0 +1,462 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_DEV_H
+#define HINIC3_NIC_DEV_H
+
+#include <linux/netdevice.h>
+#include <linux/semaphore.h>
+#include <linux/types.h>
+#include <linux/bitops.h>
+
+#include "ossl_knl.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_dcb.h"
+#include "vram_common.h"
+
+#define HINIC3_NIC_DRV_NAME "hinic3"
+#define HINIC3_NIC_DRV_VERSION "17.7.8.101"
+
+#define HINIC3_FUNC_IS_VF(hwdev) (hinic3_func_type(hwdev) == TYPE_VF)
+
+#define HINIC3_AVG_PKT_SMALL 256U
+#define HINIC3_MODERATONE_DELAY HZ
+
+#define LP_PKT_CNT 64
+#define LP_PKT_LEN 60
+
+#define NAPI_IS_REGIN 1
+#define NAPI_NOT_REGIN 0
+
+enum hinic3_flags {
+ HINIC3_INTF_UP,
+ HINIC3_MAC_FILTER_CHANGED,
+ HINIC3_LP_TEST,
+ HINIC3_RSS_ENABLE,
+ HINIC3_DCB_ENABLE,
+ HINIC3_SAME_RXTX,
+ HINIC3_INTR_ADAPT,
+ HINIC3_UPDATE_MAC_FILTER,
+ HINIC3_CHANGE_RES_INVALID,
+ HINIC3_RSS_DEFAULT_INDIR,
+ HINIC3_FORCE_LINK_UP,
+ HINIC3_BONDING_MASTER,
+ HINIC3_AUTONEG_RESET,
+ HINIC3_RXQ_RECOVERY,
+};
+
+#define HINIC3_CHANNEL_RES_VALID(nic_dev) \
+ (test_bit(HINIC3_INTF_UP, &(nic_dev)->flags) && \
+ !test_bit(HINIC3_CHANGE_RES_INVALID, &(nic_dev)->flags))
+
+#define RX_BUFF_NUM_PER_PAGE 2
+
+#define VLAN_BITMAP_BYTE_SIZE(nic_dev) (sizeof(*(nic_dev)->vlan_bitmap))
+#define VLAN_BITMAP_BITS_SIZE(nic_dev) (VLAN_BITMAP_BYTE_SIZE(nic_dev) * 8)
+#define VLAN_NUM_BITMAPS(nic_dev) (VLAN_N_VID / \
+ VLAN_BITMAP_BITS_SIZE(nic_dev))
+#define VLAN_BITMAP_SIZE(nic_dev) (VLAN_N_VID / \
+ VLAN_BITMAP_BYTE_SIZE(nic_dev))
+#define VID_LINE(nic_dev, vid) ((vid) / VLAN_BITMAP_BITS_SIZE(nic_dev))
+#define VID_COL(nic_dev, vid) ((vid) & (VLAN_BITMAP_BITS_SIZE(nic_dev) - 1))
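+
+/* Illustrative sketch of how a VLAN id maps into the bitmap (a minimal
+ * example, assuming nic_dev->vlan_bitmap has already been allocated with
+ * VLAN_BITMAP_SIZE(nic_dev) bytes):
+ *
+ *	set_bit(VID_COL(nic_dev, vid),
+ *		&nic_dev->vlan_bitmap[VID_LINE(nic_dev, vid)]);
+ */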
+
+#define NIC_DRV_DEFAULT_FEATURE NIC_F_ALL_MASK
+
+enum hinic3_event_work_flags {
+ EVENT_WORK_TX_TIMEOUT,
+};
+
+enum hinic3_rx_mode_state {
+ HINIC3_HW_PROMISC_ON,
+ HINIC3_HW_ALLMULTI_ON,
+ HINIC3_PROMISC_FORCE_ON,
+ HINIC3_ALLMULTI_FORCE_ON,
+};
+
+enum mac_filter_state {
+ HINIC3_MAC_WAIT_HW_SYNC,
+ HINIC3_MAC_HW_SYNCED,
+ HINIC3_MAC_WAIT_HW_UNSYNC,
+ HINIC3_MAC_HW_UNSYNCED,
+};
+
+struct hinic3_mac_filter {
+ struct list_head list;
+ u8 addr[ETH_ALEN];
+ unsigned long state;
+};
+
+struct hinic3_irq {
+ struct net_device *netdev;
+ /* IRQ corresponding index number */
+ u16 msix_entry_idx;
+ u16 rsvd1;
+ u32 irq_id; /* The IRQ number from OS */
+
+ u32 napi_reign;
+
+ char irq_name[IFNAMSIZ + 16];
+ struct napi_struct napi;
+ cpumask_t affinity_mask;
+ struct hinic3_txq *txq;
+ struct hinic3_rxq *rxq;
+};
+
+struct hinic3_intr_coal_info {
+ u8 pending_limt;
+ u8 coalesce_timer_cfg;
+ u8 resend_timer_cfg;
+
+ u64 pkt_rate_low;
+ u8 rx_usecs_low;
+ u8 rx_pending_limt_low;
+ u64 pkt_rate_high;
+ u8 rx_usecs_high;
+ u8 rx_pending_limt_high;
+
+ u8 user_set_intr_coal_flag;
+};
+
+struct hinic3_dyna_txrxq_params {
+ u16 num_qps;
+ u8 num_cos;
+ u8 rsvd1;
+ u32 sq_depth;
+ u32 rq_depth;
+
+ struct hinic3_dyna_txq_res *txqs_res;
+ struct hinic3_dyna_rxq_res *rxqs_res;
+ struct hinic3_irq *irq_cfg;
+ char irq_cfg_vram_name[VRAM_NAME_MAX_LEN];
+};
+
+struct hinic3_flush_rq {
+ union {
+ struct {
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 lb_proc : 1;
+ u32 rsvd : 10;
+ u32 rq_id : 8;
+ u32 func_id : 13;
+#else
+ u32 func_id : 13;
+ u32 rq_id : 8;
+ u32 rsvd : 10;
+ u32 lb_proc : 1;
+#endif
+ } bs;
+ u32 value;
+ } dw;
+
+ union {
+ struct {
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 rsvd2 : 2;
+ u32 src_chnl : 12;
+ u32 pkt_len : 18;
+#else
+ u32 pkt_len : 18;
+ u32 src_chnl : 12;
+ u32 rsvd2 : 2;
+#endif
+ } bs;
+ u32 value;
+ } lb_info0; /* loop back information, used by uCode */
+};
+
+#define HINIC3_NIC_STATS_INC(nic_dev, field) \
+do { \
+ u64_stats_update_begin(&(nic_dev)->stats.syncp); \
+ (nic_dev)->stats.field++; \
+ u64_stats_update_end(&(nic_dev)->stats.syncp); \
+} while (0)
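+
+/* Illustrative usage (a minimal sketch; the field must be a member of
+ * struct hinic3_nic_stats):
+ *
+ *	HINIC3_NIC_STATS_INC(nic_dev, netdev_tx_timeout);
+ */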
+
+struct hinic3_nic_stats {
+ u64 netdev_tx_timeout;
+
+	/* Subdivision statistics shown in private tool */
+ u64 tx_carrier_off_drop;
+ u64 tx_invalid_qid;
+ u64 rsvd1;
+ u64 rsvd2;
+#ifdef HAVE_NDO_GET_STATS64
+ struct u64_stats_sync syncp;
+#else
+ struct u64_stats_sync_empty syncp;
+#endif
+};
+
+struct hinic3_nic_vport_stats {
+ u64 rx_discard_vport;
+};
+
+#define HINIC3_TCAM_DYNAMIC_BLOCK_SIZE 16
+#define HINIC3_MAX_TCAM_FILTERS 512
+
+#define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \
+ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index))
+
+struct hinic3_rx_flow_rule {
+ struct list_head rules;
+ int tot_num_rules;
+};
+
+struct hinic3_tcam_dynamic_block {
+ struct list_head block_list;
+ u16 dynamic_block_id;
+ u16 dynamic_index_cnt;
+ u8 dynamic_index_used[HINIC3_TCAM_DYNAMIC_BLOCK_SIZE];
+};
+
+struct hinic3_tcam_dynamic_block_info {
+ struct list_head tcam_dynamic_list;
+ u16 dynamic_block_cnt;
+};
+
+struct hinic3_tcam_filter {
+ struct list_head tcam_filter_list;
+ u16 dynamic_block_id;
+ u16 index;
+ struct tag_tcam_key tcam_key;
+ u16 queue;
+};
+
+/* function level struct info */
+struct hinic3_tcam_info {
+ u16 tcam_rule_nums;
+ struct list_head tcam_list;
+ struct hinic3_tcam_dynamic_block_info tcam_dynamic_info;
+};
+
+struct hinic3_dcb {
+ u8 cos_config_num_max;
+ u8 func_dft_cos_bitmap;
+	/* used for tool validity check */
+ u16 port_dft_cos_bitmap;
+
+ struct hinic3_dcb_config hw_dcb_cfg;
+ struct hinic3_dcb_config wanted_dcb_cfg;
+ unsigned long dcb_flags;
+};
+
+struct hinic3_vram {
+ u32 vram_mtu;
+ u16 vram_num_qps;
+ unsigned long flags;
+};
+
+struct hinic3_outband_cfg {
+ u16 outband_default_vid;
+ u16 rsvd;
+};
+
+struct hinic3_nic_dev {
+ struct pci_dev *pdev;
+ struct net_device *netdev;
+ struct hinic3_lld_dev *lld_dev;
+ void *hwdev;
+
+ int poll_weight;
+ u32 rsvd1;
+ unsigned long *vlan_bitmap;
+
+ u16 max_qps;
+
+ u32 msg_enable;
+ unsigned long flags;
+
+ u32 lro_replenish_thld;
+ u32 dma_rx_buff_size;
+ u16 rx_buff_len;
+ u32 page_order;
+ bool page_pool_enabled;
+
+	/* RSS related variables */
+ u8 rss_hash_engine;
+ struct nic_rss_type rss_type;
+ u8 *rss_hkey;
+ /* hkey in big endian */
+ u32 *rss_hkey_be;
+ u32 *rss_indir;
+
+ struct hinic3_dcb *dcb;
+ char dcb_name[VRAM_NAME_MAX_LEN];
+
+ struct hinic3_vram *nic_vram;
+ char nic_vram_name[VRAM_NAME_MAX_LEN];
+
+ int disable_port_cnt;
+
+ struct hinic3_intr_coal_info *intr_coalesce;
+ unsigned long last_moder_jiffies;
+ u32 adaptive_rx_coal;
+ u8 intr_coal_set_flag;
+
+#ifndef HAVE_NETDEV_STATS_IN_NETDEV
+ struct net_device_stats net_stats;
+#endif
+
+ struct hinic3_nic_stats stats;
+ struct hinic3_nic_vport_stats vport_stats;
+
+ /* lock for nic resource */
+ struct mutex nic_mutex;
+ u8 link_status;
+
+ struct nic_service_cap nic_cap;
+
+ struct hinic3_txq *txqs;
+ struct hinic3_rxq *rxqs;
+ struct hinic3_dyna_txrxq_params q_params;
+
+ u16 num_qp_irq;
+ struct irq_info *qps_irq_info;
+
+ struct workqueue_struct *workq;
+
+ struct work_struct rx_mode_work;
+ struct delayed_work moderation_task;
+
+ struct list_head uc_filter_list;
+ struct list_head mc_filter_list;
+ unsigned long rx_mod_state;
+ int netdev_uc_cnt;
+ int netdev_mc_cnt;
+
+ int lb_test_rx_idx;
+ int lb_pkt_len;
+ u8 *lb_test_rx_buf;
+
+ struct hinic3_tcam_info tcam;
+ struct hinic3_rx_flow_rule rx_flow_rule;
+
+#ifdef HAVE_XDP_SUPPORT
+ struct bpf_prog *xdp_prog;
+#endif
+
+ struct delayed_work periodic_work;
+ /* reference to enum hinic3_event_work_flags */
+ unsigned long event_flag;
+
+ struct hinic3_nic_prof_attr *prof_attr;
+ struct hinic3_prof_adapter *prof_adap;
+ u64 rsvd8[7];
+ struct hinic3_outband_cfg outband_cfg;
+ u32 rxq_get_err_times;
+ struct delayed_work rxq_check_work;
+ struct delayed_work vport_stats_work;
+};
+
+#define hinic_msg(level, nic_dev, msglvl, format, arg...) \
+do { \
+ if ((nic_dev)->netdev && (nic_dev)->netdev->reg_state \
+ == NETREG_REGISTERED) \
+ nicif_##level((nic_dev), msglvl, (nic_dev)->netdev, \
+ format, ## arg); \
+ else \
+ nic_##level(&(nic_dev)->pdev->dev, \
+ format, ## arg); \
+} while (0)
+
+#define hinic3_info(nic_dev, msglvl, format, arg...) \
+ hinic_msg(info, nic_dev, msglvl, format, ## arg)
+
+#define hinic3_warn(nic_dev, msglvl, format, arg...) \
+ hinic_msg(warn, nic_dev, msglvl, format, ## arg)
+
+#define hinic3_err(nic_dev, msglvl, format, arg...) \
+ hinic_msg(err, nic_dev, msglvl, format, ## arg)
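+
+/* Illustrative usage (a sketch; the msglvl token is assumed to follow the
+ * netif_* message-level convention, e.g. drv):
+ *
+ *	hinic3_info(nic_dev, drv, "Feature negotiation done\n");
+ */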
+
+struct hinic3_uld_info *get_nic_uld_info(void);
+
+u32 hinic3_get_io_stats_size(const struct hinic3_nic_dev *nic_dev);
+
+int hinic3_get_io_stats(const struct hinic3_nic_dev *nic_dev, void *stats);
+
+int hinic3_open(struct net_device *netdev);
+
+int hinic3_close(struct net_device *netdev);
+
+void hinic3_set_ethtool_ops(struct net_device *netdev);
+
+void hinic3vf_set_ethtool_ops(struct net_device *netdev);
+
+int nic_ioctl(void *uld_dev, u32 cmd, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size);
+
+void hinic3_update_num_qps(struct net_device *netdev);
+
+int hinic3_qps_irq_init(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_qps_irq_deinit(struct hinic3_nic_dev *nic_dev);
+
+void qp_del_napi(struct hinic3_irq *irq_cfg);
+
+void hinic3_set_netdev_ops(struct hinic3_nic_dev *nic_dev);
+
+bool hinic3_is_netdev_ops_match(const struct net_device *netdev);
+
+int hinic3_set_hw_features(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_set_rx_mode_work(struct work_struct *work);
+
+void hinic3_clean_mac_list_filter(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_get_strings(struct net_device *netdev, u32 stringset, u8 *data);
+
+void hinic3_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data);
+
+int hinic3_get_sset_count(struct net_device *netdev, int sset);
+
+int hinic3_maybe_set_port_state(struct hinic3_nic_dev *nic_dev, bool enable);
+
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+int hinic3_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *link_settings);
+int hinic3_set_link_ksettings(struct net_device *netdev,
+ const struct ethtool_link_ksettings
+ *link_settings);
+#endif
+#endif
+
+#ifndef HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+int hinic3_get_settings(struct net_device *netdev, struct ethtool_cmd *ep);
+int hinic3_set_settings(struct net_device *netdev,
+ struct ethtool_cmd *link_settings);
+#endif
+
+void hinic3_auto_moderation_work(struct work_struct *work);
+
+typedef void (*hinic3_reopen_handler)(struct hinic3_nic_dev *nic_dev,
+ const void *priv_data);
+int hinic3_change_channel_settings(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *trxq_params,
+ hinic3_reopen_handler reopen_handler,
+ const void *priv_data);
+
+void hinic3_link_status_change(struct hinic3_nic_dev *nic_dev, bool status);
+
+#ifdef HAVE_XDP_SUPPORT
+bool hinic3_is_xdp_enable(struct hinic3_nic_dev *nic_dev);
+int hinic3_xdp_max_mtu(struct hinic3_nic_dev *nic_dev);
+#endif
+
+#ifdef HAVE_UDP_TUNNEL_NIC_INFO
+int hinic3_udp_tunnel_set_port(struct net_device *netdev, unsigned int table,
+ unsigned int entry, struct udp_tunnel_info *ti);
+int hinic3_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table,
+ unsigned int entry, struct udp_tunnel_info *ti);
+#endif /* HAVE_UDP_TUNNEL_NIC_INFO */
+
+#if defined(ETHTOOL_GFECPARAM) && defined(ETHTOOL_SFECPARAM)
+int set_fecparam(void *hwdev, u8 fecparam);
+int get_fecparam(void *hwdev, u8 *advertised_fec, u8 *supported_fec);
+#endif
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c
new file mode 100644
index 0000000..6cc294e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c
@@ -0,0 +1,649 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "nic_mpu_cmd.h"
+#include "nic_npu_cmd.h"
+
+static int hinic3_init_vf_config(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_info = NULL;
+ u16 func_id;
+ int err = 0;
+
+ vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ ether_addr_copy(vf_info->drv_mac_addr, vf_info->user_mac_addr);
+ if (!is_zero_ether_addr(vf_info->drv_mac_addr)) {
+ vf_info->use_specified_mac = true;
+ func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf_id;
+
+ err = hinic3_set_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ vf_info->pf_vlan, func_id,
+ HINIC3_CHANNEL_NIC);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d MAC\n",
+ HW_VF_ID_TO_OS(vf_id));
+ return err;
+ }
+ } else {
+ vf_info->use_specified_mac = false;
+ }
+
+ if (hinic3_vf_info_vlanprio(nic_io->hwdev, vf_id)) {
+ err = hinic3_cfg_vf_vlan(nic_io, HINIC3_CMD_OP_ADD,
+ vf_info->pf_vlan, vf_info->pf_qos,
+ vf_id);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to add VF %d VLAN_QOS\n",
+ HW_VF_ID_TO_OS(vf_id));
+ return err;
+ }
+ }
+
+ if (vf_info->max_rate) {
+ err = hinic3_set_vf_tx_rate(nic_io->hwdev, vf_id,
+ vf_info->max_rate,
+ vf_info->min_rate);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d max rate %u, min rate %u\n",
+ HW_VF_ID_TO_OS(vf_id), vf_info->max_rate,
+ vf_info->min_rate);
+ return err;
+ }
+ }
+
+ return 0;
+}
+
+static int register_vf_msg_handler(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ int err;
+
+ if (vf_id > nic_io->max_vfs) {
+		nic_err(nic_io->dev_hdl, "Register VF id %d exceeds limit [0-%d]\n",
+ HW_VF_ID_TO_OS(vf_id), HW_VF_ID_TO_OS(nic_io->max_vfs));
+ return -EFAULT;
+ }
+
+ err = hinic3_init_vf_config(nic_io, vf_id);
+ if (err != 0)
+ return err;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].registered = true;
+
+ return 0;
+}
+
+static int unregister_vf_msg_handler(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_info =
+ nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (vf_id > nic_io->max_vfs)
+ return -EFAULT;
+
+ vf_info->registered = false;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+ mac_info.func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+ mac_info.vlan_id = vf_info->pf_vlan;
+ ether_addr_copy(mac_info.mac, vf_info->drv_mac_addr);
+
+ if (vf_info->use_specified_mac || vf_info->pf_vlan) {
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_DEL_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size);
+ if (err || mac_info.msg_head.status || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to delete VF %d MAC, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err,
+ mac_info.msg_head.status, out_size);
+ return -EFAULT;
+ }
+ }
+
+ memset(vf_info->drv_mac_addr, 0, ETH_ALEN);
+
+ return 0;
+}
+
+static int hinic3_register_vf_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf_id, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_cmd_register_vf *register_vf = buf_in;
+ struct hinic3_cmd_register_vf *register_info = buf_out;
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ int err;
+
+ if (!vf_info)
+ return -EINVAL;
+
+ if (register_vf->op_register) {
+ vf_info->support_extra_feature = register_vf->support_extra_feature;
+ err = register_vf_msg_handler(nic_io, vf_id);
+ } else {
+ err = unregister_vf_msg_handler(nic_io, vf_id);
+ vf_info->support_extra_feature = 0;
+ }
+
+ if (err != 0)
+ register_info->msg_head.status = EFAULT;
+
+ *out_size = sizeof(*register_info);
+
+ return 0;
+}
+
+void hinic3_unregister_vf(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+
+ if (!vf_info)
+ return;
+
+ unregister_vf_msg_handler(nic_io, vf_id);
+ vf_info->support_extra_feature = 0;
+}
+
+static int hinic3_get_vf_cos_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf_id, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_cmd_vf_dcb_state *dcb_state = buf_out;
+
+ memcpy(&dcb_state->state, &nic_io->dcb_state,
+ sizeof(nic_io->dcb_state));
+
+ dcb_state->msg_head.status = 0;
+ *out_size = sizeof(*dcb_state);
+ return 0;
+}
+
+static int hinic3_get_vf_mac_msg_handler(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_set *mac_in =
+ (struct hinic3_port_mac_set *)buf_in;
+ struct hinic3_port_mac_set *mac_info = buf_out;
+
+ int err;
+
+ if (!mac_info || !vf_info)
+ return -EINVAL;
+
+ mac_in->func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf;
+
+ if (HINIC3_SUPPORT_VF_MAC(nic_io->hwdev) != 0) {
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_GET_MAC, buf_in,
+ in_size, buf_out, out_size);
+ if (err == 0) {
+ if (is_zero_ether_addr(mac_info->mac))
+ ether_addr_copy(mac_info->mac, vf_info->drv_mac_addr);
+ }
+ return err;
+ }
+
+ ether_addr_copy(mac_info->mac, vf_info->drv_mac_addr);
+ mac_info->msg_head.status = 0;
+ *out_size = sizeof(*mac_info);
+
+ return 0;
+}
+
+static int hinic3_set_vf_mac_msg_handler(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_set *mac_in = buf_in;
+ struct hinic3_port_mac_set *mac_out = buf_out;
+ int err;
+
+ if (!vf_info)
+ return -EINVAL;
+
+ mac_in->func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf;
+
+ if (vf_info->use_specified_mac && !vf_info->trust &&
+ is_valid_ether_addr(mac_in->mac)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF %d MAC address, and vf trust is off.\n",
+ HW_VF_ID_TO_OS(vf));
+ mac_out->msg_head.status = HINIC3_PF_SET_VF_ALREADY;
+ *out_size = sizeof(*mac_out);
+ return 0;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac))
+ mac_in->vlan_id = vf_info->pf_vlan;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_SET_MAC,
+ buf_in, in_size, buf_out, out_size);
+ if (err || !(*out_size)) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d MAC address, err: %d,status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf), err, mac_out->msg_head.status,
+ *out_size);
+ return -EFAULT;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac) && !mac_out->msg_head.status)
+ ether_addr_copy(vf_info->drv_mac_addr, mac_in->mac);
+
+ return err;
+}
+
+static int hinic3_del_vf_mac_msg_handler(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_set *mac_in = buf_in;
+ struct hinic3_port_mac_set *mac_out = buf_out;
+ int err;
+
+ if (!vf_info)
+ return -EINVAL;
+
+ mac_in->func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf;
+
+ if (vf_info->use_specified_mac && !vf_info->trust &&
+ is_valid_ether_addr(mac_in->mac)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF %d MAC address, and vf trust is off.\n",
+ HW_VF_ID_TO_OS(vf));
+ mac_out->msg_head.status = HINIC3_PF_SET_VF_ALREADY;
+ *out_size = sizeof(*mac_out);
+ return 0;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac))
+ mac_in->vlan_id = vf_info->pf_vlan;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_DEL_MAC,
+ buf_in, in_size, buf_out, out_size);
+ if (err || !(*out_size)) {
+ nic_err(nic_io->dev_hdl, "Failed to delete VF %d MAC, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf), err, mac_out->msg_head.status,
+ *out_size);
+ return -EFAULT;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac) && !mac_out->msg_head.status)
+ eth_zero_addr(vf_info->drv_mac_addr);
+
+ return err;
+}
+
+static int hinic3_update_vf_mac_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_update *mac_in = buf_in;
+ struct hinic3_port_mac_update *mac_out = buf_out;
+ int err;
+
+ if (!vf_info)
+ return -EINVAL;
+
+ if (!is_valid_ether_addr(mac_in->new_mac)) {
+		nic_err(nic_io->dev_hdl, "MAC address for VF update is invalid.\n");
+ return -EINVAL;
+ }
+ mac_in->func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf;
+
+ if (vf_info->use_specified_mac && !vf_info->trust) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF %d MAC address, and vf trust is off.\n",
+ HW_VF_ID_TO_OS(vf));
+ mac_out->msg_head.status = HINIC3_PF_SET_VF_ALREADY;
+ *out_size = sizeof(*mac_out);
+ return 0;
+ }
+
+ mac_in->vlan_id = vf_info->pf_vlan;
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_UPDATE_MAC,
+ buf_in, in_size, buf_out, out_size);
+ if (err || !(*out_size)) {
+ nic_warn(nic_io->dev_hdl, "Failed to update VF %d MAC, err: %d,status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf), err, mac_out->msg_head.status,
+ *out_size);
+ return -EFAULT;
+ }
+
+ if (!mac_out->msg_head.status)
+ ether_addr_copy(vf_info->drv_mac_addr, mac_in->new_mac);
+
+ return err;
+}
+
+const struct vf_msg_handler vf_cmd_handler[] = {
+ {
+ .cmd = HINIC3_NIC_CMD_VF_REGISTER,
+ .handler = hinic3_register_vf_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_GET_MAC,
+ .handler = hinic3_get_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_SET_MAC,
+ .handler = hinic3_set_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_DEL_MAC,
+ .handler = hinic3_del_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_UPDATE_MAC,
+ .handler = hinic3_update_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_VF_COS,
+ .handler = hinic3_get_vf_cos_msg_handler
+ },
+};
+
+static int _l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel)
+{
+ u32 i, cmd_cnt = ARRAY_LEN(vf_cmd_handler);
+ bool cmd_to_pf = false;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF &&
+ !hinic3_is_slave_host(hwdev)) {
+ for (i = 0; i < cmd_cnt; i++) {
+ if (cmd == vf_cmd_handler[i].cmd)
+ cmd_to_pf = true;
+ }
+ }
+
+ if (cmd_to_pf)
+ return hinic3_mbox_to_pf(hwdev, HINIC3_MOD_L2NIC, cmd, buf_in,
+ in_size, buf_out, out_size, 0,
+ channel);
+
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, buf_in,
+ in_size, buf_out, out_size, 0, channel);
+}
+
+int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _l2nic_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, HINIC3_CHANNEL_NIC);
+}
+
+int l2nic_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u16 channel)
+{
+ return _l2nic_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, channel);
+}
+
+/* pf/ppf handler mbox msg from vf */
+int hinic3_pf_mbox_handler(void *hwdev,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 index, cmd_size = ARRAY_LEN(vf_cmd_handler);
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ for (index = 0; index < cmd_size; index++) {
+ if (cmd == vf_cmd_handler[index].cmd)
+ return vf_cmd_handler[index].handler(nic_io, vf_id,
+ buf_in, in_size,
+ buf_out, out_size);
+ }
+
+ nic_warn(nic_io->dev_hdl, "NO handler for nic cmd(%u) received from vf id: %u\n",
+ cmd, vf_id);
+
+ return -EINVAL;
+}
+
+void hinic3_notify_dcb_state_event(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_event_info event_info = {0};
+ int i;
+/*lint -e679*/
+ if (dcb_state->trust == HINIC3_DCB_PCP)
+		/* the 8 user-priority to CoS mapping relationships */
+ sdk_info(nic_io->dev_hdl, "DCB %s, default cos %u, pcp2cos %u%u%u%u%u%u%u%u\n",
+ dcb_state->dcb_on ? "on" : "off", dcb_state->default_cos,
+ dcb_state->pcp2cos[ARRAY_INDEX_0], dcb_state->pcp2cos[ARRAY_INDEX_1],
+ dcb_state->pcp2cos[ARRAY_INDEX_2], dcb_state->pcp2cos[ARRAY_INDEX_3],
+ dcb_state->pcp2cos[ARRAY_INDEX_4], dcb_state->pcp2cos[ARRAY_INDEX_5],
+ dcb_state->pcp2cos[ARRAY_INDEX_6], dcb_state->pcp2cos[ARRAY_INDEX_7]);
+ else
+ for (i = 0; i < NIC_DCB_DSCP_NUM; i++) {
+ sdk_info(nic_io->dev_hdl,
+ "DCB %s, default cos %u, dscp2cos %u%u%u%u%u%u%u%u\n",
+ dcb_state->dcb_on ? "on" : "off", dcb_state->default_cos,
+ dcb_state->dscp2cos[ARRAY_INDEX_0 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_1 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_2 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_3 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_4 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_5 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_6 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_7 + i * NIC_DCB_DSCP_NUM]);
+ }
+/*lint +e679*/
+ /* Saved in sdk for stateful module */
+ hinic3_save_dcb_state(nic_io, dcb_state);
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = EVENT_NIC_DCB_STATE_CHANGE;
+ memcpy((void *)event_info.event_data, dcb_state, sizeof(*dcb_state));
+
+ hinic3_event_callback(nic_io->hwdev, &event_info);
+}
+
+static void dcb_state_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_cmd_vf_dcb_state *vf_dcb = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ vf_dcb = buf_in;
+ if (!vf_dcb)
+ return;
+
+ hinic3_notify_dcb_state_event(nic_io, &vf_dcb->state);
+}
+
+static void tx_pause_excp_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct nic_cmd_tx_pause_notice *excp_info = buf_in;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ if (in_size != sizeof(*excp_info)) {
+ nic_err(nic_io->dev_hdl, "Invalid in_size: %u, should be %lu\n",
+ in_size, sizeof(*excp_info));
+ return;
+ }
+
+ nic_warn(nic_io->dev_hdl, "Receive tx pause exception event, excp: %u, level: %u\n",
+ excp_info->tx_pause_except, excp_info->except_level);
+
+ hinic3_fault_event_report(hwdev, HINIC3_FAULT_SRC_TX_PAUSE_EXCP,
+ (u16)excp_info->except_level);
+}
+
+static void bond_active_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_bond_active_report_info *active_info = buf_in;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_event_info event_info = {0};
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ if (in_size != sizeof(*active_info)) {
+		nic_err(nic_io->dev_hdl, "Invalid in_size: %u, should be %lu\n",
+ in_size, sizeof(*active_info));
+ return;
+ }
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = HINIC3_NIC_CMD_BOND_ACTIVE_NOTICE;
+ memcpy((void *)event_info.event_data, active_info, sizeof(*active_info));
+
+ hinic3_event_callback(nic_io->hwdev, &event_info);
+}
+
+static void outband_vlan_cfg_event_handler(void *hwdev, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_outband_cfg_info *outband_cfg_info = buf_in;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_event_info event_info = {0};
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ nic_info(nic_io->dev_hdl, "outband vlan cfg event received\n");
+
+ if (in_size != sizeof(*outband_cfg_info)) {
+ nic_err(nic_io->dev_hdl, "outband cfg info invalid in_size: %u, should be %lu\n",
+ in_size, sizeof(*outband_cfg_info));
+ return;
+ }
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = EVENT_NIC_OUTBAND_CFG;
+ memcpy((void *)event_info.event_data,
+ outband_cfg_info, sizeof(*outband_cfg_info));
+
+ hinic3_event_callback(nic_io->hwdev, &event_info);
+}
+
+static const struct nic_event_handler nic_cmd_handler[] = {
+ {
+ .cmd = HINIC3_NIC_CMD_VF_COS,
+ .handler = dcb_state_event,
+ },
+ {
+ .cmd = HINIC3_NIC_CMD_TX_PAUSE_EXCP_NOTICE,
+ .handler = tx_pause_excp_event_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_BOND_ACTIVE_NOTICE,
+ .handler = bond_active_event_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_OUTBAND_CFG_NOTICE,
+ .handler = outband_vlan_cfg_event_handler,
+ },
+};
+
+static int _event_handler(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 size = sizeof(nic_cmd_handler) / sizeof(struct nic_event_handler);
+ u32 i;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ *out_size = 0;
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ for (i = 0; i < size; i++) {
+ if (cmd == nic_cmd_handler[i].cmd) {
+ nic_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ return 0;
+ }
+ }
+
+ /* can't find this event cmd */
+ sdk_warn(nic_io->dev_hdl, "Unsupported nic event, cmd: %u\n", cmd);
+ *out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+
+ return 0;
+}
+
+/* VF handler for mailbox messages from the PF/PPF:
+ * VF link change events and VF fault report events (TBD)
+ */
+int hinic3_vf_event_handler(void *hwdev,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+}
+
+/* PF/PPF handler for NIC events reported by the management CPU */
+void hinic3_pf_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c
new file mode 100644
index 0000000..a9768b7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c
@@ -0,0 +1,1130 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "nic_mpu_cmd.h"
+#include "nic_npu_cmd.h"
+#include "hinic3_nic_io.h"
+
+#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT 1
+#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 1
+#define HINIC3_DEAULT_DROP_THD_ON (0xFFFF)
+#define HINIC3_DEAULT_DROP_THD_OFF 0
+/*lint -e806*/
+static unsigned char tx_pending_limit = HINIC3_DEAULT_TX_CI_PENDING_LIMIT;
+module_param(tx_pending_limit, byte, 0444);
+MODULE_PARM_DESC(tx_pending_limit, "TX CI coalescing parameter pending_limit (default=1)");
+
+static unsigned char tx_coalescing_time = HINIC3_DEAULT_TX_CI_COALESCING_TIME;
+module_param(tx_coalescing_time, byte, 0444);
+MODULE_PARM_DESC(tx_coalescing_time, "TX CI coalescing parameter coalescing_time (default=1)");
+
+static unsigned char rq_wqe_type = HINIC3_NORMAL_RQ_WQE;
+module_param(rq_wqe_type, byte, 0444);
+MODULE_PARM_DESC(rq_wqe_type, "RQ WQE type 1-16Bytes, 2-32Bytes (default=1)");
+
+/*lint +e806*/
+static u32 tx_drop_thd_on = HINIC3_DEAULT_DROP_THD_ON;
+module_param(tx_drop_thd_on, uint, 0644);
+MODULE_PARM_DESC(tx_drop_thd_on, "TX parameter drop_thd_on (default=0xffff)");
+
+static u32 tx_drop_thd_off = HINIC3_DEAULT_DROP_THD_OFF;
+module_param(tx_drop_thd_off, uint, 0644);
+MODULE_PARM_DESC(tx_drop_thd_off, "TX parameter drop_thd_off (default=0)");
+/* performance: align each queue's CI address to a 64-byte cacheline */
+#define HINIC3_CI_Q_ADDR_SIZE (64U)
+
+#define CI_TABLE_SIZE(num_qps, pg_sz) \
+ (ALIGN((num_qps) * HINIC3_CI_Q_ADDR_SIZE, pg_sz))
+
+#define HINIC3_CI_VADDR(base_addr, q_id) ((u8 *)(base_addr) + \
+ (q_id) * HINIC3_CI_Q_ADDR_SIZE)
+
+#define HINIC3_CI_PADDR(base_paddr, q_id) ((base_paddr) + \
+ (q_id) * HINIC3_CI_Q_ADDR_SIZE)
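+
+/* each SQ owns one 64-byte slot in the CI table; hardware writes the SQ
+ * consumer index back into that slot (programmed via hinic3_set_ci_table())
+ */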
+
+#define WQ_PREFETCH_MAX 4
+#define WQ_PREFETCH_MIN 1
+#define WQ_PREFETCH_THRESHOLD 256
+
+#define HINIC3_Q_CTXT_MAX 31 /* (2048 - 8) / 64 */
+
+enum hinic3_qp_ctxt_type {
+ HINIC3_QP_CTXT_TYPE_SQ,
+ HINIC3_QP_CTXT_TYPE_RQ,
+};
+
+struct hinic3_qp_ctxt_header {
+ u16 num_queues;
+ u16 queue_type;
+ u16 start_qid;
+ u16 rsvd;
+};
+
+struct hinic3_sq_ctxt {
+ u32 ci_pi;
+ u32 drop_mode_sp;
+ u32 wq_pfn_hi_owner;
+ u32 wq_pfn_lo;
+
+ u32 rsvd0;
+ u32 pkt_drop_thd;
+ u32 global_sq_id;
+ u32 vlan_ceq_attr;
+
+ u32 pref_cache;
+ u32 pref_ci_owner;
+ u32 pref_wq_pfn_hi_ci;
+ u32 pref_wq_pfn_lo;
+
+ u32 rsvd8;
+ u32 rsvd9;
+ u32 wq_block_pfn_hi;
+ u32 wq_block_pfn_lo;
+};
+
+struct hinic3_rq_ctxt {
+ u32 ci_pi;
+ u32 ceq_attr;
+ u32 wq_pfn_hi_type_owner;
+ u32 wq_pfn_lo;
+
+ u32 rsvd[3];
+ u32 cqe_sge_len;
+
+ u32 pref_cache;
+ u32 pref_ci_owner;
+ u32 pref_wq_pfn_hi_ci;
+ u32 pref_wq_pfn_lo;
+
+ u32 pi_paddr_hi;
+ u32 pi_paddr_lo;
+ u32 wq_block_pfn_hi;
+ u32 wq_block_pfn_lo;
+};
+
+struct hinic3_sq_ctxt_block {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX];
+};
+
+struct hinic3_rq_ctxt_block {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX];
+};
+
+struct hinic3_clean_queue_ctxt {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ u32 rsvd;
+};
+
+#define SQ_CTXT_SIZE(num_sqs) ((u16)(sizeof(struct hinic3_qp_ctxt_header) \
+ + (num_sqs) * sizeof(struct hinic3_sq_ctxt)))
+
+#define RQ_CTXT_SIZE(num_rqs) ((u16)(sizeof(struct hinic3_qp_ctxt_header) \
+ + (num_rqs) * sizeof(struct hinic3_rq_ctxt)))
+
+#define CI_IDX_HIGH_SHIFH 12
+
+#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH)
+
+#define SQ_CTXT_PI_IDX_SHIFT 0
+#define SQ_CTXT_CI_IDX_SHIFT 16
+
+#define SQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define SQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define SQ_CTXT_CI_PI_SET(val, member) (((val) & \
+ SQ_CTXT_##member##_MASK) \
+ << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0
+#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1
+
+#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U
+#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U
+
+#define SQ_CTXT_MODE_SET(val, member) (((val) & \
+ SQ_CTXT_MODE_##member##_MASK) \
+ << SQ_CTXT_MODE_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U
+
+#define SQ_CTXT_WQ_PAGE_SET(val, member) (((val) & \
+ SQ_CTXT_WQ_PAGE_##member##_MASK) \
+ << SQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0
+#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16
+
+#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU
+#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU
+
+#define SQ_CTXT_PKT_DROP_THD_SET(val, member) (((val) & \
+ SQ_CTXT_PKT_DROP_##member##_MASK) \
+ << SQ_CTXT_PKT_DROP_##member##_SHIFT)
+
+#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0
+
+#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU
+
+#define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) (((val) & \
+ SQ_CTXT_##member##_MASK) \
+ << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_VLAN_TAG_SHIFT 0
+#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16
+#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19
+#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23
+
+#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU
+#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U
+#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U
+#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U
+
+#define SQ_CTXT_VLAN_CEQ_SET(val, member) (((val) & \
+ SQ_CTXT_VLAN_##member##_MASK) \
+ << SQ_CTXT_VLAN_##member##_SHIFT)
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14
+#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU
+#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU
+
+#define SQ_CTXT_PREF_CI_HI_SHIFT 0
+#define SQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define SQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define SQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define SQ_CTXT_PREF_CI_LOW_SHIFT 20
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU
+
+#define SQ_CTXT_PREF_SET(val, member) (((val) & \
+ SQ_CTXT_PREF_##member##_MASK) \
+ << SQ_CTXT_PREF_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define SQ_CTXT_WQ_BLOCK_SET(val, member) (((val) & \
+ SQ_CTXT_WQ_BLOCK_##member##_MASK) \
+ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define RQ_CTXT_PI_IDX_SHIFT 0
+#define RQ_CTXT_CI_IDX_SHIFT 16
+
+#define RQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define RQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define RQ_CTXT_CI_PI_SET(val, member) (((val) & \
+ RQ_CTXT_##member##_MASK) \
+ << RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21
+#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31
+
+#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU
+#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U
+
+#define RQ_CTXT_CEQ_ATTR_SET(val, member) (((val) & \
+ RQ_CTXT_CEQ_ATTR_##member##_MASK) \
+ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28
+#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U
+#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U
+
+#define RQ_CTXT_WQ_PAGE_SET(val, member) (((val) & \
+ RQ_CTXT_WQ_PAGE_##member##_MASK) << \
+ RQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define RQ_CTXT_CQE_LEN_SHIFT 28
+
+#define RQ_CTXT_CQE_LEN_MASK 0x3U
+
+#define RQ_CTXT_CQE_LEN_SET(val, member) (((val) & \
+ RQ_CTXT_##member##_MASK) << \
+ RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14
+#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU
+#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU
+
+#define RQ_CTXT_PREF_CI_HI_SHIFT 0
+#define RQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define RQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define RQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define RQ_CTXT_PREF_CI_LOW_SHIFT 20
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU
+
+#define RQ_CTXT_PREF_SET(val, member) (((val) & \
+ RQ_CTXT_PREF_##member##_MASK) << \
+ RQ_CTXT_PREF_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define RQ_CTXT_WQ_BLOCK_SET(val, member) (((val) & \
+ RQ_CTXT_WQ_BLOCK_##member##_MASK) << \
+ RQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define SIZE_16BYTES(size) (ALIGN((size), 16) >> 4)
+
+#define WQ_PAGE_PFN_SHIFT 12
+#define WQ_BLOCK_PFN_SHIFT 9
+
+#define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT)
+#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT)
+
+/* sq and rq */
+#define TOTAL_DB_NUM(num_qps) ((u16)(2 * (num_qps)))
+
+static int hinic3_create_sq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq,
+ u16 q_id, u32 sq_depth, u16 sq_msix_idx)
+{
+ int err;
+
+	/* hardware requires the SQ owner bit to be initialized to 1 */
+ sq->owner = 1;
+
+ sq->q_id = q_id;
+ sq->msix_entry_idx = sq_msix_idx;
+
+ err = hinic3_wq_create(nic_io->hwdev, &sq->wq, sq_depth,
+ (u16)BIT(HINIC3_SQ_WQEBB_SHIFT));
+ if (err) {
+ sdk_err(nic_io->dev_hdl, "Failed to create tx queue(%u) wq\n",
+ q_id);
+ return err;
+ }
+
+ return 0;
+}
+
+static void hinic3_destroy_sq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq)
+{
+ hinic3_wq_destroy(&sq->wq);
+}
+
+static int hinic3_create_rq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *rq,
+ u16 q_id, u32 rq_depth, u16 rq_msix_idx)
+{
+ int err;
+
+	/* rq_wqe_type only supports type 1 (16 bytes) or 2 (32 bytes) */
+ if (rq_wqe_type != HINIC3_NORMAL_RQ_WQE && rq_wqe_type != HINIC3_EXTEND_RQ_WQE) {
+ sdk_warn(nic_io->dev_hdl, "Module Parameter rq_wqe_type value %d is out of range: [%d, %d].",
+ rq_wqe_type, HINIC3_NORMAL_RQ_WQE, HINIC3_EXTEND_RQ_WQE);
+ rq_wqe_type = HINIC3_NORMAL_RQ_WQE;
+ }
+
+ rq->wqe_type = rq_wqe_type;
+ rq->q_id = q_id;
+ rq->msix_entry_idx = rq_msix_idx;
+
+ err = hinic3_wq_create(nic_io->hwdev, &rq->wq, rq_depth,
+ (u16)BIT(HINIC3_RQ_WQEBB_SHIFT + rq_wqe_type));
+ if (err) {
+ sdk_err(nic_io->dev_hdl, "Failed to create rx queue(%u) wq\n",
+ q_id);
+ return err;
+ }
+
+ rq->rx.pi_virt_addr = dma_zalloc_coherent(nic_io->dev_hdl, PAGE_SIZE,
+ &rq->rx.pi_dma_addr,
+ GFP_KERNEL);
+ if (!rq->rx.pi_virt_addr) {
+ hinic3_wq_destroy(&rq->wq);
+ nic_err(nic_io->dev_hdl, "Failed to allocate rq pi virt addr\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void hinic3_destroy_rq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *rq)
+{
+ dma_free_coherent(nic_io->dev_hdl, PAGE_SIZE, rq->rx.pi_virt_addr,
+ rq->rx.pi_dma_addr);
+
+ hinic3_wq_destroy(&rq->wq);
+}
+
+static int create_qp(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq,
+ struct hinic3_io_queue *rq, u16 q_id, u32 sq_depth,
+ u32 rq_depth, u16 qp_msix_idx)
+{
+ int err;
+
+ err = hinic3_create_sq(nic_io, sq, q_id, sq_depth, qp_msix_idx);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to create sq, qid: %u\n",
+ q_id);
+ return err;
+ }
+
+ err = hinic3_create_rq(nic_io, rq, q_id, rq_depth, qp_msix_idx);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to create rq, qid: %u\n",
+ q_id);
+ goto create_rq_err;
+ }
+
+ return 0;
+
+create_rq_err:
+ hinic3_destroy_sq(nic_io, sq);
+
+ return err;
+}
+
+static void destroy_qp(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq,
+ struct hinic3_io_queue *rq)
+{
+ hinic3_destroy_sq(nic_io, sq);
+ hinic3_destroy_rq(nic_io, rq);
+}
+
+int hinic3_init_nicio_res(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ void __iomem *db_base = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ nic_io->max_qps = hinic3_func_max_qnum(hwdev);
+
+ err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate doorbell for sqs\n");
+ return -ENOMEM;
+ }
+ nic_io->sqs_db_addr = (u8 *)db_base;
+
+ err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err) {
+ hinic3_free_db_addr(hwdev, nic_io->sqs_db_addr, NULL);
+ nic_err(nic_io->dev_hdl, "Failed to allocate doorbell for rqs\n");
+ return -ENOMEM;
+ }
+ nic_io->rqs_db_addr = (u8 *)db_base;
+
+ nic_io->ci_vaddr_base =
+ dma_zalloc_coherent(nic_io->dev_hdl,
+ CI_TABLE_SIZE(nic_io->max_qps, PAGE_SIZE),
+ &nic_io->ci_dma_base, GFP_KERNEL);
+ if (!nic_io->ci_vaddr_base) {
+ hinic3_free_db_addr(hwdev, nic_io->sqs_db_addr, NULL);
+ hinic3_free_db_addr(hwdev, nic_io->rqs_db_addr, NULL);
+ nic_err(nic_io->dev_hdl, "Failed to allocate ci area\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+void hinic3_deinit_nicio_res(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return;
+ }
+
+ dma_free_coherent(nic_io->dev_hdl,
+ CI_TABLE_SIZE(nic_io->max_qps, PAGE_SIZE),
+ nic_io->ci_vaddr_base, nic_io->ci_dma_base);
+ /* free all doorbell */
+ hinic3_free_db_addr(hwdev, nic_io->sqs_db_addr, NULL);
+ hinic3_free_db_addr(hwdev, nic_io->rqs_db_addr, NULL);
+}
+
+int hinic3_alloc_qps(void *hwdev, struct irq_info *qps_msix_arry,
+ struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_io_queue *sqs = NULL;
+ struct hinic3_io_queue *rqs = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 q_id, i, num_qps;
+ int err;
+
+ if (!hwdev || !qps_msix_arry || !qp_params)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ if (qp_params->num_qps > nic_io->max_qps || !qp_params->num_qps)
+ return -EINVAL;
+
+ num_qps = qp_params->num_qps;
+ sqs = kcalloc(num_qps, sizeof(*sqs), GFP_KERNEL);
+ if (!sqs) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate sq\n");
+ err = -ENOMEM;
+ goto alloc_sqs_err;
+ }
+
+ rqs = kcalloc(num_qps, sizeof(*rqs), GFP_KERNEL);
+ if (!rqs) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate rq\n");
+ err = -ENOMEM;
+ goto alloc_rqs_err;
+ }
+
+ for (q_id = 0; q_id < num_qps; q_id++) {
+ err = create_qp(nic_io, &sqs[q_id], &rqs[q_id], q_id, qp_params->sq_depth,
+ qp_params->rq_depth, qps_msix_arry[q_id].msix_entry_idx);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate qp %u, err: %d\n", q_id, err);
+ goto create_qp_err;
+ }
+ }
+
+ qp_params->sqs = sqs;
+ qp_params->rqs = rqs;
+
+ return 0;
+
+create_qp_err:
+ for (i = 0; i < q_id; i++)
+ destroy_qp(nic_io, &sqs[i], &rqs[i]);
+
+ kfree(rqs);
+
+alloc_rqs_err:
+ kfree(sqs);
+
+alloc_sqs_err:
+
+ return err;
+}
+
+void hinic3_free_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 q_id;
+
+ if (!hwdev || !qp_params)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return;
+ }
+
+ for (q_id = 0; q_id < qp_params->num_qps; q_id++)
+ destroy_qp(nic_io, &qp_params->sqs[q_id],
+ &qp_params->rqs[q_id]);
+
+ kfree(qp_params->sqs);
+ kfree(qp_params->rqs);
+}
+
+static void init_qps_info(struct hinic3_nic_io *nic_io,
+ struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_io_queue *sqs = qp_params->sqs;
+ struct hinic3_io_queue *rqs = qp_params->rqs;
+ u16 q_id;
+
+ nic_io->num_qps = qp_params->num_qps;
+ nic_io->sq = qp_params->sqs;
+ nic_io->rq = qp_params->rqs;
+ for (q_id = 0; q_id < nic_io->num_qps; q_id++) {
+ sqs[q_id].tx.cons_idx_addr =
+ HINIC3_CI_VADDR(nic_io->ci_vaddr_base, q_id);
+ /* clear ci value */
+ *(u16 *)sqs[q_id].tx.cons_idx_addr = 0;
+ sqs[q_id].db_addr = nic_io->sqs_db_addr;
+
+		/* SQs and RQs use separately allocated doorbell pages */
+ rqs[q_id].db_addr = nic_io->rqs_db_addr;
+ }
+}
+
+int hinic3_init_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !qp_params)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ init_qps_info(nic_io, qp_params);
+
+ return hinic3_init_qp_ctxts(hwdev);
+}
+
+void hinic3_deinit_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !qp_params)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return;
+ }
+
+ qp_params->sqs = nic_io->sq;
+ qp_params->rqs = nic_io->rq;
+ qp_params->num_qps = nic_io->num_qps;
+
+ hinic3_free_qp_ctxts(hwdev);
+}
+
+int hinic3_create_qps(void *hwdev, u16 num_qp, u32 sq_depth, u32 rq_depth,
+ struct irq_info *qps_msix_arry)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_dyna_qp_params qp_params = {0};
+ int err;
+
+ if (!hwdev || !qps_msix_arry)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ err = hinic3_init_nicio_res(hwdev);
+ if (err)
+ return err;
+
+ qp_params.num_qps = num_qp;
+ qp_params.sq_depth = sq_depth;
+ qp_params.rq_depth = rq_depth;
+ err = hinic3_alloc_qps(hwdev, qps_msix_arry, &qp_params);
+ if (err) {
+ hinic3_deinit_nicio_res(hwdev);
+ nic_err(nic_io->dev_hdl,
+ "Failed to allocate qps, err: %d\n", err);
+ return err;
+ }
+
+ init_qps_info(nic_io, &qp_params);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_create_qps);
+
+void hinic3_destroy_qps(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_dyna_qp_params qp_params = {0};
+
+ if (!hwdev)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ hinic3_deinit_qps(hwdev, &qp_params);
+ hinic3_free_qps(hwdev, &qp_params);
+ hinic3_deinit_nicio_res(hwdev);
+}
+EXPORT_SYMBOL(hinic3_destroy_qps);
+
+void *hinic3_get_nic_queue(void *hwdev, u16 q_id, enum hinic3_queue_type q_type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || q_type >= HINIC3_MAX_QUEUE_TYPE)
+ return NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return NULL;
+
+ return ((q_type == HINIC3_SQ) ? &nic_io->sq[q_id] : &nic_io->rq[q_id]);
+}
+EXPORT_SYMBOL(hinic3_get_nic_queue);
+
+static void hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr,
+ enum hinic3_qp_ctxt_type ctxt_type,
+ u16 num_queues, u16 q_id)
+{
+ qp_ctxt_hdr->queue_type = ctxt_type;
+ qp_ctxt_hdr->num_queues = num_queues;
+ qp_ctxt_hdr->start_qid = q_id;
+ qp_ctxt_hdr->rsvd = 0;
+
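+	/* hardware consumes the context header in big-endian format */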
+ hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr));
+}
+
+static void hinic3_sq_prepare_ctxt(struct hinic3_io_queue *sq, u16 sq_id,
+ struct hinic3_sq_ctxt *sq_ctxt)
+{
+ u64 wq_page_addr;
+ u64 wq_page_pfn, wq_block_pfn;
+ u32 wq_page_pfn_hi, wq_page_pfn_lo;
+ u32 wq_block_pfn_hi, wq_block_pfn_lo;
+ u16 pi_start, ci_start;
+
+ ci_start = hinic3_get_sq_local_ci(sq);
+ pi_start = hinic3_get_sq_local_pi(sq);
+
+ wq_page_addr = hinic3_wq_get_first_wqe_page_addr(&sq->wq);
+
+ wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+ wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+ wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+ wq_block_pfn = WQ_BLOCK_PFN(sq->wq.wq_block_paddr);
+ wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+ wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+
+ sq_ctxt->ci_pi =
+ SQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+ SQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+ sq_ctxt->drop_mode_sp =
+ SQ_CTXT_MODE_SET(0, SP_FLAG) |
+ SQ_CTXT_MODE_SET(0, PKT_DROP);
+
+ sq_ctxt->wq_pfn_hi_owner =
+ SQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+ SQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+ sq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+ /* TO DO */
+ sq_ctxt->pkt_drop_thd =
+ SQ_CTXT_PKT_DROP_THD_SET(tx_drop_thd_on, THD_ON) |
+ SQ_CTXT_PKT_DROP_THD_SET(tx_drop_thd_off, THD_OFF);
+
+ sq_ctxt->global_sq_id =
+ SQ_CTXT_GLOBAL_QUEUE_ID_SET(sq_id, GLOBAL_SQ_ID);
+
+	/* c-vlan insertion is enabled by default */
+ sq_ctxt->vlan_ceq_attr =
+ SQ_CTXT_VLAN_CEQ_SET(0, CEQ_EN) |
+ SQ_CTXT_VLAN_CEQ_SET(1, INSERT_MODE);
+
+ sq_ctxt->rsvd0 = 0;
+
+ sq_ctxt->pref_cache =
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+ sq_ctxt->pref_ci_owner =
+ SQ_CTXT_PREF_SET(CI_HIGN_IDX(ci_start), CI_HI) |
+ SQ_CTXT_PREF_SET(1, OWNER);
+
+ sq_ctxt->pref_wq_pfn_hi_ci =
+ SQ_CTXT_PREF_SET(ci_start, CI_LOW) |
+ SQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI);
+
+ sq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+ sq_ctxt->wq_block_pfn_hi =
+ SQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+ sq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+
+ hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt));
+}
+
+static void hinic3_rq_prepare_ctxt_get_wq_info(struct hinic3_io_queue *rq,
+ u32 *wq_page_pfn_hi, u32 *wq_page_pfn_lo,
+ u32 *wq_block_pfn_hi, u32 *wq_block_pfn_lo)
+{
+ u64 wq_page_addr;
+ u64 wq_page_pfn, wq_block_pfn;
+
+ wq_page_addr = hinic3_wq_get_first_wqe_page_addr(&rq->wq);
+
+ wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+ *wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+ *wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+ wq_block_pfn = WQ_BLOCK_PFN(rq->wq.wq_block_paddr);
+ *wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+ *wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+}
+
+static void hinic3_rq_prepare_ctxt(struct hinic3_io_queue *rq, struct hinic3_rq_ctxt *rq_ctxt)
+{
+ u32 wq_page_pfn_hi, wq_page_pfn_lo;
+ u32 wq_block_pfn_hi, wq_block_pfn_lo;
+ u16 pi_start, ci_start;
+ u16 wqe_type = rq->wqe_type;
+
+ /* RQ depth is in unit of 8Bytes */
+ ci_start = (u16)((u32)hinic3_get_rq_local_ci(rq) << wqe_type);
+ pi_start = (u16)((u32)hinic3_get_rq_local_pi(rq) << wqe_type);
+
+ hinic3_rq_prepare_ctxt_get_wq_info(rq, &wq_page_pfn_hi, &wq_page_pfn_lo,
+ &wq_block_pfn_hi, &wq_block_pfn_lo);
+
+ rq_ctxt->ci_pi =
+ RQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+ RQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+ rq_ctxt->ceq_attr = RQ_CTXT_CEQ_ATTR_SET(0, EN) |
+ RQ_CTXT_CEQ_ATTR_SET(rq->msix_entry_idx, INTR);
+
+ rq_ctxt->wq_pfn_hi_type_owner =
+ RQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+ RQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+ switch (wqe_type) {
+ case HINIC3_EXTEND_RQ_WQE:
+ /* use 32Byte WQE with SGE for CQE */
+ rq_ctxt->wq_pfn_hi_type_owner |=
+ RQ_CTXT_WQ_PAGE_SET(0, WQE_TYPE);
+ break;
+ case HINIC3_NORMAL_RQ_WQE:
+ /* use 16Byte WQE with 32Bytes SGE for CQE */
+ rq_ctxt->wq_pfn_hi_type_owner |=
+ RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE);
+ rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN);
+ break;
+ default:
+ pr_err("Invalid rq wqe type: %u", wqe_type);
+ }
+
+ rq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+ rq_ctxt->pref_cache =
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+ rq_ctxt->pref_ci_owner =
+ RQ_CTXT_PREF_SET(CI_HIGN_IDX(ci_start), CI_HI) |
+ RQ_CTXT_PREF_SET(1, OWNER);
+
+ rq_ctxt->pref_wq_pfn_hi_ci =
+ RQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI) |
+ RQ_CTXT_PREF_SET(ci_start, CI_LOW);
+
+ rq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+ rq_ctxt->pi_paddr_hi = upper_32_bits(rq->rx.pi_dma_addr);
+ rq_ctxt->pi_paddr_lo = lower_32_bits(rq->rx.pi_dma_addr);
+
+ rq_ctxt->wq_block_pfn_hi =
+ RQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+ rq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+
+ hinic3_cpu_to_be32(rq_ctxt, sizeof(*rq_ctxt));
+}
+
+static int init_sq_ctxts(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL;
+ struct hinic3_sq_ctxt *sq_ctxt = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_io_queue *sq = NULL;
+ u64 out_param = 0;
+ u16 q_id, curr_id, max_ctxts, i;
+ int err = 0;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
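+	/* program the SQ contexts in batches of at most HINIC3_Q_CTXT_MAX per command buffer */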
+ q_id = 0;
+ while (q_id < nic_io->num_qps) {
+ sq_ctxt_block = cmd_buf->buf;
+ sq_ctxt = sq_ctxt_block->sq_ctxt;
+
+ max_ctxts = (nic_io->num_qps - q_id) > HINIC3_Q_CTXT_MAX ?
+ HINIC3_Q_CTXT_MAX : (nic_io->num_qps - q_id);
+
+ hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr,
+ HINIC3_QP_CTXT_TYPE_SQ, max_ctxts,
+ q_id);
+
+ for (i = 0; i < max_ctxts; i++) {
+ curr_id = q_id + i;
+ sq = &nic_io->sq[curr_id];
+
+ hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]);
+ }
+
+ cmd_buf->size = SQ_CTXT_SIZE(max_ctxts);
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set SQ ctxts, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+
+ err = -EFAULT;
+ break;
+ }
+
+ q_id += max_ctxts;
+ }
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ return err;
+}
+
+static int init_rq_ctxts(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL;
+ struct hinic3_rq_ctxt *rq_ctxt = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_io_queue *rq = NULL;
+ u64 out_param = 0;
+ u16 q_id, curr_id, max_ctxts, i;
+ int err = 0;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
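+	/* program the RQ contexts in batches of at most HINIC3_Q_CTXT_MAX per command buffer */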
+ q_id = 0;
+ while (q_id < nic_io->num_qps) {
+ rq_ctxt_block = cmd_buf->buf;
+ rq_ctxt = rq_ctxt_block->rq_ctxt;
+
+ max_ctxts = (nic_io->num_qps - q_id) > HINIC3_Q_CTXT_MAX ?
+ HINIC3_Q_CTXT_MAX : (nic_io->num_qps - q_id);
+
+ hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr,
+ HINIC3_QP_CTXT_TYPE_RQ, max_ctxts,
+ q_id);
+
+ for (i = 0; i < max_ctxts; i++) {
+ curr_id = q_id + i;
+ rq = &nic_io->rq[curr_id];
+
+ hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]);
+ }
+
+ cmd_buf->size = RQ_CTXT_SIZE(max_ctxts);
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set RQ ctxts, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+
+ err = -EFAULT;
+ break;
+ }
+
+ q_id += max_ctxts;
+ }
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ return err;
+}
+
+static int init_qp_ctxts(struct hinic3_nic_io *nic_io)
+{
+ int err;
+
+ err = init_sq_ctxts(nic_io);
+ if (err)
+ return err;
+
+ err = init_rq_ctxts(nic_io);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int clean_queue_offload_ctxt(struct hinic3_nic_io *nic_io,
+ enum hinic3_qp_ctxt_type ctxt_type)
+{
+ struct hinic3_clean_queue_ctxt *ctxt_block = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ u64 out_param = 0;
+ int err;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ ctxt_block = cmd_buf->buf;
+ ctxt_block->cmdq_hdr.num_queues = nic_io->max_qps;
+ ctxt_block->cmdq_hdr.queue_type = ctxt_type;
+ ctxt_block->cmdq_hdr.start_qid = 0;
+
+ hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block));
+
+ cmd_buf->size = sizeof(*ctxt_block);
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+	if (err || out_param) {
+		nic_err(nic_io->dev_hdl, "Failed to clean queue offload ctxts, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+
+ err = -EFAULT;
+ }
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ return err;
+}
+
+static int clean_qp_offload_ctxt(struct hinic3_nic_io *nic_io)
+{
+ /* clean LRO/TSO context space */
+ return ((clean_queue_offload_ctxt(nic_io, HINIC3_QP_CTXT_TYPE_SQ) != 0) ||
+ (clean_queue_offload_ctxt(nic_io, HINIC3_QP_CTXT_TYPE_RQ) != 0));
+}
+
+/* initialize the QP contexts, set the SQ CI attributes and arm all SQs */
+int hinic3_init_qp_ctxts(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_sq_attr sq_attr;
+ u32 rq_depth;
+ u16 q_id;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EFAULT;
+
+ err = init_qp_ctxts(nic_io);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to init QP ctxts\n");
+ return err;
+ }
+
+ /* clean LRO/TSO context space */
+ err = clean_qp_offload_ctxt(nic_io);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to clean qp offload ctxts\n");
+ return err;
+ }
+
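+	/* the root context takes the RQ depth in basic 8-byte WQEBB units */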
+ rq_depth = nic_io->rq[0].wq.q_depth << nic_io->rq[0].wqe_type;
+
+ err = hinic3_set_root_ctxt(hwdev, rq_depth, nic_io->sq[0].wq.q_depth,
+ nic_io->rx_buff_len, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set root context\n");
+ return err;
+ }
+
+ for (q_id = 0; q_id < nic_io->num_qps; q_id++) {
+ sq_attr.ci_dma_base =
+ HINIC3_CI_PADDR(nic_io->ci_dma_base, q_id) >> 0x2;
+ sq_attr.pending_limit = tx_pending_limit;
+ sq_attr.coalescing_time = tx_coalescing_time;
+ sq_attr.intr_en = 1;
+ sq_attr.intr_idx = nic_io->sq[q_id].msix_entry_idx;
+ sq_attr.l2nic_sqn = q_id;
+ sq_attr.dma_attr_off = 0;
+ err = hinic3_set_ci_table(hwdev, &sq_attr);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set ci table\n");
+ goto set_cons_idx_table_err;
+ }
+ }
+
+ return 0;
+
+set_cons_idx_table_err:
+ hinic3_clean_root_ctxt(hwdev, HINIC3_CHANNEL_NIC);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(hinic3_init_qp_ctxts);
+
+void hinic3_free_qp_ctxts(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ hinic3_clean_root_ctxt(hwdev, HINIC3_CHANNEL_NIC);
+}
+EXPORT_SYMBOL_GPL(hinic3_free_qp_ctxts);
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
new file mode 100644
index 0000000..943a736
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
@@ -0,0 +1,325 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_IO_H
+#define HINIC3_NIC_IO_H
+
+#include "hinic3_crm.h"
+#include "hinic3_common.h"
+#include "hinic3_wq.h"
+
+#define HINIC3_MAX_TX_QUEUE_DEPTH 65536
+#define HINIC3_MAX_RX_QUEUE_DEPTH 16384
+
+#define HINIC3_MIN_QUEUE_DEPTH 128
+
+#define HINIC3_SQ_WQEBB_SHIFT 4
+#define HINIC3_RQ_WQEBB_SHIFT 3
+
+#define HINIC3_SQ_WQEBB_SIZE BIT(HINIC3_SQ_WQEBB_SHIFT)
+#define HINIC3_CQE_SIZE_SHIFT 4
+
+enum hinic3_rq_wqe_type {
+ HINIC3_COMPACT_RQ_WQE,
+ HINIC3_NORMAL_RQ_WQE,
+ HINIC3_EXTEND_RQ_WQE,
+};
+
+struct hinic3_io_queue {
+ struct hinic3_wq wq;
+ union {
+ u8 wqe_type; /* for rq */
+ u8 owner; /* for sq */
+ };
+ u8 rsvd1;
+ u16 rsvd2;
+
+ u16 q_id;
+ u16 msix_entry_idx;
+
+ u8 __iomem *db_addr;
+
+ union {
+ struct {
+ void *cons_idx_addr;
+ } tx;
+
+ struct {
+ u16 *pi_virt_addr;
+ dma_addr_t pi_dma_addr;
+ } rx;
+ };
+} ____cacheline_aligned;
+
+struct hinic3_nic_db {
+ u32 db_info;
+ u32 pi_hi;
+};
+
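+/* LLT unit-test builds may define 'static' away; undefine it here so the
+ * inline helpers below keep static linkage (LLT_STATIC_DEF_SAVED records
+ * that the definition was dropped)
+ */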
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+/* *
+ * @brief hinic3_get_sq_free_wqebbs - get send queue free wqebb
+ * @param sq: send queue
+ * @retval : number of free wqebb
+ */
+static inline u16 hinic3_get_sq_free_wqebbs(struct hinic3_io_queue *sq)
+{
+ return hinic3_wq_free_wqebbs(&sq->wq);
+}
+
+/* *
+ * @brief hinic3_update_sq_local_ci - update send queue local consumer index
+ * @param sq: send queue
+ * @param wqebb_cnt: number of wqebbs
+ */
+static inline void hinic3_update_sq_local_ci(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt)
+{
+ hinic3_wq_put_wqebbs(&sq->wq, wqebb_cnt);
+}
+
+/* *
+ * @brief hinic3_get_sq_local_ci - get send queue local consumer index
+ * @param sq: send queue
+ * @retval : local consumer index
+ */
+static inline u16 hinic3_get_sq_local_ci(const struct hinic3_io_queue *sq)
+{
+ return WQ_MASK_IDX(&sq->wq, sq->wq.cons_idx);
+}
+
+/* *
+ * @brief hinic3_get_sq_local_pi - get send queue local producer index
+ * @param sq: send queue
+ * @retval : local producer index
+ */
+static inline u16 hinic3_get_sq_local_pi(const struct hinic3_io_queue *sq)
+{
+ return WQ_MASK_IDX(&sq->wq, sq->wq.prod_idx);
+}
+
+/* *
+ * @brief hinic3_get_sq_hw_ci - get send queue hardware consumer index
+ * @param sq: send queue
+ * @retval : hardware consumer index
+ */
+static inline u16 hinic3_get_sq_hw_ci(const struct hinic3_io_queue *sq)
+{
+ return WQ_MASK_IDX(&sq->wq,
+ hinic3_hw_cpu16(*(u16 *)sq->tx.cons_idx_addr));
+}
+
+/* *
+ * @brief hinic3_get_sq_one_wqebb - get send queue wqe with single wqebb
+ * @param sq: send queue
+ * @param pi: return current pi
+ * @retval : wqe base address
+ */
+static inline void *hinic3_get_sq_one_wqebb(struct hinic3_io_queue *sq, u16 *pi)
+{
+ return hinic3_wq_get_one_wqebb(&sq->wq, pi);
+}
+
+/* *
+ * @brief hinic3_get_sq_multi_wqebbs - get send queue wqe with multiple wqebbs
+ * @param sq: send queue
+ * @param wqebb_cnt: wqebb counter
+ * @param pi: return current pi
+ * @param second_part_wqebbs_addr: second part wqebbs base address
+ * @param first_part_wqebbs_num: number of wqebbs in the first part
+ * @retval : first part wqebbs base address
+ */
+static inline void *hinic3_get_sq_multi_wqebbs(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt, u16 *pi,
+ void **second_part_wqebbs_addr,
+ u16 *first_part_wqebbs_num)
+{
+ return hinic3_wq_get_multi_wqebbs(&sq->wq, wqebb_cnt, pi,
+ second_part_wqebbs_addr,
+ first_part_wqebbs_num);
+}
+
+/* *
+ * @brief hinic3_get_and_update_sq_owner - get and update send queue owner bit
+ * @param sq: send queue
+ * @param curr_pi: current pi
+ * @param wqebb_cnt: wqebb counter
+ * @retval : owner bit
+ */
+static inline u16 hinic3_get_and_update_sq_owner(struct hinic3_io_queue *sq,
+ u16 curr_pi, u16 wqebb_cnt)
+{
+ u16 owner = sq->owner;
+
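+	/* the owner bit flips each time the producer index wraps past the end of the queue */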
+ if (unlikely(curr_pi + wqebb_cnt >= sq->wq.q_depth))
+ sq->owner = !sq->owner;
+
+ return owner;
+}
+
+/* *
+ * @brief hinic3_get_sq_wqe_with_owner - get send queue wqe with owner
+ * @param sq: send queue
+ * @param wqebb_cnt: wqebb counter
+ * @param pi: return current pi
+ * @param owner: return owner bit
+ * @param second_part_wqebbs_addr: second part wqebbs base address
+ * @param first_part_wqebbs_num: number of wqebbs in the first part
+ * @retval : first part wqebbs base address
+ */
+static inline void *hinic3_get_sq_wqe_with_owner(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt, u16 *pi,
+ u16 *owner,
+ void **second_part_wqebbs_addr,
+ u16 *first_part_wqebbs_num)
+{
+ void *wqe = hinic3_wq_get_multi_wqebbs(&sq->wq, wqebb_cnt, pi,
+ second_part_wqebbs_addr,
+ first_part_wqebbs_num);
+
+ *owner = sq->owner;
+ if (unlikely(*pi + wqebb_cnt >= sq->wq.q_depth))
+ sq->owner = !sq->owner;
+
+ return wqe;
+}
+
+/* *
+ * @brief hinic3_rollback_sq_wqebbs - rollback send queue wqe
+ * @param sq: send queue
+ * @param wqebb_cnt: wqebb counter
+ * @param owner: owner bit
+ */
+static inline void hinic3_rollback_sq_wqebbs(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt, u16 owner)
+{
+ if (owner != sq->owner)
+ sq->owner = (u8)owner;
+ sq->wq.prod_idx -= wqebb_cnt;
+}
+
+/* *
+ * @brief hinic3_rq_wqe_addr - get receive queue wqe address by queue index
+ * @param rq: receive queue
+ * @param idx: wq index
+ * @retval: wqe base address
+ */
+static inline void *hinic3_rq_wqe_addr(struct hinic3_io_queue *rq, u16 idx)
+{
+ return hinic3_wq_wqebb_addr(&rq->wq, idx);
+}
+
+/* *
+ * @brief hinic3_update_rq_hw_pi - update receive queue hardware pi
+ * @param rq: receive queue
+ * @param pi: pi
+ */
+static inline void hinic3_update_rq_hw_pi(struct hinic3_io_queue *rq, u16 pi)
+{
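+	/* hardware reads the PI in big-endian, scaled to basic 8-byte WQEBB units by the wqe type */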
+ *rq->rx.pi_virt_addr = cpu_to_be16((pi & rq->wq.idx_mask) <<
+ rq->wqe_type);
+}
+
+/* *
+ * @brief hinic3_update_rq_local_ci - update receive queue local consumer index
+ * @param rq: receive queue
+ * @param wqebb_cnt: number of wqebbs
+ */
+static inline void hinic3_update_rq_local_ci(struct hinic3_io_queue *rq,
+ u16 wqebb_cnt)
+{
+ hinic3_wq_put_wqebbs(&rq->wq, wqebb_cnt);
+}
+
+/* *
+ * @brief hinic3_get_rq_local_ci - get receive queue local ci
+ * @param rq: receive queue
+ * @retval: receive queue local ci
+ */
+static inline u16 hinic3_get_rq_local_ci(const struct hinic3_io_queue *rq)
+{
+ return WQ_MASK_IDX(&rq->wq, rq->wq.cons_idx);
+}
+
+/* *
+ * @brief hinic3_get_rq_local_pi - get receive queue local pi
+ * @param rq: receive queue
+ * @retval: receive queue local pi
+ */
+static inline u16 hinic3_get_rq_local_pi(const struct hinic3_io_queue *rq)
+{
+ return WQ_MASK_IDX(&rq->wq, rq->wq.prod_idx);
+}
+
+/* ******************** DB INFO ******************** */
+#define DB_INFO_QID_SHIFT 0
+#define DB_INFO_NON_FILTER_SHIFT 22
+#define DB_INFO_CFLAG_SHIFT 23
+#define DB_INFO_COS_SHIFT 24
+#define DB_INFO_TYPE_SHIFT 27
+
+#define DB_INFO_QID_MASK 0x1FFFU
+#define DB_INFO_NON_FILTER_MASK 0x1U
+#define DB_INFO_CFLAG_MASK 0x1U
+#define DB_INFO_COS_MASK 0x7U
+#define DB_INFO_TYPE_MASK 0x1FU
+#define DB_INFO_SET(val, member) \
+ (((u32)(val) & DB_INFO_##member##_MASK) << \
+ DB_INFO_##member##_SHIFT)
+
+#define DB_PI_LOW_MASK 0xFFU
+#define DB_PI_HIGH_MASK 0xFFU
+#define DB_PI_LOW(pi) ((pi) & DB_PI_LOW_MASK)
+#define DB_PI_HI_SHIFT 8
+#define DB_PI_HIGH(pi) (((pi) >> DB_PI_HI_SHIFT) & DB_PI_HIGH_MASK)
+#define DB_ADDR(queue, pi) ((u64 *)((queue)->db_addr) + DB_PI_LOW(pi))
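+/* the low 8 bits of the PI select the 8-byte doorbell slot within the queue's
+ * doorbell page; the high 8 bits travel in the doorbell payload (pi_hi)
+ */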
+#define SRC_TYPE 1
+
+/* CFLAG_DATA_PATH */
+#define SQ_CFLAG_DP 0
+#define RQ_CFLAG_DP 1
+/* *
+ * @brief hinic3_write_db - write doorbell
+ * @param queue: nic io queue
+ * @param cos: cos index
+ * @param cflag: 0--sq, 1--rq
+ * @param pi: product index
+ */
+static inline void hinic3_write_db(struct hinic3_io_queue *queue, int cos,
+ u8 cflag, u16 pi)
+{
+ struct hinic3_nic_db db;
+
+ db.db_info = DB_INFO_SET(SRC_TYPE, TYPE) | DB_INFO_SET(cflag, CFLAG) |
+ DB_INFO_SET(cos, COS) | DB_INFO_SET(queue->q_id, QID);
+ db.pi_hi = DB_PI_HIGH(pi);
+ /* Data should be written to HW in Big Endian Format */
+ db.db_info = hinic3_hw_be32(db.db_info);
+ db.pi_hi = hinic3_hw_be32(db.pi_hi);
+
+ wmb(); /* Write all before the doorbell */
+
+ writeq(*((u64 *)(u8 *)&db), DB_ADDR(queue, pi));
+}
+
+struct hinic3_dyna_qp_params {
+ u16 num_qps;
+ u32 sq_depth;
+ u32 rq_depth;
+
+ struct hinic3_io_queue *sqs;
+ struct hinic3_io_queue *rqs;
+};
+
+int hinic3_alloc_qps(void *hwdev, struct irq_info *qps_msix_arry,
+ struct hinic3_dyna_qp_params *qp_params);
+void hinic3_free_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params);
+int hinic3_init_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params);
+void hinic3_deinit_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params);
+int hinic3_init_nicio_res(void *hwdev);
+void hinic3_deinit_nicio_res(void *hwdev);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c
new file mode 100644
index 0000000..9ea93a0
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+
+#include "ossl_knl.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_profile.h"
+#include "hinic3_nic_prof.h"
+
+static bool is_match_nic_prof_default_adapter(void *device)
+{
+ /* always match default profile adapter in standard scene */
+ return true;
+}
+
+struct hinic3_prof_adapter nic_prof_adap_objs[] = {
+ /* Add prof adapter before default profile */
+ {
+ .type = PROF_ADAP_TYPE_DEFAULT,
+ .match = is_match_nic_prof_default_adapter,
+ .init = NULL,
+ .deinit = NULL,
+ },
+};
+
+void hinic3_init_nic_prof_adapter(struct hinic3_nic_dev *nic_dev)
+{
+ int num_adap = ARRAY_LEN(nic_prof_adap_objs);
+
+ nic_dev->prof_adap = hinic3_prof_init(nic_dev, nic_prof_adap_objs, num_adap,
+ (void *)&nic_dev->prof_attr);
+ if (nic_dev->prof_adap)
+ nic_info(&nic_dev->pdev->dev, "Find profile adapter type: %d\n",
+ nic_dev->prof_adap->type);
+}
+
+void hinic3_deinit_nic_prof_adapter(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_prof_deinit(nic_dev->prof_adap, nic_dev->prof_attr);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h
new file mode 100644
index 0000000..3c279e7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_PROF_H
+#define HINIC3_NIC_PROF_H
+#include <linux/socket.h>
+
+#include <linux/types.h>
+
+#include "hinic3_nic_cfg.h"
+
+struct hinic3_nic_prof_attr {
+ void *priv_data;
+ char netdev_name[IFNAMSIZ];
+};
+
+struct hinic3_nic_dev;
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline char *hinic3_get_dft_netdev_name_fmt(struct hinic3_nic_dev *nic_dev)
+{
+ if (nic_dev->prof_attr)
+ return nic_dev->prof_attr->netdev_name;
+
+ return NULL;
+}
+
+#ifdef CONFIG_MODULE_PROF
+int hinic3_set_master_dev_state(struct hinic3_nic_dev *nic_dev, u32 flag);
+u32 hinic3_get_link(struct net_device *dev);
+int hinic3_config_port_mtu(struct hinic3_nic_dev *nic_dev, u32 mtu);
+int hinic3_config_port_mac(struct hinic3_nic_dev *nic_dev, struct sockaddr *saddr);
+#else
+static inline int hinic3_set_master_dev_state(struct hinic3_nic_dev *nic_dev, u32 flag)
+{
+ return 0;
+}
+
+static inline int hinic3_config_port_mtu(struct hinic3_nic_dev *nic_dev, u32 mtu)
+{
+ return hinic3_set_port_mtu(nic_dev->hwdev, (u16)mtu);
+}
+
+static inline int hinic3_config_port_mac(struct hinic3_nic_dev *nic_dev, struct sockaddr *saddr)
+{
+ return hinic3_update_mac(nic_dev->hwdev, nic_dev->netdev->dev_addr, saddr->sa_data, 0,
+ hinic3_global_func_id(nic_dev->hwdev));
+}
+
+#endif
+
+void hinic3_init_nic_prof_adapter(struct hinic3_nic_dev *nic_dev);
+void hinic3_deinit_nic_prof_adapter(struct hinic3_nic_dev *nic_dev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h
new file mode 100644
index 0000000..f492c5d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h
@@ -0,0 +1,384 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_QP_H
+#define HINIC3_NIC_QP_H
+
+#include "hinic3_common.h"
+
+#define TX_MSS_DEFAULT 0x3E00
+#define TX_MSS_MIN 0x50
+
+#define HINIC3_MAX_SQ_SGE 18
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0
+#define RQ_CQE_OFFOLAD_TYPE_IP_TYPE_SHIFT 5
+#define RQ_CQE_OFFOLAD_TYPE_ENC_L3_TYPE_SHIFT 7
+#define RQ_CQE_OFFOLAD_TYPE_TUNNEL_PKT_FORMAT_SHIFT 8
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0x1FU
+#define RQ_CQE_OFFOLAD_TYPE_IP_TYPE_MASK 0x3U
+#define RQ_CQE_OFFOLAD_TYPE_ENC_L3_TYPE_MASK 0x1U
+#define RQ_CQE_OFFOLAD_TYPE_TUNNEL_PKT_FORMAT_MASK 0xFU
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU
+
+#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) \
+ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \
+ RQ_CQE_OFFOLAD_TYPE_##member##_MASK)
+
+#define HINIC3_GET_RX_PKT_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+#define HINIC3_GET_RX_IP_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, IP_TYPE)
+#define HINIC3_GET_RX_ENC_L3_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, ENC_L3_TYPE)
+#define HINIC3_GET_RX_TUNNEL_PKT_FORMAT(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, TUNNEL_PKT_FORMAT)
+
+#define HINIC3_GET_RX_PKT_UMBCAST(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_UMBCAST)
+
+#define HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN)
+
+#define HINIC3_GET_RSS_TYPES(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, RSS_TYPE)
+
+#define RQ_CQE_SGE_VLAN_SHIFT 0
+#define RQ_CQE_SGE_LEN_SHIFT 16
+
+#define RQ_CQE_SGE_VLAN_MASK 0xFFFFU
+#define RQ_CQE_SGE_LEN_MASK 0xFFFFU
+
+#define RQ_CQE_SGE_GET(val, member) \
+ (((val) >> RQ_CQE_SGE_##member##_SHIFT) & RQ_CQE_SGE_##member##_MASK)
+
+#define HINIC3_GET_RX_VLAN_TAG(vlan_len) RQ_CQE_SGE_GET(vlan_len, VLAN)
+
+#define HINIC3_GET_RX_PKT_LEN(vlan_len) RQ_CQE_SGE_GET(vlan_len, LEN)
+
+#define RQ_CQE_STATUS_CSUM_ERR_SHIFT 0
+#define RQ_CQE_STATUS_NUM_LRO_SHIFT 16
+#define RQ_CQE_STATUS_LRO_PUSH_SHIFT 25
+#define RQ_CQE_STATUS_LRO_ENTER_SHIFT 26
+#define RQ_CQE_STATUS_LRO_INTR_SHIFT 27
+
+#define RQ_CQE_STATUS_BP_EN_SHIFT 30
+#define RQ_CQE_STATUS_RXDONE_SHIFT 31
+#define RQ_CQE_STATUS_DECRY_PKT_SHIFT 29
+#define RQ_CQE_STATUS_FLUSH_SHIFT 28
+
+#define RQ_CQE_STATUS_CSUM_ERR_MASK 0xFFFFU
+#define RQ_CQE_STATUS_NUM_LRO_MASK 0xFFU
+#define RQ_CQE_STATUS_LRO_PUSH_MASK 0x1U
+#define RQ_CQE_STATUS_LRO_ENTER_MASK 0x1U
+#define RQ_CQE_STATUS_LRO_INTR_MASK 0x1U
+#define RQ_CQE_STATUS_BP_EN_MASK 0x1U
+#define RQ_CQE_STATUS_RXDONE_MASK 0x1U
+#define RQ_CQE_STATUS_FLUSH_MASK 0x1U
+#define RQ_CQE_STATUS_DECRY_PKT_MASK 0x1U
+
+#define RQ_CQE_STATUS_GET(val, member) \
+ (((val) >> RQ_CQE_STATUS_##member##_SHIFT) & \
+ RQ_CQE_STATUS_##member##_MASK)
+
+#define HINIC3_GET_RX_CSUM_ERR(status) RQ_CQE_STATUS_GET(status, CSUM_ERR)
+
+#define HINIC3_GET_RX_DONE(status) RQ_CQE_STATUS_GET(status, RXDONE)
+
+#define HINIC3_GET_RX_FLUSH(status) RQ_CQE_STATUS_GET(status, FLUSH)
+
+#define HINIC3_GET_RX_BP_EN(status) RQ_CQE_STATUS_GET(status, BP_EN)
+
+#define HINIC3_GET_RX_NUM_LRO(status) RQ_CQE_STATUS_GET(status, NUM_LRO)
+
+#define HINIC3_RX_IS_DECRY_PKT(status) RQ_CQE_STATUS_GET(status, DECRY_PKT)
+
+#define RQ_CQE_SUPER_CQE_EN_SHIFT 0
+#define RQ_CQE_PKT_NUM_SHIFT 1
+#define RQ_CQE_PKT_LAST_LEN_SHIFT 6
+#define RQ_CQE_PKT_FIRST_LEN_SHIFT 19
+
+#define RQ_CQE_SUPER_CQE_EN_MASK 0x1
+#define RQ_CQE_PKT_NUM_MASK 0x1FU
+#define RQ_CQE_PKT_FIRST_LEN_MASK 0x1FFFU
+#define RQ_CQE_PKT_LAST_LEN_MASK 0x1FFFU
+
+#define RQ_CQE_PKT_NUM_GET(val, member) \
+ (((val) >> RQ_CQE_PKT_##member##_SHIFT) & RQ_CQE_PKT_##member##_MASK)
+#define HINIC3_GET_RQ_CQE_PKT_NUM(pkt_info) RQ_CQE_PKT_NUM_GET(pkt_info, NUM)
+
+#define RQ_CQE_SUPER_CQE_EN_GET(val, member) \
+ (((val) >> RQ_CQE_##member##_SHIFT) & RQ_CQE_##member##_MASK)
+#define HINIC3_GET_SUPER_CQE_EN(pkt_info) \
+ RQ_CQE_SUPER_CQE_EN_GET(pkt_info, SUPER_CQE_EN)
+
+#define RQ_CQE_PKT_LEN_GET(val, member) \
+ (((val) >> RQ_CQE_PKT_##member##_SHIFT) & RQ_CQE_PKT_##member##_MASK)
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_SHIFT 8
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_SHIFT 0
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_MASK 0xFFU
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_MASK 0xFFU
+
+#define RQ_CQE_DECRY_INFO_GET(val, member) \
+ (((val) >> RQ_CQE_DECRY_INFO_##member##_SHIFT) & \
+ RQ_CQE_DECRY_INFO_##member##_MASK)
+
+#define HINIC3_GET_DECRYPT_STATUS(decry_info) \
+ RQ_CQE_DECRY_INFO_GET(decry_info, DECRY_STATUS)
+
+#define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \
+ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD)
+
+struct hinic3_rq_cqe {
+ u32 status;
+ u32 vlan_len;
+
+ u32 offload_type;
+ u32 hash_val;
+ u32 xid;
+ u32 decrypt_info;
+ u32 rsvd6;
+ u32 pkt_info;
+};
+
+struct hinic3_sge_sect {
+ struct hinic3_sge sge;
+ u32 rsvd;
+};
+
+struct hinic3_rq_extend_wqe {
+ struct hinic3_sge_sect buf_desc;
+ struct hinic3_sge_sect cqe_sect;
+};
+
+struct hinic3_rq_normal_wqe {
+ u32 buf_hi_addr;
+ u32 buf_lo_addr;
+ u32 cqe_hi_addr;
+ u32 cqe_lo_addr;
+};
+
+struct hinic3_rq_wqe {
+ union {
+ struct hinic3_rq_normal_wqe normal_wqe;
+ struct hinic3_rq_extend_wqe extend_wqe;
+ };
+};
+
+struct hinic3_sq_wqe_desc {
+ u32 ctrl_len;
+ u32 queue_info;
+ u32 hi_addr;
+ u32 lo_addr;
+};
+
+/* The engine passes only the first 12 bytes of the TS field directly to the
+ * microcode through metadata. vlan_offload is used by hardware when a VLAN
+ * tag is inserted on TX.
+ */
+struct hinic3_sq_task {
+ u32 pkt_info0;
+ u32 ip_identify;
+ u32 pkt_info2; /* ipsec used as spi */
+ u32 vlan_offload;
+};
+
+struct hinic3_sq_bufdesc {
+ u32 len; /* 31-bits Length, L2NIC only use length[17:0] */
+ u32 rsvd;
+ u32 hi_addr;
+ u32 lo_addr;
+};
+
+struct hinic3_sq_compact_wqe {
+ struct hinic3_sq_wqe_desc wqe_desc;
+};
+
+struct hinic3_sq_extend_wqe {
+ struct hinic3_sq_wqe_desc wqe_desc;
+ struct hinic3_sq_task task;
+ struct hinic3_sq_bufdesc buf_desc[0];
+};
+
+struct hinic3_sq_wqe {
+ union {
+ struct hinic3_sq_compact_wqe compact_wqe;
+ struct hinic3_sq_extend_wqe extend_wqe;
+ };
+};
+
+/* use section pointers to support a non-contiguous wqe */
+struct hinic3_sq_wqe_combo {
+ struct hinic3_sq_wqe_desc *ctrl_bd0;
+ struct hinic3_sq_task *task;
+ struct hinic3_sq_bufdesc *bds_head;
+ struct hinic3_sq_bufdesc *bds_sec2;
+ u16 first_bds_num;
+ u32 wqe_type;
+ u32 task_type;
+};
+
+/* ************* SQ_CTRL ************** */
+enum sq_wqe_data_format {
+ SQ_NORMAL_WQE = 0,
+};
+
+enum sq_wqe_ec_type {
+ SQ_WQE_COMPACT_TYPE = 0,
+ SQ_WQE_EXTENDED_TYPE = 1,
+};
+
+enum sq_wqe_tasksect_len_type {
+ SQ_WQE_TASKSECT_46BITS = 0,
+ SQ_WQE_TASKSECT_16BYTES = 1,
+};
+
+#define SQ_CTRL_BD0_LEN_SHIFT 0
+#define SQ_CTRL_RSVD_SHIFT 18
+#define SQ_CTRL_BUFDESC_NUM_SHIFT 19
+#define SQ_CTRL_TASKSECT_LEN_SHIFT 27
+#define SQ_CTRL_DATA_FORMAT_SHIFT 28
+#define SQ_CTRL_DIRECT_SHIFT 29
+#define SQ_CTRL_EXTENDED_SHIFT 30
+#define SQ_CTRL_OWNER_SHIFT 31
+
+#define SQ_CTRL_BD0_LEN_MASK 0x3FFFFU
+#define SQ_CTRL_RSVD_MASK 0x1U
+#define SQ_CTRL_BUFDESC_NUM_MASK 0xFFU
+#define SQ_CTRL_TASKSECT_LEN_MASK 0x1U
+#define SQ_CTRL_DATA_FORMAT_MASK 0x1U
+#define SQ_CTRL_DIRECT_MASK 0x1U
+#define SQ_CTRL_EXTENDED_MASK 0x1U
+#define SQ_CTRL_OWNER_MASK 0x1U
+
+#define SQ_CTRL_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_##member##_MASK) << SQ_CTRL_##member##_SHIFT)
+
+#define SQ_CTRL_GET(val, member) \
+ (((val) >> SQ_CTRL_##member##_SHIFT) & SQ_CTRL_##member##_MASK)
+
+#define SQ_CTRL_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_##member##_MASK << SQ_CTRL_##member##_SHIFT)))
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_SHIFT 0
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_SHIFT 2
+#define SQ_CTRL_QUEUE_INFO_UFO_SHIFT 10
+#define SQ_CTRL_QUEUE_INFO_TSO_SHIFT 11
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_SHIFT 12
+#define SQ_CTRL_QUEUE_INFO_MSS_SHIFT 13
+#define SQ_CTRL_QUEUE_INFO_SCTP_SHIFT 27
+#define SQ_CTRL_QUEUE_INFO_UC_SHIFT 28
+#define SQ_CTRL_QUEUE_INFO_PRI_SHIFT 29
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_MASK 0x3U
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_MASK 0xFFU
+#define SQ_CTRL_QUEUE_INFO_UFO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TSO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_MSS_MASK 0x3FFFU
+#define SQ_CTRL_QUEUE_INFO_SCTP_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_UC_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_PRI_MASK 0x7U
+
+#define SQ_CTRL_QUEUE_INFO_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_QUEUE_INFO_##member##_MASK) << \
+ SQ_CTRL_QUEUE_INFO_##member##_SHIFT)
+
+#define SQ_CTRL_QUEUE_INFO_GET(val, member) \
+ (((val) >> SQ_CTRL_QUEUE_INFO_##member##_SHIFT) & \
+ SQ_CTRL_QUEUE_INFO_##member##_MASK)
+
+#define SQ_CTRL_QUEUE_INFO_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK << \
+ SQ_CTRL_QUEUE_INFO_##member##_SHIFT)))
+
+#define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22
+#define SQ_TASK_INFO0_INNER_L4_EN_SHIFT 24
+#define SQ_TASK_INFO0_INNER_L3_EN_SHIFT 25
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_SHIFT 26
+#define SQ_TASK_INFO0_OUT_L4_EN_SHIFT 27
+#define SQ_TASK_INFO0_OUT_L3_EN_SHIFT 28
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_SHIFT 29
+#define SQ_TASK_INFO0_ESP_OFFLOAD_SHIFT 30
+#define SQ_TASK_INFO0_IPSEC_PROTO_SHIFT 31
+
+#define SQ_TASK_INFO0_TUNNEL_FLAG_MASK 0x1U
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_MASK 0x3U
+#define SQ_TASK_INFO0_INNER_L4_EN_MASK 0x1U
+#define SQ_TASK_INFO0_INNER_L3_EN_MASK 0x1U
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_EN_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L3_EN_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_ESP_OFFLOAD_MASK 0x1U
+#define SQ_TASK_INFO0_IPSEC_PROTO_MASK 0x1U
+
+#define SQ_TASK_INFO0_SET(val, member) \
+ (((u32)(val) & SQ_TASK_INFO0_##member##_MASK) << \
+ SQ_TASK_INFO0_##member##_SHIFT)
+#define SQ_TASK_INFO0_GET(val, member) \
+ (((val) >> SQ_TASK_INFO0_##member##_SHIFT) & \
+ SQ_TASK_INFO0_##member##_MASK)
+
+#define SQ_TASK_INFO1_SET(val, member) \
+ (((val) & SQ_TASK_INFO1_##member##_MASK) << \
+ SQ_TASK_INFO1_##member##_SHIFT)
+#define SQ_TASK_INFO1_GET(val, member) \
+ (((val) >> SQ_TASK_INFO1_##member##_SHIFT) & \
+ SQ_TASK_INFO1_##member##_MASK)
+
+#define SQ_TASK_INFO3_VLAN_TAG_SHIFT 0
+#define SQ_TASK_INFO3_VLAN_TYPE_SHIFT 16
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_SHIFT 19
+
+#define SQ_TASK_INFO3_VLAN_TAG_MASK 0xFFFFU
+#define SQ_TASK_INFO3_VLAN_TYPE_MASK 0x7U
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_MASK 0x1U
+
+#define SQ_TASK_INFO3_SET(val, member) \
+ (((val) & SQ_TASK_INFO3_##member##_MASK) << \
+ SQ_TASK_INFO3_##member##_SHIFT)
+#define SQ_TASK_INFO3_GET(val, member) \
+ (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \
+ SQ_TASK_INFO3_##member##_MASK)
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline u32 hinic3_get_pkt_len_for_super_cqe(const struct hinic3_rq_cqe *cqe,
+ bool last)
+{
+ u32 pkt_len = hinic3_hw_cpu32(cqe->pkt_info);
+
+ if (!last)
+ return RQ_CQE_PKT_LEN_GET(pkt_len, FIRST_LEN);
+ else
+ return RQ_CQE_PKT_LEN_GET(pkt_len, LAST_LEN);
+}
+
+/**
+ * hinic3_set_vlan_tx_offload - set vlan offload info
+ * @task: wqe task section
+ * @vlan_tag: vlan tag
+ * @vlan_type: 0--select TPID0 in IPSU, 1--select TPID1 in IPSU,
+ * 2--select TPID2 in IPSU, 3--select TPID3 in IPSU, 4--select TPID4 in IPSU
+ */
+static inline void hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task,
+ u16 vlan_tag, u8 vlan_type)
+{
+ task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) |
+ SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) |
+ SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID);
+}
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c
new file mode 100644
index 0000000..6d9b0c1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c
@@ -0,0 +1,909 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/ethtool.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_nic_dev.h"
+
+#define MAX_NUM_OF_ETHTOOL_NTUPLE_RULES BIT(9)
+struct hinic3_ethtool_rx_flow_rule {
+ struct list_head list;
+ struct ethtool_rx_flow_spec flow_spec;
+};
+
+static void tcam_translate_key_y(u8 *key_y, const u8 *src_input, const u8 *mask, u8 len)
+{
+ u8 idx;
+
+ for (idx = 0; idx < len; idx++)
+ key_y[idx] = src_input[idx] & mask[idx];
+}
+
+static void tcam_translate_key_x(u8 *key_x, const u8 *key_y, const u8 *mask, u8 len)
+{
+ u8 idx;
+
+ for (idx = 0; idx < len; idx++)
+ key_x[idx] = key_y[idx] ^ mask[idx];
+}
+
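+/* Build the x/y TCAM key pair from the parsed key and mask: y is the masked
+ * key value (key & mask) and x is y XOR'ed with the mask; both halves are
+ * written into the TCAM rule that is sent to hardware.
+ */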
+static void tcam_key_calculate(struct tag_tcam_key *tcam_key,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule)
+{
+ tcam_translate_key_y(fdir_tcam_rule->key.y,
+ (u8 *)(&tcam_key->key_info),
+ (u8 *)(&tcam_key->key_mask), TCAM_FLOW_KEY_SIZE);
+ tcam_translate_key_x(fdir_tcam_rule->key.x, fdir_tcam_rule->key.y,
+ (u8 *)(&tcam_key->key_mask), TCAM_FLOW_KEY_SIZE);
+}
+
+#define TCAM_IPV4_TYPE 0
+#define TCAM_IPV6_TYPE 1
+
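+/* Parse the IPv4 src/dst addresses of an ethtool flow spec into the TCAM key.
+ * Only exact-match (all-ones) or wildcard (all-zero) address masks are
+ * accepted; any other mask is rejected with -EINVAL.
+ */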
+static int hinic3_base_ipv4_parse(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip4_spec *mask = &fs->m_u.tcp_ip4_spec;
+ struct ethtool_tcpip4_spec *val = &fs->h_u.tcp_ip4_spec;
+ u32 temp;
+
+ switch (mask->ip4src) {
+ case U32_MAX:
+ temp = ntohl(val->ip4src);
+ tcam_key->key_info.sipv4_h = high_16_bits(temp);
+ tcam_key->key_info.sipv4_l = low_16_bits(temp);
+
+ tcam_key->key_mask.sipv4_h = U16_MAX;
+ tcam_key->key_mask.sipv4_l = U16_MAX;
+ break;
+ case 0:
+ break;
+
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev, "invalid src_ip mask\n");
+ return -EINVAL;
+ }
+
+ switch (mask->ip4dst) {
+ case U32_MAX:
+ temp = ntohl(val->ip4dst);
+ tcam_key->key_info.dipv4_h = high_16_bits(temp);
+ tcam_key->key_info.dipv4_l = low_16_bits(temp);
+
+ tcam_key->key_mask.dipv4_h = U16_MAX;
+ tcam_key->key_mask.dipv4_l = U16_MAX;
+ break;
+ case 0:
+ break;
+
+ default:
+		nicif_err(nic_dev, drv, nic_dev->netdev, "invalid dst_ip mask\n");
+ return -EINVAL;
+ }
+
+ tcam_key->key_info.ip_type = TCAM_IPV4_TYPE;
+ tcam_key->key_mask.ip_type = TCAM_IP_TYPE_MASK;
+
+ tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev);
+ tcam_key->key_mask.function_id = TCAM_FUNC_ID_MASK;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv4_l4_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip4_spec *l4_mask = &fs->m_u.tcp_ip4_spec;
+ struct ethtool_tcpip4_spec *l4_val = &fs->h_u.tcp_ip4_spec;
+ int err;
+
+ err = hinic3_base_ipv4_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info.dport = ntohs(l4_val->pdst);
+ tcam_key->key_mask.dport = l4_mask->pdst;
+
+ tcam_key->key_info.sport = ntohs(l4_val->psrc);
+ tcam_key->key_mask.sport = l4_mask->psrc;
+
+ if (fs->flow_type == TCP_V4_FLOW)
+ tcam_key->key_info.ip_proto = IPPROTO_TCP;
+ else
+ tcam_key->key_info.ip_proto = IPPROTO_UDP;
+ tcam_key->key_mask.ip_proto = U8_MAX;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv4_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_usrip4_spec *l3_mask = &fs->m_u.usr_ip4_spec;
+ struct ethtool_usrip4_spec *l3_val = &fs->h_u.usr_ip4_spec;
+ int err;
+
+ err = hinic3_base_ipv4_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info.ip_proto = l3_val->proto;
+ tcam_key->key_mask.ip_proto = l3_mask->proto;
+
+ return 0;
+}
+
+#ifndef UNSUPPORT_NTUPLE_IPV6
+enum ipv6_parse_res {
+ IPV6_MASK_INVALID,
+ IPV6_MASK_ALL_MASK,
+ IPV6_MASK_ALL_ZERO,
+};
+
+enum ipv6_index {
+ IPV6_IDX0,
+ IPV6_IDX1,
+ IPV6_IDX2,
+ IPV6_IDX3,
+};
+
+static int ipv6_mask_parse(const u32 *ipv6_mask)
+{
+ if (ipv6_mask[IPV6_IDX0] == 0 && ipv6_mask[IPV6_IDX1] == 0 &&
+ ipv6_mask[IPV6_IDX2] == 0 && ipv6_mask[IPV6_IDX3] == 0)
+ return IPV6_MASK_ALL_ZERO;
+
+ if (ipv6_mask[IPV6_IDX0] == U32_MAX &&
+ ipv6_mask[IPV6_IDX1] == U32_MAX &&
+ ipv6_mask[IPV6_IDX2] == U32_MAX && ipv6_mask[IPV6_IDX3] == U32_MAX)
+ return IPV6_MASK_ALL_MASK;
+
+ return IPV6_MASK_INVALID;
+}
+
+static int hinic3_base_ipv6_parse(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip6_spec *mask = &fs->m_u.tcp_ip6_spec;
+ struct ethtool_tcpip6_spec *val = &fs->h_u.tcp_ip6_spec;
+ int parse_res;
+ u32 temp;
+
+ parse_res = ipv6_mask_parse((u32 *)mask->ip6src);
+ if (parse_res == IPV6_MASK_ALL_MASK) {
+ temp = ntohl(val->ip6src[IPV6_IDX0]);
+ tcam_key->key_info_ipv6.sipv6_key0 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key1 = low_16_bits(temp);
+ temp = ntohl(val->ip6src[IPV6_IDX1]);
+ tcam_key->key_info_ipv6.sipv6_key2 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key3 = low_16_bits(temp);
+ temp = ntohl(val->ip6src[IPV6_IDX2]);
+ tcam_key->key_info_ipv6.sipv6_key4 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key5 = low_16_bits(temp);
+ temp = ntohl(val->ip6src[IPV6_IDX3]);
+ tcam_key->key_info_ipv6.sipv6_key6 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key7 = low_16_bits(temp);
+
+ tcam_key->key_mask_ipv6.sipv6_key0 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key1 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key2 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key3 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key4 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key5 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key6 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key7 = U16_MAX;
+ } else if (parse_res == IPV6_MASK_INVALID) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "invalid src_ipv6 mask\n");
+ return -EINVAL;
+ }
+
+ parse_res = ipv6_mask_parse((u32 *)mask->ip6dst);
+ if (parse_res == IPV6_MASK_ALL_MASK) {
+ temp = ntohl(val->ip6dst[IPV6_IDX0]);
+ tcam_key->key_info_ipv6.dipv6_key0 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key1 = low_16_bits(temp);
+ temp = ntohl(val->ip6dst[IPV6_IDX1]);
+ tcam_key->key_info_ipv6.dipv6_key2 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key3 = low_16_bits(temp);
+ temp = ntohl(val->ip6dst[IPV6_IDX2]);
+ tcam_key->key_info_ipv6.dipv6_key4 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key5 = low_16_bits(temp);
+ temp = ntohl(val->ip6dst[IPV6_IDX3]);
+ tcam_key->key_info_ipv6.dipv6_key6 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key7 = low_16_bits(temp);
+
+ tcam_key->key_mask_ipv6.dipv6_key0 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key1 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key2 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key3 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key4 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key5 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key6 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key7 = U16_MAX;
+ } else if (parse_res == IPV6_MASK_INVALID) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "invalid dst_ipv6 mask\n");
+ return -EINVAL;
+ }
+
+ tcam_key->key_info_ipv6.ip_type = TCAM_IPV6_TYPE;
+ tcam_key->key_mask_ipv6.ip_type = TCAM_IP_TYPE_MASK;
+
+ tcam_key->key_info_ipv6.function_id =
+ hinic3_global_func_id(nic_dev->hwdev);
+ tcam_key->key_mask_ipv6.function_id = TCAM_FUNC_ID_MASK;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv6_l4_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip6_spec *l4_mask = &fs->m_u.tcp_ip6_spec;
+ struct ethtool_tcpip6_spec *l4_val = &fs->h_u.tcp_ip6_spec;
+ int err;
+
+ err = hinic3_base_ipv6_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info_ipv6.dport = ntohs(l4_val->pdst);
+ tcam_key->key_mask_ipv6.dport = l4_mask->pdst;
+
+ tcam_key->key_info_ipv6.sport = ntohs(l4_val->psrc);
+ tcam_key->key_mask_ipv6.sport = l4_mask->psrc;
+
+ if (fs->flow_type == TCP_V6_FLOW)
+ tcam_key->key_info_ipv6.ip_proto = NEXTHDR_TCP;
+ else
+ tcam_key->key_info_ipv6.ip_proto = NEXTHDR_UDP;
+ tcam_key->key_mask_ipv6.ip_proto = U8_MAX;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv6_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_usrip6_spec *l3_mask = &fs->m_u.usr_ip6_spec;
+ struct ethtool_usrip6_spec *l3_val = &fs->h_u.usr_ip6_spec;
+ int err;
+
+ err = hinic3_base_ipv6_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info_ipv6.ip_proto = l3_val->l4_proto;
+ tcam_key->key_mask_ipv6.ip_proto = l3_mask->l4_proto;
+
+ return 0;
+}
+#endif
+
+static int hinic3_fdir_tcam_info_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule)
+{
+ int err;
+
+ switch (fs->flow_type) {
+ case TCP_V4_FLOW:
+ case UDP_V4_FLOW:
+ err = hinic3_fdir_tcam_ipv4_l4_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+ case IP_USER_FLOW:
+ err = hinic3_fdir_tcam_ipv4_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+#ifndef UNSUPPORT_NTUPLE_IPV6
+ case TCP_V6_FLOW:
+ case UDP_V6_FLOW:
+ err = hinic3_fdir_tcam_ipv6_l4_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+ case IPV6_USER_FLOW:
+ err = hinic3_fdir_tcam_ipv6_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+#endif
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ tcam_key->key_info.tunnel_type = 0;
+ tcam_key->key_mask.tunnel_type = TCAM_TUNNEL_TYPE_MASK;
+
+ fdir_tcam_rule->data.qid = (u32)fs->ring_cookie;
+ tcam_key_calculate(tcam_key, fdir_tcam_rule);
+
+ return 0;
+}
+
+void hinic3_flush_rx_flow_rule(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule_tmp = NULL;
+ struct hinic3_tcam_filter *tcam_iter = NULL;
+ struct hinic3_tcam_filter *tcam_iter_tmp = NULL;
+ struct hinic3_tcam_dynamic_block *block = NULL;
+ struct hinic3_tcam_dynamic_block *block_tmp = NULL;
+ struct list_head *dynamic_list =
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list;
+
+ if (!list_empty(&tcam_info->tcam_list)) {
+ list_for_each_entry_safe(tcam_iter, tcam_iter_tmp,
+ &tcam_info->tcam_list,
+ tcam_filter_list) {
+ list_del(&tcam_iter->tcam_filter_list);
+ kfree(tcam_iter);
+ }
+ }
+ if (!list_empty(dynamic_list)) {
+ list_for_each_entry_safe(block, block_tmp, dynamic_list,
+ block_list) {
+ list_del(&block->block_list);
+ kfree(block);
+ }
+ }
+
+ if (!list_empty(&nic_dev->rx_flow_rule.rules)) {
+ list_for_each_entry_safe(eth_rule, eth_rule_tmp,
+ &nic_dev->rx_flow_rule.rules, list) {
+ list_del(ð_rule->list);
+ kfree(eth_rule);
+ }
+ }
+
+ if (HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ hinic3_flush_tcam_rule(nic_dev->hwdev);
+ hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, false);
+ }
+}
+
+static struct hinic3_tcam_dynamic_block *
+hinic3_alloc_dynamic_block_resource(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tcam_info *tcam_info,
+ u16 dynamic_block_id)
+{
+ struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+
+ dynamic_block_ptr = kzalloc(sizeof(*dynamic_block_ptr), GFP_KERNEL);
+ if (!dynamic_block_ptr) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "fdir filter dynamic alloc block index %u memory failed\n",
+ dynamic_block_id);
+ return NULL;
+ }
+
+ dynamic_block_ptr->dynamic_block_id = dynamic_block_id;
+ list_add_tail(&dynamic_block_ptr->block_list,
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list);
+
+ tcam_info->tcam_dynamic_info.dynamic_block_cnt++;
+
+ return dynamic_block_ptr;
+}
+
+static void hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info,
+ struct hinic3_tcam_dynamic_block *block_ptr)
+{
+ if (!block_ptr)
+ return;
+
+ list_del(&block_ptr->block_list);
+ kfree(block_ptr);
+
+ tcam_info->tcam_dynamic_info.dynamic_block_cnt--;
+}
+
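+/* Pick the first dynamic block that still has a free entry, record the block
+ * id and local index in the filter, and translate the local index into the
+ * global TCAM rule index used for the hardware rule.
+ */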
+static struct hinic3_tcam_dynamic_block *
+hinic3_dynamic_lookup_tcam_filter(struct hinic3_nic_dev *nic_dev,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule,
+ const struct hinic3_tcam_info *tcam_info,
+ struct hinic3_tcam_filter *tcam_filter,
+ u16 *tcam_index)
+{
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u16 index;
+
+ list_for_each_entry(tmp,
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+ block_list)
+ if (!tmp ||
+ tmp->dynamic_index_cnt < HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)
+ break;
+
+ if (!tmp || tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter dynamic lookup for index failed\n");
+ return NULL;
+ }
+
+ for (index = 0; index < HINIC3_TCAM_DYNAMIC_BLOCK_SIZE; index++)
+ if (tmp->dynamic_index_used[index] == 0)
+ break;
+
+ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "tcam block 0x%x filter rules are full\n",
+ tmp->dynamic_block_id);
+ return NULL;
+ }
+
+ tcam_filter->dynamic_block_id = tmp->dynamic_block_id;
+ tcam_filter->index = index;
+ *tcam_index = index;
+
+ fdir_tcam_rule->index = index +
+ HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id);
+
+ return tmp;
+}
+
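+/* Install one TCAM rule: allocate a new dynamic block when all current blocks
+ * are full, find a free entry, program the rule, and enable TCAM rule
+ * filtering in hardware when the first rule is added.
+ */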
+static int hinic3_add_tcam_filter(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tcam_filter *tcam_filter,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u16 block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt;
+ u16 tcam_block_index = 0;
+ int block_alloc_flag = 0;
+ u16 index = 0;
+ int err;
+
+ if (tcam_info->tcam_rule_nums >=
+ block_cnt * HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+ if (block_cnt >= (HINIC3_MAX_TCAM_FILTERS /
+ HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Dynamic tcam block is full, alloc failed\n");
+ goto failed;
+ }
+
+ err = hinic3_alloc_tcam_block(nic_dev->hwdev,
+ &tcam_block_index);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter dynamic tcam alloc block failed\n");
+ goto failed;
+ }
+
+ block_alloc_flag = 1;
+
+ dynamic_block_ptr =
+ hinic3_alloc_dynamic_block_resource(nic_dev, tcam_info,
+ tcam_block_index);
+ if (!dynamic_block_ptr) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter dynamic alloc block memory failed\n");
+ goto block_alloc_failed;
+ }
+ }
+
+ tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev,
+ fdir_tcam_rule, tcam_info,
+ tcam_filter, &index);
+ if (!tmp) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Dynamic lookup tcam filter failed\n");
+ goto lookup_tcam_index_failed;
+ }
+
+ err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir_tcam_rule add failed\n");
+ goto add_tcam_rules_failed;
+ }
+
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Add fdir tcam rule, function_id: 0x%x, tcam_block_id: %u, local_index: %u, global_index: %u, queue: %u, tcam_rule_nums: %u succeed\n",
+ hinic3_global_func_id(nic_dev->hwdev),
+ tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index,
+ fdir_tcam_rule->data.qid, tcam_info->tcam_rule_nums + 1);
+
+ if (tcam_info->tcam_rule_nums == 0) {
+ err = hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, true);
+ if (err)
+ goto enable_failed;
+ }
+
+ list_add_tail(&tcam_filter->tcam_filter_list, &tcam_info->tcam_list);
+
+ tmp->dynamic_index_used[index] = 1;
+ tmp->dynamic_index_cnt++;
+
+ tcam_info->tcam_rule_nums++;
+
+ return 0;
+
+enable_failed:
+ hinic3_del_tcam_rule(nic_dev->hwdev, fdir_tcam_rule->index);
+
+add_tcam_rules_failed:
+lookup_tcam_index_failed:
+ if (block_alloc_flag == 1)
+ hinic3_free_dynamic_block_resource(tcam_info,
+ dynamic_block_ptr);
+
+block_alloc_failed:
+ if (block_alloc_flag == 1)
+ hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index);
+
+failed:
+ return -EFAULT;
+}
+
+static int hinic3_del_tcam_filter(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tcam_filter *tcam_filter)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ u16 dynamic_block_id = tcam_filter->dynamic_block_id;
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u32 index = 0;
+ int err;
+
+ list_for_each_entry(tmp,
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+ block_list) {
+ if (tmp->dynamic_block_id == dynamic_block_id)
+ break;
+ }
+ if (!tmp || tmp->dynamic_block_id != dynamic_block_id) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter del dynamic lookup for block failed\n");
+ return -EFAULT;
+ }
+
+ index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) +
+ tcam_filter->index;
+
+ err = hinic3_del_tcam_rule(nic_dev->hwdev, index);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "fdir_tcam_rule del failed\n");
+ return -EFAULT;
+ }
+
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Del fdir_tcam_dynamic_rule function_id: 0x%x, tcam_block_id: %u, local_index: %u, global_index: %u, local_rules_nums: %u, global_rule_nums: %u succeed\n",
+ hinic3_global_func_id(nic_dev->hwdev), dynamic_block_id,
+ tcam_filter->index, index, tmp->dynamic_index_cnt - 1,
+ tcam_info->tcam_rule_nums - 1);
+
+ tmp->dynamic_index_used[tcam_filter->index] = 0;
+ tmp->dynamic_index_cnt--;
+ tcam_info->tcam_rule_nums--;
+ if (tmp->dynamic_index_cnt == 0) {
+ hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id);
+ hinic3_free_dynamic_block_resource(tcam_info, tmp);
+ }
+
+ if (tcam_info->tcam_rule_nums == 0)
+ hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, false);
+
+ list_del(&tcam_filter->tcam_filter_list);
+ kfree(tcam_filter);
+
+ return 0;
+}
+
+static inline struct hinic3_tcam_filter *
+hinic3_tcam_filter_lookup(const struct list_head *filter_list,
+ struct tag_tcam_key *key)
+{
+ struct hinic3_tcam_filter *iter = NULL;
+
+ list_for_each_entry(iter, filter_list, tcam_filter_list) {
+ if (memcmp(key, &iter->tcam_key,
+ sizeof(struct tag_tcam_key)) == 0) {
+ return iter;
+ }
+ }
+
+ return NULL;
+}
+
+static void del_ethtool_rule(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_ethtool_rx_flow_rule *eth_rule)
+{
+ list_del(ð_rule->list);
+ nic_dev->rx_flow_rule.tot_num_rules--;
+
+ kfree(eth_rule);
+}
+
+static int hinic3_remove_one_rule(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_ethtool_rx_flow_rule *eth_rule)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ struct hinic3_tcam_filter *tcam_filter = NULL;
+ struct nic_tcam_cfg_rule fdir_tcam_rule;
+ struct tag_tcam_key tcam_key;
+ int err;
+
+ memset(&fdir_tcam_rule, 0, sizeof(fdir_tcam_rule));
+ memset(&tcam_key, 0, sizeof(tcam_key));
+
+ err = hinic3_fdir_tcam_info_init(nic_dev, ð_rule->flow_spec,
+ &tcam_key, &fdir_tcam_rule);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Init fdir info failed\n");
+ return err;
+ }
+
+ tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list,
+ &tcam_key);
+ if (!tcam_filter) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Filter does not exist\n");
+		return -ENOENT;
+ }
+
+ err = hinic3_del_tcam_filter(nic_dev, tcam_filter);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Delete tcam filter failed\n");
+ return err;
+ }
+
+ del_ethtool_rule(nic_dev, eth_rule);
+
+ return 0;
+}
+
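+/* Insert the new rule into the per-device list, keeping the list sorted by
+ * ethtool flow_spec location.
+ */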
+static void add_rule_to_list(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_ethtool_rx_flow_rule *rule)
+{
+ struct hinic3_ethtool_rx_flow_rule *iter = NULL;
+ struct list_head *head = &nic_dev->rx_flow_rule.rules;
+
+ list_for_each_entry(iter, &nic_dev->rx_flow_rule.rules, list) {
+ if (iter->flow_spec.location > rule->flow_spec.location)
+ break;
+ head = &iter->list;
+ }
+ nic_dev->rx_flow_rule.tot_num_rules++;
+ list_add(&rule->list, head);
+}
+
+static int hinic3_add_one_rule(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs)
+{
+ struct nic_tcam_cfg_rule fdir_tcam_rule;
+ struct tag_tcam_key tcam_key;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ struct hinic3_tcam_filter *tcam_filter = NULL;
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ int err;
+
+ memset(&fdir_tcam_rule, 0, sizeof(fdir_tcam_rule));
+ memset(&tcam_key, 0, sizeof(tcam_key));
+ err = hinic3_fdir_tcam_info_init(nic_dev, fs, &tcam_key,
+ &fdir_tcam_rule);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Init fdir info failed\n");
+ return err;
+ }
+
+ tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list,
+ &tcam_key);
+ if (tcam_filter) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Filter exists\n");
+ return -EEXIST;
+ }
+
+ tcam_filter = kzalloc(sizeof(*tcam_filter), GFP_KERNEL);
+ if (!tcam_filter)
+ return -ENOMEM;
+ memcpy(&tcam_filter->tcam_key,
+ &tcam_key, sizeof(struct tag_tcam_key));
+ tcam_filter->queue = (u16)fdir_tcam_rule.data.qid;
+
+ err = hinic3_add_tcam_filter(nic_dev, tcam_filter, &fdir_tcam_rule);
+ if (err)
+ goto add_tcam_filter_fail;
+
+	/* save the new rule filter in the driver's list */
+ eth_rule = kzalloc(sizeof(*eth_rule), GFP_KERNEL);
+ if (!eth_rule) {
+ err = -ENOMEM;
+ goto alloc_eth_rule_fail;
+ }
+
+ eth_rule->flow_spec = *fs;
+ add_rule_to_list(nic_dev, eth_rule);
+
+ return 0;
+
+alloc_eth_rule_fail:
+	/* hinic3_del_tcam_filter() frees tcam_filter on success, so do not
+	 * fall through to the kfree below.
+	 */
+	hinic3_del_tcam_filter(nic_dev, tcam_filter);
+	return err;
+add_tcam_filter_fail:
+	kfree(tcam_filter);
+	return err;
+}
+
+static struct hinic3_ethtool_rx_flow_rule *
+find_ethtool_rule(const struct hinic3_nic_dev *nic_dev, u32 location)
+{
+ struct hinic3_ethtool_rx_flow_rule *iter = NULL;
+
+ list_for_each_entry(iter, &nic_dev->rx_flow_rule.rules, list) {
+ if (iter->flow_spec.location == location)
+ return iter;
+ }
+ return NULL;
+}
+
+static int validate_flow(struct hinic3_nic_dev *nic_dev,
+ const struct ethtool_rx_flow_spec *fs)
+{
+ if (fs->location >= MAX_NUM_OF_ETHTOOL_NTUPLE_RULES) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "loc exceeds limit [0,%lu]\n",
+ MAX_NUM_OF_ETHTOOL_NTUPLE_RULES - 1);
+ return -EINVAL;
+ }
+
+ if (fs->ring_cookie >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "action is larger than queue number %u\n",
+ nic_dev->q_params.num_qps);
+ return -EINVAL;
+ }
+
+ switch (fs->flow_type) {
+ case TCP_V4_FLOW:
+ case UDP_V4_FLOW:
+ case IP_USER_FLOW:
+#ifndef UNSUPPORT_NTUPLE_IPV6
+ case TCP_V6_FLOW:
+ case UDP_V6_FLOW:
+ case IPV6_USER_FLOW:
+#endif
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev, "flow type is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+int hinic3_ethtool_flow_replace(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs)
+{
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ struct ethtool_rx_flow_spec flow_spec_temp;
+ int loc_exit_flag = 0;
+ int err;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ err = validate_flow(nic_dev, fs);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "flow is not valid %d\n", err);
+ return err;
+ }
+
+ eth_rule = find_ethtool_rule(nic_dev, fs->location);
+	/* when the location is the same, delete the old rule at that location. */
+ if (eth_rule) {
+ memcpy(&flow_spec_temp, ð_rule->flow_spec,
+ sizeof(struct ethtool_rx_flow_spec));
+ err = hinic3_remove_one_rule(nic_dev, eth_rule);
+ if (err)
+ return err;
+
+ loc_exit_flag = 1;
+ }
+
+ /* add new rule filter */
+ err = hinic3_add_one_rule(nic_dev, fs);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Add new rule filter failed\n");
+ if (loc_exit_flag)
+ hinic3_add_one_rule(nic_dev, &flow_spec_temp);
+
+ return -ENOENT;
+ }
+
+ return 0;
+}
+
+int hinic3_ethtool_flow_remove(struct hinic3_nic_dev *nic_dev, u32 location)
+{
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ int err;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (location >= MAX_NUM_OF_ETHTOOL_NTUPLE_RULES)
+ return -ENOSPC;
+
+ eth_rule = find_ethtool_rule(nic_dev, location);
+ if (!eth_rule)
+ return -ENOENT;
+
+ err = hinic3_remove_one_rule(nic_dev, eth_rule);
+
+ return err;
+}
+
+int hinic3_ethtool_get_flow(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 location)
+{
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (location >= MAX_NUM_OF_ETHTOOL_NTUPLE_RULES)
+ return -EINVAL;
+
+ list_for_each_entry(eth_rule, &nic_dev->rx_flow_rule.rules, list) {
+ if (eth_rule->flow_spec.location == location) {
+ info->fs = eth_rule->flow_spec;
+ return 0;
+ }
+ }
+
+ return -ENOENT;
+}
+
+int hinic3_ethtool_get_all_flows(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 *rule_locs)
+{
+ u32 idx = 0;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ info->data = MAX_NUM_OF_ETHTOOL_NTUPLE_RULES;
+ list_for_each_entry(eth_rule, &nic_dev->rx_flow_rule.rules, list)
+ rule_locs[idx++] = eth_rule->flow_spec.location;
+
+ return info->rule_cnt == idx ? 0 : -ENOENT;
+}
+
+bool hinic3_validate_channel_setting_in_ntuple(const struct hinic3_nic_dev *nic_dev, u32 q_num)
+{
+ struct hinic3_ethtool_rx_flow_rule *iter = NULL;
+
+ list_for_each_entry(iter, &nic_dev->rx_flow_rule.rules, list) {
+ if (iter->flow_spec.ring_cookie >= q_num) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "User defined filter %u assigns flow to queue %llu. Queue number %u is invalid\n",
+ iter->flow_spec.location, iter->flow_spec.ring_cookie, q_num);
+ return false;
+ }
+ }
+
+ return true;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
new file mode 100644
index 0000000..94acf61
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
@@ -0,0 +1,1003 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/ethtool.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/dcbnl.h>
+#include <linux/init.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_hw.h"
+#include "hinic3_rss.h"
+
+#include "vram_common.h"
+
+static u16 num_qps;
+module_param(num_qps, ushort, 0444);
+MODULE_PARM_DESC(num_qps, "Number of Queue Pairs (default=0)");
+
+#define MOD_PARA_VALIDATE_NUM_QPS(nic_dev, num_qps, out_qps) do { \
+ if ((num_qps) > (nic_dev)->max_qps) \
+ nic_warn(&(nic_dev)->pdev->dev, \
+ "Module Parameter %s value %u is out of range, " \
+ "Maximum value for the device: %u, using %u\n", \
+ #num_qps, num_qps, (nic_dev)->max_qps, \
+ (nic_dev)->max_qps); \
+ if ((num_qps) > (nic_dev)->max_qps) \
+ (out_qps) = (nic_dev)->max_qps; \
+ else if ((num_qps) > 0) \
+ (out_qps) = (num_qps); \
+} while (0)
+
+/* In rx, iq means cos */
+static u8 hinic3_get_iqmap_by_tc(const u8 *prio_tc, u8 num_iq, u8 tc)
+{
+ u8 i, map = 0;
+
+ for (i = 0; i < num_iq; i++) {
+ if (prio_tc[i] == tc)
+ map |= (u8)(1U << ((num_iq - 1) - i));
+ }
+
+ return map;
+}
+
+static u8 hinic3_get_tcid_by_rq(const u32 *indir_tbl, u8 num_tcs, u16 rq_id)
+{
+ u16 tc_group_size;
+ int i;
+ u8 temp_num_tcs = num_tcs;
+
+ if (!num_tcs)
+ temp_num_tcs = 1;
+
+ tc_group_size = NIC_RSS_INDIR_SIZE / temp_num_tcs;
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++) {
+ if (indir_tbl[i] == rq_id)
+ return (u8)(i / tc_group_size);
+ }
+
+ return 0xFF; /* Invalid TC */
+}
+
+static int hinic3_get_rq2iq_map(struct hinic3_nic_dev *nic_dev,
+ u16 num_rq, u8 num_tcs, u8 *prio_tc, u8 cos_num,
+ u32 *indir_tbl, u8 *map, u32 map_size)
+{
+ u16 qid;
+ u8 tc_id;
+ u8 temp_num_tcs = num_tcs;
+
+ if (!num_tcs)
+ temp_num_tcs = 1;
+
+ if (num_rq > map_size) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Rq number(%u) exceeds max map qid(%u)\n",
+ num_rq, map_size);
+ return -EINVAL;
+ }
+
+ if (cos_num < HINIC_NUM_IQ_PER_FUNC) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Cos number(%u) less than map qid(%d)\n",
+ cos_num, HINIC_NUM_IQ_PER_FUNC);
+ return -EINVAL;
+ }
+
+ for (qid = 0; qid < num_rq; qid++) {
+ tc_id = hinic3_get_tcid_by_rq(indir_tbl, temp_num_tcs, qid);
+ map[qid] = hinic3_get_iqmap_by_tc(prio_tc,
+ HINIC_NUM_IQ_PER_FUNC, tc_id);
+ }
+
+ return 0;
+}
+
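+/* Fill the RSS indirection table. Without DCB groups the queues are assigned
+ * round-robin over all qps; with groups, each equally sized slice of the
+ * table is mapped onto the queue range of one valid cos.
+ */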
+static void hinic3_fillout_indir_tbl(struct hinic3_nic_dev *nic_dev,
+ u8 group_num, u32 *indir)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ u16 k, group_size, start_qid = 0, cur_cos_qnum = 0;
+ u32 i = 0;
+ u8 j, cur_cos = 0, group = 0;
+ u8 valid_cos_map = hinic3_get_dev_valid_cos_map(nic_dev);
+
+ if (group_num == 0) {
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++)
+ indir[i] = i % nic_dev->q_params.num_qps;
+ } else {
+ group_size = NIC_RSS_INDIR_SIZE / group_num;
+
+ for (group = 0; group < group_num; group++) {
+ cur_cos = dcb->hw_dcb_cfg.default_cos;
+ for (j = 0; j < NIC_DCB_COS_MAX; j++) {
+ if ((BIT(j) & valid_cos_map) != 0) {
+ cur_cos = j;
+ valid_cos_map -= (u8)BIT(j);
+ break;
+ }
+ }
+
+ cur_cos_qnum = dcb->hw_dcb_cfg.cos_qp_num[cur_cos];
+ if (cur_cos_qnum > 0) {
+ start_qid =
+ dcb->hw_dcb_cfg.cos_qp_offset[cur_cos];
+ } else {
+ start_qid = cur_cos % nic_dev->q_params.num_qps;
+ /* Ensure that the offset of start_id is 0. */
+ cur_cos_qnum = 1;
+ }
+
+ for (k = 0; k < group_size; k++)
+ indir[i++] = start_qid + k % cur_cos_qnum;
+ }
+ }
+}
+
+int hinic3_rss_init(struct hinic3_nic_dev *nic_dev, u8 *rq2iq_map, u32 map_size, u8 dcb_en)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 i, group_num, cos_bitmap, group = 0;
+ u8 cos_group[NIC_DCB_UP_MAX] = {0};
+ int err;
+
+ if (dcb_en != 0) {
+ group_num = (u8)roundup_pow_of_two(
+ hinic3_get_dev_user_cos_num(nic_dev));
+
+ cos_bitmap = hinic3_get_dev_valid_cos_map(nic_dev);
+
+ for (i = 0; i < NIC_DCB_UP_MAX; i++) {
+ if ((BIT(i) & cos_bitmap) != 0)
+ cos_group[NIC_DCB_UP_MAX - i - 1] = group++;
+ else
+ cos_group[NIC_DCB_UP_MAX - i - 1] =
+ group_num - 1;
+ }
+ } else {
+ group_num = 0;
+ }
+
+ err = hinic3_set_hw_rss_parameters(netdev, 1, group_num,
+ cos_group, dcb_en);
+ if (err)
+ return err;
+
+ err = hinic3_get_rq2iq_map(nic_dev, nic_dev->q_params.num_qps,
+ group_num, cos_group, NIC_DCB_UP_MAX,
+ nic_dev->rss_indir, rq2iq_map, map_size);
+ if (err)
+ nicif_err(nic_dev, drv, netdev, "Failed to get rq map\n");
+ return err;
+}
+
+void hinic3_rss_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ u8 cos_map[NIC_DCB_UP_MAX] = {0};
+
+ hinic3_rss_cfg(nic_dev->hwdev, 0, 0, cos_map, 1);
+}
+
+void hinic3_init_rss_parameters(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ nic_dev->rss_hash_engine = HINIC3_RSS_HASH_ENGINE_TYPE_XOR;
+ nic_dev->rss_type.tcp_ipv6_ext = 1;
+ nic_dev->rss_type.ipv6_ext = 1;
+ nic_dev->rss_type.tcp_ipv6 = 1;
+ nic_dev->rss_type.ipv6 = 1;
+ nic_dev->rss_type.tcp_ipv4 = 1;
+ nic_dev->rss_type.ipv4 = 1;
+ nic_dev->rss_type.udp_ipv6 = 1;
+ nic_dev->rss_type.udp_ipv4 = 1;
+}
+
+void hinic3_clear_rss_config(struct hinic3_nic_dev *nic_dev)
+{
+ kfree(nic_dev->rss_hkey);
+ nic_dev->rss_hkey = NULL;
+
+ kfree(nic_dev->rss_indir);
+ nic_dev->rss_indir = NULL;
+}
+
+void hinic3_set_default_rss_indir(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ set_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags);
+}
+
+static void hinic3_maybe_reconfig_rss_indir(struct net_device *netdev, u8 dcb_en)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int i;
+
+	/* if dcb is enabled, the user cannot configure the rss indir table */
+ if (dcb_en) {
+ nicif_info(nic_dev, drv, netdev, "DCB is enabled, set default rss indir\n");
+ goto discard_user_rss_indir;
+ }
+
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++) {
+ if (nic_dev->rss_indir[i] >= nic_dev->q_params.num_qps)
+ goto discard_user_rss_indir;
+ }
+
+ return;
+
+discard_user_rss_indir:
+ hinic3_set_default_rss_indir(netdev);
+}
+
+#ifdef HAVE_HOT_REPLACE_FUNC
+bool partition_slave_doing_hotupgrade(void)
+{
+ return get_partition_role() && partition_doing_hotupgrade();
+}
+#endif
+
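+/* Decide the number of queue pairs: in the OS hotreplace (kexec) case reuse
+ * the value saved in vram; otherwise start from the device default or module
+ * parameter and clamp to the number of CPUs on the device's NUMA node
+ * (falling back to all counted CPUs when none are local).
+ */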
+static void decide_num_qps(struct hinic3_nic_dev *nic_dev)
+{
+ u16 tmp_num_qps = nic_dev->max_qps;
+ u16 num_cpus = 0;
+ u16 max_num_cpus;
+ int i, node;
+
+ int is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ nic_dev->q_params.num_qps = nic_dev->nic_vram->vram_num_qps;
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Os hotreplace use vram to init num qps 1:%hu 2:%hu\n",
+ nic_dev->q_params.num_qps,
+ nic_dev->nic_vram->vram_num_qps);
+ return;
+ }
+
+ if (nic_dev->nic_cap.default_num_queues != 0 &&
+ nic_dev->nic_cap.default_num_queues < nic_dev->max_qps)
+ tmp_num_qps = nic_dev->nic_cap.default_num_queues;
+
+ MOD_PARA_VALIDATE_NUM_QPS(nic_dev, num_qps, tmp_num_qps);
+
+#ifdef HAVE_HOT_REPLACE_FUNC
+ if (partition_slave_doing_hotupgrade())
+ max_num_cpus = (u16)num_present_cpus();
+ else
+ max_num_cpus = (u16)num_online_cpus();
+#else
+ max_num_cpus = (u16)num_online_cpus();
+#endif
+
+ for (i = 0; i < max_num_cpus; i++) {
+ node = (int)cpu_to_node(i);
+ if (node == dev_to_node(&nic_dev->pdev->dev))
+ num_cpus++;
+ }
+
+ if (!num_cpus)
+ num_cpus = max_num_cpus;
+
+ nic_dev->q_params.num_qps = (u16)min_t(u16, tmp_num_qps, num_cpus);
+ nic_dev->nic_vram->vram_num_qps = nic_dev->q_params.num_qps;
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "init num qps 1:%u 2:%u\n",
+ nic_dev->q_params.num_qps, nic_dev->nic_vram->vram_num_qps);
+}
+
+static void copy_value_to_rss_hkey(struct hinic3_nic_dev *nic_dev,
+ const u8 *hkey)
+{
+ u32 i;
+ u32 *rss_hkey = (u32 *)nic_dev->rss_hkey;
+
+ memcpy(nic_dev->rss_hkey, hkey, NIC_RSS_KEY_SIZE);
+
+ /* make a copy of the key, and convert it to Big Endian */
+ for (i = 0; i < NIC_RSS_KEY_SIZE / sizeof(u32); i++)
+ nic_dev->rss_hkey_be[i] = cpu_to_be32(rss_hkey[i]);
+}
+
+static int alloc_rss_resource(struct hinic3_nic_dev *nic_dev)
+{
+ u8 default_rss_key[NIC_RSS_KEY_SIZE] = {
+ 0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
+ 0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
+ 0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
+ 0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
+ 0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa};
+
+	/* We request double space for the hash key;
+	 * the second half holds the key in big endian
+	 * format.
+	 */
+ nic_dev->rss_hkey =
+ kzalloc(NIC_RSS_KEY_SIZE *
+ HINIC3_RSS_KEY_RSV_NUM, GFP_KERNEL);
+ if (!nic_dev->rss_hkey) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc memory for rss_hkey\n");
+ return -ENOMEM;
+ }
+
+	/* The second space is for the big endian hash key */
+ nic_dev->rss_hkey_be = (u32 *)(nic_dev->rss_hkey +
+ NIC_RSS_KEY_SIZE);
+ copy_value_to_rss_hkey(nic_dev, (u8 *)default_rss_key);
+
+ nic_dev->rss_indir = kzalloc(sizeof(u32) * NIC_RSS_INDIR_SIZE, GFP_KERNEL);
+ if (!nic_dev->rss_indir) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc memory for rss_indir\n");
+ kfree(nic_dev->rss_hkey);
+ nic_dev->rss_hkey = NULL;
+ return -ENOMEM;
+ }
+
+ set_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags);
+
+ return 0;
+}
+
+void hinic3_try_to_enable_rss(struct hinic3_nic_dev *nic_dev)
+{
+ u8 cos_map[NIC_DCB_UP_MAX] = {0};
+ int err = 0;
+
+ if (!nic_dev)
+ return;
+
+ nic_dev->max_qps = hinic3_func_max_nic_qnum(nic_dev->hwdev);
+ if (nic_dev->max_qps <= 1 || !HINIC3_SUPPORT_RSS(nic_dev->hwdev))
+ goto set_q_params;
+
+ err = alloc_rss_resource(nic_dev);
+ if (err) {
+ nic_dev->max_qps = 1;
+ goto set_q_params;
+ }
+
+ set_bit(HINIC3_RSS_ENABLE, &nic_dev->flags);
+ nic_dev->max_qps = hinic3_func_max_nic_qnum(nic_dev->hwdev);
+
+ decide_num_qps(nic_dev);
+
+ hinic3_init_rss_parameters(nic_dev->netdev);
+ err = hinic3_set_hw_rss_parameters(nic_dev->netdev, 0, 0, cos_map,
+ test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to set hardware rss parameters\n");
+
+ hinic3_clear_rss_config(nic_dev);
+ nic_dev->max_qps = 1;
+ goto set_q_params;
+ }
+ return;
+
+set_q_params:
+ clear_bit(HINIC3_RSS_ENABLE, &nic_dev->flags);
+ nic_dev->q_params.num_qps = nic_dev->max_qps;
+ nic_dev->nic_vram->vram_num_qps = nic_dev->max_qps;
+}
+
+static int hinic3_config_rss_hw_resource(struct hinic3_nic_dev *nic_dev,
+ u32 *indir_tbl)
+{
+ int err;
+
+ err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl);
+ if (err)
+ return err;
+
+ err = hinic3_set_rss_type(nic_dev->hwdev, nic_dev->rss_type);
+ if (err)
+ return err;
+
+ return hinic3_rss_set_hash_engine(nic_dev->hwdev,
+ nic_dev->rss_hash_engine);
+}
+
+int hinic3_set_hw_rss_parameters(struct net_device *netdev, u8 rss_en,
+ u8 cos_num, u8 *cos_map, u8 dcb_en)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ /* RSS key */
+ err = hinic3_rss_set_hash_key(nic_dev->hwdev, nic_dev->rss_hkey);
+ if (err)
+ return err;
+
+ hinic3_maybe_reconfig_rss_indir(netdev, dcb_en);
+
+ if (test_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags))
+ hinic3_fillout_indir_tbl(nic_dev, cos_num, nic_dev->rss_indir);
+
+ err = hinic3_config_rss_hw_resource(nic_dev, nic_dev->rss_indir);
+ if (err)
+ return err;
+
+ err = hinic3_rss_cfg(nic_dev->hwdev, rss_en, cos_num, cos_map,
+ nic_dev->q_params.num_qps);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+/* for ethtool */
+static int set_l4_rss_hash_ops(const struct ethtool_rxnfc *cmd,
+ struct nic_rss_type *rss_type)
+{
+ u8 rss_l4_en = 0;
+
+ switch (cmd->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ case 0:
+ rss_l4_en = 0;
+ break;
+ case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+ rss_l4_en = 1;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ rss_type->tcp_ipv4 = rss_l4_en;
+ break;
+ case TCP_V6_FLOW:
+ rss_type->tcp_ipv6 = rss_l4_en;
+ break;
+ case UDP_V4_FLOW:
+ rss_type->udp_ipv4 = rss_l4_en;
+ break;
+ case UDP_V6_FLOW:
+ rss_type->udp_ipv6 = rss_l4_en;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int update_rss_hash_opts(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *cmd,
+ struct nic_rss_type *rss_type)
+{
+ int err;
+
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ case UDP_V4_FLOW:
+ case UDP_V6_FLOW:
+ err = set_l4_rss_hash_ops(cmd, rss_type);
+ if (err)
+ return err;
+
+ break;
+ case IPV4_FLOW:
+ rss_type->ipv4 = 1;
+ break;
+ case IPV6_FLOW:
+ rss_type->ipv6 = 1;
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unsupported flow type\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_rss_hash_opts(struct hinic3_nic_dev *nic_dev, struct ethtool_rxnfc *cmd)
+{
+ struct nic_rss_type *rss_type = &nic_dev->rss_type;
+ int err;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ cmd->data = 0;
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "RSS is disabled, setting flow-hash is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* RSS does not support anything other than hashing
+ * to queues on src and dst IPs and ports
+ */
+ if (cmd->data & ~(RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 |
+ RXH_L4_B_2_3))
+ return -EINVAL;
+
+ /* We need at least the IP SRC and DEST fields for hashing */
+ if (!(cmd->data & RXH_IP_SRC) || !(cmd->data & RXH_IP_DST))
+ return -EINVAL;
+
+ err = hinic3_get_rss_type(nic_dev->hwdev, rss_type);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to get rss type\n");
+ return -EFAULT;
+ }
+
+ err = update_rss_hash_opts(nic_dev, cmd, rss_type);
+ if (err)
+ return err;
+
+ err = hinic3_set_rss_type(nic_dev->hwdev, *rss_type);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to set rss type\n");
+ return -EFAULT;
+ }
+
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Set rss hash options success\n");
+
+ return 0;
+}
+
+static void convert_rss_type(u8 rss_opt, struct ethtool_rxnfc *cmd)
+{
+ if (rss_opt)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+}
+
+static int hinic3_convert_rss_type(struct hinic3_nic_dev *nic_dev,
+ struct nic_rss_type *rss_type,
+ struct ethtool_rxnfc *cmd)
+{
+ cmd->data = RXH_IP_SRC | RXH_IP_DST;
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ convert_rss_type(rss_type->tcp_ipv4, cmd);
+ break;
+ case TCP_V6_FLOW:
+ convert_rss_type(rss_type->tcp_ipv6, cmd);
+ break;
+ case UDP_V4_FLOW:
+ convert_rss_type(rss_type->udp_ipv4, cmd);
+ break;
+ case UDP_V6_FLOW:
+ convert_rss_type(rss_type->udp_ipv6, cmd);
+ break;
+ case IPV4_FLOW:
+ case IPV6_FLOW:
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported flow type\n");
+ cmd->data = 0;
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_get_rss_hash_opts(struct hinic3_nic_dev *nic_dev, struct ethtool_rxnfc *cmd)
+{
+ struct nic_rss_type rss_type = {0};
+ int err;
+
+ cmd->data = 0;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags))
+ return 0;
+
+ err = hinic3_get_rss_type(nic_dev->hwdev, &rss_type);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get rss type\n");
+ return err;
+ }
+
+ return hinic3_convert_rss_type(nic_dev, &rss_type, cmd);
+}
+
+int hinic3_get_rxnfc(struct net_device *netdev,
+ struct ethtool_rxnfc *cmd, u32 *rule_locs)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_GRXRINGS:
+ cmd->data = nic_dev->q_params.num_qps;
+ break;
+ case ETHTOOL_GRXCLSRLCNT:
+ cmd->rule_cnt = (u32)nic_dev->rx_flow_rule.tot_num_rules;
+ break;
+ case ETHTOOL_GRXCLSRULE:
+ err = hinic3_ethtool_get_flow(nic_dev, cmd, cmd->fs.location);
+ break;
+ case ETHTOOL_GRXCLSRLALL:
+ err = hinic3_ethtool_get_all_flows(nic_dev, cmd, rule_locs);
+ break;
+ case ETHTOOL_GRXFH:
+ err = hinic3_get_rss_hash_opts(nic_dev, cmd);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+int hinic3_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_SRXFH:
+ err = hinic3_set_rss_hash_opts(nic_dev, cmd);
+ break;
+ case ETHTOOL_SRXCLSRLINS:
+ err = hinic3_ethtool_flow_replace(nic_dev, &cmd->fs);
+ break;
+ case ETHTOOL_SRXCLSRLDEL:
+ err = hinic3_ethtool_flow_remove(nic_dev, cmd->fs.location);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+static u16 hinic3_max_channels(struct hinic3_nic_dev *nic_dev)
+{
+ u8 tcs = (u8)netdev_get_num_tc(nic_dev->netdev);
+
+ return tcs ? nic_dev->max_qps / tcs : nic_dev->max_qps;
+}
+
+static u16 hinic3_curr_channels(struct hinic3_nic_dev *nic_dev)
+{
+ if (netif_running(nic_dev->netdev))
+ return nic_dev->q_params.num_qps ?
+ nic_dev->q_params.num_qps : 1;
+ else
+ return (u16)min_t(u16, hinic3_max_channels(nic_dev),
+ nic_dev->q_params.num_qps);
+}
+
+void hinic3_get_channels(struct net_device *netdev,
+ struct ethtool_channels *channels)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ channels->max_rx = 0;
+ channels->max_tx = 0;
+ channels->max_other = 0;
+ /* report maximum channels */
+ channels->max_combined = hinic3_max_channels(nic_dev);
+ channels->rx_count = 0;
+ channels->tx_count = 0;
+ channels->other_count = 0;
+	/* report the current number of combined channels */
+ channels->combined_count = hinic3_curr_channels(nic_dev);
+}
+
+static int hinic3_validate_channel_parameter(struct net_device *netdev,
+ const struct ethtool_channels *channels)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 max_channel = hinic3_max_channels(nic_dev);
+ unsigned int count = channels->combined_count;
+
+ if (!count) {
+ nicif_err(nic_dev, drv, netdev,
+ "Unsupported combined_count=0\n");
+ return -EINVAL;
+ }
+
+ if (channels->tx_count || channels->rx_count || channels->other_count) {
+ nicif_err(nic_dev, drv, netdev,
+ "Setting rx/tx/other count not supported\n");
+ return -EINVAL;
+ }
+
+ if (count > max_channel) {
+ nicif_err(nic_dev, drv, netdev,
+ "Combined count %u exceed limit %u\n", count,
+ max_channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void change_num_channel_reopen_handler(struct hinic3_nic_dev *nic_dev,
+ const void *priv_data)
+{
+ hinic3_set_default_rss_indir(nic_dev->netdev);
+}
+
+int hinic3_set_channels(struct net_device *netdev,
+ struct ethtool_channels *channels)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_txrxq_params q_params = {0};
+ unsigned int count = channels->combined_count;
+ int err;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ if (hinic3_validate_channel_parameter(netdev, channels))
+ return -EINVAL;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev,
+			  "This function doesn't support RSS, only supports 1 queue pair\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ if (count < user_cos_num) {
+ nicif_err(nic_dev, drv, netdev,
+				  "DCB is on, channel num should be no less than valid cos num:%u\n",
+ user_cos_num);
+
+ return -EOPNOTSUPP;
+ }
+ }
+
+ if (HINIC3_SUPPORT_FDIR(nic_dev->hwdev) &&
+ !hinic3_validate_channel_setting_in_ntuple(nic_dev, count))
+ return -EOPNOTSUPP;
+
+ nicif_info(nic_dev, drv, netdev, "Set max combined queue number from %u to %u\n",
+ nic_dev->q_params.num_qps, count);
+
+ if (netif_running(netdev)) {
+ q_params = nic_dev->q_params;
+ q_params.num_qps = (u16)count;
+ q_params.txqs_res = NULL;
+ q_params.rxqs_res = NULL;
+ q_params.irq_cfg = NULL;
+
+ nicif_info(nic_dev, drv, netdev, "Restarting channel\n");
+ err = hinic3_change_channel_settings(nic_dev, &q_params,
+ change_num_channel_reopen_handler, NULL);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to change channel settings\n");
+ return -EFAULT;
+ }
+ } else {
+ /* Discard user configured rss */
+ hinic3_set_default_rss_indir(netdev);
+ nic_dev->q_params.num_qps = (u16)count;
+ }
+
+ nic_dev->nic_vram->vram_num_qps = nic_dev->q_params.num_qps;
+ return 0;
+}
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+u32 hinic3_get_rxfh_indir_size(struct net_device *netdev)
+{
+ return NIC_RSS_INDIR_SIZE;
+}
+#endif
+
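+/* Apply user supplied RSS settings from ethtool: program the indirection
+ * table and/or hash key in hardware and mirror the values in the driver's
+ * local copies.
+ */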
+static int set_rss_rxfh(struct net_device *netdev, const u32 *indir,
+ const u8 *key)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ if (indir) {
+ err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to set rss indir table\n");
+ return -EFAULT;
+ }
+ clear_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags);
+
+ memcpy(nic_dev->rss_indir, indir,
+ sizeof(u32) * NIC_RSS_INDIR_SIZE);
+ nicif_info(nic_dev, drv, netdev, "Change rss indir success\n");
+ }
+
+ if (key) {
+ err = hinic3_rss_set_hash_key(nic_dev->hwdev, key);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set rss key\n");
+ return -EFAULT;
+ }
+
+ copy_value_to_rss_hkey(nic_dev, key);
+ nicif_info(nic_dev, drv, netdev, "Change rss key success\n");
+ }
+
+ return 0;
+}
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+u32 hinic3_get_rxfh_key_size(struct net_device *netdev)
+{
+ return NIC_RSS_KEY_SIZE;
+}
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, u8 *hfunc)
+#else
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+		netdev_warn_once(nic_dev->netdev, "Rss is disabled\n");
+ return -EOPNOTSUPP;
+ }
+
+#ifdef HAVE_RXFH_HASHFUNC
+ if (hfunc)
+ *hfunc = nic_dev->rss_hash_engine ?
+ ETH_RSS_HASH_TOP : ETH_RSS_HASH_XOR;
+#endif
+
+ if (indir) {
+ err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indir);
+ if (err)
+ return -EFAULT;
+ }
+
+ if (key)
+ memcpy(key, nic_dev->rss_hkey, NIC_RSS_KEY_SIZE);
+
+ return err;
+}
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key,
+ const u8 hfunc)
+#else
+#ifdef HAVE_RXFH_NONCONST
+int hinic3_set_rxfh(struct net_device *netdev, u32 *indir, u8 *key)
+#else
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key)
+#endif
+#endif /* HAVE_RXFH_HASHFUNC */
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Cannot set rss parameters when rss is disabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) && indir) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not support to set indir when DCB is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+#ifdef HAVE_RXFH_HASHFUNC
+ if (hfunc != ETH_RSS_HASH_NO_CHANGE) {
+ if (hfunc != ETH_RSS_HASH_TOP && hfunc != ETH_RSS_HASH_XOR) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not support to set hfunc type except TOP and XOR\n");
+ return -EOPNOTSUPP;
+ }
+
+ nic_dev->rss_hash_engine = (hfunc == ETH_RSS_HASH_XOR) ?
+ HINIC3_RSS_HASH_ENGINE_TYPE_XOR :
+ HINIC3_RSS_HASH_ENGINE_TYPE_TOEP;
+ err = hinic3_rss_set_hash_engine(nic_dev->hwdev,
+ nic_dev->rss_hash_engine);
+ if (err)
+ return -EFAULT;
+
+ nicif_info(nic_dev, drv, netdev,
+ "Change hfunc to RSS_HASH_%s success\n",
+ (hfunc == ETH_RSS_HASH_XOR) ? "XOR" : "TOP");
+ }
+#endif
+ err = set_rss_rxfh(netdev, indir, key);
+
+ return err;
+}
+
+#else /* !(defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)) */
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_get_rxfh_indir(struct net_device *netdev,
+ struct ethtool_rxfh_indir *indir1)
+#else
+int hinic3_get_rxfh_indir(struct net_device *netdev, u32 *indir)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ u32 *indir = NULL;
+
+	/* On older kernels (e.g. SUSE 11.2) this interface is called twice:
+	 * the first call gets the size value, and the second call gets the
+	 * rxfh indir table according to that size.
+	 */
+ if (indir1->size == 0) {
+ indir1->size = NIC_RSS_INDIR_SIZE;
+ return 0;
+ }
+
+ if (indir1->size < NIC_RSS_INDIR_SIZE) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Failed to get rss indir, required size(%d) is larger than provided size(%u).\n",
+ NIC_RSS_INDIR_SIZE, indir1->size);
+ return -EINVAL;
+ }
+
+ indir = indir1->ring_index;
+#endif
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+		netdev_warn_once(nic_dev->netdev, "Rss is disabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (indir)
+ err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indir);
+
+ return err;
+}
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_set_rxfh_indir(struct net_device *netdev,
+ const struct ethtool_rxfh_indir *indir1)
+#else
+int hinic3_set_rxfh_indir(struct net_device *netdev, const u32 *indir)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ const u32 *indir = NULL;
+
+ if (indir1->size != NIC_RSS_INDIR_SIZE) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Failed to set rss indir, required size(%d) does not match provided size(%u).\n",
+ NIC_RSS_INDIR_SIZE, indir1->size);
+ return -EINVAL;
+ }
+
+ indir = indir1->ring_index;
+#endif
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Cannot set rss indir when rss is disabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) && indir) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not support to set indir when DCB is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ return set_rss_rxfh(netdev, indir, NULL);
+}
+
+#endif /* defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH) */
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
new file mode 100644
index 0000000..17f511c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
@@ -0,0 +1,95 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_RSS_H
+#define HINIC3_RSS_H
+
+#include "hinic3_nic_dev.h"
+
+#define HINIC_NUM_IQ_PER_FUNC 8
+
+int hinic3_rss_init(struct hinic3_nic_dev *nic_dev, u8 *rq2iq_map,
+ u32 map_size, u8 dcb_en);
+
+void hinic3_rss_deinit(struct hinic3_nic_dev *nic_dev);
+
+int hinic3_set_hw_rss_parameters(struct net_device *netdev, u8 rss_en,
+ u8 cos_num, u8 *cos_map, u8 dcb_en);
+
+void hinic3_init_rss_parameters(struct net_device *netdev);
+
+void hinic3_set_default_rss_indir(struct net_device *netdev);
+
+void hinic3_try_to_enable_rss(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_clear_rss_config(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_flush_rx_flow_rule(struct hinic3_nic_dev *nic_dev);
+int hinic3_ethtool_get_flow(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 location);
+
+int hinic3_ethtool_get_all_flows(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 *rule_locs);
+
+int hinic3_ethtool_flow_remove(struct hinic3_nic_dev *nic_dev, u32 location);
+
+int hinic3_ethtool_flow_replace(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs);
+
+bool hinic3_validate_channel_setting_in_ntuple(const struct hinic3_nic_dev *nic_dev, u32 q_num);
+
+/* for ethtool */
+int hinic3_get_rxnfc(struct net_device *netdev,
+ struct ethtool_rxnfc *cmd, u32 *rule_locs);
+
+int hinic3_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd);
+
+void hinic3_get_channels(struct net_device *netdev,
+ struct ethtool_channels *channels);
+
+int hinic3_set_channels(struct net_device *netdev,
+ struct ethtool_channels *channels);
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+u32 hinic3_get_rxfh_indir_size(struct net_device *netdev);
+#endif /* NOT_HAVE_GET_RXFH_INDIR_SIZE */
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+u32 hinic3_get_rxfh_key_size(struct net_device *netdev);
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, u8 *hfunc);
+#else /* HAVE_RXFH_HASHFUNC */
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key);
+#endif /* HAVE_RXFH_HASHFUNC */
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key,
+ const u8 hfunc);
+#else
+#ifdef HAVE_RXFH_NONCONST
+int hinic3_set_rxfh(struct net_device *netdev, u32 *indir, u8 *key);
+#else
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key);
+#endif /* HAVE_RXFH_NONCONST */
+#endif /* HAVE_RXFH_HASHFUNC */
+
+#else /* !(defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)) */
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_get_rxfh_indir(struct net_device *netdev,
+ struct ethtool_rxfh_indir *indir1);
+#else
+int hinic3_get_rxfh_indir(struct net_device *netdev, u32 *indir);
+#endif
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_set_rxfh_indir(struct net_device *netdev,
+ const struct ethtool_rxfh_indir *indir1);
+#else
+int hinic3_set_rxfh_indir(struct net_device *netdev, const u32 *indir);
+#endif /* NOT_HAVE_GET_RXFH_INDIR_SIZE */
+
+#endif /* (defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)) */
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c
new file mode 100644
index 0000000..902d7e2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c
@@ -0,0 +1,413 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/dcbnl.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_cfg.h"
+#include "nic_mpu_cmd.h"
+#include "nic_npu_cmd.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic.h"
+#include "hinic3_common.h"
+
+static int hinic3_rss_cfg_hash_key(struct hinic3_nic_io *nic_io, u8 opcode,
+ u8 *key, u16 key_size)
+{
+ struct hinic3_cmd_rss_hash_key hash_key;
+ u16 out_size = sizeof(hash_key);
+ int err;
+
+ memset(&hash_key, 0, sizeof(struct hinic3_cmd_rss_hash_key));
+ hash_key.func_id = hinic3_global_func_id(nic_io->hwdev);
+ hash_key.opcode = opcode;
+
+ if (opcode == HINIC3_CMD_OP_SET)
+ memcpy(hash_key.key, key, key_size);
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_KEY,
+ &hash_key, sizeof(hash_key),
+ &hash_key, &out_size);
+ if (err || !out_size || hash_key.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to %s hash key, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_SET ? "set" : "get",
+ err, hash_key.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ memcpy(key, hash_key.key, key_size);
+
+ return 0;
+}
+
+int hinic3_rss_set_hash_key(void *hwdev, const u8 *key)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u8 hash_key[NIC_RSS_KEY_SIZE];
+
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ memcpy(hash_key, key, NIC_RSS_KEY_SIZE);
+ return hinic3_rss_cfg_hash_key(nic_io, HINIC3_CMD_OP_SET,
+ hash_key, NIC_RSS_KEY_SIZE);
+}
+
+int hinic3_rss_get_hash_key(void *hwdev, u8 *key)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ return hinic3_rss_cfg_hash_key(nic_io, HINIC3_CMD_OP_GET,
+ key, NIC_RSS_KEY_SIZE);
+}
+
+int hinic3_rss_get_indir_tbl(void *hwdev, u32 *indir_table)
+{
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 *indir_tbl = NULL;
+ int err, i;
+
+ if (!hwdev || !indir_table)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd_buf.\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_rss_indirect_tbl);
+ err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ cmd_buf, cmd_buf, NULL, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get rss indir table\n");
+ goto get_indir_tbl_failed;
+ }
+
+ indir_tbl = (u16 *)cmd_buf->buf;
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++)
+ indir_table[i] = *(indir_tbl + i);
+
+get_indir_tbl_failed:
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+
+ return err;
+}
+
+int hinic3_rss_set_indir_tbl(void *hwdev, const u32 *indir_table)
+{
+ struct nic_rss_indirect_tbl *indir_tbl = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 *temp = NULL;
+ u32 i, size;
+ u64 out_param = 0;
+ int err;
+
+ if (!hwdev || !indir_table)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_rss_indirect_tbl);
+ indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf;
+ memset(indir_tbl, 0, sizeof(*indir_tbl));
+
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++)
+ indir_tbl->entry[i] = (u16)(*(indir_table + i));
+
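+	/* The command queue payload is consumed as big-endian 32-bit words,
+	 * so convert the entry array in place before posting it.
+	 */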
+ size = sizeof(indir_tbl->entry) / sizeof(u32);
+ temp = (u32 *)indir_tbl->entry;
+ for (i = 0; i < size; i++)
+ temp[i] = cpu_to_be32(temp[i]);
+
+ err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set rss indir table\n");
+ err = -EFAULT;
+ }
+
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+ return err;
+}
+
+static int hinic3_cmdq_set_rss_type(void *hwdev, struct nic_rss_type rss_type)
+{
+ struct nic_rss_context_tbl *ctx_tbl = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 ctx = 0;
+ u64 out_param = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
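+	/* Pack the VALID flag and every enabled hash type into the single
+	 * 32-bit RSS context word expected by the hardware.
+	 */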
+ ctx |= HINIC3_RSS_TYPE_SET(1, VALID) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv4, IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6, IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6_ext, IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv4, TCP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6, TCP_IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6_ext, TCP_IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv4, UDP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv6, UDP_IPV6);
+
+ cmd_buf->size = sizeof(struct nic_rss_context_tbl);
+ ctx_tbl = (struct nic_rss_context_tbl *)cmd_buf->buf;
+ memset(ctx_tbl, 0, sizeof(*ctx_tbl));
+ ctx_tbl->ctx = cpu_to_be32(ctx);
+
+ /* cfg the rss context table by command queue */
+ err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+
+ if (err || out_param != 0) {
+		nic_err(nic_io->dev_hdl, "cmdq set rss context table failed, err: %d\n",
+ err);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hinic3_mgmt_set_rss_type(void *hwdev, struct nic_rss_type rss_type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_rss_context_table ctx_tbl;
+ u32 ctx = 0;
+ u16 out_size = sizeof(ctx_tbl);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ memset(&ctx_tbl, 0, sizeof(ctx_tbl));
+ ctx_tbl.func_id = hinic3_global_func_id(hwdev);
+ ctx |= HINIC3_RSS_TYPE_SET(1, VALID) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv4, IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6, IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6_ext, IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv4, TCP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6, TCP_IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6_ext, TCP_IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv4, UDP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv6, UDP_IPV6);
+ ctx_tbl.context = ctx;
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RSS_CTX_TBL_INTO_FUNC,
+ &ctx_tbl, sizeof(ctx_tbl),
+ &ctx_tbl, &out_size);
+
+ if (ctx_tbl.msg_head.status == HINIC3_MGMT_CMD_UNSUPPORTED) {
+ return HINIC3_MGMT_CMD_UNSUPPORTED;
+ } else if (err || !out_size || ctx_tbl.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "mgmt Failed to set rss context offload, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, ctx_tbl.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_rss_type(void *hwdev, struct nic_rss_type rss_type)
+{
+ int err;
+
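+	/* Try the management channel first and fall back to the command
+	 * queue if the firmware reports the command as unsupported.
+	 */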
+ err = hinic3_mgmt_set_rss_type(hwdev, rss_type);
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ err = hinic3_cmdq_set_rss_type(hwdev, rss_type);
+
+ return err;
+}
+
+int hinic3_get_rss_type(void *hwdev, struct nic_rss_type *rss_type)
+{
+ struct hinic3_rss_context_table ctx_tbl;
+ u16 out_size = sizeof(ctx_tbl);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !rss_type)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&ctx_tbl, 0, sizeof(struct hinic3_rss_context_table));
+ ctx_tbl.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_RSS_CTX_TBL,
+ &ctx_tbl, sizeof(ctx_tbl),
+ &ctx_tbl, &out_size);
+ if (err || !out_size || ctx_tbl.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to get hash type, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, ctx_tbl.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ rss_type->ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV4);
+ rss_type->ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV6);
+ rss_type->ipv6_ext = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV6_EXT);
+ rss_type->tcp_ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV4);
+ rss_type->tcp_ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV6);
+ rss_type->tcp_ipv6_ext = HINIC3_RSS_TYPE_GET(ctx_tbl.context,
+ TCP_IPV6_EXT);
+ rss_type->udp_ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV4);
+ rss_type->udp_ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV6);
+
+ return 0;
+}
+
+static int hinic3_rss_cfg_hash_engine(struct hinic3_nic_io *nic_io, u8 opcode,
+ u8 *type)
+{
+ struct hinic3_cmd_rss_engine_type hash_type;
+ u16 out_size = sizeof(hash_type);
+ int err;
+
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&hash_type, 0, sizeof(struct hinic3_cmd_rss_engine_type));
+
+ hash_type.func_id = hinic3_global_func_id(nic_io->hwdev);
+ hash_type.opcode = opcode;
+
+ if (opcode == HINIC3_CMD_OP_SET)
+ hash_type.hash_engine = *type;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_ENGINE,
+ &hash_type, sizeof(hash_type),
+ &hash_type, &out_size);
+ if (err || !out_size || hash_type.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to %s hash engine, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_SET ? "set" : "get",
+ err, hash_type.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ *type = hash_type.hash_engine;
+
+ return 0;
+}
+
+int hinic3_rss_set_hash_engine(void *hwdev, u8 type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return hinic3_rss_cfg_hash_engine(nic_io, HINIC3_CMD_OP_SET, &type);
+}
+
+int hinic3_rss_get_hash_engine(void *hwdev, u8 *type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !type)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return hinic3_rss_cfg_hash_engine(nic_io, HINIC3_CMD_OP_GET, type);
+}
+
+int hinic3_rss_cfg(void *hwdev, u8 rss_en, u8 cos_num, u8 *prio_tc, u16 num_qps)
+{
+ struct hinic3_cmd_rss_config rss_cfg;
+ u16 out_size = sizeof(rss_cfg);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+	/* microcode requires the number of TCs to be a power of 2 */
+ if (!hwdev || !prio_tc || (cos_num & (cos_num - 1)))
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ memset(&rss_cfg, 0, sizeof(struct hinic3_cmd_rss_config));
+ rss_cfg.func_id = hinic3_global_func_id(hwdev);
+ rss_cfg.rss_en = rss_en;
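+	/* HW takes the CoS count as a log2 value; a cos_num of 0 means no
+	 * priority queues are configured.
+	 */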
+ rss_cfg.rq_priority_number = cos_num ? (u8)ilog2(cos_num) : 0;
+ rss_cfg.num_qps = num_qps;
+
+ memcpy(rss_cfg.prio_tc, prio_tc, NIC_DCB_UP_MAX);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_RSS_CFG,
+ &rss_cfg, sizeof(rss_cfg),
+ &rss_cfg, &out_size);
+ if (err || !out_size || rss_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rss cfg, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rss_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c
new file mode 100644
index 0000000..25536d1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c
@@ -0,0 +1,1523 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <net/xdp.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/u64_stats_sync.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/sctp.h>
+#include <linux/pkt_sched.h>
+#include <linux/ipv6.h>
+#include <linux/module.h>
+#include <linux/compiler.h>
+#include <linux/filter.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_common.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_rss.h"
+#include "hinic3_rx.h"
+
+/* performance: ci addr RTE_CACHE_SIZE(64B) alignment */
+#define HINIC3_RX_HDR_SIZE 256
+#define HINIC3_RX_BUFFER_WRITE 16
+
+#define HINIC3_RX_TCP_PKT 0x3
+#define HINIC3_RX_UDP_PKT 0x4
+#define HINIC3_RX_SCTP_PKT 0x7
+
+#define HINIC3_RX_IPV4_PKT 0
+#define HINIC3_RX_IPV6_PKT 1
+#define HINIC3_RX_INVALID_IP_TYPE 2
+
+#define HINIC3_RX_PKT_FORMAT_NON_TUNNEL 0
+#define HINIC3_RX_PKT_FORMAT_VXLAN 1
+
+#define RXQ_STATS_INC(rxq, field) \
+do { \
+ u64_stats_update_begin(&(rxq)->rxq_stats.syncp); \
+ (rxq)->rxq_stats.field++; \
+ u64_stats_update_end(&(rxq)->rxq_stats.syncp); \
+} while (0)
+
+static bool rx_alloc_mapped_page(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_rx_info *rx_info)
+{
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct page *page = rx_info->page;
+ dma_addr_t dma = rx_info->buf_dma_addr;
+ u32 page_offset = 0;
+
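+	/* buffer is still mapped from a previous fill, reuse it as-is */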
+ if (likely(dma))
+ return true;
+
+ /* alloc new page for storage */
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ page = page_pool_alloc_frag(rx_info->page_pool, &page_offset,
+ nic_dev->rx_buff_len,
+ GFP_ATOMIC | __GFP_COLD |
+ __GFP_COMP);
+ if (unlikely(!page))
+ return false;
+ dma = page_pool_get_dma_addr(page);
+ goto set_rx_info;
+ }
+#endif
+ page = alloc_pages_node(NUMA_NO_NODE,
+ GFP_ATOMIC | __GFP_COLD | __GFP_COMP,
+ nic_dev->page_order);
+
+ if (unlikely(!page))
+ return false;
+
+ /* map page for use */
+ dma = dma_map_page(&pdev->dev, page, page_offset,
+ nic_dev->dma_rx_buff_size, DMA_FROM_DEVICE);
+ /* if mapping failed free memory back to system since
+ * there isn't much point in holding memory we can't use
+ */
+ if (unlikely(dma_mapping_error(&pdev->dev, dma))) {
+ __free_pages(page, nic_dev->page_order);
+ return false;
+ }
+ goto set_rx_info;
+
+set_rx_info:
+ rx_info->page = page;
+ rx_info->buf_dma_addr = dma;
+ rx_info->page_offset = page_offset;
+
+ return true;
+}
+
+static u32 hinic3_rx_fill_wqe(struct hinic3_rxq *rxq)
+{
+ struct net_device *netdev = rxq->netdev;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int rq_wqe_len = rxq->rq->wq.wqebb_size;
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ u32 i;
+
+ for (i = 0; i < rxq->q_depth; i++) {
+ rx_info = &rxq->rx_info[i];
+ rq_wqe = hinic3_rq_wqe_addr(rxq->rq, (u16)i);
+
+ if (rxq->rq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ /* unit of cqe length is 16B */
+ hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge,
+ rx_info->cqe_dma,
+ (HINIC3_CQE_LEN >>
+ HINIC3_CQE_SIZE_SHIFT));
+ /* use fixed len */
+ rq_wqe->extend_wqe.buf_desc.sge.len =
+ nic_dev->rx_buff_len;
+ } else {
+ rq_wqe->normal_wqe.cqe_hi_addr =
+ upper_32_bits(rx_info->cqe_dma);
+ rq_wqe->normal_wqe.cqe_lo_addr =
+ lower_32_bits(rx_info->cqe_dma);
+ }
+
+ hinic3_hw_be32_len(rq_wqe, rq_wqe_len);
+ rx_info->rq_wqe = rq_wqe;
+ }
+
+ return i;
+}
+
+static u32 hinic3_rx_fill_buffers(struct hinic3_rxq *rxq)
+{
+ struct net_device *netdev = rxq->netdev;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ dma_addr_t dma_addr;
+ u32 i, free_wqebbs = rxq->delta - 1;
+
+ for (i = 0; i < free_wqebbs; i++) {
+ rx_info = &rxq->rx_info[rxq->next_to_update];
+
+ if (unlikely(!rx_alloc_mapped_page(nic_dev, rx_info))) {
+ RXQ_STATS_INC(rxq, alloc_rx_buf_err);
+ break;
+ }
+
+ dma_addr = rx_info->buf_dma_addr + rx_info->page_offset;
+
+ rq_wqe = rx_info->rq_wqe;
+
+ if (rxq->rq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ rq_wqe->extend_wqe.buf_desc.sge.hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->extend_wqe.buf_desc.sge.lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ } else {
+ rq_wqe->normal_wqe.buf_hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->normal_wqe.buf_lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ }
+ rxq->next_to_update = (u16)((rxq->next_to_update + 1) & rxq->q_mask);
+ }
+
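+	/* Ring the RQ doorbell only when at least one buffer was posted;
+	 * otherwise record that the ring has run completely empty.
+	 */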
+ if (likely(i)) {
+ hinic3_write_db(rxq->rq,
+ rxq->q_id & (NIC_RX_DB_COS_MAX - 1),
+ RQ_CFLAG_DP,
+ (u16)((u32)rxq->next_to_update <<
+ rxq->rq->wqe_type));
+ rxq->delta -= i;
+ rxq->next_to_alloc = rxq->next_to_update;
+ } else if (free_wqebbs == rxq->q_depth - 1) {
+ RXQ_STATS_INC(rxq, rx_buf_empty);
+ }
+
+ return i;
+}
+
+static u32 hinic3_rx_alloc_buffers(struct hinic3_nic_dev *nic_dev, u32 rq_depth,
+ struct hinic3_rx_info *rx_info_arr)
+{
+ u32 free_wqebbs = rq_depth - 1;
+ u32 idx;
+
+ for (idx = 0; idx < free_wqebbs; idx++) {
+ if (!rx_alloc_mapped_page(nic_dev, &rx_info_arr[idx]))
+ break;
+ }
+
+ return idx;
+}
+
+static void hinic3_rx_free_buffers(struct hinic3_nic_dev *nic_dev, u32 q_depth,
+ struct hinic3_rx_info *rx_info_arr)
+{
+ struct hinic3_rx_info *rx_info = NULL;
+ u32 i;
+
+ /* Free all the Rx ring sk_buffs */
+ for (i = 0; i < q_depth; i++) {
+ rx_info = &rx_info_arr[i];
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ if (rx_info->page) {
+ page_pool_put_full_page(rx_info->page_pool,
+ rx_info->page, false);
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+ }
+ continue;
+ }
+#endif
+
+ if (rx_info->buf_dma_addr) {
+ dma_unmap_page(&nic_dev->pdev->dev,
+ rx_info->buf_dma_addr,
+ nic_dev->dma_rx_buff_size,
+ DMA_FROM_DEVICE);
+ rx_info->buf_dma_addr = 0;
+ }
+
+ if (rx_info->page) {
+ __free_pages(rx_info->page, nic_dev->page_order);
+ rx_info->page = NULL;
+ }
+ }
+}
+
+static void hinic3_reuse_rx_page(struct hinic3_rxq *rxq,
+ struct hinic3_rx_info *old_rx_info)
+{
+ struct hinic3_rx_info *new_rx_info = NULL;
+ u16 nta = rxq->next_to_alloc;
+
+ new_rx_info = &rxq->rx_info[nta];
+
+ /* update, and store next to alloc */
+ nta++;
+ rxq->next_to_alloc = (nta < rxq->q_depth) ? nta : 0;
+
+ new_rx_info->page = old_rx_info->page;
+ new_rx_info->page_offset = old_rx_info->page_offset;
+ new_rx_info->buf_dma_addr = old_rx_info->buf_dma_addr;
+
+ /* sync the buffer for use by the device */
+ dma_sync_single_range_for_device(rxq->dev, new_rx_info->buf_dma_addr,
+ new_rx_info->page_offset,
+ rxq->buf_len,
+ DMA_FROM_DEVICE);
+}
+
+static bool hinic3_add_rx_frag(struct hinic3_rxq *rxq,
+ struct hinic3_rx_info *rx_info,
+ struct sk_buff *skb, u32 size)
+{
+ struct page *page = NULL;
+ u8 *va = NULL;
+
+ page = rx_info->page;
+ va = (u8 *)page_address(page) + rx_info->page_offset;
+ prefetch(va);
+#if L1_CACHE_BYTES < 128
+ prefetch(va + L1_CACHE_BYTES);
+#endif
+
+ dma_sync_single_range_for_cpu(rxq->dev,
+ rx_info->buf_dma_addr,
+ rx_info->page_offset,
+ rxq->buf_len,
+ DMA_FROM_DEVICE);
+
+ if (size <= HINIC3_RX_HDR_SIZE && !skb_is_nonlinear(skb)) {
+ __skb_put_data(skb, va, size);
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ page_pool_put_full_page(rx_info->page_pool,
+ page, false);
+ return false;
+ }
+#endif
+
+ /* page is not reserved, we can reuse buffer as-is */
+ if (likely(page_to_nid(page) == numa_node_id()))
+ return true;
+
+ /* this page cannot be reused so discard it */
+ put_page(page);
+ goto discard_page;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ (int)rx_info->page_offset, (int)size, rxq->buf_len);
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ skb_mark_for_recycle(skb);
+ return false;
+ }
+#endif
+
+ /* avoid re-using remote pages */
+ if (unlikely(page_to_nid(page) != numa_node_id()))
+ goto discard_page;
+
+ /* if we are only owner of page we can reuse it */
+ if (unlikely(page_count(page) != 1))
+ goto discard_page;
+
+ /* flip page offset to other buffer */
+ rx_info->page_offset ^= rxq->buf_len;
+ get_page(page);
+
+ return true;
+
+discard_page:
+ dma_unmap_page(rxq->dev, rx_info->buf_dma_addr,
+ rxq->dma_rx_buff_size, DMA_FROM_DEVICE);
+ return false;
+}
+
+static void packaging_skb(struct hinic3_rxq *rxq, struct sk_buff *head_skb,
+ u8 sge_num, u32 pkt_len)
+{
+ struct hinic3_rx_info *rx_info = NULL;
+ struct sk_buff *skb = NULL;
+ u8 frag_num = 0;
+ u32 size;
+ u32 sw_ci;
+ u32 temp_pkt_len = pkt_len;
+ u8 temp_sge_num = sge_num;
+
+ sw_ci = rxq->cons_idx & rxq->q_mask;
+ skb = head_skb;
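+	/* Attach each consumed RX buffer as a page frag, moving on to the
+	 * next skb in the frag_list once MAX_SKB_FRAGS frags are filled.
+	 */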
+ while (temp_sge_num) {
+ rx_info = &rxq->rx_info[sw_ci];
+ sw_ci = (sw_ci + 1) & rxq->q_mask;
+ if (unlikely(temp_pkt_len > rxq->buf_len)) {
+ size = rxq->buf_len;
+ temp_pkt_len -= rxq->buf_len;
+ } else {
+ size = temp_pkt_len;
+ }
+
+ if (unlikely(frag_num == MAX_SKB_FRAGS)) {
+ frag_num = 0;
+ if (skb == head_skb)
+ skb = skb_shinfo(skb)->frag_list;
+ else
+ skb = skb->next;
+ }
+
+ if (unlikely(skb != head_skb)) {
+ head_skb->len += size;
+ head_skb->data_len += size;
+ head_skb->truesize += rxq->buf_len;
+ }
+
+ if (likely(hinic3_add_rx_frag(rxq, rx_info, skb, size)))
+ hinic3_reuse_rx_page(rxq, rx_info);
+
+ /* clear contents of buffer_info */
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+ temp_sge_num--;
+ frag_num++;
+ }
+}
+
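+/* Number of RX buffers (SGEs) a packet of pkt_len spans: ceil(pkt_len / buf_len) */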
+#define HINIC3_GET_SGE_NUM(pkt_len, rxq) \
+ ((u8)(((pkt_len) >> (rxq)->rx_buff_shift) + \
+ (((pkt_len) & ((rxq)->buf_len - 1)) ? 1 : 0)))
+
+static struct sk_buff *hinic3_fetch_rx_buffer(struct hinic3_rxq *rxq,
+ u32 pkt_len)
+{
+ struct sk_buff *head_skb = NULL;
+ struct sk_buff *cur_skb = NULL;
+ struct sk_buff *skb = NULL;
+ struct net_device *netdev = rxq->netdev;
+ u8 sge_num, skb_num;
+ u16 wqebb_cnt = 0;
+
+ head_skb = netdev_alloc_skb_ip_align(netdev, HINIC3_RX_HDR_SIZE);
+ if (unlikely(!head_skb))
+ return NULL;
+
+ sge_num = HINIC3_GET_SGE_NUM(pkt_len, rxq);
+ if (likely(sge_num <= MAX_SKB_FRAGS))
+ skb_num = 1;
+ else
+ skb_num = (sge_num / MAX_SKB_FRAGS) +
+ ((sge_num % MAX_SKB_FRAGS) ? 1 : 0);
+
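+	/* A packet spanning more than MAX_SKB_FRAGS buffers needs extra skbs
+	 * chained on the head skb's frag_list to hold the remaining frags.
+	 */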
+ while (unlikely(skb_num > 1)) {
+ cur_skb = netdev_alloc_skb_ip_align(netdev, HINIC3_RX_HDR_SIZE);
+ if (unlikely(!cur_skb))
+ goto alloc_skb_fail;
+
+ if (!skb) {
+ skb_shinfo(head_skb)->frag_list = cur_skb;
+ skb = cur_skb;
+ } else {
+ skb->next = cur_skb;
+ skb = cur_skb;
+ }
+
+ skb_num--;
+ }
+
+ prefetchw(head_skb->data);
+ wqebb_cnt = sge_num;
+
+ packaging_skb(rxq, head_skb, sge_num, pkt_len);
+
+ rxq->cons_idx += wqebb_cnt;
+ rxq->delta += wqebb_cnt;
+
+ return head_skb;
+
+alloc_skb_fail:
+ dev_kfree_skb_any(head_skb);
+ return NULL;
+}
+
+void hinic3_rxq_get_stats(struct hinic3_rxq *rxq,
+ struct hinic3_rxq_stats *stats)
+{
+ struct hinic3_rxq_stats *rxq_stats = &rxq->rxq_stats;
+ unsigned int start;
+
+ u64_stats_update_begin(&stats->syncp);
+ do {
+ start = u64_stats_fetch_begin(&rxq_stats->syncp);
+ stats->bytes = rxq_stats->bytes;
+ stats->packets = rxq_stats->packets;
+ stats->errors = rxq_stats->csum_errors +
+ rxq_stats->other_errors;
+ stats->csum_errors = rxq_stats->csum_errors;
+ stats->other_errors = rxq_stats->other_errors;
+ stats->dropped = rxq_stats->dropped;
+ stats->xdp_dropped = rxq_stats->xdp_dropped;
+ stats->rx_buf_empty = rxq_stats->rx_buf_empty;
+ } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+ u64_stats_update_end(&stats->syncp);
+}
+
+void hinic3_rxq_clean_stats(struct hinic3_rxq_stats *rxq_stats)
+{
+ u64_stats_update_begin(&rxq_stats->syncp);
+ rxq_stats->bytes = 0;
+ rxq_stats->packets = 0;
+ rxq_stats->errors = 0;
+ rxq_stats->csum_errors = 0;
+ rxq_stats->other_errors = 0;
+ rxq_stats->dropped = 0;
+ rxq_stats->xdp_dropped = 0;
+ rxq_stats->rx_buf_empty = 0;
+
+ rxq_stats->alloc_skb_err = 0;
+ rxq_stats->alloc_rx_buf_err = 0;
+ rxq_stats->xdp_large_pkt = 0;
+ rxq_stats->restore_drop_sge = 0;
+ rxq_stats->rsvd2 = 0;
+ u64_stats_update_end(&rxq_stats->syncp);
+}
+
+static void rxq_stats_init(struct hinic3_rxq *rxq)
+{
+ struct hinic3_rxq_stats *rxq_stats = &rxq->rxq_stats;
+
+ u64_stats_init(&rxq_stats->syncp);
+ hinic3_rxq_clean_stats(rxq_stats);
+}
+
+#ifndef HAVE_ETH_GET_HEADLEN_FUNC
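+/* Fallback for kernels without eth_get_headlen(): walk the L2/L3/L4 headers
+ * to work out how many bytes to pull into the skb linear area.
+ */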
+static unsigned int hinic3_eth_get_headlen(unsigned char *data, unsigned int max_len)
+{
+#define IP_FRAG_OFFSET 0x1FFF
+#define FCOE_HLEN 38
+#define ETH_P_8021_AD 0x88A8
+#define ETH_P_8021_Q 0x8100
+#define TCP_HEAD_OFFSET 12
+ union {
+ unsigned char *data;
+ struct ethhdr *eth;
+ struct vlan_ethhdr *vlan;
+ struct iphdr *ipv4;
+ struct ipv6hdr *ipv6;
+ } hdr;
+ u16 protocol;
+ u8 nexthdr = 0;
+ u8 hlen;
+
+ if (unlikely(max_len < ETH_HLEN))
+ return max_len;
+
+ hdr.data = data;
+ protocol = hdr.eth->h_proto;
+
+ /* L2 header */
+ if (protocol == htons(ETH_P_8021_AD) ||
+ protocol == htons(ETH_P_8021_Q)) {
+ if (unlikely(max_len < ETH_HLEN + VLAN_HLEN))
+ return max_len;
+
+ /* L3 protocol */
+ protocol = hdr.vlan->h_vlan_encapsulated_proto;
+ hdr.data += sizeof(struct vlan_ethhdr);
+ } else {
+ hdr.data += ETH_HLEN;
+ }
+
+ /* L3 header */
+ switch (protocol) {
+ case htons(ETH_P_IP):
+ if ((int)(hdr.data - data) >
+ (int)(max_len - sizeof(struct iphdr)))
+ return max_len;
+
+ /* L3 header length = (1st byte & 0x0F) << 2 */
+ hlen = (hdr.data[0] & 0x0F) << 2;
+
+ if (hlen < sizeof(struct iphdr))
+ return (unsigned int)(hdr.data - data);
+
+ if (!(hdr.ipv4->frag_off & htons(IP_FRAG_OFFSET)))
+ nexthdr = hdr.ipv4->protocol;
+
+ hdr.data += hlen;
+ break;
+
+ case htons(ETH_P_IPV6):
+ if ((int)(hdr.data - data) >
+ (int)(max_len - sizeof(struct ipv6hdr)))
+ return max_len;
+ /* L4 protocol */
+ nexthdr = hdr.ipv6->nexthdr;
+ hdr.data += sizeof(struct ipv6hdr);
+ break;
+
+ case htons(ETH_P_FCOE):
+ hdr.data += FCOE_HLEN;
+ break;
+
+ default:
+ return (unsigned int)(hdr.data - data);
+ }
+
+ /* L4 header */
+ switch (nexthdr) {
+ case IPPROTO_TCP:
+ if ((int)(hdr.data - data) >
+ (int)(max_len - sizeof(struct tcphdr)))
+ return max_len;
+
+		/* L4 header length = (13th byte & 0xF0) >> 2 */
+ if (((hdr.data[TCP_HEAD_OFFSET] & 0xF0) >>
+ HINIC3_HEADER_DATA_UNIT) > sizeof(struct tcphdr))
+ hdr.data += ((hdr.data[TCP_HEAD_OFFSET] & 0xF0) >>
+ HINIC3_HEADER_DATA_UNIT);
+ else
+ hdr.data += sizeof(struct tcphdr);
+ break;
+ case IPPROTO_UDP:
+ case IPPROTO_UDPLITE:
+ hdr.data += sizeof(struct udphdr);
+ break;
+
+ case IPPROTO_SCTP:
+ hdr.data += sizeof(struct sctphdr);
+ break;
+ default:
+ break;
+ }
+
+ if ((hdr.data - data) > max_len)
+ return max_len;
+ else
+ return (unsigned int)(hdr.data - data);
+}
+#endif
+
+static void hinic3_pull_tail(struct sk_buff *skb)
+{
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+ unsigned char *va = NULL;
+ unsigned int pull_len;
+
+ /* it is valid to use page_address instead of kmap since we are
+	 * working with pages allocated out of the lowmem pool per
+ * alloc_page(GFP_ATOMIC)
+ */
+ va = skb_frag_address(frag);
+
+#ifdef HAVE_ETH_GET_HEADLEN_FUNC
+ /* we need the header to contain the greater of either ETH_HLEN or
+ * 60 bytes if the skb->len is less than 60 for skb_pad.
+ */
+#ifdef ETH_GET_HEADLEN_NEED_DEV
+ pull_len = eth_get_headlen(skb->dev, va, HINIC3_RX_HDR_SIZE);
+#else
+ pull_len = eth_get_headlen(va, HINIC3_RX_HDR_SIZE);
+#endif
+
+#else
+ pull_len = hinic3_eth_get_headlen(va, HINIC3_RX_HDR_SIZE);
+#endif
+
+ /* align pull length to size of long to optimize memcpy performance */
+ skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
+
+ /* update all of the pointers */
+ skb_frag_size_sub(frag, (int)pull_len);
+ frag->page_offset += (int)pull_len;
+
+ skb->data_len -= pull_len;
+ skb->tail += pull_len;
+}
+
+static void hinic3_rx_csum(struct hinic3_rxq *rxq, u32 offload_type,
+ u32 status, struct sk_buff *skb)
+{
+ struct net_device *netdev = rxq->netdev;
+ u32 pkt_type = HINIC3_GET_RX_PKT_TYPE(offload_type);
+ u32 ip_type = HINIC3_GET_RX_IP_TYPE(offload_type);
+ u32 pkt_fmt = HINIC3_GET_RX_TUNNEL_PKT_FORMAT(offload_type);
+
+ u32 csum_err;
+
+ csum_err = HINIC3_GET_RX_CSUM_ERR(status);
+ if (unlikely(csum_err == HINIC3_RX_CSUM_IPSU_OTHER_ERR))
+ rxq->rxq_stats.other_errors++;
+
+ if (!(netdev->features & NETIF_F_RXCSUM))
+ return;
+
+ if (unlikely(csum_err)) {
+ /* pkt type is recognized by HW, and csum is wrong */
+ if (!(csum_err & (HINIC3_RX_CSUM_HW_CHECK_NONE |
+ HINIC3_RX_CSUM_IPSU_OTHER_ERR)))
+ rxq->rxq_stats.csum_errors++;
+ skb->ip_summed = CHECKSUM_NONE;
+ return;
+ }
+
+ if (ip_type == HINIC3_RX_INVALID_IP_TYPE ||
+ !(pkt_fmt == HINIC3_RX_PKT_FORMAT_NON_TUNNEL ||
+ pkt_fmt == HINIC3_RX_PKT_FORMAT_VXLAN)) {
+ skb->ip_summed = CHECKSUM_NONE;
+ return;
+ }
+
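+	/* Only TCP/UDP/SCTP packets that HW fully parsed are marked as
+	 * CHECKSUM_UNNECESSARY; everything else is left for SW checksumming.
+	 */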
+ switch (pkt_type) {
+ case HINIC3_RX_TCP_PKT:
+ case HINIC3_RX_UDP_PKT:
+ case HINIC3_RX_SCTP_PKT:
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ break;
+ default:
+ skb->ip_summed = CHECKSUM_NONE;
+ break;
+ }
+}
+
+#ifdef HAVE_SKBUFF_CSUM_LEVEL
+static void hinic3_rx_gro(struct hinic3_rxq *rxq, u32 offload_type,
+ struct sk_buff *skb)
+{
+ struct net_device *netdev = rxq->netdev;
+ bool l2_tunnel = false;
+
+ if (!(netdev->features & NETIF_F_GRO))
+ return;
+
+ l2_tunnel =
+ HINIC3_GET_RX_TUNNEL_PKT_FORMAT(offload_type) ==
+ HINIC3_RX_PKT_FORMAT_VXLAN ? 1 : 0;
+ if (l2_tunnel && skb->ip_summed == CHECKSUM_UNNECESSARY)
+ /* If we checked the outer header let the stack know */
+ skb->csum_level = 1;
+}
+#endif /* HAVE_SKBUFF_CSUM_LEVEL */
+
+static void hinic3_copy_lp_data(struct hinic3_nic_dev *nic_dev,
+ struct sk_buff *skb)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 *lb_buf = nic_dev->lb_test_rx_buf;
+ void *frag_data = NULL;
+ int lb_len = nic_dev->lb_pkt_len;
+ int pkt_offset, frag_len, i;
+
+ if (nic_dev->lb_test_rx_idx == LP_PKT_CNT) {
+ nic_dev->lb_test_rx_idx = 0;
+		nicif_warn(nic_dev, rx_err, netdev, "Loopback test warning, received too many test pkts\n");
+ }
+
+ if (skb->len != (u32)(nic_dev->lb_pkt_len)) {
+ nicif_warn(nic_dev, rx_err, netdev, "Wrong packet length\n");
+ nic_dev->lb_test_rx_idx++;
+ return;
+ }
+
+ pkt_offset = nic_dev->lb_test_rx_idx * lb_len;
+ frag_len = (int)skb_headlen(skb);
+ memcpy(lb_buf + pkt_offset, skb->data, (size_t)(u32)frag_len);
+
+ pkt_offset += frag_len;
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ frag_data = skb_frag_address(&skb_shinfo(skb)->frags[i]);
+ frag_len = (int)skb_frag_size(&skb_shinfo(skb)->frags[i]);
+ memcpy(lb_buf + pkt_offset, frag_data, (size_t)(u32)frag_len);
+
+ pkt_offset += frag_len;
+ }
+ nic_dev->lb_test_rx_idx++;
+}
+
+static inline void hinic3_lro_set_gso_params(struct sk_buff *skb, u16 num_lro)
+{
+ struct ethhdr *eth = (struct ethhdr *)(skb->data);
+ __be16 proto;
+
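+	/* HW aggregated num_lro segments into this skb; fill in GSO metadata
+	 * so the stack can resegment it later if needed.
+	 */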
+ proto = __vlan_get_protocol(skb, eth->h_proto, NULL);
+
+ skb_shinfo(skb)->gso_size = (u16)DIV_ROUND_UP((skb->len - skb_headlen(skb)), num_lro);
+ skb_shinfo(skb)->gso_type = (proto == htons(ETH_P_IP)) ? SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
+ skb_shinfo(skb)->gso_segs = num_lro;
+}
+
+#ifdef HAVE_XDP_SUPPORT
+enum hinic3_xdp_status {
+ // bpf_prog status
+ HINIC3_XDP_PROG_EMPTY,
+ // pkt action
+ HINIC3_XDP_PKT_PASS,
+ HINIC3_XDP_PKT_DROP,
+};
+
+static void update_drop_rx_info(struct hinic3_rxq *rxq, u16 weqbb_num)
+{
+ struct hinic3_rx_info *rx_info = NULL;
+
+ while (weqbb_num) {
+ rx_info = &rxq->rx_info[rxq->cons_idx & rxq->q_mask];
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool)
+ goto discard_direct;
+#endif
+ if (likely(page_to_nid(rx_info->page) == numa_node_id()))
+ hinic3_reuse_rx_page(rxq, rx_info);
+ goto discard_direct;
+
+discard_direct:
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+ rxq->cons_idx++;
+ rxq->delta++;
+
+ weqbb_num--;
+ }
+}
+
+int hinic3_run_xdp(struct hinic3_rxq *rxq, u32 pkt_len, struct xdp_buff *xdp)
+{
+ struct bpf_prog *xdp_prog = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ int result = HINIC3_XDP_PKT_PASS;
+ u16 weqbb_num = 1; /* xdp can only use one rx_buff */
+ u8 *va = NULL;
+ u32 act;
+
+ rcu_read_lock();
+ xdp_prog = READ_ONCE(rxq->xdp_prog);
+ if (!xdp_prog) {
+ result = HINIC3_XDP_PROG_EMPTY;
+ goto unlock_rcu;
+ }
+
+ if (unlikely(pkt_len > rxq->buf_len)) {
+ RXQ_STATS_INC(rxq, xdp_large_pkt);
+ weqbb_num = HINIC3_GET_SGE_NUM(pkt_len, rxq);
+ result = HINIC3_XDP_PKT_DROP;
+ goto xdp_out;
+ }
+
+ rx_info = &rxq->rx_info[rxq->cons_idx & rxq->q_mask];
+ va = (u8 *)page_address(rx_info->page) + rx_info->page_offset;
+ prefetch(va);
+ dma_sync_single_range_for_cpu(rxq->dev, rx_info->buf_dma_addr,
+ rx_info->page_offset,
+ rxq->buf_len, DMA_FROM_DEVICE);
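+	/* Build a linear xdp_buff over the single RX buffer for the BPF program */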
+ xdp->data = va;
+ xdp->data_hard_start = xdp->data;
+ xdp->data_end = xdp->data + pkt_len;
+#ifdef HAVE_XDP_FRAME_SZ
+ xdp->frame_sz = rxq->buf_len;
+#endif
+#ifdef HAVE_XDP_DATA_META
+ xdp_set_data_meta_invalid(xdp);
+#endif
+ prefetchw(xdp->data_hard_start);
+ act = bpf_prog_run_xdp(xdp_prog, xdp);
+ switch (act) {
+ case XDP_PASS:
+ result = HINIC3_XDP_PKT_PASS;
+ break;
+ case XDP_DROP:
+ result = HINIC3_XDP_PKT_DROP;
+ break;
+ default:
+ result = HINIC3_XDP_PKT_DROP;
+ bpf_warn_invalid_xdp_action(act);
+ }
+
+xdp_out:
+ if (result == HINIC3_XDP_PKT_DROP) {
+ RXQ_STATS_INC(rxq, xdp_dropped);
+ update_drop_rx_info(rxq, weqbb_num);
+ }
+
+unlock_rcu:
+ rcu_read_unlock();
+
+ return result;
+}
+
+static bool hinic3_add_rx_frag_with_xdp(struct hinic3_rxq *rxq, u32 pkt_len,
+ struct hinic3_rx_info *rx_info,
+ struct sk_buff *skb,
+ struct xdp_buff *xdp)
+{
+ struct page *page = rx_info->page;
+
+ if (pkt_len <= HINIC3_RX_HDR_SIZE) {
+ __skb_put_data(skb, xdp->data, pkt_len);
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ page_pool_put_full_page(rx_info->page_pool,
+ page, false);
+ return false;
+ }
+#endif
+
+ if (likely(page_to_nid(page) == numa_node_id()))
+ return true;
+
+ put_page(page);
+ goto umap_page;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ (int)(rx_info->page_offset +
+ (xdp->data - xdp->data_hard_start)),
+ (int)pkt_len, rxq->buf_len);
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ skb_mark_for_recycle(skb);
+ return false;
+ }
+#endif
+ if (unlikely(page_to_nid(page) != numa_node_id()))
+ goto umap_page;
+ if (unlikely(page_count(page) != 1))
+ goto umap_page;
+
+ rx_info->page_offset ^= rxq->buf_len;
+ get_page(page);
+
+ return true;
+
+umap_page:
+ dma_unmap_page(rxq->dev, rx_info->buf_dma_addr,
+ rxq->dma_rx_buff_size, DMA_FROM_DEVICE);
+ return false;
+}
+
+static struct sk_buff *hinic3_fetch_rx_buffer_xdp(struct hinic3_rxq *rxq,
+ u32 pkt_len,
+ struct xdp_buff *xdp)
+{
+ struct sk_buff *skb;
+ struct hinic3_rx_info *rx_info;
+ u32 sw_ci;
+ bool reuse;
+
+ sw_ci = rxq->cons_idx & rxq->q_mask;
+ rx_info = &rxq->rx_info[sw_ci];
+
+ skb = netdev_alloc_skb_ip_align(rxq->netdev, HINIC3_RX_HDR_SIZE);
+ if (unlikely(!skb))
+ return NULL;
+
+ reuse = hinic3_add_rx_frag_with_xdp(rxq, pkt_len, rx_info, skb, xdp);
+ if (likely(reuse))
+ hinic3_reuse_rx_page(rxq, rx_info);
+
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+
+ rxq->cons_idx += 1;
+ rxq->delta += 1;
+
+ return skb;
+}
+
+#endif
+
+static int recv_one_pkt(struct hinic3_rxq *rxq, struct hinic3_rq_cqe *rx_cqe,
+ u32 pkt_len, u32 vlan_len, u32 status)
+{
+ struct sk_buff *skb = NULL;
+ struct net_device *netdev = rxq->netdev;
+ u32 offload_type;
+ u16 num_lro;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(rxq->netdev);
+
+#ifdef HAVE_XDP_SUPPORT
+ u32 xdp_status;
+ struct xdp_buff xdp = { 0 };
+
+ xdp_status = (u32)(hinic3_run_xdp(rxq, pkt_len, &xdp));
+ if (xdp_status == HINIC3_XDP_PKT_DROP)
+ return 0;
+
+ // build skb
+ if (xdp_status != HINIC3_XDP_PROG_EMPTY) {
+ // xdp_prog configured, build skb with xdp
+ skb = hinic3_fetch_rx_buffer_xdp(rxq, pkt_len, &xdp);
+ } else {
+ // xdp_prog not configured, build skb
+ skb = hinic3_fetch_rx_buffer(rxq, pkt_len);
+ }
+#else
+
+ // xdp is not supported
+ skb = hinic3_fetch_rx_buffer(rxq, pkt_len);
+#endif
+ if (unlikely(!skb)) {
+ RXQ_STATS_INC(rxq, alloc_skb_err);
+ return -ENOMEM;
+ }
+
+ /* place header in linear portion of buffer */
+ if (skb_is_nonlinear(skb))
+ hinic3_pull_tail(skb);
+
+ offload_type = hinic3_hw_cpu32(rx_cqe->offload_type);
+ hinic3_rx_csum(rxq, offload_type, status, skb);
+
+#ifdef HAVE_SKBUFF_CSUM_LEVEL
+ hinic3_rx_gro(rxq, offload_type, skb);
+#endif
+
+#if defined(NETIF_F_HW_VLAN_CTAG_RX)
+ if ((netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+ HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type)) {
+#else
+ if ((netdev->features & NETIF_F_HW_VLAN_RX) &&
+ HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type)) {
+#endif
+ u16 vid = HINIC3_GET_RX_VLAN_TAG(vlan_len);
+
+ /* if the packet is a vlan pkt, the vid may be 0 */
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vid);
+ }
+
+ if (unlikely(test_bit(HINIC3_LP_TEST, &nic_dev->flags)))
+ hinic3_copy_lp_data(nic_dev, skb);
+
+ num_lro = HINIC3_GET_RX_NUM_LRO(status);
+ if (num_lro > 1)
+ hinic3_lro_set_gso_params(skb, num_lro);
+
+ skb_record_rx_queue(skb, rxq->q_id);
+ skb->protocol = eth_type_trans(skb, netdev);
+
+ if (skb_has_frag_list(skb)) {
+#ifdef HAVE_NAPI_GRO_FLUSH_OLD
+ napi_gro_flush(&rxq->irq_cfg->napi, false);
+#else
+ napi_gro_flush(&rxq->irq_cfg->napi);
+#endif
+ netif_receive_skb(skb);
+ } else {
+ napi_gro_receive(&rxq->irq_cfg->napi, skb);
+ }
+
+ return 0;
+}
+
+#define LRO_PKT_HDR_LEN_IPV4 66
+#define LRO_PKT_HDR_LEN_IPV6 86
+#define LRO_PKT_HDR_LEN(cqe) \
+ (HINIC3_GET_RX_IP_TYPE(hinic3_hw_cpu32((cqe)->offload_type)) == \
+ HINIC3_RX_IPV6_PKT ? LRO_PKT_HDR_LEN_IPV6 : LRO_PKT_HDR_LEN_IPV4)
+
+int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(rxq->netdev);
+ u32 sw_ci, status, pkt_len, vlan_len, dropped = 0;
+ struct hinic3_rq_cqe *rx_cqe = NULL;
+ u64 rx_bytes = 0;
+ u16 num_lro;
+ int pkts = 0, nr_pkts = 0;
+ u16 num_wqe = 0;
+
+ while (likely(pkts < budget)) {
+ sw_ci = rxq->cons_idx & rxq->q_mask;
+ rx_cqe = rxq->rx_info[sw_ci].cqe;
+ status = hinic3_hw_cpu32(rx_cqe->status);
+ if (!HINIC3_GET_RX_DONE(status))
+ break;
+
+ /* make sure we read rx_done before packet length */
+ rmb();
+
+ vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len);
+ pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len);
+ if (recv_one_pkt(rxq, rx_cqe, pkt_len, vlan_len, status))
+ break;
+
+ rx_bytes += pkt_len;
+ pkts++;
+ nr_pkts++;
+
+ num_lro = HINIC3_GET_RX_NUM_LRO(status);
+ if (num_lro) {
+ rx_bytes += ((num_lro - 1) * LRO_PKT_HDR_LEN(rx_cqe));
+
+ num_wqe += HINIC3_GET_SGE_NUM(pkt_len, rxq);
+ }
+
+ rx_cqe->status = 0;
+
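+		/* Stop early once LRO packets have consumed enough WQEs so
+		 * the ring can be replenished promptly below.
+		 */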
+ if (num_wqe >= nic_dev->lro_replenish_thld)
+ break;
+ }
+
+ if (rxq->delta >= HINIC3_RX_BUFFER_WRITE)
+ hinic3_rx_fill_buffers(rxq);
+
+ u64_stats_update_begin(&rxq->rxq_stats.syncp);
+ rxq->rxq_stats.packets += (u64)(u32)nr_pkts;
+ rxq->rxq_stats.bytes += rx_bytes;
+ rxq->rxq_stats.dropped += (u64)dropped;
+ u64_stats_update_end(&rxq->rxq_stats.syncp);
+ return pkts;
+}
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+static struct page_pool *hinic3_create_page_pool(struct hinic3_nic_dev *nic_dev,
+ u32 rq_depth,
+ struct hinic3_rx_info *rx_info_arr)
+{
+ struct page_pool_params pp_params = {
+ .flags = PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG |
+ PP_FLAG_DMA_SYNC_DEV,
+ .order = nic_dev->page_order,
+ .pool_size = rq_depth * nic_dev->rx_buff_len /
+ (PAGE_SIZE << nic_dev->page_order),
+ .nid = dev_to_node(&(nic_dev->pdev->dev)),
+ .dev = &(nic_dev->pdev->dev),
+ .dma_dir = DMA_FROM_DEVICE,
+ .offset = 0,
+ .max_len = PAGE_SIZE << nic_dev->page_order,
+ };
+ struct page_pool *page_pool;
+ int i;
+
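+	/* One pool per RQ, sized to back every descriptor with rx_buff_len
+	 * bytes; all rx_info entries of this queue share the pool.
+	 */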
+ page_pool = nic_dev->page_pool_enabled ?
+ page_pool_create(&pp_params) : NULL;
+ for (i = 0; i < rq_depth; i++)
+ rx_info_arr[i].page_pool = page_pool;
+ return page_pool;
+}
+#endif
+
+int hinic3_alloc_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res)
+{
+ struct hinic3_dyna_rxq_res *rqres = NULL;
+ u64 cqe_mem_size = sizeof(struct hinic3_rq_cqe) * rq_depth;
+ int idx;
+ u32 pkts;
+ u64 size;
+
+ for (idx = 0; idx < num_rq; idx++) {
+ rqres = &rxqs_res[idx];
+ size = sizeof(*rqres->rx_info) * rq_depth;
+ rqres->rx_info = kzalloc(size, GFP_KERNEL);
+ if (!rqres->rx_info) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxq%d rx info\n", idx);
+ goto err_alloc_rx_info;
+ }
+ rqres->cqe_start_vaddr =
+ dma_zalloc_coherent(&nic_dev->pdev->dev, cqe_mem_size,
+ &rqres->cqe_start_paddr,
+ GFP_KERNEL);
+ if (!rqres->cqe_start_vaddr) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxq%d cqe\n", idx);
+ goto err_alloc_cqe;
+ }
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ rqres->page_pool = hinic3_create_page_pool(nic_dev, rq_depth,
+ rqres->rx_info);
+ if (nic_dev->page_pool_enabled && !rqres->page_pool) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to create rxq%d page pool\n", idx);
+ goto err_create_page_pool;
+ }
+#endif
+ pkts = hinic3_rx_alloc_buffers(nic_dev, rq_depth,
+ rqres->rx_info);
+ if (!pkts) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxq%d rx buffers\n", idx);
+ goto err_alloc_buffers;
+ }
+ rqres->next_to_alloc = (u16)pkts;
+ }
+ return 0;
+
+err_alloc_buffers:
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ page_pool_destroy(rqres->page_pool);
+err_create_page_pool:
+#endif
+ dma_free_coherent(&nic_dev->pdev->dev, cqe_mem_size,
+ rqres->cqe_start_vaddr,
+ rqres->cqe_start_paddr);
+err_alloc_cqe:
+ kfree(rqres->rx_info);
+err_alloc_rx_info:
+ hinic3_free_rxqs_res(nic_dev, idx, rq_depth, rxqs_res);
+ return -ENOMEM;
+}
+
+void hinic3_free_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res)
+{
+ struct hinic3_dyna_rxq_res *rqres = NULL;
+ u64 cqe_mem_size = sizeof(struct hinic3_rq_cqe) * rq_depth;
+ int idx;
+
+ for (idx = 0; idx < num_rq; idx++) {
+ rqres = &rxqs_res[idx];
+
+ hinic3_rx_free_buffers(nic_dev, rq_depth, rqres->rx_info);
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rqres->page_pool)
+ page_pool_destroy(rqres->page_pool);
+#endif
+ dma_free_coherent(&nic_dev->pdev->dev, cqe_mem_size,
+ rqres->cqe_start_vaddr,
+ rqres->cqe_start_paddr);
+ kfree(rqres->rx_info);
+ }
+}
+
+int hinic3_configure_rxqs(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res)
+{
+ struct hinic3_dyna_rxq_res *rqres = NULL;
+ struct irq_info *msix_entry = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ struct hinic3_rq_cqe *cqe_va = NULL;
+ dma_addr_t cqe_pa;
+ u16 q_id;
+ u32 idx;
+ u32 pkts;
+
+ nic_dev->rxq_get_err_times = 0;
+ for (q_id = 0; q_id < num_rq; q_id++) {
+ rxq = &nic_dev->rxqs[q_id];
+ rqres = &rxqs_res[q_id];
+ msix_entry = &nic_dev->qps_irq_info[q_id];
+
+ rxq->irq_id = msix_entry->irq_id;
+ rxq->msix_entry_idx = msix_entry->msix_entry_idx;
+ rxq->next_to_update = 0;
+ rxq->next_to_alloc = rqres->next_to_alloc;
+ rxq->q_depth = rq_depth;
+ rxq->delta = rxq->q_depth;
+ rxq->q_mask = rxq->q_depth - 1;
+ rxq->cons_idx = 0;
+
+ rxq->last_sw_pi = rxq->q_depth - 1;
+ rxq->last_sw_ci = 0;
+ rxq->last_hw_ci = 0;
+ rxq->rx_check_err_cnt = 0;
+ rxq->rxq_print_times = 0;
+ rxq->last_packets = 0;
+ rxq->restore_buf_num = 0;
+
+ rxq->rx_info = rqres->rx_info;
+
+ /* fill cqe */
+ cqe_va = (struct hinic3_rq_cqe *)rqres->cqe_start_vaddr;
+ cqe_pa = rqres->cqe_start_paddr;
+ for (idx = 0; idx < rq_depth; idx++) {
+ rxq->rx_info[idx].cqe = cqe_va;
+ rxq->rx_info[idx].cqe_dma = cqe_pa;
+ cqe_va++;
+ cqe_pa += sizeof(*rxq->rx_info->cqe);
+ }
+
+ rxq->rq = hinic3_get_nic_queue(nic_dev->hwdev, rxq->q_id,
+ HINIC3_RQ);
+ if (!rxq->rq) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to get rq\n");
+ return -EINVAL;
+ }
+
+ pkts = hinic3_rx_fill_wqe(rxq);
+ if (pkts != rxq->q_depth) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to fill rx wqe\n");
+ return -EFAULT;
+ }
+
+ pkts = hinic3_rx_fill_buffers(rxq);
+ if (!pkts) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to fill Rx buffer\n");
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+void hinic3_free_rxqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ kfree(nic_dev->rxqs);
+ nic_dev->rxqs = NULL;
+}
+
+int hinic3_alloc_rxqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct hinic3_rxq *rxq = NULL;
+ u16 num_rxqs = nic_dev->max_qps;
+ u16 q_id;
+ u64 rxq_size;
+
+ rxq_size = num_rxqs * sizeof(*nic_dev->rxqs);
+ if (!rxq_size) {
+ nic_err(&pdev->dev, "Cannot allocate zero size rxqs\n");
+ return -EINVAL;
+ }
+
+ nic_dev->rxqs = kzalloc(rxq_size, GFP_KERNEL);
+ if (!nic_dev->rxqs) {
+ nic_err(&pdev->dev, "Failed to allocate rxqs\n");
+ return -ENOMEM;
+ }
+
+ for (q_id = 0; q_id < num_rxqs; q_id++) {
+ rxq = &nic_dev->rxqs[q_id];
+ rxq->netdev = netdev;
+ rxq->dev = &pdev->dev;
+ rxq->q_id = q_id;
+ rxq->buf_len = nic_dev->rx_buff_len;
+ rxq->rx_buff_shift = (u32)ilog2(nic_dev->rx_buff_len);
+ rxq->dma_rx_buff_size = nic_dev->dma_rx_buff_size;
+ rxq->q_depth = nic_dev->q_params.rq_depth;
+ rxq->q_mask = nic_dev->q_params.rq_depth - 1;
+
+ rxq_stats_init(rxq);
+ }
+
+ return 0;
+}
+
+int hinic3_rx_configure(struct net_device *netdev, u8 dcb_en)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 rq2iq_map[HINIC3_MAX_NUM_RQ];
+ int err;
+
+	/* By default, map every RQ to all IQs */
+
+ memset(rq2iq_map, 0xFF, sizeof(rq2iq_map));
+
+ if (test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ err = hinic3_rss_init(nic_dev, rq2iq_map, sizeof(rq2iq_map), dcb_en);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to init rss\n");
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+void hinic3_rx_remove_configure(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags))
+ hinic3_rss_deinit(nic_dev);
+}
+
+int rxq_restore(struct hinic3_nic_dev *nic_dev, u16 q_id, u16 hw_ci)
+{
+ struct hinic3_rxq *rxq = &nic_dev->rxqs[q_id];
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ dma_addr_t dma_addr;
+ u32 free_wqebbs = rxq->delta - rxq->restore_buf_num;
+ u32 buff_pi;
+ u32 i;
+ int err;
+
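+	/* Refill the free WQEBBs, drop the stale buffer just before hw_ci,
+	 * resync the SW ring indexes with the HW consumer index and re-arm
+	 * the queue with a doorbell.
+	 */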
+ if (rxq->delta < rxq->restore_buf_num)
+ return -EINVAL;
+
+ if (rxq->restore_buf_num == 0) /* start restore process */
+ rxq->restore_pi = rxq->next_to_update;
+
+ buff_pi = rxq->restore_pi;
+
+ if ((((rxq->cons_idx & rxq->q_mask) + rxq->q_depth -
+ rxq->next_to_update) % rxq->q_depth) != rxq->delta)
+ return -EINVAL;
+
+ for (i = 0; i < free_wqebbs; i++) {
+ rx_info = &rxq->rx_info[buff_pi];
+
+ if (unlikely(!rx_alloc_mapped_page(nic_dev, rx_info))) {
+ RXQ_STATS_INC(rxq, alloc_rx_buf_err);
+ rxq->restore_pi = (u16)((rxq->restore_pi + i) & rxq->q_mask);
+ return -ENOMEM;
+ }
+
+ dma_addr = rx_info->buf_dma_addr + rx_info->page_offset;
+
+ rq_wqe = rx_info->rq_wqe;
+
+ if (rxq->rq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ rq_wqe->extend_wqe.buf_desc.sge.hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->extend_wqe.buf_desc.sge.lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ } else {
+ rq_wqe->normal_wqe.buf_hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->normal_wqe.buf_lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ }
+ buff_pi = (u16)((buff_pi + 1) & rxq->q_mask);
+ rxq->restore_buf_num++;
+ }
+
+ nic_info(&nic_dev->pdev->dev, "rxq %u restore_buf_num:%u\n", q_id, rxq->restore_buf_num);
+
+ rx_info = &rxq->rx_info[(hw_ci + rxq->q_depth - 1) & rxq->q_mask];
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool && rx_info->page) {
+ page_pool_put_full_page(rx_info->page_pool,
+ rx_info->page, false);
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+ goto reset_rxq;
+ }
+#endif
+ if (rx_info->buf_dma_addr) {
+ dma_unmap_page(&nic_dev->pdev->dev, rx_info->buf_dma_addr,
+ nic_dev->dma_rx_buff_size, DMA_FROM_DEVICE);
+ rx_info->buf_dma_addr = 0;
+ }
+
+ if (rx_info->page) {
+ __free_pages(rx_info->page, nic_dev->page_order);
+ rx_info->page = NULL;
+ }
+ goto reset_rxq;
+
+reset_rxq:
+ rxq->delta = 1;
+ rxq->next_to_update = (u16)((hw_ci + rxq->q_depth - 1) & rxq->q_mask);
+ rxq->cons_idx = (u16)((rxq->next_to_update + 1) & rxq->q_mask);
+ rxq->restore_buf_num = 0;
+ rxq->next_to_alloc = rxq->next_to_update;
+
+ for (i = 0; i < rxq->q_depth; i++) {
+ if (!HINIC3_GET_RX_DONE(hinic3_hw_cpu32(rxq->rx_info[i].cqe->status)))
+ continue;
+
+ RXQ_STATS_INC(rxq, restore_drop_sge);
+ rxq->rx_info[i].cqe->status = 0;
+ }
+
+ err = hinic3_cache_out_qps_res(nic_dev->hwdev);
+ if (err) {
+ clear_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags);
+ return err;
+ }
+
+ hinic3_write_db(rxq->rq, rxq->q_id & (NIC_DCB_COS_MAX - 1),
+ RQ_CFLAG_DP,
+ (u16)((u32)rxq->next_to_update << rxq->rq->wqe_type));
+
+
+ return 0;
+}
+
+bool rxq_is_normal(struct hinic3_rxq *rxq, struct rxq_check_info rxq_info)
+{
+ u32 status;
+
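+	/* The queue counts as healthy if any progress happened since the last
+	 * check: packets were received, HW PI/CI or SW PI moved, or a CQE is
+	 * already marked done.
+	 */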
+ if (rxq->rxq_stats.packets != rxq->last_packets || rxq_info.hw_pi != rxq_info.hw_ci ||
+ rxq_info.hw_ci != rxq->last_hw_ci || rxq->next_to_update != rxq->last_sw_pi)
+ return true;
+
+	/* HW has no free RX WQEs and the driver has not received any packets */
+ status = rxq->rx_info[rxq->cons_idx & rxq->q_mask].cqe->status;
+ if (HINIC3_GET_RX_DONE(hinic3_hw_cpu32(status)))
+ return true;
+
+ if ((rxq->cons_idx & rxq->q_mask) != rxq->last_sw_ci ||
+ rxq->rxq_stats.packets != rxq->last_packets ||
+ rxq->next_to_update != rxq_info.hw_pi)
+ return true;
+
+ return false;
+}
+
+#define RXQ_CHECK_ERR_TIMES 2
+#define RXQ_PRINT_MAX_TIMES 3
+#define RXQ_GET_ERR_MAX_TIMES 3
+void hinic3_rxq_check_work_handler(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_nic_dev *nic_dev = container_of(delay, struct hinic3_nic_dev,
+ rxq_check_work);
+ struct rxq_check_info *rxq_info = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ u64 size;
+ u16 qid;
+ int err;
+
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags))
+ return;
+
+ if (test_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ queue_delayed_work(nic_dev->workq, &nic_dev->rxq_check_work, HZ);
+
+ size = sizeof(*rxq_info) * nic_dev->q_params.num_qps;
+ if (!size)
+ return;
+
+ rxq_info = kzalloc(size, GFP_KERNEL);
+ if (!rxq_info)
+ return;
+
+ err = hinic3_get_rxq_hw_info(nic_dev->hwdev, rxq_info, nic_dev->q_params.num_qps,
+ nic_dev->rxqs[0].rq->wqe_type);
+ if (err) {
+ nic_dev->rxq_get_err_times++;
+ if (nic_dev->rxq_get_err_times >= RXQ_GET_ERR_MAX_TIMES)
+ clear_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags);
+ goto free_rxq_info;
+ }
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ rxq = &nic_dev->rxqs[qid];
+ if (!rxq_is_normal(rxq, rxq_info[qid])) {
+ rxq->rx_check_err_cnt++;
+ if (rxq->rx_check_err_cnt < RXQ_CHECK_ERR_TIMES)
+ continue;
+
+ if (rxq->rxq_print_times <= RXQ_PRINT_MAX_TIMES) {
+ nic_warn(&nic_dev->pdev->dev, "rxq %u wqe abnormal, hw_pi:%u, hw_ci:%u, sw_pi:%u, sw_ci:%u delta:%u\n",
+ qid, rxq_info[qid].hw_pi, rxq_info[qid].hw_ci,
+ rxq->next_to_update,
+ rxq->cons_idx & rxq->q_mask, rxq->delta);
+ rxq->rxq_print_times++;
+ }
+
+ err = rxq_restore(nic_dev, qid, rxq_info[qid].hw_ci);
+ if (err)
+ continue;
+ }
+
+ rxq->rxq_print_times = 0;
+ rxq->rx_check_err_cnt = 0;
+ rxq->last_sw_pi = rxq->next_to_update;
+ rxq->last_sw_ci = rxq->cons_idx & rxq->q_mask;
+ rxq->last_hw_ci = rxq_info[qid].hw_ci;
+ rxq->last_packets = rxq->rxq_stats.packets;
+ }
+
+ nic_dev->rxq_get_err_times = 0;
+
+free_rxq_info:
+ kfree(rxq_info);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
new file mode 100644
index 0000000..7dd4618
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_RX_H
+#define HINIC3_RX_H
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+#include <net/page_pool.h>
+#endif
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/mm_types.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/u64_stats_sync.h>
+
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_dev.h"
+
+/* rx cqe checksum err */
+#define HINIC3_RX_CSUM_IP_CSUM_ERR BIT(0)
+#define HINIC3_RX_CSUM_TCP_CSUM_ERR BIT(1)
+#define HINIC3_RX_CSUM_UDP_CSUM_ERR BIT(2)
+#define HINIC3_RX_CSUM_IGMP_CSUM_ERR BIT(3)
+#define HINIC3_RX_CSUM_ICMPV4_CSUM_ERR BIT(4)
+#define HINIC3_RX_CSUM_ICMPV6_CSUM_ERR BIT(5)
+#define HINIC3_RX_CSUM_SCTP_CRC_ERR BIT(6)
+#define HINIC3_RX_CSUM_HW_CHECK_NONE BIT(7)
+#define HINIC3_RX_CSUM_IPSU_OTHER_ERR BIT(8)
+
+#define HINIC3_HEADER_DATA_UNIT 2
+#define HINIC3_CQE_LEN 32
+
+struct hinic3_rxq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 errors;
+ u64 csum_errors;
+ u64 other_errors;
+ u64 dropped;
+ u64 xdp_dropped;
+ u64 rx_buf_empty;
+ u64 alloc_skb_err;
+ u64 alloc_rx_buf_err;
+ u64 xdp_large_pkt;
+ u64 restore_drop_sge;
+ u64 rsvd2;
+#ifdef HAVE_NDO_GET_STATS64
+ struct u64_stats_sync syncp;
+#else
+ struct u64_stats_sync_empty syncp;
+#endif
+};
+
+struct hinic3_rx_info {
+ dma_addr_t buf_dma_addr;
+
+ struct hinic3_rq_cqe *cqe;
+ dma_addr_t cqe_dma;
+ struct page *page;
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ struct page_pool *page_pool;
+#endif
+ u32 page_offset;
+ u32 rsvd1;
+ struct hinic3_rq_wqe *rq_wqe;
+ struct sk_buff *saved_skb;
+ u32 skb_len;
+ u32 rsvd2;
+};
+
+struct hinic3_rxq {
+ struct net_device *netdev;
+
+ u16 q_id;
+ u16 rsvd1;
+ u32 q_depth;
+ u32 q_mask;
+
+ u16 buf_len;
+ u16 rsvd2;
+ u32 rx_buff_shift;
+ u32 dma_rx_buff_size;
+
+ struct hinic3_rxq_stats rxq_stats;
+ u32 cons_idx;
+ u32 delta;
+
+ u32 irq_id;
+ u16 msix_entry_idx;
+ u16 rsvd3;
+
+ struct hinic3_rx_info *rx_info;
+ struct hinic3_io_queue *rq;
+#ifdef HAVE_XDP_SUPPORT
+ struct bpf_prog *xdp_prog;
+#endif
+
+ struct hinic3_irq *irq_cfg;
+ u16 next_to_alloc;
+ u16 next_to_update;
+ struct device *dev; /* device for DMA mapping */
+
+ u64 status;
+ dma_addr_t cqe_start_paddr;
+ void *cqe_start_vaddr;
+
+ u64 last_moder_packets;
+ u64 last_moder_bytes;
+ u8 last_coalesc_timer_cfg;
+ u8 last_pending_limt;
+ u16 restore_buf_num;
+ u32 rsvd5;
+ u64 rsvd6;
+
+ u32 last_sw_pi;
+ u32 last_sw_ci;
+
+ u32 last_hw_ci;
+ u8 rx_check_err_cnt;
+ u8 rxq_print_times;
+ u16 restore_pi;
+
+ u64 last_packets;
+} ____cacheline_aligned;
+
+struct hinic3_dyna_rxq_res {
+ u16 next_to_alloc;
+ struct hinic3_rx_info *rx_info;
+ dma_addr_t cqe_start_paddr;
+ void *cqe_start_vaddr;
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ struct page_pool *page_pool;
+#endif
+};
+
+int hinic3_alloc_rxqs(struct net_device *netdev);
+
+void hinic3_free_rxqs(struct net_device *netdev);
+
+int hinic3_alloc_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res);
+
+void hinic3_free_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res);
+
+int hinic3_configure_rxqs(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res);
+
+int hinic3_rx_configure(struct net_device *netdev, u8 dcb_en);
+
+void hinic3_rx_remove_configure(struct net_device *netdev);
+
+int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget);
+
+void hinic3_rxq_get_stats(struct hinic3_rxq *rxq,
+ struct hinic3_rxq_stats *stats);
+
+void hinic3_rxq_clean_stats(struct hinic3_rxq_stats *rxq_stats);
+
+void hinic3_rxq_check_work_handler(struct work_struct *work);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h b/drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h
new file mode 100644
index 0000000..051f05d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h
@@ -0,0 +1,220 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2018-2022. All rights reserved.
+ * @file hinic3_srv_nic.h
+ * @details nic service interface
+ * History :
+ * 1.Date : 2018/3/8
+ * Modification: Created file
+ */
+
+#ifndef HINIC3_SRV_NIC_H
+#define HINIC3_SRV_NIC_H
+
+#include <linux/netdevice.h>
+#include "nic_mpu_cmd_defs.h"
+#include "mag_mpu_cmd.h"
+#include "mag_mpu_cmd_defs.h"
+#include "hinic3_lld.h"
+
+enum hinic3_queue_type {
+ HINIC3_SQ,
+ HINIC3_RQ,
+ HINIC3_MAX_QUEUE_TYPE
+};
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_netdev(struct net_device *netdev);
+struct net_device *hinic3_get_netdev_by_lld(struct hinic3_lld_dev *lld_dev);
+
+struct hinic3_event_link_info {
+ u8 valid;
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+};
+
+enum link_err_type {
+ LINK_ERR_MODULE_UNRECOGENIZED,
+ LINK_ERR_NUM,
+};
+
+enum port_module_event_type {
+ HINIC3_PORT_MODULE_CABLE_PLUGGED,
+ HINIC3_PORT_MODULE_CABLE_UNPLUGGED,
+ HINIC3_PORT_MODULE_LINK_ERR,
+ HINIC3_PORT_MODULE_MAX_EVENT,
+};
+
+struct hinic3_port_module_event {
+ enum port_module_event_type type;
+ enum link_err_type err_type;
+};
+
+struct hinic3_dcb_info {
+ u8 dcb_on;
+ u8 default_cos;
+ u8 up_cos[NIC_DCB_COS_MAX];
+};
+
+enum hinic3_nic_event_type {
+ EVENT_NIC_LINK_DOWN,
+ EVENT_NIC_LINK_UP,
+ EVENT_NIC_PORT_MODULE_EVENT,
+ EVENT_NIC_DCB_STATE_CHANGE,
+ EVENT_NIC_BOND_DOWN,
+ EVENT_NIC_BOND_UP,
+ EVENT_NIC_OUTBAND_CFG,
+};
+
+/* *
+ * @brief hinic3_set_mac - set mac address
+ * @param hwdev: device pointer to hwdev
+ * @param mac_addr: mac address to set
+ * @param vlan_id: vlan id
+ * @param func_id: function index
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id, u16 channel);
+
+/* *
+ * @brief hinic3_del_mac - delete mac address
+ * @param hwdev: device pointer to hwdev
+ * @param mac_addr: mac address to delete
+ * @param vlan_id: vlan id
+ * @param func_id: function index
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id, u16 channel);
+
+/* *
+ * @brief hinic3_set_vport_enable - set function valid status
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: global function index
+ * @param enable: 0-disable, 1-enable
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vport_enable(void *hwdev, u16 func_id, bool enable, u16 channel);
+
+/* *
+ * @brief hinic3_set_port_enable - set port status
+ * @param hwdev: device pointer to hwdev
+ * @param enable: 0-disable, 1-enable
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_port_enable(void *hwdev, bool enable, u16 channel);
+
+/* *
+ * @brief hinic3_flush_qps_res - flush queue pairs resource in hardware
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_flush_qps_res(void *hwdev);
+
+/* *
+ * @brief hinic3_cache_out_qps_res - cache out queue pairs wqe resource in hardware
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cache_out_qps_res(void *hwdev);
+
+/* *
+ * @brief hinic3_init_nic_hwdev - init nic hwdev
+ * @param hwdev: device pointer to hwdev
+ * @param pcidev_hdl: pointer to pcidev or handler
+ * @param dev_hdl: pointer to pcidev->dev or handler, for sdk_err() or
+ * dma_alloc()
+ * @param rx_buff_len: receive buffer length
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_init_nic_hwdev(void *hwdev, void *pcidev_hdl, void *dev_hdl, u16 rx_buff_len);
+
+/* *
+ * @brief hinic3_free_nic_hwdev - free nic hwdev
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_nic_hwdev(void *hwdev);
+
+/* *
+ * @brief hinic3_get_speed - get link speed
+ * @param hwdev: device pointer to hwdev
+ * @param speed: returned port speed
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_speed(void *hwdev, enum mag_cmd_port_speed *speed, u16 channel);
+
+int hinic3_get_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state);
+
+int hinic3_get_pf_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state);
+
+int hinic3_get_cos_by_pri(void *hwdev, u8 pri, u8 *cos);
+
+/* *
+ * @brief hinic3_create_qps - create queue pairs
+ * @param hwdev: device pointer to hwdev
+ * @param num_qp: number of queue pairs
+ * @param sq_depth: sq depth
+ * @param rq_depth: rq depth
+ * @param qps_msix_arry: msix info
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_create_qps(void *hwdev, u16 num_qp, u32 sq_depth, u32 rq_depth,
+ struct irq_info *qps_msix_arry);
+
+/* *
+ * @brief hinic3_destroy_qps - destroy queue pairs
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_destroy_qps(void *hwdev);
+
+/* *
+ * @brief hinic3_get_nic_queue - get nic queue
+ * @param hwdev: device pointer to hwdev
+ * @param q_id: queue index
+ * @param q_type: queue type
+ * @retval queue address
+ */
+void *hinic3_get_nic_queue(void *hwdev, u16 q_id, enum hinic3_queue_type q_type);
+
+/* *
+ * @brief hinic3_init_qp_ctxts - init queue pair context
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_init_qp_ctxts(void *hwdev);
+
+/* *
+ * @brief hinic3_free_qp_ctxts - free queue pairs
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_qp_ctxts(void *hwdev);
+
+/* *
+ * @brief hinic3_pf_set_vf_link_state - pf sets the vf link state
+ * @param hwdev: device pointer to hwdev
+ * @param vf_link_forced: whether to force the link state
+ * @param link_state: link state to set; valid only when vf_link_forced is true
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_pf_set_vf_link_state(void *hwdev, bool vf_link_forced, bool link_state);
+
+#endif
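For reference, a minimal bring-up sketch of the service API declared above (the caller name and the zero vlan_id/channel values are assumptions; error handling is reduced to early returns):

	/* hypothetical caller: program the MAC, then enable the vport and the port */
	static int example_nic_up(void *hwdev, const u8 *mac_addr, u16 func_id)
	{
		int err;

		err = hinic3_set_mac(hwdev, mac_addr, 0, func_id, 0);
		if (err)
			return err;

		err = hinic3_set_vport_enable(hwdev, func_id, true, 0);
		if (err)
			return err;

		return hinic3_set_port_enable(hwdev, true, 0);
	}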
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c
new file mode 100644
index 0000000..d3f8696
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c
@@ -0,0 +1,1051 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <net/xfrm.h>
+#include <linux/netdevice.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/tcp.h>
+#include <linux/sctp.h>
+#include <linux/dma-mapping.h>
+#include <linux/types.h>
+#include <linux/u64_stats_sync.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+
+#define MIN_SKB_LEN 32
+
+#define MAX_PAYLOAD_OFFSET 221
+
+#define NIC_QID(q_id, nic_dev) ((q_id) & ((nic_dev)->num_qps - 1))
+
+#define HINIC3_TX_TASK_WRAPPED 1
+#define HINIC3_TX_BD_DESC_WRAPPED 2
+
+#define TXQ_STATS_INC(txq, field) \
+do { \
+ u64_stats_update_begin(&(txq)->txq_stats.syncp); \
+ (txq)->txq_stats.field++; \
+ u64_stats_update_end(&(txq)->txq_stats.syncp); \
+} while (0)
+
+void hinic3_txq_get_stats(struct hinic3_txq *txq,
+ struct hinic3_txq_stats *stats)
+{
+ struct hinic3_txq_stats *txq_stats = &txq->txq_stats;
+ unsigned int start;
+
+ u64_stats_update_begin(&stats->syncp);
+ do {
+ start = u64_stats_fetch_begin(&txq_stats->syncp);
+ stats->bytes = txq_stats->bytes;
+ stats->packets = txq_stats->packets;
+ stats->busy = txq_stats->busy;
+ stats->wake = txq_stats->wake;
+ stats->dropped = txq_stats->dropped;
+ } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+ u64_stats_update_end(&stats->syncp);
+}
+
+void hinic3_txq_clean_stats(struct hinic3_txq_stats *txq_stats)
+{
+ u64_stats_update_begin(&txq_stats->syncp);
+ txq_stats->bytes = 0;
+ txq_stats->packets = 0;
+ txq_stats->busy = 0;
+ txq_stats->wake = 0;
+ txq_stats->dropped = 0;
+
+ txq_stats->skb_pad_err = 0;
+ txq_stats->frag_len_overflow = 0;
+ txq_stats->offload_cow_skb_err = 0;
+ txq_stats->map_frag_err = 0;
+ txq_stats->unknown_tunnel_pkt = 0;
+ txq_stats->frag_size_err = 0;
+ txq_stats->rsvd1 = 0;
+ txq_stats->rsvd2 = 0;
+ u64_stats_update_end(&txq_stats->syncp);
+}
+
+static void txq_stats_init(struct hinic3_txq *txq)
+{
+ struct hinic3_txq_stats *txq_stats = &txq->txq_stats;
+
+ u64_stats_init(&txq_stats->syncp);
+ hinic3_txq_clean_stats(txq_stats);
+}
+
+static inline void hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs,
+ dma_addr_t addr, u32 len)
+{
+ buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr));
+ buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr));
+ buf_descs->len = hinic3_hw_be32(len);
+}
+
+static int tx_map_skb(struct hinic3_nic_dev *nic_dev, struct sk_buff *skb,
+ u16 valid_nr_frags, struct hinic3_txq *txq,
+ struct hinic3_tx_info *tx_info,
+ struct hinic3_sq_wqe_combo *wqe_combo)
+{
+ struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->ctrl_bd0;
+ struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head;
+ struct hinic3_dma_info *dma_info = tx_info->dma_info;
+ struct pci_dev *pdev = nic_dev->pdev;
+ skb_frag_t *frag = NULL;
+ u32 j, i;
+ int err;
+
+ dma_info[0].dma = dma_map_single(&pdev->dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
+ if (dma_mapping_error(&pdev->dev, dma_info[0].dma)) {
+ TXQ_STATS_INC(txq, map_frag_err);
+ return -EFAULT;
+ }
+
+ dma_info[0].len = skb_headlen(skb);
+
+ wqe_desc->hi_addr = hinic3_hw_be32(upper_32_bits(dma_info[0].dma));
+ wqe_desc->lo_addr = hinic3_hw_be32(lower_32_bits(dma_info[0].dma));
+
+ wqe_desc->ctrl_len = dma_info[0].len;
+
+ for (i = 0; i < valid_nr_frags;) {
+ frag = &(skb_shinfo(skb)->frags[i]);
+ if (unlikely(i == wqe_combo->first_bds_num))
+ buf_desc = wqe_combo->bds_sec2;
+
+ i++;
+ dma_info[i].dma = skb_frag_dma_map(&pdev->dev, frag, 0,
+ skb_frag_size(frag),
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(&pdev->dev, dma_info[i].dma)) {
+ TXQ_STATS_INC(txq, map_frag_err);
+ i--;
+ err = -EFAULT;
+ goto frag_map_err;
+ }
+ dma_info[i].len = skb_frag_size(frag);
+
+ hinic3_set_buf_desc(buf_desc, dma_info[i].dma,
+ dma_info[i].len);
+ buf_desc++;
+ }
+
+ return 0;
+
+frag_map_err:
+ for (j = 0; j < i;) {
+ j++;
+ dma_unmap_page(&pdev->dev, dma_info[j].dma,
+ dma_info[j].len, DMA_TO_DEVICE);
+ }
+ dma_unmap_single(&pdev->dev, dma_info[0].dma, dma_info[0].len,
+ DMA_TO_DEVICE);
+ return err;
+}
+
+static inline void tx_unmap_skb(struct hinic3_nic_dev *nic_dev,
+ struct sk_buff *skb, u16 valid_nr_frags,
+ struct hinic3_dma_info *dma_info)
+{
+ struct pci_dev *pdev = nic_dev->pdev;
+ int i;
+
+ for (i = 0; i < valid_nr_frags;) {
+ i++;
+ dma_unmap_page(&pdev->dev,
+ dma_info[i].dma,
+ dma_info[i].len, DMA_TO_DEVICE);
+ }
+
+ dma_unmap_single(&pdev->dev, dma_info[0].dma,
+ dma_info[0].len, DMA_TO_DEVICE);
+}
+
+union hinic3_l4 {
+ struct tcphdr *tcp;
+ struct udphdr *udp;
+ unsigned char *hdr;
+};
+
+enum sq_l3_type {
+ UNKNOWN_L3TYPE = 0,
+ IPV6_PKT = 1,
+ IPV4_PKT_NO_CHKSUM_OFFLOAD = 2,
+ IPV4_PKT_WITH_CHKSUM_OFFLOAD = 3,
+};
+
+enum sq_l4offload_type {
+ OFFLOAD_DISABLE = 0,
+ TCP_OFFLOAD_ENABLE = 1,
+ SCTP_OFFLOAD_ENABLE = 2,
+ UDP_OFFLOAD_ENABLE = 3,
+};
+
+/* initialize the l4 offload type and payload offset */
+static void get_inner_l4_info(struct sk_buff *skb, union hinic3_l4 *l4,
+ u8 l4_proto, u32 *offset,
+ enum sq_l4offload_type *l4_offload)
+{
+ switch (l4_proto) {
+ case IPPROTO_TCP:
+ *l4_offload = TCP_OFFLOAD_ENABLE;
+ /* To keep consistent with TSO, the payload offset begins at the payload */
+ *offset = (l4->tcp->doff << TCP_HDR_DATA_OFF_UNIT_SHIFT) +
+ TRANSPORT_OFFSET(l4->hdr, skb);
+ break;
+
+ case IPPROTO_UDP:
+ *l4_offload = UDP_OFFLOAD_ENABLE;
+ *offset = TRANSPORT_OFFSET(l4->hdr, skb);
+ break;
+ default:
+ break;
+ }
+}
+
+static void get_inner_l3_l4_type(struct sk_buff *skb, union hinic3_ip *ip,
+ union hinic3_l4 *l4,
+ enum sq_l3_type *l3_type, u8 *l4_proto)
+{
+ unsigned char *exthdr = NULL;
+
+ if (ip->v4->version == IP4_VERSION) {
+ *l3_type = IPV4_PKT_WITH_CHKSUM_OFFLOAD;
+ *l4_proto = ip->v4->protocol;
+
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ /* inner_transport_header is wrong in centos7.0 and suse12.1 */
+ l4->hdr = ip->hdr + ((u8)ip->v4->ihl << IP_HDR_IHL_UNIT_SHIFT);
+#endif
+ } else if (ip->v4->version == IP6_VERSION) {
+ *l3_type = IPV6_PKT;
+ exthdr = ip->hdr + sizeof(*ip->v6);
+ *l4_proto = ip->v6->nexthdr;
+ if (exthdr != l4->hdr) {
+ __be16 frag_off = 0;
+#ifndef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ ipv6_skip_exthdr(skb, (int)(exthdr - skb->data),
+ l4_proto, &frag_off);
+#else
+ int pld_off = 0;
+
+ pld_off = ipv6_skip_exthdr(skb,
+ (int)(exthdr - skb->data),
+ l4_proto, &frag_off);
+ l4->hdr = skb->data + pld_off;
+#endif
+ }
+ } else {
+ *l3_type = UNKNOWN_L3TYPE;
+ *l4_proto = 0;
+ }
+}
+
+static u8 hinic3_get_inner_l4_type(struct sk_buff *skb)
+{
+ enum sq_l3_type l3_type;
+ u8 l4_proto;
+ union hinic3_ip ip;
+ union hinic3_l4 l4;
+
+ ip.hdr = skb_inner_network_header(skb);
+ l4.hdr = skb_inner_transport_header(skb);
+
+ get_inner_l3_l4_type(skb, &ip, &l4, &l3_type, &l4_proto);
+ return l4_proto;
+}
+
+static void hinic3_set_unknown_tunnel_csum(struct sk_buff *skb)
+{
+ int csum_offset;
+ __sum16 skb_csum;
+ u8 l4_proto;
+
+ l4_proto = hinic3_get_inner_l4_type(skb);
+ /* Unsupported tunnel packet, disable csum offload */
+ skb_checksum_help(skb);
+ /* The value of csum is changed from 0xffff to 0 according to RFC1624 */
+ if (skb->ip_summed == CHECKSUM_NONE && l4_proto != IPPROTO_UDP) {
+ csum_offset = skb_checksum_start_offset(skb) + skb->csum_offset;
+ skb_csum = *(__sum16 *)(skb->data + csum_offset);
+ if (skb_csum == 0xffff) {
+ *(__sum16 *)(skb->data + csum_offset) = 0;
+ }
+ }
+}
+
+static int hinic3_tx_csum(struct hinic3_txq *txq, struct hinic3_sq_task *task,
+ struct sk_buff *skb)
+{
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
+
+ if (skb->encapsulation) {
+ union hinic3_ip ip;
+ u8 l4_proto;
+
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG);
+
+ ip.hdr = skb_network_header(skb);
+ if (ip.v4->version == IPV4_VERSION) {
+ l4_proto = ip.v4->protocol;
+ } else if (ip.v4->version == IPV6_VERSION) {
+ union hinic3_l4 l4;
+ unsigned char *exthdr;
+ __be16 frag_off;
+
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L4_EN);
+#endif
+ exthdr = ip.hdr + sizeof(*ip.v6);
+ l4_proto = ip.v6->nexthdr;
+ l4.hdr = skb_transport_header(skb);
+ if (l4.hdr != exthdr)
+ ipv6_skip_exthdr(skb, exthdr - skb->data,
+ &l4_proto, &frag_off);
+ } else {
+ l4_proto = IPPROTO_RAW;
+ }
+
+ if (l4_proto != IPPROTO_UDP) {
+ TXQ_STATS_INC(txq, unknown_tunnel_pkt);
+ hinic3_set_unknown_tunnel_csum(skb);
+ return 0;
+ }
+ }
+
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+
+ return 1;
+}
+
+static void hinic3_set_tso_info(struct hinic3_sq_task *task, u32 *queue_info,
+ enum sq_l4offload_type l4_offload,
+ u32 offset, u32 mss)
+{
+ if (l4_offload == TCP_OFFLOAD_ENABLE) {
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+ } else if (l4_offload == UDP_OFFLOAD_ENABLE) {
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UFO);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+ }
+
+ /* enable L3 checksum calculation by default */
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN);
+
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(offset >> 1, PLDOFF);
+
+ /* set MSS value */
+ *queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(*queue_info, MSS);
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(mss, MSS);
+}
+
+static int hinic3_tso(struct hinic3_sq_task *task, u32 *queue_info,
+ struct sk_buff *skb)
+{
+ enum sq_l4offload_type l4_offload = OFFLOAD_DISABLE;
+ enum sq_l3_type l3_type;
+ union hinic3_ip ip;
+ union hinic3_l4 l4;
+ u32 offset = 0;
+ u8 l4_proto;
+ int err;
+
+ if (!skb_is_gso(skb))
+ return 0;
+
+ err = skb_cow_head(skb, 0);
+ if (err < 0)
+ return err;
+
+ if (skb->encapsulation) {
+ u32 gso_type = skb_shinfo(skb)->gso_type;
+ /* the L3 checksum is always enabled */
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG);
+
+ l4.hdr = skb_transport_header(skb);
+ ip.hdr = skb_network_header(skb);
+
+ if (gso_type & SKB_GSO_UDP_TUNNEL_CSUM) {
+ l4.udp->check = ~csum_magic(&ip, IPPROTO_UDP);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L4_EN);
+ } else if (gso_type & SKB_GSO_UDP_TUNNEL) {
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ if (ip.v4->version == 6) {
+ l4.udp->check = ~csum_magic(&ip, IPPROTO_UDP);
+ task->pkt_info0 |=
+ SQ_TASK_INFO0_SET(1U, OUT_L4_EN);
+ }
+#endif
+ }
+
+ ip.hdr = skb_inner_network_header(skb);
+ l4.hdr = skb_inner_transport_header(skb);
+ } else {
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_transport_header(skb);
+ }
+
+ get_inner_l3_l4_type(skb, &ip, &l4, &l3_type, &l4_proto);
+
+ if (l4_proto == IPPROTO_TCP)
+ l4.tcp->check = ~csum_magic(&ip, IPPROTO_TCP);
+#ifdef HAVE_IP6_FRAG_ID_ENABLE_UFO
+ else if (l4_proto == IPPROTO_UDP && ip.v4->version == 6)
+ task->ip_identify =
+ be32_to_cpu(skb_shinfo(skb)->ip6_frag_id);
+#endif
+
+ get_inner_l4_info(skb, &l4, l4_proto, &offset, &l4_offload);
+
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ u32 network_hdr_len;
+
+ if (unlikely(l3_type == UNKNOWN_L3TYPE))
+ network_hdr_len = 0;
+ else
+ network_hdr_len = l4.hdr - ip.hdr;
+
+ if (unlikely(!offset)) {
+ if (l3_type == UNKNOWN_L3TYPE)
+ offset = ip.hdr - skb->data;
+ else if (l4_offload == OFFLOAD_DISABLE)
+ offset = ip.hdr - skb->data + network_hdr_len;
+ }
+#endif
+
+ hinic3_set_tso_info(task, queue_info, l4_offload, offset,
+ skb_shinfo(skb)->gso_size);
+
+ return 1;
+}
+
+static u32 hinic3_tx_offload(struct sk_buff *skb, struct hinic3_sq_task *task,
+ u32 *queue_info, struct hinic3_txq *txq)
+{
+ u32 offload = 0;
+ int tso_cs_en;
+
+ task->pkt_info0 = 0;
+ task->ip_identify = 0;
+ task->pkt_info2 = 0;
+ task->vlan_offload = 0;
+
+ tso_cs_en = hinic3_tso(task, queue_info, skb);
+ if (tso_cs_en < 0) {
+ offload = TX_OFFLOAD_INVALID;
+ return offload;
+ } else if (tso_cs_en) {
+ offload |= TX_OFFLOAD_TSO;
+ } else {
+ tso_cs_en = hinic3_tx_csum(txq, task, skb);
+ if (tso_cs_en)
+ offload |= TX_OFFLOAD_CSUM;
+ }
+
+#define VLAN_INSERT_MODE_MAX 5
+ if (unlikely(skb_vlan_tag_present(skb))) {
+ /* select vlan insert mode by qid, default 802.1Q Tag type */
+ hinic3_set_vlan_tx_offload(task, skb_vlan_tag_get(skb),
+ txq->q_id % VLAN_INSERT_MODE_MAX);
+ offload |= TX_OFFLOAD_VLAN;
+ }
+
+ if (unlikely(SQ_CTRL_QUEUE_INFO_GET(*queue_info, PLDOFF) >
+ MAX_PAYLOAD_OFFSET)) {
+ offload = TX_OFFLOAD_INVALID;
+ return offload;
+ }
+
+ return offload;
+}
+
+static void get_pkt_stats(struct hinic3_tx_info *tx_info, struct sk_buff *skb)
+{
+ u32 ihs, hdr_len;
+
+ if (skb_is_gso(skb)) {
+#if (defined(HAVE_SKB_INNER_TRANSPORT_HEADER) && \
+ defined(HAVE_SK_BUFF_ENCAPSULATION))
+ if (skb->encapsulation) {
+#ifdef HAVE_SKB_INNER_TRANSPORT_OFFSET
+ ihs = skb_inner_transport_offset(skb) +
+ inner_tcp_hdrlen(skb);
+#else
+ ihs = (skb_inner_transport_header(skb) - skb->data) +
+ inner_tcp_hdrlen(skb);
+#endif
+ } else {
+#endif
+ ihs = (u32)(skb_transport_offset(skb)) +
+ tcp_hdrlen(skb);
+#if (defined(HAVE_SKB_INNER_TRANSPORT_HEADER) && \
+ defined(HAVE_SK_BUFF_ENCAPSULATION))
+ }
+#endif
+ hdr_len = (skb_shinfo(skb)->gso_segs - 1) * ihs;
+ tx_info->num_bytes = skb->len + (u64)hdr_len;
+ } else {
+ tx_info->num_bytes = (skb->len > ETH_ZLEN) ?
+ skb->len : ETH_ZLEN;
+ }
+
+ tx_info->num_pkts = 1;
+}
+
+static inline int hinic3_maybe_stop_tx(struct hinic3_txq *txq, u16 wqebb_cnt)
+{
+ if (likely(hinic3_get_sq_free_wqebbs(txq->sq) >= wqebb_cnt))
+ return 0;
+
+ /* We need to check again in case another CPU has just
+ * made room available.
+ */
+ netif_stop_subqueue(txq->netdev, txq->q_id);
+
+ if (likely(hinic3_get_sq_free_wqebbs(txq->sq) < wqebb_cnt))
+ return -EBUSY;
+
+ /* there are enough wqebbs after the queue is woken up */
+ netif_start_subqueue(txq->netdev, txq->q_id);
+
+ return 0;
+}
+
+static u16 hinic3_set_wqe_combo(struct hinic3_txq *txq,
+ struct hinic3_sq_wqe_combo *wqe_combo,
+ u32 offload, u16 num_sge, u16 *curr_pi)
+{
+ void *second_part_wqebbs_addr = NULL;
+ void *wqe = NULL;
+ u16 first_part_wqebbs_num, tmp_pi;
+
+ wqe_combo->ctrl_bd0 = hinic3_get_sq_one_wqebb(txq->sq, curr_pi);
+ if (!offload && num_sge == 1) {
+ wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE;
+ return hinic3_get_and_update_sq_owner(txq->sq, *curr_pi, 1);
+ }
+
+ wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE;
+
+ if (offload) {
+ wqe_combo->task = hinic3_get_sq_one_wqebb(txq->sq, &tmp_pi);
+ wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES;
+ } else {
+ wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS;
+ }
+
+ if (num_sge > 1) {
+ /* the first wqebb contains bd0, and the bd size is equal to the sq
+ * wqebb size, so we use (num_sge - 1) as the wanted wqebb_cnt
+ */
+ wqe = hinic3_get_sq_multi_wqebbs(txq->sq, num_sge - 1, &tmp_pi,
+ &second_part_wqebbs_addr,
+ &first_part_wqebbs_num);
+ wqe_combo->bds_head = wqe;
+ wqe_combo->bds_sec2 = second_part_wqebbs_addr;
+ wqe_combo->first_bds_num = first_part_wqebbs_num;
+ }
+
+ return hinic3_get_and_update_sq_owner(txq->sq, *curr_pi,
+ num_sge + (u16)!!offload);
+}
+
+/* *
+ * hinic3_prepare_sq_ctrl - init the sq wqe ctrl section
+ * @nr_descs: total sge num, including bd0 in the ctrl section
+ */
+static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo,
+ u32 queue_info, int nr_descs, u16 owner)
+{
+ struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->ctrl_bd0;
+
+ if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) {
+ wqe_desc->ctrl_len |=
+ SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+ SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+ SQ_CTRL_SET(owner, OWNER);
+
+ wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
+ /* the compact wqe queue_info will be transferred to the ucode */
+ wqe_desc->queue_info = 0;
+ return;
+ }
+
+ wqe_desc->ctrl_len |= SQ_CTRL_SET(nr_descs, BUFDESC_NUM) |
+ SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) |
+ SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+ SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+ SQ_CTRL_SET(owner, OWNER);
+
+ wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
+
+ wqe_desc->queue_info = queue_info;
+ wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC);
+
+ if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) {
+ wqe_desc->queue_info |=
+ SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS);
+ } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) <
+ TX_MSS_MIN) {
+ /* the mss should not be less than 80 */
+ wqe_desc->queue_info =
+ SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS);
+ wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS);
+ }
+
+ wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info);
+}
+
+static netdev_tx_t hinic3_send_one_skb(struct sk_buff *skb,
+ struct net_device *netdev,
+ struct hinic3_txq *txq)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_sq_wqe_combo wqe_combo = {0};
+ struct hinic3_tx_info *tx_info = NULL;
+ struct hinic3_sq_task task;
+ u32 offload, queue_info = 0;
+ u16 owner = 0, pi = 0;
+ u16 wqebb_cnt, num_sge, valid_nr_frags;
+ bool find_zero_sge_len = false;
+ int err, i;
+
+ if (unlikely(skb->len < MIN_SKB_LEN)) {
+ if (skb_pad(skb, (int)(MIN_SKB_LEN - skb->len))) {
+ TXQ_STATS_INC(txq, skb_pad_err);
+ goto tx_skb_pad_err;
+ }
+
+ skb->len = MIN_SKB_LEN;
+ }
+
+ valid_nr_frags = 0;
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ if (!skb_frag_size(&skb_shinfo(skb)->frags[i])) {
+ find_zero_sge_len = true;
+ continue;
+ } else if (find_zero_sge_len) {
+ TXQ_STATS_INC(txq, frag_size_err);
+ goto tx_drop_pkts;
+ }
+
+ valid_nr_frags++;
+ }
+
+ num_sge = valid_nr_frags + 1;
+
+ /* assume a normal TS format wqe is needed; the task info takes 1 wqebb */
+ wqebb_cnt = num_sge + 1;
+ if (unlikely(hinic3_maybe_stop_tx(txq, wqebb_cnt))) {
+ TXQ_STATS_INC(txq, busy);
+ return NETDEV_TX_BUSY;
+ }
+
+ /* l2nic outband vlan cfg enable */
+ if ((!skb_vlan_tag_present(skb)) &&
+ (nic_dev->nic_cap.outband_vlan_cfg_en == 1) &&
+ nic_dev->outband_cfg.outband_default_vid != 0) {
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+ (u16)nic_dev->outband_cfg.outband_default_vid);
+ }
+
+ offload = hinic3_tx_offload(skb, &task, &queue_info, txq);
+ if (unlikely(offload == TX_OFFLOAD_INVALID)) {
+ TXQ_STATS_INC(txq, offload_cow_skb_err);
+ goto tx_drop_pkts;
+ } else if (!offload) {
+ /* no TS in current wqe */
+ wqebb_cnt -= 1;
+ if (unlikely(num_sge == 1 && skb->len > COMPACET_WQ_SKB_MAX_LEN))
+ goto tx_drop_pkts;
+ }
+
+ owner = hinic3_set_wqe_combo(txq, &wqe_combo, offload, num_sge, &pi);
+ if (offload) {
+ /* ip6_frag_id is already big endian, no need to convert */
+ wqe_combo.task->ip_identify = hinic3_hw_be32(task.ip_identify);
+ wqe_combo.task->pkt_info0 = hinic3_hw_be32(task.pkt_info0);
+ wqe_combo.task->pkt_info2 = hinic3_hw_be32(task.pkt_info2);
+ wqe_combo.task->vlan_offload =
+ hinic3_hw_be32(task.vlan_offload);
+ }
+
+ tx_info = &txq->tx_info[pi];
+ tx_info->skb = skb;
+ tx_info->wqebb_cnt = wqebb_cnt;
+ tx_info->valid_nr_frags = valid_nr_frags;
+
+ err = tx_map_skb(nic_dev, skb, valid_nr_frags, txq, tx_info,
+ &wqe_combo);
+ if (err) {
+ hinic3_rollback_sq_wqebbs(txq->sq, wqebb_cnt, owner);
+ goto tx_drop_pkts;
+ }
+
+ get_pkt_stats(tx_info, skb);
+
+ hinic3_prepare_sq_ctrl(&wqe_combo, queue_info, num_sge, owner);
+
+ hinic3_write_db(txq->sq, txq->cos, SQ_CFLAG_DP,
+ hinic3_get_sq_local_pi(txq->sq));
+
+ return NETDEV_TX_OK;
+
+tx_drop_pkts:
+ dev_kfree_skb_any(skb);
+
+tx_skb_pad_err:
+ TXQ_STATS_INC(txq, dropped);
+
+ return NETDEV_TX_OK;
+}
+
+netdev_tx_t hinic3_lb_xmit_frame(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 q_id = skb_get_queue_mapping(skb);
+ struct hinic3_txq *txq = &nic_dev->txqs[q_id];
+
+ return hinic3_send_one_skb(skb, netdev, txq);
+}
+
+netdev_tx_t hinic3_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_txq *txq = NULL;
+ u16 q_id = skb_get_queue_mapping(skb);
+
+ if (unlikely(!netif_carrier_ok(netdev))) {
+ dev_kfree_skb_any(skb);
+ HINIC3_NIC_STATS_INC(nic_dev, tx_carrier_off_drop);
+ return NETDEV_TX_OK;
+ }
+
+ if (unlikely(q_id >= nic_dev->q_params.num_qps)) {
+ txq = &nic_dev->txqs[0];
+ HINIC3_NIC_STATS_INC(nic_dev, tx_invalid_qid);
+ goto tx_drop_pkts;
+ }
+ txq = &nic_dev->txqs[q_id];
+
+ return hinic3_send_one_skb(skb, netdev, txq);
+
+tx_drop_pkts:
+ dev_kfree_skb_any(skb);
+ u64_stats_update_begin(&txq->txq_stats.syncp);
+ txq->txq_stats.dropped++;
+ u64_stats_update_end(&txq->txq_stats.syncp);
+
+ return NETDEV_TX_OK;
+}
+
+static inline void tx_free_skb(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tx_info *tx_info)
+{
+ tx_unmap_skb(nic_dev, tx_info->skb, tx_info->valid_nr_frags,
+ tx_info->dma_info);
+ dev_kfree_skb_any(tx_info->skb);
+ tx_info->skb = NULL;
+}
+
+static void free_all_tx_skbs(struct hinic3_nic_dev *nic_dev, u32 sq_depth,
+ struct hinic3_tx_info *tx_info_arr)
+{
+ struct hinic3_tx_info *tx_info = NULL;
+ u32 idx;
+
+ for (idx = 0; idx < sq_depth; idx++) {
+ tx_info = &tx_info_arr[idx];
+ if (tx_info->skb)
+ tx_free_skb(nic_dev, tx_info);
+ }
+}
+
+int hinic3_tx_poll(struct hinic3_txq *txq, int budget)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(txq->netdev);
+ struct hinic3_tx_info *tx_info = NULL;
+ u64 tx_bytes = 0, wake = 0, nr_pkts = 0;
+ int pkts = 0;
+ u16 wqebb_cnt = 0;
+ u16 hw_ci, sw_ci = 0, q_id = txq->sq->q_id;
+
+ hw_ci = hinic3_get_sq_hw_ci(txq->sq);
+ dma_rmb();
+ sw_ci = hinic3_get_sq_local_ci(txq->sq);
+
+ do {
+ tx_info = &txq->tx_info[sw_ci];
+
+ /* check whether all wqebbs of this wqe are completed */
+ if (hw_ci == sw_ci ||
+ ((hw_ci - sw_ci) & txq->q_mask) < tx_info->wqebb_cnt)
+ break;
+
+ sw_ci = (sw_ci + tx_info->wqebb_cnt) & (u16)txq->q_mask;
+ prefetch(&txq->tx_info[sw_ci]);
+
+ wqebb_cnt += tx_info->wqebb_cnt;
+
+ tx_bytes += tx_info->num_bytes;
+ nr_pkts += tx_info->num_pkts;
+ pkts++;
+
+ tx_free_skb(nic_dev, tx_info);
+ } while (likely(pkts < budget));
+
+ hinic3_update_sq_local_ci(txq->sq, wqebb_cnt);
+
+ if (unlikely(__netif_subqueue_stopped(nic_dev->netdev, q_id) &&
+ hinic3_get_sq_free_wqebbs(txq->sq) >= 1 &&
+ test_bit(HINIC3_INTF_UP, &nic_dev->flags))) {
+ struct netdev_queue *netdev_txq =
+ netdev_get_tx_queue(txq->netdev, q_id);
+
+ __netif_tx_lock(netdev_txq, smp_processor_id());
+ /* To avoid re-waking subqueue with xmit_frame */
+ if (__netif_subqueue_stopped(nic_dev->netdev, q_id)) {
+ netif_wake_subqueue(nic_dev->netdev, q_id);
+ wake++;
+ }
+ __netif_tx_unlock(netdev_txq);
+ }
+
+ u64_stats_update_begin(&txq->txq_stats.syncp);
+ txq->txq_stats.bytes += tx_bytes;
+ txq->txq_stats.packets += nr_pkts;
+ txq->txq_stats.wake += wake;
+ u64_stats_update_end(&txq->txq_stats.syncp);
+
+ return pkts;
+}
+
+void hinic3_set_txq_cos(struct hinic3_nic_dev *nic_dev, u16 start_qid,
+ u16 q_num, u8 cos)
+{
+ u16 idx;
+
+ for (idx = 0; idx < q_num; idx++)
+ nic_dev->txqs[idx + start_qid].cos = cos;
+}
+
+#define HINIC3_BDS_PER_SQ_WQEBB \
+ (HINIC3_SQ_WQEBB_SIZE / sizeof(struct hinic3_sq_bufdesc))
+
+int hinic3_alloc_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res)
+{
+ struct hinic3_dyna_txq_res *tqres = NULL;
+ int idx, i;
+ u64 size;
+
+ for (idx = 0; idx < num_sq; idx++) {
+ tqres = &txqs_res[idx];
+
+ size = sizeof(*tqres->tx_info) * sq_depth;
+ tqres->tx_info = kzalloc(size, GFP_KERNEL);
+ if (!tqres->tx_info) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txq%d tx info\n", idx);
+ goto err_out;
+ }
+
+ size = sizeof(*tqres->bds) *
+ (sq_depth * HINIC3_BDS_PER_SQ_WQEBB +
+ HINIC3_MAX_SQ_SGE);
+ tqres->bds = kzalloc(size, GFP_KERNEL);
+ if (!tqres->bds) {
+ kfree(tqres->tx_info);
+ tqres->tx_info = NULL;
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txq%d bds info\n", idx);
+ goto err_out;
+ }
+ }
+
+ return 0;
+
+err_out:
+ for (i = 0; i < idx; i++) {
+ tqres = &txqs_res[i];
+
+ kfree(tqres->bds);
+ tqres->bds = NULL;
+ kfree(tqres->tx_info);
+ tqres->tx_info = NULL;
+ }
+
+ return -ENOMEM;
+}
+
+void hinic3_free_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res)
+{
+ struct hinic3_dyna_txq_res *tqres = NULL;
+ int idx;
+
+ for (idx = 0; idx < num_sq; idx++) {
+ tqres = &txqs_res[idx];
+
+ free_all_tx_skbs(nic_dev, sq_depth, tqres->tx_info);
+ kfree(tqres->bds);
+ tqres->bds = NULL;
+ kfree(tqres->tx_info);
+ tqres->tx_info = NULL;
+ }
+}
+
+int hinic3_configure_txqs(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res)
+{
+ struct hinic3_dyna_txq_res *tqres = NULL;
+ struct hinic3_txq *txq = NULL;
+ u16 q_id;
+ u32 idx;
+
+ for (q_id = 0; q_id < num_sq; q_id++) {
+ txq = &nic_dev->txqs[q_id];
+ tqres = &txqs_res[q_id];
+
+ txq->q_depth = sq_depth;
+ txq->q_mask = sq_depth - 1;
+
+ txq->tx_info = tqres->tx_info;
+ for (idx = 0; idx < sq_depth; idx++)
+ txq->tx_info[idx].dma_info =
+ &tqres->bds[idx * HINIC3_BDS_PER_SQ_WQEBB];
+
+ txq->sq = hinic3_get_nic_queue(nic_dev->hwdev, q_id, HINIC3_SQ);
+ if (!txq->sq) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get %u sq\n", q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+int hinic3_alloc_txqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct hinic3_txq *txq = NULL;
+ u16 q_id, num_txqs = nic_dev->max_qps;
+ u64 txq_size;
+
+ txq_size = num_txqs * sizeof(*nic_dev->txqs);
+ if (!txq_size) {
+ nic_err(&pdev->dev, "Cannot allocate zero size txqs\n");
+ return -EINVAL;
+ }
+
+ nic_dev->txqs = kzalloc(txq_size, GFP_KERNEL);
+ if (!nic_dev->txqs) {
+ nic_err(&pdev->dev, "Failed to allocate txqs\n");
+ return -ENOMEM;
+ }
+
+ for (q_id = 0; q_id < num_txqs; q_id++) {
+ txq = &nic_dev->txqs[q_id];
+ txq->netdev = netdev;
+ txq->q_id = q_id;
+ txq->q_depth = nic_dev->q_params.sq_depth;
+ txq->q_mask = nic_dev->q_params.sq_depth - 1;
+ txq->dev = &pdev->dev;
+
+ txq_stats_init(txq);
+ }
+
+ return 0;
+}
+
+void hinic3_free_txqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ kfree(nic_dev->txqs);
+ nic_dev->txqs = NULL;
+}
+
+static bool is_hw_complete_sq_process(struct hinic3_io_queue *sq)
+{
+ u16 sw_pi, hw_ci;
+
+ sw_pi = hinic3_get_sq_local_pi(sq);
+ hw_ci = hinic3_get_sq_hw_ci(sq);
+
+ return sw_pi == hw_ci;
+}
+
+#define HINIC3_FLUSH_QUEUE_TIMEOUT 1000
+static int hinic3_stop_sq(struct hinic3_txq *txq)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(txq->netdev);
+ u64 timeout;
+ int err;
+
+ timeout = msecs_to_jiffies(HINIC3_FLUSH_QUEUE_TIMEOUT) + jiffies;
+ do {
+ if (is_hw_complete_sq_process(txq->sq))
+ return 0;
+
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+ } while (time_before(jiffies, (unsigned long)timeout));
+
+ /* force hardware to drop packets */
+ timeout = msecs_to_jiffies(HINIC3_FLUSH_QUEUE_TIMEOUT) + jiffies;
+ do {
+ if (is_hw_complete_sq_process(txq->sq))
+ return 0;
+
+ err = hinic3_force_drop_tx_pkt(nic_dev->hwdev);
+ if (err)
+ break;
+
+ usleep_range(9900, 10000); /* sleep 9900 us ~ 10000 us */
+ } while (time_before(jiffies, (unsigned long)timeout));
+
+ /* Avoid msleep taking too long and yielding a stale result */
+ if (is_hw_complete_sq_process(txq->sq))
+ return 0;
+
+ return -EFAULT;
+}
+
+/* transmission of packets must be stopped before calling this function */
+int hinic3_flush_txqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 qid;
+ int err;
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ err = hinic3_stop_sq(&nic_dev->txqs[qid]);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to stop sq%u\n", qid);
+ }
+
+ return 0;
+}
+
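A minimal sketch of how the tx/rx poll routines above could be wired into a NAPI handler (the hinic3_irq layout, its napi/txq/rxq members, and the interrupt re-arm placement are assumptions; the driver's real handler may differ):

	static int example_napi_poll(struct napi_struct *napi, int budget)
	{
		/* assumed container: one hinic3_irq per queue pair */
		struct hinic3_irq *irq_cfg = container_of(napi, struct hinic3_irq, napi);
		int tx_pkts, rx_pkts;

		tx_pkts = hinic3_tx_poll(irq_cfg->txq, budget);
		rx_pkts = hinic3_rx_poll(irq_cfg->rxq, budget);

		if (tx_pkts < budget && rx_pkts < budget)
			napi_complete(napi);	/* re-arm the MSI-X entry after this */

		return rx_pkts;
	}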
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h
new file mode 100644
index 0000000..290ef29
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_TX_H
+#define HINIC3_TX_H
+
+#include <net/ipv6.h>
+#include <net/checksum.h>
+#include <net/ip6_checksum.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+
+#define VXLAN_OFFLOAD_PORT_LE 46354 /* 4789 in big endian */
+
+#define COMPACET_WQ_SKB_MAX_LEN 16383
+
+#define IP4_VERSION 4
+#define IP6_VERSION 6
+#define IP_HDR_IHL_UNIT_SHIFT 2
+#define TCP_HDR_DATA_OFF_UNIT_SHIFT 2
+
+enum tx_offload_type {
+ TX_OFFLOAD_TSO = BIT(0),
+ TX_OFFLOAD_CSUM = BIT(1),
+ TX_OFFLOAD_VLAN = BIT(2),
+ TX_OFFLOAD_INVALID = BIT(3),
+ TX_OFFLOAD_ESP = BIT(4),
+};
+
+struct hinic3_txq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 busy;
+ u64 wake;
+ u64 dropped;
+
+ /* subdivision statistics shown in the private tool */
+ u64 skb_pad_err;
+ u64 frag_len_overflow;
+ u64 offload_cow_skb_err;
+ u64 map_frag_err;
+ u64 unknown_tunnel_pkt;
+ u64 frag_size_err;
+ u64 rsvd1;
+ u64 rsvd2;
+
+#ifdef HAVE_NDO_GET_STATS64
+ struct u64_stats_sync syncp;
+#else
+ struct u64_stats_sync_empty syncp;
+#endif
+};
+
+struct hinic3_dma_info {
+ dma_addr_t dma;
+ u32 len;
+};
+
+#define IPV4_VERSION 4
+#define IPV6_VERSION 6
+#define TCP_HDR_DOFF_UNIT 2
+#define TRANSPORT_OFFSET(l4_hdr, skb) ((u32)((l4_hdr) - (skb)->data))
+
+union hinic3_ip {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+};
+
+struct hinic3_tx_info {
+ struct sk_buff *skb;
+
+ u16 wqebb_cnt;
+ u16 valid_nr_frags;
+
+ int num_sge;
+ u16 num_pkts;
+ u16 rsvd1;
+ u32 rsvd2;
+ u64 num_bytes;
+ struct hinic3_dma_info *dma_info;
+ u64 rsvd3;
+};
+
+struct hinic3_txq {
+ struct net_device *netdev;
+ struct device *dev;
+
+ struct hinic3_txq_stats txq_stats;
+
+ u8 cos;
+ u8 rsvd1;
+ u16 q_id;
+ u32 q_mask;
+ u32 q_depth;
+ u32 rsvd2;
+
+ struct hinic3_tx_info *tx_info;
+ struct hinic3_io_queue *sq;
+
+ u64 last_moder_packets;
+ u64 last_moder_bytes;
+ u64 rsvd3;
+} ____cacheline_aligned;
+
+netdev_tx_t hinic3_lb_xmit_frame(struct sk_buff *skb,
+ struct net_device *netdev);
+
+struct hinic3_dyna_txq_res {
+ struct hinic3_tx_info *tx_info;
+ struct hinic3_dma_info *bds;
+};
+
+netdev_tx_t hinic3_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
+
+void hinic3_txq_get_stats(struct hinic3_txq *txq,
+ struct hinic3_txq_stats *stats);
+
+void hinic3_txq_clean_stats(struct hinic3_txq_stats *txq_stats);
+
+struct hinic3_nic_dev;
+int hinic3_alloc_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res);
+
+void hinic3_free_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res);
+
+int hinic3_configure_txqs(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res);
+
+int hinic3_alloc_txqs(struct net_device *netdev);
+
+void hinic3_free_txqs(struct net_device *netdev);
+
+int hinic3_tx_poll(struct hinic3_txq *txq, int budget);
+
+int hinic3_flush_txqs(struct net_device *netdev);
+
+void hinic3_set_txq_cos(struct hinic3_nic_dev *nic_dev, u16 start_qid,
+ u16 q_num, u8 cos);
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline __sum16 csum_magic(union hinic3_ip *ip, unsigned short proto)
+{
+ return (ip->v4->version == IPV4_VERSION) ?
+ csum_tcpudp_magic(ip->v4->saddr, ip->v4->daddr, 0, proto, 0) :
+ csum_ipv6_magic(&ip->v6->saddr, &ip->v6->daddr, 0, proto, 0);
+}
+
+#endif
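csum_magic() builds the pseudo-header checksum seed that the hardware completes during TSO. A minimal sketch of the seeding step, mirroring its use in hinic3_tso() (skb is an assumed struct sk_buff *; tcp_hdr() comes from <linux/tcp.h>):

	union hinic3_ip ip;

	ip.hdr = skb_network_header(skb);
	/* seed the TCP checksum with the pseudo-header; hardware fills in the rest */
	tcp_hdr(skb)->check = ~csum_magic(&ip, IPPROTO_TCP);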
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_wq.h b/drivers/net/ethernet/huawei/hinic3/hinic3_wq.h
new file mode 100644
index 0000000..7ae029b
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_wq.h
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_WQ_H
+#define HINIC3_WQ_H
+
+struct hinic3_wq {
+ u16 cons_idx;
+ u16 prod_idx;
+
+ u32 q_depth;
+ u16 idx_mask;
+ u16 wqebb_size_shift;
+ u16 rsvd1;
+ u16 num_wq_pages;
+ u32 wqebbs_per_page;
+ u16 wqebbs_per_page_shift;
+ u16 wqebbs_per_page_mask;
+
+ struct hinic3_dma_addr_align *wq_pages;
+
+ dma_addr_t wq_block_paddr;
+ u64 *wq_block_vaddr;
+
+ void *dev_hdl;
+ u32 wq_page_size;
+ u16 wqebb_size;
+} ____cacheline_aligned;
+
+#define WQ_MASK_IDX(wq, idx) ((idx) & (wq)->idx_mask)
+#define WQ_MASK_PAGE(wq, pg_idx) \
+ (((pg_idx) < ((wq)->num_wq_pages)) ? (pg_idx) : 0)
+#define WQ_PAGE_IDX(wq, idx) ((idx) >> (wq)->wqebbs_per_page_shift)
+#define WQ_OFFSET_IN_PAGE(wq, idx) ((idx) & (wq)->wqebbs_per_page_mask)
+#define WQ_GET_WQEBB_ADDR(wq, pg_idx, idx_in_pg) \
+ ((u8 *)(wq)->wq_pages[pg_idx].align_vaddr + \
+ ((idx_in_pg) << (wq)->wqebb_size_shift))
+#define WQ_IS_0_LEVEL_CLA(wq) ((wq)->num_wq_pages == 1)
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline u16 hinic3_wq_free_wqebbs(struct hinic3_wq *wq)
+{
+ return wq->q_depth - ((wq->q_depth + wq->prod_idx - wq->cons_idx) &
+ wq->idx_mask) - 1;
+}
+
+static inline bool hinic3_wq_is_empty(struct hinic3_wq *wq)
+{
+ return WQ_MASK_IDX(wq, wq->prod_idx) == WQ_MASK_IDX(wq, wq->cons_idx);
+}
+
+static inline void *hinic3_wq_get_one_wqebb(struct hinic3_wq *wq, u16 *pi)
+{
+ *pi = WQ_MASK_IDX(wq, wq->prod_idx);
+ wq->prod_idx++;
+
+ return WQ_GET_WQEBB_ADDR(wq, WQ_PAGE_IDX(wq, *pi),
+ WQ_OFFSET_IN_PAGE(wq, *pi));
+}
+
+static inline void *hinic3_wq_get_multi_wqebbs(struct hinic3_wq *wq,
+ u16 num_wqebbs, u16 *prod_idx,
+ void **second_part_wqebbs_addr,
+ u16 *first_part_wqebbs_num)
+{
+ u32 pg_idx, off_in_page;
+
+ *prod_idx = WQ_MASK_IDX(wq, wq->prod_idx);
+ wq->prod_idx += num_wqebbs;
+
+ pg_idx = WQ_PAGE_IDX(wq, *prod_idx);
+ off_in_page = WQ_OFFSET_IN_PAGE(wq, *prod_idx);
+
+ if ((off_in_page + num_wqebbs) > wq->wqebbs_per_page) {
+ /* wqe across wq page boundary */
+ *second_part_wqebbs_addr =
+ WQ_GET_WQEBB_ADDR(wq, WQ_MASK_PAGE(wq, pg_idx + 1), 0);
+ *first_part_wqebbs_num = wq->wqebbs_per_page - off_in_page;
+ } else {
+ *second_part_wqebbs_addr = NULL;
+ *first_part_wqebbs_num = num_wqebbs;
+ }
+
+ return WQ_GET_WQEBB_ADDR(wq, pg_idx, off_in_page);
+}
+
+static inline void hinic3_wq_put_wqebbs(struct hinic3_wq *wq, u16 num_wqebbs)
+{
+ wq->cons_idx += num_wqebbs;
+}
+
+static inline void *hinic3_wq_wqebb_addr(struct hinic3_wq *wq, u16 idx)
+{
+ return WQ_GET_WQEBB_ADDR(wq, WQ_PAGE_IDX(wq, idx),
+ WQ_OFFSET_IN_PAGE(wq, idx));
+}
+
+static inline void *hinic3_wq_read_one_wqebb(struct hinic3_wq *wq,
+ u16 *cons_idx)
+{
+ *cons_idx = WQ_MASK_IDX(wq, wq->cons_idx);
+
+ return hinic3_wq_wqebb_addr(wq, *cons_idx);
+}
+
+static inline u64 hinic3_wq_get_first_wqe_page_addr(struct hinic3_wq *wq)
+{
+ return wq->wq_pages[0].align_paddr;
+}
+
+static inline void hinic3_wq_reset(struct hinic3_wq *wq)
+{
+ u16 pg_idx;
+
+ wq->cons_idx = 0;
+ wq->prod_idx = 0;
+
+ for (pg_idx = 0; pg_idx < wq->num_wq_pages; pg_idx++)
+ memset(wq->wq_pages[pg_idx].align_vaddr, 0, wq->wq_page_size);
+}
+
+int hinic3_wq_create(void *hwdev, struct hinic3_wq *wq, u32 q_depth,
+ u16 wqebb_size);
+void hinic3_wq_destroy(struct hinic3_wq *wq);
+
+#endif
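A minimal producer-side sketch of the helpers above, including the page-wrap case handled by hinic3_wq_get_multi_wqebbs() (the wrapper name and its return convention are assumptions):

	static void *example_reserve_wqebbs(struct hinic3_wq *wq, u16 num_wqebbs, u16 *pi)
	{
		void *first_part = NULL;
		void *second_part = NULL;
		u16 first_part_wqebbs = 0;

		if (hinic3_wq_free_wqebbs(wq) < num_wqebbs)
			return NULL;	/* ring full, caller retries later */

		if (num_wqebbs == 1)
			return hinic3_wq_get_one_wqebb(wq, pi);

		first_part = hinic3_wq_get_multi_wqebbs(wq, num_wqebbs, pi,
							&second_part,
							&first_part_wqebbs);
		/* if second_part is non-NULL, only first_part_wqebbs wqebbs are
		 * contiguous at first_part; the remainder starts at second_part
		 */
		return first_part;
	}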
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c
new file mode 100644
index 0000000..0419fc2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c
@@ -0,0 +1,1214 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/completion.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/semaphore.h>
+#include <linux/jiffies.h>
+#include <linux/delay.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_api_cmd.h"
+
+#define API_CMD_CHAIN_CELL_SIZE_SHIFT 6U
+
+#define API_CMD_CELL_DESC_SIZE 8
+#define API_CMD_CELL_DATA_ADDR_SIZE 8
+
+#define API_CHAIN_NUM_CELLS 32
+#define API_CHAIN_CELL_SIZE 128
+#define API_CHAIN_RSP_DATA_SIZE 128
+
+#define API_CMD_CELL_WB_ADDR_SIZE 8
+
+#define API_CHAIN_CELL_ALIGNMENT 8
+
+#define API_CMD_TIMEOUT 10000
+#define API_CMD_STATUS_TIMEOUT 10000
+
+#define API_CMD_BUF_SIZE 2048ULL
+
+#define API_CMD_NODE_ALIGN_SIZE 512ULL
+#define API_PAYLOAD_ALIGN_SIZE 64ULL
+
+#define API_CHAIN_RESP_ALIGNMENT 128ULL
+
+#define COMPLETION_TIMEOUT_DEFAULT 1000UL
+#define POLLING_COMPLETION_TIMEOUT_DEFAULT 1000U
+
+#define API_CMD_RESPONSE_DATA_PADDR(val) be64_to_cpu(*((u64 *)(val)))
+
+#define READ_API_CMD_PRIV_DATA(id, token) ((((u32)(id)) << 16) + (token))
+#define WRITE_API_CMD_PRIV_DATA(id) (((u8)(id)) << 16)
+
+#define MASKED_IDX(chain, idx) ((idx) & ((chain)->num_cells - 1))
+
+#define SIZE_4BYTES(size) (ALIGN((u32)(size), 4U) >> 2)
+#define SIZE_8BYTES(size) (ALIGN((u32)(size), 8U) >> 3)
+
+enum api_cmd_data_format {
+ SGL_DATA = 1,
+};
+
+enum api_cmd_type {
+ API_CMD_WRITE_TYPE = 0,
+ API_CMD_READ_TYPE = 1,
+};
+
+enum api_cmd_bypass {
+ NOT_BYPASS = 0,
+ BYPASS = 1,
+};
+
+enum api_cmd_resp_aeq {
+ NOT_TRIGGER = 0,
+ TRIGGER = 1,
+};
+
+enum api_cmd_chn_code {
+ APICHN_0 = 0,
+};
+
+enum api_cmd_chn_rsvd {
+ APICHN_VALID = 0,
+ APICHN_INVALID = 1,
+};
+
+#define API_DESC_LEN (7)
+
+static u8 xor_chksum_set(void *data)
+{
+ int idx;
+ u8 checksum = 0;
+ u8 *val = data;
+
+ for (idx = 0; idx < API_DESC_LEN; idx++)
+ checksum ^= val[idx];
+
+ return checksum;
+}
+
+static void set_prod_idx(struct hinic3_api_cmd_chain *chain)
+{
+ enum hinic3_api_cmd_chain_type chain_type = chain->chain_type;
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 hw_prod_idx_addr = HINIC3_CSR_API_CMD_CHAIN_PI_ADDR(chain_type);
+ u32 prod_idx = chain->prod_idx;
+
+ hinic3_hwif_write_reg(hwif, hw_prod_idx_addr, prod_idx);
+}
+
+static u32 get_hw_cons_idx(struct hinic3_api_cmd_chain *chain)
+{
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+
+ return HINIC3_API_CMD_STATUS_GET(val, CONS_IDX);
+}
+
+static void dump_api_chain_reg(struct hinic3_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ u32 addr, val;
+ u16 pci_cmd = 0;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+
+ sdk_err(dev, "Chain type: 0x%x, cpld error: 0x%x, check error: 0x%x, current fsm: 0x%x\n",
+ chain->chain_type, HINIC3_API_CMD_STATUS_GET(val, CPLD_ERR),
+ HINIC3_API_CMD_STATUS_GET(val, CHKSUM_ERR),
+ HINIC3_API_CMD_STATUS_GET(val, FSM));
+
+ sdk_err(dev, "Chain hw current ci: 0x%x\n",
+ HINIC3_API_CMD_STATUS_GET(val, CONS_IDX));
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_PI_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+ sdk_err(dev, "Chain hw current pi: 0x%x\n", val);
+ pci_read_config_word(chain->hwdev->pcidev_hdl, PCI_COMMAND, &pci_cmd);
+ sdk_err(dev, "PCI command reg: 0x%x\n", pci_cmd);
+}
+
+/**
+ * chain_busy - check if the chain is still processing last requests
+ * @chain: chain to check
+ **/
+static int chain_busy(struct hinic3_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ struct hinic3_api_cmd_cell_ctxt *ctxt;
+ u64 resp_header;
+
+ ctxt = &chain->cell_ctxt[chain->prod_idx];
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_MULTI_READ:
+ case HINIC3_API_CMD_POLL_READ:
+ resp_header = be64_to_cpu(ctxt->resp->header);
+ if (ctxt->status &&
+ !HINIC3_API_CMD_RESP_HEADER_VALID(resp_header)) {
+ sdk_err(dev, "Context(0x%x) busy!, pi: %u, resp_header: 0x%08x%08x\n",
+ ctxt->status, chain->prod_idx,
+ upper_32_bits(resp_header),
+ lower_32_bits(resp_header));
+ dump_api_chain_reg(chain);
+ return -EBUSY;
+ }
+ break;
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ chain->cons_idx = get_hw_cons_idx(chain);
+
+ if (chain->cons_idx == MASKED_IDX(chain, chain->prod_idx + 1)) {
+ sdk_err(dev, "API CMD chain %d is busy, cons_idx = %u, prod_idx = %u\n",
+ chain->chain_type, chain->cons_idx,
+ chain->prod_idx);
+ dump_api_chain_reg(chain);
+ return -EBUSY;
+ }
+ break;
+ default:
+ sdk_err(dev, "Unknown Chain type %d\n", chain->chain_type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * get_cell_data_size - get the data size of specific cell type
+ * @type: chain type
+ **/
+static u16 get_cell_data_size(enum hinic3_api_cmd_chain_type type)
+{
+ u16 cell_data_size = 0;
+
+ switch (type) {
+ case HINIC3_API_CMD_POLL_READ:
+ cell_data_size = ALIGN(API_CMD_CELL_DESC_SIZE +
+ API_CMD_CELL_WB_ADDR_SIZE +
+ API_CMD_CELL_DATA_ADDR_SIZE,
+ API_CHAIN_CELL_ALIGNMENT);
+ break;
+
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ cell_data_size = ALIGN(API_CMD_CELL_DESC_SIZE +
+ API_CMD_CELL_DATA_ADDR_SIZE,
+ API_CHAIN_CELL_ALIGNMENT);
+ break;
+ default:
+ break;
+ }
+
+ return cell_data_size;
+}
+
+/**
+ * prepare_cell_ctrl - prepare the ctrl of the cell for the command
+ * @cell_ctrl: the control of the cell to set the control into it
+ * @cell_len: the size of the cell
+ **/
+static void prepare_cell_ctrl(u64 *cell_ctrl, u16 cell_len)
+{
+ u64 ctrl;
+ u8 chksum;
+
+ ctrl = HINIC3_API_CMD_CELL_CTRL_SET(SIZE_8BYTES(cell_len), CELL_LEN) |
+ HINIC3_API_CMD_CELL_CTRL_SET(0ULL, RD_DMA_ATTR_OFF) |
+ HINIC3_API_CMD_CELL_CTRL_SET(0ULL, WR_DMA_ATTR_OFF);
+
+ chksum = xor_chksum_set(&ctrl);
+
+ ctrl |= HINIC3_API_CMD_CELL_CTRL_SET(chksum, XOR_CHKSUM);
+
+ /* The data in the HW should be in Big Endian Format */
+ *cell_ctrl = cpu_to_be64(ctrl);
+}
+
+/**
+ * prepare_api_cmd - prepare API CMD command
+ * @chain: chain for the command
+ * @cell: the cell of the command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ **/
+static void prepare_api_cmd(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell *cell, u8 node_id,
+ const void *cmd, u16 cmd_size)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ u32 priv;
+
+ cell_ctxt = &chain->cell_ctxt[chain->prod_idx];
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_POLL_READ:
+ priv = READ_API_CMD_PRIV_DATA(chain->chain_type,
+ cell_ctxt->saved_prod_idx);
+ cell->desc = HINIC3_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HINIC3_API_CMD_DESC_SET(API_CMD_READ_TYPE, RD_WR) |
+ HINIC3_API_CMD_DESC_SET(BYPASS, MGMT_BYPASS) |
+ HINIC3_API_CMD_DESC_SET(NOT_TRIGGER,
+ RESP_AEQE_EN) |
+ HINIC3_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ case HINIC3_API_CMD_POLL_WRITE:
+ priv = WRITE_API_CMD_PRIV_DATA(chain->chain_type);
+ cell->desc = HINIC3_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HINIC3_API_CMD_DESC_SET(API_CMD_WRITE_TYPE,
+ RD_WR) |
+ HINIC3_API_CMD_DESC_SET(BYPASS, MGMT_BYPASS) |
+ HINIC3_API_CMD_DESC_SET(NOT_TRIGGER,
+ RESP_AEQE_EN) |
+ HINIC3_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ priv = WRITE_API_CMD_PRIV_DATA(chain->chain_type);
+ cell->desc = HINIC3_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HINIC3_API_CMD_DESC_SET(API_CMD_WRITE_TYPE,
+ RD_WR) |
+ HINIC3_API_CMD_DESC_SET(NOT_BYPASS, MGMT_BYPASS) |
+ HINIC3_API_CMD_DESC_SET(TRIGGER, RESP_AEQE_EN) |
+ HINIC3_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ default:
+ sdk_err(chain->hwdev->dev_hdl, "Unknown Chain type: %d\n",
+ chain->chain_type);
+ return;
+ }
+
+ cell->desc |= HINIC3_API_CMD_DESC_SET(APICHN_0, APICHN_CODE) |
+ HINIC3_API_CMD_DESC_SET(APICHN_VALID, APICHN_RSVD);
+
+ cell->desc |= HINIC3_API_CMD_DESC_SET(node_id, DEST) |
+ HINIC3_API_CMD_DESC_SET(SIZE_4BYTES(cmd_size), SIZE);
+
+ cell->desc |= HINIC3_API_CMD_DESC_SET(xor_chksum_set(&cell->desc),
+ XOR_CHKSUM);
+
+ /* The data in the HW should be in Big Endian Format */
+ cell->desc = cpu_to_be64(cell->desc);
+
+ memcpy(cell_ctxt->api_cmd_vaddr, cmd, cmd_size);
+}
+
+/**
+ * prepare_cell - prepare cell ctrl and cmd in the current producer cell
+ * @chain: chain for the command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ **/
+static void prepare_cell(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 cmd_size)
+{
+ struct hinic3_api_cmd_cell *curr_node;
+ u16 cell_size;
+
+ curr_node = chain->curr_node;
+
+ cell_size = get_cell_data_size(chain->chain_type);
+
+ prepare_cell_ctrl(&curr_node->ctrl, cell_size);
+ prepare_api_cmd(chain, curr_node, node_id, cmd, cmd_size);
+}
+
+static inline void cmd_chain_prod_idx_inc(struct hinic3_api_cmd_chain *chain)
+{
+ chain->prod_idx = MASKED_IDX(chain, chain->prod_idx + 1);
+}
+
+static void issue_api_cmd(struct hinic3_api_cmd_chain *chain)
+{
+ set_prod_idx(chain);
+}
+
+/**
+ * api_cmd_status_update - update the status of the chain
+ * @chain: chain to update
+ **/
+static void api_cmd_status_update(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_api_cmd_status *wb_status;
+ enum hinic3_api_cmd_chain_type chain_type;
+ u64 status_header;
+ u32 buf_desc;
+
+ wb_status = chain->wb_status;
+
+ buf_desc = be32_to_cpu(wb_status->buf_desc);
+ if (HINIC3_API_CMD_STATUS_GET(buf_desc, CHKSUM_ERR))
+ return;
+
+ status_header = be64_to_cpu(wb_status->header);
+ chain_type = HINIC3_API_CMD_STATUS_HEADER_GET(status_header, CHAIN_ID);
+ if (chain_type >= HINIC3_API_CMD_MAX)
+ return;
+
+ if (chain_type != chain->chain_type)
+ return;
+
+ chain->cons_idx = HINIC3_API_CMD_STATUS_GET(buf_desc, CONS_IDX);
+}
+
+static enum hinic3_wait_return wait_for_status_poll_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_chain *chain = priv_data;
+
+ if (!chain->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ api_cmd_status_update(chain);
+ /* a sync API CMD should start only after the previous cmd has finished */
+ if (chain->cons_idx == chain->prod_idx)
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * wait_for_status_poll - wait for write to mgmt command to complete
+ * @chain: the chain of the command
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_status_poll(struct hinic3_api_cmd_chain *chain)
+{
+ return hinic3_wait_for_timeout(chain,
+ wait_for_status_poll_handler,
+ API_CMD_STATUS_TIMEOUT, 100); /* poll once every 100 us */
+}
+
+static void copy_resp_data(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell_ctxt *ctxt,
+ void *ack, u16 ack_size)
+{
+ struct hinic3_api_cmd_resp_fmt *resp = ctxt->resp;
+ int rsp_size_align = chain->rsp_size_align - 0x8;
+ int rsp_size = (ack_size > rsp_size_align) ? rsp_size_align : ack_size;
+
+ memcpy(ack, &resp->resp_data, rsp_size);
+
+ ctxt->status = 0;
+}
+
+static enum hinic3_wait_return check_cmd_resp_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_cell_ctxt *ctxt = priv_data;
+ u64 resp_header;
+ u8 resp_status;
+
+ if (!ctxt->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ resp_header = be64_to_cpu(ctxt->resp->header);
+ rmb(); /* read the latest header */
+
+ if (HINIC3_API_CMD_RESP_HEADER_VALID(resp_header)) {
+ resp_status = HINIC3_API_CMD_RESP_HEAD_GET(resp_header, STATUS);
+ if (resp_status) {
+ pr_err("Api chain response data err, status: %u\n",
+ resp_status);
+ return WAIT_PROCESS_ERR;
+ }
+
+ return WAIT_PROCESS_CPL;
+ }
+
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * wait_for_resp_polling - poll for the response data of a read api-command
+ * @ctxt: pointer to the api cmd cell context
+ *
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_resp_polling(struct hinic3_api_cmd_cell_ctxt *ctxt)
+{
+ return hinic3_wait_for_timeout(ctxt, check_cmd_resp_handler,
+ POLLING_COMPLETION_TIMEOUT_DEFAULT,
+ USEC_PER_MSEC);
+}
+
+/**
+ * wait_for_api_cmd_completion - wait for command to complete
+ * @chain: chain for the command
+ * @ctxt: cell context of the command
+ * @ack: response buffer
+ * @ack_size: size of the response buffer
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_api_cmd_completion(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell_ctxt *ctxt,
+ void *ack, u16 ack_size)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ int err = 0;
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_POLL_READ:
+ err = wait_for_resp_polling(ctxt);
+ if (err == 0)
+ copy_resp_data(chain, ctxt, ack, ack_size);
+ else
+ sdk_err(dev, "API CMD poll response timeout\n");
+ break;
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ err = wait_for_status_poll(chain);
+ if (err != 0) {
+ sdk_err(dev, "API CMD Poll status timeout, chain type: %d\n",
+ chain->chain_type);
+ break;
+ }
+ break;
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ /* No need to wait */
+ break;
+ default:
+ sdk_err(dev, "Unknown API CMD Chain type: %d\n",
+ chain->chain_type);
+ err = -EINVAL;
+ break;
+ }
+
+ if (err != 0)
+ dump_api_chain_reg(chain);
+
+ return err;
+}
+
+static inline void update_api_cmd_ctxt(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell_ctxt *ctxt)
+{
+ ctxt->status = 1;
+ ctxt->saved_prod_idx = chain->prod_idx;
+ if (ctxt->resp) {
+ ctxt->resp->header = 0;
+
+ /* make sure "header" was cleared */
+ wmb();
+ }
+}
+
+/**
+ * api_cmd - API CMD command
+ * @chain: chain for the command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ * @ack: buffer for the response data, NULL for write commands
+ * @ack_size: size of the response buffer
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 cmd_size, void *ack, u16 ack_size)
+{
+ struct hinic3_api_cmd_cell_ctxt *ctxt = NULL;
+
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock(&chain->async_lock);
+ else
+ down(&chain->sem);
+ ctxt = &chain->cell_ctxt[chain->prod_idx];
+ if (chain_busy(chain)) {
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_unlock(&chain->async_lock);
+ else
+ up(&chain->sem);
+ return -EBUSY;
+ }
+ update_api_cmd_ctxt(chain, ctxt);
+
+ prepare_cell(chain, node_id, cmd, cmd_size);
+
+ cmd_chain_prod_idx_inc(chain);
+
+ wmb(); /* issue the command */
+
+ issue_api_cmd(chain);
+
+ /* incremented prod idx, update ctxt */
+
+ chain->curr_node = chain->cell_ctxt[chain->prod_idx].cell_vaddr;
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_unlock(&chain->async_lock);
+ else
+ up(&chain->sem);
+
+ return wait_for_api_cmd_completion(chain, ctxt, ack, ack_size);
+}
+
+/**
+ * hinic3_api_cmd_write - Write API CMD command
+ * @chain: chain for write command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @size: the command size
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_api_cmd_write(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size)
+{
+ /* Verify the chain type */
+ return api_cmd(chain, node_id, cmd, size, NULL, 0);
+}
+
+/**
+ * hinic3_api_cmd_read - Read API CMD command
+ * @chain: chain for read command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @size: the command size
+ * @ack: buffer for the response data
+ * @ack_size: size of the response buffer
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_api_cmd_read(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size, void *ack, u16 ack_size)
+{
+ return api_cmd(chain, node_id, cmd, size, ack, ack_size);
+}
+
+static enum hinic3_wait_return check_chain_restart_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_chain *cmd_chain = priv_data;
+ u32 reg_addr, val;
+
+ if (!cmd_chain->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ reg_addr = HINIC3_CSR_API_CMD_CHAIN_REQ_ADDR(cmd_chain->chain_type);
+ val = hinic3_hwif_read_reg(cmd_chain->hwdev->hwif, reg_addr);
+ if (!HINIC3_API_CMD_CHAIN_REQ_GET(val, RESTART))
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * api_cmd_hw_restart - restart the chain in the HW
+ * @cmd_chain: the API CMD specific chain to restart
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_hw_restart(struct hinic3_api_cmd_chain *cmd_chain)
+{
+ struct hinic3_hwif *hwif = cmd_chain->hwdev->hwif;
+ u32 reg_addr, val;
+
+ /* Read Modify Write */
+ reg_addr = HINIC3_CSR_API_CMD_CHAIN_REQ_ADDR(cmd_chain->chain_type);
+ val = hinic3_hwif_read_reg(hwif, reg_addr);
+
+ val = HINIC3_API_CMD_CHAIN_REQ_CLEAR(val, RESTART);
+ val |= HINIC3_API_CMD_CHAIN_REQ_SET(1, RESTART);
+
+ hinic3_hwif_write_reg(hwif, reg_addr, val);
+
+ return hinic3_wait_for_timeout(cmd_chain, check_chain_restart_handler,
+ API_CMD_TIMEOUT, USEC_PER_MSEC);
+}
+
+/**
+ * api_cmd_ctrl_init - set the control register of a chain
+ * @chain: the API CMD specific chain to set control register for
+ **/
+static void api_cmd_ctrl_init(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 reg_addr, ctrl;
+ u32 size;
+
+ /* Read Modify Write */
+ reg_addr = HINIC3_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
+
+ size = (u32)ilog2(chain->cell_size >> API_CMD_CHAIN_CELL_SIZE_SHIFT);
+
+ ctrl = hinic3_hwif_read_reg(hwif, reg_addr);
+
+ ctrl = HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
+
+ ctrl |= HINIC3_API_CMD_CHAIN_CTRL_SET(0, AEQE_EN) |
+ HINIC3_API_CMD_CHAIN_CTRL_SET(size, CELL_SIZE);
+
+ hinic3_hwif_write_reg(hwif, reg_addr, ctrl);
+}
+
+/**
+ * api_cmd_set_status_addr - set the status address of a chain in the HW
+ * @chain: the API CMD specific chain to set status address for
+ **/
+static void api_cmd_set_status_addr(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_HI_ADDR(chain->chain_type);
+ val = upper_32_bits(chain->wb_status_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ addr = HINIC3_CSR_API_CMD_STATUS_LO_ADDR(chain->chain_type);
+ val = lower_32_bits(chain->wb_status_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+/**
+ * api_cmd_set_num_cells - set the number of cells of a chain in the HW
+ * @chain: the API CMD specific chain to set the number of cells for
+ **/
+static void api_cmd_set_num_cells(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(chain->chain_type);
+ val = chain->num_cells;
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+/**
+ * api_cmd_head_init - set the head cell of a chain in the HW
+ * @chain: the API CMD specific chain to set the head for
+ **/
+static void api_cmd_head_init(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(chain->chain_type);
+ val = upper_32_bits(chain->head_cell_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(chain->chain_type);
+ val = lower_32_bits(chain->head_cell_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+static enum hinic3_wait_return check_chain_ready_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_chain *chain = priv_data;
+ u32 addr, val;
+ u32 hw_cons_idx;
+
+ if (!chain->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+ hw_cons_idx = HINIC3_API_CMD_STATUS_GET(val, CONS_IDX);
+ /* wait for HW cons idx to be updated */
+ if (hw_cons_idx == chain->cons_idx)
+ return WAIT_PROCESS_CPL;
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * wait_for_ready_chain - wait for the chain to be ready
+ * @chain: the API CMD specific chain to wait for
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_ready_chain(struct hinic3_api_cmd_chain *chain)
+{
+ return hinic3_wait_for_timeout(chain, check_chain_ready_handler,
+ API_CMD_TIMEOUT, USEC_PER_MSEC);
+}
+
+/**
+ * api_cmd_chain_hw_clean - clean the HW
+ * @chain: the API CMD specific chain
+ **/
+static void api_cmd_chain_hw_clean(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, ctrl;
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
+
+ ctrl = hinic3_hwif_read_reg(hwif, addr);
+ ctrl = HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, RESTART_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_ERR) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_CHK_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
+
+ hinic3_hwif_write_reg(hwif, addr, ctrl);
+}
+
+/**
+ * api_cmd_chain_hw_init - initialize the chain in the HW
+ * @chain: the API CMD specific chain to initialize in HW
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_chain_hw_init(struct hinic3_api_cmd_chain *chain)
+{
+ api_cmd_chain_hw_clean(chain);
+
+ api_cmd_set_status_addr(chain);
+
+ if (api_cmd_hw_restart(chain)) {
+ sdk_err(chain->hwdev->dev_hdl, "Failed to restart api_cmd_hw\n");
+ return -EBUSY;
+ }
+
+ api_cmd_ctrl_init(chain);
+ api_cmd_set_num_cells(chain);
+ api_cmd_head_init(chain);
+
+ return wait_for_ready_chain(chain);
+}
+
+/**
+ * alloc_cmd_buf - allocate a dma buffer for API CMD command
+ * @chain: the API CMD specific chain for the cmd
+ * @cell: the cell in the HW for the cmd
+ * @cell_idx: the index of the cell
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_cmd_buf(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell *cell, u32 cell_idx)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ void *dev = chain->hwdev->dev_hdl;
+ void *buf_vaddr;
+ u64 buf_paddr;
+ int err = 0;
+
+ buf_vaddr = (u8 *)((u64)chain->buf_vaddr_base +
+ chain->buf_size_align * cell_idx);
+ buf_paddr = chain->buf_paddr_base +
+ chain->buf_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+
+ cell_ctxt->api_cmd_vaddr = buf_vaddr;
+
+ /* set the cmd DMA address in the cell */
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_POLL_READ:
+ cell->read.hw_cmd_paddr = cpu_to_be64(buf_paddr);
+ break;
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ /* The data in the HW should be in Big Endian Format */
+ cell->write.hw_cmd_paddr = cpu_to_be64(buf_paddr);
+ break;
+ default:
+ sdk_err(dev, "Unknown API CMD Chain type: %d\n",
+ chain->chain_type);
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+/**
+ * alloc_resp_buf - allocate a response buffer for an API CMD command
+ * @chain: the API CMD specific chain for the cmd
+ * @cell: the cell in the HW for the cmd
+ * @cell_idx: the index of the cell
+ **/
+static void alloc_resp_buf(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell *cell, u32 cell_idx)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ void *resp_vaddr;
+ u64 resp_paddr;
+
+ resp_vaddr = (u8 *)((u64)chain->rsp_vaddr_base +
+ chain->rsp_size_align * cell_idx);
+ resp_paddr = chain->rsp_paddr_base +
+ chain->rsp_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+
+ cell_ctxt->resp = resp_vaddr;
+ cell->read.hw_wb_resp_paddr = cpu_to_be64(resp_paddr);
+}
+
+static int hinic3_alloc_api_cmd_cell_buf(struct hinic3_api_cmd_chain *chain,
+ u32 cell_idx,
+ struct hinic3_api_cmd_cell *node)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ int err;
+
+ /* For read chain, we should allocate buffer for the response data */
+ if (chain->chain_type == HINIC3_API_CMD_MULTI_READ ||
+ chain->chain_type == HINIC3_API_CMD_POLL_READ)
+ alloc_resp_buf(chain, node, cell_idx);
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_POLL_READ:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ err = alloc_cmd_buf(chain, node, cell_idx);
+ if (err) {
+ sdk_err(dev, "Failed to allocate cmd buffer\n");
+ goto alloc_cmd_buf_err;
+ }
+ break;
+ /* For api command write and api command read, the data section
+ * is directly inserted in the cell, so no need to allocate.
+ */
+ case HINIC3_API_CMD_MULTI_READ:
+ chain->cell_ctxt[cell_idx].api_cmd_vaddr =
+ &node->read.hw_cmd_paddr;
+ break;
+ default:
+ sdk_err(dev, "Unsupported API CMD chain type\n");
+ err = -EINVAL;
+ goto alloc_cmd_buf_err;
+ }
+
+ return 0;
+
+alloc_cmd_buf_err:
+
+ return err;
+}
+
+/**
+ * api_cmd_create_cell - create API CMD cell of specific chain
+ * @chain: the API CMD specific chain to create its cell
+ * @cell_idx: the cell index to create
+ * @pre_node: previous cell
+ * @node_vaddr: the virt addr of the cell
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_cell(struct hinic3_api_cmd_chain *chain, u32 cell_idx,
+ struct hinic3_api_cmd_cell *pre_node,
+ struct hinic3_api_cmd_cell **node_vaddr)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ struct hinic3_api_cmd_cell *node;
+ void *cell_vaddr;
+ u64 cell_paddr;
+ int err;
+
+ cell_vaddr = (void *)((u64)chain->cell_vaddr_base +
+ chain->cell_size_align * cell_idx);
+ cell_paddr = chain->cell_paddr_base +
+ chain->cell_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+ cell_ctxt->cell_vaddr = cell_vaddr;
+ cell_ctxt->hwdev = chain->hwdev;
+ node = cell_ctxt->cell_vaddr;
+
+ if (!pre_node) {
+ chain->head_node = cell_vaddr;
+ chain->head_cell_paddr = (dma_addr_t)cell_paddr;
+ } else {
+ /* The data in the HW should be in Big Endian Format */
+ pre_node->next_cell_paddr = cpu_to_be64(cell_paddr);
+ }
+
+ /* Driver software should make sure that there is an empty API
+ * command cell at the end of the chain
+ */
+ node->next_cell_paddr = 0;
+
+ err = hinic3_alloc_api_cmd_cell_buf(chain, cell_idx, node);
+ if (err)
+ return err;
+
+ *node_vaddr = node;
+
+ return 0;
+}
+
+/**
+ * api_cmd_create_cells - create API CMD cells for specific chain
+ * @chain: the API CMD specific chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_cells(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_api_cmd_cell *node = NULL, *pre_node = NULL;
+ void *dev = chain->hwdev->dev_hdl;
+ u32 cell_idx;
+ int err;
+
+ for (cell_idx = 0; cell_idx < chain->num_cells; cell_idx++) {
+ err = api_cmd_create_cell(chain, cell_idx, pre_node, &node);
+ if (err) {
+ sdk_err(dev, "Failed to create API CMD cell\n");
+ return err;
+ }
+
+ pre_node = node;
+ }
+
+ if (!node)
+ return -EFAULT;
+
+ /* set the Final node to point on the start */
+ node->next_cell_paddr = cpu_to_be64(chain->head_cell_paddr);
+
+ /* set the current node to be the head */
+ chain->curr_node = chain->head_node;
+ return 0;
+}
+
+/**
+ * api_chain_init - initialize API CMD specific chain
+ * @chain: the API CMD specific chain to initialize
+ * @attr: attributes to set in the chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_chain_init(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_chain_attr *attr)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ size_t cell_ctxt_size;
+ size_t cells_buf_size;
+ int err;
+
+ chain->chain_type = attr->chain_type;
+ chain->num_cells = attr->num_cells;
+ chain->cell_size = attr->cell_size;
+ chain->rsp_size = attr->rsp_size;
+
+ chain->prod_idx = 0;
+ chain->cons_idx = 0;
+
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock_init(&chain->async_lock);
+ else
+ sema_init(&chain->sem, 1);
+
+ cell_ctxt_size = chain->num_cells * sizeof(*chain->cell_ctxt);
+ if (!cell_ctxt_size) {
+ sdk_err(dev, "Api chain cell size cannot be zero\n");
+ err = -EINVAL;
+ goto alloc_cell_ctxt_err;
+ }
+
+ chain->cell_ctxt = kzalloc(cell_ctxt_size, GFP_KERNEL);
+ if (!chain->cell_ctxt) {
+ sdk_err(dev, "Failed to allocate cell contexts for a chain\n");
+ err = -ENOMEM;
+ goto alloc_cell_ctxt_err;
+ }
+
+ chain->wb_status = dma_zalloc_coherent(dev,
+ sizeof(*chain->wb_status),
+ &chain->wb_status_paddr,
+ GFP_KERNEL);
+ if (!chain->wb_status) {
+ sdk_err(dev, "Failed to allocate DMA wb status\n");
+ err = -ENOMEM;
+ goto alloc_wb_status_err;
+ }
+
+ chain->cell_size_align = ALIGN((u64)chain->cell_size,
+ API_CMD_NODE_ALIGN_SIZE);
+ chain->rsp_size_align = ALIGN((u64)chain->rsp_size,
+ API_CHAIN_RESP_ALIGNMENT);
+ chain->buf_size_align = ALIGN(API_CMD_BUF_SIZE, API_PAYLOAD_ALIGN_SIZE);
+
+ cells_buf_size = (chain->cell_size_align + chain->rsp_size_align +
+ chain->buf_size_align) * chain->num_cells;
+
+ err = hinic3_dma_zalloc_coherent_align(dev, cells_buf_size,
+ API_CMD_NODE_ALIGN_SIZE,
+ GFP_KERNEL,
+ &chain->cells_addr);
+ if (err) {
+ sdk_err(dev, "Failed to allocate API CMD cells buffer\n");
+ goto alloc_cells_buf_err;
+ }
+
+ chain->cell_vaddr_base = chain->cells_addr.align_vaddr;
+ chain->cell_paddr_base = chain->cells_addr.align_paddr;
+
+ chain->rsp_vaddr_base = (u8 *)((u64)chain->cell_vaddr_base +
+ chain->cell_size_align * chain->num_cells);
+ chain->rsp_paddr_base = chain->cell_paddr_base +
+ chain->cell_size_align * chain->num_cells;
+
+ chain->buf_vaddr_base = (u8 *)((u64)chain->rsp_vaddr_base +
+ chain->rsp_size_align * chain->num_cells);
+ chain->buf_paddr_base = chain->rsp_paddr_base +
+ chain->rsp_size_align * chain->num_cells;
+
+ return 0;
+
+alloc_cells_buf_err:
+ dma_free_coherent(dev, sizeof(*chain->wb_status),
+ chain->wb_status, chain->wb_status_paddr);
+
+alloc_wb_status_err:
+ kfree(chain->cell_ctxt);
+
+alloc_cell_ctxt_err:
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock_deinit(&chain->async_lock);
+ else
+ sema_deinit(&chain->sem);
+ return err;
+}
+
+/**
+ * api_chain_free - free API CMD specific chain
+ * @chain: the API CMD specific chain to free
+ **/
+static void api_chain_free(struct hinic3_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+
+ hinic3_dma_free_coherent_align(dev, &chain->cells_addr);
+
+ dma_free_coherent(dev, sizeof(*chain->wb_status),
+ chain->wb_status, chain->wb_status_paddr);
+ kfree(chain->cell_ctxt);
+ chain->cell_ctxt = NULL;
+
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock_deinit(&chain->async_lock);
+ else
+ sema_deinit(&chain->sem);
+}
+
+/**
+ * api_cmd_create_chain - create API CMD specific chain
+ * @cmd_chain: pointer used to return the created API CMD chain
+ * @attr: attributes to set in the chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_chain(struct hinic3_api_cmd_chain **cmd_chain,
+ struct hinic3_api_cmd_chain_attr *attr)
+{
+ struct hinic3_hwdev *hwdev = attr->hwdev;
+ struct hinic3_api_cmd_chain *chain = NULL;
+ int err;
+
+ if (attr->num_cells & (attr->num_cells - 1)) {
+ sdk_err(hwdev->dev_hdl, "Invalid number of cells, must be power of 2\n");
+ return -EINVAL;
+ }
+
+ chain = kzalloc(sizeof(*chain), GFP_KERNEL);
+ if (!chain)
+ return -ENOMEM;
+
+ chain->hwdev = hwdev;
+
+ err = api_chain_init(chain, attr);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize chain\n");
+ goto chain_init_err;
+ }
+
+ err = api_cmd_create_cells(chain);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to create cells for API CMD chain\n");
+ goto create_cells_err;
+ }
+
+ err = api_cmd_chain_hw_init(chain);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize chain HW\n");
+ goto chain_hw_init_err;
+ }
+
+ *cmd_chain = chain;
+ return 0;
+
+chain_hw_init_err:
+create_cells_err:
+ api_chain_free(chain);
+
+chain_init_err:
+ kfree(chain);
+ return err;
+}
+
+/**
+ * api_cmd_destroy_chain - destroy API CMD specific chain
+ * @chain: the API CMD specific chain to destroy
+ **/
+static void api_cmd_destroy_chain(struct hinic3_api_cmd_chain *chain)
+{
+ api_chain_free(chain);
+ kfree(chain);
+}
+
+/**
+ * hinic3_api_cmd_init - Initialize all the API CMD chains
+ * @hwdev: the hardware device
+ * @chain: the API CMD chains that will be initialized
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_api_cmd_init(struct hinic3_hwdev *hwdev,
+ struct hinic3_api_cmd_chain **chain)
+{
+ void *dev = hwdev->dev_hdl;
+ struct hinic3_api_cmd_chain_attr attr;
+ u8 chain_type, i;
+ int err;
+
+ if (COMM_SUPPORT_API_CHAIN(hwdev) == 0)
+ return 0;
+
+ attr.hwdev = hwdev;
+ attr.num_cells = API_CHAIN_NUM_CELLS;
+ attr.cell_size = API_CHAIN_CELL_SIZE;
+ attr.rsp_size = API_CHAIN_RSP_DATA_SIZE;
+
+ chain_type = HINIC3_API_CMD_WRITE_TO_MGMT_CPU;
+ for (; chain_type < HINIC3_API_CMD_MAX; chain_type++) {
+ attr.chain_type = chain_type;
+
+ err = api_cmd_create_chain(&chain[chain_type], &attr);
+ if (err) {
+ sdk_err(dev, "Failed to create chain %d\n", chain_type);
+ goto create_chain_err;
+ }
+ }
+
+ return 0;
+
+create_chain_err:
+ i = HINIC3_API_CMD_WRITE_TO_MGMT_CPU;
+ for (; i < chain_type; i++)
+ api_cmd_destroy_chain(chain[i]);
+
+ return err;
+}
+
+/**
+ * hinic3_api_cmd_free - free the API CMD chains
+ * @hwdev: the hardware device
+ * @chain: the API CMD chains that will be freed
+ **/
+void hinic3_api_cmd_free(const struct hinic3_hwdev *hwdev, struct hinic3_api_cmd_chain **chain)
+{
+ u8 chain_type;
+
+ if (COMM_SUPPORT_API_CHAIN(hwdev) == 0)
+ return;
+
+ chain_type = HINIC3_API_CMD_WRITE_TO_MGMT_CPU;
+
+ for (; chain_type < HINIC3_API_CMD_MAX; chain_type++)
+ api_cmd_destroy_chain(chain[chain_type]);
+}
+
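
A note on the ring layout used above: api_cmd_create_chain() rejects a num_cells value that is not a power of two, which lets the producer/consumer indices of the chain wrap with a simple mask instead of a modulo. Below is a minimal standalone sketch of that wrap arithmetic (plain userspace C; NUM_CELLS and ring_advance() are illustrative names, not part of the driver):

/*
 * Illustration only: why a power-of-two ring depth lets the index
 * wrap with a mask. Any power-of-two value works; 32 is arbitrary.
 */
#include <stdio.h>

#define NUM_CELLS 32U	/* must be a power of 2, as the driver checks */

static unsigned int ring_advance(unsigned int idx)
{
	/* cheap wrap-around: equivalent to (idx + 1) % NUM_CELLS */
	return (idx + 1) & (NUM_CELLS - 1);
}

int main(void)
{
	unsigned int prod_idx = NUM_CELLS - 1;

	/* advancing past the last cell wraps back to cell 0 */
	printf("next prod_idx: %u\n", ring_advance(prod_idx));
	return 0;
}
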
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h
new file mode 100644
index 0000000..727e668
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h
@@ -0,0 +1,286 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_API_CMD_H
+#define HINIC3_API_CMD_H
+
+#include <linux/semaphore.h>
+
+#include "hinic3_eqs.h"
+#include "hinic3_hwif.h"
+
+/* api_cmd_cell.ctrl structure */
+#define HINIC3_API_CMD_CELL_CTRL_CELL_LEN_SHIFT 0
+#define HINIC3_API_CMD_CELL_CTRL_RD_DMA_ATTR_OFF_SHIFT 16
+#define HINIC3_API_CMD_CELL_CTRL_WR_DMA_ATTR_OFF_SHIFT 24
+#define HINIC3_API_CMD_CELL_CTRL_XOR_CHKSUM_SHIFT 56
+
+#define HINIC3_API_CMD_CELL_CTRL_CELL_LEN_MASK 0x3FU
+#define HINIC3_API_CMD_CELL_CTRL_RD_DMA_ATTR_OFF_MASK 0x3FU
+#define HINIC3_API_CMD_CELL_CTRL_WR_DMA_ATTR_OFF_MASK 0x3FU
+#define HINIC3_API_CMD_CELL_CTRL_XOR_CHKSUM_MASK 0xFFU
+
+#define HINIC3_API_CMD_CELL_CTRL_SET(val, member) \
+ ((((u64)(val)) & HINIC3_API_CMD_CELL_CTRL_##member##_MASK) << \
+ HINIC3_API_CMD_CELL_CTRL_##member##_SHIFT)
+
+/* api_cmd_cell.desc structure */
+#define HINIC3_API_CMD_DESC_API_TYPE_SHIFT 0
+#define HINIC3_API_CMD_DESC_RD_WR_SHIFT 1
+#define HINIC3_API_CMD_DESC_MGMT_BYPASS_SHIFT 2
+#define HINIC3_API_CMD_DESC_RESP_AEQE_EN_SHIFT 3
+#define HINIC3_API_CMD_DESC_APICHN_RSVD_SHIFT 4
+#define HINIC3_API_CMD_DESC_APICHN_CODE_SHIFT 6
+#define HINIC3_API_CMD_DESC_PRIV_DATA_SHIFT 8
+#define HINIC3_API_CMD_DESC_DEST_SHIFT 32
+#define HINIC3_API_CMD_DESC_SIZE_SHIFT 40
+#define HINIC3_API_CMD_DESC_XOR_CHKSUM_SHIFT 56
+
+#define HINIC3_API_CMD_DESC_API_TYPE_MASK 0x1U
+#define HINIC3_API_CMD_DESC_RD_WR_MASK 0x1U
+#define HINIC3_API_CMD_DESC_MGMT_BYPASS_MASK 0x1U
+#define HINIC3_API_CMD_DESC_RESP_AEQE_EN_MASK 0x1U
+#define HINIC3_API_CMD_DESC_APICHN_RSVD_MASK 0x3U
+#define HINIC3_API_CMD_DESC_APICHN_CODE_MASK 0x3U
+#define HINIC3_API_CMD_DESC_PRIV_DATA_MASK 0xFFFFFFU
+#define HINIC3_API_CMD_DESC_DEST_MASK 0x1FU
+#define HINIC3_API_CMD_DESC_SIZE_MASK 0x7FFU
+#define HINIC3_API_CMD_DESC_XOR_CHKSUM_MASK 0xFFU
+
+#define HINIC3_API_CMD_DESC_SET(val, member) \
+ ((((u64)(val)) & HINIC3_API_CMD_DESC_##member##_MASK) << \
+ HINIC3_API_CMD_DESC_##member##_SHIFT)
+
+/* api_cmd_status header */
+#define HINIC3_API_CMD_STATUS_HEADER_VALID_SHIFT 0
+#define HINIC3_API_CMD_STATUS_HEADER_CHAIN_ID_SHIFT 16
+
+#define HINIC3_API_CMD_STATUS_HEADER_VALID_MASK 0xFFU
+#define HINIC3_API_CMD_STATUS_HEADER_CHAIN_ID_MASK 0xFFU
+
+#define HINIC3_API_CMD_STATUS_HEADER_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_STATUS_HEADER_##member##_SHIFT) & \
+ HINIC3_API_CMD_STATUS_HEADER_##member##_MASK)
+
+/* API_CHAIN_REQ CSR: 0x0020+api_idx*0x080 */
+#define HINIC3_API_CMD_CHAIN_REQ_RESTART_SHIFT 1
+#define HINIC3_API_CMD_CHAIN_REQ_WB_TRIGGER_SHIFT 2
+
+#define HINIC3_API_CMD_CHAIN_REQ_RESTART_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_REQ_WB_TRIGGER_MASK 0x1U
+
+#define HINIC3_API_CMD_CHAIN_REQ_SET(val, member) \
+ (((val) & HINIC3_API_CMD_CHAIN_REQ_##member##_MASK) << \
+ HINIC3_API_CMD_CHAIN_REQ_##member##_SHIFT)
+
+#define HINIC3_API_CMD_CHAIN_REQ_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_CHAIN_REQ_##member##_SHIFT) & \
+ HINIC3_API_CMD_CHAIN_REQ_##member##_MASK)
+
+#define HINIC3_API_CMD_CHAIN_REQ_CLEAR(val, member) \
+ ((val) & (~(HINIC3_API_CMD_CHAIN_REQ_##member##_MASK \
+ << HINIC3_API_CMD_CHAIN_REQ_##member##_SHIFT)))
+
+/* API_CHAIN_CTL CSR: 0x0014+api_idx*0x080 */
+#define HINIC3_API_CMD_CHAIN_CTRL_RESTART_EN_SHIFT 1
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_ERR_SHIFT 2
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQE_EN_SHIFT 4
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQ_ID_SHIFT 8
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_CHK_EN_SHIFT 28
+#define HINIC3_API_CMD_CHAIN_CTRL_CELL_SIZE_SHIFT 30
+
+#define HINIC3_API_CMD_CHAIN_CTRL_RESTART_EN_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_ERR_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQE_EN_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQ_ID_MASK 0x3U
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_CHK_EN_MASK 0x3U
+#define HINIC3_API_CMD_CHAIN_CTRL_CELL_SIZE_MASK 0x3U
+
+#define HINIC3_API_CMD_CHAIN_CTRL_SET(val, member) \
+ (((val) & HINIC3_API_CMD_CHAIN_CTRL_##member##_MASK) << \
+ HINIC3_API_CMD_CHAIN_CTRL_##member##_SHIFT)
+
+#define HINIC3_API_CMD_CHAIN_CTRL_CLEAR(val, member) \
+ ((val) & (~(HINIC3_API_CMD_CHAIN_CTRL_##member##_MASK \
+ << HINIC3_API_CMD_CHAIN_CTRL_##member##_SHIFT)))
+
+/* api_cmd rsp header */
+#define HINIC3_API_CMD_RESP_HEAD_VALID_SHIFT 0
+#define HINIC3_API_CMD_RESP_HEAD_STATUS_SHIFT 8
+#define HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_SHIFT 16
+#define HINIC3_API_CMD_RESP_HEAD_RESP_LEN_SHIFT 24
+#define HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_SHIFT 40
+
+#define HINIC3_API_CMD_RESP_HEAD_VALID_MASK 0xFF
+#define HINIC3_API_CMD_RESP_HEAD_STATUS_MASK 0xFFU
+#define HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_MASK 0xFFU
+#define HINIC3_API_CMD_RESP_HEAD_RESP_LEN_MASK 0x1FFU
+#define HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_MASK 0xFFFFFFU
+
+#define HINIC3_API_CMD_RESP_HEAD_VALID_CODE 0xFF
+
+#define HINIC3_API_CMD_RESP_HEADER_VALID(val) \
+ (((val) & HINIC3_API_CMD_RESP_HEAD_VALID_MASK) == \
+ HINIC3_API_CMD_RESP_HEAD_VALID_CODE)
+
+#define HINIC3_API_CMD_RESP_HEAD_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_RESP_HEAD_##member##_SHIFT) & \
+ HINIC3_API_CMD_RESP_HEAD_##member##_MASK)
+
+#define HINIC3_API_CMD_RESP_HEAD_CHAIN_ID(val) \
+ (((val) >> HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_SHIFT) & \
+ HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_MASK)
+
+#define HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV(val) \
+ ((u16)(((val) >> HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_SHIFT) & \
+ HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_MASK))
+/* API_STATUS_0 CSR: 0x0030+api_idx*0x080 */
+#define HINIC3_API_CMD_STATUS_CONS_IDX_MASK 0xFFFFFFU
+#define HINIC3_API_CMD_STATUS_CONS_IDX_SHIFT 0
+
+#define HINIC3_API_CMD_STATUS_FSM_MASK 0xFU
+#define HINIC3_API_CMD_STATUS_FSM_SHIFT 24
+
+#define HINIC3_API_CMD_STATUS_CHKSUM_ERR_MASK 0x3U
+#define HINIC3_API_CMD_STATUS_CHKSUM_ERR_SHIFT 28
+
+#define HINIC3_API_CMD_STATUS_CPLD_ERR_MASK 0x1U
+#define HINIC3_API_CMD_STATUS_CPLD_ERR_SHIFT 30
+
+#define HINIC3_API_CMD_STATUS_CONS_IDX(val) \
+ ((val) & HINIC3_API_CMD_STATUS_CONS_IDX_MASK)
+
+#define HINIC3_API_CMD_STATUS_CHKSUM_ERR(val) \
+ (((val) >> HINIC3_API_CMD_STATUS_CHKSUM_ERR_SHIFT) & \
+ HINIC3_API_CMD_STATUS_CHKSUM_ERR_MASK)
+
+#define HINIC3_API_CMD_STATUS_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_STATUS_##member##_SHIFT) & \
+ HINIC3_API_CMD_STATUS_##member##_MASK)
+
+enum hinic3_api_cmd_chain_type {
+ /* write to mgmt cpu command with completion */
+ HINIC3_API_CMD_WRITE_TO_MGMT_CPU = 2,
+ /* multi read command with completion notification - not used */
+ HINIC3_API_CMD_MULTI_READ = 3,
+ /* write command without completion notification */
+ HINIC3_API_CMD_POLL_WRITE = 4,
+ /* read command without completion notification */
+ HINIC3_API_CMD_POLL_READ = 5,
+ /* async write to mgmt cpu command, completion is not waited for */
+ HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU = 6,
+ HINIC3_API_CMD_MAX,
+};
+
+struct hinic3_api_cmd_status {
+ u64 header;
+ u32 buf_desc;
+ u32 cell_addr_hi;
+ u32 cell_addr_lo;
+ u32 rsvd0;
+ u64 rsvd1;
+};
+
+/* HW struct */
+struct hinic3_api_cmd_cell {
+ u64 ctrl;
+
+ /* address is 64 bit in HW struct */
+ u64 next_cell_paddr;
+
+ u64 desc;
+
+ /* HW struct */
+ union {
+ struct {
+ u64 hw_cmd_paddr;
+ } write;
+
+ struct {
+ u64 hw_wb_resp_paddr;
+ u64 hw_cmd_paddr;
+ } read;
+ };
+};
+
+struct hinic3_api_cmd_resp_fmt {
+ u64 header;
+ u64 resp_data;
+};
+
+struct hinic3_api_cmd_cell_ctxt {
+ struct hinic3_api_cmd_cell *cell_vaddr;
+
+ void *api_cmd_vaddr;
+
+ struct hinic3_api_cmd_resp_fmt *resp;
+
+ struct completion done;
+ int status;
+
+ u32 saved_prod_idx;
+ struct hinic3_hwdev *hwdev;
+};
+
+struct hinic3_api_cmd_chain_attr {
+ struct hinic3_hwdev *hwdev;
+ enum hinic3_api_cmd_chain_type chain_type;
+
+ u32 num_cells;
+ u16 rsp_size;
+ u16 cell_size;
+};
+
+struct hinic3_api_cmd_chain {
+ struct hinic3_hwdev *hwdev;
+ enum hinic3_api_cmd_chain_type chain_type;
+
+ u32 num_cells;
+ u16 cell_size;
+ u16 rsp_size;
+ u32 rsvd1;
+
+ /* HW members are in 24 bit format */
+ u32 prod_idx;
+ u32 cons_idx;
+
+ struct semaphore sem;
+ /* Async cmd path can not sleep, so it is protected by a spinlock */
+ spinlock_t async_lock;
+
+ dma_addr_t wb_status_paddr;
+ struct hinic3_api_cmd_status *wb_status;
+
+ dma_addr_t head_cell_paddr;
+ struct hinic3_api_cmd_cell *head_node;
+
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ struct hinic3_api_cmd_cell *curr_node;
+
+ struct hinic3_dma_addr_align cells_addr;
+
+ u8 *cell_vaddr_base;
+ u64 cell_paddr_base;
+ u8 *rsp_vaddr_base;
+ u64 rsp_paddr_base;
+ u8 *buf_vaddr_base;
+ u64 buf_paddr_base;
+ u64 cell_size_align;
+ u64 rsp_size_align;
+ u64 buf_size_align;
+
+ u64 rsvd2;
+};
+
+int hinic3_api_cmd_write(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size);
+
+int hinic3_api_cmd_read(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size, void *ack, u16 ack_size);
+
+int hinic3_api_cmd_init(struct hinic3_hwdev *hwdev,
+ struct hinic3_api_cmd_chain **chain);
+
+void hinic3_api_cmd_free(const struct hinic3_hwdev *hwdev, struct hinic3_api_cmd_chain **chain);
+
+#endif
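
The SHIFT/MASK pairs in this header all feed the same SET/GET/CLEAR macro pattern, and the .c file combines them into read-modify-write sequences on CSR values (see api_cmd_hw_restart() and api_cmd_ctrl_init() above). A minimal standalone sketch of that pattern follows; DEMO_FIELD is a made-up field, not a real register bit:

/*
 * Illustration only: the CLEAR-then-SET read-modify-write idiom used
 * with the bitfield macros in this header.
 */
#include <stdio.h>

#define DEMO_FIELD_SHIFT	1
#define DEMO_FIELD_MASK		0x1U

#define DEMO_FIELD_SET(val) \
	(((val) & DEMO_FIELD_MASK) << DEMO_FIELD_SHIFT)
#define DEMO_FIELD_GET(reg) \
	(((reg) >> DEMO_FIELD_SHIFT) & DEMO_FIELD_MASK)
#define DEMO_FIELD_CLEAR(reg) \
	((reg) & ~(DEMO_FIELD_MASK << DEMO_FIELD_SHIFT))

int main(void)
{
	unsigned int reg = 0xF0U;	/* pretend this was read from a CSR */

	reg = DEMO_FIELD_CLEAR(reg);	/* clear the field... */
	reg |= DEMO_FIELD_SET(1U);	/* ...then set it to 1 */

	/* other bits of the register are left untouched */
	printf("reg=0x%x field=%u\n", reg, DEMO_FIELD_GET(reg));
	return 0;
}

Clearing before setting is what keeps the rest of the register untouched, which is why the driver reads the CSR first.
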
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c
new file mode 100644
index 0000000..ceb7636
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c
@@ -0,0 +1,1575 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/errno.h>
+#include <linux/completion.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "npu_cmdq_base_defs.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_eqs.h"
+#include "hinic3_common.h"
+#include "hinic3_wq.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_hwif.h"
+#include "hinic3_cmdq.h"
+
+#define HINIC3_CMDQ_BUF_SIZE 2048U
+
+#define CMDQ_CMD_TIMEOUT 5000 /* millisecond */
+
+#define UPPER_8_BITS(data) (((data) >> 8) & 0xFF)
+#define LOWER_8_BITS(data) ((data) & 0xFF)
+
+#define CMDQ_DB_INFO_HI_PROD_IDX_SHIFT 0
+#define CMDQ_DB_INFO_HI_PROD_IDX_MASK 0xFFU
+#define CMDQ_DB_INFO_SET(val, member) \
+ ((((u32)(val)) & CMDQ_DB_INFO_##member##_MASK) << \
+ CMDQ_DB_INFO_##member##_SHIFT)
+
+#define CMDQ_DB_HEAD_QUEUE_TYPE_SHIFT 23
+#define CMDQ_DB_HEAD_CMDQ_TYPE_SHIFT 24
+#define CMDQ_DB_HEAD_SRC_TYPE_SHIFT 27
+#define CMDQ_DB_HEAD_QUEUE_TYPE_MASK 0x1U
+#define CMDQ_DB_HEAD_CMDQ_TYPE_MASK 0x7U
+#define CMDQ_DB_HEAD_SRC_TYPE_MASK 0x1FU
+#define CMDQ_DB_HEAD_SET(val, member) \
+ ((((u32)(val)) & CMDQ_DB_HEAD_##member##_MASK) << \
+ CMDQ_DB_HEAD_##member##_SHIFT)
+
+#define CMDQ_CTRL_PI_SHIFT 0
+#define CMDQ_CTRL_CMD_SHIFT 16
+#define CMDQ_CTRL_MOD_SHIFT 24
+#define CMDQ_CTRL_ACK_TYPE_SHIFT 29
+#define CMDQ_CTRL_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_CTRL_PI_MASK 0xFFFFU
+#define CMDQ_CTRL_CMD_MASK 0xFFU
+#define CMDQ_CTRL_MOD_MASK 0x1FU
+#define CMDQ_CTRL_ACK_TYPE_MASK 0x3U
+#define CMDQ_CTRL_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_CTRL_SET(val, member) \
+ ((((u32)(val)) & CMDQ_CTRL_##member##_MASK) << \
+ CMDQ_CTRL_##member##_SHIFT)
+
+#define CMDQ_CTRL_GET(val, member) \
+ (((val) >> CMDQ_CTRL_##member##_SHIFT) & \
+ CMDQ_CTRL_##member##_MASK)
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_SHIFT 0
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_SHIFT 15
+#define CMDQ_WQE_HEADER_DATA_FMT_SHIFT 22
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_SHIFT 23
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_SHIFT 27
+#define CMDQ_WQE_HEADER_CTRL_LEN_SHIFT 29
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_MASK 0xFFU
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_DATA_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_CTRL_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_WQE_HEADER_SET(val, member) \
+ ((((u32)(val)) & CMDQ_WQE_HEADER_##member##_MASK) << \
+ CMDQ_WQE_HEADER_##member##_SHIFT)
+
+#define CMDQ_WQE_HEADER_GET(val, member) \
+ (((val) >> CMDQ_WQE_HEADER_##member##_SHIFT) & \
+ CMDQ_WQE_HEADER_##member##_MASK)
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_SHIFT 0
+#define CMDQ_CTXT_EQ_ID_SHIFT 53
+#define CMDQ_CTXT_CEQ_ARM_SHIFT 61
+#define CMDQ_CTXT_CEQ_EN_SHIFT 62
+#define CMDQ_CTXT_HW_BUSY_BIT_SHIFT 63
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_EQ_ID_MASK 0xFF
+#define CMDQ_CTXT_CEQ_ARM_MASK 0x1
+#define CMDQ_CTXT_CEQ_EN_MASK 0x1
+#define CMDQ_CTXT_HW_BUSY_BIT_MASK 0x1
+
+#define CMDQ_CTXT_PAGE_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) << \
+ CMDQ_CTXT_##member##_SHIFT)
+
+#define CMDQ_CTXT_PAGE_INFO_GET(val, member) \
+ (((u64)(val) >> CMDQ_CTXT_##member##_SHIFT) & \
+ CMDQ_CTXT_##member##_MASK)
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_SHIFT 0
+#define CMDQ_CTXT_CI_SHIFT 52
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_CI_MASK 0xFFF
+
+#define CMDQ_CTXT_BLOCK_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) << \
+ CMDQ_CTXT_##member##_SHIFT)
+
+#define CMDQ_CTXT_BLOCK_INFO_GET(val, member) \
+ (((u64)(val) >> CMDQ_CTXT_##member##_SHIFT) & \
+ CMDQ_CTXT_##member##_MASK)
+
+#define SAVED_DATA_ARM_SHIFT 31
+
+#define SAVED_DATA_ARM_MASK 0x1U
+
+#define SAVED_DATA_SET(val, member) \
+ (((val) & SAVED_DATA_##member##_MASK) << \
+ SAVED_DATA_##member##_SHIFT)
+
+#define SAVED_DATA_CLEAR(val, member) \
+ ((val) & (~(SAVED_DATA_##member##_MASK << \
+ SAVED_DATA_##member##_SHIFT)))
+
+#define WQE_ERRCODE_VAL_SHIFT 0
+
+#define WQE_ERRCODE_VAL_MASK 0x7FFFFFFF
+
+#define WQE_ERRCODE_GET(val, member) \
+ (((val) >> WQE_ERRCODE_##member##_SHIFT) & \
+ WQE_ERRCODE_##member##_MASK)
+
+#define CEQE_CMDQ_TYPE_SHIFT 0
+
+#define CEQE_CMDQ_TYPE_MASK 0x7
+
+#define CEQE_CMDQ_GET(val, member) \
+ (((val) >> CEQE_CMDQ_##member##_SHIFT) & \
+ CEQE_CMDQ_##member##_MASK)
+
+#define WQE_COMPLETED(ctrl_info) CMDQ_CTRL_GET(ctrl_info, HW_BUSY_BIT)
+
+#define WQE_HEADER(wqe) ((struct hinic3_cmdq_header *)(wqe))
+
+#define CMDQ_DB_PI_OFF(pi) (((u16)LOWER_8_BITS(pi)) << 3)
+
+#define CMDQ_DB_ADDR(db_base, pi) \
+ (((u8 *)(db_base)) + CMDQ_DB_PI_OFF(pi))
+
+#define CMDQ_PFN_SHIFT 12
+#define CMDQ_PFN(addr) ((addr) >> CMDQ_PFN_SHIFT)
+
+#define FIRST_DATA_TO_WRITE_LAST sizeof(u64)
+
+#define WQE_LCMD_SIZE 64
+#define WQE_SCMD_SIZE 64
+
+#define COMPLETE_LEN 3
+
+#define CMDQ_WQEBB_SIZE 64
+#define CMDQ_WQE_SIZE 64
+
+#define cmdq_to_cmdqs(cmdq) container_of((cmdq) - (cmdq)->cmdq_type, \
+ struct hinic3_cmdqs, cmdq[0])
+
+#define CMDQ_SEND_CMPT_CODE 10
+#define CMDQ_COMPLETE_CMPT_CODE 11
+#define CMDQ_FORCE_STOP_CMPT_CODE 12
+
+enum cmdq_scmd_type {
+ CMDQ_SET_ARM_CMD = 2,
+};
+
+enum cmdq_wqe_type {
+ WQE_LCMD_TYPE,
+ WQE_SCMD_TYPE,
+};
+
+enum ctrl_sect_len {
+ CTRL_SECT_LEN = 1,
+ CTRL_DIRECT_SECT_LEN = 2,
+};
+
+enum bufdesc_len {
+ BUFDESC_LCMD_LEN = 2,
+ BUFDESC_SCMD_LEN = 3,
+};
+
+enum data_format {
+ DATA_SGE,
+ DATA_DIRECT,
+};
+
+enum completion_format {
+ COMPLETE_DIRECT,
+ COMPLETE_SGE,
+};
+
+enum completion_request {
+ CEQ_SET = 1,
+};
+
+enum cmdq_cmd_type {
+ SYNC_CMD_DIRECT_RESP,
+ SYNC_CMD_SGE_RESP,
+ ASYNC_CMD,
+};
+
+#define NUM_WQEBBS_FOR_CMDQ_WQE 1
+
+bool hinic3_cmdq_idle(struct hinic3_cmdq *cmdq)
+{
+ return hinic3_wq_is_empty(&cmdq->wq);
+}
+
+static void *cmdq_read_wqe(struct hinic3_wq *wq, u16 *ci)
+{
+ if (hinic3_wq_is_empty(wq))
+ return NULL;
+
+ return hinic3_wq_read_one_wqebb(wq, ci);
+}
+
+static void *cmdq_get_wqe(struct hinic3_wq *wq, u16 *pi)
+{
+ if (!hinic3_wq_free_wqebbs(wq))
+ return NULL;
+
+ return hinic3_wq_get_one_wqebb(wq, pi);
+}
+
+struct hinic3_cmd_buf *hinic3_alloc_cmd_buf(void *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ void *dev = NULL;
+
+ if (!hwdev) {
+ pr_err("Failed to alloc cmd buf, Invalid hwdev\n");
+ return NULL;
+ }
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+ dev = ((struct hinic3_hwdev *)hwdev)->dev_hdl;
+
+ cmd_buf = kzalloc(sizeof(*cmd_buf), GFP_ATOMIC);
+ if (!cmd_buf) {
+ sdk_err(dev, "Failed to allocate cmd buf\n");
+ return NULL;
+ }
+
+ cmd_buf->buf = pci_pool_alloc(cmdqs->cmd_buf_pool, GFP_ATOMIC,
+ &cmd_buf->dma_addr);
+ if (!cmd_buf->buf) {
+ sdk_err(dev, "Failed to allocate cmdq cmd buf from the pool\n");
+ goto alloc_pci_buf_err;
+ }
+
+ cmd_buf->size = HINIC3_CMDQ_BUF_SIZE;
+ atomic_set(&cmd_buf->ref_cnt, 1);
+
+ return cmd_buf;
+
+alloc_pci_buf_err:
+ kfree(cmd_buf);
+ return NULL;
+}
+EXPORT_SYMBOL(hinic3_alloc_cmd_buf);
+
+void hinic3_free_cmd_buf(void *hwdev, struct hinic3_cmd_buf *cmd_buf)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+
+ if (!hwdev || !cmd_buf) {
+ pr_err("Failed to free cmd buf, hwdev or cmd_buf is NULL\n");
+ return;
+ }
+
+ if (!atomic_dec_and_test(&cmd_buf->ref_cnt))
+ return;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ pci_pool_free(cmdqs->cmd_buf_pool, cmd_buf->buf, cmd_buf->dma_addr);
+ kfree(cmd_buf);
+}
+EXPORT_SYMBOL(hinic3_free_cmd_buf);
+
+static void cmdq_set_completion(struct hinic3_cmdq_completion *complete,
+ struct hinic3_cmd_buf *buf_out)
+{
+ struct hinic3_sge_resp *sge_resp = &complete->sge_resp;
+
+ hinic3_set_sge(&sge_resp->sge, buf_out->dma_addr,
+ HINIC3_CMDQ_BUF_SIZE);
+}
+
+static void cmdq_set_lcmd_bufdesc(struct hinic3_cmdq_wqe_lcmd *wqe,
+ struct hinic3_cmd_buf *buf_in)
+{
+ hinic3_set_sge(&wqe->buf_desc.sge, buf_in->dma_addr, buf_in->size);
+}
+
+static void cmdq_fill_db(struct hinic3_cmdq_db *db,
+ enum hinic3_cmdq_type cmdq_type, u16 prod_idx)
+{
+ db->db_info = CMDQ_DB_INFO_SET(UPPER_8_BITS(prod_idx), HI_PROD_IDX);
+
+ db->db_head = CMDQ_DB_HEAD_SET(HINIC3_DB_CMDQ_TYPE, QUEUE_TYPE) |
+ CMDQ_DB_HEAD_SET(cmdq_type, CMDQ_TYPE) |
+ CMDQ_DB_HEAD_SET(HINIC3_DB_SRC_CMDQ_TYPE, SRC_TYPE);
+}
+
+static void cmdq_set_db(struct hinic3_cmdq *cmdq,
+ enum hinic3_cmdq_type cmdq_type, u16 prod_idx)
+{
+ struct hinic3_cmdq_db db = {0};
+ u8 *db_base = cmdq->hwdev->cmdqs->cmdqs_db_base;
+
+ cmdq_fill_db(&db, cmdq_type, prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ db.db_info = hinic3_hw_be32(db.db_info);
+ db.db_head = hinic3_hw_be32(db.db_head);
+
+ wmb(); /* write all before the doorbell */
+ writeq(*((u64 *)&db), CMDQ_DB_ADDR(db_base, prod_idx));
+}
+
+static void cmdq_wqe_fill(void *dst, const void *src)
+{
+ memcpy((u8 *)dst + FIRST_DATA_TO_WRITE_LAST,
+ (u8 *)src + FIRST_DATA_TO_WRITE_LAST,
+ CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST);
+
+ wmb(); /* The first 8 bytes should be written last */
+
+ *(u64 *)dst = *(u64 *)src;
+}
+
+static void cmdq_prepare_wqe_ctrl(struct hinic3_cmdq_wqe *wqe, int wrapped,
+ u8 mod, u8 cmd, u16 prod_idx,
+ enum completion_format complete_format,
+ enum data_format data_format,
+ enum bufdesc_len buf_len)
+{
+ struct hinic3_ctrl *ctrl = NULL;
+ enum ctrl_sect_len ctrl_len;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ struct hinic3_cmdq_wqe_scmd *wqe_scmd = NULL;
+ u32 saved_data = WQE_HEADER(wqe)->saved_data;
+
+ if (data_format == DATA_SGE) {
+ wqe_lcmd = &wqe->wqe_lcmd;
+
+ wqe_lcmd->status.status_info = 0;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_len = CTRL_SECT_LEN;
+ } else {
+ wqe_scmd = &wqe->inline_wqe.wqe_scmd;
+
+ wqe_scmd->status.status_info = 0;
+ ctrl = &wqe_scmd->ctrl;
+ ctrl_len = CTRL_DIRECT_SECT_LEN;
+ }
+
+ ctrl->ctrl_info = CMDQ_CTRL_SET(prod_idx, PI) |
+ CMDQ_CTRL_SET(cmd, CMD) |
+ CMDQ_CTRL_SET(mod, MOD) |
+ CMDQ_CTRL_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE);
+
+ WQE_HEADER(wqe)->header_info =
+ CMDQ_WQE_HEADER_SET(buf_len, BUFDESC_LEN) |
+ CMDQ_WQE_HEADER_SET(complete_format, COMPLETE_FMT) |
+ CMDQ_WQE_HEADER_SET(data_format, DATA_FMT) |
+ CMDQ_WQE_HEADER_SET(CEQ_SET, COMPLETE_REQ) |
+ CMDQ_WQE_HEADER_SET(COMPLETE_LEN, COMPLETE_SECT_LEN) |
+ CMDQ_WQE_HEADER_SET(ctrl_len, CTRL_LEN) |
+ CMDQ_WQE_HEADER_SET((u32)wrapped, HW_BUSY_BIT);
+
+ if (cmd == CMDQ_SET_ARM_CMD && mod == HINIC3_MOD_COMM) {
+ saved_data &= SAVED_DATA_CLEAR(saved_data, ARM);
+ WQE_HEADER(wqe)->saved_data = saved_data |
+ SAVED_DATA_SET(1, ARM);
+ } else {
+ saved_data &= SAVED_DATA_CLEAR(saved_data, ARM);
+ WQE_HEADER(wqe)->saved_data = saved_data;
+ }
+}
+
+static void cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe,
+ enum cmdq_cmd_type cmd_type,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out, int wrapped,
+ u8 mod, u8 cmd, u16 prod_idx)
+{
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = &wqe->wqe_lcmd;
+ enum completion_format complete_format = COMPLETE_DIRECT;
+
+ switch (cmd_type) {
+ case SYNC_CMD_DIRECT_RESP:
+ wqe_lcmd->completion.direct_resp = 0;
+ break;
+ case SYNC_CMD_SGE_RESP:
+ if (buf_out) {
+ complete_format = COMPLETE_SGE;
+ cmdq_set_completion(&wqe_lcmd->completion,
+ buf_out);
+ }
+ break;
+ case ASYNC_CMD:
+ wqe_lcmd->completion.direct_resp = 0;
+ wqe_lcmd->buf_desc.saved_async_buf = (u64)(buf_in);
+ break;
+ }
+
+ cmdq_prepare_wqe_ctrl(wqe, wrapped, mod, cmd, prod_idx, complete_format,
+ DATA_SGE, BUFDESC_LCMD_LEN);
+
+ cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in);
+}
+
+static void cmdq_update_cmd_status(struct hinic3_cmdq *cmdq, u16 prod_idx,
+ struct hinic3_cmdq_wqe *wqe)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd;
+ u32 status_info;
+
+ wqe_lcmd = &wqe->wqe_lcmd;
+ cmd_info = &cmdq->cmd_infos[prod_idx];
+
+ if (cmd_info->errcode) {
+ status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info);
+ *cmd_info->errcode = WQE_ERRCODE_GET(status_info, VAL);
+ }
+
+ if (cmd_info->direct_resp)
+ *cmd_info->direct_resp =
+ hinic3_hw_cpu32(wqe_lcmd->completion.direct_resp);
+}
+
+static int hinic3_cmdq_sync_timeout_check(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 pi)
+{
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd;
+ struct hinic3_ctrl *ctrl;
+ u32 ctrl_info;
+
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info);
+ if (!WQE_COMPLETED(ctrl_info)) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Cmdq sync command check busy bit not set\n");
+ return -EFAULT;
+ }
+
+ cmdq_update_cmd_status(cmdq, pi, wqe);
+
+ sdk_info(cmdq->hwdev->dev_hdl, "Cmdq sync command check succeed\n");
+ return 0;
+}
+
+static void clear_cmd_info(struct hinic3_cmdq_cmd_info *cmd_info,
+ const struct hinic3_cmdq_cmd_info *saved_cmd_info)
+{
+ if (cmd_info->errcode == saved_cmd_info->errcode)
+ cmd_info->errcode = NULL;
+
+ if (cmd_info->done == saved_cmd_info->done)
+ cmd_info->done = NULL;
+
+ if (cmd_info->direct_resp == saved_cmd_info->direct_resp)
+ cmd_info->direct_resp = NULL;
+}
+
+static int cmdq_ceq_handler_status(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_cmdq_cmd_info *saved_cmd_info,
+ u64 curr_msg_id, u16 curr_prod_idx,
+ struct hinic3_cmdq_wqe *curr_wqe,
+ u32 timeout)
+{
+ ulong timeo;
+ int err;
+ ulong end = jiffies + msecs_to_jiffies(timeout);
+
+ if (cmdq->hwdev->poll) {
+ while (time_before(jiffies, end)) {
+ hinic3_cmdq_ceq_handler(cmdq->hwdev, 0);
+ if (saved_cmd_info->done->done != 0)
+ return 0;
+ usleep_range(9, 10); /* sleep 9 us ~ 10 us */
+ }
+ } else {
+ timeo = msecs_to_jiffies(timeout);
+ if (wait_for_completion_timeout(saved_cmd_info->done, timeo))
+ return 0;
+ }
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ if (cmd_info->cmpt_code == saved_cmd_info->cmpt_code)
+ cmd_info->cmpt_code = NULL;
+
+ if (*saved_cmd_info->cmpt_code == CMDQ_COMPLETE_CMPT_CODE) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Cmdq direct sync command has been completed\n");
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ return 0;
+ }
+
+ if (curr_msg_id == cmd_info->cmdq_msg_id) {
+ err = hinic3_cmdq_sync_timeout_check(cmdq, curr_wqe,
+ curr_prod_idx);
+ if (err)
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_TIMEOUT;
+ else
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_FAKE_TIMEOUT;
+ } else {
+ err = -ETIMEDOUT;
+ sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command current msg id dismatch with cmd_info msg id\n");
+ }
+
+ clear_cmd_info(cmd_info, saved_cmd_info);
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+
+ if (err == 0)
+ return 0;
+
+ hinic3_dump_ceq_info(cmdq->hwdev);
+
+ return -ETIMEDOUT;
+}
+
+static int wait_cmdq_sync_cmd_completion(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_cmdq_cmd_info *saved_cmd_info,
+ u64 curr_msg_id, u16 curr_prod_idx,
+ struct hinic3_cmdq_wqe *curr_wqe, u32 timeout)
+{
+ return cmdq_ceq_handler_status(cmdq, cmd_info, saved_cmd_info,
+ curr_msg_id, curr_prod_idx,
+ curr_wqe, timeout);
+}
+
+static int cmdq_msg_lock(struct hinic3_cmdq *cmdq, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = cmdq_to_cmdqs(cmdq);
+
+ /* Keep wrapped and doorbell index correct. bh - for tasklet(ceq) */
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ if (cmdqs->lock_channel_en && test_bit(channel, &cmdqs->channel_stop)) {
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static void cmdq_msg_unlock(struct hinic3_cmdq *cmdq)
+{
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+static void cmdq_clear_cmd_buf(struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_hwdev *hwdev)
+{
+ if (cmd_info->buf_in)
+ hinic3_free_cmd_buf(hwdev, cmd_info->buf_in);
+
+ if (cmd_info->buf_out)
+ hinic3_free_cmd_buf(hwdev, cmd_info->buf_out);
+
+ cmd_info->buf_in = NULL;
+ cmd_info->buf_out = NULL;
+}
+
+static void cmdq_set_cmd_buf(struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_hwdev *hwdev,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out)
+{
+ cmd_info->buf_in = buf_in;
+ cmd_info->buf_out = buf_out;
+
+ if (buf_in)
+ atomic_inc(&buf_in->ref_cnt);
+
+ if (buf_out)
+ atomic_inc(&buf_out->ref_cnt);
+}
+
+static int cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, u8 mod,
+ u8 cmd, struct hinic3_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_wq *wq = &cmdq->wq;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL, wqe;
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL, saved_cmd_info;
+ struct completion done;
+ u16 curr_prod_idx, next_prod_idx;
+ int wrapped, errcode = 0, wqe_size = WQE_LCMD_SIZE;
+ int cmpt_code = CMDQ_SEND_CMPT_CODE;
+ u64 curr_msg_id;
+ int err;
+ u32 real_timeout;
+
+ err = cmdq_msg_lock(cmdq, channel);
+ if (err)
+ return err;
+
+ curr_wqe = cmdq_get_wqe(wq, &curr_prod_idx);
+ if (!curr_wqe) {
+ cmdq_msg_unlock(cmdq);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + NUM_WQEBBS_FOR_CMDQ_WQE;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = (cmdq->wrapped == 0) ? 1 : 0;
+ next_prod_idx -= (u16)wq->q_depth;
+ }
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+
+ init_completion(&done);
+
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP;
+ cmd_info->done = &done;
+ cmd_info->errcode = &errcode;
+ cmd_info->direct_resp = out_param;
+ cmd_info->cmpt_code = &cmpt_code;
+ cmd_info->channel = channel;
+ cmdq_set_cmd_buf(cmd_info, cmdq->hwdev, buf_in, NULL);
+
+ memcpy(&saved_cmd_info, cmd_info, sizeof(struct hinic3_cmdq_cmd_info));
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL,
+ wrapped, mod, cmd, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ hinic3_hw_be32_len(&wqe, wqe_size);
+
+ /* CMDQ WQE is not shadowed, so the local wqe is copied into the wq */
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ (cmd_info->cmdq_msg_id)++;
+ curr_msg_id = cmd_info->cmdq_msg_id;
+
+ cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx);
+
+ cmdq_msg_unlock(cmdq);
+
+ real_timeout = timeout ? timeout : CMDQ_CMD_TIMEOUT;
+ err = wait_cmdq_sync_cmd_completion(cmdq, cmd_info, &saved_cmd_info,
+ curr_msg_id, curr_prod_idx,
+ curr_wqe, real_timeout);
+ if (err) {
+ sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command(mod: %u, cmd: %u) timeout, prod idx: 0x%x\n",
+ mod, cmd, curr_prod_idx);
+ err = -ETIMEDOUT;
+ }
+
+ if (cmpt_code == CMDQ_FORCE_STOP_CMPT_CODE) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Force stop cmdq cmd, mod: %u, cmd: %u\n",
+ mod, cmd);
+ err = -EAGAIN;
+ }
+
+ destroy_completion(&done);
+ smp_rmb(); /* read error code after completion */
+
+ return (err != 0) ? err : errcode;
+}
+
+static int cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_wq *wq = &cmdq->wq;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL, wqe;
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL, saved_cmd_info;
+ struct completion done;
+ u16 curr_prod_idx, next_prod_idx;
+ int wrapped, errcode = 0, wqe_size = WQE_LCMD_SIZE;
+ int cmpt_code = CMDQ_SEND_CMPT_CODE;
+ u64 curr_msg_id;
+ int err;
+ u32 real_timeout;
+
+ err = cmdq_msg_lock(cmdq, channel);
+ if (err)
+ return err;
+
+ curr_wqe = cmdq_get_wqe(wq, &curr_prod_idx);
+ if (!curr_wqe) {
+ cmdq_msg_unlock(cmdq);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + NUM_WQEBBS_FOR_CMDQ_WQE;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = (cmdq->wrapped == 0) ? 1 : 0;
+ next_prod_idx -= (u16)wq->q_depth;
+ }
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+
+ init_completion(&done);
+
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_SGE_RESP;
+ cmd_info->done = &done;
+ cmd_info->errcode = &errcode;
+ cmd_info->direct_resp = out_param;
+ cmd_info->cmpt_code = &cmpt_code;
+ cmd_info->channel = channel;
+ cmdq_set_cmd_buf(cmd_info, cmdq->hwdev, buf_in, buf_out);
+
+ memcpy(&saved_cmd_info, cmd_info, sizeof(struct hinic3_cmdq_cmd_info));
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out,
+ wrapped, mod, cmd, curr_prod_idx);
+
+ hinic3_hw_be32_len(&wqe, wqe_size);
+
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ (cmd_info->cmdq_msg_id)++;
+ curr_msg_id = cmd_info->cmdq_msg_id;
+
+ cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx);
+
+ cmdq_msg_unlock(cmdq);
+
+ real_timeout = timeout ? timeout : CMDQ_CMD_TIMEOUT;
+ err = wait_cmdq_sync_cmd_completion(cmdq, cmd_info, &saved_cmd_info,
+ curr_msg_id, curr_prod_idx,
+ curr_wqe, real_timeout);
+ if (err) {
+ sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command(mod: %u, cmd: %u) timeout, prod idx: 0x%x\n",
+ mod, cmd, curr_prod_idx);
+ err = -ETIMEDOUT;
+ }
+
+ if (cmpt_code == CMDQ_FORCE_STOP_CMPT_CODE) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Force stop cmdq cmd, mod: %u, cmd: %u\n",
+ mod, cmd);
+ err = -EAGAIN;
+ }
+
+ destroy_completion(&done);
+ smp_rmb(); /* read error code after completion */
+
+ return (err != 0) ? err : errcode;
+}
+
+static int cmdq_async_cmd(struct hinic3_cmdq *cmdq, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in, u16 channel)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ struct hinic3_wq *wq = &cmdq->wq;
+ int wqe_size = WQE_LCMD_SIZE;
+ u16 curr_prod_idx, next_prod_idx;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL, wqe;
+ int wrapped, err;
+
+ err = cmdq_msg_lock(cmdq, channel);
+ if (err)
+ return err;
+
+ curr_wqe = cmdq_get_wqe(wq, &curr_prod_idx);
+ if (!curr_wqe) {
+ cmdq_msg_unlock(cmdq);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+ next_prod_idx = curr_prod_idx + NUM_WQEBBS_FOR_CMDQ_WQE;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = (cmdq->wrapped == 0) ? 1 : 0;
+ next_prod_idx -= (u16)wq->q_depth;
+ }
+
+ cmdq_set_lcmd_wqe(&wqe, ASYNC_CMD, buf_in, NULL, wrapped,
+ mod, cmd, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ hinic3_hw_be32_len(&wqe, wqe_size);
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_ASYNC;
+ cmd_info->channel = channel;
+ /* The caller will not free the cmd_buf of the asynchronous command,
+ * so there is no need to increase the reference count here
+ */
+ cmd_info->buf_in = buf_in;
+
+ cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx);
+
+ cmdq_msg_unlock(cmdq);
+
+ return 0;
+}
+
+static int cmdq_params_valid(const void *hwdev, const struct hinic3_cmd_buf *buf_in)
+{
+ if (!buf_in || !hwdev) {
+ pr_err("Invalid CMDQ buffer addr or hwdev\n");
+ return -EINVAL;
+ }
+
+ if (!buf_in->size || buf_in->size > HINIC3_CMDQ_BUF_SIZE) {
+ pr_err("Invalid CMDQ buffer size: 0x%x\n", buf_in->size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define WAIT_CMDQ_ENABLE_TIMEOUT 300
+static int wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs)
+{
+ unsigned long end;
+
+ end = jiffies + msecs_to_jiffies(WAIT_CMDQ_ENABLE_TIMEOUT);
+ do {
+ if (cmdqs->status & HINIC3_CMDQ_ENABLE)
+ return 0;
+ } while (time_before(jiffies, end) && cmdqs->hwdev->chip_present_flag &&
+ !cmdqs->disable_flag);
+
+ cmdqs->disable_flag = 1;
+
+ return -EBUSY;
+}
+
+int hinic3_cmdq_direct_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err) {
+ pr_err("Invalid CMDQ parameters\n");
+ return err;
+ }
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disable\n");
+ return err;
+ }
+
+ err = cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC],
+ mod, cmd, buf_in, out_param,
+ timeout, channel);
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -ETIMEDOUT;
+ else
+ return err;
+}
+EXPORT_SYMBOL(hinic3_cmdq_direct_resp);
+
+int hinic3_cmdq_detail_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disable\n");
+ return err;
+ }
+
+ err = cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC],
+ mod, cmd, buf_in, buf_out, out_param,
+ timeout, channel);
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -ETIMEDOUT;
+ else
+ return err;
+}
+EXPORT_SYMBOL(hinic3_cmdq_detail_resp);
+
+int hinic3_cos_id_detail_resp(void *hwdev, u8 mod, u8 cmd, u8 cos_id,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out, u64 *out_param,
+ u32 timeout, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disable\n");
+ return err;
+ }
+
+ if (cos_id >= cmdqs->cmdq_num) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq id is invalid\n");
+ return -EINVAL;
+ }
+
+ err = cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[cos_id], mod, cmd,
+ buf_in, buf_out, out_param,
+ timeout, channel);
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -ETIMEDOUT;
+ else
+ return err;
+}
+EXPORT_SYMBOL(hinic3_cos_id_detail_resp);
+
+int hinic3_cmdq_async(void *hwdev, u8 mod, u8 cmd, struct hinic3_cmd_buf *buf_in, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disable\n");
+ return err;
+ }
+ /* LB mode 1 compatible, cmdq 0 also for async, which is sync_no_wait */
+ return cmdq_async_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod,
+ cmd, buf_in, channel);
+}
+EXPORT_SYMBOL(hinic3_cmdq_async);
+
+int hinic3_cmdq_async_cos(void *hwdev, u8 mod, u8 cmd,
+ u8 cos_id, struct hinic3_cmd_buf *buf_in,
+ u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disable\n");
+ return err;
+ }
+
+ if (cos_id >= cmdqs->cmdq_num) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq id is invalid\n");
+ return -EINVAL;
+ }
+
+ return cmdq_async_cmd(&cmdqs->cmdq[cos_id], mod, cmd, buf_in, channel);
+}
+
+static void clear_wqe_complete_bit(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ struct hinic3_ctrl *ctrl = NULL;
+ u32 header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info);
+ enum data_format df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT);
+
+ if (df == DATA_SGE)
+ ctrl = &wqe->wqe_lcmd.ctrl;
+ else
+ ctrl = &wqe->inline_wqe.wqe_scmd.ctrl;
+
+ /* clear HW busy bit */
+ ctrl->ctrl_info = 0;
+ cmdq->cmd_infos[ci].cmd_type = HINIC3_CMD_TYPE_NONE;
+
+ wmb(); /* make sure the wqe clear is visible before the wqebbs are freed */
+
+ hinic3_wq_put_wqebbs(&cmdq->wq, NUM_WQEBBS_FOR_CMDQ_WQE);
+}
+
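+/* Completion path for synchronous commands: record the completion code, wake
+ * up the waiting sender, then release the command buffer and the wqe.
+ */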
+static void cmdq_sync_cmd_handler(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ spin_lock(&cmdq->cmdq_lock);
+
+ cmdq_update_cmd_status(cmdq, ci, wqe);
+
+ if (cmdq->cmd_infos[ci].cmpt_code) {
+ *cmdq->cmd_infos[ci].cmpt_code = CMDQ_COMPLETE_CMPT_CODE;
+ cmdq->cmd_infos[ci].cmpt_code = NULL;
+ }
+
+ /* make sure cmpt_code operation before done operation */
+ smp_rmb();
+
+ if (cmdq->cmd_infos[ci].done) {
+ complete(cmdq->cmd_infos[ci].done);
+ cmdq->cmd_infos[ci].done = NULL;
+ }
+
+ spin_unlock(&cmdq->cmdq_lock);
+
+ cmdq_clear_cmd_buf(&cmdq->cmd_infos[ci], cmdq->hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+}
+
+static void cmdq_async_cmd_handler(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ cmdq_clear_cmd_buf(&cmdq->cmd_infos[ci], hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+}
+
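+/* The arm wqe is only reclaimed once hardware has marked it completed. */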
+static int cmdq_arm_ceq_handler(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ struct hinic3_ctrl *ctrl = &wqe->inline_wqe.wqe_scmd.ctrl;
+ u32 ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info);
+
+ if (!WQE_COMPLETED(ctrl_info))
+ return -EBUSY;
+
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+
+ return 0;
+}
+
+#define HINIC3_CMDQ_WQE_HEAD_LEN 32
+static void hinic3_dump_cmdq_wqe_head(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq_wqe *wqe)
+{
+ u32 i;
+ u32 *data = (u32 *)wqe;
+
+ for (i = 0; i < (HINIC3_CMDQ_WQE_HEAD_LEN / sizeof(u32)); i += 0x4) {
+ sdk_info(hwdev->dev_hdl, "wqe data: 0x%08x, 0x%08x, 0x%08x, 0x%08x\n",
+ *(data + i), *(data + i + 0x1), *(data + i + 0x2),
+ *(data + i + 0x3));
+ }
+}
+
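+/* CEQ handler: walk the cmdq from the consumer index and dispatch each
+ * completed wqe according to its recorded cmd_type, stopping at the first
+ * wqe that hardware has not finished with yet.
+ */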
+void hinic3_cmdq_ceq_handler(void *handle, u32 ceqe_data)
+{
+ struct hinic3_cmdqs *cmdqs = ((struct hinic3_hwdev *)handle)->cmdqs;
+ enum hinic3_cmdq_type cmdq_type = CEQE_CMDQ_GET(ceqe_data, TYPE);
+ struct hinic3_cmdq *cmdq = &cmdqs->cmdq[cmdq_type];
+ struct hinic3_hwdev *hwdev = cmdqs->hwdev;
+ struct hinic3_cmdq_wqe *wqe = NULL;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ struct hinic3_ctrl *ctrl = NULL;
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ u16 ci;
+
+ while ((wqe = cmdq_read_wqe(&cmdq->wq, &ci)) != NULL) {
+ cmd_info = &cmdq->cmd_infos[ci];
+
+ switch (cmd_info->cmd_type) {
+ case HINIC3_CMD_TYPE_NONE:
+ return;
+ case HINIC3_CMD_TYPE_TIMEOUT:
+ sdk_warn(hwdev->dev_hdl, "Cmdq timeout, q_id: %u, ci: %u\n",
+ cmdq_type, ci);
+ hinic3_dump_cmdq_wqe_head(hwdev, wqe);
+ fallthrough;
+ case HINIC3_CMD_TYPE_FAKE_TIMEOUT:
+ cmdq_clear_cmd_buf(cmd_info, hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+ break;
+ case HINIC3_CMD_TYPE_SET_ARM:
+ /* arm_bit was set until here */
+ if (cmdq_arm_ceq_handler(cmdq, wqe, ci) != 0)
+ return;
+ break;
+ default:
+ /* only the arm bit command uses an scmd wqe; all other commands use lcmd wqes */
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ if (!WQE_COMPLETED(hinic3_hw_cpu32((ctrl)->ctrl_info)))
+ return;
+
+ dma_rmb();
+ /* For FORCE_STOP cmd_type, we also need to wait for
+ * the firmware processing to complete to prevent the
+ * firmware from accessing the released cmd_buf
+ */
+ if (cmd_info->cmd_type == HINIC3_CMD_TYPE_FORCE_STOP) {
+ cmdq_clear_cmd_buf(cmd_info, hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+ } else if (cmd_info->cmd_type == HINIC3_CMD_TYPE_ASYNC) {
+ cmdq_async_cmd_handler(hwdev, cmdq, wqe, ci);
+ } else {
+ cmdq_sync_cmd_handler(cmdq, wqe, ci);
+ }
+
+ break;
+ }
+ }
+}
+
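+/* Build the cmdq context (current wqe page PFN, CEQ attributes and wq block
+ * PFN) that is later programmed into hardware via hinic3_set_cmdq_ctxt().
+ */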
+static void cmdq_init_queue_ctxt(struct hinic3_cmdqs *cmdqs,
+ struct hinic3_cmdq *cmdq,
+ struct cmdq_ctxt_info *ctxt_info)
+{
+ struct hinic3_wq *wq = &cmdq->wq;
+ u64 cmdq_first_block_paddr, pfn;
+ u16 start_ci = (u16)wq->cons_idx;
+
+ pfn = CMDQ_PFN(hinic3_wq_get_first_wqe_page_addr(wq));
+
+ ctxt_info->curr_wqe_page_pfn =
+ CMDQ_CTXT_PAGE_INFO_SET(1, HW_BUSY_BIT) |
+ CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_EN) |
+ CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_ARM) |
+ CMDQ_CTXT_PAGE_INFO_SET(HINIC3_CEQ_ID_CMDQ, EQ_ID) |
+ CMDQ_CTXT_PAGE_INFO_SET(pfn, CURR_WQE_PAGE_PFN);
+
+ if (!WQ_IS_0_LEVEL_CLA(wq)) {
+ cmdq_first_block_paddr = cmdqs->wq_block_paddr;
+ pfn = CMDQ_PFN(cmdq_first_block_paddr);
+ }
+
+ ctxt_info->wq_block_pfn = CMDQ_CTXT_BLOCK_INFO_SET(start_ci, CI) |
+ CMDQ_CTXT_BLOCK_INFO_SET(pfn, WQ_BLOCK_PFN);
+}
+
+static int init_cmdq(struct hinic3_cmdq *cmdq, struct hinic3_hwdev *hwdev,
+ enum hinic3_cmdq_type q_type)
+{
+ int err;
+
+ cmdq->cmdq_type = q_type;
+ cmdq->wrapped = 1;
+ cmdq->hwdev = hwdev;
+
+ spin_lock_init(&cmdq->cmdq_lock);
+
+ cmdq->cmd_infos = kcalloc(cmdq->wq.q_depth, sizeof(*cmdq->cmd_infos),
+ GFP_KERNEL);
+ if (!cmdq->cmd_infos) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate cmdq infos\n");
+ err = -ENOMEM;
+ goto cmd_infos_err;
+ }
+
+ return 0;
+
+cmd_infos_err:
+ spin_lock_deinit(&cmdq->cmdq_lock);
+
+ return err;
+}
+
+static void free_cmdq(struct hinic3_cmdq *cmdq)
+{
+ kfree(cmdq->cmd_infos);
+ cmdq->cmd_infos = NULL;
+ spin_lock_deinit(&cmdq->cmdq_lock);
+}
+
+static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ u8 cmdq_type;
+ int err;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ err = hinic3_set_cmdq_ctxt(hwdev, cmdq_type,
+ &cmdqs->cmdq[cmdq_type].cmdq_ctxt);
+ if (err)
+ return err;
+ }
+
+ cmdqs->status |= HINIC3_CMDQ_ENABLE;
+ cmdqs->disable_flag = 0;
+
+ return 0;
+}
+
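+/* Force-complete a pending synchronous command so its sender stops waiting;
+ * used when the cmdq or a channel is being flushed or disabled.
+ */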
+static void cmdq_flush_sync_cmd(struct hinic3_cmdq_cmd_info *cmd_info)
+{
+ if (cmd_info->cmd_type != HINIC3_CMD_TYPE_DIRECT_RESP &&
+ cmd_info->cmd_type != HINIC3_CMD_TYPE_SGE_RESP)
+ return;
+
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_FORCE_STOP;
+
+ if (cmd_info->cmpt_code &&
+ *cmd_info->cmpt_code == CMDQ_SEND_CMPT_CODE)
+ *cmd_info->cmpt_code = CMDQ_FORCE_STOP_CMPT_CODE;
+
+ if (cmd_info->done) {
+ complete(cmd_info->done);
+ cmd_info->done = NULL;
+ cmd_info->cmpt_code = NULL;
+ cmd_info->direct_resp = NULL;
+ cmd_info->errcode = NULL;
+ }
+}
+
+void hinic3_cmdq_flush_cmd(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq *cmdq)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ u16 ci = 0;
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ while (cmdq_read_wqe(&cmdq->wq, &ci)) {
+ hinic3_wq_put_wqebbs(&cmdq->wq, NUM_WQEBBS_FOR_CMDQ_WQE);
+ cmd_info = &cmdq->cmd_infos[ci];
+
+ if (cmd_info->cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP ||
+ cmd_info->cmd_type == HINIC3_CMD_TYPE_SGE_RESP)
+ cmdq_flush_sync_cmd(cmd_info);
+ }
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+static void hinic3_cmdq_flush_channel_sync_cmd(struct hinic3_hwdev *hwdev, u16 channel)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ struct hinic3_cmdq *cmdq = NULL;
+ struct hinic3_wq *wq = NULL;
+ u16 wqe_cnt, ci, i;
+
+ if (channel >= HINIC3_CHANNEL_MAX)
+ return;
+
+ cmdq = &hwdev->cmdqs->cmdq[HINIC3_CMDQ_SYNC];
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ wq = &cmdq->wq;
+ ci = wq->cons_idx;
+ wqe_cnt = (u16)WQ_MASK_IDX(wq, wq->prod_idx +
+ wq->q_depth - wq->cons_idx);
+ for (i = 0; i < wqe_cnt; i++) {
+ cmd_info = &cmdq->cmd_infos[WQ_MASK_IDX(wq, ci + i)];
+ if (cmd_info->channel == channel)
+ cmdq_flush_sync_cmd(cmd_info);
+ }
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+void hinic3_cmdq_flush_sync_cmd(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ struct hinic3_cmdq *cmdq = NULL;
+ struct hinic3_wq *wq = NULL;
+ u16 wqe_cnt, ci, i;
+
+ cmdq = &hwdev->cmdqs->cmdq[HINIC3_CMDQ_SYNC];
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ wq = &cmdq->wq;
+ ci = wq->cons_idx;
+ wqe_cnt = (u16)WQ_MASK_IDX(wq, wq->prod_idx +
+ wq->q_depth - wq->cons_idx);
+ for (i = 0; i < wqe_cnt; i++) {
+ cmd_info = &cmdq->cmd_infos[WQ_MASK_IDX(wq, ci + i)];
+ cmdq_flush_sync_cmd(cmd_info);
+ }
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+static void cmdq_reset_all_cmd_buff(struct hinic3_cmdq *cmdq)
+{
+ u16 i;
+
+ for (i = 0; i < cmdq->wq.q_depth; i++)
+ cmdq_clear_cmd_buf(&cmdq->cmd_infos[i], cmdq->hwdev);
+}
+
+int hinic3_cmdq_set_channel_status(struct hinic3_hwdev *hwdev, u16 channel,
+ bool enable)
+{
+ if (channel >= HINIC3_CHANNEL_MAX)
+ return -EINVAL;
+
+ if (enable) {
+ clear_bit(channel, &hwdev->cmdqs->channel_stop);
+ } else {
+ set_bit(channel, &hwdev->cmdqs->channel_stop);
+ hinic3_cmdq_flush_channel_sync_cmd(hwdev, channel);
+ }
+
+ sdk_info(hwdev->dev_hdl, "%s cmdq channel 0x%x\n",
+ enable ? "Enable" : "Disable", channel);
+
+ return 0;
+}
+
+void hinic3_cmdq_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable)
+{
+ hwdev->cmdqs->lock_channel_en = enable;
+
+ sdk_info(hwdev->dev_hdl, "%s cmdq channel lock\n",
+ enable ? "Enable" : "Disable");
+}
+
+int hinic3_reinit_cmdq_ctxts(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ u8 cmdq_type;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ hinic3_cmdq_flush_cmd(hwdev, &cmdqs->cmdq[cmdq_type]);
+ cmdq_reset_all_cmd_buff(&cmdqs->cmdq[cmdq_type]);
+ cmdqs->cmdq[cmdq_type].wrapped = 1;
+ hinic3_wq_reset(&cmdqs->cmdq[cmdq_type].wq);
+ }
+
+ return hinic3_set_cmdq_ctxts(hwdev);
+}
+
+static int create_cmdq_wq(struct hinic3_cmdqs *cmdqs)
+{
+ u8 type, cmdq_type;
+ int err;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ err = hinic3_wq_create(cmdqs->hwdev, &cmdqs->cmdq[cmdq_type].wq,
+ HINIC3_CMDQ_DEPTH, CMDQ_WQEBB_SIZE);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Failed to create cmdq wq\n");
+ goto destroy_wq;
+ }
+ }
+
+ /* 1-level CLA must put all cmdq's wq page addr in one wq block */
+ if (!WQ_IS_0_LEVEL_CLA(&cmdqs->cmdq[HINIC3_CMDQ_SYNC].wq)) {
+ /* cmdq wq's CLA table is up to 512B */
+#define CMDQ_WQ_CLA_SIZE 512
+ if (cmdqs->cmdq[HINIC3_CMDQ_SYNC].wq.num_wq_pages >
+ CMDQ_WQ_CLA_SIZE / sizeof(u64)) {
+ err = -EINVAL;
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq wq page exceed limit: %lu\n",
+ CMDQ_WQ_CLA_SIZE / sizeof(u64));
+ goto destroy_wq;
+ }
+
+ cmdqs->wq_block_vaddr =
+ dma_zalloc_coherent(cmdqs->hwdev->dev_hdl, PAGE_SIZE,
+ &cmdqs->wq_block_paddr, GFP_KERNEL);
+ if (!cmdqs->wq_block_vaddr) {
+ err = -ENOMEM;
+ sdk_err(cmdqs->hwdev->dev_hdl, "Failed to alloc cmdq wq block\n");
+ goto destroy_wq;
+ }
+
+ type = HINIC3_CMDQ_SYNC;
+ for (; type < cmdqs->cmdq_num; type++)
+ memcpy((u8 *)cmdqs->wq_block_vaddr +
+ ((u64)type * CMDQ_WQ_CLA_SIZE),
+ cmdqs->cmdq[type].wq.wq_block_vaddr,
+ cmdqs->cmdq[type].wq.num_wq_pages * sizeof(u64));
+ }
+
+ return 0;
+
+destroy_wq:
+ type = HINIC3_CMDQ_SYNC;
+ for (; type < cmdq_type; type++)
+ hinic3_wq_destroy(&cmdqs->cmdq[type].wq);
+
+ return err;
+}
+
+static void destroy_cmdq_wq(struct hinic3_cmdqs *cmdqs)
+{
+ u8 cmdq_type;
+
+ if (cmdqs->wq_block_vaddr)
+ dma_free_coherent(cmdqs->hwdev->dev_hdl, PAGE_SIZE,
+ cmdqs->wq_block_vaddr, cmdqs->wq_block_paddr);
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++)
+ hinic3_wq_destroy(&cmdqs->cmdq[cmdq_type].wq);
+}
+
+static int init_cmdqs(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ u8 cmdq_num;
+ int err = -ENOMEM;
+
+ if (COMM_SUPPORT_CMDQ_NUM(hwdev)) {
+ cmdq_num = hwdev->glb_attr.cmdq_num;
+ if (hwdev->glb_attr.cmdq_num > HINIC3_MAX_CMDQ_TYPES) {
+ sdk_warn(hwdev->dev_hdl, "Adjust cmdq num to %d\n", HINIC3_MAX_CMDQ_TYPES);
+ cmdq_num = HINIC3_MAX_CMDQ_TYPES;
+ }
+ } else {
+ cmdq_num = HINIC3_MAX_CMDQ_TYPES;
+ }
+
+ cmdqs = kzalloc(sizeof(*cmdqs), GFP_KERNEL);
+ if (!cmdqs)
+ return err;
+
+ hwdev->cmdqs = cmdqs;
+ cmdqs->hwdev = hwdev;
+ cmdqs->cmdq_num = cmdq_num;
+
+ cmdqs->cmd_buf_pool = dma_pool_create("hinic3_cmdq", hwdev->dev_hdl,
+ HINIC3_CMDQ_BUF_SIZE, HINIC3_CMDQ_BUF_SIZE, 0ULL);
+ if (!cmdqs->cmd_buf_pool) {
+ sdk_err(hwdev->dev_hdl, "Failed to create cmdq buffer pool\n");
+ goto pool_create_err;
+ }
+
+ return 0;
+
+pool_create_err:
+ kfree(cmdqs);
+
+ return err;
+}
+
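+/* Create the cmdq work queues, allocate the doorbell area, initialize every
+ * cmdq and program the contexts into hardware.
+ */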
+int hinic3_cmdqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ void __iomem *db_base = NULL;
+ u8 type, cmdq_type;
+ int err = -ENOMEM;
+
+ err = init_cmdqs(hwdev);
+ if (err)
+ return err;
+
+ cmdqs = hwdev->cmdqs;
+
+ err = create_cmdq_wq(cmdqs);
+ if (err)
+ goto create_wq_err;
+
+ err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate doorbell address\n");
+ goto alloc_db_err;
+ }
+
+ cmdqs->cmdqs_db_base = (u8 *)db_base;
+ for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, cmdq_type);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize cmdq type :%d\n", cmdq_type);
+ goto init_cmdq_err;
+ }
+
+ cmdq_init_queue_ctxt(cmdqs, &cmdqs->cmdq[cmdq_type],
+ &cmdqs->cmdq[cmdq_type].cmdq_ctxt);
+ }
+
+ err = hinic3_set_cmdq_ctxts(hwdev);
+ if (err)
+ goto init_cmdq_err;
+
+ return 0;
+
+init_cmdq_err:
+ for (type = HINIC3_CMDQ_SYNC; type < cmdq_type; type++)
+ free_cmdq(&cmdqs->cmdq[type]);
+
+ hinic3_free_db_addr(hwdev, cmdqs->cmdqs_db_base, NULL);
+
+alloc_db_err:
+ destroy_cmdq_wq(cmdqs);
+
+create_wq_err:
+ dma_pool_destroy(cmdqs->cmd_buf_pool);
+ kfree(cmdqs);
+
+ return err;
+}
+
+void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ u8 cmdq_type = HINIC3_CMDQ_SYNC;
+
+ cmdqs->status &= ~HINIC3_CMDQ_ENABLE;
+
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ hinic3_cmdq_flush_cmd(hwdev, &cmdqs->cmdq[cmdq_type]);
+ cmdq_reset_all_cmd_buff(&cmdqs->cmdq[cmdq_type]);
+ free_cmdq(&cmdqs->cmdq[cmdq_type]);
+ }
+
+ hinic3_free_db_addr(hwdev, cmdqs->cmdqs_db_base, NULL);
+ destroy_cmdq_wq(cmdqs);
+
+ dma_pool_destroy(cmdqs->cmd_buf_pool);
+
+ kfree(cmdqs);
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h
new file mode 100644
index 0000000..b174ad2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h
@@ -0,0 +1,204 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CMDQ_H
+#define HINIC3_CMDQ_H
+
+#include <linux/types.h>
+#include <linux/completion.h>
+#include <linux/spinlock.h>
+
+#include "mpu_inband_cmd_defs.h"
+#include "hinic3_hw.h"
+#include "hinic3_wq.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_SCMD_DATA_LEN 16
+
+#define HINIC3_CMDQ_DEPTH 4096
+
+enum hinic3_cmdq_type {
+ HINIC3_CMDQ_SYNC,
+ HINIC3_CMDQ_ASYNC,
+ HINIC3_MAX_CMDQ_TYPES = 4
+};
+
+enum hinic3_db_src_type {
+ HINIC3_DB_SRC_CMDQ_TYPE,
+ HINIC3_DB_SRC_L2NIC_SQ_TYPE,
+};
+
+enum hinic3_cmdq_db_type {
+ HINIC3_DB_SQ_RQ_TYPE,
+ HINIC3_DB_CMDQ_TYPE,
+};
+
+/* hardware define: cmdq wqe */
+struct hinic3_cmdq_header {
+ u32 header_info;
+ u32 saved_data;
+};
+
+struct hinic3_scmd_bufdesc {
+ u32 buf_len;
+ u32 rsvd;
+ u8 data[HINIC3_SCMD_DATA_LEN];
+};
+
+struct hinic3_lcmd_bufdesc {
+ struct hinic3_sge sge;
+ u32 rsvd1;
+ u64 saved_async_buf;
+ u64 rsvd3;
+};
+
+struct hinic3_cmdq_db {
+ u32 db_head;
+ u32 db_info;
+};
+
+struct hinic3_status {
+ u32 status_info;
+};
+
+struct hinic3_ctrl {
+ u32 ctrl_info;
+};
+
+struct hinic3_sge_resp {
+ struct hinic3_sge sge;
+ u32 rsvd;
+};
+
+struct hinic3_cmdq_completion {
+ union {
+ struct hinic3_sge_resp sge_resp;
+ u64 direct_resp;
+ };
+};
+
+struct hinic3_cmdq_wqe_scmd {
+ struct hinic3_cmdq_header header;
+ u64 rsvd;
+ struct hinic3_status status;
+ struct hinic3_ctrl ctrl;
+ struct hinic3_cmdq_completion completion;
+ struct hinic3_scmd_bufdesc buf_desc;
+};
+
+struct hinic3_cmdq_wqe_lcmd {
+ struct hinic3_cmdq_header header;
+ struct hinic3_status status;
+ struct hinic3_ctrl ctrl;
+ struct hinic3_cmdq_completion completion;
+ struct hinic3_lcmd_bufdesc buf_desc;
+};
+
+struct hinic3_cmdq_inline_wqe {
+ struct hinic3_cmdq_wqe_scmd wqe_scmd;
+};
+
+struct hinic3_cmdq_wqe {
+ union {
+ struct hinic3_cmdq_inline_wqe inline_wqe;
+ struct hinic3_cmdq_wqe_lcmd wqe_lcmd;
+ };
+};
+
+struct hinic3_cmdq_arm_bit {
+ u32 q_type;
+ u32 q_id;
+};
+
+enum hinic3_cmdq_status {
+ HINIC3_CMDQ_ENABLE = BIT(0),
+};
+
+enum hinic3_cmdq_cmd_type {
+ HINIC3_CMD_TYPE_NONE,
+ HINIC3_CMD_TYPE_SET_ARM,
+ HINIC3_CMD_TYPE_DIRECT_RESP,
+ HINIC3_CMD_TYPE_SGE_RESP,
+ HINIC3_CMD_TYPE_ASYNC,
+ HINIC3_CMD_TYPE_FAKE_TIMEOUT,
+ HINIC3_CMD_TYPE_TIMEOUT,
+ HINIC3_CMD_TYPE_FORCE_STOP,
+};
+
+struct hinic3_cmdq_cmd_info {
+ enum hinic3_cmdq_cmd_type cmd_type;
+ u16 channel;
+ u16 rsvd1;
+
+ struct completion *done;
+ int *errcode;
+ int *cmpt_code;
+ u64 *direct_resp;
+ u64 cmdq_msg_id;
+
+ struct hinic3_cmd_buf *buf_in;
+ struct hinic3_cmd_buf *buf_out;
+};
+
+struct hinic3_cmdq {
+ struct hinic3_wq wq;
+
+ enum hinic3_cmdq_type cmdq_type;
+ int wrapped;
+
+ /* spinlock for send cmdq commands */
+ spinlock_t cmdq_lock;
+
+ struct cmdq_ctxt_info cmdq_ctxt;
+
+ struct hinic3_cmdq_cmd_info *cmd_infos;
+
+ struct hinic3_hwdev *hwdev;
+ u64 rsvd1[2];
+};
+
+struct hinic3_cmdqs {
+ struct hinic3_hwdev *hwdev;
+
+ struct pci_pool *cmd_buf_pool;
+ /* doorbell area */
+ u8 __iomem *cmdqs_db_base;
+
+ /* All cmdq's CLA of a VF occupy a PAGE when cmdq wq is 1-level CLA */
+ dma_addr_t wq_block_paddr;
+ void *wq_block_vaddr;
+ struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES];
+
+ u32 status;
+ u32 disable_flag;
+
+ bool lock_channel_en;
+ unsigned long channel_stop;
+ u8 cmdq_num;
+ u32 rsvd1;
+ u64 rsvd2;
+};
+
+void hinic3_cmdq_ceq_handler(void *handle, u32 ceqe_data);
+
+int hinic3_reinit_cmdq_ctxts(struct hinic3_hwdev *hwdev);
+
+bool hinic3_cmdq_idle(struct hinic3_cmdq *cmdq);
+
+int hinic3_cmdqs_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev);
+
+void hinic3_cmdq_flush_cmd(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq *cmdq);
+
+int hinic3_cmdq_set_channel_status(struct hinic3_hwdev *hwdev, u16 channel,
+ bool enable);
+
+void hinic3_cmdq_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable);
+
+void hinic3_cmdq_flush_sync_cmd(struct hinic3_hwdev *hwdev);
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c
new file mode 100644
index 0000000..a942ef1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c
@@ -0,0 +1,93 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/kernel.h>
+#include <linux/io-mapping.h>
+#include <linux/delay.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+
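+/* Allocate a DMA-coherent, zeroed buffer whose physical address is aligned to
+ * @align. If the first allocation is not aligned, it is re-allocated with
+ * enough headroom to align the address within the buffer.
+ */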
+int hinic3_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
+ unsigned int flag,
+ struct hinic3_dma_addr_align *mem_align)
+{
+ void *vaddr = NULL, *align_vaddr = NULL;
+ dma_addr_t paddr, align_paddr;
+ u64 real_size = size;
+
+ vaddr = dma_zalloc_coherent(dev_hdl, real_size, &paddr, flag);
+ if (!vaddr)
+ return -ENOMEM;
+
+ align_paddr = ALIGN(paddr, align);
+ /* align */
+ if (align_paddr == paddr) {
+ align_vaddr = vaddr;
+ goto out;
+ }
+
+ dma_free_coherent(dev_hdl, real_size, vaddr, paddr);
+
+ /* realloc memory for align */
+ real_size = size + align;
+ vaddr = dma_zalloc_coherent(dev_hdl, real_size, &paddr, flag);
+ if (!vaddr)
+ return -ENOMEM;
+
+ align_paddr = ALIGN(paddr, align);
+ align_vaddr = (void *)((u64)vaddr + (align_paddr - paddr));
+
+out:
+ mem_align->real_size = (u32)real_size;
+ mem_align->ori_vaddr = vaddr;
+ mem_align->ori_paddr = paddr;
+ mem_align->align_vaddr = align_vaddr;
+ mem_align->align_paddr = align_paddr;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_dma_zalloc_coherent_align);
+
+void hinic3_dma_free_coherent_align(void *dev_hdl,
+ struct hinic3_dma_addr_align *mem_align)
+{
+ dma_free_coherent(dev_hdl, mem_align->real_size,
+ mem_align->ori_vaddr, mem_align->ori_paddr);
+}
+EXPORT_SYMBOL(hinic3_dma_free_coherent_align);
+
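+/* Poll @handler every wait_once_us microseconds until it reports completion
+ * or error, or until wait_total_ms milliseconds have elapsed.
+ */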
+int hinic3_wait_for_timeout(void *priv_data, wait_cpl_handler handler,
+ u32 wait_total_ms, u32 wait_once_us)
+{
+ enum hinic3_wait_return ret;
+ unsigned long end;
+ /* Use 9/10 of wait_once_us as the minimum sleep time for usleep_range */
+ u32 usleep_min = wait_once_us - wait_once_us / 10;
+
+ if (!handler)
+ return -EINVAL;
+
+ end = jiffies + msecs_to_jiffies(wait_total_ms);
+ do {
+ ret = handler(priv_data);
+ if (ret == WAIT_PROCESS_CPL)
+ return 0;
+ else if (ret == WAIT_PROCESS_ERR)
+ return -EIO;
+
+ /* For sleeps of 20ms or longer, msleep is accurate enough */
+ if (wait_once_us >= 20 * USEC_PER_MSEC)
+ msleep(wait_once_us / USEC_PER_MSEC);
+ else
+ usleep_range(usleep_min, wait_once_us);
+ } while (time_before(jiffies, end));
+
+ ret = handler(priv_data);
+ if (ret == WAIT_PROCESS_CPL)
+ return 0;
+ else if (ret == WAIT_PROCESS_ERR)
+ return -EIO;
+
+ return -ETIMEDOUT;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h
new file mode 100644
index 0000000..4098d7f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h
@@ -0,0 +1,188 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CSR_H
+#define HINIC3_CSR_H
+
+/* bit30/bit31 for bar index flag
+ * 00: bar0
+ * 01: bar1
+ * 10: bar2
+ * 11: bar3
+ */
+#define HINIC3_CFG_REGS_FLAG 0x40000000
+
+#define HINIC3_MGMT_REGS_FLAG 0xC0000000
+
+#define HINIC3_REGS_FLAG_MAKS 0x3FFFFFFF
+
+#define HINIC3_VF_CFG_REG_OFFSET 0x2000
+
+#define HINIC3_HOST_CSR_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x6000)
+#define HINIC3_CSR_GLOBAL_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x6400)
+
+/* HW interface registers */
+#define HINIC3_CSR_FUNC_ATTR0_ADDR (HINIC3_CFG_REGS_FLAG + 0x0)
+#define HINIC3_CSR_FUNC_ATTR1_ADDR (HINIC3_CFG_REGS_FLAG + 0x4)
+#define HINIC3_CSR_FUNC_ATTR2_ADDR (HINIC3_CFG_REGS_FLAG + 0x8)
+#define HINIC3_CSR_FUNC_ATTR3_ADDR (HINIC3_CFG_REGS_FLAG + 0xC)
+#define HINIC3_CSR_FUNC_ATTR4_ADDR (HINIC3_CFG_REGS_FLAG + 0x10)
+#define HINIC3_CSR_FUNC_ATTR5_ADDR (HINIC3_CFG_REGS_FLAG + 0x14)
+#define HINIC3_CSR_FUNC_ATTR6_ADDR (HINIC3_CFG_REGS_FLAG + 0x18)
+
+#define HINIC3_FUNC_CSR_MAILBOX_DATA_OFF 0x80
+#define HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x0100)
+#define HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x0104)
+#define HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x0108)
+#define HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x010C)
+/* CLP registers */
+#define HINIC3_BAR3_CLP_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x0000)
+
+#define HINIC3_UCPU_CLP_SIZE_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x40)
+#define HINIC3_UCPU_CLP_REQBASE_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x44)
+#define HINIC3_UCPU_CLP_RSPBASE_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x48)
+#define HINIC3_UCPU_CLP_REQ_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x4c)
+#define HINIC3_UCPU_CLP_RSP_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x50)
+#define HINIC3_CLP_REG(member) (HINIC3_UCPU_CLP_##member##_REG)
+
+#define HINIC3_CLP_REQ_DATA HINIC3_BAR3_CLP_BASE_ADDR
+#define HINIC3_CLP_RSP_DATA (HINIC3_BAR3_CLP_BASE_ADDR + 0x1000)
+#define HINIC3_CLP_DATA(member) (HINIC3_CLP_##member##_DATA)
+
+#define HINIC3_PPF_ELECTION_OFFSET 0x0
+#define HINIC3_MPF_ELECTION_OFFSET 0x20
+
+#define HINIC3_CSR_PPF_ELECTION_ADDR \
+ (HINIC3_HOST_CSR_BASE_ADDR + HINIC3_PPF_ELECTION_OFFSET)
+
+#define HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR \
+ (HINIC3_HOST_CSR_BASE_ADDR + HINIC3_MPF_ELECTION_OFFSET)
+
+#define HINIC3_CSR_FUNC_PPF_ELECT_BASE_ADDR (HINIC3_CFG_REGS_FLAG + 0x60)
+#define HINIC3_CSR_FUNC_PPF_ELECT_PORT_STRIDE 0x4
+
+#define HINIC3_CSR_FUNC_PPF_ELECT(host_idx) \
+ (HINIC3_CSR_FUNC_PPF_ELECT_BASE_ADDR + \
+ (host_idx) * HINIC3_CSR_FUNC_PPF_ELECT_PORT_STRIDE)
+
+#define HINIC3_CSR_DMA_ATTR_TBL_ADDR (HINIC3_CFG_REGS_FLAG + 0x380)
+#define HINIC3_CSR_DMA_ATTR_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x390)
+
+/* MSI-X registers */
+#define HINIC3_CSR_MSIX_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x310)
+#define HINIC3_CSR_MSIX_CTRL_ADDR (HINIC3_CFG_REGS_FLAG + 0x300)
+#define HINIC3_CSR_MSIX_CNT_ADDR (HINIC3_CFG_REGS_FLAG + 0x304)
+#define HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR (HINIC3_CFG_REGS_FLAG + 0x58)
+
+#define HINIC3_MSI_CLR_INDIR_RESEND_TIMER_CLR_SHIFT 0
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_SET_SHIFT 1
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_CLR_SHIFT 2
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_SET_SHIFT 3
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_CLR_SHIFT 4
+#define HINIC3_MSI_CLR_INDIR_SIMPLE_INDIR_IDX_SHIFT 22
+
+#define HINIC3_MSI_CLR_INDIR_RESEND_TIMER_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_SET_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_SET_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_SIMPLE_INDIR_IDX_MASK 0x3FFU
+
+#define HINIC3_MSI_CLR_INDIR_SET(val, member) \
+ (((val) & HINIC3_MSI_CLR_INDIR_##member##_MASK) << \
+ HINIC3_MSI_CLR_INDIR_##member##_SHIFT)
+
+/* EQ registers */
+#define HINIC3_AEQ_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x210)
+#define HINIC3_CEQ_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x290)
+
+#define HINIC3_EQ_INDIR_IDX_ADDR(type) \
+ ((type == HINIC3_AEQ) ? \
+ HINIC3_AEQ_INDIR_IDX_ADDR : HINIC3_CEQ_INDIR_IDX_ADDR)
+
+#define HINIC3_AEQ_MTT_OFF_BASE_ADDR (HINIC3_CFG_REGS_FLAG + 0x240)
+#define HINIC3_CEQ_MTT_OFF_BASE_ADDR (HINIC3_CFG_REGS_FLAG + 0x2C0)
+
+#define HINIC3_CSR_EQ_PAGE_OFF_STRIDE 8
+
+#define HINIC3_AEQ_HI_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_AEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE)
+
+#define HINIC3_AEQ_LO_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_AEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE + 4)
+
+#define HINIC3_CEQ_HI_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_CEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE)
+
+#define HINIC3_CEQ_LO_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_CEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE + 4)
+
+#define HINIC3_CSR_AEQ_CTRL_0_ADDR (HINIC3_CFG_REGS_FLAG + 0x200)
+#define HINIC3_CSR_AEQ_CTRL_1_ADDR (HINIC3_CFG_REGS_FLAG + 0x204)
+#define HINIC3_CSR_AEQ_CONS_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x208)
+#define HINIC3_CSR_AEQ_PROD_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x20C)
+#define HINIC3_CSR_AEQ_CI_SIMPLE_INDIR_ADDR (HINIC3_CFG_REGS_FLAG + 0x50)
+
+#define HINIC3_CSR_CEQ_CTRL_0_ADDR (HINIC3_CFG_REGS_FLAG + 0x280)
+#define HINIC3_CSR_CEQ_CTRL_1_ADDR (HINIC3_CFG_REGS_FLAG + 0x284)
+#define HINIC3_CSR_CEQ_CONS_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x288)
+#define HINIC3_CSR_CEQ_PROD_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x28c)
+#define HINIC3_CSR_CEQ_CI_SIMPLE_INDIR_ADDR (HINIC3_CFG_REGS_FLAG + 0x54)
+
+/* API CMD registers */
+#define HINIC3_CSR_API_CMD_BASE (HINIC3_MGMT_REGS_FLAG + 0x2000)
+
+#define HINIC3_CSR_API_CMD_STRIDE 0x80
+
+#define HINIC3_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x0 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x4 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_STATUS_HI_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x8 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_STATUS_LO_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0xC + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x10 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_CTRL_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x14 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_PI_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x1C + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_REQ_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x20 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_STATUS_0_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x30 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+/* self test register */
+#define HINIC3_MGMT_HEALTH_STATUS_ADDR (HINIC3_MGMT_REGS_FLAG + 0x983c)
+
+#define HINIC3_CHIP_BASE_INFO_ADDR (HINIC3_MGMT_REGS_FLAG + 0xB02C)
+
+#define HINIC3_CHIP_ERR_STATUS0_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0EC)
+#define HINIC3_CHIP_ERR_STATUS1_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0F0)
+
+#define HINIC3_ERR_INFO0_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0F4)
+#define HINIC3_ERR_INFO1_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0F8)
+#define HINIC3_ERR_INFO2_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0FC)
+
+#define HINIC3_MULT_HOST_SLAVE_STATUS_ADDR (HINIC3_MGMT_REGS_FLAG + 0xDF30)
+#define HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR (HINIC3_MGMT_REGS_FLAG + 0xDF4C)
+#define HINIC3_MULT_HOST_MASTER_MBOX_STATUS_ADDR HINIC3_MULT_HOST_SLAVE_STATUS_ADDR
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c
new file mode 100644
index 0000000..af336f2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c
@@ -0,0 +1,997 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <net/addrconf.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/io-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/debugfs.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_lld.h"
+#include "hinic3_sriov.h"
+#include "hinic3_nictool.h"
+#include "hinic3_pci_id_tbl.h"
+#include "hinic3_hwdev.h"
+#include "cfg_mgmt_mpu_cmd_defs.h"
+#include "mpu_cmd_base_defs.h"
+#include "hinic3_dev_mgmt.h"
+
+#define HINIC3_WAIT_TOOL_CNT_TIMEOUT 10000
+#define HINIC3_WAIT_TOOL_MIN_USLEEP_TIME 9900
+#define HINIC3_WAIT_TOOL_MAX_USLEEP_TIME 10000
+#define HIGHT_BDF 8
+
+static unsigned long card_bit_map;
+
+LIST_HEAD(g_hinic3_chip_list);
+
+struct list_head *get_hinic3_chip_list(void)
+{
+ return &g_hinic3_chip_list;
+}
+
+void uld_dev_hold(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(lld_dev->pdev);
+
+ atomic_inc(&pci_adapter->uld_ref_cnt[type]);
+}
+EXPORT_SYMBOL(uld_dev_hold);
+
+void uld_dev_put(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(lld_dev->pdev);
+
+ atomic_dec(&pci_adapter->uld_ref_cnt[type]);
+}
+EXPORT_SYMBOL(uld_dev_put);
+
+void lld_dev_cnt_init(struct hinic3_pcidev *pci_adapter)
+{
+ atomic_set(&pci_adapter->ref_cnt, 0);
+}
+
+void lld_dev_hold(struct hinic3_lld_dev *dev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!dev)
+ return;
+
+ pci_adapter = pci_get_drvdata(dev->pdev);
+ atomic_inc(&pci_adapter->ref_cnt);
+}
+
+void lld_dev_put(struct hinic3_lld_dev *dev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!dev)
+ return;
+
+ pci_adapter = pci_get_drvdata(dev->pdev);
+ atomic_dec(&pci_adapter->ref_cnt);
+}
+
+void wait_lld_dev_unused(struct hinic3_pcidev *pci_adapter)
+{
+ unsigned long end;
+
+ end = jiffies + msecs_to_jiffies(HINIC3_WAIT_TOOL_CNT_TIMEOUT);
+ do {
+ if (!atomic_read(&pci_adapter->ref_cnt))
+ return;
+
+ /* when sleeping for ~10ms, use usleep_range for better precision */
+ usleep_range(HINIC3_WAIT_TOOL_MIN_USLEEP_TIME,
+ HINIC3_WAIT_TOOL_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+}
+
+enum hinic3_lld_status {
+ HINIC3_NODE_CHANGE = BIT(0),
+};
+
+struct hinic3_lld_lock {
+ /* lock for chip list */
+ struct mutex lld_mutex;
+ unsigned long status;
+ atomic_t dev_ref_cnt;
+};
+
+struct hinic3_lld_lock g_lld_lock;
+
+#define WAIT_LLD_DEV_HOLD_TIMEOUT (10 * 60 * 1000) /* 10 minutes */
+#define WAIT_LLD_DEV_NODE_CHANGED (10 * 60 * 1000) /* 10 minutes */
+#define WAIT_LLD_DEV_REF_CNT_EMPTY (2 * 60 * 1000) /* 2 minutes */
+#define PRINT_TIMEOUT_INTERVAL 10000
+#define MS_PER_SEC 1000
+#define LLD_LOCK_MIN_USLEEP_TIME 900
+#define LLD_LOCK_MAX_USLEEP_TIME 1000
+
+/* Nodes in the chip list are about to change; tools and drivers must not
+ * look up nodes while this is in progress.
+ */
+void lld_lock_chip_node(void)
+{
+ unsigned long end;
+ bool timeout = true;
+ u32 loop_cnt;
+
+ mutex_lock(&g_lld_lock.lld_mutex);
+
+ loop_cnt = 0;
+ end = jiffies + msecs_to_jiffies(WAIT_LLD_DEV_NODE_CHANGED);
+ do {
+ if (!test_and_set_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status)) {
+ timeout = false;
+ break;
+ }
+
+ loop_cnt++;
+ if (loop_cnt % PRINT_TIMEOUT_INTERVAL == 0)
+ pr_warn("Wait for lld node change complete for %us\n",
+ loop_cnt / MS_PER_SEC);
+
+ /* when sleeping for ~1ms, use usleep_range for better precision */
+ usleep_range(LLD_LOCK_MIN_USLEEP_TIME,
+ LLD_LOCK_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+
+ if (timeout && test_and_set_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status))
+ pr_warn("Wait for lld node change complete timeout when trying to get lld lock\n");
+
+ loop_cnt = 0;
+ timeout = true;
+ end = jiffies + msecs_to_jiffies(WAIT_LLD_DEV_NODE_CHANGED);
+ do {
+ if (!atomic_read(&g_lld_lock.dev_ref_cnt)) {
+ timeout = false;
+ break;
+ }
+
+ loop_cnt++;
+ if (loop_cnt % PRINT_TIMEOUT_INTERVAL == 0)
+ pr_warn("Wait for lld dev unused for %us, reference count: %d\n",
+ loop_cnt / MS_PER_SEC,
+ atomic_read(&g_lld_lock.dev_ref_cnt));
+
+ /* when sleeping for ~1ms, use usleep_range for better precision */
+ usleep_range(LLD_LOCK_MIN_USLEEP_TIME,
+ LLD_LOCK_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+
+ if (timeout && atomic_read(&g_lld_lock.dev_ref_cnt))
+ pr_warn("Wait for lld dev unused timeout\n");
+
+ mutex_unlock(&g_lld_lock.lld_mutex);
+}
+
+void lld_unlock_chip_node(void)
+{
+ clear_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status);
+}
+
+/* Tools or other drivers call this before looking up a node in the chip list
+ * to prevent the node from being freed while it is in use.
+ */
+void lld_hold(void)
+{
+ unsigned long end;
+ u32 loop_cnt = 0;
+
+ /* ensure no chip node is currently being changed */
+ mutex_lock(&g_lld_lock.lld_mutex);
+
+ end = jiffies + msecs_to_jiffies(WAIT_LLD_DEV_HOLD_TIMEOUT);
+ do {
+ if (!test_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status))
+ break;
+
+ loop_cnt++;
+
+ if (loop_cnt % PRINT_TIMEOUT_INTERVAL == 0)
+ pr_warn("Wait lld node change complete for %us\n",
+ loop_cnt / MS_PER_SEC);
+ /* when sleeping for ~1ms, use usleep_range for better precision */
+ usleep_range(LLD_LOCK_MIN_USLEEP_TIME,
+ LLD_LOCK_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+
+ if (test_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status))
+ pr_warn("Wait lld node change complete timeout when trying to hode lld dev\n");
+
+ atomic_inc(&g_lld_lock.dev_ref_cnt);
+ mutex_unlock(&g_lld_lock.lld_mutex);
+}
+
+void lld_put(void)
+{
+ atomic_dec(&g_lld_lock.dev_ref_cnt);
+}
+
+void hinic3_lld_lock_init(void)
+{
+ mutex_init(&g_lld_lock.lld_mutex);
+ atomic_set(&g_lld_lock.dev_ref_cnt, 0);
+}
+
+void hinic3_get_all_chip_id(void *id_info)
+{
+ struct nic_card_id *card_id = (struct nic_card_id *)id_info;
+ struct card_node *chip_node = NULL;
+ int i = 0;
+ int id, err;
+
+ lld_hold();
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ err = sscanf(chip_node->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ pr_err("Failed to get hinic3 id\n");
+ continue;
+ }
+ card_id->id[i] = (u32)id;
+ i++;
+ }
+ lld_put();
+ card_id->num = (u32)i;
+}
+
+int hinic3_bar_mmap_param_valid(phys_addr_t phy_addr, unsigned long vmsize)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ u64 bar1_phy_addr = 0;
+ u64 bar3_phy_addr = 0;
+ u64 bar1_size = 0;
+ u64 bar3_size = 0;
+
+ lld_hold();
+
+ /* get PF bar1 or bar3 physical address to verify */
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ bar1_phy_addr = pci_resource_start(dev->pcidev, HINIC3_PF_PCI_CFG_REG_BAR);
+ bar1_size = pci_resource_len(dev->pcidev, HINIC3_PF_PCI_CFG_REG_BAR);
+
+ bar3_phy_addr = pci_resource_start(dev->pcidev, HINIC3_PCI_MGMT_REG_BAR);
+ bar3_size = pci_resource_len(dev->pcidev, HINIC3_PCI_MGMT_REG_BAR);
+ if ((phy_addr == bar1_phy_addr && vmsize <= bar1_size) ||
+ (phy_addr == bar3_phy_addr && vmsize <= bar3_size)) {
+ lld_put();
+ return 0;
+ }
+ }
+ }
+
+ lld_put();
+ return -EINVAL;
+}
+
+void hinic3_get_card_func_info_by_card_name(const char *chip_name,
+ struct hinic3_card_func_info *card_func)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ struct func_pdev_info *pdev_info = NULL;
+
+ card_func->num_pf = 0;
+
+ lld_hold();
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ if (strncmp(chip_node->chip_name, chip_name, IFNAMSIZ))
+ continue;
+
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ pdev_info = &card_func->pdev_info[card_func->num_pf];
+ pdev_info->bar1_size =
+ pci_resource_len(dev->pcidev,
+ HINIC3_PF_PCI_CFG_REG_BAR);
+ pdev_info->bar1_phy_addr =
+ pci_resource_start(dev->pcidev,
+ HINIC3_PF_PCI_CFG_REG_BAR);
+
+ pdev_info->bar3_size =
+ pci_resource_len(dev->pcidev,
+ HINIC3_PCI_MGMT_REG_BAR);
+ pdev_info->bar3_phy_addr =
+ pci_resource_start(dev->pcidev,
+ HINIC3_PCI_MGMT_REG_BAR);
+
+ card_func->num_pf++;
+ if (card_func->num_pf >= MAX_SIZE) {
+ lld_put();
+ return;
+ }
+ }
+ }
+
+ lld_put();
+}
+
+static bool is_pcidev_match_chip_name(const char *ifname, struct hinic3_pcidev *dev,
+ struct card_node *chip_node, enum func_type type)
+{
+ if (!strncmp(chip_node->chip_name, ifname, IFNAMSIZ)) {
+ if (hinic3_func_type(dev->hwdev) != type)
+ return false;
+ return true;
+ }
+
+ return false;
+}
+
+static struct hinic3_lld_dev *get_dst_type_lld_dev_by_chip_name(const char *ifname,
+ enum func_type type)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (is_pcidev_match_chip_name(ifname, dev, chip_node, type))
+ return &dev->lld_dev;
+ }
+ }
+
+ return NULL;
+}
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_name(const char *chip_name)
+{
+ struct hinic3_lld_dev *dev = NULL;
+
+ lld_hold();
+
+ dev = get_dst_type_lld_dev_by_chip_name(chip_name, TYPE_PPF);
+ if (dev)
+ goto out;
+
+ dev = get_dst_type_lld_dev_by_chip_name(chip_name, TYPE_PF);
+ if (dev)
+ goto out;
+
+ dev = get_dst_type_lld_dev_by_chip_name(chip_name, TYPE_VF);
+out:
+ if (dev)
+ lld_dev_hold(dev);
+ lld_put();
+ return dev;
+}
+
+static int get_dynamic_uld_dev_name(struct hinic3_pcidev *dev, enum hinic3_service_type type,
+ char *ifname)
+{
+ u32 out_size = IFNAMSIZ;
+
+ if (!g_uld_info[type].ioctl)
+ return -EFAULT;
+
+ return g_uld_info[type].ioctl(dev->uld_dev[type], GET_ULD_DEV_NAME,
+ NULL, 0, ifname, &out_size);
+}
+
+static bool is_pcidev_match_dev_name(const char *dev_name, struct hinic3_pcidev *dev,
+ enum hinic3_service_type type)
+{
+ enum hinic3_service_type i;
+ char nic_uld_name[IFNAMSIZ] = {0};
+ int err;
+
+ if (type > SERVICE_T_MAX)
+ return false;
+
+ if (type == SERVICE_T_MAX) {
+ for (i = SERVICE_T_OVS; i < SERVICE_T_MAX; i++) {
+ if (!strncmp(dev->uld_dev_name[i], dev_name, IFNAMSIZ))
+ return true;
+ }
+ } else {
+ if (!strncmp(dev->uld_dev_name[type], dev_name, IFNAMSIZ))
+ return true;
+ }
+
+ err = get_dynamic_uld_dev_name(dev, SERVICE_T_NIC, (char *)nic_uld_name);
+ if (err == 0) {
+ if (!strncmp(nic_uld_name, dev_name, IFNAMSIZ))
+ return true;
+ }
+
+ return false;
+}
+
+static struct hinic3_lld_dev *get_lld_dev_by_dev_name(const char *dev_name,
+ enum hinic3_service_type type, bool hold)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (is_pcidev_match_dev_name(dev_name, dev, type)) {
+ if (hold)
+ lld_dev_hold(&dev->lld_dev);
+ lld_put();
+ return &dev->lld_dev;
+ }
+ }
+ }
+
+ lld_put();
+
+ return NULL;
+}
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_and_port(const char *chip_name, u8 port_id)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ if (hinic3_physical_port_id(dev->hwdev) == port_id &&
+ !strncmp(chip_node->chip_name, chip_name, IFNAMSIZ)) {
+ lld_dev_hold(&dev->lld_dev);
+ lld_put();
+
+ return &dev->lld_dev;
+ }
+ }
+ }
+ lld_put();
+
+ return NULL;
+}
+
+void *hinic3_get_ppf_dev(void)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct list_head *chip_list = NULL;
+
+ lld_hold();
+ chip_list = get_hinic3_chip_list();
+
+ list_for_each_entry(chip_node, chip_list, node)
+ list_for_each_entry(pci_adapter, &chip_node->func_list, node)
+ if (hinic3_func_type(pci_adapter->hwdev) == TYPE_PPF) {
+ pr_info("Get ppf_func_id:%u", hinic3_global_func_id(pci_adapter->hwdev));
+ lld_put();
+ return pci_adapter->lld_dev.hwdev;
+ }
+
+ lld_put();
+ return NULL;
+}
+EXPORT_SYMBOL(hinic3_get_ppf_dev);
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name(const char *dev_name,
+ enum hinic3_service_type type)
+{
+ return get_lld_dev_by_dev_name(dev_name, type, true);
+}
+EXPORT_SYMBOL(hinic3_get_lld_dev_by_dev_name);
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name_unsafe(const char *dev_name,
+ enum hinic3_service_type type)
+{
+ return get_lld_dev_by_dev_name(dev_name, type, false);
+}
+EXPORT_SYMBOL(hinic3_get_lld_dev_by_dev_name_unsafe);
+
+static void *get_uld_by_lld_dev(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type,
+ bool hold)
+{
+ struct hinic3_pcidev *dev = NULL;
+ void *uld = NULL;
+
+ if (!lld_dev)
+ return NULL;
+
+ dev = pci_get_drvdata(lld_dev->pdev);
+ if (!dev)
+ return NULL;
+
+ spin_lock_bh(&dev->uld_lock);
+ if (!dev->uld_dev[type] || !test_bit(type, &dev->uld_state)) {
+ spin_unlock_bh(&dev->uld_lock);
+ return NULL;
+ }
+ uld = dev->uld_dev[type];
+
+ if (hold)
+ atomic_inc(&dev->uld_ref_cnt[type]);
+ spin_unlock_bh(&dev->uld_lock);
+
+ return uld;
+}
+
+void *hinic3_get_uld_dev(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ return get_uld_by_lld_dev(lld_dev, type, true);
+}
+EXPORT_SYMBOL(hinic3_get_uld_dev);
+
+void *hinic3_get_uld_dev_unsafe(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ return get_uld_by_lld_dev(lld_dev, type, false);
+}
+EXPORT_SYMBOL(hinic3_get_uld_dev_unsafe);
+
+static struct hinic3_lld_dev *get_ppf_lld_dev(struct hinic3_lld_dev *lld_dev, bool hold)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(lld_dev->pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ lld_hold();
+ chip_node = pci_adapter->chip_node;
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (dev->hwdev && hinic3_func_type(dev->hwdev) == TYPE_PPF) {
+ if (hold)
+ lld_dev_hold(&dev->lld_dev);
+ lld_put();
+ return &dev->lld_dev;
+ }
+ }
+ lld_put();
+
+ return NULL;
+}
+
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev(struct hinic3_lld_dev *lld_dev)
+{
+ return get_ppf_lld_dev(lld_dev, true);
+}
+EXPORT_SYMBOL(hinic3_get_ppf_lld_dev);
+
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev_unsafe(struct hinic3_lld_dev *lld_dev)
+{
+ return get_ppf_lld_dev(lld_dev, false);
+}
+EXPORT_SYMBOL(hinic3_get_ppf_lld_dev_unsafe);
+
+int hinic3_get_chip_name(struct hinic3_lld_dev *lld_dev, char *chip_name, u16 max_len)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ int ret = 0;
+
+ if (!lld_dev || !chip_name || !max_len)
+ return -EINVAL;
+
+ pci_adapter = pci_get_drvdata(lld_dev->pdev);
+ if (!pci_adapter)
+ return -EFAULT;
+
+ lld_hold();
+ if (strscpy(chip_name, pci_adapter->chip_node->chip_name, max_len) < 0)
+ goto RELEASE;
+ chip_name[max_len - 1] = '\0';
+
+ lld_put();
+
+ return 0;
+
+RELEASE:
+ lld_put();
+
+ return ret;
+}
+EXPORT_SYMBOL(hinic3_get_chip_name);
+
+struct hinic3_hwdev *hinic3_get_sdk_hwdev_by_lld(struct hinic3_lld_dev *lld_dev)
+{
+ return lld_dev->hwdev;
+}
+
+void hinic3_write_oshr_info(struct os_hot_replace_info *out_oshr_info,
+ struct hw_pf_info *info,
+ struct hinic3_board_info *board_info,
+ struct card_node *chip_node, u32 serivce_enable,
+ u32 func_info_idx)
+{
+ out_oshr_info->func_infos[func_info_idx].pf_idx = info->glb_func_idx;
+ out_oshr_info->func_infos[func_info_idx].backup_pf =
+ (((info->glb_func_idx) / (board_info->port_num)) % HOT_REPLACE_PARTITION_NUM == 0) ?
+ ((info->glb_func_idx) + (board_info->port_num)) :
+ ((info->glb_func_idx) - (board_info->port_num));
+ out_oshr_info->func_infos[func_info_idx].partition =
+ ((info->glb_func_idx) / (board_info->port_num)) % HOT_REPLACE_PARTITION_NUM;
+ out_oshr_info->func_infos[func_info_idx].port_id = info->port_id;
+ out_oshr_info->func_infos[func_info_idx].bdf = (info->bus_num << HIGHT_BDF) + info->glb_func_idx;
+ out_oshr_info->func_infos[func_info_idx].bus_num = chip_node->bus_num;
+ out_oshr_info->func_infos[func_info_idx].valid = serivce_enable;
+ memcpy(out_oshr_info->func_infos[func_info_idx].card_name,
+ chip_node->chip_name, IFNAMSIZ);
+}
+
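+/* Collect hot-replace information for every PF on every chip: the PF table is
+ * queried once per chip through the first function in its list, then one
+ * entry is written per PF.
+ */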
+void hinic3_get_os_hot_replace_info(void *oshr_info)
+{
+ struct os_hot_replace_info *out_oshr_info = (struct os_hot_replace_info *)oshr_info;
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dst_dev = NULL;
+ struct hinic3_board_info *board_info = NULL;
+ struct hw_pf_info *infos = NULL;
+ struct hinic3_hw_pf_infos *pf_infos = NULL;
+ u32 func_info_idx = 0, func_id = 0, func_num, serivce_enable = 0;
+ struct list_head *hinic3_chip_list = get_hinic3_chip_list();
+ int err;
+
+ lld_hold();
+ pf_infos = kzalloc(sizeof(struct hinic3_hw_pf_infos), GFP_KERNEL);
+ if (!pf_infos) {
+ pr_err("kzalloc pf_infos fail\n");
+ lld_put();
+ return;
+ }
+ list_for_each_entry(chip_node, hinic3_chip_list, node) {
+ /* query all PF infos in one go through the first PF on this chip */
+ list_for_each_entry(dst_dev, &chip_node->func_list, node) {
+ err = hinic3_get_hw_pf_infos(dst_dev->hwdev, pf_infos, HINIC3_CHANNEL_COMM);
+ if (err != 0) {
+ pr_err("get pf info failed\n");
+ break;
+ }
+
+ serivce_enable = 0;
+ infos = pf_infos->infos;
+ board_info = &((struct hinic3_hwdev *)(dst_dev->hwdev))->board_info;
+ if (((struct hinic3_hwdev *)(dst_dev->hwdev))->hot_replace_mode == HOT_REPLACE_ENABLE) {
+ serivce_enable = 1;
+ }
+ break;
+ }
+
+ func_num = pf_infos->num_pfs;
+ if (func_num <= 0) {
+ pr_err("get pf num failed\n");
+ break;
+ }
+
+ for (func_id = 0; func_id < func_num; func_id++) {
+ hinic3_write_oshr_info(out_oshr_info, &infos[func_id],
+ board_info, chip_node,
+ serivce_enable, func_info_idx);
+ func_info_idx++;
+ }
+ }
+ out_oshr_info->func_cnt = func_info_idx;
+ kfree(pf_infos);
+ lld_put();
+}
+
+struct card_node *hinic3_get_chip_node_by_lld(struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(lld_dev->pdev);
+
+ return pci_adapter->chip_node;
+}
+
+static struct card_node *hinic3_get_chip_node_by_hwdev(const void *hwdev)
+{
+ struct card_node *chip_node = NULL;
+ struct card_node *node_tmp = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!hwdev)
+ return NULL;
+
+ lld_hold();
+
+ list_for_each_entry(node_tmp, &g_hinic3_chip_list, node) {
+ if (!chip_node) {
+ list_for_each_entry(dev, &node_tmp->func_list, node) {
+ if (dev->hwdev == hwdev) {
+ chip_node = node_tmp;
+ break;
+ }
+ }
+ }
+ }
+
+ lld_put();
+
+ return chip_node;
+}
+
+static bool is_func_valid(struct hinic3_pcidev *dev)
+{
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ return false;
+
+ return true;
+}
+
+void hinic3_get_card_info(const void *hwdev, void *bufin)
+{
+ struct card_node *chip_node = NULL;
+ struct card_info *info = (struct card_info *)bufin;
+ struct hinic3_pcidev *dev = NULL;
+ void *fun_hwdev = NULL;
+ u32 i = 0;
+
+ info->pf_num = 0;
+
+ chip_node = hinic3_get_chip_node_by_hwdev(hwdev);
+ if (!chip_node)
+ return;
+
+ lld_hold();
+
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (!is_func_valid(dev))
+ continue;
+
+ fun_hwdev = dev->hwdev;
+
+ if (hinic3_support_nic(fun_hwdev, NULL)) {
+ if (dev->uld_dev[SERVICE_T_NIC]) {
+ info->pf[i].pf_type |= (u32)BIT(SERVICE_T_NIC);
+ get_dynamic_uld_dev_name(dev, SERVICE_T_NIC, info->pf[i].name);
+ }
+ }
+
+ if (hinic3_support_ppa(fun_hwdev, NULL)) {
+ if (dev->uld_dev[SERVICE_T_PPA]) {
+ info->pf[i].pf_type |= (u32)BIT(SERVICE_T_PPA);
+ get_dynamic_uld_dev_name(dev, SERVICE_T_PPA, info->pf[i].name);
+ }
+ }
+
+ if (hinic3_func_for_mgmt(fun_hwdev))
+ strscpy(info->pf[i].name, "FOR_MGMT", IFNAMSIZ);
+
+ if (dev->lld_dev.pdev->subsystem_device == BIFUR_RESOURCE_PF_SSID) {
+ strscpy(info->pf[i].name, "bifur", IFNAMSIZ);
+ }
+
+ strscpy(info->pf[i].bus_info, pci_name(dev->pcidev),
+ sizeof(info->pf[i].bus_info));
+ info->pf_num++;
+ i = info->pf_num;
+ }
+
+ lld_put();
+}
+
+struct hinic3_sriov_info *hinic3_get_sriov_info_by_pcidev(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ return &pci_adapter->sriov_info;
+}
+
+void *hinic3_get_hwdev_by_pcidev(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ return pci_adapter->hwdev;
+}
+
+bool hinic3_is_in_host(void)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) != TYPE_VF) {
+ lld_put();
+ return true;
+ }
+ }
+ }
+
+ lld_put();
+
+ return false;
+}
+
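+/* Find an existing chip node for this device, matching on the (PF) bus number,
+ * or taking any existing node for VF/SPU devices on bus 0; a match is cached
+ * in pci_adapter->chip_node.
+ */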
+static bool chip_node_is_exist(struct hinic3_pcidev *pci_adapter,
+ unsigned char *bus_number)
+{
+ struct card_node *chip_node = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (!pci_is_root_bus(pci_adapter->pcidev->bus))
+ *bus_number = pci_adapter->pcidev->bus->number;
+
+ if (*bus_number != 0) {
+ if (pci_adapter->pcidev->is_virtfn) {
+ pf_pdev = pci_adapter->pcidev->physfn;
+ *bus_number = pf_pdev->bus->number;
+ }
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ if (chip_node->bus_num == *bus_number) {
+ pci_adapter->chip_node = chip_node;
+ return true;
+ }
+ }
+ } else if (HINIC3_IS_VF_DEV(pci_adapter->pcidev) ||
+ HINIC3_IS_SPU_DEV(pci_adapter->pcidev)) {
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ if (chip_node) {
+ pci_adapter->chip_node = chip_node;
+ return true;
+ }
+ }
+ }
+
+ return false;
+}
+
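+/* Attach the device to an existing chip node when one matches its bus,
+ * otherwise allocate a new card id and chip node and add it to the global
+ * chip list.
+ */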
+int alloc_chip_node(struct hinic3_pcidev *pci_adapter)
+{
+ struct card_node *chip_node = NULL;
+ unsigned char i;
+ unsigned char bus_number = 0;
+ int err;
+
+ if (chip_node_is_exist(pci_adapter, &bus_number))
+ return 0;
+
+ for (i = 0; i < CARD_MAX_SIZE; i++) {
+ if (test_and_set_bit(i, &card_bit_map) == 0)
+ break;
+ }
+
+ if (i == CARD_MAX_SIZE) {
+ sdk_err(&pci_adapter->pcidev->dev, "Failed to alloc card id\n");
+ return -EFAULT;
+ }
+
+ chip_node = kzalloc(sizeof(*chip_node), GFP_KERNEL);
+ if (!chip_node) {
+ clear_bit(i, &card_bit_map);
+ sdk_err(&pci_adapter->pcidev->dev,
+ "Failed to alloc chip node\n");
+ return -ENOMEM;
+ }
+
+ /* bus number */
+ chip_node->bus_num = bus_number;
+
+ if (snprintf(chip_node->chip_name, IFNAMSIZ, "%s%u", HINIC3_CHIP_NAME, i) < 0) {
+ clear_bit(i, &card_bit_map);
+ kfree(chip_node);
+ return -EINVAL;
+ }
+
+ err = sscanf(chip_node->chip_name, HINIC3_CHIP_NAME "%d", &(chip_node->chip_id));
+ if (err <= 0) {
+ clear_bit(i, &card_bit_map);
+ kfree(chip_node);
+ return -EINVAL;
+ }
+
+ sdk_info(&pci_adapter->pcidev->dev,
+ "Add new chip %s to global list succeed\n",
+ chip_node->chip_name);
+
+ list_add_tail(&chip_node->node, &g_hinic3_chip_list);
+
+ INIT_LIST_HEAD(&chip_node->func_list);
+ pci_adapter->chip_node = chip_node;
+
+ return 0;
+}
+
+void free_chip_node(struct hinic3_pcidev *pci_adapter)
+{
+ struct card_node *chip_node = pci_adapter->chip_node;
+ int id, err;
+
+ if (list_empty(&chip_node->func_list)) {
+ list_del(&chip_node->node);
+ sdk_info(&pci_adapter->pcidev->dev,
+ "Delete chip %s from global list succeed\n",
+ chip_node->chip_name);
+ err = sscanf(chip_node->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0)
+ sdk_err(&pci_adapter->pcidev->dev, "Failed to get hinic3 id\n");
+
+ clear_bit(id, &card_bit_map);
+
+ kfree(chip_node);
+ }
+}
+
+int hinic3_get_pf_id(struct card_node *chip_node, u32 port_id, u32 *pf_id, u32 *isvalid)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ if (hinic3_physical_port_id(dev->hwdev) == port_id) {
+ *pf_id = hinic3_global_func_id(dev->hwdev);
+ *isvalid = 1;
+ break;
+ }
+ }
+ lld_put();
+
+ return 0;
+}
+
+void hinic3_get_mbox_cnt(const void *hwdev, void *bufin)
+{
+ struct card_node *chip_node = NULL;
+ struct card_mbox_cnt_info *info = (struct card_mbox_cnt_info *)bufin;
+ struct hinic3_pcidev *dev = NULL;
+ struct hinic3_hwdev *func_hwdev = NULL;
+ u32 i = 0;
+
+ info->func_num = 0;
+ chip_node = hinic3_get_chip_node_by_hwdev(hwdev);
+ if (chip_node == NULL)
+ return;
+
+ lld_hold();
+
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ func_hwdev = (struct hinic3_hwdev *)dev->hwdev;
+ strscpy(info->func_info[i].bus_info, pci_name(dev->pcidev),
+ sizeof(info->func_info[i].bus_info));
+
+ info->func_info[i].send_cnt = func_hwdev->mbox_send_cnt;
+ info->func_info[i].ack_cnt = func_hwdev->mbox_ack_cnt;
+ info->func_num++;
+ i = info->func_num;
+ if (i >= ARRAY_SIZE(info->func_info)) {
+ sdk_err(&dev->pcidev->dev, "chip_node->func_list bigger than pf_max + vf_max\n");
+ break;
+ }
+ }
+
+ lld_put();
+}
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h
new file mode 100644
index 0000000..bfa8f3e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_DEV_MGMT_H
+#define HINIC3_DEV_MGMT_H
+#include <linux/types.h>
+#include <linux/bitops.h>
+
+#include "hinic3_sriov.h"
+#include "hinic3_lld.h"
+
+#define HINIC3_VF_PCI_CFG_REG_BAR 0
+#define HINIC3_PF_PCI_CFG_REG_BAR 1
+
+#define HINIC3_PCI_INTR_REG_BAR 2
+#define HINIC3_PCI_MGMT_REG_BAR 3 /* Only PFs have a mgmt bar */
+#define HINIC3_PCI_DB_BAR 4
+
+#define PRINT_ULD_DETACH_TIMEOUT_INTERVAL 1000 /* 1 second */
+#define ULD_LOCK_MIN_USLEEP_TIME 900
+#define ULD_LOCK_MAX_USLEEP_TIME 1000
+#define BIFUR_RESOURCE_PF_SSID 0x05a1
+
+#define HINIC3_IS_VF_DEV(pdev) ((pdev)->device == HINIC3_DEV_ID_VF || \
+ (pdev)->device == HINIC3_DEV_SDI_5_1_ID_VF)
+#define HINIC3_IS_SPU_DEV(pdev) \
+ (((pdev)->device == HINIC3_DEV_ID_SPU) || ((pdev)->device == HINIC3_DEV_ID_SDI_5_0_PF) || \
+ (((pdev)->device == HINIC3_DEV_ID_DPU_PF)))
+
+enum {
+ HINIC3_NOT_PROBE = 1,
+ HINIC3_PROBE_START = 2,
+ HINIC3_PROBE_OK = 3,
+ HINIC3_IN_REMOVE = 4,
+};
+
+/* Structure pcidev private */
+struct hinic3_pcidev {
+ struct pci_dev *pcidev;
+ void *hwdev;
+ struct card_node *chip_node;
+ struct hinic3_lld_dev lld_dev;
+ /* Record the service object addresses,
+ * such as hinic3_dev, toe_dev and fc_dev
+ */
+ void *uld_dev[SERVICE_T_MAX];
+ /* Record the service object name */
+ char uld_dev_name[SERVICE_T_MAX][IFNAMSIZ];
+ /* Node used by the driver to link this device into the
+ * per-chip function device list
+ */
+ struct list_head node;
+
+ bool disable_vf_load;
+ bool disable_srv_load[SERVICE_T_MAX];
+
+ void __iomem *cfg_reg_base;
+ void __iomem *intr_reg_base;
+ void __iomem *mgmt_reg_base;
+ u64 db_dwqe_len;
+ u64 db_base_phy;
+ void __iomem *db_base;
+
+ /* lock for attach/detach uld */
+ struct mutex pdev_mutex;
+ int lld_state;
+ u32 rsvd1;
+
+ struct hinic3_sriov_info sriov_info;
+
+ /* set while the uld driver is processing an event */
+ unsigned long state;
+ struct pci_device_id id;
+
+ atomic_t ref_cnt;
+
+ atomic_t uld_ref_cnt[SERVICE_T_MAX];
+ unsigned long uld_state;
+ spinlock_t uld_lock;
+
+ u16 probe_fault_level;
+ u16 rsvd2;
+ u64 rsvd4;
+
+ struct workqueue_struct *multi_host_mgmt_workq;
+ struct work_struct slave_nic_work;
+ struct work_struct slave_vroce_work;
+
+ struct workqueue_struct *migration_probe_workq;
+ struct delayed_work migration_probe_dwork;
+};
+
+struct hinic_chip_info {
+ u8 chip_id; /* chip id within card */
+ u8 card_type; /* hinic_multi_chip_card_type */
+ u8 rsvd[10]; /* reserved 10 bytes */
+};
+
+struct list_head *get_hinic3_chip_list(void);
+
+int alloc_chip_node(struct hinic3_pcidev *pci_adapter);
+
+void free_chip_node(struct hinic3_pcidev *pci_adapter);
+
+void lld_lock_chip_node(void);
+
+void lld_unlock_chip_node(void);
+
+void hinic3_lld_lock_init(void);
+
+void lld_dev_cnt_init(struct hinic3_pcidev *pci_adapter);
+void wait_lld_dev_unused(struct hinic3_pcidev *pci_adapter);
+
+void *hinic3_get_hwdev_by_pcidev(struct pci_dev *pdev);
+
+int hinic3_bar_mmap_param_valid(phys_addr_t phy_addr, unsigned long vmsize);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c
new file mode 100644
index 0000000..8f9d00a
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c
@@ -0,0 +1,432 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/netlink.h>
+#include <linux/pci.h>
+#include <linux/firmware.h>
+
+#include "hinic3_devlink.h"
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+#include "hinic3_common.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_hw.h"
+
+static bool check_image_valid(struct hinic3_hwdev *hwdev, const u8 *buf,
+ u32 size, struct host_image *host_image)
+{
+ struct firmware_image *fw_image = NULL;
+ u32 len = 0;
+ u32 i;
+
+ fw_image = (struct firmware_image *)buf;
+ if (fw_image->fw_magic != FW_MAGIC_NUM) {
+ sdk_err(hwdev->dev_hdl, "Wrong fw magic read from file, fw_magic: 0x%x\n",
+ fw_image->fw_magic);
+ return false;
+ }
+
+ if (fw_image->fw_info.section_cnt > FW_TYPE_MAX_NUM) {
+ sdk_err(hwdev->dev_hdl, "Wrong fw type number read from file, fw_type_num: 0x%x\n",
+ fw_image->fw_info.section_cnt);
+ return false;
+ }
+
+ for (i = 0; i < fw_image->fw_info.section_cnt; i++) {
+ len += fw_image->section_info[i].section_len;
+ memcpy(&host_image->section_info[i], &fw_image->section_info[i],
+ sizeof(struct firmware_section));
+ }
+
+ if (len != fw_image->fw_len ||
+ (u32)(fw_image->fw_len + FW_IMAGE_HEAD_SIZE) != size) {
+ sdk_err(hwdev->dev_hdl, "Wrong data size read from file\n");
+ return false;
+ }
+
+ host_image->image_info.total_len = fw_image->fw_len;
+ host_image->image_info.fw_version = fw_image->fw_version;
+ host_image->type_num = fw_image->fw_info.section_cnt;
+ host_image->device_id = fw_image->device_id;
+
+ return true;
+}
+
+static bool check_image_integrity(struct hinic3_hwdev *hwdev, struct host_image *host_image)
+{
+ u64 collect_section_type = 0;
+ u32 type, i;
+
+ for (i = 0; i < host_image->type_num; i++) {
+ type = host_image->section_info[i].section_type;
+ if (collect_section_type & (1ULL << type)) {
+ sdk_err(hwdev->dev_hdl, "Duplicate section type: %u\n", type);
+ return false;
+ }
+ collect_section_type |= (1ULL << type);
+ }
+
+ if ((collect_section_type & IMAGE_COLD_SUB_MODULES_MUST_IN) ==
+ IMAGE_COLD_SUB_MODULES_MUST_IN &&
+ (collect_section_type & IMAGE_CFG_SUB_MODULES_MUST_IN) != 0)
+ return true;
+
+ sdk_err(hwdev->dev_hdl, "Failed to check file integrity, valid: 0x%llx, current: 0x%llx\n",
+ (IMAGE_COLD_SUB_MODULES_MUST_IN | IMAGE_CFG_SUB_MODULES_MUST_IN),
+ collect_section_type);
+
+ return false;
+}
+
+static bool check_image_device_type(struct hinic3_hwdev *hwdev, u32 device_type)
+{
+ struct comm_cmd_board_info board_info;
+
+ memset(&board_info, 0, sizeof(board_info));
+ if (hinic3_get_board_info(hwdev, &board_info.info, HINIC3_CHANNEL_COMM)) {
+ sdk_err(hwdev->dev_hdl, "Failed to get board info\n");
+ return false;
+ }
+
+ if (device_type == board_info.info.board_type)
+ return true;
+
+ sdk_err(hwdev->dev_hdl, "The image device type: 0x%x doesn't match the firmware device type: 0x%x\n",
+ device_type, board_info.info.board_type);
+
+ return false;
+}
+
+static void encapsulate_update_cmd(struct hinic3_cmd_update_firmware *msg,
+ struct firmware_section *section_info,
+ const int *remain_len, u32 *send_len,
+ u32 *send_pos)
+{
+ memset(msg->data, 0, sizeof(msg->data));
+ msg->ctl_info.sf = (*remain_len == section_info->section_len) ? true : false;
+ msg->section_info.section_crc = section_info->section_crc;
+ msg->section_info.section_type = section_info->section_type;
+ msg->section_version = section_info->section_version;
+ msg->section_len = section_info->section_len;
+ msg->section_offset = *send_pos;
+ msg->ctl_info.bit_signed = section_info->section_flag & 0x1;
+
+ if (*remain_len <= FW_FRAGMENT_MAX_LEN) {
+ msg->ctl_info.sl = true;
+ msg->ctl_info.fragment_len = (u32)(*remain_len);
+ *send_len += section_info->section_len;
+ } else {
+ msg->ctl_info.sl = false;
+ msg->ctl_info.fragment_len = FW_FRAGMENT_MAX_LEN;
+ *send_len += FW_FRAGMENT_MAX_LEN;
+ }
+}
+
+static int hinic3_flash_firmware(struct hinic3_hwdev *hwdev, const u8 *data,
+ struct host_image *image)
+{
+ u32 send_pos, send_len, section_offset, i;
+ struct hinic3_cmd_update_firmware *update_msg = NULL;
+ u16 out_size = sizeof(*update_msg);
+ bool total_flag = false;
+ int remain_len, err;
+
+ update_msg = kzalloc(sizeof(*update_msg), GFP_KERNEL);
+ if (!update_msg) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc update message\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < image->type_num; i++) {
+ section_offset = image->section_info[i].section_offset;
+ remain_len = (int)(image->section_info[i].section_len);
+ send_len = 0;
+ send_pos = 0;
+
+ while (remain_len > 0) {
+ if (!total_flag) {
+ update_msg->total_len = image->image_info.total_len;
+ total_flag = true;
+ } else {
+ update_msg->total_len = 0;
+ }
+
+ encapsulate_update_cmd(update_msg, &image->section_info[i],
+ &remain_len, &send_len, &send_pos);
+
+ memcpy(update_msg->data,
+ ((data + FW_IMAGE_HEAD_SIZE) + section_offset) + send_pos,
+ update_msg->ctl_info.fragment_len);
+
+ err = hinic3_pf_to_mgmt_sync(hwdev, HINIC3_MOD_COMM,
+ COMM_MGMT_CMD_UPDATE_FW,
+ update_msg, sizeof(*update_msg),
+ update_msg, &out_size,
+ FW_UPDATE_MGMT_TIMEOUT);
+ if (err || !out_size || update_msg->msg_head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to update firmware, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, update_msg->msg_head.status, out_size);
+ err = update_msg->msg_head.status ?
+ update_msg->msg_head.status : -EIO;
+ kfree(update_msg);
+ return err;
+ }
+
+ send_pos = send_len;
+ remain_len = (int)(image->section_info[i].section_len - send_len);
+ }
+ }
+
+ kfree(update_msg);
+
+ return 0;
+}
+
+static int hinic3_flash_update_notify(struct devlink *devlink, const struct firmware *fw,
+ struct host_image *image, struct netlink_ext_ack *extack)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ int err;
+
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ devlink_flash_update_begin_notify(devlink);
+#endif
+ devlink_flash_update_status_notify(devlink, "Flash firmware begin", NULL, 0, 0);
+ sdk_info(hwdev->dev_hdl, "Flash firmware begin\n");
+ err = hinic3_flash_firmware(hwdev, fw->data, image);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to flash firmware, err: %d\n", err);
+ NL_SET_ERR_MSG_MOD(extack, "Flash firmware failed");
+ devlink_flash_update_status_notify(devlink, "Flash firmware failed", NULL, 0, 0);
+ } else {
+ sdk_info(hwdev->dev_hdl, "Flash firmware end\n");
+ devlink_flash_update_status_notify(devlink, "Flash firmware end", NULL, 0, 0);
+ }
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ devlink_flash_update_end_notify(devlink);
+#endif
+
+ return err;
+}
+
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_PARAM
+static int hinic3_devlink_flash_update(struct devlink *devlink, const char *file_name,
+ const char *component, struct netlink_ext_ack *extack)
+#else
+static int hinic3_devlink_flash_update(struct devlink *devlink,
+ struct devlink_flash_update_params *params,
+ struct netlink_ext_ack *extack)
+#endif
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ const struct firmware *fw = NULL;
+#else
+ const struct firmware *fw = params->fw;
+#endif
+ struct host_image *image = NULL;
+ int err;
+
+ image = kzalloc(sizeof(*image), GFP_KERNEL);
+ if (!image) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc host image\n");
+ err = -ENOMEM;
+ goto devlink_param_reset;
+ }
+
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_PARAM
+ err = request_firmware_direct(&fw, file_name, hwdev->dev_hdl);
+#else
+ err = request_firmware_direct(&fw, params->file_name, hwdev->dev_hdl);
+#endif
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to request firmware\n");
+ goto devlink_request_fw_err;
+ }
+#endif
+
+ if (!check_image_valid(hwdev, fw->data, (u32)(fw->size), image) ||
+ !check_image_integrity(hwdev, image) ||
+ !check_image_device_type(hwdev, image->device_id)) {
+ sdk_err(hwdev->dev_hdl, "Failed to check image\n");
+ NL_SET_ERR_MSG_MOD(extack, "Check image failed");
+ err = -EINVAL;
+ goto devlink_update_out;
+ }
+
+ err = hinic3_flash_update_notify(devlink, fw, image, extack);
+
+devlink_update_out:
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ release_firmware(fw);
+
+devlink_request_fw_err:
+#endif
+ kfree(image);
+
+devlink_param_reset:
+ /* reset activate_fw and switch_cfg after flash update operation */
+ devlink_dev->activate_fw = FW_CFG_DEFAULT_INDEX;
+ devlink_dev->switch_cfg = FW_CFG_DEFAULT_INDEX;
+
+ return err;
+}
+
+static const struct devlink_ops hinic3_devlink_ops = {
+ .flash_update = hinic3_devlink_flash_update,
+};
+
+static int hinic3_devlink_get_activate_firmware_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+
+ ctx->val.vu8 = devlink_dev->activate_fw;
+
+ return 0;
+}
+
+static int hinic3_devlink_set_activate_firmware_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ int err;
+
+ devlink_dev->activate_fw = ctx->val.vu8;
+ sdk_info(hwdev->dev_hdl, "Activate firmware begin\n");
+
+ err = hinic3_activate_firmware(hwdev, devlink_dev->activate_fw);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to activate firmware, err: %d\n", err);
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Activate firmware end\n");
+
+ return 0;
+}
+
+static int hinic3_devlink_get_switch_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+
+ ctx->val.vu8 = devlink_dev->switch_cfg;
+
+ return 0;
+}
+
+static int hinic3_devlink_set_switch_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ int err;
+
+ devlink_dev->switch_cfg = ctx->val.vu8;
+ sdk_info(hwdev->dev_hdl, "Switch cfg begin");
+
+ err = hinic3_switch_config(hwdev, devlink_dev->switch_cfg);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to switch cfg, err: %d\n", err);
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Switch cfg end\n");
+
+ return 0;
+}
+
+static int hinic3_devlink_firmware_config_validate(struct devlink *devlink, u32 id,
+ union devlink_param_value val,
+ struct netlink_ext_ack *extack)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ u8 cfg_index = val.vu8;
+
+ if (cfg_index > FW_CFG_MAX_INDEX) {
+ sdk_err(hwdev->dev_hdl, "Firmware cfg index out of range [0,7]\n");
+ NL_SET_ERR_MSG_MOD(extack, "Firmware cfg index out of range [0,7]");
+ return -ERANGE;
+ }
+
+ return 0;
+}
+
+static const struct devlink_param hinic3_devlink_params[] = {
+ DEVLINK_PARAM_DRIVER(HINIC3_DEVLINK_PARAM_ID_ACTIVATE_FW,
+ "activate_fw", DEVLINK_PARAM_TYPE_U8,
+ BIT(DEVLINK_PARAM_CMODE_PERMANENT),
+ hinic3_devlink_get_activate_firmware_config,
+ hinic3_devlink_set_activate_firmware_config,
+ hinic3_devlink_firmware_config_validate),
+ DEVLINK_PARAM_DRIVER(HINIC3_DEVLINK_PARAM_ID_SWITCH_CFG,
+ "switch_cfg", DEVLINK_PARAM_TYPE_U8,
+ BIT(DEVLINK_PARAM_CMODE_PERMANENT),
+ hinic3_devlink_get_switch_config,
+ hinic3_devlink_set_switch_config,
+ hinic3_devlink_firmware_config_validate),
+};
+
+int hinic3_init_devlink(struct hinic3_hwdev *hwdev)
+{
+ struct devlink *devlink = NULL;
+ struct pci_dev *pdev = NULL;
+ int err;
+
+ pdev = hwdev->hwif->pdev;
+ devlink = devlink_alloc(&hinic3_devlink_ops, sizeof(struct hinic3_devlink));
+ if (!devlink) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc devlink\n");
+ return -ENOMEM;
+ }
+
+ hwdev->devlink_dev = devlink_priv(devlink);
+ hwdev->devlink_dev->hwdev = hwdev;
+ hwdev->devlink_dev->activate_fw = FW_CFG_DEFAULT_INDEX;
+ hwdev->devlink_dev->switch_cfg = FW_CFG_DEFAULT_INDEX;
+
+ err = devlink_register(devlink, &pdev->dev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to register devlink\n");
+ goto register_devlink_err;
+ }
+
+ err = devlink_params_register(devlink, hinic3_devlink_params,
+ ARRAY_SIZE(hinic3_devlink_params));
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to register devlink params\n");
+ goto register_devlink_params_err;
+ }
+
+ devlink_params_publish(devlink);
+
+ return 0;
+
+register_devlink_params_err:
+ devlink_unregister(devlink);
+
+register_devlink_err:
+ devlink_free(devlink);
+
+ return err;
+}
+
+void hinic3_uninit_devlink(struct hinic3_hwdev *hwdev)
+{
+ struct devlink *devlink = priv_to_devlink(hwdev->devlink_dev);
+
+ devlink_params_unpublish(devlink);
+ devlink_params_unregister(devlink, hinic3_devlink_params,
+ ARRAY_SIZE(hinic3_devlink_params));
+ devlink_unregister(devlink);
+ devlink_free(devlink);
+}
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h
new file mode 100644
index 0000000..68dd0fb
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h
@@ -0,0 +1,173 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_DEVLINK_H
+#define HINIC3_DEVLINK_H
+
+#include "ossl_knl.h"
+#include "hinic3_hwdev.h"
+
+#define FW_MAGIC_NUM 0x5a5a1100
+#define FW_IMAGE_HEAD_SIZE 4096
+#define FW_FRAGMENT_MAX_LEN 1536
+#define FW_CFG_DEFAULT_INDEX 0xFF
+#define FW_TYPE_MAX_NUM 0x40
+#define FW_CFG_MAX_INDEX 7
+
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+enum hinic3_devlink_param_id {
+ HINIC3_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
+ HINIC3_DEVLINK_PARAM_ID_ACTIVATE_FW,
+ HINIC3_DEVLINK_PARAM_ID_SWITCH_CFG,
+};
+#endif
+
+enum hinic3_firmware_type {
+ UP_FW_UPDATE_MIN_TYPE1 = 0x0,
+ UP_FW_UPDATE_UP_TEXT = 0x0,
+ UP_FW_UPDATE_UP_DATA = 0x1,
+ UP_FW_UPDATE_UP_DICT = 0x2,
+ UP_FW_UPDATE_TILE_PCPTR = 0x3,
+ UP_FW_UPDATE_TILE_TEXT = 0x4,
+ UP_FW_UPDATE_TILE_DATA = 0x5,
+ UP_FW_UPDATE_TILE_DICT = 0x6,
+ UP_FW_UPDATE_PPE_STATE = 0x7,
+ UP_FW_UPDATE_PPE_BRANCH = 0x8,
+ UP_FW_UPDATE_PPE_EXTACT = 0x9,
+ UP_FW_UPDATE_MAX_TYPE1 = 0x9,
+ UP_FW_UPDATE_CFG0 = 0xa,
+ UP_FW_UPDATE_CFG1 = 0xb,
+ UP_FW_UPDATE_CFG2 = 0xc,
+ UP_FW_UPDATE_CFG3 = 0xd,
+ UP_FW_UPDATE_MAX_TYPE1_CFG = 0xd,
+
+ UP_FW_UPDATE_MIN_TYPE2 = 0x14,
+ UP_FW_UPDATE_MAX_TYPE2 = 0x14,
+
+ UP_FW_UPDATE_MIN_TYPE3 = 0x18,
+ UP_FW_UPDATE_PHY = 0x18,
+ UP_FW_UPDATE_BIOS = 0x19,
+ UP_FW_UPDATE_HLINK_ONE = 0x1a,
+ UP_FW_UPDATE_HLINK_TWO = 0x1b,
+ UP_FW_UPDATE_HLINK_THR = 0x1c,
+ UP_FW_UPDATE_MAX_TYPE3 = 0x1c,
+
+ UP_FW_UPDATE_MIN_TYPE4 = 0x20,
+ UP_FW_UPDATE_L0FW = 0x20,
+ UP_FW_UPDATE_L1FW = 0x21,
+ UP_FW_UPDATE_BOOT = 0x22,
+ UP_FW_UPDATE_SEC_DICT = 0x23,
+ UP_FW_UPDATE_HOT_PATCH0 = 0x24,
+ UP_FW_UPDATE_HOT_PATCH1 = 0x25,
+ UP_FW_UPDATE_HOT_PATCH2 = 0x26,
+ UP_FW_UPDATE_HOT_PATCH3 = 0x27,
+ UP_FW_UPDATE_HOT_PATCH4 = 0x28,
+ UP_FW_UPDATE_HOT_PATCH5 = 0x29,
+ UP_FW_UPDATE_HOT_PATCH6 = 0x2a,
+ UP_FW_UPDATE_HOT_PATCH7 = 0x2b,
+ UP_FW_UPDATE_HOT_PATCH8 = 0x2c,
+ UP_FW_UPDATE_HOT_PATCH9 = 0x2d,
+ UP_FW_UPDATE_HOT_PATCH10 = 0x2e,
+ UP_FW_UPDATE_HOT_PATCH11 = 0x2f,
+ UP_FW_UPDATE_HOT_PATCH12 = 0x30,
+ UP_FW_UPDATE_HOT_PATCH13 = 0x31,
+ UP_FW_UPDATE_HOT_PATCH14 = 0x32,
+ UP_FW_UPDATE_HOT_PATCH15 = 0x33,
+ UP_FW_UPDATE_HOT_PATCH16 = 0x34,
+ UP_FW_UPDATE_HOT_PATCH17 = 0x35,
+ UP_FW_UPDATE_HOT_PATCH18 = 0x36,
+ UP_FW_UPDATE_HOT_PATCH19 = 0x37,
+ UP_FW_UPDATE_MAX_TYPE4 = 0x37,
+
+ UP_FW_UPDATE_MIN_TYPE5 = 0x3a,
+ UP_FW_UPDATE_OPTION_ROM = 0x3a,
+ UP_FW_UPDATE_MAX_TYPE5 = 0x3a,
+
+ UP_FW_UPDATE_MIN_TYPE6 = 0x3e,
+ UP_FW_UPDATE_MAX_TYPE6 = 0x3e,
+
+ UP_FW_UPDATE_MIN_TYPE7 = 0x40,
+ UP_FW_UPDATE_MAX_TYPE7 = 0x40,
+};
+
+#define IMAGE_MPU_ALL_IN (BIT_ULL(UP_FW_UPDATE_UP_TEXT) | \
+ BIT_ULL(UP_FW_UPDATE_UP_DATA) | \
+ BIT_ULL(UP_FW_UPDATE_UP_DICT))
+
+#define IMAGE_NPU_ALL_IN (BIT_ULL(UP_FW_UPDATE_TILE_PCPTR) | \
+ BIT_ULL(UP_FW_UPDATE_TILE_TEXT) | \
+ BIT_ULL(UP_FW_UPDATE_TILE_DATA) | \
+ BIT_ULL(UP_FW_UPDATE_TILE_DICT) | \
+ BIT_ULL(UP_FW_UPDATE_PPE_STATE) | \
+ BIT_ULL(UP_FW_UPDATE_PPE_BRANCH) | \
+ BIT_ULL(UP_FW_UPDATE_PPE_EXTACT))
+
+#define IMAGE_COLD_SUB_MODULES_MUST_IN (IMAGE_MPU_ALL_IN | IMAGE_NPU_ALL_IN)
+
+#define IMAGE_CFG_SUB_MODULES_MUST_IN (BIT_ULL(UP_FW_UPDATE_CFG0) | \
+ BIT_ULL(UP_FW_UPDATE_CFG1) | \
+ BIT_ULL(UP_FW_UPDATE_CFG2) | \
+ BIT_ULL(UP_FW_UPDATE_CFG3))
+
+struct firmware_section {
+ u32 section_len;
+ u32 section_offset;
+ u32 section_version;
+ u32 section_type;
+ u32 section_crc;
+ u32 section_flag;
+};
+
+struct firmware_image {
+ u32 fw_version;
+ u32 fw_len;
+ u32 fw_magic;
+ struct {
+ u32 section_cnt : 16;
+ u32 rsvd : 16;
+ } fw_info;
+ struct firmware_section section_info[FW_TYPE_MAX_NUM];
+ u32 device_id; /* cfg fw board_type value */
+ u32 rsvd0[101]; /* device_id and rsvd0[101] are update_head_extend_info */
+ u32 rsvd1[534]; /* big bin file total size 4096B */
+ u32 bin_data; /* obtain the address for use */
+};
+
+struct host_image {
+ struct firmware_section section_info[FW_TYPE_MAX_NUM];
+ struct {
+ u32 total_len;
+ u32 fw_version;
+ } image_info;
+ u32 type_num;
+ u32 device_id;
+};
+
+struct hinic3_cmd_update_firmware {
+ struct mgmt_msg_head msg_head;
+
+ struct {
+ u32 sl : 1;
+ u32 sf : 1;
+ u32 flag : 1;
+ u32 bit_signed : 1;
+ u32 reserved : 12;
+ u32 fragment_len : 16;
+ } ctl_info;
+
+ struct {
+ u32 section_crc;
+ u32 section_type;
+ } section_info;
+
+ u32 total_len;
+ u32 section_len;
+ u32 section_version;
+ u32 section_offset;
+ u32 data[384];
+};
+
+int hinic3_init_devlink(struct hinic3_hwdev *hwdev);
+void hinic3_uninit_devlink(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c
new file mode 100644
index 0000000..caa99e3
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c
@@ -0,0 +1,1422 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_hw.h"
+#include "hinic3_csr.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_eqs.h"
+
+#include "vram_common.h"
+
+#define HINIC3_EQS_WQ_NAME "hinic3_eqs"
+
+#define AEQ_CTRL_0_INTR_IDX_SHIFT 0
+#define AEQ_CTRL_0_DMA_ATTR_SHIFT 12
+#define AEQ_CTRL_0_PCI_INTF_IDX_SHIFT 20
+#define AEQ_CTRL_0_INTR_MODE_SHIFT 31
+
+#define AEQ_CTRL_0_INTR_IDX_MASK 0x3FFU
+#define AEQ_CTRL_0_DMA_ATTR_MASK 0x3FU
+#define AEQ_CTRL_0_PCI_INTF_IDX_MASK 0x7U
+#define AEQ_CTRL_0_INTR_MODE_MASK 0x1U
+
+#define AEQ_CTRL_0_SET(val, member) \
+ (((val) & AEQ_CTRL_0_##member##_MASK) << \
+ AEQ_CTRL_0_##member##_SHIFT)
+
+#define AEQ_CTRL_0_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_0_##member##_MASK << \
+ AEQ_CTRL_0_##member##_SHIFT)))
+
+#define AEQ_CTRL_1_LEN_SHIFT 0
+#define AEQ_CTRL_1_ELEM_SIZE_SHIFT 24
+#define AEQ_CTRL_1_PAGE_SIZE_SHIFT 28
+
+#define AEQ_CTRL_1_LEN_MASK 0x1FFFFFU
+#define AEQ_CTRL_1_ELEM_SIZE_MASK 0x3U
+#define AEQ_CTRL_1_PAGE_SIZE_MASK 0xFU
+
+#define AEQ_CTRL_1_SET(val, member) \
+ (((val) & AEQ_CTRL_1_##member##_MASK) << \
+ AEQ_CTRL_1_##member##_SHIFT)
+
+#define AEQ_CTRL_1_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_1_##member##_MASK << \
+ AEQ_CTRL_1_##member##_SHIFT)))
+
+#define HINIC3_EQ_PROD_IDX_MASK 0xFFFFF
+#define HINIC3_TASK_PROCESS_EQE_LIMIT 1024
+#define HINIC3_EQ_UPDATE_CI_STEP 64
+
+static uint g_aeq_len = HINIC3_DEFAULT_AEQ_LEN;
+module_param(g_aeq_len, uint, 0444);
+MODULE_PARM_DESC(g_aeq_len,
+ "aeq depth, valid range is " __stringify(HINIC3_MIN_AEQ_LEN)
+ " - " __stringify(HINIC3_MAX_AEQ_LEN));
+
+static uint g_ceq_len = HINIC3_DEFAULT_CEQ_LEN;
+module_param(g_ceq_len, uint, 0444);
+MODULE_PARM_DESC(g_ceq_len,
+ "ceq depth, valid range is " __stringify(HINIC3_MIN_CEQ_LEN)
+ " - " __stringify(HINIC3_MAX_CEQ_LEN));
+
+static uint g_num_ceqe_in_tasklet = HINIC3_TASK_PROCESS_EQE_LIMIT;
+module_param(g_num_ceqe_in_tasklet, uint, 0444);
+MODULE_PARM_DESC(g_num_ceqe_in_tasklet,
+ "The max number of ceqe can be processed in tasklet, default = 1024");
+
+#define CEQ_CTRL_0_INTR_IDX_SHIFT 0
+#define CEQ_CTRL_0_DMA_ATTR_SHIFT 12
+#define CEQ_CTRL_0_LIMIT_KICK_SHIFT 20
+#define CEQ_CTRL_0_PCI_INTF_IDX_SHIFT 24
+#define CEQ_CTRL_0_PAGE_SIZE_SHIFT 27
+#define CEQ_CTRL_0_INTR_MODE_SHIFT 31
+
+#define CEQ_CTRL_0_INTR_IDX_MASK 0x3FFU
+#define CEQ_CTRL_0_DMA_ATTR_MASK 0x3FU
+#define CEQ_CTRL_0_LIMIT_KICK_MASK 0xFU
+#define CEQ_CTRL_0_PCI_INTF_IDX_MASK 0x3U
+#define CEQ_CTRL_0_PAGE_SIZE_MASK 0xF
+#define CEQ_CTRL_0_INTR_MODE_MASK 0x1U
+
+#define CEQ_CTRL_0_SET(val, member) \
+ (((val) & CEQ_CTRL_0_##member##_MASK) << \
+ CEQ_CTRL_0_##member##_SHIFT)
+
+#define CEQ_CTRL_1_LEN_SHIFT 0
+#define CEQ_CTRL_1_GLB_FUNC_ID_SHIFT 20
+
+#define CEQ_CTRL_1_LEN_MASK 0xFFFFFU
+#define CEQ_CTRL_1_GLB_FUNC_ID_MASK 0xFFFU
+
+#define CEQ_CTRL_1_SET(val, member) \
+ (((val) & CEQ_CTRL_1_##member##_MASK) << \
+ CEQ_CTRL_1_##member##_SHIFT)
+
+#define EQ_ELEM_DESC_TYPE_SHIFT 0
+#define EQ_ELEM_DESC_SRC_SHIFT 7
+#define EQ_ELEM_DESC_SIZE_SHIFT 8
+#define EQ_ELEM_DESC_WRAPPED_SHIFT 31
+
+#define EQ_ELEM_DESC_TYPE_MASK 0x7FU
+#define EQ_ELEM_DESC_SRC_MASK 0x1U
+#define EQ_ELEM_DESC_SIZE_MASK 0xFFU
+#define EQ_ELEM_DESC_WRAPPED_MASK 0x1U
+
+#define EQ_ELEM_DESC_GET(val, member) \
+ (((val) >> EQ_ELEM_DESC_##member##_SHIFT) & \
+ EQ_ELEM_DESC_##member##_MASK)
+
+#define EQ_CONS_IDX_CONS_IDX_SHIFT 0
+#define EQ_CONS_IDX_INT_ARMED_SHIFT 31
+
+#define EQ_CONS_IDX_CONS_IDX_MASK 0x1FFFFFU
+#define EQ_CONS_IDX_INT_ARMED_MASK 0x1U
+
+#define EQ_CONS_IDX_SET(val, member) \
+ (((val) & EQ_CONS_IDX_##member##_MASK) << \
+ EQ_CONS_IDX_##member##_SHIFT)
+
+#define EQ_CONS_IDX_CLEAR(val, member) \
+ ((val) & (~(EQ_CONS_IDX_##member##_MASK << \
+ EQ_CONS_IDX_##member##_SHIFT)))
+
+#define EQ_CI_SIMPLE_INDIR_CI_SHIFT 0
+#define EQ_CI_SIMPLE_INDIR_ARMED_SHIFT 21
+#define EQ_CI_SIMPLE_INDIR_AEQ_IDX_SHIFT 30
+#define EQ_CI_SIMPLE_INDIR_CEQ_IDX_SHIFT 24
+
+#define EQ_CI_SIMPLE_INDIR_CI_MASK 0x1FFFFFU
+#define EQ_CI_SIMPLE_INDIR_ARMED_MASK 0x1U
+#define EQ_CI_SIMPLE_INDIR_AEQ_IDX_MASK 0x3U
+#define EQ_CI_SIMPLE_INDIR_CEQ_IDX_MASK 0xFFU
+
+#define EQ_CI_SIMPLE_INDIR_SET(val, member) \
+ (((val) & EQ_CI_SIMPLE_INDIR_##member##_MASK) << \
+ EQ_CI_SIMPLE_INDIR_##member##_SHIFT)
+
+#define EQ_CI_SIMPLE_INDIR_CLEAR(val, member) \
+ ((val) & (~(EQ_CI_SIMPLE_INDIR_##member##_MASK << \
+ EQ_CI_SIMPLE_INDIR_##member##_SHIFT)))
+
+#define EQ_WRAPPED(eq) ((u32)(eq)->wrapped << EQ_VALID_SHIFT)
+
+#define EQ_CONS_IDX(eq) ((eq)->cons_idx | \
+ ((u32)(eq)->wrapped << EQ_WRAPPED_SHIFT))
+
+#define EQ_CONS_IDX_REG_ADDR(eq) \
+ (((eq)->type == HINIC3_AEQ) ? \
+ HINIC3_CSR_AEQ_CONS_IDX_ADDR : \
+ HINIC3_CSR_CEQ_CONS_IDX_ADDR)
+#define EQ_CI_SIMPLE_INDIR_REG_ADDR(eq) \
+ (((eq)->type == HINIC3_AEQ) ? \
+ HINIC3_CSR_AEQ_CI_SIMPLE_INDIR_ADDR : \
+ HINIC3_CSR_CEQ_CI_SIMPLE_INDIR_ADDR)
+
+#define EQ_PROD_IDX_REG_ADDR(eq) \
+ (((eq)->type == HINIC3_AEQ) ? \
+ HINIC3_CSR_AEQ_PROD_IDX_ADDR : \
+ HINIC3_CSR_CEQ_PROD_IDX_ADDR)
+
+#define HINIC3_EQ_HI_PHYS_ADDR_REG(type, pg_num) \
+ ((u32)((type == HINIC3_AEQ) ? \
+ HINIC3_AEQ_HI_PHYS_ADDR_REG(pg_num) : \
+ HINIC3_CEQ_HI_PHYS_ADDR_REG(pg_num)))
+
+#define HINIC3_EQ_LO_PHYS_ADDR_REG(type, pg_num) \
+ ((u32)((type == HINIC3_AEQ) ? \
+ HINIC3_AEQ_LO_PHYS_ADDR_REG(pg_num) : \
+ HINIC3_CEQ_LO_PHYS_ADDR_REG(pg_num)))
+
+#define GET_EQ_NUM_PAGES(eq, size) \
+ ((u16)(ALIGN((u32)((eq)->eq_len * (eq)->elem_size), \
+ (size)) / (size)))
+
+#define HINIC3_EQ_MAX_PAGES(eq) \
+ ((eq)->type == HINIC3_AEQ ? HINIC3_AEQ_MAX_PAGES : \
+ HINIC3_CEQ_MAX_PAGES)
+
+#define GET_EQ_NUM_ELEMS(eq, pg_size) ((pg_size) / (u32)(eq)->elem_size)
+
+#define GET_EQ_ELEMENT(eq, idx) \
+ (((u8 *)(eq)->eq_pages[(idx) / (eq)->num_elem_in_pg].align_vaddr) + \
+ (u32)(((idx) & ((eq)->num_elem_in_pg - 1)) * (eq)->elem_size))
+
+#define GET_AEQ_ELEM(eq, idx) \
+ ((struct hinic3_aeq_elem *)GET_EQ_ELEMENT((eq), (idx)))
+
+#define GET_CEQ_ELEM(eq, idx) ((u32 *)GET_EQ_ELEMENT((eq), (idx)))
+
+#define GET_CURR_AEQ_ELEM(eq) GET_AEQ_ELEM((eq), (eq)->cons_idx)
+
+#define GET_CURR_CEQ_ELEM(eq) GET_CEQ_ELEM((eq), (eq)->cons_idx)
+
+#define PAGE_IN_4K(page_size) ((page_size) >> 12)
+#define EQ_SET_HW_PAGE_SIZE_VAL(eq) \
+ ((u32)ilog2(PAGE_IN_4K((eq)->page_size)))
+
+#define ELEMENT_SIZE_IN_32B(eq) (((eq)->elem_size) >> 5)
+#define EQ_SET_HW_ELEM_SIZE_VAL(eq) ((u32)ilog2(ELEMENT_SIZE_IN_32B(eq)))
+
+#define AEQ_DMA_ATTR_DEFAULT 0
+#define CEQ_DMA_ATTR_DEFAULT 0
+
+#define CEQ_LMT_KICK_DEFAULT 0
+
+#define EQ_MSIX_RESEND_TIMER_CLEAR 1
+
+#define EQ_WRAPPED_SHIFT 20
+
+#define EQ_VALID_SHIFT 31
+
+#define CEQE_TYPE_SHIFT 23
+#define CEQE_TYPE_MASK 0x7
+
+#define CEQE_TYPE(type) (((type) >> CEQE_TYPE_SHIFT) & \
+ CEQE_TYPE_MASK)
+
+#define CEQE_DATA_MASK 0x3FFFFFF
+#define CEQE_DATA(data) ((data) & CEQE_DATA_MASK)
+
+#define aeq_to_aeqs(eq) \
+ container_of((eq) - (eq)->q_id, struct hinic3_aeqs, aeq[0])
+
+#define ceq_to_ceqs(eq) \
+ container_of((eq) - (eq)->q_id, struct hinic3_ceqs, ceq[0])
+
+static irqreturn_t ceq_interrupt(int irq, void *data);
+static irqreturn_t aeq_interrupt(int irq, void *data);
+
+static void ceq_tasklet(ulong ceq_data);
+
+/**
+ * hinic3_aeq_register_hw_cb - register aeq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @pri_handle: the pointer to private invoker device
+ * @event: event for the handler
+ * @hwe_cb: callback function
+ **/
+int hinic3_aeq_register_hw_cb(void *hwdev, void *pri_handle, enum hinic3_aeq_type event,
+ hinic3_aeq_hwe_cb hwe_cb)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || !hwe_cb || event >= HINIC3_MAX_AEQ_EVENTS)
+ return -EINVAL;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ aeqs->aeq_hwe_cb[event] = hwe_cb;
+ aeqs->aeq_hwe_cb_data[event] = pri_handle;
+
+ set_bit(HINIC3_AEQ_HW_CB_REG, &aeqs->aeq_hw_cb_state[event]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_aeq_register_hw_cb);
+
+/**
+ * hinic3_aeq_unregister_hw_cb - unregister the aeq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @event: event for the handler
+ **/
+void hinic3_aeq_unregister_hw_cb(void *hwdev, enum hinic3_aeq_type event)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_AEQ_EVENTS)
+ return;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ clear_bit(HINIC3_AEQ_HW_CB_REG, &aeqs->aeq_hw_cb_state[event]);
+
+ while (test_bit(HINIC3_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]))
+ usleep_range(EQ_USLEEP_LOW_BOUND, EQ_USLEEP_HIG_BOUND);
+
+ aeqs->aeq_hwe_cb[event] = NULL;
+}
+EXPORT_SYMBOL(hinic3_aeq_unregister_hw_cb);
+
+/**
+ * hinic3_aeq_register_swe_cb - register aeq callback for sw event
+ * @hwdev: the pointer to hw device
+ * @pri_handle: the pointer to private invoker device
+ * @event: soft event for the handler
+ * @aeq_swe_cb: callback function
+ **/
+int hinic3_aeq_register_swe_cb(void *hwdev, void *pri_handle, enum hinic3_aeq_sw_type event,
+ hinic3_aeq_swe_cb aeq_swe_cb)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || !aeq_swe_cb || event >= HINIC3_MAX_AEQ_SW_EVENTS)
+ return -EINVAL;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ aeqs->aeq_swe_cb[event] = aeq_swe_cb;
+ aeqs->aeq_swe_cb_data[event] = pri_handle;
+
+ set_bit(HINIC3_AEQ_SW_CB_REG, &aeqs->aeq_sw_cb_state[event]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_aeq_register_swe_cb);
+
+/**
+ * hinic3_aeq_unregister_swe_cb - unregister the aeq callback for sw event
+ * @hwdev: the pointer to hw device
+ * @event: soft event for the handler
+ **/
+void hinic3_aeq_unregister_swe_cb(void *hwdev, enum hinic3_aeq_sw_type event)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_AEQ_SW_EVENTS)
+ return;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ clear_bit(HINIC3_AEQ_SW_CB_REG, &aeqs->aeq_sw_cb_state[event]);
+
+ while (test_bit(HINIC3_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[event]))
+ usleep_range(EQ_USLEEP_LOW_BOUND, EQ_USLEEP_HIG_BOUND);
+
+ aeqs->aeq_swe_cb[event] = NULL;
+}
+EXPORT_SYMBOL(hinic3_aeq_unregister_swe_cb);
+
+/**
+ * hinic3_ceq_register_cb - register ceq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @pri_handle: the pointer to private invoker device
+ * @event: event for the handler
+ * @callback: callback function
+ **/
+int hinic3_ceq_register_cb(void *hwdev, void *pri_handle, enum hinic3_ceq_event event,
+ hinic3_ceq_event_cb callback)
+{
+ struct hinic3_ceqs *ceqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_CEQ_EVENTS)
+ return -EINVAL;
+
+ ceqs = ((struct hinic3_hwdev *)hwdev)->ceqs;
+
+ ceqs->ceq_cb[event] = callback;
+ ceqs->ceq_cb_data[event] = pri_handle;
+
+ set_bit(HINIC3_CEQ_CB_REG, &ceqs->ceq_cb_state[event]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_ceq_register_cb);
+
+/**
+ * hinic3_ceq_unregister_cb - unregister ceq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @event: event for the handler
+ **/
+void hinic3_ceq_unregister_cb(void *hwdev, enum hinic3_ceq_event event)
+{
+ struct hinic3_ceqs *ceqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_CEQ_EVENTS)
+ return;
+
+ ceqs = ((struct hinic3_hwdev *)hwdev)->ceqs;
+
+ clear_bit(HINIC3_CEQ_CB_REG, &ceqs->ceq_cb_state[event]);
+
+ while (test_bit(HINIC3_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]))
+ usleep_range(EQ_USLEEP_LOW_BOUND, EQ_USLEEP_HIG_BOUND);
+
+ ceqs->ceq_cb[event] = NULL;
+}
+EXPORT_SYMBOL(hinic3_ceq_unregister_cb);
+
+/**
+ * set_eq_cons_idx - write the cons idx to the hw
+ * @eq: The event queue to update the cons idx for
+ * @arm_state: arm state for the consumer index (HINIC3_EQ_ARMED or HINIC3_EQ_NOT_ARMED)
+ **/
+static void set_eq_cons_idx(struct hinic3_eq *eq, u32 arm_state)
+{
+ u32 eq_wrap_ci, val;
+ u32 addr = EQ_CI_SIMPLE_INDIR_REG_ADDR(eq);
+
+ eq_wrap_ci = EQ_CONS_IDX(eq);
+
+ /* if use poll mode only eq0 use int_arm mode */
+ if (eq->q_id != 0 && eq->hwdev->poll)
+ val = EQ_CI_SIMPLE_INDIR_SET(HINIC3_EQ_NOT_ARMED, ARMED);
+ else
+ val = EQ_CI_SIMPLE_INDIR_SET(arm_state, ARMED);
+ if (eq->type == HINIC3_AEQ) {
+ val = val |
+ EQ_CI_SIMPLE_INDIR_SET(eq_wrap_ci, CI) |
+ EQ_CI_SIMPLE_INDIR_SET(eq->q_id, AEQ_IDX);
+ } else {
+ val = val |
+ EQ_CI_SIMPLE_INDIR_SET(eq_wrap_ci, CI) |
+ EQ_CI_SIMPLE_INDIR_SET(eq->q_id, CEQ_IDX);
+ }
+
+ hinic3_hwif_write_reg(eq->hwdev->hwif, addr, val);
+}
+
+/**
+ * ceq_event_handler - handler for the ceq events
+ * @ceqs: ceqs part of the chip
+ * @ceqe: ceq element of the event
+ **/
+static void ceq_event_handler(struct hinic3_ceqs *ceqs, u32 ceqe)
+{
+ struct hinic3_hwdev *hwdev = ceqs->hwdev;
+ enum hinic3_ceq_event event = CEQE_TYPE(ceqe);
+ u32 ceqe_data = CEQE_DATA(ceqe);
+
+ if (event >= HINIC3_MAX_CEQ_EVENTS) {
+ sdk_err(hwdev->dev_hdl, "Ceq unknown event:%d, ceqe date: 0x%x\n",
+ event, ceqe_data);
+ return;
+ }
+
+ set_bit(HINIC3_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]);
+
+ if (ceqs->ceq_cb[event] &&
+ test_bit(HINIC3_CEQ_CB_REG, &ceqs->ceq_cb_state[event]))
+ ceqs->ceq_cb[event](ceqs->ceq_cb_data[event], ceqe_data);
+
+ clear_bit(HINIC3_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]);
+}
+
+static void aeq_elem_handler(struct hinic3_eq *eq, u32 aeqe_desc)
+{
+ struct hinic3_aeqs *aeqs = aeq_to_aeqs(eq);
+ struct hinic3_aeq_elem *aeqe_pos;
+ enum hinic3_aeq_type event;
+ enum hinic3_aeq_sw_type sw_type;
+ u32 sw_event;
+ u8 data[HINIC3_AEQE_DATA_SIZE], size;
+
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+
+ eq->hwdev->cur_recv_aeq_cnt++;
+
+ event = EQ_ELEM_DESC_GET(aeqe_desc, TYPE);
+ if (EQ_ELEM_DESC_GET(aeqe_desc, SRC)) {
+ sw_event = event;
+ sw_type = sw_event >= HINIC3_NIC_FATAL_ERROR_MAX ?
+ HINIC3_STATEFUL_EVENT : HINIC3_STATELESS_EVENT;
+ /* SW event uses only the first 8B */
+ memcpy(data, aeqe_pos->aeqe_data, HINIC3_AEQE_DATA_SIZE);
+ hinic3_be32_to_cpu(data, HINIC3_AEQE_DATA_SIZE);
+ set_bit(HINIC3_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[sw_type]);
+ if (aeqs->aeq_swe_cb[sw_type] &&
+ test_bit(HINIC3_AEQ_SW_CB_REG,
+ &aeqs->aeq_sw_cb_state[sw_type]))
+ aeqs->aeq_swe_cb[sw_type](aeqs->aeq_swe_cb_data[sw_type], event, data);
+
+ clear_bit(HINIC3_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[sw_type]);
+ return;
+ }
+
+ if (event < HINIC3_MAX_AEQ_EVENTS) {
+ memcpy(data, aeqe_pos->aeqe_data, HINIC3_AEQE_DATA_SIZE);
+ hinic3_be32_to_cpu(data, HINIC3_AEQE_DATA_SIZE);
+
+ size = EQ_ELEM_DESC_GET(aeqe_desc, SIZE);
+ set_bit(HINIC3_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]);
+ if (aeqs->aeq_hwe_cb[event] &&
+ test_bit(HINIC3_AEQ_HW_CB_REG,
+ &aeqs->aeq_hw_cb_state[event]))
+ aeqs->aeq_hwe_cb[event](aeqs->aeq_hwe_cb_data[event], data, size);
+ clear_bit(HINIC3_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]);
+ return;
+ }
+ sdk_warn(eq->hwdev->dev_hdl, "Unknown aeq hw event %d\n", event);
+}
+
+/**
+ * aeq_irq_handler - handler for the aeq event
+ * @eq: the async event queue of the event
+ **/
+static bool aeq_irq_handler(struct hinic3_eq *eq)
+{
+ struct hinic3_aeq_elem *aeqe_pos = NULL;
+ u32 aeqe_desc;
+ u32 i, eqe_cnt = 0;
+
+ for (i = 0; i < HINIC3_TASK_PROCESS_EQE_LIMIT; i++) {
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+
+ /* Data in HW is in Big endian Format */
+ aeqe_desc = be32_to_cpu(aeqe_pos->desc);
+
+ /* HW updates wrapped bit, when it adds eq element event */
+ if (EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED) == eq->wrapped)
+ return false;
+
+ dma_rmb();
+
+ aeq_elem_handler(eq, aeqe_desc);
+
+ eq->cons_idx++;
+
+ if (eq->cons_idx == eq->eq_len) {
+ eq->cons_idx = 0;
+ eq->wrapped = !eq->wrapped;
+ }
+
+ if (++eqe_cnt >= HINIC3_EQ_UPDATE_CI_STEP) {
+ eqe_cnt = 0;
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+ }
+ }
+
+ return true;
+}
+
+/**
+ * ceq_irq_handler - handler for the ceq event
+ * @eq: the completion event queue of the event
+ **/
+static bool ceq_irq_handler(struct hinic3_eq *eq)
+{
+ struct hinic3_ceqs *ceqs = ceq_to_ceqs(eq);
+ u32 ceqe, eqe_cnt = 0;
+ u32 i;
+
+ for (i = 0; i < g_num_ceqe_in_tasklet; i++) {
+ ceqe = *(GET_CURR_CEQ_ELEM(eq));
+ ceqe = be32_to_cpu(ceqe);
+
+ /* HW updates wrapped bit, when it adds eq element event */
+ if (EQ_ELEM_DESC_GET(ceqe, WRAPPED) == eq->wrapped)
+ return false;
+
+ ceq_event_handler(ceqs, ceqe);
+
+ eq->cons_idx++;
+
+ if (eq->cons_idx == eq->eq_len) {
+ eq->cons_idx = 0;
+ eq->wrapped = !eq->wrapped;
+ }
+
+ if (++eqe_cnt >= HINIC3_EQ_UPDATE_CI_STEP) {
+ eqe_cnt = 0;
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+ }
+ }
+
+ return true;
+}
+
+static void reschedule_eq_handler(struct hinic3_eq *eq)
+{
+ if (eq->type == HINIC3_AEQ) {
+ struct hinic3_aeqs *aeqs = aeq_to_aeqs(eq);
+
+ queue_work_on(hisdk3_get_work_cpu_affinity(eq->hwdev, WORK_TYPE_AEQ),
+ aeqs->workq, &eq->aeq_work);
+ } else {
+ tasklet_schedule(&eq->ceq_tasklet);
+ }
+}
+
+/**
+ * eq_irq_handler - handler for the eq event
+ * @data: the event queue of the event
+ **/
+static bool eq_irq_handler(void *data)
+{
+ struct hinic3_eq *eq = (struct hinic3_eq *)data;
+ bool uncompleted = false;
+
+ if (eq->type == HINIC3_AEQ)
+ uncompleted = aeq_irq_handler(eq);
+ else
+ uncompleted = ceq_irq_handler(eq);
+
+ set_eq_cons_idx(eq, uncompleted ? HINIC3_EQ_NOT_ARMED :
+ HINIC3_EQ_ARMED);
+
+ return uncompleted;
+}
+
+/**
+ * eq_irq_work - eq work for the event
+ * @work: the work that is associated with the eq
+ **/
+static void eq_irq_work(struct work_struct *work)
+{
+ struct hinic3_eq *eq = container_of(work, struct hinic3_eq, aeq_work);
+
+ if (eq_irq_handler(eq))
+ reschedule_eq_handler(eq);
+}
+
+/**
+ * aeq_interrupt - aeq interrupt handler
+ * @irq: irq number
+ * @data: the async event queue of the event
+ **/
+static irqreturn_t aeq_interrupt(int irq, void *data)
+{
+ struct hinic3_eq *aeq = (struct hinic3_eq *)data;
+ struct hinic3_hwdev *hwdev = aeq->hwdev;
+ struct hinic3_aeqs *aeqs = aeq_to_aeqs(aeq);
+ struct workqueue_struct *workq = aeqs->workq;
+
+ /* clear resend timer cnt register */
+ hinic3_misx_intr_clear_resend_bit(hwdev, aeq->eq_irq.msix_entry_idx,
+ EQ_MSIX_RESEND_TIMER_CLEAR);
+
+ queue_work_on(hisdk3_get_work_cpu_affinity(hwdev, WORK_TYPE_AEQ),
+ workq, &aeq->aeq_work);
+ return IRQ_HANDLED;
+}
+
+/**
+ * ceq_tasklet - ceq tasklet for the event
+ * @ceq_data: data that will be used by the tasklet(ceq)
+ **/
+static void ceq_tasklet(ulong ceq_data)
+{
+ struct hinic3_eq *eq = (struct hinic3_eq *)ceq_data;
+
+ eq->soft_intr_jif = jiffies;
+
+ if (eq_irq_handler(eq))
+ reschedule_eq_handler(eq);
+}
+
+/**
+ * ceq_interrupt - ceq interrupt handler
+ * @irq: irq number
+ * @data: the completion event queue of the event
+ **/
+static irqreturn_t ceq_interrupt(int irq, void *data)
+{
+ struct hinic3_eq *ceq = (struct hinic3_eq *)data;
+
+ ceq->hard_intr_jif = jiffies;
+
+ /* clear resend timer counters */
+ hinic3_misx_intr_clear_resend_bit(ceq->hwdev,
+ ceq->eq_irq.msix_entry_idx,
+ EQ_MSIX_RESEND_TIMER_CLEAR);
+
+ tasklet_schedule(&ceq->ceq_tasklet);
+
+ return IRQ_HANDLED;
+}
+
+/**
+ * set_eq_ctrls - setting eq's ctrls registers
+ * @eq: the event queue for setting
+ **/
+static int set_eq_ctrls(struct hinic3_eq *eq)
+{
+ enum hinic3_eq_type type = eq->type;
+ struct hinic3_hwif *hwif = eq->hwdev->hwif;
+ struct irq_info *eq_irq = &eq->eq_irq;
+ u32 addr, val, ctrl0, ctrl1, page_size_val, elem_size;
+ u32 pci_intf_idx = HINIC3_PCI_INTF_IDX(hwif);
+ int err;
+
+ if (type == HINIC3_AEQ) {
+ /* set ctrl0 */
+ addr = HINIC3_CSR_AEQ_CTRL_0_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ val = AEQ_CTRL_0_CLEAR(val, INTR_IDX) &
+ AEQ_CTRL_0_CLEAR(val, DMA_ATTR) &
+ AEQ_CTRL_0_CLEAR(val, PCI_INTF_IDX) &
+ AEQ_CTRL_0_CLEAR(val, INTR_MODE);
+
+ ctrl0 = AEQ_CTRL_0_SET(eq_irq->msix_entry_idx, INTR_IDX) |
+ AEQ_CTRL_0_SET(AEQ_DMA_ATTR_DEFAULT, DMA_ATTR) |
+ AEQ_CTRL_0_SET(pci_intf_idx, PCI_INTF_IDX) |
+ AEQ_CTRL_0_SET(HINIC3_INTR_MODE_ARMED, INTR_MODE);
+
+ val |= ctrl0;
+
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ /* set ctrl1 */
+ addr = HINIC3_CSR_AEQ_CTRL_1_ADDR;
+
+ page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
+ elem_size = EQ_SET_HW_ELEM_SIZE_VAL(eq);
+
+ ctrl1 = AEQ_CTRL_1_SET(eq->eq_len, LEN) |
+ AEQ_CTRL_1_SET(elem_size, ELEM_SIZE) |
+ AEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
+
+ hinic3_hwif_write_reg(hwif, addr, ctrl1);
+ } else {
+ page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
+ ctrl0 = CEQ_CTRL_0_SET(eq_irq->msix_entry_idx, INTR_IDX) |
+ CEQ_CTRL_0_SET(CEQ_DMA_ATTR_DEFAULT, DMA_ATTR) |
+ CEQ_CTRL_0_SET(CEQ_LMT_KICK_DEFAULT, LIMIT_KICK) |
+ CEQ_CTRL_0_SET(pci_intf_idx, PCI_INTF_IDX) |
+ CEQ_CTRL_0_SET(page_size_val, PAGE_SIZE) |
+ CEQ_CTRL_0_SET(HINIC3_INTR_MODE_ARMED, INTR_MODE);
+
+ ctrl1 = CEQ_CTRL_1_SET(eq->eq_len, LEN);
+
+ /* set ceq ctrl reg through mgmt cpu */
+ err = hinic3_set_ceq_ctrl_reg(eq->hwdev, eq->q_id, ctrl0,
+ ctrl1);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+/**
+ * ceq_elements_init - Initialize all the elements in the ceq
+ * @eq: the event queue
+ * @init_val: value to initialize the elements with
+ **/
+static void ceq_elements_init(struct hinic3_eq *eq, u32 init_val)
+{
+ u32 *ceqe = NULL;
+ u32 i;
+
+ for (i = 0; i < eq->eq_len; i++) {
+ ceqe = GET_CEQ_ELEM(eq, i);
+ *(ceqe) = cpu_to_be32(init_val);
+ }
+
+ wmb(); /* Write the init values */
+}
+
+/**
+ * aeq_elements_init - initialize all the elements in the aeq
+ * @eq: the event queue
+ * @init_val: value to initialize the elements with
+ **/
+static void aeq_elements_init(struct hinic3_eq *eq, u32 init_val)
+{
+ struct hinic3_aeq_elem *aeqe = NULL;
+ u32 i;
+
+ for (i = 0; i < eq->eq_len; i++) {
+ aeqe = GET_AEQ_ELEM(eq, i);
+ aeqe->desc = cpu_to_be32(init_val);
+ }
+
+ wmb(); /* Write the init values */
+}
+
+static void eq_elements_init(struct hinic3_eq *eq, u32 init_val)
+{
+ if (eq->type == HINIC3_AEQ)
+ aeq_elements_init(eq, init_val);
+ else
+ ceq_elements_init(eq, init_val);
+}
+
+/**
+ * alloc_eq_pages - allocate the pages for the queue
+ * @eq: the event queue
+ **/
+static int alloc_eq_pages(struct hinic3_eq *eq)
+{
+ struct hinic3_hwif *hwif = eq->hwdev->hwif;
+ struct hinic3_dma_addr_align *eq_page = NULL;
+ u32 reg, init_val;
+ u16 pg_idx, i;
+ int err;
+ gfp_t gfp_vram;
+
+ eq->eq_pages = kcalloc(eq->num_pages, sizeof(*eq->eq_pages),
+ GFP_KERNEL);
+ if (!eq->eq_pages) {
+ sdk_err(eq->hwdev->dev_hdl, "Failed to alloc eq pages description\n");
+ return -ENOMEM;
+ }
+
+ gfp_vram = hi_vram_get_gfp_vram();
+
+ for (pg_idx = 0; pg_idx < eq->num_pages; pg_idx++) {
+ eq_page = &eq->eq_pages[pg_idx];
+ err = hinic3_dma_zalloc_coherent_align(eq->hwdev->dev_hdl,
+ eq->page_size,
+ HINIC3_MIN_EQ_PAGE_SIZE,
+ GFP_KERNEL | gfp_vram,
+ eq_page);
+ if (err) {
+ sdk_err(eq->hwdev->dev_hdl, "Failed to alloc eq page, page index: %hu\n",
+ pg_idx);
+ goto dma_alloc_err;
+ }
+
+ reg = HINIC3_EQ_HI_PHYS_ADDR_REG(eq->type, pg_idx);
+ hinic3_hwif_write_reg(hwif, reg,
+ upper_32_bits(eq_page->align_paddr));
+
+ reg = HINIC3_EQ_LO_PHYS_ADDR_REG(eq->type, pg_idx);
+ hinic3_hwif_write_reg(hwif, reg,
+ lower_32_bits(eq_page->align_paddr));
+ }
+
+ eq->num_elem_in_pg = GET_EQ_NUM_ELEMS(eq, eq->page_size);
+ if (eq->num_elem_in_pg & (eq->num_elem_in_pg - 1)) {
+ sdk_err(eq->hwdev->dev_hdl, "Number element in eq page != power of 2\n");
+ err = -EINVAL;
+ goto dma_alloc_err;
+ }
+ init_val = EQ_WRAPPED(eq);
+
+ eq_elements_init(eq, init_val);
+
+ return 0;
+
+dma_alloc_err:
+ for (i = 0; i < pg_idx; i++)
+ hinic3_dma_free_coherent_align(eq->hwdev->dev_hdl,
+ &eq->eq_pages[i]);
+
+ kfree(eq->eq_pages);
+
+ return err;
+}
+
+/**
+ * free_eq_pages - free the pages of the queue
+ * @eq: the event queue
+ **/
+static void free_eq_pages(struct hinic3_eq *eq)
+{
+ u16 pg_idx;
+
+ for (pg_idx = 0; pg_idx < eq->num_pages; pg_idx++)
+ hinic3_dma_free_coherent_align(eq->hwdev->dev_hdl,
+ &eq->eq_pages[pg_idx]);
+
+ kfree(eq->eq_pages);
+ eq->eq_pages = NULL;
+}
+
+static inline u32 get_page_size(const struct hinic3_eq *eq)
+{
+ u32 total_size;
+ u32 count;
+
+ total_size = ALIGN((eq->eq_len * eq->elem_size),
+ HINIC3_MIN_EQ_PAGE_SIZE);
+ if (total_size <= (HINIC3_EQ_MAX_PAGES(eq) * HINIC3_MIN_EQ_PAGE_SIZE))
+ return HINIC3_MIN_EQ_PAGE_SIZE;
+
+ count = (u32)(ALIGN((total_size / HINIC3_EQ_MAX_PAGES(eq)),
+ HINIC3_MIN_EQ_PAGE_SIZE) / HINIC3_MIN_EQ_PAGE_SIZE);
+
+ /* round up to nearest power of two */
+ count = 1U << (u8)fls((int)(count - 1));
+
+ return ((u32)HINIC3_MIN_EQ_PAGE_SIZE) * count;
+}
+
+static int request_eq_irq(struct hinic3_eq *eq, struct irq_info *entry)
+{
+ int err = 0;
+
+ if (eq->type == HINIC3_AEQ)
+ INIT_WORK(&eq->aeq_work, eq_irq_work);
+ else
+ tasklet_init(&eq->ceq_tasklet, ceq_tasklet, (ulong)eq);
+
+ if (eq->type == HINIC3_AEQ) {
+ snprintf(eq->irq_name, sizeof(eq->irq_name),
+ "hinic3_aeq%u@pci:%s", eq->q_id,
+ pci_name(eq->hwdev->pcidev_hdl));
+
+ err = request_irq(entry->irq_id, aeq_interrupt, 0UL,
+ eq->irq_name, eq);
+ } else {
+ snprintf(eq->irq_name, sizeof(eq->irq_name),
+ "hinic3_ceq%u@pci:%s", eq->q_id,
+ pci_name(eq->hwdev->pcidev_hdl));
+ err = request_irq(entry->irq_id, ceq_interrupt, 0UL,
+ eq->irq_name, eq);
+ }
+
+ return err;
+}
+
+static void reset_eq(struct hinic3_eq *eq)
+{
+ /* clear eq_len to force eqe drop in hardware */
+ if (eq->type == HINIC3_AEQ)
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_CSR_AEQ_CTRL_1_ADDR, 0);
+ else
+ hinic3_set_ceq_ctrl_reg(eq->hwdev, eq->q_id, 0, 0);
+
+ wmb(); /* clear eq_len before clear prod idx */
+
+ hinic3_hwif_write_reg(eq->hwdev->hwif, EQ_PROD_IDX_REG_ADDR(eq), 0);
+}
+
+/**
+ * init_eq - initialize eq
+ * @eq: the event queue
+ * @hwdev: the pointer to hw device
+ * @q_id: Queue id number
+ * @q_len: the number of EQ elements
+ * @type: the type of the event queue, ceq or aeq
+ * @entry: msix entry associated with the event queue
+ * Return: 0 - Success, Negative - failure
+ **/
+static int init_eq(struct hinic3_eq *eq, struct hinic3_hwdev *hwdev, u16 q_id,
+ u32 q_len, enum hinic3_eq_type type, struct irq_info *entry)
+{
+ int err = 0;
+
+ eq->hwdev = hwdev;
+ eq->q_id = q_id;
+ eq->type = type;
+ eq->eq_len = q_len;
+
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+ wmb(); /* write index before config */
+
+ reset_eq(eq);
+
+ eq->cons_idx = 0;
+ eq->wrapped = 0;
+
+ eq->elem_size = (type == HINIC3_AEQ) ? HINIC3_AEQE_SIZE : HINIC3_CEQE_SIZE;
+
+ eq->page_size = get_page_size(eq);
+ eq->orig_page_size = eq->page_size;
+ eq->num_pages = GET_EQ_NUM_PAGES(eq, eq->page_size);
+
+ if (eq->num_pages > HINIC3_EQ_MAX_PAGES(eq)) {
+ sdk_err(hwdev->dev_hdl, "Number pages: %u too many pages for eq\n",
+ eq->num_pages);
+ return -EINVAL;
+ }
+
+ err = alloc_eq_pages(eq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate pages for eq\n");
+ return err;
+ }
+
+ eq->eq_irq.msix_entry_idx = entry->msix_entry_idx;
+ eq->eq_irq.irq_id = entry->irq_id;
+
+ err = set_eq_ctrls(eq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to set ctrls for eq\n");
+ goto init_eq_ctrls_err;
+ }
+
+ set_eq_cons_idx(eq, HINIC3_EQ_ARMED);
+
+ err = request_eq_irq(eq, entry);
+ if (err) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to request irq for the eq, err: %d\n", err);
+ goto req_irq_err;
+ }
+
+ hinic3_set_msix_state(hwdev, entry->msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+
+ return 0;
+
+init_eq_ctrls_err:
+req_irq_err:
+ free_eq_pages(eq);
+ return err;
+}
+
+int hinic3_init_single_ceq_status(void *hwdev, u16 q_id)
+{
+ int err = 0;
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *eq = NULL;
+
+ if (!hwdev) {
+ pr_err("hwdev is null\n");
+ return -EINVAL;
+ }
+
+ if (q_id >= dev->ceqs->num_ceqs) {
+ sdk_err(dev->dev_hdl, "q_id=%u is larger than num_ceqs %u.\n",
+ q_id, dev->ceqs->num_ceqs);
+ return -EINVAL;
+ }
+
+ eq = &dev->ceqs->ceq[q_id];
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(dev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type), eq->q_id);
+ wmb(); /* write index before config */
+
+ reset_eq(eq);
+
+ err = set_eq_ctrls(eq);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to set ctrls for eq\n");
+ return err;
+ }
+ set_eq_cons_idx(eq, HINIC3_EQ_ARMED);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_init_single_ceq_status);
+
+/**
+ * remove_eq - remove eq
+ * @eq: the event queue
+ **/
+static void remove_eq(struct hinic3_eq *eq)
+{
+ struct irq_info *entry = &eq->eq_irq;
+
+ hinic3_set_msix_state(eq->hwdev, entry->msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+ synchronize_irq(entry->irq_id);
+
+ free_irq(entry->irq_id, eq);
+
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+
+ wmb(); /* write index before config */
+
+ if (eq->type == HINIC3_AEQ) {
+ cancel_work_sync(&eq->aeq_work);
+
+ /* clear eq_len to avoid hw access host memory */
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_CSR_AEQ_CTRL_1_ADDR, 0);
+ } else {
+ tasklet_kill(&eq->ceq_tasklet);
+
+ hinic3_set_ceq_ctrl_reg(eq->hwdev, eq->q_id, 0, 0);
+ }
+
+ /* update cons_idx to avoid invalid interrupt */
+ eq->cons_idx = hinic3_hwif_read_reg(eq->hwdev->hwif,
+ EQ_PROD_IDX_REG_ADDR(eq));
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+
+ free_eq_pages(eq);
+}
+
+/**
+ * hinic3_aeqs_init - init all the aeqs
+ * @hwdev: the pointer to hw device
+ * @num_aeqs: number of AEQs
+ * @msix_entries: msix entries associated with the event queues
+ * Return: 0 - Success, Negative - failure
+ **/
+int hinic3_aeqs_init(struct hinic3_hwdev *hwdev, u16 num_aeqs,
+ struct irq_info *msix_entries)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+ int err;
+ u16 i, q_id;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ aeqs = kzalloc(sizeof(*aeqs), GFP_KERNEL);
+ if (!aeqs)
+ return -ENOMEM;
+
+ hwdev->aeqs = aeqs;
+ aeqs->hwdev = hwdev;
+ aeqs->num_aeqs = num_aeqs;
+ aeqs->workq = alloc_workqueue(HINIC3_EQS_WQ_NAME,
+ WQ_MEM_RECLAIM | WQ_HIGHPRI,
+ HINIC3_MAX_AEQS);
+ if (!aeqs->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize aeq workqueue\n");
+ err = -ENOMEM;
+ goto create_work_err;
+ }
+
+ if (g_aeq_len < HINIC3_MIN_AEQ_LEN || g_aeq_len > HINIC3_MAX_AEQ_LEN) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_aeq_len value %u out of range, resetting to %d\n",
+ g_aeq_len, HINIC3_DEFAULT_AEQ_LEN);
+ g_aeq_len = HINIC3_DEFAULT_AEQ_LEN;
+ }
+
+ for (q_id = 0; q_id < num_aeqs; q_id++) {
+ err = init_eq(&aeqs->aeq[q_id], hwdev, q_id, g_aeq_len,
+ HINIC3_AEQ, &msix_entries[q_id]);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeq %u\n",
+ q_id);
+ goto init_aeq_err;
+ }
+ }
+ for (q_id = 0; q_id < num_aeqs; q_id++)
+ hinic3_set_msix_state(hwdev, msix_entries[q_id].msix_entry_idx,
+ HINIC3_MSIX_ENABLE);
+
+ return 0;
+
+init_aeq_err:
+ for (i = 0; i < q_id; i++)
+ remove_eq(&aeqs->aeq[i]);
+
+ destroy_workqueue(aeqs->workq);
+
+create_work_err:
+ kfree(aeqs);
+
+ return err;
+}
+
+/**
+ * hinic3_aeqs_free - free all the aeqs
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_aeqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ enum hinic3_aeq_type aeq_event = HINIC3_HW_INTER_INT;
+ enum hinic3_aeq_sw_type sw_aeq_event = HINIC3_STATELESS_EVENT;
+ u16 q_id;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++)
+ remove_eq(&aeqs->aeq[q_id]);
+
+ for (; sw_aeq_event < HINIC3_MAX_AEQ_SW_EVENTS; sw_aeq_event++)
+ hinic3_aeq_unregister_swe_cb(hwdev, sw_aeq_event);
+
+ for (; aeq_event < HINIC3_MAX_AEQ_EVENTS; aeq_event++)
+ hinic3_aeq_unregister_hw_cb(hwdev, aeq_event);
+
+ destroy_workqueue(aeqs->workq);
+
+ kfree(aeqs);
+}
+
+/**
+ * hinic3_ceqs_init - init all the ceqs
+ * @hwdev: the pointer to hw device
+ * @num_ceqs: number of CEQs
+ * @msix_entries: msix entries associated with the event queues
+ * Return: 0 - Success, Negative - failure
+ **/
+int hinic3_ceqs_init(struct hinic3_hwdev *hwdev, u16 num_ceqs,
+ struct irq_info *msix_entries)
+{
+ struct hinic3_ceqs *ceqs;
+ int err;
+ u16 i, q_id;
+
+ ceqs = kzalloc(sizeof(*ceqs), GFP_KERNEL);
+ if (!ceqs)
+ return -ENOMEM;
+
+ hwdev->ceqs = ceqs;
+
+ ceqs->hwdev = hwdev;
+ ceqs->num_ceqs = num_ceqs;
+
+ if (g_ceq_len < HINIC3_MIN_CEQ_LEN || g_ceq_len > HINIC3_MAX_CEQ_LEN) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_ceq_len value %u out of range, resetting to %d\n",
+ g_ceq_len, HINIC3_DEFAULT_CEQ_LEN);
+ g_ceq_len = HINIC3_DEFAULT_CEQ_LEN;
+ }
+
+ if (!g_num_ceqe_in_tasklet) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_num_ceqe_in_tasklet can not be zero, resetting to %d\n",
+ HINIC3_TASK_PROCESS_EQE_LIMIT);
+ g_num_ceqe_in_tasklet = HINIC3_TASK_PROCESS_EQE_LIMIT;
+ }
+ for (q_id = 0; q_id < num_ceqs; q_id++) {
+ err = init_eq(&ceqs->ceq[q_id], hwdev, q_id, g_ceq_len,
+ HINIC3_CEQ, &msix_entries[q_id]);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init ceq %u\n",
+ q_id);
+ goto init_ceq_err;
+ }
+ }
+ for (q_id = 0; q_id < num_ceqs; q_id++)
+ hinic3_set_msix_state(hwdev, msix_entries[q_id].msix_entry_idx,
+ HINIC3_MSIX_ENABLE);
+
+ for (i = 0; i < HINIC3_MAX_CEQ_EVENTS; i++)
+ ceqs->ceq_cb_state[i] = 0;
+
+ return 0;
+
+init_ceq_err:
+ for (i = 0; i < q_id; i++)
+ remove_eq(&ceqs->ceq[i]);
+
+ kfree(ceqs);
+
+ return err;
+}
+
+/**
+ * hinic3_ceqs_free - free all the ceqs
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_ceqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_ceqs *ceqs = hwdev->ceqs;
+ enum hinic3_ceq_event ceq_event = HINIC3_CMDQ;
+ u16 q_id;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++)
+ remove_eq(&ceqs->ceq[q_id]);
+
+ for (; ceq_event < HINIC3_MAX_CEQ_EVENTS; ceq_event++)
+ hinic3_ceq_unregister_cb(hwdev, ceq_event);
+
+ kfree(ceqs);
+}
+
+void hinic3_get_ceq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs)
+{
+ struct hinic3_ceqs *ceqs = hwdev->ceqs;
+ u16 q_id;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++) {
+ irqs[q_id].irq_id = ceqs->ceq[q_id].eq_irq.irq_id;
+ irqs[q_id].msix_entry_idx =
+ ceqs->ceq[q_id].eq_irq.msix_entry_idx;
+ }
+
+ *num_irqs = ceqs->num_ceqs;
+}
+
+void hinic3_get_aeq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ u16 q_id;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) {
+ irqs[q_id].irq_id = aeqs->aeq[q_id].eq_irq.irq_id;
+ irqs[q_id].msix_entry_idx =
+ aeqs->aeq[q_id].eq_irq.msix_entry_idx;
+ }
+
+ *num_irqs = aeqs->num_aeqs;
+}
+
+void hinic3_dump_aeq_info(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeq_elem *aeqe_pos = NULL;
+ struct hinic3_eq *eq = NULL;
+ u32 addr, ci, pi, ctrl0, idx;
+ int q_id;
+
+ for (q_id = 0; q_id < hwdev->aeqs->num_aeqs; q_id++) {
+ eq = &hwdev->aeqs->aeq[q_id];
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(eq->hwdev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+ wmb(); /* write index before config */
+
+ addr = HINIC3_CSR_AEQ_CTRL_0_ADDR;
+
+ ctrl0 = hinic3_hwif_read_reg(hwdev->hwif, addr);
+
+ idx = hinic3_hwif_read_reg(hwdev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type));
+
+ addr = EQ_CONS_IDX_REG_ADDR(eq);
+ ci = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ addr = EQ_PROD_IDX_REG_ADDR(eq);
+ pi = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+ sdk_err(hwdev->dev_hdl,
+ "Aeq id: %d, idx: %u, ctrl0: 0x%08x, ci: 0x%08x, pi: 0x%x, work_state: 0x%x, wrap: %u, desc: 0x%x swci:0x%x\n",
+ q_id, idx, ctrl0, ci, pi, work_busy(&eq->aeq_work),
+ eq->wrapped, be32_to_cpu(aeqe_pos->desc), eq->cons_idx);
+ }
+
+ hinic3_show_chip_err_info(hwdev);
+}
+
+void hinic3_dump_ceq_info(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_eq *eq = NULL;
+ u32 addr, ci, pi;
+ int q_id;
+
+ for (q_id = 0; q_id < hwdev->ceqs->num_ceqs; q_id++) {
+ eq = &hwdev->ceqs->ceq[q_id];
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+ wmb(); /* write index before config */
+
+ addr = EQ_CONS_IDX_REG_ADDR(eq);
+ ci = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ addr = EQ_PROD_IDX_REG_ADDR(eq);
+ pi = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ sdk_err(hwdev->dev_hdl,
+ "Ceq id: %d, ci: 0x%08x, sw_ci: 0x%08x, pi: 0x%x, tasklet_state: 0x%lx, wrap: %u, ceqe: 0x%x\n",
+ q_id, ci, eq->cons_idx, pi,
+ tasklet_state(&eq->ceq_tasklet),
+ eq->wrapped, be32_to_cpu(*(GET_CURR_CEQ_ELEM(eq))));
+
+ sdk_err(hwdev->dev_hdl, "Ceq last response hard interrupt time: %u\n",
+ jiffies_to_msecs(jiffies - eq->hard_intr_jif));
+ sdk_err(hwdev->dev_hdl, "Ceq last response soft interrupt time: %u\n",
+ jiffies_to_msecs(jiffies - eq->soft_intr_jif));
+ }
+
+ hinic3_show_chip_err_info(hwdev);
+}
+
+int hinic3_get_ceq_info(void *hwdev, u16 q_id, struct hinic3_ceq_info *ceq_info)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *eq = NULL;
+
+ if (!hwdev || !ceq_info)
+ return -EINVAL;
+
+ if (q_id >= dev->ceqs->num_ceqs)
+ return -EINVAL;
+
+ eq = &dev->ceqs->ceq[q_id];
+ ceq_info->q_len = eq->eq_len;
+ ceq_info->num_pages = eq->num_pages;
+ ceq_info->page_size = eq->page_size;
+ ceq_info->num_elem_in_pg = eq->num_elem_in_pg;
+ ceq_info->elem_size = eq->elem_size;
+ sdk_info(dev->dev_hdl, "get_ceq_info: qid=0x%x page_size=%ul\n",
+ q_id, eq->page_size);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_ceq_info);
+
+int hinic3_get_ceq_page_phy_addr(void *hwdev, u16 q_id,
+ u16 page_idx, u64 *page_phy_addr)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *eq = NULL;
+
+ if (!hwdev || !page_phy_addr)
+ return -EINVAL;
+
+ if (q_id >= dev->ceqs->num_ceqs)
+ return -EINVAL;
+
+ eq = &dev->ceqs->ceq[q_id];
+ if (page_idx >= eq->num_pages)
+ return -EINVAL;
+
+ *page_phy_addr = eq->eq_pages[page_idx].align_paddr;
+ sdk_info(dev->dev_hdl, "ceq_page_phy_addr: 0x%llx page_idx=%u\n",
+ eq->eq_pages[page_idx].align_paddr, page_idx);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_ceq_page_phy_addr);
+
+int hinic3_set_ceq_irq_disable(void *hwdev, u16 q_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *ceq = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (q_id >= dev->ceqs->num_ceqs)
+ return -EINVAL;
+
+ ceq = &dev->ceqs->ceq[q_id];
+
+ hinic3_set_msix_state(ceq->hwdev, ceq->eq_irq.msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_ceq_irq_disable);
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h
new file mode 100644
index 0000000..a6b83c3
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_EQS_H
+#define HINIC3_EQS_H
+
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+
+#include "hinic3_common.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_MAX_AEQS 4
+#define HINIC3_MAX_CEQS 32
+
+#define HINIC3_AEQ_MAX_PAGES 4
+#define HINIC3_CEQ_MAX_PAGES 8
+
+#define HINIC3_AEQE_SIZE 64
+#define HINIC3_CEQE_SIZE 4
+
+#define HINIC3_AEQE_DESC_SIZE 4
+#define HINIC3_AEQE_DATA_SIZE \
+ (HINIC3_AEQE_SIZE - HINIC3_AEQE_DESC_SIZE)
+
+#define HINIC3_DEFAULT_AEQ_LEN 0x10000
+#define HINIC3_DEFAULT_CEQ_LEN 0x10000
+
+#define HINIC3_MIN_EQ_PAGE_SIZE 0x1000 /* min eq page size 4K Bytes */
+#define HINIC3_MAX_EQ_PAGE_SIZE 0x400000 /* max eq page size 4M Bytes */
+
+#define HINIC3_MIN_AEQ_LEN 64
+#define HINIC3_MAX_AEQ_LEN \
+ ((HINIC3_MAX_EQ_PAGE_SIZE / HINIC3_AEQE_SIZE) * HINIC3_AEQ_MAX_PAGES)
+
+#define HINIC3_MIN_CEQ_LEN 64
+#define HINIC3_MAX_CEQ_LEN \
+ ((HINIC3_MAX_EQ_PAGE_SIZE / HINIC3_CEQE_SIZE) * HINIC3_CEQ_MAX_PAGES)
+#define HINIC3_CEQ_ID_CMDQ 0
+
+#define EQ_IRQ_NAME_LEN 64
+
+#define EQ_USLEEP_LOW_BOUND 900
+#define EQ_USLEEP_HIG_BOUND 1000
+
+enum hinic3_eq_type {
+ HINIC3_AEQ,
+ HINIC3_CEQ
+};
+
+enum hinic3_eq_intr_mode {
+ HINIC3_INTR_MODE_ARMED,
+ HINIC3_INTR_MODE_ALWAYS,
+};
+
+enum hinic3_eq_ci_arm_state {
+ HINIC3_EQ_NOT_ARMED,
+ HINIC3_EQ_ARMED,
+};
+
+struct hinic3_eq {
+ struct hinic3_hwdev *hwdev;
+ u16 q_id;
+ u16 rsvd1;
+ enum hinic3_eq_type type;
+ u32 page_size;
+ u32 orig_page_size;
+ u32 eq_len;
+
+ u32 cons_idx;
+ u16 wrapped;
+ u16 rsvd2;
+
+ u16 elem_size;
+ u16 num_pages;
+ u32 num_elem_in_pg;
+
+ struct irq_info eq_irq;
+ char irq_name[EQ_IRQ_NAME_LEN];
+
+ struct hinic3_dma_addr_align *eq_pages;
+
+ struct work_struct aeq_work;
+ struct tasklet_struct ceq_tasklet;
+
+ u64 hard_intr_jif;
+ u64 soft_intr_jif;
+
+ u64 rsvd3;
+};
+
+struct hinic3_aeq_elem {
+ u8 aeqe_data[HINIC3_AEQE_DATA_SIZE];
+ u32 desc;
+};
+
+enum hinic3_aeq_cb_state {
+ HINIC3_AEQ_HW_CB_REG = 0,
+ HINIC3_AEQ_HW_CB_RUNNING,
+ HINIC3_AEQ_SW_CB_REG,
+ HINIC3_AEQ_SW_CB_RUNNING,
+};
+
+struct hinic3_aeqs {
+ struct hinic3_hwdev *hwdev;
+
+ hinic3_aeq_hwe_cb aeq_hwe_cb[HINIC3_MAX_AEQ_EVENTS];
+ void *aeq_hwe_cb_data[HINIC3_MAX_AEQ_EVENTS];
+ hinic3_aeq_swe_cb aeq_swe_cb[HINIC3_MAX_AEQ_SW_EVENTS];
+ void *aeq_swe_cb_data[HINIC3_MAX_AEQ_SW_EVENTS];
+ unsigned long aeq_hw_cb_state[HINIC3_MAX_AEQ_EVENTS];
+ unsigned long aeq_sw_cb_state[HINIC3_MAX_AEQ_SW_EVENTS];
+
+ struct hinic3_eq aeq[HINIC3_MAX_AEQS];
+ u16 num_aeqs;
+ u16 rsvd1;
+ u32 rsvd2;
+
+ struct workqueue_struct *workq;
+};
+
+enum hinic3_ceq_cb_state {
+ HINIC3_CEQ_CB_REG = 0,
+ HINIC3_CEQ_CB_RUNNING,
+};
+
+struct hinic3_ceqs {
+ struct hinic3_hwdev *hwdev;
+
+ hinic3_ceq_event_cb ceq_cb[HINIC3_MAX_CEQ_EVENTS];
+ void *ceq_cb_data[HINIC3_MAX_CEQ_EVENTS];
+ void *ceq_data[HINIC3_MAX_CEQ_EVENTS];
+ unsigned long ceq_cb_state[HINIC3_MAX_CEQ_EVENTS];
+
+ struct hinic3_eq ceq[HINIC3_MAX_CEQS];
+ u16 num_ceqs;
+ u16 rsvd1;
+ u32 rsvd2;
+};
+
+int hinic3_aeqs_init(struct hinic3_hwdev *hwdev, u16 num_aeqs,
+ struct irq_info *msix_entries);
+
+void hinic3_aeqs_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_ceqs_init(struct hinic3_hwdev *hwdev, u16 num_ceqs,
+ struct irq_info *msix_entries);
+
+void hinic3_ceqs_free(struct hinic3_hwdev *hwdev);
+
+void hinic3_get_ceq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs);
+
+void hinic3_get_aeq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs);
+
+void hinic3_dump_ceq_info(struct hinic3_hwdev *hwdev);
+
+void hinic3_dump_aeq_info(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c
new file mode 100644
index 0000000..6b96b87
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c
@@ -0,0 +1,495 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_hw_api.h"
+
+#ifndef HTONL
+#define HTONL(x) \
+ ((((x) & 0x000000ff) << 24) \
+ | (((x) & 0x0000ff00) << 8) \
+ | (((x) & 0x00ff0000) >> 8) \
+ | (((x) & 0xff000000) >> 24))
+#endif
+
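+/* Build a counter read request: fill the request head and counter id and
+ * byte-swap them to big-endian before the request is sent over the API
+ * command channel.
+ */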
+static void hinic3_sml_ctr_read_build_req(struct chipif_sml_ctr_rd_req *msg,
+ u8 instance_id, u8 op_id,
+ u8 ack, u32 ctr_id, u32 init_val)
+{
+ msg->head.value = 0;
+ msg->head.bs.instance = instance_id;
+ msg->head.bs.op_id = op_id;
+ msg->head.bs.ack = ack;
+ msg->head.value = HTONL(msg->head.value);
+ msg->ctr_id = ctr_id;
+ msg->ctr_id = HTONL(msg->ctr_id);
+ msg->initial = init_val;
+}
+
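+/* Byte-swap each of the 'len' 32-bit words at 'node' in place; used to
+ * convert counter read responses to host byte order.
+ */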
+static void sml_ctr_htonl_n(u32 *node, u32 len)
+{
+ u32 i;
+ u32 *node_new = node;
+
+ for (i = 0; i < len; i++) {
+ *node_new = HTONL(*node_new);
+ node_new++;
+ }
+}
+
+/**
+ * hinic3_sm_ctr_rd16 - read a small single 16-bit counter
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd16(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u16 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 16bit counter read fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss16_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd16_clear - read a small single 16-bit counter and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd16_clear(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u16 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 16bit counter clear fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss16_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd32 - read a small single 32-bit counter
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd32(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u32 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 32bit counter read fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss32_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd32_clear - read a small single 32-bit counter and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ * (failure values follow the ACN error codes: ERR_OK, ERR_PARAM, ERR_FAILED, etc.)
+ **/
+int hinic3_sm_ctr_rd32_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u32 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 32bit counter clear fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss32_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd64_pair - read a big 128-bit counter pair
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value1: read counter value ptr
+ * @value2: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_pair(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!value1) {
+ pr_err("First value is NULL for read 64 bit pair\n");
+ return -EFAULT;
+ }
+
+ if (!value2) {
+ pr_err("Second value is NULL for read 64 bit pair\n");
+ return -EFAULT;
+ }
+
+ if (!hwdev || ((ctr_id & 0x1) != 0)) {
+ pr_err("Hwdev is NULL or ctr_id(%d) is odd number for read 64 bit pair\n",
+ ctr_id);
+ return -EFAULT;
+ }
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64 bit rd pair ret(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value1 = ((u64)rsp.bs_bp64_rsp.val1_h << BIT_32) | rsp.bs_bp64_rsp.val1_l;
+ *value2 = ((u64)rsp.bs_bp64_rsp.val2_h << BIT_32) | rsp.bs_bp64_rsp.val2_l;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd64_pair_clear - read a big 128-bit counter pair and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value1: read counter value ptr
+ * @value2: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_pair_clear(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value1, u64 *value2)
+{
+ struct chipif_sml_ctr_rd_req req = {0};
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value1 || !value2 || ((ctr_id & 0x1) != 0)) {
+ pr_err("Hwdev or value1 or value2 is NULL or ctr_id(%u) is odd number\n", ctr_id);
+ return -EINVAL;
+ }
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64 bit clear pair fail. ret(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value1 = ((u64)rsp.bs_bp64_rsp.val1_h << BIT_32) | rsp.bs_bp64_rsp.val1_l;
+ *value2 = ((u64)rsp.bs_bp64_rsp.val2_h << BIT_32) | rsp.bs_bp64_rsp.val2_l;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd64 - read a big 64-bit counter
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64bit counter read fail err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = ((u64)rsp.bs_bs64_rsp.value1 << BIT_32) | rsp.bs_bs64_rsp.value2;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_sm_ctr_rd64);
+
+/**
+ * hinic3_sm_ctr_rd64_clear - read a big 64-bit counter and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_clear(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value)
+{
+ struct chipif_sml_ctr_rd_req req = {0};
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64bit counter clear fail err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = ((u64)rsp.bs_bs64_rsp.value1 << BIT_32) | rsp.bs_bs64_rsp.value2;
+
+ return 0;
+}
+
+int hinic3_api_csr_rd32(void *hwdev, u8 dest, u32 addr, u32 *val)
+{
+ struct hinic3_csr_request_api_data api_data = {0};
+ u32 csr_val = 0;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev || !val)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&api_data, 0, sizeof(struct hinic3_csr_request_api_data));
+ api_data.dw0 = 0;
+ api_data.dw1.bits.operation_id = HINIC3_CSR_OPERATION_READ_CSR;
+ api_data.dw1.bits.need_response = HINIC3_CSR_NEED_RESP_DATA;
+ api_data.dw1.bits.data_size = HINIC3_CSR_DATA_SZ_32;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, dest, (u8 *)(&api_data),
+ in_size, &csr_val, 0x4);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Read 32 bit csr fail, dest %u addr 0x%x, ret: 0x%x\n",
+ dest, addr, ret);
+ return ret;
+ }
+
+ *val = csr_val;
+
+ return 0;
+}
+
+int hinic3_api_csr_wr32(void *hwdev, u8 dest, u32 addr, u32 val)
+{
+ struct hinic3_csr_request_api_data api_data;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&api_data, 0, sizeof(struct hinic3_csr_request_api_data));
+ api_data.dw1.bits.operation_id = HINIC3_CSR_OPERATION_WRITE_CSR;
+ api_data.dw1.bits.need_response = HINIC3_CSR_NO_RESP_DATA;
+ api_data.dw1.bits.data_size = HINIC3_CSR_DATA_SZ_32;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
+ api_data.csr_write_data_h = 0xffffffff;
+ api_data.csr_write_data_l = val;
+
+ ret = hinic3_api_cmd_write_nack(hwdev, dest, (u8 *)(&api_data),
+ in_size);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Write 32 bit csr fail! dest %u addr 0x%x val 0x%x\n",
+ dest, addr, val);
+ return ret;
+ }
+
+ return 0;
+}
+
+int hinic3_api_csr_rd64(void *hwdev, u8 dest, u32 addr, u64 *val)
+{
+ struct hinic3_csr_request_api_data api_data = {0};
+ u64 csr_val = 0;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev || !val)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&api_data, 0, sizeof(struct hinic3_csr_request_api_data));
+ api_data.dw0 = 0;
+ api_data.dw1.bits.operation_id = HINIC3_CSR_OPERATION_READ_CSR;
+ api_data.dw1.bits.need_response = HINIC3_CSR_NEED_RESP_DATA;
+ api_data.dw1.bits.data_size = HINIC3_CSR_DATA_SZ_64;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, dest, (u8 *)(&api_data),
+ in_size, &csr_val, 0x8);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Read 64 bit csr fail, dest %u addr 0x%x\n",
+ dest, addr);
+ return ret;
+ }
+
+ *val = csr_val;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_api_csr_rd64);
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h
new file mode 100644
index 0000000..9ec812e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_API_H
+#define HINIC3_HW_API_H
+
+#include <linux/types.h>
+
+#define CHIPIF_ACK 1
+#define CHIPIF_NOACK 0
+
+#define CHIPIF_SM_CTR_OP_READ 0x2
+#define CHIPIF_SM_CTR_OP_READ_CLEAR 0x6
+
+#define BIT_32 32
+
+/* request head */
+union chipif_sml_ctr_req_head {
+ struct {
+ u32 pad:15;
+ u32 ack:1;
+ u32 op_id:5;
+ u32 instance:6;
+ u32 src:5;
+ } bs;
+
+ u32 value;
+};
+
+/* counter read request struct */
+struct chipif_sml_ctr_rd_req {
+ u32 extra;
+ union chipif_sml_ctr_req_head head;
+ u32 ctr_id;
+ u32 initial;
+ u32 pad;
+};
+
+struct hinic3_csr_request_api_data {
+ u32 dw0;
+
+ union {
+ struct {
+ u32 reserved1:13;
+ /* this field indicates the write/read data size:
+ * 2'b00: 32 bits
+ * 2'b01: 64 bits
+ * 2'b10~2'b11:reserved
+ */
+ u32 data_size:2;
+ /* this field indicates whether the requestor expects to
+ * receive response data.
+ * 1'b0: do not expect response data.
+ * 1'b1: expect response data.
+ */
+ u32 need_response:1;
+ /* this field indicates the operation that the requestor
+ * expects.
+ * 5'b1_1110: write value to csr space.
+ * 5'b1_1111: read register from csr space.
+ */
+ u32 operation_id:5;
+ u32 reserved2:6;
+ /* this field specifies the Src node ID for this API
+ * request message.
+ */
+ u32 src_node_id:5;
+ } bits;
+
+ u32 val32;
+ } dw1;
+
+ union {
+ struct {
+ /* it specifies the CSR address. */
+ u32 csr_addr:26;
+ u32 reserved3:6;
+ } bits;
+
+ u32 val32;
+ } dw2;
+
+ /* if data_size = 2'b01, this is the high 32 bits of the write data;
+ * otherwise it is 32'hFFFF_FFFF.
+ */
+ u32 csr_write_data_h;
+ /* the low 32 bits of write data. */
+ u32 csr_write_data_l;
+};
+
+/* counter read response union */
+union ctr_rd_rsp {
+ struct {
+ u32 value1:16;
+ u32 pad0:16;
+ u32 pad1[3];
+ } bs_ss16_rsp;
+
+ struct {
+ u32 value1;
+ u32 pad[3];
+ } bs_ss32_rsp;
+
+ struct {
+ u32 value1:20;
+ u32 pad0:12;
+ u32 value2:12;
+ u32 pad1:20;
+ u32 pad2[2];
+ } bs_sp_rsp;
+
+ struct {
+ u32 value1;
+ u32 value2;
+ u32 pad[2];
+ } bs_bs64_rsp;
+
+ struct {
+ u32 val1_h;
+ u32 val1_l;
+ u32 val2_h;
+ u32 val2_l;
+ } bs_bp64_rsp;
+};
+
+enum HINIC3_CSR_API_DATA_OPERATION_ID {
+ HINIC3_CSR_OPERATION_WRITE_CSR = 0x1E,
+ HINIC3_CSR_OPERATION_READ_CSR = 0x1F
+};
+
+enum HINIC3_CSR_API_DATA_NEED_RESPONSE_DATA {
+ HINIC3_CSR_NO_RESP_DATA = 0,
+ HINIC3_CSR_NEED_RESP_DATA = 1
+};
+
+enum HINIC3_CSR_API_DATA_DATA_SIZE {
+ HINIC3_CSR_DATA_SZ_32 = 0,
+ HINIC3_CSR_DATA_SZ_64 = 1
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c
new file mode 100644
index 0000000..2d4a9f6
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c
@@ -0,0 +1,1632 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/mutex.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/semaphore.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "cfg_mgmt_mpu_cmd.h"
+#include "cfg_mgmt_mpu_cmd_defs.h"
+#include "hinic3_hw_cfg.h"
+
+static void parse_pub_res_cap_dfx(struct hinic3_hwdev *hwdev,
+ const struct service_cap *cap)
+{
+ sdk_info(hwdev->dev_hdl, "Get public resource capbility: svc_cap_en: 0x%x\n",
+ cap->svc_type);
+ sdk_info(hwdev->dev_hdl, "Host_id: 0x%x, ep_id: 0x%x, er_id: 0x%x, port_id: 0x%x\n",
+ cap->host_id, cap->ep_id, cap->er_id, cap->port_id);
+ sdk_info(hwdev->dev_hdl, "cos_bitmap: 0x%x, flexq: 0x%x, virtio_vq_size: 0x%x\n",
+ cap->cos_valid_bitmap, cap->flexq_en, cap->virtio_vq_size);
+ sdk_info(hwdev->dev_hdl, "Host_total_function: 0x%x, host_oq_id_mask_val: 0x%x, max_vf: 0x%x\n",
+ cap->host_total_function, cap->host_oq_id_mask_val,
+ cap->max_vf);
+ sdk_info(hwdev->dev_hdl, "Host_pf_num: 0x%x, pf_id_start: 0x%x, host_vf_num: 0x%x, vf_id_start: 0x%x\n",
+ cap->pf_num, cap->pf_id_start, cap->vf_num, cap->vf_id_start);
+ sdk_info(hwdev->dev_hdl,
+ "host_valid_bitmap: 0x%x, master_host_id: 0x%x, srv_multi_host_mode: 0x%x, hot_plug_disable: 0x%x\n",
+ cap->host_valid_bitmap, cap->master_host_id, cap->srv_multi_host_mode,
+ cap->hot_plug_disable);
+ sdk_info(hwdev->dev_hdl,
+ "os_hot_replace: 0x%x, fake_vf_start_id: 0x%x, fake_vf_num: 0x%x, fake_vf_max_pctx: 0x%x\n",
+ cap->os_hot_replace, cap->fake_vf_start_id, cap->fake_vf_num, cap->fake_vf_max_pctx);
+ sdk_info(hwdev->dev_hdl,
+ "fake_vf_bfilter_start_addr: 0x%x, fake_vf_bfilter_len: 0x%x, bond_create_mode: 0x%x\n",
+ cap->fake_vf_bfilter_start_addr, cap->fake_vf_bfilter_len, cap->bond_create_mode);
+}
+
+static void parse_cqm_res_cap(struct hinic3_hwdev *hwdev, struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap)
+{
+ struct dev_sf_svc_attr *attr = &cap->sf_svc_attr;
+
+ cap->fake_vf_start_id = dev_cap->fake_vf_start_id;
+ cap->fake_vf_num = dev_cap->fake_vf_num;
+ cap->fake_vf_max_pctx = dev_cap->fake_vf_max_pctx;
+ cap->fake_vf_num_cfg = dev_cap->fake_vf_num;
+ cap->fake_vf_bfilter_start_addr = dev_cap->fake_vf_bfilter_start_addr;
+ cap->fake_vf_bfilter_len = dev_cap->fake_vf_bfilter_len;
+
+ if (COMM_SUPPORT_VIRTIO_VQ_SIZE(hwdev))
+ cap->virtio_vq_size = (u16)(VIRTIO_BASE_VQ_SIZE << dev_cap->virtio_vq_size);
+ else
+ cap->virtio_vq_size = VIRTIO_DEFAULT_VQ_SIZE;
+
+ if (dev_cap->sf_svc_attr & SF_SVC_FT_BIT)
+ attr->ft_en = true;
+ else
+ attr->ft_en = false;
+
+ if (dev_cap->sf_svc_attr & SF_SVC_RDMA_BIT)
+ attr->rdma_en = true;
+ else
+ attr->rdma_en = false;
+
+ /* PPF will overwrite it when parse dynamic resource */
+ if (dev_cap->func_sf_en)
+ cap->sf_en = true;
+ else
+ cap->sf_en = false;
+
+ cap->lb_mode = dev_cap->lb_mode;
+ cap->smf_pg = dev_cap->smf_pg;
+
+ cap->timer_en = dev_cap->timer_en;
+ cap->host_oq_id_mask_val = dev_cap->host_oq_id_mask_val;
+ cap->max_connect_num = dev_cap->max_conn_num;
+ cap->max_stick2cache_num = dev_cap->max_stick2cache_num;
+ cap->bfilter_start_addr = dev_cap->max_bfilter_start_addr;
+ cap->bfilter_len = dev_cap->bfilter_len;
+ cap->hash_bucket_num = dev_cap->hash_bucket_num;
+}
+
+static void parse_pub_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ cap->host_id = dev_cap->host_id;
+ cap->ep_id = dev_cap->ep_id;
+ cap->er_id = dev_cap->er_id;
+ cap->port_id = dev_cap->port_id;
+
+ cap->svc_type = dev_cap->svc_cap_en;
+ cap->chip_svc_type = cap->svc_type;
+
+ cap->cos_valid_bitmap = dev_cap->valid_cos_bitmap;
+ cap->port_cos_valid_bitmap = dev_cap->port_cos_valid_bitmap;
+ cap->flexq_en = dev_cap->flexq_en;
+
+ cap->host_total_function = dev_cap->host_total_func;
+ cap->host_valid_bitmap = dev_cap->host_valid_bitmap;
+ cap->master_host_id = dev_cap->master_host_id;
+ cap->srv_multi_host_mode = dev_cap->srv_multi_host_mode;
+ cap->hot_plug_disable = dev_cap->hot_plug_disable;
+ cap->bond_create_mode = dev_cap->bond_create_mode;
+ cap->os_hot_replace = dev_cap->os_hot_replace;
+ cap->fake_vf_en = dev_cap->fake_vf_en;
+ cap->fake_vf_start_bit = dev_cap->fake_vf_start_bit;
+ cap->fake_vf_end_bit = dev_cap->fake_vf_end_bit;
+ cap->fake_vf_page_bit = dev_cap->fake_vf_page_bit;
+ cap->map_host_id = dev_cap->map_host_id;
+
+ if (type != TYPE_VF) {
+ cap->max_vf = dev_cap->max_vf;
+ cap->pf_num = dev_cap->host_pf_num;
+ cap->pf_id_start = dev_cap->pf_id_start;
+ cap->vf_num = dev_cap->host_vf_num;
+ cap->vf_id_start = dev_cap->vf_id_start;
+ } else {
+ cap->max_vf = 0;
+ }
+
+ parse_cqm_res_cap(hwdev, cap, dev_cap);
+ parse_pub_res_cap_dfx(hwdev, cap);
+}
+
+static void parse_dynamic_share_res_cap(struct service_cap *cap,
+ const struct cfg_cmd_dev_cap *dev_cap)
+{
+ if (dev_cap->host_sf_en)
+ cap->sf_en = true;
+ else
+ cap->sf_en = false;
+}
+
+static void parse_l2nic_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct nic_service_cap *nic_cap = &cap->nic_cap;
+
+ nic_cap->max_sqs = dev_cap->nic_max_sq_id + 1;
+ nic_cap->max_rqs = dev_cap->nic_max_rq_id + 1;
+ nic_cap->default_num_queues = dev_cap->nic_default_num_queues;
+ nic_cap->outband_vlan_cfg_en = dev_cap->outband_vlan_cfg_en;
+ nic_cap->lro_enable = dev_cap->lro_enable;
+
+ sdk_info(hwdev->dev_hdl, "L2nic resource capbility, max_sqs: 0x%x, max_rqs: 0x%x\n",
+ nic_cap->max_sqs, nic_cap->max_rqs);
+
+ /* Check parameters from firmware */
+ if (nic_cap->max_sqs > HINIC3_CFG_MAX_QP) {
+ sdk_info(hwdev->dev_hdl, "Number of sq exceed limit[1-%d]: sq: %u\n",
+ HINIC3_CFG_MAX_QP, nic_cap->max_sqs);
+ nic_cap->max_sqs = HINIC3_CFG_MAX_QP;
+ }
+
+ if (nic_cap->max_rqs > HINIC3_CFG_MAX_QP) {
+ sdk_info(hwdev->dev_hdl, "Number of rq exceed limit[1-%d]: rq: %u\n",
+ HINIC3_CFG_MAX_QP, nic_cap->max_rqs);
+ nic_cap->max_rqs = HINIC3_CFG_MAX_QP;
+ }
+
+ if (nic_cap->outband_vlan_cfg_en)
+ sdk_info(hwdev->dev_hdl, "L2nic outband vlan cfg enabled\n");
+}
+
+static void parse_fc_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_fc_svc_cap *fc_cap = &cap->fc_cap.dev_fc_cap;
+
+ fc_cap->max_parent_qpc_num = dev_cap->fc_max_pctx;
+ fc_cap->scq_num = dev_cap->fc_max_scq;
+ fc_cap->srq_num = dev_cap->fc_max_srq;
+ fc_cap->max_child_qpc_num = dev_cap->fc_max_cctx;
+ fc_cap->child_qpc_id_start = dev_cap->fc_cctx_id_start;
+ fc_cap->vp_id_start = dev_cap->fc_vp_id_start;
+ fc_cap->vp_id_end = dev_cap->fc_vp_id_end;
+
+ sdk_info(hwdev->dev_hdl, "Get fc resource capbility\n");
+ sdk_info(hwdev->dev_hdl,
+ "Max_parent_qpc_num: 0x%x, scq_num: 0x%x, srq_num: 0x%x, max_child_qpc_num: 0x%x, child_qpc_id_start: 0x%x\n",
+ fc_cap->max_parent_qpc_num, fc_cap->scq_num, fc_cap->srq_num,
+ fc_cap->max_child_qpc_num, fc_cap->child_qpc_id_start);
+ sdk_info(hwdev->dev_hdl, "Vp_id_start: 0x%x, vp_id_end: 0x%x\n",
+ fc_cap->vp_id_start, fc_cap->vp_id_end);
+}
+
+static void parse_roce_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_roce_svc_own_cap *roce_cap =
+ &cap->rdma_cap.dev_rdma_cap.roce_own_cap;
+
+ roce_cap->max_qps = dev_cap->roce_max_qp;
+ roce_cap->max_cqs = dev_cap->roce_max_cq;
+ roce_cap->max_srqs = dev_cap->roce_max_srq;
+ roce_cap->max_mpts = dev_cap->roce_max_mpt;
+ roce_cap->max_drc_qps = dev_cap->roce_max_drc_qp;
+
+ roce_cap->wqe_cl_start = dev_cap->roce_wqe_cl_start;
+ roce_cap->wqe_cl_end = dev_cap->roce_wqe_cl_end;
+ roce_cap->wqe_cl_sz = dev_cap->roce_wqe_cl_size;
+
+ sdk_info(hwdev->dev_hdl, "Get roce resource capbility, type: 0x%x\n",
+ type);
+ sdk_info(hwdev->dev_hdl, "Max_qps: 0x%x, max_cqs: 0x%x, max_srqs: 0x%x, max_mpts: 0x%x, max_drcts: 0x%x\n",
+ roce_cap->max_qps, roce_cap->max_cqs, roce_cap->max_srqs,
+ roce_cap->max_mpts, roce_cap->max_drc_qps);
+
+ sdk_info(hwdev->dev_hdl, "Wqe_start: 0x%x, wqe_end: 0x%x, wqe_sz: 0x%x\n",
+ roce_cap->wqe_cl_start, roce_cap->wqe_cl_end,
+ roce_cap->wqe_cl_sz);
+
+ if (roce_cap->max_qps == 0) {
+ if (type == TYPE_PF || type == TYPE_PPF) {
+ roce_cap->max_qps = 0x400;
+ roce_cap->max_cqs = 0x800;
+ roce_cap->max_srqs = 0x400;
+ roce_cap->max_mpts = 0x400;
+ roce_cap->max_drc_qps = 0x40;
+ } else {
+ roce_cap->max_qps = 0x200;
+ roce_cap->max_cqs = 0x400;
+ roce_cap->max_srqs = 0x200;
+ roce_cap->max_mpts = 0x200;
+ roce_cap->max_drc_qps = 0x40;
+ }
+ }
+}
+
+static void parse_rdma_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_roce_svc_own_cap *roce_cap =
+ &cap->rdma_cap.dev_rdma_cap.roce_own_cap;
+
+ roce_cap->cmtt_cl_start = dev_cap->roce_cmtt_cl_start;
+ roce_cap->cmtt_cl_end = dev_cap->roce_cmtt_cl_end;
+ roce_cap->cmtt_cl_sz = dev_cap->roce_cmtt_cl_size;
+
+ roce_cap->dmtt_cl_start = dev_cap->roce_dmtt_cl_start;
+ roce_cap->dmtt_cl_end = dev_cap->roce_dmtt_cl_end;
+ roce_cap->dmtt_cl_sz = dev_cap->roce_dmtt_cl_size;
+
+ sdk_info(hwdev->dev_hdl, "Get rdma resource capbility, Cmtt_start: 0x%x, cmtt_end: 0x%x, cmtt_sz: 0x%x\n",
+ roce_cap->cmtt_cl_start, roce_cap->cmtt_cl_end,
+ roce_cap->cmtt_cl_sz);
+
+ sdk_info(hwdev->dev_hdl, "Dmtt_start: 0x%x, dmtt_end: 0x%x, dmtt_sz: 0x%x\n",
+ roce_cap->dmtt_cl_start, roce_cap->dmtt_cl_end,
+ roce_cap->dmtt_cl_sz);
+}
+
+static void parse_ovs_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct ovs_service_cap *ovs_cap = &cap->ovs_cap;
+
+ ovs_cap->dev_ovs_cap.max_pctxs = dev_cap->ovs_max_qpc;
+ ovs_cap->dev_ovs_cap.fake_vf_max_pctx = dev_cap->fake_vf_max_pctx;
+ ovs_cap->dev_ovs_cap.fake_vf_start_id = dev_cap->fake_vf_start_id;
+ ovs_cap->dev_ovs_cap.fake_vf_num = dev_cap->fake_vf_num;
+ ovs_cap->dev_ovs_cap.dynamic_qp_en = dev_cap->flexq_en;
+
+ sdk_info(hwdev->dev_hdl,
+ "Get ovs resource capbility, max_qpc: 0x%x, fake_vf_start_id: 0x%x, fake_vf_num: 0x%x\n",
+ ovs_cap->dev_ovs_cap.max_pctxs,
+ ovs_cap->dev_ovs_cap.fake_vf_start_id,
+ ovs_cap->dev_ovs_cap.fake_vf_num);
+ sdk_info(hwdev->dev_hdl,
+ "fake_vf_max_qpc: 0x%x, dynamic_qp_en: 0x%x\n",
+ ovs_cap->dev_ovs_cap.fake_vf_max_pctx,
+ ovs_cap->dev_ovs_cap.dynamic_qp_en);
+}
+
+static void parse_ppa_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct ppa_service_cap *dip_cap = &cap->ppa_cap;
+
+ dip_cap->qpc_fake_vf_ctx_num = dev_cap->fake_vf_max_pctx;
+ dip_cap->qpc_fake_vf_start = dev_cap->fake_vf_start_id;
+ dip_cap->qpc_fake_vf_num = dev_cap->fake_vf_num;
+ dip_cap->bloomfilter_en = dev_cap->fake_vf_bfilter_len ? 1 : 0;
+ dip_cap->bloomfilter_length = dev_cap->fake_vf_bfilter_len;
+ sdk_info(hwdev->dev_hdl,
+ "Get ppa resource capbility, fake_vf_start_id: 0x%x, fake_vf_num: 0x%x, fake_vf_max_qpc: 0x%x\n",
+ dip_cap->qpc_fake_vf_start,
+ dip_cap->qpc_fake_vf_num,
+ dip_cap->qpc_fake_vf_ctx_num);
+}
+
+static void parse_toe_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_toe_svc_cap *toe_cap = &cap->toe_cap.dev_toe_cap;
+
+ toe_cap->max_pctxs = dev_cap->toe_max_pctx;
+ toe_cap->max_cqs = dev_cap->toe_max_cq;
+ toe_cap->max_srqs = dev_cap->toe_max_srq;
+ toe_cap->srq_id_start = dev_cap->toe_srq_id_start;
+ toe_cap->max_mpts = dev_cap->toe_max_mpt;
+ toe_cap->max_cctxt = dev_cap->toe_max_cctxt;
+
+ sdk_info(hwdev->dev_hdl,
+ "Get toe resource capbility, max_pctxs: 0x%x, max_cqs: 0x%x, max_srqs: 0x%x, srq_id_start: 0x%x, max_mpts: 0x%x\n",
+ toe_cap->max_pctxs, toe_cap->max_cqs, toe_cap->max_srqs,
+ toe_cap->srq_id_start, toe_cap->max_mpts);
+}
+
+static void parse_ipsec_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct ipsec_service_cap *ipsec_cap = &cap->ipsec_cap;
+
+ ipsec_cap->dev_ipsec_cap.max_sactxs = dev_cap->ipsec_max_sactx;
+ ipsec_cap->dev_ipsec_cap.max_cqs = dev_cap->ipsec_max_cq;
+
+ sdk_info(hwdev->dev_hdl, "Get IPsec resource capbility, max_sactxs: 0x%x, max_cqs: 0x%x\n",
+ dev_cap->ipsec_max_sactx, dev_cap->ipsec_max_cq);
+}
+
+static void parse_vbs_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct vbs_service_cap *vbs_cap = &cap->vbs_cap;
+
+ vbs_cap->vbs_max_volq = dev_cap->vbs_max_volq;
+ vbs_cap->vbs_main_pf_enable = dev_cap->vbs_main_pf_enable;
+ vbs_cap->vbs_vsock_pf_enable = dev_cap->vbs_vsock_pf_enable;
+ vbs_cap->vbs_fushion_queue_pf_enable = dev_cap->vbs_fushion_queue_pf_enable;
+
+ sdk_info(hwdev->dev_hdl,
+ "Get VBS resource capbility, vbs_max_volq: 0x%x\n",
+ dev_cap->vbs_max_volq);
+ sdk_info(hwdev->dev_hdl,
+ "Get VBS pf info, vbs_main_pf_enable: 0x%x, vbs_vsock_pf_enable: 0x%x, vbs_fushion_queue_pf_enable: 0x%x\n",
+ dev_cap->vbs_main_pf_enable,
+ dev_cap->vbs_vsock_pf_enable,
+ dev_cap->vbs_fushion_queue_pf_enable);
+}
+
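+/* Dispatch the device capability structure returned by firmware to the
+ * per-service parsers, based on the function type and the services this
+ * device supports.
+ */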
+static void parse_dev_cap(struct hinic3_hwdev *dev,
+ struct cfg_cmd_dev_cap *dev_cap, enum func_type type)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+
+ /* Public resource */
+ parse_pub_res_cap(dev, cap, dev_cap, type);
+
+ /* PPF managed dynamic resource */
+ if (type == TYPE_PPF)
+ parse_dynamic_share_res_cap(cap, dev_cap);
+
+ /* L2 NIC resource */
+ if (IS_NIC_TYPE(dev))
+ parse_l2nic_res_cap(dev, cap, dev_cap, type);
+
+ /* FC without virtualization */
+ if (type == TYPE_PF || type == TYPE_PPF) {
+ if (IS_FC_TYPE(dev))
+ parse_fc_res_cap(dev, cap, dev_cap, type);
+ }
+
+ /* toe resource */
+ if (IS_TOE_TYPE(dev))
+ parse_toe_res_cap(dev, cap, dev_cap, type);
+
+ /* mtt cache line */
+ if (IS_RDMA_ENABLE(dev))
+ parse_rdma_res_cap(dev, cap, dev_cap, type);
+
+ /* RoCE resource */
+ if (IS_ROCE_TYPE(dev))
+ parse_roce_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_OVS_TYPE(dev))
+ parse_ovs_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_IPSEC_TYPE(dev))
+ parse_ipsec_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_PPA_TYPE(dev))
+ parse_ppa_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_VBS_TYPE(dev))
+ parse_vbs_res_cap(dev, cap, dev_cap, type);
+}
+
+static int get_cap_from_fw(struct hinic3_hwdev *dev, enum func_type type)
+{
+ struct cfg_cmd_dev_cap dev_cap;
+ u16 out_len = sizeof(dev_cap);
+ int err;
+
+ memset(&dev_cap, 0, sizeof(dev_cap));
+ dev_cap.func_id = hinic3_global_func_id(dev);
+ sdk_info(dev->dev_hdl, "Get cap from fw, func_idx: %u\n",
+ dev_cap.func_id);
+
+ err = hinic3_msg_to_mgmt_sync(dev, HINIC3_MOD_CFGM, CFG_CMD_GET_DEV_CAP,
+ &dev_cap, sizeof(dev_cap),
+ &dev_cap, &out_len, 0,
+ HINIC3_CHANNEL_COMM);
+ if (err || dev_cap.head.status || !out_len) {
+ sdk_err(dev->dev_hdl,
+ "Failed to get capability from FW, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, dev_cap.head.status, out_len);
+ return -EIO;
+ }
+
+ parse_dev_cap(dev, &dev_cap, type);
+
+ return 0;
+}
+
+u8 hinic3_get_bond_create_mode(void *dev)
+{
+ struct hinic3_hwdev *hwdev = NULL;
+ struct service_cap *cap = NULL;
+
+ if (!dev) {
+ pr_err("pointer dev is NULL\n");
+ return -EINVAL;
+ }
+
+ hwdev = (struct hinic3_hwdev *)dev;
+ cap = &hwdev->cfg_mgmt->svc_cap;
+
+ return cap->bond_create_mode;
+}
+EXPORT_SYMBOL(hinic3_get_bond_create_mode);
+
+int hinic3_get_dev_cap(void *dev)
+{
+ enum func_type type;
+ int err;
+ struct hinic3_hwdev *hwdev = NULL;
+
+ if (!dev) {
+ pr_err("pointer dev is NULL\n");
+ return -EINVAL;
+ }
+ hwdev = (struct hinic3_hwdev *)dev;
+ type = HINIC3_FUNC_TYPE(hwdev);
+
+ switch (type) {
+ case TYPE_PF:
+ case TYPE_PPF:
+ case TYPE_VF:
+ err = get_cap_from_fw(hwdev, type);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to get PF/PPF capability\n");
+ return err;
+ }
+ break;
+ default:
+ sdk_err(hwdev->dev_hdl,
+ "Unsupported PCI Function type: %d\n", type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_dev_cap);
+
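+/* Query the host timer configuration (timer PF/VF id ranges) from the
+ * management firmware and cache it in the service capabilities.
+ */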
+int hinic3_get_ppf_timer_cfg(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_cmd_host_timer cfg_host_timer;
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ u16 out_len = sizeof(cfg_host_timer);
+ int err;
+
+ memset(&cfg_host_timer, 0, sizeof(cfg_host_timer));
+ cfg_host_timer.host_id = dev->cfg_mgmt->svc_cap.host_id;
+
+ err = hinic3_msg_to_mgmt_sync(dev, HINIC3_MOD_CFGM, CFG_CMD_GET_HOST_TIMER,
+ &cfg_host_timer, sizeof(cfg_host_timer),
+ &cfg_host_timer, &out_len, 0,
+ HINIC3_CHANNEL_COMM);
+ if (err || cfg_host_timer.head.status || !out_len) {
+ sdk_err(dev->dev_hdl,
+ "Failed to get host timer cfg from FW, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, cfg_host_timer.head.status, out_len);
+ return -EIO;
+ }
+
+ cap->timer_pf_id_start = cfg_host_timer.timer_pf_id_start;
+ cap->timer_pf_num = cfg_host_timer.timer_pf_num;
+ cap->timer_vf_id_start = cfg_host_timer.timer_vf_id_start;
+ cap->timer_vf_num = cfg_host_timer.timer_vf_num;
+
+ return 0;
+}
+
+static void nic_param_fix(struct hinic3_hwdev *dev)
+{
+}
+
+static void rdma_mtt_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct rdma_service_cap *rdma_cap = &cap->rdma_cap;
+
+ rdma_cap->log_mtt = LOG_MTT_SEG;
+ rdma_cap->log_mtt_seg = LOG_MTT_SEG;
+ rdma_cap->mtt_entry_sz = MTT_ENTRY_SZ;
+ rdma_cap->mpt_entry_sz = RDMA_MPT_ENTRY_SZ;
+ rdma_cap->num_mtts = RDMA_NUM_MTTS;
+}
+
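+/* Fill in the RDMA/RoCE capability fields that are fixed driver constants
+ * rather than values reported by firmware.
+ */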
+static void rdma_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct rdma_service_cap *rdma_cap = &cap->rdma_cap;
+ struct dev_roce_svc_own_cap *roce_cap =
+ &rdma_cap->dev_rdma_cap.roce_own_cap;
+
+ rdma_cap->log_mtt = LOG_MTT_SEG;
+ rdma_cap->log_rdmarc = LOG_RDMARC_SEG;
+ rdma_cap->reserved_qps = RDMA_RSVD_QPS;
+ rdma_cap->max_sq_sg = RDMA_MAX_SQ_SGE;
+
+ /* RoCE */
+ if (IS_ROCE_TYPE(dev)) {
+ roce_cap->qpc_entry_sz = ROCE_QPC_ENTRY_SZ;
+ roce_cap->max_wqes = ROCE_MAX_WQES;
+ roce_cap->max_rq_sg = ROCE_MAX_RQ_SGE;
+ roce_cap->max_sq_inline_data_sz = ROCE_MAX_SQ_INLINE_DATA_SZ;
+ roce_cap->max_rq_desc_sz = ROCE_MAX_RQ_DESC_SZ;
+ roce_cap->rdmarc_entry_sz = ROCE_RDMARC_ENTRY_SZ;
+ roce_cap->max_qp_init_rdma = ROCE_MAX_QP_INIT_RDMA;
+ roce_cap->max_qp_dest_rdma = ROCE_MAX_QP_DEST_RDMA;
+ roce_cap->max_srq_wqes = ROCE_MAX_SRQ_WQES;
+ roce_cap->reserved_srqs = ROCE_RSVD_SRQS;
+ roce_cap->max_srq_sge = ROCE_MAX_SRQ_SGE;
+ roce_cap->srqc_entry_sz = ROCE_SRQC_ENTERY_SZ;
+ roce_cap->max_msg_sz = ROCE_MAX_MSG_SZ;
+ }
+
+ rdma_cap->max_sq_desc_sz = RDMA_MAX_SQ_DESC_SZ;
+ rdma_cap->wqebb_size = WQEBB_SZ;
+ rdma_cap->max_cqes = RDMA_MAX_CQES;
+ rdma_cap->reserved_cqs = RDMA_RSVD_CQS;
+ rdma_cap->cqc_entry_sz = RDMA_CQC_ENTRY_SZ;
+ rdma_cap->cqe_size = RDMA_CQE_SZ;
+ rdma_cap->reserved_mrws = RDMA_RSVD_MRWS;
+ rdma_cap->mpt_entry_sz = RDMA_MPT_ENTRY_SZ;
+
+ /* 2^8 - 1
+ * +------------------------+-----------+
+ * | 4B | 1M(20b) | Key(8b) |
+ * +------------------------+-----------+
+ * key = 8bit key + 24bit index,
+ * now the Lkey of an SGE uses 2 bits (bit31 and bit30), so the key only has 10 bits;
+ * we use the original 8 bits directly for simplification
+ */
+ rdma_cap->max_fmr_maps = 0xff;
+ rdma_cap->num_mtts = RDMA_NUM_MTTS;
+ rdma_cap->log_mtt_seg = LOG_MTT_SEG;
+ rdma_cap->mtt_entry_sz = MTT_ENTRY_SZ;
+ rdma_cap->log_rdmarc_seg = LOG_RDMARC_SEG;
+ rdma_cap->local_ca_ack_delay = LOCAL_ACK_DELAY;
+ rdma_cap->num_ports = RDMA_NUM_PORTS;
+ rdma_cap->db_page_size = DB_PAGE_SZ;
+ rdma_cap->direct_wqe_size = DWQE_SZ;
+ rdma_cap->num_pds = NUM_PD;
+ rdma_cap->reserved_pds = RSVD_PD;
+ rdma_cap->max_xrcds = MAX_XRCDS;
+ rdma_cap->reserved_xrcds = RSVD_XRCDS;
+ rdma_cap->max_gid_per_port = MAX_GID_PER_PORT;
+ rdma_cap->gid_entry_sz = GID_ENTRY_SZ;
+ rdma_cap->reserved_lkey = RSVD_LKEY;
+ rdma_cap->num_comp_vectors = (u32)dev->cfg_mgmt->eq_info.num_ceq;
+ rdma_cap->page_size_cap = PAGE_SZ_CAP;
+ rdma_cap->flags = (RDMA_BMME_FLAG_LOCAL_INV |
+ RDMA_BMME_FLAG_REMOTE_INV |
+ RDMA_BMME_FLAG_FAST_REG_WR |
+ RDMA_DEV_CAP_FLAG_XRC |
+ RDMA_DEV_CAP_FLAG_MEM_WINDOW |
+ RDMA_BMME_FLAG_TYPE_2_WIN |
+ RDMA_BMME_FLAG_WIN_TYPE_2B |
+ RDMA_DEV_CAP_FLAG_ATOMIC);
+ rdma_cap->max_frpl_len = MAX_FRPL_LEN;
+ rdma_cap->max_pkeys = MAX_PKEYS;
+}
+
+static void toe_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct toe_service_cap *toe_cap = &cap->toe_cap;
+
+ toe_cap->pctx_sz = TOE_PCTX_SZ;
+ toe_cap->scqc_sz = TOE_CQC_SZ;
+}
+
+static void ovs_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct ovs_service_cap *ovs_cap = &cap->ovs_cap;
+
+ ovs_cap->pctx_sz = OVS_PCTX_SZ;
+}
+
+static void ppa_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct ppa_service_cap *ppa_cap = &cap->ppa_cap;
+
+ ppa_cap->pctx_sz = PPA_PCTX_SZ;
+}
+
+static void fc_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct fc_service_cap *fc_cap = &cap->fc_cap;
+
+ fc_cap->parent_qpc_size = FC_PCTX_SZ;
+ fc_cap->child_qpc_size = FC_CCTX_SZ;
+ fc_cap->sqe_size = FC_SQE_SZ;
+
+ fc_cap->scqc_size = FC_SCQC_SZ;
+ fc_cap->scqe_size = FC_SCQE_SZ;
+
+ fc_cap->srqc_size = FC_SRQC_SZ;
+ fc_cap->srqe_size = FC_SRQE_SZ;
+}
+
+static void ipsec_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct ipsec_service_cap *ipsec_cap = &cap->ipsec_cap;
+
+ ipsec_cap->sactx_sz = IPSEC_SACTX_SZ;
+}
+
+static void init_service_param(struct hinic3_hwdev *dev)
+{
+ if (IS_NIC_TYPE(dev))
+ nic_param_fix(dev);
+ if (IS_RDMA_ENABLE(dev))
+ rdma_mtt_fix(dev);
+ if (IS_ROCE_TYPE(dev))
+ rdma_param_fix(dev);
+ if (IS_FC_TYPE(dev))
+ fc_param_fix(dev);
+ if (IS_TOE_TYPE(dev))
+ toe_param_fix(dev);
+ if (IS_OVS_TYPE(dev))
+ ovs_param_fix(dev);
+ if (IS_IPSEC_TYPE(dev))
+ ipsec_param_fix(dev);
+ if (IS_PPA_TYPE(dev))
+ ppa_param_fix(dev);
+}
+
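+/* Cache the number of completion event queues reported by the hardware
+ * interface attributes; initially all of them are free.
+ */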
+static void cfg_get_eq_num(struct hinic3_hwdev *dev)
+{
+ struct cfg_eq_info *eq_info = &dev->cfg_mgmt->eq_info;
+
+ eq_info->num_ceq = dev->hwif->attr.num_ceqs;
+ eq_info->num_ceq_remain = eq_info->num_ceq;
+}
+
+static int cfg_init_eq(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ struct cfg_eq *eq = NULL;
+ u8 num_ceq, i = 0;
+
+ cfg_get_eq_num(dev);
+ num_ceq = cfg_mgmt->eq_info.num_ceq;
+
+ sdk_info(dev->dev_hdl, "Cfg mgmt: ceqs=0x%x, remain=0x%x\n",
+ cfg_mgmt->eq_info.num_ceq, cfg_mgmt->eq_info.num_ceq_remain);
+
+ if (!num_ceq) {
+ sdk_err(dev->dev_hdl, "Ceq num cfg in fw is zero\n");
+ return -EFAULT;
+ }
+
+ eq = kcalloc(num_ceq, sizeof(*eq), GFP_KERNEL);
+ if (!eq)
+ return -ENOMEM;
+
+ for (i = 0; i < num_ceq; ++i) {
+ eq[i].eqn = i;
+ eq[i].free = CFG_FREE;
+ eq[i].type = SERVICE_T_MAX;
+ }
+
+ cfg_mgmt->eq_info.eq = eq;
+
+ mutex_init(&cfg_mgmt->eq_info.eq_mutex);
+
+ return 0;
+}
+
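+/* Map a RoCE interrupt vector to its CEQ number. The vector index wraps
+ * around the number of CEQs and is offset by CFG_RDMA_CEQ_BASE; -EINVAL is
+ * returned if the resulting CEQ is not a busy RoCE CEQ.
+ */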
+int hinic3_vector_to_eqn(void *hwdev, enum hinic3_service_type type, int vector)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_eq *eq = NULL;
+ int eqn = -EINVAL;
+ int vector_num = vector;
+
+ if (!hwdev || vector < 0)
+ return -EINVAL;
+
+ if (type != SERVICE_T_ROCE) {
+ sdk_err(dev->dev_hdl,
+ "Service type :%d, only RDMA service could get eqn by vector.\n",
+ type);
+ return -EINVAL;
+ }
+
+ cfg_mgmt = dev->cfg_mgmt;
+ vector_num = (vector_num % cfg_mgmt->eq_info.num_ceq) + CFG_RDMA_CEQ_BASE;
+
+ eq = cfg_mgmt->eq_info.eq;
+ if (eq[vector_num].type == SERVICE_T_ROCE && eq[vector_num].free == CFG_BUSY)
+ eqn = eq[vector_num].eqn;
+
+ return eqn;
+}
+EXPORT_SYMBOL(hinic3_vector_to_eqn);
+
+static int cfg_init_interrupt(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ struct cfg_irq_info *irq_info = &cfg_mgmt->irq_param_info;
+ u16 intr_num = dev->hwif->attr.num_irqs;
+ u16 intr_needed = dev->hwif->attr.msix_flex_en ? (dev->hwif->attr.num_aeqs +
+ dev->hwif->attr.num_ceqs + dev->hwif->attr.num_sq) : intr_num;
+
+ if (!intr_num) {
+ sdk_err(dev->dev_hdl, "Irq num cfg in fw is zero, msix_flex_en %d\n",
+ dev->hwif->attr.msix_flex_en);
+ return -EFAULT;
+ }
+
+ if (intr_needed > intr_num) {
+ sdk_warn(dev->dev_hdl, "Irq num cfg(%d) is less than the needed irq num(%d) msix_flex_en %d\n",
+ intr_num, intr_needed, dev->hwif->attr.msix_flex_en);
+ intr_needed = intr_num;
+ }
+
+ irq_info->alloc_info = kcalloc(intr_num, sizeof(*irq_info->alloc_info),
+ GFP_KERNEL);
+ if (!irq_info->alloc_info)
+ return -ENOMEM;
+
+ irq_info->num_irq_hw = intr_needed;
+ /* Production requires that VFs support only MSI-X */
+ if (HINIC3_FUNC_TYPE(dev) == TYPE_VF)
+ cfg_mgmt->svc_cap.interrupt_type = INTR_TYPE_MSIX;
+ else
+ cfg_mgmt->svc_cap.interrupt_type = 0;
+
+ mutex_init(&irq_info->irq_mutex);
+ return 0;
+}
+
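+/* Enable the interrupts accounted for by cfg_init_interrupt. Only MSI-X is
+ * handled; pci_enable_msix_range() may grant fewer vectors than requested,
+ * down to VECTOR_THRESHOLD.
+ */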
+static int cfg_enable_interrupt(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ u16 nreq = cfg_mgmt->irq_param_info.num_irq_hw;
+
+ void *pcidev = dev->pcidev_hdl;
+ struct irq_alloc_info_st *irq_info = NULL;
+ struct msix_entry *entry = NULL;
+ u16 i = 0;
+ int actual_irq;
+
+ irq_info = cfg_mgmt->irq_param_info.alloc_info;
+
+ sdk_info(dev->dev_hdl, "Interrupt type: %u, irq num: %u.\n",
+ cfg_mgmt->svc_cap.interrupt_type, nreq);
+
+ switch (cfg_mgmt->svc_cap.interrupt_type) {
+ case INTR_TYPE_MSIX:
+ if (!nreq) {
+ sdk_err(dev->dev_hdl, "Interrupt number cannot be zero\n");
+ return -EINVAL;
+ }
+ entry = kcalloc(nreq, sizeof(*entry), GFP_KERNEL);
+ if (!entry)
+ return -ENOMEM;
+
+ for (i = 0; i < nreq; i++)
+ entry[i].entry = i;
+
+ actual_irq = pci_enable_msix_range(pcidev, entry,
+ VECTOR_THRESHOLD, nreq);
+ if (actual_irq < 0) {
+ sdk_err(dev->dev_hdl, "Alloc msix entries with threshold 2 failed. actual_irq: %d\n",
+ actual_irq);
+ kfree(entry);
+ return -ENOMEM;
+ }
+
+ nreq = (u16)actual_irq;
+ cfg_mgmt->irq_param_info.num_total = nreq;
+ cfg_mgmt->irq_param_info.num_irq_remain = nreq;
+ sdk_info(dev->dev_hdl, "Request %u msix vector success.\n",
+ nreq);
+
+ for (i = 0; i < nreq; ++i) {
+ /* entry index (u16) that the driver specified when requesting vectors */
+ irq_info[i].info.msix_entry_idx = entry[i].entry;
+ /* irq number (u32) that the kernel allocated for this entry */
+ irq_info[i].info.irq_id = entry[i].vector;
+ irq_info[i].type = SERVICE_T_MAX;
+ irq_info[i].free = CFG_FREE;
+ }
+
+ kfree(entry);
+
+ break;
+
+ default:
+ sdk_err(dev->dev_hdl, "Unsupport interrupt type %d\n",
+ cfg_mgmt->svc_cap.interrupt_type);
+ break;
+ }
+
+ return 0;
+}
+
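+/* Allocate up to 'num' interrupt vectors for a service: scan the allocation
+ * table for free entries, mark them busy and return their MSI-X entry index
+ * and irq id. The number actually allocated is returned via 'act_num'.
+ */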
+int hinic3_alloc_irqs(void *hwdev, enum hinic3_service_type type, u16 num,
+ struct irq_info *irq_info_array, u16 *act_num)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_irq_info *irq_info = NULL;
+ struct irq_alloc_info_st *alloc_info = NULL;
+ int max_num_irq;
+ u16 free_num_irq;
+ int i, j;
+ u16 num_new = num;
+
+ if (!hwdev || !irq_info_array || !act_num)
+ return -EINVAL;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ irq_info = &cfg_mgmt->irq_param_info;
+ alloc_info = irq_info->alloc_info;
+ max_num_irq = irq_info->num_total;
+ free_num_irq = irq_info->num_irq_remain;
+
+ mutex_lock(&irq_info->irq_mutex);
+
+ if (num > free_num_irq) {
+ if (free_num_irq == 0) {
+ sdk_err(dev->dev_hdl, "no free irq resource in cfg mgmt.\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return -ENOMEM;
+ }
+
+ sdk_warn(dev->dev_hdl, "only %u irq resource in cfg mgmt.\n", free_num_irq);
+ num_new = free_num_irq;
+ }
+
+ *act_num = 0;
+
+ for (i = 0; i < num_new; i++) {
+ for (j = 0; j < max_num_irq; j++) {
+ if (alloc_info[j].free == CFG_FREE) {
+ if (irq_info->num_irq_remain == 0) {
+ sdk_err(dev->dev_hdl, "No free irq resource in cfg mgmt\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return -EINVAL;
+ }
+ alloc_info[j].type = type;
+ alloc_info[j].free = CFG_BUSY;
+
+ irq_info_array[i].msix_entry_idx =
+ alloc_info[j].info.msix_entry_idx;
+ irq_info_array[i].irq_id = alloc_info[j].info.irq_id;
+ (*act_num)++;
+ irq_info->num_irq_remain--;
+
+ break;
+ }
+ }
+ }
+
+ mutex_unlock(&irq_info->irq_mutex);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_irqs);
+
+void hinic3_free_irq(void *hwdev, enum hinic3_service_type type, u32 irq_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_irq_info *irq_info = NULL;
+ struct irq_alloc_info_st *alloc_info = NULL;
+ int max_num_irq;
+ int i;
+
+ if (!hwdev)
+ return;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ irq_info = &cfg_mgmt->irq_param_info;
+ alloc_info = irq_info->alloc_info;
+ max_num_irq = irq_info->num_total;
+
+ mutex_lock(&irq_info->irq_mutex);
+
+ for (i = 0; i < max_num_irq; i++) {
+ if (irq_id == alloc_info[i].info.irq_id &&
+ type == alloc_info[i].type) {
+ if (alloc_info[i].free == CFG_BUSY) {
+ alloc_info[i].free = CFG_FREE;
+ irq_info->num_irq_remain++;
+ if (irq_info->num_irq_remain > max_num_irq) {
+ sdk_err(dev->dev_hdl, "Find target,but over range\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return;
+ }
+ break;
+ }
+ }
+ }
+
+ if (i >= max_num_irq)
+ sdk_warn(dev->dev_hdl, "Irq %u don`t need to free\n", irq_id);
+
+ mutex_unlock(&irq_info->irq_mutex);
+}
+EXPORT_SYMBOL(hinic3_free_irq);
+
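+/* Allocate completion event queues for a service, starting from
+ * CFG_RDMA_CEQ_BASE so the CEQs below it stay reserved. The number actually
+ * allocated is returned via 'act_num'.
+ */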
+int hinic3_alloc_ceqs(void *hwdev, enum hinic3_service_type type, int num,
+ int *ceq_id_array, int *act_num)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_eq_info *eq = NULL;
+ int free_ceq;
+ int i, j;
+ int num_new = num;
+
+ if (!hwdev || !ceq_id_array || !act_num)
+ return -EINVAL;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ eq = &cfg_mgmt->eq_info;
+ free_ceq = eq->num_ceq_remain;
+
+ mutex_lock(&eq->eq_mutex);
+
+ if (num > free_ceq) {
+ if (free_ceq <= 0) {
+ sdk_err(dev->dev_hdl, "No free ceq resource in cfg mgmt\n");
+ mutex_unlock(&eq->eq_mutex);
+ return -ENOMEM;
+ }
+
+ sdk_warn(dev->dev_hdl, "Only %d ceq resource in cfg mgmt\n",
+ free_ceq);
+ }
+
+ *act_num = 0;
+
+ num_new = min(num_new, eq->num_ceq - CFG_RDMA_CEQ_BASE);
+ for (i = 0; i < num_new; i++) {
+ if (eq->num_ceq_remain == 0) {
+ sdk_warn(dev->dev_hdl, "Alloc %d ceqs, less than required %d ceqs\n",
+ *act_num, num_new);
+ mutex_unlock(&eq->eq_mutex);
+ return 0;
+ }
+
+ for (j = CFG_RDMA_CEQ_BASE; j < eq->num_ceq; j++) {
+ if (eq->eq[j].free == CFG_FREE) {
+ eq->eq[j].type = type;
+ eq->eq[j].free = CFG_BUSY;
+ eq->num_ceq_remain--;
+ ceq_id_array[i] = eq->eq[j].eqn;
+ (*act_num)++;
+ break;
+ }
+ }
+ }
+
+ mutex_unlock(&eq->eq_mutex);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_ceqs);
+
+void hinic3_free_ceq(void *hwdev, enum hinic3_service_type type, int ceq_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_eq_info *eq = NULL;
+ u8 num_ceq;
+ u8 i = 0;
+
+ if (!hwdev)
+ return;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ eq = &cfg_mgmt->eq_info;
+ num_ceq = eq->num_ceq;
+
+ mutex_lock(&eq->eq_mutex);
+
+ for (i = 0; i < num_ceq; i++) {
+ if (ceq_id == eq->eq[i].eqn &&
+ type == cfg_mgmt->eq_info.eq[i].type) {
+ if (eq->eq[i].free == CFG_BUSY) {
+ eq->eq[i].free = CFG_FREE;
+ eq->num_ceq_remain++;
+ if (eq->num_ceq_remain > num_ceq)
+ eq->num_ceq_remain %= num_ceq;
+
+ mutex_unlock(&eq->eq_mutex);
+ return;
+ }
+ }
+ }
+
+ if (i >= num_ceq)
+ sdk_warn(dev->dev_hdl, "ceq %d don`t need to free.\n", ceq_id);
+
+ mutex_unlock(&eq->eq_mutex);
+}
+EXPORT_SYMBOL(hinic3_free_ceq);
+
+int init_cfg_mgmt(struct hinic3_hwdev *dev)
+{
+ int err;
+ struct cfg_mgmt_info *cfg_mgmt;
+
+ cfg_mgmt = kzalloc(sizeof(*cfg_mgmt), GFP_KERNEL);
+ if (!cfg_mgmt)
+ return -ENOMEM;
+
+ dev->cfg_mgmt = cfg_mgmt;
+ cfg_mgmt->hwdev = dev;
+
+ err = cfg_init_eq(dev);
+ if (err != 0) {
+ sdk_err(dev->dev_hdl, "Failed to init cfg event queue, err: %d\n",
+ err);
+ goto free_mgmt_mem;
+ }
+
+ err = cfg_init_interrupt(dev);
+ if (err != 0) {
+ sdk_err(dev->dev_hdl, "Failed to init cfg interrupt, err: %d\n",
+ err);
+ goto free_eq_mem;
+ }
+
+ err = cfg_enable_interrupt(dev);
+ if (err != 0) {
+ sdk_err(dev->dev_hdl, "Failed to enable cfg interrupt, err: %d\n",
+ err);
+ goto free_interrupt_mem;
+ }
+
+ return 0;
+
+free_interrupt_mem:
+ kfree(cfg_mgmt->irq_param_info.alloc_info);
+ mutex_deinit(&((cfg_mgmt->irq_param_info).irq_mutex));
+ cfg_mgmt->irq_param_info.alloc_info = NULL;
+
+free_eq_mem:
+ kfree(cfg_mgmt->eq_info.eq);
+ mutex_deinit(&cfg_mgmt->eq_info.eq_mutex);
+ cfg_mgmt->eq_info.eq = NULL;
+
+free_mgmt_mem:
+ kfree(cfg_mgmt);
+ return err;
+}
+
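+/* Release configuration management resources; warn if any irq or ceq
+ * allocated through this module has not been returned, then disable the
+ * PCI interrupt mode that was enabled.
+ */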
+void free_cfg_mgmt(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+
+ /* check whether all allocated resources have been reclaimed */
+ if (cfg_mgmt->irq_param_info.num_irq_remain !=
+ cfg_mgmt->irq_param_info.num_total ||
+ cfg_mgmt->eq_info.num_ceq_remain != cfg_mgmt->eq_info.num_ceq)
+ sdk_err(dev->dev_hdl, "Can't reclaim all irq and event queue, please check\n");
+
+ switch (cfg_mgmt->svc_cap.interrupt_type) {
+ case INTR_TYPE_MSIX:
+ pci_disable_msix(dev->pcidev_hdl);
+ break;
+
+ case INTR_TYPE_MSI:
+ pci_disable_msi(dev->pcidev_hdl);
+ break;
+
+ case INTR_TYPE_INT:
+ default:
+ break;
+ }
+
+ kfree(cfg_mgmt->irq_param_info.alloc_info);
+ cfg_mgmt->irq_param_info.alloc_info = NULL;
+ mutex_deinit(&((cfg_mgmt->irq_param_info).irq_mutex));
+
+ kfree(cfg_mgmt->eq_info.eq);
+ cfg_mgmt->eq_info.eq = NULL;
+ mutex_deinit(&cfg_mgmt->eq_info.eq_mutex);
+
+ kfree(cfg_mgmt);
+}
+
+/**
+ * hinic3_init_vf_dev_cap - Set max queue num for VF
+ * @hwdev: the HW device for VF
+ * Return: 0 - success, negative - failure
+ */
+int hinic3_init_vf_dev_cap(void *hwdev)
+{
+ struct hinic3_hwdev *dev = NULL;
+ enum func_type type;
+ int err;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ dev = (struct hinic3_hwdev *)hwdev;
+ type = HINIC3_FUNC_TYPE(dev);
+ if (type != TYPE_VF)
+ return -EPERM;
+
+ err = hinic3_get_dev_cap(dev);
+ if (err != 0)
+ return err;
+
+ nic_param_fix(dev);
+
+ return 0;
+}
+
+int init_capability(struct hinic3_hwdev *dev)
+{
+ int err;
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+
+ cfg_mgmt->svc_cap.sf_svc_attr.ft_pf_en = false;
+ cfg_mgmt->svc_cap.sf_svc_attr.rdma_pf_en = false;
+
+ err = hinic3_get_dev_cap(dev);
+ if (err != 0)
+ return err;
+
+ init_service_param(dev);
+
+ sdk_info(dev->dev_hdl, "Init capability success\n");
+ return 0;
+}
+
+void free_capability(struct hinic3_hwdev *dev)
+{
+ sdk_info(dev->dev_hdl, "Free capability success");
+}
+
+bool hinic3_support_nic(void *hwdev, struct nic_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_NIC_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.nic_cap, sizeof(struct nic_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_nic);
+
+bool hinic3_support_ppa(void *hwdev, struct ppa_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_PPA_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.ppa_cap, sizeof(struct ppa_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_ppa);
+
+bool hinic3_support_bifur(void *hwdev, struct bifur_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_BIFUR_TYPE(dev))
+ return false;
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_bifur);
+
+bool hinic3_support_migr(void *hwdev, struct migr_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_MIGR_TYPE(dev))
+ return false;
+
+ if (cap)
+ cap->master_host_id = dev->cfg_mgmt->svc_cap.master_host_id;
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_migr);
+
+bool hinic3_support_ipsec(void *hwdev, struct ipsec_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_IPSEC_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.ipsec_cap, sizeof(struct ipsec_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_ipsec);
+
+bool hinic3_support_roce(void *hwdev, struct rdma_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_ROCE_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.rdma_cap, sizeof(struct rdma_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_roce);
+
+bool hinic3_support_fc(void *hwdev, struct fc_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_FC_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.fc_cap, sizeof(struct fc_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_fc);
+
+bool hinic3_support_rdma(void *hwdev, struct rdma_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_RDMA_TYPE(dev) && !(IS_RDMA_ENABLE(dev)))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.rdma_cap, sizeof(struct rdma_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_rdma);
+
+bool hinic3_support_ovs(void *hwdev, struct ovs_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_OVS_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.ovs_cap, sizeof(struct ovs_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_ovs);
+
+bool hinic3_support_vbs(void *hwdev, struct vbs_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_VBS_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.vbs_cap, sizeof(struct vbs_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_vbs);
+
+bool hinic3_is_guest_vmsec_enable(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("hwdev is null\n");
+ return false;
+ }
+
+ /* vf used in vm */
+ if (IS_VM_SLAVE_HOST(hw_dev) && (hinic3_func_type(hwdev) == TYPE_VF) &&
+ IS_RDMA_TYPE(hw_dev)) {
+ return true;
+ }
+
+ return false;
+}
+EXPORT_SYMBOL(hinic3_is_guest_vmsec_enable);
+
+/* Only the PPF supports it, the PF does not */
+bool hinic3_support_toe(void *hwdev, struct toe_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_TOE_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.toe_cap, sizeof(struct toe_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_toe);
+
+bool hinic3_func_for_mgmt(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (dev->cfg_mgmt->svc_cap.chip_svc_type)
+ return false;
+ else
+ return true;
+}
+
+bool hinic3_get_stateful_enable(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ return dev->cfg_mgmt->svc_cap.sf_en;
+}
+EXPORT_SYMBOL(hinic3_get_stateful_enable);
+
+bool hinic3_get_timer_enable(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ return dev->cfg_mgmt->svc_cap.timer_en;
+}
+EXPORT_SYMBOL(hinic3_get_timer_enable);
+
+u8 hinic3_host_oq_id_mask(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting host oq id mask\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.host_oq_id_mask_val;
+}
+EXPORT_SYMBOL(hinic3_host_oq_id_mask);
+
+u8 hinic3_host_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting host id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.host_id;
+}
+EXPORT_SYMBOL(hinic3_host_id);
+
+u16 hinic3_host_total_func(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting host total function number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.host_total_function;
+}
+EXPORT_SYMBOL(hinic3_host_total_func);
+
+u16 hinic3_func_max_qnum(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting function max queue number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.nic_cap.max_sqs;
+}
+EXPORT_SYMBOL(hinic3_func_max_qnum);
+
+u16 hinic3_func_max_nic_qnum(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting function max queue number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.nic_cap.max_sqs;
+}
+EXPORT_SYMBOL(hinic3_func_max_nic_qnum);
+
+u8 hinic3_ep_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting ep id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.ep_id;
+}
+EXPORT_SYMBOL(hinic3_ep_id);
+
+u8 hinic3_er_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting er id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.er_id;
+}
+EXPORT_SYMBOL(hinic3_er_id);
+
+u8 hinic3_physical_port_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting physical port id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.port_id;
+}
+EXPORT_SYMBOL(hinic3_physical_port_id);
+
+u16 hinic3_func_max_vf(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting max vf number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.max_vf;
+}
+EXPORT_SYMBOL(hinic3_func_max_vf);
+
+int hinic3_cos_valid_bitmap(void *hwdev, u8 *func_dft_cos, u8 *port_cos_bitmap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting cos valid bitmap\n");
+ return 1;
+ }
+ *func_dft_cos = dev->cfg_mgmt->svc_cap.cos_valid_bitmap;
+ *port_cos_bitmap = dev->cfg_mgmt->svc_cap.port_cos_valid_bitmap;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_cos_valid_bitmap);
+
+void hinic3_shutdown_hwdev(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return;
+
+ if (IS_SLAVE_HOST(dev))
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), false);
+}
+
+u32 hinic3_host_pf_num(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting pf number capability\n");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.pf_num;
+}
+EXPORT_SYMBOL(hinic3_host_pf_num);
+
+u32 hinic3_host_pf_id_start(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting pf id start capability\n");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.pf_id_start;
+}
+EXPORT_SYMBOL(hinic3_host_pf_id_start);
+
+u8 hinic3_flexq_en(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return 0;
+
+ return dev->cfg_mgmt->svc_cap.flexq_en;
+}
+EXPORT_SYMBOL(hinic3_flexq_en);
+
+int hinic3_get_fake_vf_info(void *hwdev, u8 *fake_vf_vld,
+ u8 *page_bit, u8 *pf_start_bit, u8 *map_host_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting pf id start capability\n");
+ return -EINVAL;
+ }
+
+ if (!fake_vf_vld || !page_bit || !pf_start_bit || !map_host_id) {
+ pr_err("Fake vf member pointer is NULL for getting pf id start capability\n");
+ return -EINVAL;
+ }
+
+ *fake_vf_vld = dev->cfg_mgmt->svc_cap.fake_vf_en;
+ *page_bit = dev->cfg_mgmt->svc_cap.fake_vf_page_bit;
+ *pf_start_bit = dev->cfg_mgmt->svc_cap.fake_vf_start_bit;
+ *map_host_id = dev->cfg_mgmt->svc_cap.map_host_id;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_fake_vf_info);
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h
new file mode 100644
index 0000000..7157e97
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h
@@ -0,0 +1,346 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_CFG_H
+#define HINIC3_HW_CFG_H
+
+#include <linux/types.h>
+#include "cfg_mgmt_mpu_cmd_defs.h"
+#include "hinic3_hwdev.h"
+
+#define CFG_MAX_CMD_TIMEOUT 30000 /* ms */
+
+enum {
+ CFG_FREE = 0,
+ CFG_BUSY = 1
+};
+
+/* start position for CEQs allocation, Max number of CEQs is 32 */
+enum {
+ CFG_RDMA_CEQ_BASE = 0
+};
+
+/* RDMA resource */
+#define K_UNIT BIT(10)
+#define M_UNIT BIT(20)
+#define G_UNIT BIT(30)
+
+#define VIRTIO_BASE_VQ_SIZE 2048U
+#define VIRTIO_DEFAULT_VQ_SIZE 8192U
+
+/* L2NIC */
+#define HINIC3_CFG_MAX_QP 256
+
+/* RDMA */
+#define RDMA_RSVD_QPS 2
+#define ROCE_MAX_WQES (8 * K_UNIT - 1)
+#define IWARP_MAX_WQES (8 * K_UNIT)
+
+#define RDMA_MAX_SQ_SGE 16
+
+#define ROCE_MAX_RQ_SGE 16
+
+/* value changed should change ROCE_MAX_WQE_BB_PER_WR synchronously */
+#define RDMA_MAX_SQ_DESC_SZ (256)
+
+/* (256B(cache_line_len) - 16B(ctrl_seg_len) - 48B(max_task_seg_len)) */
+#define ROCE_MAX_SQ_INLINE_DATA_SZ 192
+
+#define ROCE_MAX_RQ_DESC_SZ 256
+
+#define ROCE_QPC_ENTRY_SZ 512
+
+#define WQEBB_SZ 64
+
+#define ROCE_RDMARC_ENTRY_SZ 32
+#define ROCE_MAX_QP_INIT_RDMA 128
+#define ROCE_MAX_QP_DEST_RDMA 128
+
+#define ROCE_MAX_SRQ_WQES (16 * K_UNIT - 1)
+#define ROCE_RSVD_SRQS 0
+#define ROCE_MAX_SRQ_SGE 15
+#define ROCE_SRQC_ENTERY_SZ 64
+
+#define RDMA_MAX_CQES (8 * M_UNIT - 1)
+#define RDMA_RSVD_CQS 0
+
+#define RDMA_CQC_ENTRY_SZ 128
+
+#define RDMA_CQE_SZ 64
+#define RDMA_RSVD_MRWS 128
+#define RDMA_MPT_ENTRY_SZ 64
+#define RDMA_NUM_MTTS (1 * G_UNIT)
+#define LOG_MTT_SEG 9
+#define MTT_ENTRY_SZ 8
+#define LOG_RDMARC_SEG 3
+
+#define LOCAL_ACK_DELAY 15
+#define RDMA_NUM_PORTS 1
+#define ROCE_MAX_MSG_SZ (2 * G_UNIT)
+
+#define DB_PAGE_SZ (4 * K_UNIT)
+#define DWQE_SZ 256
+
+#define NUM_PD (128 * K_UNIT)
+#define RSVD_PD 0
+
+#define MAX_XRCDS (64 * K_UNIT)
+#define RSVD_XRCDS 0
+
+#define MAX_GID_PER_PORT 128
+#define GID_ENTRY_SZ 32
+#define RSVD_LKEY ((RDMA_RSVD_MRWS - 1) << 8)
+#define NUM_COMP_VECTORS 32
+#define PAGE_SZ_CAP ((1UL << 12) | (1UL << 16) | (1UL << 21))
+#define ROCE_MODE 1
+
+#define MAX_FRPL_LEN 511
+#define MAX_PKEYS 1
+
+/* ToE */
+#define TOE_PCTX_SZ 1024
+#define TOE_CQC_SZ 64
+
+/* IoE */
+#define IOE_PCTX_SZ 512
+
+/* FC */
+#define FC_PCTX_SZ 256
+#define FC_CCTX_SZ 256
+#define FC_SQE_SZ 128
+#define FC_SCQC_SZ 64
+#define FC_SCQE_SZ 64
+#define FC_SRQC_SZ 64
+#define FC_SRQE_SZ 32
+
+/* OVS */
+#define OVS_PCTX_SZ 512
+
+/* PPA */
+#define PPA_PCTX_SZ 512
+
+/* IPsec */
+#define IPSEC_SACTX_SZ 512
+
+struct dev_sf_svc_attr {
+	bool ft_en; /* business enable flag (not including RDMA) */
+	bool ft_pf_en; /* In FPGA test, whether the VF's resource lives in the PF:
+			* 0 - VF, 1 - PF; a VF doesn't need this bit.
+			*/
+	bool rdma_en;
+	bool rdma_pf_en;/* In FPGA test, whether the VF's RDMA resource lives in the PF:
+			* 0 - VF, 1 - PF; a VF doesn't need this bit.
+			*/
+};
+
+enum intr_type {
+ INTR_TYPE_MSIX,
+ INTR_TYPE_MSI,
+ INTR_TYPE_INT,
+ INTR_TYPE_NONE,
+	/* PXE and OVS need single-threaded processing;
+	 * synchronous messages must use the poll-wait interface.
+	 */
+};
+
+/* device capability */
+struct service_cap {
+ struct dev_sf_svc_attr sf_svc_attr;
+ u16 svc_type; /* user input service type */
+	u16 chip_svc_type; /* HW supported service type, see service_bit_define */
+
+ u8 host_id;
+ u8 ep_id;
+ u8 er_id; /* PF/VF's ER */
+ u8 port_id; /* PF/VF's physical port */
+
+ /* Host global resources */
+ u16 host_total_function;
+ u8 pf_num;
+ u8 pf_id_start;
+ u16 vf_num; /* max numbers of vf in current host */
+ u16 vf_id_start;
+ u8 host_oq_id_mask_val;
+ u8 host_valid_bitmap;
+ u8 master_host_id;
+ u8 srv_multi_host_mode;
+ u16 virtio_vq_size;
+
+ u8 hot_plug_disable;
+ u8 bond_create_mode;
+ u8 os_hot_replace;
+ u8 rsvd1;
+
+ u8 timer_pf_num;
+ u8 timer_pf_id_start;
+ u16 timer_vf_num;
+ u16 timer_vf_id_start;
+
+ u8 flexq_en;
+ u8 cos_valid_bitmap;
+ u8 port_cos_valid_bitmap;
+ u16 max_vf; /* max VF number that PF supported */
+
+ u16 fake_vf_start_id;
+ u16 fake_vf_num;
+ u32 fake_vf_max_pctx;
+ u16 fake_vf_bfilter_start_addr;
+ u16 fake_vf_bfilter_len;
+
+ u16 fake_vf_num_cfg;
+
+ /* DO NOT get interrupt_type from firmware */
+ enum intr_type interrupt_type;
+
+ bool sf_en; /* stateful business status */
+ u8 timer_en; /* 0:disable, 1:enable */
+ u8 bloomfilter_en; /* 0:disable, 1:enable */
+
+ u8 lb_mode;
+ u8 smf_pg;
+
+ /* For test */
+ u32 test_mode;
+ u32 test_qpc_num;
+ u32 test_qpc_resvd_num;
+ u32 test_page_size_reorder;
+ bool test_xid_alloc_mode;
+ bool test_gpa_check_enable;
+ u8 test_qpc_alloc_mode;
+ u8 test_scqc_alloc_mode;
+
+ u32 test_max_conn_num;
+ u32 test_max_cache_conn_num;
+ u32 test_scqc_num;
+ u32 test_mpt_num;
+ u32 test_scq_resvd_num;
+ u32 test_mpt_recvd_num;
+ u32 test_hash_num;
+ u32 test_reorder_num;
+
+ u32 max_connect_num; /* PF/VF maximum connection number(1M) */
+ /* The maximum connections which can be stick to cache memory, max 1K */
+ u16 max_stick2cache_num;
+ /* Starting address in cache memory for bloom filter, 64Bytes aligned */
+ u16 bfilter_start_addr;
+ /* Length for bloom filter, aligned on 64Bytes. The size is length*64B.
+ * Bloom filter memory size + 1 must be power of 2.
+ * The maximum memory size of bloom filter is 4M
+ */
+ u16 bfilter_len;
+ /* The size of hash bucket tables, align on 64 entries.
+ * Be used to AND (&) the hash value. Bucket Size +1 must be power of 2.
+ * The maximum number of hash bucket is 4M
+ */
+ u16 hash_bucket_num;
+
+ u8 map_host_id;
+ u8 fake_vf_en;
+ u8 fake_vf_start_bit;
+ u8 fake_vf_end_bit;
+ u8 fake_vf_page_bit;
+
+ struct nic_service_cap nic_cap; /* NIC capability */
+ struct rdma_service_cap rdma_cap; /* RDMA capability */
+ struct fc_service_cap fc_cap; /* FC capability */
+ struct toe_service_cap toe_cap; /* ToE capability */
+ struct ovs_service_cap ovs_cap; /* OVS capability */
+ struct ipsec_service_cap ipsec_cap; /* IPsec capability */
+ struct ppa_service_cap ppa_cap; /* PPA capability */
+ struct vbs_service_cap vbs_cap; /* VBS capability */
+};
+
+struct svc_cap_info {
+ u32 func_idx;
+ struct service_cap cap;
+};
+
+struct cfg_eq {
+ enum hinic3_service_type type;
+ int eqn;
+	int free; /* 1 - allocated, 0 - freed */
+};
+
+struct cfg_eq_info {
+ struct cfg_eq *eq;
+
+ u8 num_ceq;
+
+ u8 num_ceq_remain;
+
+	/* mutex used for allocating EQs */
+ struct mutex eq_mutex;
+};
+
+struct irq_alloc_info_st {
+ enum hinic3_service_type type;
+	int free; /* 1 - allocated, 0 - freed */
+ struct irq_info info;
+};
+
+struct cfg_irq_info {
+ struct irq_alloc_info_st *alloc_info;
+ u16 num_total;
+ u16 num_irq_remain;
+ u16 num_irq_hw; /* device max irq number */
+
+	/* mutex used for allocating IRQs */
+ struct mutex irq_mutex;
+};
+
+#define VECTOR_THRESHOLD 2
+
+struct cfg_mgmt_info {
+ struct hinic3_hwdev *hwdev;
+ struct service_cap svc_cap;
+ struct cfg_eq_info eq_info; /* EQ */
+ struct cfg_irq_info irq_param_info; /* IRQ */
+ u32 func_seq_num; /* temporary */
+};
+
+#define CFG_SERVICE_FT_EN (CFG_SERVICE_MASK_VBS | CFG_SERVICE_MASK_TOE | \
+ CFG_SERVICE_MASK_IPSEC | CFG_SERVICE_MASK_FC | \
+ CFG_SERVICE_MASK_VIRTIO | CFG_SERVICE_MASK_OVS)
+#define CFG_SERVICE_RDMA_EN CFG_SERVICE_MASK_ROCE
+
+#define IS_NIC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_NIC)
+#define IS_ROCE_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_ROCE)
+#define IS_VBS_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_VBS)
+#define IS_TOE_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_TOE)
+#define IS_IPSEC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_IPSEC)
+#define IS_FC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_FC)
+#define IS_OVS_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_OVS)
+#define IS_FT_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_FT_EN)
+#define IS_RDMA_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_RDMA_EN)
+#define IS_RDMA_ENABLE(dev) \
+ ((dev)->cfg_mgmt->svc_cap.sf_svc_attr.rdma_en)
+#define IS_PPA_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_PPA)
+#define IS_MIGR_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_MIGRATE)
+#define IS_BIFUR_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_BIFUR)
+
+int init_cfg_mgmt(struct hinic3_hwdev *dev);
+
+void free_cfg_mgmt(struct hinic3_hwdev *dev);
+
+int init_capability(struct hinic3_hwdev *dev);
+
+void free_capability(struct hinic3_hwdev *dev);
+
+int hinic3_init_vf_dev_cap(void *hwdev);
+
+u8 hinic3_get_bond_create_mode(void *dev);
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c
new file mode 100644
index 0000000..47264f9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c
@@ -0,0 +1,1681 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/msi.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/semaphore.h>
+#include <linux/interrupt.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_cmdq.h"
+#include "mpu_inband_cmd_defs.h"
+#include "mpu_board_defs.h"
+#include "hinic3_hw_comm.h"
+#include "vram_common.h"
+
+#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0
+#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8
+#define HINIC3_MSIX_CNT_COALESC_TIMER_SHIFT 8
+#define HINIC3_MSIX_CNT_PENDING_SHIFT 8
+#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29
+
+#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU
+#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU
+#define HINIC3_MSIX_CNT_COALESC_TIMER_MASK 0xFFU
+#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU
+#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U
+
+#define HINIC3_MSIX_CNT_SET(val, member) \
+ (((val) & HINIC3_MSIX_CNT_##member##_MASK) << \
+ HINIC3_MSIX_CNT_##member##_SHIFT)
+
+#define DEFAULT_RX_BUF_SIZE ((u16)0xB)
+
+enum hinic3_rx_buf_size {
+ HINIC3_RX_BUF_SIZE_32B = 0x20,
+ HINIC3_RX_BUF_SIZE_64B = 0x40,
+ HINIC3_RX_BUF_SIZE_96B = 0x60,
+ HINIC3_RX_BUF_SIZE_128B = 0x80,
+ HINIC3_RX_BUF_SIZE_192B = 0xC0,
+ HINIC3_RX_BUF_SIZE_256B = 0x100,
+ HINIC3_RX_BUF_SIZE_384B = 0x180,
+ HINIC3_RX_BUF_SIZE_512B = 0x200,
+ HINIC3_RX_BUF_SIZE_768B = 0x300,
+ HINIC3_RX_BUF_SIZE_1K = 0x400,
+ HINIC3_RX_BUF_SIZE_1_5K = 0x600,
+ HINIC3_RX_BUF_SIZE_2K = 0x800,
+ HINIC3_RX_BUF_SIZE_3K = 0xC00,
+ HINIC3_RX_BUF_SIZE_4K = 0x1000,
+ HINIC3_RX_BUF_SIZE_8K = 0x2000,
+ HINIC3_RX_BUF_SIZE_16K = 0x4000,
+};
+
+const int hinic3_hw_rx_buf_size[] = {
+ HINIC3_RX_BUF_SIZE_32B,
+ HINIC3_RX_BUF_SIZE_64B,
+ HINIC3_RX_BUF_SIZE_96B,
+ HINIC3_RX_BUF_SIZE_128B,
+ HINIC3_RX_BUF_SIZE_192B,
+ HINIC3_RX_BUF_SIZE_256B,
+ HINIC3_RX_BUF_SIZE_384B,
+ HINIC3_RX_BUF_SIZE_512B,
+ HINIC3_RX_BUF_SIZE_768B,
+ HINIC3_RX_BUF_SIZE_1K,
+ HINIC3_RX_BUF_SIZE_1_5K,
+ HINIC3_RX_BUF_SIZE_2K,
+ HINIC3_RX_BUF_SIZE_3K,
+ HINIC3_RX_BUF_SIZE_4K,
+ HINIC3_RX_BUF_SIZE_8K,
+ HINIC3_RX_BUF_SIZE_16K,
+};
+
+static inline int comm_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, buf_in,
+ in_size, buf_out, out_size, 0,
+ HINIC3_CHANNEL_COMM);
+}
+
+static inline int comm_msg_to_mgmt_sync_ch(struct hinic3_hwdev *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u16 channel)
+{
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, buf_in,
+ in_size, buf_out, out_size, 0, channel);
+}
+
+int hinic3_get_interrupt_cfg(void *dev, struct interrupt_info *info,
+ u16 channel)
+{
+ struct hinic3_hwdev *hwdev = dev;
+ struct comm_cmd_msix_config msix_cfg;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&msix_cfg, 0, sizeof(msix_cfg));
+ msix_cfg.func_id = hinic3_global_func_id(hwdev);
+ msix_cfg.msix_index = info->msix_index;
+ msix_cfg.opcode = MGMT_MSG_CMD_OP_GET;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_CFG_MSIX_CTRL_REG,
+ &msix_cfg, sizeof(msix_cfg), &msix_cfg,
+ &out_size, channel);
+ if (err || !out_size || msix_cfg.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to get interrupt config, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, msix_cfg.head.status, out_size, channel);
+ return -EINVAL;
+ }
+
+ info->lli_credit_limit = msix_cfg.lli_credit_cnt;
+ info->lli_timer_cfg = msix_cfg.lli_timer_cnt;
+ info->pending_limt = msix_cfg.pending_cnt;
+ info->coalesc_timer_cfg = msix_cfg.coalesce_timer_cnt;
+ info->resend_timer_cfg = msix_cfg.resend_timer_cnt;
+
+ return 0;
+}
+
+int hinic3_set_interrupt_cfg_direct(void *hwdev, struct interrupt_info *info,
+ u16 channel)
+{
+ struct comm_cmd_msix_config msix_cfg;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&msix_cfg, 0, sizeof(msix_cfg));
+ msix_cfg.func_id = hinic3_global_func_id(hwdev);
+ msix_cfg.msix_index = (u16)info->msix_index;
+ msix_cfg.opcode = MGMT_MSG_CMD_OP_SET;
+
+ msix_cfg.lli_credit_cnt = info->lli_credit_limit;
+ msix_cfg.lli_timer_cnt = info->lli_timer_cfg;
+ msix_cfg.pending_cnt = info->pending_limt;
+ msix_cfg.coalesce_timer_cnt = info->coalesc_timer_cfg;
+ msix_cfg.resend_timer_cnt = info->resend_timer_cfg;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_CFG_MSIX_CTRL_REG,
+ &msix_cfg, sizeof(msix_cfg), &msix_cfg,
+ &out_size, channel);
+ if (err || !out_size || msix_cfg.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set interrupt config, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, msix_cfg.head.status, out_size, channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
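+/* Read back the current MSI-X config and keep any fields the caller did not
+ * mark as set before writing the new values.
+ */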
+int hinic3_set_interrupt_cfg(void *dev, struct interrupt_info info, u16 channel)
+{
+ struct interrupt_info temp_info;
+ struct hinic3_hwdev *hwdev = dev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ temp_info.msix_index = info.msix_index;
+
+ err = hinic3_get_interrupt_cfg(hwdev, &temp_info, channel);
+ if (err != 0)
+ return -EINVAL;
+
+ if (!info.lli_set) {
+ info.lli_credit_limit = temp_info.lli_credit_limit;
+ info.lli_timer_cfg = temp_info.lli_timer_cfg;
+ }
+
+ if (!info.interrupt_coalesc_set) {
+ info.pending_limt = temp_info.pending_limt;
+ info.coalesc_timer_cfg = temp_info.coalesc_timer_cfg;
+ info.resend_timer_cfg = temp_info.resend_timer_cfg;
+ }
+
+ return hinic3_set_interrupt_cfg_direct(hwdev, &info, channel);
+}
+EXPORT_SYMBOL(hinic3_set_interrupt_cfg);
+
+void hinic3_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx,
+ u8 clear_resend_en)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 msix_ctrl = 0, addr;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ msix_ctrl = HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX) |
+ HINIC3_MSI_CLR_INDIR_SET(clear_resend_en, RESEND_TIMER_CLR);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, msix_ctrl);
+}
+EXPORT_SYMBOL(hinic3_misx_intr_clear_resend_bit);
+
+int hinic3_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size,
+ u16 channel)
+{
+ struct comm_cmd_wq_page_size page_size_info;
+ u16 out_size = sizeof(page_size_info);
+ int err;
+
+ memset(&page_size_info, 0, sizeof(page_size_info));
+ page_size_info.func_id = func_idx;
+ page_size_info.page_size = HINIC3_PAGE_SIZE_HW(page_size);
+ page_size_info.opcode = MGMT_MSG_CMD_OP_SET;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_CFG_PAGESIZE,
+ &page_size_info, sizeof(page_size_info),
+ &page_size_info, &out_size, channel);
+ if (err || !out_size || page_size_info.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set wq page size, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, page_size_info.head.status, out_size, channel);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_func_reset(void *dev, u16 func_id, u64 reset_flag, u16 channel)
+{
+ struct comm_cmd_func_reset func_reset;
+ struct hinic3_hwdev *hwdev = dev;
+ u16 out_size = sizeof(func_reset);
+ int err = 0;
+ int is_in_kexec;
+
+ if (!dev) {
+ pr_err("Invalid para: dev is null.\n");
+ return -EINVAL;
+ }
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ sdk_info(hwdev->dev_hdl, "Skip function reset!\n");
+ return 0;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Function is reset, flag: 0x%llx, channel:0x%x\n",
+ reset_flag, channel);
+
+ memset(&func_reset, 0, sizeof(func_reset));
+ func_reset.func_id = func_id;
+ func_reset.reset_flag = reset_flag;
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_FUNC_RESET,
+ &func_reset, sizeof(func_reset),
+ &func_reset, &out_size, channel);
+ if (err || !out_size || func_reset.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to reset func resources, reset_flag 0x%llx, err: %d, status: 0x%x, out_size: 0x%x\n",
+ reset_flag, err, func_reset.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_func_reset);
+
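+/* Map an rx buffer size in bytes to its HW size index; unsupported sizes fall
+ * back to the default 2K index.
+ */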
+static u16 get_hw_rx_buf_size(int rx_buf_sz)
+{
+ u16 num_hw_types =
+ sizeof(hinic3_hw_rx_buf_size) /
+ sizeof(hinic3_hw_rx_buf_size[0]);
+ u16 i;
+
+ for (i = 0; i < num_hw_types; i++) {
+ if (hinic3_hw_rx_buf_size[i] == rx_buf_sz)
+ return i;
+ }
+
+ pr_err("Chip can't support rx buf size of %d\n", rx_buf_sz);
+
+ return DEFAULT_RX_BUF_SIZE; /* default 2K */
+}
+
+int hinic3_set_root_ctxt(void *hwdev, u32 rq_depth, u32 sq_depth, int rx_buf_sz,
+ u16 channel)
+{
+ struct comm_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_id = hinic3_global_func_id(hwdev);
+
+ root_ctxt.set_cmdq_depth = 0;
+ root_ctxt.cmdq_depth = 0;
+
+ root_ctxt.lro_en = 1;
+
+ root_ctxt.rq_depth = (u16)ilog2(rq_depth);
+ root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz);
+ root_ctxt.sq_depth = (u16)ilog2(sq_depth);
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_SET_VAT,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, channel);
+ if (err || !out_size || root_ctxt.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set root context, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, root_ctxt.head.status, out_size, channel);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_root_ctxt);
+
+int hinic3_clean_root_ctxt(void *hwdev, u16 channel)
+{
+ struct comm_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_id = hinic3_global_func_id(hwdev);
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_SET_VAT,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, channel);
+ if (err || !out_size || root_ctxt.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+			"Failed to clean root context, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, root_ctxt.head.status, out_size, channel);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_clean_root_ctxt);
+
+int hinic3_set_cmdq_depth(void *hwdev, u16 cmdq_depth)
+{
+ struct comm_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_id = hinic3_global_func_id(hwdev);
+
+ root_ctxt.set_cmdq_depth = 1;
+ root_ctxt.cmdq_depth = (u8)ilog2(cmdq_depth);
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_VAT, &root_ctxt,
+ sizeof(root_ctxt), &root_ctxt, &out_size);
+ if (err || !out_size || root_ctxt.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set cmdq depth, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, root_ctxt.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_set_cmdq_ctxt(struct hinic3_hwdev *hwdev, u8 cmdq_id,
+ struct cmdq_ctxt_info *ctxt)
+{
+ struct comm_cmd_cmdq_ctxt cmdq_ctxt;
+ u16 out_size = sizeof(cmdq_ctxt);
+ int err;
+
+ memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt));
+ memcpy(&cmdq_ctxt.ctxt, ctxt, sizeof(struct cmdq_ctxt_info));
+ cmdq_ctxt.func_id = hinic3_global_func_id(hwdev);
+ cmdq_ctxt.cmdq_id = cmdq_id;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_CMDQ_CTXT,
+ &cmdq_ctxt, sizeof(cmdq_ctxt),
+ &cmdq_ctxt, &out_size);
+ if (err || !out_size || cmdq_ctxt.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set cmdq ctxt, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cmdq_ctxt.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_set_ceq_ctrl_reg(struct hinic3_hwdev *hwdev, u16 q_id,
+ u32 ctrl0, u32 ctrl1)
+{
+ struct comm_cmd_ceq_ctrl_reg ceq_ctrl;
+ u16 out_size = sizeof(ceq_ctrl);
+ int err;
+
+ memset(&ceq_ctrl, 0, sizeof(ceq_ctrl));
+ ceq_ctrl.func_id = hinic3_global_func_id(hwdev);
+ ceq_ctrl.q_id = q_id;
+ ceq_ctrl.ctrl0 = ctrl0;
+ ceq_ctrl.ctrl1 = ctrl1;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_CEQ_CTRL_REG,
+ &ceq_ctrl, sizeof(ceq_ctrl),
+ &ceq_ctrl, &out_size);
+ if (err || !out_size || ceq_ctrl.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set ceq %u ctrl reg, err: %d status: 0x%x, out_size: 0x%x\n",
+ q_id, err, ceq_ctrl.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_set_dma_attr_tbl(struct hinic3_hwdev *hwdev, u8 entry_idx, u8 st, u8 at, u8 ph,
+ u8 no_snooping, u8 tph_en)
+{
+ struct comm_cmd_dma_attr_config dma_attr;
+ u16 out_size = sizeof(dma_attr);
+ int err;
+
+ memset(&dma_attr, 0, sizeof(dma_attr));
+ dma_attr.func_id = hinic3_global_func_id(hwdev);
+ dma_attr.entry_idx = entry_idx;
+ dma_attr.st = st;
+ dma_attr.at = at;
+ dma_attr.ph = ph;
+ dma_attr.no_snooping = no_snooping;
+ dma_attr.tph_en = tph_en;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_DMA_ATTR, &dma_attr, sizeof(dma_attr),
+ &dma_attr, &out_size);
+ if (err || !out_size || dma_attr.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set dma attr, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, dma_attr.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_set_bdf_ctxt(void *hwdev, u8 bus, u8 device, u8 function)
+{
+ struct comm_cmd_bdf_info bdf_info;
+ u16 out_size = sizeof(bdf_info);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&bdf_info, 0, sizeof(bdf_info));
+ bdf_info.function_idx = hinic3_global_func_id(hwdev);
+ bdf_info.bus = bus;
+ bdf_info.device = device;
+ bdf_info.function = function;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SEND_BDF_INFO,
+ &bdf_info, sizeof(bdf_info),
+ &bdf_info, &out_size);
+ if (err || !out_size || bdf_info.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set bdf info to MPU, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, bdf_info.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_sync_time(void *hwdev, u64 time)
+{
+ struct comm_cmd_sync_time time_info;
+ u16 out_size = sizeof(time_info);
+ int err;
+
+ memset(&time_info, 0, sizeof(time_info));
+ time_info.mstime = time;
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SYNC_TIME, &time_info,
+ sizeof(time_info), &time_info, &out_size);
+ if (err || time_info.head.status || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to sync time to mgmt, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, time_info.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_set_ppf_flr_type(void *hwdev, enum hinic3_ppf_flr_type flr_type)
+{
+ struct comm_cmd_ppf_flr_type_set flr_type_set;
+ u16 out_size = sizeof(struct comm_cmd_ppf_flr_type_set);
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&flr_type_set, 0, sizeof(flr_type_set));
+ flr_type_set.func_id = hinic3_global_func_id(hwdev);
+ flr_type_set.ppf_flr_type = flr_type;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_FLR_TYPE,
+ &flr_type_set, sizeof(flr_type_set),
+ &flr_type_set, &out_size);
+ if (err || !out_size || flr_type_set.head.status) {
+ sdk_err(dev->dev_hdl, "Failed to set ppf flr type, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, flr_type_set.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_ppf_flr_type);
+
+int hinic3_set_ppf_tbl_hotreplace_flag(void *hwdev, u8 flag)
+{
+ struct comm_cmd_ppf_tbl_htrp_config htr_info = {};
+ u16 out_size = sizeof(struct comm_cmd_ppf_tbl_htrp_config);
+ struct hinic3_hwdev *dev = hwdev;
+ int ret;
+
+ if (!hwdev) {
+		pr_err("Invalid para: hwdev is null for setting ppf table hotreplace flag\n");
+ return -EINVAL;
+ }
+
+ htr_info.hotreplace_flag = flag;
+ ret = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_TBL_HTR_FLG,
+ &htr_info, sizeof(htr_info), &htr_info, &out_size);
+ if (ret != 0 || htr_info.head.status != 0) {
+ sdk_err(dev->dev_hdl, "Send mbox to mpu failed in sdk, ret:%d, status:%u",
+ ret, htr_info.head.status);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_ppf_tbl_hotreplace_flag);
+
+static int hinic3_get_fw_ver(struct hinic3_hwdev *hwdev, enum hinic3_fw_ver_type type,
+ u8 *mgmt_ver, u8 version_size, u16 channel)
+{
+ struct comm_cmd_get_fw_version fw_ver;
+ u16 out_size = sizeof(fw_ver);
+ int err;
+
+ if (!hwdev || !mgmt_ver)
+ return -EINVAL;
+
+ memset(&fw_ver, 0, sizeof(fw_ver));
+ fw_ver.fw_type = type;
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_GET_FW_VERSION,
+ &fw_ver, sizeof(fw_ver), &fw_ver,
+ &out_size, channel);
+ if (err || !out_size || fw_ver.head.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to get fw version, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, fw_ver.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ memcpy(mgmt_ver, fw_ver.ver, version_size);
+
+ return 0;
+}
+
+int hinic3_get_mgmt_version(void *hwdev, u8 *mgmt_ver, u8 version_size,
+ u16 channel)
+{
+ return hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_MPU, mgmt_ver,
+ version_size, channel);
+}
+EXPORT_SYMBOL(hinic3_get_mgmt_version);
+
+int hinic3_get_fw_version(void *hwdev, struct hinic3_fw_version *fw_ver,
+ u16 channel)
+{
+ int err;
+
+ if (!hwdev || !fw_ver)
+ return -EINVAL;
+
+ err = hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_MPU,
+ fw_ver->mgmt_ver, sizeof(fw_ver->mgmt_ver),
+ channel);
+ if (err != 0)
+ return err;
+
+ err = hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_NPU,
+ fw_ver->microcode_ver,
+ sizeof(fw_ver->microcode_ver), channel);
+ if (err != 0)
+ return err;
+
+ return hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_BOOT,
+ fw_ver->boot_ver, sizeof(fw_ver->boot_ver),
+ channel);
+}
+EXPORT_SYMBOL(hinic3_get_fw_version);
+
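+/* Negotiate communication features with management: SET pushes the driver's
+ * feature qwords, GET reads back the firmware-supported set.
+ */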
+static int hinic3_comm_features_nego(void *hwdev, u8 opcode, u64 *s_feature,
+ u16 size)
+{
+ struct comm_cmd_feature_nego feature_nego;
+ u16 out_size = sizeof(feature_nego);
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD)
+ return -EINVAL;
+
+ memset(&feature_nego, 0, sizeof(feature_nego));
+ feature_nego.func_id = hinic3_global_func_id(hwdev);
+ feature_nego.opcode = opcode;
+	if (opcode == MGMT_MSG_CMD_OP_SET)
+		memcpy(feature_nego.s_feature, s_feature, (size * sizeof(u64)));
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_FEATURE_NEGO,
+ &feature_nego, sizeof(feature_nego),
+ &feature_nego, &out_size);
+ if (err || !out_size || feature_nego.head.status) {
+ sdk_err(dev->dev_hdl, "Failed to negotiate feature, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, feature_nego.head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (opcode == MGMT_MSG_CMD_OP_GET)
+ memcpy(s_feature, feature_nego.s_feature, (COMM_MAX_FEATURE_QWORD * sizeof(u64)));
+
+ return 0;
+}
+
+int hinic3_get_comm_features(void *hwdev, u64 *s_feature, u16 size)
+{
+ return hinic3_comm_features_nego(hwdev, MGMT_MSG_CMD_OP_GET, s_feature,
+ size);
+}
+
+int hinic3_set_comm_features(void *hwdev, u64 *s_feature, u16 size)
+{
+ return hinic3_comm_features_nego(hwdev, MGMT_MSG_CMD_OP_SET, s_feature,
+ size);
+}
+
+int hinic3_comm_channel_detect(struct hinic3_hwdev *hwdev)
+{
+ struct comm_cmd_channel_detect channel_detect_info;
+ u16 out_size = sizeof(channel_detect_info);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&channel_detect_info, 0, sizeof(channel_detect_info));
+ channel_detect_info.func_id = hinic3_global_func_id(hwdev);
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_CHANNEL_DETECT,
+ &channel_detect_info, sizeof(channel_detect_info),
+ &channel_detect_info, &out_size);
+ if ((channel_detect_info.head.status != HINIC3_MGMT_CMD_UNSUPPORTED &&
+ channel_detect_info.head.status) || err || !out_size) {
+ sdk_err(hwdev->dev_hdl, "Failed to send channel detect, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, channel_detect_info.head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_func_tmr_bitmap_set(void *hwdev, u16 func_id, bool en)
+{
+ struct comm_cmd_func_tmr_bitmap_op bitmap_op;
+ u16 out_size = sizeof(bitmap_op);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&bitmap_op, 0, sizeof(bitmap_op));
+ bitmap_op.func_id = func_id;
+ bitmap_op.opcode = en ? FUNC_TMR_BITMAP_ENABLE : FUNC_TMR_BITMAP_DISABLE;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_FUNC_TMR_BITMAT,
+ &bitmap_op, sizeof(bitmap_op),
+ &bitmap_op, &out_size);
+ if (err || !out_size || bitmap_op.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set timer bitmap, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, bitmap_op.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int ppf_ht_gpa_malloc(struct hinic3_hwdev *hwdev, struct hinic3_page_addr *pg0,
+ struct hinic3_page_addr *pg1)
+{
+ pg0->virt_addr = dma_zalloc_coherent(hwdev->dev_hdl,
+ HINIC3_HT_GPA_PAGE_SIZE,
+ &pg0->phys_addr, GFP_KERNEL);
+ if (!pg0->virt_addr) {
+ sdk_err(hwdev->dev_hdl, "Alloc pg0 page addr failed\n");
+ return -EFAULT;
+ }
+
+ pg1->virt_addr = dma_zalloc_coherent(hwdev->dev_hdl,
+ HINIC3_HT_GPA_PAGE_SIZE,
+ &pg1->phys_addr, GFP_KERNEL);
+ if (!pg1->virt_addr) {
+ sdk_err(hwdev->dev_hdl, "Alloc pg1 page addr failed\n");
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static void ppf_ht_gpa_free(struct hinic3_hwdev *hwdev, struct hinic3_page_addr *pg0,
+ struct hinic3_page_addr *pg1)
+{
+ if (pg0->virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HINIC3_HT_GPA_PAGE_SIZE, pg0->virt_addr,
+ (dma_addr_t)(pg0->phys_addr));
+ pg0->virt_addr = NULL;
+ }
+ if (pg1->virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HINIC3_HT_GPA_PAGE_SIZE, pg1->virt_addr,
+ (dma_addr_t)(pg1->phys_addr));
+ pg1->virt_addr = NULL;
+ }
+}
+
+static int ppf_ht_gpa_set(struct hinic3_hwdev *hwdev, struct hinic3_page_addr *pg0,
+ struct hinic3_page_addr *pg1)
+{
+ struct comm_cmd_ht_gpa ht_gpa_set;
+ u16 out_size = sizeof(ht_gpa_set);
+ int ret;
+
+ memset(&ht_gpa_set, 0, sizeof(ht_gpa_set));
+
+ ret = ppf_ht_gpa_malloc(hwdev, pg0, pg1);
+ if (ret)
+ return ret;
+
+ ht_gpa_set.host_id = hinic3_host_id(hwdev);
+ ht_gpa_set.page_pa0 = pg0->phys_addr;
+ ht_gpa_set.page_pa1 = pg1->phys_addr;
+ sdk_info(hwdev->dev_hdl, "PPF ht gpa set: page_addr0.pa=0x%llx, page_addr1.pa=0x%llx\n",
+ pg0->phys_addr, pg1->phys_addr);
+ ret = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_HT_GPA,
+ &ht_gpa_set, sizeof(ht_gpa_set),
+ &ht_gpa_set, &out_size);
+ if (ret || !out_size || ht_gpa_set.head.status) {
+ sdk_warn(hwdev->dev_hdl, "PPF ht gpa set failed, ret: %d, status: 0x%x, out_size: 0x%x\n",
+ ret, ht_gpa_set.head.status, out_size);
+ return -EFAULT;
+ }
+
+ hwdev->page_pa0.phys_addr = pg0->phys_addr;
+ hwdev->page_pa0.virt_addr = pg0->virt_addr;
+
+ hwdev->page_pa1.phys_addr = pg1->phys_addr;
+ hwdev->page_pa1.virt_addr = pg1->virt_addr;
+
+ return 0;
+}
+
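+/* Try to set the hash-table GPA pages up to the retry limit, keep the pair
+ * that succeeded and free the pages from failed attempts.
+ */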
+int hinic3_ppf_ht_gpa_init(void *dev)
+{
+ struct hinic3_page_addr page_addr0[HINIC3_PPF_HT_GPA_SET_RETRY_TIMES];
+ struct hinic3_page_addr page_addr1[HINIC3_PPF_HT_GPA_SET_RETRY_TIMES];
+ struct hinic3_hwdev *hwdev = dev;
+ int ret;
+ int i;
+ int j;
+ size_t size;
+
+ if (!dev) {
+ pr_err("Invalid para: dev is null.\n");
+ return -EINVAL;
+ }
+
+ size = HINIC3_PPF_HT_GPA_SET_RETRY_TIMES * sizeof(page_addr0[0]);
+ memset(page_addr0, 0, size);
+ memset(page_addr1, 0, size);
+
+ for (i = 0; i < HINIC3_PPF_HT_GPA_SET_RETRY_TIMES; i++) {
+ ret = ppf_ht_gpa_set(hwdev, &page_addr0[i], &page_addr1[i]);
+ if (ret == 0)
+ break;
+ }
+
+ for (j = 0; j < i; j++)
+ ppf_ht_gpa_free(hwdev, &page_addr0[j], &page_addr1[j]);
+
+ if (i >= HINIC3_PPF_HT_GPA_SET_RETRY_TIMES) {
+ sdk_err(hwdev->dev_hdl, "PPF ht gpa init failed, retry times: %d\n",
+ i);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+void hinic3_ppf_ht_gpa_deinit(void *dev)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!dev) {
+ pr_err("Invalid para: dev is null.\n");
+ return;
+ }
+
+ if (hwdev->page_pa0.virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HINIC3_HT_GPA_PAGE_SIZE,
+ hwdev->page_pa0.virt_addr,
+ (dma_addr_t)(hwdev->page_pa0.phys_addr));
+ hwdev->page_pa0.virt_addr = NULL;
+ }
+
+ if (hwdev->page_pa1.virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HINIC3_HT_GPA_PAGE_SIZE,
+ hwdev->page_pa1.virt_addr,
+ (dma_addr_t)hwdev->page_pa1.phys_addr);
+ hwdev->page_pa1.virt_addr = NULL;
+ }
+}
+
+static int set_ppf_tmr_status(struct hinic3_hwdev *hwdev,
+ enum ppf_tmr_status status)
+{
+ struct comm_cmd_ppf_tmr_op op;
+ u16 out_size = sizeof(op);
+ int err = 0;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&op, 0, sizeof(op));
+
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return -EFAULT;
+
+ op.opcode = status;
+ op.ppf_id = hinic3_ppf_idx(hwdev);
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_TMR, &op,
+ sizeof(op), &op, &out_size);
+ if (err || !out_size || op.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set ppf timer, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, op.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_ppf_tmr_start(void *hwdev)
+{
+ int is_in_kexec;
+
+ if (!hwdev) {
+ pr_err("Hwdev pointer is NULL for starting ppf timer\n");
+ return -EINVAL;
+ }
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+		pr_info("Skip starting ppf timer during kexec\n");
+ return 0;
+ }
+
+ return set_ppf_tmr_status(hwdev, HINIC_PPF_TMR_FLAG_START);
+}
+EXPORT_SYMBOL(hinic3_ppf_tmr_start);
+
+int hinic3_ppf_tmr_stop(void *hwdev)
+{
+ if (!hwdev) {
+		pr_err("Hwdev pointer is NULL for stopping ppf timer\n");
+ return -EINVAL;
+ }
+
+ return set_ppf_tmr_status(hwdev, HINIC_PPF_TMR_FLAG_STOP);
+}
+EXPORT_SYMBOL(hinic3_ppf_tmr_stop);
+
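+/* Allocate one vram page: keep the buffer if its physical address is already
+ * aligned to the page size, otherwise reallocate with extra room and align
+ * inside it.
+ */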
+static int hi_vram_kalloc_align(struct hinic3_hwdev *hwdev, char *name,
+ u32 page_size, u32 page_num,
+ struct hinic3_dma_addr_align *mem_align)
+{
+ void *vaddr = NULL, *align_vaddr = NULL;
+ dma_addr_t paddr, align_paddr;
+ u64 real_size = page_size;
+ u64 align = page_size;
+
+ vaddr = (void *)hi_vram_kalloc(name, real_size);
+ if (vaddr == NULL) {
+ sdk_err(hwdev->dev_hdl, "vram kalloc failed, name:%s.\n", name);
+ return -ENOMEM;
+ }
+
+ paddr = (dma_addr_t)virt_to_phys(vaddr);
+ align_paddr = ALIGN(paddr, align);
+ /* align */
+ if (align_paddr == paddr) {
+ align_vaddr = vaddr;
+ goto out;
+ }
+
+ hi_vram_kfree((void *)vaddr, name, real_size);
+
+ /* realloc memory for align */
+ real_size = page_size + align;
+ vaddr = (void *)hi_vram_kalloc(name, real_size);
+ if (vaddr == NULL) {
+ sdk_err(hwdev->dev_hdl, "vram kalloc align failed, name:%s.\n", name);
+ return -ENOMEM;
+ }
+
+ paddr = (dma_addr_t)virt_to_phys(vaddr);
+ align_paddr = ALIGN(paddr, align);
+ align_vaddr = (void *)((u64)vaddr + (align_paddr - paddr));
+
+out:
+ mem_align->real_size = (u32)real_size;
+ mem_align->ori_vaddr = vaddr;
+ mem_align->ori_paddr = paddr;
+ mem_align->align_vaddr = align_vaddr;
+ mem_align->align_paddr = align_paddr;
+
+ return 0;
+}
+
+static void mqm_eqm_free_page_mem(struct hinic3_hwdev *hwdev)
+{
+ u32 i;
+ struct hinic3_dma_addr_align *page_addr;
+ int is_use_vram = get_use_vram_flag();
+ struct mqm_eqm_vram_name_s *mqm_eqm_vram_name = hwdev->mqm_eqm_vram_name;
+
+ page_addr = hwdev->mqm_att.brm_srch_page_addr;
+
+ for (i = 0; i < hwdev->mqm_att.page_num; i++) {
+ if (is_use_vram != 0) {
+ hi_vram_kfree(page_addr->ori_vaddr, mqm_eqm_vram_name[i].vram_name,
+ page_addr->real_size);
+ } else {
+ hinic3_dma_free_coherent_align(hwdev->dev_hdl, page_addr);
+ }
+ page_addr->ori_vaddr = NULL;
+ page_addr++;
+ }
+
+ kfree(mqm_eqm_vram_name);
+ hwdev->mqm_eqm_vram_name = NULL;
+}
+
+static int mqm_eqm_try_alloc_mem(struct hinic3_hwdev *hwdev, u32 page_size,
+ u32 page_num)
+{
+ struct hinic3_dma_addr_align *page_addr = hwdev->mqm_att.brm_srch_page_addr;
+ int is_use_vram = get_use_vram_flag();
+ struct mqm_eqm_vram_name_s *mqm_eqm_vram_name = NULL;
+ u32 valid_num = 0;
+ u32 flag = 1;
+ u32 i = 0;
+ int err;
+ u16 func_id;
+
+ mqm_eqm_vram_name = kzalloc(sizeof(struct mqm_eqm_vram_name_s) * page_num, GFP_KERNEL);
+ if (mqm_eqm_vram_name == NULL) {
+ sdk_err(hwdev->dev_hdl, "mqm eqm alloc vram name failed.\n");
+ return -ENOMEM;
+ }
+
+ hwdev->mqm_eqm_vram_name = mqm_eqm_vram_name;
+ func_id = hinic3_global_func_id(hwdev);
+
+ for (i = 0; i < page_num; i++) {
+ if (is_use_vram != 0) {
+ snprintf(mqm_eqm_vram_name[i].vram_name,
+ VRAM_NAME_MAX_LEN, "%s%u%s%u",
+ VRAM_CQM_GLB_FUNC_BASE, func_id, VRAM_NIC_MQM, i);
+ err = hi_vram_kalloc_align(
+ hwdev, mqm_eqm_vram_name[i].vram_name,
+ page_size, page_num, page_addr);
+ } else {
+ err = hinic3_dma_zalloc_coherent_align(hwdev->dev_hdl, page_size,
+ page_size, GFP_KERNEL, page_addr);
+ }
+ if (err) {
+ flag = 0;
+ break;
+ }
+ valid_num++;
+ page_addr++;
+ }
+
+ hwdev->mqm_att.page_num = valid_num;
+
+ if (flag == 1) {
+ hwdev->mqm_att.page_size = page_size;
+ } else {
+ mqm_eqm_free_page_mem(hwdev);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
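+/* Try page sizes from large to small (2M, 64K, 4K), deriving the page count
+ * from chunk_num for each size.
+ */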
+static int mqm_eqm_alloc_page_mem(struct hinic3_hwdev *hwdev)
+{
+ int ret = 0;
+ u32 page_num;
+
+ /* apply for 2M page, page number is chunk_num/1024 */
+ page_num = (hwdev->mqm_att.chunk_num + 0x3ff) >> 0xa;
+ ret = mqm_eqm_try_alloc_mem(hwdev, 0x2 * 0x400 * 0x400, page_num);
+ if (ret == 0) {
+ sdk_info(hwdev->dev_hdl, "[mqm_eqm_init] Alloc page_size 2M OK\n");
+ return 0;
+ }
+
+ /* apply for 64KB page, page number is chunk_num/32 */
+ page_num = (hwdev->mqm_att.chunk_num + 0x1f) >> 0x5;
+ ret = mqm_eqm_try_alloc_mem(hwdev, 0x40 * 0x400, page_num);
+ if (ret == 0) {
+ sdk_info(hwdev->dev_hdl, "[mqm_eqm_init] Alloc page_size 64K OK\n");
+ return 0;
+ }
+
+ /* apply for 4KB page, page number is chunk_num/2 */
+ page_num = (hwdev->mqm_att.chunk_num + 1) >> 1;
+ ret = mqm_eqm_try_alloc_mem(hwdev, 0x4 * 0x400, page_num);
+ if (ret == 0) {
+ sdk_info(hwdev->dev_hdl, "[mqm_eqm_init] Alloc page_size 4K OK\n");
+ return 0;
+ }
+
+ return ret;
+}
+
+static int mqm_eqm_set_cfg_2_hw(struct hinic3_hwdev *hwdev, u8 valid)
+{
+ struct comm_cmd_eqm_cfg info_eqm_cfg;
+ u16 out_size = sizeof(info_eqm_cfg);
+ int err;
+
+ memset(&info_eqm_cfg, 0, sizeof(info_eqm_cfg));
+
+ info_eqm_cfg.host_id = hinic3_host_id(hwdev);
+ info_eqm_cfg.page_size = hwdev->mqm_att.page_size;
+ info_eqm_cfg.valid = valid;
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_MQM_CFG_INFO,
+ &info_eqm_cfg, sizeof(info_eqm_cfg),
+ &info_eqm_cfg, &out_size);
+ if (err || !out_size || info_eqm_cfg.head.status) {
+		sdk_err(hwdev->dev_hdl, "Failed to set mqm cfg info, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, info_eqm_cfg.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+#define EQM_DATA_BUF_SIZE 1024
+
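+/* Report the 4K-aligned search-page GPAs (address bits above bit 12) to the
+ * MPU in batches of MQM_ATT_PAGE_NUM entries per message.
+ */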
+static int mqm_eqm_set_page_2_hw(struct hinic3_hwdev *hwdev)
+{
+ struct comm_cmd_eqm_search_gpa *info = NULL;
+ struct hinic3_dma_addr_align *page_addr = NULL;
+ void *send_buf = NULL;
+ u16 send_buf_size;
+ u32 i;
+ u64 *gpa_hi52 = NULL;
+ u64 gpa;
+ u32 num;
+ u32 start_idx;
+ int err = 0;
+ u16 out_size;
+ u8 cmd;
+
+ send_buf_size = sizeof(struct comm_cmd_eqm_search_gpa) +
+ EQM_DATA_BUF_SIZE;
+ send_buf = kzalloc(send_buf_size, GFP_KERNEL);
+ if (!send_buf) {
+ sdk_err(hwdev->dev_hdl, "Alloc virtual mem failed\r\n");
+ return -EFAULT;
+ }
+
+ page_addr = hwdev->mqm_att.brm_srch_page_addr;
+ info = (struct comm_cmd_eqm_search_gpa *)send_buf;
+
+ gpa_hi52 = info->gpa_hi52;
+ num = 0;
+ start_idx = 0;
+ cmd = COMM_MGMT_CMD_SET_MQM_SRCH_GPA;
+ for (i = 0; i < hwdev->mqm_att.page_num; i++) {
+ /* gpa align to 4K, save gpa[31:12] */
+ gpa = page_addr->align_paddr >> 12;
+ gpa_hi52[num] = gpa;
+ num++;
+ if (num == MQM_ATT_PAGE_NUM) {
+ info->num = num;
+ info->start_idx = start_idx;
+ info->host_id = hinic3_host_id(hwdev);
+ out_size = send_buf_size;
+ err = comm_msg_to_mgmt_sync(hwdev, cmd, info,
+ (u16)send_buf_size,
+ info, &out_size);
+ if (MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size,
+ info->head.status)) {
+ sdk_err(hwdev->dev_hdl, "Set mqm srch gpa fail, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, info->head.status, out_size);
+ err = -EFAULT;
+ goto set_page_2_hw_end;
+ }
+
+ gpa_hi52 = info->gpa_hi52;
+ num = 0;
+ start_idx = i + 1;
+ }
+ page_addr++;
+ }
+
+ if (num != 0) {
+ info->num = num;
+ info->start_idx = start_idx;
+ info->host_id = hinic3_host_id(hwdev);
+ out_size = send_buf_size;
+ err = comm_msg_to_mgmt_sync(hwdev, cmd, info,
+ (u16)send_buf_size, info,
+ &out_size);
+ if (MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size,
+ info->head.status)) {
+ sdk_err(hwdev->dev_hdl, "Set mqm srch gpa fail, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, info->head.status, out_size);
+ err = -EFAULT;
+ goto set_page_2_hw_end;
+ }
+ }
+
+set_page_2_hw_end:
+ kfree(send_buf);
+ return err;
+}
+
+static int get_eqm_num(struct hinic3_hwdev *hwdev, struct comm_cmd_get_eqm_num *info_eqm_fix)
+{
+ int ret;
+ u16 len = sizeof(*info_eqm_fix);
+
+ memset(info_eqm_fix, 0, sizeof(*info_eqm_fix));
+
+ ret = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_GET_MQM_FIX_INFO,
+ info_eqm_fix, sizeof(*info_eqm_fix), info_eqm_fix, &len);
+ if (ret || !len || info_eqm_fix->head.status) {
+ sdk_err(hwdev->dev_hdl, "Get mqm fix info fail,err: %d, status: 0x%x, out_size: 0x%x\n",
+ ret, info_eqm_fix->head.status, len);
+ return -EFAULT;
+ }
+
+ sdk_info(hwdev->dev_hdl, "get chunk_num: 0x%x, search_gpa_num: 0x%08x\n",
+ info_eqm_fix->chunk_num, info_eqm_fix->search_gpa_num);
+
+ return 0;
+}
+
+static int mqm_eqm_init(struct hinic3_hwdev *hwdev)
+{
+ struct comm_cmd_get_eqm_num info_eqm_fix;
+ int ret;
+ int is_in_kexec;
+
+ if (hwdev->hwif->attr.func_type != TYPE_PPF)
+ return 0;
+
+ ret = get_eqm_num(hwdev, &info_eqm_fix);
+ if (ret)
+ return ret;
+
+ if (!(info_eqm_fix.chunk_num))
+ return 0;
+
+ hwdev->mqm_att.chunk_num = info_eqm_fix.chunk_num;
+ hwdev->mqm_att.search_gpa_num = info_eqm_fix.search_gpa_num;
+ hwdev->mqm_att.page_size = 0;
+ hwdev->mqm_att.page_num = 0;
+
+ hwdev->mqm_att.brm_srch_page_addr =
+ kcalloc(hwdev->mqm_att.chunk_num, sizeof(struct hinic3_dma_addr_align), GFP_KERNEL);
+ if (!(hwdev->mqm_att.brm_srch_page_addr)) {
+ sdk_err(hwdev->dev_hdl, "Alloc virtual mem failed\r\n");
+ return -EFAULT;
+ }
+
+ ret = mqm_eqm_alloc_page_mem(hwdev);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "Alloc eqm page mem failed\r\n");
+ goto err_page;
+ }
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec == 0) {
+ ret = mqm_eqm_set_page_2_hw(hwdev);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "Set page to hw failed\r\n");
+ goto err_ecmd;
+ }
+ } else {
+		sdk_info(hwdev->dev_hdl, "Mqm db is not set to chip during os hot replace.\r\n");
+ }
+
+ ret = mqm_eqm_set_cfg_2_hw(hwdev, 1);
+ if (ret) {
+		sdk_err(hwdev->dev_hdl, "Set mqm cfg to hw failed\r\n");
+ goto err_ecmd;
+ }
+
+ sdk_info(hwdev->dev_hdl, "ppf_ext_db_init ok\r\n");
+
+ return 0;
+
+err_ecmd:
+ mqm_eqm_free_page_mem(hwdev);
+
+err_page:
+ kfree(hwdev->mqm_att.brm_srch_page_addr);
+
+ return ret;
+}
+
+static void mqm_eqm_deinit(struct hinic3_hwdev *hwdev)
+{
+ int ret;
+
+ if (hwdev->hwif->attr.func_type != TYPE_PPF)
+ return;
+
+ if (!(hwdev->mqm_att.chunk_num))
+ return;
+
+ mqm_eqm_free_page_mem(hwdev);
+ kfree(hwdev->mqm_att.brm_srch_page_addr);
+
+ ret = mqm_eqm_set_cfg_2_hw(hwdev, 0);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "Set mqm eqm cfg to chip fail! err: %d\n",
+ ret);
+ return;
+ }
+
+ hwdev->mqm_att.chunk_num = 0;
+ hwdev->mqm_att.search_gpa_num = 0;
+ hwdev->mqm_att.page_num = 0;
+ hwdev->mqm_att.page_size = 0;
+}
+
+int hinic3_ppf_ext_db_init(struct hinic3_hwdev *hwdev)
+{
+ int ret;
+
+ ret = mqm_eqm_init(hwdev);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "MQM eqm init fail!\n");
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_ppf_ext_db_deinit(struct hinic3_hwdev *hwdev)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hwdev->hwif->attr.func_type != TYPE_PPF)
+ return 0;
+
+ mqm_eqm_deinit(hwdev);
+
+ return 0;
+}
+
+#define HINIC3_FLR_TIMEOUT 1000
+
+static enum hinic3_wait_return check_flr_finish_handler(void *priv_data)
+{
+ struct hinic3_hwif *hwif = priv_data;
+ enum hinic3_pf_status status;
+
+ status = hinic3_get_pf_status(hwif);
+ if (status == HINIC3_PF_STATUS_FLR_FINISH_FLAG) {
+ hinic3_set_pf_status(hwif, HINIC3_PF_STATUS_ACTIVE_FLAG);
+ return WAIT_PROCESS_CPL;
+ }
+
+ return WAIT_PROCESS_WAITING;
+}
+
+static int wait_for_flr_finish(struct hinic3_hwif *hwif)
+{
+ return hinic3_wait_for_timeout(hwif, check_flr_finish_handler,
+ HINIC3_FLR_TIMEOUT, 0xa * USEC_PER_MSEC);
+}
+
+#define HINIC3_WAIT_CMDQ_IDLE_TIMEOUT 5000
+
+static enum hinic3_wait_return check_cmdq_stop_handler(void *priv_data)
+{
+ struct hinic3_hwdev *hwdev = priv_data;
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ enum hinic3_cmdq_type cmdq_type;
+
+ /* Stop waiting when card unpresent */
+ if (!hwdev->chip_present_flag)
+ return WAIT_PROCESS_CPL;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ if (!hinic3_cmdq_idle(&cmdqs->cmdq[cmdq_type]))
+ return WAIT_PROCESS_WAITING;
+ }
+
+ return WAIT_PROCESS_CPL;
+}
+
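+/* Disable the cmdqs and wait until they are idle; on timeout, log the busy
+ * cmdqs and restore the enable flag.
+ */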
+static int wait_cmdq_stop(struct hinic3_hwdev *hwdev)
+{
+ enum hinic3_cmdq_type cmdq_type;
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ int err;
+
+ if (!(cmdqs->status & HINIC3_CMDQ_ENABLE))
+ return 0;
+
+ cmdqs->status &= ~HINIC3_CMDQ_ENABLE;
+
+ err = hinic3_wait_for_timeout(hwdev, check_cmdq_stop_handler,
+ HINIC3_WAIT_CMDQ_IDLE_TIMEOUT,
+ USEC_PER_MSEC);
+ if (err == 0)
+ return 0;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ if (!hinic3_cmdq_idle(&cmdqs->cmdq[cmdq_type]))
+ sdk_err(hwdev->dev_hdl, "Cmdq %d is busy\n", cmdq_type);
+ }
+
+ cmdqs->status |= HINIC3_CMDQ_ENABLE;
+
+ return err;
+}
+
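+/* Flush sequence: stop the cmdqs, disable and flush doorbells, notify the
+ * firmware to start FLR, wait for FLR to finish (PF only), then re-enable
+ * doorbells and reinit the cmdq contexts.
+ */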
+static int hinic3_rx_tx_flush(struct hinic3_hwdev *hwdev, u16 channel, bool wait_io)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+ struct comm_cmd_clear_doorbell clear_db;
+ struct comm_cmd_clear_resource clr_res;
+ u16 out_size;
+ int err;
+ int ret = 0;
+
+ if ((HINIC3_FUNC_TYPE(hwdev) != TYPE_VF) && wait_io)
+ msleep(100); /* wait ucode 100 ms stop I/O */
+
+ err = wait_cmdq_stop(hwdev);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "CMDQ is still working, please check CMDQ timeout value is reasonable\n");
+ ret = err;
+ }
+
+ hinic3_disable_doorbell(hwif);
+
+ out_size = sizeof(clear_db);
+ memset(&clear_db, 0, sizeof(clear_db));
+ clear_db.func_id = HINIC3_HWIF_GLOBAL_IDX(hwif);
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_FLUSH_DOORBELL,
+ &clear_db, sizeof(clear_db),
+ &clear_db, &out_size, channel);
+ if (err != 0 || !out_size || clear_db.head.status) {
+ sdk_warn(hwdev->dev_hdl, "Failed to flush doorbell, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, clear_db.head.status, out_size, channel);
+ if (err != 0)
+ ret = err;
+ else
+ ret = -EFAULT;
+ }
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_pf_status(hwif, HINIC3_PF_STATUS_FLR_START_FLAG);
+ else
+ msleep(100); /* wait ucode 100 ms stop I/O */
+
+ memset(&clr_res, 0, sizeof(clr_res));
+ clr_res.func_id = HINIC3_HWIF_GLOBAL_IDX(hwif);
+
+ err = hinic3_msg_to_mgmt_no_ack(hwdev, HINIC3_MOD_COMM,
+ COMM_MGMT_CMD_START_FLUSH, &clr_res,
+ sizeof(clr_res), channel);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "Failed to notice flush message, err: %d, channel: 0x%x\n",
+ err, channel);
+ ret = err;
+ }
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF) {
+ err = wait_for_flr_finish(hwif);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "Wait firmware FLR timeout\n");
+ ret = err;
+ }
+ }
+
+ hinic3_enable_doorbell(hwif);
+
+ err = hinic3_reinit_cmdq_ctxts(hwdev);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "Failed to reinit cmdq\n");
+ ret = err;
+ }
+
+ return ret;
+}
+
+int hinic3_func_rx_tx_flush(void *hwdev, u16 channel, bool wait_io)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (dev->chip_present_flag == 0)
+ return 0;
+
+ return hinic3_rx_tx_flush(dev, channel, wait_io);
+}
+EXPORT_SYMBOL(hinic3_func_rx_tx_flush);
+
+int hinic3_get_board_info(void *hwdev, struct hinic3_board_info *info,
+ u16 channel)
+{
+ struct comm_cmd_board_info board_info;
+ u16 out_size = sizeof(board_info);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&board_info, 0, sizeof(board_info));
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_GET_BOARD_INFO,
+ &board_info, sizeof(board_info),
+ &board_info, &out_size, channel);
+ if (err || board_info.head.status || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get board info, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, board_info.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ memcpy(info, &board_info.info, sizeof(*info));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_board_info);
+
+int hinic3_get_hw_pf_infos(void *hwdev, struct hinic3_hw_pf_infos *infos,
+ u16 channel)
+{
+ struct comm_cmd_hw_pf_infos *pf_infos = NULL;
+ u16 out_size = sizeof(*pf_infos);
+ int err = 0;
+
+ if (!hwdev || !infos)
+ return -EINVAL;
+
+ pf_infos = kzalloc(sizeof(*pf_infos), GFP_KERNEL);
+ if (!pf_infos)
+ return -ENOMEM;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_GET_HW_PF_INFOS,
+ pf_infos, sizeof(*pf_infos),
+ pf_infos, &out_size, channel);
+ if (pf_infos->head.status || err || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get hw pf information, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, pf_infos->head.status, out_size, channel);
+ err = -EIO;
+ goto free_buf;
+ }
+
+ memcpy(infos, &pf_infos->infos, sizeof(struct hinic3_hw_pf_infos));
+
+free_buf:
+ kfree(pf_infos);
+ return err;
+}
+EXPORT_SYMBOL(hinic3_get_hw_pf_infos);
+
+int hinic3_get_global_attr(void *hwdev, struct comm_global_attr *attr)
+{
+ struct comm_cmd_get_glb_attr get_attr;
+ u16 out_size = sizeof(get_attr);
+ int err = 0;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_GET_GLOBAL_ATTR,
+ &get_attr, sizeof(get_attr), &get_attr,
+ &out_size);
+ if (err || !out_size || get_attr.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get global attribute, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, get_attr.head.status, out_size);
+ return -EIO;
+ }
+
+ memcpy(attr, &get_attr.attr, sizeof(struct comm_global_attr));
+
+ return 0;
+}
+
+int hinic3_set_func_svc_used_state(void *hwdev, u16 svc_type, u8 state,
+ u16 channel)
+{
+ struct comm_cmd_func_svc_used_state used_state;
+ u16 out_size = sizeof(used_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&used_state, 0, sizeof(used_state));
+ used_state.func_id = hinic3_global_func_id(hwdev);
+ used_state.svc_type = svc_type;
+ used_state.used_state = state;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev,
+ COMM_MGMT_CMD_SET_FUNC_SVC_USED_STATE,
+ &used_state, sizeof(used_state),
+ &used_state, &out_size, channel);
+ if (err || !out_size || used_state.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+	"Failed to set func service used state, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, used_state.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_func_svc_used_state);
+
+int hinic3_get_sml_table_info(void *hwdev, u32 tbl_id, u8 *node_id, u8 *instance_id)
+{
+ struct sml_table_id_info sml_table[TABLE_INDEX_MAX];
+ struct comm_cmd_get_sml_tbl_data sml_tbl;
+ u16 out_size = sizeof(sml_tbl);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (tbl_id >= TABLE_INDEX_MAX) {
+	sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl, "sml table index out of range [0, %u]\n",
+ TABLE_INDEX_MAX - 1);
+ return -EINVAL;
+ }
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_GET_SML_TABLE_INFO,
+ &sml_tbl, sizeof(sml_tbl), &sml_tbl, &out_size);
+ if (err || !out_size || sml_tbl.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get sml table information, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, sml_tbl.head.status, out_size);
+ return -EIO;
+ }
+
+ memcpy(sml_table, sml_tbl.tbl_data, sizeof(sml_table));
+
+ *node_id = sml_table[tbl_id].node_id;
+ *instance_id = sml_table[tbl_id].instance_id;
+
+ return 0;
+}
+
+int hinic3_activate_firmware(void *hwdev, u8 cfg_index)
+{
+ struct cmd_active_firmware activate_msg;
+ u16 out_size = sizeof(activate_msg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_PF)
+ return -EOPNOTSUPP;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&activate_msg, 0, sizeof(activate_msg));
+ activate_msg.index = cfg_index;
+
+ err = hinic3_pf_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, COMM_MGMT_CMD_ACTIVE_FW,
+ &activate_msg, sizeof(activate_msg),
+ &activate_msg, &out_size, FW_UPDATE_MGMT_TIMEOUT);
+ if (err || !out_size || activate_msg.msg_head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to activate firmware, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, activate_msg.msg_head.status, out_size);
+ err = activate_msg.msg_head.status ? activate_msg.msg_head.status : -EIO;
+ return err;
+ }
+
+ return 0;
+}
+
+int hinic3_switch_config(void *hwdev, u8 cfg_index)
+{
+ struct cmd_switch_cfg switch_cfg;
+ u16 out_size = sizeof(switch_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_PF)
+ return -EOPNOTSUPP;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&switch_cfg, 0, sizeof(switch_cfg));
+ switch_cfg.index = cfg_index;
+
+ err = hinic3_pf_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, COMM_MGMT_CMD_SWITCH_CFG,
+ &switch_cfg, sizeof(switch_cfg),
+ &switch_cfg, &out_size, FW_UPDATE_MGMT_TIMEOUT);
+ if (err || !out_size || switch_cfg.msg_head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to switch cfg, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, switch_cfg.msg_head.status, out_size);
+ err = switch_cfg.msg_head.status ? switch_cfg.msg_head.status : -EIO;
+ return err;
+ }
+
+ return 0;
+}
+
+bool hinic3_is_optical_module_mode(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (dev->board_info.board_type == BOARD_TYPE_STRG_4X25G_COMSTORAGE ||
+ dev->board_info.board_type == BOARD_TYPE_CAL_4X25G_COMSTORAGE ||
+ dev->board_info.board_type == BOARD_TYPE_CAL_2X100G_TCE_BACKPLANE)
+ return false;
+
+ return true;
+}
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h
new file mode 100644
index 0000000..e031ec4
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_COMM_H
+#define HINIC3_COMM_H
+
+#include <linux/types.h>
+
+#include "mpu_inband_cmd_defs.h"
+#include "hinic3_hwdev.h"
+
+#define MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size, status) \
+ ((err) || (status) || !(out_size))
+
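+/* Convert a page size in bytes to the hardware encoding: log2(size / 4KB) */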
+#define HINIC3_PAGE_SIZE_HW(pg_size) ((u8)ilog2((u32)((pg_size) >> 12)))
+
+enum func_tmr_bitmap_status {
+ FUNC_TMR_BITMAP_DISABLE,
+ FUNC_TMR_BITMAP_ENABLE,
+};
+
+enum ppf_tmr_status {
+ HINIC_PPF_TMR_FLAG_STOP,
+ HINIC_PPF_TMR_FLAG_START,
+};
+
+#define HINIC3_HT_GPA_PAGE_SIZE 4096UL
+#define HINIC3_PPF_HT_GPA_SET_RETRY_TIMES 10
+
+int hinic3_set_cmdq_depth(void *hwdev, u16 cmdq_depth);
+
+int hinic3_set_cmdq_ctxt(struct hinic3_hwdev *hwdev, u8 cmdq_id,
+ struct cmdq_ctxt_info *ctxt);
+
+int hinic3_ppf_ext_db_init(struct hinic3_hwdev *hwdev);
+
+int hinic3_ppf_ext_db_deinit(struct hinic3_hwdev *hwdev);
+
+int hinic3_set_ceq_ctrl_reg(struct hinic3_hwdev *hwdev, u16 q_id,
+ u32 ctrl0, u32 ctrl1);
+
+int hinic3_set_dma_attr_tbl(struct hinic3_hwdev *hwdev, u8 entry_idx, u8 st, u8 at, u8 ph,
+ u8 no_snooping, u8 tph_en);
+
+int hinic3_get_comm_features(void *hwdev, u64 *s_feature, u16 size);
+int hinic3_set_comm_features(void *hwdev, u64 *s_feature, u16 size);
+
+int hinic3_comm_channel_detect(struct hinic3_hwdev *hwdev);
+
+int hinic3_get_global_attr(void *hwdev, struct comm_global_attr *attr);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c
new file mode 100644
index 0000000..722fecd
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c
@@ -0,0 +1,530 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "mpu_inband_cmd.h"
+#include "hinic3_hw_mt.h"
+
+#define HINIC3_CMDQ_BUF_MAX_SIZE 2048U
+#define DW_WIDTH 4
+
+#define MSG_MAX_IN_SIZE (2048 * 1024)
+#define MSG_MAX_OUT_SIZE (2048 * 1024)
+
+#define API_CSR_MAX_RD_LEN (4 * 1024 * 1024)
+
+/* completion timeout interval, unit is millisecond */
+#define MGMT_MSG_UPDATE_TIMEOUT 200000U
+
+void free_buff_in(void *hwdev, const struct msg_module *nt_msg, void *buf_in)
+{
+ if (!buf_in)
+ return;
+
+ if (nt_msg->module == SEND_TO_NPU)
+ hinic3_free_cmd_buf(hwdev, buf_in);
+ else
+ kfree(buf_in);
+}
+
+void free_buff_out(void *hwdev, struct msg_module *nt_msg,
+ void *buf_out)
+{
+ if (!buf_out)
+ return;
+
+ if (nt_msg->module == SEND_TO_NPU &&
+ !nt_msg->npu_cmd.direct_resp)
+ hinic3_free_cmd_buf(hwdev, buf_out);
+ else
+ kfree(buf_out);
+}
+
+int alloc_buff_in(void *hwdev, struct msg_module *nt_msg,
+ u32 in_size, void **buf_in)
+{
+ void *msg_buf = NULL;
+
+ if (!in_size)
+ return 0;
+
+ if (nt_msg->module == SEND_TO_NPU) {
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+
+ if (in_size > HINIC3_CMDQ_BUF_MAX_SIZE) {
+ pr_err("Cmdq in size(%u) more than 2KB\n", in_size);
+ return -ENOMEM;
+ }
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ pr_err("Alloc cmdq cmd buffer failed in %s\n",
+ __func__);
+ return -ENOMEM;
+ }
+ msg_buf = cmd_buf->buf;
+ *buf_in = (void *)cmd_buf;
+ cmd_buf->size = (u16)in_size;
+ } else {
+ if (in_size > MSG_MAX_IN_SIZE) {
+ pr_err("In size(%u) more than 2M\n", in_size);
+ return -ENOMEM;
+ }
+ msg_buf = kzalloc(in_size, GFP_KERNEL);
+ *buf_in = msg_buf;
+ }
+ if (!(*buf_in)) {
+ pr_err("Alloc buffer in failed\n");
+ return -ENOMEM;
+ }
+
+ if (copy_from_user(msg_buf, nt_msg->in_buf, in_size)) {
+ pr_err("%s:%d: Copy from user failed\n",
+ __func__, __LINE__);
+ free_buff_in(hwdev, nt_msg, *buf_in);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int alloc_buff_out(void *hwdev, struct msg_module *nt_msg,
+ u32 out_size, void **buf_out)
+{
+ if (!out_size)
+ return 0;
+
+ if (nt_msg->module == SEND_TO_NPU &&
+ !nt_msg->npu_cmd.direct_resp) {
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+
+ if (out_size > HINIC3_CMDQ_BUF_MAX_SIZE) {
+ pr_err("Cmdq out size(%u) more than 2KB\n", out_size);
+ return -ENOMEM;
+ }
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ *buf_out = (void *)cmd_buf;
+ } else {
+ if (out_size > MSG_MAX_OUT_SIZE) {
+ pr_err("out size(%u) more than 2M\n", out_size);
+ return -ENOMEM;
+ }
+ *buf_out = kzalloc(out_size, GFP_KERNEL);
+ }
+ if (!(*buf_out)) {
+ pr_err("Alloc buffer out failed\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+int copy_buf_out_to_user(struct msg_module *nt_msg,
+ u32 out_size, void *buf_out)
+{
+ int ret = 0;
+ void *msg_out = NULL;
+
+ if (out_size == 0 || !buf_out)
+ return 0;
+
+ if (nt_msg->module == SEND_TO_NPU &&
+ !nt_msg->npu_cmd.direct_resp)
+ msg_out = ((struct hinic3_cmd_buf *)buf_out)->buf;
+ else
+ msg_out = buf_out;
+
+ if (copy_to_user(nt_msg->out_buf, msg_out, out_size))
+ ret = -EFAULT;
+
+ return ret;
+}
+
+int get_func_type(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u16 func_type;
+
+ if (*out_size != sizeof(u16) || !buf_out) {
+	pr_err("Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(u16));
+ return -EFAULT;
+ }
+
+ func_type = hinic3_func_type(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+
+ *(u16 *)buf_out = func_type;
+ return 0;
+}
+
+int get_func_id(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u16 func_id;
+
+ if (*out_size != sizeof(u16) || !buf_out) {
+	pr_err("Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(u16));
+ return -EFAULT;
+ }
+
+ func_id = hinic3_global_func_id(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+ *(u16 *)buf_out = func_id;
+
+ return 0;
+}
+
+int get_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ return hinic3_dbg_get_hw_stats(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ buf_out, out_size);
+}
+
+int clear_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u16 size;
+
+ size = hinic3_dbg_clear_hw_stats(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+ if (*out_size != size) {
+	pr_err("Unexpected out buf size from user: %u, expect: %u\n",
+ *out_size, size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int get_self_test_result(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u32 result;
+
+ if (*out_size != sizeof(u32) || !buf_out) {
+	pr_err("Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(u32));
+ return -EFAULT;
+ }
+
+ result = hinic3_get_self_test_result(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+ *(u32 *)buf_out = result;
+
+ return 0;
+}
+
+int get_chip_faults_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u32 offset = 0;
+ struct nic_cmd_chip_fault_stats *fault_info = NULL;
+
+ if (!buf_in || !buf_out || *out_size != sizeof(*fault_info) ||
+ in_size != sizeof(*fault_info)) {
+	pr_err("Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*fault_info));
+ return -EFAULT;
+ }
+ fault_info = (struct nic_cmd_chip_fault_stats *)buf_in;
+ offset = fault_info->offset;
+
+ fault_info = (struct nic_cmd_chip_fault_stats *)buf_out;
+ hinic3_get_chip_fault_stats(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ fault_info->chip_fault_stats, offset);
+
+ return 0;
+}
+
+static u32 get_up_timeout_val(enum hinic3_mod_type mod, u16 cmd)
+{
+ if (mod == HINIC3_MOD_COMM &&
+ (cmd == COMM_MGMT_CMD_UPDATE_FW ||
+ cmd == COMM_MGMT_CMD_UPDATE_BIOS ||
+ cmd == COMM_MGMT_CMD_ACTIVE_FW ||
+ cmd == COMM_MGMT_CMD_SWITCH_CFG ||
+ cmd == COMM_MGMT_CMD_HOT_ACTIVE_FW))
+ return MGMT_MSG_UPDATE_TIMEOUT;
+
+	return 0; /* use the default mbox/apichain timeout */
+}
+
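+/* Route a user message to the management CPU via mailbox, CLP or API chain,
+ * depending on the requested api_type.
+ */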
+int send_to_mpu(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ enum hinic3_mod_type mod;
+ u32 timeout;
+ int ret = 0;
+ u16 cmd;
+
+ mod = (enum hinic3_mod_type)nt_msg->mpu_cmd.mod;
+ cmd = nt_msg->mpu_cmd.cmd;
+
+ if (nt_msg->mpu_cmd.api_type == API_TYPE_MBOX || nt_msg->mpu_cmd.api_type == API_TYPE_CLP) {
+ timeout = get_up_timeout_val(mod, cmd);
+
+ if (nt_msg->mpu_cmd.api_type == API_TYPE_MBOX)
+ ret = hinic3_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in, (u16)in_size,
+ buf_out, (u16 *)(u8 *)out_size, timeout,
+ HINIC3_CHANNEL_DEFAULT);
+ else
+ ret = hinic3_clp_to_mgmt(hwdev, mod, cmd, buf_in, (u16)in_size,
+ buf_out, (u16 *)out_size);
+ if (ret) {
+ pr_err("Message to mgmt cpu return fail, mod: %d, cmd: %u\n", mod, cmd);
+ return ret;
+ }
+ } else if (nt_msg->mpu_cmd.api_type == API_TYPE_API_CHAIN_BYPASS) {
+ pr_err("Unsupported api_type %u\n", nt_msg->mpu_cmd.api_type);
+ return -EINVAL;
+ } else if (nt_msg->mpu_cmd.api_type == API_TYPE_API_CHAIN_TO_MPU) {
+ timeout = get_up_timeout_val(mod, cmd);
+ if (hinic3_pcie_itf_id(hwdev) != SPU_HOST_ID)
+ ret = hinic3_msg_to_mgmt_api_chain_sync(hwdev, mod, cmd, buf_in,
+ (u16)in_size, buf_out,
+ (u16 *)(u8 *)out_size, timeout);
+ else
+ ret = hinic3_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in, (u16)in_size,
+ buf_out, (u16 *)(u8 *)out_size, timeout,
+ HINIC3_CHANNEL_DEFAULT);
+ if (ret) {
+ pr_err("Message to mgmt api chain cpu return fail, mod: %d, cmd: %u\n",
+ mod, cmd);
+ return ret;
+ }
+ } else {
+ pr_err("Unsupported api_type %u\n", nt_msg->mpu_cmd.api_type);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+int send_to_npu(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ int ret = 0;
+ u8 cmd;
+ enum hinic3_mod_type mod;
+
+ mod = (enum hinic3_mod_type)nt_msg->npu_cmd.mod;
+ cmd = nt_msg->npu_cmd.cmd;
+
+ if (nt_msg->npu_cmd.direct_resp) {
+ ret = hinic3_cmdq_direct_resp(hwdev, mod, cmd,
+ buf_in, buf_out, 0,
+ HINIC3_CHANNEL_DEFAULT);
+ if (ret)
+ pr_err("Send direct cmdq failed, err: %d\n", ret);
+ } else {
+ ret = hinic3_cmdq_detail_resp(hwdev, mod, cmd, buf_in, buf_out,
+ NULL, 0, HINIC3_CHANNEL_DEFAULT);
+ if (ret)
+ pr_err("Send detail cmdq failed, err: %d\n", ret);
+ }
+
+ return ret;
+}
+
+static int sm_rd16(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u16 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd16(hwdev, node, instance, id, &val1);
+ if (ret != 0) {
+	pr_err("Get sm ctr information (16 bits) failed!\n");
+ val1 = 0xffff;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd16_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u16 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd16_clear(hwdev, node, instance, id, &val1);
+ if (ret != 0) {
+	pr_err("Get sm ctr clear information (16 bits) failed!\n");
+ val1 = 0xffff;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd32(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u32 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd32(hwdev, node, instance, id, &val1);
+ if (ret) {
+	pr_err("Get sm ctr information (32 bits) failed!\n");
+ val1 = ~0;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd32_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u32 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd32_clear(hwdev, node, instance, id, &val1);
+ if (ret) {
+	pr_err("Get sm ctr clear information (32 bits) failed!\n");
+ val1 = ~0;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd64_pair(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1 = 0, val2 = 0;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64_pair(hwdev, node, instance, id, &val1, &val2);
+ if (ret) {
+	pr_err("Get sm ctr information (64 bits pair) failed!\n");
+ val1 = ~0;
+ val2 = ~0;
+ }
+
+ buf_out->val1 = val1;
+ buf_out->val2 = val2;
+
+ return ret;
+}
+
+static int sm_rd64_pair_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1 = 0;
+ u64 val2 = 0;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64_pair_clear(hwdev, node, instance, id, &val1,
+ &val2);
+ if (ret) {
+	pr_err("Get sm ctr clear information (64 bits pair) failed!\n");
+ val1 = ~0;
+ val2 = ~0;
+ }
+
+ buf_out->val1 = val1;
+ buf_out->val2 = val2;
+
+ return ret;
+}
+
+static int sm_rd64(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64(hwdev, node, instance, id, &val1);
+ if (ret) {
+	pr_err("Get sm ctr information (64 bits) failed!\n");
+ val1 = ~0;
+ }
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd64_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64_clear(hwdev, node, instance, id, &val1);
+ if (ret) {
+	pr_err("Get sm ctr clear information (64 bits) failed!\n");
+ val1 = ~0;
+ }
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+typedef int (*sm_module)(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out);
+
+struct sm_module_handle {
+ enum sm_cmd_type sm_cmd_name;
+ sm_module sm_func;
+};
+
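+/* Dispatch table mapping SM counter read commands to their handlers */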
+const struct sm_module_handle sm_module_cmd_handle[] = {
+ {SM_CTR_RD16, sm_rd16},
+ {SM_CTR_RD32, sm_rd32},
+ {SM_CTR_RD64_PAIR, sm_rd64_pair},
+ {SM_CTR_RD64, sm_rd64},
+ {SM_CTR_RD16_CLEAR, sm_rd16_clear},
+ {SM_CTR_RD32_CLEAR, sm_rd32_clear},
+ {SM_CTR_RD64_PAIR_CLEAR, sm_rd64_pair_clear},
+ {SM_CTR_RD64_CLEAR, sm_rd64_clear}
+};
+
+int send_to_sm(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct sm_in_st *sm_in = buf_in;
+ struct sm_out_st *sm_out = buf_out;
+ u32 msg_formate;
+ int index, num_cmds = ARRAY_LEN(sm_module_cmd_handle);
+ int ret = 0;
+
+ if (!nt_msg || !buf_in || !buf_out ||
+ in_size != sizeof(*sm_in) || *out_size != sizeof(*sm_out)) {
+	pr_err("Unexpected out buf size: %u, in buf size: %u\n",
+ *out_size, in_size);
+ return -EINVAL;
+ }
+ msg_formate = nt_msg->msg_formate;
+
+ for (index = 0; index < num_cmds; index++) {
+ if (msg_formate != sm_module_cmd_handle[index].sm_cmd_name)
+ continue;
+
+ ret = sm_module_cmd_handle[index].sm_func(hwdev, (u32)sm_in->id,
+ (u8)sm_in->instance,
+ (u8)sm_in->node, sm_out);
+ break;
+ }
+
+ if (index == num_cmds) {
+ pr_err("Can't find callback for %d\n", msg_formate);
+ return -EINVAL;
+ }
+ if (ret != 0)
+	pr_err("Get sm information failed, id: %d, instance: %d, node: %d\n",
+ sm_in->id, sm_in->instance, sm_in->node);
+
+ *out_size = sizeof(struct sm_out_st);
+
+ return ret;
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h
new file mode 100644
index 0000000..9330200
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_MT_H
+#define HINIC3_HW_MT_H
+
+#include "hinic3_lld.h"
+
+struct sm_in_st {
+ int node;
+ int id;
+ int instance;
+};
+
+struct sm_out_st {
+ u64 val1;
+ u64 val2;
+};
+
+struct up_log_msg_st {
+ u32 rd_len;
+ u32 addr;
+};
+
+struct csr_write_st {
+ u32 rd_len;
+ u32 addr;
+ u8 *data;
+};
+
+int get_func_type(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_func_id(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int clear_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_self_test_result(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_chip_faults_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c
new file mode 100644
index 0000000..ac80b63
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c
@@ -0,0 +1,2222 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/completion.h>
+#include <linux/semaphore.h>
+#include <linux/interrupt.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_eqs.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_mbox.h"
+#include "hinic3_cmdq.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_multi_host_mgmt.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_cqm.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_devlink.h"
+#include "hinic3_hwdev.h"
+
+static unsigned int wq_page_order = HINIC3_MAX_WQ_PAGE_SIZE_ORDER;
+module_param(wq_page_order, uint, 0444);
+MODULE_PARM_DESC(wq_page_order, "Set wq page size order, wq page size is 4K * (2 ^ wq_page_order) - default is 8");
+
+enum hinic3_pcie_nosnoop {
+ HINIC3_PCIE_SNOOP = 0,
+ HINIC3_PCIE_NO_SNOOP = 1,
+};
+
+enum hinic3_pcie_tph {
+ HINIC3_PCIE_TPH_DISABLE = 0,
+ HINIC3_PCIE_TPH_ENABLE = 1,
+};
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_SHIFT 0
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_MASK 0x3FF
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_SET(val, member) \
+ (((u32)(val) & HINIC3_DMA_ATTR_INDIR_##member##_MASK) << \
+ HINIC3_DMA_ATTR_INDIR_##member##_SHIFT)
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_CLEAR(val, member) \
+ ((val) & (~(HINIC3_DMA_ATTR_INDIR_##member##_MASK \
+ << HINIC3_DMA_ATTR_INDIR_##member##_SHIFT)))
+
+#define HINIC3_DMA_ATTR_ENTRY_ST_SHIFT 0
+#define HINIC3_DMA_ATTR_ENTRY_AT_SHIFT 8
+#define HINIC3_DMA_ATTR_ENTRY_PH_SHIFT 10
+#define HINIC3_DMA_ATTR_ENTRY_NO_SNOOPING_SHIFT 12
+#define HINIC3_DMA_ATTR_ENTRY_TPH_EN_SHIFT 13
+
+#define HINIC3_DMA_ATTR_ENTRY_ST_MASK 0xFF
+#define HINIC3_DMA_ATTR_ENTRY_AT_MASK 0x3
+#define HINIC3_DMA_ATTR_ENTRY_PH_MASK 0x3
+#define HINIC3_DMA_ATTR_ENTRY_NO_SNOOPING_MASK 0x1
+#define HINIC3_DMA_ATTR_ENTRY_TPH_EN_MASK 0x1
+
+#define HINIC3_DMA_ATTR_ENTRY_SET(val, member) \
+ (((u32)(val) & HINIC3_DMA_ATTR_ENTRY_##member##_MASK) << \
+ HINIC3_DMA_ATTR_ENTRY_##member##_SHIFT)
+
+#define HINIC3_DMA_ATTR_ENTRY_CLEAR(val, member) \
+ ((val) & (~(HINIC3_DMA_ATTR_ENTRY_##member##_MASK \
+ << HINIC3_DMA_ATTR_ENTRY_##member##_SHIFT)))
+
+#define HINIC3_PCIE_ST_DISABLE 0
+#define HINIC3_PCIE_AT_DISABLE 0
+#define HINIC3_PCIE_PH_DISABLE 0
+
+#define PCIE_MSIX_ATTR_ENTRY 0
+
+#define HINIC3_CHIP_PRESENT 1
+#define HINIC3_CHIP_ABSENT 0
+
+#define HINIC3_DEAULT_EQ_MSIX_PENDING_LIMIT 0
+#define HINIC3_DEAULT_EQ_MSIX_COALESC_TIMER_CFG 0xFF
+#define HINIC3_DEAULT_EQ_MSIX_RESEND_TIMER_CFG 7
+
+#define HINIC3_HWDEV_WQ_NAME "hinic3_hardware"
+#define HINIC3_WQ_MAX_REQ 10
+
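+/* Each slave host owns one bit in the multi-host slave status register */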
+#define SLAVE_HOST_STATUS_CLEAR(host_id, val) ((val) & (~(1U << (host_id))))
+#define SLAVE_HOST_STATUS_SET(host_id, enable) (((u8)(enable) & 1U) << (host_id))
+#define SLAVE_HOST_STATUS_GET(host_id, val) (!!((val) & (1U << (host_id))))
+
+#ifdef HAVE_HOT_REPLACE_FUNC
+ extern int get_partition_id(void);
+#else
+ static int get_partition_id(void) { return 0; }
+#endif
+
+void set_slave_host_enable(void *hwdev, u8 host_id, bool enable)
+{
+ u32 reg_val;
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF)
+ return;
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_HOST_SLAVE_STATUS_ADDR);
+
+ reg_val = SLAVE_HOST_STATUS_CLEAR(host_id, reg_val);
+ reg_val |= SLAVE_HOST_STATUS_SET(host_id, enable);
+ hinic3_hwif_write_reg(dev->hwif, HINIC3_MULT_HOST_SLAVE_STATUS_ADDR, reg_val);
+
+ sdk_info(dev->dev_hdl, "Set slave host %d status %d, reg value: 0x%x\n",
+ host_id, enable, reg_val);
+}
+
+int hinic3_get_slave_host_enable(void *hwdev, u8 host_id, u8 *slave_en)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ u32 reg_val;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "hwdev should be ppf\n");
+ return -EINVAL;
+ }
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_HOST_SLAVE_STATUS_ADDR);
+ *slave_en = SLAVE_HOST_STATUS_GET(host_id, reg_val);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_slave_host_enable);
+
+int hinic3_get_slave_bitmap(void *hwdev, u8 *slave_host_bitmap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct service_cap *cap = NULL;
+
+ if (!dev || !slave_host_bitmap)
+ return -EINVAL;
+
+ cap = &dev->cfg_mgmt->svc_cap;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "hwdev should be ppf\n");
+ return -EINVAL;
+ }
+
+ *slave_host_bitmap = cap->host_valid_bitmap & (~(1U << cap->master_host_id));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_slave_bitmap);
+
+void set_func_host_mode(struct hinic3_hwdev *hwdev, enum hinic3_func_mode mode)
+{
+ switch (mode) {
+ case FUNC_MOD_MULTI_BM_MASTER:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host BM master host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_BM_MASTER;
+ break;
+ case FUNC_MOD_MULTI_BM_SLAVE:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host BM slave host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_BM_SLAVE;
+ break;
+ case FUNC_MOD_MULTI_VM_MASTER:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host VM master host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_VM_MASTER;
+ break;
+ case FUNC_MOD_MULTI_VM_SLAVE:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host VM slave host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_VM_SLAVE;
+ break;
+ default:
+ hwdev->func_mode = FUNC_MOD_NORMAL_HOST;
+ break;
+ }
+}
+
+static void hinic3_init_host_mode_pre(struct hinic3_hwdev *hwdev)
+{
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+ u8 host_id = hwdev->hwif->attr.pci_intf_idx;
+
+ switch (cap->srv_multi_host_mode) {
+ case HINIC3_SDI_MODE_BM:
+ if (host_id == cap->master_host_id)
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_BM_MASTER);
+ else
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_BM_SLAVE);
+ break;
+ case HINIC3_SDI_MODE_VM:
+ if (host_id == cap->master_host_id)
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_VM_MASTER);
+ else
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_VM_SLAVE);
+ break;
+ default:
+ set_func_host_mode(hwdev, FUNC_MOD_NORMAL_HOST);
+ break;
+ }
+}
+
+static void hinic3_init_hot_plug_status(struct hinic3_hwdev *hwdev)
+{
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+
+ if (cap->hot_plug_disable) {
+ hwdev->hot_plug_mode = HOT_PLUG_DISABLE;
+ } else {
+ hwdev->hot_plug_mode = HOT_PLUG_ENABLE;
+ }
+}
+
+static void hinic3_init_os_hot_replace(struct hinic3_hwdev *hwdev)
+{
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+
+ if (cap->os_hot_replace) {
+ hwdev->hot_replace_mode = HOT_REPLACE_ENABLE;
+ } else {
+ hwdev->hot_replace_mode = HOT_REPLACE_DISABLE;
+ }
+}
+
+static u8 hinic3_nic_sw_aeqe_handler(void *hwdev, u8 event, u8 *data)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev)
+ return 0;
+
+ sdk_err(dev->dev_hdl, "Received nic ucode aeq event type: 0x%x, data: 0x%llx\n",
+ event, *((u64 *)data));
+
+ if (event < HINIC3_NIC_FATAL_ERROR_MAX)
+ atomic_inc(&dev->hw_stats.nic_ucode_event_stats[event]);
+
+ return 0;
+}
+
+static void hinic3_init_heartbeat_detect(struct hinic3_hwdev *hwdev);
+static void hinic3_destroy_heartbeat_detect(struct hinic3_hwdev *hwdev);
+
+typedef void (*mgmt_event_cb)(void *handle, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+struct mgmt_event_handle {
+ u16 cmd;
+ mgmt_event_cb proc;
+};
+
+static int pf_handle_vf_comm_mbox(void *pri_handle,
+ u16 vf_id, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = pri_handle;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ sdk_warn(hwdev->dev_hdl, "Unsupported vf mbox event %u to process\n",
+ cmd);
+
+ return 0;
+}
+
+static int vf_handle_pf_comm_mbox(void *pri_handle,
+ u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = pri_handle;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ sdk_warn(hwdev->dev_hdl, "Unsupported pf mbox event %u to process\n",
+ cmd);
+ return 0;
+}
+
+static void chip_fault_show(struct hinic3_hwdev *hwdev,
+ struct hinic3_fault_event *event)
+{
+ char fault_level[FAULT_LEVEL_MAX][FAULT_SHOW_STR_LEN + 1] = {
+ "fatal", "reset", "host", "flr", "general", "suggestion"};
+ char level_str[FAULT_SHOW_STR_LEN + 1];
+ u8 level;
+ int ret;
+
+ memset(level_str, 0, FAULT_SHOW_STR_LEN + 1);
+ level = event->event.chip.err_level;
+ if (level < FAULT_LEVEL_MAX) {
+ ret = strscpy(level_str, fault_level[level],
+ FAULT_SHOW_STR_LEN);
+ if (ret < 0)
+ return;
+ } else {
+ ret = strscpy(level_str, "Unknown", FAULT_SHOW_STR_LEN);
+ if (ret < 0)
+ return;
+ }
+
+ if (level == FAULT_LEVEL_SERIOUS_FLR)
+ dev_err(hwdev->dev_hdl, "err_level: %u [%s], flr func_id: %u\n",
+ level, level_str, event->event.chip.func_id);
+
+ dev_err(hwdev->dev_hdl,
+ "Module_id: 0x%x, err_type: 0x%x, err_level: %u[%s], err_csr_addr: 0x%08x, err_csr_value: 0x%08x\n",
+ event->event.chip.node_id,
+ event->event.chip.err_type, level, level_str,
+ event->event.chip.err_csr_addr,
+ event->event.chip.err_csr_value);
+}
+
+static void fault_report_show(struct hinic3_hwdev *hwdev,
+ struct hinic3_fault_event *event)
+{
+ char fault_type[FAULT_TYPE_MAX][FAULT_SHOW_STR_LEN + 1] = {
+ "chip", "ucode", "mem rd timeout", "mem wr timeout",
+ "reg rd timeout", "reg wr timeout", "phy fault", "tsensor fault"};
+ char type_str[FAULT_SHOW_STR_LEN + 1] = {0};
+ struct fault_event_stats *fault = NULL;
+ int ret;
+
+ sdk_err(hwdev->dev_hdl, "Fault event report received, func_id: %u\n",
+ hinic3_global_func_id(hwdev));
+
+ fault = &hwdev->hw_stats.fault_event_stats;
+
+ if (event->type < FAULT_TYPE_MAX) {
+ ret = strscpy(type_str, fault_type[event->type], sizeof(type_str));
+ if (ret < 0)
+ return;
+ atomic_inc(&fault->fault_type_stat[event->type]);
+ } else {
+ ret = strscpy(type_str, "Unknown", sizeof(type_str));
+ if (ret < 0)
+ return;
+ }
+
+ sdk_err(hwdev->dev_hdl, "Fault type: %u [%s]\n", event->type, type_str);
+	/* 0x0 to 0x3 below are indexes into the event->event.val array */
+ sdk_err(hwdev->dev_hdl, "Fault val[0]: 0x%08x, val[1]: 0x%08x, val[2]: 0x%08x, val[3]: 0x%08x\n",
+ event->event.val[0x0], event->event.val[0x1],
+ event->event.val[0x2], event->event.val[0x3]);
+
+ hinic3_show_chip_err_info(hwdev);
+
+ switch (event->type) {
+ case FAULT_TYPE_CHIP:
+ chip_fault_show(hwdev, event);
+ break;
+ case FAULT_TYPE_UCODE:
+ sdk_err(hwdev->dev_hdl, "Cause_id: %u, core_id: %u, c_id: %u, epc: 0x%08x\n",
+ event->event.ucode.cause_id, event->event.ucode.core_id,
+ event->event.ucode.c_id, event->event.ucode.epc);
+ break;
+ case FAULT_TYPE_MEM_RD_TIMEOUT:
+ case FAULT_TYPE_MEM_WR_TIMEOUT:
+ sdk_err(hwdev->dev_hdl, "Err_csr_ctrl: 0x%08x, err_csr_data: 0x%08x, ctrl_tab: 0x%08x, mem_index: 0x%08x\n",
+ event->event.mem_timeout.err_csr_ctrl,
+ event->event.mem_timeout.err_csr_data,
+ event->event.mem_timeout.ctrl_tab, event->event.mem_timeout.mem_index);
+ break;
+ case FAULT_TYPE_REG_RD_TIMEOUT:
+ case FAULT_TYPE_REG_WR_TIMEOUT:
+ sdk_err(hwdev->dev_hdl, "Err_csr: 0x%08x\n", event->event.reg_timeout.err_csr);
+ break;
+ case FAULT_TYPE_PHY_FAULT:
+ sdk_err(hwdev->dev_hdl, "Op_type: %u, port_id: %u, dev_ad: %u, csr_addr: 0x%08x, op_data: 0x%08x\n",
+ event->event.phy_fault.op_type, event->event.phy_fault.port_id,
+ event->event.phy_fault.dev_ad, event->event.phy_fault.csr_addr,
+ event->event.phy_fault.op_data);
+ break;
+ default:
+ break;
+ }
+}
+
+static void fault_event_handler(void *dev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_cmd_fault_event *fault_event = NULL;
+ struct hinic3_fault_event *fault = NULL;
+ struct hinic3_event_info event_info;
+ struct hinic3_hwdev *hwdev = dev;
+ u8 fault_src = HINIC3_FAULT_SRC_TYPE_MAX;
+ u8 fault_level;
+
+ if (in_size != sizeof(*fault_event)) {
+ sdk_err(hwdev->dev_hdl, "Invalid fault event report, length: %u, should be %ld\n",
+ in_size, sizeof(*fault_event));
+ return;
+ }
+
+ fault_event = buf_in;
+ fault_report_show(hwdev, &fault_event->event);
+
+ if (fault_event->event.type == FAULT_TYPE_CHIP)
+ fault_level = fault_event->event.event.chip.err_level;
+ else
+ fault_level = FAULT_LEVEL_FATAL;
+
+ if (hwdev->event_callback) {
+ event_info.service = EVENT_SRV_COMM;
+ event_info.type = EVENT_COMM_FAULT;
+ fault = (void *)event_info.event_data;
+ memcpy(fault, &fault_event->event, sizeof(struct hinic3_fault_event));
+ fault->fault_level = fault_level;
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+ }
+
+ if (fault_event->event.type <= FAULT_TYPE_REG_WR_TIMEOUT)
+ fault_src = fault_event->event.type;
+ else if (fault_event->event.type == FAULT_TYPE_PHY_FAULT)
+ fault_src = HINIC3_FAULT_SRC_HW_PHY_FAULT;
+
+ hisdk3_fault_post_process(hwdev, fault_src, fault_level);
+}
+
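+/*
+ * Record an FFM interrupt. A repeat of the previous interrupt (same error
+ * CSR address and value) only increments the hit counter of the last record;
+ * otherwise a new record with the interrupt info and a GMT+8 timestamp is
+ * appended, up to FFM_RECORD_NUM_MAX entries.
+ */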
+static void ffm_event_record(struct hinic3_hwdev *dev, struct dbgtool_k_glb_info *dbgtool_info,
+ struct ffm_intr_info *intr)
+{
+ struct rtc_time rctm;
+ struct timeval txc;
+ u32 ffm_idx;
+ u32 last_err_csr_addr;
+ u32 last_err_csr_value;
+
+ ffm_idx = dbgtool_info->ffm->ffm_num;
+ last_err_csr_addr = dbgtool_info->ffm->last_err_csr_addr;
+ last_err_csr_value = dbgtool_info->ffm->last_err_csr_value;
+ if (ffm_idx < FFM_RECORD_NUM_MAX) {
+ if (intr->err_csr_addr == last_err_csr_addr &&
+ intr->err_csr_value == last_err_csr_value) {
+ dbgtool_info->ffm->ffm[ffm_idx - 1].times++;
+	sdk_err(dev->dev_hdl, "Received duplicate intr, ffm_idx: %u\n", ffm_idx - 1);
+ return;
+ }
+	sdk_err(dev->dev_hdl, "Received intr, ffm_idx: %u\n", ffm_idx);
+
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.node_id = intr->node_id;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_level = intr->err_level;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_type = intr->err_type;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_csr_addr = intr->err_csr_addr;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_csr_value = intr->err_csr_value;
+ dbgtool_info->ffm->last_err_csr_addr = intr->err_csr_addr;
+ dbgtool_info->ffm->last_err_csr_value = intr->err_csr_value;
+ dbgtool_info->ffm->ffm[ffm_idx].times = 1;
+
+ /* Obtain the current UTC time */
+ do_gettimeofday(&txc);
+
+	/* Convert to broken-down time in GMT+8 by adding 8 * 60 * 60 seconds */
+ rtc_time_to_tm((unsigned long)txc.tv_sec + 60 * 60 * 8, &rctm);
+
+ /* tm_year starts from 1900; 0->1900, 1->1901, and so on */
+ dbgtool_info->ffm->ffm[ffm_idx].year = (u16)(rctm.tm_year + 1900);
+ /* tm_mon starts from 0, 0 indicates January, and so on */
+ dbgtool_info->ffm->ffm[ffm_idx].mon = (u8)rctm.tm_mon + 1;
+ dbgtool_info->ffm->ffm[ffm_idx].mday = (u8)rctm.tm_mday;
+ dbgtool_info->ffm->ffm[ffm_idx].hour = (u8)rctm.tm_hour;
+ dbgtool_info->ffm->ffm[ffm_idx].min = (u8)rctm.tm_min;
+ dbgtool_info->ffm->ffm[ffm_idx].sec = (u8)rctm.tm_sec;
+
+ dbgtool_info->ffm->ffm_num++;
+ }
+}
+
+static void ffm_event_msg_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+ struct hinic3_hwdev *dev = hwdev;
+ struct card_node *card_info = NULL;
+ struct ffm_intr_info *intr = NULL;
+
+ if (in_size != sizeof(*intr)) {
+ sdk_err(dev->dev_hdl, "Invalid fault event report, length: %u, should be %ld.\n",
+ in_size, sizeof(*intr));
+ return;
+ }
+
+ intr = buf_in;
+
+ sdk_err(dev->dev_hdl, "node_id: 0x%x, err_type: 0x%x, err_level: %u, err_csr_addr: 0x%08x, err_csr_value: 0x%08x\n",
+ intr->node_id, intr->err_type, intr->err_level,
+ intr->err_csr_addr, intr->err_csr_value);
+
+ hinic3_show_chip_err_info(hwdev);
+
+ card_info = dev->chip_node;
+ dbgtool_info = card_info->dbgtool_info;
+
+ *out_size = sizeof(*intr);
+
+ if (!dbgtool_info)
+ return;
+
+ if (!dbgtool_info->ffm)
+ return;
+
+ ffm_event_record(dev, dbgtool_info, intr);
+}
+
+#define X_CSR_INDEX 30
+
+static void sw_watchdog_timeout_info_show(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct comm_info_sw_watchdog *watchdog_info = buf_in;
+ u32 stack_len, i, j, tmp;
+ u32 *dump_addr = NULL;
+ u64 *reg = NULL;
+
+ if (in_size != sizeof(*watchdog_info)) {
+ sdk_err(hwdev->dev_hdl, "Invalid mgmt watchdog report, length: %d, should be %ld\n",
+ in_size, sizeof(*watchdog_info));
+ return;
+ }
+
+ sdk_err(hwdev->dev_hdl, "Mgmt deadloop time: 0x%x 0x%x, task id: 0x%x, sp: 0x%llx\n",
+ watchdog_info->curr_time_h, watchdog_info->curr_time_l,
+ watchdog_info->task_id, watchdog_info->sp);
+ sdk_err(hwdev->dev_hdl,
+ "Stack current used: 0x%x, peak used: 0x%x, overflow flag: 0x%x, top: 0x%llx, bottom: 0x%llx\n",
+ watchdog_info->curr_used, watchdog_info->peak_used,
+ watchdog_info->is_overflow, watchdog_info->stack_top, watchdog_info->stack_bottom);
+
+ sdk_err(hwdev->dev_hdl, "Mgmt pc: 0x%llx, elr: 0x%llx, spsr: 0x%llx, far: 0x%llx, esr: 0x%llx, xzr: 0x%llx\n",
+ watchdog_info->pc, watchdog_info->elr, watchdog_info->spsr, watchdog_info->far,
+ watchdog_info->esr, watchdog_info->xzr);
+
+ sdk_err(hwdev->dev_hdl, "Mgmt register info\n");
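+	/* register values are printed from x30 down to x00 */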
+ reg = &watchdog_info->x30;
+ for (i = 0; i <= X_CSR_INDEX; i++)
+ sdk_err(hwdev->dev_hdl, "x%02u:0x%llx\n",
+ X_CSR_INDEX - i, reg[i]);
+
+ if (watchdog_info->stack_actlen <= DATA_LEN_1K) {
+ stack_len = watchdog_info->stack_actlen;
+ } else {
+ sdk_err(hwdev->dev_hdl, "Oops stack length: 0x%x is wrong\n",
+ watchdog_info->stack_actlen);
+ stack_len = DATA_LEN_1K;
+ }
+
+	sdk_err(hwdev->dev_hdl, "Mgmt dump stack, 16 bytes per line (start from sp)\n");
+ for (i = 0; i < (stack_len / DUMP_16B_PER_LINE); i++) {
+ dump_addr = (u32 *)(watchdog_info->stack_data + (u32)(i * DUMP_16B_PER_LINE));
+ sdk_err(hwdev->dev_hdl, "0x%08x 0x%08x 0x%08x 0x%08x\n",
+ *dump_addr, *(dump_addr + 0x1), *(dump_addr + 0x2), *(dump_addr + 0x3));
+ }
+
+ tmp = (stack_len % DUMP_16B_PER_LINE) / DUMP_4_VAR_PER_LINE;
+ for (j = 0; j < tmp; j++) {
+ dump_addr = (u32 *)(watchdog_info->stack_data +
+ (u32)(i * DUMP_16B_PER_LINE + j * DUMP_4_VAR_PER_LINE));
+ sdk_err(hwdev->dev_hdl, "0x%08x ", *dump_addr);
+ }
+
+ *out_size = sizeof(*watchdog_info);
+ watchdog_info = buf_out;
+ watchdog_info->head.status = 0;
+}
+
+static void mgmt_watchdog_timeout_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_event_info event_info = { 0 };
+ struct hinic3_hwdev *dev = hwdev;
+
+ sw_watchdog_timeout_info_show(dev, buf_in, in_size, buf_out, out_size);
+
+ if (dev->event_callback) {
+ event_info.type = EVENT_COMM_MGMT_WATCHDOG;
+ dev->event_callback(dev->event_pri_handle, &event_info);
+ }
+}
+
+static void show_exc_info(struct hinic3_hwdev *hwdev, struct tag_exc_info *exc_info)
+{
+ u32 i;
+
+ /* key information */
+ sdk_err(hwdev->dev_hdl, "==================== Exception Info Begin ====================\n");
+ sdk_err(hwdev->dev_hdl, "Exception CpuTick : 0x%08x 0x%08x\n",
+ exc_info->cpu_tick.cnt_hi, exc_info->cpu_tick.cnt_lo);
+ sdk_err(hwdev->dev_hdl, "Exception Cause : %u\n", exc_info->exc_cause);
+ sdk_err(hwdev->dev_hdl, "Os Version : %s\n", exc_info->os_ver);
+ sdk_err(hwdev->dev_hdl, "App Version : %s\n", exc_info->app_ver);
+ sdk_err(hwdev->dev_hdl, "CPU Type : 0x%08x\n", exc_info->cpu_type);
+ sdk_err(hwdev->dev_hdl, "CPU ID : 0x%08x\n", exc_info->cpu_id);
+ sdk_err(hwdev->dev_hdl, "Thread Type : 0x%08x\n", exc_info->thread_type);
+ sdk_err(hwdev->dev_hdl, "Thread ID : 0x%08x\n", exc_info->thread_id);
+ sdk_err(hwdev->dev_hdl, "Byte Order : 0x%08x\n", exc_info->byte_order);
+ sdk_err(hwdev->dev_hdl, "Nest Count : 0x%08x\n", exc_info->nest_cnt);
+ sdk_err(hwdev->dev_hdl, "Fatal Error Num : 0x%08x\n", exc_info->fatal_errno);
+ sdk_err(hwdev->dev_hdl, "Current SP : 0x%016llx\n", exc_info->uw_sp);
+ sdk_err(hwdev->dev_hdl, "Stack Bottom : 0x%016llx\n", exc_info->stack_bottom);
+
+ /* register field */
+	sdk_err(hwdev->dev_hdl, "Register contents when the exception occurred.\n");
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "TTBR0",
+ exc_info->reg_info.ttbr0, "TTBR1", exc_info->reg_info.ttbr1);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "TCR",
+ exc_info->reg_info.tcr, "MAIR", exc_info->reg_info.mair);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "SCTLR",
+ exc_info->reg_info.sctlr, "VBAR", exc_info->reg_info.vbar);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "CURRENTE1",
+ exc_info->reg_info.current_el, "SP", exc_info->reg_info.sp);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "ELR",
+ exc_info->reg_info.elr, "SPSR", exc_info->reg_info.spsr);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "FAR",
+ exc_info->reg_info.far_r, "ESR", exc_info->reg_info.esr);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx\n", "XZR", exc_info->reg_info.xzr);
+
+ for (i = 0; i < XREGS_NUM - 1; i += 0x2)
+ sdk_err(hwdev->dev_hdl, "XREGS[%02u]%-5s: 0x%016llx \t XREGS[%02u]%-5s: 0x%016llx",
+ i, " ", exc_info->reg_info.xregs[i],
+ (u32)(i + 0x1U), " ", exc_info->reg_info.xregs[(u32)(i + 0x1U)]);
+
+ sdk_err(hwdev->dev_hdl, "XREGS[%02u]%-5s: 0x%016llx \t ", XREGS_NUM - 1, " ",
+ exc_info->reg_info.xregs[XREGS_NUM - 1]);
+}
+
+#define FOUR_REG_LEN 16
+
+static void mgmt_lastword_report_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct tag_comm_info_up_lastword *lastword_info = buf_in;
+ struct tag_exc_info *exc_info = &lastword_info->stack_info;
+ u32 stack_len = lastword_info->stack_actlen;
+ struct hinic3_hwdev *dev = hwdev;
+ u32 *curr_reg = NULL;
+ u32 reg_i, cnt;
+
+ if (in_size != sizeof(*lastword_info)) {
+ sdk_err(dev->dev_hdl, "Invalid mgmt lastword, length: %u, should be %ld\n",
+ in_size, sizeof(*lastword_info));
+ return;
+ }
+
+ show_exc_info(dev, exc_info);
+
+ /* call stack dump */
+	sdk_err(dev->dev_hdl, "Dump stack when the exception occurs, 16 bytes per line.\n");
+
+ cnt = stack_len / FOUR_REG_LEN;
+ for (reg_i = 0; reg_i < cnt; reg_i++) {
+ curr_reg = (u32 *)(lastword_info->stack_data + ((u64)(u32)(reg_i * FOUR_REG_LEN)));
+ sdk_err(dev->dev_hdl, "0x%08x 0x%08x 0x%08x 0x%08x\n",
+ *curr_reg, *(curr_reg + 0x1), *(curr_reg + 0x2), *(curr_reg + 0x3));
+ }
+
+ sdk_err(dev->dev_hdl, "==================== Exception Info End ====================\n");
+}
+
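+/* Events reported by the management CPU that the COMM module handles */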
+const struct mgmt_event_handle mgmt_event_proc[] = {
+ {
+ .cmd = COMM_MGMT_CMD_FAULT_REPORT,
+ .proc = fault_event_handler,
+ },
+
+ {
+ .cmd = COMM_MGMT_CMD_FFM_SET,
+ .proc = ffm_event_msg_handler,
+ },
+
+ {
+ .cmd = COMM_MGMT_CMD_WATCHDOG_INFO,
+ .proc = mgmt_watchdog_timeout_event_handler,
+ },
+
+ {
+ .cmd = COMM_MGMT_CMD_LASTWORD_GET,
+ .proc = mgmt_lastword_report_event_handler,
+ },
+};
+
+static void pf_handle_mgmt_comm_event(void *handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = handle;
+ u32 i, event_num = ARRAY_LEN(mgmt_event_proc);
+
+ if (!hwdev)
+ return;
+
+ for (i = 0; i < event_num; i++) {
+ if (cmd == mgmt_event_proc[i].cmd) {
+ if (mgmt_event_proc[i].proc)
+ mgmt_event_proc[i].proc(handle, buf_in, in_size,
+ buf_out, out_size);
+
+ return;
+ }
+ }
+
+ sdk_warn(hwdev->dev_hdl, "Unsupported mgmt cpu event %u to process\n",
+ cmd);
+ *out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+}
+
+static void hinic3_set_chip_present(struct hinic3_hwdev *hwdev)
+{
+ hwdev->chip_present_flag = HINIC3_CHIP_PRESENT;
+}
+
+static void hinic3_set_chip_absent(struct hinic3_hwdev *hwdev)
+{
+ sdk_err(hwdev->dev_hdl, "Card not present\n");
+ hwdev->chip_present_flag = HINIC3_CHIP_ABSENT;
+}
+
+int hinic3_get_chip_present_flag(const void *hwdev)
+{
+ if (!hwdev)
+ return 0;
+
+ return ((struct hinic3_hwdev *)hwdev)->chip_present_flag;
+}
+EXPORT_SYMBOL(hinic3_get_chip_present_flag);
+
+void hinic3_force_complete_all(void *dev)
+{
+ struct hinic3_recv_msg *recv_resp_msg = NULL;
+ struct hinic3_hwdev *hwdev = dev;
+ struct hinic3_mbox *func_to_func = NULL;
+
+ spin_lock_bh(&hwdev->channel_lock);
+ if (hinic3_func_type(hwdev) != TYPE_VF &&
+ test_bit(HINIC3_HWDEV_MGMT_INITED, &hwdev->func_state)) {
+ recv_resp_msg = &hwdev->pf_to_mgmt->recv_resp_msg_from_mgmt;
+ spin_lock_bh(&hwdev->pf_to_mgmt->sync_event_lock);
+ if (hwdev->pf_to_mgmt->event_flag == SEND_EVENT_START) {
+ complete(&recv_resp_msg->recv_done);
+ hwdev->pf_to_mgmt->event_flag = SEND_EVENT_TIMEOUT;
+ }
+ spin_unlock_bh(&hwdev->pf_to_mgmt->sync_event_lock);
+ }
+
+ if (test_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state)) {
+ func_to_func = hwdev->func_to_func;
+ spin_lock(&func_to_func->mbox_lock);
+ if (func_to_func->event_flag == EVENT_START)
+ func_to_func->event_flag = EVENT_TIMEOUT;
+ spin_unlock(&func_to_func->mbox_lock);
+ }
+
+ if (test_bit(HINIC3_HWDEV_CMDQ_INITED, &hwdev->func_state))
+ hinic3_cmdq_flush_sync_cmd(hwdev);
+
+ spin_unlock_bh(&hwdev->channel_lock);
+}
+EXPORT_SYMBOL(hinic3_force_complete_all);
+
+void hinic3_detect_hw_present(void *hwdev)
+{
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev)) {
+ hinic3_set_chip_absent(hwdev);
+ hinic3_force_complete_all(hwdev);
+ }
+}
+
+/**
+ * dma_attr_table_init - initialize the default dma attributes
+ * @hwdev: the pointer to hw device
+ **/
+static int dma_attr_table_init(struct hinic3_hwdev *hwdev)
+{
+ u32 addr, val, dst_attr;
+
+	/* Indirect access requires setting entry_idx first */
+ addr = HINIC3_CSR_DMA_ATTR_INDIR_IDX_ADDR;
+ val = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ val = HINIC3_DMA_ATTR_INDIR_IDX_CLEAR(val, IDX);
+
+ val |= HINIC3_DMA_ATTR_INDIR_IDX_SET(PCIE_MSIX_ATTR_ENTRY, IDX);
+
+ hinic3_hwif_write_reg(hwdev->hwif, addr, val);
+
+ wmb(); /* write index before config */
+
+ addr = HINIC3_CSR_DMA_ATTR_TBL_ADDR;
+ val = hinic3_hwif_read_reg(hwdev->hwif, addr);
+
+ dst_attr = HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_ST_DISABLE, ST) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_AT_DISABLE, AT) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_PH_DISABLE, PH) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_SNOOP, NO_SNOOPING) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_TPH_DISABLE, TPH_EN);
+
+ if (val == dst_attr)
+ return 0;
+
+ return hinic3_set_dma_attr_tbl(hwdev, PCIE_MSIX_ATTR_ENTRY, HINIC3_PCIE_ST_DISABLE,
+ HINIC3_PCIE_AT_DISABLE, HINIC3_PCIE_PH_DISABLE,
+ HINIC3_PCIE_SNOOP, HINIC3_PCIE_TPH_DISABLE);
+}
+
+static int init_aeqs_msix_attr(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ struct interrupt_info info = {0};
+ struct hinic3_eq *eq = NULL;
+ int q_id;
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = HINIC3_DEAULT_EQ_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = HINIC3_DEAULT_EQ_MSIX_COALESC_TIMER_CFG;
+ info.resend_timer_cfg = HINIC3_DEAULT_EQ_MSIX_RESEND_TIMER_CFG;
+
+ for (q_id = aeqs->num_aeqs - 1; q_id >= 0; q_id--) {
+ eq = &aeqs->aeq[q_id];
+ info.msix_index = eq->eq_irq.msix_entry_idx;
+ err = hinic3_set_interrupt_cfg_direct(hwdev, &info,
+ HINIC3_CHANNEL_COMM);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Set msix attr for aeq %d failed\n",
+ q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int init_ceqs_msix_attr(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_ceqs *ceqs = hwdev->ceqs;
+ struct interrupt_info info = {0};
+ struct hinic3_eq *eq = NULL;
+ u16 q_id;
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = HINIC3_DEAULT_EQ_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = HINIC3_DEAULT_EQ_MSIX_COALESC_TIMER_CFG;
+ info.resend_timer_cfg = HINIC3_DEAULT_EQ_MSIX_RESEND_TIMER_CFG;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++) {
+ eq = &ceqs->ceq[q_id];
+ info.msix_index = eq->eq_irq.msix_entry_idx;
+ err = hinic3_set_interrupt_cfg(hwdev, info,
+ HINIC3_CHANNEL_COMM);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Set msix attr for ceq %u failed\n",
+ q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int hinic3_comm_clp_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF || !COMM_SUPPORT_CLP(hwdev))
+ return 0;
+
+ err = hinic3_clp_pf_to_mgmt_init(hwdev);
+ if (err != 0)
+ return err;
+
+ return 0;
+}
+
+static void hinic3_comm_clp_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) == TYPE_VF || !COMM_SUPPORT_CLP(hwdev))
+ return;
+
+ hinic3_clp_pf_to_mgmt_free(hwdev);
+}
+
+static int hinic3_comm_aeqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info aeq_irqs[HINIC3_MAX_AEQS] = {{0} };
+ u16 num_aeqs, resp_num_irq = 0, i;
+ int err;
+
+ num_aeqs = HINIC3_HWIF_NUM_AEQS(hwdev->hwif);
+ if (num_aeqs > HINIC3_MAX_AEQS) {
+ sdk_warn(hwdev->dev_hdl, "Adjust aeq num to %d\n",
+ HINIC3_MAX_AEQS);
+ num_aeqs = HINIC3_MAX_AEQS;
+ }
+ err = hinic3_alloc_irqs(hwdev, SERVICE_T_INTF, num_aeqs, aeq_irqs,
+ &resp_num_irq);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc aeq irqs, num_aeqs: %u\n",
+ num_aeqs);
+ return err;
+ }
+
+ if (resp_num_irq < num_aeqs) {
+ sdk_warn(hwdev->dev_hdl, "Adjust aeq num to %u\n",
+ resp_num_irq);
+ num_aeqs = resp_num_irq;
+ }
+
+ err = hinic3_aeqs_init(hwdev, num_aeqs, aeq_irqs);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeqs\n");
+ goto aeqs_init_err;
+ }
+
+ return 0;
+
+aeqs_init_err:
+ for (i = 0; i < num_aeqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, aeq_irqs[i].irq_id);
+
+ return err;
+}
+
+static void hinic3_comm_aeqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info aeq_irqs[HINIC3_MAX_AEQS] = {{0} };
+ u16 num_irqs, i;
+
+ hinic3_get_aeq_irqs(hwdev, (struct irq_info *)aeq_irqs, &num_irqs);
+
+ hinic3_aeqs_free(hwdev);
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, aeq_irqs[i].irq_id);
+}
+
+static int hinic3_comm_ceqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info ceq_irqs[HINIC3_MAX_CEQS] = {{0} };
+ u16 num_ceqs, resp_num_irq = 0, i;
+ int err;
+
+ num_ceqs = HINIC3_HWIF_NUM_CEQS(hwdev->hwif);
+ if (num_ceqs > HINIC3_MAX_CEQS) {
+ sdk_warn(hwdev->dev_hdl, "Adjust ceq num to %d\n",
+ HINIC3_MAX_CEQS);
+ num_ceqs = HINIC3_MAX_CEQS;
+ }
+
+ err = hinic3_alloc_irqs(hwdev, SERVICE_T_INTF, num_ceqs, ceq_irqs,
+ &resp_num_irq);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc ceq irqs, num_ceqs: %u\n",
+ num_ceqs);
+ return err;
+ }
+
+ if (resp_num_irq < num_ceqs) {
+ sdk_warn(hwdev->dev_hdl, "Adjust ceq num to %u\n",
+ resp_num_irq);
+ num_ceqs = resp_num_irq;
+ }
+
+ err = hinic3_ceqs_init(hwdev, num_ceqs, ceq_irqs);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to init ceqs, err:%d\n", err);
+ goto ceqs_init_err;
+ }
+
+ return 0;
+
+ceqs_init_err:
+ for (i = 0; i < num_ceqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, ceq_irqs[i].irq_id);
+
+ return err;
+}
+
+static void hinic3_comm_ceqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info ceq_irqs[HINIC3_MAX_CEQS] = {{0} };
+ u16 num_irqs;
+ int i;
+
+ hinic3_get_ceq_irqs(hwdev, (struct irq_info *)ceq_irqs, &num_irqs);
+
+ hinic3_ceqs_free(hwdev);
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, ceq_irqs[i].irq_id);
+}
+
+static int hinic3_comm_func_to_func_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_func_to_func_init(hwdev);
+ if (err != 0)
+ return err;
+
+ hinic3_aeq_register_hw_cb(hwdev, hwdev, HINIC3_MBX_FROM_FUNC,
+ hinic3_mbox_func_aeqe_handler);
+ hinic3_aeq_register_hw_cb(hwdev, hwdev, HINIC3_MSG_FROM_MGMT_CPU,
+ hinic3_mgmt_msg_aeqe_handler);
+
+ if (!HINIC3_IS_VF(hwdev)) {
+ hinic3_register_pf_mbox_cb(hwdev, HINIC3_MOD_COMM, hwdev, pf_handle_vf_comm_mbox);
+ hinic3_register_pf_mbox_cb(hwdev, HINIC3_MOD_SW_FUNC,
+ hwdev, sw_func_pf_mbox_handler);
+ } else {
+ hinic3_register_vf_mbox_cb(hwdev, HINIC3_MOD_COMM, hwdev, vf_handle_pf_comm_mbox);
+ }
+
+ set_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state);
+
+ return 0;
+}
+
+static void hinic3_comm_func_to_func_free(struct hinic3_hwdev *hwdev)
+{
+ spin_lock_bh(&hwdev->channel_lock);
+ clear_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state);
+ spin_unlock_bh(&hwdev->channel_lock);
+
+ hinic3_aeq_unregister_hw_cb(hwdev, HINIC3_MBX_FROM_FUNC);
+
+ if (!HINIC3_IS_VF(hwdev)) {
+ hinic3_unregister_pf_mbox_cb(hwdev, HINIC3_MOD_COMM);
+ } else {
+ hinic3_unregister_vf_mbox_cb(hwdev, HINIC3_MOD_COMM);
+
+ hinic3_aeq_unregister_hw_cb(hwdev, HINIC3_MSG_FROM_MGMT_CPU);
+ }
+
+ hinic3_func_to_func_free(hwdev);
+}
+
+static int hinic3_comm_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ err = hinic3_pf_to_mgmt_init(hwdev);
+ if (err != 0)
+ return err;
+
+ hinic3_register_mgmt_msg_cb(hwdev, HINIC3_MOD_COMM, hwdev,
+ pf_handle_mgmt_comm_event);
+
+ set_bit(HINIC3_HWDEV_MGMT_INITED, &hwdev->func_state);
+
+ return 0;
+}
+
+static void hinic3_comm_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return;
+
+ spin_lock_bh(&hwdev->channel_lock);
+ clear_bit(HINIC3_HWDEV_MGMT_INITED, &hwdev->func_state);
+ spin_unlock_bh(&hwdev->channel_lock);
+
+ hinic3_unregister_mgmt_msg_cb(hwdev, HINIC3_MOD_COMM);
+
+ hinic3_aeq_unregister_hw_cb(hwdev, HINIC3_MSG_FROM_MGMT_CPU);
+
+ hinic3_pf_to_mgmt_free(hwdev);
+}
+
+static int hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_cmdqs_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmd queues\n");
+ return err;
+ }
+
+ hinic3_ceq_register_cb(hwdev, hwdev, HINIC3_CMDQ, hinic3_cmdq_ceq_handler);
+
+ err = hinic3_set_cmdq_depth(hwdev, HINIC3_CMDQ_DEPTH);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to set cmdq depth\n");
+ goto set_cmdq_depth_err;
+ }
+
+ set_bit(HINIC3_HWDEV_CMDQ_INITED, &hwdev->func_state);
+
+ return 0;
+
+set_cmdq_depth_err:
+ hinic3_cmdqs_free(hwdev);
+
+ return err;
+}
+
+static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev)
+{
+ spin_lock_bh(&hwdev->channel_lock);
+ clear_bit(HINIC3_HWDEV_CMDQ_INITED, &hwdev->func_state);
+ spin_unlock_bh(&hwdev->channel_lock);
+
+ hinic3_ceq_unregister_cb(hwdev, HINIC3_CMDQ);
+ hinic3_cmdqs_free(hwdev);
+}
+
+static void hinic3_sync_mgmt_func_state(struct hinic3_hwdev *hwdev)
+{
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_ACTIVE_FLAG);
+}
+
+static void hinic3_unsync_mgmt_func_state(struct hinic3_hwdev *hwdev)
+{
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_INIT);
+}
+
+static int init_basic_attributes(struct hinic3_hwdev *hwdev)
+{
+ u64 drv_features[COMM_MAX_FEATURE_QWORD] = {HINIC3_DRV_FEATURE_QW0, 0, 0, 0};
+ int err, i;
+
+ if (hinic3_func_type(hwdev) == TYPE_PPF)
+ drv_features[0] |= COMM_F_CHANNEL_DETECT;
+
+ err = hinic3_get_board_info(hwdev, &hwdev->board_info,
+ HINIC3_CHANNEL_COMM);
+ if (err != 0)
+ return err;
+
+ err = hinic3_get_comm_features(hwdev, hwdev->features,
+ COMM_MAX_FEATURE_QWORD);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Get comm features failed\n");
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Comm hw features: 0x%llx, drv features: 0x%llx\n",
+ hwdev->features[0], drv_features[0]);
+
+ for (i = 0; i < COMM_MAX_FEATURE_QWORD; i++)
+ hwdev->features[i] &= drv_features[i];
+
+ err = hinic3_get_global_attr(hwdev, &hwdev->glb_attr);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to get global attribute\n");
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl,
+ "global attribute: max_host: 0x%x, max_pf: 0x%x, vf_id_start: 0x%x, mgmt node id: 0x%x, cmdq_num: 0x%x\n",
+ hwdev->glb_attr.max_host_num, hwdev->glb_attr.max_pf_num,
+ hwdev->glb_attr.vf_id_start,
+ hwdev->glb_attr.mgmt_host_node_id,
+ hwdev->glb_attr.cmdq_num);
+
+ return 0;
+}
+
+static int init_basic_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_comm_aeqs_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init async event queues\n");
+ return err;
+ }
+
+ err = hinic3_comm_func_to_func_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init mailbox\n");
+ goto func_to_func_init_err;
+ }
+
+ err = init_aeqs_msix_attr(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeqs msix attr\n");
+ goto aeqs_msix_attr_init_err;
+ }
+
+ return 0;
+
+aeqs_msix_attr_init_err:
+ hinic3_comm_func_to_func_free(hwdev);
+
+func_to_func_init_err:
+ hinic3_comm_aeqs_free(hwdev);
+
+ return err;
+}
+
+static void free_base_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ hinic3_comm_func_to_func_free(hwdev);
+ hinic3_comm_aeqs_free(hwdev);
+}
+
+static int init_pf_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_comm_clp_to_mgmt_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init clp\n");
+ return err;
+ }
+
+ err = hinic3_comm_pf_to_mgmt_init(hwdev);
+ if (err != 0) {
+ hinic3_comm_clp_to_mgmt_free(hwdev);
+ sdk_err(hwdev->dev_hdl, "Failed to init pf to mgmt\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static void free_pf_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ hinic3_comm_clp_to_mgmt_free(hwdev);
+ hinic3_comm_pf_to_mgmt_free(hwdev);
+}
+
+static int init_mgmt_channel_post(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ /* mbox host channel resources will be freed in
+ * hinic3_func_to_func_free
+ */
+ if (HINIC3_IS_PPF(hwdev)) {
+ err = hinic3_mbox_init_host_msg_channel(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init mbox host channel\n");
+ return err;
+ }
+ }
+
+ err = init_pf_mgmt_channel(hwdev);
+ if (err != 0)
+ return err;
+
+ return 0;
+}
+
+static void free_mgmt_msg_channel_post(struct hinic3_hwdev *hwdev)
+{
+ free_pf_mgmt_channel(hwdev);
+}
+
+static int init_cmdqs_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = dma_attr_table_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init dma attr table\n");
+ goto dma_attr_init_err;
+ }
+
+ err = hinic3_comm_ceqs_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init completion event queues\n");
+ goto ceqs_init_err;
+ }
+
+ err = init_ceqs_msix_attr(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init ceqs msix attr\n");
+ goto init_ceq_msix_err;
+ }
+
+ /* set default wq page_size */
+ if (wq_page_order > HINIC3_MAX_WQ_PAGE_SIZE_ORDER) {
+		sdk_info(hwdev->dev_hdl, "wq_page_order exceeds limit [0, %d], reset to %d\n",
+ HINIC3_MAX_WQ_PAGE_SIZE_ORDER,
+ HINIC3_MAX_WQ_PAGE_SIZE_ORDER);
+ wq_page_order = HINIC3_MAX_WQ_PAGE_SIZE_ORDER;
+ }
+ hwdev->wq_page_size = HINIC3_HW_WQ_PAGE_SIZE * (1U << wq_page_order);
+ sdk_info(hwdev->dev_hdl, "WQ page size: 0x%x\n", hwdev->wq_page_size);
+ err = hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ hwdev->wq_page_size, HINIC3_CHANNEL_COMM);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to set wq page size\n");
+ goto init_wq_pg_size_err;
+ }
+
+ err = hinic3_comm_cmdqs_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmd queues\n");
+ goto cmdq_init_err;
+ }
+
+ return 0;
+
+cmdq_init_err:
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_HW_WQ_PAGE_SIZE,
+ HINIC3_CHANNEL_COMM);
+init_wq_pg_size_err:
+init_ceq_msix_err:
+ hinic3_comm_ceqs_free(hwdev);
+
+ceqs_init_err:
+dma_attr_init_err:
+
+ return err;
+}
+
+static void hinic3_free_cmdqs_channel(struct hinic3_hwdev *hwdev)
+{
+ hinic3_comm_cmdqs_free(hwdev);
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_HW_WQ_PAGE_SIZE, HINIC3_CHANNEL_COMM);
+
+ hinic3_comm_ceqs_free(hwdev);
+}
+
+static int hinic3_init_comm_ch(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = init_basic_mgmt_channel(hwdev);
+ if (err != 0)
+ return err;
+
+ err = hinic3_func_reset(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_COMM_RES, HINIC3_CHANNEL_COMM);
+ if (err != 0)
+ goto func_reset_err;
+
+ err = init_basic_attributes(hwdev);
+ if (err != 0)
+ goto init_basic_attr_err;
+
+ err = init_mgmt_channel_post(hwdev);
+ if (err != 0)
+ goto init_mgmt_channel_post_err;
+
+ err = hinic3_set_func_svc_used_state(hwdev, SVC_T_COMM, 1, HINIC3_CHANNEL_COMM);
+ if (err != 0)
+ goto set_used_state_err;
+
+ err = init_cmdqs_channel(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmdq channel\n");
+ goto init_cmdqs_channel_err;
+ }
+
+ hinic3_sync_mgmt_func_state(hwdev);
+
+ if (HISDK3_F_CHANNEL_LOCK_EN(hwdev)) {
+ hinic3_mbox_enable_channel_lock(hwdev, true);
+ hinic3_cmdq_enable_channel_lock(hwdev, true);
+ }
+
+ err = hinic3_aeq_register_swe_cb(hwdev, hwdev, HINIC3_STATELESS_EVENT,
+ hinic3_nic_sw_aeqe_handler);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to register sw aeqe handler\n");
+ goto register_ucode_aeqe_err;
+ }
+
+ return 0;
+
+register_ucode_aeqe_err:
+ hinic3_unsync_mgmt_func_state(hwdev);
+ hinic3_free_cmdqs_channel(hwdev);
+init_cmdqs_channel_err:
+ hinic3_set_func_svc_used_state(hwdev, SVC_T_COMM, 0, HINIC3_CHANNEL_COMM);
+set_used_state_err:
+ free_mgmt_msg_channel_post(hwdev);
+init_mgmt_channel_post_err:
+init_basic_attr_err:
+func_reset_err:
+ free_base_mgmt_channel(hwdev);
+
+ return err;
+}
+
+static void hinic3_uninit_comm_ch(struct hinic3_hwdev *hwdev)
+{
+ hinic3_aeq_unregister_swe_cb(hwdev, HINIC3_STATELESS_EVENT);
+
+ hinic3_unsync_mgmt_func_state(hwdev);
+
+ hinic3_free_cmdqs_channel(hwdev);
+
+ hinic3_set_func_svc_used_state(hwdev, SVC_T_COMM, 0, HINIC3_CHANNEL_COMM);
+
+ free_mgmt_msg_channel_post(hwdev);
+
+ free_base_mgmt_channel(hwdev);
+}
+
+static void hinic3_auto_sync_time_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_hwdev *hwdev = container_of(delay, struct hinic3_hwdev, sync_time_task);
+ int err;
+
+ err = hinic3_sync_time(hwdev, ossl_get_real_time());
+ if (err != 0)
+ sdk_err(hwdev->dev_hdl, "Synchronize UTC time to firmware failed, errno:%d.\n",
+ err);
+
+ queue_delayed_work(hwdev->workq, &hwdev->sync_time_task,
+ msecs_to_jiffies(HINIC3_SYNFW_TIME_PERIOD));
+}
+
+static void hinic3_auto_channel_detect_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_hwdev *hwdev = container_of(delay, struct hinic3_hwdev, channel_detect_task);
+ struct card_node *chip_node = NULL;
+
+ hinic3_comm_channel_detect(hwdev);
+
+ chip_node = hwdev->chip_node;
+ if (!atomic_read(&chip_node->channel_busy_cnt))
+ queue_delayed_work(hwdev->workq, &hwdev->channel_detect_task,
+ msecs_to_jiffies(HINIC3_CHANNEL_DETECT_PERIOD));
+}
+
+static int hinic3_init_ppf_work(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return 0;
+
+ INIT_DELAYED_WORK(&hwdev->sync_time_task, hinic3_auto_sync_time_work);
+ queue_delayed_work(hwdev->workq, &hwdev->sync_time_task,
+ msecs_to_jiffies(HINIC3_SYNFW_TIME_PERIOD));
+
+ if (COMM_SUPPORT_CHANNEL_DETECT(hwdev)) {
+ INIT_DELAYED_WORK(&hwdev->channel_detect_task,
+ hinic3_auto_channel_detect_work);
+ queue_delayed_work(hwdev->workq, &hwdev->channel_detect_task,
+ msecs_to_jiffies(HINIC3_CHANNEL_DETECT_PERIOD));
+ }
+
+ return 0;
+}
+
+static void hinic3_free_ppf_work(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return;
+
+ if (COMM_SUPPORT_CHANNEL_DETECT(hwdev)) {
+ hwdev->features[0] &= ~(COMM_F_CHANNEL_DETECT);
+ cancel_delayed_work_sync(&hwdev->channel_detect_task);
+ }
+
+ cancel_delayed_work_sync(&hwdev->sync_time_task);
+}
+
+static int init_hwdev(struct hinic3_init_para *para)
+{
+ struct hinic3_hwdev *hwdev;
+
+ hwdev = kzalloc(sizeof(*hwdev), GFP_KERNEL);
+ if (!hwdev)
+ return -ENOMEM;
+
+ *para->hwdev = hwdev;
+ hwdev->adapter_hdl = para->adapter_hdl;
+ hwdev->pcidev_hdl = para->pcidev_hdl;
+ hwdev->dev_hdl = para->dev_hdl;
+ hwdev->chip_node = para->chip_node;
+ hwdev->poll = para->poll;
+ hwdev->probe_fault_level = para->probe_fault_level;
+ hwdev->func_state = 0;
+ sema_init(&hwdev->ppf_sem, 1);
+
+ hwdev->chip_fault_stats = vzalloc(HINIC3_CHIP_FAULT_SIZE);
+ if (!hwdev->chip_fault_stats)
+ goto alloc_chip_fault_stats_err;
+
+ hwdev->stateful_ref_cnt = 0;
+ memset(hwdev->features, 0, sizeof(hwdev->features));
+
+ spin_lock_init(&hwdev->channel_lock);
+ mutex_init(&hwdev->stateful_mutex);
+
+ return 0;
+
+alloc_chip_fault_stats_err:
+ sema_deinit(&hwdev->ppf_sem);
+ para->probe_fault_level = hwdev->probe_fault_level;
+ kfree(hwdev);
+ *para->hwdev = NULL;
+ return -EFAULT;
+}
+
+int hinic3_init_hwdev(struct hinic3_init_para *para)
+{
+ struct hinic3_hwdev *hwdev = NULL;
+ int err;
+
+	err = init_hwdev(para);
+ if (err != 0)
+ return err;
+
+ hwdev = *para->hwdev;
+ err = hinic3_init_hwif(hwdev, para->cfg_reg_base, para->intr_reg_base, para->mgmt_reg_base,
+ para->db_base_phy, para->db_base, para->db_dwqe_len);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init hwif\n");
+ goto init_hwif_err;
+ }
+
+ hinic3_set_chip_present(hwdev);
+
+ hisdk3_init_profile_adapter(hwdev);
+
+ hwdev->workq = alloc_workqueue(HINIC3_HWDEV_WQ_NAME, WQ_MEM_RECLAIM, HINIC3_WQ_MAX_REQ);
+ if (!hwdev->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc hardware workq\n");
+ goto alloc_workq_err;
+ }
+
+ hinic3_init_heartbeat_detect(hwdev);
+
+ err = init_cfg_mgmt(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init config mgmt\n");
+ goto init_cfg_mgmt_err;
+ }
+
+ err = hinic3_init_comm_ch(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init communication channel\n");
+ goto init_comm_ch_err;
+ }
+
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+ err = hinic3_init_devlink(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init devlink\n");
+ goto init_devlink_err;
+ }
+#endif
+
+ err = init_capability(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init capability\n");
+ goto init_cap_err;
+ }
+
+ hinic3_init_host_mode_pre(hwdev);
+
+ hinic3_init_hot_plug_status(hwdev);
+
+ hinic3_init_os_hot_replace(hwdev);
+
+ err = hinic3_multi_host_mgmt_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init function mode\n");
+ goto init_multi_host_fail;
+ }
+
+	/* If hot_replace_mode is enabled, run the PPF work only when
+	 * partition_id is 0; otherwise run the PPF work directly. */
+ if (hwdev->hot_replace_mode == HOT_REPLACE_ENABLE) {
+ if (get_partition_id() == 0) {
+ err = hinic3_init_ppf_work(hwdev);
+ if (err != 0) {
+ goto init_ppf_work_fail;
+ }
+ }
+ } else {
+ err = hinic3_init_ppf_work(hwdev);
+ if (err != 0)
+ goto init_ppf_work_fail;
+ }
+
+ err = hinic3_set_comm_features(hwdev, hwdev->features, COMM_MAX_FEATURE_QWORD);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to set comm features\n");
+ goto set_feature_err;
+ }
+
+ return 0;
+
+set_feature_err:
+ hinic3_free_ppf_work(hwdev);
+
+init_ppf_work_fail:
+ hinic3_multi_host_mgmt_free(hwdev);
+
+init_multi_host_fail:
+ free_capability(hwdev);
+
+init_cap_err:
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+ hinic3_uninit_devlink(hwdev);
+
+init_devlink_err:
+#endif
+ hinic3_uninit_comm_ch(hwdev);
+
+init_comm_ch_err:
+ free_cfg_mgmt(hwdev);
+
+init_cfg_mgmt_err:
+ hinic3_destroy_heartbeat_detect(hwdev);
+ destroy_workqueue(hwdev->workq);
+
+alloc_workq_err:
+ hisdk3_deinit_profile_adapter(hwdev);
+
+ hinic3_free_hwif(hwdev);
+
+init_hwif_err:
+ spin_lock_deinit(&hwdev->channel_lock);
+ vfree(hwdev->chip_fault_stats);
+ para->probe_fault_level = hwdev->probe_fault_level;
+ kfree(hwdev);
+ *para->hwdev = NULL;
+
+ return -EFAULT;
+}
+
+void hinic3_free_hwdev(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u64 drv_features[COMM_MAX_FEATURE_QWORD];
+
+ memset(drv_features, 0, sizeof(drv_features));
+ hinic3_set_comm_features(hwdev, drv_features, COMM_MAX_FEATURE_QWORD);
+
+ hinic3_free_ppf_work(dev);
+
+ hinic3_multi_host_mgmt_free(dev);
+
+ hinic3_func_rx_tx_flush(hwdev, HINIC3_CHANNEL_COMM, true);
+
+ free_capability(dev);
+
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+ hinic3_uninit_devlink(dev);
+#endif
+
+ hinic3_uninit_comm_ch(dev);
+
+ free_cfg_mgmt(dev);
+ hinic3_destroy_heartbeat_detect(hwdev);
+ destroy_workqueue(dev->workq);
+
+ hisdk3_deinit_profile_adapter(hwdev);
+ hinic3_free_hwif(dev);
+
+ spin_lock_deinit(&dev->channel_lock);
+ vfree(dev->chip_fault_stats);
+
+ kfree(dev);
+}
+
+void *hinic3_get_pcidev_hdl(void *hwdev)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev)
+ return NULL;
+
+ return dev->pcidev_hdl;
+}
+
+int hinic3_register_service_adapter(void *hwdev, void *service_adapter,
+ enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || !service_adapter || type >= SERVICE_T_MAX)
+ return -EINVAL;
+
+ if (dev->service_adapter[type])
+ return -EINVAL;
+
+ dev->service_adapter[type] = service_adapter;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_service_adapter);
+
+void hinic3_unregister_service_adapter(void *hwdev,
+ enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || type >= SERVICE_T_MAX)
+ return;
+
+ dev->service_adapter[type] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_service_adapter);
+
+void *hinic3_get_service_adapter(void *hwdev, enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || type >= SERVICE_T_MAX)
+ return NULL;
+
+ return dev->service_adapter[type];
+}
+EXPORT_SYMBOL(hinic3_get_service_adapter);
+
+int hinic3_dbg_get_hw_stats(const void *hwdev, u8 *hw_stats, const u32 *out_size)
+{
+ struct hinic3_hw_stats *tmp_hw_stats = (struct hinic3_hw_stats *)hw_stats;
+ struct card_node *chip_node = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (*out_size != sizeof(struct hinic3_hw_stats) || !hw_stats) {
+		pr_err("Unexpected out buf size from user: %u, expect: %zu\n",
+ *out_size, sizeof(struct hinic3_hw_stats));
+ return -EFAULT;
+ }
+
+ memcpy(hw_stats,
+ &((struct hinic3_hwdev *)hwdev)->hw_stats, sizeof(struct hinic3_hw_stats));
+
+ chip_node = ((struct hinic3_hwdev *)hwdev)->chip_node;
+
+ atomic_set(&tmp_hw_stats->nic_ucode_event_stats[HINIC3_CHANNEL_BUSY],
+ atomic_read(&chip_node->channel_busy_cnt));
+
+ return 0;
+}
+
+u16 hinic3_dbg_clear_hw_stats(void *hwdev)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_hwdev *dev = hwdev;
+
+ memset((void *)&dev->hw_stats, 0, sizeof(struct hinic3_hw_stats));
+ memset((void *)dev->chip_fault_stats, 0, HINIC3_CHIP_FAULT_SIZE);
+
+ chip_node = dev->chip_node;
+ if (COMM_SUPPORT_CHANNEL_DETECT(dev) && atomic_read(&chip_node->channel_busy_cnt)) {
+ atomic_set(&chip_node->channel_busy_cnt, 0);
+ dev->aeq_busy_cnt = 0;
+ queue_delayed_work(dev->workq, &dev->channel_detect_task,
+ msecs_to_jiffies(HINIC3_CHANNEL_DETECT_PERIOD));
+ }
+
+ return sizeof(struct hinic3_hw_stats);
+}
+
+void hinic3_get_chip_fault_stats(const void *hwdev, u8 *chip_fault_stats,
+ u32 offset)
+{
+ if (offset >= HINIC3_CHIP_FAULT_SIZE) {
+		pr_err("Invalid chip offset value: %u\n", offset);
+ return;
+ }
+
+ if (offset + MAX_DRV_BUF_SIZE <= HINIC3_CHIP_FAULT_SIZE)
+ memcpy(chip_fault_stats,
+ ((struct hinic3_hwdev *)hwdev)->chip_fault_stats
+ + offset, MAX_DRV_BUF_SIZE);
+ else
+ memcpy(chip_fault_stats,
+ ((struct hinic3_hwdev *)hwdev)->chip_fault_stats
+ + offset, HINIC3_CHIP_FAULT_SIZE - offset);
+}
+
+void hinic3_event_register(void *dev, void *pri_handle,
+ hinic3_event_handler callback)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for register event\n");
+ return;
+ }
+
+ hwdev->event_callback = callback;
+ hwdev->event_pri_handle = pri_handle;
+}
+
+void hinic3_event_unregister(void *dev)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!dev) {
+		pr_err("Hwdev pointer is NULL for unregister event\n");
+ return;
+ }
+
+ hwdev->event_callback = NULL;
+ hwdev->event_pri_handle = NULL;
+}
+
+void hinic3_event_callback(void *hwdev, struct hinic3_event_info *event)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("Hwdev pointer is NULL for event callback\n");
+ return;
+ }
+
+ if (!dev->event_callback) {
+		sdk_info(dev->dev_hdl, "Event callback function not registered\n");
+ return;
+ }
+
+ dev->event_callback(dev->event_pri_handle, event);
+}
+EXPORT_SYMBOL(hinic3_event_callback);
+
+void hinic3_set_pcie_order_cfg(void *handle)
+{
+}
+
+void hinic3_disable_mgmt_msg_report(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = (struct hinic3_hwdev *)hwdev;
+
+ hinic3_set_pf_status(hw_dev->hwif, HINIC3_PF_STATUS_INIT);
+}
+
+void hinic3_record_pcie_error(void *hwdev)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev)
+ return;
+
+ atomic_inc(&dev->hw_stats.fault_event_stats.pcie_fault_stats);
+}
+
+bool hinic3_need_init_stateful_default(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u16 chip_svc_type = dev->cfg_mgmt->svc_cap.svc_type;
+
+	/* Current virtio net has to init cqm in PPF. */
+ if (hinic3_func_type(hwdev) == TYPE_PPF && (chip_svc_type & CFG_SERVICE_MASK_VIRTIO) != 0)
+ return true;
+
+	/* vroce has to init cqm */
+ if (IS_MASTER_HOST(dev) &&
+ (hinic3_func_type(hwdev) != TYPE_PPF) &&
+ ((chip_svc_type & CFG_SERVICE_MASK_ROCE) != 0))
+ return true;
+
+	/* In SDI5.1 VM mode (nano OS), PF0 acting as PPF needs stateful init, otherwise the mailbox will fail */
+ if (hinic3_func_type(hwdev) == TYPE_PPF && hinic3_is_vm_slave_host(hwdev))
+ return true;
+
+ /* Other service type will init cqm when uld call. */
+ return false;
+}
+
+static inline void stateful_uninit(struct hinic3_hwdev *hwdev)
+{
+ u32 stateful_en;
+
+ cqm_uninit(hwdev);
+
+ stateful_en = IS_FT_TYPE(hwdev) | IS_RDMA_TYPE(hwdev);
+ if (stateful_en)
+ hinic3_ppf_ext_db_deinit(hwdev);
+}
+
+int hinic3_stateful_init(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int stateful_en;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ if (!hinic3_get_stateful_enable(dev))
+ return 0;
+
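+	/* Only the first caller performs the real initialization; later callers
+	 * just take a reference on the already initialized resources. */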
+ mutex_lock(&dev->stateful_mutex);
+ if (dev->stateful_ref_cnt++) {
+ mutex_unlock(&dev->stateful_mutex);
+ return 0;
+ }
+
+ stateful_en = (int)(IS_FT_TYPE(dev) | IS_RDMA_TYPE(dev));
+ if (stateful_en != 0 && HINIC3_IS_PPF(dev)) {
+ err = hinic3_ppf_ext_db_init(dev);
+ if (err != 0)
+ goto out;
+ }
+
+ err = cqm_init(dev);
+ if (err != 0) {
+ sdk_err(dev->dev_hdl, "Failed to init cqm, err: %d\n", err);
+ goto init_cqm_err;
+ }
+
+ mutex_unlock(&dev->stateful_mutex);
+ sdk_info(dev->dev_hdl, "Initialize stateful resource success\n");
+
+ return 0;
+
+init_cqm_err:
+ if (stateful_en != 0)
+ hinic3_ppf_ext_db_deinit(dev);
+
+out:
+ dev->stateful_ref_cnt--;
+ mutex_unlock(&dev->stateful_mutex);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_stateful_init);
+
+void hinic3_stateful_deinit(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev || !hinic3_get_stateful_enable(dev))
+ return;
+
+ mutex_lock(&dev->stateful_mutex);
+ if (!dev->stateful_ref_cnt || --dev->stateful_ref_cnt) {
+ mutex_unlock(&dev->stateful_mutex);
+ return;
+ }
+
+ stateful_uninit(hwdev);
+ mutex_unlock(&dev->stateful_mutex);
+
+ sdk_info(dev->dev_hdl, "Clear stateful resource success\n");
+}
+EXPORT_SYMBOL(hinic3_stateful_deinit);
+
+void hinic3_free_stateful(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev || !hinic3_get_stateful_enable(dev) || !dev->stateful_ref_cnt)
+ return;
+
+ if (!hinic3_need_init_stateful_default(hwdev) || dev->stateful_ref_cnt > 1)
+ sdk_info(dev->dev_hdl, "Current stateful resource ref is incorrect, ref_cnt:%u\n",
+ dev->stateful_ref_cnt);
+
+ stateful_uninit(hwdev);
+
+ sdk_info(dev->dev_hdl, "Clear stateful resource success\n");
+}
+
+int hinic3_get_card_present_state(void *hwdev, bool *card_present_state)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || !card_present_state)
+ return -EINVAL;
+
+ *card_present_state = get_card_present_state(dev);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_card_present_state);
+
+void hinic3_link_event_stats(void *dev, u8 link)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (link)
+ atomic_inc(&hwdev->hw_stats.link_event_stats.link_up_stats);
+ else
+ atomic_inc(&hwdev->hw_stats.link_event_stats.link_down_stats);
+}
+EXPORT_SYMBOL(hinic3_link_event_stats);
+
+int hinic3_get_link_event_stats(void *dev, int *link_state)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!hwdev || !link_state)
+ return -EINVAL;
+
+ *link_state = hwdev->hw_stats.link_event_stats.link_down_stats.counter;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_link_event_stats);
+
+u8 hinic3_max_pf_num(void *hwdev)
+{
+ if (!hwdev)
+ return 0;
+
+ return HINIC3_MAX_PF_NUM((struct hinic3_hwdev *)hwdev);
+}
+EXPORT_SYMBOL(hinic3_max_pf_num);
+
+void hinic3_fault_event_report(void *hwdev, u16 src, u16 level)
+{
+ if (!hwdev)
+ return;
+
+ sdk_info(((struct hinic3_hwdev *)hwdev)->dev_hdl, "Fault event report, src: %u, level: %u\n",
+ src, level);
+
+ hisdk3_fault_post_process(hwdev, src, level);
+}
+EXPORT_SYMBOL(hinic3_fault_event_report);
+
+int hinic3_is_slave_func(const void *hwdev, bool *is_slave_func)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ *is_slave_func = IS_SLAVE_HOST((struct hinic3_hwdev *)hwdev);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_is_slave_func);
+
+int hinic3_is_master_func(const void *hwdev, bool *is_master_func)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ *is_master_func = IS_MASTER_HOST((struct hinic3_hwdev *)hwdev);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_is_master_func);
+
+void hinic3_probe_success(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ hisdk3_probe_success(hwdev);
+}
+
+#define HINIC3_CHANNEL_BUSY_TIMEOUT 25
+
+static void hinic3_update_channel_status(struct hinic3_hwdev *hwdev)
+{
+ struct card_node *chip_node = hwdev->chip_node;
+
+ if (!chip_node)
+ return;
+
+ if (hinic3_func_type(hwdev) != TYPE_PPF || !COMM_SUPPORT_CHANNEL_DETECT(hwdev) ||
+ atomic_read(&chip_node->channel_busy_cnt))
+ return;
+
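+	/* Flag the channel as busy when the AEQ receive counter stops advancing
+	 * for HINIC3_CHANNEL_BUSY_TIMEOUT consecutive heartbeat timer periods. */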
+ if (test_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state)) {
+ if (hwdev->last_recv_aeq_cnt != hwdev->cur_recv_aeq_cnt) {
+ hwdev->aeq_busy_cnt = 0;
+ hwdev->last_recv_aeq_cnt = hwdev->cur_recv_aeq_cnt;
+ } else {
+ hwdev->aeq_busy_cnt++;
+ }
+
+ if (hwdev->aeq_busy_cnt > HINIC3_CHANNEL_BUSY_TIMEOUT) {
+ atomic_inc(&chip_node->channel_busy_cnt);
+			sdk_err(hwdev->dev_hdl, "Detected channel busy\n");
+ }
+ }
+}
+
+static void hinic3_heartbeat_lost_handler(struct work_struct *work)
+{
+ struct hinic3_event_info event_info = { 0 };
+ struct hinic3_hwdev *hwdev = container_of(work, struct hinic3_hwdev,
+ heartbeat_lost_work);
+ u16 src, level;
+
+ atomic_inc(&hwdev->hw_stats.heart_lost_stats);
+
+ if (hwdev->event_callback) {
+ event_info.service = EVENT_SRV_COMM;
+ event_info.type =
+ hwdev->pcie_link_down ? EVENT_COMM_PCIE_LINK_DOWN :
+ EVENT_COMM_HEART_LOST;
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+ }
+
+ if (hwdev->pcie_link_down) {
+ src = HINIC3_FAULT_SRC_PCIE_LINK_DOWN;
+ level = FAULT_LEVEL_HOST;
+		sdk_err(hwdev->dev_hdl, "Detected PCIe link down\n");
+ } else {
+ src = HINIC3_FAULT_SRC_HOST_HEARTBEAT_LOST;
+ level = FAULT_LEVEL_FATAL;
+		sdk_err(hwdev->dev_hdl, "Heartbeat lost report received, func_id: %d\n",
+ hinic3_global_func_id(hwdev));
+ }
+
+ hinic3_show_chip_err_info(hwdev);
+
+ hisdk3_fault_post_process(hwdev, src, level);
+}
+
+#define DETECT_PCIE_LINK_DOWN_RETRY 2
+#define HINIC3_HEARTBEAT_START_EXPIRE 5000
+#define HINIC3_HEARTBEAT_PERIOD 1000
+
+static bool hinic3_is_hw_abnormal(struct hinic3_hwdev *hwdev)
+{
+ u32 status;
+
+ if (!hinic3_get_chip_present_flag(hwdev))
+ return false;
+
+ status = hinic3_get_heartbeat_status(hwdev);
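+	/* An all-ones read means the BAR is inaccessible; report PCIe link down
+	 * only after DETECT_PCIE_LINK_DOWN_RETRY consecutive failed reads. */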
+ if (status == HINIC3_PCIE_LINK_DOWN) {
+		sdk_warn(hwdev->dev_hdl, "Detected BAR register read failure\n");
+ hwdev->rd_bar_err_cnt++;
+ if (hwdev->rd_bar_err_cnt >= DETECT_PCIE_LINK_DOWN_RETRY) {
+ hinic3_set_chip_absent(hwdev);
+ hinic3_force_complete_all(hwdev);
+ hwdev->pcie_link_down = true;
+ return true;
+ }
+
+ return false;
+ }
+
+ if (status) {
+ hwdev->heartbeat_lost = true;
+ return true;
+ }
+
+ hwdev->rd_bar_err_cnt = 0;
+
+ return false;
+}
+
+#ifdef HAVE_TIMER_SETUP
+static void hinic3_heartbeat_timer_handler(struct timer_list *t)
+#else
+static void hinic3_heartbeat_timer_handler(unsigned long data)
+#endif
+{
+#ifdef HAVE_TIMER_SETUP
+ struct hinic3_hwdev *hwdev = from_timer(hwdev, t, heartbeat_timer);
+#else
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)data;
+#endif
+
+ if (hinic3_is_hw_abnormal(hwdev)) {
+ stop_timer(&hwdev->heartbeat_timer);
+ queue_work(hwdev->workq, &hwdev->heartbeat_lost_work);
+ } else {
+ mod_timer(&hwdev->heartbeat_timer,
+ jiffies + msecs_to_jiffies(HINIC3_HEARTBEAT_PERIOD));
+ }
+
+ hinic3_update_channel_status(hwdev);
+}
+
+static void hinic3_init_heartbeat_detect(struct hinic3_hwdev *hwdev)
+{
+#ifdef HAVE_TIMER_SETUP
+ timer_setup(&hwdev->heartbeat_timer, hinic3_heartbeat_timer_handler, 0);
+#else
+ initialize_timer(hwdev->adapter_hdl, &hwdev->heartbeat_timer);
+ hwdev->heartbeat_timer.data = (u64)hwdev;
+ hwdev->heartbeat_timer.function = hinic3_heartbeat_timer_handler;
+#endif
+
+ hwdev->heartbeat_timer.expires =
+ jiffies + msecs_to_jiffies(HINIC3_HEARTBEAT_START_EXPIRE);
+
+ add_to_timer(&hwdev->heartbeat_timer, HINIC3_HEARTBEAT_PERIOD);
+
+ INIT_WORK(&hwdev->heartbeat_lost_work, hinic3_heartbeat_lost_handler);
+}
+
+static void hinic3_destroy_heartbeat_detect(struct hinic3_hwdev *hwdev)
+{
+ destroy_work(&hwdev->heartbeat_lost_work);
+ stop_timer(&hwdev->heartbeat_timer);
+ delete_timer(&hwdev->heartbeat_timer);
+}
+
+void hinic3_set_api_stop(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return;
+
+ dev->chip_present_flag = HINIC3_CHIP_ABSENT;
+ sdk_info(dev->dev_hdl, "Set card absent\n");
+ hinic3_force_complete_all(dev);
+ sdk_info(dev->dev_hdl, "All messages interacting with the chip will stop\n");
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h
new file mode 100644
index 0000000..7c2cfc2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h
@@ -0,0 +1,234 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HWDEV_H
+#define HINIC3_HWDEV_H
+
+#include <linux/workqueue.h>
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "mpu_inband_cmd_defs.h"
+#include "hinic3_profile.h"
+#include "vram_common.h"
+
+struct cfg_mgmt_info;
+
+struct hinic3_hwif;
+struct hinic3_aeqs;
+struct hinic3_ceqs;
+struct hinic3_mbox;
+struct hinic3_msg_pf_to_mgmt;
+struct hinic3_hwdev;
+
+#define HINIC3_CHANNEL_DETECT_PERIOD (5 * 1000)
+
+struct hinic3_page_addr {
+ void *virt_addr;
+ u64 phys_addr;
+};
+
+struct mqm_addr_trans_tbl_info {
+ u32 chunk_num;
+ u32 search_gpa_num;
+ u32 page_size;
+ u32 page_num;
+ struct hinic3_dma_addr_align *brm_srch_page_addr;
+};
+
+struct hinic3_devlink {
+ struct hinic3_hwdev *hwdev;
+ u8 activate_fw; /* 0 ~ 7 */
+ u8 switch_cfg; /* 0 ~ 7 */
+};
+
+enum hinic3_func_mode {
+ /* single host */
+ FUNC_MOD_NORMAL_HOST,
+ /* multi host, bare-metal, sdi side */
+ FUNC_MOD_MULTI_BM_MASTER,
+ /* multi host, bare-metal, host side */
+ FUNC_MOD_MULTI_BM_SLAVE,
+ /* multi host, vm mode, sdi side */
+ FUNC_MOD_MULTI_VM_MASTER,
+ /* multi host, vm mode, host side */
+ FUNC_MOD_MULTI_VM_SLAVE,
+};
+
+#define IS_BMGW_MASTER_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_BM_MASTER)
+#define IS_BMGW_SLAVE_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_BM_SLAVE)
+#define IS_VM_MASTER_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_VM_MASTER)
+#define IS_VM_SLAVE_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_VM_SLAVE)
+
+#define IS_MASTER_HOST(hwdev) \
+ (IS_BMGW_MASTER_HOST(hwdev) || IS_VM_MASTER_HOST(hwdev))
+
+#define IS_SLAVE_HOST(hwdev) \
+ (IS_BMGW_SLAVE_HOST(hwdev) || IS_VM_SLAVE_HOST(hwdev))
+
+#define IS_MULTI_HOST(hwdev) \
+ (IS_BMGW_MASTER_HOST(hwdev) || IS_BMGW_SLAVE_HOST(hwdev) || \
+ IS_VM_MASTER_HOST(hwdev) || IS_VM_SLAVE_HOST(hwdev))
+
+#define NEED_MBOX_FORWARD(hwdev) IS_BMGW_SLAVE_HOST(hwdev)
+
+enum hinic3_host_mode_e {
+ HINIC3_MODE_NORMAL = 0,
+ HINIC3_SDI_MODE_VM,
+ HINIC3_SDI_MODE_BM,
+ HINIC3_SDI_MODE_MAX,
+};
+
+enum hinic3_hot_plug_mode {
+ HOT_PLUG_ENABLE,
+ HOT_PLUG_DISABLE,
+};
+
+enum hinic3_os_hot_replace_mode {
+ HOT_REPLACE_DISABLE,
+ HOT_REPLACE_ENABLE,
+};
+
+#define UNSUPPORT_HOT_PLUG(hwdev) \
+ ((hwdev)->hot_plug_mode == HOT_PLUG_DISABLE)
+
+#define SUPPORT_HOT_PLUG(hwdev) \
+ ((hwdev)->hot_plug_mode == HOT_PLUG_ENABLE)
+
+#define MULTI_HOST_CHIP_MODE_SHIFT 0
+#define MULTI_HOST_MASTER_MBX_STS_SHIFT 17
+#define MULTI_HOST_PRIV_DATA_SHIFT 0x8
+
+#define MULTI_HOST_CHIP_MODE_MASK 0xF
+#define MULTI_HOST_MASTER_MBX_STS_MASK 0x1
+#define MULTI_HOST_PRIV_DATA_MASK 0xFFFF
+
+#define MULTI_HOST_REG_SET(val, member) \
+ (((val) & MULTI_HOST_##member##_MASK) \
+ << MULTI_HOST_##member##_SHIFT)
+#define MULTI_HOST_REG_GET(val, member) \
+ (((val) >> MULTI_HOST_##member##_SHIFT) \
+ & MULTI_HOST_##member##_MASK)
+#define MULTI_HOST_REG_CLEAR(val, member) \
+ ((val) & (~(MULTI_HOST_##member##_MASK \
+ << MULTI_HOST_##member##_SHIFT)))
+
+struct mqm_eqm_vram_name_s {
+ char vram_name[VRAM_NAME_MAX_LEN];
+};
+
+struct hinic3_hwdev {
+ void *adapter_hdl; /* pointer to hinic3_pcidev or NDIS_Adapter */
+ void *pcidev_hdl; /* pointer to pcidev or Handler */
+ void *dev_hdl; /* pointer to pcidev->dev or Handler, for
+ * sdk_err() or dma_alloc()
+ */
+
+ void *service_adapter[SERVICE_T_MAX];
+ void *chip_node;
+ struct semaphore ppf_sem;
+ void *ppf_hwdev;
+
+ u32 wq_page_size;
+ int chip_present_flag;
+ bool poll; /* use polling mode or int mode */
+ u32 rsvd1;
+
+ struct hinic3_hwif *hwif; /* include void __iomem *bar */
+ struct comm_global_attr glb_attr;
+ u64 features[COMM_MAX_FEATURE_QWORD];
+
+ struct cfg_mgmt_info *cfg_mgmt;
+
+ struct hinic3_cmdqs *cmdqs;
+ struct hinic3_aeqs *aeqs;
+ struct hinic3_ceqs *ceqs;
+ struct hinic3_mbox *func_to_func;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt;
+
+ void *cqm_hdl;
+ struct mqm_addr_trans_tbl_info mqm_att;
+ struct hinic3_page_addr page_pa0;
+ struct hinic3_page_addr page_pa1;
+ u32 stateful_ref_cnt;
+ u32 rsvd2;
+
+ struct hinic3_multi_host_mgmt *mhost_mgmt;
+ char mhost_mgmt_name[VRAM_NAME_MAX_LEN];
+
+ struct mqm_eqm_vram_name_s *mqm_eqm_vram_name;
+
+ struct mutex stateful_mutex; /* protect cqm init and deinit */
+
+ struct hinic3_hw_stats hw_stats;
+ u8 *chip_fault_stats;
+
+ hinic3_event_handler event_callback;
+ void *event_pri_handle;
+
+ struct hinic3_board_info board_info;
+
+ struct delayed_work sync_time_task;
+ struct delayed_work channel_detect_task;
+ struct hisdk3_prof_attr *prof_attr;
+ struct hinic3_prof_adapter *prof_adap;
+
+ struct workqueue_struct *workq;
+
+ u32 rd_bar_err_cnt;
+ bool pcie_link_down;
+ bool heartbeat_lost;
+ struct timer_list heartbeat_timer;
+ struct work_struct heartbeat_lost_work;
+
+ ulong func_state;
+ spinlock_t channel_lock; /* protect channel init and deinit */
+
+ u16 probe_fault_level;
+
+ struct hinic3_devlink *devlink_dev;
+
+ enum hinic3_func_mode func_mode;
+ enum hinic3_hot_plug_mode hot_plug_mode;
+
+ enum hinic3_os_hot_replace_mode hot_replace_mode;
+ u32 rsvd5;
+
+ DECLARE_BITMAP(func_probe_in_host, MAX_FUNCTION_NUM);
+ DECLARE_BITMAP(netdev_setup_state, MAX_FUNCTION_NUM);
+
+ u64 cur_recv_aeq_cnt;
+ u64 last_recv_aeq_cnt;
+ u16 aeq_busy_cnt;
+
+ u64 mbox_send_cnt;
+ u64 mbox_ack_cnt;
+
+ u64 rsvd4[5];
+};
+
+#define HINIC3_DRV_FEATURE_QW0 \
+ (COMM_F_API_CHAIN | COMM_F_CLP | COMM_F_MBOX_SEGMENT | \
+ COMM_F_CMDQ_NUM | COMM_F_VIRTIO_VQ_SIZE)
+
+#define HINIC3_MAX_HOST_NUM(hwdev) ((hwdev)->glb_attr.max_host_num)
+#define HINIC3_MAX_PF_NUM(hwdev) ((hwdev)->glb_attr.max_pf_num)
+#define HINIC3_MGMT_CPU_NODE_ID(hwdev) ((hwdev)->glb_attr.mgmt_host_node_id)
+
+#define COMM_FEATURE_QW0(hwdev, feature) \
+ ((hwdev)->features[0] & COMM_F_##feature)
+#define COMM_SUPPORT_API_CHAIN(hwdev) COMM_FEATURE_QW0(hwdev, API_CHAIN)
+#define COMM_SUPPORT_CLP(hwdev) COMM_FEATURE_QW0(hwdev, CLP)
+#define COMM_SUPPORT_CHANNEL_DETECT(hwdev) COMM_FEATURE_QW0(hwdev, CHANNEL_DETECT)
+#define COMM_SUPPORT_MBOX_SEGMENT(hwdev) (hinic3_pcie_itf_id(hwdev) == SPU_HOST_ID)
+#define COMM_SUPPORT_CMDQ_NUM(hwdev) COMM_FEATURE_QW0(hwdev, CMDQ_NUM)
+#define COMM_SUPPORT_VIRTIO_VQ_SIZE(hwdev) COMM_FEATURE_QW0(hwdev, VIRTIO_VQ_SIZE)
+
+void set_func_host_mode(struct hinic3_hwdev *hwdev, enum hinic3_func_mode mode);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c
new file mode 100644
index 0000000..8590f70
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c
@@ -0,0 +1,1050 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_csr.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+
+#ifndef CONFIG_MODULE_PROF
+#define WAIT_HWIF_READY_TIMEOUT 10000
+#else
+#define WAIT_HWIF_READY_TIMEOUT 30000
+#endif
+
+#define HINIC3_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT 60000
+
+#define MAX_MSIX_ENTRY 2048
+
+#define DB_IDX(db, db_base) \
+ ((u32)(((ulong)(db) - (ulong)(db_base)) / \
+ HINIC3_DB_PAGE_SIZE))
+
+#define HINIC3_AF0_FUNC_GLOBAL_IDX_SHIFT 0
+#define HINIC3_AF0_P2P_IDX_SHIFT 12
+#define HINIC3_AF0_PCI_INTF_IDX_SHIFT 17
+#define HINIC3_AF0_VF_IN_PF_SHIFT 20
+#define HINIC3_AF0_FUNC_TYPE_SHIFT 28
+
+#define HINIC3_AF0_FUNC_GLOBAL_IDX_MASK 0xFFF
+#define HINIC3_AF0_P2P_IDX_MASK 0x1F
+#define HINIC3_AF0_PCI_INTF_IDX_MASK 0x7
+#define HINIC3_AF0_VF_IN_PF_MASK 0xFF
+#define HINIC3_AF0_FUNC_TYPE_MASK 0x1
+
+#define HINIC3_AF0_GET(val, member) \
+ (((val) >> HINIC3_AF0_##member##_SHIFT) & HINIC3_AF0_##member##_MASK)
+
+#define HINIC3_AF1_PPF_IDX_SHIFT 0
+#define HINIC3_AF1_AEQS_PER_FUNC_SHIFT 8
+#define HINIC3_AF1_MGMT_INIT_STATUS_SHIFT 30
+#define HINIC3_AF1_PF_INIT_STATUS_SHIFT 31
+
+#define HINIC3_AF1_PPF_IDX_MASK 0x3F
+#define HINIC3_AF1_AEQS_PER_FUNC_MASK 0x3
+#define HINIC3_AF1_MGMT_INIT_STATUS_MASK 0x1
+#define HINIC3_AF1_PF_INIT_STATUS_MASK 0x1
+
+#define HINIC3_AF1_GET(val, member) \
+ (((val) >> HINIC3_AF1_##member##_SHIFT) & HINIC3_AF1_##member##_MASK)
+
+#define HINIC3_AF2_CEQS_PER_FUNC_SHIFT 0
+#define HINIC3_AF2_DMA_ATTR_PER_FUNC_SHIFT 9
+#define HINIC3_AF2_IRQS_PER_FUNC_SHIFT 16
+
+#define HINIC3_AF2_CEQS_PER_FUNC_MASK 0x1FF
+#define HINIC3_AF2_DMA_ATTR_PER_FUNC_MASK 0x7
+#define HINIC3_AF2_IRQS_PER_FUNC_MASK 0x7FF
+
+#define HINIC3_AF2_GET(val, member) \
+ (((val) >> HINIC3_AF2_##member##_SHIFT) & HINIC3_AF2_##member##_MASK)
+
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_NXT_PF_SHIFT 0
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_PF_SHIFT 16
+
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_NXT_PF_MASK 0xFFF
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_PF_MASK 0xFFF
+
+#define HINIC3_AF3_GET(val, member) \
+ (((val) >> HINIC3_AF3_##member##_SHIFT) & HINIC3_AF3_##member##_MASK)
+
+#define HINIC3_AF4_DOORBELL_CTRL_SHIFT 0
+#define HINIC3_AF4_DOORBELL_CTRL_MASK 0x1
+
+#define HINIC3_AF4_GET(val, member) \
+ (((val) >> HINIC3_AF4_##member##_SHIFT) & HINIC3_AF4_##member##_MASK)
+
+#define HINIC3_AF4_SET(val, member) \
+ (((val) & HINIC3_AF4_##member##_MASK) << HINIC3_AF4_##member##_SHIFT)
+
+#define HINIC3_AF4_CLEAR(val, member) \
+ ((val) & (~(HINIC3_AF4_##member##_MASK << HINIC3_AF4_##member##_SHIFT)))
+
+#define HINIC3_AF5_OUTBOUND_CTRL_SHIFT 0
+#define HINIC3_AF5_OUTBOUND_CTRL_MASK 0x1
+
+#define HINIC3_AF5_GET(val, member) \
+ (((val) >> HINIC3_AF5_##member##_SHIFT) & HINIC3_AF5_##member##_MASK)
+
+#define HINIC3_AF5_SET(val, member) \
+ (((val) & HINIC3_AF5_##member##_MASK) << HINIC3_AF5_##member##_SHIFT)
+
+#define HINIC3_AF5_CLEAR(val, member) \
+ ((val) & (~(HINIC3_AF5_##member##_MASK << HINIC3_AF5_##member##_SHIFT)))
+
+#define HINIC3_AF6_PF_STATUS_SHIFT 0
+#define HINIC3_AF6_PF_STATUS_MASK 0xFFFF
+
+#define HINIC3_AF6_FUNC_MAX_SQ_SHIFT 23
+#define HINIC3_AF6_FUNC_MAX_SQ_MASK 0x1FF
+
+#define HINIC3_AF6_MSIX_FLEX_EN_SHIFT 22
+#define HINIC3_AF6_MSIX_FLEX_EN_MASK 0x1
+
+#define HINIC3_AF6_SET(val, member) \
+ ((((u32)(val)) & HINIC3_AF6_##member##_MASK) << \
+ HINIC3_AF6_##member##_SHIFT)
+
+#define HINIC3_AF6_GET(val, member) \
+ (((u32)(val) >> HINIC3_AF6_##member##_SHIFT) & HINIC3_AF6_##member##_MASK)
+
+#define HINIC3_AF6_CLEAR(val, member) \
+ ((u32)(val) & (~(HINIC3_AF6_##member##_MASK << \
+ HINIC3_AF6_##member##_SHIFT)))
+
+#define HINIC3_PPF_ELECT_PORT_IDX_SHIFT 0
+
+#define HINIC3_PPF_ELECT_PORT_IDX_MASK 0x3F
+
+#define HINIC3_PPF_ELECT_PORT_GET(val, member) \
+ (((val) >> HINIC3_PPF_ELECT_PORT_##member##_SHIFT) & \
+ HINIC3_PPF_ELECT_PORT_##member##_MASK)
+
+#define HINIC3_PPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC3_PPF_ELECTION_IDX_MASK 0x3F
+
+#define HINIC3_PPF_ELECTION_SET(val, member) \
+ (((val) & HINIC3_PPF_ELECTION_##member##_MASK) << \
+ HINIC3_PPF_ELECTION_##member##_SHIFT)
+
+#define HINIC3_PPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC3_PPF_ELECTION_##member##_SHIFT) & \
+ HINIC3_PPF_ELECTION_##member##_MASK)
+
+#define HINIC3_PPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC3_PPF_ELECTION_##member##_MASK << \
+ HINIC3_PPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC3_MPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC3_MPF_ELECTION_IDX_MASK 0x1F
+
+#define HINIC3_MPF_ELECTION_SET(val, member) \
+ (((val) & HINIC3_MPF_ELECTION_##member##_MASK) << \
+ HINIC3_MPF_ELECTION_##member##_SHIFT)
+
+#define HINIC3_MPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC3_MPF_ELECTION_##member##_SHIFT) & \
+ HINIC3_MPF_ELECTION_##member##_MASK)
+
+#define HINIC3_MPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC3_MPF_ELECTION_##member##_MASK << \
+ HINIC3_MPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC3_GET_REG_FLAG(reg) ((reg) & (~(HINIC3_REGS_FLAG_MAKS)))
+
+#define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MAKS))
+
+u32 hinic3_hwif_read_reg(struct hinic3_hwif *hwif, u32 reg)
+{
+ if (HINIC3_GET_REG_FLAG(reg) == HINIC3_MGMT_REGS_FLAG)
+ return be32_to_cpu(readl(hwif->mgmt_regs_base +
+ HINIC3_GET_REG_ADDR(reg)));
+ else
+ return be32_to_cpu(readl(hwif->cfg_regs_base +
+ HINIC3_GET_REG_ADDR(reg)));
+}
+
+void hinic3_hwif_write_reg(struct hinic3_hwif *hwif, u32 reg, u32 val)
+{
+ if (HINIC3_GET_REG_FLAG(reg) == HINIC3_MGMT_REGS_FLAG)
+ writel(cpu_to_be32(val),
+ hwif->mgmt_regs_base + HINIC3_GET_REG_ADDR(reg));
+ else
+ writel(cpu_to_be32(val),
+ hwif->cfg_regs_base + HINIC3_GET_REG_ADDR(reg));
+}
+
+bool get_card_present_state(struct hinic3_hwdev *hwdev)
+{
+ u32 attr1;
+
+ attr1 = hinic3_hwif_read_reg(hwdev->hwif, HINIC3_CSR_FUNC_ATTR1_ADDR);
+ if (attr1 == HINIC3_PCIE_LINK_DOWN) {
+ sdk_warn(hwdev->dev_hdl, "Card is not present\n");
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * hinic3_get_heartbeat_status - get heartbeat status
+ * @hwdev: the pointer to hw device
+ * Return: 0 - normal, 1 - heartbeat lost, 0xFFFFFFFF - PCIe link down
+ **/
+u32 hinic3_get_heartbeat_status(void *hwdev)
+{
+ u32 attr1;
+
+ if (!hwdev)
+ return HINIC3_PCIE_LINK_DOWN;
+
+ attr1 = hinic3_hwif_read_reg(((struct hinic3_hwdev *)hwdev)->hwif,
+ HINIC3_CSR_FUNC_ATTR1_ADDR);
+ if (attr1 == HINIC3_PCIE_LINK_DOWN)
+ return attr1;
+
+ return !HINIC3_AF1_GET(attr1, MGMT_INIT_STATUS);
+}
+EXPORT_SYMBOL(hinic3_get_heartbeat_status);
+
+#define MIGRATE_HOST_STATUS_CLEAR(host_id, val) ((val) & (~(1U << (host_id))))
+#define MIGRATE_HOST_STATUS_SET(host_id, enable) (((u8)(enable) & 1U) << (host_id))
+#define MIGRATE_HOST_STATUS_GET(host_id, val) (!!((val) & (1U << (host_id))))
+
+int hinic3_set_host_migrate_enable(void *hwdev, u8 host_id, bool enable)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ u32 reg_val;
+
+ if (!dev || host_id > SPU_HOST_ID)
+ return -EINVAL;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "hwdev should be ppf\n");
+ return -EINVAL;
+ }
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR);
+ reg_val = MIGRATE_HOST_STATUS_CLEAR(host_id, reg_val);
+ reg_val |= MIGRATE_HOST_STATUS_SET(host_id, enable);
+
+ hinic3_hwif_write_reg(dev->hwif, HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR, reg_val);
+
+ sdk_info(dev->dev_hdl, "Set migrate host %d status %d, reg value: 0x%x\n",
+ host_id, enable, reg_val);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_host_migrate_enable);
+
+int hinic3_get_host_migrate_enable(void *hwdev, u8 host_id, u8 *migrate_en)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ u32 reg_val;
+
+ if (!dev || !migrate_en || host_id > SPU_HOST_ID)
+ return -EINVAL;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "hwdev should be ppf\n");
+ return -EINVAL;
+ }
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR);
+ *migrate_en = MIGRATE_HOST_STATUS_GET(host_id, reg_val);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_host_migrate_enable);
+
+static enum hinic3_wait_return check_hwif_ready_handler(void *priv_data)
+{
+ u32 status;
+
+ status = hinic3_get_heartbeat_status(priv_data);
+ if (status == HINIC3_PCIE_LINK_DOWN)
+ return WAIT_PROCESS_ERR;
+ else if (!status)
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+static int wait_hwif_ready(struct hinic3_hwdev *hwdev)
+{
+ int ret;
+
+ ret = hinic3_wait_for_timeout(hwdev, check_hwif_ready_handler,
+ WAIT_HWIF_READY_TIMEOUT, USEC_PER_MSEC);
+ if (ret == -ETIMEDOUT) {
+ hwdev->probe_fault_level = FAULT_LEVEL_FATAL;
+ sdk_err(hwdev->dev_hdl, "Wait for hwif timeout\n");
+ }
+
+ return ret;
+}
+
+/**
+ * set_hwif_attr - set the attributes as members in hwif
+ * @hwif: the hardware interface of a pci function device
+ * @attr0: the first attribute that was read from the hw
+ * @attr1: the second attribute that was read from the hw
+ * @attr2: the third attribute that was read from the hw
+ * @attr3: the fourth attribute that was read from the hw
+ **/
+static void set_hwif_attr(struct hinic3_hwif *hwif, u32 attr0, u32 attr1,
+ u32 attr2, u32 attr3, u32 attr6)
+{
+ hwif->attr.func_global_idx = HINIC3_AF0_GET(attr0, FUNC_GLOBAL_IDX);
+ hwif->attr.port_to_port_idx = HINIC3_AF0_GET(attr0, P2P_IDX);
+ hwif->attr.pci_intf_idx = HINIC3_AF0_GET(attr0, PCI_INTF_IDX);
+ hwif->attr.vf_in_pf = HINIC3_AF0_GET(attr0, VF_IN_PF);
+ hwif->attr.func_type = HINIC3_AF0_GET(attr0, FUNC_TYPE);
+
+ hwif->attr.ppf_idx = HINIC3_AF1_GET(attr1, PPF_IDX);
+ hwif->attr.num_aeqs = BIT(HINIC3_AF1_GET(attr1, AEQS_PER_FUNC));
+ hwif->attr.num_ceqs = (u8)HINIC3_AF2_GET(attr2, CEQS_PER_FUNC);
+ hwif->attr.num_irqs = HINIC3_AF2_GET(attr2, IRQS_PER_FUNC);
+ if (hwif->attr.num_irqs > MAX_MSIX_ENTRY)
+ hwif->attr.num_irqs = MAX_MSIX_ENTRY;
+
+ hwif->attr.num_dma_attr = BIT(HINIC3_AF2_GET(attr2, DMA_ATTR_PER_FUNC));
+
+ hwif->attr.global_vf_id_of_pf = HINIC3_AF3_GET(attr3,
+ GLOBAL_VF_ID_OF_PF);
+
+ hwif->attr.num_sq = HINIC3_AF6_GET(attr6, FUNC_MAX_SQ);
+ hwif->attr.msix_flex_en = HINIC3_AF6_GET(attr6, MSIX_FLEX_EN);
+}
+
+/**
+ * get_hwif_attr - read and set the attributes as members in hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static int get_hwif_attr(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr0, attr1, attr2, attr3, attr6;
+
+ addr = HINIC3_CSR_FUNC_ATTR0_ADDR;
+ attr0 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr0 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR1_ADDR;
+ attr1 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr1 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR2_ADDR;
+ attr2 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr2 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR3_ADDR;
+ attr3 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr3 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR6_ADDR;
+ attr6 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr6 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ set_hwif_attr(hwif, attr0, attr1, attr2, attr3, attr6);
+
+ return 0;
+}
+
+void hinic3_set_pf_status(struct hinic3_hwif *hwif,
+ enum hinic3_pf_status status)
+{
+ u32 attr6 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR);
+
+ attr6 = HINIC3_AF6_CLEAR(attr6, PF_STATUS);
+ attr6 |= HINIC3_AF6_SET(status, PF_STATUS);
+
+ if (hwif->attr.func_type == TYPE_VF)
+ return;
+
+ hinic3_hwif_write_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR, attr6);
+}
+
+enum hinic3_pf_status hinic3_get_pf_status(struct hinic3_hwif *hwif)
+{
+ u32 attr6 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR);
+
+ return HINIC3_AF6_GET(attr6, PF_STATUS);
+}
+
+static enum hinic3_doorbell_ctrl hinic3_get_doorbell_ctrl_status(struct hinic3_hwif *hwif)
+{
+ u32 attr4 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR4_ADDR);
+
+ return HINIC3_AF4_GET(attr4, DOORBELL_CTRL);
+}
+
+static enum hinic3_outbound_ctrl hinic3_get_outbound_ctrl_status(struct hinic3_hwif *hwif)
+{
+ u32 attr5 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR5_ADDR);
+
+ return HINIC3_AF5_GET(attr5, OUTBOUND_CTRL);
+}
+
+void hinic3_enable_doorbell(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = HINIC3_CSR_FUNC_ATTR4_ADDR;
+ attr4 = hinic3_hwif_read_reg(hwif, addr);
+
+ attr4 = HINIC3_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= HINIC3_AF4_SET(ENABLE_DOORBELL, DOORBELL_CTRL);
+
+ hinic3_hwif_write_reg(hwif, addr, attr4);
+}
+
+void hinic3_disable_doorbell(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = HINIC3_CSR_FUNC_ATTR4_ADDR;
+ attr4 = hinic3_hwif_read_reg(hwif, addr);
+
+ attr4 = HINIC3_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= HINIC3_AF4_SET(DISABLE_DOORBELL, DOORBELL_CTRL);
+
+ hinic3_hwif_write_reg(hwif, addr, attr4);
+}
+
+/**
+ * set_ppf - try to set hwif as ppf and set the type of hwif in this case
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void set_ppf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 addr, val, ppf_election;
+
+ /* Read Modify Write */
+ addr = HINIC3_CSR_PPF_ELECTION_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+ val = HINIC3_PPF_ELECTION_CLEAR(val, IDX);
+
+ ppf_election = HINIC3_PPF_ELECTION_SET(attr->func_global_idx, IDX);
+ val |= ppf_election;
+
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ /* Check PPF */
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ attr->ppf_idx = HINIC3_PPF_ELECTION_GET(val, IDX);
+ if (attr->ppf_idx == attr->func_global_idx)
+ attr->func_type = TYPE_PPF;
+}
+
+/**
+ * get_mpf - get the mpf index into the hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void get_mpf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 mpf_election, addr;
+
+ addr = HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ mpf_election = hinic3_hwif_read_reg(hwif, addr);
+ attr->mpf_idx = HINIC3_MPF_ELECTION_GET(mpf_election, IDX);
+}
+
+/**
+ * set_mpf - try to set hwif as mpf and set the mpf idx in hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void set_mpf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 addr, val, mpf_election;
+
+ /* Read Modify Write */
+ addr = HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ val = HINIC3_MPF_ELECTION_CLEAR(val, IDX);
+ mpf_election = HINIC3_MPF_ELECTION_SET(attr->func_global_idx, IDX);
+
+ val |= mpf_election;
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+static int init_hwif(struct hinic3_hwdev *hwdev, void *cfg_reg_base, void *intr_reg_base,
+ void *mgmt_regs_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ hwif = kzalloc(sizeof(*hwif), GFP_KERNEL);
+ if (!hwif)
+ return -ENOMEM;
+
+ hwdev->hwif = hwif;
+ hwif->pdev = hwdev->pcidev_hdl;
+
+ /* if function is VF, mgmt_regs_base will be NULL */
+ hwif->cfg_regs_base = mgmt_regs_base ? cfg_reg_base :
+ (u8 *)cfg_reg_base + HINIC3_VF_CFG_REG_OFFSET;
+
+ hwif->intr_regs_base = intr_reg_base;
+ hwif->mgmt_regs_base = mgmt_regs_base;
+
+ return 0;
+}
+
+static int init_db_area_idx(struct hinic3_hwif *hwif, u64 db_base_phy, u8 *db_base,
+ u64 db_dwqe_len)
+{
+ struct hinic3_free_db_area *free_db_area = &hwif->free_db_area;
+ u32 db_max_areas;
+
+ hwif->db_base_phy = db_base_phy;
+ hwif->db_base = db_base;
+ hwif->db_dwqe_len = db_dwqe_len;
+
+ db_max_areas = (db_dwqe_len > HINIC3_DB_DWQE_SIZE) ?
+ HINIC3_DB_MAX_AREAS :
+ (u32)(db_dwqe_len / HINIC3_DB_PAGE_SIZE);
+ free_db_area->db_bitmap_array = bitmap_zalloc(db_max_areas, GFP_KERNEL);
+ if (!free_db_area->db_bitmap_array) {
+ pr_err("Failed to allocate db area.\n");
+ return -ENOMEM;
+ }
+ free_db_area->db_max_areas = db_max_areas;
+ spin_lock_init(&free_db_area->idx_lock);
+ return 0;
+}
+
+static void free_db_area(struct hinic3_free_db_area *free_db_area)
+{
+ spin_lock_deinit(&free_db_area->idx_lock);
+ kfree(free_db_area->db_bitmap_array);
+ free_db_area->db_bitmap_array = NULL;
+}
+
+static int get_db_idx(struct hinic3_hwif *hwif, u32 *idx)
+{
+ struct hinic3_free_db_area *free_db_area = &hwif->free_db_area;
+ u32 pg_idx;
+
+ spin_lock(&free_db_area->idx_lock);
+ pg_idx = (u32)find_first_zero_bit(free_db_area->db_bitmap_array,
+ free_db_area->db_max_areas);
+ if (pg_idx == free_db_area->db_max_areas) {
+ spin_unlock(&free_db_area->idx_lock);
+ return -ENOMEM;
+ }
+ set_bit(pg_idx, free_db_area->db_bitmap_array);
+ spin_unlock(&free_db_area->idx_lock);
+
+ *idx = pg_idx;
+
+ return 0;
+}
+
+static void free_db_idx(struct hinic3_hwif *hwif, u32 idx)
+{
+ struct hinic3_free_db_area *free_db_area = &hwif->free_db_area;
+
+ if (idx >= free_db_area->db_max_areas)
+ return;
+
+ spin_lock(&free_db_area->idx_lock);
+ clear_bit((int)idx, free_db_area->db_bitmap_array);
+
+ spin_unlock(&free_db_area->idx_lock);
+}
+
+void hinic3_free_db_addr(void *hwdev, const void __iomem *db_base,
+ void __iomem *dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx;
+
+ if (!hwdev || !db_base)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+ idx = DB_IDX(db_base, hwif->db_base);
+
+ free_db_idx(hwif, idx);
+}
+EXPORT_SYMBOL(hinic3_free_db_addr);
+
+int hinic3_alloc_db_addr(void *hwdev, void __iomem **db_base,
+ void __iomem **dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx = 0;
+ int err;
+
+ if (!hwdev || !db_base)
+ return -EINVAL;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ err = get_db_idx(hwif, &idx);
+ if (err)
+ return -EFAULT;
+
+ *db_base = hwif->db_base + idx * HINIC3_DB_PAGE_SIZE;
+
+ if (!dwqe_base)
+ return 0;
+
+ *dwqe_base = (u8 *)*db_base + HINIC3_DWQE_OFFSET;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_db_addr);
+
+void hinic3_free_db_phy_addr(void *hwdev, u64 db_base, u64 dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+ idx = DB_IDX(db_base, hwif->db_base_phy);
+
+ free_db_idx(hwif, idx);
+}
+EXPORT_SYMBOL(hinic3_free_db_phy_addr);
+
+int hinic3_alloc_db_phy_addr(void *hwdev, u64 *db_base, u64 *dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx;
+ int err;
+
+ if (!hwdev || !db_base || !dwqe_base)
+ return -EINVAL;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ err = get_db_idx(hwif, &idx);
+ if (err)
+ return -EFAULT;
+
+ *db_base = hwif->db_base_phy + idx * HINIC3_DB_PAGE_SIZE;
+ *dwqe_base = *db_base + HINIC3_DWQE_OFFSET;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_db_phy_addr);
+
+void hinic3_set_msix_auto_mask_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_auto_mask flag)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 mask_bits;
+ u32 addr;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ if (flag)
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(1, AUTO_MSK_SET);
+ else
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(1, AUTO_MSK_CLR);
+
+ mask_bits = mask_bits |
+ HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, mask_bits);
+}
+EXPORT_SYMBOL(hinic3_set_msix_auto_mask_state);
+
+void hinic3_set_msix_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_state flag)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 mask_bits;
+ u32 addr;
+ u8 int_msk = 1;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ if (flag)
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(int_msk, INT_MSK_SET);
+ else
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(int_msk, INT_MSK_CLR);
+ mask_bits = mask_bits |
+ HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, mask_bits);
+}
+EXPORT_SYMBOL(hinic3_set_msix_state);
+
+static void disable_all_msix(struct hinic3_hwdev *hwdev)
+{
+ u16 num_irqs = hwdev->hwif->attr.num_irqs;
+ u16 i;
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_set_msix_state(hwdev, i, HINIC3_MSIX_DISABLE);
+}
+
+static void enable_all_msix(struct hinic3_hwdev *hwdev)
+{
+ u16 num_irqs = hwdev->hwif->attr.num_irqs;
+ u16 i;
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_set_msix_state(hwdev, i, HINIC3_MSIX_ENABLE);
+}
+
+static enum hinic3_wait_return check_db_outbound_enable_handler(void *priv_data)
+{
+ struct hinic3_hwif *hwif = priv_data;
+ enum hinic3_doorbell_ctrl db_ctrl;
+ enum hinic3_outbound_ctrl outbound_ctrl;
+
+ db_ctrl = hinic3_get_doorbell_ctrl_status(hwif);
+ outbound_ctrl = hinic3_get_outbound_ctrl_status(hwif);
+ if (outbound_ctrl == ENABLE_OUTBOUND && db_ctrl == ENABLE_DOORBELL)
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+static int wait_until_doorbell_and_outbound_enabled(struct hinic3_hwif *hwif)
+{
+ return hinic3_wait_for_timeout(hwif, check_db_outbound_enable_handler,
+ HINIC3_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT, USEC_PER_MSEC);
+}
+
+static void select_ppf_mpf(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+
+ if (!HINIC3_IS_VF(hwdev)) {
+ set_ppf(hwif);
+
+ if (HINIC3_IS_PPF(hwdev))
+ set_mpf(hwif);
+
+ get_mpf(hwif);
+ }
+}
+
+/**
+ * hinic3_init_hwif - initialize the hw interface
+ * @hwif: the hardware interface of a pci function device
+ * @pdev: the pci device that will be part of the hwif struct
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_init_hwif(struct hinic3_hwdev *hwdev, void *cfg_reg_base,
+ void *intr_reg_base, void *mgmt_regs_base, u64 db_base_phy,
+ void *db_base, u64 db_dwqe_len)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 attr1, attr4, attr5;
+ int err;
+
+ err = init_hwif(hwdev, cfg_reg_base, intr_reg_base, mgmt_regs_base);
+ if (err)
+ return err;
+
+ hwif = hwdev->hwif;
+
+ err = init_db_area_idx(hwif, db_base_phy, db_base, db_dwqe_len);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init db area.\n");
+ goto init_db_area_err;
+ }
+
+ err = wait_hwif_ready(hwdev);
+ if (err) {
+ attr1 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR1_ADDR);
+ sdk_err(hwdev->dev_hdl, "Chip status is not ready, attr1:0x%x\n", attr1);
+ goto hwif_ready_err;
+ }
+
+ err = get_hwif_attr(hwif);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Get hwif attr failed\n");
+ goto hwif_ready_err;
+ }
+
+ err = wait_until_doorbell_and_outbound_enabled(hwif);
+ if (err) {
+ attr4 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR4_ADDR);
+ attr5 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR5_ADDR);
+ sdk_err(hwdev->dev_hdl, "Hw doorbell/outbound is disabled, attr4 0x%x attr5 0x%x\n",
+ attr4, attr5);
+ goto hwif_ready_err;
+ }
+
+ select_ppf_mpf(hwdev);
+
+ disable_all_msix(hwdev);
+	/* disable mgmt cpu from reporting any event */
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_INIT);
+
+ sdk_info(hwdev->dev_hdl, "global_func_idx: %u, func_type: %d, host_id: %u, ppf: %u, mpf: %u\n",
+ hwif->attr.func_global_idx, hwif->attr.func_type, hwif->attr.pci_intf_idx,
+ hwif->attr.ppf_idx, hwif->attr.mpf_idx);
+
+ return 0;
+
+hwif_ready_err:
+ hinic3_show_chip_err_info(hwdev);
+ free_db_area(&hwif->free_db_area);
+init_db_area_err:
+ kfree(hwif);
+
+ return err;
+}
+
+/**
+ * hinic3_free_hwif - free the hw interface
+ * @hwdev: the hw device that owns the hw interface
+ **/
+void hinic3_free_hwif(struct hinic3_hwdev *hwdev)
+{
+ spin_lock_deinit(&hwdev->hwif->free_db_area.idx_lock);
+ free_db_area(&hwdev->hwif->free_db_area);
+ enable_all_msix(hwdev);
+ kfree(hwdev->hwif);
+ hwdev->hwif = NULL;
+}
+
+u16 hinic3_global_func_id(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_global_idx;
+}
+EXPORT_SYMBOL(hinic3_global_func_id);
+
+/**
+ * get function id from register, used by the sriov hot migration process
+ * @hwdev: the pointer to hw device
+ */
+u16 hinic3_global_func_id_hw(void *hwdev)
+{
+ u32 addr, attr0;
+ struct hinic3_hwdev *dev;
+
+ dev = (struct hinic3_hwdev *)hwdev;
+ addr = HINIC3_CSR_FUNC_ATTR0_ADDR;
+ attr0 = hinic3_hwif_read_reg(dev->hwif, addr);
+
+ return HINIC3_AF0_GET(attr0, FUNC_GLOBAL_IDX);
+}
+
+/**
+ * get function id, used by sriov hot migratition process.
+ * @hwdev: the pointer to hw device
+ * @func_id: function id
+ */
+int hinic3_global_func_id_get(void *hwdev, u16 *func_id)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev || !func_id)
+ return -EINVAL;
+
+ /* only vf get func_id from chip reg for sriov migrate */
+ if (!HINIC3_IS_VF(dev)) {
+ *func_id = hinic3_global_func_id(hwdev);
+ return 0;
+ }
+
+ *func_id = hinic3_global_func_id_hw(dev);
+ return 0;
+}
+
+u16 hinic3_intr_num(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.num_irqs;
+}
+EXPORT_SYMBOL(hinic3_intr_num);
+
+u8 hinic3_pf_id_of_vf(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.port_to_port_idx;
+}
+EXPORT_SYMBOL(hinic3_pf_id_of_vf);
+
+u8 hinic3_pcie_itf_id(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.pci_intf_idx;
+}
+EXPORT_SYMBOL(hinic3_pcie_itf_id);
+
+u8 hinic3_vf_in_pf(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.vf_in_pf;
+}
+EXPORT_SYMBOL(hinic3_vf_in_pf);
+
+enum func_type hinic3_func_type(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_type;
+}
+EXPORT_SYMBOL(hinic3_func_type);
+
+u8 hinic3_ceq_num(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.num_ceqs;
+}
+EXPORT_SYMBOL(hinic3_ceq_num);
+
+u16 hinic3_glb_pf_vf_offset(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.global_vf_id_of_pf;
+}
+EXPORT_SYMBOL(hinic3_glb_pf_vf_offset);
+
+u8 hinic3_ppf_idx(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.ppf_idx;
+}
+EXPORT_SYMBOL(hinic3_ppf_idx);
+
+u8 hinic3_host_ppf_idx(struct hinic3_hwdev *hwdev, u8 host_id)
+{
+ u32 ppf_elect_port_addr;
+ u32 val;
+
+ if (!hwdev)
+ return 0;
+
+ ppf_elect_port_addr = HINIC3_CSR_FUNC_PPF_ELECT(host_id);
+ val = hinic3_hwif_read_reg(hwdev->hwif, ppf_elect_port_addr);
+
+ return HINIC3_PPF_ELECT_PORT_GET(val, IDX);
+}
+
+u32 hinic3_get_self_test_result(void *hwdev)
+{
+ struct hinic3_hwif *hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hinic3_hwif_read_reg(hwif, HINIC3_MGMT_HEALTH_STATUS_ADDR);
+}
+
+void hinic3_show_chip_err_info(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+ u32 value;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return;
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_CHIP_BASE_INFO_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip base info: 0x%08x\n", value);
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_MGMT_HEALTH_STATUS_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Mgmt CPU health status: 0x%08x\n", value);
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_CHIP_ERR_STATUS0_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip fatal error status0: 0x%08x\n", value);
+ value = hinic3_hwif_read_reg(hwif, HINIC3_CHIP_ERR_STATUS1_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip fatal error status1: 0x%08x\n", value);
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_ERR_INFO0_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip exception info0: 0x%08x\n", value);
+ value = hinic3_hwif_read_reg(hwif, HINIC3_ERR_INFO1_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip exception info1: 0x%08x\n", value);
+ value = hinic3_hwif_read_reg(hwif, HINIC3_ERR_INFO2_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip exception info2: 0x%08x\n", value);
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h
new file mode 100644
index 0000000..b204b21
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HWIF_H
+#define HINIC3_HWIF_H
+
+#include "hinic3_hwdev.h"
+
+#define HINIC3_PCIE_LINK_DOWN 0xFFFFFFFF
+
+struct hinic3_free_db_area {
+ unsigned long *db_bitmap_array;
+ u32 db_max_areas;
+ /* spinlock for allocating doorbell area */
+ spinlock_t idx_lock;
+};
+
+struct hinic3_func_attr {
+ u16 func_global_idx;
+ u8 port_to_port_idx;
+ u8 pci_intf_idx;
+ u8 vf_in_pf;
+ u8 rsvd1;
+ u16 rsvd2;
+ enum func_type func_type;
+
+ u8 mpf_idx;
+
+ u8 ppf_idx;
+
+ u16 num_irqs; /* max: 2 ^ 15 */
+ u8 num_aeqs; /* max: 2 ^ 3 */
+ u8 num_ceqs; /* max: 2 ^ 7 */
+
+ u16 num_sq; /* max: 2 ^ 8 */
+ u8 num_dma_attr; /* max: 2 ^ 6 */
+ u8 msix_flex_en;
+
+ u16 global_vf_id_of_pf;
+};
+
+struct hinic3_hwif {
+ u8 __iomem *cfg_regs_base;
+ u8 __iomem *intr_regs_base;
+ u8 __iomem *mgmt_regs_base;
+ u64 db_base_phy;
+ u64 db_dwqe_len;
+ u8 __iomem *db_base;
+
+ struct hinic3_free_db_area free_db_area;
+
+ struct hinic3_func_attr attr;
+
+ void *pdev;
+ u64 rsvd;
+};
+
+enum hinic3_outbound_ctrl {
+ ENABLE_OUTBOUND = 0x0,
+ DISABLE_OUTBOUND = 0x1,
+};
+
+enum hinic3_doorbell_ctrl {
+ ENABLE_DOORBELL = 0x0,
+ DISABLE_DOORBELL = 0x1,
+};
+
+enum hinic3_pf_status {
+ HINIC3_PF_STATUS_INIT = 0x0,
+ HINIC3_PF_STATUS_ACTIVE_FLAG = 0x11,
+ HINIC3_PF_STATUS_FLR_START_FLAG = 0x12,
+ HINIC3_PF_STATUS_FLR_FINISH_FLAG = 0x13,
+};
+
+#define HINIC3_HWIF_NUM_AEQS(hwif) ((hwif)->attr.num_aeqs)
+#define HINIC3_HWIF_NUM_CEQS(hwif) ((hwif)->attr.num_ceqs)
+#define HINIC3_HWIF_NUM_IRQS(hwif) ((hwif)->attr.num_irqs)
+#define HINIC3_HWIF_GLOBAL_IDX(hwif) ((hwif)->attr.func_global_idx)
+#define HINIC3_HWIF_GLOBAL_VF_OFFSET(hwif) ((hwif)->attr.global_vf_id_of_pf)
+#define HINIC3_HWIF_PPF_IDX(hwif) ((hwif)->attr.ppf_idx)
+#define HINIC3_PCI_INTF_IDX(hwif) ((hwif)->attr.pci_intf_idx)
+
+#define HINIC3_FUNC_TYPE(dev) ((dev)->hwif->attr.func_type)
+#define HINIC3_IS_PF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_PF)
+#define HINIC3_IS_VF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_VF)
+#define HINIC3_IS_PPF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_PPF)
+
+u32 hinic3_hwif_read_reg(struct hinic3_hwif *hwif, u32 reg);
+
+void hinic3_hwif_write_reg(struct hinic3_hwif *hwif, u32 reg, u32 val);
+
+void hinic3_set_pf_status(struct hinic3_hwif *hwif,
+ enum hinic3_pf_status status);
+
+enum hinic3_pf_status hinic3_get_pf_status(struct hinic3_hwif *hwif);
+
+void hinic3_disable_doorbell(struct hinic3_hwif *hwif);
+
+void hinic3_enable_doorbell(struct hinic3_hwif *hwif);
+
+int hinic3_init_hwif(struct hinic3_hwdev *hwdev, void *cfg_reg_base,
+ void *intr_reg_base, void *mgmt_regs_base, u64 db_base_phy,
+ void *db_base, u64 db_dwqe_len);
+
+void hinic3_free_hwif(struct hinic3_hwdev *hwdev);
+
+void hinic3_show_chip_err_info(struct hinic3_hwdev *hwdev);
+
+u8 hinic3_host_ppf_idx(struct hinic3_hwdev *hwdev, u8 host_id);
+
+bool get_card_present_state(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c
new file mode 100644
index 0000000..82a26ae
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c
@@ -0,0 +1,2460 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <net/addrconf.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/io-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/inetdevice.h>
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/aer.h>
+#include <linux/debugfs.h>
+#include <linux/notifier.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_common.h"
+#include "hinic3_crm.h"
+#include "hinic3_pci_id_tbl.h"
+#include "hinic3_sriov.h"
+#include "hinic3_dev_mgmt.h"
+#include "hinic3_nictool.h"
+#include "hinic3_hw.h"
+#include "hinic3_lld.h"
+
+#include "hinic3_profile.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_multi_host_mgmt.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_devlink.h"
+
+#include "vram_common.h"
+
+enum partition_dev_type {
+ PARTITION_DEV_NONE = 0,
+ PARTITION_DEV_SHARED,
+ PARTITION_DEV_EXCLUSIVE,
+ PARTITION_DEV_BACKUP,
+};
+
+#ifdef HAVE_HOT_REPLACE_FUNC
+extern int vpci_set_partition_attrs(struct pci_dev *dev, unsigned int dev_type, unsigned int partition_id);
+extern int get_partition_id(void);
+#else
+static int vpci_set_partition_attrs(struct pci_dev *dev, unsigned int dev_type, unsigned int partition_id) { return 0; }
+static int get_partition_id(void) { return 0; }
+#endif
+
+static bool disable_vf_load;
+module_param(disable_vf_load, bool, 0444);
+MODULE_PARM_DESC(disable_vf_load,
+ "Disable virtual functions probe or not - default is false");
+
+static bool g_is_pf_migrated;
+static bool disable_attach;
+module_param(disable_attach, bool, 0444);
+MODULE_PARM_DESC(disable_attach, "disable_attach or not - default is false");
+
+#define HINIC3_WAIT_SRIOV_CFG_TIMEOUT 15000
+
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+static DEVICE_ATTR(sriov_numvfs, 0664,
+ hinic3_sriov_numvfs_show, hinic3_sriov_numvfs_store);
+static DEVICE_ATTR(sriov_totalvfs, 0444,
+ hinic3_sriov_totalvfs_show, NULL);
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+
+static struct attribute *hinic3_attributes[] = {
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+ &dev_attr_sriov_numvfs.attr,
+ &dev_attr_sriov_totalvfs.attr,
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+ NULL
+};
+
+static const struct attribute_group hinic3_attr_group = {
+ .attrs = hinic3_attributes,
+};
+
+struct hinic3_uld_info g_uld_info[SERVICE_T_MAX] = { {0} };
+
+#define HINIC3_EVENT_PROCESS_TIMEOUT 10000
+#define HINIC3_WAIT_EVENT_PROCESS_TIMEOUT 100
+struct mutex g_uld_mutex;
+#define BUS_MAX_DEV_NUM 256
+#define HINIC3_SLAVE_WORK_MAX_NUM 20
+
+typedef struct vf_offset_info {
+ u8 valid;
+ u16 vf_offset_from_pf[CMD_MAX_MAX_PF_NUM];
+} VF_OFFSET_INFO_S;
+
+static VF_OFFSET_INFO_S g_vf_offset;
+DEFINE_MUTEX(g_vf_offset_lock);
+
+void hinic3_uld_lock_init(void)
+{
+ mutex_init(&g_uld_mutex);
+}
+
+static const char *s_uld_name[SERVICE_T_MAX] = {
+ "nic", "ovs", "roce", "toe", "ioe",
+ "fc", "vbs", "ipsec", "virtio", "migrate",
+ "ppa", "custom", "vroce", "crypt", "vsock", "bifur"};
+
+const char **hinic3_get_uld_names(void)
+{
+ return s_uld_name;
+}
+
+#ifdef CONFIG_PCI_IOV
+static int hinic3_get_pf_device_id(struct pci_dev *pdev)
+{
+ struct pci_dev *pf_dev = pci_physfn(pdev);
+
+ return pf_dev->device;
+}
+#endif
+
+static int attach_uld(struct hinic3_pcidev *dev, enum hinic3_service_type type,
+ const struct hinic3_uld_info *uld_info)
+{
+ void *uld_dev = NULL;
+ int err;
+
+ mutex_lock(&dev->pdev_mutex);
+
+ if (dev->uld_dev[type]) {
+ sdk_err(&dev->pcidev->dev,
+ "%s driver has attached to pcie device\n",
+ s_uld_name[type]);
+ err = 0;
+ goto out_unlock;
+ }
+
+ atomic_set(&dev->uld_ref_cnt[type], 0);
+
+ if (!uld_info->probe) {
+ err = 0;
+ goto out_unlock;
+ }
+ err = uld_info->probe(&dev->lld_dev, &uld_dev, dev->uld_dev_name[type]);
+ if (err) {
+ sdk_err(&dev->pcidev->dev,
+ "Failed to add object for %s driver to pcie device\n",
+ s_uld_name[type]);
+ goto probe_failed;
+ }
+
+ dev->uld_dev[type] = uld_dev;
+ set_bit(type, &dev->uld_state);
+ mutex_unlock(&dev->pdev_mutex);
+
+ sdk_info(&dev->pcidev->dev,
+ "Attach %s driver to pcie device succeed\n", s_uld_name[type]);
+ return 0;
+
+probe_failed:
+out_unlock:
+ mutex_unlock(&dev->pdev_mutex);
+
+ return err;
+}
+
+static void wait_uld_unused(struct hinic3_pcidev *dev, enum hinic3_service_type type)
+{
+ u32 loop_cnt = 0;
+
+ while (atomic_read(&dev->uld_ref_cnt[type])) {
+ loop_cnt++;
+ if (loop_cnt % PRINT_ULD_DETACH_TIMEOUT_INTERVAL == 0)
+ sdk_err(&dev->pcidev->dev, "Wait for uld unused for %lds, reference count: %d\n",
+ loop_cnt / MSEC_PER_SEC, atomic_read(&dev->uld_ref_cnt[type]));
+
+ usleep_range(ULD_LOCK_MIN_USLEEP_TIME, ULD_LOCK_MAX_USLEEP_TIME);
+ }
+}
+
+static void detach_uld(struct hinic3_pcidev *dev,
+ enum hinic3_service_type type)
+{
+ struct hinic3_uld_info *uld_info = &g_uld_info[type];
+ unsigned long end;
+ bool timeout = true;
+
+ mutex_lock(&dev->pdev_mutex);
+ if (!dev->uld_dev[type]) {
+ mutex_unlock(&dev->pdev_mutex);
+ return;
+ }
+
+ end = jiffies + msecs_to_jiffies(HINIC3_EVENT_PROCESS_TIMEOUT);
+ do {
+ if (!test_and_set_bit(type, &dev->state)) {
+ timeout = false;
+ break;
+ }
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+ } while (time_before(jiffies, end));
+
+ if (timeout && !test_and_set_bit(type, &dev->state))
+ timeout = false;
+
+ spin_lock_bh(&dev->uld_lock);
+ clear_bit(type, &dev->uld_state);
+ spin_unlock_bh(&dev->uld_lock);
+
+ wait_uld_unused(dev, type);
+
+ if (!uld_info->remove) {
+ mutex_unlock(&dev->pdev_mutex);
+ return;
+ }
+ uld_info->remove(&dev->lld_dev, dev->uld_dev[type]);
+
+ dev->uld_dev[type] = NULL;
+ if (!timeout)
+ clear_bit(type, &dev->state);
+
+ sdk_info(&dev->pcidev->dev,
+ "Detach %s driver from pcie device succeed\n",
+ s_uld_name[type]);
+ mutex_unlock(&dev->pdev_mutex);
+}
+
+static void attach_ulds(struct hinic3_pcidev *dev)
+{
+ enum hinic3_service_type type;
+ struct pci_dev *pdev = dev->pcidev;
+
+ int is_in_kexec = vram_get_kexec_flag();
+ /* no need to hold the lock when drivers load in parallel during spu hot replace */
+ if (is_in_kexec == 0) {
+ lld_hold();
+ }
+
+ mutex_lock(&g_uld_mutex);
+
+ for (type = SERVICE_T_OVS; type < SERVICE_T_MAX; type++) {
+ if (g_uld_info[type].probe) {
+ if (pdev->is_virtfn &&
+ (!hinic3_get_vf_service_load(pdev, (u16)type))) {
+ sdk_info(&pdev->dev, "VF device disable service_type = %d load in host\n",
+ type);
+ continue;
+ }
+ attach_uld(dev, type, &g_uld_info[type]);
+ }
+ }
+ mutex_unlock(&g_uld_mutex);
+
+ if (is_in_kexec == 0) {
+ lld_put();
+ }
+}
+
+static void detach_ulds(struct hinic3_pcidev *dev)
+{
+ enum hinic3_service_type type;
+
+ lld_hold();
+ mutex_lock(&g_uld_mutex);
+ for (type = SERVICE_T_MAX - 1; type > SERVICE_T_NIC; type--) {
+ if (g_uld_info[type].probe)
+ detach_uld(dev, type);
+ }
+
+ if (g_uld_info[SERVICE_T_NIC].probe)
+ detach_uld(dev, SERVICE_T_NIC);
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+}
+
+int hinic3_register_uld(enum hinic3_service_type type,
+ struct hinic3_uld_info *uld_info)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ struct list_head *chip_list = NULL;
+
+ if (type >= SERVICE_T_MAX) {
+ pr_err("Unknown type %d of up layer driver to register\n",
+ type);
+ return -EINVAL;
+ }
+
+ if (!uld_info || !uld_info->probe || !uld_info->remove) {
+ pr_err("Invalid information of %s driver to register\n",
+ s_uld_name[type]);
+ return -EINVAL;
+ }
+
+ lld_hold();
+ mutex_lock(&g_uld_mutex);
+
+ if (g_uld_info[type].probe) {
+ pr_err("%s driver has registered\n", s_uld_name[type]);
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+ return -EINVAL;
+ }
+
+ chip_list = get_hinic3_chip_list();
+ memcpy(&g_uld_info[type], uld_info, sizeof(struct hinic3_uld_info));
+ list_for_each_entry(chip_node, chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (attach_uld(dev, type, uld_info) != 0) {
+ sdk_err(&dev->pcidev->dev,
+ "Attach %s driver to pcie device failed\n",
+ s_uld_name[type]);
+#ifdef CONFIG_MODULE_PROF
+ hinic3_probe_fault_process(dev->pcidev, FAULT_LEVEL_HOST);
+ break;
+#else
+ continue;
+#endif
+ }
+ }
+ }
+
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+
+ pr_info("Register %s driver succeed\n", s_uld_name[type]);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_uld);
+
+void hinic3_unregister_uld(enum hinic3_service_type type)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ struct hinic3_uld_info *uld_info = NULL;
+ struct list_head *chip_list = NULL;
+
+ if (type >= SERVICE_T_MAX) {
+ pr_err("Unknown type %d of up layer driver to unregister\n",
+ type);
+ return;
+ }
+
+ lld_hold();
+ mutex_lock(&g_uld_mutex);
+ chip_list = get_hinic3_chip_list();
+ list_for_each_entry(chip_node, chip_list, node) {
+ /* detach vf first */
+ list_for_each_entry(dev, &chip_node->func_list, node)
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ detach_uld(dev, type);
+
+ list_for_each_entry(dev, &chip_node->func_list, node)
+ if (hinic3_func_type(dev->hwdev) == TYPE_PF)
+ detach_uld(dev, type);
+
+ list_for_each_entry(dev, &chip_node->func_list, node)
+ if (hinic3_func_type(dev->hwdev) == TYPE_PPF)
+ detach_uld(dev, type);
+ }
+
+ uld_info = &g_uld_info[type];
+ memset(uld_info, 0, sizeof(struct hinic3_uld_info));
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+}
+EXPORT_SYMBOL(hinic3_unregister_uld);
+
+int hinic3_attach_nic(struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev)
+ return -EINVAL;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ return attach_uld(dev, SERVICE_T_NIC, &g_uld_info[SERVICE_T_NIC]);
+}
+EXPORT_SYMBOL(hinic3_attach_nic);
+
+void hinic3_detach_nic(const struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev)
+ return;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ detach_uld(dev, SERVICE_T_NIC);
+}
+EXPORT_SYMBOL(hinic3_detach_nic);
+
+int hinic3_attach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev || type >= SERVICE_T_MAX)
+ return -EINVAL;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ return attach_uld(dev, type, &g_uld_info[type]);
+}
+EXPORT_SYMBOL(hinic3_attach_service);
+
+void hinic3_detach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev || type >= SERVICE_T_MAX)
+ return;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ detach_uld(dev, type);
+}
+EXPORT_SYMBOL(hinic3_detach_service);
+
+void hinic3_module_get(void *hwdev, enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev || type >= SERVICE_T_MAX)
+ return;
+ __module_get(THIS_MODULE);
+}
+EXPORT_SYMBOL(hinic3_module_get);
+
+void hinic3_module_put(void *hwdev, enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev || type >= SERVICE_T_MAX)
+ return;
+ module_put(THIS_MODULE);
+}
+EXPORT_SYMBOL(hinic3_module_put);
+
+static void hinic3_sync_time_to_fmw(struct hinic3_pcidev *pdev_pri)
+{
+ struct timeval tv = {0};
+ struct rtc_time rt_time = {0};
+ u64 tv_msec;
+ int err;
+
+ do_gettimeofday(&tv);
+
+ tv_msec = (u64)(tv.tv_sec * MSEC_PER_SEC + tv.tv_usec / USEC_PER_MSEC);
+ err = hinic3_sync_time(pdev_pri->hwdev, tv_msec);
+ if (err) {
+ sdk_err(&pdev_pri->pcidev->dev, "Synchronize UTC time to firmware failed, errno:%d.\n",
+ err);
+ } else {
+ rtc_time_to_tm((unsigned long)(tv.tv_sec), &rt_time);
+ sdk_info(&pdev_pri->pcidev->dev,
+ "Synchronize UTC time to firmware succeed. UTC time %d-%02d-%02d %02d:%02d:%02d.\n",
+ rt_time.tm_year + HINIC3_SYNC_YEAR_OFFSET,
+ rt_time.tm_mon + HINIC3_SYNC_MONTH_OFFSET,
+ rt_time.tm_mday, rt_time.tm_hour,
+ rt_time.tm_min, rt_time.tm_sec);
+ }
+}
+
+static void send_uld_dev_event(struct hinic3_pcidev *dev,
+ struct hinic3_event_info *event)
+{
+ enum hinic3_service_type type;
+
+ for (type = SERVICE_T_NIC; type < SERVICE_T_MAX; type++) {
+ if (test_and_set_bit(type, &dev->state)) {
+ sdk_warn(&dev->pcidev->dev, "Svc: 0x%x, event: 0x%x can't handler, %s is in detach\n",
+ event->service, event->type, s_uld_name[type]);
+ continue;
+ }
+
+ if (g_uld_info[type].event)
+ g_uld_info[type].event(&dev->lld_dev,
+ dev->uld_dev[type], event);
+ clear_bit(type, &dev->state);
+ }
+}
+
+static void send_event_to_dst_pf(struct hinic3_pcidev *dev, u16 func_id,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_pcidev *des_dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(des_dev, &dev->chip_node->func_list, node) {
+ if (dev->lld_state == HINIC3_IN_REMOVE)
+ continue;
+
+ if (hinic3_func_type(des_dev->hwdev) == TYPE_VF)
+ continue;
+
+ if (hinic3_global_func_id(des_dev->hwdev) == func_id) {
+ send_uld_dev_event(des_dev, event);
+ break;
+ }
+ }
+ lld_put();
+}
+
+static void send_event_to_all_pf(struct hinic3_pcidev *dev,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_pcidev *des_dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(des_dev, &dev->chip_node->func_list, node) {
+ if (dev->lld_state == HINIC3_IN_REMOVE)
+ continue;
+
+ if (hinic3_func_type(des_dev->hwdev) == TYPE_VF)
+ continue;
+
+ send_uld_dev_event(des_dev, event);
+ }
+ lld_put();
+}
+
+u32 hinic3_pdev_is_virtfn(struct pci_dev *pdev)
+{
+#ifdef CONFIG_PCI_IOV
+ return pdev->is_virtfn;
+#else
+ return 0;
+#endif
+}
+
+static int hinic3_get_function_enable(struct pci_dev *pdev, bool *en)
+{
+ struct pci_dev *pf_pdev = pdev->physfn;
+ struct hinic3_pcidev *pci_adapter = NULL;
+ void *pf_hwdev = NULL;
+ u16 global_func_id;
+ int err;
+
+ /* PF in host os or function in guest os, probe sdk in default */
+ if (!hinic3_pdev_is_virtfn(pdev) || !pf_pdev) {
+ *en = true;
+ return 0;
+ }
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter || !pci_adapter->hwdev) {
+ /* vf in host and pf sdk not probed */
+ return -EFAULT;
+ }
+ pf_hwdev = pci_adapter->hwdev;
+
+ err = hinic3_get_vfid_by_vfpci(NULL, pdev, &global_func_id);
+ if (err) {
+ sdk_err(&pci_adapter->pcidev->dev, "Func hinic3_get_vfid_by_vfpci fail %d \n", err);
+ return err;
+ }
+
+ err = hinic3_get_func_nic_enable(pf_hwdev, global_func_id, en);
+ if (!!err) {
+ sdk_info(&pdev->dev, "Failed to get function nic status, err %d.\n", err);
+ return err;
+ }
+
+ return 0;
+}
+
+int hinic3_set_func_probe_in_host(void *hwdev, u16 func_id, bool probe)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return -EINVAL;
+
+ if (probe)
+ set_bit(func_id, dev->func_probe_in_host);
+ else
+ clear_bit(func_id, dev->func_probe_in_host);
+
+ return 0;
+}
+
+bool hinic3_get_func_probe_in_host(void *hwdev, u16 func_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_hwdev *ppf_dev = NULL;
+ bool probed = false;
+
+ if (!hwdev)
+ return false;
+
+ down(&dev->ppf_sem);
+ ppf_dev = hinic3_get_ppf_hwdev_by_pdev(dev->pcidev_hdl);
+ if (!ppf_dev || hinic3_func_type(ppf_dev) != TYPE_PPF) {
+ up(&dev->ppf_sem);
+ return false;
+ }
+
+ probed = !!test_bit(func_id, ppf_dev->func_probe_in_host);
+ up(&dev->ppf_sem);
+
+ return probed;
+}
+
+void *hinic3_get_ppf_hwdev_by_pdev(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!pdev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ chip_node = pci_adapter->chip_node;
+ lld_dev_hold(&pci_adapter->lld_dev);
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (dev->lld_state == HINIC3_IN_REMOVE)
+ continue;
+
+ if (dev->hwdev && hinic3_func_type(dev->hwdev) == TYPE_PPF) {
+ lld_dev_put(&pci_adapter->lld_dev);
+ return dev->hwdev;
+ }
+ }
+ lld_dev_put(&pci_adapter->lld_dev);
+
+ return NULL;
+}
+
+static int hinic3_set_vf_nic_used_state(void *hwdev, u16 func_id, bool opened)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_hwdev *ppf_dev = NULL;
+
+ if (!dev || func_id >= MAX_FUNCTION_NUM)
+ return -EINVAL;
+
+ down(&dev->ppf_sem);
+ ppf_dev = hinic3_get_ppf_hwdev_by_pdev(dev->pcidev_hdl);
+ if (!ppf_dev || hinic3_func_type(ppf_dev) != TYPE_PPF) {
+ up(&dev->ppf_sem);
+ return -EINVAL;
+ }
+
+ if (opened)
+ set_bit(func_id, ppf_dev->netdev_setup_state);
+ else
+ clear_bit(func_id, ppf_dev->netdev_setup_state);
+
+ up(&dev->ppf_sem);
+
+ return 0;
+}
+
+static void set_vf_func_in_use(struct pci_dev *pdev, bool in_use)
+{
+ struct pci_dev *pf_pdev = pdev->physfn;
+ struct hinic3_pcidev *pci_adapter = NULL;
+ void *pf_hwdev = NULL;
+ u16 global_func_id;
+
+ /* only need to be set when VF is on the host */
+ if (!hinic3_pdev_is_virtfn(pdev) || !pf_pdev)
+ return;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter || !pci_adapter->hwdev)
+ return;
+
+ pf_hwdev = pci_adapter->hwdev;
+
+ global_func_id = (u16)pdev->devfn + hinic3_glb_pf_vf_offset(pf_hwdev);
+ (void)hinic3_set_vf_nic_used_state(pf_hwdev, global_func_id, in_use);
+}
+
+static int hinic3_pf_get_vf_offset_info(struct hinic3_pcidev *des_dev, u16 *vf_offset)
+{
+ int err, i;
+ struct hinic3_hw_pf_infos *pf_infos = NULL;
+ u16 pf_func_id;
+ struct hinic3_pcidev *pf_pci_adapter = NULL;
+
+ pf_pci_adapter = (hinic3_pdev_is_virtfn(des_dev->pcidev)) ? pci_get_drvdata(des_dev->pcidev->physfn) : des_dev;
+ pf_func_id = hinic3_global_func_id(pf_pci_adapter->hwdev);
+ if (pf_func_id >= CMD_MAX_MAX_PF_NUM || !vf_offset)
+ return -EINVAL;
+
+ mutex_lock(&g_vf_offset_lock);
+ if (g_vf_offset.valid == 0) {
+ pf_infos = kzalloc(sizeof(*pf_infos), GFP_KERNEL);
+ if (!pf_infos) {
+ sdk_err(&pf_pci_adapter->pcidev->dev, "Malloc pf_infos fail\n");
+ err = -ENOMEM;
+ goto err_malloc;
+ }
+
+ err = hinic3_get_hw_pf_infos(pf_pci_adapter->hwdev, pf_infos, HINIC3_CHANNEL_COMM);
+ if (err) {
+ sdk_warn(&pf_pci_adapter->pcidev->dev, "hinic3_get_hw_pf_infos failed, err: %d\n", err);
+ err = -EFAULT;
+ goto err_out;
+ }
+
+ g_vf_offset.valid = 1;
+ for (i = 0; i < CMD_MAX_MAX_PF_NUM; i++) {
+ g_vf_offset.vf_offset_from_pf[i] = pf_infos->infos[i].vf_offset;
+ }
+
+ kfree(pf_infos);
+ }
+
+ *vf_offset = g_vf_offset.vf_offset_from_pf[pf_func_id];
+
+ mutex_unlock(&g_vf_offset_lock);
+
+ return 0;
+
+err_out:
+ kfree(pf_infos);
+err_malloc:
+ mutex_unlock(&g_vf_offset_lock);
+ return err;
+}
+
+static struct pci_dev *get_vf_pdev_by_pf(struct hinic3_pcidev *des_dev,
+ u16 func_id)
+{
+ int err;
+ u16 bus_num;
+ u16 vf_start, vf_end;
+ u16 des_fn, pf_func_id, vf_offset;
+
+ vf_start = hinic3_glb_pf_vf_offset(des_dev->hwdev);
+ vf_end = vf_start + hinic3_func_max_vf(des_dev->hwdev);
+ pf_func_id = hinic3_global_func_id(des_dev->hwdev);
+ if (func_id <= vf_start || func_id > vf_end || pf_func_id >= CMD_MAX_MAX_PF_NUM)
+ return NULL;
+
+ err = hinic3_pf_get_vf_offset_info(des_dev, &vf_offset);
+ if (err) {
+ sdk_warn(&des_dev->pcidev->dev, "hinic3_pf_get_vf_offset_info failed\n");
+ return NULL;
+ }
+
+ des_fn = ((func_id - vf_start) - 1) + pf_func_id + vf_offset;
+ bus_num = des_dev->pcidev->bus->number + des_fn / BUS_MAX_DEV_NUM;
+
+ return pci_get_domain_bus_and_slot(0, bus_num, (des_fn % BUS_MAX_DEV_NUM));
+}
+
+static struct hinic3_pcidev *get_des_pci_adapter(struct hinic3_pcidev *des_dev,
+ u16 func_id)
+{
+ struct pci_dev *des_pdev = NULL;
+ u16 vf_start, vf_end;
+ bool probe_in_host = false;
+
+ if (hinic3_global_func_id(des_dev->hwdev) == func_id)
+ return des_dev;
+
+ vf_start = hinic3_glb_pf_vf_offset(des_dev->hwdev);
+ vf_end = vf_start + hinic3_func_max_vf(des_dev->hwdev);
+ if (func_id <= vf_start || func_id > vf_end)
+ return NULL;
+
+ des_pdev = get_vf_pdev_by_pf(des_dev, func_id);
+ if (!des_pdev)
+ return NULL;
+
+ pci_dev_put(des_pdev);
+
+ probe_in_host = hinic3_get_func_probe_in_host(des_dev->hwdev, func_id);
+ if (!probe_in_host)
+ return NULL;
+
+ return pci_get_drvdata(des_pdev);
+}
+
+int __set_vroce_func_state(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ u16 func_id;
+ int err;
+ u8 enable_vroce = false;
+
+ func_id = hinic3_global_func_id(pci_adapter->hwdev);
+
+ err = hinic3_get_func_vroce_enable(pci_adapter->hwdev, func_id, &enable_vroce);
+ if (0 != err) {
+ sdk_err(&pdev->dev, "Failed to get vroce state.\n");
+ return err;
+ }
+
+ mutex_lock(&g_uld_mutex);
+
+ if (!!enable_vroce) {
+ if (!g_uld_info[SERVICE_T_ROCE].probe) {
+ sdk_info(&pdev->dev, "Uld(roce_info) has not been registered!\n");
+ mutex_unlock(&g_uld_mutex);
+ return 0;
+ }
+
+ err = attach_uld(pci_adapter, SERVICE_T_ROCE, &g_uld_info[SERVICE_T_ROCE]);
+ if (0 != err) {
+ sdk_err(&pdev->dev, "Failed to initialize VROCE.\n");
+ mutex_unlock(&g_uld_mutex);
+ return err;
+ }
+ } else {
+ sdk_info(&pdev->dev, "Func %hu vroce state: disable.\n", func_id);
+ if (g_uld_info[SERVICE_T_ROCE].remove)
+ detach_uld(pci_adapter, SERVICE_T_ROCE);
+ }
+
+ mutex_unlock(&g_uld_mutex);
+
+ return 0;
+}
+
+void slave_host_mgmt_vroce_work(struct work_struct *work)
+{
+ struct hinic3_pcidev *pci_adapter =
+ container_of(work, struct hinic3_pcidev, slave_vroce_work);
+
+ __set_vroce_func_state(pci_adapter);
+}
+
+void *hinic3_get_roce_uld_by_pdev(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ return pci_adapter->uld_dev[SERVICE_T_ROCE];
+}
+
+static int __func_service_state_process(struct hinic3_pcidev *event_dev,
+ struct hinic3_pcidev *des_dev,
+ struct hinic3_mhost_nic_func_state *state, u16 cmd)
+{
+ int err = 0;
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)event_dev->hwdev;
+
+ switch (cmd) {
+ case HINIC3_MHOST_GET_VROCE_STATE:
+ state->enable = hinic3_get_roce_uld_by_pdev(des_dev->pcidev) ? 1 : 0;
+ break;
+ case HINIC3_MHOST_NIC_STATE_CHANGE:
+ sdk_info(&des_dev->pcidev->dev, "Receive nic[%u] state changed event, state: %u\n",
+ state->func_idx, state->enable);
+ if (event_dev->multi_host_mgmt_workq) {
+ queue_work(event_dev->multi_host_mgmt_workq, &des_dev->slave_nic_work);
+ } else {
+ sdk_err(&des_dev->pcidev->dev, "Can not schedule slave nic work\n");
+ err = -EFAULT;
+ }
+ break;
+ case HINIC3_MHOST_VROCE_STATE_CHANGE:
+ sdk_info(&des_dev->pcidev->dev, "Receive vroce[%u] state changed event, state: %u\n",
+ state->func_idx, state->enable);
+ queue_work_on(hisdk3_get_work_cpu_affinity(dev, WORK_TYPE_MBOX),
+ event_dev->multi_host_mgmt_workq,
+ &des_dev->slave_vroce_work);
+ break;
+ default:
+ sdk_warn(&des_dev->pcidev->dev, "Service state process with unknown cmd: %u\n", cmd);
+ err = -EFAULT;
+ break;
+ }
+
+ return err;
+}
+
+static void __multi_host_mgmt(struct hinic3_pcidev *dev,
+ struct hinic3_multi_host_mgmt_event *mhost_mgmt)
+{
+ struct hinic3_pcidev *cur_dev = NULL;
+ struct hinic3_pcidev *des_dev = NULL;
+ struct hinic3_mhost_nic_func_state *nic_state = NULL;
+ u16 sub_cmd = mhost_mgmt->sub_cmd;
+
+ switch (sub_cmd) {
+ case HINIC3_MHOST_GET_VROCE_STATE:
+ case HINIC3_MHOST_VROCE_STATE_CHANGE:
+ case HINIC3_MHOST_NIC_STATE_CHANGE:
+ nic_state = mhost_mgmt->data;
+ nic_state->status = 0;
+ if (!dev->hwdev)
+ return;
+
+ if (!IS_BMGW_SLAVE_HOST((struct hinic3_hwdev *)dev->hwdev))
+ return;
+
+ /* find func_idx pci_adapter and disable or enable nic */
+ lld_dev_hold(&dev->lld_dev);
+ list_for_each_entry(cur_dev, &dev->chip_node->func_list, node) {
+ if (cur_dev->lld_state == HINIC3_IN_REMOVE || hinic3_pdev_is_virtfn(cur_dev->pcidev))
+ continue;
+
+ des_dev = get_des_pci_adapter(cur_dev, nic_state->func_idx);
+ if (!des_dev)
+ continue;
+
+ if (__func_service_state_process(dev, des_dev, nic_state, sub_cmd))
+ nic_state->status = 1;
+ break;
+ }
+ lld_dev_put(&dev->lld_dev);
+ break;
+ default:
+ sdk_warn(&dev->pcidev->dev, "Received unknown multi-host mgmt event: %u\n",
+ mhost_mgmt->sub_cmd);
+ break;
+ }
+}
+
+static void hinic3_event_process(void *adapter, struct hinic3_event_info *event)
+{
+ struct hinic3_pcidev *dev = adapter;
+ struct hinic3_fault_event *fault = (void *)event->event_data;
+ struct hinic3_multi_host_mgmt_event *mhost_event = (void *)event->event_data;
+ u16 func_id;
+
+ switch (HINIC3_SRV_EVENT_TYPE(event->service, event->type)) {
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_MULTI_HOST_MGMT):
+ __multi_host_mgmt(dev, mhost_event);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_FAULT):
+ if (fault->fault_level == FAULT_LEVEL_SERIOUS_FLR &&
+ fault->event.chip.func_id < hinic3_max_pf_num(dev->hwdev)) {
+ func_id = fault->event.chip.func_id;
+ return send_event_to_dst_pf(adapter, func_id, event);
+ }
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_MGMT_WATCHDOG):
+ send_event_to_all_pf(adapter, event);
+ break;
+ default:
+ send_uld_dev_event(adapter, event);
+ break;
+ }
+}
+
+static void uld_def_init(struct hinic3_pcidev *pci_adapter)
+{
+ int type;
+
+ for (type = 0; type < SERVICE_T_MAX; type++) {
+ atomic_set(&pci_adapter->uld_ref_cnt[type], 0);
+ clear_bit(type, &pci_adapter->uld_state);
+ }
+
+ spin_lock_init(&pci_adapter->uld_lock);
+}
+
+static int mapping_bar(struct pci_dev *pdev,
+ struct hinic3_pcidev *pci_adapter)
+{
+ int cfg_bar;
+
+ cfg_bar = HINIC3_IS_VF_DEV(pdev) ?
+ HINIC3_VF_PCI_CFG_REG_BAR : HINIC3_PF_PCI_CFG_REG_BAR;
+
+ pci_adapter->cfg_reg_base = pci_ioremap_bar(pdev, cfg_bar);
+ if (!pci_adapter->cfg_reg_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map configuration regs\n");
+ return -ENOMEM;
+ }
+
+ pci_adapter->intr_reg_base = pci_ioremap_bar(pdev,
+ HINIC3_PCI_INTR_REG_BAR);
+ if (!pci_adapter->intr_reg_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map interrupt regs\n");
+ goto map_intr_bar_err;
+ }
+
+ if (!HINIC3_IS_VF_DEV(pdev)) {
+ pci_adapter->mgmt_reg_base =
+ pci_ioremap_bar(pdev, HINIC3_PCI_MGMT_REG_BAR);
+ if (!pci_adapter->mgmt_reg_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map mgmt regs\n");
+ goto map_mgmt_bar_err;
+ }
+ }
+
+ pci_adapter->db_base_phy = pci_resource_start(pdev, HINIC3_PCI_DB_BAR);
+ pci_adapter->db_dwqe_len = pci_resource_len(pdev, HINIC3_PCI_DB_BAR);
+ pci_adapter->db_base = pci_ioremap_bar(pdev, HINIC3_PCI_DB_BAR);
+ if (!pci_adapter->db_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map doorbell regs\n");
+ goto map_db_err;
+ }
+
+ return 0;
+
+map_db_err:
+ if (!HINIC3_IS_VF_DEV(pdev))
+ iounmap(pci_adapter->mgmt_reg_base);
+
+map_mgmt_bar_err:
+ iounmap(pci_adapter->intr_reg_base);
+
+map_intr_bar_err:
+ iounmap(pci_adapter->cfg_reg_base);
+
+ return -ENOMEM;
+}
+
+static void unmapping_bar(struct hinic3_pcidev *pci_adapter)
+{
+ iounmap(pci_adapter->db_base);
+
+ if (!HINIC3_IS_VF_DEV(pci_adapter->pcidev))
+ iounmap(pci_adapter->mgmt_reg_base);
+
+ iounmap(pci_adapter->intr_reg_base);
+ iounmap(pci_adapter->cfg_reg_base);
+}
+
+static int hinic3_pci_init(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ int err;
+
+ pci_adapter = kzalloc(sizeof(*pci_adapter), GFP_KERNEL);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev,
+ "Failed to alloc pci device adapter\n");
+ return -ENOMEM;
+ }
+ pci_adapter->pcidev = pdev;
+ mutex_init(&pci_adapter->pdev_mutex);
+
+ pci_set_drvdata(pdev, pci_adapter);
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to enable PCI device\n");
+ goto pci_enable_err;
+ }
+
+ err = pci_request_regions(pdev, HINIC3_DRV_NAME);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to request regions\n");
+ goto pci_regions_err;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+
+ pci_set_master(pdev);
+
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64)); /* 64 bit DMA mask */
+ if (err) {
+ sdk_warn(&pdev->dev, "Couldn't set 64-bit DMA mask\n");
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); /* 32 bit DMA mask */
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to set DMA mask\n");
+ goto dma_mask_err;
+ }
+ }
+
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); /* 64 bit DMA mask */
+ if (err) {
+ sdk_warn(&pdev->dev,
+ "Couldn't set 64-bit coherent DMA mask\n");
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); /* 32 bit DMA mask */
+ if (err) {
+ sdk_err(&pdev->dev,
+ "Failed to set coherent DMA mask\n");
+ goto dma_consistent_mask_err;
+ }
+ }
+
+ return 0;
+
+dma_consistent_mask_err:
+dma_mask_err:
+ pci_clear_master(pdev);
+ pci_disable_pcie_error_reporting(pdev);
+ pci_release_regions(pdev);
+
+pci_regions_err:
+ pci_disable_device(pdev);
+
+pci_enable_err:
+ pci_set_drvdata(pdev, NULL);
+ kfree(pci_adapter);
+
+ return err;
+}
+
+static void hinic3_pci_deinit(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ pci_clear_master(pdev);
+ pci_release_regions(pdev);
+ pci_disable_pcie_error_reporting(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ kfree(pci_adapter);
+}
+
+static void set_vf_load_state(struct pci_dev *pdev, struct hinic3_pcidev *pci_adapter)
+{
+ /* In bm mode, slave host will load vfs by default */
+ if (IS_BMGW_SLAVE_HOST(((struct hinic3_hwdev *)pci_adapter->hwdev)) &&
+ hinic3_func_type(pci_adapter->hwdev) != TYPE_VF)
+ hinic3_set_vf_load_state(pdev, false);
+
+ if (!disable_attach) {
+ if ((hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) &&
+ hinic3_is_bm_slave_host(pci_adapter->hwdev)) {
+ if (hinic3_func_max_vf(pci_adapter->hwdev) == 0) {
+ sdk_warn(&pdev->dev, "The sriov enabling process is skipped, vfs_num: 0.\n");
+ return;
+ }
+ hinic3_pci_sriov_enable(pdev, hinic3_func_max_vf(pci_adapter->hwdev));
+ }
+ }
+}
+
+static void hinic3_init_ppf_hwdev(struct hinic3_hwdev *hwdev)
+{
+ if (!hwdev) {
+ pr_err("[%s:%d] null hwdev pointer\n", __FILE__, __LINE__);
+ return;
+ }
+
+ hwdev->ppf_hwdev = hinic3_get_ppf_hwdev_by_pdev(hwdev->pcidev_hdl);
+ return;
+}
+
+static int set_nic_func_state(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ u16 func_id;
+ int err;
+ bool enable_nic = false;
+
+ func_id = hinic3_global_func_id(pci_adapter->hwdev);
+
+ err = hinic3_get_func_nic_enable(pci_adapter->hwdev, func_id, &enable_nic);
+ if (0 != err) {
+ sdk_err(&pdev->dev, "Failed to get nic state.\n");
+ return err;
+ }
+
+ if (!enable_nic) {
+ sdk_info(&pdev->dev, "Func %hu nic state: disable.\n", func_id);
+ detach_uld(pci_adapter, SERVICE_T_NIC);
+ return 0;
+ }
+
+ if (IS_BMGW_SLAVE_HOST((struct hinic3_hwdev *)pci_adapter->hwdev))
+ (void)hinic3_init_vf_dev_cap(pci_adapter->hwdev);
+
+ if (g_uld_info[SERVICE_T_NIC].probe) {
+ err = attach_uld(pci_adapter, SERVICE_T_NIC, &g_uld_info[SERVICE_T_NIC]);
+ if (0 != err) {
+ sdk_err(&pdev->dev, "Initialize NIC failed\n");
+ return err;
+ }
+ }
+
+ return 0;
+}
+
+static int hinic3_func_init(struct pci_dev *pdev, struct hinic3_pcidev *pci_adapter)
+{
+ struct hinic3_init_para init_para = {0};
+ bool cqm_init_en = false;
+ int err;
+
+ init_para.adapter_hdl = pci_adapter;
+ init_para.pcidev_hdl = pdev;
+ init_para.dev_hdl = &pdev->dev;
+ init_para.cfg_reg_base = pci_adapter->cfg_reg_base;
+ init_para.intr_reg_base = pci_adapter->intr_reg_base;
+ init_para.mgmt_reg_base = pci_adapter->mgmt_reg_base;
+ init_para.db_base = pci_adapter->db_base;
+ init_para.db_base_phy = pci_adapter->db_base_phy;
+ init_para.db_dwqe_len = pci_adapter->db_dwqe_len;
+ init_para.hwdev = &pci_adapter->hwdev;
+ init_para.chip_node = pci_adapter->chip_node;
+ init_para.probe_fault_level = pci_adapter->probe_fault_level;
+ err = hinic3_init_hwdev(&init_para);
+ if (err) {
+ pci_adapter->hwdev = NULL;
+ pci_adapter->probe_fault_level = init_para.probe_fault_level;
+ sdk_err(&pdev->dev, "Failed to initialize hardware device\n");
+ return -EFAULT;
+ }
+
+ cqm_init_en = hinic3_need_init_stateful_default(pci_adapter->hwdev);
+ if (cqm_init_en) {
+ err = hinic3_stateful_init(pci_adapter->hwdev);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to init stateful\n");
+ goto stateful_init_err;
+ }
+ }
+
+ pci_adapter->lld_dev.pdev = pdev;
+
+ pci_adapter->lld_dev.hwdev = pci_adapter->hwdev;
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF)
+ set_bit(HINIC3_FUNC_PERSENT, &pci_adapter->sriov_info.state);
+
+ hinic3_event_register(pci_adapter->hwdev, pci_adapter,
+ hinic3_event_process);
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF)
+ hinic3_sync_time_to_fmw(pci_adapter);
+
+ /* dbgtool init */
+ lld_lock_chip_node();
+ err = nictool_k_init(pci_adapter->hwdev, pci_adapter->chip_node);
+ if (err) {
+ lld_unlock_chip_node();
+ sdk_err(&pdev->dev, "Failed to initialize dbgtool\n");
+ goto nictool_init_err;
+ }
+ list_add_tail(&pci_adapter->node, &pci_adapter->chip_node->func_list);
+ lld_unlock_chip_node();
+
+ hinic3_init_ppf_hwdev((struct hinic3_hwdev *)pci_adapter->hwdev);
+
+ set_vf_load_state(pdev, pci_adapter);
+
+ if (!disable_attach) {
+ /* NIC is the base driver, probe it first */
+ err = set_nic_func_state(pci_adapter);
+ if (err)
+ goto set_nic_func_state_err;
+
+ attach_ulds(pci_adapter);
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) {
+ err = sysfs_create_group(&pdev->dev.kobj,
+ &hinic3_attr_group);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to create sysfs group\n");
+ goto create_sysfs_err;
+ }
+ }
+ }
+
+ return 0;
+
+create_sysfs_err:
+ detach_ulds(pci_adapter);
+
+set_nic_func_state_err:
+ lld_lock_chip_node();
+ list_del(&pci_adapter->node);
+ lld_unlock_chip_node();
+
+ wait_lld_dev_unused(pci_adapter);
+
+ lld_lock_chip_node();
+ nictool_k_uninit(pci_adapter->hwdev, pci_adapter->chip_node);
+ lld_unlock_chip_node();
+
+nictool_init_err:
+ hinic3_event_unregister(pci_adapter->hwdev);
+ if (cqm_init_en)
+ hinic3_stateful_deinit(pci_adapter->hwdev);
+stateful_init_err:
+ hinic3_free_hwdev(pci_adapter->hwdev);
+
+ return err;
+}
+
+static void hinic3_func_deinit(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ /* When the function is deinitialized, first disable the mgmt CPU from
+ * reporting events on its own initiative, then flush the mgmt work-queue.
+ */
+ hinic3_disable_mgmt_msg_report(pci_adapter->hwdev);
+
+ hinic3_flush_mgmt_workq(pci_adapter->hwdev);
+
+ lld_lock_chip_node();
+ list_del(&pci_adapter->node);
+ lld_unlock_chip_node();
+
+ detach_ulds(pci_adapter);
+
+ wait_lld_dev_unused(pci_adapter);
+
+ lld_lock_chip_node();
+ nictool_k_uninit(pci_adapter->hwdev, pci_adapter->chip_node);
+ lld_unlock_chip_node();
+
+ hinic3_event_unregister(pci_adapter->hwdev);
+
+ hinic3_free_stateful(pci_adapter->hwdev);
+
+ hinic3_free_hwdev(pci_adapter->hwdev);
+ pci_adapter->hwdev = NULL;
+}
+
+static void wait_sriov_cfg_complete(struct hinic3_pcidev *pci_adapter)
+{
+ struct hinic3_sriov_info *sriov_info;
+ unsigned long end;
+
+ sriov_info = &pci_adapter->sriov_info;
+ clear_bit(HINIC3_FUNC_PERSENT, &sriov_info->state);
+ usleep_range(9900, 10000); /* sleep 9900 us ~ 10000 us */
+
+ end = jiffies + msecs_to_jiffies(HINIC3_WAIT_SRIOV_CFG_TIMEOUT);
+ do {
+ if (!test_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state) &&
+ !test_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state))
+ return;
+
+ usleep_range(9900, 10000); /* sleep 9900 us ~ 10000 us */
+ } while (time_before(jiffies, end));
+}
+
+static bool hinic3_get_vf_nic_en_status(struct pci_dev *pdev)
+{
+ bool nic_en = false;
+ u16 global_func_id;
+ struct pci_dev *pf_pdev = NULL;
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return false;
+ }
+
+ if (pdev->is_virtfn)
+ pf_pdev = pdev->physfn;
+ else
+ return false;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return false;
+ }
+
+ if (!IS_BMGW_SLAVE_HOST((struct hinic3_hwdev *)pci_adapter->hwdev))
+ return false;
+
+ if (hinic3_get_vfid_by_vfpci(NULL, pdev, &global_func_id)) {
+ sdk_err(&pdev->dev, "Get vf id by vfpci failed\n");
+ return false;
+ }
+
+ if (hinic3_get_mhost_func_nic_enable(pci_adapter->hwdev,
+ global_func_id, &nic_en)) {
+ sdk_err(&pdev->dev, "Get function nic status failed\n");
+ return false;
+ }
+
+ sdk_info(&pdev->dev, "Func %hu %s default probe in host\n",
+ global_func_id, (nic_en) ? "enable" : "disable");
+
+ return nic_en;
+}
+
+bool hinic3_get_vf_load_state(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return false;
+ }
+
+ /* vf used in vm */
+ if (pci_is_root_bus(pdev->bus))
+ return false;
+
+ if (pdev->is_virtfn)
+ pf_pdev = pdev->physfn;
+ else
+ pf_pdev = pdev;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return false;
+ }
+
+ return !pci_adapter->disable_vf_load;
+}
+
+int hinic3_set_vf_load_state(struct pci_dev *pdev, bool vf_load_state)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return -EINVAL;
+ }
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return -EINVAL;
+ }
+
+ if (hinic3_func_type(pci_adapter->hwdev) == TYPE_VF)
+ return 0;
+
+ pci_adapter->disable_vf_load = !vf_load_state;
+ sdk_info(&pci_adapter->pcidev->dev, "Current function %s vf load in host\n",
+ vf_load_state ? "enable" : "disable");
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_vf_load_state);
+
+
+
+bool hinic3_get_vf_service_load(struct pci_dev *pdev, u16 service)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return false;
+ }
+
+ if (pdev->is_virtfn)
+ pf_pdev = pdev->physfn;
+ else
+ pf_pdev = pdev;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return false;
+ }
+
+ if (service >= SERVICE_T_MAX) {
+ sdk_err(&pdev->dev, "service_type = %u state is error\n",
+ service);
+ return false;
+ }
+
+ return !pci_adapter->disable_srv_load[service];
+}
+
+int hinic3_set_vf_service_load(struct pci_dev *pdev, u16 service,
+ bool vf_srv_load)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return -EINVAL;
+ }
+
+ if (service >= SERVICE_T_MAX) {
+ sdk_err(&pdev->dev, "service_type = %u state is error\n",
+ service);
+ return -EFAULT;
+ }
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return -EINVAL;
+ }
+
+ if (hinic3_func_type(pci_adapter->hwdev) == TYPE_VF)
+ return 0;
+
+ pci_adapter->disable_srv_load[service] = !vf_srv_load;
+ sdk_info(&pci_adapter->pcidev->dev, "Current function %s vf load in host\n",
+ vf_srv_load ? "enable" : "disable");
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_vf_service_load);
+
+static bool hinic3_is_host_vmsec_enable(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (pdev->is_virtfn) {
+ pf_pdev = pdev->physfn;
+ } else {
+ pf_pdev = pdev;
+ }
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ pr_err("Pci_adapter is null.\n");
+ return false;
+ }
+
+ /* pf/vf used in host */
+ if (IS_VM_SLAVE_HOST((struct hinic3_hwdev *)pci_adapter->hwdev) &&
+ (hinic3_func_type(pci_adapter->hwdev) == TYPE_PF) &&
+ IS_RDMA_TYPE((struct hinic3_hwdev *)pci_adapter->hwdev)) {
+ return true;
+ }
+
+ return false;
+}
+
+static int hinic3_remove_func(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ if (pci_adapter->lld_state != HINIC3_PROBE_OK) {
+ sdk_warn(&pdev->dev, "Current function don not need remove\n");
+ mutex_unlock(&pci_adapter->pdev_mutex);
+ return 0;
+ }
+ pci_adapter->lld_state = HINIC3_IN_REMOVE;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+
+ if (!(pdev->is_virtfn) && (hinic3_is_host_vmsec_enable(pdev) == true) &&
+ (hinic3_func_type((struct hinic3_hwdev *)pci_adapter->hwdev) == TYPE_PF)) {
+ cancel_delayed_work_sync(&pci_adapter->migration_probe_dwork);
+ flush_workqueue(pci_adapter->migration_probe_workq);
+ destroy_workqueue(pci_adapter->migration_probe_workq);
+ }
+
+ hinic3_detect_hw_present(pci_adapter->hwdev);
+
+ hisdk3_remove_pre_process(pci_adapter->hwdev);
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) {
+ sysfs_remove_group(&pdev->dev.kobj, &hinic3_attr_group);
+ wait_sriov_cfg_complete(pci_adapter);
+ hinic3_pci_sriov_disable(pdev);
+ }
+
+ hinic3_func_deinit(pdev);
+
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+
+ unmapping_bar(pci_adapter);
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ pci_adapter->lld_state = HINIC3_NOT_PROBE;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+
+ sdk_info(&pdev->dev, "Pcie device removed function\n");
+
+ set_vf_func_in_use(pdev, false);
+
+ return 0;
+}
+
+int hinic3_get_vfid_by_vfpci(void *hwdev, struct pci_dev *pdev, u16 *global_func_id)
+{
+ struct pci_dev *pf_pdev = NULL;
+ struct hinic3_pcidev *pci_adapter = NULL;
+ u16 pf_bus, vf_bus, vf_offset;
+ int err;
+
+ if (!pdev || !global_func_id || !hinic3_pdev_is_virtfn(pdev))
+ return -EINVAL;
+ (void)hwdev;
+ pf_pdev = pdev->physfn;
+
+ vf_bus = pdev->bus->number;
+ pf_bus = pf_pdev->bus->number;
+
+ if (pdev->vendor == HINIC3_VIRTIO_VNEDER_ID) {
+ return -EPERM;
+ }
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return -EINVAL;
+ }
+
+ err = hinic3_pf_get_vf_offset_info(pci_adapter, &vf_offset);
+ if (err) {
+ sdk_err(&pdev->dev, "Func hinic3_pf_get_vf_offset_info fail\n");
+ return -EFAULT;
+ }
+
+ *global_func_id = (u16)((vf_bus - pf_bus) * BUS_MAX_DEV_NUM) + (u16)pdev->devfn +
+ (u16)(CMD_MAX_MAX_PF_NUM - g_vf_offset.vf_offset_from_pf[0]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_vfid_by_vfpci);
+
+static void hinic3_set_vf_status_in_host(struct pci_dev *pdev, bool status)
+{
+ struct pci_dev *pf_pdev = pdev->physfn;
+ struct hinic3_pcidev *pci_adapter = NULL;
+ void *pf_hwdev = NULL;
+ void *ppf_hwdev = NULL;
+ u16 global_func_id;
+ int ret;
+
+ if (!pf_pdev)
+ return;
+
+ if (!hinic3_pdev_is_virtfn(pdev))
+ return;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ pf_hwdev = pci_adapter->hwdev;
+ ppf_hwdev = hinic3_get_ppf_hwdev_by_pdev(pf_pdev);
+ if (!pf_hwdev || !ppf_hwdev)
+ return;
+
+ ret = hinic3_get_vfid_by_vfpci(NULL, pdev, &global_func_id);
+ if (ret) {
+ sdk_err(&pci_adapter->pcidev->dev, "Func hinic3_get_vfid_by_vfpci fail %d \n", ret);
+ return;
+ }
+
+ ret = hinic3_set_func_probe_in_host(ppf_hwdev, global_func_id, status);
+ if (ret)
+ sdk_err(&pci_adapter->pcidev->dev, "Set the function probe status in host failed\n");
+}
+#ifdef CONFIG_PCI_IOV
+static bool check_pdev_type_and_state(struct pci_dev *pdev)
+{
+ if (!(pdev->is_virtfn)) {
+ return false;
+ }
+
+ if ((hinic3_get_pf_device_id(pdev) != HINIC3_DEV_ID_SDI_5_1_PF) &&
+ (hinic3_get_pf_device_id(pdev) != HINIC3_DEV_ID_SDI_5_0_PF)) {
+ return false;
+ }
+
+ if (!hinic3_get_vf_load_state(pdev)) {
+ return false;
+ }
+
+ return true;
+}
+#endif
+
+static void hinic3_remove(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ sdk_info(&pdev->dev, "Pcie device remove begin\n");
+
+ if (!pci_adapter)
+ goto out;
+#ifdef CONFIG_PCI_IOV
+ if (check_pdev_type_and_state(pdev)) {
+ goto out;
+ }
+#endif
+
+ cancel_work_sync(&pci_adapter->slave_nic_work);
+ cancel_work_sync(&pci_adapter->slave_vroce_work);
+
+ hinic3_remove_func(pci_adapter);
+
+ if (!pci_adapter->pcidev->is_virtfn &&
+ pci_adapter->multi_host_mgmt_workq)
+ destroy_workqueue(pci_adapter->multi_host_mgmt_workq);
+
+ hinic3_pci_deinit(pdev);
+ hinic3_probe_pre_unprocess(pdev);
+
+out:
+ hinic3_set_vf_status_in_host(pdev, false);
+
+ sdk_info(&pdev->dev, "Pcie device removed\n");
+}
+
+static int probe_func_param_init(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = NULL;
+
+ if (!pci_adapter)
+ return -EFAULT;
+
+ pdev = pci_adapter->pcidev;
+ if (!pdev)
+ return -EFAULT;
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ if (pci_adapter->lld_state >= HINIC3_PROBE_START) {
+ sdk_warn(&pdev->dev, "Don not probe repeat\n");
+ mutex_unlock(&pci_adapter->pdev_mutex);
+ return -EEXIST;
+ }
+ pci_adapter->lld_state = HINIC3_PROBE_START;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+
+ return 0;
+}
+
+static void hinic3_probe_success_process(struct hinic3_pcidev *pci_adapter)
+{
+ hinic3_probe_success(pci_adapter->hwdev);
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ pci_adapter->lld_state = HINIC3_PROBE_OK;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+}
+
+static int hinic3_probe_func(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ int err;
+
+ err = probe_func_param_init(pci_adapter);
+ if (err == -EEXIST)
+ return 0;
+ else if (err)
+ return err;
+
+ set_vf_func_in_use(pdev, true);
+
+ err = mapping_bar(pdev, pci_adapter);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to map bar\n");
+ goto map_bar_failed;
+ }
+
+ uld_def_init(pci_adapter);
+
+ /* if chip information of pcie function exist, add the function into chip */
+ lld_lock_chip_node();
+ err = alloc_chip_node(pci_adapter);
+ if (err) {
+ lld_unlock_chip_node();
+ sdk_err(&pdev->dev, "Failed to add new chip node to global list\n");
+ goto alloc_chip_node_fail;
+ }
+ lld_unlock_chip_node();
+
+ err = hinic3_func_init(pdev, pci_adapter);
+ if (err)
+ goto func_init_err;
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) {
+ err = hinic3_set_bdf_ctxt(pci_adapter->hwdev, pdev->bus->number,
+ PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to set BDF info to MPU\n");
+ goto set_bdf_err;
+ }
+ }
+
+ hinic3_probe_success_process(pci_adapter);
+
+ return 0;
+
+set_bdf_err:
+ hinic3_func_deinit(pdev);
+
+func_init_err:
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+
+alloc_chip_node_fail:
+ unmapping_bar(pci_adapter);
+
+map_bar_failed:
+ set_vf_func_in_use(pdev, false);
+ sdk_err(&pdev->dev, "Pcie device probe function failed\n");
+ return err;
+}
+
+void hinic3_set_func_state(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ int err;
+ bool enable_func = false;
+
+ err = hinic3_get_function_enable(pdev, &enable_func);
+ if (err) {
+ sdk_info(&pdev->dev, "Get function enable failed\n");
+ return;
+ }
+
+ sdk_info(&pdev->dev, "%s function resource start\n",
+ enable_func ? "Initialize" : "Free");
+ if (enable_func) {
+ err = hinic3_probe_func(pci_adapter);
+ if (err)
+ sdk_info(&pdev->dev, "Function probe failed\n");
+ } else {
+ hinic3_remove_func(pci_adapter);
+ }
+ if (err == 0)
+ sdk_info(&pdev->dev, "%s function resource end\n",
+ enable_func ? "Initialize" : "Free");
+}
+
+void slave_host_mgmt_work(struct work_struct *work)
+{
+ struct hinic3_pcidev *pci_adapter =
+ container_of(work, struct hinic3_pcidev, slave_nic_work);
+
+ if (hinic3_pdev_is_virtfn(pci_adapter->pcidev))
+ hinic3_set_func_state(pci_adapter);
+ else
+ set_nic_func_state(pci_adapter);
+}
+
+static int pci_adapter_assign_val(struct hinic3_pcidev **ppci_adapter,
+ struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ *ppci_adapter = pci_get_drvdata(pdev);
+ (*ppci_adapter)->disable_vf_load = disable_vf_load;
+ (*ppci_adapter)->id = *id;
+ (*ppci_adapter)->lld_state = HINIC3_NOT_PROBE;
+ (*ppci_adapter)->probe_fault_level = FAULT_LEVEL_SERIOUS_FLR;
+ lld_dev_cnt_init(*ppci_adapter);
+
+ (*ppci_adapter)->multi_host_mgmt_workq =
+ alloc_workqueue("hinic_mhost_mgmt", WQ_UNBOUND,
+ HINIC3_SLAVE_WORK_MAX_NUM);
+ if (!(*ppci_adapter)->multi_host_mgmt_workq) {
+ hinic3_pci_deinit(pdev);
+ sdk_err(&pdev->dev, "Alloc multi host mgmt workqueue failed\n");
+ return -ENOMEM;
+ }
+
+ INIT_WORK(&(*ppci_adapter)->slave_nic_work, slave_host_mgmt_work);
+ INIT_WORK(&(*ppci_adapter)->slave_vroce_work,
+ slave_host_mgmt_vroce_work);
+
+ return 0;
+}
+
+static void slave_host_vfio_probe_delay_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_pcidev *pci_adapter = container_of(delay, struct hinic3_pcidev, migration_probe_dwork);
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ int (*dev_migration_probe)(struct pci_dev *);
+ int rc;
+
+ if (hinic3_func_type((struct hinic3_hwdev *)pci_adapter->hwdev) != TYPE_PF) {
+ return;
+ }
+
+ dev_migration_probe = __symbol_get("migration_dev_migration_probe");
+ if (!(dev_migration_probe)) {
+ sdk_err(&pdev->dev,
+ "Failed to find: migration_dev_migration_probe");
+ queue_delayed_work(pci_adapter->migration_probe_workq,
+ &pci_adapter->migration_probe_dwork, WAIT_TIME * HZ);
+ } else {
+ rc = dev_migration_probe(pdev);
+ __symbol_put("migration_dev_migration_probe");
+ if (rc) {
+ sdk_err(&pdev->dev,
+				"Failed to run dev_migration_probe, rc: 0x%x, pf migrated(%d)\n",
+ rc, g_is_pf_migrated);
+ } else {
+ g_is_pf_migrated = true;
+ sdk_info(&pdev->dev,
+				"Succeeded in dev_migration_probe, pf migrated(%d)\n",
+ g_is_pf_migrated);
+ }
+ }
+
+ return;
+}
+
+struct vf_add_delaywork {
+ struct pci_dev *vf_pdev;
+ struct delayed_work migration_vf_add_dwork;
+};
+
+static void slave_host_migration_vf_add_delay_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct vf_add_delaywork *vf_add = container_of(delay, struct vf_add_delaywork, migration_vf_add_dwork);
+ struct pci_dev *vf_pdev = vf_add->vf_pdev;
+ struct pci_dev *pf_pdev = NULL;
+ int (*migration_dev_add_vf)(struct pci_dev *);
+ int ret;
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!vf_pdev) {
+ pr_err("vf pdev is null.\n");
+ goto err1;
+ }
+ if (!vf_pdev->is_virtfn) {
+ sdk_err(&vf_pdev->dev, "Pdev is not virtfn.\n");
+ goto err1;
+ }
+
+ pf_pdev = vf_pdev->physfn;
+ if (!pf_pdev) {
+ sdk_err(&vf_pdev->dev, "pf_pdev is null.\n");
+ goto err1;
+ }
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&vf_pdev->dev, "Pci_adapter is null.\n");
+ goto err1;
+ }
+
+ if (!g_is_pf_migrated) {
+ sdk_info(&vf_pdev->dev, "pf is not migrated yet, so vf continues to try again.\n");
+ goto delay_work;
+ }
+
+ migration_dev_add_vf = __symbol_get("migration_dev_add_vf");
+ if (migration_dev_add_vf) {
+ ret = migration_dev_add_vf(vf_pdev);
+ __symbol_put("migration_dev_add_vf");
+ if (ret) {
+ sdk_err(&vf_pdev->dev,
+				"vf got migration symbol, but dev add vf failed, ret: %d\n",
+ ret);
+ } else {
+ sdk_info(&vf_pdev->dev,
+				"vf got migration symbol and dev add vf succeeded\n");
+ }
+ goto err1;
+ }
+	sdk_info(&vf_pdev->dev, "pf is migrated, but vf failed to get migration symbol\n");
+
+delay_work:
+ queue_delayed_work(pci_adapter->migration_probe_workq,
+ &vf_add->migration_vf_add_dwork, WAIT_TIME * HZ);
+ return;
+
+err1:
+ kfree(vf_add);
+ return;
+}
+
+static void hinic3_probe_vf_add_dwork(struct pci_dev *pdev)
+{
+ struct pci_dev *pf_pdev = NULL;
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!hinic3_is_host_vmsec_enable(pdev)) {
+ return;
+ }
+
+#if defined(CONFIG_SP_VID_DID)
+ if ((pdev->vendor == PCI_VENDOR_ID_SPNIC) && (pdev->device == HINIC3_DEV_SDI_5_1_ID_VF)) {
+#elif defined(CONFIG_NF_VID_DID)
+ if ((pdev->vendor == PCI_VENDOR_ID_NF) && (pdev->device == NFNIC_DEV_ID_VF)) {
+#else
+ if ((pdev->vendor == PCI_VENDOR_ID_HUAWEI) && (pdev->device == HINIC3_DEV_SDI_5_0_ID_VF)) {
+#endif
+ struct vf_add_delaywork *vf_add = kmalloc(sizeof(struct vf_add_delaywork), GFP_ATOMIC);
+ if (!vf_add) {
+ sdk_info(&pdev->dev, "vf_add is null.\n");
+ return;
+ }
+ vf_add->vf_pdev = pdev;
+
+ pf_pdev = pdev->physfn;
+
+ if (!pf_pdev) {
+ sdk_info(&pdev->dev, "Vf-pf_pdev is null.\n");
+ kfree(vf_add);
+ return;
+ }
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_info(&pdev->dev, "Pci_adapter is null.\n");
+ kfree(vf_add);
+ return;
+ }
+
+ INIT_DELAYED_WORK(&vf_add->migration_vf_add_dwork,
+ slave_host_migration_vf_add_delay_work);
+
+ queue_delayed_work(pci_adapter->migration_probe_workq,
+ &vf_add->migration_vf_add_dwork,
+ WAIT_TIME * HZ);
+ }
+
+ return;
+}
+
+static int hinic3_probe_migration_dwork(struct pci_dev *pdev, struct hinic3_pcidev *pci_adapter)
+{
+ if (!hinic3_is_host_vmsec_enable(pdev)) {
+ sdk_info(&pdev->dev, "Probe_migration : hinic3_is_host_vmsec_enable is (0).\n");
+ return 0;
+ }
+
+ if (IS_VM_SLAVE_HOST((struct hinic3_hwdev *)pci_adapter->hwdev) &&
+ hinic3_func_type((struct hinic3_hwdev *)pci_adapter->hwdev) == TYPE_PF) {
+ pci_adapter->migration_probe_workq =
+ create_singlethread_workqueue("hinic3_migration_probe_delay");
+ if (!pci_adapter->migration_probe_workq) {
+ sdk_err(&pdev->dev, "Failed to create work queue:%s\n",
+ "hinic3_migration_probe_delay");
+ return -EINVAL;
+ }
+
+ INIT_DELAYED_WORK(&pci_adapter->migration_probe_dwork,
+ slave_host_vfio_probe_delay_work);
+
+ queue_delayed_work(pci_adapter->migration_probe_workq,
+ &pci_adapter->migration_probe_dwork, WAIT_TIME * HZ);
+ }
+
+ return 0;
+}
+
+static bool hinic3_os_hot_replace_allow(struct hinic3_pcidev *pci_adapter)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)pci_adapter->hwdev;
+	// hot replace must be enabled and the device must not be a VF
+ if (hinic3_func_type(hwdev) == TYPE_VF || hwdev->hot_replace_mode == HOT_REPLACE_DISABLE)
+ return false;
+
+ return true;
+}
+
+static bool hinic3_os_hot_replace_process(struct hinic3_pcidev *pci_adapter)
+{
+ struct hinic3_board_info *board_info;
+ u16 cur_pf_id = hinic3_global_func_id(pci_adapter->hwdev);
+ u8 cur_partion_id;
+ board_info = &((struct hinic3_hwdev *)(pci_adapter->hwdev))->board_info;
+ // probe to os
+ vpci_set_partition_attrs(pci_adapter->pcidev, PARTITION_DEV_EXCLUSIVE,
+ get_function_partition(cur_pf_id, board_info->port_num));
+
+ // check pf_id is in the right partition_id
+ cur_partion_id = get_partition_id();
+ if (get_function_partition(cur_pf_id, board_info->port_num) == cur_partion_id) {
+ return true;
+ }
+
+ pci_adapter->probe_fault_level = FAULT_LEVEL_SUGGESTION;
+ return false;
+}
+
+static int hinic3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ u16 probe_fault_level = FAULT_LEVEL_SERIOUS_FLR;
+ u32 device_id, function_id;
+ int err;
+
+ sdk_info(&pdev->dev, "Pcie device probe begin\n");
+#ifdef CONFIG_PCI_IOV
+ hinic3_set_vf_status_in_host(pdev, true);
+ if (check_pdev_type_and_state(pdev)) {
+		sdk_info(&pdev->dev, "VFs are not bound to hinic\n");
+ hinic3_probe_vf_add_dwork(pdev);
+ return -EINVAL;
+ }
+#endif
+ err = hinic3_probe_pre_process(pdev);
+ if (err != 0 && err != HINIC3_NOT_PROBE)
+ goto out;
+
+ if (err == HINIC3_NOT_PROBE)
+ return 0;
+
+ if (hinic3_pci_init(pdev))
+ goto pci_init_err;
+
+ if (pci_adapter_assign_val(&pci_adapter, pdev, id))
+		goto alloc_queue_err;
+
+ if (pdev->is_virtfn && (!hinic3_get_vf_load_state(pdev)) &&
+ (!hinic3_get_vf_nic_en_status(pdev))) {
+ sdk_info(&pdev->dev, "VF device disable load in host\n");
+ return 0;
+ }
+
+ if (hinic3_probe_func(pci_adapter))
+ goto hinic3_probe_func_fail;
+
+ if (hinic3_os_hot_replace_allow(pci_adapter)) {
+ if (!hinic3_os_hot_replace_process(pci_adapter)) {
+ device_id = PCI_SLOT(pdev->devfn);
+ function_id = PCI_FUNC(pdev->devfn);
+ sdk_info(&pdev->dev,
+ "os hot replace: skip function %d:%d for partition %d",
+ device_id, function_id, get_partition_id());
+			goto os_hot_replace_not_allow;
+ }
+ }
+
+ if (hinic3_probe_migration_dwork(pdev, pci_adapter))
+ goto hinic3_probe_func_fail;
+
+ sdk_info(&pdev->dev, "Pcie device probed\n");
+ return 0;
+
+os_hot_replace_not_allow:
+ hinic3_func_deinit(pdev);
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+ unmapping_bar(pci_adapter);
+ set_vf_func_in_use(pdev, false);
+
+hinic3_probe_func_fail:
+ destroy_workqueue(pci_adapter->multi_host_mgmt_workq);
+ cancel_work_sync(&pci_adapter->slave_nic_work);
+ cancel_work_sync(&pci_adapter->slave_vroce_work);
+alloc_queue_err:
+ probe_fault_level = pci_adapter->probe_fault_level;
+ hinic3_pci_deinit(pdev);
+pci_init_err:
+ hinic3_probe_pre_unprocess(pdev);
+
+out:
+ hinic3_probe_fault_process(pdev, probe_fault_level);
+ sdk_err(&pdev->dev, "Pcie device probe failed\n");
+ return err;
+}
+
+static int hinic3_get_pf_info(struct pci_dev *pdev, u16 service,
+ struct hinic3_hw_pf_infos **pf_infos)
+{
+ struct hinic3_pcidev *dev = pci_get_drvdata(pdev);
+ int err;
+
+ if (service >= SERVICE_T_MAX) {
+		sdk_err(&pdev->dev, "Current vf does not support setting service_type = %u state in host\n",
+ service);
+ return -EFAULT;
+ }
+
+ *pf_infos = kzalloc(sizeof(struct hinic3_hw_pf_infos), GFP_KERNEL);
+ if (*pf_infos == NULL) {
+ sdk_err(&pdev->dev, "pf_infos kzalloc failed\n");
+ return -EFAULT;
+ }
+ err = hinic3_get_hw_pf_infos(dev->hwdev, *pf_infos, HINIC3_CHANNEL_COMM);
+ if (err) {
+ kfree(*pf_infos);
+		sdk_err(&pdev->dev, "Get chip pf info failed, ret %d\n", err);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_func_en(struct pci_dev *des_pdev, struct hinic3_pcidev *dst_dev,
+ bool en, u16 vf_func_id)
+{
+ int err;
+
+ mutex_lock(&dst_dev->pdev_mutex);
+ /* unload invalid vf func id */
+ if (!en && vf_func_id != hinic3_global_func_id(dst_dev->hwdev) &&
+ !strcmp(des_pdev->driver->name, HINIC3_DRV_NAME)) {
+ pr_err("dst_dev func id:%u, vf_func_id:%u\n",
+ hinic3_global_func_id(dst_dev->hwdev), vf_func_id);
+ mutex_unlock(&dst_dev->pdev_mutex);
+ return -EFAULT;
+ }
+
+ if (!en && dst_dev->lld_state == HINIC3_PROBE_OK) {
+ mutex_unlock(&dst_dev->pdev_mutex);
+ hinic3_remove_func(dst_dev);
+ } else if (en && dst_dev->lld_state == HINIC3_NOT_PROBE) {
+ mutex_unlock(&dst_dev->pdev_mutex);
+ err = hinic3_probe_func(dst_dev);
+ if (err)
+ return -EFAULT;
+ } else {
+ mutex_unlock(&dst_dev->pdev_mutex);
+ }
+
+ return 0;
+}
+
+static int get_vf_service_state_param(struct pci_dev *pdev, struct hinic3_pcidev **dev_ptr,
+ u16 service, struct hinic3_hw_pf_infos **pf_infos)
+{
+ int err;
+
+ if (!pdev)
+ return -EINVAL;
+
+ *dev_ptr = pci_get_drvdata(pdev);
+ if (!(*dev_ptr))
+ return -EINVAL;
+
+ err = hinic3_get_pf_info(pdev, service, pf_infos);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int hinic3_dst_pdev_valid(struct hinic3_pcidev *dst_dev, struct pci_dev **des_pdev_ptr,
+ u16 vf_devfn, bool en)
+{
+ u16 bus;
+
+ bus = dst_dev->pcidev->bus->number + vf_devfn / BUS_MAX_DEV_NUM;
+ *des_pdev_ptr = pci_get_domain_bus_and_slot(pci_domain_nr(dst_dev->pcidev->bus),
+ bus, vf_devfn % BUS_MAX_DEV_NUM);
+ if (!(*des_pdev_ptr)) {
+ pr_err("des_pdev is NULL\n");
+ return -EFAULT;
+ }
+
+ if ((*des_pdev_ptr)->driver == NULL) {
+ pr_err("des_pdev_ptr->driver is NULL\n");
+ return -EFAULT;
+ }
+
+	/* OVS SR-IOV hw scenario: return an error when the vf is bound to vf_io. */
+ if ((!en && strcmp((*des_pdev_ptr)->driver->name, HINIC3_DRV_NAME))) {
+ pr_err("vf bind driver:%s\n", (*des_pdev_ptr)->driver->name);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
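+/* Entries in the chip function list that are themselves VFs are skipped;
+ * for a PF entry, the requested vf_func_id must fall inside the range
+ * [glb_pf_vf_offset + 1, glb_pf_vf_offset + max_vf] owned by that PF.
+ */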
+static int parameter_is_unexpected(struct hinic3_pcidev *dst_dev, u16 *func_id, u16 *vf_start,
+ u16 *vf_end, u16 vf_func_id)
+{
+ if (hinic3_func_type(dst_dev->hwdev) == TYPE_VF)
+ return -EPERM;
+
+ *func_id = hinic3_global_func_id(dst_dev->hwdev);
+ *vf_start = hinic3_glb_pf_vf_offset(dst_dev->hwdev) + 1;
+ *vf_end = *vf_start + hinic3_func_max_vf(dst_dev->hwdev);
+ if (vf_func_id < *vf_start || vf_func_id > *vf_end)
+ return -EPERM;
+
+ return 0;
+}
+
+int hinic3_set_vf_service_state(struct pci_dev *pdev, u16 vf_func_id, u16 service, bool en)
+{
+ struct hinic3_hw_pf_infos *pf_infos = NULL;
+ struct hinic3_pcidev *dev = NULL, *dst_dev = NULL;
+ struct pci_dev *des_pdev = NULL;
+ u16 vf_start, vf_end, vf_devfn, func_id;
+ int err;
+ bool find_dst_dev = false;
+
+ err = get_vf_service_state_param(pdev, &dev, service, &pf_infos);
+ if (err)
+ return err;
+
+ lld_hold();
+ list_for_each_entry(dst_dev, &dev->chip_node->func_list, node) {
+		if (parameter_is_unexpected(dst_dev, &func_id, &vf_start, &vf_end, vf_func_id) != 0)
+ continue;
+
+ vf_devfn = pf_infos->infos[func_id].vf_offset + (vf_func_id - vf_start) +
+ (u16)dst_dev->pcidev->devfn;
+ err = hinic3_dst_pdev_valid(dst_dev, &des_pdev, vf_devfn, en);
+ if (err) {
+ sdk_err(&pdev->dev, "Can not get vf func_id %u from pf %u\n",
+ vf_func_id, func_id);
+ lld_put();
+ goto free_pf_info;
+ }
+
+ dst_dev = pci_get_drvdata(des_pdev);
+		/* When enabling the vf, return ok if the vf is bound to vf-io */
+ if (strcmp(des_pdev->driver->name, HINIC3_DRV_NAME) ||
+ !dst_dev || (!en && dst_dev->lld_state != HINIC3_PROBE_OK) ||
+ (en && dst_dev->lld_state != HINIC3_NOT_PROBE)) {
+ lld_put();
+ goto free_pf_info;
+ }
+
+ if (en)
+ pci_dev_put(des_pdev);
+ find_dst_dev = true;
+ break;
+ }
+ lld_put();
+
+ if (!find_dst_dev) {
+ err = -EFAULT;
+		sdk_err(&pdev->dev, "Invalid parameter vf_id %u\n", vf_func_id);
+ goto free_pf_info;
+ }
+
+ err = hinic3_set_func_en(des_pdev, dst_dev, en, vf_func_id);
+
+free_pf_info:
+ kfree(pf_infos);
+ return err;
+}
+EXPORT_SYMBOL(hinic3_set_vf_service_state);
+
+static const struct pci_device_id hinic3_pci_table[] = {
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SPU), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_STANDARD), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SDI_5_1_PF), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SDI_5_0_PF), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_DPU_PF), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_SDI_5_1_ID_VF), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_VF), 0},
+ {0, 0}
+};
+
+MODULE_DEVICE_TABLE(pci, hinic3_pci_table);
+
+/**
+ * hinic3_io_error_detected - called when PCI error is detected
+ * @pdev: Pointer to PCI device
+ * @state: The current pci connection state
+ *
+ * This function is called after a PCI bus error affecting
+ * this device has been detected.
+ *
+ * Since we only need error detection, not error handling, we always
+ * return PCI_ERS_RESULT_CAN_RECOVER to tell the AER driver that we
+ * don't need a reset (error handling).
+ */
+static pci_ers_result_t hinic3_io_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ sdk_err(&pdev->dev,
+ "Uncorrectable error detected, log and cleanup error status: 0x%08x\n",
+ state);
+
+ pci_cleanup_aer_uncorrect_error_status(pdev);
+ pci_adapter = pci_get_drvdata(pdev);
+ if (pci_adapter)
+ hinic3_record_pcie_error(pci_adapter->hwdev);
+
+ return PCI_ERS_RESULT_CAN_RECOVER;
+}
+
+static void hinic3_timer_disable(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ if (hinic3_get_stateful_enable(hwdev) && hinic3_get_timer_enable(hwdev))
+ (void)hinic3_func_tmr_bitmap_set(hwdev, hinic3_global_func_id(hwdev), false);
+
+ return;
+}
+
+static void hinic3_shutdown(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ sdk_info(&pdev->dev, "Shutdown device\n");
+
+ if (pci_adapter) {
+ hinic3_timer_disable(pci_adapter->hwdev);
+ hinic3_shutdown_hwdev(pci_adapter->hwdev);
+ }
+
+ pci_disable_device(pdev);
+
+ if (pci_adapter)
+ hinic3_set_api_stop(pci_adapter->hwdev);
+}
+
+#ifdef HAVE_RHEL6_SRIOV_CONFIGURE
+static struct pci_driver_rh hinic3_driver_rh = {
+ .sriov_configure = hinic3_pci_sriov_configure,
+};
+#endif
+
+/* Because we only need error detection, not error handling, the error_detected
+ * callback alone is enough.
+ */
+static struct pci_error_handlers hinic3_err_handler = {
+ .error_detected = hinic3_io_error_detected,
+};
+
+static struct pci_driver hinic3_driver = {
+ .name = HINIC3_DRV_NAME,
+ .id_table = hinic3_pci_table,
+ .probe = hinic3_probe,
+ .remove = hinic3_remove,
+ .shutdown = hinic3_shutdown,
+#ifdef CONFIG_PARTITION_DEVICE
+ .driver.probe_concurrency = true,
+#endif
+#if defined(HAVE_SRIOV_CONFIGURE)
+ .sriov_configure = hinic3_pci_sriov_configure,
+#elif defined(HAVE_RHEL6_SRIOV_CONFIGURE)
+ .rh_reserved = &hinic3_driver_rh,
+#endif
+ .err_handler = &hinic3_err_handler
+};
+
+int hinic3_lld_init(void)
+{
+ int err;
+
+ pr_info("%s - version %s\n", HINIC3_DRV_DESC, HINIC3_DRV_VERSION);
+ memset(g_uld_info, 0, sizeof(g_uld_info));
+
+ hinic3_lld_lock_init();
+ hinic3_uld_lock_init();
+
+ err = hinic3_module_pre_init();
+ if (err) {
+ pr_err("Init custom failed\n");
+ goto module_pre_init_err;
+ }
+
+ err = pci_register_driver(&hinic3_driver);
+ if (err) {
+ pr_err("sdk3 pci register driver failed\n");
+ goto register_pci_driver_err;
+ }
+
+ return 0;
+
+register_pci_driver_err:
+ hinic3_module_post_exit();
+module_pre_init_err:
+ return err;
+}
+
+void hinic3_lld_exit(void)
+{
+ pci_unregister_driver(&hinic3_driver);
+
+ hinic3_module_post_exit();
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c
new file mode 100644
index 0000000..5398a34
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c
@@ -0,0 +1,1884 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/types.h>
+#include <linux/semaphore.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_eqs.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_common.h"
+#include "hinic3_mbox.h"
+
+#define HINIC3_MBOX_USEC_50 50
+
+#define HINIC3_MBOX_INT_DST_AEQN_SHIFT 10
+#define HINIC3_MBOX_INT_SRC_RESP_AEQN_SHIFT 12
+#define HINIC3_MBOX_INT_STAT_DMA_SHIFT 14
+/* The size of data to be sent (in units of 4 bytes) */
+#define HINIC3_MBOX_INT_TX_SIZE_SHIFT 20
+/* SO_RO (strong order, relaxed order) */
+#define HINIC3_MBOX_INT_STAT_DMA_SO_RO_SHIFT 25
+#define HINIC3_MBOX_INT_WB_EN_SHIFT 28
+
+#define HINIC3_MBOX_INT_DST_AEQN_MASK 0x3
+#define HINIC3_MBOX_INT_SRC_RESP_AEQN_MASK 0x3
+#define HINIC3_MBOX_INT_STAT_DMA_MASK 0x3F
+#define HINIC3_MBOX_INT_TX_SIZE_MASK 0x1F
+#define HINIC3_MBOX_INT_STAT_DMA_SO_RO_MASK 0x3
+#define HINIC3_MBOX_INT_WB_EN_MASK 0x1
+
+#define HINIC3_MBOX_INT_SET(val, field) \
+ (((val) & HINIC3_MBOX_INT_##field##_MASK) << \
+ HINIC3_MBOX_INT_##field##_SHIFT)
+
+enum hinic3_mbox_tx_status {
+ TX_NOT_DONE = 1,
+};
+
+#define HINIC3_MBOX_CTRL_TRIGGER_AEQE_SHIFT 0
+/* Specifies the send state of the message data.
+ * 0 - Tx request is done;
+ * 1 - Tx request is in progress.
+ */
+#define HINIC3_MBOX_CTRL_TX_STATUS_SHIFT 1
+#define HINIC3_MBOX_CTRL_DST_FUNC_SHIFT 16
+
+#define HINIC3_MBOX_CTRL_TRIGGER_AEQE_MASK 0x1
+#define HINIC3_MBOX_CTRL_TX_STATUS_MASK 0x1
+#define HINIC3_MBOX_CTRL_DST_FUNC_MASK 0x1FFF
+
+#define HINIC3_MBOX_CTRL_SET(val, field) \
+ (((val) & HINIC3_MBOX_CTRL_##field##_MASK) << \
+ HINIC3_MBOX_CTRL_##field##_SHIFT)
+
+#define MBOX_SEGLEN_MASK \
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_HEADER_SEG_LEN_MASK, SEG_LEN)
+
+#define MBOX_MSG_WAIT_ONCE_TIME_US 10
+#define MBOX_MSG_POLLING_TIMEOUT 8000
+#define HINIC3_MBOX_COMP_TIME 40000U
+
+/* MBOX size is 64B, 8B for mbox_header, 8B reserved */
+#define MBOX_SEG_LEN 48
+#define MBOX_SEG_LEN_ALIGN 4
+#define MBOX_WB_STATUS_LEN 16UL
+
+#define SEQ_ID_START_VAL 0
+#define SEQ_ID_MAX_VAL 42
+#define MBOX_LAST_SEG_MAX_LEN (MBOX_MAX_BUF_SZ - \
+ SEQ_ID_MAX_VAL * MBOX_SEG_LEN)
+
+/* mbox write back status is 16B, only the first 4B are used */
+#define MBOX_WB_STATUS_ERRCODE_MASK 0xFFFF
+#define MBOX_WB_STATUS_MASK 0xFF
+#define MBOX_WB_ERROR_CODE_MASK 0xFF00
+#define MBOX_WB_STATUS_FINISHED_SUCCESS 0xFF
+#define MBOX_WB_STATUS_FINISHED_WITH_ERR 0xFE
+#define MBOX_WB_STATUS_NOT_FINISHED 0x00
+
+#define MBOX_STATUS_FINISHED(wb) \
+ (((wb) & MBOX_WB_STATUS_MASK) != MBOX_WB_STATUS_NOT_FINISHED)
+#define MBOX_STATUS_SUCCESS(wb) \
+ (((wb) & MBOX_WB_STATUS_MASK) == MBOX_WB_STATUS_FINISHED_SUCCESS)
+#define MBOX_STATUS_ERRCODE(wb) \
+ ((wb) & MBOX_WB_ERROR_CODE_MASK)
+
+#define DST_AEQ_IDX_DEFAULT_VAL 0
+#define SRC_AEQ_IDX_DEFAULT_VAL 0
+#define NO_DMA_ATTRIBUTE_VAL 0
+
+#define MBOX_MSG_NO_DATA_LEN 1
+
+#define MBOX_BODY_FROM_HDR(header) ((u8 *)(header) + MBOX_HEADER_SZ)
+#define MBOX_AREA(hwif) \
+ ((hwif)->cfg_regs_base + HINIC3_FUNC_CSR_MAILBOX_DATA_OFF)
+
+#define MBOX_DMA_MSG_QUEUE_DEPTH 32
+
+#define MBOX_MQ_CI_OFFSET (HINIC3_CFG_REGS_FLAG + HINIC3_FUNC_CSR_MAILBOX_DATA_OFF + \
+ MBOX_HEADER_SZ + MBOX_SEG_LEN)
+
+#define MBOX_MQ_SYNC_CI_SHIFT 0
+#define MBOX_MQ_ASYNC_CI_SHIFT 8
+
+#define MBOX_MQ_SYNC_CI_MASK 0xFF
+#define MBOX_MQ_ASYNC_CI_MASK 0xFF
+
+#define MBOX_MQ_CI_SET(val, field) \
+ (((val) & MBOX_MQ_##field##_CI_MASK) << MBOX_MQ_##field##_CI_SHIFT)
+#define MBOX_MQ_CI_GET(val, field) \
+ (((val) >> MBOX_MQ_##field##_CI_SHIFT) & MBOX_MQ_##field##_CI_MASK)
+#define MBOX_MQ_CI_CLEAR(val, field) \
+ ((val) & (~(MBOX_MQ_##field##_CI_MASK << MBOX_MQ_##field##_CI_SHIFT)))
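+/* The CI register at MBOX_MQ_CI_OFFSET packs both consumer indexes:
+ * bits 7:0 hold the sync queue CI and bits 15:8 the async queue CI.
+ */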
+
+#define IS_PF_OR_PPF_SRC(hwdev, src_func_idx) \
+ ((src_func_idx) < HINIC3_MAX_PF_NUM(hwdev))
+
+#define MBOX_RESPONSE_ERROR 0x1
+#define MBOX_MSG_ID_MASK 0xF
+#define MBOX_MSG_ID(func_to_func) ((func_to_func)->send_msg_id)
+#define MBOX_MSG_ID_INC(func_to_func) \
+ (MBOX_MSG_ID(func_to_func) = \
+ (MBOX_MSG_ID(func_to_func) + 1) & MBOX_MSG_ID_MASK)
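+/* send_msg_id is a 4-bit rolling counter; each new request takes the next
+ * id and resp_mbox_handler() matches the response against it.
+ */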
+
+/* max number of messages waiting to be processed for one function */
+#define HINIC3_MAX_MSG_CNT_TO_PROCESS 10
+
+#define MBOX_MSG_CHANNEL_STOP(func_to_func) \
+ ((((func_to_func)->lock_channel_en) && \
+ test_bit((func_to_func)->cur_msg_channel, \
+ &(func_to_func)->channel_stop)) ? true : false)
+
+enum mbox_ordering_type {
+ STRONG_ORDER,
+};
+
+enum mbox_write_back_type {
+ WRITE_BACK = 1,
+};
+
+enum mbox_aeq_trig_type {
+ NOT_TRIGGER,
+ TRIGGER,
+};
+
+static int send_mbox_msg(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ void *msg, u16 msg_len, u16 dst_func,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info);
+
+static struct hinic3_msg_desc *get_mbox_msg_desc(struct hinic3_mbox *func_to_func,
+ u64 dir, u64 src_func_id);
+
+/**
+ * hinic3_register_ppf_mbox_cb - register mbox callback for ppf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ * @pri_handle: specific mod's private data that will be used in the callback
+ * @callback: callback function
+ * Return: 0 - success, negative - failure
+ */
+int hinic3_register_ppf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_ppf_mbox_cb callback)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return -EFAULT;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ func_to_func->ppf_mbox_cb[mod] = callback;
+ func_to_func->ppf_mbox_data[mod] = pri_handle;
+
+ set_bit(HINIC3_PPF_MBOX_CB_REG, &func_to_func->ppf_mbox_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_ppf_mbox_cb);
+
+/**
+ * hinic3_register_pf_mbox_cb - register mbox callback for pf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ * @pri_handle: specific mod's private data that will be used in the callback
+ * @callback: callback function
+ * Return: 0 - success, negative - failure
+ */
+int hinic3_register_pf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_pf_mbox_cb callback)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return -EFAULT;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ func_to_func->pf_mbox_cb[mod] = callback;
+ func_to_func->pf_mbox_data[mod] = pri_handle;
+
+ set_bit(HINIC3_PF_MBOX_CB_REG, &func_to_func->pf_mbox_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_pf_mbox_cb);
+
+/**
+ * hinic3_register_vf_mbox_cb - register mbox callback for vf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ * @pri_handle: specific mod's private data that will be used in the callback
+ * @callback: callback function
+ * Return: 0 - success, negative - failure
+ */
+int hinic3_register_vf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_vf_mbox_cb callback)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return -EFAULT;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ func_to_func->vf_mbox_cb[mod] = callback;
+ func_to_func->vf_mbox_data[mod] = pri_handle;
+
+ set_bit(HINIC3_VF_MBOX_CB_REG, &func_to_func->vf_mbox_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_vf_mbox_cb);
+
+/**
+ * hinic3_unregister_ppf_mbox_cb - unregister the mbox callback for ppf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_ppf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_PPF_MBOX_CB_REG,
+ &func_to_func->ppf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_PPF_MBOX_CB_RUNNING,
+ &func_to_func->ppf_mbox_cb_state[mod]))
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->ppf_mbox_data[mod] = NULL;
+ func_to_func->ppf_mbox_cb[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_ppf_mbox_cb);
+
+/**
+ * hinic3_unregister_pf_mbox_cb - unregister the mbox callback for pf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_pf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_PF_MBOX_CB_REG, &func_to_func->pf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_PF_MBOX_CB_RUNNING, &func_to_func->pf_mbox_cb_state[mod]) != 0)
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->pf_mbox_data[mod] = NULL;
+ func_to_func->pf_mbox_cb[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_pf_mbox_cb);
+
+/**
+ * hinic3_unregister_vf_mbox_cb - unregister the mbox callback for vf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_vf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_VF_MBOX_CB_REG, &func_to_func->vf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_VF_MBOX_CB_RUNNING, &func_to_func->vf_mbox_cb_state[mod]) != 0)
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->vf_mbox_data[mod] = NULL;
+ func_to_func->vf_mbox_cb[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_vf_mbox_cb);
+
+/**
+ * hinic3_unregister_ppf_to_pf_mbox_cb - unregister the mbox callback for pf from ppf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_ppf_to_pf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_PPF_TO_PF_MBOX_CB_REG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]))
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->pf_recv_ppf_mbox_data[mod] = NULL;
+ func_to_func->pf_recv_ppf_mbox_cb[mod] = NULL;
+}
+
+static int recv_vf_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ void *buf_out, u16 *out_size)
+{
+ hinic3_vf_mbox_cb cb;
+ int ret;
+
+ if (recv_mbox->mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %hhu\n",
+ recv_mbox->mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_VF_MBOX_CB_RUNNING,
+ &func_to_func->vf_mbox_cb_state[recv_mbox->mod]);
+
+ cb = func_to_func->vf_mbox_cb[recv_mbox->mod];
+ if (cb && test_bit(HINIC3_VF_MBOX_CB_REG,
+ &func_to_func->vf_mbox_cb_state[recv_mbox->mod])) {
+ ret = cb(func_to_func->vf_mbox_data[recv_mbox->mod],
+ recv_mbox->cmd, recv_mbox->msg,
+ recv_mbox->msg_len, buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "VF mbox cb is not registered\n");
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_VF_MBOX_CB_RUNNING,
+ &func_to_func->vf_mbox_cb_state[recv_mbox->mod]);
+
+ return ret;
+}
+
+static int recv_pf_from_ppf_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ void *buf_out, u16 *out_size)
+{
+ hinic3_pf_recv_from_ppf_mbox_cb cb;
+ enum hinic3_mod_type mod = recv_mbox->mod;
+ int ret;
+
+ if (mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %d\n",
+ mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]);
+
+ cb = func_to_func->pf_recv_ppf_mbox_cb[mod];
+ if (cb && test_bit(HINIC3_PPF_TO_PF_MBOX_CB_REG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]) != 0) {
+ ret = cb(func_to_func->pf_recv_ppf_mbox_data[mod],
+ recv_mbox->cmd, recv_mbox->msg, recv_mbox->msg_len,
+ buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "PF receive ppf mailbox callback is not registered\n");
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]);
+
+ return ret;
+}
+
+static int recv_ppf_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ u8 pf_id, void *buf_out, u16 *out_size)
+{
+ hinic3_ppf_mbox_cb cb;
+ u16 vf_id = 0;
+ int ret;
+
+ if (recv_mbox->mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %hhu\n",
+ recv_mbox->mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_PPF_MBOX_CB_RUNNING,
+ &func_to_func->ppf_mbox_cb_state[recv_mbox->mod]);
+
+ cb = func_to_func->ppf_mbox_cb[recv_mbox->mod];
+ if (cb && test_bit(HINIC3_PPF_MBOX_CB_REG,
+ &func_to_func->ppf_mbox_cb_state[recv_mbox->mod])) {
+ ret = cb(func_to_func->ppf_mbox_data[recv_mbox->mod],
+ pf_id, vf_id, recv_mbox->cmd, recv_mbox->msg,
+ recv_mbox->msg_len, buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "PPF mbox cb is not registered, mod = %hhu\n",
+ recv_mbox->mod);
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_PPF_MBOX_CB_RUNNING,
+ &func_to_func->ppf_mbox_cb_state[recv_mbox->mod]);
+
+ return ret;
+}
+
+static int recv_pf_from_vf_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ u16 src_func_idx, void *buf_out,
+ u16 *out_size)
+{
+ hinic3_pf_mbox_cb cb;
+ u16 vf_id = 0;
+ int ret;
+
+ if (recv_mbox->mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %hhu\n",
+ recv_mbox->mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_PF_MBOX_CB_RUNNING,
+ &func_to_func->pf_mbox_cb_state[recv_mbox->mod]);
+
+ cb = func_to_func->pf_mbox_cb[recv_mbox->mod];
+ if (cb && test_bit(HINIC3_PF_MBOX_CB_REG,
+ &func_to_func->pf_mbox_cb_state[recv_mbox->mod]) != 0) {
+ vf_id = src_func_idx -
+ hinic3_glb_pf_vf_offset(func_to_func->hwdev);
+ ret = cb(func_to_func->pf_mbox_data[recv_mbox->mod],
+ vf_id, recv_mbox->cmd, recv_mbox->msg,
+ recv_mbox->msg_len, buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "PF mbox mod(0x%x) cb is not registered\n",
+ recv_mbox->mod);
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_PF_MBOX_CB_RUNNING,
+ &func_to_func->pf_mbox_cb_state[recv_mbox->mod]);
+
+ return ret;
+}
+
+static void response_for_recv_func_mbox(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ int err, u16 out_size, u16 src_func_idx)
+{
+ struct mbox_msg_info msg_info = {0};
+ u16 size = out_size;
+
+ msg_info.msg_id = recv_mbox->msg_id;
+ if (err)
+ msg_info.status = HINIC3_MBOX_PF_SEND_ERR;
+
+	/* if there is no response data or an error occurred, set the size to 1 */
+ if (!out_size || err)
+ size = MBOX_MSG_NO_DATA_LEN;
+
+ if (size > HINIC3_MBOX_DATA_SIZE) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Response msg len(%d) exceed limit(%d)\n",
+ size, HINIC3_MBOX_DATA_SIZE);
+ size = HINIC3_MBOX_DATA_SIZE;
+ }
+
+ send_mbox_msg(func_to_func, recv_mbox->mod, recv_mbox->cmd,
+ recv_mbox->resp_buff, size, src_func_idx,
+ HINIC3_MSG_RESPONSE, HINIC3_MSG_NO_ACK, &msg_info);
+}
+
+static void recv_func_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox)
+{
+ struct hinic3_hwdev *dev = func_to_func->hwdev;
+ void *buf_out = recv_mbox->resp_buff;
+ u16 src_func_idx = recv_mbox->src_func_idx;
+ u16 out_size = HINIC3_MBOX_DATA_SIZE;
+ int err = 0;
+
+ if (HINIC3_IS_VF(dev)) {
+ err = recv_vf_mbox_handler(func_to_func, recv_mbox, buf_out,
+ &out_size);
+ } else { /* pf/ppf process */
+ if (IS_PF_OR_PPF_SRC(dev, src_func_idx)) {
+ if (HINIC3_IS_PPF(dev)) {
+ err = recv_ppf_mbox_handler(func_to_func,
+ recv_mbox,
+ (u8)src_func_idx,
+ buf_out, &out_size);
+ if (err)
+ goto out;
+ } else {
+ err = recv_pf_from_ppf_handler(func_to_func,
+ recv_mbox,
+ buf_out,
+ &out_size);
+ if (err)
+ goto out;
+			}
+		} else {
+			/* The source is neither PF nor PPF, so it is from a VF */
+ err = recv_pf_from_vf_mbox_handler(func_to_func,
+ recv_mbox,
+ src_func_idx,
+ buf_out, &out_size);
+ }
+ }
+
+out:
+ if (recv_mbox->ack_type == HINIC3_MSG_ACK)
+ response_for_recv_func_mbox(func_to_func, recv_mbox, err,
+ out_size, src_func_idx);
+}
+
+static struct hinic3_recv_mbox *alloc_recv_mbox(void)
+{
+ struct hinic3_recv_mbox *recv_msg = NULL;
+
+ recv_msg = kzalloc(sizeof(*recv_msg), GFP_KERNEL);
+ if (!recv_msg)
+ return NULL;
+
+ recv_msg->msg = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!recv_msg->msg)
+ goto alloc_msg_err;
+
+ recv_msg->resp_buff = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!recv_msg->resp_buff)
+ goto alloc_resp_bff_err;
+
+ return recv_msg;
+
+alloc_resp_bff_err:
+ kfree(recv_msg->msg);
+
+alloc_msg_err:
+ kfree(recv_msg);
+
+ return NULL;
+}
+
+static void free_recv_mbox(struct hinic3_recv_mbox *recv_msg)
+{
+ kfree(recv_msg->resp_buff);
+ kfree(recv_msg->msg);
+ kfree(recv_msg);
+ recv_msg = NULL;
+}
+
+static void recv_func_mbox_work_handler(struct work_struct *work)
+{
+ struct hinic3_mbox_work *mbox_work =
+ container_of(work, struct hinic3_mbox_work, work);
+
+ recv_func_mbox_handler(mbox_work->func_to_func, mbox_work->recv_mbox);
+
+ atomic_dec(&mbox_work->msg_ch->recv_msg_cnt);
+
+ destroy_work(&mbox_work->work);
+
+ free_recv_mbox(mbox_work->recv_mbox);
+ kfree(mbox_work);
+}
+
+static void resp_mbox_handler(struct hinic3_mbox *func_to_func,
+ const struct hinic3_msg_desc *msg_desc)
+{
+ spin_lock(&func_to_func->mbox_lock);
+ if (msg_desc->msg_info.msg_id == func_to_func->send_msg_id &&
+ func_to_func->event_flag == EVENT_START)
+ func_to_func->event_flag = EVENT_SUCCESS;
+ else
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Mbox response timeout, current send msg id(0x%x), recv msg id(0x%x), status(0x%x)\n",
+ func_to_func->send_msg_id, msg_desc->msg_info.msg_id,
+ msg_desc->msg_info.status);
+ spin_unlock(&func_to_func->mbox_lock);
+}
+
+static void recv_mbox_msg_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_msg_desc *msg_desc,
+ u64 mbox_header)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ struct hinic3_recv_mbox *recv_msg = NULL;
+ struct hinic3_mbox_work *mbox_work = NULL;
+ struct hinic3_msg_channel *msg_ch =
+ container_of(msg_desc, struct hinic3_msg_channel, recv_msg);
+ u16 src_func_idx = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ if (atomic_read(&msg_ch->recv_msg_cnt) >
+ HINIC3_MAX_MSG_CNT_TO_PROCESS) {
+		sdk_warn(hwdev->dev_hdl, "Function %u has %d messages waiting to be processed, can't add to work queue\n",
+ src_func_idx, atomic_read(&msg_ch->recv_msg_cnt));
+ return;
+ }
+
+ recv_msg = alloc_recv_mbox();
+ if (!recv_msg) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc receive mbox message buffer\n");
+ return;
+ }
+ recv_msg->msg_len = msg_desc->msg_len;
+ memcpy(recv_msg->msg, msg_desc->msg, recv_msg->msg_len);
+
+ recv_msg->msg_id = msg_desc->msg_info.msg_id;
+ recv_msg->mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_msg->cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ recv_msg->ack_type = HINIC3_MSG_HEADER_GET(mbox_header, NO_ACK);
+ recv_msg->src_func_idx = src_func_idx;
+
+ mbox_work = kzalloc(sizeof(*mbox_work), GFP_KERNEL);
+ if (!mbox_work) {
+ sdk_err(hwdev->dev_hdl, "Allocate mbox work memory failed.\n");
+ free_recv_mbox(recv_msg);
+ return;
+ }
+
+ atomic_inc(&msg_ch->recv_msg_cnt);
+
+ mbox_work->func_to_func = func_to_func;
+ mbox_work->recv_mbox = recv_msg;
+ mbox_work->msg_ch = msg_ch;
+
+ INIT_WORK(&mbox_work->work, recv_func_mbox_work_handler);
+ queue_work_on(hisdk3_get_work_cpu_affinity(hwdev, WORK_TYPE_MBOX),
+ func_to_func->workq, &mbox_work->work);
+}
+
+static bool check_mbox_segment(struct hinic3_mbox *func_to_func,
+ struct hinic3_msg_desc *msg_desc,
+ u64 mbox_header, void *mbox_body)
+{
+ u8 seq_id, seg_len, msg_id, mod;
+ u16 src_func_idx, cmd;
+
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ seg_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+ mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ src_func_idx = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ if (seq_id > SEQ_ID_MAX_VAL || seg_len > MBOX_SEG_LEN ||
+ (seq_id == SEQ_ID_MAX_VAL && seg_len > MBOX_LAST_SEG_MAX_LEN))
+ goto seg_err;
+
+ if (seq_id == 0) {
+ msg_desc->seq_id = seq_id;
+ msg_desc->msg_info.msg_id = msg_id;
+ msg_desc->mod = mod;
+ msg_desc->cmd = cmd;
+ } else {
+ if (seq_id != msg_desc->seq_id + 1 || msg_id != msg_desc->msg_info.msg_id ||
+ mod != msg_desc->mod || cmd != msg_desc->cmd)
+ goto seg_err;
+
+ msg_desc->seq_id = seq_id;
+ }
+
+ return true;
+
+seg_err:
+ sdk_err(func_to_func->hwdev->dev_hdl,
+		"Mailbox segment check failed, src func id: 0x%x, previous seg info: seq id: 0x%x, msg id: 0x%x, mod: 0x%x, cmd: 0x%x\n",
+ src_func_idx, msg_desc->seq_id, msg_desc->msg_info.msg_id,
+ msg_desc->mod, msg_desc->cmd);
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Current seg info: seg len: 0x%x, seq id: 0x%x, msg id: 0x%x, mod: 0x%x, cmd: 0x%x\n",
+ seg_len, seq_id, msg_id, mod, cmd);
+
+ return false;
+}
+
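+/* Reassemble a (possibly multi-segment) mailbox message: each accepted
+ * segment is copied to offset seq_id * MBOX_SEG_LEN of msg_desc->msg. On a
+ * bad segment the descriptor's seq_id is poisoned with SEQ_ID_MAX_VAL so
+ * the rest of the message is dropped. Once the LAST segment arrives, the
+ * message is dispatched to resp_mbox_handler() for responses or
+ * recv_mbox_msg_handler() for requests.
+ */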
+static void recv_mbox_handler(struct hinic3_mbox *func_to_func,
+ u64 *header, struct hinic3_msg_desc *msg_desc)
+{
+ u64 mbox_header = *header;
+ void *mbox_body = MBOX_BODY_FROM_HDR(((void *)header));
+ u8 seq_id, seg_len;
+ int pos;
+
+ if (!check_mbox_segment(func_to_func, msg_desc, mbox_header, mbox_body)) {
+ msg_desc->seq_id = SEQ_ID_MAX_VAL;
+ return;
+ }
+
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ seg_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+
+ pos = seq_id * MBOX_SEG_LEN;
+ memcpy((u8 *)msg_desc->msg + pos, mbox_body, seg_len);
+
+ if (!HINIC3_MSG_HEADER_GET(mbox_header, LAST))
+ return;
+
+ msg_desc->msg_len = HINIC3_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ msg_desc->msg_info.status = HINIC3_MSG_HEADER_GET(mbox_header, STATUS);
+
+ if (HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+ HINIC3_MSG_RESPONSE) {
+ resp_mbox_handler(func_to_func, msg_desc);
+ return;
+ }
+
+ recv_mbox_msg_handler(func_to_func, msg_desc, mbox_header);
+}
+
+void hinic3_mbox_func_aeqe_handler(void *handle, u8 *header, u8 size)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+ struct hinic3_msg_desc *msg_desc = NULL;
+ u64 mbox_header = *((u64 *)header);
+ u64 src, dir;
+
+ func_to_func = ((struct hinic3_hwdev *)handle)->func_to_func;
+
+ dir = HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION);
+ src = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ msg_desc = get_mbox_msg_desc(func_to_func, dir, src);
+ if (!msg_desc) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Mailbox source function id: %u is invalid for current function\n",
+ (u32)src);
+ return;
+ }
+
+ recv_mbox_handler(func_to_func, (u64 *)header, msg_desc);
+}
+
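+/* Each function keeps two DMA message queues: a sync queue for acked
+ * requests and an async queue for no-ack requests. Each queue is a coherent
+ * DMA ring of MBOX_DMA_MSG_QUEUE_DEPTH entries of MBOX_MAX_BUF_SZ bytes;
+ * the consumer indexes are read back from the MBOX_MQ_CI register
+ * (presumably advanced by the firmware as messages are consumed).
+ */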
+static int init_mbox_dma_queue(struct hinic3_hwdev *hwdev, struct mbox_dma_queue *mq)
+{
+ u32 size;
+
+ mq->depth = MBOX_DMA_MSG_QUEUE_DEPTH;
+ mq->prod_idx = 0;
+ mq->cons_idx = 0;
+
+ size = mq->depth * MBOX_MAX_BUF_SZ;
+ mq->dma_buff_vaddr = dma_zalloc_coherent(hwdev->dev_hdl, size, &mq->dma_buff_paddr,
+ GFP_KERNEL);
+ if (!mq->dma_buff_vaddr) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc dma_buffer\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void deinit_mbox_dma_queue(struct hinic3_hwdev *hwdev, struct mbox_dma_queue *mq)
+{
+ dma_free_coherent(hwdev->dev_hdl, mq->depth * MBOX_MAX_BUF_SZ,
+ mq->dma_buff_vaddr, mq->dma_buff_paddr);
+}
+
+static int hinic3_init_mbox_dma_queue(struct hinic3_mbox *func_to_func)
+{
+ u32 val;
+ int err;
+
+ err = init_mbox_dma_queue(func_to_func->hwdev, &func_to_func->sync_msg_queue);
+ if (err)
+ return err;
+
+ err = init_mbox_dma_queue(func_to_func->hwdev, &func_to_func->async_msg_queue);
+ if (err) {
+ deinit_mbox_dma_queue(func_to_func->hwdev, &func_to_func->sync_msg_queue);
+ return err;
+ }
+
+ val = hinic3_hwif_read_reg(func_to_func->hwdev->hwif, MBOX_MQ_CI_OFFSET);
+ val = MBOX_MQ_CI_CLEAR(val, SYNC);
+ val = MBOX_MQ_CI_CLEAR(val, ASYNC);
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif, MBOX_MQ_CI_OFFSET, val);
+
+ return 0;
+}
+
+static void hinic3_deinit_mbox_dma_queue(struct hinic3_mbox *func_to_func)
+{
+ deinit_mbox_dma_queue(func_to_func->hwdev, &func_to_func->sync_msg_queue);
+ deinit_mbox_dma_queue(func_to_func->hwdev, &func_to_func->async_msg_queue);
+}
+
+#define MBOX_DMA_MSG_INIT_XOR_VAL 0x5a5a5a5a
+#define MBOX_XOR_DATA_ALIGN 4
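+/* Integrity check for DMA messages: XOR of the payload as 32-bit words,
+ * seeded with MBOX_DMA_MSG_INIT_XOR_VAL. Callers pad msg_len up to 4-byte
+ * alignment first (see mbox_prepare_dma_entry()); the receiver presumably
+ * recomputes the same value to validate the buffer.
+ */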
+static u32 mbox_dma_msg_xor(u32 *data, u16 msg_len)
+{
+ u32 mbox_xor = MBOX_DMA_MSG_INIT_XOR_VAL;
+ u16 dw_len = msg_len / sizeof(u32);
+ u16 i;
+
+ for (i = 0; i < dw_len; i++)
+ mbox_xor ^= data[i];
+
+ return mbox_xor;
+}
+
+#define MQ_ID_MASK(mq, idx) ((idx) & ((mq)->depth - 1))
+#define IS_MSG_QUEUE_FULL(mq) (MQ_ID_MASK(mq, (mq)->prod_idx + 1) == \
+ MQ_ID_MASK(mq, (mq)->cons_idx))
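+/* Ring indexing assumes the queue depth is a power of two; the queue is
+ * treated as full when advancing prod_idx would land on cons_idx, so one
+ * slot is always left unused.
+ */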
+
+static int mbox_prepare_dma_entry(struct hinic3_mbox *func_to_func, struct mbox_dma_queue *mq,
+ struct mbox_dma_msg *dma_msg, void *msg, u16 msg_len)
+{
+ u64 dma_addr, offset;
+ void *dma_vaddr = NULL;
+
+ if (IS_MSG_QUEUE_FULL(mq)) {
+		sdk_err(func_to_func->hwdev->dev_hdl, "Mbox message queue is full, pi: %u, ci: %u\n",
+ mq->prod_idx, MQ_ID_MASK(mq, mq->cons_idx));
+ return -EBUSY;
+ }
+
+ /* copy data to DMA buffer */
+ offset = mq->prod_idx * MBOX_MAX_BUF_SZ;
+ dma_vaddr = (u8 *)mq->dma_buff_vaddr + offset;
+ memcpy(dma_vaddr, msg, msg_len);
+
+ dma_addr = mq->dma_buff_paddr + offset;
+ dma_msg->dma_addr_high = upper_32_bits(dma_addr);
+ dma_msg->dma_addr_low = lower_32_bits(dma_addr);
+ dma_msg->msg_len = msg_len;
+	/* The firmware reads the message in 4-byte-aligned chunks. */
+ dma_msg->xor = mbox_dma_msg_xor(dma_vaddr, ALIGN(msg_len, MBOX_XOR_DATA_ALIGN));
+
+ mq->prod_idx++;
+ mq->prod_idx = MQ_ID_MASK(mq, mq->prod_idx);
+
+ return 0;
+}
+
+static int mbox_prepare_dma_msg(struct hinic3_mbox *func_to_func, enum hinic3_msg_ack_type ack_type,
+ struct mbox_dma_msg *dma_msg, void *msg, u16 msg_len)
+{
+ struct mbox_dma_queue *mq = NULL;
+ u32 val;
+
+ val = hinic3_hwif_read_reg(func_to_func->hwdev->hwif, MBOX_MQ_CI_OFFSET);
+ if (ack_type == HINIC3_MSG_ACK) {
+ mq = &func_to_func->sync_msg_queue;
+ mq->cons_idx = MBOX_MQ_CI_GET(val, SYNC);
+ } else {
+ mq = &func_to_func->async_msg_queue;
+ mq->cons_idx = MBOX_MQ_CI_GET(val, ASYNC);
+ }
+
+ return mbox_prepare_dma_entry(func_to_func, mq, dma_msg, msg, msg_len);
+}
+
+static void clear_mbox_status(struct hinic3_send_mbox *mbox)
+{
+ *mbox->wb_status = 0;
+
+ /* clear mailbox write back status */
+ wmb();
+}
+
+static void mbox_copy_header(struct hinic3_hwdev *hwdev,
+ struct hinic3_send_mbox *mbox, u64 *header)
+{
+ u32 *data = (u32 *)header;
+ u32 i, idx_max = MBOX_HEADER_SZ / sizeof(u32);
+
+ for (i = 0; i < idx_max; i++) {
+ __raw_writel(cpu_to_be32(*(data + i)),
+ mbox->data + i * sizeof(u32));
+ }
+}
+
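+/* Copy one segment into the mailbox window as big-endian 32-bit words.
+ * A tail shorter than 4 bytes is staged through a zeroed local buffer so
+ * the word-sized reads stay inside the segment.
+ */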
+static int mbox_copy_send_data(struct hinic3_hwdev *hwdev,
+ struct hinic3_send_mbox *mbox, void *seg, u16 seg_len)
+{
+ u32 *data = seg;
+ u32 data_len, chk_sz = sizeof(u32);
+ u32 i, idx_max;
+ u8 mbox_max_buf[MBOX_SEG_LEN] = {0};
+
+	/* The mbox message should be 4-byte aligned. */
+ if (seg_len % chk_sz) {
+ memcpy(mbox_max_buf, seg, seg_len);
+ data = (u32 *)mbox_max_buf;
+ }
+
+ data_len = seg_len;
+ idx_max = ALIGN(data_len, chk_sz) / chk_sz;
+
+ for (i = 0; i < idx_max; i++) {
+ __raw_writel(cpu_to_be32(*(data + i)),
+ mbox->data + MBOX_HEADER_SZ + i * sizeof(u32));
+ }
+
+ return 0;
+}
+
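+/* Program the mailbox send attributes and kick the transfer: the INT
+ * register carries the destination AEQ, DMA attributes, write-back enable
+ * and the TX size in 4-byte units (header plus segment, rounded up); after
+ * a write barrier the CTRL register is written with TX_NOT_DONE and the
+ * destination function, which hands the segment to hardware.
+ */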
+static void write_mbox_msg_attr(struct hinic3_mbox *func_to_func,
+ u16 dst_func, u16 dst_aeqn, u16 seg_len)
+{
+ u32 mbox_int, mbox_ctrl;
+ u16 func = dst_func;
+
+	/* for a VF-to-PF message, the dest func id is learned automatically by HW */
+ if (HINIC3_IS_VF(func_to_func->hwdev) && dst_func != HINIC3_MGMT_SRC_ID)
+ func = 0; /* the destination is the VF's PF */
+
+ mbox_int = HINIC3_MBOX_INT_SET(dst_aeqn, DST_AEQN) |
+ HINIC3_MBOX_INT_SET(0, SRC_RESP_AEQN) |
+ HINIC3_MBOX_INT_SET(NO_DMA_ATTRIBUTE_VAL, STAT_DMA) |
+ HINIC3_MBOX_INT_SET(ALIGN(seg_len + MBOX_HEADER_SZ,
+ MBOX_SEG_LEN_ALIGN) >> 2,
+ TX_SIZE) |
+ HINIC3_MBOX_INT_SET(STRONG_ORDER, STAT_DMA_SO_RO) |
+ HINIC3_MBOX_INT_SET(WRITE_BACK, WB_EN);
+
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF, mbox_int);
+
+ wmb(); /* writing the mbox int attributes */
+ mbox_ctrl = HINIC3_MBOX_CTRL_SET(TX_NOT_DONE, TX_STATUS);
+
+ mbox_ctrl |= HINIC3_MBOX_CTRL_SET(NOT_TRIGGER, TRIGGER_AEQE);
+
+ mbox_ctrl |= HINIC3_MBOX_CTRL_SET(func, DST_FUNC);
+
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF, mbox_ctrl);
+}
+
+static void dump_mbox_reg(struct hinic3_hwdev *hwdev)
+{
+ u32 val;
+
+ val = hinic3_hwif_read_reg(hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF);
+ sdk_err(hwdev->dev_hdl, "Mailbox control reg: 0x%x\n", val);
+ val = hinic3_hwif_read_reg(hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF);
+ sdk_err(hwdev->dev_hdl, "Mailbox interrupt offset: 0x%x\n", val);
+}
+
+static u16 get_mbox_status(const struct hinic3_send_mbox *mbox)
+{
+	/* write back is 16B, but only the first 4B are used */
+ u64 wb_val = be64_to_cpu(*mbox->wb_status);
+
+	rmb(); /* make sure the read completes before the check */
+
+ return (u16)(wb_val & MBOX_WB_STATUS_ERRCODE_MASK);
+}
+
+static enum hinic3_wait_return check_mbox_wb_status(void *priv_data)
+{
+ struct hinic3_mbox *func_to_func = priv_data;
+ u16 wb_status;
+
+ if (MBOX_MSG_CHANNEL_STOP(func_to_func) || !func_to_func->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ wb_status = get_mbox_status(&func_to_func->send_mbox);
+
+ return MBOX_STATUS_FINISHED(wb_status) ?
+ WAIT_PROCESS_CPL : WAIT_PROCESS_WAITING;
+}
+
+static int send_mbox_seg(struct hinic3_mbox *func_to_func, u64 header,
+ u16 dst_func, void *seg, u16 seg_len, void *msg_info)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ u8 num_aeqs = hwdev->hwif->attr.num_aeqs;
+ u16 dst_aeqn, wb_status = 0, errcode;
+ u16 seq_dir = HINIC3_MSG_HEADER_GET(header, DIRECTION);
+ int err;
+
+	/* for mbox to mgmt cpu, hardware doesn't care about the dst aeq id */
+ if (num_aeqs > HINIC3_MBOX_RSP_MSG_AEQ)
+ dst_aeqn = (seq_dir == HINIC3_MSG_DIRECT_SEND) ?
+ HINIC3_ASYNC_MSG_AEQ : HINIC3_MBOX_RSP_MSG_AEQ;
+ else
+ dst_aeqn = 0;
+
+ clear_mbox_status(send_mbox);
+
+ mbox_copy_header(hwdev, send_mbox, &header);
+
+ err = mbox_copy_send_data(hwdev, send_mbox, seg, seg_len);
+ if (err != 0)
+ return err;
+
+ write_mbox_msg_attr(func_to_func, dst_func, dst_aeqn, seg_len);
+
+ wmb(); /* writing the mbox msg attributes */
+
+ err = hinic3_wait_for_timeout(func_to_func, check_mbox_wb_status,
+ MBOX_MSG_POLLING_TIMEOUT, MBOX_MSG_WAIT_ONCE_TIME_US);
+ wb_status = get_mbox_status(send_mbox);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Send mailbox segment timeout, wb status: 0x%x\n",
+ wb_status);
+ dump_mbox_reg(hwdev);
+ return -ETIMEDOUT;
+ }
+
+ if (!MBOX_STATUS_SUCCESS(wb_status)) {
+ sdk_err(hwdev->dev_hdl, "Send mailbox segment to function %u error, wb status: 0x%x\n",
+ dst_func, wb_status);
+ errcode = MBOX_STATUS_ERRCODE(wb_status);
+ return errcode ? errcode : -EFAULT;
+ }
+
+ return 0;
+}
+
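+/* Send a mailbox message, splitting it into MBOX_SEG_LEN-byte segments.
+ * The 64-bit header carries the rolling SEQID and a LAST flag; the final
+ * segment rewrites SEG_LEN to the remaining length. When the destination
+ * supports DMA mailboxes and the chip does not require segmented transfer,
+ * the payload is passed by reference as a mbox_dma_msg descriptor instead
+ * of being copied inline.
+ */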
+static int send_mbox_msg(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ void *msg, u16 msg_len, u16 dst_func,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ struct mbox_dma_msg dma_msg = {0};
+ enum hinic3_data_type data_type = HINIC3_DATA_INLINE;
+ int err = 0;
+ u32 seq_id = 0;
+ u16 seg_len = MBOX_SEG_LEN;
+ u16 rsp_aeq_id, left;
+ u8 *msg_seg = NULL;
+ u64 header = 0;
+
+ if (hwdev->poll || hwdev->hwif->attr.num_aeqs >= 0x2)
+ rsp_aeq_id = HINIC3_MBOX_RSP_MSG_AEQ;
+ else
+ rsp_aeq_id = 0;
+
+ mutex_lock(&func_to_func->msg_send_lock);
+
+ if (IS_DMA_MBX_MSG(dst_func) && !COMM_SUPPORT_MBOX_SEGMENT(hwdev)) {
+ err = mbox_prepare_dma_msg(func_to_func, ack_type, &dma_msg, msg, msg_len);
+ if (err != 0)
+ goto send_err;
+
+ msg = &dma_msg;
+ msg_len = sizeof(dma_msg);
+ data_type = HINIC3_DATA_DMA;
+ }
+
+ msg_seg = (u8 *)msg;
+ left = msg_len;
+
+ header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(seg_len, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(data_type, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(SEQ_ID_START_VAL, SEQID) |
+ HINIC3_MSG_HEADER_SET(NOT_LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+		  /* The vf's offset to its associated pf */
+ HINIC3_MSG_HEADER_SET(msg_info->msg_id, MSG_ID) |
+ HINIC3_MSG_HEADER_SET(rsp_aeq_id, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_FROM_MBOX, SOURCE) |
+ HINIC3_MSG_HEADER_SET(!!msg_info->status, STATUS) |
+ HINIC3_MSG_HEADER_SET(hinic3_global_func_id(hwdev),
+ SRC_GLB_FUNC_IDX);
+
+ while (!(HINIC3_MSG_HEADER_GET(header, LAST))) {
+ if (left <= MBOX_SEG_LEN) {
+ header &= ~MBOX_SEGLEN_MASK;
+ header |= HINIC3_MSG_HEADER_SET(left, SEG_LEN);
+ header |= HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST);
+
+ seg_len = left;
+ }
+
+ err = send_mbox_seg(func_to_func, header, dst_func, msg_seg,
+ seg_len, msg_info);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to send mbox seg, seq_id=0x%llx\n",
+ HINIC3_MSG_HEADER_GET(header, SEQID));
+ goto send_err;
+ }
+
+ left -= MBOX_SEG_LEN;
+ msg_seg += MBOX_SEG_LEN;
+
+ seq_id++;
+ header &= ~(HINIC3_MSG_HEADER_SET(HINIC3_MSG_HEADER_SEQID_MASK,
+ SEQID));
+ header |= HINIC3_MSG_HEADER_SET(seq_id, SEQID);
+ }
+
+send_err:
+ mutex_unlock(&func_to_func->msg_send_lock);
+
+ return err;
+}
+
+static void set_mbox_to_func_event(struct hinic3_mbox *func_to_func,
+ enum mbox_event_state event_flag)
+{
+ spin_lock(&func_to_func->mbox_lock);
+ func_to_func->event_flag = event_flag;
+ spin_unlock(&func_to_func->mbox_lock);
+}
+
+static enum hinic3_wait_return check_mbox_msg_finish(void *priv_data)
+{
+ struct hinic3_mbox *func_to_func = priv_data;
+
+ if (MBOX_MSG_CHANNEL_STOP(func_to_func) || func_to_func->hwdev->chip_present_flag == 0)
+ return WAIT_PROCESS_ERR;
+
+ return (func_to_func->event_flag == EVENT_SUCCESS) ?
+ WAIT_PROCESS_CPL : WAIT_PROCESS_WAITING;
+}
+
+static int wait_mbox_msg_completion(struct hinic3_mbox *func_to_func,
+ u32 timeout)
+{
+ u32 wait_time;
+ int err;
+
+ wait_time = (timeout != 0) ? timeout : HINIC3_MBOX_COMP_TIME;
+ err = hinic3_wait_for_timeout(func_to_func, check_mbox_msg_finish,
+ wait_time, WAIT_USEC_50);
+ if (err) {
+ set_mbox_to_func_event(func_to_func, EVENT_TIMEOUT);
+ return -ETIMEDOUT;
+ }
+
+ set_mbox_to_func_event(func_to_func, EVENT_END);
+
+ return 0;
+}
+
+#define TRY_MBOX_LOCK_SLEPP 1000
+static int send_mbox_msg_lock(struct hinic3_mbox *func_to_func, u16 channel)
+{
+ if (!func_to_func->lock_channel_en) {
+ mutex_lock(&func_to_func->mbox_send_lock);
+ return 0;
+ }
+
+ while (test_bit(channel, &func_to_func->channel_stop) == 0) {
+ if (mutex_trylock(&func_to_func->mbox_send_lock) != 0)
+ return 0;
+
+ usleep_range(TRY_MBOX_LOCK_SLEPP - 1, TRY_MBOX_LOCK_SLEPP);
+ }
+
+ return -EAGAIN;
+}
+
+static void send_mbox_msg_unlock(struct hinic3_mbox *func_to_func)
+{
+ mutex_unlock(&func_to_func->mbox_send_lock);
+}
+
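+/* Synchronous mailbox request: take the send lock for the channel, assign
+ * the next rolling msg id, send a direct-send/ACK message and then poll for
+ * the completion event that resp_mbox_handler() raises when the matching
+ * response arrives. The response is validated against the expected mod/cmd
+ * and copied to buf_out if it fits.
+ */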
+int hinic3_mbox_to_func(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ u16 dst_func, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel)
+{
+	/* use the response msg_desc to hold the data returned by the other function */
+ struct hinic3_msg_desc *msg_desc = NULL;
+ struct mbox_msg_info msg_info = {0};
+ int err;
+
+ if (func_to_func->hwdev->chip_present_flag == 0)
+ return -EPERM;
+
+ /* expect response message */
+ msg_desc = get_mbox_msg_desc(func_to_func, HINIC3_MSG_RESPONSE, dst_func);
+ if (!msg_desc) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "msg_desc null\n");
+ return -EFAULT;
+ }
+
+ err = send_mbox_msg_lock(func_to_func, channel);
+ if (err)
+ return err;
+
+ func_to_func->cur_msg_channel = channel;
+ msg_info.msg_id = MBOX_MSG_ID_INC(func_to_func);
+
+ set_mbox_to_func_event(func_to_func, EVENT_START);
+
+ err = send_mbox_msg(func_to_func, mod, cmd, buf_in, in_size, dst_func,
+ HINIC3_MSG_DIRECT_SEND, HINIC3_MSG_ACK, &msg_info);
+ if (err) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Send mailbox mod %u, cmd %u failed, msg_id: %u, err: %d\n",
+ mod, cmd, msg_info.msg_id, err);
+ set_mbox_to_func_event(func_to_func, EVENT_FAIL);
+ goto send_err;
+ }
+ func_to_func->hwdev->mbox_send_cnt++;
+
+ if (wait_mbox_msg_completion(func_to_func, timeout) != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Send mbox msg timeout, msg_id: %u\n", msg_info.msg_id);
+ hinic3_dump_aeq_info(func_to_func->hwdev);
+ err = -ETIMEDOUT;
+ goto send_err;
+ }
+ func_to_func->hwdev->mbox_ack_cnt++;
+
+ if (mod != msg_desc->mod || cmd != msg_desc->cmd) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Invalid response mbox message, mod: 0x%x, cmd: 0x%x, expect mod: 0x%x, cmd: 0x%x\n",
+ msg_desc->mod, msg_desc->cmd, mod, cmd);
+ err = -EFAULT;
+ goto send_err;
+ }
+
+ if (msg_desc->msg_info.status) {
+ err = msg_desc->msg_info.status;
+ goto send_err;
+ }
+
+ if (buf_out && out_size) {
+ if (*out_size < msg_desc->msg_len) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+				"Invalid response mbox message length: %u for mod %d cmd %u, should be no more than %u\n",
+ msg_desc->msg_len, mod, cmd, *out_size);
+ err = -EFAULT;
+ goto send_err;
+ }
+
+ if (msg_desc->msg_len)
+ memcpy(buf_out, msg_desc->msg, msg_desc->msg_len);
+ *out_size = msg_desc->msg_len;
+ }
+
+send_err:
+ send_mbox_msg_unlock(func_to_func);
+
+ return err;
+}
+
+static int mbox_func_params_valid(struct hinic3_mbox *func_to_func,
+ void *buf_in, u16 in_size, u16 channel)
+{
+ if (!buf_in || !in_size)
+ return -EINVAL;
+
+ if (in_size > HINIC3_MBOX_DATA_SIZE) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+			"Mbox msg len %u exceeds limit: [1, %u]\n",
+ in_size, HINIC3_MBOX_DATA_SIZE);
+ return -EINVAL;
+ }
+
+ if (channel >= HINIC3_CHANNEL_MAX) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Invalid channel id: 0x%x\n", channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_mbox_to_host(struct hinic3_hwdev *hwdev, u16 dest_host_ppf_id, enum hinic3_mod_type mod,
+ u8 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ int err;
+
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size, channel);
+ if (err)
+ return err;
+
+ if (!HINIC3_IS_PPF(hwdev)) {
+ sdk_err(hwdev->dev_hdl, "Params error, only PPF can send message to other host, func_type: %d\n",
+ hinic3_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ return hinic3_mbox_to_func(func_to_func, mod, cmd, dest_host_ppf_id, buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+
+int hinic3_mbox_to_func_no_ack(struct hinic3_hwdev *hwdev, u16 func_idx,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size, u16 channel)
+{
+ struct mbox_msg_info msg_info = {0};
+ int err = mbox_func_params_valid(hwdev->func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ err = send_mbox_msg_lock(hwdev->func_to_func, channel);
+ if (err)
+ return err;
+
+ err = send_mbox_msg(hwdev->func_to_func, mod, cmd, buf_in, in_size,
+ func_idx, HINIC3_MSG_DIRECT_SEND,
+ HINIC3_MSG_NO_ACK, &msg_info);
+ if (err)
+ sdk_err(hwdev->dev_hdl, "Send mailbox no ack failed\n");
+
+ send_mbox_msg_unlock(hwdev->func_to_func);
+
+ return err;
+}
+
+int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ int err = mbox_func_params_valid(func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+	/* TODO: MPU has not implemented this cmd yet */
+ if (mod == HINIC3_MOD_COMM && cmd == COMM_MGMT_CMD_SEND_API_ACK_BY_UP)
+ return 0;
+
+ return hinic3_mbox_to_func(func_to_func, mod, cmd, HINIC3_MGMT_SRC_ID,
+ buf_in, in_size, buf_out, out_size, timeout,
+ channel);
+}
+
+void hinic3_response_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 msg_id)
+{
+ struct mbox_msg_info msg_info;
+
+ msg_info.msg_id = (u8)msg_id;
+ msg_info.status = 0;
+
+ send_mbox_msg(hwdev->func_to_func, mod, cmd, buf_in, in_size,
+ HINIC3_MGMT_SRC_ID, HINIC3_MSG_RESPONSE,
+ HINIC3_MSG_NO_ACK, &msg_info);
+}
+
+int hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 channel)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ int err = mbox_func_params_valid(func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ return hinic3_mbox_to_func_no_ack(hwdev, HINIC3_MGMT_SRC_ID, mod, cmd,
+ buf_in, in_size, channel);
+}
+
+int hinic3_mbox_ppf_to_host(void *hwdev, u8 mod, u16 cmd, u8 host_id,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u16 dst_ppf_func;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!(dev->chip_present_flag))
+ return -EPERM;
+
+ err = mbox_func_params_valid(dev->func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ if (!HINIC3_IS_PPF(dev)) {
+ sdk_err(dev->dev_hdl, "Params error, only ppf support send mbox to ppf. func_type: %d\n",
+ hinic3_func_type(dev));
+ return -EINVAL;
+ }
+
+ if (host_id >= HINIC3_MAX_HOST_NUM(dev) ||
+ host_id == HINIC3_PCI_INTF_IDX(dev->hwif)) {
+ sdk_err(dev->dev_hdl, "Params error, host id: %u\n", host_id);
+ return -EINVAL;
+ }
+
+ dst_ppf_func = hinic3_host_ppf_idx(dev, host_id);
+ if (dst_ppf_func >= HINIC3_MAX_PF_NUM(dev)) {
+ sdk_err(dev->dev_hdl, "Dest host(%u) have not elect ppf(0x%x).\n",
+ host_id, dst_ppf_func);
+ return -EINVAL;
+ }
+
+ return hinic3_mbox_to_func(dev->func_to_func, mod, cmd,
+ dst_ppf_func, buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_ppf_to_host);
+
+int hinic3_mbox_to_pf(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!(dev->chip_present_flag))
+ return -EPERM;
+
+ err = mbox_func_params_valid(dev->func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ if (!HINIC3_IS_VF(dev)) {
+ sdk_err(dev->dev_hdl, "Params error, func_type: %d\n",
+ hinic3_func_type(dev));
+ return -EINVAL;
+ }
+
+ return hinic3_mbox_to_func(dev->func_to_func, mod, cmd,
+ hinic3_pf_id_of_vf(dev), buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_to_pf);
+
+int hinic3_mbox_to_vf(void *hwdev, u16 vf_id, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u32 timeout,
+ u16 channel)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+ int err = 0;
+ u16 dst_func_idx;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size, channel);
+ if (err != 0)
+ return err;
+
+ if (HINIC3_IS_VF((struct hinic3_hwdev *)hwdev)) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl, "Params error, func_type: %d\n",
+ hinic3_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ if (!vf_id) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "VF id(%u) error!\n", vf_id);
+ return -EINVAL;
+ }
+
+ /* the VF's global function id is this PF's VF offset plus vf_id */
+ dst_func_idx = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+
+ return hinic3_mbox_to_func(func_to_func, mod, cmd, dst_func_idx, buf_in,
+ in_size, buf_out, out_size, timeout,
+ channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_to_vf);
+
+int hinic3_mbox_to_vf_no_ack(void *hwdev, u16 vf_id, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size, u16 channel)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+ int err = 0;
+ u16 dst_func_idx;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size, channel);
+ if (err != 0)
+ return err;
+
+ if (HINIC3_IS_VF((struct hinic3_hwdev *)hwdev)) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl, "Params error, func_type: %d\n",
+ hinic3_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ if (!vf_id) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "VF id(%u) error!\n", vf_id);
+ return -EINVAL;
+ }
+
+ /* the VF's global function id is this PF's VF offset plus vf_id */
+ dst_func_idx = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+
+ return hinic3_mbox_to_func_no_ack(hwdev, dst_func_idx, mod, cmd,
+ buf_in, in_size, channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_to_vf_no_ack);
+
+void hinic3_mbox_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable)
+{
+ hwdev->func_to_func->lock_channel_en = enable;
+
+ sdk_info(hwdev->dev_hdl, "%s mbox channel lock\n",
+ enable ? "Enable" : "Disable");
+}
+
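+/* A message channel holds one receive buffer and one response buffer of
+ * MBOX_MAX_BUF_SZ bytes each; seq_id starts at the invalid value
+ * SEQ_ID_MAX_VAL until the first segment arrives.
+ */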
+static int alloc_mbox_msg_channel(struct hinic3_msg_channel *msg_ch)
+{
+ msg_ch->resp_msg.msg = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!msg_ch->resp_msg.msg)
+ return -ENOMEM;
+
+ msg_ch->recv_msg.msg = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!msg_ch->recv_msg.msg) {
+ kfree(msg_ch->resp_msg.msg);
+ return -ENOMEM;
+ }
+
+ msg_ch->resp_msg.seq_id = SEQ_ID_MAX_VAL;
+ msg_ch->recv_msg.seq_id = SEQ_ID_MAX_VAL;
+ atomic_set(&msg_ch->recv_msg_cnt, 0);
+
+ return 0;
+}
+
+static void free_mbox_msg_channel(struct hinic3_msg_channel *msg_ch)
+{
+ kfree(msg_ch->recv_msg.msg);
+ kfree(msg_ch->resp_msg.msg);
+}
+
+static int init_mgmt_msg_channel(struct hinic3_mbox *func_to_func)
+{
+ int err;
+
+ err = alloc_mbox_msg_channel(&func_to_func->mgmt_msg);
+ if (err != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to alloc mgmt message channel\n");
+ return err;
+ }
+
+ err = hinic3_init_mbox_dma_queue(func_to_func);
+ if (err != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to init mbox dma queue\n");
+ free_mbox_msg_channel(&func_to_func->mgmt_msg);
+ }
+
+ return err;
+}
+
+static void deinit_mgmt_msg_channel(struct hinic3_mbox *func_to_func)
+{
+ hinic3_deinit_mbox_dma_queue(func_to_func);
+ free_mbox_msg_channel(&func_to_func->mgmt_msg);
+}
+
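+/* On a multi-host setup the PPF allocates one message channel per host so
+ * that host-to-host mailbox traffic can be handled per peer.
+ */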
+int hinic3_mbox_init_host_msg_channel(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ u8 host_num = HINIC3_MAX_HOST_NUM(hwdev);
+ int i, host_id, err;
+
+ if (host_num == 0)
+ return 0;
+
+ func_to_func->host_msg = kcalloc(host_num,
+ sizeof(*func_to_func->host_msg),
+ GFP_KERNEL);
+ if (!func_to_func->host_msg) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to alloc host message array\n");
+ return -ENOMEM;
+ }
+
+ for (host_id = 0; host_id < host_num; host_id++) {
+ err = alloc_mbox_msg_channel(&func_to_func->host_msg[host_id]);
+ if (err) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Failed to alloc host %d message channel\n",
+ host_id);
+ goto alloc_msg_ch_err;
+ }
+ }
+
+ func_to_func->support_h2h_msg = true;
+
+ return 0;
+
+alloc_msg_ch_err:
+ for (i = 0; i < host_id; i++)
+ free_mbox_msg_channel(&func_to_func->host_msg[i]);
+
+ kfree(func_to_func->host_msg);
+ func_to_func->host_msg = NULL;
+
+ return -ENOMEM;
+}
+
+static void deinit_host_msg_channel(struct hinic3_mbox *func_to_func)
+{
+ int i;
+
+ if (!func_to_func->host_msg)
+ return;
+
+ for (i = 0; i < HINIC3_MAX_HOST_NUM(func_to_func->hwdev); i++)
+ free_mbox_msg_channel(&func_to_func->host_msg[i]);
+
+ kfree(func_to_func->host_msg);
+ func_to_func->host_msg = NULL;
+}
+
+int hinic3_init_func_mbox_msg_channel(void *hwdev, u16 num_func)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_mbox *func_to_func = NULL;
+ u16 func_id, i;
+ int err;
+
+ if (!hwdev || !num_func || num_func > HINIC3_MAX_FUNCTIONS)
+ return -EINVAL;
+
+ func_to_func = dev->func_to_func;
+ if (func_to_func->func_msg)
+ return (func_to_func->num_func_msg == num_func) ? 0 : -EFAULT;
+
+ func_to_func->func_msg =
+ kcalloc(num_func, sizeof(*func_to_func->func_msg), GFP_KERNEL);
+ if (!func_to_func->func_msg) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to alloc func message array\n");
+ return -ENOMEM;
+ }
+
+ for (func_id = 0; func_id < num_func; func_id++) {
+ err = alloc_mbox_msg_channel(&func_to_func->func_msg[func_id]);
+ if (err != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Failed to alloc func %hu message channel\n",
+ func_id);
+ goto alloc_msg_ch_err;
+ }
+ }
+
+ func_to_func->num_func_msg = num_func;
+
+ return 0;
+
+alloc_msg_ch_err:
+ for (i = 0; i < func_id; i++)
+ free_mbox_msg_channel(&func_to_func->func_msg[i]);
+
+ kfree(func_to_func->func_msg);
+ func_to_func->func_msg = NULL;
+
+ return -ENOMEM;
+}
+
+static void hinic3_deinit_func_mbox_msg_channel(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ u16 i;
+
+ if (!func_to_func->func_msg)
+ return;
+
+ for (i = 0; i < func_to_func->num_func_msg; i++)
+ free_mbox_msg_channel(&func_to_func->func_msg[i]);
+
+ kfree(func_to_func->func_msg);
+ func_to_func->func_msg = NULL;
+}
+
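+/* Map a source function id to its message channel (MGMT CPU, parent PF on a
+ * VF, one of this PF's VFs, or another host's PPF) and return the recv or
+ * resp descriptor according to the direction.
+ */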
+static struct hinic3_msg_desc *get_mbox_msg_desc(struct hinic3_mbox *func_to_func,
+ u64 dir, u64 src_func_id)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ struct hinic3_msg_channel *msg_ch = NULL;
+ u16 id;
+
+ if (src_func_id == HINIC3_MGMT_SRC_ID) {
+ msg_ch = &func_to_func->mgmt_msg;
+ } else if (HINIC3_IS_VF(hwdev)) {
+ /* message from pf */
+ msg_ch = func_to_func->func_msg;
+ if (src_func_id != hinic3_pf_id_of_vf(hwdev) || !msg_ch)
+ return NULL;
+ } else if (src_func_id > hinic3_glb_pf_vf_offset(hwdev)) {
+ /* message from vf */
+ id = (u16)(src_func_id - 1U) - hinic3_glb_pf_vf_offset(hwdev);
+ if (id >= func_to_func->num_func_msg)
+ return NULL;
+
+ msg_ch = &func_to_func->func_msg[id];
+ } else {
+ /* message from other host's ppf */
+ if (!func_to_func->support_h2h_msg)
+ return NULL;
+
+ for (id = 0; id < HINIC3_MAX_HOST_NUM(hwdev); id++) {
+ if (src_func_id == hinic3_host_ppf_idx(hwdev, (u8)id))
+ break;
+ }
+
+ if (id == HINIC3_MAX_HOST_NUM(hwdev) || !func_to_func->host_msg)
+ return NULL;
+
+ msg_ch = &func_to_func->host_msg[id];
+ }
+
+ return (dir == HINIC3_MSG_DIRECT_SEND) ?
+ &msg_ch->recv_msg : &msg_ch->resp_msg;
+}
+
+static void prepare_send_mbox(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+
+ send_mbox->data = MBOX_AREA(func_to_func->hwdev->hwif);
+}
+
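+/* The write-back status is a small DMA-coherent buffer; its physical address
+ * is programmed into the mailbox result CSRs so hardware can report send
+ * completion there.
+ */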
+static int alloc_mbox_wb_status(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ u32 addr_h, addr_l;
+
+ send_mbox->wb_vaddr = dma_zalloc_coherent(hwdev->dev_hdl,
+ MBOX_WB_STATUS_LEN,
+ &send_mbox->wb_paddr,
+ GFP_KERNEL);
+ if (!send_mbox->wb_vaddr)
+ return -ENOMEM;
+
+ send_mbox->wb_status = send_mbox->wb_vaddr;
+
+ addr_h = upper_32_bits(send_mbox->wb_paddr);
+ addr_l = lower_32_bits(send_mbox->wb_paddr);
+
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+ addr_h);
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+ addr_l);
+
+ return 0;
+}
+
+static void free_mbox_wb_status(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+ 0);
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+ 0);
+
+ dma_free_coherent(hwdev->dev_hdl, MBOX_WB_STATUS_LEN,
+ send_mbox->wb_vaddr, send_mbox->wb_paddr);
+}
+
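+/* Bring up the function-to-function mailbox: locks, work queue, the MGMT
+ * message channel (plus the VF-to-PF channel on a VF) and the write-back
+ * status area.
+ */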
+int hinic3_func_to_func_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func;
+ int err = -ENOMEM;
+
+ func_to_func = kzalloc(sizeof(*func_to_func), GFP_KERNEL);
+ if (!func_to_func)
+ return -ENOMEM;
+
+ hwdev->func_to_func = func_to_func;
+ func_to_func->hwdev = hwdev;
+ mutex_init(&func_to_func->mbox_send_lock);
+ mutex_init(&func_to_func->msg_send_lock);
+ spin_lock_init(&func_to_func->mbox_lock);
+ func_to_func->workq = create_singlethread_workqueue(HINIC3_MBOX_WQ_NAME);
+ if (!func_to_func->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize MBOX workqueue\n");
+ goto create_mbox_workq_err;
+ }
+
+ err = init_mgmt_msg_channel(func_to_func);
+ if (err)
+ goto init_mgmt_msg_ch_err;
+
+ if (HINIC3_IS_VF(hwdev)) {
+ /* VF to PF mbox message channel */
+ err = hinic3_init_func_mbox_msg_channel(hwdev, 1);
+ if (err)
+ goto init_func_msg_ch_err;
+ }
+
+ err = alloc_mbox_wb_status(func_to_func);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc mbox write back status\n");
+ goto alloc_wb_status_err;
+ }
+
+ prepare_send_mbox(func_to_func);
+
+ return 0;
+
+alloc_wb_status_err:
+ if (HINIC3_IS_VF(hwdev))
+ hinic3_deinit_func_mbox_msg_channel(hwdev);
+
+init_func_msg_ch_err:
+ deinit_mgmt_msg_channel(func_to_func);
+
+init_mgmt_msg_ch_err:
+ destroy_workqueue(func_to_func->workq);
+
+create_mbox_workq_err:
+ spin_lock_deinit(&func_to_func->mbox_lock);
+ mutex_deinit(&func_to_func->msg_send_lock);
+ mutex_deinit(&func_to_func->mbox_send_lock);
+ kfree(func_to_func);
+
+ return err;
+}
+
+void hinic3_func_to_func_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+
+ /* destroy the workqueue before freeing related mbox resources to
+ * prevent illegal resource access
+ */
+ destroy_workqueue(func_to_func->workq);
+
+ free_mbox_wb_status(func_to_func);
+ if (HINIC3_IS_PPF(hwdev))
+ deinit_host_msg_channel(func_to_func);
+ hinic3_deinit_func_mbox_msg_channel(hwdev);
+ deinit_mgmt_msg_channel(func_to_func);
+ spin_lock_deinit(&func_to_func->mbox_lock);
+ mutex_deinit(&func_to_func->mbox_send_lock);
+ mutex_deinit(&func_to_func->msg_send_lock);
+
+ kfree(func_to_func);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h
new file mode 100644
index 0000000..226f8d6
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h
@@ -0,0 +1,281 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MBOX_H
+#define HINIC3_MBOX_H
+
+#include "hinic3_crm.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_MBOX_PF_SEND_ERR 0x1
+
+#define HINIC3_MGMT_SRC_ID 0x1FFF
+#define HINIC3_MAX_FUNCTIONS 4096
+
+/* message header define */
+#define HINIC3_MSG_HEADER_SRC_GLB_FUNC_IDX_SHIFT 0
+#define HINIC3_MSG_HEADER_STATUS_SHIFT 13
+#define HINIC3_MSG_HEADER_SOURCE_SHIFT 15
+#define HINIC3_MSG_HEADER_AEQ_ID_SHIFT 16
+#define HINIC3_MSG_HEADER_MSG_ID_SHIFT 18
+#define HINIC3_MSG_HEADER_CMD_SHIFT 22
+
+#define HINIC3_MSG_HEADER_MSG_LEN_SHIFT 32
+#define HINIC3_MSG_HEADER_MODULE_SHIFT 43
+#define HINIC3_MSG_HEADER_SEG_LEN_SHIFT 48
+#define HINIC3_MSG_HEADER_NO_ACK_SHIFT 54
+#define HINIC3_MSG_HEADER_DATA_TYPE_SHIFT 55
+#define HINIC3_MSG_HEADER_SEQID_SHIFT 56
+#define HINIC3_MSG_HEADER_LAST_SHIFT 62
+#define HINIC3_MSG_HEADER_DIRECTION_SHIFT 63
+
+#define HINIC3_MSG_HEADER_SRC_GLB_FUNC_IDX_MASK 0x1FFF
+#define HINIC3_MSG_HEADER_STATUS_MASK 0x1
+#define HINIC3_MSG_HEADER_SOURCE_MASK 0x1
+#define HINIC3_MSG_HEADER_AEQ_ID_MASK 0x3
+#define HINIC3_MSG_HEADER_MSG_ID_MASK 0xF
+#define HINIC3_MSG_HEADER_CMD_MASK 0x3FF
+
+#define HINIC3_MSG_HEADER_MSG_LEN_MASK 0x7FF
+#define HINIC3_MSG_HEADER_MODULE_MASK 0x1F
+#define HINIC3_MSG_HEADER_SEG_LEN_MASK 0x3F
+#define HINIC3_MSG_HEADER_NO_ACK_MASK 0x1
+#define HINIC3_MSG_HEADER_DATA_TYPE_MASK 0x1
+#define HINIC3_MSG_HEADER_SEQID_MASK 0x3F
+#define HINIC3_MSG_HEADER_LAST_MASK 0x1
+#define HINIC3_MSG_HEADER_DIRECTION_MASK 0x1
+
+#define MBOX_MAX_BUF_SZ 2048U
+#define MBOX_HEADER_SZ 8
+#define HINIC3_MBOX_DATA_SIZE (MBOX_MAX_BUF_SZ - MBOX_HEADER_SZ)
+
+#define HINIC3_MSG_HEADER_GET(val, field) \
+ (((val) >> HINIC3_MSG_HEADER_##field##_SHIFT) & \
+ HINIC3_MSG_HEADER_##field##_MASK)
+#define HINIC3_MSG_HEADER_SET(val, field) \
+ ((u64)(((u64)(val)) & HINIC3_MSG_HEADER_##field##_MASK) << \
+ HINIC3_MSG_HEADER_##field##_SHIFT)
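+/*
+ * Example (illustrative only): a single-segment direct-send header can be
+ * composed and decoded as
+ *   hdr = HINIC3_MSG_HEADER_SET(len, MSG_LEN) |
+ *         HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ *         HINIC3_MSG_HEADER_SET(HINIC3_MSG_DIRECT_SEND, DIRECTION);
+ *   mod = HINIC3_MSG_HEADER_GET(hdr, MODULE);
+ */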
+
+#define IS_DMA_MBX_MSG(dst_func) ((dst_func) == HINIC3_MGMT_SRC_ID)
+
+enum hinic3_msg_direction_type {
+ HINIC3_MSG_DIRECT_SEND = 0,
+ HINIC3_MSG_RESPONSE = 1,
+};
+
+enum hinic3_msg_segment_type {
+ NOT_LAST_SEGMENT = 0,
+ LAST_SEGMENT = 1,
+};
+
+enum hinic3_msg_ack_type {
+ HINIC3_MSG_ACK,
+ HINIC3_MSG_NO_ACK,
+};
+
+enum hinic3_data_type {
+ HINIC3_DATA_INLINE = 0,
+ HINIC3_DATA_DMA = 1,
+};
+
+enum hinic3_msg_src_type {
+ HINIC3_MSG_FROM_MGMT = 0,
+ HINIC3_MSG_FROM_MBOX = 1,
+};
+
+enum hinic3_msg_aeq_type {
+ HINIC3_ASYNC_MSG_AEQ = 0,
+ /* tells the dest func or mgmt cpu which aeq to use when responding to a mbox message */
+ HINIC3_MBOX_RSP_MSG_AEQ = 1,
+ /* tells the mgmt cpu which aeq to use when responding to an api cmd message */
+ HINIC3_MGMT_RSP_MSG_AEQ = 2,
+};
+
+#define HINIC3_MBOX_WQ_NAME "hinic3_mbox"
+
+struct mbox_msg_info {
+ u8 msg_id;
+ u8 status; /* can only use 1 bit */
+};
+
+struct hinic3_msg_desc {
+ void *msg;
+ u16 msg_len;
+ u8 seq_id;
+ u8 mod;
+ u16 cmd;
+ struct mbox_msg_info msg_info;
+};
+
+struct hinic3_msg_channel {
+ struct hinic3_msg_desc resp_msg;
+ struct hinic3_msg_desc recv_msg;
+
+ atomic_t recv_msg_cnt;
+};
+
+/* Mailbox message received from another function */
+struct hinic3_recv_mbox {
+ void *msg;
+ u16 msg_len;
+ u8 msg_id;
+ u8 mod;
+ u16 cmd;
+ u16 src_func_idx;
+
+ enum hinic3_msg_ack_type ack_type;
+ u32 rsvd1;
+
+ void *resp_buff;
+};
+
+struct hinic3_send_mbox {
+ u8 *data;
+
+ u64 *wb_status; /* write back status */
+ void *wb_vaddr;
+ dma_addr_t wb_paddr;
+};
+
+enum mbox_event_state {
+ EVENT_START = 0,
+ EVENT_FAIL,
+ EVENT_SUCCESS,
+ EVENT_TIMEOUT,
+ EVENT_END,
+};
+
+enum hinic3_mbox_cb_state {
+ HINIC3_VF_MBOX_CB_REG = 0,
+ HINIC3_VF_MBOX_CB_RUNNING,
+ HINIC3_PF_MBOX_CB_REG,
+ HINIC3_PF_MBOX_CB_RUNNING,
+ HINIC3_PPF_MBOX_CB_REG,
+ HINIC3_PPF_MBOX_CB_RUNNING,
+ HINIC3_PPF_TO_PF_MBOX_CB_REG,
+ HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+};
+
+enum hinic3_mbox_ack_type {
+ MBOX_ACK,
+ MBOX_NO_ACK,
+};
+
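+/* Entry format of the DMA mailbox queues: the host buffer address split into
+ * high and low words, the message length and an xor check word.
+ */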
+struct mbox_dma_msg {
+ u32 xor;
+ u32 dma_addr_high;
+ u32 dma_addr_low;
+ u32 msg_len;
+ u64 rsvd;
+};
+
+struct mbox_dma_queue {
+ void *dma_buff_vaddr;
+ dma_addr_t dma_buff_paddr;
+
+ u16 depth;
+ u16 prod_idx;
+ u16 cons_idx;
+};
+
+struct hinic3_mbox {
+ struct hinic3_hwdev *hwdev;
+
+ bool lock_channel_en;
+ unsigned long channel_stop;
+ u16 cur_msg_channel;
+ u32 rsvd1;
+
+ /* lock for send mbox message and ack message */
+ struct mutex mbox_send_lock;
+ /* lock for send mbox message */
+ struct mutex msg_send_lock;
+ struct hinic3_send_mbox send_mbox;
+
+ struct mbox_dma_queue sync_msg_queue;
+ struct mbox_dma_queue async_msg_queue;
+
+ struct workqueue_struct *workq;
+
+ struct hinic3_msg_channel mgmt_msg; /* driver and MGMT CPU */
+ struct hinic3_msg_channel *host_msg; /* PPF message between hosts */
+ struct hinic3_msg_channel *func_msg; /* PF to VF or VF to PF */
+ u16 num_func_msg;
+ bool support_h2h_msg; /* host to host */
+
+ /* vf receive pf/ppf callback */
+ hinic3_vf_mbox_cb vf_mbox_cb[HINIC3_MOD_MAX];
+ void *vf_mbox_data[HINIC3_MOD_MAX];
+ /* pf/ppf receive vf callback */
+ hinic3_pf_mbox_cb pf_mbox_cb[HINIC3_MOD_MAX];
+ void *pf_mbox_data[HINIC3_MOD_MAX];
+ /* ppf receive pf/ppf callback */
+ hinic3_ppf_mbox_cb ppf_mbox_cb[HINIC3_MOD_MAX];
+ void *ppf_mbox_data[HINIC3_MOD_MAX];
+ /* pf receive ppf callback */
+ hinic3_pf_recv_from_ppf_mbox_cb pf_recv_ppf_mbox_cb[HINIC3_MOD_MAX];
+ void *pf_recv_ppf_mbox_data[HINIC3_MOD_MAX];
+ unsigned long ppf_to_pf_mbox_cb_state[HINIC3_MOD_MAX];
+ unsigned long ppf_mbox_cb_state[HINIC3_MOD_MAX];
+ unsigned long pf_mbox_cb_state[HINIC3_MOD_MAX];
+ unsigned long vf_mbox_cb_state[HINIC3_MOD_MAX];
+
+ u8 send_msg_id;
+ u16 rsvd2;
+ enum mbox_event_state event_flag;
+ /* lock for mbox event flag */
+ spinlock_t mbox_lock;
+ u64 rsvd3;
+};
+
+struct hinic3_mbox_work {
+ struct work_struct work;
+ struct hinic3_mbox *func_to_func;
+ struct hinic3_recv_mbox *recv_mbox;
+ struct hinic3_msg_channel *msg_ch;
+};
+
+struct vf_cmd_check_handle {
+ u16 cmd;
+ bool (*check_cmd)(struct hinic3_hwdev *hwdev, u16 src_func_idx,
+ void *buf_in, u16 in_size);
+};
+
+void hinic3_mbox_func_aeqe_handler(void *handle, u8 *header, u8 size);
+
+bool hinic3_mbox_check_cmd_valid(struct hinic3_hwdev *hwdev,
+ struct vf_cmd_check_handle *cmd_handle,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ u8 size);
+
+int hinic3_func_to_func_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_func_to_func_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_mbox_to_host(struct hinic3_hwdev *hwdev, u16 dest_host_ppf_id,
+ enum hinic3_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u32 timeout, u16 channel);
+
+int hinic3_mbox_to_func_no_ack(struct hinic3_hwdev *hwdev, u16 func_idx,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size,
+ u16 channel);
+
+int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel);
+
+void hinic3_response_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 msg_id);
+
+int hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 channel);
+int hinic3_mbox_to_func(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ u16 dst_func, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout, u16 channel);
+
+int hinic3_mbox_init_host_msg_channel(struct hinic3_hwdev *hwdev);
+
+void hinic3_mbox_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable);
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c
new file mode 100644
index 0000000..0d75177
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c
@@ -0,0 +1,1571 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/completion.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/semaphore.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "mpu_inband_cmd.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_eqs.h"
+#include "hinic3_mbox.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_csr.h"
+#include "hinic3_mgmt.h"
+
+#define HINIC3_MSG_TO_MGMT_MAX_LEN 2016
+
+#define HINIC3_API_CHAIN_AEQ_ID 2
+#define MAX_PF_MGMT_BUF_SIZE 2048UL
+#define SEGMENT_LEN 48
+#define ASYNC_MSG_FLAG 0x8
+#define MGMT_MSG_MAX_SEQ_ID (ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, \
+ SEGMENT_LEN) / SEGMENT_LEN)
+
+#define MGMT_MSG_LAST_SEG_MAX_LEN (MAX_PF_MGMT_BUF_SIZE - \
+ SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID)
+
+#define BUF_OUT_DEFAULT_SIZE 1
+
+#define MGMT_MSG_SIZE_MIN 20
+#define MGMT_MSG_SIZE_STEP 16
+#define MGMT_MSG_RSVD_FOR_DEV 8
+
+#define SYNC_MSG_ID_MASK 0x7
+#define ASYNC_MSG_ID_MASK 0x7
+
+#define SYNC_FLAG 0
+#define ASYNC_FLAG 1
+
+#define MSG_NO_RESP 0xFFFF
+
+#define MGMT_MSG_TIMEOUT 20000 /* millisecond */
+
+#define SYNC_MSG_ID(pf_to_mgmt) ((pf_to_mgmt)->sync_msg_id)
+
+#define SYNC_MSG_ID_INC(pf_to_mgmt) (SYNC_MSG_ID(pf_to_mgmt) = \
+ (SYNC_MSG_ID(pf_to_mgmt) + 1) & SYNC_MSG_ID_MASK)
+#define ASYNC_MSG_ID(pf_to_mgmt) ((pf_to_mgmt)->async_msg_id)
+
+#define ASYNC_MSG_ID_INC(pf_to_mgmt) (ASYNC_MSG_ID(pf_to_mgmt) = \
+ ((ASYNC_MSG_ID(pf_to_mgmt) + 1) & ASYNC_MSG_ID_MASK) \
+ | ASYNC_MSG_FLAG)
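+/* Sync message ids cycle through 0x0-0x7; async ids always carry
+ * ASYNC_MSG_FLAG (bit 3) and cycle through 0x8-0xF, so the two id spaces
+ * never overlap.
+ */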
+
+static void pf_to_mgmt_send_event_set(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ int event_flag)
+{
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ pf_to_mgmt->event_flag = event_flag;
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+}
+
+/**
+ * hinic3_register_mgmt_msg_cb - register sync msg handler for a module
+ * @hwdev: the pointer to hw device
+ * @mod: module in the chip that this handler will handle its sync messages
+ * @pri_handle: specific mod's private data that will be used in callback
+ * @callback: the handler that will process sync messages for this module
+ **/
+int hinic3_register_mgmt_msg_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_mgmt_msg_cb callback)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+
+ if (mod >= HINIC3_MOD_HW_MAX || !hwdev)
+ return -EFAULT;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return -EINVAL;
+
+ pf_to_mgmt->recv_mgmt_msg_cb[mod] = callback;
+ pf_to_mgmt->recv_mgmt_msg_data[mod] = pri_handle;
+
+ set_bit(HINIC3_MGMT_MSG_CB_REG, &pf_to_mgmt->mgmt_msg_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_mgmt_msg_cb);
+
+/**
+ * hinic3_unregister_mgmt_msg_cb - unregister sync msg handler for a module
+ * @hwdev: the pointer to hw device
+ * @mod: module in the chip that this handler will handle its sync messages
+ **/
+void hinic3_unregister_mgmt_msg_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+
+ if (!hwdev || mod >= HINIC3_MOD_HW_MAX)
+ return;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return;
+
+ clear_bit(HINIC3_MGMT_MSG_CB_REG, &pf_to_mgmt->mgmt_msg_cb_state[mod]);
+
+ while (test_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[mod]))
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ pf_to_mgmt->recv_mgmt_msg_cb[mod] = NULL;
+ pf_to_mgmt->recv_mgmt_msg_data[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_mgmt_msg_cb);
+
+/**
+ * mgmt_msg_len - calculate the total message length
+ * @msg_data_len: the length of the message data
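+ * Example: an 8-byte payload gives 8 (rsvd) + 8 (header) + 8 = 24 bytes,
+ * which rounds up to 20 + ALIGN(4, 16) = 36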
+ * Return: the total message length
+ **/
+static u16 mgmt_msg_len(u16 msg_data_len)
+{
+ /* the header occupies sizeof(u64) bytes */
+ u16 msg_size;
+
+ msg_size = (u16)(MGMT_MSG_RSVD_FOR_DEV + sizeof(u64) + msg_data_len);
+
+ if (msg_size > MGMT_MSG_SIZE_MIN)
+ msg_size = MGMT_MSG_SIZE_MIN +
+ ALIGN((msg_size - MGMT_MSG_SIZE_MIN),
+ MGMT_MSG_SIZE_STEP);
+ else
+ msg_size = MGMT_MSG_SIZE_MIN;
+
+ return msg_size;
+}
+
+/**
+ * prepare_header - prepare the header of the message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @header: pointer of the header to prepare
+ * @msg_len: the length of the message
+ * @mod: module in the chip that will get the message
+ * @ack_type: ack type of the message
+ * @direction: the direction of the original message
+ * @cmd: command of the message
+ * @msg_id: message id
+ **/
+static void prepare_header(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u64 *header, u16 msg_len, u8 mod,
+ enum hinic3_msg_ack_type ack_type,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_mgmt_cmd cmd, u32 msg_id)
+{
+ struct hinic3_hwif *hwif = pf_to_mgmt->hwdev->hwif;
+
+ *header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(msg_len, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(HINIC3_DATA_INLINE, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(0, SEQID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_API_CHAIN_AEQ_ID, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_FROM_MGMT, SOURCE) |
+ HINIC3_MSG_HEADER_SET(hwif->attr.func_global_idx,
+ SRC_GLB_FUNC_IDX) |
+ HINIC3_MSG_HEADER_SET(msg_id, MSG_ID);
+}
+
+static void clp_prepare_header(struct hinic3_hwdev *hwdev, u64 *header,
+ u16 msg_len, u8 mod,
+ enum hinic3_msg_ack_type ack_type,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_mgmt_cmd cmd, u32 msg_id)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+
+ *header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(msg_len, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(HINIC3_DATA_INLINE, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(0, SEQID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_API_CHAIN_AEQ_ID, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+ HINIC3_MSG_HEADER_SET(hwif->attr.func_global_idx,
+ SRC_GLB_FUNC_IDX) |
+ HINIC3_MSG_HEADER_SET(msg_id, MSG_ID);
+}
+
+/**
+ * prepare_mgmt_cmd - prepare the mgmt command
+ * @mgmt_cmd: pointer to the command to prepare
+ * @header: pointer of the header to prepare
+ * @msg: the data of the message
+ * @msg_len: the length of the message
+ **/
+static int prepare_mgmt_cmd(u8 *mgmt_cmd, u64 *header, const void *msg, int msg_len)
+{
+ u8 *mgmt_cmd_new = mgmt_cmd;
+
+ memset(mgmt_cmd_new, 0, MGMT_MSG_RSVD_FOR_DEV);
+
+ mgmt_cmd_new += MGMT_MSG_RSVD_FOR_DEV;
+ memcpy(mgmt_cmd_new, header, sizeof(*header));
+
+ mgmt_cmd_new += sizeof(*header);
+ memcpy(mgmt_cmd_new, msg, (size_t)(u32)msg_len);
+
+ return 0;
+}
+
+/**
+ * send_msg_to_mgmt_sync - send sync message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @mod: module in the chip that will get the message
+ * @cmd: command of the message
+ * @msg: the msg data
+ * @msg_len: the msg data length
+ * @ack_type: ack type of the message
+ * @direction: the direction of the original message
+ * @resp_msg_id: msg id to respond to
+ * Return: 0 - success, negative - failure
+ **/
+static int send_msg_to_mgmt_sync(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, const void *msg, u16 msg_len,
+ enum hinic3_msg_ack_type ack_type,
+ enum hinic3_msg_direction_type direction,
+ u16 resp_msg_id)
+{
+ void *mgmt_cmd = pf_to_mgmt->sync_msg_buf;
+ struct hinic3_api_cmd_chain *chain = NULL;
+ u8 node_id = HINIC3_MGMT_CPU_NODE_ID(pf_to_mgmt->hwdev);
+ u64 header;
+ u16 cmd_size = mgmt_msg_len(msg_len);
+ int ret;
+
+ if (hinic3_get_chip_present_flag(pf_to_mgmt->hwdev) == 0)
+ return -EFAULT;
+
+ if (cmd_size > HINIC3_MSG_TO_MGMT_MAX_LEN)
+ return -EFAULT;
+
+ if (direction == HINIC3_MSG_RESPONSE)
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, ack_type,
+ direction, cmd, resp_msg_id);
+ else
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, ack_type,
+ direction, cmd, SYNC_MSG_ID_INC(pf_to_mgmt));
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_WRITE_TO_MGMT_CPU];
+
+ if (ack_type == HINIC3_MSG_ACK)
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_START);
+
+ ret = prepare_mgmt_cmd((u8 *)mgmt_cmd, &header, msg, msg_len);
+ if (ret != 0)
+ return ret;
+
+ return hinic3_api_cmd_write(chain, node_id, mgmt_cmd, cmd_size);
+}
+
+/**
+ * send_msg_to_mgmt_async - send async message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @mod: module in the chip that will get the message
+ * @cmd: command of the message
+ * @msg: the data of the message
+ * @msg_len: the length of the message
+ * @direction: the direction of the original message
+ * Return: 0 - success, negative - failure
+ **/
+static int send_msg_to_mgmt_async(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, const void *msg, u16 msg_len,
+ enum hinic3_msg_direction_type direction)
+{
+ void *mgmt_cmd = pf_to_mgmt->async_msg_buf;
+ struct hinic3_api_cmd_chain *chain = NULL;
+ u8 node_id = HINIC3_MGMT_CPU_NODE_ID(pf_to_mgmt->hwdev);
+ u64 header;
+ u16 cmd_size = mgmt_msg_len(msg_len);
+ int ret;
+
+ if (hinic3_get_chip_present_flag(pf_to_mgmt->hwdev) == 0)
+ return -EFAULT;
+
+ if (cmd_size > HINIC3_MSG_TO_MGMT_MAX_LEN)
+ return -EFAULT;
+
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, HINIC3_MSG_NO_ACK,
+ direction, cmd, ASYNC_MSG_ID(pf_to_mgmt));
+
+ ret = prepare_mgmt_cmd((u8 *)mgmt_cmd, &header, msg, msg_len);
+ if (ret != 0)
+ return ret;
+
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU];
+
+ return hinic3_api_cmd_write(chain, node_id, mgmt_cmd, cmd_size);
+}
+
+static inline int msg_to_mgmt_pre(u8 mod, void *buf_in, u16 in_size)
+{
+ struct hinic3_msg_head *msg_head = NULL;
+
+ /* the number of aeqs is fixed at 3, so the response aeq id must be < 3 */
+ if (mod == HINIC3_MOD_COMM || mod == HINIC3_MOD_L2NIC) {
+ if (in_size < sizeof(struct hinic3_msg_head))
+ return -EINVAL;
+
+ msg_head = buf_in;
+
+ if (msg_head->resp_aeq_num >= HINIC3_MAX_AEQS)
+ msg_head->resp_aeq_num = 0;
+ }
+
+ return 0;
+}
+
+int hinic3_pf_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ void *dev = ((struct hinic3_hwdev *)hwdev)->dev_hdl;
+ struct hinic3_recv_msg *recv_msg = NULL;
+ struct completion *recv_done = NULL;
+ ulong timeo;
+ int err;
+ ulong ret;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ if ((buf_in == NULL) || (in_size == 0))
+ return -EINVAL;
+
+ ret = msg_to_mgmt_pre(mod, buf_in, in_size);
+ if (ret != 0)
+ return -EINVAL;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+
+ /* Lock the sync_msg_buf */
+ down(&pf_to_mgmt->sync_msg_lock);
+ recv_msg = &pf_to_mgmt->recv_resp_msg_from_mgmt;
+ recv_done = &recv_msg->recv_done;
+
+ init_completion(recv_done);
+
+ err = send_msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ HINIC3_MSG_ACK, HINIC3_MSG_DIRECT_SEND,
+ MSG_NO_RESP);
+ if (err) {
+ sdk_err(dev, "Failed to send sync msg to mgmt, sync_msg_id: %u\n",
+ pf_to_mgmt->sync_msg_id);
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_FAIL);
+ goto unlock_sync_msg;
+ }
+
+ timeo = msecs_to_jiffies(timeout ? timeout : MGMT_MSG_TIMEOUT);
+
+ ret = wait_for_completion_timeout(recv_done, timeo);
+ if (!ret) {
+ sdk_err(dev, "Mgmt response sync cmd timeout, sync_msg_id: %u\n",
+ pf_to_mgmt->sync_msg_id);
+ hinic3_dump_aeq_info((struct hinic3_hwdev *)hwdev);
+ err = -ETIMEDOUT;
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_TIMEOUT);
+ goto unlock_sync_msg;
+ }
+
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ if (pf_to_mgmt->event_flag == SEND_EVENT_TIMEOUT) {
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+ err = -ETIMEDOUT;
+ goto unlock_sync_msg;
+ }
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_END);
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag)) {
+ destroy_completion(recv_done);
+ up(&pf_to_mgmt->sync_msg_lock);
+ return -ETIMEDOUT;
+ }
+
+ if (buf_out && out_size) {
+ if (*out_size < recv_msg->msg_len) {
+ sdk_err(dev, "Invalid response message length: %u for mod %d cmd %u from mgmt, should less than: %u\n",
+ recv_msg->msg_len, mod, cmd, *out_size);
+ err = -EFAULT;
+ goto unlock_sync_msg;
+ }
+
+ if (recv_msg->msg_len)
+ memcpy(buf_out, recv_msg->msg, recv_msg->msg_len);
+ *out_size = recv_msg->msg_len;
+ }
+
+unlock_sync_msg:
+ destroy_completion(recv_done);
+ up(&pf_to_mgmt->sync_msg_lock);
+
+ return err;
+}
+
+int hinic3_pf_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ void *dev = ((struct hinic3_hwdev *)hwdev)->dev_hdl;
+ int err;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+
+ /* Lock the async_msg_buf */
+ spin_lock_bh(&pf_to_mgmt->async_msg_lock);
+ ASYNC_MSG_ID_INC(pf_to_mgmt);
+
+ err = send_msg_to_mgmt_async(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ HINIC3_MSG_DIRECT_SEND);
+ spin_unlock_bh(&pf_to_mgmt->async_msg_lock);
+
+ if (err) {
+ sdk_err(dev, "Failed to send async mgmt msg\n");
+ return err;
+ }
+
+ return 0;
+}
+
+/* This function is only used by tx/rx flush */
+int hinic3_pf_to_mgmt_no_ack(void *hwdev, enum hinic3_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ void *dev = NULL;
+ int err = -EINVAL;
+ struct hinic3_hwdev *tmp_hwdev = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ tmp_hwdev = (struct hinic3_hwdev *)hwdev;
+ dev = tmp_hwdev->dev_hdl;
+ pf_to_mgmt = tmp_hwdev->pf_to_mgmt;
+
+ if (in_size > HINIC3_MBOX_DATA_SIZE) {
+ sdk_err(dev, "Mgmt msg buffer size: %u is invalid\n", in_size);
+ return -EINVAL;
+ }
+
+ if (!(tmp_hwdev->chip_present_flag))
+ return -EPERM;
+
+ /* lock the sync_msg_buf */
+ down(&pf_to_mgmt->sync_msg_lock);
+
+ err = send_msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size, HINIC3_MSG_NO_ACK,
+ HINIC3_MSG_DIRECT_SEND, MSG_NO_RESP);
+
+ up(&pf_to_mgmt->sync_msg_lock);
+
+ return err;
+}
+
+int hinic3_pf_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ if (in_size > HINIC3_MSG_TO_MGMT_MAX_LEN)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ return hinic3_pf_to_mgmt_sync(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+}
+
+int hinic3_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ return hinic3_send_mbox_to_mgmt(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_sync);
+
+int hinic3_msg_to_mgmt_no_ack(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ return hinic3_send_mbox_to_mgmt_no_ack(hwdev, mod, cmd, buf_in,
+ in_size, channel);
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_no_ack);
+
+int hinic3_msg_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel)
+{
+ return hinic3_msg_to_mgmt_api_chain_async(hwdev, mod, cmd, buf_in,
+ in_size);
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_async);
+
+int hinic3_msg_to_mgmt_api_chain_sync(void *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev)) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "PF don't support api chain\n");
+ return -EPERM;
+ }
+
+ return hinic3_pf_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+}
+
+int hinic3_msg_to_mgmt_api_chain_async(void *hwdev, u8 mod, u16 cmd,
+ const void *buf_in, u16 in_size)
+{
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ err = -EFAULT;
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "VF don't support async cmd\n");
+ } else if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev)) {
+ err = -EPERM;
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "PF don't support api chain\n");
+ } else {
+ err = hinic3_pf_to_mgmt_async(hwdev, mod, cmd, buf_in, in_size);
+ }
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_api_chain_async);
+
+static void send_mgmt_ack(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size,
+ u16 msg_id)
+{
+ u16 buf_size;
+
+ if (!in_size)
+ buf_size = BUF_OUT_DEFAULT_SIZE;
+ else
+ buf_size = in_size;
+
+ hinic3_response_mbox_to_mgmt(pf_to_mgmt->hwdev, mod, cmd, buf_in,
+ buf_size, msg_id);
+}
+
+static void mgmt_recv_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size,
+ u16 msg_id, int need_resp)
+{
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+ void *buf_out = pf_to_mgmt->mgmt_ack_buf;
+ enum hinic3_mod_type tmp_mod = mod;
+ bool ack_first = false;
+ u16 out_size = 0;
+
+ memset(buf_out, 0, MAX_PF_MGMT_BUF_SIZE);
+
+ if (mod >= HINIC3_MOD_HW_MAX) {
+ sdk_warn(dev, "Receive illegal message from mgmt cpu, mod = %d\n",
+ mod);
+ goto unsupported;
+ }
+
+ set_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+
+ if (!pf_to_mgmt->recv_mgmt_msg_cb[mod] ||
+ !test_bit(HINIC3_MGMT_MSG_CB_REG,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod])) {
+ sdk_warn(dev, "Receive mgmt callback is null, mod = %u, cmd=%u\n", mod, cmd);
+ clear_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+ goto unsupported;
+ }
+
+ pf_to_mgmt->recv_mgmt_msg_cb[tmp_mod](pf_to_mgmt->recv_mgmt_msg_data[tmp_mod],
+ cmd, buf_in, in_size,
+ buf_out, &out_size);
+
+ clear_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+
+ goto resp;
+
+unsupported:
+ out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+
+resp:
+ if (!ack_first && need_resp)
+ send_mgmt_ack(pf_to_mgmt, mod, cmd, buf_out, out_size, msg_id);
+}
+
+/**
+ * mgmt_resp_msg_handler - handler for response message from mgmt cpu
+ * @pf_to_mgmt: PF to MGMT channel
+ * @recv_msg: received message details
+ **/
+static void mgmt_resp_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ struct hinic3_recv_msg *recv_msg)
+{
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+
+ /* responses to async messages are ignored */
+ if (recv_msg->msg_id & ASYNC_MSG_FLAG)
+ return;
+
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ if (recv_msg->msg_id == pf_to_mgmt->sync_msg_id &&
+ pf_to_mgmt->event_flag == SEND_EVENT_START) {
+ pf_to_mgmt->event_flag = SEND_EVENT_SUCCESS;
+ complete(&recv_msg->recv_done);
+ } else if (recv_msg->msg_id != pf_to_mgmt->sync_msg_id) {
+ sdk_err(dev, "Send msg id(0x%x) recv msg id(0x%x) dismatch, event state=%d\n",
+ pf_to_mgmt->sync_msg_id, recv_msg->msg_id,
+ pf_to_mgmt->event_flag);
+ } else {
+ sdk_err(dev, "Wait timeout, send msg id(0x%x) recv msg id(0x%x), event state=%d!\n",
+ pf_to_mgmt->sync_msg_id, recv_msg->msg_id,
+ pf_to_mgmt->event_flag);
+ }
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+}
+
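+/* Work item queued by init_mgmt_msg_work(): runs the registered module
+ * callback outside AEQ context, sends the ack unless the message was marked
+ * no-ack, then frees the copied message and the work item.
+ */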
+static void recv_mgmt_msg_work_handler(struct work_struct *work)
+{
+ struct hinic3_mgmt_msg_handle_work *mgmt_work =
+ container_of(work, struct hinic3_mgmt_msg_handle_work, work);
+
+ mgmt_recv_msg_handler(mgmt_work->pf_to_mgmt, mgmt_work->mod,
+ mgmt_work->cmd, mgmt_work->msg,
+ mgmt_work->msg_len, mgmt_work->msg_id,
+ !mgmt_work->async_mgmt_to_pf);
+
+ destroy_work(&mgmt_work->work);
+
+ kfree(mgmt_work->msg);
+ kfree(mgmt_work);
+}
+
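+/* Reassembly check: segment 0 starts a new message and records msg_id; later
+ * segments must be consecutive and carry the same msg_id. No segment may
+ * exceed SEGMENT_LEN, and the last possible sequence id is further capped at
+ * MGMT_MSG_LAST_SEG_MAX_LEN.
+ */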
+static bool check_mgmt_head_info(struct hinic3_recv_msg *recv_msg,
+ u8 seq_id, u8 seg_len, u16 msg_id)
+{
+ if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN ||
+ (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN))
+ return false;
+
+ if (seq_id == 0) {
+ recv_msg->seq_id = seq_id;
+ recv_msg->msg_id = msg_id;
+ } else {
+ if (seq_id != recv_msg->seq_id + 1 || msg_id != recv_msg->msg_id)
+ return false;
+
+ recv_msg->seq_id = seq_id;
+ }
+
+ return true;
+}
+
+static void init_mgmt_msg_work(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ struct hinic3_recv_msg *recv_msg)
+{
+ struct hinic3_mgmt_msg_handle_work *mgmt_work = NULL;
+ struct hinic3_hwdev *hwdev = pf_to_mgmt->hwdev;
+
+ mgmt_work = kzalloc(sizeof(*mgmt_work), GFP_KERNEL);
+ if (!mgmt_work) {
+ sdk_err(hwdev->dev_hdl, "Allocate mgmt work memory failed\n");
+ return;
+ }
+
+ if (recv_msg->msg_len) {
+ mgmt_work->msg = kzalloc(recv_msg->msg_len, GFP_KERNEL);
+ if (!mgmt_work->msg) {
+ sdk_err(hwdev->dev_hdl, "Allocate mgmt msg memory failed\n");
+ kfree(mgmt_work);
+ return;
+ }
+ }
+
+ mgmt_work->pf_to_mgmt = pf_to_mgmt;
+ mgmt_work->msg_len = recv_msg->msg_len;
+ memcpy(mgmt_work->msg, recv_msg->msg, recv_msg->msg_len);
+ mgmt_work->msg_id = recv_msg->msg_id;
+ mgmt_work->mod = recv_msg->mod;
+ mgmt_work->cmd = recv_msg->cmd;
+ mgmt_work->async_mgmt_to_pf = recv_msg->async_mgmt_to_pf;
+
+ INIT_WORK(&mgmt_work->work, recv_mgmt_msg_work_handler);
+ queue_work_on(hisdk3_get_work_cpu_affinity(hwdev, WORK_TYPE_MGMT_MSG),
+ pf_to_mgmt->workq, &mgmt_work->work);
+}
+
+/**
+ * recv_mgmt_msg_handler - handle a message from mgmt cpu
+ * @pf_to_mgmt: PF to MGMT channel
+ * @header: the header of the message
+ * @recv_msg: received message details
+ **/
+static void recv_mgmt_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 *header, struct hinic3_recv_msg *recv_msg)
+{
+ struct hinic3_hwdev *hwdev = pf_to_mgmt->hwdev;
+ u64 mbox_header = *((u64 *)header);
+ void *msg_body = header + sizeof(mbox_header);
+ u8 seq_id, seq_len;
+ u16 msg_id;
+ u32 offset;
+ u64 dir;
+
+ /* Don't need to get anything from hw when cmd is async */
+ dir = HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION);
+ if (dir == HINIC3_MSG_RESPONSE &&
+ (HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID) & ASYNC_MSG_FLAG))
+ return;
+
+ seq_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+ if (!check_mgmt_head_info(recv_msg, seq_id, seq_len, msg_id)) {
+ sdk_err(hwdev->dev_hdl, "Mgmt msg sequence id and segment length check failed\n");
+ sdk_err(hwdev->dev_hdl,
+ "Front seq_id: 0x%x,current seq_id: 0x%x, seg len: 0x%x, front msg_id: %d, cur: %d\n",
+ recv_msg->seq_id, seq_id, seq_len, recv_msg->msg_id, msg_id);
+ /* set seq_id to invalid seq_id */
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+ return;
+ }
+
+ offset = seq_id * SEGMENT_LEN;
+ memcpy((u8 *)recv_msg->msg + offset, msg_body, seq_len);
+
+ if (!HINIC3_MSG_HEADER_GET(mbox_header, LAST))
+ return;
+
+ recv_msg->cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ recv_msg->mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_msg->async_mgmt_to_pf = HINIC3_MSG_HEADER_GET(mbox_header,
+ NO_ACK);
+ recv_msg->msg_len = HINIC3_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ recv_msg->msg_id = msg_id;
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ if (HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+ HINIC3_MSG_RESPONSE) {
+ mgmt_resp_msg_handler(pf_to_mgmt, recv_msg);
+ return;
+ }
+
+ init_mgmt_msg_work(pf_to_mgmt, recv_msg);
+}
+
+/**
+ * hinic3_mgmt_msg_aeqe_handler - handler for a mgmt message event
+ * @hwdev: the pointer to hw device
+ * @header: the header of the message
+ * @size: the size of the event data
+ **/
+void hinic3_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct hinic3_recv_msg *recv_msg = NULL;
+ bool is_send_dir = false;
+
+ if ((HINIC3_MSG_HEADER_GET(*(u64 *)header, SOURCE) ==
+ HINIC3_MSG_FROM_MBOX)) {
+ hinic3_mbox_func_aeqe_handler(hwdev, header, size);
+ return;
+ }
+
+ pf_to_mgmt = dev->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return;
+
+ is_send_dir = (HINIC3_MSG_HEADER_GET(*(u64 *)header, DIRECTION) ==
+ HINIC3_MSG_DIRECT_SEND) ? true : false;
+
+ recv_msg = is_send_dir ? &pf_to_mgmt->recv_msg_from_mgmt :
+ &pf_to_mgmt->recv_resp_msg_from_mgmt;
+
+ recv_mgmt_msg_handler(pf_to_mgmt, header, recv_msg);
+}
+
+/**
+ * alloc_recv_msg - allocate received message memory
+ * @recv_msg: pointer that will hold the allocated data
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_recv_msg(struct hinic3_recv_msg *recv_msg)
+{
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ recv_msg->msg = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!recv_msg->msg)
+ return -ENOMEM;
+
+ return 0;
+}
+
+/**
+ * free_recv_msg - free received message memory
+ * @recv_msg: pointer that holds the allocated data
+ **/
+static void free_recv_msg(struct hinic3_recv_msg *recv_msg)
+{
+ kfree(recv_msg->msg);
+ recv_msg->msg = NULL;
+}
+
+/**
+ * alloc_msg_buf - allocate all the message buffers of PF to MGMT channel
+ * @pf_to_mgmt: PF to MGMT channel
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_msg_buf(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ int err;
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate recv msg\n");
+ return err;
+ }
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate resp recv msg\n");
+ goto alloc_msg_for_resp_err;
+ }
+
+ pf_to_mgmt->async_msg_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->async_msg_buf) {
+ err = -ENOMEM;
+ goto async_msg_buf_err;
+ }
+
+ pf_to_mgmt->sync_msg_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->sync_msg_buf) {
+ err = -ENOMEM;
+ goto sync_msg_buf_err;
+ }
+
+ pf_to_mgmt->mgmt_ack_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->mgmt_ack_buf) {
+ err = -ENOMEM;
+ goto ack_msg_buf_err;
+ }
+
+ return 0;
+
+ack_msg_buf_err:
+ kfree(pf_to_mgmt->sync_msg_buf);
+
+sync_msg_buf_err:
+ kfree(pf_to_mgmt->async_msg_buf);
+
+async_msg_buf_err:
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+
+alloc_msg_for_resp_err:
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ return err;
+}
+
+/**
+ * free_msg_buf - free all the message buffers of PF to MGMT channel
+ * @pf_to_mgmt: PF to MGMT channel
+ **/
+static void free_msg_buf(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ kfree(pf_to_mgmt->mgmt_ack_buf);
+ kfree(pf_to_mgmt->sync_msg_buf);
+ kfree(pf_to_mgmt->async_msg_buf);
+
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ pf_to_mgmt->mgmt_ack_buf = NULL;
+ pf_to_mgmt->sync_msg_buf = NULL;
+ pf_to_mgmt->async_msg_buf = NULL;
+}
+
+/**
+ * hinic3_pf_to_mgmt_init - initialize PF to MGMT channel
+ * @hwdev: the pointer to hw device
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+ void *dev = hwdev->dev_hdl;
+ int err;
+
+ pf_to_mgmt = kzalloc(sizeof(*pf_to_mgmt), GFP_KERNEL);
+ if (!pf_to_mgmt)
+ return -ENOMEM;
+
+ hwdev->pf_to_mgmt = pf_to_mgmt;
+ pf_to_mgmt->hwdev = hwdev;
+ spin_lock_init(&pf_to_mgmt->async_msg_lock);
+ spin_lock_init(&pf_to_mgmt->sync_event_lock);
+ sema_init(&pf_to_mgmt->sync_msg_lock, 1);
+ pf_to_mgmt->workq = create_singlethread_workqueue(HINIC3_MGMT_WQ_NAME);
+ if (!pf_to_mgmt->workq) {
+ sdk_err(dev, "Failed to initialize MGMT workqueue\n");
+ err = -ENOMEM;
+ goto create_mgmt_workq_err;
+ }
+
+ err = alloc_msg_buf(pf_to_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate msg buffers\n");
+ goto alloc_msg_buf_err;
+ }
+
+ err = hinic3_api_cmd_init(hwdev, pf_to_mgmt->cmd_chain);
+ if (err) {
+ sdk_err(dev, "Failed to init the api cmd chains\n");
+ goto api_cmd_init_err;
+ }
+
+ return 0;
+
+api_cmd_init_err:
+ free_msg_buf(pf_to_mgmt);
+
+alloc_msg_buf_err:
+ destroy_workqueue(pf_to_mgmt->workq);
+
+create_mgmt_workq_err:
+ spin_lock_deinit(&pf_to_mgmt->sync_event_lock);
+ spin_lock_deinit(&pf_to_mgmt->async_msg_lock);
+ sema_deinit(&pf_to_mgmt->sync_msg_lock);
+ kfree(pf_to_mgmt);
+
+ return err;
+}
+
+/**
+ * hinic3_pf_to_mgmt_free - free PF to MGMT channel
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = hwdev->pf_to_mgmt;
+
+ /* destroy the workqueue before freeing related pf_to_mgmt resources to
+ * prevent illegal resource access
+ */
+ destroy_workqueue(pf_to_mgmt->workq);
+ hinic3_api_cmd_free(hwdev, pf_to_mgmt->cmd_chain);
+
+ free_msg_buf(pf_to_mgmt);
+ spin_lock_deinit(&pf_to_mgmt->sync_event_lock);
+ spin_lock_deinit(&pf_to_mgmt->async_msg_lock);
+ sema_deinit(&pf_to_mgmt->sync_msg_lock);
+ kfree(pf_to_mgmt);
+}
+
+void hinic3_flush_mgmt_workq(void *hwdev)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ flush_workqueue(dev->aeqs->workq);
+
+ if (hinic3_func_type(dev) != TYPE_VF)
+ flush_workqueue(dev->pf_to_mgmt->workq);
+}
+
+int hinic3_api_cmd_read_ack(void *hwdev, u8 dest, const void *cmd,
+ u16 size, void *ack, u16 ack_size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct hinic3_api_cmd_chain *chain = NULL;
+
+ if (!hwdev || !cmd || (ack_size && !ack) || size > MAX_PF_MGMT_BUF_SIZE)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_POLL_READ];
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -EPERM;
+
+ return hinic3_api_cmd_read(chain, dest, cmd, size, ack, ack_size);
+}
+
+/**
+ * api cmd write/read bypass uses polling by default; to use the aeq
+ * interrupt instead, set wb_trigger_aeqe to 1
+ **/
+int hinic3_api_cmd_write_nack(void *hwdev, u8 dest, const void *cmd, u16 size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct hinic3_api_cmd_chain *chain = NULL;
+
+ if (!hwdev || !size || !cmd || size > MAX_PF_MGMT_BUF_SIZE)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_POLL_WRITE];
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -EPERM;
+
+ return hinic3_api_cmd_write(chain, dest, cmd, size);
+}
+
+static int get_clp_reg(void *hwdev, enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 *reg_addr)
+{
+ switch (reg_type) {
+ case HINIC3_CLP_BA_HOST:
+ *reg_addr = (data_type == HINIC3_CLP_REQ_HOST) ?
+ HINIC3_CLP_REG(REQBASE) :
+ HINIC3_CLP_REG(RSPBASE);
+ break;
+
+ case HINIC3_CLP_SIZE_HOST:
+ *reg_addr = HINIC3_CLP_REG(SIZE);
+ break;
+
+ case HINIC3_CLP_LEN_HOST:
+ *reg_addr = (data_type == HINIC3_CLP_REQ_HOST) ?
+ HINIC3_CLP_REG(REQ) : HINIC3_CLP_REG(RSP);
+ break;
+
+ case HINIC3_CLP_START_REQ_HOST:
+ *reg_addr = HINIC3_CLP_REG(REQ);
+ break;
+
+ case HINIC3_CLP_READY_RSP_HOST:
+ *reg_addr = HINIC3_CLP_REG(RSP);
+ break;
+
+ default:
+ *reg_addr = 0;
+ break;
+ }
+ if (*reg_addr == 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static inline int clp_param_valid(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type)
+{
+ if (data_type == HINIC3_CLP_REQ_HOST &&
+ reg_type == HINIC3_CLP_READY_RSP_HOST)
+ return -EINVAL;
+
+ if (data_type == HINIC3_CLP_RSP_HOST &&
+ reg_type == HINIC3_CLP_START_REQ_HOST)
+ return -EINVAL;
+
+ return 0;
+}
+
+static u32 get_clp_reg_value(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 reg_addr)
+{
+ u32 value;
+
+ value = hinic3_hwif_read_reg(hwdev->hwif, reg_addr);
+
+ switch (reg_type) {
+ case HINIC3_CLP_BA_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(BASE)) &
+ HINIC3_CLP_MASK(BASE));
+ break;
+
+ case HINIC3_CLP_SIZE_HOST:
+ if (data_type == HINIC3_CLP_REQ_HOST)
+ value = ((value >> HINIC3_CLP_OFFSET(REQ_SIZE)) &
+ HINIC3_CLP_MASK(SIZE));
+ else
+ value = ((value >> HINIC3_CLP_OFFSET(RSP_SIZE)) &
+ HINIC3_CLP_MASK(SIZE));
+ break;
+
+ case HINIC3_CLP_LEN_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(LEN)) &
+ HINIC3_CLP_MASK(LEN));
+ break;
+
+ case HINIC3_CLP_START_REQ_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(START)) &
+ HINIC3_CLP_MASK(START));
+ break;
+
+ case HINIC3_CLP_READY_RSP_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(READY)) &
+ HINIC3_CLP_MASK(READY));
+ break;
+
+ default:
+ break;
+ }
+
+ return value;
+}
+
+static int hinic3_read_clp_reg(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 *read_value)
+{
+ u32 reg_addr;
+ int err;
+
+ err = clp_param_valid(hwdev, data_type, reg_type);
+ if (err)
+ return err;
+
+ err = get_clp_reg(hwdev, data_type, reg_type, &reg_addr);
+ if (err)
+ return err;
+
+ *read_value = get_clp_reg_value(hwdev, data_type, reg_type, reg_addr);
+
+ return 0;
+}
+
+static int check_data_type(enum clp_data_type data_type,
+ enum clp_reg_type reg_type)
+{
+ if (data_type == HINIC3_CLP_REQ_HOST &&
+ reg_type == HINIC3_CLP_READY_RSP_HOST)
+ return -EINVAL;
+ if (data_type == HINIC3_CLP_RSP_HOST &&
+ reg_type == HINIC3_CLP_START_REQ_HOST)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int check_reg_value(enum clp_reg_type reg_type, u32 value)
+{
+ if (reg_type == HINIC3_CLP_BA_HOST &&
+ value > HINIC3_CLP_SRAM_BASE_REG_MAX)
+ return -EINVAL;
+
+ if (reg_type == HINIC3_CLP_SIZE_HOST &&
+ value > HINIC3_CLP_SRAM_SIZE_REG_MAX)
+ return -EINVAL;
+
+ if (reg_type == HINIC3_CLP_LEN_HOST &&
+ value > HINIC3_CLP_LEN_REG_MAX)
+ return -EINVAL;
+
+ if ((reg_type == HINIC3_CLP_START_REQ_HOST ||
+ reg_type == HINIC3_CLP_READY_RSP_HOST) &&
+ value > HINIC3_CLP_START_OR_READY_REG_MAX)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int hinic3_check_clp_init_status(struct hinic3_hwdev *hwdev)
+{
+ int err;
+ u32 reg_value = 0;
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_BA_HOST, &reg_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong req ba value: 0x%x\n",
+ reg_value);
+ return -EINVAL;
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_BA_HOST, &reg_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong rsp ba value: 0x%x\n",
+ reg_value);
+ return -EINVAL;
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_SIZE_HOST, &reg_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong req size\n");
+ return -EINVAL;
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_SIZE_HOST, &reg_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong rsp size\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void hinic3_write_clp_reg(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 value)
+{
+ u32 reg_addr, reg_value;
+
+ if (check_data_type(data_type, reg_type))
+ return;
+
+ if (check_reg_value(reg_type, value))
+ return;
+
+ if (get_clp_reg(hwdev, data_type, reg_type, &reg_addr))
+ return;
+
+ reg_value = hinic3_hwif_read_reg(hwdev->hwif, reg_addr);
+
+ switch (reg_type) {
+ case HINIC3_CLP_LEN_HOST:
+ reg_value = reg_value &
+ (~(HINIC3_CLP_MASK(LEN) << HINIC3_CLP_OFFSET(LEN)));
+ reg_value = reg_value | (value << HINIC3_CLP_OFFSET(LEN));
+ break;
+
+ case HINIC3_CLP_START_REQ_HOST:
+ reg_value = reg_value &
+ (~(HINIC3_CLP_MASK(START) <<
+ HINIC3_CLP_OFFSET(START)));
+ reg_value = reg_value | (value << HINIC3_CLP_OFFSET(START));
+ break;
+
+ case HINIC3_CLP_READY_RSP_HOST:
+ reg_value = reg_value &
+ (~(HINIC3_CLP_MASK(READY) <<
+ HINIC3_CLP_OFFSET(READY)));
+ reg_value = reg_value | (value << HINIC3_CLP_OFFSET(READY));
+ break;
+
+ default:
+ return;
+ }
+
+ hinic3_hwif_write_reg(hwdev->hwif, reg_addr, reg_value);
+}
+
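+/* Poll the response-ready flag, copy the response out of the CLP response
+ * SRAM one 32-bit word at a time, then clear the ready flag and length.
+ */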
+static int hinic3_read_clp_data(struct hinic3_hwdev *hwdev,
+ void *buf_out, u16 *out_size)
+{
+ int err;
+ u32 reg = HINIC3_CLP_DATA(RSP);
+ u32 ready, delay_cnt;
+ u32 *ptr = (u32 *)buf_out;
+ u32 temp_out_size = 0;
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, &ready);
+ if (err)
+ return err;
+
+ delay_cnt = 0;
+ while (ready == 0) {
+ usleep_range(9000, 10000); /* sleep 9000 us ~ 10000 us */
+ delay_cnt++;
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, &ready);
+ if (err || delay_cnt > HINIC3_CLP_DELAY_CNT_MAX) {
+ sdk_err(hwdev->dev_hdl, "Timeout with delay_cnt: %u\n",
+ delay_cnt);
+ return -EINVAL;
+ }
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_LEN_HOST, &temp_out_size);
+ if (err)
+ return err;
+
+ if (temp_out_size > HINIC3_CLP_SRAM_SIZE_REG_MAX || !temp_out_size) {
+ sdk_err(hwdev->dev_hdl, "Invalid temp_out_size: %u\n",
+ temp_out_size);
+ return -EINVAL;
+ }
+
+ *out_size = (u16)temp_out_size;
+ for (; temp_out_size > 0; temp_out_size--) {
+ *ptr = hinic3_hwif_read_reg(hwdev->hwif, reg);
+ ptr++;
+ /* read 4 bytes at a time */
+ reg = reg + 4;
+ }
+
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, (u32)0x0);
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_RSP_HOST, HINIC3_CLP_LEN_HOST,
+ (u32)0x0);
+
+ return 0;
+}
+
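+/* Wait for the previous request's start flag to clear, program the request
+ * length and start flag, then copy the request into the CLP request SRAM
+ * one 32-bit word at a time.
+ */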
+static int hinic3_write_clp_data(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size)
+{
+ int err;
+ u32 reg = HINIC3_CLP_DATA(REQ);
+ u32 start = 1;
+ u32 delay_cnt = 0;
+ u32 *ptr = (u32 *)buf_in;
+ u16 size_in = in_size;
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_START_REQ_HOST, &start);
+ if (err != 0)
+ return err;
+
+ while (start == 1) {
+ usleep_range(9000, 10000); /* sleep 9000 us ~ 10000 us */
+ delay_cnt++;
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_START_REQ_HOST, &start);
+ if (err || delay_cnt > HINIC3_CLP_DELAY_CNT_MAX)
+ return -EINVAL;
+ }
+
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_LEN_HOST, size_in);
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_START_REQ_HOST, (u32)0x1);
+
+ for (; size_in > 0; size_in--) {
+ hinic3_hwif_write_reg(hwdev->hwif, reg, *ptr);
+ ptr++;
+ reg = reg + sizeof(u32);
+ }
+
+ return 0;
+}
+
+static void hinic3_clear_clp_data(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type)
+{
+ u32 reg = (data_type == HINIC3_CLP_REQ_HOST) ?
+ HINIC3_CLP_DATA(REQ) : HINIC3_CLP_DATA(RSP);
+ u32 count = HINIC3_CLP_INPUT_BUF_LEN_HOST / HINIC3_CLP_DATA_UNIT_HOST;
+
+ for (; count > 0; count--) {
+ hinic3_hwif_write_reg(hwdev->hwif, reg, 0x0);
+ reg = reg + sizeof(u32);
+ }
+}
+
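+/* Send a management command over the CLP channel: build the message header,
+ * write the request into CLP SRAM, poll for the response and copy it back
+ * to the caller after validating its length.
+ */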
+int hinic3_pf_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt = NULL;
+ struct hinic3_hwdev *dev = hwdev;
+ u64 header;
+ u16 real_size;
+ u8 *clp_msg_buf = NULL;
+ int err;
+
+ if (!COMM_SUPPORT_CLP(dev))
+ return -EPERM;
+
+ clp_pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->clp_pf_to_mgmt;
+ if (!clp_pf_to_mgmt)
+ return -EPERM;
+
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+
+ /* header + payload, padded to 4-byte alignment */
+ if (in_size % HINIC3_CLP_DATA_UNIT_HOST)
+ real_size = (in_size + (u16)sizeof(header) +
+ HINIC3_CLP_DATA_UNIT_HOST);
+ else
+ real_size = in_size + (u16)sizeof(header);
+ real_size = real_size / HINIC3_CLP_DATA_UNIT_HOST;
+
+ if (real_size >
+ (HINIC3_CLP_INPUT_BUF_LEN_HOST / HINIC3_CLP_DATA_UNIT_HOST)) {
+ sdk_err(dev->dev_hdl, "Invalid real_size: %u\n", real_size);
+ return -EINVAL;
+ }
+ down(&clp_pf_to_mgmt->clp_msg_lock);
+
+ err = hinic3_check_clp_init_status(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Check clp init status failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return err;
+ }
+
+ hinic3_clear_clp_data(dev, HINIC3_CLP_RSP_HOST);
+ hinic3_write_clp_reg(dev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, 0x0);
+
+ /* Send request */
+ memset(clp_msg_buf, 0x0, HINIC3_CLP_INPUT_BUF_LEN_HOST);
+ clp_prepare_header(dev, &header, in_size, mod, 0, 0, cmd, 0);
+
+ memcpy(clp_msg_buf, &header, sizeof(header));
+
+ clp_msg_buf += sizeof(header);
+ memcpy(clp_msg_buf, buf_in, in_size);
+
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+
+ hinic3_clear_clp_data(dev, HINIC3_CLP_REQ_HOST);
+ err = hinic3_write_clp_data(hwdev,
+ clp_pf_to_mgmt->clp_msg_buf, real_size);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Send clp request failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ /* Get response */
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+ memset(clp_msg_buf, 0x0, HINIC3_CLP_INPUT_BUF_LEN_HOST);
+ err = hinic3_read_clp_data(hwdev, clp_msg_buf, &real_size);
+ hinic3_clear_clp_data(dev, HINIC3_CLP_RSP_HOST);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Read clp response failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ real_size = (u16)((real_size * HINIC3_CLP_DATA_UNIT_HOST) & 0xffff);
+ if (real_size <= sizeof(header) || real_size > HINIC3_CLP_INPUT_BUF_LEN_HOST) {
+ sdk_err(dev->dev_hdl, "Invalid response size: %u\n", real_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+ real_size = real_size - sizeof(header);
+ if (real_size != *out_size) {
+ sdk_err(dev->dev_hdl, "Invalid real_size:%u, out_size: %u\n",
+ real_size, *out_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ memcpy(buf_out, (clp_msg_buf + sizeof(header)), real_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+
+ return 0;
+}
+
+int hinic3_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ if (!dev->chip_present_flag)
+ return -EPERM;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_CLP(dev))
+ return -EPERM;
+
+ err = hinic3_pf_clp_to_mgmt(dev, mod, cmd, buf_in, in_size, buf_out,
+ out_size);
+
+ return err;
+}
+
+int hinic3_clp_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt = NULL;
+
+ if (!COMM_SUPPORT_CLP(hwdev))
+ return 0;
+
+ clp_pf_to_mgmt = kzalloc(sizeof(*clp_pf_to_mgmt), GFP_KERNEL);
+ if (!clp_pf_to_mgmt)
+ return -ENOMEM;
+
+ clp_pf_to_mgmt->clp_msg_buf = kzalloc(HINIC3_CLP_INPUT_BUF_LEN_HOST,
+ GFP_KERNEL);
+ if (!clp_pf_to_mgmt->clp_msg_buf) {
+ kfree(clp_pf_to_mgmt);
+ return -ENOMEM;
+ }
+ sema_init(&clp_pf_to_mgmt->clp_msg_lock, 1);
+
+ hwdev->clp_pf_to_mgmt = clp_pf_to_mgmt;
+
+ return 0;
+}
+
+void hinic3_clp_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt = hwdev->clp_pf_to_mgmt;
+
+ if (!COMM_SUPPORT_CLP(hwdev))
+ return;
+
+ sema_deinit(&clp_pf_to_mgmt->clp_msg_lock);
+ kfree(clp_pf_to_mgmt->clp_msg_buf);
+ kfree(clp_pf_to_mgmt);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h
new file mode 100644
index 0000000..48970e3
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MGMT_H
+#define HINIC3_MGMT_H
+
+#include <linux/types.h>
+#include <linux/completion.h>
+#include <linux/semaphore.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+
+#include "mpu_cmd_base_defs.h"
+#include "hinic3_hw.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_MGMT_WQ_NAME "hinic3_mgmt"
+
+#define HINIC3_CLP_REG_GAP 0x20
+#define HINIC3_CLP_INPUT_BUF_LEN_HOST 4096UL
+#define HINIC3_CLP_DATA_UNIT_HOST 4UL
+
+enum clp_data_type {
+ HINIC3_CLP_REQ_HOST = 0,
+ HINIC3_CLP_RSP_HOST = 1
+};
+
+enum clp_reg_type {
+ HINIC3_CLP_BA_HOST = 0,
+ HINIC3_CLP_SIZE_HOST = 1,
+ HINIC3_CLP_LEN_HOST = 2,
+ HINIC3_CLP_START_REQ_HOST = 3,
+ HINIC3_CLP_READY_RSP_HOST = 4
+};
+
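+/* bit-field offsets and masks within the CLP request/response registers */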
+#define HINIC3_CLP_REQ_SIZE_OFFSET 0
+#define HINIC3_CLP_RSP_SIZE_OFFSET 16
+#define HINIC3_CLP_BASE_OFFSET 0
+#define HINIC3_CLP_LEN_OFFSET 0
+#define HINIC3_CLP_START_OFFSET 31
+#define HINIC3_CLP_READY_OFFSET 31
+#define HINIC3_CLP_OFFSET(member) (HINIC3_CLP_##member##_OFFSET)
+
+#define HINIC3_CLP_SIZE_MASK 0x7ffUL
+#define HINIC3_CLP_BASE_MASK 0x7ffffffUL
+#define HINIC3_CLP_LEN_MASK 0x7ffUL
+#define HINIC3_CLP_START_MASK 0x1UL
+#define HINIC3_CLP_READY_MASK 0x1UL
+#define HINIC3_CLP_MASK(member) (HINIC3_CLP_##member##_MASK)
+
+#define HINIC3_CLP_DELAY_CNT_MAX 200UL
+#define HINIC3_CLP_SRAM_SIZE_REG_MAX 0x3ff
+#define HINIC3_CLP_SRAM_BASE_REG_MAX 0x7ffffff
+#define HINIC3_CLP_LEN_REG_MAX 0x3ff
+#define HINIC3_CLP_START_OR_READY_REG_MAX 0x1
+
+struct hinic3_recv_msg {
+ void *msg;
+
+ u16 msg_len;
+ u16 rsvd1;
+ enum hinic3_mod_type mod;
+
+ u16 cmd;
+ u8 seq_id;
+ u8 rsvd2;
+ u16 msg_id;
+ u16 rsvd3;
+
+ int async_mgmt_to_pf;
+ u32 rsvd4;
+
+ struct completion recv_done;
+};
+
+struct hinic3_msg_head {
+ u8 status;
+ u8 version;
+ u8 resp_aeq_num;
+ u8 rsvd0[5];
+};
+
+enum comm_pf_to_mgmt_event_state {
+ SEND_EVENT_UNINIT = 0,
+ SEND_EVENT_START,
+ SEND_EVENT_SUCCESS,
+ SEND_EVENT_FAIL,
+ SEND_EVENT_TIMEOUT,
+ SEND_EVENT_END,
+};
+
+enum hinic3_mgmt_msg_cb_state {
+ HINIC3_MGMT_MSG_CB_REG = 0,
+ HINIC3_MGMT_MSG_CB_RUNNING,
+};
+
+struct hinic3_clp_pf_to_mgmt {
+ struct semaphore clp_msg_lock;
+ void *clp_msg_buf;
+};
+
+struct hinic3_msg_pf_to_mgmt {
+ struct hinic3_hwdev *hwdev;
+
+ /* Async cmd can not be scheduled */
+ spinlock_t async_msg_lock;
+ struct semaphore sync_msg_lock;
+
+ struct workqueue_struct *workq;
+
+ void *async_msg_buf;
+ void *sync_msg_buf;
+ void *mgmt_ack_buf;
+
+ struct hinic3_recv_msg recv_msg_from_mgmt;
+ struct hinic3_recv_msg recv_resp_msg_from_mgmt;
+
+ u16 async_msg_id;
+ u16 sync_msg_id;
+ u32 rsvd1;
+ struct hinic3_api_cmd_chain *cmd_chain[HINIC3_API_CMD_MAX];
+
+ hinic3_mgmt_msg_cb recv_mgmt_msg_cb[HINIC3_MOD_HW_MAX];
+ void *recv_mgmt_msg_data[HINIC3_MOD_HW_MAX];
+ unsigned long mgmt_msg_cb_state[HINIC3_MOD_HW_MAX];
+
+ void *async_msg_cb_data[HINIC3_MOD_HW_MAX];
+
+ /* lock when sending msg */
+ spinlock_t sync_event_lock;
+ enum comm_pf_to_mgmt_event_state event_flag;
+ u64 rsvd2;
+};
+
+struct hinic3_mgmt_msg_handle_work {
+ struct work_struct work;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+
+ void *msg;
+ u16 msg_len;
+ u16 rsvd1;
+
+ enum hinic3_mod_type mod;
+ u16 cmd;
+ u16 msg_id;
+
+ int async_mgmt_to_pf;
+};
+
+void hinic3_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size);
+
+int hinic3_pf_to_mgmt_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_pf_to_mgmt_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_pf_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout);
+int hinic3_pf_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size);
+
+int hinic3_pf_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout);
+
+int hinic3_pf_to_mgmt_no_ack(void *hwdev, enum hinic3_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size);
+
+int hinic3_api_cmd_read_ack(void *hwdev, u8 dest, const void *cmd, u16 size,
+ void *ack, u16 ack_size);
+
+int hinic3_api_cmd_write_nack(void *hwdev, u8 dest, const void *cmd, u16 size);
+
+int hinic3_pf_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+
+int hinic3_clp_pf_to_mgmt_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_clp_pf_to_mgmt_free(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.c
new file mode 100644
index 0000000..b619800
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.c
@@ -0,0 +1,1259 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/semaphore.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/completion.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_mbox.h"
+#include "hinic3_multi_host_mgmt.h"
+#include "hinic3_hw_cfg.h"
+
+#define HINIC3_SUPPORT_MAX_PF_NUM 32
+#define HINIC3_MBOX_PF_BUSY_ACTIVE_FW 0x2
+
+void set_master_host_mbox_enable(struct hinic3_hwdev *hwdev, bool enable)
+{
+ u32 reg_val;
+
+ if (!IS_MASTER_HOST(hwdev) || HINIC3_FUNC_TYPE(hwdev) != TYPE_PPF)
+ return;
+
+ reg_val = hinic3_hwif_read_reg(hwdev->hwif, HINIC3_MULT_HOST_MASTER_MBOX_STATUS_ADDR);
+ reg_val = MULTI_HOST_REG_CLEAR(reg_val, MASTER_MBX_STS);
+ reg_val |= MULTI_HOST_REG_SET((u8)enable, MASTER_MBX_STS);
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_MULT_HOST_MASTER_MBOX_STATUS_ADDR, reg_val);
+
+ sdk_info(hwdev->dev_hdl, "Multi-host status: %d, reg value: 0x%x\n",
+ enable, reg_val);
+}
+
+static bool hinic3_get_master_host_mbox_enable(void *hwdev)
+{
+ u32 reg_val;
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_SLAVE_HOST(dev) || HINIC3_FUNC_TYPE(dev) == TYPE_VF)
+ return true;
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_HOST_MASTER_MBOX_STATUS_ADDR);
+
+ return !!MULTI_HOST_REG_GET(reg_val, MASTER_MBX_STS);
+}
+
+bool hinic3_is_multi_bm(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ return ((IS_BMGW_SLAVE_HOST(hw_dev)) || (IS_BMGW_MASTER_HOST(hw_dev))) ? true : false;
+}
+EXPORT_SYMBOL(hinic3_is_multi_bm);
+
+bool hinic3_is_slave_host(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("hwdev is null\n");
+ return false;
+ }
+
+ return ((IS_BMGW_SLAVE_HOST(hw_dev)) || (IS_VM_SLAVE_HOST(hw_dev))) ? true : false;
+}
+EXPORT_SYMBOL(hinic3_is_slave_host);
+
+bool hinic3_is_vm_slave_host(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("hwdev is null\n");
+ return false;
+ }
+
+ return (IS_VM_SLAVE_HOST(hw_dev)) ? true : false;
+}
+EXPORT_SYMBOL(hinic3_is_vm_slave_host);
+
+bool hinic3_is_bm_slave_host(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("hwdev is null\n");
+ return false;
+ }
+
+ return (IS_BMGW_SLAVE_HOST(hw_dev)) ? true : false;
+}
+EXPORT_SYMBOL(hinic3_is_bm_slave_host);
+
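+/* Resolve the destination host's PPF function index (the master host PPF by
+ * default, or the peer host's PPF once multi-host mgmt is initialized) and
+ * forward the mailbox with or without waiting for an ack.
+ */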
+static int __send_mbox_to_host(struct hinic3_hwdev *mbox_hwdev,
+ struct hinic3_hwdev *hwdev,
+ enum hinic3_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout,
+ enum hinic3_mbox_ack_type ack_type, u16 channel)
+{
+ u8 dst_host_func_idx;
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+
+ if (!mbox_hwdev->chip_present_flag)
+ return -EPERM;
+
+ if (!hinic3_get_master_host_mbox_enable(hwdev)) {
+ sdk_err(hwdev->dev_hdl, "Master host not initialized\n");
+ return -EFAULT;
+ }
+
+ if (!mbox_hwdev->mhost_mgmt) {
+ /* send to the master host by default */
+ dst_host_func_idx = hinic3_host_ppf_idx(hwdev, cap->master_host_id);
+ } else {
+ dst_host_func_idx = IS_MASTER_HOST(hwdev) ?
+ mbox_hwdev->mhost_mgmt->shost_ppf_idx :
+ mbox_hwdev->mhost_mgmt->mhost_ppf_idx;
+ }
+
+ if (ack_type == MBOX_ACK)
+ return hinic3_mbox_to_host(mbox_hwdev, dst_host_func_idx,
+ mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+ else
+ return hinic3_mbox_to_func_no_ack(mbox_hwdev, dst_host_func_idx,
+ mod, cmd, buf_in, in_size, channel);
+}
+
+static int __mbox_to_host(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod,
+ u8 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout,
+ enum hinic3_mbox_ack_type ack_type, u16 channel)
+{
+ struct hinic3_hwdev *mbox_hwdev = hwdev;
+ int err;
+
+ if (!IS_MULTI_HOST(hwdev) || HINIC3_IS_VF(hwdev))
+ return -EPERM;
+
+ if (hinic3_func_type(hwdev) == TYPE_PF) {
+ down(&hwdev->ppf_sem);
+ mbox_hwdev = hwdev->ppf_hwdev;
+ if (!mbox_hwdev) {
+ err = -EINVAL;
+ goto release_lock;
+ }
+
+ if (!test_bit(HINIC3_HWDEV_MBOX_INITED, &mbox_hwdev->func_state)) {
+ err = -EPERM;
+ goto release_lock;
+ }
+ }
+
+ err = __send_mbox_to_host(mbox_hwdev, hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout, ack_type, channel);
+
+release_lock:
+ if (hinic3_func_type(hwdev) == TYPE_PF)
+ up(&hwdev->ppf_sem);
+
+ return err;
+}
+
+int hinic3_mbox_to_host_sync(void *hwdev, enum hinic3_mod_type mod,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout, u16 channel)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return __mbox_to_host((struct hinic3_hwdev *)hwdev, mod, cmd, buf_in,
+ in_size, buf_out, out_size, timeout, MBOX_ACK, channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_to_host_sync);
+
+int hinic3_mbox_to_host_no_ack(struct hinic3_hwdev *hwdev,
+ enum hinic3_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size, u16 channel)
+{
+ return __mbox_to_host(hwdev, mod, cmd, buf_in, in_size, NULL, NULL,
+ 0, MBOX_NO_ACK, channel);
+}
+
+static int __get_func_nic_state_from_pf(struct hinic3_hwdev *hwdev,
+ u16 glb_func_idx, u8 *en);
+static int __get_func_vroce_state_from_pf(struct hinic3_hwdev *hwdev,
+ u16 glb_func_idx, u8 *en);
+
+int sw_func_pf_mbox_handler(void *pri_handle, u16 vf_id, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = pri_handle;
+ struct hinic3_slave_func_nic_state *nic_state = NULL;
+ struct hinic3_slave_func_nic_state *out_state = NULL;
+ int err;
+
+ switch (cmd) {
+ case HINIC3_SW_CMD_GET_SLAVE_FUNC_NIC_STATE:
+ nic_state = buf_in;
+ out_state = buf_out;
+ *out_size = sizeof(*nic_state);
+
+ /* find nic state in PPF func_nic_en bitmap */
+ err = __get_func_nic_state_from_pf(hwdev, nic_state->func_idx,
+ &out_state->enable);
+ out_state->status = err ? 1 : 0;
+
+ break;
+ case HINIC3_SW_CMD_GET_SLAVE_FUNC_VROCE_STATE:
+ nic_state = buf_in;
+ out_state = buf_out;
+ *out_size = sizeof(*nic_state);
+
+ err = __get_func_vroce_state_from_pf(hwdev, nic_state->func_idx,
+ &out_state->enable);
+ out_state->status = err ? 1 : 0;
+
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int __master_host_sw_func_handler(struct hinic3_hwdev *hwdev, u16 pf_idx,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = hwdev->mhost_mgmt;
+ struct register_slave_host *out_shost = NULL;
+ struct register_slave_host *slave_host = NULL;
+ u64 *vroce_en = NULL;
+
+ int err = 0;
+
+ if (!mhost_mgmt)
+ return -ENXIO;
+ switch (cmd) {
+ case HINIC3_SW_CMD_SLAVE_HOST_PPF_REGISTER:
+ slave_host = buf_in;
+ out_shost = buf_out;
+ *out_size = sizeof(*slave_host);
+ vroce_en = out_shost->funcs_vroce_en;
+
+ /* just get information about function nic enable */
+ if (slave_host->get_nic_en) {
+ bitmap_copy((ulong *)out_shost->funcs_nic_en,
+ mhost_mgmt->func_nic_en,
+ HINIC3_MAX_MGMT_FUNCTIONS);
+
+ if (IS_MASTER_HOST(hwdev))
+ bitmap_copy((ulong *)vroce_en,
+ mhost_mgmt->func_vroce_en,
+ HINIC3_MAX_MGMT_FUNCTIONS);
+ out_shost->status = 0;
+ break;
+ }
+
+ mhost_mgmt->shost_registered = true;
+ mhost_mgmt->shost_host_idx = slave_host->host_id;
+ mhost_mgmt->shost_ppf_idx = slave_host->ppf_idx;
+
+ bitmap_copy((ulong *)out_shost->funcs_nic_en,
+ mhost_mgmt->func_nic_en, HINIC3_MAX_MGMT_FUNCTIONS);
+
+ if (IS_MASTER_HOST(hwdev))
+ bitmap_copy((ulong *)vroce_en,
+ mhost_mgmt->func_vroce_en,
+ HINIC3_MAX_MGMT_FUNCTIONS);
+
+ sdk_info(hwdev->dev_hdl, "Slave host registers PPF, host_id: %u, ppf_idx: %u\n",
+ slave_host->host_id, slave_host->ppf_idx);
+
+ out_shost->status = 0;
+ break;
+ case HINIC3_SW_CMD_SLAVE_HOST_PPF_UNREGISTER:
+ slave_host = buf_in;
+ mhost_mgmt->shost_registered = false;
+ sdk_info(hwdev->dev_hdl, "Slave host unregisters PPF, host_id: %u, ppf_idx: %u\n",
+ slave_host->host_id, slave_host->ppf_idx);
+
+ *out_size = sizeof(*slave_host);
+ ((struct register_slave_host *)buf_out)->status = 0;
+ break;
+
+ default:
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+static int __event_func_service_state_handler(struct hinic3_hwdev *hwdev,
+ u8 sub_cmd, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_event_info event_info = {0};
+ struct hinic3_mhost_nic_func_state state = {0};
+ struct hinic3_slave_func_nic_state *out_state = NULL;
+ struct hinic3_slave_func_nic_state *in_state = buf_in;
+
+ if (!hwdev->event_callback)
+ return 0;
+
+ event_info.type = EVENT_COMM_MULTI_HOST_MGMT;
+ ((struct hinic3_multi_host_mgmt_event *)(void *)event_info.event_data)->sub_cmd = sub_cmd;
+ ((struct hinic3_multi_host_mgmt_event *)(void *)event_info.event_data)->data = &state;
+
+ state.func_idx = in_state->func_idx;
+ state.enable = in_state->enable;
+
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+
+ *out_size = sizeof(*out_state);
+ out_state = buf_out;
+ out_state->status = state.status;
+ if (sub_cmd == HINIC3_MHOST_GET_VROCE_STATE)
+ out_state->opened = state.enable;
+
+ return state.status;
+}
+
+static int __event_set_func_nic_state(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return __event_func_service_state_handler(hwdev,
+ HINIC3_MHOST_NIC_STATE_CHANGE,
+ buf_in, in_size,
+ buf_out, out_size);
+}
+
+static int __event_set_func_vroce_state(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return __event_func_service_state_handler(hwdev,
+ HINIC3_MHOST_VROCE_STATE_CHANGE,
+ buf_in, in_size,
+ buf_out, out_size);
+}
+
+static int __event_get_func_vroce_state(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return __event_func_service_state_handler(hwdev,
+ HINIC3_MHOST_GET_VROCE_STATE,
+ buf_in, in_size,
+ buf_out, out_size);
+}
+
+int vf_sw_func_handler(void *hwdev, u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ int err = 0;
+
+ switch (cmd) {
+ case HINIC3_SW_CMD_SET_SLAVE_FUNC_VROCE_STATE:
+ err = __event_set_func_vroce_state(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_GET_SLAVE_VROCE_DEVICE_STATE:
+ err = __event_get_func_vroce_state(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+static int multi_host_event_handler(struct hinic3_hwdev *hwdev,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ int err;
+
+ switch (cmd) {
+ case HINIC3_SW_CMD_SET_SLAVE_FUNC_VROCE_STATE:
+ err = __event_set_func_vroce_state(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_SET_SLAVE_FUNC_NIC_STATE:
+ err = __event_set_func_nic_state(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_GET_SLAVE_VROCE_DEVICE_STATE:
+ err = __event_get_func_vroce_state(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
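+/* Update the func_nic_en bitmap for the given function and notify the
+ * service driver through the event callback; a VF that is not probed in
+ * the host (i.e. running in a VM) cannot be disabled here.
+ */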
+static int sw_set_slave_func_nic_state(struct hinic3_hwdev *hwdev, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_slave_func_nic_state *nic_state = buf_in;
+ struct hinic3_slave_func_nic_state *nic_state_out = buf_out;
+ struct hinic3_multi_host_mgmt *mhost_mgmt = hwdev->mhost_mgmt;
+
+ *out_size = sizeof(*nic_state);
+ nic_state_out->status = 0;
+ sdk_info(hwdev->dev_hdl, "Slave func %u %s nic\n",
+ nic_state->func_idx,
+ nic_state->enable ? "register" : "unregister");
+
+ if (nic_state->enable) {
+ set_bit(nic_state->func_idx, mhost_mgmt->func_nic_en);
+ } else {
+ if ((test_bit(nic_state->func_idx, mhost_mgmt->func_nic_en)) &&
+ nic_state->func_idx >= HINIC3_SUPPORT_MAX_PF_NUM &&
+ (!test_bit(nic_state->func_idx, hwdev->func_probe_in_host))) {
+ sdk_warn(hwdev->dev_hdl, "VF%u in vm, delete tap port failed\n",
+ nic_state->func_idx);
+ nic_state_out->status = HINIC3_VF_IN_VM;
+ return 0;
+ }
+ clear_bit(nic_state->func_idx, mhost_mgmt->func_nic_en);
+ }
+
+ return multi_host_event_handler(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size);
+}
+
+static int sw_set_slave_vroce_state(struct hinic3_hwdev *hwdev, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_slave_func_nic_state *nic_state = buf_in;
+ struct hinic3_slave_func_nic_state *nic_state_out = buf_out;
+ struct hinic3_multi_host_mgmt *mhost_mgmt = hwdev->mhost_mgmt;
+ int err;
+
+ nic_state = buf_in;
+ *out_size = sizeof(*nic_state);
+ nic_state_out->status = 0;
+
+ sdk_info(hwdev->dev_hdl, "Slave func %u %s vroce\n", nic_state->func_idx,
+ nic_state->enable ? "register" : "unregister");
+
+ if (nic_state->enable)
+ set_bit(nic_state->func_idx,
+ mhost_mgmt->func_vroce_en);
+ else
+ clear_bit(nic_state->func_idx,
+ mhost_mgmt->func_vroce_en);
+
+ err = multi_host_event_handler(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+
+ return err;
+}
+
+static int sw_get_slave_vroce_device_state(struct hinic3_hwdev *hwdev, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_slave_func_nic_state *nic_state_out = buf_out;
+ int err;
+
+ *out_size = sizeof(struct hinic3_slave_func_nic_state);
+ nic_state_out->status = 0;
+ err = multi_host_event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+
+ return err;
+}
+
+static void sw_get_slave_netdev_state(struct hinic3_hwdev *hwdev, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_slave_func_nic_state *nic_state = buf_in;
+ struct hinic3_slave_func_nic_state *nic_state_out = buf_out;
+
+ *out_size = sizeof(*nic_state);
+ nic_state_out->status = 0;
+ nic_state_out->opened =
+ test_bit(nic_state->func_idx,
+ hwdev->netdev_setup_state) ? 1 : 0;
+}
+
+static int __slave_host_sw_func_handler(struct hinic3_hwdev *hwdev, u16 pf_idx,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = hwdev->mhost_mgmt;
+ int err = 0;
+
+ if (!mhost_mgmt)
+ return -ENXIO;
+ switch (cmd) {
+ case HINIC3_SW_CMD_SET_SLAVE_FUNC_NIC_STATE:
+ err = sw_set_slave_func_nic_state(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_SET_SLAVE_FUNC_VROCE_STATE:
+ err = sw_set_slave_vroce_state(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_GET_SLAVE_VROCE_DEVICE_STATE:
+ err = sw_get_slave_vroce_device_state(hwdev, cmd,
+ buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_GET_SLAVE_NETDEV_STATE:
+ sw_get_slave_netdev_state(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+static int sw_func_ppf_mbox_handler(void *handle, u16 pf_idx, u16 vf_id, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = handle;
+ int err;
+
+ if (IS_MASTER_HOST(hwdev))
+ err = __master_host_sw_func_handler(hwdev, pf_idx, (u8)cmd, buf_in,
+ in_size, buf_out, out_size);
+ else if (IS_SLAVE_HOST(hwdev))
+ err = __slave_host_sw_func_handler(hwdev, pf_idx, (u8)cmd, buf_in,
+ in_size, buf_out, out_size);
+ else
+ err = -EINVAL;
+
+ if (err)
+ sdk_err(hwdev->dev_hdl, "PPF process sw funcs cmd %u failed, err: %d\n",
+ cmd, err);
+
+ return err;
+}
+
+static int __ppf_process_mbox_msg(struct hinic3_hwdev *hwdev, u16 pf_idx, u16 vf_id,
+ enum hinic3_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ /* return an error when the mode is not supported */
+ int err = -EFAULT;
+
+ if (IS_SLAVE_HOST(hwdev)) {
+ err = hinic3_mbox_to_host_sync(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err)
+ sdk_err(hwdev->dev_hdl, "Send mailbox to mPF failed, err: %d\n",
+ err);
+ } else if (IS_MASTER_HOST(hwdev)) {
+ if (mod == HINIC3_MOD_COMM && cmd == COMM_MGMT_CMD_START_FLR)
+ err = hinic3_pf_to_mgmt_no_ack(hwdev, mod, cmd, buf_in,
+ in_size);
+ else
+ err = hinic3_pf_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in,
+ in_size, buf_out,
+ out_size, 0U);
+ if (err && err != HINIC3_MBOX_PF_BUSY_ACTIVE_FW)
+ sdk_err(hwdev->dev_hdl, "PF mbox mod %d cmd %u callback handler err: %d\n",
+ mod, cmd, err);
+ }
+
+ return err;
+}
+
+static int hinic3_ppf_process_mbox_msg(struct hinic3_hwdev *hwdev, u16 pf_idx, u16 vf_id,
+ enum hinic3_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ bool same_host = false;
+ int err = -EFAULT;
+
+ /* Currently, only the master ppf and slave ppf communicate with each
+ * other through ppf messages. If other PF/VFs need to communicate
+ * with the PPF, modify the same_host based on the
+ * hinic3_get_hw_pf_infos information.
+ */
+
+ switch (hwdev->func_mode) {
+ case FUNC_MOD_MULTI_VM_MASTER:
+ case FUNC_MOD_MULTI_BM_MASTER:
+ if (!same_host)
+ err = __ppf_process_mbox_msg(hwdev, pf_idx, vf_id,
+ mod, cmd, buf_in, in_size,
+ buf_out, out_size);
+ else
+ sdk_warn(hwdev->dev_hdl, "Doesn't support PPF mbox message in BM master\n");
+
+ break;
+ case FUNC_MOD_MULTI_VM_SLAVE:
+ case FUNC_MOD_MULTI_BM_SLAVE:
+ same_host = true;
+ if (same_host)
+ err = __ppf_process_mbox_msg(hwdev, pf_idx, vf_id,
+ mod, cmd, buf_in, in_size,
+ buf_out, out_size);
+ else
+ sdk_warn(hwdev->dev_hdl, "Doesn't support receiving control messages from BM master\n");
+
+ break;
+ default:
+ sdk_warn(hwdev->dev_hdl, "Doesn't support PPF mbox message\n");
+
+ break;
+ }
+
+ return err;
+}
+
+static int comm_ppf_mbox_handler(void *handle, u16 pf_idx, u16 vf_id, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ return hinic3_ppf_process_mbox_msg(handle, pf_idx, vf_id, HINIC3_MOD_COMM,
+ (u8)cmd, buf_in, in_size, buf_out,
+ out_size);
+}
+
+static int hilink_ppf_mbox_handler(void *handle, u16 pf_idx, u16 vf_id, u16 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return hinic3_ppf_process_mbox_msg(handle, pf_idx, vf_id,
+ HINIC3_MOD_HILINK, (u8)cmd, buf_in,
+ in_size, buf_out, out_size);
+}
+
+static int hinic3_nic_ppf_mbox_handler(void *handle, u16 pf_idx, u16 vf_id, u16 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return hinic3_ppf_process_mbox_msg(handle, pf_idx, vf_id,
+ HINIC3_MOD_L2NIC, (u8)cmd, buf_in, in_size,
+ buf_out, out_size);
+}
+
+static int hinic3_register_slave_ppf(struct hinic3_hwdev *hwdev, bool registered)
+{
+ struct register_slave_host *host_info = NULL;
+ u16 out_size = sizeof(struct register_slave_host);
+ u8 cmd;
+ int err;
+
+ if (!IS_SLAVE_HOST(hwdev))
+ return -EINVAL;
+
+ /* nothing to do if hot plug is not supported */
+ if (UNSUPPORT_HOT_PLUG((struct hinic3_hwdev *)hwdev)) {
+ return 0;
+ }
+
+ host_info = kcalloc(1, sizeof(struct register_slave_host), GFP_KERNEL);
+ if (!host_info)
+ return -ENOMEM;
+
+ cmd = registered ? HINIC3_SW_CMD_SLAVE_HOST_PPF_REGISTER :
+ HINIC3_SW_CMD_SLAVE_HOST_PPF_UNREGISTER;
+
+ host_info->host_id = hinic3_pcie_itf_id(hwdev);
+ host_info->ppf_idx = hinic3_ppf_idx(hwdev);
+
+ err = hinic3_mbox_to_host_sync(hwdev, HINIC3_MOD_SW_FUNC, cmd,
+ host_info, sizeof(struct register_slave_host), host_info,
+ &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (!!err || !out_size || host_info->status) {
+ sdk_err(hwdev->dev_hdl, "Failed to %s slave host, err: %d, out_size: 0x%x, status: 0x%x\n",
+ registered ? "register" : "unregister", err, out_size, host_info->status);
+
+ kfree(host_info);
+ return -EFAULT;
+ }
+ bitmap_copy(hwdev->mhost_mgmt->func_nic_en,
+ (ulong *)host_info->funcs_nic_en,
+ HINIC3_MAX_MGMT_FUNCTIONS);
+
+ if (IS_SLAVE_HOST(hwdev))
+ bitmap_copy(hwdev->mhost_mgmt->func_vroce_en,
+ (ulong *)host_info->funcs_vroce_en,
+ HINIC3_MAX_MGMT_FUNCTIONS);
+
+ kfree(host_info);
+ return 0;
+}
+
+static int get_host_id_by_func_id(struct hinic3_hwdev *hwdev, u16 func_idx,
+ u8 *host_id)
+{
+ struct hinic3_hw_pf_infos *pf_infos = NULL;
+ u16 vf_id_start, vf_id_end;
+ int i;
+
+ if (!hwdev || !host_id || !hwdev->mhost_mgmt)
+ return -EINVAL;
+
+ pf_infos = &hwdev->mhost_mgmt->pf_infos;
+
+ for (i = 0; i < pf_infos->num_pfs; i++) {
+ if (func_idx == pf_infos->infos[i].glb_func_idx) {
+ *host_id = pf_infos->infos[i].itf_idx;
+ return 0;
+ }
+
+ vf_id_start = pf_infos->infos[i].glb_pf_vf_offset + 1;
+ vf_id_end = pf_infos->infos[i].glb_pf_vf_offset +
+ pf_infos->infos[i].max_vfs;
+ if (func_idx >= vf_id_start && func_idx <= vf_id_end) {
+ *host_id = pf_infos->infos[i].itf_idx;
+ return 0;
+ }
+ }
+
+ return -EFAULT;
+}
+
+static int set_slave_func_nic_state(struct hinic3_hwdev *hwdev,
+ struct hinic3_func_nic_state *state)
+{
+ struct hinic3_slave_func_nic_state nic_state = {0};
+ u16 out_size = sizeof(nic_state);
+ u8 cmd = HINIC3_SW_CMD_SET_SLAVE_FUNC_NIC_STATE;
+ int err;
+
+ nic_state.func_idx = state->func_idx;
+ nic_state.enable = state->state;
+ nic_state.vroce_flag = state->vroce_flag;
+
+ if (state->vroce_flag)
+ cmd = HINIC3_SW_CMD_SET_SLAVE_FUNC_VROCE_STATE;
+
+ err = hinic3_mbox_to_host_sync(hwdev, HINIC3_MOD_SW_FUNC,
+ cmd, &nic_state, sizeof(nic_state),
+ &nic_state, &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+ sdk_warn(hwdev->dev_hdl,
+ "Can not notify func %u %s state because slave host isn't initialized\n",
+ state->func_idx, state->vroce_flag ? "vroce" : "nic");
+ } else if (err || !out_size || nic_state.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to set slave %s state, err: %d, out_size: 0x%x, status: 0x%x\n",
+ state->vroce_flag ? "vroce" : "nic",
+ err, out_size, nic_state.status);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int get_slave_func_netdev_state(struct hinic3_hwdev *hwdev, u16 func_idx, int *opened)
+{
+ struct hinic3_slave_func_nic_state nic_state = {0};
+ u16 out_size = sizeof(nic_state);
+ int err;
+
+ nic_state.func_idx = func_idx;
+ err = hinic3_mbox_to_host_sync(hwdev, HINIC3_MOD_SW_FUNC,
+ HINIC3_SW_CMD_GET_SLAVE_NETDEV_STATE,
+ &nic_state, sizeof(nic_state), &nic_state,
+ &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+ sdk_warn(hwdev->dev_hdl,
+ "Can not get func %u netdev state because slave host isn't initialized\n",
+ func_idx);
+ } else if (err || !out_size || nic_state.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to get netdev state, err: %d, out_size: 0x%x, status: 0x%x\n",
+ err, out_size, nic_state.status);
+ return -EFAULT;
+ }
+
+ *opened = nic_state.opened;
+ return 0;
+}
+
+static int set_nic_state_params_valid(void *hwdev,
+ struct hinic3_func_nic_state *state)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = NULL;
+ struct hinic3_hwdev *ppf_hwdev = hwdev;
+
+ if (!hwdev || !state)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ ppf_hwdev = ((struct hinic3_hwdev *)hwdev)->ppf_hwdev;
+
+ if (!ppf_hwdev || !IS_MASTER_HOST(ppf_hwdev))
+ return -EINVAL;
+
+ mhost_mgmt = ppf_hwdev->mhost_mgmt;
+ if (!mhost_mgmt || state->func_idx >= HINIC3_MAX_MGMT_FUNCTIONS)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int get_func_current_state(struct hinic3_multi_host_mgmt *mhost_mgmt,
+ struct hinic3_func_nic_state *state,
+ int *old_state)
+{
+ ulong *func_bitmap = NULL;
+
+ if (state->vroce_flag == 1)
+ func_bitmap = mhost_mgmt->func_vroce_en;
+ else
+ func_bitmap = mhost_mgmt->func_nic_en;
+
+ *old_state = test_bit(state->func_idx, func_bitmap) ? 1 : 0;
+ if (state->state == HINIC3_FUNC_NIC_DEL)
+ clear_bit(state->func_idx, func_bitmap);
+ else if (state->state == HINIC3_FUNC_NIC_ADD)
+ set_bit(state->func_idx, func_bitmap);
+ else
+ return -EINVAL;
+
+ return 0;
+}
+
+static bool check_vroce_state(struct hinic3_multi_host_mgmt *mhost_mgmt,
+ struct hinic3_func_nic_state *state)
+{
+ bool is_ready = true;
+ ulong *func_bitmap = mhost_mgmt->func_vroce_en;
+
+ if (!state->vroce_flag && state->state == HINIC3_FUNC_NIC_DEL)
+ is_ready = test_bit(state->func_idx, func_bitmap) ? false : true;
+
+ return is_ready;
+}
+
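+/* Master host API: update the local nic/vroce enable bitmap for a function
+ * and, if the owning slave host is enabled, notify it of the state change;
+ * the nic bitmap is rolled back on failure.
+ */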
+int hinic3_set_func_nic_state(void *hwdev, struct hinic3_func_nic_state *state)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = NULL;
+ struct hinic3_hwdev *ppf_hwdev = hwdev;
+ u8 host_enable;
+ int err, old_state = 0;
+ u8 host_id = 0;
+
+ err = set_nic_state_params_valid(hwdev, state);
+ if (err)
+ return err;
+
+ mhost_mgmt = ppf_hwdev->mhost_mgmt;
+
+ if (IS_MASTER_HOST(ppf_hwdev) &&
+ !check_vroce_state(mhost_mgmt, state)) {
+ sdk_warn(ppf_hwdev->dev_hdl,
+ "Should disable vroce before disabling nic for function %u\n",
+ state->func_idx);
+ return -EFAULT;
+ }
+
+ err = get_func_current_state(mhost_mgmt, state, &old_state);
+ if (err) {
+ sdk_err(ppf_hwdev->dev_hdl, "Failed to get function %u current state, err: %d\n",
+ state->func_idx, err);
+ return err;
+ }
+
+ err = get_host_id_by_func_id(ppf_hwdev, state->func_idx, &host_id);
+ if (err) {
+ sdk_err(ppf_hwdev->dev_hdl,
+ "Failed to get function %u host id, err: %d\n", state->func_idx, err);
+ if (state->vroce_flag)
+ return -EFAULT;
+
+ old_state ? set_bit(state->func_idx, mhost_mgmt->func_nic_en) :
+ clear_bit(state->func_idx, mhost_mgmt->func_nic_en);
+ return -EFAULT;
+ }
+
+ err = hinic3_get_slave_host_enable(hwdev, host_id, &host_enable);
+ if (err != 0) {
+ sdk_err(ppf_hwdev->dev_hdl,
+ "Get slave host %u enable failed, ret %d\n", host_id, err);
+ return err;
+ }
+ sdk_info(ppf_hwdev->dev_hdl, "Set slave host %u(status: %u) func %u %s %s\n",
+ host_id, host_enable, state->func_idx,
+ state->state ? "enable" : "disable", state->vroce_flag ? "vroce" : "nic");
+
+ if (!host_enable)
+ return 0;
+
+ /* notify slave host */
+ err = set_slave_func_nic_state(hwdev, state);
+ if (err) {
+ if (state->vroce_flag)
+ return -EFAULT;
+
+ old_state ? set_bit(state->func_idx, mhost_mgmt->func_nic_en) :
+ clear_bit(state->func_idx, mhost_mgmt->func_nic_en);
+ return err;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_func_nic_state);
+
+int hinic3_get_netdev_state(void *hwdev, u16 func_idx, int *opened)
+{
+ struct hinic3_hwdev *ppf_hwdev = hwdev;
+ int err;
+ u8 host_enable;
+ u8 host_id = 0;
+ struct hinic3_func_nic_state state = {0};
+
+ *opened = 0;
+ state.func_idx = func_idx;
+ err = set_nic_state_params_valid(hwdev, &state);
+ if (err)
+ return err;
+
+ err = get_host_id_by_func_id(ppf_hwdev, func_idx, &host_id);
+ if (err) {
+ sdk_err(ppf_hwdev->dev_hdl, "Failed to get function %u host id, err: %d\n",
+ func_idx, err);
+ return -EFAULT;
+ }
+
+ err = hinic3_get_slave_host_enable(hwdev, host_id, &host_enable);
+ if (err != 0) {
+ sdk_err(ppf_hwdev->dev_hdl, "Get slave host %u enable failed, ret %d\n",
+ host_id, err);
+ return err;
+ }
+ if (!host_enable)
+ return 0;
+
+ return get_slave_func_netdev_state(hwdev, func_idx, opened);
+}
+EXPORT_SYMBOL(hinic3_get_netdev_state);
+
+static int __get_func_nic_state_from_pf(struct hinic3_hwdev *hwdev,
+ u16 glb_func_idx, u8 *en)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = NULL;
+ struct hinic3_hwdev *ppf_hwdev = hwdev;
+
+ down(&hwdev->ppf_sem);
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ ppf_hwdev = ((struct hinic3_hwdev *)hwdev)->ppf_hwdev;
+
+ if (!ppf_hwdev || !ppf_hwdev->mhost_mgmt) {
+ up(&hwdev->ppf_sem);
+ return -EFAULT;
+ }
+
+ mhost_mgmt = ppf_hwdev->mhost_mgmt;
+ *en = !!test_bit(glb_func_idx, mhost_mgmt->func_nic_en);
+ up(&hwdev->ppf_sem);
+
+ return 0;
+}
+
+static int __get_func_vroce_state_from_pf(struct hinic3_hwdev *hwdev,
+ u16 glb_func_idx, u8 *en)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = NULL;
+ struct hinic3_hwdev *ppf_hwdev = hwdev;
+
+ down(&hwdev->ppf_sem);
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ ppf_hwdev = ((struct hinic3_hwdev *)hwdev)->ppf_hwdev;
+
+ if (!ppf_hwdev || !ppf_hwdev->mhost_mgmt) {
+ up(&hwdev->ppf_sem);
+ return -EFAULT;
+ }
+
+ mhost_mgmt = ppf_hwdev->mhost_mgmt;
+ *en = !!test_bit(glb_func_idx, mhost_mgmt->func_vroce_en);
+ up(&hwdev->ppf_sem);
+
+ return 0;
+}
+
+static int __get_vf_func_nic_state(struct hinic3_hwdev *hwdev, u16 glb_func_idx,
+ bool *en)
+{
+ struct hinic3_slave_func_nic_state nic_state = {0};
+ u16 out_size = sizeof(nic_state);
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ nic_state.func_idx = glb_func_idx;
+ err = hinic3_mbox_to_pf(hwdev, HINIC3_MOD_SW_FUNC,
+ HINIC3_SW_CMD_GET_SLAVE_FUNC_NIC_STATE,
+ &nic_state, sizeof(nic_state),
+ &nic_state, &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err || !out_size || nic_state.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to get vf %u state, err: %d, out_size: %u, status: 0x%x\n",
+ glb_func_idx, err, out_size, nic_state.status);
+ return -EFAULT;
+ }
+
+ *en = !!nic_state.enable;
+
+ return 0;
+ }
+
+ return -EFAULT;
+}
+
+static int __get_func_vroce_state(struct hinic3_hwdev *hwdev, u16 glb_func_idx,
+ u8 *en)
+{
+ struct hinic3_slave_func_nic_state vroce_state = {0};
+ u16 out_size = sizeof(vroce_state);
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ vroce_state.func_idx = glb_func_idx;
+ err = hinic3_mbox_to_pf(hwdev, HINIC3_MOD_SW_FUNC,
+ HINIC3_SW_CMD_GET_SLAVE_FUNC_VROCE_STATE,
+ &vroce_state, sizeof(vroce_state),
+ &vroce_state, &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err || !out_size || vroce_state.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to get vf %u state, err: %d, out_size: %u, status: 0x%x\n",
+ glb_func_idx, err, out_size, vroce_state.status);
+ return -EFAULT;
+ }
+
+ *en = !!vroce_state.enable;
+
+ return 0;
+ }
+
+ return __get_func_vroce_state_from_pf(hwdev, glb_func_idx, en);
+}
+
+int hinic3_get_func_vroce_enable(void *hwdev, u16 glb_func_idx, u8 *en)
+{
+ if (!hwdev || !en)
+ return -EINVAL;
+
+ return __get_func_vroce_state(hwdev, glb_func_idx, en);
+}
+EXPORT_SYMBOL(hinic3_get_func_vroce_enable);
+
+int hinic3_get_func_nic_enable(void *hwdev, u16 glb_func_idx, bool *en)
+{
+ u8 nic_en;
+ int err;
+
+ if (!hwdev || !en)
+ return -EINVAL;
+
+ /* if single host or hot plug is not supported, nic is always enabled. */
+ if (!IS_MULTI_HOST((struct hinic3_hwdev *)hwdev) ||
+ UNSUPPORT_HOT_PLUG((struct hinic3_hwdev *)hwdev)) {
+ *en = true;
+ return 0;
+ }
+
+ if (!IS_SLAVE_HOST((struct hinic3_hwdev *)hwdev)) {
+ /* if card mode is OVS, VFs don't need to attach the uld, so return false. */
+ if (hinic3_func_type(hwdev) == TYPE_VF &&
+ hinic3_support_ovs(hwdev, NULL))
+ *en = false;
+ else
+ *en = true;
+
+ return 0;
+ }
+
+ /* PFs in the slave host should be probed in CHIP_MODE_VMGW
+ * mode for PXE install.
+ * PF indexes must be in the range 0 ~ 31.
+ */
+ if (hinic3_func_type(hwdev) != TYPE_VF &&
+ IS_VM_SLAVE_HOST((struct hinic3_hwdev *)hwdev) &&
+ glb_func_idx < HINIC3_SUPPORT_MAX_PF_NUM) {
+ *en = true;
+ return 0;
+ }
+
+ /* try to get function nic state in sdk directly */
+ err = __get_func_nic_state_from_pf(hwdev, glb_func_idx, &nic_en);
+ if (err) {
+ if (glb_func_idx < HINIC3_SUPPORT_MAX_PF_NUM)
+ return err;
+ } else {
+ *en = !!nic_en;
+ return 0;
+ }
+
+ return __get_vf_func_nic_state(hwdev, glb_func_idx, en);
+}
+
+static int slave_host_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ if (IS_SLAVE_HOST(hwdev)) {
+ /* PXE doesn't support receiving mbox from the master host */
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), true);
+ if ((IS_VM_SLAVE_HOST(hwdev) &&
+ hinic3_get_master_host_mbox_enable(hwdev)) ||
+ IS_BMGW_SLAVE_HOST(hwdev)) {
+ err = hinic3_register_slave_ppf(hwdev, true);
+ if (err) {
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), false);
+ return err;
+ }
+ }
+ } else {
+ /* slave hosts can send messages to the mgmt cpu
+ * after the master mbox is set up
+ */
+ set_master_host_mbox_enable(hwdev, true);
+ }
+
+ return 0;
+}
+
+int hinic3_multi_host_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+ int is_use_vram, is_in_kexec;
+
+ if (!IS_MULTI_HOST(hwdev) || !HINIC3_IS_PPF(hwdev))
+ return 0;
+
+ is_use_vram = get_use_vram_flag();
+ if (is_use_vram != 0) {
+ snprintf(hwdev->mhost_mgmt_name, VRAM_NAME_MAX_LEN, "%s", VRAM_NIC_MHOST_MGMT);
+ hwdev->mhost_mgmt = hi_vram_kalloc(hwdev->mhost_mgmt_name, sizeof(*hwdev->mhost_mgmt));
+ } else {
+ hwdev->mhost_mgmt = kcalloc(1, sizeof(*hwdev->mhost_mgmt), GFP_KERNEL);
+ }
+ if (!hwdev->mhost_mgmt)
+ return -ENOMEM;
+
+ hwdev->mhost_mgmt->shost_ppf_idx = hinic3_host_ppf_idx(hwdev, HINIC3_MGMT_SHOST_HOST_ID);
+ hwdev->mhost_mgmt->mhost_ppf_idx = hinic3_host_ppf_idx(hwdev, cap->master_host_id);
+
+ err = hinic3_get_hw_pf_infos(hwdev, &hwdev->mhost_mgmt->pf_infos, HINIC3_CHANNEL_COMM);
+ if (err)
+ goto out_free_mhost_mgmt;
+
+ hinic3_register_ppf_mbox_cb(hwdev, HINIC3_MOD_COMM, hwdev, comm_ppf_mbox_handler);
+ hinic3_register_ppf_mbox_cb(hwdev, HINIC3_MOD_L2NIC, hwdev, hinic3_nic_ppf_mbox_handler);
+ hinic3_register_ppf_mbox_cb(hwdev, HINIC3_MOD_HILINK, hwdev, hilink_ppf_mbox_handler);
+ hinic3_register_ppf_mbox_cb(hwdev, HINIC3_MOD_SW_FUNC, hwdev, sw_func_ppf_mbox_handler);
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec == 0) {
+ bitmap_zero(hwdev->mhost_mgmt->func_nic_en, HINIC3_MAX_MGMT_FUNCTIONS);
+ bitmap_zero(hwdev->mhost_mgmt->func_vroce_en, HINIC3_MAX_MGMT_FUNCTIONS);
+ }
+
+ /* Slave host: register the slave host ppf with the master host
+ * and get each function's nic state.
+ */
+ err = slave_host_init(hwdev);
+ if (err)
+ goto out_free_mhost_mgmt;
+
+ return 0;
+
+out_free_mhost_mgmt:
+ if (is_use_vram != 0) {
+ hi_vram_kfree((void *)hwdev->mhost_mgmt,
+ hwdev->mhost_mgmt_name,
+ sizeof(*hwdev->mhost_mgmt));
+ } else {
+ kfree(hwdev->mhost_mgmt);
+ }
+ hwdev->mhost_mgmt = NULL;
+
+ return err;
+}
+
+int hinic3_multi_host_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ int is_use_vram;
+
+ if (!IS_MULTI_HOST(hwdev) || !HINIC3_IS_PPF(hwdev))
+ return 0;
+
+ if (IS_SLAVE_HOST(hwdev)) {
+ hinic3_register_slave_ppf(hwdev, false);
+
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), false);
+ } else {
+ set_master_host_mbox_enable(hwdev, false);
+ }
+
+ hinic3_unregister_ppf_mbox_cb(hwdev, HINIC3_MOD_COMM);
+ hinic3_unregister_ppf_mbox_cb(hwdev, HINIC3_MOD_L2NIC);
+ hinic3_unregister_ppf_mbox_cb(hwdev, HINIC3_MOD_HILINK);
+ hinic3_unregister_ppf_mbox_cb(hwdev, HINIC3_MOD_SW_FUNC);
+
+ is_use_vram = get_use_vram_flag();
+ if (is_use_vram != 0) {
+ hi_vram_kfree((void *)hwdev->mhost_mgmt,
+ hwdev->mhost_mgmt_name,
+ sizeof(*hwdev->mhost_mgmt));
+ } else {
+ kfree(hwdev->mhost_mgmt);
+ }
+ hwdev->mhost_mgmt = NULL;
+
+ return 0;
+}
+
+int hinic3_get_mhost_func_nic_enable(void *hwdev, u16 func_id, bool *en)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u8 func_en;
+ int ret;
+
+ if (!hwdev || !en || func_id >= HINIC3_MAX_MGMT_FUNCTIONS || !IS_MULTI_HOST(dev))
+ return -EINVAL;
+
+ ret = __get_func_nic_state_from_pf(hwdev, func_id, &func_en);
+ if (ret)
+ return ret;
+
+ *en = !!func_en;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_mhost_func_nic_enable);
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.h
new file mode 100644
index 0000000..fb25160
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.h
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2022 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MULTI_HOST_MGMT_H
+#define HINIC3_MULTI_HOST_MGMT_H
+
+#define HINIC3_VF_IN_VM 0x3
+
+#define HINIC3_MGMT_SHOST_HOST_ID 0
+#define HINIC3_MAX_MGMT_FUNCTIONS 1024
+#define HINIC3_MAX_MGMT_FUNCTIONS_64 (HINIC3_MAX_MGMT_FUNCTIONS / 64)
+
+struct hinic3_multi_host_mgmt {
+ struct hinic3_hwdev *hwdev;
+
+ /* slave host registered */
+ bool shost_registered;
+ u8 shost_host_idx;
+ u8 shost_ppf_idx;
+
+ u8 mhost_ppf_idx;
+ u8 rsvd1;
+
+ /* bitmap of slave host functions with nic enabled */
+ DECLARE_BITMAP(func_nic_en, HINIC3_MAX_MGMT_FUNCTIONS);
+ DECLARE_BITMAP(func_vroce_en, HINIC3_MAX_MGMT_FUNCTIONS);
+
+ struct hinic3_hw_pf_infos pf_infos;
+
+ u64 rsvd2;
+};
+
+struct hinic3_host_fwd_head {
+ unsigned short dst_glb_func_idx;
+ unsigned char dst_itf_idx;
+ unsigned char mod;
+
+ unsigned char cmd;
+ unsigned char rsv[3];
+};
+
+/* software cmds, vf->pf and multi-host */
+enum hinic3_sw_funcs_cmd {
+ HINIC3_SW_CMD_SLAVE_HOST_PPF_REGISTER = 0x0,
+ HINIC3_SW_CMD_SLAVE_HOST_PPF_UNREGISTER,
+ HINIC3_SW_CMD_GET_SLAVE_FUNC_NIC_STATE,
+ HINIC3_SW_CMD_SET_SLAVE_FUNC_NIC_STATE,
+ HINIC3_SW_CMD_SEND_MSG_TO_VF,
+ HINIC3_SW_CMD_MIGRATE_READY,
+ HINIC3_SW_CMD_GET_SLAVE_NETDEV_STATE,
+
+ HINIC3_SW_CMD_GET_SLAVE_FUNC_VROCE_STATE,
+ HINIC3_SW_CMD_SET_SLAVE_FUNC_VROCE_STATE,
+ HINIC3_SW_CMD_GET_SLAVE_VROCE_DEVICE_STATE = 0x9, // must match the macro in vroce_cfg_vf_do.h
+};
+
+/* multi host mgmt event sub cmd */
+enum hinic3_mhost_even_type {
+ HINIC3_MHOST_NIC_STATE_CHANGE = 1,
+ HINIC3_MHOST_VROCE_STATE_CHANGE = 2,
+ HINIC3_MHOST_GET_VROCE_STATE = 3,
+};
+
+struct hinic3_mhost_nic_func_state {
+ u8 status;
+ u8 enable;
+ u16 func_idx;
+};
+
+struct hinic3_multi_host_mgmt_event {
+ u16 sub_cmd;
+ u16 rsvd[3];
+
+ void *data;
+};
+
+int hinic3_multi_host_mgmt_init(struct hinic3_hwdev *hwdev);
+int hinic3_multi_host_mgmt_free(struct hinic3_hwdev *hwdev);
+int hinic3_mbox_to_host_no_ack(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size, u16 channel);
+
+struct register_slave_host {
+ u8 status;
+ u8 version;
+ u8 rsvd[6];
+
+ u8 host_id;
+ u8 ppf_idx;
+ u8 get_nic_en;
+ u8 rsvd2[5];
+
+ /* 16 * 64 bits for max 1024 functions */
+ u64 funcs_nic_en[HINIC3_MAX_MGMT_FUNCTIONS_64];
+ /* 16 * 64 bits for max 1024 functions */
+ u64 funcs_vroce_en[HINIC3_MAX_MGMT_FUNCTIONS_64];
+};
+
+struct hinic3_slave_func_nic_state {
+ u8 status;
+ u8 version;
+ u8 rsvd[6];
+
+ u16 func_idx;
+ u8 enable;
+ u8 opened;
+ u8 vroce_flag;
+ u8 rsvd2[7];
+};
+
+void set_master_host_mbox_enable(struct hinic3_hwdev *hwdev, bool enable);
+
+int sw_func_pf_mbox_handler(void *pri_handle, u16 vf_id, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+
+int vf_sw_func_handler(void *hwdev, u8 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+int hinic3_set_func_probe_in_host(void *hwdev, u16 func_id, bool probe);
+bool hinic3_get_func_probe_in_host(void *hwdev, u16 func_id);
+
+void *hinic3_get_ppf_hwdev_by_pdev(struct pci_dev *pdev);
+
+int hinic3_get_func_nic_enable(void *hwdev, u16 glb_func_idx, bool *en);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c
new file mode 100644
index 0000000..ee7afef
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c
@@ -0,0 +1,1021 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <net/sock.h>
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_dev_mgmt.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_lld.h"
+#include "hinic3_hw_mt.h"
+#include "hinic3_nictool.h"
+
+static int g_nictool_ref_cnt;
+
+static dev_t g_dev_id = {0};
+static struct class *g_nictool_class;
+static struct cdev g_nictool_cdev;
+
+#define HINIC3_MAX_BUF_SIZE (2048 * 1024)
+
+void *g_card_node_array[MAX_CARD_NUM] = {0};
+void *g_card_vir_addr[MAX_CARD_NUM] = {0};
+u64 g_card_phy_addr[MAX_CARD_NUM] = {0};
+int card_id;
+
+#define HIADM3_DEV_PATH "/dev/hinic3_nictool_dev"
+#define HIADM3_DEV_CLASS "hinic3_nictool_class"
+#define HIADM3_DEV_NAME "hinic3_nictool_dev"
+
+typedef int (*hw_driv_module)(struct hinic3_lld_dev *lld_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size);
+
+struct hw_drv_module_handle {
+ enum driver_cmd_type driv_cmd_name;
+ hw_driv_module driv_func;
+};
+
+static int get_single_card_info(struct hinic3_lld_dev *lld_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ if (!buf_out || *out_size != sizeof(struct card_info)) {
+ pr_err("buf_out is NULL, or out_size != %lu\n", sizeof(struct card_info));
+ return -EINVAL;
+ }
+
+ hinic3_get_card_info(hinic3_get_sdk_hwdev_by_lld(lld_dev), buf_out);
+
+ return 0;
+}
+
+static int is_driver_in_vm(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ bool in_host = false;
+
+ if (!buf_out || (*out_size != sizeof(u8))) {
+ pr_err("buf_out is NULL, or out_size != %lu\n", sizeof(u8));
+ return -EINVAL;
+ }
+
+ in_host = hinic3_is_in_host();
+ if (in_host)
+ *((u8 *)buf_out) = 0;
+ else
+ *((u8 *)buf_out) = 1;
+
+ return 0;
+}
+
+static int get_all_chip_id_cmd(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ if (*out_size != sizeof(struct nic_card_id) || !buf_out) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+ *out_size, sizeof(struct nic_card_id));
+ return -EFAULT;
+ }
+
+ hinic3_get_all_chip_id(buf_out);
+
+ return 0;
+}
+
+static int get_os_hot_replace_info(struct hinic3_lld_dev *lld_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ if (*out_size != sizeof(struct os_hot_replace_info) || !buf_out) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+ *out_size, sizeof(struct os_hot_replace_info));
+ return -EFAULT;
+ }
+
+ hinic3_get_os_hot_replace_info(buf_out);
+
+ return 0;
+}
+
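+/* Allocate, once per card, the page block used as the userspace API chain
+ * buffer, record its physical address and mark the pages reserved.
+ */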
+static int get_card_usr_api_chain_mem(int card_idx)
+{
+ unsigned char *tmp = NULL;
+ int i;
+
+ card_id = card_idx;
+ if (!g_card_vir_addr[card_idx]) {
+ g_card_vir_addr[card_idx] =
+ (void *)ossl_get_free_pages(GFP_KERNEL,
+ DBGTOOL_PAGE_ORDER);
+ if (!g_card_vir_addr[card_idx]) {
+ pr_err("Failed to alloc api chain memory for card %d!\n", card_idx);
+ return -EFAULT;
+ }
+
+ memset(g_card_vir_addr[card_idx], 0,
+ PAGE_SIZE * (1 << DBGTOOL_PAGE_ORDER));
+
+ g_card_phy_addr[card_idx] =
+ virt_to_phys(g_card_vir_addr[card_idx]);
+ if (!g_card_phy_addr[card_idx]) {
+ pr_err("phy addr for card %d is 0\n", card_idx);
+ free_pages((unsigned long)g_card_vir_addr[card_idx], DBGTOOL_PAGE_ORDER);
+ g_card_vir_addr[card_idx] = NULL;
+ return -EFAULT;
+ }
+
+ tmp = g_card_vir_addr[card_idx];
+ for (i = 0; i < (1 << DBGTOOL_PAGE_ORDER); i++) {
+ SetPageReserved(virt_to_page(tmp));
+ tmp += PAGE_SIZE;
+ }
+ }
+
+ return 0;
+}
+
+static void chipif_get_all_pf_dev_info(struct pf_dev_info *dev_info, int card_idx,
+ void **g_func_handle_array)
+{
+ u32 func_idx;
+ void *hwdev = NULL;
+ struct pci_dev *pdev = NULL;
+
+ for (func_idx = 0; func_idx < PF_DEV_INFO_NUM; func_idx++) {
+ hwdev = (void *)g_func_handle_array[func_idx];
+
+ dev_info[func_idx].phy_addr = g_card_phy_addr[card_idx];
+
+ if (!hwdev) {
+ dev_info[func_idx].bar0_size = 0;
+ dev_info[func_idx].bus = 0;
+ dev_info[func_idx].slot = 0;
+ dev_info[func_idx].func = 0;
+ } else {
+ pdev = (struct pci_dev *)hinic3_get_pcidev_hdl(hwdev);
+ dev_info[func_idx].bar0_size =
+ pci_resource_len(pdev, 0);
+ dev_info[func_idx].bus = pdev->bus->number;
+ dev_info[func_idx].slot = PCI_SLOT(pdev->devfn);
+ dev_info[func_idx].func = PCI_FUNC(pdev->devfn);
+ }
+ }
+}
+
+static int get_pf_dev_info(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct pf_dev_info *dev_info = buf_out;
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ int id, err;
+
+ if (!buf_out || *out_size != sizeof(struct pf_dev_info) * PF_DEV_INFO_NUM) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+ *out_size, sizeof(*dev_info) * PF_DEV_INFO_NUM);
+ return -EFAULT;
+ }
+
+ err = sscanf(card_info->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ return err;
+ }
+
+ if (id >= MAX_CARD_NUM || id < 0) {
+ pr_err("chip id %d exceed limit[0-%d]\n", id, MAX_CARD_NUM - 1);
+ return -EINVAL;
+ }
+
+ chipif_get_all_pf_dev_info(dev_info, id, card_info->func_handle_array);
+
+ err = get_card_usr_api_chain_mem(id);
+ if (err) {
+ pr_err("Failed to get api chain memory for userspace %s\n",
+ card_info->chip_name);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static void dbgtool_knl_free_mem(int id)
+{
+ unsigned char *tmp = NULL;
+ int i;
+
+ if (id < 0 || id >= MAX_CARD_NUM) {
+ pr_err("Invalid card id\n");
+ return;
+ }
+
+ if (!g_card_vir_addr[id])
+ return;
+
+ tmp = g_card_vir_addr[id];
+ for (i = 0; i < (1 << DBGTOOL_PAGE_ORDER); i++) {
+ ClearPageReserved(virt_to_page(tmp));
+ tmp += PAGE_SIZE;
+ }
+
+ free_pages((unsigned long)g_card_vir_addr[id], DBGTOOL_PAGE_ORDER);
+ g_card_vir_addr[id] = NULL;
+ g_card_phy_addr[id] = 0;
+}
+
+static int free_knl_mem(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ int id, err;
+
+ err = sscanf(card_info->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ return err;
+ }
+
+ if (id >= MAX_CARD_NUM || id < 0) {
+ pr_err("chip id %d exceed limit[0-%d]\n", id, MAX_CARD_NUM - 1);
+ return -EINVAL;
+ }
+
+ dbgtool_knl_free_mem(id);
+
+ return 0;
+}
+
+static int card_info_param_valid(const char *dev_name, const void *buf_out,
+ u32 buf_out_size, int *id)
+{
+ int err;
+
+ if (!buf_out || buf_out_size != sizeof(struct hinic3_card_func_info)) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+ buf_out_size, sizeof(struct hinic3_card_func_info));
+ return -EINVAL;
+ }
+
+ err = memcmp(dev_name, HINIC3_CHIP_NAME, strlen(HINIC3_CHIP_NAME));
+ if (err) {
+ pr_err("Invalid chip name %s\n", dev_name);
+ return err;
+ }
+
+ err = sscanf(dev_name, HINIC3_CHIP_NAME "%d", id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ return err;
+ }
+
+ if (*id >= MAX_CARD_NUM || *id < 0) {
+ pr_err("chip id %d exceed limit[0-%d]\n",
+ *id, MAX_CARD_NUM - 1);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int get_card_func_info(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct hinic3_card_func_info *card_func_info = buf_out;
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ int err, id = 0;
+
+ err = card_info_param_valid(card_info->chip_name, buf_out, *out_size, &id);
+ if (err)
+ return err;
+
+ hinic3_get_card_func_info_by_card_name(card_info->chip_name, card_func_info);
+
+ if (!card_func_info->num_pf) {
+ pr_err("No function found for %s\n", card_info->chip_name);
+ return -EFAULT;
+ }
+
+ err = get_card_usr_api_chain_mem(id);
+ if (err) {
+ pr_err("Failed to get api chain memory for userspace %s\n",
+ card_info->chip_name);
+ return -EFAULT;
+ }
+
+ card_func_info->usr_api_phy_addr = g_card_phy_addr[id];
+
+ return 0;
+}
+
+static int get_pf_cap_info(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct service_cap *func_cap = NULL;
+ struct hinic3_hwdev *hwdev = NULL;
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ struct svc_cap_info *svc_cap_info_in = (struct svc_cap_info *)buf_in;
+ struct svc_cap_info *svc_cap_info_out = (struct svc_cap_info *)buf_out;
+
+ if (*out_size != sizeof(struct svc_cap_info) || in_size != sizeof(struct svc_cap_info) ||
+ !buf_in || !buf_out) {
+ pr_err("Invalid parameter: out_buf_size %u, in_size: %u, expect %lu\n",
+ *out_size, in_size, sizeof(struct svc_cap_info));
+ return -EINVAL;
+ }
+
+ if (svc_cap_info_in->func_idx >= MAX_FUNCTION_NUM) {
+ pr_err("func_idx is illegal. func_idx: %u, max_num: %u\n",
+ svc_cap_info_in->func_idx, MAX_FUNCTION_NUM);
+ return -EINVAL;
+ }
+
+ lld_hold();
+ hwdev = (struct hinic3_hwdev *)(card_info->func_handle_array)[svc_cap_info_in->func_idx];
+ if (!hwdev) {
+ lld_put();
+ return -EINVAL;
+ }
+
+ func_cap = &hwdev->cfg_mgmt->svc_cap;
+ memcpy(&svc_cap_info_out->cap, func_cap, sizeof(struct service_cap));
+ lld_put();
+
+ return 0;
+}
+
+static int get_hw_drv_version(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct drv_version_info *ver_info = buf_out;
+ int err;
+
+ if (!buf_out) {
+ pr_err("Buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(*ver_info)) {
+ pr_err("Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*ver_info));
+ return -EINVAL;
+ }
+
+ err = snprintf(ver_info->ver, sizeof(ver_info->ver), "%s %s", HINIC3_DRV_VERSION,
+ "2025-05-01_00:00:03");
+ if (err < 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int get_pf_id(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct hinic3_pf_info *pf_info = NULL;
+ struct card_node *chip_node = hinic3_get_chip_node_by_lld(lld_dev);
+ u32 port_id;
+ int err;
+
+ if (!chip_node)
+ return -ENODEV;
+
+ if (!buf_out || (*out_size != sizeof(*pf_info)) || !buf_in || in_size != sizeof(u32)) {
+ pr_err("Unexpected out buf size from user: %u, expect: %lu, in size: %u\n",
+ *out_size, sizeof(*pf_info), in_size);
+ return -EINVAL;
+ }
+
+ port_id = *((u32 *)buf_in);
+ pf_info = (struct hinic3_pf_info *)buf_out;
+ err = hinic3_get_pf_id(chip_node, port_id, &pf_info->pf_id, &pf_info->isvalid);
+ if (err)
+ return err;
+
+ *out_size = sizeof(*pf_info);
+
+ return 0;
+}
+
+static int get_mbox_cnt(struct hinic3_lld_dev *lld_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ if (buf_out == NULL || *out_size != sizeof(struct card_mbox_cnt_info)) {
+ pr_err("buf_out is NULL, or out_size != %lu\n",
+ sizeof(struct card_mbox_cnt_info));
+ return -EINVAL;
+ }
+
+ hinic3_get_mbox_cnt(hinic3_get_sdk_hwdev_by_lld(lld_dev), buf_out);
+
+ return 0;
+}
+
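+/* Dispatch table for SEND_TO_HW_DRIVER ioctl commands */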
+struct hw_drv_module_handle hw_driv_module_cmd_handle[] = {
+ {FUNC_TYPE, get_func_type},
+ {GET_FUNC_IDX, get_func_id},
+ {GET_HW_STATS, (hw_driv_module)get_hw_driver_stats},
+ {CLEAR_HW_STATS, clear_hw_driver_stats},
+ {GET_SELF_TEST_RES, get_self_test_result},
+ {GET_CHIP_FAULT_STATS, (hw_driv_module)get_chip_faults_stats},
+ {GET_SINGLE_CARD_INFO, (hw_driv_module)get_single_card_info},
+ {IS_DRV_IN_VM, is_driver_in_vm},
+ {GET_CHIP_ID, get_all_chip_id_cmd},
+ {GET_PF_DEV_INFO, get_pf_dev_info},
+ {CMD_FREE_MEM, free_knl_mem},
+ {GET_CHIP_INFO, get_card_func_info},
+ {GET_FUNC_CAP, get_pf_cap_info},
+ {GET_DRV_VERSION, get_hw_drv_version},
+ {GET_PF_ID, get_pf_id},
+ {GET_OS_HOT_REPLACE_INFO, get_os_hot_replace_info},
+ {GET_MBOX_CNT, (hw_driv_module)get_mbox_cnt},
+};
+
+static int alloc_tmp_buf(void *hwdev, struct msg_module *nt_msg, u32 in_size,
+ void **buf_in, u32 out_size, void **buf_out)
+{
+ int ret;
+
+ ret = alloc_buff_in(hwdev, nt_msg, in_size, buf_in);
+ if (ret) {
+ pr_err("Alloc tool cmd buff in failed\n");
+ return ret;
+ }
+
+ ret = alloc_buff_out(hwdev, nt_msg, out_size, buf_out);
+ if (ret) {
+ pr_err("Alloc tool cmd buff out failed\n");
+ goto out_free_buf_in;
+ }
+
+ return 0;
+
+out_free_buf_in:
+ free_buff_in(hwdev, nt_msg, *buf_in);
+
+ return ret;
+}
+
+static void free_tmp_buf(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, void *buf_out)
+{
+ free_buff_out(hwdev, nt_msg, buf_out);
+ free_buff_in(hwdev, nt_msg, buf_in);
+}
+
+static int send_to_hw_driver(struct hinic3_lld_dev *lld_dev, struct msg_module *nt_msg,
+ const void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ int index, num_cmds = (int)(sizeof(hw_driv_module_cmd_handle) /
+ sizeof(hw_driv_module_cmd_handle[0]));
+ enum driver_cmd_type cmd_type =
+ (enum driver_cmd_type)(nt_msg->msg_formate);
+ int err = 0;
+
+ for (index = 0; index < num_cmds; index++) {
+ if (cmd_type ==
+ hw_driv_module_cmd_handle[index].driv_cmd_name) {
+ err = hw_driv_module_cmd_handle[index].driv_func
+ (lld_dev, buf_in, in_size, buf_out, out_size);
+ break;
+ }
+ }
+
+ if (index == num_cmds) {
+ pr_err("Can't find callback for %d\n", cmd_type);
+ return -EINVAL;
+ }
+
+ return err;
+}
+
+static int send_to_service_driver(struct hinic3_lld_dev *lld_dev, struct msg_module *nt_msg,
+ const void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ const char **service_name = NULL;
+ enum hinic3_service_type type;
+ void *uld_dev = NULL;
+ int ret = -EINVAL;
+
+ service_name = hinic3_get_uld_names();
+ type = nt_msg->module - SEND_TO_SRV_DRV_BASE;
+ if (type >= SERVICE_T_MAX) {
+ pr_err("Ioctl input module id: %u is incorrect\n", nt_msg->module);
+ return -EINVAL;
+ }
+
+ uld_dev = hinic3_get_uld_dev(lld_dev, type);
+ if (!uld_dev) {
+ if (nt_msg->msg_formate == GET_DRV_VERSION)
+ return 0;
+
+ pr_err("Cannot get the uld dev correctly: %s driver may not be registered\n",
+ service_name[type]);
+ return -EINVAL;
+ }
+
+ if (g_uld_info[type].ioctl)
+ ret = g_uld_info[type].ioctl(uld_dev, nt_msg->msg_formate,
+ buf_in, in_size, buf_out, out_size);
+ uld_dev_put(lld_dev, type);
+
+ return ret;
+}
+
+static int nictool_exec_cmd(struct hinic3_lld_dev *lld_dev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ int ret = 0;
+
+ switch (nt_msg->module) {
+ case SEND_TO_HW_DRIVER:
+ ret = send_to_hw_driver(lld_dev, nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ case SEND_TO_MPU:
+ ret = send_to_mpu(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ case SEND_TO_SM:
+ ret = send_to_sm(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ case SEND_TO_NPU:
+ ret = send_to_npu(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ default:
+ ret = send_to_service_driver(lld_dev, nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ }
+
+ return ret;
+}
+
+static int cmd_parameter_valid(struct msg_module *nt_msg, unsigned long arg,
+ u32 *out_size_expect, u32 *in_size)
+{
+ if (copy_from_user(nt_msg, (void *)arg, sizeof(*nt_msg))) {
+ pr_err("Copy information from user failed\n");
+ return -EFAULT;
+ }
+
+ *out_size_expect = nt_msg->buf_out_size;
+ *in_size = nt_msg->buf_in_size;
+ if (*out_size_expect > HINIC3_MAX_BUF_SIZE ||
+ *in_size > HINIC3_MAX_BUF_SIZE) {
+ pr_err("Invalid in size: %u or out size: %u\n",
+ *in_size, *out_size_expect);
+ return -EFAULT;
+ }
+
+ nt_msg->device_name[IFNAMSIZ - 1] = '\0';
+
+ return 0;
+}
+
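+/* Pick the lld_dev lookup method that matches the ioctl target module and message type */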
+static struct hinic3_lld_dev *get_lld_dev_by_nt_msg(struct msg_module *nt_msg)
+{
+ struct hinic3_lld_dev *lld_dev = NULL;
+
+ if (nt_msg->module == SEND_TO_NIC_DRIVER &&
+ (nt_msg->msg_formate == GET_XSFP_INFO ||
+ nt_msg->msg_formate == GET_XSFP_PRESENT ||
+ nt_msg->msg_formate == GET_XSFP_INFO_COMP_CMIS)) {
+ lld_dev = hinic3_get_lld_dev_by_chip_and_port(nt_msg->device_name, nt_msg->port_id);
+ } else if (nt_msg->module == SEND_TO_CUSTOM_DRIVER &&
+ nt_msg->msg_formate == CMD_CUSTOM_BOND_GET_CHIP_NAME) {
+ lld_dev = hinic3_get_lld_dev_by_dev_name(nt_msg->device_name,
+ SERVICE_T_MAX);
+ } else if (nt_msg->module == SEND_TO_VBS_DRIVER ||
+ nt_msg->module == SEND_TO_BIFUR_DRIVER) {
+ lld_dev = hinic3_get_lld_dev_by_chip_name(nt_msg->device_name);
+ } else if (nt_msg->module >= SEND_TO_SRV_DRV_BASE &&
+ nt_msg->module < SEND_TO_DRIVER_MAX &&
+ nt_msg->msg_formate != GET_DRV_VERSION) {
+ lld_dev = hinic3_get_lld_dev_by_dev_name(nt_msg->device_name,
+ nt_msg->module - SEND_TO_SRV_DRV_BASE);
+ } else {
+ lld_dev = hinic3_get_lld_dev_by_chip_name(nt_msg->device_name);
+ if (!lld_dev)
+ lld_dev = hinic3_get_lld_dev_by_dev_name(nt_msg->device_name, SERVICE_T_MAX);
+ }
+
+ return lld_dev;
+}
+
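+/* hinicadm ioctl path: validate the request, find the target device, run the command and copy the result back to user space */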
+static long hinicadm_k_unlocked_ioctl(struct file *pfile, unsigned long arg)
+{
+ struct hinic3_lld_dev *lld_dev = NULL;
+ struct msg_module nt_msg;
+ void *buf_out = NULL;
+ void *buf_in = NULL;
+ u32 out_size_expect = 0;
+ u32 out_size = 0;
+ u32 in_size = 0;
+ int ret = 0;
+
+ memset(&nt_msg, 0, sizeof(nt_msg));
+ if (cmd_parameter_valid(&nt_msg, arg, &out_size_expect, &in_size))
+ return -EFAULT;
+
+ lld_dev = get_lld_dev_by_nt_msg(&nt_msg);
+ if (!lld_dev) {
+ if (nt_msg.msg_formate != DEV_NAME_TEST)
+ pr_err("Cannot find device %s for module %u\n",
+ nt_msg.device_name, nt_msg.module);
+
+ return -ENODEV;
+ }
+
+ if (nt_msg.msg_formate == DEV_NAME_TEST) {
+ lld_dev_put(lld_dev);
+ return 0;
+ }
+
+ ret = alloc_tmp_buf(hinic3_get_sdk_hwdev_by_lld(lld_dev), &nt_msg,
+ in_size, &buf_in, out_size_expect, &buf_out);
+ if (ret) {
+ pr_err("Alloc tmp buff failed\n");
+ goto out_free_lock;
+ }
+
+ out_size = out_size_expect;
+
+ ret = nictool_exec_cmd(lld_dev, &nt_msg, buf_in, in_size, buf_out, &out_size);
+ if (ret) {
+ pr_err("nictool_exec_cmd failed, module: %u, ret: %d.\n", nt_msg.module, ret);
+ goto out_free_buf;
+ }
+
+ if (out_size > out_size_expect) {
+ ret = -EFAULT;
+ pr_err("Out size is greater than expected out size from user: %u, out size: %u\n",
+ out_size_expect, out_size);
+ goto out_free_buf;
+ }
+
+ ret = copy_buf_out_to_user(&nt_msg, out_size, buf_out);
+ if (ret)
+ pr_err("Copy information to user failed\n");
+
+out_free_buf:
+ free_tmp_buf(hinic3_get_sdk_hwdev_by_lld(lld_dev), &nt_msg, buf_in, buf_out);
+
+out_free_lock:
+ lld_dev_put(lld_dev);
+ return (long)ret;
+}
+
+/**
+ * dbgtool_knl_ffm_info_rd - Read ffm information
+ * @para: the dbgtool parameter
+ * @dbgtool_info: the dbgtool info
+ **/
+static long dbgtool_knl_ffm_info_rd(struct dbgtool_param *para,
+ struct dbgtool_k_glb_info *dbgtool_info)
+{
+ if (!para->param.ffm_rd || !dbgtool_info->ffm)
+ return -EINVAL;
+
+ /* Copy the ffm_info to user mode */
+ if (copy_to_user(para->param.ffm_rd, dbgtool_info->ffm,
+ (unsigned int)sizeof(struct ffm_record_info))) {
+ pr_err("Copy ffm_info to user fail\n");
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
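+/* dbgtool ioctl path: look up the card by chip name and dispatch the command under dbgtool_sem */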
+static long dbgtool_k_unlocked_ioctl(struct file *pfile,
+ unsigned int real_cmd,
+ unsigned long arg)
+{
+ long ret;
+ struct dbgtool_param param;
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+ struct card_node *card_info = NULL;
+ int i;
+
+ (void)memset(&param, 0, sizeof(param));
+
+ if (copy_from_user(&param, (void *)arg, sizeof(param))) {
+ pr_err("Copy param from user fail\n");
+ return -EFAULT;
+ }
+
+ lld_hold();
+ for (i = 0; i < MAX_CARD_NUM; i++) {
+ card_info = (struct card_node *)g_card_node_array[i];
+ if (!card_info)
+ continue;
+ if (memcmp(param.chip_name, card_info->chip_name,
+ strlen(card_info->chip_name) + 1) == 0)
+ break;
+ }
+
+ if (i == MAX_CARD_NUM || !card_info) {
+ lld_put();
+ pr_err("Can't find this card.\n");
+ return -EFAULT;
+ }
+
+ card_id = i;
+ dbgtool_info = (struct dbgtool_k_glb_info *)card_info->dbgtool_info;
+
+ down(&dbgtool_info->dbgtool_sem);
+
+ switch (real_cmd) {
+ case DBGTOOL_CMD_FFM_RD:
+ ret = dbgtool_knl_ffm_info_rd(&param, dbgtool_info);
+ break;
+ case DBGTOOL_CMD_MSG_2_UP:
+ pr_err("Not supposed to use this cmd(0x%x).\n", real_cmd);
+ ret = 0;
+ break;
+
+ default:
+ pr_err("Dbgtool cmd(0x%x) not supported now\n", real_cmd);
+ ret = -EFAULT;
+ break;
+ }
+
+ up(&dbgtool_info->dbgtool_sem);
+
+ lld_put();
+
+ return ret;
+}
+
+static int nictool_k_release(struct inode *pnode, struct file *pfile)
+{
+ return 0;
+}
+
+static int nictool_k_open(struct inode *pnode, struct file *pfile)
+{
+ return 0;
+}
+
+static ssize_t nictool_k_read(struct file *pfile, char __user *ubuf,
+ size_t size, loff_t *ppos)
+{
+ return 0;
+}
+
+static ssize_t nictool_k_write(struct file *pfile, const char __user *ubuf,
+ size_t size, loff_t *ppos)
+{
+ return 0;
+}
+
+static long nictool_k_unlocked_ioctl(struct file *pfile,
+ unsigned int cmd, unsigned long arg)
+{
+ unsigned int real_cmd;
+
+ real_cmd = _IOC_NR(cmd);
+
+ return (real_cmd == NICTOOL_CMD_TYPE) ?
+ hinicadm_k_unlocked_ioctl(pfile, arg) :
+ dbgtool_k_unlocked_ioctl(pfile, real_cmd, arg);
+}
+
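+/* Map the API chain buffer (or a validated BAR range) into user space as non-cached memory */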
+static int hinic3_mem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ pgprot_t vm_page_prot;
+ unsigned long vmsize = vma->vm_end - vma->vm_start;
+ phys_addr_t offset = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
+ phys_addr_t phy_addr;
+ int err = 0;
+
+ if (vmsize > (PAGE_SIZE * (1 << DBGTOOL_PAGE_ORDER))) {
+ pr_err("Map size = %lu is bigger than allocated size\n", vmsize);
+ return -EAGAIN;
+ }
+
+ /* old versions of the tool set vma->vm_pgoff to 0 */
+ phy_addr = offset ? offset : g_card_phy_addr[card_id];
+ /* check phy_addr valid */
+ if (phy_addr != g_card_phy_addr[card_id]) {
+ err = hinic3_bar_mmap_param_valid(phy_addr, vmsize);
+ if (err != 0) {
+ pr_err("mmap param invalid, err: %d\n", err);
+ return err;
+ }
+ }
+
+ /* Disable cache and write buffer in the mapping area */
+ vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ vma->vm_page_prot = vm_page_prot;
+ if (remap_pfn_range(vma, vma->vm_start, (phy_addr >> PAGE_SHIFT),
+ vmsize, vma->vm_page_prot)) {
+ pr_err("Remap pfn range failed.\n");
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static const struct file_operations fifo_operations = {
+ .owner = THIS_MODULE,
+ .release = nictool_k_release,
+ .open = nictool_k_open,
+ .read = nictool_k_read,
+ .write = nictool_k_write,
+ .unlocked_ioctl = nictool_k_unlocked_ioctl,
+ .mmap = hinic3_mem_mmap,
+};
+
+static void free_dbgtool_info(void *hwdev, struct card_node *chip_info)
+{
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ chip_info->func_handle_array[hinic3_global_func_id(hwdev)] = NULL;
+
+ if (--chip_info->func_num)
+ return;
+
+ if (chip_info->chip_id >= 0 && chip_info->chip_id < MAX_CARD_NUM)
+ g_card_node_array[chip_info->chip_id] = NULL;
+
+ dbgtool_info = chip_info->dbgtool_info;
+ /* FFM deinit */
+ if (dbgtool_info && dbgtool_info->ffm) {
+ kfree(dbgtool_info->ffm);
+ dbgtool_info->ffm = NULL;
+ }
+
+ if (dbgtool_info)
+ kfree(dbgtool_info);
+
+ chip_info->dbgtool_info = NULL;
+
+ if (chip_info->chip_id >= 0 && chip_info->chip_id < MAX_CARD_NUM)
+ dbgtool_knl_free_mem(chip_info->chip_id);
+}
+
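+/* The per-chip dbgtool context is created when the first function of a card probes and torn down with the last one */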
+static int alloc_dbgtool_info(void *hwdev, struct card_node *chip_info)
+{
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+ int err, id = 0;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ chip_info->func_handle_array[hinic3_global_func_id(hwdev)] = hwdev;
+
+ if (chip_info->func_num++)
+ return 0;
+
+ dbgtool_info = (struct dbgtool_k_glb_info *)
+ kzalloc(sizeof(struct dbgtool_k_glb_info), GFP_KERNEL);
+ if (!dbgtool_info) {
+ pr_err("Failed to allocate dbgtool_info\n");
+ goto dbgtool_info_fail;
+ }
+
+ chip_info->dbgtool_info = dbgtool_info;
+
+ /* FFM init */
+ dbgtool_info->ffm = (struct ffm_record_info *)
+ kzalloc(sizeof(struct ffm_record_info), GFP_KERNEL);
+ if (!dbgtool_info->ffm) {
+ pr_err("Failed to allocate cell contexts for a chain\n");
+ goto dbgtool_info_ffm_fail;
+ }
+
+ sema_init(&dbgtool_info->dbgtool_sem, 1);
+
+ err = sscanf(chip_info->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ goto sscanf_chdev_fail;
+ }
+
+ g_card_node_array[id] = chip_info;
+
+ return 0;
+
+sscanf_chdev_fail:
+ kfree(dbgtool_info->ffm);
+
+dbgtool_info_ffm_fail:
+ kfree(dbgtool_info);
+ chip_info->dbgtool_info = NULL;
+
+dbgtool_info_fail:
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ chip_info->func_handle_array[hinic3_global_func_id(hwdev)] = NULL;
+ chip_info->func_num--;
+ return -ENOMEM;
+}
+
+/**
+ * nictool_k_init - initialize the hw interface
+ **/
+/* temp for dbgtool_info */
+int nictool_k_init(void *hwdev, void *chip_node)
+{
+ struct card_node *chip_info = (struct card_node *)chip_node;
+ struct device *pdevice = NULL;
+ int err;
+
+ err = alloc_dbgtool_info(hwdev, chip_info);
+ if (err)
+ return err;
+
+ if (g_nictool_ref_cnt++) {
+ /* already initialized */
+ return 0;
+ }
+
+ err = alloc_chrdev_region(&g_dev_id, 0, 1, HIADM3_DEV_NAME);
+ if (err) {
+ pr_err("Register nictool_dev failed(0x%x)\n", err);
+ goto alloc_chdev_fail;
+ }
+
+ /* Create the device class */
+ g_nictool_class = class_create(THIS_MODULE, HIADM3_DEV_CLASS);
+ if (IS_ERR(g_nictool_class)) {
+ pr_err("Create nictool_class fail\n");
+ err = -EFAULT;
+ goto class_create_err;
+ }
+
+ /* Initializing the character device */
+ cdev_init(&g_nictool_cdev, &fifo_operations);
+
+ /* Add devices to the operating system */
+ err = cdev_add(&g_nictool_cdev, g_dev_id, 1);
+ if (err < 0) {
+ pr_err("Add nictool_dev to operating system fail(0x%x)\n", err);
+ goto cdev_add_err;
+ }
+
+ /* Export device information to user space
+ * (/sys/class/class name/device name)
+ */
+ pdevice = device_create(g_nictool_class, NULL,
+ g_dev_id, NULL, HIADM3_DEV_NAME);
+ if (IS_ERR(pdevice)) {
+ pr_err("Export nictool device information to user space fail\n");
+ err = -EFAULT;
+ goto device_create_err;
+ }
+
+ pr_info("Register nictool_dev to system succeeded\n");
+
+ return 0;
+
+device_create_err:
+ cdev_del(&g_nictool_cdev);
+
+cdev_add_err:
+ class_destroy(g_nictool_class);
+
+class_create_err:
+ g_nictool_class = NULL;
+ unregister_chrdev_region(g_dev_id, 1);
+
+alloc_chdev_fail:
+ g_nictool_ref_cnt--;
+ free_dbgtool_info(hwdev, chip_info);
+
+ return err;
+}
+
+void nictool_k_uninit(void *hwdev, void *chip_node)
+{
+ struct card_node *chip_info = (struct card_node *)chip_node;
+
+ free_dbgtool_info(hwdev, chip_info);
+
+ if (!g_nictool_ref_cnt)
+ return;
+
+ if (--g_nictool_ref_cnt)
+ return;
+
+ if (!g_nictool_class || IS_ERR(g_nictool_class)) {
+ pr_err("Nictool class is NULL.\n");
+ return;
+ }
+
+ device_destroy(g_nictool_class, g_dev_id);
+ cdev_del(&g_nictool_cdev);
+ class_destroy(g_nictool_class);
+ g_nictool_class = NULL;
+
+ unregister_chrdev_region(g_dev_id, 1);
+
+ pr_info("Unregister nictool_dev succeeded\n");
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h
new file mode 100644
index 0000000..c943dfc
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NICTOOL_H
+#define HINIC3_NICTOOL_H
+
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+
+#ifndef MAX_SIZE
+#define MAX_SIZE (16)
+#endif
+
+#define DBGTOOL_PAGE_ORDER (10)
+
+#define MAX_CARD_NUM (64)
+
+int nictool_k_init(void *hwdev, void *chip_node);
+void nictool_k_uninit(void *hwdev, void *chip_node);
+
+void hinic3_get_os_hot_replace_info(void *oshr_info);
+
+void hinic3_get_all_chip_id(void *id_info);
+
+void hinic3_get_card_func_info_by_card_name
+ (const char *chip_name, struct hinic3_card_func_info *card_func);
+
+void hinic3_get_card_info(const void *hwdev, void *bufin);
+
+bool hinic3_is_in_host(void);
+
+int hinic3_get_pf_id(struct card_node *chip_node, u32 port_id, u32 *pf_id, u32 *isvalid);
+
+void hinic3_get_mbox_cnt(const void *hwdev, void *bufin);
+
+extern struct hinic3_uld_info g_uld_info[SERVICE_T_MAX];
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h
new file mode 100644
index 0000000..6f145a0
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_PCI_ID_TBL_H
+#define HINIC3_PCI_ID_TBL_H
+
+#define HINIC3_VIRTIO_VNEDER_ID 0x1AF4
+#ifdef CONFIG_SP_VID_DID
+#define PCI_VENDOR_ID_SPNIC 0x1F3F
+#define HINIC3_DEV_ID_STANDARD 0x9020
+#define HINIC3_DEV_ID_SDI_5_1_PF 0x9032
+#define HINIC3_DEV_ID_SDI_5_0_PF 0x9031
+#define HINIC3_DEV_ID_DPU_PF 0x9030
+#define HINIC3_DEV_ID_SPN120 0x9021
+#define HINIC3_DEV_ID_VF 0x9001
+#define HINIC3_DEV_ID_VF_HV 0x9002
+#define HINIC3_DEV_SDI_5_1_ID_VF 0x9003
+#define HINIC3_DEV_SDI_5_1_ID_VF_HV 0x9004
+#define HINIC3_DEV_ID_SPU 0xAC00
+#define HINIC3_DEV_SDI_5_1_SSDID_VF 0x1000
+#define HINIC3_DEV_SDI_V100_SSDID_MASK (3 << 12)
+#elif defined(CONFIG_NF_VID_DID)
+#define PCI_VENDOR_ID_NF 0x2036
+#define NFNIC_DEV_ID_STANDARD 0x1618
+#define NFNIC_DEV_ID_SDI_5_1_PF 0x0226
+#define NFNIC_DEV_ID_SDI_5_0_PF 0x0225
+#define NFNIC_DEV_ID_DPU_PF 0x0224
+#define NFNIC_DEV_ID_VF 0x1619
+#define NFNIC_DEV_ID_VF_HV 0x379F
+#define NFNIC_DEV_SDI_5_1_ID_VF 0x375F
+#define NFNIC_DEV_SDI_5_0_ID_VF 0x375F
+#define NFNIC_DEV_SDI_5_1_ID_VF_HV 0x379F
+#define NFNIC_DEV_ID_SPU 0xAC00
+#define NFNIC_DEV_SDI_5_1_SSDID_VF 0x1000
+#define NFNIC_DEV_SDI_V100_SSDID_MASK (3 << 12)
+#else
+#define PCI_VENDOR_ID_HUAWEI 0x19e5
+#define HINIC3_DEV_ID_STANDARD 0x0222
+#define HINIC3_DEV_ID_SDI_5_1_PF 0x0226
+#define HINIC3_DEV_ID_SDI_5_0_PF 0x0225
+#define HINIC3_DEV_ID_DPU_PF 0x0224
+#define HINIC3_DEV_ID_VF 0x375F
+#define HINIC3_DEV_ID_VF_HV 0x379F
+#define HINIC3_DEV_SDI_5_1_ID_VF 0x375F
+#define HINIC3_DEV_SDI_5_0_ID_VF 0x375F
+#define HINIC3_DEV_SDI_5_1_ID_VF_HV 0x379F
+#define HINIC3_DEV_ID_SPU 0xAC00
+#define HINIC3_DEV_SDI_5_1_SSDID_VF 0x1000
+#define HINIC3_DEV_SDI_V100_SSDID_MASK (3 << 12)
+#endif
+
+#define NFNIC_DEV_SSID_2X25G_NF 0x0860
+#define NFNIC_DEV_SSID_4X25G_NF 0x0861
+#define NFNIC_DEV_SSID_2x100G_NF 0x0862
+#define NFNIC_DEV_SSID_2x200G_NF 0x0863
+
+#define HINIC3_DEV_SSID_2X10G 0x0035
+#define HINIC3_DEV_SSID_2X25G 0x0051
+#define HINIC3_DEV_SSID_4X25G 0x0052
+#define HINIC3_DEV_SSID_4X25G_BD 0x0252
+#define HINIC3_DEV_SSID_4X25G_SMARTNIC 0x0152
+#define HINIC3_DEV_SSID_6X25G_VL 0x0356
+#define HINIC3_DEV_SSID_2X100G 0x00A1
+#define HINIC3_DEV_SSID_2X100G_SMARTNIC 0x01A1
+#define HINIC3_DEV_SSID_2X200G 0x04B1
+#define HINIC3_DEV_SSID_2X100G_VF 0x1000
+#define HINIC3_DEV_SSID_HPC_4_HOST_NIC 0x005A
+#define HINIC3_DEV_SSID_2X200G_VL 0x00B1
+#define HINIC3_DEV_SSID_1X100G 0x02A4
+
+#define BIFUR_RESOURCE_PF_SSID 0x05a1
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c
new file mode 100644
index 0000000..fbb6198
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/semaphore.h>
+#include <linux/workqueue.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_profile.h"
+#include "hinic3_prof_adap.h"
+
+static bool is_match_prof_default_adapter(void *device)
+{
+ /* always match default profile adapter in standard scene */
+ return true;
+}
+
+struct hinic3_prof_adapter prof_adap_objs[] = {
+ /* Add prof adapter before default profile */
+ {
+ .type = PROF_ADAP_TYPE_DEFAULT,
+ .match = is_match_prof_default_adapter,
+ .init = NULL,
+ .deinit = NULL,
+ },
+};
+
+void hisdk3_init_profile_adapter(struct hinic3_hwdev *hwdev)
+{
+ u16 num_adap = ARRAY_SIZE(prof_adap_objs);
+
+ hwdev->prof_adap = hinic3_prof_init(hwdev, prof_adap_objs, num_adap,
+ (void *)&hwdev->prof_attr);
+ if (hwdev->prof_adap)
+ sdk_info(hwdev->dev_hdl, "Found profile adapter type: %d\n", hwdev->prof_adap->type);
+}
+
+void hisdk3_deinit_profile_adapter(struct hinic3_hwdev *hwdev)
+{
+ hinic3_prof_deinit(hwdev->prof_adap, hwdev->prof_attr);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h
new file mode 100644
index 0000000..e244d11
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_PROF_ADAP_H
+#define HINIC3_PROF_ADAP_H
+
+#include <linux/workqueue.h>
+
+#include "hinic3_profile.h"
+#include "hinic3_hwdev.h"
+
+enum cpu_affinity_work_type {
+ WORK_TYPE_AEQ,
+ WORK_TYPE_MBOX,
+ WORK_TYPE_MGMT_MSG,
+ WORK_TYPE_COMM,
+};
+
+enum hisdk3_sw_features {
+ HISDK3_SW_F_CHANNEL_LOCK = BIT(0),
+};
+
+struct hisdk3_prof_ops {
+ void (*fault_recover)(void *data, u16 src, u16 level);
+ int (*get_work_cpu_affinity)(void *data, u32 work_type);
+ void (*probe_success)(void *data);
+ void (*remove_pre_handle)(struct hinic3_hwdev *hwdev);
+};
+
+struct hisdk3_prof_attr {
+ void *priv_data;
+ u64 hw_feature_cap;
+ u64 sw_feature_cap;
+ u64 dft_hw_feature;
+ u64 dft_sw_feature;
+
+ struct hisdk3_prof_ops *ops;
+};
+
+#define GET_PROF_ATTR_OPS(hwdev) \
+ ((hwdev)->prof_attr ? (hwdev)->prof_attr->ops : NULL)
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline int hisdk3_get_work_cpu_affinity(struct hinic3_hwdev *hwdev,
+ enum cpu_affinity_work_type type)
+{
+ struct hisdk3_prof_ops *ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->get_work_cpu_affinity)
+ return ops->get_work_cpu_affinity(hwdev->prof_attr->priv_data, type);
+
+ return WORK_CPU_UNBOUND;
+}
+
+static inline void hisdk3_fault_post_process(struct hinic3_hwdev *hwdev,
+ u16 src, u16 level)
+{
+ struct hisdk3_prof_ops *ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->fault_recover)
+ ops->fault_recover(hwdev->prof_attr->priv_data, src, level);
+}
+
+static inline void hisdk3_probe_success(struct hinic3_hwdev *hwdev)
+{
+ struct hisdk3_prof_ops *ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->probe_success)
+ ops->probe_success(hwdev->prof_attr->priv_data);
+}
+
+static inline bool hisdk3_sw_feature_en(const struct hinic3_hwdev *hwdev,
+ u64 feature_bit)
+{
+ if (!hwdev->prof_attr)
+ return false;
+
+ return (hwdev->prof_attr->sw_feature_cap & feature_bit) &&
+ (hwdev->prof_attr->dft_sw_feature & feature_bit);
+}
+
+#ifdef CONFIG_MODULE_PROF
+static inline void hisdk3_remove_pre_process(struct hinic3_hwdev *hwdev)
+{
+ struct hisdk3_prof_ops *ops = NULL;
+
+ if (!hwdev)
+ return;
+
+ ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->remove_pre_handle)
+ ops->remove_pre_handle(hwdev);
+}
+#else
+static inline void hisdk3_remove_pre_process(struct hinic3_hwdev *hwdev) {}
+#endif
+#define SW_FEATURE_EN(hwdev, f_bit) \
+ hisdk3_sw_feature_en(hwdev, HISDK3_SW_F_##f_bit)
+#define HISDK3_F_CHANNEL_LOCK_EN(hwdev) SW_FEATURE_EN(hwdev, CHANNEL_LOCK)
+
+void hisdk3_init_profile_adapter(struct hinic3_hwdev *hwdev);
+void hisdk3_deinit_profile_adapter(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h
new file mode 100644
index 0000000..e204a98
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CHIPIF_SM_LT_H
+#define CHIPIF_SM_LT_H
+
+#include <linux/types.h>
+
+#define SM_LT_LOAD (0x12)
+#define SM_LT_STORE (0x14)
+
+#define SM_LT_NUM_OFFSET 13
+#define SM_LT_ABUF_FLG_OFFSET 12
+#define SM_LT_BC_OFFSET 11
+
+#define SM_LT_ENTRY_16B 16
+#define SM_LT_ENTRY_32B 32
+#define SM_LT_ENTRY_48B 48
+#define SM_LT_ENTRY_64B 64
+
+#define TBL_LT_OFFSET_DEFAULT 0
+
+#define SM_CACHE_LINE_SHFT 4 /* log2(16) */
+#define SM_CACHE_LINE_SIZE 16 /* the size of cache line */
+
+#define MAX_SM_LT_READ_LINE_NUM 4
+#define MAX_SM_LT_WRITE_LINE_NUM 3
+
+#define SM_LT_FULL_BYTEENB 0xFFFF
+
+#define TBL_GET_ENB3_MASK(bitmask) ((u16)(((bitmask) >> 32) & 0xFFFF))
+#define TBL_GET_ENB2_MASK(bitmask) ((u16)(((bitmask) >> 16) & 0xFFFF))
+#define TBL_GET_ENB1_MASK(bitmask) ((u16)((bitmask) & 0xFFFF))
+
+enum {
+ SM_LT_NUM_0 = 0, /* lt num = 0, load/store 16B */
+ SM_LT_NUM_1, /* lt num = 1, load/store 32B */
+ SM_LT_NUM_2, /* lt num = 2, load/store 48B */
+ SM_LT_NUM_3 /* lt num = 3, load 64B */
+};
+
+/* lt load request */
+union sml_lt_req_head {
+ struct {
+ u32 offset:8;
+ u32 pad:3;
+ u32 bc:1;
+ u32 abuf_flg:1;
+ u32 num:2;
+ u32 ack:1;
+ u32 op_id:5;
+ u32 instance:6;
+ u32 src:5;
+ } bs;
+
+ u32 value;
+};
+
+struct sml_lt_load_req {
+ u32 extra;
+ union sml_lt_req_head head;
+ u32 index;
+ u32 pad0;
+ u32 pad1;
+};
+
+struct sml_lt_store_req {
+ u32 extra;
+ union sml_lt_req_head head;
+ u32 index;
+ u32 byte_enb[2];
+ u8 write_data[48];
+};
+
+enum {
+ SM_LT_OFFSET_1 = 1,
+ SM_LT_OFFSET_2,
+ SM_LT_OFFSET_3,
+ SM_LT_OFFSET_4,
+ SM_LT_OFFSET_5,
+ SM_LT_OFFSET_6,
+ SM_LT_OFFSET_7,
+ SM_LT_OFFSET_8,
+ SM_LT_OFFSET_9,
+ SM_LT_OFFSET_10,
+ SM_LT_OFFSET_11,
+ SM_LT_OFFSET_12,
+ SM_LT_OFFSET_13,
+ SM_LT_OFFSET_14,
+ SM_LT_OFFSET_15
+};
+
+enum HINIC_CSR_API_DATA_OPERATION_ID {
+ HINIC_CSR_OPERATION_WRITE_CSR = 0x1E,
+ HINIC_CSR_OPERATION_READ_CSR = 0x1F
+};
+
+enum HINIC_CSR_API_DATA_NEED_RESPONSE_DATA {
+ HINIC_CSR_NO_RESP_DATA = 0,
+ HINIC_CSR_NEED_RESP_DATA = 1
+};
+
+enum HINIC_CSR_API_DATA_DATA_SIZE {
+ HINIC_CSR_DATA_SZ_32 = 0,
+ HINIC_CSR_DATA_SZ_64 = 1
+};
+
+struct hinic_csr_request_api_data {
+ u32 dw0;
+
+ union {
+ struct {
+ u32 reserved1:13;
+ /* this field indicates the write/read data size:
+ * 2'b00: 32 bits
+ * 2'b01: 64 bits
+ * 2'b10~2'b11: reserved
+ */
+ u32 data_size:2;
+ /* this field indicates whether the requestor expects to
+ * receive response data.
+ * 1'b0: no response data expected.
+ * 1'b1: response data expected.
+ */
+ u32 need_response:1;
+ /* this field indicates the operation that the requestor
+ * expects.
+ * 5'b1_1110: write value to csr space.
+ * 5'b1_1111: read register from csr space.
+ */
+ u32 operation_id:5;
+ u32 reserved2:6;
+ /* this field specifies the Src node ID for this API
+ * request message.
+ */
+ u32 src_node_id:5;
+ } bits;
+
+ u32 val32;
+ } dw1;
+
+ union {
+ struct {
+ /* it specifies the CSR address. */
+ u32 csr_addr:26;
+ u32 reserved3:6;
+ } bits;
+
+ u32 val32;
+ } dw2;
+
+ /* if data_size=2'b01, it is the high 32 bits of the write data;
+ * otherwise, it is 32'hFFFF_FFFF.
+ */
+ u32 csr_write_data_h;
+ /* the low 32 bits of write data. */
+ u32 csr_write_data_l;
+};
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c
new file mode 100644
index 0000000..b802104
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+#include "hinic3_sm_lt.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+
+#define ACK 1
+#define NOACK 0
+
+#define LT_LOAD16_API_SIZE (16 + 4)
+#define LT_STORE16_API_SIZE (32 + 4)
+
+#ifndef HTONL
+#define HTONL(x) \
+ ((((x) & 0x000000ff) << 24) \
+ | (((x) & 0x0000ff00) << 8) \
+ | (((x) & 0x00ff0000) >> 8) \
+ | (((x) & 0xff000000) >> 24))
+#endif
+
+static inline void sm_lt_build_head(union sml_lt_req_head *head,
+ u8 instance_id,
+ u8 op_id, u8 ack,
+ u8 offset, u8 num)
+{
+ head->value = 0;
+ head->bs.instance = instance_id;
+ head->bs.op_id = op_id;
+ head->bs.ack = ack;
+ head->bs.num = num;
+ head->bs.abuf_flg = 0;
+ head->bs.bc = 1;
+ head->bs.offset = offset;
+ head->value = HTONL((head->value));
+}
+
+static inline void sm_lt_load_build_req(struct sml_lt_load_req *req,
+ u8 instance_id,
+ u8 op_id, u8 ack,
+ u32 lt_index,
+ u8 offset, u8 num)
+{
+ sm_lt_build_head(&req->head, instance_id, op_id, ack, offset, num);
+ req->extra = 0;
+ req->index = lt_index;
+ req->index = HTONL(req->index);
+ req->pad0 = 0;
+ req->pad1 = 0;
+}
+
+static void sml_lt_store_data(u32 *dst, const u32 *src, u8 num)
+{
+ switch (num) {
+ case SM_LT_NUM_2:
+ *(dst + SM_LT_OFFSET_11) = *(src + SM_LT_OFFSET_11);
+ *(dst + SM_LT_OFFSET_10) = *(src + SM_LT_OFFSET_10);
+ *(dst + SM_LT_OFFSET_9) = *(src + SM_LT_OFFSET_9);
+ *(dst + SM_LT_OFFSET_8) = *(src + SM_LT_OFFSET_8);
+ /*lint -fallthrough */
+ case SM_LT_NUM_1:
+ *(dst + SM_LT_OFFSET_7) = *(src + SM_LT_OFFSET_7);
+ *(dst + SM_LT_OFFSET_6) = *(src + SM_LT_OFFSET_6);
+ *(dst + SM_LT_OFFSET_5) = *(src + SM_LT_OFFSET_5);
+ *(dst + SM_LT_OFFSET_4) = *(src + SM_LT_OFFSET_4);
+ /*lint -fallthrough */
+ case SM_LT_NUM_0:
+ *(dst + SM_LT_OFFSET_3) = *(src + SM_LT_OFFSET_3);
+ *(dst + SM_LT_OFFSET_2) = *(src + SM_LT_OFFSET_2);
+ *(dst + SM_LT_OFFSET_1) = *(src + SM_LT_OFFSET_1);
+ *dst = *src;
+ break;
+ default:
+ break;
+ }
+}
+
+static inline void sm_lt_store_build_req(struct sml_lt_store_req *req,
+ u8 instance_id,
+ u8 op_id, u8 ack,
+ u32 lt_index,
+ u8 offset,
+ u8 num,
+ u16 byte_enb3,
+ u16 byte_enb2,
+ u16 byte_enb1,
+ u8 *data)
+{
+ sm_lt_build_head(&req->head, instance_id, op_id, ack, offset, num);
+ req->index = lt_index;
+ req->index = HTONL(req->index);
+ req->extra = 0;
+ req->byte_enb[0] = (u32)(byte_enb3);
+ req->byte_enb[0] = HTONL(req->byte_enb[0]);
+ req->byte_enb[1] = HTONL((((u32)byte_enb2) << 16) | byte_enb1);
+ sml_lt_store_data((u32 *)req->write_data, (u32 *)(void *)data, num);
+}
+
+int hinic3_dbg_lt_rd_16byte(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data)
+{
+ struct sml_lt_load_req req;
+ int ret;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ sm_lt_load_build_req(&req, instance, SM_LT_LOAD, ACK, lt_index, 0, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, dest, (u8 *)(&req),
+ LT_LOAD16_API_SIZE, (void *)data, 0x10);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Read linear table 16byte fail, err: %d\n", ret);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_dbg_lt_wr_16byte_mask(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data, u16 mask)
+{
+ struct sml_lt_store_req req;
+ int ret;
+
+ if (!hwdev || !data)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ sm_lt_store_build_req(&req, instance, SM_LT_STORE, NOACK, lt_index,
+ 0, 0, 0, 0, mask, data);
+
+ ret = hinic3_api_cmd_write_nack(hwdev, dest, &req, LT_STORE16_API_SIZE);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Write linear table 16byte fail, err: %d\n", ret);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c
new file mode 100644
index 0000000..c8258ff
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c
@@ -0,0 +1,279 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_lld.h"
+#include "hinic3_dev_mgmt.h"
+#include "hinic3_sriov.h"
+
+static int hinic3_init_vf_hw(void *hwdev, u16 start_vf_id, u16 end_vf_id)
+{
+ u16 i, func_idx;
+ int err;
+
+ /* mbox msg channel resources will be freed during remove process */
+ err = hinic3_init_func_mbox_msg_channel(hwdev,
+ hinic3_func_max_vf(hwdev));
+ if (err != 0)
+ return err;
+
+ /* VFs use 256K as the default wq page size and can't change it */
+ for (i = start_vf_id; i <= end_vf_id; i++) {
+ func_idx = hinic3_glb_pf_vf_offset(hwdev) + i;
+ err = hinic3_set_wq_page_size(hwdev, func_idx,
+ HINIC3_DEFAULT_WQ_PAGE_SIZE,
+ HINIC3_CHANNEL_COMM);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int hinic3_deinit_vf_hw(void *hwdev, u16 start_vf_id, u16 end_vf_id)
+{
+ u16 func_idx, idx;
+
+ for (idx = start_vf_id; idx <= end_vf_id; idx++) {
+ func_idx = hinic3_glb_pf_vf_offset(hwdev) + idx;
+ hinic3_set_wq_page_size(hwdev, func_idx,
+ HINIC3_HW_WQ_PAGE_SIZE,
+ HINIC3_CHANNEL_COMM);
+ }
+
+ return 0;
+}
+
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+ssize_t hinic3_sriov_totalvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ return sprintf(buf, "%d\n", pci_sriov_get_totalvfs(pdev));
+}
+
+ssize_t hinic3_sriov_numvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ return sprintf(buf, "%d\n", pci_num_vf(pdev));
+}
+
+ssize_t hinic3_sriov_numvfs_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ int ret;
+ u16 num_vfs;
+ int cur_vfs, total_vfs;
+
+ ret = kstrtou16(buf, 0, &num_vfs);
+ if (ret < 0)
+ return ret;
+
+ cur_vfs = pci_num_vf(pdev);
+ total_vfs = pci_sriov_get_totalvfs(pdev);
+ if (num_vfs > total_vfs)
+ return -ERANGE;
+
+ if (num_vfs == cur_vfs)
+ return count; /* no change */
+
+ if (num_vfs == 0) {
+ /* disable VFs */
+ ret = hinic3_pci_sriov_configure(pdev, 0);
+ if (ret < 0)
+ return ret;
+ return count;
+ }
+
+ /* enable VFs */
+ if (cur_vfs) {
+ nic_warn(&pdev->dev, "%d VFs already enabled. Disable before enabling %d VFs\n",
+ cur_vfs, num_vfs);
+ return -EBUSY;
+ }
+
+ ret = hinic3_pci_sriov_configure(pdev, num_vfs);
+ if (ret < 0)
+ return ret;
+
+ if (ret != num_vfs)
+ nic_warn(&pdev->dev, "%d VFs requested; only %d enabled\n",
+ num_vfs, ret);
+
+ return count;
+}
+
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+
+int hinic3_pci_sriov_disable(struct pci_dev *dev)
+{
+#ifdef CONFIG_PCI_IOV
+ struct hinic3_sriov_info *sriov_info = NULL;
+ struct hinic3_event_info event = {0};
+ void *hwdev = NULL;
+ u16 tmp_vfs;
+
+ sriov_info = hinic3_get_sriov_info_by_pcidev(dev);
+ hwdev = hinic3_get_hwdev_by_pcidev(dev);
+ if (!hwdev) {
+ sdk_err(&dev->dev, "SR-IOV disable is not permitted, please wait...\n");
+ return -EPERM;
+ }
+
+ /* if SR-IOV is already disabled then there is nothing to do */
+ if (!sriov_info->sriov_enabled)
+ return 0;
+
+ if (test_and_set_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state)) {
+ sdk_err(&dev->dev, "SR-IOV disable in process, please wait");
+ return -EPERM;
+ }
+
+ /* If our VFs are assigned we cannot shut down SR-IOV
+ * without causing issues, so just leave the hardware
+ * available but disabled
+ */
+ if (pci_vfs_assigned(dev)) {
+ clear_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state);
+ sdk_warn(&dev->dev, "Unloading driver while VFs are assigned - VFs will not be deallocated\n");
+ return -EPERM;
+ }
+
+ event.service = EVENT_SRV_COMM;
+ event.type = EVENT_COMM_SRIOV_STATE_CHANGE;
+ ((struct hinic3_sriov_state_info *)(void *)event.event_data)->enable = 0;
+ hinic3_event_callback(hwdev, &event);
+
+ sriov_info->sriov_enabled = false;
+
+ /* disable iov and allow time for transactions to clear */
+ pci_disable_sriov(dev);
+
+ tmp_vfs = (u16)sriov_info->num_vfs;
+ sriov_info->num_vfs = 0;
+ hinic3_deinit_vf_hw(hwdev, 1, tmp_vfs);
+
+ clear_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state);
+
+#endif
+
+ return 0;
+}
+
+#ifdef CONFIG_PCI_IOV
+int hinic3_pci_sriov_check(struct hinic3_sriov_info *sriov_info, struct pci_dev *dev, int num_vfs)
+{
+ int pre_existing_vfs;
+ int err;
+
+ if (test_and_set_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state)) {
+ sdk_err(&dev->dev,
+ "SR-IOV enable in process, please wait, num_vfs %d\n",
+ num_vfs);
+ return -EPERM;
+ }
+
+ pre_existing_vfs = pci_num_vf(dev);
+
+ if (num_vfs > pci_sriov_get_totalvfs(dev)) {
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return -ERANGE;
+ }
+
+ if (pre_existing_vfs && pre_existing_vfs != num_vfs) {
+ err = hinic3_pci_sriov_disable(dev);
+ if (err) {
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return err;
+ }
+ } else if (pre_existing_vfs == num_vfs) {
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return num_vfs;
+ }
+
+ return 0;
+}
+#endif
+
+
+int hinic3_pci_sriov_enable(struct pci_dev *dev, int num_vfs)
+{
+#ifdef CONFIG_PCI_IOV
+ struct hinic3_sriov_info *sriov_info = NULL;
+ struct hinic3_event_info event = {0};
+ void *hwdev = NULL;
+ int err = 0;
+
+ sriov_info = hinic3_get_sriov_info_by_pcidev(dev);
+ hwdev = hinic3_get_hwdev_by_pcidev(dev);
+ if (!hwdev) {
+ sdk_err(&dev->dev, "SR-IOV enable is not permitted, please wait...\n");
+ return -EPERM;
+ }
+
+ err = hinic3_pci_sriov_check(sriov_info, dev, num_vfs);
+ if (err != 0) {
+ return err;
+ }
+
+ err = hinic3_init_vf_hw(hwdev, 1, (u16)num_vfs);
+ if (err) {
+ sdk_err(&dev->dev, "Failed to init vf in hardware before enabling sriov, error %d\n",
+ err);
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return err;
+ }
+
+ err = pci_enable_sriov(dev, num_vfs);
+ if (err) {
+ sdk_err(&dev->dev, "Failed to enable SR-IOV, error %d\n", err);
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return err;
+ }
+
+ sriov_info->sriov_enabled = true;
+ sriov_info->num_vfs = num_vfs;
+
+ event.service = EVENT_SRV_COMM;
+ event.type = EVENT_COMM_SRIOV_STATE_CHANGE;
+ ((struct hinic3_sriov_state_info *)(void *)event.event_data)->enable = 1;
+ ((struct hinic3_sriov_state_info *)(void *)event.event_data)->num_vfs = (u16)num_vfs;
+ hinic3_event_callback(hwdev, &event);
+
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+
+ return num_vfs;
+#else
+
+ return 0;
+#endif
+}
+
+int hinic3_pci_sriov_configure(struct pci_dev *dev, int num_vfs)
+{
+ struct hinic3_sriov_info *sriov_info = NULL;
+
+ sriov_info = hinic3_get_sriov_info_by_pcidev(dev);
+ if (!sriov_info)
+ return -EFAULT;
+
+ if (!test_bit(HINIC3_FUNC_PERSENT, &sriov_info->state))
+ return -EFAULT;
+
+ if (num_vfs == 0)
+ return hinic3_pci_sriov_disable(dev);
+ else
+ return hinic3_pci_sriov_enable(dev, num_vfs);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h
new file mode 100644
index 0000000..4a640ad
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_SRIOV_H
+#define HINIC3_SRIOV_H
+#include <linux/types.h>
+#include <linux/pci.h>
+
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+ssize_t hinic3_sriov_totalvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ssize_t hinic3_sriov_numvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ssize_t hinic3_sriov_numvfs_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count);
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+
+enum hinic3_sriov_state {
+ HINIC3_SRIOV_DISABLE,
+ HINIC3_SRIOV_ENABLE,
+ HINIC3_FUNC_PERSENT,
+};
+
+struct hinic3_sriov_info {
+ bool sriov_enabled;
+ unsigned int num_vfs;
+ unsigned long state;
+};
+
+struct hinic3_sriov_info *hinic3_get_sriov_info_by_pcidev(struct pci_dev *pdev);
+int hinic3_pci_sriov_disable(struct pci_dev *dev);
+int hinic3_pci_sriov_enable(struct pci_dev *dev, int num_vfs);
+int hinic3_pci_sriov_configure(struct pci_dev *dev, int num_vfs);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c
new file mode 100644
index 0000000..4f8acd6
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c
@@ -0,0 +1,159 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_wq.h"
+
+#define WQ_MIN_DEPTH 64
+#define WQ_MAX_DEPTH 65536
+#define WQ_MAX_NUM_PAGES (PAGE_SIZE / sizeof(u64))
+
+static int wq_init_wq_block(struct hinic3_wq *wq)
+{
+ int i;
+
+ if (WQ_IS_0_LEVEL_CLA(wq)) {
+ wq->wq_block_paddr = wq->wq_pages[0].align_paddr;
+ wq->wq_block_vaddr = wq->wq_pages[0].align_vaddr;
+
+ return 0;
+ }
+
+ if (wq->num_wq_pages > WQ_MAX_NUM_PAGES) {
+ sdk_err(wq->dev_hdl, "num_wq_pages exceeds limit: %lu\n",
+ WQ_MAX_NUM_PAGES);
+ return -EFAULT;
+ }
+
+ wq->wq_block_vaddr = dma_zalloc_coherent(wq->dev_hdl, PAGE_SIZE,
+ &wq->wq_block_paddr,
+ GFP_KERNEL);
+ if (!wq->wq_block_vaddr) {
+ sdk_err(wq->dev_hdl, "Failed to alloc wq block\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < wq->num_wq_pages; i++)
+ wq->wq_block_vaddr[i] =
+ cpu_to_be64(wq->wq_pages[i].align_paddr);
+
+ return 0;
+}
+
+static int wq_alloc_pages(struct hinic3_wq *wq)
+{
+ int i, page_idx, err;
+
+ wq->wq_pages = kcalloc(wq->num_wq_pages, sizeof(*wq->wq_pages),
+ GFP_KERNEL);
+ if (!wq->wq_pages) {
+ sdk_err(wq->dev_hdl, "Failed to alloc wq pages handle\n");
+ return -ENOMEM;
+ }
+
+ for (page_idx = 0; page_idx < wq->num_wq_pages; page_idx++) {
+ err = hinic3_dma_zalloc_coherent_align(wq->dev_hdl,
+ wq->wq_page_size,
+ wq->wq_page_size,
+ GFP_KERNEL,
+ &wq->wq_pages[page_idx]);
+ if (err) {
+ sdk_err(wq->dev_hdl, "Failed to alloc wq page\n");
+ goto free_wq_pages;
+ }
+ }
+
+ err = wq_init_wq_block(wq);
+ if (err)
+ goto free_wq_pages;
+
+ return 0;
+
+free_wq_pages:
+ for (i = 0; i < page_idx; i++)
+ hinic3_dma_free_coherent_align(wq->dev_hdl, &wq->wq_pages[i]);
+
+ kfree(wq->wq_pages);
+ wq->wq_pages = NULL;
+
+ return -ENOMEM;
+}
+
+static void wq_free_pages(struct hinic3_wq *wq)
+{
+ int i;
+
+ if (!WQ_IS_0_LEVEL_CLA(wq))
+ dma_free_coherent(wq->dev_hdl, PAGE_SIZE, wq->wq_block_vaddr,
+ wq->wq_block_paddr);
+
+ for (i = 0; i < wq->num_wq_pages; i++)
+ hinic3_dma_free_coherent_align(wq->dev_hdl, &wq->wq_pages[i]);
+
+ kfree(wq->wq_pages);
+ wq->wq_pages = NULL;
+}
+
+int hinic3_wq_create(void *hwdev, struct hinic3_wq *wq, u32 q_depth,
+ u16 wqebb_size)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u32 wq_page_size;
+
+ if (!wq || !dev) {
+ pr_err("Invalid wq or dev_hdl\n");
+ return -EINVAL;
+ }
+
+ if (q_depth < WQ_MIN_DEPTH || q_depth > WQ_MAX_DEPTH ||
+ (q_depth & (q_depth - 1)) || !wqebb_size ||
+ (wqebb_size & (wqebb_size - 1))) {
+ sdk_err(dev->dev_hdl, "Wq q_depth(%u) or wqebb_size(%u) is invalid\n",
+ q_depth, wqebb_size);
+ return -EINVAL;
+ }
+
+ wq_page_size = ALIGN(dev->wq_page_size, PAGE_SIZE);
+
+ memset(wq, 0, sizeof(struct hinic3_wq));
+ wq->dev_hdl = dev->dev_hdl;
+ wq->q_depth = q_depth;
+ wq->idx_mask = (u16)(q_depth - 1);
+ wq->wqebb_size = wqebb_size;
+ wq->wqebb_size_shift = (u16)ilog2(wq->wqebb_size);
+ wq->wq_page_size = wq_page_size;
+
+ wq->wqebbs_per_page = wq_page_size / wqebb_size;
+ /* In case wq_page_size is larger than q_depth * wqebb_size */
+ if (wq->wqebbs_per_page > q_depth)
+ wq->wqebbs_per_page = q_depth;
+ wq->wqebbs_per_page_shift = (u16)ilog2(wq->wqebbs_per_page);
+ wq->wqebbs_per_page_mask = (u16)(wq->wqebbs_per_page - 1);
+ wq->num_wq_pages = (u16)(ALIGN(((u32)q_depth * wqebb_size),
+ wq_page_size) / wq_page_size);
+
+ return wq_alloc_pages(wq);
+}
+EXPORT_SYMBOL(hinic3_wq_create);
+
+void hinic3_wq_destroy(struct hinic3_wq *wq)
+{
+ if (!wq)
+ return;
+
+ wq_free_pages(wq);
+}
+EXPORT_SYMBOL(hinic3_wq_destroy);
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c b/drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c
new file mode 100644
index 0000000..f8aea696
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/vmalloc.h>
+#include "ossl_knl_linux.h"
+
+#define OSSL_MINUTE_BASE (60)
+
+struct file *file_creat(const char *file_name)
+{
+ return filp_open(file_name, O_CREAT | O_RDWR | O_APPEND, 0);
+}
+
+struct file *file_open(const char *file_name)
+{
+ return filp_open(file_name, O_RDONLY, 0);
+}
+
+void file_close(struct file *file_handle)
+{
+ (void)filp_close(file_handle, NULL);
+}
+
+u32 get_file_size(struct file *file_handle)
+{
+ struct inode *file_inode = NULL;
+
+ file_inode = file_handle->f_inode;
+
+ return (u32)(file_inode->i_size);
+}
+
+void set_file_position(struct file *file_handle, u32 position)
+{
+ file_handle->f_pos = position;
+}
+
+int file_read(struct file *file_handle, char *log_buffer, u32 rd_length,
+ u32 *file_pos)
+{
+ return (int)kernel_read(file_handle, log_buffer, rd_length,
+ &file_handle->f_pos);
+}
+
+u32 file_write(struct file *file_handle, const char *log_buffer, u32 wr_length)
+{
+ return (u32)kernel_write(file_handle, log_buffer, wr_length,
+ &file_handle->f_pos);
+}
+
+static int _linux_thread_func(void *thread)
+{
+ struct sdk_thread_info *info = (struct sdk_thread_info *)thread;
+
+ while (!kthread_should_stop())
+ info->thread_fn(info->data);
+
+ return 0;
+}
+
+int creat_thread(struct sdk_thread_info *thread_info)
+{
+ thread_info->thread_obj = kthread_run(_linux_thread_func, thread_info,
+ thread_info->name);
+ if (IS_ERR(thread_info->thread_obj))
+ return -EFAULT;
+
+ return 0;
+}
+
+void stop_thread(struct sdk_thread_info *thread_info)
+{
+ if (thread_info->thread_obj)
+ (void)kthread_stop(thread_info->thread_obj);
+}
+
+void utctime_to_localtime(u64 utctime, u64 *localtime)
+{
+ *localtime = utctime - (u64)(sys_tz.tz_minuteswest * OSSL_MINUTE_BASE); /*lint !e647 !e571*/
+}
+
+#ifndef HAVE_TIMER_SETUP
+void initialize_timer(const void *adapter_hdl, struct timer_list *timer)
+{
+ if (!adapter_hdl || !timer)
+ return;
+
+ init_timer(timer);
+}
+#endif
+
+void add_to_timer(struct timer_list *timer, u64 period)
+{
+ if (!timer)
+ return;
+
+ add_timer(timer);
+}
+
+void stop_timer(struct timer_list *timer) {}
+
+void delete_timer(struct timer_list *timer)
+{
+ if (!timer)
+ return;
+
+ del_timer_sync(timer);
+}
+
+u64 ossl_get_real_time(void)
+{
+ struct timeval tv = {0};
+ u64 tv_msec;
+
+ do_gettimeofday(&tv);
+
+ tv_msec = (u64)tv.tv_sec * MSEC_PER_SEC + (u64)tv.tv_usec / USEC_PER_MSEC;
+ return tv_msec;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/include/bond/bond_common_defs.h b/drivers/net/ethernet/huawei/hinic3/include/bond/bond_common_defs.h
new file mode 100644
index 0000000..bfb4499
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/bond/bond_common_defs.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) Huawei Technologies Co., Ltd. 2021. All rights reserved. */
+
+#ifndef BOND_COMMON_DEFS_H
+#define BOND_COMMON_DEFS_H
+
+#define BOND_NAME_MAX_LEN 16
+#define BOND_PORT_MAX_NUM 4
+#define BOND_ID_INVALID 0xFFFF
+#define OVS_PORT_NUM_MAX BOND_PORT_MAX_NUM
+#define DEFAULT_ROCE_BOND_FUNC 0xFFFFFFFF
+
+#define BOND_ID_IS_VALID(_id) \
+ (((_id) >= BOND_FIRST_ID) && ((_id) <= BOND_MAX_ID))
+#define BOND_ID_IS_INVALID(_id) (!(BOND_ID_IS_VALID(_id)))
+
+enum bond_group_id {
+ BOND_FIRST_ID = 1,
+ BOND_MAX_ID = 4,
+ BOND_MAX_NUM,
+};
+
+#pragma pack(push, 4)
+/**
+ * bond per port statistics
+ */
+struct tag_bond_port_stat {
+ /** mpu provide */
+ u64 rx_pkts;
+ u64 rx_bytes;
+ u64 rx_drops;
+ u64 rx_errors;
+
+ u64 tx_pkts;
+ u64 tx_bytes;
+ u64 tx_drops;
+ u64 tx_errors;
+};
+
+#pragma pack(pop)
+
+/**
+ * bond port attribute
+ */
+struct tag_bond_port_attr {
+ u8 duplex;
+ u8 status;
+ u8 rsvd0[2];
+ u32 speed;
+};
+
+/**
+ * Get bond information command struct definition
+ * @see OVS_MPU_CMD_BOND_GET_ATTR
+ */
+struct tag_bond_get {
+ u16 bond_id_vld; /* 1: use bond_id to get bond info, 0: use bond_name */
+ u16 bond_id; /* if bond_id_vld=1 input, else output */
+ u8 bond_name[BOND_NAME_MAX_LEN]; /* if bond_id_vld=0 input, else output */
+
+ u16 bond_mode; /* 1 for active-backup, 2 for balance-xor, 4 for 802.3ad */
+ u8 active_slaves; /* active port slaves(bitmaps) */
+ u8 slaves; /* bond port id bitmaps */
+
+ u8 lacp_collect_slaves; /* bond port id bitmaps */
+ u8 xmit_hash_policy; /* xmit hash: 0 for layer 2, 1 for layer 2+3, 2 for layer 3+4 */
+ u16 rsvd0; /* reserved for 4-byte alignment */
+
+ struct tag_bond_port_stat stat[BOND_PORT_MAX_NUM];
+ struct tag_bond_port_attr attr[BOND_PORT_MAX_NUM];
+};
+
+#endif /** BOND_COMMON_DEFS_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd.h b/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd.h
new file mode 100644
index 0000000..a13b66d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CFG_MGMT_MPU_CMD_H
+#define CFG_MGMT_MPU_CMD_H
+
+enum cfg_cmd {
+ CFG_CMD_GET_DEV_CAP = 0, /**< Device capability of pf/vf, @see cfg_cmd_dev_cap */
+ CFG_CMD_GET_HOST_TIMER = 1, /**< Capability of host timer, @see cfg_cmd_host_timer */
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd_defs.h
new file mode 100644
index 0000000..f9737ea
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd_defs.h
@@ -0,0 +1,221 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CFG_MGMT_MPU_CMD_DEFS_H
+#define CFG_MGMT_MPU_CMD_DEFS_H
+
+#include "mpu_cmd_base_defs.h"
+
+enum servic_bit_define {
+ SERVICE_BIT_NIC = 0,
+ SERVICE_BIT_ROCE = 1,
+ SERVICE_BIT_VBS = 2,
+ SERVICE_BIT_TOE = 3,
+ SERVICE_BIT_IPSEC = 4,
+ SERVICE_BIT_FC = 5,
+ SERVICE_BIT_VIRTIO = 6,
+ SERVICE_BIT_OVS = 7,
+ SERVICE_BIT_NVME = 8,
+ SERVICE_BIT_ROCEAA = 9,
+ SERVICE_BIT_CURRENET = 10,
+ SERVICE_BIT_PPA = 11,
+ SERVICE_BIT_MIGRATE = 12,
+ SERVICE_BIT_VROCE = 13,
+ SERVICE_BIT_BIFUR = 14,
+ SERVICE_BIT_MAX
+};
+
+#define CFG_SERVICE_MASK_NIC (0x1 << SERVICE_BIT_NIC)
+#define CFG_SERVICE_MASK_ROCE (0x1 << SERVICE_BIT_ROCE)
+#define CFG_SERVICE_MASK_VBS (0x1 << SERVICE_BIT_VBS)
+#define CFG_SERVICE_MASK_TOE (0x1 << SERVICE_BIT_TOE)
+#define CFG_SERVICE_MASK_IPSEC (0x1 << SERVICE_BIT_IPSEC)
+#define CFG_SERVICE_MASK_FC (0x1 << SERVICE_BIT_FC)
+#define CFG_SERVICE_MASK_VIRTIO (0x1 << SERVICE_BIT_VIRTIO)
+#define CFG_SERVICE_MASK_OVS (0x1 << SERVICE_BIT_OVS)
+#define CFG_SERVICE_MASK_NVME (0x1 << SERVICE_BIT_NVME)
+#define CFG_SERVICE_MASK_ROCEAA (0x1 << SERVICE_BIT_ROCEAA)
+#define CFG_SERVICE_MASK_CURRENET (0x1 << SERVICE_BIT_CURRENET)
+#define CFG_SERVICE_MASK_PPA (0x1 << SERVICE_BIT_PPA)
+#define CFG_SERVICE_MASK_MIGRATE (0x1 << SERVICE_BIT_MIGRATE)
+#define CFG_SERVICE_MASK_VROCE (0x1 << SERVICE_BIT_VROCE)
+#define CFG_SERVICE_MASK_BIFUR (0x1 << SERVICE_BIT_BIFUR)
+
+/* Definition of the scenario ID in the cfg_data, which is used for SML memory allocation. */
+enum scenes_id_define {
+ SCENES_ID_FPGA_ETH = 0,
+ SCENES_ID_COMPUTE_STANDARD = 1,
+ SCENES_ID_STORAGE_ROCEAA_2x100 = 2,
+ SCENES_ID_STORAGE_ROCEAA_4x25 = 3,
+ SCENES_ID_CLOUD = 4,
+ SCENES_ID_FC = 5,
+ SCENES_ID_STORAGE_ROCE = 6,
+ SCENES_ID_COMPUTE_ROCE = 7,
+ SCENES_ID_STORAGE_TOE = 8,
+ SCENES_ID_COMPUTE_DPU = 100,
+ SCENES_ID_COMPUTE_SMART_NIC = 101,
+ SCENES_ID_MAX
+};
+
+/* struct cfg_cmd_dev_cap.sf_svc_attr */
+enum {
+ SF_SVC_FT_BIT = (1 << 0),
+ SF_SVC_RDMA_BIT = (1 << 1),
+};
+
+struct cfg_cmd_host_timer {
+ struct mgmt_msg_head head;
+
+ u8 host_id;
+ u8 rsvd1;
+
+ u8 timer_pf_num;
+ u8 timer_pf_id_start;
+ u16 timer_vf_num;
+ u16 timer_vf_id_start;
+ u32 rsvd2[8];
+};
+
+struct cfg_cmd_dev_cap {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1;
+
+ /* Public resources */
+ u8 host_id;
+ u8 ep_id;
+ u8 er_id;
+ u8 port_id;
+
+ u16 host_total_func;
+ u8 host_pf_num;
+ u8 pf_id_start;
+ u16 host_vf_num;
+ u16 vf_id_start;
+ u8 host_oq_id_mask_val;
+ u8 timer_en;
+ u8 host_valid_bitmap;
+ u8 rsvd_host;
+
+ u16 svc_cap_en;
+ u16 max_vf;
+ u8 flexq_en;
+ u8 valid_cos_bitmap;
+ /* Reserved for func_valid_cos_bitmap */
+ u8 port_cos_valid_bitmap;
+ u8 rsvd_func1;
+ u32 rsvd_func2;
+
+ u8 sf_svc_attr;
+ u8 func_sf_en;
+ u8 lb_mode;
+ u8 smf_pg;
+
+ u32 max_conn_num;
+ u16 max_stick2cache_num;
+ u16 max_bfilter_start_addr;
+ u16 bfilter_len;
+ u16 hash_bucket_num;
+
+ /* shared resource */
+ u8 host_sf_en;
+ u8 master_host_id;
+ u8 srv_multi_host_mode;
+ u8 virtio_vq_size;
+
+ u8 hot_plug_disable;
+ u8 bond_create_mode;
+ u8 lro_enable;
+ u8 os_hot_replace;
+
+ u32 rsvd_func3[4];
+
+ /* l2nic */
+ u16 nic_max_sq_id;
+ u16 nic_max_rq_id;
+ u16 nic_default_num_queues;
+ u16 outband_vlan_cfg_en;
+ u32 rsvd2_nic[2];
+
+ /* RoCE */
+ u32 roce_max_qp;
+ u32 roce_max_cq;
+ u32 roce_max_srq;
+ u32 roce_max_mpt;
+ u32 roce_max_drc_qp;
+
+ u32 roce_cmtt_cl_start;
+ u32 roce_cmtt_cl_end;
+ u32 roce_cmtt_cl_size;
+
+ u32 roce_dmtt_cl_start;
+ u32 roce_dmtt_cl_end;
+ u32 roce_dmtt_cl_size;
+
+ u32 roce_wqe_cl_start;
+ u32 roce_wqe_cl_end;
+ u32 roce_wqe_cl_size;
+ u8 roce_srq_container_mode;
+ u8 rsvd_roce1[3];
+ u32 rsvd_roce2[5];
+
+ /* IPsec */
+ u32 ipsec_max_sactx;
+ u16 ipsec_max_cq;
+ u16 rsvd_ipsec1;
+ u32 rsvd_ipsec[2];
+
+ /* OVS */
+ u32 ovs_max_qpc;
+ u32 rsvd_ovs1[3];
+
+ /* ToE */
+ u32 toe_max_pctx;
+ u32 toe_max_cq;
+ u16 toe_max_srq;
+ u16 toe_srq_id_start;
+ u16 toe_max_mpt;
+ u16 toe_rsvd_1;
+ u32 toe_max_cctxt;
+ u32 rsvd_toe[1];
+
+ /* FC */
+ u32 fc_max_pctx;
+ u32 fc_max_scq;
+ u32 fc_max_srq;
+
+ u32 fc_max_cctx;
+ u32 fc_cctx_id_start;
+
+ u8 fc_vp_id_start;
+ u8 fc_vp_id_end;
+ u8 rsvd_fc1[2];
+ u32 rsvd_fc2[5];
+
+ /* VBS */
+ u16 vbs_max_volq;
+ u8 vbs_main_pf_enable;
+ u8 vbs_vsock_pf_enable;
+ u8 vbs_fushion_queue_pf_enable;
+ u8 rsvd0_vbs;
+ u16 rsvd1_vbs;
+ u32 rsvd2_vbs[2];
+
+ u16 fake_vf_start_id;
+ u16 fake_vf_num;
+ u32 fake_vf_max_pctx;
+ u16 fake_vf_bfilter_start_addr;
+ u16 fake_vf_bfilter_len;
+
+ u32 map_host_id : 3;
+ u32 fake_vf_en : 1;
+ u32 fake_vf_start_bit : 4;
+ u32 fake_vf_end_bit : 4;
+ u32 fake_vf_page_bit : 4;
+ u32 rsvd2 : 16;
+
+ u32 rsvd_glb[7];
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd.h b/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd.h
new file mode 100644
index 0000000..d4e33f7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_NPU_CMD_H
+#define CQM_NPU_CMD_H
+
+enum cqm_cmd_type {
+ CQM_CMD_T_INVALID = 0, /* < Invalid command */
+ CQM_CMD_T_BAT_UPDATE, /* < Update the bat configuration of the function,
+ * @see struct tag_cqm_cmdq_bat_update
+ */
+ CQM_CMD_T_CLA_UPDATE, /* < Update the cla configuration of the function,
+ * @see struct tag_cqm_cla_update_cmd
+ */
+ CQM_CMD_T_BLOOMFILTER_SET, /* < Set the bloomfilter configuration of the function,
+ * @see struct tag_cqm_bloomfilter_cmd
+ */
+ CQM_CMD_T_BLOOMFILTER_CLEAR, /* < Clear the bloomfilter configuration of the function,
+ * @see struct tag_cqm_bloomfilter_cmd
+ */
+ CQM_CMD_T_RSVD, /* < Unused */
+ CQM_CMD_T_CLA_CACHE_INVALID, /* < Invalidate the cla cacheline,
+ * @see struct tag_cqm_cla_cache_invalid_cmd
+ */
+ CQM_CMD_T_BLOOMFILTER_INIT, /* < Init the bloomfilter configuration of the function,
+ * @see struct tag_cqm_bloomfilter_init_cmd
+ */
+ CQM_CMD_T_MAX
+};
+
+#endif /* CQM_NPU_CMD_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd_defs.h
new file mode 100644
index 0000000..28b83ed
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd_defs.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_NPU_CMD_DEFS_H
+#define CQM_NPU_CMD_DEFS_H
+
+struct tag_cqm_cla_cache_invalid_cmd {
+ u32 gpa_h;
+ u32 gpa_l;
+
+ u32 cache_size; /* CLA cache size=4096B */
+
+ u32 smf_id;
+ u32 func_id;
+};
+
+struct tag_cqm_cla_update_cmd {
+ /* Gpa address to be updated */
+ u32 gpa_h; // byte addr
+ u32 gpa_l; // byte addr
+
+ /* Updated Value */
+ u32 value_h;
+ u32 value_l;
+
+ u32 smf_id;
+ u32 func_id;
+};
+
+struct tag_cqm_bloomfilter_cmd {
+ u32 rsv1;
+
+#if (BYTE_ORDER == LITTLE_ENDIAN)
+ u32 k_en : 4;
+ u32 func_id : 16;
+ u32 rsv2 : 12;
+#else
+ u32 rsv2 : 12;
+ u32 func_id : 16;
+ u32 k_en : 4;
+#endif
+
+ u32 index_h;
+ u32 index_l;
+};
+
+#define CQM_BAT_MAX_SIZE 256
+struct tag_cqm_cmdq_bat_update {
+ u32 offset; // byte offset, 16-byte aligned
+ u32 byte_len; // max size: 256 bytes
+ u8 data[CQM_BAT_MAX_SIZE];
+ u32 smf_id;
+ u32 func_id;
+};
+
+struct tag_cqm_bloomfilter_init_cmd {
+ u32 bloom_filter_len; // 16Byte aligned
+ u32 bloom_filter_addr;
+};
+
+#endif /* CQM_NPU_CMD_DEFS_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/include/hinic3_common.h b/drivers/net/ethernet/huawei/hinic3/include/hinic3_common.h
new file mode 100644
index 0000000..6c5b995
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/hinic3_common.h
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_COMMON_H
+#define HINIC3_COMMON_H
+
+#include <linux/types.h>
+
+struct hinic3_dma_addr_align {
+ u32 real_size;
+
+ void *ori_vaddr;
+ dma_addr_t ori_paddr;
+
+ void *align_vaddr;
+ dma_addr_t align_paddr;
+};
+
+enum hinic3_wait_return {
+ WAIT_PROCESS_CPL = 0,
+ WAIT_PROCESS_WAITING = 1,
+ WAIT_PROCESS_ERR = 2,
+};
+
+struct hinic3_sge {
+ u32 hi_addr;
+ u32 lo_addr;
+ u32 len;
+};
+
+/**
+ * hinic3_cpu_to_be32 - convert data to big endian 32 bit format
+ * @data: the data to convert
+ * @len: length of data to convert, must be a multiple of 4 bytes
+ */
+static inline void hinic3_cpu_to_be32(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ int data_len = len;
+ u32 *mem = (u32 *)data;
+
+ if (!data)
+ return;
+
+ data_len = data_len / chunk_sz;
+
+ for (i = 0; i < data_len; i++) {
+ *mem = cpu_to_be32(*mem);
+ mem++;
+ }
+}
+
+/**
+ * hinic3_be32_to_cpu - convert data from big endian 32 bit format
+ * @data: the data to convert
+ * @len: length of data to convert, must be a multiple of 4 bytes
+ */
+static inline void hinic3_be32_to_cpu(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ int data_len = len;
+ u32 *mem = (u32 *)data;
+
+ if (!data)
+ return;
+
+ data_len = data_len / chunk_sz;
+
+ for (i = 0; i < data_len; i++) {
+ *mem = be32_to_cpu(*mem);
+ mem++;
+ }
+}
+
+/**
+ * hinic3_set_sge - set dma area in scatter gather entry
+ * @sge: scatter gather entry
+ * @addr: dma address
+ * @len: length of relevant data in the dma address
+ */
+static inline void hinic3_set_sge(struct hinic3_sge *sge, dma_addr_t addr,
+ int len)
+{
+ sge->hi_addr = upper_32_bits(addr);
+ sge->lo_addr = lower_32_bits(addr);
+ sge->len = (u32)len;
+}
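+
+/*
+ * Illustrative sketch (not part of the driver): how the helpers above are
+ * typically combined. The command buffer below is hypothetical; only
+ * hinic3_set_sge() and hinic3_cpu_to_be32() come from this header.
+ *
+ *    struct hinic3_sge sge;
+ *    u32 cmd[4] = { 0x1, 0x2, 0x3, 0x4 };  // hypothetical 16-byte command
+ *    dma_addr_t cmd_pa;                    // assumed already DMA-mapped
+ *
+ *    hinic3_cpu_to_be32(cmd, sizeof(cmd));           // len is a multiple of 4 bytes
+ *    hinic3_set_sge(&sge, cmd_pa, (int)sizeof(cmd)); // split into hi/lo address + length
+ */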
+
+#define hinic3_hw_be32(val) (val)
+#define hinic3_hw_cpu32(val) (val)
+#define hinic3_hw_cpu16(val) (val)
+
+static inline void hinic3_hw_be32_len(void *data, int len)
+{
+}
+
+static inline void hinic3_hw_cpu32_len(void *data, int len)
+{
+}
+
+int hinic3_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
+ unsigned int flag,
+ struct hinic3_dma_addr_align *mem_align);
+
+void hinic3_dma_free_coherent_align(void *dev_hdl,
+ struct hinic3_dma_addr_align *mem_align);
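+
+/*
+ * Illustrative sketch (assumption, not driver code): allocating a 4KB-aligned
+ * coherent buffer with the helpers declared above. "dev_hdl" is a hypothetical
+ * device handle.
+ *
+ *    struct hinic3_dma_addr_align mem = { 0 };
+ *    int err;
+ *
+ *    err = hinic3_dma_zalloc_coherent_align(dev_hdl, 4096, 4096,
+ *                                           GFP_KERNEL, &mem);
+ *    if (!err) {
+ *        // use mem.align_vaddr / mem.align_paddr for hardware access
+ *        hinic3_dma_free_coherent_align(dev_hdl, &mem);
+ *    }
+ */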
+
+typedef enum hinic3_wait_return (*wait_cpl_handler)(void *priv_data);
+
+int hinic3_wait_for_timeout(void *priv_data, wait_cpl_handler handler,
+ u32 wait_total_ms, u32 wait_once_us);
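+
+/*
+ * Illustrative sketch (assumption, not driver code): a minimal completion
+ * handler for hinic3_wait_for_timeout(). "struct my_ctx" and its "done" flag
+ * are hypothetical; the typedef and the enum values come from this header.
+ *
+ *    static enum hinic3_wait_return my_check_done(void *priv_data)
+ *    {
+ *        struct my_ctx *ctx = priv_data;
+ *
+ *        return ctx->done ? WAIT_PROCESS_CPL : WAIT_PROCESS_WAITING;
+ *    }
+ *
+ *    // poll every 100us, give up after 5000ms
+ *    err = hinic3_wait_for_timeout(ctx, my_check_done, 5000, 100);
+ */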
+
+/* func_attr.glb_func_idx, global function index */
+u16 hinic3_global_func_id(void *hwdev);
+
+int hinic3_global_func_id_get(void *hwdev, u16 *func_id);
+
+/* func_attr.p2p_idx, belongs to which pf */
+u8 hinic3_pf_id_of_vf(void *hwdev);
+
+/* func_attr.itf_idx, pcie interface index */
+u8 hinic3_pcie_itf_id(void *hwdev);
+int hinic3_get_vfid_by_vfpci(void *hwdev, struct pci_dev *pdev, u16 *global_func_id);
+/* func_attr.vf_in_pf, the vf offset in pf */
+u8 hinic3_vf_in_pf(void *hwdev);
+
+/* func_attr.func_type, 0-PF 1-VF 2-PPF */
+enum func_type hinic3_func_type(void *hwdev);
+
+/* The PF func_attr.glb_pf_vf_offset,
+ * PF use only
+ */
+u16 hinic3_glb_pf_vf_offset(void *hwdev);
+
+/* func_attr.mpf_idx, mpf global function index,
+ * This value is valid only when it is PF
+ */
+u8 hinic3_mpf_idx(void *hwdev);
+
+u8 hinic3_ppf_idx(void *hwdev);
+
+/* func_attr.intr_num, MSI-X table entry in function */
+u16 hinic3_intr_num(void *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm.h b/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm.h
new file mode 100644
index 0000000..47857a3
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm.h
@@ -0,0 +1,353 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_H
+#define CQM_H
+
+#include <linux/completion.h>
+
+#ifndef HIUDK_SDK
+
+#include "hinic3_cqm_define.h"
+#include "vram_common.h"
+
+#define CQM_SUCCESS 0
+#define CQM_FAIL (-1)
+#define CQM_CONTINUE 1
+
+#define CQM_WQE_WF_LINK 1
+#define CQM_WQE_WF_NORMAL 0
+
+#define CQM_QUEUE_LINK_MODE 0
+#define CQM_QUEUE_RING_MODE 1
+#define CQM_QUEUE_TOE_SRQ_LINK_MODE 2
+#define CQM_QUEUE_RDMA_QUEUE_MODE 3
+
+struct tag_cqm_linkwqe {
+ u32 rsv1 : 14;
+ u32 wf : 1;
+ u32 rsv2 : 14;
+ u32 ctrlsl : 2;
+ u32 o : 1;
+
+ u32 rsv3 : 31;
+ u32 lp : 1; /* lp indicates whether the o-bit is flipping */
+
+ u32 next_page_gpa_h; /* Record the upper 32 bits of the PADDR of the next page */
+ u32 next_page_gpa_l; /* Record the lower 32 bits of the PADDR of the next page */
+
+ u32 next_buffer_addr_h; /* Record the upper 32 bits of the VADDR of the next page */
+ u32 next_buffer_addr_l; /* Record the lower 32 bits of the VADDR of the next page */
+};
+
+/* The WQE size cannot exceed the common RQE size. */
+struct tag_cqm_srq_linkwqe {
+ struct tag_cqm_linkwqe linkwqe;
+ u32 current_buffer_gpa_h;
+ u32 current_buffer_gpa_l;
+ u32 current_buffer_addr_h;
+ u32 current_buffer_addr_l;
+
+ u32 fast_link_page_addr_h;
+ u32 fast_link_page_addr_l;
+
+ u32 fixed_next_buffer_addr_h;
+ u32 fixed_next_buffer_addr_l;
+};
+
+/* First 64B of standard 128B WQE */
+union tag_cqm_linkwqe_first64B {
+ struct tag_cqm_linkwqe basic_linkwqe;
+ struct tag_cqm_srq_linkwqe toe_srq_linkwqe;
+ u32 value[16];
+};
+
+/* Last 64 bytes of the standard 128-byte WQE */
+struct tag_cqm_linkwqe_second64B {
+ u32 rsvd0[4];
+ u32 rsvd1[4];
+ union {
+ struct {
+ u32 rsvd0[3];
+ u32 rsvd1 : 29;
+ u32 toe_o : 1;
+ u32 resvd2 : 2;
+ } bs;
+ u32 value[4];
+ } third_16B;
+
+ union {
+ struct {
+ u32 rsvd0[2];
+ u32 rsvd1 : 31;
+ u32 ifoe_o : 1;
+ u32 rsvd2;
+ } bs;
+ u32 value[4];
+ } forth_16B;
+};
+
+/* Standard 128B WQE structure */
+struct tag_cqm_linkwqe_128B {
+ union tag_cqm_linkwqe_first64B first64B;
+ struct tag_cqm_linkwqe_second64B second64B;
+};
+
+enum cqm_aeq_event_type {
+ CQM_AEQ_BASE_T_NIC = 0,
+ CQM_AEQ_BASE_T_ROCE = 16,
+ CQM_AEQ_BASE_T_FC = 48,
+ CQM_AEQ_BASE_T_IOE = 56,
+ CQM_AEQ_BASE_T_TOE = 64,
+ CQM_AEQ_BASE_T_VBS = 96,
+ CQM_AEQ_BASE_T_IPSEC = 112,
+ CQM_AEQ_BASE_T_MAX = 128
+};
+
+struct tag_service_register_template {
+ u32 service_type;
+ u32 srq_ctx_size;
+ u32 scq_ctx_size;
+ void *service_handle; /* handle passed back to the ceq/aeq callbacks */
+ void (*shared_cq_ceq_callback)(void *service_handle, u32 cqn, void *cq_priv);
+ void (*embedded_cq_ceq_callback)(void *service_handle, u32 xid, void *qpc_priv);
+ void (*no_cq_ceq_callback)(void *service_handle, u32 xid, u32 qid, void *qpc_priv);
+ u8 (*aeq_level_callback)(void *service_handle, u8 event_type, u8 *val);
+ void (*aeq_callback)(void *service_handle, u8 event_type, u8 *val);
+};
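+
+/*
+ * Illustrative sketch (assumption, not driver code): how a service driver
+ * might fill this template before calling cqm3_service_register() declared
+ * below. The callback, handle and context sizes are hypothetical.
+ *
+ *    static void my_scq_ceq_cb(void *service_handle, u32 cqn, void *cq_priv)
+ *    {
+ *        // handle shared CQ completion event
+ *    }
+ *
+ *    struct tag_service_register_template tmpl = {
+ *        .service_type = 0,              // hypothetical service id
+ *        .srq_ctx_size = 64,             // hypothetical context sizes
+ *        .scq_ctx_size = 64,
+ *        .service_handle = my_dev,       // hypothetical per-service handle
+ *        .shared_cq_ceq_callback = my_scq_ceq_cb,
+ *    };
+ *
+ *    if (cqm3_service_register(ex_handle, &tmpl) != CQM_SUCCESS)
+ *        // handle registration failure
+ */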
+
+enum cqm_object_type {
+ CQM_OBJECT_ROOT_CTX = 0, /* 0: root context */
+ CQM_OBJECT_SERVICE_CTX, /* 1: QPC */
+ CQM_OBJECT_MPT, /* 2: RDMA */
+
+ CQM_OBJECT_NONRDMA_EMBEDDED_RQ = 10,
+ CQM_OBJECT_NONRDMA_EMBEDDED_SQ,
+ CQM_OBJECT_NONRDMA_SRQ,
+ CQM_OBJECT_NONRDMA_EMBEDDED_CQ,
+ CQM_OBJECT_NONRDMA_SCQ,
+
+ CQM_OBJECT_RESV = 20,
+
+ CQM_OBJECT_RDMA_QP = 30,
+ CQM_OBJECT_RDMA_SRQ,
+ CQM_OBJECT_RDMA_SCQ,
+
+ CQM_OBJECT_MTT = 50,
+ CQM_OBJECT_RDMARC,
+};
+
+#define CQM_INDEX_INVALID ~(0U)
+#define CQM_INDEX_RESERVED (0xfffff)
+
+#define CQM_RDMA_Q_ROOM_1 (1)
+#define CQM_RDMA_Q_ROOM_2 (2)
+
+#define CQM_HARDWARE_DOORBELL (1)
+#define CQM_SOFTWARE_DOORBELL (2)
+
+struct tag_cqm_buf_list {
+ void *va;
+ dma_addr_t pa;
+ u32 refcount;
+};
+
+struct tag_cqm_buf {
+ struct tag_cqm_buf_list *buf_list;
+ struct tag_cqm_buf_list direct;
+ u32 page_number;
+ u32 buf_number;
+ u32 buf_size;
+ struct vram_buf_info buf_info;
+ u32 bat_entry_type;
+};
+
+struct completion;
+
+struct tag_cqm_object {
+ u32 service_type;
+ u32 object_type;
+ u32 object_size;
+ atomic_t refcount;
+ struct completion free;
+ void *cqm_handle;
+};
+
+struct tag_cqm_qpc_mpt {
+ struct tag_cqm_object object;
+ u32 xid;
+ dma_addr_t paddr;
+ void *priv;
+ u8 *vaddr;
+};
+
+struct tag_cqm_queue_header {
+ u64 doorbell_record;
+ u64 ci_record;
+ u64 rsv1;
+ u64 rsv2;
+};
+
+struct tag_cqm_queue {
+ struct tag_cqm_object object;
+ u32 index;
+ void *priv;
+ u32 current_q_doorbell;
+ u32 current_q_room;
+ struct tag_cqm_buf q_room_buf_1;
+ struct tag_cqm_buf q_room_buf_2;
+ struct tag_cqm_queue_header *q_header_vaddr;
+ dma_addr_t q_header_paddr;
+ u8 *q_ctx_vaddr;
+ dma_addr_t q_ctx_paddr;
+ u32 valid_wqe_num;
+ u8 *tail_container;
+ u8 *head_container;
+ u8 queue_link_mode;
+};
+
+struct tag_cqm_mtt_rdmarc {
+ struct tag_cqm_object object;
+ u32 index_base;
+ u32 index_number;
+ u8 *vaddr;
+};
+
+struct tag_cqm_cmd_buf {
+ void *buf;
+ dma_addr_t dma;
+ u16 size;
+};
+
+enum cqm_cmd_ack_type_e {
+ CQM_CMD_ACK_TYPE_CMDQ = 0,
+ CQM_CMD_ACK_TYPE_SHARE_CQN = 1,
+ CQM_CMD_ACK_TYPE_APP_CQN = 2
+};
+
+#define CQM_CMD_BUF_LEN 0x800
+
+#endif
+
+#define hiudk_cqm_object_delete(x, y) cqm_object_delete(y)
+#define hiudk_cqm_object_funcid(x, y) cqm_object_funcid(y)
+#define hiudk_cqm_object_offset_addr(x, y, z, m) cqm_object_offset_addr(y, z, m)
+#define hiudk_cqm_object_put(x, y) cqm_object_put(y)
+#define hiudk_cqm_object_resize_alloc_new(x, y, z) cqm_object_resize_alloc_new(y, z)
+#define hiudk_cqm_object_resize_free_new(x, y) cqm_object_resize_free_new(y)
+#define hiudk_cqm_object_resize_free_old(x, y) cqm_object_resize_free_old(y)
+#define hiudk_cqm_object_share_recv_queue_add_container(x, y) \
+ cqm_object_share_recv_queue_add_container(y)
+#define hiudk_cqm_object_srq_add_container_free(x, y, z) cqm_object_srq_add_container_free(y, z)
+#define hiudk_cqm_ring_software_db(x, y, z) cqm_ring_software_db(y, z)
+#define hiudk_cqm_srq_used_rq_container_delete(x, y, z) cqm_srq_used_rq_container_delete(y, z)
+
+s32 cqm3_init(void *ex_handle);
+void cqm3_uninit(void *ex_handle);
+
+s32 cqm3_service_register(void *ex_handle,
+ struct tag_service_register_template *service_template);
+void cqm3_service_unregister(void *ex_handle, u32 service_type);
+s32 cqm3_fake_vf_num_set(void *ex_handle, u16 fake_vf_num_cfg);
+bool cqm3_need_secure_mem(void *ex_handle);
+struct tag_cqm_queue *cqm3_object_fc_srq_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+struct tag_cqm_queue *cqm3_object_recv_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 init_rq_num, u32 container_size,
+ u32 wqe_size, void *object_priv);
+struct tag_cqm_queue *cqm3_object_share_recv_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 container_number, u32 container_size,
+ u32 wqe_size);
+struct tag_cqm_qpc_mpt *cqm3_object_qpc_mpt_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ u32 index, bool low2bit_align_en);
+
+struct tag_cqm_queue *cqm3_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+struct tag_cqm_queue *cqm3_object_rdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ bool room_header_alloc, u32 xid);
+struct tag_cqm_mtt_rdmarc *cqm3_object_rdma_table_get(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 index_base, u32 index_number);
+struct tag_cqm_object *cqm3_object_get(void *ex_handle, enum cqm_object_type object_type,
+ u32 index, bool bh);
+struct tag_cqm_cmd_buf *cqm3_cmd_alloc(void *ex_handle);
+void cqm3_cmd_free(void *ex_handle, struct tag_cqm_cmd_buf *cmd_buf);
+
+s32 cqm3_send_cmd_box(void *ex_handle, u8 mod, u8 cmd,
+ struct tag_cqm_cmd_buf *buf_in, struct tag_cqm_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+
+s32 cqm3_lb_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, u8 cos_id,
+ struct tag_cqm_cmd_buf *buf_in, struct tag_cqm_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+s32 cqm3_lb_send_cmd_box_async(void *ex_handle, u8 mod, u8 cmd, u8 cos_id,
+ struct tag_cqm_cmd_buf *buf_in, u16 channel);
+
+s32 cqm3_db_addr_alloc(void *ex_handle, void __iomem **db_addr, void __iomem **dwqe_addr);
+void cqm3_db_addr_free(void *ex_handle, const void __iomem *db_addr,
+ void __iomem *dwqe_addr);
+
+void *cqm3_get_db_addr(void *ex_handle, u32 service_type);
+s32 cqm3_ring_hardware_db(void *ex_handle, u32 service_type, u8 db_count, u64 db);
+
+s32 cqm_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count, u8 pagenum, u64 db);
+s32 cqm3_ring_hardware_db_update_pri(void *ex_handle, u32 service_type, u8 db_count, u64 db);
+s32 cqm3_bloomfilter_inc(void *ex_handle, u16 func_id, u64 id);
+s32 cqm3_bloomfilter_dec(void *ex_handle, u16 func_id, u64 id);
+void *cqm3_gid_base(void *ex_handle);
+void *cqm3_timer_base(void *ex_handle);
+void cqm3_function_timer_clear(void *ex_handle, u32 function_id);
+void cqm3_function_hash_buf_clear(void *ex_handle, s32 global_funcid);
+s32 cqm3_ring_direct_wqe_db(void *ex_handle, u32 service_type, u8 db_count, void *direct_wqe);
+s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type, void *direct_wqe);
+
+s32 cqm3_object_share_recv_queue_add_container(struct tag_cqm_queue *common);
+s32 cqm3_object_srq_add_container_free(struct tag_cqm_queue *common, u8 **container_addr);
+
+s32 cqm3_ring_software_db(struct tag_cqm_object *object, u64 db_record);
+void cqm3_object_put(struct tag_cqm_object *object);
+
+/**
+ * @brief Obtains the function ID of an object.
+ * @param object object pointer
+ * @retval >=0 function's ID
+ * @retval -1 Fails
+ */
+s32 cqm3_object_funcid(struct tag_cqm_object *object);
+
+s32 cqm3_object_resize_alloc_new(struct tag_cqm_object *object, u32 object_size);
+void cqm3_object_resize_free_new(struct tag_cqm_object *object);
+void cqm3_object_resize_free_old(struct tag_cqm_object *object);
+
+/**
+ * @brief Release a container
+ * @param object object pointer
+ * @param container Pointer to the container to be released
+ * @retval void
+ */
+void cqm3_srq_used_rq_container_delete(struct tag_cqm_object *object, u8 *container);
+
+void cqm3_object_delete(struct tag_cqm_object *object);
+
+/**
+ * @brief Obtains the PADDR and VADDR of the specified offset in the object buffer.
+ * @details Only rdma table lookup is supported
+ * @param object object pointer
+ * @param offset For an RDMA table, the offset is the absolute index number.
+ * @param paddr The physical address is returned only for the RDMA table.
+ * @retval u8 *buffer Virtual address at specified offset
+ */
+u8 *cqm3_object_offset_addr(struct tag_cqm_object *object, u32 offset, dma_addr_t *paddr);
+
+#endif /* CQM_H */
+
diff --git a/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm_define.h b/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm_define.h
new file mode 100644
index 0000000..71d6166
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm_define.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CQM_DEFINE_H
+#define HINIC3_CQM_DEFINE_H
+#if !defined(HIUDK_ULD) && !defined(HIUDK_SDK_ADPT)
+#define cqm_init cqm3_init
+#define cqm_uninit cqm3_uninit
+#define cqm_service_register cqm3_service_register
+#define cqm_service_unregister cqm3_service_unregister
+#define cqm_bloomfilter_dec cqm3_bloomfilter_dec
+#define cqm_bloomfilter_inc cqm3_bloomfilter_inc
+#define cqm_cmd_alloc cqm3_cmd_alloc
+#define cqm_cmd_free cqm3_cmd_free
+#define cqm_send_cmd_box cqm3_send_cmd_box
+#define cqm_lb_send_cmd_box cqm3_lb_send_cmd_box
+#define cqm_lb_send_cmd_box_async cqm3_lb_send_cmd_box_async
+#define cqm_db_addr_alloc cqm3_db_addr_alloc
+#define cqm_db_addr_free cqm3_db_addr_free
+#define cqm_ring_hardware_db cqm3_ring_hardware_db
+#define cqm_ring_software_db cqm3_ring_software_db
+#define cqm_object_fc_srq_create cqm3_object_fc_srq_create
+#define cqm_object_share_recv_queue_create cqm3_object_share_recv_queue_create
+#define cqm_object_share_recv_queue_add_container cqm3_object_share_recv_queue_add_container
+#define cqm_object_srq_add_container_free cqm3_object_srq_add_container_free
+#define cqm_object_recv_queue_create cqm3_object_recv_queue_create
+#define cqm_object_qpc_mpt_create cqm3_object_qpc_mpt_create
+#define cqm_object_nonrdma_queue_create cqm3_object_nonrdma_queue_create
+#define cqm_object_rdma_queue_create cqm3_object_rdma_queue_create
+#define cqm_object_rdma_table_get cqm3_object_rdma_table_get
+#define cqm_object_delete cqm3_object_delete
+#define cqm_object_offset_addr cqm3_object_offset_addr
+#define cqm_object_get cqm3_object_get
+#define cqm_object_put cqm3_object_put
+#define cqm_object_funcid cqm3_object_funcid
+#define cqm_object_resize_alloc_new cqm3_object_resize_alloc_new
+#define cqm_object_resize_free_new cqm3_object_resize_free_new
+#define cqm_object_resize_free_old cqm3_object_resize_free_old
+#define cqm_function_timer_clear cqm3_function_timer_clear
+#define cqm_function_hash_buf_clear cqm3_function_hash_buf_clear
+#define cqm_srq_used_rq_container_delete cqm3_srq_used_rq_container_delete
+#define cqm_timer_base cqm3_timer_base
+#define cqm_get_db_addr cqm3_get_db_addr
+#define cqm_ring_direct_wqe_db cqm3_ring_direct_wqe_db
+#define cqm_fake_vf_num_set cqm3_fake_vf_num_set
+#define cqm_need_secure_mem cqm3_need_secure_mem
+#endif
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/hinic3_lld.h b/drivers/net/ethernet/huawei/hinic3/include/hinic3_lld.h
new file mode 100644
index 0000000..e36ba1d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/hinic3_lld.h
@@ -0,0 +1,225 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_LLD_H
+#define HINIC3_LLD_H
+
+#include "hinic3_crm.h"
+
+#define WAIT_TIME 1
+
+#ifdef HIUDK_SDK
+
+int hwsdk_set_vf_load_state(struct hinic3_lld_dev *lld_dev, bool vf_load_state);
+
+int hwsdk_set_vf_service_load(struct hinic3_lld_dev *lld_dev, u16 service,
+ bool vf_srv_load);
+
+int hwsdk_set_vf_service_state(struct hinic3_lld_dev *lld_dev, u16 vf_func_id,
+ u16 service, bool en);
+#else
+struct hinic3_lld_dev {
+ struct pci_dev *pdev;
+ void *hwdev;
+};
+
+struct hinic3_uld_info {
+ /* When the function does not need to initialize the corresponding uld,
+ * @probe needs to return 0 and uld_dev is set to NULL;
+ * if uld_dev is NULL, @remove will not be called when uninstalling
+ */
+ int (*probe)(struct hinic3_lld_dev *lld_dev, void **uld_dev, char *uld_dev_name);
+ void (*remove)(struct hinic3_lld_dev *lld_dev, void *uld_dev);
+ int (*suspend)(struct hinic3_lld_dev *lld_dev, void *uld_dev, pm_message_t state);
+ int (*resume)(struct hinic3_lld_dev *lld_dev, void *uld_dev);
+ void (*event)(struct hinic3_lld_dev *lld_dev, void *uld_dev,
+ struct hinic3_event_info *event);
+ int (*ioctl)(void *uld_dev, u32 cmd, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+};
+#endif
+
+#ifndef HIUDK_ULD
+/**
+ * hinic3_register_uld - register an upper-layer driver
+ * @type: uld service type
+ * @uld_info: uld callback
+ *
+ * Registers an upper-layer driver.
+ * Traverse existing devices and call @probe to initialize the uld device.
+ */
+int hinic3_register_uld(enum hinic3_service_type type, struct hinic3_uld_info *uld_info);
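+
+/*
+ * Illustrative sketch (assumption, not driver code): a minimal ULD that only
+ * implements probe/remove. The my_* names are hypothetical; the service type
+ * value is assumed to be defined in hinic3_crm.h.
+ *
+ *    static int my_probe(struct hinic3_lld_dev *lld_dev, void **uld_dev,
+ *                        char *uld_dev_name)
+ *    {
+ *        *uld_dev = NULL; // nothing to initialize for this function
+ *        return 0;
+ *    }
+ *
+ *    static void my_remove(struct hinic3_lld_dev *lld_dev, void *uld_dev) { }
+ *
+ *    static struct hinic3_uld_info my_uld = {
+ *        .probe = my_probe,
+ *        .remove = my_remove,
+ *    };
+ *
+ *    err = hinic3_register_uld(SERVICE_T_NIC, &my_uld);
+ */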
+
+/**
+ * hinic3_unregister_uld - unregister an upper-layer driver
+ * @type: uld service type
+ *
+ * Traverse existing devices and call @remove to uninstall the uld device.
+ * Unregisters an existing upper-layer driver.
+ */
+void hinic3_unregister_uld(enum hinic3_service_type type);
+
+void lld_hold(void);
+void lld_put(void);
+
+/**
+ * @brief hinic3_get_lld_dev_by_chip_name - get lld device by chip name
+ * @param chip_name: chip name
+ *
+ * The value of lld_dev reference increases when lld_dev is obtained. The caller needs
+ * to release the reference by calling lld_dev_put.
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_name(const char *chip_name);
+
+/**
+ * @brief lld_dev_hold - get reference to lld_dev
+ * @param dev: lld device
+ *
+ * Hold reference to device to keep it from being freed
+ **/
+void lld_dev_hold(struct hinic3_lld_dev *dev);
+
+/**
+ * @brief lld_dev_put - release reference to lld_dev
+ * @param dev: lld device
+ *
+ * Release reference to device to allow it to be freed
+ **/
+void lld_dev_put(struct hinic3_lld_dev *dev);
+
+/**
+ * @brief hinic3_get_lld_dev_by_dev_name - get lld device by uld device name
+ * @param dev_name: uld device name
+ * @param type: uld service type; when the type is SERVICE_T_MAX, try to match
+ * all ULD names to get uld_dev
+ *
+ * The value of lld_dev reference increases when lld_dev is obtained. The caller needs
+ * to release the reference by calling lld_dev_put.
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name(const char *dev_name,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_lld_dev_by_dev_name_unsafe - get lld device by uld device name
+ * @param dev_name: uld device name
+ * @param type: uld service type; when the type is SERVICE_T_MAX, try to match
+ * all ULD names to get uld_dev
+ *
+ * hinic3_get_lld_dev_by_dev_name_unsafe() is completely analogous to
+ * hinic3_get_lld_dev_by_dev_name(); the only difference is that the reference
+ * of lld_dev is not increased when lld_dev is obtained.
+ *
+ * The caller must ensure that lld_dev will not be freed during the remove process
+ * when using lld_dev.
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name_unsafe(const char *dev_name,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_lld_dev_by_chip_and_port - get lld device by chip name and port id
+ * @param chip_name: chip name
+ * @param port_id: port id
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_and_port(const char *chip_name, u8 port_id);
+
+/**
+ * @brief hinic3_get_ppf_dev - get ppf device without depend on input parameter
+ **/
+void *hinic3_get_ppf_dev(void);
+
+/**
+ * @brief hinic3_get_ppf_lld_dev - get ppf lld device by current function's lld device
+ * @param lld_dev: current function's lld device
+ *
+ * The value of lld_dev reference increases when lld_dev is obtained. The caller needs
+ * to release the reference by calling lld_dev_put.
+ **/
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev(struct hinic3_lld_dev *lld_dev);
+
+/**
+ * @brief hinic3_get_ppf_lld_dev_unsafe - get ppf lld device by current function's lld device
+ * @param lld_dev: current function's lld device
+ *
+ * hinic3_get_ppf_lld_dev_unsafe() is completely analogous to hinic3_get_ppf_lld_dev();
+ * the only difference is that the reference of lld_dev is not increased when lld_dev is obtained.
+ *
+ * The caller must ensure that ppf's lld_dev will not be freed during the remove process
+ * when using ppf lld_dev.
+ **/
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev_unsafe(struct hinic3_lld_dev *lld_dev);
+
+/**
+ * @brief uld_dev_hold - get reference to uld_dev
+ * @param lld_dev: lld device
+ * @param type: uld service type
+ *
+ * Hold reference to uld device to keep it from being freed
+ **/
+void uld_dev_hold(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief uld_dev_put - release reference to lld_dev
+ * @param dev: lld device
+ * @param type: uld service type
+ *
+ * Release reference to uld device to allow it to be freed
+ **/
+void uld_dev_put(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_uld_dev - get uld device by lld device
+ * @param lld_dev: lld device
+ * @param type: uld service type
+ *
+ * The value of uld_dev reference increases when uld_dev is obtained. The caller needs
+ * to release the reference by calling uld_dev_put.
+ **/
+void *hinic3_get_uld_dev(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_uld_dev_unsafe - get uld device by lld device
+ * @param lld_dev: lld device
+ * @param type: uld service type
+ *
+ * hinic3_get_uld_dev_unsafe() is completely analogous to hinic3_get_uld_dev();
+ * the only difference is that the reference of uld_dev is not increased when uld_dev is obtained.
+ *
+ * The caller must ensure that uld_dev will not be freed during the remove process
+ * when using uld_dev.
+ **/
+void *hinic3_get_uld_dev_unsafe(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_chip_name - get chip name by lld device
+ * @param lld_dev: lld device
+ * @param chip_name: String for storing the chip name
+ * @param max_len: Maximum number of characters to be copied for chip_name
+ **/
+int hinic3_get_chip_name(struct hinic3_lld_dev *lld_dev, char *chip_name, u16 max_len);
+
+struct card_node *hinic3_get_chip_node_by_lld(struct hinic3_lld_dev *lld_dev);
+
+struct hinic3_hwdev *hinic3_get_sdk_hwdev_by_lld(struct hinic3_lld_dev *lld_dev);
+
+bool hinic3_get_vf_service_load(struct pci_dev *pdev, u16 service);
+
+int hinic3_set_vf_service_load(struct pci_dev *pdev, u16 service,
+ bool vf_srv_load);
+
+int hinic3_set_vf_service_state(struct pci_dev *pdev, u16 vf_func_id,
+ u16 service, bool en);
+
+bool hinic3_get_vf_load_state(struct pci_dev *pdev);
+
+int hinic3_set_vf_load_state(struct pci_dev *pdev, bool vf_load_state);
+
+int hinic3_attach_nic(struct hinic3_lld_dev *lld_dev);
+
+void hinic3_detach_nic(const struct hinic3_lld_dev *lld_dev);
+
+int hinic3_attach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+void hinic3_detach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+const char **hinic3_get_uld_names(void);
+int hinic3_lld_init(void);
+void hinic3_lld_exit(void);
+#endif
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/hinic3_profile.h b/drivers/net/ethernet/huawei/hinic3/include/hinic3_profile.h
new file mode 100644
index 0000000..e0bd256
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/hinic3_profile.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_PROFILE_H
+#define HINIC3_PROFILE_H
+
+typedef bool (*hinic3_is_match_prof)(void *device);
+typedef void *(*hinic3_init_prof_attr)(void *device);
+typedef void (*hinic3_deinit_prof_attr)(void *prof_attr);
+
+enum prof_adapter_type {
+ PROF_ADAP_TYPE_INVALID,
+ PROF_ADAP_TYPE_PANGEA = 1,
+
+ /* Add prof adapter type before default */
+ PROF_ADAP_TYPE_DEFAULT,
+};
+
+/**
+ * struct hinic3_prof_adapter - custom scene's profile adapter
+ * @type: adapter type
+ * @match: Check whether the current function is used in the custom scene.
+ * Implemented in the current source file
+ * @init: When @match return true, the initialization function called in probe.
+ * Implemented in the source file of the custom scene
+ * @deinit: When @match return true, the deinitialization function called when
+ * remove. Implemented in the source file of the custom scene
+ */
+struct hinic3_prof_adapter {
+ enum prof_adapter_type type;
+ hinic3_is_match_prof match;
+ hinic3_init_prof_attr init;
+ hinic3_deinit_prof_attr deinit;
+};
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline struct hinic3_prof_adapter *hinic3_prof_init(void *device,
+ struct hinic3_prof_adapter *adap_objs,
+ int num_adap, void **prof_attr)
+{
+ struct hinic3_prof_adapter *prof_obj = NULL;
+ int i;
+
+ for (i = 0; i < num_adap; i++) {
+ prof_obj = &adap_objs[i];
+ if (!(prof_obj->match && prof_obj->match(device)))
+ continue;
+
+ *prof_attr = prof_obj->init ? prof_obj->init(device) : NULL;
+
+ return prof_obj;
+ }
+
+ return NULL;
+}
+
+static inline void hinic3_prof_deinit(struct hinic3_prof_adapter *prof_obj, void *prof_attr)
+{
+ if (!prof_obj)
+ return;
+
+ if (prof_obj->deinit)
+ prof_obj->deinit(prof_attr);
+}
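+
+/*
+ * Illustrative sketch (assumption, not driver code): walking an adapter table
+ * with the two helpers above. The pangea_* callbacks are hypothetical; the
+ * struct layout and PROF_ADAP_TYPE_* values come from this header.
+ *
+ *    static struct hinic3_prof_adapter adapters[] = {
+ *        { PROF_ADAP_TYPE_PANGEA, pangea_match, pangea_init, pangea_deinit },
+ *        { PROF_ADAP_TYPE_DEFAULT, NULL, NULL, NULL }, // fallback entry
+ *    };
+ *
+ *    void *prof_attr = NULL;
+ *    struct hinic3_prof_adapter *prof;
+ *
+ *    prof = hinic3_prof_init(dev, adapters, ARRAY_SIZE(adapters), &prof_attr);
+ *    // ... use prof_attr while the device is alive ...
+ *    hinic3_prof_deinit(prof, prof_attr);
+ */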
+
+/* module-level interface */
+#ifdef CONFIG_MODULE_PROF
+struct hinic3_module_ops {
+ int (*module_prof_init)(void);
+ void (*module_prof_exit)(void);
+ void (*probe_fault_process)(void *pdev, u16 level);
+ int (*probe_pre_process)(void *pdev);
+ void (*probe_pre_unprocess)(void *pdev);
+};
+
+struct hinic3_module_ops *hinic3_get_module_prof_ops(void);
+
+static inline void hinic3_probe_fault_process(void *pdev, u16 level)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (ops && ops->probe_fault_process)
+ ops->probe_fault_process(pdev, level);
+}
+
+static inline int hinic3_module_pre_init(void)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (!ops || !ops->module_prof_init)
+ return -EINVAL;
+
+ return ops->module_prof_init();
+}
+
+static inline void hinic3_module_post_exit(void)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (ops && ops->module_prof_exit)
+ ops->module_prof_exit();
+}
+
+static inline int hinic3_probe_pre_process(void *pdev)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (!ops || !ops->probe_pre_process)
+ return -EINVAL;
+
+ return ops->probe_pre_process(pdev);
+}
+
+static inline void hinic3_probe_pre_unprocess(void *pdev)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (ops && ops->probe_pre_unprocess)
+ ops->probe_pre_unprocess(pdev);
+}
+#else
+static inline void hinic3_probe_fault_process(void *pdev, u16 level) { };
+
+static inline int hinic3_module_pre_init(void)
+{
+ return 0;
+}
+
+static inline void hinic3_module_post_exit(void) { };
+
+static inline int hinic3_probe_pre_process(void *pdev)
+{
+ return 0;
+}
+
+static inline void hinic3_probe_pre_unprocess(void *pdev) { };
+#endif
+
+#ifdef LLT_STATIC_DEF_SAVED
+#define static
+#undef LLT_STATIC_DEF_SAVED
+#endif
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mag_mpu_cmd.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mag_mpu_cmd.h
new file mode 100644
index 0000000..199f17a
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mag_mpu_cmd.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MAG_MPU_CMD_H
+#define MAG_MPU_CMD_H
+
+/* Definition of the SerDes/MAG message command word */
+enum mag_cmd {
+ SERDES_CMD_PROCESS = 0, /* serdes cmd @see struct serdes_cmd_in */
+
+ MAG_CMD_SET_PORT_CFG = 1, /* set port cfg function @see struct mag_cmd_set_port_cfg */
+ MAG_CMD_SET_PORT_ADAPT = 2, /* set port adapt mode @see struct mag_cmd_set_port_adapt */
+ MAG_CMD_CFG_LOOPBACK_MODE = 3, /* set port loopback mode @see mag_cmd_cfg_loopback_mode */
+
+ MAG_CMD_GET_PORT_ENABLE = 5, /* get port enable status @see mag_cmd_get_port_enable */
+ MAG_CMD_SET_PORT_ENABLE = 6, /* set port enable mode @see mag_cmd_set_port_enable */
+ MAG_CMD_GET_LINK_STATUS = 7, /* get port link status @see mag_cmd_get_link_status */
+ MAG_CMD_SET_LINK_FOLLOW = 8, /* set port link_follow mode @see mag_cmd_set_link_follow */
+ MAG_CMD_SET_PMA_ENABLE = 9, /* set pma enable mode @see struct mag_cmd_set_pma_enable */
+ MAG_CMD_CFG_FEC_MODE = 10, /* set port fec mode @see struct mag_cmd_cfg_fec_mode */
+ MAG_CMD_GET_BOND_STATUS = 11, /* reserved for future use */
+
+ MAG_CMD_CFG_AN_TYPE = 12, /* reserved for future use */
+ MAG_CMD_CFG_LINK_TIME = 13, /* get link time @see struct mag_cmd_get_link_time */
+
+ MAG_CMD_SET_PANGEA_ADAPT = 15, /* set pangea adapt mode @see mag_cmd_set_pangea_adapt */
+
+ /* Bios link configuration dependency 30-49 */
+ MAG_CMD_CFG_BIOS_LINK_CFG = 31, /* reserved for future use */
+ MAG_CMD_RESTORE_LINK_CFG = 32, /* restore link cfg @see mag_cmd_restore_link_cfg */
+ MAG_CMD_ACTIVATE_BIOS_LINK_CFG = 33, /* active bios link cfg */
+
+ /* Optical module, LED, PHY and other peripheral configuration management 50-99 */
+ /* LED */
+ MAG_CMD_SET_LED_CFG = 50, /* set led cfg @see struct mag_cmd_set_led_cfg */
+
+ /* PHY */
+ MAG_CMD_GET_PHY_INIT_STATUS = 55, /* reserved for future use */
+
+ /* Optical module */
+ MAG_CMD_GET_XSFP_INFO = 60, /* get xsfp info @see struct mag_cmd_get_xsfp_info */
+ MAG_CMD_SET_XSFP_ENABLE = 61, /* set xsfp enable mode @see mag_cmd_set_xsfp_enable */
+ MAG_CMD_GET_XSFP_PRESENT = 62, /* get xsfp present status @see mag_cmd_get_xsfp_present */
+ MAG_CMD_SET_XSFP_RW = 63, /* sfp/qsfp single byte read/write, @see mag_cmd_set_xsfp_rw */
+ MAG_CMD_CFG_XSFP_TEMPERATURE = 64, /* get xsfp temp @see mag_cmd_sfp_temp_out_info */
+ /**< set xsfp tlv info @see struct mag_cmd_set_xsfp_tlv_req */
+ MAG_CMD_SET_XSFP_TLV_INFO = 65,
+ /**< get xsfp tlv info @see struct drv_mag_cmd_get_xsfp_tlv_rsp */
+ MAG_CMD_GET_XSFP_TLV_INFO = 66,
+
+ /* Event reported 100-149 */
+ MAG_CMD_WIRE_EVENT = 100,
+ MAG_CMD_LINK_ERR_EVENT = 101,
+
+ /* DFX, Counter */
+ MAG_CMD_EVENT_PORT_INFO = 150, /* get port event info @see mag_cmd_event_port_info */
+ MAG_CMD_GET_PORT_STAT = 151, /* get port state @see struct mag_cmd_get_port_stat */
+ MAG_CMD_CLR_PORT_STAT = 152, /* clear port state @see struct mag_cmd_port_stats_info */
+ MAG_CMD_GET_PORT_INFO = 153, /* get port info @see struct mag_cmd_get_port_info */
+ MAG_CMD_GET_PCS_ERR_CNT = 154, /* pcs err count @see struct mag_cmd_event_port_info */
+ MAG_CMD_GET_MAG_CNT = 155, /* fec code count @see struct mag_cmd_get_mag_cnt */
+ MAG_CMD_DUMP_ANTRAIN_INFO = 156, /* dump anlt info @see mag_cmd_dump_antrain_info */
+
+ /* patch reserve cmd */
+ MAG_CMD_PATCH_RSVD_0 = 200,
+ MAG_CMD_PATCH_RSVD_1 = 201,
+ MAG_CMD_PATCH_RSVD_2 = 202,
+ MAG_CMD_PATCH_RSVD_3 = 203,
+ MAG_CMD_PATCH_RSVD_4 = 204,
+
+ MAG_CMD_MAX = 0xFF
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_board_defs.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_board_defs.h
new file mode 100644
index 0000000..88a9c0d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_board_defs.h
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MPU_BOARD_DEFS_H
+#define MPU_BOARD_DEFS_H
+
+#define BOARD_TYPE_TEST_RANGE_START 1
+#define BOARD_TYPE_TEST_RANGE_END 29
+#define BOARD_TYPE_STRG_RANGE_START 30
+#define BOARD_TYPE_STRG_RANGE_END 99
+#define BOARD_TYPE_CAL_RANGE_START 100
+#define BOARD_TYPE_CAL_RANGE_END 169
+#define BOARD_TYPE_CLD_RANGE_START 170
+#define BOARD_TYPE_CLD_RANGE_END 239
+#define BOARD_TYPE_RSVD_RANGE_START 240
+#define BOARD_TYPE_RSVD_RANGE_END 255
+
+enum board_type_define_e {
+ BOARD_TYPE_MPU_DEFAULT = 0,
+ BOARD_TYPE_TEST_EVB_4X25G = 1,
+ BOARD_TYPE_TEST_CEM_2X100G = 2,
+ BOARD_TYPE_STRG_SMARTIO_4X32G_FC = 30,
+ BOARD_TYPE_STRG_SMARTIO_4X25G_TIOE = 31,
+ BOARD_TYPE_STRG_SMARTIO_4X25G_ROCE = 32,
+ BOARD_TYPE_STRG_SMARTIO_4X25G_ROCE_AA = 33,
+ BOARD_TYPE_STRG_SMARTIO_4X25G_SRIOV = 34,
+ BOARD_TYPE_STRG_SMARTIO_4X25G_SRIOV_SW = 35,
+ BOARD_TYPE_STRG_4X25G_COMSTORAGE = 36,
+ BOARD_TYPE_STRG_2X100G_TIOE = 40,
+ BOARD_TYPE_STRG_2X100G_ROCE = 41,
+ BOARD_TYPE_STRG_2X100G_ROCE_AA = 42,
+ BOARD_TYPE_CAL_2X25G_NIC_75MPPS = 100,
+ BOARD_TYPE_CAL_2X25G_NIC_40MPPS = 101,
+ BOARD_TYPE_CAL_2X100G_DPU_VL = 102,
+ BOARD_TYPE_CAL_4X25G_NIC_120MPPS = 105,
+ BOARD_TYPE_CAL_4X25G_COMSTORAGE = 106,
+ BOARD_TYPE_CAL_2X32G_FC_HBA = 110,
+ BOARD_TYPE_CAL_2X16G_FC_HBA = 111,
+ BOARD_TYPE_CAL_2X100G_NIC_120MPPS = 115,
+ BOARD_TYPE_CAL_2X25G_DPU_BD = 116,
+ BOARD_TYPE_CAL_2X100G_TCE_BACKPLANE = 117,
+ BOARD_TYPE_CAL_4X25G_DPU_VL = 118,
+ BOARD_TYPE_CAL_4X25G_SMARTNIC_120MPPS = 119,
+ BOARD_TYPE_CAL_2X100G_SMARTNIC_120MPPS = 120,
+ BOARD_TYPE_CAL_6X25G_DPU_VL = 121,
+ BOARD_TYPE_CAL_4X25G_DPU_BD = 122,
+ BOARD_TYPE_CAL_2X25G_NIC_4HOST = 123,
+ BOARD_TYPE_CAL_2X10G_LOW_POWER = 125,
+ BOARD_TYPE_CAL_2X200G_NIC_INTERNET = 127,
+ BOARD_TYPE_CAL_1X100GR2_OCP = 129,
+ BOARD_TYPE_CAL_2X200G_DPU_VL = 130,
+ BOARD_TYPE_CLD_2X100G_SDI5_1 = 170,
+ BOARD_TYPE_CLD_2X25G_SDI5_0_LITE = 171,
+ BOARD_TYPE_CLD_2X100G_SDI5_0 = 172,
+ BOARD_TYPE_CLD_4X25G_SDI5_0_C = 175,
+ BOARD_TYPE_MAX_INDEX = 0xFF
+};
+
+static inline u32 spu_board_type_valid(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CLD_2X25G_SDI5_0_LITE) ||
+ ((board_type) == BOARD_TYPE_CLD_2X100G_SDI5_0) ||
+ ((board_type) == BOARD_TYPE_CLD_4X25G_SDI5_0_C) ||
+ ((board_type) == BOARD_TYPE_CAL_2X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_2X200G_DPU_VL);
+}
+
+static inline int board_type_is_sdi_50(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CLD_2X25G_SDI5_0_LITE) ||
+ ((board_type) == BOARD_TYPE_CLD_2X100G_SDI5_0) ||
+ ((board_type) == BOARD_TYPE_CLD_4X25G_SDI5_0_C);
+}
+
+static inline int board_type_is_sdi(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CLD_2X100G_SDI5_1) ||
+ ((board_type) == BOARD_TYPE_CLD_2X25G_SDI5_0_LITE) ||
+ ((board_type) == BOARD_TYPE_CLD_2X100G_SDI5_0) ||
+ ((board_type) == BOARD_TYPE_CLD_4X25G_SDI5_0_C);
+}
+
+static inline int board_type_is_dpu_spu(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CAL_2X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_2X200G_DPU_VL);
+}
+
+static inline int board_type_is_dpu(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CAL_2X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_6X25G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_2X200G_DPU_VL);
+}
+
+static inline int board_type_is_smartnic(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CAL_4X25G_SMARTNIC_120MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_SMARTNIC_120MPPS);
+}
+
+/* This helper checks whether the board is a distributed-storage standard card or a
+ * compute standard card (with the ROCE feature); it is only used when deciding how to
+ * handle the conflicting LLDP TX command word.
+ */
+static inline int board_type_is_compute(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CAL_2X25G_NIC_75MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_2X25G_NIC_40MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_NIC_120MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_COMSTORAGE) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_NIC_120MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_2X10G_LOW_POWER) ||
+ ((board_type) == BOARD_TYPE_CAL_2X200G_NIC_INTERNET) ||
+ ((board_type) == BOARD_TYPE_CAL_1X100GR2_OCP) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_SMARTNIC_120MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_2X25G_NIC_4HOST) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_SMARTNIC_120MPPS);
+}
+
+/* This helper checks whether the NIC needs to be reset when the server issues a reboot */
+static inline int board_type_is_multi_socket(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CAL_1X100GR2_OCP);
+}
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_cmd_base_defs.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_cmd_base_defs.h
new file mode 100644
index 0000000..e65c206
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_cmd_base_defs.h
@@ -0,0 +1,165 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2021-2023. All rights reserved.
+ * Description : common definitions
+ */
+
+#ifndef COMM_DEFS_H
+#define COMM_DEFS_H
+
+/** MPU CMD MODULE TYPE */
+enum hinic3_mod_type {
+ HINIC3_MOD_COMM = 0, /* HW communication module */
+ HINIC3_MOD_L2NIC = 1, /* L2NIC module */
+ HINIC3_MOD_ROCE = 2,
+ HINIC3_MOD_PLOG = 3,
+ HINIC3_MOD_TOE = 4,
+ HINIC3_MOD_FLR = 5,
+ HINIC3_MOD_VROCE = 6,
+ HINIC3_MOD_CFGM = 7, /* Configuration management */
+ HINIC3_MOD_CQM = 8,
+ HINIC3_MOD_VMSEC = 9,
+ COMM_MOD_FC = 10,
+ HINIC3_MOD_OVS = 11,
+ HINIC3_MOD_DSW = 12,
+ HINIC3_MOD_MIGRATE = 13,
+ HINIC3_MOD_HILINK = 14,
+ HINIC3_MOD_CRYPT = 15, /* secure crypto module */
+ HINIC3_MOD_VIO = 16,
+ HINIC3_MOD_IMU = 17,
+ HINIC3_MOD_DFX = 18, /* DFX */
+ HINIC3_MOD_HW_MAX = 19, /* hardware max module id */
+ /* Software module id, for PF/VF and multi-host */
+ HINIC3_MOD_SW_FUNC = 20,
+ HINIC3_MOD_MAX,
+};
+
+/* Flags for function reset, used to indicate which resources to clean up */
+enum func_reset_flag_e {
+ RES_TYPE_FLUSH_BIT = 0,
+ RES_TYPE_MQM,
+ RES_TYPE_SMF,
+ RES_TYPE_PF_BW_CFG,
+
+ RES_TYPE_COMM = 10,
+ RES_TYPE_COMM_MGMT_CH, /* clear mbox and aeq, The RES_TYPE_COMM bit must be set */
+ RES_TYPE_COMM_CMD_CH, /* clear cmdq and ceq, The RES_TYPE_COMM bit must be set */
+ RES_TYPE_NIC,
+ RES_TYPE_OVS,
+ RES_TYPE_VBS,
+ RES_TYPE_ROCE,
+ RES_TYPE_FC,
+ RES_TYPE_TOE,
+ RES_TYPE_IPSEC,
+ RES_TYPE_MAX,
+};
+
+#define HINIC3_COMM_RES \
+ ((1 << RES_TYPE_COMM) | (1 << RES_TYPE_COMM_CMD_CH) | \
+ (1 << RES_TYPE_FLUSH_BIT) | (1 << RES_TYPE_MQM) | \
+ (1 << RES_TYPE_SMF) | (1 << RES_TYPE_PF_BW_CFG))
+
+#define HINIC3_NIC_RES (1 << RES_TYPE_NIC)
+#define HINIC3_OVS_RES (1 << RES_TYPE_OVS)
+#define HINIC3_VBS_RES (1 << RES_TYPE_VBS)
+#define HINIC3_ROCE_RES (1 << RES_TYPE_ROCE)
+#define HINIC3_FC_RES (1 << RES_TYPE_FC)
+#define HINIC3_TOE_RES (1 << RES_TYPE_TOE)
+#define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC)
+
+/* MODE: OVS, NIC, UNKNOWN */
+#define HINIC3_WORK_MODE_OVS 0
+#define HINIC3_WORK_MODE_UNKNOWN 1
+#define HINIC3_WORK_MODE_NIC 2
+
+#define DEVICE_TYPE_L2NIC 0
+#define DEVICE_TYPE_NVME 1
+#define DEVICE_TYPE_VIRTIO_NET 2
+#define DEVICE_TYPE_VIRTIO_BLK 3
+#define DEVICE_TYPE_VIRTIO_VSOCK 4
+#define DEVICE_TYPE_VIRTIO_NET_TRANSITION 5
+#define DEVICE_TYPE_VIRTIO_BLK_TRANSITION 6
+#define DEVICE_TYPE_VIRTIO_SCSI_TRANSITION 7
+#define DEVICE_TYPE_VIRTIO_HPC 8
+#define DEVICE_TYPE_VIRTIO_FS 9
+
+#define IS_STORAGE_DEVICE_TYPE(dev_type) \
+ ((dev_type) == DEVICE_TYPE_VIRTIO_BLK || \
+ (dev_type) == DEVICE_TYPE_VIRTIO_BLK_TRANSITION || \
+ (dev_type) == DEVICE_TYPE_VIRTIO_SCSI_TRANSITION || \
+ (dev_type) == DEVICE_TYPE_VIRTIO_FS)
+
+#define MGMT_MSG_CMD_OP_SET 1
+#define MGMT_MSG_CMD_OP_GET 0
+
+#define MGMT_MSG_CMD_OP_START 1
+#define MGMT_MSG_CMD_OP_STOP 0
+
+#define HOT_REPLACE_PARTITION_NUM 2
+
+enum hinic3_svc_type {
+ SVC_T_COMM = 0,
+ SVC_T_NIC,
+ SVC_T_OVS,
+ SVC_T_ROCE,
+ SVC_T_TOE,
+ SVC_T_IOE,
+ SVC_T_FC,
+ SVC_T_VBS,
+ SVC_T_IPSEC,
+ SVC_T_VIRTIO,
+ SVC_T_MIGRATE,
+ SVC_T_PPA,
+ SVC_T_MAX,
+};
+
+/**
+ * Common header control information of the COMM command words exchanged between the driver and the PF.
+ * struct mgmt_msg_head and struct comm_info_head share the same structure.
+ */
+struct mgmt_msg_head {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+};
+
+/**
+ * Common header control information of the COMM command words exchanged between the driver and the PF.
+ */
+struct comm_info_head {
+ /** response status code, 0: success, others: error code */
+ u8 status;
+
+ /** firmware version for command */
+ u8 version;
+
+ /** response aeq number, unused for now */
+ u8 rep_aeq_num;
+ u8 rsvd[5];
+};
+
+static inline u32 get_function_partition(u32 function_id, u32 port_num)
+{
+ return (function_id / port_num) % HOT_REPLACE_PARTITION_NUM;
+}
+
+static inline u32 is_primary_function(u32 function_id, u32 port_num)
+{
+ return (function_id / port_num) % HOT_REPLACE_PARTITION_NUM == 0;
+}
+
+static inline u32 mpu_nic_get_primary_function(u32 function_id, u32 port_num)
+{
+ return ((function_id / port_num) % HOT_REPLACE_PARTITION_NUM == 0) ?
+ function_id : (function_id - port_num);
+}
+
+/* When func_id is in partition 0/1, return its counterpart func_id in partition 1/0 */
+static inline u32 mpu_nic_get_backup_function(u32 function_id, u32 port_num)
+{
+ return ((function_id / port_num) % HOT_REPLACE_PARTITION_NUM == 0) ?
+ (function_id + port_num) : (function_id - port_num);
+}
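+
+/*
+ * Worked example (illustrative, assuming port_num = 2 and the
+ * HOT_REPLACE_PARTITION_NUM = 2 defined above):
+ *    function_id 0..1 -> partition 0 (primary), backup ids are 2..3
+ *    function_id 2..3 -> partition 1 (backup),  primary ids are 0..1
+ * e.g. get_function_partition(3, 2) == 1 and
+ *      mpu_nic_get_primary_function(3, 2) == 1.
+ */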
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd.h
new file mode 100644
index 0000000..fd0401f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd.h
@@ -0,0 +1,192 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MPU_INBAND_CMD_H
+#define MPU_INBAND_CMD_H
+
+enum hinic3_mgmt_cmd {
+ COMM_MGMT_CMD_FUNC_RESET = 0, /* reset function @see comm_cmd_func_reset */
+ COMM_MGMT_CMD_FEATURE_NEGO, /* feature negotiation @see comm_cmd_feature_nego */
+ COMM_MGMT_CMD_FLUSH_DOORBELL, /* clear doorbell @see comm_cmd_clear_doorbell */
+ COMM_MGMT_CMD_START_FLUSH, /* clear stateful service txrx resources
+ * @see comm_cmd_clear_resource
+ */
+ COMM_MGMT_CMD_SET_FUNC_FLR, /* set function flr @see comm_cmd_func_flr_set */
+ COMM_MGMT_CMD_GET_GLOBAL_ATTR, /* get global attr @see comm_cmd_get_glb_attr */
+ COMM_MGMT_CMD_SET_PPF_FLR_TYPE, /* set ppf flr type @see comm_cmd_ppf_flr_type_set */
+ COMM_MGMT_CMD_SET_FUNC_SVC_USED_STATE, /* set function service used state
+ * @see comm_cmd_func_svc_used_state
+ */
+ COMM_MGMT_CMD_START_FLR, /* MPU not use */
+
+ COMM_MGMT_CMD_CFG_MSIX_NUM = 10, /**< set msix num @see comm_cmd_cfg_msix_num */
+
+ COMM_MGMT_CMD_SET_CMDQ_CTXT = 20, /* set commandq context @see comm_cmd_cmdq_ctxt */
+ COMM_MGMT_CMD_SET_VAT, /** set vat table info @see comm_cmd_root_ctxt */
+ COMM_MGMT_CMD_CFG_PAGESIZE, /**< set rootctx pagesize @see comm_cmd_wq_page_size */
+ COMM_MGMT_CMD_CFG_MSIX_CTRL_REG, /* config msix ctrl register @see comm_cmd_msix_config */
+ COMM_MGMT_CMD_SET_CEQ_CTRL_REG, /**< set ceq ctrl register @see comm_cmd_ceq_ctrl_reg */
+ COMM_MGMT_CMD_SET_DMA_ATTR, /**< set PF/VF DMA table attr @see comm_cmd_dma_attr_config */
+ COMM_MGMT_CMD_SET_PPF_TBL_HTR_FLG, /* set PPF func table os hotreplace flag
+ * @see comm_cmd_ppf_tbl_htrp_config
+ */
+
+ COMM_MGMT_CMD_GET_MQM_FIX_INFO = 40, /**< get mqm fix info @see comm_cmd_get_eqm_num */
+ COMM_MGMT_CMD_SET_MQM_CFG_INFO, /**< set mqm config info @see comm_cmd_eqm_cfg */
+ COMM_MGMT_CMD_SET_MQM_SRCH_GPA, /* set mqm search gpa info @see comm_cmd_eqm_search_gpa */
+ COMM_MGMT_CMD_SET_PPF_TMR, /**< set ppf tmr @see comm_cmd_ppf_tmr_op */
+ COMM_MGMT_CMD_SET_PPF_HT_GPA, /**< set ppf ht gpa @see comm_cmd_ht_gpa */
+ COMM_MGMT_CMD_SET_FUNC_TMR_BITMAT, /* @see comm_cmd_func_tmr_bitmap_op */
+ COMM_MGMT_CMD_SET_MBX_CRDT, /**< reserved */
+ COMM_MGMT_CMD_CFG_TEMPLATE, /**< config template @see comm_cmd_cfg_template */
+ COMM_MGMT_CMD_SET_MQM_LIMIT, /**< set mqm limit @see comm_cmd_set_mqm_limit */
+
+ COMM_MGMT_CMD_GET_FW_VERSION = 60, /**< get firmware version @see comm_cmd_get_fw_version */
+ COMM_MGMT_CMD_GET_BOARD_INFO, /**< get board info @see comm_cmd_board_info */
+ COMM_MGMT_CMD_SYNC_TIME, /**< synchronize host time to MPU @see comm_cmd_sync_time */
+ COMM_MGMT_CMD_GET_HW_PF_INFOS, /**< get pf info @see comm_cmd_hw_pf_infos */
+ COMM_MGMT_CMD_SEND_BDF_INFO, /**< send bdf info @see comm_cmd_bdf_info */
+ COMM_MGMT_CMD_GET_VIRTIO_BDF_INFO, /**< get virtio bdf info @see mpu_pcie_device_info_s */
+ COMM_MGMT_CMD_GET_SML_TABLE_INFO, /**< get sml table info @see comm_cmd_get_sml_tbl_data */
+ COMM_MGMT_CMD_GET_SDI_INFO, /**< get sdi info @see comm_cmd_sdi_info */
+ COMM_MGMT_CMD_ROOT_CTX_LOAD, /* get root context info @see comm_cmd_root_ctx_load_req_s */
+ COMM_MGMT_CMD_GET_HW_BOND, /**< get bond info @see comm_cmd_hw_bond_infos */
+
+ COMM_MGMT_CMD_UPDATE_FW = 80, /* update firmware @see cmd_update_fw @see comm_info_head */
+ COMM_MGMT_CMD_ACTIVE_FW, /**< cold active firmware @see cmd_active_firmware */
+ COMM_MGMT_CMD_HOT_ACTIVE_FW, /**< hot active firmware @see cmd_hot_active_fw */
+ COMM_MGMT_CMD_HOT_ACTIVE_DONE_NOTICE, /**< reserved */
+ COMM_MGMT_CMD_SWITCH_CFG, /**< switch config file @see cmd_switch_cfg */
+ COMM_MGMT_CMD_CHECK_FLASH, /**< check flash @see comm_info_check_flash */
+ COMM_MGMT_CMD_CHECK_FLASH_RW, /* check whether flash reads and writes normally
+ * @see comm_cmd_hw_bond_infos
+ */
+ COMM_MGMT_CMD_RESOURCE_CFG, /**< reserved */
+ COMM_MGMT_CMD_UPDATE_BIOS, /**< update bios firmware @see cmd_update_fw */
+ COMM_MGMT_CMD_MPU_GIT_CODE, /**< get mpu git tag @see cmd_get_mpu_git_code */
+
+ COMM_MGMT_CMD_FAULT_REPORT = 100, /**< report fault event to driver */
+ COMM_MGMT_CMD_WATCHDOG_INFO, /* report software watchdog timeout to driver
+ * @see comm_info_sw_watchdog
+ */
+ COMM_MGMT_CMD_MGMT_RESET, /**< report mpu chip reset to driver */
+ COMM_MGMT_CMD_FFM_SET, /* report exception interrupt to driver */
+
+ COMM_MGMT_CMD_GET_LOG = 120, /* get the log of the dictionary @see nic_log_info_request */
+ COMM_MGMT_CMD_TEMP_OP, /* temperature operation @see comm_temp_in_info
+ * @see comm_temp_out_info
+ */
+ COMM_MGMT_CMD_EN_AUTO_RST_CHIP, /* @see comm_cmd_enable_auto_rst_chip */
+ COMM_MGMT_CMD_CFG_REG, /**< reserved */
+ COMM_MGMT_CMD_GET_CHIP_ID, /**< get chip id @see comm_chip_id_info */
+ COMM_MGMT_CMD_SYSINFO_DFX, /**< reserved */
+ COMM_MGMT_CMD_PCIE_DFX_NTC, /**< reserved */
+ COMM_MGMT_CMD_DICT_LOG_STATUS, /* @see mpu_log_status_info */
+ COMM_MGMT_CMD_MSIX_INFO, /**< read msix map table @see comm_cmd_msix_info */
+ COMM_MGMT_CMD_CHANNEL_DETECT, /**< auto channel detect @see comm_cmd_channel_detect */
+ COMM_MGMT_CMD_DICT_COUNTER_STATUS, /**< get flash counter status @see flash_counter_info */
+ COMM_MGMT_CMD_UCODE_SM_COUNTER, /* get ucode sm counter @see comm_read_ucode_sm_req
+ * @see comm_read_ucode_sm_resp
+ */
+ COMM_MGMT_CMD_CLEAR_LOG, /**< clear log @see comm_cmd_clear_log_s */
+ COMM_MGMT_CMD_UCODE_SM_COUNTER_PER,
+ /**< get ucode sm counter @see struct comm_read_ucode_sm_per_req
+ * @see struct comm_read_ucode_sm_per_resp
+ */
+
+ COMM_MGMT_CMD_CHECK_IF_SWITCH_WORKMODE = 140, /* check if switch workmode reserved
+ * @see comm_cmd_check_if_switch_workmode
+ */
+ COMM_MGMT_CMD_SWITCH_WORKMODE, /* switch workmode reserved @see comm_cmd_switch_workmode */
+
+ COMM_MGMT_CMD_MIGRATE_DFX_HPA = 150, /* query migrate variable @see comm_cmd_migrate_dfx */
+ COMM_MGMT_CMD_BDF_INFO, /**< get bdf info @see cmd_get_bdf_info_s */
+ COMM_MGMT_CMD_NCSI_CFG_INFO_GET_PROC, /**< get ncsi config info @see comm_cmd_ncsi_cfg_s */
+ COMM_MGMT_CMD_CPI_TCAM_DBG, /* enable or disable the scheduled cpi tcam task,
+ * set task interval time @see comm_cmd_cpi_tcam_dbg_s
+ */
+ COMM_MGMT_CMD_LLDP_TX_FUNC_SET,
+
+ COMM_MGMT_CMD_SECTION_RSVD_0 = 160, /**< rsvd0 section */
+ COMM_MGMT_CMD_SECTION_RSVD_1 = 170, /**< rsvd1 section */
+ COMM_MGMT_CMD_SECTION_RSVD_2 = 180, /**< rsvd2 section */
+ COMM_MGMT_CMD_SECTION_RSVD_3 = 190, /**< rsvd3 section */
+
+ COMM_MGMT_CMD_GET_TDIE_ID = 199, /**< get totem die id @see comm_cmd_get_totem_die_id */
+ COMM_MGMT_CMD_GET_UDIE_ID = 200, /**< get unicorn die id @see comm_cmd_get_die_id */
+ COMM_MGMT_CMD_GET_EFUSE_TEST, /**< reserved */
+ COMM_MGMT_CMD_EFUSE_INFO_CFG, /**< set efuse config @see comm_efuse_cfg_info */
+ COMM_MGMT_CMD_GPIO_CTL, /**< reserved */
+ COMM_MGMT_CMD_HI30_SERLOOP_START, /* set serloop start @see comm_cmd_hi30_serloop */
+ COMM_MGMT_CMD_HI30_SERLOOP_STOP, /* set serloop stop @see comm_cmd_hi30_serloop */
+ COMM_MGMT_CMD_HI30_MBIST_SET_FLAG, /**< reserved */
+ COMM_MGMT_CMD_HI30_MBIST_GET_RESULT, /**< reserved */
+ COMM_MGMT_CMD_ECC_TEST, /**< reserved */
+ COMM_MGMT_CMD_FUNC_BIST_TEST, /**< reserved */
+
+ COMM_MGMT_CMD_VPD_SET = 210, /**< reserved */
+ COMM_MGMT_CMD_VPD_GET, /**< reserved */
+
+ COMM_MGMT_CMD_ERASE_FLASH, /**< erase flash sector @see cmd_sector_info */
+ COMM_MGMT_CMD_QUERY_FW_INFO, /**< get firmware info @see cmd_query_fw */
+ COMM_MGMT_CMD_GET_CFG_INFO, /* get cfg in flash reserved @see comm_cmd_get_cfg_info_t */
+ COMM_MGMT_CMD_GET_UART_LOG, /* collect hinicshell log @see nic_cmd_get_uart_log_info */
+ COMM_MGMT_CMD_SET_UART_CMD, /* hinicshell command to mpu @see nic_cmd_set_uart_log_cmd */
+ COMM_MGMT_CMD_SPI_TEST, /**< reserved */
+
+ /* TODO: ALL reg read/write merge to COMM_MGMT_CMD_CFG_REG */
+ COMM_MGMT_CMD_MPU_REG_GET, /**< get mpu register value @see dbgtool_up_reg_opt_info */
+ COMM_MGMT_CMD_MPU_REG_SET, /**< set mpu register value @see dbgtool_up_reg_opt_info */
+
+ COMM_MGMT_CMD_REG_READ = 220, /**< read register value @see comm_info_reg_read_write */
+ COMM_MGMT_CMD_REG_WRITE, /**< write register value @see comm_info_reg_read_write */
+ COMM_MGMT_CMD_MAG_REG_WRITE, /**< write mag register value @see comm_info_dfx_mag_reg */
+ COMM_MGMT_CMD_ANLT_REG_WRITE, /**< read register value @see comm_info_dfx_anlt_reg */
+
+ COMM_MGMT_CMD_HEART_EVENT, /**< ncsi heart event @see comm_cmd_heart_event */
+ COMM_MGMT_CMD_NCSI_OEM_GET_DRV_INFO, /**< ncsi oem get driver info */
+ COMM_MGMT_CMD_LASTWORD_GET, /**< report lastword to driver @see comm_info_up_lastword_s */
+ COMM_MGMT_CMD_READ_BIN_DATA, /**< reserved */
+ COMM_MGMT_CMD_GET_REG_VAL, /**< read register value @see comm_cmd_mbox_csr_rd_req */
+ COMM_MGMT_CMD_SET_REG_VAL, /**< write register value @see comm_cmd_mbox_csr_wt_req */
+
+ /* TODO: check if needed */
+ COMM_MGMT_CMD_SET_VIRTIO_DEV = 230, /* set the virtio device
+ * @see comm_cmd_set_virtio_dev
+ */
+ COMM_MGMT_CMD_SET_MAC, /**< set mac address @see comm_info_mac */
+ /* MPU patch cmd */
+ COMM_MGMT_CMD_LOAD_PATCH, /**< load hot patch @see cmd_update_fw */
+ COMM_MGMT_CMD_REMOVE_PATCH, /**< remove hot patch @see cmd_patch_remove */
+ COMM_MGMT_CMD_PATCH_ACTIVE, /**< activate hot patch @see cmd_patch_active */
+ COMM_MGMT_CMD_PATCH_DEACTIVE, /**< deactivate hot patch @see cmd_patch_deactive */
+ COMM_MGMT_CMD_PATCH_SRAM_OPTIMIZE, /**< set hot patch sram optimize */
+ /* container host process */
+ COMM_MGMT_CMD_CONTAINER_HOST_PROC, /* container host process reserved
+ * @see comm_cmd_con_sel_sta
+ */
+ /* ncsi counter */
+ COMM_MGMT_CMD_NCSI_COUNTER_PROC, /* get ncsi counter @see nsci_counter_in_info_s */
+ COMM_MGMT_CMD_CHANNEL_STATUS_CHECK, /* check channel status reserved
+ * @see channel_status_check_info_s
+ */
+
+ COMM_MGMT_CMD_RSVD_0 = 240, /**< hot patch reserved cmd */
+ COMM_MGMT_CMD_RSVD_1, /**< hot patch reserved cmd */
+ COMM_MGMT_CMD_RSVD_2, /**< hot patch reserved cmd */
+ COMM_MGMT_CMD_RSVD_3, /**< hot patch reserved cmd */
+ COMM_MGMT_CMD_RSVD_4, /**< hot patch reserved cmd */
+ COMM_MGMT_CMD_SEND_API_ACK_BY_UP, /**< reserved */
+
+ /* for tool ver compatible info */
+ COMM_MGMT_CMD_GET_VER_COMPATIBLE_INFO = 254, /* get compatible info
+ * @see comm_cmd_compatible_info
+ */
+ /* When adding a command word, you cannot change the value of an existing command word.
+ * Add the command word in the rsvd section. In principle,
+ * the cmd tables of all branches are the same.
+ */
+ COMM_MGMT_CMD_MAX = 255,
+};
+
+#endif
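
A quick consumer-side sketch (not taken from this patch) of how the "do not change existing command word values" rule above could be enforced at build time; it only assumes the enum above is already in scope in the translation unit:

#include <linux/build_bug.h>

/* Illustrative only: pin the explicitly numbered anchors of the enum above
 * so accidental renumbering breaks the build instead of the MPU ABI. */
static_assert(COMM_MGMT_CMD_UPDATE_FW == 80);
static_assert(COMM_MGMT_CMD_FAULT_REPORT == 100);
static_assert(COMM_MGMT_CMD_GET_LOG == 120);
static_assert(COMM_MGMT_CMD_GET_VER_COMPATIBLE_INFO == 254);
static_assert(COMM_MGMT_CMD_MAX == 255);
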
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd_defs.h
new file mode 100644
index 0000000..fd3a7dd
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd_defs.h
@@ -0,0 +1,1104 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MPU_INBAND_CMD_DEFS_H
+#define MPU_INBAND_CMD_DEFS_H
+
+#include "mpu_cmd_base_defs.h"
+#include "mpu_outband_ncsi_cmd_defs.h"
+
+#define HARDWARE_ID_1XX3V100_TAG 31 /* 1xx3v100 tag */
+#define DUMP_16B_PER_LINE 16
+#define DUMP_8_VAR_PER_LINE 8
+#define DUMP_4_VAR_PER_LINE 4
+#define FW_UPDATE_MGMT_TIMEOUT 3000000U
+
+#define FUNC_RESET_FLAG_MAX_VALUE ((1U << (RES_TYPE_MAX + 1)) - 1)
+struct comm_cmd_func_reset {
+ struct mgmt_msg_head head;
+ u16 func_id; /**< function id */
+ u16 rsvd1[3];
+ u64 reset_flag; /**< reset function type flag @see enum func_reset_flag_e */
+};
+
+enum {
+ COMM_F_API_CHAIN = 1U << 0,
+ COMM_F_CLP = 1U << 1,
+ COMM_F_CHANNEL_DETECT = 1U << 2,
+ COMM_F_MBOX_SEGMENT = 1U << 3,
+ COMM_F_CMDQ_NUM = 1U << 4,
+ COMM_F_VIRTIO_VQ_SIZE = 1U << 5,
+};
+
+#define COMM_MAX_FEATURE_QWORD 4
+enum COMM_FEATURE_NEGO_OPCODE {
+ COMM_FEATURE_NEGO_OPCODE_GET = 0,
+ COMM_FEATURE_NEGO_OPCODE_SET = 1
+};
+
+struct comm_cmd_feature_nego {
+ struct mgmt_msg_head head;
+ u16 func_id; /**< function id */
+ u8 opcode; /**< operate type 0: get, 1: set */
+ u8 rsvd;
+ u64 s_feature[COMM_MAX_FEATURE_QWORD]; /**< feature info */
+};
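
For context, a minimal sketch of how a caller might drive the COMM_F_* bits through comm_cmd_feature_nego; the helper name and the selected features are illustrative, not from this patch:

#include <linux/string.h>
#include <linux/types.h>

/* Illustrative only: build a "set" negotiation request; s_feature[0]
 * carries the COMM_F_* capability bits defined above. */
static void fill_feature_nego_req(struct comm_cmd_feature_nego *req, u16 func_id)
{
        memset(req, 0, sizeof(*req));
        req->func_id = func_id;
        req->opcode = COMM_FEATURE_NEGO_OPCODE_SET;
        req->s_feature[0] = COMM_F_API_CHAIN | COMM_F_CHANNEL_DETECT |
                            COMM_F_MBOX_SEGMENT;
}
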
+
+struct comm_cmd_func_flr_set {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u8 type; /**< 1: flr enable */
+ u8 isall; /**< flr type 0: specify PF and associated VF flr, 1: all functions flr */
+ u32 rsvd;
+};
+
+struct comm_cmd_clear_doorbell {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u16 rsvd1[3];
+};
+
+struct comm_cmd_clear_resource {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u16 rsvd1[3];
+};
+
+struct comm_global_attr {
+ u8 max_host_num; /**< maximum number of host */
+ u8 max_pf_num; /**< maximum number of pf */
+ u16 vf_id_start; /**< VF function id start */
+
+ u8 mgmt_host_node_id; /**< node id */
+ u8 cmdq_num; /**< cmdq num */
+ u8 rsvd1[2];
+ u32 rsvd2[8];
+};
+
+struct comm_cmd_get_glb_attr {
+ struct mgmt_msg_head head;
+ struct comm_global_attr attr; /**< global attr @see struct comm_global_attr */
+};
+
+struct comm_cmd_ppf_flr_type_set {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 func_service_type;
+ u8 rsvd1;
+ u32 ppf_flr_type; /**< function flr type 1:statefull 0:stateless */
+};
+
+struct comm_cmd_func_svc_used_state {
+ struct mgmt_msg_head head;
+ u16 func_id;
+ u16 svc_type;
+ u8 used_state;
+ u8 rsvd[35];
+};
+
+struct comm_cmd_cfg_msix_num {
+ struct comm_info_head head;
+
+ u16 func_id;
+ u8 op_code; /**< operate type 1: alloc 0: free */
+ u8 rsvd0;
+
+ u16 msix_num;
+ u16 rsvd1;
+};
+
+struct cmdq_ctxt_info {
+ u64 curr_wqe_page_pfn;
+ u64 wq_block_pfn;
+};
+
+struct comm_cmd_cmdq_ctxt {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 cmdq_id;
+ u8 rsvd1[5];
+
+ struct cmdq_ctxt_info ctxt;
+};
+
+struct comm_cmd_root_ctxt {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 set_cmdq_depth;
+ u8 cmdq_depth;
+ u16 rx_buf_sz;
+ u8 lro_en;
+ u8 rsvd1;
+ u16 sq_depth;
+ u16 rq_depth;
+ u64 rsvd2;
+};
+
+struct comm_cmd_wq_page_size {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u8 opcode; /**< operate type 0:get , 1:set */
+ /* real_size=4KB*2^page_size, range(0~20) must be checked by driver */
+ u8 page_size;
+
+ u32 rsvd1;
+};
+
+struct comm_cmd_msix_config {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u8 opcode; /**< operate type 0:get , 1:set */
+ u8 rsvd1;
+ u16 msix_index;
+ u8 pending_cnt;
+ u8 coalesce_timer_cnt;
+ u8 resend_timer_cnt;
+ u8 lli_timer_cnt;
+ u8 lli_credit_cnt;
+ u8 rsvd2[5];
+};
+
+struct comm_cmd_ceq_ctrl_reg {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u16 q_id;
+ u32 ctrl0;
+ u32 ctrl1;
+ u32 rsvd1;
+};
+
+struct comm_cmd_dma_attr_config {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u8 entry_idx;
+ u8 st;
+ u8 at;
+ u8 ph;
+ u8 no_snooping;
+ u8 tph_en;
+ u32 resv1;
+};
+
+struct comm_cmd_ppf_tbl_htrp_config {
+ struct mgmt_msg_head head;
+
+ u32 hotreplace_flag;
+};
+
+struct comm_cmd_get_eqm_num {
+ struct mgmt_msg_head head;
+
+ u8 host_id; /**< host id */
+ u8 rsvd1[3];
+ u32 chunk_num;
+ u32 search_gpa_num;
+};
+
+struct comm_cmd_eqm_cfg {
+ struct mgmt_msg_head head;
+
+ u8 host_id; /**< host id */
+ u8 valid; /**< 0:clear config , 1:set config */
+ u16 rsvd1;
+ u32 page_size; /**< page size */
+ u32 rsvd2;
+};
+
+struct comm_cmd_eqm_search_gpa {
+ struct mgmt_msg_head head;
+
+ u8 host_id; /**< host id (deprecated field, not used) */
+ u8 rsvd1[3];
+ u32 start_idx; /**< start index */
+ u32 num;
+ u32 rsvd2;
+ u64 gpa_hi52[0]; /**< gpa data */
+};
+
+struct comm_cmd_ppf_tmr_op {
+ struct mgmt_msg_head head;
+
+ u8 ppf_id; /**< ppf function id */
+ u8 opcode; /**< operation type 1: start timer, 0: stop timer */
+ u8 rsvd1[6];
+};
+
+struct comm_cmd_ht_gpa {
+ struct mgmt_msg_head head;
+
+ u8 host_id; /**< host id */
+ u8 rsvd0[3];
+ u32 rsvd1[7];
+ u64 page_pa0;
+ u64 page_pa1;
+};
+
+struct comm_cmd_func_tmr_bitmap_op {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u8 opcode; /**< operation type 1: start timer, 0: stop timer */
+ u8 rsvd1[5];
+};
+
+#define DD_CFG_TEMPLATE_MAX_IDX 12
+#define DD_CFG_TEMPLATE_MAX_TXT_LEN 64
+#define CFG_TEMPLATE_OP_QUERY 0
+#define CFG_TEMPLATE_OP_SET 1
+#define CFG_TEMPLATE_SET_MODE_BY_IDX 0
+#define CFG_TEMPLATE_SET_MODE_BY_NAME 1
+
+struct comm_cmd_cfg_template {
+ struct mgmt_msg_head head;
+ u8 opt_type; /**< operation type 0: query 1: set */
+ u8 set_mode; /**< set mode 0:index mode 1:name mode. */
+ u8 tp_err;
+ u8 rsvd0;
+
+ u8 cur_index; /**< current cfg template index. */
+ u8 cur_max_index; /**< max supported cfg template index. */
+ u8 rsvd1[2];
+ u8 cur_name[DD_CFG_TEMPLATE_MAX_TXT_LEN]; /**< current cfg template name. */
+ u8 cur_cfg_temp_info[DD_CFG_TEMPLATE_MAX_IDX][DD_CFG_TEMPLATE_MAX_TXT_LEN];
+
+ u8 next_index; /**< next reset cfg template index. */
+ u8 next_max_index; /**< max supported cfg template index. */
+ u8 rsvd2[2];
+ u8 next_name[DD_CFG_TEMPLATE_MAX_TXT_LEN]; /**< next reset cfg template name. */
+ u8 next_cfg_temp_info[DD_CFG_TEMPLATE_MAX_IDX][DD_CFG_TEMPLATE_MAX_TXT_LEN];
+};
+
+#define MQM_SUPPORT_COS_NUM 8
+#define MQM_INVALID_WEIGHT 256
+#define MQM_LIMIT_SET_FLAG_READ 0
+#define MQM_LIMIT_SET_FLAG_WRITE 1
+struct comm_cmd_set_mqm_limit {
+ struct mgmt_msg_head head;
+
+ u16 set_flag; /**< operation type 0: read 1: write */
+ u16 func_id; /**< function id */
+ /* Indicates the weight of cos_id. The value ranges from 0 to 255.
+ * The value 0 indicates SP scheduling.
+ */
+ u16 cos_weight[MQM_SUPPORT_COS_NUM]; /**< cos weight range[0,255] */
+ u32 host_min_rate; /**< current host minimum rate */
+ u32 func_min_rate; /**< current function minimum rate,unit:Mbps */
+ u32 func_max_rate; /**< current function maximum rate,unit:Mbps */
+ u8 rsvd[64]; /* Reserved */
+};
+
+#define HINIC3_FW_VERSION_LEN 16
+#define HINIC3_FW_COMPILE_TIME_LEN 20
+
+enum hinic3_fw_ver_type {
+ HINIC3_FW_VER_TYPE_BOOT,
+ HINIC3_FW_VER_TYPE_MPU,
+ HINIC3_FW_VER_TYPE_NPU,
+ HINIC3_FW_VER_TYPE_SMU_L0,
+ HINIC3_FW_VER_TYPE_SMU_L1,
+ HINIC3_FW_VER_TYPE_CFG,
+};
+
+struct comm_cmd_get_fw_version {
+ struct mgmt_msg_head head;
+
+ u16 fw_type; /**< firmware type @see enum hinic3_fw_ver_type */
+ u16 fw_dfx_vld : 1; /**< 0: release, 1: debug */
+ u16 rsvd1 : 15;
+ u8 ver[HINIC3_FW_VERSION_LEN]; /**< firmware version */
+ u8 time[HINIC3_FW_COMPILE_TIME_LEN]; /**< firmware compile time */
+};
+
+struct hinic3_board_info {
+ u8 board_type; /**< board type */
+ u8 port_num; /**< current port number */
+ u8 port_speed; /**< port speed */
+ u8 pcie_width; /**< pcie width */
+ u8 host_num; /**< host number */
+ u8 pf_num; /**< pf number */
+ u16 vf_total_num; /**< vf total number */
+ u8 tile_num; /**< tile number */
+ u8 qcm_num; /**< qcm number */
+ u8 core_num; /**< core number */
+ u8 work_mode; /**< work mode */
+ u8 service_mode; /**< service mode */
+ u8 pcie_mode; /**< pcie mode */
+ u8 boot_sel; /**< boot sel */
+ u8 board_id; /**< board id */
+ u32 rsvd;
+ u32 service_en_bitmap; /**< service en bitmap */
+ u8 scenes_id; /**< scenes id */
+ u8 cfg_template_id; /**< cfg template index */
+ u8 hardware_id; /**< hardware id */
+ u8 spu_en; /**< spu enable flag */
+ u16 pf_vendor_id; /**< pf vendor id */
+ u8 tile_bitmap; /**< used tile bitmap */
+ u8 sm_bitmap; /**< used sm bitmap */
+};
+
+struct comm_cmd_board_info {
+ struct mgmt_msg_head head;
+
+ struct hinic3_board_info info; /**< board info @see struct hinic3_board_info */
+ u32 rsvd[22];
+};
+
+struct comm_cmd_sync_time {
+ struct mgmt_msg_head head;
+
+ u64 mstime; /**< time,unit:ms */
+ u64 rsvd1;
+};
+
+struct hw_pf_info {
+ u16 glb_func_idx; /**< function id */
+ u16 glb_pf_vf_offset;
+ u8 p2p_idx;
+ u8 itf_idx; /**< host id */
+ u16 max_vfs; /**< max vf number */
+ u16 max_queue_num; /**< max queue number */
+ u16 vf_max_queue_num;
+ u16 port_id;
+ u16 rsvd0;
+ u32 pf_service_en_bitmap;
+ u32 vf_service_en_bitmap;
+ u16 rsvd1[2];
+
+ u8 device_type;
+ u8 bus_num; /**< bdf info */
+ u16 vf_stride; /**< vf stride */
+ u16 vf_offset; /**< vf offset */
+ u8 rsvd[2];
+};
+
+#define CMD_MAX_MAX_PF_NUM 32
+struct hinic3_hw_pf_infos {
+ u8 num_pfs; /**< pf number */
+ u8 rsvd1[3];
+
+ struct hw_pf_info infos[CMD_MAX_MAX_PF_NUM]; /**< pf info @see struct hw_pf_info */
+};
+
+struct comm_cmd_hw_pf_infos {
+ struct mgmt_msg_head head;
+
+ struct hinic3_hw_pf_infos infos; /**< all pf info @see struct hinic3_hw_pf_infos */
+};
+
+struct comm_cmd_bdf_info {
+ struct mgmt_msg_head head;
+
+ u16 function_idx; /**< function id */
+ u8 rsvd1[2];
+ u8 bus; /**< bus info */
+ u8 device; /**< device info */
+ u8 function; /**< function info */
+ u8 rsvd2[5];
+};
+
+#define TABLE_INDEX_MAX 129
+struct sml_table_id_info {
+ u8 node_id;
+ u8 instance_id;
+};
+
+struct comm_cmd_get_sml_tbl_data {
+ struct comm_info_head head; /* 8B */
+ u8 tbl_data[512]; /**< sml table data */
+};
+
+struct comm_cmd_sdi_info {
+ struct mgmt_msg_head head;
+ u32 cfg_sdi_mode; /**< host mode, 0:normal 1:virtual machine 2:bare metal */
+};
+
+#define HINIC_OVS_BOND_DEFAULT_ID 1
+struct hinic3_hw_bond_infos {
+ u8 bond_id;
+ u8 valid;
+ u8 rsvd1[2];
+};
+
+struct comm_cmd_hw_bond_infos {
+ struct mgmt_msg_head head;
+ struct hinic3_hw_bond_infos infos; /**< bond info @see struct hinic3_hw_bond_infos */
+};
+
+/* Tool data length is 1536 bytes (1.5K); the tool sends at most 2K per message, including the header */
+struct cmd_update_fw {
+ struct comm_info_head head; // 8B
+ u16 fw_flag; /**< subfirmware flag, bit 0: last slice flag, bit 1: first slice flag */
+ u16 slice_len; /**< current slice length */
+ u32 fw_crc; /**< subfirmware crc */
+ u32 fw_type; /**< subfirmware type */
+ u32 bin_total_len; /**< total firmware length, valid only in the first slice */
+ u32 bin_section_len; /**< subfirmware length */
+ u32 fw_verion; /**< subfirmware version */
+ u32 fw_offset; /**< current slice offset of current subfirmware */
+ u32 data[0]; /**< data */
+};
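
A sketch of how the fw_flag bits appear intended to be set while an image is pushed slice by slice; the FW_SLICE_FLAG_* names and the helper are hypothetical, only the bit positions come from the comment above:

#include <linux/bits.h>
#include <linux/types.h>

#define FW_SLICE_FLAG_LAST      BIT(0)  /* bit 0: last slice */
#define FW_SLICE_FLAG_FIRST     BIT(1)  /* bit 1: first slice */

/* Illustrative only: compute fw_flag for the slice starting at 'offset'
 * in an image of 'total_len' bytes. */
static u16 fw_slice_flag(u32 offset, u32 slice_len, u32 total_len)
{
        u16 flag = 0;

        if (offset == 0)
                flag |= FW_SLICE_FLAG_FIRST;
        if (offset + slice_len >= total_len)
                flag |= FW_SLICE_FLAG_LAST;

        return flag;
}
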
+
+struct cmd_switch_cfg {
+ struct comm_info_head msg_head;
+ u8 index; /**< index, range[0,7] */
+ u8 data[7];
+};
+
+struct cmd_active_firmware {
+ struct comm_info_head msg_head;
+ u8 index; /* 0 ~ 7 */
+ u8 data[7];
+};
+
+#define HOT_ACTIVE_MPU 1
+#define HOT_ACTIVE_NPU 2
+#define HOT_ACTIVE_MNPU 3
+struct cmd_hot_active_fw {
+ struct comm_info_head head;
+ u32 type; /**< hot active firmware type 1: mpu; 2: ucode; 3: mpu & npu */
+ u32 data[3];
+};
+
+#define FLASH_CHECK_OK 1
+#define FLASH_CHECK_ERR 2
+#define FLASH_CHECK_DISMATCH 3
+
+struct comm_info_check_flash {
+ struct comm_info_head head;
+
+ u8 status; /**< flash check status */
+ u8 rsv[3];
+};
+
+struct cmd_get_mpu_git_code {
+ struct comm_info_head head; /* 8B */
+ u32 rsvd; /* reserve */
+ char mpu_git_code[64]; /**< mpu git tag and compile time */
+};
+
+#define DATA_LEN_1K 1024
+struct comm_info_sw_watchdog {
+ struct comm_info_head head;
+
+ u32 curr_time_h; /**< infinite loop occurrence time, high 32 bits, in cycles */
+ u32 curr_time_l; /**< infinite loop occurrence time, low 32 bits, in cycles */
+ u32 task_id; /**< id of the task that is stuck in the infinite loop */
+ u32 rsv;
+
+ u64 pc;
+
+ u64 elr;
+ u64 spsr;
+ u64 far;
+ u64 esr;
+ u64 xzr;
+ u64 x30;
+ u64 x29;
+ u64 x28;
+ u64 x27;
+ u64 x26;
+ u64 x25;
+ u64 x24;
+ u64 x23;
+ u64 x22;
+ u64 x21;
+ u64 x20;
+ u64 x19;
+ u64 x18;
+ u64 x17;
+ u64 x16;
+ u64 x15;
+ u64 x14;
+ u64 x13;
+ u64 x12;
+ u64 x11;
+ u64 x10;
+ u64 x09;
+ u64 x08;
+ u64 x07;
+ u64 x06;
+ u64 x05;
+ u64 x04;
+ u64 x03;
+ u64 x02;
+ u64 x01;
+ u64 x00;
+
+ u64 stack_top; /**< stack top */
+ u64 stack_bottom; /**< stack bottom */
+ u64 sp; /**< sp pointer */
+ u32 curr_used; /**< the size currently used by the stack */
+ u32 peak_used; /**< historical peak of stack usage */
+ u32 is_overflow; /**< stack overflow flag */
+
+ u32 stack_actlen; /**< actual stack length(<=1024) */
+ u8 stack_data[DATA_LEN_1K]; /* If the value exceeds 1024, it will be truncated. */
+};
+
+struct nic_log_info_request {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u32 offset;
+ u8 log_or_index; /* 0:log 1:index */
+ u8 type; /* log type 0:up 1:ucode 2:smu 3:mpu lastword 4.npu lastword */
+ u8 area; /* area 0:ram 1:flash (this bit is valid only when log_or_index is 0) */
+ u8 rsvd1; /* reserved */
+};
+
+#define MPU_TEMP_OP_GET 0
+#define MPU_TEMP_THRESHOLD_OP_CFG 1
+#define MPU_TEMP_MCTP_DFX_INFO_GET 2
+struct comm_temp_in_info {
+ struct comm_info_head head;
+ u8 opt_type; /**< operation type 0:read operation 1:cfg operation */
+ u8 rsv[3];
+ s32 max_temp; /**< maximum threshold of temperature */
+ s32 min_temp; /**< minimum threshold of temperature */
+};
+
+struct comm_temp_out_info {
+ struct comm_info_head head;
+ s32 temp_data; /**< current temperature */
+ s32 max_temp_threshold; /**< maximum threshold of temperature */
+ s32 min_temp_threshold; /**< minimum threshold of temperature */
+ s32 max_temp; /**< maximum temperature */
+ s32 min_temp; /**< minimum temperature */
+};
+
+/* Disable chip self-reset */
+struct comm_cmd_enable_auto_rst_chip {
+ struct comm_info_head head;
+ u8 op_code; /**< operation type 0:get operation 1:set operation */
+ u8 enable; /* auto reset status 0: disable auto reset chip 1: enable */
+ u8 rsvd[2];
+};
+
+struct comm_chip_id_info {
+ struct comm_info_head head;
+ u8 chip_id; /**< chip id */
+ u8 rsvd[3];
+};
+
+struct mpu_log_status_info {
+ struct comm_info_head head;
+ u8 type; /**< operation type 0:read operation 1:write operation */
+ u8 log_status; /**< log status 0:idle 1:busy */
+ u8 rsvd[2];
+};
+
+struct comm_cmd_msix_info {
+ struct comm_info_head head;
+ u8 rsvd1;
+ u8 flag; /**< table flag 0:second table, 1:actual table */
+ u8 rsvd[2];
+};
+
+struct comm_cmd_channel_detect {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u16 rsvd1[3];
+ u32 rsvd2[2];
+};
+
+#define MAX_LOG_BUF_SIZE 1024
+#define FLASH_NPU_COUNTER_HEAD_MAGIC (0x5a)
+#define FLASH_NPU_COUNTER_NIC_TYPE 0
+#define FLASH_NPU_COUNTER_FC_TYPE 1
+
+struct flash_npu_counter_head_s {
+ u8 magic;
+ u8 tbl_type;
+ u8 count_type; /**< 0:nic;1:fc */
+ u8 count_num; /**< current count number */
+ u16 base_offset; /**< address offset */
+ u16 base_count;
+};
+
+struct flash_counter_info {
+ struct comm_info_head head;
+
+ u32 length; /**< flash counter buff len */
+ u32 offset; /**< flash counter buff offset */
+ u8 data[MAX_LOG_BUF_SIZE]; /**< flash counter data */
+};
+
+enum mpu_sm_cmd_type {
+ COMM_SM_CTR_RD16 = 1,
+ COMM_SM_CTR_RD32,
+ COMM_SM_CTR_RD64_PAIR,
+ COMM_SM_CTR_RD64,
+ COMM_SM_CTR_RD32_CLEAR,
+ COMM_SM_CTR_RD64_PAIR_CLEAR,
+ COMM_SM_CTR_RD64_CLEAR,
+ COMM_SM_CTR_RD16_CLEAR,
+};
+
+struct comm_read_ucode_sm_req {
+ struct mgmt_msg_head msg_head;
+
+ u32 node; /**< node id @see enum INTERNAL_RING_NODE_ID_E */
+ u32 count_id; /**< count id */
+ u32 instanse; /**< instance id */
+ u32 type; /**< read type @see enum mpu_sm_cmd_type */
+};
+
+struct comm_read_ucode_sm_resp {
+ struct mgmt_msg_head msg_head;
+
+ u64 val1;
+ u64 val2;
+};
+
+#define PER_REQ_MAX_DATA_LEN 0x600
+
+struct comm_read_ucode_sm_per_req {
+ struct mgmt_msg_head msg_head;
+
+ u32 tbl_type;
+ u32 count_id;
+};
+
+struct comm_read_ucode_sm_per_resp {
+ struct mgmt_msg_head msg_head;
+
+ u8 data[PER_REQ_MAX_DATA_LEN];
+};
+
+struct ucode_sm_counter_get_info {
+ u32 width_type;
+ u32 tbl_type;
+ unsigned int base_count;
+ unsigned int count_num;
+};
+
+enum log_type {
+ MPU_LOG_CLEAR = 0,
+ SMU_LOG_CLEAR = 1,
+ NPU_LOG_CLEAR = 2,
+ SPU_LOG_CLEAR = 3,
+ ALL_LOG_CLEAR = 4,
+};
+
+#define ABLESWITCH 1
+#define IMABLESWITCH 2
+enum switch_workmode_op {
+ SWITCH_WORKMODE_SWITCH = 0,
+ SWITCH_WORKMODE_OTHER = 1
+};
+
+enum switch_workmode_obj {
+ SWITCH_WORKMODE_FC = 0,
+ SWITCH_WORKMODE_TOE = 1,
+ SWITCH_WORKMODE_ROCE_AND_NOF = 2,
+ SWITCH_WORKMODE_NOF_AA = 3,
+ SWITCH_WORKMODE_ETH_CNTR = 4,
+ SWITCH_WORKMODE_NOF_CNTR = 5,
+};
+
+struct comm_cmd_check_if_switch_workmode {
+ struct mgmt_msg_head head;
+ u8 switch_able;
+ u8 rsvd1;
+ u16 rsvd2[3];
+ u32 rsvd3[3];
+};
+
+#define MIG_NOR_VM_ONE_MAX_SGE_MEM (64 * 8)
+#define MIG_NOR_VM_ONE_MAX_MEM (MIG_NOR_VM_ONE_MAX_SGE_MEM + 16)
+#define MIG_VM_MAX_SML_ENTRY_NUM 24
+
+struct comm_cmd_migrate_dfx_s {
+ struct mgmt_msg_head head;
+ u32 hpa_entry_id; /**< hpa entry id */
+ u8 vm_hpa[MIG_NOR_VM_ONE_MAX_MEM]; /**< vm hpa info */
+};
+
+#define BDF_BUS_BIT 8
+struct pf_bdf_info {
+ u8 itf_idx; /**< host id */
+ u16 bdf; /**< bdf info */
+ u8 pf_bdf_info_vld; /**< pf bdf info valid */
+};
+
+struct vf_bdf_info {
+ u16 glb_pf_vf_offset; /**< global_func_id offset of 1st vf in pf */
+ u16 max_vfs; /**< vf number */
+ u16 vf_stride; /**< VF_RID_SETTING.vf_stride */
+ u16 vf_offset; /**< VF_RID_SETTING.vf_offset */
+ u8 bus_num; /**< tl_cfg_bus_num */
+ u8 rsv[3];
+};
+
+struct cmd_get_bdf_info_s {
+ struct mgmt_msg_head head;
+ struct pf_bdf_info pf_bdf_info[CMD_MAX_MAX_PF_NUM];
+ struct vf_bdf_info vf_bdf_info[CMD_MAX_MAX_PF_NUM];
+ u32 vf_num; /**< vf num */
+};
+
+#define CPI_TCAM_DBG_CMD_SET_TASK_ENABLE_VALID 0x1
+#define CPI_TCAM_DBG_CMD_SET_TIME_INTERVAL_VALID 0x2
+#define CPI_TCAM_DBG_CMD_TYPE_SET 0
+#define CPI_TCAM_DBG_CMD_TYPE_GET 1
+
+#define UDIE_ID_DATA_LEN 8
+#define TDIE_ID_DATA_LEN 18
+struct comm_cmd_get_die_id {
+ struct comm_info_head head;
+
+ u32 die_id_data[UDIE_ID_DATA_LEN]; /**< die id data */
+};
+
+struct comm_cmd_get_totem_die_id {
+ struct comm_info_head head;
+
+ u32 die_id_data[TDIE_ID_DATA_LEN]; /**< die id data */
+};
+
+#define MAX_EFUSE_INFO_BUF_SIZE 1024
+
+enum comm_efuse_opt_type {
+ EFUSE_OPT_UNICORN_EFUSE_BURN = 1, /**< burn unicorn efuse bin */
+ EFUSE_OPT_UPDATE_SWSB = 2, /**< hw rotpk switch to guest rotpk */
+ EFUSE_OPT_TOTEM_EFUSE_BURN = 3 /**< burn totem efuse bin */
+};
+
+struct comm_efuse_cfg_info {
+ struct comm_info_head head;
+ u8 opt_type; /**< operation type @see enum comm_efuse_opt_type */
+ u8 rsvd[3];
+ u32 total_len; /**< entire package length */
+ u32 data_csum; /**< data csum */
+ u8 data[MAX_EFUSE_INFO_BUF_SIZE]; /**< efuse cfg data, size 768byte */
+};
+
+/* serloop module interface */
+struct comm_cmd_hi30_serloop {
+ struct comm_info_head head;
+
+ u32 macro;
+ u32 lane;
+ u32 prbs_pattern;
+ u32 result;
+};
+
+struct cmd_sector_info {
+ struct comm_info_head head;
+ u32 offset; /**< flash addr */
+ u32 len; /**< flash length */
+};
+
+struct cmd_query_fw {
+ struct comm_info_head head;
+ u32 offset; /**< offset addr */
+ u32 len; /**< length */
+};
+
+struct nic_cmd_get_uart_log_info {
+ struct comm_info_head head;
+ struct {
+ u32 ret : 8;
+ u32 version : 8;
+ u32 log_elem_real_num : 16;
+ } log_head;
+ char uart_log[MAX_LOG_BUF_SIZE];
+};
+
+#define MAX_LOG_CMD_BUF_SIZE 128
+
+struct nic_cmd_set_uart_log_cmd {
+ struct comm_info_head head;
+ struct {
+ u32 ret : 8;
+ u32 version : 8;
+ u32 cmd_elem_real_num : 16;
+ } log_head;
+ char uart_cmd[MAX_LOG_CMD_BUF_SIZE];
+};
+
+struct dbgtool_up_reg_opt_info {
+ struct comm_info_head head;
+
+ u8 len;
+ u8 is_car;
+ u8 car_clear_flag;
+ u32 csr_addr; /**< register addr */
+ u32 csr_value; /**< register value */
+};
+
+struct comm_info_reg_read_write {
+ struct comm_info_head head;
+
+ u32 reg_addr; /**< register address */
+ u32 val_length; /**< register value length */
+
+ u32 data[2]; /**< register value */
+};
+
+#ifndef DFX_MAG_MAX_REG_NUM
+#define DFX_MAG_MAX_REG_NUM (32)
+#endif
+struct comm_info_dfx_mag_reg {
+ struct comm_info_head head;
+ u32 write; /**< read or write flag: 0:read; 1:write */
+ u32 reg_addr; /**< register address */
+ u32 reg_cnt; /**< register num , up to 32 */
+ u32 clear; /**< clear flag: 0:do not clear after read 1:clear after read */
+ u32 data[DFX_MAG_MAX_REG_NUM]; /**< register data */
+};
+
+struct comm_info_dfx_anlt_reg {
+ struct comm_info_head head;
+ u32 write; /**< read or write flag: 0:read; 1:write */
+ u32 reg_addr; /**< register address */
+ u32 reg_cnt; /**< register num , up to 32 */
+ u32 clear; /**< clear flag: 0:do not clear after read 1:clear after read */
+ u32 data[DFX_MAG_MAX_REG_NUM]; /**< register data */
+};
+
+#define MAX_DATA_NUM (240)
+struct csr_msg {
+ struct {
+ u32 node_id : 5; // [4:0]
+ u32 data_width : 10; // [14:5]
+ u32 rsvd : 17; // [31:15]
+ } bits;
+ u32 addr;
+};
+
+struct comm_cmd_heart_event {
+ struct mgmt_msg_head head;
+
+ u8 init_sta; /* 0: mpu init ok, 1: mpu init error. */
+ u8 rsvd1[3];
+ u32 heart; /* add one by one */
+ u32 heart_handshake; /* should always be 0x5A5A5A5A */
+};
+
+#define XREGS_NUM 31
+struct tag_cpu_tick {
+ u32 cnt_hi;
+ u32 cnt_lo;
+};
+
+struct tag_ax_exc_reg_info {
+ u64 ttbr0;
+ u64 ttbr1;
+ u64 tcr;
+ u64 mair;
+ u64 sctlr;
+ u64 vbar;
+ u64 current_el;
+ u64 sp;
+ /* The memory layout of the following fields is the same as that of TskContext. */
+ u64 elr; /* return address */
+ u64 spsr;
+ u64 far_r;
+ u64 esr;
+ u64 xzr;
+ u64 xregs[XREGS_NUM]; /* 0~30: x30~x0 */
+};
+
+struct tag_exc_info {
+ char os_ver[48]; /**< os version */
+ char app_ver[64]; /**< application version */
+ u32 exc_cause; /**< exception reason */
+ u32 thread_type; /**< Thread type before exception */
+ u32 thread_id; /**< Thread PID before exception */
+ u16 byte_order; /**< byte order */
+ u16 cpu_type; /**< CPU type */
+ u32 cpu_id; /**< CPU ID */
+ struct tag_cpu_tick cpu_tick; /**< CPU Tick */
+ u32 nest_cnt; /**< exception nesting count */
+ u32 fatal_errno; /**< fatal error code, valid when a fatal error occurs */
+ u64 uw_sp; /**< stack pointer before the exception */
+ u64 stack_bottom; /**< bottom of stack before exception */
+ /* Context information of the core register when an exception occurs.
+ * 82\57 must be located in byte 152, If any change is made,
+ * the OS_EXC_REGINFO_OFFSET macro in sre_platform.eh needs to be updated.
+ */
+ struct tag_ax_exc_reg_info reg_info; /**< register info @see EXC_REGS_S */
+};
+
+/* up lastword module interface reported to the driver */
+#define MPU_LASTWORD_SIZE 1024
+struct tag_comm_info_up_lastword {
+ struct comm_info_head head;
+
+ struct tag_exc_info stack_info;
+ u32 stack_actlen; /**< actual stack length (<=1024) */
+ u8 stack_data[MPU_LASTWORD_SIZE];
+};
+
+struct comm_cmd_mbox_csr_rd_req {
+ struct mgmt_msg_head head;
+ struct csr_msg csr_info[MAX_DATA_NUM];
+ u32 data_num;
+};
+
+struct comm_cmd_mbox_csr_wt_req {
+ struct mgmt_msg_head head;
+ struct csr_msg csr_info;
+ u64 value;
+};
+
+struct comm_cmd_mbox_csr_rd_ret {
+ struct mgmt_msg_head head;
+ u64 value[MAX_DATA_NUM];
+};
+
+struct comm_cmd_mbox_csr_wt_ret {
+ struct mgmt_msg_head head;
+};
+
+enum comm_virtio_dev_type {
+ COMM_VIRTIO_NET_TYPE = 0,
+ COMM_VIRTIO_BLK_TYPE = 1,
+ COMM_VIRTIO_SCSI_TYPE = 4,
+};
+
+struct comm_virtio_dev_cmd {
+ u16 device_type; /**< device type @see enum comm_virtio_dev_type */
+ u16 device_id;
+ u32 devid_switch;
+ u32 sub_vendor_id;
+ u32 sub_class_code;
+ u32 flash_en;
+};
+
+struct comm_virtio_dev_ctl {
+ u32 device_type_mark;
+ u32 devid_switch_mark;
+ u32 sub_vendor_id_mark;
+ u32 sub_class_code_mark;
+ u32 flash_en_mark;
+};
+
+struct comm_cmd_set_virtio_dev {
+ struct comm_info_head head;
+ struct comm_virtio_dev_cmd virtio_dev_cmd; /**< @see struct comm_virtio_dev_cmd_s */
+ struct comm_virtio_dev_ctl virtio_dev_ctl; /**< @see struct comm_virtio_dev_ctl_s */
+};
+
+/* Interfaces of the MAC Module */
+#ifndef MAC_ADDRESS_BYTE_NUM
+#define MAC_ADDRESS_BYTE_NUM (6)
+#endif
+struct comm_info_mac {
+ struct comm_info_head head;
+
+ u16 is_valid;
+ u16 rsvd0;
+ u8 data[MAC_ADDRESS_BYTE_NUM];
+ u16 rsvd1;
+};
+
+struct cmd_patch_active {
+ struct comm_info_head head;
+ u32 fw_type; /**< firmware type */
+ u32 data[3]; /**< reserved */
+};
+
+struct cmd_patch_deactive {
+ struct comm_info_head head;
+ u32 fw_type; /**< firmware type */
+ u32 data[3]; /**< reserved */
+};
+
+struct cmd_patch_remove {
+ struct comm_info_head head;
+ u32 fw_type; /**< firmware type */
+ u32 data[3]; /**< reserved */
+};
+
+struct cmd_patch_sram_optimize {
+ struct comm_info_head head;
+ u32 data[4]; /**< reserved */
+};
+
+/* ncsi counter */
+struct nsci_counter_in_info_s {
+ struct comm_info_head head;
+ u8 opt_type; /**< operate type 0:read counter 1:counter clear */
+ u8 rsvd[3];
+};
+
+struct channel_status_check_info_s {
+ struct comm_info_head head;
+ u32 rsvd1;
+ u32 rsvd2;
+};
+
+struct comm_cmd_compatible_info {
+ struct mgmt_msg_head head;
+ u8 chip_ver;
+ u8 host_env;
+ u8 rsv[13];
+
+ u8 cmd_count;
+ union {
+ struct {
+ u8 module;
+ u8 mod_type;
+ u16 cmd;
+ } cmd_desc;
+ u32 cmd_desc_val;
+ } cmds_desc[24];
+ u8 cmd_ver[24];
+};
+
+struct tag_ncsi_chan_info {
+ u8 aen_en; /**< aen enable */
+ u8 index; /**< index of channel */
+ u8 port; /**< net port number */
+ u8 state; /**< ncsi state */
+ u8 ncsi_port_en; /**< ncsi port enable flag (1:enable 0:disable) */
+ u8 rsv[3];
+ struct tag_ncsi_chan_capa capabilities; /**< ncsi channel capabilities */
+ struct tg_g_ncsi_parameters parameters; /**< ncsi state */
+};
+
+struct comm_cmd_ncsi_settings {
+ u8 ncsi_ver; /**< ncsi version */
+ u8 ncsi_pkg_id;
+ u8 arb_en; /**< arbitration en */
+ u8 duplex_set; /**< duplex mode */
+ u8 chan_num; /**< Number of virtual channels */
+ u8 iid; /**< identify new instances of a command */
+ u8 lldp_over_ncsi_enable;
+ u8 lldp_over_mctp_enable;
+ u32 magicwd;
+ u8 lldp_tx_enable;
+ u8 rsvd[3];
+ u32 crc;
+ struct tag_ncsi_chan_info ncsi_chan_info;
+};
+
+struct comm_cmd_ncsi_cfg {
+ struct comm_info_head head;
+ u8 ncsi_cable_state; /**< ncsi cable status 0: cable not present, 1: cable present */
+ u8 setting_type; /**< ncsi info type: 0: ram config, 1: flash config */
+ u8 port; /**< net port number */
+ u8 erase_flag; /**< flash erase flag, 1: erase flash info */
+ struct comm_cmd_ncsi_settings setting_info;
+};
+
+#define MQM_ATT_PAGE_NUM 128
+
+/* Maximum segment data length of the upgrade command */
+#define MAX_FW_FRAGMENT_LEN (1536)
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_outband_ncsi_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_outband_ncsi_cmd_defs.h
new file mode 100644
index 0000000..767f886
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_outband_ncsi_cmd_defs.h
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MPU_OUTBAND_NCSI_CMD_DEFS_H
+#define MPU_OUTBAND_NCSI_CMD_DEFS_H
+
+#pragma pack(push, 1)
+
+enum NCSI_RESPONSE_CODE_E {
+ COMMAND_COMPLETED = 0x00, /**< command completed */
+ COMMAND_FAILED = 0x01, /**< command failed */
+ COMMAND_UNAVAILABLE = 0x02, /**< command unavailable */
+ COMMAND_UNSPORRTED = 0x03 /**< command unsupported */
+};
+
+enum NCSI_REASON_CODE_E {
+ NO_ERROR = 0x00, /**< no error */
+ INTERFACE_INIT_REQUIRED = 0x01, /**< interface init required */
+ INVALID_PARA = 0x02, /**< invalid parameter */
+ CHAN_NOT_READY = 0x03, /**< channel not ready */
+ PKG_NOT_READY = 0x04, /**< package not ready */
+ INVALID_PAYLOAD_LEN = 0x05, /**< invalid payload len */
+ LINK_STATUS_ERROR = 0xA06, /**< get link status fail */
+ VLAN_TAG_INVALID = 0xB07, /**< vlan tag invalid */
+ MAC_ADD_IS_ZERO = 0xE08, /**< mac add is zero */
+ FLOW_CONTROL_UNSUPPORTED = 0x09, /**< flow control unsupported */
+ CHECKSUM_ERR = 0xA, /**< check sum error */
+ /**< the command type is unsupported only when the response code is 0x03 */
+ UNSUPPORTED_COMMAND_TYPE = 0x7FFF
+};
+
+enum NCSI_CLIENT_TYPE_E {
+ NCSI_RMII_TYPE = 1, /**< rmii client */
+ NCSI_MCTP_TYPE = 2, /**< MCTP client */
+ NCSI_AEN_TYPE = 3 /**< AEN client */
+};
+
+/**
+ * @brief ncsi ctrl packet header
+ */
+struct tag_ncsi_ctrl_packet_header {
+ u8 mc_id; /**< management control ID */
+ u8 head_revision; /**< head revision */
+ u8 reserved0; /**< reserved */
+ u8 iid; /**< instance ID */
+ u8 pkt_type; /**< packet type */
+#ifdef NCSI_BIG_ENDIAN
+ u8 pkg_id : 3; /**< packet ID */
+ u8 inter_chan_id : 5; /**< channel ID */
+#else
+ u8 inter_chan_id : 5; /**< channel ID */
+ u8 pkg_id : 3; /**< packet ID */
+#endif
+#ifdef BD_BIG_ENDIAN
+ u8 reserved1 : 4; /**< reserved1 */
+ u8 payload_len_hi : 4; /**< upper 4 bits of the 12-bit payload length */
+#else
+ u8 payload_len_hi : 4; /**< upper 4 bits of the 12-bit payload length */
+ u8 reserved1 : 4; /**< reserved1 */
+#endif
+ u8 payload_len_lo; /**< lower 8 bits of the payload length */
+ u32 reserved2; /**< reserved2 */
+ u32 reserved3; /**< reserved3 */
+};
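
Since the 12-bit payload length is split across payload_len_hi (upper 4 bits) and payload_len_lo, a small accessor along these lines reassembles it; the helper name is illustrative, not from this patch:

#include <linux/types.h>

/* Illustrative only: recover the 12-bit NC-SI control packet payload
 * length from the split fields of the header above. */
static inline u16 ncsi_ctrl_payload_len(const struct tag_ncsi_ctrl_packet_header *hdr)
{
        return ((u16)hdr->payload_len_hi << 8) | hdr->payload_len_lo;
}
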
+
+#define NCSI_MAX_PAYLOAD_LEN 1500
+#define NCSI_MAC_LEN 6
+
+/**
+ * @brief ncsi clear initial state command struct definition
+ *
+ */
+struct tag_ncsi_ctrl_packet {
+ struct tag_ncsi_ctrl_packet_header packet_head; /**< ncsi ctrl packet header */
+ u8 payload[NCSI_MAX_PAYLOAD_LEN]; /**< ncsi ctrl packet payload */
+};
+
+/**
+ * @brief ethernet header description
+ *
+ */
+struct tag_ethernet_header {
+ u8 dst_addr[NCSI_MAC_LEN]; /**< ethernet destination address */
+ u8 src_addr[NCSI_MAC_LEN]; /**< ethernet source address */
+ u16 ether_type; /**< ethernet type */
+};
+
+/**
+ * @brief ncsi common packet description
+ *
+ */
+struct tg_ncsi_common_packet {
+ struct tag_ethernet_header frame_head; /**< common packet ethernet frame header */
+ struct tag_ncsi_ctrl_packet ctrl_packet; /**< common packet ncsi ctrl packet */
+};
+
+/**
+ * @brief ncsi clear initial state command struct definition
+ */
+struct tag_ncsi_client_info {
+ u8 *name; /**< client info client name */
+ u32 type; /**< client info type of ncsi media @see enum NCSI_CLIENT_TYPE_E */
+ u8 bmc_mac[NCSI_MAC_LEN]; /**< client info BMC mac addr */
+ u8 ncsi_mac[NCSI_MAC_LEN]; /**< client info local mac addr */
+ u8 reserve[2]; /**< client info reserved, Four-byte alignment */
+ u32 rsp_len; /**< client info include pad */
+ struct tg_ncsi_common_packet ncsi_packet_rsp; /**< ncsi common packet response */
+};
+
+/* AEN Enable Command (0x08) */
+#define AEN_ENABLE_REQ_LEN 8
+#define AEN_ENABLE_RSP_LEN 4
+#define AEN_CTRL_LINK_STATUS_SHIFT 0
+#define AEN_CTRL_CONFIG_REQ_SHIFT 1
+#define AEN_CTRL_DRV_CHANGE_SHIFT 2
+
+/* AEN Type */
+enum aen_type_e {
+ AEN_LINK_STATUS_CHANGE_TYPE = 0x0,
+ AEN_CONFIG_REQUIRED_TYPE = 0x1,
+ OEM_AEN_CONFIG_REQUEST_TYPE = 0x80,
+ AEN_TYPE_MAX = 0x100
+};
+
+/* get link status 0x0A */
+#define GET_LINK_STATUS_REQ_LEN 0
+#define GET_LINK_STATUS_RSP_LEN 16
+/* link speed(fc link speed is mapped to unknown) */
+enum NCSI_CMD_LINK_SPEED_E {
+ LINK_SPEED_10M = 0x2, /**< 10M */
+ LINK_SPEED_100M = 0x5, /**< 100M */
+ LINK_SPEED_1G = 0x7, /**< 1G */
+ LINK_SPEED_10G = 0x8, /**< 10G */
+ LINK_SPEED_20G = 0x9, /**< 20G */
+ LINK_SPEED_25G = 0xa, /**< 25G */
+ LINK_SPEED_40G = 0xb, /**< 40G */
+ LINK_SPEED_50G = 0xc, /**< 50G */
+ LINK_SPEED_100G = 0xd, /**< 100G */
+ LINK_SPEED_2_5G = 0xe, /**< 2.5G */
+ LINK_SPEED_UNKNOWN = 0xf
+};
+
+/* Set Vlan Filter (0x0B) */
+/* Only VLAN-tagged packets that match the enabled VLAN Filter settings are accepted. */
+#define VLAN_MODE_UNSET 0X00
+#define VLAN_ONLY 0x01
+/* if the MAC address matches, any vlan-tagged and non-vlan-tagged packets will be accepted */
+#define ANYVLAN_NONVLAN 0x03
+#define VLAN_MODE_SUPPORT 0x05
+
+/* channel vlan filter enable */
+#define CHNL_VALN_FL_ENABLE 0x01
+#define CHNL_VALN_FL_DISABLE 0x00
+
+/* vlan id valid flag */
+#define VLAN_ID_VALID 0x01
+#define VLAN_ID_INVALID 0x00
+
+/* VLAN ID */
+#define SET_VLAN_FILTER_REQ_LEN 8
+#define SET_VLAN_FILTER_RSP_LEN 4
+
+/* ncsi_get_controller_packet_statistics_config */
+#define NO_INFORMATION_STATISTICS 0xff
+
+/* Enable VLAN Command (0x0C) */
+#define ENABLE_VLAN_REQ_LEN 4
+#define ENABLE_VLAN_RSP_LEN 4
+#define VLAN_FL_MAX_ID 8
+
+/* NCSI channel capabilities */
+struct tag_ncsi_chan_capa {
+ u32 capa_flags; /**< NCSI channel capabilities capa flags */
+ u32 bcast_filter; /**< NCSI channel capabilities bcast filter */
+ u32 multicast_filter; /**< NCSI channel capabilities multicast filter */
+ u32 buffering; /**< NCSI channel capabilities buffering */
+ u32 aen_ctrl; /**< NCSI channel capabilities aen ctrl */
+ u8 vlan_count; /**< NCSI channel capabilities vlan count */
+ u8 mixed_count; /**< NCSI channel capabilities mixed count */
+ u8 multicast_count; /**< NCSI channel capabilities multicast count */
+ u8 unicast_count; /**< NCSI channel capabilities unicast count */
+ u16 rsvd; /**< NCSI channel capabilities reserved */
+ u8 vlan_mode; /**< NCSI channel capabilities vlan mode */
+ u8 chan_count; /**< NCSI channel capabilities channel count */
+};
+
+struct tg_g_ncsi_parameters {
+ u8 mac_address_count;
+ u8 reserved1[2];
+ u8 mac_address_flags;
+ u8 vlan_tag_count;
+ u8 reserved2;
+ u16 vlan_tag_flags;
+ u32 link_settings;
+ u32 broadcast_packet_filter_settings;
+ u8 broadcast_packet_filter_status : 1;
+ u8 channel_enable : 1;
+ u8 channel_network_tx_enable : 1;
+ u8 global_mulicast_packet_filter_status : 1;
+ /**< bit0-3: mac_add0 - mac_add3 address type: 0 unicast, 1 multicast */
+ u8 config_flags_reserved1 : 4;
+ u8 config_flags_reserved2[3];
+ u8 vlan_mode; /**< current vlan mode */
+ u8 flow_control_enable;
+ u16 reserved3;
+ u32 AEN_control;
+ u8 mac_add[4][6];
+ u16 vlan_tag[VLAN_FL_MAX_ID];
+};
+
+#pragma pack(pop)
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/nic_cfg_comm.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/nic_cfg_comm.h
new file mode 100644
index 0000000..83b75f9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/nic_cfg_comm.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C), 2001-2021, Huawei Tech. Co., Ltd.
+ * File Name : nic_cfg_comm.h
+ * Version : Initial Draft
+ * Description : nic config common header file
+ * Function List :
+ * History :
+ * Modification: Created file
+ */
+
+#ifndef NIC_CFG_COMM_H
+#define NIC_CFG_COMM_H
+
+/* rss */
+#define HINIC3_RSS_TYPE_VALID_SHIFT 23
+#define HINIC3_RSS_TYPE_TCP_IPV6_EXT_SHIFT 24
+#define HINIC3_RSS_TYPE_IPV6_EXT_SHIFT 25
+#define HINIC3_RSS_TYPE_TCP_IPV6_SHIFT 26
+#define HINIC3_RSS_TYPE_IPV6_SHIFT 27
+#define HINIC3_RSS_TYPE_TCP_IPV4_SHIFT 28
+#define HINIC3_RSS_TYPE_IPV4_SHIFT 29
+#define HINIC3_RSS_TYPE_UDP_IPV6_SHIFT 30
+#define HINIC3_RSS_TYPE_UDP_IPV4_SHIFT 31
+
+#define HINIC3_RSS_TYPE_SET(val, member) (((u32)(val) & 0x1) << HINIC3_RSS_TYPE_##member##_SHIFT)
+#define HINIC3_RSS_TYPE_GET(val, member) (((u32)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1)
+
+enum nic_rss_hash_type {
+ NIC_RSS_HASH_TYPE_XOR = 0,
+ NIC_RSS_HASH_TYPE_TOEP,
+
+ NIC_RSS_HASH_TYPE_MAX /* MUST BE THE LAST ONE */
+};
+
+#define NIC_RSS_INDIR_SIZE 256
+#define NIC_RSS_KEY_SIZE 40
+
+/* *
+ * Definition of the NIC receiving mode
+ */
+#define NIC_RX_MODE_UC 0x01
+#define NIC_RX_MODE_MC 0x02
+#define NIC_RX_MODE_BC 0x04
+#define NIC_RX_MODE_MC_ALL 0x08
+#define NIC_RX_MODE_PROMISC 0x10
+#define NIC_RX_DB_COS_MAX 0x4
+
+/* IEEE 802.1Qaz std */
+#define NIC_DCB_COS_MAX 0x8
+#define NIC_DCB_UP_MAX 0x8
+#define NIC_DCB_TC_MAX 0x8
+#define NIC_DCB_PG_MAX 0x8
+#define NIC_DCB_TSA_SP 0x0
+#define NIC_DCB_TSA_CBS 0x1 /* hi1822 do NOT support */
+#define NIC_DCB_TSA_ETS 0x2
+#define NIC_DCB_DSCP_NUM 0x8
+#define NIC_DCB_IP_PRI_MAX 0x40
+
+#define NIC_DCB_PRIO_DWRR 0x0
+#define NIC_DCB_PRIO_STRICT 0x1
+
+#define NIC_DCB_MAX_PFC_NUM 0x4
+
+#define NIC_ETS_PERCENT_WEIGHT 100
+
+#endif
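
As a usage note for the HINIC3_RSS_TYPE_SET/GET helpers defined above, a short sketch; the function and the chosen hash types are only an example, not from this patch:

#include <linux/types.h>

/* Illustrative only: build an RSS type bitmap that enables IPv4 and
 * TCP-over-IPv4 hashing, then read one bit back. */
static u32 build_rss_type(void)
{
        u32 rss_type = 0;

        rss_type |= HINIC3_RSS_TYPE_SET(1, VALID);
        rss_type |= HINIC3_RSS_TYPE_SET(1, IPV4);
        rss_type |= HINIC3_RSS_TYPE_SET(1, TCP_IPV4);

        if (!HINIC3_RSS_TYPE_GET(rss_type, TCP_IPV4))
                return 0;

        return rss_type;
}
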
diff --git a/drivers/net/ethernet/huawei/hinic3/include/ossl_types.h b/drivers/net/ethernet/huawei/hinic3/include/ossl_types.h
new file mode 100644
index 0000000..c646e7c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/ossl_types.h
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef _OSSL_TYPES_H
+#define _OSSL_TYPES_H
+
+#undef NULL
+#if defined(__cplusplus)
+#define NULL 0
+#else
+#define NULL ((void *)0)
+#endif
+
+#if defined(__LINUX__)
+#ifdef __USER__ /* linux user */
+#if defined(__ia64__) || defined(__x86_64__) || defined(__aarch64__)
+#define s64 long
+#define u64 unsigned long
+#else
+#define s64 long long
+#define u64 unsigned long long
+#endif
+#define s32 int
+#define u32 unsigned int
+#define s16 short
+#define u16 unsigned short
+
+#ifdef __hinic_arm__
+#define s8 signed char
+#else
+#define s8 char
+#endif
+
+#ifndef dma_addr_t
+typedef u64 dma_addr_t;
+#endif
+
+#define u8 unsigned char
+#define ulong unsigned long
+#define uint unsigned int
+
+#define ushort unsigned short
+
+#endif
+#endif
+
+#define uda_handle void *
+
+#define UDA_TRUE 1
+#define UDA_FALSE 0
+
+#if defined(__USER__) || defined(USER)
+#ifndef F_OK
+#define F_OK 0
+#endif
+#ifndef F_FAILED
+#define F_FAILED (-1)
+#endif
+
+#define uda_status int
+#define TOOL_REAL_PATH_MAX_LEN 512
+#define SAFE_FUNCTION_ERR (-1)
+
+enum {
+ UDA_SUCCESS = 0x0, // run success
+ UDA_FAIL, // run failed
+ UDA_ENXIO, // no device
+ UDA_ENONMEM, // alloc memory failed
+ UDA_EBUSY, // card busy or restart
+ UDA_ECRC, // CRC check error
+ UDA_EINVAL, // invalid parameter
+ UDA_EFAULT, // invalid address
+ UDA_ELEN, // invalid length
+ UDA_ECMD, // error occurs when execute the cmd
+ UDA_ENODRIVER, // driver is not installed
+ UDA_EXIST, // has existed
+ UDA_EOVERSTEP, // over step
+ UDA_ENOOBJ, // have no object
+ UDA_EOBJ, // error object
+ UDA_ENOMATCH, // driver does not match to firmware
+ UDA_ETIMEOUT, // timeout
+
+ UDA_CONTOP,
+
+ UDA_REBOOT = 0xFD,
+ UDA_CANCEL = 0xFE,
+ UDA_KILLED = 0xFF,
+};
+
+enum {
+ UDA_FLOCK_NOBLOCK = 0,
+ UDA_FLOCK_BLOCK = 1,
+};
+
+/* array index */
+#define ARRAY_INDEX_0 0
+#define ARRAY_INDEX_1 1
+#define ARRAY_INDEX_2 2
+#define ARRAY_INDEX_3 3
+#define ARRAY_INDEX_4 4
+#define ARRAY_INDEX_5 5
+#define ARRAY_INDEX_6 6
+#define ARRAY_INDEX_7 7
+#define ARRAY_INDEX_8 8
+#define ARRAY_INDEX_12 12
+#define ARRAY_INDEX_13 13
+
+/* define shift bits */
+#define SHIFT_BIT_1 1
+#define SHIFT_BIT_2 2
+#define SHIFT_BIT_3 3
+#define SHIFT_BIT_4 4
+#define SHIFT_BIT_6 6
+#define SHIFT_BIT_7 7
+#define SHIFT_BIT_8 8
+#define SHIFT_BIT_11 11
+#define SHIFT_BIT_12 12
+#define SHIFT_BIT_15 15
+#define SHIFT_BIT_16 16
+#define SHIFT_BIT_17 17
+#define SHIFT_BIT_19 19
+#define SHIFT_BIT_20 20
+#define SHIFT_BIT_23 23
+#define SHIFT_BIT_24 24
+#define SHIFT_BIT_25 25
+#define SHIFT_BIT_26 26
+#define SHIFT_BIT_28 28
+#define SHIFT_BIT_29 29
+#define SHIFT_BIT_32 32
+#define SHIFT_BIT_35 35
+#define SHIFT_BIT_37 37
+#define SHIFT_BIT_39 39
+#define SHIFT_BIT_40 40
+#define SHIFT_BIT_43 43
+#define SHIFT_BIT_48 48
+#define SHIFT_BIT_51 51
+#define SHIFT_BIT_56 56
+#define SHIFT_BIT_57 57
+#define SHIFT_BIT_59 59
+#define SHIFT_BIT_60 60
+#define SHIFT_BIT_61 61
+
+#endif
+#endif /* OSSL_TYPES_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/include/public/npu_cmdq_base_defs.h b/drivers/net/ethernet/huawei/hinic3/include/public/npu_cmdq_base_defs.h
new file mode 100644
index 0000000..78236c9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/public/npu_cmdq_base_defs.h
@@ -0,0 +1,232 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef NPU_CMDQ_BASE_DEFS_H
+#define NPU_CMDQ_BASE_DEFS_H
+
+/* CmdQ Common subtype */
+enum comm_cmdq_cmd {
+ COMM_CMD_UCODE_ARM_BIT_SET = 2,
+ COMM_CMD_SEND_NPU_DFT_CMD,
+};
+
+/* Cmdq ack type */
+enum hinic3_ack_type {
+ HINIC3_ACK_TYPE_CMDQ,
+ HINIC3_ACK_TYPE_SHARE_CQN,
+ HINIC3_ACK_TYPE_APP_CQN,
+
+ HINIC3_MOD_ACK_MAX = 15,
+};
+
+/* Defines the queue type of the set arm bit. */
+enum {
+ SET_ARM_BIT_FOR_CMDQ = 0,
+ SET_ARM_BIT_FOR_L2NIC_SQ,
+ SET_ARM_BIT_FOR_L2NIC_RQ,
+ SET_ARM_BIT_TYPE_NUM
+};
+
+/* Defines the type. Each function supports a maximum of eight CMDQ types. */
+enum {
+ CMDQ_0 = 0,
+ CMDQ_1 = 1, /* dedicated and non-blocking queues */
+ CMDQ_NUM
+};
+
+/* *******************cmd common command data structure ************************ */
+// Func->ucode data used to set the arm bit;
+// the microcode needs to perform the big-endian conversion.
+struct comm_info_ucode_set_arm_bit {
+ u32 q_type;
+ u32 q_id;
+};
+
+/* *******************WQE data structure ************************ */
+union cmdq_wqe_cs_dw0 {
+ struct {
+ u32 err_status : 29;
+ u32 error_code : 2;
+ u32 rsvd : 1;
+ } bs;
+ u32 val;
+};
+
+union cmdq_wqe_cs_dw1 {
+ struct {
+ u32 token : 16; // [15:0]
+ u32 cmd : 8; // [23:16]
+ u32 mod : 5; // [28:24]
+ u32 ack_type : 2; // [30:29]
+ u32 obit : 1; // [31]
+ } drv_wr; // This structure is used when the driver writes the wqe.
+
+ struct {
+ u32 mod : 5; // [4:0]
+ u32 ack_type : 3; // [7:5]
+ u32 cmd : 8; // [15:8]
+ u32 arm : 1; // [16]
+ u32 rsvd : 14; // [30:17]
+ u32 obit : 1; // [31]
+ } wb;
+ u32 val;
+};
+
+/* CmdQ BD information or write back buffer information */
+struct cmdq_sge {
+ u32 pa_h; // Upper 32 bits of the physical address
+ u32 pa_l; // Lower 32 bits of the physical address
+ u32 len; // Invalid bit[31].
+ u32 resv;
+};
+
+/* Ctrls section definition of WQE */
+struct cmdq_wqe_ctrls {
+ union {
+ struct {
+ u32 bdsl : 8; // [7:0]
+ u32 drvsl : 2; // [9:8]
+ u32 rsv : 4; // [13:10]
+ u32 wf : 1; // [14]
+ u32 cf : 1; // [15]
+ u32 tsl : 5; // [20:16]
+ u32 va : 1; // [21]
+ u32 df : 1; // [22]
+ u32 cr : 1; // [23]
+ u32 difsl : 3; // [26:24]
+ u32 csl : 2; // [28:27]
+ u32 ctrlsl : 2; // [30:29]
+ u32 obit : 1; // [31]
+ } bs;
+ u32 val;
+ } header;
+ u32 qsf;
+};
+
+/* Complete section definition of WQE */
+struct cmdq_wqe_cs {
+ union cmdq_wqe_cs_dw0 dw0;
+ union cmdq_wqe_cs_dw1 dw1;
+ union {
+ struct cmdq_sge sge;
+ u32 dw2_5[4];
+ } ack;
+};
+
+/* Inline header in WQE inline, describing the length of inline data */
+union cmdq_wqe_inline_header {
+ struct {
+ u32 buf_len : 11; // [10:0] inline data len
+ u32 rsv : 21; // [31:11]
+ } bs;
+ u32 val;
+};
+
+/* Definition of buffer descriptor section in WQE */
+union cmdq_wqe_bds {
+ struct {
+ struct cmdq_sge bds_sge;
+ u32 rsvd[4]; /* Zwy is used to transfer the virtual address of the buffer. */
+ } lcmd; /* Long command, non-inline; the SGE describes the buffer information. */
+};
+
+/* Definition of CMDQ WQE */
+/* (long cmd, 64B)
+ * +----------------------------------------+
+ * | ctrl section(8B) |
+ * +----------------------------------------+
+ * | |
+ * | complete section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | |
+ * | buffer descriptor section(16B) |
+ * | |
+ * +----------------------------------------+
+ * | driver section(16B) |
+ * +----------------------------------------+
+ *
+ *
+ * (middle cmd, 128B)
+ * +----------------------------------------+
+ * | ctrl section(8B) |
+ * +----------------------------------------+
+ * | |
+ * | complete section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | |
+ * | buffer descriptor section(88B) |
+ * | |
+ * +----------------------------------------+
+ * | driver section(8B) |
+ * +----------------------------------------+
+ *
+ *
+ * (short cmd, 64B)
+ * +----------------------------------------+
+ * | ctrl section(8B) |
+ * +----------------------------------------+
+ * | |
+ * | complete section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | |
+ * | buffer descriptor section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | driver section(8B) |
+ * +----------------------------------------+
+ */
+struct cmdq_wqe {
+ struct cmdq_wqe_ctrls ctrls;
+ struct cmdq_wqe_cs cs;
+ union cmdq_wqe_bds bds;
+};
+
+/* Definition of ctrls section in inline WQE */
+struct cmdq_wqe_ctrls_inline {
+ union {
+ struct {
+ u32 bdsl : 8; // [7:0]
+ u32 drvsl : 2; // [9:8]
+ u32 rsv : 4; // [13:10]
+ u32 wf : 1; // [14]
+ u32 cf : 1; // [15]
+ u32 tsl : 5; // [20:16]
+ u32 va : 1; // [21]
+ u32 df : 1; // [22]
+ u32 cr : 1; // [23]
+ u32 difsl : 3; // [26:24]
+ u32 csl : 2; // [28:27]
+ u32 ctrlsl : 2; // [30:29]
+ u32 obit : 1; // [31]
+ } bs;
+ u32 val;
+ } header;
+ u32 qsf;
+ u64 db;
+};
+
+/* Buffer descriptor section definition of WQE */
+union cmdq_wqe_bds_inline {
+ struct {
+ union cmdq_wqe_inline_header header;
+ u32 rsvd;
+ u8 data_inline[80];
+ } mcmd; /* Middle command, inline mode */
+
+ struct {
+ union cmdq_wqe_inline_header header;
+ u32 rsvd;
+ u8 data_inline[16];
+ } scmd; /* Short command, inline mode */
+};
+
+struct cmdq_wqe_inline {
+ struct cmdq_wqe_ctrls_inline ctrls;
+ struct cmdq_wqe_cs cs;
+ union cmdq_wqe_bds_inline bds;
+};
+
+#endif
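
The ASCII layouts above imply fixed WQE footprints. A compile-time sketch that matches them, assuming the usual u32/u64 kernel typedefs and no compiler-inserted padding (which holds for these 32/64-bit members):

#include <linux/build_bug.h>

/* Illustrative only: 8B ctrls + 24B complete section + 32B bds (the 16B
 * buffer descriptor plus the 16B driver section in the diagram) = 64B for
 * the long-command WQE; the inline variant is 16B + 24B + 88B = 128B. */
static_assert(sizeof(struct cmdq_wqe) == 64);
static_assert(sizeof(struct cmdq_wqe_inline) == 128);
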
diff --git a/drivers/net/ethernet/huawei/hinic3/include/readme.txt b/drivers/net/ethernet/huawei/hinic3/include/readme.txt
new file mode 100644
index 0000000..895f213
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/readme.txt
@@ -0,0 +1 @@
+This directory contains interfaces shared internally among services.
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/include/vmsec/vmsec_mpu_common.h b/drivers/net/ethernet/huawei/hinic3/include/vmsec/vmsec_mpu_common.h
new file mode 100644
index 0000000..d78dba8
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/vmsec/vmsec_mpu_common.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef VMSEC_MPU_COMMON_H
+#define VMSEC_MPU_COMMON_H
+
+#include "mpu_cmd_base_defs.h"
+
+#define VM_GPA_INFO_MODE_MIG 0
+#define VM_GPA_INFO_MODE_NMIG 1
+
+/**
+ * Commands between VMSEC to MPU
+ */
+enum tag_vmsec_mpu_cmd {
+ /* vmsec ctx gpa */
+ VMSEC_MPU_CMD_CTX_GPA_SET = 0,
+ VMSEC_MPU_CMD_CTX_GPA_SHOW,
+ VMSEC_MPU_CMD_CTX_GPA_DEL,
+
+ /* vmsec pci hole */
+ VMSEC_MPU_CMD_PCI_HOLE_SET,
+ VMSEC_MPU_CMD_PCI_HOLE_SHOW,
+ VMSEC_MPU_CMD_PCI_HOLE_DEL,
+
+ /* vmsec func cfg */
+ VMSEC_MPU_CMD_FUN_CFG_ENTRY_IDX_SET,
+ VMSEC_MPU_CMD_FUN_CFG_ENTRY_IDX_SHOW,
+
+ VMSEC_MPU_CMD_MAX
+};
+
+struct vmsec_ctx_gpa_entry {
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 func_id : 16;
+ u32 mode : 8;
+ u32 rsvd : 8;
+#else
+ u32 rsvd : 8;
+ u32 mode : 8;
+ u32 func_id : 16;
+#endif
+
+ /* sml tbl to wr */
+ u32 gpa_addr0_hi;
+ u32 gpa_addr0_lo;
+ u32 gpa_len0;
+};
+
+struct vmsec_pci_hole_idx {
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 entry_idx : 5;
+ u32 rsvd : 27;
+#else
+ u32 rsvd : 27;
+ u32 entry_idx : 5;
+#endif
+};
+
+struct vmsec_pci_hole_entry {
+ /* sml tbl to wr */
+ /* pcie hole 32-bit region */
+ u32 gpa_addr0_hi;
+ u32 gpa_addr0_lo;
+ u32 gpa_len0_hi;
+ u32 gpa_len0_lo;
+
+ /* pcie hole 64-bit region */
+ u32 gpa_addr1_hi;
+ u32 gpa_addr1_lo;
+ u32 gpa_len1_hi;
+ u32 gpa_len1_lo;
+
+ /* ctrl info used by drv */
+ u32 domain_id; /* unique vm id */
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 rsvd1 : 21;
+ u32 vf_nums : 11;
+#else
+ u32 rsvd1 : 21;
+ u32 vf_nums : 11;
+#endif
+ u32 vroce_vf_bitmap;
+};
+
+struct vmsec_funcfg_info_entry {
+ /* funcfg to update */
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 func_id : 16;
+ u32 entry_vld : 1;
+ u32 entry_idx : 5;
+ u32 rsvd : 10;
+#else
+ u32 rsvd : 10;
+ u32 entry_idx : 5;
+ u32 entry_vld : 1;
+ u32 func_id : 16;
+#endif
+};
+
+/* set/get/del */
+struct vmsec_cfg_ctx_gpa_entry_cmd {
+ struct comm_info_head head;
+ struct vmsec_ctx_gpa_entry entry;
+};
+
+#endif /* VMSEC_MPU_COMMON_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/include/vram_common.h b/drivers/net/ethernet/huawei/hinic3/include/vram_common.h
new file mode 100644
index 0000000..9f93f7e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/vram_common.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef VRAM_COMMON_H
+#define VRAM_COMMON_H
+
+#include <linux/pci.h>
+#include <linux/notifier.h>
+
+#define VRAM_BLOCK_SIZE_2M 0x200000UL
+#define KEXEC_SIGN "hinic-in-kexec"
+// vram_name max len is currently 14; pay attention to this value when adding other vram entries
+#define VRAM_NAME_MAX_LEN 16
+
+#define VRAM_CQM_GLB_FUNC_BASE "F"
+#define VRAM_CQM_FAKE_MEM_BASE "FK"
+#define VRAM_CQM_CLA_BASE "C"
+#define VRAM_CQM_CLA_TYPE_BASE "T"
+#define VRAM_CQM_CLA_SMF_BASE "SMF"
+#define VRAM_CQM_CLA_COORD_X "X"
+#define VRAM_CQM_CLA_COORD_Y "Y"
+#define VRAM_CQM_CLA_COORD_Z "Z"
+#define VRAM_CQM_BITMAP_BASE "B"
+
+#define VRAM_NIC_DCB "DCB"
+#define VRAM_NIC_MHOST_MGMT "MHOST_MGMT"
+#define VRAM_NIC_VRAM "NIC_VRAM"
+#define VRAM_NIC_IRQ_VRAM "NIC_IRQ"
+
+#define VRAM_NIC_MQM "NM"
+
+#define VRAM_VBS_BASE_IOCB "BASE_IOCB"
+#define VRAM_VBS_EX_IOCB "EX_IOCB"
+#define VRAM_VBS_RXQS_CQE "RXQS_CQE"
+
+#define VRAM_VBS_VOLQ_MTT "VOLQ_MTT"
+#define VRAM_VBS_VOLQ_MTT_PAGE "MTT_PAGE"
+
+#define VRAM_VROCE_ENTRY_POOL "VROCE_ENTRY"
+#define VRAM_VROCE_GROUP_POOL "VROCE_GROUP"
+#define VRAM_VROCE_UUID "VROCE_UUID"
+#define VRAM_VROCE_VID "VROCE_VID"
+#define VRAM_VROCE_BASE "VROCE_BASE"
+#define VRAM_VROCE_DSCP "VROCE_DSCP"
+#define VRAM_VROCE_QOS "VROCE_QOS"
+#define VRAM_VROCE_DEV "VROCE_DEV"
+#define VRAM_VROCE_RGROUP_HT_CNT "RGROUP_CNT"
+#define VRAM_VROCE_RACL_HT_CNT "RACL_CNT"
+
+#define VRAM_NAME_APPLY_LEN 64
+
+#define MPU_OS_HOTREPLACE_FLAG 0x1
+struct vram_buf_info {
+ char buf_vram_name[VRAM_NAME_APPLY_LEN];
+ int use_vram;
+};
+
+enum KUP_HOOK_POINT {
+ PRE_FREEZE,
+ FREEZE_TO_KILL,
+ PRE_UPDATE_KERNEL,
+ POST_UPDATE_KERNEL,
+ UNFREEZE_TO_RUN,
+ POST_RUN,
+ KUP_HOOK_MAX,
+};
+
+#define hi_vram_kalloc(name, size) 0
+#define hi_vram_kfree(vaddr, name, size)
+#define get_use_vram_flag(void) 0
+#define vram_get_kexec_flag(void) 0
+#define hi_vram_get_gfp_vram(void) 0
+
+#endif /* VRAM_COMMON_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/mag_mpu_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/mag_mpu_cmd_defs.h
new file mode 100644
index 0000000..e77d7d5
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/mag_mpu_cmd_defs.h
@@ -0,0 +1,1143 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MAG_MPU_CMD_DEFS_H
+#define MAG_MPU_CMD_DEFS_H
+
+#include "mpu_cmd_base_defs.h"
+
+/* serdes cmd struct define */
+#define CMD_ARRAY_BUF_SIZE 64
+#define SERDES_CMD_DATA_BUF_SIZE 512
+#define RATE_MBPS_TO_GBPS 1000
+struct serdes_in_info {
+ u32 chip_id : 16;
+ u32 macro_id : 16;
+ u32 start_sds_id : 16;
+ u32 sds_num : 16;
+
+ u32 cmd_type : 8; /* reserved for iotype */
+ u32 sub_cmd : 8;
+ u32 rw : 1; /* 0: read, 1: write */
+ u32 rsvd : 15;
+
+ u32 val;
+ union {
+ char field[CMD_ARRAY_BUF_SIZE];
+ u32 addr;
+ u8 *ex_param;
+ };
+};
+
+struct serdes_out_info {
+ u32 str_len; /* out_str length */
+ u32 result_offset;
+ u32 type; /* 0:data; 1:string */
+ char out_str[SERDES_CMD_DATA_BUF_SIZE];
+};
+
+struct serdes_cmd_in {
+ struct mgmt_msg_head head;
+
+ struct serdes_in_info serdes_in;
+};
+
+struct serdes_cmd_out {
+ struct mgmt_msg_head head;
+
+ struct serdes_out_info serdes_out;
+};
+
+enum mag_cmd_port_speed {
+ PORT_SPEED_NOT_SET = 0,
+ PORT_SPEED_10MB = 1,
+ PORT_SPEED_100MB = 2,
+ PORT_SPEED_1GB = 3,
+ PORT_SPEED_10GB = 4,
+ PORT_SPEED_25GB = 5,
+ PORT_SPEED_40GB = 6,
+ PORT_SPEED_50GB = 7,
+ PORT_SPEED_100GB = 8,
+ PORT_SPEED_200GB = 9,
+ PORT_SPEED_UNKNOWN
+};
+
+enum mag_cmd_port_an {
+ PORT_AN_NOT_SET = 0,
+ PORT_CFG_AN_ON = 1,
+ PORT_CFG_AN_OFF = 2
+};
+
+enum mag_cmd_port_adapt {
+ PORT_ADAPT_NOT_SET = 0,
+ PORT_CFG_ADAPT_ON = 1,
+ PORT_CFG_ADAPT_OFF = 2
+};
+
+enum mag_cmd_port_sriov {
+ PORT_SRIOV_NOT_SET = 0,
+ PORT_CFG_SRIOV_ON = 1,
+ PORT_CFG_SRIOV_OFF = 2
+};
+
+enum mag_cmd_port_fec {
+ PORT_FEC_NOT_SET = 0,
+ PORT_FEC_RSFEC = 1,
+ PORT_FEC_BASEFEC = 2,
+ PORT_FEC_NOFEC = 3,
+ PORT_FEC_LLRSFEC = 4,
+ PORT_FEC_AUTO = 5
+};
+
+enum mag_cmd_port_lanes {
+ PORT_LANES_NOT_SET = 0,
+ PORT_LANES_X1 = 1,
+ PORT_LANES_X2 = 2,
+ PORT_LANES_X4 = 4,
+ PORT_LANES_X8 = 8 /* reserved for future use */
+};
+
+enum mag_cmd_port_duplex {
+ PORT_DUPLEX_HALF = 0,
+ PORT_DUPLEX_FULL = 1
+};
+
+enum mag_cmd_wire_node {
+ WIRE_NODE_UNDEF = 0,
+ CABLE_10G = 1,
+ FIBER_10G = 2,
+ CABLE_25G = 3,
+ FIBER_25G = 4,
+ CABLE_40G = 5,
+ FIBER_40G = 6,
+ CABLE_50G = 7,
+ FIBER_50G = 8,
+ CABLE_100G = 9,
+ FIBER_100G = 10,
+ CABLE_200G = 11,
+ FIBER_200G = 12,
+ WIRE_NODE_NUM
+};
+
+enum mag_cmd_cnt_type {
+ MAG_RX_RSFEC_DEC_CW_CNT = 0,
+ MAG_RX_RSFEC_CORR_CW_CNT = 1,
+ MAG_RX_RSFEC_UNCORR_CW_CNT = 2,
+ MAG_RX_PCS_BER_CNT = 3,
+ MAG_RX_PCS_ERR_BLOCK_CNT = 4,
+ MAG_RX_PCS_E_BLK_CNT = 5,
+ MAG_RX_PCS_DEC_ERR_BLK_CNT = 6,
+ MAG_RX_PCS_LANE_BIP_ERR_CNT = 7,
+ MAG_RX_RSFEC_ERR_CW_CNT = 8,
+ MAG_CNT_NUM
+};
+
+/* mag_cmd_set_port_cfg config bitmap */
+#define MAG_CMD_SET_SPEED 0x1
+#define MAG_CMD_SET_AUTONEG 0x2
+#define MAG_CMD_SET_FEC 0x4
+#define MAG_CMD_SET_LANES 0x8
+struct mag_cmd_set_port_cfg {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u32 config_bitmap;
+ u8 speed;
+ u8 autoneg;
+ u8 fec;
+ u8 lanes;
+ u8 rsvd1[20];
+};
+
+/* mag supported/advertised link mode bitmap */
+enum mag_cmd_link_mode {
+ LINK_MODE_GE = 0,
+ LINK_MODE_10GE_BASE_R = 1,
+ LINK_MODE_25GE_BASE_R = 2,
+ LINK_MODE_40GE_BASE_R4 = 3,
+ LINK_MODE_50GE_BASE_R = 4,
+ LINK_MODE_50GE_BASE_R2 = 5,
+ LINK_MODE_100GE_BASE_R = 6,
+ LINK_MODE_100GE_BASE_R2 = 7,
+ LINK_MODE_100GE_BASE_R4 = 8,
+ LINK_MODE_200GE_BASE_R2 = 9,
+ LINK_MODE_200GE_BASE_R4 = 10,
+ LINK_MODE_MAX_NUMBERS,
+
+ LINK_MODE_UNKNOWN = 0xFFFF
+};
+
+#define LINK_MODE_GE_BIT 0x1u
+#define LINK_MODE_10GE_BASE_R_BIT 0x2u
+#define LINK_MODE_25GE_BASE_R_BIT 0x4u
+#define LINK_MODE_40GE_BASE_R4_BIT 0x8u
+#define LINK_MODE_50GE_BASE_R_BIT 0x10u
+#define LINK_MODE_50GE_BASE_R2_BIT 0x20u
+#define LINK_MODE_100GE_BASE_R_BIT 0x40u
+#define LINK_MODE_100GE_BASE_R2_BIT 0x80u
+#define LINK_MODE_100GE_BASE_R4_BIT 0x100u
+#define LINK_MODE_200GE_BASE_R2_BIT 0x200u
+#define LINK_MODE_200GE_BASE_R4_BIT 0x400u
+
+#define CABLE_10GE_BASE_R_BIT LINK_MODE_10GE_BASE_R_BIT
+#define CABLE_25GE_BASE_R_BIT (LINK_MODE_25GE_BASE_R_BIT | LINK_MODE_10GE_BASE_R_BIT)
+#define CABLE_40GE_BASE_R4_BIT LINK_MODE_40GE_BASE_R4_BIT
+#define CABLE_50GE_BASE_R_BIT (LINK_MODE_50GE_BASE_R_BIT | LINK_MODE_25GE_BASE_R_BIT | \
+ LINK_MODE_10GE_BASE_R_BIT)
+#define CABLE_50GE_BASE_R2_BIT LINK_MODE_50GE_BASE_R2_BIT
+#define CABLE_100GE_BASE_R2_BIT (LINK_MODE_100GE_BASE_R2_BIT | LINK_MODE_50GE_BASE_R2_BIT)
+#define CABLE_100GE_BASE_R4_BIT (LINK_MODE_100GE_BASE_R4_BIT | LINK_MODE_40GE_BASE_R4_BIT)
+#define CABLE_200GE_BASE_R4_BIT (LINK_MODE_200GE_BASE_R4_BIT | LINK_MODE_100GE_BASE_R4_BIT | \
+ LINK_MODE_40GE_BASE_R4_BIT)
+
+struct mag_cmd_get_port_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u8 wire_type;
+ u8 an_support;
+ u8 an_en;
+ u8 duplex;
+
+ u8 speed;
+ u8 fec;
+ u8 lanes;
+ u8 rsvd1;
+
+ u32 supported_mode;
+ u32 advertised_mode;
+ u32 supported_fec_mode;
+ u16 bond_speed;
+ u8 rsvd2[2];
+};
+
+#define MAG_CMD_OPCODE_GET 0
+#define MAG_CMD_OPCODE_SET 1
+struct mag_cmd_set_port_adapt {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get adapt info 1:set adapt */
+ u8 enable;
+ u8 rsvd0;
+ u32 speed_mode;
+ u32 rsvd1[3];
+};
+
+#define MAG_CMD_LP_MODE_SDS_S_TX2RX 1
+#define MAG_CMD_LP_MODE_SDS_P_RX2TX 2
+#define MAG_CMD_LP_MODE_SDS_P_TX2RX 3
+#define MAG_CMD_LP_MODE_MAC_RX2TX 4
+#define MAG_CMD_LP_MODE_MAC_TX2RX 5
+#define MAG_CMD_LP_MODE_TXDP2RXDP 6
+struct mag_cmd_cfg_loopback_mode {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get loopback mode 1:set loopback mode */
+ u8 lp_mode;
+ u8 lp_en; /* 0:disable 1:enable */
+
+ u32 rsvd0[2];
+};
+
+#define MAG_CMD_PORT_DISABLE 0x0
+#define MAG_CMD_TX_ENABLE 0x1
+#define MAG_CMD_RX_ENABLE 0x2
+/* the physical port is disabled only when all PFs of the port are set to down;
+ * if any PF is enabled, the port is enabled
+ */
+struct mag_cmd_set_port_enable {
+ struct mgmt_msg_head head;
+
+ u16 function_id; /* function_id must not exceed the max supported pf_id (32) */
+ u16 rsvd0;
+
+ u8 state; /* bitmap bit0:tx_en bit1:rx_en */
+ u8 rsvd1[3];
+};
+
+struct mag_cmd_get_port_enable {
+ struct mgmt_msg_head head;
+
+ u8 port;
+ u8 state; /* bitmap bit0:tx_en bit1:rx_en */
+ u8 rsvd0[2];
+};
+
+#define PMA_FOLLOW_DEFAULT 0x0
+#define PMA_FOLLOW_ENABLE 0x1
+#define PMA_FOLLOW_DISABLE 0x2
+#define PMA_FOLLOW_GET 0x4
+/* the physical port disables link follow only when all PFs of the port are set to follow disable */
+struct mag_cmd_set_link_follow {
+ struct mgmt_msg_head head;
+
+ u16 function_id; /* function_id must not exceed the max supported pf_id (32) */
+ u16 rsvd0;
+
+ u8 follow;
+ u8 rsvd1[3];
+};
+
+/* firmware also use this cmd report link event to driver */
+struct mag_cmd_get_link_status {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 status; /* 0:link down 1:link up */
+ u8 rsvd0[2];
+};
+
+/* firmware also use this cmd report bond event to driver */
+struct mag_cmd_get_bond_status {
+ struct mgmt_msg_head head;
+
+ u8 status; /* 0:bond down 1:bond up */
+ u8 rsvd0[3];
+};
+
+struct mag_cmd_set_pma_enable {
+ struct mgmt_msg_head head;
+
+ u16 function_id; /* function_id must not exceed the max supported pf_id (32) */
+ u16 enable;
+};
+
+struct mag_cmd_cfg_an_type {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get an type 1:set an type */
+ u8 rsvd0[2];
+
+ u32 an_type; /* 0:ieee 1:25G/50G eth consortium */
+};
+
+struct mag_cmd_get_link_time {
+ struct mgmt_msg_head head;
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u32 link_up_begin;
+ u32 link_up_end;
+ u32 link_down_begin;
+ u32 link_down_end;
+};
+
+struct mag_cmd_cfg_fec_mode {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get fec mode 1:set fec mode */
+ u8 advertised_fec;
+ u8 supported_fec;
+};
+
+/* speed */
+#define PANGEA_ADAPT_10G_BITMAP 0xd
+#define PANGEA_ADAPT_25G_BITMAP 0x72
+#define PANGEA_ADAPT_40G_BITMAP 0x680
+#define PANGEA_ADAPT_100G_BITMAP 0x1900
+
+/* speed and fec */
+#define PANGEA_10G_NO_BITMAP 0x8
+#define PANGEA_10G_BASE_BITMAP 0x4
+#define PANGEA_25G_NO_BITMAP 0x10
+#define PANGEA_25G_BASE_BITMAP 0x20
+#define PANGEA_25G_RS_BITMAP 0x40
+#define PANGEA_40G_NO_BITMAP 0x400
+#define PANGEA_40G_BASE_BITMAP 0x200
+#define PANGEA_100G_NO_BITMAP 0x800
+#define PANGEA_100G_RS_BITMAP 0x1000
+
+/* adapt or fec */
+#define PANGEA_ADAPT_ADAPT_BITMAP 0x183
+#define PANGEA_ADAPT_NO_BITMAP 0xc18
+#define PANGEA_ADAPT_BASE_BITMAP 0x224
+#define PANGEA_ADAPT_RS_BITMAP 0x1040
+
+/* default cfg */
+#define PANGEA_ADAPT_CFG_10G_CR 0x200d
+#define PANGEA_ADAPT_CFG_10G_SRLR 0xd
+#define PANGEA_ADAPT_CFG_25G_CR 0x207f
+#define PANGEA_ADAPT_CFG_25G_SRLR 0x72
+#define PANGEA_ADAPT_CFG_40G_CR4 0x2680
+#define PANGEA_ADAPT_CFG_40G_SRLR4 0x680
+#define PANGEA_ADAPT_CFG_100G_CR4 0x3f80
+#define PANGEA_ADAPT_CFG_100G_SRLR4 0x1900
+
+union pangea_adapt_bitmap_u {
+ struct {
+ u32 adapt_10g : 1; /* [0] adapt_10g */
+ u32 adapt_25g : 1; /* [1] adapt_25g */
+ u32 base_10g : 1; /* [2] base_10g */
+ u32 no_10g : 1; /* [3] no_10g */
+ u32 no_25g : 1; /* [4] no_25g */
+ u32 base_25g : 1; /* [5] base_25g */
+ u32 rs_25g : 1; /* [6] rs_25g */
+ u32 adapt_40g : 1; /* [7] adapt_40g */
+ u32 adapt_100g : 1; /* [8] adapt_100g */
+ u32 base_40g : 1; /* [9] base_40g */
+ u32 no_40g : 1; /* [10] no_40g */
+ u32 no_100g : 1; /* [11] no_100g */
+ u32 rs_100g : 1; /* [12] rs_100g */
+ u32 auto_neg : 1; /* [13] auto_neg */
+ u32 rsvd0 : 18; /* [31:14] reserved */
+ } bits;
+
+ u32 value;
+};
+
+#define PANGEA_ADAPT_GET 0x0
+#define PANGEA_ADAPT_SET 0x1
+struct mag_cmd_set_pangea_adapt {
+ struct mgmt_msg_head head;
+
+ u16 port_id;
+ u8 opcode; /* 0:get adapt info 1:cfg adapt info */
+ u8 wire_type;
+
+ union pangea_adapt_bitmap_u cfg_bitmap;
+ union pangea_adapt_bitmap_u cur_bitmap;
+ u32 rsvd1[3];
+};
+
+struct mag_cmd_cfg_bios_link_cfg {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get bios link info 1:set bios link cfg */
+ u8 clear;
+ u8 rsvd0;
+
+ u32 wire_type;
+ u8 an_en;
+ u8 speed;
+ u8 fec;
+ u8 rsvd1;
+ u32 speed_mode;
+ u32 rsvd2[3];
+};
+
+struct mag_cmd_restore_link_cfg {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd[7];
+};
+
+struct mag_cmd_activate_bios_link_cfg {
+ struct mgmt_msg_head head;
+
+ u32 rsvd[8];
+};
+
+/* led type */
+enum mag_led_type {
+ MAG_CMD_LED_TYPE_ALARM = 0x0,
+ MAG_CMD_LED_TYPE_LOW_SPEED = 0x1,
+ MAG_CMD_LED_TYPE_HIGH_SPEED = 0x2
+};
+
+/* led mode */
+enum mag_led_mode {
+ MAG_CMD_LED_MODE_DEFAULT = 0x0,
+ MAG_CMD_LED_MODE_FORCE_ON = 0x1,
+ MAG_CMD_LED_MODE_FORCE_OFF = 0x2,
+ MAG_CMD_LED_MODE_FORCE_BLINK_1HZ = 0x3,
+ MAG_CMD_LED_MODE_FORCE_BLINK_2HZ = 0x4,
+ MAG_CMD_LED_MODE_FORCE_BLINK_4HZ = 0x5,
+ MAG_CMD_LED_MODE_1HZ = 0x6,
+ MAG_CMD_LED_MODE_2HZ = 0x7,
+ MAG_CMD_LED_MODE_4HZ = 0x8
+};
+
+/* the led reports an alarm when any PF of the port is in the alarm state */
+struct mag_cmd_set_led_cfg {
+ struct mgmt_msg_head head;
+
+ u16 function_id;
+ u8 type;
+ u8 mode;
+};
+
+#define XSFP_INFO_MAX_SIZE 640
+/* xsfp wire type, refer to cmis protocol definition */
+enum mag_wire_type {
+ MAG_CMD_WIRE_TYPE_UNKNOWN = 0x0,
+ MAG_CMD_WIRE_TYPE_MM = 0x1,
+ MAG_CMD_WIRE_TYPE_SM = 0x2,
+ MAG_CMD_WIRE_TYPE_COPPER = 0x3,
+ MAG_CMD_WIRE_TYPE_ACC = 0x4,
+ MAG_CMD_WIRE_TYPE_BASET = 0x5,
+ MAG_CMD_WIRE_TYPE_AOC = 0x40,
+ MAG_CMD_WIRE_TYPE_ELECTRIC = 0x41,
+ MAG_CMD_WIRE_TYPE_BACKPLANE = 0x42
+};
+
+struct mag_cmd_get_xsfp_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 wire_type;
+ u16 out_len;
+ u32 rsvd;
+ u8 sfp_info[XSFP_INFO_MAX_SIZE];
+};
+
+#define MAG_CMD_XSFP_DISABLE 0x0
+#define MAG_CMD_XSFP_ENABLE 0x1
+/* the sfp is disabled only when all PFs of the port set the sfp down;
+ * if any PF is enabled, the sfp is enabled
+ */
+struct mag_cmd_set_xsfp_enable {
+ struct mgmt_msg_head head;
+
+ u32 port_id;
+ u32 status; /* 0:on 1:off */
+};
+
+#define MAG_CMD_XSFP_PRESENT 0x0
+#define MAG_CMD_XSFP_ABSENT 0x1
+struct mag_cmd_get_xsfp_present {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 abs_status; /* 0:present, 1:absent */
+ u8 rsvd[2];
+};
+
+#define MAG_CMD_XSFP_READ 0x0
+#define MAG_CMD_XSFP_WRITE 0x1
+struct mag_cmd_set_xsfp_rw {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 operation; /* 0: read; 1: write */
+ u8 value;
+ u8 rsvd0;
+ u32 devaddr;
+ u32 offset;
+ u32 rsvd1;
+};
+
+struct mag_cmd_cfg_xsfp_temperature {
+ struct mgmt_msg_head head;
+
+ u8 opcode; /* 0:read 1:write */
+ u8 rsvd0[3];
+ s32 max_temp;
+ s32 min_temp;
+};
+
+struct mag_cmd_get_xsfp_temperature {
+ struct mgmt_msg_head head;
+
+ s16 sfp_temp[8];
+ u8 rsvd[32];
+ s32 max_temp;
+ s32 min_temp;
+};
+
+/* xsfp plug event */
+struct mag_cmd_wire_event {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 status; /* 0:present, 1:absent */
+ u8 rsvd[2];
+};
+
+/* link err type definition */
+#define MAG_CMD_ERR_XSFP_UNKNOWN 0x0
+struct mag_cmd_link_err_event {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 link_err_type;
+ u8 rsvd[2];
+};
+
+#define MAG_PARAM_TYPE_DEFAULT_CFG 0x0
+#define MAG_PARAM_TYPE_BIOS_CFG 0x1
+#define MAG_PARAM_TYPE_TOOL_CFG 0x2
+#define MAG_PARAM_TYPE_FINAL_CFG 0x3
+#define MAG_PARAM_TYPE_WIRE_INFO 0x4
+#define MAG_PARAM_TYPE_ADAPT_INFO 0x5
+#define MAG_PARAM_TYPE_MAX_CNT 0x6
+struct param_head {
+ u8 valid_len;
+ u8 info_type;
+ u8 rsvd[2];
+};
+
+struct mag_port_link_param {
+ struct param_head head;
+
+ u8 an;
+ u8 fec;
+ u8 speed;
+ u8 rsvd0;
+
+ u32 used;
+ u32 an_fec_ability;
+ u32 an_speed_ability;
+ u32 an_pause_ability;
+};
+
+struct mag_port_wire_info {
+ struct param_head head;
+
+ u8 status;
+ u8 rsvd0[3];
+
+ u8 wire_type;
+ u8 default_fec;
+ u8 speed;
+ u8 rsvd1;
+ u32 speed_ability;
+};
+
+struct mag_port_adapt_info {
+ struct param_head head;
+
+ u32 adapt_en;
+ u32 flash_adapt;
+ u32 rsvd0[2];
+
+ u32 wire_node;
+ u32 an_en;
+ u32 speed;
+ u32 fec;
+};
+
+struct mag_port_param_info {
+ u8 parameter_cnt;
+ u8 lane_id;
+ u8 lane_num;
+ u8 rsvd0;
+
+ struct mag_port_link_param default_cfg;
+ struct mag_port_link_param bios_cfg;
+ struct mag_port_link_param tool_cfg;
+ struct mag_port_link_param final_cfg;
+
+ struct mag_port_wire_info wire_info;
+ struct mag_port_adapt_info adapt_info;
+};
+
+#define XSFP_VENDOR_NAME_LEN 16
+struct mag_cmd_event_port_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 event_type;
+ u8 rsvd0[2];
+
+ u8 vendor_name[XSFP_VENDOR_NAME_LEN];
+ u32 port_type; /* fiber / copper */
+ u32 port_sub_type; /* sr / lr */
+ u32 cable_length; /* 1/3/5m */
+ u8 cable_temp; /* temp */
+ u8 max_speed; /* Maximum rate of an optical module */
+ u8 sfp_type; /* sfp/qsfp/dsfp */
+ u8 rsvd1;
+ u32 power[4]; /* Optical Power */
+
+ u8 an_state;
+ u8 fec;
+ u16 speed;
+
+ u8 gpio_insert; /* 0:present 1:absent */
+ u8 alos;
+ u8 rx_los;
+ u8 pma_ctrl;
+
+ u32 pma_fifo_reg;
+ u32 pma_signal_ok_reg;
+ u32 pcs_64_66b_reg;
+ u32 rf_lf;
+ u8 pcs_link;
+ u8 pcs_mac_link;
+ u8 tx_enable;
+ u8 rx_enable;
+ u32 pcs_err_cnt;
+
+ u8 eq_data[38];
+ u8 rsvd2[2];
+
+ u32 his_link_machine_state;
+ u32 cur_link_machine_state;
+ u8 his_machine_state_data[128];
+ u8 cur_machine_state_data[128];
+ u8 his_machine_state_length;
+ u8 cur_machine_state_length;
+
+ struct mag_port_param_info param_info;
+ u8 rsvd3[360];
+};
+
+struct mag_cmd_rsfec_stats {
+ u32 rx_err_lane_phy;
+};
+
+struct mag_cmd_port_stats {
+ u64 mac_tx_fragment_pkt_num;
+ u64 mac_tx_undersize_pkt_num;
+ u64 mac_tx_undermin_pkt_num;
+ u64 mac_tx_64_oct_pkt_num;
+ u64 mac_tx_65_127_oct_pkt_num;
+ u64 mac_tx_128_255_oct_pkt_num;
+ u64 mac_tx_256_511_oct_pkt_num;
+ u64 mac_tx_512_1023_oct_pkt_num;
+ u64 mac_tx_1024_1518_oct_pkt_num;
+ u64 mac_tx_1519_2047_oct_pkt_num;
+ u64 mac_tx_2048_4095_oct_pkt_num;
+ u64 mac_tx_4096_8191_oct_pkt_num;
+ u64 mac_tx_8192_9216_oct_pkt_num;
+ u64 mac_tx_9217_12287_oct_pkt_num;
+ u64 mac_tx_12288_16383_oct_pkt_num;
+ u64 mac_tx_1519_max_bad_pkt_num;
+ u64 mac_tx_1519_max_good_pkt_num;
+ u64 mac_tx_oversize_pkt_num;
+ u64 mac_tx_jabber_pkt_num;
+ u64 mac_tx_bad_pkt_num;
+ u64 mac_tx_bad_oct_num;
+ u64 mac_tx_good_pkt_num;
+ u64 mac_tx_good_oct_num;
+ u64 mac_tx_total_pkt_num;
+ u64 mac_tx_total_oct_num;
+ u64 mac_tx_uni_pkt_num;
+ u64 mac_tx_multi_pkt_num;
+ u64 mac_tx_broad_pkt_num;
+ u64 mac_tx_pause_num;
+ u64 mac_tx_pfc_pkt_num;
+ u64 mac_tx_pfc_pri0_pkt_num;
+ u64 mac_tx_pfc_pri1_pkt_num;
+ u64 mac_tx_pfc_pri2_pkt_num;
+ u64 mac_tx_pfc_pri3_pkt_num;
+ u64 mac_tx_pfc_pri4_pkt_num;
+ u64 mac_tx_pfc_pri5_pkt_num;
+ u64 mac_tx_pfc_pri6_pkt_num;
+ u64 mac_tx_pfc_pri7_pkt_num;
+ u64 mac_tx_control_pkt_num;
+ u64 mac_tx_err_all_pkt_num;
+ u64 mac_tx_from_app_good_pkt_num;
+ u64 mac_tx_from_app_bad_pkt_num;
+
+ u64 mac_rx_fragment_pkt_num;
+ u64 mac_rx_undersize_pkt_num;
+ u64 mac_rx_undermin_pkt_num;
+ u64 mac_rx_64_oct_pkt_num;
+ u64 mac_rx_65_127_oct_pkt_num;
+ u64 mac_rx_128_255_oct_pkt_num;
+ u64 mac_rx_256_511_oct_pkt_num;
+ u64 mac_rx_512_1023_oct_pkt_num;
+ u64 mac_rx_1024_1518_oct_pkt_num;
+ u64 mac_rx_1519_2047_oct_pkt_num;
+ u64 mac_rx_2048_4095_oct_pkt_num;
+ u64 mac_rx_4096_8191_oct_pkt_num;
+ u64 mac_rx_8192_9216_oct_pkt_num;
+ u64 mac_rx_9217_12287_oct_pkt_num;
+ u64 mac_rx_12288_16383_oct_pkt_num;
+ u64 mac_rx_1519_max_bad_pkt_num;
+ u64 mac_rx_1519_max_good_pkt_num;
+ u64 mac_rx_oversize_pkt_num;
+ u64 mac_rx_jabber_pkt_num;
+ u64 mac_rx_bad_pkt_num;
+ u64 mac_rx_bad_oct_num;
+ u64 mac_rx_good_pkt_num;
+ u64 mac_rx_good_oct_num;
+ u64 mac_rx_total_pkt_num;
+ u64 mac_rx_total_oct_num;
+ u64 mac_rx_uni_pkt_num;
+ u64 mac_rx_multi_pkt_num;
+ u64 mac_rx_broad_pkt_num;
+ u64 mac_rx_pause_num;
+ u64 mac_rx_pfc_pkt_num;
+ u64 mac_rx_pfc_pri0_pkt_num;
+ u64 mac_rx_pfc_pri1_pkt_num;
+ u64 mac_rx_pfc_pri2_pkt_num;
+ u64 mac_rx_pfc_pri3_pkt_num;
+ u64 mac_rx_pfc_pri4_pkt_num;
+ u64 mac_rx_pfc_pri5_pkt_num;
+ u64 mac_rx_pfc_pri6_pkt_num;
+ u64 mac_rx_pfc_pri7_pkt_num;
+ u64 mac_rx_control_pkt_num;
+ u64 mac_rx_sym_err_pkt_num;
+ u64 mac_rx_fcs_err_pkt_num;
+ u64 mac_rx_send_app_good_pkt_num;
+ u64 mac_rx_send_app_bad_pkt_num;
+ u64 mac_rx_unfilter_pkt_num;
+};
+
+struct mag_port_stats {
+ u64 tx_frag_pkts_port;
+ u64 tx_under_frame_pkts_port;
+ u64 tx_under_min_pkts_port;
+ u64 tx_64_oct_pkts_port;
+ u64 tx_127_oct_pkts_port;
+ u64 tx_255_oct_pkts_port;
+ u64 tx_511_oct_pkts_port;
+ u64 tx_1023_oct_pkts_port;
+ u64 tx_1518_oct_pkts_port;
+ u64 tx_2047_oct_pkts_port;
+ u64 tx_4095_oct_pkts_port;
+ u64 tx_8191_oct_pkts_port;
+ u64 tx_9216_oct_pkts_port;
+ u64 tx_12287_oct_pkts_port;
+ u64 tx_16383_oct_pkts_port;
+ u64 tx_1519_to_max_bad_pkts_port;
+ u64 tx_1519_to_max_good_pkts_port;
+ u64 tx_oversize_pkts_port;
+ u64 tx_jabber_pkts_port;
+ u64 tx_bad_pkts_port;
+ u64 tx_bad_octs_port;
+ u64 tx_good_pkts_port;
+ u64 tx_good_octs_port;
+ u64 tx_total_pkts_port;
+ u64 tx_total_octs_port;
+ u64 tx_unicast_pkts_port;
+ u64 tx_multicast_pkts_port;
+ u64 tx_broadcast_pkts_port;
+ u64 tx_pause_pkts_port;
+ u64 tx_pfc_pkts_port;
+ u64 tx_pri_0_pkts_port;
+ u64 tx_pri_1_pkts_port;
+ u64 tx_pri_2_pkts_port;
+ u64 tx_pri_3_pkts_port;
+ u64 tx_pri_4_pkts_port;
+ u64 tx_pri_5_pkts_port;
+ u64 tx_pri_6_pkts_port;
+ u64 tx_pri_7_pkts_port;
+ u64 tx_mac_control_pkts_port;
+ u64 tx_y1731_pkts_port;
+ u64 tx_1588_pkts_port;
+ u64 tx_error_pkts_port;
+ u64 tx_app_good_pkts_port;
+ u64 tx_app_bad_pkts_port;
+ u64 rx_frag_pkts_port;
+ u64 rx_under_frame_pkts_port;
+ u64 rx_under_min_pkts_port;
+ u64 rx_64_oct_pkts_port;
+ u64 rx_127_oct_pkts_port;
+ u64 rx_255_oct_pkts_port;
+ u64 rx_511_oct_pkts_port;
+ u64 rx_1023_oct_pkts_port;
+ u64 rx_1518_oct_pkts_port;
+ u64 rx_2047_oct_pkts_port;
+ u64 rx_4095_oct_pkts_port;
+ u64 rx_8191_oct_pkts_port;
+ u64 rx_9216_oct_pkts_port;
+ u64 rx_12287_oct_pkts_port;
+ u64 rx_16383_oct_pkts_port;
+ u64 rx_1519_to_max_bad_pkts_port;
+ u64 rx_1519_to_max_good_pkts_port;
+ u64 rx_oversize_pkts_port;
+ u64 rx_jabber_pkts_port;
+ u64 rx_bad_pkts_port;
+ u64 rx_bad_octs_port;
+ u64 rx_good_pkts_port;
+ u64 rx_good_octs_port;
+ u64 rx_total_pkts_port;
+ u64 rx_total_octs_port;
+ u64 rx_unicast_pkts_port;
+ u64 rx_multicast_pkts_port;
+ u64 rx_broadcast_pkts_port;
+ u64 rx_pause_pkts_port;
+ u64 rx_pfc_pkts_port;
+ u64 rx_pri_0_pkts_port;
+ u64 rx_pri_1_pkts_port;
+ u64 rx_pri_2_pkts_port;
+ u64 rx_pri_3_pkts_port;
+ u64 rx_pri_4_pkts_port;
+ u64 rx_pri_5_pkts_port;
+ u64 rx_pri_6_pkts_port;
+ u64 rx_pri_7_pkts_port;
+ u64 rx_mac_control_pkts_port;
+ u64 rx_y1731_pkts_port;
+ u64 rx_sym_err_pkts_port;
+ u64 rx_fcs_err_pkts_port;
+ u64 rx_app_good_pkts_port;
+ u64 rx_app_bad_pkts_port;
+ u64 rx_unfilter_pkts_port;
+};
+
+struct mag_cmd_port_stats_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+};
+
+struct mag_cmd_get_port_stat {
+ struct mgmt_msg_head head;
+
+ struct mag_cmd_port_stats counter;
+ u64 rsvd1[15];
+};
+
+struct mag_cmd_get_pcs_err_cnt {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u32 pcs_err_cnt;
+};
+
+struct mag_cmd_get_mag_cnt {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 len;
+ u8 rsvd0[2];
+
+ u32 mag_csr[128];
+};
+
+struct mag_cmd_dump_antrain_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 len;
+ u8 rsvd0[2];
+
+ u32 antrain_csr[256];
+};
+
+#define MAG_SFP_PORT_NUM 24
+struct mag_cmd_sfp_temp_in_info {
+ struct mgmt_msg_head head; /* 8B */
+ u8 opt_type; /* 0:read operation 1:cfg operation */
+ u8 rsv[3];
+ s32 max_temp; /* Chip optical module threshold */
+ s32 min_temp; /* Chip optical module threshold */
+};
+
+struct mag_cmd_sfp_temp_out_info {
+ struct mgmt_msg_head head; /* 8B */
+ s16 sfp_temp_data[MAG_SFP_PORT_NUM]; /* Temperature read */
+ s32 max_temp; /* Chip optical module threshold */
+ s32 min_temp; /* Chip optical module threshold */
+};
+
+#define XSFP_CMIS_PARSE_PAGE_NUM 6
+#define XSFP_CMIS_INFO_MAX_SIZE 1536
+#define QSFP_CMIS_PAGE_SIZE 128
+#define QSFP_CMIS_MAX_CHANNEL_NUM 0x8
+
+/* Lower: Control and Essentials, Upper: Administrative Information */
+#define QSFP_CMIS_PAGE_00H 0x00
+/* Advertising */
+#define QSFP_CMIS_PAGE_01H 0x01
+/* Module and lane Thresholds */
+#define QSFP_CMIS_PAGE_02H 0x02
+/* User EEPROM */
+#define QSFP_CMIS_PAGE_03H 0x03
+/* Laser Capabilities Advertising (Page 04h, Optional) */
+#define QSFP_CMIS_PAGE_04H 0x04
+#define QSFP_CMIS_PAGE_05H 0x05
+/* Lane and Data Path Control */
+#define QSFP_CMIS_PAGE_10H 0x10
+/* Lane Status */
+#define QSFP_CMIS_PAGE_11H 0x11
+#define QSFP_CMIS_PAGE_12H 0x12
+
+#define MGMT_TLV_U8_SIZE 1
+#define MGMT_TLV_U16_SIZE 2
+#define MGMT_TLV_U32_SIZE 4
+
+#define MGMT_TLV_GET_U8(addr) (*((u8 *)(void *)(addr)))
+#define MGMT_TLV_SET_U8(addr, value) \
+ ((*((u8 *)(void *)(addr))) = ((u8)(value)))
+
+#define MGMT_TLV_GET_U16(addr) (*((u16 *)(void *)(addr)))
+#define MGMT_TLV_SET_U16(addr, value) \
+ ((*((u16 *)(void *)(addr))) = ((u16)(value)))
+
+#define MGMT_TLV_GET_U32(addr) (*((u32 *)(void *)(addr)))
+#define MGMT_TLV_SET_U32(addr, value) \
+ ((*((u32 *)(void *)(addr))) = ((u32)(value)))
+
+#define MGMT_TLV_TYPE_END 0xFFFF
+
+enum mag_xsfp_type {
+ MAG_XSFP_TYPE_PAGE = 0x01,
+ MAG_XSFP_TYPE_WIRE_TYPE = 0x02,
+ MAG_XSFP_TYPE_END = MGMT_TLV_TYPE_END
+};
+
+struct qsfp_cmis_lower_page_00_s {
+ u8 resv0[14];
+ u8 temperature_msb;
+ u8 temperature_lsb;
+ u8 volt_supply[2];
+ u8 resv1[67];
+ u8 media_type;
+ u8 electrical_interface_id;
+ u8 media_interface_id;
+ u8 lane_count;
+ u8 resv2[39];
+};
+
+struct qsfp_cmis_upper_page_00_s {
+ u8 identifier;
+ u8 vendor_name[16];
+ u8 vendor_oui[3];
+ u8 vendor_pn[16];
+ u8 vendor_rev[2];
+ u8 vendor_sn[16];
+ u8 date_code[8];
+ u8 clei_code[10];
+ u8 power_character[2];
+ u8 cable_len;
+ u8 connector;
+ u8 copper_cable_attenuation[6];
+ u8 near_end_implementation;
+ u8 far_end_config;
+ u8 media_technology;
+ u8 resv0[43];
+};
+
+struct qsfp_cmis_upper_page_01_s {
+ u8 firmware_rev[2];
+ u8 hardware_rev[2];
+ u8 smf_len_km;
+ u8 om5_len;
+ u8 om4_len;
+ u8 om3_len;
+ u8 om2_len;
+ u8 resv0;
+ u8 wavelength[2];
+ u8 wavelength_tolerance[2];
+ u8 pages_implement;
+ u8 resv1[16];
+ u8 monitor_implement[2];
+ u8 resv2[95];
+};
+
+struct qsfp_cmis_upper_page_02_s {
+ u8 temperature_high_alarm[2];
+ u8 temperature_low_alarm[2];
+ u8 temperature_high_warn[2];
+ u8 temperature_low_warn[2];
+ u8 volt_high_alarm[2];
+ u8 volt_low_alarm[2];
+ u8 volt_high_warn[2];
+ u8 volt_low_warn[2];
+ u8 resv0[32];
+ u8 tx_power_high_alarm[2];
+ u8 tx_power_low_alarm[2];
+ u8 tx_power_high_warn[2];
+ u8 tx_power_low_warn[2];
+ u8 tx_bias_high_alarm[2];
+ u8 tx_bias_low_alarm[2];
+ u8 tx_bias_high_warn[2];
+ u8 tx_bias_low_warn[2];
+ u8 rx_power_high_alarm[2];
+ u8 rx_power_low_alarm[2];
+ u8 rx_power_high_warn[2];
+ u8 rx_power_low_warn[2];
+ u8 resv1[56];
+};
+
+struct qsfp_cmis_upper_page_03_s {
+ u8 resv0[QSFP_CMIS_PAGE_SIZE]; /* Reg 128-255: Upper Memory: Page 03H */
+};
+
+struct qsfp_cmis_upper_page_10_s {
+ u8 resv0[2]; /* Reg 128-129: Upper Memory: Page 10H */
+ u8 tx_disable; /* Reg 130: Tx disable, 0b=enabled, 1b=disabled */
+ u8 resv1[125]; /* Reg 131-255 */
+};
+
+struct qsfp_cmis_upper_page_11_s {
+ u8 resv0[7];
+ u8 tx_fault;
+ u8 tx_los;
+ u8 resv1[10];
+ u8 rx_los;
+ u8 resv2[6];
+ u8 tx_power[16];
+ u8 tx_bias[16];
+ u8 rx_power[16];
+ u8 resv3[54];
+};
+
+struct qsfp_cmis_info_s {
+ struct qsfp_cmis_lower_page_00_s lower_page_00;
+ struct qsfp_cmis_upper_page_00_s upper_page_00;
+ struct qsfp_cmis_upper_page_01_s upper_page_01;
+ struct qsfp_cmis_upper_page_02_s upper_page_02;
+ struct qsfp_cmis_upper_page_10_s upper_page_10;
+ struct qsfp_cmis_upper_page_11_s upper_page_11;
+};
+
+struct qsfp_cmis_comm_power_s {
+ u32 chl_power[QSFP_CMIS_MAX_CHANNEL_NUM];
+};
+
+struct qsfp_cmis_wire_info_s {
+ struct qsfp_cmis_comm_power_s rx_power;
+ u8 rx_los;
+ u8 resv0[3];
+};
+
+struct mgmt_tlv_info {
+ u16 type;
+ u16 length;
+ u8 value[];
+};
+
+struct mag_cmd_set_xsfp_tlv_req {
+ struct mgmt_msg_head head;
+
+ u8 tlv_buf[];
+};
+
+struct mag_cmd_set_xsfp_tlv_rsp {
+ struct mgmt_msg_head head;
+};
+
+struct mag_cmd_get_xsfp_tlv_req {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd;
+ u16 rsp_buf_len;
+};
+
+struct mag_cmd_get_xsfp_tlv_rsp {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd[3];
+
+ u8 tlv_buf[];
+};
+
+
+struct parse_tlv_info {
+ u8 tlv_page_info[XSFP_CMIS_INFO_MAX_SIZE + 1];
+ u32 tlv_page_info_len;
+ u32 tlv_page_num[XSFP_CMIS_PARSE_PAGE_NUM];
+ u32 wire_type;
+ u8 id;
+};
+
+struct drv_mag_cmd_get_xsfp_tlv_rsp {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd[3];
+
+ u8 tlv_buf[XSFP_CMIS_INFO_MAX_SIZE];
+};
+
+#endif
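
For reference only (not part of the patch above): the xsfp TLV buffer returned in struct mag_cmd_get_xsfp_tlv_rsp, together with struct mgmt_tlv_info and the MGMT_TLV_* accessors, is a plain type/length/value stream terminated by MGMT_TLV_TYPE_END. The user-space sketch below shows how such a buffer could be walked; the assumptions that 'length' counts only the value bytes and that entries are packed back to back with no padding are mine, not stated by the header.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint8_t  u8;
typedef uint16_t u16;
typedef uint32_t u32;

#define MGMT_TLV_TYPE_END  0xFFFF
#define MAG_XSFP_TYPE_PAGE 0x01

struct mgmt_tlv_info {
	u16 type;
	u16 length;
	u8 value[];
};

/* Walk a TLV buffer until the END marker or the buffer is exhausted.
 * Assumption (not stated in the header): 'length' counts only the value
 * bytes and entries are packed back to back with no padding.
 */
static void walk_tlv_buf(const u8 *buf, u32 buf_len)
{
	u32 off = 0;

	while (off + sizeof(struct mgmt_tlv_info) <= buf_len) {
		const struct mgmt_tlv_info *tlv =
			(const struct mgmt_tlv_info *)(buf + off);

		if (tlv->type == MGMT_TLV_TYPE_END)
			break;

		printf("tlv type %u, value length %u\n",
		       (unsigned int)tlv->type, (unsigned int)tlv->length);
		off += sizeof(*tlv) + tlv->length;
	}
}

int main(void)
{
	/* Hypothetical response buffer: one 4-byte page TLV, then the END
	 * marker. The union only serves to guarantee 2-byte alignment for
	 * the casts above.
	 */
	union {
		u8 bytes[16];
		u16 align;
	} buf;
	struct mgmt_tlv_info *tlv = (struct mgmt_tlv_info *)buf.bytes;

	memset(&buf, 0, sizeof(buf));
	tlv->type = MAG_XSFP_TYPE_PAGE;
	tlv->length = 4;
	tlv = (struct mgmt_tlv_info *)(buf.bytes + sizeof(*tlv) + tlv->length);
	tlv->type = MGMT_TLV_TYPE_END;

	walk_tlv_buf(buf.bytes, sizeof(buf.bytes));
	return 0;
}
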
diff --git a/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd.h b/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd.h
new file mode 100644
index 0000000..8e0fa89
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd.h
@@ -0,0 +1,174 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C), 2001-2011, Huawei Tech. Co., Ltd.
+ * File Name : nic_mpu_cmd.h
+ * Version : Initial Draft
+ * Created : 2019/4/25
+ * Last Modified :
+ * Description : NIC Commands between Driver and MPU
+ * Function List :
+ */
+
+#ifndef NIC_MPU_CMD_H
+#define NIC_MPU_CMD_H
+
+/* Commands between the NIC driver and the MPU
+ */
+enum hinic3_nic_cmd {
+ HINIC3_NIC_CMD_VF_REGISTER = 0, /* only for PFD and VFD */
+
+ /* FUNC CFG */
+ HINIC3_NIC_CMD_SET_FUNC_TBL = 5,
+ HINIC3_NIC_CMD_SET_VPORT_ENABLE,
+ HINIC3_NIC_CMD_SET_RX_MODE,
+ HINIC3_NIC_CMD_SQ_CI_ATTR_SET,
+ HINIC3_NIC_CMD_GET_VPORT_STAT,
+ HINIC3_NIC_CMD_CLEAN_VPORT_STAT,
+ HINIC3_NIC_CMD_CLEAR_QP_RESOURCE,
+ HINIC3_NIC_CMD_CFG_FLEX_QUEUE,
+ /* LRO CFG */
+ HINIC3_NIC_CMD_CFG_RX_LRO,
+ HINIC3_NIC_CMD_CFG_LRO_TIMER,
+ HINIC3_NIC_CMD_FEATURE_NEGO,
+ HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE,
+
+ HINIC3_NIC_CMD_CACHE_OUT_QP_RES,
+ HINIC3_NIC_CMD_SET_FUNC_ER_FWD_ID,
+
+ /* MAC & VLAN CFG & VXLAN CFG */
+ HINIC3_NIC_CMD_GET_MAC = 20,
+ HINIC3_NIC_CMD_SET_MAC,
+ HINIC3_NIC_CMD_DEL_MAC,
+ HINIC3_NIC_CMD_UPDATE_MAC,
+ HINIC3_NIC_CMD_GET_ALL_DEFAULT_MAC,
+
+ HINIC3_NIC_CMD_CFG_FUNC_VLAN,
+ HINIC3_NIC_CMD_SET_VLAN_FILTER_EN,
+ HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD,
+ HINIC3_NIC_CMD_SMAC_CHECK_STATE,
+ HINIC3_NIC_CMD_OUTBAND_SET_FUNC_VLAN,
+
+ HINIC3_NIC_CMD_CFG_VXLAN_PORT,
+ HINIC3_NIC_CMD_RX_RATE_CFG,
+ HINIC3_NIC_CMD_WR_ORDERING_CFG,
+
+ /* SR-IOV */
+ HINIC3_NIC_CMD_CFG_VF_VLAN = 40,
+ HINIC3_NIC_CMD_SET_SPOOPCHK_STATE,
+ /* RATE LIMIT */
+ HINIC3_NIC_CMD_SET_MAX_MIN_RATE,
+
+ /* RSS CFG */
+ HINIC3_NIC_CMD_RSS_CFG = 60,
+ HINIC3_NIC_CMD_RSS_TEMP_MGR, /* TODO: delete after implementing the nego cmd */
+ HINIC3_NIC_CMD_GET_RSS_CTX_TBL, /* TODO: delete: move to ucode cmd */
+ HINIC3_NIC_CMD_CFG_RSS_HASH_KEY,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_ENGINE,
+ HINIC3_NIC_CMD_SET_RSS_CTX_TBL_INTO_FUNC,
+ /* for IP checksum error packets, enable RSS quadruple hash */
+ HINIC3_NIC_CMD_IPCS_ERR_RSS_ENABLE_OP = 66,
+ HINIC3_NIC_CMD_GTP_INNER_PARSE_STATUS,
+
+ /* PPA/FDIR */
+ HINIC3_NIC_CMD_ADD_TC_FLOW = 80,
+ HINIC3_NIC_CMD_DEL_TC_FLOW,
+ HINIC3_NIC_CMD_GET_TC_FLOW,
+ HINIC3_NIC_CMD_FLUSH_TCAM,
+ HINIC3_NIC_CMD_CFG_TCAM_BLOCK,
+ HINIC3_NIC_CMD_ENABLE_TCAM,
+ HINIC3_NIC_CMD_GET_TCAM_BLOCK,
+ HINIC3_NIC_CMD_CFG_PPA_TABLE_ID,
+ HINIC3_NIC_CMD_SET_PPA_EN = 88,
+ HINIC3_NIC_CMD_CFG_PPA_MODE,
+ HINIC3_NIC_CMD_CFG_PPA_FLUSH,
+ HINIC3_NIC_CMD_SET_FDIR_STATUS,
+ HINIC3_NIC_CMD_GET_PPA_COUNTER,
+ HINIC3_NIC_CMD_SET_FUNC_FLOW_BIFUR_ENABLE,
+ HINIC3_NIC_CMD_SET_BOND_MASK,
+ HINIC3_NIC_CMD_GET_BLOCK_TC_FLOWS,
+ HINIC3_NIC_CMD_GET_BOND_MASK,
+
+ /* PORT CFG */
+ HINIC3_NIC_CMD_SET_PORT_ENABLE = 100,
+ HINIC3_NIC_CMD_CFG_PAUSE_INFO,
+
+ HINIC3_NIC_CMD_SET_PORT_CAR,
+ HINIC3_NIC_CMD_SET_ER_DROP_PKT,
+
+ HINIC3_NIC_CMD_VF_COS,
+ HINIC3_NIC_CMD_SETUP_COS_MAPPING,
+ HINIC3_NIC_CMD_SET_ETS,
+ HINIC3_NIC_CMD_SET_PFC,
+ HINIC3_NIC_CMD_QOS_ETS,
+ HINIC3_NIC_CMD_QOS_PFC,
+ HINIC3_NIC_CMD_QOS_DCB_STATE,
+ HINIC3_NIC_CMD_QOS_PORT_CFG,
+ HINIC3_NIC_CMD_QOS_MAP_CFG,
+ HINIC3_NIC_CMD_FORCE_PKT_DROP,
+ HINIC3_NIC_CMD_CFG_TX_PROMISC_SKIP = 114,
+ HINIC3_NIC_CMD_SET_PORT_FLOW_BIFUR_ENABLE = 117,
+ HINIC3_NIC_CMD_TX_PAUSE_EXCP_NOTICE = 118,
+ HINIC3_NIC_CMD_INQUIRT_PAUSE_CFG = 119,
+
+ /* MISC */
+ HINIC3_NIC_CMD_BIOS_CFG = 120,
+ HINIC3_NIC_CMD_SET_FIRMWARE_CUSTOM_PACKETS_MSG,
+
+ /* BOND */
+ HINIC3_NIC_CMD_BOND_DEV_CREATE = 134,
+ HINIC3_NIC_CMD_BOND_DEV_DELETE,
+ HINIC3_NIC_CMD_BOND_DEV_OPEN_CLOSE,
+ HINIC3_NIC_CMD_BOND_INFO_GET,
+ HINIC3_NIC_CMD_BOND_ACTIVE_INFO_GET,
+ HINIC3_NIC_CMD_BOND_ACTIVE_NOTICE,
+
+ /* DFX */
+ HINIC3_NIC_CMD_GET_SM_TABLE = 140,
+ HINIC3_NIC_CMD_RD_LINE_TBL,
+
+ HINIC3_NIC_CMD_SET_UCAPTURE_OPT = 160, /* TODO: move to roce */
+ HINIC3_NIC_CMD_SET_VHD_CFG,
+
+ /* OUT OF BAND */
+ HINIC3_NIC_CMD_GET_OUTBAND_CFG = 170,
+ HINIC3_NIC_CMD_OUTBAND_CFG_NOTICE,
+
+ /* TODO: move to HILINK */
+ HINIC3_NIC_CMD_GET_PORT_STAT = 200,
+ HINIC3_NIC_CMD_CLEAN_PORT_STAT,
+ HINIC3_NIC_CMD_CFG_LOOPBACK_MODE,
+ HINIC3_NIC_CMD_GET_SFP_QSFP_INFO,
+ HINIC3_NIC_CMD_SET_SFP_STATUS,
+ HINIC3_NIC_CMD_GET_LIGHT_MODULE_ABS,
+ HINIC3_NIC_CMD_GET_LINK_INFO,
+ HINIC3_NIC_CMD_CFG_AN_TYPE,
+ HINIC3_NIC_CMD_GET_PORT_INFO,
+ HINIC3_NIC_CMD_SET_LINK_SETTINGS,
+ HINIC3_NIC_CMD_ACTIVATE_BIOS_LINK_CFG,
+ HINIC3_NIC_CMD_RESTORE_LINK_CFG,
+ HINIC3_NIC_CMD_SET_LINK_FOLLOW,
+ HINIC3_NIC_CMD_GET_LINK_STATE,
+ HINIC3_NIC_CMD_LINK_STATUS_REPORT,
+ HINIC3_NIC_CMD_CABLE_PLUG_EVENT,
+ HINIC3_NIC_CMD_LINK_ERR_EVENT,
+ HINIC3_NIC_CMD_SET_LED_STATUS,
+
+ /* mig */
+ HINIC3_NIC_CMD_MIG_SET_CEQ_CTRL = 230,
+ HINIC3_NIC_CMD_MIG_CFG_MSIX_INFO,
+ HINIC3_NIC_CMD_MIG_CFG_FUNC_VAT_TBL,
+ HINIC3_NIC_CMD_MIG_GET_VF_INFO,
+ HINIC3_NIC_CMD_MIG_CHK_MBX_EMPTY,
+ HINIC3_NIC_CMD_MIG_SET_VPORT_ENABLE,
+ HINIC3_NIC_CMD_MIG_CFG_SQ_CI,
+ HINIC3_NIC_CMD_MIG_CFG_RSS_TBL,
+ HINIC3_NIC_CMD_MIG_CFG_MAC_TBL,
+ HINIC3_NIC_CMD_MIG_TMP_SET_CMDQ_CTX,
+
+ HINIC3_OSHR_CMD_ACTIVE_FUNCTION = 240,
+ HINIC3_NIC_CMD_GET_RQ_INFO = 241,
+
+ HINIC3_NIC_CMD_MAX = 256,
+};
+
+#endif /* NIC_MPU_CMD_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd_defs.h
new file mode 100644
index 0000000..ee6bf20
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd_defs.h
@@ -0,0 +1,1420 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef NIC_MPU_CMD_DEFS_H
+#define NIC_MPU_CMD_DEFS_H
+
+#include "nic_cfg_comm.h"
+#include "mpu_cmd_base_defs.h"
+
+#ifndef ETH_ALEN
+#define ETH_ALEN 6
+#endif
+
+#define HINIC3_CMD_OP_SET 1
+#define HINIC3_CMD_OP_GET 0
+
+#define HINIC3_CMD_OP_ADD 1
+#define HINIC3_CMD_OP_DEL 0
+
+#define NIC_TCAM_BLOCK_LARGE_NUM 256
+#define NIC_TCAM_BLOCK_LARGE_SIZE 16
+
+#define TRAFFIC_BIFUR_MODEL_TYPE 2
+
+#define NIC_TCAM_FLOW_BIFUR_FLAG (1 << 0)
+
+#ifndef BIT
+#define BIT(n) (1UL << (n))
+#endif
+
+enum nic_feature_cap {
+ NIC_F_CSUM = BIT(0),
+ NIC_F_SCTP_CRC = BIT(1),
+ NIC_F_TSO = BIT(2),
+ NIC_F_LRO = BIT(3),
+ NIC_F_UFO = BIT(4),
+ NIC_F_RSS = BIT(5),
+ NIC_F_RX_VLAN_FILTER = BIT(6),
+ NIC_F_RX_VLAN_STRIP = BIT(7),
+ NIC_F_TX_VLAN_INSERT = BIT(8),
+ NIC_F_VXLAN_OFFLOAD = BIT(9),
+ NIC_F_IPSEC_OFFLOAD = BIT(10),
+ NIC_F_FDIR = BIT(11),
+ NIC_F_PROMISC = BIT(12),
+ NIC_F_ALLMULTI = BIT(13),
+ NIC_F_XSFP_REPORT = BIT(14),
+ NIC_F_VF_MAC = BIT(15),
+ NIC_F_RATE_LIMIT = BIT(16),
+ NIC_F_RXQ_RECOVERY = BIT(17),
+};
+
+#define NIC_F_ALL_MASK 0x3FFFF /* enable all features */
+
+struct hinic3_mgmt_msg_head {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+};
+
+#define NIC_MAX_FEATURE_QWORD 4
+struct hinic3_cmd_feature_nego {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: set, 0: get */
+ u8 rsvd;
+ u64 s_feature[NIC_MAX_FEATURE_QWORD];
+};
+
+struct hinic3_port_mac_set {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 mac[ETH_ALEN];
+};
+
+struct hinic3_port_mac_update {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 old_mac[ETH_ALEN];
+ u16 rsvd2;
+ u8 new_mac[ETH_ALEN];
+};
+
+struct hinic3_vport_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+struct hinic3_port_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+#define HINIC3_SET_PORT_CAR_PROFILE 0
+#define HINIC3_SET_PORT_CAR_STATE 1
+#define HINIC3_GET_PORT_CAR_LIMIT_SPEED 2
+
+struct hinic3_port_car_info {
+ u32 cir; /* unit: kbps, range:[1,400*1000*1000], i.e. 1Kbps~400Gbps(400M*kbps) */
+ u32 xir; /* unit: kbps, range:[1,400*1000*1000], i.e. 1Kbps~400Gbps(400M*kbps) */
+ u32 cbs; /* unit: Byte, range:[1,320*1000*1000], i.e. 1byte~2560Mbit */
+ u32 xbs; /* unit: Byte, range:[1,320*1000*1000], i.e. 1byte~2560Mbit */
+};
+
+struct hinic3_cmd_set_port_car {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode; /* 0--set car profile, 1--set car state */
+ u8 state; /* 0--disable, 1--enable */
+ u8 level;
+
+ struct hinic3_port_car_info car;
+};
+
+struct hinic3_cmd_clear_qp_resource {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_cmd_cache_out_qp_resource {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_port_stats_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_vport_stats {
+ u64 tx_unicast_pkts_vport;
+ u64 tx_unicast_bytes_vport;
+ u64 tx_multicast_pkts_vport;
+ u64 tx_multicast_bytes_vport;
+ u64 tx_broadcast_pkts_vport;
+ u64 tx_broadcast_bytes_vport;
+
+ u64 rx_unicast_pkts_vport;
+ u64 rx_unicast_bytes_vport;
+ u64 rx_multicast_pkts_vport;
+ u64 rx_multicast_bytes_vport;
+ u64 rx_broadcast_pkts_vport;
+ u64 rx_broadcast_bytes_vport;
+
+ u64 tx_discard_vport;
+ u64 rx_discard_vport;
+ u64 tx_err_vport;
+ u64 rx_err_vport;
+};
+
+struct hinic3_phy_fpga_port_stats {
+ u64 mac_rx_total_octs_port;
+ u64 mac_tx_total_octs_port;
+ u64 mac_rx_under_frame_pkts_port;
+ u64 mac_rx_frag_pkts_port;
+ u64 mac_rx_64_oct_pkts_port;
+ u64 mac_rx_127_oct_pkts_port;
+ u64 mac_rx_255_oct_pkts_port;
+ u64 mac_rx_511_oct_pkts_port;
+ u64 mac_rx_1023_oct_pkts_port;
+ u64 mac_rx_max_oct_pkts_port;
+ u64 mac_rx_over_oct_pkts_port;
+ u64 mac_tx_64_oct_pkts_port;
+ u64 mac_tx_127_oct_pkts_port;
+ u64 mac_tx_255_oct_pkts_port;
+ u64 mac_tx_511_oct_pkts_port;
+ u64 mac_tx_1023_oct_pkts_port;
+ u64 mac_tx_max_oct_pkts_port;
+ u64 mac_tx_over_oct_pkts_port;
+ u64 mac_rx_good_pkts_port;
+ u64 mac_rx_crc_error_pkts_port;
+ u64 mac_rx_broadcast_ok_port;
+ u64 mac_rx_multicast_ok_port;
+ u64 mac_rx_mac_frame_ok_port;
+ u64 mac_rx_length_err_pkts_port;
+ u64 mac_rx_vlan_pkts_port;
+ u64 mac_rx_pause_pkts_port;
+ u64 mac_rx_unknown_mac_frame_port;
+ u64 mac_tx_good_pkts_port;
+ u64 mac_tx_broadcast_ok_port;
+ u64 mac_tx_multicast_ok_port;
+ u64 mac_tx_underrun_pkts_port;
+ u64 mac_tx_mac_frame_ok_port;
+ u64 mac_tx_vlan_pkts_port;
+ u64 mac_tx_pause_pkts_port;
+};
+
+struct hinic3_port_stats {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ struct hinic3_phy_fpga_port_stats stats;
+};
+
+struct hinic3_cmd_vport_stats {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u32 stats_size;
+ u32 rsvd1;
+ struct hinic3_vport_stats stats;
+ u64 rsvd2[6];
+};
+
+struct hinic3_cmd_qpn {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 base_qpn;
+};
+
+enum hinic3_func_tbl_cfg_bitmap {
+ FUNC_CFG_INIT,
+ FUNC_CFG_RX_BUF_SIZE,
+ FUNC_CFG_MTU,
+};
+
+struct hinic3_func_tbl_cfg {
+ u16 rx_wqe_buf_size;
+ u16 mtu;
+ u32 rsvd[9];
+};
+
+struct hinic3_cmd_set_func_tbl {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+
+ u32 cfg_bitmap;
+ struct hinic3_func_tbl_cfg tbl_cfg;
+};
+
+struct hinic3_cmd_cons_idx_attr {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_idx;
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u32 rsvd;
+ u64 ci_addr;
+};
+
+union sm_tbl_args {
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } mac_table_arg;
+ struct {
+ u32 er_id;
+ u32 vlan_id;
+ } vlan_elb_table_arg;
+ struct {
+ u32 func_id;
+ } vlan_filter_arg;
+ struct {
+ u32 mc_id;
+ } mc_elb_arg;
+ struct {
+ u32 func_id;
+ } func_tbl_arg;
+ struct {
+ u32 port_id;
+ } port_tbl_arg;
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } fdir_io_table_arg;
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } flexq_table_arg;
+ u32 args[4];
+};
+
+#define DFX_SM_TBL_BUF_MAX (768)
+
+struct nic_cmd_dfx_sm_table {
+ struct hinic3_mgmt_msg_head msg_head;
+ u32 tbl_type;
+ union sm_tbl_args args;
+ u8 tbl_buf[DFX_SM_TBL_BUF_MAX];
+};
+
+struct hinic3_cmd_vlan_offload {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 vlan_offload;
+ u8 rsvd1[5];
+};
+
+/* ucode capture cfg info */
+struct nic_cmd_capture_info {
+ struct hinic3_mgmt_msg_head msg_head;
+ u32 op_type;
+ u32 func_port;
+ u32 is_en_trx;
+ u32 offset_cos;
+ u32 data_vlan;
+};
+
+struct hinic3_cmd_lro_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 lro_ipv4_en;
+ u8 lro_ipv6_en;
+ u8 lro_max_pkt_len; /* unit is 1K */
+ u8 resv2[13];
+};
+
+struct hinic3_cmd_lro_timer {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 opcode; /* 1: set timer value, 0: get timer value */
+ u8 rsvd1;
+ u16 rsvd2;
+ u32 timer;
+};
+
+struct hinic3_cmd_local_lro_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 0: get state, 1: set state */
+ u8 state; /* 0: disable, 1: enable */
+};
+
+struct hinic3_cmd_gtp_inner_parse_status {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 0: get state, 1: set state */
+ u8 status; /* 0: disable, 1: enable */
+};
+
+struct hinic3_cmd_vf_vlan_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 vlan_id;
+ u8 qos;
+ u8 rsvd2[5];
+};
+
+struct hinic3_cmd_spoofchk_set {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 state;
+ u8 rsvd1;
+};
+
+struct hinic3_cmd_tx_rate_cfg {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 rsvd1;
+ u8 direct;
+ u32 min_rate;
+ u32 max_rate;
+ u8 rsvd2[8];
+};
+
+struct hinic3_cmd_port_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+ u16 rsvd2;
+ u32 rsvd3[4];
+};
+
+struct hinic3_cmd_register_vf {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 op_register; /* 0 - unregister, 1 - register */
+ u8 rsvd1[3];
+ u32 support_extra_feature;
+ u8 rsvd2[32];
+};
+
+struct hinic3_cmd_link_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 state;
+ u16 rsvd1;
+};
+
+struct hinic3_cmd_vlan_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 outband_defvid_flag;
+ u16 vlan_id;
+ u8 blacklist_flag;
+ u8 rsvd2;
+};
+
+#define VLAN_BLACKLIST_ENABLE 1
+#define VLAN_BLACKLIST_DISABLE 0
+
+struct hinic3_cmd_vxlan_port_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 cfg_mode;
+ u16 vxlan_port;
+ u16 rsvd2;
+};
+
+/* set vlan filter */
+struct hinic3_cmd_set_vlan_filter {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 resvd[2];
+ u32 vlan_filter_ctrl; /* bit0:vlan filter en; bit1:broadcast_filter_en */
+};
+
+struct hinic3_cmd_link_ksettings_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+
+ u32 valid_bitmap;
+ u8 speed; /* enum nic_speed_level */
+ u8 autoneg; /* 0 - off, 1 - on */
+ u8 fec; /* 0 - RSFEC, 1 - BASEFEC, 2 - NOFEC */
+ u8 rsvd2[21]; /* reserved for duplex, port, etc. */
+};
+
+struct mpu_lt_info {
+ u8 node;
+ u8 inst;
+ u8 entry_size;
+ u8 rsvd;
+ u32 lt_index;
+ u32 offset;
+ u32 len;
+};
+
+struct nic_mpu_lt_opera {
+ struct hinic3_mgmt_msg_head msg_head;
+ struct mpu_lt_info net_lt_cmd;
+ u8 data[100];
+};
+
+struct hinic3_force_pkt_drop {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port;
+ u8 rsvd1[3];
+};
+
+struct hinic3_rx_mode_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 rx_mode;
+};
+
+/* rss */
+struct hinic3_rss_context_table {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 context;
+};
+
+struct hinic3_cmd_rss_engine_type {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 hash_engine;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_hash_key {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 key[NIC_RSS_KEY_SIZE];
+};
+
+struct hinic3_rss_indir_table {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 indir[NIC_RSS_INDIR_SIZE];
+};
+
+#define NIC_RSS_CMD_TEMP_ALLOC 0x01
+#define NIC_RSS_CMD_TEMP_FREE 0x02
+
+struct hinic3_rss_template_mgmt {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 cmd;
+ u8 template_id;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 rss_en;
+ u8 rq_priority_number;
+ u8 prio_tc[NIC_DCB_COS_MAX];
+ u16 num_qps;
+ u16 rsvd1;
+};
+
+struct hinic3_dcb_state {
+ u8 dcb_on;
+ u8 default_cos;
+ u8 trust;
+ u8 rsvd1;
+ u8 pcp2cos[NIC_DCB_UP_MAX];
+ u8 dscp2cos[64];
+ u32 rsvd2[7];
+};
+
+struct hinic3_cmd_vf_dcb_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ struct hinic3_dcb_state state;
+};
+
+struct hinic3_up_ets_cfg { /* to be deleted */
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+
+ u8 cos_tc[NIC_DCB_COS_MAX];
+ u8 tc_bw[NIC_DCB_TC_MAX];
+ u8 cos_prio[NIC_DCB_COS_MAX];
+ u8 cos_bw[NIC_DCB_COS_MAX];
+ u8 tc_prio[NIC_DCB_TC_MAX];
+};
+
+#define CMD_QOS_ETS_COS_TC BIT(0)
+#define CMD_QOS_ETS_TC_BW BIT(1)
+#define CMD_QOS_ETS_COS_PRIO BIT(2)
+#define CMD_QOS_ETS_COS_BW BIT(3)
+#define CMD_QOS_ETS_TC_PRIO BIT(4)
+#define CMD_QOS_ETS_TC_RATELIMIT BIT(5)
+
+struct hinic3_cmd_ets_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 1 - set, 0 - get */
+ /* bit0 - cos_tc, bit1 - tc_bw, bit2 - cos_prio, bit3 - cos_bw, bit4 - tc_prio */
+ u8 cfg_bitmap;
+ u8 rsvd;
+
+ u8 cos_tc[NIC_DCB_COS_MAX];
+ u8 tc_bw[NIC_DCB_TC_MAX];
+ u8 cos_prio[NIC_DCB_COS_MAX]; /* 0 - DWRR, 1 - STRICT */
+ u8 cos_bw[NIC_DCB_COS_MAX];
+ u8 tc_prio[NIC_DCB_TC_MAX]; /* 0 - DWRR, 1 - STRICT */
+ u8 rate_limit[NIC_DCB_TC_MAX];
+};
+
+struct hinic3_cmd_set_dcb_state {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 op_code; /* 0 - get dcb state, 1 - set dcb state */
+ u8 state; /* 0 - disable, 1 - enable dcb */
+ u8 port_state; /* 0 - disable, 1 - enable dcb */
+ u8 rsvd[7];
+};
+
+#define PFC_BIT_MAP_NUM 8
+struct hinic3_cmd_set_pfc {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 0:get 1: set pfc_en 2: set pfc_bitmap 3: set all */
+ u8 pfc_en; /* pfc_en and pfc_bitmap must be set together */
+ u8 pfc_bitmap;
+ u8 rsvd[4];
+};
+
+#define CMD_QOS_PORT_TRUST BIT(0)
+#define CMD_QOS_PORT_DFT_COS BIT(1)
+struct hinic3_cmd_qos_port_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 0 - get, 1 - set */
+ u8 cfg_bitmap; /* bit0 - trust, bit1 - dft_cos */
+ u8 rsvd0;
+
+ u8 trust;
+ u8 dft_cos;
+ u8 rsvd1[18];
+};
+
+#define MAP_COS_MAX_NUM 8
+#define CMD_QOS_MAP_PCP2COS BIT(0)
+#define CMD_QOS_MAP_DSCP2COS BIT(1)
+struct hinic3_cmd_qos_map_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 op_code;
+ u8 cfg_bitmap; /* bit0 - pcp2cos, bit1 - dscp2cos */
+ u16 rsvd0;
+
+ u8 pcp2cos[8]; /* 8 must be configured together */
+ /* If the dscp2cos parameter is set to 0xFF, the MPU ignores the DSCP priority.
+ * Multiple mappings between DSCP values and CoS values can be configured at a time.
+ */
+ u8 dscp2cos[64];
+ u32 rsvd1[4];
+};
+
+struct hinic3_cos_up_map {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 cos_valid_mask; /* every bit indicate index of map is valid 1 or not 0 */
+ u16 rsvd1;
+
+ /* user priority in cos(index:cos, value: up pri) */
+ u8 map[NIC_DCB_UP_MAX];
+};
+
+struct hinic3_cmd_pause_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u16 rsvd1;
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+ u8 rsvd2[5];
+};
+
+struct nic_cmd_pause_inquiry_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 valid;
+
+ u32 type; /* 1: set, 2: get */
+
+ u32 cos_id;
+
+ u32 rx_inquiry_pause_drop_pkts_en;
+ u32 rx_inquiry_pause_period_ms;
+ u32 rx_inquiry_pause_times;
+ /* rx pause Detection Threshold, Default PAUSE_FRAME_THD_10G/25G/40G/100 */
+ u32 rx_inquiry_pause_frame_thd;
+ u32 rx_inquiry_tx_total_pkts;
+
+ u32 tx_inquiry_pause_en; /* tx pause detect enable */
+ u32 tx_inquiry_pause_period_ms; /* tx pause Default Detection Period 200ms */
+ u32 tx_inquiry_pause_times; /* tx pause Default Times Period 5 */
+ u32 tx_inquiry_pause_frame_thd; /* tx pause Detection Threshold */
+ u32 tx_inquiry_rx_total_pkts;
+ u32 rsvd[3];
+};
+
+/* pfc/pause Storm TX exception reporting */
+struct nic_cmd_tx_pause_notice {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 tx_pause_except; /* 1: abnormality,0: normal */
+ u32 except_level;
+ u32 rsvd;
+};
+
+#define HINIC3_CMD_OP_FREE 0
+#define HINIC3_CMD_OP_ALLOC 1
+
+struct hinic3_cmd_cfg_qps {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: alloc qp, 0: free qp */
+ u8 rsvd1;
+ u16 num_qps;
+ u16 rsvd2;
+};
+
+struct hinic3_cmd_led_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port;
+ u8 type;
+ u8 mode;
+ u8 rsvd1;
+};
+
+struct hinic3_cmd_port_loopback {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u8 mode;
+ u8 en;
+ u32 rsvd1[2];
+};
+
+struct hinic3_cmd_get_light_module_abs {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 abs_status; /* 0:present, 1:absent */
+ u8 rsv[2];
+};
+
+#define STD_SFP_INFO_MAX_SIZE 640
+struct hinic3_cmd_get_std_sfp_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 wire_type;
+ u16 eeprom_len;
+ u32 rsvd;
+ u8 sfp_info[STD_SFP_INFO_MAX_SIZE];
+};
+
+struct hinic3_cable_plug_event {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 plugged; /* 0: unplugged, 1: plugged */
+ u8 port_id;
+};
+
+struct nic_cmd_mac_info {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 valid_bitmap;
+ u16 rsvd;
+
+ u8 host_id[32];
+ u8 port_id[32];
+ u8 mac_addr[192];
+};
+
+struct nic_cmd_set_tcam_enable {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 tcam_enable;
+ u8 rsvd1;
+ u32 rsvd2;
+};
+
+struct nic_cmd_set_fdir_status {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 pkt_type_en;
+ u8 pkt_type;
+ u8 qid;
+ u8 rsvd2;
+};
+
+#define HINIC3_TCAM_BLOCK_ENABLE 1
+#define HINIC3_TCAM_BLOCK_DISABLE 0
+#define HINIC3_MAX_TCAM_RULES_NUM 4096
+
+/* tcam block type, according to tcam block size */
+enum {
+ NIC_TCAM_BLOCK_TYPE_LARGE = 0, /* block_size: 16 */
+ NIC_TCAM_BLOCK_TYPE_SMALL, /* block_size: 0 */
+ NIC_TCAM_BLOCK_TYPE_MAX
+};
+
+/* alloc tcam block input struct */
+struct nic_cmd_ctrl_tcam_block_in {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u8 alloc_en; /* 0: Releases the allocated TCAM block. 1: Applies for a new TCAM block */
+ /* 0: 16 size tcam block, 1: 0 size tcam block, other reserved. */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ /* Size of the block that the driver wants to allocate.
+ * Returned by the UP to the driver, indicating the size
+ * of the allocated TCAM block supported by the UP.
+ */
+ u16 alloc_block_num;
+};
+
+/* alloc tcam block output struct */
+struct nic_cmd_ctrl_tcam_block_out {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u8 alloc_en; /* 0: Releases the allocated TCAM block. 1: Applies for a new TCAM block */
+ /* 0: 16 size tcam block, 1: 0 size tcam block, other reserved. */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ /* Size of the block that the driver wants to allocate.
+ * Returned by the UP to the driver, indicating the size
+ * of the allocated TCAM block supported by the UP.
+ */
+ u16 mpu_alloc_block_size;
+};
+
+struct nic_cmd_flush_tcam_rules {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u16 rsvd;
+};
+
+struct nic_cmd_dfx_fdir_tcam_block_table {
+ struct hinic3_mgmt_msg_head head;
+ u8 tcam_type;
+ u8 valid;
+ u16 tcam_block_index;
+ u16 use_function_id;
+ u16 rsvd;
+};
+
+struct tcam_result {
+ u32 qid;
+ u32 rsvd;
+};
+
+#define TCAM_FLOW_KEY_SIZE (44)
+
+struct tcam_key_x_y {
+ u8 x[TCAM_FLOW_KEY_SIZE];
+ u8 y[TCAM_FLOW_KEY_SIZE];
+};
+
+struct nic_tcam_cfg_rule {
+ u32 index;
+ struct tcam_result data;
+ struct tcam_key_x_y key;
+};
+
+#define TCAM_RULE_FDIR_TYPE 0
+#define TCAM_RULE_PPA_TYPE 1
+
+struct nic_cmd_fdir_add_rule {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type;
+ u8 fdir_ext; /* 0x1: flow bifur en bit */
+ struct nic_tcam_cfg_rule rule;
+};
+
+struct nic_cmd_fdir_del_rules {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type;
+ u8 rsvd;
+ u32 index_start;
+ u32 index_num;
+};
+
+struct nic_cmd_fdir_get_rule {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 index;
+ u8 valid;
+ u8 type;
+ u16 rsvd;
+ struct tcam_key_x_y key;
+ struct tcam_result data;
+ u64 packet_count;
+ u64 byte_count;
+};
+
+struct nic_cmd_fdir_get_block_rules {
+ struct hinic3_mgmt_msg_head head;
+ u8 tcam_block_type; /* only NIC_TCAM_BLOCK_TYPE_LARGE */
+ u8 tcam_table_type; /* TCAM_RULE_PPA_TYPE or TCAM_RULE_FDIR_TYPE */
+ u16 tcam_block_index;
+ u8 valid[NIC_TCAM_BLOCK_LARGE_SIZE];
+ struct tcam_key_x_y key[NIC_TCAM_BLOCK_LARGE_SIZE];
+ struct tcam_result data[NIC_TCAM_BLOCK_LARGE_SIZE];
+};
+
+struct hinic3_tcam_key_ipv4_mem {
+ u32 rsvd1 : 1;
+ u32 bifur_flag : 2;
+ u32 model : 1;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+ u32 sipv4_h : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+ u32 dipv4_h : 16;
+ u32 sipv4_l : 16;
+ u32 vlan_id : 15;
+ u32 vlan_flag : 1;
+ u32 dipv4_l : 16;
+ u32 rsvd3;
+ u32 dport : 16;
+ u32 rsvd4 : 16;
+ u32 rsvd5 : 16;
+ u32 sport : 16;
+ u32 outer_sipv4_h : 16;
+ u32 rsvd6 : 16;
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+ u32 rsvd7 : 16;
+ u32 vni_l : 16;
+};
+
+union hinic3_tag_tcam_ext_info {
+ struct {
+ u32 id : 16; /* id */
+ u32 type : 4; /* type: 0-func, 1-vmdq, 2-port, 3-rsvd, 4-trunk, 5-dp, 6-mc */
+ u32 host_id : 3;
+ u32 rss_q_num : 8; /* rss queue num */
+ u32 ext : 1;
+ } bs;
+ u32 value;
+};
+
+struct hinic3_tcam_key_ipv6_mem {
+ u32 bifur_flag : 2;
+ u32 vlan_flag : 1;
+ u32 outer_ip_type : 1;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+ u32 sipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+ u32 sipv6_key2 : 16;
+ u32 sipv6_key1 : 16;
+ u32 sipv6_key4 : 16;
+ u32 sipv6_key3 : 16;
+ u32 sipv6_key6 : 16;
+ u32 sipv6_key5 : 16;
+ u32 dport : 16;
+ u32 sipv6_key7 : 16;
+ u32 dipv6_key0 : 16;
+ u32 sport : 16;
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+ u32 rsvd2 : 16;
+ u32 dipv6_key7 : 16;
+};
+
+struct hinic3_tcam_key_vxlan_ipv6_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+
+ u32 dipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+
+ u32 dport : 16;
+ u32 dipv6_key7 : 16;
+
+ u32 rsvd2 : 16;
+ u32 sport : 16;
+
+ u32 outer_sipv4_h : 16;
+ u32 rsvd3 : 16;
+
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+
+ u32 rsvd4 : 16;
+ u32 vni_l : 16;
+};
+
+struct tag_tcam_key {
+ union {
+ struct hinic3_tcam_key_ipv4_mem key_info;
+ struct hinic3_tcam_key_ipv6_mem key_info_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6;
+ };
+
+ union {
+ struct hinic3_tcam_key_ipv4_mem key_mask;
+ struct hinic3_tcam_key_ipv6_mem key_mask_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6;
+ };
+};
+
+enum {
+ PPA_TABLE_ID_CLEAN_CMD = 0,
+ PPA_TABLE_ID_ADD_CMD,
+ PPA_TABLE_ID_DEL_CMD,
+ FDIR_TABLE_ID_ADD_CMD,
+ FDIR_TABLE_ID_DEL_CMD,
+ PPA_TABEL_ID_MAX
+};
+
+struct hinic3_ppa_cfg_table_id_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u16 cmd;
+ u16 table_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_cfg_ppa_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 ppa_en;
+ u8 ppa_miss_drop_en;
+};
+
+struct hinic3_func_flow_bifur_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 func_id;
+ u8 flow_bifur_en;
+ u8 rsvd[5];
+};
+
+struct hinic3_port_flow_bifur_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 port_id;
+ u8 flow_bifur_en;
+ u8 flow_bifur_type; /* 0->vf bifur, 2->traffic bifur */
+ u8 rsvd[4];
+};
+
+struct hinic3_bond_mask_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 func_id;
+ u8 bond_mask;
+ u8 bond_en;
+ u8 func_valid;
+ u8 rsvd[3];
+};
+
+struct hinic3_func_er_value_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 vf_id;
+ u16 er_fwd_id;
+};
+
+#define HINIC3_TX_SET_PROMISC_SKIP 0
+#define HINIC3_TX_GET_PROMISC_SKIP 1
+
+#define HINIC3_GET_TRAFFIC_BIFUR_STATE 0
+#define HINIC3_SET_TRAFFIC_BIFUR_STATE 1
+
+struct hinic3_tx_promisc_cfg {
+ struct hinic3_mgmt_msg_head msg_head;
+ u8 port_id;
+ u8 promisc_skip_en; /* 0: disable tx promisc replication, 1: enable */
+ u8 opcode; /* 0: set, 1: get */
+ u8 rsvd1;
+};
+
+struct hinic3_ppa_cfg_mode_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 ppa_mode;
+ u8 qpc_func_nums;
+ u16 base_qpc_func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_flush_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 flush_en; /* 0 flush done, 1 in flush operation */
+ u8 rsvd1;
+};
+
+struct hinic3_ppa_fdir_query_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u32 index;
+ u32 rsvd;
+ u64 pkt_nums;
+ u64 pkt_bytes;
+};
+
+/* BIOS CONF */
+enum {
+ NIC_NVM_DATA_SET = BIT(0), /* 1-save, 0-read */
+ NIC_NVM_DATA_PXE = BIT(1),
+ NIC_NVM_DATA_VLAN = BIT(2),
+ NIC_NVM_DATA_VLAN_PRI = BIT(3),
+ NIC_NVM_DATA_VLAN_ID = BIT(4),
+ NIC_NVM_DATA_WORK_MODE = BIT(5),
+ NIC_NVM_DATA_PF_TX_SPEED_LIMIT = BIT(6),
+ NIC_NVM_DATA_GE_MODE = BIT(7),
+ NIC_NVM_DATA_AUTO_NEG = BIT(8),
+ NIC_NVM_DATA_LINK_FEC = BIT(9),
+ NIC_NVM_DATA_PF_ADAPTIVE_LINK = BIT(10),
+ NIC_NVM_DATA_SRIOV_CONTROL = BIT(11),
+ NIC_NVM_DATA_EXTEND_MODE = BIT(12),
+ NIC_NVM_DATA_LEGACY_VLAN = BIT(13),
+ NIC_NVM_DATA_LEGACY_VLAN_PRI = BIT(14),
+ NIC_NVM_DATA_LEGACY_VLAN_ID = BIT(15),
+ NIC_NVM_DATA_RESET = BIT(31),
+};
+
+#define BIOS_CFG_SIGNATURE 0x1923E518
+#define BIOS_OP_CFG_ALL(op_code_val) \
+ ((((op_code_val) >> 1) & (0xFFFFFFFF)) != 0)
+#define BIOS_OP_CFG_WRITE(op_code_val) \
+ ((((op_code_val) & NIC_NVM_DATA_SET)) != 0)
+#define BIOS_OP_CFG_PXE_EN(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_PXE) != 0)
+#define BIOS_OP_CFG_VLAN_EN(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_VLAN) != 0)
+#define BIOS_OP_CFG_VLAN_PRI(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_VLAN_PRI) != 0)
+#define BIOS_OP_CFG_VLAN_ID(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_VLAN_ID) != 0)
+#define BIOS_OP_CFG_WORK_MODE(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_WORK_MODE) != 0)
+#define BIOS_OP_CFG_PF_BW(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_PF_TX_SPEED_LIMIT) != 0)
+#define BIOS_OP_CFG_GE_SPEED(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_GE_MODE) != 0)
+#define BIOS_OP_CFG_AUTO_NEG(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_AUTO_NEG) != 0)
+#define BIOS_OP_CFG_LINK_FEC(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_LINK_FEC) != 0)
+#define BIOS_OP_CFG_AUTO_ADPAT(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_PF_ADAPTIVE_LINK) != 0)
+#define BIOS_OP_CFG_SRIOV_ENABLE(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_SRIOV_CONTROL) != 0)
+#define BIOS_OP_CFG_EXTEND_MODE(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_EXTEND_MODE) != 0)
+#define BIOS_OP_CFG_LEGACY_VLAN_EN(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_LEGACY_VLAN) != 0)
+#define BIOS_OP_CFG_LEGACY_VLAN_PRI(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_LEGACY_VLAN_PRI) != 0)
+#define BIOS_OP_CFG_LEGACY_VLAN_ID(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_LEGACY_VLAN_ID) != 0)
+#define BIOS_OP_CFG_RST_DEF_SET(op_code_val) \
+ (((op_code_val) & (u32)NIC_NVM_DATA_RESET) != 0)
+
+
+#define NIC_BIOS_CFG_MAX_PF_BW 100
+
+struct nic_legacy_vlan_cfg {
+ /* Legacy mode PXE VLAN enable: 0 - disable 1 - enable */
+ u8 pxe_vlan_en : 1;
+ /* Legacy mode PXE VLAN priority: 0-7 */
+ u8 pxe_vlan_pri : 3;
+ /* Legacy mode PXE VLAN ID 1-4094 */
+ u16 pxe_vlan_id : 12;
+};
+
+/* Note: This structure must be 4-byte aligned. */
+struct nic_bios_cfg {
+ u32 signature;
+ u8 pxe_en;
+ u8 extend_mode;
+ struct nic_legacy_vlan_cfg nlvc;
+ u8 pxe_vlan_en;
+ u8 pxe_vlan_pri;
+ u16 pxe_vlan_id;
+ u32 service_mode;
+ u32 pf_tx_bw;
+ u8 speed;
+ u8 auto_neg;
+ u8 lanes;
+ u8 fec;
+ u8 auto_adapt;
+ u8 func_valid;
+ u8 func_id;
+ u8 sriov_en;
+};
+
+struct nic_cmd_bios_cfg {
+ struct hinic3_mgmt_msg_head head;
+ u32 op_code; /* Operation Code: bit0 [0: read, 1: write], bit1-6: cfg_mask */
+ struct nic_bios_cfg bios_cfg;
+};
+
+struct nic_rx_rate_bios_cfg {
+ struct mgmt_msg_head msg_head;
+
+ u32 op_code; /* Operation Code:[0:read 1:write] */
+ u8 rx_rate_limit;
+ u8 func_id;
+};
+
+struct nic_cmd_vhd_config {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 vhd_type;
+ u8 virtio_small_enable; /* 0: mergeable mode, 1: small mode */
+};
+
+/* BOND */
+struct hinic3_create_bond_info {
+ u32 bond_id;
+ u32 master_slave_port_id;
+ u32 slave_bitmap; /* bond port id bitmap */
+ u32 poll_timeout; /* Bond device link check time */
+ u32 up_delay; /* Temporarily reserved */
+ u32 down_delay; /* Temporarily reserved */
+ u32 bond_mode; /* Temporarily reserved */
+ u32 active_pf; /* bond use active pf id */
+ u32 active_port_max_num; /* Maximum number of active bond member interfaces */
+ u32 active_port_min_num; /* Minimum number of active bond member interfaces */
+ u32 xmit_hash_policy;
+ u32 default_param_flag;
+ u32 rsvd;
+};
+
+struct hinic3_cmd_create_bond {
+ struct hinic3_mgmt_msg_head head;
+ struct hinic3_create_bond_info create_bond_info;
+};
+
+struct hinic3_cmd_delete_bond {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 rsvd[2];
+};
+
+struct hinic3_open_close_bond_info {
+ u32 bond_id;
+ u32 open_close_flag; /* Bond flag. 1: open; 0: close. */
+ u32 rsvd[2];
+};
+
+struct hinic3_cmd_open_close_bond {
+ struct hinic3_mgmt_msg_head head;
+ struct hinic3_open_close_bond_info open_close_bond_info;
+};
+
+struct lacp_port_params {
+ u16 port_number;
+ u16 port_priority;
+ u16 key;
+ u16 system_priority;
+ u8 system[ETH_ALEN];
+ u8 port_state;
+ u8 rsvd;
+};
+
+struct lacp_port_info {
+ u32 selected;
+ u32 aggregator_port_id;
+
+ struct lacp_port_params actor;
+ struct lacp_port_params partner;
+
+ u64 tx_lacp_pkts;
+ u64 rx_lacp_pkts;
+ u64 rx_8023ad_drop;
+ u64 tx_8023ad_drop;
+ u64 unknown_pkt_drop;
+ u64 rx_marker_pkts;
+ u64 tx_marker_pkts;
+};
+
+struct hinic3_bond_status_info {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 bon_mmi_status;
+ u32 active_bitmap;
+ u32 port_count;
+
+ struct lacp_port_info port_info[4];
+
+ u64 success_report_cnt[4];
+ u64 fail_report_cnt[4];
+
+ u64 poll_timeout;
+ u64 fast_periodic_timeout;
+ u64 slow_periodic_timeout;
+ u64 short_timeout;
+ u64 long_timeout;
+ u64 aggregate_wait_timeout;
+ u64 tx_period_timeout;
+ u64 rx_marker_timer;
+};
+
+struct hinic3_bond_active_report_info {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 bon_mmi_status;
+ u32 active_bitmap;
+
+ u8 rsvd[16];
+};
+
+/* IP checksum error packets, enable rss quadruple hash. */
+struct hinic3_ipcs_err_rss_enable_operation_s {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 en_tag;
+ u8 type; /* 1: set 0: get */
+ u8 rsvd[2];
+};
+
+struct hinic3_smac_check_state {
+ struct hinic3_mgmt_msg_head head;
+ u8 smac_check_en; /* 1: enable 0: disable */
+ u8 op_code; /* 1: set 0: get */
+ u8 flash_en; /* 1: enable 0: disable */
+ u8 rsvd;
+};
+
+struct hinic3_clear_log_state {
+ struct hinic3_mgmt_msg_head head;
+ u32 type;
+};
+
+struct hinic3_outband_cfg_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 outband_default_vid;
+ u16 func_id;
+};
+
+struct hinic3_wr_ordering {
+ struct hinic3_mgmt_msg_head head;
+ u8 op_code; /* 1: set 0: get */
+ u8 wr_pkt_so_ro;
+ u8 rd_pkt_so_ro;
+ u8 rsvd;
+};
+
+struct hinic3_function_active_info {
+ struct hinic3_mgmt_msg_head head;
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_rq_info {
+ struct hinic3_mgmt_msg_head head;
+ u16 func_id;
+ u16 rq_depth;
+ u16 rq_num;
+ u16 pf_num;
+ u16 port_num;
+};
+
+#endif /* HINIC_MGMT_INTERFACE_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/nic_npu_cmd.h b/drivers/net/ethernet/huawei/hinic3/nic_npu_cmd.h
new file mode 100644
index 0000000..97eda43
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/nic_npu_cmd.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C), 2001-2011, Huawei Tech. Co., Ltd.
+ * File Name : nic_npu_cmd.h
+ * Version : Initial Draft
+ * Created : 2019/4/25
+ * Last Modified :
+ * Description : NIC Commands between Driver and NPU
+ * Function List :
+ */
+
+#ifndef NIC_NPU_CMD_H
+#define NIC_NPU_CMD_H
+
+/* NIC CMDQ MODE */
+enum hinic3_ucode_cmd {
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0,
+ HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ HINIC3_UCODE_CMD_ARM_SQ, /**< Unused */
+ HINIC3_UCODE_CMD_ARM_RQ, /**< Unused */
+ HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE, /**< Unused */
+ HINIC3_UCODE_CMD_SET_IQ_ENABLE, /**< Unused */
+ HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10,
+ HINIC3_UCODE_CMD_MODIFY_VLAN_CTX,
+ HINIC3_UCODE_CMD_PPA_HASH_TABLE,
+ HINIC3_UCODE_CMD_RXQ_INFO_GET = 13,
+ HINIC3_UCODE_MIG_CFG_Q_CTX = 14,
+ HINIC3_UCODE_MIG_CHK_SQ_STOP,
+ HINIC3_UCODE_CHK_RQ_STOP,
+ HINIC3_UCODE_MIG_CFG_BAT_INFO,
+};
+
+#endif /* NIC_NPU_CMD_H */
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/ossl_knl.h b/drivers/net/ethernet/huawei/hinic3/ossl_knl.h
new file mode 100644
index 0000000..bb658cb
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/ossl_knl.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef OSSL_KNL_H
+#define OSSL_KNL_H
+
+#include "ossl_knl_linux.h"
+#include <linux/types.h>
+
+#define sdk_err(dev, format, ...) dev_err(dev, "[COMM]" format, ##__VA_ARGS__)
+#define sdk_warn(dev, format, ...) dev_warn(dev, "[COMM]" format, ##__VA_ARGS__)
+#define sdk_notice(dev, format, ...) dev_notice(dev, "[COMM]" format, ##__VA_ARGS__)
+#define sdk_info(dev, format, ...) dev_info(dev, "[COMM]" format, ##__VA_ARGS__)
+
+#define nic_err(dev, format, ...) dev_err(dev, "[NIC]" format, ##__VA_ARGS__)
+#define nic_warn(dev, format, ...) dev_warn(dev, "[NIC]" format, ##__VA_ARGS__)
+#define nic_notice(dev, format, ...) dev_notice(dev, "[NIC]" format, ##__VA_ARGS__)
+#define nic_info(dev, format, ...) dev_info(dev, "[NIC]" format, ##__VA_ARGS__)
+
+#ifndef BIG_ENDIAN
+#define BIG_ENDIAN 0x4321
+#endif
+
+#ifndef LITTLE_ENDIAN
+#define LITTLE_ENDIAN 0x1234
+#endif
+
+#ifdef BYTE_ORDER
+#undef BYTE_ORDER
+#endif
+/* X86 */
+#define BYTE_ORDER LITTLE_ENDIAN
+#define USEC_PER_MSEC 1000L
+#define MSEC_PER_SEC 1000L
+
+/* Waiting for 50 us */
+#define WAIT_USEC_50 50L
+
+#endif /* OSSL_KNL_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h b/drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h
new file mode 100644
index 0000000..b815d7c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h
@@ -0,0 +1,371 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef OSSL_KNL_LINUX_H_
+#define OSSL_KNL_LINUX_H_
+
+#include <net/ipv6.h>
+#include <net/devlink.h>
+#include <linux/string.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/version.h>
+#include <linux/ethtool.h>
+#include <linux/fs.h>
+#include <linux/kthread.h>
+#include <linux/if_vlan.h>
+#include <linux/udp.h>
+#include <linux/highmem.h>
+#include <linux/list.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+#include <linux/proc_fs.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/filter.h>
+#include <linux/aer.h>
+#include <linux/socket.h>
+
+#ifndef NETIF_F_SCTP_CSUM
+#define NETIF_F_SCTP_CSUM 0
+#endif
+
+#ifndef __GFP_COLD
+#define __GFP_COLD 0
+#endif
+
+#ifndef __GFP_COMP
+#define __GFP_COMP 0
+#endif
+
+#undef __always_unused
+#define __always_unused __attribute__((__unused__))
+
+#define ossl_get_free_pages __get_free_pages
+
+#ifndef ETHTOOL_LINK_MODE_100000baseKR_Full_BIT
+#define ETHTOOL_LINK_MODE_100000baseKR_Full_BIT 75
+#define ETHTOOL_LINK_MODE_100000baseCR_Full_BIT 78
+#define ETHTOOL_LINK_MODE_100000baseSR_Full_BIT 76
+#endif
+#ifndef ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT
+#define ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT 80
+#define ETHTOOL_LINK_MODE_200000baseSR2_Full_BIT 81
+#define ETHTOOL_LINK_MODE_200000baseCR2_Full_BIT 84
+#endif
+
+#ifndef high_16_bits
+#define low_16_bits(x) ((x) & 0xFFFF)
+#define high_16_bits(x) (((x) & 0xFFFF0000) >> 16)
+#endif
+
+#ifndef U8_MAX
+#define U8_MAX 0xFF
+#endif
+
+#define ETH_TYPE_TRANS_SETS_DEV
+#define HAVE_NETDEV_STATS_IN_NETDEV
+
+#ifndef HAVE_SET_RX_MODE
+#define HAVE_SET_RX_MODE
+#endif
+
+#define HAVE_INET6_IFADDR_LIST
+#define HAVE_NDO_GET_STATS64
+
+#ifndef HAVE_MQPRIO
+#define HAVE_MQPRIO
+#endif
+#ifndef HAVE_SETUP_TC
+#define HAVE_SETUP_TC
+#endif
+
+#ifndef HAVE_NDO_SET_FEATURES
+#define HAVE_NDO_SET_FEATURES
+#endif
+#define HAVE_IRQ_AFFINITY_NOTIFY
+#define HAVE_ETHTOOL_SET_PHYS_ID
+#define HAVE_NETDEV_WANTED_FEAUTES
+
+#ifndef HAVE_PCI_DEV_FLAGS_ASSIGNED
+#define HAVE_PCI_DEV_FLAGS_ASSIGNED
+#define HAVE_VF_SPOOFCHK_CONFIGURE
+#endif
+#ifndef HAVE_SKB_L4_RXHASH
+#define HAVE_SKB_L4_RXHASH
+#endif
+
+#define HAVE_ETHTOOL_GRXFHINDIR_SIZE
+#define HAVE_INT_NDO_VLAN_RX_ADD_VID
+#ifdef ETHTOOL_SRXNTUPLE
+#undef ETHTOOL_SRXNTUPLE
+#endif
+
+#define _kc_kmap_atomic(page) kmap_atomic(page)
+#define _kc_kunmap_atomic(addr) kunmap_atomic(addr)
+
+#include <linux/of_net.h>
+#define HAVE_FDB_OPS
+#define HAVE_ETHTOOL_GET_TS_INFO
+
+#define HAVE_NAPI_GRO_FLUSH_OLD
+
+#ifndef HAVE_SRIOV_CONFIGURE
+#define HAVE_SRIOV_CONFIGURE
+#endif
+
+#define HAVE_ENCAP_TSO_OFFLOAD
+#define HAVE_SKB_INNER_NETWORK_HEADER
+
+
+#define HAVE_NDO_SET_VF_LINK_STATE
+#define HAVE_SKB_INNER_PROTOCOL
+#define HAVE_MPLS_FEATURES
+
+#define HAVE_NDO_GET_PHYS_PORT_ID
+#define HAVE_NETIF_SET_XPS_QUEUE_CONST_MASK
+
+#define HAVE_VXLAN_CHECKS
+#define HAVE_NET_GET_RANDOM_ONCE
+#define HAVE_HWMON_DEVICE_REGISTER_WITH_GROUPS
+
+#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
+
+
+#define HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+#define HAVE_VLAN_FIND_DEV_DEEP_RCU
+
+#define HAVE_SKBUFF_CSUM_LEVEL
+#define HAVE_MULTI_VLAN_OFFLOAD_EN
+#define HAVE_ETH_GET_HEADLEN_FUNC
+
+
+#define HAVE_RXFH_HASHFUNC
+#define HAVE_NDO_SET_VF_TRUST
+
+#include <net/devlink.h>
+
+#define HAVE_IO_MAP_WC_SIZE
+
+#define HAVE_NETDEVICE_MIN_MAX_MTU
+
+
+#define HAVE_VOID_NDO_GET_STATS64
+#define HAVE_VM_OPS_FAULT_NO_VMA
+
+#define HAVE_HWTSTAMP_FILTER_NTP_ALL
+#define HAVE_NDO_SETUP_TC_CHAIN_INDEX
+#define HAVE_PCI_ERROR_HANDLER_RESET_PREPARE
+#define HAVE_PTP_CLOCK_DO_AUX_WORK
+
+
+#define HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
+
+#define HAVE_XDP_SUPPORT
+#if (KERNEL_VERSION(5, 9, 0) > LINUX_VERSION_CODE)
+#define HAVE_XDP_QUERY_PROG
+#endif
+
+#define HAVE_NDO_BPF_NETDEV_BPF
+#define HAVE_TIMER_SETUP
+#define HAVE_XDP_DATA_META
+
+#define HAVE_MACRO_VM_FAULT_T
+
+#define HAVE_NDO_SELECT_QUEUE_SB_DEV
+
+
+#define dev_open(x) dev_open(x, NULL)
+#define HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+
+#ifndef get_ds
+#define get_ds() (KERNEL_DS)
+#endif
+
+#ifndef dma_zalloc_coherent
+#define dma_zalloc_coherent(d, s, h, f) _hinic3_dma_zalloc_coherent(d, s, h, f)
+static inline void *_hinic3_dma_zalloc_coherent(struct device *dev,
+ size_t size, dma_addr_t *dma_handle,
+ gfp_t gfp)
+{
+	/* Since kernel 5.0 every architecture zeroes the memory in
+	 * dma_alloc_coherent(), and dma_zalloc_coherent() became a no-op
+	 * wrapper around it before being removed, so simply forward to
+	 * dma_alloc_coherent() here.
+	 */
+ return dma_alloc_coherent(dev, size, dma_handle, gfp);
+}
+#endif
+
+#if (KERNEL_VERSION(5, 6, 0) <= LINUX_VERSION_CODE)
+#ifndef DT_KNL_EMU
+struct timeval {
+ __kernel_old_time_t tv_sec; /* seconds */
+ __kernel_suseconds_t tv_usec; /* microseconds */
+};
+#endif
+#endif
+
+#ifndef do_gettimeofday
+#define do_gettimeofday(time) _kc_do_gettimeofday(time)
+static inline void _kc_do_gettimeofday(struct timeval *tv)
+{
+ struct timespec64 ts;
+
+ ktime_get_real_ts64(&ts);
+ tv->tv_sec = ts.tv_sec;
+ tv->tv_usec = ts.tv_nsec / NSEC_PER_USEC;
+}
+#endif
+
+
+
+#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
+#define HAVE_NDO_SELECT_QUEUE_SB_DEV
+#define HAVE_GENL_OPS_FIELD_VALIDATE
+#define ETH_MODULE_SFF_8436_MAX_LEN 640
+#define ETH_MODULE_SFF_8636_MAX_LEN 640
+#define SPEED_200000 200000
+
+#ifndef FIELD_SIZEOF
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#endif
+
+/*****************************************************************************/
+#if (KERNEL_VERSION(5, 5, 0) > LINUX_VERSION_CODE)
+#else /* >= 5.5.0 */
+#define HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+#endif /* 5.5.0 */
+
+/*****************************************************************************/
+#if (KERNEL_VERSION(5, 6, 0) > LINUX_VERSION_CODE)
+#else /* >= 5.6.0 */
+#ifndef rtc_time_to_tm
+#define rtc_time_to_tm rtc_time64_to_tm
+#endif
+#define HAVE_NDO_TX_TIMEOUT_TXQ
+#define HAVE_PROC_OPS
+#endif /* 5.6.0 */
+
+/*****************************************************************************/
+#if (KERNEL_VERSION(5, 7, 0) > LINUX_VERSION_CODE)
+#else /* >= 5.7.0 */
+#define SUPPORTED_COALESCE_PARAMS
+
+#ifndef pci_cleanup_aer_uncorrect_error_status
+#define pci_cleanup_aer_uncorrect_error_status pci_aer_clear_nonfatal_status
+#endif
+#endif /* 5.7.0 */
+
+/* ************************************************************************ */
+#if (KERNEL_VERSION(5, 9, 0) > LINUX_VERSION_CODE)
+
+#else /* >= 5.9.0 */
+#define HAVE_XDP_FRAME_SZ
+#endif /* 5.9.0 */
+
+/* ************************************************************************ */
+#if (KERNEL_VERSION(5, 10, 0) > LINUX_VERSION_CODE)
+#define HAVE_DEVLINK_FW_FILE_NAME_PARAM
+#else /* >= 5.10.0 */
+#endif /* 5.10.0 */
+
+#define HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+
+/* ************************************************************************ */
+#if (KERNEL_VERSION(5, 10, 0) > LINUX_VERSION_CODE)
+
+#else /* >= 5.10.0 */
+#if !defined(HAVE_ETHTOOL_COALESCE_EXTACK) && \
+ !defined(NO_ETHTOOL_COALESCE_EXTACK)
+#define HAVE_ETHTOOL_COALESCE_EXTACK
+#endif
+#endif /* 5.10.0 */
+
+/* ************************************************************************ */
+#if (KERNEL_VERSION(5, 10, 0) > LINUX_VERSION_CODE)
+
+#else /* >= 5.10.0 */
+#if !defined(HAVE_ETHTOOL_RINGPARAM_EXTACK) && \
+ !defined(NO_ETHTOOL_RINGPARAM_EXTACK)
+#define HAVE_ETHTOOL_RINGPARAM_EXTACK
+#endif
+#endif /* 5.10.0 */
+/* ************************************************************************ */
+#define HAVE_NDO_UDP_TUNNEL_ADD
+#define HAVE_ENCAPSULATION_TSO
+#define HAVE_ENCAPSULATION_CSUM
+
+#ifndef eth_zero_addr
+static inline void hinic3_eth_zero_addr(u8 *addr)
+{
+ (void)memset(addr, 0x00, ETH_ALEN);
+}
+
+#define eth_zero_addr(_addr) hinic3_eth_zero_addr(_addr)
+#endif
+
+#ifndef netdev_hw_addr_list_for_each
+#define netdev_hw_addr_list_for_each(ha, l) \
+ list_for_each_entry(ha, &(l)->list, list)
+#endif
+
+#define spin_lock_deinit(lock)
+
+struct file *file_creat(const char *file_name);
+
+struct file *file_open(const char *file_name);
+
+void file_close(struct file *file_handle);
+
+u32 get_file_size(struct file *file_handle);
+
+void set_file_position(struct file *file_handle, u32 position);
+
+int file_read(struct file *file_handle, char *log_buffer, u32 rd_length,
+ u32 *file_pos);
+
+u32 file_write(struct file *file_handle, const char *log_buffer, u32 wr_length);
+
+struct sdk_thread_info {
+ struct task_struct *thread_obj;
+ char *name;
+ void (*thread_fn)(void *x);
+ void *thread_event;
+ void *data;
+};
+
+int creat_thread(struct sdk_thread_info *thread_info);
+
+void stop_thread(struct sdk_thread_info *thread_info);
+
+#define destroy_work(work)
+void utctime_to_localtime(u64 utctime, u64 *localtime);
+#ifndef HAVE_TIMER_SETUP
+void initialize_timer(const void *adapter_hdl, struct timer_list *timer);
+#endif
+void add_to_timer(struct timer_list *timer, u64 period);
+void stop_timer(struct timer_list *timer);
+void delete_timer(struct timer_list *timer);
+u64 ossl_get_real_time(void);
+
+#define nicif_err(priv, type, dev, fmt, args...) \
+ netif_level(err, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_warn(priv, type, dev, fmt, args...) \
+ netif_level(warn, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_notice(priv, type, dev, fmt, args...) \
+ netif_level(notice, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_info(priv, type, dev, fmt, args...) \
+ netif_level(info, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_dbg(priv, type, dev, fmt, args...) \
+ netif_level(dbg, priv, type, dev, "[NIC]" fmt, ##args)
+
+#define destroy_completion(completion)
+#define sema_deinit(lock)
+#define mutex_deinit(lock)
+#define rwlock_deinit(lock)
+
+#define tasklet_state(tasklet) ((tasklet)->state)
+
+#endif
+/* ************************************************************************ */
--
2.28.0.windows.1
From: Zhenyu Wang <zhenyuw(a)linux.intel.com>
stable inclusion
from stable-v5.10.163
commit af90f8b36d78544433a48a3eda6a5faeafacd0a1
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ID0VH2
CVE: CVE-2023-53625
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 704f3384f322b40ba24d958473edfb1c9750c8fd upstream.
Check carefully that the root debugfs entry is still available when
destroying a vgpu; e.g. in the remove case the drm minor's debugfs root
might already be destroyed, which led to a kernel oops like the one below.
Console: switching to colour dummy device 80x25
i915 0000:00:02.0: MDEV: Unregistering
intel_vgpu_mdev b1338b2d-a709-4c23-b766-cc436c36cdf0: Removing from iommu group 14
BUG: kernel NULL pointer dereference, address: 0000000000000150
PGD 0 P4D 0
Oops: 0000 [#1] PREEMPT SMP
CPU: 3 PID: 1046 Comm: driverctl Not tainted 6.1.0-rc2+ #6
Hardware name: HP HP ProDesk 600 G3 MT/829D, BIOS P02 Ver. 02.44 09/13/2022
RIP: 0010:__lock_acquire+0x5e2/0x1f90
Code: 87 ad 09 00 00 39 05 e1 1e cc 02 0f 82 f1 09 00 00 ba 01 00 00 00 48 83 c4 48 89 d0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 45 31 ff <48> 81 3f 60 9e c2 b6 45 0f 45 f8 83 fe 01 0f 87 55 fa ff ff 89 f0
RSP: 0018:ffff9f770274f948 EFLAGS: 00010046
RAX: 0000000000000003 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000150
RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
R10: ffff8895d1173300 R11: 0000000000000001 R12: 0000000000000000
R13: 0000000000000150 R14: 0000000000000000 R15: 0000000000000000
FS: 00007fc9b2ba0740(0000) GS:ffff889cdfcc0000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000150 CR3: 000000010fd93005 CR4: 00000000003706e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
lock_acquire+0xbf/0x2b0
? simple_recursive_removal+0xa5/0x2b0
? lock_release+0x13d/0x2d0
down_write+0x2a/0xd0
? simple_recursive_removal+0xa5/0x2b0
simple_recursive_removal+0xa5/0x2b0
? start_creating.part.0+0x110/0x110
? _raw_spin_unlock+0x29/0x40
debugfs_remove+0x40/0x60
intel_gvt_debugfs_remove_vgpu+0x15/0x30 [kvmgt]
intel_gvt_destroy_vgpu+0x60/0x100 [kvmgt]
intel_vgpu_release_dev+0xe/0x20 [kvmgt]
device_release+0x30/0x80
kobject_put+0x79/0x1b0
device_release_driver_internal+0x1b8/0x230
bus_remove_device+0xec/0x160
device_del+0x189/0x400
? up_write+0x9c/0x1b0
? mdev_device_remove_common+0x60/0x60 [mdev]
mdev_device_remove_common+0x22/0x60 [mdev]
mdev_device_remove_cb+0x17/0x20 [mdev]
device_for_each_child+0x56/0x80
mdev_unregister_parent+0x5a/0x81 [mdev]
intel_gvt_clean_device+0x2d/0xe0 [kvmgt]
intel_gvt_driver_remove+0x2e/0xb0 [i915]
i915_driver_remove+0xac/0x100 [i915]
i915_pci_remove+0x1a/0x30 [i915]
pci_device_remove+0x31/0xa0
device_release_driver_internal+0x1b8/0x230
unbind_store+0xd8/0x100
kernfs_fop_write_iter+0x156/0x210
vfs_write+0x236/0x4a0
ksys_write+0x61/0xd0
do_syscall_64+0x55/0x80
? find_held_lock+0x2b/0x80
? lock_release+0x13d/0x2d0
? up_read+0x17/0x20
? lock_is_held_type+0xe3/0x140
? asm_exc_page_fault+0x22/0x30
? lockdep_hardirqs_on+0x7d/0x100
entry_SYSCALL_64_after_hwframe+0x46/0xb0
RIP: 0033:0x7fc9b2c9e0c4
Code: 15 71 7d 0d 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa 80 3d 3d 05 0e 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 48 83 ec 28 48 89 54 24 18 48
RSP: 002b:00007ffec29c81c8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 000000000000000d RCX: 00007fc9b2c9e0c4
RDX: 000000000000000d RSI: 0000559f8b5f48a0 RDI: 0000000000000001
RBP: 0000559f8b5f48a0 R08: 0000559f8b5f3540 R09: 00007fc9b2d76d30
R10: 0000000000000000 R11: 0000000000000202 R12: 000000000000000d
R13: 00007fc9b2d77780 R14: 000000000000000d R15: 00007fc9b2d72a00
</TASK>
Modules linked in: sunrpc intel_rapl_msr intel_rapl_common intel_pmc_core_pltdrv intel_pmc_core intel_tcc_cooling x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel ee1004 igbvf rapl vfat fat intel_cstate intel_uncore pktcdvd i2c_i801 pcspkr wmi_bmof i2c_smbus acpi_pad vfio_pci vfio_pci_core vfio_virqfd zram fuse dm_multipath kvmgt mdev vfio_iommu_type1 vfio kvm irqbypass i915 nvme e1000e igb nvme_core crct10dif_pclmul crc32_pclmul crc32c_intel polyval_clmulni polyval_generic serio_raw ghash_clmulni_intel sha512_ssse3 dca drm_buddy intel_gtt video wmi drm_display_helper ttm
CR2: 0000000000000150
---[ end trace 0000000000000000 ]---
Cc: Wang Zhi <zhi.a.wang(a)intel.com>
Cc: He Yu <yu.he(a)intel.com>
Cc: Alex Williamson <alex.williamson(a)redhat.com>
Cc: stable(a)vger.kernel.org
Reviewed-by: Zhi Wang <zhi.a.wang(a)intel.com>
Tested-by: Yu He <yu.he(a)intel.com>
Fixes: bc7b0be316ae ("drm/i915/gvt: Add basic debugfs infrastructure")
Signed-off-by: Zhenyu Wang <zhenyuw(a)linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20221219140357.769557-2-zhenyu…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
drivers/gpu/drm/i915/gvt/debugfs.c
[ a61ac1e75105 ("drm/i915/gvt: Wean gvt off using dev_priv"), which replaces gvt->dev_priv with gvt->gt->i915, is not merged in; the backport therefore keeps using gvt->dev_priv ]
Signed-off-by: Zhang Kunbo <zhangkunbo(a)huawei.com>
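The fix amounts to a guard in intel_gvt_debugfs_remove_vgpu(): only tear down
the per-vgpu debugfs entry while the drm minor's debugfs root (and gvt's own
debugfs root) still exist. A minimal sketch of that guard, with types and
field names as used by this backport (illustrative only; the diff below is
the actual change):

void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu)
{
	struct intel_gvt *gvt = vgpu->gvt;
	struct drm_minor *minor = gvt->dev_priv->drm.primary;

	/* In the device-remove path the drm minor's debugfs root may already
	 * have been torn down; calling debugfs_remove_recursive() then walks
	 * released debugfs state and oopses, so skip the removal entirely.
	 */
	if (minor->debugfs_root && gvt->debugfs_root) {
		debugfs_remove_recursive(vgpu->debugfs);
		vgpu->debugfs = NULL;
	}
}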
---
drivers/gpu/drm/i915/gvt/debugfs.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gvt/debugfs.c b/drivers/gpu/drm/i915/gvt/debugfs.c
index 2ec89bcb59f1..de7b039e5c5a 100644
--- a/drivers/gpu/drm/i915/gvt/debugfs.c
+++ b/drivers/gpu/drm/i915/gvt/debugfs.c
@@ -227,8 +227,13 @@ int intel_gvt_debugfs_add_vgpu(struct intel_vgpu *vgpu)
*/
void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu)
{
- debugfs_remove_recursive(vgpu->debugfs);
- vgpu->debugfs = NULL;
+ struct intel_gvt *gvt = vgpu->gvt;
+ struct drm_minor *minor = gvt->dev_priv->drm.primary;
+
+ if (minor->debugfs_root && gvt->debugfs_root) {
+ debugfs_remove_recursive(vgpu->debugfs);
+ vgpu->debugfs = NULL;
+ }
}
/**
--
2.34.1
From: zhoubin <zhoubin120(a)h-partners.com>
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/ID1L0L?from=project-issue
CVE: NA
--------------------------------
Add Huawei Intelligent Network Card Driver: hinic3
Signed-off-by: zhoubin <zhoubin120(a)h-partners.com>
Signed-off-by: zhuyikai <zhuyikai1(a)h-partners.com>
Signed-off-by: zhengjiezhen <zhengjiezhen(a)h-partners.com>
Signed-off-by: shijing <shijing34(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic3/Kconfig | 13 +
drivers/net/ethernet/huawei/hinic3/Makefile | 66 +
drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.c | 1125 ++++++++
drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.h | 99 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.c | 2015 ++++++++++++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.h | 216 ++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.c | 1516 ++++++++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.h | 67 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.c | 506 ++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.h | 53 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.c | 182 ++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.h | 37 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.c | 444 +++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.h | 36 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_define.h | 50 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.c | 1674 +++++++++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.h | 381 +++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.c | 682 +++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.h | 23 +
drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.c | 1493 ++++++++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.h | 714 +++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.c | 1389 ++++++++++
drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.h | 93 +
drivers/net/ethernet/huawei/hinic3/cqm/readme.txt | 3 +
drivers/net/ethernet/huawei/hinic3/hinic3_crm.h | 1280 +++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c | 1108 ++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c | 482 ++++
drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h | 75 +
drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c | 1464 ++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c | 1320 +++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_filter.c | 483 ++++
drivers/net/ethernet/huawei/hinic3/hinic3_hw.h | 877 ++++++
drivers/net/ethernet/huawei/hinic3/hinic3_irq.c | 194 ++
drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c | 1737 ++++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_main.c | 1469 ++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h | 1298 +++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_mt.h | 864 ++++++
drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c | 2125 ++++++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic.h | 221 ++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c | 1894 +++++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h | 664 +++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c | 726 +++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c | 159 ++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h | 21 +
drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h | 462 ++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c | 649 +++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c | 1130 ++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h | 325 +++
drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c | 47 +
drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h | 59 +
drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h | 384 +++
drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c | 909 ++++++
drivers/net/ethernet/huawei/hinic3/hinic3_rss.c | 1003 +++++++
drivers/net/ethernet/huawei/hinic3/hinic3_rss.h | 95 +
drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c | 413 +++
drivers/net/ethernet/huawei/hinic3/hinic3_rx.c | 1523 ++++++++++
drivers/net/ethernet/huawei/hinic3/hinic3_rx.h | 164 ++
drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h | 220 ++
drivers/net/ethernet/huawei/hinic3/hinic3_tx.c | 1051 +++++++
drivers/net/ethernet/huawei/hinic3/hinic3_tx.h | 157 ++
drivers/net/ethernet/huawei/hinic3/hinic3_wq.h | 130 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c | 1214 ++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h | 286 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c | 1575 +++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h | 204 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c | 93 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h | 188 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c | 997 +++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h | 118 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c | 432 +++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h | 173 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c | 1422 ++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h | 164 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c | 495 ++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h | 141 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c | 1632 +++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h | 346 +++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c | 1681 +++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h | 51 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c | 530 ++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h | 49 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c | 2222 +++++++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h | 234 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c | 1050 +++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h | 113 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c | 2460 +++++++++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c | 1884 +++++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h | 281 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c | 1571 +++++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h | 182 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.c | 1259 +++++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.h | 124 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c | 1021 +++++++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h | 39 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h | 74 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c | 44 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h | 109 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h | 160 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c | 160 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c | 279 ++
drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h | 35 +
drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c | 159 ++
drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c | 119 +
drivers/net/ethernet/huawei/hinic3/include/bond/bond_common_defs.h | 73 +
drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd.h | 12 +
.../include/cfg_mgmt/cfg_mgmt_mpu_cmd_defs.h | 221 ++
drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd.h | 31 +
drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd_defs.h | 61 +
drivers/net/ethernet/huawei/hinic3/include/hinic3_common.h | 145 +
drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm.h | 353 +++
drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm_define.h | 48 +
drivers/net/ethernet/huawei/hinic3/include/hinic3_lld.h | 225 ++
drivers/net/ethernet/huawei/hinic3/include/hinic3_profile.h | 148 +
drivers/net/ethernet/huawei/hinic3/include/mpu/mag_mpu_cmd.h | 74 +
drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_board_defs.h | 135 +
drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_cmd_base_defs.h | 165 ++
drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd.h | 192 ++
drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd_defs.h | 1104 ++++++++
.../include/mpu/mpu_outband_ncsi_cmd_defs.h | 213 ++
drivers/net/ethernet/huawei/hinic3/include/mpu/nic_cfg_comm.h | 67 +
drivers/net/ethernet/huawei/hinic3/include/ossl_types.h | 144 +
drivers/net/ethernet/huawei/hinic3/include/public/npu_cmdq_base_defs.h | 232 ++
drivers/net/ethernet/huawei/hinic3/include/readme.txt | 1 +
drivers/net/ethernet/huawei/hinic3/include/vmsec/vmsec_mpu_common.h | 107 +
drivers/net/ethernet/huawei/hinic3/include/vram_common.h | 74 +
drivers/net/ethernet/huawei/hinic3/mag_mpu_cmd_defs.h | 1143 ++++++++
drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd.h | 174 ++
drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd_defs.h | 1420 ++++++++++
drivers/net/ethernet/huawei/hinic3/nic_npu_cmd.h | 36 +
drivers/net/ethernet/huawei/hinic3/ossl_knl.h | 39 +
drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h | 371 +++
131 files changed, 72437 insertions(+)
create mode 100644 drivers/net/ethernet/huawei/hinic3/Kconfig
create mode 100644 drivers/net/ethernet/huawei/hinic3/Makefile
create mode 100644 drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_define.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/cqm/readme.txt
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_crm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_filter.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_hw.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_main.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_mt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rx.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_tx.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_tx.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_wq.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/bond/bond_common_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/hinic3_common.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm_define.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/hinic3_lld.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/hinic3_profile.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mag_mpu_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_board_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_cmd_base_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_outband_ncsi_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/mpu/nic_cfg_comm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/ossl_types.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/public/npu_cmdq_base_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/readme.txt
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/vmsec/vmsec_mpu_common.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/include/vram_common.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/mag_mpu_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/nic_npu_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/ossl_knl.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h
diff --git a/drivers/net/ethernet/huawei/hinic3/Kconfig b/drivers/net/ethernet/huawei/hinic3/Kconfig
new file mode 100644
index 0000000..7208864
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Huawei driver configuration
+#
+
+config HINIC3
+ tristate "Huawei Intelligent Network Interface Card 3rd"
+ depends on PCI_MSI && NUMA && PCI_IOV && DCB && (X86 || ARM64)
+ help
+	  This driver supports HiNIC3 PCIe Ethernet cards.
+ To compile this driver as part of the kernel, choose Y here.
+ If unsure, choose N.
+ The default is N.
diff --git a/drivers/net/ethernet/huawei/hinic3/Makefile b/drivers/net/ethernet/huawei/hinic3/Makefile
new file mode 100644
index 0000000..21d8093
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/Makefile
@@ -0,0 +1,66 @@
+# SPDX-License-Identifier: GPL-2.0-only
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/hw/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/bond/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/cqm/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/cqm/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/public/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/mpu/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/bond/
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/include/vmsec/
+
+obj-$(CONFIG_HINIC3) += hinic3.o
+hinic3-objs := hw/hinic3_hwdev.o \
+ hw/hinic3_hw_cfg.o \
+ hw/hinic3_hw_comm.o \
+ hw/hinic3_prof_adap.o \
+ hw/hinic3_sriov.o \
+ hw/hinic3_lld.o \
+ hw/hinic3_dev_mgmt.o \
+ hw/hinic3_common.o \
+ hw/hinic3_hwif.o \
+ hw/hinic3_wq.o \
+ hw/hinic3_cmdq.o \
+ hw/hinic3_eqs.o \
+ hw/hinic3_mbox.o \
+ hw/hinic3_mgmt.o \
+ hw/hinic3_api_cmd.o \
+ hw/hinic3_hw_api.o \
+ hw/hinic3_sml_lt.o \
+ hw/hinic3_hw_mt.o \
+ hw/hinic3_nictool.o \
+ hw/hinic3_devlink.o \
+ hw/ossl_knl_linux.o \
+ hw/hinic3_multi_host_mgmt.o \
+ bond/hinic3_bond.o \
+ hinic3_main.o \
+ hinic3_tx.o \
+ hinic3_rx.o \
+ hinic3_rss.o \
+ hinic3_ntuple.o \
+ hinic3_dcb.o \
+ hinic3_ethtool.o \
+ hinic3_ethtool_stats.o \
+ hinic3_dbg.o \
+ hinic3_irq.o \
+ hinic3_filter.o \
+ hinic3_netdev_ops.o \
+ hinic3_nic_prof.o \
+ hinic3_nic_cfg.o \
+ hinic3_mag_cfg.o \
+ hinic3_nic_cfg_vf.o \
+ hinic3_rss_cfg.o \
+ hinic3_nic_event.o \
+ hinic3_nic_io.o \
+ hinic3_nic_dbg.o \
+ cqm/cqm_bat_cla.o \
+ cqm/cqm_bitmap_table.o \
+ cqm/cqm_object_intern.o \
+ cqm/cqm_bloomfilter.o \
+ cqm/cqm_cmd.o \
+ cqm/cqm_db.o \
+ cqm/cqm_object.o \
+ cqm/cqm_main.o \
+ cqm/cqm_memsec.o
diff --git a/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.c b/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.c
new file mode 100644
index 0000000..a252e09
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.c
@@ -0,0 +1,1125 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <net/sock.h>
+#include <net/bonding.h>
+#include <linux/rtnetlink.h>
+#include <linux/net.h>
+#include <linux/mutex.h>
+#include <linux/netdevice.h>
+#include <linux/version.h>
+
+#include "ossl_knl.h"
+#include "hinic3_lld.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_hw.h"
+#include "hinic3_bond.h"
+#include "hinic3_hwdev.h"
+
+#include "bond_common_defs.h"
+#include "vram_common.h"
+
+#define PORT_INVALID_ID 0xFF
+
+#define STATE_SYNCHRONIZATION_INDEX 3
+
+struct hinic3_bond_dev {
+ char name[BOND_NAME_MAX_LEN];
+ struct bond_attr bond_attr;
+ struct bond_attr new_attr;
+ struct bonding *bond;
+ void *ppf_hwdev;
+ struct kref ref;
+#define BOND_DEV_STATUS_IDLE 0x0
+#define BOND_DEV_STATUS_ACTIVATED 0x1
+ u8 status;
+ u8 slot_used[HINIC3_BOND_USER_NUM];
+ struct workqueue_struct *wq;
+ struct delayed_work bond_work;
+ struct bond_tracker tracker;
+ spinlock_t lock; /* lock for change status */
+};
+
+typedef void (*bond_service_func)(const char *bond_name, void *bond_attr,
+ enum bond_service_proc_pos pos);
+
+static DEFINE_MUTEX(g_bond_service_func_mutex);
+
+static bond_service_func g_bond_service_func[HINIC3_BOND_USER_NUM];
+
+struct hinic3_bond_mngr {
+ u32 cnt;
+ struct hinic3_bond_dev *bond_dev[BOND_MAX_NUM];
+ struct socket *rtnl_sock;
+};
+
+static struct hinic3_bond_mngr bond_mngr = { .cnt = 0 };
+static DEFINE_MUTEX(g_bond_mutex);
+
+static bool bond_dev_is_activated(const struct hinic3_bond_dev *bdev)
+{
+ return bdev->status == BOND_DEV_STATUS_ACTIVATED;
+}
+
+#define PCI_DBDF(dom, bus, dev, func) \
+ (((dom) << 16) | ((bus) << 8) | ((dev) << 3) | ((func) & 0x7))
+
+#ifdef __PCLINT__
+static inline bool netif_is_bond_master(const struct net_device *dev)
+{
+ return (dev->flags & IFF_MASTER) && (dev->priv_flags & IFF_BONDING);
+}
+#endif
+
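+/* Build an uplink id from the PCI domain/bus/device/function of the first
+ * slave port set in the bond's slave bitmap.
+ */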
+static u32 bond_gen_uplink_id(struct hinic3_bond_dev *bdev)
+{
+ u32 uplink_id = 0;
+ u8 i;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct pci_dev *pdev = NULL;
+ u32 domain, bus, dev, func;
+
+ spin_lock(&bdev->lock);
+ for (i = 0; i < BOND_PORT_MAX_NUM; i++) {
+ if (BITMAP_JUDGE(bdev->bond_attr.slaves, i)) {
+ if (!bdev->tracker.ndev[i])
+ continue;
+ nic_dev = netdev_priv(bdev->tracker.ndev[i]);
+ pdev = nic_dev->pdev;
+ domain = (u32)pci_domain_nr(pdev->bus);
+ bus = pdev->bus->number;
+ dev = PCI_SLOT(pdev->devfn);
+ func = PCI_FUNC(pdev->devfn);
+ uplink_id = PCI_DBDF(domain, bus, dev, func);
+ break;
+ }
+ }
+ spin_unlock(&bdev->lock);
+
+ return uplink_id;
+}
+
+static struct hinic3_nic_dev *get_nic_dev_safe(struct net_device *ndev)
+{
+ struct hinic3_lld_dev *lld_dev = NULL;
+
+ lld_dev = hinic3_get_lld_dev_by_netdev(ndev);
+ if (!lld_dev)
+ return NULL;
+
+ return netdev_priv(ndev);
+}
+
+static u8 bond_get_slaves_bitmap(struct hinic3_bond_dev *bdev, struct bonding *bond)
+{
+ struct slave *slave = NULL;
+ struct list_head *iter = NULL;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ u8 bitmap = 0;
+ u8 port_id;
+
+ rcu_read_lock();
+ bond_for_each_slave_rcu(bond, slave, iter) {
+ nic_dev = get_nic_dev_safe(slave->dev);
+ if (!nic_dev)
+ continue;
+
+ port_id = hinic3_physical_port_id(nic_dev->hwdev);
+ BITMAP_SET(bitmap, port_id);
+ (void)iter;
+ }
+ rcu_read_unlock();
+
+ return bitmap;
+}
+
+static void bond_update_attr(struct hinic3_bond_dev *bdev, struct bonding *bond)
+{
+ spin_lock(&bdev->lock);
+
+ bdev->new_attr.bond_mode = (u16)bond->params.mode;
+ bdev->new_attr.bond_id = bdev->bond_attr.bond_id;
+ bdev->new_attr.up_delay = (u16)bond->params.updelay;
+ bdev->new_attr.down_delay = (u16)bond->params.downdelay;
+ bdev->new_attr.slaves = 0;
+ bdev->new_attr.active_slaves = 0;
+ bdev->new_attr.lacp_collect_slaves = 0;
+ bdev->new_attr.first_roce_func = DEFAULT_ROCE_BOND_FUNC;
+
+ /* Only support L2/L34/L23 three policy */
+ if (bond->params.xmit_policy <= BOND_XMIT_POLICY_LAYER23)
+ bdev->new_attr.xmit_hash_policy = (u8)bond->params.xmit_policy;
+ else
+ bdev->new_attr.xmit_hash_policy = BOND_XMIT_POLICY_LAYER2;
+
+ bdev->new_attr.slaves = bond_get_slaves_bitmap(bdev, bond);
+
+ spin_unlock(&bdev->lock);
+}
+
+static u8 bond_get_netdev_idx(const struct hinic3_bond_dev *bdev,
+ const struct net_device *ndev)
+{
+ u8 i;
+
+ for (i = 0; i < BOND_PORT_MAX_NUM; i++) {
+ if (bdev->tracker.ndev[i] == ndev)
+ return i;
+ }
+
+ return PORT_INVALID_ID;
+}
+
+static u8 bond_dev_track_port(struct hinic3_bond_dev *bdev,
+ struct net_device *ndev)
+{
+ u8 port_id;
+ void *ppf_hwdev = NULL;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct hinic3_lld_dev *ppf_lld_dev = NULL;
+
+ nic_dev = get_nic_dev_safe(ndev);
+ if (!nic_dev) {
+ pr_warn("hinic3_bond: invalid slave: %s\n", ndev->name);
+ return PORT_INVALID_ID;
+ }
+
+ ppf_lld_dev = hinic3_get_ppf_lld_dev_unsafe(nic_dev->lld_dev);
+ if (ppf_lld_dev)
+ ppf_hwdev = ppf_lld_dev->hwdev;
+
+ pr_info("hinic3_bond: track ndev:%s", ndev->name);
+ port_id = hinic3_physical_port_id(nic_dev->hwdev);
+
+ spin_lock(&bdev->lock);
+ /* attach netdev to the port position associated with it */
+ if (bdev->tracker.ndev[port_id]) {
+ pr_warn("hinic3_bond: Old ndev:%s is replaced\n",
+ bdev->tracker.ndev[port_id]->name);
+ } else {
+ bdev->tracker.cnt++;
+ }
+ bdev->tracker.ndev[port_id] = ndev;
+ bdev->tracker.netdev_state[port_id].link_up = 0;
+ bdev->tracker.netdev_state[port_id].tx_enabled = 0;
+ if (!bdev->ppf_hwdev)
+ bdev->ppf_hwdev = ppf_hwdev;
+ pr_info("TRACK cnt: %d, slave_name(%s)\n", bdev->tracker.cnt, ndev->name);
+ spin_unlock(&bdev->lock);
+
+ return port_id;
+}
+
+static void bond_dev_untrack_port(struct hinic3_bond_dev *bdev, u8 idx)
+{
+ spin_lock(&bdev->lock);
+
+	if (bdev->tracker.ndev[idx]) {
+		bdev->tracker.cnt--;
+		pr_info("hinic3_bond: untrack port:%u ndev:%s cnt:%d\n", idx,
+			bdev->tracker.ndev[idx]->name, bdev->tracker.cnt);
+		bdev->tracker.ndev[idx] = NULL;
+	}
+
+ spin_unlock(&bdev->lock);
+}
+
+static void bond_slave_event(struct hinic3_bond_dev *bdev, struct slave *slave)
+{
+ u8 idx;
+
+ idx = bond_get_netdev_idx(bdev, slave->dev);
+ if (idx == PORT_INVALID_ID)
+ idx = bond_dev_track_port(bdev, slave->dev);
+ if (idx == PORT_INVALID_ID)
+ return;
+
+ spin_lock(&bdev->lock);
+ bdev->tracker.netdev_state[idx].link_up = bond_slave_is_up(slave);
+ bdev->tracker.netdev_state[idx].tx_enabled = bond_slave_is_up(slave) &&
+ bond_is_active_slave(slave);
+ spin_unlock(&bdev->lock);
+
+ queue_delayed_work(bdev->wq, &bdev->bond_work, 0);
+}
+
+static bool bond_eval_bonding_stats(const struct hinic3_bond_dev *bdev,
+ struct bonding *bond)
+{
+ int mode;
+
+ mode = BOND_MODE(bond);
+ if (mode != BOND_MODE_8023AD &&
+ mode != BOND_MODE_XOR &&
+ mode != BOND_MODE_ACTIVEBACKUP) {
+ pr_err("hinic3_bond: Wrong mode:%d\n", mode);
+ return false;
+ }
+
+ return bdev->tracker.cnt > 0;
+}
+
+static void bond_master_event(struct hinic3_bond_dev *bdev,
+ struct bonding *bond)
+{
+ spin_lock(&bdev->lock);
+ bdev->tracker.is_bonded = bond_eval_bonding_stats(bdev, bond);
+ spin_unlock(&bdev->lock);
+
+ queue_delayed_work(bdev->wq, &bdev->bond_work, 0);
+}
+
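+/* Look up the hinic3 bond device tracking this bonding master, matching by
+ * bonding pointer first and falling back to the interface name.
+ */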
+static struct hinic3_bond_dev *bond_get_bdev(struct bonding *bond)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ int bid;
+
+ if (bond == NULL) {
+ pr_err("hinic3_bond: bond is NULL\n");
+ return NULL;
+ }
+
+ mutex_lock(&g_bond_mutex);
+ for (bid = BOND_FIRST_ID; bid <= BOND_MAX_ID; bid++) {
+ bdev = bond_mngr.bond_dev[bid];
+ if (!bdev)
+ continue;
+
+ if (bond == bdev->bond) {
+ mutex_unlock(&g_bond_mutex);
+ return bdev;
+ }
+
+		if (strncmp(bond->dev->name, bdev->name, BOND_NAME_MAX_LEN) == 0) {
+			bdev->bond = bond;
+			mutex_unlock(&g_bond_mutex);
+			return bdev;
+		}
+ }
+ mutex_unlock(&g_bond_mutex);
+ return NULL;
+}
+
+static struct bonding *get_bonding_by_netdev(struct net_device *ndev)
+{
+ struct bonding *bond = NULL;
+ struct slave *slave = NULL;
+
+ if (netif_is_bond_master(ndev)) {
+ bond = netdev_priv(ndev);
+ } else if (netif_is_bond_slave(ndev)) {
+ slave = bond_slave_get_rtnl(ndev);
+ if (slave) {
+ bond = bond_get_bond_by_slave(slave);
+ }
+ }
+
+ return bond;
+}
+/*lint -e580 -e546*/
+bool hinic3_is_bond_dev_status_actived(struct net_device *ndev)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ struct bonding *bond = NULL;
+
+ if (!ndev) {
+ pr_err("hinic3_bond: netdev is NULL\n");
+ return false;
+ }
+
+ bond = get_bonding_by_netdev(ndev);
+ bdev = bond_get_bdev(bond);
+ if (!bdev)
+ return false;
+
+ return bdev->status == BOND_DEV_STATUS_ACTIVATED;
+}
+EXPORT_SYMBOL(hinic3_is_bond_dev_status_actived);
+/*lint +e580 +e546*/
+
+static void bond_handle_rtnl_event(struct net_device *ndev)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ struct bonding *bond = NULL;
+ struct slave *slave = NULL;
+
+ bond = get_bonding_by_netdev(ndev);
+ bdev = bond_get_bdev(bond);
+ if (!bdev)
+ return;
+
+ bond_update_attr(bdev, bond);
+
+ if (netif_is_bond_slave(ndev)) {
+ slave = bond_slave_get_rtnl(ndev);
+ bond_slave_event(bdev, slave);
+ } else {
+ bond_master_event(bdev, bond);
+ }
+}
+
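+/* sk_data_ready callback installed on the rtnetlink socket created in
+ * bond_enable_netdev_event(): parses RTM_NEWLINK notifications and hands the
+ * affected netdev to bond_handle_rtnl_event().
+ */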
+static void bond_rtnl_data_ready(struct sock *sk)
+{
+ struct net_device *ndev = NULL;
+ struct ifinfomsg *ifinfo = NULL;
+ struct nlmsghdr *hdr = NULL;
+ struct sk_buff *skb = NULL;
+ int err = 0;
+
+ skb = skb_recv_datagram(sk, 0, 0, &err);
+ if (err != 0 || !skb)
+ return;
+
+ hdr = (struct nlmsghdr *)skb->data;
+ if (!hdr ||
+ !NLMSG_OK(hdr, skb->len) ||
+ hdr->nlmsg_type != RTM_NEWLINK ||
+ !rtnl_is_locked()) {
+ goto free_skb;
+ }
+
+ ifinfo = nlmsg_data(hdr);
+ ndev = dev_get_by_index(&init_net, ifinfo->ifi_index);
+ if (ndev) {
+ bond_handle_rtnl_event(ndev);
+ dev_put(ndev);
+ }
+
+free_skb:
+ kfree_skb(skb);
+}
+
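+/* Open a kernel NETLINK_ROUTE socket subscribed to RTNLGRP_LINK and hook its
+ * data-ready callback so that link and bonding changes can be tracked.
+ */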
+static int bond_enable_netdev_event(void)
+{
+ struct sockaddr_nl addr = {
+ .nl_family = AF_NETLINK,
+ .nl_groups = RTNLGRP_LINK,
+ };
+ int err;
+ struct socket **rtnl_sock = &bond_mngr.rtnl_sock;
+
+ err = sock_create_kern(&init_net, AF_NETLINK, SOCK_DGRAM, NETLINK_ROUTE,
+ rtnl_sock);
+ if (err) {
+ pr_err("hinic3_bond: Couldn't create rtnl socket.\n");
+ *rtnl_sock = NULL;
+ return err;
+ }
+
+ (*rtnl_sock)->sk->sk_data_ready = bond_rtnl_data_ready;
+ (*rtnl_sock)->sk->sk_allocation = GFP_KERNEL;
+
+ err = kernel_bind(*rtnl_sock, (struct sockaddr *)(u8 *)&addr, sizeof(addr));
+ if (err) {
+ pr_err("hinic3_bond: Couldn't bind rtnl socket.\n");
+ sock_release(*rtnl_sock);
+ *rtnl_sock = NULL;
+ }
+
+ return err;
+}
+
+static void bond_disable_netdev_event(void)
+{
+ if (bond_mngr.rtnl_sock)
+ sock_release(bond_mngr.rtnl_sock);
+}
+
+static int bond_send_upcmd(struct hinic3_bond_dev *bdev, struct bond_attr *attr,
+ u8 cmd_type)
+{
+ int err, len;
+ struct hinic3_bond_cmd cmd = {0};
+ u16 out_size = sizeof(cmd);
+
+ cmd.sub_cmd = 0;
+ cmd.ret_status = 0;
+
+ if (attr) {
+ memcpy(&cmd.attr, attr, sizeof(*attr));
+ } else {
+ cmd.attr.bond_id = bdev->bond_attr.bond_id;
+ cmd.attr.slaves = bdev->bond_attr.slaves;
+ }
+
+ len = sizeof(cmd.bond_name);
+ if (cmd_type == MPU_CMD_BOND_CREATE) {
+ strscpy(cmd.bond_name, bdev->name, len);
+ cmd.bond_name[sizeof(cmd.bond_name) - 1] = '\0';
+ }
+
+ err = hinic3_msg_to_mgmt_sync(bdev->ppf_hwdev, HINIC3_MOD_OVS, cmd_type,
+ &cmd, sizeof(cmd), &cmd, &out_size, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err != 0 || !out_size || cmd.ret_status != 0) {
+ pr_err("hinic3_bond: uP cmd: %u failed, err: %d, sts: %u, out size: %u\n",
+ cmd_type, err, cmd.ret_status, out_size);
+ err = -EIO;
+ }
+
+ return err;
+}
+
+static int bond_upcmd_deactivate(struct hinic3_bond_dev *bdev)
+{
+ int err;
+ u16 id_tmp;
+
+ if (bdev->status == BOND_DEV_STATUS_IDLE)
+ return 0;
+
+ pr_info("hinic3_bond: deactivate bond: %u\n", bdev->bond_attr.bond_id);
+
+ err = bond_send_upcmd(bdev, NULL, MPU_CMD_BOND_DELETE);
+ if (err == 0) {
+ id_tmp = bdev->bond_attr.bond_id;
+ memset(&bdev->bond_attr, 0, sizeof(bdev->bond_attr));
+ bdev->status = BOND_DEV_STATUS_IDLE;
+ bdev->bond_attr.bond_id = id_tmp;
+ if (!bdev->tracker.cnt)
+ bdev->ppf_hwdev = NULL;
+ }
+
+ return err;
+}
+
+static void bond_pf_bitmap_set(struct hinic3_bond_dev *bdev, u8 index)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+ u8 pf_id;
+
+ nic_dev = netdev_priv(bdev->tracker.ndev[index]);
+ if (!nic_dev)
+ return;
+
+ pf_id = hinic3_pf_id_of_vf(nic_dev->hwdev);
+ BITMAP_SET(bdev->new_attr.bond_pf_bitmap, pf_id);
+}
+
+static void bond_update_slave_info(struct hinic3_bond_dev *bdev,
+ struct bond_attr *attr)
+{
+ struct net_device *ndev = NULL;
+ u8 i;
+
+ if (!netif_running(bdev->bond->dev))
+ return;
+
+ if (attr->bond_mode == BOND_MODE_ACTIVEBACKUP) {
+ rcu_read_lock();
+ ndev = bond_option_active_slave_get_rcu(bdev->bond);
+ rcu_read_unlock();
+ }
+
+ for (i = 0; i < BOND_PORT_MAX_NUM; i++) {
+ if (!BITMAP_JUDGE(attr->slaves, i)) {
+ if (BITMAP_JUDGE(bdev->bond_attr.slaves, i))
+ bond_dev_untrack_port(bdev, i);
+
+ continue;
+ }
+
+ if (!bdev->tracker.ndev[i])
+ continue;
+
+ bond_pf_bitmap_set(bdev, i);
+
+ if (!bdev->tracker.netdev_state[i].tx_enabled)
+ continue;
+
+ if (attr->bond_mode == BOND_MODE_8023AD) {
+ BITMAP_SET(attr->active_slaves, i);
+ BITMAP_SET(attr->lacp_collect_slaves, i);
+ } else if (attr->bond_mode == BOND_MODE_XOR) {
+ BITMAP_SET(attr->active_slaves, i);
+ } else if (ndev && (ndev == bdev->tracker.ndev[i])) {
+ /* BOND_MODE_ACTIVEBACKUP */
+ BITMAP_SET(attr->active_slaves, i);
+ break;
+ }
+ }
+}
+
+static int bond_upcmd_config(struct hinic3_bond_dev *bdev,
+ struct bond_attr *attr)
+{
+ int err;
+
+ bond_update_slave_info(bdev, attr);
+ attr->bond_pf_bitmap = bdev->new_attr.bond_pf_bitmap;
+
+ if (memcmp(&bdev->bond_attr, attr, sizeof(struct bond_attr)) == 0)
+ return 0;
+
+ pr_info("hinic3_bond: Config bond: %u\n", attr->bond_id);
+ pr_info("mode:%u, up_d:%u, down_d:%u, hash:%u, slaves:%u, ap:%u, cs:%u\n",
+ attr->bond_mode,
+ attr->up_delay,
+ attr->down_delay,
+ attr->xmit_hash_policy,
+ attr->slaves,
+ attr->active_slaves,
+ attr->lacp_collect_slaves);
+ pr_info("bond_pf_bitmap: 0x%x\n", attr->bond_pf_bitmap);
+ pr_info("bond user_bitmap 0x%x\n", attr->user_bitmap);
+
+ err = bond_send_upcmd(bdev, attr, MPU_CMD_BOND_SET_ATTR);
+ if (!err)
+ memcpy(&bdev->bond_attr, attr, sizeof(*attr));
+
+ return err;
+}
+
+static int bond_upcmd_activate(struct hinic3_bond_dev *bdev,
+ struct bond_attr *attr)
+{
+ int err;
+
+ if (bond_dev_is_activated(bdev))
+ return 0;
+
+ pr_info("hinic3_bond: active bond: %u\n", bdev->bond_attr.bond_id);
+
+ err = bond_send_upcmd(bdev, attr, MPU_CMD_BOND_CREATE);
+ if (err == 0) {
+ bdev->status = BOND_DEV_STATUS_ACTIVATED;
+ bdev->bond_attr.bond_mode = attr->bond_mode;
+ err = bond_upcmd_config(bdev, attr);
+ }
+
+ return err;
+}
+
+static void bond_call_service_func(struct hinic3_bond_dev *bdev, struct bond_attr *attr,
+ enum bond_service_proc_pos pos, int bond_status)
+{
+ int i;
+
+ if (bond_status)
+ return;
+
+ mutex_lock(&g_bond_service_func_mutex);
+ for (i = 0; i < HINIC3_BOND_USER_NUM; i++) {
+ if (g_bond_service_func[i])
+ g_bond_service_func[i](bdev->name, (void *)attr, pos);
+ }
+ mutex_unlock(&g_bond_service_func_mutex);
+}
+
+static u32 bond_get_user_bitmap(struct hinic3_bond_dev *bdev)
+{
+ u32 user_bitmap = 0;
+ u8 user;
+
+ for (user = HINIC3_BOND_USER_OVS; user < HINIC3_BOND_USER_NUM; user++) {
+ if (bdev->slot_used[user] == 1)
+ BITMAP_SET(user_bitmap, user);
+ }
+ return user_bitmap;
+}
+
+static void bond_do_work(struct hinic3_bond_dev *bdev)
+{
+ bool is_bonded = 0;
+ struct bond_attr attr;
+ int is_in_kexec;
+ int err = 0;
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ pr_info("Skip changing bond status during os replace\n");
+ return;
+ }
+
+ spin_lock(&bdev->lock);
+ is_bonded = bdev->tracker.is_bonded;
+ attr = bdev->new_attr;
+ spin_unlock(&bdev->lock);
+ attr.user_bitmap = bond_get_user_bitmap(bdev);
+
+ /* is_bonded indicates whether bond should be activated. */
+ if (is_bonded && !bond_dev_is_activated(bdev)) {
+ bond_call_service_func(bdev, &attr, BOND_BEFORE_ACTIVE, 0);
+ err = bond_upcmd_activate(bdev, &attr);
+ bond_call_service_func(bdev, &attr, BOND_AFTER_ACTIVE, err);
+ } else if (is_bonded && bond_dev_is_activated(bdev)) {
+ bond_call_service_func(bdev, &attr, BOND_BEFORE_MODIFY, 0);
+ err = bond_upcmd_config(bdev, &attr);
+ bond_call_service_func(bdev, &attr, BOND_AFTER_MODIFY, err);
+ } else if (!is_bonded && bond_dev_is_activated(bdev)) {
+ bond_call_service_func(bdev, &attr, BOND_BEFORE_DEACTIVE, 0);
+ err = bond_upcmd_deactivate(bdev);
+ bond_call_service_func(bdev, &attr, BOND_AFTER_DEACTIVE, err);
+ }
+
+ if (err)
+ pr_err("hinic3_bond: Do bond failed\n");
+}
+
+static void bond_try_do_work(struct work_struct *work)
+{
+ struct delayed_work *delayed_work = to_delayed_work(work);
+ struct hinic3_bond_dev *bdev =
+ container_of(delayed_work, struct hinic3_bond_dev, bond_work);
+ int status;
+
+ status = mutex_trylock(&g_bond_mutex);
+ if (status == 0) {
+ /* Delay 1 sec and retry */
+ queue_delayed_work(bdev->wq, &bdev->bond_work, HZ);
+ } else {
+ bond_do_work(bdev);
+ mutex_unlock(&g_bond_mutex);
+ }
+}
+
+static int bond_dev_init(struct hinic3_bond_dev *bdev, const char *name)
+{
+ bdev->wq = create_singlethread_workqueue("hinic3_bond_wq");
+ if (!bdev->wq) {
+ pr_err("hinic3_bond: Failed to create workqueue\n");
+ return -ENODEV;
+ }
+
+ INIT_DELAYED_WORK(&bdev->bond_work, bond_try_do_work);
+ bdev->status = BOND_DEV_STATUS_IDLE;
+ strscpy(bdev->name, name, sizeof(bdev->name));
+
+ spin_lock_init(&bdev->lock);
+
+ return 0;
+}
+
+static int bond_dev_release(struct hinic3_bond_dev *bdev)
+{
+ int err;
+ u8 i;
+ u32 bond_cnt;
+
+ err = bond_upcmd_deactivate(bdev);
+ if (err) {
+ pr_err("hinic3_bond: Failed to deactivate dev\n");
+ mutex_unlock(&g_bond_mutex);
+ return err;
+ }
+
+ for (i = BOND_FIRST_ID; i <= BOND_MAX_ID; i++) {
+ if (bond_mngr.bond_dev[i] == bdev) {
+ bond_mngr.bond_dev[i] = NULL;
+ bond_mngr.cnt--;
+ pr_info("hinic3_bond: Free bond, id: %u mngr_cnt:%u\n", i, bond_mngr.cnt);
+ break;
+ }
+ }
+
+ bond_cnt = bond_mngr.cnt;
+ mutex_unlock(&g_bond_mutex);
+ if (!bond_cnt)
+ bond_disable_netdev_event();
+
+ cancel_delayed_work_sync(&bdev->bond_work);
+ destroy_workqueue(bdev->wq);
+ kfree(bdev);
+
+ return err;
+}
+
+static void bond_dev_free(struct kref *ref)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+
+ bdev = container_of(ref, struct hinic3_bond_dev, ref);
+ bond_dev_release(bdev);
+}
+
+static struct hinic3_bond_dev *bond_dev_alloc(const char *name)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ u16 i;
+ int err;
+
+ bdev = kzalloc(sizeof(*bdev), GFP_KERNEL);
+ if (!bdev) {
+ mutex_unlock(&g_bond_mutex);
+ return NULL;
+ }
+
+ err = bond_dev_init(bdev, name);
+ if (err) {
+ kfree(bdev);
+ mutex_unlock(&g_bond_mutex);
+ return NULL;
+ }
+
+ if (!bond_mngr.cnt) {
+ err = bond_enable_netdev_event();
+ if (err) {
+ bond_dev_release(bdev);
+ return NULL;
+ }
+ }
+
+ for (i = BOND_FIRST_ID; i <= BOND_MAX_ID; i++) {
+ if (!bond_mngr.bond_dev[i]) {
+ bdev->bond_attr.bond_id = i;
+ bond_mngr.bond_dev[i] = bdev;
+ bond_mngr.cnt++;
+ pr_info("hinic3_bond: Create bond dev, id:%u cnt:%u\n", i, bond_mngr.cnt);
+ break;
+ }
+ }
+
+ if (i > BOND_MAX_ID) {
+ bond_dev_release(bdev);
+ bdev = NULL;
+ pr_err("hinic3_bond: Failed to get free bond id\n");
+ }
+
+ return bdev;
+}
+
+static void update_bond_info(struct hinic3_bond_dev *bdev, struct bonding *bond)
+{
+ struct slave *slave = NULL;
+ struct list_head *iter = NULL;
+ struct net_device *ndev[BOND_PORT_MAX_NUM];
+ int i = 0;
+
+ bdev->bond = bond;
+
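+ /* Hold a reference on each tracked slave so it cannot go away while the
+ * delayed work is flushed below; the references are dropped afterwards.
+ */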
+ rtnl_lock();
+ bond_for_each_slave(bond, slave, iter) {
+ if (bond_dev_track_port(bdev, slave->dev) == PORT_INVALID_ID)
+ continue;
+ ndev[i] = slave->dev;
+ dev_hold(ndev[i++]);
+ if (i >= BOND_PORT_MAX_NUM)
+ break;
+ (void)iter;
+ }
+
+ bond_for_each_slave(bond, slave, iter) {
+ bond_handle_rtnl_event(slave->dev);
+ (void)iter;
+ }
+
+ bond_handle_rtnl_event(bond->dev);
+
+ rtnl_unlock();
+ /* In case user queries info before bonding is complete */
+ flush_delayed_work(&bdev->bond_work);
+
+ rtnl_lock();
+ while (i)
+ dev_put(ndev[--i]);
+ rtnl_unlock();
+}
+
+static struct hinic3_bond_dev *bond_dev_by_name(const char *name)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ int i;
+
+ for (i = BOND_FIRST_ID; i <= BOND_MAX_ID; i++) {
+ if (bond_mngr.bond_dev[i] &&
+ (strcmp(bond_mngr.bond_dev[i]->name, name) == 0)) {
+ bdev = bond_mngr.bond_dev[i];
+ break;
+ }
+ }
+
+ return bdev;
+}
+
+static void bond_dev_user_attach(struct hinic3_bond_dev *bdev,
+ enum hinic3_bond_user user)
+{
+ u32 user_bitmap;
+
+ if (user < 0 || user >= HINIC3_BOND_USER_NUM)
+ return;
+
+ if (bdev->slot_used[user])
+ return;
+
+ bdev->slot_used[user] = 1;
+ if (!kref_get_unless_zero(&bdev->ref))
+ kref_init(&bdev->ref);
+ else {
+ user_bitmap = bond_get_user_bitmap(bdev);
+ pr_info("hinic3_bond: user %u attach bond %s, user_bitmap %#x\n",
+ user, bdev->name, user_bitmap);
+ queue_delayed_work(bdev->wq, &bdev->bond_work, 0);
+ }
+}
+
+static void bond_dev_user_detach(struct hinic3_bond_dev *bdev,
+ enum hinic3_bond_user user, bool *freed)
+{
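+ /* The final kref_put() ends up in bond_dev_release(), which drops
+ * g_bond_mutex itself; *freed tells the caller not to unlock it again.
+ */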
+ if (bdev->slot_used[user]) {
+ bdev->slot_used[user] = 0;
+ if (kref_read(&bdev->ref) == 1)
+ *freed = true;
+ kref_put(&bdev->ref, bond_dev_free);
+ }
+}
+
+static struct bonding *bond_get_knl_bonding(const char *name)
+{
+ struct net_device *ndev_tmp = NULL;
+
+ rcu_read_lock();
+ for_each_netdev(&init_net, ndev_tmp) {
+ if (netif_is_bond_master(ndev_tmp) &&
+ !strcmp(ndev_tmp->name, name)) {
+ rcu_read_unlock();
+ return netdev_priv(ndev_tmp);
+ }
+ }
+ rcu_read_unlock();
+ return NULL;
+}
+
+void hinic3_bond_set_user_bitmap(struct bond_attr *attr, enum hinic3_bond_user user)
+{
+ if (!BITMAP_JUDGE(attr->user_bitmap, user))
+ BITMAP_SET(attr->user_bitmap, user);
+}
+EXPORT_SYMBOL(hinic3_bond_set_user_bitmap);
+
+int hinic3_bond_attach(const char *name, enum hinic3_bond_user user,
+ u16 *bond_id)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ struct bonding *bond = NULL;
+ bool new_dev = false;
+
+ if (!name || !bond_id)
+ return -EINVAL;
+
+ bond = bond_get_knl_bonding(name);
+ if (!bond) {
+ pr_warn("hinic3_bond: Kernel bond %s not exist.\n", name);
+ return -ENODEV;
+ }
+
+ mutex_lock(&g_bond_mutex);
+ bdev = bond_dev_by_name(name);
+ if (!bdev) {
+ bdev = bond_dev_alloc(name);
+ new_dev = true;
+ } else {
+ pr_info("hinic3_bond: %s already exist\n", name);
+ }
+
+ if (!bdev) {
+ /* lock has been released in bond_dev_alloc */
+ return -ENODEV;
+ }
+
+ bond_dev_user_attach(bdev, user);
+ mutex_unlock(&g_bond_mutex);
+
+ if (new_dev)
+ update_bond_info(bdev, bond);
+
+ *bond_id = bdev->bond_attr.bond_id;
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_bond_attach);
+
+int hinic3_bond_detach(u16 bond_id, enum hinic3_bond_user user)
+{
+ int err = 0;
+ bool lock_freed = false;
+
+ if (!BOND_ID_IS_VALID(bond_id) || user >= HINIC3_BOND_USER_NUM) {
+ pr_warn("hinic3_bond: Invalid bond id or user, bond_id: %u, user: %d\n",
+ bond_id, user);
+ return -EINVAL;
+ }
+
+ mutex_lock(&g_bond_mutex);
+ if (!bond_mngr.bond_dev[bond_id])
+ err = -ENODEV;
+ else
+ bond_dev_user_detach(bond_mngr.bond_dev[bond_id], user, &lock_freed);
+
+ if (!lock_freed)
+ mutex_unlock(&g_bond_mutex);
+ return err;
+}
+EXPORT_SYMBOL(hinic3_bond_detach);
+
+void hinic3_bond_clean_user(enum hinic3_bond_user user)
+{
+ int i = 0;
+ bool lock_freed = false;
+
+ mutex_lock(&g_bond_mutex);
+ for (i = BOND_FIRST_ID; i <= BOND_MAX_ID; i++) {
+ if (bond_mngr.bond_dev[i]) {
+ bond_dev_user_detach(bond_mngr.bond_dev[i], user, &lock_freed);
+ if (lock_freed) {
+ mutex_lock(&g_bond_mutex);
+ lock_freed = false;
+ }
+ }
+ }
+ if (!lock_freed)
+ mutex_unlock(&g_bond_mutex);
+}
+EXPORT_SYMBOL(hinic3_bond_clean_user);
+
+int hinic3_bond_get_uplink_id(u16 bond_id, u32 *uplink_id)
+{
+ if (!BOND_ID_IS_VALID(bond_id) || !uplink_id) {
+ pr_warn("hinic3_bond: Invalid args, id: %u, uplink: %d\n",
+ bond_id, !!uplink_id);
+ return -EINVAL;
+ }
+
+ mutex_lock(&g_bond_mutex);
+ if (bond_mngr.bond_dev[bond_id])
+ *uplink_id = bond_gen_uplink_id(bond_mngr.bond_dev[bond_id]);
+ mutex_unlock(&g_bond_mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_bond_get_uplink_id);
+
+int hinic3_bond_register_service_func(enum hinic3_bond_user user, void (*func)
+ (const char *bond_name, void *bond_attr,
+ enum bond_service_proc_pos pos))
+{
+ if (user >= HINIC3_BOND_USER_NUM)
+ return -EINVAL;
+
+ mutex_lock(&g_bond_service_func_mutex);
+ g_bond_service_func[user] = func;
+ mutex_unlock(&g_bond_service_func_mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_bond_register_service_func);
+
+int hinic3_bond_unregister_service_func(enum hinic3_bond_user user)
+{
+ if (user >= HINIC3_BOND_USER_NUM)
+ return -EINVAL;
+
+ mutex_lock(&g_bond_service_func_mutex);
+ g_bond_service_func[user] = NULL;
+ mutex_unlock(&g_bond_service_func_mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_bond_unregister_service_func);
+
+int hinic3_bond_get_slaves(u16 bond_id, struct hinic3_bond_info_s *info)
+{
+ struct bond_tracker *tracker = NULL;
+ int size;
+ int i;
+ int len;
+
+ if (!info || !BOND_ID_IS_VALID(bond_id)) {
+ pr_warn("hinic3_bond: Invalid args, info: %d,id: %u\n",
+ !!info, bond_id);
+ return -EINVAL;
+ }
+
+ size = ARRAY_LEN(info->slaves_name);
+ if (size < BOND_PORT_MAX_NUM) {
+ pr_warn("hinic3_bond: Invalid args, size: %u\n",
+ size);
+ return -EINVAL;
+ }
+
+ mutex_lock(&g_bond_mutex);
+ if (bond_mngr.bond_dev[bond_id]) {
+ info->slaves = bond_mngr.bond_dev[bond_id]->bond_attr.slaves;
+ tracker = &bond_mngr.bond_dev[bond_id]->tracker;
+ info->cnt = 0;
+ for (i = 0; i < BOND_PORT_MAX_NUM; i++) {
+ if (BITMAP_JUDGE(info->slaves, i) && tracker->ndev[i]) {
+ len = sizeof(info->slaves_name[0]);
+ strscpy(info->slaves_name[info->cnt], tracker->ndev[i]->name, len);
+ info->cnt++;
+ }
+ }
+ }
+ mutex_unlock(&g_bond_mutex);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_bond_get_slaves);
+
+struct net_device *hinic3_bond_get_netdev_by_portid(const char *bond_name, u8 port_id)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+
+ if (port_id >= BOND_PORT_MAX_NUM)
+ return NULL;
+ mutex_lock(&g_bond_mutex);
+ bdev = bond_dev_by_name(bond_name);
+ if (!bdev) {
+ mutex_unlock(&g_bond_mutex);
+ return NULL;
+ }
+ mutex_unlock(&g_bond_mutex);
+ return bdev->tracker.ndev[port_id];
+}
+EXPORT_SYMBOL(hinic3_bond_get_netdev_by_portid);
+
+int hinic3_get_hw_bond_infos(void *hwdev, struct hinic3_hw_bond_infos *infos, u16 channel)
+{
+ struct comm_cmd_hw_bond_infos bond_infos;
+ u16 out_size = sizeof(bond_infos);
+ int err;
+
+ if (!hwdev || !infos)
+ return -EINVAL;
+
+ memset(&bond_infos, 0, sizeof(bond_infos));
+
+ bond_infos.infos.bond_id = infos->bond_id;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, COMM_MGMT_CMD_GET_HW_BOND,
+ &bond_infos, sizeof(bond_infos),
+ &bond_infos, &out_size, 0, channel);
+ if (bond_infos.head.status || err || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get hw bond information, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, bond_infos.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ memcpy(infos, &bond_infos.infos, sizeof(*infos));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_hw_bond_infos);
+
+int hinic3_get_bond_tracker_by_name(const char *name, struct bond_tracker *tracker)
+{
+ struct hinic3_bond_dev *bdev = NULL;
+ int i;
+
+ mutex_lock(&g_bond_mutex);
+ for (i = BOND_FIRST_ID; i <= BOND_MAX_ID; i++) {
+ if (bond_mngr.bond_dev[i] &&
+ (strcmp(bond_mngr.bond_dev[i]->name, name) == 0)) {
+ bdev = bond_mngr.bond_dev[i];
+ spin_lock(&bdev->lock);
+ *tracker = bdev->tracker;
+ spin_unlock(&bdev->lock);
+ mutex_unlock(&g_bond_mutex);
+ return 0;
+ }
+ }
+ mutex_unlock(&g_bond_mutex);
+ return -ENODEV;
+}
+EXPORT_SYMBOL(hinic3_get_bond_tracker_by_name);
diff --git a/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.h b/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.h
new file mode 100644
index 0000000..5ab36f7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/bond/hinic3_bond.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_BOND_H
+#define HINIC3_BOND_H
+
+#include <linux/netdevice.h>
+#include <linux/types.h>
+#include "mpu_inband_cmd_defs.h"
+#include "bond_common_defs.h"
+
+enum hinic3_bond_user {
+ HINIC3_BOND_USER_OVS,
+ HINIC3_BOND_USER_TOE,
+ HINIC3_BOND_USER_ROCE,
+ HINIC3_BOND_USER_NUM
+};
+
+enum bond_service_proc_pos {
+ BOND_BEFORE_ACTIVE,
+ BOND_AFTER_ACTIVE,
+ BOND_BEFORE_MODIFY,
+ BOND_AFTER_MODIFY,
+ BOND_BEFORE_DEACTIVE,
+ BOND_AFTER_DEACTIVE,
+ BOND_POS_MAX
+};
+
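+/* Bit helpers for the slave/user bitmaps used throughout this module,
+ * e.g. BITMAP_SET(attr->slaves, 2) marks port 2 as a bond member and
+ * BITMAP_JUDGE() tests that bit.
+ */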
+#define BITMAP_SET(bm, bit) ((bm) |= (typeof(bm))(1U << (bit)))
+#define BITMAP_CLR(bm, bit) ((bm) &= ~((typeof(bm))(1U << (bit))))
+#define BITMAP_JUDGE(bm, bit) ((bm) & (typeof(bm))(1U << (bit)))
+
+#define MPU_CMD_BOND_CREATE 17
+#define MPU_CMD_BOND_DELETE 18
+#define MPU_CMD_BOND_SET_ATTR 19
+#define MPU_CMD_BOND_GET_ATTR 20
+
+#define HINIC3_MAX_PORT 4
+#define HINIC3_IFNAMSIZ 16
+struct hinic3_bond_info_s {
+ u8 slaves;
+ u8 cnt;
+ u8 srv[2];
+ char slaves_name[HINIC3_MAX_PORT][HINIC3_IFNAMSIZ];
+};
+
+#pragma pack(push, 1)
+struct netdev_lower_state_info {
+ u8 link_up : 1;
+ u8 tx_enabled : 1;
+ u8 rsvd : 6;
+};
+
+#pragma pack(pop)
+
+struct bond_tracker {
+ struct netdev_lower_state_info netdev_state[BOND_PORT_MAX_NUM];
+ struct net_device *ndev[BOND_PORT_MAX_NUM];
+ u8 cnt;
+ bool is_bonded;
+};
+
+struct bond_attr {
+ u16 bond_mode;
+ u16 bond_id;
+ u16 up_delay;
+ u16 down_delay;
+ u8 active_slaves;
+ u8 slaves;
+ u8 lacp_collect_slaves;
+ u8 xmit_hash_policy;
+ u32 first_roce_func;
+ u32 bond_pf_bitmap;
+ u32 user_bitmap;
+};
+
+struct hinic3_bond_cmd {
+ u8 ret_status;
+ u8 version;
+ u16 sub_cmd;
+ struct bond_attr attr;
+ char bond_name[16];
+};
+
+bool hinic3_is_bond_dev_status_actived(struct net_device *ndev);
+void hinic3_bond_set_user_bitmap(struct bond_attr *attr, enum hinic3_bond_user user);
+int hinic3_bond_attach(const char *name, enum hinic3_bond_user user, u16 *bond_id);
+int hinic3_bond_detach(u16 bond_id, enum hinic3_bond_user user);
+void hinic3_bond_clean_user(enum hinic3_bond_user user);
+int hinic3_bond_get_uplink_id(u16 bond_id, u32 *uplink_id);
+int hinic3_bond_register_service_func(enum hinic3_bond_user user, void (*func)
+ (const char *bond_name, void *bond_attr,
+ enum bond_service_proc_pos pos));
+int hinic3_bond_unregister_service_func(enum hinic3_bond_user user);
+int hinic3_bond_get_slaves(u16 bond_id, struct hinic3_bond_info_s *info);
+struct net_device *hinic3_bond_get_netdev_by_portid(const char *bond_name, u8 port_id);
+int hinic3_get_hw_bond_infos(void *hwdev, struct hinic3_hw_bond_infos *infos, u16 channel);
+int hinic3_get_bond_tracker_by_name(const char *name, struct bond_tracker *tracker);
+#endif /* HINIC3_BOND_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.c
new file mode 100644
index 0000000..1562c59
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.c
@@ -0,0 +1,2015 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/kernel.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_cmd.h"
+#include "cqm_object_intern.h"
+#include "cqm_main.h"
+#include "cqm_memsec.h"
+#include "cqm_bat_cla.h"
+
+#include "cqm_npu_cmd.h"
+#include "cqm_npu_cmd_defs.h"
+
+#include "vram_common.h"
+
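+/* Pack the GPA of the CLA root page into the BAT entry: the high word carries
+ * the vf2pf routing bits (acs_spu_en, fake_vf_en, pf_id), and bit 0 of the low
+ * word is used as the GPA-valid flag when gpa_check is enabled.
+ */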
+static void cqm_bat_fill_cla_common_gpa(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ struct tag_cqm_bat_entry_standerd *bat_entry_standerd)
+{
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ struct hinic3_func_attr *func_attr = NULL;
+ struct tag_cqm_bat_entry_vf2pf gpa = {0};
+ u32 cla_gpa_h = 0;
+ dma_addr_t pa;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0)
+ pa = cla_table->cla_z_buf.buf_list[0].pa;
+ else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
+ pa = cla_table->cla_y_buf.buf_list[0].pa;
+ else
+ pa = cla_table->cla_x_buf.buf_list[0].pa;
+
+ gpa.cla_gpa_h = CQM_ADDR_HI(pa) & CQM_CHIP_GPA_HIMASK;
+
+ /* On the SPU, the value of spu_en in the GPA address
+ * in the BAT is determined by the host ID and func ID.
+ */
+ if (hinic3_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
+ func_attr = &cqm_handle->func_attribute;
+ gpa.acs_spu_en = func_attr->func_global_idx & 0x1;
+ } else {
+ gpa.acs_spu_en = 0;
+ }
+
+ /* In fake mode, fake_vf_en in the GPA address of the BAT
+ * must be set to 1.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD) {
+ gpa.fake_vf_en = 1;
+ func_attr = &cqm_handle->parent_cqm_handle->func_attribute;
+ gpa.pf_id = func_attr->func_global_idx;
+ } else {
+ gpa.fake_vf_en = 0;
+ }
+
+ memcpy(&cla_gpa_h, &gpa, sizeof(u32));
+ bat_entry_standerd->cla_gpa_h = cla_gpa_h;
+
+ /* GPA is valid when gpa[0] = 1.
+ * CQM_BAT_ENTRY_T_REORDER does not support GPA validity check.
+ */
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa);
+ else
+ bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa) |
+ gpa_check_enable;
+
+ cqm_info(handle->dev_hdl, "Cla type %u, pa 0x%llx, gpa 0x%x-0x%x, level %u\n",
+ cla_table->type, pa, bat_entry_standerd->cla_gpa_h, bat_entry_standerd->cla_gpa_l,
+ bat_entry_standerd->cla_level);
+}
+
+static void cqm_bat_fill_cla_common(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u8 *entry_base_addr)
+{
+ struct tag_cqm_bat_entry_standerd *bat_entry_standerd = NULL;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 cache_line = 0;
+
+ /* The cacheline of the timer is changed to 512. */
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't init bat entry\n",
+ cla_table->type);
+ return;
+ }
+
+ bat_entry_standerd = (struct tag_cqm_bat_entry_standerd *)entry_base_addr;
+
+ /* QPC entries may be 256/512/1024 bytes and timer entries are 512 bytes;
+ * all other types use a 256B cacheline.
+ * The conversion is performed inside the chip.
+ */
+ if (cla_table->obj_size > cache_line) {
+ if (cla_table->obj_size == CQM_OBJECT_512)
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
+ else
+ bat_entry_standerd->entry_size =
+ CQM_BAT_ENTRY_SIZE_1024;
+ bat_entry_standerd->max_number = cla_table->max_buffer_size /
+ cla_table->obj_size;
+ } else {
+ if (cache_line == CQM_CHIP_CACHELINE) {
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_256;
+ bat_entry_standerd->max_number =
+ cla_table->max_buffer_size / cache_line;
+ } else {
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
+ bat_entry_standerd->max_number =
+ cla_table->max_buffer_size / cache_line;
+ }
+ }
+
+ bat_entry_standerd->max_number = bat_entry_standerd->max_number - 1;
+
+ bat_entry_standerd->bypass = CQM_BAT_NO_BYPASS_CACHE;
+ bat_entry_standerd->z = cla_table->cacheline_z;
+ bat_entry_standerd->y = cla_table->cacheline_y;
+ bat_entry_standerd->x = cla_table->cacheline_x;
+ bat_entry_standerd->cla_level = cla_table->cla_lvl;
+
+ cqm_bat_fill_cla_common_gpa(cqm_handle, cla_table, bat_entry_standerd);
+}
+
+static void cqm_bat_fill_cla_cfg(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct tag_cqm_bat_entry_cfg *bat_entry_cfg = NULL;
+
+ bat_entry_cfg = (struct tag_cqm_bat_entry_cfg *)(*entry_base_addr);
+ bat_entry_cfg->cur_conn_cache = 0;
+ bat_entry_cfg->max_conn_cache =
+ func_cap->flow_table_based_conn_cache_number;
+ bat_entry_cfg->cur_conn_num_h_4 = 0;
+ bat_entry_cfg->cur_conn_num_l_16 = 0;
+ bat_entry_cfg->max_conn_num = func_cap->flow_table_based_conn_number;
+
+ /* The bucket count is expressed in units of 64 buckets (shifted right
+ * by 6 bits). The field is 16 bits wide, so at most 4M buckets are
+ * supported. The stored value is count - 1 so that it can be ANDed
+ * directly with the hash value as a mask.
+ */
+ if ((func_cap->hash_number >> CQM_HASH_NUMBER_UNIT) != 0) {
+ bat_entry_cfg->bucket_num = ((func_cap->hash_number >>
+ CQM_HASH_NUMBER_UNIT) - 1);
+ }
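+ /* For example, assuming CQM_HASH_NUMBER_UNIT is 6 as described above,
+ * hash_number = 0x10000 (64K) gives bucket_num = (0x10000 >> 6) - 1 = 0x3ff.
+ */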
+ if (func_cap->bloomfilter_length != 0) {
+ bat_entry_cfg->bloom_filter_len = func_cap->bloomfilter_length -
+ 1;
+ bat_entry_cfg->bloom_filter_addr = func_cap->bloomfilter_addr;
+ }
+
+ (*entry_base_addr) += sizeof(struct tag_cqm_bat_entry_cfg);
+}
+
+static void cqm_bat_fill_cla_other(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ cqm_bat_fill_cla_common(cqm_handle, cla_table, *entry_base_addr);
+
+ (*entry_base_addr) += sizeof(struct tag_cqm_bat_entry_standerd);
+}
+
+static void cqm_bat_fill_cla_taskmap(struct tag_cqm_handle *cqm_handle,
+ const struct tag_cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ struct tag_cqm_bat_entry_taskmap *bat_entry_taskmap = NULL;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ int i;
+
+ if (cqm_handle->func_capability.taskmap_number != 0) {
+ bat_entry_taskmap =
+ (struct tag_cqm_bat_entry_taskmap *)(*entry_base_addr);
+ for (i = 0; i < CQM_BAT_ENTRY_TASKMAP_NUM; i++) {
+ bat_entry_taskmap->addr[i].gpa_h =
+ (u32)(cla_table->cla_z_buf.buf_list[i].pa >>
+ CQM_CHIP_GPA_HSHIFT);
+ bat_entry_taskmap->addr[i].gpa_l =
+ (u32)(cla_table->cla_z_buf.buf_list[i].pa &
+ CQM_CHIP_GPA_LOMASK);
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: taskmap bat entry: 0x%x 0x%x\n",
+ bat_entry_taskmap->addr[i].gpa_h,
+ bat_entry_taskmap->addr[i].gpa_l);
+ }
+ }
+
+ (*entry_base_addr) += sizeof(struct tag_cqm_bat_entry_taskmap);
+}
+
+static void cqm_bat_fill_cla_timer(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ /* Only the PPF allocates timer resources. */
+ if (cqm_handle->func_attribute.func_type != CQM_PPF) {
+ (*entry_base_addr) += CQM_BAT_ENTRY_SIZE;
+ } else {
+ cqm_bat_fill_cla_common(cqm_handle, cla_table,
+ *entry_base_addr);
+
+ (*entry_base_addr) += sizeof(struct tag_cqm_bat_entry_standerd);
+ }
+}
+
+static void cqm_bat_fill_cla_invalid(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ (*entry_base_addr) += CQM_BAT_ENTRY_SIZE;
+}
+
+/**
+ * cqm_bat_fill_cla - Fill the base address of the CLA table into the BAT table
+ * @cqm_handle: CQM handle
+ */
+static void cqm_bat_fill_cla(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
+ u8 *entry_base_addr = NULL;
+ u32 i = 0;
+
+ /* Fills each item in the BAT table according to the BAT format. */
+ entry_base_addr = bat_table->bat;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cqm_dbg("entry_base_addr = %p\n", entry_base_addr);
+ entry_type = bat_table->bat_entry_type[i];
+ cla_table = &bat_table->entry[i];
+
+ if (entry_type == CQM_BAT_ENTRY_T_CFG) {
+ cqm_bat_fill_cla_cfg(cqm_handle, cla_table, &entry_base_addr);
+ } else if (entry_type == CQM_BAT_ENTRY_T_TASKMAP) {
+ cqm_bat_fill_cla_taskmap(cqm_handle, cla_table, &entry_base_addr);
+ } else if (entry_type == CQM_BAT_ENTRY_T_INVALID) {
+ cqm_bat_fill_cla_invalid(cqm_handle, cla_table, &entry_base_addr);
+ } else if (entry_type == CQM_BAT_ENTRY_T_TIMER) {
+ if (cqm_handle->func_attribute.func_type == CQM_PPF &&
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
+ entry_base_addr += sizeof(struct tag_cqm_bat_entry_standerd);
+ continue;
+ }
+
+ cqm_bat_fill_cla_timer(cqm_handle, cla_table,
+ &entry_base_addr);
+ } else {
+ cqm_bat_fill_cla_other(cqm_handle, cla_table, &entry_base_addr);
+ }
+
+ /* Check whether entry_base_addr is out-of-bounds array. */
+ if (entry_base_addr >=
+ (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
+ break;
+ }
+}
+
+u32 cqm_funcid2smfid(const struct tag_cqm_handle *cqm_handle)
+{
+ u32 funcid = 0;
+ u32 smf_sel = 0;
+ u32 smf_id = 0;
+ u32 smf_pg_partial = 0;
+ /* SMF_Selection is selected based on
+ * the lower two bits of the function id
+ */
+ u32 lbf_smfsel[4] = {0, 2, 1, 3};
+ /* SMFID is selected based on SMF_PG[1:0] and SMF_Selection(0-1) */
+ u32 smfsel_smfid01[4][2] = { {0, 0}, {0, 0}, {1, 1}, {0, 1} };
+ /* SMFID is selected based on SMF_PG[3:2] and SMF_Selection(2-4) */
+ u32 smfsel_smfid23[4][2] = { {2, 2}, {2, 2}, {3, 3}, {2, 3} };
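+ /* Example: with func_global_idx = 1, funcid = 1 and smf_sel = 2; if both
+ * SMF2 and SMF3 are enabled (SMF_PG[3:2] = 0x3), smf_id resolves to 2.
+ */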
+
+ /* When the LB mode is disabled, SMF0 is always returned. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL) {
+ smf_id = 0;
+ } else {
+ funcid = cqm_handle->func_attribute.func_global_idx & 0x3;
+ smf_sel = lbf_smfsel[funcid];
+
+ if (smf_sel < 0x2) {
+ smf_pg_partial = cqm_handle->func_capability.smf_pg &
+ 0x3;
+ smf_id = smfsel_smfid01[smf_pg_partial][smf_sel];
+ } else {
+ smf_pg_partial =
+ /* shift to right by 2 bits */
+ (cqm_handle->func_capability.smf_pg >> 2) & 0x3;
+ smf_id = smfsel_smfid23[smf_pg_partial][smf_sel - 0x2];
+ }
+ }
+
+ return smf_id;
+}
+
+/* Used in LB mode 1/2: each of the 4 SMFs has an independent timer space,
+ * so the timer spoker info must be configured for every SMF.
+ */
+static void cqm_update_timer_gpa(struct tag_cqm_handle *cqm_handle, u32 smf_id)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
+ u8 *entry_base_addr = NULL;
+ u32 i = 0;
+
+ if (cqm_handle->func_attribute.func_type != CQM_PPF)
+ return;
+
+ if (cqm_handle->func_capability.lb_mode != CQM_LB_MODE_1 &&
+ cqm_handle->func_capability.lb_mode != CQM_LB_MODE_2)
+ return;
+
+ cla_table = &bat_table->timer_entry[smf_id];
+ entry_base_addr = bat_table->bat;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+
+ if (entry_type == CQM_BAT_ENTRY_T_TIMER) {
+ cqm_bat_fill_cla_timer(cqm_handle, cla_table,
+ &entry_base_addr);
+ break;
+ }
+
+ if (entry_type == CQM_BAT_ENTRY_T_TASKMAP)
+ entry_base_addr += sizeof(struct tag_cqm_bat_entry_taskmap);
+ else
+ entry_base_addr += CQM_BAT_ENTRY_SIZE;
+
+ /* Check whether entry_base_addr is out-of-bounds array. */
+ if (entry_base_addr >=
+ (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
+ break;
+ }
+}
+
+static s32 cqm_bat_update_cmd(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cmd_buf *buf_in,
+ u32 smf_id, u32 func_id)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cmdq_bat_update *bat_update_cmd = NULL;
+ s32 ret = CQM_FAIL;
+
+ int is_in_kexec;
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ cqm_info(handle->dev_hdl, "Skip updating the cqm_bat to chip during kexec!\n");
+ return CQM_SUCCESS;
+ }
+
+ bat_update_cmd = (struct tag_cqm_cmdq_bat_update *)(buf_in->buf);
+ bat_update_cmd->offset = 0;
+
+ if (cqm_handle->bat_table.bat_size > CQM_BAT_MAX_SIZE) {
+ cqm_err(handle->dev_hdl,
+ "bat_size = %u, which is more than %d.\n",
+ cqm_handle->bat_table.bat_size, CQM_BAT_MAX_SIZE);
+ return CQM_FAIL;
+ }
+ bat_update_cmd->byte_len = cqm_handle->bat_table.bat_size;
+
+ memcpy(bat_update_cmd->data, cqm_handle->bat_table.bat, bat_update_cmd->byte_len);
+
+#ifdef __CQM_DEBUG__
+ cqm_byte_print((u32 *)(cqm_handle->bat_table.bat),
+ CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE);
+#endif
+
+ bat_update_cmd->smf_id = smf_id;
+ bat_update_cmd->func_id = func_id;
+
+ cqm_info(handle->dev_hdl, "Bat update: smf_id=%u\n",
+ bat_update_cmd->smf_id);
+ cqm_info(handle->dev_hdl, "Bat update: func_id=%u\n",
+ bat_update_cmd->func_id);
+
+ cqm_swab32((u8 *)bat_update_cmd,
+ sizeof(struct tag_cqm_cmdq_bat_update) >> CQM_DW_SHIFT);
+
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_BAT_UPDATE, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, HINIC3_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl, "%s: send_cmd_box ret=%d\n", __func__,
+ ret);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_bat_update - Send a command to tile to update the BAT table through cmdq
+ * @cqm_handle: CQM handle
+ */
+static s32 cqm_bat_update(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cmd_buf *buf_in = NULL;
+ s32 ret = CQM_FAIL;
+ u32 smf_id = 0;
+ u32 func_id = 0;
+ u32 i = 0;
+
+ buf_in = cqm_cmd_alloc((void *)(cqm_handle->ex_handle));
+ if (!buf_in)
+ return CQM_FAIL;
+ buf_in->size = sizeof(struct tag_cqm_cmdq_bat_update);
+
+ /* In non-fake mode, func_id is set to 0xffff, indicating the current
+ * func. In fake mode, the value of func_id is specified. This is a fake
+ * func_id.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD)
+ func_id = cqm_handle->func_attribute.func_global_idx;
+ else
+ func_id = 0xffff;
+
+ /* Load-balancing (LB) scenarios:
+ * Normal mode is the traditional mode and is configured on SMF0 only.
+ * In mode 0, load is balanced across the four SMFs by func ID (except
+ * for the PPF); the PPF in mode 0 must be configured on all four SMFs
+ * so that the timer resources can be shared by the four timer engines.
+ * Modes 1/2 balance load across the four SMFs by flow, so a single
+ * function must be configured on all four SMFs.
+ */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_bat_update_cmd(cqm_handle, buf_in, smf_id, func_id);
+ } else if ((cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1) ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2) ||
+ ((cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0) &&
+ (cqm_handle->func_attribute.func_type == CQM_PPF))) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cqm_update_timer_gpa(cqm_handle, i);
+
+ /* The smf_pg variable stores the currently
+ * enabled SMF.
+ */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ smf_id = i;
+ ret = cqm_bat_update_cmd(cqm_handle, buf_in,
+ smf_id, func_id);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+ cqm_err(handle->dev_hdl, "Bat update: unsupport lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+static s32 cqm_bat_init_ft(struct tag_cqm_handle *cqm_handle, struct tag_cqm_bat_table *bat_table,
+ enum func_type function_type)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i = 0;
+
+ bat_table->bat_entry_type[CQM_BAT_INDEX0] = CQM_BAT_ENTRY_T_CFG;
+ bat_table->bat_entry_type[CQM_BAT_INDEX1] = CQM_BAT_ENTRY_T_HASH;
+ bat_table->bat_entry_type[CQM_BAT_INDEX2] = CQM_BAT_ENTRY_T_QPC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX3] = CQM_BAT_ENTRY_T_SCQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX4] = CQM_BAT_ENTRY_T_LUN;
+ bat_table->bat_entry_type[CQM_BAT_INDEX5] = CQM_BAT_ENTRY_T_TASKMAP;
+
+ if (function_type == CQM_PF || function_type == CQM_PPF) {
+ bat_table->bat_entry_type[CQM_BAT_INDEX6] = CQM_BAT_ENTRY_T_L3I;
+ bat_table->bat_entry_type[CQM_BAT_INDEX7] = CQM_BAT_ENTRY_T_CHILDC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX8] = CQM_BAT_ENTRY_T_TIMER;
+ bat_table->bat_entry_type[CQM_BAT_INDEX9] = CQM_BAT_ENTRY_T_XID2CID;
+ bat_table->bat_entry_type[CQM_BAT_INDEX10] = CQM_BAT_ENTRY_T_REORDER;
+ bat_table->bat_size = CQM_BAT_SIZE_FT_PF;
+ } else if (function_type == CQM_VF) {
+ bat_table->bat_size = CQM_BAT_SIZE_FT_VF;
+ } else {
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(function_type));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_bat_init_rdma(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_bat_table *bat_table,
+ enum func_type function_type)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i = 0;
+
+ bat_table->bat_entry_type[CQM_BAT_INDEX0] = CQM_BAT_ENTRY_T_QPC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX1] = CQM_BAT_ENTRY_T_SCQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX2] = CQM_BAT_ENTRY_T_SRQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX3] = CQM_BAT_ENTRY_T_MPT;
+ bat_table->bat_entry_type[CQM_BAT_INDEX4] = CQM_BAT_ENTRY_T_GID;
+
+ if (function_type == CQM_PF || function_type == CQM_PPF) {
+ bat_table->bat_entry_type[CQM_BAT_INDEX5] = CQM_BAT_ENTRY_T_L3I;
+ bat_table->bat_entry_type[CQM_BAT_INDEX6] =
+ CQM_BAT_ENTRY_T_CHILDC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX7] =
+ CQM_BAT_ENTRY_T_TIMER;
+ bat_table->bat_entry_type[CQM_BAT_INDEX8] =
+ CQM_BAT_ENTRY_T_XID2CID;
+ bat_table->bat_entry_type[CQM_BAT_INDEX9] =
+ CQM_BAT_ENTRY_T_REORDER;
+ bat_table->bat_size = CQM_BAT_SIZE_RDMA_PF;
+ } else if (function_type == CQM_VF) {
+ bat_table->bat_size = CQM_BAT_SIZE_RDMA_VF;
+ } else {
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(function_type));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_bat_init_ft_rdma(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_bat_table *bat_table,
+ enum func_type function_type)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i = 0;
+
+ bat_table->bat_entry_type[CQM_BAT_INDEX0] = CQM_BAT_ENTRY_T_CFG;
+ bat_table->bat_entry_type[CQM_BAT_INDEX1] = CQM_BAT_ENTRY_T_HASH;
+ bat_table->bat_entry_type[CQM_BAT_INDEX2] = CQM_BAT_ENTRY_T_QPC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX3] = CQM_BAT_ENTRY_T_SCQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX4] = CQM_BAT_ENTRY_T_SRQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX5] = CQM_BAT_ENTRY_T_MPT;
+ bat_table->bat_entry_type[CQM_BAT_INDEX6] = CQM_BAT_ENTRY_T_GID;
+ bat_table->bat_entry_type[CQM_BAT_INDEX7] = CQM_BAT_ENTRY_T_LUN;
+ bat_table->bat_entry_type[CQM_BAT_INDEX8] = CQM_BAT_ENTRY_T_TASKMAP;
+
+ if (function_type == CQM_PF || function_type == CQM_PPF) {
+ bat_table->bat_entry_type[CQM_BAT_INDEX9] = CQM_BAT_ENTRY_T_L3I;
+ bat_table->bat_entry_type[CQM_BAT_INDEX10] =
+ CQM_BAT_ENTRY_T_CHILDC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX11] =
+ CQM_BAT_ENTRY_T_TIMER;
+ bat_table->bat_entry_type[CQM_BAT_INDEX12] =
+ CQM_BAT_ENTRY_T_XID2CID;
+ bat_table->bat_entry_type[CQM_BAT_INDEX13] =
+ CQM_BAT_ENTRY_T_REORDER;
+ bat_table->bat_size = CQM_BAT_SIZE_FT_RDMA_PF;
+ } else if (function_type == CQM_VF) {
+ bat_table->bat_size = CQM_BAT_SIZE_FT_RDMA_VF;
+ } else {
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(function_type));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_bat_init - Initialize the BAT table. Only the entry types and their
+ * order are selected here; the content of each BAT entry is
+ * filled in after the CLA tables are allocated.
+ * @cqm_handle: CQM handle
+ */
+s32 cqm_bat_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *capability = &cqm_handle->func_capability;
+ enum func_type function_type = cqm_handle->func_attribute.func_type;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ u32 i;
+
+ memset(bat_table, 0, sizeof(struct tag_cqm_bat_table));
+
+ /* Initialize the type of each bat entry. */
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ /* Select BATs based on service types. Currently,
+ * feature-related resources of the VF are stored in the BATs of the VF.
+ */
+ if (capability->ft_enable && capability->rdma_enable)
+ return cqm_bat_init_ft_rdma(cqm_handle, bat_table, function_type);
+ else if (capability->ft_enable)
+ return cqm_bat_init_ft(cqm_handle, bat_table, function_type);
+ else if (capability->rdma_enable)
+ return cqm_bat_init_rdma(cqm_handle, bat_table, function_type);
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_bat_uninit - Deinitialize the BAT table
+ * @cqm_handle: CQM handle
+ */
+void cqm_bat_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ memset(bat_table->bat, 0, CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE);
+
+ /* Instruct the chip to update the BAT table. */
+ if (cqm_bat_update(cqm_handle) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
+}
+
+static s32 cqm_cla_fill_buf(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *cla_base_buf,
+ struct tag_cqm_buf *cla_sub_buf, u8 gpa_check_enable)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct hinic3_func_attr *func_attr = NULL;
+ dma_addr_t *base = NULL;
+ u64 fake_en = 0;
+ u64 spu_en = 0;
+ u64 pf_id = 0;
+ u32 i = 0;
+ u32 addr_num;
+ u32 buf_index = 0;
+
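+ /* Each parent-level entry written below is a 64-bit GPA word: bit 63 is
+ * spu_en, bit 62 is fake_vf_en, bits [61:57] hold the parent pf_id and
+ * bit 0 is the GPA-valid check flag.
+ */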
+ /* Apply for space for base_buf */
+ if (!cla_base_buf->buf_list) {
+ if (cqm_buf_alloc(cqm_handle, cla_base_buf, false) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(cla_base_buf));
+ return CQM_FAIL;
+ }
+ }
+
+ /* Apply for space for sub_buf */
+ if (!cla_sub_buf->buf_list) {
+ if (cqm_buf_alloc(cqm_handle, cla_sub_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(cla_sub_buf));
+ cqm_buf_free(cla_base_buf, cqm_handle);
+ return CQM_FAIL;
+ }
+ }
+
+ /* Fill base_buff with the gpa of sub_buf */
+ addr_num = cla_base_buf->buf_size / sizeof(dma_addr_t);
+ base = (dma_addr_t *)(cla_base_buf->buf_list[0].va);
+ for (i = 0; i < cla_sub_buf->buf_number; i++) {
+ /* The SPU SMF supports load balancing from the SMF to the CPI,
+ * depending on the host ID and func ID.
+ */
+ if (hinic3_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
+ func_attr = &cqm_handle->func_attribute;
+ spu_en = (u64)(func_attr->func_global_idx & 0x1) << 0x3F;
+ } else {
+ spu_en = 0;
+ }
+
+ /* fake enable */
+ if (cqm_handle->func_capability.fake_func_type ==
+ CQM_FAKE_FUNC_CHILD) {
+ fake_en = 1ULL << 0x3E;
+ func_attr =
+ &cqm_handle->parent_cqm_handle->func_attribute;
+ pf_id = func_attr->func_global_idx;
+ pf_id = (pf_id & 0x1f) << 0x39;
+ } else {
+ fake_en = 0;
+ pf_id = 0;
+ }
+
+ *base = (dma_addr_t)((((((u64)(cla_sub_buf->buf_list[i].pa) & CQM_CHIP_GPA_MASK) |
+ spu_en) |
+ fake_en) |
+ pf_id) |
+ gpa_check_enable);
+
+ cqm_swab64((u8 *)base, 1);
+ if ((i + 1) % addr_num == 0) {
+ buf_index++;
+ if (buf_index < cla_base_buf->buf_number)
+ base = cla_base_buf->buf_list[buf_index].va;
+ } else {
+ base++;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_xyz_lvl1(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u32 trunk_size)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *cla_y_buf = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ s32 shift = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ u32 cache_line = 0;
+
+ /* The cacheline of the timer is changed to 512. */
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ gpa_check_enable = 0;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_1;
+
+ shift = cqm_shift(trunk_size / cla_table->obj_size);
+ cla_table->z = (u32)(shift ? (shift - 1) : (shift));
+ cla_table->y = CQM_MAX_INDEX_BIT;
+ cla_table->x = 0;
+
+ cqm_dbg("cla_table->obj_size = %d, cache_line = %d",
+ cla_table->obj_size, cache_line);
+ if (cla_table->obj_size >= cache_line) {
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+ } else {
+ shift = cqm_shift(trunk_size / cache_line);
+ cla_table->cacheline_z = (u32)(shift ? (shift - 1) : (shift));
+ cla_table->cacheline_y = CQM_MAX_INDEX_BIT;
+ cla_table->cacheline_x = 0;
+ }
+
+ /* Applying for CLA_Y_BUF Space */
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_y_buf->buf_size = trunk_size;
+ cla_y_buf->buf_number = 1;
+ cla_y_buf->page_number = cla_y_buf->buf_number <<
+ cla_table->trunk_order;
+
+ ret = cqm_buf_alloc(cqm_handle, cla_y_buf, false);
+ if (ret != CQM_SUCCESS)
+ return CQM_FAIL;
+
+ /* Applying for CLA_Z_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number =
+ (ALIGN(cla_table->max_buffer_size, trunk_size)) / trunk_size;
+ cla_z_buf->page_number = cla_z_buf->buf_number <<
+ cla_table->trunk_order;
+
+ /* All buffer space must be statically allocated. */
+ if (cla_table->alloc_static) {
+ ret = cqm_cla_fill_buf(cqm_handle, cla_y_buf, cla_z_buf,
+ gpa_check_enable);
+ if (unlikely(ret != CQM_SUCCESS)) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_fill_buf));
+ return CQM_FAIL;
+ }
+ } else { /* Only the buffer list space is initialized. The buffer space
+ * is dynamically allocated in services.
+ */
+ cla_z_buf->buf_list = vmalloc(cla_z_buf->buf_number *
+ sizeof(struct tag_cqm_buf_list));
+ if (!cla_z_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_1_z_buf));
+ cqm_buf_free(cla_y_buf, cqm_handle);
+ return CQM_FAIL;
+ }
+ memset(cla_z_buf->buf_list, 0,
+ cla_z_buf->buf_number * sizeof(struct tag_cqm_buf_list));
+ }
+
+ return CQM_SUCCESS;
+}
+
+static void cqm_cla_xyz_lvl2_param_init(struct tag_cqm_cla_table *cla_table, u32 trunk_size)
+{
+ s32 shift = 0;
+ u32 cache_line = 0;
+
+ /* The cacheline of the timer is changed to 512. */
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_2;
+
+ shift = cqm_shift(trunk_size / cla_table->obj_size);
+ cla_table->z = (u32)(shift ? (shift - 1) : (shift));
+ shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
+ cla_table->y = cla_table->z + shift;
+ cla_table->x = CQM_MAX_INDEX_BIT;
+
+ if (cla_table->obj_size >= cache_line) {
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+ } else {
+ shift = cqm_shift(trunk_size / cache_line);
+ cla_table->cacheline_z = (u32)(shift ? (shift - 1) : (shift));
+ shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
+ cla_table->cacheline_y = cla_table->cacheline_z + shift;
+ cla_table->cacheline_x = CQM_MAX_INDEX_BIT;
+ }
+}
+
+static s32 cqm_cla_xyz_lvl2_xyz_apply(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table, u32 trunk_size)
+{
+ struct tag_cqm_buf *cla_x_buf = NULL;
+ struct tag_cqm_buf *cla_y_buf = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ s32 ret = CQM_FAIL;
+
+ /* Apply for CLA_X_BUF Space */
+ cla_x_buf = &cla_table->cla_x_buf;
+ cla_x_buf->buf_size = trunk_size;
+ cla_x_buf->buf_number = 1;
+ cla_x_buf->page_number = cla_x_buf->buf_number << cla_table->trunk_order;
+ cla_x_buf->buf_info.use_vram = get_use_vram_flag();
+ ret = cqm_buf_alloc(cqm_handle, cla_x_buf, false);
+ if (ret != CQM_SUCCESS)
+ return CQM_FAIL;
+
+ /* Apply for CLA_Z_BUF and CLA_Y_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number = (ALIGN(cla_table->max_buffer_size, trunk_size)) / trunk_size;
+ cla_z_buf->page_number = cla_z_buf->buf_number << cla_table->trunk_order;
+
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_y_buf->buf_size = trunk_size;
+ cla_y_buf->buf_number =
+ (u32)(ALIGN(cla_z_buf->buf_number * sizeof(dma_addr_t), trunk_size)) / trunk_size;
+ cla_y_buf->page_number = cla_y_buf->buf_number << cla_table->trunk_order;
+
+ return 0;
+}
+
+static s32 cqm_cla_xyz_vram_name_init(struct tag_cqm_cla_table *cla_table,
+ struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_buf *cla_x_buf = NULL;
+ struct tag_cqm_buf *cla_y_buf = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+
+ cla_x_buf = &cla_table->cla_x_buf;
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_x_buf->buf_info.use_vram = get_use_vram_flag();
+ snprintf(cla_x_buf->buf_info.buf_vram_name,
+ VRAM_NAME_APPLY_LEN, "%s%s", cla_table->name,
+ VRAM_CQM_CLA_COORD_X);
+
+ cla_y_buf->buf_info.use_vram = get_use_vram_flag();
+ snprintf(cla_y_buf->buf_info.buf_vram_name,
+ VRAM_NAME_APPLY_LEN, "%s%s", cla_table->name,
+ VRAM_CQM_CLA_COORD_Y);
+
+ cla_z_buf->buf_info.use_vram = get_use_vram_flag();
+ snprintf(cla_z_buf->buf_info.buf_vram_name,
+ VRAM_NAME_APPLY_LEN, "%s%s",
+ cla_table->name, VRAM_CQM_CLA_COORD_Z);
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_xyz_lvl2(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table, u32 trunk_size)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *cla_x_buf = NULL;
+ struct tag_cqm_buf *cla_y_buf = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+
+ cqm_cla_xyz_lvl2_param_init(cla_table, trunk_size);
+
+ ret = cqm_cla_xyz_lvl2_xyz_apply(cqm_handle, cla_table, trunk_size);
+ if (ret)
+ return ret;
+
+ cla_x_buf = &cla_table->cla_x_buf;
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_y_buf = &cla_table->cla_y_buf;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ gpa_check_enable = 0;
+
+ /* All buffer space must be statically allocated. */
+ if (cla_table->alloc_static) {
+ /* Apply for y buf and z buf, and fill the gpa of z buf list in y buf */
+ if (cqm_cla_fill_buf(cqm_handle, cla_y_buf, cla_z_buf,
+ gpa_check_enable) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_fill_buf));
+ cqm_buf_free(cla_x_buf, cqm_handle);
+ return CQM_FAIL;
+ }
+
+ /* Fill the gpa of the y buf list into the x buf.
+ * Once the x and y bufs have been allocated, this call cannot fail,
+ * so its return value is deliberately cast to void.
+ */
+ (void)cqm_cla_fill_buf(cqm_handle, cla_x_buf, cla_y_buf, gpa_check_enable);
+ } else { /* Only the buffer list space is initialized. The buffer space
+ * is dynamically allocated in services.
+ */
+ cla_z_buf->buf_list = vmalloc(cla_z_buf->buf_number *
+ sizeof(struct tag_cqm_buf_list));
+ if (!cla_z_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_z_buf));
+ cqm_buf_free(cla_x_buf, cqm_handle);
+ return CQM_FAIL;
+ }
+ memset(cla_z_buf->buf_list, 0,
+ cla_z_buf->buf_number * sizeof(struct tag_cqm_buf_list));
+
+ cla_y_buf->buf_list = vmalloc(cla_y_buf->buf_number *
+ sizeof(struct tag_cqm_buf_list));
+ if (!cla_y_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_y_buf));
+ cqm_buf_free(cla_z_buf, cqm_handle);
+ cqm_buf_free(cla_x_buf, cqm_handle);
+ return CQM_FAIL;
+ }
+ memset(cla_y_buf->buf_list, 0,
+ cla_y_buf->buf_number * sizeof(struct tag_cqm_buf_list));
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_xyz_check(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table, u32 *size)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 trunk_size = 0;
+
+ /* If the capability (obj_num) is 0, the CLA does not need to be
+ * initialized and we return directly.
+ */
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't alloc buffer\n",
+ cla_table->type);
+ return CQM_SUCCESS;
+ }
+
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0x%x, gpa_check_enable=%d\n",
+ cla_table->type, cla_table->obj_num,
+ cqm_handle->func_capability.gpa_check_enable);
+
+ /* Check whether obj_size is 2^n-aligned. An error is reported when
+ * obj_size is 0 or 1.
+ */
+ if (!cqm_check_align(cla_table->obj_size)) {
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_size 0x%x is not align on 2^n\n",
+ cla_table->type, cla_table->obj_size);
+ return CQM_FAIL;
+ }
+
+ trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+
+ if (trunk_size < cla_table->obj_size) {
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla type %u, obj_size 0x%x is out of trunk size\n",
+ cla_table->type, cla_table->obj_size);
+ return CQM_FAIL;
+ }
+
+ *size = trunk_size;
+
+ return CQM_CONTINUE;
+}
+
+static s32 cqm_cla_xyz_lvl0(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table, u32 trunk_size)
+{
+ struct tag_cqm_buf *cla_z_buf = NULL;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_0;
+
+ cla_table->z = CQM_MAX_INDEX_BIT;
+ cla_table->y = 0;
+ cla_table->x = 0;
+
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+
+ /* Applying for CLA_Z_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number = 1;
+ cla_z_buf->page_number = cla_z_buf->buf_number << cla_table->trunk_order;
+ cla_z_buf->bat_entry_type = cla_table->type;
+
+ return cqm_buf_alloc(cqm_handle, cla_z_buf, false);
+}
+
+/**
+ * cqm_cla_xyz - Calculate the number of levels of CLA tables and allocate
+ * space for each level of CLA table.
+ * @cqm_handle: CQM handle
+ * @cla_table: CLA table
+ */
+static s32 cqm_cla_xyz(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 trunk_size = 0;
+ s32 ret = CQM_FAIL;
+
+ ret = cqm_cla_xyz_check(cqm_handle, cla_table, &trunk_size);
+ if (ret != CQM_CONTINUE)
+ return ret;
+
+ ret = cqm_cla_xyz_vram_name_init(cla_table, handle);
+ if (ret != CQM_SUCCESS)
+ return ret;
+
+ /* Level-0 CLA occupies a small space.
+ * Only CLA_Z_BUF can be allocated during initialization.
+ */
+ cqm_dbg("cla_table->max_buffer_size = %d trunk_size = %d\n",
+ cla_table->max_buffer_size, trunk_size);
+
+ if (cla_table->max_buffer_size > trunk_size &&
+ cqm_need_secure_mem((void *)handle)) {
+ trunk_size = roundup(cla_table->max_buffer_size, CQM_SECURE_MEM_ALIGNED_SIZE);
+ cqm_dbg("[memsec]reset trunk_size = %u\n", trunk_size);
+ }
+
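+ /* Rough per-level capacity (illustrative, assuming 4 KB trunks and
+ * 8-byte GPAs): lvl0 covers one trunk (4 KB), lvl1 covers
+ * trunk * (trunk / 8) = 2 MB, lvl2 covers trunk * (trunk / 8)^2 = 1 GB.
+ */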
+ if (cla_table->max_buffer_size <= trunk_size) {
+ ret = cqm_cla_xyz_lvl0(cqm_handle, cla_table, trunk_size);
+ if (ret != CQM_SUCCESS)
+ return CQM_FAIL;
+ /* Level-1 CLA
+ * Allocates CLA_Y_BUF and CLA_Z_BUF during initialization.
+ */
+ } else if (cla_table->max_buffer_size <=
+ (trunk_size * (trunk_size / sizeof(dma_addr_t)))) {
+ if (cqm_cla_xyz_lvl1(cqm_handle, cla_table, trunk_size) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl1));
+ return CQM_FAIL;
+ }
+ /* Level-2 CLA
+ * Allocates CLA_X_BUF, CLA_Y_BUF, and CLA_Z_BUF during initialization.
+ */
+ } else if (cla_table->max_buffer_size <= (trunk_size * (trunk_size / sizeof(dma_addr_t)) *
+ (trunk_size / sizeof(dma_addr_t)))) {
+ if (cqm_cla_xyz_lvl2(cqm_handle, cla_table, trunk_size) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl2));
+ return CQM_FAIL;
+ }
+ } else { /* The current memory management mode does not support such
+ * a large buffer addressing. The order value needs to
+ * be increased.
+ */
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla max_buffer_size 0x%x exceeds support range\n",
+ cla_table->max_buffer_size);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static void cqm_cla_init_entry_normal(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ struct tag_cqm_func_capability *capability)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_HASH:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->hash_number * capability->hash_basic_size;
+ cla_table->obj_size = capability->hash_basic_size;
+ cla_table->obj_num = capability->hash_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_QPC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->qpc_number * capability->qpc_basic_size;
+ cla_table->obj_size = capability->qpc_basic_size;
+ cla_table->obj_num = capability->qpc_number;
+ cla_table->alloc_static = capability->qpc_alloc_static;
+ cqm_info(handle->dev_hdl, "Cla alloc: qpc alloc_static=%d\n",
+ cla_table->alloc_static);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->mpt_number *
+ capability->mpt_basic_size;
+ cla_table->obj_size = capability->mpt_basic_size;
+ cla_table->obj_num = capability->mpt_number;
+ cla_table->alloc_static = true; /* Per CCB decision, MPT is used
+ * only in static allocation
+ * scenarios.
+ */
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->scqc_number * capability->scqc_basic_size;
+ cla_table->obj_size = capability->scqc_basic_size;
+ cla_table->obj_num = capability->scqc_number;
+ cla_table->alloc_static = capability->scqc_alloc_static;
+ cqm_info(handle->dev_hdl, "Cla alloc: scqc alloc_static=%d\n",
+ cla_table->alloc_static);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->srqc_number * capability->srqc_basic_size;
+ cla_table->obj_size = capability->srqc_basic_size;
+ cla_table->obj_num = capability->srqc_number;
+ cla_table->alloc_static = false;
+ break;
+ default:
+ break;
+ }
+}
+
+static void cqm_cla_init_entry_extern(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ struct tag_cqm_func_capability *capability)
+{
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_GID:
+ /* Level-0 CLA table required */
+ cla_table->max_buffer_size = capability->gid_number *
+ capability->gid_basic_size;
+ cla_table->trunk_order =
+ (u32)cqm_shift(ALIGN(cla_table->max_buffer_size, PAGE_SIZE) / PAGE_SIZE);
+ cla_table->obj_size = capability->gid_basic_size;
+ cla_table->obj_num = capability->gid_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_LUN:
+ cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->lun_number *
+ capability->lun_basic_size;
+ cla_table->obj_size = capability->lun_basic_size;
+ cla_table->obj_num = capability->lun_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_TASKMAP:
+ cla_table->trunk_order = CQM_4K_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->taskmap_number *
+ capability->taskmap_basic_size;
+ cla_table->obj_size = capability->taskmap_basic_size;
+ cla_table->obj_num = capability->taskmap_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_L3I:
+ cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->l3i_number *
+ capability->l3i_basic_size;
+ cla_table->obj_size = capability->l3i_basic_size;
+ cla_table->obj_num = capability->l3i_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_CHILDC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->childc_number *
+ capability->childc_basic_size;
+ cla_table->obj_size = capability->childc_basic_size;
+ cla_table->obj_num = capability->childc_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_TIMER:
+ /* Ensure that the basic size of the timer buffer page does not
+ * exceed 128 x 4 KB. Otherwise, clearing the timer buffer of
+ * the function is complex.
+ */
+ cla_table->trunk_order = CQM_8K_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->timer_number *
+ capability->timer_basic_size;
+ cla_table->obj_size = capability->timer_basic_size;
+ cla_table->obj_num = capability->timer_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_XID2CID:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->xid2cid_number *
+ capability->xid2cid_basic_size;
+ cla_table->obj_size = capability->xid2cid_basic_size;
+ cla_table->obj_num = capability->xid2cid_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_REORDER:
+ /* This entry supports only IWARP and does not support GPA
+ * validity check.
+ */
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->reorder_number *
+ capability->reorder_basic_size;
+ cla_table->obj_size = capability->reorder_basic_size;
+ cla_table->obj_num = capability->reorder_number;
+ cla_table->alloc_static = true;
+ break;
+ default:
+ break;
+ }
+}
+
+static s32 cqm_cla_init_entry_condition(struct tag_cqm_handle *cqm_handle, u32 entry_type)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = &bat_table->entry[entry_type];
+ struct tag_cqm_cla_table *cla_table_timer = NULL;
+ u32 i;
+
+ /* When the timer is in LB mode 1 or 2, the timer needs to be
+ * configured for four SMFs and the address space is independent.
+ */
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER &&
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cla_table_timer = &bat_table->timer_entry[i];
+ memcpy(cla_table_timer, cla_table, sizeof(struct tag_cqm_cla_table));
+
+ snprintf(cla_table_timer->name,
+ VRAM_NAME_APPLY_LEN, "%s%s%01u", cla_table->name,
+ VRAM_CQM_CLA_SMF_BASE, i);
+
+ if (cqm_cla_xyz(cqm_handle, cla_table_timer) ==
+ CQM_FAIL) {
+ cqm_cla_uninit(cqm_handle, entry_type);
+ return CQM_FAIL;
+ }
+ }
+ return CQM_SUCCESS;
+ }
+
+ if (cqm_cla_xyz(cqm_handle, cla_table) == CQM_FAIL) {
+ cqm_cla_uninit(cqm_handle, entry_type);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_cla_init_entry(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_func_capability *capability)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ s32 ret;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ cla_table->type = bat_table->bat_entry_type[i];
+ snprintf(cla_table->name, VRAM_NAME_APPLY_LEN,
+ "%s%s%s%02u", cqm_handle->name, VRAM_CQM_CLA_BASE,
+ VRAM_CQM_CLA_TYPE_BASE, cla_table->type);
+
+ cqm_cla_init_entry_normal(cqm_handle, cla_table, capability);
+ cqm_cla_init_entry_extern(cqm_handle, cla_table, capability);
+
+ /* Allocate CLA entry space at each level. */
+ if (cla_table->type < CQM_BAT_ENTRY_T_HASH ||
+ cla_table->type > CQM_BAT_ENTRY_T_REORDER) {
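+			/* Entry types without a CLA table (e.g. CFG or the
+			 * INVALID placeholder) only need their lock initialized.
+			 */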
+ mutex_init(&cla_table->lock);
+ continue;
+ }
+
+		/* For the PPF, timer resources (8 wheels x 2k scales x 32B x
+		 * func_num) need to be allocated, and the timer entry in the
+		 * BAT table needs to be filled. For the PF, no timer resource
+		 * is allocated and the timer entry in the BAT table is left
+		 * unfilled.
+		 */
+ if (!(cla_table->type == CQM_BAT_ENTRY_T_TIMER &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ ret = cqm_cla_init_entry_condition(cqm_handle, i);
+ if (ret != CQM_SUCCESS)
+ return CQM_FAIL;
+ cqm_dbg("~~~~cla_table->type = %d\n", cla_table->type);
+ }
+ cqm_dbg("****cla_table->type = %d\n", cla_table->type);
+ mutex_init(&cla_table->lock);
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_cla_init - Initialize the CLA table
+ * @cqm_handle: CQM handle
+ */
+s32 cqm_cla_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ s32 ret;
+
+ /* Applying for CLA Entries */
+ ret = cqm_cla_init_entry(cqm_handle, capability);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init_entry));
+ return ret;
+ }
+
+ /* After the CLA entry is applied, the address is filled
+ * in the BAT table.
+ */
+ cqm_bat_fill_cla(cqm_handle);
+
+ /* Instruct the chip to update the BAT table. */
+ ret = cqm_bat_update(cqm_handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
+ goto err;
+ }
+
+ cqm_info(handle->dev_hdl, "Timer start: func_type=%d, timer_enable=%u\n",
+ cqm_handle->func_attribute.func_type,
+ cqm_handle->func_capability.timer_enable);
+
+ if (cqm_handle->func_attribute.func_type == CQM_PPF) {
+ ret = hinic3_ppf_ht_gpa_init(handle);
+ if (ret) {
+ cqm_err(handle->dev_hdl, "PPF ht gpa init fail!\n");
+ goto err;
+ }
+
+ if (cqm_handle->func_capability.timer_enable ==
+ CQM_TIMER_ENABLE) {
+ /* Enable the timer after the timer resources are applied for */
+ cqm_info(handle->dev_hdl, "PPF timer start\n");
+ ret = hinic3_ppf_tmr_start(handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "PPF timer start, ret=%d\n", ret);
+ goto err1;
+ }
+ }
+ }
+
+ return CQM_SUCCESS;
+err1:
+ hinic3_ppf_ht_gpa_deinit(handle);
+err:
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_cla_uninit - Deinitialize the CLA table
+ * @cqm_handle: CQM handle
+ * @entry_numb: entry number
+ */
+void cqm_cla_uninit(struct tag_cqm_handle *cqm_handle, u32 entry_numb)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ s32 inv_flag = 0;
+ u32 i;
+
+ for (i = 0; i < entry_numb; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_x_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_y_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_z_buf,
+ &inv_flag);
+ }
+ mutex_deinit(&cla_table->lock);
+ }
+
+ /* When the lb mode is 1/2, the timer space allocated to the 4 SMFs
+ * needs to be released.
+ */
+ if (cqm_handle->func_attribute.func_type == CQM_PPF &&
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cla_table = &bat_table->timer_entry[i];
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_x_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_y_buf,
+ &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle,
+ &cla_table->cla_z_buf,
+ &inv_flag);
+ mutex_deinit(&cla_table->lock);
+ }
+ }
+}
+
+static s32 cqm_cla_update_cmd(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cmd_buf *buf_in,
+ struct tag_cqm_cla_update_cmd *cmd)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cla_update_cmd *cla_update_cmd = NULL;
+ s32 ret = CQM_FAIL;
+
+ cla_update_cmd = (struct tag_cqm_cla_update_cmd *)(buf_in->buf);
+
+ cla_update_cmd->gpa_h = cmd->gpa_h;
+ cla_update_cmd->gpa_l = cmd->gpa_l;
+ cla_update_cmd->value_h = cmd->value_h;
+ cla_update_cmd->value_l = cmd->value_l;
+ cla_update_cmd->smf_id = cmd->smf_id;
+ cla_update_cmd->func_id = cmd->func_id;
+
+ cqm_swab32((u8 *)cla_update_cmd,
+ (sizeof(struct tag_cqm_cla_update_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_CLA_UPDATE, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, HINIC3_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Cla alloc: cqm_cla_update, cqm_send_cmd_box_ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cqm_cla_update, cla_update_cmd: 0x%x 0x%x 0x%x 0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->value_h, cmd->value_l);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_cla_update - Send a command to update the CLA table
+ * @cqm_handle: CQM handle
+ * @buf_node_parent: parent node of the content to be updated
+ * @buf_node_child: Subnode for which the buffer is to be applied
+ * @child_index: Index of a child node
+ * @cla_update_mode: CLA update mode
+ */
+static s32 cqm_cla_update(struct tag_cqm_handle *cqm_handle,
+ const struct tag_cqm_buf_list *buf_node_parent,
+ const struct tag_cqm_buf_list *buf_node_child,
+ u32 child_index, u8 cla_update_mode)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cmd_buf *buf_in = NULL;
+ struct tag_cqm_cla_update_cmd cmd;
+ dma_addr_t pa = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ u32 i = 0;
+ u64 spu_en;
+
+ buf_in = cqm_cmd_alloc(cqm_handle->ex_handle);
+ if (!buf_in)
+ return CQM_FAIL;
+ buf_in->size = sizeof(struct tag_cqm_cla_update_cmd);
+
+ /* Fill command format, convert to big endian. */
+ /* SPU function sets bit63: acs_spu_en based on function id. */
+ if (hinic3_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID)
+ spu_en = ((u64)(cqm_handle->func_attribute.func_global_idx &
+ 0x1)) << 0x3F;
+ else
+ spu_en = 0;
+
+ pa = ((buf_node_parent->pa + (child_index * sizeof(dma_addr_t))) |
+ spu_en);
+ cmd.gpa_h = CQM_ADDR_HI(pa);
+ cmd.gpa_l = CQM_ADDR_LW(pa);
+
+ pa = (buf_node_child->pa | spu_en);
+ cmd.value_h = CQM_ADDR_HI(pa);
+ cmd.value_l = CQM_ADDR_LW(pa);
+
+ cqm_dbg("Cla alloc: %s, gpa=0x%x 0x%x, value=0x%x 0x%x, cla_update_mode=0x%x\n",
+ __func__, cmd.gpa_h, cmd.gpa_l, cmd.value_h, cmd.value_l,
+ cla_update_mode);
+
+ /* current CLA GPA CHECK */
+ if (gpa_check_enable) {
+ switch (cla_update_mode) {
+ /* gpa[0]=1 means this GPA is valid */
+ case CQM_CLA_RECORD_NEW_GPA:
+ cmd.value_l |= 1;
+ break;
+		/* gpa[0]=0 means this GPA is invalid */
+ case CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID:
+ case CQM_CLA_DEL_GPA_WITH_CACHE_INVALID:
+ cmd.value_l &= (~1);
+ break;
+ default:
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: %s, wrong cla_update_mode=%u\n",
+ __func__, cla_update_mode);
+ break;
+ }
+ }
+
+ /* Todo: The following code is the same as that in the bat update and
+ * needs to be reconstructed.
+ */
+	/* In non-fake mode, set func_id to 0xffff. If the current function
+	 * is a fake (child) function, set func_id to its fake func_id.
+	 */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD)
+ cmd.func_id = cqm_handle->func_attribute.func_global_idx;
+ else
+ cmd.func_id = 0xffff;
+
+ /* Normal mode is 1822 traditional mode and is configured on SMF0. */
+ /* Mode 0 is hashed to 4 SMF engines (excluding PPF) by func ID. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ cmd.smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_cla_update_cmd(cqm_handle, buf_in, &cmd);
+ /* Modes 1/2 are allocated to four SMF engines by flow.
+ * Therefore, one function needs to be allocated to four SMF engines.
+ */
+ /* Mode 0 PPF needs to be configured on 4 engines,
+ * and the timer resources need to be shared by the 4 engines.
+ */
+ } else if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2 ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type == CQM_PPF)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+			/* The smf_pg variable stores the currently enabled SMF engines. */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ cmd.smf_id = i;
+ ret = cqm_cla_update_cmd(cqm_handle, buf_in,
+ &cmd);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+		cqm_err(handle->dev_hdl, "Cla update: unsupported lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+/**
+ * cqm_cla_alloc - Apply for a trunk page for the CLA table
+ * @cqm_handle: CQM handle
+ * @cla_table: BAT table entry
+ * @buf_node_parent: parent node of the content to be updated
+ * @buf_node_child: subnode for which the buffer is to be applied
+ * @child_index: index of a child node
+ */
+static s32 cqm_cla_alloc(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ struct tag_cqm_buf_list *buf_node_parent,
+ struct tag_cqm_buf_list *buf_node_child, u32 child_index)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ s32 ret = CQM_FAIL;
+
+ /* Apply for trunk page */
+ buf_node_child->va = (u8 *)ossl_get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ cla_table->trunk_order);
+ if (!buf_node_child->va)
+ return CQM_FAIL;
+
+ /* PCI mapping */
+ buf_node_child->pa = pci_map_single(cqm_handle->dev, buf_node_child->va,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, buf_node_child->pa)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_node_child->pa));
+ goto err1;
+ }
+
+ /* Notify the chip of trunk_pa so that the chip fills in cla entry */
+ ret = cqm_cla_update(cqm_handle, buf_node_parent, buf_node_child,
+ child_index, CQM_CLA_RECORD_NEW_GPA);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
+ goto err2;
+ }
+
+ return CQM_SUCCESS;
+
+err2:
+ pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+err1:
+ free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
+ buf_node_child->va = NULL;
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_cla_free - Release trunk page of a CLA
+ * @cqm_handle: CQM handle
+ * @cla_table: BAT table entry
+ * @buf_node_parent: parent node of the content to be updated
+ * @buf_node_child: subnode for which the buffer is to be applied
+ * @child_index: index of a child node
+ * @cla_update_mode: the update mode of CLA
+ */
+static void cqm_cla_free(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ struct tag_cqm_buf_list *buf_node_parent,
+ struct tag_cqm_buf_list *buf_node_child,
+ u32 child_index, u8 cla_update_mode)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 trunk_size;
+
+ cqm_dbg("Cla free: cla_update_mode=%u\n", cla_update_mode);
+
+ if (cqm_cla_update(cqm_handle, buf_node_parent, buf_node_child,
+ child_index, cla_update_mode) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
+ return;
+ }
+
+ if (cla_update_mode == CQM_CLA_DEL_GPA_WITH_CACHE_INVALID) {
+ trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ if (cqm_cla_cache_invalid(cqm_handle, buf_node_child->pa,
+ trunk_size) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_cache_invalid));
+ return;
+ }
+ }
+
+ /* Remove PCI mapping from the trunk page */
+ pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+
+	/* Release trunk page */
+ free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
+ buf_node_child->va = NULL;
+}
+
+static u8 *cqm_cla_get_unlock_lvl0(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct tag_cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+
+ /* Level 0 CLA pages are statically allocated. */
+ offset = index * cla_table->obj_size;
+ ret_addr = (u8 *)(cla_z_buf->buf_list->va) + offset;
+ *pa = cla_z_buf->buf_list->pa + offset;
+
+ return ret_addr;
+}
+
+static u8 *cqm_cla_get_unlock_lvl1(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct tag_cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct tag_cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf_list *buf_node_y = NULL;
+ struct tag_cqm_buf_list *buf_node_z = NULL;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+
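+	/* Split the object index: the low (z + 1) bits select the object
+	 * inside a Z trunk page, the remaining bits select which Z page.
+	 */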
+ z_index = index & ((1U << (cla_table->z + 1)) - 1);
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla get: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ return NULL;
+ }
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ buf_node_y = cla_y_buf->buf_list;
+
+ /* The z buf node does not exist, applying for a page first. */
+ if (!buf_node_z->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_y, buf_node_z,
+ y_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ cqm_err(handle->dev_hdl,
+ "Cla get: cla_table->type=%u\n",
+ cla_table->type);
+ return NULL;
+ }
+ }
+
+ cqm_dbg("Cla get: 1L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+
+ return ret_addr;
+}
+
+static u8 *cqm_cla_get_unlock_lvl2(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct tag_cqm_buf *cla_x_buf = &cla_table->cla_x_buf;
+ struct tag_cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct tag_cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf_list *buf_node_x = NULL;
+ struct tag_cqm_buf_list *buf_node_y = NULL;
+ struct tag_cqm_buf_list *buf_node_z = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u32 trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+ u64 tmp;
+
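+	/* Three-level split of the object index: z selects the object inside
+	 * a Z page, y selects the Z page inside a Y page, and x selects the
+	 * Y page; tmp is the flat index of the Z page in cla_z_buf.
+	 */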
+ z_index = index & ((1U << (cla_table->z + 1)) - 1);
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1U << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+ tmp = x_index * (trunk_size / sizeof(dma_addr_t)) + y_index;
+
+ if (x_index >= cla_y_buf->buf_number || tmp >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla get: index exceeds buf_number, x %u, y %u, y_buf_n %u, z_buf_n %u\n",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ return NULL;
+ }
+
+ buf_node_x = cla_x_buf->buf_list;
+ buf_node_y = &cla_y_buf->buf_list[x_index];
+ buf_node_z = &cla_z_buf->buf_list[tmp];
+
+ /* The y buf node does not exist, applying for pages for y node. */
+ if (!buf_node_y->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_x, buf_node_y,
+ x_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ return NULL;
+ }
+ }
+
+ /* The z buf node does not exist, applying for pages for z node. */
+ if (!buf_node_z->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_y, buf_node_z,
+ y_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ if (buf_node_y->refcount == 0)
+ /* To release node Y, cache_invalid is
+ * required.
+ */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_x, buf_node_y, x_index,
+ CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
+ return NULL;
+ }
+
+ cqm_dbg("Cla get: 2L: y_refcount=0x%x\n", buf_node_y->refcount);
+ /* reference counting of the y buffer node needs to increase
+ * by 1.
+ */
+ buf_node_y->refcount++;
+ }
+
+ cqm_dbg("Cla get: 2L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+
+ return ret_addr;
+}
+
+/**
+ * cqm_cla_get_unlock - Apply for 'count' buffer blocks starting at the index
+ *                      position in the CLA table. The unlocked path is used
+ *                      for static buffer allocation.
+ * @cqm_handle: CQM handle
+ * @cla_table: BAT table entry
+ * @index: the index position in the cla table
+ * @count: number of block buffer
+ * @pa: dma physical address
+ */
+u8 *cqm_cla_get_unlock(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ u8 *ret_addr = NULL;
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0)
+ ret_addr = cqm_cla_get_unlock_lvl0(cqm_handle, cla_table, index,
+ count, pa);
+ else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
+ ret_addr = cqm_cla_get_unlock_lvl1(cqm_handle, cla_table, index,
+ count, pa);
+ else
+ ret_addr = cqm_cla_get_unlock_lvl2(cqm_handle, cla_table, index,
+ count, pa);
+
+ return ret_addr;
+}
+
+/**
+ * cqm_cla_get_lock - Apply for 'count' buffer blocks starting at the index position
+ * in the CLA table. The locked path is used during dynamic buffer allocation.
+ * @cqm_handle: CQM handle
+ * @cla_table: BAT table entry
+ * @index: the index position in the cla table
+ * @count: number of block buffer
+ * @pa: dma physical address
+ */
+u8 *cqm_cla_get_lock(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ u8 *ret_addr = NULL;
+
+ mutex_lock(&cla_table->lock);
+
+ ret_addr = cqm_cla_get_unlock(cqm_handle, cla_table, index, count, pa);
+
+ mutex_unlock(&cla_table->lock);
+
+ return ret_addr;
+}
+
+/**
+ * cqm_cla_put - Decrease the reference count of the trunk page. If the count drops
+ * to 0, the trunk page is released.
+ * @cqm_handle: CQM handle
+ * @cla_table: BAT table entry
+ * @index: the index position in the cla table
+ * @count: number of block buffer
+ */
+void cqm_cla_put(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count)
+{
+ struct tag_cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct tag_cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct tag_cqm_buf *cla_x_buf = &cla_table->cla_x_buf;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf_list *buf_node_z = NULL;
+ struct tag_cqm_buf_list *buf_node_y = NULL;
+ struct tag_cqm_buf_list *buf_node_x = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ u64 tmp;
+
+	/* The buffer is statically allocated, so reference counting is
+	 * not needed.
+	 */
+ if (cla_table->alloc_static)
+ return;
+
+ mutex_lock(&cla_table->lock);
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_1) {
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla put: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ cqm_err(handle->dev_hdl,
+ "Cla put: cla_table->type=%u\n",
+ cla_table->type);
+ mutex_unlock(&cla_table->lock);
+ return;
+ }
+
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ buf_node_y = cla_y_buf->buf_list;
+
+ /* When the value of reference counting on the z node page is 0,
+ * the z node page is released.
+ */
+ cqm_dbg("Cla put: 1L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+ buf_node_z->refcount -= count;
+ if (buf_node_z->refcount == 0)
+			/* Cache invalidation is not required for the Z node. */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_y,
+ buf_node_z, y_index,
+ CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
+ } else if (cla_table->cla_lvl == CQM_CLA_LVL_2) {
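+		/* Same x/y/z index split as in cqm_cla_get_unlock_lvl2(). */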
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1U << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+ tmp = x_index * (trunk_size / sizeof(dma_addr_t)) + y_index;
+
+ if (x_index >= cla_y_buf->buf_number || tmp >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla put: index exceeds buf, x %u, y %u, y_buf_n %u, z_buf_n %u\n",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ mutex_unlock(&cla_table->lock);
+ return;
+ }
+
+ buf_node_x = cla_x_buf->buf_list;
+ buf_node_y = &cla_y_buf->buf_list[x_index];
+ buf_node_z = &cla_z_buf->buf_list[tmp];
+ cqm_dbg("Cla put: 2L: z_refcount=0x%x, count=0x%x\n",
+ buf_node_z->refcount, count);
+
+ /* When the value of reference counting on the z node page is 0,
+ * the z node page is released.
+ */
+ buf_node_z->refcount -= count;
+ if (buf_node_z->refcount == 0) {
+ cqm_cla_free(cqm_handle, cla_table, buf_node_y,
+ buf_node_z, y_index,
+ CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
+
+ /* When the value of reference counting on the y node
+ * page is 0, the y node page is released.
+ */
+ cqm_dbg("Cla put: 2L: y_refcount=0x%x\n",
+ buf_node_y->refcount);
+ buf_node_y->refcount--;
+ if (buf_node_y->refcount == 0)
+				/* Node Y requires cache invalidation. */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_x, buf_node_y,
+ x_index, CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
+ }
+ }
+
+ mutex_unlock(&cla_table->lock);
+}
+
+/**
+ * cqm_cla_table_get - Search for the CLA table data structure corresponding to a BAT entry type
+ * @bat_table: BAT table
+ * @entry_type: CLA table type
+ *
+ * RETURNS:
+ * Queried cla table
+ */
+struct tag_cqm_cla_table *cqm_cla_table_get(struct tag_cqm_bat_table *bat_table,
+ u32 entry_type)
+{
+ struct tag_cqm_cla_table *cla_table = NULL;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if ((cla_table != NULL) && (entry_type == cla_table->type))
+ return cla_table;
+ }
+
+ return NULL;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.h
new file mode 100644
index 0000000..a51c1dc
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bat_cla.h
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_BAT_CLA_H
+#define CQM_BAT_CLA_H
+
+#include <linux/types.h>
+#include <linux/mutex.h>
+
+#include "cqm_bitmap_table.h"
+#include "cqm_object.h"
+#include "vram_common.h"
+
+/* When the connection check is enabled, the maximum number of connections
+ * supported by the chip is 1M - 63, which cannot reach 1M
+ */
+#define CQM_BAT_MAX_CONN_NUM (0x100000 - 63)
+#define CQM_BAT_MAX_CACHE_CONN_NUM (0x100000 - 63)
+
+#define CLA_TABLE_PAGE_ORDER 0
+#define CQM_4K_PAGE_ORDER 0
+#define CQM_4K_PAGE_SIZE 4096
+#define CQM_8K_PAGE_ORDER 1
+
+#define CQM_BAT_ENTRY_MAX 16
+#define CQM_BAT_ENTRY_SIZE 16
+#define CQM_BAT_STORE_API_SIZE 16
+
+#define CQM_BAT_SIZE_FT_RDMA_PF 240
+#define CQM_BAT_SIZE_FT_RDMA_VF 160
+#define CQM_BAT_SIZE_FT_PF 192
+#define CQM_BAT_SIZE_FT_VF 112
+#define CQM_BAT_SIZE_RDMA_PF 160
+#define CQM_BAT_SIZE_RDMA_VF 80
+
+#define CQM_BAT_INDEX0 0
+#define CQM_BAT_INDEX1 1
+#define CQM_BAT_INDEX2 2
+#define CQM_BAT_INDEX3 3
+#define CQM_BAT_INDEX4 4
+#define CQM_BAT_INDEX5 5
+#define CQM_BAT_INDEX6 6
+#define CQM_BAT_INDEX7 7
+#define CQM_BAT_INDEX8 8
+#define CQM_BAT_INDEX9 9
+#define CQM_BAT_INDEX10 10
+#define CQM_BAT_INDEX11 11
+#define CQM_BAT_INDEX12 12
+#define CQM_BAT_INDEX13 13
+#define CQM_BAT_INDEX14 14
+#define CQM_BAT_INDEX15 15
+
+enum cqm_bat_entry_type {
+ CQM_BAT_ENTRY_T_CFG = 0,
+ CQM_BAT_ENTRY_T_HASH = 1,
+ CQM_BAT_ENTRY_T_QPC = 2,
+ CQM_BAT_ENTRY_T_SCQC = 3,
+ CQM_BAT_ENTRY_T_SRQC = 4,
+ CQM_BAT_ENTRY_T_MPT = 5,
+ CQM_BAT_ENTRY_T_GID = 6,
+ CQM_BAT_ENTRY_T_LUN = 7,
+ CQM_BAT_ENTRY_T_TASKMAP = 8,
+ CQM_BAT_ENTRY_T_L3I = 9,
+ CQM_BAT_ENTRY_T_CHILDC = 10,
+ CQM_BAT_ENTRY_T_TIMER = 11,
+ CQM_BAT_ENTRY_T_XID2CID = 12,
+ CQM_BAT_ENTRY_T_REORDER = 13,
+ CQM_BAT_ENTRY_T_INVALID = 14,
+ CQM_BAT_ENTRY_T_MAX = 15,
+};
+
+/* CLA update mode */
+#define CQM_CLA_RECORD_NEW_GPA 0
+#define CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID 1
+#define CQM_CLA_DEL_GPA_WITH_CACHE_INVALID 2
+
+#define CQM_CLA_LVL_0 0
+#define CQM_CLA_LVL_1 1
+#define CQM_CLA_LVL_2 2
+
+#define CQM_MAX_INDEX_BIT 19
+
+#define CQM_CHIP_CACHELINE 256
+#define CQM_CHIP_TIMER_CACHELINE 512
+#define CQM_OBJECT_256 256
+#define CQM_OBJECT_512 512
+#define CQM_OBJECT_1024 1024
+#define CQM_CHIP_GPA_MASK 0x1ffffffffffffff
+#define CQM_CHIP_GPA_HIMASK 0x1ffffff
+#define CQM_CHIP_GPA_LOMASK 0xffffffff
+#define CQM_CHIP_GPA_HSHIFT 32
+
+/* Aligns with 64 buckets and shifts rightward by 6 bits */
+#define CQM_HASH_NUMBER_UNIT 6
+
+struct tag_cqm_cla_table {
+ u32 type;
+ u32 max_buffer_size;
+ u32 obj_num;
+ bool alloc_static; /* Whether the buffer is statically allocated */
+ u32 cla_lvl;
+ u32 cacheline_x; /* x value calculated based on cacheline,
+ * used by the chip
+ */
+ u32 cacheline_y; /* y value calculated based on cacheline,
+ * used by the chip
+ */
+ u32 cacheline_z; /* z value calculated based on cacheline,
+ * used by the chip
+ */
+ u32 x; /* x value calculated based on obj_size, used by software */
+ u32 y; /* y value calculated based on obj_size, used by software */
+ u32 z; /* z value calculated based on obj_size, used by software */
+ struct tag_cqm_buf cla_x_buf;
+ struct tag_cqm_buf cla_y_buf;
+ struct tag_cqm_buf cla_z_buf;
+ u32 trunk_order; /* A continuous physical page contains 2^order pages */
+ u32 obj_size;
+ struct mutex lock; /* Lock for cla buffer allocation and free */
+
+ struct tag_cqm_bitmap bitmap;
+
+ struct tag_cqm_object_table obj_table; /* Mapping table between
+ * indexes and objects
+ */
+ char name[VRAM_NAME_APPLY_LEN];
+};
+
+struct tag_cqm_bat_entry_cfg {
+ u32 cur_conn_num_h_4 : 4;
+ u32 rsv1 : 4;
+ u32 max_conn_num : 20;
+ u32 rsv2 : 4;
+
+ u32 max_conn_cache : 10;
+ u32 rsv3 : 6;
+ u32 cur_conn_num_l_16 : 16;
+
+ u32 bloom_filter_addr : 16;
+ u32 cur_conn_cache : 10;
+ u32 rsv4 : 6;
+
+ u32 bucket_num : 16;
+ u32 bloom_filter_len : 16;
+};
+
+#define CQM_BAT_NO_BYPASS_CACHE 0
+#define CQM_BAT_BYPASS_CACHE 1
+
+#define CQM_BAT_ENTRY_SIZE_256 0
+#define CQM_BAT_ENTRY_SIZE_512 1
+#define CQM_BAT_ENTRY_SIZE_1024 2
+
+struct tag_cqm_bat_entry_standerd {
+ u32 entry_size : 2;
+ u32 rsv1 : 6;
+ u32 max_number : 20;
+ u32 rsv2 : 4;
+
+ u32 cla_gpa_h : 32;
+
+ u32 cla_gpa_l : 32;
+
+ u32 rsv3 : 8;
+ u32 z : 5;
+ u32 y : 5;
+ u32 x : 5;
+ u32 rsv24 : 1;
+ u32 bypass : 1;
+ u32 cla_level : 2;
+ u32 rsv5 : 5;
+};
+
+struct tag_cqm_bat_entry_vf2pf {
+ u32 cla_gpa_h : 25;
+ u32 pf_id : 5;
+ u32 fake_vf_en : 1;
+ u32 acs_spu_en : 1;
+};
+
+#define CQM_BAT_ENTRY_TASKMAP_NUM 4
+struct tag_cqm_bat_entry_taskmap_addr {
+ u32 gpa_h;
+ u32 gpa_l;
+};
+
+struct tag_cqm_bat_entry_taskmap {
+ struct tag_cqm_bat_entry_taskmap_addr addr[CQM_BAT_ENTRY_TASKMAP_NUM];
+};
+
+struct tag_cqm_bat_table {
+ u32 bat_entry_type[CQM_BAT_ENTRY_MAX];
+ u8 bat[CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE];
+ struct tag_cqm_cla_table entry[CQM_BAT_ENTRY_MAX];
+ /* In LB mode 1, the timer needs to be configured in 4 SMFs,
+ * and the GPAs must be different and independent.
+ */
+ struct tag_cqm_cla_table timer_entry[4];
+ u32 bat_size;
+};
+
+s32 cqm_bat_init(struct tag_cqm_handle *cqm_handle);
+void cqm_bat_uninit(struct tag_cqm_handle *cqm_handle);
+s32 cqm_cla_init(struct tag_cqm_handle *cqm_handle);
+void cqm_cla_uninit(struct tag_cqm_handle *cqm_handle, u32 entry_numb);
+u8 *cqm_cla_get_unlock(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa);
+u8 *cqm_cla_get_lock(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa);
+void cqm_cla_put(struct tag_cqm_handle *cqm_handle, struct tag_cqm_cla_table *cla_table,
+ u32 index, u32 count);
+struct tag_cqm_cla_table *cqm_cla_table_get(struct tag_cqm_bat_table *bat_table,
+ u32 entry_type);
+u32 cqm_funcid2smfid(const struct tag_cqm_handle *cqm_handle);
+
+#endif /* CQM_BAT_CLA_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.c
new file mode 100644
index 0000000..86b268c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.c
@@ -0,0 +1,1516 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/mm.h>
+#include <linux/gfp.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "cqm_memsec.h"
+#include "cqm_object.h"
+#include "cqm_bat_cla.h"
+#include "cqm_cmd.h"
+#include "cqm_object_intern.h"
+#include "cqm_main.h"
+
+#include "cqm_npu_cmd.h"
+#include "cqm_npu_cmd_defs.h"
+#include "vram_common.h"
+
+#define common_section
+
+struct malloc_memory {
+ bool (*check_alloc_mode)(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf);
+ s32 (*malloc_func)(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf);
+};
+
+struct free_memory {
+ bool (*check_alloc_mode)(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf);
+ void (*free_func)(struct tag_cqm_buf *buf);
+};
+
+/**
+ * Prototype : cqm_swab64(Encapsulation of __swab64)
+ * Description : Perform big-endian conversion for a memory block (8 bytes).
+ * Input : u8 *addr: Start address of the memory block
+ * u32 cnt: Number of 8 bytes in the memory block
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_swab64(u8 *addr, u32 cnt)
+{
+ u64 *temp = (u64 *)addr;
+ u64 value = 0;
+ u32 i;
+
+ for (i = 0; i < cnt; i++) {
+ value = __swab64(*temp);
+ *temp = value;
+ temp++;
+ }
+}
+
+/**
+ * Prototype : cqm_swab32(Encapsulation of __swab32)
+ * Description : Perform big-endian conversion for a memory block (4 bytes).
+ * Input : u8 *addr: Start address of the memory block
+ * u32 cnt: Number of 4 bytes in the memory block
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/7/23
+ * Modification : Created function
+ */
+void cqm_swab32(u8 *addr, u32 cnt)
+{
+ u32 *temp = (u32 *)addr;
+ u32 value = 0;
+ u32 i;
+
+ for (i = 0; i < cnt; i++) {
+ value = __swab32(*temp);
+ *temp = value;
+ temp++;
+ }
+}
+
+/**
+ * Prototype : cqm_shift
+ * Description : Calculate n for a 2^n value (i.e. the base-2 logarithm).
+ * Input : u32 data
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+s32 cqm_shift(u32 data)
+{
+ u32 data_num = data;
+ s32 shift = -1;
+
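+	/* e.g. cqm_shift(4096) returns 12; both 0 and 1 return 0. */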
+ do {
+ data_num >>= 1;
+ shift++;
+ } while (data_num);
+
+ return shift;
+}
+
+/**
+ * Prototype : cqm_check_align
+ * Description : Check whether the value is 2^n-aligned. If 0 or 1, false is
+ * returned.
+ * Input : u32 data
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/9/15
+ * Modification : Created function
+ */
+bool cqm_check_align(u32 data)
+{
+ u32 data_num = data;
+
+ if (data == 0)
+ return false;
+
+ /* Todo: (n & (n - 1) == 0) can be used to determine the value. */
+ do {
+ /* When the value can be exactly divided by 2,
+ * the value of data is shifted right by one bit, that is,
+ * divided by 2.
+ */
+ if ((data_num & 0x1) == 0)
+ data_num >>= 1;
+ /* If the value cannot be divisible by 2, the value is
+ * not 2^n-aligned and false is returned.
+ */
+ else
+ return false;
+ } while (data_num != 1);
+
+ return true;
+}
+
+/**
+ * Prototype : cqm_kmalloc_align
+ * Description : Allocates 2^n-byte-aligned memory for the start address.
+ * Input : size_t size
+ * gfp_t flags
+ * u16 align_order
+ * Output : None
+ * Return Value : void *
+ * 1.Date : 2017/9/22
+ * Modification : Created function
+ */
+void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order)
+{
+ void *orig_addr = NULL;
+ void *align_addr = NULL;
+ void *index_addr = NULL;
+
+ orig_addr = kmalloc(size + ((u64)1 << align_order) + sizeof(void *),
+ flags);
+ if (!orig_addr)
+ return NULL;
+
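+	/* Layout: orig_addr .. padding .. saved orig pointer .. aligned block.
+	 * One pointer is reserved just in front of the aligned address so that
+	 * cqm_kfree_align() can recover the original kmalloc address,
+	 * e.g. align_order = 12 yields a 4 KB-aligned start address.
+	 */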
+ index_addr = (void *)((char *)orig_addr + sizeof(void *));
+ align_addr =
+ (void *)((((u64)index_addr + ((u64)1 << align_order) - 1) >>
+ align_order) << align_order);
+
+ /* Record the original memory address for memory release. */
+ index_addr = (void *)((char *)align_addr - sizeof(void *));
+ *(void **)index_addr = orig_addr;
+
+ return align_addr;
+}
+
+/**
+ * Prototype : cqm_kfree_align
+ * Description : Release the memory allocated for starting address alignment.
+ * Input : void *addr
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2017/9/22
+ * Modification : Created function
+ */
+void cqm_kfree_align(void *addr)
+{
+ void *index_addr = NULL;
+
+	/* Retrieve the original kmalloc address stored just before the aligned block. */
+ index_addr = (void *)((char *)addr - sizeof(void *));
+
+ cqm_dbg("free aligned address: %p, original address: %p\n", addr,
+ *(void **)index_addr);
+
+ kfree(*(void **)index_addr);
+}
+
+static void cqm_write_lock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ write_lock_bh(lock);
+ else
+ write_lock(lock);
+}
+
+static void cqm_write_unlock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ write_unlock_bh(lock);
+ else
+ write_unlock(lock);
+}
+
+static void cqm_read_lock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ read_lock_bh(lock);
+ else
+ read_lock(lock);
+}
+
+static void cqm_read_unlock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ read_unlock_bh(lock);
+ else
+ read_unlock(lock);
+}
+
+static inline bool cqm_bat_entry_in_secure_mem(void *handle, u32 type)
+{
+ if (!cqm_need_secure_mem(handle))
+ return false;
+
+ if (type == CQM_BAT_ENTRY_T_QPC || type == CQM_BAT_ENTRY_T_SCQC ||
+ type == CQM_BAT_ENTRY_T_SRQC || type == CQM_BAT_ENTRY_T_MPT)
+ return true;
+
+ return false;
+}
+
+s32 cqm_buf_alloc_direct(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf, bool direct)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct page **pages = NULL;
+ u32 i, j, order;
+
+ order = (u32)get_order(buf->buf_size);
+
+ if (!direct) {
+ buf->direct.va = NULL;
+ return CQM_SUCCESS;
+ }
+
+ pages = vmalloc(sizeof(struct page *) * buf->page_number);
+ if (!pages) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(pages));
+ return CQM_FAIL;
+ }
+
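+	/* Build a page array covering every page of every buffer block so
+	 * that vmap() can remap the whole buffer into one virtually
+	 * contiguous area (buf->direct.va).
+	 */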
+ for (i = 0; i < buf->buf_number; i++) {
+ for (j = 0; j < ((u32)1 << order); j++)
+ pages[(ulong)(unsigned int)((i << order) + j)] =
+ (void *)virt_to_page((u8 *)(buf->buf_list[i].va) + (PAGE_SIZE * j));
+ }
+
+ buf->direct.va = vmap(pages, buf->page_number, VM_MAP, PAGE_KERNEL);
+ vfree(pages);
+ if (!buf->direct.va) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf->direct.va));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static bool check_use_vram(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ return buf->buf_info.use_vram ? true : false;
+}
+
+static bool check_use_non_vram(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ return buf->buf_info.use_vram ? false : true;
+}
+
+static bool check_for_use_node_alloc(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ if (buf->buf_info.use_vram == 0 && handle->board_info.service_mode == 0)
+ return true;
+
+ return false;
+}
+
+static bool check_for_nouse_node_alloc(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ if (buf->buf_info.use_vram == 0 && handle->board_info.service_mode != 0)
+ return true;
+
+ return false;
+}
+
+static s32 cqm_buf_vram_kalloc(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ void *vaddr = NULL;
+ int i;
+
+ vaddr = hi_vram_kalloc(buf->buf_info.buf_vram_name, (u64)buf->buf_size * buf->buf_number);
+ if (!vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(buf_page));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (s32)buf->buf_number; i++)
+ buf->buf_list[i].va = (void *)((char *)vaddr + i * (u64)buf->buf_size);
+
+ return CQM_SUCCESS;
+}
+
+static void cqm_buf_vram_free(struct tag_cqm_buf *buf)
+{
+ s32 i;
+
+ if (buf->buf_list == NULL) {
+ return;
+ }
+
+ if (buf->buf_list[0].va)
+ hi_vram_kfree(buf->buf_list[0].va, buf->buf_info.buf_vram_name,
+ (u64)buf->buf_size * buf->buf_number);
+
+ for (i = 0; i < (s32)buf->buf_number; i++)
+ buf->buf_list[i].va = NULL;
+}
+
+static void cqm_buf_free_page_common(struct tag_cqm_buf *buf)
+{
+ u32 order;
+ s32 i;
+
+ if (buf->buf_list == NULL) {
+ return;
+ }
+
+ order = (u32)get_order(buf->buf_size);
+
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ if (buf->buf_list[i].va) {
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+ }
+}
+
+static s32 cqm_buf_use_node_alloc_page(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ struct page *newpage = NULL;
+ u32 order;
+ void *va = NULL;
+ s32 i, node;
+
+ order = (u32)get_order(buf->buf_size);
+ node = dev_to_node(handle->dev_hdl);
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ newpage = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, order);
+ if (!newpage) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(buf_page));
+ break;
+ }
+ va = (void *)page_address(newpage);
+ /* Initialize the page after the page is applied for.
+ * If hash entries are involved, the initialization
+ * value must be 0.
+ */
+ memset(va, 0, buf->buf_size);
+ buf->buf_list[i].va = va;
+ }
+
+ if (i != buf->buf_number) {
+ cqm_buf_free_page_common(buf);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_buf_unused_node_alloc_page(struct hinic3_hwdev *handle, struct tag_cqm_buf *buf)
+{
+ u32 order;
+ void *va = NULL;
+ s32 i;
+
+ order = (u32)get_order(buf->buf_size);
+
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ va = (void *)ossl_get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
+ if (!va) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(buf_page));
+ break;
+ }
+ /* Initialize the page after the page is applied for.
+ * If hash entries are involved, the initialization
+ * value must be 0.
+ */
+ memset(va, 0, buf->buf_size);
+ buf->buf_list[i].va = va;
+ }
+
+ if (i != buf->buf_number) {
+ cqm_buf_free_page_common(buf);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static const struct malloc_memory g_malloc_funcs[] = {
+ {check_use_vram, cqm_buf_vram_kalloc},
+ {check_for_use_node_alloc, cqm_buf_use_node_alloc_page},
+ {check_for_nouse_node_alloc, cqm_buf_unused_node_alloc_page}
+};
+
+static const struct free_memory g_free_funcs[] = {
+ {check_use_vram, cqm_buf_vram_free},
+ {check_use_non_vram, cqm_buf_free_page_common}
+};
+
+static s32 cqm_buf_alloc_page(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 malloc_funcs_num = ARRAY_SIZE(g_malloc_funcs);
+ u32 i;
+
+ for (i = 0; i < malloc_funcs_num; i++) {
+ if (g_malloc_funcs[i].check_alloc_mode &&
+ g_malloc_funcs[i].malloc_func &&
+ g_malloc_funcs[i].check_alloc_mode(handle, buf))
+ return g_malloc_funcs[i].malloc_func(handle, buf);
+ }
+
+ cqm_err(handle->dev_hdl, "Unknown alloc mode\n");
+
+ return CQM_FAIL;
+}
+
+static void cqm_buf_free_page(struct tag_cqm_buf *buf)
+{
+ u32 free_funcs_num = ARRAY_SIZE(g_free_funcs);
+ u32 i;
+
+ for (i = 0; i < free_funcs_num; i++) {
+ if (g_free_funcs[i].check_alloc_mode &&
+ g_free_funcs[i].free_func &&
+ g_free_funcs[i].check_alloc_mode(NULL, buf))
+ return g_free_funcs[i].free_func(buf);
+ }
+}
+
+static s32 cqm_buf_alloc_map(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ void *va = NULL;
+ s32 i;
+
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ va = buf->buf_list[i].va;
+ buf->buf_list[i].pa = pci_map_single(dev, va, buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(dev, buf->buf_list[i].pa)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_list));
+ break;
+ }
+ }
+
+ if (i != buf->buf_number) {
+ i--;
+ for (; i >= 0; i--)
+ pci_unmap_single(dev, buf->buf_list[i].pa,
+ buf->buf_size, PCI_DMA_BIDIRECTIONAL);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_buf_get_secure_mem_pages(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ for (i = 0; i < buf->buf_number; i++) {
+ buf->buf_list[i].va =
+ cqm_get_secure_mem_pages(handle,
+ (u32)get_order(buf->buf_size),
+ &buf->buf_list[i].pa);
+ if (!buf->buf_list[i].va) {
+ cqm_err(handle->dev_hdl,
+ CQM_ALLOC_FAIL(cqm_get_secure_mem_pages));
+ break;
+ }
+ }
+
+ if (i != buf->buf_number) {
+ cqm_free_secure_mem_pages(handle, buf->buf_list[0].va,
+ (u32)get_order(buf->buf_size));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * Prototype : cqm_buf_alloc
+ * Description : Apply for buffer space and DMA mapping for the struct tag_cqm_buf
+ * structure.
+ * Input : struct tag_cqm_buf *buf
+ * struct pci_dev *dev
+ * bool direct: Whether direct remapping is required
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+s32 cqm_buf_alloc(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf, bool direct)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ s32 i;
+ s32 ret;
+
+ /* Applying for the buffer list descriptor space */
+ buf->buf_list = vmalloc(buf->buf_number * sizeof(struct tag_cqm_buf_list));
+ if (!buf->buf_list)
+ return CQM_FAIL;
+ memset(buf->buf_list, 0, buf->buf_number * sizeof(struct tag_cqm_buf_list));
+
+ /* Page for applying for each buffer */
+ if (cqm_bat_entry_in_secure_mem((void *)handle, buf->bat_entry_type))
+ ret = cqm_buf_get_secure_mem_pages(cqm_handle, buf);
+ else
+ ret = cqm_buf_alloc_page(cqm_handle, buf);
+
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(linux_cqm_buf_alloc_page));
+ goto err1;
+ }
+
+ /* PCI mapping of the buffer */
+ if (!cqm_bat_entry_in_secure_mem((void *)handle, buf->bat_entry_type)) {
+ if (cqm_buf_alloc_map(cqm_handle, buf) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(linux_cqm_buf_alloc_map));
+ goto err2;
+ }
+ }
+
+ /* direct remapping */
+ if (cqm_buf_alloc_direct(cqm_handle, buf, direct) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_buf_alloc_direct));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ if (!cqm_bat_entry_in_secure_mem((void *)handle, buf->bat_entry_type)) {
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ pci_unmap_single(dev, buf->buf_list[i].pa, buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ }
+ }
+err2:
+ if (cqm_bat_entry_in_secure_mem((void *)handle, buf->bat_entry_type))
+ cqm_free_secure_mem_pages(handle, buf->buf_list[0].va,
+ (u32)get_order(buf->buf_size));
+ else
+ cqm_buf_free_page(buf);
+err1:
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+ return CQM_FAIL;
+}
+
+/**
+ * Prototype : cqm_buf_free
+ * Description : Release the buffer space and DMA mapping for the struct tag_cqm_buf
+ * structure.
+ * Input : struct tag_cqm_buf *buf
+ * struct pci_dev *dev
+ * bool direct: Whether direct remapping is required
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_buf_free(struct tag_cqm_buf *buf, struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ s32 i;
+
+ if (buf->direct.va) {
+ vunmap(buf->direct.va);
+ buf->direct.va = NULL;
+ }
+
+ if (!buf->buf_list)
+ return;
+
+ if (cqm_bat_entry_in_secure_mem(handle, buf->bat_entry_type)) {
+ cqm_free_secure_mem_pages(handle, buf->buf_list[0].va,
+ (u32)get_order(buf->buf_size));
+ goto free;
+ }
+
+ for (i = 0; i < (s32)(buf->buf_number); i++) {
+ if (buf->buf_list[i].va)
+ pci_unmap_single(dev, buf->buf_list[i].pa,
+ buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ }
+ cqm_buf_free_page(buf);
+
+free:
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+}
+
+static s32 cqm_cla_cache_invalid_cmd(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_cmd_buf *buf_in,
+ struct tag_cqm_cla_cache_invalid_cmd *cmd)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cla_cache_invalid_cmd *cla_cache_invalid_cmd = NULL;
+ s32 ret;
+
+ cla_cache_invalid_cmd = (struct tag_cqm_cla_cache_invalid_cmd *)(buf_in->buf);
+ cla_cache_invalid_cmd->gpa_h = cmd->gpa_h;
+ cla_cache_invalid_cmd->gpa_l = cmd->gpa_l;
+ cla_cache_invalid_cmd->cache_size = cmd->cache_size;
+ cla_cache_invalid_cmd->smf_id = cmd->smf_id;
+ cla_cache_invalid_cmd->func_id = cmd->func_id;
+
+ cqm_swab32((u8 *)cla_cache_invalid_cmd,
+ /* shift 2 bits by right to get length of dw(4B) */
+ (sizeof(struct tag_cqm_cla_cache_invalid_cmd) >> 2));
+
+ /* Send the cmdq command. */
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_CLA_CACHE_INVALID, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, HINIC3_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl,
+ "Cla cache invalid: cqm_send_cmd_box_ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl,
+ "Cla cache invalid: cla_cache_invalid_cmd: 0x%x 0x%x 0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->cache_size);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_cache_invalid(struct tag_cqm_handle *cqm_handle, dma_addr_t pa, u32 cache_size)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cmd_buf *buf_in = NULL;
+ struct hinic3_func_attr *func_attr = NULL;
+ struct tag_cqm_bat_entry_vf2pf gpa = {0};
+ struct tag_cqm_cla_cache_invalid_cmd cmd;
+ u32 cla_gpa_h = 0;
+ s32 ret = CQM_FAIL;
+ u32 i;
+
+ buf_in = cqm_cmd_alloc((void *)(cqm_handle->ex_handle));
+ if (!buf_in)
+ return CQM_FAIL;
+ buf_in->size = sizeof(struct tag_cqm_cla_cache_invalid_cmd);
+
+ gpa.cla_gpa_h = CQM_ADDR_HI(pa) & CQM_CHIP_GPA_HIMASK;
+
+ /* On the SPU, the value of spu_en in the GPA address
+	 * in the BAT is determined by the host ID and func ID.
+ */
+ if (hinic3_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
+ func_attr = &cqm_handle->func_attribute;
+ gpa.acs_spu_en = func_attr->func_global_idx & 0x1;
+ } else {
+ gpa.acs_spu_en = 0;
+ }
+
+	/* In non-fake mode, set func_id to 0xffff. If the current function
+	 * is a fake (child) function, set func_id to its fake func_id.
+	 */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD) {
+ cmd.func_id = cqm_handle->func_attribute.func_global_idx;
+ func_attr = &cqm_handle->parent_cqm_handle->func_attribute;
+ gpa.fake_vf_en = 1;
+ gpa.pf_id = func_attr->func_global_idx;
+ } else {
+ cmd.func_id = 0xffff;
+ }
+
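+	/* Pack the vf2pf bit-fields (cla_gpa_h/pf_id/fake_vf_en/acs_spu_en)
+	 * into the upper 32 bits of the GPA passed to the chip.
+	 */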
+ memcpy(&cla_gpa_h, &gpa, sizeof(u32));
+
+ /* Fill command and convert it to big endian */
+ cmd.cache_size = cache_size;
+ cmd.gpa_l = CQM_ADDR_LW(pa);
+ cmd.gpa_h = cla_gpa_h;
+
+ /* The normal mode is the 1822 traditional mode and is all configured
+ * on SMF0.
+ */
+ /* Mode 0 is hashed to 4 SMF engines (excluding PPF) by func ID. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ cmd.smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_cla_cache_invalid_cmd(cqm_handle, buf_in, &cmd);
+ /* Mode 1/2 are allocated to 4 SMF engines by flow. Therefore,
+ * one function needs to be allocated to 4 SMF engines.
+ */
+ /* The PPF in mode 0 needs to be configured on 4 engines,
+ * and the timer resources need to be shared by the 4 engines.
+ */
+ } else if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2 ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type == CQM_PPF)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+			/* The smf_pg field stores the currently enabled SMF engines. */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ cmd.smf_id = i;
+ ret = cqm_cla_cache_invalid_cmd(cqm_handle,
+ buf_in, &cmd);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+		cqm_err(handle->dev_hdl, "Cla cache invalid: unsupported lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+static void free_cache_inv(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf,
+ s32 *inv_flag)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 order;
+ s32 i;
+
+ order = (u32)get_order(buf->buf_size);
+
+ if (!handle->chip_present_flag)
+ return;
+
+ if (!buf->buf_list)
+ return;
+
+ for (i = 0; i < (s32)(buf->buf_number); i++) {
+ if (!buf->buf_list[i].va)
+ continue;
+
+ if (*inv_flag != CQM_SUCCESS)
+ continue;
+
+ /* In the Pangea environment, if the cmdq times out,
+ * no subsequent message is sent.
+ */
+ *inv_flag = cqm_cla_cache_invalid(cqm_handle, buf->buf_list[i].pa,
+ (u32)(PAGE_SIZE << order));
+ if (*inv_flag != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl,
+ "Buffer free: fail to invalid buf_list pa cache, inv_flag=%d\n",
+ *inv_flag);
+ }
+}
+
+void cqm_buf_free_cache_inv(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf,
+ s32 *inv_flag)
+{
+ /* Send a command to the chip to kick out the cache. */
+ free_cache_inv(cqm_handle, buf, inv_flag);
+
+ /* Clear host resources */
+ cqm_buf_free(buf, cqm_handle);
+}
+
+void cqm_byte_print(u32 *ptr, u32 len)
+{
+ u32 i;
+ u32 len_num = len;
+
+ len_num = (len_num >> 0x2);
+ for (i = 0; i < len_num; i = i + 0x4) {
+ cqm_dbg("%.8x %.8x %.8x %.8x\n", ptr[i], ptr[i + 1],
+ ptr[i + 2], /* index increases by 2 */
+ ptr[i + 3]); /* index increases by 3 */
+ }
+}
+
+#define bitmap_section
+
+/**
+ * Prototype : cqm_single_bitmap_init
+ * Description : Initialize a bitmap.
+ * Input : struct tag_cqm_bitmap *bitmap
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/9/9
+ * Modification : Created function
+ */
+static s32 cqm_single_bitmap_init(struct tag_cqm_bitmap *bitmap)
+{
+ u32 bit_number;
+
+ spin_lock_init(&bitmap->lock);
+
+ /* Max_num of the bitmap is 8-aligned and then
+ * shifted rightward by 3 bits to obtain the number of bytes required.
+ */
+ bit_number = (ALIGN(bitmap->max_num, CQM_NUM_BIT_BYTE) >>
+ CQM_BYTE_BIT_SHIFT);
+ if (bitmap->bitmap_info.use_vram != 0)
+ bitmap->table = hi_vram_kalloc(bitmap->bitmap_info.buf_vram_name, bit_number);
+ else
+ bitmap->table = vmalloc(bit_number);
+ if (!bitmap->table)
+ return CQM_FAIL;
+ memset(bitmap->table, 0, bit_number);
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_bitmap_toe_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_bitmap *bitmap = NULL;
+
+ /* SRQC of TOE services is not managed through the CLA table,
+ * but the bitmap is required to manage SRQid.
+ */
+ if (cqm_handle->service[CQM_SERVICE_T_TOE].valid) {
+ bitmap = &cqm_handle->toe_own_capability.srqc_bitmap;
+ bitmap->max_num =
+ cqm_handle->toe_own_capability.toe_srqc_number;
+ bitmap->reserved_top = 0;
+ bitmap->reserved_back = 0;
+ bitmap->last = 0;
+ if (bitmap->max_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: toe_srqc_number=0, don't init bitmap\n");
+ return CQM_SUCCESS;
+ }
+
+ if (cqm_single_bitmap_init(bitmap) != CQM_SUCCESS)
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static void cqm_bitmap_toe_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bitmap *bitmap = NULL;
+
+ if (cqm_handle->service[CQM_SERVICE_T_TOE].valid) {
+ bitmap = &cqm_handle->toe_own_capability.srqc_bitmap;
+ if (bitmap->table) {
+ spin_lock_deinit(&bitmap->lock);
+ vfree(bitmap->table);
+ bitmap->table = NULL;
+ }
+ }
+}
+
+/**
+ * Prototype : cqm_bitmap_init
+ * Description : Initialize the bitmap.
+ * Input : struct tag_cqm_handle *cqm_handle
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+s32 cqm_bitmap_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ s32 ret = CQM_SUCCESS;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't init bitmap\n",
+ cla_table->type);
+ continue;
+ }
+
+ bitmap = &cla_table->bitmap;
+ snprintf(bitmap->bitmap_info.buf_vram_name, VRAM_NAME_APPLY_LEN,
+ "%s%s%02d", cla_table->name,
+ VRAM_CQM_BITMAP_BASE, cla_table->type);
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_QPC:
+ bitmap->max_num = capability->qpc_number;
+ bitmap->reserved_top = capability->qpc_reserved;
+ bitmap->reserved_back = capability->qpc_reserved_back;
+ bitmap->last = capability->qpc_reserved;
+ bitmap->bitmap_info.use_vram = get_use_vram_flag();
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ bitmap->max_num = capability->mpt_number;
+ bitmap->reserved_top = capability->mpt_reserved;
+ bitmap->reserved_back = 0;
+ bitmap->last = capability->mpt_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ bitmap->max_num = capability->scqc_number;
+ bitmap->reserved_top = capability->scq_reserved;
+ bitmap->reserved_back = 0;
+ bitmap->last = capability->scq_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ bitmap->max_num = capability->srqc_number;
+ bitmap->reserved_top = capability->srq_reserved;
+ bitmap->reserved_back = 0;
+ bitmap->last = capability->srq_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ default:
+ break;
+ }
+
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Bitmap init: failed to init cla_table_type=%u, obj_num=0x%x\n",
+ cla_table->type, cla_table->obj_num);
+ goto err;
+ }
+ }
+
+ if (cqm_bitmap_toe_init(cqm_handle) != CQM_SUCCESS)
+ goto err;
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_bitmap_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * Prototype : cqm_bitmap_uninit
+ * Description : Deinitialize the bitmap.
+ * Input : struct tag_cqm_handle *cqm_handle
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_bitmap_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ bitmap = &cla_table->bitmap;
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID &&
+ bitmap->table) {
+ spin_lock_deinit(&bitmap->lock);
+ if (bitmap->bitmap_info.use_vram != 0)
+ hi_vram_kfree(bitmap->table, bitmap->bitmap_info.buf_vram_name,
+ ALIGN(bitmap->max_num, CQM_NUM_BIT_BYTE) >>
+ CQM_BYTE_BIT_SHIFT);
+ else
+ vfree(bitmap->table);
+ bitmap->table = NULL;
+ }
+ }
+
+ cqm_bitmap_toe_uninit(cqm_handle);
+}
+
+/**
+ * Prototype : cqm_bitmap_check_range
+ * Description  : Starting from begin, check whether 'count' consecutive bits
+ *                are idle in the table. Requirements:
+ *                1. The group of bits cannot cross a step boundary.
+ *                2. All bits in the group must be 0.
+ * Input : const ulong *table,
+ * u32 step,
+ * u32 max_num,
+ * u32 begin,
+ * u32 count
+ * Output : None
+ * Return Value : u32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+static u32 cqm_bitmap_check_range(const ulong *table, u32 step, u32 max_num, u32 begin,
+ u32 count)
+{
+ u32 end = (begin + (count - 1));
+ u32 i;
+
+ /* Single-bit check is not performed. */
+ if (count == 1)
+ return begin;
+
+ /* The end value exceeds the threshold. */
+ if (end >= max_num)
+ return max_num;
+
+ /* Bit check, the next bit is returned when a non-zero bit is found. */
+ for (i = (begin + 1); i <= end; i++) {
+ if (test_bit((int)i, table))
+ return i + 1;
+ }
+
+ /* Check whether it's in different steps. */
+ if ((begin & (~(step - 1))) != (end & (~(step - 1))))
+ return (end & (~(step - 1)));
+
+ /* If the check succeeds, begin is returned. */
+ return begin;
+}
+
+static void cqm_bitmap_find(struct tag_cqm_bitmap *bitmap, u32 *index, u32 last,
+ u32 step, u32 count)
+{
+ u32 last_num = last;
+ u32 max_num = bitmap->max_num - bitmap->reserved_back;
+ ulong *table = bitmap->table;
+
+ do {
+ *index = (u32)find_next_zero_bit(table, max_num, last_num);
+ if (*index < max_num)
+ last_num = cqm_bitmap_check_range(table, step, max_num,
+ *index, count);
+ else
+ break;
+ } while (last_num != *index);
+}
+
+static void cqm_bitmap_find_with_low2bit_align(struct tag_cqm_bitmap *bitmap, u32 *index,
+ u32 max_num, u32 last, u32 low2bit)
+{
+ ulong *table = bitmap->table;
+ u32 offset = last;
+
+ while (offset < max_num) {
+ *index = (u32)find_next_zero_bit(table, max_num, offset);
+ if (*index >= max_num)
+ break;
+
+ if ((*index & 0x3) == (low2bit & 0x3)) /* 0x3 used for low2bit align */
+ break;
+
+ offset = *index + 1;
+ if (offset == max_num)
+ *index = max_num;
+ }
+}
+
+/**
+ * Prototype : cqm_bitmap_alloc
+ * Description  : Apply for a bitmap index. Indexes 0 and 1 must be left blank.
+ *                The search continues from the position of the previous
+ *                allocation. A group of consecutive indexes must be allocated
+ *                together and cannot cross a step boundary.
+ * Input : struct tag_cqm_bitmap *bitmap,
+ * u32 step,
+ * u32 count
+ * Output : None
+ * Return Value : u32
+ * The obtained index is returned.
+ * If a failure occurs, the value of max is returned.
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+u32 cqm_bitmap_alloc(struct tag_cqm_bitmap *bitmap, u32 step, u32 count, bool update_last)
+{
+ u32 index = 0;
+ u32 max_num = bitmap->max_num - bitmap->reserved_back;
+ u32 last = bitmap->last;
+ ulong *table = bitmap->table;
+ u32 i;
+
+ spin_lock(&bitmap->lock);
+
+ /* Search for an idle bit from the last position. */
+ cqm_bitmap_find(bitmap, &index, last, step, count);
+
+ /* The preceding search fails. Search for an idle bit
+ * from the beginning.
+ */
+ if (index >= max_num) {
+ last = bitmap->reserved_top;
+ cqm_bitmap_find(bitmap, &index, last, step, count);
+ }
+
+ /* Set the found bit to 1 and reset last. */
+ if (index < max_num) {
+ for (i = index; i < (index + count); i++)
+ set_bit(i, table);
+
+ if (update_last) {
+ bitmap->last = (index + count);
+ if (bitmap->last >= max_num)
+ bitmap->last = bitmap->reserved_top;
+ }
+ }
+
+ spin_unlock(&bitmap->lock);
+ return index;
+}
+
+/**
+ * Prototype : cqm_bitmap_alloc_low2bit_align
+ * Description  : Apply for a bitmap index whose two low-order bits match the
+ *                requested low2bit value. Indexes 0 and 1 must be left blank.
+ *                The search continues from the position of the previous
+ *                allocation.
+ * Input : struct tag_cqm_bitmap *bitmap,
+ * u32 low2bit,
+ * bool update_last
+ * Output : None
+ * Return Value : u32
+ * The obtained index is returned.
+ * If a failure occurs, the value of max is returned.
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+u32 cqm_bitmap_alloc_low2bit_align(struct tag_cqm_bitmap *bitmap, u32 low2bit, bool update_last)
+{
+ u32 index = 0;
+ u32 max_num = bitmap->max_num - bitmap->reserved_back;
+ u32 last = bitmap->last;
+ ulong *table = bitmap->table;
+
+ spin_lock(&bitmap->lock);
+
+ /* Search for an idle bit from the last position. */
+ cqm_bitmap_find_with_low2bit_align(bitmap, &index, max_num, last, low2bit);
+
+ /* The preceding search fails. Search for an idle bit from the beginning. */
+ if (index >= max_num) {
+ last = bitmap->reserved_top;
+ cqm_bitmap_find_with_low2bit_align(bitmap, &index, max_num, last, low2bit);
+ }
+
+ /* Set the found bit to 1 and reset last. */
+ if (index < max_num) {
+ set_bit(index, table);
+
+ if (update_last) {
+ bitmap->last = index;
+ if (bitmap->last >= max_num)
+ bitmap->last = bitmap->reserved_top;
+ }
+ }
+
+ spin_unlock(&bitmap->lock);
+ return index;
+}
+
+/**
+ * Prototype : cqm_bitmap_alloc_reserved
+ * Description  : Allocate a reserved bit at the specified index.
+ * Input : struct tag_cqm_bitmap *bitmap,
+ * u32 count,
+ * u32 index
+ * Output : None
+ * Return Value : u32
+ * The obtained index is returned.
+ * If a failure occurs, the value of max is returned.
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+u32 cqm_bitmap_alloc_reserved(struct tag_cqm_bitmap *bitmap, u32 count, u32 index)
+{
+ ulong *table = bitmap->table;
+ u32 ret_index;
+
+ if (index >= bitmap->max_num || count != 1)
+ return CQM_INDEX_INVALID;
+
+ if (index >= bitmap->reserved_top && (index < bitmap->max_num - bitmap->reserved_back))
+ return CQM_INDEX_INVALID;
+
+ spin_lock(&bitmap->lock);
+
+ if (test_bit((int)index, table)) {
+ ret_index = CQM_INDEX_INVALID;
+ } else {
+ set_bit(index, table);
+ ret_index = index;
+ }
+
+ spin_unlock(&bitmap->lock);
+ return ret_index;
+}
+
+/**
+ * Prototype : cqm_bitmap_free
+ * Description : Releases a bitmap index.
+ * Input : struct tag_cqm_bitmap *bitmap,
+ * u32 index,
+ * u32 count
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_bitmap_free(struct tag_cqm_bitmap *bitmap, u32 index, u32 count)
+{
+ u32 i;
+
+ spin_lock(&bitmap->lock);
+
+ for (i = index; i < (index + count); i++)
+ clear_bit((s32)i, bitmap->table);
+
+ spin_unlock(&bitmap->lock);
+}
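+
+/* Illustrative usage sketch (the helper name and the step/count values of 1
+ * are assumptions for this example, not part of the driver's call flow):
+ * allocate a single index from a bitmap and release it again.
+ */
+static inline s32 cqm_bitmap_alloc_free_sketch(struct tag_cqm_bitmap *bitmap)
+{
+ /* step=1, count=1, update the 'last' hint on success */
+ u32 index = cqm_bitmap_alloc(bitmap, 1, 1, true);
+
+ /* on failure the upper bound of the usable range is returned */
+ if (index >= bitmap->max_num - bitmap->reserved_back)
+ return CQM_FAIL;
+
+ /* ... the index can now back a context entry ... */
+
+ cqm_bitmap_free(bitmap, index, 1);
+ return CQM_SUCCESS;
+}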
+
+#define obj_table_section
+
+/**
+ * Prototype : cqm_single_object_table_init
+ * Description  : Initialize an object table.
+ * Input : struct tag_cqm_object_table *obj_table
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/9/9
+ * Modification : Created function
+ */
+static s32 cqm_single_object_table_init(struct tag_cqm_object_table *obj_table)
+{
+ rwlock_init(&obj_table->lock);
+
+ obj_table->table = vmalloc(obj_table->max_num * sizeof(void *));
+ if (!obj_table->table)
+ return CQM_FAIL;
+ memset(obj_table->table, 0, obj_table->max_num * sizeof(void *));
+ return CQM_SUCCESS;
+}
+
+/**
+ * Prototype : cqm_object_table_init
+ * Description : Initialize the association table between objects and indexes.
+ * Input : struct tag_cqm_handle *cqm_handle
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+s32 cqm_object_table_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *obj_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ s32 ret = CQM_SUCCESS;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Obj table init: cla_table_type %u, obj_num=0, don't init obj table\n",
+ cla_table->type);
+ continue;
+ }
+
+ obj_table = &cla_table->obj_table;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_QPC:
+ obj_table->max_num = capability->qpc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ obj_table->max_num = capability->mpt_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ obj_table->max_num = capability->scqc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ obj_table->max_num = capability->srqc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ default:
+ break;
+ }
+
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Obj table init: failed to init cla_table_type=%u, obj_num=0x%x\n",
+ cla_table->type, cla_table->obj_num);
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_object_table_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * Prototype : cqm_object_table_uninit
+ * Description : Deinitialize the association table between objects and
+ * indexes.
+ * Input : struct tag_cqm_handle *cqm_handle
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_object_table_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_object_table *obj_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ obj_table = &cla_table->obj_table;
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ if (obj_table->table) {
+ rwlock_deinit(&obj_table->lock);
+ vfree(obj_table->table);
+ obj_table->table = NULL;
+ }
+ }
+ }
+}
+
+/**
+ * Prototype : cqm_object_table_insert
+ * Description : Insert an object
+ * Input : struct tag_cqm_handle *cqm_handle
+ * struct tag_cqm_object_table *object_table
+ * u32 index
+ * struct tag_cqm_object *obj
+ * bool bh
+ * Output : None
+ * Return Value : s32
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+s32 cqm_object_table_insert(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, struct tag_cqm_object *obj, bool bh)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table insert: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return CQM_FAIL;
+ }
+
+ cqm_write_lock(&object_table->lock, bh);
+
+ if (!object_table->table[index]) {
+ object_table->table[index] = obj;
+ cqm_write_unlock(&object_table->lock, bh);
+ return CQM_SUCCESS;
+ }
+
+ cqm_write_unlock(&object_table->lock, bh);
+ cqm_err(handle->dev_hdl,
+ "Obj table insert: object_table->table[0x%x] has been inserted\n",
+ index);
+
+ return CQM_FAIL;
+}
+
+/**
+ * Prototype : cqm_object_table_remove
+ * Description : Remove an object
+ * Input : struct tag_cqm_handle *cqm_handle
+ * struct tag_cqm_object_table *object_table
+ * u32 index
+ * const struct tag_cqm_object *obj
+ * bool bh
+ * Output : None
+ * Return Value : void
+ * 1.Date : 2015/4/15
+ * Modification : Created function
+ */
+void cqm_object_table_remove(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, const struct tag_cqm_object *obj, bool bh)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table remove: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return;
+ }
+
+ cqm_write_lock(&object_table->lock, bh);
+
+ if (object_table->table[index] && object_table->table[index] == obj)
+ object_table->table[index] = NULL;
+ else
+ cqm_err(handle->dev_hdl,
+ "Obj table remove: object_table->table[0x%x] has been removed\n",
+ index);
+
+ cqm_write_unlock(&object_table->lock, bh);
+}
+
+/**
+ * Prototype : cqm_object_table_get
+ * Description  : Get an object by index and take a reference on it
+ * Input : struct tag_cqm_handle *cqm_handle
+ * struct tag_cqm_object_table *object_table
+ * u32 index
+ * bool bh
+ * Output : None
+ * Return Value : struct tag_cqm_object *obj
+ * 1.Date : 2018/6/20
+ * Modification : Created function
+ */
+struct tag_cqm_object *cqm_object_table_get(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, bool bh)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object *obj = NULL;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table get: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return NULL;
+ }
+
+ cqm_read_lock(&object_table->lock, bh);
+
+ obj = object_table->table[index];
+ if (obj)
+ atomic_inc(&obj->refcount);
+
+ cqm_read_unlock(&object_table->lock, bh);
+
+ return obj;
+}
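+
+/* Illustrative lifecycle sketch: pair cqm_object_table_insert() with
+ * cqm_object_table_get()/cqm_object_table_remove(). The atomic_dec() used to
+ * drop the reference taken by the get path is an assumption for this example;
+ * real callers may release it through their own helper.
+ */
+static inline void cqm_object_table_usage_sketch(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, struct tag_cqm_object *obj)
+{
+ struct tag_cqm_object *found = NULL;
+
+ if (cqm_object_table_insert(cqm_handle, object_table, index, obj, true) != CQM_SUCCESS)
+ return;
+
+ /* the get path takes a reference on success */
+ found = cqm_object_table_get(cqm_handle, object_table, index, true);
+ if (found)
+ atomic_dec(&found->refcount);
+
+ cqm_object_table_remove(cqm_handle, object_table, index, obj, true);
+}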
+
+u32 cqm_bitmap_alloc_by_xid(struct tag_cqm_bitmap *bitmap, u32 count, u32 index)
+{
+ ulong *table = bitmap->table;
+ u32 ret_index;
+
+ if (index >= bitmap->max_num || count != 1)
+ return CQM_INDEX_INVALID;
+
+ spin_lock(&bitmap->lock);
+
+ if (test_bit((int)index, table)) {
+ ret_index = CQM_INDEX_INVALID;
+ } else {
+ set_bit(index, table);
+ ret_index = index;
+ }
+
+ spin_unlock(&bitmap->lock);
+ return ret_index;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.h
new file mode 100644
index 0000000..06b8661
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bitmap_table.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_BITMAP_TABLE_H
+#define CQM_BITMAP_TABLE_H
+
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+
+#include "cqm_object.h"
+#include "vram_common.h"
+
+struct tag_cqm_bitmap {
+ ulong *table;
+ u32 max_num;
+ u32 last;
+ u32 reserved_top; /* reserved index */
+ u32 reserved_back;
+ spinlock_t lock; /* lock for cqm */
+ struct vram_buf_info bitmap_info;
+};
+
+struct tag_cqm_object_table {
+ /* Now is big array. Later will be optimized as a red-black tree. */
+ struct tag_cqm_object **table;
+ u32 max_num;
+ rwlock_t lock;
+};
+
+struct tag_cqm_handle;
+
+s32 cqm_bitmap_init(struct tag_cqm_handle *cqm_handle);
+void cqm_bitmap_uninit(struct tag_cqm_handle *cqm_handle);
+u32 cqm_bitmap_alloc(struct tag_cqm_bitmap *bitmap, u32 step, u32 count, bool update_last);
+u32 cqm_bitmap_alloc_low2bit_align(struct tag_cqm_bitmap *bitmap, u32 low2bit, bool update_last);
+u32 cqm_bitmap_alloc_reserved(struct tag_cqm_bitmap *bitmap, u32 count, u32 index);
+void cqm_bitmap_free(struct tag_cqm_bitmap *bitmap, u32 index, u32 count);
+s32 cqm_object_table_init(struct tag_cqm_handle *cqm_handle);
+void cqm_object_table_uninit(struct tag_cqm_handle *cqm_handle);
+s32 cqm_object_table_insert(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, struct tag_cqm_object *obj, bool bh);
+void cqm_object_table_remove(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, const struct tag_cqm_object *obj, bool bh);
+struct tag_cqm_object *cqm_object_table_get(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_object_table *object_table,
+ u32 index, bool bh);
+u32 cqm_bitmap_alloc_by_xid(struct tag_cqm_bitmap *bitmap, u32 count, u32 index);
+
+void cqm_swab64(u8 *addr, u32 cnt);
+void cqm_swab32(u8 *addr, u32 cnt);
+bool cqm_check_align(u32 data);
+s32 cqm_shift(u32 data);
+s32 cqm_buf_alloc(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf, bool direct);
+s32 cqm_buf_alloc_direct(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf, bool direct);
+void cqm_buf_free(struct tag_cqm_buf *buf, struct tag_cqm_handle *cqm_handle);
+void cqm_buf_free_cache_inv(struct tag_cqm_handle *cqm_handle, struct tag_cqm_buf *buf,
+ s32 *inv_flag);
+s32 cqm_cla_cache_invalid(struct tag_cqm_handle *cqm_handle, dma_addr_t gpa,
+ u32 cache_size);
+void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order);
+void cqm_kfree_align(void *addr);
+void cqm_byte_print(u32 *ptr, u32 len);
+
+#endif /* CQM_BITMAP_TABLE_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.c
new file mode 100644
index 0000000..1d9198f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.c
@@ -0,0 +1,506 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_cmd.h"
+#include "cqm_main.h"
+#include "cqm_bloomfilter.h"
+
+#include "cqm_npu_cmd.h"
+#include "cqm_npu_cmd_defs.h"
+
+/**
+ * bloomfilter_init_cmd - host sends a cmd to the ucode to init the bloomfilter memory
+ * @cqm_handle: CQM handle
+ */
+static s32 bloomfilter_init_cmd(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct tag_cqm_bloomfilter_init_cmd *cmd = NULL;
+ struct tag_cqm_cmd_buf *buf_in = NULL;
+ s32 ret;
+
+ buf_in = cqm_cmd_alloc((void *)(cqm_handle->ex_handle));
+ if (!buf_in)
+ return CQM_FAIL;
+
+ /* Fill the command format and convert it to big-endian. */
+ buf_in->size = sizeof(struct tag_cqm_bloomfilter_init_cmd);
+ cmd = (struct tag_cqm_bloomfilter_init_cmd *)(buf_in->buf);
+ cmd->bloom_filter_addr = capability->bloomfilter_addr;
+ cmd->bloom_filter_len = capability->bloomfilter_length;
+
+ cqm_swab32((u8 *)cmd,
+ (sizeof(struct tag_cqm_bloomfilter_init_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm_send_cmd_box((void *)(cqm_handle->ex_handle),
+ CQM_MOD_CQM, CQM_CMD_T_BLOOMFILTER_INIT, buf_in,
+ NULL, NULL, CQM_CMD_TIMEOUT,
+ HINIC3_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(cqm_handle->ex_handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(cqm_handle->ex_handle->dev_hdl, "Bloomfilter: %s ret=%d\n", __func__,
+ ret);
+ cqm_err(cqm_handle->ex_handle->dev_hdl, "Bloomfilter: %s: 0x%x 0x%x\n",
+ __func__, cmd->bloom_filter_addr,
+ cmd->bloom_filter_len);
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_FAIL;
+ }
+ cqm_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_SUCCESS;
+}
+
+static void cqm_func_bloomfilter_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bloomfilter_table *bloomfilter_table = &cqm_handle->bloomfilter_table;
+
+ if (bloomfilter_table->table) {
+ mutex_deinit(&bloomfilter_table->lock);
+ vfree(bloomfilter_table->table);
+ bloomfilter_table->table = NULL;
+ }
+}
+
+static s32 cqm_func_bloomfilter_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_bloomfilter_table *bloomfilter_table = NULL;
+ struct tag_cqm_func_capability *capability = NULL;
+ u32 array_size;
+ s32 ret;
+
+ bloomfilter_table = &cqm_handle->bloomfilter_table;
+ capability = &cqm_handle->func_capability;
+
+ if (capability->bloomfilter_length == 0) {
+ cqm_info(cqm_handle->ex_handle->dev_hdl,
+ "Bloomfilter: bf_length=0, don't need to init bloomfilter\n");
+ return CQM_SUCCESS;
+ }
+
+ /* The unit of bloomfilter_length is 64B(512bits). Each bit is a table
+ * node. Therefore the value must be shifted 9 bits to the left.
+ */
+ bloomfilter_table->table_size = capability->bloomfilter_length <<
+ CQM_BF_LENGTH_UNIT;
+ /* The unit of bloomfilter_length is 64B. The unit of an array entry is 32B.
+ */
+ array_size = capability->bloomfilter_length << 1;
+ if (array_size == 0 || array_size > CQM_BF_BITARRAY_MAX) {
+ cqm_err(cqm_handle->ex_handle->dev_hdl, CQM_WRONG_VALUE(array_size));
+ return CQM_FAIL;
+ }
+
+ bloomfilter_table->array_mask = array_size - 1;
+ /* This table is not a bitmap, it is the counter of corresponding bit.
+ */
+ bloomfilter_table->table = vmalloc(bloomfilter_table->table_size *
+ (sizeof(u32)));
+ if (!bloomfilter_table->table)
+ return CQM_FAIL;
+
+ memset(bloomfilter_table->table, 0, (bloomfilter_table->table_size * sizeof(u32)));
+
+ /* The bloomfilter must be initialized to 0 by ucode,
+ * because the bloomfilter is mem mode
+ */
+ if (cqm_handle->func_capability.bloomfilter_enable) {
+ ret = bloomfilter_init_cmd(cqm_handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(cqm_handle->ex_handle->dev_hdl,
+ "Bloomfilter: bloomfilter_init_cmd ret=%d\n",
+ ret);
+ vfree(bloomfilter_table->table);
+ bloomfilter_table->table = NULL;
+ return CQM_FAIL;
+ }
+ }
+
+ mutex_init(&bloomfilter_table->lock);
+ cqm_dbg("Bloomfilter: table_size=0x%x, array_size=0x%x\n",
+ bloomfilter_table->table_size, array_size);
+ return CQM_SUCCESS;
+}
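+
+/* Sizing sketch (illustrative, helper name assumed): a bloomfilter_length of
+ * N 64B units gives N * 512 counter entries (one per bit) and N * 2 32B
+ * bit-array entries, which is what the shifts above compute.
+ */
+static inline void cqm_bloomfilter_size_sketch(u32 bloomfilter_length,
+ u32 *table_size, u32 *array_size)
+{
+ *table_size = bloomfilter_length << CQM_BF_LENGTH_UNIT; /* bits */
+ *array_size = bloomfilter_length << 1; /* 32B entries */
+}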
+
+static void cqm_fake_bloomfilter_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type != CQM_FAKE_FUNC_PARENT)
+ return;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+ cqm_func_bloomfilter_uninit(fake_cqm_handle);
+ }
+}
+
+static s32 cqm_fake_bloomfilter_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type != CQM_FAKE_FUNC_PARENT)
+ return CQM_SUCCESS;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+ if (cqm_func_bloomfilter_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_func_bloomfilter_init));
+ goto bloomfilter_init_err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+bloomfilter_init_err:
+ cqm_fake_bloomfilter_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_bloomfilter_init - initialize the bloomfilter of cqm
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_bloomfilter_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ if (cqm_fake_bloomfilter_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_fake_bloomfilter_init));
+ return CQM_FAIL;
+ }
+
+ if (cqm_func_bloomfilter_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_func_bloomfilter_init));
+ goto bloomfilter_init_err;
+ }
+
+ return CQM_SUCCESS;
+
+bloomfilter_init_err:
+ cqm_fake_bloomfilter_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_bloomfilter_uninit - uninitialize the bloomfilter of cqm
+ * @ex_handle: device pointer that represents the PF
+ */
+void cqm_bloomfilter_uninit(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ cqm_fake_bloomfilter_uninit(cqm_handle);
+ cqm_func_bloomfilter_uninit(cqm_handle);
+}
+
+/**
+ * cqm_bloomfilter_cmd - the host sends a bloomfilter api command to the ucode
+ * @ex_handle: device pointer that represents the PF
+ * @func_id: function id
+ * @op: operation code
+ * @k_flag: bloomfilter section enable bits (k_en)
+ * @id: the ID of the bloomfilter
+ */
+s32 cqm_bloomfilter_cmd(void *ex_handle, u16 func_id, u32 op, u32 k_flag, u64 id)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_cmd_buf *buf_in = NULL;
+ struct tag_cqm_bloomfilter_cmd *cmd = NULL;
+ s32 ret;
+
+ buf_in = cqm_cmd_alloc(ex_handle);
+ if (!buf_in)
+ return CQM_FAIL;
+
+ /* Fill the command format and convert it to big-endian. */
+ buf_in->size = sizeof(struct tag_cqm_bloomfilter_cmd);
+ cmd = (struct tag_cqm_bloomfilter_cmd *)(buf_in->buf);
+ memset((void *)cmd, 0, sizeof(struct tag_cqm_bloomfilter_cmd));
+ cmd->func_id = func_id;
+ cmd->k_en = k_flag;
+ cmd->index_h = (u32)(id >> CQM_DW_OFFSET);
+ cmd->index_l = (u32)(id & CQM_DW_MASK);
+
+ cqm_swab32((u8 *)cmd, (sizeof(struct tag_cqm_bloomfilter_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm_send_cmd_box(ex_handle, CQM_MOD_CQM, (u8)op, buf_in, NULL,
+ NULL, CQM_CMD_TIMEOUT, HINIC3_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Bloomfilter: bloomfilter_cmd ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl, "Bloomfilter: op=0x%x, cmd: 0x%x 0x%x 0x%x 0x%x\n",
+ op, *((u32 *)cmd), *(((u32 *)cmd) + CQM_DW_INDEX1),
+ *(((u32 *)cmd) + CQM_DW_INDEX2),
+ *(((u32 *)cmd) + CQM_DW_INDEX3));
+ cqm_cmd_free(ex_handle, buf_in);
+ return CQM_FAIL;
+ }
+
+ cqm_cmd_free(ex_handle, buf_in);
+
+ return CQM_SUCCESS;
+}
+
+static struct tag_cqm_handle *cqm_get_func_cqm_handle(struct hinic3_hwdev *ex_handle, u16 func_id)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_func_capability *func_cap = NULL;
+ s32 child_func_start, child_func_number;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(ex_handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ /* function id is PF/VF */
+ if (func_id == hinic3_global_func_id(ex_handle))
+ return cqm_handle;
+
+ func_cap = &cqm_handle->func_capability;
+ if (func_cap->fake_func_type != CQM_FAKE_FUNC_PARENT) {
+ cqm_err(ex_handle->dev_hdl, CQM_WRONG_VALUE(func_cap->fake_func_type));
+ return NULL;
+ }
+
+ child_func_start = cqm_get_child_func_start(cqm_handle);
+ if (child_func_start == CQM_FAIL) {
+ cqm_err(ex_handle->dev_hdl, CQM_WRONG_VALUE(child_func_start));
+ return NULL;
+ }
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(ex_handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return NULL;
+ }
+
+ /* function id is fake vf */
+ if (func_id >= child_func_start && (func_id < (child_func_start + child_func_number)))
+ return cqm_handle->fake_cqm_handle[func_id - (u16)child_func_start];
+
+ return NULL;
+}
+
+/**
+ * cqm_bloomfilter_inc - increase the reference count for the given bloomfilter ID
+ * @ex_handle: device pointer that represents the PF
+ * @func_id: function id
+ * @id: the ID of the bloomfilter
+ */
+s32 cqm_bloomfilter_inc(void *ex_handle, u16 func_id, u64 id)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_bloomfilter_table *bloomfilter_table = NULL;
+ u32 array_tmp[CQM_BF_SECTION_NUMBER] = {0};
+ struct tag_cqm_handle *cqm_handle = NULL;
+ u32 array_index, array_bit, i;
+ u32 k_flag = 0;
+
+ cqm_dbg("Bloomfilter: func_id: %d, inc id=0x%llx\n", func_id, id);
+
+ cqm_handle = cqm_get_func_cqm_handle(ex_handle, func_id);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle_bf_inc is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ if (cqm_handle->func_capability.bloomfilter_enable == 0) {
+ cqm_info(handle->dev_hdl, "Bloomfilter inc: bloomfilter is disabled\n");
+ return CQM_SUCCESS;
+ }
+
+ /* |(array_index=0)32B(array_bit:256bits)|(array_index=1)32B(256bits)|
+ * array_index = 0~bloomfilter_table->table_size/256bit
+ * array_bit = 0~255
+ */
+ cqm_dbg("Bloomfilter: inc id=0x%llx\n", id);
+ bloomfilter_table = &cqm_handle->bloomfilter_table;
+
+ /* The array index identifies a 32-byte entry. */
+ array_index = (u32)CQM_BF_BITARRAY_INDEX(id, bloomfilter_table->array_mask);
+ /* convert the unit of array_index to bit */
+ array_index = array_index << CQM_BF_ENTRY_SIZE_UNIT;
+ cqm_dbg("Bloomfilter: inc array_index=0x%x\n", array_index);
+
+ mutex_lock(&bloomfilter_table->lock);
+ for (i = 0; i < CQM_BF_SECTION_NUMBER; i++) {
+ /* the position of the bit in 64-bit section */
+ array_bit =
+ (id >> (CQM_BF_SECTION_BASE + i * CQM_BF_SECTION_SIZE)) &
+ CQM_BF_SECTION_MASK;
+ /* array_bit + number of 32-byte array entries + number of
+ * 64-bit sections before the section
+ */
+ array_bit = array_bit + array_index +
+ (i * CQM_BF_SECTION_BIT_NUMBER);
+
+ /* array_tmp[i] records the index of the bloomfilter.
+ * It is used to roll back the reference counting of the
+ * bitarray.
+ */
+ array_tmp[i] = array_bit;
+ cqm_dbg("Bloomfilter: inc array_bit=0x%x\n", array_bit);
+
+ /* Add one to the corresponding bit in bloomfilter table.
+ * If the value changes from 0 to 1, change the corresponding
+ * bit in k_flag.
+ */
+ (bloomfilter_table->table[array_bit])++;
+ cqm_dbg("Bloomfilter: inc bloomfilter_table->table[%d]=0x%x\n",
+ array_bit, bloomfilter_table->table[array_bit]);
+ if (bloomfilter_table->table[array_bit] == 1)
+ k_flag |= (1U << i);
+ }
+
+ if (k_flag != 0) {
+ /* send cmd to ucode and set corresponding bit. */
+ if (cqm_bloomfilter_cmd(ex_handle, func_id, CQM_CMD_T_BLOOMFILTER_SET,
+ k_flag, id) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bloomfilter_cmd_inc));
+ for (i = 0; i < CQM_BF_SECTION_NUMBER; i++) {
+ array_bit = array_tmp[i];
+ (bloomfilter_table->table[array_bit])--;
+ }
+ mutex_unlock(&bloomfilter_table->lock);
+ return CQM_FAIL;
+ }
+ }
+
+ mutex_unlock(&bloomfilter_table->lock);
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_bloomfilter_inc);
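+
+/* Helper sketch (name is illustrative only) showing how the counter index for
+ * section i of an id is derived, mirroring the computation used by
+ * cqm_bloomfilter_inc() above and cqm_bloomfilter_dec() below.
+ */
+static inline u32 cqm_bf_section_bit_sketch(const struct tag_cqm_bloomfilter_table *table,
+ u64 id, u32 i)
+{
+ /* 32B entry index, converted to a bit offset in the counter table */
+ u32 array_index = (u32)CQM_BF_BITARRAY_INDEX(id, table->array_mask) <<
+ CQM_BF_ENTRY_SIZE_UNIT;
+ /* 6-bit section value taken from id bits |31~26|25~20|19~14|13~8| */
+ u32 array_bit = (u32)((id >> (CQM_BF_SECTION_BASE + i * CQM_BF_SECTION_SIZE)) &
+ CQM_BF_SECTION_MASK);
+
+ return array_bit + array_index + i * CQM_BF_SECTION_BIT_NUMBER;
+}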
+
+/**
+ * cqm_bloomfilter_dec - decrease the reference count for the given bloomfilter ID
+ * @ex_handle: device pointer that represents the PF
+ * @func_id: function id
+ * @id: the ID of the bloomfilter
+ */
+s32 cqm_bloomfilter_dec(void *ex_handle, u16 func_id, u64 id)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_bloomfilter_table *bloomfilter_table = NULL;
+ u32 array_tmp[CQM_BF_SECTION_NUMBER] = {0};
+ struct tag_cqm_handle *cqm_handle = NULL;
+ u32 array_index, array_bit, i;
+ u32 k_flag = 0;
+
+ cqm_handle = cqm_get_func_cqm_handle(ex_handle, func_id);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle_bf_dec is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ if (cqm_handle->func_capability.bloomfilter_enable == 0) {
+ cqm_info(handle->dev_hdl, "Bloomfilter dec: bloomfilter is disabled\n");
+ return CQM_SUCCESS;
+ }
+
+ cqm_dbg("Bloomfilter: dec id=0x%llx\n", id);
+ bloomfilter_table = &cqm_handle->bloomfilter_table;
+
+ /* The array index identifies a 32-byte entry. */
+ array_index = (u32)CQM_BF_BITARRAY_INDEX(id, bloomfilter_table->array_mask);
+ cqm_dbg("Bloomfilter: dec array_index=0x%x\n", array_index);
+ mutex_lock(&bloomfilter_table->lock);
+ for (i = 0; i < CQM_BF_SECTION_NUMBER; i++) {
+ /* the position of the bit in 64-bit section */
+ array_bit =
+ (id >> (CQM_BF_SECTION_BASE + i * CQM_BF_SECTION_SIZE)) &
+ CQM_BF_SECTION_MASK;
+ /* array_bit + number of 32-byte array entries + number of
+ * 64-bit sections before the section
+ */
+ array_bit = array_bit + (array_index << 0x8) + (i * 0x40);
+
+ /* array_tmp[i] records the index of the bloomfilter.
+ * It is used to roll back the reference counting of the
+ * bitarray.
+ */
+ array_tmp[i] = array_bit;
+
+ /* Subtract one from the corresponding bit in the bloomfilter table.
+ * If the value changes from 1 to 0, change the corresponding
+ * bit in k_flag. Do not decrement further when the reference
+ * count of the bit is already 0.
+ */
+ if (bloomfilter_table->table[array_bit] != 0) {
+ (bloomfilter_table->table[array_bit])--;
+ cqm_dbg("Bloomfilter: dec bloomfilter_table->table[%d]=0x%x\n",
+ array_bit, (bloomfilter_table->table[array_bit]));
+ if (bloomfilter_table->table[array_bit] == 0)
+ k_flag |= (1U << i);
+ }
+ }
+
+ if (k_flag != 0) {
+ /* send cmd to ucode and clear corresponding bit. */
+ if (cqm_bloomfilter_cmd(ex_handle, func_id, CQM_CMD_T_BLOOMFILTER_CLEAR,
+ k_flag, id) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bloomfilter_cmd_dec));
+ for (i = 0; i < CQM_BF_SECTION_NUMBER; i++) {
+ array_bit = array_tmp[i];
+ (bloomfilter_table->table[array_bit])++;
+ }
+ mutex_unlock(&bloomfilter_table->lock);
+ return CQM_FAIL;
+ }
+ }
+
+ mutex_unlock(&bloomfilter_table->lock);
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_bloomfilter_dec);
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.h
new file mode 100644
index 0000000..8fd446c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_bloomfilter.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_BLOOMFILTER_H
+#define CQM_BLOOMFILTER_H
+
+#include <linux/types.h>
+#include <linux/mutex.h>
+
+/* The bloomfilter entry size is 32B (256 bits); the entry index is taken from
+ * bits [48~32] of the hash. Bits |31~26|25~20|19~14|13~8| are used to locate
+ * the 4 bloomfilter sections in one entry. k_en[3:0] specifies the sections.
+ */
+#define CQM_BF_ENTRY_SIZE 32
+#define CQM_BF_ENTRY_SIZE_UNIT 8
+#define CQM_BF_BITARRAY_MAX BIT(17)
+
+#define CQM_BF_SECTION_NUMBER 4
+#define CQM_BF_SECTION_BASE 8
+#define CQM_BF_SECTION_SIZE 6
+#define CQM_BF_SECTION_MASK 0x3f
+#define CQM_BF_SECTION_BIT_NUMBER 64
+
+#define CQM_BF_ARRAY_INDEX_OFFSET 32
+#define CQM_BF_BITARRAY_INDEX(id, mask) \
+ (((id) >> CQM_BF_ARRAY_INDEX_OFFSET) & (mask))
+
+/* The unit of bloomfilter_length is 64B(512bits). */
+#define CQM_BF_LENGTH_UNIT 9
+
+#define CQM_DW_MASK 0xffffffff
+#define CQM_DW_OFFSET 32
+#define CQM_DW_INDEX0 0
+#define CQM_DW_INDEX1 1
+#define CQM_DW_INDEX2 2
+#define CQM_DW_INDEX3 3
+
+struct tag_cqm_bloomfilter_table {
+ u32 *table;
+ u32 table_size; /* The unit is bit */
+ u32 array_mask; /* The unit of an array entry is 32B, used to address entries
+ */
+ struct mutex lock;
+};
+
+/* only for test */
+s32 cqm_bloomfilter_cmd(void *ex_handle, u16 func_id, u32 op, u32 k_flag, u64 id);
+s32 cqm_bloomfilter_init(void *ex_handle);
+void cqm_bloomfilter_uninit(void *ex_handle);
+s32 cqm_bloomfilter_inc(void *ex_handle, u16 func_id, u64 id);
+s32 cqm_bloomfilter_dec(void *ex_handle, u16 func_id, u64 id);
+
+#endif /* CQM_BLOOMFILTER_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.c
new file mode 100644
index 0000000..3d38edc
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.c
@@ -0,0 +1,182 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_hwdev.h"
+
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_main.h"
+
+/**
+ * cqm_cmd_alloc - Apply for a cmd buffer. The buffer size is fixed at 2 KB.
+ * The buffer content is not cleared and must be cleared by the service.
+ * @ex_handle: device pointer that represents the PF
+ */
+struct tag_cqm_cmd_buf *cqm_cmd_alloc(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_alloc_cnt);
+
+ return (struct tag_cqm_cmd_buf *)hinic3_alloc_cmd_buf(ex_handle);
+}
+EXPORT_SYMBOL(cqm_cmd_alloc);
+
+/**
+ * cqm_cmd_free - Release a cmd buffer
+ * @ex_handle: device pointer that represents the PF
+ * @cmd_buf: command buffer
+ */
+void cqm_cmd_free(void *ex_handle, struct tag_cqm_cmd_buf *cmd_buf)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+ if (unlikely(!cmd_buf)) {
+ pr_err("[CQM]%s: cmd_buf is null\n", __func__);
+ return;
+ }
+ if (unlikely(!cmd_buf->buf)) {
+ pr_err("[CQM]%s: buf is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_free_cnt);
+
+ hinic3_free_cmd_buf(ex_handle, (struct hinic3_cmd_buf *)cmd_buf);
+}
+EXPORT_SYMBOL(cqm_cmd_free);
+
+/**
+ * cqm_send_cmd_box - Send a cmd message in box mode.
+ * This interface waits on a completion and may cause the caller to sleep.
+ * @ex_handle: device pointer that represents the PF
+ * @mod: command module
+ * @cmd: command type
+ * @buf_in: input data buffer address
+ * @buf_out: output data buffer address
+ * @out_param: output parameter
+ * @timeout: command timeout
+ * @channel: mailbox channel
+ */
+s32 cqm_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, struct tag_cqm_cmd_buf *buf_in,
+ struct tag_cqm_cmd_buf *buf_out, u64 *out_param, u32 timeout,
+ u16 channel)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in)) {
+ pr_err("[CQM]%s: buf_in is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in->buf)) {
+ pr_err("[CQM]%s: buf is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_send_cmd_box_cnt);
+
+ return hinic3_cmdq_detail_resp(ex_handle, mod, cmd,
+ (struct hinic3_cmd_buf *)buf_in,
+ (struct hinic3_cmd_buf *)buf_out,
+ out_param, timeout, channel);
+}
+EXPORT_SYMBOL(cqm_send_cmd_box);
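+
+/* Minimal send sketch (assumed calling pattern, mirroring the existing users
+ * of this interface in the CQM): allocate a buffer, fill it, convert it to
+ * big-endian and send it in box mode. The u32 payload layout, the helper name
+ * and the pass-through timeout/channel parameters are placeholders.
+ */
+static inline s32 cqm_send_cmd_box_sketch(void *ex_handle, u8 mod, u8 cmd,
+ u32 payload, u32 timeout, u16 channel)
+{
+ struct tag_cqm_cmd_buf *buf_in = cqm_cmd_alloc(ex_handle);
+ s32 ret;
+
+ if (!buf_in)
+ return CQM_FAIL;
+
+ /* the allocator does not clear the buffer content */
+ buf_in->size = sizeof(u32);
+ *(u32 *)buf_in->buf = payload;
+ /* convert the command to big-endian, one dword at a time */
+ cqm_swab32((u8 *)buf_in->buf, buf_in->size >> 2);
+
+ ret = cqm_send_cmd_box(ex_handle, mod, cmd, buf_in, NULL, NULL,
+ timeout, channel);
+
+ cqm_cmd_free(ex_handle, buf_in);
+ return ret;
+}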
+
+/**
+ * cqm_lb_send_cmd_box - Send a cmd message in box mode with a specified cos_id.
+ * This interface waits on a completion and may cause the caller to sleep.
+ * @ex_handle: device pointer that represents the PF
+ * @mod: command module
+ * @cmd: command type
+ * @cos_id: cos id
+ * @buf_in: input data buffer address
+ * @buf_out: output data buffer address
+ * @out_param: output parameter
+ * @timeout: command timeout
+ * @channel: mailbox channel
+ */
+s32 cqm_lb_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, u8 cos_id,
+ struct tag_cqm_cmd_buf *buf_in, struct tag_cqm_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in)) {
+ pr_err("[CQM]%s: buf_in is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in->buf)) {
+ pr_err("[CQM]%s: buf is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_send_cmd_box_cnt);
+
+ return hinic3_cos_id_detail_resp(ex_handle, mod, cmd, cos_id,
+ (struct hinic3_cmd_buf *)buf_in,
+ (struct hinic3_cmd_buf *)buf_out,
+ out_param, timeout, channel);
+}
+EXPORT_SYMBOL(cqm_lb_send_cmd_box);
+
+/**
+ * cqm_lb_send_cmd_box_async - Send a cmd message in box mode with a specified
+ * cos_id. This interface does not wait for completion.
+ * @ex_handle: device pointer that represents the PF
+ * @mod: command module
+ * @cmd: command type
+ * @cos_id: cos id
+ * @buf_in: input data buffer address
+ * @channel: mailbox channel
+ */
+s32 cqm_lb_send_cmd_box_async(void *ex_handle, u8 mod, u8 cmd,
+ u8 cos_id, struct tag_cqm_cmd_buf *buf_in,
+ u16 channel)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in)) {
+ pr_err("[CQM]%s: buf_in is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!buf_in->buf)) {
+ pr_err("[CQM]%s: buf is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_send_cmd_box_cnt);
+
+ return hinic3_cmdq_async_cos(ex_handle, mod, cmd, cos_id,
+ (struct hinic3_cmd_buf *)buf_in, channel);
+}
+EXPORT_SYMBOL(cqm_lb_send_cmd_box_async);
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.h
new file mode 100644
index 0000000..46eb8ec
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_cmd.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_CMD_H
+#define CQM_CMD_H
+
+#include <linux/types.h>
+
+#include "cqm_object.h"
+
+#ifdef __cplusplus
+#if __cplusplus
+extern "C" {
+#endif
+#endif /* __cplusplus */
+
+#define CQM_CMD_TIMEOUT 10000 /* ms */
+
+struct tag_cqm_cmd_buf *cqm_cmd_alloc(void *ex_handle);
+void cqm_cmd_free(void *ex_handle, struct tag_cqm_cmd_buf *cmd_buf);
+s32 cqm_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, struct tag_cqm_cmd_buf *buf_in,
+ struct tag_cqm_cmd_buf *buf_out, u64 *out_param, u32 timeout,
+ u16 channel);
+s32 cqm_lb_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, u8 cos_id,
+ struct tag_cqm_cmd_buf *buf_in, struct tag_cqm_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+s32 cqm_lb_send_cmd_box_async(void *ex_handle, u8 mod, u8 cmd,
+ u8 cos_id, struct tag_cqm_cmd_buf *buf_in,
+ u16 channel);
+
+#ifdef __cplusplus
+#if __cplusplus
+}
+#endif
+#endif /* __cplusplus */
+
+#endif /* CQM_CMD_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.c
new file mode 100644
index 0000000..db65c8b
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.c
@@ -0,0 +1,444 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_hwdev.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_object_intern.h"
+#include "cqm_main.h"
+#include "cqm_db.h"
+
+/**
+ * cqm_db_addr_alloc - Apply for a page of hardware doorbell and dwqe.
+ * The doorbell and dwqe share the same index. The obtained addresses are
+ * physical addresses. Each function has a maximum of 1K doorbell addresses (DB).
+ * @ex_handle: device pointer that represents the PF
+ * @db_addr: doorbell address
+ * @dwqe_addr: DWQE address
+ */
+s32 cqm_db_addr_alloc(void *ex_handle, void __iomem **db_addr,
+ void __iomem **dwqe_addr)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!db_addr)) {
+ pr_err("[CQM]%s: db_addr is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!dwqe_addr)) {
+ pr_err("[CQM]%s: dwqe_addr is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_alloc_cnt);
+
+ return hinic3_alloc_db_addr(ex_handle, db_addr, dwqe_addr);
+}
+
+s32 cqm_db_phy_addr_alloc(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr)
+{
+ return hinic3_alloc_db_phy_addr(ex_handle, db_paddr, dwqe_addr);
+}
+
+/**
+ * cqm_db_addr_free - Release a page of hardware doorbell and dwqe
+ * @ex_handle: device pointer that represents the PF
+ * @db_addr: doorbell address
+ * @dwqe_addr: DWQE address
+ */
+static void cqm_db_addr_free(void *ex_handle, const void __iomem *db_addr, void __iomem *dwqe_addr)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_free_cnt);
+
+ hinic3_free_db_addr(ex_handle, db_addr, dwqe_addr);
+}
+
+static void cqm_db_phy_addr_free(void *ex_handle, u64 *db_paddr, const u64 *dwqe_addr)
+{
+ hinic3_free_db_phy_addr(ex_handle, *db_paddr, *dwqe_addr);
+}
+
+static bool cqm_need_db_init(s32 service)
+{
+ bool need_db_init = false;
+
+ switch (service) {
+ case CQM_SERVICE_T_NIC:
+ case CQM_SERVICE_T_OVS:
+ case CQM_SERVICE_T_IPSEC:
+ case CQM_SERVICE_T_VIRTIO:
+ case CQM_SERVICE_T_PPA:
+ need_db_init = false;
+ break;
+ default:
+ need_db_init = true;
+ }
+
+ return need_db_init;
+}
+
+/**
+ * cqm_db_init - Initialize the doorbell of the CQM
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_db_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ s32 i;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ /* Allocate hardware doorbells to services. */
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ service = &cqm_handle->service[i];
+ if (!cqm_need_db_init(i) || !service->valid)
+ continue;
+
+ if (cqm_db_addr_alloc(ex_handle, &service->hardware_db_vaddr,
+ &service->dwqe_vaddr) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_db_addr_alloc));
+ break;
+ }
+
+ if (cqm_db_phy_addr_alloc(handle, &service->hardware_db_paddr,
+ &service->dwqe_paddr) !=
+ CQM_SUCCESS) {
+ cqm_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_db_phy_addr_alloc));
+ break;
+ }
+ }
+
+ if (i != CQM_SERVICE_T_MAX) {
+ i--;
+ for (; i >= 0; i--) {
+ service = &cqm_handle->service[i];
+ if (!cqm_need_db_init(i) || !service->valid)
+ continue;
+
+ cqm_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ cqm_db_phy_addr_free(ex_handle,
+ &service->hardware_db_paddr,
+ &service->dwqe_paddr);
+ }
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_db_uninit - Deinitialize the doorbell of the CQM
+ * @ex_handle: device pointer that represents the PF
+ */
+void cqm_db_uninit(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ s32 i;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ /* Release hardware doorbell. */
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ service = &cqm_handle->service[i];
+ if (service->valid && cqm_need_db_init(i)) {
+ cqm_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ cqm_db_phy_addr_free(ex_handle, &service->hardware_db_paddr,
+ &service->dwqe_paddr);
+ }
+ }
+}
+
+/**
+ * cqm_get_db_addr - Return hardware DB vaddr
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ */
+void *cqm_get_db_addr(void *ex_handle, u32 service_type)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ pr_err("%s: service_type = %u is invalid\n", __func__,
+ service_type);
+ return NULL;
+ }
+ handle = (struct hinic3_hwdev *)ex_handle;
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ return (void *)service->hardware_db_vaddr;
+}
+EXPORT_SYMBOL(cqm_get_db_addr);
+
+/**
+ * cqm_ring_hardware_db - Ring hardware DB to chip
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: Each kernel-mode service is allocated a hardware db page
+ * @db_count: bits [7:0] of the PI, which can't be stored in the 64-bit db
+ * @db: the content of the db, which is organized by the
+ *      service, including big-endian conversion.
+ */
+s32 cqm_ring_hardware_db(void *ex_handle, u32 service_type, u8 db_count, u64 db)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ pr_err("%s: service_type = %u is invalid\n", __func__,
+ service_type);
+ return CQM_FAIL;
+ }
+
+ handle = (struct hinic3_hwdev *)ex_handle;
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
+ *((u64 *)service->hardware_db_vaddr + db_count) = db;
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_ring_hardware_db);
+
+/**
+ * cqm_ring_hardware_db_fc - Ring fake vf hardware DB to chip
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: Each kernel-mode service is allocated a hardware db page
+ * @db_count: bits [7:0] of the PI, which can't be stored in the 64-bit db
+ * @pagenum: Indicates the doorbell address offset of the fake VFID
+ * @db: the content of the db, which is organized by the
+ *      service, including big-endian conversion.
+ */
+s32 cqm_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count,
+ u8 pagenum, u64 db)
+{
+#define HIFC_DB_FAKE_VF_OFFSET 32
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+ void *dbaddr = NULL;
+
+ handle = (struct hinic3_hwdev *)ex_handle;
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
+ dbaddr = (u8 *)service->hardware_db_vaddr +
+ ((pagenum + HIFC_DB_FAKE_VF_OFFSET) * HINIC3_DB_PAGE_SIZE);
+ *((u64 *)dbaddr + db_count) = db;
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_ring_direct_wqe_db - Ring direct wqe hardware DB to chip
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: Each kernel-mode service is allocated a hardware db page
+ * @db_count: The bit[7:0] of PI can't be store in 64-bit db
+ * @direct_wqe: The content of direct_wqe
+ */
+s32 cqm_ring_direct_wqe_db(void *ex_handle, u32 service_type, u8 db_count,
+ void *direct_wqe)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+ u64 *tmp = (u64 *)direct_wqe;
+ int i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ pr_err("%s: service_type = %u is invalid\n", __func__,
+ service_type);
+ return CQM_FAIL;
+ }
+
+ handle = (struct hinic3_hwdev *)ex_handle;
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
+ for (i = 0; i < 0x80 / 0x8; i++)
+ *((u64 *)service->dwqe_vaddr + 0x40 + i) = *tmp++;
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_ring_direct_wqe_db);
+
+s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type,
+ void *direct_wqe)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+ u64 *tmp = (u64 *)direct_wqe;
+ int i;
+
+ handle = (struct hinic3_hwdev *)ex_handle;
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
+ *((u64 *)service->dwqe_vaddr + 0x0) = tmp[0x2];
+ *((u64 *)service->dwqe_vaddr + 0x1) = tmp[0x3];
+ *((u64 *)service->dwqe_vaddr + 0x2) = tmp[0x0];
+ *((u64 *)service->dwqe_vaddr + 0x3) = tmp[0x1];
+ tmp += 0x4;
+
+ /* The FC service uses a 256B WQE. The direct wqe is written at
+ * block 0 and the length is 256B.
+ */
+ for (i = 0x4; i < 0x20; i++)
+ *((u64 *)service->dwqe_vaddr + i) = *tmp++;
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_ring_hardware_db_update_pri - Provides the doorbell interface for the CQM to convert the PRI
+ *                                   to the CoS. The doorbell passed in by the service must be in
+ *                                   host byte order; this interface converts it to network byte
+ *                                   order.
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: Each kernel-mode service is allocated a hardware db page
+ * @db_count: bits [7:0] of the PI, which can't be stored in the 64-bit db
+ * @db: the content of the db, which is organized by the
+ *      service; the byte-order conversion is done inside this interface.
+ */
+s32 cqm_ring_hardware_db_update_pri(void *ex_handle, u32 service_type,
+ u8 db_count, u64 db)
+{
+ struct tag_cqm_db_common *db_common = (struct tag_cqm_db_common *)(&db);
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ handle = (struct hinic3_hwdev *)ex_handle;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ /* the CQM converts the PRI to the CoS */
+ db_common->cos = 0x7 - db_common->cos;
+
+ cqm_swab32((u8 *)db_common, sizeof(u64) >> CQM_DW_SHIFT);
+
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
+ *((u64 *)service->hardware_db_vaddr + db_count) = db;
+
+ return CQM_SUCCESS;
+}
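+
+/* Field-filling sketch (illustrative, helper name assumed): the common
+ * doorbell header is what the update_pri path above rewrites, treating the
+ * incoming 'cos' field as a priority and replacing it with (7 - pri) before
+ * the big-endian conversion.
+ */
+static inline void cqm_db_common_fill_sketch(struct tag_cqm_db_common *db_common,
+ u32 service_type, u32 pri)
+{
+ db_common->service_type = service_type & 0x1f;
+ db_common->cos = pri & 0x7; /* rewritten as (7 - pri) by the update_pri path */
+ db_common->c = 0;
+ db_common->rsvd1 = 0;
+ db_common->rsvd2 = 0;
+}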
+
+/**
+ * cqm_ring_software_db - Ring software db
+ * @object: CQM object
+ * @db_record: It contains the content of db, which is organized by service,
+ * including big-endian conversion. For RQ/SQ: This field is filled
+ * with the doorbell_record area of queue_header. For CQ: This field
+ * is filled with the value of ci_record in queue_header.
+ */
+s32 cqm_ring_software_db(struct tag_cqm_object *object, u64 db_record)
+{
+ struct tag_cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
+ struct tag_cqm_rdma_qinfo *rdma_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ handle = cqm_handle->ex_handle;
+
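+ /* A software doorbell only updates the queue header in host memory:
+ * RQ/SQ/SRQ update doorbell_record, CQ/SCQ update ci_record.
+ */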
+ if (object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_RQ ||
+ object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_SQ ||
+ object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+ nonrdma_qinfo = (struct tag_cqm_nonrdma_qinfo *)(void *)object;
+ nonrdma_qinfo->common.q_header_vaddr->doorbell_record =
+ db_record;
+ } else if ((object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_CQ) ||
+ (object->object_type == CQM_OBJECT_NONRDMA_SCQ)) {
+ nonrdma_qinfo = (struct tag_cqm_nonrdma_qinfo *)(void *)object;
+ nonrdma_qinfo->common.q_header_vaddr->ci_record = db_record;
+ } else if ((object->object_type == CQM_OBJECT_RDMA_QP) ||
+ (object->object_type == CQM_OBJECT_RDMA_SRQ)) {
+ rdma_qinfo = (struct tag_cqm_rdma_qinfo *)(void *)object;
+ rdma_qinfo->common.q_header_vaddr->doorbell_record = db_record;
+ } else if (object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ rdma_qinfo = (struct tag_cqm_rdma_qinfo *)(void *)object;
+ rdma_qinfo->common.q_header_vaddr->ci_record = db_record;
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ }
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_ring_software_db);
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.h
new file mode 100644
index 0000000..954f62b
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_db.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_DB_H
+#define CQM_DB_H
+
+#include <linux/types.h>
+
+struct tag_cqm_db_common {
+#if (BYTE_ORDER == LITTLE_ENDIAN)
+ u32 rsvd1 : 23;
+ u32 c : 1;
+ u32 cos : 3;
+ u32 service_type : 5;
+#else
+ u32 service_type : 5;
+ u32 cos : 3;
+ u32 c : 1;
+ u32 rsvd1 : 23;
+#endif
+
+ u32 rsvd2;
+};
+
+/* Only for test */
+s32 cqm_db_addr_alloc(void *ex_handle, void __iomem **db_addr,
+ void __iomem **dwqe_addr);
+s32 cqm_db_phy_addr_alloc(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr);
+
+s32 cqm_db_init(void *ex_handle);
+void cqm_db_uninit(void *ex_handle);
+
+s32 cqm_ring_hardware_db(void *ex_handle, u32 service_type, u8 db_count,
+ u64 db);
+
+#endif /* CQM_DB_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_define.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_define.h
new file mode 100644
index 0000000..2c227ae
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_define.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_DEFINE_H
+#define CQM_DEFINE_H
+#ifndef HIUDK_SDK
+#define cqm_init cqm3_init
+#define cqm_uninit cqm3_uninit
+#define cqm_service_register cqm3_service_register
+#define cqm_service_unregister cqm3_service_unregister
+#define cqm_bloomfilter_dec cqm3_bloomfilter_dec
+#define cqm_bloomfilter_inc cqm3_bloomfilter_inc
+#define cqm_cmd_alloc cqm3_cmd_alloc
+#define cqm_cmd_free cqm3_cmd_free
+#define cqm_send_cmd_box cqm3_send_cmd_box
+#define cqm_lb_send_cmd_box cqm3_lb_send_cmd_box
+#define cqm_lb_send_cmd_box_async cqm3_lb_send_cmd_box_async
+#define cqm_db_addr_alloc cqm3_db_addr_alloc
+#define cqm_db_addr_free cqm3_db_addr_free
+#define cqm_ring_hardware_db cqm3_ring_hardware_db
+#define cqm_ring_software_db cqm3_ring_software_db
+#define cqm_object_fc_srq_create cqm3_object_fc_srq_create
+#define cqm_object_share_recv_queue_create cqm3_object_share_recv_queue_create
+#define cqm_object_share_recv_queue_add_container \
+ cqm3_object_share_recv_queue_add_container
+#define cqm_object_srq_add_container_free cqm3_object_srq_add_container_free
+#define cqm_object_recv_queue_create cqm3_object_recv_queue_create
+#define cqm_object_qpc_mpt_create cqm3_object_qpc_mpt_create
+#define cqm_object_nonrdma_queue_create cqm3_object_nonrdma_queue_create
+#define cqm_object_rdma_queue_create cqm3_object_rdma_queue_create
+#define cqm_object_rdma_table_get cqm3_object_rdma_table_get
+#define cqm_object_delete cqm3_object_delete
+#define cqm_object_offset_addr cqm3_object_offset_addr
+#define cqm_object_get cqm3_object_get
+#define cqm_object_put cqm3_object_put
+#define cqm_object_funcid cqm3_object_funcid
+#define cqm_object_resize_alloc_new cqm3_object_resize_alloc_new
+#define cqm_object_resize_free_new cqm3_object_resize_free_new
+#define cqm_object_resize_free_old cqm3_object_resize_free_old
+#define cqm_function_timer_clear cqm3_function_timer_clear
+#define cqm_function_hash_buf_clear cqm3_function_hash_buf_clear
+#define cqm_srq_used_rq_container_delete cqm3_srq_used_rq_container_delete
+#define cqm_timer_base cqm3_timer_base
+#define cqm_get_db_addr cqm3_get_db_addr
+#define cqm_ring_direct_wqe_db cqm3_ring_direct_wqe_db
+#define cqm_fake_vf_num_set cqm3_fake_vf_num_set
+#define cqm_need_secure_mem cqm3_need_secure_mem
+
+#endif
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.c
new file mode 100644
index 0000000..0e8a579
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.c
@@ -0,0 +1,1674 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_hw_cfg.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_bloomfilter.h"
+#include "cqm_db.h"
+#include "cqm_memsec.h"
+#include "cqm_main.h"
+
+#include "vram_common.h"
+
+static unsigned char roce_qpc_rsv_mode = CQM_QPC_ROCE_NORMAL;
+module_param(roce_qpc_rsv_mode, byte, 0644);
+MODULE_PARM_DESC(roce_qpc_rsv_mode,
+ "for roce reserve 4k qpc(qpn) (default=0, 0-rsv:2, 1-rsv:4k, 2-rsv:200k+2)");
+
+static s32 cqm_set_fake_vf_child_timer(struct tag_cqm_handle *cqm_handle,
+ struct tag_cqm_handle *fake_cqm_handle, bool en)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)cqm_handle->ex_handle;
+ u16 func_global_idx;
+ s32 ret;
+
+ if (fake_cqm_handle->func_capability.timer_enable == 0)
+ return CQM_SUCCESS;
+
+ func_global_idx = fake_cqm_handle->func_attribute.func_global_idx;
+ ret = hinic3_func_tmr_bitmap_set(cqm_handle->ex_handle, func_global_idx, en);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "func_id %u Timer %s timer bitmap failed\n",
+ func_global_idx, en ? "enable" : "disable");
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_unset_fake_vf_timer(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)cqm_handle->ex_handle;
+ s32 child_func_number;
+ u32 i;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++)
+ (void)cqm_set_fake_vf_child_timer(cqm_handle,
+ cqm_handle->fake_cqm_handle[i], false);
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_set_fake_vf_timer(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)cqm_handle->ex_handle;
+ s32 child_func_number;
+ u32 i;
+ s32 ret;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ ret = cqm_set_fake_vf_child_timer(cqm_handle,
+ cqm_handle->fake_cqm_handle[i], true);
+ if (ret != CQM_SUCCESS)
+ goto err;
+ }
+
+ return CQM_SUCCESS;
+err:
+ (void)cqm_unset_fake_vf_timer(cqm_handle);
+ return CQM_FAIL;
+}
+
+static s32 cqm_set_timer_enable(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ int is_in_kexec;
+
+ if (!ex_handle)
+ return CQM_FAIL;
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ cqm_info(handle->dev_hdl, "Skip starting cqm timer during kexec\n");
+ return CQM_SUCCESS;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_PARENT &&
+ cqm_set_fake_vf_timer(cqm_handle) != CQM_SUCCESS)
+ return CQM_FAIL;
+
+ /* The timer bitmap is set directly at the beginning of the CQM.
+ * The ifconfig up/down command is not used to set or clear the bitmap.
+ */
+ if (hinic3_func_tmr_bitmap_set(ex_handle, hinic3_global_func_id(ex_handle),
+ true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "func_id %u Timer start: enable timer bitmap failed\n",
+ hinic3_global_func_id(ex_handle));
+ goto err;
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_unset_fake_vf_timer(cqm_handle);
+ return CQM_FAIL;
+}
+
+static s32 cqm_set_timer_disable(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ if (!ex_handle)
+ return CQM_FAIL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ if (cqm_handle->func_capability.fake_func_type != CQM_FAKE_FUNC_CHILD_CONFLICT &&
+ hinic3_func_tmr_bitmap_set(ex_handle, hinic3_global_func_id(ex_handle),
+ false) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, "func_id %u Timer stop: disable timer bitmap failed\n",
+ hinic3_global_func_id(ex_handle));
+
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_PARENT &&
+ cqm_unset_fake_vf_timer(cqm_handle) != CQM_SUCCESS)
+ return CQM_FAIL;
+
+ return CQM_SUCCESS;
+}
+
+static s32 cqm_init_all(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ /* Initialize secure memory. */
+ if (cqm_secure_mem_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_mem_init));
+ return CQM_FAIL;
+ }
+
+ /* Initialize memory entries such as BAT, CLA, and bitmap. */
+ if (cqm_mem_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_mem_init));
+ goto err1;
+ }
+
+ /* Event callback initialization */
+ if (cqm_event_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_event_init));
+ goto err2;
+ }
+
+ /* Doorbell initialization */
+ if (cqm_db_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_db_init));
+ goto err3;
+ }
+
+ /* Initialize the bloom filter. */
+ if (cqm_bloomfilter_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bloomfilter_init));
+ goto err4;
+ }
+
+ if (cqm_set_timer_enable(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_set_timer_enable));
+ goto err5;
+ }
+
+ return CQM_SUCCESS;
+err5:
+ cqm_bloomfilter_uninit(ex_handle);
+err4:
+ cqm_db_uninit(ex_handle);
+err3:
+ cqm_event_uninit(ex_handle);
+err2:
+ cqm_mem_uninit(ex_handle);
+err1:
+ cqm_secure_mem_deinit(ex_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_init - Complete CQM initialization.
+ * If the function is a parent fake function, the fake handles are
+ * copied. If it is a child fake function, fake_en is set in the
+ * BAT/CLA table (this happens in the fake copy path,
+ * cqm_init->cqm_mem_init->cqm_fake_init, not in this function).
+ * If a child fake conflict occurs, resources are not
+ * initialized, but the timer must still be enabled.
+ * A function of the normal type follows the normal process.
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ cqm_handle = kmalloc(sizeof(*cqm_handle), GFP_KERNEL | __GFP_ZERO);
+ if (!cqm_handle)
+ return CQM_FAIL;
+
+ /* Clear the memory explicitly instead of relying on the
+ * allocator to zero it.
+ */
+ memset(cqm_handle, 0, sizeof(struct tag_cqm_handle));
+
+ cqm_handle->ex_handle = handle;
+ cqm_handle->dev = (struct pci_dev *)(handle->pcidev_hdl);
+ handle->cqm_hdl = (void *)cqm_handle;
+
+ /* Clearing Statistics */
+ memset(&handle->hw_stats.cqm_stats, 0, sizeof(struct cqm_stats));
+
+ /* Reads VF/PF information. */
+ cqm_handle->func_attribute = handle->hwif->attr;
+ cqm_info(handle->dev_hdl, "Func init: function[%u] type %d(0:PF,1:VF,2:PPF)\n",
+ cqm_handle->func_attribute.func_global_idx,
+ cqm_handle->func_attribute.func_type);
+
+ /* Read capability from configuration management module */
+ ret = cqm_capability_init(ex_handle);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_capability_init));
+ goto err;
+ }
+
+ /* In FAKE mode, only the function's timer bitmap is enabled and
+ * no resources are initialized; otherwise, the configuration of
+ * the fake function would be overwritten.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD_CONFLICT) {
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+ return CQM_SUCCESS;
+ }
+
+ ret = cqm_init_all(ex_handle);
+ if (ret == CQM_FAIL)
+ goto err;
+
+ return CQM_SUCCESS;
+err:
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_uninit - Deinitializes the CQM module. This function is called once each time
+ * a function is removed.
+ * @ex_handle: device pointer that represents the PF
+ */
+void cqm_uninit(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+
+ cqm_set_timer_disable(ex_handle);
+
+ /* After the TMR timer stops, the system releases resources
+ * after a delay of one or two milliseconds.
+ */
+ if (cqm_handle->func_attribute.func_type == CQM_PPF) {
+ if (cqm_handle->func_capability.timer_enable ==
+ CQM_TIMER_ENABLE) {
+ cqm_info(handle->dev_hdl, "PPF timer stop\n");
+ ret = hinic3_ppf_tmr_stop(handle);
+ if (ret != CQM_SUCCESS)
+ /* If the timer fails to stop,
+ * resource release is not affected.
+ */
+ cqm_info(handle->dev_hdl, "PPF timer stop, ret=%d\n", ret);
+ }
+
+ hinic3_ppf_ht_gpa_deinit(handle);
+
+ usleep_range(0x384, 0x3E8); /* A delay of about 1 ms is required
+ * here; it does not need to be precise.
+ */
+ }
+
+ /* Release Bloom Filter Table */
+ cqm_bloomfilter_uninit(ex_handle);
+
+ /* Release hardware doorbell */
+ cqm_db_uninit(ex_handle);
+
+ /* Cancel the callback of the event */
+ cqm_event_uninit(ex_handle);
+
+ /* Release various memory tables and require the service
+ * to release all objects.
+ */
+ cqm_mem_uninit(ex_handle);
+
+ cqm_secure_mem_deinit(ex_handle);
+
+ /* Release cqm_handle */
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+}
+
+static void cqm_test_mode_init(struct tag_cqm_handle *cqm_handle,
+ struct service_cap *service_capability)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ if (service_capability->test_mode == 0)
+ return;
+
+ cqm_info(handle->dev_hdl, "Enter CQM test mode\n");
+
+ func_cap->qpc_number = service_capability->test_qpc_num;
+ func_cap->qpc_reserved =
+ GET_MAX(func_cap->qpc_reserved,
+ service_capability->test_qpc_resvd_num);
+ func_cap->xid_alloc_mode = service_capability->test_xid_alloc_mode;
+ func_cap->gpa_check_enable = service_capability->test_gpa_check_enable;
+ func_cap->pagesize_reorder = service_capability->test_page_size_reorder;
+ func_cap->qpc_alloc_static =
+ (bool)(service_capability->test_qpc_alloc_mode);
+ func_cap->scqc_alloc_static =
+ (bool)(service_capability->test_scqc_alloc_mode);
+ func_cap->flow_table_based_conn_number =
+ service_capability->test_max_conn_num;
+ func_cap->flow_table_based_conn_cache_number =
+ service_capability->test_max_cache_conn_num;
+ func_cap->scqc_number = service_capability->test_scqc_num;
+ func_cap->mpt_number = service_capability->test_mpt_num;
+ func_cap->mpt_reserved = service_capability->test_mpt_recvd_num;
+ func_cap->reorder_number = service_capability->test_reorder_num;
+ /* 256K buckets, 256K*64B = 16MB */
+ func_cap->hash_number = service_capability->test_hash_num;
+}
+
+static void cqm_service_capability_update(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+
+ func_cap->qpc_number = GET_MIN(CQM_MAX_QPC_NUM, func_cap->qpc_number);
+ func_cap->scqc_number = GET_MIN(CQM_MAX_SCQC_NUM,
+ func_cap->scqc_number);
+ func_cap->srqc_number = GET_MIN(CQM_MAX_SRQC_NUM,
+ func_cap->srqc_number);
+ func_cap->childc_number = GET_MIN(CQM_MAX_CHILDC_NUM,
+ func_cap->childc_number);
+}
+
+static void cqm_service_valid_init(struct tag_cqm_handle *cqm_handle,
+ const struct service_cap *service_capability)
+{
+ u16 type = service_capability->chip_svc_type;
+ struct tag_cqm_service *svc = cqm_handle->service;
+
+ svc[CQM_SERVICE_T_NIC].valid = ((type & CFG_SERVICE_MASK_NIC) != 0) ?
+ true : false;
+ svc[CQM_SERVICE_T_OVS].valid = ((type & CFG_SERVICE_MASK_OVS) != 0) ?
+ true : false;
+ svc[CQM_SERVICE_T_ROCE].valid = ((type & CFG_SERVICE_MASK_ROCE) != 0) ?
+ true : false;
+ svc[CQM_SERVICE_T_TOE].valid = ((type & CFG_SERVICE_MASK_TOE) != 0) ?
+ true : false;
+ svc[CQM_SERVICE_T_FC].valid = ((type & CFG_SERVICE_MASK_FC) != 0) ?
+ true : false;
+ svc[CQM_SERVICE_T_IPSEC].valid = ((type & CFG_SERVICE_MASK_IPSEC) != 0) ?
+ true : false;
+ svc[CQM_SERVICE_T_VBS].valid = ((type & CFG_SERVICE_MASK_VBS) != 0) ?
+ true : false;
+ svc[CQM_SERVICE_T_VIRTIO].valid = ((type & CFG_SERVICE_MASK_VIRTIO) != 0) ?
+ true : false;
+ svc[CQM_SERVICE_T_IOE].valid = false;
+ svc[CQM_SERVICE_T_PPA].valid = ((type & CFG_SERVICE_MASK_PPA) != 0) ?
+ true : false;
+}
+
+static void cqm_service_capability_init_nic(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: nic is valid, but nic need not be init by cqm\n");
+}
+
+static void cqm_service_capability_init_ovs(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct ovs_service_cap *ovs_cap = &service_capability->ovs_cap;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: ovs is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: ovs qpc 0x%x\n",
+ ovs_cap->dev_ovs_cap.max_pctxs);
+ func_cap->hash_number += ovs_cap->dev_ovs_cap.max_pctxs;
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ func_cap->qpc_number += ovs_cap->dev_ovs_cap.max_pctxs;
+ func_cap->qpc_basic_size = GET_MAX(ovs_cap->pctx_sz,
+ func_cap->qpc_basic_size);
+ func_cap->qpc_reserved += ovs_cap->dev_ovs_cap.max_pctxs;
+ func_cap->qpc_alloc_static = true;
+ func_cap->pagesize_reorder = CQM_OVS_PAGESIZE_ORDER;
+}
+
+static void cqm_service_capability_init_roce(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct hinic3_board_info *board_info = &handle->board_info;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct rdma_service_cap *rdma_cap = &service_capability->rdma_cap;
+ struct dev_roce_svc_own_cap *roce_own_cap =
+ &rdma_cap->dev_rdma_cap.roce_own_cap;
+
+ cqm_info(handle->dev_hdl, "Cap init: roce is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: roce qpc 0x%x, scqc 0x%x, srqc 0x%x, drc_qp 0x%x\n",
+ roce_own_cap->max_qps, roce_own_cap->max_cqs,
+ roce_own_cap->max_srqs, roce_own_cap->max_drc_qps);
+ cqm_info(handle->dev_hdl, "Cap init: board_type 0x%x, scenes_id:0x%x, qpc_rsv_mode:0x%x, srv_bmp:0x%x\n",
+ board_info->board_type, board_info->scenes_id,
+ roce_qpc_rsv_mode, board_info->service_en_bitmap);
+
+ if (roce_qpc_rsv_mode == CQM_QPC_ROCE_VBS_MODE) {
+ func_cap->qpc_reserved += CQM_QPC_ROCE_RSVD;
+ func_cap->qpc_reserved_back += CQM_QPC_ROCE_VBS_RSVD_BACK;
+ } else if ((service_capability->chip_svc_type & CFG_SERVICE_MASK_ROCEAA) != 0) {
+ func_cap->qpc_reserved += CQM_QPC_ROCEAA_RSVD;
+ func_cap->scq_reserved += CQM_CQ_ROCEAA_RSVD;
+ func_cap->srq_reserved += CQM_SRQ_ROCEAA_RSVD;
+ } else {
+ func_cap->qpc_reserved += CQM_QPC_ROCE_RSVD;
+ }
+ func_cap->qpc_number += roce_own_cap->max_qps;
+ func_cap->qpc_basic_size = GET_MAX(roce_own_cap->qpc_entry_sz,
+ func_cap->qpc_basic_size);
+ if (cqm_handle->func_attribute.func_type == CQM_PF && (IS_MASTER_HOST(handle))) {
+ func_cap->hash_number = roce_own_cap->max_qps;
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ }
+ func_cap->qpc_alloc_static = true;
+ func_cap->scqc_number += roce_own_cap->max_cqs;
+ func_cap->scqc_basic_size = GET_MAX(rdma_cap->cqc_entry_sz,
+ func_cap->scqc_basic_size);
+ func_cap->srqc_number += roce_own_cap->max_srqs;
+ func_cap->srqc_basic_size = GET_MAX(roce_own_cap->srqc_entry_sz,
+ func_cap->srqc_basic_size);
+ func_cap->mpt_number += roce_own_cap->max_mpts;
+ func_cap->mpt_reserved += rdma_cap->reserved_mrws;
+ func_cap->mpt_basic_size = GET_MAX(rdma_cap->mpt_entry_sz,
+ func_cap->mpt_basic_size);
+ func_cap->gid_number = CQM_GID_RDMA_NUM;
+ func_cap->gid_basic_size = CQM_GID_SIZE_32;
+ func_cap->childc_number += CQM_CHILDC_ROCE_NUM;
+ func_cap->childc_basic_size = GET_MAX(CQM_CHILDC_SIZE_256,
+ func_cap->childc_basic_size);
+}
+
+static void cqm_service_capability_init_toe(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_toe_private_capability *toe_own_cap = &cqm_handle->toe_own_capability;
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct rdma_service_cap *rdma_cap = &service_capability->rdma_cap;
+ struct toe_service_cap *toe_cap = &service_capability->toe_cap;
+ struct dev_toe_svc_cap *dev_toe_cap = &toe_cap->dev_toe_cap;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: toe is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: toe qpc 0x%x, scqc 0x%x, srqc 0x%x\n",
+ dev_toe_cap->max_pctxs, dev_toe_cap->max_cqs,
+ dev_toe_cap->max_srqs);
+ func_cap->hash_number += dev_toe_cap->max_pctxs;
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ func_cap->qpc_number += dev_toe_cap->max_pctxs;
+ func_cap->qpc_basic_size = GET_MAX(toe_cap->pctx_sz,
+ func_cap->qpc_basic_size);
+ func_cap->qpc_alloc_static = true;
+ func_cap->scqc_number += dev_toe_cap->max_cqs;
+ func_cap->scqc_basic_size = GET_MAX(toe_cap->scqc_sz,
+ func_cap->scqc_basic_size);
+ func_cap->scqc_alloc_static = true;
+
+ toe_own_cap->toe_srqc_number = dev_toe_cap->max_srqs;
+ toe_own_cap->toe_srqc_start_id = dev_toe_cap->srq_id_start;
+ toe_own_cap->toe_srqc_basic_size = CQM_SRQC_SIZE_64;
+ func_cap->childc_number += dev_toe_cap->max_cctxt;
+ func_cap->childc_basic_size = GET_MAX(CQM_CHILDC_SIZE_256,
+ func_cap->childc_basic_size);
+ func_cap->mpt_number += dev_toe_cap->max_mpts;
+ func_cap->mpt_reserved = 0;
+ func_cap->mpt_basic_size = GET_MAX(rdma_cap->mpt_entry_sz,
+ func_cap->mpt_basic_size);
+}
+
+static void cqm_service_capability_init_ioe(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: ioe is valid\n");
+}
+
+static void cqm_service_capability_init_fc(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct fc_service_cap *fc_cap = &service_capability->fc_cap;
+ struct dev_fc_svc_cap *dev_fc_cap = &fc_cap->dev_fc_cap;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: fc is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: fc qpc 0x%x, scqc 0x%x, srqc 0x%x\n",
+ dev_fc_cap->max_parent_qpc_num, dev_fc_cap->scq_num,
+ dev_fc_cap->srq_num);
+ func_cap->hash_number += dev_fc_cap->max_parent_qpc_num;
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ func_cap->qpc_number += dev_fc_cap->max_parent_qpc_num;
+ func_cap->qpc_basic_size = GET_MAX(fc_cap->parent_qpc_size,
+ func_cap->qpc_basic_size);
+ func_cap->qpc_alloc_static = true;
+ func_cap->scqc_number += dev_fc_cap->scq_num;
+ func_cap->scqc_basic_size = GET_MAX(fc_cap->scqc_size,
+ func_cap->scqc_basic_size);
+ func_cap->srqc_number += dev_fc_cap->srq_num;
+ func_cap->srqc_basic_size = GET_MAX(fc_cap->srqc_size,
+ func_cap->srqc_basic_size);
+ func_cap->lun_number = CQM_LUN_FC_NUM;
+ func_cap->lun_basic_size = CQM_LUN_SIZE_8;
+ func_cap->taskmap_number = CQM_TASKMAP_FC_NUM;
+ func_cap->taskmap_basic_size = PAGE_SIZE;
+ func_cap->childc_number += dev_fc_cap->max_child_qpc_num;
+ func_cap->childc_basic_size = GET_MAX(fc_cap->child_qpc_size,
+ func_cap->childc_basic_size);
+ func_cap->pagesize_reorder = CQM_FC_PAGESIZE_ORDER;
+}
+
+static void cqm_service_capability_init_vbs(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct vbs_service_cap *vbs_cap = &service_capability->vbs_cap;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: vbs is valid\n");
+
+ /* If the entry size is greater than the cache line (256 bytes),
+ * align the entries by cache line.
+ */
+ func_cap->xid2cid_number +=
+ (CQM_XID2CID_VBS_NUM * service_capability->virtio_vq_size) / CQM_CHIP_CACHELINE;
+ func_cap->xid2cid_basic_size = CQM_CHIP_CACHELINE;
+ func_cap->qpc_number += (vbs_cap->vbs_max_volq * 2); // VOLQ group * 2
+ func_cap->qpc_basic_size = GET_MAX(CQM_VBS_QPC_SIZE,
+ func_cap->qpc_basic_size);
+ func_cap->qpc_alloc_static = true;
+}
+
+static void cqm_service_capability_init_ipsec(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct ipsec_service_cap *ipsec_cap = &service_capability->ipsec_cap;
+ struct dev_ipsec_svc_cap *ipsec_srvcap = &ipsec_cap->dev_ipsec_cap;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+ func_cap->childc_number += ipsec_srvcap->max_sactxs;
+ func_cap->childc_basic_size = GET_MAX(CQM_CHILDC_SIZE_256,
+ func_cap->childc_basic_size);
+ func_cap->scqc_number += ipsec_srvcap->max_cqs;
+ func_cap->scqc_basic_size = GET_MAX(CQM_SCQC_SIZE_64,
+ func_cap->scqc_basic_size);
+ func_cap->scqc_alloc_static = true;
+ cqm_info(handle->dev_hdl, "Cap init: ipsec is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: ipsec childc_num 0x%x, childc_bsize %d, scqc_num 0x%x, scqc_bsize %d\n",
+ ipsec_srvcap->max_sactxs, func_cap->childc_basic_size,
+ ipsec_srvcap->max_cqs, func_cap->scqc_basic_size);
+}
+
+static void cqm_service_capability_init_virtio(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+
+ cqm_info(handle->dev_hdl, "Cap init: virtio is valid\n");
+ /* If the entry size is greater than the cache line (256 bytes),
+ * align the entries by cache line.
+ */
+ cqm_handle->func_capability.xid2cid_number +=
+ (CQM_XID2CID_VIRTIO_NUM * service_capability->virtio_vq_size) / CQM_CHIP_CACHELINE;
+ cqm_handle->func_capability.xid2cid_basic_size = CQM_CHIP_CACHELINE;
+}
+
+static void cqm_service_capability_init_ppa(struct tag_cqm_handle *cqm_handle, void *pra)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct ppa_service_cap *ppa_cap = &service_capability->ppa_cap;
+
+ cqm_info(handle->dev_hdl, "Cap init: ppa is valid\n");
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ func_cap->qpc_alloc_static = true;
+ func_cap->pagesize_reorder = CQM_PPA_PAGESIZE_ORDER;
+ func_cap->qpc_basic_size = GET_MAX(ppa_cap->pctx_sz,
+ func_cap->qpc_basic_size);
+}
+
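+/* Per-service capability initializers, walked by cqm_service_capability_init(). */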
+struct cqm_srv_cap_init serv_cap_init_list[] = {
+ {CQM_SERVICE_T_NIC, cqm_service_capability_init_nic},
+ {CQM_SERVICE_T_OVS, cqm_service_capability_init_ovs},
+ {CQM_SERVICE_T_ROCE, cqm_service_capability_init_roce},
+ {CQM_SERVICE_T_TOE, cqm_service_capability_init_toe},
+ {CQM_SERVICE_T_IOE, cqm_service_capability_init_ioe},
+ {CQM_SERVICE_T_FC, cqm_service_capability_init_fc},
+ {CQM_SERVICE_T_VBS, cqm_service_capability_init_vbs},
+ {CQM_SERVICE_T_IPSEC, cqm_service_capability_init_ipsec},
+ {CQM_SERVICE_T_VIRTIO, cqm_service_capability_init_virtio},
+ {CQM_SERVICE_T_PPA, cqm_service_capability_init_ppa},
+};
+
+static void cqm_service_capability_init(struct tag_cqm_handle *cqm_handle,
+ struct service_cap *service_capability)
+{
+ u32 list_size = ARRAY_SIZE(serv_cap_init_list);
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ cqm_handle->service[i].valid = false;
+ cqm_handle->service[i].has_register = false;
+ cqm_handle->service[i].buf_order = 0;
+ }
+
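+ /* Mark each service valid according to the chip service bitmap,
+ * then let every valid service accumulate its capability needs.
+ */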
+ cqm_service_valid_init(cqm_handle, service_capability);
+
+ cqm_info(handle->dev_hdl, "Cap init: service type %d\n",
+ service_capability->chip_svc_type);
+
+ for (i = 0; i < list_size; i++) {
+ if (cqm_handle->service[serv_cap_init_list[i].service_type].valid &&
+ serv_cap_init_list[i].serv_cap_proc) {
+ serv_cap_init_list[i].serv_cap_proc(cqm_handle,
+ (void *)service_capability);
+ }
+ }
+}
+
+s32 cqm_get_fake_func_type(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ u32 parent_func, child_func_start, child_func_number, i;
+ u32 idx = cqm_handle->func_attribute.func_global_idx;
+
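+ /* Return CQM_FAKE_FUNC_PARENT if this function owns a fake child
+ * range, CQM_FAKE_FUNC_CHILD_CONFLICT if it falls inside a child
+ * range, and CQM_FAKE_FUNC_NORMAL otherwise.
+ */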
+ /* Currently, only one set of fake configurations is implemented.
+ * fake_cfg_number = 1
+ */
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ parent_func = func_cap->fake_cfg[i].parent_func;
+ child_func_start = func_cap->fake_cfg[i].child_func_start;
+ child_func_number = func_cap->fake_cfg[i].child_func_number;
+
+ if (idx == parent_func) {
+ return CQM_FAKE_FUNC_PARENT;
+ } else if ((idx >= child_func_start) &&
+ (idx < (child_func_start + child_func_number))) {
+ return CQM_FAKE_FUNC_CHILD_CONFLICT;
+ }
+ }
+
+ return CQM_FAKE_FUNC_NORMAL;
+}
+
+s32 cqm_get_child_func_start(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_func_attr *func_attr = &cqm_handle->func_attribute;
+ u32 i;
+
+ /* Currently, only one set of fake configurations is implemented.
+ * fake_cfg_number = 1
+ */
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ if (func_attr->func_global_idx ==
+ func_cap->fake_cfg[i].parent_func)
+ return (s32)(func_cap->fake_cfg[i].child_func_start);
+ }
+
+ return CQM_FAIL;
+}
+
+s32 cqm_get_child_func_number(struct tag_cqm_handle *cqm_handle)
+{
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_func_attr *func_attr = &cqm_handle->func_attribute;
+ u32 i;
+
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ if (func_attr->func_global_idx ==
+ func_cap->fake_cfg[i].parent_func)
+ return (s32)(func_cap->fake_cfg[i].child_func_number);
+ }
+
+ return CQM_FAIL;
+}
+
+/* Set func_type in fake_cqm_handle to ppf, pf, or vf. */
+static void cqm_set_func_type(struct tag_cqm_handle *cqm_handle)
+{
+ u32 idx = cqm_handle->func_attribute.func_global_idx;
+
+ if (idx == 0)
+ cqm_handle->func_attribute.func_type = CQM_PPF;
+ else if (idx < CQM_MAX_PF_NUM)
+ cqm_handle->func_attribute.func_type = CQM_PF;
+ else
+ cqm_handle->func_attribute.func_type = CQM_VF;
+}
+
+static void cqm_lb_fake_mode_init(struct hinic3_hwdev *handle, struct service_cap *svc_cap)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct tag_cqm_fake_cfg *cfg = func_cap->fake_cfg;
+
+ func_cap->lb_mode = svc_cap->lb_mode;
+
+ /* Initializing the LB Mode */
+ if (func_cap->lb_mode == CQM_LB_MODE_NORMAL)
+ func_cap->smf_pg = 0;
+ else
+ func_cap->smf_pg = svc_cap->smf_pg;
+
+ /* Initializing the FAKE Mode */
+ if (svc_cap->fake_vf_num == 0) {
+ func_cap->fake_cfg_number = 0;
+ func_cap->fake_func_type = CQM_FAKE_FUNC_NORMAL;
+ func_cap->fake_vf_qpc_number = 0;
+ } else {
+ func_cap->fake_cfg_number = 1;
+
+ /* When configuring fake mode, ensure that the parent function is
+ * not included in the child function range; otherwise, the
+ * system would be initialized repeatedly. The following
+ * configuration is used to verify the OVS fake configuration on
+ * the FPGA.
+ */
+ cfg[0].parent_func = cqm_handle->func_attribute.port_to_port_idx;
+ cfg[0].child_func_start = svc_cap->fake_vf_start_id;
+ cfg[0].child_func_number = svc_cap->fake_vf_num_cfg;
+
+ func_cap->fake_func_type = (u32)cqm_get_fake_func_type(cqm_handle);
+ func_cap->fake_vf_qpc_number = svc_cap->fake_vf_max_pctx;
+ }
+
+ cqm_info(handle->dev_hdl, "Cap init: lb_mode=%u\n", func_cap->lb_mode);
+ cqm_info(handle->dev_hdl, "Cap init: smf_pg=%u\n", func_cap->smf_pg);
+ cqm_info(handle->dev_hdl, "Cap init: fake_func_type=%u\n", func_cap->fake_func_type);
+ cqm_info(handle->dev_hdl, "Cap init: fake_cfg_number=%u\n", func_cap->fake_cfg_number);
+}
+
+static int cqm_capability_init_bloomfilter(struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+
+ func_cap->bloomfilter_enable = service_capability->bloomfilter_en;
+ cqm_info(handle->dev_hdl, "Cap init: bloomfilter_enable %u (1: enable; 0: disable)\n",
+ func_cap->bloomfilter_enable);
+
+ if (func_cap->bloomfilter_enable != 0) {
+ func_cap->bloomfilter_length = service_capability->bfilter_len;
+ func_cap->bloomfilter_addr = service_capability->bfilter_start_addr;
+ if (func_cap->bloomfilter_length != 0 &&
+ !cqm_check_align(func_cap->bloomfilter_length)) {
+ cqm_err(handle->dev_hdl, "Cap init: bloomfilter_length %u is not the power of 2\n",
+ func_cap->bloomfilter_length);
+
+ return CQM_FAIL;
+ }
+ }
+
+ cqm_info(handle->dev_hdl, "Cap init: bloomfilter_length 0x%x, bloomfilter_addr 0x%x\n",
+ func_cap->bloomfilter_length, func_cap->bloomfilter_addr);
+
+ return 0;
+}
+
+static void cqm_capability_init_part_cap(struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+
+ func_cap->flow_table_based_conn_number = service_capability->max_connect_num;
+ func_cap->flow_table_based_conn_cache_number = service_capability->max_stick2cache_num;
+ cqm_info(handle->dev_hdl, "Cap init: cfg max_conn_num 0x%x, max_cache_conn_num 0x%x\n",
+ func_cap->flow_table_based_conn_number,
+ func_cap->flow_table_based_conn_cache_number);
+
+ func_cap->qpc_reserved = 0;
+ func_cap->qpc_reserved_back = 0;
+ func_cap->mpt_reserved = 0;
+ func_cap->scq_reserved = 0;
+ func_cap->srq_reserved = 0;
+ func_cap->qpc_alloc_static = false;
+ func_cap->scqc_alloc_static = false;
+
+ func_cap->l3i_number = 0;
+ func_cap->l3i_basic_size = CQM_L3I_SIZE_8;
+
+ func_cap->xid_alloc_mode = true; /* xid alloc do not reuse */
+ func_cap->gpa_check_enable = true;
+}
+
+static int cqm_capability_init_timer(struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+ struct hinic3_func_attr *func_attr = &cqm_handle->func_attribute;
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ u32 total_timer_num = 0;
+ int err;
+
+ /* Initializes the PPF capabilities: include timer, pf, vf. */
+ if (func_attr->func_type == CQM_PPF && service_capability->timer_en) {
+ func_cap->pf_num = service_capability->pf_num;
+ func_cap->pf_id_start = service_capability->pf_id_start;
+ func_cap->vf_num = service_capability->vf_num;
+ func_cap->vf_id_start = service_capability->vf_id_start;
+ cqm_info(handle->dev_hdl, "Cap init: total function num 0x%x\n",
+ service_capability->host_total_function);
+ cqm_info(handle->dev_hdl,
+ "Cap init: pf_num 0x%x, pf_start 0x%x, vf_num 0x%x, vf_start 0x%x\n",
+ func_cap->pf_num, func_cap->pf_id_start,
+ func_cap->vf_num, func_cap->vf_id_start);
+
+ err = hinic3_get_ppf_timer_cfg(handle);
+ if (err != 0)
+ return err;
+
+ func_cap->timer_pf_num = service_capability->timer_pf_num;
+ func_cap->timer_pf_id_start = service_capability->timer_pf_id_start;
+ func_cap->timer_vf_num = service_capability->timer_vf_num;
+ func_cap->timer_vf_id_start = service_capability->timer_vf_id_start;
+ cqm_info(handle->dev_hdl,
+ "host timer init: timer_pf_num 0x%x, timer_pf_id_start 0x%x, timer_vf_num 0x%x, timer_vf_id_start 0x%x\n",
+ func_cap->timer_pf_num, func_cap->timer_pf_id_start,
+ func_cap->timer_vf_num, func_cap->timer_vf_id_start);
+
+ total_timer_num = func_cap->timer_pf_num + func_cap->timer_vf_num;
+ if (IS_SLAVE_HOST(handle)) {
+ total_timer_num *= CQM_TIMER_NUM_MULTI;
+ cqm_info(handle->dev_hdl,
+ "host timer init: need double tw resources, total_timer_num=0x%x\n",
+ total_timer_num);
+ }
+ }
+
+ func_cap->timer_enable = service_capability->timer_en;
+ cqm_info(handle->dev_hdl, "Cap init: timer_enable %u (1: enable; 0: disable)\n",
+ func_cap->timer_enable);
+
+ func_cap->timer_number = CQM_TIMER_ALIGN_SCALE_NUM * total_timer_num;
+ func_cap->timer_basic_size = CQM_TIMER_SIZE_32;
+
+ return 0;
+}
+
+static void cqm_capability_init_cap_print(struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+
+ func_cap->ft_enable = service_capability->sf_svc_attr.ft_en;
+ func_cap->rdma_enable = service_capability->sf_svc_attr.rdma_en;
+
+ cqm_info(handle->dev_hdl, "Cap init: pagesize_reorder %u\n", func_cap->pagesize_reorder);
+ cqm_info(handle->dev_hdl, "Cap init: xid_alloc_mode %d, gpa_check_enable %d\n",
+ func_cap->xid_alloc_mode, func_cap->gpa_check_enable);
+ cqm_info(handle->dev_hdl, "Cap init: qpc_alloc_mode %d, scqc_alloc_mode %d\n",
+ func_cap->qpc_alloc_static, func_cap->scqc_alloc_static);
+ cqm_info(handle->dev_hdl, "Cap init: hash_number 0x%x\n", func_cap->hash_number);
+ cqm_info(handle->dev_hdl, "Cap init: qpc_num 0x%x, qpc_rsvd 0x%x, qpc_basic_size 0x%x\n",
+ func_cap->qpc_number, func_cap->qpc_reserved, func_cap->qpc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: scqc_num 0x%x, scqc_rsvd 0x%x, scqc_basic 0x%x\n",
+ func_cap->scqc_number, func_cap->scq_reserved, func_cap->scqc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: srqc_num 0x%x, srqc_rsvd 0x%x, srqc_basic 0x%x\n",
+ func_cap->srqc_number, func_cap->srq_reserved, func_cap->srqc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: mpt_number 0x%x, mpt_reserved 0x%x\n",
+ func_cap->mpt_number, func_cap->mpt_reserved);
+ cqm_info(handle->dev_hdl, "Cap init: gid_number 0x%x, lun_number 0x%x\n",
+ func_cap->gid_number, func_cap->lun_number);
+ cqm_info(handle->dev_hdl, "Cap init: taskmap_number 0x%x, l3i_number 0x%x\n",
+ func_cap->taskmap_number, func_cap->l3i_number);
+ cqm_info(handle->dev_hdl, "Cap init: timer_number 0x%x, childc_number 0x%x\n",
+ func_cap->timer_number, func_cap->childc_number);
+ cqm_info(handle->dev_hdl, "Cap init: childc_basic_size 0x%x\n",
+ func_cap->childc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: xid2cid_number 0x%x, reorder_number 0x%x\n",
+ func_cap->xid2cid_number, func_cap->reorder_number);
+ cqm_info(handle->dev_hdl, "Cap init: ft_enable %d, rdma_enable %d\n",
+ func_cap->ft_enable, func_cap->rdma_enable);
+}
+
+/**
+ * cqm_capability_init - Initializes the function and service capabilities of the CQM.
+ * Information needs to be read from the configuration management module.
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_capability_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+ struct hinic3_func_attr *func_attr = &cqm_handle->func_attribute;
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ int err = 0;
+
+ err = cqm_capability_init_timer(handle);
+ if (err != 0)
+ goto out;
+
+ err = cqm_capability_init_bloomfilter(handle);
+ if (err != 0)
+ goto out;
+
+ cqm_capability_init_part_cap(handle);
+
+ cqm_lb_fake_mode_init(handle, service_capability);
+
+ cqm_service_capability_init(cqm_handle, service_capability);
+
+ cqm_test_mode_init(cqm_handle, service_capability);
+
+ cqm_service_capability_update(cqm_handle);
+
+ cqm_capability_init_cap_print(handle);
+
+ return CQM_SUCCESS;
+
+out:
+ if (func_attr->func_type == CQM_PPF)
+ func_cap->timer_enable = 0;
+
+ return err;
+}
+
+static void cqm_fake_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type !=
+ CQM_FAKE_FUNC_PARENT)
+ return;
+
+ for (i = 0; i < CQM_FAKE_FUNC_MAX; i++) {
+ kfree(cqm_handle->fake_cqm_handle[i]);
+ cqm_handle->fake_cqm_handle[i] = NULL;
+ }
+}
+
+static void set_fake_cqm_attr(struct hinic3_hwdev *handle, struct tag_cqm_handle *fake_cqm_handle,
+ s32 child_func_start, u32 i)
+{
+ struct tag_cqm_func_capability *func_cap = NULL;
+ struct hinic3_func_attr *func_attr = NULL;
+ struct service_cap *cap = &handle->cfg_mgmt->svc_cap;
+
+ func_attr = &fake_cqm_handle->func_attribute;
+ func_cap = &fake_cqm_handle->func_capability;
+ func_attr->func_global_idx = (u16)(child_func_start + i);
+ cqm_set_func_type(fake_cqm_handle);
+ func_cap->fake_func_type = CQM_FAKE_FUNC_CHILD;
+ cqm_info(handle->dev_hdl, "Fake func init: function[%u] type %d(0:PF,1:VF,2:PPF)\n",
+ func_attr->func_global_idx, func_attr->func_type);
+
+ func_cap->qpc_number = cap->fake_vf_max_pctx;
+ func_cap->qpc_number = GET_MIN(CQM_MAX_QPC_NUM, func_cap->qpc_number);
+ func_cap->hash_number = cap->fake_vf_max_pctx;
+ func_cap->qpc_reserved = cap->fake_vf_max_pctx;
+
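+ /* Each fake child gets its own bloom filter slice of length
+ * fake_vf_bfilter_len, placed i slices after the start address.
+ */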
+ if (cap->fake_vf_bfilter_len != 0) {
+ func_cap->bloomfilter_enable = true;
+ func_cap->bloomfilter_addr = cap->fake_vf_bfilter_start_addr +
+ cap->fake_vf_bfilter_len * i;
+ func_cap->bloomfilter_length = cap->fake_vf_bfilter_len;
+ }
+}
+
+/**
+ * cqm_fake_init - When the fake VF mode is supported, the CQM handles of the fake VFs
+ * need to be copied.
+ * @cqm_handle: Parent CQM handle of the current PF
+ */
+static s32 cqm_fake_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_handle *fake_cqm_handle = NULL;
+ struct tag_cqm_func_capability *func_cap = NULL;
+ s32 child_func_start, child_func_number;
+ u32 i;
+
+ func_cap = &cqm_handle->func_capability;
+ if (func_cap->fake_func_type != CQM_FAKE_FUNC_PARENT)
+ return CQM_SUCCESS;
+
+ child_func_start = cqm_get_child_func_start(cqm_handle);
+ if (child_func_start == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_start));
+ return CQM_FAIL;
+ }
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
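+ /* Allocate one child CQM handle per fake VF and clone the parent
+ * handle into it before fixing up the per-function attributes.
+ */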
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = kmalloc(sizeof(*fake_cqm_handle), GFP_KERNEL | __GFP_ZERO);
+ if (!fake_cqm_handle) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(fake_cqm_handle));
+ goto err;
+ }
+
+ /* Copy the attributes of the parent CQM handle to the child CQM
+ * handle, then adjust the per-function values.
+ */
+ memcpy(fake_cqm_handle, cqm_handle, sizeof(struct tag_cqm_handle));
+ set_fake_cqm_attr(handle, fake_cqm_handle, child_func_start, i);
+
+ fake_cqm_handle->parent_cqm_handle = cqm_handle;
+ cqm_handle->fake_cqm_handle[i] = fake_cqm_handle;
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_fake_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+static void cqm_fake_mem_uninit(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type !=
+ CQM_FAKE_FUNC_PARENT)
+ return;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+
+ cqm_object_table_uninit(fake_cqm_handle);
+ cqm_bitmap_uninit(fake_cqm_handle);
+ cqm_cla_uninit(fake_cqm_handle, CQM_BAT_ENTRY_MAX);
+ cqm_bat_uninit(fake_cqm_handle);
+ }
+}
+
+/**
+ * cqm_fake_mem_init - Initialize resources of the extended fake function
+ * @cqm_handle: Parent CQM handle of the current PF
+ */
+static s32 cqm_fake_mem_init(struct tag_cqm_handle *cqm_handle)
+{
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type !=
+ CQM_FAKE_FUNC_PARENT)
+ return CQM_SUCCESS;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+ snprintf(fake_cqm_handle->name, VRAM_NAME_APPLY_LEN,
+ "%s%s%02u", cqm_handle->name, VRAM_CQM_FAKE_MEM_BASE, i);
+
+ if (cqm_bat_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bat_init));
+ goto err;
+ }
+
+ if (cqm_cla_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_init));
+ goto err;
+ }
+
+ if (cqm_bitmap_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_init));
+ goto err;
+ }
+
+ if (cqm_object_table_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_init));
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_fake_mem_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_mem_init - Initialize CQM memory, including tables at different levels.
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_mem_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ snprintf(cqm_handle->name, VRAM_NAME_APPLY_LEN,
+ "%s%02u", VRAM_CQM_GLB_FUNC_BASE, hinic3_global_func_id(handle));
+
+ if (cqm_fake_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fake_init));
+ return CQM_FAIL;
+ }
+
+ if (cqm_fake_mem_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fake_mem_init));
+ goto err1;
+ }
+
+ if (cqm_bat_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_init));
+ goto err2;
+ }
+
+ if (cqm_cla_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init));
+ goto err3;
+ }
+
+ if (cqm_bitmap_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_init));
+ goto err4;
+ }
+
+ if (cqm_object_table_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_init));
+ goto err5;
+ }
+
+ return CQM_SUCCESS;
+
+err5:
+ cqm_bitmap_uninit(cqm_handle);
+err4:
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+err3:
+ cqm_bat_uninit(cqm_handle);
+err2:
+ cqm_fake_mem_uninit(cqm_handle);
+err1:
+ cqm_fake_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_mem_uninit - Deinitialize CQM memory, including tables at different levels
+ * @ex_handle: device pointer that represents the PF
+ */
+void cqm_mem_uninit(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ cqm_object_table_uninit(cqm_handle);
+ cqm_bitmap_uninit(cqm_handle);
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+ cqm_bat_uninit(cqm_handle);
+ cqm_fake_mem_uninit(cqm_handle);
+ cqm_fake_uninit(cqm_handle);
+}
+
+/**
+ * cqm_event_init - Initialize CQM event callback
+ * @ex_handle: device pointer that represents the PF
+ */
+s32 cqm_event_init(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ /* Registers the CEQ and AEQ callback functions. */
+ if (hinic3_ceq_register_cb(ex_handle, ex_handle, HINIC3_NON_L2NIC_SCQ,
+ cqm_scq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register scq callback\n");
+ return CQM_FAIL;
+ }
+
+ if (hinic3_ceq_register_cb(ex_handle, ex_handle, HINIC3_NON_L2NIC_ECQ,
+ cqm_ecq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register ecq callback\n");
+ goto err1;
+ }
+
+ if (hinic3_ceq_register_cb(ex_handle, ex_handle, HINIC3_NON_L2NIC_NO_CQ_EQ,
+ cqm_nocq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register nocq callback\n");
+ goto err2;
+ }
+
+ if (hinic3_aeq_register_swe_cb(ex_handle, ex_handle, HINIC3_STATEFUL_EVENT,
+ cqm_aeq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register aeq callback\n");
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_NO_CQ_EQ);
+err2:
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_ECQ);
+err1:
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_SCQ);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_event_uninit - Deinitialize CQM event callback
+ * @ex_handle: device pointer that represents the PF
+ */
+void cqm_event_uninit(void *ex_handle)
+{
+ hinic3_aeq_unregister_swe_cb(ex_handle, HINIC3_STATEFUL_EVENT);
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_NO_CQ_EQ);
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_ECQ);
+ hinic3_ceq_unregister_cb(ex_handle, HINIC3_NON_L2NIC_SCQ);
+}
+
+/**
+ * cqm_scq_callback - CQM module callback processing for the ceq, which processes NON_L2NIC_SCQ
+ * @ex_handle: device pointer that represents the PF
+ * @ceqe_data: CEQE data
+ */
+void cqm_scq_callback(void *ex_handle, u32 ceqe_data)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_service_register_template *service_template = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct tag_cqm_queue *cqm_queue = NULL;
+ struct tag_cqm_object *obj = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: scq_callback_ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_scq_callback_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: scq_callback_cqm_handle is null\n", __func__);
+ return;
+ }
+
+ cqm_dbg("Event: %s, ceqe_data=0x%x\n", __func__, ceqe_data);
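+ /* Look up the SCQ object by the CQN carried in the CEQE; the
+ * reference taken here is dropped at the end of the callback.
+ */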
+ obj = cqm_object_get(ex_handle, CQM_OBJECT_NONRDMA_SCQ,
+ CQM_CQN_FROM_CEQE(ceqe_data), true);
+ if (unlikely(!obj)) {
+ pr_err("[CQM]%s: scq_callback_obj is null\n", __func__);
+ return;
+ }
+
+ if (unlikely(obj->service_type >= CQM_SERVICE_T_MAX)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(obj->service_type));
+ cqm_object_put(obj);
+ return;
+ }
+
+ service = &cqm_handle->service[obj->service_type];
+ service_template = &service->service_template;
+ if (service_template->shared_cq_ceq_callback) {
+ cqm_queue = (struct tag_cqm_queue *)obj;
+ service_template->shared_cq_ceq_callback(service_template->service_handle,
+ CQM_CQN_FROM_CEQE(ceqe_data),
+ cqm_queue->priv);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_PTR_NULL(shared_cq_ceq_callback));
+ }
+
+ cqm_object_put(obj);
+}
+
+/**
+ * cqm_ecq_callback - CQM module callback processing for the ceq, which processes NON_L2NIC_ECQ.
+ * @ex_handle: device pointer that represents the PF
+ * @ceqe_data: CEQE data
+ */
+void cqm_ecq_callback(void *ex_handle, u32 ceqe_data)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_service_register_template *service_template = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct tag_cqm_qpc_mpt *qpc = NULL;
+ struct tag_cqm_object *obj = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ecq_callback_ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_ecq_callback_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: ecq_callback_cqm_handle is null\n", __func__);
+ return;
+ }
+
+ obj = cqm_object_get(ex_handle, CQM_OBJECT_SERVICE_CTX,
+ CQM_XID_FROM_CEQE(ceqe_data), true);
+ if (unlikely(!obj)) {
+ pr_err("[CQM]%s: ecq_callback_obj is null\n", __func__);
+ return;
+ }
+
+ if (unlikely(obj->service_type >= CQM_SERVICE_T_MAX)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(obj->service_type));
+ cqm_object_put(obj);
+ return;
+ }
+
+ service = &cqm_handle->service[obj->service_type];
+ service_template = &service->service_template;
+ if (service_template->embedded_cq_ceq_callback) {
+ qpc = (struct tag_cqm_qpc_mpt *)obj;
+ service_template->embedded_cq_ceq_callback(service_template->service_handle,
+ CQM_XID_FROM_CEQE(ceqe_data),
+ qpc->priv);
+ } else {
+ cqm_err(handle->dev_hdl,
+ CQM_PTR_NULL(embedded_cq_ceq_callback));
+ }
+
+ cqm_object_put(obj);
+}
+
+/**
+ * cqm_nocq_callback - CQM module callback processing for the ceq,
+ * which processes NON_L2NIC_NO_CQ_EQ
+ * @ex_handle: device pointer that represents the PF
+ * @ceqe_data: CEQE data
+ */
+void cqm_nocq_callback(void *ex_handle, u32 ceqe_data)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_service_register_template *service_template = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct tag_cqm_qpc_mpt *qpc = NULL;
+ struct tag_cqm_object *obj = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: nocq_callback_ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nocq_callback_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: nocq_callback_cqm_handle is null\n", __func__);
+ return;
+ }
+
+ obj = cqm_object_get(ex_handle, CQM_OBJECT_SERVICE_CTX,
+ CQM_XID_FROM_CEQE(ceqe_data), true);
+ if (unlikely(!obj)) {
+ pr_err("[CQM]%s: nocq_callback_obj is null\n", __func__);
+ return;
+ }
+
+ if (unlikely(obj->service_type >= CQM_SERVICE_T_MAX)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(obj->service_type));
+ cqm_object_put(obj);
+ return;
+ }
+
+ service = &cqm_handle->service[obj->service_type];
+ service_template = &service->service_template;
+ if (service_template->no_cq_ceq_callback) {
+ qpc = (struct tag_cqm_qpc_mpt *)obj;
+ service_template->no_cq_ceq_callback(service_template->service_handle,
+ CQM_XID_FROM_CEQE(ceqe_data),
+ CQM_QID_FROM_CEQE(ceqe_data),
+ qpc->priv);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_PTR_NULL(no_cq_ceq_callback));
+ }
+
+ cqm_object_put(obj);
+}
+
+static u32 cqm_aeq_event2type(u8 event)
+{
+ u32 service_type;
+
+ /* Distributes events to different service modules
+ * based on the event type.
+ */
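+ /* The checks below rely on the CQM_AEQ_BASE_T_* values being defined
+ * in ascending order (ROCE < FC < IOE < TOE < VBS < IPSEC < MAX); any
+ * event below CQM_AEQ_BASE_T_ROCE is treated as a NIC event.
+ */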
+ if (event < CQM_AEQ_BASE_T_ROCE)
+ service_type = CQM_SERVICE_T_NIC;
+ else if (event < CQM_AEQ_BASE_T_FC)
+ service_type = CQM_SERVICE_T_ROCE;
+ else if (event < CQM_AEQ_BASE_T_IOE)
+ service_type = CQM_SERVICE_T_FC;
+ else if (event < CQM_AEQ_BASE_T_TOE)
+ service_type = CQM_SERVICE_T_IOE;
+ else if (event < CQM_AEQ_BASE_T_VBS)
+ service_type = CQM_SERVICE_T_TOE;
+ else if (event < CQM_AEQ_BASE_T_IPSEC)
+ service_type = CQM_SERVICE_T_VBS;
+ else if (event < CQM_AEQ_BASE_T_MAX)
+ service_type = CQM_SERVICE_T_IPSEC;
+ else
+ service_type = CQM_SERVICE_T_MAX;
+
+ return service_type;
+}
+
+/**
+ * cqm_aeq_callback - CQM module callback processing for the aeq
+ * @ex_handle: device pointer that represents the PF
+ * @event: AEQ event
+ * @data: callback private data
+ */
+u8 cqm_aeq_callback(void *ex_handle, u8 event, u8 *data)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_service_register_template *service_template = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ u8 event_level = FAULT_LEVEL_MAX;
+ u32 service_type;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: aeq_callback_ex_handle is null\n", __func__);
+ return event_level;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_aeq_callback_cnt[event]);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: aeq_callback_cqm_handle is null\n", __func__);
+ return event_level;
+ }
+
+ /* Distributes events to different service modules
+ * based on the event type.
+ */
+ service_type = cqm_aeq_event2type(event);
+ if (service_type == CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(event));
+ return event_level;
+ }
+
+ service = &cqm_handle->service[service_type];
+ service_template = &service->service_template;
+
+ if (!service_template->aeq_level_callback)
+ cqm_err(handle->dev_hdl,
+ "Event: service_type %u aeq_level_callback unregistered, event %u\n",
+ service_type, event);
+ else
+ event_level =
+ service_template->aeq_level_callback(service_template->service_handle,
+ event, data);
+
+ if (!service_template->aeq_callback)
+ cqm_err(handle->dev_hdl, "Event: service_type %u aeq_callback unregistered\n",
+ service_type);
+ else
+ service_template->aeq_callback(service_template->service_handle,
+ event, data);
+
+ return event_level;
+}
+
+/**
+ * cqm_service_register - Register the service driver's callback template with the CQM
+ * @ex_handle: device pointer that represents the PF
+ * @service_template: CQM service template
+ */
+s32 cqm_service_register(void *ex_handle, struct tag_service_register_template *service_template)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!service_template)) {
+ pr_err("[CQM]%s: service_template is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ if (service_template->service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl,
+ CQM_WRONG_VALUE(service_template->service_type));
+ return CQM_FAIL;
+ }
+ service = &cqm_handle->service[service_template->service_type];
+ if (!service->valid) {
+ cqm_err(handle->dev_hdl, "Service register: service_type %u is invalid\n",
+ service_template->service_type);
+ return CQM_FAIL;
+ }
+
+ if (service->has_register) {
+ cqm_err(handle->dev_hdl, "Service register: service_type %u has registered\n",
+ service_template->service_type);
+ return CQM_FAIL;
+ }
+
+ service->has_register = true;
+ memcpy((void *)(&service->service_template), (void *)service_template,
+ sizeof(struct tag_service_register_template));
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_service_register);
+
+/**
+ * cqm_service_unregister - Deregister the service driver's callbacks from the CQM
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ */
+void cqm_service_unregister(void *ex_handle, u32 service_type)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return;
+ }
+
+ service = &cqm_handle->service[service_type];
+ if (!service->valid)
+ cqm_err(handle->dev_hdl, "Service unregister: service_type %u is disable\n",
+ service_type);
+
+ service->has_register = false;
+ memset(&service->service_template, 0, sizeof(struct tag_service_register_template));
+}
+EXPORT_SYMBOL(cqm_service_unregister);
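+/* Typical usage by a service driver (illustrative sketch; only fields that are
+ * referenced in this file are shown, and my_dev/my_scq_handler/hwdev are
+ * placeholders):
+ *
+ *   struct tag_service_register_template tmpl = {0};
+ *
+ *   tmpl.service_type = CQM_SERVICE_T_TOE;
+ *   tmpl.service_handle = my_dev;
+ *   tmpl.shared_cq_ceq_callback = my_scq_handler;
+ *   if (cqm_service_register(hwdev, &tmpl) != CQM_SUCCESS)
+ *       goto err;
+ *   ...
+ *   cqm_service_unregister(hwdev, CQM_SERVICE_T_TOE);
+ */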
+
+s32 cqm_fake_vf_num_set(void *ex_handle, u16 fake_vf_num_cfg)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct service_cap *svc_cap = NULL;
+
+ if (!ex_handle)
+ return CQM_FAIL;
+
+ svc_cap = &handle->cfg_mgmt->svc_cap;
+
+ if (fake_vf_num_cfg > svc_cap->fake_vf_num) {
+ cqm_err(handle->dev_hdl, "fake_vf_num_cfg is invlaid, fw fake_vf_num is %u\n",
+ svc_cap->fake_vf_num);
+ return CQM_FAIL;
+ }
+
+ /* fake_vf_num_cfg is valid when func type is CQM_FAKE_FUNC_PARENT */
+ svc_cap->fake_vf_num_cfg = fake_vf_num_cfg;
+
+ return CQM_SUCCESS;
+}
+EXPORT_SYMBOL(cqm_fake_vf_num_set);
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.h
new file mode 100644
index 0000000..8d1e481
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_main.h
@@ -0,0 +1,381 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_MAIN_H
+#define CQM_MAIN_H
+
+#include <linux/pci.h>
+
+#include "hinic3_crm.h"
+#include "cqm_bloomfilter.h"
+#include "hinic3_hwif.h"
+#include "cqm_bat_cla.h"
+
+#define GET_MAX max
+#define GET_MIN min
+#define CQM_DW_SHIFT 2
+#define CQM_QW_SHIFT 3
+#define CQM_BYTE_BIT_SHIFT 3
+#define CQM_NUM_BIT_BYTE 8
+
+#define CHIPIF_SUCCESS 0
+#define CHIPIF_FAIL (-1)
+
+#define CQM_TIMER_ENABLE 1
+#define CQM_TIMER_DISABLE 0
+
+#define CQM_TIMER_NUM_MULTI 2
+
+/* The value must be the same as that of hinic3_service_type in hinic3_crm.h. */
+#define CQM_SERVICE_T_NIC SERVICE_T_NIC
+#define CQM_SERVICE_T_OVS SERVICE_T_OVS
+#define CQM_SERVICE_T_ROCE SERVICE_T_ROCE
+#define CQM_SERVICE_T_TOE SERVICE_T_TOE
+#define CQM_SERVICE_T_IOE SERVICE_T_IOE
+#define CQM_SERVICE_T_FC SERVICE_T_FC
+#define CQM_SERVICE_T_VBS SERVICE_T_VBS
+#define CQM_SERVICE_T_IPSEC SERVICE_T_IPSEC
+#define CQM_SERVICE_T_VIRTIO SERVICE_T_VIRTIO
+#define CQM_SERVICE_T_PPA SERVICE_T_PPA
+#define CQM_SERVICE_T_MAX SERVICE_T_MAX
+
+struct tag_cqm_service {
+ bool valid; /* Whether to enable this service on the function. */
+ bool has_register; /* Registered or Not */
+ u64 hardware_db_paddr;
+ void __iomem *hardware_db_vaddr;
+ u64 dwqe_paddr;
+ void __iomem *dwqe_vaddr;
+ u32 buf_order; /* The size of each buf node is 2^buf_order pages. */
+ struct tag_service_register_template service_template;
+};
+
+struct tag_cqm_fake_cfg {
+ u32 parent_func; /* The parent func_id of the fake vfs. */
+ u32 child_func_start; /* The start func_id of the child fake vfs. */
+ u32 child_func_number; /* The number of the child fake vfs. */
+};
+
+#define CQM_MAX_FACKVF_GROUP 4
+
+struct tag_cqm_func_capability {
+ /* BAT_PTR table(SMLC) */
+ bool ft_enable; /* BAT for flow table enable: support toe/ioe/fc service
+ */
+ bool rdma_enable; /* BAT for rdma enable: support RoCE */
+ /* VAT table(SMIR) */
+ bool ft_pf_enable; /* Same as ft_enable. BAT entry for toe/ioe/fc on pf
+ */
+ bool rdma_pf_enable; /* Same as rdma_enable. BAT entry for rdma on pf */
+
+ /* Whether memory is allocated statically or dynamically when a
+ * specified QPC/SCQC is requested for each service.
+ */
+ bool qpc_alloc_static;
+ bool scqc_alloc_static;
+
+ u8 timer_enable; /* Whether the timer function is enabled */
+ u8 bloomfilter_enable; /* Whether the bloomfilter function is enabled
+ */
+ u32 flow_table_based_conn_number; /* Maximum number of connections for
+ * toe/ioe/fc, which cannot exceed
+ * qpc_number
+ */
+ u32 flow_table_based_conn_cache_number; /* Maximum number of sticky
+ * caches
+ */
+ u32 bloomfilter_length; /* Size of the bloomfilter table, 64-byte
+ * aligned
+ */
+ u32 bloomfilter_addr; /* Start position of the bloomfilter table in the
+ * SMF main cache.
+ */
+ u32 qpc_reserved; /* Reserved bit in bitmap */
+ u32 qpc_reserved_back; /* Reserved back bit in bitmap */
+ u32 mpt_reserved; /* The ROCE/IWARP MPT also has a reserved bit. */
+
+ /* All basic_size must be 2^n-aligned. */
+ u32 hash_number; /* The number of hash buckets. The BAT table size is
+ * aligned to 64 buckets. At least 64 buckets are
+ * required.
+ */
+ u32 hash_basic_size; /* The basic size of a hash bucket is 64B, including
+ * 5 valid entries and one next entry.
+ */
+ u32 qpc_number;
+ u32 fake_vf_qpc_number;
+ u32 qpc_basic_size;
+
+ /* Number of PFs/VFs on the current host, used only for timer resources */
+ u32 pf_num;
+ u32 pf_id_start;
+ u32 vf_num;
+ u32 vf_id_start;
+
+ u8 timer_pf_num;
+ u8 timer_pf_id_start;
+ u16 timer_vf_num;
+ u16 timer_vf_id_start;
+
+ u32 lb_mode;
+ /* Only lower 4bit is valid, indicating which SMFs are enabled.
+ * For example, 0101B indicates that SMF0 and SMF2 are enabled.
+ */
+ u32 smf_pg;
+
+ u32 fake_mode;
+ u32 fake_func_type; /* Whether the current function belongs to the fake
+ * group (parent or child)
+ */
+ u32 fake_cfg_number; /* Number of current configuration groups */
+ struct tag_cqm_fake_cfg fake_cfg[CQM_MAX_FACKVF_GROUP];
+
+ /* Note: for CQM special test */
+ u32 pagesize_reorder;
+ bool xid_alloc_mode;
+ bool gpa_check_enable;
+ u32 scq_reserved;
+ u32 srq_reserved;
+
+ u32 mpt_number;
+ u32 mpt_basic_size;
+ u32 scqc_number;
+ u32 scqc_basic_size;
+ u32 srqc_number;
+ u32 srqc_basic_size;
+
+ u32 gid_number;
+ u32 gid_basic_size;
+ u32 lun_number;
+ u32 lun_basic_size;
+ u32 taskmap_number;
+ u32 taskmap_basic_size;
+ u32 l3i_number;
+ u32 l3i_basic_size;
+ u32 childc_number;
+ u32 childc_basic_size;
+ u32 child_qpc_id_start; /* The FC service child CTX uses global addressing. */
+ u32 childc_number_all_function; /* The chip supports a maximum of 8096
+ * child CTXs.
+ */
+ u32 timer_number;
+ u32 timer_basic_size;
+ u32 xid2cid_number;
+ u32 xid2cid_basic_size;
+ u32 reorder_number;
+ u32 reorder_basic_size;
+};
+
+#define CQM_PF TYPE_PF
+#define CQM_VF TYPE_VF
+#define CQM_PPF TYPE_PPF
+#define CQM_UNKNOWN TYPE_UNKNOWN
+#define CQM_MAX_PF_NUM 32
+
+#define CQM_LB_MODE_NORMAL 0xff
+#define CQM_LB_MODE_0 0
+#define CQM_LB_MODE_1 1
+#define CQM_LB_MODE_2 2
+
+#define CQM_LB_SMF_MAX 4
+
+#define CQM_FPGA_MODE 0
+#define CQM_EMU_MODE 1
+
+#define CQM_FAKE_FUNC_NORMAL 0
+#define CQM_FAKE_FUNC_PARENT 1
+#define CQM_FAKE_FUNC_CHILD 2
+#define CQM_FAKE_FUNC_CHILD_CONFLICT 3 /* The detected function is the
+ * function that is faked.
+ */
+
+#define CQM_FAKE_FUNC_MAX 32
+
+#define CQM_SPU_HOST_ID 4
+
+#define CQM_QPC_ROCE_PER_DRCT 12
+#define CQM_QPC_ROCE_NORMAL 0
+#define CQM_QPC_ROCE_VBS_MODE 2
+
+struct tag_cqm_toe_private_capability {
+ /* TOE srq is different from other services
+ * and does not need to be managed by the CLA table.
+ */
+ u32 toe_srqc_number;
+ u32 toe_srqc_basic_size;
+ u32 toe_srqc_start_id;
+
+ struct tag_cqm_bitmap srqc_bitmap;
+};
+
+struct tag_cqm_secure_mem {
+ u16 func_id;
+ bool need_secure_mem;
+
+ u32 mode;
+ u32 gpa_len0;
+
+ void __iomem *va_base;
+ void __iomem *va_end;
+ u64 pa_base;
+ u32 page_num;
+
+ /* bitmap mgmt */
+ spinlock_t bitmap_lock;
+ unsigned long *bitmap;
+ u32 bits_nr;
+ u32 alloc_cnt;
+ u32 free_cnt;
+};
+
+struct tag_cqm_handle {
+ struct hinic3_hwdev *ex_handle;
+ struct pci_dev *dev;
+ struct hinic3_func_attr func_attribute; /* vf/pf attributes */
+ struct tag_cqm_func_capability func_capability; /* function capability set */
+ struct tag_cqm_service service[CQM_SERVICE_T_MAX]; /* Service-related structure */
+ struct tag_cqm_bat_table bat_table;
+ struct tag_cqm_bloomfilter_table bloomfilter_table;
+ /* fake-vf-related structure */
+ struct tag_cqm_handle *fake_cqm_handle[CQM_FAKE_FUNC_MAX];
+ struct tag_cqm_handle *parent_cqm_handle;
+
+ struct tag_cqm_toe_private_capability toe_own_capability; /* TOE service-related
+ * capability set
+ */
+ struct tag_cqm_secure_mem secure_mem;
+ struct list_head node;
+ char name[VRAM_NAME_APPLY_LEN];
+};
+
+#define CQM_CQN_FROM_CEQE(data) ((data) & 0xfffff)
+#define CQM_XID_FROM_CEQE(data) ((data) & 0xfffff)
+#define CQM_QID_FROM_CEQE(data) (((data) >> 20) & 0x7)
+#define CQM_TYPE_FROM_CEQE(data) (((data) >> 23) & 0x7)
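+/* As implied by the masks above, bits [19:0] of the CEQE data carry the
+ * xid/cqn, bits [22:20] the qid and bits [25:23] the type; e.g. (illustrative)
+ * ceqe_data 0xA12345 decodes to xid 0x12345, qid 2, type 1.
+ */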
+
+#define CQM_HASH_BUCKET_SIZE_64 64
+
+#define CQM_MAX_QPC_NUM 0x100000U
+#define CQM_MAX_SCQC_NUM 0x100000U
+#define CQM_MAX_SRQC_NUM 0x100000U
+#define CQM_MAX_CHILDC_NUM 0x100000U
+
+#define CQM_QPC_SIZE_256 256U
+#define CQM_QPC_SIZE_512 512U
+#define CQM_QPC_SIZE_1024 1024U
+
+#define CQM_SCQC_SIZE_32 32U
+#define CQM_SCQC_SIZE_64 64U
+#define CQM_SCQC_SIZE_128 128U
+
+#define CQM_SRQC_SIZE_32 32
+#define CQM_SRQC_SIZE_64 64
+#define CQM_SRQC_SIZE_128 128
+
+#define CQM_MPT_SIZE_64 64
+
+#define CQM_GID_SIZE_32 32
+
+#define CQM_LUN_SIZE_8 8
+
+#define CQM_L3I_SIZE_8 8
+
+#define CQM_TIMER_SIZE_32 32
+
+#define CQM_XID2CID_SIZE_8 8
+
+#define CQM_REORDER_SIZE_256 256
+
+#define CQM_CHILDC_SIZE_256 256U
+
+#define CQM_XID2CID_VBS_NUM (2 * 1024) /* 2K nvme Q */
+
+#define CQM_VBS_QPC_SIZE 512U
+
+#define CQM_XID2CID_VIRTIO_NUM (16 * 1024) /* 16K virt Q */
+
+#define CQM_GID_RDMA_NUM 128
+
+#define CQM_LUN_FC_NUM 64
+
+#define CQM_TASKMAP_FC_NUM 4
+
+#define CQM_L3I_COMM_NUM 64
+
+#define CQM_CHILDC_ROCE_NUM (8 * 1024)
+#define CQM_CHILDC_OVS_VBS_NUM (8 * 1024)
+
+#define CQM_TIMER_SCALE_NUM (2 * 1024)
+#define CQM_TIMER_ALIGN_WHEEL_NUM 8
+#define CQM_TIMER_ALIGN_SCALE_NUM \
+ (CQM_TIMER_SCALE_NUM * CQM_TIMER_ALIGN_WHEEL_NUM)
+
+#define CQM_QPC_OVS_RSVD (1024 * 1024)
+#define CQM_QPC_ROCE_RSVD 2
+#define CQM_QPC_ROCEAA_SWITCH_QP_NUM 4
+#define CQM_QPC_ROCEAA_RSVD \
+ (4 * 1024 + CQM_QPC_ROCEAA_SWITCH_QP_NUM) /* 4096 Normal QP +
+ * 4 Switch QP
+ */
+
+#define CQM_CQ_ROCEAA_RSVD 64
+#define CQM_SRQ_ROCEAA_RSVD 64
+#define CQM_QPC_ROCE_VBS_RSVD_BACK 204800 /* 200K */
+
+#define CQM_OVS_PAGESIZE_ORDER 9
+#define CQM_OVS_MAX_TIMER_FUNC 48
+
+#define CQM_PPA_PAGESIZE_ORDER 8
+
+#define CQM_FC_PAGESIZE_ORDER 0
+
+#define CQM_QHEAD_ALIGN_ORDER 6
+
+typedef void (*serv_cap_init_cb)(struct tag_cqm_handle *, void *);
+
+struct cqm_srv_cap_init {
+ u32 service_type;
+ serv_cap_init_cb serv_cap_proc;
+};
+
+/* Only for llt test */
+s32 cqm_capability_init(void *ex_handle);
+/* Can be defined as static */
+s32 cqm_mem_init(void *ex_handle);
+void cqm_mem_uninit(void *ex_handle);
+s32 cqm_event_init(void *ex_handle);
+void cqm_event_uninit(void *ex_handle);
+void cqm_scq_callback(void *ex_handle, u32 ceqe_data);
+void cqm_ecq_callback(void *ex_handle, u32 ceqe_data);
+void cqm_nocq_callback(void *ex_handle, u32 ceqe_data);
+u8 cqm_aeq_callback(void *ex_handle, u8 event, u8 *data);
+s32 cqm_get_fake_func_type(struct tag_cqm_handle *cqm_handle);
+s32 cqm_get_child_func_start(struct tag_cqm_handle *cqm_handle);
+s32 cqm_get_child_func_number(struct tag_cqm_handle *cqm_handle);
+
+s32 cqm_init(void *ex_handle);
+void cqm_uninit(void *ex_handle);
+s32 cqm_service_register(void *ex_handle, struct tag_service_register_template *service_template);
+void cqm_service_unregister(void *ex_handle, u32 service_type);
+
+s32 cqm_fake_vf_num_set(void *ex_handle, u16 fake_vf_num_cfg);
+#define CQM_LOG_ID 0
+
+#define CQM_PTR_NULL(x) "%s: " #x " is null\n", __func__
+#define CQM_ALLOC_FAIL(x) "%s: " #x " alloc fail\n", __func__
+#define CQM_MAP_FAIL(x) "%s: " #x " map fail\n", __func__
+#define CQM_FUNCTION_FAIL(x) "%s: " #x " return failure\n", __func__
+#define CQM_WRONG_VALUE(x) "%s: " #x " %u is wrong\n", __func__, (u32)(x)
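+/* These message helpers expand to a format string plus its arguments and are
+ * passed directly as the message of the cqm_err()/cqm_info() wrappers below.
+ */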
+
+#define cqm_err(dev, format, ...) dev_err(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_warn(dev, format, ...) dev_warn(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_notice(dev, format, ...) \
+ dev_notice(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_info(dev, format, ...) dev_info(dev, "[CQM]" format, ##__VA_ARGS__)
+#ifdef __CQM_DEBUG__
+#define cqm_dbg(format, ...) pr_info("[CQM]" format, ##__VA_ARGS__)
+#else
+#define cqm_dbg(format, ...)
+#endif
+
+#endif /* CQM_MAIN_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.c
new file mode 100644
index 0000000..258c10a
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.c
@@ -0,0 +1,682 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_hwif.h"
+#include "hinic3_hw_cfg.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_bloomfilter.h"
+#include "cqm_db.h"
+#include "cqm_main.h"
+#include "vram_common.h"
+#include "vmsec_mpu_common.h"
+#include "cqm_memsec.h"
+
+#define SECURE_VA_TO_IDX(va, base) (((va) - (base)) / PAGE_SIZE)
+#define PCI_PROC_NAME_LEN 32
+#define U8_BIT 8
+#define MEM_SEC_PROC_DIR "driver/memsec"
+#define BITS_TO_MB(bits) ((bits) * PAGE_SIZE / 1024 / 1024)
+#define MEM_SEC_UNNECESSARY 1
+#define MEMSEC_TMP_LEN 32
+#define STD_INPUT_ONE_PARA 1
+#define STD_INPUT_TWO_PARA 2
+#define MR_KEY_2_INDEX_SHIFT 8
+#define IS_ADDR_IN_MEMSEC(va, len, start, end) \
+ ((va) >= (start) && (va) + (len) < (end))
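+/* IS_ADDR_IN_MEMSEC checks that the object [va, va + len) starts inside the
+ * secure window and ends strictly before 'end'.
+ */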
+
+static int memsec_proc_show(struct seq_file *seq, void *offset);
+static int memsec_proc_open(struct inode *inode, struct file *file);
+static int memsec_proc_release(struct inode *inode, struct file *file);
+static void memsec_info_print(struct seq_file *seq, struct tag_cqm_secure_mem *secure_mem);
+static int hinic3_secure_mem_proc_ent_init(void *hwdev);
+static void hinic3_secure_mem_proc_ent_deinit(void);
+static int hinic3_secure_mem_proc_node_remove(void *hwdev);
+static int hinic3_secure_mem_proc_node_add(void *hwdev);
+static ssize_t memsec_proc_write(struct file *file, const char __user *data, size_t len,
+ loff_t *pff);
+
+static struct proc_dir_entry *g_hinic3_memsec_proc_ent; /* proc dir */
+static atomic_t g_memsec_proc_refcnt = ATOMIC_INIT(0);
+
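+/* Procfs moved its handler table from struct file_operations to struct
+ * proc_ops on newer kernels, hence the version-dependent definitions below.
+ */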
+#if KERNEL_VERSION(5, 10, 0) > LINUX_VERSION_CODE
+static const struct file_operations memsec_proc_fops = {
+ .open = memsec_proc_open,
+ .read = seq_read,
+ .write = memsec_proc_write,
+ .release = memsec_proc_release,
+};
+#else
+static const struct proc_ops memsec_proc_fops = {
+ .proc_open = memsec_proc_open,
+ .proc_read = seq_read,
+ .proc_write = memsec_proc_write,
+ .proc_release = memsec_proc_release,
+};
+#endif
+
+bool cqm_need_secure_mem(void *hwdev)
+{
+ struct tag_cqm_secure_mem *info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)hwdev;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (!cqm_handle)
+ return false;
+ info = &cqm_handle->secure_mem;
+ return ((info->need_secure_mem) && hinic3_is_guest_vmsec_enable(hwdev));
+}
+EXPORT_SYMBOL(cqm_need_secure_mem);
+
+static int memsec_proc_open(struct inode *inode, struct file *file)
+{
+ struct hinic3_hwdev *handle = PDE_DATA(inode);
+ int ret;
+
+ if (!try_module_get(THIS_MODULE))
+ return -ENODEV;
+
+ ret = single_open(file, memsec_proc_show, handle);
+ if (ret)
+ module_put(THIS_MODULE);
+
+ return ret;
+}
+
+static int memsec_proc_release(struct inode *inode, struct file *file)
+{
+ module_put(THIS_MODULE);
+ return single_release(inode, file);
+}
+
+static void memsec_info_print(struct seq_file *seq, struct tag_cqm_secure_mem *secure_mem)
+{
+ int i, j;
+
+ seq_printf(seq, "Secure MemPageSize: %lu\n", PAGE_SIZE);
+ seq_printf(seq, "Secure MemTotal: %u pages\n", secure_mem->bits_nr);
+ seq_printf(seq, "Secure MemTotal: %lu MB\n", BITS_TO_MB(secure_mem->bits_nr));
+ seq_printf(seq, "Secure MemUsed: %d pages\n",
+ bitmap_weight(secure_mem->bitmap, secure_mem->bits_nr));
+ seq_printf(seq, "Secure MemAvailable: %d pages\n",
+ secure_mem->bits_nr - bitmap_weight(secure_mem->bitmap, secure_mem->bits_nr));
+ seq_printf(seq, "Secure MemFirstAvailableIdx: %lu\n",
+ find_first_zero_bit(secure_mem->bitmap, secure_mem->bits_nr));
+ seq_printf(seq, "Secure MemVirtualAddrStart: 0x%p\n", secure_mem->va_base);
+ seq_printf(seq, "Secure MemVirtualAddrEnd: 0x%p\n", secure_mem->va_end);
+ seq_printf(seq, "Secure MemPhysicalAddrStart: 0x%llx\n", secure_mem->pa_base);
+ seq_printf(seq, "Secure MemPhysicalAddrEnd: 0x%llx\n",
+ secure_mem->pa_base + secure_mem->gpa_len0);
+ seq_printf(seq, "Secure MemAllocCnt: %d\n", secure_mem->alloc_cnt);
+ seq_printf(seq, "Secure MemFreeCnt: %d\n", secure_mem->free_cnt);
+ seq_printf(seq, "Secure MemProcRefCnt: %d\n", atomic_read(&g_memsec_proc_refcnt));
+ seq_puts(seq, "Secure MemBitmap:");
+
+ for (i = 0, j = 0; i < (secure_mem->bits_nr / U8_BIT); i++) {
+ if (i % U8_BIT == 0) {
+ seq_printf(seq, "\n [%05d-%05d]: ", j, j + (U8_BIT * U8_BIT) - 0x1);
+ j += U8_BIT * U8_BIT;
+ }
+ seq_printf(seq, "0x%x ", *(u8 *)((u8 *)secure_mem->bitmap + i));
+ }
+
+ seq_puts(seq, "\nSecure MemBitmap info end\n");
+}
+
+static struct tag_cqm_secure_mem *memsec_proc_get_secure_mem(struct hinic3_hwdev *handle)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_secure_mem *info = NULL;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (!cqm_handle) {
+ cqm_err(handle->dev_hdl, "[memsec]cqm not inited yet\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ info = &cqm_handle->secure_mem;
+ if (!info || !info->bitmap) {
+ cqm_err(handle->dev_hdl, "[memsec]secure mem not inited yet\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ return info;
+}
+
+static int memsec_proc_show(struct seq_file *seq, void *offset)
+{
+ struct hinic3_hwdev *handle = seq->private;
+ struct tag_cqm_secure_mem *info = NULL;
+
+ info = memsec_proc_get_secure_mem(handle);
+ if (IS_ERR(info))
+ return -EINVAL;
+
+ memsec_info_print(seq, info);
+
+ return 0;
+}
+
+static int test_read_secure_mem(struct hinic3_hwdev *handle, char *data, size_t len)
+{
+ u64 mem_ptr;
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_secure_mem *info = &cqm_handle->secure_mem;
+
+ if (sscanf(data, "r %llx", &mem_ptr) != STD_INPUT_ONE_PARA) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] read info format unknown!\n");
+ return -EINVAL;
+ }
+
+ if (mem_ptr < (u64)(info->va_base) || mem_ptr >= (u64)(info->va_end)) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] addr 0x%llx invalid!\n", mem_ptr);
+ return -EINVAL;
+ }
+
+ cqm_info(handle->dev_hdl, "[memsec_dfx] read addr 0x%llx val 0x%llx\n",
+ mem_ptr, *(u64 *)mem_ptr);
+ return 0;
+}
+
+static int test_write_secure_mem(struct hinic3_hwdev *handle, char *data, size_t len)
+{
+ u64 mem_ptr;
+ u64 val;
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_secure_mem *info = &cqm_handle->secure_mem;
+
+ if (sscanf(data, "w %llx %llx", &mem_ptr, &val) != STD_INPUT_TWO_PARA) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] read info format unknown!\n");
+ return -EINVAL;
+ }
+
+ if (mem_ptr < (u64)(info->va_base) || mem_ptr >= (u64)(info->va_end)) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] addr 0x%llx invalid!\n", mem_ptr);
+ return -EINVAL;
+ }
+
+ *(u64 *)mem_ptr = val;
+
+ cqm_info(handle->dev_hdl, "[memsec_dfx] write addr 0x%llx val 0x%llx now val 0x%llx\n",
+ mem_ptr, val, *(u64 *)mem_ptr);
+ return 0;
+}
+
+static void test_query_usage(struct hinic3_hwdev *handle)
+{
+ cqm_info(handle->dev_hdl, "\t[memsec_dfx]Usage: q <query_type> <index>\n");
+ cqm_info(handle->dev_hdl, "\t[memsec_dfx]Check whether roce context is in secure memory\n");
+ cqm_info(handle->dev_hdl, "\t[memsec_dfx]Options:\n");
+ cqm_info(handle->dev_hdl, "\t[memsec_dfx]query_type: qpc, mpt, srqc, scqc\n");
+ cqm_info(handle->dev_hdl, "\t[memsec_dfx]index: valid index.e.g. 0x3\n");
+}
+
+static int test_query_parse_type(struct hinic3_hwdev *handle, char *data,
+ enum cqm_object_type *type, u32 *index)
+{
+ char query_type[MEMSEC_TMP_LEN] = {'\0'};
+
+ if (sscanf(data, "q %s %x", query_type, index) != STD_INPUT_TWO_PARA) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] parse query cmd fail!\n");
+ return -1;
+ }
+ query_type[MEMSEC_TMP_LEN - 1] = '\0';
+
+ if (*index <= 0) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] query index 0x%x is invalid\n", *index);
+ return -1;
+ }
+
+ if (strcmp(query_type, "qpc") == 0) {
+ *type = CQM_OBJECT_SERVICE_CTX;
+ } else if (strcmp(query_type, "mpt") == 0) {
+ *type = CQM_OBJECT_MPT;
+ *index = (*index >> MR_KEY_2_INDEX_SHIFT) & 0xFFFFFF;
+ } else if (strcmp(query_type, "srqc") == 0) {
+ *type = CQM_OBJECT_RDMA_SRQ;
+ } else if (strcmp(query_type, "scqc") == 0) {
+ *type = CQM_OBJECT_RDMA_SCQ;
+ } else {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] query type is invalid\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int test_query_context(struct hinic3_hwdev *handle, char *data, size_t len)
+{
+ int ret;
+ u32 index = 0;
+ bool in_secmem = false;
+ struct tag_cqm_object *cqm_obj = NULL;
+ struct tag_cqm_qpc_mpt *qpc_mpt = NULL;
+ struct tag_cqm_queue *cqm_queue = NULL;
+ struct tag_cqm_secure_mem *info = NULL;
+ enum cqm_object_type query_type;
+
+ ret = test_query_parse_type(handle, data, &query_type, &index);
+ if (ret < 0) {
+ test_query_usage(handle);
+ return -EINVAL;
+ }
+
+ info = memsec_proc_get_secure_mem(handle);
+ if (IS_ERR(info))
+ return -EINVAL;
+
+ cqm_obj = cqm_object_get((void *)handle, query_type, index, false);
+ if (!cqm_obj) {
+ cqm_err(handle->dev_hdl, "[memsec_dfx] get cmq obj fail!\n");
+ return -EINVAL;
+ }
+
+ switch (query_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ case CQM_OBJECT_MPT:
+ qpc_mpt = (struct tag_cqm_qpc_mpt *)cqm_obj;
+ in_secmem = IS_ADDR_IN_MEMSEC(qpc_mpt->vaddr,
+ cqm_obj->object_size,
+ (u8 *)info->va_base,
+ (u8 *)info->va_end);
+ cqm_info(handle->dev_hdl,
+ "[memsec_dfx]Query %s:0x%x, va=%p %sin secure mem\n",
+ query_type == CQM_OBJECT_MPT ? "MPT, mpt_index" : "QPC, qpn",
+ index, qpc_mpt->vaddr, in_secmem ? "" : "not ");
+ break;
+ case CQM_OBJECT_RDMA_SRQ:
+ case CQM_OBJECT_RDMA_SCQ:
+ cqm_queue = (struct tag_cqm_queue *)cqm_obj;
+ in_secmem = IS_ADDR_IN_MEMSEC(cqm_queue->q_ctx_vaddr,
+ cqm_obj->object_size,
+ (u8 *)info->va_base,
+ (u8 *)info->va_end);
+ cqm_info(handle->dev_hdl,
+ "[memsec_dfx]Query %s:0x%x, va=%p %sin secure mem\n",
+ query_type == CQM_OBJECT_RDMA_SRQ ? "SRQC, srqn " : "SCQC, scqn",
+ index, cqm_queue->q_ctx_vaddr, in_secmem ? "" : "not ");
+ break;
+ default:
+ cqm_err(handle->dev_hdl, "[memsec_dfx] not support query type!\n");
+ break;
+ }
+
+ cqm_object_put(cqm_obj);
+ return 0;
+}
+
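+/* Debug commands accepted through the proc node (illustrative summary of the
+ * parsers above):
+ *   "r <hex addr>"                      - dump the u64 at a secure-mem address
+ *   "w <hex addr> <hex val>"            - write a u64 to a secure-mem address
+ *   "q <qpc|mpt|srqc|scqc> <hex index>" - report whether that context lives
+ *                                         in secure memory
+ */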
+static ssize_t memsec_proc_write(struct file *file, const char __user *data,
+ size_t len, loff_t *off)
+{
+ int ret = -EINVAL;
+ struct hinic3_hwdev *handle = PDE_DATA(file->f_inode);
+ char tmp[MEMSEC_TMP_LEN] = {0};
+
+ if (!handle)
+ return -EIO;
+
+ if (len >= MEMSEC_TMP_LEN)
+ return -EFBIG;
+
+ if (copy_from_user(tmp, data, len))
+ return -EIO;
+
+ switch (tmp[0]) {
+ case 'r':
+ ret = test_read_secure_mem(handle, tmp, len);
+ break;
+ case 'w':
+ ret = test_write_secure_mem(handle, tmp, len);
+ break;
+ case 'q':
+ ret = test_query_context(handle, tmp, len);
+ break;
+ default:
+ cqm_err(handle->dev_hdl, "[memsec_dfx] not support cmd!\n");
+ }
+
+ return (ret == 0) ? len : ret;
+}
+
+static int hinic3_secure_mem_proc_ent_init(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (g_hinic3_memsec_proc_ent)
+ return 0;
+
+ g_hinic3_memsec_proc_ent = proc_mkdir(MEM_SEC_PROC_DIR, NULL);
+ if (!g_hinic3_memsec_proc_ent) {
+ /* try again */
+ remove_proc_entry(MEM_SEC_PROC_DIR, NULL);
+ g_hinic3_memsec_proc_ent = proc_mkdir(MEM_SEC_PROC_DIR, NULL);
+ if (!g_hinic3_memsec_proc_ent) {
+ cqm_err(dev->dev_hdl, "[memsec]create secure mem proc fail!\n");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static void hinic3_secure_mem_proc_ent_deinit(void)
+{
+ if (g_hinic3_memsec_proc_ent && !atomic_read(&g_memsec_proc_refcnt)) {
+ remove_proc_entry(MEM_SEC_PROC_DIR, NULL);
+ g_hinic3_memsec_proc_ent = NULL;
+ }
+}
+
+static int hinic3_secure_mem_proc_node_remove(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct pci_dev *pdev = dev->pcidev_hdl;
+ char pci_name[PCI_PROC_NAME_LEN] = {0};
+
+ if (!g_hinic3_memsec_proc_ent) {
+ sdk_info(dev->dev_hdl, "[memsec]proc_ent_null!\n");
+ return 0;
+ }
+
+ atomic_dec(&g_memsec_proc_refcnt);
+
+ snprintf(pci_name, PCI_PROC_NAME_LEN,
+ "%02x:%02x:%x", pdev->bus->number, pdev->slot->number,
+ PCI_FUNC(pdev->devfn));
+
+ remove_proc_entry(pci_name, g_hinic3_memsec_proc_ent);
+
+ return 0;
+}
+
+static int hinic3_secure_mem_proc_node_add(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct pci_dev *pdev = dev->pcidev_hdl;
+ struct proc_dir_entry *res = NULL;
+ char pci_name[PCI_PROC_NAME_LEN] = {0};
+
+ if (!g_hinic3_memsec_proc_ent) {
+ cqm_err(dev->dev_hdl, "[memsec]proc_ent_null!\n");
+ return -EINVAL;
+ }
+
+ atomic_inc(&g_memsec_proc_refcnt);
+
+ snprintf(pci_name, PCI_PROC_NAME_LEN,
+ "%02x:%02x:%x", pdev->bus->number, pdev->slot->number,
+ PCI_FUNC(pdev->devfn));
+ /* 0400 Read by owner */
+ res = proc_create_data(pci_name, 0400, g_hinic3_memsec_proc_ent, &memsec_proc_fops,
+ hwdev);
+ if (!res) {
+ cqm_err(dev->dev_hdl, "[memsec]proc_create_data fail!\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+void hinic3_memsec_proc_init(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int ret;
+
+ ret = hinic3_secure_mem_proc_ent_init(hwdev);
+ if (ret != 0) {
+ cqm_err(dev->dev_hdl, "[memsec]proc ent init fail!\n");
+ return;
+ }
+
+ ret = hinic3_secure_mem_proc_node_add(hwdev);
+ if (ret != 0) {
+ cqm_err(dev->dev_hdl, "[memsec]proc node add fail!\n");
+ return;
+ }
+}
+
+void hinic3_memsec_proc_deinit(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int ret;
+
+ if (!cqm_need_secure_mem(hwdev))
+ return;
+
+ ret = hinic3_secure_mem_proc_node_remove(hwdev);
+ if (ret != 0) {
+ cqm_err(dev->dev_hdl, "[memsec]proc node remove fail!\n");
+ return;
+ }
+
+ hinic3_secure_mem_proc_ent_deinit();
+}
+
+static int cqm_get_secure_mem_cfg(void *dev, struct tag_cqm_secure_mem *info)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+ struct vmsec_cfg_ctx_gpa_entry_cmd mem_info;
+ u16 out_size = sizeof(struct vmsec_cfg_ctx_gpa_entry_cmd);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&mem_info, 0, sizeof(mem_info));
+ mem_info.entry.func_id = info->func_id;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_VMSEC, VMSEC_MPU_CMD_CTX_GPA_SHOW,
+ &mem_info, sizeof(mem_info), &mem_info,
+ &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err || !out_size || mem_info.head.status) {
+ cqm_err(hwdev->dev_hdl, "failed to get memsec info, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, mem_info.head.status, out_size);
+ return -EINVAL;
+ }
+
+ info->gpa_len0 = mem_info.entry.gpa_len0;
+ info->mode = mem_info.entry.mode;
+ info->pa_base = (u64)((((u64)mem_info.entry.gpa_addr0_hi) << CQM_INT_ADDR_SHIFT) |
+ mem_info.entry.gpa_addr0_lo);
+
+ return 0;
+}
+
+static int cqm_secure_mem_param_check(void *ex_handle, struct tag_cqm_secure_mem *info)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (!info->pa_base || !info->gpa_len0)
+ goto no_need_secure_mem;
+
+ if (!IS_ALIGNED(info->pa_base, CQM_SECURE_MEM_ALIGNED_SIZE) ||
+ !IS_ALIGNED(info->gpa_len0, CQM_SECURE_MEM_ALIGNED_SIZE)) {
+ cqm_err(handle->dev_hdl, "func_id %u secure mem not 2M aligned\n",
+ info->func_id);
+ return -EINVAL;
+ }
+
+ if (info->mode == VM_GPA_INFO_MODE_NMIG)
+ goto no_need_secure_mem;
+
+ return 0;
+
+no_need_secure_mem:
+ cqm_info(handle->dev_hdl, "func_id %u no need secure mem gpa 0x%llx len0 0x%x mode 0x%x\n",
+ info->func_id, info->pa_base, info->gpa_len0, info->mode);
+ info->need_secure_mem = false;
+ return MEM_SEC_UNNECESSARY;
+}
+
+int cqm_secure_mem_init(void *ex_handle)
+{
+ int err;
+ struct tag_cqm_secure_mem *info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (!handle)
+ return -EINVAL;
+
+ /* only a VF in a VM needs secure memory */
+ if (!hinic3_is_guest_vmsec_enable(ex_handle)) {
+ cqm_info(handle->dev_hdl, "no need secure mem\n");
+ return 0;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ info = &cqm_handle->secure_mem;
+ info->func_id = hinic3_global_func_id(ex_handle);
+
+ /* get GPA info from the MPU */
+ err = cqm_get_secure_mem_cfg(ex_handle, info);
+ if (err) {
+ cqm_err(handle->dev_hdl, "func_id %u get secure mem failed, ret %d\n",
+ info->func_id, err);
+ return err;
+ }
+
+ /* validate and remap the GPA range */
+ err = cqm_secure_mem_param_check(ex_handle, info);
+ if (err) {
+ cqm_info(handle->dev_hdl, "func_id %u cqm_secure_mem_param_check failed\n",
+ info->func_id);
+ return (err == MEM_SEC_UNNECESSARY) ? 0 : err;
+ }
+
+ info->va_base = ioremap(info->pa_base, info->gpa_len0);
+ if (!info->va_base) {
+ cqm_err(handle->dev_hdl, "func_id %u ioremap secure mem failed\n",
+ info->func_id);
+ return -ENOMEM;
+ }
+ info->va_end = info->va_base + info->gpa_len0;
+ info->page_num = info->gpa_len0 / PAGE_SIZE;
+ info->need_secure_mem = true;
+ info->bits_nr = info->page_num;
+ info->bitmap = bitmap_zalloc(info->bits_nr, GFP_KERNEL);
+ if (!info->bitmap) {
+ cqm_err(handle->dev_hdl, "func_id %u bitmap_zalloc failed\n",
+ info->func_id);
+ iounmap(info->va_base);
+ return -ENOMEM;
+ }
+
+ hinic3_memsec_proc_init(ex_handle);
+ return err;
+}
+
+int cqm_secure_mem_deinit(void *ex_handle)
+{
+ struct tag_cqm_secure_mem *info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+
+ if (!handle)
+ return -EINVAL;
+
+ /* only a VF in a VM needs secure memory */
+ if (!cqm_need_secure_mem(ex_handle))
+ return 0;
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ info = &cqm_handle->secure_mem;
+
+ if (info && info->va_base)
+ iounmap(info->va_base);
+
+ if (info && info->bitmap)
+ bitmap_free(info->bitmap);
+
+ hinic3_memsec_proc_deinit(ex_handle);
+ return 0;
+}
+
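+/* cqm_get_secure_mem_pages() hands out 2^order contiguous pages from the
+ * remapped secure window and marks them in the bitmap; callers are expected to
+ * release them with cqm_free_secure_mem_pages() using the same order
+ * (illustrative pairing, not enforced here):
+ *
+ *   va = cqm_get_secure_mem_pages(handle, order, &pa);
+ *   ...
+ *   cqm_free_secure_mem_pages(handle, va, order);
+ */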
+void *cqm_get_secure_mem_pages(struct hinic3_hwdev *handle, u32 order, dma_addr_t *pa_base)
+{
+ struct tag_cqm_secure_mem *info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ unsigned int nr;
+ unsigned long *bitmap = NULL;
+ unsigned long index;
+ unsigned long flags;
+
+ if (!handle || !(handle->cqm_hdl)) {
+ pr_err("[memsec]%s null pointer\n", __func__);
+ return NULL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ info = &cqm_handle->secure_mem;
+ bitmap = info->bitmap;
+ nr = 1 << order;
+
+ if (!bitmap) {
+ cqm_err(handle->dev_hdl, "[memsec] %s bitmap null\n", __func__);
+ return NULL;
+ }
+
+ spin_lock_irqsave(&info->bitmap_lock, flags);
+
+ index = (order) ? bitmap_find_next_zero_area(bitmap, info->bits_nr, 0, nr, 0) :
+ find_first_zero_bit(bitmap, info->bits_nr);
+ if (index >= info->bits_nr) {
+ spin_unlock_irqrestore(&info->bitmap_lock, flags);
+ cqm_err(handle->dev_hdl,
+ "can not find continuous memory, size %d pages, weight %d\n",
+ nr, bitmap_weight(bitmap, info->bits_nr));
+ return NULL;
+ }
+
+ bitmap_set(bitmap, index, nr);
+ info->alloc_cnt++;
+ spin_unlock_irqrestore(&info->bitmap_lock, flags);
+
+ *pa_base = info->pa_base + index * PAGE_SIZE;
+ return (void *)(info->va_base + index * PAGE_SIZE);
+}
+
+void cqm_free_secure_mem_pages(struct hinic3_hwdev *handle, void *va, u32 order)
+{
+ struct tag_cqm_secure_mem *info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ unsigned int nr;
+ unsigned long *bitmap = NULL;
+ unsigned long index;
+ unsigned long flags;
+
+ if (!handle || !(handle->cqm_hdl)) {
+ pr_err("%s null pointer\n", __func__);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ info = &cqm_handle->secure_mem;
+ bitmap = info->bitmap;
+ nr = 1UL << order;
+
+ if (!bitmap) {
+ cqm_err(handle->dev_hdl, "%s bitmap null\n", __func__);
+ return;
+ }
+
+ if (va < info->va_base || (va > (info->va_end - PAGE_SIZE)) ||
+ !PAGE_ALIGNED((va - info->va_base))) {
+ cqm_err(handle->dev_hdl, "%s va wrong value\n", __func__);
+ return;
+ }
+
+ index = SECURE_VA_TO_IDX(va, info->va_base);
+ spin_lock_irqsave(&info->bitmap_lock, flags);
+ bitmap_clear(bitmap, index, nr);
+ info->free_cnt++;
+ spin_unlock_irqrestore(&info->bitmap_lock, flags);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.h
new file mode 100644
index 0000000..7d4a422
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_memsec.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) Huawei Technologies Co., Ltd. 2023-2023. All rights reserved. */
+#ifndef CQM_MEMSEC_H
+#define CQM_MEMSEC_H
+
+#include <linux/pci.h>
+#include "hinic3_hwdev.h"
+#include "hinic3_crm.h"
+#include "cqm_define.h"
+
+#define CQM_GET_MEMSEC_CTX_GPA 19
+#define CQM_INT_ADDR_SHIFT 32
+#define CQM_SECURE_MEM_ALIGNED_SIZE (2 * 1024 * 1024)
+
+bool cqm_need_secure_mem(void *hwdev);
+void *cqm_get_secure_mem_pages(struct hinic3_hwdev *handle, u32 order, dma_addr_t *pa_base);
+void cqm_free_secure_mem_pages(struct hinic3_hwdev *handle, void *va, u32 order);
+int cqm_secure_mem_init(void *ex_handle);
+int cqm_secure_mem_deinit(void *ex_handle);
+void hinic3_memsec_proc_init(void *hwdev);
+void hinic3_memsec_proc_deinit(void *hwdev);
+
+#endif /* CQM_MEMSEC_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.c
new file mode 100644
index 0000000..86359c0
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.c
@@ -0,0 +1,1493 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/mm.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_hwdev.h"
+
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_object_intern.h"
+#include "cqm_main.h"
+#include "cqm_object.h"
+
+/**
+ * cqm_object_qpc_mpt_create - create QPC/MPT
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type, must be MPT or CTX.
+ * @object_size: object size, in bytes
+ * @object_priv: private structure of the service layer, it can be NULL.
+ * @index: reserved qpn to apply for; pass CQM_INDEX_INVALID to have one
+ * allocated automatically
+ * @low2bit_align_en: low 2-bit alignment enable flag
+ */
+struct tag_cqm_qpc_mpt *cqm_object_qpc_mpt_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv, u32 index,
+ bool low2bit_align_en)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_qpc_mpt_info *qpc_mpt_info = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ s32 ret = CQM_FAIL;
+ u32 relative_index;
+ u32 fake_func_id;
+ u32 index_num = index;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (service_type >= CQM_SERVICE_T_MAX || !cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ if (object_type != CQM_OBJECT_SERVICE_CTX && object_type != CQM_OBJECT_MPT) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ /* fake vf adaptation: switch to the corresponding VF. */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_PARENT) {
+ fake_func_id = index_num / cqm_handle->func_capability.fake_vf_qpc_number;
+ relative_index = index_num % cqm_handle->func_capability.fake_vf_qpc_number;
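+ /* e.g. with fake_vf_qpc_number = 1024 (illustrative value), index 2050
+ * maps to child function 2 with relative index 2
+ */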
+
+ if ((s32)fake_func_id >= cqm_get_child_func_number(cqm_handle)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(fake_func_id));
+ return NULL;
+ }
+
+ index_num = relative_index;
+ cqm_handle = cqm_handle->fake_cqm_handle[fake_func_id];
+ }
+
+ qpc_mpt_info = kmalloc(sizeof(*qpc_mpt_info), GFP_ATOMIC | __GFP_ZERO);
+ if (!qpc_mpt_info)
+ return NULL;
+
+ qpc_mpt_info->common.object.service_type = service_type;
+ qpc_mpt_info->common.object.object_type = object_type;
+ qpc_mpt_info->common.object.object_size = object_size;
+ atomic_set(&qpc_mpt_info->common.object.refcount, 1);
+ init_completion(&qpc_mpt_info->common.object.free);
+ qpc_mpt_info->common.object.cqm_handle = cqm_handle;
+ qpc_mpt_info->common.xid = index_num;
+
+ qpc_mpt_info->common.priv = object_priv;
+
+ ret = cqm_qpc_mpt_create(&qpc_mpt_info->common.object, low2bit_align_en);
+ if (ret == CQM_SUCCESS)
+ return &qpc_mpt_info->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_qpc_mpt_create));
+ kfree(qpc_mpt_info);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_qpc_mpt_create);
+
+/**
+ * cqm_object_recv_queue_create - create an RQ when SRQ is used
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type
+ * @init_rq_num: init RQ number
+ * @container_size: CQM queue container size
+ * @wqe_size: CQM WQE size
+ * @object_priv: RQ private data
+ */
+struct tag_cqm_queue *cqm_object_recv_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 init_rq_num, u32 container_size,
+ u32 wqe_size, void *object_priv)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_nonrdma_qinfo *rq_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ s32 ret;
+ u32 i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_rq_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (object_type != CQM_OBJECT_NONRDMA_EMBEDDED_RQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ if (service_type != CQM_SERVICE_T_TOE) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, "Rq create: service_type %u has not registered\n",
+ service_type);
+ return NULL;
+ }
+
+ /* 1. create rq qinfo */
+ rq_qinfo = kmalloc(sizeof(*rq_qinfo), GFP_KERNEL | __GFP_ZERO);
+ if (!rq_qinfo)
+ return NULL;
+
+ /* 2. init rq qinfo */
+ rq_qinfo->container_size = container_size;
+ rq_qinfo->wqe_size = wqe_size;
+ rq_qinfo->wqe_per_buf = container_size / wqe_size - 1;
+
+ rq_qinfo->common.queue_link_mode = CQM_QUEUE_TOE_SRQ_LINK_MODE;
+ rq_qinfo->common.priv = object_priv;
+ rq_qinfo->common.object.cqm_handle = cqm_handle;
+ /* this object_size is used as container num */
+ rq_qinfo->common.object.object_size = init_rq_num;
+ rq_qinfo->common.object.service_type = service_type;
+ rq_qinfo->common.object.object_type = object_type;
+ atomic_set(&rq_qinfo->common.object.refcount, 1);
+ init_completion(&rq_qinfo->common.object.free);
+
+ /* 3. create queue header */
+ rq_qinfo->common.q_header_vaddr =
+ cqm_kmalloc_align(sizeof(struct tag_cqm_queue_header),
+ GFP_KERNEL | __GFP_ZERO, CQM_QHEAD_ALIGN_ORDER);
+ if (!rq_qinfo->common.q_header_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_header_vaddr));
+ goto err1;
+ }
+
+ rq_qinfo->common.q_header_paddr =
+ pci_map_single(cqm_handle->dev, rq_qinfo->common.q_header_vaddr,
+ sizeof(struct tag_cqm_queue_header), PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev,
+ rq_qinfo->common.q_header_paddr) != 0) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_header_vaddr));
+ goto err2;
+ }
+
+ /* 4. create rq */
+ for (i = 0; i < init_rq_num; i++) {
+ ret = cqm_container_create(&rq_qinfo->common.object, NULL,
+ true);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_container_create));
+ goto err3;
+ }
+ if (!rq_qinfo->common.head_container)
+ rq_qinfo->common.head_container =
+ rq_qinfo->common.tail_container;
+ }
+
+ return &rq_qinfo->common;
+
+err3:
+ cqm_container_free(rq_qinfo->common.head_container, NULL,
+ &rq_qinfo->common);
+err2:
+ cqm_kfree_align(rq_qinfo->common.q_header_vaddr);
+ rq_qinfo->common.q_header_vaddr = NULL;
+err1:
+ kfree(rq_qinfo);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_recv_queue_create);
+
+/**
+ * cqm_object_share_recv_queue_add_container - allocate new container for srq
+ * @common: queue structure pointer
+ */
+s32 cqm_object_share_recv_queue_add_container(struct tag_cqm_queue *common)
+{
+ if (unlikely(!common)) {
+ pr_err("[CQM]%s: common is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ return cqm_container_create(&common->object, NULL, true);
+}
+EXPORT_SYMBOL(cqm_object_share_recv_queue_add_container);
+
+s32 cqm_object_srq_add_container_free(struct tag_cqm_queue *common, u8 **container_addr)
+{
+ if (unlikely(!common)) {
+ pr_err("[CQM]%s: common is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ return cqm_container_create(&common->object, container_addr, false);
+}
+EXPORT_SYMBOL(cqm_object_srq_add_container_free);
+
+static bool cqm_object_share_recv_queue_param_check(struct hinic3_hwdev *handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 container_size, u32 wqe_size)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ /* service_type must be CQM_SERVICE_T_TOE */
+ if (service_type != CQM_SERVICE_T_TOE) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return false;
+ }
+
+ /* exception of service registration check */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return false;
+ }
+
+ /* container size must be 2^N-aligned */
+ if (!cqm_check_align(container_size)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(container_size));
+ return false;
+ }
+
+ /* external parameter check: object_type must be
+ * CQM_OBJECT_NONRDMA_SRQ
+ */
+ if (object_type != CQM_OBJECT_NONRDMA_SRQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return false;
+ }
+
+ /* wqe_size, the divisor, cannot be 0 */
+ if (wqe_size == 0) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * cqm_object_share_recv_queue_create - create srq
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type
+ * @container_number: CQM queue container number
+ * @container_size: CQM queue container size
+ * @wqe_size: CQM WQE size
+ */
+struct tag_cqm_queue *cqm_object_share_recv_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 container_number, u32 container_size,
+ u32 wqe_size)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_nonrdma_qinfo *srq_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_srq_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (!cqm_object_share_recv_queue_param_check(handle, service_type, object_type,
+ container_size, wqe_size))
+ return NULL;
+
+ /* 2. create and initialize srq info */
+ srq_qinfo = kmalloc(sizeof(*srq_qinfo), GFP_KERNEL | __GFP_ZERO);
+ if (!srq_qinfo)
+ return NULL;
+
+ srq_qinfo->common.object.cqm_handle = cqm_handle;
+ srq_qinfo->common.object.object_size = container_number;
+ srq_qinfo->common.object.object_type = object_type;
+ srq_qinfo->common.object.service_type = service_type;
+ atomic_set(&srq_qinfo->common.object.refcount, 1);
+ init_completion(&srq_qinfo->common.object.free);
+
+ srq_qinfo->common.queue_link_mode = CQM_QUEUE_TOE_SRQ_LINK_MODE;
+ srq_qinfo->common.priv = NULL;
+ srq_qinfo->wqe_per_buf = container_size / wqe_size - 1;
+ srq_qinfo->wqe_size = wqe_size;
+ srq_qinfo->container_size = container_size;
+ service = &cqm_handle->service[service_type];
+ srq_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+
+ /* 3. create srq and srq ctx */
+ ret = cqm_share_recv_queue_create(&srq_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &srq_qinfo->common;
+
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_share_recv_queue_create));
+ kfree(srq_qinfo);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_share_recv_queue_create);
+
+/**
+ * cqm_object_fc_srq_create - RQ creation temporarily provided for the FC service.
+ * Special requirement: The number of valid WQEs in the queue
+ * must meet the number of transferred WQEs. Linkwqe can only be
+ * filled at the end of the page. The actual valid number exceeds
+ * the requirement. In this case, the service needs to be
+ * informed of the additional number to be created.
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type
+ * @wqe_number: number of valid WQEs
+ * @wqe_size: size of valid WQEs
+ * @object_priv: private structure of the service layer, it can be NULL
+ */
+struct tag_cqm_queue *cqm_object_fc_srq_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ u32 valid_wqe_per_buffer;
+ u32 wqe_sum; /* include linkwqe, normal wqe */
+ u32 buf_size;
+ u32 buf_num;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_fc_srq_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ /* service_type must be fc */
+ if (service_type != CQM_SERVICE_T_FC) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ /* exception of service registration check */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ /* wqe_size cannot exceed PAGE_SIZE and must be 2^n aligned. */
+ if (wqe_size >= PAGE_SIZE || (!cqm_check_align(wqe_size))) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return NULL;
+ }
+
+ /* FC RQ is SRQ. (Different from the SRQ concept of TOE, FC indicates
+ * that packets received by all flows are placed on the same RQ.
+ * The SRQ of TOE is similar to the RQ resource pool.)
+ */
+ if (object_type != CQM_OBJECT_NONRDMA_SRQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ service = &cqm_handle->service[service_type];
+ buf_size = (u32)(PAGE_SIZE << (service->buf_order));
+ /* subtract 1 link wqe */
+ valid_wqe_per_buffer = buf_size / wqe_size - 1;
+ buf_num = wqe_number / valid_wqe_per_buffer;
+ if (wqe_number % valid_wqe_per_buffer != 0)
+ buf_num++;
+
+ /* calculate the total number of WQEs */
+ wqe_sum = buf_num * (valid_wqe_per_buffer + 1);
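+ /* e.g. (illustrative values) buf_size = 16384 and wqe_size = 128 give
+ * valid_wqe_per_buffer = 127, so wqe_number = 1000 yields buf_num = 8
+ * and wqe_sum = 1024, of which 8 are link WQEs
+ */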
+ nonrdma_qinfo = kmalloc(sizeof(*nonrdma_qinfo), GFP_KERNEL | __GFP_ZERO);
+ if (!nonrdma_qinfo)
+ return NULL;
+
+ /* initialize object member */
+ nonrdma_qinfo->common.object.service_type = service_type;
+ nonrdma_qinfo->common.object.object_type = object_type;
+ /* total number of WQEs */
+ nonrdma_qinfo->common.object.object_size = wqe_sum;
+ atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
+ init_completion(&nonrdma_qinfo->common.object.free);
+ nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
+
+ /* Initialize the doorbell used by the current queue.
+ * The default doorbell is the hardware doorbell.
+ */
+ nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+ /* Currently, the connection mode is fixed. In the future,
+ * the service needs to transfer the connection mode.
+ */
+ nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
+
+ /* initialize public members */
+ nonrdma_qinfo->common.priv = object_priv;
+ nonrdma_qinfo->common.valid_wqe_num = wqe_sum - buf_num;
+
+ /* initialize internal private members */
+ nonrdma_qinfo->wqe_size = wqe_size;
+ /* RQ (also called SRQ of FC) created by FC services,
+ * CTX needs to be created.
+ */
+ nonrdma_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+
+ ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &nonrdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fc_queue_create));
+ kfree(nonrdma_qinfo);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_fc_srq_create);
+
+static bool cqm_object_nonrdma_queue_param_check(struct hinic3_hwdev *handle, u32 service_type,
+ enum cqm_object_type object_type, u32 wqe_size)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ /* exception of service registration check */
+ if (service_type >= CQM_SERVICE_T_MAX ||
+ !cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return false;
+ }
+ /* wqe_size cannot exceed PAGE_SIZE, cannot be zero and must be a power
+ * of 2; cqm_check_align() performs these checks.
+ */
+ if (wqe_size >= PAGE_SIZE || (!cqm_check_align(wqe_size))) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return false;
+ }
+
+ /* nonrdma supports: RQ, SQ, SRQ, CQ, SCQ */
+ if (object_type < CQM_OBJECT_NONRDMA_EMBEDDED_RQ ||
+ object_type > CQM_OBJECT_NONRDMA_SCQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * cqm_object_nonrdma_queue_create - create nonrdma queue
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type, can be embedded RQ/SQ/CQ and SRQ/SCQ
+ * @wqe_number: include link wqe
+ * @wqe_size: fixed length, must be power of 2
+ * @object_priv: private structure of the service layer, it can be NULL
+ */
+struct tag_cqm_queue *cqm_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (!cqm_object_nonrdma_queue_param_check(handle, service_type, object_type, wqe_size))
+ return NULL;
+
+ nonrdma_qinfo = kmalloc(sizeof(*nonrdma_qinfo), GFP_KERNEL | __GFP_ZERO);
+ if (!nonrdma_qinfo)
+ return NULL;
+
+ nonrdma_qinfo->common.object.service_type = service_type;
+ nonrdma_qinfo->common.object.object_type = object_type;
+ nonrdma_qinfo->common.object.object_size = wqe_number;
+ atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
+ init_completion(&nonrdma_qinfo->common.object.free);
+ nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
+
+ /* Initialize the doorbell used by the current queue.
+ * The default value is hardware doorbell
+ */
+ nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+ /* Currently, the link mode is hardcoded and needs to be transferred by
+ * the service side.
+ */
+ nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
+
+ nonrdma_qinfo->common.priv = object_priv;
+
+ /* Initialize internal private members */
+ nonrdma_qinfo->wqe_size = wqe_size;
+ service = &cqm_handle->service[service_type];
+ switch (object_type) {
+ case CQM_OBJECT_NONRDMA_SCQ:
+ nonrdma_qinfo->q_ctx_size = service->service_template.scq_ctx_size;
+ break;
+ case CQM_OBJECT_NONRDMA_SRQ:
+ /* Currently, the SRQ of the service is created through a
+ * dedicated interface.
+ */
+ nonrdma_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+ break;
+ default:
+ break;
+ }
+
+ ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &nonrdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_nonrdma_queue_create));
+ kfree(nonrdma_qinfo);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_nonrdma_queue_create);
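+
+/* A minimal usage sketch for the call above (illustrative only; it assumes a
+ * valid hinic3 hwdev pointer "hwdev" and an already registered TOE service,
+ * neither of which is defined in this file, and the sizes are example values):
+ *
+ *	struct tag_cqm_queue *scq;
+ *
+ *	scq = cqm_object_nonrdma_queue_create(hwdev, CQM_SERVICE_T_TOE,
+ *					      CQM_OBJECT_NONRDMA_SCQ,
+ *					      1024, 128, NULL);
+ *	if (!scq)
+ *		return -ENOMEM;
+ *	... post WQEs, ring doorbells ...
+ *	cqm_object_delete(&scq->object);
+ */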
+
+static bool cqm_object_rdma_queue_param_check(struct hinic3_hwdev *handle, u32 service_type,
+ enum cqm_object_type object_type)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+
+ /* service_type must be CQM_SERVICE_T_ROCE */
+ if (service_type != CQM_SERVICE_T_ROCE) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return false;
+ }
+	/* reject unregistered service types */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return false;
+ }
+
+ /* rdma supports: QP, SRQ, SCQ */
+ if (object_type > CQM_OBJECT_RDMA_SCQ || object_type < CQM_OBJECT_RDMA_QP) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * cqm_object_rdma_queue_create - create rdma queue
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type, can be QP and SRQ/SCQ
+ * @object_size: object size
+ * @object_priv: private structure of the service layer, it can be NULL
+ * @room_header_alloc: Whether to apply for queue room and header space
+ * @xid: common index
+ */
+struct tag_cqm_queue *cqm_object_rdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ bool room_header_alloc, u32 xid)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_rdma_qinfo *rdma_qinfo = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_rdma_queue_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ if (!cqm_object_rdma_queue_param_check(handle, service_type, object_type))
+ return NULL;
+
+ rdma_qinfo = kmalloc(sizeof(*rdma_qinfo), GFP_KERNEL | __GFP_ZERO);
+ if (!rdma_qinfo)
+ return NULL;
+
+ rdma_qinfo->common.object.service_type = service_type;
+ rdma_qinfo->common.object.object_type = object_type;
+ rdma_qinfo->common.object.object_size = object_size;
+ atomic_set(&rdma_qinfo->common.object.refcount, 1);
+ init_completion(&rdma_qinfo->common.object.free);
+ rdma_qinfo->common.object.cqm_handle = cqm_handle;
+ rdma_qinfo->common.queue_link_mode = CQM_QUEUE_RDMA_QUEUE_MODE;
+ rdma_qinfo->common.priv = object_priv;
+ rdma_qinfo->common.current_q_room = CQM_RDMA_Q_ROOM_1;
+ rdma_qinfo->room_header_alloc = room_header_alloc;
+ rdma_qinfo->common.index = xid;
+
+ /* Initializes the doorbell used by the current queue.
+ * The default value is hardware doorbell
+ */
+ rdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+
+ service = &cqm_handle->service[service_type];
+ switch (object_type) {
+ case CQM_OBJECT_RDMA_SCQ:
+ rdma_qinfo->q_ctx_size = service->service_template.scq_ctx_size;
+ break;
+ case CQM_OBJECT_RDMA_SRQ:
+ rdma_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+ break;
+ default:
+ break;
+ }
+
+ ret = cqm_rdma_queue_create(&rdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &rdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_rdma_queue_create));
+ kfree(rdma_qinfo);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_rdma_queue_create);
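+
+/* A minimal usage sketch for the call above (illustrative only; "hwdev" is an
+ * assumed hinic3 hwdev pointer, "cqn" an SCQ index owned by the caller, and
+ * the 64 KB buffer size is an arbitrary example value):
+ *
+ *	struct tag_cqm_queue *scq;
+ *
+ *	scq = cqm_object_rdma_queue_create(hwdev, CQM_SERVICE_T_ROCE,
+ *					   CQM_OBJECT_RDMA_SCQ, 64 * 1024,
+ *					   NULL, true, cqn);
+ *	if (!scq)
+ *		return -ENOMEM;
+ */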
+
+/**
+ * cqm_object_rdma_table_get - create mtt and rdmarc of the rdma service
+ * @ex_handle: device pointer that represents the PF
+ * @service_type: CQM service type
+ * @object_type: object type
+ * @index_base: start of index
+ * @index_number: number of created
+ */
+struct tag_cqm_mtt_rdmarc *cqm_object_rdma_table_get(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 index_base, u32 index_number)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_rdma_table *rdma_table = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ s32 ret;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_rdma_table_create_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ /* service_type must be CQM_SERVICE_T_ROCE */
+ if (service_type != CQM_SERVICE_T_ROCE) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+	/* reject unregistered service types */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ if (object_type != CQM_OBJECT_MTT &&
+ object_type != CQM_OBJECT_RDMARC) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ rdma_table = kmalloc(sizeof(*rdma_table), GFP_KERNEL | __GFP_ZERO);
+ if (!rdma_table)
+ return NULL;
+
+ rdma_table->common.object.service_type = service_type;
+ rdma_table->common.object.object_type = object_type;
+ rdma_table->common.object.object_size = (u32)(index_number *
+ sizeof(dma_addr_t));
+ atomic_set(&rdma_table->common.object.refcount, 1);
+ init_completion(&rdma_table->common.object.free);
+ rdma_table->common.object.cqm_handle = cqm_handle;
+ rdma_table->common.index_base = index_base;
+ rdma_table->common.index_number = index_number;
+
+ ret = cqm_rdma_table_create(&rdma_table->common.object);
+ if (ret == CQM_SUCCESS)
+ return &rdma_table->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_rdma_table_create));
+ kfree(rdma_table);
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_rdma_table_get);
+
+static s32 cqm_qpc_mpt_delete_ret(struct tag_cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ case CQM_OBJECT_MPT:
+ cqm_qpc_mpt_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+static s32 cqm_nonrdma_queue_delete_ret(struct tag_cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_NONRDMA_EMBEDDED_RQ:
+ case CQM_OBJECT_NONRDMA_EMBEDDED_SQ:
+ case CQM_OBJECT_NONRDMA_EMBEDDED_CQ:
+ case CQM_OBJECT_NONRDMA_SCQ:
+ cqm_nonrdma_queue_delete(object);
+ return CQM_SUCCESS;
+ case CQM_OBJECT_NONRDMA_SRQ:
+ if (object->service_type == CQM_SERVICE_T_TOE)
+ cqm_share_recv_queue_delete(object);
+ else
+ cqm_nonrdma_queue_delete(object);
+
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+static s32 cqm_rdma_queue_delete_ret(struct tag_cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_RDMA_QP:
+ case CQM_OBJECT_RDMA_SRQ:
+ case CQM_OBJECT_RDMA_SCQ:
+ cqm_rdma_queue_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+static s32 cqm_rdma_table_delete_ret(struct tag_cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_MTT:
+ case CQM_OBJECT_RDMARC:
+ cqm_rdma_table_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+/**
+ * cqm_object_delete - Delete a created object. This function may sleep and waits
+ *                     for all operations on this object to complete.
+ * @object: CQM object
+ */
+void cqm_object_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return;
+ }
+ if (!object->cqm_handle) {
+ pr_err("[CQM]object del: cqm_handle is null, service type %u, refcount %d\n",
+ object->service_type, (int)object->refcount.counter);
+ kfree(object);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+
+ if (!cqm_handle->ex_handle) {
+ pr_err("[CQM]object del: ex_handle is null, service type %u, refcount %d\n",
+ object->service_type, (int)object->refcount.counter);
+ kfree(object);
+ return;
+ }
+
+ handle = cqm_handle->ex_handle;
+
+ if (object->service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->service_type));
+ kfree(object);
+ return;
+ }
+
+ if (cqm_qpc_mpt_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ if (cqm_nonrdma_queue_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ if (cqm_rdma_queue_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ if (cqm_rdma_table_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ kfree(object);
+}
+EXPORT_SYMBOL(cqm_object_delete);
+
+/**
+ * cqm_object_offset_addr - Obtain the PA and VA at the specified offset of the
+ *                          object buffer; only rdma tables (MTT/RDMARC) can be searched.
+ * @object: CQM object
+ * @offset: For a rdma table, the offset is the absolute index number
+ * @paddr: physical address
+ */
+u8 *cqm_object_offset_addr(struct tag_cqm_object *object, u32 offset, dma_addr_t *paddr)
+{
+ u32 object_type = object->object_type;
+
+ /* The data flow path takes performance into consideration and
+ * does not check input parameters.
+ */
+ switch (object_type) {
+ case CQM_OBJECT_MTT:
+ case CQM_OBJECT_RDMARC:
+ return cqm_rdma_table_offset_addr(object, offset, paddr);
+ default:
+ break;
+ }
+
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_object_offset_addr);
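+
+/* Sketch pairing cqm_object_rdma_table_get() with the lookup above
+ * (illustrative only; "hwdev" is an assumed hinic3 hwdev pointer and the
+ * index values are arbitrary; note the offset is an absolute index):
+ *
+ *	struct tag_cqm_mtt_rdmarc *mtt;
+ *	dma_addr_t paddr;
+ *	u8 *vaddr;
+ *
+ *	mtt = cqm_object_rdma_table_get(hwdev, CQM_SERVICE_T_ROCE,
+ *					CQM_OBJECT_MTT, 0x1000, 16);
+ *	if (!mtt)
+ *		return -ENOMEM;
+ *	vaddr = cqm_object_offset_addr(&mtt->object, 0x1000, &paddr);
+ */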
+
+/**
+ * cqm_object_get - Obtain an object based on the index
+ * @ex_handle: device pointer that represents the PF
+ * @object_type: object type
+ * @index: support qpn,mptn,scqn,srqn (n->number)
+ * @bh: barrier or not
+ */
+struct tag_cqm_object *cqm_object_get(void *ex_handle, enum cqm_object_type object_type,
+ u32 index, bool bh)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_object *object = NULL;
+
+ /* The data flow path takes performance into consideration and
+ * does not check input parameters.
+ */
+ switch (object_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ break;
+ case CQM_OBJECT_MPT:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_MPT);
+ break;
+ case CQM_OBJECT_RDMA_SRQ:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SRQC);
+ break;
+ case CQM_OBJECT_RDMA_SCQ:
+ case CQM_OBJECT_NONRDMA_SCQ:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ break;
+ default:
+ return NULL;
+ }
+
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_table_get));
+ return NULL;
+ }
+
+ object_table = &cla_table->obj_table;
+ object = cqm_object_table_get(cqm_handle, object_table, index, bh);
+ return object;
+}
+EXPORT_SYMBOL(cqm_object_get);
+
+/**
+ * cqm_object_put - This function must be called after the cqm_object_get function.
+ * Otherwise, the object cannot be released.
+ * @object: CQM object
+ */
+void cqm_object_put(struct tag_cqm_object *object)
+{
+ /* The data flow path takes performance into consideration and
+ * does not check input parameters.
+ */
+ if (atomic_dec_and_test(&object->refcount) != 0)
+ complete(&object->free);
+}
+EXPORT_SYMBOL(cqm_object_put);
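+
+/* Typical get/put pairing (sketch; "hwdev" and "xid" are assumed to be
+ * provided by the caller and are not defined in this file):
+ *
+ *	struct tag_cqm_object *obj;
+ *
+ *	obj = cqm_object_get(hwdev, CQM_OBJECT_SERVICE_CTX, xid, false);
+ *	if (!obj)
+ *		return -EINVAL;
+ *	... access the QPC referenced by obj ...
+ *	cqm_object_put(obj);
+ */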
+
+/**
+ * cqm_object_funcid - Obtain the ID of the function to which the object belongs
+ * @object: CQM object
+ */
+s32 cqm_object_funcid(struct tag_cqm_object *object)
+{
+ struct tag_cqm_handle *cqm_handle = NULL;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return CQM_FAIL;
+ }
+ if (unlikely(!object->cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+
+ return cqm_handle->func_attribute.func_global_idx;
+}
+EXPORT_SYMBOL(cqm_object_funcid);
+
+/**
+ * cqm_object_resize_alloc_new - Currently this function is only used for RoCE. The CQ buffer
+ *                              is adjusted, but the cqn and cqc remain unchanged.
+ *                              This function allocates the new buffer but does not
+ *                              release the old buffer; the valid buffer is still the old one.
+ * @object: CQM object
+ * @object_size: new buffer size
+ */
+s32 cqm_object_resize_alloc_new(struct tag_cqm_object *object, u32 object_size)
+{
+ struct tag_cqm_rdma_qinfo *qinfo = (struct tag_cqm_rdma_qinfo *)(void *)object;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_service *service = NULL;
+ struct tag_cqm_buf *q_room_buf = NULL;
+ struct hinic3_hwdev *handle = NULL;
+ u32 order, buf_size;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return CQM_FAIL;
+ }
+ handle = cqm_handle->ex_handle;
+
+ /* This interface is used only for the CQ of RoCE service. */
+ if (object->service_type == CQM_SERVICE_T_ROCE &&
+ object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ service = cqm_handle->service + object->service_type;
+ order = service->buf_order;
+ buf_size = (u32)(PAGE_SIZE << order);
+
+ if (qinfo->common.current_q_room == CQM_RDMA_Q_ROOM_1)
+ q_room_buf = &qinfo->common.q_room_buf_2;
+ else
+ q_room_buf = &qinfo->common.q_room_buf_1;
+
+ if (qinfo->room_header_alloc) {
+ q_room_buf->buf_number = ALIGN(object_size, buf_size) /
+ buf_size;
+ q_room_buf->page_number = q_room_buf->buf_number <<
+ order;
+ q_room_buf->buf_size = buf_size;
+ if (cqm_buf_alloc(cqm_handle, q_room_buf, true) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+
+ qinfo->new_object_size = object_size;
+ return CQM_SUCCESS;
+ }
+
+ cqm_err(handle->dev_hdl,
+ CQM_WRONG_VALUE(qinfo->room_header_alloc));
+ return CQM_FAIL;
+ }
+
+ cqm_err(handle->dev_hdl,
+ "Cq resize alloc: service_type %u object_type %u do not support resize\n",
+ object->service_type, object->object_type);
+ return CQM_FAIL;
+}
+EXPORT_SYMBOL(cqm_object_resize_alloc_new);
+
+/**
+ * cqm_object_resize_free_new - Currently this function is only used for RoCE. The CQ buffer
+ *                             is adjusted, but the cqn and cqc remain unchanged. This function
+ *                             frees the new buffer and is used for exception handling.
+ * @object: CQM object
+ */
+void cqm_object_resize_free_new(struct tag_cqm_object *object)
+{
+ struct tag_cqm_rdma_qinfo *qinfo = (struct tag_cqm_rdma_qinfo *)(void *)object;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_buf *q_room_buf = NULL;
+ struct hinic3_hwdev *handle = NULL;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+ handle = cqm_handle->ex_handle;
+
+ /* This interface is used only for the CQ of RoCE service. */
+ if (object->service_type == CQM_SERVICE_T_ROCE &&
+ object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ if (qinfo->common.current_q_room == CQM_RDMA_Q_ROOM_1)
+ q_room_buf = &qinfo->common.q_room_buf_2;
+ else
+ q_room_buf = &qinfo->common.q_room_buf_1;
+
+ qinfo->new_object_size = 0;
+
+ cqm_buf_free(q_room_buf, cqm_handle);
+ } else {
+ cqm_err(handle->dev_hdl,
+ "Cq resize free: service_type %u object_type %u do not support resize\n",
+ object->service_type, object->object_type);
+ }
+}
+EXPORT_SYMBOL(cqm_object_resize_free_new);
+
+/**
+ * cqm_object_resize_free_old - Currently this function is only used for RoCE. The CQ buffer
+ *                             is adjusted, but the cqn and cqc remain unchanged. This function
+ *                             frees the old buffer and switches the valid buffer to the new one.
+ * @object: CQM object
+ */
+void cqm_object_resize_free_old(struct tag_cqm_object *object)
+{
+ struct tag_cqm_rdma_qinfo *qinfo = (struct tag_cqm_rdma_qinfo *)(void *)object;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_buf *q_room_buf = NULL;
+
+ if (unlikely(!object)) {
+ pr_err("[CQM]%s: object is null\n", __func__);
+ return;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+
+ /* This interface is used only for the CQ of RoCE service. */
+ if (object->service_type == CQM_SERVICE_T_ROCE &&
+ object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ if (qinfo->common.current_q_room == CQM_RDMA_Q_ROOM_1) {
+ q_room_buf = &qinfo->common.q_room_buf_1;
+ qinfo->common.current_q_room = CQM_RDMA_Q_ROOM_2;
+ } else {
+ q_room_buf = &qinfo->common.q_room_buf_2;
+ qinfo->common.current_q_room = CQM_RDMA_Q_ROOM_1;
+ }
+
+ object->object_size = qinfo->new_object_size;
+
+ cqm_buf_free(q_room_buf, cqm_handle);
+ }
+}
+EXPORT_SYMBOL(cqm_object_resize_free_old);
+
+/**
+ * cqm_gid_base - Obtain the base virtual address of the gid table for FT debug
+ * @ex_handle: device pointer that represents the PF
+ */
+void *cqm_gid_base(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bat_table *bat_table = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ u32 entry_type, i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ bat_table = &cqm_handle->bat_table;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+ if (entry_type == CQM_BAT_ENTRY_T_GID) {
+ cla_table = &bat_table->entry[i];
+ cla_z_buf = &cla_table->cla_z_buf;
+ if (cla_z_buf->buf_list)
+ return cla_z_buf->buf_list->va;
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * cqm_timer_base - Obtain the base virtual address of the timer for live migration
+ * @ex_handle: device pointer that represents the PF
+ */
+void *cqm_timer_base(void *ex_handle)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bat_table *bat_table = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ u32 entry_type, i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return NULL;
+ }
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return NULL;
+ }
+
+ /* Timer resource is configured on PPF. */
+ if (handle->hwif->attr.func_type != CQM_PPF) {
+ cqm_err(handle->dev_hdl, "%s: wrong function type:%d\n",
+ __func__, handle->hwif->attr.func_type);
+ return NULL;
+ }
+
+ bat_table = &cqm_handle->bat_table;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+ if (entry_type != CQM_BAT_ENTRY_T_TIMER)
+ continue;
+
+ cla_table = &bat_table->entry[i];
+ cla_z_buf = &cla_table->cla_z_buf;
+
+ if (!cla_z_buf->direct.va) {
+ if (cqm_buf_alloc_direct(cqm_handle, cla_z_buf, true) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc_direct));
+ return NULL;
+ }
+ }
+
+ return cla_z_buf->direct.va;
+ }
+
+ return NULL;
+}
+EXPORT_SYMBOL(cqm_timer_base);
+
+static s32 cqm_function_timer_clear_getindex(struct hinic3_hwdev *ex_handle, u32 *buffer_index,
+ u32 function_id, u32 timer_page_num,
+ const struct tag_cqm_buf *cla_z_buf)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(ex_handle->cqm_hdl);
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ u32 index;
+
+	/* Convert the function id to a timer buffer index and ensure that it
+	 * does not exceed the range of the timer buffer.
+	 */
+ if (function_id < (func_cap->timer_pf_id_start + func_cap->timer_pf_num) &&
+ function_id >= func_cap->timer_pf_id_start) {
+ index = function_id - func_cap->timer_pf_id_start;
+ } else if (function_id < (func_cap->timer_vf_id_start + func_cap->timer_vf_num) &&
+ function_id >= func_cap->timer_vf_id_start) {
+ index = (function_id - func_cap->timer_vf_id_start) +
+ func_cap->timer_pf_num;
+ } else {
+ cqm_err(ex_handle->dev_hdl, "Timer clear: wrong function_id=0x%x\n",
+ function_id);
+ return CQM_FAIL;
+ }
+
+ if ((index * timer_page_num + timer_page_num) > cla_z_buf->buf_number) {
+ cqm_err(ex_handle->dev_hdl,
+ "Timer clear: over cla_z_buf_num, buffer_i=0x%x, zbuf_num=0x%x\n",
+ index, cla_z_buf->buf_number);
+ return CQM_FAIL;
+ }
+
+ *buffer_index = index;
+ return CQM_SUCCESS;
+}
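+
+/* Worked example with illustrative capability values (not taken from real
+ * hardware): with timer_pf_id_start = 0, timer_pf_num = 4,
+ * timer_vf_id_start = 16 and function_id = 18, the VF branch applies and
+ * index = (18 - 16) + 4 = 6; the only remaining check is that
+ * 6 * timer_page_num + timer_page_num does not exceed cla_z_buf->buf_number.
+ */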
+
+static void cqm_clear_timer(void *ex_handle, u32 function_id, struct hinic3_hwdev *handle,
+ struct tag_cqm_cla_table *cla_table)
+{
+ u32 timer_buffer_size = CQM_TIMER_ALIGN_SCALE_NUM * CQM_TIMER_SIZE_32;
+ struct tag_cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ u32 timer_page_num, i;
+ u32 buffer_index = 0;
+ s32 ret;
+
+ /* During CQM capability initialization, ensure that the basic size of
+ * the timer buffer page does not exceed 128 x 4 KB. Otherwise,
+ * clearing the timer buffer of the function is complex.
+ */
+ timer_page_num = timer_buffer_size /
+ (PAGE_SIZE << cla_table->trunk_order);
+ if (timer_page_num == 0) {
+ cqm_err(handle->dev_hdl,
+ "Timer clear: fail to clear timer, buffer_size=0x%x, trunk_order=0x%x\n",
+ timer_buffer_size, cla_table->trunk_order);
+ return;
+ }
+
+	/* Convert the function id to a timer buffer index and ensure that it
+	 * does not exceed the range of the timer buffer.
+	 */
+ ret = cqm_function_timer_clear_getindex(ex_handle, &buffer_index,
+ function_id, timer_page_num,
+ cla_z_buf);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_function_timer_clear_getindex));
+ return;
+ }
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_1 ||
+ cla_table->cla_lvl == CQM_CLA_LVL_2) {
+ for (i = buffer_index * timer_page_num;
+ i < (buffer_index * timer_page_num + timer_page_num); i++)
+ memset((u8 *)(cla_z_buf->buf_list[i].va), 0,
+ (PAGE_SIZE << cla_table->trunk_order));
+ } else {
+ cqm_err(handle->dev_hdl, "Timer clear: timer cla lvl: %u, cla_z_buf_num=0x%x\n",
+ cla_table->cla_lvl, cla_z_buf->buf_number);
+ cqm_err(handle->dev_hdl,
+ "Timer clear: buf_i=0x%x, buf_size=0x%x, page_num=0x%x, order=0x%x\n",
+ buffer_index, timer_buffer_size, timer_page_num,
+ cla_table->trunk_order);
+ }
+}
+
+/**
+ * cqm_function_timer_clear - Clear the timer buffer based on the function ID.
+ * The function ID starts from 0 and the timer buffer is arranged
+ * in sequence by function ID.
+ * @ex_handle: device pointer that represents the PF
+ * @function_id: the function id of CQM timer
+ */
+void cqm_function_timer_clear(void *ex_handle, u32 function_id)
+{
+ /* The timer buffer of one function is 32B*8wheel*2048spoke=128*4k */
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ int loop, i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_func_timer_clear_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2) {
+ cla_table = &cqm_handle->bat_table.timer_entry[0];
+ loop = CQM_LB_SMF_MAX;
+ } else {
+ cla_table = cqm_cla_table_get(&cqm_handle->bat_table, CQM_BAT_ENTRY_T_TIMER);
+ loop = 1;
+ }
+
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cla_table is null\n", __func__);
+ return;
+ }
+ for (i = 0; i < loop; i++) {
+ cqm_clear_timer(ex_handle, function_id, handle, cla_table);
+ cla_table++;
+ }
+}
+EXPORT_SYMBOL(cqm_function_timer_clear);
+
+/**
+ * cqm_function_hash_buf_clear - clear hash buffer based on global function_id
+ * @ex_handle: device pointer that represents the PF
+ * @global_funcid: the function id of clear hash buf
+ */
+void cqm_function_hash_buf_clear(void *ex_handle, s32 global_funcid)
+{
+ struct hinic3_hwdev *handle = (struct hinic3_hwdev *)ex_handle;
+ struct tag_cqm_func_capability *func_cap = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_handle *cqm_handle = NULL;
+ struct tag_cqm_buf *cla_z_buf = NULL;
+ s32 fake_funcid;
+ u32 i;
+
+ if (unlikely(!ex_handle)) {
+ pr_err("[CQM]%s: ex_handle is null\n", __func__);
+ return;
+ }
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_func_hash_buf_clear_cnt);
+
+ cqm_handle = (struct tag_cqm_handle *)(handle->cqm_hdl);
+ if (unlikely(!cqm_handle)) {
+ pr_err("[CQM]%s: cqm_handle is null\n", __func__);
+ return;
+ }
+ func_cap = &cqm_handle->func_capability;
+
+ /* fake vf adaption, switch to corresponding VF. */
+ if (func_cap->fake_func_type == CQM_FAKE_FUNC_PARENT) {
+ fake_funcid = global_funcid -
+ (s32)(func_cap->fake_cfg[0].child_func_start);
+ cqm_info(handle->dev_hdl, "fake_funcid =%d\n", fake_funcid);
+ if (fake_funcid < 0 || fake_funcid >= CQM_FAKE_FUNC_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(fake_funcid));
+ return;
+ }
+
+ cqm_handle = cqm_handle->fake_cqm_handle[fake_funcid];
+ }
+
+ cla_table = cqm_cla_table_get(&cqm_handle->bat_table,
+ CQM_BAT_ENTRY_T_HASH);
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cla_table is null\n", __func__);
+ return;
+ }
+ cla_z_buf = &cla_table->cla_z_buf;
+
+ for (i = 0; i < cla_z_buf->buf_number; i++)
+ memset(cla_z_buf->buf_list[i].va, 0, cla_z_buf->buf_size);
+}
+EXPORT_SYMBOL(cqm_function_hash_buf_clear);
+
+void cqm_srq_used_rq_container_delete(struct tag_cqm_object *object, u8 *container)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ u32 link_wqe_offset = qinfo->wqe_per_buf * qinfo->wqe_size;
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(common->object.cqm_handle);
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_srq_linkwqe *srq_link_wqe = NULL;
+ dma_addr_t addr;
+
+ /* 1. Obtain the current container pa through link wqe table,
+ * unmap pa
+ */
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(container + link_wqe_offset);
+ /* shift right by 2 bits to get the length of dw(4B) */
+ cqm_swab32((u8 *)(srq_link_wqe), sizeof(struct tag_cqm_linkwqe) >> 2);
+
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_gpa_h,
+ srq_link_wqe->current_buffer_gpa_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Rq container del: buffer physical addr is null\n");
+ return;
+ }
+ pci_unmap_single(cqm_handle->dev, addr, qinfo->container_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ /* 2. Obtain the current container va through link wqe table, free va */
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_addr_h,
+ srq_link_wqe->current_buffer_addr_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Rq container del: buffer virtual addr is null\n");
+ return;
+ }
+ kfree((void *)addr);
+}
+EXPORT_SYMBOL(cqm_srq_used_rq_container_delete);
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.h
new file mode 100644
index 0000000..ba61828
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object.h
@@ -0,0 +1,714 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_OBJECT_H
+#define CQM_OBJECT_H
+
+#include "cqm_define.h"
+#include "vram_common.h"
+
+#define CQM_LINKWQE_128B 128
+#define CQM_MOD_TOE HINIC3_MOD_TOE
+#define CQM_MOD_CQM HINIC3_MOD_CQM
+
+#ifdef __cplusplus
+#if __cplusplus
+extern "C" {
+#endif
+#endif /* __cplusplus */
+
+#ifndef HIUDK_SDK
+
+#define CQM_SUCCESS 0
+#define CQM_FAIL (-1)
+/* Ignore the return value and continue */
+#define CQM_CONTINUE 1
+
+/* type of WQE is LINK WQE */
+#define CQM_WQE_WF_LINK 1
+/* type of WQE is common WQE */
+#define CQM_WQE_WF_NORMAL 0
+
+/* chain queue mode */
+#define CQM_QUEUE_LINK_MODE 0
+/* RING queue mode */
+#define CQM_QUEUE_RING_MODE 1
+/* SRQ queue mode */
+#define CQM_QUEUE_TOE_SRQ_LINK_MODE 2
+/* RDMA queue mode */
+#define CQM_QUEUE_RDMA_QUEUE_MODE 3
+
+/* generic linkwqe structure */
+struct tag_cqm_linkwqe {
+ u32 rsv1 : 14; /* <reserved field */
+ u32 wf : 1; /* <wf */
+ u32 rsv2 : 14; /* <reserved field */
+ u32 ctrlsl : 2; /* <ctrlsl */
+ u32 o : 1; /* <o bit */
+
+ u32 rsv3 : 31; /* <reserved field */
+ u32 lp : 1; /* The lp field determines whether the o-bit
+ * meaning is reversed.
+ */
+
+ u32 next_page_gpa_h; /* <record the upper 32b physical address of the
+ * next page for the chip
+ */
+ u32 next_page_gpa_l; /* <record the lower 32b physical address of the
+ * next page for the chip
+ */
+
+ u32 next_buffer_addr_h; /* <record the upper 32b virtual address of the
+ * next page for the driver
+ */
+ u32 next_buffer_addr_l; /* <record the lower 32b virtual address of the
+ * next page for the driver
+ */
+};
+
+/* SRQ linkwqe structure. The wqe size must not exceed the common RQE size. */
+struct tag_cqm_srq_linkwqe {
+ struct tag_cqm_linkwqe linkwqe; /* <generic linkwqe structure */
+ u32 current_buffer_gpa_h; /* <Record the upper 32b physical address of
+ * the current page, which is used when the
+ * driver releases the container and cancels
+ * the mapping.
+ */
+ u32 current_buffer_gpa_l; /* <Record the lower 32b physical address of
+ * the current page, which is used when the
+ * driver releases the container and cancels
+ * the mapping.
+ */
+ u32 current_buffer_addr_h; /* <Record the upper 32b of the virtual
+ * address of the current page, which is used
+ * when the driver releases the container.
+ */
+ u32 current_buffer_addr_l; /* <Record the lower 32b of the virtual
+ * address of the current page, which is used
+ * when the driver releases the container.
+ */
+
+ u32 fast_link_page_addr_h; /* <Record the upper 32b of the virtual
+ * address of the fastlink page where the
+ * container address is recorded. It is used
+ * when the driver releases the fastlink.
+ */
+ u32 fast_link_page_addr_l; /* <Record the lower 32b virtual address of
+ * the fastlink page where the container
+ * address is recorded. It is used when the
+ * driver releases the fastlink.
+ */
+
+	u32 fixed_next_buffer_addr_h; /* <Record the upper 32b virtual address
+				       * of the next container, which is used to
+				       * release driver resources and must not
+				       * be modified by the driver.
+				       */
+	u32 fixed_next_buffer_addr_l; /* <Record the lower 32b virtual address
+				       * of the next container, which is used to
+				       * release driver resources and must not
+				       * be modified by the driver.
+				       */
+};
+
+/* first 64B of standard 128B WQE */
+union tag_cqm_linkwqe_first64B {
+ struct tag_cqm_linkwqe basic_linkwqe; /* <generic linkwqe structure */
+ struct tag_cqm_srq_linkwqe toe_srq_linkwqe; /* <SRQ linkwqe structure */
+ u32 value[16]; /* <reserved field */
+};
+
+/* second 64B of standard 128B WQE */
+struct tag_cqm_linkwqe_second64B {
+ u32 rsvd0[4]; /* <first 16B reserved field */
+ u32 rsvd1[4]; /* <second 16B reserved field */
+ union {
+ struct {
+ u32 rsvd0[3];
+ u32 rsvd1 : 29;
+ u32 toe_o : 1; /* <o bit of toe */
+ u32 resvd2 : 2;
+ } bs;
+ u32 value[4];
+ } third_16B; /* <third 16B */
+
+ union {
+ struct {
+ u32 rsvd0[2];
+ u32 rsvd1 : 31;
+ u32 ifoe_o : 1; /* <o bit of ifoe */
+ u32 rsvd2;
+ } bs;
+ u32 value[4];
+ } forth_16B; /* <fourth 16B */
+};
+
+/* standard 128B WQE structure */
+struct tag_cqm_linkwqe_128B {
+ union tag_cqm_linkwqe_first64B first64B; /* <first 64B of standard 128B WQE */
+ struct tag_cqm_linkwqe_second64B second64B; /* <back 64B of standard 128B WQE */
+};
+
+/* AEQ type definition */
+enum cqm_aeq_event_type {
+ CQM_AEQ_BASE_T_NIC = 0, /* <NIC consists of 16 events:0~15 */
+ CQM_AEQ_BASE_T_ROCE = 16, /* <ROCE consists of 32 events:16~47 */
+ CQM_AEQ_BASE_T_FC = 48, /* <FC consists of 8 events:48~55 */
+ CQM_AEQ_BASE_T_IOE = 56, /* <IOE consists of 8 events:56~63 */
+ CQM_AEQ_BASE_T_TOE = 64, /* <TOE consists of 16 events:64~95 */
+ CQM_AEQ_BASE_T_VBS = 96, /* <VBS consists of 16 events:96~111 */
+	CQM_AEQ_BASE_T_IPSEC = 112, /* <IPSEC consists of 16 events:112~127 */
+ CQM_AEQ_BASE_T_MAX = 128 /* <maximum of 128 events can be defined */
+};
+
+/* service registration template */
+struct tag_service_register_template {
+ u32 service_type; /* <service type */
+ u32 srq_ctx_size; /* <SRQ context size */
+ u32 scq_ctx_size; /* <SCQ context size */
+ void *service_handle; /* <pointer to the service driver when the
+ * ceq/aeq function is called back
+ */
+ /* <ceq callback:shared cq */
+ void (*shared_cq_ceq_callback)(void *service_handle, u32 cqn,
+ void *cq_priv);
+ /* <ceq callback:embedded cq */
+ void (*embedded_cq_ceq_callback)(void *service_handle, u32 xid,
+ void *qpc_priv);
+ /* <ceq callback:no cq */
+ void (*no_cq_ceq_callback)(void *service_handle, u32 xid, u32 qid,
+ void *qpc_priv);
+ /* <aeq level callback */
+ u8 (*aeq_level_callback)(void *service_handle, u8 event_type, u8 *val);
+ /* <aeq callback */
+ void (*aeq_callback)(void *service_handle, u8 event_type, u8 *val);
+};
+
+/* object operation type definition */
+enum cqm_object_type {
+ CQM_OBJECT_ROOT_CTX = 0, /* <0:root context, which is compatible with
+ * root CTX management
+ */
+ CQM_OBJECT_SERVICE_CTX, /* <1:QPC, connection management object */
+ CQM_OBJECT_MPT, /* <2:RDMA service usage */
+
+ CQM_OBJECT_NONRDMA_EMBEDDED_RQ = 10, /* <10:RQ of non-RDMA services,
+ * managed by LINKWQE
+ */
+ CQM_OBJECT_NONRDMA_EMBEDDED_SQ, /* <11:SQ of non-RDMA services,
+ * managed by LINKWQE
+ */
+ CQM_OBJECT_NONRDMA_SRQ, /* <12:SRQ of non-RDMA services,
+ * managed by MTT, but the CQM
+ * needs to apply for MTT.
+ */
+ CQM_OBJECT_NONRDMA_EMBEDDED_CQ, /* <13:Embedded CQ for non-RDMA
+ * services, managed by LINKWQE
+ */
+ CQM_OBJECT_NONRDMA_SCQ, /* <14:SCQ of non-RDMA services,
+ * managed by LINKWQE
+ */
+
+ CQM_OBJECT_RESV = 20,
+
+ CQM_OBJECT_RDMA_QP = 30, /* <30:QP of RDMA services, managed by MTT */
+ CQM_OBJECT_RDMA_SRQ, /* <31:SRQ of RDMA services, managed by MTT */
+ CQM_OBJECT_RDMA_SCQ, /* <32:SCQ of RDMA services, managed by MTT */
+
+ CQM_OBJECT_MTT = 50, /* <50:MTT table of the RDMA service */
+ CQM_OBJECT_RDMARC, /* <51:RC of the RDMA service */
+};
+
+/* return value of the failure to apply for the BITMAP table */
+#define CQM_INDEX_INVALID (~(0U))
+/* Return value of the reserved bit applied for in the BITMAP table,
+ * indicating that the index is allocated by the CQM and
+ * cannot be specified by the driver.
+ */
+#define CQM_INDEX_RESERVED 0xfffff
+
+/* to support ROCE Q buffer resize, the first Q buffer space */
+#define CQM_RDMA_Q_ROOM_1 1
+/* to support the Q buffer resize of ROCE, the second Q buffer space */
+#define CQM_RDMA_Q_ROOM_2 2
+
+/* doorbell mode selected by the current Q, hardware doorbell */
+#define CQM_HARDWARE_DOORBELL 1
+/* doorbell mode selected by the current Q, software doorbell */
+#define CQM_SOFTWARE_DOORBELL 2
+
+/* single-node structure of the CQM buffer */
+struct tag_cqm_buf_list {
+ void *va; /* <virtual address */
+ dma_addr_t pa; /* <physical address */
+ u32 refcount; /* <reference counting of the buf,
+ * which is used for internal buf management.
+ */
+};
+
+/* common management structure of the CQM buffer */
+struct tag_cqm_buf {
+ struct tag_cqm_buf_list *buf_list; /* <buffer list */
+ struct tag_cqm_buf_list direct; /* <map the discrete buffer list to a group
+ * of consecutive addresses
+ */
+ u32 page_number; /* <buf_number in quantity of page_number=2^n */
+ u32 buf_number; /* <number of buf_list nodes */
+ u32 buf_size; /* <PAGE_SIZE in quantity of buf_size=2^n */
+ struct vram_buf_info buf_info;
+ u32 bat_entry_type;
+};
+
+/* CQM object structure, which can be considered
+ * as the base class abstracted from all queues/CTX.
+ */
+struct tag_cqm_object {
+ u32 service_type; /* <service type */
+ u32 object_type; /* <object type, such as context, queue, mpt,
+ * and mtt, etc
+ */
+ u32 object_size; /* <object Size, for queue/CTX/MPT,
+ * the unit is Byte, for MTT/RDMARC,
+ * the unit is the number of entries,
+ * for containers, the unit is the number of
+ * containers.
+ */
+ atomic_t refcount; /* <reference counting */
+ struct completion free; /* <release completed quantity */
+ void *cqm_handle; /* <cqm_handle */
+};
+
+/* structure of the QPC and MPT objects of the CQM */
+struct tag_cqm_qpc_mpt {
+ struct tag_cqm_object object; /* <object base class */
+ u32 xid; /* <xid */
+ dma_addr_t paddr; /* <physical address of the QPC/MTT memory */
+ void *priv; /* <private information about the object of
+ * the service driver.
+ */
+ u8 *vaddr; /* <virtual address of the QPC/MTT memory */
+};
+
+/* queue header structure */
+struct tag_cqm_queue_header {
+ u64 doorbell_record; /* <SQ/RQ DB content */
+ u64 ci_record; /* <CQ DB content */
+ u64 rsv1; /* <This area is a user-defined area for driver
+ * and microcode information transfer.
+ */
+ u64 rsv2; /* <This area is a user-defined area for driver
+ * and microcode information transfer.
+ */
+};
+
+/* queue management structure: for queues of non-RDMA services, embedded queues
+ * are managed by LinkWQE, SRQ and SCQ are managed by MTT, but MTT needs to be
+ * applied by CQM; the queue of the RDMA service is managed by the MTT.
+ */
+struct tag_cqm_queue {
+ struct tag_cqm_object object; /* <object base class */
+ u32 index; /* <The embedded queue and QP do not have
+ * indexes, but the SRQ and SCQ do.
+ */
+ void *priv; /* <private information about the object of
+ * the service driver
+ */
+ u32 current_q_doorbell; /* <doorbell type selected by the current
+ * queue. HW/SW are used for the roce QP.
+ */
+ u32 current_q_room; /* <roce:current valid room buf */
+ struct tag_cqm_buf q_room_buf_1; /* <nonrdma:only q_room_buf_1 can be set to
+ * q_room_buf
+ */
+ struct tag_cqm_buf q_room_buf_2; /* <The CQ of RDMA reallocates the size of
+ * the queue room.
+ */
+ struct tag_cqm_queue_header *q_header_vaddr; /* <queue header virtual address */
+ dma_addr_t q_header_paddr; /* <physical address of the queue header */
+ u8 *q_ctx_vaddr; /* <CTX virtual addresses of SRQ and SCQ */
+ dma_addr_t q_ctx_paddr; /* <CTX physical addresses of SRQ and SCQ */
+ u32 valid_wqe_num; /* <number of valid WQEs that are
+ * successfully created
+ */
+ u8 *tail_container; /* <tail pointer of the SRQ container */
+ u8 *head_container; /* <head pointer of SRQ container */
+ u8 queue_link_mode; /* <Determine the connection mode during
+ * queue creation, such as link and ring.
+ */
+};
+
+/* MTT/RDMARC management structure */
+struct tag_cqm_mtt_rdmarc {
+ struct tag_cqm_object object; /* <object base class */
+ u32 index_base; /* <index_base */
+ u32 index_number; /* <index_number */
+ u8 *vaddr; /* <buffer virtual address */
+};
+
+/* sending command structure */
+struct tag_cqm_cmd_buf {
+ void *buf; /* <command buffer virtual address */
+ dma_addr_t dma; /* <physical address of the command buffer */
+ u16 size; /* <command buffer size */
+};
+
+/* definition of sending ACK mode */
+enum cqm_cmd_ack_type {
+ CQM_CMD_ACK_TYPE_CMDQ = 0, /* <ack is written back to cmdq */
+ CQM_CMD_ACK_TYPE_SHARE_CQN = 1, /* <ack is reported through the SCQ of
+ * the root CTX.
+ */
+ CQM_CMD_ACK_TYPE_APP_CQN = 2 /* <ack is reported through the SCQ of
+ * service
+ */
+};
+
+#endif
+/**
+ * @brief: create FC SRQ.
+ * @details: The number of valid WQEs in the queue must satisfy the number of
+ *           WQEs requested. Because a linkwqe can only be placed at the end of
+ *           a page, the actual number of valid WQEs may exceed the request,
+ *           and the service needs to be informed of how many extra WQEs were
+ *           created.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param wqe_number: number of WQEs
+ * @param wqe_size: wqe size
+ * @param object_priv: pointer to object private information
+ * @retval struct tag_cqm_queue*: queue structure pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_queue *cqm_object_fc_srq_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+
+/**
+ * @brief: create RQ.
+ * @details: When SRQ is used, the RQ queue is created.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param init_rq_num: number of containers
+ * @param container_size: container size
+ * @param wqe_size: wqe size
+ * @param object_priv: pointer to object private information
+ * @retval struct tag_cqm_queue*: queue structure pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_queue *cqm_object_recv_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 init_rq_num, u32 container_size,
+ u32 wqe_size, void *object_priv);
+
+/**
+ * @brief: SRQ applies for a new container and is linked after the container
+ * is created.
+ * @details: SRQ applies for a new container and is linked after the container
+ * is created.
+ * @param common: queue structure pointer
+ * @retval 0: success
+ * @retval -1: fail
+ * @date: 2019-5-4
+ */
+s32 cqm_object_share_recv_queue_add_container(struct tag_cqm_queue *common);
+
+/**
+ * @brief: SRQ applies for a new container. After the container is created,
+ * no link is attached to the container. The service is attached to
+ * the container.
+ * @details: SRQ applies for a new container. After the container is created,
+ * no link is attached to the container. The service is attached to
+ * the container.
+ * @param common: queue structure pointer
+ * @param container_addr: returned container address
+ * @retval 0: success
+ * @retval -1: fail
+ * @date: 2019-5-4
+ */
+s32 cqm_object_srq_add_container_free(struct tag_cqm_queue *common, u8 **container_addr);
+
+/**
+ * @brief: create SRQ for TOE services.
+ * @details: create SRQ for TOE services.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param container_number: number of containers
+ * @param container_size: container size
+ * @param wqe_size: wqe size
+ * @retval struct tag_cqm_queue*: queue structure pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_queue *cqm_object_share_recv_queue_create(void *ex_handle,
+ u32 service_type,
+ enum cqm_object_type object_type,
+ u32 container_number,
+ u32 container_size,
+ u32 wqe_size);
+
+/**
+ * @brief: create QPC and MPT.
+ * @details: When QPC and MPT are created, the interface sleeps.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param object_size: object size, in bytes.
+ * @param object_priv: private structure of the service layer.
+ * The value can be NULL.
+ * @param index: apply for reserved qpn based on the value. If automatic
+ * allocation is required, fill CQM_INDEX_INVALID.
+ * @retval struct tag_cqm_qpc_mpt *: pointer to the QPC/MPT structure
+ * @date: 2019-5-4
+ */
+struct tag_cqm_qpc_mpt *cqm_object_qpc_mpt_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ u32 index, bool low2bit_align_en);
+
+/**
+ * @brief: create a queue for non-RDMA services.
+ * @details: create a queue for non-RDMA services. The interface sleeps.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param wqe_number: number of Link WQEs
+ * @param wqe_size: fixed length, size 2^n
+ * @param object_priv: private structure of the service layer.
+ * The value can be NULL.
+ * @retval struct tag_cqm_queue *: queue structure pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_queue *cqm_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+
+/**
+ * @brief: create a RDMA service queue.
+ * @details: create a queue for the RDMA service. The interface sleeps.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param object_size: object size
+ * @param object_priv: private structure of the service layer.
+ * The value can be NULL.
+ * @param room_header_alloc: whether to apply for the queue room and header
+ * space
+ * @retval struct tag_cqm_queue *: queue structure pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_queue *cqm_object_rdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ bool room_header_alloc, u32 xid);
+
+/**
+ * @brief: create the MTT and RDMARC of the RDMA service.
+ * @details: create the MTT and RDMARC of the RDMA service.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: service type
+ * @param object_type: object type
+ * @param index_base: start index number
+ * @param index_number: index number
+ * @retval struct tag_cqm_mtt_rdmarc *: pointer to the MTT/RDMARC structure
+ * @date: 2019-5-4
+ */
+struct tag_cqm_mtt_rdmarc *cqm_object_rdma_table_get(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 index_base, u32 index_number);
+
+/**
+ * @brief: delete created objects.
+ * @details: delete the created object. This function does not return until all
+ * operations on the object are complete.
+ * @param object: object pointer
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_object_delete(struct tag_cqm_object *object);
+
+/**
+ * @brief: obtains the physical address and virtual address at the specified
+ * offset of the object buffer.
+ * @details: Only RDMA table query is supported to obtain the physical address
+ * and virtual address at the specified offset of the object buffer.
+ * @param object: object pointer
+ * @param offset: for a rdma table, offset is the absolute index number.
+ * @param paddr: The physical address is returned only for the rdma table.
+ * @retval u8 *: virtual address of the buffer at the specified offset
+ * @date: 2019-5-4
+ */
+u8 *cqm_object_offset_addr(struct tag_cqm_object *object, u32 offset, dma_addr_t *paddr);
+
+/**
+ * @brief: obtain object according index.
+ * @details: obtain object according index.
+ * @param ex_handle: device pointer that represents the PF
+ * @param object_type: object type
+ * @param index: support qpn,mptn,scqn,srqn
+ * @param bh: whether to disable the bottom half of the interrupt
+ * @retval struct tag_cqm_object *: object pointer
+ * @date: 2019-5-4
+ */
+struct tag_cqm_object *cqm_object_get(void *ex_handle, enum cqm_object_type object_type,
+ u32 index, bool bh);
+
+/**
+ * @brief: object reference counting release
+ * @details: After the function cqm_object_get is invoked, this API must be put.
+ * Otherwise, the object cannot be released.
+ * @param object: object pointer
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_object_put(struct tag_cqm_object *object);
+
+/**
+ * @brief: obtain the ID of the function where the object resides.
+ * @details: obtain the ID of the function where the object resides.
+ * @param object: object pointer
+ * @retval >=0: ID of function
+ * @retval -1: fail
+ * @date: 2020-4-15
+ */
+s32 cqm_object_funcid(struct tag_cqm_object *object);
+
+/**
+ * @brief: apply for a new space for an object.
+ * @details: Currently, this parameter is valid only for the ROCE service.
+ * The CQ buffer size is adjusted, but the CQN and CQC remain
+ * unchanged. New buffer space is applied for, and the old buffer
+ * space is not released. The current valid buffer is still the old
+ * buffer.
+ * @param object: object pointer
+ * @param object_size: new buffer size
+ * @retval 0: success
+ * @retval -1: fail
+ * @date: 2019-5-4
+ */
+s32 cqm_object_resize_alloc_new(struct tag_cqm_object *object, u32 object_size);
+
+/**
+ * @brief: release the newly applied buffer space for the object.
+ * @details: This function is used to release the newly applied buffer space for
+ * service exception handling.
+ * @param object: object pointer
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_object_resize_free_new(struct tag_cqm_object *object);
+
+/**
+ * @brief: release old buffer space for objects.
+ * @details: This function releases the old buffer and sets the current valid
+ * buffer to the new buffer.
+ * @param object: object pointer
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_object_resize_free_old(struct tag_cqm_object *object);
+
+/**
+ * @brief: release container.
+ * @details: release container.
+ * @param object: object pointer
+ * @param container: container pointer to be released
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_srq_used_rq_container_delete(struct tag_cqm_object *object, u8 *container);
+
+void *cqm_get_db_addr(void *ex_handle, u32 service_type);
+
+s32 cqm_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count,
+ u8 pagenum, u64 db);
+
+/**
+ * @brief: provide the interface for knocking on the doorbell;
+ *         the CQM converts the pri to cos.
+ * @details: provide the doorbell interface in which the CQM converts the pri
+ *           to cos. The doorbell content passed by the service must be in host
+ *           byte order; this interface converts it to network byte order.
+ * @param ex_handle: device pointer that represents the PF
+ * @param service_type: Each kernel-mode service is allocated a hardware
+ * doorbell page.
+ * @param db_count: PI[7:0] beyond 64b in the doorbell
+ * @param db: The doorbell content is organized by the service. If there is
+ * endian conversion, the service needs to complete the conversion.
+ * @retval 0: success
+ * @retval -1: fail
+ * @date: 2019-5-4
+ */
+s32 cqm_ring_hardware_db_update_pri(void *ex_handle, u32 service_type,
+ u8 db_count, u64 db);
+
+/**
+ * @brief: knock on software doorbell.
+ * @details: knock on software doorbell.
+ * @param object: object pointer
+ * @param db_record: software doorbell content. If there is big-endian
+ * conversion, the service needs to complete the conversion.
+ * @retval 0: success
+ * @retval -1: fail
+ * @date: 2019-5-4
+ */
+s32 cqm_ring_software_db(struct tag_cqm_object *object, u64 db_record);
+
+/**
+ * @brief: obtain the base virtual address of the gid table.
+ * @details: obtain the base virtual address of the gid table, used for
+ *           FT debug.
+ * @param ex_handle: device pointer that represents the PF
+ * @retval void *: base virtual address of the gid table
+ * @retval NULL: fail
+ * @date: 2019-5-4
+ */
+void *cqm_gid_base(void *ex_handle);
+
+/**
+ * @brief: obtain the base virtual address of the timer.
+ * @details: obtain the base virtual address of the timer.
+ * @param ex_handle: device pointer that represents the PF
+ * @retval void *: base virtual address of the timer
+ * @date: 2020-5-21
+ */
+void *cqm_timer_base(void *ex_handle);
+
+/**
+ * @brief: clear timer buffer.
+ * @details: clear the timer buffer based on the function ID. Function IDs start
+ * from 0, and timer buffers are arranged by function ID.
+ * @param ex_handle: device pointer that represents the PF
+ * @param function_id: function id
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_function_timer_clear(void *ex_handle, u32 function_id);
+
+/**
+ * @brief: clear hash buffer.
+ * @details: clear the hash buffer based on the function ID.
+ * @param ex_handle: device pointer that represents the PF
+ * @param global_funcid: global function id whose hash buffer is to be cleared
+ * @retval: void
+ * @date: 2019-5-4
+ */
+void cqm_function_hash_buf_clear(void *ex_handle, s32 global_funcid);
+
+s32 cqm_ring_direct_wqe_db(void *ex_handle, u32 service_type, u8 db_count,
+ void *direct_wqe);
+s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type,
+ void *direct_wqe);
+
+#ifdef __cplusplus
+#if __cplusplus
+}
+#endif
+#endif /* __cplusplus */
+
+#endif /* CQM_OBJECT_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.c b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.c
new file mode 100644
index 0000000..1007b44
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.c
@@ -0,0 +1,1389 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/mm.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+
+#include "cqm_object.h"
+#include "cqm_bitmap_table.h"
+#include "cqm_bat_cla.h"
+#include "cqm_main.h"
+#include "cqm_object_intern.h"
+
+#define srq_obj_intern_if_section
+
+/**
+ * cqm_container_free - Only the container buffer is released; the buffers referenced by
+ *                      the WQE and fast link tables are not touched. Containers can be
+ *                      released from head to tail, both inclusive. This function does not
+ *                      modify the head and tail pointers recorded in qinfo.
+ * @srq_head_container: head pointer of the containers to be released
+ * @srq_tail_container: if NULL, containers are released from head to tail
+ * @common: CQM nonrdma queue info
+ */
+void cqm_container_free(u8 *srq_head_container, u8 *srq_tail_container,
+ struct tag_cqm_queue *common)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(common->object.cqm_handle);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ u32 link_wqe_offset = qinfo->wqe_per_buf * qinfo->wqe_size;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_srq_linkwqe *srq_link_wqe = NULL;
+ u32 container_size = qinfo->container_size;
+ struct pci_dev *dev = cqm_handle->dev;
+ u64 addr;
+ u8 *srqhead_container = srq_head_container;
+ u8 *srqtail_container = srq_tail_container;
+
+ if (unlikely(!srqhead_container)) {
+ pr_err("[CQM]%s: srqhead_container is null\n", __func__);
+ return;
+ }
+
+	/* 1. The range is released in a loop from head to tail, i.e.
+	 * [head:tail]. If the tail is null, the range is [head:null]; otherwise,
+	 * it is [head:tail->next).
+	 */
+ if (srqtail_container) {
+ /* [head:tail->next): Update srqtail_container to the next
+ * container va.
+ */
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(srqtail_container +
+ link_wqe_offset);
+ /* Only the link wqe part needs to be converted. */
+ cqm_swab32((u8 *)(srq_link_wqe), sizeof(struct tag_cqm_linkwqe) >> CQM_DW_SHIFT);
+ srqtail_container = (u8 *)CQM_ADDR_COMBINE(srq_link_wqe->fixed_next_buffer_addr_h,
+ srq_link_wqe->fixed_next_buffer_addr_l);
+ }
+
+ do {
+ /* 2. Obtain the link wqe of the current container */
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(srqhead_container +
+ link_wqe_offset);
+ /* Only the link wqe part needs to be converted. */
+ cqm_swab32((u8 *)(srq_link_wqe), sizeof(struct tag_cqm_linkwqe) >> CQM_DW_SHIFT);
+ /* Obtain the va of the next container using the link wqe. */
+ srqhead_container = (u8 *)CQM_ADDR_COMBINE(srq_link_wqe->fixed_next_buffer_addr_h,
+ srq_link_wqe->fixed_next_buffer_addr_l);
+
+ /* 3. Obtain the current container pa from the link wqe,
+ * and cancel the mapping
+ */
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_gpa_h,
+ srq_link_wqe->current_buffer_gpa_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Container free: buffer physical addr is null\n");
+ return;
+ }
+ pci_unmap_single(dev, (dma_addr_t)addr, container_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ /* 4. Obtain the container va through linkwqe and release the
+ * container va.
+ */
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_addr_h,
+ srq_link_wqe->current_buffer_addr_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Container free: buffer virtual addr is null\n");
+ return;
+ }
+ kfree((void *)addr);
+ } while (srqhead_container != srqtail_container);
+}
+
+/**
+ * cqm_container_create - Create a container for the RQ or SRQ, link it to the tail of the queue,
+ * and update the tail container pointer of the queue.
+ * @object: CQM object
+ * @container_addr: output pointer that receives the new container (used when @link is false)
+ * @link: true - link the new container to the queue tail and update the tail pointer;
+ *        false - return the new container through @container_addr
+ */
+s32 cqm_container_create(struct tag_cqm_object *object, u8 **container_addr, bool link)
+{
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(object->cqm_handle);
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ u32 link_wqe_offset = qinfo->wqe_per_buf * qinfo->wqe_size;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_srq_linkwqe *srq_link_wqe = NULL;
+ struct tag_cqm_linkwqe *link_wqe = NULL;
+ dma_addr_t new_container_pa;
+ u8 *new_container = NULL;
+
+ /* 1. Applying for Container Space and Initializing Invalid/Normal WQE
+ * of the Container.
+ */
+ new_container = kmalloc(qinfo->container_size, GFP_ATOMIC | __GFP_ZERO);
+ if (!new_container) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(new_container));
+ return CQM_FAIL;
+ }
+
+ /* Container PCI mapping */
+ new_container_pa = pci_map_single(cqm_handle->dev, new_container,
+ qinfo->container_size,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, new_container_pa) != 0) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(new_container_pa));
+ goto map_fail;
+ }
+
+ /* 2. The container is linked to the SRQ, and the link wqe of
+ * tail_container and new_container is updated.
+ */
+ /* If the SRQ is not empty, update the linkwqe of the tail container. */
+ if (link) {
+ if (common->tail_container) {
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(common->tail_container +
+ link_wqe_offset);
+ link_wqe = &srq_link_wqe->linkwqe;
+ link_wqe->next_page_gpa_h =
+ __swab32((u32)CQM_ADDR_HI(new_container_pa));
+ link_wqe->next_page_gpa_l =
+ __swab32((u32)CQM_ADDR_LW(new_container_pa));
+ link_wqe->next_buffer_addr_h =
+ __swab32((u32)CQM_ADDR_HI(new_container));
+ link_wqe->next_buffer_addr_l =
+ __swab32((u32)CQM_ADDR_LW(new_container));
+			/* make sure the next page GPA and next buffer address
+			 * of the link WQE are updated first
+ */
+ wmb();
+ /* The SRQ tail container may be accessed by the chip.
+			 * Therefore, the o-bit must be set to 1 last.
+ */
+ (*(u32 *)link_wqe) |= 0x80;
+			/* make sure the o-bit is set before the fixed next
+			 * buffer address of the srq link wqe is updated
+ */
+ wmb();
+ srq_link_wqe->fixed_next_buffer_addr_h =
+ (u32)CQM_ADDR_HI(new_container);
+ srq_link_wqe->fixed_next_buffer_addr_l =
+ (u32)CQM_ADDR_LW(new_container);
+ }
+ }
+
+ /* Update the Invalid WQE of a New Container */
+ clear_bit(0x1F, (ulong *)new_container);
+ /* Update the link wqe of the new container. */
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(new_container + link_wqe_offset);
+ link_wqe = &srq_link_wqe->linkwqe;
+ link_wqe->o = CQM_LINK_WQE_OWNER_INVALID;
+ link_wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+ link_wqe->lp = CQM_LINK_WQE_LP_INVALID;
+ link_wqe->wf = CQM_WQE_WF_LINK;
+ srq_link_wqe->current_buffer_gpa_h = CQM_ADDR_HI(new_container_pa);
+ srq_link_wqe->current_buffer_gpa_l = CQM_ADDR_LW(new_container_pa);
+ srq_link_wqe->current_buffer_addr_h = CQM_ADDR_HI(new_container);
+ srq_link_wqe->current_buffer_addr_l = CQM_ADDR_LW(new_container);
+ /* Convert only the area accessed by the chip to the network sequence */
+ cqm_swab32((u8 *)link_wqe, sizeof(struct tag_cqm_linkwqe) >> CQM_DW_SHIFT);
+ if (link)
+ /* Update the tail pointer of a queue. */
+ common->tail_container = new_container;
+ else
+ *container_addr = new_container;
+
+ return CQM_SUCCESS;
+
+map_fail:
+ kfree(new_container);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_srq_container_init - Initialize the SRQ to create all containers and link them
+ * @object: CQM object
+ */
+static s32 cqm_srq_container_init(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 container_num = object->object_size;
+ s32 ret;
+ u32 i;
+
+ if (common->head_container || common->tail_container) {
+ cqm_err(handle->dev_hdl, "Srq container init: srq tail/head container not null\n");
+ return CQM_FAIL;
+ }
+
+	/* Allocate the first container.
+	 * During initialization the head/tail pointers are null.
+	 * After the first allocation succeeds, head == tail.
+ */
+ ret = cqm_container_create(&qinfo->common.object, NULL, true);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, "Srq container init: cqm_srq_container_add fail\n");
+ return CQM_FAIL;
+ }
+ common->head_container = common->tail_container;
+
+ /* The container is dynamically created and the tail pointer is updated.
+ * If the container fails to be created, release the containers from
+ * head to null.
+ */
+ for (i = 1; i < container_num; i++) {
+ ret = cqm_container_create(&qinfo->common.object, NULL, true);
+ if (ret == CQM_FAIL) {
+ cqm_container_free(common->head_container, NULL,
+ &qinfo->common);
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_share_recv_queue_create - Create SRQ(share receive queue)
+ * @object: CQM object
+ */
+s32 cqm_share_recv_queue_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_toe_private_capability *toe_own_cap = &cqm_handle->toe_own_capability;
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 step;
+ s32 ret;
+
+ /* 1. Create srq container, including initializing the link wqe. */
+ ret = cqm_srq_container_init(object);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_srq_container_init));
+ return CQM_FAIL;
+ }
+
+ /* 2. Create srq ctx: SRQ CTX is directly delivered by the driver to the
+ * chip memory area through the cmdq channel, and no CLA table
+ * management is required. Therefore, the CQM applies for only one empty
+ * buffer for the driver.
+ */
+ /* bitmap applies for index */
+ bitmap = &toe_own_cap->srqc_bitmap;
+ qinfo->index_count = (ALIGN(qinfo->q_ctx_size,
+ toe_own_cap->toe_srqc_basic_size)) /
+ toe_own_cap->toe_srqc_basic_size;
+	/* round the allocation step up to a multiple of 2 */
+ step = ALIGN(toe_own_cap->toe_srqc_number, 2);
+ qinfo->common.index = cqm_bitmap_alloc(bitmap, step, qinfo->index_count,
+ func_cap->xid_alloc_mode);
+ if (qinfo->common.index >= bitmap->max_num) {
+ cqm_err(handle->dev_hdl, "Srq create: queue index %u exceeds max_num %u\n",
+ qinfo->common.index, bitmap->max_num);
+ goto err1;
+ }
+ qinfo->common.index += toe_own_cap->toe_srqc_start_id;
+
+ /* apply for buffer for SRQC */
+ common->q_ctx_vaddr = kmalloc(qinfo->q_ctx_size,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_ctx_vaddr));
+ goto err2;
+ }
+ return CQM_SUCCESS;
+
+err2:
+ cqm_bitmap_free(bitmap,
+ qinfo->common.index - toe_own_cap->toe_srqc_start_id,
+ qinfo->index_count);
+err1:
+ cqm_container_free(common->head_container, common->tail_container,
+ &qinfo->common);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_srq_used_rq_delete - Delete RQ in TOE SRQ mode
+ * @object: CQM object
+ */
+static void cqm_srq_used_rq_delete(const struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)(common->object.cqm_handle);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ u32 link_wqe_offset = qinfo->wqe_per_buf * qinfo->wqe_size;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_srq_linkwqe *srq_link_wqe = NULL;
+ dma_addr_t addr;
+
+ /* Currently, the SRQ solution does not support RQ initialization
+ * without mounting container.
+ * As a result, RQ resources are released incorrectly.
+ * Temporary workaround: Only one container is mounted during RQ
+ * initialization and only one container is released
+ * during resource release.
+ */
+ if (unlikely(!common->head_container)) {
+		pr_err("[CQM]%s: Rq del: rq has no container to release\n", __func__);
+ return;
+ }
+
+ /* 1. Obtain current container pa from the link wqe table and
+ * cancel the mapping.
+ */
+ srq_link_wqe = (struct tag_cqm_srq_linkwqe *)(common->head_container + link_wqe_offset);
+ /* Only the link wqe part needs to be converted. */
+ cqm_swab32((u8 *)(srq_link_wqe), sizeof(struct tag_cqm_linkwqe) >> CQM_DW_SHIFT);
+
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_gpa_h,
+ srq_link_wqe->current_buffer_gpa_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Rq del: buffer physical addr is null\n");
+ return;
+ }
+ pci_unmap_single(cqm_handle->dev, addr, qinfo->container_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ /* 2. Obtain the container va through the linkwqe and release. */
+ addr = CQM_ADDR_COMBINE(srq_link_wqe->current_buffer_addr_h,
+ srq_link_wqe->current_buffer_addr_l);
+ if (addr == 0) {
+ cqm_err(handle->dev_hdl, "Rq del: buffer virtual addr is null\n");
+ return;
+ }
+ kfree((void *)addr);
+}
+
+/**
+ * cqm_share_recv_queue_delete - Delete the SRQ object. Only the containers not yet consumed by
+ *                               the SRQ, i.e. those from the head to the tail, are released here.
+ *                               Containers already handed over to an RQ are released by the RQ.
+ * @object: CQM object
+ */
+void cqm_share_recv_queue_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bitmap *bitmap = &cqm_handle->toe_own_capability.srqc_bitmap;
+ u32 index = common->index - cqm_handle->toe_own_capability.toe_srqc_start_id;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+
+	/* 1. Wait for completion to ensure that all references to the object
+	 * have been dropped.
+ */
+ if (atomic_dec_and_test(&object->refcount) != 0)
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Srq del: object is referred by others, has to wait for completion\n");
+
+ wait_for_completion(&object->free);
+ destroy_completion(&object->free);
+ /* 2. The corresponding index in the bitmap is cleared. */
+ cqm_bitmap_free(bitmap, index, qinfo->index_count);
+
+ /* 3. SRQC resource release */
+ if (unlikely(!common->q_ctx_vaddr)) {
+ pr_err("[CQM]%s: Srq del: srqc kfree, context virtual addr is null\n", __func__);
+ return;
+ }
+ kfree(common->q_ctx_vaddr);
+
+ /* 4. The SRQ queue is released. */
+ cqm_container_free(common->head_container, NULL, &qinfo->common);
+}
+
+#define obj_intern_if_section
+
+#define CQM_INDEX_INVALID_MASK 0x1FFFFFFFU
+#define CQM_IDX_VALID_SHIFT 29
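+/* The xid carried by the service encodes a validity flag: when its low 29 bits
+ * are all 1s no index has been reserved and a fresh one is taken from the
+ * bitmap; otherwise the xid itself is used as the reserved index
+ * (see cqm_qpc_mpt_bitmap_alloc() below).
+ */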
+
+/**
+ * cqm_qpc_mpt_bitmap_alloc - Apply for index from the bitmap when creating QPC or MPT
+ * @object: CQM object
+ * @cla_table: CLA table entry
+ * @low2bit_align_en: enable alignment of the lower two bits
+ */
+static s32 cqm_qpc_mpt_bitmap_alloc(struct tag_cqm_object *object,
+ struct tag_cqm_cla_table *cla_table, bool low2bit_align_en)
+{
+ struct tag_cqm_qpc_mpt *common = container_of(object, struct tag_cqm_qpc_mpt, object);
+ struct tag_cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct tag_cqm_qpc_mpt_info,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_bitmap *bitmap = &cla_table->bitmap;
+ u32 index, count;
+ u32 xid = qpc_mpt_info->common.xid;
+
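+	/* A context larger than one CLA entry occupies several consecutive
+	 * bitmap indexes.
+	 */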
+ count = (ALIGN(object->object_size, cla_table->obj_size)) / cla_table->obj_size;
+ qpc_mpt_info->index_count = count;
+
+ if ((xid & CQM_INDEX_INVALID_MASK) == CQM_INDEX_INVALID_MASK) {
+ if (low2bit_align_en) {
+ if (count > 1) {
+ cqm_err(handle->dev_hdl, "Not support alloc multiple bits.");
+ return CQM_FAIL;
+ }
+
+ index = cqm_bitmap_alloc_low2bit_align(bitmap, xid >> CQM_IDX_VALID_SHIFT,
+ func_cap->xid_alloc_mode);
+ } else {
+ /* apply for an index normally */
+ index = cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
+ count, func_cap->xid_alloc_mode);
+ }
+
+ if (index < bitmap->max_num - bitmap->reserved_back) {
+ qpc_mpt_info->common.xid = index;
+ } else {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+ } else {
+ if ((hinic3_func_type((void *)handle) != TYPE_PPF) &&
+ (hinic3_support_roce((void *)handle, NULL))) {
+ /* If PF is vroce control function, apply for index by xid */
+ index = cqm_bitmap_alloc_by_xid(bitmap, count, xid);
+ } else {
+ /* apply for index to be reserved */
+ index = cqm_bitmap_alloc_reserved(bitmap, count, xid);
+ }
+
+ if (index != xid) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_alloc_reserved));
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_qpc_mpt_create - Create QPC or MPT
+ * @object: CQM object
+ * @low2bit_align_en: enable alignment of the lower two bits
+ */
+s32 cqm_qpc_mpt_create(struct tag_cqm_object *object, bool low2bit_align_en)
+{
+ struct tag_cqm_qpc_mpt *common = container_of(object, struct tag_cqm_qpc_mpt, object);
+ struct tag_cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct tag_cqm_qpc_mpt_info,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 index, count;
+
+ /* find the corresponding cla table */
+ if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ } else if (object->object_type == CQM_OBJECT_MPT) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_MPT);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ return CQM_FAIL;
+ }
+
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cqm_cla_table_get is null\n", __func__);
+ return CQM_FAIL;
+ }
+
+ /* Bitmap applies for index. */
+ if (cqm_qpc_mpt_bitmap_alloc(object, cla_table, low2bit_align_en) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_qpc_mpt_bitmap_alloc));
+ return CQM_FAIL;
+ }
+
+ bitmap = &cla_table->bitmap;
+ index = qpc_mpt_info->common.xid;
+ count = qpc_mpt_info->index_count;
+
+ /* Find the trunk page from the BAT/CLA and allocate the buffer.
+ * Ensure that the released buffer has been cleared.
+ */
+ if (cla_table->alloc_static)
+ qpc_mpt_info->common.vaddr = cqm_cla_get_unlock(cqm_handle,
+ cla_table,
+ index, count,
+ &common->paddr);
+ else
+ qpc_mpt_info->common.vaddr = cqm_cla_get_lock(cqm_handle,
+ cla_table, index,
+ count,
+ &common->paddr);
+
+ if (!qpc_mpt_info->common.vaddr) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_get_lock));
+ cqm_err(handle->dev_hdl, "Qpc mpt init: qpc mpt vaddr is null, cla_table->alloc_static=%d\n",
+ cla_table->alloc_static);
+ goto err1;
+ }
+
+ /* Indexes are associated with objects, and FC is executed
+ * in the interrupt context.
+ */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC) {
+ if (cqm_object_table_insert(cqm_handle, object_table, index,
+ object, false) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_insert));
+ goto err2;
+ }
+ } else {
+ if (cqm_object_table_insert(cqm_handle, object_table, index,
+ object, true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_insert));
+ goto err2;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err2:
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+err1:
+ cqm_bitmap_free(bitmap, index, count);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_qpc_mpt_delete - Delete QPC or MPT
+ * @object: CQM object
+ */
+void cqm_qpc_mpt_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_qpc_mpt *common = container_of(object, struct tag_cqm_qpc_mpt, object);
+ struct tag_cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct tag_cqm_qpc_mpt_info,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ u32 count = qpc_mpt_info->index_count;
+ u32 index = qpc_mpt_info->common.xid;
+ struct tag_cqm_bitmap *bitmap = NULL;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_delete_cnt);
+
+ /* find the corresponding cla table */
+ /* Todo */
+ if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ } else if (object->object_type == CQM_OBJECT_MPT) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_MPT);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ return;
+ }
+
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cqm_cla_table_get_qpc return failure\n", __func__);
+ return;
+ }
+
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC)
+ cqm_object_table_remove(cqm_handle, object_table, index, object,
+ false);
+ else
+ cqm_object_table_remove(cqm_handle, object_table, index, object,
+ true);
+
+ /* wait for completion to ensure that all references to
+ * the QPC are complete
+ */
+ if (atomic_dec_and_test(&object->refcount) != 0)
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Qpc mpt del: object is referred by others, has to wait for completion\n");
+
+ /* Static QPC allocation must be non-blocking.
+	 * Services ensure that the QPC is no longer referenced
+	 * when the QPC is deleted.
+ */
+ if (!cla_table->alloc_static)
+ wait_for_completion(&object->free);
+
+ /* VMware FC need explicitly deinit spin_lock in completion */
+ destroy_completion(&object->free);
+
+ /* release qpc buffer */
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* release the index to the bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+}
+
+/**
+ * cqm_linkwqe_fill - Organize the queue buffers of non-RDMA services and fill in the link WQEs
+ * @buf: CQM queue buffer
+ * @wqe_per_buf: number of WQEs per buffer (the link WQE is not included)
+ * @wqe_size: WQE size
+ * @wqe_number: number of WQEs (link WQEs are not included)
+ * @tail: true - the link WQE must be placed at the end of the page;
+ *        false - the link WQE may directly follow the last WQE instead.
+ * @link_mode: Link mode
+ */
+static void cqm_linkwqe_fill(struct tag_cqm_buf *buf, u32 wqe_per_buf, u32 wqe_size,
+ u32 wqe_number, bool tail, u8 link_mode)
+{
+ struct tag_cqm_linkwqe_128B *linkwqe = NULL;
+ struct tag_cqm_linkwqe *wqe = NULL;
+ dma_addr_t addr;
+ u8 *tmp = NULL;
+ u8 *va = NULL;
+ u32 i;
+
+	/* For every buffer except the last one, the link WQE is
+	 * filled in directly at the tail of the buffer.
+ */
+ for (i = 0; i < buf->buf_number; i++) {
+ va = (u8 *)(buf->buf_list[i].va);
+
+ if (i != (buf->buf_number - 1)) {
+ wqe = (struct tag_cqm_linkwqe *)(va + (u32)(wqe_size * wqe_per_buf));
+ wqe->wf = CQM_WQE_WF_LINK;
+ wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+ wqe->lp = CQM_LINK_WQE_LP_INVALID;
+ /* The valid value of link wqe needs to be set to 1.
+ * Each service ensures that o-bit=1 indicates that
+ * link wqe is valid and o-bit=0 indicates that
+ * link wqe is invalid.
+ */
+ wqe->o = CQM_LINK_WQE_OWNER_VALID;
+ addr = buf->buf_list[(u32)(i + 1)].pa;
+ wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
+ wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
+ } else { /* linkwqe special padding of the last buffer */
+ if (tail) {
+ /* must be filled at the end of the page */
+ tmp = va + (u32)(wqe_size * wqe_per_buf);
+ wqe = (struct tag_cqm_linkwqe *)tmp;
+ } else {
+ /* The last linkwqe is filled
+ * following the last wqe.
+ */
+ tmp = va + (u32)(wqe_size * (wqe_number - wqe_per_buf *
+ (buf->buf_number - 1)));
+ wqe = (struct tag_cqm_linkwqe *)tmp;
+ }
+ wqe->wf = CQM_WQE_WF_LINK;
+ wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+
+ /* In link mode, the last link WQE is invalid;
+ * In ring mode, the last link wqe is valid, pointing to
+ * the home page, and the lp is set.
+ */
+ if (link_mode == CQM_QUEUE_LINK_MODE) {
+ wqe->o = CQM_LINK_WQE_OWNER_INVALID;
+ } else {
+ /* The lp field of the last link_wqe is set to
+ * 1, indicating that the meaning of the o-bit
+ * is reversed.
+ */
+ wqe->lp = CQM_LINK_WQE_LP_VALID;
+ wqe->o = CQM_LINK_WQE_OWNER_VALID;
+ addr = buf->buf_list[0].pa;
+ wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
+ wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
+ }
+ }
+
+ if (wqe_size == CQM_LINKWQE_128B) {
+			/* Since the B800 version the WQE o-bit scheme has
+			 * changed: both 64B halves of the 128B WQE must be
+			 * assigned a value:
+			 * for ifoe, the o-bit is the 63rd bit from the end
+			 * of the last 64B;
+			 * for toe, the o-bit is the 157th bit from the end
+			 * of the last 64B.
+			 */
+ linkwqe = (struct tag_cqm_linkwqe_128B *)wqe;
+ linkwqe->second64B.third_16B.bs.toe_o = CQM_LINK_WQE_OWNER_VALID;
+ linkwqe->second64B.forth_16B.bs.ifoe_o = CQM_LINK_WQE_OWNER_VALID;
+
+			/* shift right by 2 to convert the byte length to dwords (4B) */
+ cqm_swab32((u8 *)wqe, sizeof(struct tag_cqm_linkwqe_128B) >> 2);
+ } else {
+			/* shift right by 2 to convert the byte length to dwords (4B) */
+ cqm_swab32((u8 *)wqe, sizeof(struct tag_cqm_linkwqe) >> 2);
+ }
+ }
+}
+
+static int cqm_nonrdma_queue_ctx_create_scq(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ bool bh = false;
+
+ /* find the corresponding cla table */
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(nonrdma_cqm_cla_table_get));
+ return CQM_FAIL;
+ }
+
+ /* bitmap applies for index */
+ bitmap = &cla_table->bitmap;
+ qinfo->index_count = (ALIGN(qinfo->q_ctx_size, cla_table->obj_size)) / cla_table->obj_size;
+ qinfo->common.index = cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
+ qinfo->index_count,
+ cqm_handle->func_capability.xid_alloc_mode);
+ if (qinfo->common.index >= bitmap->max_num) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(nonrdma_cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+
+ /* find the trunk page from BAT/CLA and allocate the buffer */
+ common->q_ctx_vaddr = cqm_cla_get_lock(cqm_handle, cla_table, qinfo->common.index,
+ qinfo->index_count, &common->q_ctx_paddr);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(nonrdma_cqm_cla_get_lock));
+ cqm_bitmap_free(bitmap, qinfo->common.index, qinfo->index_count);
+ return CQM_FAIL;
+ }
+
+ /* index and object association */
+ object_table = &cla_table->obj_table;
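+	/* FC runs in interrupt context, so its object table insert must not
+	 * use the bottom-half locking variant (bh == false).
+	 */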
+ bh = ((object->service_type == CQM_SERVICE_T_FC) ? false : true);
+ if (cqm_object_table_insert(cqm_handle, object_table, qinfo->common.index, object,
+ bh) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(nonrdma_cqm_object_table_insert));
+ cqm_cla_put(cqm_handle, cla_table, qinfo->common.index, qinfo->index_count);
+ cqm_bitmap_free(bitmap, qinfo->common.index, qinfo->index_count);
+
+ return CQM_FAIL;
+ }
+
+ return 0;
+}
+
+static s32 cqm_nonrdma_queue_ctx_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ s32 shift;
+ int ret;
+
+ if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+ shift = cqm_shift(qinfo->q_ctx_size);
+ common->q_ctx_vaddr = cqm_kmalloc_align(qinfo->q_ctx_size,
+ GFP_KERNEL | __GFP_ZERO,
+ (u16)shift);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_ctx_vaddr));
+ return CQM_FAIL;
+ }
+
+ common->q_ctx_paddr = pci_map_single(cqm_handle->dev, common->q_ctx_vaddr,
+ qinfo->q_ctx_size, PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, common->q_ctx_paddr) != 0) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_ctx_vaddr));
+ cqm_kfree_align(common->q_ctx_vaddr);
+ common->q_ctx_vaddr = NULL;
+ return CQM_FAIL;
+ }
+ } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ ret = cqm_nonrdma_queue_ctx_create_scq(object);
+ if (ret != 0)
+ return ret;
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_nonrdma_queue_create - Create a queue for non-RDMA services
+ * @object: CQM object
+ */
+s32 cqm_nonrdma_queue_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_service *service = cqm_handle->service + object->service_type;
+ struct tag_cqm_buf *q_room_buf = &common->q_room_buf_1;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ u32 wqe_number = qinfo->common.object.object_size;
+ u32 wqe_size = qinfo->wqe_size;
+ u32 order = service->buf_order;
+ u32 buf_number, buf_size;
+ bool tail = false; /* determine whether the linkwqe is at the end of the page */
+
+	/* When creating a CQ/SCQ queue, the page size is 4 KB and
+	 * the link WQE must be at the end of the page.
+ */
+ if (object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_CQ ||
+ object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ /* depth: 2^n-aligned; depth range: 256-32 K */
+ if (wqe_number < CQM_CQ_DEPTH_MIN ||
+ wqe_number > CQM_CQ_DEPTH_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_number));
+ return CQM_FAIL;
+ }
+ if (!cqm_check_align(wqe_number)) {
+ cqm_err(handle->dev_hdl, "Nonrdma queue alloc: wqe_number is not align on 2^n\n");
+ return CQM_FAIL;
+ }
+
+ order = CQM_4K_PAGE_ORDER; /* wqe page 4k */
+ tail = true; /* The linkwqe must be at the end of the page. */
+ buf_size = CQM_4K_PAGE_SIZE;
+ } else {
+ buf_size = (u32)(PAGE_SIZE << order);
+ }
+
+	/* Number of WQEs that fit in one buffer;
+	 * the -1 deducts the link WQE at the end of the buffer.
+	 */
+	qinfo->wqe_per_buf = (buf_size / wqe_size) - 1;
+	/* Total number of buffers; the depth passed in by the service
+	 * already includes one link WQE per buffer.
+	 */
+ buf_number = ALIGN((wqe_size * wqe_number), buf_size) / buf_size;
+
+ /* apply for buffer */
+ q_room_buf->buf_number = buf_number;
+ q_room_buf->buf_size = buf_size;
+ q_room_buf->page_number = buf_number << order;
+ if (cqm_buf_alloc(cqm_handle, q_room_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+	/* Fill the link WQEs; wqe_number - buf_number is the number of
+	 * WQEs excluding the link WQEs.
+	 */
+ cqm_linkwqe_fill(q_room_buf, qinfo->wqe_per_buf, wqe_size,
+ wqe_number - buf_number, tail,
+ common->queue_link_mode);
+
+ /* create queue header */
+ qinfo->common.q_header_vaddr = cqm_kmalloc_align(sizeof(struct tag_cqm_queue_header),
+ GFP_KERNEL | __GFP_ZERO,
+ CQM_QHEAD_ALIGN_ORDER);
+ if (!qinfo->common.q_header_vaddr)
+ goto err1;
+
+ common->q_header_paddr = pci_map_single(cqm_handle->dev,
+ qinfo->common.q_header_vaddr,
+ sizeof(struct tag_cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, common->q_header_paddr) != 0) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_header_vaddr));
+ goto err2;
+ }
+
+ /* create queue ctx */
+ if (cqm_nonrdma_queue_ctx_create(object) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_nonrdma_queue_ctx_create));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
+ sizeof(struct tag_cqm_queue_header), PCI_DMA_BIDIRECTIONAL);
+err2:
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+err1:
+ cqm_buf_free(q_room_buf, cqm_handle);
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_nonrdma_queue_delete - Delete the queues of non-RDMA services
+ * @object: CQM object
+ */
+void cqm_nonrdma_queue_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_nonrdma_qinfo *qinfo = container_of(common, struct tag_cqm_nonrdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct tag_cqm_buf *q_room_buf = &common->q_room_buf_1;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 index = qinfo->common.index;
+ u32 count = qinfo->index_count;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_delete_cnt);
+
+ /* The SCQ has an independent SCQN association. */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cqm_cla_table_get_queue return failure\n", __func__);
+ return;
+ }
+
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC)
+ cqm_object_table_remove(cqm_handle, object_table, index,
+ object, false);
+ else
+ cqm_object_table_remove(cqm_handle, object_table, index,
+ object, true);
+ }
+
+	/* wait for completion to ensure that all references to
+	 * the queue are complete
+ */
+ if (atomic_dec_and_test(&object->refcount) != 0)
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Nonrdma queue del: object is referred by others, has to wait for completion\n");
+
+ wait_for_completion(&object->free);
+ destroy_completion(&object->free);
+
+ /* If the q header exists, release. */
+ if (qinfo->common.q_header_vaddr) {
+ pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
+ sizeof(struct tag_cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+ }
+
+ /* RQ deletion in TOE SRQ mode */
+ if (common->queue_link_mode == CQM_QUEUE_TOE_SRQ_LINK_MODE) {
+ cqm_dbg("Nonrdma queue del: delete srq used rq\n");
+ cqm_srq_used_rq_delete(&common->object);
+ } else {
+ /* If q room exists, release. */
+ cqm_buf_free(q_room_buf, cqm_handle);
+ }
+	/* SRQ and SCQ have independent CTXs; release them here. */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+		/* The CTX of the non-RDMA SRQ is
+		 * allocated independently.
+		 */
+ if (common->q_ctx_vaddr) {
+ pci_unmap_single(cqm_handle->dev, common->q_ctx_paddr,
+ qinfo->q_ctx_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(common->q_ctx_vaddr);
+ common->q_ctx_vaddr = NULL;
+ }
+ } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+		/* The CTX of the non-RDMA SCQ is managed by BAT/CLA. */
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* release the index to the bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+ }
+}
+
+static s32 cqm_rdma_queue_ctx_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_rdma_qinfo *qinfo = container_of(common, struct tag_cqm_rdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 index;
+
+ if (object->object_type == CQM_OBJECT_RDMA_SRQ ||
+ object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ if (object->object_type == CQM_OBJECT_RDMA_SRQ)
+ cla_table = cqm_cla_table_get(bat_table,
+ CQM_BAT_ENTRY_T_SRQC);
+ else
+ cla_table = cqm_cla_table_get(bat_table,
+ CQM_BAT_ENTRY_T_SCQC);
+
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(rdma_cqm_cla_table_get));
+ return CQM_FAIL;
+ }
+
+ /* bitmap applies for index */
+ bitmap = &cla_table->bitmap;
+ if (qinfo->common.index == CQM_INDEX_INVALID) {
+ qinfo->index_count = (ALIGN(qinfo->q_ctx_size,
+ cla_table->obj_size)) /
+ cla_table->obj_size;
+ qinfo->common.index =
+ cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
+ qinfo->index_count,
+ cqm_handle->func_capability.xid_alloc_mode);
+ if (qinfo->common.index >= bitmap->max_num) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(rdma_cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+ } else {
+ /* apply for reserved index */
+ qinfo->index_count = (ALIGN(qinfo->q_ctx_size, cla_table->obj_size)) /
+ cla_table->obj_size;
+ index = cqm_bitmap_alloc_reserved(bitmap, qinfo->index_count,
+ qinfo->common.index);
+ if (index != qinfo->common.index) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_alloc_reserved));
+ return CQM_FAIL;
+ }
+ }
+
+ /* find the trunk page from BAT/CLA and allocate the buffer */
+ qinfo->common.q_ctx_vaddr =
+ cqm_cla_get_lock(cqm_handle, cla_table, qinfo->common.index,
+ qinfo->index_count, &qinfo->common.q_ctx_paddr);
+ if (!qinfo->common.q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(rdma_cqm_cla_get_lock));
+ cqm_bitmap_free(bitmap, qinfo->common.index, qinfo->index_count);
+ return CQM_FAIL;
+ }
+
+ /* associate index and object */
+ object_table = &cla_table->obj_table;
+ if (cqm_object_table_insert(cqm_handle, object_table, qinfo->common.index, object,
+ true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(rdma_cqm_object_table_insert));
+ cqm_cla_put(cqm_handle, cla_table, qinfo->common.index,
+ qinfo->index_count);
+ cqm_bitmap_free(bitmap, qinfo->common.index, qinfo->index_count);
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_rdma_queue_create - Create rdma queue
+ * @object: CQM object
+ */
+s32 cqm_rdma_queue_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_rdma_qinfo *qinfo = container_of(common, struct tag_cqm_rdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_service *service = cqm_handle->service + object->service_type;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *q_room_buf = NULL;
+ u32 order = service->buf_order;
+ u32 buf_size = (u32)(PAGE_SIZE << order);
+
+ if (qinfo->room_header_alloc) {
+ /* apply for queue room buffer */
+ if (qinfo->common.current_q_room == CQM_RDMA_Q_ROOM_1)
+ q_room_buf = &qinfo->common.q_room_buf_1;
+ else
+ q_room_buf = &qinfo->common.q_room_buf_2;
+
+ q_room_buf->buf_number = ALIGN(object->object_size, buf_size) /
+ buf_size;
+ q_room_buf->page_number = (q_room_buf->buf_number << order);
+ q_room_buf->buf_size = buf_size;
+ if (cqm_buf_alloc(cqm_handle, q_room_buf, true) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+
+ /* queue header */
+ qinfo->common.q_header_vaddr =
+ cqm_kmalloc_align(sizeof(struct tag_cqm_queue_header),
+ GFP_KERNEL | __GFP_ZERO,
+ CQM_QHEAD_ALIGN_ORDER);
+ if (!qinfo->common.q_header_vaddr)
+ goto err1;
+
+ qinfo->common.q_header_paddr =
+ pci_map_single(cqm_handle->dev,
+ qinfo->common.q_header_vaddr,
+ sizeof(struct tag_cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev,
+ qinfo->common.q_header_paddr) != 0) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_header_vaddr));
+ goto err2;
+ }
+ }
+
+ /* queue ctx */
+ if (cqm_rdma_queue_ctx_create(object) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_rdma_queue_ctx_create));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ if (qinfo->room_header_alloc)
+ pci_unmap_single(cqm_handle->dev, qinfo->common.q_header_paddr,
+ sizeof(struct tag_cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+err2:
+ if (qinfo->room_header_alloc) {
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+ }
+err1:
+ if (qinfo->room_header_alloc)
+ cqm_buf_free(q_room_buf, cqm_handle);
+
+ return CQM_FAIL;
+}
+
+/**
+ * cqm_rdma_queue_delete - Delete rdma queue
+ * @object: CQM object
+ */
+void cqm_rdma_queue_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_queue *common = container_of(object, struct tag_cqm_queue, object);
+ struct tag_cqm_rdma_qinfo *qinfo = container_of(common, struct tag_cqm_rdma_qinfo,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct tag_cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_object_table *object_table = NULL;
+ struct tag_cqm_cla_table *cla_table = NULL;
+ struct tag_cqm_buf *q_room_buf = NULL;
+ struct tag_cqm_bitmap *bitmap = NULL;
+ u32 index = qinfo->common.index;
+ u32 count = qinfo->index_count;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_rdma_queue_delete_cnt);
+
+ if (qinfo->common.current_q_room == CQM_RDMA_Q_ROOM_1)
+ q_room_buf = &qinfo->common.q_room_buf_1;
+ else
+ q_room_buf = &qinfo->common.q_room_buf_2;
+
+ /* SCQ and SRQ are associated with independent SCQN and SRQN. */
+ if (object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cqm_cla_table_get return failure\n", __func__);
+ return;
+ }
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ cqm_object_table_remove(cqm_handle, object_table, index, object, true);
+ } else if (object->object_type == CQM_OBJECT_RDMA_SRQ) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SRQC);
+ if (unlikely(!cla_table)) {
+ pr_err("[CQM]%s: cqm_cla_table_get return failure\n", __func__);
+ return;
+ }
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ cqm_object_table_remove(cqm_handle, object_table, index, object, true);
+ }
+
+ /* wait for completion to make sure all references are complete */
+ if (atomic_dec_and_test(&object->refcount) != 0)
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Rdma queue del: object is referred by others, has to wait for completion\n");
+
+ wait_for_completion(&object->free);
+ destroy_completion(&object->free);
+
+ /* If the q header exists, release. */
+ if (qinfo->room_header_alloc && qinfo->common.q_header_vaddr) {
+ pci_unmap_single(cqm_handle->dev, qinfo->common.q_header_paddr,
+ sizeof(struct tag_cqm_queue_header), PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+ }
+
+ /* If q room exists, release. */
+ cqm_buf_free(q_room_buf, cqm_handle);
+
+	/* SRQ and SCQ have independent CTXs; release them here. */
+ if (object->object_type == CQM_OBJECT_RDMA_SRQ ||
+ object->object_type == CQM_OBJECT_RDMA_SCQ) {
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* release the index to the bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+ }
+}
+
+/**
+ * cqm_rdma_table_create - Create RDMA-related entries
+ * @object: CQM object
+ */
+s32 cqm_rdma_table_create(struct tag_cqm_object *object)
+{
+ struct tag_cqm_mtt_rdmarc *common = container_of(object, struct tag_cqm_mtt_rdmarc,
+ object);
+ struct tag_cqm_rdma_table *rdma_table = container_of(common, struct tag_cqm_rdma_table,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *buf = &rdma_table->buf;
+
+	/* Allocations of one page or less use the actual size.
+	 * RDMARC also requires physically contiguous memory.
+	 */
+ if (object->object_size <= PAGE_SIZE ||
+ object->object_type == CQM_OBJECT_RDMARC) {
+ buf->buf_number = 1;
+ buf->page_number = buf->buf_number;
+ buf->buf_size = object->object_size;
+ buf->direct.va = pci_alloc_consistent(cqm_handle->dev,
+ buf->buf_size,
+ &buf->direct.pa);
+ if (!buf->direct.va)
+ return CQM_FAIL;
+	} else { /* larger than one page: allocate page by page */
+ buf->buf_number = ALIGN(object->object_size, PAGE_SIZE) /
+ PAGE_SIZE;
+ buf->page_number = buf->buf_number;
+ buf->buf_size = PAGE_SIZE;
+ if (cqm_buf_alloc(cqm_handle, buf, true) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+ }
+
+ rdma_table->common.vaddr = (u8 *)(buf->direct.va);
+
+ return CQM_SUCCESS;
+}
+
+/**
+ * cqm_rdma_table_delete - Delete RDMA-related Entries
+ * @object: CQM object
+ */
+void cqm_rdma_table_delete(struct tag_cqm_object *object)
+{
+ struct tag_cqm_mtt_rdmarc *common = container_of(object, struct tag_cqm_mtt_rdmarc,
+ object);
+ struct tag_cqm_rdma_table *rdma_table = container_of(common, struct tag_cqm_rdma_table,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *buf = &rdma_table->buf;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_rdma_table_delete_cnt);
+
+ if (buf->buf_number == 1) {
+ if (buf->direct.va) {
+ pci_free_consistent(cqm_handle->dev, buf->buf_size,
+ buf->direct.va, buf->direct.pa);
+ buf->direct.va = NULL;
+ }
+ } else {
+ cqm_buf_free(buf, cqm_handle);
+ }
+}
+
+/**
+ * cqm_rdma_table_offset_addr - Obtain the address of the RDMA entry based on the offset
+ * @object: CQM object
+ * @offset: index of the entry; must lie within the table's index range
+ * @paddr: dma physical addr
+ */
+u8 *cqm_rdma_table_offset_addr(struct tag_cqm_object *object, u32 offset, dma_addr_t *paddr)
+{
+ struct tag_cqm_mtt_rdmarc *common = container_of(object, struct tag_cqm_mtt_rdmarc,
+ object);
+ struct tag_cqm_rdma_table *rdma_table = container_of(common, struct tag_cqm_rdma_table,
+ common);
+ struct tag_cqm_handle *cqm_handle = (struct tag_cqm_handle *)object->cqm_handle;
+ struct hinic3_hwdev *handle = cqm_handle->ex_handle;
+ struct tag_cqm_buf *buf = &rdma_table->buf;
+ struct tag_cqm_buf_list *buf_node = NULL;
+ u32 buf_id, buf_offset;
+
+ if (offset < rdma_table->common.index_base ||
+ ((offset - rdma_table->common.index_base) >=
+ rdma_table->common.index_number)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(offset));
+ return NULL;
+ }
+
+ if (buf->buf_number == 1) {
+ buf_offset = (u32)((offset - rdma_table->common.index_base) *
+ (sizeof(dma_addr_t)));
+
+ *paddr = buf->direct.pa + buf_offset;
+ return ((u8 *)(buf->direct.va)) + buf_offset;
+ }
+
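+	/* Multi-buffer table: each buffer holds PAGE_SIZE / sizeof(dma_addr_t)
+	 * entries, so locate the buffer first and then the offset inside it.
+	 */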
+ buf_id = (offset - rdma_table->common.index_base) /
+ (PAGE_SIZE / sizeof(dma_addr_t));
+ buf_offset = (u32)((offset - rdma_table->common.index_base) -
+ (buf_id * (PAGE_SIZE / sizeof(dma_addr_t))));
+ buf_offset = (u32)(buf_offset * sizeof(dma_addr_t));
+
+ if (buf_id >= buf->buf_number) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(buf_id));
+ return NULL;
+ }
+ buf_node = buf->buf_list + buf_id;
+ *paddr = buf_node->pa + buf_offset;
+
+ return ((u8 *)(buf->direct.va)) +
+ (offset - rdma_table->common.index_base) * (sizeof(dma_addr_t));
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.h b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.h
new file mode 100644
index 0000000..f82fda2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/cqm_object_intern.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_OBJECT_INTERN_H
+#define CQM_OBJECT_INTERN_H
+
+#include "ossl_knl.h"
+#include "cqm_object.h"
+
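+/* The CQ depth must be a power of two within [CQM_CQ_DEPTH_MIN, CQM_CQ_DEPTH_MAX];
+ * see the checks in cqm_nonrdma_queue_create().
+ */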
+#define CQM_CQ_DEPTH_MAX 32768
+#define CQM_CQ_DEPTH_MIN 256
+
+/* linkwqe */
+#define CQM_LINK_WQE_CTRLSL_VALUE 2
+#define CQM_LINK_WQE_LP_VALID 1
+#define CQM_LINK_WQE_LP_INVALID 0
+#define CQM_LINK_WQE_OWNER_VALID 1
+#define CQM_LINK_WQE_OWNER_INVALID 0
+
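+/* Split a 64-bit DMA address into the two 32-bit halves stored in the link WQE,
+ * and reassemble them when the address is read back.
+ */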
+#define CQM_ADDR_COMBINE(high_addr, low_addr) \
+ ((((dma_addr_t)(high_addr)) << 32) + ((dma_addr_t)(low_addr)))
+#define CQM_ADDR_HI(addr) ((u32)((u64)(addr) >> 32))
+#define CQM_ADDR_LW(addr) ((u32)((u64)(addr) & 0xffffffff))
+
+#define CQM_QPC_LAYOUT_TABLE_SIZE 16
+struct tag_cqm_qpc_layout_table_node {
+ u32 type;
+ u32 size;
+ u32 offset;
+ struct tag_cqm_object *object;
+};
+
+struct tag_cqm_qpc_mpt_info {
+ struct tag_cqm_qpc_mpt common;
+	/* Different services have different QPC sizes.
+	 * A large QPC/MPT occupies several consecutive indexes in the bitmap.
+	 */
+ u32 index_count;
+ struct tag_cqm_qpc_layout_table_node qpc_layout_table[CQM_QPC_LAYOUT_TABLE_SIZE];
+};
+
+struct tag_cqm_nonrdma_qinfo {
+ struct tag_cqm_queue common;
+ u32 wqe_size;
+ /* Number of WQEs in each buffer (excluding link WQEs)
+ * For SRQ, the value is the number of WQEs contained in a container.
+ */
+ u32 wqe_per_buf;
+ u32 q_ctx_size;
+ /* When different services use CTXs of different sizes,
+ * a large CTX occupies multiple consecutive indexes in the bitmap.
+ */
+ u32 index_count;
+
+ /* add for srq */
+ u32 container_size;
+};
+
+struct tag_cqm_rdma_qinfo {
+ struct tag_cqm_queue common;
+ bool room_header_alloc;
+ /* This field is used to temporarily record the new object_size during
+ * CQ resize.
+ */
+ u32 new_object_size;
+ u32 q_ctx_size;
+ /* When different services use CTXs of different sizes,
+ * a large CTX occupies multiple consecutive indexes in the bitmap.
+ */
+ u32 index_count;
+};
+
+struct tag_cqm_rdma_table {
+ struct tag_cqm_mtt_rdmarc common;
+ struct tag_cqm_buf buf;
+};
+
+void cqm_container_free(u8 *srq_head_container, u8 *srq_tail_container,
+ struct tag_cqm_queue *common);
+s32 cqm_container_create(struct tag_cqm_object *object, u8 **container_addr, bool link);
+s32 cqm_share_recv_queue_create(struct tag_cqm_object *object);
+void cqm_share_recv_queue_delete(struct tag_cqm_object *object);
+s32 cqm_qpc_mpt_create(struct tag_cqm_object *object, bool low2bit_align_en);
+void cqm_qpc_mpt_delete(struct tag_cqm_object *object);
+s32 cqm_nonrdma_queue_create(struct tag_cqm_object *object);
+void cqm_nonrdma_queue_delete(struct tag_cqm_object *object);
+s32 cqm_rdma_queue_create(struct tag_cqm_object *object);
+void cqm_rdma_queue_delete(struct tag_cqm_object *object);
+s32 cqm_rdma_table_create(struct tag_cqm_object *object);
+void cqm_rdma_table_delete(struct tag_cqm_object *object);
+u8 *cqm_rdma_table_offset_addr(struct tag_cqm_object *object, u32 offset, dma_addr_t *paddr);
+
+#endif /* CQM_OBJECT_INTERN_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/cqm/readme.txt b/drivers/net/ethernet/huawei/hinic3/cqm/readme.txt
new file mode 100644
index 0000000..1e21b66
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cqm/readme.txt
@@ -0,0 +1,3 @@
+
+2021/02/25/10:35 gf ovs fake vf hash clear support, change comment
+2019/03/28/15:17 wss provide stateful service queue and context management
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_crm.h b/drivers/net/ethernet/huawei/hinic3/hinic3_crm.h
new file mode 100644
index 0000000..5a11331
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_crm.h
@@ -0,0 +1,1280 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CRM_H
+#define HINIC3_CRM_H
+
+#include <linux/pci.h>
+
+#include "mpu_cmd_base_defs.h"
+
+#define HINIC3_DRV_VERSION "17.7.8.101"
+#define HINIC3_DRV_DESC "Intelligent Network Interface Card Driver"
+#define HIUDK_DRV_DESC "Intelligent Network Unified Driver"
+
+#define ARRAY_LEN(arr) ((int)((int)sizeof(arr) / (int)sizeof((arr)[0])))
+
+#define HINIC3_MGMT_VERSION_MAX_LEN 32
+
+#define HINIC3_FW_VERSION_NAME 16
+#define HINIC3_FW_VERSION_SECTION_CNT 4
+#define HINIC3_FW_VERSION_SECTION_BORDER 0xFF
+struct hinic3_fw_version {
+ u8 mgmt_ver[HINIC3_FW_VERSION_NAME];
+ u8 microcode_ver[HINIC3_FW_VERSION_NAME];
+ u8 boot_ver[HINIC3_FW_VERSION_NAME];
+};
+
+#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF
+
+/* Each driver is shown only its own capability structure, such as
+ * nic_service_cap or toe_service_cap; service_cap itself is not exposed.
+ */
+enum hinic3_service_type {
+ SERVICE_T_NIC = 0,
+ SERVICE_T_OVS,
+ SERVICE_T_ROCE,
+ SERVICE_T_TOE,
+ SERVICE_T_IOE,
+ SERVICE_T_FC,
+ SERVICE_T_VBS,
+ SERVICE_T_IPSEC,
+ SERVICE_T_VIRTIO,
+ SERVICE_T_MIGRATE,
+ SERVICE_T_PPA,
+ SERVICE_T_CUSTOM,
+ SERVICE_T_VROCE,
+ SERVICE_T_CRYPT,
+ SERVICE_T_VSOCK,
+ SERVICE_T_BIFUR,
+ SERVICE_T_MAX,
+
+	/* Only used for interrupt resource management,
+	 * to mark the requesting module
+	 */
+ SERVICE_T_INTF = (1 << 15),
+ SERVICE_T_CQM = (1 << 16),
+};
+
+enum hinic3_ppf_flr_type {
+ STATELESS_FLR_TYPE,
+ STATEFUL_FLR_TYPE,
+};
+
+struct nic_service_cap {
+ u16 max_sqs;
+ u16 max_rqs;
+ u16 default_num_queues;
+ u16 outband_vlan_cfg_en;
+ u8 lro_enable;
+ u8 rsvd1[3];
+};
+
+struct ppa_service_cap {
+ u16 qpc_fake_vf_start;
+ u16 qpc_fake_vf_num;
+ u32 qpc_fake_vf_ctx_num;
+ u32 pctx_sz; /* 512B */
+ u32 bloomfilter_length;
+ u8 bloomfilter_en;
+ u8 rsvd;
+ u16 rsvd1;
+};
+
+struct bifur_service_cap {
+ u8 rsvd;
+};
+
+struct vbs_service_cap {
+ u16 vbs_max_volq;
+ u8 vbs_main_pf_enable;
+ u8 vbs_vsock_pf_enable;
+ u8 vbs_fushion_queue_pf_enable;
+};
+
+struct migr_service_cap {
+ u8 master_host_id;
+ u8 rsvd[3];
+};
+
+/* PF/VF ToE service resource structure */
+struct dev_toe_svc_cap {
+ /* PF resources */
+ u32 max_pctxs; /* Parent Context: max specifications 1M */
+ u32 max_cctxt;
+ u32 max_cqs;
+ u16 max_srqs;
+ u32 srq_id_start;
+ u32 max_mpts;
+};
+
+/* ToE services */
+struct toe_service_cap {
+ struct dev_toe_svc_cap dev_toe_cap;
+
+ bool alloc_flag;
+ u32 pctx_sz; /* 1KB */
+ u32 scqc_sz; /* 64B */
+};
+
+/* PF FC service resource structure defined */
+struct dev_fc_svc_cap {
+ /* PF Parent QPC */
+ u32 max_parent_qpc_num; /* max number is 2048 */
+
+ /* PF Child QPC */
+ u32 max_child_qpc_num; /* max number is 2048 */
+ u32 child_qpc_id_start;
+
+ /* PF SCQ */
+ u32 scq_num; /* 16 */
+
+ /* PF supports SRQ */
+ u32 srq_num; /* Number of SRQ is 2 */
+
+ u8 vp_id_start;
+ u8 vp_id_end;
+};
+
+/* FC services */
+struct fc_service_cap {
+ struct dev_fc_svc_cap dev_fc_cap;
+
+ /* Parent QPC */
+ u32 parent_qpc_size; /* 256B */
+
+ /* Child QPC */
+ u32 child_qpc_size; /* 256B */
+
+ /* SQ */
+ u32 sqe_size; /* 128B(in linked list mode) */
+
+ /* SCQ */
+ u32 scqc_size; /* Size of the Context 32B */
+ u32 scqe_size; /* 64B */
+
+ /* SRQ */
+ u32 srqc_size; /* Size of SRQ Context (64B) */
+ u32 srqe_size; /* 32B */
+};
+
+struct dev_roce_svc_own_cap {
+ u32 max_qps;
+ u32 max_cqs;
+ u32 max_srqs;
+ u32 max_mpts;
+ u32 max_drc_qps;
+
+ u32 cmtt_cl_start;
+ u32 cmtt_cl_end;
+ u32 cmtt_cl_sz;
+
+ u32 dmtt_cl_start;
+ u32 dmtt_cl_end;
+ u32 dmtt_cl_sz;
+
+ u32 wqe_cl_start;
+ u32 wqe_cl_end;
+ u32 wqe_cl_sz;
+
+ u32 qpc_entry_sz;
+ u32 max_wqes;
+ u32 max_rq_sg;
+ u32 max_sq_inline_data_sz;
+ u32 max_rq_desc_sz;
+
+ u32 rdmarc_entry_sz;
+ u32 max_qp_init_rdma;
+ u32 max_qp_dest_rdma;
+
+ u32 max_srq_wqes;
+ u32 reserved_srqs;
+ u32 max_srq_sge;
+ u32 srqc_entry_sz;
+
+ u32 max_msg_sz; /* Message size 2GB */
+};
+
+/* RDMA service capability structure */
+struct dev_rdma_svc_cap {
+ /* ROCE service unique parameter structure */
+ struct dev_roce_svc_own_cap roce_own_cap;
+};
+
+/* Defines the RDMA service capability flag */
+enum {
+ RDMA_BMME_FLAG_LOCAL_INV = (1 << 0),
+ RDMA_BMME_FLAG_REMOTE_INV = (1 << 1),
+ RDMA_BMME_FLAG_FAST_REG_WR = (1 << 2),
+ RDMA_BMME_FLAG_RESERVED_LKEY = (1 << 3),
+ RDMA_BMME_FLAG_TYPE_2_WIN = (1 << 4),
+ RDMA_BMME_FLAG_WIN_TYPE_2B = (1 << 5),
+
+ RDMA_DEV_CAP_FLAG_XRC = (1 << 6),
+ RDMA_DEV_CAP_FLAG_MEM_WINDOW = (1 << 7),
+ RDMA_DEV_CAP_FLAG_ATOMIC = (1 << 8),
+ RDMA_DEV_CAP_FLAG_APM = (1 << 9),
+};
+
+/* RDMA services */
+struct rdma_service_cap {
+ struct dev_rdma_svc_cap dev_rdma_cap;
+
+	u8 log_mtt; /* 1. the number of PAs in an MTT entry must be an
+		     * integer power of 2;
+		     * 2. expressed as a logarithm (each MTT entry can
+		     * contain 1, 2, 4, 8 or 16 PAs)
+		     */
+ /* todo: need to check whether related to max_mtt_seg */
+	u32 num_mtts; /* Number of MTT tables (4M);
+		       * actually the number of MTT segs
+		       */
+ u32 log_mtt_seg;
+ u32 mtt_entry_sz; /* MTT table size 8B, including 1 PA(64bits) */
+ u32 mpt_entry_sz; /* MPT table size (64B) */
+
+ u32 dmtt_cl_start;
+ u32 dmtt_cl_end;
+ u32 dmtt_cl_sz;
+
+	u8 log_rdmarc; /* 1. the number of PAs in an RDMArc entry must be an
+			* integer power of 2;
+			* 2. expressed as a logarithm (each entry can
+			* contain 1, 2, 4, 8 or 16 PAs)
+			*/
+
+ u32 reserved_qps; /* Number of reserved QP */
+ u32 max_sq_sg; /* Maximum SGE number of SQ (8) */
+	u32 max_sq_desc_sz; /* Maximum WQE size of the SQ (1024B); maximum
+			     * inline size is 960B (944B aligned up to 960B),
+			     * 960B => wqebb alignment => 1024B
+ */
+	u32 wqebb_size; /* Currently 64B and 128B are supported,
+			 * defined here as 64 bytes
+ */
+
+	u32 max_cqes; /* Maximum depth of the CQ (64K-1) */
+ u32 reserved_cqs; /* Number of reserved CQ */
+ u32 cqc_entry_sz; /* Size of the CQC (64B/128B) */
+ u32 cqe_size; /* Size of CQE (32B) */
+
+ u32 reserved_mrws; /* Number of reserved MR/MR Window */
+
+	u32 max_fmr_maps; /* maximum number of maps per FMR:
+			   * (1 << (32 - ilog2(num_mpt))) - 1
+			   */
+
+ /* todo: max value needs to be confirmed */
+ /* MTT table number of Each MTT seg(3) */
+
+ u32 log_rdmarc_seg; /* table number of each RDMArc seg(3) */
+
+	/* Timeout time. Formula: Tr = 4.096us * 2^local_ca_ack_delay, range [Tr, 4Tr] */
+ u32 local_ca_ack_delay;
+ u32 num_ports; /* Physical port number */
+
+ u32 db_page_size; /* Size of the DB (4KB) */
+ u32 direct_wqe_size; /* Size of the DWQE (256B) */
+
+ u32 num_pds; /* Maximum number of PD (128K) */
+ u32 reserved_pds; /* Number of reserved PD */
+ u32 max_xrcds; /* Maximum number of xrcd (64K) */
+ u32 reserved_xrcds; /* Number of reserved xrcd */
+
+ u32 max_gid_per_port; /* gid number (16) of each port */
+	u32 gid_entry_sz; /* RoCE v2 GID entry is 32B,
+			   * extended for compatibility with RoCE v1
+			   */
+
+ u32 reserved_lkey; /* local_dma_lkey */
+ u32 num_comp_vectors; /* Number of complete vector (32) */
+ u32 page_size_cap; /* Supports 4K,8K,64K,256K,1M and 4M page_size */
+
+	u32 flags; /* RDMA capability flags */
+	u32 max_frpl_len; /* Maximum number of pages per FRMR registration */
+	u32 max_pkeys; /* Number of supported pkey groups */
+};
+
+/* PF OVS service resource structure defined */
+struct dev_ovs_svc_cap {
+ u32 max_pctxs; /* Parent Context: max specifications 1M */
+ u32 fake_vf_max_pctx;
+ u16 fake_vf_num;
+ u16 fake_vf_start_id;
+ u8 dynamic_qp_en;
+};
+
+/* OVS services */
+struct ovs_service_cap {
+ struct dev_ovs_svc_cap dev_ovs_cap;
+
+ u32 pctx_sz; /* 512B */
+};
+
+/* PF IPsec service resource structure defined */
+struct dev_ipsec_svc_cap {
+ u32 max_sactxs; /* max IPsec SA context num */
+ u16 max_cqs; /* max IPsec SCQC num */
+ u16 rsvd0;
+};
+
+/* IPsec services */
+struct ipsec_service_cap {
+ struct dev_ipsec_svc_cap dev_ipsec_cap;
+ u32 sactx_sz; /* 512B */
+};
+
+/* Defines the IRQ information structure */
+struct irq_info {
+	u16 msix_entry_idx; /* index of the corresponding MSI-X entry */
+ u32 irq_id; /* the IRQ number from OS */
+};
+
+struct interrupt_info {
+ u32 lli_set;
+ u32 interrupt_coalesc_set;
+ u16 msix_index;
+ u8 lli_credit_limit;
+ u8 lli_timer_cfg;
+ u8 pending_limt;
+ u8 coalesc_timer_cfg;
+ u8 resend_timer_cfg;
+};
+
+enum hinic3_msix_state {
+ HINIC3_MSIX_ENABLE,
+ HINIC3_MSIX_DISABLE,
+};
+
+enum hinic3_msix_auto_mask {
+ HINIC3_CLR_MSIX_AUTO_MASK,
+ HINIC3_SET_MSIX_AUTO_MASK,
+};
+
+enum func_type {
+ TYPE_PF,
+ TYPE_VF,
+ TYPE_PPF,
+ TYPE_UNKNOWN,
+};
+
+enum func_nic_state {
+ HINIC3_FUNC_NIC_DEL,
+ HINIC3_FUNC_NIC_ADD,
+};
+
+struct hinic3_init_para {
+ /* Record hinic_pcidev or NDIS_Adapter pointer address */
+ void *adapter_hdl;
+ /* Record pcidev or Handler pointer address
+ * for example: ioremap interface input parameter
+ */
+ void *pcidev_hdl;
+ /* Record pcidev->dev or Handler pointer address which used to
+ * dma address application or dev_err print the parameter
+ */
+ void *dev_hdl;
+
+ /* Configure virtual address, PF is bar1, VF is bar0/1 */
+ void *cfg_reg_base;
+ /* interrupt configuration register address, PF is bar2, VF is bar2/3
+ */
+ void *intr_reg_base;
+ /* for PF bar3 virtual address, if function is VF should set to NULL */
+ void *mgmt_reg_base;
+
+ u64 db_dwqe_len;
+ u64 db_base_phy;
+ /* the doorbell address, bar4/5 higher 4M space */
+ void *db_base;
+ /* direct wqe 4M, follow the doorbell address space */
+ void *dwqe_mapping;
+ void **hwdev;
+ void *chip_node;
+ /* if use polling mode, set it true */
+ bool poll;
+
+ u16 probe_fault_level;
+};
+
+/* B200 config BAR45 4MB, DB & DWQE both 2MB */
+#define HINIC3_DB_DWQE_SIZE 0x00400000
+
+/* db/dwqe page size: 4K */
+#define HINIC3_DB_PAGE_SIZE 0x00001000ULL
+#define HINIC3_DWQE_OFFSET 0x00000800ULL
+
+#define HINIC3_DB_MAX_AREAS (HINIC3_DB_DWQE_SIZE / HINIC3_DB_PAGE_SIZE)
+
+#ifndef IFNAMSIZ
+#define IFNAMSIZ 16
+#endif
+#define MAX_FUNCTION_NUM 4096
+
+struct card_node {
+ struct list_head node;
+ struct list_head func_list;
+ char chip_name[IFNAMSIZ];
+ int chip_id;
+ void *log_info;
+ void *dbgtool_info;
+ void *func_handle_array[MAX_FUNCTION_NUM];
+ unsigned char bus_num;
+ u16 func_num;
+ u32 rsvd1;
+ atomic_t channel_busy_cnt;
+ void *priv_data;
+ u64 rsvd2;
+};
+
+#define HINIC3_SYNFW_TIME_PERIOD (60 * 60 * 1000)
+#define HINIC3_SYNC_YEAR_OFFSET 1900
+#define HINIC3_SYNC_MONTH_OFFSET 1
+
+#define FAULT_SHOW_STR_LEN 16
+
+enum hinic3_fault_source_type {
+ /* same as FAULT_TYPE_CHIP */
+ HINIC3_FAULT_SRC_HW_MGMT_CHIP = 0,
+ /* same as FAULT_TYPE_UCODE */
+ HINIC3_FAULT_SRC_HW_MGMT_UCODE,
+ /* same as FAULT_TYPE_MEM_RD_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_MEM_RD_TIMEOUT,
+ /* same as FAULT_TYPE_MEM_WR_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_MEM_WR_TIMEOUT,
+ /* same as FAULT_TYPE_REG_RD_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_REG_RD_TIMEOUT,
+ /* same as FAULT_TYPE_REG_WR_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_REG_WR_TIMEOUT,
+ HINIC3_FAULT_SRC_SW_MGMT_UCODE,
+ HINIC3_FAULT_SRC_MGMT_WATCHDOG,
+ HINIC3_FAULT_SRC_MGMT_RESET = 8,
+ HINIC3_FAULT_SRC_HW_PHY_FAULT,
+ HINIC3_FAULT_SRC_TX_PAUSE_EXCP,
+ HINIC3_FAULT_SRC_PCIE_LINK_DOWN = 20,
+ HINIC3_FAULT_SRC_HOST_HEARTBEAT_LOST = 21,
+ HINIC3_FAULT_SRC_TX_TIMEOUT,
+ HINIC3_FAULT_SRC_TYPE_MAX,
+};
+
+union hinic3_fault_hw_mgmt {
+ u32 val[4];
+ /* valid only type == FAULT_TYPE_CHIP */
+ struct {
+ u8 node_id;
+ /* enum hinic_fault_err_level */
+ u8 err_level;
+ u16 err_type;
+ u32 err_csr_addr;
+ u32 err_csr_value;
+ /* func_id valid only if err_level == FAULT_LEVEL_SERIOUS_FLR */
+ u8 rsvd1;
+ u8 host_id;
+ u16 func_id;
+ } chip;
+
+ /* valid only if type == FAULT_TYPE_UCODE */
+ struct {
+ u8 cause_id;
+ u8 core_id;
+ u8 c_id;
+ u8 rsvd3;
+ u32 epc;
+ u32 rsvd4;
+ u32 rsvd5;
+ } ucode;
+
+ /* valid only if type == FAULT_TYPE_MEM_RD_TIMEOUT ||
+ * FAULT_TYPE_MEM_WR_TIMEOUT
+ */
+ struct {
+ u32 err_csr_ctrl;
+ u32 err_csr_data;
+ u32 ctrl_tab;
+ u32 mem_index;
+ } mem_timeout;
+
+ /* valid only if type == FAULT_TYPE_REG_RD_TIMEOUT ||
+ * FAULT_TYPE_REG_WR_TIMEOUT
+ */
+ struct {
+ u32 err_csr;
+ u32 rsvd6;
+ u32 rsvd7;
+ u32 rsvd8;
+ } reg_timeout;
+
+ struct {
+ /* 0: read; 1: write */
+ u8 op_type;
+ u8 port_id;
+ u8 dev_ad;
+ u8 rsvd9;
+ u32 csr_addr;
+ u32 op_data;
+ u32 rsvd10;
+ } phy_fault;
+};
+
+/* defined by chip */
+struct hinic3_fault_event {
+ /* enum hinic_fault_type */
+ u8 type;
+ u8 fault_level; /* sdk write fault level for uld event */
+ u8 rsvd0[2];
+ union hinic3_fault_hw_mgmt event;
+};
+
+struct hinic3_cmd_fault_event {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+ struct hinic3_fault_event event;
+};
+
+struct hinic3_sriov_state_info {
+ u8 enable;
+ u16 num_vfs;
+};
+
+enum hinic3_comm_event_type {
+ EVENT_COMM_PCIE_LINK_DOWN,
+ EVENT_COMM_HEART_LOST,
+ EVENT_COMM_FAULT,
+ EVENT_COMM_SRIOV_STATE_CHANGE,
+ EVENT_COMM_CARD_REMOVE,
+ EVENT_COMM_MGMT_WATCHDOG,
+ EVENT_COMM_MULTI_HOST_MGMT,
+};
+
+enum hinic3_event_service_type {
+ EVENT_SRV_COMM = 0,
+#define SERVICE_EVENT_BASE (EVENT_SRV_COMM + 1)
+ EVENT_SRV_NIC = SERVICE_EVENT_BASE + SERVICE_T_NIC,
+ EVENT_SRV_MIGRATE = SERVICE_EVENT_BASE + SERVICE_T_MIGRATE,
+};
+
+#define HINIC3_SRV_EVENT_TYPE(svc, type) ((((u32)(svc)) << 16) | (type))
+#ifndef HINIC3_EVENT_DATA_SIZE
+#define HINIC3_EVENT_DATA_SIZE 104
+#endif
+struct hinic3_event_info {
+ u16 service; /* enum hinic3_event_service_type */
+ u16 type;
+ u8 event_data[HINIC3_EVENT_DATA_SIZE];
+};
+
+typedef void (*hinic3_event_handler)(void *handle, struct hinic3_event_info *event);
+
+struct hinic3_func_nic_state {
+ u8 state;
+ u8 rsvd0;
+ u16 func_idx;
+
+ u8 vroce_flag;
+ u8 rsvd1[15];
+};
+
+/* *
+ * @brief hinic3_event_register - register hardware event
+ * @param dev: device pointer to hwdev
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ */
+void hinic3_event_register(void *dev, void *pri_handle,
+ hinic3_event_handler callback);
+
+/* *
+ * @brief hinic3_event_unregister - unregister hardware event
+ * @param dev: device pointer to hwdev
+ */
+void hinic3_event_unregister(void *dev);
+
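A minimal sketch of how a consumer would pair these two calls, assuming the usual probe/remove flow; the demo_* names and the handler body are illustrative placeholders, not interfaces defined by this header, and the kernel printk helpers are assumed to be available:

    static void demo_event_handler(void *pri_handle, struct hinic3_event_info *event)
    {
        /* pri_handle is the pointer that was passed to hinic3_event_register() */
        if (event->service == EVENT_SRV_COMM && event->type == EVENT_COMM_FAULT)
            pr_warn("hinic3 demo: fault event reported\n");
    }

    static void demo_probe(void *hwdev, void *priv)
    {
        /* one handler per hwdev; later events arrive with priv as pri_handle */
        hinic3_event_register(hwdev, priv, demo_event_handler);
    }

    static void demo_remove(void *hwdev)
    {
        /* must be balanced with the register call on teardown */
        hinic3_event_unregister(hwdev);
    }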
+/* *
+ * @brief hinic3_set_msix_auto_mask_state - set msix auto mask state
+ * @param hwdev: device pointer to hwdev
+ * @param msix_idx: msix id
+ * @param flag: msix auto_mask flag, 0-clear, 1-set
+ */
+void hinic3_set_msix_auto_mask_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_auto_mask flag);
+
+/* *
+ * @brief hinic3_set_msix_state - set msix state
+ * @param hwdev: device pointer to hwdev
+ * @param msix_idx: msix id
+ * @param flag: msix state flag, 0-enable, 1-disable
+ */
+void hinic3_set_msix_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_state flag);
+
+/* *
+ * @brief hinic3_misx_intr_clear_resend_bit - clear msix resend bit
+ * @param hwdev: device pointer to hwdev
+ * @param msix_idx: msix id
+ * @param clear_resend_en: 1-clear
+ */
+void hinic3_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx,
+ u8 clear_resend_en);
+
+/* *
+ * @brief hinic3_set_interrupt_cfg_direct - set interrupt cfg
+ * @param hwdev: device pointer to hwdev
+ * @param info: interrupt info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_interrupt_cfg_direct(void *hwdev,
+ struct interrupt_info *info,
+ u16 channel);
+
+int hinic3_set_interrupt_cfg(void *dev, struct interrupt_info info,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_interrupt_cfg - get interrupt cfg
+ * @param dev: device pointer to hwdev
+ * @param info: interrupt info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_interrupt_cfg(void *dev, struct interrupt_info *info,
+ u16 channel);
+
+/* *
+ * @brief hinic3_alloc_irqs - alloc irq
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param num: alloc number
+ * @param irq_info_array: alloc irq info
+ * @param act_num: alloc actual number
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_alloc_irqs(void *hwdev, enum hinic3_service_type type, u16 num,
+ struct irq_info *irq_info_array, u16 *act_num);
+
+/* *
+ * @brief hinic3_free_irq - free irq
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param irq_id: irq id
+ */
+void hinic3_free_irq(void *hwdev, enum hinic3_service_type type, u32 irq_id);
+
+/* *
+ * @brief hinic3_alloc_ceqs - alloc ceqs
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param num: alloc ceq number
+ * @param ceq_id_array: alloc ceq_id_array
+ * @param act_num: alloc actual number
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_alloc_ceqs(void *hwdev, enum hinic3_service_type type, int num,
+ int *ceq_id_array, int *act_num);
+
+/* *
+ * @brief hinic3_free_ceq - free ceq
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param ceq_id: ceq id
+ */
+void hinic3_free_ceq(void *hwdev, enum hinic3_service_type type, int ceq_id);
+
+/* *
+ * @brief hinic3_get_pcidev_hdl - get pcidev_hdl
+ * @param hwdev: device pointer to hwdev
+ * @retval non-null: success
+ * @retval null: failure
+ */
+void *hinic3_get_pcidev_hdl(void *hwdev);
+
+/* *
+ * @brief hinic3_ppf_idx - get ppf id
+ * @param hwdev: device pointer to hwdev
+ * @retval ppf id
+ */
+u8 hinic3_ppf_idx(void *hwdev);
+
+/* *
+ * @brief hinic3_get_chip_present_flag - get chip present flag
+ * @param hwdev: device pointer to hwdev
+ * @retval 1: chip is present
+ * @retval 0: chip is absent
+ */
+int hinic3_get_chip_present_flag(const void *hwdev);
+
+/* *
+ * @brief hinic3_get_heartbeat_status - get heartbeat status
+ * @param hwdev: device pointer to hwdev
+ * @retval heartbeat status
+ */
+u32 hinic3_get_heartbeat_status(void *hwdev);
+
+/* *
+ * @brief hinic3_support_nic - function support nic
+ * @param hwdev: device pointer to hwdev
+ * @param cap: nic service capability
+ * @retval true: function support nic
+ * @retval false: function not support nic
+ */
+bool hinic3_support_nic(void *hwdev, struct nic_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_ipsec - function support ipsec
+ * @param hwdev: device pointer to hwdev
+ * @param cap: ipsec service capability
+ * @retval true: function support ipsec
+ * @retval false: function not support ipsec
+ */
+bool hinic3_support_ipsec(void *hwdev, struct ipsec_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_roce - function support roce
+ * @param hwdev: device pointer to hwdev
+ * @param cap: roce service capability
+ * @retval true: function support roce
+ * @retval false: function not support roce
+ */
+bool hinic3_support_roce(void *hwdev, struct rdma_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_fc - function support fc
+ * @param hwdev: device pointer to hwdev
+ * @param cap: fc service capability
+ * @retval true: function support fc
+ * @retval false: function not support fc
+ */
+bool hinic3_support_fc(void *hwdev, struct fc_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_rdma - function support rdma
+ * @param hwdev: device pointer to hwdev
+ * @param cap: rdma service capability
+ * @retval true: function support rdma
+ * @retval false: function not support rdma
+ */
+bool hinic3_support_rdma(void *hwdev, struct rdma_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_ovs - function support ovs
+ * @param hwdev: device pointer to hwdev
+ * @param cap: ovs service capability
+ * @retval true: function support ovs
+ * @retval false: function not support ovs
+ */
+bool hinic3_support_ovs(void *hwdev, struct ovs_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_vbs - function support vbs
+ * @param hwdev: device pointer to hwdev
+ * @param cap: vbs service capability
+ * @retval true: function support vbs
+ * @retval false: function not support vbs
+ */
+bool hinic3_support_vbs(void *hwdev, struct vbs_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_toe - function support toe
+ * @param hwdev: device pointer to hwdev
+ * @param cap: toe service capability
+ * @retval true: function support toe
+ * @retval false: function not support toe
+ */
+bool hinic3_support_toe(void *hwdev, struct toe_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_ppa - function support ppa
+ * @param hwdev: device pointer to hwdev
+ * @param cap: ppa service capability
+ * @retval true: function support ppa
+ * @retval false: function not support ppa
+ */
+bool hinic3_support_ppa(void *hwdev, struct ppa_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_bifur - function support bifur
+ * @param hwdev: device pointer to hwdev
+ * @param cap: bifur service capability
+ * @retval true: function support bifur
+ * @retval false: function not support bifur
+ */
+bool hinic3_support_bifur(void *hwdev, struct bifur_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_migr - function support migrate
+ * @param hwdev: device pointer to hwdev
+ * @param cap: migrate service capability
+ * @retval true: function support migrate
+ * @retval false: function not support migrate
+ */
+bool hinic3_support_migr(void *hwdev, struct migr_service_cap *cap);
+
+/* *
+ * @brief hinic3_sync_time - sync time to hardware
+ * @param hwdev: device pointer to hwdev
+ * @param time: time to sync
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_sync_time(void *hwdev, u64 time);
+
+/* *
+ * @brief hinic3_disable_mgmt_msg_report - disable mgmt report msg
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_disable_mgmt_msg_report(void *hwdev);
+
+/* *
+ * @brief hinic3_func_for_mgmt - get function service type
+ * @param hwdev: device pointer to hwdev
+ * @retval true: function for mgmt
+ * @retval false: function is not for mgmt
+ */
+bool hinic3_func_for_mgmt(void *hwdev);
+
+/* *
+ * @brief hinic3_set_pcie_order_cfg - set pcie order cfg
+ * @param handle: device pointer to hwdev
+ */
+void hinic3_set_pcie_order_cfg(void *handle);
+
+/* *
+ * @brief hinic3_init_hwdev - call to init hwdev
+ * @param para: device pointer to para
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_init_hwdev(struct hinic3_init_para *para);
+
+/* *
+ * @brief hinic3_free_hwdev - free hwdev
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_hwdev(void *hwdev);
+
+/* *
+ * @brief hinic3_detect_hw_present - detect hardware present
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_detect_hw_present(void *hwdev);
+
+/* *
+ * @brief hinic3_record_pcie_error - record pcie error
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_record_pcie_error(void *hwdev);
+
+/* *
+ * @brief hinic3_shutdown_hwdev - shutdown hwdev
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_shutdown_hwdev(void *hwdev);
+
+/* *
+ * @brief hinic3_set_ppf_flr_type - set ppf flr type
+ * @param hwdev: device pointer to hwdev
+ * @param ppf_flr_type: ppf flr type
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_ppf_flr_type(void *hwdev, enum hinic3_ppf_flr_type flr_type);
+
+/* *
+ * @brief hinic3_set_ppf_tbl_hotreplace_flag - set os hotreplace flag in ppf function table
+ * @param hwdev: device pointer to hwdev
+ * @param flag : os hotreplace flag : 0-not in os hotreplace 1-in os hotreplace
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_ppf_tbl_hotreplace_flag(void *hwdev, u8 flag);
+
+/* *
+ * @brief hinic3_get_mgmt_version - get management cpu version
+ * @param hwdev: device pointer to hwdev
+ * @param mgmt_ver: output management version
+ * @param version_size: size of the mgmt_ver buffer
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_mgmt_version(void *hwdev, u8 *mgmt_ver, u8 version_size,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_fw_version - get firmware version
+ * @param hwdev: device pointer to hwdev
+ * @param fw_ver: firmware version
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_fw_version(void *hwdev, struct hinic3_fw_version *fw_ver,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_bond_create_mode - get bond create mode
+ * @param hwdev: device pointer to hwdev
+ * @retval bond create mode
+ */
+u8 hinic3_get_bond_create_mode(void *hwdev);
+
+/* *
+ * @brief hinic3_global_func_id - get global function id
+ * @param hwdev: device pointer to hwdev
+ * @retval global function id
+ */
+u16 hinic3_global_func_id(void *hwdev);
+
+/* *
+ * @brief hinic3_vector_to_eqn - vector to eq id
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param vector: vector
+ * @retval eq id
+ */
+int hinic3_vector_to_eqn(void *hwdev, enum hinic3_service_type type,
+ int vector);
+
+/* *
+ * @brief hinic3_glb_pf_vf_offset - get vf offset id of pf
+ * @param hwdev: device pointer to hwdev
+ * @retval vf offset id
+ */
+u16 hinic3_glb_pf_vf_offset(void *hwdev);
+
+/* *
+ * @brief hinic3_pf_id_of_vf - get pf id of vf
+ * @param hwdev: device pointer to hwdev
+ * @retval pf id
+ */
+u8 hinic3_pf_id_of_vf(void *hwdev);
+
+/* *
+ * @brief hinic3_func_type - get function type
+ * @param hwdev: device pointer to hwdev
+ * @retval function type
+ */
+enum func_type hinic3_func_type(void *hwdev);
+
+/* *
+ * @brief hinic3_get_stateful_enable - get stateful status
+ * @param hwdev: device pointer to hwdev
+ * @retval stateful enable status
+ */
+bool hinic3_get_stateful_enable(void *hwdev);
+
+/* *
+ * @brief hinic3_get_timer_enable - get timer status
+ * @param hwdev: device pointer to hwdev
+ * @retval timer enable status
+ */
+bool hinic3_get_timer_enable(void *hwdev);
+
+/* *
+ * @brief hinic3_host_oq_id_mask - get oq id mask
+ * @param hwdev: device pointer to hwdev
+ * @retval oq id mask
+ */
+u8 hinic3_host_oq_id_mask(void *hwdev);
+
+/* *
+ * @brief hinic3_host_id - get host id
+ * @param hwdev: device pointer to hwdev
+ * @retval host id
+ */
+u8 hinic3_host_id(void *hwdev);
+
+/* *
+ * @brief hinic3_host_total_func - get host total function number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: host total function number
+ * @retval zero: failure
+ */
+u16 hinic3_host_total_func(void *hwdev);
+
+/* *
+ * @brief hinic3_func_max_nic_qnum - get max nic queue number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: max nic queue number
+ * @retval zero: failure
+ */
+u16 hinic3_func_max_nic_qnum(void *hwdev);
+
+/* *
+ * @brief hinic3_func_max_qnum - get max queue number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: max queue number
+ * @retval zero: failure
+ */
+u16 hinic3_func_max_qnum(void *hwdev);
+
+/* *
+ * @brief hinic3_ep_id - get ep id
+ * @param hwdev: device pointer to hwdev
+ * @retval ep id
+ */
+u8 hinic3_ep_id(void *hwdev); /* Obtain service_cap.ep_id */
+
+/* *
+ * @brief hinic3_er_id - get er id
+ * @param hwdev: device pointer to hwdev
+ * @retval er id
+ */
+u8 hinic3_er_id(void *hwdev); /* Obtain service_cap.er_id */
+
+/* *
+ * @brief hinic3_physical_port_id - get physical port id
+ * @param hwdev: device pointer to hwdev
+ * @retval physical port id
+ */
+u8 hinic3_physical_port_id(void *hwdev); /* Obtain service_cap.port_id */
+
+/* *
+ * @brief hinic3_func_max_vf - get vf number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: vf number
+ * @retval zero: failure
+ */
+u16 hinic3_func_max_vf(void *hwdev); /* Obtain service_cap.max_vf */
+
+/* *
+ * @brief hinic3_max_pf_num - get global max pf number
+ * @param hwdev: device pointer to hwdev
+ * @retval global max pf number
+ */
+u8 hinic3_max_pf_num(void *hwdev);
+
+/* *
+ * @brief hinic3_host_pf_num - get current host pf number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: pf number
+ * @retval zero: failure
+ */
+u32 hinic3_host_pf_num(void *hwdev); /* Obtain service_cap.pf_num */
+
+/* *
+ * @brief hinic3_host_pf_id_start - get current host pf id start
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: pf id start
+ * @retval zero: failure
+ */
+u32 hinic3_host_pf_id_start(void *hwdev); /* Obtain service_cap.pf_id_start */
+
+/* *
+ * @brief hinic3_pcie_itf_id - get pcie port id
+ * @param hwdev: device pointer to hwdev
+ * @retval pcie port id
+ */
+u8 hinic3_pcie_itf_id(void *hwdev);
+
+/* *
+ * @brief hinic3_vf_in_pf - get vf offset in pf
+ * @param hwdev: device pointer to hwdev
+ * @retval vf offset in pf
+ */
+u8 hinic3_vf_in_pf(void *hwdev);
+
+/* *
+ * @brief hinic3_cos_valid_bitmap - get cos valid bitmap
+ * @param hwdev: device pointer to hwdev
+ * @param func_dft_cos: output function default cos
+ * @param port_cos_bitmap: output port cos bitmap
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cos_valid_bitmap(void *hwdev, u8 *func_dft_cos, u8 *port_cos_bitmap);
+
+/* *
+ * @brief hinic3_stateful_init - init stateful resource
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_stateful_init(void *hwdev);
+
+/* *
+ * @brief hinic3_stateful_deinit - deinit stateful resource
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_stateful_deinit(void *hwdev);
+
+/* *
+ * @brief hinic3_free_stateful - sdk remove free stateful resource
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_stateful(void *hwdev);
+
+/* *
+ * @brief hinic3_need_init_stateful_default - get need init stateful default
+ * @param hwdev: device pointer to hwdev
+ */
+bool hinic3_need_init_stateful_default(void *hwdev);
+
+/* *
+ * @brief hinic3_get_card_present_state - get card present state
+ * @param hwdev: device pointer to hwdev
+ * @param card_present_state: return card present state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_card_present_state(void *hwdev, bool *card_present_state);
+
+/* *
+ * @brief hinic3_func_rx_tx_flush - function flush
+ * @param hwdev: device pointer to hwdev
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_func_rx_tx_flush(void *hwdev, u16 channel, bool wait_io);
+
+/* *
+ * @brief hinic3_flush_mgmt_workq - flush the mgmt work queue when the function is removed
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_flush_mgmt_workq(void *hwdev);
+
+/* *
+ * @brief hinic3_ceq_num - get toe ceq num
+ */
+u8 hinic3_ceq_num(void *hwdev);
+
+/* *
+ * @brief hinic3_intr_num - get intr num
+ */
+u16 hinic3_intr_num(void *hwdev);
+
+/* *
+ * @brief hinic3_flexq_en - get flexq en
+ */
+u8 hinic3_flexq_en(void *hwdev);
+
+/* *
+ * @brief hinic3_get_fake_vf_info - get fake_vf info
+ */
+int hinic3_get_fake_vf_info(void *hwdev, u8 *fake_vf_vld,
+ u8 *page_bit, u8 *pf_start_bit, u8 *map_host_id);
+
+/* *
+ * @brief hinic3_fault_event_report - report fault event
+ * @param hwdev: device pointer to hwdev
+ * @param src: fault event source, reference to enum hinic3_fault_source_type
+ * @param level: fault level, reference to enum hinic3_fault_err_level
+ */
+void hinic3_fault_event_report(void *hwdev, u16 src, u16 level);
+
+/* *
+ * @brief hinic3_probe_success - notify device probe successful
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_probe_success(void *hwdev);
+
+/* *
+ * @brief hinic3_set_func_svc_used_state - set function service used state
+ * @param hwdev: device pointer to hwdev
+ * @param svc_type: service type
+ * @param state: function used state
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_func_svc_used_state(void *hwdev, u16 svc_type, u8 state,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_self_test_result - get self test result
+ * @param hwdev: device pointer to hwdev
+ * @retval self test result
+ */
+u32 hinic3_get_self_test_result(void *hwdev);
+
+/* *
+ * @brief set_slave_host_enable - set slave host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id
+ * @param enable: true - enable the slave host, false - disable it
+ */
+void set_slave_host_enable(void *hwdev, u8 host_id, bool enable);
+
+/* *
+ * @brief hinic3_get_slave_bitmap - get slave host bitmap
+ * @param hwdev: device pointer to hwdev
+ * @param slave_host_bitmap: output slave host bitmap
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_slave_bitmap(void *hwdev, u8 *slave_host_bitmap);
+
+/* *
+ * @brief hinic3_get_slave_host_enable - get slave host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id
+ * @param slave_en: output slave host enable state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_slave_host_enable(void *hwdev, u8 host_id, u8 *slave_en);
+
+/* *
+ * @brief hinic3_set_host_migrate_enable - set migrate host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id
+ * @param enable: migrate enable state to set
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_host_migrate_enable(void *hwdev, u8 host_id, bool enable);
+
+/* *
+ * @brief hinic3_get_host_migrate_enable - get migrate host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id
+ * @param migrate_en: output migrate enable state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_host_migrate_enable(void *hwdev, u8 host_id, u8 *migrate_en);
+
+/* *
+ * @brief hinic3_is_slave_func - hwdev is slave func
+ * @param hwdev: device pointer to hwdev
+ * @param is_slave_func: output whether the function is a slave function
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_is_slave_func(const void *hwdev, bool *is_slave_func);
+
+/* *
+ * @brief hinic3_is_master_func - hwdev is master func
+ * @param hwdev: device pointer to hwdev
+ * @param is_master_func: output whether the function is a master function
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_is_master_func(const void *hwdev, bool *is_master_func);
+
+bool hinic3_is_multi_bm(void *hwdev);
+
+bool hinic3_is_slave_host(void *hwdev);
+
+bool hinic3_is_vm_slave_host(void *hwdev);
+
+bool hinic3_is_bm_slave_host(void *hwdev);
+
+bool hinic3_is_guest_vmsec_enable(void *hwdev);
+
+int hinic3_get_vfid_by_vfpci(void *hwdev, struct pci_dev *pdev, u16 *global_func_id);
+
+int hinic3_set_func_nic_state(void *hwdev, struct hinic3_func_nic_state *state);
+
+int hinic3_get_netdev_state(void *hwdev, u16 func_idx, int *opened);
+
+int hinic3_get_mhost_func_nic_enable(void *hwdev, u16 func_id, bool *en);
+
+int hinic3_get_dev_cap(void *hwdev);
+
+int hinic3_mbox_to_host_sync(void *hwdev, enum hinic3_mod_type mod,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout, u16 channel);
+
+int hinic3_get_func_vroce_enable(void *hwdev, u16 glb_func_idx, u8 *en);
+
+void hinic3_module_get(void *hwdev, enum hinic3_service_type type);
+void hinic3_module_put(void *hwdev, enum hinic3_service_type type);
+
+#endif
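The capability and interrupt helpers declared above are meant to be combined at service-driver init time: probe the service first, then allocate vectors against the matching service type and release them symmetrically on teardown. The sketch below is illustrative only; the demo_* name, the vector count, the include line and the NULL-cap tolerance are assumptions rather than interfaces defined here.

    #include <linux/errno.h>
    #include "hinic3_crm.h"    /* header name assumed from the includes used later in this patch */

    static int demo_reserve_nic_irqs(void *hwdev)
    {
        struct irq_info irqs[4] = { 0 };
        u16 act_num = 0;
        u16 i;
        int err;

        /* a NULL cap is assumed to only ask whether the service exists,
         * mirroring the hinic3_support_roce(hwdev, NULL) call later in this patch
         */
        if (!hinic3_support_nic(hwdev, NULL))
            return -EOPNOTSUPP;

        /* firmware may grant fewer vectors than requested: act_num <= 4 */
        err = hinic3_alloc_irqs(hwdev, SERVICE_T_NIC, 4, irqs, &act_num);
        if (err)
            return err;

        /* ... bind irqs[i].irq_id / irqs[i].msix_entry_idx to queues ... */

        /* symmetric release on teardown */
        for (i = 0; i < act_num; i++)
            hinic3_free_irq(hwdev, SERVICE_T_NIC, irqs[i].irq_id);

        return 0;
    }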
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c
new file mode 100644
index 0000000..9b5f017
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c
@@ -0,0 +1,1108 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/semaphore.h>
+
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_nic_dbg.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_rx.h"
+#include "hinic3_tx.h"
+#include "hinic3_dcb.h"
+#include "hinic3_nic.h"
+#include "hinic3_bond.h"
+#include "nic_mpu_cmd_defs.h"
+
+typedef int (*nic_driv_module)(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+struct nic_drv_module_handle {
+ enum driver_cmd_type driv_cmd_name;
+ nic_driv_module driv_func;
+};
+
+static int get_nic_drv_version(void *buf_out, const u32 *out_size)
+{
+ struct drv_version_info *ver_info = buf_out;
+ int err;
+
+ if (!buf_out || !out_size) {
+ pr_err("Buf_out or out_size is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(*ver_info)) {
+ pr_err("Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, sizeof(*ver_info));
+ return -EINVAL;
+ }
+
+ err = snprintf(ver_info->ver, sizeof(ver_info->ver), "%s %s",
+ HINIC3_NIC_DRV_VERSION, "2025-05-01_00:00:03");
+ if (err < 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int get_tx_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u16 q_id;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get tx info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(u32)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect in buf size from user :%u, expect: %lu\n",
+ in_size, sizeof(u32));
+ return -EINVAL;
+ }
+
+ q_id = (u16)(*((u32 *)buf_in));
+
+ return hinic3_dbg_get_sq_info(nic_dev->hwdev, q_id, buf_out, *out_size);
+}
+
+static int get_q_num(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get queue number\n");
+ return -EFAULT;
+ }
+
+ if (!buf_out || !out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Get queue number para buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(u16)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(u16));
+ return -EINVAL;
+ }
+
+ *((u16 *)buf_out) = nic_dev->q_params.num_qps;
+
+ return 0;
+}
+
+static int get_tx_wqe_info(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ const struct wqe_info *info = buf_in;
+ u16 wqebb_cnt = 1;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get tx wqe info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(struct wqe_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(struct wqe_info));
+ return -EINVAL;
+ }
+
+ return hinic3_dbg_get_wqe_info(nic_dev->hwdev, (u16)info->q_id,
+ (u16)info->wqe_id, wqebb_cnt,
+ buf_out, (u16 *)out_size, HINIC3_SQ);
+}
+
+static int get_rx_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct nic_rq_info *rq_info = buf_out;
+ u16 q_id;
+ int err;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get rx info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(u32)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(u32));
+ return -EINVAL;
+ }
+
+ q_id = (u16)(*((u32 *)buf_in));
+
+ err = hinic3_dbg_get_rq_info(nic_dev->hwdev, q_id, buf_out, *out_size);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Get rq info failed, ret is %d.\n", err);
+ return err;
+ }
+
+ rq_info->delta = (u16)nic_dev->rxqs[q_id].delta;
+ rq_info->ci = (u16)(nic_dev->rxqs[q_id].cons_idx &
+ nic_dev->rxqs[q_id].q_mask);
+ rq_info->sw_pi = nic_dev->rxqs[q_id].next_to_update;
+ rq_info->msix_vector = nic_dev->rxqs[q_id].irq_id;
+
+ rq_info->coalesc_timer_cfg = nic_dev->rxqs[q_id].last_coalesc_timer_cfg;
+ rq_info->pending_limt = nic_dev->rxqs[q_id].last_pending_limt;
+
+ return 0;
+}
+
+static int get_rx_wqe_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct wqe_info *info = buf_in;
+ u16 wqebb_cnt = 1;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get rx wqe info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(struct wqe_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(struct wqe_info));
+ return -EINVAL;
+ }
+
+ return hinic3_dbg_get_wqe_info(nic_dev->hwdev, (u16)info->q_id,
+ (u16)info->wqe_id, wqebb_cnt,
+ buf_out, (u16 *)out_size, HINIC3_RQ);
+}
+
+static int get_rx_cqe_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct wqe_info *info = buf_in;
+ u16 q_id = 0;
+ u16 idx = 0;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get rx cqe info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out || !out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (in_size != sizeof(struct wqe_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(struct wqe_info));
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(struct hinic3_rq_cqe)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, sizeof(struct hinic3_rq_cqe));
+ return -EINVAL;
+ }
+ q_id = (u16)info->q_id;
+ idx = (u16)info->wqe_id;
+
+ if (q_id >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid q_id[%u] >= %u.\n", q_id,
+ nic_dev->q_params.num_qps);
+ return -EFAULT;
+ }
+ if (idx >= nic_dev->rxqs[q_id].q_depth) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid wqe idx[%u] >= %u.\n", idx,
+ nic_dev->rxqs[q_id].q_depth);
+ return -EFAULT;
+ }
+
+ memcpy(buf_out, nic_dev->rxqs[q_id].rx_info[idx].cqe,
+ sizeof(struct hinic3_rq_cqe));
+
+ return 0;
+}
+
+static void clean_nicdev_stats(struct hinic3_nic_dev *nic_dev)
+{
+ u64_stats_update_begin(&nic_dev->stats.syncp);
+ nic_dev->stats.netdev_tx_timeout = 0;
+ nic_dev->stats.tx_carrier_off_drop = 0;
+ nic_dev->stats.tx_invalid_qid = 0;
+ nic_dev->stats.rsvd1 = 0;
+ nic_dev->stats.rsvd2 = 0;
+ u64_stats_update_end(&nic_dev->stats.syncp);
+}
+
+static int clear_func_static(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ int i;
+
+ *out_size = 0;
+#ifndef HAVE_NETDEV_STATS_IN_NETDEV
+ memset(&nic_dev->net_stats, 0, sizeof(nic_dev->net_stats));
+#endif
+ clean_nicdev_stats(nic_dev);
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ hinic3_rxq_clean_stats(&nic_dev->rxqs[i].rxq_stats);
+ hinic3_txq_clean_stats(&nic_dev->txqs[i].txq_stats);
+ }
+
+ return 0;
+}
+
+static int get_loopback_mode(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_nic_loop_mode *mode = buf_out;
+
+ if (!out_size || !mode)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*mode)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*mode));
+ return -EINVAL;
+ }
+
+ return hinic3_get_loopback_mode(nic_dev->hwdev, (u8 *)&mode->loop_mode,
+ (u8 *)&mode->loop_ctrl);
+}
+
+static int set_loopback_mode(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct hinic3_nic_loop_mode *mode = buf_in;
+ int err;
+
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't set loopback mode\n");
+ return -EFAULT;
+ }
+
+ if (!mode || !out_size || in_size != sizeof(*mode))
+ return -EINVAL;
+
+ if (*out_size != sizeof(*mode)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*mode));
+ return -EINVAL;
+ }
+
+ err = hinic3_set_loopback_mode(nic_dev->hwdev, (u8)mode->loop_mode,
+ (u8)mode->loop_ctrl);
+ if (err == 0)
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set loopback mode %u en %u succeed\n",
+ mode->loop_mode, mode->loop_ctrl);
+
+ return err;
+}
+
+enum hinic3_nic_link_mode {
+ HINIC3_LINK_MODE_AUTO = 0,
+ HINIC3_LINK_MODE_UP,
+ HINIC3_LINK_MODE_DOWN,
+ HINIC3_LINK_MODE_MAX,
+};
+
+static int set_link_mode_param_valid(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ const u32 *out_size)
+{
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't set link mode\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !out_size ||
+ in_size != sizeof(enum hinic3_nic_link_mode))
+ return -EINVAL;
+
+ if (*out_size != sizeof(enum hinic3_nic_link_mode)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(enum hinic3_nic_link_mode));
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int set_link_mode(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const enum hinic3_nic_link_mode *link = buf_in;
+ u8 link_status;
+
+ if (set_link_mode_param_valid(nic_dev, buf_in, in_size, out_size))
+ return -EFAULT;
+
+ switch (*link) {
+ case HINIC3_LINK_MODE_AUTO:
+ if (hinic3_get_link_state(nic_dev->hwdev, &link_status))
+ link_status = false;
+ hinic3_link_status_change(nic_dev, (bool)link_status);
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set link mode: auto succeed, now is link %s\n",
+ (link_status ? "up" : "down"));
+ break;
+ case HINIC3_LINK_MODE_UP:
+ hinic3_link_status_change(nic_dev, true);
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set link mode: up succeed\n");
+ break;
+ case HINIC3_LINK_MODE_DOWN:
+ hinic3_link_status_change(nic_dev, false);
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set link mode: down succeed\n");
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid link mode %d to set\n", *link);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int set_pf_bw_limit(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u32 pf_bw_limit;
+ int err;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct net_device *net_dev = nic_dev->netdev;
+
+ if (hinic3_support_roce(nic_dev->hwdev, NULL) &&
+ hinic3_is_bond_dev_status_actived(net_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "The rate limit func is not supported when RoCE bonding is enabled\n");
+ return -EINVAL;
+ }
+
+ if (HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "To set VF bandwidth rate, please use ip link cmd\n");
+ return -EINVAL;
+ }
+
+ if (!buf_in || !buf_out || in_size != sizeof(u32) ||
+ !out_size || *out_size != sizeof(u8))
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(nic_dev->hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ nic_io->direct = HINIC3_NIC_TX;
+ pf_bw_limit = *((u32 *)buf_in);
+
+ err = hinic3_set_pf_bw_limit(nic_dev->hwdev, pf_bw_limit);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to set pf bandwidth limit to %u%%\n",
+ pf_bw_limit);
+ if (err < 0)
+ return err;
+ }
+
+ *((u8 *)buf_out) = (u8)err;
+
+ return 0;
+}
+
+static int set_rx_pf_bw_limit(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u32 pf_bw_limit;
+ int err;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct net_device *net_dev = nic_dev->netdev;
+
+ if (hinic3_support_roce(nic_dev->hwdev, NULL) &&
+ hinic3_is_bond_dev_status_actived(net_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "The rate limit func is not supported when RoCE bonding is enabled\n");
+ return -EINVAL;
+ }
+
+ if (HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "To set VF bandwidth rate, please use ip link cmd\n");
+ return -EINVAL;
+ }
+
+ if (!buf_in || !buf_out || in_size != sizeof(u32) || !out_size || *out_size != sizeof(u8))
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(nic_dev->hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ nic_io->direct = HINIC3_NIC_RX;
+ pf_bw_limit = *((u32 *)buf_in);
+
+ err = hinic3_set_pf_bw_limit(nic_dev->hwdev, pf_bw_limit);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to set pf bandwidth limit to %d%%\n",
+ pf_bw_limit);
+ if (err < 0)
+ return err;
+ }
+
+ *((u8 *)buf_out) = (u8)err;
+
+ return 0;
+}
+
+static int get_pf_bw_limit(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 *rate_limit = (u32 *)buf_out;
+
+ if (HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "To get VF bandwidth rate, please use ip link cmd\n");
+ return -EINVAL;
+ }
+
+ if (!buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(u32) * 2) { /* 2: an array of two u32 entries, RX and TX */
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user: %d, expect: %lu\n",
+ *out_size, sizeof(u32) * 2);
+ return -EFAULT;
+ }
+
+ nic_io = hinic3_get_service_adapter(nic_dev->hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ rate_limit[HINIC3_NIC_RX] = nic_io->nic_cfg.pf_bw_rx_limit;
+ rate_limit[HINIC3_NIC_TX] = nic_io->nic_cfg.pf_bw_tx_limit;
+
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "read rate cfg success rx rate is: %u, tx rate is : %u\n",
+ rate_limit[HINIC3_NIC_RX], rate_limit[HINIC3_NIC_TX]);
+ return 0;
+}
+
+static int get_sset_count(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u32 count;
+
+ if (!buf_in || in_size != sizeof(u32) || !out_size ||
+ *out_size != sizeof(u32) || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid parameters, in_size: %u\n", in_size);
+ return -EINVAL;
+ }
+
+ switch (*((u32 *)buf_in)) {
+ case HINIC3_SHOW_SSET_IO_STATS:
+ count = hinic3_get_io_stats_size(nic_dev);
+ break;
+ default:
+ count = 0;
+ break;
+ }
+
+ *((u32 *)buf_out) = count;
+
+ return 0;
+}
+
+static int get_sset_stats(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_show_item *items = buf_out;
+ u32 sset, count, size;
+ int err;
+
+ if (!buf_in || in_size != sizeof(u32) || !out_size || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid parameters, in_size: %u\n", in_size);
+ return -EINVAL;
+ }
+
+ size = sizeof(u32);
+ err = get_sset_count(nic_dev, buf_in, in_size, &count, &size);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Get sset count failed, ret=%d\n", err);
+ return -EINVAL;
+ }
+ if (count * sizeof(*items) != *out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, count * sizeof(*items));
+ return -EINVAL;
+ }
+
+ sset = *((u32 *)buf_in);
+
+ switch (sset) {
+ case HINIC3_SHOW_SSET_IO_STATS:
+ err = hinic3_get_io_stats(nic_dev, items);
+ if (err < 0)
+ return -EINVAL;
+ break;
+
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unknown %u to get stats\n", sset);
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+static int update_pcp_dscp_cfg(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dcb_config *wanted_dcb_cfg,
+ const struct hinic3_mt_qos_dev_cfg *qos_in)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int i;
+ u8 cos_num = 0, valid_cos_bitmap = 0;
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_PCP2COS) {
+ for (i = 0; i < NIC_DCB_UP_MAX; i++) {
+ if (!(dcb->func_dft_cos_bitmap &
+ BIT(qos_in->pcp2cos[i]))) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid cos=%u, func cos valid map is %u",
+ qos_in->pcp2cos[i],
+ dcb->func_dft_cos_bitmap);
+ return -EINVAL;
+ }
+
+ if ((BIT(qos_in->pcp2cos[i]) & valid_cos_bitmap) == 0) {
+ valid_cos_bitmap |= (u8)BIT(qos_in->pcp2cos[i]);
+ cos_num++;
+ }
+ }
+
+ memcpy(wanted_dcb_cfg->pcp2cos, qos_in->pcp2cos,
+ sizeof(qos_in->pcp2cos));
+ wanted_dcb_cfg->pcp_user_cos_num = cos_num;
+ wanted_dcb_cfg->pcp_valid_cos_map = valid_cos_bitmap;
+ }
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_DSCP2COS) {
+ cos_num = 0;
+ valid_cos_bitmap = 0;
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++) {
+ u8 cos = qos_in->dscp2cos[i] == DBG_DFLT_DSCP_VAL ?
+ dcb->wanted_dcb_cfg.dscp2cos[i] :
+ qos_in->dscp2cos[i];
+
+ if (cos >= NIC_DCB_UP_MAX ||
+ !(dcb->func_dft_cos_bitmap & BIT(cos))) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid cos=%u, func cos valid map is %u",
+ cos, dcb->func_dft_cos_bitmap);
+ return -EINVAL;
+ }
+
+ if ((BIT(cos) & valid_cos_bitmap) == 0) {
+ valid_cos_bitmap |= (u8)BIT(cos);
+ cos_num++;
+ }
+ }
+
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++)
+ wanted_dcb_cfg->dscp2cos[i] =
+ qos_in->dscp2cos[i] == DBG_DFLT_DSCP_VAL ?
+ dcb->hw_dcb_cfg.dscp2cos[i] :
+ qos_in->dscp2cos[i];
+ wanted_dcb_cfg->dscp_user_cos_num = cos_num;
+ wanted_dcb_cfg->dscp_valid_cos_map = valid_cos_bitmap;
+ }
+
+ return 0;
+}
+
+static int update_wanted_qos_cfg(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dcb_config *wanted_dcb_cfg,
+ const struct hinic3_mt_qos_dev_cfg *qos_in)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int ret;
+ u8 cos_num, valid_cos_bitmap;
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_TRUST) {
+ if (qos_in->trust > HINIC3_DCB_DSCP) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid trust=%u\n", qos_in->trust);
+ return -EINVAL;
+ }
+
+ wanted_dcb_cfg->trust = qos_in->trust;
+ }
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_DFT_COS) {
+ if (!(BIT(qos_in->dft_cos) & dcb->func_dft_cos_bitmap)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid dft_cos=%u\n", qos_in->dft_cos);
+ return -EINVAL;
+ }
+
+ wanted_dcb_cfg->default_cos = qos_in->dft_cos;
+ }
+
+ ret = update_pcp_dscp_cfg(nic_dev, wanted_dcb_cfg, qos_in);
+ if (ret)
+ return ret;
+
+ if (wanted_dcb_cfg->trust == HINIC3_DCB_PCP) {
+ cos_num = wanted_dcb_cfg->pcp_user_cos_num;
+ valid_cos_bitmap = wanted_dcb_cfg->pcp_valid_cos_map;
+ } else {
+ cos_num = wanted_dcb_cfg->dscp_user_cos_num;
+ valid_cos_bitmap = wanted_dcb_cfg->dscp_valid_cos_map;
+ }
+
+ if (!(BIT(wanted_dcb_cfg->default_cos) & valid_cos_bitmap)) {
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Current default_cos=%u, change to %u\n",
+ wanted_dcb_cfg->default_cos,
+ (u8)fls(valid_cos_bitmap) - 1);
+ wanted_dcb_cfg->default_cos = (u8)fls(valid_cos_bitmap) - 1;
+ }
+
+ return 0;
+}
+
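The fallback at the end of update_wanted_qos_cfg() is worth a concrete example: fls() returns the 1-based index of the highest set bit, so fls(bitmap) - 1 is the highest CoS that is actually enabled. A standalone sketch of just that selection follows; the demo_* name is illustrative and the non-empty-bitmap guard is only there to make the sketch self-contained.

    #include <linux/bitops.h>   /* fls() */
    #include <linux/bits.h>     /* BIT() */

    static u8 demo_pick_default_cos(u8 wanted, u8 valid_cos_bitmap)
    {
        if (!valid_cos_bitmap)
            return 0;       /* the driver guarantees a non-empty map at this point */
        if (BIT(wanted) & valid_cos_bitmap)
            return wanted;  /* the wanted CoS is enabled, keep it */
        /* e.g. bitmap 0x0b (CoS 0, 1, 3 enabled) and wanted 2: fls(0x0b) - 1 == 3 */
        return (u8)fls(valid_cos_bitmap) - 1;
    }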
+static int dcb_mt_qos_map(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ const struct hinic3_mt_qos_dev_cfg *qos_in = buf_in;
+ struct hinic3_mt_qos_dev_cfg *qos_out = buf_out;
+ u8 i;
+ int err;
+
+ if (!buf_out || !out_size || !buf_in)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*qos_out) || in_size != sizeof(*qos_in)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*qos_in));
+ return -EINVAL;
+ }
+
+ memcpy(qos_out, qos_in, sizeof(*qos_in));
+ qos_out->head.status = 0;
+ if (qos_in->op_code & MT_DCB_OPCODE_WR) {
+ memcpy(&dcb->wanted_dcb_cfg, &dcb->hw_dcb_cfg,
+ sizeof(struct hinic3_dcb_config));
+ err = update_wanted_qos_cfg(nic_dev, &dcb->wanted_dcb_cfg,
+ qos_in);
+ if (err) {
+ qos_out->head.status = MT_EINVAL;
+ return 0;
+ }
+
+ err = hinic3_dcbcfg_set_up_bitmap(nic_dev);
+ if (err)
+ qos_out->head.status = MT_EIO;
+ } else {
+ qos_out->dft_cos = dcb->hw_dcb_cfg.default_cos;
+ qos_out->trust = dcb->hw_dcb_cfg.trust;
+ for (i = 0; i < NIC_DCB_UP_MAX; i++)
+ qos_out->pcp2cos[i] = dcb->hw_dcb_cfg.pcp2cos[i];
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++)
+ qos_out->dscp2cos[i] = dcb->hw_dcb_cfg.dscp2cos[i];
+ }
+
+ return 0;
+}
+
+static int dcb_mt_dcb_state(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct hinic3_mt_dcb_state *dcb_in = buf_in;
+ struct hinic3_mt_dcb_state *dcb_out = buf_out;
+ int err;
+ u8 user_cos_num;
+ u8 netif_run = 0;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*dcb_out) || in_size != sizeof(*dcb_in)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*dcb_in));
+ return -EINVAL;
+ }
+
+ user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+ memcpy(dcb_out, dcb_in, sizeof(*dcb_in));
+ dcb_out->head.status = 0;
+ if (dcb_in->op_code & MT_DCB_OPCODE_WR) {
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ==
+ dcb_in->state)
+ return 0;
+
+ if (netif_running(nic_dev->netdev)) {
+ netif_run = 1;
+ hinic3_vport_down(nic_dev);
+ }
+
+ err = hinic3_setup_cos(nic_dev->netdev,
+ dcb_in->state ? user_cos_num : 0,
+ netif_run);
+ if (err)
+ goto setup_cos_fail;
+
+ if (netif_run) {
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_fail;
+ }
+ } else {
+ dcb_out->state = !!test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ }
+
+ return 0;
+
+vport_up_fail:
+ hinic3_setup_cos(nic_dev->netdev, dcb_in->state ? 0 : user_cos_num,
+ netif_run);
+
+setup_cos_fail:
+ if (netif_run)
+ hinic3_vport_up(nic_dev);
+
+ return err;
+}
+
+static int dcb_mt_hw_qos_get(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ const struct hinic3_mt_qos_cos_cfg *cos_cfg_in = buf_in;
+ struct hinic3_mt_qos_cos_cfg *cos_cfg_out = buf_out;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*cos_cfg_out) ||
+ in_size != sizeof(*cos_cfg_in)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*cos_cfg_in));
+ return -EINVAL;
+ }
+
+ memcpy(cos_cfg_out, cos_cfg_in, sizeof(*cos_cfg_in));
+ cos_cfg_out->head.status = 0;
+
+ cos_cfg_out->port_id = hinic3_physical_port_id(nic_dev->hwdev);
+ cos_cfg_out->func_cos_bitmap = (u8)dcb->func_dft_cos_bitmap;
+ cos_cfg_out->port_cos_bitmap = (u8)dcb->port_dft_cos_bitmap;
+ cos_cfg_out->func_max_cos_num = dcb->cos_config_num_max;
+
+ return 0;
+}
+
+static int get_inter_num(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u16 intr_num;
+
+ intr_num = hinic3_intr_num(nic_dev->hwdev);
+
+ if (!buf_out || !out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_out or out_size is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(u16)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, sizeof(u16));
+ return -EFAULT;
+ }
+ *(u16 *)buf_out = intr_num;
+
+ return 0;
+}
+
+static int get_netdev_name(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ if (!buf_out || !out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_out or out_size is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != IFNAMSIZ) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect out buf size from user :%u, expect: %u\n",
+ *out_size, IFNAMSIZ);
+ return -EFAULT;
+ }
+
+ strscpy(buf_out, nic_dev->netdev->name, IFNAMSIZ);
+
+ return 0;
+}
+
+static int get_netdev_tx_timeout(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct net_device *net_dev = nic_dev->netdev;
+ int *tx_timeout = buf_out;
+
+ if (!buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(int)) {
+ nicif_err(nic_dev, drv, net_dev,
+ "Unexpect buf size from user, out_size: %u, expect: %lu\n",
+ *out_size, sizeof(int));
+ return -EINVAL;
+ }
+
+ *tx_timeout = net_dev->watchdog_timeo;
+
+ return 0;
+}
+
+static int set_netdev_tx_timeout(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct net_device *net_dev = nic_dev->netdev;
+ const int *tx_timeout = buf_in;
+
+ if (!buf_in)
+ return -EINVAL;
+
+ if (in_size != sizeof(int)) {
+ nicif_err(nic_dev, drv, net_dev,
+ "Unexpect buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(int));
+ return -EINVAL;
+ }
+
+ net_dev->watchdog_timeo = *tx_timeout * HZ;
+ nicif_info(nic_dev, drv, net_dev,
+ "Set tx timeout check period to %ds\n", *tx_timeout);
+
+ return 0;
+}
+
+static int get_xsfp_present(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct mag_cmd_get_xsfp_present *sfp_abs = buf_out;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*sfp_abs) || in_size != sizeof(*sfp_abs)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*sfp_abs));
+ return -EINVAL;
+ }
+
+ sfp_abs->head.status = 0;
+ sfp_abs->abs_status = hinic3_if_sfp_absent(nic_dev->hwdev);
+
+ return 0;
+}
+
+static int get_xsfp_tlv_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct drv_mag_cmd_get_xsfp_tlv_rsp *sfp_tlv_info = buf_out;
+ const struct mag_cmd_get_xsfp_tlv_req *sfp_tlv_info_req = buf_in;
+ int err;
+
+ if ((buf_in == NULL) || (buf_out == NULL) || (out_size == NULL))
+ return -EINVAL;
+
+ if (*out_size != sizeof(*sfp_tlv_info) ||
+ in_size != sizeof(*sfp_tlv_info_req)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*sfp_tlv_info));
+ return -EINVAL;
+ }
+
+ err = hinic3_get_sfp_tlv_info(nic_dev->hwdev,
+ sfp_tlv_info, sfp_tlv_info_req);
+ if (err != 0) {
+ sfp_tlv_info->head.status = MT_EIO;
+ return 0;
+ }
+
+ return 0;
+}
+
+static int get_xsfp_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct mag_cmd_get_xsfp_info *sfp_info = buf_out;
+ int err;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*sfp_info) || in_size != sizeof(*sfp_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unexpect buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*sfp_info));
+ return -EINVAL;
+ }
+
+ err = hinic3_get_sfp_info(nic_dev->hwdev, sfp_info);
+ if (err) {
+ sfp_info->head.status = MT_EIO;
+ return 0;
+ }
+
+ return 0;
+}
+
+static const struct nic_drv_module_handle nic_driv_module_cmd_handle[] = {
+ {TX_INFO, get_tx_info},
+ {Q_NUM, get_q_num},
+ {TX_WQE_INFO, get_tx_wqe_info},
+ {RX_INFO, get_rx_info},
+ {RX_WQE_INFO, get_rx_wqe_info},
+ {RX_CQE_INFO, get_rx_cqe_info},
+ {GET_INTER_NUM, get_inter_num},
+ {CLEAR_FUNC_STASTIC, clear_func_static},
+ {GET_LOOPBACK_MODE, get_loopback_mode},
+ {SET_LOOPBACK_MODE, set_loopback_mode},
+ {SET_LINK_MODE, set_link_mode},
+ {SET_TX_PF_BW_LIMIT, set_pf_bw_limit},
+ {GET_PF_BW_LIMIT, get_pf_bw_limit},
+ {GET_SSET_COUNT, get_sset_count},
+ {GET_SSET_ITEMS, get_sset_stats},
+ {DCB_STATE, dcb_mt_dcb_state},
+ {QOS_DEV, dcb_mt_qos_map},
+ {GET_QOS_COS, dcb_mt_hw_qos_get},
+ {GET_ULD_DEV_NAME, get_netdev_name},
+ {GET_TX_TIMEOUT, get_netdev_tx_timeout},
+ {SET_TX_TIMEOUT, set_netdev_tx_timeout},
+ {GET_XSFP_PRESENT, get_xsfp_present},
+ {GET_XSFP_INFO, get_xsfp_info},
+ {GET_XSFP_INFO_COMP_CMIS, get_xsfp_tlv_info},
+ {SET_RX_PF_BW_LIMIT, set_rx_pf_bw_limit}
+};
+
+static int send_to_nic_driver(struct hinic3_nic_dev *nic_dev,
+ u32 cmd, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ int index, num_cmds = (int)(sizeof(nic_driv_module_cmd_handle) /
+ sizeof(nic_driv_module_cmd_handle[0]));
+ enum driver_cmd_type cmd_type = (enum driver_cmd_type)cmd;
+ int err = 0;
+
+ if (cmd_type == DCB_STATE || cmd_type == QOS_DEV)
+ rtnl_lock();
+
+ mutex_lock(&nic_dev->nic_mutex);
+ for (index = 0; index < num_cmds; index++) {
+ if (cmd_type ==
+ nic_driv_module_cmd_handle[index].driv_cmd_name) {
+ err = nic_driv_module_cmd_handle[index].driv_func
+ (nic_dev, buf_in,
+ in_size, buf_out, out_size);
+ break;
+ }
+ }
+ mutex_unlock(&nic_dev->nic_mutex);
+
+ if (cmd_type == DCB_STATE || cmd_type == QOS_DEV)
+ rtnl_unlock();
+
+ if (index == num_cmds) {
+ pr_err("Can't find callback for %d\n", cmd_type);
+ return -EINVAL;
+ }
+
+ return err;
+}
+
+int nic_ioctl(void *uld_dev, u32 cmd, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ if (cmd == GET_DRV_VERSION)
+ return get_nic_drv_version(buf_out, out_size);
+ else if (!uld_dev)
+ return -EINVAL;
+
+ return send_to_nic_driver(uld_dev, cmd, buf_in,
+ in_size, buf_out, out_size);
+}
+
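A minimal caller-side sketch of the contract enforced above: the command selects a handler from nic_driv_module_cmd_handle[], and each handler validates that *out_size matches exactly what it will write. The wrapper name below is illustrative; the fact that Q_NUM needs no input buffer follows from get_q_num() above.

    static int demo_query_queue_number(void *uld_dev, u16 *num_qps)
    {
        u32 out_size = sizeof(*num_qps);

        /* dispatched to get_q_num(); buf_in is unused for this command */
        return nic_ioctl(uld_dev, Q_NUM, NULL, 0, num_qps, &out_size);
    }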
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c
new file mode 100644
index 0000000..aa53c19
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c
@@ -0,0 +1,482 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+
+#include "hinic3_crm.h"
+#include "hinic3_lld.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_dcb.h"
+
+#define MAX_BW_PERCENT 100
+
+u8 hinic3_get_dev_user_cos_num(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+
+ if (dcb->hw_dcb_cfg.trust == HINIC3_DCB_PCP)
+ return dcb->hw_dcb_cfg.pcp_user_cos_num;
+ if (dcb->hw_dcb_cfg.trust == HINIC3_DCB_DSCP)
+ return dcb->hw_dcb_cfg.dscp_user_cos_num;
+ return 0;
+}
+
+u8 hinic3_get_dev_valid_cos_map(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+
+ if (dcb->hw_dcb_cfg.trust == HINIC3_DCB_PCP)
+ return dcb->hw_dcb_cfg.pcp_valid_cos_map;
+ if (dcb->hw_dcb_cfg.trust == HINIC3_DCB_DSCP)
+ return dcb->hw_dcb_cfg.dscp_valid_cos_map;
+ return 0;
+}
+
+void hinic3_update_qp_cos_cfg(struct hinic3_nic_dev *nic_dev, u8 num_cos)
+{
+ struct hinic3_dcb_config *hw_dcb_cfg = &nic_dev->dcb->hw_dcb_cfg;
+ struct hinic3_dcb_config *wanted_dcb_cfg =
+ &nic_dev->dcb->wanted_dcb_cfg;
+ u8 valid_cos_map = hinic3_get_dev_valid_cos_map(nic_dev);
+ u8 cos_qp_num, cos_qp_offset = 0;
+ u8 i, remainder, num_qp_per_cos;
+
+ if (num_cos == 0 || nic_dev->q_params.num_qps == 0)
+ return;
+
+ num_qp_per_cos = (u8)(nic_dev->q_params.num_qps / num_cos);
+ remainder = nic_dev->q_params.num_qps % num_cos;
+
+ memset(hw_dcb_cfg->cos_qp_offset, 0, sizeof(hw_dcb_cfg->cos_qp_offset));
+ memset(hw_dcb_cfg->cos_qp_num, 0, sizeof(hw_dcb_cfg->cos_qp_num));
+
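+	/* Distribute queues evenly across the enabled CoS values; the first
+	 * 'remainder' CoS each take one extra queue (the comma expression
+	 * below decrements the remainder while yielding 1).
+	 */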
+ for (i = 0; i < PCP_MAX_UP; i++) {
+ if (BIT(i) & valid_cos_map) {
+ cos_qp_num = num_qp_per_cos + ((remainder > 0) ?
+ (remainder--, 1) : 0);
+
+ hw_dcb_cfg->cos_qp_offset[i] = cos_qp_offset;
+ hw_dcb_cfg->cos_qp_num[i] = cos_qp_num;
+ hinic3_info(nic_dev, drv, "cos %u, cos_qp_offset=%u cos_qp_num=%u\n",
+ i, cos_qp_offset, cos_qp_num);
+
+ cos_qp_offset += cos_qp_num;
+ valid_cos_map -= (int)BIT(i);
+ }
+ }
+
+ memcpy(wanted_dcb_cfg->cos_qp_offset, hw_dcb_cfg->cos_qp_offset,
+ sizeof(hw_dcb_cfg->cos_qp_offset));
+ memcpy(wanted_dcb_cfg->cos_qp_num, hw_dcb_cfg->cos_qp_num,
+ sizeof(hw_dcb_cfg->cos_qp_num));
+}
+
+void hinic3_update_tx_db_cos(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
+{
+ struct hinic3_dcb_config *hw_dcb_cfg = &nic_dev->dcb->hw_dcb_cfg;
+ u8 i;
+ u16 start_qid, q_num;
+
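+	/* Map every queue to the default CoS first; when DCB is enabled,
+	 * remap each CoS's queue range according to the hardware DCB config.
+	 */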
+ hinic3_set_txq_cos(nic_dev, 0, nic_dev->q_params.num_qps,
+ hw_dcb_cfg->default_cos);
+ if (!dcb_en)
+ return;
+
+ for (i = 0; i < NIC_DCB_COS_MAX; i++) {
+ q_num = (u16)hw_dcb_cfg->cos_qp_num[i];
+ if (q_num) {
+ start_qid = (u16)hw_dcb_cfg->cos_qp_offset[i];
+
+ hinic3_set_txq_cos(nic_dev, start_qid, q_num, i);
+ hinic3_info(nic_dev, drv, "update tx db cos, start_qid %u, q_num=%u cos=%u\n",
+ start_qid, q_num, i);
+ }
+ }
+}
+
+static int hinic3_set_tx_cos_state(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ struct hinic3_dcb_config *hw_dcb_cfg = &dcb->hw_dcb_cfg;
+ struct hinic3_dcb_state dcb_state = {0};
+ u8 i;
+ int err;
+
+ u32 pcp2cos_size = sizeof(dcb_state.pcp2cos);
+ u32 dscp2cos_size = sizeof(dcb_state.dscp2cos);
+
+ dcb_state.dcb_on = dcb_en;
+ dcb_state.default_cos = hw_dcb_cfg->default_cos;
+ dcb_state.trust = hw_dcb_cfg->trust;
+
+ if (dcb_en) {
+ for (i = 0; i < NIC_DCB_COS_MAX; i++)
+ dcb_state.pcp2cos[i] = hw_dcb_cfg->pcp2cos[i];
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++)
+ dcb_state.dscp2cos[i] = hw_dcb_cfg->dscp2cos[i];
+ } else {
+ memset(dcb_state.pcp2cos, hw_dcb_cfg->default_cos,
+ pcp2cos_size);
+ memset(dcb_state.dscp2cos, hw_dcb_cfg->default_cos,
+ dscp2cos_size);
+ }
+
+ err = hinic3_set_dcb_state(nic_dev->hwdev, &dcb_state);
+ if (err)
+ hinic3_err(nic_dev, drv, "Failed to set dcb state\n");
+
+ return err;
+}
+
+int hinic3_configure_dcb_hw(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
+{
+ int err;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ err = hinic3_sync_dcb_state(nic_dev->hwdev, 1, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set dcb state failed\n");
+ return err;
+ }
+
+ hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
+ hinic3_update_tx_db_cos(nic_dev, dcb_en);
+
+ err = hinic3_set_tx_cos_state(nic_dev, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set tx cos state failed\n");
+ goto set_tx_cos_fail;
+ }
+
+ err = hinic3_rx_configure(nic_dev->netdev, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "rx configure failed\n");
+ goto rx_configure_fail;
+ }
+
+ if (dcb_en) {
+ set_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ set_bit(HINIC3_DCB_ENABLE, &nic_dev->nic_vram->flags);
+ } else {
+ clear_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ clear_bit(HINIC3_DCB_ENABLE, &nic_dev->nic_vram->flags);
+ }
+ return 0;
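+
+	/* Error paths: undo the previous steps using the opposite DCB state */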
+rx_configure_fail:
+ hinic3_set_tx_cos_state(nic_dev, dcb_en ? 0 : 1);
+
+set_tx_cos_fail:
+ hinic3_update_tx_db_cos(nic_dev, dcb_en ? 0 : 1);
+ hinic3_sync_dcb_state(nic_dev->hwdev, 1, dcb_en ? 0 : 1);
+
+ return err;
+}
+
+int hinic3_setup_cos(struct net_device *netdev, u8 cos, u8 netif_run)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int err;
+
+ if (cos && test_bit(HINIC3_SAME_RXTX, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev, "Failed to enable DCB while Symmetric RSS is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (cos > dcb->cos_config_num_max) {
+ nicif_err(nic_dev, drv, netdev,
+ "Invalid num_tc: %u, max cos: %u\n",
+ cos, dcb->cos_config_num_max);
+ return -EINVAL;
+ }
+
+ err = hinic3_configure_dcb_hw(nic_dev, cos ? 1 : 0);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static u8 get_cos_num(u8 hw_valid_cos_bitmap)
+{
+ u8 support_cos = 0;
+ u8 i;
+
+ for (i = 0; i < NIC_DCB_COS_MAX; i++)
+ if (hw_valid_cos_bitmap & BIT(i))
+ support_cos++;
+
+ return support_cos;
+}
+
+static void hinic3_sync_dcb_cfg(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_dcb_config *dcb_cfg)
+{
+ struct hinic3_dcb_config *hw_dcb_cfg = &nic_dev->dcb->hw_dcb_cfg;
+
+ memcpy(hw_dcb_cfg, dcb_cfg, sizeof(struct hinic3_dcb_config));
+}
+
+static int init_default_dcb_cfg(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dcb_config *dcb_cfg)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ u8 i, hw_dft_cos_map, port_cos_bitmap, dscp_ind;
+ int err;
+ int is_in_kexec;
+
+ err = hinic3_cos_valid_bitmap(nic_dev->hwdev,
+ &hw_dft_cos_map, &port_cos_bitmap);
+ if (err) {
+		hinic3_err(nic_dev, drv, "No cos supported\n");
+ return -EFAULT;
+ }
+
+ is_in_kexec = vram_get_kexec_flag();
+
+ dcb->func_dft_cos_bitmap = hw_dft_cos_map;
+ dcb->port_dft_cos_bitmap = port_cos_bitmap;
+
+ dcb->cos_config_num_max = get_cos_num(hw_dft_cos_map);
+
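+	/* On a kexec restart, keep the trust mode and default CoS already
+	 * stored in the DCB context instead of resetting them to defaults.
+	 */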
+ if (is_in_kexec == 0) {
+ dcb_cfg->trust = HINIC3_DCB_PCP;
+ dcb_cfg->default_cos = (u8)fls(dcb->func_dft_cos_bitmap) - 1;
+ } else {
+ dcb_cfg->trust = nic_dev->dcb->hw_dcb_cfg.trust;
+ dcb_cfg->default_cos = nic_dev->dcb->hw_dcb_cfg.default_cos;
+ }
+ dcb_cfg->pcp_user_cos_num = dcb->cos_config_num_max;
+ dcb_cfg->dscp_user_cos_num = dcb->cos_config_num_max;
+ dcb_cfg->pcp_valid_cos_map = hw_dft_cos_map;
+ dcb_cfg->dscp_valid_cos_map = hw_dft_cos_map;
+
+ for (i = 0; i < NIC_DCB_COS_MAX; i++) {
+ dcb_cfg->pcp2cos[i] = hw_dft_cos_map & BIT(i)
+ ? i : (u8)fls(dcb->func_dft_cos_bitmap) - 1;
+ for (dscp_ind = 0; dscp_ind < NIC_DCB_COS_MAX; dscp_ind++)
+ dcb_cfg->dscp2cos[i * NIC_DCB_DSCP_NUM + dscp_ind] = dcb_cfg->pcp2cos[i];
+ }
+
+ return 0;
+}
+
+void hinic3_dcb_reset_hw_config(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb_config dft_cfg = {0};
+
+ init_default_dcb_cfg(nic_dev, &dft_cfg);
+ hinic3_sync_dcb_cfg(nic_dev, &dft_cfg);
+
+ hinic3_info(nic_dev, drv, "Reset DCB configuration done\n");
+}
+
+int hinic3_configure_dcb(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ err = hinic3_sync_dcb_state(nic_dev->hwdev, 1,
+ test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)
+ ? 1 : 0);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set dcb state failed\n");
+ return err;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
+ hinic3_sync_dcb_cfg(nic_dev, &nic_dev->dcb->wanted_dcb_cfg);
+ else
+ hinic3_dcb_reset_hw_config(nic_dev);
+
+ return 0;
+}
+
+static int hinic3_dcb_alloc(struct hinic3_nic_dev *nic_dev)
+{
+ u16 func_id;
+ int is_use_vram;
+ int ret;
+
+ is_use_vram = get_use_vram_flag();
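+	/* The DCB context is allocated from persistent VRAM when the VRAM
+	 * flag is set; otherwise it comes from normal kernel memory.
+	 */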
+ if (is_use_vram) {
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ ret = snprintf(nic_dev->dcb_name, VRAM_NAME_MAX_LEN,
+ "%s%u%s", VRAM_CQM_GLB_FUNC_BASE, func_id,
+ VRAM_NIC_DCB);
+ if (ret < 0) {
+ hinic3_err(nic_dev, drv, "Nic dcb snprintf failed, ret:%d.\n", ret);
+ return ret;
+ }
+
+ nic_dev->dcb = (struct hinic3_dcb *)hi_vram_kalloc(nic_dev->dcb_name,
+ sizeof(*nic_dev->dcb));
+ if (!nic_dev->dcb) {
+ hinic3_err(nic_dev, drv, "Failed to vram alloc dcb.\n");
+ return -EFAULT;
+ }
+ } else {
+ nic_dev->dcb = kzalloc(sizeof(*nic_dev->dcb), GFP_KERNEL);
+ if (!nic_dev->dcb) {
+ hinic3_err(nic_dev, drv, "Failed to create dcb.\n");
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static void hinic3_dcb_free(struct hinic3_nic_dev *nic_dev)
+{
+ int is_use_vram;
+
+ is_use_vram = get_use_vram_flag();
+ if (is_use_vram)
+ hi_vram_kfree((void *)nic_dev->dcb, nic_dev->dcb_name, sizeof(*nic_dev->dcb));
+ else
+ kfree(nic_dev->dcb);
+ nic_dev->dcb = NULL;
+}
+
+void hinic3_dcb_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
+ hinic3_sync_dcb_state(nic_dev->hwdev, 1, 0);
+
+ hinic3_dcb_free(nic_dev);
+}
+
+int hinic3_dcb_init(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb_config *hw_dcb_cfg = NULL;
+ int err;
+ u8 dcb_en = test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0;
+
+ err = hinic3_dcb_alloc(nic_dev);
+ if (err != 0) {
+ hinic3_err(nic_dev, drv, "Dcb alloc failed.\n");
+ return err;
+ }
+
+ hw_dcb_cfg = &nic_dev->dcb->hw_dcb_cfg;
+ err = init_default_dcb_cfg(nic_dev, hw_dcb_cfg);
+ if (err) {
+ hinic3_err(nic_dev, drv,
+ "Initialize dcb configuration failed\n");
+ hinic3_dcb_free(nic_dev);
+ return err;
+ }
+
+ memcpy(&nic_dev->dcb->wanted_dcb_cfg, hw_dcb_cfg,
+ sizeof(struct hinic3_dcb_config));
+
+ hinic3_info(nic_dev, drv, "Support num cos %u, default cos %u\n",
+ nic_dev->dcb->cos_config_num_max, hw_dcb_cfg->default_cos);
+
+ err = hinic3_set_tx_cos_state(nic_dev, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set tx cos state failed\n");
+ hinic3_dcb_free(nic_dev);
+ return err;
+ }
+
+ return 0;
+}
+
+static int change_qos_cfg(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_dcb_config *dcb_cfg)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int err = 0;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ if (test_and_set_bit(HINIC3_DCB_UP_COS_SETTING, &dcb->dcb_flags)) {
+ nicif_warn(nic_dev, drv, netdev,
+			   "Cos_up map setting is in progress, please try again later\n");
+ return -EFAULT;
+ }
+
+ hinic3_sync_dcb_cfg(nic_dev, dcb_cfg);
+
+ hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
+
+ clear_bit(HINIC3_DCB_UP_COS_SETTING, &dcb->dcb_flags);
+
+ return err;
+}
+
+int hinic3_dcbcfg_set_up_bitmap(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int err, rollback_err;
+ u8 netif_run = 0;
+ struct hinic3_dcb_config old_dcb_cfg;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ memcpy(&old_dcb_cfg, &dcb->hw_dcb_cfg,
+ sizeof(struct hinic3_dcb_config));
+
+ if (!memcmp(&dcb->wanted_dcb_cfg, &old_dcb_cfg,
+ sizeof(struct hinic3_dcb_config))) {
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Same valid up bitmap, don't need to change anything\n");
+ return 0;
+ }
+
+ if (netif_running(nic_dev->netdev)) {
+ netif_run = 1;
+ hinic3_vport_down(nic_dev);
+ }
+
+ err = change_qos_cfg(nic_dev, &dcb->wanted_dcb_cfg);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Set cos_up map to hw failed\n");
+ goto change_qos_cfg_fail;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ err = hinic3_setup_cos(nic_dev->netdev,
+ user_cos_num, netif_run);
+ if (err)
+ goto set_err;
+ }
+
+ if (netif_run) {
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_fail;
+ }
+
+ return 0;
+
+vport_up_fail:
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
+ hinic3_setup_cos(nic_dev->netdev, user_cos_num
+ ? 0 : user_cos_num, netif_run);
+
+set_err:
+ rollback_err = change_qos_cfg(nic_dev, &old_dcb_cfg);
+ if (rollback_err)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to rollback qos configure\n");
+
+change_qos_cfg_fail:
+ if (netif_run)
+ hinic3_vport_up(nic_dev);
+
+ return err;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h
new file mode 100644
index 0000000..e0b35cb
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_DCB_H
+#define HINIC3_DCB_H
+
+#include "ossl_knl.h"
+
+enum HINIC3_DCB_FLAGS {
+ HINIC3_DCB_UP_COS_SETTING,
+ HINIC3_DCB_TRAFFIC_STOPPED,
+};
+
+struct hinic3_cos_cfg {
+ u8 up;
+ u8 bw_pct;
+ u8 tc_id;
+ u8 prio_sp; /* 0 - DWRR, 1 - SP */
+};
+
+struct hinic3_tc_cfg {
+ u8 bw_pct;
+ u8 prio_sp; /* 0 - DWRR, 1 - SP */
+ u16 rsvd;
+};
+
+#define PCP_MAX_UP 8
+#define DSCP_MAC_UP 64
+#define DBG_DFLT_DSCP_VAL 0xFF
+
+struct hinic3_dcb_config {
+ u8 trust; /* pcp, dscp */
+ u8 default_cos;
+ u8 pcp_user_cos_num;
+ u8 pcp_valid_cos_map;
+ u8 dscp_user_cos_num;
+ u8 dscp_valid_cos_map;
+ u8 pcp2cos[PCP_MAX_UP];
+ u8 dscp2cos[DSCP_MAC_UP];
+
+ u8 cos_qp_offset[NIC_DCB_COS_MAX];
+ u8 cos_qp_num[NIC_DCB_COS_MAX];
+};
+
+u8 hinic3_get_dev_user_cos_num(struct hinic3_nic_dev *nic_dev);
+u8 hinic3_get_dev_valid_cos_map(struct hinic3_nic_dev *nic_dev);
+int hinic3_dcb_init(struct hinic3_nic_dev *nic_dev);
+void hinic3_dcb_deinit(struct hinic3_nic_dev *nic_dev);
+void hinic3_dcb_reset_hw_config(struct hinic3_nic_dev *nic_dev);
+int hinic3_configure_dcb(struct net_device *netdev);
+int hinic3_setup_cos(struct net_device *netdev, u8 cos, u8 netif_run);
+void hinic3_dcbcfg_set_pfc_state(struct hinic3_nic_dev *nic_dev, u8 pfc_state);
+u8 hinic3_dcbcfg_get_pfc_state(struct hinic3_nic_dev *nic_dev);
+void hinic3_dcbcfg_set_pfc_pri_en(struct hinic3_nic_dev *nic_dev,
+ u8 pfc_en_bitmap);
+u8 hinic3_dcbcfg_get_pfc_pri_en(struct hinic3_nic_dev *nic_dev);
+int hinic3_dcbcfg_set_ets_up_tc_map(struct hinic3_nic_dev *nic_dev,
+ const u8 *up_tc_map);
+void hinic3_dcbcfg_get_ets_up_tc_map(struct hinic3_nic_dev *nic_dev,
+ u8 *up_tc_map);
+int hinic3_dcbcfg_set_ets_tc_bw(struct hinic3_nic_dev *nic_dev,
+ const u8 *tc_bw);
+void hinic3_dcbcfg_get_ets_tc_bw(struct hinic3_nic_dev *nic_dev, u8 *tc_bw);
+void hinic3_dcbcfg_set_ets_tc_prio_type(struct hinic3_nic_dev *nic_dev,
+ u8 tc_prio_bitmap);
+void hinic3_dcbcfg_get_ets_tc_prio_type(struct hinic3_nic_dev *nic_dev,
+ u8 *tc_prio_bitmap);
+int hinic3_dcbcfg_set_up_bitmap(struct hinic3_nic_dev *nic_dev);
+void hinic3_update_tx_db_cos(struct hinic3_nic_dev *nic_dev, u8 dcb_en);
+
+void hinic3_update_qp_cos_cfg(struct hinic3_nic_dev *nic_dev, u8 num_cos);
+void hinic3_vport_down(struct hinic3_nic_dev *nic_dev);
+int hinic3_vport_up(struct hinic3_nic_dev *nic_dev);
+int hinic3_configure_dcb_hw(struct hinic3_nic_dev *nic_dev, u8 dcb_en);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
new file mode 100644
index 0000000..548d67d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
@@ -0,0 +1,1464 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_rss.h"
+
+#define COALESCE_ALL_QUEUE 0xFFFF
+#define COALESCE_PENDING_LIMIT_UNIT 8
+#define COALESCE_TIMER_CFG_UNIT 5
+#define COALESCE_MAX_PENDING_LIMIT (255 * COALESCE_PENDING_LIMIT_UNIT)
+#define COALESCE_MAX_TIMER_CFG (255 * COALESCE_TIMER_CFG_UNIT)
+#define HINIC3_WAIT_PKTS_TO_RX_BUFFER 200
+#define HINIC3_WAIT_CLEAR_LP_TEST 100
+
+#ifndef SET_ETHTOOL_OPS
+#define SET_ETHTOOL_OPS(netdev, ops) \
+ ((netdev)->ethtool_ops = (ops))
+#endif
+
+static void hinic3_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *info)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct pci_dev *pdev = nic_dev->pdev;
+ u8 mgmt_ver[HINIC3_MGMT_VERSION_MAX_LEN] = {0};
+ int err;
+
+ strscpy(info->driver, HINIC3_NIC_DRV_NAME, sizeof(info->driver));
+ strscpy(info->version, HINIC3_NIC_DRV_VERSION, sizeof(info->version));
+ strscpy(info->bus_info, pci_name(pdev), sizeof(info->bus_info));
+
+ err = hinic3_get_mgmt_version(nic_dev->hwdev, mgmt_ver,
+ HINIC3_MGMT_VERSION_MAX_LEN,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to get fw version\n");
+ return;
+ }
+
+ err = snprintf(info->fw_version, sizeof(info->fw_version), "%s", mgmt_ver);
+ if (err < 0)
+ nicif_err(nic_dev, drv, netdev, "Failed to snprintf fw version\n");
+}
+
+static u32 hinic3_get_msglevel(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ return nic_dev->msg_enable;
+}
+
+static void hinic3_set_msglevel(struct net_device *netdev, u32 data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ nic_dev->msg_enable = data;
+
+ nicif_info(nic_dev, drv, netdev, "Set message level: 0x%x\n", data);
+}
+
+static int hinic3_nway_reset(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_port_info port_info = {0};
+ int err;
+
+ while (test_and_set_bit(HINIC3_AUTONEG_RESET, &nic_dev->flags))
+		msleep(100); /* sleep 100 ms, waiting for another autoneg restart in progress to finish */
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Get port info failed\n");
+ err = -EFAULT;
+ goto reset_err;
+ }
+
+ if (port_info.autoneg_state != PORT_CFG_AN_ON) {
+		nicif_err(nic_dev, drv, netdev, "Autonegotiation is not on, restarting it is not supported\n");
+ err = -EOPNOTSUPP;
+ goto reset_err;
+ }
+
+ err = hinic3_set_autoneg(nic_dev->hwdev, false);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Set autonegotiation off failed\n");
+ err = -EFAULT;
+ goto reset_err;
+ }
+
+ msleep(200); /* sleep 200 ms, waiting for status polling finished */
+
+ err = hinic3_set_autoneg(nic_dev->hwdev, true);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Set autonegotiation on failed\n");
+ err = -EFAULT;
+ goto reset_err;
+ }
+
+ msleep(200); /* sleep 200 ms, waiting for status polling finished */
+ nicif_info(nic_dev, drv, netdev, "Restart autonegotiation successfully\n");
+
+reset_err:
+ clear_bit(HINIC3_AUTONEG_RESET, &nic_dev->flags);
+ return err;
+}
+
+#ifdef HAVE_ETHTOOL_RINGPARAM_EXTACK
+static void hinic3_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring,
+ struct kernel_ethtool_ringparam *kernel_ring,
+ struct netlink_ext_ack *extack)
+#else
+static void hinic3_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ ring->rx_max_pending = HINIC3_MAX_RX_QUEUE_DEPTH;
+ ring->tx_max_pending = HINIC3_MAX_TX_QUEUE_DEPTH;
+ ring->rx_pending = nic_dev->rxqs[0].q_depth;
+ ring->tx_pending = nic_dev->txqs[0].q_depth;
+}
+
+static void hinic3_update_qp_depth(struct hinic3_nic_dev *nic_dev,
+ u32 sq_depth, u32 rq_depth)
+{
+ u16 i;
+
+ nic_dev->q_params.sq_depth = sq_depth;
+ nic_dev->q_params.rq_depth = rq_depth;
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ nic_dev->txqs[i].q_depth = sq_depth;
+ nic_dev->txqs[i].q_mask = sq_depth - 1;
+ nic_dev->rxqs[i].q_depth = rq_depth;
+ nic_dev->rxqs[i].q_mask = rq_depth - 1;
+ }
+}
+
+static int check_ringparam_valid(struct net_device *netdev,
+ const struct ethtool_ringparam *ring)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (ring->rx_jumbo_pending || ring->rx_mini_pending) {
+ nicif_err(nic_dev, drv, netdev,
+ "Unsupported rx_jumbo_pending/rx_mini_pending\n");
+ return -EINVAL;
+ }
+
+ if (ring->tx_pending > HINIC3_MAX_TX_QUEUE_DEPTH ||
+ ring->tx_pending < HINIC3_MIN_QUEUE_DEPTH ||
+ ring->rx_pending > HINIC3_MAX_RX_QUEUE_DEPTH ||
+ ring->rx_pending < HINIC3_MIN_QUEUE_DEPTH) {
+ nicif_err(nic_dev, drv, netdev,
+			  "Queue depth out of range tx[%d-%d] rx[%d-%d]\n",
+ HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_TX_QUEUE_DEPTH,
+ HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_RX_QUEUE_DEPTH);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#ifdef HAVE_ETHTOOL_RINGPARAM_EXTACK
+static int hinic3_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring,
+ struct kernel_ethtool_ringparam *kernel_ring,
+ struct netlink_ext_ack *extack)
+#else
+static int hinic3_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_txrxq_params q_params = {0};
+ u32 new_sq_depth, new_rq_depth;
+ int err;
+
+ err = check_ringparam_valid(netdev, ring);
+ if (err)
+ return err;
+
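+	/* Round the requested depths down to the nearest power of two */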
+ new_sq_depth = (u32)(1U << (u16)ilog2(ring->tx_pending));
+ new_rq_depth = (u32)(1U << (u16)ilog2(ring->rx_pending));
+ if (new_sq_depth == nic_dev->q_params.sq_depth &&
+ new_rq_depth == nic_dev->q_params.rq_depth)
+ return 0; /* nothing to do */
+
+ nicif_info(nic_dev, drv, netdev,
+ "Change Tx/Rx ring depth from %u/%u to %u/%u\n",
+ nic_dev->q_params.sq_depth, nic_dev->q_params.rq_depth,
+ new_sq_depth, new_rq_depth);
+
+ if (!netif_running(netdev)) {
+ hinic3_update_qp_depth(nic_dev, new_sq_depth, new_rq_depth);
+ } else {
+ q_params = nic_dev->q_params;
+ q_params.sq_depth = new_sq_depth;
+ q_params.rq_depth = new_rq_depth;
+ q_params.txqs_res = NULL;
+ q_params.rxqs_res = NULL;
+ q_params.irq_cfg = NULL;
+
+ nicif_info(nic_dev, drv, netdev, "Restarting channel\n");
+ err = hinic3_change_channel_settings(nic_dev, &q_params,
+ NULL, NULL);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to change channel settings\n");
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal, u16 queue)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_intr_coal_info *interrupt_info = NULL;
+
+ if (queue == COALESCE_ALL_QUEUE) {
+ /* get tx/rx irq0 as default parameters */
+ interrupt_info = &nic_dev->intr_coalesce[0];
+ } else {
+ if (queue >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, netdev,
+ "Invalid queue_id: %u\n", queue);
+ return -EINVAL;
+ }
+ interrupt_info = &nic_dev->intr_coalesce[queue];
+ }
+
+	/* coalesce_timer is in units of 5 us */
+ coal->rx_coalesce_usecs = interrupt_info->coalesce_timer_cfg *
+ COALESCE_TIMER_CFG_UNIT;
+	/* coalesce_frames is in units of 8 */
+ coal->rx_max_coalesced_frames = interrupt_info->pending_limt *
+ COALESCE_PENDING_LIMIT_UNIT;
+
+ /* tx/rx use the same interrupt */
+ coal->tx_coalesce_usecs = coal->rx_coalesce_usecs;
+ coal->tx_max_coalesced_frames = coal->rx_max_coalesced_frames;
+ coal->use_adaptive_rx_coalesce = nic_dev->adaptive_rx_coal;
+
+ coal->pkt_rate_high = (u32)interrupt_info->pkt_rate_high;
+ coal->rx_coalesce_usecs_high = interrupt_info->rx_usecs_high *
+ COALESCE_TIMER_CFG_UNIT;
+ coal->rx_max_coalesced_frames_high =
+ interrupt_info->rx_pending_limt_high *
+ COALESCE_PENDING_LIMIT_UNIT;
+
+ coal->pkt_rate_low = (u32)interrupt_info->pkt_rate_low;
+ coal->rx_coalesce_usecs_low = interrupt_info->rx_usecs_low *
+ COALESCE_TIMER_CFG_UNIT;
+ coal->rx_max_coalesced_frames_low =
+ interrupt_info->rx_pending_limt_low *
+ COALESCE_PENDING_LIMIT_UNIT;
+
+ return 0;
+}
+
+static int set_queue_coalesce(struct hinic3_nic_dev *nic_dev, u16 q_id,
+ struct hinic3_intr_coal_info *coal)
+{
+ struct hinic3_intr_coal_info *intr_coal = NULL;
+ struct interrupt_info info = {0};
+ struct net_device *netdev = nic_dev->netdev;
+ int err;
+
+ intr_coal = &nic_dev->intr_coalesce[q_id];
+ if (intr_coal->coalesce_timer_cfg != coal->coalesce_timer_cfg ||
+ intr_coal->pending_limt != coal->pending_limt)
+ intr_coal->user_set_intr_coal_flag = 1;
+
+ intr_coal->coalesce_timer_cfg = coal->coalesce_timer_cfg;
+ intr_coal->pending_limt = coal->pending_limt;
+ intr_coal->pkt_rate_low = coal->pkt_rate_low;
+ intr_coal->rx_usecs_low = coal->rx_usecs_low;
+ intr_coal->rx_pending_limt_low = coal->rx_pending_limt_low;
+ intr_coal->pkt_rate_high = coal->pkt_rate_high;
+ intr_coal->rx_usecs_high = coal->rx_usecs_high;
+ intr_coal->rx_pending_limt_high = coal->rx_pending_limt_high;
+
+	/* netdev not running or qp not in use,
+	 * no need to set coalesce to hw
+	 */
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags) ||
+ q_id >= nic_dev->q_params.num_qps || nic_dev->adaptive_rx_coal)
+ return 0;
+
+ info.msix_index = nic_dev->q_params.irq_cfg[q_id].msix_entry_idx;
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.coalesc_timer_cfg = intr_coal->coalesce_timer_cfg;
+ info.pending_limt = intr_coal->pending_limt;
+ info.resend_timer_cfg = intr_coal->resend_timer_cfg;
+ nic_dev->rxqs[q_id].last_coalesc_timer_cfg =
+ intr_coal->coalesce_timer_cfg;
+ nic_dev->rxqs[q_id].last_pending_limt = intr_coal->pending_limt;
+ err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info,
+ HINIC3_CHANNEL_NIC);
+ if (err)
+ nicif_warn(nic_dev, drv, netdev,
+			   "Failed to set queue %u coalesce\n", q_id);
+
+ return err;
+}
+
+static int is_coalesce_exceed_limit(struct net_device *netdev,
+ const struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (coal->rx_coalesce_usecs > COALESCE_MAX_TIMER_CFG) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_coalesce_usecs out of range[%d-%d]\n", 0,
+ COALESCE_MAX_TIMER_CFG);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames > COALESCE_MAX_PENDING_LIMIT) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_max_coalesced_frames out of range[%d-%d]\n", 0,
+ COALESCE_MAX_PENDING_LIMIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_coalesce_usecs_low > COALESCE_MAX_TIMER_CFG) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_coalesce_usecs_low out of range[%d-%d]\n", 0,
+ COALESCE_MAX_TIMER_CFG);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames_low > COALESCE_MAX_PENDING_LIMIT) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_max_coalesced_frames_low out of range[%d-%d]\n",
+ 0, COALESCE_MAX_PENDING_LIMIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_coalesce_usecs_high > COALESCE_MAX_TIMER_CFG) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_coalesce_usecs_high out of range[%d-%d]\n", 0,
+ COALESCE_MAX_TIMER_CFG);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames_high > COALESCE_MAX_PENDING_LIMIT) {
+ nicif_err(nic_dev, drv, netdev,
+ "rx_max_coalesced_frames_high out of range[%d-%d]\n",
+ 0, COALESCE_MAX_PENDING_LIMIT);
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int is_coalesce_allowed_change(struct net_device *netdev,
+ const struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct ethtool_coalesce tmp_coal = {0};
+
+ tmp_coal.cmd = coal->cmd;
+ tmp_coal.rx_coalesce_usecs = coal->rx_coalesce_usecs;
+ tmp_coal.rx_max_coalesced_frames = coal->rx_max_coalesced_frames;
+ tmp_coal.tx_coalesce_usecs = coal->tx_coalesce_usecs;
+ tmp_coal.tx_max_coalesced_frames = coal->tx_max_coalesced_frames;
+ tmp_coal.use_adaptive_rx_coalesce = coal->use_adaptive_rx_coalesce;
+
+ tmp_coal.pkt_rate_low = coal->pkt_rate_low;
+ tmp_coal.rx_coalesce_usecs_low = coal->rx_coalesce_usecs_low;
+ tmp_coal.rx_max_coalesced_frames_low =
+ coal->rx_max_coalesced_frames_low;
+
+ tmp_coal.pkt_rate_high = coal->pkt_rate_high;
+ tmp_coal.rx_coalesce_usecs_high = coal->rx_coalesce_usecs_high;
+ tmp_coal.rx_max_coalesced_frames_high =
+ coal->rx_max_coalesced_frames_high;
+
+ if (memcmp(coal, &tmp_coal, sizeof(struct ethtool_coalesce))) {
+ nicif_err(nic_dev, drv, netdev,
+			  "Only rx/tx-usecs and rx/tx-frames can be changed\n");
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int is_coalesce_legal(struct net_device *netdev,
+ const struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ if (coal->rx_coalesce_usecs != coal->tx_coalesce_usecs) {
+ nicif_err(nic_dev, drv, netdev,
+ "tx-usecs must be equal to rx-usecs\n");
+ return -EINVAL;
+ }
+
+ if (coal->rx_max_coalesced_frames != coal->tx_max_coalesced_frames) {
+ nicif_err(nic_dev, drv, netdev,
+ "tx-frames must be equal to rx-frames\n");
+ return -EINVAL;
+ }
+
+ err = is_coalesce_allowed_change(netdev, coal);
+ if (err)
+ return err;
+
+ err = is_coalesce_exceed_limit(netdev, coal);
+ if (err)
+ return err;
+
+ if (coal->rx_coalesce_usecs_low / COALESCE_TIMER_CFG_UNIT >=
+ coal->rx_coalesce_usecs_high / COALESCE_TIMER_CFG_UNIT) {
+ nicif_err(nic_dev, drv, netdev,
+			  "coalesce_usecs_high(%u) must be greater than coalesce_usecs_low(%u) after dividing by the %d usecs unit\n",
+ coal->rx_coalesce_usecs_high,
+ coal->rx_coalesce_usecs_low,
+ COALESCE_TIMER_CFG_UNIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames_low / COALESCE_PENDING_LIMIT_UNIT >=
+ coal->rx_max_coalesced_frames_high / COALESCE_PENDING_LIMIT_UNIT) {
+ nicif_err(nic_dev, drv, netdev,
+			  "coalesced_frames_high(%u) must be greater than coalesced_frames_low(%u) after dividing by the %d frames unit\n",
+ coal->rx_max_coalesced_frames_high,
+ coal->rx_max_coalesced_frames_low,
+ COALESCE_PENDING_LIMIT_UNIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->pkt_rate_low >= coal->pkt_rate_high) {
+ nicif_err(nic_dev, drv, netdev,
+			  "pkt_rate_high(%u) must be greater than pkt_rate_low(%u)\n",
+ coal->pkt_rate_high,
+ coal->pkt_rate_low);
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
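+/* The checks below only warn; the values are truncated to whole units
+ * later in init_intr_coal_params().
+ */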
+#define CHECK_COALESCE_ALIGN(coal, item, unit) \
+do { \
+ if ((coal)->item % (unit)) \
+ nicif_warn(nic_dev, drv, netdev, \
+			   "%s must be in units of %d, rounding down to %u\n", \
+ #item, (unit), ((coal)->item - \
+ (coal)->item % (unit))); \
+} while (0)
+
+#define CHECK_COALESCE_CHANGED(coal, item, unit, ori_val, obj_str) \
+do { \
+ if (((coal)->item / (unit)) != (ori_val)) \
+ nicif_info(nic_dev, drv, netdev, \
+ "Change %s from %d to %u %s\n", \
+ #item, (ori_val) * (unit), \
+ ((coal)->item - (coal)->item % (unit)), \
+ (obj_str)); \
+} while (0)
+
+#define CHECK_PKT_RATE_CHANGED(coal, item, ori_val, obj_str) \
+do { \
+ if ((coal)->item != (ori_val)) \
+ nicif_info(nic_dev, drv, netdev, \
+ "Change %s from %llu to %u %s\n", \
+ #item, (ori_val), (coal)->item, (obj_str)); \
+} while (0)
+
+static int set_hw_coal_param(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_intr_coal_info *intr_coal, u16 queue)
+{
+ u16 i;
+
+ if (queue == COALESCE_ALL_QUEUE) {
+ for (i = 0; i < nic_dev->max_qps; i++)
+ set_queue_coalesce(nic_dev, i, intr_coal);
+ } else {
+ if (queue >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid queue_id: %u\n", queue);
+ return -EINVAL;
+ }
+ set_queue_coalesce(nic_dev, queue, intr_coal);
+ }
+
+ return 0;
+}
+
+static void check_coalesce_align(struct net_device *netdev,
+ struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ CHECK_COALESCE_ALIGN(coal, rx_coalesce_usecs, COALESCE_TIMER_CFG_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_max_coalesced_frames,
+ COALESCE_PENDING_LIMIT_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_coalesce_usecs_high,
+ COALESCE_TIMER_CFG_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_max_coalesced_frames_high,
+ COALESCE_PENDING_LIMIT_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_coalesce_usecs_low,
+ COALESCE_TIMER_CFG_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_max_coalesced_frames_low,
+ COALESCE_PENDING_LIMIT_UNIT);
+}
+
+static int check_coalesce_change(struct net_device *netdev,
+ u16 queue, struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_intr_coal_info *ori_intr_coal = NULL;
+ char obj_str[32] = {0};
+
+ if (queue == COALESCE_ALL_QUEUE) {
+ ori_intr_coal = &nic_dev->intr_coalesce[0];
+ snprintf(obj_str, sizeof(obj_str), "for netdev");
+ } else {
+ ori_intr_coal = &nic_dev->intr_coalesce[queue];
+ snprintf(obj_str, sizeof(obj_str), "for queue %u", queue);
+ }
+
+ CHECK_COALESCE_CHANGED(coal, rx_coalesce_usecs, COALESCE_TIMER_CFG_UNIT,
+ ori_intr_coal->coalesce_timer_cfg, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_max_coalesced_frames,
+ COALESCE_PENDING_LIMIT_UNIT,
+ ori_intr_coal->pending_limt, obj_str);
+ CHECK_PKT_RATE_CHANGED(coal, pkt_rate_high,
+ ori_intr_coal->pkt_rate_high, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_coalesce_usecs_high,
+ COALESCE_TIMER_CFG_UNIT,
+ ori_intr_coal->rx_usecs_high, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_max_coalesced_frames_high,
+ COALESCE_PENDING_LIMIT_UNIT,
+ ori_intr_coal->rx_pending_limt_high, obj_str);
+ CHECK_PKT_RATE_CHANGED(coal, pkt_rate_low,
+ ori_intr_coal->pkt_rate_low, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_coalesce_usecs_low,
+ COALESCE_TIMER_CFG_UNIT,
+ ori_intr_coal->rx_usecs_low, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_max_coalesced_frames_low,
+ COALESCE_PENDING_LIMIT_UNIT,
+ ori_intr_coal->rx_pending_limt_low, obj_str);
+ return 0;
+}
+
+static void init_intr_coal_params(struct hinic3_intr_coal_info *intr_coal,
+ struct ethtool_coalesce *coal)
+{
+ intr_coal->coalesce_timer_cfg =
+ (u8)(coal->rx_coalesce_usecs / COALESCE_TIMER_CFG_UNIT);
+ intr_coal->pending_limt = (u8)(coal->rx_max_coalesced_frames /
+ COALESCE_PENDING_LIMIT_UNIT);
+
+ intr_coal->pkt_rate_high = coal->pkt_rate_high;
+ intr_coal->rx_usecs_high =
+ (u8)(coal->rx_coalesce_usecs_high / COALESCE_TIMER_CFG_UNIT);
+ intr_coal->rx_pending_limt_high =
+ (u8)(coal->rx_max_coalesced_frames_high /
+ COALESCE_PENDING_LIMIT_UNIT);
+
+ intr_coal->pkt_rate_low = coal->pkt_rate_low;
+ intr_coal->rx_usecs_low =
+ (u8)(coal->rx_coalesce_usecs_low / COALESCE_TIMER_CFG_UNIT);
+ intr_coal->rx_pending_limt_low =
+ (u8)(coal->rx_max_coalesced_frames_low /
+ COALESCE_PENDING_LIMIT_UNIT);
+}
+
+static int set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal, u16 queue)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_intr_coal_info intr_coal = {0};
+ u32 last_adaptive_rx;
+ int err = 0;
+
+ err = is_coalesce_legal(netdev, coal);
+ if (err)
+ return err;
+
+ check_coalesce_align(netdev, coal);
+
+ check_coalesce_change(netdev, queue, coal);
+
+ init_intr_coal_params(&intr_coal, coal);
+
+ last_adaptive_rx = nic_dev->adaptive_rx_coal;
+ nic_dev->adaptive_rx_coal = coal->use_adaptive_rx_coalesce;
+
+	/* setting the coalesce timer or pending limit to zero disables coalescing */
+ if (!nic_dev->adaptive_rx_coal &&
+ (!intr_coal.coalesce_timer_cfg || !intr_coal.pending_limt))
+ nicif_warn(nic_dev, drv, netdev, "Coalesce will be disabled\n");
+
+	/* ensure coalesce parameters will not be changed by the auto
+	 * moderation work
+	 */
+ if (HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ if (!nic_dev->adaptive_rx_coal)
+ cancel_delayed_work_sync(&nic_dev->moderation_task);
+ else if (!last_adaptive_rx)
+ queue_delayed_work(nic_dev->workq,
+ &nic_dev->moderation_task,
+ HINIC3_MODERATONE_DELAY);
+ }
+
+ return set_hw_coal_param(nic_dev, &intr_coal, queue);
+}
+
+#ifdef HAVE_ETHTOOL_COALESCE_EXTACK
+static int hinic3_get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal,
+ struct kernel_ethtool_coalesce *kernel_coal,
+ struct netlink_ext_ack *extack)
+#else
+static int hinic3_get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal)
+#endif
+{
+ return get_coalesce(netdev, coal, COALESCE_ALL_QUEUE);
+}
+
+#ifdef HAVE_ETHTOOL_COALESCE_EXTACK
+static int hinic3_set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal,
+ struct kernel_ethtool_coalesce *kernel_coal,
+ struct netlink_ext_ack *extack)
+#else
+static int hinic3_set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal)
+#endif
+{
+ return set_coalesce(netdev, coal, COALESCE_ALL_QUEUE);
+}
+
+#if defined(ETHTOOL_PERQUEUE) && defined(ETHTOOL_GCOALESCE)
+static int hinic3_get_per_queue_coalesce(struct net_device *netdev, u32 queue,
+ struct ethtool_coalesce *coal)
+{
+ return get_coalesce(netdev, coal, (u16)queue);
+}
+
+static int hinic3_set_per_queue_coalesce(struct net_device *netdev, u32 queue,
+ struct ethtool_coalesce *coal)
+{
+ return set_coalesce(netdev, coal, (u16)queue);
+}
+#endif
+
+#ifdef HAVE_ETHTOOL_SET_PHYS_ID
+static int hinic3_set_phys_id(struct net_device *netdev,
+ enum ethtool_phys_id_state state)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ switch (state) {
+ case ETHTOOL_ID_ACTIVE:
+ err = hinic3_set_led_status(nic_dev->hwdev,
+ MAG_CMD_LED_TYPE_ALARM,
+ MAG_CMD_LED_MODE_FORCE_BLINK_2HZ);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+				  "Set LED blinking at 2Hz failed\n");
+ else
+ nicif_info(nic_dev, drv, netdev,
+				   "Set LED blinking at 2Hz success\n");
+ break;
+
+ case ETHTOOL_ID_INACTIVE:
+ err = hinic3_set_led_status(nic_dev->hwdev,
+ MAG_CMD_LED_TYPE_ALARM,
+ MAG_CMD_LED_MODE_DEFAULT);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+ "Reset LED to original status failed\n");
+ else
+ nicif_info(nic_dev, drv, netdev,
+ "Reset LED to original status success\n");
+ break;
+
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return err;
+}
+#else
+static int hinic3_phys_id(struct net_device *netdev, u32 data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+	nicif_err(nic_dev, drv, netdev, "Setting phys id is not supported\n");
+
+ return -EOPNOTSUPP;
+}
+#endif
+
+static void hinic3_get_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_pause_config nic_pause = {0};
+ int err;
+
+ err = hinic3_get_pause_info(nic_dev->hwdev, &nic_pause);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to get pauseparam from hw\n");
+ } else {
+ pause->autoneg = nic_pause.auto_neg == PORT_CFG_AN_ON ?
+ AUTONEG_ENABLE : AUTONEG_DISABLE;
+ pause->rx_pause = nic_pause.rx_pause;
+ pause->tx_pause = nic_pause.tx_pause;
+ }
+}
+
+static int hinic3_set_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_pause_config nic_pause = {0};
+ struct nic_port_info port_info = {0};
+ u32 auto_neg;
+ int err;
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to get auto-negotiation state\n");
+ return -EFAULT;
+ }
+
+ auto_neg = port_info.autoneg_state == PORT_CFG_AN_ON ? AUTONEG_ENABLE : AUTONEG_DISABLE;
+ if (pause->autoneg != auto_neg) {
+ nicif_err(nic_dev, drv, netdev,
+ "To change autoneg please use: ethtool -s <dev> autoneg <on|off>\n");
+ return -EOPNOTSUPP;
+ }
+
+ nic_pause.auto_neg = pause->autoneg == AUTONEG_ENABLE ? PORT_CFG_AN_ON : PORT_CFG_AN_OFF;
+ nic_pause.rx_pause = (u8)pause->rx_pause;
+ nic_pause.tx_pause = (u8)pause->tx_pause;
+
+ err = hinic3_set_pause_info(nic_dev->hwdev, nic_pause);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set pauseparam\n");
+ return err;
+ }
+
+ nicif_info(nic_dev, drv, netdev, "Set pause options, tx: %s, rx: %s\n",
+ pause->tx_pause ? "on" : "off",
+ pause->rx_pause ? "on" : "off");
+
+ return 0;
+}
+
+#ifdef ETHTOOL_GMODULEEEPROM
+static int hinic3_get_module_info(struct net_device *netdev,
+ struct ethtool_modinfo *modinfo)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 sfp_type = 0;
+ u8 sfp_type_ext = 0;
+ int err;
+
+ err = hinic3_get_sfp_type(nic_dev->hwdev, &sfp_type, &sfp_type_ext);
+ if (err)
+ return err;
+
+ switch (sfp_type) {
+ case MODULE_TYPE_SFP:
+ modinfo->type = ETH_MODULE_SFF_8472;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+ break;
+ case MODULE_TYPE_QSFP:
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN;
+ break;
+ case MODULE_TYPE_QSFP_PLUS:
+ if (sfp_type_ext >= 0x3) {
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;
+ } else {
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN;
+ }
+ break;
+ case MODULE_TYPE_QSFP28:
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;
+ break;
+ case MODULE_TYPE_DSFP:
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;
+ break;
+ case MODULE_TYPE_QSFP_CMIS:
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;
+ break;
+ default:
+ nicif_warn(nic_dev, drv, netdev,
+ "Optical module unknown: 0x%x\n", sfp_type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_get_module_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *ee, u8 *data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 sfp_data[STD_SFP_INFO_MAX_SIZE];
+ int err;
+
+ if (!ee->len || ((ee->len + ee->offset) > STD_SFP_INFO_MAX_SIZE))
+ return -EINVAL;
+
+ memset(data, 0, ee->len);
+
+ err = hinic3_get_sfp_eeprom(nic_dev->hwdev, (u8 *)sfp_data, ee->len);
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ err = hinic3_get_tlv_xsfp_eeprom(nic_dev->hwdev, (u8 *)sfp_data, sizeof(sfp_data));
+
+ if (err)
+ return err;
+
+ memcpy(data, sfp_data + ee->offset, ee->len);
+
+ return 0;
+}
+#endif /* ETHTOOL_GMODULEEEPROM */
+
+#define HINIC3_PRIV_FLAGS_SYMM_RSS BIT(0)
+#define HINIC3_PRIV_FLAGS_LINK_UP BIT(1)
+#define HINIC3_PRIV_FLAGS_RXQ_RECOVERY BIT(2)
+
+static u32 hinic3_get_priv_flags(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u32 priv_flags = 0;
+
+ if (test_bit(HINIC3_SAME_RXTX, &nic_dev->flags))
+ priv_flags |= HINIC3_PRIV_FLAGS_SYMM_RSS;
+
+ if (test_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ priv_flags |= HINIC3_PRIV_FLAGS_LINK_UP;
+
+ if (test_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ priv_flags |= HINIC3_PRIV_FLAGS_RXQ_RECOVERY;
+
+ return priv_flags;
+}
+
+static int hinic3_set_rxq_recovery_flag(struct net_device *netdev, u32 priv_flags)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (priv_flags & HINIC3_PRIV_FLAGS_RXQ_RECOVERY) {
+ if (!HINIC3_SUPPORT_RXQ_RECOVERY(nic_dev->hwdev)) {
+			nicif_info(nic_dev, drv, netdev, "Rxq recovery is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_and_set_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ return 0;
+ queue_delayed_work(nic_dev->workq, &nic_dev->rxq_check_work, HZ);
+ nicif_info(nic_dev, drv, netdev, "open rxq recovery\n");
+ } else {
+ if (!test_and_clear_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ return 0;
+ cancel_delayed_work_sync(&nic_dev->rxq_check_work);
+ nicif_info(nic_dev, drv, netdev, "close rxq recovery\n");
+ }
+
+ return 0;
+}
+
+static int hinic3_set_symm_rss_flag(struct net_device *netdev, u32 priv_flags)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (priv_flags & HINIC3_PRIV_FLAGS_SYMM_RSS) {
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev,
+				  "Failed to enable Symmetric RSS while DCB is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev,
+				  "Failed to enable Symmetric RSS while RSS is disabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ set_bit(HINIC3_SAME_RXTX, &nic_dev->flags);
+ } else {
+ clear_bit(HINIC3_SAME_RXTX, &nic_dev->flags);
+ }
+
+ return 0;
+}
+
+static int hinic3_set_force_link_flag(struct net_device *netdev, u32 priv_flags)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 link_status = 0;
+ int err;
+
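+	/* Setting the flag forces the carrier on regardless of the physical
+	 * link; clearing it restores the carrier to the real link state.
+	 */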
+ if (priv_flags & HINIC3_PRIV_FLAGS_LINK_UP) {
+ if (test_and_set_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ return 0;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev))
+ return 0;
+
+ if (netif_carrier_ok(netdev))
+ return 0;
+
+ nic_dev->link_status = true;
+ netif_carrier_on(netdev);
+ nicif_info(nic_dev, link, netdev, "Set link up\n");
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, nic_dev->link_status);
+ } else {
+ if (!test_and_clear_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ return 0;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev))
+ return 0;
+
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_status);
+ if (err) {
+ nicif_err(nic_dev, link, netdev, "Get link state err: %d\n", err);
+ return err;
+ }
+
+ nic_dev->link_status = link_status;
+
+ if (link_status) {
+ if (netif_carrier_ok(netdev))
+ return 0;
+
+ netif_carrier_on(netdev);
+ nicif_info(nic_dev, link, netdev, "Link state is up\n");
+ } else {
+ if (!netif_carrier_ok(netdev))
+ return 0;
+
+ netif_carrier_off(netdev);
+ nicif_info(nic_dev, link, netdev, "Link state is down\n");
+ }
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, nic_dev->link_status);
+ }
+
+ return 0;
+}
+
+static int hinic3_set_priv_flags(struct net_device *netdev, u32 priv_flags)
+{
+ int err;
+
+ err = hinic3_set_symm_rss_flag(netdev, priv_flags);
+ if (err)
+ return err;
+
+ err = hinic3_set_rxq_recovery_flag(netdev, priv_flags);
+ if (err)
+ return err;
+
+ return hinic3_set_force_link_flag(netdev, priv_flags);
+}
+
+#define PORT_DOWN_ERR_IDX 0
+#define LP_DEFAULT_TIME 5 /* seconds */
+#define LP_PKT_LEN 60
+
+#define TEST_TIME_MULTIPLE 5
+static int hinic3_run_lp_test(struct hinic3_nic_dev *nic_dev, u32 test_time)
+{
+ u8 *lb_test_rx_buf = nic_dev->lb_test_rx_buf;
+ struct net_device *netdev = nic_dev->netdev;
+ u32 cnt = test_time * TEST_TIME_MULTIPLE;
+ struct sk_buff *skb_tmp = NULL;
+ struct ethhdr *eth_hdr = NULL;
+ struct sk_buff *skb = NULL;
+ u8 *test_data = NULL;
+ u32 i;
+ u8 j;
+
+ skb_tmp = alloc_skb(LP_PKT_LEN, GFP_ATOMIC);
+ if (!skb_tmp) {
+ nicif_err(nic_dev, drv, netdev,
+ "Alloc xmit skb template failed for loopback test\n");
+ return -ENOMEM;
+ }
+
+ eth_hdr = __skb_put(skb_tmp, ETH_HLEN);
+ eth_hdr->h_proto = htons(ETH_P_ARP);
+ ether_addr_copy(eth_hdr->h_dest, nic_dev->netdev->dev_addr);
+ eth_zero_addr(eth_hdr->h_source);
+ skb_reset_mac_header(skb_tmp);
+
+ test_data = __skb_put(skb_tmp, LP_PKT_LEN - ETH_HLEN);
+ for (i = ETH_HLEN; i < LP_PKT_LEN; i++)
+ test_data[i] = i & 0xFF;
+
+ skb_tmp->queue_mapping = 0;
+ skb_tmp->dev = netdev;
+ skb_tmp->protocol = htons(ETH_P_ARP);
+
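+	/* Each round sends LP_PKT_CNT copies of the template frame and then
+	 * verifies the packets captured in the loopback RX buffer.
+	 */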
+ for (i = 0; i < cnt; i++) {
+ nic_dev->lb_test_rx_idx = 0;
+ memset(lb_test_rx_buf, 0, LP_PKT_CNT * LP_PKT_LEN);
+
+ for (j = 0; j < LP_PKT_CNT; j++) {
+ skb = pskb_copy(skb_tmp, GFP_ATOMIC);
+ if (!skb) {
+ dev_kfree_skb_any(skb_tmp);
+ nicif_err(nic_dev, drv, netdev,
+ "Copy skb failed for loopback test\n");
+ return -ENOMEM;
+ }
+
+ /* mark index for every pkt */
+ skb->data[LP_PKT_LEN - 1] = j;
+
+ if (hinic3_lb_xmit_frame(skb, netdev)) {
+ dev_kfree_skb_any(skb);
+ dev_kfree_skb_any(skb_tmp);
+ nicif_err(nic_dev, drv, netdev,
+ "Xmit pkt failed for loopback test\n");
+ return -EBUSY;
+ }
+ }
+
+ /* wait till all pkts received to RX buffer */
+ msleep(HINIC3_WAIT_PKTS_TO_RX_BUFFER);
+
+ for (j = 0; j < LP_PKT_CNT; j++) {
+ if (memcmp((lb_test_rx_buf + (j * LP_PKT_LEN)),
+ skb_tmp->data, (LP_PKT_LEN - 1)) ||
+ (*(lb_test_rx_buf + ((j * LP_PKT_LEN) +
+ (LP_PKT_LEN - 1))) != j)) {
+ dev_kfree_skb_any(skb_tmp);
+ nicif_err(nic_dev, drv, netdev,
+					  "Compare pkt failed in loopback test (index=0x%02x, data[%d]=0x%02x)\n",
+ (j + (i * LP_PKT_CNT)),
+ (LP_PKT_LEN - 1),
+ *(lb_test_rx_buf +
+ (((j * LP_PKT_LEN) +
+ (LP_PKT_LEN - 1)))));
+ return -EIO;
+ }
+ }
+ }
+
+ dev_kfree_skb_any(skb_tmp);
+	nicif_info(nic_dev, drv, netdev, "Loopback test succeeded\n");
+ return 0;
+}
+
+enum diag_test_index {
+ INTERNAL_LP_TEST = 0,
+ EXTERNAL_LP_TEST = 1,
+ DIAG_TEST_MAX = 2,
+};
+
+#define HINIC3_INTERNAL_LP_MODE 5
+static int do_lp_test(struct hinic3_nic_dev *nic_dev, u32 *flags, u32 test_time,
+ enum diag_test_index *test_index)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 *lb_test_rx_buf = NULL;
+ int err = 0;
+
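+	/* Internal test: put the port into loopback mode first. For the
+	 * external test the frames are expected to be looped back outside
+	 * the port, so no loopback mode is configured here.
+	 */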
+ if (!(*flags & ETH_TEST_FL_EXTERNAL_LB)) {
+ *test_index = INTERNAL_LP_TEST;
+ if (hinic3_set_loopback_mode(nic_dev->hwdev,
+ HINIC3_INTERNAL_LP_MODE, true)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to set port loopback mode before loopback test\n");
+ return -EFAULT;
+ }
+
+ /* suspend 5000 ms, waiting for port to stop receiving frames */
+ msleep(5000);
+ } else {
+ *test_index = EXTERNAL_LP_TEST;
+ }
+
+ lb_test_rx_buf = vmalloc(LP_PKT_CNT * LP_PKT_LEN);
+ if (!lb_test_rx_buf) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to alloc RX buffer for loopback test\n");
+ err = -ENOMEM;
+ } else {
+ nic_dev->lb_test_rx_buf = lb_test_rx_buf;
+ nic_dev->lb_pkt_len = LP_PKT_LEN;
+ set_bit(HINIC3_LP_TEST, &nic_dev->flags);
+
+ if (hinic3_run_lp_test(nic_dev, test_time))
+ err = -EFAULT;
+
+ clear_bit(HINIC3_LP_TEST, &nic_dev->flags);
+ msleep(HINIC3_WAIT_CLEAR_LP_TEST);
+ vfree(lb_test_rx_buf);
+ nic_dev->lb_test_rx_buf = NULL;
+ }
+
+ if (!(*flags & ETH_TEST_FL_EXTERNAL_LB)) {
+ if (hinic3_set_loopback_mode(nic_dev->hwdev,
+ HINIC3_INTERNAL_LP_MODE, false)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to cancel port loopback mode after loopback test\n");
+ err = -EFAULT;
+ }
+ } else {
+ *flags |= ETH_TEST_FL_EXTERNAL_LB_DONE;
+ }
+
+ return err;
+}
+
+static void hinic3_lp_test(struct net_device *netdev, struct ethtool_test *eth_test,
+ u64 *data, u32 test_time)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ enum diag_test_index test_index = 0;
+ u8 link_status = 0;
+ int err;
+ u32 test_time_real = test_time;
+
+	/* loopback test is not supported when the netdev is closed */
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev,
+			  "Loopback test is not supported when netdev is closed\n");
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ data[PORT_DOWN_ERR_IDX] = 1;
+ return;
+ }
+ if (test_time_real == 0)
+ test_time_real = LP_DEFAULT_TIME;
+
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+
+	err = do_lp_test(nic_dev, &eth_test->flags, test_time_real, &test_index);
+ if (err) {
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ data[test_index] = 1;
+ }
+
+ netif_tx_wake_all_queues(netdev);
+
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_status);
+ if (!err && link_status)
+ netif_carrier_on(netdev);
+}
+
+static void hinic3_diag_test(struct net_device *netdev,
+ struct ethtool_test *eth_test, u64 *data)
+{
+ memset(data, 0, DIAG_TEST_MAX * sizeof(u64));
+
+ hinic3_lp_test(netdev, eth_test, data, 0);
+}
+
+#if defined(ETHTOOL_GFECPARAM) && defined(ETHTOOL_SFECPARAM)
+static int hinic3_get_fecparam(struct net_device *netdev, struct ethtool_fecparam *fecparam)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 advertised_fec = 0;
+ u8 supported_fec = 0;
+ int err;
+
+ if (fecparam->cmd != ETHTOOL_GFECPARAM) {
+ nicif_err(nic_dev, drv, netdev,
+			  "Get fecparam cmd error, expected: 0x%x, actual: 0x%x\n",
+ ETHTOOL_GFECPARAM, fecparam->cmd);
+ return -EINVAL;
+ }
+
+ err = get_fecparam(nic_dev->hwdev, &advertised_fec, &supported_fec);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Get fec param failed\n");
+ return err;
+ }
+ fecparam->active_fec = (u32)advertised_fec;
+ fecparam->fec = (u32)supported_fec;
+
+ nicif_info(nic_dev, drv, netdev, "Get fec param success\n");
+ return 0;
+}
+
+static int hinic3_set_fecparam(struct net_device *netdev, struct ethtool_fecparam *fecparam)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ if (fecparam->cmd != ETHTOOL_SFECPARAM) {
+		nicif_err(nic_dev, drv, netdev,
+			  "Set fecparam cmd error, expected: 0x%x, actual: 0x%x\n",
+			  ETHTOOL_SFECPARAM, fecparam->cmd);
+ return -EINVAL;
+ }
+
+ err = set_fecparam(nic_dev->hwdev, (u8)fecparam->fec);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Set fec param failed\n");
+ return err;
+ }
+
+ nicif_info(nic_dev, drv, netdev, "Set fec param success\n");
+ return 0;
+}
+#endif
+
+static const struct ethtool_ops hinic3_ethtool_ops = {
+#ifdef SUPPORTED_COALESCE_PARAMS
+ .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
+ ETHTOOL_COALESCE_PKT_RATE_RX_USECS |
+ ETHTOOL_COALESCE_MAX_FRAMES |
+ ETHTOOL_COALESCE_USECS_LOW_HIGH |
+ ETHTOOL_COALESCE_MAX_FRAMES_LOW_HIGH,
+#endif
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+ .get_link_ksettings = hinic3_get_link_ksettings,
+ .set_link_ksettings = hinic3_set_link_ksettings,
+#endif
+#endif
+#ifndef HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+ .get_settings = hinic3_get_settings,
+ .set_settings = hinic3_set_settings,
+#endif
+
+ .get_drvinfo = hinic3_get_drvinfo,
+ .get_msglevel = hinic3_get_msglevel,
+ .set_msglevel = hinic3_set_msglevel,
+ .nway_reset = hinic3_nway_reset,
+#ifdef CONFIG_MODULE_PROF
+ .get_link = hinic3_get_link,
+#else
+ .get_link = ethtool_op_get_link,
+#endif
+ .get_ringparam = hinic3_get_ringparam,
+ .set_ringparam = hinic3_set_ringparam,
+ .get_pauseparam = hinic3_get_pauseparam,
+ .set_pauseparam = hinic3_set_pauseparam,
+ .get_sset_count = hinic3_get_sset_count,
+ .get_ethtool_stats = hinic3_get_ethtool_stats,
+ .get_strings = hinic3_get_strings,
+
+ .self_test = hinic3_diag_test,
+
+#ifndef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+#ifdef HAVE_ETHTOOL_SET_PHYS_ID
+ .set_phys_id = hinic3_set_phys_id,
+#else
+ .phys_id = hinic3_phys_id,
+#endif
+#endif
+
+ .get_coalesce = hinic3_get_coalesce,
+ .set_coalesce = hinic3_set_coalesce,
+#if defined(ETHTOOL_PERQUEUE) && defined(ETHTOOL_GCOALESCE)
+ .get_per_queue_coalesce = hinic3_get_per_queue_coalesce,
+ .set_per_queue_coalesce = hinic3_set_per_queue_coalesce,
+#endif
+
+#if defined(ETHTOOL_GFECPARAM) && defined(ETHTOOL_SFECPARAM)
+ .get_fecparam = hinic3_get_fecparam,
+ .set_fecparam = hinic3_set_fecparam,
+#endif
+
+ .get_rxnfc = hinic3_get_rxnfc,
+ .set_rxnfc = hinic3_set_rxnfc,
+ .get_priv_flags = hinic3_get_priv_flags,
+ .set_priv_flags = hinic3_set_priv_flags,
+
+#ifndef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+
+#ifdef ETHTOOL_GMODULEEEPROM
+ .get_module_info = hinic3_get_module_info,
+ .get_module_eeprom = hinic3_get_module_eeprom,
+#endif
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+};
+
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+static const struct ethtool_ops_ext hinic3_ethtool_ops_ext = {
+ .size = sizeof(struct ethtool_ops_ext),
+ .set_phys_id = hinic3_set_phys_id,
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+#ifdef ETHTOOL_GMODULEEEPROM
+ .get_module_info = hinic3_get_module_info,
+ .get_module_eeprom = hinic3_get_module_eeprom,
+#endif
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+};
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+
+static const struct ethtool_ops hinic3vf_ethtool_ops = {
+#ifdef SUPPORTED_COALESCE_PARAMS
+ .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
+ ETHTOOL_COALESCE_PKT_RATE_RX_USECS |
+ ETHTOOL_COALESCE_MAX_FRAMES |
+ ETHTOOL_COALESCE_USECS_LOW_HIGH |
+ ETHTOOL_COALESCE_MAX_FRAMES_LOW_HIGH,
+#endif
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+ .get_link_ksettings = hinic3_get_link_ksettings,
+#endif
+#else
+ .get_settings = hinic3_get_settings,
+#endif
+ .get_drvinfo = hinic3_get_drvinfo,
+ .get_msglevel = hinic3_get_msglevel,
+ .set_msglevel = hinic3_set_msglevel,
+ .get_link = ethtool_op_get_link,
+ .get_ringparam = hinic3_get_ringparam,
+
+ .set_ringparam = hinic3_set_ringparam,
+ .get_sset_count = hinic3_get_sset_count,
+ .get_ethtool_stats = hinic3_get_ethtool_stats,
+ .get_strings = hinic3_get_strings,
+
+ .get_coalesce = hinic3_get_coalesce,
+ .set_coalesce = hinic3_set_coalesce,
+#if defined(ETHTOOL_PERQUEUE) && defined(ETHTOOL_GCOALESCE)
+ .get_per_queue_coalesce = hinic3_get_per_queue_coalesce,
+ .set_per_queue_coalesce = hinic3_set_per_queue_coalesce,
+#endif
+
+#if defined(ETHTOOL_GFECPARAM) && defined(ETHTOOL_SFECPARAM)
+ .get_fecparam = hinic3_get_fecparam,
+ .set_fecparam = hinic3_set_fecparam,
+#endif
+
+ .get_rxnfc = hinic3_get_rxnfc,
+ .set_rxnfc = hinic3_set_rxnfc,
+ .get_priv_flags = hinic3_get_priv_flags,
+ .set_priv_flags = hinic3_set_priv_flags,
+
+#ifndef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+};
+
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+static const struct ethtool_ops_ext hinic3vf_ethtool_ops_ext = {
+ .size = sizeof(struct ethtool_ops_ext),
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+};
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+
+void hinic3_set_ethtool_ops(struct net_device *netdev)
+{
+ SET_ETHTOOL_OPS(netdev, &hinic3_ethtool_ops);
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ set_ethtool_ops_ext(netdev, &hinic3_ethtool_ops_ext);
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+}
+
+void hinic3vf_set_ethtool_ops(struct net_device *netdev)
+{
+ SET_ETHTOOL_OPS(netdev, &hinic3vf_ethtool_ops);
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ set_ethtool_ops_ext(netdev, &hinic3vf_ethtool_ops_ext);
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c
new file mode 100644
index 0000000..ec89f62
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c
@@ -0,0 +1,1320 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+
+#define HINIC_SET_LINK_STR_LEN 128
+#define HINIC_ETHTOOL_FEC_INFO_LEN 6
+#define HINIC_SUPPORTED_FEC_CMD 0
+#define HINIC_ADVERTISED_FEC_CMD 1
+
+struct hinic3_ethtool_fec {
+ u8 hinic_fec_offset;
+ u8 ethtool_bit_offset;
+};
+
+static struct hinic3_ethtool_fec hinic3_ethtool_fec_info[HINIC_ETHTOOL_FEC_INFO_LEN] = {
+	{PORT_FEC_NOT_SET, 0xFF}, /* ethtool has no corresponding enumeration value */
+ {PORT_FEC_RSFEC, 0x32}, /* ETHTOOL_LINK_MODE_FEC_RS_BIT */
+ {PORT_FEC_BASEFEC, 0x33}, /* ETHTOOL_LINK_MODE_FEC_BASER_BIT */
+ {PORT_FEC_NOFEC, 0x31}, /* ETHTOOL_LINK_MODE_FEC_NONE_BIT */
+ {PORT_FEC_LLRSFEC, 0x4A}, /* ETHTOOL_LINK_MODE_FEC_LLRS_BIT: Available only in later versions */
+	{PORT_FEC_AUTO, 0xFF} /* ethtool has no corresponding enumeration value */
+};
+
+struct hinic3_stats {
+ char name[ETH_GSTRING_LEN];
+ u32 size;
+ int offset;
+};
+
+struct hinic3_netdev_link_count_str {
+ u64 link_down_events_phy;
+};
+
+#define HINIC3_NETDEV_LINK_COUNT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_netdev_link_count_str, _stat_item), \
+ .offset = offsetof(struct hinic3_netdev_link_count_str, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_netdev_link_count[] = {
+ HINIC3_NETDEV_LINK_COUNT(link_down_events_phy),
+};
+
+#define HINIC3_NETDEV_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct rtnl_link_stats64, _stat_item), \
+ .offset = offsetof(struct rtnl_link_stats64, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_netdev_stats[] = {
+ HINIC3_NETDEV_STAT(rx_packets),
+ HINIC3_NETDEV_STAT(tx_packets),
+ HINIC3_NETDEV_STAT(rx_bytes),
+ HINIC3_NETDEV_STAT(tx_bytes),
+ HINIC3_NETDEV_STAT(rx_errors),
+ HINIC3_NETDEV_STAT(tx_errors),
+ HINIC3_NETDEV_STAT(rx_dropped),
+ HINIC3_NETDEV_STAT(tx_dropped),
+ HINIC3_NETDEV_STAT(multicast),
+ HINIC3_NETDEV_STAT(collisions),
+ HINIC3_NETDEV_STAT(rx_length_errors),
+ HINIC3_NETDEV_STAT(rx_over_errors),
+ HINIC3_NETDEV_STAT(rx_crc_errors),
+ HINIC3_NETDEV_STAT(rx_frame_errors),
+ HINIC3_NETDEV_STAT(rx_fifo_errors),
+ HINIC3_NETDEV_STAT(rx_missed_errors),
+ HINIC3_NETDEV_STAT(tx_aborted_errors),
+ HINIC3_NETDEV_STAT(tx_carrier_errors),
+ HINIC3_NETDEV_STAT(tx_fifo_errors),
+ HINIC3_NETDEV_STAT(tx_heartbeat_errors),
+};
+
+#define HINIC3_NIC_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_nic_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_nic_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_nic_dev_stats[] = {
+ HINIC3_NIC_STAT(netdev_tx_timeout),
+};
+
+static struct hinic3_stats hinic3_nic_dev_stats_extern[] = {
+ HINIC3_NIC_STAT(tx_carrier_off_drop),
+ HINIC3_NIC_STAT(tx_invalid_qid),
+ HINIC3_NIC_STAT(rsvd1),
+ HINIC3_NIC_STAT(rsvd2),
+};
+
+#define HINIC3_RXQ_STAT(_stat_item) { \
+ .name = "rxq%d_"#_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_rxq_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_rxq_stats, _stat_item) \
+}
+
+#define HINIC3_TXQ_STAT(_stat_item) { \
+ .name = "txq%d_"#_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_txq_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_txq_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_rx_queue_stats[] = {
+ HINIC3_RXQ_STAT(packets),
+ HINIC3_RXQ_STAT(bytes),
+ HINIC3_RXQ_STAT(errors),
+ HINIC3_RXQ_STAT(csum_errors),
+ HINIC3_RXQ_STAT(other_errors),
+ HINIC3_RXQ_STAT(dropped),
+#ifdef HAVE_XDP_SUPPORT
+ HINIC3_RXQ_STAT(xdp_dropped),
+#endif
+ HINIC3_RXQ_STAT(rx_buf_empty),
+};
+
+static struct hinic3_stats hinic3_rx_queue_stats_extern[] = {
+ HINIC3_RXQ_STAT(alloc_skb_err),
+ HINIC3_RXQ_STAT(alloc_rx_buf_err),
+ HINIC3_RXQ_STAT(xdp_large_pkt),
+ HINIC3_RXQ_STAT(restore_drop_sge),
+ HINIC3_RXQ_STAT(rsvd2),
+};
+
+static struct hinic3_stats hinic3_tx_queue_stats[] = {
+ HINIC3_TXQ_STAT(packets),
+ HINIC3_TXQ_STAT(bytes),
+ HINIC3_TXQ_STAT(busy),
+ HINIC3_TXQ_STAT(wake),
+ HINIC3_TXQ_STAT(dropped),
+};
+
+static struct hinic3_stats hinic3_tx_queue_stats_extern[] = {
+ HINIC3_TXQ_STAT(skb_pad_err),
+ HINIC3_TXQ_STAT(frag_len_overflow),
+ HINIC3_TXQ_STAT(offload_cow_skb_err),
+ HINIC3_TXQ_STAT(map_frag_err),
+ HINIC3_TXQ_STAT(unknown_tunnel_pkt),
+ HINIC3_TXQ_STAT(frag_size_err),
+ HINIC3_TXQ_STAT(rsvd1),
+ HINIC3_TXQ_STAT(rsvd2),
+};
+
+#define HINIC3_FUNC_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_vport_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_vport_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_function_stats[] = {
+ HINIC3_FUNC_STAT(tx_unicast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_unicast_bytes_vport),
+ HINIC3_FUNC_STAT(tx_multicast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_multicast_bytes_vport),
+ HINIC3_FUNC_STAT(tx_broadcast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_broadcast_bytes_vport),
+
+ HINIC3_FUNC_STAT(rx_unicast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_unicast_bytes_vport),
+ HINIC3_FUNC_STAT(rx_multicast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_multicast_bytes_vport),
+ HINIC3_FUNC_STAT(rx_broadcast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_broadcast_bytes_vport),
+
+ HINIC3_FUNC_STAT(tx_discard_vport),
+ HINIC3_FUNC_STAT(rx_discard_vport),
+ HINIC3_FUNC_STAT(tx_err_vport),
+ HINIC3_FUNC_STAT(rx_err_vport),
+};
+
+#define HINIC3_PORT_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct mag_cmd_port_stats, _stat_item), \
+ .offset = offsetof(struct mag_cmd_port_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_port_stats[] = {
+ HINIC3_PORT_STAT(mac_tx_fragment_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_undersize_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_undermin_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_64_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_65_127_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_128_255_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_256_511_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_512_1023_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1024_1518_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_2047_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_2048_4095_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_4096_8191_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_8192_9216_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_9217_12287_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_12288_16383_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_max_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_max_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_oversize_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_jabber_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_bad_oct_num),
+ HINIC3_PORT_STAT(mac_tx_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_good_oct_num),
+ HINIC3_PORT_STAT(mac_tx_total_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_total_oct_num),
+ HINIC3_PORT_STAT(mac_tx_uni_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_multi_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_broad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pause_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri0_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri1_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri2_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri3_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri4_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri5_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri6_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri7_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_control_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_err_all_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_from_app_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_from_app_bad_pkt_num),
+
+ HINIC3_PORT_STAT(mac_rx_fragment_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_undersize_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_undermin_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_64_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_65_127_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_128_255_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_256_511_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_512_1023_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1024_1518_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_2047_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_2048_4095_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_4096_8191_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_8192_9216_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_9217_12287_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_12288_16383_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_max_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_max_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_oversize_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_jabber_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_bad_oct_num),
+ HINIC3_PORT_STAT(mac_rx_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_good_oct_num),
+ HINIC3_PORT_STAT(mac_rx_total_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_total_oct_num),
+ HINIC3_PORT_STAT(mac_rx_uni_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_multi_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_broad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pause_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri0_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri1_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri2_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri3_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri4_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri5_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri6_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri7_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_control_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_sym_err_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_fcs_err_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_send_app_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_send_app_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_unfilter_pkt_num),
+};
+
+#define HINIC3_RSFEC_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct mag_cmd_rsfec_stats, _stat_item), \
+ .offset = offsetof(struct mag_cmd_rsfec_stats, _stat_item) \
+}
+
+static struct hinic3_stats g_hinic3_rsfec_stats[] = {
+ HINIC3_RSFEC_STAT(rx_err_lane_phy),
+};
+
+#define HINIC3_FGPA_PORT_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_phy_fpga_port_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_phy_fpga_port_stats, _stat_item) \
+}
+
+static char g_hinic_priv_flags_strings[][ETH_GSTRING_LEN] = {
+ "Symmetric-RSS",
+ "Force-Link-up",
+ "Rxq_Recovery",
+};
+
+u32 hinic3_get_io_stats_size(const struct hinic3_nic_dev *nic_dev)
+{
+ u32 count;
+
+ count = (u32)(ARRAY_LEN(hinic3_nic_dev_stats) +
+ ARRAY_LEN(hinic3_nic_dev_stats_extern) +
+ (ARRAY_LEN(hinic3_tx_queue_stats) +
+ ARRAY_LEN(hinic3_tx_queue_stats_extern) +
+ ARRAY_LEN(hinic3_rx_queue_stats) +
+ ARRAY_LEN(hinic3_rx_queue_stats_extern)) * nic_dev->max_qps);
+
+ return count;
+}
+
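+/* Read a counter of 1, 2, 4 or 8 bytes from ptr and widen it to u64 */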
+#define GET_VALUE_OF_PTR(size, ptr) ( \
+ (size) == sizeof(u64) ? *(u64 *)(ptr) : \
+ (size) == sizeof(u32) ? *(u32 *)(ptr) : \
+ (size) == sizeof(u16) ? *(u16 *)(ptr) : *(u8 *)(ptr) \
+)
+
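+/* Pack each stat in array into show items, reading the values from stats_ptr */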
+#define DEV_STATS_PACK(items, item_idx, array, stats_ptr) do { \
+ int j; \
+ for (j = 0; j < ARRAY_LEN(array); j++) { \
+ memcpy((items)[item_idx].name, (array)[j].name, \
+ HINIC3_SHOW_ITEM_LEN); \
+ (items)[item_idx].hexadecimal = 0; \
+ (items)[item_idx].value = \
+ GET_VALUE_OF_PTR((array)[j].size, \
+ (char *)(stats_ptr) + (array)[j].offset); \
+ (item_idx)++; \
+ } \
+} while (0)
+
+int hinic3_rx_queue_stat_pack(struct hinic3_show_item *item,
+ struct hinic3_stats *stat, struct hinic3_rxq_stats *rxq_stats, u16 qid)
+{
+ int ret;
+
+ ret = snprintf(item->name, HINIC3_SHOW_ITEM_LEN, stat->name, qid);
+ if (ret < 0)
+ return -EINVAL;
+
+ item->hexadecimal = 0;
+ item->value = GET_VALUE_OF_PTR(stat->size, (char *)(rxq_stats) + stat->offset);
+
+ return 0;
+}
+
+int hinic3_tx_queue_stat_pack(struct hinic3_show_item *item,
+ struct hinic3_stats *stat, struct hinic3_txq_stats *txq_stats, u16 qid)
+{
+ int ret;
+
+ ret = snprintf(item->name, HINIC3_SHOW_ITEM_LEN, stat->name, qid);
+ if (ret < 0)
+ return -EINVAL;
+
+ item->hexadecimal = 0;
+ item->value = GET_VALUE_OF_PTR(stat->size, (char *)(txq_stats) + stat->offset);
+
+ return 0;
+}
+
+int hinic3_get_io_stats(const struct hinic3_nic_dev *nic_dev, void *stats)
+{
+ struct hinic3_show_item *items = stats;
+ int item_idx = 0;
+ u16 qid;
+ int idx;
+ int ret;
+
+ DEV_STATS_PACK(items, item_idx, hinic3_nic_dev_stats, &nic_dev->stats);
+ DEV_STATS_PACK(items, item_idx, hinic3_nic_dev_stats_extern, &nic_dev->stats);
+
+ for (qid = 0; qid < nic_dev->max_qps; qid++) {
+ for (idx = 0; idx < ARRAY_LEN(hinic3_tx_queue_stats); idx++) {
+ ret = hinic3_tx_queue_stat_pack(&items[item_idx++], &hinic3_tx_queue_stats[idx],
+ &nic_dev->txqs[qid].txq_stats, qid);
+ if (ret != 0)
+ return -EINVAL;
+ }
+
+ for (idx = 0; idx < ARRAY_LEN(hinic3_tx_queue_stats_extern); idx++) {
+ ret = hinic3_tx_queue_stat_pack(&items[item_idx++], &hinic3_tx_queue_stats_extern[idx],
+ &nic_dev->txqs[qid].txq_stats, qid);
+ if (ret != 0)
+ return -EINVAL;
+ }
+ }
+
+ for (qid = 0; qid < nic_dev->max_qps; qid++) {
+ for (idx = 0; idx < ARRAY_LEN(hinic3_rx_queue_stats); idx++) {
+ ret = hinic3_rx_queue_stat_pack(&items[item_idx++], &hinic3_rx_queue_stats[idx],
+ &nic_dev->rxqs[qid].rxq_stats, qid);
+ if (ret != 0)
+ return -EINVAL;
+ }
+
+ for (idx = 0; idx < ARRAY_LEN(hinic3_rx_queue_stats_extern); idx++) {
+ ret = hinic3_rx_queue_stat_pack(&items[item_idx++], &hinic3_rx_queue_stats_extern[idx],
+ &nic_dev->rxqs[qid].rxq_stats, qid);
+ if (ret != 0)
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static char g_hinic3_test_strings[][ETH_GSTRING_LEN] = {
+ "Internal lb test (on/offline)",
+ "External lb test (external_lb)",
+};
+
+int hinic3_get_sset_count(struct net_device *netdev, int sset)
+{
+ int count = 0, q_num = 0;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ switch (sset) {
+ case ETH_SS_TEST:
+ return ARRAY_LEN(g_hinic3_test_strings);
+ case ETH_SS_STATS:
+ q_num = nic_dev->q_params.num_qps;
+ count = ARRAY_LEN(hinic3_netdev_stats) +
+ ARRAY_LEN(hinic3_nic_dev_stats) +
+ ARRAY_LEN(hinic3_netdev_link_count) +
+ ARRAY_LEN(hinic3_function_stats) +
+ (ARRAY_LEN(hinic3_tx_queue_stats) +
+ ARRAY_LEN(hinic3_rx_queue_stats)) * q_num;
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ count += ARRAY_LEN(hinic3_port_stats);
+ count += ARRAY_LEN(g_hinic3_rsfec_stats);
+ }
+
+ return count;
+ case ETH_SS_PRIV_FLAGS:
+ return ARRAY_LEN(g_hinic_priv_flags_strings);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
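+/* Collect the per-queue tx/rx software counters into the ethtool data array */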
+static void get_drv_queue_stats(struct hinic3_nic_dev *nic_dev, u64 *data)
+{
+ struct hinic3_txq_stats txq_stats;
+ struct hinic3_rxq_stats rxq_stats;
+ u16 i = 0, j = 0, qid = 0;
+ char *p = NULL;
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ if (!nic_dev->txqs)
+ break;
+
+ hinic3_txq_get_stats(&nic_dev->txqs[qid], &txq_stats);
+ for (j = 0; j < ARRAY_LEN(hinic3_tx_queue_stats); j++, i++) {
+ p = (char *)(&txq_stats) +
+ hinic3_tx_queue_stats[j].offset;
+ data[i] = (hinic3_tx_queue_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+ }
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ if (!nic_dev->rxqs)
+ break;
+
+ hinic3_rxq_get_stats(&nic_dev->rxqs[qid], &rxq_stats);
+ for (j = 0; j < ARRAY_LEN(hinic3_rx_queue_stats); j++, i++) {
+ p = (char *)(&rxq_stats) +
+ hinic3_rx_queue_stats[j].offset;
+ data[i] = (hinic3_rx_queue_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+ }
+}
+
+static u16 get_ethtool_port_stats(struct hinic3_nic_dev *nic_dev, u64 *data)
+{
+ struct mag_cmd_port_stats *port_stats = NULL;
+ char *p = NULL;
+ u16 i = 0, j = 0;
+ int err;
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to malloc port stats\n");
+ memset(&data[i], 0,
+ ARRAY_LEN(hinic3_port_stats) * sizeof(*data));
+ i += ARRAY_LEN(hinic3_port_stats);
+ return i;
+ }
+
+ err = hinic3_get_phy_port_stats(nic_dev->hwdev, port_stats);
+ if (err)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get port stats from fw\n");
+
+ for (j = 0; j < ARRAY_LEN(hinic3_port_stats); j++, i++) {
+ p = (char *)(port_stats) + hinic3_port_stats[j].offset;
+ data[i] = (hinic3_port_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ kfree(port_stats);
+
+ return i;
+}
+
+static u16 get_ethtool_rsfec_stats(struct hinic3_nic_dev *nic_dev, u64 *data)
+{
+ struct mag_cmd_rsfec_stats *port_stats = NULL;
+ char *p = NULL;
+ u16 i = 0, j = 0;
+ int err;
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to malloc port stats\n");
+ memset(&data[i], 0,
+ ARRAY_LEN(g_hinic3_rsfec_stats) * sizeof(*data));
+ i += ARRAY_LEN(g_hinic3_rsfec_stats);
+ return i;
+ }
+
+ err = hinic3_get_phy_rsfec_stats(nic_dev->hwdev, port_stats);
+ if (err)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get rsfec stats from fw\n");
+
+ for (j = 0; j < ARRAY_LEN(g_hinic3_rsfec_stats); j++, i++) {
+ p = (char *)(port_stats) + g_hinic3_rsfec_stats[j].offset;
+ data[i] = (g_hinic3_rsfec_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ kfree(port_stats);
+
+ return i;
+}
+
+void hinic3_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+#ifdef HAVE_NDO_GET_STATS64
+ struct rtnl_link_stats64 temp;
+ const struct rtnl_link_stats64 *net_stats = NULL;
+#else
+ const struct net_device_stats *net_stats = NULL;
+#endif
+ struct hinic3_nic_stats *nic_stats = NULL;
+
+ struct hinic3_vport_stats vport_stats = {0};
+ u16 i = 0, j = 0;
+ char *p = NULL;
+ int err;
+ int link_down_events_phy_tmp = 0;
+ struct hinic3_netdev_link_count_str link_count = {0};
+
+#ifdef HAVE_NDO_GET_STATS64
+ net_stats = dev_get_stats(netdev, &temp);
+#else
+ net_stats = dev_get_stats(netdev);
+#endif
+ for (j = 0; j < ARRAY_LEN(hinic3_netdev_stats); j++, i++) {
+ p = (char *)(net_stats) + hinic3_netdev_stats[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_netdev_stats[j].size, p);
+ }
+
+ nic_stats = &nic_dev->stats;
+ for (j = 0; j < ARRAY_LEN(hinic3_nic_dev_stats); j++, i++) {
+ p = (char *)(nic_stats) + hinic3_nic_dev_stats[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_nic_dev_stats[j].size, p);
+ }
+
+ err = hinic3_get_link_event_stats(nic_dev->hwdev, &link_down_events_phy_tmp);
+
+ link_count.link_down_events_phy = (u64)link_down_events_phy_tmp;
+ for (j = 0; j < ARRAY_LEN(hinic3_netdev_link_count); j++, i++) {
+ p = (char *)(&link_count) + hinic3_netdev_link_count[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_netdev_link_count[j].size, p);
+ }
+
+ err = hinic3_get_vport_stats(nic_dev->hwdev, hinic3_global_func_id(nic_dev->hwdev),
+ &vport_stats);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to get function stats from fw\n");
+
+ for (j = 0; j < ARRAY_LEN(hinic3_function_stats); j++, i++) {
+ p = (char *)(&vport_stats) + hinic3_function_stats[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_function_stats[j].size, p);
+ }
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ i += get_ethtool_port_stats(nic_dev, data + i);
+ i += get_ethtool_rsfec_stats(nic_dev, data + i);
+ }
+
+ get_drv_queue_stats(nic_dev, data + i);
+}
+
+static u16 get_drv_dev_strings(struct hinic3_nic_dev *nic_dev, char *p)
+{
+ u16 i, cnt = 0;
+
+ for (i = 0; i < ARRAY_LEN(hinic3_netdev_stats); i++) {
+ memcpy(p, hinic3_netdev_stats[i].name,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ for (i = 0; i < ARRAY_LEN(hinic3_nic_dev_stats); i++) {
+ memcpy(p, hinic3_nic_dev_stats[i].name, ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ for (i = 0; i < ARRAY_LEN(hinic3_netdev_link_count); i++) {
+ memcpy(p, hinic3_netdev_link_count[i].name, ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ return cnt;
+}
+
+static u16 get_hw_stats_strings(struct hinic3_nic_dev *nic_dev, char *p)
+{
+ u16 i, cnt = 0;
+
+ for (i = 0; i < ARRAY_LEN(hinic3_function_stats); i++) {
+ memcpy(p, hinic3_function_stats[i].name,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ for (i = 0; i < ARRAY_LEN(hinic3_port_stats); i++) {
+ memcpy(p, hinic3_port_stats[i].name, ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ for (i = 0; i < ARRAY_LEN(g_hinic3_rsfec_stats); i++) {
+ memcpy(p, g_hinic3_rsfec_stats[i].name,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ }
+
+ return cnt;
+}
+
+static u16 get_qp_stats_strings(const struct hinic3_nic_dev *nic_dev, char *p)
+{
+ u16 i = 0, j = 0, cnt = 0;
+ int err;
+
+ for (i = 0; i < nic_dev->q_params.num_qps; i++) {
+ for (j = 0; j < ARRAY_LEN(hinic3_tx_queue_stats); j++) {
+ err = sprintf(p, hinic3_tx_queue_stats[j].name, i);
+ if (err < 0)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to sprintf tx queue stats name, idx_qps: %u, idx_stats: %u\n",
+ i, j);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ }
+
+ for (i = 0; i < nic_dev->q_params.num_qps; i++) {
+ for (j = 0; j < ARRAY_LEN(hinic3_rx_queue_stats); j++) {
+ err = sprintf(p, hinic3_rx_queue_stats[j].name, i);
+ if (err < 0)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to sprintf rx queue stats name, idx_qps: %u, idx_stats: %u\n",
+ i, j);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ }
+
+ return cnt;
+}
+
+void hinic3_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ char *p = (char *)data;
+ u16 offset = 0;
+
+ switch (stringset) {
+ case ETH_SS_TEST:
+ memcpy(data, *g_hinic3_test_strings, sizeof(g_hinic3_test_strings));
+ return;
+ case ETH_SS_STATS:
+ offset = get_drv_dev_strings(nic_dev, p);
+ offset += get_hw_stats_strings(nic_dev,
+ p + offset * ETH_GSTRING_LEN);
+ get_qp_stats_strings(nic_dev, p + offset * ETH_GSTRING_LEN);
+
+ return;
+ case ETH_SS_PRIV_FLAGS:
+ memcpy(data, g_hinic_priv_flags_strings,
+ sizeof(g_hinic_priv_flags_strings));
+ return;
+ default:
+ nicif_err(nic_dev, drv, netdev,
+ "Invalid string set %u.", stringset);
+ return;
+ }
+}
+
+static const u32 hinic3_mag_link_mode_ge[] = {
+ ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
+ ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
+ ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_10ge_base_r[] = {
+ ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseR_FEC_BIT,
+ ETHTOOL_LINK_MODE_10000baseCR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseLR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseLRM_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_25ge_base_r[] = {
+ ETHTOOL_LINK_MODE_25000baseCR_Full_BIT,
+ ETHTOOL_LINK_MODE_25000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_40ge_base_r4[] = {
+ ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_50ge_base_r[] = {
+ ETHTOOL_LINK_MODE_50000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseCR_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_50ge_base_r2[] = {
+ ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_100ge_base_r[] = {
+ ETHTOOL_LINK_MODE_100000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseCR_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_100ge_base_r2[] = {
+ ETHTOOL_LINK_MODE_100000baseKR2_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR2_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseCR2_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_100ge_base_r4[] = {
+ ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_200ge_base_r2[] = {
+ ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseSR2_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseCR2_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_200ge_base_r4[] = {
+ ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT,
+};
+
+struct hw2ethtool_link_mode {
+ const u32 *link_mode_bit_arr;
+ u32 arr_size;
+ u32 speed;
+};
+
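+/* Map each hw link mode to its ethtool link-mode bits and nominal speed */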
+static const struct hw2ethtool_link_mode
+ hw2ethtool_link_mode_table[LINK_MODE_MAX_NUMBERS] = {
+ [LINK_MODE_GE] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_ge,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_ge),
+ .speed = SPEED_1000,
+ },
+ [LINK_MODE_10GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_10ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_10ge_base_r),
+ .speed = SPEED_10000,
+ },
+ [LINK_MODE_25GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_25ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_25ge_base_r),
+ .speed = SPEED_25000,
+ },
+ [LINK_MODE_40GE_BASE_R4] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_40ge_base_r4,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_40ge_base_r4),
+ .speed = SPEED_40000,
+ },
+ [LINK_MODE_50GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_50ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_50ge_base_r),
+ .speed = SPEED_50000,
+ },
+ [LINK_MODE_50GE_BASE_R2] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_50ge_base_r2,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_50ge_base_r2),
+ .speed = SPEED_50000,
+ },
+ [LINK_MODE_100GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_100ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_100ge_base_r),
+ .speed = SPEED_100000,
+ },
+ [LINK_MODE_100GE_BASE_R2] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_100ge_base_r2,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_100ge_base_r2),
+ .speed = SPEED_100000,
+ },
+ [LINK_MODE_100GE_BASE_R4] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_100ge_base_r4,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_100ge_base_r4),
+ .speed = SPEED_100000,
+ },
+ [LINK_MODE_200GE_BASE_R2] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_200ge_base_r2,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_200ge_base_r2),
+ .speed = SPEED_200000,
+ },
+ [LINK_MODE_200GE_BASE_R4] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_200ge_base_r4,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_200ge_base_r4),
+ .speed = SPEED_200000,
+ },
+};
+
+#define GET_SUPPORTED_MODE 0
+#define GET_ADVERTISED_MODE 1
+
+struct cmd_link_settings {
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(supported);
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
+
+ u32 speed;
+ u8 duplex;
+ u8 port;
+ u8 autoneg;
+};
+
+#define ETHTOOL_ADD_SUPPORTED_LINK_MODE(ecmd, mode) \
+ set_bit(ETHTOOL_LINK_MODE_##mode##_BIT, (ecmd)->supported)
+#define ETHTOOL_ADD_ADVERTISED_LINK_MODE(ecmd, mode) \
+ set_bit(ETHTOOL_LINK_MODE_##mode##_BIT, (ecmd)->advertising)
+
+static void ethtool_add_supported_speed_link_mode(struct cmd_link_settings *link_settings,
+ u32 mode)
+{
+ u32 i;
+
+ for (i = 0; i < hw2ethtool_link_mode_table[mode].arr_size; i++) {
+ if (hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i] >=
+ __ETHTOOL_LINK_MODE_MASK_NBITS)
+ continue;
+ set_bit(hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i],
+ link_settings->supported);
+ }
+}
+
+static void ethtool_add_advertised_speed_link_mode(struct cmd_link_settings *link_settings,
+ u32 mode)
+{
+ u32 i;
+
+ for (i = 0; i < hw2ethtool_link_mode_table[mode].arr_size; i++) {
+ if (hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i] >=
+ __ETHTOOL_LINK_MODE_MASK_NBITS)
+ continue;
+ set_bit(hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i],
+ link_settings->advertising);
+ }
+}
+
+/* Related to enum mag_cmd_port_speed */
+static u32 hw_to_ethtool_speed[] = {
+ (u32)SPEED_UNKNOWN, SPEED_10, SPEED_100, SPEED_1000, SPEED_10000,
+ SPEED_25000, SPEED_40000, SPEED_50000, SPEED_100000, SPEED_200000
+};
+
+static int hinic3_ethtool_to_hw_speed_level(u32 speed)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_LEN(hw_to_ethtool_speed); i++) {
+ if (hw_to_ethtool_speed[i] == speed)
+ break;
+ }
+
+ return i;
+}
+
+static void hinic3_add_ethtool_link_mode(struct cmd_link_settings *link_settings,
+ u32 hw_link_mode, u32 name)
+{
+ u32 link_mode;
+
+ for (link_mode = 0; link_mode < LINK_MODE_MAX_NUMBERS; link_mode++) {
+ if (hw_link_mode & BIT(link_mode)) {
+ if (name == GET_SUPPORTED_MODE)
+ ethtool_add_supported_speed_link_mode(
+ link_settings, link_mode);
+ else
+ ethtool_add_advertised_speed_link_mode(
+ link_settings, link_mode);
+ }
+ }
+}
+
+static int hinic3_link_speed_set(struct hinic3_nic_dev *nic_dev,
+ struct cmd_link_settings *link_settings,
+ struct nic_port_info *port_info)
+{
+ u8 link_state = 0;
+ int err;
+
+ if (port_info->supported_mode != LINK_MODE_UNKNOWN)
+ hinic3_add_ethtool_link_mode(link_settings,
+ port_info->supported_mode,
+ GET_SUPPORTED_MODE);
+ if (port_info->advertised_mode != LINK_MODE_UNKNOWN)
+ hinic3_add_ethtool_link_mode(link_settings,
+ port_info->advertised_mode,
+ GET_ADVERTISED_MODE);
+
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_state);
+ if (!err && link_state) {
+ if (hinic3_get_bond_create_mode(nic_dev->hwdev)) {
+ link_settings->speed = port_info->bond_speed;
+ } else {
+ link_settings->speed =
+ port_info->speed <
+ ARRAY_LEN(hw_to_ethtool_speed) ?
+ hw_to_ethtool_speed[port_info->speed] :
+ (u32)SPEED_UNKNOWN;
+ }
+
+ link_settings->duplex = port_info->duplex;
+ } else {
+ link_settings->speed = (u32)SPEED_UNKNOWN;
+ link_settings->duplex = DUPLEX_UNKNOWN;
+ }
+
+ return 0;
+}
+
+static void hinic3_link_port_type(struct cmd_link_settings *link_settings,
+ u8 port_type)
+{
+ switch (port_type) {
+ case MAG_CMD_WIRE_TYPE_ELECTRIC:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, TP);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, TP);
+ link_settings->port = PORT_TP;
+ break;
+
+ case MAG_CMD_WIRE_TYPE_AOC:
+ case MAG_CMD_WIRE_TYPE_MM:
+ case MAG_CMD_WIRE_TYPE_SM:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, FIBRE);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, FIBRE);
+ link_settings->port = PORT_FIBRE;
+ break;
+
+ case MAG_CMD_WIRE_TYPE_COPPER:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, FIBRE);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, FIBRE);
+ link_settings->port = PORT_DA;
+ break;
+
+ case MAG_CMD_WIRE_TYPE_BACKPLANE:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, Backplane);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Backplane);
+ link_settings->port = PORT_NONE;
+ break;
+
+ default:
+ link_settings->port = PORT_OTHER;
+ break;
+ }
+}
+
+static int get_link_pause_settings(struct hinic3_nic_dev *nic_dev,
+ struct cmd_link_settings *link_settings)
+{
+ struct nic_pause_config nic_pause = {0};
+ int err;
+
+ err = hinic3_get_pause_info(nic_dev->hwdev, &nic_pause);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get pauseparam from hw\n");
+ return err;
+ }
+
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, Pause);
+ if (nic_pause.rx_pause && nic_pause.tx_pause) {
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Pause);
+ } else if (nic_pause.tx_pause) {
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings,
+ Asym_Pause);
+ } else if (nic_pause.rx_pause) {
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Pause);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings,
+ Asym_Pause);
+ }
+
+ return 0;
+}
+
+static bool is_bit_offset_defined(u8 bit_offset)
+{
+ if (bit_offset < __ETHTOOL_LINK_MODE_MASK_NBITS)
+ return true;
+ return false;
+}
+
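+/*
+ * Translate hw FEC capability bits into ethtool supported/advertised
+ * link-mode bits; only one advertised FEC mode is reported.
+ */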
+static void
+ethtool_add_supported_advertised_fec(struct cmd_link_settings *link_settings,
+ u32 fec, u8 cmd)
+{
+	u8 i;
+
+	for (i = 0; i < HINIC_ETHTOOL_FEC_INFO_LEN; i++) {
+ if ((fec & BIT(hinic3_ethtool_fec_info[i].hinic_fec_offset)) == 0)
+ continue;
+ if ((is_bit_offset_defined(hinic3_ethtool_fec_info[i].ethtool_bit_offset) == true) &&
+ (cmd == HINIC_ADVERTISED_FEC_CMD)) {
+ set_bit(hinic3_ethtool_fec_info[i].ethtool_bit_offset, link_settings->advertising);
+ return; /* There can be only one advertised fec mode. */
+ }
+ if ((is_bit_offset_defined(hinic3_ethtool_fec_info[i].ethtool_bit_offset) == true) &&
+ (cmd == HINIC_SUPPORTED_FEC_CMD))
+ set_bit(hinic3_ethtool_fec_info[i].ethtool_bit_offset, link_settings->supported);
+ }
+}
+
+static void hinic3_link_fec_type(struct cmd_link_settings *link_settings,
+ u32 fec, u32 supported_fec)
+{
+ ethtool_add_supported_advertised_fec(link_settings, supported_fec, HINIC_SUPPORTED_FEC_CMD);
+ ethtool_add_supported_advertised_fec(link_settings, fec, HINIC_ADVERTISED_FEC_CMD);
+}
+
+static int get_link_settings(struct net_device *netdev,
+ struct cmd_link_settings *link_settings)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_port_info port_info = {0};
+ int err;
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to get port info\n");
+ return err;
+ }
+
+ err = hinic3_link_speed_set(nic_dev, link_settings, &port_info);
+ if (err)
+ return err;
+
+ hinic3_link_port_type(link_settings, port_info.port_type);
+
+ hinic3_link_fec_type(link_settings, BIT(port_info.fec),
+ port_info.supported_fec_mode);
+
+ link_settings->autoneg = port_info.autoneg_state == PORT_CFG_AN_ON ?
+ AUTONEG_ENABLE : AUTONEG_DISABLE;
+ if (port_info.autoneg_cap)
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, Autoneg);
+ if (port_info.autoneg_state == PORT_CFG_AN_ON)
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Autoneg);
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ err = get_link_pause_settings(nic_dev, link_settings);
+
+ return err;
+}
+
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+int hinic3_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *link_settings)
+{
+ struct cmd_link_settings settings = { { 0 } };
+ struct ethtool_link_settings *base = &link_settings->base;
+ int err;
+
+ ethtool_link_ksettings_zero_link_mode(link_settings, supported);
+ ethtool_link_ksettings_zero_link_mode(link_settings, advertising);
+
+ err = get_link_settings(netdev, &settings);
+ if (err)
+ return err;
+
+ bitmap_copy(link_settings->link_modes.supported, settings.supported,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+ bitmap_copy(link_settings->link_modes.advertising, settings.advertising,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+
+ base->autoneg = settings.autoneg;
+ base->speed = settings.speed;
+ base->duplex = settings.duplex;
+ base->port = settings.port;
+
+ return 0;
+}
+#endif
+#endif
+
+static bool hinic3_is_support_speed(u32 supported_link, u32 speed)
+{
+ u32 link_mode;
+
+ for (link_mode = 0; link_mode < LINK_MODE_MAX_NUMBERS; link_mode++) {
+ if (!(supported_link & BIT(link_mode)))
+ continue;
+
+ if (hw2ethtool_link_mode_table[link_mode].speed == speed)
+ return true;
+ }
+
+ return false;
+}
+
+static int hinic3_is_speed_legal(struct hinic3_nic_dev *nic_dev,
+ struct nic_port_info *port_info, u32 speed)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int speed_level = 0;
+
+ if (port_info->supported_mode == LINK_MODE_UNKNOWN ||
+ port_info->advertised_mode == LINK_MODE_UNKNOWN) {
+ nicif_err(nic_dev, drv, netdev, "Unknown supported link modes\n");
+ return -EAGAIN;
+ }
+
+ speed_level = hinic3_ethtool_to_hw_speed_level(speed);
+ if (speed_level >= PORT_SPEED_UNKNOWN ||
+ !hinic3_is_support_speed(port_info->supported_mode, speed)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not supported speed: %u\n", speed);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
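+/* Work out which link settings (autoneg and/or speed) must be written to hw */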
+static int get_link_settings_type(struct hinic3_nic_dev *nic_dev,
+ u8 autoneg, u32 speed, u32 *set_settings)
+{
+ struct nic_port_info port_info = {0};
+ int err;
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to get current settings\n");
+ return -EAGAIN;
+ }
+
+	/* Always set autonegotiation */
+ if (port_info.autoneg_cap)
+ *set_settings |= HILINK_LINK_SET_AUTONEG;
+
+ if (autoneg == AUTONEG_ENABLE) {
+ if (!port_info.autoneg_cap) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Not support autoneg\n");
+ return -EOPNOTSUPP;
+ }
+ } else if (speed != (u32)SPEED_UNKNOWN) {
+		/* Set speed only when autoneg is disabled */
+ err = hinic3_is_speed_legal(nic_dev, &port_info, speed);
+ if (err)
+ return err;
+
+ *set_settings |= HILINK_LINK_SET_SPEED;
+ } else {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Need to set speed when autoneg is off\n");
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_settings_to_hw(struct hinic3_nic_dev *nic_dev,
+ u32 set_settings, u8 autoneg, u32 speed)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_link_ksettings settings = {0};
+ int speed_level = 0;
+ char set_link_str[HINIC_SET_LINK_STR_LEN] = {0};
+ char link_info[HINIC_SET_LINK_STR_LEN] = {0};
+ int err = 0;
+
+ err = snprintf(link_info, sizeof(link_info), "%s",
+ (bool)(set_settings & HILINK_LINK_SET_AUTONEG) ?
+		       ((bool)autoneg ? "autoneg enable " : "autoneg disable ") : "");
+ if (err < 0)
+ return -EINVAL;
+
+ if (set_settings & HILINK_LINK_SET_SPEED) {
+ speed_level = hinic3_ethtool_to_hw_speed_level(speed);
+ err = snprintf(set_link_str, sizeof(set_link_str),
+ "%sspeed %u ", link_info, speed);
+ if (err < 0)
+ return -EINVAL;
+ }
+
+ settings.valid_bitmap = set_settings;
+ settings.autoneg = (bool)autoneg ? PORT_CFG_AN_ON : PORT_CFG_AN_OFF;
+ settings.speed = (u8)speed_level;
+
+ err = hinic3_set_link_settings(nic_dev->hwdev, &settings);
+ if (err)
+ nicif_err(nic_dev, drv, netdev, "Set %sfailed\n",
+ set_link_str);
+ else
+ nicif_info(nic_dev, drv, netdev, "Set %ssuccess\n",
+ set_link_str);
+
+ return err;
+}
+
+static int set_link_settings(struct net_device *netdev, u8 autoneg, u32 speed)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u32 set_settings = 0;
+ int err = 0;
+
+ err = get_link_settings_type(nic_dev, autoneg, speed, &set_settings);
+ if (err)
+ return err;
+
+ if (set_settings)
+ err = hinic3_set_settings_to_hw(nic_dev, set_settings,
+ autoneg, speed);
+ else
+ nicif_info(nic_dev, drv, netdev, "Nothing changed, exiting without setting anything\n");
+
+ return err;
+}
+
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+int hinic3_set_link_ksettings(struct net_device *netdev,
+ const struct ethtool_link_ksettings *link_settings)
+{
+	/* Only autoneg and speed can be set */
+ return set_link_settings(netdev, link_settings->base.autoneg,
+ link_settings->base.speed);
+}
+#endif
+#endif
+
+#ifndef HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+int hinic3_get_settings(struct net_device *netdev, struct ethtool_cmd *ep)
+{
+ struct cmd_link_settings settings = { { 0 } };
+ int err;
+
+ err = get_link_settings(netdev, &settings);
+ if (err)
+ return err;
+
+ ep->supported = settings.supported[0] & ((u32)~0);
+ ep->advertising = settings.advertising[0] & ((u32)~0);
+
+ ep->autoneg = settings.autoneg;
+ ethtool_cmd_speed_set(ep, settings.speed);
+ ep->duplex = settings.duplex;
+ ep->port = settings.port;
+ ep->transceiver = XCVR_INTERNAL;
+
+ return 0;
+}
+
+int hinic3_set_settings(struct net_device *netdev,
+ struct ethtool_cmd *link_settings)
+{
+	/* Only autoneg and speed can be set */
+ return set_link_settings(netdev, link_settings->autoneg,
+ ethtool_cmd_speed(link_settings));
+}
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_filter.c b/drivers/net/ethernet/huawei/hinic3/hinic3_filter.c
new file mode 100644
index 0000000..2daa7f9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_filter.c
@@ -0,0 +1,483 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/debugfs.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_srv_nic.h"
+
+static unsigned char set_filter_state = 1;
+module_param(set_filter_state, byte, 0444);
+MODULE_PARM_DESC(set_filter_state, "Set mac filter config state: 0 - disable, 1 - enable (default=1)");
+
+static int hinic3_uc_sync(struct net_device *netdev, u8 *addr)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ return hinic3_set_mac(nic_dev->hwdev, addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+}
+
+static int hinic3_uc_unsync(struct net_device *netdev, u8 *addr)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ /* The addr is in use */
+ if (ether_addr_equal(addr, netdev->dev_addr))
+ return 0;
+
+ return hinic3_del_mac(nic_dev->hwdev, addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+}
+
+void hinic3_clean_mac_list_filter(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, &nic_dev->uc_filter_list, list) {
+ if (f->state == HINIC3_MAC_HW_SYNCED)
+ hinic3_uc_unsync(netdev, f->addr);
+ list_del(&f->list);
+ kfree(f);
+ }
+
+ list_for_each_entry_safe(f, ftmp, &nic_dev->mc_filter_list, list) {
+ if (f->state == HINIC3_MAC_HW_SYNCED)
+ hinic3_uc_unsync(netdev, f->addr);
+ list_del(&f->list);
+ kfree(f);
+ }
+}
+
+static struct hinic3_mac_filter *hinic3_find_mac(const struct list_head *filter_list,
+ u8 *addr)
+{
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry(f, filter_list, list) {
+ if (ether_addr_equal(addr, f->addr))
+ return f;
+ }
+ return NULL;
+}
+
+static struct hinic3_mac_filter *hinic3_add_filter(struct hinic3_nic_dev *nic_dev,
+ struct list_head *mac_filter_list,
+ u8 *addr)
+{
+ struct hinic3_mac_filter *f = NULL;
+
+ f = kzalloc(sizeof(*f), GFP_ATOMIC);
+ if (!f)
+ goto out;
+
+ ether_addr_copy(f->addr, addr);
+
+ INIT_LIST_HEAD(&f->list);
+ list_add_tail(&f->list, mac_filter_list);
+
+ f->state = HINIC3_MAC_WAIT_HW_SYNC;
+ set_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
+
+out:
+ return f;
+}
+
+static void hinic3_del_filter(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_mac_filter *f)
+{
+ set_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
+
+ if (f->state == HINIC3_MAC_WAIT_HW_SYNC) {
+		/* not yet added to hw, delete it directly */
+ list_del(&f->list);
+ kfree(f);
+ return;
+ }
+
+ f->state = HINIC3_MAC_WAIT_HW_UNSYNC;
+}
+
+static struct hinic3_mac_filter *hinic3_mac_filter_entry_clone(const struct hinic3_mac_filter *src)
+{
+ struct hinic3_mac_filter *f = NULL;
+
+ f = kzalloc(sizeof(*f), GFP_ATOMIC);
+ if (!f)
+ return NULL;
+
+ *f = *src;
+ INIT_LIST_HEAD(&f->list);
+
+ return f;
+}
+
+static void hinic3_undo_del_filter_entries(struct list_head *filter_list,
+ const struct list_head *from)
+{
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, from, list) {
+ if (hinic3_find_mac(filter_list, f->addr))
+ continue;
+
+ if (f->state == HINIC3_MAC_HW_SYNCED)
+ f->state = HINIC3_MAC_WAIT_HW_UNSYNC;
+
+ list_move_tail(&f->list, filter_list);
+ }
+}
+
+static void hinic3_undo_add_filter_entries(struct list_head *filter_list,
+ const struct list_head *from)
+{
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *tmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, from, list) {
+ tmp = hinic3_find_mac(filter_list, f->addr);
+ if (tmp && tmp->state == HINIC3_MAC_HW_SYNCED)
+ tmp->state = HINIC3_MAC_WAIT_HW_SYNC;
+ }
+}
+
+static void hinic3_cleanup_filter_list(const struct list_head *head)
+{
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, head, list) {
+ list_del(&f->list);
+ kfree(f);
+ }
+}
+
+static int hinic3_mac_filter_sync_hw(struct hinic3_nic_dev *nic_dev,
+ struct list_head *del_list,
+ struct list_head *add_list)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ int err = 0, add_count = 0;
+
+ if (!list_empty(del_list)) {
+ list_for_each_entry_safe(f, ftmp, del_list, list) {
+ err = hinic3_uc_unsync(netdev, f->addr);
+			if (err) { /* ignore errors when deleting a mac */
+ nic_err(&nic_dev->pdev->dev, "Failed to delete mac\n");
+ }
+
+ list_del(&f->list);
+ kfree(f);
+ }
+ }
+
+ if (!list_empty(add_list)) {
+ list_for_each_entry_safe(f, ftmp, add_list, list) {
+ err = hinic3_uc_sync(netdev, f->addr);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to add mac\n");
+ return err;
+ }
+
+ add_count++;
+ list_del(&f->list);
+ kfree(f);
+ }
+ }
+
+ return add_count;
+}
+
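+/*
+ * Push pending filter changes to hw: entries waiting for unsync are
+ * deleted, entries waiting for sync are cloned and added. On failure
+ * the lists are rolled back and -ENOMEM is returned so the caller can
+ * force promisc/allmulti; otherwise the number of added macs is returned.
+ */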
+static int hinic3_mac_filter_sync(struct hinic3_nic_dev *nic_dev,
+ struct list_head *mac_filter_list, bool uc)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct list_head tmp_del_list, tmp_add_list;
+ struct hinic3_mac_filter *fclone = NULL;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ int err = 0, add_count = 0;
+
+ INIT_LIST_HEAD(&tmp_del_list);
+ INIT_LIST_HEAD(&tmp_add_list);
+
+ list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
+ if (f->state != HINIC3_MAC_WAIT_HW_UNSYNC)
+ continue;
+
+ f->state = HINIC3_MAC_HW_UNSYNCED;
+ list_move_tail(&f->list, &tmp_del_list);
+ }
+
+ list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
+ if (f->state != HINIC3_MAC_WAIT_HW_SYNC)
+ continue;
+
+ fclone = hinic3_mac_filter_entry_clone(f);
+ if (!fclone) {
+ err = -ENOMEM;
+ break;
+ }
+
+ f->state = HINIC3_MAC_HW_SYNCED;
+ list_add_tail(&fclone->list, &tmp_add_list);
+ }
+
+ if (err) {
+ hinic3_undo_del_filter_entries(mac_filter_list, &tmp_del_list);
+ hinic3_undo_add_filter_entries(mac_filter_list, &tmp_add_list);
+ nicif_err(nic_dev, drv, netdev, "Failed to clone mac_filter_entry\n");
+
+ hinic3_cleanup_filter_list(&tmp_del_list);
+ hinic3_cleanup_filter_list(&tmp_add_list);
+ return -ENOMEM;
+ }
+
+ add_count = hinic3_mac_filter_sync_hw(nic_dev, &tmp_del_list,
+ &tmp_add_list);
+ if (list_empty(&tmp_add_list))
+ return add_count;
+
+	/* errors occurred while adding macs to hw, delete all macs from hw */
+ hinic3_undo_add_filter_entries(mac_filter_list, &tmp_add_list);
+	/* VFs can't enter promisc mode,
+	 * so we must not delete any other uc mac
+	 */
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev) || !uc) {
+ list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
+ if (f->state != HINIC3_MAC_HW_SYNCED)
+ continue;
+
+ fclone = hinic3_mac_filter_entry_clone(f);
+ if (!fclone)
+ break;
+
+ f->state = HINIC3_MAC_WAIT_HW_SYNC;
+ list_add_tail(&fclone->list, &tmp_del_list);
+ }
+ }
+
+ hinic3_cleanup_filter_list(&tmp_add_list);
+ hinic3_mac_filter_sync_hw(nic_dev, &tmp_del_list, &tmp_add_list);
+
+ /* need to enter promisc/allmulti mode */
+ return -ENOMEM;
+}
+
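+/*
+ * Sync the uc and mc filter lists to hw when a change is pending,
+ * forcing promisc/allmulti on if not all entries could be added.
+ */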
+static void hinic3_mac_filter_sync_all(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int add_count;
+
+ if (test_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags)) {
+ clear_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
+ add_count = hinic3_mac_filter_sync(nic_dev,
+ &nic_dev->uc_filter_list,
+ true);
+ if (add_count < 0 && HINIC3_SUPPORT_PROMISC(nic_dev->hwdev)) {
+ set_bit(HINIC3_PROMISC_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ nicif_info(nic_dev, drv, netdev, "Promisc mode forced on\n");
+ } else if (add_count) {
+ clear_bit(HINIC3_PROMISC_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ }
+
+ add_count = hinic3_mac_filter_sync(nic_dev,
+ &nic_dev->mc_filter_list,
+ false);
+ if (add_count < 0 && HINIC3_SUPPORT_ALLMULTI(nic_dev->hwdev)) {
+ set_bit(HINIC3_ALLMULTI_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ nicif_info(nic_dev, drv, netdev, "All multicast mode forced on\n");
+ } else if (add_count) {
+ clear_bit(HINIC3_ALLMULTI_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ }
+ }
+}
+
+#define HINIC3_DEFAULT_RX_MODE (NIC_RX_MODE_UC | NIC_RX_MODE_MC | \
+ NIC_RX_MODE_BC)
+
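+/*
+ * Reconcile the driver filter list with the netdev address list: add
+ * new addresses and mark addresses that disappeared for removal.
+ */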
+static void hinic3_update_mac_filter(struct hinic3_nic_dev *nic_dev,
+ const struct netdev_hw_addr_list *src_list,
+ struct list_head *filter_list)
+{
+ struct hinic3_mac_filter *filter = NULL;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ struct netdev_hw_addr *ha = NULL;
+
+ /* add addr if not already in the filter list */
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_hw_addr_list_for_each(ha, src_list) {
+ filter = hinic3_find_mac(filter_list, ha->addr);
+ if (!filter)
+ hinic3_add_filter(nic_dev, filter_list, ha->addr);
+ else if (filter->state == HINIC3_MAC_WAIT_HW_UNSYNC)
+ filter->state = HINIC3_MAC_HW_SYNCED;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+
+ /* delete addr if not in netdev list */
+ list_for_each_entry_safe(f, ftmp, filter_list, list) {
+ bool found = false;
+
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_hw_addr_list_for_each(ha, src_list)
+ if (ether_addr_equal(ha->addr, f->addr)) {
+ found = true;
+ break;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+
+ if (found)
+ continue;
+
+ hinic3_del_filter(nic_dev, f);
+ }
+}
+
+#ifndef NETDEV_HW_ADDR_T_MULTICAST
+static void hinic3_update_mc_filter(struct hinic3_nic_dev *nic_dev,
+ struct list_head *filter_list)
+{
+ struct hinic3_mac_filter *filter = NULL;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ struct dev_mc_list *ha = NULL;
+
+ /* add addr if not already in the filter list */
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_for_each_mc_addr(ha, nic_dev->netdev) {
+ filter = hinic3_find_mac(filter_list, ha->da_addr);
+ if (!filter)
+ hinic3_add_filter(nic_dev, filter_list, ha->da_addr);
+ else if (filter->state == HINIC3_MAC_WAIT_HW_UNSYNC)
+ filter->state = HINIC3_MAC_HW_SYNCED;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+ /* delete addr if not in netdev list */
+ list_for_each_entry_safe(f, ftmp, filter_list, list) {
+ bool found = false;
+
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_for_each_mc_addr(ha, nic_dev->netdev)
+ if (ether_addr_equal(ha->da_addr, f->addr)) {
+ found = true;
+ break;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+
+ if (found)
+ continue;
+
+ hinic3_del_filter(nic_dev, f);
+ }
+}
+#endif
+
+static void update_mac_filter(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+
+ if (test_and_clear_bit(HINIC3_UPDATE_MAC_FILTER, &nic_dev->flags)) {
+ hinic3_update_mac_filter(nic_dev, &netdev->uc,
+ &nic_dev->uc_filter_list);
+		/* FPGA mc table has only 12 entries, mc is disabled by default */
+ if (set_filter_state) {
+#ifdef NETDEV_HW_ADDR_T_MULTICAST
+ hinic3_update_mac_filter(nic_dev, &netdev->mc,
+ &nic_dev->mc_filter_list);
+#else
+ hinic3_update_mc_filter(nic_dev,
+ &nic_dev->mc_filter_list);
+#endif
+ }
+ }
+}
+
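+/* Program the hw rx mode (uc/mc/bc plus optional promisc and allmulti) */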
+static void sync_rx_mode_to_hw(struct hinic3_nic_dev *nic_dev, int promisc_en,
+ int allmulti_en)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u32 rx_mod = HINIC3_DEFAULT_RX_MODE;
+ int err;
+
+ rx_mod |= (promisc_en ? NIC_RX_MODE_PROMISC : 0);
+ rx_mod |= (allmulti_en ? NIC_RX_MODE_MC_ALL : 0);
+
+ if (promisc_en != test_bit(HINIC3_HW_PROMISC_ON,
+ &nic_dev->rx_mod_state))
+ nicif_info(nic_dev, drv, netdev,
+ "%s promisc mode\n",
+ promisc_en ? "Enter" : "Left");
+ if (allmulti_en !=
+ test_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state))
+ nicif_info(nic_dev, drv, netdev,
+ "%s all_multi mode\n",
+ allmulti_en ? "Enter" : "Left");
+
+ err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mod);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set rx_mode\n");
+ return;
+ }
+
+ promisc_en ? set_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state) :
+ clear_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state);
+
+ allmulti_en ? set_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state) :
+ clear_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state);
+}
+
+void hinic3_set_rx_mode_work(struct work_struct *work)
+{
+ struct hinic3_nic_dev *nic_dev =
+ container_of(work, struct hinic3_nic_dev, rx_mode_work);
+ struct net_device *netdev = nic_dev->netdev;
+ int promisc_en = 0, allmulti_en = 0;
+
+ update_mac_filter(nic_dev);
+
+ hinic3_mac_filter_sync_all(nic_dev);
+
+ if (HINIC3_SUPPORT_PROMISC(nic_dev->hwdev))
+ promisc_en = !!(netdev->flags & IFF_PROMISC) ||
+ test_bit(HINIC3_PROMISC_FORCE_ON,
+ &nic_dev->rx_mod_state);
+
+ if (HINIC3_SUPPORT_ALLMULTI(nic_dev->hwdev))
+ allmulti_en = !!(netdev->flags & IFF_ALLMULTI) ||
+ test_bit(HINIC3_ALLMULTI_FORCE_ON,
+ &nic_dev->rx_mod_state);
+
+ if (promisc_en !=
+ test_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state) ||
+ allmulti_en !=
+ test_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state))
+ sync_rx_mode_to_hw(nic_dev, promisc_en, allmulti_en);
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_hw.h b/drivers/net/ethernet/huawei/hinic3/hinic3_hw.h
new file mode 100644
index 0000000..7fed1c1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_hw.h
@@ -0,0 +1,877 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_H
+#define HINIC3_HW_H
+
+#include "mpu_inband_cmd.h"
+#include "mpu_inband_cmd_defs.h"
+
+#include "hinic3_crm.h"
+
+#ifndef BIG_ENDIAN
+#define BIG_ENDIAN 0x4321
+#endif
+
+#ifndef LITTLE_ENDIAN
+#define LITTLE_ENDIAN 0x1234
+#endif
+
+#ifdef BYTE_ORDER
+#undef BYTE_ORDER
+#endif
+/* X86 */
+#define BYTE_ORDER LITTLE_ENDIAN
+
+/* to use 0-level CLA, page size must be: SQ 16B(wqe) * 64k(max_q_depth) */
+#define HINIC3_DEFAULT_WQ_PAGE_SIZE 0x100000
+#define HINIC3_HW_WQ_PAGE_SIZE 0x1000
+#define HINIC3_MAX_WQ_PAGE_SIZE_ORDER 8
+#define SPU_HOST_ID 4
+
+enum hinic3_channel_id {
+ HINIC3_CHANNEL_DEFAULT,
+ HINIC3_CHANNEL_COMM,
+ HINIC3_CHANNEL_NIC,
+ HINIC3_CHANNEL_ROCE,
+ HINIC3_CHANNEL_TOE,
+ HINIC3_CHANNEL_FC,
+ HINIC3_CHANNEL_OVS,
+ HINIC3_CHANNEL_DSW,
+ HINIC3_CHANNEL_MIG,
+ HINIC3_CHANNEL_CRYPT,
+ HINIC3_CHANNEL_VROCE,
+
+ HINIC3_CHANNEL_MAX = 32,
+};
+
+struct hinic3_cmd_buf {
+ void *buf;
+ dma_addr_t dma_addr;
+ u16 size;
+ /* Usage count, USERS DO NOT USE */
+ atomic_t ref_cnt;
+};
+
+enum hinic3_aeq_type {
+ HINIC3_HW_INTER_INT = 0,
+ HINIC3_MBX_FROM_FUNC = 1,
+ HINIC3_MSG_FROM_MGMT_CPU = 2,
+ HINIC3_API_RSP = 3,
+ HINIC3_API_CHAIN_STS = 4,
+ HINIC3_MBX_SEND_RSLT = 5,
+ HINIC3_MAX_AEQ_EVENTS
+};
+
+enum hinic3_aeq_sw_type {
+ HINIC3_STATELESS_EVENT = 0,
+ HINIC3_STATEFUL_EVENT = 1,
+ HINIC3_MAX_AEQ_SW_EVENTS
+};
+
+enum hinic3_hwdev_init_state {
+ HINIC3_HWDEV_NONE_INITED = 0,
+ HINIC3_HWDEV_MGMT_INITED,
+ HINIC3_HWDEV_MBOX_INITED,
+ HINIC3_HWDEV_CMDQ_INITED,
+};
+
+enum hinic3_ceq_event {
+ HINIC3_NON_L2NIC_SCQ,
+ HINIC3_NON_L2NIC_ECQ,
+ HINIC3_NON_L2NIC_NO_CQ_EQ,
+ HINIC3_CMDQ,
+ HINIC3_L2NIC_SQ,
+ HINIC3_L2NIC_RQ,
+ HINIC3_MAX_CEQ_EVENTS,
+};
+
+enum hinic3_mbox_seg_errcode {
+ MBOX_ERRCODE_NO_ERRORS = 0,
+ /* VF send the mailbox data to the wrong destination functions */
+ MBOX_ERRCODE_VF_TO_WRONG_FUNC = 0x100,
+ /* PPF send the mailbox data to the wrong destination functions */
+ MBOX_ERRCODE_PPF_TO_WRONG_FUNC = 0x200,
+ /* PF send the mailbox data to the wrong destination functions */
+ MBOX_ERRCODE_PF_TO_WRONG_FUNC = 0x300,
+ /* The mailbox data size is set to all zero */
+ MBOX_ERRCODE_ZERO_DATA_SIZE = 0x400,
+ /* The sender function attribute has not been learned by hardware */
+ MBOX_ERRCODE_UNKNOWN_SRC_FUNC = 0x500,
+ /* The receiver function attr has not been learned by hardware */
+ MBOX_ERRCODE_UNKNOWN_DES_FUNC = 0x600,
+};
+
+struct hinic3_ceq_info {
+ u32 q_len;
+ u32 page_size;
+ u16 elem_size;
+ u16 num_pages;
+ u32 num_elem_in_pg;
+};
+
+typedef void (*hinic3_aeq_hwe_cb)(void *pri_handle, u8 *data, u8 size);
+typedef u8 (*hinic3_aeq_swe_cb)(void *pri_handle, u8 event, u8 *data);
+typedef void (*hinic3_ceq_event_cb)(void *pri_handle, u32 ceqe_data);
+
+typedef int (*hinic3_vf_mbox_cb)(void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+typedef int (*hinic3_pf_mbox_cb)(void *pri_handle,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+typedef int (*hinic3_ppf_mbox_cb)(void *pri_handle, u16 pf_idx,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+typedef int (*hinic3_pf_recv_from_ppf_mbox_cb)(void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+/**
+ * @brief hinic3_aeq_register_hw_cb - register aeq hardware callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ * @param hwe_cb: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_aeq_register_hw_cb(void *hwdev, void *pri_handle,
+ enum hinic3_aeq_type event, hinic3_aeq_hwe_cb hwe_cb);
+
+/**
+ * @brief hinic3_aeq_unregister_hw_cb - unregister aeq hardware callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ **/
+void hinic3_aeq_unregister_hw_cb(void *hwdev, enum hinic3_aeq_type event);
+
+/**
+ * @brief hinic3_aeq_register_swe_cb - register aeq soft event callback
+ * @param hwdev: device pointer to hwdev
+ * @param pri_handle: the pointer to the private invoker device
+ * @param event: event type
+ * @param aeq_swe_cb: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_aeq_register_swe_cb(void *hwdev, void *pri_handle, enum hinic3_aeq_sw_type event,
+ hinic3_aeq_swe_cb aeq_swe_cb);
+
+/**
+ * @brief hinic3_aeq_unregister_swe_cb - unregister aeq soft event callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ **/
+void hinic3_aeq_unregister_swe_cb(void *hwdev, enum hinic3_aeq_sw_type event);
+
+/**
+ * @brief hinic3_ceq_register_cb - register ceq callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_ceq_register_cb(void *hwdev, void *pri_handle, enum hinic3_ceq_event event,
+ hinic3_ceq_event_cb callback);
+/**
+ * @brief hinic3_ceq_unregister_cb - unregister ceq callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ **/
+void hinic3_ceq_unregister_cb(void *hwdev, enum hinic3_ceq_event event);
+
+/**
+ * @brief hinic3_register_ppf_mbox_cb - ppf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_ppf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_ppf_mbox_cb callback);
+
+/**
+ * @brief hinic3_register_pf_mbox_cb - pf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_pf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_pf_mbox_cb callback);
+/**
+ * @brief hinic3_register_vf_mbox_cb - vf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_vf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_vf_mbox_cb callback);
+
+/**
+ * @brief hinic3_unregister_ppf_mbox_cb - ppf unregister mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_ppf_mbox_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_unregister_pf_mbox_cb - pf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_pf_mbox_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_unregister_vf_mbox_cb - pf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_vf_mbox_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_unregister_ppf_to_pf_mbox_cb - unregister mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_ppf_to_pf_mbox_cb(void *hwdev, u8 mod);
+
+typedef void (*hinic3_mgmt_msg_cb)(void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+/**
+ * @brief hinic3_register_mgmt_msg_cb - register mgmt msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_mgmt_msg_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_mgmt_msg_cb callback);
+
+/**
+ * @brief hinic3_unregister_mgmt_msg_cb - unregister mgmt msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_mgmt_msg_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_register_service_adapter - register service adapter
+ * @param hwdev: device pointer to hwdev
+ * @param service_adapter: service adapter
+ * @param type: service type
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_service_adapter(void *hwdev, void *service_adapter,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_unregister_service_adapter - unregister service adapter
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ **/
+void hinic3_unregister_service_adapter(void *hwdev,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_service_adapter - get service adapter
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @retval non-null: success
+ * @retval null: failure
+ **/
+void *hinic3_get_service_adapter(void *hwdev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_alloc_db_phy_addr - alloc doorbell & direct wqe physical addr
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to alloc doorbell base address
+ * @param dwqe_base: pointer to alloc direct base address
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_alloc_db_phy_addr(void *hwdev, u64 *db_base, u64 *dwqe_base);
+
+/**
+ * @brief hinic3_free_db_phy_addr - free doorbell & direct wqe physical address
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to free doorbell base address
+ * @param dwqe_base: pointer to free direct base address
+ **/
+void hinic3_free_db_phy_addr(void *hwdev, u64 db_base, u64 dwqe_base);
+
+/**
+ * @brief hinic3_alloc_db_addr - alloc doorbell & direct wqe
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to alloc doorbell base address
+ * @param dwqe_base: pointer to alloc direct base address
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_alloc_db_addr(void *hwdev, void __iomem **db_base,
+ void __iomem **dwqe_base);
+
+/**
+ * @brief hinic3_free_db_addr - free doorbell & direct wqe
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to free doorbell base address
+ * @param dwqe_base: pointer to free direct base address
+ **/
+void hinic3_free_db_addr(void *hwdev, const void __iomem *db_base,
+ void __iomem *dwqe_base);
+
+/**
+ * @brief hinic3_set_root_ctxt - set root context
+ * @param hwdev: device pointer to hwdev
+ * @param rq_depth: rq depth
+ * @param sq_depth: sq depth
+ * @param rx_buf_sz: rx buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_set_root_ctxt(void *hwdev, u32 rq_depth, u32 sq_depth,
+ int rx_buf_sz, u16 channel);
+
+/**
+ * @brief hinic3_clean_root_ctxt - clean root context
+ * @param hwdev: device pointer to hwdev
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_clean_root_ctxt(void *hwdev, u16 channel);
+
+/**
+ * @brief hinic3_alloc_cmd_buf - alloc cmd buffer
+ * @param hwdev: device pointer to hwdev
+ * @retval non-null: success
+ * @retval null: failure
+ **/
+struct hinic3_cmd_buf *hinic3_alloc_cmd_buf(void *hwdev);
+
+/**
+ * @brief hinic3_free_cmd_buf - free cmd buffer
+ * @param hwdev: device pointer to hwdev
+ * @param cmd_buf: cmd buffer to free
+ **/
+void hinic3_free_cmd_buf(void *hwdev, struct hinic3_cmd_buf *cmd_buf);
+
+/**
+ * hinic3_sm_ctr_rd16 - small single 16 counter read
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd16(void *hwdev, u8 node, u8 instance, u32 ctr_id, u16 *value);
+
+/**
+ * hinic3_sm_ctr_rd16_clear - small single 16 counter read clear
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd16_clear(void *hwdev, u8 node, u8 instance, u32 ctr_id, u16 *value);
+
+/**
+ * @brief hinic3_sm_ctr_rd32 - small single 32 counter read
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd32(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u32 *value);
+/**
+ * @brief hinic3_sm_ctr_rd32_clear - small single 32 counter read clear
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd32_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u32 *value);
+
+/**
+ * @brief hinic3_sm_ctr_rd64_pair - big pair 128 counter read
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value1: read counter value ptr
+ * @param value2: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd64_pair(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2);
+
+/**
+ * hinic3_sm_ctr_rd64_pair_clear - big pair 128 counter read
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @ctr_id: counter id
+ * @value1: read counter value ptr
+ * @value2: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_pair_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2);
+
+/**
+ * @brief hinic3_sm_ctr_rd64 - big counter 64 read
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd64(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value);
+
+/**
+ * hinic3_sm_ctr_rd64_clear - big counter 64 read
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value);
+
+/**
+ * @brief hinic3_api_csr_rd32 - read 32 byte csr
+ * @param hwdev: device pointer to hwdev
+ * @param dest: hardware node id
+ * @param addr: reg address
+ * @param val: reg value
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_api_csr_rd32(void *hwdev, u8 dest, u32 addr, u32 *val);
+
+/**
+ * @brief hinic3_api_csr_wr32 - write 32 byte csr
+ * @param hwdev: device pointer to hwdev
+ * @param dest: hardware node id
+ * @param addr: reg address
+ * @param val: reg value
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_api_csr_wr32(void *hwdev, u8 dest, u32 addr, u32 val);
+
+/**
+ * @brief hinic3_api_csr_rd64 - read 64 byte csr
+ * @param hwdev: device pointer to hwdev
+ * @param dest: hardware node id
+ * @param addr: reg address
+ * @param val: reg value
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_api_csr_rd64(void *hwdev, u8 dest, u32 addr, u64 *val);
+
+/**
+ * @brief hinic3_dbg_get_hw_stats - get hardware stats
+ * @param hwdev: device pointer to hwdev
+ * @param hw_stats: pointer to memory allocated by the caller
+ * @param out_size: out size
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_dbg_get_hw_stats(const void *hwdev, u8 *hw_stats, const u32 *out_size);
+
+/**
+ * @brief hinic3_dbg_clear_hw_stats - clear hardware stats
+ * @param hwdev: device pointer to hwdev
+ * @retval size of the cleared hardware stats
+ */
+u16 hinic3_dbg_clear_hw_stats(void *hwdev);
+
+/**
+ * @brief hinic3_get_chip_fault_stats - get chip fault stats
+ * @param hwdev: device pointer to hwdev
+ * @param chip_fault_stats: pointer to memory allocated by the caller
+ * @param offset: offset
+ */
+void hinic3_get_chip_fault_stats(const void *hwdev, u8 *chip_fault_stats,
+ u32 offset);
+
+/**
+ * @brief hinic3_msg_to_mgmt_sync - msg to management cpu
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_msg_to_mgmt_async - msg to management cpu async
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ *
+ * The function does not sleep inside, allowing use in irq context
+ */
+int hinic3_msg_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel);
+
+/**
+ * @brief hinic3_msg_to_mgmt_no_ack - msg to management cpu, no ack required
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ *
+ * The function will sleep inside, and it is not allowed to be used in
+ * interrupt context
+ */
+int hinic3_msg_to_mgmt_no_ack(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel);
+
+int hinic3_msg_to_mgmt_api_chain_async(void *hwdev, u8 mod, u16 cmd,
+ const void *buf_in, u16 in_size);
+
+int hinic3_msg_to_mgmt_api_chain_sync(void *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout);
+
+/**
+ * @brief hinic3_mbox_to_pf - vf mbox message to pf
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_mbox_to_pf(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_mbox_to_vf - mbox message to vf
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf index
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_mbox_to_vf(void *hwdev, u16 vf_id, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u32 timeout,
+ u16 channel);
+
+/**
+ * @brief hinic3_mbox_to_vf_no_ack - mbox message to vf no ack
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf index
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_mbox_to_vf_no_ack(void *hwdev, u16 vf_id, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u16 channel);
+
+int hinic3_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+/**
+ * @brief hinic3_cmdq_async - cmdq asynchronous message
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_async(void *hwdev, u8 mod, u8 cmd, struct hinic3_cmd_buf *buf_in, u16 channel);
+
+/**
+ * @brief hinic3_cmdq_async_cos - cmdq asynchronous message by cos
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param cos_id: cos id
+ * @param buf_in: message buffer in
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_async_cos(void *hwdev, u8 mod, u8 cmd, u8 cos_id,
+ struct hinic3_cmd_buf *buf_in, u16 channel);
+
+/**
+ * @brief hinic3_cmdq_direct_resp - cmdq direct message response
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param out_param: message out
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_direct_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_cmdq_detail_resp - cmdq detail message response
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param buf_out: message buffer out
+ * @param out_param: inline output data
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_detail_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_cos_id_detail_resp - cmdq detail message response by cos id
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param cos_id: cos id
+ * @param buf_in: message buffer in
+ * @param buf_out: message buffer out
+ * @param out_param: inline output data
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cos_id_detail_resp(void *hwdev, u8 mod, u8 cmd, u8 cos_id,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_ppf_tmr_start - start ppf timer
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_ppf_tmr_start(void *hwdev);
+
+/**
+ * @brief hinic3_ppf_tmr_stop - stop ppf timer
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_ppf_tmr_stop(void *hwdev);
+
+/**
+ * @brief hinic3_func_tmr_bitmap_set - set timer bitmap status
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: global function index
+ * @param en: false - disable, true - enable
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_func_tmr_bitmap_set(void *hwdev, u16 func_id, bool en);
+
+/**
+ * @brief hinic3_get_board_info - get board info
+ * @param hwdev: device pointer to hwdev
+ * @param info: board info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_board_info(void *hwdev, struct hinic3_board_info *info,
+ u16 channel);
+
+/**
+ * @brief hinic3_set_wq_page_size - set work queue page size
+ * @param hwdev: device pointer to hwdev
+ * @param func_idx: function id
+ * @param page_size: page size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size,
+ u16 channel);
+
+/**
+ * @brief hinic3_event_callback - event callback to notify service driver
+ * @param hwdev: device pointer to hwdev
+ * @param event: event info to service driver
+ */
+void hinic3_event_callback(void *hwdev, struct hinic3_event_info *event);
+
+/**
+ * @brief hinic3_dbg_lt_rd_16byte - linear table read
+ * @param hwdev: device pointer to hwdev
+ * @param dest: destination id
+ * @param instance: instance id
+ * @param lt_index: linear table index id
+ * @param data: data
+ */
+int hinic3_dbg_lt_rd_16byte(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data);
+
+/**
+ * @brief hinic3_dbg_lt_wr_16byte_mask - linear table write
+ * @param hwdev: device pointer to hwdev
+ * @param dest: destination id
+ * @param instance: instance id
+ * @param lt_index: linear table index id
+ * @param data: data
+ * @param mask: mask
+ */
+int hinic3_dbg_lt_wr_16byte_mask(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data, u16 mask);
+
+/**
+ * @brief hinic3_link_event_stats - link event stats
+ * @param dev: device pointer to hwdev
+ * @param link: link status
+ */
+void hinic3_link_event_stats(void *dev, u8 link);
+
+/**
+ * @brief hinic3_get_link_event_stats - get link event stats
+ * @param dev: device pointer to hwdev
+ * @param link_state: pointer to the returned link state
+ */
+int hinic3_get_link_event_stats(void *dev, int *link_state);
+
+/**
+ * @brief hinic3_get_hw_pf_infos - get pf infos
+ * @param hwdev: device pointer to hwdev
+ * @param infos: pf infos
+ * @param channel: channel id
+ */
+int hinic3_get_hw_pf_infos(void *hwdev, struct hinic3_hw_pf_infos *infos,
+ u16 channel);
+
+/**
+ * @brief hinic3_func_reset - reset func
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: global function index
+ * @param reset_flag: reset flag
+ * @param channel: channel id
+ */
+int hinic3_func_reset(void *dev, u16 func_id, u64 reset_flag, u16 channel);
+
+int hinic3_get_ppf_timer_cfg(void *hwdev);
+
+int hinic3_set_bdf_ctxt(void *hwdev, u8 bus, u8 device, u8 function);
+
+int hinic3_init_func_mbox_msg_channel(void *hwdev, u16 num_func);
+
+int hinic3_ppf_ht_gpa_init(void *dev);
+
+void hinic3_ppf_ht_gpa_deinit(void *dev);
+
+int hinic3_get_sml_table_info(void *hwdev, u32 tbl_id, u8 *node_id, u8 *instance_id);
+
+int hinic3_mbox_ppf_to_host(void *hwdev, u8 mod, u16 cmd, u8 host_id,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel);
+
+void hinic3_force_complete_all(void *dev);
+int hinic3_get_ceq_page_phy_addr(void *hwdev, u16 q_id,
+ u16 page_idx, u64 *page_phy_addr);
+int hinic3_set_ceq_irq_disable(void *hwdev, u16 q_id);
+int hinic3_get_ceq_info(void *hwdev, u16 q_id, struct hinic3_ceq_info *ceq_info);
+
+int hinic3_init_single_ceq_status(void *hwdev, u16 q_id);
+void hinic3_set_api_stop(void *hwdev);
+
+int hinic3_activate_firmware(void *hwdev, u8 cfg_index);
+int hinic3_switch_config(void *hwdev, u8 cfg_index);
+
+#endif
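A minimal usage sketch (illustrative only, not part of this patch) for the cmdq helpers declared in this header, assuming a service module already holds a valid hwdev: allocate a command buffer, issue a direct-response command, then free the buffer. The module id and command value used here (HINIC3_MOD_L2NIC, NIC_CMD_EXAMPLE) are placeholders, not definitions from this patch, and timeout 0 is assumed to select the driver's default cmdq timeout.

/* Hypothetical service-module sketch built only on the prototypes above. */
static int example_send_cmdq(void *hwdev, const void *payload, u16 len)
{
	struct hinic3_cmd_buf *buf;
	u64 out_param = 0;
	int err;

	buf = hinic3_alloc_cmd_buf(hwdev);
	if (!buf)
		return -ENOMEM;

	/* copy the caller's request into the DMA-able command buffer */
	memcpy(buf->buf, payload, len);
	buf->size = len;

	/* HINIC3_MOD_L2NIC and NIC_CMD_EXAMPLE are assumed placeholders */
	err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC, NIC_CMD_EXAMPLE,
				      buf, &out_param, 0, HINIC3_CHANNEL_NIC);

	hinic3_free_cmd_buf(hwdev, buf);
	return err;
}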
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c b/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
new file mode 100644
index 0000000..7a2644c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
@@ -0,0 +1,194 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/debugfs.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+
+int hinic3_poll(struct napi_struct *napi, int budget)
+{
+ int tx_pkts, rx_pkts;
+ struct hinic3_irq *irq_cfg =
+ container_of(napi, struct hinic3_irq, napi);
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+
+ rx_pkts = hinic3_rx_poll(irq_cfg->rxq, budget);
+
+ tx_pkts = hinic3_tx_poll(irq_cfg->txq, budget);
+ if (tx_pkts >= budget || rx_pkts >= budget)
+ return budget;
+
+ napi_complete(napi);
+
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_MSIX_ENABLE);
+
+ return max(tx_pkts, rx_pkts);
+}
+
+static void qp_add_napi(struct hinic3_irq *irq_cfg)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+
+ netif_napi_add(nic_dev->netdev, &irq_cfg->napi,
+ hinic3_poll, nic_dev->poll_weight);
+ napi_enable(&irq_cfg->napi);
+ irq_cfg->napi_reign = NAPI_IS_REGIN;
+}
+
+void qp_del_napi(struct hinic3_irq *irq_cfg)
+{
+ if (irq_cfg->napi_reign == NAPI_IS_REGIN) {
+ napi_disable(&irq_cfg->napi);
+ netif_napi_del(&irq_cfg->napi);
+ irq_cfg->napi_reign = NAPI_NOT_REGIN;
+ }
+}
+
+static irqreturn_t qp_irq(int irq, void *data)
+{
+ struct hinic3_irq *irq_cfg = (struct hinic3_irq *)data;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+
+ hinic3_misx_intr_clear_resend_bit(nic_dev->hwdev, irq_cfg->msix_entry_idx, 1);
+
+ napi_schedule(&irq_cfg->napi);
+
+ return IRQ_HANDLED;
+}
+
+static int hinic3_request_irq(struct hinic3_irq *irq_cfg, u16 q_id)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+ struct interrupt_info info = {0};
+ int err;
+
+ qp_add_napi(irq_cfg);
+
+ info.msix_index = irq_cfg->msix_entry_idx;
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = nic_dev->intr_coalesce[q_id].pending_limt;
+ info.coalesc_timer_cfg =
+ nic_dev->intr_coalesce[q_id].coalesce_timer_cfg;
+ info.resend_timer_cfg = nic_dev->intr_coalesce[q_id].resend_timer_cfg;
+ nic_dev->rxqs[q_id].last_coalesc_timer_cfg =
+ nic_dev->intr_coalesce[q_id].coalesce_timer_cfg;
+ nic_dev->rxqs[q_id].last_pending_limt =
+ nic_dev->intr_coalesce[q_id].pending_limt;
+ err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, irq_cfg->netdev,
+ "Failed to set RX interrupt coalescing attribute.\n");
+ qp_del_napi(irq_cfg);
+ return err;
+ }
+
+ err = request_irq(irq_cfg->irq_id, &qp_irq, 0, irq_cfg->irq_name, irq_cfg);
+ if (err) {
+ nicif_err(nic_dev, drv, irq_cfg->netdev, "Failed to request Rx irq\n");
+ qp_del_napi(irq_cfg);
+ return err;
+ }
+
+ irq_set_affinity_hint(irq_cfg->irq_id, &irq_cfg->affinity_mask);
+
+ return 0;
+}
+
+static void hinic3_release_irq(struct hinic3_irq *irq_cfg)
+{
+ irq_set_affinity_hint(irq_cfg->irq_id, NULL);
+ synchronize_irq(irq_cfg->irq_id);
+ free_irq(irq_cfg->irq_id, irq_cfg);
+ qp_del_napi(irq_cfg);
+}
+
+int hinic3_qps_irq_init(struct hinic3_nic_dev *nic_dev)
+{
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct irq_info *qp_irq_info = NULL;
+ struct hinic3_irq *irq_cfg = NULL;
+ u16 q_id, i;
+ u32 local_cpu;
+ int err;
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ qp_irq_info = &nic_dev->qps_irq_info[q_id];
+ irq_cfg = &nic_dev->q_params.irq_cfg[q_id];
+
+ irq_cfg->irq_id = qp_irq_info->irq_id;
+ irq_cfg->msix_entry_idx = qp_irq_info->msix_entry_idx;
+ irq_cfg->netdev = nic_dev->netdev;
+ irq_cfg->txq = &nic_dev->txqs[q_id];
+ irq_cfg->rxq = &nic_dev->rxqs[q_id];
+ nic_dev->rxqs[q_id].irq_cfg = irq_cfg;
+
+ local_cpu = cpumask_local_spread(q_id, dev_to_node(&pdev->dev));
+ cpumask_set_cpu(local_cpu, &irq_cfg->affinity_mask);
+
+ err = snprintf(irq_cfg->irq_name, sizeof(irq_cfg->irq_name),
+ "%s_qp%u", nic_dev->netdev->name, q_id);
+ if (err < 0) {
+ err = -EINVAL;
+ goto req_tx_irq_err;
+ }
+
+ err = hinic3_request_irq(irq_cfg, q_id);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to request Rx irq\n");
+ goto req_tx_irq_err;
+ }
+
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_SET_MSIX_AUTO_MASK);
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx, HINIC3_MSIX_ENABLE);
+ }
+
+ INIT_DELAYED_WORK(&nic_dev->moderation_task, hinic3_auto_moderation_work);
+
+ return 0;
+
+req_tx_irq_err:
+ for (i = 0; i < q_id; i++) {
+ irq_cfg = &nic_dev->q_params.irq_cfg[i];
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx, HINIC3_MSIX_DISABLE);
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_CLR_MSIX_AUTO_MASK);
+ hinic3_release_irq(irq_cfg);
+ }
+
+ return err;
+}
+
+void hinic3_qps_irq_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_irq *irq_cfg = NULL;
+ u16 q_id;
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ irq_cfg = &nic_dev->q_params.irq_cfg[q_id];
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev,
+ irq_cfg->msix_entry_idx,
+ HINIC3_CLR_MSIX_AUTO_MASK);
+ hinic3_release_irq(irq_cfg);
+ }
+}
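A short sketch (illustrative only, not part of this patch) of the expected call order for the per-queue-pair interrupt helpers above, as a driver open/stop path might use them; the wrapper function names are hypothetical.

static int example_open_datapath(struct hinic3_nic_dev *nic_dev)
{
	int err;

	/* request one IRQ and NAPI context per queue pair, unmask MSI-X */
	err = hinic3_qps_irq_init(nic_dev);
	if (err)
		return err;

	/* ... enable queue pairs / bring the vport up here ... */
	return 0;
}

static void example_stop_datapath(struct hinic3_nic_dev *nic_dev)
{
	/* masks MSI-X, frees the IRQs and deletes the NAPI contexts */
	hinic3_qps_irq_deinit(nic_dev);
}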
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c
new file mode 100644
index 0000000..8cd891e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c
@@ -0,0 +1,1737 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "mag_mpu_cmd.h"
+#include "mag_mpu_cmd_defs.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "hinic3_common.h"
+
+#define BIFUR_RESOURCE_PF_SSID 0x5a1
+#define CAP_INFO_MAX_LEN 512
+#define DEVICE_VENDOR_MAX_LEN 17
+#define READ_RSFEC_REGISTER_DELAY_TIME_MS 500
+
+struct parse_tlv_info g_page_info = {0};
+struct drv_mag_cmd_get_xsfp_tlv_rsp g_xsfp_tlv_info = {0};
+
+static int mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+static int mag_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel);
+
+int hinic3_set_port_enable(void *hwdev, bool enable, u16 channel)
+{
+ struct mag_cmd_set_port_enable en_state;
+ u16 out_size = sizeof(en_state);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ memset(&en_state, 0, sizeof(en_state));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ en_state.function_id = hinic3_global_func_id(hwdev);
+ en_state.state = enable ? MAG_CMD_TX_ENABLE | MAG_CMD_RX_ENABLE :
+ MAG_CMD_PORT_DISABLE;
+
+ err = mag_msg_to_mgmt_sync_ch(hwdev, MAG_CMD_SET_PORT_ENABLE, &en_state,
+ sizeof(en_state), &en_state, &out_size,
+ channel);
+ if (err || !out_size || en_state.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set port state, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, en_state.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_port_enable);
+
+int hinic3_get_phy_port_stats(void *hwdev, struct mag_cmd_port_stats *stats)
+{
+ struct mag_cmd_get_port_stat *port_stats = NULL;
+ struct mag_cmd_port_stats_info stats_info;
+ u16 out_size = sizeof(*port_stats);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats)
+ return -ENOMEM;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ stats_info.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_PORT_STAT,
+ &stats_info, sizeof(stats_info),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+
+ memcpy(stats, &port_stats->counter, sizeof(*stats));
+
+out:
+ kfree(port_stats);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_get_phy_port_stats);
+
+int hinic3_get_phy_rsfec_stats(void *hwdev, struct mag_cmd_rsfec_stats *stats)
+{
+ struct mag_cmd_get_mag_cnt *port_stats = NULL;
+ struct mag_cmd_get_mag_cnt stats_info;
+ u16 out_size = sizeof(*port_stats);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !stats)
+ return -EINVAL;
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats)
+ return -ENOMEM;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ stats_info.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_MAG_CNT,
+ &stats_info, sizeof(stats_info),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get rsfec statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+ /* Read twice to clear residual error counts */
+ msleep(READ_RSFEC_REGISTER_DELAY_TIME_MS);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_MAG_CNT, &stats_info,
+ sizeof(stats_info),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get rsfec statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+
+ memcpy(stats, &port_stats->mag_csr[MAG_RX_RSFEC_ERR_CW_CNT],
+ sizeof(u32));
+
+out:
+ kfree(port_stats);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_get_phy_rsfec_stats);
+
+int hinic3_set_port_funcs_state(void *hwdev, bool enable)
+{
+ return 0;
+}
+
+int hinic3_reset_port_link_cfg(void *hwdev)
+{
+ return 0;
+}
+
+int hinic3_force_port_relink(void *hwdev)
+{
+ return 0;
+}
+
+int hinic3_set_autoneg(void *hwdev, bool enable)
+{
+ struct hinic3_link_ksettings settings = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 set_settings = 0;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ set_settings |= HILINK_LINK_SET_AUTONEG;
+ settings.valid_bitmap = set_settings;
+ settings.autoneg = enable ? PORT_CFG_AN_ON : PORT_CFG_AN_OFF;
+
+ return hinic3_set_link_settings(hwdev, &settings);
+}
+
+static int hinic3_cfg_loopback_mode(struct hinic3_nic_io *nic_io, u8 opcode,
+ u8 *mode, u8 *enable)
+{
+ struct mag_cmd_cfg_loopback_mode lp;
+ u16 out_size = sizeof(lp);
+ int err;
+
+ memset(&lp, 0, sizeof(lp));
+ lp.port_id = hinic3_physical_port_id(nic_io->hwdev);
+ lp.opcode = opcode;
+ if (opcode == MGMT_MSG_CMD_OP_SET) {
+ lp.lp_mode = *mode;
+ lp.lp_en = *enable;
+ }
+
+ err = mag_msg_to_mgmt_sync(nic_io->hwdev, MAG_CMD_CFG_LOOPBACK_MODE,
+ &lp, sizeof(lp), &lp, &out_size);
+ if (err || !out_size || lp.head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to %s loopback mode, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == MGMT_MSG_CMD_OP_SET ? "set" : "get",
+ err, lp.head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == MGMT_MSG_CMD_OP_GET) {
+ *mode = lp.lp_mode;
+ *enable = lp.lp_en;
+ }
+
+ return 0;
+}
+
+int hinic3_get_loopback_mode(void *hwdev, u8 *mode, u8 *enable)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !mode || !enable)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return hinic3_cfg_loopback_mode(nic_io, MGMT_MSG_CMD_OP_GET, mode,
+ enable);
+}
+
+#define LOOP_MODE_MIN 1
+#define LOOP_MODE_MAX 6
+int hinic3_set_loopback_mode(void *hwdev, u8 mode, u8 enable)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (mode < LOOP_MODE_MIN || mode > LOOP_MODE_MAX) {
+ nic_err(nic_io->dev_hdl, "Invalid loopback mode %u to set\n",
+ mode);
+ return -EINVAL;
+ }
+
+ return hinic3_cfg_loopback_mode(nic_io, MGMT_MSG_CMD_OP_SET, &mode,
+ &enable);
+}
+
+int hinic3_set_led_status(void *hwdev, enum mag_led_type type,
+ enum mag_led_mode mode)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct mag_cmd_set_led_cfg led_info;
+ u16 out_size = sizeof(led_info);
+ int err;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&led_info, 0, sizeof(led_info));
+
+ led_info.function_id = hinic3_global_func_id(hwdev);
+ led_info.type = type;
+ led_info.mode = mode;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_LED_CFG, &led_info,
+ sizeof(led_info), &led_info, &out_size);
+ if (err || led_info.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to set led status, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, led_info.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_port_info(void *hwdev, struct nic_port_info *port_info,
+ u16 channel)
+{
+ struct mag_cmd_get_port_info port_msg;
+ u16 out_size = sizeof(port_msg);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !port_info)
+ return -EINVAL;
+
+ memset(&port_msg, 0, sizeof(port_msg));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ port_msg.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync_ch(hwdev, MAG_CMD_GET_PORT_INFO, &port_msg,
+ sizeof(port_msg), &port_msg, &out_size,
+ channel);
+ if (err || !out_size || port_msg.head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port info, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, port_msg.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ port_info->autoneg_cap = port_msg.an_support;
+ port_info->autoneg_state = port_msg.an_en;
+ port_info->duplex = port_msg.duplex;
+ port_info->port_type = port_msg.wire_type;
+ port_info->speed = port_msg.speed;
+ port_info->fec = port_msg.fec;
+ port_info->lanes = port_msg.lanes;
+ port_info->supported_mode = port_msg.supported_mode;
+ port_info->advertised_mode = port_msg.advertised_mode;
+ port_info->supported_fec_mode = port_msg.supported_fec_mode;
+ /* switch Gbps to Mbps */
+ port_info->bond_speed = (u32)port_msg.bond_speed * RATE_MBPS_TO_GBPS;
+ return 0;
+}
+
+int hinic3_get_speed(void *hwdev, enum mag_cmd_port_speed *speed, u16 channel)
+{
+ struct nic_port_info port_info = {0};
+ int err;
+
+ if (!hwdev || !speed)
+ return -EINVAL;
+
+ err = hinic3_get_port_info(hwdev, &port_info, channel);
+ if (err)
+ return err;
+
+ *speed = port_info.speed;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_speed);
+
+int hinic3_set_link_settings(void *hwdev,
+ struct hinic3_link_ksettings *settings)
+{
+ struct mag_cmd_set_port_cfg info;
+ u16 out_size = sizeof(info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !settings)
+ return -EINVAL;
+
+ memset(&info, 0, sizeof(info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ info.port_id = hinic3_physical_port_id(hwdev);
+ info.config_bitmap = settings->valid_bitmap;
+ info.autoneg = settings->autoneg;
+ info.speed = settings->speed;
+ info.fec = settings->fec;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_PORT_CFG, &info,
+ sizeof(info), &info, &out_size);
+ if (err || !out_size || info.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set link settings, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, info.head.status, out_size);
+ return -EIO;
+ }
+
+ return info.head.status;
+}
+
+int hinic3_get_link_state(void *hwdev, u8 *link_state)
+{
+ struct mag_cmd_get_link_status get_link;
+ u16 out_size = sizeof(get_link);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !link_state)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&get_link, 0, sizeof(get_link));
+ get_link.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_LINK_STATUS, &get_link,
+ sizeof(get_link), &get_link, &out_size);
+ if (err || !out_size || get_link.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to get link state, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, get_link.head.status, out_size);
+ return -EIO;
+ }
+
+ *link_state = get_link.status;
+
+ return 0;
+}
+
+void hinic3_notify_vf_link_status(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u8 link_status)
+{
+ struct mag_cmd_get_link_status link;
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ u16 out_size = sizeof(link);
+ int err;
+
+ memset(&link, 0, sizeof(link));
+ if (vf_infos[HW_VF_ID_TO_OS(vf_id)].registered) {
+ link.status = link_status;
+ link.port_id = hinic3_physical_port_id(nic_io->hwdev);
+ err = hinic3_mbox_to_vf_no_ack(nic_io->hwdev, vf_id,
+ HINIC3_MOD_HILINK,
+ MAG_CMD_GET_LINK_STATUS, &link,
+ sizeof(link), &link, &out_size,
+ HINIC3_CHANNEL_NIC);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+ nic_warn(nic_io->dev_hdl, "VF%d not initialized, disconnect it\n",
+ HW_VF_ID_TO_OS(vf_id));
+ hinic3_unregister_vf(nic_io, vf_id);
+ return;
+ }
+ if (err || !out_size || link.head.status)
+ nic_err(nic_io->dev_hdl,
+ "Send link change event to VF %d failed, err: %d, status: 0x%x, out_size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err, link.head.status, out_size);
+ }
+}
+
+void hinic3_notify_all_vfs_link_changed(void *hwdev, u8 link_status)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 i;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ nic_io->link_status = link_status;
+ for (i = 1; i <= nic_io->max_vfs; i++) {
+ if (!nic_io->vf_infos[HW_VF_ID_TO_OS(i)].link_forced)
+ hinic3_notify_vf_link_status(nic_io, i, link_status);
+ }
+}
+
+static char *g_hw_to_char_fec[HILINK_FEC_MAX_TYPE] = {
+ "not set", "rsfec", "basefec",
+ "nofec", "llrsfec"};
+static char *g_hw_to_speed_info[PORT_SPEED_UNKNOWN] = {
+ "not set", "10MB", "100MB", "1GB", "10GB",
+ "25GB", "40GB", "50GB", "100GB", "200GB"};
+static char *g_hw_to_an_state_info[PORT_CFG_AN_OFF + 1] = {
+ "not set", "on", "off"};
+
+struct port_type_table {
+ u32 port_type;
+ char *port_type_name;
+};
+
+static const struct port_type_table port_optical_type_table_s[] = {
+ {LINK_PORT_UNKNOWN, "UNKNOWN"},
+ {LINK_PORT_OPTICAL_MM, "optical_sr"},
+ {LINK_PORT_OPTICAL_SM, "optical_lr"},
+ {LINK_PORT_PAS_COPPER, "copper"},
+ {LINK_PORT_ACC, "ACC"},
+ {LINK_PORT_BASET, "baset"},
+ {LINK_PORT_AOC, "AOC"},
+ {LINK_PORT_ELECTRIC, "electric"},
+ {LINK_PORT_BACKBOARD_INTERFACE, "interface"},
+};
+
+static char *get_port_type_name(u32 type)
+{
+ u32 i;
+
+ for (i = 0; i < ARRAY_SIZE(port_optical_type_table_s); i++) {
+ if (type == port_optical_type_table_s[i].port_type)
+ return port_optical_type_table_s[i].port_type_name;
+ }
+ return "UNKNOWN TYPE";
+}
+
+static void get_port_type(struct hinic3_nic_io *nic_io,
+ struct mag_cmd_event_port_info *info,
+ char **port_type)
+{
+ if (info->port_type <= LINK_PORT_BACKBOARD_INTERFACE)
+ *port_type = get_port_type_name(info->port_type);
+ else
+ sdk_info(nic_io->dev_hdl, "Unknown port type: %u\n",
+ info->port_type);
+}
+
+static int get_port_temperature_power(struct mag_cmd_event_port_info *info,
+ char *str)
+{
+ char cap_info[CAP_INFO_MAX_LEN];
+
+ memset(cap_info, 0, sizeof(cap_info));
+ snprintf(cap_info, CAP_INFO_MAX_LEN, "%s, %s, Temperature: %u", str,
+ info->sfp_type ? "QSFP" : "SFP", info->cable_temp);
+
+ if (info->sfp_type)
+ snprintf(str, CAP_INFO_MAX_LEN, "%s, rx power: %uuW %uuW %uuW %uuW",
+ cap_info, info->power[0x0], info->power[0x1],
+ info->power[0x2], info->power[0x3]);
+ else
+ snprintf(str, CAP_INFO_MAX_LEN, "%s, rx power: %uuW, tx power: %uuW",
+ cap_info, info->power[0x0], info->power[0x1]);
+
+ return 0;
+}
+
+static void print_cable_info(struct hinic3_nic_io *nic_io,
+ struct mag_cmd_event_port_info *info)
+{
+ char tmp_str[CAP_INFO_MAX_LEN] = {0};
+ char tmp_vendor[DEVICE_VENDOR_MAX_LEN] = {0};
+ char *port_type = "Unknown port type";
+ int i;
+ int err = 0;
+
+ if (info->gpio_insert) {
+ sdk_info(nic_io->dev_hdl, "Cable unpresent\n");
+ return;
+ }
+
+ get_port_type(nic_io, info, &port_type);
+
+ for (i = sizeof(info->vendor_name) - 1; i >= 0; i--) {
+ if (info->vendor_name[i] == ' ')
+ info->vendor_name[i] = '\0';
+ else
+ break;
+ }
+
+ memcpy(tmp_vendor, info->vendor_name, sizeof(info->vendor_name));
+ snprintf(tmp_str, CAP_INFO_MAX_LEN, "Vendor: %s, %s, length: %um, max_speed: %uGbps",
+ tmp_vendor, port_type, info->cable_length, info->max_speed);
+
+ if (info->port_type == LINK_PORT_OPTICAL_MM ||
+ info->port_type == LINK_PORT_AOC) {
+ err = get_port_temperature_power(info, tmp_str);
+ if (err)
+ return;
+ }
+
+ sdk_info(nic_io->dev_hdl, "Cable information: %s\n", tmp_str);
+}
+
+static void print_link_info(struct hinic3_nic_io *nic_io,
+ struct mag_cmd_event_port_info *info,
+ enum hinic3_nic_event_type type)
+{
+ char *fec = "None";
+ char *speed = "None";
+ char *an_state = "None";
+
+ if (info->fec < HILINK_FEC_MAX_TYPE)
+ fec = g_hw_to_char_fec[info->fec];
+ else
+ sdk_info(nic_io->dev_hdl, "Unknown fec type: %u\n", info->fec);
+
+ if (info->an_state > PORT_CFG_AN_OFF) {
+ sdk_info(nic_io->dev_hdl, "an_state %u is invalid",
+ info->an_state);
+ return;
+ }
+
+ an_state = g_hw_to_an_state_info[info->an_state];
+
+ if (info->speed >= PORT_SPEED_UNKNOWN) {
+ sdk_info(nic_io->dev_hdl, "speed %u is invalid", info->speed);
+ return;
+ }
+
+ speed = g_hw_to_speed_info[info->speed];
+ sdk_info(nic_io->dev_hdl, "Link information: speed %s, %s, autoneg %s",
+ speed, fec, an_state);
+}
+
+void print_port_info(struct hinic3_nic_io *nic_io,
+ struct mag_cmd_event_port_info *port_info,
+ enum hinic3_nic_event_type type)
+{
+ print_cable_info(nic_io, port_info);
+
+ print_link_info(nic_io, port_info, type);
+
+ if (type == EVENT_NIC_LINK_UP)
+ return;
+
+ sdk_info(nic_io->dev_hdl, "PMA ctrl: %s, tx %s, rx %s, PMA fifo reg: 0x%x, PMA signal ok reg: 0x%x, RF/LF status reg: 0x%x\n",
+ port_info->pma_ctrl == 1 ? "off" : "on",
+ port_info->tx_enable ? "enable" : "disable",
+ port_info->rx_enable ? "enable" : "disable", port_info->pma_fifo_reg,
+ port_info->pma_signal_ok_reg, port_info->rf_lf);
+ sdk_info(nic_io->dev_hdl, "alos: %u, rx_los: %u, PCS 64 66b reg: 0x%x, PCS link: 0x%x, MAC link: 0x%x PCS_err_cnt: 0x%x\n",
+ port_info->alos, port_info->rx_los, port_info->pcs_64_66b_reg,
+ port_info->pcs_link, port_info->pcs_mac_link,
+ port_info->pcs_err_cnt);
+ sdk_info(nic_io->dev_hdl, "his_link_machine_state = 0x%08x, cur_link_machine_state = 0x%08x\n",
+ port_info->his_link_machine_state,
+ port_info->cur_link_machine_state);
+}
+
+static int hinic3_get_vf_link_status_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf_id, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ struct mag_cmd_get_link_status *get_link = buf_out;
+ bool link_forced, link_up;
+
+ link_forced = vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced;
+ link_up = vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up;
+
+ if (link_forced)
+ get_link->status = link_up ?
+ HINIC3_LINK_UP : HINIC3_LINK_DOWN;
+ else
+ get_link->status = nic_io->link_status;
+
+ get_link->head.status = 0;
+ *out_size = sizeof(*get_link);
+
+ return 0;
+}
+
+int hinic3_refresh_nic_cfg(void *hwdev, struct nic_port_info *port_info)
+{
+ /* TO DO */
+ return 0;
+}
+
+static void get_port_info(void *hwdev,
+ const struct mag_cmd_get_link_status *link_status,
+ struct hinic3_event_link_info *link_info)
+{
+ struct nic_port_info port_info = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+ if (hinic3_func_type(hwdev) != TYPE_VF && link_status->status) {
+ err = hinic3_get_port_info(hwdev, &port_info, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_warn(nic_io->dev_hdl, "Failed to get port info\n");
+ } else {
+ link_info->valid = 1;
+ link_info->port_type = port_info.port_type;
+ link_info->autoneg_cap = port_info.autoneg_cap;
+ link_info->autoneg_state = port_info.autoneg_state;
+ link_info->duplex = port_info.duplex;
+ link_info->speed = port_info.speed;
+ hinic3_refresh_nic_cfg(hwdev, &port_info);
+ }
+ }
+}
+
+static void link_status_event_handler(void *hwdev, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_link_status *link_status = NULL;
+ struct mag_cmd_get_link_status *ret_link_status = NULL;
+ struct hinic3_event_info event_info = {0};
+ struct hinic3_event_link_info *link_info = (void *)event_info.event_data;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct pci_dev *pdev = NULL;
+
+ /* Ignore link change event */
+ if (hinic3_is_bm_slave_host(hwdev))
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ link_status = buf_in;
+ sdk_info(nic_io->dev_hdl, "Link status report received, func_id: %u, status: %u\n",
+ hinic3_global_func_id(hwdev), link_status->status);
+
+ hinic3_link_event_stats(hwdev, link_status->status);
+
+ /* link event reported only after set vport enable */
+ get_port_info(hwdev, link_status, link_info);
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = link_status->status ?
+ EVENT_NIC_LINK_UP : EVENT_NIC_LINK_DOWN;
+
+ hinic3_event_callback(hwdev, &event_info);
+
+ if (nic_io->pcidev_hdl) {
+ pdev = nic_io->pcidev_hdl;
+ if (pdev->subsystem_device == BIFUR_RESOURCE_PF_SSID)
+ return;
+ }
+
+ if (hinic3_func_type(hwdev) != TYPE_VF) {
+ hinic3_notify_all_vfs_link_changed(hwdev, link_status->status);
+ ret_link_status = buf_out;
+ ret_link_status->head.status = 0;
+ *out_size = sizeof(*ret_link_status);
+ }
+}
+
+static void port_info_event_printf(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_event_port_info *port_info = buf_in;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_event_info event_info;
+ enum hinic3_nic_event_type type;
+
+ if (!hwdev) {
+ pr_err("hwdev is NULL\n");
+ return;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ if (in_size != sizeof(*port_info)) {
+ sdk_info(nic_io->dev_hdl, "Invalid port info message size %u, should be %zu\n",
+ in_size, sizeof(*port_info));
+ return;
+ }
+
+ ((struct mag_cmd_event_port_info *)buf_out)->head.status = 0;
+
+ type = port_info->event_type;
+ if (type < EVENT_NIC_LINK_DOWN || type > EVENT_NIC_LINK_UP) {
+ sdk_info(nic_io->dev_hdl, "Invalid hilink info report, type: %d\n",
+ type);
+ return;
+ }
+
+ print_port_info(nic_io, port_info, type);
+
+ memset(&event_info, 0, sizeof(event_info));
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = type;
+
+ *out_size = sizeof(*port_info);
+
+ hinic3_event_callback(hwdev, &event_info);
+}
+
+void hinic3_notify_vf_bond_status(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u8 bond_status)
+{
+ struct mag_cmd_get_bond_status bond;
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ u16 out_size = sizeof(bond);
+ int err;
+
+ memset(&bond, 0, sizeof(bond));
+ if (vf_infos[HW_VF_ID_TO_OS(vf_id)].registered) {
+ bond.status = bond_status;
+ err = hinic3_mbox_to_vf_no_ack(nic_io->hwdev, vf_id,
+ HINIC3_MOD_HILINK,
+ MAG_CMD_GET_BOND_STATUS, &bond,
+ sizeof(bond), &bond, &out_size,
+ HINIC3_CHANNEL_NIC);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+ nic_warn(nic_io->dev_hdl, "VF %hu not initialized, disconnect it\n",
+ HW_VF_ID_TO_OS(vf_id));
+ hinic3_unregister_vf(nic_io, vf_id);
+ return;
+ }
+ if (err || !out_size || bond.head.status)
+ nic_err(nic_io->dev_hdl,
+ "Send bond change event to VF %hu failed, err: %d, status: 0x%x, out_size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err, bond.head.status,
+ out_size);
+ }
+}
+
+void hinic3_notify_all_vfs_bond_changed(void *hwdev, u8 bond_status)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 i;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ nic_io->link_status = bond_status;
+ for (i = 1; i <= nic_io->max_vfs; i++)
+ hinic3_notify_vf_bond_status(nic_io, i, bond_status);
+}
+
+static void bond_status_event_handler(void *hwdev, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_bond_status *bond_status = NULL;
+ struct hinic3_event_info event_info = {};
+ struct hinic3_nic_io *nic_io = NULL;
+ struct mag_cmd_get_bond_status *ret_bond_status = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ bond_status = (struct mag_cmd_get_bond_status *)buf_in;
+ sdk_info(nic_io->dev_hdl, "bond status report received, func_id: %u, status: %u\n",
+ hinic3_global_func_id(hwdev), bond_status->status);
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = bond_status->status ?
+ EVENT_NIC_BOND_UP : EVENT_NIC_BOND_DOWN;
+
+ hinic3_event_callback(hwdev, &event_info);
+
+ if (hinic3_func_type(hwdev) != TYPE_VF) {
+ hinic3_notify_all_vfs_bond_changed(hwdev, bond_status->status);
+ ret_bond_status = buf_out;
+ ret_bond_status->head.status = 0;
+ *out_size = sizeof(*ret_bond_status);
+ }
+}
+
+static void cable_plug_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_wire_event *plug_event = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_port_routine_cmd_extern *rt_cmd_ext = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_event_info event_info;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ rt_cmd_ext = &nic_io->nic_cfg.rt_cmd_ext;
+
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ rt_cmd->mpu_send_sfp_abs = false;
+ rt_cmd->mpu_send_sfp_info = false;
+ rt_cmd_ext->mpu_send_xsfp_tlv_info = false;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ memset(&event_info, 0, sizeof(event_info));
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = EVENT_NIC_PORT_MODULE_EVENT;
+ ((struct hinic3_port_module_event *)(void *)event_info.event_data)->type =
+ plug_event->status ? HINIC3_PORT_MODULE_CABLE_PLUGGED :
+ HINIC3_PORT_MODULE_CABLE_UNPLUGGED;
+
+ *out_size = sizeof(*plug_event);
+ plug_event = buf_out;
+ plug_event->head.status = 0;
+
+ hinic3_event_callback(hwdev, &event_info);
+}
+
+static void port_sfp_info_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_xsfp_info *sfp_info = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_port_routine_cmd_extern *rt_cmd_ext = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+ if (in_size != sizeof(*sfp_info)) {
+		sdk_err(nic_io->dev_hdl, "Invalid sfp info cmd, length: %u, should be %zu\n",
+			in_size, sizeof(*sfp_info));
+ return;
+ }
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ rt_cmd_ext = &nic_io->nic_cfg.rt_cmd_ext;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ memcpy(&rt_cmd->std_sfp_info, sfp_info,
+ sizeof(struct mag_cmd_get_xsfp_info));
+ rt_cmd->mpu_send_sfp_info = true;
+ rt_cmd_ext->mpu_send_xsfp_tlv_info = false;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+}
+
+static void port_xsfp_tlv_info_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_xsfp_tlv_rsp *xsfp_tlv_info = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_port_routine_cmd_extern *rt_cmd_ext = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ size_t cpy_len = in_size - sizeof(struct mgmt_msg_head) -
+ XSFP_TLV_PRE_INFO_LEN;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (nic_io == NULL)
+ return;
+
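+	/* cpy_len is the TLV payload length without the message head and the fixed preamble */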
+ if (cpy_len > XSFP_CMIS_INFO_MAX_SIZE) {
+		sdk_err(nic_io->dev_hdl, "invalid cpy_len(%zu)\n", cpy_len);
+ return;
+ }
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ rt_cmd_ext = &nic_io->nic_cfg.rt_cmd_ext;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ rt_cmd_ext->std_xsfp_tlv_info.port_id = xsfp_tlv_info->port_id;
+ memcpy(&(rt_cmd_ext->std_xsfp_tlv_info.tlv_buf[0]),
+ &(xsfp_tlv_info->tlv_buf[0]), cpy_len);
+ rt_cmd->mpu_send_sfp_info = false;
+ rt_cmd_ext->mpu_send_xsfp_tlv_info = true;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+}
+
+static void port_sfp_abs_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_xsfp_present *sfp_abs = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+ if (in_size != sizeof(*sfp_abs)) {
+		sdk_err(nic_io->dev_hdl, "Invalid sfp absent cmd, length: %u, should be %zu\n",
+			in_size, sizeof(*sfp_abs));
+ return;
+ }
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ memcpy(&rt_cmd->abs, sfp_abs, sizeof(struct mag_cmd_get_xsfp_present));
+ rt_cmd->mpu_send_sfp_abs = true;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+}
+
+bool hinic3_if_sfp_absent(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct mag_cmd_get_xsfp_present sfp_abs;
+ u8 port_id = hinic3_physical_port_id(hwdev);
+ u16 out_size = sizeof(sfp_abs);
+ int err;
+ bool sfp_abs_status = 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return true;
+ memset(&sfp_abs, 0, sizeof(sfp_abs));
+
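+	/* use the absence status cached from the MPU wire event if available, otherwise query it synchronously */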
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ if (rt_cmd->mpu_send_sfp_abs) {
+ if (rt_cmd->abs.head.status) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return true;
+ }
+
+ sfp_abs_status = (bool)rt_cmd->abs.abs_status;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return sfp_abs_status;
+ }
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ sfp_abs.port_id = port_id;
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_XSFP_PRESENT,
+ &sfp_abs, sizeof(sfp_abs), &sfp_abs,
+ &out_size);
+ if (sfp_abs.head.status || err || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port%u sfp absent status, err: %d, status: 0x%x, out size: 0x%x\n",
+ port_id, err, sfp_abs.head.status, out_size);
+ return true;
+ }
+
+ return (sfp_abs.abs_status == 0 ? false : true);
+}
+
+int hinic3_get_sfp_tlv_info(void *hwdev, struct drv_mag_cmd_get_xsfp_tlv_rsp
+ *sfp_tlv_info,
+ const struct mag_cmd_get_xsfp_tlv_req
+ *sfp_tlv_info_req)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd_extern *rt_cmd_ext = NULL;
+ u16 out_size = sizeof(*sfp_tlv_info);
+ int err;
+
+ if ((hwdev == NULL) || (sfp_tlv_info == NULL))
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (nic_io == NULL)
+ return -EINVAL;
+
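+	/* return the TLV info cached from the MPU event if present, otherwise query the MPU synchronously */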
+ rt_cmd_ext = &nic_io->nic_cfg.rt_cmd_ext;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ if (rt_cmd_ext->mpu_send_xsfp_tlv_info == true) {
+ if (rt_cmd_ext->std_xsfp_tlv_info.head.status != 0) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return -EIO;
+ }
+
+ memcpy(sfp_tlv_info, &rt_cmd_ext->std_xsfp_tlv_info,
+ sizeof(*sfp_tlv_info));
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return 0;
+ }
+
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_XSFP_TLV_INFO,
+ (void *)sfp_tlv_info_req,
+ sizeof(*sfp_tlv_info_req),
+ sfp_tlv_info, &out_size);
+ if ((sfp_tlv_info->head.status != 0) || (err != 0) || (out_size == 0)) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port%u tlv sfp eeprom information, err: %d, status: 0x%x, out size: 0x%x\n",
+ hinic3_physical_port_id(hwdev), err,
+ sfp_tlv_info->head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int hinic3_trans_cmis_get_page_pos(u32 page_id, u32 content_len, u32 *pos)
+{
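+	/* pages 00h-03h are stored contiguously; pages 11h and 12h are placed in the 04h and 05h slots */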
+ if (page_id <= QSFP_CMIS_PAGE_03H) {
+ *pos = (page_id * content_len);
+ return 0;
+ }
+
+ if (page_id == QSFP_CMIS_PAGE_11H) {
+ *pos = (QSFP_CMIS_PAGE_04H * content_len);
+ return 0;
+ }
+
+ if (page_id == QSFP_CMIS_PAGE_12H) {
+ *pos = (QSFP_CMIS_PAGE_05H * content_len);
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int hinic3_get_page_key_info(struct mgmt_tlv_info *tlv_info,
+ struct parse_tlv_info *page_info, u8 idx,
+ u32 *total_len)
+{
+ u8 *src_addr = NULL;
+ u8 *dst_addr = NULL;
+ u8 *tmp_addr = NULL;
+ u32 page_id = 0;
+ u32 content_len = 0;
+ u32 src_pos = 0;
+ int ret;
+
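+	/* the TLV value starts with a u32 page id followed by the page payload */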
+ page_id = MGMT_TLV_GET_U32(tlv_info->value);
+ content_len = tlv_info->length - MGMT_TLV_U32_SIZE;
+ if (page_id == QSFP_CMIS_PAGE_00H) {
+ tmp_addr = (u8 *)(tlv_info + 1);
+ page_info->id = *(tmp_addr + MGMT_TLV_U32_SIZE);
+ }
+
+ ret = hinic3_trans_cmis_get_page_pos(page_id, content_len, &src_pos);
+ if (ret != 0)
+ return ret;
+
+ src_addr = page_info->tlv_page_info + src_pos;
+ tmp_addr = (u8 *)(tlv_info + 1);
+ dst_addr = tmp_addr + MGMT_TLV_U32_SIZE;
+	memcpy(src_addr, dst_addr, content_len);
+
+ if (idx < XSFP_CMIS_PARSE_PAGE_NUM)
+ page_info->tlv_page_num[idx] = page_id;
+
+ *total_len += content_len;
+
+ return 0;
+}
+
+static int hinic3_trans_cmis_tlv_info_to_buf(u8 *sfp_tlv_info,
+ struct parse_tlv_info *page_info)
+{
+ struct mgmt_tlv_info *tlv_info = NULL;
+ u8 *tlv_buf = sfp_tlv_info;
+ u8 idx = 0;
+ u32 total_len = 0;
+ int ret = 0;
+ bool need_continue = true;
+
+ if ((sfp_tlv_info == NULL) || (page_info == NULL))
+ return -EIO;
+
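+	/* walk the TLV chain until the END entry, copying page payloads and recording the wire type */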
+ while (need_continue) {
+ tlv_info = (struct mgmt_tlv_info *)tlv_buf;
+ switch (tlv_info->type) {
+ case MAG_XSFP_TYPE_PAGE:
+ ret = hinic3_get_page_key_info(
+ tlv_info, page_info, idx, &total_len);
+ if (ret != 0) {
+ pr_err("lib_get_page_key_info fail,ret:0x%x.\n",
+ ret);
+ break;
+ }
+ idx++;
+ break;
+
+ case MAG_XSFP_TYPE_WIRE_TYPE:
+ page_info->wire_type =
+ MGMT_TLV_GET_U32(&(tlv_info->value[0]));
+ break;
+
+ case MAG_XSFP_TYPE_END:
+ need_continue = false;
+ break;
+
+ default:
+ break;
+ }
+
+ tlv_buf += (sizeof(struct mgmt_tlv_info) + tlv_info->length);
+ }
+
+ page_info->tlv_page_info_len = total_len;
+
+ return 0;
+}
+
+int hinic3_get_tlv_xsfp_eeprom(void *hwdev, u8 *data, u32 len)
+{
+ int err = 0;
+ struct mag_cmd_get_xsfp_tlv_req xsfp_tlv_info_req = {0};
+
+ xsfp_tlv_info_req.rsp_buf_len = XSFP_CMIS_INFO_MAX_SIZE;
+ xsfp_tlv_info_req.port_id = hinic3_physical_port_id(hwdev);
+ err = hinic3_get_sfp_tlv_info(hwdev, &g_xsfp_tlv_info,
+ &xsfp_tlv_info_req);
+ if (err != 0)
+ return err;
+
+ err = hinic3_trans_cmis_tlv_info_to_buf(g_xsfp_tlv_info.tlv_buf,
+ &g_page_info);
+	if (err != 0)
+		return err;
+
+	memcpy(data, g_page_info.tlv_page_info, len);
+
+	return 0;
+}
+
+int hinic3_get_sfp_info(void *hwdev, struct mag_cmd_get_xsfp_info *sfp_info)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ u8 sfp_info_status = 0;
+ u16 out_size = sizeof(*sfp_info);
+ int err;
+
+ if (!hwdev || !sfp_info)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+	rt_cmd = &nic_io->nic_cfg.rt_cmd;
+	mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+	sfp_info_status = rt_cmd->std_sfp_info.head.status;
+ if (rt_cmd->mpu_send_sfp_info) {
+ if (sfp_info_status != 0) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return (sfp_info_status == HINIC3_MGMT_CMD_UNSUPPORTED)
+ ? HINIC3_MGMT_CMD_UNSUPPORTED : -EIO;
+ }
+
+ memcpy(sfp_info, &rt_cmd->std_sfp_info, sizeof(*sfp_info));
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return 0;
+ }
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ sfp_info->port_id = hinic3_physical_port_id(hwdev);
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_XSFP_INFO, sfp_info,
+ sizeof(*sfp_info), sfp_info, &out_size);
+	if (sfp_info->head.status == HINIC3_MGMT_CMD_UNSUPPORTED)
+		return HINIC3_MGMT_CMD_UNSUPPORTED;
+
+ if ((sfp_info->head.status != 0) || (err != 0) || (out_size == 0)) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port%u sfp eeprom information, err: %d, status: 0x%x, out size: 0x%x\n",
+ hinic3_physical_port_id(hwdev), err,
+ sfp_info->head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_sfp_eeprom(void *hwdev, u8 *data, u32 len)
+{
+ struct mag_cmd_get_xsfp_info sfp_info;
+ int err;
+
+ if (!hwdev || !data || len > PAGE_SIZE)
+ return -EINVAL;
+
+ if (hinic3_if_sfp_absent(hwdev))
+ return -ENXIO;
+
+ memset(&sfp_info, 0, sizeof(sfp_info));
+
+ err = hinic3_get_sfp_info(hwdev, &sfp_info);
+ if (err)
+ return err;
+
+ memcpy(data, sfp_info.sfp_info, sizeof(sfp_info.sfp_info));
+
+ return 0;
+}
+
+int hinic3_get_sfp_type(void *hwdev, u8 *sfp_type, u8 *sfp_type_ext)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ u8 sfp_data[STD_SFP_INFO_MAX_SIZE];
+ int err = 0;
+
+ if (!hwdev || !sfp_type || !sfp_type_ext)
+ return -EINVAL;
+
+ if (hinic3_if_sfp_absent(hwdev))
+ return -ENXIO;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ if (rt_cmd->mpu_send_sfp_info) {
+ if (rt_cmd->std_sfp_info.head.status == 0) {
+ *sfp_type = rt_cmd->std_sfp_info.sfp_info[0];
+ *sfp_type_ext = rt_cmd->std_sfp_info.sfp_info[1];
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return 0;
+ }
+
+ if (rt_cmd->std_sfp_info.head.status != HINIC3_MGMT_CMD_UNSUPPORTED) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return -EIO;
+ }
+
+ err = HINIC3_MGMT_CMD_UNSUPPORTED; /* cmis */
+ }
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
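+	/* read the standard eeprom first; fall back to the CMIS TLV eeprom if it is unsupported */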
+ if (err == 0) {
+ err = hinic3_get_sfp_eeprom(hwdev, (u8 *)sfp_data,
+ STD_SFP_INFO_MAX_SIZE);
+ } else {
+		/* cached info reports the standard eeprom as unsupported: CMIS module */
+ err = hinic3_get_tlv_xsfp_eeprom(hwdev, (u8 *)sfp_data,
+ STD_SFP_INFO_MAX_SIZE);
+ }
+
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ err = hinic3_get_tlv_xsfp_eeprom(hwdev, (u8 *)sfp_data,
+ STD_SFP_INFO_MAX_SIZE);
+
+ if (err)
+ return err;
+
+ *sfp_type = sfp_data[0];
+ *sfp_type_ext = sfp_data[1];
+
+ return 0;
+}
+
+int hinic3_set_link_status_follow(void *hwdev, enum hinic3_link_follow_status status)
+{
+ struct mag_cmd_set_link_follow follow;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(follow);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (status >= HINIC3_LINK_FOLLOW_STATUS_MAX) {
+ nic_err(nic_io->dev_hdl, "Invalid link follow status: %d\n", status);
+ return -EINVAL;
+ }
+
+ memset(&follow, 0, sizeof(follow));
+ follow.function_id = hinic3_global_func_id(hwdev);
+ follow.follow = status;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_LINK_FOLLOW, &follow,
+ sizeof(follow), &follow, &out_size);
+ if ((follow.head.status != HINIC3_MGMT_CMD_UNSUPPORTED && follow.head.status) ||
+ err || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to set link status follow port status, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, follow.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return follow.head.status;
+}
+
+int hinic3_update_pf_bw(void *hwdev)
+{
+ struct nic_port_info port_info = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF || !HINIC3_SUPPORT_RATE_LIMIT(hwdev))
+ return 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ err = hinic3_get_port_info(hwdev, &port_info, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get port info\n");
+ return -EIO;
+ }
+
+ err = hinic3_set_pf_rate(hwdev, port_info.speed);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set pf bandwidth\n");
+ return err;
+ }
+
+ return 0;
+}
+
+int hinic3_set_pf_bw_limit(void *hwdev, u32 bw_limit)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 old_bw_limit;
+ u8 link_state = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (bw_limit > MAX_LIMIT_BW) {
+ nic_err(nic_io->dev_hdl, "Invalid bandwidth: %u\n", bw_limit);
+ return -EINVAL;
+ }
+
+ err = hinic3_get_link_state(hwdev, &link_state);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get link state\n");
+ return -EIO;
+ }
+
+ if (!link_state) {
+ nic_err(nic_io->dev_hdl, "Link status must be up when setting pf tx rate\n");
+ return -EINVAL;
+ }
+
+ if (nic_io->direct == HINIC3_NIC_TX) {
+ old_bw_limit = nic_io->nic_cfg.pf_bw_tx_limit;
+ nic_io->nic_cfg.pf_bw_tx_limit = bw_limit;
+ } else {
+ old_bw_limit = nic_io->nic_cfg.pf_bw_rx_limit;
+ nic_io->nic_cfg.pf_bw_rx_limit = bw_limit;
+ }
+
+ err = hinic3_update_pf_bw(hwdev);
+ if (err) {
+ if (nic_io->direct == HINIC3_NIC_TX)
+ nic_io->nic_cfg.pf_bw_tx_limit = old_bw_limit;
+ else
+ nic_io->nic_cfg.pf_bw_rx_limit = old_bw_limit;
+ return err;
+ }
+
+ return 0;
+}
+
+static const struct vf_msg_handler vf_mag_cmd_handler[] = {
+ {
+ .cmd = MAG_CMD_GET_LINK_STATUS,
+ .handler = hinic3_get_vf_link_status_msg_handler,
+ },
+};
+
+/* pf/ppf handler mbox msg from vf */
+int hinic3_pf_mag_mbox_handler(void *hwdev, u16 vf_id,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 index, cmd_size = ARRAY_LEN(vf_mag_cmd_handler);
+ struct hinic3_nic_io *nic_io = NULL;
+ const struct vf_msg_handler *handler = NULL;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ for (index = 0; index < cmd_size; index++) {
+ handler = &vf_mag_cmd_handler[index];
+ if (cmd == handler->cmd)
+ return handler->handler(nic_io, vf_id, buf_in, in_size,
+ buf_out, out_size);
+ }
+
+ nic_warn(nic_io->dev_hdl, "NO handler for mag cmd: %u received from vf id: %u\n",
+ cmd, vf_id);
+
+ return -EINVAL;
+}
+
+static struct nic_event_handler mag_cmd_handler[] = {
+ {
+ .cmd = MAG_CMD_GET_LINK_STATUS,
+ .handler = link_status_event_handler,
+ },
+
+ {
+ .cmd = MAG_CMD_EVENT_PORT_INFO,
+ .handler = port_info_event_printf,
+ },
+
+ {
+ .cmd = MAG_CMD_WIRE_EVENT,
+ .handler = cable_plug_event,
+ },
+
+ {
+ .cmd = MAG_CMD_GET_XSFP_INFO,
+ .handler = port_sfp_info_event,
+ },
+
+ {
+ .cmd = MAG_CMD_GET_XSFP_PRESENT,
+ .handler = port_sfp_abs_event,
+ },
+
+ {
+ .cmd = MAG_CMD_GET_BOND_STATUS,
+ .handler = bond_status_event_handler,
+ },
+
+ {
+ .cmd = MAG_CMD_GET_XSFP_TLV_INFO,
+ .handler = port_xsfp_tlv_info_event,
+ },
+};
+
+static int hinic3_mag_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 size = ARRAY_LEN(mag_cmd_handler);
+ u32 i;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ *out_size = 0;
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ for (i = 0; i < size; i++) {
+ if (cmd == mag_cmd_handler[i].cmd) {
+ mag_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ return 0;
+ }
+ }
+
+ /* can't find this event cmd */
+ sdk_warn(nic_io->dev_hdl, "Unsupported mag event, cmd: %u\n", cmd);
+ *out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+
+ return 0;
+}
+
+int hinic3_vf_mag_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ return hinic3_mag_event_handler(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+}
+
+/* pf/ppf handler mgmt cpu report hilink event */
+void hinic3_pf_mag_event_handler(void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ hinic3_mag_event_handler(pri_handle, cmd, buf_in, in_size,
+ buf_out, out_size);
+}
+
+static int _mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel)
+{
+ u32 i, cmd_cnt = ARRAY_LEN(vf_mag_cmd_handler);
+ bool cmd_to_pf = false;
+
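+	/* a VF on a non-slave host sends commands the PF can handle over the mailbox; everything else goes to the management CPU */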
+ if (hinic3_func_type(hwdev) == TYPE_VF &&
+ !hinic3_is_slave_host(hwdev)) {
+ for (i = 0; i < cmd_cnt; i++) {
+ if (cmd == vf_mag_cmd_handler[i].cmd) {
+ cmd_to_pf = true;
+ break;
+ }
+ }
+ }
+
+ if (cmd_to_pf)
+ return hinic3_mbox_to_pf(hwdev, HINIC3_MOD_HILINK, cmd, buf_in,
+ in_size, buf_out, out_size, 0,
+ channel);
+
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_HILINK, cmd, buf_in,
+ in_size, buf_out, out_size, 0, channel);
+}
+
+static int mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _mag_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, HINIC3_CHANNEL_NIC);
+}
+
+static int mag_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel)
+{
+ return _mag_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, channel);
+}
+
+#if defined(ETHTOOL_GFECPARAM) && defined(ETHTOOL_SFECPARAM)
+struct fecparam_value_map {
+ u8 hinic3_fec_offset;
+ u8 hinic3_fec_value;
+ u8 ethtool_fec_value;
+};
+
+static void fecparam_convert(u32 opcode, u8 in_fec_param, u8 *out_fec_param)
+{
+ u8 i;
+	u8 fec_value_table_len;
+ struct fecparam_value_map fec_value_table[] = {
+ {PORT_FEC_NOT_SET, BIT(PORT_FEC_NOT_SET), ETHTOOL_FEC_NONE},
+ {PORT_FEC_RSFEC, BIT(PORT_FEC_RSFEC), ETHTOOL_FEC_RS},
+ {PORT_FEC_BASEFEC, BIT(PORT_FEC_BASEFEC), ETHTOOL_FEC_BASER},
+ {PORT_FEC_NOFEC, BIT(PORT_FEC_NOFEC), ETHTOOL_FEC_OFF},
+#ifdef ETHTOOL_FEC_LLRS
+ {PORT_FEC_LLRSFEC, BIT(PORT_FEC_LLRSFEC), ETHTOOL_FEC_LLRS},
+#endif
+ {PORT_FEC_AUTO, BIT(PORT_FEC_AUTO), ETHTOOL_FEC_AUTO}
+ };
+
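+	/* SET: translate the ethtool FEC flag into the MPU bit offset; GET: translate the MPU FEC bitmask into ethtool flags */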
+ *out_fec_param = 0;
+	fec_value_table_len = (u8)ARRAY_LEN(fec_value_table);
+
+ if (opcode == MAG_CMD_OPCODE_SET) {
+		for (i = 0; i < fec_value_table_len; i++) {
+ if ((in_fec_param &
+ fec_value_table[i].ethtool_fec_value) != 0)
+ /* The MPU uses the offset to determine the FEC mode. */
+ *out_fec_param =
+ fec_value_table[i].hinic3_fec_offset;
+ }
+ }
+
+ if (opcode == MAG_CMD_OPCODE_GET) {
+		for (i = 0; i < fec_value_table_len; i++) {
+ if ((in_fec_param &
+ fec_value_table[i].hinic3_fec_value) != 0)
+ *out_fec_param |=
+ fec_value_table[i].ethtool_fec_value;
+ }
+ }
+}
+
+/* When the ethtool is used to set the FEC mode */
+static bool check_fecparam_is_valid(u8 fec_param)
+{
+ if (
+#ifdef ETHTOOL_FEC_LLRS
+ (fec_param == ETHTOOL_FEC_LLRS) ||
+#endif
+ (fec_param == ETHTOOL_FEC_RS) ||
+ (fec_param == ETHTOOL_FEC_BASER) ||
+ (fec_param == ETHTOOL_FEC_OFF)) {
+ return true;
+ }
+ return false;
+}
+
+int set_fecparam(void *hwdev, u8 fecparam)
+{
+ struct mag_cmd_cfg_fec_mode fec_msg = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(fec_msg);
+ u8 advertised_fec = 0;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+	if (!check_fecparam_is_valid(fecparam)) {
+ nic_err(nic_io->dev_hdl, "fec param is invalid, failed to set fec param\n");
+ return -EINVAL;
+ }
+ fecparam_convert(MAG_CMD_OPCODE_SET, fecparam, &advertised_fec);
+ fec_msg.opcode = MAG_CMD_OPCODE_SET;
+ fec_msg.port_id = hinic3_physical_port_id(hwdev);
+ fec_msg.advertised_fec = advertised_fec;
+ err = mag_msg_to_mgmt_sync_ch(hwdev, MAG_CMD_CFG_FEC_MODE,
+ &fec_msg, sizeof(fec_msg),
+ &fec_msg, &out_size, HINIC3_CHANNEL_NIC);
+ if ((err != 0) || (fec_msg.head.status != 0))
+ return -EINVAL;
+ return 0;
+}
+
+int get_fecparam(void *hwdev, u8 *advertised_fec, u8 *supported_fec)
+{
+ struct mag_cmd_cfg_fec_mode fec_msg = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(fec_msg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ fec_msg.opcode = MAG_CMD_OPCODE_GET;
+ fec_msg.port_id = hinic3_physical_port_id(hwdev);
+ err = mag_msg_to_mgmt_sync_ch(hwdev, MAG_CMD_CFG_FEC_MODE,
+ &fec_msg, sizeof(fec_msg),
+ &fec_msg, &out_size, HINIC3_CHANNEL_NIC);
+ if ((err != 0) || (fec_msg.head.status != 0))
+ return -EINVAL;
+
+	/* fec_msg.advertised_fec holds a bit offset, so its value is
+	 * BIT(fec_msg.advertised_fec); fec_msg.supported_fec is already a bitmask
+	 */
+ fecparam_convert(MAG_CMD_OPCODE_GET, BIT(fec_msg.advertised_fec),
+ advertised_fec);
+ fecparam_convert(MAG_CMD_OPCODE_GET, fec_msg.supported_fec,
+ supported_fec);
+ return 0;
+}
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_main.c b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
new file mode 100644
index 0000000..7790ae2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
@@ -0,0 +1,1469 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/dcbnl.h>
+#include <linux/tcp.h>
+#include <linux/ip.h>
+#include <linux/debugfs.h>
+
+#include "ossl_knl.h"
+#if defined(HAVE_NDO_UDP_TUNNEL_ADD) || defined(HAVE_UDP_TUNNEL_NIC_INFO)
+#include <net/udp_tunnel.h>
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD || HAVE_UDP_TUNNEL_NIC_INFO */
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_lld.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_rss.h"
+#include "hinic3_dcb.h"
+#include "hinic3_nic_prof.h"
+#include "hinic3_profile.h"
+#include "hinic3_bond.h"
+
+#define DEFAULT_POLL_WEIGHT 64
+static unsigned int poll_weight = DEFAULT_POLL_WEIGHT;
+module_param(poll_weight, uint, 0444);
+MODULE_PARM_DESC(poll_weight, "Number of packets for NAPI budget (default=64)");
+
+#define HINIC3_DEAULT_TXRX_MSIX_PENDING_LIMIT 2
+#define HINIC3_DEAULT_TXRX_MSIX_COALESC_TIMER_CFG 25
+#define HINIC3_DEAULT_TXRX_MSIX_RESEND_TIMER_CFG 7
+
+static unsigned char qp_pending_limit = HINIC3_DEAULT_TXRX_MSIX_PENDING_LIMIT;
+module_param(qp_pending_limit, byte, 0444);
+MODULE_PARM_DESC(qp_pending_limit, "QP MSI-X Interrupt coalescing parameter pending_limit (default=2)");
+
+static unsigned char qp_coalesc_timer_cfg =
+ HINIC3_DEAULT_TXRX_MSIX_COALESC_TIMER_CFG;
+module_param(qp_coalesc_timer_cfg, byte, 0444);
+MODULE_PARM_DESC(qp_coalesc_timer_cfg, "QP MSI-X Interrupt coalescing parameter coalesc_timer_cfg (default=25)");
+
+#define DEFAULT_RX_BUFF_LEN 2
+u16 rx_buff = DEFAULT_RX_BUFF_LEN;
+module_param(rx_buff, ushort, 0444);
+MODULE_PARM_DESC(rx_buff, "Set rx_buff size, buffer len must be 2^n. 2 - 16, default is 2KB");
+
+static unsigned int lro_replenish_thld = 256;
+module_param(lro_replenish_thld, uint, 0444);
+MODULE_PARM_DESC(lro_replenish_thld, "Number of wqes for LRO replenish buffer (default=256)");
+
+static unsigned char set_link_status_follow = HINIC3_LINK_FOLLOW_STATUS_MAX;
+module_param(set_link_status_follow, byte, 0444);
+MODULE_PARM_DESC(set_link_status_follow, "Set link status follow port status (0=default,1=follow,2=separate,3=unset)");
+
+static bool page_pool_enabled = true;
+module_param(page_pool_enabled, bool, 0444);
+MODULE_PARM_DESC(page_pool_enabled, "enable/disable page_pool feature for rxq page management (default enable)");
+
+#define HINIC3_NIC_DEV_WQ_NAME "hinic3_nic_dev_wq"
+
+#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_LINK)
+
+#define QID_MASKED(q_id, nic_dev) ((q_id) & ((nic_dev)->num_qps - 1))
+#define WATCHDOG_TIMEOUT 5
+
+#define HINIC3_SQ_DEPTH 1024
+#define HINIC3_RQ_DEPTH 1024
+
+#define LRO_ENABLE 1
+
+enum hinic3_rx_buff_len {
+ RX_BUFF_VALID_2KB = 2,
+ RX_BUFF_VALID_4KB = 4,
+ RX_BUFF_VALID_8KB = 8,
+ RX_BUFF_VALID_16KB = 16,
+};
+
+#define CONVERT_UNIT 1024
+#define NIC_MAX_PF_NUM 32
+
+#define BIFUR_RESOURCE_PF_SSID 0x5a1
+
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+static int hinic3_netdev_event(struct notifier_block *notifier, unsigned long event, void *ptr);
+
+/* used for netdev notifier register/unregister */
+static DEFINE_MUTEX(hinic3_netdev_notifiers_mutex);
+static int hinic3_netdev_notifiers_ref_cnt;
+static struct notifier_block hinic3_netdev_notifier = {
+ .notifier_call = hinic3_netdev_event,
+};
+
+#ifdef HAVE_UDP_TUNNEL_NIC_INFO
+static const struct udp_tunnel_nic_info hinic3_udp_tunnels = {
+ .set_port = hinic3_udp_tunnel_set_port,
+ .unset_port = hinic3_udp_tunnel_unset_port,
+ .flags = UDP_TUNNEL_NIC_INFO_MAY_SLEEP,
+ .tables = {
+ { .n_entries = 1, .tunnel_types = UDP_TUNNEL_TYPE_VXLAN, },
+ },
+};
+#endif /* HAVE_UDP_TUNNEL_NIC_INFO */
+
+static void hinic3_register_notifier(struct hinic3_nic_dev *nic_dev)
+{
+ int err;
+
+ mutex_lock(&hinic3_netdev_notifiers_mutex);
+ hinic3_netdev_notifiers_ref_cnt++;
+ if (hinic3_netdev_notifiers_ref_cnt == 1) {
+ err = register_netdevice_notifier(&hinic3_netdev_notifier);
+ if (err) {
+ nic_info(&nic_dev->pdev->dev, "Register netdevice notifier failed, err: %d\n",
+ err);
+ hinic3_netdev_notifiers_ref_cnt--;
+ }
+ }
+ mutex_unlock(&hinic3_netdev_notifiers_mutex);
+}
+
+static void hinic3_unregister_notifier(struct hinic3_nic_dev *nic_dev)
+{
+ mutex_lock(&hinic3_netdev_notifiers_mutex);
+ if (hinic3_netdev_notifiers_ref_cnt == 1)
+ unregister_netdevice_notifier(&hinic3_netdev_notifier);
+
+ if (hinic3_netdev_notifiers_ref_cnt)
+ hinic3_netdev_notifiers_ref_cnt--;
+ mutex_unlock(&hinic3_netdev_notifiers_mutex);
+}
+
+#define HINIC3_MAX_VLAN_DEPTH_OFFLOAD_SUPPORT 1
+#define HINIC3_VLAN_CLEAR_OFFLOAD (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | \
+ NETIF_F_SCTP_CRC | NETIF_F_RXCSUM | \
+ NETIF_F_ALL_TSO)
+
+static int hinic3_netdev_event(struct notifier_block *notifier, unsigned long event, void *ptr)
+{
+ struct net_device *ndev = netdev_notifier_info_to_dev(ptr);
+ struct net_device *real_dev = NULL;
+ struct net_device *ret = NULL;
+ u16 vlan_depth;
+
+ if (!is_vlan_dev(ndev))
+ return NOTIFY_DONE;
+
+ dev_hold(ndev);
+
+ switch (event) {
+ case NETDEV_REGISTER:
+ real_dev = vlan_dev_real_dev(ndev);
+ if (!hinic3_is_netdev_ops_match(real_dev))
+ goto out;
+
+ vlan_depth = 1;
+ ret = vlan_dev_priv(ndev)->real_dev;
+ while (is_vlan_dev(ret)) {
+ ret = vlan_dev_priv(ret)->real_dev;
+ vlan_depth++;
+ }
+
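+		/* keep hardware offloads only for single-tagged vlan devices; deeper vlan stacking clears them */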
+ if (vlan_depth == HINIC3_MAX_VLAN_DEPTH_OFFLOAD_SUPPORT) {
+ ndev->vlan_features &= (~HINIC3_VLAN_CLEAR_OFFLOAD);
+ } else if (vlan_depth > HINIC3_MAX_VLAN_DEPTH_OFFLOAD_SUPPORT) {
+#ifdef HAVE_NDO_SET_FEATURES
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_hw_features(ndev,
+ get_netdev_hw_features(ndev) &
+ (~HINIC3_VLAN_CLEAR_OFFLOAD));
+#else
+ ndev->hw_features &= (~HINIC3_VLAN_CLEAR_OFFLOAD);
+#endif
+#endif
+ ndev->features &= (~HINIC3_VLAN_CLEAR_OFFLOAD);
+ }
+
+ break;
+
+ default:
+ break;
+	}
+
+out:
+ dev_put(ndev);
+
+ return NOTIFY_DONE;
+}
+#endif
+
+void hinic3_link_status_change(struct hinic3_nic_dev *nic_dev, bool status)
+{
+ struct net_device *netdev = nic_dev->netdev;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev) ||
+ test_bit(HINIC3_LP_TEST, &nic_dev->flags) ||
+ test_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ return;
+
+ if (status) {
+ if (netif_carrier_ok(netdev))
+ return;
+
+ nic_dev->link_status = status;
+ netif_carrier_on(netdev);
+ nicif_info(nic_dev, link, netdev, "Link is up\n");
+ } else {
+ if (!netif_carrier_ok(netdev))
+ return;
+
+ nic_dev->link_status = status;
+ netif_carrier_off(netdev);
+ nicif_info(nic_dev, link, netdev, "Link is down\n");
+ }
+}
+
+static void netdev_feature_init(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ netdev_features_t dft_fts = 0;
+ netdev_features_t cso_fts = 0;
+ netdev_features_t vlan_fts = 0;
+ netdev_features_t tso_fts = 0;
+ netdev_features_t hw_features = 0;
+
+ dft_fts |= NETIF_F_SG | NETIF_F_HIGHDMA;
+
+ if (HINIC3_SUPPORT_CSUM(nic_dev->hwdev))
+ cso_fts |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM;
+ if (HINIC3_SUPPORT_SCTP_CRC(nic_dev->hwdev))
+ cso_fts |= NETIF_F_SCTP_CRC;
+
+ if (HINIC3_SUPPORT_TSO(nic_dev->hwdev))
+ tso_fts |= NETIF_F_TSO | NETIF_F_TSO6;
+
+ if (HINIC3_SUPPORT_VLAN_OFFLOAD(nic_dev->hwdev)) {
+#if defined(NETIF_F_HW_VLAN_CTAG_TX)
+ vlan_fts |= NETIF_F_HW_VLAN_CTAG_TX;
+#elif defined(NETIF_F_HW_VLAN_TX)
+ vlan_fts |= NETIF_F_HW_VLAN_TX;
+#endif
+
+#if defined(NETIF_F_HW_VLAN_CTAG_RX)
+ vlan_fts |= NETIF_F_HW_VLAN_CTAG_RX;
+#elif defined(NETIF_F_HW_VLAN_RX)
+ vlan_fts |= NETIF_F_HW_VLAN_RX;
+#endif
+ }
+
+ if (HINIC3_SUPPORT_RXVLAN_FILTER(nic_dev->hwdev)) {
+#if defined(NETIF_F_HW_VLAN_CTAG_FILTER)
+ vlan_fts |= NETIF_F_HW_VLAN_CTAG_FILTER;
+#elif defined(NETIF_F_HW_VLAN_FILTER)
+ vlan_fts |= NETIF_F_HW_VLAN_FILTER;
+#endif
+ }
+
+#ifdef HAVE_ENCAPSULATION_TSO
+ if (HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev->hwdev))
+ tso_fts |= NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_UDP_TUNNEL_CSUM;
+#endif /* HAVE_ENCAPSULATION_TSO */
+
+ /* LRO is disable in default, only set hw features */
+ if (HINIC3_SUPPORT_LRO(nic_dev->hwdev))
+ hw_features |= NETIF_F_LRO;
+
+ netdev->features |= dft_fts | cso_fts | tso_fts | vlan_fts;
+ netdev->vlan_features |= dft_fts | cso_fts | tso_fts;
+
+ if (nic_dev->nic_cap.lro_enable == LRO_ENABLE) {
+ netdev->features |= NETIF_F_LRO;
+ netdev->vlan_features |= NETIF_F_LRO;
+ }
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ hw_features |= get_netdev_hw_features(netdev);
+#else
+ hw_features |= netdev->hw_features;
+#endif
+
+ hw_features |= netdev->features;
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_hw_features(netdev, hw_features);
+#else
+ netdev->hw_features = hw_features;
+#endif
+
+#ifdef IFF_UNICAST_FLT
+ netdev->priv_flags |= IFF_UNICAST_FLT;
+#endif
+
+#ifdef HAVE_ENCAPSULATION_CSUM
+ netdev->hw_enc_features |= dft_fts;
+ if (HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev->hwdev)) {
+ netdev->hw_enc_features |= cso_fts;
+#ifdef HAVE_ENCAPSULATION_TSO
+ netdev->hw_enc_features |= tso_fts | NETIF_F_TSO_ECN;
+#endif /* HAVE_ENCAPSULATION_TSO */
+ }
+#endif /* HAVE_ENCAPSULATION_CSUM */
+}
+
+static void init_intr_coal_param(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_intr_coal_info *info = NULL;
+ u16 i;
+
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ info = &nic_dev->intr_coalesce[i];
+
+ info->pending_limt = qp_pending_limit;
+ info->coalesce_timer_cfg = qp_coalesc_timer_cfg;
+
+ info->resend_timer_cfg = HINIC3_DEAULT_TXRX_MSIX_RESEND_TIMER_CFG;
+
+ info->pkt_rate_high = HINIC3_RX_RATE_HIGH;
+ info->rx_usecs_high = HINIC3_RX_COAL_TIME_HIGH;
+ info->rx_pending_limt_high = HINIC3_RX_PENDING_LIMIT_HIGH;
+
+ info->pkt_rate_low = HINIC3_RX_RATE_LOW;
+ info->rx_usecs_low = HINIC3_RX_COAL_TIME_LOW;
+ info->rx_pending_limt_low = HINIC3_RX_PENDING_LIMIT_LOW;
+ }
+}
+
+static int hinic3_init_intr_coalesce(struct hinic3_nic_dev *nic_dev)
+{
+ u64 size;
+
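+	/* remember whether the coalescing defaults were overridden via module parameters */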
+ if (qp_pending_limit != HINIC3_DEAULT_TXRX_MSIX_PENDING_LIMIT ||
+ qp_coalesc_timer_cfg != HINIC3_DEAULT_TXRX_MSIX_COALESC_TIMER_CFG)
+ nic_dev->intr_coal_set_flag = 1;
+ else
+ nic_dev->intr_coal_set_flag = 0;
+
+ size = sizeof(*nic_dev->intr_coalesce) * nic_dev->max_qps;
+ if (!size) {
+ nic_err(&nic_dev->pdev->dev, "Cannot allocate zero size intr coalesce\n");
+ return -EINVAL;
+ }
+ nic_dev->intr_coalesce = kzalloc(size, GFP_KERNEL);
+ if (!nic_dev->intr_coalesce) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc intr coalesce\n");
+ return -ENOMEM;
+ }
+
+ init_intr_coal_param(nic_dev);
+
+ if (test_bit(HINIC3_INTR_ADAPT, &nic_dev->flags))
+ nic_dev->adaptive_rx_coal = 1;
+ else
+ nic_dev->adaptive_rx_coal = 0;
+
+ return 0;
+}
+
+static void hinic3_free_intr_coalesce(struct hinic3_nic_dev *nic_dev)
+{
+ kfree(nic_dev->intr_coalesce);
+ nic_dev->intr_coalesce = NULL;
+}
+
+static int hinic3_alloc_txrxqs(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int err;
+
+ err = hinic3_alloc_txqs(netdev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc txqs\n");
+ return err;
+ }
+
+ err = hinic3_alloc_rxqs(netdev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc rxqs\n");
+ goto alloc_rxqs_err;
+ }
+
+ err = hinic3_init_intr_coalesce(nic_dev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to init_intr_coalesce\n");
+ goto init_intr_err;
+ }
+
+ return 0;
+
+init_intr_err:
+ hinic3_free_rxqs(netdev);
+
+alloc_rxqs_err:
+ hinic3_free_txqs(netdev);
+
+ return err;
+}
+
+static void hinic3_free_txrxqs(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_free_intr_coalesce(nic_dev);
+ hinic3_free_rxqs(nic_dev->netdev);
+ hinic3_free_txqs(nic_dev->netdev);
+}
+
+static void hinic3_sw_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_free_txrxqs(nic_dev);
+
+ hinic3_clean_mac_list_filter(nic_dev);
+
+ hinic3_del_mac(nic_dev->hwdev, nic_dev->netdev->dev_addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_clear_rss_config(nic_dev);
+ hinic3_dcb_deinit(nic_dev);
+}
+
+static void hinic3_netdev_mtu_init(struct net_device *netdev)
+{
+ /* MTU range: 384 - 9600 */
+#ifdef HAVE_NETDEVICE_MIN_MAX_MTU
+ netdev->min_mtu = HINIC3_MIN_MTU_SIZE;
+ netdev->max_mtu = HINIC3_MAX_JUMBO_FRAME_SIZE;
+#endif
+
+#ifdef HAVE_NETDEVICE_EXTENDED_MIN_MAX_MTU
+ netdev->extended->min_mtu = HINIC3_MIN_MTU_SIZE;
+ netdev->extended->max_mtu = HINIC3_MAX_JUMBO_FRAME_SIZE;
+#endif
+}
+
+static int hinic3_set_default_mac(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 mac_addr[ETH_ALEN];
+ int err = 0;
+
+ err = hinic3_get_default_mac(nic_dev->hwdev, mac_addr);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to get MAC address\n");
+ return err;
+ }
+
+ ether_addr_copy(netdev->dev_addr, mac_addr);
+
+ if (!is_valid_ether_addr(netdev->dev_addr)) {
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nic_err(&nic_dev->pdev->dev,
+ "Invalid MAC address %pM\n",
+ netdev->dev_addr);
+ return -EIO;
+ }
+
+ nic_info(&nic_dev->pdev->dev,
+ "Invalid MAC address %pM, using random\n",
+ netdev->dev_addr);
+ eth_hw_addr_random(netdev);
+ }
+
+ err = hinic3_set_mac(nic_dev->hwdev, netdev->dev_addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+ /* When this is VF driver, we must consider that PF has already set VF
+ * MAC, and we can't consider this condition is error status during
+ * driver probe procedure.
+ */
+	if (err && err != HINIC3_PF_SET_VF_ALREADY)
+		nic_err(&nic_dev->pdev->dev, "Failed to set default MAC\n");
+
+ if (err == HINIC3_PF_SET_VF_ALREADY)
+ return 0;
+
+ return err;
+}
+
+static void hinic3_outband_cfg_init(struct hinic3_nic_dev *nic_dev)
+{
+ u16 outband_default_vid = 0;
+ int err = 0;
+
+ if (!nic_dev->nic_cap.outband_vlan_cfg_en)
+ return;
+
+ err = hinic3_get_outband_vlan_cfg(nic_dev->hwdev, &outband_default_vid);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to get_outband_cfg, err: %d\n", err);
+ return;
+ }
+
+ nic_dev->outband_cfg.outband_default_vid = outband_default_vid;
+}
+
+static int hinic3_sw_init(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u64 nic_features;
+ int err = 0;
+
+ nic_features = hinic3_get_feature_cap(nic_dev->hwdev);
+ /* You can update the features supported by the driver according to the
+ * scenario here
+ */
+ nic_features &= NIC_DRV_DEFAULT_FEATURE;
+ hinic3_update_nic_feature(nic_dev->hwdev, nic_features);
+
+ err = hinic3_dcb_init(nic_dev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to init dcb\n");
+ return -EFAULT;
+ }
+
+ nic_dev->q_params.sq_depth = HINIC3_SQ_DEPTH;
+ nic_dev->q_params.rq_depth = HINIC3_RQ_DEPTH;
+
+ hinic3_try_to_enable_rss(nic_dev);
+
+ err = hinic3_set_default_mac(nic_dev);
+	if (err)
+		goto set_mac_err;
+
+ hinic3_netdev_mtu_init(netdev);
+
+ err = hinic3_alloc_txrxqs(nic_dev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc qps\n");
+ goto alloc_qps_err;
+ }
+
+ hinic3_outband_cfg_init(nic_dev);
+
+ return 0;
+
+alloc_qps_err:
+ hinic3_del_mac(nic_dev->hwdev, netdev->dev_addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+
+set_mac_err:
+ hinic3_clear_rss_config(nic_dev);
+
+ return err;
+}
+
+static void hinic3_assign_netdev_ops(struct hinic3_nic_dev *adapter)
+{
+ hinic3_set_netdev_ops(adapter);
+ if (!HINIC3_FUNC_IS_VF(adapter->hwdev))
+ hinic3_set_ethtool_ops(adapter->netdev);
+ else
+ hinic3vf_set_ethtool_ops(adapter->netdev);
+
+ adapter->netdev->watchdog_timeo = WATCHDOG_TIMEOUT * HZ;
+}
+
+static int hinic3_validate_parameters(struct hinic3_lld_dev *lld_dev)
+{
+ struct pci_dev *pdev = lld_dev->pdev;
+
+ /* If weight exceeds the queue depth, the queue resources will be
+ * exhausted, and increasing it has no effect.
+ */
+ if (!poll_weight || poll_weight > HINIC3_MAX_RX_QUEUE_DEPTH) {
+ nic_warn(&pdev->dev, "Module Parameter poll_weight is out of range: [1, %d], resetting to %d\n",
+ HINIC3_MAX_RX_QUEUE_DEPTH, DEFAULT_POLL_WEIGHT);
+ poll_weight = DEFAULT_POLL_WEIGHT;
+ }
+
+ /* check rx_buff value, default rx_buff is 2KB.
+ * Valid rx_buff include 2KB/4KB/8KB/16KB.
+ */
+ if (rx_buff != RX_BUFF_VALID_2KB && rx_buff != RX_BUFF_VALID_4KB &&
+ rx_buff != RX_BUFF_VALID_8KB && rx_buff != RX_BUFF_VALID_16KB) {
+		nic_warn(&pdev->dev, "Module Parameter rx_buff value %u is out of range, must be 2^n. Valid range is 2 - 16, resetting to %dKB\n",
+			 rx_buff, DEFAULT_RX_BUFF_LEN);
+ rx_buff = DEFAULT_RX_BUFF_LEN;
+ }
+
+ return 0;
+}
+
+static void decide_intr_cfg(struct hinic3_nic_dev *nic_dev)
+{
+ set_bit(HINIC3_INTR_ADAPT, &nic_dev->flags);
+}
+
+static void adaptive_configuration_init(struct hinic3_nic_dev *nic_dev)
+{
+ decide_intr_cfg(nic_dev);
+}
+
+static int set_interrupt_moder(struct hinic3_nic_dev *nic_dev, u16 q_id,
+ u8 coalesc_timer_cfg, u8 pending_limt)
+{
+ struct interrupt_info info;
+ int err;
+
+ memset(&info, 0, sizeof(info));
+
+ if (coalesc_timer_cfg == nic_dev->rxqs[q_id].last_coalesc_timer_cfg &&
+ pending_limt == nic_dev->rxqs[q_id].last_pending_limt)
+ return 0;
+
+ /* netdev not running or qp not in using,
+ * don't need to set coalesce to hw
+ */
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev) ||
+ q_id >= nic_dev->q_params.num_qps)
+ return 0;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.coalesc_timer_cfg = coalesc_timer_cfg;
+ info.pending_limt = pending_limt;
+ info.msix_index = nic_dev->q_params.irq_cfg[q_id].msix_entry_idx;
+ info.resend_timer_cfg =
+ nic_dev->intr_coalesce[q_id].resend_timer_cfg;
+
+ err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to modify moderation for Queue: %u\n", q_id);
+ } else {
+ nic_dev->rxqs[q_id].last_coalesc_timer_cfg = coalesc_timer_cfg;
+ nic_dev->rxqs[q_id].last_pending_limt = pending_limt;
+ }
+
+ return err;
+}
+
+static void calc_coal_para(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_intr_coal_info *q_coal, u64 rx_rate,
+ u8 *coalesc_timer_cfg, u8 *pending_limt)
+{
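+	/* below/above the rate thresholds use the fixed low/high settings; in between interpolate linearly */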
+ if (rx_rate < q_coal->pkt_rate_low) {
+ *coalesc_timer_cfg = q_coal->rx_usecs_low;
+ *pending_limt = q_coal->rx_pending_limt_low;
+ } else if (rx_rate > q_coal->pkt_rate_high) {
+ *coalesc_timer_cfg = q_coal->rx_usecs_high;
+ *pending_limt = q_coal->rx_pending_limt_high;
+ } else {
+ *coalesc_timer_cfg =
+ (u8)((rx_rate - q_coal->pkt_rate_low) *
+ (q_coal->rx_usecs_high - q_coal->rx_usecs_low) /
+ (q_coal->pkt_rate_high - q_coal->pkt_rate_low) +
+ q_coal->rx_usecs_low);
+
+ *pending_limt =
+ (u8)((rx_rate - q_coal->pkt_rate_low) *
+ (q_coal->rx_pending_limt_high - q_coal->rx_pending_limt_low) /
+ (q_coal->pkt_rate_high - q_coal->pkt_rate_low) +
+ q_coal->rx_pending_limt_low);
+ }
+}
+
+static void update_queue_coal(struct hinic3_nic_dev *nic_dev, u16 qid,
+ u64 rx_rate, u64 avg_pkt_size, u64 tx_rate)
+{
+ struct hinic3_intr_coal_info *q_coal = NULL;
+ u8 coalesc_timer_cfg, pending_limt;
+
+ q_coal = &nic_dev->intr_coalesce[qid];
+
+ if (rx_rate > HINIC3_RX_RATE_THRESH && avg_pkt_size > HINIC3_AVG_PKT_SMALL) {
+ calc_coal_para(nic_dev, q_coal, rx_rate, &coalesc_timer_cfg, &pending_limt);
+ } else {
+ coalesc_timer_cfg = HINIC3_LOWEST_LATENCY;
+ pending_limt = q_coal->rx_pending_limt_low;
+ }
+
+ set_interrupt_moder(nic_dev, qid, coalesc_timer_cfg, pending_limt);
+}
+
+void hinic3_auto_moderation_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_nic_dev *nic_dev = container_of(delay,
+ struct hinic3_nic_dev,
+ moderation_task);
+ unsigned long period = (unsigned long)(jiffies -
+ nic_dev->last_moder_jiffies);
+ u64 rx_packets, rx_bytes, rx_pkt_diff, rx_rate, avg_pkt_size;
+ u64 tx_packets, tx_bytes, tx_pkt_diff, tx_rate;
+ u16 qid;
+
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags))
+ return;
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->moderation_task,
+ HINIC3_MODERATONE_DELAY);
+
+ if (!nic_dev->adaptive_rx_coal || !period)
+ return;
+
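+	/* compute per-queue rx/tx packet rates and average rx packet size since the last sample, then adjust coalescing */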
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ rx_packets = nic_dev->rxqs[qid].rxq_stats.packets;
+ rx_bytes = nic_dev->rxqs[qid].rxq_stats.bytes;
+ tx_packets = nic_dev->txqs[qid].txq_stats.packets;
+ tx_bytes = nic_dev->txqs[qid].txq_stats.bytes;
+
+ rx_pkt_diff =
+ rx_packets - nic_dev->rxqs[qid].last_moder_packets;
+ avg_pkt_size = rx_pkt_diff ?
+ ((unsigned long)(rx_bytes -
+ nic_dev->rxqs[qid].last_moder_bytes)) /
+ rx_pkt_diff : 0;
+
+ rx_rate = rx_pkt_diff * HZ / period;
+ tx_pkt_diff =
+ tx_packets - nic_dev->txqs[qid].last_moder_packets;
+ tx_rate = tx_pkt_diff * HZ / period;
+
+ update_queue_coal(nic_dev, qid, rx_rate, avg_pkt_size,
+ tx_rate);
+
+ nic_dev->rxqs[qid].last_moder_packets = rx_packets;
+ nic_dev->rxqs[qid].last_moder_bytes = rx_bytes;
+ nic_dev->txqs[qid].last_moder_packets = tx_packets;
+ nic_dev->txqs[qid].last_moder_bytes = tx_bytes;
+ }
+
+ nic_dev->last_moder_jiffies = jiffies;
+}
+
+static void hinic3_periodic_work_handler(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_nic_dev *nic_dev = container_of(delay, struct hinic3_nic_dev, periodic_work);
+
+ if (test_and_clear_bit(EVENT_WORK_TX_TIMEOUT, &nic_dev->event_flag))
+ hinic3_fault_event_report(nic_dev->hwdev, HINIC3_FAULT_SRC_TX_TIMEOUT,
+ FAULT_LEVEL_SERIOUS_FLR);
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->periodic_work, HZ);
+}
+
+static void hinic3_vport_stats_work_handler(struct work_struct *work)
+{
+	int err;
+	struct hinic3_vport_stats vport_stats = {0};
+	struct delayed_work *delay = to_delayed_work(work);
+	struct hinic3_nic_dev *nic_dev = container_of(delay, struct hinic3_nic_dev,
+						      vport_stats_work);
+
+	err = hinic3_get_vport_stats(nic_dev->hwdev,
+				     hinic3_global_func_id(nic_dev->hwdev),
+				     &vport_stats);
+	if (err)
+		nic_err(&nic_dev->pdev->dev, "Failed to get dropped stats from fw\n");
+	else
+		nic_dev->vport_stats.rx_discard_vport = vport_stats.rx_discard_vport;
+
+	queue_delayed_work(nic_dev->workq, &nic_dev->vport_stats_work, HZ);
+}
+
+static void free_nic_dev_vram(struct hinic3_nic_dev *nic_dev)
+{
+ int is_use_vram = get_use_vram_flag();
+ if (is_use_vram != 0)
+ hi_vram_kfree((void *)nic_dev->nic_vram, nic_dev->nic_vram_name,
+ sizeof(struct hinic3_vram));
+ else
+ kfree(nic_dev->nic_vram);
+ nic_dev->nic_vram = NULL;
+}
+
+static void free_nic_dev(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_deinit_nic_prof_adapter(nic_dev);
+ destroy_workqueue(nic_dev->workq);
+ kfree(nic_dev->vlan_bitmap);
+ nic_dev->vlan_bitmap = NULL;
+ free_nic_dev_vram(nic_dev);
+}
+
+static int setup_nic_dev(struct net_device *netdev,
+ struct hinic3_lld_dev *lld_dev)
+{
+ struct pci_dev *pdev = lld_dev->pdev;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ char *netdev_name_fmt = NULL;
+ u32 page_num;
+ u16 func_id;
+ int ret;
+ int is_in_kexec = vram_get_kexec_flag();
+ int is_use_vram = get_use_vram_flag();
+
+ nic_dev = (struct hinic3_nic_dev *)netdev_priv(netdev);
+ nic_dev->netdev = netdev;
+ SET_NETDEV_DEV(netdev, &pdev->dev);
+ nic_dev->lld_dev = lld_dev;
+ nic_dev->hwdev = lld_dev->hwdev;
+ nic_dev->pdev = pdev;
+ nic_dev->poll_weight = (int)poll_weight;
+ nic_dev->msg_enable = DEFAULT_MSG_ENABLE;
+ nic_dev->lro_replenish_thld = lro_replenish_thld;
+ nic_dev->rx_buff_len = (u16)(rx_buff * CONVERT_UNIT);
+ nic_dev->dma_rx_buff_size = RX_BUFF_NUM_PER_PAGE * nic_dev->rx_buff_len;
+ page_num = nic_dev->dma_rx_buff_size / PAGE_SIZE;
+ nic_dev->page_order = page_num > 0 ? ilog2(page_num) : 0;
+ nic_dev->page_pool_enabled = page_pool_enabled;
+ nic_dev->outband_cfg.outband_default_vid = 0;
+
+	/* a value other than 0 indicates hot replace */
+ if (is_use_vram != 0) {
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ ret = snprintf(nic_dev->nic_vram_name,
+ VRAM_NAME_MAX_LEN,
+ "%s%u", VRAM_NIC_VRAM, func_id);
+ if (ret < 0) {
+ nic_err(&pdev->dev, "NIC vram name snprintf failed, ret:%d.\n",
+ ret);
+ return -EINVAL;
+ }
+
+ nic_dev->nic_vram = (struct hinic3_vram *)hi_vram_kalloc(nic_dev->nic_vram_name,
+ sizeof(struct hinic3_vram));
+ if (!nic_dev->nic_vram) {
+ nic_err(&pdev->dev, "Failed to allocate nic vram\n");
+ return -ENOMEM;
+ }
+
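+		/* on a normal start save the current MTU into vram; after kexec (hot replace) restore the saved MTU */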
+ if (is_in_kexec == 0)
+ nic_dev->nic_vram->vram_mtu = netdev->mtu;
+ else
+ netdev->mtu = nic_dev->nic_vram->vram_mtu;
+ } else {
+ nic_dev->nic_vram = kzalloc(sizeof(struct hinic3_vram),
+ GFP_KERNEL);
+ if (!nic_dev->nic_vram) {
+ nic_err(&pdev->dev, "Failed to allocate nic vram\n");
+ return -ENOMEM;
+ }
+ nic_dev->nic_vram->vram_mtu = netdev->mtu;
+ }
+
+ mutex_init(&nic_dev->nic_mutex);
+
+ nic_dev->vlan_bitmap = kzalloc(VLAN_BITMAP_SIZE(nic_dev), GFP_KERNEL);
+ if (!nic_dev->vlan_bitmap) {
+ nic_err(&pdev->dev, "Failed to allocate vlan bitmap\n");
+ ret = -ENOMEM;
+ goto vlan_bitmap_error;
+ }
+
+ nic_dev->workq = create_singlethread_workqueue(HINIC3_NIC_DEV_WQ_NAME);
+ if (!nic_dev->workq) {
+ nic_err(&pdev->dev, "Failed to initialize nic workqueue\n");
+ ret = -ENOMEM;
+ goto create_workq_error;
+ }
+
+ INIT_DELAYED_WORK(&nic_dev->periodic_work,
+ hinic3_periodic_work_handler);
+ INIT_DELAYED_WORK(&nic_dev->rxq_check_work,
+ hinic3_rxq_check_work_handler);
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ INIT_DELAYED_WORK(&nic_dev->vport_stats_work,
+ hinic3_vport_stats_work_handler);
+
+ INIT_LIST_HEAD(&nic_dev->uc_filter_list);
+ INIT_LIST_HEAD(&nic_dev->mc_filter_list);
+ INIT_WORK(&nic_dev->rx_mode_work, hinic3_set_rx_mode_work);
+
+ INIT_LIST_HEAD(&nic_dev->rx_flow_rule.rules);
+ INIT_LIST_HEAD(&nic_dev->tcam.tcam_list);
+ INIT_LIST_HEAD(&nic_dev->tcam.tcam_dynamic_info.tcam_dynamic_list);
+
+ hinic3_init_nic_prof_adapter(nic_dev);
+
+ netdev_name_fmt = hinic3_get_dft_netdev_name_fmt(nic_dev);
+ if (netdev_name_fmt) {
+ ret = strscpy(netdev->name, netdev_name_fmt, IFNAMSIZ);
+ if (ret < 0)
+ goto get_netdev_name_error;
+ }
+
+ return 0;
+
+get_netdev_name_error:
+ hinic3_deinit_nic_prof_adapter(nic_dev);
+ destroy_workqueue(nic_dev->workq);
+create_workq_error:
+ kfree(nic_dev->vlan_bitmap);
+ nic_dev->vlan_bitmap = NULL;
+vlan_bitmap_error:
+ free_nic_dev_vram(nic_dev);
+ return ret;
+}
+
+static int hinic3_set_default_hw_feature(struct hinic3_nic_dev *nic_dev)
+{
+ int err;
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ hinic3_dcb_reset_hw_config(nic_dev);
+
+ if (set_link_status_follow < HINIC3_LINK_FOLLOW_STATUS_MAX) {
+ err = hinic3_set_link_status_follow(nic_dev->hwdev,
+ set_link_status_follow);
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ nic_warn(&nic_dev->pdev->dev,
+ "Current version of firmware doesn't support to set link status follow port status\n");
+ }
+ }
+
+ err = hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to set nic features\n");
+ return err;
+ }
+
+ /* enable all hw features in netdev->features */
+ err = hinic3_set_hw_features(nic_dev);
+ if (err) {
+ hinic3_update_nic_feature(nic_dev->hwdev, 0);
+ hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+ return err;
+ }
+
+ if (HINIC3_SUPPORT_RXQ_RECOVERY(nic_dev->hwdev))
+ set_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags);
+
+ return 0;
+}
+
+static void hinic3_bond_init(struct hinic3_nic_dev *nic_dev)
+{
+	u32 bond_id = HINIC3_INVALID_BOND_ID;
+	int err;
+
+	err = hinic3_create_bond(nic_dev->hwdev, &bond_id);
+	if (err != 0)
+		goto bond_init_failed;
+
+	/* an unchanged bond id means this pf is not the bond active pf; no log is generated */
+	if (bond_id == HINIC3_INVALID_BOND_ID)
+		return;
+
+	err = hinic3_open_close_bond(nic_dev->hwdev, true);
+	if (err != 0) {
+		hinic3_delete_bond(nic_dev->hwdev);
+		goto bond_init_failed;
+	}
+
+	nic_info(&nic_dev->pdev->dev, "Bond %u init success\n", bond_id);
+ return;
+
+bond_init_failed:
+ nic_err(&nic_dev->pdev->dev, "Bond init failed\n");
+}
+
+static int nic_probe(struct hinic3_lld_dev *lld_dev, void **uld_dev,
+ char *uld_dev_name)
+{
+ struct pci_dev *pdev = lld_dev->pdev;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct net_device *netdev = NULL;
+ u16 max_qps, glb_func_id;
+ int err;
+
+ if (!hinic3_support_nic(lld_dev->hwdev, NULL)) {
+ nic_info(&pdev->dev, "Hw don't support nic\n");
+ return 0;
+ }
+
+ nic_info(&pdev->dev, "NIC service probe begin\n");
+
+ err = hinic3_validate_parameters(lld_dev);
+ if (err) {
+ err = -EINVAL;
+ goto err_out;
+ }
+
+ glb_func_id = hinic3_global_func_id(lld_dev->hwdev);
+ err = hinic3_func_reset(lld_dev->hwdev, glb_func_id, HINIC3_NIC_RES,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(&pdev->dev, "Failed to reset function\n");
+ goto err_out;
+ }
+
+ err = hinic3_get_dev_cap(lld_dev->hwdev);
+ if (err != 0) {
+ nic_err(&pdev->dev, "Failed to get dev cap\n");
+ goto err_out;
+ }
+
+ max_qps = hinic3_func_max_nic_qnum(lld_dev->hwdev);
+ netdev = alloc_etherdev_mq(sizeof(*nic_dev), max_qps);
+ if (!netdev) {
+ nic_err(&pdev->dev, "Failed to allocate ETH device\n");
+ err = -ENOMEM;
+ goto err_out;
+ }
+
+ nic_dev = (struct hinic3_nic_dev *)netdev_priv(netdev);
+ err = setup_nic_dev(netdev, lld_dev);
+ if (err)
+ goto setup_dev_err;
+
+ adaptive_configuration_init(nic_dev);
+
+ /* get nic cap from hw */
+ hinic3_support_nic(lld_dev->hwdev, &nic_dev->nic_cap);
+
+ err = hinic3_init_nic_hwdev(nic_dev->hwdev, pdev, &pdev->dev,
+ nic_dev->rx_buff_len);
+ if (err) {
+ nic_err(&pdev->dev, "Failed to init nic hwdev\n");
+ goto init_nic_hwdev_err;
+ }
+
+ err = hinic3_sw_init(nic_dev);
+ if (err)
+ goto sw_init_err;
+
+ hinic3_assign_netdev_ops(nic_dev);
+ netdev_feature_init(netdev);
+#ifdef HAVE_UDP_TUNNEL_NIC_INFO
+ netdev->udp_tunnel_nic_info = &hinic3_udp_tunnels;
+#endif /* HAVE_UDP_TUNNEL_NIC_INFO */
+ err = hinic3_set_default_hw_feature(nic_dev);
+ if (err)
+ goto set_features_err;
+
+	if (hinic3_get_bond_create_mode(lld_dev->hwdev) != 0)
+		hinic3_bond_init(nic_dev);
+
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+ hinic3_register_notifier(nic_dev);
+#endif
+
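+	/* do not register a netdev for the bifurcated resource PF */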
+ if (pdev->subsystem_device != BIFUR_RESOURCE_PF_SSID) {
+ err = register_netdev(netdev);
+		if (err) {
+			nic_err(&pdev->dev, "Failed to register netdev\n");
+			goto netdev_err;
+		}
+ }
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->periodic_work, HZ);
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ queue_delayed_work(nic_dev->workq,
+ &nic_dev->vport_stats_work, HZ);
+
+ netif_carrier_off(netdev);
+
+ *uld_dev = nic_dev;
+ nicif_info(nic_dev, probe, netdev, "Register netdev succeed\n");
+ nic_info(&pdev->dev, "NIC service probed\n");
+
+ return 0;
+
+netdev_err:
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+ hinic3_unregister_notifier(nic_dev);
+#endif
+ hinic3_update_nic_feature(nic_dev->hwdev, 0);
+ hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+
+set_features_err:
+ hinic3_sw_deinit(nic_dev);
+
+sw_init_err:
+ hinic3_free_nic_hwdev(nic_dev->hwdev);
+
+init_nic_hwdev_err:
+ free_nic_dev(nic_dev);
+setup_dev_err:
+ free_netdev(netdev);
+
+err_out:
+ nic_err(&pdev->dev, "NIC service probe failed\n");
+
+ return err;
+}
+
+static void hinic3_bond_deinit(struct hinic3_nic_dev *nic_dev)
+{
+	int ret;
+
+	ret = hinic3_open_close_bond(nic_dev->hwdev, false);
+	if (ret != 0)
+		goto bond_deinit_failed;
+
+	ret = hinic3_delete_bond(nic_dev->hwdev);
+	if (ret != 0)
+		goto bond_deinit_failed;
+
+ nic_info(&nic_dev->pdev->dev, "Bond deinit success\n");
+ return;
+
+bond_deinit_failed:
+ nic_err(&nic_dev->pdev->dev, "Bond deinit failed\n");
+}
+
+static void nic_remove(struct hinic3_lld_dev *lld_dev, void *adapter)
+{
+ struct hinic3_nic_dev *nic_dev = adapter;
+ struct net_device *netdev = NULL;
+
+ if (!nic_dev || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return;
+
+ nic_info(&lld_dev->pdev->dev, "NIC service remove begin\n");
+
+ netdev = nic_dev->netdev;
+
+	if (lld_dev->pdev->subsystem_device != BIFUR_RESOURCE_PF_SSID)
+		unregister_netdev(netdev);
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+ hinic3_unregister_notifier(nic_dev);
+#endif
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ cancel_delayed_work_sync(&nic_dev->vport_stats_work);
+
+ cancel_delayed_work_sync(&nic_dev->periodic_work);
+ cancel_delayed_work_sync(&nic_dev->rxq_check_work);
+ cancel_work_sync(&nic_dev->rx_mode_work);
+ destroy_workqueue(nic_dev->workq);
+
+ hinic3_flush_rx_flow_rule(nic_dev);
+
+	if (hinic3_get_bond_create_mode(lld_dev->hwdev) != 0)
+		hinic3_bond_deinit(nic_dev);
+
+ hinic3_update_nic_feature(nic_dev->hwdev, 0);
+ hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+
+ hinic3_sw_deinit(nic_dev);
+
+ hinic3_free_nic_hwdev(nic_dev->hwdev);
+
+ hinic3_deinit_nic_prof_adapter(nic_dev);
+ kfree(nic_dev->vlan_bitmap);
+ nic_dev->vlan_bitmap = NULL;
+
+ free_netdev(netdev);
+
+ nic_info(&lld_dev->pdev->dev, "NIC service removed\n");
+}
+
+static void sriov_state_change(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_sriov_state_info *info)
+{
+ if (!info->enable)
+ hinic3_clear_vfs_info(nic_dev->hwdev);
+}
+
+static void hinic3_port_module_event_handler(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_event_info *event)
+{
+ const char *g_hinic3_module_link_err[LINK_ERR_NUM] = { "Unrecognized module" };
+ struct hinic3_port_module_event *module_event = (void *)event->event_data;
+ enum port_module_event_type type = module_event->type;
+ enum link_err_type err_type = module_event->err_type;
+
+ switch (type) {
+ case HINIC3_PORT_MODULE_CABLE_PLUGGED:
+ case HINIC3_PORT_MODULE_CABLE_UNPLUGGED:
+ nicif_info(nic_dev, link, nic_dev->netdev,
+ "Port module event: Cable %s\n",
+ type == HINIC3_PORT_MODULE_CABLE_PLUGGED ?
+ "plugged" : "unplugged");
+ break;
+ case HINIC3_PORT_MODULE_LINK_ERR:
+ if (err_type >= LINK_ERR_NUM) {
+ nicif_info(nic_dev, link, nic_dev->netdev,
+ "Link failed, Unknown error type: 0x%x\n",
+ err_type);
+ } else {
+ nicif_info(nic_dev, link, nic_dev->netdev,
+ "Link failed, error type: 0x%x: %s\n",
+ err_type,
+ g_hinic3_module_link_err[err_type]);
+ }
+ break;
+ default:
+ nicif_err(nic_dev, link, nic_dev->netdev,
+ "Unknown port module type %d\n", type);
+ break;
+ }
+}
+
+bool hinic3_need_proc_link_event(struct hinic3_lld_dev *lld_dev)
+{
+ int ret = 0;
+ u16 func_id;
+ u8 roce_enable = false;
+ bool is_slave_func = false;
+ struct hinic3_hw_bond_infos hw_bond_infos = {0};
+
+ if (!lld_dev)
+ return false;
+
+	/* non-slave functions must handle the link down event */
+ ret = hinic3_is_slave_func(lld_dev->hwdev, &is_slave_func);
+ if (ret != 0) {
+		nic_err(&lld_dev->pdev->dev, "NIC failed to get slave function state, err: %d\n", ret);
+ return true;
+ }
+
+ if (!is_slave_func)
+ return true;
+
+	/* if the vroce feature is not enabled, the link down event must be handled */
+ func_id = hinic3_global_func_id(lld_dev->hwdev);
+ ret = hinic3_get_func_vroce_enable(lld_dev->hwdev, func_id,
+ &roce_enable);
+ if (ret != 0)
+ return true;
+
+ if (!roce_enable)
+ return true;
+
+ /* If no bond has been created, the link down event needs to be handled */
+ hw_bond_infos.bond_id = HINIC_OVS_BOND_DEFAULT_ID;
+
+ ret = hinic3_get_hw_bond_infos(lld_dev->hwdev, &hw_bond_infos,
+ HINIC3_CHANNEL_COMM);
+ if (ret != 0) {
+ pr_err("[ROCE, ERR] Get chipf bond info failed (%d)\n", ret);
+ return true;
+ }
+
+ if (!hw_bond_infos.valid)
+ return true;
+
+ return false;
+}
+
+bool hinic3_need_proc_bond_event(struct hinic3_lld_dev *lld_dev)
+{
+ return !hinic3_need_proc_link_event(lld_dev);
+}
+
+static void hinic_porc_bond_state_change(struct hinic3_lld_dev *lld_dev,
+ void *adapter,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_nic_dev *nic_dev = adapter;
+
+ if (!nic_dev || !event || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return;
+
+ switch (HINIC3_SRV_EVENT_TYPE(event->service, event->type)) {
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_BOND_DOWN):
+ if (!hinic3_need_proc_bond_event(lld_dev)) {
+ nic_info(&lld_dev->pdev->dev, "NIC don't need proc bond event\n");
+ return;
+ }
+ nic_info(&lld_dev->pdev->dev, "NIC proc bond down\n");
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_BOND_UP):
+ if (!hinic3_need_proc_bond_event(lld_dev)) {
+ nic_info(&lld_dev->pdev->dev, "NIC don't need proc bond event\n");
+ return;
+ }
+ nic_info(&lld_dev->pdev->dev, "NIC proc bond up\n");
+ hinic3_link_status_change(nic_dev, true);
+ break;
+ default:
+ break;
+ }
+}
+
+static void hinic3_outband_cfg_event_handler(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_outband_cfg_info *info)
+{
+ int err = 0;
+ if (!nic_dev || !info || !hinic3_support_nic(nic_dev->hwdev, NULL)) {
+ pr_err("Outband cfg event invalid param\n");
+ return;
+ }
+
+ if (hinic3_func_type(nic_dev->hwdev) != TYPE_VF &&
+ info->func_id >= NIC_MAX_PF_NUM) {
+ err = hinic3_notify_vf_outband_cfg(nic_dev->hwdev,
+ info->func_id,
+ info->outband_default_vid);
+ if (err)
+ nic_err(&nic_dev->pdev->dev, "Outband cfg event notify vf err: %d, "
+ "func_id: 0x%x, vid: 0x%x\n",
+ err, info->func_id, info->outband_default_vid);
+ return;
+ }
+
+ nic_info(&nic_dev->pdev->dev,
+ "Change outband default vid from %u to %u\n",
+ nic_dev->outband_cfg.outband_default_vid,
+ info->outband_default_vid);
+
+ nic_dev->outband_cfg.outband_default_vid = info->outband_default_vid;
+
+ return;
+}
+
+static void nic_event(struct hinic3_lld_dev *lld_dev, void *adapter,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_nic_dev *nic_dev = adapter;
+ struct hinic3_fault_event *fault = NULL;
+
+ if (!nic_dev || !event || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return;
+
+ switch (HINIC3_SRV_EVENT_TYPE(event->service, event->type)) {
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_LINK_DOWN):
+ if (!hinic3_need_proc_link_event(lld_dev)) {
+ nic_info(&lld_dev->pdev->dev, "NIC don't need proc link event\n");
+ return;
+ }
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_LINK_UP):
+ hinic3_link_status_change(nic_dev, true);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_BOND_DOWN):
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_BOND_UP):
+ hinic_porc_bond_state_change(lld_dev, adapter, event);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_PORT_MODULE_EVENT):
+ hinic3_port_module_event_handler(nic_dev, event);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_OUTBAND_CFG):
+ hinic3_outband_cfg_event_handler(nic_dev, (void *)event->event_data);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_SRIOV_STATE_CHANGE):
+ sriov_state_change(nic_dev, (void *)event->event_data);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_FAULT):
+ fault = (void *)event->event_data;
+ if (fault->fault_level == FAULT_LEVEL_SERIOUS_FLR &&
+ fault->event.chip.func_id == hinic3_global_func_id(lld_dev->hwdev))
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_PCIE_LINK_DOWN):
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_HEART_LOST):
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_MGMT_WATCHDOG):
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ default:
+ break;
+ }
+}
+
+struct net_device *hinic3_get_netdev_by_lld(struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+
+ if (!lld_dev || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return NULL;
+
+ nic_dev = hinic3_get_uld_dev_unsafe(lld_dev, SERVICE_T_NIC);
+ if (!nic_dev) {
+ nic_err(&lld_dev->pdev->dev,
+ "There's no net device attached on the pci device");
+ return NULL;
+ }
+
+ return nic_dev->netdev;
+}
+EXPORT_SYMBOL(hinic3_get_netdev_by_lld);
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_netdev(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+
+ if (!netdev || !hinic3_is_netdev_ops_match(netdev))
+ return NULL;
+
+ nic_dev = netdev_priv(netdev);
+ if (!nic_dev)
+ return NULL;
+
+ return nic_dev->lld_dev;
+}
+EXPORT_SYMBOL(hinic3_get_lld_dev_by_netdev);
+
+struct hinic3_uld_info g_nic_uld_info = {
+ .probe = nic_probe,
+ .remove = nic_remove,
+ .suspend = NULL,
+ .resume = NULL,
+ .event = nic_event,
+ .ioctl = nic_ioctl,
+};
+
+struct hinic3_uld_info *get_nic_uld_info(void)
+{
+ return &g_nic_uld_info;
+}
+
+#define HINIC3_NIC_DRV_DESC "Intelligent Network Interface Card Driver"
+
+static __init int hinic3_nic_lld_init(void)
+{
+ int err;
+
+ pr_info("%s - version %s\n", HINIC3_NIC_DRV_DESC,
+ HINIC3_NIC_DRV_VERSION);
+
+ err = hinic3_lld_init();
+ if (err) {
+ pr_err("SDK init failed.\n");
+ return err;
+ }
+
+ err = hinic3_register_uld(SERVICE_T_NIC, &g_nic_uld_info);
+ if (err) {
+ pr_err("Register hinic3 uld failed\n");
+ hinic3_lld_exit();
+ return err;
+ }
+
+ err = hinic3_module_pre_init();
+ if (err) {
+ pr_err("Init custom failed\n");
+ hinic3_unregister_uld(SERVICE_T_NIC);
+ hinic3_lld_exit();
+ return err;
+ }
+
+ return 0;
+}
+
+static __exit void hinic3_nic_lld_exit(void)
+{
+ hinic3_unregister_uld(SERVICE_T_NIC);
+
+ hinic3_module_post_exit();
+
+ hinic3_lld_exit();
+}
+
+module_init(hinic3_nic_lld_init);
+module_exit(hinic3_nic_lld_exit);
+
+MODULE_AUTHOR("Huawei Technologies CO., Ltd");
+MODULE_DESCRIPTION(HINIC3_NIC_DRV_DESC);
+MODULE_VERSION(HINIC3_NIC_DRV_VERSION);
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h b/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
new file mode 100644
index 0000000..522518d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
@@ -0,0 +1,1298 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef NIC_MPU_CMD_DEFS_H
+#define NIC_MPU_CMD_DEFS_H
+
+#include "nic_cfg_comm.h"
+#include "mpu_cmd_base_defs.h"
+
+#ifndef ETH_ALEN
+#define ETH_ALEN 6
+#endif
+
+#define HINIC3_CMD_OP_SET 1
+#define HINIC3_CMD_OP_GET 0
+
+#define HINIC3_CMD_OP_ADD 1
+#define HINIC3_CMD_OP_DEL 0
+
+#define NIC_TCAM_BLOCK_LARGE_NUM 256
+#define NIC_TCAM_BLOCK_LARGE_SIZE 16
+
+#ifndef BIT
+#define BIT(n) (1UL << (n))
+#endif
+
+enum nic_feature_cap {
+ NIC_F_CSUM = BIT(0),
+ NIC_F_SCTP_CRC = BIT(1),
+ NIC_F_TSO = BIT(2),
+ NIC_F_LRO = BIT(3),
+ NIC_F_UFO = BIT(4),
+ NIC_F_RSS = BIT(5),
+ NIC_F_RX_VLAN_FILTER = BIT(6),
+ NIC_F_RX_VLAN_STRIP = BIT(7),
+ NIC_F_TX_VLAN_INSERT = BIT(8),
+ NIC_F_VXLAN_OFFLOAD = BIT(9),
+ NIC_F_IPSEC_OFFLOAD = BIT(10),
+ NIC_F_FDIR = BIT(11),
+ NIC_F_PROMISC = BIT(12),
+ NIC_F_ALLMULTI = BIT(13),
+ NIC_F_XSFP_REPORT = BIT(14),
+ NIC_F_VF_MAC = BIT(15),
+ NIC_F_RATE_LIMIT = BIT(16),
+ NIC_F_RXQ_RECOVERY = BIT(17),
+};
+
+#define NIC_F_ALL_MASK 0x3FFFF /* enable all features */
+
+struct hinic3_mgmt_msg_head {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+};
+
+#define NIC_MAX_FEATURE_QWORD 4
+struct hinic3_cmd_feature_nego {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: set, 0: get */
+ u8 rsvd;
+ u64 s_feature[NIC_MAX_FEATURE_QWORD];
+};
+
+struct hinic3_port_mac_set {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 mac[ETH_ALEN];
+};
+
+struct hinic3_port_mac_update {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 old_mac[ETH_ALEN];
+ u16 rsvd2;
+ u8 new_mac[ETH_ALEN];
+};
+
+struct hinic3_vport_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+struct hinic3_port_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+#define HINIC3_SET_PORT_CAR_PROFILE 0
+#define HINIC3_SET_PORT_CAR_STATE 1
+
+struct hinic3_port_car_info {
+ u32 cir; /* unit: kbps, range:[1,400*1000*1000], i.e. 1Kbps~400Gbps(400M*kbps) */
+ u32 xir; /* unit: kbps, range:[1,400*1000*1000], i.e. 1Kbps~400Gbps(400M*kbps) */
+ u32 cbs; /* unit: Byte, range:[1,320*1000*1000], i.e. 1byte~2560Mbit */
+ u32 xbs; /* unit: Byte, range:[1,320*1000*1000], i.e. 1byte~2560Mbit */
+};
+
+struct hinic3_cmd_set_port_car {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode; /* 0--set car profile, 1--set car state */
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd;
+
+ struct hinic3_port_car_info car;
+};
+
+struct hinic3_cmd_clear_qp_resource {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_cmd_cache_out_qp_resource {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_port_stats_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_vport_stats {
+ u64 tx_unicast_pkts_vport;
+ u64 tx_unicast_bytes_vport;
+ u64 tx_multicast_pkts_vport;
+ u64 tx_multicast_bytes_vport;
+ u64 tx_broadcast_pkts_vport;
+ u64 tx_broadcast_bytes_vport;
+
+ u64 rx_unicast_pkts_vport;
+ u64 rx_unicast_bytes_vport;
+ u64 rx_multicast_pkts_vport;
+ u64 rx_multicast_bytes_vport;
+ u64 rx_broadcast_pkts_vport;
+ u64 rx_broadcast_bytes_vport;
+
+ u64 tx_discard_vport;
+ u64 rx_discard_vport;
+ u64 tx_err_vport;
+ u64 rx_err_vport;
+};
+
+struct hinic3_phy_fpga_port_stats {
+ u64 mac_rx_total_octs_port;
+ u64 mac_tx_total_octs_port;
+ u64 mac_rx_under_frame_pkts_port;
+ u64 mac_rx_frag_pkts_port;
+ u64 mac_rx_64_oct_pkts_port;
+ u64 mac_rx_127_oct_pkts_port;
+ u64 mac_rx_255_oct_pkts_port;
+ u64 mac_rx_511_oct_pkts_port;
+ u64 mac_rx_1023_oct_pkts_port;
+ u64 mac_rx_max_oct_pkts_port;
+ u64 mac_rx_over_oct_pkts_port;
+ u64 mac_tx_64_oct_pkts_port;
+ u64 mac_tx_127_oct_pkts_port;
+ u64 mac_tx_255_oct_pkts_port;
+ u64 mac_tx_511_oct_pkts_port;
+ u64 mac_tx_1023_oct_pkts_port;
+ u64 mac_tx_max_oct_pkts_port;
+ u64 mac_tx_over_oct_pkts_port;
+ u64 mac_rx_good_pkts_port;
+ u64 mac_rx_crc_error_pkts_port;
+ u64 mac_rx_broadcast_ok_port;
+ u64 mac_rx_multicast_ok_port;
+ u64 mac_rx_mac_frame_ok_port;
+ u64 mac_rx_length_err_pkts_port;
+ u64 mac_rx_vlan_pkts_port;
+ u64 mac_rx_pause_pkts_port;
+ u64 mac_rx_unknown_mac_frame_port;
+ u64 mac_tx_good_pkts_port;
+ u64 mac_tx_broadcast_ok_port;
+ u64 mac_tx_multicast_ok_port;
+ u64 mac_tx_underrun_pkts_port;
+ u64 mac_tx_mac_frame_ok_port;
+ u64 mac_tx_vlan_pkts_port;
+ u64 mac_tx_pause_pkts_port;
+};
+
+struct hinic3_port_stats {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ struct hinic3_phy_fpga_port_stats stats;
+};
+
+struct hinic3_cmd_vport_stats {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u32 stats_size;
+ u32 rsvd1;
+ struct hinic3_vport_stats stats;
+ u64 rsvd2[6];
+};
+
+struct hinic3_cmd_qpn {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 base_qpn;
+};
+
+enum hinic3_func_tbl_cfg_bitmap {
+ FUNC_CFG_INIT,
+ FUNC_CFG_RX_BUF_SIZE,
+ FUNC_CFG_MTU,
+};
+
+struct hinic3_func_tbl_cfg {
+ u16 rx_wqe_buf_size;
+ u16 mtu;
+ u32 rsvd[9];
+};
+
+struct hinic3_cmd_set_func_tbl {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+
+ u32 cfg_bitmap;
+ struct hinic3_func_tbl_cfg tbl_cfg;
+};
+
+struct hinic3_cmd_cons_idx_attr {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_idx;
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u32 rsvd;
+ u64 ci_addr;
+};
+
+union sm_tbl_args {
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } mac_table_arg;
+ struct {
+ u32 er_id;
+ u32 vlan_id;
+ } vlan_elb_table_arg;
+ struct {
+ u32 func_id;
+ } vlan_filter_arg;
+ struct {
+ u32 mc_id;
+ } mc_elb_arg;
+ struct {
+ u32 func_id;
+ } func_tbl_arg;
+ struct {
+ u32 port_id;
+ } port_tbl_arg;
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } fdir_io_table_arg;
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } flexq_table_arg;
+ u32 args[4];
+};
+
+#define DFX_SM_TBL_BUF_MAX (768)
+
+struct nic_cmd_dfx_sm_table {
+ struct hinic3_mgmt_msg_head msg_head;
+ u32 tbl_type;
+ union sm_tbl_args args;
+ u8 tbl_buf[DFX_SM_TBL_BUF_MAX];
+};
+
+struct hinic3_cmd_vlan_offload {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 vlan_offload;
+ u8 rsvd1[5];
+};
+
+/* ucode capture cfg info */
+struct nic_cmd_capture_info {
+ struct hinic3_mgmt_msg_head msg_head;
+ u32 op_type;
+ u32 func_port;
+ u32 is_en_trx;
+ u32 offset_cos;
+ u32 data_vlan;
+};
+
+struct hinic3_cmd_lro_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 lro_ipv4_en;
+ u8 lro_ipv6_en;
+ u8 lro_max_pkt_len; /* unit is 1K */
+ u8 resv2[13];
+};
+
+struct hinic3_cmd_lro_timer {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 opcode; /* 1: set timer value, 0: get timer value */
+ u8 rsvd1;
+ u16 rsvd2;
+ u32 timer;
+};
+
+struct hinic3_cmd_local_lro_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 0: get state, 1: set state */
+ u8 state; /* 0: disable, 1: enable */
+};
+
+struct hinic3_cmd_vf_vlan_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 vlan_id;
+ u8 qos;
+ u8 rsvd2[5];
+};
+
+struct hinic3_cmd_spoofchk_set {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 state;
+ u8 rsvd1;
+};
+
+struct hinic3_cmd_tx_rate_cfg {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 min_rate;
+ u32 max_rate;
+ u8 rsvd2[8];
+};
+
+struct hinic3_cmd_port_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+ u16 rsvd2;
+ u32 rsvd3[4];
+};
+
+struct hinic3_cmd_register_vf {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 op_register; /* 0 - unregister, 1 - register */
+ u8 rsvd1[3];
+ u32 support_extra_feature;
+ u8 rsvd2[32];
+};
+
+struct hinic3_cmd_link_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 state;
+ u16 rsvd1;
+};
+
+struct hinic3_cmd_vlan_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 vlan_id;
+ u16 rsvd2;
+};
+
+/* set vlan filter */
+struct hinic3_cmd_set_vlan_filter {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 resvd[2];
+ u32 vlan_filter_ctrl; /* bit0:vlan filter en; bit1:broadcast_filter_en */
+};
+
+struct hinic3_cmd_link_ksettings_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+
+ u32 valid_bitmap;
+ u8 speed; /* enum nic_speed_level */
+ u8 autoneg; /* 0 - off, 1 - on */
+ u8 fec; /* 0 - RSFEC, 1 - BASEFEC, 2 - NOFEC */
+ u8 rsvd2[21]; /* reserved for duplex, port, etc. */
+};
+
+struct mpu_lt_info {
+ u8 node;
+ u8 inst;
+ u8 entry_size;
+ u8 rsvd;
+ u32 lt_index;
+ u32 offset;
+ u32 len;
+};
+
+struct nic_mpu_lt_opera {
+ struct hinic3_mgmt_msg_head msg_head;
+ struct mpu_lt_info net_lt_cmd;
+ u8 data[100];
+};
+
+struct hinic3_force_pkt_drop {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port;
+ u8 rsvd1[3];
+};
+
+struct hinic3_rx_mode_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 rx_mode;
+};
+
+/* rss */
+struct hinic3_rss_context_table {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 context;
+};
+
+struct hinic3_cmd_rss_engine_type {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 hash_engine;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_hash_key {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 key[NIC_RSS_KEY_SIZE];
+};
+
+struct hinic3_rss_indir_table {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 indir[NIC_RSS_INDIR_SIZE];
+};
+
+#define NIC_RSS_CMD_TEMP_ALLOC 0x01
+#define NIC_RSS_CMD_TEMP_FREE 0x02
+
+struct hinic3_rss_template_mgmt {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 cmd;
+ u8 template_id;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 rss_en;
+ u8 rq_priority_number;
+ u8 prio_tc[NIC_DCB_COS_MAX];
+ u16 num_qps;
+ u16 rsvd1;
+};
+
+struct hinic3_dcb_state {
+ u8 dcb_on;
+ u8 default_cos;
+ u8 trust;
+ u8 rsvd1;
+ u8 pcp2cos[NIC_DCB_UP_MAX];
+ u8 dscp2cos[64];
+ u32 rsvd2[7];
+};
+
+struct hinic3_cmd_vf_dcb_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ struct hinic3_dcb_state state;
+};
+
+struct hinic3_up_ets_cfg { /* delete */
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+
+ u8 cos_tc[NIC_DCB_COS_MAX];
+ u8 tc_bw[NIC_DCB_TC_MAX];
+ u8 cos_prio[NIC_DCB_COS_MAX];
+ u8 cos_bw[NIC_DCB_COS_MAX];
+ u8 tc_prio[NIC_DCB_TC_MAX];
+};
+
+#define CMD_QOS_ETS_COS_TC BIT(0)
+#define CMD_QOS_ETS_TC_BW BIT(1)
+#define CMD_QOS_ETS_COS_PRIO BIT(2)
+#define CMD_QOS_ETS_COS_BW BIT(3)
+#define CMD_QOS_ETS_TC_PRIO BIT(4)
+struct hinic3_cmd_ets_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 1 - set, 0 - get */
+ /* bit0 - cos_tc, bit1 - tc_bw, bit2 - cos_prio, bit3 - cos_bw, bit4 - tc_prio */
+ u8 cfg_bitmap;
+ u8 rsvd;
+
+ u8 cos_tc[NIC_DCB_COS_MAX];
+ u8 tc_bw[NIC_DCB_TC_MAX];
+ u8 cos_prio[NIC_DCB_COS_MAX]; /* 0 - DWRR, 1 - STRICT */
+ u8 cos_bw[NIC_DCB_COS_MAX];
+ u8 tc_prio[NIC_DCB_TC_MAX]; /* 0 - DWRR, 1 - STRICT */
+};
+
+struct hinic3_cmd_set_dcb_state {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 op_code; /* 0 - get dcb state, 1 - set dcb state */
+ u8 state; /* 0 - disable, 1 - enable dcb */
+ u8 port_state; /* 0 - disable, 1 - enable dcb */
+ u8 rsvd[7];
+};
+
+#define PFC_BIT_MAP_NUM 8
+struct hinic3_cmd_set_pfc {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 0: get, 1: set pfc_en, 2: set pfc_bitmap, 3: set all */
+ u8 pfc_en; /* pfc_en and pfc_bitmap must be set together */
+ u8 pfc_bitmap;
+ u8 rsvd[4];
+};
+
+#define CMD_QOS_PORT_TRUST BIT(0)
+#define CMD_QOS_PORT_DFT_COS BIT(1)
+struct hinic3_cmd_qos_port_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 0 - get, 1 - set */
+ u8 cfg_bitmap; /* bit0 - trust, bit1 - dft_cos */
+ u8 rsvd0;
+
+ u8 trust;
+ u8 dft_cos;
+ u8 rsvd1[18];
+};
+
+#define MAP_COS_MAX_NUM 8
+#define CMD_QOS_MAP_PCP2COS BIT(0)
+#define CMD_QOS_MAP_DSCP2COS BIT(1)
+struct hinic3_cmd_qos_map_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 op_code;
+ u8 cfg_bitmap; /* bit0 - pcp2cos, bit1 - dscp2cos */
+ u16 rsvd0;
+
+ u8 pcp2cos[8]; /* 8 must be configured together */
+ /* If a dscp2cos value is set to 0xFF, the MPU ignores that DSCP priority;
+ * multiple mappings between DSCP values and CoS values can be configured at a time.
+ */
+ u8 dscp2cos[64];
+ u32 rsvd1[4];
+};
+
+struct hinic3_cos_up_map {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 cos_valid_mask; /* each bit indicates whether the map entry at that index is valid (1) or not (0) */
+ u16 rsvd1;
+
+ /* user priority per cos (index: cos, value: up priority) */
+ u8 map[NIC_DCB_UP_MAX];
+};
+
+struct hinic3_cmd_pause_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u16 rsvd1;
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+ u8 rsvd2[5];
+};
+
+struct nic_cmd_pause_inquiry_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 valid;
+
+ u32 type; /* 1: set, 2: get */
+
+ u32 rx_inquiry_pause_drop_pkts_en;
+ u32 rx_inquiry_pause_period_ms;
+ u32 rx_inquiry_pause_times;
+ /* rx pause Detection Threshold, Default PAUSE_FRAME_THD_10G/25G/40G/100 */
+ u32 rx_inquiry_pause_frame_thd;
+ u32 rx_inquiry_tx_total_pkts;
+
+ u32 tx_inquiry_pause_en; /* tx pause detect enable */
+ u32 tx_inquiry_pause_period_ms; /* tx pause Default Detection Period 200ms */
+ u32 tx_inquiry_pause_times; /* tx pause Default Times Period 5 */
+ u32 tx_inquiry_pause_frame_thd; /* tx pause Detection Threshold */
+ u32 tx_inquiry_rx_total_pkts;
+ u32 rsvd[4];
+};
+
+/* pfc/pause Storm TX exception reporting */
+struct nic_cmd_tx_pause_notice {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 tx_pause_except; /* 1: abnormal, 0: normal */
+ u32 except_level;
+ u32 rsvd;
+};
+
+#define HINIC3_CMD_OP_FREE 0
+#define HINIC3_CMD_OP_ALLOC 1
+
+struct hinic3_cmd_cfg_qps {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: alloc qp, 0: free qp */
+ u8 rsvd1;
+ u16 num_qps;
+ u16 rsvd2;
+};
+
+struct hinic3_cmd_led_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port;
+ u8 type;
+ u8 mode;
+ u8 rsvd1;
+};
+
+struct hinic3_cmd_port_loopback {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u8 mode;
+ u8 en;
+ u32 rsvd1[2];
+};
+
+struct hinic3_cmd_get_light_module_abs {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 abs_status; /* 0:present, 1:absent */
+ u8 rsv[2];
+};
+
+#define STD_SFP_INFO_MAX_SIZE 640
+struct hinic3_cmd_get_std_sfp_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 wire_type;
+ u16 eeprom_len;
+ u32 rsvd;
+ u8 sfp_info[STD_SFP_INFO_MAX_SIZE];
+};
+
+struct hinic3_cable_plug_event {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 plugged; /* 0: unplugged, 1: plugged */
+ u8 port_id;
+};
+
+struct nic_cmd_mac_info {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 valid_bitmap;
+ u16 rsvd;
+
+ u8 host_id[32];
+ u8 port_id[32];
+ u8 mac_addr[192];
+};
+
+struct nic_cmd_set_tcam_enable {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 tcam_enable;
+ u8 rsvd1;
+ u32 rsvd2;
+};
+
+struct nic_cmd_set_fdir_status {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 pkt_type_en;
+ u8 pkt_type;
+ u8 qid;
+ u8 rsvd2;
+};
+
+#define HINIC3_TCAM_BLOCK_ENABLE 1
+#define HINIC3_TCAM_BLOCK_DISABLE 0
+#define HINIC3_MAX_TCAM_RULES_NUM 4096
+
+/* tcam block type, according to tcam block size */
+enum {
+ NIC_TCAM_BLOCK_TYPE_LARGE = 0, /* block_size: 16 */
+ NIC_TCAM_BLOCK_TYPE_SMALL, /* block_size: 0 */
+ NIC_TCAM_BLOCK_TYPE_MAX
+};
+
+/* alloc tcam block input struct */
+struct nic_cmd_ctrl_tcam_block_in {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u8 alloc_en; /* 0: Releases the allocated TCAM block. 1: Applies for a new TCAM block */
+ /* 0: 16 size tcam block, 1: 0 size tcam block, other reserved. */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ /* Size of the block that the driver wants to allocate.
+ * The UP returns this field to the driver, indicating
+ * the size of the allocated TCAM block supported by the UP.
+ */
+ u16 alloc_block_num;
+};
+
+/* alloc tcam block output struct */
+struct nic_cmd_ctrl_tcam_block_out {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u8 alloc_en; /* 0: Releases the allocated TCAM block. 1: Applies for a new TCAM block */
+ /* 0: 16 size tcam block, 1: 0 size tcam block, other reserved. */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ /* Size of the block that the driver wants to allocate.
+ * The UP returns this field to the driver, indicating
+ * the size of the allocated TCAM block supported by the UP.
+ */
+ u16 mpu_alloc_block_size;
+};
+
+struct nic_cmd_flush_tcam_rules {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u16 rsvd;
+};
+
+struct nic_cmd_dfx_fdir_tcam_block_table {
+ struct hinic3_mgmt_msg_head head;
+ u8 tcam_type;
+ u8 valid;
+ u16 tcam_block_index;
+ u16 use_function_id;
+ u16 rsvd;
+};
+
+struct tcam_result {
+ u32 qid;
+ u32 rsvd;
+};
+
+#define TCAM_FLOW_KEY_SIZE (44)
+
+struct tcam_key_x_y {
+ u8 x[TCAM_FLOW_KEY_SIZE];
+ u8 y[TCAM_FLOW_KEY_SIZE];
+};
+
+struct nic_tcam_cfg_rule {
+ u32 index;
+ struct tcam_result data;
+ struct tcam_key_x_y key;
+};
+
+#define TCAM_RULE_FDIR_TYPE 0
+#define TCAM_RULE_PPA_TYPE 1
+
+struct nic_cmd_fdir_add_rule {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type;
+ u8 fdir_ext; /* 0x1: flow bifur en bit */
+ struct nic_tcam_cfg_rule rule;
+};
+
+struct nic_cmd_fdir_del_rules {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type;
+ u8 rsvd;
+ u32 index_start;
+ u32 index_num;
+};
+
+struct nic_cmd_fdir_get_rule {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 index;
+ u8 valid;
+ u8 type;
+ u16 rsvd;
+ struct tcam_key_x_y key;
+ struct tcam_result data;
+ u64 packet_count;
+ u64 byte_count;
+};
+
+struct nic_cmd_fdir_get_block_rules {
+ struct hinic3_mgmt_msg_head head;
+ u8 tcam_block_type; /* only NIC_TCAM_BLOCK_TYPE_LARGE */
+ u8 tcam_table_type; /* TCAM_RULE_PPA_TYPE or TCAM_RULE_FDIR_TYPE */
+ u16 tcam_block_index;
+ u8 valid[NIC_TCAM_BLOCK_LARGE_SIZE];
+ struct tcam_key_x_y key[NIC_TCAM_BLOCK_LARGE_SIZE];
+ struct tcam_result data[NIC_TCAM_BLOCK_LARGE_SIZE];
+};
+
+struct hinic3_tcam_key_ipv4_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+ u32 sipv4_h : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+ u32 dipv4_h : 16;
+ u32 sipv4_l : 16;
+ u32 vlan_id : 15;
+ u32 vlan_flag : 1;
+ u32 dipv4_l : 16;
+ u32 rsvd3;
+ u32 dport : 16;
+ u32 rsvd4 : 16;
+ u32 rsvd5 : 16;
+ u32 sport : 16;
+ u32 outer_sipv4_h : 16;
+ u32 rsvd6 : 16;
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+ u32 rsvd7 : 16;
+ u32 vni_l : 16;
+};
+
+union hinic3_tag_tcam_ext_info {
+ struct {
+ u32 id : 16; /* id */
+ u32 type : 4; /* type: 0-func, 1-vmdq, 2-port, 3-rsvd, 4-trunk, 5-dp, 6-mc */
+ u32 host_id : 3;
+ u32 rsv : 8;
+ u32 ext : 1;
+ } bs;
+ u32 value;
+};
+
+struct hinic3_tcam_key_ipv6_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+ u32 sipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+ u32 sipv6_key2 : 16;
+ u32 sipv6_key1 : 16;
+ u32 sipv6_key4 : 16;
+ u32 sipv6_key3 : 16;
+ u32 sipv6_key6 : 16;
+ u32 sipv6_key5 : 16;
+ u32 dport : 16;
+ u32 sipv6_key7 : 16;
+ u32 dipv6_key0 : 16;
+ u32 sport : 16;
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+ u32 rsvd2 : 16;
+ u32 dipv6_key7 : 16;
+};
+
+struct hinic3_tcam_key_vxlan_ipv6_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+
+ u32 dipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+
+ u32 dport : 16;
+ u32 dipv6_key7 : 16;
+
+ u32 rsvd2 : 16;
+ u32 sport : 16;
+
+ u32 outer_sipv4_h : 16;
+ u32 rsvd3 : 16;
+
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+
+ u32 rsvd4 : 16;
+ u32 vni_l : 16;
+};
+
+struct tag_tcam_key {
+ union {
+ struct hinic3_tcam_key_ipv4_mem key_info;
+ struct hinic3_tcam_key_ipv6_mem key_info_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6;
+ };
+
+ union {
+ struct hinic3_tcam_key_ipv4_mem key_mask;
+ struct hinic3_tcam_key_ipv6_mem key_mask_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6;
+ };
+};
+
+enum {
+ PPA_TABLE_ID_CLEAN_CMD = 0,
+ PPA_TABLE_ID_ADD_CMD,
+ PPA_TABLE_ID_DEL_CMD,
+ FDIR_TABLE_ID_ADD_CMD,
+ FDIR_TABLE_ID_DEL_CMD,
+ PPA_TABEL_ID_MAX
+};
+
+struct hinic3_ppa_cfg_table_id_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u16 cmd;
+ u16 table_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_cfg_ppa_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 ppa_en;
+ u8 rsvd;
+};
+
+struct hinic3_func_flow_bifur_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 func_id;
+ u8 flow_bifur_en;
+ u8 rsvd[5];
+};
+
+struct hinic3_port_flow_bifur_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 port_id;
+ u8 flow_bifur_en;
+ u8 rsvd[5];
+};
+
+struct hinic3_bond_mask_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 func_id;
+ u8 bond_mask;
+ u8 bond_en;
+ u8 func_valid;
+ u8 rsvd[3];
+};
+
+#define HINIC3_TX_SET_PROMISC_SKIP 0
+#define HINIC3_TX_GET_PROMISC_SKIP 1
+
+struct hinic3_tx_promisc_cfg {
+ struct hinic3_mgmt_msg_head msg_head;
+ u8 port_id;
+ u8 promisc_skip_en; /* 0: disable tx promisc replication, 1: enable */
+ u8 opcode; /* 0: set, 1: get */
+ u8 rsvd1;
+};
+
+struct hinic3_ppa_cfg_mode_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 ppa_mode;
+ u8 qpc_func_nums;
+ u16 base_qpc_func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_flush_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 flush_en; /* 0 flush done, 1 in flush operation */
+ u8 rsvd1;
+};
+
+struct hinic3_ppa_fdir_query_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u32 index;
+ u32 rsvd;
+ u64 pkt_nums;
+ u64 pkt_bytes;
+};
+
+/* BIOS CONF */
+enum {
+ NIC_NVM_DATA_SET = BIT(0), /* 1-save, 0-read */
+ NIC_NVM_DATA_PXE = BIT(1),
+ NIC_NVM_DATA_VLAN = BIT(2),
+ NIC_NVM_DATA_VLAN_PRI = BIT(3),
+ NIC_NVM_DATA_VLAN_ID = BIT(4),
+ NIC_NVM_DATA_WORK_MODE = BIT(5),
+ NIC_NVM_DATA_PF_SPEED_LIMIT = BIT(6),
+ NIC_NVM_DATA_GE_MODE = BIT(7),
+ NIC_NVM_DATA_AUTO_NEG = BIT(8),
+ NIC_NVM_DATA_LINK_FEC = BIT(9),
+ NIC_NVM_DATA_PF_ADAPTIVE_LINK = BIT(10),
+ NIC_NVM_DATA_SRIOV_CONTROL = BIT(11),
+ NIC_NVM_DATA_EXTEND_MODE = BIT(12),
+ NIC_NVM_DATA_RESET = BIT(31),
+};
+
+#define BIOS_CFG_SIGNATURE 0x1923E518
+#define BIOS_OP_CFG_ALL(op_code_val) ((((op_code_val) >> 1) & (0xFFFFFFFF)) != 0)
+#define BIOS_OP_CFG_WRITE(op_code_val) ((((op_code_val) & NIC_NVM_DATA_SET)) != 0)
+#define BIOS_OP_CFG_PXE_EN(op_code_val) (((op_code_val) & NIC_NVM_DATA_PXE) != 0)
+#define BIOS_OP_CFG_VLAN_EN(op_code_val) (((op_code_val) & NIC_NVM_DATA_VLAN) != 0)
+#define BIOS_OP_CFG_VLAN_PRI(op_code_val) (((op_code_val) & NIC_NVM_DATA_VLAN_PRI) != 0)
+#define BIOS_OP_CFG_VLAN_ID(op_code_val) (((op_code_val) & NIC_NVM_DATA_VLAN_ID) != 0)
+#define BIOS_OP_CFG_WORK_MODE(op_code_val) (((op_code_val) & NIC_NVM_DATA_WORK_MODE) != 0)
+#define BIOS_OP_CFG_PF_BW(op_code_val) (((op_code_val) & NIC_NVM_DATA_PF_SPEED_LIMIT) != 0)
+#define BIOS_OP_CFG_GE_SPEED(op_code_val) (((op_code_val) & NIC_NVM_DATA_GE_MODE) != 0)
+#define BIOS_OP_CFG_AUTO_NEG(op_code_val) (((op_code_val) & NIC_NVM_DATA_AUTO_NEG) != 0)
+#define BIOS_OP_CFG_LINK_FEC(op_code_val) (((op_code_val) & NIC_NVM_DATA_LINK_FEC) != 0)
+#define BIOS_OP_CFG_AUTO_ADPAT(op_code_val) (((op_code_val) & NIC_NVM_DATA_PF_ADAPTIVE_LINK) != 0)
+#define BIOS_OP_CFG_SRIOV_ENABLE(op_code_val) (((op_code_val) & NIC_NVM_DATA_SRIOV_CONTROL) != 0)
+#define BIOS_OP_CFG_EXTEND_MODE(op_code_val) (((op_code_val) & NIC_NVM_DATA_EXTEND_MODE) != 0)
+#define BIOS_OP_CFG_RST_DEF_SET(op_code_val) (((op_code_val) & (u32)NIC_NVM_DATA_RESET) != 0)
+
+#define NIC_BIOS_CFG_MAX_PF_BW 100
+/* Note: This structure must be 4-byte aligned. */
+struct nic_bios_cfg {
+ u32 signature;
+ u8 pxe_en; /* PXE enable: 0 - disable 1 - enable */
+ u8 extend_mode;
+ u8 rsvd0[2];
+ u8 pxe_vlan_en; /* PXE VLAN enable: 0 - disable 1 - enable */
+ u8 pxe_vlan_pri; /* PXE VLAN priority: 0-7 */
+ u16 pxe_vlan_id; /* PXE VLAN ID 1-4094 */
+ u32 service_mode; /* @See CHIPIF_SERVICE_MODE_x */
+ u32 pf_bw; /* PF rate, in percentage. The value ranges from 0 to 100. */
+ u8 speed; /* enum of port speed */
+ u8 auto_neg; /* Auto-negotiation switch: 0 - invalid field, 1 - on, 2 - off */
+ u8 lanes; /* lane num */
+ u8 fec; /* FEC mode, @See enum mag_cmd_port_fec */
+ u8 auto_adapt; /* Adaptive mode configuration: 0 - invalid configuration, 1 - on, 2 - off */
+ u8 func_valid; /* Whether func_id is valid; 0: invalid; other: valid */
+ u8 func_id; /* This member is valid only when func_valid is not set to 0. */
+ u8 sriov_en; /* SRIOV-EN: 0 - Invalid configuration, 1 - On, 2 - Off */
+};
+
+struct nic_cmd_bios_cfg {
+ struct hinic3_mgmt_msg_head head;
+ u32 op_code; /* Operation code: bit0 - 0: read, 1: write; bit1~bit6: cfg_mask */
+ struct nic_bios_cfg bios_cfg;
+};
+
+struct nic_cmd_vhd_config {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 vhd_type;
+ u8 virtio_small_enable; /* 0: mergeable mode, 1: small mode */
+};
+
+/* BOND */
+struct hinic3_create_bond_info {
+ u32 bond_id;
+ u32 master_slave_port_id;
+ u32 slave_bitmap; /* bond port id bitmap */
+ u32 poll_timeout; /* Bond device link check time */
+ u32 up_delay; /* Temporarily reserved */
+ u32 down_delay; /* Temporarily reserved */
+ u32 bond_mode; /* Temporarily reserved */
+ u32 active_pf; /* bond use active pf id */
+ u32 active_port_max_num; /* Maximum number of active bond member interfaces */
+ u32 active_port_min_num; /* Minimum number of active bond member interfaces */
+ u32 xmit_hash_policy;
+ u32 rsvd[2];
+};
+
+struct hinic3_cmd_create_bond {
+ struct hinic3_mgmt_msg_head head;
+ struct hinic3_create_bond_info create_bond_info;
+};
+
+struct hinic3_cmd_delete_bond {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 rsvd[2];
+};
+
+struct hinic3_open_close_bond_info {
+ u32 bond_id;
+ u32 open_close_flag; /* Bond flag. 1: open; 0: close. */
+ u32 rsvd[2];
+};
+
+struct hinic3_cmd_open_close_bond {
+ struct hinic3_mgmt_msg_head head;
+ struct hinic3_open_close_bond_info open_close_bond_info;
+};
+
+struct lacp_port_params {
+ u16 port_number;
+ u16 port_priority;
+ u16 key;
+ u16 system_priority;
+ u8 system[ETH_ALEN];
+ u8 port_state;
+ u8 rsvd;
+};
+
+struct lacp_port_info {
+ u32 selected;
+ u32 aggregator_port_id;
+
+ struct lacp_port_params actor;
+ struct lacp_port_params partner;
+
+ u64 tx_lacp_pkts;
+ u64 rx_lacp_pkts;
+ u64 rx_8023ad_drop;
+ u64 tx_8023ad_drop;
+ u64 unknown_pkt_drop;
+ u64 rx_marker_pkts;
+ u64 tx_marker_pkts;
+};
+
+struct hinic3_bond_status_info {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 bon_mmi_status;
+ u32 active_bitmap;
+ u32 port_count;
+
+ struct lacp_port_info port_info[4];
+
+ u64 success_report_cnt[4];
+ u64 fail_report_cnt[4];
+
+ u64 poll_timeout;
+ u64 fast_periodic_timeout;
+ u64 slow_periodic_timeout;
+ u64 short_timeout;
+ u64 long_timeout;
+ u64 aggregate_wait_timeout;
+ u64 tx_period_timeout;
+ u64 rx_marker_timer;
+};
+
+struct hinic3_bond_active_report_info {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 bon_mmi_status;
+ u32 active_bitmap;
+
+ u8 rsvd[16];
+};
+
+/* IP checksum error packets, enable rss quadruple hash. */
+struct hinic3_ipcs_err_rss_enable_operation_s {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 en_tag;
+ u8 type; /* 1: set 0: get */
+ u8 rsvd[2];
+};
+
+struct hinic3_smac_check_state {
+ struct hinic3_mgmt_msg_head head;
+ u8 smac_check_en; /* 1: enable 0: disable */
+ u8 op_code; /* 1: set 0: get */
+ u8 rsvd[2];
+};
+
+struct hinic3_clear_log_state {
+ struct hinic3_mgmt_msg_head head;
+ u32 type;
+};
+
+#endif /* HINIC_MGMT_INTERFACE_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mt.h b/drivers/net/ethernet/huawei/hinic3/hinic3_mt.h
new file mode 100644
index 0000000..4df82ff
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mt.h
@@ -0,0 +1,864 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MT_H
+#define HINIC3_MT_H
+
+#define HINIC3_DRV_NAME "hinic3"
+#define HINIC3_CHIP_NAME "hinic"
+/* Interrupts are recorded in the FFM, up to a maximum number of records */
+
+#define NICTOOL_CMD_TYPE (0x18)
+#define HINIC3_CARD_NAME_MAX_LEN (128)
+
+struct api_cmd_rd {
+ u32 pf_id;
+ u8 dest;
+ u8 *cmd;
+ u16 size;
+ void *ack;
+ u16 ack_size;
+};
+
+struct api_cmd_wr {
+ u32 pf_id;
+ u8 dest;
+ u8 *cmd;
+ u16 size;
+};
+
+#define PF_DEV_INFO_NUM 32
+
+struct pf_dev_info {
+ u64 bar0_size;
+ u8 bus;
+ u8 slot;
+ u8 func;
+ u64 phy_addr;
+};
+
+/* Indicates the maximum number of interrupts that can be recorded.
+ * Subsequent interrupts are not recorded in FFM.
+ */
+#define FFM_RECORD_NUM_MAX 64
+
+struct ffm_intr_info {
+ u8 node_id;
+ /* error level of the interrupt source */
+ u8 err_level;
+ /* Classification by interrupt source properties */
+ u16 err_type;
+ u32 err_csr_addr;
+ u32 err_csr_value;
+};
+
+struct ffm_intr_tm_info {
+ struct ffm_intr_info intr_info;
+ u8 times;
+ u8 sec;
+ u8 min;
+ u8 hour;
+ u8 mday;
+ u8 mon;
+ u16 year;
+};
+
+struct ffm_record_info {
+ u32 ffm_num;
+ u32 last_err_csr_addr;
+ u32 last_err_csr_value;
+ struct ffm_intr_tm_info ffm[FFM_RECORD_NUM_MAX];
+};
+
+struct dbgtool_k_glb_info {
+ struct semaphore dbgtool_sem;
+ struct ffm_record_info *ffm;
+};
+
+struct msg_2_up {
+ u8 pf_id;
+ u8 mod;
+ u8 cmd;
+ void *buf_in;
+ u16 in_size;
+ void *buf_out;
+ u16 *out_size;
+};
+
+struct dbgtool_param {
+ union {
+ struct api_cmd_rd api_rd;
+ struct api_cmd_wr api_wr;
+ struct pf_dev_info *dev_info;
+ struct ffm_record_info *ffm_rd;
+ struct msg_2_up msg2up;
+ } param;
+ char chip_name[16];
+};
+
+/* dbgtool command type */
+/* You can add commands as required. The dbgtool command can be
+ * used to invoke all interfaces of the kernel-mode x86 driver.
+ */
+enum dbgtool_cmd {
+ DBGTOOL_CMD_API_RD = 0,
+ DBGTOOL_CMD_API_WR,
+ DBGTOOL_CMD_FFM_RD,
+ DBGTOOL_CMD_FFM_CLR,
+ DBGTOOL_CMD_PF_DEV_INFO_GET,
+ DBGTOOL_CMD_MSG_2_UP,
+ DBGTOOL_CMD_FREE_MEM,
+ DBGTOOL_CMD_NUM
+};
+
+#define HINIC_PF_MAX_SIZE (16)
+#define HINIC_VF_MAX_SIZE (4096)
+#define BUSINFO_LEN (32)
+
+enum module_name {
+ SEND_TO_NPU = 1,
+ SEND_TO_MPU,
+ SEND_TO_SM,
+
+ SEND_TO_HW_DRIVER,
+#define SEND_TO_SRV_DRV_BASE (SEND_TO_HW_DRIVER + 1)
+ SEND_TO_NIC_DRIVER = SEND_TO_SRV_DRV_BASE,
+ SEND_TO_OVS_DRIVER,
+ SEND_TO_ROCE_DRIVER,
+ SEND_TO_TOE_DRIVER,
+ SEND_TO_IOE_DRIVER,
+ SEND_TO_FC_DRIVER,
+ SEND_TO_VBS_DRIVER,
+ SEND_TO_IPSEC_DRIVER,
+ SEND_TO_VIRTIO_DRIVER,
+ SEND_TO_MIGRATE_DRIVER,
+ SEND_TO_PPA_DRIVER,
+ SEND_TO_CUSTOM_DRIVER = SEND_TO_SRV_DRV_BASE + 11,
+ SEND_TO_VSOCK_DRIVER = SEND_TO_SRV_DRV_BASE + 14,
+ SEND_TO_BIFUR_DRIVER,
+ SEND_TO_DRIVER_MAX = SEND_TO_SRV_DRV_BASE + 16, /* reserved */
+};
+
+enum driver_cmd_type {
+ TX_INFO = 1,
+ Q_NUM,
+ TX_WQE_INFO,
+ TX_MAPPING,
+ RX_INFO,
+ RX_WQE_INFO,
+ RX_CQE_INFO,
+ UPRINT_FUNC_EN,
+ UPRINT_FUNC_RESET,
+ UPRINT_SET_PATH,
+ UPRINT_GET_STATISTICS,
+ FUNC_TYPE,
+ GET_FUNC_IDX,
+ GET_INTER_NUM,
+ CLOSE_TX_STREAM,
+ GET_DRV_VERSION,
+ CLEAR_FUNC_STASTIC,
+ GET_HW_STATS,
+ CLEAR_HW_STATS,
+ GET_SELF_TEST_RES,
+ GET_CHIP_FAULT_STATS,
+ NIC_RSVD1,
+ NIC_RSVD2,
+ GET_OS_HOT_REPLACE_INFO,
+ GET_CHIP_ID,
+ GET_SINGLE_CARD_INFO,
+ GET_FIRMWARE_ACTIVE_STATUS,
+ ROCE_DFX_FUNC,
+ GET_DEVICE_ID,
+ GET_PF_DEV_INFO,
+ CMD_FREE_MEM,
+ GET_LOOPBACK_MODE = 32,
+ SET_LOOPBACK_MODE,
+ SET_LINK_MODE,
+ SET_TX_PF_BW_LIMIT,
+ GET_PF_BW_LIMIT,
+ ROCE_CMD,
+ GET_POLL_WEIGHT,
+ SET_POLL_WEIGHT,
+ GET_HOMOLOGUE,
+ SET_HOMOLOGUE,
+ GET_SSET_COUNT,
+ GET_SSET_ITEMS,
+ IS_DRV_IN_VM,
+ LRO_ADPT_MGMT,
+ SET_INTER_COAL_PARAM,
+ GET_INTER_COAL_PARAM,
+ GET_CHIP_INFO,
+ GET_NIC_STATS_LEN,
+ GET_NIC_STATS_STRING,
+ GET_NIC_STATS_INFO,
+ GET_PF_ID,
+ GET_MBOX_CNT,
+ NIC_RSVD4,
+ NIC_RSVD5,
+ DCB_QOS_INFO,
+ DCB_PFC_STATE,
+ DCB_ETS_STATE,
+ DCB_STATE,
+ QOS_DEV,
+ GET_QOS_COS,
+ GET_ULD_DEV_NAME,
+ GET_TX_TIMEOUT,
+ SET_TX_TIMEOUT,
+
+ RSS_CFG = 0x40,
+ RSS_INDIR,
+ PORT_ID,
+
+ SET_RX_PF_BW_LIMIT = 0x43,
+
+ GET_FUNC_CAP = 0x50,
+ GET_XSFP_PRESENT = 0x51,
+ GET_XSFP_INFO = 0x52,
+ DEV_NAME_TEST = 0x53,
+ GET_XSFP_INFO_COMP_CMIS = 0x54,
+
+ GET_WIN_STAT = 0x60,
+ WIN_CSR_READ = 0x61,
+ WIN_CSR_WRITE = 0x62,
+ WIN_API_CMD_RD = 0x63,
+
+ GET_FUSION_Q = 0x64,
+
+ ROCE_CMD_BOND_HASH_TYPE_SET = 0xb2,
+
+ BIFUR_SET_ENABLE = 0xc0,
+ BIFUR_GET_ENABLE = 0xc1,
+
+ VM_COMPAT_TEST = 0xFF
+};
+
+enum api_chain_cmd_type {
+ API_CSR_READ,
+ API_CSR_WRITE
+};
+
+enum sm_cmd_type {
+ SM_CTR_RD16 = 1,
+ SM_CTR_RD32,
+ SM_CTR_RD64_PAIR,
+ SM_CTR_RD64,
+ SM_CTR_RD32_CLEAR,
+ SM_CTR_RD64_PAIR_CLEAR,
+ SM_CTR_RD64_CLEAR,
+ SM_CTR_RD16_CLEAR,
+};
+
+struct cqm_stats {
+ atomic_t cqm_cmd_alloc_cnt;
+ atomic_t cqm_cmd_free_cnt;
+ atomic_t cqm_send_cmd_box_cnt;
+ atomic_t cqm_send_cmd_imm_cnt;
+ atomic_t cqm_db_addr_alloc_cnt;
+ atomic_t cqm_db_addr_free_cnt;
+ atomic_t cqm_fc_srq_create_cnt;
+ atomic_t cqm_srq_create_cnt;
+ atomic_t cqm_rq_create_cnt;
+ atomic_t cqm_qpc_mpt_create_cnt;
+ atomic_t cqm_nonrdma_queue_create_cnt;
+ atomic_t cqm_rdma_queue_create_cnt;
+ atomic_t cqm_rdma_table_create_cnt;
+ atomic_t cqm_qpc_mpt_delete_cnt;
+ atomic_t cqm_nonrdma_queue_delete_cnt;
+ atomic_t cqm_rdma_queue_delete_cnt;
+ atomic_t cqm_rdma_table_delete_cnt;
+ atomic_t cqm_func_timer_clear_cnt;
+ atomic_t cqm_func_hash_buf_clear_cnt;
+ atomic_t cqm_scq_callback_cnt;
+ atomic_t cqm_ecq_callback_cnt;
+ atomic_t cqm_nocq_callback_cnt;
+ atomic_t cqm_aeq_callback_cnt[112];
+};
+
+struct link_event_stats {
+ atomic_t link_down_stats;
+ atomic_t link_up_stats;
+};
+
+enum hinic3_fault_err_level {
+ FAULT_LEVEL_FATAL,
+ FAULT_LEVEL_SERIOUS_RESET,
+ FAULT_LEVEL_HOST,
+ FAULT_LEVEL_SERIOUS_FLR,
+ FAULT_LEVEL_GENERAL,
+ FAULT_LEVEL_SUGGESTION,
+ FAULT_LEVEL_MAX,
+};
+
+enum hinic3_fault_type {
+ FAULT_TYPE_CHIP,
+ FAULT_TYPE_UCODE,
+ FAULT_TYPE_MEM_RD_TIMEOUT,
+ FAULT_TYPE_MEM_WR_TIMEOUT,
+ FAULT_TYPE_REG_RD_TIMEOUT,
+ FAULT_TYPE_REG_WR_TIMEOUT,
+ FAULT_TYPE_PHY_FAULT,
+ FAULT_TYPE_TSENSOR_FAULT,
+ FAULT_TYPE_MAX,
+};
+
+struct fault_event_stats {
+ /* TODO: HINIC_NODE_ID_MAX: temporarily use the 1822 value (22) */
+ atomic_t chip_fault_stats[22][FAULT_LEVEL_MAX];
+ atomic_t fault_type_stat[FAULT_TYPE_MAX];
+ atomic_t pcie_fault_stats;
+};
+
+enum hinic3_ucode_event_type {
+ HINIC3_INTERNAL_OTHER_FATAL_ERROR = 0x0,
+ HINIC3_CHANNEL_BUSY = 0x7,
+ HINIC3_NIC_FATAL_ERROR_MAX = 0x8,
+};
+
+struct hinic3_hw_stats {
+ atomic_t heart_lost_stats;
+ struct cqm_stats cqm_stats;
+ struct link_event_stats link_event_stats;
+ struct fault_event_stats fault_event_stats;
+ atomic_t nic_ucode_event_stats[HINIC3_NIC_FATAL_ERROR_MAX];
+};
+
+#ifndef IFNAMSIZ
+#define IFNAMSIZ 16
+#endif
+
+struct pf_info {
+ char name[IFNAMSIZ];
+ char bus_info[BUSINFO_LEN];
+ u32 pf_type;
+};
+
+struct card_info {
+ struct pf_info pf[HINIC_PF_MAX_SIZE];
+ u32 pf_num;
+};
+
+struct func_mbox_cnt_info {
+ char bus_info[BUSINFO_LEN];
+ u64 send_cnt;
+ u64 ack_cnt;
+};
+
+struct card_mbox_cnt_info {
+ struct func_mbox_cnt_info func_info[HINIC_PF_MAX_SIZE +
+ HINIC_VF_MAX_SIZE];
+ u32 func_num;
+};
+
+struct hinic3_nic_loop_mode {
+ u32 loop_mode;
+ u32 loop_ctrl;
+};
+
+struct hinic3_pf_info {
+ u32 isvalid;
+ u32 pf_id;
+};
+
+enum hinic3_show_set {
+ HINIC3_SHOW_SSET_IO_STATS = 1,
+};
+
+#define HINIC3_SHOW_ITEM_LEN 32
+struct hinic3_show_item {
+ char name[HINIC3_SHOW_ITEM_LEN];
+ u8 hexadecimal; /* 0: decimal, 1: hexadecimal */
+ u8 rsvd[7];
+ u64 value;
+};
+
+#define HINIC3_CHIP_FAULT_SIZE (110 * 1024)
+#define MAX_DRV_BUF_SIZE 4096
+
+struct nic_cmd_chip_fault_stats {
+ u32 offset;
+ u8 chip_fault_stats[MAX_DRV_BUF_SIZE];
+};
+
+#define NIC_TOOL_MAGIC 'x'
+
+#define CARD_MAX_SIZE (64)
+
+struct nic_card_id {
+ u32 id[CARD_MAX_SIZE];
+ u32 num;
+};
+
+struct func_pdev_info {
+ u64 bar0_phy_addr;
+ u64 bar0_size;
+ u64 bar1_phy_addr;
+ u64 bar1_size;
+ u64 bar3_phy_addr;
+ u64 bar3_size;
+ u64 rsvd1[4];
+};
+
+struct hinic3_card_func_info {
+ u32 num_pf;
+ u32 rsvd0;
+ u64 usr_api_phy_addr;
+ struct func_pdev_info pdev_info[CARD_MAX_SIZE];
+};
+
+struct wqe_info {
+ int q_id;
+ void *slq_handle;
+ unsigned int wqe_id;
+};
+
+#define MAX_VER_INFO_LEN 128
+struct drv_version_info {
+ char ver[MAX_VER_INFO_LEN];
+};
+
+struct hinic3_tx_hw_page {
+ u64 phy_addr;
+ u64 *map_addr;
+};
+
+struct nic_sq_info {
+ u16 q_id;
+ u16 pi;
+ u16 ci; /* sw_ci */
+ u16 fi; /* hw_ci */
+ u32 q_depth;
+ u16 pi_reverse; /* TODO: what is this? */
+ u16 wqebb_size;
+ u8 priority;
+ u16 *ci_addr;
+ u64 cla_addr;
+ void *slq_handle;
+ /* TODO: NIC doesn't use direct wqe */
+ struct hinic3_tx_hw_page direct_wqe;
+ struct hinic3_tx_hw_page doorbell;
+ u32 page_idx;
+ u32 glb_sq_id;
+};
+
+struct nic_rq_info {
+ u16 q_id;
+ u16 delta;
+ u16 hw_pi;
+ u16 ci; /* sw_ci */
+ u16 sw_pi;
+ u16 wqebb_size;
+ u16 q_depth;
+ u16 buf_len;
+
+ void *slq_handle;
+ u64 ci_wqe_page_addr;
+ u64 ci_cla_tbl_addr;
+
+ u8 coalesc_timer_cfg;
+ u8 pending_limt;
+ u16 msix_idx;
+ u32 msix_vector;
+};
+
+#define MT_EPERM 1 /* Operation not permitted */
+#define MT_EIO 2 /* I/O error */
+#define MT_EINVAL 3 /* Invalid argument */
+#define MT_EBUSY 4 /* Device or resource busy */
+#define MT_EOPNOTSUPP 0xFF /* Operation not supported */
+
+struct mt_msg_head {
+ u8 status;
+ u8 rsvd1[3];
+};
+
+#define MT_DCB_OPCODE_WR BIT(0) /* 1 - write, 0 - read */
+struct hinic3_mt_qos_info { /* delete */
+ struct mt_msg_head head;
+
+ u16 op_code;
+ u8 valid_cos_bitmap;
+ u8 valid_up_bitmap;
+ u32 rsvd1;
+};
+
+struct hinic3_mt_dcb_state {
+ struct mt_msg_head head;
+
+ u16 op_code; /* 0 - get dcb state, 1 - set dcb state */
+ u8 state; /* 0 - disable, 1 - enable dcb */
+ u8 rsvd;
+};
+
+#define MT_DCB_ETS_UP_TC BIT(1)
+#define MT_DCB_ETS_UP_BW BIT(2)
+#define MT_DCB_ETS_UP_PRIO BIT(3)
+#define MT_DCB_ETS_TC_BW BIT(4)
+#define MT_DCB_ETS_TC_PRIO BIT(5)
+
+#define DCB_UP_TC_NUM 0x8
+struct hinic3_mt_ets_state { /* delete */
+ struct mt_msg_head head;
+
+ u16 op_code;
+ u8 up_tc[DCB_UP_TC_NUM];
+ u8 up_bw[DCB_UP_TC_NUM];
+ u8 tc_bw[DCB_UP_TC_NUM];
+ u8 up_prio_bitmap;
+ u8 tc_prio_bitmap;
+ u32 rsvd;
+};
+
+#define MT_DCB_PFC_PFC_STATE BIT(1)
+#define MT_DCB_PFC_PFC_PRI_EN BIT(2)
+
+struct hinic3_mt_pfc_state { /* delete */
+ struct mt_msg_head head;
+
+ u16 op_code;
+ u8 state;
+ u8 pfc_en_bitpamp;
+ u32 rsvd;
+};
+
+#define CMD_QOS_DEV_TRUST BIT(0)
+#define CMD_QOS_DEV_DFT_COS BIT(1)
+#define CMD_QOS_DEV_PCP2COS BIT(2)
+#define CMD_QOS_DEV_DSCP2COS BIT(3)
+
+struct hinic3_mt_qos_dev_cfg {
+ struct mt_msg_head head;
+
+ u8 op_code; /* 0:get 1: set */
+ u8 rsvd0;
+ /* bit0 - trust, bit1 - dft_cos, bit2 - pcp2cos, bit3 - dscp2cos */
+ u16 cfg_bitmap;
+
+ u8 trust; /* 0 - pcp, 1 - dscp */
+ u8 dft_cos;
+ u16 rsvd1;
+ u8 pcp2cos[8]; /* all 8 values must be configured together */
+ /* If a cos value in dscp2cos is set to 0xFF, the driver ignores that dscp priority;
+ * multiple dscp-to-cos mappings can be configured at a time.
+ */
+ u8 dscp2cos[64];
+ u32 rsvd2[4];
+};
+
+enum mt_api_type {
+ API_TYPE_MBOX = 1,
+ API_TYPE_API_CHAIN_BYPASS,
+ API_TYPE_API_CHAIN_TO_MPU,
+ API_TYPE_CLP,
+};
+
+struct npu_cmd_st {
+ u32 mod : 8;
+ u32 cmd : 8;
+ u32 ack_type : 3;
+ u32 direct_resp : 1;
+ u32 len : 12;
+};
+
+struct mpu_cmd_st {
+ u32 api_type : 8;
+ u32 mod : 8;
+ u32 cmd : 16;
+};
+
+struct msg_module {
+ char device_name[IFNAMSIZ];
+ u32 module;
+ union {
+ u32 msg_formate; /* for driver */
+ struct npu_cmd_st npu_cmd;
+ struct mpu_cmd_st mpu_cmd;
+ };
+ u32 timeout; /* for mpu/npu cmd */
+ u32 func_idx;
+ u32 buf_in_size;
+ u32 buf_out_size;
+ void *in_buf;
+ void *out_buf;
+ int bus_num;
+ u8 port_id;
+ u8 rsvd1[3];
+ u32 rsvd2[4];
+};
+
+struct hinic3_mt_qos_cos_cfg {
+ struct mt_msg_head head;
+
+ u8 port_id;
+ u8 func_cos_bitmap;
+ u8 port_cos_bitmap;
+ u8 func_max_cos_num;
+ u32 rsvd2[4];
+};
+
+#define MAX_NETDEV_NUM 4
+
+enum hinic3_bond_cmd_to_custom_e {
+ CMD_CUSTOM_BOND_DEV_CREATE = 1,
+ CMD_CUSTOM_BOND_DEV_DELETE,
+ CMD_CUSTOM_BOND_GET_CHIP_NAME,
+ CMD_CUSTOM_BOND_GET_CARD_INFO
+};
+
+enum xmit_hash_policy {
+ HASH_POLICY_L2 = 0, /* SMAC_DMAC */
+ HASH_POLICY_L23 = 1, /* SIP_DIP_SPORT_DPORT */
+ HASH_POLICY_L34 = 2, /* SMAC_DMAC_SIP_DIP */
+ HASH_POLICY_MAX = 3 /* MAX */
+};
+
+/* bond mode */
+enum tag_bond_mode {
+ BOND_MODE_NONE = 0, /**< bond disable */
+ BOND_MODE_BACKUP = 1, /**< 1 for active-backup */
+ BOND_MODE_BALANCE = 2, /**< 2 for balance-xor */
+ BOND_MODE_LACP = 4, /**< 4 for 802.3ad */
+ BOND_MODE_MAX
+};
+
+struct add_bond_dev_s {
+ struct mt_msg_head head;
+ /* input can be empty, indicates that the value
+ * is assigned by the driver
+ */
+ char bond_name[IFNAMSIZ];
+ u8 slave_cnt;
+ u8 rsvd[3];
+ char slave_name[MAX_NETDEV_NUM][IFNAMSIZ];
+ u32 poll_timeout; /* unit: ms, default value = 100 */
+ u32 up_delay; /* default value = 0 */
+ u32 down_delay; /* default value = 0 */
+ u32 bond_mode; /* default value = BOND_MODE_LACP */
+
+ /* maximum number of active bond member interfaces,
+ * default value = 0
+ */
+ u32 active_port_max_num;
+ /* minimum number of active bond member interfaces,
+ * default value = 0
+ */
+ u32 active_port_min_num;
+ /* hash policy, which is used for microcode routing logic,
+ * default value = 0
+ */
+ enum xmit_hash_policy xmit_hash_policy;
+};
+
+struct del_bond_dev_s {
+ struct mt_msg_head head;
+ char bond_name[IFNAMSIZ];
+};
+
+struct get_bond_chip_name_s {
+ char bond_name[IFNAMSIZ];
+ char chip_name[IFNAMSIZ];
+};
+
+struct bond_drv_msg_s {
+ u32 bond_id;
+ u32 slave_cnt;
+ u32 master_slave_index;
+ char bond_name[IFNAMSIZ];
+ char slave_name[MAX_NETDEV_NUM][IFNAMSIZ];
+};
+
+#define MAX_BONDING_CNT_PER_CARD (2)
+
+struct bond_negotiate_status {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+ u32 bond_id;
+ u32 bond_mmi_status; /* link status of this bond sub-device */
+ u32 active_bitmap; /* slave port status of this bond sub-device */
+
+ u8 rsvd[16];
+};
+
+struct bond_all_msg_s {
+ struct bond_drv_msg_s drv_msg;
+ struct bond_negotiate_status active_info;
+};
+
+struct get_card_bond_msg_s {
+ u32 bond_cnt;
+ struct bond_all_msg_s all_msg[MAX_BONDING_CNT_PER_CARD];
+};
+
+#define MAX_FUSION_Q_STATS_STR_LEN 16
+#define MAX_FUSION_Q_NUM 256
+struct queue_status_s {
+ pid_t tgid;
+ char status[MAX_FUSION_Q_STATS_STR_LEN];
+};
+struct fusion_q_status_s {
+ u16 queue_num;
+ struct queue_status_s queue[MAX_FUSION_Q_NUM];
+};
+
+struct fusion_q_tx_hw_page {
+ u64 phy_addr;
+ u64 *map_addr;
+};
+
+struct fusion_sq_info {
+ u16 q_id;
+ u16 pi;
+ u16 ci; /* sw_ci */
+ u16 fi; /* hw_ci */
+ u32 q_depth;
+ u16 pi_reverse;
+ u16 wqebb_size;
+ u8 priority;
+ u16 *ci_addr;
+ u64 cla_addr;
+ void *slq_handle;
+ struct fusion_q_tx_hw_page direct_wqe;
+ struct fusion_q_tx_hw_page doorbell;
+ u32 page_idx;
+ u32 glb_sq_id;
+};
+
+struct fusion_q_tx_wqe {
+ u32 data[4];
+};
+
+struct fusion_rq_info {
+ u16 q_id;
+ u16 delta;
+ u16 hw_pi;
+ u16 ci; /* sw_ci */
+ u16 sw_pi;
+ u16 wqebb_size;
+ u16 q_depth;
+ u16 buf_len;
+
+ void *slq_handle;
+ u64 ci_wqe_page_addr;
+ u64 ci_cla_tbl_addr;
+
+ u8 coalesc_timer_cfg;
+ u8 pending_limt;
+ u16 msix_idx;
+ u32 msix_vector;
+};
+
+struct fusion_q_rx_wqe {
+ u32 data[8];
+};
+
+struct fusion_q_rx_cqe {
+ union {
+ struct {
+ unsigned int checksum_err : 16;
+ unsigned int lro_num : 8;
+ unsigned int rsvd1 : 7;
+ unsigned int rx_done : 1;
+ } bs;
+ unsigned int value;
+ } dw0;
+
+ union {
+ struct {
+ unsigned int vlan : 16;
+ unsigned int length : 16;
+ } bs;
+ unsigned int value;
+ } dw1;
+
+ union {
+ struct {
+ unsigned int pkt_types : 12;
+ unsigned int rsvd : 4;
+ unsigned int udp_0 : 1;
+ unsigned int ipv6_ex_add : 1;
+ unsigned int loopback : 1;
+ unsigned int umbcast : 2;
+ unsigned int vlan_offload_en : 1;
+ unsigned int tag_num : 2;
+ unsigned int rss_type : 8;
+ } bs;
+ unsigned int value;
+ } dw2;
+
+ union {
+ struct {
+ unsigned int rss_hash_value;
+ } bs;
+ unsigned int value;
+ } dw3;
+
+ union {
+ struct {
+ unsigned int tx_ts_seq : 16;
+ unsigned int message_1588_offset : 8;
+ unsigned int message_1588_type : 4;
+ unsigned int rsvd : 1;
+ unsigned int if_rx_ts : 1;
+ unsigned int if_tx_ts : 1;
+ unsigned int if_1588 : 1;
+ } bs;
+ unsigned int value;
+ } dw4;
+
+ union {
+ struct {
+ unsigned int ts;
+ } bs;
+ unsigned int value;
+ } dw5;
+
+ union {
+ struct {
+ unsigned int lro_ts;
+ } bs;
+ unsigned int value;
+ } dw6;
+
+ union {
+ struct {
+ unsigned int rsvd0;
+ } bs;
+ unsigned int value;
+ } dw7; /* 16Bytes Align */
+};
+
+struct os_hot_repalce_func_info {
+ char card_name[HINIC3_CARD_NAME_MAX_LEN];
+ int bus_num;
+ int valid;
+ int bdf;
+ int partition;
+ int backup_pf;
+ int pf_idx;
+ int port_id;
+};
+
+#define ALL_CARD_PF_NUM 2048 /* 64 card * 32 pf */
+struct os_hot_replace_info {
+ struct os_hot_repalce_func_info func_infos[ALL_CARD_PF_NUM];
+ u32 func_cnt;
+};
+
+int alloc_buff_in(void *hwdev, struct msg_module *nt_msg, u32 in_size, void **buf_in);
+
+int alloc_buff_out(void *hwdev, struct msg_module *nt_msg, u32 out_size, void **buf_out);
+
+void free_buff_in(void *hwdev, const struct msg_module *nt_msg, void *buf_in);
+
+void free_buff_out(void *hwdev, struct msg_module *nt_msg, void *buf_out);
+
+int copy_buf_out_to_user(struct msg_module *nt_msg, u32 out_size, void *buf_out);
+
+int send_to_mpu(void *hwdev, struct msg_module *nt_msg, void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+int send_to_npu(void *hwdev, struct msg_module *nt_msg, void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size);
+int send_to_sm(void *hwdev, struct msg_module *nt_msg, void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+#endif /* _HINIC3_MT_H_ */
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c b/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
new file mode 100644
index 0000000..7cd9e4d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
@@ -0,0 +1,2125 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <net/dsfield.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/netlink.h>
+#include <linux/debugfs.h>
+#include <linux/ip.h>
+
+#include "ossl_knl.h"
+#if defined(HAVE_NDO_UDP_TUNNEL_ADD) || defined(HAVE_UDP_TUNNEL_NIC_INFO)
+#include <net/udp_tunnel.h>
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD || HAVE_UDP_TUNNEL_NIC_INFO */
+#ifdef HAVE_XDP_SUPPORT
+#include <linux/bpf.h>
+#endif
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_dcb.h"
+#include "hinic3_nic_prof.h"
+
+#include "nic_npu_cmd.h"
+
+#include "vram_common.h"
+
+#define HINIC3_DEFAULT_RX_CSUM_OFFLOAD 0xFFF
+
+#define HINIC3_LRO_DEFAULT_COAL_PKT_SIZE 32
+#define HINIC3_LRO_DEFAULT_TIME_LIMIT 16
+#define HINIC3_WAIT_FLUSH_QP_RESOURCE_TIMEOUT 100
+static void hinic3_nic_set_rx_mode(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (netdev_uc_count(netdev) != nic_dev->netdev_uc_cnt ||
+ netdev_mc_count(netdev) != nic_dev->netdev_mc_cnt) {
+ set_bit(HINIC3_UPDATE_MAC_FILTER, &nic_dev->flags);
+ nic_dev->netdev_uc_cnt = netdev_uc_count(netdev);
+ nic_dev->netdev_mc_cnt = netdev_mc_count(netdev);
+ }
+
+ queue_work(nic_dev->workq, &nic_dev->rx_mode_work);
+}
+
+static void hinic3_free_irq_vram(struct hinic3_nic_dev *nic_dev, struct hinic3_dyna_txrxq_params *in_q_params)
+{
+ u32 size;
+ int is_use_vram = get_use_vram_flag();
+ struct hinic3_dyna_txrxq_params q_params = nic_dev->q_params;
+
+ if (q_params.irq_cfg == NULL)
+ return;
+
+ size = sizeof(struct hinic3_irq) * (q_params.num_qps);
+
+ if (is_use_vram != 0) {
+ hi_vram_kfree((void *)q_params.irq_cfg, q_params.irq_cfg_vram_name, size);
+ q_params.irq_cfg = NULL;
+ } else {
+ kfree(in_q_params->irq_cfg);
+ in_q_params->irq_cfg = NULL;
+ }
+}
+
+static int hinic3_alloc_irq_vram(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params, bool is_up_eth)
+{
+ u32 size;
+ int is_use_vram = get_use_vram_flag();
+ u16 func_id;
+
+ size = sizeof(struct hinic3_irq) * q_params->num_qps;
+
+ if (is_use_vram != 0) {
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ snprintf(q_params->irq_cfg_vram_name,
+ VRAM_NAME_MAX_LEN, "%s%u",
+ VRAM_NIC_IRQ_VRAM, func_id);
+ q_params->irq_cfg = (struct hinic3_irq *)hi_vram_kalloc(
+ q_params->irq_cfg_vram_name, size);
+ if (q_params->irq_cfg == NULL) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "NIC irq vram alloc failed.\n");
+ return -ENOMEM;
+ }
+ /* in order to clear the napi stored in vram, the irq needs to be reinitialized when eth is brought up */
+ if (is_up_eth) {
+ memset(q_params->irq_cfg, 0, size);
+ }
+ } else {
+ q_params->irq_cfg = kzalloc(size, GFP_KERNEL);
+ if (q_params->irq_cfg == NULL) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "NIC irq alloc failed.\n");
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+static int hinic3_alloc_txrxq_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params,
+ bool is_up_eth)
+{
+ u32 size;
+ int err;
+
+ size = sizeof(*q_params->txqs_res) * q_params->num_qps;
+ q_params->txqs_res = kzalloc(size, GFP_KERNEL);
+ if (!q_params->txqs_res) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txqs resources array\n");
+ return -ENOMEM;
+ }
+
+ size = sizeof(*q_params->rxqs_res) * q_params->num_qps;
+ q_params->rxqs_res = kzalloc(size, GFP_KERNEL);
+ if (!q_params->rxqs_res) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxqs resource array\n");
+ err = -ENOMEM;
+ goto alloc_rxqs_res_arr_err;
+ }
+
+ err = hinic3_alloc_irq_vram(nic_dev, q_params, is_up_eth);
+ if (err != 0) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc irq resource array\n");
+ goto alloc_irq_cfg_err;
+ }
+
+ err = hinic3_alloc_txqs_res(nic_dev, q_params->num_qps,
+ q_params->sq_depth, q_params->txqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txqs resource\n");
+ goto alloc_txqs_res_err;
+ }
+
+ err = hinic3_alloc_rxqs_res(nic_dev, q_params->num_qps,
+ q_params->rq_depth, q_params->rxqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxqs resource\n");
+ goto alloc_rxqs_res_err;
+ }
+
+ return 0;
+
+alloc_rxqs_res_err:
+ hinic3_free_txqs_res(nic_dev, q_params->num_qps, q_params->sq_depth,
+ q_params->txqs_res);
+
+alloc_txqs_res_err:
+ hinic3_free_irq_vram(nic_dev, q_params);
+
+alloc_irq_cfg_err:
+ kfree(q_params->rxqs_res);
+ q_params->rxqs_res = NULL;
+
+alloc_rxqs_res_arr_err:
+ kfree(q_params->txqs_res);
+ q_params->txqs_res = NULL;
+
+ return err;
+}
+
+static void hinic3_free_txrxq_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params)
+{
+ int is_in_kexec = vram_get_kexec_flag();
+ hinic3_free_rxqs_res(nic_dev, q_params->num_qps, q_params->rq_depth,
+ q_params->rxqs_res);
+ hinic3_free_txqs_res(nic_dev, q_params->num_qps, q_params->sq_depth,
+ q_params->txqs_res);
+
+ if (is_in_kexec == 0)
+ hinic3_free_irq_vram(nic_dev, q_params);
+
+ kfree(q_params->rxqs_res);
+ q_params->rxqs_res = NULL;
+
+ kfree(q_params->txqs_res);
+ q_params->txqs_res = NULL;
+}
+
+static int hinic3_configure_txrxqs(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params)
+{
+ int err;
+
+ err = hinic3_configure_txqs(nic_dev, q_params->num_qps,
+ q_params->sq_depth, q_params->txqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to configure txqs\n");
+ return err;
+ }
+
+ err = hinic3_configure_rxqs(nic_dev, q_params->num_qps,
+ q_params->rq_depth, q_params->rxqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to configure rxqs\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static void config_dcb_qps_map(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ u8 num_cos;
+
+ if (!test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ hinic3_update_tx_db_cos(nic_dev, 0);
+ return;
+ }
+
+ num_cos = hinic3_get_dev_user_cos_num(nic_dev);
+ hinic3_update_qp_cos_cfg(nic_dev, num_cos);
+ /* Changing num_cos is not supported for now */
+ if (num_cos > dcb->cos_config_num_max ||
+ nic_dev->q_params.num_qps < num_cos) {
+ nicif_err(nic_dev, drv, netdev, "Invalid num_cos: %u or num_qps: %u, disable DCB\n",
+ num_cos, nic_dev->q_params.num_qps);
+ nic_dev->q_params.num_cos = 0;
+ clear_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ clear_bit(HINIC3_DCB_ENABLE, &nic_dev->nic_vram->flags);
+ /* if rss cannot be enabled or there are not enough num_qps,
+ * sync the default configuration to hw
+ */
+ hinic3_configure_dcb(netdev);
+ }
+
+ hinic3_update_tx_db_cos(nic_dev, 1);
+}
+
+static int hinic3_configure(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int err;
+ int is_in_kexec = vram_get_kexec_flag();
+
+ if (is_in_kexec == 0) {
+ err = hinic3_set_port_mtu(nic_dev->hwdev, (u16)netdev->mtu);
+ if (err != 0) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set mtu\n");
+ return err;
+ }
+ }
+
+ config_dcb_qps_map(nic_dev);
+
+ /* rx rss init */
+ err = hinic3_rx_configure(netdev, test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to configure rx\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static void hinic3_remove_configure(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_rx_remove_configure(nic_dev->netdev);
+}
+
+/* Try to change the number of irqs to the target number
+ * and return the actual number of irqs obtained.
+ */
+static u16 hinic3_qp_irq_change(struct hinic3_nic_dev *nic_dev,
+ u16 dst_num_qp_irq)
+{
+ struct irq_info *qps_irq_info = nic_dev->qps_irq_info;
+ u16 resp_irq_num, irq_num_gap, i;
+ u16 idx;
+ int err;
+
+ if (dst_num_qp_irq > nic_dev->num_qp_irq) {
+ irq_num_gap = dst_num_qp_irq - nic_dev->num_qp_irq;
+ err = hinic3_alloc_irqs(nic_dev->hwdev, SERVICE_T_NIC,
+ irq_num_gap,
+ &qps_irq_info[nic_dev->num_qp_irq],
+ &resp_irq_num);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc irqs\n");
+ return nic_dev->num_qp_irq;
+ }
+
+ nic_dev->num_qp_irq += resp_irq_num;
+ } else if (dst_num_qp_irq < nic_dev->num_qp_irq) {
+ irq_num_gap = nic_dev->num_qp_irq - dst_num_qp_irq;
+ for (i = 0; i < irq_num_gap; i++) {
+ idx = (nic_dev->num_qp_irq - i) - 1;
+ hinic3_free_irq(nic_dev->hwdev, SERVICE_T_NIC,
+ qps_irq_info[idx].irq_id);
+ qps_irq_info[idx].irq_id = 0;
+ qps_irq_info[idx].msix_entry_idx = 0;
+ }
+ nic_dev->num_qp_irq = dst_num_qp_irq;
+ }
+
+ return nic_dev->num_qp_irq;
+}
+
+static void config_dcb_num_qps(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_dyna_txrxq_params *q_params,
+ u16 max_qps)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ u8 num_cos = q_params->num_cos;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ if (!num_cos || num_cos > dcb->cos_config_num_max || num_cos > max_qps)
+ return; /* will disable DCB in config_dcb_qps_map() */
+
+ hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
+}
+
+static void hinic3_config_num_qps(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params)
+{
+ u16 alloc_num_irq, cur_num_irq;
+ u16 dst_num_irq;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags))
+ q_params->num_qps = 1;
+
+ config_dcb_num_qps(nic_dev, q_params, q_params->num_qps);
+
+ if (nic_dev->num_qp_irq >= q_params->num_qps)
+ goto out;
+
+ cur_num_irq = nic_dev->num_qp_irq;
+
+ alloc_num_irq = hinic3_qp_irq_change(nic_dev, q_params->num_qps);
+ if (alloc_num_irq < q_params->num_qps) {
+ q_params->num_qps = alloc_num_irq;
+ config_dcb_num_qps(nic_dev, q_params, q_params->num_qps);
+ nicif_warn(nic_dev, drv, nic_dev->netdev,
+ "Can not get enough irqs, adjust num_qps to %u\n",
+ q_params->num_qps);
+
+ /* The current irq may be in use, we must keep it */
+ dst_num_irq = (u16)max_t(u16, cur_num_irq, q_params->num_qps);
+ hinic3_qp_irq_change(nic_dev, dst_num_irq);
+ }
+
+out:
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Finally num_qps: %u\n",
+ q_params->num_qps);
+}
+
+/* determine num_qps from rss_tmpl_id/irq_num/dcb_en */
+static int hinic3_setup_num_qps(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u32 irq_size;
+
+ nic_dev->num_qp_irq = 0;
+
+ irq_size = sizeof(*nic_dev->qps_irq_info) * nic_dev->max_qps;
+ if (!irq_size) {
+ nicif_err(nic_dev, drv, netdev, "Cannot allocate zero size entries\n");
+ return -EINVAL;
+ }
+ nic_dev->qps_irq_info = kzalloc(irq_size, GFP_KERNEL);
+ if (!nic_dev->qps_irq_info) {
+ nicif_err(nic_dev, drv, netdev, "Failed to alloc qps_irq_info\n");
+ return -ENOMEM;
+ }
+
+ hinic3_config_num_qps(nic_dev, &nic_dev->q_params);
+
+ return 0;
+}
+
+static void hinic3_destroy_num_qps(struct hinic3_nic_dev *nic_dev)
+{
+ u16 i;
+
+ for (i = 0; i < nic_dev->num_qp_irq; i++)
+ hinic3_free_irq(nic_dev->hwdev, SERVICE_T_NIC,
+ nic_dev->qps_irq_info[i].irq_id);
+
+ kfree(nic_dev->qps_irq_info);
+}
+
+int hinic3_maybe_set_port_state(struct hinic3_nic_dev *nic_dev, bool enable)
+{
+ return hinic3_set_port_enable(nic_dev->hwdev, enable,
+ HINIC3_CHANNEL_NIC);
+}
+
+static void hinic3_print_link_message(struct hinic3_nic_dev *nic_dev,
+ u8 link_status)
+{
+ if (nic_dev->link_status == link_status)
+ return;
+
+ nic_dev->link_status = link_status;
+
+ nicif_info(nic_dev, link, nic_dev->netdev, "Link is %s\n",
+ (link_status ? "up" : "down"));
+}
+
+static int hinic3_alloc_channel_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params,
+ struct hinic3_dyna_txrxq_params *trxq_params,
+ bool is_up_eth)
+{
+ int err;
+
+ qp_params->num_qps = trxq_params->num_qps;
+ qp_params->sq_depth = trxq_params->sq_depth;
+ qp_params->rq_depth = trxq_params->rq_depth;
+
+ err = hinic3_alloc_qps(nic_dev->hwdev, nic_dev->qps_irq_info,
+ qp_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc qps\n");
+ return err;
+ }
+
+ err = hinic3_alloc_txrxq_resources(nic_dev, trxq_params, is_up_eth);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc txrxq resources\n");
+ hinic3_free_qps(nic_dev->hwdev, qp_params);
+ return err;
+ }
+
+ return 0;
+}
+
+static void hinic3_free_channel_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params,
+ struct hinic3_dyna_txrxq_params *trxq_params)
+{
+ mutex_lock(&nic_dev->nic_mutex);
+ hinic3_free_txrxq_resources(nic_dev, trxq_params);
+ hinic3_free_qps(nic_dev->hwdev, qp_params);
+ mutex_unlock(&nic_dev->nic_mutex);
+}
+
+static int hinic3_open_channel(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params,
+ struct hinic3_dyna_txrxq_params *trxq_params)
+{
+ int err;
+
+ err = hinic3_init_qps(nic_dev->hwdev, qp_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to init qps\n");
+ return err;
+ }
+
+ err = hinic3_configure_txrxqs(nic_dev, trxq_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to configure txrxqs\n");
+ goto cfg_txrxqs_err;
+ }
+
+ err = hinic3_qps_irq_init(nic_dev);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to init txrxq irq\n");
+ goto init_qp_irq_err;
+ }
+
+ err = hinic3_configure(nic_dev);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to init txrxq irq\n");
+ goto configure_err;
+ }
+
+ return 0;
+
+configure_err:
+ hinic3_qps_irq_deinit(nic_dev);
+
+init_qp_irq_err:
+cfg_txrxqs_err:
+ hinic3_deinit_qps(nic_dev->hwdev, qp_params);
+
+ return err;
+}
+
+static void hinic3_close_channel(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params)
+{
+ hinic3_remove_configure(nic_dev);
+ hinic3_qps_irq_deinit(nic_dev);
+ hinic3_deinit_qps(nic_dev->hwdev, qp_params);
+}
+
+int hinic3_vport_up(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 link_status = 0;
+ u16 glb_func_id;
+ int err;
+
+ glb_func_id = hinic3_global_func_id(nic_dev->hwdev);
+ err = hinic3_set_vport_enable(nic_dev->hwdev, glb_func_id, true,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to enable vport\n");
+ goto vport_enable_err;
+ }
+
+ err = hinic3_maybe_set_port_state(nic_dev, true);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to enable port\n");
+ goto port_enable_err;
+ }
+
+ netif_set_real_num_tx_queues(netdev, nic_dev->q_params.num_qps);
+ netif_set_real_num_rx_queues(netdev, nic_dev->q_params.num_qps);
+ netif_tx_wake_all_queues(netdev);
+
+ if (test_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags)) {
+ link_status = true;
+ netif_carrier_on(netdev);
+ } else {
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_status);
+ if (!err && link_status)
+ netif_carrier_on(netdev);
+ }
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->moderation_task,
+ HINIC3_MODERATONE_DELAY);
+ if (test_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ queue_delayed_work(nic_dev->workq, &nic_dev->rxq_check_work, HZ);
+
+ hinic3_print_link_message(nic_dev, link_status);
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, link_status);
+
+ return 0;
+
+port_enable_err:
+ hinic3_set_vport_enable(nic_dev->hwdev, glb_func_id, false,
+ HINIC3_CHANNEL_NIC);
+
+vport_enable_err:
+ hinic3_flush_qps_res(nic_dev->hwdev);
+ /* Wait 100ms after disabling the vport so that no more packets are sent to the host */
+ msleep(100);
+
+ return err;
+}
+
+static int hinic3_flush_rq_and_check(struct hinic3_nic_dev *nic_dev,
+ u16 glb_func_id)
+{
+ struct hinic3_flush_rq *rq_flush_msg = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ int out_buf_len = sizeof(struct hinic3_flush_rq);
+ u16 rq_id;
+ u64 out_param = 0;
+ int ret;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev);
+ if (!cmd_buf) {
+ nic_err(&nic_dev->pdev->dev, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct hinic3_flush_rq);
+ rq_flush_msg = (struct hinic3_flush_rq *)cmd_buf->buf;
+ rq_flush_msg->dw.bs.func_id = glb_func_id;
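+ /* Check every rq via a direct cmdq command; the buffer is converted to
+ * big endian before each post and restored afterwards. A non-zero return
+ * or out_param means the flush check failed for that rq.
+ */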
+ for (rq_id = 0; rq_id < nic_dev->q_params.num_qps; rq_id++) {
+ rq_flush_msg->dw.bs.rq_id = rq_id;
+ hinic3_cpu_to_be32(rq_flush_msg, out_buf_len);
+ ret = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CHK_RQ_STOP,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (ret != 0 || out_param != 0) {
+ nic_err(&nic_dev->pdev->dev, "Failed to flush rq, ret:%d, func:%u, rq:%u\n",
+ ret, glb_func_id, rq_id);
+ goto err;
+ }
+ hinic3_be32_to_cpu(rq_flush_msg, out_buf_len);
+ }
+
+ nic_info(&nic_dev->pdev->dev, "Func:%u rq_num:%u flush rq success\n",
+ glb_func_id, nic_dev->q_params.num_qps);
+ hinic3_free_cmd_buf(nic_dev->hwdev, cmd_buf);
+ return 0;
+err:
+ hinic3_free_cmd_buf(nic_dev->hwdev, cmd_buf);
+ return -1;
+}
+
+void hinic3_vport_down(struct hinic3_nic_dev *nic_dev)
+{
+ u16 glb_func_id;
+ int is_in_kexec = vram_get_kexec_flag();
+
+ netif_carrier_off(nic_dev->netdev);
+ netif_tx_disable(nic_dev->netdev);
+
+ cancel_delayed_work_sync(&nic_dev->rxq_check_work);
+
+ cancel_delayed_work_sync(&nic_dev->moderation_task);
+
+ if (hinic3_get_chip_present_flag(nic_dev->hwdev)) {
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, 0);
+
+ if (is_in_kexec != 0)
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Skip changing mag status!\n");
+ else
+ hinic3_maybe_set_port_state(nic_dev, false);
+
+ glb_func_id = hinic3_global_func_id(nic_dev->hwdev);
+ hinic3_set_vport_enable(nic_dev->hwdev, glb_func_id, false,
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_flush_txqs(nic_dev->netdev);
+ if (is_in_kexec == 0) {
+ msleep(HINIC3_WAIT_FLUSH_QP_RESOURCE_TIMEOUT);
+ } else {
+ (void)hinic3_flush_rq_and_check(nic_dev, glb_func_id);
+ }
+ hinic3_flush_qps_res(nic_dev->hwdev);
+ }
+}
+
+int hinic3_change_channel_settings(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *trxq_params,
+ hinic3_reopen_handler reopen_handler,
+ const void *priv_data)
+{
+ struct hinic3_dyna_qp_params new_qp_params = {0};
+ struct hinic3_dyna_qp_params cur_qp_params = {0};
+ int err;
+ bool is_free_resources = false;
+
+ hinic3_config_num_qps(nic_dev, trxq_params);
+
+ err = hinic3_alloc_channel_resources(nic_dev, &new_qp_params,
+ trxq_params, false);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc channel resources\n");
+ return err;
+ }
+
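+ /* If the resources were not already torn down by a previously failed
+ * change (bit was clear), stop the port and release the current channel
+ * before switching to the new parameters.
+ */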
+ if (!test_and_set_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags)) {
+ hinic3_vport_down(nic_dev);
+ hinic3_close_channel(nic_dev, &cur_qp_params);
+ hinic3_free_channel_resources(nic_dev, &cur_qp_params,
+ &nic_dev->q_params);
+ is_free_resources = true;
+ }
+
+ if (nic_dev->num_qp_irq > trxq_params->num_qps)
+ hinic3_qp_irq_change(nic_dev, trxq_params->num_qps);
+
+ if (is_free_resources) {
+ err = hinic3_alloc_irq_vram(nic_dev, trxq_params, false);
+ if (err != 0) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Change chl alloc irq failed\n");
+ goto alloc_irq_err;
+ }
+ }
+ nic_dev->q_params = *trxq_params;
+
+ if (reopen_handler)
+ reopen_handler(nic_dev, priv_data);
+
+ err = hinic3_open_channel(nic_dev, &new_qp_params, trxq_params);
+ if (err)
+ goto open_channel_err;
+
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_err;
+
+ clear_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags);
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Change channel settings success\n");
+
+ return 0;
+
+vport_up_err:
+ hinic3_close_channel(nic_dev, &new_qp_params);
+alloc_irq_err:
+open_channel_err:
+ hinic3_free_channel_resources(nic_dev, &new_qp_params, trxq_params);
+
+ return err;
+}
+
+int hinic3_open(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_qp_params qp_params = {0};
+ int err;
+
+ if (test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_info(nic_dev, drv, netdev, "Netdev already open, do nothing\n");
+ return 0;
+ }
+
+ err = hinic3_init_nicio_res(nic_dev->hwdev);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to init nicio resources\n");
+ return err;
+ }
+
+ err = hinic3_setup_num_qps(nic_dev);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to setup num_qps\n");
+ goto setup_qps_err;
+ }
+
+ err = hinic3_alloc_channel_resources(nic_dev, &qp_params,
+ &nic_dev->q_params, true);
+ if (err)
+ goto alloc_channel_res_err;
+
+ err = hinic3_open_channel(nic_dev, &qp_params, &nic_dev->q_params);
+ if (err)
+ goto open_channel_err;
+
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_err;
+
+ err = hinic3_set_master_dev_state(nic_dev, true);
+ if (err)
+ goto set_master_dev_err;
+
+ set_bit(HINIC3_INTF_UP, &nic_dev->flags);
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Netdev is up\n");
+
+ return 0;
+
+set_master_dev_err:
+ hinic3_vport_down(nic_dev);
+
+vport_up_err:
+ hinic3_close_channel(nic_dev, &qp_params);
+
+open_channel_err:
+ hinic3_free_channel_resources(nic_dev, &qp_params, &nic_dev->q_params);
+
+alloc_channel_res_err:
+ hinic3_destroy_num_qps(nic_dev);
+
+setup_qps_err:
+ hinic3_deinit_nicio_res(nic_dev->hwdev);
+
+ return err;
+}
+
+static void hinic3_delete_napi(struct hinic3_nic_dev *nic_dev)
+{
+ u16 q_id;
+ int is_in_kexec = vram_get_kexec_flag();
+ struct hinic3_irq *irq_cfg = NULL;
+
+ if (is_in_kexec == 0 || nic_dev->q_params.irq_cfg == NULL)
+ return;
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ irq_cfg = &(nic_dev->q_params.irq_cfg[q_id]);
+ qp_del_napi(irq_cfg);
+ }
+
+ hinic3_free_irq_vram(nic_dev, &nic_dev->q_params);
+}
+
+int hinic3_close(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_qp_params qp_params = {0};
+
+ if (!test_and_clear_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ /* delete napi in os hotreplace rollback */
+ hinic3_delete_napi(nic_dev);
+ nicif_info(nic_dev, drv, netdev, "Netdev already close, do nothing\n");
+ return 0;
+ }
+
+ if (test_and_clear_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags))
+ goto out;
+
+ hinic3_set_master_dev_state(nic_dev, false);
+
+ hinic3_vport_down(nic_dev);
+ hinic3_close_channel(nic_dev, &qp_params);
+ hinic3_free_channel_resources(nic_dev, &qp_params, &nic_dev->q_params);
+
+out:
+ hinic3_deinit_nicio_res(nic_dev->hwdev);
+ hinic3_destroy_num_qps(nic_dev);
+
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Netdev is down\n");
+
+ return 0;
+}
+
+#define IPV6_ADDR_LEN 4
+#define PKT_INFO_LEN 9
+#define BITS_PER_TUPLE 32
+static u32 calc_xor_rss(u8 *rss_tunple, u32 len)
+{
+ u32 hash_value;
+ u32 i;
+
+ hash_value = rss_tunple[0];
+ for (i = 1; i < len; i++)
+ hash_value = hash_value ^ rss_tunple[i];
+
+ return hash_value;
+}
+
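+/* Toeplitz hash: for every set bit of each 32-bit tuple word, xor in the
+ * 32-bit key window starting at that bit offset, built from rss_key[i - 1]
+ * and the top bits of rss_key[i].
+ */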
+static u32 calc_toep_rss(const u32 *rss_tunple, u32 len, const u32 *rss_key)
+{
+ u32 rss = 0;
+ u32 i, j;
+
+ for (i = 1; i <= len; i++) {
+ for (j = 0; j < BITS_PER_TUPLE; j++)
+ if (rss_tunple[i - 1] & ((u32)1 <<
+ (u32)((BITS_PER_TUPLE - 1) - j)))
+ rss ^= (rss_key[i - 1] << j) |
+ (u32)((u64)rss_key[i] >>
+ (BITS_PER_TUPLE - j));
+ }
+
+ return rss;
+}
+
+#define RSS_VAL(val, type) \
+ (((type) == HINIC3_RSS_HASH_ENGINE_TYPE_TOEP) ? ntohl(val) : (val))
+
+static u8 parse_ipv6_info(struct sk_buff *skb, u32 *rss_tunple,
+ u8 hash_engine, u32 *len)
+{
+ struct ipv6hdr *ipv6hdr = ipv6_hdr(skb);
+ u32 *saddr = (u32 *)&ipv6hdr->saddr;
+ u32 *daddr = (u32 *)&ipv6hdr->daddr;
+ u8 i;
+
+ for (i = 0; i < IPV6_ADDR_LEN; i++) {
+ rss_tunple[i] = RSS_VAL(daddr[i], hash_engine);
+ /* saddr words are stored IPV6_ADDR_LEN entries after the daddr words */
+ rss_tunple[(u32)(i + IPV6_ADDR_LEN)] =
+ RSS_VAL(saddr[i], hash_engine);
+ }
+ *len = IPV6_ADDR_LEN + IPV6_ADDR_LEN;
+
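+ /* Return the L4 protocol only when no IPv6 extension headers follow the
+ * basic header; otherwise return 0 so only the addresses are hashed.
+ */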
+ if (skb_network_header(skb) + sizeof(*ipv6hdr) ==
+ skb_transport_header(skb))
+ return ipv6hdr->nexthdr;
+ return 0;
+}
+
+static u16 select_queue_by_hash_func(struct net_device *dev, struct sk_buff *skb,
+ unsigned int num_tx_queues)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(dev);
+ struct nic_rss_type rss_type = nic_dev->rss_type;
+ struct iphdr *iphdr = NULL;
+ u32 rss_tunple[PKT_INFO_LEN] = {0};
+ u32 len = 0;
+ u32 hash = 0;
+ u8 hash_engine = nic_dev->rss_hash_engine;
+ u8 l4_proto;
+ unsigned char *l4_hdr = NULL;
+
+ if (skb_rx_queue_recorded(skb)) {
+ hash = skb_get_rx_queue(skb);
+ if (unlikely(hash >= num_tx_queues))
+ hash %= num_tx_queues;
+
+ return (u16)hash;
+ }
+
+ iphdr = ip_hdr(skb);
+ if (iphdr->version == IPV4_VERSION) {
+ rss_tunple[len++] = RSS_VAL(iphdr->daddr, hash_engine);
+ rss_tunple[len++] = RSS_VAL(iphdr->saddr, hash_engine);
+ l4_proto = iphdr->protocol;
+ } else if (iphdr->version == IPV6_VERSION) {
+ l4_proto = parse_ipv6_info(skb, (u32 *)rss_tunple,
+ hash_engine, &len);
+ } else {
+ return (u16)hash;
+ }
+
+ if ((iphdr->version == IPV4_VERSION &&
+ ((l4_proto == IPPROTO_UDP && rss_type.udp_ipv4) ||
+ (l4_proto == IPPROTO_TCP && rss_type.tcp_ipv4))) ||
+ (iphdr->version == IPV6_VERSION &&
+ ((l4_proto == IPPROTO_UDP && rss_type.udp_ipv6) ||
+ (l4_proto == IPPROTO_TCP && rss_type.tcp_ipv6)))) {
+ l4_hdr = skb_transport_header(skb);
+ /* High 16 bits are dport, low 16 bits are sport. */
+ rss_tunple[len++] = ((u32)ntohs(*((u16 *)l4_hdr + 1U)) << 16) |
+ ntohs(*(u16 *)l4_hdr);
+ } /* rss_type.ipv4 and rss_type.ipv6 default on. */
+
+ if (hash_engine == HINIC3_RSS_HASH_ENGINE_TYPE_TOEP)
+ hash = calc_toep_rss((u32 *)rss_tunple, len,
+ nic_dev->rss_hkey_be);
+ else
+ hash = calc_xor_rss((u8 *)rss_tunple, len * (u32)sizeof(u32));
+
+ return (u16)nic_dev->rss_indir[hash & 0xFF];
+}
+
+#define GET_DSCP_PRI_OFFSET 2
+static u8 hinic3_get_dscp_up(struct hinic3_nic_dev *nic_dev, struct sk_buff *skb)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ int dscp_cp;
+
+ if (skb->protocol == htons(ETH_P_IP))
+ dscp_cp = ipv4_get_dsfield(ip_hdr(skb)) >> GET_DSCP_PRI_OFFSET;
+ else if (skb->protocol == htons(ETH_P_IPV6))
+ dscp_cp = ipv6_get_dsfield(ipv6_hdr(skb)) >> GET_DSCP_PRI_OFFSET;
+ else
+ return dcb->hw_dcb_cfg.default_cos;
+ return dcb->hw_dcb_cfg.dscp2cos[dscp_cp];
+}
+
+#if defined(HAVE_NDO_SELECT_QUEUE_SB_DEV_ONLY)
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ struct net_device *sb_dev)
+#elif defined(HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK)
+#if defined(HAVE_NDO_SELECT_QUEUE_SB_DEV)
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ struct net_device *sb_dev,
+ select_queue_fallback_t fallback)
+#else
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ __always_unused void *accel,
+ select_queue_fallback_t fallback)
+#endif
+
+#elif defined(HAVE_NDO_SELECT_QUEUE_ACCEL)
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ __always_unused void *accel)
+
+#else
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb)
+#endif /* end of HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK */
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ u16 txq;
+ u8 cos, qp_num;
+
+ if (test_bit(HINIC3_SAME_RXTX, &nic_dev->flags))
+ return select_queue_by_hash_func(netdev, skb, netdev->real_num_tx_queues);
+
+ txq =
+#if defined(HAVE_NDO_SELECT_QUEUE_SB_DEV_ONLY)
+ netdev_pick_tx(netdev, skb, NULL);
+#elif defined(HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK)
+#ifdef HAVE_NDO_SELECT_QUEUE_SB_DEV
+ fallback(netdev, skb, sb_dev);
+#else
+ fallback(netdev, skb);
+#endif
+#else
+ skb_tx_hash(netdev, skb);
+#endif
+
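+ /* With DCB enabled, remap the picked queue into the qp range of the cos
+ * derived from the packet's PCP or DSCP field.
+ */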
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ if (dcb->hw_dcb_cfg.trust == HINIC3_DCB_PCP) {
+ if (skb->vlan_tci)
+ cos = dcb->hw_dcb_cfg.pcp2cos[skb->vlan_tci >>
+ VLAN_PRIO_SHIFT];
+ else
+ cos = dcb->hw_dcb_cfg.default_cos;
+ } else {
+ cos = hinic3_get_dscp_up(nic_dev, skb);
+ }
+
+ qp_num = dcb->hw_dcb_cfg.cos_qp_num[cos] ?
+ txq % dcb->hw_dcb_cfg.cos_qp_num[cos] : 0;
+ txq = dcb->hw_dcb_cfg.cos_qp_offset[cos] + qp_num;
+ }
+
+ return txq;
+}
+
+#ifdef HAVE_NDO_GET_STATS64
+#ifdef HAVE_VOID_NDO_GET_STATS64
+static void hinic3_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+#else
+static struct rtnl_link_stats64
+ *hinic3_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+#endif
+
+#else /* !HAVE_NDO_GET_STATS64 */
+static struct net_device_stats *hinic3_get_stats(struct net_device *netdev)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+#ifndef HAVE_NDO_GET_STATS64
+#ifdef HAVE_NETDEV_STATS_IN_NETDEV
+ struct net_device_stats *stats = &netdev->stats;
+#else
+ struct net_device_stats *stats = &nic_dev->net_stats;
+#endif /* HAVE_NETDEV_STATS_IN_NETDEV */
+#endif /* HAVE_NDO_GET_STATS64 */
+ struct hinic3_txq_stats *txq_stats = NULL;
+ struct hinic3_rxq_stats *rxq_stats = NULL;
+ struct hinic3_txq *txq = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ u64 bytes, packets, dropped, errors;
+ unsigned int start;
+ int i;
+
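+ /* Accumulate per-queue counters under u64_stats retry loops so that the
+ * 64-bit values are read consistently.
+ */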
+ bytes = 0;
+ packets = 0;
+ dropped = 0;
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ if (!nic_dev->txqs)
+ break;
+
+ txq = &nic_dev->txqs[i];
+ txq_stats = &txq->txq_stats;
+ do {
+ start = u64_stats_fetch_begin(&txq_stats->syncp);
+ bytes += txq_stats->bytes;
+ packets += txq_stats->packets;
+ dropped += txq_stats->dropped;
+ } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+ }
+ stats->tx_packets = packets;
+ stats->tx_bytes = bytes;
+ stats->tx_dropped = dropped;
+
+ bytes = 0;
+ packets = 0;
+ errors = 0;
+ dropped = 0;
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ if (!nic_dev->rxqs)
+ break;
+
+ rxq = &nic_dev->rxqs[i];
+ rxq_stats = &rxq->rxq_stats;
+ do {
+ start = u64_stats_fetch_begin(&rxq_stats->syncp);
+ bytes += rxq_stats->bytes;
+ packets += rxq_stats->packets;
+ errors += rxq_stats->csum_errors +
+ rxq_stats->other_errors;
+ dropped += rxq_stats->dropped;
+ } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+ }
+ stats->rx_packets = packets;
+ stats->rx_bytes = bytes;
+ stats->rx_errors = errors;
+ stats->rx_dropped = dropped + nic_dev->vport_stats.rx_discard_vport;
+
+#ifndef HAVE_VOID_NDO_GET_STATS64
+ return stats;
+#endif
+}
+
+#ifdef HAVE_NDO_TX_TIMEOUT_TXQ
+static void hinic3_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+#else
+static void hinic3_tx_timeout(struct net_device *netdev)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_io_queue *sq = NULL;
+ bool hw_err = false;
+ u32 sw_pi, hw_ci;
+ u16 q_id;
+
+ HINIC3_NIC_STATS_INC(nic_dev, netdev_tx_timeout);
+ nicif_err(nic_dev, drv, netdev, "Tx timeout\n");
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ if (!netif_xmit_stopped(netdev_get_tx_queue(netdev, q_id)))
+ continue;
+
+ sq = nic_dev->txqs[q_id].sq;
+ sw_pi = hinic3_get_sq_local_pi(sq);
+ hw_ci = hinic3_get_sq_hw_ci(sq);
+ nicif_info(nic_dev, drv, netdev,
+ "txq%u: sw_pi: %hu, hw_ci: %u, sw_ci: %u, napi->state: 0x%lx.\n",
+ q_id, sw_pi, hw_ci, hinic3_get_sq_local_ci(sq),
+ nic_dev->q_params.irq_cfg[q_id].napi.state);
+
+ if (sw_pi != hw_ci)
+ hw_err = true;
+ }
+
+ if (hw_err)
+ set_bit(EVENT_WORK_TX_TIMEOUT, &nic_dev->event_flag);
+}
+
+static int hinic3_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u32 mtu = (u32)new_mtu;
+ int err = 0;
+ int is_in_kexec = vram_get_kexec_flag();
+#ifdef HAVE_XDP_SUPPORT
+ u32 xdp_max_mtu;
+#endif
+
+ if (is_in_kexec != 0) {
+ nicif_info(nic_dev, drv, netdev, "Hotreplace skip change mtu\n");
+ return err;
+ }
+
+#ifdef HAVE_XDP_SUPPORT
+ if (hinic3_is_xdp_enable(nic_dev)) {
+ xdp_max_mtu = hinic3_xdp_max_mtu(nic_dev);
+ if (mtu > xdp_max_mtu) {
+ nicif_err(nic_dev, drv, netdev,
+ "Max MTU for xdp usage is %d\n", xdp_max_mtu);
+ return -EINVAL;
+ }
+ }
+#endif
+
+ err = hinic3_config_port_mtu(nic_dev, mtu);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to change port mtu to %d\n",
+ new_mtu);
+ } else {
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Change mtu from %u to %d\n",
+ netdev->mtu, new_mtu);
+ netdev->mtu = mtu;
+ nic_dev->nic_vram->vram_mtu = mtu;
+ }
+
+ return err;
+}
+
+static int hinic3_set_mac_addr(struct net_device *netdev, void *addr)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct sockaddr *saddr = addr;
+ int err;
+
+ if (!is_valid_ether_addr(saddr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ if (ether_addr_equal(netdev->dev_addr, saddr->sa_data)) {
+ nicif_info(nic_dev, drv, netdev,
+ "Already using mac address %pM\n",
+ saddr->sa_data);
+ return 0;
+ }
+
+ err = hinic3_config_port_mac(nic_dev, saddr);
+ if (err)
+ return err;
+
+ ether_addr_copy(netdev->dev_addr, saddr->sa_data);
+
+ nicif_info(nic_dev, drv, netdev, "Set new mac address %pM\n",
+ saddr->sa_data);
+
+ return 0;
+}
+
+#if defined(HAVE_NDO_UDP_TUNNEL_ADD) || defined(HAVE_UDP_TUNNEL_NIC_INFO)
+static int hinic3_udp_tunnel_port_config(struct net_device *netdev,
+ struct udp_tunnel_info *ti,
+ u8 action)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 func_id = hinic3_global_func_id(nic_dev->hwdev);
+ u16 dst_port;
+ int ret = 0;
+
+ switch (ti->type) {
+ case UDP_TUNNEL_TYPE_VXLAN:
+ dst_port = ntohs(ti->port);
+ ret = hinic3_vlxan_port_config(nic_dev->hwdev, func_id,
+ dst_port, action);
+ if (ret != 0) {
+ nicif_warn(nic_dev, drv, netdev,
+ "Failed to set vxlan port %u to device(%d)\n",
+ dst_port, ret);
+ break;
+ }
+ nicif_info(nic_dev, link, netdev, "Vxlan dst port set to %u\n",
+ action == HINIC3_CMD_OP_ADD ?
+ dst_port : ntohs(VXLAN_OFFLOAD_PORT_LE));
+ break;
+ default:
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to add port, only vxlan dst port is supported\n");
+ ret = -EINVAL;
+ }
+ return ret;
+}
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD || HAVE_UDP_TUNNEL_NIC_INFO */
+#ifdef HAVE_NDO_UDP_TUNNEL_ADD
+static void hinic3_udp_tunnel_add(struct net_device *netdev, struct udp_tunnel_info *ti)
+{
+ if (ti->sa_family != AF_INET && ti->sa_family != AF_INET6)
+ return;
+
+ hinic3_udp_tunnel_port_config(netdev, ti, HINIC3_CMD_OP_ADD);
+}
+
+static void hinic3_udp_tunnel_del(struct net_device *netdev, struct udp_tunnel_info *ti)
+{
+ if (ti->sa_family != AF_INET && ti->sa_family != AF_INET6)
+ return;
+
+ hinic3_udp_tunnel_port_config(netdev, ti, HINIC3_CMD_OP_DEL);
+}
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD */
+
+#ifdef HAVE_UDP_TUNNEL_NIC_INFO
+int hinic3_udp_tunnel_set_port(struct net_device *netdev, __attribute__((unused)) unsigned int table,
+ __attribute__((unused)) unsigned int entry, struct udp_tunnel_info *ti)
+{
+ return hinic3_udp_tunnel_port_config(netdev, ti, HINIC3_CMD_OP_ADD);
+}
+
+int hinic3_udp_tunnel_unset_port(struct net_device *netdev, __attribute__((unused)) unsigned int table,
+ __attribute__((unused)) unsigned int entry, struct udp_tunnel_info *ti)
+{
+ return hinic3_udp_tunnel_port_config(netdev, ti, HINIC3_CMD_OP_DEL);
+}
+#endif /* HAVE_UDP_TUNNEL_NIC_INFO */
+
+static int
+hinic3_vlan_rx_add_vid(struct net_device *netdev,
+ __always_unused __be16 proto,
+ u16 vid)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ unsigned long *vlan_bitmap = nic_dev->vlan_bitmap;
+ u16 func_id;
+ u32 col, line;
+ int err = 0;
+
+ /* VLAN 0 is not added; this behaves the same as if VLAN 0 had been deleted. */
+ if (vid == 0)
+ goto end;
+
+ col = VID_COL(nic_dev, vid);
+ line = VID_LINE(nic_dev, vid);
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+ err = hinic3_add_vlan(nic_dev->hwdev, vid, func_id);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to add vlan %u\n", vid);
+ goto end;
+ }
+
+ set_bit(col, &vlan_bitmap[line]);
+
+ nicif_info(nic_dev, drv, netdev, "Add vlan %u\n", vid);
+
+end:
+ return err;
+}
+
+static int
+hinic3_vlan_rx_kill_vid(struct net_device *netdev,
+ __always_unused __be16 proto,
+ u16 vid)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ unsigned long *vlan_bitmap = nic_dev->vlan_bitmap;
+ u16 func_id;
+ int col, line;
+ int err = 0;
+
+ col = VID_COL(nic_dev, vid);
+ line = (int)VID_LINE(nic_dev, vid);
+
+ /* In the broadcast scenario, the ucode looks up the corresponding
+ * function via VLAN 0 in the vlan table. Deleting VLAN 0 would
+ * therefore break VLAN functionality.
+ */
+ if (vid == 0)
+ goto end;
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ err = hinic3_del_vlan(nic_dev->hwdev, vid, func_id);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to delete vlan\n");
+ goto end;
+ }
+
+ clear_bit(col, &vlan_bitmap[line]);
+
+ nicif_info(nic_dev, drv, netdev, "Remove vlan %u\n", vid);
+
+end:
+ return err;
+}
+
+#ifdef NEED_VLAN_RESTORE
+static int hinic3_vlan_restore(struct net_device *netdev)
+{
+ int err = 0;
+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
+ struct net_device *vlandev = NULL;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ unsigned long *vlan_bitmap = nic_dev->vlan_bitmap;
+ u32 col, line;
+ u16 i;
+
+ if (!netdev->netdev_ops->ndo_vlan_rx_add_vid)
+ return -EFAULT;
+ rcu_read_lock();
+ for (i = 0; i < VLAN_N_VID; i++) {
+#ifdef HAVE_VLAN_FIND_DEV_DEEP_RCU
+ vlandev =
+ __vlan_find_dev_deep_rcu(netdev, htons(ETH_P_8021Q), i);
+#else
+ vlandev = __vlan_find_dev_deep(netdev, htons(ETH_P_8021Q), i);
+#endif
+ col = VID_COL(nic_dev, i);
+ line = VID_LINE(nic_dev, i);
+ if (!vlandev && (vlan_bitmap[line] & (1UL << col)) != 0) {
+ err = netdev->netdev_ops->ndo_vlan_rx_kill_vid(netdev,
+ htons(ETH_P_8021Q), i);
+ if (err) {
+ hinic3_err(nic_dev, drv, "delete vlan %u failed, err code %d\n",
+ i, err);
+ break;
+ }
+ } else if (vlandev && (vlan_bitmap[line] & (1UL << col)) == 0) {
+ err = netdev->netdev_ops->ndo_vlan_rx_add_vid(netdev,
+ htons(ETH_P_8021Q), i);
+ if (err) {
+ hinic3_err(nic_dev, drv, "restore vlan %u failed, err code %d\n",
+ i, err);
+ break;
+ }
+ }
+ }
+ rcu_read_unlock();
+#endif
+
+ return err;
+}
+#endif
+
+#define SET_FEATURES_OP_STR(op) ((op) ? "Enable" : "Disable")
+
+static int set_feature_rx_csum(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+
+ if (changed & NETIF_F_RXCSUM)
+ hinic3_info(nic_dev, drv, "%s rx csum success\n",
+ SET_FEATURES_OP_STR(wanted_features &
+ NETIF_F_RXCSUM));
+
+ return 0;
+}
+
+static int set_feature_tso(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+
+ if (changed & NETIF_F_TSO)
+ hinic3_info(nic_dev, drv, "%s tso success\n",
+ SET_FEATURES_OP_STR(wanted_features & NETIF_F_TSO));
+
+ return 0;
+}
+
+#ifdef NETIF_F_UFO
+static int set_feature_ufo(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+
+ if (changed & NETIF_F_UFO)
+ hinic3_info(nic_dev, drv, "%s ufo success\n",
+ SET_FEATURES_OP_STR(wanted_features & NETIF_F_UFO));
+
+ return 0;
+}
+#endif
+
+static int set_feature_lro(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+ bool en = !!(wanted_features & NETIF_F_LRO);
+ int err;
+
+ if (!(changed & NETIF_F_LRO))
+ return 0;
+
+#ifdef HAVE_XDP_SUPPORT
+ if (en && hinic3_is_xdp_enable(nic_dev)) {
+ hinic3_err(nic_dev, drv, "Can not enable LRO when xdp is enable\n");
+ *failed_features |= NETIF_F_LRO;
+ return -EINVAL;
+ }
+#endif
+
+ err = hinic3_set_rx_lro_state(nic_dev->hwdev, en,
+ HINIC3_LRO_DEFAULT_TIME_LIMIT,
+ HINIC3_LRO_DEFAULT_COAL_PKT_SIZE);
+ if (err) {
+ hinic3_err(nic_dev, drv, "%s lro failed\n",
+ SET_FEATURES_OP_STR(en));
+ *failed_features |= NETIF_F_LRO;
+ } else {
+ hinic3_info(nic_dev, drv, "%s lro success\n",
+ SET_FEATURES_OP_STR(en));
+ }
+
+ return err;
+}
+
+static int set_feature_rx_cvlan(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+#ifdef NETIF_F_HW_VLAN_CTAG_RX
+ netdev_features_t vlan_feature = NETIF_F_HW_VLAN_CTAG_RX;
+#else
+ netdev_features_t vlan_feature = NETIF_F_HW_VLAN_RX;
+#endif
+ bool en = !!(wanted_features & vlan_feature);
+ int err;
+
+ if (!(changed & vlan_feature))
+ return 0;
+
+ err = hinic3_set_rx_vlan_offload(nic_dev->hwdev, en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "%s rxvlan failed\n",
+ SET_FEATURES_OP_STR(en));
+ *failed_features |= vlan_feature;
+ } else {
+ hinic3_info(nic_dev, drv, "%s rxvlan success\n",
+ SET_FEATURES_OP_STR(en));
+ }
+
+ return err;
+}
+
+static int set_feature_vlan_filter(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+#if defined(NETIF_F_HW_VLAN_CTAG_FILTER)
+ netdev_features_t vlan_filter_feature = NETIF_F_HW_VLAN_CTAG_FILTER;
+#elif defined(NETIF_F_HW_VLAN_FILTER)
+ netdev_features_t vlan_filter_feature = NETIF_F_HW_VLAN_FILTER;
+#endif
+ bool en = !!(wanted_features & vlan_filter_feature);
+ int err = 0;
+
+ if (!(changed & vlan_filter_feature))
+ return 0;
+
+#ifdef NEED_VLAN_RESTORE
+ if (en) {
+ err = hinic3_vlan_restore(nic_dev->netdev);
+ if (err) {
+ hinic3_err(nic_dev, drv, "vlan restore failed\n");
+ *failed_features |= vlan_filter_feature;
+ return err;
+ }
+ }
+#endif
+
+ err = hinic3_set_vlan_fliter(nic_dev->hwdev, en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "%s rx vlan filter failed\n",
+ SET_FEATURES_OP_STR(en));
+ *failed_features |= vlan_filter_feature;
+ } else {
+ hinic3_info(nic_dev, drv, "%s rx vlan filter success\n",
+ SET_FEATURES_OP_STR(en));
+ }
+
+ return err;
+}
+
+static int set_features(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t pre_features,
+ netdev_features_t features)
+{
+ netdev_features_t failed_features = 0;
+ u32 err = 0;
+
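+ /* Each helper records the bits it failed to apply in failed_features; on
+ * any error, netdev->features is set to the requested set with the failed
+ * bits reverted.
+ */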
+ err |= (u32)set_feature_rx_csum(nic_dev, features, pre_features,
+ &failed_features);
+ err |= (u32)set_feature_tso(nic_dev, features, pre_features,
+ &failed_features);
+ err |= (u32)set_feature_lro(nic_dev, features, pre_features,
+ &failed_features);
+#ifdef NETIF_F_UFO
+ err |= (u32)set_feature_ufo(nic_dev, features, pre_features,
+ &failed_features);
+#endif
+ err |= (u32)set_feature_rx_cvlan(nic_dev, features, pre_features,
+ &failed_features);
+ err |= (u32)set_feature_vlan_filter(nic_dev, features, pre_features,
+ &failed_features);
+ if (err) {
+ nic_dev->netdev->features = features ^ failed_features;
+ return -EIO;
+ }
+
+ return 0;
+}
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+static int hinic3_set_features(struct net_device *netdev, u32 features)
+#else
+static int hinic3_set_features(struct net_device *netdev,
+ netdev_features_t features)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ return set_features(nic_dev, nic_dev->netdev->features,
+ features);
+}
+
+int hinic3_set_hw_features(struct hinic3_nic_dev *nic_dev)
+{
+ /* enable all hw features in netdev->features */
+ return set_features(nic_dev, ~nic_dev->netdev->features,
+ nic_dev->netdev->features);
+}
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+static u32 hinic3_fix_features(struct net_device *netdev, u32 features)
+#else
+static netdev_features_t hinic3_fix_features(struct net_device *netdev,
+ netdev_features_t features)
+#endif
+{
+ netdev_features_t features_tmp = features;
+
+ /* If Rx checksum is disabled, then LRO should also be disabled */
+ if (!(features_tmp & NETIF_F_RXCSUM))
+ features_tmp &= ~NETIF_F_LRO;
+
+ return features_tmp;
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void hinic3_netpoll(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 i;
+
+ for (i = 0; i < nic_dev->q_params.num_qps; i++)
+ napi_schedule(&nic_dev->q_params.irq_cfg[i].napi);
+}
+#endif /* CONFIG_NET_POLL_CONTROLLER */
+
+static int hinic3_ndo_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err;
+
+ if (is_multicast_ether_addr(mac) ||
+ vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ err = hinic3_set_vf_mac(adapter->hwdev, OS_VF_ID_TO_HW(vf), mac);
+ if (err)
+ return err;
+
+ if (!is_zero_ether_addr(mac))
+ nic_info(&adapter->pdev->dev, "Setting MAC %pM on VF %d\n",
+ mac, vf);
+ else
+ nic_info(&adapter->pdev->dev, "Deleting MAC on VF %d\n", vf);
+
+ nic_info(&adapter->pdev->dev, "Please reload the VF driver to make this change effective.");
+
+ return 0;
+}
+
+#ifdef IFLA_VF_MAX
+static int set_hw_vf_vlan(void *hwdev, u16 cur_vlanprio, int vf,
+ u16 vlan, u8 qos)
+{
+ int err = 0;
+ u16 old_vlan = cur_vlanprio & VLAN_VID_MASK;
+
+ if (vlan || qos) {
+ if (cur_vlanprio) {
+ err = hinic3_kill_vf_vlan(hwdev, OS_VF_ID_TO_HW(vf));
+ if (err)
+ return err;
+ }
+ err = hinic3_add_vf_vlan(hwdev, OS_VF_ID_TO_HW(vf), vlan, qos);
+ } else {
+ err = hinic3_kill_vf_vlan(hwdev, OS_VF_ID_TO_HW(vf));
+ }
+
+ err = hinic3_update_mac_vlan(hwdev, old_vlan, vlan, OS_VF_ID_TO_HW(vf));
+ return err;
+}
+
+#define HINIC3_MAX_VLAN_ID 4094
+#define HINIC3_MAX_QOS_NUM 7
+
+#ifdef IFLA_VF_VLAN_INFO_MAX
+static int hinic3_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan,
+ u8 qos, __be16 vlan_proto)
+#else
+static int hinic3_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan,
+ u8 qos)
+#endif
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ u16 vlanprio, cur_vlanprio;
+
+ if (vf >= pci_num_vf(adapter->pdev) ||
+ vlan > HINIC3_MAX_VLAN_ID || qos > HINIC3_MAX_QOS_NUM)
+ return -EINVAL;
+#ifdef IFLA_VF_VLAN_INFO_MAX
+ if (vlan_proto != htons(ETH_P_8021Q))
+ return -EPROTONOSUPPORT;
+#endif
+ vlanprio = vlan | (qos << HINIC3_VLAN_PRIORITY_SHIFT);
+ cur_vlanprio = hinic3_vf_info_vlanprio(adapter->hwdev,
+ OS_VF_ID_TO_HW(vf));
+ /* duplicate request, so just return success */
+ if (vlanprio == cur_vlanprio)
+ return 0;
+
+ return set_hw_vf_vlan(adapter->hwdev, cur_vlanprio, vf, vlan, qos);
+}
+#endif
+
+#ifdef HAVE_VF_SPOOFCHK_CONFIGURE
+static int hinic3_ndo_set_vf_spoofchk(struct net_device *netdev, int vf,
+ bool setting)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err = 0;
+ bool cur_spoofchk = false;
+
+ if (vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ cur_spoofchk = hinic3_vf_info_spoofchk(adapter->hwdev,
+ OS_VF_ID_TO_HW(vf));
+ /* same request, so just return success */
+ if ((setting && cur_spoofchk) || (!setting && !cur_spoofchk))
+ return 0;
+
+ err = hinic3_set_vf_spoofchk(adapter->hwdev,
+ (u16)OS_VF_ID_TO_HW(vf), setting);
+ if (!err)
+ nicif_info(adapter, drv, netdev, "Set VF %d spoofchk %s\n",
+ vf, setting ? "on" : "off");
+
+ return err;
+}
+#endif
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+static int hinic3_ndo_set_vf_trust(struct net_device *netdev, int vf, bool setting)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err;
+ bool cur_trust;
+
+ if (vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ cur_trust = hinic3_get_vf_trust(adapter->hwdev,
+ OS_VF_ID_TO_HW(vf));
+ /* same request, so just return success */
+ if ((setting && cur_trust) || (!setting && !cur_trust))
+ return 0;
+
+ err = hinic3_set_vf_trust(adapter->hwdev,
+ (u16)OS_VF_ID_TO_HW(vf), setting);
+ if (!err)
+ nicif_info(adapter, drv, netdev, "Set VF %d trusted %s successfully\n",
+ vf, setting ? "on" : "off");
+ else
+ nicif_err(adapter, drv, netdev, "Failed set VF %d trusted %s\n",
+ vf, setting ? "on" : "off");
+
+ return err;
+}
+#endif
+
+static int hinic3_ndo_get_vf_config(struct net_device *netdev,
+ int vf, struct ifla_vf_info *ivi)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+
+ if (vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ hinic3_get_vf_config(adapter->hwdev, (u16)OS_VF_ID_TO_HW(vf), ivi);
+
+ return 0;
+}
+
+/**
+ * hinic3_ndo_set_vf_link_state - set the link state of a specified VF
+ * @netdev: network interface device structure
+ * @vf_id: VF identifier
+ * @link: required link state
+ *
+ * Set the link state of a specified VF, regardless of the physical link state.
+ **/
+int hinic3_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link)
+{
+ static const char * const vf_link[] = {"auto", "enable", "disable"};
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err;
+
+ /* validate the request */
+ if (vf_id >= pci_num_vf(adapter->pdev)) {
+ nicif_err(adapter, drv, netdev,
+ "Invalid VF Identifier %d\n", vf_id);
+ return -EINVAL;
+ }
+
+ err = hinic3_set_vf_link_state(adapter->hwdev,
+ (u16)OS_VF_ID_TO_HW(vf_id), link);
+ if (!err)
+ nicif_info(adapter, drv, netdev, "Set VF %d link state: %s\n",
+ vf_id, vf_link[link]);
+
+ return err;
+}
+
+static int is_set_vf_bw_param_valid(const struct hinic3_nic_dev *adapter,
+ int vf, int min_tx_rate, int max_tx_rate)
+{
+ if (!HINIC3_SUPPORT_RATE_LIMIT(adapter->hwdev)) {
+ nicif_err(adapter, drv, adapter->netdev, "Current function doesn't support to set vf rate limit\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* verify VF is active */
+ if (vf >= pci_num_vf(adapter->pdev)) {
+ nicif_err(adapter, drv, adapter->netdev, "VF number must be less than %d\n",
+ pci_num_vf(adapter->pdev));
+ return -EINVAL;
+ }
+
+ if (max_tx_rate < min_tx_rate) {
+ nicif_err(adapter, drv, adapter->netdev, "Invalid rate, max rate %d must greater than min rate %d\n",
+ max_tx_rate, min_tx_rate);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define HINIC3_TX_RATE_TABLE_FULL 12
+
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+static int hinic3_ndo_set_vf_bw(struct net_device *netdev,
+ int vf, int min_tx_rate, int max_tx_rate)
+#else
+static int hinic3_ndo_set_vf_bw(struct net_device *netdev, int vf,
+ int max_tx_rate)
+#endif /* HAVE_NDO_SET_VF_MIN_MAX_TX_RATE */
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ struct nic_port_info port_info = {0};
+#ifndef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ int min_tx_rate = 0;
+#endif
+ u8 link_status = 0;
+ u32 speeds[] = {0, SPEED_10, SPEED_100, SPEED_1000, SPEED_10000,
+ SPEED_25000, SPEED_40000, SPEED_50000, SPEED_100000,
+ SPEED_200000};
+ int err = 0;
+
+ err = is_set_vf_bw_param_valid(adapter, vf, min_tx_rate, max_tx_rate);
+ if (err)
+ return err;
+
+ err = hinic3_get_link_state(adapter->hwdev, &link_status);
+ if (err) {
+ nicif_err(adapter, drv, netdev,
+ "Get link status failed when set vf tx rate\n");
+ return -EIO;
+ }
+
+ if (!link_status) {
+ nicif_err(adapter, drv, netdev,
+ "Link status must be up when set vf tx rate\n");
+ return -EINVAL;
+ }
+
+ err = hinic3_get_port_info(adapter->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err || port_info.speed >= PORT_SPEED_UNKNOWN)
+ return -EIO;
+
+ /* The rate limit cannot be negative or greater than the link speed */
+ if (max_tx_rate < 0 || max_tx_rate > (int)(speeds[port_info.speed])) {
+ nicif_err(adapter, drv, netdev, "Set vf max tx rate must be in [0 - %u]\n",
+ speeds[port_info.speed]);
+ return -EINVAL;
+ }
+
+ err = hinic3_set_vf_tx_rate(adapter->hwdev, (u16)OS_VF_ID_TO_HW(vf),
+ (u32)max_tx_rate, (u32)min_tx_rate);
+ if (err) {
+ nicif_err(adapter, drv, netdev,
+ "Unable to set VF %d max rate %d min rate %d%s\n",
+ vf, max_tx_rate, min_tx_rate,
+ err == HINIC3_TX_RATE_TABLE_FULL ?
+ ", tx rate profile is full" : "");
+ return -EIO;
+ }
+
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ nicif_info(adapter, drv, netdev,
+ "Set VF %d max tx rate %d min tx rate %d successfully\n",
+ vf, max_tx_rate, min_tx_rate);
+#else
+ nicif_info(adapter, drv, netdev,
+ "Set VF %d tx rate %d successfully\n",
+ vf, max_tx_rate);
+#endif
+
+ return 0;
+}
+
+#ifdef HAVE_XDP_SUPPORT
+bool hinic3_is_xdp_enable(struct hinic3_nic_dev *nic_dev)
+{
+ return !!nic_dev->xdp_prog;
+}
+
+int hinic3_xdp_max_mtu(struct hinic3_nic_dev *nic_dev)
+{
+ return nic_dev->rx_buff_len - (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN);
+}
+
+static int hinic3_xdp_setup(struct hinic3_nic_dev *nic_dev,
+ struct bpf_prog *prog,
+ struct netlink_ext_ack *extack)
+{
+ struct bpf_prog *old_prog = NULL;
+ int max_mtu = hinic3_xdp_max_mtu(nic_dev);
+ int q_id;
+
+ if (nic_dev->netdev->mtu > (u32)max_mtu) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to setup xdp program, the current MTU %d is larger than max allowed MTU %d\n",
+ nic_dev->netdev->mtu, max_mtu);
+ NL_SET_ERR_MSG_MOD(extack,
+ "MTU too large for loading xdp program");
+ return -EINVAL;
+ }
+
+ if (nic_dev->netdev->features & NETIF_F_LRO) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to setup xdp program while LRO is on\n");
+ NL_SET_ERR_MSG_MOD(extack,
+ "Failed to setup xdp program while LRO is on\n");
+ return -EINVAL;
+ }
+
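+ /* Swap in the new program and propagate it to every rx queue before
+ * dropping the reference on the old one.
+ */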
+ old_prog = xchg(&nic_dev->xdp_prog, prog);
+ for (q_id = 0; q_id < nic_dev->max_qps; q_id++)
+ xchg(&nic_dev->rxqs[q_id].xdp_prog, nic_dev->xdp_prog);
+
+ if (old_prog)
+ bpf_prog_put(old_prog);
+
+ return 0;
+}
+
+#ifdef HAVE_NDO_BPF_NETDEV_BPF
+static int hinic3_xdp(struct net_device *netdev, struct netdev_bpf *xdp)
+#else
+static int hinic3_xdp(struct net_device *netdev, struct netdev_xdp *xdp)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ switch (xdp->command) {
+ case XDP_SETUP_PROG:
+ return hinic3_xdp_setup(nic_dev, xdp->prog, xdp->extack);
+#ifdef HAVE_XDP_QUERY_PROG
+ case XDP_QUERY_PROG:
+ xdp->prog_id = nic_dev->xdp_prog ?
+ nic_dev->xdp_prog->aux->id : 0;
+ return 0;
+#endif
+ default:
+ return -EINVAL;
+ }
+}
+#endif
+
+static const struct net_device_ops hinic3_netdev_ops = {
+ .ndo_open = hinic3_open,
+ .ndo_stop = hinic3_close,
+ .ndo_start_xmit = hinic3_xmit_frame,
+
+#ifdef HAVE_NDO_GET_STATS64
+ .ndo_get_stats64 = hinic3_get_stats64,
+#else
+ .ndo_get_stats = hinic3_get_stats,
+#endif /* HAVE_NDO_GET_STATS64 */
+
+ .ndo_tx_timeout = hinic3_tx_timeout,
+ .ndo_select_queue = hinic3_select_queue,
+#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_CHANGE_MTU
+ .extended.ndo_change_mtu = hinic3_change_mtu,
+#else
+ .ndo_change_mtu = hinic3_change_mtu,
+#endif
+ .ndo_set_mac_address = hinic3_set_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+
+#if defined(NETIF_F_HW_VLAN_TX) || defined(NETIF_F_HW_VLAN_CTAG_TX)
+ .ndo_vlan_rx_add_vid = hinic3_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = hinic3_vlan_rx_kill_vid,
+#endif
+
+#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
+ /* RHEL7 requires this to be defined to enable extended ops. RHEL7
+ * uses the function get_ndo_ext to retrieve offsets for extended
+ * fields from within the net_device_ops struct and ndo_size is checked
+ * to determine whether or not the offset is valid.
+ */
+ .ndo_size = sizeof(const struct net_device_ops),
+#endif
+
+#ifdef IFLA_VF_MAX
+ .ndo_set_vf_mac = hinic3_ndo_set_vf_mac,
+#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SET_VF_VLAN
+ .extended.ndo_set_vf_vlan = hinic3_ndo_set_vf_vlan,
+#else
+ .ndo_set_vf_vlan = hinic3_ndo_set_vf_vlan,
+#endif
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ .ndo_set_vf_rate = hinic3_ndo_set_vf_bw,
+#else
+ .ndo_set_vf_tx_rate = hinic3_ndo_set_vf_bw,
+#endif /* HAVE_NDO_SET_VF_MIN_MAX_TX_RATE */
+#ifdef HAVE_VF_SPOOFCHK_CONFIGURE
+ .ndo_set_vf_spoofchk = hinic3_ndo_set_vf_spoofchk,
+#endif
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
+ .extended.ndo_set_vf_trust = hinic3_ndo_set_vf_trust,
+#else
+ .ndo_set_vf_trust = hinic3_ndo_set_vf_trust,
+#endif /* HAVE_RHEL7_NET_DEVICE_OPS_EXT */
+#endif /* HAVE_NDO_SET_VF_TRUST */
+
+ .ndo_get_vf_config = hinic3_ndo_get_vf_config,
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = hinic3_netpoll,
+#endif /* CONFIG_NET_POLL_CONTROLLER */
+
+ .ndo_set_rx_mode = hinic3_nic_set_rx_mode,
+
+#ifdef HAVE_XDP_SUPPORT
+#ifdef HAVE_NDO_BPF_NETDEV_BPF
+ .ndo_bpf = hinic3_xdp,
+#else
+ .ndo_xdp = hinic3_xdp,
+#endif
+#endif
+#ifdef HAVE_NDO_UDP_TUNNEL_ADD
+ .ndo_udp_tunnel_add = hinic3_udp_tunnel_add,
+ .ndo_udp_tunnel_del = hinic3_udp_tunnel_del,
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD */
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+};
+
+/* RHEL6 keeps these operations in a separate structure */
+static const struct net_device_ops_ext hinic3_netdev_ops_ext = {
+ .size = sizeof(struct net_device_ops_ext),
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+
+#ifdef HAVE_NDO_SET_VF_LINK_STATE
+ .ndo_set_vf_link_state = hinic3_ndo_set_vf_link_state,
+#endif
+
+#ifdef HAVE_NDO_SET_FEATURES
+ .ndo_fix_features = hinic3_fix_features,
+ .ndo_set_features = hinic3_set_features,
+#endif /* HAVE_NDO_SET_FEATURES */
+};
+
+static const struct net_device_ops hinic3vf_netdev_ops = {
+ .ndo_open = hinic3_open,
+ .ndo_stop = hinic3_close,
+ .ndo_start_xmit = hinic3_xmit_frame,
+
+#ifdef HAVE_NDO_GET_STATS64
+ .ndo_get_stats64 = hinic3_get_stats64,
+#else
+ .ndo_get_stats = hinic3_get_stats,
+#endif /* HAVE_NDO_GET_STATS64 */
+
+ .ndo_tx_timeout = hinic3_tx_timeout,
+ .ndo_select_queue = hinic3_select_queue,
+
+#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
+ /* RHEL7 requires this to be defined to enable extended ops. RHEL7
+ * uses the function get_ndo_ext to retrieve offsets for extended
+ * fields from within the net_device_ops struct and ndo_size is checked
+ * to determine whether or not the offset is valid.
+ */
+ .ndo_size = sizeof(const struct net_device_ops),
+#endif
+
+#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_CHANGE_MTU
+ .extended.ndo_change_mtu = hinic3_change_mtu,
+#else
+ .ndo_change_mtu = hinic3_change_mtu,
+#endif
+ .ndo_set_mac_address = hinic3_set_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+
+#if defined(NETIF_F_HW_VLAN_TX) || defined(NETIF_F_HW_VLAN_CTAG_TX)
+ .ndo_vlan_rx_add_vid = hinic3_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = hinic3_vlan_rx_kill_vid,
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = hinic3_netpoll,
+#endif /* CONFIG_NET_POLL_CONTROLLER */
+
+ .ndo_set_rx_mode = hinic3_nic_set_rx_mode,
+
+#ifdef HAVE_XDP_SUPPORT
+#ifdef HAVE_NDO_BPF_NETDEV_BPF
+ .ndo_bpf = hinic3_xdp,
+#else
+ .ndo_xdp = hinic3_xdp,
+#endif
+#endif
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+};
+
+/* RHEL6 keeps these operations in a separate structure */
+static const struct net_device_ops_ext hinic3vf_netdev_ops_ext = {
+ .size = sizeof(struct net_device_ops_ext),
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+
+#ifdef HAVE_NDO_SET_FEATURES
+ .ndo_fix_features = hinic3_fix_features,
+ .ndo_set_features = hinic3_set_features,
+#endif /* HAVE_NDO_SET_FEATURES */
+};
+
+void hinic3_set_netdev_ops(struct hinic3_nic_dev *nic_dev)
+{
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nic_dev->netdev->netdev_ops = &hinic3_netdev_ops;
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_ops_ext(nic_dev->netdev, &hinic3_netdev_ops_ext);
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+ } else {
+ nic_dev->netdev->netdev_ops = &hinic3vf_netdev_ops;
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_ops_ext(nic_dev->netdev, &hinic3vf_netdev_ops_ext);
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+ }
+}
+
+bool hinic3_is_netdev_ops_match(const struct net_device *netdev)
+{
+ return netdev->netdev_ops == &hinic3_netdev_ops ||
+ netdev->netdev_ops == &hinic3vf_netdev_ops;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic.h
new file mode 100644
index 0000000..1bc6a14
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic.h
@@ -0,0 +1,221 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_H
+#define HINIC3_NIC_H
+
+#include <linux/types.h>
+#include <linux/semaphore.h>
+
+#include "hinic3_common.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "mag_mpu_cmd.h"
+#include "mag_mpu_cmd_defs.h"
+
+/* ************************ array index define ********************* */
+#define ARRAY_INDEX_0 0
+#define ARRAY_INDEX_1 1
+#define ARRAY_INDEX_2 2
+#define ARRAY_INDEX_3 3
+#define ARRAY_INDEX_4 4
+#define ARRAY_INDEX_5 5
+#define ARRAY_INDEX_6 6
+#define ARRAY_INDEX_7 7
+
+#define XSFP_TLV_PRE_INFO_LEN 4
+
+enum hinic3_link_port_type {
+ LINK_PORT_UNKNOWN,
+ LINK_PORT_OPTICAL_MM,
+ LINK_PORT_OPTICAL_SM,
+ LINK_PORT_PAS_COPPER,
+ LINK_PORT_ACC,
+ LINK_PORT_BASET,
+ LINK_PORT_AOC = 0x40,
+ LINK_PORT_ELECTRIC,
+ LINK_PORT_BACKBOARD_INTERFACE,
+};
+
+enum hilink_fibre_subtype {
+ FIBRE_SUBTYPE_SR = 1,
+ FIBRE_SUBTYPE_LR,
+ FIBRE_SUBTYPE_MAX,
+};
+
+enum hilink_fec_type {
+ HILINK_FEC_NOT_SET,
+ HILINK_FEC_RSFEC,
+ HILINK_FEC_BASEFEC,
+ HILINK_FEC_NOFEC,
+ HILINK_FEC_LLRSFE,
+ HILINK_FEC_MAX_TYPE,
+};
+
+struct hinic3_sq_attr {
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u64 ci_dma_base;
+};
+
+struct vf_data_storage {
+ u8 drv_mac_addr[ETH_ALEN];
+ u8 user_mac_addr[ETH_ALEN];
+ bool registered;
+ bool use_specified_mac;
+ u16 pf_vlan;
+ u8 pf_qos;
+ u8 rsvd2;
+ u32 max_rate;
+ u32 min_rate;
+
+ bool link_forced;
+ bool link_up; /* only valid if VF link is forced */
+ bool spoofchk;
+ bool trust;
+ u16 num_qps;
+ u32 support_extra_feature;
+};
+
+struct hinic3_port_routine_cmd {
+ bool mpu_send_sfp_info;
+ bool mpu_send_sfp_abs;
+
+ struct mag_cmd_get_xsfp_info std_sfp_info;
+ struct mag_cmd_get_xsfp_present abs;
+};
+
+struct hinic3_port_routine_cmd_extern {
+ bool mpu_send_xsfp_tlv_info;
+
+ struct drv_mag_cmd_get_xsfp_tlv_rsp std_xsfp_tlv_info;
+};
+
+struct hinic3_nic_cfg {
+ struct semaphore cfg_lock;
+
+ /* Valid only when PFC is disabled */
+ bool pause_set;
+ struct nic_pause_config nic_pause;
+
+ u8 pfc_en;
+ u8 pfc_bitmap;
+
+ struct nic_port_info port_info;
+
+ /* percentage of pf link bandwidth */
+ u32 pf_bw_tx_limit;
+ u32 pf_bw_rx_limit;
+
+ struct hinic3_port_routine_cmd rt_cmd;
+ struct hinic3_port_routine_cmd_extern rt_cmd_ext;
+ /* mutex used for copying SFP info */
+ struct mutex sfp_mutex;
+};
+
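+/* Per-function NIC service context: queue resources, VF bookkeeping and
+ * port/QoS configuration, attached to a hwdev via the service adapter.
+ */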
+struct hinic3_nic_io {
+ void *hwdev;
+ void *pcidev_hdl;
+ void *dev_hdl;
+
+ u8 link_status;
+ u8 direct;
+ u32 rsvd2;
+
+ struct hinic3_io_queue *sq;
+ struct hinic3_io_queue *rq;
+
+ u16 num_qps;
+ u16 max_qps;
+
+ void *ci_vaddr_base;
+ dma_addr_t ci_dma_base;
+
+ u8 __iomem *sqs_db_addr;
+ u8 __iomem *rqs_db_addr;
+
+ u16 max_vfs;
+ u16 rsvd3;
+ u32 rsvd4;
+
+ struct vf_data_storage *vf_infos;
+ struct hinic3_dcb_state dcb_state;
+ struct hinic3_nic_cfg nic_cfg;
+
+ u16 rx_buff_len;
+ u16 rsvd5;
+ u32 rsvd6;
+ u64 feature_cap;
+ u64 rsvd7;
+};
+
+struct vf_msg_handler {
+ u16 cmd;
+ int (*handler)(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+};
+
+struct nic_event_handler {
+ u16 cmd;
+ void (*handler)(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+};
+
+int hinic3_set_ci_table(void *hwdev, struct hinic3_sq_attr *attr);
+
+int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+int l2nic_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u16 channel);
+
+int hinic3_cfg_vf_vlan(struct hinic3_nic_io *nic_io, u8 opcode, u16 vid,
+ u8 qos, int vf_id);
+
+int hinic3_vf_event_handler(void *hwdev,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+void hinic3_pf_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+int hinic3_pf_mbox_handler(void *hwdev,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+u8 hinic3_nic_sw_aeqe_handler(void *hwdev, u8 event, u8 *data);
+
+int hinic3_vf_func_init(struct hinic3_nic_io *nic_io);
+
+void hinic3_vf_func_free(struct hinic3_nic_io *nic_io);
+
+void hinic3_notify_dcb_state_event(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state);
+
+int hinic3_save_dcb_state(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state);
+
+void hinic3_notify_vf_link_status(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u8 link_status);
+
+int hinic3_vf_mag_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+void hinic3_pf_mag_event_handler(void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+int hinic3_pf_mag_mbox_handler(void *hwdev, u16 vf_id,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+void hinic3_unregister_vf(struct hinic3_nic_io *nic_io, u16 vf_id);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c
new file mode 100644
index 0000000..525a353
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c
@@ -0,0 +1,1894 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "nic_mpu_cmd.h"
+#include "nic_npu_cmd.h"
+#include "hinic3_common.h"
+#include "hinic3_nic_cfg.h"
+
+#include "vram_common.h"
+
+int hinic3_delete_bond(void *hwdev)
+{
+ struct hinic3_cmd_delete_bond cmd_delete_bond;
+ u16 out_size = sizeof(cmd_delete_bond);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err = 0;
+
+ if (!hwdev) {
+ pr_err("hwdev is null.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is null.\n");
+ return -EINVAL;
+ }
+
+ memset(&cmd_delete_bond, 0, sizeof(cmd_delete_bond));
+ cmd_delete_bond.bond_id = HINIC3_INVALID_BOND_ID;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_BOND_DEV_DELETE,
+ &cmd_delete_bond, sizeof(cmd_delete_bond),
+ &cmd_delete_bond, &out_size);
+ if (err || !out_size || cmd_delete_bond.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to delete bond, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cmd_delete_bond.head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (cmd_delete_bond.bond_id != HINIC3_INVALID_BOND_ID) {
+ nic_info(nic_io->dev_hdl, "Delete bond success\n");
+ }
+
+ return 0;
+}
+
+int hinic3_open_close_bond(void *hwdev, u32 bond_en)
+{
+ struct hinic3_cmd_open_close_bond cmd_open_close_bond;
+ u16 out_size = sizeof(cmd_open_close_bond);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err = 0;
+
+ if (!hwdev) {
+ pr_err("hwdev is null.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is null.\n");
+ return -EINVAL;
+ }
+
+ memset(&cmd_open_close_bond, 0, sizeof(cmd_open_close_bond));
+ cmd_open_close_bond.open_close_bond_info.bond_id = HINIC3_INVALID_BOND_ID;
+ cmd_open_close_bond.open_close_bond_info.open_close_flag = bond_en;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_BOND_DEV_OPEN_CLOSE,
+ &cmd_open_close_bond, sizeof(cmd_open_close_bond),
+ &cmd_open_close_bond, &out_size);
+ if (err || !out_size || cmd_open_close_bond.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to %s bond, err: %d, status: 0x%x, out_size: 0x%x\n",
+ bond_en ? "open" : "close", err, cmd_open_close_bond.head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (cmd_open_close_bond.open_close_bond_info.bond_id != HINIC3_INVALID_BOND_ID) {
+ nic_info(nic_io->dev_hdl, "%s bond success\n", bond_en == true ? "Open" : "Close");
+ }
+
+ return 0;
+}
+
+int hinic3_create_bond(void *hwdev, u32 *bond_id)
+{
+ struct hinic3_cmd_create_bond cmd_create_bond;
+ u16 out_size = sizeof(cmd_create_bond);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err = 0;
+
+ if (!hwdev) {
+ pr_err("hwdev is null.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is null.\n");
+ return -EINVAL;
+ }
+
+ memset(&cmd_create_bond, 0, sizeof(cmd_create_bond));
+ cmd_create_bond.create_bond_info.default_param_flag = true;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_BOND_DEV_CREATE,
+ &cmd_create_bond, sizeof(cmd_create_bond),
+ &cmd_create_bond, &out_size);
+ if (err || !out_size || cmd_create_bond.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to create default bond, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cmd_create_bond.head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (cmd_create_bond.create_bond_info.bond_id != HINIC3_INVALID_BOND_ID) {
+ *bond_id = cmd_create_bond.create_bond_info.bond_id;
+ nic_info(nic_io->dev_hdl, "Create bond success\n");
+ }
+
+ return 0;
+}
+
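+/* Program the CI (consumer index) write-back attributes of one L2NIC SQ:
+ * DMA address, interrupt coalescing and the bound interrupt entry.
+ */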
+int hinic3_set_ci_table(void *hwdev, struct hinic3_sq_attr *attr)
+{
+ struct hinic3_cmd_cons_idx_attr cons_idx_attr;
+ u16 out_size = sizeof(cons_idx_attr);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !attr)
+ return -EINVAL;
+
+ memset(&cons_idx_attr, 0, sizeof(cons_idx_attr));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ cons_idx_attr.func_idx = hinic3_global_func_id(hwdev);
+
+ cons_idx_attr.dma_attr_off = attr->dma_attr_off;
+ cons_idx_attr.pending_limit = attr->pending_limit;
+ cons_idx_attr.coalescing_time = attr->coalescing_time;
+
+ if (attr->intr_en) {
+ cons_idx_attr.intr_en = attr->intr_en;
+ cons_idx_attr.intr_idx = attr->intr_idx;
+ }
+
+ cons_idx_attr.l2nic_sqn = attr->l2nic_sqn;
+ cons_idx_attr.ci_addr = attr->ci_dma_base;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SQ_CI_ATTR_SET,
+ &cons_idx_attr, sizeof(cons_idx_attr),
+ &cons_idx_attr, &out_size);
+ if (err || !out_size || cons_idx_attr.msg_head.status) {
+ sdk_err(nic_io->dev_hdl,
+ "Failed to set ci attribute table, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cons_idx_attr.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
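+/* True when a VF's MAC request was rejected because the PF has already
+ * assigned the VF MAC address; callers treat this as a non-fatal status.
+ */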
+#define PF_SET_VF_MAC(hwdev, status) \
+ (hinic3_func_type(hwdev) == TYPE_VF && \
+ (status) == HINIC3_PF_SET_VF_ALREADY)
+
+static int hinic3_check_mac_info(void *hwdev, u8 status, u16 vlan_id)
+{
+ if ((status && status != HINIC3_MGMT_STATUS_EXIST) ||
+ ((vlan_id & CHECK_IPSU_15BIT) &&
+ status == HINIC3_MGMT_STATUS_EXIST)) {
+ if (PF_SET_VF_MAC(hwdev, status))
+ return 0;
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define HINIC_VLAN_ID_MASK 0x7FFF
+
+int hinic3_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id,
+ u16 channel)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if ((vlan_id & HINIC_VLAN_ID_MASK) >= VLAN_N_VID) {
+ nic_err(nic_io->dev_hdl, "Invalid VLAN number: %d\n",
+ (vlan_id & HINIC_VLAN_ID_MASK));
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ ether_addr_copy(mac_info.mac, mac_addr);
+
+ err = l2nic_msg_to_mgmt_sync_ch(hwdev, HINIC3_NIC_CMD_SET_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size, channel);
+ if (err || !out_size ||
+ hinic3_check_mac_info(hwdev, mac_info.msg_head.status,
+ mac_info.vlan_id)) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to update MAC, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, mac_info.msg_head.status, out_size, channel);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF mac, Ignore set operation\n");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ if (mac_info.msg_head.status == HINIC3_MGMT_STATUS_EXIST) {
+ nic_warn(nic_io->dev_hdl, "MAC is repeated. Ignore update operation\n");
+ return 0;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_mac);
+
+int hinic3_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id,
+ u16 channel)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if ((vlan_id & HINIC_VLAN_ID_MASK) >= VLAN_N_VID) {
+ nic_err(nic_io->dev_hdl, "Invalid VLAN number: %d\n",
+ (vlan_id & HINIC_VLAN_ID_MASK));
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ ether_addr_copy(mac_info.mac, mac_addr);
+
+ err = l2nic_msg_to_mgmt_sync_ch(hwdev, HINIC3_NIC_CMD_DEL_MAC,
+ &mac_info, sizeof(mac_info), &mac_info,
+ &out_size, channel);
+ if (err || !out_size ||
+ (mac_info.msg_head.status && !PF_SET_VF_MAC(hwdev, mac_info.msg_head.status))) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to delete MAC, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, mac_info.msg_head.status, out_size, channel);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF mac, Ignore delete operation.\n");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_del_mac);
+
+int hinic3_update_mac(void *hwdev, const u8 *old_mac, u8 *new_mac, u16 vlan_id,
+ u16 func_id)
+{
+ struct hinic3_port_mac_update mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !old_mac || !new_mac)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if ((vlan_id & HINIC_VLAN_ID_MASK) >= VLAN_N_VID) {
+ nic_err(nic_io->dev_hdl, "Invalid VLAN number: %d\n",
+ (vlan_id & HINIC_VLAN_ID_MASK));
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ ether_addr_copy(mac_info.old_mac, old_mac);
+ ether_addr_copy(mac_info.new_mac, new_mac);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_UPDATE_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size);
+ if (err || !out_size ||
+ hinic3_check_mac_info(hwdev, mac_info.msg_head.status,
+ mac_info.vlan_id)) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to update MAC, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, mac_info.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF MAC. Ignore update operation\n");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ if (mac_info.msg_head.status == HINIC3_MGMT_STATUS_EXIST) {
+ nic_warn(nic_io->dev_hdl, "MAC is repeated. Ignore update operation\n");
+ return 0;
+ }
+
+ return 0;
+}
+
+int hinic3_get_default_mac(void *hwdev, u8 *mac_addr)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ mac_info.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size);
+ if (err || !out_size || mac_info.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get mac, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, mac_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ ether_addr_copy(mac_addr, mac_info.mac);
+
+ return 0;
+}
+
+static int hinic3_config_vlan(struct hinic3_nic_io *nic_io, u8 opcode,
+ u16 vlan_id, u16 func_id)
+{
+ struct hinic3_cmd_vlan_config vlan_info;
+ u16 out_size = sizeof(vlan_info);
+ int err;
+
+ memset(&vlan_info, 0, sizeof(vlan_info));
+ vlan_info.opcode = opcode;
+ vlan_info.func_id = func_id;
+ vlan_info.vlan_id = vlan_id;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_FUNC_VLAN,
+ &vlan_info, sizeof(vlan_info),
+ &vlan_info, &out_size);
+ if (err || !out_size || vlan_info.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to %s vlan, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_ADD ? "add" : "delete",
+ err, vlan_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#if defined(HAVE_NDO_UDP_TUNNEL_ADD) || defined(HAVE_UDP_TUNNEL_NIC_INFO)
+int hinic3_vlxan_port_config(void *hwdev, u16 func_id, u16 port, u8 action)
+{
+ struct hinic3_cmd_vxlan_port_info vxlan_port_info;
+ u16 out_size = sizeof(vxlan_port_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&vxlan_port_info, 0, sizeof(vxlan_port_info));
+ vxlan_port_info.opcode = action;
+ vxlan_port_info.cfg_mode = 0; /* 0: configured via ethtool */
+ vxlan_port_info.func_id = func_id;
+ vxlan_port_info.vxlan_port = port;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_VXLAN_PORT,
+ &vxlan_port_info, sizeof(vxlan_port_info),
+ &vxlan_port_info, &out_size);
+ if (err || !out_size || vxlan_port_info.msg_head.status) {
+ if (vxlan_port_info.msg_head.status == 0x2) {
+ nic_warn(nic_io->dev_hdl,
+ "Failed to %s vxlan dst port because it has already been set by hinicadm\n",
+ action == HINIC3_CMD_OP_ADD ? "add" : "delete");
+ } else {
+ nic_err(nic_io->dev_hdl,
+ "Failed to %s vxlan dst port, err: %d, status: 0x%x, out size: 0x%x\n",
+ action == HINIC3_CMD_OP_ADD ? "add" : "delete",
+ err, vxlan_port_info.msg_head.status, out_size);
+ }
+ return -EINVAL;
+ }
+
+ return 0;
+}
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD || HAVE_UDP_TUNNEL_NIC_INFO */
+
+int hinic3_add_vlan(void *hwdev, u16 vlan_id, u16 func_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return hinic3_config_vlan(nic_io, HINIC3_CMD_OP_ADD, vlan_id, func_id);
+}
+
+int hinic3_del_vlan(void *hwdev, u16 vlan_id, u16 func_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return hinic3_config_vlan(nic_io, HINIC3_CMD_OP_DEL, vlan_id, func_id);
+}
+
+int hinic3_set_vport_enable(void *hwdev, u16 func_id, bool enable, u16 channel)
+{
+ struct hinic3_vport_state en_state;
+ u16 out_size = sizeof(en_state);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&en_state, 0, sizeof(en_state));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ en_state.func_id = func_id;
+ en_state.state = enable ? 1 : 0;
+
+ err = l2nic_msg_to_mgmt_sync_ch(hwdev, HINIC3_NIC_CMD_SET_VPORT_ENABLE,
+ &en_state, sizeof(en_state),
+ &en_state, &out_size, channel);
+ if (err || !out_size || en_state.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set vport state, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, en_state.msg_head.status, out_size, channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(hinic3_set_vport_enable);
+
+int hinic3_set_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !dcb_state)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (!memcmp(&nic_io->dcb_state, dcb_state, sizeof(nic_io->dcb_state)))
+ return 0;
+
+ /* save in sdk, vf will get dcb state when probing */
+ hinic3_save_dcb_state(nic_io, dcb_state);
+
+ /* notify stateful services in PF, then notify all VFs */
+ hinic3_notify_dcb_state_event(nic_io, dcb_state);
+
+ return 0;
+}
+
+int hinic3_get_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !dcb_state)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memcpy(dcb_state, &nic_io->dcb_state, sizeof(*dcb_state));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_dcb_state);
+
+int hinic3_get_cos_by_pri(void *hwdev, u8 pri, u8 *cos)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !cos)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (pri >= NIC_DCB_UP_MAX && nic_io->dcb_state.trust == HINIC3_DCB_PCP)
+ return -EINVAL;
+
+ if (pri >= NIC_DCB_IP_PRI_MAX && nic_io->dcb_state.trust == HINIC3_DCB_DSCP)
+ return -EINVAL;
+
+/*lint -e662*/
+/*lint -e661*/
+ if (nic_io->dcb_state.dcb_on) {
+ if (nic_io->dcb_state.trust == HINIC3_DCB_PCP)
+ *cos = nic_io->dcb_state.pcp2cos[pri];
+ else
+ *cos = nic_io->dcb_state.dscp2cos[pri];
+ } else {
+ *cos = nic_io->dcb_state.default_cos;
+ }
+/*lint +e662*/
+/*lint +e661*/
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_cos_by_pri);
+
+int hinic3_save_dcb_state(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state)
+{
+ memcpy(&nic_io->dcb_state, dcb_state, sizeof(*dcb_state));
+
+ return 0;
+}
+
+int hinic3_get_pf_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_cmd_vf_dcb_state vf_dcb;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(vf_dcb);
+ int err;
+
+ if (!hwdev || !dcb_state)
+ return -EINVAL;
+
+ memset(&vf_dcb, 0, sizeof(vf_dcb));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF) {
+ nic_err(nic_io->dev_hdl, "Only vf need to get pf dcb state\n");
+ return -EINVAL;
+ }
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_VF_COS, &vf_dcb,
+ sizeof(vf_dcb), &vf_dcb, &out_size);
+ if (err || !out_size || vf_dcb.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to get vf default cos, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vf_dcb.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ memcpy(dcb_state, &vf_dcb.state, sizeof(*dcb_state));
+ /* Save dcb_state in hw for stateful module */
+ hinic3_save_dcb_state(nic_io, dcb_state);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_pf_dcb_state);
+
+#define UNSUPPORT_SET_PAUSE 0x10
+static int hinic3_cfg_hw_pause(struct hinic3_nic_io *nic_io, u8 opcode,
+ struct nic_pause_config *nic_pause)
+{
+ struct hinic3_cmd_pause_config pause_info;
+ u16 out_size = sizeof(pause_info);
+ int err;
+
+ memset(&pause_info, 0, sizeof(pause_info));
+
+ pause_info.port_id = hinic3_physical_port_id(nic_io->hwdev);
+ pause_info.opcode = opcode;
+ if (opcode == HINIC3_CMD_OP_SET) {
+ pause_info.auto_neg = nic_pause->auto_neg;
+ pause_info.rx_pause = nic_pause->rx_pause;
+ pause_info.tx_pause = nic_pause->tx_pause;
+ }
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_PAUSE_INFO,
+ &pause_info, sizeof(pause_info),
+ &pause_info, &out_size);
+ if (err || !out_size || pause_info.msg_head.status) {
+ if (pause_info.msg_head.status == UNSUPPORT_SET_PAUSE) {
+ err = -EOPNOTSUPP;
+ nic_err(nic_io->dev_hdl, "Can not set pause when pfc is enable\n");
+ } else {
+ err = -EFAULT;
+ nic_err(nic_io->dev_hdl, "Failed to %s pause info, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_SET ? "set" : "get",
+ err, pause_info.msg_head.status, out_size);
+ }
+ return err;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET) {
+ nic_pause->auto_neg = pause_info.auto_neg;
+ nic_pause->rx_pause = pause_info.rx_pause;
+ nic_pause->tx_pause = pause_info.tx_pause;
+ }
+
+ return 0;
+}
+
+int hinic3_set_pause_info(void *hwdev, struct nic_pause_config nic_pause)
+{
+ struct hinic3_nic_cfg *nic_cfg = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ nic_cfg = &nic_io->nic_cfg;
+
+ down(&nic_cfg->cfg_lock);
+
+ err = hinic3_cfg_hw_pause(nic_io, HINIC3_CMD_OP_SET, &nic_pause);
+ if (err) {
+ up(&nic_cfg->cfg_lock);
+ return err;
+ }
+
+ nic_cfg->pfc_en = 0;
+ nic_cfg->pfc_bitmap = 0;
+ nic_cfg->pause_set = true;
+ nic_cfg->nic_pause.auto_neg = nic_pause.auto_neg;
+ nic_cfg->nic_pause.rx_pause = nic_pause.rx_pause;
+ nic_cfg->nic_pause.tx_pause = nic_pause.tx_pause;
+
+ up(&nic_cfg->cfg_lock);
+
+ return 0;
+}
+
+int hinic3_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ int err = 0;
+
+ if (!hwdev || !nic_pause)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ err = hinic3_cfg_hw_pause(nic_io, HINIC3_CMD_OP_GET, nic_pause);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+int hinic3_sync_dcb_state(void *hwdev, u8 op_code, u8 state)
+{
+ struct hinic3_cmd_set_dcb_state dcb_state;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(dcb_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&dcb_state, 0, sizeof(dcb_state));
+
+ dcb_state.op_code = op_code;
+ dcb_state.state = state;
+ dcb_state.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_QOS_DCB_STATE,
+ &dcb_state, sizeof(dcb_state), &dcb_state, &out_size);
+ if (err || dcb_state.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to set dcb state, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, dcb_state.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_dcb_set_rq_iq_mapping(void *hwdev, u32 num_rqs, u8 *map,
+ u32 max_map_num)
+{
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_dcb_set_rq_iq_mapping);
+
+int hinic3_flush_qps_res(void *hwdev)
+{
+ struct hinic3_cmd_clear_qp_resource sq_res;
+ u16 out_size = sizeof(sq_res);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&sq_res, 0, sizeof(sq_res));
+
+ sq_res.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CLEAR_QP_RESOURCE,
+ &sq_res, sizeof(sq_res), &sq_res,
+ &out_size);
+ if (err || !out_size || sq_res.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to clear sq resources, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, sq_res.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_flush_qps_res);
+
+int hinic3_cache_out_qps_res(void *hwdev)
+{
+ struct hinic3_cmd_cache_out_qp_resource qp_res;
+ u16 out_size = sizeof(qp_res);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&qp_res, 0, sizeof(qp_res));
+
+ qp_res.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CACHE_OUT_QP_RES,
+ &qp_res, sizeof(qp_res), &qp_res, &out_size);
+ if (err || !out_size || qp_res.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to cache out qp resources, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, qp_res.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_vport_stats(void *hwdev, u16 func_id, struct hinic3_vport_stats *stats)
+{
+ struct hinic3_port_stats_info stats_info;
+ struct hinic3_cmd_vport_stats vport_stats;
+ u16 out_size = sizeof(vport_stats);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !stats)
+ return -EINVAL;
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ memset(&vport_stats, 0, sizeof(vport_stats));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ stats_info.func_id = func_id;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_VPORT_STAT,
+ &stats_info, sizeof(stats_info),
+ &vport_stats, &out_size);
+ if (err || !out_size || vport_stats.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get function statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vport_stats.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ memcpy(stats, &vport_stats.stats, sizeof(*stats));
+
+ return 0;
+}
+
+static int hinic3_set_function_table(struct hinic3_nic_io *nic_io, u32 cfg_bitmap,
+ const struct hinic3_func_tbl_cfg *cfg)
+{
+ struct hinic3_cmd_set_func_tbl cmd_func_tbl;
+ u16 out_size = sizeof(cmd_func_tbl);
+ int err;
+
+ memset(&cmd_func_tbl, 0, sizeof(cmd_func_tbl));
+ cmd_func_tbl.func_id = hinic3_global_func_id(nic_io->hwdev);
+ cmd_func_tbl.cfg_bitmap = cfg_bitmap;
+ cmd_func_tbl.tbl_cfg = *cfg;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_SET_FUNC_TBL,
+ &cmd_func_tbl, sizeof(cmd_func_tbl),
+ &cmd_func_tbl, &out_size);
+ if (err || cmd_func_tbl.msg_head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to set func table, bitmap: 0x%x, err: %d, status: 0x%x, out size: 0x%x\n",
+ cfg_bitmap, err, cmd_func_tbl.msg_head.status,
+ out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hinic3_init_function_table(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_func_tbl_cfg func_tbl_cfg = {0};
+ u32 cfg_bitmap = BIT(FUNC_CFG_INIT) | BIT(FUNC_CFG_MTU) |
+ BIT(FUNC_CFG_RX_BUF_SIZE);
+
+ func_tbl_cfg.mtu = 0x3FFF; /* default, max mtu */
+ func_tbl_cfg.rx_wqe_buf_size = nic_io->rx_buff_len;
+
+ return hinic3_set_function_table(nic_io, cfg_bitmap, &func_tbl_cfg);
+}
+
+int hinic3_set_port_mtu(void *hwdev, u16 new_mtu)
+{
+ struct hinic3_func_tbl_cfg func_tbl_cfg = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (new_mtu < HINIC3_MIN_MTU_SIZE) {
+ nic_err(nic_io->dev_hdl,
+ "Invalid mtu size: %ubytes, mtu size < %ubytes",
+ new_mtu, HINIC3_MIN_MTU_SIZE);
+ return -EINVAL;
+ }
+
+ if (new_mtu > HINIC3_MAX_JUMBO_FRAME_SIZE) {
+ nic_err(nic_io->dev_hdl, "Invalid mtu size: %ubytes, mtu size > %ubytes",
+ new_mtu, HINIC3_MAX_JUMBO_FRAME_SIZE);
+ return -EINVAL;
+ }
+
+ func_tbl_cfg.mtu = new_mtu;
+ return hinic3_set_function_table(nic_io, BIT(FUNC_CFG_MTU),
+ &func_tbl_cfg);
+}
+
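+/* Negotiate the NIC feature bitmap with the firmware: opcode SET pushes the
+ * driver's feature words, opcode GET reads back what the firmware supports.
+ */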
+static int nic_feature_nego(void *hwdev, u8 opcode, u64 *s_feature, u16 size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_feature_nego feature_nego;
+ u16 out_size = sizeof(feature_nego);
+ int err;
+
+ if (!hwdev || !s_feature || size > NIC_MAX_FEATURE_QWORD)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&feature_nego, 0, sizeof(feature_nego));
+ feature_nego.func_id = hinic3_global_func_id(hwdev);
+ feature_nego.opcode = opcode;
+ if (opcode == HINIC3_CMD_OP_SET)
+ memcpy(feature_nego.s_feature, s_feature, size * sizeof(u64));
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FEATURE_NEGO,
+ &feature_nego, sizeof(feature_nego),
+ &feature_nego, &out_size);
+ if (err || !out_size || feature_nego.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to negotiate nic feature, err:%d, status: 0x%x, out_size: 0x%x\n",
+ err, feature_nego.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ memcpy(s_feature, feature_nego.s_feature, size * sizeof(u64));
+
+ return 0;
+}
+
+static int hinic3_get_bios_pf_bw_tx_limit(void *hwdev, struct hinic3_nic_io *nic_io, u16 func_id, u32 *pf_rate)
+{
+ int err = 0; /* default: success */
+ struct nic_cmd_bios_cfg cfg = {{0}};
+ u16 out_size = sizeof(cfg);
+
+ cfg.bios_cfg.func_id = (u8)func_id;
+ cfg.bios_cfg.func_valid = 1;
+ cfg.op_code = NIC_NVM_DATA_PF_TX_SPEED_LIMIT;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_BIOS_CFG, &cfg, sizeof(cfg),
+ &cfg, &out_size);
+ if (err || !out_size || cfg.head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get bios pf bandwidth tx limit, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, cfg.head.status, out_size);
+ return -EIO;
+ }
+
+ /* check data is valid or not */
+ if (cfg.bios_cfg.signature != BIOS_CFG_SIGNATURE)
+ nic_warn(nic_io->dev_hdl, "Invalid bios configuration data, signature: 0x%x\n",
+ cfg.bios_cfg.signature);
+
+ if (cfg.bios_cfg.pf_tx_bw > MAX_LIMIT_BW) {
+ nic_err(nic_io->dev_hdl, "Invalid bios cfg pf bandwidth limit: %u\n",
+ cfg.bios_cfg.pf_tx_bw);
+ return -EINVAL;
+ }
+
+ (*pf_rate) = cfg.bios_cfg.pf_tx_bw;
+ return err;
+}
+
+static int hinic3_get_bios_pf_bw_rx_limit(void *hwdev, struct hinic3_nic_io *nic_io, u16 func_id, u32 *pf_rate)
+{
+ int err = 0; /* default: success */
+ struct nic_rx_rate_bios_cfg rx_bios_conf = {{0}};
+ u16 out_size = sizeof(rx_bios_conf);
+
+ rx_bios_conf.func_id = (u8)func_id;
+ rx_bios_conf.op_code = 0; /* 1-save, 0-read */
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_RX_RATE_CFG, &rx_bios_conf, sizeof(rx_bios_conf),
+ &rx_bios_conf, &out_size);
+ if (rx_bios_conf.msg_head.status == HINIC3_MGMT_CMD_UNSUPPORTED && err == 0) { /* compatible with older firmware */
+ nic_warn(nic_io->dev_hdl, "Not support get bios pf bandwidth rx limit\n");
+ return 0;
+ } else if (err || !out_size || rx_bios_conf.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get bios pf bandwidth rx limit, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rx_bios_conf.msg_head.status, out_size);
+ return -EIO;
+ }
+ if (rx_bios_conf.rx_rate_limit > MAX_LIMIT_BW) {
+ nic_err(nic_io->dev_hdl, "Invalid bios cfg pf bandwidth limit: %u\n",
+ rx_bios_conf.rx_rate_limit);
+ return -EINVAL;
+ }
+
+ (*pf_rate) = rx_bios_conf.rx_rate_limit;
+ return err;
+}
+
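+/* Query the BIOS-configured PF bandwidth limit (a percentage of link
+ * bandwidth, at most MAX_LIMIT_BW) for the TX or RX direction. VFs and
+ * adapters without rate-limit support are silently skipped.
+ */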
+static int hinic3_get_bios_pf_bw_limit(void *hwdev, u32 *pf_bw_limit, u8 direct)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 pf_rate = 0;
+ int err = 0;
+ u16 func_id;
+
+ if (!hwdev || !pf_bw_limit)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF || !HINIC3_SUPPORT_RATE_LIMIT(hwdev))
+ return 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ func_id = hinic3_global_func_id(hwdev);
+
+ if (direct == HINIC3_NIC_TX)
+ err = hinic3_get_bios_pf_bw_tx_limit(hwdev, nic_io, func_id, &pf_rate);
+ else if (direct == HINIC3_NIC_RX)
+ err = hinic3_get_bios_pf_bw_rx_limit(hwdev, nic_io, func_id, &pf_rate);
+
+ if (err != 0)
+ return err;
+
+ if (pf_rate > MAX_LIMIT_BW) {
+ nic_err(nic_io->dev_hdl, "Invalid bios cfg pf bandwidth limit: %u\n", pf_rate);
+ return -EINVAL;
+ }
+ *pf_bw_limit = pf_rate;
+
+ return 0;
+}
+
+int hinic3_set_pf_rate(void *hwdev, u8 speed_level)
+{
+ struct hinic3_cmd_tx_rate_cfg rate_cfg = {{0}};
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 rate_limit;
+ u16 out_size = sizeof(rate_cfg);
+ u32 pf_rate = 0;
+ int err;
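+ /* link speed in Mbit/s for each port speed level */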
+ u32 speed_convert[PORT_SPEED_UNKNOWN] = {
+ 0, 10, 100, 1000, 10000, 25000, 40000, 50000, 100000, 200000
+ };
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (speed_level >= PORT_SPEED_UNKNOWN) {
+ nic_err(nic_io->dev_hdl, "Invalid speed level: %hhu\n", speed_level);
+ return -EINVAL;
+ }
+
+ rate_limit = (nic_io->direct == HINIC3_NIC_TX) ?
+ nic_io->nic_cfg.pf_bw_tx_limit : nic_io->nic_cfg.pf_bw_rx_limit;
+
+ if (rate_limit != MAX_LIMIT_BW) {
+ /* rate_limit is a percentage, so scale the link speed by limit/100 */
+ pf_rate = (speed_convert[speed_level] / 100) * rate_limit;
+ /* a small but non-zero limit must not be rounded down to "unlimited" */
+ if ((pf_rate == 0) && (speed_level != PORT_SPEED_NOT_SET))
+ pf_rate = 1;
+ }
+
+ rate_cfg.func_id = hinic3_global_func_id(hwdev);
+ rate_cfg.min_rate = 0;
+ rate_cfg.max_rate = pf_rate;
+ rate_cfg.direct = nic_io->direct;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_MAX_MIN_RATE, &rate_cfg,
+ sizeof(rate_cfg), &rate_cfg, &out_size);
+ if (err || !out_size || rate_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rate(%u), err: %d, status: 0x%x, out size: 0x%x\n",
+ pf_rate, err, rate_cfg.msg_head.status, out_size);
+ return rate_cfg.msg_head.status ? rate_cfg.msg_head.status : -EIO;
+ }
+
+ return 0;
+}
+
+static int hinic3_get_nic_feature_from_hw(void *hwdev, u64 *s_feature, u16 size)
+{
+ return nic_feature_nego(hwdev, HINIC3_CMD_OP_GET, s_feature, size);
+}
+
+int hinic3_set_nic_feature_to_hw(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return nic_feature_nego(hwdev, HINIC3_CMD_OP_SET, &nic_io->feature_cap, 1);
+}
+
+u64 hinic3_get_feature_cap(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return 0;
+
+ return nic_io->feature_cap;
+}
+
+void hinic3_update_nic_feature(void *hwdev, u64 s_feature)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ nic_io->feature_cap = s_feature;
+
+ nic_info(nic_io->dev_hdl, "Update nic feature to 0x%llx\n", nic_io->feature_cap);
+}
+
+static inline int init_nic_hwdev_param_valid(const void *hwdev, const void *pcidev_hdl,
+ const void *dev_hdl)
+{
+ if (!hwdev || !pcidev_hdl || !dev_hdl)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int hinic3_init_nic_io(void *hwdev, void *pcidev_hdl, void *dev_hdl,
+ struct hinic3_nic_io **nic_io)
+{
+ if (init_nic_hwdev_param_valid(hwdev, pcidev_hdl, dev_hdl))
+ return -EINVAL;
+
+ *nic_io = kzalloc(sizeof(**nic_io), GFP_KERNEL);
+ if (!(*nic_io))
+ return -ENOMEM;
+
+ (*nic_io)->dev_hdl = dev_hdl;
+ (*nic_io)->pcidev_hdl = pcidev_hdl;
+ (*nic_io)->hwdev = hwdev;
+
+ sema_init(&((*nic_io)->nic_cfg.cfg_lock), 1);
+ mutex_init(&((*nic_io)->nic_cfg.sfp_mutex));
+
+ (*nic_io)->nic_cfg.rt_cmd.mpu_send_sfp_abs = false;
+ (*nic_io)->nic_cfg.rt_cmd.mpu_send_sfp_info = false;
+ (*nic_io)->nic_cfg.rt_cmd_ext.mpu_send_xsfp_tlv_info = false;
+
+ return 0;
+}
+
+/**
+ * hinic3_init_nic_hwdev - init nic hwdev
+ * @hwdev: pointer to hwdev
+ * @pcidev_hdl: pointer to pcidev or handler
+ * @dev_hdl: pointer to pcidev->dev or handler, for sdk_err() or dma_alloc()
+ * @rx_buff_len: receive buffer length
+ */
+int hinic3_init_nic_hwdev(void *hwdev, void *pcidev_hdl, void *dev_hdl,
+ u16 rx_buff_len)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+ int is_in_kexec = vram_get_kexec_flag();
+
+ err = hinic3_init_nic_io(hwdev, pcidev_hdl, dev_hdl, &nic_io);
+ if (err)
+ return err;
+
+ nic_io->rx_buff_len = rx_buff_len;
+
+ err = hinic3_register_service_adapter(hwdev, nic_io, SERVICE_T_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to register service adapter\n");
+ goto register_sa_err;
+ }
+
+ err = hinic3_set_func_svc_used_state(hwdev, SVC_T_NIC, 1, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set function svc used state\n");
+ goto set_used_state_err;
+ }
+
+ if (is_in_kexec == 0) {
+ err = hinic3_init_function_table(nic_io);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to init function table\n");
+ goto err_out;
+ }
+ }
+
+ err = hinic3_get_nic_feature_from_hw(hwdev, &nic_io->feature_cap, 1);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get nic features\n");
+ goto err_out;
+ }
+
+ sdk_info(dev_hdl, "nic features: 0x%llx\n", nic_io->feature_cap);
+
+ err = hinic3_get_bios_pf_bw_limit(hwdev,
+ &nic_io->nic_cfg.pf_bw_tx_limit,
+ HINIC3_NIC_TX);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to get pf tx bandwidth limit\n");
+ goto err_out;
+ }
+
+ err = hinic3_get_bios_pf_bw_limit(hwdev,
+ &nic_io->nic_cfg.pf_bw_rx_limit,
+ HINIC3_NIC_RX);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to get pf rx bandwidth limit\n");
+ goto err_out;
+ }
+
+ err = hinic3_vf_func_init(nic_io);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to init vf info\n");
+ goto err_out;
+ }
+
+ return 0;
+
+err_out:
+ if (hinic3_set_func_svc_used_state(hwdev, SVC_T_NIC, 0,
+ HINIC3_CHANNEL_NIC) != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set function svc used state\n");
+ }
+
+set_used_state_err:
+ hinic3_unregister_service_adapter(hwdev, SERVICE_T_NIC);
+
+register_sa_err:
+ mutex_deinit(&nic_io->nic_cfg.sfp_mutex);
+ sema_deinit(&nic_io->nic_cfg.cfg_lock);
+
+ kfree(nic_io);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_init_nic_hwdev);
+
+void hinic3_free_nic_hwdev(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ hinic3_vf_func_free(nic_io);
+
+ hinic3_set_func_svc_used_state(hwdev, SVC_T_NIC, 0, HINIC3_CHANNEL_NIC);
+
+ hinic3_unregister_service_adapter(hwdev, SERVICE_T_NIC);
+
+ mutex_deinit(&nic_io->nic_cfg.sfp_mutex);
+ sema_deinit(&nic_io->nic_cfg.cfg_lock);
+
+ kfree(nic_io);
+}
+EXPORT_SYMBOL(hinic3_free_nic_hwdev);
+
+int hinic3_force_drop_tx_pkt(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_force_pkt_drop pkt_drop;
+ u16 out_size = sizeof(pkt_drop);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&pkt_drop, 0, sizeof(pkt_drop));
+ pkt_drop.port = hinic3_physical_port_id(hwdev);
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FORCE_PKT_DROP,
+ &pkt_drop, sizeof(pkt_drop),
+ &pkt_drop, &out_size);
+ if ((pkt_drop.msg_head.status != HINIC3_MGMT_CMD_UNSUPPORTED &&
+ pkt_drop.msg_head.status) || err || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to set force tx packets drop, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, pkt_drop.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return pkt_drop.msg_head.status;
+}
+
+int hinic3_set_rx_mode(void *hwdev, u32 enable)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_rx_mode_config rx_mode_cfg;
+ u16 out_size = sizeof(rx_mode_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&rx_mode_cfg, 0, sizeof(rx_mode_cfg));
+ rx_mode_cfg.func_id = hinic3_global_func_id(hwdev);
+ rx_mode_cfg.rx_mode = enable;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RX_MODE,
+ &rx_mode_cfg, sizeof(rx_mode_cfg),
+ &rx_mode_cfg, &out_size);
+ if (err || !out_size || rx_mode_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rx mode, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rx_mode_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_rx_vlan_offload(void *hwdev, u8 en)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_vlan_offload vlan_cfg;
+ u16 out_size = sizeof(vlan_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&vlan_cfg, 0, sizeof(vlan_cfg));
+ vlan_cfg.func_id = hinic3_global_func_id(hwdev);
+ vlan_cfg.vlan_offload = en;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD,
+ &vlan_cfg, sizeof(vlan_cfg),
+ &vlan_cfg, &out_size);
+ if (err || !out_size || vlan_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rx vlan offload, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vlan_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
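+/* Move a VF's MAC filter from old_vlan to new_vlan; if adding the entry under
+ * the new VLAN fails, the original entry is restored under old_vlan.
+ */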
+int hinic3_update_mac_vlan(void *hwdev, u16 old_vlan, u16 new_vlan, int vf_id)
+{
+ struct vf_data_storage *vf_info = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 func_id;
+ int err;
+
+ if (!hwdev || old_vlan >= VLAN_N_VID || new_vlan >= VLAN_N_VID)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (!nic_io->vf_infos)
+ return 0;
+
+ vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ if (is_zero_ether_addr(vf_info->drv_mac_addr))
+ return 0;
+
+ func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+
+ err = hinic3_del_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ old_vlan, func_id, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to delete VF %d MAC %pM vlan %u\n",
+ HW_VF_ID_TO_OS(vf_id), vf_info->drv_mac_addr, old_vlan);
+ return err;
+ }
+
+ err = hinic3_set_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ new_vlan, func_id, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to add VF %d MAC %pM vlan %u\n",
+ HW_VF_ID_TO_OS(vf_id), vf_info->drv_mac_addr, new_vlan);
+ hinic3_set_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ old_vlan, func_id, HINIC3_CHANNEL_NIC);
+ return err;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_rx_lro(void *hwdev, u8 ipv4_en, u8 ipv6_en,
+ u8 lro_max_pkt_len)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_lro_config lro_cfg;
+ u16 out_size = sizeof(lro_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&lro_cfg, 0, sizeof(lro_cfg));
+ lro_cfg.func_id = hinic3_global_func_id(hwdev);
+ lro_cfg.opcode = HINIC3_CMD_OP_SET;
+ lro_cfg.lro_ipv4_en = ipv4_en;
+ lro_cfg.lro_ipv6_en = ipv6_en;
+ lro_cfg.lro_max_pkt_len = lro_max_pkt_len;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_RX_LRO,
+ &lro_cfg, sizeof(lro_cfg),
+ &lro_cfg, &out_size);
+ if (err || !out_size || lro_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set lro offload, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, lro_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_rx_lro_timer(void *hwdev, u32 timer_value)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_lro_timer lro_timer;
+ u16 out_size = sizeof(lro_timer);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&lro_timer, 0, sizeof(lro_timer));
+ lro_timer.opcode = HINIC3_CMD_OP_SET;
+ lro_timer.timer = timer_value;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_LRO_TIMER,
+ &lro_timer, sizeof(lro_timer),
+ &lro_timer, &out_size);
+ if (err || !out_size || lro_timer.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set lro timer, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, lro_timer.msg_head.status, out_size);
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_rx_lro_state(void *hwdev, u8 lro_en, u32 lro_timer,
+ u32 lro_max_pkt_len)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u8 ipv4_en = 0, ipv6_en = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ ipv4_en = lro_en ? 1 : 0;
+ ipv6_en = lro_en ? 1 : 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ nic_info(nic_io->dev_hdl, "Set LRO max coalesce packet size to %uK\n",
+ lro_max_pkt_len);
+
+ err = hinic3_set_rx_lro(hwdev, ipv4_en, ipv6_en, (u8)lro_max_pkt_len);
+ if (err)
+ return err;
+
+ /* we don't set LRO timer for VF */
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ nic_info(nic_io->dev_hdl, "Set LRO timer to %u\n", lro_timer);
+
+ return hinic3_set_rx_lro_timer(hwdev, lro_timer);
+}
+
+int hinic3_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_set_vlan_filter vlan_filter;
+ u16 out_size = sizeof(vlan_filter);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.func_id = hinic3_global_func_id(hwdev);
+ vlan_filter.vlan_filter_ctrl = vlan_filter_ctrl;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN,
+ &vlan_filter, sizeof(vlan_filter),
+ &vlan_filter, &out_size);
+ if (err || !out_size || vlan_filter.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set vlan filter, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vlan_filter.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_func_capture_en(void *hwdev, u16 func_id, bool cap_en)
+{
+ struct nic_cmd_capture_info cap_info = {{0}};
+ u16 out_size = sizeof(cap_info);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ /* enable/disable capture on both TX and RX for this function */
+ cap_info.is_en_trx = cap_en;
+ cap_info.func_port = func_id;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_UCAPTURE_OPT,
+ &cap_info, sizeof(cap_info),
+ &cap_info, &out_size);
+ if (err || !out_size || cap_info.msg_head.status)
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_func_capture_en);
+
+int hinic3_add_tcam_rule(void *hwdev, struct nic_tcam_cfg_rule *tcam_rule)
+{
+ u16 out_size = sizeof(struct nic_cmd_fdir_add_rule);
+ struct nic_cmd_fdir_add_rule tcam_cmd;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !tcam_rule)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ if (tcam_rule->index >= HINIC3_MAX_TCAM_RULES_NUM) {
+ nic_err(nic_io->dev_hdl, "Tcam rules num to add is invalid\n");
+ return -EINVAL;
+ }
+
+ memset(&tcam_cmd, 0, sizeof(struct nic_cmd_fdir_add_rule));
+ memcpy((void *)&tcam_cmd.rule, (void *)tcam_rule,
+ sizeof(struct nic_tcam_cfg_rule));
+ tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ tcam_cmd.type = TCAM_RULE_FDIR_TYPE;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_ADD_TC_FLOW,
+ &tcam_cmd, sizeof(tcam_cmd),
+ &tcam_cmd, &out_size);
+ if (err || tcam_cmd.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_cmd.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_del_tcam_rule(void *hwdev, u32 index)
+{
+ u16 out_size = sizeof(struct nic_cmd_fdir_del_rules);
+ struct nic_cmd_fdir_del_rules tcam_cmd;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ if (index >= HINIC3_MAX_TCAM_RULES_NUM) {
+ nic_err(nic_io->dev_hdl, "Tcam rules num to del is invalid\n");
+ return -EINVAL;
+ }
+
+ memset(&tcam_cmd, 0, sizeof(struct nic_cmd_fdir_del_rules));
+ tcam_cmd.index_start = index;
+ tcam_cmd.index_num = 1;
+ tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ tcam_cmd.type = TCAM_RULE_FDIR_TYPE;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_DEL_TC_FLOW,
+ &tcam_cmd, sizeof(tcam_cmd),
+ &tcam_cmd, &out_size);
+ if (err || tcam_cmd.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Del tcam rule failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_cmd.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * hinic3_mgmt_tcam_block - alloc or free a tcam block for IO packets
+ * @hwdev: the hardware interface of a nic device
+ * @alloc_en: 1 - alloc block, 0 - free block
+ * @index: block index from firmware
+ *
+ * Return: 0 on success, negative error value otherwise.
+ */
+static int hinic3_mgmt_tcam_block(void *hwdev, u8 alloc_en, u16 *index)
+{
+ struct nic_cmd_ctrl_tcam_block_out tcam_block_info;
+ u16 out_size = sizeof(struct nic_cmd_ctrl_tcam_block_out);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !index)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ memset(&tcam_block_info, 0,
+ sizeof(struct nic_cmd_ctrl_tcam_block_out));
+
+ tcam_block_info.func_id = hinic3_global_func_id(hwdev);
+ tcam_block_info.alloc_en = alloc_en;
+ tcam_block_info.tcam_type = NIC_TCAM_BLOCK_TYPE_LARGE;
+ tcam_block_info.tcam_block_index = *index;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_TCAM_BLOCK,
+ &tcam_block_info, sizeof(tcam_block_info),
+ &tcam_block_info, &out_size);
+ if (err || tcam_block_info.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Set tcam block failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_block_info.head.status, out_size);
+ return -EIO;
+ }
+
+ if (alloc_en)
+ *index = tcam_block_info.tcam_block_index;
+
+ return 0;
+}
+
+int hinic3_alloc_tcam_block(void *hwdev, u16 *index)
+{
+ return hinic3_mgmt_tcam_block(hwdev, HINIC3_TCAM_BLOCK_ENABLE, index);
+}
+
+int hinic3_free_tcam_block(void *hwdev, u16 *index)
+{
+ return hinic3_mgmt_tcam_block(hwdev, HINIC3_TCAM_BLOCK_DISABLE, index);
+}
+
+int hinic3_set_fdir_tcam_rule_filter(void *hwdev, bool enable)
+{
+ struct nic_cmd_set_tcam_enable port_tcam_cmd;
+ u16 out_size = sizeof(port_tcam_cmd);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ memset(&port_tcam_cmd, 0, sizeof(port_tcam_cmd));
+ port_tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ port_tcam_cmd.tcam_enable = (u8)enable;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_ENABLE_TCAM,
+ &port_tcam_cmd, sizeof(port_tcam_cmd),
+ &port_tcam_cmd, &out_size);
+ if (err || port_tcam_cmd.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl, "Set fdir tcam filter failed, err: %d, status: 0x%x, out size: 0x%x, enable: 0x%x\n",
+ err, port_tcam_cmd.head.status, out_size,
+ enable);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_flush_tcam_rule(void *hwdev)
+{
+ struct nic_cmd_flush_tcam_rules tcam_flush;
+ u16 out_size = sizeof(struct nic_cmd_flush_tcam_rules);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&tcam_flush, 0, sizeof(struct nic_cmd_flush_tcam_rules));
+ tcam_flush.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FLUSH_TCAM,
+ &tcam_flush,
+ sizeof(struct nic_cmd_flush_tcam_rules),
+ &tcam_flush, &out_size);
+ if (err || tcam_flush.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Flush tcam fdir rules failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_flush.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
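+/* Read the hardware PI/CI of the first num_qps RQs through the command queue;
+ * the raw values are right-shifted by wqe_type before being returned.
+ */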
+int hinic3_get_rxq_hw_info(void *hwdev, struct rxq_check_info *rxq_info, u16 num_qps, u16 wqe_type)
+{
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_rxq_hw *rxq_hw = NULL;
+ struct rxq_check_info *rxq_info_out = NULL;
+ int err;
+ u16 i;
+
+ if (!hwdev || !rxq_info)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd_buf.\n");
+ return -ENOMEM;
+ }
+
+ rxq_hw = cmd_buf->buf;
+ rxq_hw->func_id = hinic3_global_func_id(hwdev);
+ rxq_hw->num_queues = num_qps;
+
+ hinic3_cpu_to_be32(rxq_hw, sizeof(struct hinic3_rxq_hw));
+
+ cmd_buf->size = sizeof(struct hinic3_rxq_hw);
+
+ err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, HINIC3_UCODE_CMD_RXQ_INFO_GET,
+ cmd_buf, cmd_buf, NULL, 0, HINIC3_CHANNEL_NIC);
+ if (err)
+ goto get_rxq_info_failed;
+
+ rxq_info_out = cmd_buf->buf;
+ for (i = 0; i < num_qps; i++) {
+ rxq_info[i].hw_pi = rxq_info_out[i].hw_pi >> wqe_type;
+ rxq_info[i].hw_ci = rxq_info_out[i].hw_ci >> wqe_type;
+ }
+
+get_rxq_info_failed:
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+
+ return err;
+}
+
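+/* Record a forced administrative link state for every VF of this PF; only the
+ * cached per-VF flags are updated here, no message is sent to the VFs.
+ */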
+int hinic3_pf_set_vf_link_state(void *hwdev, bool vf_link_forced, bool link_state)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct vf_data_storage *vf_infos = NULL;
+ int vf_id;
+
+ if (!hwdev) {
+ pr_err("hwdev is null.\n");
+ return -EINVAL;
+ }
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ pr_err("VF are not supported to set link state.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is null.\n");
+ return -EINVAL;
+ }
+
+ vf_infos = nic_io->vf_infos;
+ for (vf_id = 0; vf_id < nic_io->max_vfs; vf_id++) {
+ vf_infos[vf_id].link_up = link_state;
+ vf_infos[vf_id].link_forced = vf_link_forced;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_pf_set_vf_link_state);
+
+int hinic3_get_outband_vlan_cfg(void *hwdev, u16 *outband_default_vid)
+{
+ struct hinic3_outband_cfg_info outband_cfg_info;
+ u16 out_size = sizeof(outband_cfg_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !outband_default_vid)
+ return -EINVAL;
+
+ memset(&outband_cfg_info, 0, sizeof(outband_cfg_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_OUTBAND_CFG,
+ &outband_cfg_info,
+ sizeof(outband_cfg_info),
+ &outband_cfg_info, &out_size);
+ if (err || !out_size || outband_cfg_info.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get outband cfg, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, outband_cfg_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ *outband_default_vid = outband_cfg_info.outband_default_vid;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h
new file mode 100644
index 0000000..0fe7b9f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h
@@ -0,0 +1,664 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_CFG_H
+#define HINIC3_NIC_CFG_H
+
+#include <linux/types.h>
+#include <linux/netdevice.h>
+
+#include "nic_mpu_cmd_defs.h"
+#include "mag_mpu_cmd.h"
+#include "mag_mpu_cmd_defs.h"
+
+#define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1)
+#define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1)
+
+#define HINIC3_VLAN_PRIORITY_SHIFT 13
+
+#define HINIC3_RSS_INDIR_4B_UNIT 3
+#define HINIC3_RSS_INDIR_NUM 2
+
+#define HINIC3_RSS_KEY_RSV_NUM 2
+#define HINIC3_MAX_NUM_RQ 256
+
+#define HINIC3_MIN_MTU_SIZE 256
+#define HINIC3_MAX_JUMBO_FRAME_SIZE 9600
+
+#define HINIC3_PF_SET_VF_ALREADY 0x4
+#define HINIC3_MGMT_STATUS_EXIST 0x6
+#define CHECK_IPSU_15BIT 0x8000
+
+#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB /* Table empty */
+#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC /* Table full */
+
+#define HINIC3_LOWEST_LATENCY 3
+#define HINIC3_MULTI_VM_LATENCY 32
+#define HINIC3_MULTI_VM_PENDING_LIMIT 4
+
+#define HINIC3_RX_RATE_LOW 200000
+#define HINIC3_RX_COAL_TIME_LOW 25
+#define HINIC3_RX_PENDING_LIMIT_LOW 2
+
+#define HINIC3_RX_RATE_HIGH 700000
+#define HINIC3_RX_COAL_TIME_HIGH 225
+#define HINIC3_RX_PENDING_LIMIT_HIGH 8
+
+#define HINIC3_RX_RATE_THRESH 50000
+#define HINIC3_TX_RATE_THRESH 50000
+#define HINIC3_RX_RATE_LOW_VM 100000
+#define HINIC3_RX_PENDING_LIMIT_HIGH_VM 87
+
+#define HINIC3_DCB_PCP 0
+#define HINIC3_DCB_DSCP 1
+
+#define MAX_LIMIT_BW 100
+
+#define HINIC3_INVALID_BOND_ID 0xffffffff
+
+enum hinic3_valid_link_settings {
+ HILINK_LINK_SET_SPEED = 0x1,
+ HILINK_LINK_SET_AUTONEG = 0x2,
+ HILINK_LINK_SET_FEC = 0x4,
+};
+
+enum hinic3_link_follow_status {
+ HINIC3_LINK_FOLLOW_DEFAULT,
+ HINIC3_LINK_FOLLOW_PORT,
+ HINIC3_LINK_FOLLOW_SEPARATE,
+ HINIC3_LINK_FOLLOW_STATUS_MAX,
+};
+
+enum hinic3_nic_pf_direct {
+ HINIC3_NIC_RX = 0,
+ HINIC3_NIC_TX,
+};
+
+struct hinic3_link_ksettings {
+ u32 valid_bitmap;
+ u8 speed; /* enum nic_speed_level */
+ u8 autoneg; /* 0 - off; 1 - on */
+ u8 fec; /* 0 - RSFEC; 1 - BASEFEC; 2 - NOFEC */
+};
+
+u64 hinic3_get_feature_cap(void *hwdev);
+
+#define HINIC3_SUPPORT_FEATURE(hwdev, feature) \
+ (hinic3_get_feature_cap(hwdev) & NIC_F_##feature)
+#define HINIC3_SUPPORT_CSUM(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, CSUM)
+#define HINIC3_SUPPORT_SCTP_CRC(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, SCTP_CRC)
+#define HINIC3_SUPPORT_TSO(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, TSO)
+#define HINIC3_SUPPORT_UFO(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, UFO)
+#define HINIC3_SUPPORT_LRO(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, LRO)
+#define HINIC3_SUPPORT_RSS(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, RSS)
+#define HINIC3_SUPPORT_RXVLAN_FILTER(hwdev) \
+ HINIC3_SUPPORT_FEATURE(hwdev, RX_VLAN_FILTER)
+#define HINIC3_SUPPORT_VLAN_OFFLOAD(hwdev) \
+ (HINIC3_SUPPORT_FEATURE(hwdev, RX_VLAN_STRIP) && \
+ HINIC3_SUPPORT_FEATURE(hwdev, TX_VLAN_INSERT))
+#define HINIC3_SUPPORT_VXLAN_OFFLOAD(hwdev) \
+ HINIC3_SUPPORT_FEATURE(hwdev, VXLAN_OFFLOAD)
+#define HINIC3_SUPPORT_IPSEC_OFFLOAD(hwdev) \
+ HINIC3_SUPPORT_FEATURE(hwdev, IPSEC_OFFLOAD)
+#define HINIC3_SUPPORT_FDIR(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, FDIR)
+#define HINIC3_SUPPORT_PROMISC(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, PROMISC)
+#define HINIC3_SUPPORT_ALLMULTI(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, ALLMULTI)
+#define HINIC3_SUPPORT_VF_MAC(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, VF_MAC)
+#define HINIC3_SUPPORT_RATE_LIMIT(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, RATE_LIMIT)
+
+#define HINIC3_SUPPORT_RXQ_RECOVERY(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, RXQ_RECOVERY)
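+/*
+ * Illustrative usage: callers gate optional offloads on these capability bits,
+ * e.g. "if (HINIC3_SUPPORT_TSO(hwdev)) { ... }".
+ */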
+
+struct nic_rss_type {
+ u8 tcp_ipv6_ext;
+ u8 ipv6_ext;
+ u8 tcp_ipv6;
+ u8 ipv6;
+ u8 tcp_ipv4;
+ u8 ipv4;
+ u8 udp_ipv6;
+ u8 udp_ipv4;
+};
+
+enum hinic3_rss_hash_type {
+ HINIC3_RSS_HASH_ENGINE_TYPE_XOR = 0,
+ HINIC3_RSS_HASH_ENGINE_TYPE_TOEP,
+ HINIC3_RSS_HASH_ENGINE_TYPE_MAX,
+};
+
+/* rss */
+struct nic_rss_indirect_tbl {
+	u32 rsvd[4]; /* reserved 16B so that entry[] starts at a 16B offset */
+ u16 entry[NIC_RSS_INDIR_SIZE];
+};
+
+struct nic_rss_context_tbl {
+ u32 rsvd[4];
+ u32 ctx;
+};
+
+#define NIC_CONFIG_ALL_QUEUE_VLAN_CTX 0xFFFF
+struct nic_vlan_ctx {
+ u32 func_id;
+	u32 qid; /* if qid == 0xFFFF, configure all queues of the current function */
+ u32 vlan_tag;
+ u32 vlan_mode;
+ u32 vlan_sel;
+};
+
+enum hinic3_link_status {
+ HINIC3_LINK_DOWN = 0,
+ HINIC3_LINK_UP
+};
+
+struct nic_port_info {
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+ u8 lanes;
+ u8 rsvd;
+ u32 supported_mode;
+ u32 advertised_mode;
+ u32 supported_fec_mode;
+ u32 bond_speed;
+};
+
+struct nic_pause_config {
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+};
+
+struct rxq_check_info {
+ u16 hw_pi;
+ u16 hw_ci;
+};
+
+struct hinic3_rxq_hw {
+ u32 func_id;
+ u32 num_queues;
+
+ u32 rsvd[14];
+};
+
+#define MODULE_TYPE_SFP 0x3
+#define MODULE_TYPE_QSFP28 0x11
+#define MODULE_TYPE_QSFP 0x0C
+#define MODULE_TYPE_QSFP_PLUS 0x0D
+#define MODULE_TYPE_DSFP 0x1B
+#define MODULE_TYPE_QSFP_CMIS 0x1E
+
+#define TCAM_IP_TYPE_MASK 0x1
+#define TCAM_TUNNEL_TYPE_MASK 0xF
+#define TCAM_FUNC_ID_MASK 0x7FFF
+
+int hinic3_delete_bond(void *hwdev);
+int hinic3_open_close_bond(void *hwdev, u32 bond_en);
+int hinic3_create_bond(void *hwdev, u32 *bond_id);
+
+int hinic3_add_tcam_rule(void *hwdev, struct nic_tcam_cfg_rule *tcam_rule);
+int hinic3_del_tcam_rule(void *hwdev, u32 index);
+
+int hinic3_alloc_tcam_block(void *hwdev, u16 *index);
+int hinic3_free_tcam_block(void *hwdev, u16 *index);
+
+int hinic3_set_fdir_tcam_rule_filter(void *hwdev, bool enable);
+
+int hinic3_flush_tcam_rule(void *hwdev);
+
+/* *
+ * @brief hinic3_update_mac - update mac address to hardware
+ * @param hwdev: device pointer to hwdev
+ * @param old_mac: old mac to delete
+ * @param new_mac: new mac to update
+ * @param vlan_id: vlan id
+ * @param func_id: function index
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_update_mac(void *hwdev, const u8 *old_mac, u8 *new_mac, u16 vlan_id,
+ u16 func_id);
+
+/* *
+ * @brief hinic3_get_default_mac - get default mac address
+ * @param hwdev: device pointer to hwdev
+ * @param mac_addr: mac address from hardware
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_default_mac(void *hwdev, u8 *mac_addr);
+
+/* *
+ * @brief hinic3_set_port_mtu - set function mtu
+ * @param hwdev: device pointer to hwdev
+ * @param new_mtu: mtu
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_port_mtu(void *hwdev, u16 new_mtu);
+
+/* *
+ * @brief hinic3_get_link_state - get link state
+ * @param hwdev: device pointer to hwdev
+ * @param link_state: link state, 0-link down, 1-link up
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_link_state(void *hwdev, u8 *link_state);
+
+/* *
+ * @brief hinic3_get_vport_stats - get function stats
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: function index
+ * @param stats: function stats
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_vport_stats(void *hwdev, u16 func_id, struct hinic3_vport_stats *stats);
+
+/* *
+ * @brief hinic3_notify_all_vfs_link_changed - notify all vfs of link state change
+ * @param hwdev: device pointer to hwdev
+ * @param link_status: link state, 0-link down, 1-link up
+ */
+void hinic3_notify_all_vfs_link_changed(void *hwdev, u8 link_status);
+
+/* *
+ * @brief hinic3_force_drop_tx_pkt - force drop tx packet
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_force_drop_tx_pkt(void *hwdev);
+
+/* *
+ * @brief hinic3_set_rx_mode - set function rx mode
+ * @param hwdev: device pointer to hwdev
+ * @param enable: rx mode state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rx_mode(void *hwdev, u32 enable);
+
+/* *
+ * @brief hinic3_set_rx_vlan_offload - set function vlan offload valid state
+ * @param hwdev: device pointer to hwdev
+ * @param en: 0-disable, 1-enable
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rx_vlan_offload(void *hwdev, u8 en);
+
+/* *
+ * @brief hinic3_set_rx_lro_state - set rx LRO configuration
+ * @param hwdev: device pointer to hwdev
+ * @param lro_en: 0-disable, 1-enable
+ * @param lro_timer: LRO aggregation timeout
+ * @param lro_max_pkt_len: LRO coalesced packet size (unit: 1KB)
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rx_lro_state(void *hwdev, u8 lro_en, u32 lro_timer,
+ u32 lro_max_pkt_len);
+
+/* *
+ * @brief hinic3_set_vf_spoofchk - set vf spoofchk
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param spoofchk: spoofchk
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_spoofchk(void *hwdev, u16 vf_id, bool spoofchk);
+
+/* *
+ * @brief hinic3_vf_info_spoofchk - get vf spoofchk info
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @retval spoofchk state
+ */
+bool hinic3_vf_info_spoofchk(void *hwdev, int vf_id);
+
+/* *
+ * @brief hinic3_add_vf_vlan - add vf vlan id
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param vlan: vlan id
+ * @param qos: qos
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_add_vf_vlan(void *hwdev, int vf_id, u16 vlan, u8 qos);
+
+/* *
+ * @brief hinic3_kill_vf_vlan - kill vf vlan
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_kill_vf_vlan(void *hwdev, int vf_id);
+
+/* *
+ * @brief hinic3_set_vf_mac - set vf mac
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param mac_addr: vf mac address
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_mac(void *hwdev, int vf_id, const unsigned char *mac_addr);
+
+/* *
+ * @brief hinic3_vf_info_vlanprio - get vf vlan priority
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @retval vf vlan id and priority
+ */
+u16 hinic3_vf_info_vlanprio(void *hwdev, int vf_id);
+
+/* *
+ * @brief hinic3_set_vf_tx_rate - set vf tx rate
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param max_rate: max rate
+ * @param min_rate: min rate
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_tx_rate(void *hwdev, u16 vf_id, u32 max_rate, u32 min_rate);
+
+/* *
+ * @brief hinic3_get_vf_config - get vf configuration
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param ivi: vf info
+ */
+void hinic3_get_vf_config(void *hwdev, u16 vf_id, struct ifla_vf_info *ivi);
+
+/* *
+ * @brief hinic3_set_vf_link_state - set vf link state
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param link: link state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_link_state(void *hwdev, u16 vf_id, int link);
+
+/* *
+ * @brief hinic3_get_port_info - get port info
+ * @param hwdev: device pointer to hwdev
+ * @param port_info: port info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_port_info(void *hwdev, struct nic_port_info *port_info,
+ u16 channel);
+
+/* *
+ * @brief hinic3_set_rss_type - set rss type
+ * @param hwdev: device pointer to hwdev
+ * @param rss_type: rss type
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rss_type(void *hwdev, struct nic_rss_type rss_type);
+
+/* *
+ * @brief hinic3_get_rss_type - get rss type
+ * @param hwdev: device pointer to hwdev
+ * @param rss_type: rss type
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_rss_type(void *hwdev, struct nic_rss_type *rss_type);
+
+/* *
+ * @brief hinic3_rss_get_hash_engine - get rss hash engine
+ * @param hwdev: device pointer to hwdev
+ * @param type: hash engine
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_get_hash_engine(void *hwdev, u8 *type);
+
+/* *
+ * @brief hinic3_rss_set_hash_engine - set rss hash engine
+ * @param hwdev: device pointer to hwdev
+ * @param type: hash engine
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_set_hash_engine(void *hwdev, u8 type);
+
+/* *
+ * @brief hinic3_rss_cfg - set rss configuration
+ * @param hwdev: device pointer to hwdev
+ * @param rss_en: enable rss flag
+ * @param cos_num: number of cos
+ * @param prio_tc: priority to tc mapping
+ * @param num_qps: number of queues
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_cfg(void *hwdev, u8 rss_en, u8 cos_num, u8 *prio_tc,
+ u16 num_qps);
+
+/* *
+ * @brief hinic3_rss_set_hash_key - set rss hash key
+ * @param hwdev: device pointer to hwdev
+ * @param key: rss key
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_set_hash_key(void *hwdev, const u8 *key);
+
+/* *
+ * @brief hinic3_rss_get_hash_key - get rss hash key
+ * @param hwdev: device pointer to hwdev
+ * @param key: rss key
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_get_hash_key(void *hwdev, u8 *key);
+
+/* *
+ * @brief hinic3_refresh_nic_cfg - refresh port cfg
+ * @param hwdev: device pointer to hwdev
+ * @param port_info: port information
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_refresh_nic_cfg(void *hwdev, struct nic_port_info *port_info);
+
+/* *
+ * @brief hinic3_add_vlan - add vlan
+ * @param hwdev: device pointer to hwdev
+ * @param vlan_id: vlan id
+ * @param func_id: function id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_add_vlan(void *hwdev, u16 vlan_id, u16 func_id);
+
+/* *
+ * @brief hinic3_del_vlan - delete vlan
+ * @param hwdev: device pointer to hwdev
+ * @param vlan_id: vlan id
+ * @param func_id: function id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_del_vlan(void *hwdev, u16 vlan_id, u16 func_id);
+
+/* *
+ * @brief hinic3_rss_set_indir_tbl - set rss indirect table
+ * @param hwdev: device pointer to hwdev
+ * @param indir_table: rss indirect table
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_set_indir_tbl(void *hwdev, const u32 *indir_table);
+
+/* *
+ * @brief hinic3_rss_get_indir_tbl - get rss indirect table
+ * @param hwdev: device pointer to hwdev
+ * @param indir_table: rss indirect table
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_get_indir_tbl(void *hwdev, u32 *indir_table);
+
+/* *
+ * @brief hinic3_get_phy_port_stats - get port stats
+ * @param hwdev: device pointer to hwdev
+ * @param stats: port stats
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_phy_port_stats(void *hwdev, struct mag_cmd_port_stats *stats);
+
+/* *
+ * @brief hinic3_get_phy_rsfec_stats - get rsfec stats
+ * @param hwdev: device pointer to hwdev
+ * @param stats: rsfec(Reed-Solomon Forward Error Correction) stats
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_phy_rsfec_stats(void *hwdev, struct mag_cmd_rsfec_stats *stats);
+
+int hinic3_set_port_funcs_state(void *hwdev, bool enable);
+
+int hinic3_reset_port_link_cfg(void *hwdev);
+
+int hinic3_force_port_relink(void *hwdev);
+
+int hinic3_set_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state);
+
+int hinic3_dcb_set_pfc(void *hwdev, u8 pfc_en, u8 pfc_bitmap);
+
+int hinic3_dcb_get_pfc(void *hwdev, u8 *pfc_en_bitmap);
+
+int hinic3_dcb_set_ets(void *hwdev, u8 *cos_tc, u8 *cos_bw, u8 *cos_prio,
+ u8 *tc_bw, u8 *tc_prio);
+
+int hinic3_dcb_set_cos_up_map(void *hwdev, u8 cos_valid_bitmap, u8 *cos_up,
+ u8 max_cos_num);
+
+int hinic3_dcb_set_rq_iq_mapping(void *hwdev, u32 num_rqs, u8 *map,
+ u32 max_map_num);
+
+int hinic3_sync_dcb_state(void *hwdev, u8 op_code, u8 state);
+
+int hinic3_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause);
+
+int hinic3_set_pause_info(void *hwdev, struct nic_pause_config nic_pause);
+
+int hinic3_set_link_settings(void *hwdev,
+ struct hinic3_link_ksettings *settings);
+
+int hinic3_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl);
+
+void hinic3_clear_vfs_info(void *hwdev);
+
+int hinic3_notify_vf_outband_cfg(void *hwdev, u16 func_id, u16 vlan_id);
+
+int hinic3_update_mac_vlan(void *hwdev, u16 old_vlan, u16 new_vlan, int vf_id);
+
+int hinic3_set_led_status(void *hwdev, enum mag_led_type type,
+ enum mag_led_mode mode);
+
+int hinic3_set_func_capture_en(void *hwdev, u16 func_id, bool cap_en);
+
+int hinic3_set_loopback_mode(void *hwdev, u8 mode, u8 enable);
+int hinic3_get_loopback_mode(void *hwdev, u8 *mode, u8 *enable);
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+bool hinic3_get_vf_trust(void *hwdev, int vf_id);
+int hinic3_set_vf_trust(void *hwdev, u16 vf_id, bool trust);
+#endif
+
+int hinic3_set_autoneg(void *hwdev, bool enable);
+
+int hinic3_get_sfp_type(void *hwdev, u8 *sfp_type, u8 *sfp_type_ext);
+int hinic3_get_sfp_eeprom(void *hwdev, u8 *data, u32 len);
+int hinic3_get_tlv_xsfp_eeprom(void *hwdev, u8 *data, u32 len);
+
+bool hinic3_if_sfp_absent(void *hwdev);
+int hinic3_get_sfp_info(void *hwdev, struct mag_cmd_get_xsfp_info *sfp_info);
+int hinic3_get_sfp_tlv_info(void *hwdev,
+ struct drv_mag_cmd_get_xsfp_tlv_rsp *sfp_tlv_info,
+ const struct mag_cmd_get_xsfp_tlv_req *sfp_tlv_info_req);
+/* *
+ * @brief hinic3_set_nic_feature_to_hw - sync nic feature to hardware
+ * @param hwdev: device pointer to hwdev
+ */
+int hinic3_set_nic_feature_to_hw(void *hwdev);
+
+/* *
+ * @brief hinic3_update_nic_feature - update nic feature
+ * @param hwdev: device pointer to hwdev
+ * @param s_feature: nic features
+ */
+void hinic3_update_nic_feature(void *hwdev, u64 s_feature);
+
+/* *
+ * @brief hinic3_set_link_status_follow - set link follow status
+ * @param hwdev: device pointer to hwdev
+ * @param status: link follow status
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_link_status_follow(void *hwdev, enum hinic3_link_follow_status status);
+
+/* *
+ * @brief hinic3_update_pf_bw - update pf bandwidth
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_update_pf_bw(void *hwdev);
+
+/* *
+ * @brief hinic3_set_pf_bw_limit - set pf bandwidth limit
+ * @param hwdev: device pointer to hwdev
+ * @param bw_limit: pf bandwidth limit
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_pf_bw_limit(void *hwdev, u32 bw_limit);
+
+/* *
+ * @brief hinic3_set_pf_rate - set pf rate
+ * @param hwdev: device pointer to hwdev
+ * @param speed_level: speed level
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_pf_rate(void *hwdev, u8 speed_level);
+
+int hinic3_get_rxq_hw_info(void *hwdev, struct rxq_check_info *rxq_info, u16 num_qps, u16 wqe_type);
+
+#if defined(HAVE_NDO_UDP_TUNNEL_ADD) || defined(HAVE_UDP_TUNNEL_NIC_INFO)
+/* *
+ * @brief hinic3_vlxan_port_config - add/del vxlan dst port
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: function id
+ * @param port: vxlan dst port
+ * @param action: add or del; del restores the default value (0x12B5)
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_vlxan_port_config(void *hwdev, u16 func_id, u16 port, u8 action);
+#endif /* HAVE_NDO_UDP_TUNNEL_ADD || HAVE_UDP_TUNNEL_NIC_INFO */
+
+int hinic3_get_outband_vlan_cfg(void *hwdev, u16 *outband_default_vid);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c
new file mode 100644
index 0000000..654673f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c
@@ -0,0 +1,726 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "nic_mpu_cmd.h"
+#include "nic_npu_cmd.h"
+
+/*lint -e806*/
+static unsigned char set_vf_link_state;
+module_param(set_vf_link_state, byte, 0444);
+MODULE_PARM_DESC(set_vf_link_state, "Set VF link state: 0 - link auto, 1 - link always up, 2 - link always down (default is 0).");
+/*lint +e806*/
+
+/* In order to adapt to different Linux versions */
+enum {
+ HINIC3_IFLA_VF_LINK_STATE_AUTO, /* link state of the uplink */
+ HINIC3_IFLA_VF_LINK_STATE_ENABLE, /* link always up */
+ HINIC3_IFLA_VF_LINK_STATE_DISABLE, /* link always down */
+};
+
+#define NIC_CVLAN_INSERT_ENABLE 0x1
+#define NIC_QINQ_INSERT_ENABLE	0x3
+static int hinic3_set_vlan_ctx(struct hinic3_nic_io *nic_io, u16 func_id,
+ u16 vlan_tag, u16 q_id, bool add)
+{
+ struct nic_vlan_ctx *vlan_ctx = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ u64 out_param = 0;
+ int err;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_vlan_ctx);
+ vlan_ctx = (struct nic_vlan_ctx *)cmd_buf->buf;
+
+ vlan_ctx->func_id = func_id;
+ vlan_ctx->qid = q_id;
+ vlan_ctx->vlan_tag = vlan_tag;
+ vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */
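+	/* add: enable QinQ insertion; otherwise fall back to CVLAN insertion */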
+ vlan_ctx->vlan_mode = add ?
+ NIC_QINQ_INSERT_ENABLE : NIC_CVLAN_INSERT_ENABLE;
+
+ hinic3_cpu_to_be32(vlan_ctx, sizeof(struct nic_vlan_ctx));
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_VLAN_CTX,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set vlan context, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+ return -EFAULT;
+ }
+
+ return err;
+}
+
+int hinic3_cfg_vf_vlan(struct hinic3_nic_io *nic_io, u8 opcode, u16 vid,
+ u8 qos, int vf_id)
+{
+ struct hinic3_cmd_vf_vlan_config vf_vlan;
+ u16 out_size = sizeof(vf_vlan);
+ u16 glb_func_id;
+ int err;
+ u16 vlan_tag;
+
+ /* VLAN 0 is a special case, don't allow it to be removed */
+ if (!vid && opcode == HINIC3_CMD_OP_DEL)
+ return 0;
+
+ memset(&vf_vlan, 0, sizeof(vf_vlan));
+
+ vf_vlan.opcode = opcode;
+ vf_vlan.func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+ vf_vlan.vlan_id = vid;
+ vf_vlan.qos = qos;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_CFG_VF_VLAN,
+ &vf_vlan, sizeof(vf_vlan),
+ &vf_vlan, &out_size);
+ if (err || !out_size || vf_vlan.msg_head.status) {
+		nic_err(nic_io->dev_hdl, "Failed to set VF %d vlan, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err, vf_vlan.msg_head.status,
+ out_size);
+ return -EFAULT;
+ }
+
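+	/* Combine the VLAN id with the priority (qos << VLAN_PRIO_SHIFT) into one tag */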
+ vlan_tag = vid + (u16)(qos << VLAN_PRIO_SHIFT);
+
+ glb_func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+ err = hinic3_set_vlan_ctx(nic_io, glb_func_id, vlan_tag,
+ NIC_CONFIG_ALL_QUEUE_VLAN_CTX,
+ opcode == HINIC3_CMD_OP_ADD);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d vlan ctx, err: %d\n",
+ HW_VF_ID_TO_OS(vf_id), err);
+
+ /* rollback vlan config */
+ if (opcode == HINIC3_CMD_OP_DEL)
+ vf_vlan.opcode = HINIC3_CMD_OP_ADD;
+ else
+ vf_vlan.opcode = HINIC3_CMD_OP_DEL;
+ l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_VF_VLAN, &vf_vlan,
+ sizeof(vf_vlan), &vf_vlan, &out_size);
+ return err;
+ }
+
+ return 0;
+}
+
+/* This function must only be called by hinic3_ndo_set_vf_mac;
+ * other callers are not permitted.
+ */
+int hinic3_set_vf_mac(void *hwdev, int vf_id, const unsigned char *mac_addr)
+{
+ struct vf_data_storage *vf_info = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+
+ /* duplicate request, so just return success */
+ if (ether_addr_equal(vf_info->user_mac_addr, mac_addr))
+ return 0;
+
+ ether_addr_copy(vf_info->user_mac_addr, mac_addr);
+
+ return 0;
+}
+
+int hinic3_add_vf_vlan(void *hwdev, int vf_id, u16 vlan, u8 qos)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ err = hinic3_cfg_vf_vlan(nic_io, HINIC3_CMD_OP_ADD, vlan, qos, vf_id);
+ if (err != 0)
+ return err;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan = vlan;
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos = qos;
+
+ nic_info(nic_io->dev_hdl, "Setting VLAN %u, QOS 0x%x on VF %d\n",
+ vlan, qos, HW_VF_ID_TO_OS(vf_id));
+
+ return 0;
+}
+
+int hinic3_kill_vf_vlan(void *hwdev, int vf_id)
+{
+ struct vf_data_storage *vf_infos = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ vf_infos = nic_io->vf_infos;
+
+ err = hinic3_cfg_vf_vlan(nic_io, HINIC3_CMD_OP_DEL,
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan,
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos, vf_id);
+ if (err != 0)
+ return err;
+
+ nic_info(nic_io->dev_hdl, "Remove VLAN %u on VF %d\n",
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan,
+ HW_VF_ID_TO_OS(vf_id));
+
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan = 0;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos = 0;
+
+ return 0;
+}
+
+u16 hinic3_vf_info_vlanprio(void *hwdev, int vf_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 pf_vlan, vlanprio;
+ u8 pf_qos;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return 0;
+
+ pf_vlan = nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan;
+ pf_qos = nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos;
+ vlanprio = (u16)(pf_vlan | (pf_qos << HINIC3_VLAN_PRIORITY_SHIFT));
+
+ return vlanprio;
+}
+
+int hinic3_set_vf_link_state(void *hwdev, u16 vf_id, int link)
+{
+ struct hinic3_nic_io *nic_io =
+ hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ struct vf_data_storage *vf_infos = NULL;
+ u8 link_status = 0;
+
+ if (!nic_io)
+ return -EINVAL;
+
+ vf_infos = nic_io->vf_infos;
+
+ switch (link) {
+ case HINIC3_IFLA_VF_LINK_STATE_AUTO:
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced = false;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up = nic_io->link_status ?
+ true : false;
+ link_status = nic_io->link_status;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_ENABLE:
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced = true;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up = true;
+ link_status = HINIC3_LINK_UP;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_DISABLE:
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced = true;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up = false;
+ link_status = HINIC3_LINK_DOWN;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* Notify the VF of its new link state */
+ hinic3_notify_vf_link_status(nic_io, vf_id, link_status);
+
+ return 0;
+}
+
+int hinic3_set_vf_spoofchk(void *hwdev, u16 vf_id, bool spoofchk)
+{
+ struct hinic3_cmd_spoofchk_set spoofchk_cfg;
+ struct vf_data_storage *vf_infos = NULL;
+ u16 out_size = sizeof(spoofchk_cfg);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ vf_infos = nic_io->vf_infos;
+
+ memset(&spoofchk_cfg, 0, sizeof(spoofchk_cfg));
+
+ spoofchk_cfg.func_id = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+ spoofchk_cfg.state = spoofchk ? 1 : 0;
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_SPOOPCHK_STATE,
+ &spoofchk_cfg,
+ sizeof(spoofchk_cfg), &spoofchk_cfg,
+ &out_size);
+ if (err || !out_size || spoofchk_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF(%d) spoofchk, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err,
+ spoofchk_cfg.msg_head.status, out_size);
+ err = -EINVAL;
+ }
+
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].spoofchk = spoofchk;
+
+ return err;
+}
+
+bool hinic3_vf_info_spoofchk(void *hwdev, int vf_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return false;
+
+ return nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].spoofchk;
+}
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+int hinic3_set_vf_trust(void *hwdev, u16 vf_id, bool trust)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io || vf_id > nic_io->max_vfs)
+ return -EINVAL;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].trust = trust;
+
+ return 0;
+}
+
+bool hinic3_get_vf_trust(void *hwdev, int vf_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return false;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io || vf_id > nic_io->max_vfs)
+ return false;
+
+ return nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].trust;
+}
+#endif
+
+static int hinic3_set_vf_tx_rate_max_min(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u32 max_rate, u32 min_rate)
+{
+ struct hinic3_cmd_tx_rate_cfg rate_cfg;
+ u16 out_size = sizeof(rate_cfg);
+ int err;
+
+ memset(&rate_cfg, 0, sizeof(rate_cfg));
+
+ rate_cfg.func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf_id;
+ rate_cfg.max_rate = max_rate;
+ rate_cfg.min_rate = min_rate;
+ rate_cfg.direct = HINIC3_NIC_TX;
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_SET_MAX_MIN_RATE,
+ &rate_cfg, sizeof(rate_cfg), &rate_cfg,
+ &out_size);
+ if (rate_cfg.msg_head.status || err || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d max rate %u, min rate %u, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), max_rate, min_rate, err,
+ rate_cfg.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_set_vf_tx_rate(void *hwdev, u16 vf_id, u32 max_rate, u32 min_rate)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (!HINIC3_SUPPORT_RATE_LIMIT(hwdev)) {
+		nic_err(nic_io->dev_hdl, "Current function does not support setting the vf rate limit\n");
+ return -EOPNOTSUPP;
+ }
+
+ err = hinic3_set_vf_tx_rate_max_min(nic_io, vf_id, max_rate, min_rate);
+ if (err != 0)
+ return err;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].max_rate = max_rate;
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].min_rate = min_rate;
+
+ return 0;
+}
+
+void hinic3_get_vf_config(void *hwdev, u16 vf_id, struct ifla_vf_info *ivi)
+{
+ struct vf_data_storage *vfinfo = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ vfinfo = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ if (!vfinfo)
+ return;
+
+ ivi->vf = HW_VF_ID_TO_OS(vf_id);
+ ether_addr_copy(ivi->mac, vfinfo->user_mac_addr);
+ ivi->vlan = vfinfo->pf_vlan;
+ ivi->qos = vfinfo->pf_qos;
+
+#ifdef HAVE_VF_SPOOFCHK_CONFIGURE
+ ivi->spoofchk = vfinfo->spoofchk;
+#endif
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+ ivi->trusted = vfinfo->trust;
+#endif
+
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ ivi->max_tx_rate = vfinfo->max_rate;
+ ivi->min_tx_rate = vfinfo->min_rate;
+#else
+ ivi->tx_rate = vfinfo->max_rate;
+#endif /* HAVE_NDO_SET_VF_MIN_MAX_TX_RATE */
+
+#ifdef HAVE_NDO_SET_VF_LINK_STATE
+ if (!vfinfo->link_forced)
+ ivi->linkstate = IFLA_VF_LINK_STATE_AUTO;
+ else if (vfinfo->link_up)
+ ivi->linkstate = IFLA_VF_LINK_STATE_ENABLE;
+ else
+ ivi->linkstate = IFLA_VF_LINK_STATE_DISABLE;
+#endif
+}
+
+static int hinic3_init_vf_infos(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ u8 vf_link_state;
+
+ if (set_vf_link_state > HINIC3_IFLA_VF_LINK_STATE_DISABLE) {
+ nic_warn(nic_io->dev_hdl, "Module Parameter set_vf_link_state value %u is out of range, resetting to %d\n",
+ set_vf_link_state, HINIC3_IFLA_VF_LINK_STATE_AUTO);
+ set_vf_link_state = HINIC3_IFLA_VF_LINK_STATE_AUTO;
+ }
+
+ vf_link_state = set_vf_link_state;
+
+ switch (vf_link_state) {
+ case HINIC3_IFLA_VF_LINK_STATE_AUTO:
+ vf_infos[vf_id].link_forced = false;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_ENABLE:
+ vf_infos[vf_id].link_forced = true;
+ vf_infos[vf_id].link_up = true;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_DISABLE:
+ vf_infos[vf_id].link_forced = true;
+ vf_infos[vf_id].link_up = false;
+ break;
+ default:
+ nic_err(nic_io->dev_hdl, "Input parameter set_vf_link_state error: %u\n",
+ vf_link_state);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int vf_func_register(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_cmd_register_vf register_info;
+ u16 out_size = sizeof(register_info);
+ int err;
+
+ err = hinic3_register_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ nic_io->hwdev, hinic3_vf_event_handler);
+ if (err != 0)
+ return err;
+
+ err = hinic3_register_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK,
+ nic_io->hwdev, hinic3_vf_mag_event_handler);
+ if (err != 0)
+ goto reg_hilink_err;
+
+ memset(®ister_info, 0, sizeof(register_info));
+ register_info.op_register = 1;
+ register_info.support_extra_feature = 0;
+ err = hinic3_mbox_to_pf(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_NIC_CMD_VF_REGISTER,
+ ®ister_info, sizeof(register_info),
+ ®ister_info, &out_size, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || !out_size || register_info.msg_head.status) {
+ if (hinic3_is_slave_host(nic_io->hwdev)) {
+ nic_warn(nic_io->dev_hdl,
+ "Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, register_info.msg_head.status, out_size);
+ return 0;
+ }
+ nic_err(nic_io->dev_hdl, "Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, register_info.msg_head.status, out_size);
+ err = -EIO;
+ goto register_err;
+ }
+
+ return 0;
+
+register_err:
+ hinic3_unregister_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+
+reg_hilink_err:
+ hinic3_unregister_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+
+ return err;
+}
+
+static int pf_init_vf_infos(struct hinic3_nic_io *nic_io)
+{
+ u32 size;
+ int err;
+ u16 i;
+
+ nic_io->max_vfs = hinic3_func_max_vf(nic_io->hwdev);
+ size = sizeof(*nic_io->vf_infos) * nic_io->max_vfs;
+ if (!size)
+ return 0;
+
+ nic_io->vf_infos = kzalloc(size, GFP_KERNEL);
+ if (!nic_io->vf_infos)
+ return -ENOMEM;
+
+ for (i = 0; i < nic_io->max_vfs; i++) {
+ err = hinic3_init_vf_infos(nic_io, i);
+ if (err != 0)
+ goto init_vf_infos_err;
+ }
+
+ err = hinic3_register_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ nic_io->hwdev, hinic3_pf_mbox_handler);
+ if (err != 0)
+ goto register_pf_mbox_cb_err;
+
+ err = hinic3_register_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK,
+ nic_io->hwdev, hinic3_pf_mag_mbox_handler);
+ if (err != 0)
+ goto register_pf_mag_mbox_cb_err;
+
+ return 0;
+
+register_pf_mag_mbox_cb_err:
+ hinic3_unregister_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+register_pf_mbox_cb_err:
+init_vf_infos_err:
+ kfree(nic_io->vf_infos);
+
+ return err;
+}
+
+int hinic3_vf_func_init(struct hinic3_nic_io *nic_io)
+{
+ int err;
+
+ if (hinic3_func_type(nic_io->hwdev) == TYPE_VF)
+ return vf_func_register(nic_io);
+
+ err = hinic3_register_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ nic_io->hwdev, hinic3_pf_event_handler);
+ if (err != 0)
+ return err;
+
+ err = hinic3_register_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_HILINK,
+ nic_io->hwdev, hinic3_pf_mag_event_handler);
+ if (err != 0)
+ goto register_mgmt_msg_cb_err;
+
+ err = pf_init_vf_infos(nic_io);
+ if (err != 0)
+ goto pf_init_vf_infos_err;
+
+ return 0;
+
+pf_init_vf_infos_err:
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+register_mgmt_msg_cb_err:
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+
+ return err;
+}
+
+void hinic3_vf_func_free(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_cmd_register_vf unregister;
+ u16 out_size = sizeof(unregister);
+ int err;
+
+ memset(&unregister, 0, sizeof(unregister));
+ unregister.op_register = 0;
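+	/* A VF notifies its PF that it is going away; a PF tears down its per-VF state */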
+ if (hinic3_func_type(nic_io->hwdev) == TYPE_VF) {
+ err = hinic3_mbox_to_pf(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_NIC_CMD_VF_REGISTER,
+ &unregister, sizeof(unregister),
+ &unregister, &out_size, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || !out_size || unregister.msg_head.status) {
+ if (hinic3_is_slave_host(nic_io->hwdev))
+ nic_info(nic_io->dev_hdl,
+					 "vRoCE VF failed to notify PF, which is allowed on the slave host\n");
+ else
+ nic_err(nic_io->dev_hdl,
+ "Failed to unregister VF, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, unregister.msg_head.status, out_size);
+ }
+
+ hinic3_unregister_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+ } else {
+ if (nic_io->vf_infos) {
+ hinic3_unregister_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+ hinic3_unregister_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+ hinic3_clear_vfs_info(nic_io->hwdev);
+ kfree(nic_io->vf_infos);
+ nic_io->vf_infos = NULL;
+ }
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+ }
+}
+
+static void clear_vf_infos(void *hwdev, u16 vf_id)
+{
+ struct vf_data_storage *vf_infos = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 func_id;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ func_id = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+ vf_infos = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ if (vf_infos->use_specified_mac)
+ hinic3_del_mac(hwdev, vf_infos->drv_mac_addr,
+ vf_infos->pf_vlan, func_id, HINIC3_CHANNEL_NIC);
+
+ if (hinic3_vf_info_vlanprio(hwdev, vf_id))
+ hinic3_kill_vf_vlan(hwdev, vf_id);
+
+ if (vf_infos->max_rate)
+ hinic3_set_vf_tx_rate(hwdev, vf_id, 0, 0);
+
+ if (vf_infos->spoofchk)
+ hinic3_set_vf_spoofchk(hwdev, vf_id, false);
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+ if (vf_infos->trust)
+ hinic3_set_vf_trust(hwdev, vf_id, false);
+#endif
+
+ memset(vf_infos, 0, sizeof(*vf_infos));
+ /* set vf_infos to default */
+ hinic3_init_vf_infos(nic_io, HW_VF_ID_TO_OS(vf_id));
+}
+
+void hinic3_clear_vfs_info(void *hwdev)
+{
+ u16 i;
+ struct hinic3_nic_io *nic_io =
+		hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+	if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ for (i = 0; i < nic_io->max_vfs; i++)
+ clear_vf_infos(hwdev, OS_VF_ID_TO_HW(i));
+}
+
+int hinic3_notify_vf_outband_cfg(void *hwdev, u16 func_id, u16 vlan_id)
+{
+ int err = 0;
+ struct hinic3_outband_cfg_info outband_cfg_info;
+ struct vf_data_storage *vf_infos = NULL;
+ u16 out_size = sizeof(outband_cfg_info);
+ u16 vf_id;
+	struct hinic3_nic_io *nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+	if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return 0;
+ }
+
+ vf_id = func_id - hinic3_glb_pf_vf_offset(nic_io->hwdev);
+ vf_infos = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+
+ memset(&outband_cfg_info, 0, sizeof(outband_cfg_info));
+ if (vf_infos->registered) {
+ outband_cfg_info.func_id = func_id;
+ outband_cfg_info.outband_default_vid = vlan_id;
+ err = hinic3_mbox_to_vf_no_ack(nic_io->hwdev, vf_id,
+ HINIC3_MOD_L2NIC,
+ HINIC3_NIC_CMD_OUTBAND_CFG_NOTICE,
+ &outband_cfg_info,
+ sizeof(outband_cfg_info),
+ &outband_cfg_info, &out_size,
+ HINIC3_CHANNEL_NIC);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+ nic_warn(nic_io->dev_hdl, "VF%d not initialized, disconnect it\n",
+ HW_VF_ID_TO_OS(vf_id));
+ hinic3_unregister_vf(nic_io, vf_id);
+ return 0;
+ }
+ if (err || !out_size || outband_cfg_info.msg_head.status)
+ nic_err(nic_io->dev_hdl,
+ "outband cfg event to VF %d failed, err: %d, status: 0x%x, out_size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err,
+ outband_cfg_info.msg_head.status, out_size);
+ }
+
+ return err;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c
new file mode 100644
index 0000000..2878f66
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c
@@ -0,0 +1,159 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+
+int hinic3_dbg_get_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
+ u8 *wqe, const u16 *wqe_size, enum hinic3_queue_type q_type)
+{
+ struct hinic3_io_queue *queue = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ void *src_wqebb = NULL;
+ u32 i, offset;
+
+ if (!hwdev) {
+ pr_err("hwdev is NULL.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (q_id >= nic_io->num_qps) {
+ pr_err("q_id[%u] > num_qps_cfg[%u].\n", q_id, nic_io->num_qps);
+ return -EINVAL;
+ }
+
+ queue = (q_type == HINIC3_SQ) ? &nic_io->sq[q_id] : &nic_io->rq[q_id];
+
+ if ((idx + wqebb_cnt) > queue->wq.q_depth) {
+		pr_err("(idx[%u] + wqebb_cnt[%u]) > q_depth[%u].\n", idx, wqebb_cnt, queue->wq.q_depth);
+ return -EINVAL;
+ }
+
+ if (*wqe_size != (queue->wq.wqebb_size * wqebb_cnt)) {
+		pr_err("Unexpected out buf size from user: %u, expect: %d\n",
+ *wqe_size, (queue->wq.wqebb_size * wqebb_cnt));
+ return -EINVAL;
+ }
+
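+	/* Copy wqebb_cnt WQE basic blocks, wrapping the ring index with WQ_MASK_IDX() */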
+ for (i = 0; i < wqebb_cnt; i++) {
+ src_wqebb = hinic3_wq_wqebb_addr(&queue->wq, (u16)WQ_MASK_IDX(&queue->wq, idx + i));
+ offset = queue->wq.wqebb_size * i;
+ memcpy(wqe + offset, src_wqebb, queue->wq.wqebb_size);
+ }
+
+ return 0;
+}
+
+int hinic3_dbg_get_sq_info(void *hwdev, u16 q_id, struct nic_sq_info *sq_info,
+ u32 msg_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_io_queue *sq = NULL;
+
+ if (!hwdev || !sq_info) {
+ pr_err("hwdev or sq_info is NULL.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (q_id >= nic_io->num_qps) {
+ nic_err(nic_io->dev_hdl, "Input queue id(%u) is larger than the actual queue number\n",
+ q_id);
+ return -EINVAL;
+ }
+
+ if (msg_size != sizeof(*sq_info)) {
+		nic_err(nic_io->dev_hdl, "Unexpected out buf size from user: %u, expect: %zu\n",
+ msg_size, sizeof(*sq_info));
+ return -EINVAL;
+ }
+
+ sq = &nic_io->sq[q_id];
+ if (!sq)
+ return -EINVAL;
+
+ sq_info->q_id = q_id;
+ sq_info->pi = hinic3_get_sq_local_pi(sq);
+ sq_info->ci = hinic3_get_sq_local_ci(sq);
+ sq_info->fi = hinic3_get_sq_hw_ci(sq);
+ sq_info->q_depth = sq->wq.q_depth;
+ sq_info->wqebb_size = sq->wq.wqebb_size;
+
+ sq_info->ci_addr = sq->tx.cons_idx_addr;
+
+ sq_info->cla_addr = sq->wq.wq_block_paddr;
+ sq_info->slq_handle = sq;
+
+ sq_info->doorbell.map_addr = (u64 *)sq->db_addr;
+
+ return 0;
+}
+
+int hinic3_dbg_get_rq_info(void *hwdev, u16 q_id, struct nic_rq_info *rq_info,
+ u32 msg_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_io_queue *rq = NULL;
+
+ if (!hwdev || !rq_info) {
+ pr_err("hwdev or rq_info is NULL.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (q_id >= nic_io->num_qps) {
+ nic_err(nic_io->dev_hdl, "Input queue id(%u) is larger than the actual queue number\n",
+ q_id);
+ return -EINVAL;
+ }
+
+ if (msg_size != sizeof(*rq_info)) {
+		nic_err(nic_io->dev_hdl, "Unexpected out buf size from user: %u, expect: %zu\n",
+ msg_size, sizeof(*rq_info));
+ return -EINVAL;
+ }
+
+ rq = &nic_io->rq[q_id];
+ if (!rq)
+ return -EINVAL;
+
+ rq_info->q_id = q_id;
+
+ rq_info->hw_pi = cpu_to_be16(*rq->rx.pi_virt_addr);
+
+ rq_info->wqebb_size = rq->wq.wqebb_size;
+ rq_info->q_depth = (u16)rq->wq.q_depth;
+
+ rq_info->buf_len = nic_io->rx_buff_len;
+
+ rq_info->slq_handle = rq;
+
+ rq_info->ci_wqe_page_addr = hinic3_wq_get_first_wqe_page_addr(&rq->wq);
+ rq_info->ci_cla_tbl_addr = rq->wq.wq_block_paddr;
+
+ rq_info->msix_idx = rq->msix_entry_idx;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h
new file mode 100644
index 0000000..4ba96d5
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_DBG_H
+#define HINIC3_NIC_DBG_H
+
+#include "hinic3_mt.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+
+int hinic3_dbg_get_sq_info(void *hwdev, u16 q_id, struct nic_sq_info *sq_info,
+ u32 msg_size);
+
+int hinic3_dbg_get_rq_info(void *hwdev, u16 q_id, struct nic_rq_info *rq_info,
+ u32 msg_size);
+
+int hinic3_dbg_get_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
+ u8 *wqe, const u16 *wqe_size,
+ enum hinic3_queue_type q_type);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
new file mode 100644
index 0000000..3d2a962
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
@@ -0,0 +1,462 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_DEV_H
+#define HINIC3_NIC_DEV_H
+
+#include <linux/netdevice.h>
+#include <linux/semaphore.h>
+#include <linux/types.h>
+#include <linux/bitops.h>
+
+#include "ossl_knl.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_dcb.h"
+#include "vram_common.h"
+
+#define HINIC3_NIC_DRV_NAME "hinic3"
+#define HINIC3_NIC_DRV_VERSION "17.7.8.101"
+
+#define HINIC3_FUNC_IS_VF(hwdev) (hinic3_func_type(hwdev) == TYPE_VF)
+
+#define HINIC3_AVG_PKT_SMALL 256U
+#define HINIC3_MODERATONE_DELAY HZ
+
+#define LP_PKT_CNT 64
+#define LP_PKT_LEN 60
+
+#define NAPI_IS_REGIN 1
+#define NAPI_NOT_REGIN 0
+
+enum hinic3_flags {
+ HINIC3_INTF_UP,
+ HINIC3_MAC_FILTER_CHANGED,
+ HINIC3_LP_TEST,
+ HINIC3_RSS_ENABLE,
+ HINIC3_DCB_ENABLE,
+ HINIC3_SAME_RXTX,
+ HINIC3_INTR_ADAPT,
+ HINIC3_UPDATE_MAC_FILTER,
+ HINIC3_CHANGE_RES_INVALID,
+ HINIC3_RSS_DEFAULT_INDIR,
+ HINIC3_FORCE_LINK_UP,
+ HINIC3_BONDING_MASTER,
+ HINIC3_AUTONEG_RESET,
+ HINIC3_RXQ_RECOVERY,
+};
+
+#define HINIC3_CHANNEL_RES_VALID(nic_dev) \
+ (test_bit(HINIC3_INTF_UP, &(nic_dev)->flags) && \
+ !test_bit(HINIC3_CHANGE_RES_INVALID, &(nic_dev)->flags))
+
+#define RX_BUFF_NUM_PER_PAGE 2
+
+#define VLAN_BITMAP_BYTE_SIZE(nic_dev) (sizeof(*(nic_dev)->vlan_bitmap))
+#define VLAN_BITMAP_BITS_SIZE(nic_dev) (VLAN_BITMAP_BYTE_SIZE(nic_dev) * 8)
+#define VLAN_NUM_BITMAPS(nic_dev) (VLAN_N_VID / \
+ VLAN_BITMAP_BITS_SIZE(nic_dev))
+#define VLAN_BITMAP_SIZE(nic_dev) (VLAN_N_VID / \
+ VLAN_BITMAP_BYTE_SIZE(nic_dev))
+#define VID_LINE(nic_dev, vid) ((vid) / VLAN_BITMAP_BITS_SIZE(nic_dev))
+#define VID_COL(nic_dev, vid) ((vid) & (VLAN_BITMAP_BITS_SIZE(nic_dev) - 1))
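+/*
+ * vlan_bitmap is an array of unsigned longs covering VLAN_N_VID bits:
+ * VID_LINE() selects the array element and VID_COL() the bit within it.
+ */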
+
+#define NIC_DRV_DEFAULT_FEATURE NIC_F_ALL_MASK
+
+enum hinic3_event_work_flags {
+ EVENT_WORK_TX_TIMEOUT,
+};
+
+enum hinic3_rx_mode_state {
+ HINIC3_HW_PROMISC_ON,
+ HINIC3_HW_ALLMULTI_ON,
+ HINIC3_PROMISC_FORCE_ON,
+ HINIC3_ALLMULTI_FORCE_ON,
+};
+
+enum mac_filter_state {
+ HINIC3_MAC_WAIT_HW_SYNC,
+ HINIC3_MAC_HW_SYNCED,
+ HINIC3_MAC_WAIT_HW_UNSYNC,
+ HINIC3_MAC_HW_UNSYNCED,
+};
+
+struct hinic3_mac_filter {
+ struct list_head list;
+ u8 addr[ETH_ALEN];
+ unsigned long state;
+};
+
+struct hinic3_irq {
+ struct net_device *netdev;
+	/* MSI-X entry index corresponding to this IRQ */
+ u16 msix_entry_idx;
+ u16 rsvd1;
+ u32 irq_id; /* The IRQ number from OS */
+
+ u32 napi_reign;
+
+ char irq_name[IFNAMSIZ + 16];
+ struct napi_struct napi;
+ cpumask_t affinity_mask;
+ struct hinic3_txq *txq;
+ struct hinic3_rxq *rxq;
+};
+
+struct hinic3_intr_coal_info {
+ u8 pending_limt;
+ u8 coalesce_timer_cfg;
+ u8 resend_timer_cfg;
+
+ u64 pkt_rate_low;
+ u8 rx_usecs_low;
+ u8 rx_pending_limt_low;
+ u64 pkt_rate_high;
+ u8 rx_usecs_high;
+ u8 rx_pending_limt_high;
+
+ u8 user_set_intr_coal_flag;
+};
+
+struct hinic3_dyna_txrxq_params {
+ u16 num_qps;
+ u8 num_cos;
+ u8 rsvd1;
+ u32 sq_depth;
+ u32 rq_depth;
+
+ struct hinic3_dyna_txq_res *txqs_res;
+ struct hinic3_dyna_rxq_res *rxqs_res;
+ struct hinic3_irq *irq_cfg;
+ char irq_cfg_vram_name[VRAM_NAME_MAX_LEN];
+};
+
+struct hinic3_flush_rq {
+ union {
+ struct {
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 lb_proc : 1;
+ u32 rsvd : 10;
+ u32 rq_id : 8;
+ u32 func_id : 13;
+#else
+ u32 func_id : 13;
+ u32 rq_id : 8;
+ u32 rsvd : 10;
+ u32 lb_proc : 1;
+#endif
+ } bs;
+ u32 value;
+ } dw;
+
+ union {
+ struct {
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 rsvd2 : 2;
+ u32 src_chnl : 12;
+ u32 pkt_len : 18;
+#else
+ u32 pkt_len : 18;
+ u32 src_chnl : 12;
+ u32 rsvd2 : 2;
+#endif
+ } bs;
+ u32 value;
+ } lb_info0; /* loop back information, used by uCode */
+};
+
+#define HINIC3_NIC_STATS_INC(nic_dev, field) \
+do { \
+ u64_stats_update_begin(&(nic_dev)->stats.syncp); \
+ (nic_dev)->stats.field++; \
+ u64_stats_update_end(&(nic_dev)->stats.syncp); \
+} while (0)
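+/* Illustrative usage: HINIC3_NIC_STATS_INC(nic_dev, netdev_tx_timeout); */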
+
+struct hinic3_nic_stats {
+ u64 netdev_tx_timeout;
+
+	/* Subdivision statistics shown in the private tool */
+ u64 tx_carrier_off_drop;
+ u64 tx_invalid_qid;
+ u64 rsvd1;
+ u64 rsvd2;
+#ifdef HAVE_NDO_GET_STATS64
+ struct u64_stats_sync syncp;
+#else
+ struct u64_stats_sync_empty syncp;
+#endif
+};
+
+struct hinic3_nic_vport_stats {
+ u64 rx_discard_vport;
+};
+
+#define HINIC3_TCAM_DYNAMIC_BLOCK_SIZE 16
+#define HINIC3_MAX_TCAM_FILTERS 512
+
+#define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \
+ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index))
+
+struct hinic3_rx_flow_rule {
+ struct list_head rules;
+ int tot_num_rules;
+};
+
+struct hinic3_tcam_dynamic_block {
+ struct list_head block_list;
+ u16 dynamic_block_id;
+ u16 dynamic_index_cnt;
+ u8 dynamic_index_used[HINIC3_TCAM_DYNAMIC_BLOCK_SIZE];
+};
+
+struct hinic3_tcam_dynamic_block_info {
+ struct list_head tcam_dynamic_list;
+ u16 dynamic_block_cnt;
+};
+
+struct hinic3_tcam_filter {
+ struct list_head tcam_filter_list;
+ u16 dynamic_block_id;
+ u16 index;
+ struct tag_tcam_key tcam_key;
+ u16 queue;
+};
+
+/* function level struct info */
+struct hinic3_tcam_info {
+ u16 tcam_rule_nums;
+ struct list_head tcam_list;
+ struct hinic3_tcam_dynamic_block_info tcam_dynamic_info;
+};
+
+struct hinic3_dcb {
+ u8 cos_config_num_max;
+ u8 func_dft_cos_bitmap;
+	/* used for tool validity check */
+ u16 port_dft_cos_bitmap;
+
+ struct hinic3_dcb_config hw_dcb_cfg;
+ struct hinic3_dcb_config wanted_dcb_cfg;
+ unsigned long dcb_flags;
+};
+
+struct hinic3_vram {
+ u32 vram_mtu;
+ u16 vram_num_qps;
+ unsigned long flags;
+};
+
+struct hinic3_outband_cfg {
+ u16 outband_default_vid;
+ u16 rsvd;
+};
+
+struct hinic3_nic_dev {
+ struct pci_dev *pdev;
+ struct net_device *netdev;
+ struct hinic3_lld_dev *lld_dev;
+ void *hwdev;
+
+ int poll_weight;
+ u32 rsvd1;
+ unsigned long *vlan_bitmap;
+
+ u16 max_qps;
+
+ u32 msg_enable;
+ unsigned long flags;
+
+ u32 lro_replenish_thld;
+ u32 dma_rx_buff_size;
+ u16 rx_buff_len;
+ u32 page_order;
+ bool page_pool_enabled;
+
+	/* RSS related variables */
+ u8 rss_hash_engine;
+ struct nic_rss_type rss_type;
+ u8 *rss_hkey;
+ /* hkey in big endian */
+ u32 *rss_hkey_be;
+ u32 *rss_indir;
+
+ struct hinic3_dcb *dcb;
+ char dcb_name[VRAM_NAME_MAX_LEN];
+
+ struct hinic3_vram *nic_vram;
+ char nic_vram_name[VRAM_NAME_MAX_LEN];
+
+ int disable_port_cnt;
+
+ struct hinic3_intr_coal_info *intr_coalesce;
+ unsigned long last_moder_jiffies;
+ u32 adaptive_rx_coal;
+ u8 intr_coal_set_flag;
+
+#ifndef HAVE_NETDEV_STATS_IN_NETDEV
+ struct net_device_stats net_stats;
+#endif
+
+ struct hinic3_nic_stats stats;
+ struct hinic3_nic_vport_stats vport_stats;
+
+ /* lock for nic resource */
+ struct mutex nic_mutex;
+ u8 link_status;
+
+ struct nic_service_cap nic_cap;
+
+ struct hinic3_txq *txqs;
+ struct hinic3_rxq *rxqs;
+ struct hinic3_dyna_txrxq_params q_params;
+
+ u16 num_qp_irq;
+ struct irq_info *qps_irq_info;
+
+ struct workqueue_struct *workq;
+
+ struct work_struct rx_mode_work;
+ struct delayed_work moderation_task;
+
+ struct list_head uc_filter_list;
+ struct list_head mc_filter_list;
+ unsigned long rx_mod_state;
+ int netdev_uc_cnt;
+ int netdev_mc_cnt;
+
+ int lb_test_rx_idx;
+ int lb_pkt_len;
+ u8 *lb_test_rx_buf;
+
+ struct hinic3_tcam_info tcam;
+ struct hinic3_rx_flow_rule rx_flow_rule;
+
+#ifdef HAVE_XDP_SUPPORT
+ struct bpf_prog *xdp_prog;
+#endif
+
+ struct delayed_work periodic_work;
+ /* reference to enum hinic3_event_work_flags */
+ unsigned long event_flag;
+
+ struct hinic3_nic_prof_attr *prof_attr;
+ struct hinic3_prof_adapter *prof_adap;
+ u64 rsvd8[7];
+ struct hinic3_outband_cfg outband_cfg;
+ u32 rxq_get_err_times;
+ struct delayed_work rxq_check_work;
+ struct delayed_work vport_stats_work;
+};
+
+#define hinic_msg(level, nic_dev, msglvl, format, arg...) \
+do { \
+ if ((nic_dev)->netdev && (nic_dev)->netdev->reg_state \
+ == NETREG_REGISTERED) \
+ nicif_##level((nic_dev), msglvl, (nic_dev)->netdev, \
+ format, ## arg); \
+ else \
+ nic_##level(&(nic_dev)->pdev->dev, \
+ format, ## arg); \
+} while (0)
+
+#define hinic3_info(nic_dev, msglvl, format, arg...) \
+ hinic_msg(info, nic_dev, msglvl, format, ## arg)
+
+#define hinic3_warn(nic_dev, msglvl, format, arg...) \
+ hinic_msg(warn, nic_dev, msglvl, format, ## arg)
+
+#define hinic3_err(nic_dev, msglvl, format, arg...) \
+ hinic_msg(err, nic_dev, msglvl, format, ## arg)
+
+struct hinic3_uld_info *get_nic_uld_info(void);
+
+u32 hinic3_get_io_stats_size(const struct hinic3_nic_dev *nic_dev);
+
+int hinic3_get_io_stats(const struct hinic3_nic_dev *nic_dev, void *stats);
+
+int hinic3_open(struct net_device *netdev);
+
+int hinic3_close(struct net_device *netdev);
+
+void hinic3_set_ethtool_ops(struct net_device *netdev);
+
+void hinic3vf_set_ethtool_ops(struct net_device *netdev);
+
+int nic_ioctl(void *uld_dev, u32 cmd, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size);
+
+void hinic3_update_num_qps(struct net_device *netdev);
+
+int hinic3_qps_irq_init(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_qps_irq_deinit(struct hinic3_nic_dev *nic_dev);
+
+void qp_del_napi(struct hinic3_irq *irq_cfg);
+
+void hinic3_set_netdev_ops(struct hinic3_nic_dev *nic_dev);
+
+bool hinic3_is_netdev_ops_match(const struct net_device *netdev);
+
+int hinic3_set_hw_features(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_set_rx_mode_work(struct work_struct *work);
+
+void hinic3_clean_mac_list_filter(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_get_strings(struct net_device *netdev, u32 stringset, u8 *data);
+
+void hinic3_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data);
+
+int hinic3_get_sset_count(struct net_device *netdev, int sset);
+
+int hinic3_maybe_set_port_state(struct hinic3_nic_dev *nic_dev, bool enable);
+
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+int hinic3_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *link_settings);
+int hinic3_set_link_ksettings(struct net_device *netdev,
+ const struct ethtool_link_ksettings
+ *link_settings);
+#endif
+#endif
+
+#ifndef HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+int hinic3_get_settings(struct net_device *netdev, struct ethtool_cmd *ep);
+int hinic3_set_settings(struct net_device *netdev,
+ struct ethtool_cmd *link_settings);
+#endif
+
+void hinic3_auto_moderation_work(struct work_struct *work);
+
+typedef void (*hinic3_reopen_handler)(struct hinic3_nic_dev *nic_dev,
+ const void *priv_data);
+int hinic3_change_channel_settings(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *trxq_params,
+ hinic3_reopen_handler reopen_handler,
+ const void *priv_data);
+
+void hinic3_link_status_change(struct hinic3_nic_dev *nic_dev, bool status);
+
+#ifdef HAVE_XDP_SUPPORT
+bool hinic3_is_xdp_enable(struct hinic3_nic_dev *nic_dev);
+int hinic3_xdp_max_mtu(struct hinic3_nic_dev *nic_dev);
+#endif
+
+#ifdef HAVE_UDP_TUNNEL_NIC_INFO
+int hinic3_udp_tunnel_set_port(struct net_device *netdev, unsigned int table,
+ unsigned int entry, struct udp_tunnel_info *ti);
+int hinic3_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table,
+ unsigned int entry, struct udp_tunnel_info *ti);
+#endif /* HAVE_UDP_TUNNEL_NIC_INFO */
+
+#if defined(ETHTOOL_GFECPARAM) && defined(ETHTOOL_SFECPARAM)
+int set_fecparam(void *hwdev, u8 fecparam);
+int get_fecparam(void *hwdev, u8 *advertised_fec, u8 *supported_fec);
+#endif
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c
new file mode 100644
index 0000000..6cc294e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c
@@ -0,0 +1,649 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "nic_mpu_cmd.h"
+#include "nic_npu_cmd.h"
+
+static int hinic3_init_vf_config(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_info = NULL;
+ u16 func_id;
+ int err = 0;
+
+ vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ ether_addr_copy(vf_info->drv_mac_addr, vf_info->user_mac_addr);
+ if (!is_zero_ether_addr(vf_info->drv_mac_addr)) {
+ vf_info->use_specified_mac = true;
+ func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf_id;
+
+ err = hinic3_set_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ vf_info->pf_vlan, func_id,
+ HINIC3_CHANNEL_NIC);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d MAC\n",
+ HW_VF_ID_TO_OS(vf_id));
+ return err;
+ }
+ } else {
+ vf_info->use_specified_mac = false;
+ }
+
+ if (hinic3_vf_info_vlanprio(nic_io->hwdev, vf_id)) {
+ err = hinic3_cfg_vf_vlan(nic_io, HINIC3_CMD_OP_ADD,
+ vf_info->pf_vlan, vf_info->pf_qos,
+ vf_id);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to add VF %d VLAN_QOS\n",
+ HW_VF_ID_TO_OS(vf_id));
+ return err;
+ }
+ }
+
+ if (vf_info->max_rate) {
+ err = hinic3_set_vf_tx_rate(nic_io->hwdev, vf_id,
+ vf_info->max_rate,
+ vf_info->min_rate);
+ if (err != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d max rate %u, min rate %u\n",
+ HW_VF_ID_TO_OS(vf_id), vf_info->max_rate,
+ vf_info->min_rate);
+ return err;
+ }
+ }
+
+ return 0;
+}
+
+static int register_vf_msg_handler(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ int err;
+
+ if (vf_id > nic_io->max_vfs) {
+ nic_err(nic_io->dev_hdl, "Register VF id %d exceed limit[0-%d]\n",
+ HW_VF_ID_TO_OS(vf_id), HW_VF_ID_TO_OS(nic_io->max_vfs));
+ return -EFAULT;
+ }
+
+ err = hinic3_init_vf_config(nic_io, vf_id);
+ if (err != 0)
+ return err;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].registered = true;
+
+ return 0;
+}
+
+static int unregister_vf_msg_handler(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_info =
+ nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (vf_id > nic_io->max_vfs)
+ return -EFAULT;
+
+ vf_info->registered = false;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+ mac_info.func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+ mac_info.vlan_id = vf_info->pf_vlan;
+ ether_addr_copy(mac_info.mac, vf_info->drv_mac_addr);
+
+ if (vf_info->use_specified_mac || vf_info->pf_vlan) {
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_DEL_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size);
+ if (err || mac_info.msg_head.status || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to delete VF %d MAC, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err,
+ mac_info.msg_head.status, out_size);
+ return -EFAULT;
+ }
+ }
+
+ memset(vf_info->drv_mac_addr, 0, ETH_ALEN);
+
+ return 0;
+}
+
+static int hinic3_register_vf_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf_id, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_cmd_register_vf *register_vf = buf_in;
+ struct hinic3_cmd_register_vf *register_info = buf_out;
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ int err;
+
+ if (!vf_info)
+ return -EINVAL;
+
+ if (register_vf->op_register) {
+ vf_info->support_extra_feature = register_vf->support_extra_feature;
+ err = register_vf_msg_handler(nic_io, vf_id);
+ } else {
+ err = unregister_vf_msg_handler(nic_io, vf_id);
+ vf_info->support_extra_feature = 0;
+ }
+
+ if (err != 0)
+ register_info->msg_head.status = EFAULT;
+
+ *out_size = sizeof(*register_info);
+
+ return 0;
+}
+
+void hinic3_unregister_vf(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+
+ if (!vf_info)
+ return;
+
+ unregister_vf_msg_handler(nic_io, vf_id);
+ vf_info->support_extra_feature = 0;
+}
+
+static int hinic3_get_vf_cos_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf_id, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_cmd_vf_dcb_state *dcb_state = buf_out;
+
+ memcpy(&dcb_state->state, &nic_io->dcb_state,
+ sizeof(nic_io->dcb_state));
+
+ dcb_state->msg_head.status = 0;
+ *out_size = sizeof(*dcb_state);
+ return 0;
+}
+
+static int hinic3_get_vf_mac_msg_handler(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_set *mac_in =
+ (struct hinic3_port_mac_set *)buf_in;
+ struct hinic3_port_mac_set *mac_info = buf_out;
+
+ int err;
+
+ if (!mac_info || !vf_info)
+ return -EINVAL;
+
+ mac_in->func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf;
+
+ if (HINIC3_SUPPORT_VF_MAC(nic_io->hwdev) != 0) {
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_GET_MAC, buf_in,
+ in_size, buf_out, out_size);
+ if (err == 0) {
+ if (is_zero_ether_addr(mac_info->mac))
+ ether_addr_copy(mac_info->mac, vf_info->drv_mac_addr);
+ }
+ return err;
+ }
+
+ ether_addr_copy(mac_info->mac, vf_info->drv_mac_addr);
+ mac_info->msg_head.status = 0;
+ *out_size = sizeof(*mac_info);
+
+ return 0;
+}
+
+static int hinic3_set_vf_mac_msg_handler(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_set *mac_in = buf_in;
+ struct hinic3_port_mac_set *mac_out = buf_out;
+ int err;
+
+ if (!vf_info)
+ return -EINVAL;
+
+ mac_in->func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf;
+
+ if (vf_info->use_specified_mac && !vf_info->trust &&
+ is_valid_ether_addr(mac_in->mac)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF %d MAC address, and vf trust is off.\n",
+ HW_VF_ID_TO_OS(vf));
+ mac_out->msg_head.status = HINIC3_PF_SET_VF_ALREADY;
+ *out_size = sizeof(*mac_out);
+ return 0;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac))
+ mac_in->vlan_id = vf_info->pf_vlan;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_SET_MAC,
+ buf_in, in_size, buf_out, out_size);
+ if (err || !(*out_size)) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d MAC address, err: %d,status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf), err, mac_out->msg_head.status,
+ *out_size);
+ return -EFAULT;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac) && !mac_out->msg_head.status)
+ ether_addr_copy(vf_info->drv_mac_addr, mac_in->mac);
+
+ return err;
+}
+
+static int hinic3_del_vf_mac_msg_handler(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_set *mac_in = buf_in;
+ struct hinic3_port_mac_set *mac_out = buf_out;
+ int err;
+
+ if (!vf_info)
+ return -EINVAL;
+
+ mac_in->func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf;
+
+ if (vf_info->use_specified_mac && !vf_info->trust &&
+ is_valid_ether_addr(mac_in->mac)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF %d MAC address, and vf trust is off.\n",
+ HW_VF_ID_TO_OS(vf));
+ mac_out->msg_head.status = HINIC3_PF_SET_VF_ALREADY;
+ *out_size = sizeof(*mac_out);
+ return 0;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac))
+ mac_in->vlan_id = vf_info->pf_vlan;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_DEL_MAC,
+ buf_in, in_size, buf_out, out_size);
+ if (err || !(*out_size)) {
+ nic_err(nic_io->dev_hdl, "Failed to delete VF %d MAC, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf), err, mac_out->msg_head.status,
+ *out_size);
+ return -EFAULT;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac) && !mac_out->msg_head.status)
+ eth_zero_addr(vf_info->drv_mac_addr);
+
+ return err;
+}
+
+static int hinic3_update_vf_mac_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_update *mac_in = buf_in;
+ struct hinic3_port_mac_update *mac_out = buf_out;
+ int err;
+
+ if (!vf_info)
+ return -EINVAL;
+
+ if (!is_valid_ether_addr(mac_in->new_mac)) {
+ nic_err(nic_io->dev_hdl, "Update VF MAC is invalid.\n");
+ return -EINVAL;
+ }
+ mac_in->func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf;
+
+ if (vf_info->use_specified_mac && !vf_info->trust) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF %d MAC address, and vf trust is off.\n",
+ HW_VF_ID_TO_OS(vf));
+ mac_out->msg_head.status = HINIC3_PF_SET_VF_ALREADY;
+ *out_size = sizeof(*mac_out);
+ return 0;
+ }
+
+ mac_in->vlan_id = vf_info->pf_vlan;
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_UPDATE_MAC,
+ buf_in, in_size, buf_out, out_size);
+ if (err || !(*out_size)) {
+ nic_warn(nic_io->dev_hdl, "Failed to update VF %d MAC, err: %d,status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf), err, mac_out->msg_head.status,
+ *out_size);
+ return -EFAULT;
+ }
+
+ if (!mac_out->msg_head.status)
+ ether_addr_copy(vf_info->drv_mac_addr, mac_in->new_mac);
+
+ return err;
+}
+
+const struct vf_msg_handler vf_cmd_handler[] = {
+ {
+ .cmd = HINIC3_NIC_CMD_VF_REGISTER,
+ .handler = hinic3_register_vf_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_GET_MAC,
+ .handler = hinic3_get_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_SET_MAC,
+ .handler = hinic3_set_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_DEL_MAC,
+ .handler = hinic3_del_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_UPDATE_MAC,
+ .handler = hinic3_update_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_VF_COS,
+ .handler = hinic3_get_vf_cos_msg_handler
+ },
+};
+
+static int _l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel)
+{
+ u32 i, cmd_cnt = ARRAY_LEN(vf_cmd_handler);
+ bool cmd_to_pf = false;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF &&
+ !hinic3_is_slave_host(hwdev)) {
+ for (i = 0; i < cmd_cnt; i++) {
+ if (cmd == vf_cmd_handler[i].cmd)
+ cmd_to_pf = true;
+ }
+ }
+
+ if (cmd_to_pf)
+ return hinic3_mbox_to_pf(hwdev, HINIC3_MOD_L2NIC, cmd, buf_in,
+ in_size, buf_out, out_size, 0,
+ channel);
+
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, buf_in,
+ in_size, buf_out, out_size, 0, channel);
+}
+
+int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _l2nic_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, HINIC3_CHANNEL_NIC);
+}
+
+int l2nic_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u16 channel)
+{
+ return _l2nic_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, channel);
+}
+
+/* pf/ppf handler mbox msg from vf */
+int hinic3_pf_mbox_handler(void *hwdev,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 index, cmd_size = ARRAY_LEN(vf_cmd_handler);
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ for (index = 0; index < cmd_size; index++) {
+ if (cmd == vf_cmd_handler[index].cmd)
+ return vf_cmd_handler[index].handler(nic_io, vf_id,
+ buf_in, in_size,
+ buf_out, out_size);
+ }
+
+ nic_warn(nic_io->dev_hdl, "NO handler for nic cmd(%u) received from vf id: %u\n",
+ cmd, vf_id);
+
+ return -EINVAL;
+}
+
+void hinic3_notify_dcb_state_event(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_event_info event_info = {0};
+ int i;
+/*lint -e679*/
+ if (dcb_state->trust == HINIC3_DCB_PCP)
+ /* The 8 user-priority-to-CoS mapping relationships */
+ sdk_info(nic_io->dev_hdl, "DCB %s, default cos %u, pcp2cos %u%u%u%u%u%u%u%u\n",
+ dcb_state->dcb_on ? "on" : "off", dcb_state->default_cos,
+ dcb_state->pcp2cos[ARRAY_INDEX_0], dcb_state->pcp2cos[ARRAY_INDEX_1],
+ dcb_state->pcp2cos[ARRAY_INDEX_2], dcb_state->pcp2cos[ARRAY_INDEX_3],
+ dcb_state->pcp2cos[ARRAY_INDEX_4], dcb_state->pcp2cos[ARRAY_INDEX_5],
+ dcb_state->pcp2cos[ARRAY_INDEX_6], dcb_state->pcp2cos[ARRAY_INDEX_7]);
+ else
+ for (i = 0; i < NIC_DCB_DSCP_NUM; i++) {
+ sdk_info(nic_io->dev_hdl,
+ "DCB %s, default cos %u, dscp2cos %u%u%u%u%u%u%u%u\n",
+ dcb_state->dcb_on ? "on" : "off", dcb_state->default_cos,
+ dcb_state->dscp2cos[ARRAY_INDEX_0 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_1 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_2 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_3 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_4 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_5 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_6 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_7 + i * NIC_DCB_DSCP_NUM]);
+ }
+/*lint +e679*/
+ /* Saved in sdk for stateful module */
+ hinic3_save_dcb_state(nic_io, dcb_state);
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = EVENT_NIC_DCB_STATE_CHANGE;
+ memcpy((void *)event_info.event_data, dcb_state, sizeof(*dcb_state));
+
+ hinic3_event_callback(nic_io->hwdev, &event_info);
+}
+
+static void dcb_state_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_cmd_vf_dcb_state *vf_dcb = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ vf_dcb = buf_in;
+ if (!vf_dcb)
+ return;
+
+ hinic3_notify_dcb_state_event(nic_io, &vf_dcb->state);
+}
+
+static void tx_pause_excp_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct nic_cmd_tx_pause_notice *excp_info = buf_in;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ if (in_size != sizeof(*excp_info)) {
+ nic_err(nic_io->dev_hdl, "Invalid in_size: %u, should be %lu\n",
+ in_size, sizeof(*excp_info));
+ return;
+ }
+
+ nic_warn(nic_io->dev_hdl, "Receive tx pause exception event, excp: %u, level: %u\n",
+ excp_info->tx_pause_except, excp_info->except_level);
+
+ hinic3_fault_event_report(hwdev, HINIC3_FAULT_SRC_TX_PAUSE_EXCP,
+ (u16)excp_info->except_level);
+}
+
+static void bond_active_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_bond_active_report_info *active_info = buf_in;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_event_info event_info = {0};
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ if (in_size != sizeof(*active_info)) {
+ nic_err(nic_io->dev_hdl, "Invalid in_size: %u, should be %ld\n",
+ in_size, sizeof(*active_info));
+ return;
+ }
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = HINIC3_NIC_CMD_BOND_ACTIVE_NOTICE;
+ memcpy((void *)event_info.event_data, active_info, sizeof(*active_info));
+
+ hinic3_event_callback(nic_io->hwdev, &event_info);
+}
+
+static void outband_vlan_cfg_event_handler(void *hwdev, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_outband_cfg_info *outband_cfg_info = buf_in;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_event_info event_info = {0};
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is NULL\n");
+ return;
+ }
+
+ nic_info(nic_io->dev_hdl, "outband vlan cfg event received\n");
+
+ if (in_size != sizeof(*outband_cfg_info)) {
+ nic_err(nic_io->dev_hdl, "outband cfg info invalid in_size: %u, should be %lu\n",
+ in_size, sizeof(*outband_cfg_info));
+ return;
+ }
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = EVENT_NIC_OUTBAND_CFG;
+ memcpy((void *)event_info.event_data,
+ outband_cfg_info, sizeof(*outband_cfg_info));
+
+ hinic3_event_callback(nic_io->hwdev, &event_info);
+}
+
+static const struct nic_event_handler nic_cmd_handler[] = {
+ {
+ .cmd = HINIC3_NIC_CMD_VF_COS,
+ .handler = dcb_state_event,
+ },
+ {
+ .cmd = HINIC3_NIC_CMD_TX_PAUSE_EXCP_NOTICE,
+ .handler = tx_pause_excp_event_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_BOND_ACTIVE_NOTICE,
+ .handler = bond_active_event_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_OUTBAND_CFG_NOTICE,
+ .handler = outband_vlan_cfg_event_handler,
+ },
+};
+
+static int _event_handler(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 size = ARRAY_LEN(nic_cmd_handler);
+ u32 i;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ *out_size = 0;
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ for (i = 0; i < size; i++) {
+ if (cmd == nic_cmd_handler[i].cmd) {
+ nic_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ return 0;
+ }
+ }
+
+ /* can't find this event cmd */
+ sdk_warn(nic_io->dev_hdl, "Unsupported nic event, cmd: %u\n", cmd);
+ *out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+
+ return 0;
+}
+
+/* vf handler mbox msg from ppf/pf */
+/* vf link change event
+ * vf fault report event, TBD
+ */
+int hinic3_vf_event_handler(void *hwdev,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+}
+
+/* pf/ppf handler mgmt cpu report nic event */
+void hinic3_pf_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+}
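The two tables above are plain linear dispatch tables keyed on the L2NIC command id: vf_cmd_handler[] is walked by hinic3_pf_mbox_handler() for mailbox commands coming from a VF, and nic_cmd_handler[] is walked by _event_handler() for events reported by the management CPU. A minimal sketch of how one additional VF command would be wired in; the command id, handler name and body are hypothetical, not part of this patch.

static int hinic3_example_vf_msg_handler(struct hinic3_nic_io *nic_io, u16 vf_id,
					 void *buf_in, u16 in_size,
					 void *buf_out, u16 *out_size)
{
	struct mgmt_msg_head *reply = buf_out;

	/* validate in_size, act on buf_in, then fill the reply for the VF */
	reply->status = 0;
	*out_size = sizeof(*reply);
	return 0;
}

/* plus one more entry in vf_cmd_handler[]: */
{
	.cmd = HINIC3_NIC_CMD_EXAMPLE,	/* hypothetical command id */
	.handler = hinic3_example_vf_msg_handler,
},

hinic3_pf_mbox_handler() then matches the incoming cmd against .cmd and calls the handler with the VF's mailbox buffers.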
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c
new file mode 100644
index 0000000..a9768b7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c
@@ -0,0 +1,1130 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "nic_mpu_cmd.h"
+#include "nic_npu_cmd.h"
+#include "hinic3_nic_io.h"
+
+#define HINIC3_DEFAULT_TX_CI_PENDING_LIMIT 1
+#define HINIC3_DEFAULT_TX_CI_COALESCING_TIME 1
+#define HINIC3_DEFAULT_DROP_THD_ON (0xFFFF)
+#define HINIC3_DEFAULT_DROP_THD_OFF 0
+/*lint -e806*/
+static unsigned char tx_pending_limit = HINIC3_DEFAULT_TX_CI_PENDING_LIMIT;
+module_param(tx_pending_limit, byte, 0444);
+MODULE_PARM_DESC(tx_pending_limit, "TX CI coalescing parameter pending_limit (default=1)");
+
+static unsigned char tx_coalescing_time = HINIC3_DEFAULT_TX_CI_COALESCING_TIME;
+module_param(tx_coalescing_time, byte, 0444);
+MODULE_PARM_DESC(tx_coalescing_time, "TX CI coalescing parameter coalescing_time (default=1)");
+
+static unsigned char rq_wqe_type = HINIC3_NORMAL_RQ_WQE;
+module_param(rq_wqe_type, byte, 0444);
+MODULE_PARM_DESC(rq_wqe_type, "RQ WQE type 1-16Bytes, 2-32Bytes (default=1)");
+
+/*lint +e806*/
+static u32 tx_drop_thd_on = HINIC3_DEFAULT_DROP_THD_ON;
+module_param(tx_drop_thd_on, uint, 0644);
+MODULE_PARM_DESC(tx_drop_thd_on, "TX parameter drop_thd_on (default=0xffff)");
+
+static u32 tx_drop_thd_off = HINIC3_DEFAULT_DROP_THD_OFF;
+module_param(tx_drop_thd_off, uint, 0644);
+MODULE_PARM_DESC(tx_drop_thd_off, "TX parameter drop_thd_off (default=0)");
+/* performance: ci addr RTE_CACHE_SIZE(64B) alignment */
+#define HINIC3_CI_Q_ADDR_SIZE (64U)
+
+#define CI_TABLE_SIZE(num_qps, pg_sz) \
+ (ALIGN((num_qps) * HINIC3_CI_Q_ADDR_SIZE, pg_sz))
+
+#define HINIC3_CI_VADDR(base_addr, q_id) ((u8 *)(base_addr) + \
+ (q_id) * HINIC3_CI_Q_ADDR_SIZE)
+
+#define HINIC3_CI_PADDR(base_paddr, q_id) ((base_paddr) + \
+ (q_id) * HINIC3_CI_Q_ADDR_SIZE)
+
+#define WQ_PREFETCH_MAX 4
+#define WQ_PREFETCH_MIN 1
+#define WQ_PREFETCH_THRESHOLD 256
+
+#define HINIC3_Q_CTXT_MAX 31 /* (2048 - 8) / 64 */
+
+enum hinic3_qp_ctxt_type {
+ HINIC3_QP_CTXT_TYPE_SQ,
+ HINIC3_QP_CTXT_TYPE_RQ,
+};
+
+struct hinic3_qp_ctxt_header {
+ u16 num_queues;
+ u16 queue_type;
+ u16 start_qid;
+ u16 rsvd;
+};
+
+struct hinic3_sq_ctxt {
+ u32 ci_pi;
+ u32 drop_mode_sp;
+ u32 wq_pfn_hi_owner;
+ u32 wq_pfn_lo;
+
+ u32 rsvd0;
+ u32 pkt_drop_thd;
+ u32 global_sq_id;
+ u32 vlan_ceq_attr;
+
+ u32 pref_cache;
+ u32 pref_ci_owner;
+ u32 pref_wq_pfn_hi_ci;
+ u32 pref_wq_pfn_lo;
+
+ u32 rsvd8;
+ u32 rsvd9;
+ u32 wq_block_pfn_hi;
+ u32 wq_block_pfn_lo;
+};
+
+struct hinic3_rq_ctxt {
+ u32 ci_pi;
+ u32 ceq_attr;
+ u32 wq_pfn_hi_type_owner;
+ u32 wq_pfn_lo;
+
+ u32 rsvd[3];
+ u32 cqe_sge_len;
+
+ u32 pref_cache;
+ u32 pref_ci_owner;
+ u32 pref_wq_pfn_hi_ci;
+ u32 pref_wq_pfn_lo;
+
+ u32 pi_paddr_hi;
+ u32 pi_paddr_lo;
+ u32 wq_block_pfn_hi;
+ u32 wq_block_pfn_lo;
+};
+
+struct hinic3_sq_ctxt_block {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX];
+};
+
+struct hinic3_rq_ctxt_block {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX];
+};
+
+struct hinic3_clean_queue_ctxt {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ u32 rsvd;
+};
+
+#define SQ_CTXT_SIZE(num_sqs) ((u16)(sizeof(struct hinic3_qp_ctxt_header) \
+ + (num_sqs) * sizeof(struct hinic3_sq_ctxt)))
+
+#define RQ_CTXT_SIZE(num_rqs) ((u16)(sizeof(struct hinic3_qp_ctxt_header) \
+ + (num_rqs) * sizeof(struct hinic3_rq_ctxt)))
+
+#define CI_IDX_HIGH_SHIFT 12
+
+#define CI_HIGH_IDX(val) ((val) >> CI_IDX_HIGH_SHIFT)
+
+#define SQ_CTXT_PI_IDX_SHIFT 0
+#define SQ_CTXT_CI_IDX_SHIFT 16
+
+#define SQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define SQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define SQ_CTXT_CI_PI_SET(val, member) (((val) & \
+ SQ_CTXT_##member##_MASK) \
+ << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0
+#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1
+
+#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U
+#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U
+
+#define SQ_CTXT_MODE_SET(val, member) (((val) & \
+ SQ_CTXT_MODE_##member##_MASK) \
+ << SQ_CTXT_MODE_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U
+
+#define SQ_CTXT_WQ_PAGE_SET(val, member) (((val) & \
+ SQ_CTXT_WQ_PAGE_##member##_MASK) \
+ << SQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0
+#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16
+
+#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU
+#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU
+
+#define SQ_CTXT_PKT_DROP_THD_SET(val, member) (((val) & \
+ SQ_CTXT_PKT_DROP_##member##_MASK) \
+ << SQ_CTXT_PKT_DROP_##member##_SHIFT)
+
+#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0
+
+#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU
+
+#define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) (((val) & \
+ SQ_CTXT_##member##_MASK) \
+ << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_VLAN_TAG_SHIFT 0
+#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16
+#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19
+#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23
+
+#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU
+#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U
+#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U
+#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U
+
+#define SQ_CTXT_VLAN_CEQ_SET(val, member) (((val) & \
+ SQ_CTXT_VLAN_##member##_MASK) \
+ << SQ_CTXT_VLAN_##member##_SHIFT)
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14
+#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU
+#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU
+
+#define SQ_CTXT_PREF_CI_HI_SHIFT 0
+#define SQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define SQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define SQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define SQ_CTXT_PREF_CI_LOW_SHIFT 20
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU
+
+#define SQ_CTXT_PREF_SET(val, member) (((val) & \
+ SQ_CTXT_PREF_##member##_MASK) \
+ << SQ_CTXT_PREF_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define SQ_CTXT_WQ_BLOCK_SET(val, member) (((val) & \
+ SQ_CTXT_WQ_BLOCK_##member##_MASK) \
+ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define RQ_CTXT_PI_IDX_SHIFT 0
+#define RQ_CTXT_CI_IDX_SHIFT 16
+
+#define RQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define RQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define RQ_CTXT_CI_PI_SET(val, member) (((val) & \
+ RQ_CTXT_##member##_MASK) \
+ << RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21
+#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31
+
+#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU
+#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U
+
+#define RQ_CTXT_CEQ_ATTR_SET(val, member) (((val) & \
+ RQ_CTXT_CEQ_ATTR_##member##_MASK) \
+ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28
+#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U
+#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U
+
+#define RQ_CTXT_WQ_PAGE_SET(val, member) (((val) & \
+ RQ_CTXT_WQ_PAGE_##member##_MASK) << \
+ RQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define RQ_CTXT_CQE_LEN_SHIFT 28
+
+#define RQ_CTXT_CQE_LEN_MASK 0x3U
+
+#define RQ_CTXT_CQE_LEN_SET(val, member) (((val) & \
+ RQ_CTXT_##member##_MASK) << \
+ RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14
+#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU
+#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU
+
+#define RQ_CTXT_PREF_CI_HI_SHIFT 0
+#define RQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define RQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define RQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define RQ_CTXT_PREF_CI_LOW_SHIFT 20
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU
+
+#define RQ_CTXT_PREF_SET(val, member) (((val) & \
+ RQ_CTXT_PREF_##member##_MASK) << \
+ RQ_CTXT_PREF_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define RQ_CTXT_WQ_BLOCK_SET(val, member) (((val) & \
+ RQ_CTXT_WQ_BLOCK_##member##_MASK) << \
+ RQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define SIZE_16BYTES(size) (ALIGN((size), 16) >> 4)
+
+#define WQ_PAGE_PFN_SHIFT 12
+#define WQ_BLOCK_PFN_SHIFT 9
+
+#define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT)
+#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT)
+
+/* sq and rq */
+#define TOTAL_DB_NUM(num_qps) ((u16)(2 * (num_qps)))
+
+static int hinic3_create_sq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq,
+ u16 q_id, u32 sq_depth, u16 sq_msix_idx)
+{
+ int err;
+
+ /* SQ is in use; hardware requires the owner bit to be initialized to 1 */
+ sq->owner = 1;
+
+ sq->q_id = q_id;
+ sq->msix_entry_idx = sq_msix_idx;
+
+ err = hinic3_wq_create(nic_io->hwdev, &sq->wq, sq_depth,
+ (u16)BIT(HINIC3_SQ_WQEBB_SHIFT));
+ if (err) {
+ sdk_err(nic_io->dev_hdl, "Failed to create tx queue(%u) wq\n",
+ q_id);
+ return err;
+ }
+
+ return 0;
+}
+
+static void hinic3_destroy_sq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq)
+{
+ hinic3_wq_destroy(&sq->wq);
+}
+
+static int hinic3_create_rq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *rq,
+ u16 q_id, u32 rq_depth, u16 rq_msix_idx)
+{
+ int err;
+
+ /* rq_wqe_type only supports type 1 (16 bytes) or 2 (32 bytes) */
+ if (rq_wqe_type != HINIC3_NORMAL_RQ_WQE && rq_wqe_type != HINIC3_EXTEND_RQ_WQE) {
+ sdk_warn(nic_io->dev_hdl, "Module Parameter rq_wqe_type value %d is out of range: [%d, %d].",
+ rq_wqe_type, HINIC3_NORMAL_RQ_WQE, HINIC3_EXTEND_RQ_WQE);
+ rq_wqe_type = HINIC3_NORMAL_RQ_WQE;
+ }
+
+ rq->wqe_type = rq_wqe_type;
+ rq->q_id = q_id;
+ rq->msix_entry_idx = rq_msix_idx;
+
+ err = hinic3_wq_create(nic_io->hwdev, &rq->wq, rq_depth,
+ (u16)BIT(HINIC3_RQ_WQEBB_SHIFT + rq_wqe_type));
+ if (err) {
+ sdk_err(nic_io->dev_hdl, "Failed to create rx queue(%u) wq\n",
+ q_id);
+ return err;
+ }
+
+ rq->rx.pi_virt_addr = dma_zalloc_coherent(nic_io->dev_hdl, PAGE_SIZE,
+ &rq->rx.pi_dma_addr,
+ GFP_KERNEL);
+ if (!rq->rx.pi_virt_addr) {
+ hinic3_wq_destroy(&rq->wq);
+ nic_err(nic_io->dev_hdl, "Failed to allocate rq pi virt addr\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void hinic3_destroy_rq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *rq)
+{
+ dma_free_coherent(nic_io->dev_hdl, PAGE_SIZE, rq->rx.pi_virt_addr,
+ rq->rx.pi_dma_addr);
+
+ hinic3_wq_destroy(&rq->wq);
+}
+
+static int create_qp(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq,
+ struct hinic3_io_queue *rq, u16 q_id, u32 sq_depth,
+ u32 rq_depth, u16 qp_msix_idx)
+{
+ int err;
+
+ err = hinic3_create_sq(nic_io, sq, q_id, sq_depth, qp_msix_idx);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to create sq, qid: %u\n",
+ q_id);
+ return err;
+ }
+
+ err = hinic3_create_rq(nic_io, rq, q_id, rq_depth, qp_msix_idx);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to create rq, qid: %u\n",
+ q_id);
+ goto create_rq_err;
+ }
+
+ return 0;
+
+create_rq_err:
+ hinic3_destroy_sq(nic_io, sq);
+
+ return err;
+}
+
+static void destroy_qp(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq,
+ struct hinic3_io_queue *rq)
+{
+ hinic3_destroy_sq(nic_io, sq);
+ hinic3_destroy_rq(nic_io, rq);
+}
+
+int hinic3_init_nicio_res(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ void __iomem *db_base = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ nic_io->max_qps = hinic3_func_max_qnum(hwdev);
+
+ err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate doorbell for sqs\n");
+ return -ENOMEM;
+ }
+ nic_io->sqs_db_addr = (u8 *)db_base;
+
+ err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err) {
+ hinic3_free_db_addr(hwdev, nic_io->sqs_db_addr, NULL);
+ nic_err(nic_io->dev_hdl, "Failed to allocate doorbell for rqs\n");
+ return -ENOMEM;
+ }
+ nic_io->rqs_db_addr = (u8 *)db_base;
+
+ nic_io->ci_vaddr_base =
+ dma_zalloc_coherent(nic_io->dev_hdl,
+ CI_TABLE_SIZE(nic_io->max_qps, PAGE_SIZE),
+ &nic_io->ci_dma_base, GFP_KERNEL);
+ if (!nic_io->ci_vaddr_base) {
+ hinic3_free_db_addr(hwdev, nic_io->sqs_db_addr, NULL);
+ hinic3_free_db_addr(hwdev, nic_io->rqs_db_addr, NULL);
+ nic_err(nic_io->dev_hdl, "Failed to allocate ci area\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+void hinic3_deinit_nicio_res(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return;
+ }
+
+ dma_free_coherent(nic_io->dev_hdl,
+ CI_TABLE_SIZE(nic_io->max_qps, PAGE_SIZE),
+ nic_io->ci_vaddr_base, nic_io->ci_dma_base);
+ /* free all doorbell */
+ hinic3_free_db_addr(hwdev, nic_io->sqs_db_addr, NULL);
+ hinic3_free_db_addr(hwdev, nic_io->rqs_db_addr, NULL);
+}
+
+int hinic3_alloc_qps(void *hwdev, struct irq_info *qps_msix_arry,
+ struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_io_queue *sqs = NULL;
+ struct hinic3_io_queue *rqs = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 q_id, i, num_qps;
+ int err;
+
+ if (!hwdev || !qps_msix_arry || !qp_params)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ if (qp_params->num_qps > nic_io->max_qps || !qp_params->num_qps)
+ return -EINVAL;
+
+ num_qps = qp_params->num_qps;
+ sqs = kcalloc(num_qps, sizeof(*sqs), GFP_KERNEL);
+ if (!sqs) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate sq\n");
+ err = -ENOMEM;
+ goto alloc_sqs_err;
+ }
+
+ rqs = kcalloc(num_qps, sizeof(*rqs), GFP_KERNEL);
+ if (!rqs) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate rq\n");
+ err = -ENOMEM;
+ goto alloc_rqs_err;
+ }
+
+ for (q_id = 0; q_id < num_qps; q_id++) {
+ err = create_qp(nic_io, &sqs[q_id], &rqs[q_id], q_id, qp_params->sq_depth,
+ qp_params->rq_depth, qps_msix_arry[q_id].msix_entry_idx);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate qp %u, err: %d\n", q_id, err);
+ goto create_qp_err;
+ }
+ }
+
+ qp_params->sqs = sqs;
+ qp_params->rqs = rqs;
+
+ return 0;
+
+create_qp_err:
+ for (i = 0; i < q_id; i++)
+ destroy_qp(nic_io, &sqs[i], &rqs[i]);
+
+ kfree(rqs);
+
+alloc_rqs_err:
+ kfree(sqs);
+
+alloc_sqs_err:
+
+ return err;
+}
+
+void hinic3_free_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 q_id;
+
+ if (!hwdev || !qp_params)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return;
+ }
+
+ for (q_id = 0; q_id < qp_params->num_qps; q_id++)
+ destroy_qp(nic_io, &qp_params->sqs[q_id],
+ &qp_params->rqs[q_id]);
+
+ kfree(qp_params->sqs);
+ kfree(qp_params->rqs);
+}
+
+static void init_qps_info(struct hinic3_nic_io *nic_io,
+ struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_io_queue *sqs = qp_params->sqs;
+ struct hinic3_io_queue *rqs = qp_params->rqs;
+ u16 q_id;
+
+ nic_io->num_qps = qp_params->num_qps;
+ nic_io->sq = qp_params->sqs;
+ nic_io->rq = qp_params->rqs;
+ for (q_id = 0; q_id < nic_io->num_qps; q_id++) {
+ sqs[q_id].tx.cons_idx_addr =
+ HINIC3_CI_VADDR(nic_io->ci_vaddr_base, q_id);
+ /* clear ci value */
+ *(u16 *)sqs[q_id].tx.cons_idx_addr = 0;
+ sqs[q_id].db_addr = nic_io->sqs_db_addr;
+
+ /* The first num_qps doorbells are used by the SQs */
+ rqs[q_id].db_addr = nic_io->rqs_db_addr;
+ }
+}
+
+int hinic3_init_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !qp_params)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ init_qps_info(nic_io, qp_params);
+
+ return hinic3_init_qp_ctxts(hwdev);
+}
+
+void hinic3_deinit_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !qp_params)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return;
+ }
+
+ qp_params->sqs = nic_io->sq;
+ qp_params->rqs = nic_io->rq;
+ qp_params->num_qps = nic_io->num_qps;
+
+ hinic3_free_qp_ctxts(hwdev);
+}
+
+int hinic3_create_qps(void *hwdev, u16 num_qp, u32 sq_depth, u32 rq_depth,
+ struct irq_info *qps_msix_arry)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_dyna_qp_params qp_params = {0};
+ int err;
+
+ if (!hwdev || !qps_msix_arry)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ err = hinic3_init_nicio_res(hwdev);
+ if (err)
+ return err;
+
+ qp_params.num_qps = num_qp;
+ qp_params.sq_depth = sq_depth;
+ qp_params.rq_depth = rq_depth;
+ err = hinic3_alloc_qps(hwdev, qps_msix_arry, &qp_params);
+ if (err) {
+ hinic3_deinit_nicio_res(hwdev);
+ nic_err(nic_io->dev_hdl,
+ "Failed to allocate qps, err: %d\n", err);
+ return err;
+ }
+
+ init_qps_info(nic_io, &qp_params);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_create_qps);
+
+void hinic3_destroy_qps(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_dyna_qp_params qp_params = {0};
+
+ if (!hwdev)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ hinic3_deinit_qps(hwdev, &qp_params);
+ hinic3_free_qps(hwdev, &qp_params);
+ hinic3_deinit_nicio_res(hwdev);
+}
+EXPORT_SYMBOL(hinic3_destroy_qps);
+
+void *hinic3_get_nic_queue(void *hwdev, u16 q_id, enum hinic3_queue_type q_type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || q_type >= HINIC3_MAX_QUEUE_TYPE)
+ return NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return NULL;
+
+ return ((q_type == HINIC3_SQ) ? &nic_io->sq[q_id] : &nic_io->rq[q_id]);
+}
+EXPORT_SYMBOL(hinic3_get_nic_queue);
+
+static void hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr,
+ enum hinic3_qp_ctxt_type ctxt_type,
+ u16 num_queues, u16 q_id)
+{
+ qp_ctxt_hdr->queue_type = ctxt_type;
+ qp_ctxt_hdr->num_queues = num_queues;
+ qp_ctxt_hdr->start_qid = q_id;
+ qp_ctxt_hdr->rsvd = 0;
+
+ hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr));
+}
+
+static void hinic3_sq_prepare_ctxt(struct hinic3_io_queue *sq, u16 sq_id,
+ struct hinic3_sq_ctxt *sq_ctxt)
+{
+ u64 wq_page_addr;
+ u64 wq_page_pfn, wq_block_pfn;
+ u32 wq_page_pfn_hi, wq_page_pfn_lo;
+ u32 wq_block_pfn_hi, wq_block_pfn_lo;
+ u16 pi_start, ci_start;
+
+ ci_start = hinic3_get_sq_local_ci(sq);
+ pi_start = hinic3_get_sq_local_pi(sq);
+
+ wq_page_addr = hinic3_wq_get_first_wqe_page_addr(&sq->wq);
+
+ wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+ wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+ wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+ wq_block_pfn = WQ_BLOCK_PFN(sq->wq.wq_block_paddr);
+ wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+ wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+
+ sq_ctxt->ci_pi =
+ SQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+ SQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+ sq_ctxt->drop_mode_sp =
+ SQ_CTXT_MODE_SET(0, SP_FLAG) |
+ SQ_CTXT_MODE_SET(0, PKT_DROP);
+
+ sq_ctxt->wq_pfn_hi_owner =
+ SQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+ SQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+ sq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+ /* TO DO */
+ sq_ctxt->pkt_drop_thd =
+ SQ_CTXT_PKT_DROP_THD_SET(tx_drop_thd_on, THD_ON) |
+ SQ_CTXT_PKT_DROP_THD_SET(tx_drop_thd_off, THD_OFF);
+
+ sq_ctxt->global_sq_id =
+ SQ_CTXT_GLOBAL_QUEUE_ID_SET(sq_id, GLOBAL_SQ_ID);
+
+ /* enable c-vlan insertion by default */
+ sq_ctxt->vlan_ceq_attr =
+ SQ_CTXT_VLAN_CEQ_SET(0, CEQ_EN) |
+ SQ_CTXT_VLAN_CEQ_SET(1, INSERT_MODE);
+
+ sq_ctxt->rsvd0 = 0;
+
+ sq_ctxt->pref_cache =
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+ sq_ctxt->pref_ci_owner =
+ SQ_CTXT_PREF_SET(CI_HIGH_IDX(ci_start), CI_HI) |
+ SQ_CTXT_PREF_SET(1, OWNER);
+
+ sq_ctxt->pref_wq_pfn_hi_ci =
+ SQ_CTXT_PREF_SET(ci_start, CI_LOW) |
+ SQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI);
+
+ sq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+ sq_ctxt->wq_block_pfn_hi =
+ SQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+ sq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+
+ hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt));
+}
+
+static void hinic3_rq_prepare_ctxt_get_wq_info(struct hinic3_io_queue *rq,
+ u32 *wq_page_pfn_hi, u32 *wq_page_pfn_lo,
+ u32 *wq_block_pfn_hi, u32 *wq_block_pfn_lo)
+{
+ u64 wq_page_addr;
+ u64 wq_page_pfn, wq_block_pfn;
+
+ wq_page_addr = hinic3_wq_get_first_wqe_page_addr(&rq->wq);
+
+ wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+ *wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+ *wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+ wq_block_pfn = WQ_BLOCK_PFN(rq->wq.wq_block_paddr);
+ *wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+ *wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+}
+
+static void hinic3_rq_prepare_ctxt(struct hinic3_io_queue *rq, struct hinic3_rq_ctxt *rq_ctxt)
+{
+ u32 wq_page_pfn_hi, wq_page_pfn_lo;
+ u32 wq_block_pfn_hi, wq_block_pfn_lo;
+ u16 pi_start, ci_start;
+ u16 wqe_type = rq->wqe_type;
+
+ /* RQ depth is in units of 8 bytes */
+ ci_start = (u16)((u32)hinic3_get_rq_local_ci(rq) << wqe_type);
+ pi_start = (u16)((u32)hinic3_get_rq_local_pi(rq) << wqe_type);
+
+ hinic3_rq_prepare_ctxt_get_wq_info(rq, &wq_page_pfn_hi, &wq_page_pfn_lo,
+ &wq_block_pfn_hi, &wq_block_pfn_lo);
+
+ rq_ctxt->ci_pi =
+ RQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+ RQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+ rq_ctxt->ceq_attr = RQ_CTXT_CEQ_ATTR_SET(0, EN) |
+ RQ_CTXT_CEQ_ATTR_SET(rq->msix_entry_idx, INTR);
+
+ rq_ctxt->wq_pfn_hi_type_owner =
+ RQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+ RQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+ switch (wqe_type) {
+ case HINIC3_EXTEND_RQ_WQE:
+ /* use 32Byte WQE with SGE for CQE */
+ rq_ctxt->wq_pfn_hi_type_owner |=
+ RQ_CTXT_WQ_PAGE_SET(0, WQE_TYPE);
+ break;
+ case HINIC3_NORMAL_RQ_WQE:
+ /* use 16Byte WQE with 32Bytes SGE for CQE */
+ rq_ctxt->wq_pfn_hi_type_owner |=
+ RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE);
+ rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN);
+ break;
+ default:
+ pr_err("Invalid rq wqe type: %u", wqe_type);
+ }
+
+ rq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+ rq_ctxt->pref_cache =
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+ rq_ctxt->pref_ci_owner =
+ RQ_CTXT_PREF_SET(CI_HIGH_IDX(ci_start), CI_HI) |
+ RQ_CTXT_PREF_SET(1, OWNER);
+
+ rq_ctxt->pref_wq_pfn_hi_ci =
+ RQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI) |
+ RQ_CTXT_PREF_SET(ci_start, CI_LOW);
+
+ rq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+ rq_ctxt->pi_paddr_hi = upper_32_bits(rq->rx.pi_dma_addr);
+ rq_ctxt->pi_paddr_lo = lower_32_bits(rq->rx.pi_dma_addr);
+
+ rq_ctxt->wq_block_pfn_hi =
+ RQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+ rq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+
+ hinic3_cpu_to_be32(rq_ctxt, sizeof(*rq_ctxt));
+}
+
+static int init_sq_ctxts(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL;
+ struct hinic3_sq_ctxt *sq_ctxt = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_io_queue *sq = NULL;
+ u64 out_param = 0;
+ u16 q_id, curr_id, max_ctxts, i;
+ int err = 0;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ q_id = 0;
+ while (q_id < nic_io->num_qps) {
+ sq_ctxt_block = cmd_buf->buf;
+ sq_ctxt = sq_ctxt_block->sq_ctxt;
+
+ max_ctxts = (nic_io->num_qps - q_id) > HINIC3_Q_CTXT_MAX ?
+ HINIC3_Q_CTXT_MAX : (nic_io->num_qps - q_id);
+
+ hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr,
+ HINIC3_QP_CTXT_TYPE_SQ, max_ctxts,
+ q_id);
+
+ for (i = 0; i < max_ctxts; i++) {
+ curr_id = q_id + i;
+ sq = &nic_io->sq[curr_id];
+
+ hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]);
+ }
+
+ cmd_buf->size = SQ_CTXT_SIZE(max_ctxts);
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set SQ ctxts, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+
+ err = -EFAULT;
+ break;
+ }
+
+ q_id += max_ctxts;
+ }
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ return err;
+}
+
+static int init_rq_ctxts(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL;
+ struct hinic3_rq_ctxt *rq_ctxt = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_io_queue *rq = NULL;
+ u64 out_param = 0;
+ u16 q_id, curr_id, max_ctxts, i;
+ int err = 0;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ q_id = 0;
+ while (q_id < nic_io->num_qps) {
+ rq_ctxt_block = cmd_buf->buf;
+ rq_ctxt = rq_ctxt_block->rq_ctxt;
+
+ max_ctxts = (nic_io->num_qps - q_id) > HINIC3_Q_CTXT_MAX ?
+ HINIC3_Q_CTXT_MAX : (nic_io->num_qps - q_id);
+
+ hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr,
+ HINIC3_QP_CTXT_TYPE_RQ, max_ctxts,
+ q_id);
+
+ for (i = 0; i < max_ctxts; i++) {
+ curr_id = q_id + i;
+ rq = &nic_io->rq[curr_id];
+
+ hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]);
+ }
+
+ cmd_buf->size = RQ_CTXT_SIZE(max_ctxts);
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set RQ ctxts, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+
+ err = -EFAULT;
+ break;
+ }
+
+ q_id += max_ctxts;
+ }
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ return err;
+}
+
+static int init_qp_ctxts(struct hinic3_nic_io *nic_io)
+{
+ int err;
+
+ err = init_sq_ctxts(nic_io);
+ if (err)
+ return err;
+
+ err = init_rq_ctxts(nic_io);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int clean_queue_offload_ctxt(struct hinic3_nic_io *nic_io,
+ enum hinic3_qp_ctxt_type ctxt_type)
+{
+ struct hinic3_clean_queue_ctxt *ctxt_block = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ u64 out_param = 0;
+ int err;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ ctxt_block = cmd_buf->buf;
+ ctxt_block->cmdq_hdr.num_queues = nic_io->max_qps;
+ ctxt_block->cmdq_hdr.queue_type = ctxt_type;
+ ctxt_block->cmdq_hdr.start_qid = 0;
+
+ hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block));
+
+ cmd_buf->size = sizeof(*ctxt_block);
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param) {
+ nic_err(nic_io->dev_hdl, "Failed to clean queue offload ctxts, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+
+ err = -EFAULT;
+ }
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ return err;
+}
+
+static int clean_qp_offload_ctxt(struct hinic3_nic_io *nic_io)
+{
+ /* clean LRO/TSO context space */
+ return ((clean_queue_offload_ctxt(nic_io, HINIC3_QP_CTXT_TYPE_SQ) != 0) ||
+ (clean_queue_offload_ctxt(nic_io, HINIC3_QP_CTXT_TYPE_RQ) != 0));
+}
+
+/* init qps ctxt and set sq ci attr and arm all sq */
+int hinic3_init_qp_ctxts(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_sq_attr sq_attr;
+ u32 rq_depth;
+ u16 q_id;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EFAULT;
+
+ err = init_qp_ctxts(nic_io);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to init QP ctxts\n");
+ return err;
+ }
+
+ /* clean LRO/TSO context space */
+ err = clean_qp_offload_ctxt(nic_io);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to clean qp offload ctxts\n");
+ return err;
+ }
+
+ rq_depth = nic_io->rq[0].wq.q_depth << nic_io->rq[0].wqe_type;
+
+ err = hinic3_set_root_ctxt(hwdev, rq_depth, nic_io->sq[0].wq.q_depth,
+ nic_io->rx_buff_len, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set root context\n");
+ return err;
+ }
+
+ for (q_id = 0; q_id < nic_io->num_qps; q_id++) {
+ sq_attr.ci_dma_base =
+ HINIC3_CI_PADDR(nic_io->ci_dma_base, q_id) >> 0x2;
+ sq_attr.pending_limit = tx_pending_limit;
+ sq_attr.coalescing_time = tx_coalescing_time;
+ sq_attr.intr_en = 1;
+ sq_attr.intr_idx = nic_io->sq[q_id].msix_entry_idx;
+ sq_attr.l2nic_sqn = q_id;
+ sq_attr.dma_attr_off = 0;
+ err = hinic3_set_ci_table(hwdev, &sq_attr);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set ci table\n");
+ goto set_cons_idx_table_err;
+ }
+ }
+
+ return 0;
+
+set_cons_idx_table_err:
+ hinic3_clean_root_ctxt(hwdev, HINIC3_CHANNEL_NIC);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(hinic3_init_qp_ctxts);
+
+void hinic3_free_qp_ctxts(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ hinic3_clean_root_ctxt(hwdev, HINIC3_CHANNEL_NIC);
+}
+EXPORT_SYMBOL_GPL(hinic3_free_qp_ctxts);
+
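A short sketch of the consumer-index (CI) table arithmetic used above: hinic3_init_nicio_res() makes one page-aligned DMA allocation, and CI_TABLE_SIZE()/HINIC3_CI_VADDR()/HINIC3_CI_PADDR() carve one 64-byte, cache-line-aligned slot per queue out of it. The function and the q_id value are examples only, assuming the macros are visible in the same file.

static void example_ci_table_layout(struct hinic3_nic_io *nic_io)
{
	/* one page-aligned block covers all queues, 64 bytes per queue */
	size_t tbl_size = CI_TABLE_SIZE(nic_io->max_qps, PAGE_SIZE);

	/* per-queue slot for q_id = 3 */
	void *ci_vaddr = HINIC3_CI_VADDR(nic_io->ci_vaddr_base, 3);	/* base + 3 * 64 */
	dma_addr_t ci_paddr = HINIC3_CI_PADDR(nic_io->ci_dma_base, 3);	/* dma base + 3 * 64 */

	/* hinic3_init_qp_ctxts() programs ci_paddr >> 2 into the SQ CI attribute */
	(void)tbl_size;
	(void)ci_vaddr;
	(void)ci_paddr;
}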
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
new file mode 100644
index 0000000..943a736
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
@@ -0,0 +1,325 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_IO_H
+#define HINIC3_NIC_IO_H
+
+#include "hinic3_crm.h"
+#include "hinic3_common.h"
+#include "hinic3_wq.h"
+
+#define HINIC3_MAX_TX_QUEUE_DEPTH 65536
+#define HINIC3_MAX_RX_QUEUE_DEPTH 16384
+
+#define HINIC3_MIN_QUEUE_DEPTH 128
+
+#define HINIC3_SQ_WQEBB_SHIFT 4
+#define HINIC3_RQ_WQEBB_SHIFT 3
+
+#define HINIC3_SQ_WQEBB_SIZE BIT(HINIC3_SQ_WQEBB_SHIFT)
+#define HINIC3_CQE_SIZE_SHIFT 4
+
+enum hinic3_rq_wqe_type {
+ HINIC3_COMPACT_RQ_WQE,
+ HINIC3_NORMAL_RQ_WQE,
+ HINIC3_EXTEND_RQ_WQE,
+};
+
+struct hinic3_io_queue {
+ struct hinic3_wq wq;
+ union {
+ u8 wqe_type; /* for rq */
+ u8 owner; /* for sq */
+ };
+ u8 rsvd1;
+ u16 rsvd2;
+
+ u16 q_id;
+ u16 msix_entry_idx;
+
+ u8 __iomem *db_addr;
+
+ union {
+ struct {
+ void *cons_idx_addr;
+ } tx;
+
+ struct {
+ u16 *pi_virt_addr;
+ dma_addr_t pi_dma_addr;
+ } rx;
+ };
+} ____cacheline_aligned;
+
+struct hinic3_nic_db {
+ u32 db_info;
+ u32 pi_hi;
+};
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+/* *
+ * @brief hinic3_get_sq_free_wqebbs - get send queue free wqebb
+ * @param sq: send queue
+ * @retval : number of free wqebb
+ */
+static inline u16 hinic3_get_sq_free_wqebbs(struct hinic3_io_queue *sq)
+{
+ return hinic3_wq_free_wqebbs(&sq->wq);
+}
+
+/* *
+ * @brief hinic3_update_sq_local_ci - update send queue local consumer index
+ * @param sq: send queue
+ * @param wqebb_cnt: number of wqebbs
+ */
+static inline void hinic3_update_sq_local_ci(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt)
+{
+ hinic3_wq_put_wqebbs(&sq->wq, wqebb_cnt);
+}
+
+/* *
+ * @brief hinic3_get_sq_local_ci - get send queue local consumer index
+ * @param sq: send queue
+ * @retval : local consumer index
+ */
+static inline u16 hinic3_get_sq_local_ci(const struct hinic3_io_queue *sq)
+{
+ return WQ_MASK_IDX(&sq->wq, sq->wq.cons_idx);
+}
+
+/* *
+ * @brief hinic3_get_sq_local_pi - get send queue local producer index
+ * @param sq: send queue
+ * @retval : local producer index
+ */
+static inline u16 hinic3_get_sq_local_pi(const struct hinic3_io_queue *sq)
+{
+ return WQ_MASK_IDX(&sq->wq, sq->wq.prod_idx);
+}
+
+/* *
+ * @brief hinic3_get_sq_hw_ci - get send queue hardware consumer index
+ * @param sq: send queue
+ * @retval : hardware consumer index
+ */
+static inline u16 hinic3_get_sq_hw_ci(const struct hinic3_io_queue *sq)
+{
+ return WQ_MASK_IDX(&sq->wq,
+ hinic3_hw_cpu16(*(u16 *)sq->tx.cons_idx_addr));
+}
+
+/* *
+ * @brief hinic3_get_sq_one_wqebb - get send queue wqe with single wqebb
+ * @param sq: send queue
+ * @param pi: return current pi
+ * @retval : wqe base address
+ */
+static inline void *hinic3_get_sq_one_wqebb(struct hinic3_io_queue *sq, u16 *pi)
+{
+ return hinic3_wq_get_one_wqebb(&sq->wq, pi);
+}
+
+/* *
+ * @brief hinic3_get_sq_multi_wqebb - get send queue wqe with multiple wqebbs
+ * @param sq: send queue
+ * @param wqebb_cnt: wqebb counter
+ * @param pi: return current pi
+ * @param second_part_wqebbs_addr: second part wqebbs base address
+ * @param first_part_wqebbs_num: number wqebbs of first part
+ * @retval : first part wqebbs base address
+ */
+static inline void *hinic3_get_sq_multi_wqebbs(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt, u16 *pi,
+ void **second_part_wqebbs_addr,
+ u16 *first_part_wqebbs_num)
+{
+ return hinic3_wq_get_multi_wqebbs(&sq->wq, wqebb_cnt, pi,
+ second_part_wqebbs_addr,
+ first_part_wqebbs_num);
+}
+
+/* *
+ * @brief hinic3_get_and_update_sq_owner - get and update send queue owner bit
+ * @param sq: send queue
+ * @param curr_pi: current pi
+ * @param wqebb_cnt: wqebb counter
+ * @retval : owner bit
+ */
+static inline u16 hinic3_get_and_update_sq_owner(struct hinic3_io_queue *sq,
+ u16 curr_pi, u16 wqebb_cnt)
+{
+ u16 owner = sq->owner;
+
+ if (unlikely(curr_pi + wqebb_cnt >= sq->wq.q_depth))
+ sq->owner = !sq->owner;
+
+ return owner;
+}
+
+/* *
+ * @brief hinic3_get_sq_wqe_with_owner - get send queue wqe with owner
+ * @param sq: send queue
+ * @param wqebb_cnt: wqebb counter
+ * @param pi: return current pi
+ * @param owner: return owner bit
+ * @param second_part_wqebbs_addr: second part wqebbs base address
+ * @param first_part_wqebbs_num: number wqebbs of first part
+ * @retval : first part wqebbs base address
+ */
+static inline void *hinic3_get_sq_wqe_with_owner(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt, u16 *pi,
+ u16 *owner,
+ void **second_part_wqebbs_addr,
+ u16 *first_part_wqebbs_num)
+{
+ void *wqe = hinic3_wq_get_multi_wqebbs(&sq->wq, wqebb_cnt, pi,
+ second_part_wqebbs_addr,
+ first_part_wqebbs_num);
+
+ *owner = sq->owner;
+ if (unlikely(*pi + wqebb_cnt >= sq->wq.q_depth))
+ sq->owner = !sq->owner;
+
+ return wqe;
+}
+
+/* *
+ * @brief hinic3_rollback_sq_wqebbs - rollback send queue wqe
+ * @param sq: send queue
+ * @param wqebb_cnt: wqebb counter
+ * @param owner: owner bit
+ */
+static inline void hinic3_rollback_sq_wqebbs(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt, u16 owner)
+{
+ if (owner != sq->owner)
+ sq->owner = (u8)owner;
+ sq->wq.prod_idx -= wqebb_cnt;
+}
+
+/* *
+ * @brief hinic3_rq_wqe_addr - get receive queue wqe address by queue index
+ * @param rq: receive queue
+ * @param idx: wq index
+ * @retval: wqe base address
+ */
+static inline void *hinic3_rq_wqe_addr(struct hinic3_io_queue *rq, u16 idx)
+{
+ return hinic3_wq_wqebb_addr(&rq->wq, idx);
+}
+
+/* *
+ * @brief hinic3_update_rq_hw_pi - update receive queue hardware pi
+ * @param rq: receive queue
+ * @param pi: pi
+ */
+static inline void hinic3_update_rq_hw_pi(struct hinic3_io_queue *rq, u16 pi)
+{
+ *rq->rx.pi_virt_addr = cpu_to_be16((pi & rq->wq.idx_mask) <<
+ rq->wqe_type);
+}
+
+/* *
+ * @brief hinic3_update_rq_local_ci - update receive queue local consumer index
+ * @param rq: receive queue
+ * @param wqebb_cnt: number of wqebbs
+ */
+static inline void hinic3_update_rq_local_ci(struct hinic3_io_queue *rq,
+ u16 wqebb_cnt)
+{
+ hinic3_wq_put_wqebbs(&rq->wq, wqebb_cnt);
+}
+
+/* *
+ * @brief hinic3_get_rq_local_ci - get receive queue local ci
+ * @param rq: receive queue
+ * @retval: receive queue local ci
+ */
+static inline u16 hinic3_get_rq_local_ci(const struct hinic3_io_queue *rq)
+{
+ return WQ_MASK_IDX(&rq->wq, rq->wq.cons_idx);
+}
+
+/* *
+ * @brief hinic3_get_rq_local_pi - get receive queue local pi
+ * @param rq: receive queue
+ * @retval: receive queue local pi
+ */
+static inline u16 hinic3_get_rq_local_pi(const struct hinic3_io_queue *rq)
+{
+ return WQ_MASK_IDX(&rq->wq, rq->wq.prod_idx);
+}
+
+/* ******************** DB INFO ******************** */
+#define DB_INFO_QID_SHIFT 0
+#define DB_INFO_NON_FILTER_SHIFT 22
+#define DB_INFO_CFLAG_SHIFT 23
+#define DB_INFO_COS_SHIFT 24
+#define DB_INFO_TYPE_SHIFT 27
+
+#define DB_INFO_QID_MASK 0x1FFFU
+#define DB_INFO_NON_FILTER_MASK 0x1U
+#define DB_INFO_CFLAG_MASK 0x1U
+#define DB_INFO_COS_MASK 0x7U
+#define DB_INFO_TYPE_MASK 0x1FU
+#define DB_INFO_SET(val, member) \
+ (((u32)(val) & DB_INFO_##member##_MASK) << \
+ DB_INFO_##member##_SHIFT)
+
+#define DB_PI_LOW_MASK 0xFFU
+#define DB_PI_HIGH_MASK 0xFFU
+#define DB_PI_LOW(pi) ((pi) & DB_PI_LOW_MASK)
+#define DB_PI_HI_SHIFT 8
+#define DB_PI_HIGH(pi) (((pi) >> DB_PI_HI_SHIFT) & DB_PI_HIGH_MASK)
+#define DB_ADDR(queue, pi) ((u64 *)((queue)->db_addr) + DB_PI_LOW(pi))
+#define SRC_TYPE 1
+
+/* CFLAG_DATA_PATH */
+#define SQ_CFLAG_DP 0
+#define RQ_CFLAG_DP 1
+/* *
+ * @brief hinic3_write_db - write doorbell
+ * @param queue: nic io queue
+ * @param cos: cos index
+ * @param cflag: 0--sq, 1--rq
+ * @param pi: product index
+ */
+static inline void hinic3_write_db(struct hinic3_io_queue *queue, int cos,
+ u8 cflag, u16 pi)
+{
+ struct hinic3_nic_db db;
+
+ db.db_info = DB_INFO_SET(SRC_TYPE, TYPE) | DB_INFO_SET(cflag, CFLAG) |
+ DB_INFO_SET(cos, COS) | DB_INFO_SET(queue->q_id, QID);
+ db.pi_hi = DB_PI_HIGH(pi);
+ /* Data should be written to HW in Big Endian Format */
+ db.db_info = hinic3_hw_be32(db.db_info);
+ db.pi_hi = hinic3_hw_be32(db.pi_hi);
+
+ wmb(); /* Write all before the doorbell */
+
+ writeq(*((u64 *)(u8 *)&db), DB_ADDR(queue, pi));
+}
+
+struct hinic3_dyna_qp_params {
+ u16 num_qps;
+ u32 sq_depth;
+ u32 rq_depth;
+
+ struct hinic3_io_queue *sqs;
+ struct hinic3_io_queue *rqs;
+};
+
+int hinic3_alloc_qps(void *hwdev, struct irq_info *qps_msix_arry,
+ struct hinic3_dyna_qp_params *qp_params);
+void hinic3_free_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params);
+int hinic3_init_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params);
+void hinic3_deinit_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params);
+int hinic3_init_nicio_res(void *hwdev);
+void hinic3_deinit_nicio_res(void *hwdev);
+#endif
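For reference, the doorbell word assembled by hinic3_write_db() above packs the source type, the SQ/RQ flag, the CoS and the queue id into db_info, while the 16-bit producer index is split: the low 8 bits select the doorbell address (DB_ADDR()) and the high 8 bits go into pi_hi; both words are byte-swapped to big endian before the single 64-bit write. A small sketch with example values; the function is illustrative only and assumes sq points at a valid struct hinic3_io_queue.

static void example_compose_sq_doorbell(struct hinic3_io_queue *sq)
{
	u16 pi = 0x1234;	/* example producer index */
	u32 db_info = DB_INFO_SET(SRC_TYPE, TYPE) |
		      DB_INFO_SET(SQ_CFLAG_DP, CFLAG) |
		      DB_INFO_SET(0, COS) |
		      DB_INFO_SET(sq->q_id, QID);
	u32 pi_hi = DB_PI_HIGH(pi);	/* 0x12 */
	u64 *db_addr = DB_ADDR(sq, pi);	/* sq->db_addr + (pi & 0xFF) 64-bit slots */

	(void)db_info;
	(void)pi_hi;
	(void)db_addr;
}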
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c
new file mode 100644
index 0000000..9ea93a0
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+
+#include "ossl_knl.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_profile.h"
+#include "hinic3_nic_prof.h"
+
+static bool is_match_nic_prof_default_adapter(void *device)
+{
+ /* always match default profile adapter in standard scene */
+ return true;
+}
+
+struct hinic3_prof_adapter nic_prof_adap_objs[] = {
+ /* Add prof adapter before default profile */
+ {
+ .type = PROF_ADAP_TYPE_DEFAULT,
+ .match = is_match_nic_prof_default_adapter,
+ .init = NULL,
+ .deinit = NULL,
+ },
+};
+
+void hinic3_init_nic_prof_adapter(struct hinic3_nic_dev *nic_dev)
+{
+ int num_adap = ARRAY_LEN(nic_prof_adap_objs);
+
+ nic_dev->prof_adap = hinic3_prof_init(nic_dev, nic_prof_adap_objs, num_adap,
+ (void *)&nic_dev->prof_attr);
+ if (nic_dev->prof_adap)
+ nic_info(&nic_dev->pdev->dev, "Find profile adapter type: %d\n",
+ nic_dev->prof_adap->type);
+}
+
+void hinic3_deinit_nic_prof_adapter(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_prof_deinit(nic_dev->prof_adap, nic_dev->prof_attr);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h
new file mode 100644
index 0000000..3c279e7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_PROF_H
+#define HINIC3_NIC_PROF_H
+#include <linux/socket.h>
+
+#include <linux/types.h>
+
+#include "hinic3_nic_cfg.h"
+
+struct hinic3_nic_prof_attr {
+ void *priv_data;
+ char netdev_name[IFNAMSIZ];
+};
+
+struct hinic3_nic_dev;
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline char *hinic3_get_dft_netdev_name_fmt(struct hinic3_nic_dev *nic_dev)
+{
+ if (nic_dev->prof_attr)
+ return nic_dev->prof_attr->netdev_name;
+
+ return NULL;
+}
+
+#ifdef CONFIG_MODULE_PROF
+int hinic3_set_master_dev_state(struct hinic3_nic_dev *nic_dev, u32 flag);
+u32 hinic3_get_link(struct net_device *dev);
+int hinic3_config_port_mtu(struct hinic3_nic_dev *nic_dev, u32 mtu);
+int hinic3_config_port_mac(struct hinic3_nic_dev *nic_dev, struct sockaddr *saddr);
+#else
+static inline int hinic3_set_master_dev_state(struct hinic3_nic_dev *nic_dev, u32 flag)
+{
+ return 0;
+}
+
+static inline int hinic3_config_port_mtu(struct hinic3_nic_dev *nic_dev, u32 mtu)
+{
+ return hinic3_set_port_mtu(nic_dev->hwdev, (u16)mtu);
+}
+
+static inline int hinic3_config_port_mac(struct hinic3_nic_dev *nic_dev, struct sockaddr *saddr)
+{
+ return hinic3_update_mac(nic_dev->hwdev, nic_dev->netdev->dev_addr, saddr->sa_data, 0,
+ hinic3_global_func_id(nic_dev->hwdev));
+}
+
+#endif
+
+void hinic3_init_nic_prof_adapter(struct hinic3_nic_dev *nic_dev);
+void hinic3_deinit_nic_prof_adapter(struct hinic3_nic_dev *nic_dev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h
new file mode 100644
index 0000000..f492c5d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h
@@ -0,0 +1,384 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_QP_H
+#define HINIC3_NIC_QP_H
+
+#include "hinic3_common.h"
+
+#define TX_MSS_DEFAULT 0x3E00
+#define TX_MSS_MIN 0x50
+
+#define HINIC3_MAX_SQ_SGE 18
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0
+#define RQ_CQE_OFFOLAD_TYPE_IP_TYPE_SHIFT 5
+#define RQ_CQE_OFFOLAD_TYPE_ENC_L3_TYPE_SHIFT 7
+#define RQ_CQE_OFFOLAD_TYPE_TUNNEL_PKT_FORMAT_SHIFT 8
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0x1FU
+#define RQ_CQE_OFFOLAD_TYPE_IP_TYPE_MASK 0x3U
+#define RQ_CQE_OFFOLAD_TYPE_ENC_L3_TYPE_MASK 0x1U
+#define RQ_CQE_OFFOLAD_TYPE_TUNNEL_PKT_FORMAT_MASK 0xFU
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU
+
+#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) \
+ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \
+ RQ_CQE_OFFOLAD_TYPE_##member##_MASK)
+
+#define HINIC3_GET_RX_PKT_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+#define HINIC3_GET_RX_IP_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, IP_TYPE)
+#define HINIC3_GET_RX_ENC_L3_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, ENC_L3_TYPE)
+#define HINIC3_GET_RX_TUNNEL_PKT_FORMAT(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, TUNNEL_PKT_FORMAT)
+
+#define HINIC3_GET_RX_PKT_UMBCAST(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_UMBCAST)
+
+#define HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN)
+
+#define HINIC3_GET_RSS_TYPES(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, RSS_TYPE)
+
+#define RQ_CQE_SGE_VLAN_SHIFT 0
+#define RQ_CQE_SGE_LEN_SHIFT 16
+
+#define RQ_CQE_SGE_VLAN_MASK 0xFFFFU
+#define RQ_CQE_SGE_LEN_MASK 0xFFFFU
+
+#define RQ_CQE_SGE_GET(val, member) \
+ (((val) >> RQ_CQE_SGE_##member##_SHIFT) & RQ_CQE_SGE_##member##_MASK)
+
+#define HINIC3_GET_RX_VLAN_TAG(vlan_len) RQ_CQE_SGE_GET(vlan_len, VLAN)
+
+#define HINIC3_GET_RX_PKT_LEN(vlan_len) RQ_CQE_SGE_GET(vlan_len, LEN)
+
+#define RQ_CQE_STATUS_CSUM_ERR_SHIFT 0
+#define RQ_CQE_STATUS_NUM_LRO_SHIFT 16
+#define RQ_CQE_STATUS_LRO_PUSH_SHIFT 25
+#define RQ_CQE_STATUS_LRO_ENTER_SHIFT 26
+#define RQ_CQE_STATUS_LRO_INTR_SHIFT 27
+
+#define RQ_CQE_STATUS_BP_EN_SHIFT 30
+#define RQ_CQE_STATUS_RXDONE_SHIFT 31
+#define RQ_CQE_STATUS_DECRY_PKT_SHIFT 29
+#define RQ_CQE_STATUS_FLUSH_SHIFT 28
+
+#define RQ_CQE_STATUS_CSUM_ERR_MASK 0xFFFFU
+#define RQ_CQE_STATUS_NUM_LRO_MASK 0xFFU
+#define RQ_CQE_STATUS_LRO_PUSH_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_ENTER_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_INTR_MASK 0X1U
+#define RQ_CQE_STATUS_BP_EN_MASK 0X1U
+#define RQ_CQE_STATUS_RXDONE_MASK 0x1U
+#define RQ_CQE_STATUS_FLUSH_MASK 0x1U
+#define RQ_CQE_STATUS_DECRY_PKT_MASK 0x1U
+
+#define RQ_CQE_STATUS_GET(val, member) \
+ (((val) >> RQ_CQE_STATUS_##member##_SHIFT) & \
+ RQ_CQE_STATUS_##member##_MASK)
+
+#define HINIC3_GET_RX_CSUM_ERR(status) RQ_CQE_STATUS_GET(status, CSUM_ERR)
+
+#define HINIC3_GET_RX_DONE(status) RQ_CQE_STATUS_GET(status, RXDONE)
+
+#define HINIC3_GET_RX_FLUSH(status) RQ_CQE_STATUS_GET(status, FLUSH)
+
+#define HINIC3_GET_RX_BP_EN(status) RQ_CQE_STATUS_GET(status, BP_EN)
+
+#define HINIC3_GET_RX_NUM_LRO(status) RQ_CQE_STATUS_GET(status, NUM_LRO)
+
+#define HINIC3_RX_IS_DECRY_PKT(status) RQ_CQE_STATUS_GET(status, DECRY_PKT)
+
+#define RQ_CQE_SUPER_CQE_EN_SHIFT 0
+#define RQ_CQE_PKT_NUM_SHIFT 1
+#define RQ_CQE_PKT_LAST_LEN_SHIFT 6
+#define RQ_CQE_PKT_FIRST_LEN_SHIFT 19
+
+#define RQ_CQE_SUPER_CQE_EN_MASK 0x1
+#define RQ_CQE_PKT_NUM_MASK 0x1FU
+#define RQ_CQE_PKT_FIRST_LEN_MASK 0x1FFFU
+#define RQ_CQE_PKT_LAST_LEN_MASK 0x1FFFU
+
+#define RQ_CQE_PKT_NUM_GET(val, member) \
+ (((val) >> RQ_CQE_PKT_##member##_SHIFT) & RQ_CQE_PKT_##member##_MASK)
+#define HINIC3_GET_RQ_CQE_PKT_NUM(pkt_info) RQ_CQE_PKT_NUM_GET(pkt_info, NUM)
+
+#define RQ_CQE_SUPER_CQE_EN_GET(val, member) \
+ (((val) >> RQ_CQE_##member##_SHIFT) & RQ_CQE_##member##_MASK)
+#define HINIC3_GET_SUPER_CQE_EN(pkt_info) \
+ RQ_CQE_SUPER_CQE_EN_GET(pkt_info, SUPER_CQE_EN)
+
+#define RQ_CQE_PKT_LEN_GET(val, member) \
+ (((val) >> RQ_CQE_PKT_##member##_SHIFT) & RQ_CQE_PKT_##member##_MASK)
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_SHIFT 8
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_SHIFT 0
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_MASK 0xFFU
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_MASK 0xFFU
+
+#define RQ_CQE_DECRY_INFO_GET(val, member) \
+ (((val) >> RQ_CQE_DECRY_INFO_##member##_SHIFT) & \
+ RQ_CQE_DECRY_INFO_##member##_MASK)
+
+#define HINIC3_GET_DECRYPT_STATUS(decry_info) \
+ RQ_CQE_DECRY_INFO_GET(decry_info, DECRY_STATUS)
+
+#define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \
+ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD)
+
+struct hinic3_rq_cqe {
+ u32 status;
+ u32 vlan_len;
+
+ u32 offload_type;
+ u32 hash_val;
+ u32 xid;
+ u32 decrypt_info;
+ u32 rsvd6;
+ u32 pkt_info;
+};
+
+struct hinic3_sge_sect {
+ struct hinic3_sge sge;
+ u32 rsvd;
+};
+
+struct hinic3_rq_extend_wqe {
+ struct hinic3_sge_sect buf_desc;
+ struct hinic3_sge_sect cqe_sect;
+};
+
+struct hinic3_rq_normal_wqe {
+ u32 buf_hi_addr;
+ u32 buf_lo_addr;
+ u32 cqe_hi_addr;
+ u32 cqe_lo_addr;
+};
+
+struct hinic3_rq_wqe {
+ union {
+ struct hinic3_rq_normal_wqe normal_wqe;
+ struct hinic3_rq_extend_wqe extend_wqe;
+ };
+};
+
+struct hinic3_sq_wqe_desc {
+ u32 ctrl_len;
+ u32 queue_info;
+ u32 hi_addr;
+ u32 lo_addr;
+};
+
+/* Engine only passes the first 12B TS field directly to the uCode through
+ * metadata; vlan_offload is used by hardware when inserting a VLAN tag on tx
+ */
+struct hinic3_sq_task {
+ u32 pkt_info0;
+ u32 ip_identify;
+ u32 pkt_info2; /* ipsec used as spi */
+ u32 vlan_offload;
+};
+
+struct hinic3_sq_bufdesc {
+ u32 len; /* 31-bits Length, L2NIC only use length[17:0] */
+ u32 rsvd;
+ u32 hi_addr;
+ u32 lo_addr;
+};
+
+struct hinic3_sq_compact_wqe {
+ struct hinic3_sq_wqe_desc wqe_desc;
+};
+
+struct hinic3_sq_extend_wqe {
+ struct hinic3_sq_wqe_desc wqe_desc;
+ struct hinic3_sq_task task;
+	struct hinic3_sq_bufdesc buf_desc[];
+};
+
+struct hinic3_sq_wqe {
+ union {
+ struct hinic3_sq_compact_wqe compact_wqe;
+ struct hinic3_sq_extend_wqe extend_wqe;
+ };
+};
+
+/* use section pointer for support non continuous wqe */
+struct hinic3_sq_wqe_combo {
+ struct hinic3_sq_wqe_desc *ctrl_bd0;
+ struct hinic3_sq_task *task;
+ struct hinic3_sq_bufdesc *bds_head;
+ struct hinic3_sq_bufdesc *bds_sec2;
+ u16 first_bds_num;
+ u32 wqe_type;
+ u32 task_type;
+};
+
+/* ************* SQ_CTRL ************** */
+enum sq_wqe_data_format {
+ SQ_NORMAL_WQE = 0,
+};
+
+enum sq_wqe_ec_type {
+ SQ_WQE_COMPACT_TYPE = 0,
+ SQ_WQE_EXTENDED_TYPE = 1,
+};
+
+enum sq_wqe_tasksect_len_type {
+ SQ_WQE_TASKSECT_46BITS = 0,
+ SQ_WQE_TASKSECT_16BYTES = 1,
+};
+
+#define SQ_CTRL_BD0_LEN_SHIFT 0
+#define SQ_CTRL_RSVD_SHIFT 18
+#define SQ_CTRL_BUFDESC_NUM_SHIFT 19
+#define SQ_CTRL_TASKSECT_LEN_SHIFT 27
+#define SQ_CTRL_DATA_FORMAT_SHIFT 28
+#define SQ_CTRL_DIRECT_SHIFT 29
+#define SQ_CTRL_EXTENDED_SHIFT 30
+#define SQ_CTRL_OWNER_SHIFT 31
+
+#define SQ_CTRL_BD0_LEN_MASK 0x3FFFFU
+#define SQ_CTRL_RSVD_MASK 0x1U
+#define SQ_CTRL_BUFDESC_NUM_MASK 0xFFU
+#define SQ_CTRL_TASKSECT_LEN_MASK 0x1U
+#define SQ_CTRL_DATA_FORMAT_MASK 0x1U
+#define SQ_CTRL_DIRECT_MASK 0x1U
+#define SQ_CTRL_EXTENDED_MASK 0x1U
+#define SQ_CTRL_OWNER_MASK 0x1U
+
+#define SQ_CTRL_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_##member##_MASK) << SQ_CTRL_##member##_SHIFT)
+
+#define SQ_CTRL_GET(val, member) \
+ (((val) >> SQ_CTRL_##member##_SHIFT) & SQ_CTRL_##member##_MASK)
+
+#define SQ_CTRL_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_##member##_MASK << SQ_CTRL_##member##_SHIFT)))
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_SHIFT 0
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_SHIFT 2
+#define SQ_CTRL_QUEUE_INFO_UFO_SHIFT 10
+#define SQ_CTRL_QUEUE_INFO_TSO_SHIFT 11
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_SHIFT 12
+#define SQ_CTRL_QUEUE_INFO_MSS_SHIFT 13
+#define SQ_CTRL_QUEUE_INFO_SCTP_SHIFT 27
+#define SQ_CTRL_QUEUE_INFO_UC_SHIFT 28
+#define SQ_CTRL_QUEUE_INFO_PRI_SHIFT 29
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_MASK 0x3U
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_MASK 0xFFU
+#define SQ_CTRL_QUEUE_INFO_UFO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TSO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_MSS_MASK 0x3FFFU
+#define SQ_CTRL_QUEUE_INFO_SCTP_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_UC_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_PRI_MASK 0x7U
+
+#define SQ_CTRL_QUEUE_INFO_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_QUEUE_INFO_##member##_MASK) << \
+ SQ_CTRL_QUEUE_INFO_##member##_SHIFT)
+
+#define SQ_CTRL_QUEUE_INFO_GET(val, member) \
+ (((val) >> SQ_CTRL_QUEUE_INFO_##member##_SHIFT) & \
+ SQ_CTRL_QUEUE_INFO_##member##_MASK)
+
+#define SQ_CTRL_QUEUE_INFO_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK << \
+ SQ_CTRL_QUEUE_INFO_##member##_SHIFT)))
+
+#define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22
+#define SQ_TASK_INFO0_INNER_L4_EN_SHIFT 24
+#define SQ_TASK_INFO0_INNER_L3_EN_SHIFT 25
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_SHIFT 26
+#define SQ_TASK_INFO0_OUT_L4_EN_SHIFT 27
+#define SQ_TASK_INFO0_OUT_L3_EN_SHIFT 28
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_SHIFT 29
+#define SQ_TASK_INFO0_ESP_OFFLOAD_SHIFT 30
+#define SQ_TASK_INFO0_IPSEC_PROTO_SHIFT 31
+
+#define SQ_TASK_INFO0_TUNNEL_FLAG_MASK 0x1U
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_MASK 0x3U
+#define SQ_TASK_INFO0_INNER_L4_EN_MASK 0x1U
+#define SQ_TASK_INFO0_INNER_L3_EN_MASK 0x1U
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_EN_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L3_EN_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_ESP_OFFLOAD_MASK 0x1U
+#define SQ_TASK_INFO0_IPSEC_PROTO_MASK 0x1U
+
+#define SQ_TASK_INFO0_SET(val, member) \
+ (((u32)(val) & SQ_TASK_INFO0_##member##_MASK) << \
+ SQ_TASK_INFO0_##member##_SHIFT)
+#define SQ_TASK_INFO0_GET(val, member) \
+ (((val) >> SQ_TASK_INFO0_##member##_SHIFT) & \
+ SQ_TASK_INFO0_##member##_MASK)
+
+#define SQ_TASK_INFO1_SET(val, member) \
+ (((val) & SQ_TASK_INFO1_##member##_MASK) << \
+ SQ_TASK_INFO1_##member##_SHIFT)
+#define SQ_TASK_INFO1_GET(val, member) \
+ (((val) >> SQ_TASK_INFO1_##member##_SHIFT) & \
+ SQ_TASK_INFO1_##member##_MASK)
+
+#define SQ_TASK_INFO3_VLAN_TAG_SHIFT 0
+#define SQ_TASK_INFO3_VLAN_TYPE_SHIFT 16
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_SHIFT 19
+
+#define SQ_TASK_INFO3_VLAN_TAG_MASK 0xFFFFU
+#define SQ_TASK_INFO3_VLAN_TYPE_MASK 0x7U
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_MASK 0x1U
+
+#define SQ_TASK_INFO3_SET(val, member) \
+ (((val) & SQ_TASK_INFO3_##member##_MASK) << \
+ SQ_TASK_INFO3_##member##_SHIFT)
+#define SQ_TASK_INFO3_GET(val, member) \
+ (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \
+ SQ_TASK_INFO3_##member##_MASK)
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline u32 hinic3_get_pkt_len_for_super_cqe(const struct hinic3_rq_cqe *cqe,
+ bool last)
+{
+ u32 pkt_len = hinic3_hw_cpu32(cqe->pkt_info);
+
+ if (!last)
+ return RQ_CQE_PKT_LEN_GET(pkt_len, FIRST_LEN);
+ else
+ return RQ_CQE_PKT_LEN_GET(pkt_len, LAST_LEN);
+}
+
+/* *
+ * hinic3_set_vlan_tx_offload - set vlan offload info
+ * @task: wqe task section
+ * @vlan_tag: vlan tag
+ * @vlan_type: 0--select TPID0 in IPSU, 1--select TPID1 in IPSU
+ * 2--select TPID2 in IPSU, 3--select TPID3 in IPSU, 4--select TPID4 in IPSU
+ */
+static inline void hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task,
+ u16 vlan_tag, u8 vlan_type)
+{
+ task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) |
+ SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) |
+ SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID);
+}
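+
+/* Illustrative sketch (not part of the submitted patch): when the stack hands
+ * over an skb carrying a hardware VLAN tag, a TX path would typically fill
+ * the task section with:
+ *
+ *	if (skb_vlan_tag_present(skb))
+ *		hinic3_set_vlan_tx_offload(task, skb_vlan_tag_get(skb), 0);
+ *
+ * where vlan_type 0 selects TPID0 as described above.
+ */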
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c
new file mode 100644
index 0000000..6d9b0c1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c
@@ -0,0 +1,909 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/ethtool.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_nic_dev.h"
+
+#define MAX_NUM_OF_ETHTOOL_NTUPLE_RULES BIT(9)
+struct hinic3_ethtool_rx_flow_rule {
+ struct list_head list;
+ struct ethtool_rx_flow_spec flow_spec;
+};
+
+static void tcam_translate_key_y(u8 *key_y, const u8 *src_input, const u8 *mask, u8 len)
+{
+ u8 idx;
+
+ for (idx = 0; idx < len; idx++)
+ key_y[idx] = src_input[idx] & mask[idx];
+}
+
+static void tcam_translate_key_x(u8 *key_x, const u8 *key_y, const u8 *mask, u8 len)
+{
+ u8 idx;
+
+ for (idx = 0; idx < len; idx++)
+ key_x[idx] = key_y[idx] ^ mask[idx];
+}
+
+static void tcam_key_calculate(struct tag_tcam_key *tcam_key,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule)
+{
+ tcam_translate_key_y(fdir_tcam_rule->key.y,
+ (u8 *)(&tcam_key->key_info),
+ (u8 *)(&tcam_key->key_mask), TCAM_FLOW_KEY_SIZE);
+ tcam_translate_key_x(fdir_tcam_rule->key.x, fdir_tcam_rule->key.y,
+ (u8 *)(&tcam_key->key_mask), TCAM_FLOW_KEY_SIZE);
+}
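+
+/* Worked example (illustrative only): for a key byte with value 0xAB and
+ * mask 0xFF the x/y encoding produced above is y = 0xAB & 0xFF = 0xAB and
+ * x = 0xAB ^ 0xFF = 0x54; for a byte whose mask is 0x00 both x and y end up
+ * 0, i.e. the TCAM treats it as "don't care".
+ */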
+
+#define TCAM_IPV4_TYPE 0
+#define TCAM_IPV6_TYPE 1
+
+static int hinic3_base_ipv4_parse(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip4_spec *mask = &fs->m_u.tcp_ip4_spec;
+ struct ethtool_tcpip4_spec *val = &fs->h_u.tcp_ip4_spec;
+ u32 temp;
+
+ switch (mask->ip4src) {
+ case U32_MAX:
+ temp = ntohl(val->ip4src);
+ tcam_key->key_info.sipv4_h = high_16_bits(temp);
+ tcam_key->key_info.sipv4_l = low_16_bits(temp);
+
+ tcam_key->key_mask.sipv4_h = U16_MAX;
+ tcam_key->key_mask.sipv4_l = U16_MAX;
+ break;
+ case 0:
+ break;
+
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev, "invalid src_ip mask\n");
+ return -EINVAL;
+ }
+
+ switch (mask->ip4dst) {
+ case U32_MAX:
+ temp = ntohl(val->ip4dst);
+ tcam_key->key_info.dipv4_h = high_16_bits(temp);
+ tcam_key->key_info.dipv4_l = low_16_bits(temp);
+
+ tcam_key->key_mask.dipv4_h = U16_MAX;
+ tcam_key->key_mask.dipv4_l = U16_MAX;
+ break;
+ case 0:
+ break;
+
+ default:
+		nicif_err(nic_dev, drv, nic_dev->netdev, "invalid dst_ip mask\n");
+ return -EINVAL;
+ }
+
+ tcam_key->key_info.ip_type = TCAM_IPV4_TYPE;
+ tcam_key->key_mask.ip_type = TCAM_IP_TYPE_MASK;
+
+ tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev);
+ tcam_key->key_mask.function_id = TCAM_FUNC_ID_MASK;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv4_l4_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip4_spec *l4_mask = &fs->m_u.tcp_ip4_spec;
+ struct ethtool_tcpip4_spec *l4_val = &fs->h_u.tcp_ip4_spec;
+ int err;
+
+ err = hinic3_base_ipv4_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info.dport = ntohs(l4_val->pdst);
+ tcam_key->key_mask.dport = l4_mask->pdst;
+
+ tcam_key->key_info.sport = ntohs(l4_val->psrc);
+ tcam_key->key_mask.sport = l4_mask->psrc;
+
+ if (fs->flow_type == TCP_V4_FLOW)
+ tcam_key->key_info.ip_proto = IPPROTO_TCP;
+ else
+ tcam_key->key_info.ip_proto = IPPROTO_UDP;
+ tcam_key->key_mask.ip_proto = U8_MAX;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv4_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_usrip4_spec *l3_mask = &fs->m_u.usr_ip4_spec;
+ struct ethtool_usrip4_spec *l3_val = &fs->h_u.usr_ip4_spec;
+ int err;
+
+ err = hinic3_base_ipv4_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info.ip_proto = l3_val->proto;
+ tcam_key->key_mask.ip_proto = l3_mask->proto;
+
+ return 0;
+}
+
+#ifndef UNSUPPORT_NTUPLE_IPV6
+enum ipv6_parse_res {
+ IPV6_MASK_INVALID,
+ IPV6_MASK_ALL_MASK,
+ IPV6_MASK_ALL_ZERO,
+};
+
+enum ipv6_index {
+ IPV6_IDX0,
+ IPV6_IDX1,
+ IPV6_IDX2,
+ IPV6_IDX3,
+};
+
+static int ipv6_mask_parse(const u32 *ipv6_mask)
+{
+ if (ipv6_mask[IPV6_IDX0] == 0 && ipv6_mask[IPV6_IDX1] == 0 &&
+ ipv6_mask[IPV6_IDX2] == 0 && ipv6_mask[IPV6_IDX3] == 0)
+ return IPV6_MASK_ALL_ZERO;
+
+ if (ipv6_mask[IPV6_IDX0] == U32_MAX &&
+ ipv6_mask[IPV6_IDX1] == U32_MAX &&
+ ipv6_mask[IPV6_IDX2] == U32_MAX && ipv6_mask[IPV6_IDX3] == U32_MAX)
+ return IPV6_MASK_ALL_MASK;
+
+ return IPV6_MASK_INVALID;
+}
+
+static int hinic3_base_ipv6_parse(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip6_spec *mask = &fs->m_u.tcp_ip6_spec;
+ struct ethtool_tcpip6_spec *val = &fs->h_u.tcp_ip6_spec;
+ int parse_res;
+ u32 temp;
+
+ parse_res = ipv6_mask_parse((u32 *)mask->ip6src);
+ if (parse_res == IPV6_MASK_ALL_MASK) {
+ temp = ntohl(val->ip6src[IPV6_IDX0]);
+ tcam_key->key_info_ipv6.sipv6_key0 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key1 = low_16_bits(temp);
+ temp = ntohl(val->ip6src[IPV6_IDX1]);
+ tcam_key->key_info_ipv6.sipv6_key2 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key3 = low_16_bits(temp);
+ temp = ntohl(val->ip6src[IPV6_IDX2]);
+ tcam_key->key_info_ipv6.sipv6_key4 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key5 = low_16_bits(temp);
+ temp = ntohl(val->ip6src[IPV6_IDX3]);
+ tcam_key->key_info_ipv6.sipv6_key6 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key7 = low_16_bits(temp);
+
+ tcam_key->key_mask_ipv6.sipv6_key0 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key1 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key2 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key3 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key4 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key5 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key6 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key7 = U16_MAX;
+ } else if (parse_res == IPV6_MASK_INVALID) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "invalid src_ipv6 mask\n");
+ return -EINVAL;
+ }
+
+ parse_res = ipv6_mask_parse((u32 *)mask->ip6dst);
+ if (parse_res == IPV6_MASK_ALL_MASK) {
+ temp = ntohl(val->ip6dst[IPV6_IDX0]);
+ tcam_key->key_info_ipv6.dipv6_key0 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key1 = low_16_bits(temp);
+ temp = ntohl(val->ip6dst[IPV6_IDX1]);
+ tcam_key->key_info_ipv6.dipv6_key2 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key3 = low_16_bits(temp);
+ temp = ntohl(val->ip6dst[IPV6_IDX2]);
+ tcam_key->key_info_ipv6.dipv6_key4 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key5 = low_16_bits(temp);
+ temp = ntohl(val->ip6dst[IPV6_IDX3]);
+ tcam_key->key_info_ipv6.dipv6_key6 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key7 = low_16_bits(temp);
+
+ tcam_key->key_mask_ipv6.dipv6_key0 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key1 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key2 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key3 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key4 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key5 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key6 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key7 = U16_MAX;
+ } else if (parse_res == IPV6_MASK_INVALID) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "invalid dst_ipv6 mask\n");
+ return -EINVAL;
+ }
+
+ tcam_key->key_info_ipv6.ip_type = TCAM_IPV6_TYPE;
+ tcam_key->key_mask_ipv6.ip_type = TCAM_IP_TYPE_MASK;
+
+ tcam_key->key_info_ipv6.function_id =
+ hinic3_global_func_id(nic_dev->hwdev);
+ tcam_key->key_mask_ipv6.function_id = TCAM_FUNC_ID_MASK;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv6_l4_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip6_spec *l4_mask = &fs->m_u.tcp_ip6_spec;
+ struct ethtool_tcpip6_spec *l4_val = &fs->h_u.tcp_ip6_spec;
+ int err;
+
+ err = hinic3_base_ipv6_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info_ipv6.dport = ntohs(l4_val->pdst);
+ tcam_key->key_mask_ipv6.dport = l4_mask->pdst;
+
+ tcam_key->key_info_ipv6.sport = ntohs(l4_val->psrc);
+ tcam_key->key_mask_ipv6.sport = l4_mask->psrc;
+
+ if (fs->flow_type == TCP_V6_FLOW)
+ tcam_key->key_info_ipv6.ip_proto = NEXTHDR_TCP;
+ else
+ tcam_key->key_info_ipv6.ip_proto = NEXTHDR_UDP;
+ tcam_key->key_mask_ipv6.ip_proto = U8_MAX;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv6_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_usrip6_spec *l3_mask = &fs->m_u.usr_ip6_spec;
+ struct ethtool_usrip6_spec *l3_val = &fs->h_u.usr_ip6_spec;
+ int err;
+
+ err = hinic3_base_ipv6_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info_ipv6.ip_proto = l3_val->l4_proto;
+ tcam_key->key_mask_ipv6.ip_proto = l3_mask->l4_proto;
+
+ return 0;
+}
+#endif
+
+static int hinic3_fdir_tcam_info_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule)
+{
+ int err;
+
+ switch (fs->flow_type) {
+ case TCP_V4_FLOW:
+ case UDP_V4_FLOW:
+ err = hinic3_fdir_tcam_ipv4_l4_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+ case IP_USER_FLOW:
+ err = hinic3_fdir_tcam_ipv4_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+#ifndef UNSUPPORT_NTUPLE_IPV6
+ case TCP_V6_FLOW:
+ case UDP_V6_FLOW:
+ err = hinic3_fdir_tcam_ipv6_l4_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+ case IPV6_USER_FLOW:
+ err = hinic3_fdir_tcam_ipv6_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+#endif
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ tcam_key->key_info.tunnel_type = 0;
+ tcam_key->key_mask.tunnel_type = TCAM_TUNNEL_TYPE_MASK;
+
+ fdir_tcam_rule->data.qid = (u32)fs->ring_cookie;
+ tcam_key_calculate(tcam_key, fdir_tcam_rule);
+
+ return 0;
+}
+
+void hinic3_flush_rx_flow_rule(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule_tmp = NULL;
+ struct hinic3_tcam_filter *tcam_iter = NULL;
+ struct hinic3_tcam_filter *tcam_iter_tmp = NULL;
+ struct hinic3_tcam_dynamic_block *block = NULL;
+ struct hinic3_tcam_dynamic_block *block_tmp = NULL;
+ struct list_head *dynamic_list =
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list;
+
+ if (!list_empty(&tcam_info->tcam_list)) {
+ list_for_each_entry_safe(tcam_iter, tcam_iter_tmp,
+ &tcam_info->tcam_list,
+ tcam_filter_list) {
+ list_del(&tcam_iter->tcam_filter_list);
+ kfree(tcam_iter);
+ }
+ }
+ if (!list_empty(dynamic_list)) {
+ list_for_each_entry_safe(block, block_tmp, dynamic_list,
+ block_list) {
+ list_del(&block->block_list);
+ kfree(block);
+ }
+ }
+
+ if (!list_empty(&nic_dev->rx_flow_rule.rules)) {
+ list_for_each_entry_safe(eth_rule, eth_rule_tmp,
+ &nic_dev->rx_flow_rule.rules, list) {
+ list_del(ð_rule->list);
+ kfree(eth_rule);
+ }
+ }
+
+ if (HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ hinic3_flush_tcam_rule(nic_dev->hwdev);
+ hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, false);
+ }
+}
+
+static struct hinic3_tcam_dynamic_block *
+hinic3_alloc_dynamic_block_resource(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tcam_info *tcam_info,
+ u16 dynamic_block_id)
+{
+ struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+
+ dynamic_block_ptr = kzalloc(sizeof(*dynamic_block_ptr), GFP_KERNEL);
+ if (!dynamic_block_ptr) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "fdir filter dynamic alloc block index %u memory failed\n",
+ dynamic_block_id);
+ return NULL;
+ }
+
+ dynamic_block_ptr->dynamic_block_id = dynamic_block_id;
+ list_add_tail(&dynamic_block_ptr->block_list,
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list);
+
+ tcam_info->tcam_dynamic_info.dynamic_block_cnt++;
+
+ return dynamic_block_ptr;
+}
+
+static void hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info,
+ struct hinic3_tcam_dynamic_block *block_ptr)
+{
+ if (!block_ptr)
+ return;
+
+ list_del(&block_ptr->block_list);
+ kfree(block_ptr);
+
+ tcam_info->tcam_dynamic_info.dynamic_block_cnt--;
+}
+
+static struct hinic3_tcam_dynamic_block *
+hinic3_dynamic_lookup_tcam_filter(struct hinic3_nic_dev *nic_dev,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule,
+ const struct hinic3_tcam_info *tcam_info,
+ struct hinic3_tcam_filter *tcam_filter,
+ u16 *tcam_index)
+{
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u16 index;
+
+ list_for_each_entry(tmp,
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+ block_list)
+ if (!tmp ||
+ tmp->dynamic_index_cnt < HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)
+ break;
+
+ if (!tmp || tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter dynamic lookup for index failed\n");
+ return NULL;
+ }
+
+ for (index = 0; index < HINIC3_TCAM_DYNAMIC_BLOCK_SIZE; index++)
+ if (tmp->dynamic_index_used[index] == 0)
+ break;
+
+ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "tcam block 0x%x filter rules are full\n",
+ tmp->dynamic_block_id);
+ return NULL;
+ }
+
+ tcam_filter->dynamic_block_id = tmp->dynamic_block_id;
+ tcam_filter->index = index;
+ *tcam_index = index;
+
+ fdir_tcam_rule->index = index +
+ HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id);
+
+ return tmp;
+}
+
+static int hinic3_add_tcam_filter(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tcam_filter *tcam_filter,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u16 block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt;
+ u16 tcam_block_index = 0;
+ int block_alloc_flag = 0;
+ u16 index = 0;
+ int err;
+
+ if (tcam_info->tcam_rule_nums >=
+ block_cnt * HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+ if (block_cnt >= (HINIC3_MAX_TCAM_FILTERS /
+ HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Dynamic tcam block is full, alloc failed\n");
+ goto failed;
+ }
+
+ err = hinic3_alloc_tcam_block(nic_dev->hwdev,
+ &tcam_block_index);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter dynamic tcam alloc block failed\n");
+ goto failed;
+ }
+
+ block_alloc_flag = 1;
+
+ dynamic_block_ptr =
+ hinic3_alloc_dynamic_block_resource(nic_dev, tcam_info,
+ tcam_block_index);
+ if (!dynamic_block_ptr) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter dynamic alloc block memory failed\n");
+ goto block_alloc_failed;
+ }
+ }
+
+ tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev,
+ fdir_tcam_rule, tcam_info,
+ tcam_filter, &index);
+ if (!tmp) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Dynamic lookup tcam filter failed\n");
+ goto lookup_tcam_index_failed;
+ }
+
+ err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir_tcam_rule add failed\n");
+ goto add_tcam_rules_failed;
+ }
+
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Add fdir tcam rule, function_id: 0x%x, tcam_block_id: %u, local_index: %u, global_index: %u, queue: %u, tcam_rule_nums: %u succeed\n",
+ hinic3_global_func_id(nic_dev->hwdev),
+ tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index,
+ fdir_tcam_rule->data.qid, tcam_info->tcam_rule_nums + 1);
+
+ if (tcam_info->tcam_rule_nums == 0) {
+ err = hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, true);
+ if (err)
+ goto enable_failed;
+ }
+
+ list_add_tail(&tcam_filter->tcam_filter_list, &tcam_info->tcam_list);
+
+ tmp->dynamic_index_used[index] = 1;
+ tmp->dynamic_index_cnt++;
+
+ tcam_info->tcam_rule_nums++;
+
+ return 0;
+
+enable_failed:
+ hinic3_del_tcam_rule(nic_dev->hwdev, fdir_tcam_rule->index);
+
+add_tcam_rules_failed:
+lookup_tcam_index_failed:
+ if (block_alloc_flag == 1)
+ hinic3_free_dynamic_block_resource(tcam_info,
+ dynamic_block_ptr);
+
+block_alloc_failed:
+ if (block_alloc_flag == 1)
+ hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index);
+
+failed:
+ return -EFAULT;
+}
+
+static int hinic3_del_tcam_filter(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tcam_filter *tcam_filter)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ u16 dynamic_block_id = tcam_filter->dynamic_block_id;
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u32 index = 0;
+ int err;
+
+ list_for_each_entry(tmp,
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+ block_list) {
+ if (tmp->dynamic_block_id == dynamic_block_id)
+ break;
+ }
+ if (!tmp || tmp->dynamic_block_id != dynamic_block_id) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter del dynamic lookup for block failed\n");
+ return -EFAULT;
+ }
+
+ index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) +
+ tcam_filter->index;
+
+ err = hinic3_del_tcam_rule(nic_dev->hwdev, index);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "fdir_tcam_rule del failed\n");
+ return -EFAULT;
+ }
+
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Del fdir_tcam_dynamic_rule function_id: 0x%x, tcam_block_id: %u, local_index: %u, global_index: %u, local_rules_nums: %u, global_rule_nums: %u succeed\n",
+ hinic3_global_func_id(nic_dev->hwdev), dynamic_block_id,
+ tcam_filter->index, index, tmp->dynamic_index_cnt - 1,
+ tcam_info->tcam_rule_nums - 1);
+
+ tmp->dynamic_index_used[tcam_filter->index] = 0;
+ tmp->dynamic_index_cnt--;
+ tcam_info->tcam_rule_nums--;
+ if (tmp->dynamic_index_cnt == 0) {
+ hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id);
+ hinic3_free_dynamic_block_resource(tcam_info, tmp);
+ }
+
+ if (tcam_info->tcam_rule_nums == 0)
+ hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, false);
+
+ list_del(&tcam_filter->tcam_filter_list);
+ kfree(tcam_filter);
+
+ return 0;
+}
+
+static inline struct hinic3_tcam_filter *
+hinic3_tcam_filter_lookup(const struct list_head *filter_list,
+ struct tag_tcam_key *key)
+{
+ struct hinic3_tcam_filter *iter = NULL;
+
+ list_for_each_entry(iter, filter_list, tcam_filter_list) {
+ if (memcmp(key, &iter->tcam_key,
+ sizeof(struct tag_tcam_key)) == 0) {
+ return iter;
+ }
+ }
+
+ return NULL;
+}
+
+static void del_ethtool_rule(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_ethtool_rx_flow_rule *eth_rule)
+{
+ list_del(ð_rule->list);
+ nic_dev->rx_flow_rule.tot_num_rules--;
+
+ kfree(eth_rule);
+}
+
+static int hinic3_remove_one_rule(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_ethtool_rx_flow_rule *eth_rule)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ struct hinic3_tcam_filter *tcam_filter = NULL;
+ struct nic_tcam_cfg_rule fdir_tcam_rule;
+ struct tag_tcam_key tcam_key;
+ int err;
+
+ memset(&fdir_tcam_rule, 0, sizeof(fdir_tcam_rule));
+ memset(&tcam_key, 0, sizeof(tcam_key));
+
+ err = hinic3_fdir_tcam_info_init(nic_dev, ð_rule->flow_spec,
+ &tcam_key, &fdir_tcam_rule);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Init fdir info failed\n");
+ return err;
+ }
+
+ tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list,
+ &tcam_key);
+ if (!tcam_filter) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Filter does not exist\n");
+		return -ENOENT;
+ }
+
+ err = hinic3_del_tcam_filter(nic_dev, tcam_filter);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Delete tcam filter failed\n");
+ return err;
+ }
+
+ del_ethtool_rule(nic_dev, eth_rule);
+
+ return 0;
+}
+
+static void add_rule_to_list(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_ethtool_rx_flow_rule *rule)
+{
+ struct hinic3_ethtool_rx_flow_rule *iter = NULL;
+ struct list_head *head = &nic_dev->rx_flow_rule.rules;
+
+ list_for_each_entry(iter, &nic_dev->rx_flow_rule.rules, list) {
+ if (iter->flow_spec.location > rule->flow_spec.location)
+ break;
+ head = &iter->list;
+ }
+ nic_dev->rx_flow_rule.tot_num_rules++;
+ list_add(&rule->list, head);
+}
+
+static int hinic3_add_one_rule(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs)
+{
+ struct nic_tcam_cfg_rule fdir_tcam_rule;
+ struct tag_tcam_key tcam_key;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ struct hinic3_tcam_filter *tcam_filter = NULL;
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ int err;
+
+ memset(&fdir_tcam_rule, 0, sizeof(fdir_tcam_rule));
+ memset(&tcam_key, 0, sizeof(tcam_key));
+ err = hinic3_fdir_tcam_info_init(nic_dev, fs, &tcam_key,
+ &fdir_tcam_rule);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Init fdir info failed\n");
+ return err;
+ }
+
+ tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list,
+ &tcam_key);
+ if (tcam_filter) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Filter exists\n");
+ return -EEXIST;
+ }
+
+ tcam_filter = kzalloc(sizeof(*tcam_filter), GFP_KERNEL);
+ if (!tcam_filter)
+ return -ENOMEM;
+ memcpy(&tcam_filter->tcam_key,
+ &tcam_key, sizeof(struct tag_tcam_key));
+ tcam_filter->queue = (u16)fdir_tcam_rule.data.qid;
+
+ err = hinic3_add_tcam_filter(nic_dev, tcam_filter, &fdir_tcam_rule);
+ if (err)
+ goto add_tcam_filter_fail;
+
+	/* save the new rule filter in the driver */
+ eth_rule = kzalloc(sizeof(*eth_rule), GFP_KERNEL);
+ if (!eth_rule) {
+ err = -ENOMEM;
+ goto alloc_eth_rule_fail;
+ }
+
+ eth_rule->flow_spec = *fs;
+ add_rule_to_list(nic_dev, eth_rule);
+
+ return 0;
+
+alloc_eth_rule_fail:
+ hinic3_del_tcam_filter(nic_dev, tcam_filter);
+add_tcam_filter_fail:
+ kfree(tcam_filter);
+ return err;
+}
+
+static struct hinic3_ethtool_rx_flow_rule *
+find_ethtool_rule(const struct hinic3_nic_dev *nic_dev, u32 location)
+{
+ struct hinic3_ethtool_rx_flow_rule *iter = NULL;
+
+ list_for_each_entry(iter, &nic_dev->rx_flow_rule.rules, list) {
+ if (iter->flow_spec.location == location)
+ return iter;
+ }
+ return NULL;
+}
+
+static int validate_flow(struct hinic3_nic_dev *nic_dev,
+ const struct ethtool_rx_flow_spec *fs)
+{
+ if (fs->location >= MAX_NUM_OF_ETHTOOL_NTUPLE_RULES) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "loc exceeds limit [0,%lu]\n",
+ MAX_NUM_OF_ETHTOOL_NTUPLE_RULES - 1);
+ return -EINVAL;
+ }
+
+ if (fs->ring_cookie >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "action is larger than queue number %u\n",
+ nic_dev->q_params.num_qps);
+ return -EINVAL;
+ }
+
+ switch (fs->flow_type) {
+ case TCP_V4_FLOW:
+ case UDP_V4_FLOW:
+ case IP_USER_FLOW:
+#ifndef UNSUPPORT_NTUPLE_IPV6
+ case TCP_V6_FLOW:
+ case UDP_V6_FLOW:
+ case IPV6_USER_FLOW:
+#endif
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev, "flow type is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+int hinic3_ethtool_flow_replace(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs)
+{
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ struct ethtool_rx_flow_spec flow_spec_temp;
+ int loc_exit_flag = 0;
+ int err;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ err = validate_flow(nic_dev, fs);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "flow is not valid %d\n", err);
+ return err;
+ }
+
+ eth_rule = find_ethtool_rule(nic_dev, fs->location);
+ /* when location is same, delete old location rule. */
+ if (eth_rule) {
+ memcpy(&flow_spec_temp, ð_rule->flow_spec,
+ sizeof(struct ethtool_rx_flow_spec));
+ err = hinic3_remove_one_rule(nic_dev, eth_rule);
+ if (err)
+ return err;
+
+ loc_exit_flag = 1;
+ }
+
+ /* add new rule filter */
+ err = hinic3_add_one_rule(nic_dev, fs);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Add new rule filter failed\n");
+ if (loc_exit_flag)
+ hinic3_add_one_rule(nic_dev, &flow_spec_temp);
+
+ return -ENOENT;
+ }
+
+ return 0;
+}
+
+int hinic3_ethtool_flow_remove(struct hinic3_nic_dev *nic_dev, u32 location)
+{
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ int err;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (location >= MAX_NUM_OF_ETHTOOL_NTUPLE_RULES)
+ return -ENOSPC;
+
+ eth_rule = find_ethtool_rule(nic_dev, location);
+ if (!eth_rule)
+ return -ENOENT;
+
+ err = hinic3_remove_one_rule(nic_dev, eth_rule);
+
+ return err;
+}
+
+int hinic3_ethtool_get_flow(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 location)
+{
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (location >= MAX_NUM_OF_ETHTOOL_NTUPLE_RULES)
+ return -EINVAL;
+
+ list_for_each_entry(eth_rule, &nic_dev->rx_flow_rule.rules, list) {
+ if (eth_rule->flow_spec.location == location) {
+ info->fs = eth_rule->flow_spec;
+ return 0;
+ }
+ }
+
+ return -ENOENT;
+}
+
+int hinic3_ethtool_get_all_flows(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 *rule_locs)
+{
+ u32 idx = 0;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ info->data = MAX_NUM_OF_ETHTOOL_NTUPLE_RULES;
+ list_for_each_entry(eth_rule, &nic_dev->rx_flow_rule.rules, list)
+ rule_locs[idx++] = eth_rule->flow_spec.location;
+
+ return info->rule_cnt == idx ? 0 : -ENOENT;
+}
+
+bool hinic3_validate_channel_setting_in_ntuple(const struct hinic3_nic_dev *nic_dev, u32 q_num)
+{
+ struct hinic3_ethtool_rx_flow_rule *iter = NULL;
+
+ list_for_each_entry(iter, &nic_dev->rx_flow_rule.rules, list) {
+ if (iter->flow_spec.ring_cookie >= q_num) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "User defined filter %u assigns flow to queue %llu. Queue number %u is invalid\n",
+ iter->flow_spec.location, iter->flow_spec.ring_cookie, q_num);
+ return false;
+ }
+ }
+
+ return true;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
new file mode 100644
index 0000000..94acf61
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
@@ -0,0 +1,1003 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/ethtool.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/dcbnl.h>
+#include <linux/init.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_hw.h"
+#include "hinic3_rss.h"
+
+#include "vram_common.h"
+
+static u16 num_qps;
+module_param(num_qps, ushort, 0444);
+MODULE_PARM_DESC(num_qps, "Number of Queue Pairs (default=0)");
+
+#define MOD_PARA_VALIDATE_NUM_QPS(nic_dev, num_qps, out_qps) do { \
+ if ((num_qps) > (nic_dev)->max_qps) \
+ nic_warn(&(nic_dev)->pdev->dev, \
+ "Module Parameter %s value %u is out of range, " \
+ "Maximum value for the device: %u, using %u\n", \
+ #num_qps, num_qps, (nic_dev)->max_qps, \
+ (nic_dev)->max_qps); \
+ if ((num_qps) > (nic_dev)->max_qps) \
+ (out_qps) = (nic_dev)->max_qps; \
+ else if ((num_qps) > 0) \
+ (out_qps) = (num_qps); \
+} while (0)
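+
+/* Worked example (illustrative only): with nic_dev->max_qps = 16, passing
+ * num_qps=64 on the module command line triggers the warning and clamps
+ * out_qps to 16; num_qps=8 sets out_qps to 8; num_qps=0 leaves out_qps at its
+ * previous value.
+ */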
+
+/* In rx, iq means cos */
+static u8 hinic3_get_iqmap_by_tc(const u8 *prio_tc, u8 num_iq, u8 tc)
+{
+ u8 i, map = 0;
+
+ for (i = 0; i < num_iq; i++) {
+ if (prio_tc[i] == tc)
+ map |= (u8)(1U << ((num_iq - 1) - i));
+ }
+
+ return map;
+}
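+
+/* Worked example (illustrative only): with num_iq = 8 and
+ * prio_tc = {0, 0, 1, 1, 2, 2, 3, 3}, tc = 1 matches priorities 2 and 3,
+ * which set bits (8 - 1) - 2 = 5 and (8 - 1) - 3 = 4, so the returned
+ * bitmap is 0x30.
+ */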
+
+static u8 hinic3_get_tcid_by_rq(const u32 *indir_tbl, u8 num_tcs, u16 rq_id)
+{
+ u16 tc_group_size;
+ int i;
+ u8 temp_num_tcs = num_tcs;
+
+ if (!num_tcs)
+ temp_num_tcs = 1;
+
+ tc_group_size = NIC_RSS_INDIR_SIZE / temp_num_tcs;
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++) {
+ if (indir_tbl[i] == rq_id)
+ return (u8)(i / tc_group_size);
+ }
+
+ return 0xFF; /* Invalid TC */
+}
+
+static int hinic3_get_rq2iq_map(struct hinic3_nic_dev *nic_dev,
+ u16 num_rq, u8 num_tcs, u8 *prio_tc, u8 cos_num,
+ u32 *indir_tbl, u8 *map, u32 map_size)
+{
+ u16 qid;
+ u8 tc_id;
+ u8 temp_num_tcs = num_tcs;
+
+ if (!num_tcs)
+ temp_num_tcs = 1;
+
+ if (num_rq > map_size) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Rq number(%u) exceeds max map qid(%u)\n",
+ num_rq, map_size);
+ return -EINVAL;
+ }
+
+ if (cos_num < HINIC_NUM_IQ_PER_FUNC) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Cos number(%u) less than map qid(%d)\n",
+ cos_num, HINIC_NUM_IQ_PER_FUNC);
+ return -EINVAL;
+ }
+
+ for (qid = 0; qid < num_rq; qid++) {
+ tc_id = hinic3_get_tcid_by_rq(indir_tbl, temp_num_tcs, qid);
+ map[qid] = hinic3_get_iqmap_by_tc(prio_tc,
+ HINIC_NUM_IQ_PER_FUNC, tc_id);
+ }
+
+ return 0;
+}
+
+static void hinic3_fillout_indir_tbl(struct hinic3_nic_dev *nic_dev,
+ u8 group_num, u32 *indir)
+{
+ struct hinic3_dcb *dcb = nic_dev->dcb;
+ u16 k, group_size, start_qid = 0, cur_cos_qnum = 0;
+ u32 i = 0;
+ u8 j, cur_cos = 0, group = 0;
+ u8 valid_cos_map = hinic3_get_dev_valid_cos_map(nic_dev);
+
+ if (group_num == 0) {
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++)
+ indir[i] = i % nic_dev->q_params.num_qps;
+ } else {
+ group_size = NIC_RSS_INDIR_SIZE / group_num;
+
+ for (group = 0; group < group_num; group++) {
+ cur_cos = dcb->hw_dcb_cfg.default_cos;
+ for (j = 0; j < NIC_DCB_COS_MAX; j++) {
+ if ((BIT(j) & valid_cos_map) != 0) {
+ cur_cos = j;
+ valid_cos_map -= (u8)BIT(j);
+ break;
+ }
+ }
+
+ cur_cos_qnum = dcb->hw_dcb_cfg.cos_qp_num[cur_cos];
+ if (cur_cos_qnum > 0) {
+ start_qid =
+ dcb->hw_dcb_cfg.cos_qp_offset[cur_cos];
+ } else {
+ start_qid = cur_cos % nic_dev->q_params.num_qps;
+ /* Ensure that the offset of start_id is 0. */
+ cur_cos_qnum = 1;
+ }
+
+ for (k = 0; k < group_size; k++)
+ indir[i++] = start_qid + k % cur_cos_qnum;
+ }
+ }
+}
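+
+/* Illustrative example (not part of the submitted patch): with group_num = 0
+ * and num_qps = 4 the indirection table simply cycles 0,1,2,3,0,1,2,3,...;
+ * with group_num = 2 the NIC_RSS_INDIR_SIZE entries are split into two equal
+ * groups, each filled round-robin with the queue range of the cos that backs
+ * that group.
+ */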
+
+int hinic3_rss_init(struct hinic3_nic_dev *nic_dev, u8 *rq2iq_map, u32 map_size, u8 dcb_en)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 i, group_num, cos_bitmap, group = 0;
+ u8 cos_group[NIC_DCB_UP_MAX] = {0};
+ int err;
+
+ if (dcb_en != 0) {
+ group_num = (u8)roundup_pow_of_two(
+ hinic3_get_dev_user_cos_num(nic_dev));
+
+ cos_bitmap = hinic3_get_dev_valid_cos_map(nic_dev);
+
+ for (i = 0; i < NIC_DCB_UP_MAX; i++) {
+ if ((BIT(i) & cos_bitmap) != 0)
+ cos_group[NIC_DCB_UP_MAX - i - 1] = group++;
+ else
+ cos_group[NIC_DCB_UP_MAX - i - 1] =
+ group_num - 1;
+ }
+ } else {
+ group_num = 0;
+ }
+
+ err = hinic3_set_hw_rss_parameters(netdev, 1, group_num,
+ cos_group, dcb_en);
+ if (err)
+ return err;
+
+ err = hinic3_get_rq2iq_map(nic_dev, nic_dev->q_params.num_qps,
+ group_num, cos_group, NIC_DCB_UP_MAX,
+ nic_dev->rss_indir, rq2iq_map, map_size);
+ if (err)
+ nicif_err(nic_dev, drv, netdev, "Failed to get rq map\n");
+ return err;
+}
+
+void hinic3_rss_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ u8 cos_map[NIC_DCB_UP_MAX] = {0};
+
+ hinic3_rss_cfg(nic_dev->hwdev, 0, 0, cos_map, 1);
+}
+
+void hinic3_init_rss_parameters(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ nic_dev->rss_hash_engine = HINIC3_RSS_HASH_ENGINE_TYPE_XOR;
+ nic_dev->rss_type.tcp_ipv6_ext = 1;
+ nic_dev->rss_type.ipv6_ext = 1;
+ nic_dev->rss_type.tcp_ipv6 = 1;
+ nic_dev->rss_type.ipv6 = 1;
+ nic_dev->rss_type.tcp_ipv4 = 1;
+ nic_dev->rss_type.ipv4 = 1;
+ nic_dev->rss_type.udp_ipv6 = 1;
+ nic_dev->rss_type.udp_ipv4 = 1;
+}
+
+void hinic3_clear_rss_config(struct hinic3_nic_dev *nic_dev)
+{
+ kfree(nic_dev->rss_hkey);
+ nic_dev->rss_hkey = NULL;
+
+ kfree(nic_dev->rss_indir);
+ nic_dev->rss_indir = NULL;
+}
+
+void hinic3_set_default_rss_indir(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ set_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags);
+}
+
+static void hinic3_maybe_reconfig_rss_indir(struct net_device *netdev, u8 dcb_en)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int i;
+
+	/* if dcb is enabled, the user cannot configure the rss indir table */
+ if (dcb_en) {
+ nicif_info(nic_dev, drv, netdev, "DCB is enabled, set default rss indir\n");
+ goto discard_user_rss_indir;
+ }
+
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++) {
+ if (nic_dev->rss_indir[i] >= nic_dev->q_params.num_qps)
+ goto discard_user_rss_indir;
+ }
+
+ return;
+
+discard_user_rss_indir:
+ hinic3_set_default_rss_indir(netdev);
+}
+
+#ifdef HAVE_HOT_REPLACE_FUNC
+bool partition_slave_doing_hotupgrade(void)
+{
+ return get_partition_role() && partition_doing_hotupgrade();
+}
+#endif
+
+static void decide_num_qps(struct hinic3_nic_dev *nic_dev)
+{
+ u16 tmp_num_qps = nic_dev->max_qps;
+ u16 num_cpus = 0;
+ u16 max_num_cpus;
+ int i, node;
+
+ int is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ nic_dev->q_params.num_qps = nic_dev->nic_vram->vram_num_qps;
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+			  "Os hotreplace uses vram to init num qps 1:%hu 2:%hu\n",
+ nic_dev->q_params.num_qps,
+ nic_dev->nic_vram->vram_num_qps);
+ return;
+ }
+
+ if (nic_dev->nic_cap.default_num_queues != 0 &&
+ nic_dev->nic_cap.default_num_queues < nic_dev->max_qps)
+ tmp_num_qps = nic_dev->nic_cap.default_num_queues;
+
+ MOD_PARA_VALIDATE_NUM_QPS(nic_dev, num_qps, tmp_num_qps);
+
+#ifdef HAVE_HOT_REPLACE_FUNC
+ if (partition_slave_doing_hotupgrade())
+ max_num_cpus = (u16)num_present_cpus();
+ else
+ max_num_cpus = (u16)num_online_cpus();
+#else
+ max_num_cpus = (u16)num_online_cpus();
+#endif
+
+ for (i = 0; i < max_num_cpus; i++) {
+ node = (int)cpu_to_node(i);
+ if (node == dev_to_node(&nic_dev->pdev->dev))
+ num_cpus++;
+ }
+
+ if (!num_cpus)
+ num_cpus = max_num_cpus;
+
+ nic_dev->q_params.num_qps = (u16)min_t(u16, tmp_num_qps, num_cpus);
+ nic_dev->nic_vram->vram_num_qps = nic_dev->q_params.num_qps;
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "init num qps 1:%u 2:%u\n",
+ nic_dev->q_params.num_qps, nic_dev->nic_vram->vram_num_qps);
+}
+
+static void copy_value_to_rss_hkey(struct hinic3_nic_dev *nic_dev,
+ const u8 *hkey)
+{
+ u32 i;
+ u32 *rss_hkey = (u32 *)nic_dev->rss_hkey;
+
+ memcpy(nic_dev->rss_hkey, hkey, NIC_RSS_KEY_SIZE);
+
+ /* make a copy of the key, and convert it to Big Endian */
+ for (i = 0; i < NIC_RSS_KEY_SIZE / sizeof(u32); i++)
+ nic_dev->rss_hkey_be[i] = cpu_to_be32(rss_hkey[i]);
+}
+
+static int alloc_rss_resource(struct hinic3_nic_dev *nic_dev)
+{
+ u8 default_rss_key[NIC_RSS_KEY_SIZE] = {
+ 0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
+ 0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
+ 0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
+ 0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
+ 0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa};
+
+	/* We request double space for the hash key;
+	 * the second copy holds the key in big-endian
+	 * format.
+ */
+ nic_dev->rss_hkey =
+ kzalloc(NIC_RSS_KEY_SIZE *
+ HINIC3_RSS_KEY_RSV_NUM, GFP_KERNEL);
+ if (!nic_dev->rss_hkey) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc memory for rss_hkey\n");
+ return -ENOMEM;
+ }
+
+	/* The second space is for the big-endian hash key */
+ nic_dev->rss_hkey_be = (u32 *)(nic_dev->rss_hkey +
+ NIC_RSS_KEY_SIZE);
+ copy_value_to_rss_hkey(nic_dev, (u8 *)default_rss_key);
+
+ nic_dev->rss_indir = kzalloc(sizeof(u32) * NIC_RSS_INDIR_SIZE, GFP_KERNEL);
+ if (!nic_dev->rss_indir) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc memory for rss_indir\n");
+ kfree(nic_dev->rss_hkey);
+ nic_dev->rss_hkey = NULL;
+ return -ENOMEM;
+ }
+
+ set_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags);
+
+ return 0;
+}
+
+void hinic3_try_to_enable_rss(struct hinic3_nic_dev *nic_dev)
+{
+ u8 cos_map[NIC_DCB_UP_MAX] = {0};
+ int err = 0;
+
+ if (!nic_dev)
+ return;
+
+ nic_dev->max_qps = hinic3_func_max_nic_qnum(nic_dev->hwdev);
+ if (nic_dev->max_qps <= 1 || !HINIC3_SUPPORT_RSS(nic_dev->hwdev))
+ goto set_q_params;
+
+ err = alloc_rss_resource(nic_dev);
+ if (err) {
+ nic_dev->max_qps = 1;
+ goto set_q_params;
+ }
+
+ set_bit(HINIC3_RSS_ENABLE, &nic_dev->flags);
+ nic_dev->max_qps = hinic3_func_max_nic_qnum(nic_dev->hwdev);
+
+ decide_num_qps(nic_dev);
+
+ hinic3_init_rss_parameters(nic_dev->netdev);
+ err = hinic3_set_hw_rss_parameters(nic_dev->netdev, 0, 0, cos_map,
+ test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to set hardware rss parameters\n");
+
+ hinic3_clear_rss_config(nic_dev);
+ nic_dev->max_qps = 1;
+ goto set_q_params;
+ }
+ return;
+
+set_q_params:
+ clear_bit(HINIC3_RSS_ENABLE, &nic_dev->flags);
+ nic_dev->q_params.num_qps = nic_dev->max_qps;
+ nic_dev->nic_vram->vram_num_qps = nic_dev->max_qps;
+}
+
+static int hinic3_config_rss_hw_resource(struct hinic3_nic_dev *nic_dev,
+ u32 *indir_tbl)
+{
+ int err;
+
+ err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl);
+ if (err)
+ return err;
+
+ err = hinic3_set_rss_type(nic_dev->hwdev, nic_dev->rss_type);
+ if (err)
+ return err;
+
+ return hinic3_rss_set_hash_engine(nic_dev->hwdev,
+ nic_dev->rss_hash_engine);
+}
+
+int hinic3_set_hw_rss_parameters(struct net_device *netdev, u8 rss_en,
+ u8 cos_num, u8 *cos_map, u8 dcb_en)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ /* RSS key */
+ err = hinic3_rss_set_hash_key(nic_dev->hwdev, nic_dev->rss_hkey);
+ if (err)
+ return err;
+
+ hinic3_maybe_reconfig_rss_indir(netdev, dcb_en);
+
+ if (test_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags))
+ hinic3_fillout_indir_tbl(nic_dev, cos_num, nic_dev->rss_indir);
+
+ err = hinic3_config_rss_hw_resource(nic_dev, nic_dev->rss_indir);
+ if (err)
+ return err;
+
+ err = hinic3_rss_cfg(nic_dev->hwdev, rss_en, cos_num, cos_map,
+ nic_dev->q_params.num_qps);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+/* for ethtool */
+static int set_l4_rss_hash_ops(const struct ethtool_rxnfc *cmd,
+ struct nic_rss_type *rss_type)
+{
+ u8 rss_l4_en = 0;
+
+ switch (cmd->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ case 0:
+ rss_l4_en = 0;
+ break;
+ case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+ rss_l4_en = 1;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ rss_type->tcp_ipv4 = rss_l4_en;
+ break;
+ case TCP_V6_FLOW:
+ rss_type->tcp_ipv6 = rss_l4_en;
+ break;
+ case UDP_V4_FLOW:
+ rss_type->udp_ipv4 = rss_l4_en;
+ break;
+ case UDP_V6_FLOW:
+ rss_type->udp_ipv6 = rss_l4_en;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int update_rss_hash_opts(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *cmd,
+ struct nic_rss_type *rss_type)
+{
+ int err;
+
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ case UDP_V4_FLOW:
+ case UDP_V6_FLOW:
+ err = set_l4_rss_hash_ops(cmd, rss_type);
+ if (err)
+ return err;
+
+ break;
+ case IPV4_FLOW:
+ rss_type->ipv4 = 1;
+ break;
+ case IPV6_FLOW:
+ rss_type->ipv6 = 1;
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unsupported flow type\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_rss_hash_opts(struct hinic3_nic_dev *nic_dev, struct ethtool_rxnfc *cmd)
+{
+ struct nic_rss_type *rss_type = &nic_dev->rss_type;
+ int err;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ cmd->data = 0;
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "RSS is disabled, setting flow-hash is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* RSS does not support anything other than hashing
+ * to queues on src and dst IPs and ports
+ */
+ if (cmd->data & ~(RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 |
+ RXH_L4_B_2_3))
+ return -EINVAL;
+
+ /* We need at least the IP SRC and DEST fields for hashing */
+ if (!(cmd->data & RXH_IP_SRC) || !(cmd->data & RXH_IP_DST))
+ return -EINVAL;
+
+ err = hinic3_get_rss_type(nic_dev->hwdev, rss_type);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to get rss type\n");
+ return -EFAULT;
+ }
+
+ err = update_rss_hash_opts(nic_dev, cmd, rss_type);
+ if (err)
+ return err;
+
+ err = hinic3_set_rss_type(nic_dev->hwdev, *rss_type);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to set rss type\n");
+ return -EFAULT;
+ }
+
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Set rss hash options success\n");
+
+ return 0;
+}
+
+static void convert_rss_type(u8 rss_opt, struct ethtool_rxnfc *cmd)
+{
+ if (rss_opt)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+}
+
+static int hinic3_convert_rss_type(struct hinic3_nic_dev *nic_dev,
+ struct nic_rss_type *rss_type,
+ struct ethtool_rxnfc *cmd)
+{
+ cmd->data = RXH_IP_SRC | RXH_IP_DST;
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ convert_rss_type(rss_type->tcp_ipv4, cmd);
+ break;
+ case TCP_V6_FLOW:
+ convert_rss_type(rss_type->tcp_ipv6, cmd);
+ break;
+ case UDP_V4_FLOW:
+ convert_rss_type(rss_type->udp_ipv4, cmd);
+ break;
+ case UDP_V6_FLOW:
+ convert_rss_type(rss_type->udp_ipv6, cmd);
+ break;
+ case IPV4_FLOW:
+ case IPV6_FLOW:
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported flow type\n");
+ cmd->data = 0;
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_get_rss_hash_opts(struct hinic3_nic_dev *nic_dev, struct ethtool_rxnfc *cmd)
+{
+ struct nic_rss_type rss_type = {0};
+ int err;
+
+ cmd->data = 0;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags))
+ return 0;
+
+ err = hinic3_get_rss_type(nic_dev->hwdev, &rss_type);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get rss type\n");
+ return err;
+ }
+
+ return hinic3_convert_rss_type(nic_dev, &rss_type, cmd);
+}
+
+int hinic3_get_rxnfc(struct net_device *netdev,
+ struct ethtool_rxnfc *cmd, u32 *rule_locs)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_GRXRINGS:
+ cmd->data = nic_dev->q_params.num_qps;
+ break;
+ case ETHTOOL_GRXCLSRLCNT:
+ cmd->rule_cnt = (u32)nic_dev->rx_flow_rule.tot_num_rules;
+ break;
+ case ETHTOOL_GRXCLSRULE:
+ err = hinic3_ethtool_get_flow(nic_dev, cmd, cmd->fs.location);
+ break;
+ case ETHTOOL_GRXCLSRLALL:
+ err = hinic3_ethtool_get_all_flows(nic_dev, cmd, rule_locs);
+ break;
+ case ETHTOOL_GRXFH:
+ err = hinic3_get_rss_hash_opts(nic_dev, cmd);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+int hinic3_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_SRXFH:
+ err = hinic3_set_rss_hash_opts(nic_dev, cmd);
+ break;
+ case ETHTOOL_SRXCLSRLINS:
+ err = hinic3_ethtool_flow_replace(nic_dev, &cmd->fs);
+ break;
+ case ETHTOOL_SRXCLSRLDEL:
+ err = hinic3_ethtool_flow_remove(nic_dev, cmd->fs.location);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+static u16 hinic3_max_channels(struct hinic3_nic_dev *nic_dev)
+{
+ u8 tcs = (u8)netdev_get_num_tc(nic_dev->netdev);
+
+ return tcs ? nic_dev->max_qps / tcs : nic_dev->max_qps;
+}
+
+static u16 hinic3_curr_channels(struct hinic3_nic_dev *nic_dev)
+{
+ if (netif_running(nic_dev->netdev))
+ return nic_dev->q_params.num_qps ?
+ nic_dev->q_params.num_qps : 1;
+ else
+ return (u16)min_t(u16, hinic3_max_channels(nic_dev),
+ nic_dev->q_params.num_qps);
+}
+
+void hinic3_get_channels(struct net_device *netdev,
+ struct ethtool_channels *channels)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ channels->max_rx = 0;
+ channels->max_tx = 0;
+ channels->max_other = 0;
+ /* report maximum channels */
+ channels->max_combined = hinic3_max_channels(nic_dev);
+ channels->rx_count = 0;
+ channels->tx_count = 0;
+ channels->other_count = 0;
+ /* report flow director queues as maximum channels */
+ channels->combined_count = hinic3_curr_channels(nic_dev);
+}
+
+static int hinic3_validate_channel_parameter(struct net_device *netdev,
+ const struct ethtool_channels *channels)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 max_channel = hinic3_max_channels(nic_dev);
+ unsigned int count = channels->combined_count;
+
+ if (!count) {
+ nicif_err(nic_dev, drv, netdev,
+ "Unsupported combined_count=0\n");
+ return -EINVAL;
+ }
+
+ if (channels->tx_count || channels->rx_count || channels->other_count) {
+ nicif_err(nic_dev, drv, netdev,
+ "Setting rx/tx/other count not supported\n");
+ return -EINVAL;
+ }
+
+ if (count > max_channel) {
+ nicif_err(nic_dev, drv, netdev,
+ "Combined count %u exceed limit %u\n", count,
+ max_channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void change_num_channel_reopen_handler(struct hinic3_nic_dev *nic_dev,
+ const void *priv_data)
+{
+ hinic3_set_default_rss_indir(nic_dev->netdev);
+}
+
+int hinic3_set_channels(struct net_device *netdev,
+ struct ethtool_channels *channels)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_txrxq_params q_params = {0};
+ unsigned int count = channels->combined_count;
+ int err;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ if (hinic3_validate_channel_parameter(netdev, channels))
+ return -EINVAL;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev,
+ "This function don't support RSS, only support 1 queue pair\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ if (count < user_cos_num) {
+ nicif_err(nic_dev, drv, netdev,
+ "DCB is on, channels num should more than valid cos num:%u\n",
+ user_cos_num);
+
+ return -EOPNOTSUPP;
+ }
+ }
+
+ if (HINIC3_SUPPORT_FDIR(nic_dev->hwdev) &&
+ !hinic3_validate_channel_setting_in_ntuple(nic_dev, count))
+ return -EOPNOTSUPP;
+
+ nicif_info(nic_dev, drv, netdev, "Set max combined queue number from %u to %u\n",
+ nic_dev->q_params.num_qps, count);
+
+ if (netif_running(netdev)) {
+ q_params = nic_dev->q_params;
+ q_params.num_qps = (u16)count;
+ q_params.txqs_res = NULL;
+ q_params.rxqs_res = NULL;
+ q_params.irq_cfg = NULL;
+
+ nicif_info(nic_dev, drv, netdev, "Restarting channel\n");
+ err = hinic3_change_channel_settings(nic_dev, &q_params,
+ change_num_channel_reopen_handler, NULL);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to change channel settings\n");
+ return -EFAULT;
+ }
+ } else {
+ /* Discard user configured rss */
+ hinic3_set_default_rss_indir(netdev);
+ nic_dev->q_params.num_qps = (u16)count;
+ }
+
+ nic_dev->nic_vram->vram_num_qps = nic_dev->q_params.num_qps;
+ return 0;
+}
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+u32 hinic3_get_rxfh_indir_size(struct net_device *netdev)
+{
+ return NIC_RSS_INDIR_SIZE;
+}
+#endif
+
+static int set_rss_rxfh(struct net_device *netdev, const u32 *indir,
+ const u8 *key)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ if (indir) {
+ err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to set rss indir table\n");
+ return -EFAULT;
+ }
+ clear_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags);
+
+ memcpy(nic_dev->rss_indir, indir,
+ sizeof(u32) * NIC_RSS_INDIR_SIZE);
+ nicif_info(nic_dev, drv, netdev, "Change rss indir success\n");
+ }
+
+ if (key) {
+ err = hinic3_rss_set_hash_key(nic_dev->hwdev, key);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set rss key\n");
+ return -EFAULT;
+ }
+
+ copy_value_to_rss_hkey(nic_dev, key);
+ nicif_info(nic_dev, drv, netdev, "Change rss key success\n");
+ }
+
+ return 0;
+}
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+u32 hinic3_get_rxfh_key_size(struct net_device *netdev)
+{
+ return NIC_RSS_KEY_SIZE;
+}
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, u8 *hfunc)
+#else
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ netdev_warn_once(nic_dev->netdev, "RSS is disabled\n");
+ return -EOPNOTSUPP;
+ }
+
+#ifdef HAVE_RXFH_HASHFUNC
+ if (hfunc)
+ *hfunc = nic_dev->rss_hash_engine ?
+ ETH_RSS_HASH_TOP : ETH_RSS_HASH_XOR;
+#endif
+
+ if (indir) {
+ err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indir);
+ if (err)
+ return -EFAULT;
+ }
+
+ if (key)
+ memcpy(key, nic_dev->rss_hkey, NIC_RSS_KEY_SIZE);
+
+ return err;
+}
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key,
+ const u8 hfunc)
+#else
+#ifdef HAVE_RXFH_NONCONST
+int hinic3_set_rxfh(struct net_device *netdev, u32 *indir, u8 *key)
+#else
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key)
+#endif
+#endif /* HAVE_RXFH_HASHFUNC */
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Not support to set rss parameters when rss is disable\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) && indir) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not support to set indir when DCB is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+#ifdef HAVE_RXFH_HASHFUNC
+ if (hfunc != ETH_RSS_HASH_NO_CHANGE) {
+ if (hfunc != ETH_RSS_HASH_TOP && hfunc != ETH_RSS_HASH_XOR) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not support to set hfunc type except TOP and XOR\n");
+ return -EOPNOTSUPP;
+ }
+
+ nic_dev->rss_hash_engine = (hfunc == ETH_RSS_HASH_XOR) ?
+ HINIC3_RSS_HASH_ENGINE_TYPE_XOR :
+ HINIC3_RSS_HASH_ENGINE_TYPE_TOEP;
+ err = hinic3_rss_set_hash_engine(nic_dev->hwdev,
+ nic_dev->rss_hash_engine);
+ if (err)
+ return -EFAULT;
+
+ nicif_info(nic_dev, drv, netdev,
+ "Change hfunc to RSS_HASH_%s success\n",
+ (hfunc == ETH_RSS_HASH_XOR) ? "XOR" : "TOP");
+ }
+#endif
+ err = set_rss_rxfh(netdev, indir, key);
+
+ return err;
+}
+
+#else /* !(defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)) */
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_get_rxfh_indir(struct net_device *netdev,
+ struct ethtool_rxfh_indir *indir1)
+#else
+int hinic3_get_rxfh_indir(struct net_device *netdev, u32 *indir)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ u32 *indir = NULL;
+
+ /* On older kernels (e.g. SUSE 11.2) this interface is called twice:
+ * the first call only returns the table size, and the second call
+ * fetches the rxfh indirection table of that size.
+ */
+ if (indir1->size == 0) {
+ indir1->size = NIC_RSS_INDIR_SIZE;
+ return 0;
+ }
+
+ if (indir1->size < NIC_RSS_INDIR_SIZE) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get rss indir, rss size(%d) is more than system rss size(%u).\n",
+ NIC_RSS_INDIR_SIZE, indir1->size);
+ return -EINVAL;
+ }
+
+ indir = indir1->ring_index;
+#endif
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ netdev_warn_once(nic_dev->netdev, "RSS is disabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (indir)
+ err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indir);
+
+ return err;
+}
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_set_rxfh_indir(struct net_device *netdev,
+ const struct ethtool_rxfh_indir *indir1)
+#else
+int hinic3_set_rxfh_indir(struct net_device *netdev, const u32 *indir)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ const u32 *indir = NULL;
+
+ if (indir1->size != NIC_RSS_INDIR_SIZE) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to set rss indir, rss size(%d) is more than system rss size(%u).\n",
+ NIC_RSS_INDIR_SIZE, indir1->size);
+ return -EINVAL;
+ }
+
+ indir = indir1->ring_index;
+#endif
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Not support to set rss indir when rss is disable\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) && indir) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not support to set indir when DCB is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ return set_rss_rxfh(netdev, indir, NULL);
+}
+
+#endif /* defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH) */
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
new file mode 100644
index 0000000..17f511c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
@@ -0,0 +1,95 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_RSS_H
+#define HINIC3_RSS_H
+
+#include "hinic3_nic_dev.h"
+
+#define HINIC_NUM_IQ_PER_FUNC 8
+
+int hinic3_rss_init(struct hinic3_nic_dev *nic_dev, u8 *rq2iq_map,
+ u32 map_size, u8 dcb_en);
+
+void hinic3_rss_deinit(struct hinic3_nic_dev *nic_dev);
+
+int hinic3_set_hw_rss_parameters(struct net_device *netdev, u8 rss_en,
+ u8 cos_num, u8 *cos_map, u8 dcb_en);
+
+void hinic3_init_rss_parameters(struct net_device *netdev);
+
+void hinic3_set_default_rss_indir(struct net_device *netdev);
+
+void hinic3_try_to_enable_rss(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_clear_rss_config(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_flush_rx_flow_rule(struct hinic3_nic_dev *nic_dev);
+int hinic3_ethtool_get_flow(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 location);
+
+int hinic3_ethtool_get_all_flows(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 *rule_locs);
+
+int hinic3_ethtool_flow_remove(struct hinic3_nic_dev *nic_dev, u32 location);
+
+int hinic3_ethtool_flow_replace(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs);
+
+bool hinic3_validate_channel_setting_in_ntuple(const struct hinic3_nic_dev *nic_dev, u32 q_num);
+
+/* for ethtool */
+int hinic3_get_rxnfc(struct net_device *netdev,
+ struct ethtool_rxnfc *cmd, u32 *rule_locs);
+
+int hinic3_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd);
+
+void hinic3_get_channels(struct net_device *netdev,
+ struct ethtool_channels *channels);
+
+int hinic3_set_channels(struct net_device *netdev,
+ struct ethtool_channels *channels);
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+u32 hinic3_get_rxfh_indir_size(struct net_device *netdev);
+#endif /* NOT_HAVE_GET_RXFH_INDIR_SIZE */
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+u32 hinic3_get_rxfh_key_size(struct net_device *netdev);
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, u8 *hfunc);
+#else /* HAVE_RXFH_HASHFUNC */
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key);
+#endif /* HAVE_RXFH_HASHFUNC */
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key,
+ const u8 hfunc);
+#else
+#ifdef HAVE_RXFH_NONCONST
+int hinic3_set_rxfh(struct net_device *netdev, u32 *indir, u8 *key);
+#else
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key);
+#endif /* HAVE_RXFH_NONCONST */
+#endif /* HAVE_RXFH_HASHFUNC */
+
+#else /* !(defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)) */
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_get_rxfh_indir(struct net_device *netdev,
+ struct ethtool_rxfh_indir *indir1);
+#else
+int hinic3_get_rxfh_indir(struct net_device *netdev, u32 *indir);
+#endif
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_set_rxfh_indir(struct net_device *netdev,
+ const struct ethtool_rxfh_indir *indir1);
+#else
+int hinic3_set_rxfh_indir(struct net_device *netdev, const u32 *indir);
+#endif /* NOT_HAVE_GET_RXFH_INDIR_SIZE */
+
+#endif /* (defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)) */
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c
new file mode 100644
index 0000000..902d7e2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c
@@ -0,0 +1,413 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/dcbnl.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_cfg.h"
+#include "nic_mpu_cmd.h"
+#include "nic_npu_cmd.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic.h"
+#include "hinic3_common.h"
+
+static int hinic3_rss_cfg_hash_key(struct hinic3_nic_io *nic_io, u8 opcode,
+ u8 *key, u16 key_size)
+{
+ struct hinic3_cmd_rss_hash_key hash_key;
+ u16 out_size = sizeof(hash_key);
+ int err;
+
+ memset(&hash_key, 0, sizeof(struct hinic3_cmd_rss_hash_key));
+ hash_key.func_id = hinic3_global_func_id(nic_io->hwdev);
+ hash_key.opcode = opcode;
+
+ if (opcode == HINIC3_CMD_OP_SET)
+ memcpy(hash_key.key, key, key_size);
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_KEY,
+ &hash_key, sizeof(hash_key),
+ &hash_key, &out_size);
+ if (err || !out_size || hash_key.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to %s hash key, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_SET ? "set" : "get",
+ err, hash_key.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ memcpy(key, hash_key.key, key_size);
+
+ return 0;
+}
+
+int hinic3_rss_set_hash_key(void *hwdev, const u8 *key)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u8 hash_key[NIC_RSS_KEY_SIZE];
+
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ memcpy(hash_key, key, NIC_RSS_KEY_SIZE);
+ return hinic3_rss_cfg_hash_key(nic_io, HINIC3_CMD_OP_SET,
+ hash_key, NIC_RSS_KEY_SIZE);
+}
+
+int hinic3_rss_get_hash_key(void *hwdev, u8 *key)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ return hinic3_rss_cfg_hash_key(nic_io, HINIC3_CMD_OP_GET,
+ key, NIC_RSS_KEY_SIZE);
+}
+
+int hinic3_rss_get_indir_tbl(void *hwdev, u32 *indir_table)
+{
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 *indir_tbl = NULL;
+ int err, i;
+
+ if (!hwdev || !indir_table)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd_buf.\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_rss_indirect_tbl);
+ err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ cmd_buf, cmd_buf, NULL, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get rss indir table\n");
+ goto get_indir_tbl_failed;
+ }
+
+ indir_tbl = (u16 *)cmd_buf->buf;
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++)
+ indir_table[i] = *(indir_tbl + i);
+
+get_indir_tbl_failed:
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+
+ return err;
+}
+
+int hinic3_rss_set_indir_tbl(void *hwdev, const u32 *indir_table)
+{
+ struct nic_rss_indirect_tbl *indir_tbl = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 *temp = NULL;
+ u32 i, size;
+ u64 out_param = 0;
+ int err;
+
+ if (!hwdev || !indir_table)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_rss_indirect_tbl);
+ indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf;
+ memset(indir_tbl, 0, sizeof(*indir_tbl));
+
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++)
+ indir_tbl->entry[i] = (u16)(*(indir_table + i));
+
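+ /* the indirection entries are consumed by the device as big-endian 32-bit words */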
+ size = sizeof(indir_tbl->entry) / sizeof(u32);
+ temp = (u32 *)indir_tbl->entry;
+ for (i = 0; i < size; i++)
+ temp[i] = cpu_to_be32(temp[i]);
+
+ err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set rss indir table\n");
+ err = -EFAULT;
+ }
+
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+ return err;
+}
+
+static int hinic3_cmdq_set_rss_type(void *hwdev, struct nic_rss_type rss_type)
+{
+ struct nic_rss_context_tbl *ctx_tbl = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 ctx = 0;
+ u64 out_param = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
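+ /* pack the per-flow-type hash enables and the VALID flag into one 32-bit context word */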
+ ctx |= HINIC3_RSS_TYPE_SET(1, VALID) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv4, IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6, IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6_ext, IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv4, TCP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6, TCP_IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6_ext, TCP_IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv4, UDP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv6, UDP_IPV6);
+
+ cmd_buf->size = sizeof(struct nic_rss_context_tbl);
+ ctx_tbl = (struct nic_rss_context_tbl *)cmd_buf->buf;
+ memset(ctx_tbl, 0, sizeof(*ctx_tbl));
+ ctx_tbl->ctx = cpu_to_be32(ctx);
+
+ /* cfg the rss context table by command queue */
+ err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "cmdq set set rss context table failed, err: %d\n",
+ err);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hinic3_mgmt_set_rss_type(void *hwdev, struct nic_rss_type rss_type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_rss_context_table ctx_tbl;
+ u32 ctx = 0;
+ u16 out_size = sizeof(ctx_tbl);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ memset(&ctx_tbl, 0, sizeof(ctx_tbl));
+ ctx_tbl.func_id = hinic3_global_func_id(hwdev);
+ ctx |= HINIC3_RSS_TYPE_SET(1, VALID) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv4, IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6, IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6_ext, IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv4, TCP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6, TCP_IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6_ext, TCP_IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv4, UDP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv6, UDP_IPV6);
+ ctx_tbl.context = ctx;
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RSS_CTX_TBL_INTO_FUNC,
+ &ctx_tbl, sizeof(ctx_tbl),
+ &ctx_tbl, &out_size);
+
+ if (ctx_tbl.msg_head.status == HINIC3_MGMT_CMD_UNSUPPORTED) {
+ return HINIC3_MGMT_CMD_UNSUPPORTED;
+ } else if (err || !out_size || ctx_tbl.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "mgmt Failed to set rss context offload, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, ctx_tbl.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_rss_type(void *hwdev, struct nic_rss_type rss_type)
+{
+ int err;
+
+ err = hinic3_mgmt_set_rss_type(hwdev, rss_type);
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ err = hinic3_cmdq_set_rss_type(hwdev, rss_type);
+
+ return err;
+}
+
+int hinic3_get_rss_type(void *hwdev, struct nic_rss_type *rss_type)
+{
+ struct hinic3_rss_context_table ctx_tbl;
+ u16 out_size = sizeof(ctx_tbl);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !rss_type)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&ctx_tbl, 0, sizeof(struct hinic3_rss_context_table));
+ ctx_tbl.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_RSS_CTX_TBL,
+ &ctx_tbl, sizeof(ctx_tbl),
+ &ctx_tbl, &out_size);
+ if (err || !out_size || ctx_tbl.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to get hash type, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, ctx_tbl.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ rss_type->ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV4);
+ rss_type->ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV6);
+ rss_type->ipv6_ext = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV6_EXT);
+ rss_type->tcp_ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV4);
+ rss_type->tcp_ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV6);
+ rss_type->tcp_ipv6_ext = HINIC3_RSS_TYPE_GET(ctx_tbl.context,
+ TCP_IPV6_EXT);
+ rss_type->udp_ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV4);
+ rss_type->udp_ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV6);
+
+ return 0;
+}
+
+static int hinic3_rss_cfg_hash_engine(struct hinic3_nic_io *nic_io, u8 opcode,
+ u8 *type)
+{
+ struct hinic3_cmd_rss_engine_type hash_type;
+ u16 out_size = sizeof(hash_type);
+ int err;
+
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&hash_type, 0, sizeof(struct hinic3_cmd_rss_engine_type));
+
+ hash_type.func_id = hinic3_global_func_id(nic_io->hwdev);
+ hash_type.opcode = opcode;
+
+ if (opcode == HINIC3_CMD_OP_SET)
+ hash_type.hash_engine = *type;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_ENGINE,
+ &hash_type, sizeof(hash_type),
+ &hash_type, &out_size);
+ if (err || !out_size || hash_type.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to %s hash engine, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_SET ? "set" : "get",
+ err, hash_type.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ *type = hash_type.hash_engine;
+
+ return 0;
+}
+
+int hinic3_rss_set_hash_engine(void *hwdev, u8 type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return hinic3_rss_cfg_hash_engine(nic_io, HINIC3_CMD_OP_SET, &type);
+}
+
+int hinic3_rss_get_hash_engine(void *hwdev, u8 *type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !type)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ return hinic3_rss_cfg_hash_engine(nic_io, HINIC3_CMD_OP_GET, type);
+}
+
+int hinic3_rss_cfg(void *hwdev, u8 rss_en, u8 cos_num, u8 *prio_tc, u16 num_qps)
+{
+ struct hinic3_cmd_rss_config rss_cfg;
+ u16 out_size = sizeof(rss_cfg);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ /* microcode requires the number of TCs to be a power of 2 */
+ if (!hwdev || !prio_tc || (cos_num & (cos_num - 1)))
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+ memset(&rss_cfg, 0, sizeof(struct hinic3_cmd_rss_config));
+ rss_cfg.func_id = hinic3_global_func_id(hwdev);
+ rss_cfg.rss_en = rss_en;
+ rss_cfg.rq_priority_number = cos_num ? (u8)ilog2(cos_num) : 0;
+ rss_cfg.num_qps = num_qps;
+
+ memcpy(rss_cfg.prio_tc, prio_tc, NIC_DCB_UP_MAX);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_RSS_CFG,
+ &rss_cfg, sizeof(rss_cfg),
+ &rss_cfg, &out_size);
+ if (err || !out_size || rss_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rss cfg, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rss_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c
new file mode 100644
index 0000000..25536d1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c
@@ -0,0 +1,1523 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <net/xdp.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/u64_stats_sync.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/sctp.h>
+#include <linux/pkt_sched.h>
+#include <linux/ipv6.h>
+#include <linux/module.h>
+#include <linux/compiler.h>
+#include <linux/filter.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_common.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_rss.h"
+#include "hinic3_rx.h"
+
+/* performance: ci addr RTE_CACHE_SIZE(64B) alignment */
+#define HINIC3_RX_HDR_SIZE 256
+#define HINIC3_RX_BUFFER_WRITE 16
+
+#define HINIC3_RX_TCP_PKT 0x3
+#define HINIC3_RX_UDP_PKT 0x4
+#define HINIC3_RX_SCTP_PKT 0x7
+
+#define HINIC3_RX_IPV4_PKT 0
+#define HINIC3_RX_IPV6_PKT 1
+#define HINIC3_RX_INVALID_IP_TYPE 2
+
+#define HINIC3_RX_PKT_FORMAT_NON_TUNNEL 0
+#define HINIC3_RX_PKT_FORMAT_VXLAN 1
+
+#define RXQ_STATS_INC(rxq, field) \
+do { \
+ u64_stats_update_begin(&(rxq)->rxq_stats.syncp); \
+ (rxq)->rxq_stats.field++; \
+ u64_stats_update_end(&(rxq)->rxq_stats.syncp); \
+} while (0)
+
+static bool rx_alloc_mapped_page(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_rx_info *rx_info)
+{
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct page *page = rx_info->page;
+ dma_addr_t dma = rx_info->buf_dma_addr;
+ u32 page_offset = 0;
+
+ if (likely(dma))
+ return true;
+
+ /* alloc new page for storage */
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ page = page_pool_alloc_frag(rx_info->page_pool, &page_offset,
+ nic_dev->rx_buff_len,
+ GFP_ATOMIC | __GFP_COLD |
+ __GFP_COMP);
+ if (unlikely(!page))
+ return false;
+ dma = page_pool_get_dma_addr(page);
+ goto set_rx_info;
+ }
+#endif
+ page = alloc_pages_node(NUMA_NO_NODE,
+ GFP_ATOMIC | __GFP_COLD | __GFP_COMP,
+ nic_dev->page_order);
+
+ if (unlikely(!page))
+ return false;
+
+ /* map page for use */
+ dma = dma_map_page(&pdev->dev, page, page_offset,
+ nic_dev->dma_rx_buff_size, DMA_FROM_DEVICE);
+ /* if mapping failed free memory back to system since
+ * there isn't much point in holding memory we can't use
+ */
+ if (unlikely(dma_mapping_error(&pdev->dev, dma))) {
+ __free_pages(page, nic_dev->page_order);
+ return false;
+ }
+ goto set_rx_info;
+
+set_rx_info:
+ rx_info->page = page;
+ rx_info->buf_dma_addr = dma;
+ rx_info->page_offset = page_offset;
+
+ return true;
+}
+
+static u32 hinic3_rx_fill_wqe(struct hinic3_rxq *rxq)
+{
+ struct net_device *netdev = rxq->netdev;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int rq_wqe_len = rxq->rq->wq.wqebb_size;
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ u32 i;
+
+ for (i = 0; i < rxq->q_depth; i++) {
+ rx_info = &rxq->rx_info[i];
+ rq_wqe = hinic3_rq_wqe_addr(rxq->rq, (u16)i);
+
+ if (rxq->rq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ /* unit of cqe length is 16B */
+ hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge,
+ rx_info->cqe_dma,
+ (HINIC3_CQE_LEN >>
+ HINIC3_CQE_SIZE_SHIFT));
+ /* use fixed len */
+ rq_wqe->extend_wqe.buf_desc.sge.len =
+ nic_dev->rx_buff_len;
+ } else {
+ rq_wqe->normal_wqe.cqe_hi_addr =
+ upper_32_bits(rx_info->cqe_dma);
+ rq_wqe->normal_wqe.cqe_lo_addr =
+ lower_32_bits(rx_info->cqe_dma);
+ }
+
+ hinic3_hw_be32_len(rq_wqe, rq_wqe_len);
+ rx_info->rq_wqe = rq_wqe;
+ }
+
+ return i;
+}
+
+static u32 hinic3_rx_fill_buffers(struct hinic3_rxq *rxq)
+{
+ struct net_device *netdev = rxq->netdev;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ dma_addr_t dma_addr;
+ u32 i, free_wqebbs = rxq->delta - 1;
+
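+ /* post at most delta - 1 buffers so one wqebb always stays unused and
+ * a full ring can be told apart from an empty one
+ */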
+ for (i = 0; i < free_wqebbs; i++) {
+ rx_info = &rxq->rx_info[rxq->next_to_update];
+
+ if (unlikely(!rx_alloc_mapped_page(nic_dev, rx_info))) {
+ RXQ_STATS_INC(rxq, alloc_rx_buf_err);
+ break;
+ }
+
+ dma_addr = rx_info->buf_dma_addr + rx_info->page_offset;
+
+ rq_wqe = rx_info->rq_wqe;
+
+ if (rxq->rq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ rq_wqe->extend_wqe.buf_desc.sge.hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->extend_wqe.buf_desc.sge.lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ } else {
+ rq_wqe->normal_wqe.buf_hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->normal_wqe.buf_lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ }
+ rxq->next_to_update = (u16)((rxq->next_to_update + 1) & rxq->q_mask);
+ }
+
+ if (likely(i)) {
+ hinic3_write_db(rxq->rq,
+ rxq->q_id & (NIC_RX_DB_COS_MAX - 1),
+ RQ_CFLAG_DP,
+ (u16)((u32)rxq->next_to_update <<
+ rxq->rq->wqe_type));
+ rxq->delta -= i;
+ rxq->next_to_alloc = rxq->next_to_update;
+ } else if (free_wqebbs == rxq->q_depth - 1) {
+ RXQ_STATS_INC(rxq, rx_buf_empty);
+ }
+
+ return i;
+}
+
+static u32 hinic3_rx_alloc_buffers(struct hinic3_nic_dev *nic_dev, u32 rq_depth,
+ struct hinic3_rx_info *rx_info_arr)
+{
+ u32 free_wqebbs = rq_depth - 1;
+ u32 idx;
+
+ for (idx = 0; idx < free_wqebbs; idx++) {
+ if (!rx_alloc_mapped_page(nic_dev, &rx_info_arr[idx]))
+ break;
+ }
+
+ return idx;
+}
+
+static void hinic3_rx_free_buffers(struct hinic3_nic_dev *nic_dev, u32 q_depth,
+ struct hinic3_rx_info *rx_info_arr)
+{
+ struct hinic3_rx_info *rx_info = NULL;
+ u32 i;
+
+ /* Free all the Rx ring sk_buffs */
+ for (i = 0; i < q_depth; i++) {
+ rx_info = &rx_info_arr[i];
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ if (rx_info->page) {
+ page_pool_put_full_page(rx_info->page_pool,
+ rx_info->page, false);
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+ }
+ continue;
+ }
+#endif
+
+ if (rx_info->buf_dma_addr) {
+ dma_unmap_page(&nic_dev->pdev->dev,
+ rx_info->buf_dma_addr,
+ nic_dev->dma_rx_buff_size,
+ DMA_FROM_DEVICE);
+ rx_info->buf_dma_addr = 0;
+ }
+
+ if (rx_info->page) {
+ __free_pages(rx_info->page, nic_dev->page_order);
+ rx_info->page = NULL;
+ }
+ }
+}
+
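+/* recycle the page and DMA mapping of a received buffer into the
+ * next_to_alloc slot so it can be posted to hardware again without a
+ * new allocation
+ */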
+static void hinic3_reuse_rx_page(struct hinic3_rxq *rxq,
+ struct hinic3_rx_info *old_rx_info)
+{
+ struct hinic3_rx_info *new_rx_info = NULL;
+ u16 nta = rxq->next_to_alloc;
+
+ new_rx_info = &rxq->rx_info[nta];
+
+ /* update, and store next to alloc */
+ nta++;
+ rxq->next_to_alloc = (nta < rxq->q_depth) ? nta : 0;
+
+ new_rx_info->page = old_rx_info->page;
+ new_rx_info->page_offset = old_rx_info->page_offset;
+ new_rx_info->buf_dma_addr = old_rx_info->buf_dma_addr;
+
+ /* sync the buffer for use by the device */
+ dma_sync_single_range_for_device(rxq->dev, new_rx_info->buf_dma_addr,
+ new_rx_info->page_offset,
+ rxq->buf_len,
+ DMA_FROM_DEVICE);
+}
+
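+/* attach one rx buffer to the skb: small frames are copied into the linear
+ * area, larger ones are added as a page fragment; returns true when the
+ * page can be reused (and recycled via hinic3_reuse_rx_page())
+ */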
+static bool hinic3_add_rx_frag(struct hinic3_rxq *rxq,
+ struct hinic3_rx_info *rx_info,
+ struct sk_buff *skb, u32 size)
+{
+ struct page *page = NULL;
+ u8 *va = NULL;
+
+ page = rx_info->page;
+ va = (u8 *)page_address(page) + rx_info->page_offset;
+ prefetch(va);
+#if L1_CACHE_BYTES < 128
+ prefetch(va + L1_CACHE_BYTES);
+#endif
+
+ dma_sync_single_range_for_cpu(rxq->dev,
+ rx_info->buf_dma_addr,
+ rx_info->page_offset,
+ rxq->buf_len,
+ DMA_FROM_DEVICE);
+
+ if (size <= HINIC3_RX_HDR_SIZE && !skb_is_nonlinear(skb)) {
+ __skb_put_data(skb, va, size);
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ page_pool_put_full_page(rx_info->page_pool,
+ page, false);
+ return false;
+ }
+#endif
+
+ /* page is not reserved, we can reuse buffer as-is */
+ if (likely(page_to_nid(page) == numa_node_id()))
+ return true;
+
+ /* this page cannot be reused so discard it */
+ put_page(page);
+ goto discard_page;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ (int)rx_info->page_offset, (int)size, rxq->buf_len);
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ skb_mark_for_recycle(skb);
+ return false;
+ }
+#endif
+
+ /* avoid re-using remote pages */
+ if (unlikely(page_to_nid(page) != numa_node_id()))
+ goto discard_page;
+
+ /* if we are only owner of page we can reuse it */
+ if (unlikely(page_count(page) != 1))
+ goto discard_page;
+
+ /* flip page offset to other buffer */
+ rx_info->page_offset ^= rxq->buf_len;
+ get_page(page);
+
+ return true;
+
+discard_page:
+ dma_unmap_page(rxq->dev, rx_info->buf_dma_addr,
+ rxq->dma_rx_buff_size, DMA_FROM_DEVICE);
+ return false;
+}
+
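+/* walk the sge_num rx buffers of one packet and attach them to head_skb;
+ * when more than MAX_SKB_FRAGS buffers are needed, the extra skbs chained
+ * on the frag_list by the caller are filled in turn
+ */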
+static void packaging_skb(struct hinic3_rxq *rxq, struct sk_buff *head_skb,
+ u8 sge_num, u32 pkt_len)
+{
+ struct hinic3_rx_info *rx_info = NULL;
+ struct sk_buff *skb = NULL;
+ u8 frag_num = 0;
+ u32 size;
+ u32 sw_ci;
+ u32 temp_pkt_len = pkt_len;
+ u8 temp_sge_num = sge_num;
+
+ sw_ci = rxq->cons_idx & rxq->q_mask;
+ skb = head_skb;
+ while (temp_sge_num) {
+ rx_info = &rxq->rx_info[sw_ci];
+ sw_ci = (sw_ci + 1) & rxq->q_mask;
+ if (unlikely(temp_pkt_len > rxq->buf_len)) {
+ size = rxq->buf_len;
+ temp_pkt_len -= rxq->buf_len;
+ } else {
+ size = temp_pkt_len;
+ }
+
+ if (unlikely(frag_num == MAX_SKB_FRAGS)) {
+ frag_num = 0;
+ if (skb == head_skb)
+ skb = skb_shinfo(skb)->frag_list;
+ else
+ skb = skb->next;
+ }
+
+ if (unlikely(skb != head_skb)) {
+ head_skb->len += size;
+ head_skb->data_len += size;
+ head_skb->truesize += rxq->buf_len;
+ }
+
+ if (likely(hinic3_add_rx_frag(rxq, rx_info, skb, size)))
+ hinic3_reuse_rx_page(rxq, rx_info);
+
+ /* clear contents of buffer_info */
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+ temp_sge_num--;
+ frag_num++;
+ }
+}
+
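+/* number of rx buffers (SGEs) needed for a packet: ceil(pkt_len / buf_len) */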
+#define HINIC3_GET_SGE_NUM(pkt_len, rxq) \
+ ((u8)(((pkt_len) >> (rxq)->rx_buff_shift) + \
+ (((pkt_len) & ((rxq)->buf_len - 1)) ? 1 : 0)))
+
+static struct sk_buff *hinic3_fetch_rx_buffer(struct hinic3_rxq *rxq,
+ u32 pkt_len)
+{
+ struct sk_buff *head_skb = NULL;
+ struct sk_buff *cur_skb = NULL;
+ struct sk_buff *skb = NULL;
+ struct net_device *netdev = rxq->netdev;
+ u8 sge_num, skb_num;
+ u16 wqebb_cnt = 0;
+
+ head_skb = netdev_alloc_skb_ip_align(netdev, HINIC3_RX_HDR_SIZE);
+ if (unlikely(!head_skb))
+ return NULL;
+
+ sge_num = HINIC3_GET_SGE_NUM(pkt_len, rxq);
+ if (likely(sge_num <= MAX_SKB_FRAGS))
+ skb_num = 1;
+ else
+ skb_num = (sge_num / MAX_SKB_FRAGS) +
+ ((sge_num % MAX_SKB_FRAGS) ? 1 : 0);
+
+ while (unlikely(skb_num > 1)) {
+ cur_skb = netdev_alloc_skb_ip_align(netdev, HINIC3_RX_HDR_SIZE);
+ if (unlikely(!cur_skb))
+ goto alloc_skb_fail;
+
+ if (!skb) {
+ skb_shinfo(head_skb)->frag_list = cur_skb;
+ skb = cur_skb;
+ } else {
+ skb->next = cur_skb;
+ skb = cur_skb;
+ }
+
+ skb_num--;
+ }
+
+ prefetchw(head_skb->data);
+ wqebb_cnt = sge_num;
+
+ packaging_skb(rxq, head_skb, sge_num, pkt_len);
+
+ rxq->cons_idx += wqebb_cnt;
+ rxq->delta += wqebb_cnt;
+
+ return head_skb;
+
+alloc_skb_fail:
+ dev_kfree_skb_any(head_skb);
+ return NULL;
+}
+
+void hinic3_rxq_get_stats(struct hinic3_rxq *rxq,
+ struct hinic3_rxq_stats *stats)
+{
+ struct hinic3_rxq_stats *rxq_stats = &rxq->rxq_stats;
+ unsigned int start;
+
+ u64_stats_update_begin(&stats->syncp);
+ do {
+ start = u64_stats_fetch_begin(&rxq_stats->syncp);
+ stats->bytes = rxq_stats->bytes;
+ stats->packets = rxq_stats->packets;
+ stats->errors = rxq_stats->csum_errors +
+ rxq_stats->other_errors;
+ stats->csum_errors = rxq_stats->csum_errors;
+ stats->other_errors = rxq_stats->other_errors;
+ stats->dropped = rxq_stats->dropped;
+ stats->xdp_dropped = rxq_stats->xdp_dropped;
+ stats->rx_buf_empty = rxq_stats->rx_buf_empty;
+ } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+ u64_stats_update_end(&stats->syncp);
+}
+
+void hinic3_rxq_clean_stats(struct hinic3_rxq_stats *rxq_stats)
+{
+ u64_stats_update_begin(&rxq_stats->syncp);
+ rxq_stats->bytes = 0;
+ rxq_stats->packets = 0;
+ rxq_stats->errors = 0;
+ rxq_stats->csum_errors = 0;
+ rxq_stats->other_errors = 0;
+ rxq_stats->dropped = 0;
+ rxq_stats->xdp_dropped = 0;
+ rxq_stats->rx_buf_empty = 0;
+
+ rxq_stats->alloc_skb_err = 0;
+ rxq_stats->alloc_rx_buf_err = 0;
+ rxq_stats->xdp_large_pkt = 0;
+ rxq_stats->restore_drop_sge = 0;
+ rxq_stats->rsvd2 = 0;
+ u64_stats_update_end(&rxq_stats->syncp);
+}
+
+static void rxq_stats_init(struct hinic3_rxq *rxq)
+{
+ struct hinic3_rxq_stats *rxq_stats = &rxq->rxq_stats;
+
+ u64_stats_init(&rxq_stats->syncp);
+ hinic3_rxq_clean_stats(rxq_stats);
+}
+
+#ifndef HAVE_ETH_GET_HEADLEN_FUNC
+static unsigned int hinic3_eth_get_headlen(unsigned char *data, unsigned int max_len)
+{
+#define IP_FRAG_OFFSET 0x1FFF
+#define FCOE_HLEN 38
+#define ETH_P_8021_AD 0x88A8
+#define ETH_P_8021_Q 0x8100
+#define TCP_HEAD_OFFSET 12
+ union {
+ unsigned char *data;
+ struct ethhdr *eth;
+ struct vlan_ethhdr *vlan;
+ struct iphdr *ipv4;
+ struct ipv6hdr *ipv6;
+ } hdr;
+ u16 protocol;
+ u8 nexthdr = 0;
+ u8 hlen;
+
+ if (unlikely(max_len < ETH_HLEN))
+ return max_len;
+
+ hdr.data = data;
+ protocol = hdr.eth->h_proto;
+
+ /* L2 header */
+ if (protocol == htons(ETH_P_8021_AD) ||
+ protocol == htons(ETH_P_8021_Q)) {
+ if (unlikely(max_len < ETH_HLEN + VLAN_HLEN))
+ return max_len;
+
+ /* L3 protocol */
+ protocol = hdr.vlan->h_vlan_encapsulated_proto;
+ hdr.data += sizeof(struct vlan_ethhdr);
+ } else {
+ hdr.data += ETH_HLEN;
+ }
+
+ /* L3 header */
+ switch (protocol) {
+ case htons(ETH_P_IP):
+ if ((int)(hdr.data - data) >
+ (int)(max_len - sizeof(struct iphdr)))
+ return max_len;
+
+ /* L3 header length = (1st byte & 0x0F) << 2 */
+ hlen = (hdr.data[0] & 0x0F) << 2;
+
+ if (hlen < sizeof(struct iphdr))
+ return (unsigned int)(hdr.data - data);
+
+ if (!(hdr.ipv4->frag_off & htons(IP_FRAG_OFFSET)))
+ nexthdr = hdr.ipv4->protocol;
+
+ hdr.data += hlen;
+ break;
+
+ case htons(ETH_P_IPV6):
+ if ((int)(hdr.data - data) >
+ (int)(max_len - sizeof(struct ipv6hdr)))
+ return max_len;
+ /* L4 protocol */
+ nexthdr = hdr.ipv6->nexthdr;
+ hdr.data += sizeof(struct ipv6hdr);
+ break;
+
+ case htons(ETH_P_FCOE):
+ hdr.data += FCOE_HLEN;
+ break;
+
+ default:
+ return (unsigned int)(hdr.data - data);
+ }
+
+ /* L4 header */
+ switch (nexthdr) {
+ case IPPROTO_TCP:
+ if ((int)(hdr.data - data) >
+ (int)(max_len - sizeof(struct tcphdr)))
+ return max_len;
+
+ /* L4 header length = (13th byte & 0xF0) >> 2 */
+ if (((hdr.data[TCP_HEAD_OFFSET] & 0xF0) >>
+ HINIC3_HEADER_DATA_UNIT) > sizeof(struct tcphdr))
+ hdr.data += ((hdr.data[TCP_HEAD_OFFSET] & 0xF0) >>
+ HINIC3_HEADER_DATA_UNIT);
+ else
+ hdr.data += sizeof(struct tcphdr);
+ break;
+ case IPPROTO_UDP:
+ case IPPROTO_UDPLITE:
+ hdr.data += sizeof(struct udphdr);
+ break;
+
+ case IPPROTO_SCTP:
+ hdr.data += sizeof(struct sctphdr);
+ break;
+ default:
+ break;
+ }
+
+ if ((hdr.data - data) > max_len)
+ return max_len;
+ else
+ return (unsigned int)(hdr.data - data);
+}
+#endif
+
+static void hinic3_pull_tail(struct sk_buff *skb)
+{
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+ unsigned char *va = NULL;
+ unsigned int pull_len;
+
+ /* it is valid to use page_address instead of kmap since we are
+ * working with pages allocated out of the lowmem pool per
+ * alloc_page(GFP_ATOMIC)
+ */
+ va = skb_frag_address(frag);
+
+#ifdef HAVE_ETH_GET_HEADLEN_FUNC
+ /* we need the header to contain the greater of either ETH_HLEN or
+ * 60 bytes if the skb->len is less than 60 for skb_pad.
+ */
+#ifdef ETH_GET_HEADLEN_NEED_DEV
+ pull_len = eth_get_headlen(skb->dev, va, HINIC3_RX_HDR_SIZE);
+#else
+ pull_len = eth_get_headlen(va, HINIC3_RX_HDR_SIZE);
+#endif
+
+#else
+ pull_len = hinic3_eth_get_headlen(va, HINIC3_RX_HDR_SIZE);
+#endif
+
+ /* align pull length to size of long to optimize memcpy performance */
+ skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
+
+ /* update all of the pointers */
+ skb_frag_size_sub(frag, (int)pull_len);
+ frag->page_offset += (int)pull_len;
+
+ skb->data_len -= pull_len;
+ skb->tail += pull_len;
+}
+
+static void hinic3_rx_csum(struct hinic3_rxq *rxq, u32 offload_type,
+ u32 status, struct sk_buff *skb)
+{
+ struct net_device *netdev = rxq->netdev;
+ u32 pkt_type = HINIC3_GET_RX_PKT_TYPE(offload_type);
+ u32 ip_type = HINIC3_GET_RX_IP_TYPE(offload_type);
+ u32 pkt_fmt = HINIC3_GET_RX_TUNNEL_PKT_FORMAT(offload_type);
+
+ u32 csum_err;
+
+ csum_err = HINIC3_GET_RX_CSUM_ERR(status);
+ if (unlikely(csum_err == HINIC3_RX_CSUM_IPSU_OTHER_ERR))
+ rxq->rxq_stats.other_errors++;
+
+ if (!(netdev->features & NETIF_F_RXCSUM))
+ return;
+
+ if (unlikely(csum_err)) {
+ /* pkt type is recognized by HW, and csum is wrong */
+ if (!(csum_err & (HINIC3_RX_CSUM_HW_CHECK_NONE |
+ HINIC3_RX_CSUM_IPSU_OTHER_ERR)))
+ rxq->rxq_stats.csum_errors++;
+ skb->ip_summed = CHECKSUM_NONE;
+ return;
+ }
+
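+ /* only trust the hardware checksum for recognized IP packets in non-tunnel or VXLAN format */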
+ if (ip_type == HINIC3_RX_INVALID_IP_TYPE ||
+ !(pkt_fmt == HINIC3_RX_PKT_FORMAT_NON_TUNNEL ||
+ pkt_fmt == HINIC3_RX_PKT_FORMAT_VXLAN)) {
+ skb->ip_summed = CHECKSUM_NONE;
+ return;
+ }
+
+ switch (pkt_type) {
+ case HINIC3_RX_TCP_PKT:
+ case HINIC3_RX_UDP_PKT:
+ case HINIC3_RX_SCTP_PKT:
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ break;
+ default:
+ skb->ip_summed = CHECKSUM_NONE;
+ break;
+ }
+}
+
+#ifdef HAVE_SKBUFF_CSUM_LEVEL
+static void hinic3_rx_gro(struct hinic3_rxq *rxq, u32 offload_type,
+ struct sk_buff *skb)
+{
+ struct net_device *netdev = rxq->netdev;
+ bool l2_tunnel = false;
+
+ if (!(netdev->features & NETIF_F_GRO))
+ return;
+
+ l2_tunnel =
+ HINIC3_GET_RX_TUNNEL_PKT_FORMAT(offload_type) ==
+ HINIC3_RX_PKT_FORMAT_VXLAN ? 1 : 0;
+ if (l2_tunnel && skb->ip_summed == CHECKSUM_UNNECESSARY)
+ /* If we checked the outer header let the stack know */
+ skb->csum_level = 1;
+}
+#endif /* HAVE_SKBUFF_CSUM_LEVEL */
+
+static void hinic3_copy_lp_data(struct hinic3_nic_dev *nic_dev,
+ struct sk_buff *skb)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 *lb_buf = nic_dev->lb_test_rx_buf;
+ void *frag_data = NULL;
+ int lb_len = nic_dev->lb_pkt_len;
+ int pkt_offset, frag_len, i;
+
+ if (nic_dev->lb_test_rx_idx == LP_PKT_CNT) {
+ nic_dev->lb_test_rx_idx = 0;
+ nicif_warn(nic_dev, rx_err, netdev, "Loopback test warning, receive too many test pkts\n");
+ }
+
+ if (skb->len != (u32)(nic_dev->lb_pkt_len)) {
+ nicif_warn(nic_dev, rx_err, netdev, "Wrong packet length\n");
+ nic_dev->lb_test_rx_idx++;
+ return;
+ }
+
+ pkt_offset = nic_dev->lb_test_rx_idx * lb_len;
+ frag_len = (int)skb_headlen(skb);
+ memcpy(lb_buf + pkt_offset, skb->data, (size_t)(u32)frag_len);
+
+ pkt_offset += frag_len;
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ frag_data = skb_frag_address(&skb_shinfo(skb)->frags[i]);
+ frag_len = (int)skb_frag_size(&skb_shinfo(skb)->frags[i]);
+ memcpy(lb_buf + pkt_offset, frag_data, (size_t)(u32)frag_len);
+
+ pkt_offset += frag_len;
+ }
+ nic_dev->lb_test_rx_idx++;
+}
+
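+/* mark an LRO-coalesced skb with GSO metadata so the stack can resegment it if needed */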
+static inline void hinic3_lro_set_gso_params(struct sk_buff *skb, u16 num_lro)
+{
+ struct ethhdr *eth = (struct ethhdr *)(skb->data);
+ __be16 proto;
+
+ proto = __vlan_get_protocol(skb, eth->h_proto, NULL);
+
+ skb_shinfo(skb)->gso_size = (u16)DIV_ROUND_UP((skb->len - skb_headlen(skb)), num_lro);
+ skb_shinfo(skb)->gso_type = (proto == htons(ETH_P_IP)) ? SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
+ skb_shinfo(skb)->gso_segs = num_lro;
+}
+
+#ifdef HAVE_XDP_SUPPORT
+enum hinic3_xdp_status {
+ // bpf_prog status
+ HINIC3_XDP_PROG_EMPTY,
+ // pkt action
+ HINIC3_XDP_PKT_PASS,
+ HINIC3_XDP_PKT_DROP,
+};
+
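+/* recycle the rx buffers of a dropped XDP packet where possible and advance the consumer index */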
+static void update_drop_rx_info(struct hinic3_rxq *rxq, u16 weqbb_num)
+{
+ struct hinic3_rx_info *rx_info = NULL;
+
+ while (weqbb_num) {
+ rx_info = &rxq->rx_info[rxq->cons_idx & rxq->q_mask];
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool)
+ goto discard_direct;
+#endif
+ if (likely(page_to_nid(rx_info->page) == numa_node_id()))
+ hinic3_reuse_rx_page(rxq, rx_info);
+ goto discard_direct;
+
+discard_direct:
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+ rxq->cons_idx++;
+ rxq->delta++;
+
+ weqbb_num--;
+ }
+}
+
+int hinic3_run_xdp(struct hinic3_rxq *rxq, u32 pkt_len, struct xdp_buff *xdp)
+{
+ struct bpf_prog *xdp_prog = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ int result = HINIC3_XDP_PKT_PASS;
+ u16 weqbb_num = 1; /* xdp can only use one rx_buff */
+ u8 *va = NULL;
+ u32 act;
+
+ rcu_read_lock();
+ xdp_prog = READ_ONCE(rxq->xdp_prog);
+ if (!xdp_prog) {
+ result = HINIC3_XDP_PROG_EMPTY;
+ goto unlock_rcu;
+ }
+
+ if (unlikely(pkt_len > rxq->buf_len)) {
+ RXQ_STATS_INC(rxq, xdp_large_pkt);
+ weqbb_num = HINIC3_GET_SGE_NUM(pkt_len, rxq);
+ result = HINIC3_XDP_PKT_DROP;
+ goto xdp_out;
+ }
+
+ rx_info = &rxq->rx_info[rxq->cons_idx & rxq->q_mask];
+ va = (u8 *)page_address(rx_info->page) + rx_info->page_offset;
+ prefetch(va);
+ dma_sync_single_range_for_cpu(rxq->dev, rx_info->buf_dma_addr,
+ rx_info->page_offset,
+ rxq->buf_len, DMA_FROM_DEVICE);
+ xdp->data = va;
+ xdp->data_hard_start = xdp->data;
+ xdp->data_end = xdp->data + pkt_len;
+#ifdef HAVE_XDP_FRAME_SZ
+ xdp->frame_sz = rxq->buf_len;
+#endif
+#ifdef HAVE_XDP_DATA_META
+ xdp_set_data_meta_invalid(xdp);
+#endif
+ prefetchw(xdp->data_hard_start);
+ act = bpf_prog_run_xdp(xdp_prog, xdp);
+ switch (act) {
+ case XDP_PASS:
+ result = HINIC3_XDP_PKT_PASS;
+ break;
+ case XDP_DROP:
+ result = HINIC3_XDP_PKT_DROP;
+ break;
+ default:
+ result = HINIC3_XDP_PKT_DROP;
+ bpf_warn_invalid_xdp_action(act);
+ }
+
+xdp_out:
+ if (result == HINIC3_XDP_PKT_DROP) {
+ RXQ_STATS_INC(rxq, xdp_dropped);
+ update_drop_rx_info(rxq, weqbb_num);
+ }
+
+unlock_rcu:
+ rcu_read_unlock();
+
+ return result;
+}
+
+static bool hinic3_add_rx_frag_with_xdp(struct hinic3_rxq *rxq, u32 pkt_len,
+ struct hinic3_rx_info *rx_info,
+ struct sk_buff *skb,
+ struct xdp_buff *xdp)
+{
+ struct page *page = rx_info->page;
+
+ if (pkt_len <= HINIC3_RX_HDR_SIZE) {
+ __skb_put_data(skb, xdp->data, pkt_len);
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ page_pool_put_full_page(rx_info->page_pool,
+ page, false);
+ return false;
+ }
+#endif
+
+ if (likely(page_to_nid(page) == numa_node_id()))
+ return true;
+
+ put_page(page);
+ goto umap_page;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ (int)(rx_info->page_offset +
+ (xdp->data - xdp->data_hard_start)),
+ (int)pkt_len, rxq->buf_len);
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool) {
+ skb_mark_for_recycle(skb);
+ return false;
+ }
+#endif
+ if (unlikely(page_to_nid(page) != numa_node_id()))
+ goto umap_page;
+ if (unlikely(page_count(page) != 1))
+ goto umap_page;
+
+ rx_info->page_offset ^= rxq->buf_len;
+ get_page(page);
+
+ return true;
+
+umap_page:
+ dma_unmap_page(rxq->dev, rx_info->buf_dma_addr,
+ rxq->dma_rx_buff_size, DMA_FROM_DEVICE);
+ return false;
+}
+
+static struct sk_buff *hinic3_fetch_rx_buffer_xdp(struct hinic3_rxq *rxq,
+ u32 pkt_len,
+ struct xdp_buff *xdp)
+{
+ struct sk_buff *skb;
+ struct hinic3_rx_info *rx_info;
+ u32 sw_ci;
+ bool reuse;
+
+ sw_ci = rxq->cons_idx & rxq->q_mask;
+ rx_info = &rxq->rx_info[sw_ci];
+
+ skb = netdev_alloc_skb_ip_align(rxq->netdev, HINIC3_RX_HDR_SIZE);
+ if (unlikely(!skb))
+ return NULL;
+
+ reuse = hinic3_add_rx_frag_with_xdp(rxq, pkt_len, rx_info, skb, xdp);
+ if (likely(reuse))
+ hinic3_reuse_rx_page(rxq, rx_info);
+
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+
+ rxq->cons_idx += 1;
+ rxq->delta += 1;
+
+ return skb;
+}
+
+#endif
+
+static int recv_one_pkt(struct hinic3_rxq *rxq, struct hinic3_rq_cqe *rx_cqe,
+ u32 pkt_len, u32 vlan_len, u32 status)
+{
+ struct sk_buff *skb = NULL;
+ struct net_device *netdev = rxq->netdev;
+ u32 offload_type;
+ u16 num_lro;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(rxq->netdev);
+
+#ifdef HAVE_XDP_SUPPORT
+ u32 xdp_status;
+ struct xdp_buff xdp = { 0 };
+
+ xdp_status = (u32)(hinic3_run_xdp(rxq, pkt_len, &xdp));
+ if (xdp_status == HINIC3_XDP_PKT_DROP)
+ return 0;
+
+ // build skb
+ if (xdp_status != HINIC3_XDP_PROG_EMPTY) {
+ // xdp_prog configured, build skb with xdp
+ skb = hinic3_fetch_rx_buffer_xdp(rxq, pkt_len, &xdp);
+ } else {
+ // xdp_prog not configured, build skb
+ skb = hinic3_fetch_rx_buffer(rxq, pkt_len);
+ }
+#else
+
+ // xdp is not supported
+ skb = hinic3_fetch_rx_buffer(rxq, pkt_len);
+#endif
+ if (unlikely(!skb)) {
+ RXQ_STATS_INC(rxq, alloc_skb_err);
+ return -ENOMEM;
+ }
+
+ /* place header in linear portion of buffer */
+ if (skb_is_nonlinear(skb))
+ hinic3_pull_tail(skb);
+
+ offload_type = hinic3_hw_cpu32(rx_cqe->offload_type);
+ hinic3_rx_csum(rxq, offload_type, status, skb);
+
+#ifdef HAVE_SKBUFF_CSUM_LEVEL
+ hinic3_rx_gro(rxq, offload_type, skb);
+#endif
+
+#if defined(NETIF_F_HW_VLAN_CTAG_RX)
+ if ((netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+ HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type)) {
+#else
+ if ((netdev->features & NETIF_F_HW_VLAN_RX) &&
+ HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type)) {
+#endif
+ u16 vid = HINIC3_GET_RX_VLAN_TAG(vlan_len);
+
+ /* if the packet is a vlan pkt, the vid may be 0 */
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vid);
+ }
+
+ if (unlikely(test_bit(HINIC3_LP_TEST, &nic_dev->flags)))
+ hinic3_copy_lp_data(nic_dev, skb);
+
+ num_lro = HINIC3_GET_RX_NUM_LRO(status);
+ if (num_lro > 1)
+ hinic3_lro_set_gso_params(skb, num_lro);
+
+ skb_record_rx_queue(skb, rxq->q_id);
+ skb->protocol = eth_type_trans(skb, netdev);
+
+ if (skb_has_frag_list(skb)) {
+#ifdef HAVE_NAPI_GRO_FLUSH_OLD
+ napi_gro_flush(&rxq->irq_cfg->napi, false);
+#else
+ napi_gro_flush(&rxq->irq_cfg->napi);
+#endif
+ netif_receive_skb(skb);
+ } else {
+ napi_gro_receive(&rxq->irq_cfg->napi, skb);
+ }
+
+ return 0;
+}
+
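+/* per-segment L2..L4 header length added back to the rx byte count for LRO-coalesced packets */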
+#define LRO_PKT_HDR_LEN_IPV4 66
+#define LRO_PKT_HDR_LEN_IPV6 86
+#define LRO_PKT_HDR_LEN(cqe) \
+ (HINIC3_GET_RX_IP_TYPE(hinic3_hw_cpu32((cqe)->offload_type)) == \
+ HINIC3_RX_IPV6_PKT ? LRO_PKT_HDR_LEN_IPV6 : LRO_PKT_HDR_LEN_IPV4)
+
+int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(rxq->netdev);
+ u32 sw_ci, status, pkt_len, vlan_len, dropped = 0;
+ struct hinic3_rq_cqe *rx_cqe = NULL;
+ u64 rx_bytes = 0;
+ u16 num_lro;
+ int pkts = 0, nr_pkts = 0;
+ u16 num_wqe = 0;
+
+ while (likely(pkts < budget)) {
+ sw_ci = rxq->cons_idx & rxq->q_mask;
+ rx_cqe = rxq->rx_info[sw_ci].cqe;
+ status = hinic3_hw_cpu32(rx_cqe->status);
+ if (!HINIC3_GET_RX_DONE(status))
+ break;
+
+ /* make sure we read rx_done before packet length */
+ rmb();
+
+ vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len);
+ pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len);
+ if (recv_one_pkt(rxq, rx_cqe, pkt_len, vlan_len, status))
+ break;
+
+ rx_bytes += pkt_len;
+ pkts++;
+ nr_pkts++;
+
+ num_lro = HINIC3_GET_RX_NUM_LRO(status);
+ if (num_lro) {
+ rx_bytes += ((num_lro - 1) * LRO_PKT_HDR_LEN(rx_cqe));
+
+ num_wqe += HINIC3_GET_SGE_NUM(pkt_len, rxq);
+ }
+
+ rx_cqe->status = 0;
+
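+ /* stop polling early once LRO has consumed enough wqebbs so the ring is replenished sooner */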
+ if (num_wqe >= nic_dev->lro_replenish_thld)
+ break;
+ }
+
+ if (rxq->delta >= HINIC3_RX_BUFFER_WRITE)
+ hinic3_rx_fill_buffers(rxq);
+
+ u64_stats_update_begin(&rxq->rxq_stats.syncp);
+ rxq->rxq_stats.packets += (u64)(u32)nr_pkts;
+ rxq->rxq_stats.bytes += rx_bytes;
+ rxq->rxq_stats.dropped += (u64)dropped;
+ u64_stats_update_end(&rxq->rxq_stats.syncp);
+ return pkts;
+}
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+static struct page_pool *hinic3_create_page_pool(struct hinic3_nic_dev *nic_dev,
+ u32 rq_depth,
+ struct hinic3_rx_info *rx_info_arr)
+{
+ struct page_pool_params pp_params = {
+ .flags = PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG |
+ PP_FLAG_DMA_SYNC_DEV,
+ .order = nic_dev->page_order,
+ .pool_size = rq_depth * nic_dev->rx_buff_len /
+ (PAGE_SIZE << nic_dev->page_order),
+ .nid = dev_to_node(&(nic_dev->pdev->dev)),
+ .dev = &(nic_dev->pdev->dev),
+ .dma_dir = DMA_FROM_DEVICE,
+ .offset = 0,
+ .max_len = PAGE_SIZE << nic_dev->page_order,
+ };
+ struct page_pool *page_pool;
+ int i;
+
+ page_pool = nic_dev->page_pool_enabled ?
+ page_pool_create(&pp_params) : NULL;
+ for (i = 0; i < rq_depth; i++)
+ rx_info_arr[i].page_pool = page_pool;
+ return page_pool;
+}
+#endif
+
+int hinic3_alloc_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res)
+{
+ struct hinic3_dyna_rxq_res *rqres = NULL;
+ u64 cqe_mem_size = sizeof(struct hinic3_rq_cqe) * rq_depth;
+ int idx;
+ u32 pkts;
+ u64 size;
+
+ for (idx = 0; idx < num_rq; idx++) {
+ rqres = &rxqs_res[idx];
+ size = sizeof(*rqres->rx_info) * rq_depth;
+ rqres->rx_info = kzalloc(size, GFP_KERNEL);
+ if (!rqres->rx_info) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxq%d rx info\n", idx);
+ goto err_alloc_rx_info;
+ }
+ rqres->cqe_start_vaddr =
+ dma_zalloc_coherent(&nic_dev->pdev->dev, cqe_mem_size,
+ &rqres->cqe_start_paddr,
+ GFP_KERNEL);
+ if (!rqres->cqe_start_vaddr) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxq%d cqe\n", idx);
+ goto err_alloc_cqe;
+ }
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ rqres->page_pool = hinic3_create_page_pool(nic_dev, rq_depth,
+ rqres->rx_info);
+ if (nic_dev->page_pool_enabled && !rqres->page_pool) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to create rxq%d page pool\n", idx);
+ goto err_create_page_pool;
+ }
+#endif
+ pkts = hinic3_rx_alloc_buffers(nic_dev, rq_depth,
+ rqres->rx_info);
+ if (!pkts) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxq%d rx buffers\n", idx);
+ goto err_alloc_buffers;
+ }
+ rqres->next_to_alloc = (u16)pkts;
+ }
+ return 0;
+
+err_alloc_buffers:
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ page_pool_destroy(rqres->page_pool);
+err_create_page_pool:
+#endif
+ dma_free_coherent(&nic_dev->pdev->dev, cqe_mem_size,
+ rqres->cqe_start_vaddr,
+ rqres->cqe_start_paddr);
+err_alloc_cqe:
+ kfree(rqres->rx_info);
+err_alloc_rx_info:
+ hinic3_free_rxqs_res(nic_dev, idx, rq_depth, rxqs_res);
+ return -ENOMEM;
+}
+
+void hinic3_free_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res)
+{
+ struct hinic3_dyna_rxq_res *rqres = NULL;
+ u64 cqe_mem_size = sizeof(struct hinic3_rq_cqe) * rq_depth;
+ int idx;
+
+ for (idx = 0; idx < num_rq; idx++) {
+ rqres = &rxqs_res[idx];
+
+ hinic3_rx_free_buffers(nic_dev, rq_depth, rqres->rx_info);
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rqres->page_pool)
+ page_pool_destroy(rqres->page_pool);
+#endif
+ dma_free_coherent(&nic_dev->pdev->dev, cqe_mem_size,
+ rqres->cqe_start_vaddr,
+ rqres->cqe_start_paddr);
+ kfree(rqres->rx_info);
+ }
+}
+
+int hinic3_configure_rxqs(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res)
+{
+ struct hinic3_dyna_rxq_res *rqres = NULL;
+ struct irq_info *msix_entry = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ struct hinic3_rq_cqe *cqe_va = NULL;
+ dma_addr_t cqe_pa;
+ u16 q_id;
+ u32 idx;
+ u32 pkts;
+
+ nic_dev->rxq_get_err_times = 0;
+ for (q_id = 0; q_id < num_rq; q_id++) {
+ rxq = &nic_dev->rxqs[q_id];
+ rqres = &rxqs_res[q_id];
+ msix_entry = &nic_dev->qps_irq_info[q_id];
+
+ rxq->irq_id = msix_entry->irq_id;
+ rxq->msix_entry_idx = msix_entry->msix_entry_idx;
+ rxq->next_to_update = 0;
+ rxq->next_to_alloc = rqres->next_to_alloc;
+ rxq->q_depth = rq_depth;
+ rxq->delta = rxq->q_depth;
+ rxq->q_mask = rxq->q_depth - 1;
+ rxq->cons_idx = 0;
+
+ rxq->last_sw_pi = rxq->q_depth - 1;
+ rxq->last_sw_ci = 0;
+ rxq->last_hw_ci = 0;
+ rxq->rx_check_err_cnt = 0;
+ rxq->rxq_print_times = 0;
+ rxq->last_packets = 0;
+ rxq->restore_buf_num = 0;
+
+ rxq->rx_info = rqres->rx_info;
+
+ /* fill cqe */
+ cqe_va = (struct hinic3_rq_cqe *)rqres->cqe_start_vaddr;
+ cqe_pa = rqres->cqe_start_paddr;
+ for (idx = 0; idx < rq_depth; idx++) {
+ rxq->rx_info[idx].cqe = cqe_va;
+ rxq->rx_info[idx].cqe_dma = cqe_pa;
+ cqe_va++;
+ cqe_pa += sizeof(*rxq->rx_info->cqe);
+ }
+
+ rxq->rq = hinic3_get_nic_queue(nic_dev->hwdev, rxq->q_id,
+ HINIC3_RQ);
+ if (!rxq->rq) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to get rq\n");
+ return -EINVAL;
+ }
+
+ pkts = hinic3_rx_fill_wqe(rxq);
+ if (pkts != rxq->q_depth) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to fill rx wqe\n");
+ return -EFAULT;
+ }
+
+ pkts = hinic3_rx_fill_buffers(rxq);
+ if (!pkts) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to fill Rx buffer\n");
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+void hinic3_free_rxqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ kfree(nic_dev->rxqs);
+ nic_dev->rxqs = NULL;
+}
+
+int hinic3_alloc_rxqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct hinic3_rxq *rxq = NULL;
+ u16 num_rxqs = nic_dev->max_qps;
+ u16 q_id;
+ u64 rxq_size;
+
+ rxq_size = num_rxqs * sizeof(*nic_dev->rxqs);
+ if (!rxq_size) {
+ nic_err(&pdev->dev, "Cannot allocate zero size rxqs\n");
+ return -EINVAL;
+ }
+
+ nic_dev->rxqs = kzalloc(rxq_size, GFP_KERNEL);
+ if (!nic_dev->rxqs) {
+ nic_err(&pdev->dev, "Failed to allocate rxqs\n");
+ return -ENOMEM;
+ }
+
+ for (q_id = 0; q_id < num_rxqs; q_id++) {
+ rxq = &nic_dev->rxqs[q_id];
+ rxq->netdev = netdev;
+ rxq->dev = &pdev->dev;
+ rxq->q_id = q_id;
+ rxq->buf_len = nic_dev->rx_buff_len;
+ rxq->rx_buff_shift = (u32)ilog2(nic_dev->rx_buff_len);
+ rxq->dma_rx_buff_size = nic_dev->dma_rx_buff_size;
+ rxq->q_depth = nic_dev->q_params.rq_depth;
+ rxq->q_mask = nic_dev->q_params.rq_depth - 1;
+
+ rxq_stats_init(rxq);
+ }
+
+ return 0;
+}
+
+int hinic3_rx_configure(struct net_device *netdev, u8 dcb_en)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 rq2iq_map[HINIC3_MAX_NUM_RQ];
+ int err;
+
+ /* Map all rqs to all iqs by default */
+
+ memset(rq2iq_map, 0xFF, sizeof(rq2iq_map));
+
+ if (test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ err = hinic3_rss_init(nic_dev, rq2iq_map, sizeof(rq2iq_map), dcb_en);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to init rss\n");
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+void hinic3_rx_remove_configure(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags))
+ hinic3_rss_deinit(nic_dev);
+}
+
+int rxq_restore(struct hinic3_nic_dev *nic_dev, u16 q_id, u16 hw_ci)
+{
+ struct hinic3_rxq *rxq = &nic_dev->rxqs[q_id];
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ dma_addr_t dma_addr;
+ u32 free_wqebbs = rxq->delta - rxq->restore_buf_num;
+ u32 buff_pi;
+ u32 i;
+ int err;
+
+ if (rxq->delta < rxq->restore_buf_num)
+ return -EINVAL;
+
+ if (rxq->restore_buf_num == 0) /* start restore process */
+ rxq->restore_pi = rxq->next_to_update;
+
+ buff_pi = rxq->restore_pi;
+
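+ /* descriptive note: the check below verifies that the distance from
+ * sw pi (next_to_update) to sw ci still equals delta before refilling
+ */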
+ if ((((rxq->cons_idx & rxq->q_mask) + rxq->q_depth -
+ rxq->next_to_update) % rxq->q_depth) != rxq->delta)
+ return -EINVAL;
+
+ for (i = 0; i < free_wqebbs; i++) {
+ rx_info = &rxq->rx_info[buff_pi];
+
+ if (unlikely(!rx_alloc_mapped_page(nic_dev, rx_info))) {
+ RXQ_STATS_INC(rxq, alloc_rx_buf_err);
+ rxq->restore_pi = (u16)((rxq->restore_pi + i) & rxq->q_mask);
+ return -ENOMEM;
+ }
+
+ dma_addr = rx_info->buf_dma_addr + rx_info->page_offset;
+
+ rq_wqe = rx_info->rq_wqe;
+
+ if (rxq->rq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ rq_wqe->extend_wqe.buf_desc.sge.hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->extend_wqe.buf_desc.sge.lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ } else {
+ rq_wqe->normal_wqe.buf_hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->normal_wqe.buf_lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ }
+ buff_pi = (u16)((buff_pi + 1) & rxq->q_mask);
+ rxq->restore_buf_num++;
+ }
+
+ nic_info(&nic_dev->pdev->dev, "rxq %u restore_buf_num:%u\n", q_id, rxq->restore_buf_num);
+
+ rx_info = &rxq->rx_info[(hw_ci + rxq->q_depth - 1) & rxq->q_mask];
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ if (rx_info->page_pool && rx_info->page) {
+ page_pool_put_full_page(rx_info->page_pool,
+ rx_info->page, false);
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+ goto reset_rxq;
+ }
+#endif
+ if (rx_info->buf_dma_addr) {
+ dma_unmap_page(&nic_dev->pdev->dev, rx_info->buf_dma_addr,
+ nic_dev->dma_rx_buff_size, DMA_FROM_DEVICE);
+ rx_info->buf_dma_addr = 0;
+ }
+
+ if (rx_info->page) {
+ __free_pages(rx_info->page, nic_dev->page_order);
+ rx_info->page = NULL;
+ }
+
+reset_rxq:
+ rxq->delta = 1;
+ rxq->next_to_update = (u16)((hw_ci + rxq->q_depth - 1) & rxq->q_mask);
+ rxq->cons_idx = (u16)((rxq->next_to_update + 1) & rxq->q_mask);
+ rxq->restore_buf_num = 0;
+ rxq->next_to_alloc = rxq->next_to_update;
+
+ for (i = 0; i < rxq->q_depth; i++) {
+ if (!HINIC3_GET_RX_DONE(hinic3_hw_cpu32(rxq->rx_info[i].cqe->status)))
+ continue;
+
+ RXQ_STATS_INC(rxq, restore_drop_sge);
+ rxq->rx_info[i].cqe->status = 0;
+ }
+
+ err = hinic3_cache_out_qps_res(nic_dev->hwdev);
+ if (err) {
+ clear_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags);
+ return err;
+ }
+
+ hinic3_write_db(rxq->rq, rxq->q_id & (NIC_DCB_COS_MAX - 1),
+ RQ_CFLAG_DP,
+ (u16)((u32)rxq->next_to_update << rxq->rq->wqe_type));
+
+ return 0;
+}
+
+bool rxq_is_normal(struct hinic3_rxq *rxq, struct rxq_check_info rxq_info)
+{
+ u32 status;
+
+ if (rxq->rxq_stats.packets != rxq->last_packets || rxq_info.hw_pi != rxq_info.hw_ci ||
+ rxq_info.hw_ci != rxq->last_hw_ci || rxq->next_to_update != rxq->last_sw_pi)
+ return true;
+
+ /* hw has no rx wqe to consume and the driver has received no packets */
+ status = rxq->rx_info[rxq->cons_idx & rxq->q_mask].cqe->status;
+ if (HINIC3_GET_RX_DONE(hinic3_hw_cpu32(status)))
+ return true;
+
+ if ((rxq->cons_idx & rxq->q_mask) != rxq->last_sw_ci ||
+ rxq->rxq_stats.packets != rxq->last_packets ||
+ rxq->next_to_update != rxq_info.hw_pi)
+ return true;
+
+ return false;
+}
+
+#define RXQ_CHECK_ERR_TIMES 2
+#define RXQ_PRINT_MAX_TIMES 3
+#define RXQ_GET_ERR_MAX_TIMES 3
+void hinic3_rxq_check_work_handler(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_nic_dev *nic_dev = container_of(delay, struct hinic3_nic_dev,
+ rxq_check_work);
+ struct rxq_check_info *rxq_info = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ u64 size;
+ u16 qid;
+ int err;
+
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags))
+ return;
+
+ if (test_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ queue_delayed_work(nic_dev->workq, &nic_dev->rxq_check_work, HZ);
+
+ size = sizeof(*rxq_info) * nic_dev->q_params.num_qps;
+ if (!size)
+ return;
+
+ rxq_info = kzalloc(size, GFP_KERNEL);
+ if (!rxq_info)
+ return;
+
+ err = hinic3_get_rxq_hw_info(nic_dev->hwdev, rxq_info, nic_dev->q_params.num_qps,
+ nic_dev->rxqs[0].rq->wqe_type);
+ if (err) {
+ nic_dev->rxq_get_err_times++;
+ if (nic_dev->rxq_get_err_times >= RXQ_GET_ERR_MAX_TIMES)
+ clear_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags);
+ goto free_rxq_info;
+ }
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ rxq = &nic_dev->rxqs[qid];
+ if (!rxq_is_normal(rxq, rxq_info[qid])) {
+ rxq->rx_check_err_cnt++;
+ if (rxq->rx_check_err_cnt < RXQ_CHECK_ERR_TIMES)
+ continue;
+
+ if (rxq->rxq_print_times <= RXQ_PRINT_MAX_TIMES) {
+ nic_warn(&nic_dev->pdev->dev, "rxq %u wqe abnormal, hw_pi:%u, hw_ci:%u, sw_pi:%u, sw_ci:%u delta:%u\n",
+ qid, rxq_info[qid].hw_pi, rxq_info[qid].hw_ci,
+ rxq->next_to_update,
+ rxq->cons_idx & rxq->q_mask, rxq->delta);
+ rxq->rxq_print_times++;
+ }
+
+ err = rxq_restore(nic_dev, qid, rxq_info[qid].hw_ci);
+ if (err)
+ continue;
+ }
+
+ rxq->rxq_print_times = 0;
+ rxq->rx_check_err_cnt = 0;
+ rxq->last_sw_pi = rxq->next_to_update;
+ rxq->last_sw_ci = rxq->cons_idx & rxq->q_mask;
+ rxq->last_hw_ci = rxq_info[qid].hw_ci;
+ rxq->last_packets = rxq->rxq_stats.packets;
+ }
+
+ nic_dev->rxq_get_err_times = 0;
+
+free_rxq_info:
+ kfree(rxq_info);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
new file mode 100644
index 0000000..7dd4618
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_RX_H
+#define HINIC3_RX_H
+
+#ifdef HAVE_PAGE_POOL_SUPPORT
+#include <net/page_pool.h>
+#endif
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/mm_types.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/u64_stats_sync.h>
+
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_dev.h"
+
+/* rx cqe checksum err */
+#define HINIC3_RX_CSUM_IP_CSUM_ERR BIT(0)
+#define HINIC3_RX_CSUM_TCP_CSUM_ERR BIT(1)
+#define HINIC3_RX_CSUM_UDP_CSUM_ERR BIT(2)
+#define HINIC3_RX_CSUM_IGMP_CSUM_ERR BIT(3)
+#define HINIC3_RX_CSUM_ICMPV4_CSUM_ERR BIT(4)
+#define HINIC3_RX_CSUM_ICMPV6_CSUM_ERR BIT(5)
+#define HINIC3_RX_CSUM_SCTP_CRC_ERR BIT(6)
+#define HINIC3_RX_CSUM_HW_CHECK_NONE BIT(7)
+#define HINIC3_RX_CSUM_IPSU_OTHER_ERR BIT(8)
+
+#define HINIC3_HEADER_DATA_UNIT 2
+#define HINIC3_CQE_LEN 32
+
+struct hinic3_rxq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 errors;
+ u64 csum_errors;
+ u64 other_errors;
+ u64 dropped;
+ u64 xdp_dropped;
+ u64 rx_buf_empty;
+ u64 alloc_skb_err;
+ u64 alloc_rx_buf_err;
+ u64 xdp_large_pkt;
+ u64 restore_drop_sge;
+ u64 rsvd2;
+#ifdef HAVE_NDO_GET_STATS64
+ struct u64_stats_sync syncp;
+#else
+ struct u64_stats_sync_empty syncp;
+#endif
+};
+
+struct hinic3_rx_info {
+ dma_addr_t buf_dma_addr;
+
+ struct hinic3_rq_cqe *cqe;
+ dma_addr_t cqe_dma;
+ struct page *page;
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ struct page_pool *page_pool;
+#endif
+ u32 page_offset;
+ u32 rsvd1;
+ struct hinic3_rq_wqe *rq_wqe;
+ struct sk_buff *saved_skb;
+ u32 skb_len;
+ u32 rsvd2;
+};
+
+struct hinic3_rxq {
+ struct net_device *netdev;
+
+ u16 q_id;
+ u16 rsvd1;
+ u32 q_depth;
+ u32 q_mask;
+
+ u16 buf_len;
+ u16 rsvd2;
+ u32 rx_buff_shift;
+ u32 dma_rx_buff_size;
+
+ struct hinic3_rxq_stats rxq_stats;
+ u32 cons_idx;
+ u32 delta;
+
+ u32 irq_id;
+ u16 msix_entry_idx;
+ u16 rsvd3;
+
+ struct hinic3_rx_info *rx_info;
+ struct hinic3_io_queue *rq;
+#ifdef HAVE_XDP_SUPPORT
+ struct bpf_prog *xdp_prog;
+#endif
+
+ struct hinic3_irq *irq_cfg;
+ u16 next_to_alloc;
+ u16 next_to_update;
+ struct device *dev; /* device for DMA mapping */
+
+ u64 status;
+ dma_addr_t cqe_start_paddr;
+ void *cqe_start_vaddr;
+
+ u64 last_moder_packets;
+ u64 last_moder_bytes;
+ u8 last_coalesc_timer_cfg;
+ u8 last_pending_limt;
+ u16 restore_buf_num;
+ u32 rsvd5;
+ u64 rsvd6;
+
+ u32 last_sw_pi;
+ u32 last_sw_ci;
+
+ u32 last_hw_ci;
+ u8 rx_check_err_cnt;
+ u8 rxq_print_times;
+ u16 restore_pi;
+
+ u64 last_packets;
+} ____cacheline_aligned;
+
+struct hinic3_dyna_rxq_res {
+ u16 next_to_alloc;
+ struct hinic3_rx_info *rx_info;
+ dma_addr_t cqe_start_paddr;
+ void *cqe_start_vaddr;
+#ifdef HAVE_PAGE_POOL_SUPPORT
+ struct page_pool *page_pool;
+#endif
+};
+
+int hinic3_alloc_rxqs(struct net_device *netdev);
+
+void hinic3_free_rxqs(struct net_device *netdev);
+
+int hinic3_alloc_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res);
+
+void hinic3_free_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res);
+
+int hinic3_configure_rxqs(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res);
+
+int hinic3_rx_configure(struct net_device *netdev, u8 dcb_en);
+
+void hinic3_rx_remove_configure(struct net_device *netdev);
+
+int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget);
+
+void hinic3_rxq_get_stats(struct hinic3_rxq *rxq,
+ struct hinic3_rxq_stats *stats);
+
+void hinic3_rxq_clean_stats(struct hinic3_rxq_stats *rxq_stats);
+
+void hinic3_rxq_check_work_handler(struct work_struct *work);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h b/drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h
new file mode 100644
index 0000000..051f05d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h
@@ -0,0 +1,220 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2018-2022. All rights reserved.
+ * @file hinic3_srv_nic.h
+ * @details nic service interface
+ * History :
+ * 1.Date : 2018/3/8
+ * Modification: Created file
+ */
+
+#ifndef HINIC3_SRV_NIC_H
+#define HINIC3_SRV_NIC_H
+
+#include <linux/netdevice.h>
+#include "nic_mpu_cmd_defs.h"
+#include "mag_mpu_cmd.h"
+#include "mag_mpu_cmd_defs.h"
+#include "hinic3_lld.h"
+
+enum hinic3_queue_type {
+ HINIC3_SQ,
+ HINIC3_RQ,
+ HINIC3_MAX_QUEUE_TYPE
+};
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_netdev(struct net_device *netdev);
+struct net_device *hinic3_get_netdev_by_lld(struct hinic3_lld_dev *lld_dev);
+
+struct hinic3_event_link_info {
+ u8 valid;
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+};
+
+enum link_err_type {
+ LINK_ERR_MODULE_UNRECOGENIZED,
+ LINK_ERR_NUM,
+};
+
+enum port_module_event_type {
+ HINIC3_PORT_MODULE_CABLE_PLUGGED,
+ HINIC3_PORT_MODULE_CABLE_UNPLUGGED,
+ HINIC3_PORT_MODULE_LINK_ERR,
+ HINIC3_PORT_MODULE_MAX_EVENT,
+};
+
+struct hinic3_port_module_event {
+ enum port_module_event_type type;
+ enum link_err_type err_type;
+};
+
+struct hinic3_dcb_info {
+ u8 dcb_on;
+ u8 default_cos;
+ u8 up_cos[NIC_DCB_COS_MAX];
+};
+
+enum hinic3_nic_event_type {
+ EVENT_NIC_LINK_DOWN,
+ EVENT_NIC_LINK_UP,
+ EVENT_NIC_PORT_MODULE_EVENT,
+ EVENT_NIC_DCB_STATE_CHANGE,
+ EVENT_NIC_BOND_DOWN,
+ EVENT_NIC_BOND_UP,
+ EVENT_NIC_OUTBAND_CFG,
+};
+
+/* *
+ * @brief hinic3_set_mac - set mac address
+ * @param hwdev: device pointer to hwdev
+ * @param mac_addr: mac address to set
+ * @param vlan_id: vlan id
+ * @param func_id: function index
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id, u16 channel);
+
+/* *
+ * @brief hinic3_del_mac - delete mac address
+ * @param hwdev: device pointer to hwdev
+ * @param mac_addr: mac address to delete
+ * @param vlan_id: vlan id
+ * @param func_id: function index
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id, u16 channel);
+
+/* *
+ * @brief hinic3_set_vport_enable - set function valid status
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: global function index
+ * @param enable: 0-disable, 1-enable
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vport_enable(void *hwdev, u16 func_id, bool enable, u16 channel);
+
+/* *
+ * @brief hinic3_set_port_enable - set port status
+ * @param hwdev: device pointer to hwdev
+ * @param enable: 0-disable, 1-enable
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_port_enable(void *hwdev, bool enable, u16 channel);
+
+/* *
+ * @brief hinic3_flush_qps_res - flush queue pairs resource in hardware
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_flush_qps_res(void *hwdev);
+
+/* *
+ * @brief hinic3_cache_out_qps_res - cache out queue pairs wqe resource in hardware
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cache_out_qps_res(void *hwdev);
+
+/* *
+ * @brief hinic3_init_nic_hwdev - init nic hwdev
+ * @param hwdev: device pointer to hwdev
+ * @param pcidev_hdl: pointer to pcidev or handler
+ * @param dev_hdl: pointer to pcidev->dev or handler, for sdk_err() or
+ * dma_alloc()
+ * @param rx_buff_len: receive buffer length
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_init_nic_hwdev(void *hwdev, void *pcidev_hdl, void *dev_hdl, u16 rx_buff_len);
+
+/* *
+ * @brief hinic3_free_nic_hwdev - free nic hwdev
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+void hinic3_free_nic_hwdev(void *hwdev);
+
+/* *
+ * @brief hinic3_get_speed - get link speed
+ * @param hwdev: device pointer to hwdev
+ * @param speed: link speed
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_speed(void *hwdev, enum mag_cmd_port_speed *speed, u16 channel);
+
+int hinic3_get_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state);
+
+int hinic3_get_pf_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state);
+
+int hinic3_get_cos_by_pri(void *hwdev, u8 pri, u8 *cos);
+
+/* *
+ * @brief hinic3_create_qps - create queue pairs
+ * @param hwdev: device pointer to hwdev
+ * @param num_qp: number of queue pairs
+ * @param sq_depth: sq depth
+ * @param rq_depth: rq depth
+ * @param qps_msix_arry: msix info
+ * @retval zero: success
+ * @retval non-zero: failure
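+ *
+ * Example (illustrative only; the caller fields are assumed from this driver):
+ * err = hinic3_create_qps(nic_dev->hwdev, nic_dev->q_params.num_qps,
+ * nic_dev->q_params.sq_depth, nic_dev->q_params.rq_depth,
+ * nic_dev->qps_irq_info);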
+ */
+int hinic3_create_qps(void *hwdev, u16 num_qp, u32 sq_depth, u32 rq_depth,
+ struct irq_info *qps_msix_arry);
+
+/* *
+ * @brief hinic3_destroy_qps - destroy queue pairs
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_destroy_qps(void *hwdev);
+
+/* *
+ * @brief hinic3_get_nic_queue - get nic queue
+ * @param hwdev: device pointer to hwdev
+ * @param q_id: queue index
+ * @param q_type: queue type
+ * @retval queue address
+ */
+void *hinic3_get_nic_queue(void *hwdev, u16 q_id, enum hinic3_queue_type q_type);
+
+/* *
+ * @brief hinic3_init_qp_ctxts - init queue pair context
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_init_qp_ctxts(void *hwdev);
+
+/* *
+ * @brief hinic3_free_qp_ctxts - free queue pairs
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_qp_ctxts(void *hwdev);
+
+/* *
+ * @brief hinic3_pf_set_vf_link_state - pf set vf link state
+ * @param hwdev: device pointer to hwdev
+ * @param vf_link_forced: set link forced
+ * @param link_state: link state to set; valid only when vf_link_forced is true
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_pf_set_vf_link_state(void *hwdev, bool vf_link_forced, bool link_state);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c
new file mode 100644
index 0000000..d3f8696
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c
@@ -0,0 +1,1051 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <net/xfrm.h>
+#include <linux/netdevice.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/tcp.h>
+#include <linux/sctp.h>
+#include <linux/dma-mapping.h>
+#include <linux/types.h>
+#include <linux/u64_stats_sync.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+
+#define MIN_SKB_LEN 32
+
+#define MAX_PAYLOAD_OFFSET 221
+
+#define NIC_QID(q_id, nic_dev) ((q_id) & ((nic_dev)->num_qps - 1))
+
+#define HINIC3_TX_TASK_WRAPPED 1
+#define HINIC3_TX_BD_DESC_WRAPPED 2
+
+#define TXQ_STATS_INC(txq, field) \
+do { \
+ u64_stats_update_begin(&(txq)->txq_stats.syncp); \
+ (txq)->txq_stats.field++; \
+ u64_stats_update_end(&(txq)->txq_stats.syncp); \
+} while (0)
+
+void hinic3_txq_get_stats(struct hinic3_txq *txq,
+ struct hinic3_txq_stats *stats)
+{
+ struct hinic3_txq_stats *txq_stats = &txq->txq_stats;
+ unsigned int start;
+
+ u64_stats_update_begin(&stats->syncp);
+ do {
+ start = u64_stats_fetch_begin(&txq_stats->syncp);
+ stats->bytes = txq_stats->bytes;
+ stats->packets = txq_stats->packets;
+ stats->busy = txq_stats->busy;
+ stats->wake = txq_stats->wake;
+ stats->dropped = txq_stats->dropped;
+ } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+ u64_stats_update_end(&stats->syncp);
+}
+
+void hinic3_txq_clean_stats(struct hinic3_txq_stats *txq_stats)
+{
+ u64_stats_update_begin(&txq_stats->syncp);
+ txq_stats->bytes = 0;
+ txq_stats->packets = 0;
+ txq_stats->busy = 0;
+ txq_stats->wake = 0;
+ txq_stats->dropped = 0;
+
+ txq_stats->skb_pad_err = 0;
+ txq_stats->frag_len_overflow = 0;
+ txq_stats->offload_cow_skb_err = 0;
+ txq_stats->map_frag_err = 0;
+ txq_stats->unknown_tunnel_pkt = 0;
+ txq_stats->frag_size_err = 0;
+ txq_stats->rsvd1 = 0;
+ txq_stats->rsvd2 = 0;
+ u64_stats_update_end(&txq_stats->syncp);
+}
+
+static void txq_stats_init(struct hinic3_txq *txq)
+{
+ struct hinic3_txq_stats *txq_stats = &txq->txq_stats;
+
+ u64_stats_init(&txq_stats->syncp);
+ hinic3_txq_clean_stats(txq_stats);
+}
+
+static inline void hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs,
+ dma_addr_t addr, u32 len)
+{
+ buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr));
+ buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr));
+ buf_descs->len = hinic3_hw_be32(len);
+}
+
+static int tx_map_skb(struct hinic3_nic_dev *nic_dev, struct sk_buff *skb,
+ u16 valid_nr_frags, struct hinic3_txq *txq,
+ struct hinic3_tx_info *tx_info,
+ struct hinic3_sq_wqe_combo *wqe_combo)
+{
+ struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->ctrl_bd0;
+ struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head;
+ struct hinic3_dma_info *dma_info = tx_info->dma_info;
+ struct pci_dev *pdev = nic_dev->pdev;
+ skb_frag_t *frag = NULL;
+ u32 j, i;
+ int err;
+
+ dma_info[0].dma = dma_map_single(&pdev->dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
+ if (dma_mapping_error(&pdev->dev, dma_info[0].dma)) {
+ TXQ_STATS_INC(txq, map_frag_err);
+ return -EFAULT;
+ }
+
+ dma_info[0].len = skb_headlen(skb);
+
+ wqe_desc->hi_addr = hinic3_hw_be32(upper_32_bits(dma_info[0].dma));
+ wqe_desc->lo_addr = hinic3_hw_be32(lower_32_bits(dma_info[0].dma));
+
+ wqe_desc->ctrl_len = dma_info[0].len;
+
+ for (i = 0; i < valid_nr_frags;) {
+ frag = &(skb_shinfo(skb)->frags[i]);
+ if (unlikely(i == wqe_combo->first_bds_num))
+ buf_desc = wqe_combo->bds_sec2;
+
+ i++;
+ dma_info[i].dma = skb_frag_dma_map(&pdev->dev, frag, 0,
+ skb_frag_size(frag),
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(&pdev->dev, dma_info[i].dma)) {
+ TXQ_STATS_INC(txq, map_frag_err);
+ i--;
+ err = -EFAULT;
+ goto frag_map_err;
+ }
+ dma_info[i].len = skb_frag_size(frag);
+
+ hinic3_set_buf_desc(buf_desc, dma_info[i].dma,
+ dma_info[i].len);
+ buf_desc++;
+ }
+
+ return 0;
+
+frag_map_err:
+ for (j = 0; j < i;) {
+ j++;
+ dma_unmap_page(&pdev->dev, dma_info[j].dma,
+ dma_info[j].len, DMA_TO_DEVICE);
+ }
+ dma_unmap_single(&pdev->dev, dma_info[0].dma, dma_info[0].len,
+ DMA_TO_DEVICE);
+ return err;
+}
+
+static inline void tx_unmap_skb(struct hinic3_nic_dev *nic_dev,
+ struct sk_buff *skb, u16 valid_nr_frags,
+ struct hinic3_dma_info *dma_info)
+{
+ struct pci_dev *pdev = nic_dev->pdev;
+ int i;
+
+ for (i = 0; i < valid_nr_frags;) {
+ i++;
+ dma_unmap_page(&pdev->dev,
+ dma_info[i].dma,
+ dma_info[i].len, DMA_TO_DEVICE);
+ }
+
+ dma_unmap_single(&pdev->dev, dma_info[0].dma,
+ dma_info[0].len, DMA_TO_DEVICE);
+}
+
+union hinic3_l4 {
+ struct tcphdr *tcp;
+ struct udphdr *udp;
+ unsigned char *hdr;
+};
+
+enum sq_l3_type {
+ UNKNOWN_L3TYPE = 0,
+ IPV6_PKT = 1,
+ IPV4_PKT_NO_CHKSUM_OFFLOAD = 2,
+ IPV4_PKT_WITH_CHKSUM_OFFLOAD = 3,
+};
+
+enum sq_l4offload_type {
+ OFFLOAD_DISABLE = 0,
+ TCP_OFFLOAD_ENABLE = 1,
+ SCTP_OFFLOAD_ENABLE = 2,
+ UDP_OFFLOAD_ENABLE = 3,
+};
+
+/* initialize l4_len and offset */
+static void get_inner_l4_info(struct sk_buff *skb, union hinic3_l4 *l4,
+ u8 l4_proto, u32 *offset,
+ enum sq_l4offload_type *l4_offload)
+{
+ switch (l4_proto) {
+ case IPPROTO_TCP:
+ *l4_offload = TCP_OFFLOAD_ENABLE;
+ /* To stay consistent with TSO, the payload offset begins at the payload */
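+ /* illustrative example (not from the original patch): for a plain
+ * 14B eth + 20B ipv4 + 20B tcp frame, TRANSPORT_OFFSET is 34 and
+ * doff is 5, so offset = 20 + 34 = 54, i.e. the start of the payload
+ */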
+ *offset = (l4->tcp->doff << TCP_HDR_DATA_OFF_UNIT_SHIFT) +
+ TRANSPORT_OFFSET(l4->hdr, skb);
+ break;
+
+ case IPPROTO_UDP:
+ *l4_offload = UDP_OFFLOAD_ENABLE;
+ *offset = TRANSPORT_OFFSET(l4->hdr, skb);
+ break;
+ default:
+ break;
+ }
+}
+
+static void get_inner_l3_l4_type(struct sk_buff *skb, union hinic3_ip *ip,
+ union hinic3_l4 *l4,
+ enum sq_l3_type *l3_type, u8 *l4_proto)
+{
+ unsigned char *exthdr = NULL;
+
+ if (ip->v4->version == IP4_VERSION) {
+ *l3_type = IPV4_PKT_WITH_CHKSUM_OFFLOAD;
+ *l4_proto = ip->v4->protocol;
+
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ /* inner_transport_header is wrong in centos7.0 and suse12.1 */
+ l4->hdr = ip->hdr + ((u8)ip->v4->ihl << IP_HDR_IHL_UNIT_SHIFT);
+#endif
+ } else if (ip->v4->version == IP6_VERSION) {
+ *l3_type = IPV6_PKT;
+ exthdr = ip->hdr + sizeof(*ip->v6);
+ *l4_proto = ip->v6->nexthdr;
+ if (exthdr != l4->hdr) {
+ __be16 frag_off = 0;
+#ifndef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ ipv6_skip_exthdr(skb, (int)(exthdr - skb->data),
+ l4_proto, &frag_off);
+#else
+ int pld_off = 0;
+
+ pld_off = ipv6_skip_exthdr(skb,
+ (int)(exthdr - skb->data),
+ l4_proto, &frag_off);
+ l4->hdr = skb->data + pld_off;
+#endif
+ }
+ } else {
+ *l3_type = UNKNOWN_L3TYPE;
+ *l4_proto = 0;
+ }
+}
+
+static u8 hinic3_get_inner_l4_type(struct sk_buff *skb)
+{
+ enum sq_l3_type l3_type;
+ u8 l4_proto;
+ union hinic3_ip ip;
+ union hinic3_l4 l4;
+
+ ip.hdr = skb_inner_network_header(skb);
+ l4.hdr = skb_inner_transport_header(skb);
+
+ get_inner_l3_l4_type(skb, &ip, &l4, &l3_type, &l4_proto);
+ return l4_proto;
+}
+
+static void hinic3_set_unknown_tunnel_csum(struct sk_buff *skb)
+{
+ int csum_offset;
+ __sum16 skb_csum;
+ u8 l4_proto;
+
+ l4_proto = hinic3_get_inner_l4_type(skb);
+ /* Unsupported tunnel packet, disable csum offload */
+ skb_checksum_help(skb);
+ /* The value of csum is changed from 0xffff to 0 according to RFC1624 */
+ if (skb->ip_summed == CHECKSUM_NONE && l4_proto != IPPROTO_UDP) {
+ csum_offset = skb_checksum_start_offset(skb) + skb->csum_offset;
+ skb_csum = *(__sum16 *)(skb->data + csum_offset);
+ if (skb_csum == 0xffff) {
+ *(__sum16 *)(skb->data + csum_offset) = 0;
+ }
+ }
+}
+
+static int hinic3_tx_csum(struct hinic3_txq *txq, struct hinic3_sq_task *task,
+ struct sk_buff *skb)
+{
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
+
+ if (skb->encapsulation) {
+ union hinic3_ip ip;
+ u8 l4_proto;
+
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG);
+
+ ip.hdr = skb_network_header(skb);
+ if (ip.v4->version == IPV4_VERSION) {
+ l4_proto = ip.v4->protocol;
+ } else if (ip.v4->version == IPV6_VERSION) {
+ union hinic3_l4 l4;
+ unsigned char *exthdr;
+ __be16 frag_off;
+
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L4_EN);
+#endif
+ exthdr = ip.hdr + sizeof(*ip.v6);
+ l4_proto = ip.v6->nexthdr;
+ l4.hdr = skb_transport_header(skb);
+ if (l4.hdr != exthdr)
+ ipv6_skip_exthdr(skb, exthdr - skb->data,
+ &l4_proto, &frag_off);
+ } else {
+ l4_proto = IPPROTO_RAW;
+ }
+
+ if (l4_proto != IPPROTO_UDP) {
+ TXQ_STATS_INC(txq, unknown_tunnel_pkt);
+ hinic3_set_unknown_tunnel_csum(skb);
+ return 0;
+ }
+ }
+
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+
+ return 1;
+}
+
+static void hinic3_set_tso_info(struct hinic3_sq_task *task, u32 *queue_info,
+ enum sq_l4offload_type l4_offload,
+ u32 offset, u32 mss)
+{
+ if (l4_offload == TCP_OFFLOAD_ENABLE) {
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+ } else if (l4_offload == UDP_OFFLOAD_ENABLE) {
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UFO);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+ }
+
+ /* Enable inner L3 checksum calculation by default */
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN);
+
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(offset >> 1, PLDOFF);
+
+ /* set MSS value */
+ *queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(*queue_info, MSS);
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(mss, MSS);
+}
+
+static int hinic3_tso(struct hinic3_sq_task *task, u32 *queue_info,
+ struct sk_buff *skb)
+{
+ enum sq_l4offload_type l4_offload = OFFLOAD_DISABLE;
+ enum sq_l3_type l3_type;
+ union hinic3_ip ip;
+ union hinic3_l4 l4;
+ u32 offset = 0;
+ u8 l4_proto;
+ int err;
+
+ if (!skb_is_gso(skb))
+ return 0;
+
+ err = skb_cow_head(skb, 0);
+ if (err < 0)
+ return err;
+
+ if (skb->encapsulation) {
+ u32 gso_type = skb_shinfo(skb)->gso_type;
+ /* the outer L3 checksum is always enabled */
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG);
+
+ l4.hdr = skb_transport_header(skb);
+ ip.hdr = skb_network_header(skb);
+
+ if (gso_type & SKB_GSO_UDP_TUNNEL_CSUM) {
+ l4.udp->check = ~csum_magic(&ip, IPPROTO_UDP);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L4_EN);
+ } else if (gso_type & SKB_GSO_UDP_TUNNEL) {
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ if (ip.v4->version == 6) {
+ l4.udp->check = ~csum_magic(&ip, IPPROTO_UDP);
+ task->pkt_info0 |=
+ SQ_TASK_INFO0_SET(1U, OUT_L4_EN);
+ }
+#endif
+ }
+
+ ip.hdr = skb_inner_network_header(skb);
+ l4.hdr = skb_inner_transport_header(skb);
+ } else {
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_transport_header(skb);
+ }
+
+ get_inner_l3_l4_type(skb, &ip, &l4, &l3_type, &l4_proto);
+
+ if (l4_proto == IPPROTO_TCP)
+ l4.tcp->check = ~csum_magic(&ip, IPPROTO_TCP);
+#ifdef HAVE_IP6_FRAG_ID_ENABLE_UFO
+ else if (l4_proto == IPPROTO_UDP && ip.v4->version == 6)
+ task->ip_identify =
+ be32_to_cpu(skb_shinfo(skb)->ip6_frag_id);
+#endif
+
+ get_inner_l4_info(skb, &l4, l4_proto, &offset, &l4_offload);
+
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ u32 network_hdr_len;
+
+ if (unlikely(l3_type == UNKNOWN_L3TYPE))
+ network_hdr_len = 0;
+ else
+ network_hdr_len = l4.hdr - ip.hdr;
+
+ if (unlikely(!offset)) {
+ if (l3_type == UNKNOWN_L3TYPE)
+ offset = ip.hdr - skb->data;
+ else if (l4_offload == OFFLOAD_DISABLE)
+ offset = ip.hdr - skb->data + network_hdr_len;
+ }
+#endif
+
+ hinic3_set_tso_info(task, queue_info, l4_offload, offset,
+ skb_shinfo(skb)->gso_size);
+
+ return 1;
+}
+
+static u32 hinic3_tx_offload(struct sk_buff *skb, struct hinic3_sq_task *task,
+ u32 *queue_info, struct hinic3_txq *txq)
+{
+ u32 offload = 0;
+ int tso_cs_en;
+
+ task->pkt_info0 = 0;
+ task->ip_identify = 0;
+ task->pkt_info2 = 0;
+ task->vlan_offload = 0;
+
+ tso_cs_en = hinic3_tso(task, queue_info, skb);
+ if (tso_cs_en < 0) {
+ offload = TX_OFFLOAD_INVALID;
+ return offload;
+ } else if (tso_cs_en) {
+ offload |= TX_OFFLOAD_TSO;
+ } else {
+ tso_cs_en = hinic3_tx_csum(txq, task, skb);
+ if (tso_cs_en)
+ offload |= TX_OFFLOAD_CSUM;
+ }
+
+#define VLAN_INSERT_MODE_MAX 5
+ if (unlikely(skb_vlan_tag_present(skb))) {
+ /* select the vlan insert mode by qid; the default is the 802.1Q tag type */
+ hinic3_set_vlan_tx_offload(task, skb_vlan_tag_get(skb),
+ txq->q_id % VLAN_INSERT_MODE_MAX);
+ offload |= TX_OFFLOAD_VLAN;
+ }
+
+ if (unlikely(SQ_CTRL_QUEUE_INFO_GET(*queue_info, PLDOFF) >
+ MAX_PAYLOAD_OFFSET)) {
+ offload = TX_OFFLOAD_INVALID;
+ return offload;
+ }
+
+ return offload;
+}
+
+static void get_pkt_stats(struct hinic3_tx_info *tx_info, struct sk_buff *skb)
+{
+ u32 ihs, hdr_len;
+
+ if (skb_is_gso(skb)) {
+#if (defined(HAVE_SKB_INNER_TRANSPORT_HEADER) && \
+ defined(HAVE_SK_BUFF_ENCAPSULATION))
+ if (skb->encapsulation) {
+#ifdef HAVE_SKB_INNER_TRANSPORT_OFFSET
+ ihs = skb_inner_transport_offset(skb) +
+ inner_tcp_hdrlen(skb);
+#else
+ ihs = (skb_inner_transport_header(skb) - skb->data) +
+ inner_tcp_hdrlen(skb);
+#endif
+ } else {
+#endif
+ ihs = (u32)(skb_transport_offset(skb)) +
+ tcp_hdrlen(skb);
+#if (defined(HAVE_SKB_INNER_TRANSPORT_HEADER) && \
+ defined(HAVE_SK_BUFF_ENCAPSULATION))
+ }
+#endif
+ hdr_len = (skb_shinfo(skb)->gso_segs - 1) * ihs;
+ tx_info->num_bytes = skb->len + (u64)hdr_len;
+ } else {
+ tx_info->num_bytes = (skb->len > ETH_ZLEN) ?
+ skb->len : ETH_ZLEN;
+ }
+
+ tx_info->num_pkts = 1;
+}
+
+static inline int hinic3_maybe_stop_tx(struct hinic3_txq *txq, u16 wqebb_cnt)
+{
+ if (likely(hinic3_get_sq_free_wqebbs(txq->sq) >= wqebb_cnt))
+ return 0;
+
+ /* We need to check again in case another CPU has just
+ * made room available.
+ */
+ netif_stop_subqueue(txq->netdev, txq->q_id);
+
+ if (likely(hinic3_get_sq_free_wqebbs(txq->sq) < wqebb_cnt))
+ return -EBUSY;
+
+ /* enough wqebbs are available again, so wake the queue back up */
+ netif_start_subqueue(txq->netdev, txq->q_id);
+
+ return 0;
+}
+
+static u16 hinic3_set_wqe_combo(struct hinic3_txq *txq,
+ struct hinic3_sq_wqe_combo *wqe_combo,
+ u32 offload, u16 num_sge, u16 *curr_pi)
+{
+ void *second_part_wqebbs_addr = NULL;
+ void *wqe = NULL;
+ u16 first_part_wqebbs_num, tmp_pi;
+
+ wqe_combo->ctrl_bd0 = hinic3_get_sq_one_wqebb(txq->sq, curr_pi);
+ if (!offload && num_sge == 1) {
+ wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE;
+ return hinic3_get_and_update_sq_owner(txq->sq, *curr_pi, 1);
+ }
+
+ wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE;
+
+ if (offload) {
+ wqe_combo->task = hinic3_get_sq_one_wqebb(txq->sq, &tmp_pi);
+ wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES;
+ } else {
+ wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS;
+ }
+
+ if (num_sge > 1) {
+ /* the first wqebb contains bd0, and the bd size equals the sq
+ * wqebb size, so we use (num_sge - 1) as the wanted wqebb count
+ */
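+ /* illustrative example: with num_sge = 3 (head + 2 frags), 2 more
+ * wqebbs are requested here to hold bd1 and bd2
+ */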
+ wqe = hinic3_get_sq_multi_wqebbs(txq->sq, num_sge - 1, &tmp_pi,
+ &second_part_wqebbs_addr,
+ &first_part_wqebbs_num);
+ wqe_combo->bds_head = wqe;
+ wqe_combo->bds_sec2 = second_part_wqebbs_addr;
+ wqe_combo->first_bds_num = first_part_wqebbs_num;
+ }
+
+ return hinic3_get_and_update_sq_owner(txq->sq, *curr_pi,
+ num_sge + (u16)!!offload);
+}
+
+/* *
+ * hinic3_prepare_sq_ctrl - init sq wqe cs
+ * @nr_descs: total sge num, including bd0 in the cs
+ */
+static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo,
+ u32 queue_info, int nr_descs, u16 owner)
+{
+ struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->ctrl_bd0;
+
+ if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) {
+ wqe_desc->ctrl_len |=
+ SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+ SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+ SQ_CTRL_SET(owner, OWNER);
+
+ wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
+ /* for a compact wqe, queue_info is transferred to the ucode, so clear it */
+ wqe_desc->queue_info = 0;
+ return;
+ }
+
+ wqe_desc->ctrl_len |= SQ_CTRL_SET(nr_descs, BUFDESC_NUM) |
+ SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) |
+ SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+ SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+ SQ_CTRL_SET(owner, OWNER);
+
+ wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
+
+ wqe_desc->queue_info = queue_info;
+ wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC);
+
+ if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) {
+ wqe_desc->queue_info |=
+ SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS);
+ } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) <
+ TX_MSS_MIN) {
+ /* mss should not be less than TX_MSS_MIN (80) */
+ wqe_desc->queue_info =
+ SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS);
+ wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS);
+ }
+
+ wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info);
+}
+
+static netdev_tx_t hinic3_send_one_skb(struct sk_buff *skb,
+ struct net_device *netdev,
+ struct hinic3_txq *txq)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_sq_wqe_combo wqe_combo = {0};
+ struct hinic3_tx_info *tx_info = NULL;
+ struct hinic3_sq_task task;
+ u32 offload, queue_info = 0;
+ u16 owner = 0, pi = 0;
+ u16 wqebb_cnt, num_sge, valid_nr_frags;
+ bool find_zero_sge_len = false;
+ int err, i;
+
+ if (unlikely(skb->len < MIN_SKB_LEN)) {
+ if (skb_pad(skb, (int)(MIN_SKB_LEN - skb->len))) {
+ TXQ_STATS_INC(txq, skb_pad_err);
+ goto tx_skb_pad_err;
+ }
+
+ skb->len = MIN_SKB_LEN;
+ }
+
+ valid_nr_frags = 0;
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ if (!skb_frag_size(&skb_shinfo(skb)->frags[i])) {
+ find_zero_sge_len = true;
+ continue;
+ } else if (find_zero_sge_len) {
+ TXQ_STATS_INC(txq, frag_size_err);
+ goto tx_drop_pkts;
+ }
+
+ valid_nr_frags++;
+ }
+
+ num_sge = valid_nr_frags + 1;
+
+ /* assume a normal TS format wqe is needed; the task info takes 1 wqebb */
+ wqebb_cnt = num_sge + 1;
+ if (unlikely(hinic3_maybe_stop_tx(txq, wqebb_cnt))) {
+ TXQ_STATS_INC(txq, busy);
+ return NETDEV_TX_BUSY;
+ }
+
+ /* l2nic outband vlan cfg enable */
+ if ((!skb_vlan_tag_present(skb)) &&
+ (nic_dev->nic_cap.outband_vlan_cfg_en == 1) &&
+ nic_dev->outband_cfg.outband_default_vid != 0) {
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+ (u16)nic_dev->outband_cfg.outband_default_vid);
+ }
+
+ offload = hinic3_tx_offload(skb, &task, &queue_info, txq);
+ if (unlikely(offload == TX_OFFLOAD_INVALID)) {
+ TXQ_STATS_INC(txq, offload_cow_skb_err);
+ goto tx_drop_pkts;
+ } else if (!offload) {
+ /* no TS in current wqe */
+ wqebb_cnt -= 1;
+ if (unlikely(num_sge == 1 && skb->len > COMPACET_WQ_SKB_MAX_LEN))
+ goto tx_drop_pkts;
+ }
+
+ owner = hinic3_set_wqe_combo(txq, &wqe_combo, offload, num_sge, &pi);
+ if (offload) {
+ /* ip6_frag_id is big endian, no need to convert */
+ wqe_combo.task->ip_identify = hinic3_hw_be32(task.ip_identify);
+ wqe_combo.task->pkt_info0 = hinic3_hw_be32(task.pkt_info0);
+ wqe_combo.task->pkt_info2 = hinic3_hw_be32(task.pkt_info2);
+ wqe_combo.task->vlan_offload =
+ hinic3_hw_be32(task.vlan_offload);
+ }
+
+ tx_info = &txq->tx_info[pi];
+ tx_info->skb = skb;
+ tx_info->wqebb_cnt = wqebb_cnt;
+ tx_info->valid_nr_frags = valid_nr_frags;
+
+ err = tx_map_skb(nic_dev, skb, valid_nr_frags, txq, tx_info,
+ &wqe_combo);
+ if (err) {
+ hinic3_rollback_sq_wqebbs(txq->sq, wqebb_cnt, owner);
+ goto tx_drop_pkts;
+ }
+
+ get_pkt_stats(tx_info, skb);
+
+ hinic3_prepare_sq_ctrl(&wqe_combo, queue_info, num_sge, owner);
+
+ hinic3_write_db(txq->sq, txq->cos, SQ_CFLAG_DP,
+ hinic3_get_sq_local_pi(txq->sq));
+
+ return NETDEV_TX_OK;
+
+tx_drop_pkts:
+ dev_kfree_skb_any(skb);
+
+tx_skb_pad_err:
+ TXQ_STATS_INC(txq, dropped);
+
+ return NETDEV_TX_OK;
+}
+
+netdev_tx_t hinic3_lb_xmit_frame(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 q_id = skb_get_queue_mapping(skb);
+ struct hinic3_txq *txq = &nic_dev->txqs[q_id];
+
+ return hinic3_send_one_skb(skb, netdev, txq);
+}
+
+netdev_tx_t hinic3_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_txq *txq = NULL;
+ u16 q_id = skb_get_queue_mapping(skb);
+
+ if (unlikely(!netif_carrier_ok(netdev))) {
+ dev_kfree_skb_any(skb);
+ HINIC3_NIC_STATS_INC(nic_dev, tx_carrier_off_drop);
+ return NETDEV_TX_OK;
+ }
+
+ if (unlikely(q_id >= nic_dev->q_params.num_qps)) {
+ txq = &nic_dev->txqs[0];
+ HINIC3_NIC_STATS_INC(nic_dev, tx_invalid_qid);
+ goto tx_drop_pkts;
+ }
+ txq = &nic_dev->txqs[q_id];
+
+ return hinic3_send_one_skb(skb, netdev, txq);
+
+tx_drop_pkts:
+ dev_kfree_skb_any(skb);
+ u64_stats_update_begin(&txq->txq_stats.syncp);
+ txq->txq_stats.dropped++;
+ u64_stats_update_end(&txq->txq_stats.syncp);
+
+ return NETDEV_TX_OK;
+}
+
+static inline void tx_free_skb(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tx_info *tx_info)
+{
+ tx_unmap_skb(nic_dev, tx_info->skb, tx_info->valid_nr_frags,
+ tx_info->dma_info);
+ dev_kfree_skb_any(tx_info->skb);
+ tx_info->skb = NULL;
+}
+
+static void free_all_tx_skbs(struct hinic3_nic_dev *nic_dev, u32 sq_depth,
+ struct hinic3_tx_info *tx_info_arr)
+{
+ struct hinic3_tx_info *tx_info = NULL;
+ u32 idx;
+
+ for (idx = 0; idx < sq_depth; idx++) {
+ tx_info = &tx_info_arr[idx];
+ if (tx_info->skb)
+ tx_free_skb(nic_dev, tx_info);
+ }
+}
+
+int hinic3_tx_poll(struct hinic3_txq *txq, int budget)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(txq->netdev);
+ struct hinic3_tx_info *tx_info = NULL;
+ u64 tx_bytes = 0, wake = 0, nr_pkts = 0;
+ int pkts = 0;
+ u16 wqebb_cnt = 0;
+ u16 hw_ci, sw_ci = 0, q_id = txq->sq->q_id;
+
+ hw_ci = hinic3_get_sq_hw_ci(txq->sq);
+ dma_rmb();
+ sw_ci = hinic3_get_sq_local_ci(txq->sq);
+
+ do {
+ tx_info = &txq->tx_info[sw_ci];
+
+ /* check whether all wqebbs of this wqe have completed */
+ if (hw_ci == sw_ci ||
+ ((hw_ci - sw_ci) & txq->q_mask) < tx_info->wqebb_cnt)
+ break;
+
+ sw_ci = (sw_ci + tx_info->wqebb_cnt) & (u16)txq->q_mask;
+ prefetch(&txq->tx_info[sw_ci]);
+
+ wqebb_cnt += tx_info->wqebb_cnt;
+
+ tx_bytes += tx_info->num_bytes;
+ nr_pkts += tx_info->num_pkts;
+ pkts++;
+
+ tx_free_skb(nic_dev, tx_info);
+ } while (likely(pkts < budget));
+
+ hinic3_update_sq_local_ci(txq->sq, wqebb_cnt);
+
+ if (unlikely(__netif_subqueue_stopped(nic_dev->netdev, q_id) &&
+ hinic3_get_sq_free_wqebbs(txq->sq) >= 1 &&
+ test_bit(HINIC3_INTF_UP, &nic_dev->flags))) {
+ struct netdev_queue *netdev_txq =
+ netdev_get_tx_queue(txq->netdev, q_id);
+
+ __netif_tx_lock(netdev_txq, smp_processor_id());
+ /* re-check under the tx lock to avoid racing with xmit_frame */
+ if (__netif_subqueue_stopped(nic_dev->netdev, q_id)) {
+ netif_wake_subqueue(nic_dev->netdev, q_id);
+ wake++;
+ }
+ __netif_tx_unlock(netdev_txq);
+ }
+
+ u64_stats_update_begin(&txq->txq_stats.syncp);
+ txq->txq_stats.bytes += tx_bytes;
+ txq->txq_stats.packets += nr_pkts;
+ txq->txq_stats.wake += wake;
+ u64_stats_update_end(&txq->txq_stats.syncp);
+
+ return pkts;
+}
+
+void hinic3_set_txq_cos(struct hinic3_nic_dev *nic_dev, u16 start_qid,
+ u16 q_num, u8 cos)
+{
+ u16 idx;
+
+ for (idx = 0; idx < q_num; idx++)
+ nic_dev->txqs[idx + start_qid].cos = cos;
+}
+
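+/* number of buffer descriptors that fit in one sq wqebb */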
+#define HINIC3_BDS_PER_SQ_WQEBB \
+ (HINIC3_SQ_WQEBB_SIZE / sizeof(struct hinic3_sq_bufdesc))
+
+int hinic3_alloc_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res)
+{
+ struct hinic3_dyna_txq_res *tqres = NULL;
+ int idx, i;
+ u64 size;
+
+ for (idx = 0; idx < num_sq; idx++) {
+ tqres = &txqs_res[idx];
+
+ size = sizeof(*tqres->tx_info) * sq_depth;
+ tqres->tx_info = kzalloc(size, GFP_KERNEL);
+ if (!tqres->tx_info) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txq%d tx info\n", idx);
+ goto err_out;
+ }
+
+ size = sizeof(*tqres->bds) *
+ (sq_depth * HINIC3_BDS_PER_SQ_WQEBB +
+ HINIC3_MAX_SQ_SGE);
+ tqres->bds = kzalloc(size, GFP_KERNEL);
+ if (!tqres->bds) {
+ kfree(tqres->tx_info);
+ tqres->tx_info = NULL;
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txq%d bds info\n", idx);
+ goto err_out;
+ }
+ }
+
+ return 0;
+
+err_out:
+ for (i = 0; i < idx; i++) {
+ tqres = &txqs_res[i];
+
+ kfree(tqres->bds);
+ tqres->bds = NULL;
+ kfree(tqres->tx_info);
+ tqres->tx_info = NULL;
+ }
+
+ return -ENOMEM;
+}
+
+void hinic3_free_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res)
+{
+ struct hinic3_dyna_txq_res *tqres = NULL;
+ int idx;
+
+ for (idx = 0; idx < num_sq; idx++) {
+ tqres = &txqs_res[idx];
+
+ free_all_tx_skbs(nic_dev, sq_depth, tqres->tx_info);
+ kfree(tqres->bds);
+ tqres->bds = NULL;
+ kfree(tqres->tx_info);
+ tqres->tx_info = NULL;
+ }
+}
+
+int hinic3_configure_txqs(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res)
+{
+ struct hinic3_dyna_txq_res *tqres = NULL;
+ struct hinic3_txq *txq = NULL;
+ u16 q_id;
+ u32 idx;
+
+ for (q_id = 0; q_id < num_sq; q_id++) {
+ txq = &nic_dev->txqs[q_id];
+ tqres = &txqs_res[q_id];
+
+ txq->q_depth = sq_depth;
+ txq->q_mask = sq_depth - 1;
+
+ txq->tx_info = tqres->tx_info;
+ for (idx = 0; idx < sq_depth; idx++)
+ txq->tx_info[idx].dma_info =
+ &tqres->bds[idx * HINIC3_BDS_PER_SQ_WQEBB];
+
+ txq->sq = hinic3_get_nic_queue(nic_dev->hwdev, q_id, HINIC3_SQ);
+ if (!txq->sq) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get %u sq\n", q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+int hinic3_alloc_txqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct hinic3_txq *txq = NULL;
+ u16 q_id, num_txqs = nic_dev->max_qps;
+ u64 txq_size;
+
+ txq_size = num_txqs * sizeof(*nic_dev->txqs);
+ if (!txq_size) {
+ nic_err(&pdev->dev, "Cannot allocate zero size txqs\n");
+ return -EINVAL;
+ }
+
+ nic_dev->txqs = kzalloc(txq_size, GFP_KERNEL);
+ if (!nic_dev->txqs) {
+ nic_err(&pdev->dev, "Failed to allocate txqs\n");
+ return -ENOMEM;
+ }
+
+ for (q_id = 0; q_id < num_txqs; q_id++) {
+ txq = &nic_dev->txqs[q_id];
+ txq->netdev = netdev;
+ txq->q_id = q_id;
+ txq->q_depth = nic_dev->q_params.sq_depth;
+ txq->q_mask = nic_dev->q_params.sq_depth - 1;
+ txq->dev = &pdev->dev;
+
+ txq_stats_init(txq);
+ }
+
+ return 0;
+}
+
+void hinic3_free_txqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ kfree(nic_dev->txqs);
+ nic_dev->txqs = NULL;
+}
+
+static bool is_hw_complete_sq_process(struct hinic3_io_queue *sq)
+{
+ u16 sw_pi, hw_ci;
+
+ sw_pi = hinic3_get_sq_local_pi(sq);
+ hw_ci = hinic3_get_sq_hw_ci(sq);
+
+ return sw_pi == hw_ci;
+}
+
+#define HINIC3_FLUSH_QUEUE_TIMEOUT 1000
+static int hinic3_stop_sq(struct hinic3_txq *txq)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(txq->netdev);
+ u64 timeout;
+ int err;
+
+ timeout = msecs_to_jiffies(HINIC3_FLUSH_QUEUE_TIMEOUT) + jiffies;
+ do {
+ if (is_hw_complete_sq_process(txq->sq))
+ return 0;
+
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+ } while (time_before(jiffies, (unsigned long)timeout));
+
+ /* force hardware to drop packets */
+ timeout = msecs_to_jiffies(HINIC3_FLUSH_QUEUE_TIMEOUT) + jiffies;
+ do {
+ if (is_hw_complete_sq_process(txq->sq))
+ return 0;
+
+ err = hinic3_force_drop_tx_pkt(nic_dev->hwdev);
+ if (err)
+ break;
+
+ usleep_range(9900, 10000); /* sleep 9900 us ~ 10000 us */
+ } while (time_before(jiffies, (unsigned long)timeout));
+
+ /* the sleep may overshoot the timeout, so check once more to avoid a false failure */
+ if (is_hw_complete_sq_process(txq->sq))
+ return 0;
+
+ return -EFAULT;
+}
+
+/* packet transmission should be stopped before calling this function */
+int hinic3_flush_txqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 qid;
+ int err;
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ err = hinic3_stop_sq(&nic_dev->txqs[qid]);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to stop sq%u\n", qid);
+ }
+
+ return 0;
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h
new file mode 100644
index 0000000..290ef29
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_TX_H
+#define HINIC3_TX_H
+
+#include <net/ipv6.h>
+#include <net/checksum.h>
+#include <net/ip6_checksum.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+
+#define VXLAN_OFFLOAD_PORT_LE 46354 /* 4789 in big endian */
+
+#define COMPACET_WQ_SKB_MAX_LEN 16383
+
+#define IP4_VERSION 4
+#define IP6_VERSION 6
+#define IP_HDR_IHL_UNIT_SHIFT 2
+#define TCP_HDR_DATA_OFF_UNIT_SHIFT 2
+
+enum tx_offload_type {
+ TX_OFFLOAD_TSO = BIT(0),
+ TX_OFFLOAD_CSUM = BIT(1),
+ TX_OFFLOAD_VLAN = BIT(2),
+ TX_OFFLOAD_INVALID = BIT(3),
+ TX_OFFLOAD_ESP = BIT(4),
+};
+
+struct hinic3_txq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 busy;
+ u64 wake;
+ u64 dropped;
+
+ /* Subdivision statistics shown in the private tool */
+ u64 skb_pad_err;
+ u64 frag_len_overflow;
+ u64 offload_cow_skb_err;
+ u64 map_frag_err;
+ u64 unknown_tunnel_pkt;
+ u64 frag_size_err;
+ u64 rsvd1;
+ u64 rsvd2;
+
+#ifdef HAVE_NDO_GET_STATS64
+ struct u64_stats_sync syncp;
+#else
+ struct u64_stats_sync_empty syncp;
+#endif
+};
+
+struct hinic3_dma_info {
+ dma_addr_t dma;
+ u32 len;
+};
+
+#define IPV4_VERSION 4
+#define IPV6_VERSION 6
+#define TCP_HDR_DOFF_UNIT 2
+#define TRANSPORT_OFFSET(l4_hdr, skb) ((u32)((l4_hdr) - (skb)->data))
+
+union hinic3_ip {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+};
+
+struct hinic3_tx_info {
+ struct sk_buff *skb;
+
+ u16 wqebb_cnt;
+ u16 valid_nr_frags;
+
+ int num_sge;
+ u16 num_pkts;
+ u16 rsvd1;
+ u32 rsvd2;
+ u64 num_bytes;
+ struct hinic3_dma_info *dma_info;
+ u64 rsvd3;
+};
+
+struct hinic3_txq {
+ struct net_device *netdev;
+ struct device *dev;
+
+ struct hinic3_txq_stats txq_stats;
+
+ u8 cos;
+ u8 rsvd1;
+ u16 q_id;
+ u32 q_mask;
+ u32 q_depth;
+ u32 rsvd2;
+
+ struct hinic3_tx_info *tx_info;
+ struct hinic3_io_queue *sq;
+
+ u64 last_moder_packets;
+ u64 last_moder_bytes;
+ u64 rsvd3;
+} ____cacheline_aligned;
+
+netdev_tx_t hinic3_lb_xmit_frame(struct sk_buff *skb,
+ struct net_device *netdev);
+
+struct hinic3_dyna_txq_res {
+ struct hinic3_tx_info *tx_info;
+ struct hinic3_dma_info *bds;
+};
+
+netdev_tx_t hinic3_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
+
+void hinic3_txq_get_stats(struct hinic3_txq *txq,
+ struct hinic3_txq_stats *stats);
+
+void hinic3_txq_clean_stats(struct hinic3_txq_stats *txq_stats);
+
+struct hinic3_nic_dev;
+int hinic3_alloc_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res);
+
+void hinic3_free_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res);
+
+int hinic3_configure_txqs(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res);
+
+int hinic3_alloc_txqs(struct net_device *netdev);
+
+void hinic3_free_txqs(struct net_device *netdev);
+
+int hinic3_tx_poll(struct hinic3_txq *txq, int budget);
+
+int hinic3_flush_txqs(struct net_device *netdev);
+
+void hinic3_set_txq_cos(struct hinic3_nic_dev *nic_dev, u16 start_qid,
+ u16 q_num, u8 cos);
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline __sum16 csum_magic(union hinic3_ip *ip, unsigned short proto)
+{
+ return (ip->v4->version == IPV4_VERSION) ?
+ csum_tcpudp_magic(ip->v4->saddr, ip->v4->daddr, 0, proto, 0) :
+ csum_ipv6_magic(&ip->v6->saddr, &ip->v6->daddr, 0, proto, 0);
+}
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_wq.h b/drivers/net/ethernet/huawei/hinic3/hinic3_wq.h
new file mode 100644
index 0000000..7ae029b
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_wq.h
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_WQ_H
+#define HINIC3_WQ_H
+
+struct hinic3_wq {
+ u16 cons_idx;
+ u16 prod_idx;
+
+ u32 q_depth;
+ u16 idx_mask;
+ u16 wqebb_size_shift;
+ u16 rsvd1;
+ u16 num_wq_pages;
+ u32 wqebbs_per_page;
+ u16 wqebbs_per_page_shift;
+ u16 wqebbs_per_page_mask;
+
+ struct hinic3_dma_addr_align *wq_pages;
+
+ dma_addr_t wq_block_paddr;
+ u64 *wq_block_vaddr;
+
+ void *dev_hdl;
+ u32 wq_page_size;
+ u16 wqebb_size;
+} ____cacheline_aligned;
+
+#define WQ_MASK_IDX(wq, idx) ((idx) & (wq)->idx_mask)
+#define WQ_MASK_PAGE(wq, pg_idx) \
+ (((pg_idx) < ((wq)->num_wq_pages)) ? (pg_idx) : 0)
+#define WQ_PAGE_IDX(wq, idx) ((idx) >> (wq)->wqebbs_per_page_shift)
+#define WQ_OFFSET_IN_PAGE(wq, idx) ((idx) & (wq)->wqebbs_per_page_mask)
+#define WQ_GET_WQEBB_ADDR(wq, pg_idx, idx_in_pg) \
+ ((u8 *)(wq)->wq_pages[pg_idx].align_vaddr + \
+ ((idx_in_pg) << (wq)->wqebb_size_shift))
+#define WQ_IS_0_LEVEL_CLA(wq) ((wq)->num_wq_pages == 1)
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
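+/* illustrative note (not in the original source): with q_depth = 256,
+ * prod_idx = 10 and cons_idx = 5, used = (256 + 10 - 5) & 255 = 5 and
+ * free = 256 - 5 - 1 = 250; one slot stays reserved so a full ring is
+ * never mistaken for an empty one
+ */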
+static inline u16 hinic3_wq_free_wqebbs(struct hinic3_wq *wq)
+{
+ return wq->q_depth - ((wq->q_depth + wq->prod_idx - wq->cons_idx) &
+ wq->idx_mask) - 1;
+}
+
+static inline bool hinic3_wq_is_empty(struct hinic3_wq *wq)
+{
+ return WQ_MASK_IDX(wq, wq->prod_idx) == WQ_MASK_IDX(wq, wq->cons_idx);
+}
+
+static inline void *hinic3_wq_get_one_wqebb(struct hinic3_wq *wq, u16 *pi)
+{
+ *pi = WQ_MASK_IDX(wq, wq->prod_idx);
+ wq->prod_idx++;
+
+ return WQ_GET_WQEBB_ADDR(wq, WQ_PAGE_IDX(wq, *pi),
+ WQ_OFFSET_IN_PAGE(wq, *pi));
+}
+
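+/* illustrative example: with wqebbs_per_page = 256, asking for 4 wqebbs
+ * at offset 254 yields first_part_wqebbs_num = 2 from the current page
+ * and second_part_wqebbs_addr pointing at the start of the next page
+ */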
+static inline void *hinic3_wq_get_multi_wqebbs(struct hinic3_wq *wq,
+ u16 num_wqebbs, u16 *prod_idx,
+ void **second_part_wqebbs_addr,
+ u16 *first_part_wqebbs_num)
+{
+ u32 pg_idx, off_in_page;
+
+ *prod_idx = WQ_MASK_IDX(wq, wq->prod_idx);
+ wq->prod_idx += num_wqebbs;
+
+ pg_idx = WQ_PAGE_IDX(wq, *prod_idx);
+ off_in_page = WQ_OFFSET_IN_PAGE(wq, *prod_idx);
+
+ if ((off_in_page + num_wqebbs) > wq->wqebbs_per_page) {
+ /* wqe across wq page boundary */
+ *second_part_wqebbs_addr =
+ WQ_GET_WQEBB_ADDR(wq, WQ_MASK_PAGE(wq, pg_idx + 1), 0);
+ *first_part_wqebbs_num = wq->wqebbs_per_page - off_in_page;
+ } else {
+ *second_part_wqebbs_addr = NULL;
+ *first_part_wqebbs_num = num_wqebbs;
+ }
+
+ return WQ_GET_WQEBB_ADDR(wq, pg_idx, off_in_page);
+}
+
+static inline void hinic3_wq_put_wqebbs(struct hinic3_wq *wq, u16 num_wqebbs)
+{
+ wq->cons_idx += num_wqebbs;
+}
+
+static inline void *hinic3_wq_wqebb_addr(struct hinic3_wq *wq, u16 idx)
+{
+ return WQ_GET_WQEBB_ADDR(wq, WQ_PAGE_IDX(wq, idx),
+ WQ_OFFSET_IN_PAGE(wq, idx));
+}
+
+static inline void *hinic3_wq_read_one_wqebb(struct hinic3_wq *wq,
+ u16 *cons_idx)
+{
+ *cons_idx = WQ_MASK_IDX(wq, wq->cons_idx);
+
+ return hinic3_wq_wqebb_addr(wq, *cons_idx);
+}
+
+static inline u64 hinic3_wq_get_first_wqe_page_addr(struct hinic3_wq *wq)
+{
+ return wq->wq_pages[0].align_paddr;
+}
+
+static inline void hinic3_wq_reset(struct hinic3_wq *wq)
+{
+ u16 pg_idx;
+
+ wq->cons_idx = 0;
+ wq->prod_idx = 0;
+
+ for (pg_idx = 0; pg_idx < wq->num_wq_pages; pg_idx++)
+ memset(wq->wq_pages[pg_idx].align_vaddr, 0, wq->wq_page_size);
+}
+
+int hinic3_wq_create(void *hwdev, struct hinic3_wq *wq, u32 q_depth,
+ u16 wqebb_size);
+void hinic3_wq_destroy(struct hinic3_wq *wq);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c
new file mode 100644
index 0000000..0419fc2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c
@@ -0,0 +1,1214 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/completion.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/semaphore.h>
+#include <linux/jiffies.h>
+#include <linux/delay.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_api_cmd.h"
+
+#define API_CMD_CHAIN_CELL_SIZE_SHIFT 6U
+
+#define API_CMD_CELL_DESC_SIZE 8
+#define API_CMD_CELL_DATA_ADDR_SIZE 8
+
+#define API_CHAIN_NUM_CELLS 32
+#define API_CHAIN_CELL_SIZE 128
+#define API_CHAIN_RSP_DATA_SIZE 128
+
+#define API_CMD_CELL_WB_ADDR_SIZE 8
+
+#define API_CHAIN_CELL_ALIGNMENT 8
+
+#define API_CMD_TIMEOUT 10000
+#define API_CMD_STATUS_TIMEOUT 10000
+
+#define API_CMD_BUF_SIZE 2048ULL
+
+#define API_CMD_NODE_ALIGN_SIZE 512ULL
+#define API_PAYLOAD_ALIGN_SIZE 64ULL
+
+#define API_CHAIN_RESP_ALIGNMENT 128ULL
+
+#define COMPLETION_TIMEOUT_DEFAULT 1000UL
+#define POLLING_COMPLETION_TIMEOUT_DEFAULT 1000U
+
+#define API_CMD_RESPONSE_DATA_PADDR(val) be64_to_cpu(*((u64 *)(val)))
+
+#define READ_API_CMD_PRIV_DATA(id, token) ((((u32)(id)) << 16) + (token))
+#define WRITE_API_CMD_PRIV_DATA(id) (((u8)(id)) << 16)
+
+#define MASKED_IDX(chain, idx) ((idx) & ((chain)->num_cells - 1))
+
+#define SIZE_4BYTES(size) (ALIGN((u32)(size), 4U) >> 2)
+#define SIZE_8BYTES(size) (ALIGN((u32)(size), 8U) >> 3)
+
+enum api_cmd_data_format {
+ SGL_DATA = 1,
+};
+
+enum api_cmd_type {
+ API_CMD_WRITE_TYPE = 0,
+ API_CMD_READ_TYPE = 1,
+};
+
+enum api_cmd_bypass {
+ NOT_BYPASS = 0,
+ BYPASS = 1,
+};
+
+enum api_cmd_resp_aeq {
+ NOT_TRIGGER = 0,
+ TRIGGER = 1,
+};
+
+enum api_cmd_chn_code {
+ APICHN_0 = 0,
+};
+
+enum api_cmd_chn_rsvd {
+ APICHN_VALID = 0,
+ APICHN_INVALID = 1,
+};
+
+#define API_DESC_LEN (7)
+
+static u8 xor_chksum_set(void *data)
+{
+ int idx;
+ u8 checksum = 0;
+ u8 *val = data;
+
+ for (idx = 0; idx < API_DESC_LEN; idx++)
+ checksum ^= val[idx];
+
+ return checksum;
+}
+
+static void set_prod_idx(struct hinic3_api_cmd_chain *chain)
+{
+ enum hinic3_api_cmd_chain_type chain_type = chain->chain_type;
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 hw_prod_idx_addr = HINIC3_CSR_API_CMD_CHAIN_PI_ADDR(chain_type);
+ u32 prod_idx = chain->prod_idx;
+
+ hinic3_hwif_write_reg(hwif, hw_prod_idx_addr, prod_idx);
+}
+
+static u32 get_hw_cons_idx(struct hinic3_api_cmd_chain *chain)
+{
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+
+ return HINIC3_API_CMD_STATUS_GET(val, CONS_IDX);
+}
+
+static void dump_api_chain_reg(struct hinic3_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ u32 addr, val;
+ u16 pci_cmd = 0;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+
+ sdk_err(dev, "Chain type: 0x%x, cpld error: 0x%x, check error: 0x%x, current fsm: 0x%x\n",
+ chain->chain_type, HINIC3_API_CMD_STATUS_GET(val, CPLD_ERR),
+ HINIC3_API_CMD_STATUS_GET(val, CHKSUM_ERR),
+ HINIC3_API_CMD_STATUS_GET(val, FSM));
+
+ sdk_err(dev, "Chain hw current ci: 0x%x\n",
+ HINIC3_API_CMD_STATUS_GET(val, CONS_IDX));
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_PI_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+ sdk_err(dev, "Chain hw current pi: 0x%x\n", val);
+ pci_read_config_word(chain->hwdev->pcidev_hdl, PCI_COMMAND, &pci_cmd);
+ sdk_err(dev, "PCI command reg: 0x%x\n", pci_cmd);
+}
+
+/**
+ * chain_busy - check if the chain is still processing last requests
+ * @chain: chain to check
+ **/
+static int chain_busy(struct hinic3_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ struct hinic3_api_cmd_cell_ctxt *ctxt;
+ u64 resp_header;
+
+ ctxt = &chain->cell_ctxt[chain->prod_idx];
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_MULTI_READ:
+ case HINIC3_API_CMD_POLL_READ:
+ resp_header = be64_to_cpu(ctxt->resp->header);
+ if (ctxt->status &&
+ !HINIC3_API_CMD_RESP_HEADER_VALID(resp_header)) {
+ sdk_err(dev, "Context(0x%x) busy!, pi: %u, resp_header: 0x%08x%08x\n",
+ ctxt->status, chain->prod_idx,
+ upper_32_bits(resp_header),
+ lower_32_bits(resp_header));
+ dump_api_chain_reg(chain);
+ return -EBUSY;
+ }
+ break;
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ chain->cons_idx = get_hw_cons_idx(chain);
+
+ if (chain->cons_idx == MASKED_IDX(chain, chain->prod_idx + 1)) {
+ sdk_err(dev, "API CMD chain %d is busy, cons_idx = %u, prod_idx = %u\n",
+ chain->chain_type, chain->cons_idx,
+ chain->prod_idx);
+ dump_api_chain_reg(chain);
+ return -EBUSY;
+ }
+ break;
+ default:
+ sdk_err(dev, "Unknown Chain type %d\n", chain->chain_type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * get_cell_data_size - get the data size of specific cell type
+ * @type: chain type
+ **/
+static u16 get_cell_data_size(enum hinic3_api_cmd_chain_type type)
+{
+ u16 cell_data_size = 0;
+
+ switch (type) {
+ case HINIC3_API_CMD_POLL_READ:
+ cell_data_size = ALIGN(API_CMD_CELL_DESC_SIZE +
+ API_CMD_CELL_WB_ADDR_SIZE +
+ API_CMD_CELL_DATA_ADDR_SIZE,
+ API_CHAIN_CELL_ALIGNMENT);
+ break;
+
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ cell_data_size = ALIGN(API_CMD_CELL_DESC_SIZE +
+ API_CMD_CELL_DATA_ADDR_SIZE,
+ API_CHAIN_CELL_ALIGNMENT);
+ break;
+ default:
+ break;
+ }
+
+ return cell_data_size;
+}
+
+/**
+ * prepare_cell_ctrl - prepare the ctrl of the cell for the command
+ * @cell_ctrl: the control of the cell to set the control into it
+ * @cell_len: the size of the cell
+ **/
+static void prepare_cell_ctrl(u64 *cell_ctrl, u16 cell_len)
+{
+ u64 ctrl;
+ u8 chksum;
+
+ ctrl = HINIC3_API_CMD_CELL_CTRL_SET(SIZE_8BYTES(cell_len), CELL_LEN) |
+ HINIC3_API_CMD_CELL_CTRL_SET(0ULL, RD_DMA_ATTR_OFF) |
+ HINIC3_API_CMD_CELL_CTRL_SET(0ULL, WR_DMA_ATTR_OFF);
+
+ chksum = xor_chksum_set(&ctrl);
+
+ ctrl |= HINIC3_API_CMD_CELL_CTRL_SET(chksum, XOR_CHKSUM);
+
+ /* The data in the HW should be in Big Endian Format */
+ *cell_ctrl = cpu_to_be64(ctrl);
+}
+
+/**
+ * prepare_api_cmd - prepare API CMD command
+ * @chain: chain for the command
+ * @cell: the cell of the command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ **/
+static void prepare_api_cmd(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell *cell, u8 node_id,
+ const void *cmd, u16 cmd_size)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ u32 priv;
+
+ cell_ctxt = &chain->cell_ctxt[chain->prod_idx];
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_POLL_READ:
+ priv = READ_API_CMD_PRIV_DATA(chain->chain_type,
+ cell_ctxt->saved_prod_idx);
+ cell->desc = HINIC3_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HINIC3_API_CMD_DESC_SET(API_CMD_READ_TYPE, RD_WR) |
+ HINIC3_API_CMD_DESC_SET(BYPASS, MGMT_BYPASS) |
+ HINIC3_API_CMD_DESC_SET(NOT_TRIGGER,
+ RESP_AEQE_EN) |
+ HINIC3_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ case HINIC3_API_CMD_POLL_WRITE:
+ priv = WRITE_API_CMD_PRIV_DATA(chain->chain_type);
+ cell->desc = HINIC3_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HINIC3_API_CMD_DESC_SET(API_CMD_WRITE_TYPE,
+ RD_WR) |
+ HINIC3_API_CMD_DESC_SET(BYPASS, MGMT_BYPASS) |
+ HINIC3_API_CMD_DESC_SET(NOT_TRIGGER,
+ RESP_AEQE_EN) |
+ HINIC3_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ priv = WRITE_API_CMD_PRIV_DATA(chain->chain_type);
+ cell->desc = HINIC3_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HINIC3_API_CMD_DESC_SET(API_CMD_WRITE_TYPE,
+ RD_WR) |
+ HINIC3_API_CMD_DESC_SET(NOT_BYPASS, MGMT_BYPASS) |
+ HINIC3_API_CMD_DESC_SET(TRIGGER, RESP_AEQE_EN) |
+ HINIC3_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ default:
+ sdk_err(chain->hwdev->dev_hdl, "Unknown Chain type: %d\n",
+ chain->chain_type);
+ return;
+ }
+
+ cell->desc |= HINIC3_API_CMD_DESC_SET(APICHN_0, APICHN_CODE) |
+ HINIC3_API_CMD_DESC_SET(APICHN_VALID, APICHN_RSVD);
+
+ cell->desc |= HINIC3_API_CMD_DESC_SET(node_id, DEST) |
+ HINIC3_API_CMD_DESC_SET(SIZE_4BYTES(cmd_size), SIZE);
+
+ cell->desc |= HINIC3_API_CMD_DESC_SET(xor_chksum_set(&cell->desc),
+ XOR_CHKSUM);
+
+ /* The data in the HW should be in Big Endian Format */
+ cell->desc = cpu_to_be64(cell->desc);
+
+ memcpy(cell_ctxt->api_cmd_vaddr, cmd, cmd_size);
+}
+
+/**
+ * prepare_cell - prepare cell ctrl and cmd in the current producer cell
+ * @chain: chain for the command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ **/
+static void prepare_cell(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 cmd_size)
+{
+ struct hinic3_api_cmd_cell *curr_node;
+ u16 cell_size;
+
+ curr_node = chain->curr_node;
+
+ cell_size = get_cell_data_size(chain->chain_type);
+
+ prepare_cell_ctrl(&curr_node->ctrl, cell_size);
+ prepare_api_cmd(chain, curr_node, node_id, cmd, cmd_size);
+}
+
+static inline void cmd_chain_prod_idx_inc(struct hinic3_api_cmd_chain *chain)
+{
+ chain->prod_idx = MASKED_IDX(chain, chain->prod_idx + 1);
+}
+
+static void issue_api_cmd(struct hinic3_api_cmd_chain *chain)
+{
+ set_prod_idx(chain);
+}
+
+/**
+ * api_cmd_status_update - update the status of the chain
+ * @chain: chain to update
+ **/
+static void api_cmd_status_update(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_api_cmd_status *wb_status;
+ enum hinic3_api_cmd_chain_type chain_type;
+ u64 status_header;
+ u32 buf_desc;
+
+ wb_status = chain->wb_status;
+
+ buf_desc = be32_to_cpu(wb_status->buf_desc);
+ if (HINIC3_API_CMD_STATUS_GET(buf_desc, CHKSUM_ERR))
+ return;
+
+ status_header = be64_to_cpu(wb_status->header);
+ chain_type = HINIC3_API_CMD_STATUS_HEADER_GET(status_header, CHAIN_ID);
+ if (chain_type >= HINIC3_API_CMD_MAX)
+ return;
+
+ if (chain_type != chain->chain_type)
+ return;
+
+ chain->cons_idx = HINIC3_API_CMD_STATUS_GET(buf_desc, CONS_IDX);
+}
+
+static enum hinic3_wait_return wait_for_status_poll_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_chain *chain = priv_data;
+
+ if (!chain->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ api_cmd_status_update(chain);
+	/* A sync API CMD should start only after the previous cmd has finished */
+ if (chain->cons_idx == chain->prod_idx)
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * wait_for_status_poll - wait for write to mgmt command to complete
+ * @chain: the chain of the command
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_status_poll(struct hinic3_api_cmd_chain *chain)
+{
+ return hinic3_wait_for_timeout(chain,
+ wait_for_status_poll_handler,
+ API_CMD_STATUS_TIMEOUT, 100); /* wait 100 us once */
+}
+
+static void copy_resp_data(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell_ctxt *ctxt,
+ void *ack, u16 ack_size)
+{
+ struct hinic3_api_cmd_resp_fmt *resp = ctxt->resp;
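+	/* The first 8 bytes of the write-back buffer hold the response header
+	 * (hinic3_api_cmd_resp_fmt.header), leaving rsp_size_align - 8 bytes
+	 * for the response data.
+	 */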
+ int rsp_size_align = chain->rsp_size_align - 0x8;
+ int rsp_size = (ack_size > rsp_size_align) ? rsp_size_align : ack_size;
+
+ memcpy(ack, &resp->resp_data, rsp_size);
+
+ ctxt->status = 0;
+}
+
+static enum hinic3_wait_return check_cmd_resp_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_cell_ctxt *ctxt = priv_data;
+ u64 resp_header;
+ u8 resp_status;
+
+ if (!ctxt->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ resp_header = be64_to_cpu(ctxt->resp->header);
+ rmb(); /* read the latest header */
+
+ if (HINIC3_API_CMD_RESP_HEADER_VALID(resp_header)) {
+ resp_status = HINIC3_API_CMD_RESP_HEAD_GET(resp_header, STATUS);
+ if (resp_status) {
+ pr_err("Api chain response data err, status: %u\n",
+ resp_status);
+ return WAIT_PROCESS_ERR;
+ }
+
+ return WAIT_PROCESS_CPL;
+ }
+
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * wait_for_resp_polling - poll for the response data of a read api-command
+ * @ctxt: pointer to the api cmd cell context
+ *
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_resp_polling(struct hinic3_api_cmd_cell_ctxt *ctxt)
+{
+ return hinic3_wait_for_timeout(ctxt, check_cmd_resp_handler,
+ POLLING_COMPLETION_TIMEOUT_DEFAULT,
+ USEC_PER_MSEC);
+}
+
+/**
+ * wait_for_api_cmd_completion - wait for command to complete
+ * @chain: chain for the command
+ * @ctxt: cell context of the command
+ * @ack: buffer for the response data
+ * @ack_size: size of the response buffer
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_api_cmd_completion(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell_ctxt *ctxt,
+ void *ack, u16 ack_size)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ int err = 0;
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_POLL_READ:
+ err = wait_for_resp_polling(ctxt);
+ if (err == 0)
+ copy_resp_data(chain, ctxt, ack, ack_size);
+ else
+ sdk_err(dev, "API CMD poll response timeout\n");
+ break;
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ err = wait_for_status_poll(chain);
+ if (err != 0) {
+ sdk_err(dev, "API CMD Poll status timeout, chain type: %d\n",
+ chain->chain_type);
+ break;
+ }
+ break;
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ /* No need to wait */
+ break;
+ default:
+ sdk_err(dev, "Unknown API CMD Chain type: %d\n",
+ chain->chain_type);
+ err = -EINVAL;
+ break;
+ }
+
+ if (err != 0)
+ dump_api_chain_reg(chain);
+
+ return err;
+}
+
+static inline void update_api_cmd_ctxt(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell_ctxt *ctxt)
+{
+ ctxt->status = 1;
+ ctxt->saved_prod_idx = chain->prod_idx;
+ if (ctxt->resp) {
+ ctxt->resp->header = 0;
+
+ /* make sure "header" was cleared */
+ wmb();
+ }
+}
+
+/**
+ * api_cmd - API CMD command
+ * @chain: chain for the command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ * @ack: buffer for the response data
+ * @ack_size: size of the response buffer
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 cmd_size, void *ack, u16 ack_size)
+{
+ struct hinic3_api_cmd_cell_ctxt *ctxt = NULL;
+
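+	/* The async chain is serialized with a spinlock; the other chain types
+	 * may sleep and are serialized with a semaphore.
+	 */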
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock(&chain->async_lock);
+ else
+ down(&chain->sem);
+ ctxt = &chain->cell_ctxt[chain->prod_idx];
+ if (chain_busy(chain)) {
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_unlock(&chain->async_lock);
+ else
+ up(&chain->sem);
+ return -EBUSY;
+ }
+ update_api_cmd_ctxt(chain, ctxt);
+
+ prepare_cell(chain, node_id, cmd, cmd_size);
+
+ cmd_chain_prod_idx_inc(chain);
+
+ wmb(); /* issue the command */
+
+ issue_api_cmd(chain);
+
+ /* incremented prod idx, update ctxt */
+
+ chain->curr_node = chain->cell_ctxt[chain->prod_idx].cell_vaddr;
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_unlock(&chain->async_lock);
+ else
+ up(&chain->sem);
+
+ return wait_for_api_cmd_completion(chain, ctxt, ack, ack_size);
+}
+
+/**
+ * hinic3_api_cmd_write - Write API CMD command
+ * @chain: chain for write command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @size: the command size
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_api_cmd_write(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size)
+{
+ /* Verify the chain type */
+ return api_cmd(chain, node_id, cmd, size, NULL, 0);
+}
+
+/**
+ * hinic3_api_cmd_read - Read API CMD command
+ * @chain: chain for read command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @size: the command size
+ * @ack: buffer for the response data
+ * @ack_size: size of the response buffer
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_api_cmd_read(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size, void *ack, u16 ack_size)
+{
+ return api_cmd(chain, node_id, cmd, size, ack, ack_size);
+}
+
+static enum hinic3_wait_return check_chain_restart_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_chain *cmd_chain = priv_data;
+ u32 reg_addr, val;
+
+ if (!cmd_chain->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ reg_addr = HINIC3_CSR_API_CMD_CHAIN_REQ_ADDR(cmd_chain->chain_type);
+ val = hinic3_hwif_read_reg(cmd_chain->hwdev->hwif, reg_addr);
+ if (!HINIC3_API_CMD_CHAIN_REQ_GET(val, RESTART))
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * api_cmd_hw_restart - restart the chain in the HW
+ * @cmd_chain: the API CMD specific chain to restart
+ **/
+static int api_cmd_hw_restart(struct hinic3_api_cmd_chain *cmd_chain)
+{
+ struct hinic3_hwif *hwif = cmd_chain->hwdev->hwif;
+ u32 reg_addr, val;
+
+ /* Read Modify Write */
+ reg_addr = HINIC3_CSR_API_CMD_CHAIN_REQ_ADDR(cmd_chain->chain_type);
+ val = hinic3_hwif_read_reg(hwif, reg_addr);
+
+ val = HINIC3_API_CMD_CHAIN_REQ_CLEAR(val, RESTART);
+ val |= HINIC3_API_CMD_CHAIN_REQ_SET(1, RESTART);
+
+ hinic3_hwif_write_reg(hwif, reg_addr, val);
+
+ return hinic3_wait_for_timeout(cmd_chain, check_chain_restart_handler,
+ API_CMD_TIMEOUT, USEC_PER_MSEC);
+}
+
+/**
+ * api_cmd_ctrl_init - set the control register of a chain
+ * @chain: the API CMD specific chain to set control register for
+ **/
+static void api_cmd_ctrl_init(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 reg_addr, ctrl;
+ u32 size;
+
+ /* Read Modify Write */
+ reg_addr = HINIC3_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
+
+ size = (u32)ilog2(chain->cell_size >> API_CMD_CHAIN_CELL_SIZE_SHIFT);
+
+ ctrl = hinic3_hwif_read_reg(hwif, reg_addr);
+
+ ctrl = HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
+
+ ctrl |= HINIC3_API_CMD_CHAIN_CTRL_SET(0, AEQE_EN) |
+ HINIC3_API_CMD_CHAIN_CTRL_SET(size, CELL_SIZE);
+
+ hinic3_hwif_write_reg(hwif, reg_addr, ctrl);
+}
+
+/**
+ * api_cmd_set_status_addr - set the status address of a chain in the HW
+ * @chain: the API CMD specific chain to set status address for
+ **/
+static void api_cmd_set_status_addr(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_HI_ADDR(chain->chain_type);
+ val = upper_32_bits(chain->wb_status_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ addr = HINIC3_CSR_API_CMD_STATUS_LO_ADDR(chain->chain_type);
+ val = lower_32_bits(chain->wb_status_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+/**
+ * api_cmd_set_num_cells - set the number of cells of a chain in the HW
+ * @chain: the API CMD specific chain to set the number of cells for
+ **/
+static void api_cmd_set_num_cells(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(chain->chain_type);
+ val = chain->num_cells;
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+/**
+ * api_cmd_head_init - set the head cell of a chain in the HW
+ * @chain: the API CMD specific chain to set the head for
+ **/
+static void api_cmd_head_init(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(chain->chain_type);
+ val = upper_32_bits(chain->head_cell_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(chain->chain_type);
+ val = lower_32_bits(chain->head_cell_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+static enum hinic3_wait_return check_chain_ready_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_chain *chain = priv_data;
+ u32 addr, val;
+ u32 hw_cons_idx;
+
+ if (!chain->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+ hw_cons_idx = HINIC3_API_CMD_STATUS_GET(val, CONS_IDX);
+ /* wait for HW cons idx to be updated */
+ if (hw_cons_idx == chain->cons_idx)
+ return WAIT_PROCESS_CPL;
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * wait_for_ready_chain - wait for the chain to be ready
+ * @chain: the API CMD specific chain to wait for
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_ready_chain(struct hinic3_api_cmd_chain *chain)
+{
+ return hinic3_wait_for_timeout(chain, check_chain_ready_handler,
+ API_CMD_TIMEOUT, USEC_PER_MSEC);
+}
+
+/**
+ * api_cmd_chain_hw_clean - clean the HW
+ * @chain: the API CMD specific chain
+ **/
+static void api_cmd_chain_hw_clean(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, ctrl;
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
+
+ ctrl = hinic3_hwif_read_reg(hwif, addr);
+ ctrl = HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, RESTART_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_ERR) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_CHK_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
+
+ hinic3_hwif_write_reg(hwif, addr, ctrl);
+}
+
+/**
+ * api_cmd_chain_hw_init - initialize the chain in the HW
+ * @chain: the API CMD specific chain to initialize in HW
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_chain_hw_init(struct hinic3_api_cmd_chain *chain)
+{
+ api_cmd_chain_hw_clean(chain);
+
+ api_cmd_set_status_addr(chain);
+
+ if (api_cmd_hw_restart(chain)) {
+ sdk_err(chain->hwdev->dev_hdl, "Failed to restart api_cmd_hw\n");
+ return -EBUSY;
+ }
+
+ api_cmd_ctrl_init(chain);
+ api_cmd_set_num_cells(chain);
+ api_cmd_head_init(chain);
+
+ return wait_for_ready_chain(chain);
+}
+
+/**
+ * alloc_cmd_buf - allocate a dma buffer for API CMD command
+ * @chain: the API CMD specific chain for the cmd
+ * @cell: the cell in the HW for the cmd
+ * @cell_idx: the index of the cell
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_cmd_buf(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell *cell, u32 cell_idx)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ void *dev = chain->hwdev->dev_hdl;
+ void *buf_vaddr;
+ u64 buf_paddr;
+ int err = 0;
+
+ buf_vaddr = (u8 *)((u64)chain->buf_vaddr_base +
+ chain->buf_size_align * cell_idx);
+ buf_paddr = chain->buf_paddr_base +
+ chain->buf_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+
+ cell_ctxt->api_cmd_vaddr = buf_vaddr;
+
+ /* set the cmd DMA address in the cell */
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_POLL_READ:
+ cell->read.hw_cmd_paddr = cpu_to_be64(buf_paddr);
+ break;
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ /* The data in the HW should be in Big Endian Format */
+ cell->write.hw_cmd_paddr = cpu_to_be64(buf_paddr);
+ break;
+ default:
+ sdk_err(dev, "Unknown API CMD Chain type: %d\n",
+ chain->chain_type);
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+/**
+ * alloc_resp_buf - allocate a response buffer for an API CMD command
+ * @chain: the API CMD specific chain for the cmd
+ * @cell: the cell in the HW for the cmd
+ * @cell_idx: the index of the cell
+ **/
+static void alloc_resp_buf(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell *cell, u32 cell_idx)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ void *resp_vaddr;
+ u64 resp_paddr;
+
+ resp_vaddr = (u8 *)((u64)chain->rsp_vaddr_base +
+ chain->rsp_size_align * cell_idx);
+ resp_paddr = chain->rsp_paddr_base +
+ chain->rsp_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+
+ cell_ctxt->resp = resp_vaddr;
+ cell->read.hw_wb_resp_paddr = cpu_to_be64(resp_paddr);
+}
+
+static int hinic3_alloc_api_cmd_cell_buf(struct hinic3_api_cmd_chain *chain,
+ u32 cell_idx,
+ struct hinic3_api_cmd_cell *node)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ int err;
+
+ /* For read chain, we should allocate buffer for the response data */
+ if (chain->chain_type == HINIC3_API_CMD_MULTI_READ ||
+ chain->chain_type == HINIC3_API_CMD_POLL_READ)
+ alloc_resp_buf(chain, node, cell_idx);
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_POLL_READ:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ err = alloc_cmd_buf(chain, node, cell_idx);
+ if (err) {
+ sdk_err(dev, "Failed to allocate cmd buffer\n");
+ goto alloc_cmd_buf_err;
+ }
+ break;
+ /* For api command write and api command read, the data section
+ * is directly inserted in the cell, so no need to allocate.
+ */
+ case HINIC3_API_CMD_MULTI_READ:
+ chain->cell_ctxt[cell_idx].api_cmd_vaddr =
+ &node->read.hw_cmd_paddr;
+ break;
+ default:
+ sdk_err(dev, "Unsupported API CMD chain type\n");
+ err = -EINVAL;
+ goto alloc_cmd_buf_err;
+ }
+
+ return 0;
+
+alloc_cmd_buf_err:
+
+ return err;
+}
+
+/**
+ * api_cmd_create_cell - create API CMD cell of specific chain
+ * @chain: the API CMD specific chain to create its cell
+ * @cell_idx: the cell index to create
+ * @pre_node: previous cell
+ * @node_vaddr: the virt addr of the cell
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_cell(struct hinic3_api_cmd_chain *chain, u32 cell_idx,
+ struct hinic3_api_cmd_cell *pre_node,
+ struct hinic3_api_cmd_cell **node_vaddr)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ struct hinic3_api_cmd_cell *node;
+ void *cell_vaddr;
+ u64 cell_paddr;
+ int err;
+
+ cell_vaddr = (void *)((u64)chain->cell_vaddr_base +
+ chain->cell_size_align * cell_idx);
+ cell_paddr = chain->cell_paddr_base +
+ chain->cell_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+ cell_ctxt->cell_vaddr = cell_vaddr;
+ cell_ctxt->hwdev = chain->hwdev;
+ node = cell_ctxt->cell_vaddr;
+
+ if (!pre_node) {
+ chain->head_node = cell_vaddr;
+ chain->head_cell_paddr = (dma_addr_t)cell_paddr;
+ } else {
+ /* The data in the HW should be in Big Endian Format */
+ pre_node->next_cell_paddr = cpu_to_be64(cell_paddr);
+ }
+
+ /* Driver software should make sure that there is an empty API
+	 * command cell at the end of the chain
+ */
+ node->next_cell_paddr = 0;
+
+ err = hinic3_alloc_api_cmd_cell_buf(chain, cell_idx, node);
+ if (err)
+ return err;
+
+ *node_vaddr = node;
+
+ return 0;
+}
+
+/**
+ * api_cmd_create_cells - create API CMD cells for specific chain
+ * @chain: the API CMD specific chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_cells(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_api_cmd_cell *node = NULL, *pre_node = NULL;
+ void *dev = chain->hwdev->dev_hdl;
+ u32 cell_idx;
+ int err;
+
+ for (cell_idx = 0; cell_idx < chain->num_cells; cell_idx++) {
+ err = api_cmd_create_cell(chain, cell_idx, pre_node, &node);
+ if (err) {
+ sdk_err(dev, "Failed to create API CMD cell\n");
+ return err;
+ }
+
+ pre_node = node;
+ }
+
+ if (!node)
+ return -EFAULT;
+
+ /* set the Final node to point on the start */
+ node->next_cell_paddr = cpu_to_be64(chain->head_cell_paddr);
+
+ /* set the current node to be the head */
+ chain->curr_node = chain->head_node;
+ return 0;
+}
+
+/**
+ * api_chain_init - initialize API CMD specific chain
+ * @chain: the API CMD specific chain to initialize
+ * @attr: attributes to set in the chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_chain_init(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_chain_attr *attr)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ size_t cell_ctxt_size;
+ size_t cells_buf_size;
+ int err;
+
+ chain->chain_type = attr->chain_type;
+ chain->num_cells = attr->num_cells;
+ chain->cell_size = attr->cell_size;
+ chain->rsp_size = attr->rsp_size;
+
+ chain->prod_idx = 0;
+ chain->cons_idx = 0;
+
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock_init(&chain->async_lock);
+ else
+ sema_init(&chain->sem, 1);
+
+ cell_ctxt_size = chain->num_cells * sizeof(*chain->cell_ctxt);
+ if (!cell_ctxt_size) {
+ sdk_err(dev, "Api chain cell size cannot be zero\n");
+ err = -EINVAL;
+ goto alloc_cell_ctxt_err;
+ }
+
+ chain->cell_ctxt = kzalloc(cell_ctxt_size, GFP_KERNEL);
+ if (!chain->cell_ctxt) {
+ sdk_err(dev, "Failed to allocate cell contexts for a chain\n");
+ err = -ENOMEM;
+ goto alloc_cell_ctxt_err;
+ }
+
+ chain->wb_status = dma_zalloc_coherent(dev,
+ sizeof(*chain->wb_status),
+ &chain->wb_status_paddr,
+ GFP_KERNEL);
+ if (!chain->wb_status) {
+ sdk_err(dev, "Failed to allocate DMA wb status\n");
+ err = -ENOMEM;
+ goto alloc_wb_status_err;
+ }
+
+ chain->cell_size_align = ALIGN((u64)chain->cell_size,
+ API_CMD_NODE_ALIGN_SIZE);
+ chain->rsp_size_align = ALIGN((u64)chain->rsp_size,
+ API_CHAIN_RESP_ALIGNMENT);
+ chain->buf_size_align = ALIGN(API_CMD_BUF_SIZE, API_PAYLOAD_ALIGN_SIZE);
+
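+	/* Cells, write-back response buffers and command buffers are all carved
+	 * out of a single DMA-coherent allocation, laid out as
+	 * [cells | responses | command buffers].
+	 */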
+ cells_buf_size = (chain->cell_size_align + chain->rsp_size_align +
+ chain->buf_size_align) * chain->num_cells;
+
+ err = hinic3_dma_zalloc_coherent_align(dev, cells_buf_size,
+ API_CMD_NODE_ALIGN_SIZE,
+ GFP_KERNEL,
+ &chain->cells_addr);
+ if (err) {
+ sdk_err(dev, "Failed to allocate API CMD cells buffer\n");
+ goto alloc_cells_buf_err;
+ }
+
+ chain->cell_vaddr_base = chain->cells_addr.align_vaddr;
+ chain->cell_paddr_base = chain->cells_addr.align_paddr;
+
+ chain->rsp_vaddr_base = (u8 *)((u64)chain->cell_vaddr_base +
+ chain->cell_size_align * chain->num_cells);
+ chain->rsp_paddr_base = chain->cell_paddr_base +
+ chain->cell_size_align * chain->num_cells;
+
+ chain->buf_vaddr_base = (u8 *)((u64)chain->rsp_vaddr_base +
+ chain->rsp_size_align * chain->num_cells);
+ chain->buf_paddr_base = chain->rsp_paddr_base +
+ chain->rsp_size_align * chain->num_cells;
+
+ return 0;
+
+alloc_cells_buf_err:
+ dma_free_coherent(dev, sizeof(*chain->wb_status),
+ chain->wb_status, chain->wb_status_paddr);
+
+alloc_wb_status_err:
+ kfree(chain->cell_ctxt);
+
+alloc_cell_ctxt_err:
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock_deinit(&chain->async_lock);
+ else
+ sema_deinit(&chain->sem);
+ return err;
+}
+
+/**
+ * api_chain_free - free API CMD specific chain
+ * @chain: the API CMD specific chain to free
+ **/
+static void api_chain_free(struct hinic3_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+
+ hinic3_dma_free_coherent_align(dev, &chain->cells_addr);
+
+ dma_free_coherent(dev, sizeof(*chain->wb_status),
+ chain->wb_status, chain->wb_status_paddr);
+ kfree(chain->cell_ctxt);
+ chain->cell_ctxt = NULL;
+
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock_deinit(&chain->async_lock);
+ else
+ sema_deinit(&chain->sem);
+}
+
+/**
+ * api_cmd_create_chain - create API CMD specific chain
+ * @cmd_chain: the API CMD specific chain to create
+ * @attr: attributes to set in the chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_chain(struct hinic3_api_cmd_chain **cmd_chain,
+ struct hinic3_api_cmd_chain_attr *attr)
+{
+ struct hinic3_hwdev *hwdev = attr->hwdev;
+ struct hinic3_api_cmd_chain *chain = NULL;
+ int err;
+
+ if (attr->num_cells & (attr->num_cells - 1)) {
+ sdk_err(hwdev->dev_hdl, "Invalid number of cells, must be power of 2\n");
+ return -EINVAL;
+ }
+
+ chain = kzalloc(sizeof(*chain), GFP_KERNEL);
+ if (!chain)
+ return -ENOMEM;
+
+ chain->hwdev = hwdev;
+
+ err = api_chain_init(chain, attr);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize chain\n");
+ goto chain_init_err;
+ }
+
+ err = api_cmd_create_cells(chain);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to create cells for API CMD chain\n");
+ goto create_cells_err;
+ }
+
+ err = api_cmd_chain_hw_init(chain);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize chain HW\n");
+ goto chain_hw_init_err;
+ }
+
+ *cmd_chain = chain;
+ return 0;
+
+chain_hw_init_err:
+create_cells_err:
+ api_chain_free(chain);
+
+chain_init_err:
+ kfree(chain);
+ return err;
+}
+
+/**
+ * api_cmd_destroy_chain - destroy API CMD specific chain
+ * @chain: the API CMD specific chain to destroy
+ **/
+static void api_cmd_destroy_chain(struct hinic3_api_cmd_chain *chain)
+{
+ api_chain_free(chain);
+ kfree(chain);
+}
+
+/**
+ * hinic3_api_cmd_init - Initialize all the API CMD chains
+ * @hwdev: the hardware device
+ * @chain: the API CMD chains that will be initialized
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_api_cmd_init(struct hinic3_hwdev *hwdev,
+ struct hinic3_api_cmd_chain **chain)
+{
+ void *dev = hwdev->dev_hdl;
+ struct hinic3_api_cmd_chain_attr attr;
+ u8 chain_type, i;
+ int err;
+
+ if (COMM_SUPPORT_API_CHAIN(hwdev) == 0)
+ return 0;
+
+ attr.hwdev = hwdev;
+ attr.num_cells = API_CHAIN_NUM_CELLS;
+ attr.cell_size = API_CHAIN_CELL_SIZE;
+ attr.rsp_size = API_CHAIN_RSP_DATA_SIZE;
+
+ chain_type = HINIC3_API_CMD_WRITE_TO_MGMT_CPU;
+ for (; chain_type < HINIC3_API_CMD_MAX; chain_type++) {
+ attr.chain_type = chain_type;
+
+ err = api_cmd_create_chain(&chain[chain_type], &attr);
+ if (err) {
+ sdk_err(dev, "Failed to create chain %d\n", chain_type);
+ goto create_chain_err;
+ }
+ }
+
+ return 0;
+
+create_chain_err:
+ i = HINIC3_API_CMD_WRITE_TO_MGMT_CPU;
+ for (; i < chain_type; i++)
+ api_cmd_destroy_chain(chain[i]);
+
+ return err;
+}
+
+/**
+ * hinic3_api_cmd_free - free the API CMD chains
+ * @hwdev: the hardware device
+ * @chain: the API CMD chains that will be freed
+ **/
+void hinic3_api_cmd_free(const struct hinic3_hwdev *hwdev, struct hinic3_api_cmd_chain **chain)
+{
+ u8 chain_type;
+
+ if (COMM_SUPPORT_API_CHAIN(hwdev) == 0)
+ return;
+
+ chain_type = HINIC3_API_CMD_WRITE_TO_MGMT_CPU;
+
+ for (; chain_type < HINIC3_API_CMD_MAX; chain_type++)
+ api_cmd_destroy_chain(chain[chain_type]);
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h
new file mode 100644
index 0000000..727e668
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h
@@ -0,0 +1,286 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_API_CMD_H
+#define HINIC3_API_CMD_H
+
+#include <linux/semaphore.h>
+
+#include "hinic3_eqs.h"
+#include "hinic3_hwif.h"
+
+/* api_cmd_cell.ctrl structure */
+#define HINIC3_API_CMD_CELL_CTRL_CELL_LEN_SHIFT 0
+#define HINIC3_API_CMD_CELL_CTRL_RD_DMA_ATTR_OFF_SHIFT 16
+#define HINIC3_API_CMD_CELL_CTRL_WR_DMA_ATTR_OFF_SHIFT 24
+#define HINIC3_API_CMD_CELL_CTRL_XOR_CHKSUM_SHIFT 56
+
+#define HINIC3_API_CMD_CELL_CTRL_CELL_LEN_MASK 0x3FU
+#define HINIC3_API_CMD_CELL_CTRL_RD_DMA_ATTR_OFF_MASK 0x3FU
+#define HINIC3_API_CMD_CELL_CTRL_WR_DMA_ATTR_OFF_MASK 0x3FU
+#define HINIC3_API_CMD_CELL_CTRL_XOR_CHKSUM_MASK 0xFFU
+
+#define HINIC3_API_CMD_CELL_CTRL_SET(val, member) \
+ ((((u64)(val)) & HINIC3_API_CMD_CELL_CTRL_##member##_MASK) << \
+ HINIC3_API_CMD_CELL_CTRL_##member##_SHIFT)
+
+/* api_cmd_cell.desc structure */
+#define HINIC3_API_CMD_DESC_API_TYPE_SHIFT 0
+#define HINIC3_API_CMD_DESC_RD_WR_SHIFT 1
+#define HINIC3_API_CMD_DESC_MGMT_BYPASS_SHIFT 2
+#define HINIC3_API_CMD_DESC_RESP_AEQE_EN_SHIFT 3
+#define HINIC3_API_CMD_DESC_APICHN_RSVD_SHIFT 4
+#define HINIC3_API_CMD_DESC_APICHN_CODE_SHIFT 6
+#define HINIC3_API_CMD_DESC_PRIV_DATA_SHIFT 8
+#define HINIC3_API_CMD_DESC_DEST_SHIFT 32
+#define HINIC3_API_CMD_DESC_SIZE_SHIFT 40
+#define HINIC3_API_CMD_DESC_XOR_CHKSUM_SHIFT 56
+
+#define HINIC3_API_CMD_DESC_API_TYPE_MASK 0x1U
+#define HINIC3_API_CMD_DESC_RD_WR_MASK 0x1U
+#define HINIC3_API_CMD_DESC_MGMT_BYPASS_MASK 0x1U
+#define HINIC3_API_CMD_DESC_RESP_AEQE_EN_MASK 0x1U
+#define HINIC3_API_CMD_DESC_APICHN_RSVD_MASK 0x3U
+#define HINIC3_API_CMD_DESC_APICHN_CODE_MASK 0x3U
+#define HINIC3_API_CMD_DESC_PRIV_DATA_MASK 0xFFFFFFU
+#define HINIC3_API_CMD_DESC_DEST_MASK 0x1FU
+#define HINIC3_API_CMD_DESC_SIZE_MASK 0x7FFU
+#define HINIC3_API_CMD_DESC_XOR_CHKSUM_MASK 0xFFU
+
+#define HINIC3_API_CMD_DESC_SET(val, member) \
+ ((((u64)(val)) & HINIC3_API_CMD_DESC_##member##_MASK) << \
+ HINIC3_API_CMD_DESC_##member##_SHIFT)
+
+/* api_cmd_status header */
+#define HINIC3_API_CMD_STATUS_HEADER_VALID_SHIFT 0
+#define HINIC3_API_CMD_STATUS_HEADER_CHAIN_ID_SHIFT 16
+
+#define HINIC3_API_CMD_STATUS_HEADER_VALID_MASK 0xFFU
+#define HINIC3_API_CMD_STATUS_HEADER_CHAIN_ID_MASK 0xFFU
+
+#define HINIC3_API_CMD_STATUS_HEADER_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_STATUS_HEADER_##member##_SHIFT) & \
+ HINIC3_API_CMD_STATUS_HEADER_##member##_MASK)
+
+/* API_CHAIN_REQ CSR: 0x0020+api_idx*0x080 */
+#define HINIC3_API_CMD_CHAIN_REQ_RESTART_SHIFT 1
+#define HINIC3_API_CMD_CHAIN_REQ_WB_TRIGGER_SHIFT 2
+
+#define HINIC3_API_CMD_CHAIN_REQ_RESTART_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_REQ_WB_TRIGGER_MASK 0x1U
+
+#define HINIC3_API_CMD_CHAIN_REQ_SET(val, member) \
+ (((val) & HINIC3_API_CMD_CHAIN_REQ_##member##_MASK) << \
+ HINIC3_API_CMD_CHAIN_REQ_##member##_SHIFT)
+
+#define HINIC3_API_CMD_CHAIN_REQ_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_CHAIN_REQ_##member##_SHIFT) & \
+ HINIC3_API_CMD_CHAIN_REQ_##member##_MASK)
+
+#define HINIC3_API_CMD_CHAIN_REQ_CLEAR(val, member) \
+ ((val) & (~(HINIC3_API_CMD_CHAIN_REQ_##member##_MASK \
+ << HINIC3_API_CMD_CHAIN_REQ_##member##_SHIFT)))
+
+/* API_CHAIN_CTL CSR: 0x0014+api_idx*0x080 */
+#define HINIC3_API_CMD_CHAIN_CTRL_RESTART_EN_SHIFT 1
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_ERR_SHIFT 2
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQE_EN_SHIFT 4
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQ_ID_SHIFT 8
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_CHK_EN_SHIFT 28
+#define HINIC3_API_CMD_CHAIN_CTRL_CELL_SIZE_SHIFT 30
+
+#define HINIC3_API_CMD_CHAIN_CTRL_RESTART_EN_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_ERR_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQE_EN_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQ_ID_MASK 0x3U
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_CHK_EN_MASK 0x3U
+#define HINIC3_API_CMD_CHAIN_CTRL_CELL_SIZE_MASK 0x3U
+
+#define HINIC3_API_CMD_CHAIN_CTRL_SET(val, member) \
+ (((val) & HINIC3_API_CMD_CHAIN_CTRL_##member##_MASK) << \
+ HINIC3_API_CMD_CHAIN_CTRL_##member##_SHIFT)
+
+#define HINIC3_API_CMD_CHAIN_CTRL_CLEAR(val, member) \
+ ((val) & (~(HINIC3_API_CMD_CHAIN_CTRL_##member##_MASK \
+ << HINIC3_API_CMD_CHAIN_CTRL_##member##_SHIFT)))
+
+/* api_cmd rsp header */
+#define HINIC3_API_CMD_RESP_HEAD_VALID_SHIFT 0
+#define HINIC3_API_CMD_RESP_HEAD_STATUS_SHIFT 8
+#define HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_SHIFT 16
+#define HINIC3_API_CMD_RESP_HEAD_RESP_LEN_SHIFT 24
+#define HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_SHIFT 40
+
+#define HINIC3_API_CMD_RESP_HEAD_VALID_MASK 0xFF
+#define HINIC3_API_CMD_RESP_HEAD_STATUS_MASK 0xFFU
+#define HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_MASK 0xFFU
+#define HINIC3_API_CMD_RESP_HEAD_RESP_LEN_MASK 0x1FFU
+#define HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_MASK 0xFFFFFFU
+
+#define HINIC3_API_CMD_RESP_HEAD_VALID_CODE 0xFF
+
+#define HINIC3_API_CMD_RESP_HEADER_VALID(val) \
+ (((val) & HINIC3_API_CMD_RESP_HEAD_VALID_MASK) == \
+ HINIC3_API_CMD_RESP_HEAD_VALID_CODE)
+
+#define HINIC3_API_CMD_RESP_HEAD_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_RESP_HEAD_##member##_SHIFT) & \
+ HINIC3_API_CMD_RESP_HEAD_##member##_MASK)
+
+#define HINIC3_API_CMD_RESP_HEAD_CHAIN_ID(val) \
+ (((val) >> HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_SHIFT) & \
+ HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_MASK)
+
+#define HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV(val) \
+ ((u16)(((val) >> HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_SHIFT) & \
+ HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_MASK))
+/* API_STATUS_0 CSR: 0x0030+api_idx*0x080 */
+#define HINIC3_API_CMD_STATUS_CONS_IDX_MASK 0xFFFFFFU
+#define HINIC3_API_CMD_STATUS_CONS_IDX_SHIFT 0
+
+#define HINIC3_API_CMD_STATUS_FSM_MASK 0xFU
+#define HINIC3_API_CMD_STATUS_FSM_SHIFT 24
+
+#define HINIC3_API_CMD_STATUS_CHKSUM_ERR_MASK 0x3U
+#define HINIC3_API_CMD_STATUS_CHKSUM_ERR_SHIFT 28
+
+#define HINIC3_API_CMD_STATUS_CPLD_ERR_MASK 0x1U
+#define HINIC3_API_CMD_STATUS_CPLD_ERR_SHIFT 30
+
+#define HINIC3_API_CMD_STATUS_CONS_IDX(val) \
+ ((val) & HINIC3_API_CMD_STATUS_CONS_IDX_MASK)
+
+#define HINIC3_API_CMD_STATUS_CHKSUM_ERR(val) \
+ (((val) >> HINIC3_API_CMD_STATUS_CHKSUM_ERR_SHIFT) & \
+ HINIC3_API_CMD_STATUS_CHKSUM_ERR_MASK)
+
+#define HINIC3_API_CMD_STATUS_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_STATUS_##member##_SHIFT) & \
+ HINIC3_API_CMD_STATUS_##member##_MASK)
+
+enum hinic3_api_cmd_chain_type {
+ /* write to mgmt cpu command with completion */
+ HINIC3_API_CMD_WRITE_TO_MGMT_CPU = 2,
+ /* multi read command with completion notification - not used */
+ HINIC3_API_CMD_MULTI_READ = 3,
+ /* write command without completion notification */
+ HINIC3_API_CMD_POLL_WRITE = 4,
+ /* read command without completion notification */
+ HINIC3_API_CMD_POLL_READ = 5,
+	/* async write to mgmt cpu command without completion notification */
+ HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU = 6,
+ HINIC3_API_CMD_MAX,
+};
+
+struct hinic3_api_cmd_status {
+ u64 header;
+ u32 buf_desc;
+ u32 cell_addr_hi;
+ u32 cell_addr_lo;
+ u32 rsvd0;
+ u64 rsvd1;
+};
+
+/* HW struct */
+struct hinic3_api_cmd_cell {
+ u64 ctrl;
+
+ /* address is 64 bit in HW struct */
+ u64 next_cell_paddr;
+
+ u64 desc;
+
+ /* HW struct */
+ union {
+ struct {
+ u64 hw_cmd_paddr;
+ } write;
+
+ struct {
+ u64 hw_wb_resp_paddr;
+ u64 hw_cmd_paddr;
+ } read;
+ };
+};
+
+struct hinic3_api_cmd_resp_fmt {
+ u64 header;
+ u64 resp_data;
+};
+
+struct hinic3_api_cmd_cell_ctxt {
+ struct hinic3_api_cmd_cell *cell_vaddr;
+
+ void *api_cmd_vaddr;
+
+ struct hinic3_api_cmd_resp_fmt *resp;
+
+ struct completion done;
+ int status;
+
+ u32 saved_prod_idx;
+ struct hinic3_hwdev *hwdev;
+};
+
+struct hinic3_api_cmd_chain_attr {
+ struct hinic3_hwdev *hwdev;
+ enum hinic3_api_cmd_chain_type chain_type;
+
+ u32 num_cells;
+ u16 rsp_size;
+ u16 cell_size;
+};
+
+struct hinic3_api_cmd_chain {
+ struct hinic3_hwdev *hwdev;
+ enum hinic3_api_cmd_chain_type chain_type;
+
+ u32 num_cells;
+ u16 cell_size;
+ u16 rsp_size;
+ u32 rsvd1;
+
+	/* HW members are in 24-bit format */
+ u32 prod_idx;
+ u32 cons_idx;
+
+ struct semaphore sem;
+	/* Async commands cannot sleep, so a spinlock is used instead of the semaphore */
+ spinlock_t async_lock;
+
+ dma_addr_t wb_status_paddr;
+ struct hinic3_api_cmd_status *wb_status;
+
+ dma_addr_t head_cell_paddr;
+ struct hinic3_api_cmd_cell *head_node;
+
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ struct hinic3_api_cmd_cell *curr_node;
+
+ struct hinic3_dma_addr_align cells_addr;
+
+ u8 *cell_vaddr_base;
+ u64 cell_paddr_base;
+ u8 *rsp_vaddr_base;
+ u64 rsp_paddr_base;
+ u8 *buf_vaddr_base;
+ u64 buf_paddr_base;
+ u64 cell_size_align;
+ u64 rsp_size_align;
+ u64 buf_size_align;
+
+ u64 rsvd2;
+};
+
+int hinic3_api_cmd_write(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size);
+
+int hinic3_api_cmd_read(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size, void *ack, u16 ack_size);
+
+int hinic3_api_cmd_init(struct hinic3_hwdev *hwdev,
+ struct hinic3_api_cmd_chain **chain);
+
+void hinic3_api_cmd_free(const struct hinic3_hwdev *hwdev, struct hinic3_api_cmd_chain **chain);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c
new file mode 100644
index 0000000..ceb7636
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c
@@ -0,0 +1,1575 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/errno.h>
+#include <linux/completion.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "npu_cmdq_base_defs.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_eqs.h"
+#include "hinic3_common.h"
+#include "hinic3_wq.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_hwif.h"
+#include "hinic3_cmdq.h"
+
+#define HINIC3_CMDQ_BUF_SIZE 2048U
+
+#define CMDQ_CMD_TIMEOUT 5000 /* millisecond */
+
+#define UPPER_8_BITS(data) (((data) >> 8) & 0xFF)
+#define LOWER_8_BITS(data) ((data) & 0xFF)
+
+#define CMDQ_DB_INFO_HI_PROD_IDX_SHIFT 0
+#define CMDQ_DB_INFO_HI_PROD_IDX_MASK 0xFFU
+#define CMDQ_DB_INFO_SET(val, member) \
+ ((((u32)(val)) & CMDQ_DB_INFO_##member##_MASK) << \
+ CMDQ_DB_INFO_##member##_SHIFT)
+
+#define CMDQ_DB_HEAD_QUEUE_TYPE_SHIFT 23
+#define CMDQ_DB_HEAD_CMDQ_TYPE_SHIFT 24
+#define CMDQ_DB_HEAD_SRC_TYPE_SHIFT 27
+#define CMDQ_DB_HEAD_QUEUE_TYPE_MASK 0x1U
+#define CMDQ_DB_HEAD_CMDQ_TYPE_MASK 0x7U
+#define CMDQ_DB_HEAD_SRC_TYPE_MASK 0x1FU
+#define CMDQ_DB_HEAD_SET(val, member) \
+ ((((u32)(val)) & CMDQ_DB_HEAD_##member##_MASK) << \
+ CMDQ_DB_HEAD_##member##_SHIFT)
+
+#define CMDQ_CTRL_PI_SHIFT 0
+#define CMDQ_CTRL_CMD_SHIFT 16
+#define CMDQ_CTRL_MOD_SHIFT 24
+#define CMDQ_CTRL_ACK_TYPE_SHIFT 29
+#define CMDQ_CTRL_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_CTRL_PI_MASK 0xFFFFU
+#define CMDQ_CTRL_CMD_MASK 0xFFU
+#define CMDQ_CTRL_MOD_MASK 0x1FU
+#define CMDQ_CTRL_ACK_TYPE_MASK 0x3U
+#define CMDQ_CTRL_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_CTRL_SET(val, member) \
+ ((((u32)(val)) & CMDQ_CTRL_##member##_MASK) << \
+ CMDQ_CTRL_##member##_SHIFT)
+
+#define CMDQ_CTRL_GET(val, member) \
+ (((val) >> CMDQ_CTRL_##member##_SHIFT) & \
+ CMDQ_CTRL_##member##_MASK)
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_SHIFT 0
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_SHIFT 15
+#define CMDQ_WQE_HEADER_DATA_FMT_SHIFT 22
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_SHIFT 23
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_SHIFT 27
+#define CMDQ_WQE_HEADER_CTRL_LEN_SHIFT 29
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_MASK 0xFFU
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_DATA_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_CTRL_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_WQE_HEADER_SET(val, member) \
+ ((((u32)(val)) & CMDQ_WQE_HEADER_##member##_MASK) << \
+ CMDQ_WQE_HEADER_##member##_SHIFT)
+
+#define CMDQ_WQE_HEADER_GET(val, member) \
+ (((val) >> CMDQ_WQE_HEADER_##member##_SHIFT) & \
+ CMDQ_WQE_HEADER_##member##_MASK)
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_SHIFT 0
+#define CMDQ_CTXT_EQ_ID_SHIFT 53
+#define CMDQ_CTXT_CEQ_ARM_SHIFT 61
+#define CMDQ_CTXT_CEQ_EN_SHIFT 62
+#define CMDQ_CTXT_HW_BUSY_BIT_SHIFT 63
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_EQ_ID_MASK 0xFF
+#define CMDQ_CTXT_CEQ_ARM_MASK 0x1
+#define CMDQ_CTXT_CEQ_EN_MASK 0x1
+#define CMDQ_CTXT_HW_BUSY_BIT_MASK 0x1
+
+#define CMDQ_CTXT_PAGE_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) << \
+ CMDQ_CTXT_##member##_SHIFT)
+
+#define CMDQ_CTXT_PAGE_INFO_GET(val, member) \
+ (((u64)(val) >> CMDQ_CTXT_##member##_SHIFT) & \
+ CMDQ_CTXT_##member##_MASK)
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_SHIFT 0
+#define CMDQ_CTXT_CI_SHIFT 52
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_CI_MASK 0xFFF
+
+#define CMDQ_CTXT_BLOCK_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) << \
+ CMDQ_CTXT_##member##_SHIFT)
+
+#define CMDQ_CTXT_BLOCK_INFO_GET(val, member) \
+ (((u64)(val) >> CMDQ_CTXT_##member##_SHIFT) & \
+ CMDQ_CTXT_##member##_MASK)
+
+#define SAVED_DATA_ARM_SHIFT 31
+
+#define SAVED_DATA_ARM_MASK 0x1U
+
+#define SAVED_DATA_SET(val, member) \
+ (((val) & SAVED_DATA_##member##_MASK) << \
+ SAVED_DATA_##member##_SHIFT)
+
+#define SAVED_DATA_CLEAR(val, member) \
+ ((val) & (~(SAVED_DATA_##member##_MASK << \
+ SAVED_DATA_##member##_SHIFT)))
+
+#define WQE_ERRCODE_VAL_SHIFT 0
+
+#define WQE_ERRCODE_VAL_MASK 0x7FFFFFFF
+
+#define WQE_ERRCODE_GET(val, member) \
+ (((val) >> WQE_ERRCODE_##member##_SHIFT) & \
+ WQE_ERRCODE_##member##_MASK)
+
+#define CEQE_CMDQ_TYPE_SHIFT 0
+
+#define CEQE_CMDQ_TYPE_MASK 0x7
+
+#define CEQE_CMDQ_GET(val, member) \
+ (((val) >> CEQE_CMDQ_##member##_SHIFT) & \
+ CEQE_CMDQ_##member##_MASK)
+
+#define WQE_COMPLETED(ctrl_info) CMDQ_CTRL_GET(ctrl_info, HW_BUSY_BIT)
+
+#define WQE_HEADER(wqe) ((struct hinic3_cmdq_header *)(wqe))
+
+#define CMDQ_DB_PI_OFF(pi) (((u16)LOWER_8_BITS(pi)) << 3)
+
+#define CMDQ_DB_ADDR(db_base, pi) \
+ (((u8 *)(db_base)) + CMDQ_DB_PI_OFF(pi))
+
+#define CMDQ_PFN_SHIFT 12
+#define CMDQ_PFN(addr) ((addr) >> CMDQ_PFN_SHIFT)
+
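+/* The first 8 bytes of a CMDQ WQE carry the header with the HW busy bit;
+ * cmdq_wqe_fill() writes them last so the WQE only becomes visible to HW
+ * once it is fully written.
+ */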
+#define FIRST_DATA_TO_WRITE_LAST sizeof(u64)
+
+#define WQE_LCMD_SIZE 64
+#define WQE_SCMD_SIZE 64
+
+#define COMPLETE_LEN 3
+
+#define CMDQ_WQEBB_SIZE 64
+#define CMDQ_WQE_SIZE 64
+
+#define cmdq_to_cmdqs(cmdq) container_of((cmdq) - (cmdq)->cmdq_type, \
+ struct hinic3_cmdqs, cmdq[0])
+
+#define CMDQ_SEND_CMPT_CODE 10
+#define CMDQ_COMPLETE_CMPT_CODE 11
+#define CMDQ_FORCE_STOP_CMPT_CODE 12
+
+enum cmdq_scmd_type {
+ CMDQ_SET_ARM_CMD = 2,
+};
+
+enum cmdq_wqe_type {
+ WQE_LCMD_TYPE,
+ WQE_SCMD_TYPE,
+};
+
+enum ctrl_sect_len {
+ CTRL_SECT_LEN = 1,
+ CTRL_DIRECT_SECT_LEN = 2,
+};
+
+enum bufdesc_len {
+ BUFDESC_LCMD_LEN = 2,
+ BUFDESC_SCMD_LEN = 3,
+};
+
+enum data_format {
+ DATA_SGE,
+ DATA_DIRECT,
+};
+
+enum completion_format {
+ COMPLETE_DIRECT,
+ COMPLETE_SGE,
+};
+
+enum completion_request {
+ CEQ_SET = 1,
+};
+
+enum cmdq_cmd_type {
+ SYNC_CMD_DIRECT_RESP,
+ SYNC_CMD_SGE_RESP,
+ ASYNC_CMD,
+};
+
+#define NUM_WQEBBS_FOR_CMDQ_WQE 1
+
+bool hinic3_cmdq_idle(struct hinic3_cmdq *cmdq)
+{
+ return hinic3_wq_is_empty(&cmdq->wq);
+}
+
+static void *cmdq_read_wqe(struct hinic3_wq *wq, u16 *ci)
+{
+ if (hinic3_wq_is_empty(wq))
+ return NULL;
+
+ return hinic3_wq_read_one_wqebb(wq, ci);
+}
+
+static void *cmdq_get_wqe(struct hinic3_wq *wq, u16 *pi)
+{
+ if (!hinic3_wq_free_wqebbs(wq))
+ return NULL;
+
+ return hinic3_wq_get_one_wqebb(wq, pi);
+}
+
+struct hinic3_cmd_buf *hinic3_alloc_cmd_buf(void *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ void *dev = NULL;
+
+ if (!hwdev) {
+ pr_err("Failed to alloc cmd buf, Invalid hwdev\n");
+ return NULL;
+ }
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+ dev = ((struct hinic3_hwdev *)hwdev)->dev_hdl;
+
+ cmd_buf = kzalloc(sizeof(*cmd_buf), GFP_ATOMIC);
+ if (!cmd_buf) {
+ sdk_err(dev, "Failed to allocate cmd buf\n");
+ return NULL;
+ }
+
+ cmd_buf->buf = pci_pool_alloc(cmdqs->cmd_buf_pool, GFP_ATOMIC,
+ &cmd_buf->dma_addr);
+ if (!cmd_buf->buf) {
+ sdk_err(dev, "Failed to allocate cmdq cmd buf from the pool\n");
+ goto alloc_pci_buf_err;
+ }
+
+ cmd_buf->size = HINIC3_CMDQ_BUF_SIZE;
+ atomic_set(&cmd_buf->ref_cnt, 1);
+
+ return cmd_buf;
+
+alloc_pci_buf_err:
+ kfree(cmd_buf);
+ return NULL;
+}
+EXPORT_SYMBOL(hinic3_alloc_cmd_buf);
+
+void hinic3_free_cmd_buf(void *hwdev, struct hinic3_cmd_buf *cmd_buf)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+
+ if (!hwdev || !cmd_buf) {
+ pr_err("Failed to free cmd buf, hwdev or cmd_buf is NULL\n");
+ return;
+ }
+
+ if (!atomic_dec_and_test(&cmd_buf->ref_cnt))
+ return;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ pci_pool_free(cmdqs->cmd_buf_pool, cmd_buf->buf, cmd_buf->dma_addr);
+ kfree(cmd_buf);
+}
+EXPORT_SYMBOL(hinic3_free_cmd_buf);
+
+static void cmdq_set_completion(struct hinic3_cmdq_completion *complete,
+ struct hinic3_cmd_buf *buf_out)
+{
+ struct hinic3_sge_resp *sge_resp = &complete->sge_resp;
+
+ hinic3_set_sge(&sge_resp->sge, buf_out->dma_addr,
+ HINIC3_CMDQ_BUF_SIZE);
+}
+
+static void cmdq_set_lcmd_bufdesc(struct hinic3_cmdq_wqe_lcmd *wqe,
+ struct hinic3_cmd_buf *buf_in)
+{
+ hinic3_set_sge(&wqe->buf_desc.sge, buf_in->dma_addr, buf_in->size);
+}
+
+static void cmdq_fill_db(struct hinic3_cmdq_db *db,
+ enum hinic3_cmdq_type cmdq_type, u16 prod_idx)
+{
+ db->db_info = CMDQ_DB_INFO_SET(UPPER_8_BITS(prod_idx), HI_PROD_IDX);
+
+ db->db_head = CMDQ_DB_HEAD_SET(HINIC3_DB_CMDQ_TYPE, QUEUE_TYPE) |
+ CMDQ_DB_HEAD_SET(cmdq_type, CMDQ_TYPE) |
+ CMDQ_DB_HEAD_SET(HINIC3_DB_SRC_CMDQ_TYPE, SRC_TYPE);
+}
+
+static void cmdq_set_db(struct hinic3_cmdq *cmdq,
+ enum hinic3_cmdq_type cmdq_type, u16 prod_idx)
+{
+ struct hinic3_cmdq_db db = {0};
+ u8 *db_base = cmdq->hwdev->cmdqs->cmdqs_db_base;
+
+ cmdq_fill_db(&db, cmdq_type, prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ db.db_info = hinic3_hw_be32(db.db_info);
+ db.db_head = hinic3_hw_be32(db.db_head);
+
+ wmb(); /* write all before the doorbell */
+ writeq(*((u64 *)&db), CMDQ_DB_ADDR(db_base, prod_idx));
+}
+
+static void cmdq_wqe_fill(void *dst, const void *src)
+{
+ memcpy((u8 *)dst + FIRST_DATA_TO_WRITE_LAST,
+ (u8 *)src + FIRST_DATA_TO_WRITE_LAST,
+ CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST);
+
+ wmb(); /* The first 8 bytes should be written last */
+
+ *(u64 *)dst = *(u64 *)src;
+}
+
+static void cmdq_prepare_wqe_ctrl(struct hinic3_cmdq_wqe *wqe, int wrapped,
+ u8 mod, u8 cmd, u16 prod_idx,
+ enum completion_format complete_format,
+ enum data_format data_format,
+ enum bufdesc_len buf_len)
+{
+ struct hinic3_ctrl *ctrl = NULL;
+ enum ctrl_sect_len ctrl_len;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ struct hinic3_cmdq_wqe_scmd *wqe_scmd = NULL;
+ u32 saved_data = WQE_HEADER(wqe)->saved_data;
+
+ if (data_format == DATA_SGE) {
+ wqe_lcmd = &wqe->wqe_lcmd;
+
+ wqe_lcmd->status.status_info = 0;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_len = CTRL_SECT_LEN;
+ } else {
+ wqe_scmd = &wqe->inline_wqe.wqe_scmd;
+
+ wqe_scmd->status.status_info = 0;
+ ctrl = &wqe_scmd->ctrl;
+ ctrl_len = CTRL_DIRECT_SECT_LEN;
+ }
+
+ ctrl->ctrl_info = CMDQ_CTRL_SET(prod_idx, PI) |
+ CMDQ_CTRL_SET(cmd, CMD) |
+ CMDQ_CTRL_SET(mod, MOD) |
+ CMDQ_CTRL_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE);
+
+ WQE_HEADER(wqe)->header_info =
+ CMDQ_WQE_HEADER_SET(buf_len, BUFDESC_LEN) |
+ CMDQ_WQE_HEADER_SET(complete_format, COMPLETE_FMT) |
+ CMDQ_WQE_HEADER_SET(data_format, DATA_FMT) |
+ CMDQ_WQE_HEADER_SET(CEQ_SET, COMPLETE_REQ) |
+ CMDQ_WQE_HEADER_SET(COMPLETE_LEN, COMPLETE_SECT_LEN) |
+ CMDQ_WQE_HEADER_SET(ctrl_len, CTRL_LEN) |
+ CMDQ_WQE_HEADER_SET((u32)wrapped, HW_BUSY_BIT);
+
+ if (cmd == CMDQ_SET_ARM_CMD && mod == HINIC3_MOD_COMM) {
+ saved_data &= SAVED_DATA_CLEAR(saved_data, ARM);
+ WQE_HEADER(wqe)->saved_data = saved_data |
+ SAVED_DATA_SET(1, ARM);
+ } else {
+ saved_data &= SAVED_DATA_CLEAR(saved_data, ARM);
+ WQE_HEADER(wqe)->saved_data = saved_data;
+ }
+}
+
+static void cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe,
+ enum cmdq_cmd_type cmd_type,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out, int wrapped,
+ u8 mod, u8 cmd, u16 prod_idx)
+{
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = &wqe->wqe_lcmd;
+ enum completion_format complete_format = COMPLETE_DIRECT;
+
+ switch (cmd_type) {
+ case SYNC_CMD_DIRECT_RESP:
+ wqe_lcmd->completion.direct_resp = 0;
+ break;
+ case SYNC_CMD_SGE_RESP:
+ if (buf_out) {
+ complete_format = COMPLETE_SGE;
+ cmdq_set_completion(&wqe_lcmd->completion,
+ buf_out);
+ }
+ break;
+ case ASYNC_CMD:
+ wqe_lcmd->completion.direct_resp = 0;
+ wqe_lcmd->buf_desc.saved_async_buf = (u64)(buf_in);
+ break;
+ }
+
+ cmdq_prepare_wqe_ctrl(wqe, wrapped, mod, cmd, prod_idx, complete_format,
+ DATA_SGE, BUFDESC_LCMD_LEN);
+
+ cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in);
+}
+
+static void cmdq_update_cmd_status(struct hinic3_cmdq *cmdq, u16 prod_idx,
+ struct hinic3_cmdq_wqe *wqe)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd;
+ u32 status_info;
+
+ wqe_lcmd = &wqe->wqe_lcmd;
+ cmd_info = &cmdq->cmd_infos[prod_idx];
+
+ if (cmd_info->errcode) {
+ status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info);
+ *cmd_info->errcode = WQE_ERRCODE_GET(status_info, VAL);
+ }
+
+ if (cmd_info->direct_resp)
+ *cmd_info->direct_resp =
+ hinic3_hw_cpu32(wqe_lcmd->completion.direct_resp);
+}
+
+static int hinic3_cmdq_sync_timeout_check(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 pi)
+{
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd;
+ struct hinic3_ctrl *ctrl;
+ u32 ctrl_info;
+
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info);
+ if (!WQE_COMPLETED(ctrl_info)) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Cmdq sync command check busy bit not set\n");
+ return -EFAULT;
+ }
+
+ cmdq_update_cmd_status(cmdq, pi, wqe);
+
+ sdk_info(cmdq->hwdev->dev_hdl, "Cmdq sync command check succeed\n");
+ return 0;
+}
+
+static void clear_cmd_info(struct hinic3_cmdq_cmd_info *cmd_info,
+ const struct hinic3_cmdq_cmd_info *saved_cmd_info)
+{
+ if (cmd_info->errcode == saved_cmd_info->errcode)
+ cmd_info->errcode = NULL;
+
+ if (cmd_info->done == saved_cmd_info->done)
+ cmd_info->done = NULL;
+
+ if (cmd_info->direct_resp == saved_cmd_info->direct_resp)
+ cmd_info->direct_resp = NULL;
+}
+
+static int cmdq_ceq_handler_status(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_cmdq_cmd_info *saved_cmd_info,
+ u64 curr_msg_id, u16 curr_prod_idx,
+ struct hinic3_cmdq_wqe *curr_wqe,
+ u32 timeout)
+{
+ ulong timeo;
+ int err;
+ ulong end = jiffies + msecs_to_jiffies(timeout);
+
+ if (cmdq->hwdev->poll) {
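+		/* In polling mode the CEQ handler is driven manually until the
+		 * command completes or the timeout expires.
+		 */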
+ while (time_before(jiffies, end)) {
+ hinic3_cmdq_ceq_handler(cmdq->hwdev, 0);
+ if (saved_cmd_info->done->done != 0)
+ return 0;
+ usleep_range(9, 10); /* sleep 9 us ~ 10 us */
+ }
+ } else {
+ timeo = msecs_to_jiffies(timeout);
+ if (wait_for_completion_timeout(saved_cmd_info->done, timeo))
+ return 0;
+ }
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ if (cmd_info->cmpt_code == saved_cmd_info->cmpt_code)
+ cmd_info->cmpt_code = NULL;
+
+ if (*saved_cmd_info->cmpt_code == CMDQ_COMPLETE_CMPT_CODE) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Cmdq direct sync command has been completed\n");
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ return 0;
+ }
+
+ if (curr_msg_id == cmd_info->cmdq_msg_id) {
+ err = hinic3_cmdq_sync_timeout_check(cmdq, curr_wqe,
+ curr_prod_idx);
+ if (err)
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_TIMEOUT;
+ else
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_FAKE_TIMEOUT;
+ } else {
+ err = -ETIMEDOUT;
+		sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command current msg id mismatches cmd_info msg id\n");
+ }
+
+ clear_cmd_info(cmd_info, saved_cmd_info);
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+
+ if (err == 0)
+ return 0;
+
+ hinic3_dump_ceq_info(cmdq->hwdev);
+
+ return -ETIMEDOUT;
+}
+
+static int wait_cmdq_sync_cmd_completion(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_cmdq_cmd_info *saved_cmd_info,
+ u64 curr_msg_id, u16 curr_prod_idx,
+ struct hinic3_cmdq_wqe *curr_wqe, u32 timeout)
+{
+ return cmdq_ceq_handler_status(cmdq, cmd_info, saved_cmd_info,
+ curr_msg_id, curr_prod_idx,
+ curr_wqe, timeout);
+}
+
+static int cmdq_msg_lock(struct hinic3_cmdq *cmdq, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = cmdq_to_cmdqs(cmdq);
+
+ /* Keep wrapped and doorbell index correct. bh - for tasklet(ceq) */
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ if (cmdqs->lock_channel_en && test_bit(channel, &cmdqs->channel_stop)) {
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static void cmdq_msg_unlock(struct hinic3_cmdq *cmdq)
+{
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+static void cmdq_clear_cmd_buf(struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_hwdev *hwdev)
+{
+ if (cmd_info->buf_in)
+ hinic3_free_cmd_buf(hwdev, cmd_info->buf_in);
+
+ if (cmd_info->buf_out)
+ hinic3_free_cmd_buf(hwdev, cmd_info->buf_out);
+
+ cmd_info->buf_in = NULL;
+ cmd_info->buf_out = NULL;
+}
+
+static void cmdq_set_cmd_buf(struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_hwdev *hwdev,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out)
+{
+ cmd_info->buf_in = buf_in;
+ cmd_info->buf_out = buf_out;
+
+ if (buf_in)
+ atomic_inc(&buf_in->ref_cnt);
+
+ if (buf_out)
+ atomic_inc(&buf_out->ref_cnt);
+}
+
+static int cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, u8 mod,
+ u8 cmd, struct hinic3_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_wq *wq = &cmdq->wq;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL, wqe;
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL, saved_cmd_info;
+ struct completion done;
+ u16 curr_prod_idx, next_prod_idx;
+ int wrapped, errcode = 0, wqe_size = WQE_LCMD_SIZE;
+ int cmpt_code = CMDQ_SEND_CMPT_CODE;
+ u64 curr_msg_id;
+ int err;
+ u32 real_timeout;
+
+ err = cmdq_msg_lock(cmdq, channel);
+ if (err)
+ return err;
+
+ curr_wqe = cmdq_get_wqe(wq, &curr_prod_idx);
+ if (!curr_wqe) {
+ cmdq_msg_unlock(cmdq);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + NUM_WQEBBS_FOR_CMDQ_WQE;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = (cmdq->wrapped == 0) ? 1 : 0;
+ next_prod_idx -= (u16)wq->q_depth;
+ }
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+
+ init_completion(&done);
+
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP;
+ cmd_info->done = &done;
+ cmd_info->errcode = &errcode;
+ cmd_info->direct_resp = out_param;
+ cmd_info->cmpt_code = &cmpt_code;
+ cmd_info->channel = channel;
+ cmdq_set_cmd_buf(cmd_info, cmdq->hwdev, buf_in, NULL);
+
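+ /* Keep a private copy of cmd_info: the slot in cmd_infos may be
+ * cleared or reused by the completion/flush path while waiting.
+ */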
+ memcpy(&saved_cmd_info, cmd_info, sizeof(struct hinic3_cmdq_cmd_info));
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL,
+ wrapped, mod, cmd, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ hinic3_hw_be32_len(&wqe, wqe_size);
+
+ /* CMDQ WQE is not shadow, therefore wqe will be written to wq */
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ (cmd_info->cmdq_msg_id)++;
+ curr_msg_id = cmd_info->cmdq_msg_id;
+
+ cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx);
+
+ cmdq_msg_unlock(cmdq);
+
+ real_timeout = timeout ? timeout : CMDQ_CMD_TIMEOUT;
+ err = wait_cmdq_sync_cmd_completion(cmdq, cmd_info, &saved_cmd_info,
+ curr_msg_id, curr_prod_idx,
+ curr_wqe, real_timeout);
+ if (err) {
+ sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command(mod: %u, cmd: %u) timeout, prod idx: 0x%x\n",
+ mod, cmd, curr_prod_idx);
+ err = -ETIMEDOUT;
+ }
+
+ if (cmpt_code == CMDQ_FORCE_STOP_CMPT_CODE) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Force stop cmdq cmd, mod: %u, cmd: %u\n",
+ mod, cmd);
+ err = -EAGAIN;
+ }
+
+ destroy_completion(&done);
+ smp_rmb(); /* read error code after completion */
+
+ return (err != 0) ? err : errcode;
+}
+
+static int cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_wq *wq = &cmdq->wq;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL, wqe;
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL, saved_cmd_info;
+ struct completion done;
+ u16 curr_prod_idx, next_prod_idx;
+ int wrapped, errcode = 0, wqe_size = WQE_LCMD_SIZE;
+ int cmpt_code = CMDQ_SEND_CMPT_CODE;
+ u64 curr_msg_id;
+ int err;
+ u32 real_timeout;
+
+ err = cmdq_msg_lock(cmdq, channel);
+ if (err)
+ return err;
+
+ curr_wqe = cmdq_get_wqe(wq, &curr_prod_idx);
+ if (!curr_wqe) {
+ cmdq_msg_unlock(cmdq);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + NUM_WQEBBS_FOR_CMDQ_WQE;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = (cmdq->wrapped == 0) ? 1 : 0;
+ next_prod_idx -= (u16)wq->q_depth;
+ }
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+
+ init_completion(&done);
+
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_SGE_RESP;
+ cmd_info->done = &done;
+ cmd_info->errcode = &errcode;
+ cmd_info->direct_resp = out_param;
+ cmd_info->cmpt_code = &cmpt_code;
+ cmd_info->channel = channel;
+ cmdq_set_cmd_buf(cmd_info, cmdq->hwdev, buf_in, buf_out);
+
+ memcpy(&saved_cmd_info, cmd_info, sizeof(struct hinic3_cmdq_cmd_info));
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out,
+ wrapped, mod, cmd, curr_prod_idx);
+
+ hinic3_hw_be32_len(&wqe, wqe_size);
+
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ (cmd_info->cmdq_msg_id)++;
+ curr_msg_id = cmd_info->cmdq_msg_id;
+
+ cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx);
+
+ cmdq_msg_unlock(cmdq);
+
+ real_timeout = timeout ? timeout : CMDQ_CMD_TIMEOUT;
+ err = wait_cmdq_sync_cmd_completion(cmdq, cmd_info, &saved_cmd_info,
+ curr_msg_id, curr_prod_idx,
+ curr_wqe, real_timeout);
+ if (err) {
+ sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command(mod: %u, cmd: %u) timeout, prod idx: 0x%x\n",
+ mod, cmd, curr_prod_idx);
+ err = -ETIMEDOUT;
+ }
+
+ if (cmpt_code == CMDQ_FORCE_STOP_CMPT_CODE) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Force stop cmdq cmd, mod: %u, cmd: %u\n",
+ mod, cmd);
+ err = -EAGAIN;
+ }
+
+ destroy_completion(&done);
+ smp_rmb(); /* read error code after completion */
+
+ return (err != 0) ? err : errcode;
+}
+
+static int cmdq_async_cmd(struct hinic3_cmdq *cmdq, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in, u16 channel)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ struct hinic3_wq *wq = &cmdq->wq;
+ int wqe_size = WQE_LCMD_SIZE;
+ u16 curr_prod_idx, next_prod_idx;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL, wqe;
+ int wrapped, err;
+
+ err = cmdq_msg_lock(cmdq, channel);
+ if (err)
+ return err;
+
+ curr_wqe = cmdq_get_wqe(wq, &curr_prod_idx);
+ if (!curr_wqe) {
+ cmdq_msg_unlock(cmdq);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+ next_prod_idx = curr_prod_idx + NUM_WQEBBS_FOR_CMDQ_WQE;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = (cmdq->wrapped == 0) ? 1 : 0;
+ next_prod_idx -= (u16)wq->q_depth;
+ }
+
+ cmdq_set_lcmd_wqe(&wqe, ASYNC_CMD, buf_in, NULL, wrapped,
+ mod, cmd, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ hinic3_hw_be32_len(&wqe, wqe_size);
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_ASYNC;
+ cmd_info->channel = channel;
+ /* The caller will not free the cmd_buf of the asynchronous command,
+ * so there is no need to increase the reference count here
+ */
+ cmd_info->buf_in = buf_in;
+
+ cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx);
+
+ cmdq_msg_unlock(cmdq);
+
+ return 0;
+}
+
+static int cmdq_params_valid(const void *hwdev, const struct hinic3_cmd_buf *buf_in)
+{
+ if (!buf_in || !hwdev) {
+ pr_err("Invalid CMDQ buffer addr or hwdev\n");
+ return -EINVAL;
+ }
+
+ if (!buf_in->size || buf_in->size > HINIC3_CMDQ_BUF_SIZE) {
+ pr_err("Invalid CMDQ buffer size: 0x%x\n", buf_in->size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define WAIT_CMDQ_ENABLE_TIMEOUT 300
+static int wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs)
+{
+ unsigned long end;
+
+ end = jiffies + msecs_to_jiffies(WAIT_CMDQ_ENABLE_TIMEOUT);
+ do {
+ if (cmdqs->status & HINIC3_CMDQ_ENABLE)
+ return 0;
+ } while (time_before(jiffies, end) && cmdqs->hwdev->chip_present_flag &&
+ !cmdqs->disable_flag);
+
+ cmdqs->disable_flag = 1;
+
+ return -EBUSY;
+}
+
+int hinic3_cmdq_direct_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err) {
+ pr_err("Invalid CMDQ parameters\n");
+ return err;
+ }
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disabled\n");
+ return err;
+ }
+
+ err = cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC],
+ mod, cmd, buf_in, out_param,
+ timeout, channel);
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -ETIMEDOUT;
+ else
+ return err;
+}
+EXPORT_SYMBOL(hinic3_cmdq_direct_resp);
+
+int hinic3_cmdq_detail_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disabled\n");
+ return err;
+ }
+
+ err = cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC],
+ mod, cmd, buf_in, buf_out, out_param,
+ timeout, channel);
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -ETIMEDOUT;
+ else
+ return err;
+}
+EXPORT_SYMBOL(hinic3_cmdq_detail_resp);
+
+int hinic3_cos_id_detail_resp(void *hwdev, u8 mod, u8 cmd, u8 cos_id,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out, u64 *out_param,
+ u32 timeout, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disabled\n");
+ return err;
+ }
+
+ if (cos_id >= cmdqs->cmdq_num) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq id is invalid\n");
+ return -EINVAL;
+ }
+
+ err = cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[cos_id], mod, cmd,
+ buf_in, buf_out, out_param,
+ timeout, channel);
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -ETIMEDOUT;
+ else
+ return err;
+}
+EXPORT_SYMBOL(hinic3_cos_id_detail_resp);
+
+int hinic3_cmdq_async(void *hwdev, u8 mod, u8 cmd, struct hinic3_cmd_buf *buf_in, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disabled\n");
+ return err;
+ }
+ /* LB mode 1 compatible, cmdq 0 also for async, which is sync_no_wait */
+ return cmdq_async_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod,
+ cmd, buf_in, channel);
+}
+EXPORT_SYMBOL(hinic3_cmdq_async);
+
+int hinic3_cmdq_async_cos(void *hwdev, u8 mod, u8 cmd,
+ u8 cos_id, struct hinic3_cmd_buf *buf_in,
+ u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disabled\n");
+ return err;
+ }
+
+ if (cos_id >= cmdqs->cmdq_num) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq id is invalid\n");
+ return -EINVAL;
+ }
+
+ return cmdq_async_cmd(&cmdqs->cmdq[cos_id], mod, cmd, buf_in, channel);
+}
+
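+/* Return the WQE to the wq: clear the HW busy bit in the ctrl section,
+ * mark the cmd_info slot as unused and release the wqebbs.
+ */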
+static void clear_wqe_complete_bit(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ struct hinic3_ctrl *ctrl = NULL;
+ u32 header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info);
+ enum data_format df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT);
+
+ if (df == DATA_SGE)
+ ctrl = &wqe->wqe_lcmd.ctrl;
+ else
+ ctrl = &wqe->inline_wqe.wqe_scmd.ctrl;
+
+ /* clear HW busy bit */
+ ctrl->ctrl_info = 0;
+ cmdq->cmd_infos[ci].cmd_type = HINIC3_CMD_TYPE_NONE;
+
+ wmb(); /* make sure the wqe is cleared before releasing the wqebbs */
+
+ hinic3_wq_put_wqebbs(&cmdq->wq, NUM_WQEBBS_FOR_CMDQ_WQE);
+}
+
+static void cmdq_sync_cmd_handler(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ spin_lock(&cmdq->cmdq_lock);
+
+ cmdq_update_cmd_status(cmdq, ci, wqe);
+
+ if (cmdq->cmd_infos[ci].cmpt_code) {
+ *cmdq->cmd_infos[ci].cmpt_code = CMDQ_COMPLETE_CMPT_CODE;
+ cmdq->cmd_infos[ci].cmpt_code = NULL;
+ }
+
+ /* make sure cmpt_code operation before done operation */
+ smp_rmb();
+
+ if (cmdq->cmd_infos[ci].done) {
+ complete(cmdq->cmd_infos[ci].done);
+ cmdq->cmd_infos[ci].done = NULL;
+ }
+
+ spin_unlock(&cmdq->cmdq_lock);
+
+ cmdq_clear_cmd_buf(&cmdq->cmd_infos[ci], cmdq->hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+}
+
+static void cmdq_async_cmd_handler(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ cmdq_clear_cmd_buf(&cmdq->cmd_infos[ci], hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+}
+
+static int cmdq_arm_ceq_handler(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ struct hinic3_ctrl *ctrl = &wqe->inline_wqe.wqe_scmd.ctrl;
+ u32 ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info);
+
+ if (!WQE_COMPLETED(ctrl_info))
+ return -EBUSY;
+
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+
+ return 0;
+}
+
+#define HINIC3_CMDQ_WQE_HEAD_LEN 32
+static void hinic3_dump_cmdq_wqe_head(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq_wqe *wqe)
+{
+ u32 i;
+ u32 *data = (u32 *)wqe;
+
+ for (i = 0; i < (HINIC3_CMDQ_WQE_HEAD_LEN / sizeof(u32)); i += 0x4) {
+ sdk_info(hwdev->dev_hdl, "wqe data: 0x%08x, 0x%08x, 0x%08x, 0x%08x\n",
+ *(data + i), *(data + i + 0x1), *(data + i + 0x2),
+ *(data + i + 0x3));
+ }
+}
+
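+/* CEQ event handler: walk the completed WQEs at the consumer index and
+ * dispatch each one according to the command type recorded in cmd_infos.
+ */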
+void hinic3_cmdq_ceq_handler(void *handle, u32 ceqe_data)
+{
+ struct hinic3_cmdqs *cmdqs = ((struct hinic3_hwdev *)handle)->cmdqs;
+ enum hinic3_cmdq_type cmdq_type = CEQE_CMDQ_GET(ceqe_data, TYPE);
+ struct hinic3_cmdq *cmdq = &cmdqs->cmdq[cmdq_type];
+ struct hinic3_hwdev *hwdev = cmdqs->hwdev;
+ struct hinic3_cmdq_wqe *wqe = NULL;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ struct hinic3_ctrl *ctrl = NULL;
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ u16 ci;
+
+ while ((wqe = cmdq_read_wqe(&cmdq->wq, &ci)) != NULL) {
+ cmd_info = &cmdq->cmd_infos[ci];
+
+ switch (cmd_info->cmd_type) {
+ case HINIC3_CMD_TYPE_NONE:
+ return;
+ case HINIC3_CMD_TYPE_TIMEOUT:
+ sdk_warn(hwdev->dev_hdl, "Cmdq timeout, q_id: %u, ci: %u\n",
+ cmdq_type, ci);
+ hinic3_dump_cmdq_wqe_head(hwdev, wqe);
+ /*lint -fallthrough */
+ case HINIC3_CMD_TYPE_FAKE_TIMEOUT:
+ cmdq_clear_cmd_buf(cmd_info, hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+ break;
+ case HINIC3_CMD_TYPE_SET_ARM:
+ /* arm_bit was set until here */
+ if (cmdq_arm_ceq_handler(cmdq, wqe, ci) != 0)
+ return;
+ break;
+ default:
+ /* only the arm bit uses an scmd wqe; all other commands use lcmd wqes */
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ if (!WQE_COMPLETED(hinic3_hw_cpu32((ctrl)->ctrl_info)))
+ return;
+
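+ /* read the completion data only after the HW
+ * completed bit has been observed
+ */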
+ dma_rmb();
+ /* For FORCE_STOP cmd_type, we also need to wait for
+ * the firmware processing to complete to prevent the
+ * firmware from accessing the released cmd_buf
+ */
+ if (cmd_info->cmd_type == HINIC3_CMD_TYPE_FORCE_STOP) {
+ cmdq_clear_cmd_buf(cmd_info, hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+ } else if (cmd_info->cmd_type == HINIC3_CMD_TYPE_ASYNC) {
+ cmdq_async_cmd_handler(hwdev, cmdq, wqe, ci);
+ } else {
+ cmdq_sync_cmd_handler(cmdq, wqe, ci);
+ }
+
+ break;
+ }
+ }
+}
+
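+/* Build the queue context that is later programmed through
+ * hinic3_set_cmdq_ctxt(): current wqe page PFN, wq block PFN and the
+ * starting consumer index.
+ */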
+static void cmdq_init_queue_ctxt(struct hinic3_cmdqs *cmdqs,
+ struct hinic3_cmdq *cmdq,
+ struct cmdq_ctxt_info *ctxt_info)
+{
+ struct hinic3_wq *wq = &cmdq->wq;
+ u64 cmdq_first_block_paddr, pfn;
+ u16 start_ci = (u16)wq->cons_idx;
+
+ pfn = CMDQ_PFN(hinic3_wq_get_first_wqe_page_addr(wq));
+
+ ctxt_info->curr_wqe_page_pfn =
+ CMDQ_CTXT_PAGE_INFO_SET(1, HW_BUSY_BIT) |
+ CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_EN) |
+ CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_ARM) |
+ CMDQ_CTXT_PAGE_INFO_SET(HINIC3_CEQ_ID_CMDQ, EQ_ID) |
+ CMDQ_CTXT_PAGE_INFO_SET(pfn, CURR_WQE_PAGE_PFN);
+
+ if (!WQ_IS_0_LEVEL_CLA(wq)) {
+ cmdq_first_block_paddr = cmdqs->wq_block_paddr;
+ pfn = CMDQ_PFN(cmdq_first_block_paddr);
+ }
+
+ ctxt_info->wq_block_pfn = CMDQ_CTXT_BLOCK_INFO_SET(start_ci, CI) |
+ CMDQ_CTXT_BLOCK_INFO_SET(pfn, WQ_BLOCK_PFN);
+}
+
+static int init_cmdq(struct hinic3_cmdq *cmdq, struct hinic3_hwdev *hwdev,
+ enum hinic3_cmdq_type q_type)
+{
+ int err;
+
+ cmdq->cmdq_type = q_type;
+ cmdq->wrapped = 1;
+ cmdq->hwdev = hwdev;
+
+ spin_lock_init(&cmdq->cmdq_lock);
+
+ cmdq->cmd_infos = kcalloc(cmdq->wq.q_depth, sizeof(*cmdq->cmd_infos),
+ GFP_KERNEL);
+ if (!cmdq->cmd_infos) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate cmdq infos\n");
+ err = -ENOMEM;
+ goto cmd_infos_err;
+ }
+
+ return 0;
+
+cmd_infos_err:
+ spin_lock_deinit(&cmdq->cmdq_lock);
+
+ return err;
+}
+
+static void free_cmdq(struct hinic3_cmdq *cmdq)
+{
+ kfree(cmdq->cmd_infos);
+ cmdq->cmd_infos = NULL;
+ spin_lock_deinit(&cmdq->cmdq_lock);
+}
+
+static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ u8 cmdq_type;
+ int err;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ err = hinic3_set_cmdq_ctxt(hwdev, cmdq_type,
+ &cmdqs->cmdq[cmdq_type].cmdq_ctxt);
+ if (err)
+ return err;
+ }
+
+ cmdqs->status |= HINIC3_CMDQ_ENABLE;
+ cmdqs->disable_flag = 0;
+
+ return 0;
+}
+
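+/* Force-complete an in-flight synchronous command so that its waiter
+ * returns -EAGAIN instead of timing out (used when a channel is stopped
+ * or the cmdq is being torn down).
+ */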
+static void cmdq_flush_sync_cmd(struct hinic3_cmdq_cmd_info *cmd_info)
+{
+ if (cmd_info->cmd_type != HINIC3_CMD_TYPE_DIRECT_RESP &&
+ cmd_info->cmd_type != HINIC3_CMD_TYPE_SGE_RESP)
+ return;
+
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_FORCE_STOP;
+
+ if (cmd_info->cmpt_code &&
+ *cmd_info->cmpt_code == CMDQ_SEND_CMPT_CODE)
+ *cmd_info->cmpt_code = CMDQ_FORCE_STOP_CMPT_CODE;
+
+ if (cmd_info->done) {
+ complete(cmd_info->done);
+ cmd_info->done = NULL;
+ cmd_info->cmpt_code = NULL;
+ cmd_info->direct_resp = NULL;
+ cmd_info->errcode = NULL;
+ }
+}
+
+void hinic3_cmdq_flush_cmd(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq *cmdq)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ u16 ci = 0;
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ while (cmdq_read_wqe(&cmdq->wq, &ci)) {
+ hinic3_wq_put_wqebbs(&cmdq->wq, NUM_WQEBBS_FOR_CMDQ_WQE);
+ cmd_info = &cmdq->cmd_infos[ci];
+
+ if (cmd_info->cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP ||
+ cmd_info->cmd_type == HINIC3_CMD_TYPE_SGE_RESP)
+ cmdq_flush_sync_cmd(cmd_info);
+ }
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+static void hinic3_cmdq_flush_channel_sync_cmd(struct hinic3_hwdev *hwdev, u16 channel)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ struct hinic3_cmdq *cmdq = NULL;
+ struct hinic3_wq *wq = NULL;
+ u16 wqe_cnt, ci, i;
+
+ if (channel >= HINIC3_CHANNEL_MAX)
+ return;
+
+ cmdq = &hwdev->cmdqs->cmdq[HINIC3_CMDQ_SYNC];
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ wq = &cmdq->wq;
+ ci = wq->cons_idx;
+ wqe_cnt = (u16)WQ_MASK_IDX(wq, wq->prod_idx +
+ wq->q_depth - wq->cons_idx);
+ for (i = 0; i < wqe_cnt; i++) {
+ cmd_info = &cmdq->cmd_infos[WQ_MASK_IDX(wq, ci + i)];
+ if (cmd_info->channel == channel)
+ cmdq_flush_sync_cmd(cmd_info);
+ }
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+void hinic3_cmdq_flush_sync_cmd(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ struct hinic3_cmdq *cmdq = NULL;
+ struct hinic3_wq *wq = NULL;
+ u16 wqe_cnt, ci, i;
+
+ cmdq = &hwdev->cmdqs->cmdq[HINIC3_CMDQ_SYNC];
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ wq = &cmdq->wq;
+ ci = wq->cons_idx;
+ wqe_cnt = (u16)WQ_MASK_IDX(wq, wq->prod_idx +
+ wq->q_depth - wq->cons_idx);
+ for (i = 0; i < wqe_cnt; i++) {
+ cmd_info = &cmdq->cmd_infos[WQ_MASK_IDX(wq, ci + i)];
+ cmdq_flush_sync_cmd(cmd_info);
+ }
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+static void cmdq_reset_all_cmd_buff(struct hinic3_cmdq *cmdq)
+{
+ u16 i;
+
+ for (i = 0; i < cmdq->wq.q_depth; i++)
+ cmdq_clear_cmd_buf(&cmdq->cmd_infos[i], cmdq->hwdev);
+}
+
+int hinic3_cmdq_set_channel_status(struct hinic3_hwdev *hwdev, u16 channel,
+ bool enable)
+{
+ if (channel >= HINIC3_CHANNEL_MAX)
+ return -EINVAL;
+
+ if (enable) {
+ clear_bit(channel, &hwdev->cmdqs->channel_stop);
+ } else {
+ set_bit(channel, &hwdev->cmdqs->channel_stop);
+ hinic3_cmdq_flush_channel_sync_cmd(hwdev, channel);
+ }
+
+ sdk_info(hwdev->dev_hdl, "%s cmdq channel 0x%x\n",
+ enable ? "Enable" : "Disable", channel);
+
+ return 0;
+}
+
+void hinic3_cmdq_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable)
+{
+ hwdev->cmdqs->lock_channel_en = enable;
+
+ sdk_info(hwdev->dev_hdl, "%s cmdq channel lock\n",
+ enable ? "Enable" : "Disable");
+}
+
+int hinic3_reinit_cmdq_ctxts(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ u8 cmdq_type;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ hinic3_cmdq_flush_cmd(hwdev, &cmdqs->cmdq[cmdq_type]);
+ cmdq_reset_all_cmd_buff(&cmdqs->cmdq[cmdq_type]);
+ cmdqs->cmdq[cmdq_type].wrapped = 1;
+ hinic3_wq_reset(&cmdqs->cmdq[cmdq_type].wq);
+ }
+
+ return hinic3_set_cmdq_ctxts(hwdev);
+}
+
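+/* Create one wq per cmdq; with 1-level CLA, additionally gather all
+ * cmdq wq page addresses into a single wq block for the queue context.
+ */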
+static int create_cmdq_wq(struct hinic3_cmdqs *cmdqs)
+{
+ u8 type, cmdq_type;
+ int err;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ err = hinic3_wq_create(cmdqs->hwdev, &cmdqs->cmdq[cmdq_type].wq,
+ HINIC3_CMDQ_DEPTH, CMDQ_WQEBB_SIZE);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Failed to create cmdq wq\n");
+ goto destroy_wq;
+ }
+ }
+
+ /* with 1-level CLA, all cmdqs' wq page addresses must be put in one wq block */
+ if (!WQ_IS_0_LEVEL_CLA(&cmdqs->cmdq[HINIC3_CMDQ_SYNC].wq)) {
+ /* cmdq wq's CLA table is up to 512B */
+#define CMDQ_WQ_CLA_SIZE 512
+ if (cmdqs->cmdq[HINIC3_CMDQ_SYNC].wq.num_wq_pages >
+ CMDQ_WQ_CLA_SIZE / sizeof(u64)) {
+ err = -EINVAL;
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq wq page num exceeds limit: %lu\n",
+ CMDQ_WQ_CLA_SIZE / sizeof(u64));
+ goto destroy_wq;
+ }
+
+ cmdqs->wq_block_vaddr =
+ dma_zalloc_coherent(cmdqs->hwdev->dev_hdl, PAGE_SIZE,
+ &cmdqs->wq_block_paddr, GFP_KERNEL);
+ if (!cmdqs->wq_block_vaddr) {
+ err = -ENOMEM;
+ sdk_err(cmdqs->hwdev->dev_hdl, "Failed to alloc cmdq wq block\n");
+ goto destroy_wq;
+ }
+
+ type = HINIC3_CMDQ_SYNC;
+ for (; type < cmdqs->cmdq_num; type++)
+ memcpy((u8 *)cmdqs->wq_block_vaddr +
+ ((u64)type * CMDQ_WQ_CLA_SIZE),
+ cmdqs->cmdq[type].wq.wq_block_vaddr,
+ cmdqs->cmdq[type].wq.num_wq_pages * sizeof(u64));
+ }
+
+ return 0;
+
+destroy_wq:
+ type = HINIC3_CMDQ_SYNC;
+ for (; type < cmdq_type; type++)
+ hinic3_wq_destroy(&cmdqs->cmdq[type].wq);
+
+ return err;
+}
+
+static void destroy_cmdq_wq(struct hinic3_cmdqs *cmdqs)
+{
+ u8 cmdq_type;
+
+ if (cmdqs->wq_block_vaddr)
+ dma_free_coherent(cmdqs->hwdev->dev_hdl, PAGE_SIZE,
+ cmdqs->wq_block_vaddr, cmdqs->wq_block_paddr);
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++)
+ hinic3_wq_destroy(&cmdqs->cmdq[cmdq_type].wq);
+}
+
+static int init_cmdqs(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ u8 cmdq_num;
+ int err = -ENOMEM;
+
+ if (COMM_SUPPORT_CMDQ_NUM(hwdev)) {
+ cmdq_num = hwdev->glb_attr.cmdq_num;
+ if (hwdev->glb_attr.cmdq_num > HINIC3_MAX_CMDQ_TYPES) {
+ sdk_warn(hwdev->dev_hdl, "Adjust cmdq num to %d\n", HINIC3_MAX_CMDQ_TYPES);
+ cmdq_num = HINIC3_MAX_CMDQ_TYPES;
+ }
+ } else {
+ cmdq_num = HINIC3_MAX_CMDQ_TYPES;
+ }
+
+ cmdqs = kzalloc(sizeof(*cmdqs), GFP_KERNEL);
+ if (!cmdqs)
+ return err;
+
+ hwdev->cmdqs = cmdqs;
+ cmdqs->hwdev = hwdev;
+ cmdqs->cmdq_num = cmdq_num;
+
+ cmdqs->cmd_buf_pool = dma_pool_create("hinic3_cmdq", hwdev->dev_hdl,
+ HINIC3_CMDQ_BUF_SIZE, HINIC3_CMDQ_BUF_SIZE, 0ULL);
+ if (!cmdqs->cmd_buf_pool) {
+ sdk_err(hwdev->dev_hdl, "Failed to create cmdq buffer pool\n");
+ goto pool_create_err;
+ }
+
+ return 0;
+
+pool_create_err:
+ kfree(cmdqs);
+
+ return err;
+}
+
+int hinic3_cmdqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ void __iomem *db_base = NULL;
+ u8 type, cmdq_type;
+ int err = -ENOMEM;
+
+ err = init_cmdqs(hwdev);
+ if (err)
+ return err;
+
+ cmdqs = hwdev->cmdqs;
+
+ err = create_cmdq_wq(cmdqs);
+ if (err)
+ goto create_wq_err;
+
+ err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate doorbell address\n");
+ goto alloc_db_err;
+ }
+
+ cmdqs->cmdqs_db_base = (u8 *)db_base;
+ for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, cmdq_type);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize cmdq type: %d\n", cmdq_type);
+ goto init_cmdq_err;
+ }
+
+ cmdq_init_queue_ctxt(cmdqs, &cmdqs->cmdq[cmdq_type],
+ &cmdqs->cmdq[cmdq_type].cmdq_ctxt);
+ }
+
+ err = hinic3_set_cmdq_ctxts(hwdev);
+ if (err)
+ goto init_cmdq_err;
+
+ return 0;
+
+init_cmdq_err:
+ for (type = HINIC3_CMDQ_SYNC; type < cmdq_type; type++)
+ free_cmdq(&cmdqs->cmdq[type]);
+
+ hinic3_free_db_addr(hwdev, cmdqs->cmdqs_db_base, NULL);
+
+alloc_db_err:
+ destroy_cmdq_wq(cmdqs);
+
+create_wq_err:
+ dma_pool_destroy(cmdqs->cmd_buf_pool);
+ kfree(cmdqs);
+
+ return err;
+}
+
+void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ u8 cmdq_type = HINIC3_CMDQ_SYNC;
+
+ cmdqs->status &= ~HINIC3_CMDQ_ENABLE;
+
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ hinic3_cmdq_flush_cmd(hwdev, &cmdqs->cmdq[cmdq_type]);
+ cmdq_reset_all_cmd_buff(&cmdqs->cmdq[cmdq_type]);
+ free_cmdq(&cmdqs->cmdq[cmdq_type]);
+ }
+
+ hinic3_free_db_addr(hwdev, cmdqs->cmdqs_db_base, NULL);
+ destroy_cmdq_wq(cmdqs);
+
+ dma_pool_destroy(cmdqs->cmd_buf_pool);
+
+ kfree(cmdqs);
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h
new file mode 100644
index 0000000..b174ad2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h
@@ -0,0 +1,204 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CMDQ_H
+#define HINIC3_CMDQ_H
+
+#include <linux/types.h>
+#include <linux/completion.h>
+#include <linux/spinlock.h>
+
+#include "mpu_inband_cmd_defs.h"
+#include "hinic3_hw.h"
+#include "hinic3_wq.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_SCMD_DATA_LEN 16
+
+#define HINIC3_CMDQ_DEPTH 4096
+
+enum hinic3_cmdq_type {
+ HINIC3_CMDQ_SYNC,
+ HINIC3_CMDQ_ASYNC,
+ HINIC3_MAX_CMDQ_TYPES = 4
+};
+
+enum hinic3_db_src_type {
+ HINIC3_DB_SRC_CMDQ_TYPE,
+ HINIC3_DB_SRC_L2NIC_SQ_TYPE,
+};
+
+enum hinic3_cmdq_db_type {
+ HINIC3_DB_SQ_RQ_TYPE,
+ HINIC3_DB_CMDQ_TYPE,
+};
+
+/* hardware define: cmdq wqe */
+struct hinic3_cmdq_header {
+ u32 header_info;
+ u32 saved_data;
+};
+
+struct hinic3_scmd_bufdesc {
+ u32 buf_len;
+ u32 rsvd;
+ u8 data[HINIC3_SCMD_DATA_LEN];
+};
+
+struct hinic3_lcmd_bufdesc {
+ struct hinic3_sge sge;
+ u32 rsvd1;
+ u64 saved_async_buf;
+ u64 rsvd3;
+};
+
+struct hinic3_cmdq_db {
+ u32 db_head;
+ u32 db_info;
+};
+
+struct hinic3_status {
+ u32 status_info;
+};
+
+struct hinic3_ctrl {
+ u32 ctrl_info;
+};
+
+struct hinic3_sge_resp {
+ struct hinic3_sge sge;
+ u32 rsvd;
+};
+
+struct hinic3_cmdq_completion {
+ union {
+ struct hinic3_sge_resp sge_resp;
+ u64 direct_resp;
+ };
+};
+
+struct hinic3_cmdq_wqe_scmd {
+ struct hinic3_cmdq_header header;
+ u64 rsvd;
+ struct hinic3_status status;
+ struct hinic3_ctrl ctrl;
+ struct hinic3_cmdq_completion completion;
+ struct hinic3_scmd_bufdesc buf_desc;
+};
+
+struct hinic3_cmdq_wqe_lcmd {
+ struct hinic3_cmdq_header header;
+ struct hinic3_status status;
+ struct hinic3_ctrl ctrl;
+ struct hinic3_cmdq_completion completion;
+ struct hinic3_lcmd_bufdesc buf_desc;
+};
+
+struct hinic3_cmdq_inline_wqe {
+ struct hinic3_cmdq_wqe_scmd wqe_scmd;
+};
+
+struct hinic3_cmdq_wqe {
+ union {
+ struct hinic3_cmdq_inline_wqe inline_wqe;
+ struct hinic3_cmdq_wqe_lcmd wqe_lcmd;
+ };
+};
+
+struct hinic3_cmdq_arm_bit {
+ u32 q_type;
+ u32 q_id;
+};
+
+enum hinic3_cmdq_status {
+ HINIC3_CMDQ_ENABLE = BIT(0),
+};
+
+enum hinic3_cmdq_cmd_type {
+ HINIC3_CMD_TYPE_NONE,
+ HINIC3_CMD_TYPE_SET_ARM,
+ HINIC3_CMD_TYPE_DIRECT_RESP,
+ HINIC3_CMD_TYPE_SGE_RESP,
+ HINIC3_CMD_TYPE_ASYNC,
+ HINIC3_CMD_TYPE_FAKE_TIMEOUT,
+ HINIC3_CMD_TYPE_TIMEOUT,
+ HINIC3_CMD_TYPE_FORCE_STOP,
+};
+
+struct hinic3_cmdq_cmd_info {
+ enum hinic3_cmdq_cmd_type cmd_type;
+ u16 channel;
+ u16 rsvd1;
+
+ struct completion *done;
+ int *errcode;
+ int *cmpt_code;
+ u64 *direct_resp;
+ u64 cmdq_msg_id;
+
+ struct hinic3_cmd_buf *buf_in;
+ struct hinic3_cmd_buf *buf_out;
+};
+
+struct hinic3_cmdq {
+ struct hinic3_wq wq;
+
+ enum hinic3_cmdq_type cmdq_type;
+ int wrapped;
+
+ /* spinlock for send cmdq commands */
+ spinlock_t cmdq_lock;
+
+ struct cmdq_ctxt_info cmdq_ctxt;
+
+ struct hinic3_cmdq_cmd_info *cmd_infos;
+
+ struct hinic3_hwdev *hwdev;
+ u64 rsvd1[2];
+};
+
+struct hinic3_cmdqs {
+ struct hinic3_hwdev *hwdev;
+
+ struct pci_pool *cmd_buf_pool;
+ /* doorbell area */
+ u8 __iomem *cmdqs_db_base;
+
+ /* All cmdq's CLA of a VF occupy a PAGE when cmdq wq is 1-level CLA */
+ dma_addr_t wq_block_paddr;
+ void *wq_block_vaddr;
+ struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES];
+
+ u32 status;
+ u32 disable_flag;
+
+ bool lock_channel_en;
+ unsigned long channel_stop;
+ u8 cmdq_num;
+ u32 rsvd1;
+ u64 rsvd2;
+};
+
+void hinic3_cmdq_ceq_handler(void *handle, u32 ceqe_data);
+
+int hinic3_reinit_cmdq_ctxts(struct hinic3_hwdev *hwdev);
+
+bool hinic3_cmdq_idle(struct hinic3_cmdq *cmdq);
+
+int hinic3_cmdqs_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev);
+
+void hinic3_cmdq_flush_cmd(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq *cmdq);
+
+int hinic3_cmdq_set_channel_status(struct hinic3_hwdev *hwdev, u16 channel,
+ bool enable);
+
+void hinic3_cmdq_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable);
+
+void hinic3_cmdq_flush_sync_cmd(struct hinic3_hwdev *hwdev);
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c
new file mode 100644
index 0000000..a942ef1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c
@@ -0,0 +1,93 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/kernel.h>
+#include <linux/io-mapping.h>
+#include <linux/delay.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+
+int hinic3_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
+ unsigned int flag,
+ struct hinic3_dma_addr_align *mem_align)
+{
+ void *vaddr = NULL, *align_vaddr = NULL;
+ dma_addr_t paddr, align_paddr;
+ u64 real_size = size;
+
+ vaddr = dma_zalloc_coherent(dev_hdl, real_size, &paddr, flag);
+ if (!vaddr)
+ return -ENOMEM;
+
+ align_paddr = ALIGN(paddr, align);
+ /* align */
+ if (align_paddr == paddr) {
+ align_vaddr = vaddr;
+ goto out;
+ }
+
+ dma_free_coherent(dev_hdl, real_size, vaddr, paddr);
+
+ /* realloc memory for align */
+ real_size = size + align;
+ vaddr = dma_zalloc_coherent(dev_hdl, real_size, &paddr, flag);
+ if (!vaddr)
+ return -ENOMEM;
+
+ align_paddr = ALIGN(paddr, align);
+ align_vaddr = (void *)((u64)vaddr + (align_paddr - paddr));
+
+out:
+ mem_align->real_size = (u32)real_size;
+ mem_align->ori_vaddr = vaddr;
+ mem_align->ori_paddr = paddr;
+ mem_align->align_vaddr = align_vaddr;
+ mem_align->align_paddr = align_paddr;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_dma_zalloc_coherent_align);
+
+void hinic3_dma_free_coherent_align(void *dev_hdl,
+ struct hinic3_dma_addr_align *mem_align)
+{
+ dma_free_coherent(dev_hdl, mem_align->real_size,
+ mem_align->ori_vaddr, mem_align->ori_paddr);
+}
+EXPORT_SYMBOL(hinic3_dma_free_coherent_align);
+
+int hinic3_wait_for_timeout(void *priv_data, wait_cpl_handler handler,
+ u32 wait_total_ms, u32 wait_once_us)
+{
+ enum hinic3_wait_return ret;
+ unsigned long end;
+ /* Take 9/10 * wait_once_us as the minimum sleep time of usleep_range */
+ u32 usleep_min = wait_once_us - wait_once_us / 10;
+
+ if (!handler)
+ return -EINVAL;
+
+ end = jiffies + msecs_to_jiffies(wait_total_ms);
+ do {
+ ret = handler(priv_data);
+ if (ret == WAIT_PROCESS_CPL)
+ return 0;
+ else if (ret == WAIT_PROCESS_ERR)
+ return -EIO;
+
+ /* msleep is accurate enough for sleeps of 20 ms or more */
+ if (wait_once_us >= 20 * USEC_PER_MSEC)
+ msleep(wait_once_us / USEC_PER_MSEC);
+ else
+ usleep_range(usleep_min, wait_once_us);
+ } while (time_before(jiffies, end));
+
+ ret = handler(priv_data);
+ if (ret == WAIT_PROCESS_CPL)
+ return 0;
+ else if (ret == WAIT_PROCESS_ERR)
+ return -EIO;
+
+ return -ETIMEDOUT;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h
new file mode 100644
index 0000000..4098d7f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h
@@ -0,0 +1,188 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CSR_H
+#define HINIC3_CSR_H
+
+/* bit30/bit31 for bar index flag
+ * 00: bar0
+ * 01: bar1
+ * 10: bar2
+ * 11: bar3
+ */
+#define HINIC3_CFG_REGS_FLAG 0x40000000
+
+#define HINIC3_MGMT_REGS_FLAG 0xC0000000
+
+#define HINIC3_REGS_FLAG_MAKS 0x3FFFFFFF
+
+#define HINIC3_VF_CFG_REG_OFFSET 0x2000
+
+#define HINIC3_HOST_CSR_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x6000)
+#define HINIC3_CSR_GLOBAL_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x6400)
+
+/* HW interface registers */
+#define HINIC3_CSR_FUNC_ATTR0_ADDR (HINIC3_CFG_REGS_FLAG + 0x0)
+#define HINIC3_CSR_FUNC_ATTR1_ADDR (HINIC3_CFG_REGS_FLAG + 0x4)
+#define HINIC3_CSR_FUNC_ATTR2_ADDR (HINIC3_CFG_REGS_FLAG + 0x8)
+#define HINIC3_CSR_FUNC_ATTR3_ADDR (HINIC3_CFG_REGS_FLAG + 0xC)
+#define HINIC3_CSR_FUNC_ATTR4_ADDR (HINIC3_CFG_REGS_FLAG + 0x10)
+#define HINIC3_CSR_FUNC_ATTR5_ADDR (HINIC3_CFG_REGS_FLAG + 0x14)
+#define HINIC3_CSR_FUNC_ATTR6_ADDR (HINIC3_CFG_REGS_FLAG + 0x18)
+
+#define HINIC3_FUNC_CSR_MAILBOX_DATA_OFF 0x80
+#define HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x0100)
+#define HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x0104)
+#define HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x0108)
+#define HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x010C)
+/* CLP registers */
+#define HINIC3_BAR3_CLP_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x0000)
+
+#define HINIC3_UCPU_CLP_SIZE_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x40)
+#define HINIC3_UCPU_CLP_REQBASE_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x44)
+#define HINIC3_UCPU_CLP_RSPBASE_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x48)
+#define HINIC3_UCPU_CLP_REQ_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x4c)
+#define HINIC3_UCPU_CLP_RSP_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x50)
+#define HINIC3_CLP_REG(member) (HINIC3_UCPU_CLP_##member##_REG)
+
+#define HINIC3_CLP_REQ_DATA HINIC3_BAR3_CLP_BASE_ADDR
+#define HINIC3_CLP_RSP_DATA (HINIC3_BAR3_CLP_BASE_ADDR + 0x1000)
+#define HINIC3_CLP_DATA(member) (HINIC3_CLP_##member##_DATA)
+
+#define HINIC3_PPF_ELECTION_OFFSET 0x0
+#define HINIC3_MPF_ELECTION_OFFSET 0x20
+
+#define HINIC3_CSR_PPF_ELECTION_ADDR \
+ (HINIC3_HOST_CSR_BASE_ADDR + HINIC3_PPF_ELECTION_OFFSET)
+
+#define HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR \
+ (HINIC3_HOST_CSR_BASE_ADDR + HINIC3_MPF_ELECTION_OFFSET)
+
+#define HINIC3_CSR_FUNC_PPF_ELECT_BASE_ADDR (HINIC3_CFG_REGS_FLAG + 0x60)
+#define HINIC3_CSR_FUNC_PPF_ELECT_PORT_STRIDE 0x4
+
+#define HINIC3_CSR_FUNC_PPF_ELECT(host_idx) \
+ (HINIC3_CSR_FUNC_PPF_ELECT_BASE_ADDR + \
+ (host_idx) * HINIC3_CSR_FUNC_PPF_ELECT_PORT_STRIDE)
+
+#define HINIC3_CSR_DMA_ATTR_TBL_ADDR (HINIC3_CFG_REGS_FLAG + 0x380)
+#define HINIC3_CSR_DMA_ATTR_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x390)
+
+/* MSI-X registers */
+#define HINIC3_CSR_MSIX_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x310)
+#define HINIC3_CSR_MSIX_CTRL_ADDR (HINIC3_CFG_REGS_FLAG + 0x300)
+#define HINIC3_CSR_MSIX_CNT_ADDR (HINIC3_CFG_REGS_FLAG + 0x304)
+#define HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR (HINIC3_CFG_REGS_FLAG + 0x58)
+
+#define HINIC3_MSI_CLR_INDIR_RESEND_TIMER_CLR_SHIFT 0
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_SET_SHIFT 1
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_CLR_SHIFT 2
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_SET_SHIFT 3
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_CLR_SHIFT 4
+#define HINIC3_MSI_CLR_INDIR_SIMPLE_INDIR_IDX_SHIFT 22
+
+#define HINIC3_MSI_CLR_INDIR_RESEND_TIMER_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_SET_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_SET_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_SIMPLE_INDIR_IDX_MASK 0x3FFU
+
+#define HINIC3_MSI_CLR_INDIR_SET(val, member) \
+ (((val) & HINIC3_MSI_CLR_INDIR_##member##_MASK) << \
+ HINIC3_MSI_CLR_INDIR_##member##_SHIFT)
+
+/* EQ registers */
+#define HINIC3_AEQ_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x210)
+#define HINIC3_CEQ_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x290)
+
+#define HINIC3_EQ_INDIR_IDX_ADDR(type) \
+ ((type == HINIC3_AEQ) ? \
+ HINIC3_AEQ_INDIR_IDX_ADDR : HINIC3_CEQ_INDIR_IDX_ADDR)
+
+#define HINIC3_AEQ_MTT_OFF_BASE_ADDR (HINIC3_CFG_REGS_FLAG + 0x240)
+#define HINIC3_CEQ_MTT_OFF_BASE_ADDR (HINIC3_CFG_REGS_FLAG + 0x2C0)
+
+#define HINIC3_CSR_EQ_PAGE_OFF_STRIDE 8
+
+#define HINIC3_AEQ_HI_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_AEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE)
+
+#define HINIC3_AEQ_LO_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_AEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE + 4)
+
+#define HINIC3_CEQ_HI_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_CEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE)
+
+#define HINIC3_CEQ_LO_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_CEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE + 4)
+
+#define HINIC3_CSR_AEQ_CTRL_0_ADDR (HINIC3_CFG_REGS_FLAG + 0x200)
+#define HINIC3_CSR_AEQ_CTRL_1_ADDR (HINIC3_CFG_REGS_FLAG + 0x204)
+#define HINIC3_CSR_AEQ_CONS_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x208)
+#define HINIC3_CSR_AEQ_PROD_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x20C)
+#define HINIC3_CSR_AEQ_CI_SIMPLE_INDIR_ADDR (HINIC3_CFG_REGS_FLAG + 0x50)
+
+#define HINIC3_CSR_CEQ_CTRL_0_ADDR (HINIC3_CFG_REGS_FLAG + 0x280)
+#define HINIC3_CSR_CEQ_CTRL_1_ADDR (HINIC3_CFG_REGS_FLAG + 0x284)
+#define HINIC3_CSR_CEQ_CONS_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x288)
+#define HINIC3_CSR_CEQ_PROD_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x28c)
+#define HINIC3_CSR_CEQ_CI_SIMPLE_INDIR_ADDR (HINIC3_CFG_REGS_FLAG + 0x54)
+
+/* API CMD registers */
+#define HINIC3_CSR_API_CMD_BASE (HINIC3_MGMT_REGS_FLAG + 0x2000)
+
+#define HINIC3_CSR_API_CMD_STRIDE 0x80
+
+#define HINIC3_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x0 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x4 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_STATUS_HI_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x8 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_STATUS_LO_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0xC + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x10 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_CTRL_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x14 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_PI_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x1C + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_REQ_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x20 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_STATUS_0_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x30 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+/* self test register */
+#define HINIC3_MGMT_HEALTH_STATUS_ADDR (HINIC3_MGMT_REGS_FLAG + 0x983c)
+
+#define HINIC3_CHIP_BASE_INFO_ADDR (HINIC3_MGMT_REGS_FLAG + 0xB02C)
+
+#define HINIC3_CHIP_ERR_STATUS0_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0EC)
+#define HINIC3_CHIP_ERR_STATUS1_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0F0)
+
+#define HINIC3_ERR_INFO0_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0F4)
+#define HINIC3_ERR_INFO1_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0F8)
+#define HINIC3_ERR_INFO2_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0FC)
+
+#define HINIC3_MULT_HOST_SLAVE_STATUS_ADDR (HINIC3_MGMT_REGS_FLAG + 0xDF30)
+#define HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR (HINIC3_MGMT_REGS_FLAG + 0xDF4C)
+#define HINIC3_MULT_HOST_MASTER_MBOX_STATUS_ADDR HINIC3_MULT_HOST_SLAVE_STATUS_ADDR
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c
new file mode 100644
index 0000000..af336f2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c
@@ -0,0 +1,997 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <net/addrconf.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/io-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/debugfs.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_lld.h"
+#include "hinic3_sriov.h"
+#include "hinic3_nictool.h"
+#include "hinic3_pci_id_tbl.h"
+#include "hinic3_hwdev.h"
+#include "cfg_mgmt_mpu_cmd_defs.h"
+#include "mpu_cmd_base_defs.h"
+#include "hinic3_dev_mgmt.h"
+
+#define HINIC3_WAIT_TOOL_CNT_TIMEOUT 10000
+#define HINIC3_WAIT_TOOL_MIN_USLEEP_TIME 9900
+#define HINIC3_WAIT_TOOL_MAX_USLEEP_TIME 10000
+#define HIGH_BDF 8
+
+static unsigned long card_bit_map;
+
+LIST_HEAD(g_hinic3_chip_list);
+
+struct list_head *get_hinic3_chip_list(void)
+{
+ return &g_hinic3_chip_list;
+}
+
+void uld_dev_hold(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(lld_dev->pdev);
+
+ atomic_inc(&pci_adapter->uld_ref_cnt[type]);
+}
+EXPORT_SYMBOL(uld_dev_hold);
+
+void uld_dev_put(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(lld_dev->pdev);
+
+ atomic_dec(&pci_adapter->uld_ref_cnt[type]);
+}
+EXPORT_SYMBOL(uld_dev_put);
+
+void lld_dev_cnt_init(struct hinic3_pcidev *pci_adapter)
+{
+ atomic_set(&pci_adapter->ref_cnt, 0);
+}
+
+void lld_dev_hold(struct hinic3_lld_dev *dev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!dev)
+ return;
+
+ pci_adapter = pci_get_drvdata(dev->pdev);
+ atomic_inc(&pci_adapter->ref_cnt);
+}
+
+void lld_dev_put(struct hinic3_lld_dev *dev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!dev)
+ return;
+
+ pci_adapter = pci_get_drvdata(dev->pdev);
+ atomic_dec(&pci_adapter->ref_cnt);
+}
+
+void wait_lld_dev_unused(struct hinic3_pcidev *pci_adapter)
+{
+ unsigned long end;
+
+ end = jiffies + msecs_to_jiffies(HINIC3_WAIT_TOOL_CNT_TIMEOUT);
+ do {
+ if (!atomic_read(&pci_adapter->ref_cnt))
+ return;
+
+ /* sleep for ~10 ms; usleep_range is more precise than msleep */
+ usleep_range(HINIC3_WAIT_TOOL_MIN_USLEEP_TIME,
+ HINIC3_WAIT_TOOL_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+}
+
+enum hinic3_lld_status {
+ HINIC3_NODE_CHANGE = BIT(0),
+};
+
+struct hinic3_lld_lock {
+ /* lock for chip list */
+ struct mutex lld_mutex;
+ unsigned long status;
+ atomic_t dev_ref_cnt;
+};
+
+struct hinic3_lld_lock g_lld_lock;
+
+#define WAIT_LLD_DEV_HOLD_TIMEOUT (10 * 60 * 1000) /* 10minutes */
+#define WAIT_LLD_DEV_NODE_CHANGED (10 * 60 * 1000) /* 10minutes */
+#define WAIT_LLD_DEV_REF_CNT_EMPTY (2 * 60 * 1000) /* 2minutes */
+#define PRINT_TIMEOUT_INTERVAL 10000
+#define MS_PER_SEC 1000
+#define LLD_LOCK_MIN_USLEEP_TIME 900
+#define LLD_LOCK_MAX_USLEEP_TIME 1000
+
+/* A node in chip_node is about to change; tools and drivers must not
+ * access the node list while the change is in progress
+ */
+void lld_lock_chip_node(void)
+{
+ unsigned long end;
+ bool timeout = true;
+ u32 loop_cnt;
+
+ mutex_lock(&g_lld_lock.lld_mutex);
+
+ loop_cnt = 0;
+ end = jiffies + msecs_to_jiffies(WAIT_LLD_DEV_NODE_CHANGED);
+ do {
+ if (!test_and_set_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status)) {
+ timeout = false;
+ break;
+ }
+
+ loop_cnt++;
+ if (loop_cnt % PRINT_TIMEOUT_INTERVAL == 0)
+ pr_warn("Waiting for lld node change to complete for %us\n",
+ loop_cnt / MS_PER_SEC);
+
+ /* sleep for ~1 ms; usleep_range is more precise than msleep */
+ usleep_range(LLD_LOCK_MIN_USLEEP_TIME,
+ LLD_LOCK_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+
+ if (timeout && test_and_set_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status))
+ pr_warn("Waiting for lld node change to complete timed out when trying to get lld lock\n");
+
+ loop_cnt = 0;
+ timeout = true;
+ end = jiffies + msecs_to_jiffies(WAIT_LLD_DEV_NODE_CHANGED);
+ do {
+ if (!atomic_read(&g_lld_lock.dev_ref_cnt)) {
+ timeout = false;
+ break;
+ }
+
+ loop_cnt++;
+ if (loop_cnt % PRINT_TIMEOUT_INTERVAL == 0)
+ pr_warn("Waiting for lld dev to become unused for %us, reference count: %d\n",
+ loop_cnt / MS_PER_SEC,
+ atomic_read(&g_lld_lock.dev_ref_cnt));
+
+ /* sleep for ~1 ms; usleep_range is more precise than msleep */
+ usleep_range(LLD_LOCK_MIN_USLEEP_TIME,
+ LLD_LOCK_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+
+ if (timeout && atomic_read(&g_lld_lock.dev_ref_cnt))
+ pr_warn("Waiting for lld dev to become unused timed out\n");
+
+ mutex_unlock(&g_lld_lock.lld_mutex);
+}
+
+void lld_unlock_chip_node(void)
+{
+ clear_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status);
+}
+
+/* When tools or other drivers want to get a node of chip_node, use this
+ * function to prevent the node from being freed
+ */
+void lld_hold(void)
+{
+ unsigned long end;
+ u32 loop_cnt = 0;
+
+ /* ensure there have not any chip node in changing */
+ mutex_lock(&g_lld_lock.lld_mutex);
+
+ end = jiffies + msecs_to_jiffies(WAIT_LLD_DEV_HOLD_TIMEOUT);
+ do {
+ if (!test_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status))
+ break;
+
+ loop_cnt++;
+
+ if (loop_cnt % PRINT_TIMEOUT_INTERVAL == 0)
+ pr_warn("Waiting for lld node change to complete for %us\n",
+ loop_cnt / MS_PER_SEC);
+ /* sleep for ~1 ms; usleep_range is more precise than msleep */
+ usleep_range(LLD_LOCK_MIN_USLEEP_TIME,
+ LLD_LOCK_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+
+ if (test_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status))
+ pr_warn("Waiting for lld node change to complete timed out when trying to hold lld dev\n");
+
+ atomic_inc(&g_lld_lock.dev_ref_cnt);
+ mutex_unlock(&g_lld_lock.lld_mutex);
+}
+
+void lld_put(void)
+{
+ atomic_dec(&g_lld_lock.dev_ref_cnt);
+}
+
+void hinic3_lld_lock_init(void)
+{
+ mutex_init(&g_lld_lock.lld_mutex);
+ atomic_set(&g_lld_lock.dev_ref_cnt, 0);
+}
+
+void hinic3_get_all_chip_id(void *id_info)
+{
+ struct nic_card_id *card_id = (struct nic_card_id *)id_info;
+ struct card_node *chip_node = NULL;
+ int i = 0;
+ int id, err;
+
+ lld_hold();
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ err = sscanf(chip_node->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err != 1) {
+ pr_err("Failed to get hinic3 id\n");
+ continue;
+ }
+ card_id->id[i] = (u32)id;
+ i++;
+ }
+ lld_put();
+ card_id->num = (u32)i;
+}
+
+int hinic3_bar_mmap_param_valid(phys_addr_t phy_addr, unsigned long vmsize)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ u64 bar1_phy_addr = 0;
+ u64 bar3_phy_addr = 0;
+ u64 bar1_size = 0;
+ u64 bar3_size = 0;
+
+ lld_hold();
+
+ /* get PF bar1 or bar3 physical address to verify */
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ bar1_phy_addr = pci_resource_start(dev->pcidev, HINIC3_PF_PCI_CFG_REG_BAR);
+ bar1_size = pci_resource_len(dev->pcidev, HINIC3_PF_PCI_CFG_REG_BAR);
+
+ bar3_phy_addr = pci_resource_start(dev->pcidev, HINIC3_PCI_MGMT_REG_BAR);
+ bar3_size = pci_resource_len(dev->pcidev, HINIC3_PCI_MGMT_REG_BAR);
+ if ((phy_addr == bar1_phy_addr && vmsize <= bar1_size) ||
+ (phy_addr == bar3_phy_addr && vmsize <= bar3_size)) {
+ lld_put();
+ return 0;
+ }
+ }
+ }
+
+ lld_put();
+ return -EINVAL;
+}
+
+void hinic3_get_card_func_info_by_card_name(const char *chip_name,
+ struct hinic3_card_func_info *card_func)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ struct func_pdev_info *pdev_info = NULL;
+
+ card_func->num_pf = 0;
+
+ lld_hold();
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ if (strncmp(chip_node->chip_name, chip_name, IFNAMSIZ))
+ continue;
+
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ pdev_info = &card_func->pdev_info[card_func->num_pf];
+ pdev_info->bar1_size =
+ pci_resource_len(dev->pcidev,
+ HINIC3_PF_PCI_CFG_REG_BAR);
+ pdev_info->bar1_phy_addr =
+ pci_resource_start(dev->pcidev,
+ HINIC3_PF_PCI_CFG_REG_BAR);
+
+ pdev_info->bar3_size =
+ pci_resource_len(dev->pcidev,
+ HINIC3_PCI_MGMT_REG_BAR);
+ pdev_info->bar3_phy_addr =
+ pci_resource_start(dev->pcidev,
+ HINIC3_PCI_MGMT_REG_BAR);
+
+ card_func->num_pf++;
+ if (card_func->num_pf >= MAX_SIZE) {
+ lld_put();
+ return;
+ }
+ }
+ }
+
+ lld_put();
+}
+
+static bool is_pcidev_match_chip_name(const char *ifname, struct hinic3_pcidev *dev,
+ struct card_node *chip_node, enum func_type type)
+{
+ if (!strncmp(chip_node->chip_name, ifname, IFNAMSIZ)) {
+ if (hinic3_func_type(dev->hwdev) != type)
+ return false;
+ return true;
+ }
+
+ return false;
+}
+
+static struct hinic3_lld_dev *get_dst_type_lld_dev_by_chip_name(const char *ifname,
+ enum func_type type)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (is_pcidev_match_chip_name(ifname, dev, chip_node, type))
+ return &dev->lld_dev;
+ }
+ }
+
+ return NULL;
+}
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_name(const char *chip_name)
+{
+ struct hinic3_lld_dev *dev = NULL;
+
+ lld_hold();
+
+ dev = get_dst_type_lld_dev_by_chip_name(chip_name, TYPE_PPF);
+ if (dev)
+ goto out;
+
+ dev = get_dst_type_lld_dev_by_chip_name(chip_name, TYPE_PF);
+ if (dev)
+ goto out;
+
+ dev = get_dst_type_lld_dev_by_chip_name(chip_name, TYPE_VF);
+out:
+ if (dev)
+ lld_dev_hold(dev);
+ lld_put();
+ return dev;
+}
+
+static int get_dynamic_uld_dev_name(struct hinic3_pcidev *dev, enum hinic3_service_type type,
+ char *ifname)
+{
+ u32 out_size = IFNAMSIZ;
+
+ if (!g_uld_info[type].ioctl)
+ return -EFAULT;
+
+ return g_uld_info[type].ioctl(dev->uld_dev[type], GET_ULD_DEV_NAME,
+ NULL, 0, ifname, &out_size);
+}
+
+static bool is_pcidev_match_dev_name(const char *dev_name, struct hinic3_pcidev *dev,
+ enum hinic3_service_type type)
+{
+ enum hinic3_service_type i;
+ char nic_uld_name[IFNAMSIZ] = {0};
+ int err;
+
+ if (type > SERVICE_T_MAX)
+ return false;
+
+ if (type == SERVICE_T_MAX) {
+ for (i = SERVICE_T_OVS; i < SERVICE_T_MAX; i++) {
+ if (!strncmp(dev->uld_dev_name[i], dev_name, IFNAMSIZ))
+ return true;
+ }
+ } else {
+ if (!strncmp(dev->uld_dev_name[type], dev_name, IFNAMSIZ))
+ return true;
+ }
+
+ err = get_dynamic_uld_dev_name(dev, SERVICE_T_NIC, (char *)nic_uld_name);
+ if (err == 0) {
+ if (!strncmp(nic_uld_name, dev_name, IFNAMSIZ))
+ return true;
+ }
+
+ return false;
+}
+
+static struct hinic3_lld_dev *get_lld_dev_by_dev_name(const char *dev_name,
+ enum hinic3_service_type type, bool hold)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (is_pcidev_match_dev_name(dev_name, dev, type)) {
+ if (hold)
+ lld_dev_hold(&dev->lld_dev);
+ lld_put();
+ return &dev->lld_dev;
+ }
+ }
+ }
+
+ lld_put();
+
+ return NULL;
+}
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_and_port(const char *chip_name, u8 port_id)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ if (hinic3_physical_port_id(dev->hwdev) == port_id &&
+ !strncmp(chip_node->chip_name, chip_name, IFNAMSIZ)) {
+ lld_dev_hold(&dev->lld_dev);
+ lld_put();
+
+ return &dev->lld_dev;
+ }
+ }
+ }
+ lld_put();
+
+ return NULL;
+}
+
+void *hinic3_get_ppf_dev(void)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct list_head *chip_list = NULL;
+
+ lld_hold();
+ chip_list = get_hinic3_chip_list();
+
+ list_for_each_entry(chip_node, chip_list, node)
+ list_for_each_entry(pci_adapter, &chip_node->func_list, node)
+ if (hinic3_func_type(pci_adapter->hwdev) == TYPE_PPF) {
+ pr_info("Get ppf_func_id:%u\n", hinic3_global_func_id(pci_adapter->hwdev));
+ lld_put();
+ return pci_adapter->lld_dev.hwdev;
+ }
+
+ lld_put();
+ return NULL;
+}
+EXPORT_SYMBOL(hinic3_get_ppf_dev);
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name(const char *dev_name,
+ enum hinic3_service_type type)
+{
+ return get_lld_dev_by_dev_name(dev_name, type, true);
+}
+EXPORT_SYMBOL(hinic3_get_lld_dev_by_dev_name);
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name_unsafe(const char *dev_name,
+ enum hinic3_service_type type)
+{
+ return get_lld_dev_by_dev_name(dev_name, type, false);
+}
+EXPORT_SYMBOL(hinic3_get_lld_dev_by_dev_name_unsafe);
+
+static void *get_uld_by_lld_dev(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type,
+ bool hold)
+{
+ struct hinic3_pcidev *dev = NULL;
+ void *uld = NULL;
+
+ if (!lld_dev)
+ return NULL;
+
+ dev = pci_get_drvdata(lld_dev->pdev);
+ if (!dev)
+ return NULL;
+
+ spin_lock_bh(&dev->uld_lock);
+ if (!dev->uld_dev[type] || !test_bit(type, &dev->uld_state)) {
+ spin_unlock_bh(&dev->uld_lock);
+ return NULL;
+ }
+ uld = dev->uld_dev[type];
+
+ if (hold)
+ atomic_inc(&dev->uld_ref_cnt[type]);
+ spin_unlock_bh(&dev->uld_lock);
+
+ return uld;
+}
+
+void *hinic3_get_uld_dev(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ return get_uld_by_lld_dev(lld_dev, type, true);
+}
+EXPORT_SYMBOL(hinic3_get_uld_dev);
+
+void *hinic3_get_uld_dev_unsafe(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ return get_uld_by_lld_dev(lld_dev, type, false);
+}
+EXPORT_SYMBOL(hinic3_get_uld_dev_unsafe);
+
+static struct hinic3_lld_dev *get_ppf_lld_dev(struct hinic3_lld_dev *lld_dev, bool hold)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(lld_dev->pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ lld_hold();
+ chip_node = pci_adapter->chip_node;
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (dev->hwdev && hinic3_func_type(dev->hwdev) == TYPE_PPF) {
+ if (hold)
+ lld_dev_hold(&dev->lld_dev);
+ lld_put();
+ return &dev->lld_dev;
+ }
+ }
+ lld_put();
+
+ return NULL;
+}
+
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev(struct hinic3_lld_dev *lld_dev)
+{
+ return get_ppf_lld_dev(lld_dev, true);
+}
+EXPORT_SYMBOL(hinic3_get_ppf_lld_dev);
+
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev_unsafe(struct hinic3_lld_dev *lld_dev)
+{
+ return get_ppf_lld_dev(lld_dev, false);
+}
+EXPORT_SYMBOL(hinic3_get_ppf_lld_dev_unsafe);
+
+int hinic3_get_chip_name(struct hinic3_lld_dev *lld_dev, char *chip_name, u16 max_len)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ int ret = 0;
+
+ if (!lld_dev || !chip_name || !max_len)
+ return -EINVAL;
+
+ pci_adapter = pci_get_drvdata(lld_dev->pdev);
+ if (!pci_adapter)
+ return -EFAULT;
+
+ lld_hold();
+ if (strscpy(chip_name, pci_adapter->chip_node->chip_name, max_len) < 0) {
+ ret = -EINVAL;
+ goto RELEASE;
+ }
+ chip_name[max_len - 1] = '\0';
+
+ lld_put();
+
+ return 0;
+
+RELEASE:
+ lld_put();
+
+ return ret;
+}
+EXPORT_SYMBOL(hinic3_get_chip_name);
+
+struct hinic3_hwdev *hinic3_get_sdk_hwdev_by_lld(struct hinic3_lld_dev *lld_dev)
+{
+ return lld_dev->hwdev;
+}
+
+void hinic3_write_oshr_info(struct os_hot_replace_info *out_oshr_info,
+ struct hw_pf_info *info,
+ struct hinic3_board_info *board_info,
+ struct card_node *chip_node, u32 service_enable,
+ u32 func_info_idx)
+{
+ out_oshr_info->func_infos[func_info_idx].pf_idx = info->glb_func_idx;
+ out_oshr_info->func_infos[func_info_idx].backup_pf =
+ (((info->glb_func_idx) / (board_info->port_num)) % HOT_REPLACE_PARTITION_NUM == 0) ?
+ ((info->glb_func_idx) + (board_info->port_num)) :
+ ((info->glb_func_idx) - (board_info->port_num));
+ out_oshr_info->func_infos[func_info_idx].partition =
+ ((info->glb_func_idx) / (board_info->port_num)) % HOT_REPLACE_PARTITION_NUM;
+ out_oshr_info->func_infos[func_info_idx].port_id = info->port_id;
+ out_oshr_info->func_infos[func_info_idx].bdf =
+ (info->bus_num << HIGH_BDF) + info->glb_func_idx;
+ out_oshr_info->func_infos[func_info_idx].bus_num = chip_node->bus_num;
+ out_oshr_info->func_infos[func_info_idx].valid = service_enable;
+ memcpy(out_oshr_info->func_infos[func_info_idx].card_name,
+ chip_node->chip_name, IFNAMSIZ);
+}
+
+void hinic3_get_os_hot_replace_info(void *oshr_info)
+{
+ struct os_hot_replace_info *out_oshr_info = (struct os_hot_replace_info *)oshr_info;
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dst_dev = NULL;
+ struct hinic3_board_info *board_info = NULL;
+ struct hw_pf_info *infos = NULL;
+ struct hinic3_hw_pf_infos *pf_infos = NULL;
+	u32 func_info_idx = 0, func_id = 0, func_num, service_enable = 0;
+ struct list_head *hinic3_chip_list = get_hinic3_chip_list();
+ int err;
+
+ lld_hold();
+ pf_infos = kzalloc(sizeof(struct hinic3_hw_pf_infos), GFP_KERNEL);
+ if (!pf_infos) {
+ pr_err("kzalloc pf_infos fail\n");
+ lld_put();
+ return;
+ }
+ list_for_each_entry(chip_node, hinic3_chip_list, node) {
+		/* get all pf infos at once from the first pf on the chip */
+		list_for_each_entry(dst_dev, &chip_node->func_list, node) {
+ err = hinic3_get_hw_pf_infos(dst_dev->hwdev, pf_infos, HINIC3_CHANNEL_COMM);
+ if (err != 0) {
+ pr_err("get pf info failed\n");
+ break;
+ }
+
+			service_enable = 0;
+ infos = pf_infos->infos;
+ board_info = &((struct hinic3_hwdev *)(dst_dev->hwdev))->board_info;
+			if (((struct hinic3_hwdev *)(dst_dev->hwdev))->hot_replace_mode == HOT_REPLACE_ENABLE)
+				service_enable = 1;
+ break;
+ }
+
+ func_num = pf_infos->num_pfs;
+		if (func_num == 0) {
+ pr_err("get pf num failed\n");
+ break;
+ }
+
+ for (func_id = 0; func_id < func_num; func_id++) {
+ hinic3_write_oshr_info(out_oshr_info, &infos[func_id],
+ board_info, chip_node,
+					       service_enable, func_info_idx);
+ func_info_idx++;
+ }
+ }
+ out_oshr_info->func_cnt = func_info_idx;
+ kfree(pf_infos);
+ lld_put();
+}
+
+struct card_node *hinic3_get_chip_node_by_lld(struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(lld_dev->pdev);
+
+ return pci_adapter->chip_node;
+}
+
+static struct card_node *hinic3_get_chip_node_by_hwdev(const void *hwdev)
+{
+ struct card_node *chip_node = NULL;
+ struct card_node *node_tmp = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!hwdev)
+ return NULL;
+
+ lld_hold();
+
+ list_for_each_entry(node_tmp, &g_hinic3_chip_list, node) {
+ if (!chip_node) {
+ list_for_each_entry(dev, &node_tmp->func_list, node) {
+ if (dev->hwdev == hwdev) {
+ chip_node = node_tmp;
+ break;
+ }
+ }
+ }
+ }
+
+ lld_put();
+
+ return chip_node;
+}
+
+static bool is_func_valid(struct hinic3_pcidev *dev)
+{
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ return false;
+
+ return true;
+}
+
+void hinic3_get_card_info(const void *hwdev, void *bufin)
+{
+ struct card_node *chip_node = NULL;
+ struct card_info *info = (struct card_info *)bufin;
+ struct hinic3_pcidev *dev = NULL;
+ void *fun_hwdev = NULL;
+ u32 i = 0;
+
+ info->pf_num = 0;
+
+ chip_node = hinic3_get_chip_node_by_hwdev(hwdev);
+ if (!chip_node)
+ return;
+
+ lld_hold();
+
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (!is_func_valid(dev))
+ continue;
+
+ fun_hwdev = dev->hwdev;
+
+ if (hinic3_support_nic(fun_hwdev, NULL)) {
+ if (dev->uld_dev[SERVICE_T_NIC]) {
+ info->pf[i].pf_type |= (u32)BIT(SERVICE_T_NIC);
+ get_dynamic_uld_dev_name(dev, SERVICE_T_NIC, info->pf[i].name);
+ }
+ }
+
+ if (hinic3_support_ppa(fun_hwdev, NULL)) {
+ if (dev->uld_dev[SERVICE_T_PPA]) {
+ info->pf[i].pf_type |= (u32)BIT(SERVICE_T_PPA);
+ get_dynamic_uld_dev_name(dev, SERVICE_T_PPA, info->pf[i].name);
+ }
+ }
+
+ if (hinic3_func_for_mgmt(fun_hwdev))
+ strscpy(info->pf[i].name, "FOR_MGMT", IFNAMSIZ);
+
+		if (dev->lld_dev.pdev->subsystem_device == BIFUR_RESOURCE_PF_SSID)
+			strscpy(info->pf[i].name, "bifur", IFNAMSIZ);
+
+ strscpy(info->pf[i].bus_info, pci_name(dev->pcidev),
+ sizeof(info->pf[i].bus_info));
+ info->pf_num++;
+ i = info->pf_num;
+ }
+
+ lld_put();
+}
+
+struct hinic3_sriov_info *hinic3_get_sriov_info_by_pcidev(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ return &pci_adapter->sriov_info;
+}
+
+void *hinic3_get_hwdev_by_pcidev(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ return pci_adapter->hwdev;
+}
+
+bool hinic3_is_in_host(void)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) != TYPE_VF) {
+ lld_put();
+ return true;
+ }
+ }
+ }
+
+ lld_put();
+
+ return false;
+}
+
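+/* Look up an existing card_node for this adapter's bus in the global chip
+ * list (using the PF's bus number for VFs); on a hit, cache it in
+ * pci_adapter->chip_node and return true.
+ */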
+static bool chip_node_is_exist(struct hinic3_pcidev *pci_adapter,
+ unsigned char *bus_number)
+{
+ struct card_node *chip_node = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (!pci_is_root_bus(pci_adapter->pcidev->bus))
+ *bus_number = pci_adapter->pcidev->bus->number;
+
+ if (*bus_number != 0) {
+ if (pci_adapter->pcidev->is_virtfn) {
+ pf_pdev = pci_adapter->pcidev->physfn;
+ *bus_number = pf_pdev->bus->number;
+ }
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ if (chip_node->bus_num == *bus_number) {
+ pci_adapter->chip_node = chip_node;
+ return true;
+ }
+ }
+ } else if (HINIC3_IS_VF_DEV(pci_adapter->pcidev) ||
+ HINIC3_IS_SPU_DEV(pci_adapter->pcidev)) {
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ if (chip_node) {
+ pci_adapter->chip_node = chip_node;
+ return true;
+ }
+ }
+ }
+
+ return false;
+}
+
+int alloc_chip_node(struct hinic3_pcidev *pci_adapter)
+{
+ struct card_node *chip_node = NULL;
+ unsigned char i;
+ unsigned char bus_number = 0;
+ int err;
+
+ if (chip_node_is_exist(pci_adapter, &bus_number))
+ return 0;
+
+ for (i = 0; i < CARD_MAX_SIZE; i++) {
+ if (test_and_set_bit(i, &card_bit_map) == 0)
+ break;
+ }
+
+ if (i == CARD_MAX_SIZE) {
+ sdk_err(&pci_adapter->pcidev->dev, "Failed to alloc card id\n");
+ return -EFAULT;
+ }
+
+ chip_node = kzalloc(sizeof(*chip_node), GFP_KERNEL);
+ if (!chip_node) {
+ clear_bit(i, &card_bit_map);
+ sdk_err(&pci_adapter->pcidev->dev,
+ "Failed to alloc chip node\n");
+ return -ENOMEM;
+ }
+
+ /* bus number */
+ chip_node->bus_num = bus_number;
+
+ if (snprintf(chip_node->chip_name, IFNAMSIZ, "%s%u", HINIC3_CHIP_NAME, i) < 0) {
+ clear_bit(i, &card_bit_map);
+ kfree(chip_node);
+ return -EINVAL;
+ }
+
+ err = sscanf(chip_node->chip_name, HINIC3_CHIP_NAME "%d", &(chip_node->chip_id));
+ if (err <= 0) {
+ clear_bit(i, &card_bit_map);
+ kfree(chip_node);
+ return -EINVAL;
+ }
+
+ sdk_info(&pci_adapter->pcidev->dev,
+ "Add new chip %s to global list succeed\n",
+ chip_node->chip_name);
+
+ list_add_tail(&chip_node->node, &g_hinic3_chip_list);
+
+ INIT_LIST_HEAD(&chip_node->func_list);
+ pci_adapter->chip_node = chip_node;
+
+ return 0;
+}
+
+void free_chip_node(struct hinic3_pcidev *pci_adapter)
+{
+ struct card_node *chip_node = pci_adapter->chip_node;
+ int id, err;
+
+ if (list_empty(&chip_node->func_list)) {
+ list_del(&chip_node->node);
+ sdk_info(&pci_adapter->pcidev->dev,
+ "Delete chip %s from global list succeed\n",
+ chip_node->chip_name);
+		err = sscanf(chip_node->chip_name, HINIC3_CHIP_NAME "%d", &id);
+		if (err <= 0)
+			sdk_err(&pci_adapter->pcidev->dev, "Failed to get hinic3 id\n");
+		else
+			clear_bit(id, &card_bit_map);
+
+ kfree(chip_node);
+ }
+}
+
+int hinic3_get_pf_id(struct card_node *chip_node, u32 port_id, u32 *pf_id, u32 *isvalid)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ if (hinic3_physical_port_id(dev->hwdev) == port_id) {
+ *pf_id = hinic3_global_func_id(dev->hwdev);
+ *isvalid = 1;
+ break;
+ }
+ }
+ lld_put();
+
+ return 0;
+}
+
+void hinic3_get_mbox_cnt(const void *hwdev, void *bufin)
+{
+ struct card_node *chip_node = NULL;
+ struct card_mbox_cnt_info *info = (struct card_mbox_cnt_info *)bufin;
+ struct hinic3_pcidev *dev = NULL;
+ struct hinic3_hwdev *func_hwdev = NULL;
+ u32 i = 0;
+
+ info->func_num = 0;
+ chip_node = hinic3_get_chip_node_by_hwdev(hwdev);
+ if (chip_node == NULL)
+ return;
+
+ lld_hold();
+
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ func_hwdev = (struct hinic3_hwdev *)dev->hwdev;
+ strscpy(info->func_info[i].bus_info, pci_name(dev->pcidev),
+ sizeof(info->func_info[i].bus_info));
+
+ info->func_info[i].send_cnt = func_hwdev->mbox_send_cnt;
+ info->func_info[i].ack_cnt = func_hwdev->mbox_ack_cnt;
+ info->func_num++;
+ i = info->func_num;
+ if (i >= ARRAY_SIZE(info->func_info)) {
+ sdk_err(&dev->pcidev->dev, "chip_node->func_list bigger than pf_max + vf_max\n");
+ break;
+ }
+ }
+
+ lld_put();
+}
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h
new file mode 100644
index 0000000..bfa8f3e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_DEV_MGMT_H
+#define HINIC3_DEV_MGMT_H
+#include <linux/types.h>
+#include <linux/bitops.h>
+
+#include "hinic3_sriov.h"
+#include "hinic3_lld.h"
+
+#define HINIC3_VF_PCI_CFG_REG_BAR 0
+#define HINIC3_PF_PCI_CFG_REG_BAR 1
+
+#define HINIC3_PCI_INTR_REG_BAR 2
+#define HINIC3_PCI_MGMT_REG_BAR		3 /* Only PF has mgmt bar */
+#define HINIC3_PCI_DB_BAR 4
+
+#define PRINT_ULD_DETACH_TIMEOUT_INTERVAL 1000 /* 1 second */
+#define ULD_LOCK_MIN_USLEEP_TIME 900
+#define ULD_LOCK_MAX_USLEEP_TIME 1000
+#define BIFUR_RESOURCE_PF_SSID 0x05a1
+
+#define HINIC3_IS_VF_DEV(pdev) ((pdev)->device == HINIC3_DEV_ID_VF || \
+ (pdev)->device == HINIC3_DEV_SDI_5_1_ID_VF)
+#define HINIC3_IS_SPU_DEV(pdev) \
+ (((pdev)->device == HINIC3_DEV_ID_SPU) || ((pdev)->device == HINIC3_DEV_ID_SDI_5_0_PF) || \
+ (((pdev)->device == HINIC3_DEV_ID_DPU_PF)))
+
+enum {
+ HINIC3_NOT_PROBE = 1,
+ HINIC3_PROBE_START = 2,
+ HINIC3_PROBE_OK = 3,
+ HINIC3_IN_REMOVE = 4,
+};
+
+/* Structure pcidev private */
+struct hinic3_pcidev {
+ struct pci_dev *pcidev;
+ void *hwdev;
+ struct card_node *chip_node;
+ struct hinic3_lld_dev lld_dev;
+	/* Record the service object addresses,
+	 * such as hinic3_dev, toe_dev and fc_dev
+	 */
+ void *uld_dev[SERVICE_T_MAX];
+ /* Record the service object name */
+ char uld_dev_name[SERVICE_T_MAX][IFNAMSIZ];
+	/* Node in the global function list used by the driver
+	 * to manage all function devices
+	 */
+ struct list_head node;
+
+ bool disable_vf_load;
+ bool disable_srv_load[SERVICE_T_MAX];
+
+ void __iomem *cfg_reg_base;
+ void __iomem *intr_reg_base;
+ void __iomem *mgmt_reg_base;
+ u64 db_dwqe_len;
+ u64 db_base_phy;
+ void __iomem *db_base;
+
+ /* lock for attach/detach uld */
+ struct mutex pdev_mutex;
+ int lld_state;
+ u32 rsvd1;
+
+ struct hinic3_sriov_info sriov_info;
+
+	/* set when the uld driver is processing an event */
+ unsigned long state;
+ struct pci_device_id id;
+
+ atomic_t ref_cnt;
+
+ atomic_t uld_ref_cnt[SERVICE_T_MAX];
+ unsigned long uld_state;
+ spinlock_t uld_lock;
+
+ u16 probe_fault_level;
+ u16 rsvd2;
+ u64 rsvd4;
+
+ struct workqueue_struct *multi_host_mgmt_workq;
+ struct work_struct slave_nic_work;
+ struct work_struct slave_vroce_work;
+
+ struct workqueue_struct *migration_probe_workq;
+ struct delayed_work migration_probe_dwork;
+};
+
+struct hinic_chip_info {
+ u8 chip_id; /* chip id within card */
+ u8 card_type; /* hinic_multi_chip_card_type */
+ u8 rsvd[10]; /* reserved 10 bytes */
+};
+
+struct list_head *get_hinic3_chip_list(void);
+
+int alloc_chip_node(struct hinic3_pcidev *pci_adapter);
+
+void free_chip_node(struct hinic3_pcidev *pci_adapter);
+
+void lld_lock_chip_node(void);
+
+void lld_unlock_chip_node(void);
+
+void hinic3_lld_lock_init(void);
+
+void lld_dev_cnt_init(struct hinic3_pcidev *pci_adapter);
+void wait_lld_dev_unused(struct hinic3_pcidev *pci_adapter);
+
+void *hinic3_get_hwdev_by_pcidev(struct pci_dev *pdev);
+
+int hinic3_bar_mmap_param_valid(phys_addr_t phy_addr, unsigned long vmsize);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c
new file mode 100644
index 0000000..8f9d00a
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c
@@ -0,0 +1,432 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/netlink.h>
+#include <linux/pci.h>
+#include <linux/firmware.h>
+
+#include "hinic3_devlink.h"
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+#include "hinic3_common.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_hw.h"
+
+static bool check_image_valid(struct hinic3_hwdev *hwdev, const u8 *buf,
+ u32 size, struct host_image *host_image)
+{
+ struct firmware_image *fw_image = NULL;
+ u32 len = 0;
+ u32 i;
+
+ fw_image = (struct firmware_image *)buf;
+ if (fw_image->fw_magic != FW_MAGIC_NUM) {
+ sdk_err(hwdev->dev_hdl, "Wrong fw magic read from file, fw_magic: 0x%x\n",
+ fw_image->fw_magic);
+ return false;
+ }
+
+ if (fw_image->fw_info.section_cnt > FW_TYPE_MAX_NUM) {
+ sdk_err(hwdev->dev_hdl, "Wrong fw type number read from file, fw_type_num: 0x%x\n",
+ fw_image->fw_info.section_cnt);
+ return false;
+ }
+
+ for (i = 0; i < fw_image->fw_info.section_cnt; i++) {
+ len += fw_image->section_info[i].section_len;
+ memcpy(&host_image->section_info[i], &fw_image->section_info[i],
+ sizeof(struct firmware_section));
+ }
+
+ if (len != fw_image->fw_len ||
+ (u32)(fw_image->fw_len + FW_IMAGE_HEAD_SIZE) != size) {
+ sdk_err(hwdev->dev_hdl, "Wrong data size read from file\n");
+ return false;
+ }
+
+ host_image->image_info.total_len = fw_image->fw_len;
+ host_image->image_info.fw_version = fw_image->fw_version;
+ host_image->type_num = fw_image->fw_info.section_cnt;
+ host_image->device_id = fw_image->device_id;
+
+ return true;
+}
+
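+/* Build a bitmap of the section types carried by the image, reject duplicate
+ * sections, and require all cold sub-modules plus at least one cfg sub-module.
+ */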
+static bool check_image_integrity(struct hinic3_hwdev *hwdev, struct host_image *host_image)
+{
+ u64 collect_section_type = 0;
+ u32 type, i;
+
+ for (i = 0; i < host_image->type_num; i++) {
+ type = host_image->section_info[i].section_type;
+ if (collect_section_type & (1ULL << type)) {
+ sdk_err(hwdev->dev_hdl, "Duplicate section type: %u\n", type);
+ return false;
+ }
+ collect_section_type |= (1ULL << type);
+ }
+
+ if ((collect_section_type & IMAGE_COLD_SUB_MODULES_MUST_IN) ==
+ IMAGE_COLD_SUB_MODULES_MUST_IN &&
+ (collect_section_type & IMAGE_CFG_SUB_MODULES_MUST_IN) != 0)
+ return true;
+
+ sdk_err(hwdev->dev_hdl, "Failed to check file integrity, valid: 0x%llx, current: 0x%llx\n",
+ (IMAGE_COLD_SUB_MODULES_MUST_IN | IMAGE_CFG_SUB_MODULES_MUST_IN),
+ collect_section_type);
+
+ return false;
+}
+
+static bool check_image_device_type(struct hinic3_hwdev *hwdev, u32 device_type)
+{
+ struct comm_cmd_board_info board_info;
+
+ memset(&board_info, 0, sizeof(board_info));
+ if (hinic3_get_board_info(hwdev, &board_info.info, HINIC3_CHANNEL_COMM)) {
+ sdk_err(hwdev->dev_hdl, "Failed to get board info\n");
+ return false;
+ }
+
+ if (device_type == board_info.info.board_type)
+ return true;
+
+ sdk_err(hwdev->dev_hdl, "The image device type: 0x%x doesn't match the firmware device type: 0x%x\n",
+ device_type, board_info.info.board_type);
+
+ return false;
+}
+
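+/* Fill the update command for one fragment of a section: 'sf' marks the first
+ * fragment of the section, 'sl' marks the last one, and fragment_len is capped
+ * at FW_FRAGMENT_MAX_LEN.
+ */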
+static void encapsulate_update_cmd(struct hinic3_cmd_update_firmware *msg,
+ struct firmware_section *section_info,
+ const int *remain_len, u32 *send_len,
+ u32 *send_pos)
+{
+ memset(msg->data, 0, sizeof(msg->data));
+	msg->ctl_info.sf = (*remain_len == section_info->section_len);
+ msg->section_info.section_crc = section_info->section_crc;
+ msg->section_info.section_type = section_info->section_type;
+ msg->section_version = section_info->section_version;
+ msg->section_len = section_info->section_len;
+ msg->section_offset = *send_pos;
+ msg->ctl_info.bit_signed = section_info->section_flag & 0x1;
+
+ if (*remain_len <= FW_FRAGMENT_MAX_LEN) {
+ msg->ctl_info.sl = true;
+ msg->ctl_info.fragment_len = (u32)(*remain_len);
+ *send_len += section_info->section_len;
+ } else {
+ msg->ctl_info.sl = false;
+ msg->ctl_info.fragment_len = FW_FRAGMENT_MAX_LEN;
+ *send_len += FW_FRAGMENT_MAX_LEN;
+ }
+}
+
+static int hinic3_flash_firmware(struct hinic3_hwdev *hwdev, const u8 *data,
+ struct host_image *image)
+{
+ u32 send_pos, send_len, section_offset, i;
+ struct hinic3_cmd_update_firmware *update_msg = NULL;
+ u16 out_size = sizeof(*update_msg);
+ bool total_flag = false;
+ int remain_len, err;
+
+ update_msg = kzalloc(sizeof(*update_msg), GFP_KERNEL);
+ if (!update_msg) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc update message\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < image->type_num; i++) {
+ section_offset = image->section_info[i].section_offset;
+ remain_len = (int)(image->section_info[i].section_len);
+ send_len = 0;
+ send_pos = 0;
+
+ while (remain_len > 0) {
+ if (!total_flag) {
+ update_msg->total_len = image->image_info.total_len;
+ total_flag = true;
+ } else {
+ update_msg->total_len = 0;
+ }
+
+ encapsulate_update_cmd(update_msg, &image->section_info[i],
+ &remain_len, &send_len, &send_pos);
+
+ memcpy(update_msg->data,
+ ((data + FW_IMAGE_HEAD_SIZE) + section_offset) + send_pos,
+ update_msg->ctl_info.fragment_len);
+
+ err = hinic3_pf_to_mgmt_sync(hwdev, HINIC3_MOD_COMM,
+ COMM_MGMT_CMD_UPDATE_FW,
+ update_msg, sizeof(*update_msg),
+ update_msg, &out_size,
+ FW_UPDATE_MGMT_TIMEOUT);
+ if (err || !out_size || update_msg->msg_head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to update firmware, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, update_msg->msg_head.status, out_size);
+ err = update_msg->msg_head.status ?
+ update_msg->msg_head.status : -EIO;
+ kfree(update_msg);
+ return err;
+ }
+
+ send_pos = send_len;
+ remain_len = (int)(image->section_info[i].section_len - send_len);
+ }
+ }
+
+ kfree(update_msg);
+
+ return 0;
+}
+
+static int hinic3_flash_update_notify(struct devlink *devlink, const struct firmware *fw,
+ struct host_image *image, struct netlink_ext_ack *extack)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ int err;
+
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ devlink_flash_update_begin_notify(devlink);
+#endif
+ devlink_flash_update_status_notify(devlink, "Flash firmware begin", NULL, 0, 0);
+ sdk_info(hwdev->dev_hdl, "Flash firmware begin\n");
+ err = hinic3_flash_firmware(hwdev, fw->data, image);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to flash firmware, err: %d\n", err);
+ NL_SET_ERR_MSG_MOD(extack, "Flash firmware failed");
+ devlink_flash_update_status_notify(devlink, "Flash firmware failed", NULL, 0, 0);
+ } else {
+ sdk_info(hwdev->dev_hdl, "Flash firmware end\n");
+ devlink_flash_update_status_notify(devlink, "Flash firmware end", NULL, 0, 0);
+ }
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ devlink_flash_update_end_notify(devlink);
+#endif
+
+ return err;
+}
+
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_PARAM
+static int hinic3_devlink_flash_update(struct devlink *devlink, const char *file_name,
+ const char *component, struct netlink_ext_ack *extack)
+#else
+static int hinic3_devlink_flash_update(struct devlink *devlink,
+ struct devlink_flash_update_params *params,
+ struct netlink_ext_ack *extack)
+#endif
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ const struct firmware *fw = NULL;
+#else
+ const struct firmware *fw = params->fw;
+#endif
+ struct host_image *image = NULL;
+ int err;
+
+ image = kzalloc(sizeof(*image), GFP_KERNEL);
+ if (!image) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc host image\n");
+ err = -ENOMEM;
+ goto devlink_param_reset;
+ }
+
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_PARAM
+ err = request_firmware_direct(&fw, file_name, hwdev->dev_hdl);
+#else
+ err = request_firmware_direct(&fw, params->file_name, hwdev->dev_hdl);
+#endif
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to request firmware\n");
+ goto devlink_request_fw_err;
+ }
+#endif
+
+ if (!check_image_valid(hwdev, fw->data, (u32)(fw->size), image) ||
+ !check_image_integrity(hwdev, image) ||
+ !check_image_device_type(hwdev, image->device_id)) {
+ sdk_err(hwdev->dev_hdl, "Failed to check image\n");
+ NL_SET_ERR_MSG_MOD(extack, "Check image failed");
+ err = -EINVAL;
+ goto devlink_update_out;
+ }
+
+ err = hinic3_flash_update_notify(devlink, fw, image, extack);
+
+devlink_update_out:
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ release_firmware(fw);
+
+devlink_request_fw_err:
+#endif
+ kfree(image);
+
+devlink_param_reset:
+ /* reset activate_fw and switch_cfg after flash update operation */
+ devlink_dev->activate_fw = FW_CFG_DEFAULT_INDEX;
+ devlink_dev->switch_cfg = FW_CFG_DEFAULT_INDEX;
+
+ return err;
+}
+
+static const struct devlink_ops hinic3_devlink_ops = {
+ .flash_update = hinic3_devlink_flash_update,
+};
+
+static int hinic3_devlink_get_activate_firmware_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+
+ ctx->val.vu8 = devlink_dev->activate_fw;
+
+ return 0;
+}
+
+static int hinic3_devlink_set_activate_firmware_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ int err;
+
+ devlink_dev->activate_fw = ctx->val.vu8;
+ sdk_info(hwdev->dev_hdl, "Activate firmware begin\n");
+
+ err = hinic3_activate_firmware(hwdev, devlink_dev->activate_fw);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to activate firmware, err: %d\n", err);
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Activate firmware end\n");
+
+ return 0;
+}
+
+static int hinic3_devlink_get_switch_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+
+ ctx->val.vu8 = devlink_dev->switch_cfg;
+
+ return 0;
+}
+
+static int hinic3_devlink_set_switch_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ int err;
+
+ devlink_dev->switch_cfg = ctx->val.vu8;
+	sdk_info(hwdev->dev_hdl, "Switch cfg begin\n");
+
+ err = hinic3_switch_config(hwdev, devlink_dev->switch_cfg);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to switch cfg, err: %d\n", err);
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Switch cfg end\n");
+
+ return 0;
+}
+
+static int hinic3_devlink_firmware_config_validate(struct devlink *devlink, u32 id,
+ union devlink_param_value val,
+ struct netlink_ext_ack *extack)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ u8 cfg_index = val.vu8;
+
+ if (cfg_index > FW_CFG_MAX_INDEX) {
+ sdk_err(hwdev->dev_hdl, "Firmware cfg index out of range [0,7]\n");
+ NL_SET_ERR_MSG_MOD(extack, "Firmware cfg index out of range [0,7]");
+ return -ERANGE;
+ }
+
+ return 0;
+}
+
+static const struct devlink_param hinic3_devlink_params[] = {
+ DEVLINK_PARAM_DRIVER(HINIC3_DEVLINK_PARAM_ID_ACTIVATE_FW,
+ "activate_fw", DEVLINK_PARAM_TYPE_U8,
+ BIT(DEVLINK_PARAM_CMODE_PERMANENT),
+ hinic3_devlink_get_activate_firmware_config,
+ hinic3_devlink_set_activate_firmware_config,
+ hinic3_devlink_firmware_config_validate),
+ DEVLINK_PARAM_DRIVER(HINIC3_DEVLINK_PARAM_ID_SWITCH_CFG,
+ "switch_cfg", DEVLINK_PARAM_TYPE_U8,
+ BIT(DEVLINK_PARAM_CMODE_PERMANENT),
+ hinic3_devlink_get_switch_config,
+ hinic3_devlink_set_switch_config,
+ hinic3_devlink_firmware_config_validate),
+};
+
+int hinic3_init_devlink(struct hinic3_hwdev *hwdev)
+{
+ struct devlink *devlink = NULL;
+ struct pci_dev *pdev = NULL;
+ int err;
+
+ pdev = hwdev->hwif->pdev;
+ devlink = devlink_alloc(&hinic3_devlink_ops, sizeof(struct hinic3_devlink));
+ if (!devlink) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc devlink\n");
+ return -ENOMEM;
+ }
+
+ hwdev->devlink_dev = devlink_priv(devlink);
+ hwdev->devlink_dev->hwdev = hwdev;
+ hwdev->devlink_dev->activate_fw = FW_CFG_DEFAULT_INDEX;
+ hwdev->devlink_dev->switch_cfg = FW_CFG_DEFAULT_INDEX;
+
+ err = devlink_register(devlink, &pdev->dev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to register devlink\n");
+ goto register_devlink_err;
+ }
+
+ err = devlink_params_register(devlink, hinic3_devlink_params,
+ ARRAY_SIZE(hinic3_devlink_params));
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to register devlink params\n");
+ goto register_devlink_params_err;
+ }
+
+ devlink_params_publish(devlink);
+
+ return 0;
+
+register_devlink_params_err:
+ devlink_unregister(devlink);
+
+register_devlink_err:
+ devlink_free(devlink);
+
+	return err;
+}
+
+void hinic3_uninit_devlink(struct hinic3_hwdev *hwdev)
+{
+ struct devlink *devlink = priv_to_devlink(hwdev->devlink_dev);
+
+ devlink_params_unpublish(devlink);
+ devlink_params_unregister(devlink, hinic3_devlink_params,
+ ARRAY_SIZE(hinic3_devlink_params));
+ devlink_unregister(devlink);
+ devlink_free(devlink);
+}
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h
new file mode 100644
index 0000000..68dd0fb
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h
@@ -0,0 +1,173 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_DEVLINK_H
+#define HINIC3_DEVLINK_H
+
+#include "ossl_knl.h"
+#include "hinic3_hwdev.h"
+
+#define FW_MAGIC_NUM 0x5a5a1100
+#define FW_IMAGE_HEAD_SIZE 4096
+#define FW_FRAGMENT_MAX_LEN 1536
+#define FW_CFG_DEFAULT_INDEX 0xFF
+#define FW_TYPE_MAX_NUM 0x40
+#define FW_CFG_MAX_INDEX 7
+
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+enum hinic3_devlink_param_id {
+ HINIC3_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
+ HINIC3_DEVLINK_PARAM_ID_ACTIVATE_FW,
+ HINIC3_DEVLINK_PARAM_ID_SWITCH_CFG,
+};
+#endif
+
+enum hinic3_firmware_type {
+ UP_FW_UPDATE_MIN_TYPE1 = 0x0,
+ UP_FW_UPDATE_UP_TEXT = 0x0,
+ UP_FW_UPDATE_UP_DATA = 0x1,
+ UP_FW_UPDATE_UP_DICT = 0x2,
+ UP_FW_UPDATE_TILE_PCPTR = 0x3,
+ UP_FW_UPDATE_TILE_TEXT = 0x4,
+ UP_FW_UPDATE_TILE_DATA = 0x5,
+ UP_FW_UPDATE_TILE_DICT = 0x6,
+ UP_FW_UPDATE_PPE_STATE = 0x7,
+ UP_FW_UPDATE_PPE_BRANCH = 0x8,
+ UP_FW_UPDATE_PPE_EXTACT = 0x9,
+ UP_FW_UPDATE_MAX_TYPE1 = 0x9,
+ UP_FW_UPDATE_CFG0 = 0xa,
+ UP_FW_UPDATE_CFG1 = 0xb,
+ UP_FW_UPDATE_CFG2 = 0xc,
+ UP_FW_UPDATE_CFG3 = 0xd,
+ UP_FW_UPDATE_MAX_TYPE1_CFG = 0xd,
+
+ UP_FW_UPDATE_MIN_TYPE2 = 0x14,
+ UP_FW_UPDATE_MAX_TYPE2 = 0x14,
+
+ UP_FW_UPDATE_MIN_TYPE3 = 0x18,
+ UP_FW_UPDATE_PHY = 0x18,
+ UP_FW_UPDATE_BIOS = 0x19,
+ UP_FW_UPDATE_HLINK_ONE = 0x1a,
+ UP_FW_UPDATE_HLINK_TWO = 0x1b,
+ UP_FW_UPDATE_HLINK_THR = 0x1c,
+ UP_FW_UPDATE_MAX_TYPE3 = 0x1c,
+
+ UP_FW_UPDATE_MIN_TYPE4 = 0x20,
+ UP_FW_UPDATE_L0FW = 0x20,
+ UP_FW_UPDATE_L1FW = 0x21,
+ UP_FW_UPDATE_BOOT = 0x22,
+ UP_FW_UPDATE_SEC_DICT = 0x23,
+ UP_FW_UPDATE_HOT_PATCH0 = 0x24,
+ UP_FW_UPDATE_HOT_PATCH1 = 0x25,
+ UP_FW_UPDATE_HOT_PATCH2 = 0x26,
+ UP_FW_UPDATE_HOT_PATCH3 = 0x27,
+ UP_FW_UPDATE_HOT_PATCH4 = 0x28,
+ UP_FW_UPDATE_HOT_PATCH5 = 0x29,
+ UP_FW_UPDATE_HOT_PATCH6 = 0x2a,
+ UP_FW_UPDATE_HOT_PATCH7 = 0x2b,
+ UP_FW_UPDATE_HOT_PATCH8 = 0x2c,
+ UP_FW_UPDATE_HOT_PATCH9 = 0x2d,
+ UP_FW_UPDATE_HOT_PATCH10 = 0x2e,
+ UP_FW_UPDATE_HOT_PATCH11 = 0x2f,
+ UP_FW_UPDATE_HOT_PATCH12 = 0x30,
+ UP_FW_UPDATE_HOT_PATCH13 = 0x31,
+ UP_FW_UPDATE_HOT_PATCH14 = 0x32,
+ UP_FW_UPDATE_HOT_PATCH15 = 0x33,
+ UP_FW_UPDATE_HOT_PATCH16 = 0x34,
+ UP_FW_UPDATE_HOT_PATCH17 = 0x35,
+ UP_FW_UPDATE_HOT_PATCH18 = 0x36,
+ UP_FW_UPDATE_HOT_PATCH19 = 0x37,
+ UP_FW_UPDATE_MAX_TYPE4 = 0x37,
+
+ UP_FW_UPDATE_MIN_TYPE5 = 0x3a,
+ UP_FW_UPDATE_OPTION_ROM = 0x3a,
+ UP_FW_UPDATE_MAX_TYPE5 = 0x3a,
+
+ UP_FW_UPDATE_MIN_TYPE6 = 0x3e,
+ UP_FW_UPDATE_MAX_TYPE6 = 0x3e,
+
+ UP_FW_UPDATE_MIN_TYPE7 = 0x40,
+ UP_FW_UPDATE_MAX_TYPE7 = 0x40,
+};
+
+#define IMAGE_MPU_ALL_IN (BIT_ULL(UP_FW_UPDATE_UP_TEXT) | \
+ BIT_ULL(UP_FW_UPDATE_UP_DATA) | \
+ BIT_ULL(UP_FW_UPDATE_UP_DICT))
+
+#define IMAGE_NPU_ALL_IN (BIT_ULL(UP_FW_UPDATE_TILE_PCPTR) | \
+ BIT_ULL(UP_FW_UPDATE_TILE_TEXT) | \
+ BIT_ULL(UP_FW_UPDATE_TILE_DATA) | \
+ BIT_ULL(UP_FW_UPDATE_TILE_DICT) | \
+ BIT_ULL(UP_FW_UPDATE_PPE_STATE) | \
+ BIT_ULL(UP_FW_UPDATE_PPE_BRANCH) | \
+ BIT_ULL(UP_FW_UPDATE_PPE_EXTACT))
+
+#define IMAGE_COLD_SUB_MODULES_MUST_IN (IMAGE_MPU_ALL_IN | IMAGE_NPU_ALL_IN)
+
+#define IMAGE_CFG_SUB_MODULES_MUST_IN (BIT_ULL(UP_FW_UPDATE_CFG0) | \
+ BIT_ULL(UP_FW_UPDATE_CFG1) | \
+ BIT_ULL(UP_FW_UPDATE_CFG2) | \
+ BIT_ULL(UP_FW_UPDATE_CFG3))
+
+struct firmware_section {
+ u32 section_len;
+ u32 section_offset;
+ u32 section_version;
+ u32 section_type;
+ u32 section_crc;
+ u32 section_flag;
+};
+
+struct firmware_image {
+ u32 fw_version;
+ u32 fw_len;
+ u32 fw_magic;
+ struct {
+ u32 section_cnt : 16;
+ u32 rsvd : 16;
+ } fw_info;
+ struct firmware_section section_info[FW_TYPE_MAX_NUM];
+ u32 device_id; /* cfg fw board_type value */
+	u32 rsvd0[101]; /* device_id and rsvd0 together form update_head_extend_info */
+ u32 rsvd1[534]; /* big bin file total size 4096B */
+ u32 bin_data; /* obtain the address for use */
+};
+
+struct host_image {
+ struct firmware_section section_info[FW_TYPE_MAX_NUM];
+ struct {
+ u32 total_len;
+ u32 fw_version;
+ } image_info;
+ u32 type_num;
+ u32 device_id;
+};
+
+struct hinic3_cmd_update_firmware {
+ struct mgmt_msg_head msg_head;
+
+ struct {
+ u32 sl : 1;
+ u32 sf : 1;
+ u32 flag : 1;
+ u32 bit_signed : 1;
+ u32 reserved : 12;
+ u32 fragment_len : 16;
+ } ctl_info;
+
+ struct {
+ u32 section_crc;
+ u32 section_type;
+ } section_info;
+
+ u32 total_len;
+ u32 section_len;
+ u32 section_version;
+ u32 section_offset;
+ u32 data[384];
+};
+
+int hinic3_init_devlink(struct hinic3_hwdev *hwdev);
+void hinic3_uninit_devlink(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c
new file mode 100644
index 0000000..caa99e3
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c
@@ -0,0 +1,1422 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_hw.h"
+#include "hinic3_csr.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_eqs.h"
+
+#include "vram_common.h"
+
+#define HINIC3_EQS_WQ_NAME "hinic3_eqs"
+
+#define AEQ_CTRL_0_INTR_IDX_SHIFT 0
+#define AEQ_CTRL_0_DMA_ATTR_SHIFT 12
+#define AEQ_CTRL_0_PCI_INTF_IDX_SHIFT 20
+#define AEQ_CTRL_0_INTR_MODE_SHIFT 31
+
+#define AEQ_CTRL_0_INTR_IDX_MASK 0x3FFU
+#define AEQ_CTRL_0_DMA_ATTR_MASK 0x3FU
+#define AEQ_CTRL_0_PCI_INTF_IDX_MASK 0x7U
+#define AEQ_CTRL_0_INTR_MODE_MASK 0x1U
+
+#define AEQ_CTRL_0_SET(val, member) \
+ (((val) & AEQ_CTRL_0_##member##_MASK) << \
+ AEQ_CTRL_0_##member##_SHIFT)
+
+#define AEQ_CTRL_0_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_0_##member##_MASK << \
+ AEQ_CTRL_0_##member##_SHIFT)))
+
+#define AEQ_CTRL_1_LEN_SHIFT 0
+#define AEQ_CTRL_1_ELEM_SIZE_SHIFT 24
+#define AEQ_CTRL_1_PAGE_SIZE_SHIFT 28
+
+#define AEQ_CTRL_1_LEN_MASK 0x1FFFFFU
+#define AEQ_CTRL_1_ELEM_SIZE_MASK 0x3U
+#define AEQ_CTRL_1_PAGE_SIZE_MASK 0xFU
+
+#define AEQ_CTRL_1_SET(val, member) \
+ (((val) & AEQ_CTRL_1_##member##_MASK) << \
+ AEQ_CTRL_1_##member##_SHIFT)
+
+#define AEQ_CTRL_1_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_1_##member##_MASK << \
+ AEQ_CTRL_1_##member##_SHIFT)))
+
+#define HINIC3_EQ_PROD_IDX_MASK 0xFFFFF
+#define HINIC3_TASK_PROCESS_EQE_LIMIT 1024
+#define HINIC3_EQ_UPDATE_CI_STEP 64
+
+static uint g_aeq_len = HINIC3_DEFAULT_AEQ_LEN;
+module_param(g_aeq_len, uint, 0444);
+MODULE_PARM_DESC(g_aeq_len,
+ "aeq depth, valid range is " __stringify(HINIC3_MIN_AEQ_LEN)
+ " - " __stringify(HINIC3_MAX_AEQ_LEN));
+
+static uint g_ceq_len = HINIC3_DEFAULT_CEQ_LEN;
+module_param(g_ceq_len, uint, 0444);
+MODULE_PARM_DESC(g_ceq_len,
+ "ceq depth, valid range is " __stringify(HINIC3_MIN_CEQ_LEN)
+ " - " __stringify(HINIC3_MAX_CEQ_LEN));
+
+static uint g_num_ceqe_in_tasklet = HINIC3_TASK_PROCESS_EQE_LIMIT;
+module_param(g_num_ceqe_in_tasklet, uint, 0444);
+MODULE_PARM_DESC(g_num_ceqe_in_tasklet,
+		 "The max number of ceqes that can be processed in one tasklet, default = 1024");
+
+#define CEQ_CTRL_0_INTR_IDX_SHIFT 0
+#define CEQ_CTRL_0_DMA_ATTR_SHIFT 12
+#define CEQ_CTRL_0_LIMIT_KICK_SHIFT 20
+#define CEQ_CTRL_0_PCI_INTF_IDX_SHIFT 24
+#define CEQ_CTRL_0_PAGE_SIZE_SHIFT 27
+#define CEQ_CTRL_0_INTR_MODE_SHIFT 31
+
+#define CEQ_CTRL_0_INTR_IDX_MASK 0x3FFU
+#define CEQ_CTRL_0_DMA_ATTR_MASK 0x3FU
+#define CEQ_CTRL_0_LIMIT_KICK_MASK 0xFU
+#define CEQ_CTRL_0_PCI_INTF_IDX_MASK 0x3U
+#define CEQ_CTRL_0_PAGE_SIZE_MASK 0xF
+#define CEQ_CTRL_0_INTR_MODE_MASK 0x1U
+
+#define CEQ_CTRL_0_SET(val, member) \
+ (((val) & CEQ_CTRL_0_##member##_MASK) << \
+ CEQ_CTRL_0_##member##_SHIFT)
+
+#define CEQ_CTRL_1_LEN_SHIFT 0
+#define CEQ_CTRL_1_GLB_FUNC_ID_SHIFT 20
+
+#define CEQ_CTRL_1_LEN_MASK 0xFFFFFU
+#define CEQ_CTRL_1_GLB_FUNC_ID_MASK 0xFFFU
+
+#define CEQ_CTRL_1_SET(val, member) \
+ (((val) & CEQ_CTRL_1_##member##_MASK) << \
+ CEQ_CTRL_1_##member##_SHIFT)
+
+#define EQ_ELEM_DESC_TYPE_SHIFT 0
+#define EQ_ELEM_DESC_SRC_SHIFT 7
+#define EQ_ELEM_DESC_SIZE_SHIFT 8
+#define EQ_ELEM_DESC_WRAPPED_SHIFT 31
+
+#define EQ_ELEM_DESC_TYPE_MASK 0x7FU
+#define EQ_ELEM_DESC_SRC_MASK 0x1U
+#define EQ_ELEM_DESC_SIZE_MASK 0xFFU
+#define EQ_ELEM_DESC_WRAPPED_MASK 0x1U
+
+#define EQ_ELEM_DESC_GET(val, member) \
+ (((val) >> EQ_ELEM_DESC_##member##_SHIFT) & \
+ EQ_ELEM_DESC_##member##_MASK)
+
+#define EQ_CONS_IDX_CONS_IDX_SHIFT 0
+#define EQ_CONS_IDX_INT_ARMED_SHIFT 31
+
+#define EQ_CONS_IDX_CONS_IDX_MASK 0x1FFFFFU
+#define EQ_CONS_IDX_INT_ARMED_MASK 0x1U
+
+#define EQ_CONS_IDX_SET(val, member) \
+ (((val) & EQ_CONS_IDX_##member##_MASK) << \
+ EQ_CONS_IDX_##member##_SHIFT)
+
+#define EQ_CONS_IDX_CLEAR(val, member) \
+ ((val) & (~(EQ_CONS_IDX_##member##_MASK << \
+ EQ_CONS_IDX_##member##_SHIFT)))
+
+#define EQ_CI_SIMPLE_INDIR_CI_SHIFT 0
+#define EQ_CI_SIMPLE_INDIR_ARMED_SHIFT 21
+#define EQ_CI_SIMPLE_INDIR_AEQ_IDX_SHIFT 30
+#define EQ_CI_SIMPLE_INDIR_CEQ_IDX_SHIFT 24
+
+#define EQ_CI_SIMPLE_INDIR_CI_MASK 0x1FFFFFU
+#define EQ_CI_SIMPLE_INDIR_ARMED_MASK 0x1U
+#define EQ_CI_SIMPLE_INDIR_AEQ_IDX_MASK 0x3U
+#define EQ_CI_SIMPLE_INDIR_CEQ_IDX_MASK 0xFFU
+
+#define EQ_CI_SIMPLE_INDIR_SET(val, member) \
+ (((val) & EQ_CI_SIMPLE_INDIR_##member##_MASK) << \
+ EQ_CI_SIMPLE_INDIR_##member##_SHIFT)
+
+#define EQ_CI_SIMPLE_INDIR_CLEAR(val, member) \
+ ((val) & (~(EQ_CI_SIMPLE_INDIR_##member##_MASK << \
+ EQ_CI_SIMPLE_INDIR_##member##_SHIFT)))
+
+#define EQ_WRAPPED(eq) ((u32)(eq)->wrapped << EQ_VALID_SHIFT)
+
+#define EQ_CONS_IDX(eq) ((eq)->cons_idx | \
+ ((u32)(eq)->wrapped << EQ_WRAPPED_SHIFT))
+
+#define EQ_CONS_IDX_REG_ADDR(eq) \
+ (((eq)->type == HINIC3_AEQ) ? \
+ HINIC3_CSR_AEQ_CONS_IDX_ADDR : \
+ HINIC3_CSR_CEQ_CONS_IDX_ADDR)
+#define EQ_CI_SIMPLE_INDIR_REG_ADDR(eq) \
+ (((eq)->type == HINIC3_AEQ) ? \
+ HINIC3_CSR_AEQ_CI_SIMPLE_INDIR_ADDR : \
+ HINIC3_CSR_CEQ_CI_SIMPLE_INDIR_ADDR)
+
+#define EQ_PROD_IDX_REG_ADDR(eq) \
+ (((eq)->type == HINIC3_AEQ) ? \
+ HINIC3_CSR_AEQ_PROD_IDX_ADDR : \
+ HINIC3_CSR_CEQ_PROD_IDX_ADDR)
+
+#define HINIC3_EQ_HI_PHYS_ADDR_REG(type, pg_num) \
+ ((u32)((type == HINIC3_AEQ) ? \
+ HINIC3_AEQ_HI_PHYS_ADDR_REG(pg_num) : \
+ HINIC3_CEQ_HI_PHYS_ADDR_REG(pg_num)))
+
+#define HINIC3_EQ_LO_PHYS_ADDR_REG(type, pg_num) \
+ ((u32)((type == HINIC3_AEQ) ? \
+ HINIC3_AEQ_LO_PHYS_ADDR_REG(pg_num) : \
+ HINIC3_CEQ_LO_PHYS_ADDR_REG(pg_num)))
+
+#define GET_EQ_NUM_PAGES(eq, size) \
+ ((u16)(ALIGN((u32)((eq)->eq_len * (eq)->elem_size), \
+ (size)) / (size)))
+
+#define HINIC3_EQ_MAX_PAGES(eq) \
+ ((eq)->type == HINIC3_AEQ ? HINIC3_AEQ_MAX_PAGES : \
+ HINIC3_CEQ_MAX_PAGES)
+
+#define GET_EQ_NUM_ELEMS(eq, pg_size) ((pg_size) / (u32)(eq)->elem_size)
+
+#define GET_EQ_ELEMENT(eq, idx) \
+ (((u8 *)(eq)->eq_pages[(idx) / (eq)->num_elem_in_pg].align_vaddr) + \
+ (u32)(((idx) & ((eq)->num_elem_in_pg - 1)) * (eq)->elem_size))
+
+#define GET_AEQ_ELEM(eq, idx) \
+ ((struct hinic3_aeq_elem *)GET_EQ_ELEMENT((eq), (idx)))
+
+#define GET_CEQ_ELEM(eq, idx) ((u32 *)GET_EQ_ELEMENT((eq), (idx)))
+
+#define GET_CURR_AEQ_ELEM(eq) GET_AEQ_ELEM((eq), (eq)->cons_idx)
+
+#define GET_CURR_CEQ_ELEM(eq) GET_CEQ_ELEM((eq), (eq)->cons_idx)
+
+#define PAGE_IN_4K(page_size) ((page_size) >> 12)
+#define EQ_SET_HW_PAGE_SIZE_VAL(eq) \
+ ((u32)ilog2(PAGE_IN_4K((eq)->page_size)))
+
+#define ELEMENT_SIZE_IN_32B(eq) (((eq)->elem_size) >> 5)
+#define EQ_SET_HW_ELEM_SIZE_VAL(eq) ((u32)ilog2(ELEMENT_SIZE_IN_32B(eq)))
+
+#define AEQ_DMA_ATTR_DEFAULT 0
+#define CEQ_DMA_ATTR_DEFAULT 0
+
+#define CEQ_LMT_KICK_DEFAULT 0
+
+#define EQ_MSIX_RESEND_TIMER_CLEAR 1
+
+#define EQ_WRAPPED_SHIFT 20
+
+#define EQ_VALID_SHIFT 31
+
+#define CEQE_TYPE_SHIFT 23
+#define CEQE_TYPE_MASK 0x7
+
+#define CEQE_TYPE(type) (((type) >> CEQE_TYPE_SHIFT) & \
+ CEQE_TYPE_MASK)
+
+#define CEQE_DATA_MASK 0x3FFFFFF
+#define CEQE_DATA(data) ((data) & CEQE_DATA_MASK)
+
+#define aeq_to_aeqs(eq) \
+ container_of((eq) - (eq)->q_id, struct hinic3_aeqs, aeq[0])
+
+#define ceq_to_ceqs(eq) \
+ container_of((eq) - (eq)->q_id, struct hinic3_ceqs, ceq[0])
+
+static irqreturn_t ceq_interrupt(int irq, void *data);
+static irqreturn_t aeq_interrupt(int irq, void *data);
+
+static void ceq_tasklet(ulong ceq_data);
+
+/**
+ * hinic3_aeq_register_hw_cb - register aeq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @pri_handle: the pointer to private invoker device
+ * @event: event for the handler
+ * @hwe_cb: callback function
+ **/
+int hinic3_aeq_register_hw_cb(void *hwdev, void *pri_handle, enum hinic3_aeq_type event,
+ hinic3_aeq_hwe_cb hwe_cb)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || !hwe_cb || event >= HINIC3_MAX_AEQ_EVENTS)
+ return -EINVAL;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ aeqs->aeq_hwe_cb[event] = hwe_cb;
+ aeqs->aeq_hwe_cb_data[event] = pri_handle;
+
+ set_bit(HINIC3_AEQ_HW_CB_REG, &aeqs->aeq_hw_cb_state[event]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_aeq_register_hw_cb);
+
+/**
+ * hinic3_aeq_unregister_hw_cb - unregister the aeq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @event: event for the handler
+ **/
+void hinic3_aeq_unregister_hw_cb(void *hwdev, enum hinic3_aeq_type event)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_AEQ_EVENTS)
+ return;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ clear_bit(HINIC3_AEQ_HW_CB_REG, &aeqs->aeq_hw_cb_state[event]);
+
+ while (test_bit(HINIC3_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]))
+ usleep_range(EQ_USLEEP_LOW_BOUND, EQ_USLEEP_HIG_BOUND);
+
+ aeqs->aeq_hwe_cb[event] = NULL;
+}
+EXPORT_SYMBOL(hinic3_aeq_unregister_hw_cb);
+
+/**
+ * hinic3_aeq_register_swe_cb - register aeq callback for sw event
+ * @hwdev: the pointer to hw device
+ * @pri_handle: the pointer to private invoker device
+ * @event: soft event for the handler
+ * @aeq_swe_cb: callback function
+ **/
+int hinic3_aeq_register_swe_cb(void *hwdev, void *pri_handle, enum hinic3_aeq_sw_type event,
+ hinic3_aeq_swe_cb aeq_swe_cb)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || !aeq_swe_cb || event >= HINIC3_MAX_AEQ_SW_EVENTS)
+ return -EINVAL;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ aeqs->aeq_swe_cb[event] = aeq_swe_cb;
+ aeqs->aeq_swe_cb_data[event] = pri_handle;
+
+ set_bit(HINIC3_AEQ_SW_CB_REG, &aeqs->aeq_sw_cb_state[event]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_aeq_register_swe_cb);
+
+/**
+ * hinic3_aeq_unregister_swe_cb - unregister the aeq callback for sw event
+ * @hwdev: the pointer to hw device
+ * @event: soft event for the handler
+ **/
+void hinic3_aeq_unregister_swe_cb(void *hwdev, enum hinic3_aeq_sw_type event)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_AEQ_SW_EVENTS)
+ return;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ clear_bit(HINIC3_AEQ_SW_CB_REG, &aeqs->aeq_sw_cb_state[event]);
+
+ while (test_bit(HINIC3_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[event]))
+ usleep_range(EQ_USLEEP_LOW_BOUND, EQ_USLEEP_HIG_BOUND);
+
+ aeqs->aeq_swe_cb[event] = NULL;
+}
+EXPORT_SYMBOL(hinic3_aeq_unregister_swe_cb);
+
+/**
+ * hinic3_ceq_register_cb - register ceq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @pri_handle: the pointer to private invoker device
+ * @event: event for the handler
+ * @callback: callback function
+ **/
+int hinic3_ceq_register_cb(void *hwdev, void *pri_handle, enum hinic3_ceq_event event,
+ hinic3_ceq_event_cb callback)
+{
+ struct hinic3_ceqs *ceqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_CEQ_EVENTS)
+ return -EINVAL;
+
+ ceqs = ((struct hinic3_hwdev *)hwdev)->ceqs;
+
+ ceqs->ceq_cb[event] = callback;
+ ceqs->ceq_cb_data[event] = pri_handle;
+
+ set_bit(HINIC3_CEQ_CB_REG, &ceqs->ceq_cb_state[event]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_ceq_register_cb);
+
+/**
+ * hinic3_ceq_unregister_cb - unregister ceq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @event: event for the handler
+ **/
+void hinic3_ceq_unregister_cb(void *hwdev, enum hinic3_ceq_event event)
+{
+ struct hinic3_ceqs *ceqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_CEQ_EVENTS)
+ return;
+
+ ceqs = ((struct hinic3_hwdev *)hwdev)->ceqs;
+
+ clear_bit(HINIC3_CEQ_CB_REG, &ceqs->ceq_cb_state[event]);
+
+ while (test_bit(HINIC3_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]))
+ usleep_range(EQ_USLEEP_LOW_BOUND, EQ_USLEEP_HIG_BOUND);
+
+ ceqs->ceq_cb[event] = NULL;
+}
+EXPORT_SYMBOL(hinic3_ceq_unregister_cb);
+
+/**
+ * set_eq_cons_idx - write the cons idx to the hw
+ * @eq: The event queue to update the cons idx for
+ * @arm_state: whether to re-arm the eq interrupt
+ **/
+static void set_eq_cons_idx(struct hinic3_eq *eq, u32 arm_state)
+{
+ u32 eq_wrap_ci, val;
+ u32 addr = EQ_CI_SIMPLE_INDIR_REG_ADDR(eq);
+
+ eq_wrap_ci = EQ_CONS_IDX(eq);
+
+	/* in poll mode, only eq0 uses the int_arm mode */
+ if (eq->q_id != 0 && eq->hwdev->poll)
+ val = EQ_CI_SIMPLE_INDIR_SET(HINIC3_EQ_NOT_ARMED, ARMED);
+ else
+ val = EQ_CI_SIMPLE_INDIR_SET(arm_state, ARMED);
+ if (eq->type == HINIC3_AEQ) {
+ val = val |
+ EQ_CI_SIMPLE_INDIR_SET(eq_wrap_ci, CI) |
+ EQ_CI_SIMPLE_INDIR_SET(eq->q_id, AEQ_IDX);
+ } else {
+ val = val |
+ EQ_CI_SIMPLE_INDIR_SET(eq_wrap_ci, CI) |
+ EQ_CI_SIMPLE_INDIR_SET(eq->q_id, CEQ_IDX);
+ }
+
+ hinic3_hwif_write_reg(eq->hwdev->hwif, addr, val);
+}
+
+/**
+ * ceq_event_handler - handle for the ceq events
+ * @ceqs: ceqs part of the chip
+ * @ceqe: ceq element of the event
+ **/
+static void ceq_event_handler(struct hinic3_ceqs *ceqs, u32 ceqe)
+{
+ struct hinic3_hwdev *hwdev = ceqs->hwdev;
+ enum hinic3_ceq_event event = CEQE_TYPE(ceqe);
+ u32 ceqe_data = CEQE_DATA(ceqe);
+
+ if (event >= HINIC3_MAX_CEQ_EVENTS) {
+		sdk_err(hwdev->dev_hdl, "Ceq unknown event: %d, ceqe data: 0x%x\n",
+ event, ceqe_data);
+ return;
+ }
+
+ set_bit(HINIC3_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]);
+
+ if (ceqs->ceq_cb[event] &&
+ test_bit(HINIC3_CEQ_CB_REG, &ceqs->ceq_cb_state[event]))
+ ceqs->ceq_cb[event](ceqs->ceq_cb_data[event], ceqe_data);
+
+ clear_bit(HINIC3_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]);
+}
+
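+/**
+ * aeq_elem_handler - dispatch one aeq element to the registered callback
+ * @eq: the async event queue of the event
+ * @aeqe_desc: descriptor of the current aeq element (cpu endian)
+ **/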
+static void aeq_elem_handler(struct hinic3_eq *eq, u32 aeqe_desc)
+{
+ struct hinic3_aeqs *aeqs = aeq_to_aeqs(eq);
+ struct hinic3_aeq_elem *aeqe_pos;
+ enum hinic3_aeq_type event;
+ enum hinic3_aeq_sw_type sw_type;
+ u32 sw_event;
+ u8 data[HINIC3_AEQE_DATA_SIZE], size;
+
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+
+ eq->hwdev->cur_recv_aeq_cnt++;
+
+ event = EQ_ELEM_DESC_GET(aeqe_desc, TYPE);
+ if (EQ_ELEM_DESC_GET(aeqe_desc, SRC)) {
+ sw_event = event;
+ sw_type = sw_event >= HINIC3_NIC_FATAL_ERROR_MAX ?
+ HINIC3_STATEFUL_EVENT : HINIC3_STATELESS_EVENT;
+ /* SW event uses only the first 8B */
+ memcpy(data, aeqe_pos->aeqe_data, HINIC3_AEQE_DATA_SIZE);
+ hinic3_be32_to_cpu(data, HINIC3_AEQE_DATA_SIZE);
+ set_bit(HINIC3_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[sw_type]);
+ if (aeqs->aeq_swe_cb[sw_type] &&
+ test_bit(HINIC3_AEQ_SW_CB_REG,
+ &aeqs->aeq_sw_cb_state[sw_type]))
+ aeqs->aeq_swe_cb[sw_type](aeqs->aeq_swe_cb_data[sw_type], event, data);
+
+ clear_bit(HINIC3_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[sw_type]);
+ return;
+ }
+
+ if (event < HINIC3_MAX_AEQ_EVENTS) {
+ memcpy(data, aeqe_pos->aeqe_data, HINIC3_AEQE_DATA_SIZE);
+ hinic3_be32_to_cpu(data, HINIC3_AEQE_DATA_SIZE);
+
+ size = EQ_ELEM_DESC_GET(aeqe_desc, SIZE);
+ set_bit(HINIC3_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]);
+ if (aeqs->aeq_hwe_cb[event] &&
+ test_bit(HINIC3_AEQ_HW_CB_REG,
+ &aeqs->aeq_hw_cb_state[event]))
+ aeqs->aeq_hwe_cb[event](aeqs->aeq_hwe_cb_data[event], data, size);
+ clear_bit(HINIC3_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]);
+ return;
+ }
+ sdk_warn(eq->hwdev->dev_hdl, "Unknown aeq hw event %d\n", event);
+}
+
+/**
+ * aeq_irq_handler - handler for the aeq event
+ * @eq: the async event queue of the event
+ **/
+static bool aeq_irq_handler(struct hinic3_eq *eq)
+{
+ struct hinic3_aeq_elem *aeqe_pos = NULL;
+ u32 aeqe_desc;
+ u32 i, eqe_cnt = 0;
+
+ for (i = 0; i < HINIC3_TASK_PROCESS_EQE_LIMIT; i++) {
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+
+ /* Data in HW is in Big endian Format */
+ aeqe_desc = be32_to_cpu(aeqe_pos->desc);
+
+ /* HW updates wrapped bit, when it adds eq element event */
+ if (EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED) == eq->wrapped)
+ return false;
+
+ dma_rmb();
+
+ aeq_elem_handler(eq, aeqe_desc);
+
+ eq->cons_idx++;
+
+ if (eq->cons_idx == eq->eq_len) {
+ eq->cons_idx = 0;
+ eq->wrapped = !eq->wrapped;
+ }
+
+ if (++eqe_cnt >= HINIC3_EQ_UPDATE_CI_STEP) {
+ eqe_cnt = 0;
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+ }
+ }
+
+ return true;
+}
+
+/**
+ * ceq_irq_handler - handler for the ceq event
+ * @eq: the completion event queue of the event
+ **/
+static bool ceq_irq_handler(struct hinic3_eq *eq)
+{
+ struct hinic3_ceqs *ceqs = ceq_to_ceqs(eq);
+ u32 ceqe, eqe_cnt = 0;
+ u32 i;
+
+ for (i = 0; i < g_num_ceqe_in_tasklet; i++) {
+ ceqe = *(GET_CURR_CEQ_ELEM(eq));
+ ceqe = be32_to_cpu(ceqe);
+
+ /* HW updates wrapped bit, when it adds eq element event */
+ if (EQ_ELEM_DESC_GET(ceqe, WRAPPED) == eq->wrapped)
+ return false;
+
+ ceq_event_handler(ceqs, ceqe);
+
+ eq->cons_idx++;
+
+ if (eq->cons_idx == eq->eq_len) {
+ eq->cons_idx = 0;
+ eq->wrapped = !eq->wrapped;
+ }
+
+ if (++eqe_cnt >= HINIC3_EQ_UPDATE_CI_STEP) {
+ eqe_cnt = 0;
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+ }
+ }
+
+ return true;
+}
+
+static void reschedule_eq_handler(struct hinic3_eq *eq)
+{
+ if (eq->type == HINIC3_AEQ) {
+ struct hinic3_aeqs *aeqs = aeq_to_aeqs(eq);
+
+ queue_work_on(hisdk3_get_work_cpu_affinity(eq->hwdev, WORK_TYPE_AEQ),
+ aeqs->workq, &eq->aeq_work);
+ } else {
+ tasklet_schedule(&eq->ceq_tasklet);
+ }
+}
+
+/**
+ * eq_irq_handler - handler for the eq event
+ * @data: the event queue of the event
+ **/
+static bool eq_irq_handler(void *data)
+{
+ struct hinic3_eq *eq = (struct hinic3_eq *)data;
+ bool uncompleted = false;
+
+ if (eq->type == HINIC3_AEQ)
+ uncompleted = aeq_irq_handler(eq);
+ else
+ uncompleted = ceq_irq_handler(eq);
+
+ set_eq_cons_idx(eq, uncompleted ? HINIC3_EQ_NOT_ARMED :
+ HINIC3_EQ_ARMED);
+
+ return uncompleted;
+}
+
+/**
+ * eq_irq_work - eq work for the event
+ * @work: the work that is associated with the eq
+ **/
+static void eq_irq_work(struct work_struct *work)
+{
+ struct hinic3_eq *eq = container_of(work, struct hinic3_eq, aeq_work);
+
+ if (eq_irq_handler(eq))
+ reschedule_eq_handler(eq);
+}
+
+/**
+ * aeq_interrupt - aeq interrupt handler
+ * @irq: irq number
+ * @data: the async event queue of the event
+ **/
+static irqreturn_t aeq_interrupt(int irq, void *data)
+{
+ struct hinic3_eq *aeq = (struct hinic3_eq *)data;
+ struct hinic3_hwdev *hwdev = aeq->hwdev;
+ struct hinic3_aeqs *aeqs = aeq_to_aeqs(aeq);
+ struct workqueue_struct *workq = aeqs->workq;
+
+ /* clear resend timer cnt register */
+ hinic3_misx_intr_clear_resend_bit(hwdev, aeq->eq_irq.msix_entry_idx,
+ EQ_MSIX_RESEND_TIMER_CLEAR);
+
+ queue_work_on(hisdk3_get_work_cpu_affinity(hwdev, WORK_TYPE_AEQ),
+ workq, &aeq->aeq_work);
+ return IRQ_HANDLED;
+}
+
+/**
+ * ceq_tasklet - ceq tasklet for the event
+ * @ceq_data: data that will be used by the tasklet(ceq)
+ **/
+static void ceq_tasklet(ulong ceq_data)
+{
+ struct hinic3_eq *eq = (struct hinic3_eq *)ceq_data;
+
+ eq->soft_intr_jif = jiffies;
+
+ if (eq_irq_handler(eq))
+ reschedule_eq_handler(eq);
+}
+
+/**
+ * ceq_interrupt - ceq interrupt handler
+ * @irq: irq number
+ * @data: the completion event queue of the event
+ **/
+static irqreturn_t ceq_interrupt(int irq, void *data)
+{
+ struct hinic3_eq *ceq = (struct hinic3_eq *)data;
+
+ ceq->hard_intr_jif = jiffies;
+
+ /* clear resend timer counters */
+ hinic3_misx_intr_clear_resend_bit(ceq->hwdev,
+ ceq->eq_irq.msix_entry_idx,
+ EQ_MSIX_RESEND_TIMER_CLEAR);
+
+ tasklet_schedule(&ceq->ceq_tasklet);
+
+ return IRQ_HANDLED;
+}
+
+/**
+ * set_eq_ctrls - setting eq's ctrls registers
+ * @eq: the event queue for setting
+ **/
+static int set_eq_ctrls(struct hinic3_eq *eq)
+{
+ enum hinic3_eq_type type = eq->type;
+ struct hinic3_hwif *hwif = eq->hwdev->hwif;
+ struct irq_info *eq_irq = &eq->eq_irq;
+ u32 addr, val, ctrl0, ctrl1, page_size_val, elem_size;
+ u32 pci_intf_idx = HINIC3_PCI_INTF_IDX(hwif);
+ int err;
+
+ if (type == HINIC3_AEQ) {
+ /* set ctrl0 */
+ addr = HINIC3_CSR_AEQ_CTRL_0_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ val = AEQ_CTRL_0_CLEAR(val, INTR_IDX) &
+ AEQ_CTRL_0_CLEAR(val, DMA_ATTR) &
+ AEQ_CTRL_0_CLEAR(val, PCI_INTF_IDX) &
+ AEQ_CTRL_0_CLEAR(val, INTR_MODE);
+
+ ctrl0 = AEQ_CTRL_0_SET(eq_irq->msix_entry_idx, INTR_IDX) |
+ AEQ_CTRL_0_SET(AEQ_DMA_ATTR_DEFAULT, DMA_ATTR) |
+ AEQ_CTRL_0_SET(pci_intf_idx, PCI_INTF_IDX) |
+ AEQ_CTRL_0_SET(HINIC3_INTR_MODE_ARMED, INTR_MODE);
+
+ val |= ctrl0;
+
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ /* set ctrl1 */
+ addr = HINIC3_CSR_AEQ_CTRL_1_ADDR;
+
+ page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
+ elem_size = EQ_SET_HW_ELEM_SIZE_VAL(eq);
+
+ ctrl1 = AEQ_CTRL_1_SET(eq->eq_len, LEN) |
+ AEQ_CTRL_1_SET(elem_size, ELEM_SIZE) |
+ AEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
+
+ hinic3_hwif_write_reg(hwif, addr, ctrl1);
+ } else {
+ page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
+ ctrl0 = CEQ_CTRL_0_SET(eq_irq->msix_entry_idx, INTR_IDX) |
+ CEQ_CTRL_0_SET(CEQ_DMA_ATTR_DEFAULT, DMA_ATTR) |
+ CEQ_CTRL_0_SET(CEQ_LMT_KICK_DEFAULT, LIMIT_KICK) |
+ CEQ_CTRL_0_SET(pci_intf_idx, PCI_INTF_IDX) |
+ CEQ_CTRL_0_SET(page_size_val, PAGE_SIZE) |
+ CEQ_CTRL_0_SET(HINIC3_INTR_MODE_ARMED, INTR_MODE);
+
+ ctrl1 = CEQ_CTRL_1_SET(eq->eq_len, LEN);
+
+ /* set ceq ctrl reg through mgmt cpu */
+ err = hinic3_set_ceq_ctrl_reg(eq->hwdev, eq->q_id, ctrl0,
+ ctrl1);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+/**
+ * ceq_elements_init - Initialize all the elements in the ceq
+ * @eq: the event queue
+ * @init_val: value to init with it the elements
+ **/
+static void ceq_elements_init(struct hinic3_eq *eq, u32 init_val)
+{
+ u32 *ceqe = NULL;
+ u32 i;
+
+ for (i = 0; i < eq->eq_len; i++) {
+ ceqe = GET_CEQ_ELEM(eq, i);
+ *(ceqe) = cpu_to_be32(init_val);
+ }
+
+ wmb(); /* Write the init values */
+}
+
+/**
+ * aeq_elements_init - initialize all the elements in the aeq
+ * @eq: the event queue
+ * @init_val: value to init with it the elements
+ **/
+static void aeq_elements_init(struct hinic3_eq *eq, u32 init_val)
+{
+ struct hinic3_aeq_elem *aeqe = NULL;
+ u32 i;
+
+ for (i = 0; i < eq->eq_len; i++) {
+ aeqe = GET_AEQ_ELEM(eq, i);
+ aeqe->desc = cpu_to_be32(init_val);
+ }
+
+ wmb(); /* Write the init values */
+}
+
+static void eq_elements_init(struct hinic3_eq *eq, u32 init_val)
+{
+ if (eq->type == HINIC3_AEQ)
+ aeq_elements_init(eq, init_val);
+ else
+ ceq_elements_init(eq, init_val);
+}
+
+/**
+ * alloc_eq_pages - allocate the pages for the queue
+ * @eq: the event queue
+ **/
+static int alloc_eq_pages(struct hinic3_eq *eq)
+{
+ struct hinic3_hwif *hwif = eq->hwdev->hwif;
+ struct hinic3_dma_addr_align *eq_page = NULL;
+ u32 reg, init_val;
+ u16 pg_idx, i;
+ int err;
+ gfp_t gfp_vram;
+
+ eq->eq_pages = kcalloc(eq->num_pages, sizeof(*eq->eq_pages),
+ GFP_KERNEL);
+ if (!eq->eq_pages) {
+ sdk_err(eq->hwdev->dev_hdl, "Failed to alloc eq pages description\n");
+ return -ENOMEM;
+ }
+
+ gfp_vram = hi_vram_get_gfp_vram();
+
+ for (pg_idx = 0; pg_idx < eq->num_pages; pg_idx++) {
+ eq_page = &eq->eq_pages[pg_idx];
+ err = hinic3_dma_zalloc_coherent_align(eq->hwdev->dev_hdl,
+ eq->page_size,
+ HINIC3_MIN_EQ_PAGE_SIZE,
+ GFP_KERNEL | gfp_vram,
+ eq_page);
+ if (err) {
+ sdk_err(eq->hwdev->dev_hdl, "Failed to alloc eq page, page index: %hu\n",
+ pg_idx);
+ goto dma_alloc_err;
+ }
+
+ reg = HINIC3_EQ_HI_PHYS_ADDR_REG(eq->type, pg_idx);
+ hinic3_hwif_write_reg(hwif, reg,
+ upper_32_bits(eq_page->align_paddr));
+
+ reg = HINIC3_EQ_LO_PHYS_ADDR_REG(eq->type, pg_idx);
+ hinic3_hwif_write_reg(hwif, reg,
+ lower_32_bits(eq_page->align_paddr));
+ }
+
+ eq->num_elem_in_pg = GET_EQ_NUM_ELEMS(eq, eq->page_size);
+ if (eq->num_elem_in_pg & (eq->num_elem_in_pg - 1)) {
+		sdk_err(eq->hwdev->dev_hdl, "Number of elements in eq page is not a power of 2\n");
+ err = -EINVAL;
+ goto dma_alloc_err;
+ }
+ init_val = EQ_WRAPPED(eq);
+
+ eq_elements_init(eq, init_val);
+
+ return 0;
+
+dma_alloc_err:
+ for (i = 0; i < pg_idx; i++)
+ hinic3_dma_free_coherent_align(eq->hwdev->dev_hdl,
+ &eq->eq_pages[i]);
+
+ kfree(eq->eq_pages);
+
+ return err;
+}
+
+/**
+ * free_eq_pages - free the pages of the queue
+ * @eq: the event queue
+ **/
+static void free_eq_pages(struct hinic3_eq *eq)
+{
+ u16 pg_idx;
+
+ for (pg_idx = 0; pg_idx < eq->num_pages; pg_idx++)
+ hinic3_dma_free_coherent_align(eq->hwdev->dev_hdl,
+ &eq->eq_pages[pg_idx]);
+
+ kfree(eq->eq_pages);
+ eq->eq_pages = NULL;
+}
+
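+/**
+ * get_page_size - select the eq page size
+ * @eq: the event queue
+ *
+ * Use the minimum page size when the whole queue fits in the maximum number
+ * of pages; otherwise scale the page size up to the next power-of-two multiple
+ * of the minimum so that the queue still fits.
+ **/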
+static inline u32 get_page_size(const struct hinic3_eq *eq)
+{
+ u32 total_size;
+ u32 count;
+
+ total_size = ALIGN((eq->eq_len * eq->elem_size),
+ HINIC3_MIN_EQ_PAGE_SIZE);
+ if (total_size <= (HINIC3_EQ_MAX_PAGES(eq) * HINIC3_MIN_EQ_PAGE_SIZE))
+ return HINIC3_MIN_EQ_PAGE_SIZE;
+
+ count = (u32)(ALIGN((total_size / HINIC3_EQ_MAX_PAGES(eq)),
+ HINIC3_MIN_EQ_PAGE_SIZE) / HINIC3_MIN_EQ_PAGE_SIZE);
+
+ /* round up to nearest power of two */
+ count = 1U << (u8)fls((int)(count - 1));
+
+ return ((u32)HINIC3_MIN_EQ_PAGE_SIZE) * count;
+}
+
+static int request_eq_irq(struct hinic3_eq *eq, struct irq_info *entry)
+{
+ int err = 0;
+
+ if (eq->type == HINIC3_AEQ)
+ INIT_WORK(&eq->aeq_work, eq_irq_work);
+ else
+ tasklet_init(&eq->ceq_tasklet, ceq_tasklet, (ulong)eq);
+
+ if (eq->type == HINIC3_AEQ) {
+ snprintf(eq->irq_name, sizeof(eq->irq_name),
+ "hinic3_aeq%u@pci:%s", eq->q_id,
+ pci_name(eq->hwdev->pcidev_hdl));
+
+ err = request_irq(entry->irq_id, aeq_interrupt, 0UL,
+ eq->irq_name, eq);
+ } else {
+ snprintf(eq->irq_name, sizeof(eq->irq_name),
+ "hinic3_ceq%u@pci:%s", eq->q_id,
+ pci_name(eq->hwdev->pcidev_hdl));
+ err = request_irq(entry->irq_id, ceq_interrupt, 0UL,
+ eq->irq_name, eq);
+ }
+
+ return err;
+}
+
+static void reset_eq(struct hinic3_eq *eq)
+{
+ /* clear eq_len to force eqe drop in hardware */
+ if (eq->type == HINIC3_AEQ)
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_CSR_AEQ_CTRL_1_ADDR, 0);
+ else
+ hinic3_set_ceq_ctrl_reg(eq->hwdev, eq->q_id, 0, 0);
+
+ wmb(); /* clear eq_len before clear prod idx */
+
+ hinic3_hwif_write_reg(eq->hwdev->hwif, EQ_PROD_IDX_REG_ADDR(eq), 0);
+}
+
+/**
+ * init_eq - initialize eq
+ * @eq: the event queue
+ * @hwdev: the pointer to hw device
+ * @q_id: Queue id number
+ * @q_len: the number of EQ elements
+ * @type: the type of the event queue, ceq or aeq
+ * @entry: msix entry associated with the event queue
+ * Return: 0 - Success, Negative - failure
+ **/
+static int init_eq(struct hinic3_eq *eq, struct hinic3_hwdev *hwdev, u16 q_id,
+ u32 q_len, enum hinic3_eq_type type, struct irq_info *entry)
+{
+ int err = 0;
+
+ eq->hwdev = hwdev;
+ eq->q_id = q_id;
+ eq->type = type;
+ eq->eq_len = q_len;
+
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+ wmb(); /* write index before config */
+
+ reset_eq(eq);
+
+ eq->cons_idx = 0;
+ eq->wrapped = 0;
+
+ eq->elem_size = (type == HINIC3_AEQ) ? HINIC3_AEQE_SIZE : HINIC3_CEQE_SIZE;
+
+ eq->page_size = get_page_size(eq);
+ eq->orig_page_size = eq->page_size;
+ eq->num_pages = GET_EQ_NUM_PAGES(eq, eq->page_size);
+
+ if (eq->num_pages > HINIC3_EQ_MAX_PAGES(eq)) {
+		sdk_err(hwdev->dev_hdl, "Too many pages for eq: %u\n",
+			eq->num_pages);
+ return -EINVAL;
+ }
+
+ err = alloc_eq_pages(eq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate pages for eq\n");
+ return err;
+ }
+
+ eq->eq_irq.msix_entry_idx = entry->msix_entry_idx;
+ eq->eq_irq.irq_id = entry->irq_id;
+
+ err = set_eq_ctrls(eq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to set ctrls for eq\n");
+ goto init_eq_ctrls_err;
+ }
+
+ set_eq_cons_idx(eq, HINIC3_EQ_ARMED);
+
+ err = request_eq_irq(eq, entry);
+ if (err) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to request irq for the eq, err: %d\n", err);
+ goto req_irq_err;
+ }
+
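+	/* leave this EQ's MSI-X vector masked; the caller enables all EQ
+	 * vectors in one pass after every EQ has been initialized
+	 */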
+ hinic3_set_msix_state(hwdev, entry->msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+
+ return 0;
+
+init_eq_ctrls_err:
+req_irq_err:
+ free_eq_pages(eq);
+ return err;
+}
+
+int hinic3_init_single_ceq_status(void *hwdev, u16 q_id)
+{
+ int err = 0;
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *eq = NULL;
+
+	if (!hwdev) {
+		pr_err("hwdev is null\n");
+		return -EINVAL;
+	}
+
+ if (q_id >= dev->ceqs->num_ceqs) {
+ sdk_err(dev->dev_hdl, "q_id=%u is larger than num_ceqs %u.\n",
+ q_id, dev->ceqs->num_ceqs);
+ return -EINVAL;
+ }
+
+ eq = &dev->ceqs->ceq[q_id];
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(dev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type), eq->q_id);
+ wmb(); /* write index before config */
+
+ reset_eq(eq);
+
+ err = set_eq_ctrls(eq);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to set ctrls for eq\n");
+ return err;
+ }
+ set_eq_cons_idx(eq, HINIC3_EQ_ARMED);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_init_single_ceq_status);
+
+/**
+ * remove_eq - remove eq
+ * @eq: the event queue
+ **/
+static void remove_eq(struct hinic3_eq *eq)
+{
+ struct irq_info *entry = &eq->eq_irq;
+
+ hinic3_set_msix_state(eq->hwdev, entry->msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+ synchronize_irq(entry->irq_id);
+
+ free_irq(entry->irq_id, eq);
+
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+
+ wmb(); /* write index before config */
+
+ if (eq->type == HINIC3_AEQ) {
+ cancel_work_sync(&eq->aeq_work);
+
+ /* clear eq_len to avoid hw access host memory */
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_CSR_AEQ_CTRL_1_ADDR, 0);
+ } else {
+ tasklet_kill(&eq->ceq_tasklet);
+
+ hinic3_set_ceq_ctrl_reg(eq->hwdev, eq->q_id, 0, 0);
+ }
+
+ /* update cons_idx to avoid invalid interrupt */
+ eq->cons_idx = hinic3_hwif_read_reg(eq->hwdev->hwif,
+ EQ_PROD_IDX_REG_ADDR(eq));
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+
+ free_eq_pages(eq);
+}
+
+/**
+ * hinic3_aeqs_init - init all the aeqs
+ * @hwdev: the pointer to hw device
+ * @num_aeqs: number of AEQs
+ * @msix_entries: msix entries associated with the event queues
+ * Return: 0 - Success, Negative - failure
+ **/
+int hinic3_aeqs_init(struct hinic3_hwdev *hwdev, u16 num_aeqs,
+ struct irq_info *msix_entries)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+ int err;
+ u16 i, q_id;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ aeqs = kzalloc(sizeof(*aeqs), GFP_KERNEL);
+ if (!aeqs)
+ return -ENOMEM;
+
+ hwdev->aeqs = aeqs;
+ aeqs->hwdev = hwdev;
+ aeqs->num_aeqs = num_aeqs;
+ aeqs->workq = alloc_workqueue(HINIC3_EQS_WQ_NAME,
+ WQ_MEM_RECLAIM | WQ_HIGHPRI,
+ HINIC3_MAX_AEQS);
+ if (!aeqs->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize aeq workqueue\n");
+ err = -ENOMEM;
+ goto create_work_err;
+ }
+
+ if (g_aeq_len < HINIC3_MIN_AEQ_LEN || g_aeq_len > HINIC3_MAX_AEQ_LEN) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_aeq_len value %u out of range, resetting to %d\n",
+ g_aeq_len, HINIC3_DEFAULT_AEQ_LEN);
+ g_aeq_len = HINIC3_DEFAULT_AEQ_LEN;
+ }
+
+ for (q_id = 0; q_id < num_aeqs; q_id++) {
+ err = init_eq(&aeqs->aeq[q_id], hwdev, q_id, g_aeq_len,
+ HINIC3_AEQ, &msix_entries[q_id]);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeq %u\n",
+ q_id);
+ goto init_aeq_err;
+ }
+ }
+ for (q_id = 0; q_id < num_aeqs; q_id++)
+ hinic3_set_msix_state(hwdev, msix_entries[q_id].msix_entry_idx,
+ HINIC3_MSIX_ENABLE);
+
+ return 0;
+
+init_aeq_err:
+ for (i = 0; i < q_id; i++)
+ remove_eq(&aeqs->aeq[i]);
+
+ destroy_workqueue(aeqs->workq);
+
+create_work_err:
+ kfree(aeqs);
+
+ return err;
+}
+
+/**
+ * hinic3_aeqs_free - free all the aeqs
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_aeqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ enum hinic3_aeq_type aeq_event = HINIC3_HW_INTER_INT;
+ enum hinic3_aeq_sw_type sw_aeq_event = HINIC3_STATELESS_EVENT;
+ u16 q_id;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++)
+ remove_eq(&aeqs->aeq[q_id]);
+
+ for (; sw_aeq_event < HINIC3_MAX_AEQ_SW_EVENTS; sw_aeq_event++)
+ hinic3_aeq_unregister_swe_cb(hwdev, sw_aeq_event);
+
+ for (; aeq_event < HINIC3_MAX_AEQ_EVENTS; aeq_event++)
+ hinic3_aeq_unregister_hw_cb(hwdev, aeq_event);
+
+ destroy_workqueue(aeqs->workq);
+
+ kfree(aeqs);
+}
+
+/**
+ * hinic3_ceqs_init - init all the ceqs
+ * @hwdev: the pointer to hw device
+ * @num_ceqs: number of CEQs
+ * @msix_entries: msix entries associated with the event queues
+ * Return: 0 - Success, Negative - failure
+ **/
+int hinic3_ceqs_init(struct hinic3_hwdev *hwdev, u16 num_ceqs,
+ struct irq_info *msix_entries)
+{
+ struct hinic3_ceqs *ceqs;
+ int err;
+ u16 i, q_id;
+
+ ceqs = kzalloc(sizeof(*ceqs), GFP_KERNEL);
+ if (!ceqs)
+ return -ENOMEM;
+
+ hwdev->ceqs = ceqs;
+
+ ceqs->hwdev = hwdev;
+ ceqs->num_ceqs = num_ceqs;
+
+ if (g_ceq_len < HINIC3_MIN_CEQ_LEN || g_ceq_len > HINIC3_MAX_CEQ_LEN) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_ceq_len value %u out of range, resetting to %d\n",
+ g_ceq_len, HINIC3_DEFAULT_CEQ_LEN);
+ g_ceq_len = HINIC3_DEFAULT_CEQ_LEN;
+ }
+
+ if (!g_num_ceqe_in_tasklet) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_num_ceqe_in_tasklet can not be zero, resetting to %d\n",
+ HINIC3_TASK_PROCESS_EQE_LIMIT);
+ g_num_ceqe_in_tasklet = HINIC3_TASK_PROCESS_EQE_LIMIT;
+ }
+ for (q_id = 0; q_id < num_ceqs; q_id++) {
+ err = init_eq(&ceqs->ceq[q_id], hwdev, q_id, g_ceq_len,
+ HINIC3_CEQ, &msix_entries[q_id]);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init ceq %u\n",
+ q_id);
+ goto init_ceq_err;
+ }
+ }
+ for (q_id = 0; q_id < num_ceqs; q_id++)
+ hinic3_set_msix_state(hwdev, msix_entries[q_id].msix_entry_idx,
+ HINIC3_MSIX_ENABLE);
+
+ for (i = 0; i < HINIC3_MAX_CEQ_EVENTS; i++)
+ ceqs->ceq_cb_state[i] = 0;
+
+ return 0;
+
+init_ceq_err:
+ for (i = 0; i < q_id; i++)
+ remove_eq(&ceqs->ceq[i]);
+
+ kfree(ceqs);
+
+ return err;
+}
+
+/**
+ * hinic3_ceqs_free - free all the ceqs
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_ceqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_ceqs *ceqs = hwdev->ceqs;
+ enum hinic3_ceq_event ceq_event = HINIC3_CMDQ;
+ u16 q_id;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++)
+ remove_eq(&ceqs->ceq[q_id]);
+
+ for (; ceq_event < HINIC3_MAX_CEQ_EVENTS; ceq_event++)
+ hinic3_ceq_unregister_cb(hwdev, ceq_event);
+
+ kfree(ceqs);
+}
+
+void hinic3_get_ceq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs)
+{
+ struct hinic3_ceqs *ceqs = hwdev->ceqs;
+ u16 q_id;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++) {
+ irqs[q_id].irq_id = ceqs->ceq[q_id].eq_irq.irq_id;
+ irqs[q_id].msix_entry_idx =
+ ceqs->ceq[q_id].eq_irq.msix_entry_idx;
+ }
+
+ *num_irqs = ceqs->num_ceqs;
+}
+
+void hinic3_get_aeq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ u16 q_id;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) {
+ irqs[q_id].irq_id = aeqs->aeq[q_id].eq_irq.irq_id;
+ irqs[q_id].msix_entry_idx =
+ aeqs->aeq[q_id].eq_irq.msix_entry_idx;
+ }
+
+ *num_irqs = aeqs->num_aeqs;
+}
+
+void hinic3_dump_aeq_info(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeq_elem *aeqe_pos = NULL;
+ struct hinic3_eq *eq = NULL;
+ u32 addr, ci, pi, ctrl0, idx;
+ int q_id;
+
+ for (q_id = 0; q_id < hwdev->aeqs->num_aeqs; q_id++) {
+ eq = &hwdev->aeqs->aeq[q_id];
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(eq->hwdev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+ wmb(); /* write index before config */
+
+ addr = HINIC3_CSR_AEQ_CTRL_0_ADDR;
+
+ ctrl0 = hinic3_hwif_read_reg(hwdev->hwif, addr);
+
+ idx = hinic3_hwif_read_reg(hwdev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type));
+
+ addr = EQ_CONS_IDX_REG_ADDR(eq);
+ ci = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ addr = EQ_PROD_IDX_REG_ADDR(eq);
+ pi = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+ sdk_err(hwdev->dev_hdl,
+ "Aeq id: %d, idx: %u, ctrl0: 0x%08x, ci: 0x%08x, pi: 0x%x, work_state: 0x%x, wrap: %u, desc: 0x%x swci:0x%x\n",
+ q_id, idx, ctrl0, ci, pi, work_busy(&eq->aeq_work),
+ eq->wrapped, be32_to_cpu(aeqe_pos->desc), eq->cons_idx);
+ }
+
+ hinic3_show_chip_err_info(hwdev);
+}
+
+void hinic3_dump_ceq_info(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_eq *eq = NULL;
+ u32 addr, ci, pi;
+ int q_id;
+
+ for (q_id = 0; q_id < hwdev->ceqs->num_ceqs; q_id++) {
+ eq = &hwdev->ceqs->ceq[q_id];
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+ wmb(); /* write index before config */
+
+ addr = EQ_CONS_IDX_REG_ADDR(eq);
+ ci = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ addr = EQ_PROD_IDX_REG_ADDR(eq);
+ pi = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ sdk_err(hwdev->dev_hdl,
+ "Ceq id: %d, ci: 0x%08x, sw_ci: 0x%08x, pi: 0x%x, tasklet_state: 0x%lx, wrap: %u, ceqe: 0x%x\n",
+ q_id, ci, eq->cons_idx, pi,
+ tasklet_state(&eq->ceq_tasklet),
+ eq->wrapped, be32_to_cpu(*(GET_CURR_CEQ_ELEM(eq))));
+
+ sdk_err(hwdev->dev_hdl, "Ceq last response hard interrupt time: %u\n",
+ jiffies_to_msecs(jiffies - eq->hard_intr_jif));
+ sdk_err(hwdev->dev_hdl, "Ceq last response soft interrupt time: %u\n",
+ jiffies_to_msecs(jiffies - eq->soft_intr_jif));
+ }
+
+ hinic3_show_chip_err_info(hwdev);
+}
+
+int hinic3_get_ceq_info(void *hwdev, u16 q_id, struct hinic3_ceq_info *ceq_info)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *eq = NULL;
+
+ if (!hwdev || !ceq_info)
+ return -EINVAL;
+
+ if (q_id >= dev->ceqs->num_ceqs)
+ return -EINVAL;
+
+ eq = &dev->ceqs->ceq[q_id];
+ ceq_info->q_len = eq->eq_len;
+ ceq_info->num_pages = eq->num_pages;
+ ceq_info->page_size = eq->page_size;
+ ceq_info->num_elem_in_pg = eq->num_elem_in_pg;
+ ceq_info->elem_size = eq->elem_size;
+	sdk_info(dev->dev_hdl, "get_ceq_info: qid=0x%x page_size=%u\n",
+ q_id, eq->page_size);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_ceq_info);
+
+int hinic3_get_ceq_page_phy_addr(void *hwdev, u16 q_id,
+ u16 page_idx, u64 *page_phy_addr)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *eq = NULL;
+
+ if (!hwdev || !page_phy_addr)
+ return -EINVAL;
+
+ if (q_id >= dev->ceqs->num_ceqs)
+ return -EINVAL;
+
+ eq = &dev->ceqs->ceq[q_id];
+ if (page_idx >= eq->num_pages)
+ return -EINVAL;
+
+ *page_phy_addr = eq->eq_pages[page_idx].align_paddr;
+ sdk_info(dev->dev_hdl, "ceq_page_phy_addr: 0x%llx page_idx=%u\n",
+ eq->eq_pages[page_idx].align_paddr, page_idx);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_ceq_page_phy_addr);
+
+int hinic3_set_ceq_irq_disable(void *hwdev, u16 q_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *ceq = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (q_id >= dev->ceqs->num_ceqs)
+ return -EINVAL;
+
+ ceq = &dev->ceqs->ceq[q_id];
+
+ hinic3_set_msix_state(ceq->hwdev, ceq->eq_irq.msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_ceq_irq_disable);
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h
new file mode 100644
index 0000000..a6b83c3
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_EQS_H
+#define HINIC3_EQS_H
+
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+
+#include "hinic3_common.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_MAX_AEQS 4
+#define HINIC3_MAX_CEQS 32
+
+#define HINIC3_AEQ_MAX_PAGES 4
+#define HINIC3_CEQ_MAX_PAGES 8
+
+#define HINIC3_AEQE_SIZE 64
+#define HINIC3_CEQE_SIZE 4
+
+#define HINIC3_AEQE_DESC_SIZE 4
+#define HINIC3_AEQE_DATA_SIZE \
+ (HINIC3_AEQE_SIZE - HINIC3_AEQE_DESC_SIZE)
+
+#define HINIC3_DEFAULT_AEQ_LEN 0x10000
+#define HINIC3_DEFAULT_CEQ_LEN 0x10000
+
+#define HINIC3_MIN_EQ_PAGE_SIZE 0x1000 /* min eq page size 4K Bytes */
+#define HINIC3_MAX_EQ_PAGE_SIZE 0x400000 /* max eq page size 4M Bytes */
+
+#define HINIC3_MIN_AEQ_LEN 64
+#define HINIC3_MAX_AEQ_LEN \
+ ((HINIC3_MAX_EQ_PAGE_SIZE / HINIC3_AEQE_SIZE) * HINIC3_AEQ_MAX_PAGES)
+
+#define HINIC3_MIN_CEQ_LEN 64
+#define HINIC3_MAX_CEQ_LEN \
+ ((HINIC3_MAX_EQ_PAGE_SIZE / HINIC3_CEQE_SIZE) * HINIC3_CEQ_MAX_PAGES)
+#define HINIC3_CEQ_ID_CMDQ 0
+
+#define EQ_IRQ_NAME_LEN 64
+
+#define EQ_USLEEP_LOW_BOUND 900
+#define EQ_USLEEP_HIG_BOUND 1000
+
+enum hinic3_eq_type {
+ HINIC3_AEQ,
+ HINIC3_CEQ
+};
+
+enum hinic3_eq_intr_mode {
+ HINIC3_INTR_MODE_ARMED,
+ HINIC3_INTR_MODE_ALWAYS,
+};
+
+enum hinic3_eq_ci_arm_state {
+ HINIC3_EQ_NOT_ARMED,
+ HINIC3_EQ_ARMED,
+};
+
+struct hinic3_eq {
+ struct hinic3_hwdev *hwdev;
+ u16 q_id;
+ u16 rsvd1;
+ enum hinic3_eq_type type;
+ u32 page_size;
+ u32 orig_page_size;
+ u32 eq_len;
+
+ u32 cons_idx;
+ u16 wrapped;
+ u16 rsvd2;
+
+ u16 elem_size;
+ u16 num_pages;
+ u32 num_elem_in_pg;
+
+ struct irq_info eq_irq;
+ char irq_name[EQ_IRQ_NAME_LEN];
+
+ struct hinic3_dma_addr_align *eq_pages;
+
+ struct work_struct aeq_work;
+ struct tasklet_struct ceq_tasklet;
+
+ u64 hard_intr_jif;
+ u64 soft_intr_jif;
+
+ u64 rsvd3;
+};
+
+struct hinic3_aeq_elem {
+ u8 aeqe_data[HINIC3_AEQE_DATA_SIZE];
+ u32 desc;
+};
+
+enum hinic3_aeq_cb_state {
+ HINIC3_AEQ_HW_CB_REG = 0,
+ HINIC3_AEQ_HW_CB_RUNNING,
+ HINIC3_AEQ_SW_CB_REG,
+ HINIC3_AEQ_SW_CB_RUNNING,
+};
+
+struct hinic3_aeqs {
+ struct hinic3_hwdev *hwdev;
+
+ hinic3_aeq_hwe_cb aeq_hwe_cb[HINIC3_MAX_AEQ_EVENTS];
+ void *aeq_hwe_cb_data[HINIC3_MAX_AEQ_EVENTS];
+ hinic3_aeq_swe_cb aeq_swe_cb[HINIC3_MAX_AEQ_SW_EVENTS];
+ void *aeq_swe_cb_data[HINIC3_MAX_AEQ_SW_EVENTS];
+ unsigned long aeq_hw_cb_state[HINIC3_MAX_AEQ_EVENTS];
+ unsigned long aeq_sw_cb_state[HINIC3_MAX_AEQ_SW_EVENTS];
+
+ struct hinic3_eq aeq[HINIC3_MAX_AEQS];
+ u16 num_aeqs;
+ u16 rsvd1;
+ u32 rsvd2;
+
+ struct workqueue_struct *workq;
+};
+
+enum hinic3_ceq_cb_state {
+ HINIC3_CEQ_CB_REG = 0,
+ HINIC3_CEQ_CB_RUNNING,
+};
+
+struct hinic3_ceqs {
+ struct hinic3_hwdev *hwdev;
+
+ hinic3_ceq_event_cb ceq_cb[HINIC3_MAX_CEQ_EVENTS];
+ void *ceq_cb_data[HINIC3_MAX_CEQ_EVENTS];
+ void *ceq_data[HINIC3_MAX_CEQ_EVENTS];
+ unsigned long ceq_cb_state[HINIC3_MAX_CEQ_EVENTS];
+
+ struct hinic3_eq ceq[HINIC3_MAX_CEQS];
+ u16 num_ceqs;
+ u16 rsvd1;
+ u32 rsvd2;
+};
+
+int hinic3_aeqs_init(struct hinic3_hwdev *hwdev, u16 num_aeqs,
+ struct irq_info *msix_entries);
+
+void hinic3_aeqs_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_ceqs_init(struct hinic3_hwdev *hwdev, u16 num_ceqs,
+ struct irq_info *msix_entries);
+
+void hinic3_ceqs_free(struct hinic3_hwdev *hwdev);
+
+void hinic3_get_ceq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs);
+
+void hinic3_get_aeq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs);
+
+void hinic3_dump_ceq_info(struct hinic3_hwdev *hwdev);
+
+void hinic3_dump_aeq_info(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c
new file mode 100644
index 0000000..6b96b87
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c
@@ -0,0 +1,495 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_hw_api.h"
+#ifndef HTONL
+#define HTONL(x) \
+ ((((x) & 0x000000ff) << 24) \
+ | (((x) & 0x0000ff00) << 8) \
+ | (((x) & 0x00ff0000) >> 8) \
+ | (((x) & 0xff000000) >> 24))
+#endif
+
+static void hinic3_sml_ctr_read_build_req(struct chipif_sml_ctr_rd_req *msg,
+ u8 instance_id, u8 op_id,
+ u8 ack, u32 ctr_id, u32 init_val)
+{
+ msg->head.value = 0;
+ msg->head.bs.instance = instance_id;
+ msg->head.bs.op_id = op_id;
+ msg->head.bs.ack = ack;
+ msg->head.value = HTONL(msg->head.value);
+ msg->ctr_id = ctr_id;
+ msg->ctr_id = HTONL(msg->ctr_id);
+ msg->initial = init_val;
+}
+
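+/* byte-swap an array of 'len' 32-bit words in place */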
+static void sml_ctr_htonl_n(u32 *node, u32 len)
+{
+ u32 i;
+ u32 *node_new = node;
+
+ for (i = 0; i < len; i++) {
+ *node_new = HTONL(*node_new);
+ node_new++;
+ }
+}
+
+/**
+ * hinic3_sm_ctr_rd16 - read a small single 16-bit counter
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd16(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u16 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 16bit counter read fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss16_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd16_clear - read a small single 16-bit counter and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd16_clear(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u16 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 16bit counter clear fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss16_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd32 - read a small single 32-bit counter
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd32(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u32 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 32bit counter read fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss32_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd32_clear - read a small single 32-bit counter and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ * according to the ACN error codes (ERR_OK, ERR_PARAM, ERR_FAILED, etc.)
+ **/
+int hinic3_sm_ctr_rd32_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u32 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 32bit counter clear fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss32_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd64_pair - read a big counter pair (two 64-bit values)
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value1: read counter value ptr
+ * @value2: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_pair(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!value1) {
+ pr_err("First value is NULL for read 64 bit pair\n");
+ return -EFAULT;
+ }
+
+ if (!value2) {
+ pr_err("Second value is NULL for read 64 bit pair\n");
+ return -EFAULT;
+ }
+
+ if (!hwdev || ((ctr_id & 0x1) != 0)) {
+		pr_err("Hwdev is NULL or ctr_id(%u) is an odd number for read 64 bit pair\n",
+		       ctr_id);
+ return -EFAULT;
+ }
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64 bit rd pair ret(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value1 = ((u64)rsp.bs_bp64_rsp.val1_h << BIT_32) | rsp.bs_bp64_rsp.val1_l;
+ *value2 = ((u64)rsp.bs_bp64_rsp.val2_h << BIT_32) | rsp.bs_bp64_rsp.val2_l;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd64_pair_clear - read a big counter pair (two 64-bit values) and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value1: read counter value ptr
+ * @value2: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_pair_clear(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value1, u64 *value2)
+{
+ struct chipif_sml_ctr_rd_req req = {0};
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value1 || !value2 || ((ctr_id & 0x1) != 0)) {
+		pr_err("Hwdev or value1 or value2 is NULL or ctr_id(%u) is an odd number\n", ctr_id);
+ return -EINVAL;
+ }
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64 bit clear pair fail. ret(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value1 = ((u64)rsp.bs_bp64_rsp.val1_h << BIT_32) | rsp.bs_bp64_rsp.val1_l;
+ *value2 = ((u64)rsp.bs_bp64_rsp.val2_h << BIT_32) | rsp.bs_bp64_rsp.val2_l;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd64 - read a big 64-bit counter
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64bit counter read fail err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = ((u64)rsp.bs_bs64_rsp.value1 << BIT_32) | rsp.bs_bs64_rsp.value2;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_sm_ctr_rd64);
+
+/**
+ * hinic3_sm_ctr_rd64_clear - read a big 64-bit counter and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_clear(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value)
+{
+ struct chipif_sml_ctr_rd_req req = {0};
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64bit counter clear fail err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = ((u64)rsp.bs_bs64_rsp.value1 << BIT_32) | rsp.bs_bs64_rsp.value2;
+
+ return 0;
+}
+
+int hinic3_api_csr_rd32(void *hwdev, u8 dest, u32 addr, u32 *val)
+{
+ struct hinic3_csr_request_api_data api_data = {0};
+ u32 csr_val = 0;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev || !val)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&api_data, 0, sizeof(struct hinic3_csr_request_api_data));
+ api_data.dw0 = 0;
+ api_data.dw1.bits.operation_id = HINIC3_CSR_OPERATION_READ_CSR;
+ api_data.dw1.bits.need_response = HINIC3_CSR_NEED_RESP_DATA;
+ api_data.dw1.bits.data_size = HINIC3_CSR_DATA_SZ_32;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, dest, (u8 *)(&api_data),
+ in_size, &csr_val, 0x4);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Read 32 bit csr fail, dest %u addr 0x%x, ret: 0x%x\n",
+ dest, addr, ret);
+ return ret;
+ }
+
+ *val = csr_val;
+
+ return 0;
+}
+
+int hinic3_api_csr_wr32(void *hwdev, u8 dest, u32 addr, u32 val)
+{
+ struct hinic3_csr_request_api_data api_data;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&api_data, 0, sizeof(struct hinic3_csr_request_api_data));
+ api_data.dw1.bits.operation_id = HINIC3_CSR_OPERATION_WRITE_CSR;
+ api_data.dw1.bits.need_response = HINIC3_CSR_NO_RESP_DATA;
+ api_data.dw1.bits.data_size = HINIC3_CSR_DATA_SZ_32;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
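+	/* for a 32-bit write, the high data word is all-ones per the API data layout */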
+ api_data.csr_write_data_h = 0xffffffff;
+ api_data.csr_write_data_l = val;
+
+ ret = hinic3_api_cmd_write_nack(hwdev, dest, (u8 *)(&api_data),
+ in_size);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Write 32 bit csr fail! dest %u addr 0x%x val 0x%x\n",
+ dest, addr, val);
+ return ret;
+ }
+
+ return 0;
+}
+
+int hinic3_api_csr_rd64(void *hwdev, u8 dest, u32 addr, u64 *val)
+{
+ struct hinic3_csr_request_api_data api_data = {0};
+ u64 csr_val = 0;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev || !val)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&api_data, 0, sizeof(struct hinic3_csr_request_api_data));
+ api_data.dw0 = 0;
+ api_data.dw1.bits.operation_id = HINIC3_CSR_OPERATION_READ_CSR;
+ api_data.dw1.bits.need_response = HINIC3_CSR_NEED_RESP_DATA;
+ api_data.dw1.bits.data_size = HINIC3_CSR_DATA_SZ_64;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, dest, (u8 *)(&api_data),
+ in_size, &csr_val, 0x8);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Read 64 bit csr fail, dest %u addr 0x%x\n",
+ dest, addr);
+ return ret;
+ }
+
+ *val = csr_val;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_api_csr_rd64);
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h
new file mode 100644
index 0000000..9ec812e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_API_H
+#define HINIC3_HW_API_H
+
+#include <linux/types.h>
+
+#define CHIPIF_ACK 1
+#define CHIPIF_NOACK 0
+
+#define CHIPIF_SM_CTR_OP_READ 0x2
+#define CHIPIF_SM_CTR_OP_READ_CLEAR 0x6
+
+#define BIT_32 32
+
+/* request head */
+union chipif_sml_ctr_req_head {
+ struct {
+ u32 pad:15;
+ u32 ack:1;
+ u32 op_id:5;
+ u32 instance:6;
+ u32 src:5;
+ } bs;
+
+ u32 value;
+};
+
+/* counter read request struct */
+struct chipif_sml_ctr_rd_req {
+ u32 extra;
+ union chipif_sml_ctr_req_head head;
+ u32 ctr_id;
+ u32 initial;
+ u32 pad;
+};
+
+struct hinic3_csr_request_api_data {
+ u32 dw0;
+
+ union {
+ struct {
+ u32 reserved1:13;
+ /* this field indicates the write/read data size:
+ * 2'b00: 32 bits
+ * 2'b01: 64 bits
+		 * 2'b10~2'b11: reserved
+ */
+ u32 data_size:2;
+		/* this field indicates whether the requestor expects to
+		 * receive response data.
+		 * 1'b0: does not expect response data.
+		 * 1'b1: expects response data.
+ */
+ u32 need_response:1;
+ /* this field indicates the operation that the requestor
+		 * expects.
+ * 5'b1_1110: write value to csr space.
+ * 5'b1_1111: read register from csr space.
+ */
+ u32 operation_id:5;
+ u32 reserved2:6;
+ /* this field specifies the Src node ID for this API
+ * request message.
+ */
+ u32 src_node_id:5;
+ } bits;
+
+ u32 val32;
+ } dw1;
+
+ union {
+ struct {
+ /* it specifies the CSR address. */
+ u32 csr_addr:26;
+ u32 reserved3:6;
+ } bits;
+
+ u32 val32;
+ } dw2;
+
+	/* if data_size=2'b01, this is the high 32 bits of the write data;
+	 * otherwise, it is 32'hFFFF_FFFF.
+ */
+ u32 csr_write_data_h;
+ /* the low 32 bits of write data. */
+ u32 csr_write_data_l;
+};
+
+/* counter read response union */
+union ctr_rd_rsp {
+ struct {
+ u32 value1:16;
+ u32 pad0:16;
+ u32 pad1[3];
+ } bs_ss16_rsp;
+
+ struct {
+ u32 value1;
+ u32 pad[3];
+ } bs_ss32_rsp;
+
+ struct {
+ u32 value1:20;
+ u32 pad0:12;
+ u32 value2:12;
+ u32 pad1:20;
+ u32 pad2[2];
+ } bs_sp_rsp;
+
+ struct {
+ u32 value1;
+ u32 value2;
+ u32 pad[2];
+ } bs_bs64_rsp;
+
+ struct {
+ u32 val1_h;
+ u32 val1_l;
+ u32 val2_h;
+ u32 val2_l;
+ } bs_bp64_rsp;
+};
+
+enum HINIC3_CSR_API_DATA_OPERATION_ID {
+ HINIC3_CSR_OPERATION_WRITE_CSR = 0x1E,
+ HINIC3_CSR_OPERATION_READ_CSR = 0x1F
+};
+
+enum HINIC3_CSR_API_DATA_NEED_RESPONSE_DATA {
+ HINIC3_CSR_NO_RESP_DATA = 0,
+ HINIC3_CSR_NEED_RESP_DATA = 1
+};
+
+enum HINIC3_CSR_API_DATA_DATA_SIZE {
+ HINIC3_CSR_DATA_SZ_32 = 0,
+ HINIC3_CSR_DATA_SZ_64 = 1
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c
new file mode 100644
index 0000000..2d4a9f6
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c
@@ -0,0 +1,1632 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/mutex.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/semaphore.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "cfg_mgmt_mpu_cmd.h"
+#include "cfg_mgmt_mpu_cmd_defs.h"
+#include "hinic3_hw_cfg.h"
+
+static void parse_pub_res_cap_dfx(struct hinic3_hwdev *hwdev,
+ const struct service_cap *cap)
+{
+	sdk_info(hwdev->dev_hdl, "Get public resource capability: svc_cap_en: 0x%x\n",
+ cap->svc_type);
+ sdk_info(hwdev->dev_hdl, "Host_id: 0x%x, ep_id: 0x%x, er_id: 0x%x, port_id: 0x%x\n",
+ cap->host_id, cap->ep_id, cap->er_id, cap->port_id);
+ sdk_info(hwdev->dev_hdl, "cos_bitmap: 0x%x, flexq: 0x%x, virtio_vq_size: 0x%x\n",
+ cap->cos_valid_bitmap, cap->flexq_en, cap->virtio_vq_size);
+ sdk_info(hwdev->dev_hdl, "Host_total_function: 0x%x, host_oq_id_mask_val: 0x%x, max_vf: 0x%x\n",
+ cap->host_total_function, cap->host_oq_id_mask_val,
+ cap->max_vf);
+ sdk_info(hwdev->dev_hdl, "Host_pf_num: 0x%x, pf_id_start: 0x%x, host_vf_num: 0x%x, vf_id_start: 0x%x\n",
+ cap->pf_num, cap->pf_id_start, cap->vf_num, cap->vf_id_start);
+ sdk_info(hwdev->dev_hdl,
+ "host_valid_bitmap: 0x%x, master_host_id: 0x%x, srv_multi_host_mode: 0x%x, hot_plug_disable: 0x%x\n",
+ cap->host_valid_bitmap, cap->master_host_id, cap->srv_multi_host_mode,
+ cap->hot_plug_disable);
+ sdk_info(hwdev->dev_hdl,
+ "os_hot_replace: 0x%x, fake_vf_start_id: 0x%x, fake_vf_num: 0x%x, fake_vf_max_pctx: 0x%x\n",
+ cap->os_hot_replace, cap->fake_vf_start_id, cap->fake_vf_num, cap->fake_vf_max_pctx);
+ sdk_info(hwdev->dev_hdl,
+ "fake_vf_bfilter_start_addr: 0x%x, fake_vf_bfilter_len: 0x%x, bond_create_mode: 0x%x\n",
+ cap->fake_vf_bfilter_start_addr, cap->fake_vf_bfilter_len, cap->bond_create_mode);
+}
+
+static void parse_cqm_res_cap(struct hinic3_hwdev *hwdev, struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap)
+{
+ struct dev_sf_svc_attr *attr = &cap->sf_svc_attr;
+
+ cap->fake_vf_start_id = dev_cap->fake_vf_start_id;
+ cap->fake_vf_num = dev_cap->fake_vf_num;
+ cap->fake_vf_max_pctx = dev_cap->fake_vf_max_pctx;
+ cap->fake_vf_num_cfg = dev_cap->fake_vf_num;
+ cap->fake_vf_bfilter_start_addr = dev_cap->fake_vf_bfilter_start_addr;
+ cap->fake_vf_bfilter_len = dev_cap->fake_vf_bfilter_len;
+
+ if (COMM_SUPPORT_VIRTIO_VQ_SIZE(hwdev))
+ cap->virtio_vq_size = (u16)(VIRTIO_BASE_VQ_SIZE << dev_cap->virtio_vq_size);
+ else
+ cap->virtio_vq_size = VIRTIO_DEFAULT_VQ_SIZE;
+
+ if (dev_cap->sf_svc_attr & SF_SVC_FT_BIT)
+ attr->ft_en = true;
+ else
+ attr->ft_en = false;
+
+ if (dev_cap->sf_svc_attr & SF_SVC_RDMA_BIT)
+ attr->rdma_en = true;
+ else
+ attr->rdma_en = false;
+
+ /* PPF will overwrite it when parse dynamic resource */
+ if (dev_cap->func_sf_en)
+ cap->sf_en = true;
+ else
+ cap->sf_en = false;
+
+ cap->lb_mode = dev_cap->lb_mode;
+ cap->smf_pg = dev_cap->smf_pg;
+
+ cap->timer_en = dev_cap->timer_en;
+ cap->host_oq_id_mask_val = dev_cap->host_oq_id_mask_val;
+ cap->max_connect_num = dev_cap->max_conn_num;
+ cap->max_stick2cache_num = dev_cap->max_stick2cache_num;
+ cap->bfilter_start_addr = dev_cap->max_bfilter_start_addr;
+ cap->bfilter_len = dev_cap->bfilter_len;
+ cap->hash_bucket_num = dev_cap->hash_bucket_num;
+}
+
+static void parse_pub_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ cap->host_id = dev_cap->host_id;
+ cap->ep_id = dev_cap->ep_id;
+ cap->er_id = dev_cap->er_id;
+ cap->port_id = dev_cap->port_id;
+
+ cap->svc_type = dev_cap->svc_cap_en;
+ cap->chip_svc_type = cap->svc_type;
+
+ cap->cos_valid_bitmap = dev_cap->valid_cos_bitmap;
+ cap->port_cos_valid_bitmap = dev_cap->port_cos_valid_bitmap;
+ cap->flexq_en = dev_cap->flexq_en;
+
+ cap->host_total_function = dev_cap->host_total_func;
+ cap->host_valid_bitmap = dev_cap->host_valid_bitmap;
+ cap->master_host_id = dev_cap->master_host_id;
+ cap->srv_multi_host_mode = dev_cap->srv_multi_host_mode;
+ cap->hot_plug_disable = dev_cap->hot_plug_disable;
+ cap->bond_create_mode = dev_cap->bond_create_mode;
+ cap->os_hot_replace = dev_cap->os_hot_replace;
+ cap->fake_vf_en = dev_cap->fake_vf_en;
+ cap->fake_vf_start_bit = dev_cap->fake_vf_start_bit;
+ cap->fake_vf_end_bit = dev_cap->fake_vf_end_bit;
+ cap->fake_vf_page_bit = dev_cap->fake_vf_page_bit;
+ cap->map_host_id = dev_cap->map_host_id;
+
+ if (type != TYPE_VF) {
+ cap->max_vf = dev_cap->max_vf;
+ cap->pf_num = dev_cap->host_pf_num;
+ cap->pf_id_start = dev_cap->pf_id_start;
+ cap->vf_num = dev_cap->host_vf_num;
+ cap->vf_id_start = dev_cap->vf_id_start;
+ } else {
+ cap->max_vf = 0;
+ }
+
+ parse_cqm_res_cap(hwdev, cap, dev_cap);
+ parse_pub_res_cap_dfx(hwdev, cap);
+}
+
+static void parse_dynamic_share_res_cap(struct service_cap *cap,
+ const struct cfg_cmd_dev_cap *dev_cap)
+{
+ if (dev_cap->host_sf_en)
+ cap->sf_en = true;
+ else
+ cap->sf_en = false;
+}
+
+static void parse_l2nic_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct nic_service_cap *nic_cap = &cap->nic_cap;
+
+ nic_cap->max_sqs = dev_cap->nic_max_sq_id + 1;
+ nic_cap->max_rqs = dev_cap->nic_max_rq_id + 1;
+ nic_cap->default_num_queues = dev_cap->nic_default_num_queues;
+ nic_cap->outband_vlan_cfg_en = dev_cap->outband_vlan_cfg_en;
+ nic_cap->lro_enable = dev_cap->lro_enable;
+
+	sdk_info(hwdev->dev_hdl, "L2nic resource capability, max_sqs: 0x%x, max_rqs: 0x%x\n",
+ nic_cap->max_sqs, nic_cap->max_rqs);
+
+ /* Check parameters from firmware */
+ if (nic_cap->max_sqs > HINIC3_CFG_MAX_QP) {
+		sdk_info(hwdev->dev_hdl, "Number of sqs exceeds the limit [1-%d]: sq: %u\n",
+ HINIC3_CFG_MAX_QP, nic_cap->max_sqs);
+ nic_cap->max_sqs = HINIC3_CFG_MAX_QP;
+ }
+
+ if (nic_cap->max_rqs > HINIC3_CFG_MAX_QP) {
+		sdk_info(hwdev->dev_hdl, "Number of rqs exceeds the limit [1-%d]: rq: %u\n",
+ HINIC3_CFG_MAX_QP, nic_cap->max_rqs);
+ nic_cap->max_rqs = HINIC3_CFG_MAX_QP;
+ }
+
+ if (nic_cap->outband_vlan_cfg_en)
+ sdk_info(hwdev->dev_hdl, "L2nic outband vlan cfg enabled\n");
+}
+
+static void parse_fc_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_fc_svc_cap *fc_cap = &cap->fc_cap.dev_fc_cap;
+
+ fc_cap->max_parent_qpc_num = dev_cap->fc_max_pctx;
+ fc_cap->scq_num = dev_cap->fc_max_scq;
+ fc_cap->srq_num = dev_cap->fc_max_srq;
+ fc_cap->max_child_qpc_num = dev_cap->fc_max_cctx;
+ fc_cap->child_qpc_id_start = dev_cap->fc_cctx_id_start;
+ fc_cap->vp_id_start = dev_cap->fc_vp_id_start;
+ fc_cap->vp_id_end = dev_cap->fc_vp_id_end;
+
+	sdk_info(hwdev->dev_hdl, "Get fc resource capability\n");
+ sdk_info(hwdev->dev_hdl,
+ "Max_parent_qpc_num: 0x%x, scq_num: 0x%x, srq_num: 0x%x, max_child_qpc_num: 0x%x, child_qpc_id_start: 0x%x\n",
+ fc_cap->max_parent_qpc_num, fc_cap->scq_num, fc_cap->srq_num,
+ fc_cap->max_child_qpc_num, fc_cap->child_qpc_id_start);
+ sdk_info(hwdev->dev_hdl, "Vp_id_start: 0x%x, vp_id_end: 0x%x\n",
+ fc_cap->vp_id_start, fc_cap->vp_id_end);
+}
+
+static void parse_roce_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_roce_svc_own_cap *roce_cap =
+ &cap->rdma_cap.dev_rdma_cap.roce_own_cap;
+
+ roce_cap->max_qps = dev_cap->roce_max_qp;
+ roce_cap->max_cqs = dev_cap->roce_max_cq;
+ roce_cap->max_srqs = dev_cap->roce_max_srq;
+ roce_cap->max_mpts = dev_cap->roce_max_mpt;
+ roce_cap->max_drc_qps = dev_cap->roce_max_drc_qp;
+
+ roce_cap->wqe_cl_start = dev_cap->roce_wqe_cl_start;
+ roce_cap->wqe_cl_end = dev_cap->roce_wqe_cl_end;
+ roce_cap->wqe_cl_sz = dev_cap->roce_wqe_cl_size;
+
+	sdk_info(hwdev->dev_hdl, "Get roce resource capability, type: 0x%x\n",
+ type);
+ sdk_info(hwdev->dev_hdl, "Max_qps: 0x%x, max_cqs: 0x%x, max_srqs: 0x%x, max_mpts: 0x%x, max_drcts: 0x%x\n",
+ roce_cap->max_qps, roce_cap->max_cqs, roce_cap->max_srqs,
+ roce_cap->max_mpts, roce_cap->max_drc_qps);
+
+ sdk_info(hwdev->dev_hdl, "Wqe_start: 0x%x, wqe_end: 0x%x, wqe_sz: 0x%x\n",
+ roce_cap->wqe_cl_start, roce_cap->wqe_cl_end,
+ roce_cap->wqe_cl_sz);
+
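+	/* firmware reported no RoCE QPs: fall back to fixed default resource sizes */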
+ if (roce_cap->max_qps == 0) {
+ if (type == TYPE_PF || type == TYPE_PPF) {
+ roce_cap->max_qps = 0x400;
+ roce_cap->max_cqs = 0x800;
+ roce_cap->max_srqs = 0x400;
+ roce_cap->max_mpts = 0x400;
+ roce_cap->max_drc_qps = 0x40;
+ } else {
+ roce_cap->max_qps = 0x200;
+ roce_cap->max_cqs = 0x400;
+ roce_cap->max_srqs = 0x200;
+ roce_cap->max_mpts = 0x200;
+ roce_cap->max_drc_qps = 0x40;
+ }
+ }
+}
+
+static void parse_rdma_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_roce_svc_own_cap *roce_cap =
+ &cap->rdma_cap.dev_rdma_cap.roce_own_cap;
+
+ roce_cap->cmtt_cl_start = dev_cap->roce_cmtt_cl_start;
+ roce_cap->cmtt_cl_end = dev_cap->roce_cmtt_cl_end;
+ roce_cap->cmtt_cl_sz = dev_cap->roce_cmtt_cl_size;
+
+ roce_cap->dmtt_cl_start = dev_cap->roce_dmtt_cl_start;
+ roce_cap->dmtt_cl_end = dev_cap->roce_dmtt_cl_end;
+ roce_cap->dmtt_cl_sz = dev_cap->roce_dmtt_cl_size;
+
+	sdk_info(hwdev->dev_hdl, "Get rdma resource capability, Cmtt_start: 0x%x, cmtt_end: 0x%x, cmtt_sz: 0x%x\n",
+ roce_cap->cmtt_cl_start, roce_cap->cmtt_cl_end,
+ roce_cap->cmtt_cl_sz);
+
+ sdk_info(hwdev->dev_hdl, "Dmtt_start: 0x%x, dmtt_end: 0x%x, dmtt_sz: 0x%x\n",
+ roce_cap->dmtt_cl_start, roce_cap->dmtt_cl_end,
+ roce_cap->dmtt_cl_sz);
+}
+
+static void parse_ovs_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct ovs_service_cap *ovs_cap = &cap->ovs_cap;
+
+ ovs_cap->dev_ovs_cap.max_pctxs = dev_cap->ovs_max_qpc;
+ ovs_cap->dev_ovs_cap.fake_vf_max_pctx = dev_cap->fake_vf_max_pctx;
+ ovs_cap->dev_ovs_cap.fake_vf_start_id = dev_cap->fake_vf_start_id;
+ ovs_cap->dev_ovs_cap.fake_vf_num = dev_cap->fake_vf_num;
+ ovs_cap->dev_ovs_cap.dynamic_qp_en = dev_cap->flexq_en;
+
+ sdk_info(hwdev->dev_hdl,
+		 "Get ovs resource capability, max_qpc: 0x%x, fake_vf_start_id: 0x%x, fake_vf_num: 0x%x\n",
+ ovs_cap->dev_ovs_cap.max_pctxs,
+ ovs_cap->dev_ovs_cap.fake_vf_start_id,
+ ovs_cap->dev_ovs_cap.fake_vf_num);
+ sdk_info(hwdev->dev_hdl,
+ "fake_vf_max_qpc: 0x%x, dynamic_qp_en: 0x%x\n",
+ ovs_cap->dev_ovs_cap.fake_vf_max_pctx,
+ ovs_cap->dev_ovs_cap.dynamic_qp_en);
+}
+
+static void parse_ppa_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct ppa_service_cap *dip_cap = &cap->ppa_cap;
+
+ dip_cap->qpc_fake_vf_ctx_num = dev_cap->fake_vf_max_pctx;
+ dip_cap->qpc_fake_vf_start = dev_cap->fake_vf_start_id;
+ dip_cap->qpc_fake_vf_num = dev_cap->fake_vf_num;
+ dip_cap->bloomfilter_en = dev_cap->fake_vf_bfilter_len ? 1 : 0;
+ dip_cap->bloomfilter_length = dev_cap->fake_vf_bfilter_len;
+ sdk_info(hwdev->dev_hdl,
+		 "Get ppa resource capability, fake_vf_start_id: 0x%x, fake_vf_num: 0x%x, fake_vf_max_qpc: 0x%x\n",
+ dip_cap->qpc_fake_vf_start,
+ dip_cap->qpc_fake_vf_num,
+ dip_cap->qpc_fake_vf_ctx_num);
+}
+
+static void parse_toe_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_toe_svc_cap *toe_cap = &cap->toe_cap.dev_toe_cap;
+
+ toe_cap->max_pctxs = dev_cap->toe_max_pctx;
+ toe_cap->max_cqs = dev_cap->toe_max_cq;
+ toe_cap->max_srqs = dev_cap->toe_max_srq;
+ toe_cap->srq_id_start = dev_cap->toe_srq_id_start;
+ toe_cap->max_mpts = dev_cap->toe_max_mpt;
+ toe_cap->max_cctxt = dev_cap->toe_max_cctxt;
+
+ sdk_info(hwdev->dev_hdl,
+		 "Get toe resource capability, max_pctxs: 0x%x, max_cqs: 0x%x, max_srqs: 0x%x, srq_id_start: 0x%x, max_mpts: 0x%x\n",
+ toe_cap->max_pctxs, toe_cap->max_cqs, toe_cap->max_srqs,
+ toe_cap->srq_id_start, toe_cap->max_mpts);
+}
+
+static void parse_ipsec_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct ipsec_service_cap *ipsec_cap = &cap->ipsec_cap;
+
+ ipsec_cap->dev_ipsec_cap.max_sactxs = dev_cap->ipsec_max_sactx;
+ ipsec_cap->dev_ipsec_cap.max_cqs = dev_cap->ipsec_max_cq;
+
+	sdk_info(hwdev->dev_hdl, "Get IPsec resource capability, max_sactxs: 0x%x, max_cqs: 0x%x\n",
+ dev_cap->ipsec_max_sactx, dev_cap->ipsec_max_cq);
+}
+
+static void parse_vbs_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct vbs_service_cap *vbs_cap = &cap->vbs_cap;
+
+ vbs_cap->vbs_max_volq = dev_cap->vbs_max_volq;
+ vbs_cap->vbs_main_pf_enable = dev_cap->vbs_main_pf_enable;
+ vbs_cap->vbs_vsock_pf_enable = dev_cap->vbs_vsock_pf_enable;
+ vbs_cap->vbs_fushion_queue_pf_enable = dev_cap->vbs_fushion_queue_pf_enable;
+
+ sdk_info(hwdev->dev_hdl,
+		 "Get VBS resource capability, vbs_max_volq: 0x%x\n",
+ dev_cap->vbs_max_volq);
+ sdk_info(hwdev->dev_hdl,
+ "Get VBS pf info, vbs_main_pf_enable: 0x%x, vbs_vsock_pf_enable: 0x%x, vbs_fushion_queue_pf_enable: 0x%x\n",
+ dev_cap->vbs_main_pf_enable,
+ dev_cap->vbs_vsock_pf_enable,
+ dev_cap->vbs_fushion_queue_pf_enable);
+}
+
+static void parse_dev_cap(struct hinic3_hwdev *dev,
+ struct cfg_cmd_dev_cap *dev_cap, enum func_type type)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+
+ /* Public resource */
+ parse_pub_res_cap(dev, cap, dev_cap, type);
+
+ /* PPF managed dynamic resource */
+ if (type == TYPE_PPF)
+ parse_dynamic_share_res_cap(cap, dev_cap);
+
+ /* L2 NIC resource */
+ if (IS_NIC_TYPE(dev))
+ parse_l2nic_res_cap(dev, cap, dev_cap, type);
+
+	/* FC without virtualization */
+ if (type == TYPE_PF || type == TYPE_PPF) {
+ if (IS_FC_TYPE(dev))
+ parse_fc_res_cap(dev, cap, dev_cap, type);
+ }
+
+ /* toe resource */
+ if (IS_TOE_TYPE(dev))
+ parse_toe_res_cap(dev, cap, dev_cap, type);
+
+ /* mtt cache line */
+ if (IS_RDMA_ENABLE(dev))
+ parse_rdma_res_cap(dev, cap, dev_cap, type);
+
+ /* RoCE resource */
+ if (IS_ROCE_TYPE(dev))
+ parse_roce_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_OVS_TYPE(dev))
+ parse_ovs_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_IPSEC_TYPE(dev))
+ parse_ipsec_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_PPA_TYPE(dev))
+ parse_ppa_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_VBS_TYPE(dev))
+ parse_vbs_res_cap(dev, cap, dev_cap, type);
+}
+
+static int get_cap_from_fw(struct hinic3_hwdev *dev, enum func_type type)
+{
+ struct cfg_cmd_dev_cap dev_cap;
+ u16 out_len = sizeof(dev_cap);
+ int err;
+
+ memset(&dev_cap, 0, sizeof(dev_cap));
+ dev_cap.func_id = hinic3_global_func_id(dev);
+ sdk_info(dev->dev_hdl, "Get cap from fw, func_idx: %u\n",
+ dev_cap.func_id);
+
+ err = hinic3_msg_to_mgmt_sync(dev, HINIC3_MOD_CFGM, CFG_CMD_GET_DEV_CAP,
+ &dev_cap, sizeof(dev_cap),
+ &dev_cap, &out_len, 0,
+ HINIC3_CHANNEL_COMM);
+ if (err || dev_cap.head.status || !out_len) {
+ sdk_err(dev->dev_hdl,
+ "Failed to get capability from FW, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, dev_cap.head.status, out_len);
+ return -EIO;
+ }
+
+ parse_dev_cap(dev, &dev_cap, type);
+
+ return 0;
+}
+
+u8 hinic3_get_bond_create_mode(void *dev)
+{
+ struct hinic3_hwdev *hwdev = NULL;
+ struct service_cap *cap = NULL;
+
+ if (!dev) {
+ pr_err("pointer dev is NULL\n");
+ return -EINVAL;
+ }
+
+ hwdev = (struct hinic3_hwdev *)dev;
+ cap = &hwdev->cfg_mgmt->svc_cap;
+
+ return cap->bond_create_mode;
+}
+EXPORT_SYMBOL(hinic3_get_bond_create_mode);
+
+int hinic3_get_dev_cap(void *dev)
+{
+ enum func_type type;
+ int err;
+ struct hinic3_hwdev *hwdev = NULL;
+
+ if (!dev) {
+ pr_err("pointer dev is NULL\n");
+ return -EINVAL;
+ }
+ hwdev = (struct hinic3_hwdev *)dev;
+ type = HINIC3_FUNC_TYPE(hwdev);
+
+ switch (type) {
+ case TYPE_PF:
+ case TYPE_PPF:
+ case TYPE_VF:
+ err = get_cap_from_fw(hwdev, type);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl,
+				"Failed to get function capability\n");
+ return err;
+ }
+ break;
+ default:
+ sdk_err(hwdev->dev_hdl,
+ "Unsupported PCI Function type: %d\n", type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_dev_cap);
+
+int hinic3_get_ppf_timer_cfg(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_cmd_host_timer cfg_host_timer;
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ u16 out_len = sizeof(cfg_host_timer);
+ int err;
+
+ memset(&cfg_host_timer, 0, sizeof(cfg_host_timer));
+ cfg_host_timer.host_id = dev->cfg_mgmt->svc_cap.host_id;
+
+ err = hinic3_msg_to_mgmt_sync(dev, HINIC3_MOD_CFGM, CFG_CMD_GET_HOST_TIMER,
+ &cfg_host_timer, sizeof(cfg_host_timer),
+ &cfg_host_timer, &out_len, 0,
+ HINIC3_CHANNEL_COMM);
+ if (err || cfg_host_timer.head.status || !out_len) {
+ sdk_err(dev->dev_hdl,
+ "Failed to get host timer cfg from FW, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, cfg_host_timer.head.status, out_len);
+ return -EIO;
+ }
+
+ cap->timer_pf_id_start = cfg_host_timer.timer_pf_id_start;
+ cap->timer_pf_num = cfg_host_timer.timer_pf_num;
+ cap->timer_vf_id_start = cfg_host_timer.timer_vf_id_start;
+ cap->timer_vf_num = cfg_host_timer.timer_vf_num;
+
+ return 0;
+}
+
+static void nic_param_fix(struct hinic3_hwdev *dev)
+{
+}
+
+static void rdma_mtt_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct rdma_service_cap *rdma_cap = &cap->rdma_cap;
+
+ rdma_cap->log_mtt = LOG_MTT_SEG;
+ rdma_cap->log_mtt_seg = LOG_MTT_SEG;
+ rdma_cap->mtt_entry_sz = MTT_ENTRY_SZ;
+ rdma_cap->mpt_entry_sz = RDMA_MPT_ENTRY_SZ;
+ rdma_cap->num_mtts = RDMA_NUM_MTTS;
+}
+
+static void rdma_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct rdma_service_cap *rdma_cap = &cap->rdma_cap;
+ struct dev_roce_svc_own_cap *roce_cap =
+ &rdma_cap->dev_rdma_cap.roce_own_cap;
+
+ rdma_cap->log_mtt = LOG_MTT_SEG;
+ rdma_cap->log_rdmarc = LOG_RDMARC_SEG;
+ rdma_cap->reserved_qps = RDMA_RSVD_QPS;
+ rdma_cap->max_sq_sg = RDMA_MAX_SQ_SGE;
+
+ /* RoCE */
+ if (IS_ROCE_TYPE(dev)) {
+ roce_cap->qpc_entry_sz = ROCE_QPC_ENTRY_SZ;
+ roce_cap->max_wqes = ROCE_MAX_WQES;
+ roce_cap->max_rq_sg = ROCE_MAX_RQ_SGE;
+ roce_cap->max_sq_inline_data_sz = ROCE_MAX_SQ_INLINE_DATA_SZ;
+ roce_cap->max_rq_desc_sz = ROCE_MAX_RQ_DESC_SZ;
+ roce_cap->rdmarc_entry_sz = ROCE_RDMARC_ENTRY_SZ;
+ roce_cap->max_qp_init_rdma = ROCE_MAX_QP_INIT_RDMA;
+ roce_cap->max_qp_dest_rdma = ROCE_MAX_QP_DEST_RDMA;
+ roce_cap->max_srq_wqes = ROCE_MAX_SRQ_WQES;
+ roce_cap->reserved_srqs = ROCE_RSVD_SRQS;
+ roce_cap->max_srq_sge = ROCE_MAX_SRQ_SGE;
+ roce_cap->srqc_entry_sz = ROCE_SRQC_ENTERY_SZ;
+ roce_cap->max_msg_sz = ROCE_MAX_MSG_SZ;
+ }
+
+ rdma_cap->max_sq_desc_sz = RDMA_MAX_SQ_DESC_SZ;
+ rdma_cap->wqebb_size = WQEBB_SZ;
+ rdma_cap->max_cqes = RDMA_MAX_CQES;
+ rdma_cap->reserved_cqs = RDMA_RSVD_CQS;
+ rdma_cap->cqc_entry_sz = RDMA_CQC_ENTRY_SZ;
+ rdma_cap->cqe_size = RDMA_CQE_SZ;
+ rdma_cap->reserved_mrws = RDMA_RSVD_MRWS;
+ rdma_cap->mpt_entry_sz = RDMA_MPT_ENTRY_SZ;
+
+ /* 2^8 - 1
+ * +------------------------+-----------+
+ * | 4B | 1M(20b) | Key(8b) |
+ * +------------------------+-----------+
+ * key = 8bit key + 24bit index,
+	 * now the Lkey of an SGE uses 2 bits (bit31 and bit30), so the key only has 10 bits;
+	 * we use the original 8 bits directly for simplification
+ */
+ rdma_cap->max_fmr_maps = 0xff;
+ rdma_cap->num_mtts = RDMA_NUM_MTTS;
+ rdma_cap->log_mtt_seg = LOG_MTT_SEG;
+ rdma_cap->mtt_entry_sz = MTT_ENTRY_SZ;
+ rdma_cap->log_rdmarc_seg = LOG_RDMARC_SEG;
+ rdma_cap->local_ca_ack_delay = LOCAL_ACK_DELAY;
+ rdma_cap->num_ports = RDMA_NUM_PORTS;
+ rdma_cap->db_page_size = DB_PAGE_SZ;
+ rdma_cap->direct_wqe_size = DWQE_SZ;
+ rdma_cap->num_pds = NUM_PD;
+ rdma_cap->reserved_pds = RSVD_PD;
+ rdma_cap->max_xrcds = MAX_XRCDS;
+ rdma_cap->reserved_xrcds = RSVD_XRCDS;
+ rdma_cap->max_gid_per_port = MAX_GID_PER_PORT;
+ rdma_cap->gid_entry_sz = GID_ENTRY_SZ;
+ rdma_cap->reserved_lkey = RSVD_LKEY;
+ rdma_cap->num_comp_vectors = (u32)dev->cfg_mgmt->eq_info.num_ceq;
+ rdma_cap->page_size_cap = PAGE_SZ_CAP;
+ rdma_cap->flags = (RDMA_BMME_FLAG_LOCAL_INV |
+ RDMA_BMME_FLAG_REMOTE_INV |
+ RDMA_BMME_FLAG_FAST_REG_WR |
+ RDMA_DEV_CAP_FLAG_XRC |
+ RDMA_DEV_CAP_FLAG_MEM_WINDOW |
+ RDMA_BMME_FLAG_TYPE_2_WIN |
+ RDMA_BMME_FLAG_WIN_TYPE_2B |
+ RDMA_DEV_CAP_FLAG_ATOMIC);
+ rdma_cap->max_frpl_len = MAX_FRPL_LEN;
+ rdma_cap->max_pkeys = MAX_PKEYS;
+}
+
+static void toe_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct toe_service_cap *toe_cap = &cap->toe_cap;
+
+ toe_cap->pctx_sz = TOE_PCTX_SZ;
+ toe_cap->scqc_sz = TOE_CQC_SZ;
+}
+
+static void ovs_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct ovs_service_cap *ovs_cap = &cap->ovs_cap;
+
+ ovs_cap->pctx_sz = OVS_PCTX_SZ;
+}
+
+static void ppa_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct ppa_service_cap *ppa_cap = &cap->ppa_cap;
+
+ ppa_cap->pctx_sz = PPA_PCTX_SZ;
+}
+
+static void fc_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct fc_service_cap *fc_cap = &cap->fc_cap;
+
+ fc_cap->parent_qpc_size = FC_PCTX_SZ;
+ fc_cap->child_qpc_size = FC_CCTX_SZ;
+ fc_cap->sqe_size = FC_SQE_SZ;
+
+ fc_cap->scqc_size = FC_SCQC_SZ;
+ fc_cap->scqe_size = FC_SCQE_SZ;
+
+ fc_cap->srqc_size = FC_SRQC_SZ;
+ fc_cap->srqe_size = FC_SRQE_SZ;
+}
+
+static void ipsec_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct ipsec_service_cap *ipsec_cap = &cap->ipsec_cap;
+
+ ipsec_cap->sactx_sz = IPSEC_SACTX_SZ;
+}
+
+static void init_service_param(struct hinic3_hwdev *dev)
+{
+ if (IS_NIC_TYPE(dev))
+ nic_param_fix(dev);
+ if (IS_RDMA_ENABLE(dev))
+ rdma_mtt_fix(dev);
+ if (IS_ROCE_TYPE(dev))
+ rdma_param_fix(dev);
+ if (IS_FC_TYPE(dev))
+ fc_param_fix(dev);
+ if (IS_TOE_TYPE(dev))
+ toe_param_fix(dev);
+ if (IS_OVS_TYPE(dev))
+ ovs_param_fix(dev);
+ if (IS_IPSEC_TYPE(dev))
+ ipsec_param_fix(dev);
+ if (IS_PPA_TYPE(dev))
+ ppa_param_fix(dev);
+}
+
+static void cfg_get_eq_num(struct hinic3_hwdev *dev)
+{
+ struct cfg_eq_info *eq_info = &dev->cfg_mgmt->eq_info;
+
+ eq_info->num_ceq = dev->hwif->attr.num_ceqs;
+ eq_info->num_ceq_remain = eq_info->num_ceq;
+}
+
+static int cfg_init_eq(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ struct cfg_eq *eq = NULL;
+ u8 num_ceq, i = 0;
+
+ cfg_get_eq_num(dev);
+ num_ceq = cfg_mgmt->eq_info.num_ceq;
+
+ sdk_info(dev->dev_hdl, "Cfg mgmt: ceqs=0x%x, remain=0x%x\n",
+ cfg_mgmt->eq_info.num_ceq, cfg_mgmt->eq_info.num_ceq_remain);
+
+ if (!num_ceq) {
+ sdk_err(dev->dev_hdl, "Ceq num cfg in fw is zero\n");
+ return -EFAULT;
+ }
+
+ eq = kcalloc(num_ceq, sizeof(*eq), GFP_KERNEL);
+ if (!eq)
+ return -ENOMEM;
+
+ for (i = 0; i < num_ceq; ++i) {
+ eq[i].eqn = i;
+ eq[i].free = CFG_FREE;
+ eq[i].type = SERVICE_T_MAX;
+ }
+
+ cfg_mgmt->eq_info.eq = eq;
+
+ mutex_init(&cfg_mgmt->eq_info.eq_mutex);
+
+ return 0;
+}
+
+int hinic3_vector_to_eqn(void *hwdev, enum hinic3_service_type type, int vector)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_eq *eq = NULL;
+ int eqn = -EINVAL;
+ int vector_num = vector;
+
+ if (!hwdev || vector < 0)
+ return -EINVAL;
+
+ if (type != SERVICE_T_ROCE) {
+ sdk_err(dev->dev_hdl,
+			"Service type: %d, only the RDMA service can get an eqn by vector\n",
+ type);
+ return -EINVAL;
+ }
+
+ cfg_mgmt = dev->cfg_mgmt;
+ vector_num = (vector_num % cfg_mgmt->eq_info.num_ceq) + CFG_RDMA_CEQ_BASE;
+
+ eq = cfg_mgmt->eq_info.eq;
+ if (eq[vector_num].type == SERVICE_T_ROCE && eq[vector_num].free == CFG_BUSY)
+ eqn = eq[vector_num].eqn;
+
+ return eqn;
+}
+EXPORT_SYMBOL(hinic3_vector_to_eqn);
+
+static int cfg_init_interrupt(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ struct cfg_irq_info *irq_info = &cfg_mgmt->irq_param_info;
+ u16 intr_num = dev->hwif->attr.num_irqs;
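+	/* with msix_flex_en, only the aeqs, ceqs and sqs need dedicated vectors */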
+ u16 intr_needed = dev->hwif->attr.msix_flex_en ? (dev->hwif->attr.num_aeqs +
+ dev->hwif->attr.num_ceqs + dev->hwif->attr.num_sq) : intr_num;
+
+ if (!intr_num) {
+ sdk_err(dev->dev_hdl, "Irq num cfg in fw is zero, msix_flex_en %d\n",
+ dev->hwif->attr.msix_flex_en);
+ return -EFAULT;
+ }
+
+ if (intr_needed > intr_num) {
+ sdk_warn(dev->dev_hdl, "Irq num cfg(%d) is less than the needed irq num(%d) msix_flex_en %d\n",
+ intr_num, intr_needed, dev->hwif->attr.msix_flex_en);
+ intr_needed = intr_num;
+ }
+
+ irq_info->alloc_info = kcalloc(intr_num, sizeof(*irq_info->alloc_info),
+ GFP_KERNEL);
+ if (!irq_info->alloc_info)
+ return -ENOMEM;
+
+ irq_info->num_irq_hw = intr_needed;
+	/* Production requires that VFs support only MSI-X */
+ if (HINIC3_FUNC_TYPE(dev) == TYPE_VF)
+ cfg_mgmt->svc_cap.interrupt_type = INTR_TYPE_MSIX;
+ else
+ cfg_mgmt->svc_cap.interrupt_type = 0;
+
+ mutex_init(&irq_info->irq_mutex);
+ return 0;
+}
+
+static int cfg_enable_interrupt(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ u16 nreq = cfg_mgmt->irq_param_info.num_irq_hw;
+
+ void *pcidev = dev->pcidev_hdl;
+ struct irq_alloc_info_st *irq_info = NULL;
+ struct msix_entry *entry = NULL;
+ u16 i = 0;
+ int actual_irq;
+
+ irq_info = cfg_mgmt->irq_param_info.alloc_info;
+
+ sdk_info(dev->dev_hdl, "Interrupt type: %u, irq num: %u.\n",
+ cfg_mgmt->svc_cap.interrupt_type, nreq);
+
+ switch (cfg_mgmt->svc_cap.interrupt_type) {
+ case INTR_TYPE_MSIX:
+ if (!nreq) {
+ sdk_err(dev->dev_hdl, "Interrupt number cannot be zero\n");
+ return -EINVAL;
+ }
+ entry = kcalloc(nreq, sizeof(*entry), GFP_KERNEL);
+ if (!entry)
+ return -ENOMEM;
+
+ for (i = 0; i < nreq; i++)
+ entry[i].entry = i;
+
+ actual_irq = pci_enable_msix_range(pcidev, entry,
+ VECTOR_THRESHOLD, nreq);
+ if (actual_irq < 0) {
+ sdk_err(dev->dev_hdl, "Alloc msix entries with threshold 2 failed. actual_irq: %d\n",
+ actual_irq);
+ kfree(entry);
+ return -ENOMEM;
+ }
+
+ nreq = (u16)actual_irq;
+ cfg_mgmt->irq_param_info.num_total = nreq;
+ cfg_mgmt->irq_param_info.num_irq_remain = nreq;
+ sdk_info(dev->dev_hdl, "Request %u msix vector success.\n",
+ nreq);
+
+ for (i = 0; i < nreq; ++i) {
+ /* u16 entry index the driver specifies; filled in by the OS */
+ irq_info[i].info.msix_entry_idx = entry[i].entry;
+ /* u32 vector number the kernel writes after allocation */
+ irq_info[i].info.irq_id = entry[i].vector;
+ irq_info[i].type = SERVICE_T_MAX;
+ irq_info[i].free = CFG_FREE;
+ }
+
+ kfree(entry);
+
+ break;
+
+ default:
+ sdk_err(dev->dev_hdl, "Unsupport interrupt type %d\n",
+ cfg_mgmt->svc_cap.interrupt_type);
+ break;
+ }
+
+ return 0;
+}
+
+int hinic3_alloc_irqs(void *hwdev, enum hinic3_service_type type, u16 num,
+ struct irq_info *irq_info_array, u16 *act_num)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_irq_info *irq_info = NULL;
+ struct irq_alloc_info_st *alloc_info = NULL;
+ int max_num_irq;
+ u16 free_num_irq;
+ int i, j;
+ u16 num_new = num;
+
+ if (!hwdev || !irq_info_array || !act_num)
+ return -EINVAL;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ irq_info = &cfg_mgmt->irq_param_info;
+ alloc_info = irq_info->alloc_info;
+ max_num_irq = irq_info->num_total;
+ free_num_irq = irq_info->num_irq_remain;
+
+ mutex_lock(&irq_info->irq_mutex);
+
+ if (num > free_num_irq) {
+ if (free_num_irq == 0) {
+ sdk_err(dev->dev_hdl, "no free irq resource in cfg mgmt.\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return -ENOMEM;
+ }
+
+ sdk_warn(dev->dev_hdl, "only %u irq resource in cfg mgmt.\n", free_num_irq);
+ num_new = free_num_irq;
+ }
+
+ *act_num = 0;
+
+ for (i = 0; i < num_new; i++) {
+ for (j = 0; j < max_num_irq; j++) {
+ if (alloc_info[j].free == CFG_FREE) {
+ if (irq_info->num_irq_remain == 0) {
+ sdk_err(dev->dev_hdl, "No free irq resource in cfg mgmt\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return -EINVAL;
+ }
+ alloc_info[j].type = type;
+ alloc_info[j].free = CFG_BUSY;
+
+ irq_info_array[i].msix_entry_idx =
+ alloc_info[j].info.msix_entry_idx;
+ irq_info_array[i].irq_id = alloc_info[j].info.irq_id;
+ (*act_num)++;
+ irq_info->num_irq_remain--;
+
+ break;
+ }
+ }
+ }
+
+ mutex_unlock(&irq_info->irq_mutex);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_irqs);
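
A note on the pattern above: hinic3_alloc_irqs hands out entries with a first-fit scan over a fixed table under a mutex, where each slot carries a free/busy flag and a remaining counter is kept in sync. The following is a minimal standalone sketch of that pattern in plain userspace C; all names (pool_alloc, pool_free, irq_slot) are made up for illustration and are not part of the driver API.

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 8
#define SLOT_FREE 0
#define SLOT_BUSY 1

struct irq_slot {
	int state;   /* SLOT_FREE or SLOT_BUSY */
	int irq_id;  /* resource handed back to the caller */
};

static struct irq_slot pool[POOL_SIZE];
static int remain = POOL_SIZE;
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* First-fit allocation: scan for a free slot, mark it busy, return its id. */
static int pool_alloc(void)
{
	int i, id = -1;

	pthread_mutex_lock(&pool_lock);
	for (i = 0; i < POOL_SIZE; i++) {
		if (pool[i].state == SLOT_FREE) {
			pool[i].state = SLOT_BUSY;
			remain--;
			id = pool[i].irq_id;
			break;
		}
	}
	pthread_mutex_unlock(&pool_lock);
	return id;
}

/* Free by id: find the busy slot that owns the id and release it. */
static void pool_free(int id)
{
	int i;

	pthread_mutex_lock(&pool_lock);
	for (i = 0; i < POOL_SIZE; i++) {
		if (pool[i].state == SLOT_BUSY && pool[i].irq_id == id) {
			pool[i].state = SLOT_FREE;
			remain++;
			break;
		}
	}
	pthread_mutex_unlock(&pool_lock);
}

int main(void)
{
	int i, id;

	for (i = 0; i < POOL_SIZE; i++)
		pool[i].irq_id = 100 + i; /* pretend vectors handed out by the OS */

	id = pool_alloc();
	printf("allocated irq %d, %d remain\n", id, remain);
	pool_free(id);
	printf("freed irq %d, %d remain\n", id, remain);
	return 0;
}
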
+
+void hinic3_free_irq(void *hwdev, enum hinic3_service_type type, u32 irq_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_irq_info *irq_info = NULL;
+ struct irq_alloc_info_st *alloc_info = NULL;
+ int max_num_irq;
+ int i;
+
+ if (!hwdev)
+ return;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ irq_info = &cfg_mgmt->irq_param_info;
+ alloc_info = irq_info->alloc_info;
+ max_num_irq = irq_info->num_total;
+
+ mutex_lock(&irq_info->irq_mutex);
+
+ for (i = 0; i < max_num_irq; i++) {
+ if (irq_id == alloc_info[i].info.irq_id &&
+ type == alloc_info[i].type) {
+ if (alloc_info[i].free == CFG_BUSY) {
+ alloc_info[i].free = CFG_FREE;
+ irq_info->num_irq_remain++;
+ if (irq_info->num_irq_remain > max_num_irq) {
+ sdk_err(dev->dev_hdl, "Find target,but over range\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return;
+ }
+ break;
+ }
+ }
+ }
+
+ if (i >= max_num_irq)
+ sdk_warn(dev->dev_hdl, "Irq %u don`t need to free\n", irq_id);
+
+ mutex_unlock(&irq_info->irq_mutex);
+}
+EXPORT_SYMBOL(hinic3_free_irq);
+
+int hinic3_alloc_ceqs(void *hwdev, enum hinic3_service_type type, int num,
+ int *ceq_id_array, int *act_num)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_eq_info *eq = NULL;
+ int free_ceq;
+ int i, j;
+ int num_new = num;
+
+ if (!hwdev || !ceq_id_array || !act_num)
+ return -EINVAL;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ eq = &cfg_mgmt->eq_info;
+ free_ceq = eq->num_ceq_remain;
+
+ mutex_lock(&eq->eq_mutex);
+
+ if (num > free_ceq) {
+ if (free_ceq <= 0) {
+ sdk_err(dev->dev_hdl, "No free ceq resource in cfg mgmt\n");
+ mutex_unlock(&eq->eq_mutex);
+ return -ENOMEM;
+ }
+
+ sdk_warn(dev->dev_hdl, "Only %d ceq resource in cfg mgmt\n",
+ free_ceq);
+ }
+
+ *act_num = 0;
+
+ num_new = min(num_new, eq->num_ceq - CFG_RDMA_CEQ_BASE);
+ for (i = 0; i < num_new; i++) {
+ if (eq->num_ceq_remain == 0) {
+ sdk_warn(dev->dev_hdl, "Alloc %d ceqs, less than required %d ceqs\n",
+ *act_num, num_new);
+ mutex_unlock(&eq->eq_mutex);
+ return 0;
+ }
+
+ for (j = CFG_RDMA_CEQ_BASE; j < eq->num_ceq; j++) {
+ if (eq->eq[j].free == CFG_FREE) {
+ eq->eq[j].type = type;
+ eq->eq[j].free = CFG_BUSY;
+ eq->num_ceq_remain--;
+ ceq_id_array[i] = eq->eq[j].eqn;
+ (*act_num)++;
+ break;
+ }
+ }
+ }
+
+ mutex_unlock(&eq->eq_mutex);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_ceqs);
+
+void hinic3_free_ceq(void *hwdev, enum hinic3_service_type type, int ceq_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_eq_info *eq = NULL;
+ u8 num_ceq;
+ u8 i = 0;
+
+ if (!hwdev)
+ return;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ eq = &cfg_mgmt->eq_info;
+ num_ceq = eq->num_ceq;
+
+ mutex_lock(&eq->eq_mutex);
+
+ for (i = 0; i < num_ceq; i++) {
+ if (ceq_id == eq->eq[i].eqn &&
+ type == cfg_mgmt->eq_info.eq[i].type) {
+ if (eq->eq[i].free == CFG_BUSY) {
+ eq->eq[i].free = CFG_FREE;
+ eq->num_ceq_remain++;
+ if (eq->num_ceq_remain > num_ceq)
+ eq->num_ceq_remain %= num_ceq;
+
+ mutex_unlock(&eq->eq_mutex);
+ return;
+ }
+ }
+ }
+
+ if (i >= num_ceq)
+ sdk_warn(dev->dev_hdl, "ceq %d don`t need to free.\n", ceq_id);
+
+ mutex_unlock(&eq->eq_mutex);
+}
+EXPORT_SYMBOL(hinic3_free_ceq);
+
+int init_cfg_mgmt(struct hinic3_hwdev *dev)
+{
+ int err;
+ struct cfg_mgmt_info *cfg_mgmt;
+
+ cfg_mgmt = kzalloc(sizeof(*cfg_mgmt), GFP_KERNEL);
+ if (!cfg_mgmt)
+ return -ENOMEM;
+
+ dev->cfg_mgmt = cfg_mgmt;
+ cfg_mgmt->hwdev = dev;
+
+ err = cfg_init_eq(dev);
+ if (err != 0) {
+ sdk_err(dev->dev_hdl, "Failed to init cfg event queue, err: %d\n",
+ err);
+ goto free_mgmt_mem;
+ }
+
+ err = cfg_init_interrupt(dev);
+ if (err != 0) {
+ sdk_err(dev->dev_hdl, "Failed to init cfg interrupt, err: %d\n",
+ err);
+ goto free_eq_mem;
+ }
+
+ err = cfg_enable_interrupt(dev);
+ if (err != 0) {
+ sdk_err(dev->dev_hdl, "Failed to enable cfg interrupt, err: %d\n",
+ err);
+ goto free_interrupt_mem;
+ }
+
+ return 0;
+
+free_interrupt_mem:
+ kfree(cfg_mgmt->irq_param_info.alloc_info);
+ mutex_deinit(&((cfg_mgmt->irq_param_info).irq_mutex));
+ cfg_mgmt->irq_param_info.alloc_info = NULL;
+
+free_eq_mem:
+ kfree(cfg_mgmt->eq_info.eq);
+ mutex_deinit(&cfg_mgmt->eq_info.eq_mutex);
+ cfg_mgmt->eq_info.eq = NULL;
+
+free_mgmt_mem:
+ kfree(cfg_mgmt);
+ return err;
+}
+
+void free_cfg_mgmt(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+
+ /* check whether all allocated resources have been recycled */
+ if (cfg_mgmt->irq_param_info.num_irq_remain !=
+ cfg_mgmt->irq_param_info.num_total ||
+ cfg_mgmt->eq_info.num_ceq_remain != cfg_mgmt->eq_info.num_ceq)
+ sdk_err(dev->dev_hdl, "Can't reclaim all irq and event queue, please check\n");
+
+ switch (cfg_mgmt->svc_cap.interrupt_type) {
+ case INTR_TYPE_MSIX:
+ pci_disable_msix(dev->pcidev_hdl);
+ break;
+
+ case INTR_TYPE_MSI:
+ pci_disable_msi(dev->pcidev_hdl);
+ break;
+
+ case INTR_TYPE_INT:
+ default:
+ break;
+ }
+
+ kfree(cfg_mgmt->irq_param_info.alloc_info);
+ cfg_mgmt->irq_param_info.alloc_info = NULL;
+ mutex_deinit(&((cfg_mgmt->irq_param_info).irq_mutex));
+
+ kfree(cfg_mgmt->eq_info.eq);
+ cfg_mgmt->eq_info.eq = NULL;
+ mutex_deinit(&cfg_mgmt->eq_info.eq_mutex);
+
+ kfree(cfg_mgmt);
+}
+
+/**
+ * hinic3_init_vf_dev_cap - Set max queue num for VF
+ * @hwdev: the HW device for VF
+ */
+int hinic3_init_vf_dev_cap(void *hwdev)
+{
+ struct hinic3_hwdev *dev = NULL;
+ enum func_type type;
+ int err;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ dev = (struct hinic3_hwdev *)hwdev;
+ type = HINIC3_FUNC_TYPE(dev);
+ if (type != TYPE_VF)
+ return -EPERM;
+
+ err = hinic3_get_dev_cap(dev);
+ if (err != 0)
+ return err;
+
+ nic_param_fix(dev);
+
+ return 0;
+}
+
+int init_capability(struct hinic3_hwdev *dev)
+{
+ int err;
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+
+ cfg_mgmt->svc_cap.sf_svc_attr.ft_pf_en = false;
+ cfg_mgmt->svc_cap.sf_svc_attr.rdma_pf_en = false;
+
+ err = hinic3_get_dev_cap(dev);
+ if (err != 0)
+ return err;
+
+ init_service_param(dev);
+
+ sdk_info(dev->dev_hdl, "Init capability success\n");
+ return 0;
+}
+
+void free_capability(struct hinic3_hwdev *dev)
+{
+ sdk_info(dev->dev_hdl, "Free capability success");
+}
+
+bool hinic3_support_nic(void *hwdev, struct nic_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_NIC_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.nic_cap, sizeof(struct nic_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_nic);
+
+bool hinic3_support_ppa(void *hwdev, struct ppa_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_PPA_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.ppa_cap, sizeof(struct ppa_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_ppa);
+
+bool hinic3_support_bifur(void *hwdev, struct bifur_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_BIFUR_TYPE(dev))
+ return false;
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_bifur);
+
+bool hinic3_support_migr(void *hwdev, struct migr_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_MIGR_TYPE(dev))
+ return false;
+
+ if (cap)
+ cap->master_host_id = dev->cfg_mgmt->svc_cap.master_host_id;
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_migr);
+
+bool hinic3_support_ipsec(void *hwdev, struct ipsec_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_IPSEC_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.ipsec_cap, sizeof(struct ipsec_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_ipsec);
+
+bool hinic3_support_roce(void *hwdev, struct rdma_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_ROCE_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.rdma_cap, sizeof(struct rdma_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_roce);
+
+bool hinic3_support_fc(void *hwdev, struct fc_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_FC_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.fc_cap, sizeof(struct fc_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_fc);
+
+bool hinic3_support_rdma(void *hwdev, struct rdma_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_RDMA_TYPE(dev) && !(IS_RDMA_ENABLE(dev)))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.rdma_cap, sizeof(struct rdma_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_rdma);
+
+bool hinic3_support_ovs(void *hwdev, struct ovs_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_OVS_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.ovs_cap, sizeof(struct ovs_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_ovs);
+
+bool hinic3_support_vbs(void *hwdev, struct vbs_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_VBS_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.vbs_cap, sizeof(struct vbs_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_vbs);
+
+bool hinic3_is_guest_vmsec_enable(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("hwdev is null\n");
+ return false;
+ }
+
+ /* vf used in vm */
+ if (IS_VM_SLAVE_HOST(hw_dev) && (hinic3_func_type(hwdev) == TYPE_VF) &&
+ IS_RDMA_TYPE(hw_dev)) {
+ return true;
+ }
+
+ return false;
+}
+EXPORT_SYMBOL(hinic3_is_guest_vmsec_enable);
+
+/* Only the PPF supports it, the PF does not */
+bool hinic3_support_toe(void *hwdev, struct toe_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_TOE_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.toe_cap, sizeof(struct toe_service_cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_toe);
+
+bool hinic3_func_for_mgmt(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (dev->cfg_mgmt->svc_cap.chip_svc_type)
+ return false;
+ else
+ return true;
+}
+
+bool hinic3_get_stateful_enable(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ return dev->cfg_mgmt->svc_cap.sf_en;
+}
+EXPORT_SYMBOL(hinic3_get_stateful_enable);
+
+bool hinic3_get_timer_enable(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ return dev->cfg_mgmt->svc_cap.timer_en;
+}
+EXPORT_SYMBOL(hinic3_get_timer_enable);
+
+u8 hinic3_host_oq_id_mask(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting host oq id mask\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.host_oq_id_mask_val;
+}
+EXPORT_SYMBOL(hinic3_host_oq_id_mask);
+
+u8 hinic3_host_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting host id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.host_id;
+}
+EXPORT_SYMBOL(hinic3_host_id);
+
+u16 hinic3_host_total_func(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting host total function number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.host_total_function;
+}
+EXPORT_SYMBOL(hinic3_host_total_func);
+
+u16 hinic3_func_max_qnum(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting function max queue number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.nic_cap.max_sqs;
+}
+EXPORT_SYMBOL(hinic3_func_max_qnum);
+
+u16 hinic3_func_max_nic_qnum(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting function max queue number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.nic_cap.max_sqs;
+}
+EXPORT_SYMBOL(hinic3_func_max_nic_qnum);
+
+u8 hinic3_ep_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting ep id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.ep_id;
+}
+EXPORT_SYMBOL(hinic3_ep_id);
+
+u8 hinic3_er_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting er id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.er_id;
+}
+EXPORT_SYMBOL(hinic3_er_id);
+
+u8 hinic3_physical_port_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting physical port id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.port_id;
+}
+EXPORT_SYMBOL(hinic3_physical_port_id);
+
+u16 hinic3_func_max_vf(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting max vf number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.max_vf;
+}
+EXPORT_SYMBOL(hinic3_func_max_vf);
+
+int hinic3_cos_valid_bitmap(void *hwdev, u8 *func_dft_cos, u8 *port_cos_bitmap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting cos valid bitmap\n");
+ return 1;
+ }
+ *func_dft_cos = dev->cfg_mgmt->svc_cap.cos_valid_bitmap;
+ *port_cos_bitmap = dev->cfg_mgmt->svc_cap.port_cos_valid_bitmap;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_cos_valid_bitmap);
+
+void hinic3_shutdown_hwdev(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return;
+
+ if (IS_SLAVE_HOST(dev))
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), false);
+}
+
+u32 hinic3_host_pf_num(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting pf number capability\n");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.pf_num;
+}
+EXPORT_SYMBOL(hinic3_host_pf_num);
+
+u32 hinic3_host_pf_id_start(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting pf id start capability\n");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.pf_id_start;
+}
+EXPORT_SYMBOL(hinic3_host_pf_id_start);
+
+u8 hinic3_flexq_en(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return 0;
+
+ return dev->cfg_mgmt->svc_cap.flexq_en;
+}
+EXPORT_SYMBOL(hinic3_flexq_en);
+
+int hinic3_get_fake_vf_info(void *hwdev, u8 *fake_vf_vld,
+ u8 *page_bit, u8 *pf_start_bit, u8 *map_host_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting pf id start capability\n");
+ return -EINVAL;
+ }
+
+ if (!fake_vf_vld || !page_bit || !pf_start_bit || !map_host_id) {
+ pr_err("Fake vf member pointer is NULL for getting pf id start capability\n");
+ return -EINVAL;
+ }
+
+ *fake_vf_vld = dev->cfg_mgmt->svc_cap.fake_vf_en;
+ *page_bit = dev->cfg_mgmt->svc_cap.fake_vf_page_bit;
+ *pf_start_bit = dev->cfg_mgmt->svc_cap.fake_vf_start_bit;
+ *map_host_id = dev->cfg_mgmt->svc_cap.map_host_id;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_fake_vf_info);
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h
new file mode 100644
index 0000000..7157e97
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h
@@ -0,0 +1,346 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_CFG_H
+#define HINIC3_HW_CFG_H
+
+#include <linux/types.h>
+#include "cfg_mgmt_mpu_cmd_defs.h"
+#include "hinic3_hwdev.h"
+
+#define CFG_MAX_CMD_TIMEOUT 30000 /* ms */
+
+enum {
+ CFG_FREE = 0,
+ CFG_BUSY = 1
+};
+
+/* start position for CEQ allocation; the max number of CEQs is 32 */
+enum {
+ CFG_RDMA_CEQ_BASE = 0
+};
+
+/* RDMA resource */
+#define K_UNIT BIT(10)
+#define M_UNIT BIT(20)
+#define G_UNIT BIT(30)
+
+#define VIRTIO_BASE_VQ_SIZE 2048U
+#define VIRTIO_DEFAULT_VQ_SIZE 8192U
+
+/* L2NIC */
+#define HINIC3_CFG_MAX_QP 256
+
+/* RDMA */
+#define RDMA_RSVD_QPS 2
+#define ROCE_MAX_WQES (8 * K_UNIT - 1)
+#define IWARP_MAX_WQES (8 * K_UNIT)
+
+#define RDMA_MAX_SQ_SGE 16
+
+#define ROCE_MAX_RQ_SGE 16
+
+/* if this value changes, ROCE_MAX_WQE_BB_PER_WR must be updated accordingly */
+#define RDMA_MAX_SQ_DESC_SZ (256)
+
+/* (256B(cache_line_len) - 16B(ctrl_seg_len) - 48B(max_task_seg_len)) */
+#define ROCE_MAX_SQ_INLINE_DATA_SZ 192
+
+#define ROCE_MAX_RQ_DESC_SZ 256
+
+#define ROCE_QPC_ENTRY_SZ 512
+
+#define WQEBB_SZ 64
+
+#define ROCE_RDMARC_ENTRY_SZ 32
+#define ROCE_MAX_QP_INIT_RDMA 128
+#define ROCE_MAX_QP_DEST_RDMA 128
+
+#define ROCE_MAX_SRQ_WQES (16 * K_UNIT - 1)
+#define ROCE_RSVD_SRQS 0
+#define ROCE_MAX_SRQ_SGE 15
+#define ROCE_SRQC_ENTERY_SZ 64
+
+#define RDMA_MAX_CQES (8 * M_UNIT - 1)
+#define RDMA_RSVD_CQS 0
+
+#define RDMA_CQC_ENTRY_SZ 128
+
+#define RDMA_CQE_SZ 64
+#define RDMA_RSVD_MRWS 128
+#define RDMA_MPT_ENTRY_SZ 64
+#define RDMA_NUM_MTTS (1 * G_UNIT)
+#define LOG_MTT_SEG 9
+#define MTT_ENTRY_SZ 8
+#define LOG_RDMARC_SEG 3
+
+#define LOCAL_ACK_DELAY 15
+#define RDMA_NUM_PORTS 1
+#define ROCE_MAX_MSG_SZ (2 * G_UNIT)
+
+#define DB_PAGE_SZ (4 * K_UNIT)
+#define DWQE_SZ 256
+
+#define NUM_PD (128 * K_UNIT)
+#define RSVD_PD 0
+
+#define MAX_XRCDS (64 * K_UNIT)
+#define RSVD_XRCDS 0
+
+#define MAX_GID_PER_PORT 128
+#define GID_ENTRY_SZ 32
+#define RSVD_LKEY ((RDMA_RSVD_MRWS - 1) << 8)
+#define NUM_COMP_VECTORS 32
+#define PAGE_SZ_CAP ((1UL << 12) | (1UL << 16) | (1UL << 21))
+#define ROCE_MODE 1
+
+#define MAX_FRPL_LEN 511
+#define MAX_PKEYS 1
+
+/* ToE */
+#define TOE_PCTX_SZ 1024
+#define TOE_CQC_SZ 64
+
+/* IoE */
+#define IOE_PCTX_SZ 512
+
+/* FC */
+#define FC_PCTX_SZ 256
+#define FC_CCTX_SZ 256
+#define FC_SQE_SZ 128
+#define FC_SCQC_SZ 64
+#define FC_SCQE_SZ 64
+#define FC_SRQC_SZ 64
+#define FC_SRQE_SZ 32
+
+/* OVS */
+#define OVS_PCTX_SZ 512
+
+/* PPA */
+#define PPA_PCTX_SZ 512
+
+/* IPsec */
+#define IPSEC_SACTX_SZ 512
+
+struct dev_sf_svc_attr {
+ bool ft_en; /* business enable flag (does not include RDMA) */
+ bool ft_pf_en; /* In FPGA test, whether the VF resource resides in the PF:
+ * 0 - VF, 1 - PF; a VF does not need this bit.
+ */
+ bool rdma_en;
+ bool rdma_pf_en;/* In FPGA test, whether the VF RDMA resource resides in the PF:
+ * 0 - VF, 1 - PF; a VF does not need this bit.
+ */
+};
+
+enum intr_type {
+ INTR_TYPE_MSIX,
+ INTR_TYPE_MSI,
+ INTR_TYPE_INT,
+ INTR_TYPE_NONE,
+ /* PXE and OVS require single-threaded processing;
+ * synchronous messages must use the poll-wait interface.
+ */
+};
+
+/* device capability */
+struct service_cap {
+ struct dev_sf_svc_attr sf_svc_attr;
+ u16 svc_type; /* user input service type */
+ u16 chip_svc_type; /* HW supported service type; see service_bit_define */
+
+ u8 host_id;
+ u8 ep_id;
+ u8 er_id; /* PF/VF's ER */
+ u8 port_id; /* PF/VF's physical port */
+
+ /* Host global resources */
+ u16 host_total_function;
+ u8 pf_num;
+ u8 pf_id_start;
+ u16 vf_num; /* max numbers of vf in current host */
+ u16 vf_id_start;
+ u8 host_oq_id_mask_val;
+ u8 host_valid_bitmap;
+ u8 master_host_id;
+ u8 srv_multi_host_mode;
+ u16 virtio_vq_size;
+
+ u8 hot_plug_disable;
+ u8 bond_create_mode;
+ u8 os_hot_replace;
+ u8 rsvd1;
+
+ u8 timer_pf_num;
+ u8 timer_pf_id_start;
+ u16 timer_vf_num;
+ u16 timer_vf_id_start;
+
+ u8 flexq_en;
+ u8 cos_valid_bitmap;
+ u8 port_cos_valid_bitmap;
+ u16 max_vf; /* max VF number that PF supported */
+
+ u16 fake_vf_start_id;
+ u16 fake_vf_num;
+ u32 fake_vf_max_pctx;
+ u16 fake_vf_bfilter_start_addr;
+ u16 fake_vf_bfilter_len;
+
+ u16 fake_vf_num_cfg;
+
+ /* DO NOT get interrupt_type from firmware */
+ enum intr_type interrupt_type;
+
+ bool sf_en; /* stateful business status */
+ u8 timer_en; /* 0:disable, 1:enable */
+ u8 bloomfilter_en; /* 0:disable, 1:enable */
+
+ u8 lb_mode;
+ u8 smf_pg;
+
+ /* For test */
+ u32 test_mode;
+ u32 test_qpc_num;
+ u32 test_qpc_resvd_num;
+ u32 test_page_size_reorder;
+ bool test_xid_alloc_mode;
+ bool test_gpa_check_enable;
+ u8 test_qpc_alloc_mode;
+ u8 test_scqc_alloc_mode;
+
+ u32 test_max_conn_num;
+ u32 test_max_cache_conn_num;
+ u32 test_scqc_num;
+ u32 test_mpt_num;
+ u32 test_scq_resvd_num;
+ u32 test_mpt_recvd_num;
+ u32 test_hash_num;
+ u32 test_reorder_num;
+
+ u32 max_connect_num; /* PF/VF maximum connection number(1M) */
+ /* The maximum number of connections that can stick to cache memory, max 1K */
+ u16 max_stick2cache_num;
+ /* Starting address in cache memory for the bloom filter, 64-byte aligned */
+ u16 bfilter_start_addr;
+ /* Length of the bloom filter, aligned to 64 bytes. The size is length*64B.
+ * Bloom filter memory size + 1 must be a power of 2.
+ * The maximum memory size of the bloom filter is 4M
+ */
+ u16 bfilter_len;
+ /* The size of the hash bucket tables, aligned to 64 entries.
+ * Used to AND (&) the hash value. Bucket size + 1 must be a power of 2.
+ * The maximum number of hash buckets is 4M
+ */
+ u16 hash_bucket_num;
+
+ u8 map_host_id;
+ u8 fake_vf_en;
+ u8 fake_vf_start_bit;
+ u8 fake_vf_end_bit;
+ u8 fake_vf_page_bit;
+
+ struct nic_service_cap nic_cap; /* NIC capability */
+ struct rdma_service_cap rdma_cap; /* RDMA capability */
+ struct fc_service_cap fc_cap; /* FC capability */
+ struct toe_service_cap toe_cap; /* ToE capability */
+ struct ovs_service_cap ovs_cap; /* OVS capability */
+ struct ipsec_service_cap ipsec_cap; /* IPsec capability */
+ struct ppa_service_cap ppa_cap; /* PPA capability */
+ struct vbs_service_cap vbs_cap; /* VBS capability */
+};
+
+struct svc_cap_info {
+ u32 func_idx;
+ struct service_cap cap;
+};
+
+struct cfg_eq {
+ enum hinic3_service_type type;
+ int eqn;
+ int free; /* 1 - allocated, 0 - freed */
+};
+
+struct cfg_eq_info {
+ struct cfg_eq *eq;
+
+ u8 num_ceq;
+
+ u8 num_ceq_remain;
+
+ /* mutex used to allocate EQs */
+ struct mutex eq_mutex;
+};
+
+struct irq_alloc_info_st {
+ enum hinic3_service_type type;
+ int free; /* 1 - allocated, 0 - freed */
+ struct irq_info info;
+};
+
+struct cfg_irq_info {
+ struct irq_alloc_info_st *alloc_info;
+ u16 num_total;
+ u16 num_irq_remain;
+ u16 num_irq_hw; /* device max irq number */
+
+ /* mutex used to allocate IRQs */
+ struct mutex irq_mutex;
+};
+
+#define VECTOR_THRESHOLD 2
+
+struct cfg_mgmt_info {
+ struct hinic3_hwdev *hwdev;
+ struct service_cap svc_cap;
+ struct cfg_eq_info eq_info; /* EQ */
+ struct cfg_irq_info irq_param_info; /* IRQ */
+ u32 func_seq_num; /* temporary */
+};
+
+#define CFG_SERVICE_FT_EN (CFG_SERVICE_MASK_VBS | CFG_SERVICE_MASK_TOE | \
+ CFG_SERVICE_MASK_IPSEC | CFG_SERVICE_MASK_FC | \
+ CFG_SERVICE_MASK_VIRTIO | CFG_SERVICE_MASK_OVS)
+#define CFG_SERVICE_RDMA_EN CFG_SERVICE_MASK_ROCE
+
+#define IS_NIC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_NIC)
+#define IS_ROCE_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_ROCE)
+#define IS_VBS_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_VBS)
+#define IS_TOE_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_TOE)
+#define IS_IPSEC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_IPSEC)
+#define IS_FC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_FC)
+#define IS_OVS_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_OVS)
+#define IS_FT_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_FT_EN)
+#define IS_RDMA_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_RDMA_EN)
+#define IS_RDMA_ENABLE(dev) \
+ ((dev)->cfg_mgmt->svc_cap.sf_svc_attr.rdma_en)
+#define IS_PPA_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_PPA)
+#define IS_MIGR_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_MIGRATE)
+#define IS_BIFUR_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_BIFUR)
+
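A note on the IS_*_TYPE macros above: they simply test one service bit in chip_svc_type, the capability bitmap reported by firmware. Below is a minimal standalone sketch of the same bitmap test in plain C; the SVC_MASK_* values and struct fake_cap are made up here, the real masks come from cfg_mgmt_mpu_cmd_defs.h.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical service bits, for illustration only. */
#define SVC_MASK_NIC   (1U << 0)
#define SVC_MASK_ROCE  (1U << 1)
#define SVC_MASK_TOE   (1U << 2)
#define SVC_MASK_FC    (1U << 3)

struct fake_cap {
	uint32_t chip_svc_type; /* bitmap reported by firmware */
};

/* Same shape as IS_NIC_TYPE/IS_ROCE_TYPE: AND the bitmap with one mask. */
#define IS_TYPE(cap, mask) (((cap)->chip_svc_type & (mask)) != 0)

int main(void)
{
	struct fake_cap cap = { .chip_svc_type = SVC_MASK_NIC | SVC_MASK_ROCE };

	printf("nic: %d, roce: %d, fc: %d\n",
	       IS_TYPE(&cap, SVC_MASK_NIC),
	       IS_TYPE(&cap, SVC_MASK_ROCE),
	       IS_TYPE(&cap, SVC_MASK_FC));
	return 0;
}
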
+int init_cfg_mgmt(struct hinic3_hwdev *dev);
+
+void free_cfg_mgmt(struct hinic3_hwdev *dev);
+
+int init_capability(struct hinic3_hwdev *dev);
+
+void free_capability(struct hinic3_hwdev *dev);
+
+int hinic3_init_vf_dev_cap(void *hwdev);
+
+u8 hinic3_get_bond_create_mode(void *dev);
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c
new file mode 100644
index 0000000..47264f9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c
@@ -0,0 +1,1681 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/msi.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/semaphore.h>
+#include <linux/interrupt.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_cmdq.h"
+#include "mpu_inband_cmd_defs.h"
+#include "mpu_board_defs.h"
+#include "hinic3_hw_comm.h"
+#include "vram_common.h"
+
+#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0
+#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8
+#define HINIC3_MSIX_CNT_COALESC_TIMER_SHIFT 8
+#define HINIC3_MSIX_CNT_PENDING_SHIFT 8
+#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29
+
+#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU
+#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU
+#define HINIC3_MSIX_CNT_COALESC_TIMER_MASK 0xFFU
+#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU
+#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U
+
+#define HINIC3_MSIX_CNT_SET(val, member) \
+ (((val) & HINIC3_MSIX_CNT_##member##_MASK) << \
+ HINIC3_MSIX_CNT_##member##_SHIFT)
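
The HINIC3_MSIX_CNT_SET macro above masks a value and shifts it into its sub-field of one register word, using token pasting to pick the right MASK/SHIFT pair. A small standalone sketch of that mask-and-shift packing follows; the three-field layout and the CNT_* names are hypothetical, not the real register format.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical layout: timer[7:0], pending[12:8], resend[31:29]. */
#define CNT_TIMER_SHIFT    0
#define CNT_PENDING_SHIFT  8
#define CNT_RESEND_SHIFT   29

#define CNT_TIMER_MASK     0xFFU
#define CNT_PENDING_MASK   0x1FU
#define CNT_RESEND_MASK    0x7U

/* Same shape as HINIC3_MSIX_CNT_SET: mask the value, then shift into place. */
#define CNT_SET(val, member) \
	(((val) & CNT_##member##_MASK) << CNT_##member##_SHIFT)

int main(void)
{
	uint32_t word = CNT_SET(0x20, TIMER) |
			CNT_SET(0x3, PENDING) |
			CNT_SET(0x1, RESEND);

	printf("packed word: 0x%08x\n", (unsigned int)word); /* 0x20000320 */
	return 0;
}
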
+
+#define DEFAULT_RX_BUF_SIZE ((u16)0xB)
+
+enum hinic3_rx_buf_size {
+ HINIC3_RX_BUF_SIZE_32B = 0x20,
+ HINIC3_RX_BUF_SIZE_64B = 0x40,
+ HINIC3_RX_BUF_SIZE_96B = 0x60,
+ HINIC3_RX_BUF_SIZE_128B = 0x80,
+ HINIC3_RX_BUF_SIZE_192B = 0xC0,
+ HINIC3_RX_BUF_SIZE_256B = 0x100,
+ HINIC3_RX_BUF_SIZE_384B = 0x180,
+ HINIC3_RX_BUF_SIZE_512B = 0x200,
+ HINIC3_RX_BUF_SIZE_768B = 0x300,
+ HINIC3_RX_BUF_SIZE_1K = 0x400,
+ HINIC3_RX_BUF_SIZE_1_5K = 0x600,
+ HINIC3_RX_BUF_SIZE_2K = 0x800,
+ HINIC3_RX_BUF_SIZE_3K = 0xC00,
+ HINIC3_RX_BUF_SIZE_4K = 0x1000,
+ HINIC3_RX_BUF_SIZE_8K = 0x2000,
+ HINIC3_RX_BUF_SIZE_16K = 0x4000,
+};
+
+const int hinic3_hw_rx_buf_size[] = {
+ HINIC3_RX_BUF_SIZE_32B,
+ HINIC3_RX_BUF_SIZE_64B,
+ HINIC3_RX_BUF_SIZE_96B,
+ HINIC3_RX_BUF_SIZE_128B,
+ HINIC3_RX_BUF_SIZE_192B,
+ HINIC3_RX_BUF_SIZE_256B,
+ HINIC3_RX_BUF_SIZE_384B,
+ HINIC3_RX_BUF_SIZE_512B,
+ HINIC3_RX_BUF_SIZE_768B,
+ HINIC3_RX_BUF_SIZE_1K,
+ HINIC3_RX_BUF_SIZE_1_5K,
+ HINIC3_RX_BUF_SIZE_2K,
+ HINIC3_RX_BUF_SIZE_3K,
+ HINIC3_RX_BUF_SIZE_4K,
+ HINIC3_RX_BUF_SIZE_8K,
+ HINIC3_RX_BUF_SIZE_16K,
+};
+
+static inline int comm_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, buf_in,
+ in_size, buf_out, out_size, 0,
+ HINIC3_CHANNEL_COMM);
+}
+
+static inline int comm_msg_to_mgmt_sync_ch(struct hinic3_hwdev *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u16 channel)
+{
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, buf_in,
+ in_size, buf_out, out_size, 0, channel);
+}
+
+int hinic3_get_interrupt_cfg(void *dev, struct interrupt_info *info,
+ u16 channel)
+{
+ struct hinic3_hwdev *hwdev = dev;
+ struct comm_cmd_msix_config msix_cfg;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&msix_cfg, 0, sizeof(msix_cfg));
+ msix_cfg.func_id = hinic3_global_func_id(hwdev);
+ msix_cfg.msix_index = info->msix_index;
+ msix_cfg.opcode = MGMT_MSG_CMD_OP_GET;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_CFG_MSIX_CTRL_REG,
+ &msix_cfg, sizeof(msix_cfg), &msix_cfg,
+ &out_size, channel);
+ if (err || !out_size || msix_cfg.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to get interrupt config, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, msix_cfg.head.status, out_size, channel);
+ return -EINVAL;
+ }
+
+ info->lli_credit_limit = msix_cfg.lli_credit_cnt;
+ info->lli_timer_cfg = msix_cfg.lli_timer_cnt;
+ info->pending_limt = msix_cfg.pending_cnt;
+ info->coalesc_timer_cfg = msix_cfg.coalesce_timer_cnt;
+ info->resend_timer_cfg = msix_cfg.resend_timer_cnt;
+
+ return 0;
+}
+
+int hinic3_set_interrupt_cfg_direct(void *hwdev, struct interrupt_info *info,
+ u16 channel)
+{
+ struct comm_cmd_msix_config msix_cfg;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&msix_cfg, 0, sizeof(msix_cfg));
+ msix_cfg.func_id = hinic3_global_func_id(hwdev);
+ msix_cfg.msix_index = (u16)info->msix_index;
+ msix_cfg.opcode = MGMT_MSG_CMD_OP_SET;
+
+ msix_cfg.lli_credit_cnt = info->lli_credit_limit;
+ msix_cfg.lli_timer_cnt = info->lli_timer_cfg;
+ msix_cfg.pending_cnt = info->pending_limt;
+ msix_cfg.coalesce_timer_cnt = info->coalesc_timer_cfg;
+ msix_cfg.resend_timer_cnt = info->resend_timer_cfg;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_CFG_MSIX_CTRL_REG,
+ &msix_cfg, sizeof(msix_cfg), &msix_cfg,
+ &out_size, channel);
+ if (err || !out_size || msix_cfg.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set interrupt config, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, msix_cfg.head.status, out_size, channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_interrupt_cfg(void *dev, struct interrupt_info info, u16 channel)
+{
+ struct interrupt_info temp_info;
+ struct hinic3_hwdev *hwdev = dev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ temp_info.msix_index = info.msix_index;
+
+ err = hinic3_get_interrupt_cfg(hwdev, &temp_info, channel);
+ if (err != 0)
+ return -EINVAL;
+
+ if (!info.lli_set) {
+ info.lli_credit_limit = temp_info.lli_credit_limit;
+ info.lli_timer_cfg = temp_info.lli_timer_cfg;
+ }
+
+ if (!info.interrupt_coalesc_set) {
+ info.pending_limt = temp_info.pending_limt;
+ info.coalesc_timer_cfg = temp_info.coalesc_timer_cfg;
+ info.resend_timer_cfg = temp_info.resend_timer_cfg;
+ }
+
+ return hinic3_set_interrupt_cfg_direct(hwdev, &info, channel);
+}
+EXPORT_SYMBOL(hinic3_set_interrupt_cfg);
+
+void hinic3_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx,
+ u8 clear_resend_en)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 msix_ctrl = 0, addr;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ msix_ctrl = HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX) |
+ HINIC3_MSI_CLR_INDIR_SET(clear_resend_en, RESEND_TIMER_CLR);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, msix_ctrl);
+}
+EXPORT_SYMBOL(hinic3_misx_intr_clear_resend_bit);
+
+int hinic3_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size,
+ u16 channel)
+{
+ struct comm_cmd_wq_page_size page_size_info;
+ u16 out_size = sizeof(page_size_info);
+ int err;
+
+ memset(&page_size_info, 0, sizeof(page_size_info));
+ page_size_info.func_id = func_idx;
+ page_size_info.page_size = HINIC3_PAGE_SIZE_HW(page_size);
+ page_size_info.opcode = MGMT_MSG_CMD_OP_SET;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_CFG_PAGESIZE,
+ &page_size_info, sizeof(page_size_info),
+ &page_size_info, &out_size, channel);
+ if (err || !out_size || page_size_info.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set wq page size, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, page_size_info.head.status, out_size, channel);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_func_reset(void *dev, u16 func_id, u64 reset_flag, u16 channel)
+{
+ struct comm_cmd_func_reset func_reset;
+ struct hinic3_hwdev *hwdev = dev;
+ u16 out_size = sizeof(func_reset);
+ int err = 0;
+ int is_in_kexec;
+
+ if (!dev) {
+ pr_err("Invalid para: dev is null.\n");
+ return -EINVAL;
+ }
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ sdk_info(hwdev->dev_hdl, "Skip function reset!\n");
+ return 0;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Function is reset, flag: 0x%llx, channel:0x%x\n",
+ reset_flag, channel);
+
+ memset(&func_reset, 0, sizeof(func_reset));
+ func_reset.func_id = func_id;
+ func_reset.reset_flag = reset_flag;
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_FUNC_RESET,
+ &func_reset, sizeof(func_reset),
+ &func_reset, &out_size, channel);
+ if (err || !out_size || func_reset.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to reset func resources, reset_flag 0x%llx, err: %d, status: 0x%x, out_size: 0x%x\n",
+ reset_flag, err, func_reset.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_func_reset);
+
+static u16 get_hw_rx_buf_size(int rx_buf_sz)
+{
+ u16 num_hw_types =
+ sizeof(hinic3_hw_rx_buf_size) /
+ sizeof(hinic3_hw_rx_buf_size[0]);
+ u16 i;
+
+ for (i = 0; i < num_hw_types; i++) {
+ if (hinic3_hw_rx_buf_size[i] == rx_buf_sz)
+ return i;
+ }
+
+ pr_err("Chip can't support rx buf size of %d\n", rx_buf_sz);
+
+ return DEFAULT_RX_BUF_SIZE; /* default 2K */
+}
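
get_hw_rx_buf_size above maps the requested buffer size onto the index of the hardware's size table and falls back to the 2K default index when the size is not supported. A standalone sketch of that lookup-with-fallback, with a made-up table:

#include <stdio.h>

static const int hw_sizes[] = { 32, 64, 128, 256, 512, 1024, 2048, 4096 };
#define DEFAULT_IDX 6 /* index of 2048 in hw_sizes, used as the fallback */

/* Return the hardware index for a requested size, or the default index. */
static int size_to_hw_idx(int size)
{
	unsigned int i;

	for (i = 0; i < sizeof(hw_sizes) / sizeof(hw_sizes[0]); i++) {
		if (hw_sizes[i] == size)
			return (int)i;
	}
	return DEFAULT_IDX;
}

int main(void)
{
	printf("512 -> %d, 3000 -> %d\n", size_to_hw_idx(512), size_to_hw_idx(3000));
	return 0;
}
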
+
+int hinic3_set_root_ctxt(void *hwdev, u32 rq_depth, u32 sq_depth, int rx_buf_sz,
+ u16 channel)
+{
+ struct comm_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_id = hinic3_global_func_id(hwdev);
+
+ root_ctxt.set_cmdq_depth = 0;
+ root_ctxt.cmdq_depth = 0;
+
+ root_ctxt.lro_en = 1;
+
+ root_ctxt.rq_depth = (u16)ilog2(rq_depth);
+ root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz);
+ root_ctxt.sq_depth = (u16)ilog2(sq_depth);
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_SET_VAT,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, channel);
+ if (err || !out_size || root_ctxt.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set root context, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, root_ctxt.head.status, out_size, channel);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_root_ctxt);
+
+int hinic3_clean_root_ctxt(void *hwdev, u16 channel)
+{
+ struct comm_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_id = hinic3_global_func_id(hwdev);
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_SET_VAT,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, channel);
+ if (err || !out_size || root_ctxt.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set root context, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, root_ctxt.head.status, out_size, channel);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_clean_root_ctxt);
+
+int hinic3_set_cmdq_depth(void *hwdev, u16 cmdq_depth)
+{
+ struct comm_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_id = hinic3_global_func_id(hwdev);
+
+ root_ctxt.set_cmdq_depth = 1;
+ root_ctxt.cmdq_depth = (u8)ilog2(cmdq_depth);
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_VAT, &root_ctxt,
+ sizeof(root_ctxt), &root_ctxt, &out_size);
+ if (err || !out_size || root_ctxt.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set cmdq depth, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, root_ctxt.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_set_cmdq_ctxt(struct hinic3_hwdev *hwdev, u8 cmdq_id,
+ struct cmdq_ctxt_info *ctxt)
+{
+ struct comm_cmd_cmdq_ctxt cmdq_ctxt;
+ u16 out_size = sizeof(cmdq_ctxt);
+ int err;
+
+ memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt));
+ memcpy(&cmdq_ctxt.ctxt, ctxt, sizeof(struct cmdq_ctxt_info));
+ cmdq_ctxt.func_id = hinic3_global_func_id(hwdev);
+ cmdq_ctxt.cmdq_id = cmdq_id;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_CMDQ_CTXT,
+ &cmdq_ctxt, sizeof(cmdq_ctxt),
+ &cmdq_ctxt, &out_size);
+ if (err || !out_size || cmdq_ctxt.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set cmdq ctxt, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cmdq_ctxt.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_set_ceq_ctrl_reg(struct hinic3_hwdev *hwdev, u16 q_id,
+ u32 ctrl0, u32 ctrl1)
+{
+ struct comm_cmd_ceq_ctrl_reg ceq_ctrl;
+ u16 out_size = sizeof(ceq_ctrl);
+ int err;
+
+ memset(&ceq_ctrl, 0, sizeof(ceq_ctrl));
+ ceq_ctrl.func_id = hinic3_global_func_id(hwdev);
+ ceq_ctrl.q_id = q_id;
+ ceq_ctrl.ctrl0 = ctrl0;
+ ceq_ctrl.ctrl1 = ctrl1;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_CEQ_CTRL_REG,
+ &ceq_ctrl, sizeof(ceq_ctrl),
+ &ceq_ctrl, &out_size);
+ if (err || !out_size || ceq_ctrl.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set ceq %u ctrl reg, err: %d status: 0x%x, out_size: 0x%x\n",
+ q_id, err, ceq_ctrl.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_set_dma_attr_tbl(struct hinic3_hwdev *hwdev, u8 entry_idx, u8 st, u8 at, u8 ph,
+ u8 no_snooping, u8 tph_en)
+{
+ struct comm_cmd_dma_attr_config dma_attr;
+ u16 out_size = sizeof(dma_attr);
+ int err;
+
+ memset(&dma_attr, 0, sizeof(dma_attr));
+ dma_attr.func_id = hinic3_global_func_id(hwdev);
+ dma_attr.entry_idx = entry_idx;
+ dma_attr.st = st;
+ dma_attr.at = at;
+ dma_attr.ph = ph;
+ dma_attr.no_snooping = no_snooping;
+ dma_attr.tph_en = tph_en;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_DMA_ATTR, &dma_attr, sizeof(dma_attr),
+ &dma_attr, &out_size);
+ if (err || !out_size || dma_attr.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set dma attr, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, dma_attr.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_set_bdf_ctxt(void *hwdev, u8 bus, u8 device, u8 function)
+{
+ struct comm_cmd_bdf_info bdf_info;
+ u16 out_size = sizeof(bdf_info);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&bdf_info, 0, sizeof(bdf_info));
+ bdf_info.function_idx = hinic3_global_func_id(hwdev);
+ bdf_info.bus = bus;
+ bdf_info.device = device;
+ bdf_info.function = function;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SEND_BDF_INFO,
+ &bdf_info, sizeof(bdf_info),
+ &bdf_info, &out_size);
+ if (err || !out_size || bdf_info.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set bdf info to MPU, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, bdf_info.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_sync_time(void *hwdev, u64 time)
+{
+ struct comm_cmd_sync_time time_info;
+ u16 out_size = sizeof(time_info);
+ int err;
+
+ memset(&time_info, 0, sizeof(time_info));
+ time_info.mstime = time;
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SYNC_TIME, &time_info,
+ sizeof(time_info), &time_info, &out_size);
+ if (err || time_info.head.status || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to sync time to mgmt, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, time_info.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_set_ppf_flr_type(void *hwdev, enum hinic3_ppf_flr_type flr_type)
+{
+ struct comm_cmd_ppf_flr_type_set flr_type_set;
+ u16 out_size = sizeof(struct comm_cmd_ppf_flr_type_set);
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&flr_type_set, 0, sizeof(flr_type_set));
+ flr_type_set.func_id = hinic3_global_func_id(hwdev);
+ flr_type_set.ppf_flr_type = flr_type;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_FLR_TYPE,
+ &flr_type_set, sizeof(flr_type_set),
+ &flr_type_set, &out_size);
+ if (err || !out_size || flr_type_set.head.status) {
+ sdk_err(dev->dev_hdl, "Failed to set ppf flr type, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, flr_type_set.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_ppf_flr_type);
+
+int hinic3_set_ppf_tbl_hotreplace_flag(void *hwdev, u8 flag)
+{
+ struct comm_cmd_ppf_tbl_htrp_config htr_info = {};
+ u16 out_size = sizeof(struct comm_cmd_ppf_tbl_htrp_config);
+ struct hinic3_hwdev *dev = hwdev;
+ int ret;
+
+ if (!hwdev) {
+ sdk_err(dev->dev_hdl, "Sdk set ppf table hotreplace flag para is null");
+ return -EINVAL;
+ }
+
+ htr_info.hotreplace_flag = flag;
+ ret = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_TBL_HTR_FLG,
+ &htr_info, sizeof(htr_info), &htr_info, &out_size);
+ if (ret != 0 || htr_info.head.status != 0) {
+ sdk_err(dev->dev_hdl, "Send mbox to mpu failed in sdk, ret:%d, status:%u",
+ ret, htr_info.head.status);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_ppf_tbl_hotreplace_flag);
+
+static int hinic3_get_fw_ver(struct hinic3_hwdev *hwdev, enum hinic3_fw_ver_type type,
+ u8 *mgmt_ver, u8 version_size, u16 channel)
+{
+ struct comm_cmd_get_fw_version fw_ver;
+ u16 out_size = sizeof(fw_ver);
+ int err;
+
+ if (!hwdev || !mgmt_ver)
+ return -EINVAL;
+
+ memset(&fw_ver, 0, sizeof(fw_ver));
+ fw_ver.fw_type = type;
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_GET_FW_VERSION,
+ &fw_ver, sizeof(fw_ver), &fw_ver,
+ &out_size, channel);
+ if (err || !out_size || fw_ver.head.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to get fw version, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, fw_ver.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ memcpy(mgmt_ver, fw_ver.ver, version_size);
+
+ return 0;
+}
+
+int hinic3_get_mgmt_version(void *hwdev, u8 *mgmt_ver, u8 version_size,
+ u16 channel)
+{
+ return hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_MPU, mgmt_ver,
+ version_size, channel);
+}
+EXPORT_SYMBOL(hinic3_get_mgmt_version);
+
+int hinic3_get_fw_version(void *hwdev, struct hinic3_fw_version *fw_ver,
+ u16 channel)
+{
+ int err;
+
+ if (!hwdev || !fw_ver)
+ return -EINVAL;
+
+ err = hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_MPU,
+ fw_ver->mgmt_ver, sizeof(fw_ver->mgmt_ver),
+ channel);
+ if (err != 0)
+ return err;
+
+ err = hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_NPU,
+ fw_ver->microcode_ver,
+ sizeof(fw_ver->microcode_ver), channel);
+ if (err != 0)
+ return err;
+
+ return hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_BOOT,
+ fw_ver->boot_ver, sizeof(fw_ver->boot_ver),
+ channel);
+}
+EXPORT_SYMBOL(hinic3_get_fw_version);
+
+static int hinic3_comm_features_nego(void *hwdev, u8 opcode, u64 *s_feature,
+ u16 size)
+{
+ struct comm_cmd_feature_nego feature_nego;
+ u16 out_size = sizeof(feature_nego);
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD)
+ return -EINVAL;
+
+ memset(&feature_nego, 0, sizeof(feature_nego));
+ feature_nego.func_id = hinic3_global_func_id(hwdev);
+ feature_nego.opcode = opcode;
+ if (opcode == MGMT_MSG_CMD_OP_SET) {
+ memcpy(feature_nego.s_feature, s_feature, (size * sizeof(u64)));
+ }
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_FEATURE_NEGO,
+ &feature_nego, sizeof(feature_nego),
+ &feature_nego, &out_size);
+ if (err || !out_size || feature_nego.head.status) {
+ sdk_err(dev->dev_hdl, "Failed to negotiate feature, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, feature_nego.head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (opcode == MGMT_MSG_CMD_OP_GET)
+ memcpy(s_feature, feature_nego.s_feature, (COMM_MAX_FEATURE_QWORD * sizeof(u64)));
+
+ return 0;
+}
+
+int hinic3_get_comm_features(void *hwdev, u64 *s_feature, u16 size)
+{
+ return hinic3_comm_features_nego(hwdev, MGMT_MSG_CMD_OP_GET, s_feature,
+ size);
+}
+
+int hinic3_set_comm_features(void *hwdev, u64 *s_feature, u16 size)
+{
+ return hinic3_comm_features_nego(hwdev, MGMT_MSG_CMD_OP_SET, s_feature,
+ size);
+}
+
+int hinic3_comm_channel_detect(struct hinic3_hwdev *hwdev)
+{
+ struct comm_cmd_channel_detect channel_detect_info;
+ u16 out_size = sizeof(channel_detect_info);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&channel_detect_info, 0, sizeof(channel_detect_info));
+ channel_detect_info.func_id = hinic3_global_func_id(hwdev);
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_CHANNEL_DETECT,
+ &channel_detect_info, sizeof(channel_detect_info),
+ &channel_detect_info, &out_size);
+ if ((channel_detect_info.head.status != HINIC3_MGMT_CMD_UNSUPPORTED &&
+ channel_detect_info.head.status) || err || !out_size) {
+ sdk_err(hwdev->dev_hdl, "Failed to send channel detect, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, channel_detect_info.head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_func_tmr_bitmap_set(void *hwdev, u16 func_id, bool en)
+{
+ struct comm_cmd_func_tmr_bitmap_op bitmap_op;
+ u16 out_size = sizeof(bitmap_op);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&bitmap_op, 0, sizeof(bitmap_op));
+ bitmap_op.func_id = func_id;
+ bitmap_op.opcode = en ? FUNC_TMR_BITMAP_ENABLE : FUNC_TMR_BITMAP_DISABLE;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_FUNC_TMR_BITMAT,
+ &bitmap_op, sizeof(bitmap_op),
+ &bitmap_op, &out_size);
+ if (err || !out_size || bitmap_op.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set timer bitmap, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, bitmap_op.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int ppf_ht_gpa_malloc(struct hinic3_hwdev *hwdev, struct hinic3_page_addr *pg0,
+ struct hinic3_page_addr *pg1)
+{
+ pg0->virt_addr = dma_zalloc_coherent(hwdev->dev_hdl,
+ HINIC3_HT_GPA_PAGE_SIZE,
+ &pg0->phys_addr, GFP_KERNEL);
+ if (!pg0->virt_addr) {
+ sdk_err(hwdev->dev_hdl, "Alloc pg0 page addr failed\n");
+ return -EFAULT;
+ }
+
+ pg1->virt_addr = dma_zalloc_coherent(hwdev->dev_hdl,
+ HINIC3_HT_GPA_PAGE_SIZE,
+ &pg1->phys_addr, GFP_KERNEL);
+ if (!pg1->virt_addr) {
+ sdk_err(hwdev->dev_hdl, "Alloc pg1 page addr failed\n");
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static void ppf_ht_gpa_free(struct hinic3_hwdev *hwdev, struct hinic3_page_addr *pg0,
+ struct hinic3_page_addr *pg1)
+{
+ if (pg0->virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HINIC3_HT_GPA_PAGE_SIZE, pg0->virt_addr,
+ (dma_addr_t)(pg0->phys_addr));
+ pg0->virt_addr = NULL;
+ }
+ if (pg1->virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HINIC3_HT_GPA_PAGE_SIZE, pg1->virt_addr,
+ (dma_addr_t)(pg1->phys_addr));
+ pg1->virt_addr = NULL;
+ }
+}
+
+static int ppf_ht_gpa_set(struct hinic3_hwdev *hwdev, struct hinic3_page_addr *pg0,
+ struct hinic3_page_addr *pg1)
+{
+ struct comm_cmd_ht_gpa ht_gpa_set;
+ u16 out_size = sizeof(ht_gpa_set);
+ int ret;
+
+ memset(&ht_gpa_set, 0, sizeof(ht_gpa_set));
+
+ ret = ppf_ht_gpa_malloc(hwdev, pg0, pg1);
+ if (ret)
+ return ret;
+
+ ht_gpa_set.host_id = hinic3_host_id(hwdev);
+ ht_gpa_set.page_pa0 = pg0->phys_addr;
+ ht_gpa_set.page_pa1 = pg1->phys_addr;
+ sdk_info(hwdev->dev_hdl, "PPF ht gpa set: page_addr0.pa=0x%llx, page_addr1.pa=0x%llx\n",
+ pg0->phys_addr, pg1->phys_addr);
+ ret = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_HT_GPA,
+ &ht_gpa_set, sizeof(ht_gpa_set),
+ &ht_gpa_set, &out_size);
+ if (ret || !out_size || ht_gpa_set.head.status) {
+ sdk_warn(hwdev->dev_hdl, "PPF ht gpa set failed, ret: %d, status: 0x%x, out_size: 0x%x\n",
+ ret, ht_gpa_set.head.status, out_size);
+ return -EFAULT;
+ }
+
+ hwdev->page_pa0.phys_addr = pg0->phys_addr;
+ hwdev->page_pa0.virt_addr = pg0->virt_addr;
+
+ hwdev->page_pa1.phys_addr = pg1->phys_addr;
+ hwdev->page_pa1.virt_addr = pg1->virt_addr;
+
+ return 0;
+}
+
+int hinic3_ppf_ht_gpa_init(void *dev)
+{
+ struct hinic3_page_addr page_addr0[HINIC3_PPF_HT_GPA_SET_RETRY_TIMES];
+ struct hinic3_page_addr page_addr1[HINIC3_PPF_HT_GPA_SET_RETRY_TIMES];
+ struct hinic3_hwdev *hwdev = dev;
+ int ret;
+ int i;
+ int j;
+ size_t size;
+
+ if (!dev) {
+ pr_err("Invalid para: dev is null.\n");
+ return -EINVAL;
+ }
+
+ size = HINIC3_PPF_HT_GPA_SET_RETRY_TIMES * sizeof(page_addr0[0]);
+ memset(page_addr0, 0, size);
+ memset(page_addr1, 0, size);
+
+ for (i = 0; i < HINIC3_PPF_HT_GPA_SET_RETRY_TIMES; i++) {
+ ret = ppf_ht_gpa_set(hwdev, &page_addr0[i], &page_addr1[i]);
+ if (ret == 0)
+ break;
+ }
+
+ for (j = 0; j < i; j++)
+ ppf_ht_gpa_free(hwdev, &page_addr0[j], &page_addr1[j]);
+
+ if (i >= HINIC3_PPF_HT_GPA_SET_RETRY_TIMES) {
+ sdk_err(hwdev->dev_hdl, "PPF ht gpa init failed, retry times: %d\n",
+ i);
+ return -EFAULT;
+ }
+
+ return 0;
+}
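
hinic3_ppf_ht_gpa_init above retries ppf_ht_gpa_set a bounded number of times, keeping each attempt's buffers in per-attempt slots so that the failed attempts can be released afterwards while the successful one is kept. A standalone sketch of that bounded-retry-with-cleanup pattern follows; try_setup, RETRY_TIMES and the buffer handling here are hypothetical stand-ins, not the driver's functions.

#include <stdio.h>
#include <stdlib.h>

#define RETRY_TIMES 3

/* Hypothetical attempt: allocates a buffer and "fails" until the last try. */
static int try_setup(int attempt, void **buf)
{
	*buf = malloc(64);
	if (*buf == NULL)
		return -1;
	return (attempt == RETRY_TIMES - 1) ? 0 : -1;
}

int main(void)
{
	void *bufs[RETRY_TIMES] = { NULL };
	int i, j, ret = -1;

	for (i = 0; i < RETRY_TIMES; i++) {
		ret = try_setup(i, &bufs[i]);
		if (ret == 0)
			break;
	}

	/* Release the buffers from the failed attempts, keep the successful one. */
	for (j = 0; j < i && j < RETRY_TIMES; j++)
		free(bufs[j]);

	if (i >= RETRY_TIMES) {
		printf("setup failed after %d retries\n", RETRY_TIMES);
		return 1;
	}

	printf("setup succeeded on attempt %d\n", i);
	free(bufs[i]); /* cleanup needed only in this sketch */
	return 0;
}
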
+
+void hinic3_ppf_ht_gpa_deinit(void *dev)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!dev) {
+ pr_err("Invalid para: dev is null.\n");
+ return;
+ }
+
+ if (hwdev->page_pa0.virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HINIC3_HT_GPA_PAGE_SIZE,
+ hwdev->page_pa0.virt_addr,
+ (dma_addr_t)(hwdev->page_pa0.phys_addr));
+ hwdev->page_pa0.virt_addr = NULL;
+ }
+
+ if (hwdev->page_pa1.virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HINIC3_HT_GPA_PAGE_SIZE,
+ hwdev->page_pa1.virt_addr,
+ (dma_addr_t)hwdev->page_pa1.phys_addr);
+ hwdev->page_pa1.virt_addr = NULL;
+ }
+}
+
+static int set_ppf_tmr_status(struct hinic3_hwdev *hwdev,
+ enum ppf_tmr_status status)
+{
+ struct comm_cmd_ppf_tmr_op op;
+ u16 out_size = sizeof(op);
+ int err = 0;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&op, 0, sizeof(op));
+
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return -EFAULT;
+
+ op.opcode = status;
+ op.ppf_id = hinic3_ppf_idx(hwdev);
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_TMR, &op,
+ sizeof(op), &op, &out_size);
+ if (err || !out_size || op.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set ppf timer, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, op.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_ppf_tmr_start(void *hwdev)
+{
+ int is_in_kexec;
+
+ if (!hwdev) {
+ pr_err("Hwdev pointer is NULL for starting ppf timer\n");
+ return -EINVAL;
+ }
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec != 0) {
+ pr_info("Skip starting ppt timer during kexec");
+ return 0;
+ }
+
+ return set_ppf_tmr_status(hwdev, HINIC_PPF_TMR_FLAG_START);
+}
+EXPORT_SYMBOL(hinic3_ppf_tmr_start);
+
+int hinic3_ppf_tmr_stop(void *hwdev)
+{
+ if (!hwdev) {
+ pr_err("Hwdev pointer is NULL for stop ppf timer\n");
+ return -EINVAL;
+ }
+
+ return set_ppf_tmr_status(hwdev, HINIC_PPF_TMR_FLAG_STOP);
+}
+EXPORT_SYMBOL(hinic3_ppf_tmr_stop);
+
+static int hi_vram_kalloc_align(struct hinic3_hwdev *hwdev, char *name,
+ u32 page_size, u32 page_num,
+ struct hinic3_dma_addr_align *mem_align)
+{
+ void *vaddr = NULL, *align_vaddr = NULL;
+ dma_addr_t paddr, align_paddr;
+ u64 real_size = page_size;
+ u64 align = page_size;
+
+ vaddr = (void *)hi_vram_kalloc(name, real_size);
+ if (vaddr == NULL) {
+ sdk_err(hwdev->dev_hdl, "vram kalloc failed, name:%s.\n", name);
+ return -ENOMEM;
+ }
+
+ paddr = (dma_addr_t)virt_to_phys(vaddr);
+ align_paddr = ALIGN(paddr, align);
+ /* already aligned, no need to reallocate */
+ if (align_paddr == paddr) {
+ align_vaddr = vaddr;
+ goto out;
+ }
+
+ hi_vram_kfree((void *)vaddr, name, real_size);
+
+ /* reallocate with extra space so the address can be aligned */
+ real_size = page_size + align;
+ vaddr = (void *)hi_vram_kalloc(name, real_size);
+ if (vaddr == NULL) {
+ sdk_err(hwdev->dev_hdl, "vram kalloc align failed, name:%s.\n", name);
+ return -ENOMEM;
+ }
+
+ paddr = (dma_addr_t)virt_to_phys(vaddr);
+ align_paddr = ALIGN(paddr, align);
+ align_vaddr = (void *)((u64)vaddr + (align_paddr - paddr));
+
+out:
+ mem_align->real_size = (u32)real_size;
+ mem_align->ori_vaddr = vaddr;
+ mem_align->ori_paddr = paddr;
+ mem_align->align_vaddr = align_vaddr;
+ mem_align->align_paddr = align_paddr;
+
+ return 0;
+}
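
hi_vram_kalloc_align above first allocates exactly page_size and only reallocates with page_size + align of slack when the returned physical address is not already aligned; the aligned pointer is then derived by rounding up inside the larger block. Below is a minimal standalone sketch of the round-up-inside-an-oversized-block technique, on virtual addresses only and with made-up names.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Round x up to the next multiple of a (a must be a power of 2). */
#define ALIGN_UP(x, a) (((x) + ((a) - 1)) & ~((uintptr_t)(a) - 1))

struct aligned_buf {
	void *ori;     /* pointer to pass to free() */
	void *aligned; /* pointer the caller actually uses */
};

/* Allocate size bytes whose address is a multiple of align. */
static int alloc_aligned(size_t size, size_t align, struct aligned_buf *buf)
{
	uintptr_t addr;

	/* Over-allocate so an aligned address is guaranteed to fit inside. */
	buf->ori = malloc(size + align);
	if (buf->ori == NULL)
		return -1;

	addr = ALIGN_UP((uintptr_t)buf->ori, align);
	buf->aligned = (void *)addr;
	return 0;
}

int main(void)
{
	struct aligned_buf buf;

	if (alloc_aligned(4096, 4096, &buf) != 0)
		return 1;

	printf("ori=%p aligned=%p\n", buf.ori, buf.aligned);
	free(buf.ori);
	return 0;
}
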
+
+static void mqm_eqm_free_page_mem(struct hinic3_hwdev *hwdev)
+{
+ u32 i;
+ struct hinic3_dma_addr_align *page_addr;
+ int is_use_vram = get_use_vram_flag();
+ struct mqm_eqm_vram_name_s *mqm_eqm_vram_name = hwdev->mqm_eqm_vram_name;
+
+ page_addr = hwdev->mqm_att.brm_srch_page_addr;
+
+ for (i = 0; i < hwdev->mqm_att.page_num; i++) {
+ if (is_use_vram != 0) {
+ hi_vram_kfree(page_addr->ori_vaddr, mqm_eqm_vram_name[i].vram_name,
+ page_addr->real_size);
+ } else {
+ hinic3_dma_free_coherent_align(hwdev->dev_hdl, page_addr);
+ }
+ page_addr->ori_vaddr = NULL;
+ page_addr++;
+ }
+
+ kfree(mqm_eqm_vram_name);
+ hwdev->mqm_eqm_vram_name = NULL;
+}
+
+static int mqm_eqm_try_alloc_mem(struct hinic3_hwdev *hwdev, u32 page_size,
+ u32 page_num)
+{
+ struct hinic3_dma_addr_align *page_addr = hwdev->mqm_att.brm_srch_page_addr;
+ int is_use_vram = get_use_vram_flag();
+ struct mqm_eqm_vram_name_s *mqm_eqm_vram_name = NULL;
+ u32 valid_num = 0;
+ u32 flag = 1;
+ u32 i = 0;
+ int err;
+ u16 func_id;
+
+ mqm_eqm_vram_name = kzalloc(sizeof(struct mqm_eqm_vram_name_s) * page_num, GFP_KERNEL);
+ if (mqm_eqm_vram_name == NULL) {
+ sdk_err(hwdev->dev_hdl, "mqm eqm alloc vram name failed.\n");
+ return -ENOMEM;
+ }
+
+ hwdev->mqm_eqm_vram_name = mqm_eqm_vram_name;
+ func_id = hinic3_global_func_id(hwdev);
+
+ for (i = 0; i < page_num; i++) {
+ if (is_use_vram != 0) {
+ snprintf(mqm_eqm_vram_name[i].vram_name,
+ VRAM_NAME_MAX_LEN, "%s%u%s%u",
+ VRAM_CQM_GLB_FUNC_BASE, func_id, VRAM_NIC_MQM, i);
+ err = hi_vram_kalloc_align(
+ hwdev, mqm_eqm_vram_name[i].vram_name,
+ page_size, page_num, page_addr);
+ } else {
+ err = hinic3_dma_zalloc_coherent_align(hwdev->dev_hdl, page_size,
+ page_size, GFP_KERNEL, page_addr);
+ }
+ if (err) {
+ flag = 0;
+ break;
+ }
+ valid_num++;
+ page_addr++;
+ }
+
+ hwdev->mqm_att.page_num = valid_num;
+
+ if (flag == 1) {
+ hwdev->mqm_att.page_size = page_size;
+ } else {
+ mqm_eqm_free_page_mem(hwdev);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
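+/*
+ * Allocate the MQM EQM backing pages, trying 2M pages first and falling back
+ * to 64K and then 4K pages. For each size, page_num is chunk_num rounded up
+ * to the number of 2KB chunks that fit in one page of that size.
+ */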
+static int mqm_eqm_alloc_page_mem(struct hinic3_hwdev *hwdev)
+{
+ int ret = 0;
+ u32 page_num;
+
+ /* apply for 2M page, page number is chunk_num/1024 */
+ page_num = (hwdev->mqm_att.chunk_num + 0x3ff) >> 0xa;
+ ret = mqm_eqm_try_alloc_mem(hwdev, 0x2 * 0x400 * 0x400, page_num);
+ if (ret == 0) {
+ sdk_info(hwdev->dev_hdl, "[mqm_eqm_init] Alloc page_size 2M OK\n");
+ return 0;
+ }
+
+ /* apply for 64KB page, page number is chunk_num/32 */
+ page_num = (hwdev->mqm_att.chunk_num + 0x1f) >> 0x5;
+ ret = mqm_eqm_try_alloc_mem(hwdev, 0x40 * 0x400, page_num);
+ if (ret == 0) {
+ sdk_info(hwdev->dev_hdl, "[mqm_eqm_init] Alloc page_size 64K OK\n");
+ return 0;
+ }
+
+ /* apply for 4KB page, page number is chunk_num/2 */
+ page_num = (hwdev->mqm_att.chunk_num + 1) >> 1;
+ ret = mqm_eqm_try_alloc_mem(hwdev, 0x4 * 0x400, page_num);
+ if (ret == 0) {
+ sdk_info(hwdev->dev_hdl, "[mqm_eqm_init] Alloc page_size 4K OK\n");
+ return 0;
+ }
+
+ return ret;
+}
+
+static int mqm_eqm_set_cfg_2_hw(struct hinic3_hwdev *hwdev, u8 valid)
+{
+ struct comm_cmd_eqm_cfg info_eqm_cfg;
+ u16 out_size = sizeof(info_eqm_cfg);
+ int err;
+
+ memset(&info_eqm_cfg, 0, sizeof(info_eqm_cfg));
+
+ info_eqm_cfg.host_id = hinic3_host_id(hwdev);
+ info_eqm_cfg.page_size = hwdev->mqm_att.page_size;
+ info_eqm_cfg.valid = valid;
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_MQM_CFG_INFO,
+ &info_eqm_cfg, sizeof(info_eqm_cfg),
+ &info_eqm_cfg, &out_size);
+ if (err || !out_size || info_eqm_cfg.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to init func table, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, info_eqm_cfg.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+#define EQM_DATA_BUF_SIZE 1024
+
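+/*
+ * Report the aligned GPA of every allocated page to the management CPU.
+ * GPAs are batched: once MQM_ATT_PAGE_NUM entries have been collected they
+ * are sent in one COMM_MGMT_CMD_SET_MQM_SRCH_GPA message, and any remainder
+ * is sent in a final message after the loop.
+ */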
+static int mqm_eqm_set_page_2_hw(struct hinic3_hwdev *hwdev)
+{
+ struct comm_cmd_eqm_search_gpa *info = NULL;
+ struct hinic3_dma_addr_align *page_addr = NULL;
+ void *send_buf = NULL;
+ u16 send_buf_size;
+ u32 i;
+ u64 *gpa_hi52 = NULL;
+ u64 gpa;
+ u32 num;
+ u32 start_idx;
+ int err = 0;
+ u16 out_size;
+ u8 cmd;
+
+ send_buf_size = sizeof(struct comm_cmd_eqm_search_gpa) +
+ EQM_DATA_BUF_SIZE;
+ send_buf = kzalloc(send_buf_size, GFP_KERNEL);
+ if (!send_buf) {
+ sdk_err(hwdev->dev_hdl, "Alloc virtual mem failed\r\n");
+ return -EFAULT;
+ }
+
+ page_addr = hwdev->mqm_att.brm_srch_page_addr;
+ info = (struct comm_cmd_eqm_search_gpa *)send_buf;
+
+ gpa_hi52 = info->gpa_hi52;
+ num = 0;
+ start_idx = 0;
+ cmd = COMM_MGMT_CMD_SET_MQM_SRCH_GPA;
+ for (i = 0; i < hwdev->mqm_att.page_num; i++) {
+ /* GPA is 4K-aligned; store the upper 52 bits (gpa >> 12) */
+ gpa = page_addr->align_paddr >> 12;
+ gpa_hi52[num] = gpa;
+ num++;
+ if (num == MQM_ATT_PAGE_NUM) {
+ info->num = num;
+ info->start_idx = start_idx;
+ info->host_id = hinic3_host_id(hwdev);
+ out_size = send_buf_size;
+ err = comm_msg_to_mgmt_sync(hwdev, cmd, info,
+ (u16)send_buf_size,
+ info, &out_size);
+ if (MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size,
+ info->head.status)) {
+ sdk_err(hwdev->dev_hdl, "Set mqm srch gpa fail, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, info->head.status, out_size);
+ err = -EFAULT;
+ goto set_page_2_hw_end;
+ }
+
+ gpa_hi52 = info->gpa_hi52;
+ num = 0;
+ start_idx = i + 1;
+ }
+ page_addr++;
+ }
+
+ if (num != 0) {
+ info->num = num;
+ info->start_idx = start_idx;
+ info->host_id = hinic3_host_id(hwdev);
+ out_size = send_buf_size;
+ err = comm_msg_to_mgmt_sync(hwdev, cmd, info,
+ (u16)send_buf_size, info,
+ &out_size);
+ if (MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size,
+ info->head.status)) {
+ sdk_err(hwdev->dev_hdl, "Set mqm srch gpa fail, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, info->head.status, out_size);
+ err = -EFAULT;
+ goto set_page_2_hw_end;
+ }
+ }
+
+set_page_2_hw_end:
+ kfree(send_buf);
+ return err;
+}
+
+static int get_eqm_num(struct hinic3_hwdev *hwdev, struct comm_cmd_get_eqm_num *info_eqm_fix)
+{
+ int ret;
+ u16 len = sizeof(*info_eqm_fix);
+
+ memset(info_eqm_fix, 0, sizeof(*info_eqm_fix));
+
+ ret = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_GET_MQM_FIX_INFO,
+ info_eqm_fix, sizeof(*info_eqm_fix), info_eqm_fix, &len);
+ if (ret || !len || info_eqm_fix->head.status) {
+ sdk_err(hwdev->dev_hdl, "Get mqm fix info fail,err: %d, status: 0x%x, out_size: 0x%x\n",
+ ret, info_eqm_fix->head.status, len);
+ return -EFAULT;
+ }
+
+ sdk_info(hwdev->dev_hdl, "get chunk_num: 0x%x, search_gpa_num: 0x%08x\n",
+ info_eqm_fix->chunk_num, info_eqm_fix->search_gpa_num);
+
+ return 0;
+}
+
+static int mqm_eqm_init(struct hinic3_hwdev *hwdev)
+{
+ struct comm_cmd_get_eqm_num info_eqm_fix;
+ int ret;
+ int is_in_kexec;
+
+ if (hwdev->hwif->attr.func_type != TYPE_PPF)
+ return 0;
+
+ ret = get_eqm_num(hwdev, &info_eqm_fix);
+ if (ret)
+ return ret;
+
+ if (!(info_eqm_fix.chunk_num))
+ return 0;
+
+ hwdev->mqm_att.chunk_num = info_eqm_fix.chunk_num;
+ hwdev->mqm_att.search_gpa_num = info_eqm_fix.search_gpa_num;
+ hwdev->mqm_att.page_size = 0;
+ hwdev->mqm_att.page_num = 0;
+
+ hwdev->mqm_att.brm_srch_page_addr =
+ kcalloc(hwdev->mqm_att.chunk_num, sizeof(struct hinic3_dma_addr_align), GFP_KERNEL);
+ if (!(hwdev->mqm_att.brm_srch_page_addr)) {
+ sdk_err(hwdev->dev_hdl, "Alloc virtual mem failed\r\n");
+ return -EFAULT;
+ }
+
+ ret = mqm_eqm_alloc_page_mem(hwdev);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "Alloc eqm page mem failed\r\n");
+ goto err_page;
+ }
+
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec == 0) {
+ ret = mqm_eqm_set_page_2_hw(hwdev);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "Set page to hw failed\r\n");
+ goto err_ecmd;
+ }
+ } else {
+ sdk_info(hwdev->dev_hdl, "Mqm db don't set to chip when os hot replace.\r\n");
+ }
+
+ ret = mqm_eqm_set_cfg_2_hw(hwdev, 1);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "Set page to hw failed\r\n");
+ goto err_ecmd;
+ }
+
+ sdk_info(hwdev->dev_hdl, "ppf_ext_db_init ok\r\n");
+
+ return 0;
+
+err_ecmd:
+ mqm_eqm_free_page_mem(hwdev);
+
+err_page:
+ kfree(hwdev->mqm_att.brm_srch_page_addr);
+
+ return ret;
+}
+
+static void mqm_eqm_deinit(struct hinic3_hwdev *hwdev)
+{
+ int ret;
+
+ if (hwdev->hwif->attr.func_type != TYPE_PPF)
+ return;
+
+ if (!(hwdev->mqm_att.chunk_num))
+ return;
+
+ mqm_eqm_free_page_mem(hwdev);
+ kfree(hwdev->mqm_att.brm_srch_page_addr);
+
+ ret = mqm_eqm_set_cfg_2_hw(hwdev, 0);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "Set mqm eqm cfg to chip fail! err: %d\n",
+ ret);
+ return;
+ }
+
+ hwdev->mqm_att.chunk_num = 0;
+ hwdev->mqm_att.search_gpa_num = 0;
+ hwdev->mqm_att.page_num = 0;
+ hwdev->mqm_att.page_size = 0;
+}
+
+int hinic3_ppf_ext_db_init(struct hinic3_hwdev *hwdev)
+{
+ int ret;
+
+ ret = mqm_eqm_init(hwdev);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "MQM eqm init fail!\n");
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_ppf_ext_db_deinit(struct hinic3_hwdev *hwdev)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hwdev->hwif->attr.func_type != TYPE_PPF)
+ return 0;
+
+ mqm_eqm_deinit(hwdev);
+
+ return 0;
+}
+
+#define HINIC3_FLR_TIMEOUT 1000
+
+static enum hinic3_wait_return check_flr_finish_handler(void *priv_data)
+{
+ struct hinic3_hwif *hwif = priv_data;
+ enum hinic3_pf_status status;
+
+ status = hinic3_get_pf_status(hwif);
+ if (status == HINIC3_PF_STATUS_FLR_FINISH_FLAG) {
+ hinic3_set_pf_status(hwif, HINIC3_PF_STATUS_ACTIVE_FLAG);
+ return WAIT_PROCESS_CPL;
+ }
+
+ return WAIT_PROCESS_WAITING;
+}
+
+static int wait_for_flr_finish(struct hinic3_hwif *hwif)
+{
+ return hinic3_wait_for_timeout(hwif, check_flr_finish_handler,
+ HINIC3_FLR_TIMEOUT, 0xa * USEC_PER_MSEC);
+}
+
+#define HINIC3_WAIT_CMDQ_IDLE_TIMEOUT 5000
+
+static enum hinic3_wait_return check_cmdq_stop_handler(void *priv_data)
+{
+ struct hinic3_hwdev *hwdev = priv_data;
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ enum hinic3_cmdq_type cmdq_type;
+
+ /* Stop waiting if the card is no longer present */
+ if (!hwdev->chip_present_flag)
+ return WAIT_PROCESS_CPL;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ if (!hinic3_cmdq_idle(&cmdqs->cmdq[cmdq_type]))
+ return WAIT_PROCESS_WAITING;
+ }
+
+ return WAIT_PROCESS_CPL;
+}
+
+static int wait_cmdq_stop(struct hinic3_hwdev *hwdev)
+{
+ enum hinic3_cmdq_type cmdq_type;
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ int err;
+
+ if (!(cmdqs->status & HINIC3_CMDQ_ENABLE))
+ return 0;
+
+ cmdqs->status &= ~HINIC3_CMDQ_ENABLE;
+
+ err = hinic3_wait_for_timeout(hwdev, check_cmdq_stop_handler,
+ HINIC3_WAIT_CMDQ_IDLE_TIMEOUT,
+ USEC_PER_MSEC);
+ if (err == 0)
+ return 0;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ if (!hinic3_cmdq_idle(&cmdqs->cmdq[cmdq_type]))
+ sdk_err(hwdev->dev_hdl, "Cmdq %d is busy\n", cmdq_type);
+ }
+
+ cmdqs->status |= HINIC3_CMDQ_ENABLE;
+
+ return err;
+}
+
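+/*
+ * Flush in-flight RX/TX resources for this function: stop the command queues,
+ * disable and flush the doorbell, tell the management CPU to start flushing,
+ * wait for the firmware FLR to finish (PF only), then re-enable the doorbell
+ * and re-initialize the cmdq contexts. Individual failures are reported but
+ * the remaining steps are still attempted.
+ */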
+static int hinic3_rx_tx_flush(struct hinic3_hwdev *hwdev, u16 channel, bool wait_io)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+ struct comm_cmd_clear_doorbell clear_db;
+ struct comm_cmd_clear_resource clr_res;
+ u16 out_size;
+ int err;
+ int ret = 0;
+
+ if ((HINIC3_FUNC_TYPE(hwdev) != TYPE_VF) && wait_io)
+ msleep(100); /* wait 100 ms for ucode to stop I/O */
+
+ err = wait_cmdq_stop(hwdev);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "CMDQ is still working, please check CMDQ timeout value is reasonable\n");
+ ret = err;
+ }
+
+ hinic3_disable_doorbell(hwif);
+
+ out_size = sizeof(clear_db);
+ memset(&clear_db, 0, sizeof(clear_db));
+ clear_db.func_id = HINIC3_HWIF_GLOBAL_IDX(hwif);
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_FLUSH_DOORBELL,
+ &clear_db, sizeof(clear_db),
+ &clear_db, &out_size, channel);
+ if (err != 0 || !out_size || clear_db.head.status) {
+ sdk_warn(hwdev->dev_hdl, "Failed to flush doorbell, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, clear_db.head.status, out_size, channel);
+ if (err != 0)
+ ret = err;
+ else
+ ret = -EFAULT;
+ }
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_pf_status(hwif, HINIC3_PF_STATUS_FLR_START_FLAG);
+ else
+ msleep(100); /* wait 100 ms for ucode to stop I/O */
+
+ memset(&clr_res, 0, sizeof(clr_res));
+ clr_res.func_id = HINIC3_HWIF_GLOBAL_IDX(hwif);
+
+ err = hinic3_msg_to_mgmt_no_ack(hwdev, HINIC3_MOD_COMM,
+ COMM_MGMT_CMD_START_FLUSH, &clr_res,
+ sizeof(clr_res), channel);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "Failed to notice flush message, err: %d, channel: 0x%x\n",
+ err, channel);
+ ret = err;
+ }
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF) {
+ err = wait_for_flr_finish(hwif);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "Wait firmware FLR timeout\n");
+ ret = err;
+ }
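+/* Each slave host owns one bit, indexed by host_id, in the slave status register. */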
+ }
+
+ hinic3_enable_doorbell(hwif);
+
+ err = hinic3_reinit_cmdq_ctxts(hwdev);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "Failed to reinit cmdq\n");
+ ret = err;
+ }
+
+ return ret;
+}
+
+int hinic3_func_rx_tx_flush(void *hwdev, u16 channel, bool wait_io)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (dev->chip_present_flag == 0)
+ return 0;
+
+ return hinic3_rx_tx_flush(dev, channel, wait_io);
+}
+EXPORT_SYMBOL(hinic3_func_rx_tx_flush);
+
+int hinic3_get_board_info(void *hwdev, struct hinic3_board_info *info,
+ u16 channel)
+{
+ struct comm_cmd_board_info board_info;
+ u16 out_size = sizeof(board_info);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&board_info, 0, sizeof(board_info));
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_GET_BOARD_INFO,
+ &board_info, sizeof(board_info),
+ &board_info, &out_size, channel);
+ if (err || board_info.head.status || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get board info, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, board_info.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ memcpy(info, &board_info.info, sizeof(*info));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_board_info);
+
+int hinic3_get_hw_pf_infos(void *hwdev, struct hinic3_hw_pf_infos *infos,
+ u16 channel)
+{
+ struct comm_cmd_hw_pf_infos *pf_infos = NULL;
+ u16 out_size = sizeof(*pf_infos);
+ int err = 0;
+
+ if (!hwdev || !infos)
+ return -EINVAL;
+
+ pf_infos = kzalloc(sizeof(*pf_infos), GFP_KERNEL);
+ if (!pf_infos)
+ return -ENOMEM;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_GET_HW_PF_INFOS,
+ pf_infos, sizeof(*pf_infos),
+ pf_infos, &out_size, channel);
+ if (pf_infos->head.status || err || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get hw pf information, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, pf_infos->head.status, out_size, channel);
+ err = -EIO;
+ goto free_buf;
+ }
+
+ memcpy(infos, &pf_infos->infos, sizeof(struct hinic3_hw_pf_infos));
+
+free_buf:
+ kfree(pf_infos);
+ return err;
+}
+EXPORT_SYMBOL(hinic3_get_hw_pf_infos);
+
+int hinic3_get_global_attr(void *hwdev, struct comm_global_attr *attr)
+{
+ struct comm_cmd_get_glb_attr get_attr;
+ u16 out_size = sizeof(get_attr);
+ int err = 0;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_GET_GLOBAL_ATTR,
+ &get_attr, sizeof(get_attr), &get_attr,
+ &out_size);
+ if (err || !out_size || get_attr.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get global attribute, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, get_attr.head.status, out_size);
+ return -EIO;
+ }
+
+ memcpy(attr, &get_attr.attr, sizeof(struct comm_global_attr));
+
+ return 0;
+}
+
+int hinic3_set_func_svc_used_state(void *hwdev, u16 svc_type, u8 state,
+ u16 channel)
+{
+ struct comm_cmd_func_svc_used_state used_state;
+ u16 out_size = sizeof(used_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&used_state, 0, sizeof(used_state));
+ used_state.func_id = hinic3_global_func_id(hwdev);
+ used_state.svc_type = svc_type;
+ used_state.used_state = state;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev,
+ COMM_MGMT_CMD_SET_FUNC_SVC_USED_STATE,
+ &used_state, sizeof(used_state),
+ &used_state, &out_size, channel);
+ if (err || !out_size || used_state.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set func service used state, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n\n",
+ err, used_state.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_func_svc_used_state);
+
+int hinic3_get_sml_table_info(void *hwdev, u32 tbl_id, u8 *node_id, u8 *instance_id)
+{
+ struct sml_table_id_info sml_table[TABLE_INDEX_MAX];
+ struct comm_cmd_get_sml_tbl_data sml_tbl;
+ u16 out_size = sizeof(sml_tbl);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (tbl_id >= TABLE_INDEX_MAX) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl, "sml table index out of range [0, %u]",
+ TABLE_INDEX_MAX - 1);
+ return -EINVAL;
+ }
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_GET_SML_TABLE_INFO,
+ &sml_tbl, sizeof(sml_tbl), &sml_tbl, &out_size);
+ if (err || !out_size || sml_tbl.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get sml table information, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, sml_tbl.head.status, out_size);
+ return -EIO;
+ }
+
+ memcpy(sml_table, sml_tbl.tbl_data, sizeof(sml_table));
+
+ *node_id = sml_table[tbl_id].node_id;
+ *instance_id = sml_table[tbl_id].instance_id;
+
+ return 0;
+}
+
+int hinic3_activate_firmware(void *hwdev, u8 cfg_index)
+{
+ struct cmd_active_firmware activate_msg;
+ u16 out_size = sizeof(activate_msg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_PF)
+ return -EOPNOTSUPP;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&activate_msg, 0, sizeof(activate_msg));
+ activate_msg.index = cfg_index;
+
+ err = hinic3_pf_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, COMM_MGMT_CMD_ACTIVE_FW,
+ &activate_msg, sizeof(activate_msg),
+ &activate_msg, &out_size, FW_UPDATE_MGMT_TIMEOUT);
+ if (err || !out_size || activate_msg.msg_head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to activate firmware, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, activate_msg.msg_head.status, out_size);
+ err = activate_msg.msg_head.status ? activate_msg.msg_head.status : -EIO;
+ return err;
+ }
+
+ return 0;
+}
+
+int hinic3_switch_config(void *hwdev, u8 cfg_index)
+{
+ struct cmd_switch_cfg switch_cfg;
+ u16 out_size = sizeof(switch_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_PF)
+ return -EOPNOTSUPP;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&switch_cfg, 0, sizeof(switch_cfg));
+ switch_cfg.index = cfg_index;
+
+ err = hinic3_pf_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, COMM_MGMT_CMD_SWITCH_CFG,
+ &switch_cfg, sizeof(switch_cfg),
+ &switch_cfg, &out_size, FW_UPDATE_MGMT_TIMEOUT);
+ if (err || !out_size || switch_cfg.msg_head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to switch cfg, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, switch_cfg.msg_head.status, out_size);
+ err = switch_cfg.msg_head.status ? switch_cfg.msg_head.status : -EIO;
+ return err;
+ }
+
+ return 0;
+}
+
+bool hinic3_is_optical_module_mode(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (dev->board_info.board_type == BOARD_TYPE_STRG_4X25G_COMSTORAGE ||
+ dev->board_info.board_type == BOARD_TYPE_CAL_4X25G_COMSTORAGE ||
+ dev->board_info.board_type == BOARD_TYPE_CAL_2X100G_TCE_BACKPLANE)
+ return false;
+
+ return true;
+}
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h
new file mode 100644
index 0000000..e031ec4
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_COMM_H
+#define HINIC3_COMM_H
+
+#include <linux/types.h>
+
+#include "mpu_inband_cmd_defs.h"
+#include "hinic3_hwdev.h"
+
+#define MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size, status) \
+ ((err) || (status) || !(out_size))
+
+#define HINIC3_PAGE_SIZE_HW(pg_size) ((u8)ilog2((u32)((pg_size) >> 12)))
+
+enum func_tmr_bitmap_status {
+ FUNC_TMR_BITMAP_DISABLE,
+ FUNC_TMR_BITMAP_ENABLE,
+};
+
+enum ppf_tmr_status {
+ HINIC_PPF_TMR_FLAG_STOP,
+ HINIC_PPF_TMR_FLAG_START,
+};
+
+#define HINIC3_HT_GPA_PAGE_SIZE 4096UL
+#define HINIC3_PPF_HT_GPA_SET_RETRY_TIMES 10
+
+int hinic3_set_cmdq_depth(void *hwdev, u16 cmdq_depth);
+
+int hinic3_set_cmdq_ctxt(struct hinic3_hwdev *hwdev, u8 cmdq_id,
+ struct cmdq_ctxt_info *ctxt);
+
+int hinic3_ppf_ext_db_init(struct hinic3_hwdev *hwdev);
+
+int hinic3_ppf_ext_db_deinit(struct hinic3_hwdev *hwdev);
+
+int hinic3_set_ceq_ctrl_reg(struct hinic3_hwdev *hwdev, u16 q_id,
+ u32 ctrl0, u32 ctrl1);
+
+int hinic3_set_dma_attr_tbl(struct hinic3_hwdev *hwdev, u8 entry_idx, u8 st, u8 at, u8 ph,
+ u8 no_snooping, u8 tph_en);
+
+int hinic3_get_comm_features(void *hwdev, u64 *s_feature, u16 size);
+int hinic3_set_comm_features(void *hwdev, u64 *s_feature, u16 size);
+
+int hinic3_comm_channel_detect(struct hinic3_hwdev *hwdev);
+
+int hinic3_get_global_attr(void *hwdev, struct comm_global_attr *attr);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c
new file mode 100644
index 0000000..722fecd
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c
@@ -0,0 +1,530 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "mpu_inband_cmd.h"
+#include "hinic3_hw_mt.h"
+
+#define HINIC3_CMDQ_BUF_MAX_SIZE 2048U
+#define DW_WIDTH 4
+
+#define MSG_MAX_IN_SIZE (2048 * 1024)
+#define MSG_MAX_OUT_SIZE (2048 * 1024)
+
+#define API_CSR_MAX_RD_LEN (4 * 1024 * 1024)
+
+/* completion timeout interval, unit is millisecond */
+#define MGMT_MSG_UPDATE_TIMEOUT 200000U
+
+void free_buff_in(void *hwdev, const struct msg_module *nt_msg, void *buf_in)
+{
+ if (!buf_in)
+ return;
+
+ if (nt_msg->module == SEND_TO_NPU)
+ hinic3_free_cmd_buf(hwdev, buf_in);
+ else
+ kfree(buf_in);
+}
+
+void free_buff_out(void *hwdev, struct msg_module *nt_msg,
+ void *buf_out)
+{
+ if (!buf_out)
+ return;
+
+ if (nt_msg->module == SEND_TO_NPU &&
+ !nt_msg->npu_cmd.direct_resp)
+ hinic3_free_cmd_buf(hwdev, buf_out);
+ else
+ kfree(buf_out);
+}
+
+int alloc_buff_in(void *hwdev, struct msg_module *nt_msg,
+ u32 in_size, void **buf_in)
+{
+ void *msg_buf = NULL;
+
+ if (!in_size)
+ return 0;
+
+ if (nt_msg->module == SEND_TO_NPU) {
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+
+ if (in_size > HINIC3_CMDQ_BUF_MAX_SIZE) {
+ pr_err("Cmdq in size(%u) more than 2KB\n", in_size);
+ return -ENOMEM;
+ }
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ pr_err("Alloc cmdq cmd buffer failed in %s\n",
+ __func__);
+ return -ENOMEM;
+ }
+ msg_buf = cmd_buf->buf;
+ *buf_in = (void *)cmd_buf;
+ cmd_buf->size = (u16)in_size;
+ } else {
+ if (in_size > MSG_MAX_IN_SIZE) {
+ pr_err("In size(%u) more than 2M\n", in_size);
+ return -ENOMEM;
+ }
+ msg_buf = kzalloc(in_size, GFP_KERNEL);
+ *buf_in = msg_buf;
+ }
+ if (!(*buf_in)) {
+ pr_err("Alloc buffer in failed\n");
+ return -ENOMEM;
+ }
+
+ if (copy_from_user(msg_buf, nt_msg->in_buf, in_size)) {
+ pr_err("%s:%d: Copy from user failed\n",
+ __func__, __LINE__);
+ free_buff_in(hwdev, nt_msg, *buf_in);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int alloc_buff_out(void *hwdev, struct msg_module *nt_msg,
+ u32 out_size, void **buf_out)
+{
+ if (!out_size)
+ return 0;
+
+ if (nt_msg->module == SEND_TO_NPU &&
+ !nt_msg->npu_cmd.direct_resp) {
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+
+ if (out_size > HINIC3_CMDQ_BUF_MAX_SIZE) {
+ pr_err("Cmdq out size(%u) more than 2KB\n", out_size);
+ return -ENOMEM;
+ }
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ *buf_out = (void *)cmd_buf;
+ } else {
+ if (out_size > MSG_MAX_OUT_SIZE) {
+ pr_err("out size(%u) more than 2M\n", out_size);
+ return -ENOMEM;
+ }
+ *buf_out = kzalloc(out_size, GFP_KERNEL);
+ }
+ if (!(*buf_out)) {
+ pr_err("Alloc buffer out failed\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+int copy_buf_out_to_user(struct msg_module *nt_msg,
+ u32 out_size, void *buf_out)
+{
+ int ret = 0;
+ void *msg_out = NULL;
+
+ if (out_size == 0 || !buf_out)
+ return 0;
+
+ if (nt_msg->module == SEND_TO_NPU &&
+ !nt_msg->npu_cmd.direct_resp)
+ msg_out = ((struct hinic3_cmd_buf *)buf_out)->buf;
+ else
+ msg_out = buf_out;
+
+ if (copy_to_user(nt_msg->out_buf, msg_out, out_size))
+ ret = -EFAULT;
+
+ return ret;
+}
+
+int get_func_type(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u16 func_type;
+
+ if (*out_size != sizeof(u16) || !buf_out) {
+ pr_err("Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, sizeof(u16));
+ return -EFAULT;
+ }
+
+ func_type = hinic3_func_type(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+
+ *(u16 *)buf_out = func_type;
+ return 0;
+}
+
+int get_func_id(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u16 func_id;
+
+ if (*out_size != sizeof(u16) || !buf_out) {
+ pr_err("Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, sizeof(u16));
+ return -EFAULT;
+ }
+
+ func_id = hinic3_global_func_id(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+ *(u16 *)buf_out = func_id;
+
+ return 0;
+}
+
+int get_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ return hinic3_dbg_get_hw_stats(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ buf_out, out_size);
+}
+
+int clear_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u16 size;
+
+ size = hinic3_dbg_clear_hw_stats(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+ if (*out_size != size) {
+ pr_err("Unexpect out buf size from user :%u, expect: %u\n",
+ *out_size, size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int get_self_test_result(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u32 result;
+
+ if (*out_size != sizeof(u32) || !buf_out) {
+ pr_err("Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, sizeof(u32));
+ return -EFAULT;
+ }
+
+ result = hinic3_get_self_test_result(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+ *(u32 *)buf_out = result;
+
+ return 0;
+}
+
+int get_chip_faults_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u32 offset = 0;
+ struct nic_cmd_chip_fault_stats *fault_info = NULL;
+
+ if (!buf_in || !buf_out || *out_size != sizeof(*fault_info) ||
+ in_size != sizeof(*fault_info)) {
+ pr_err("Unexpect out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*fault_info));
+ return -EFAULT;
+ }
+ fault_info = (struct nic_cmd_chip_fault_stats *)buf_in;
+ offset = fault_info->offset;
+
+ fault_info = (struct nic_cmd_chip_fault_stats *)buf_out;
+ hinic3_get_chip_fault_stats(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ fault_info->chip_fault_stats, offset);
+
+ return 0;
+}
+
+static u32 get_up_timeout_val(enum hinic3_mod_type mod, u16 cmd)
+{
+ if (mod == HINIC3_MOD_COMM &&
+ (cmd == COMM_MGMT_CMD_UPDATE_FW ||
+ cmd == COMM_MGMT_CMD_UPDATE_BIOS ||
+ cmd == COMM_MGMT_CMD_ACTIVE_FW ||
+ cmd == COMM_MGMT_CMD_SWITCH_CFG ||
+ cmd == COMM_MGMT_CMD_HOT_ACTIVE_FW))
+ return MGMT_MSG_UPDATE_TIMEOUT;
+
+ return 0; /* use default mbox/apichain timeout time */
+}
+
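+/*
+ * Forward a management-plane request to the MPU using the API type selected
+ * by the caller: mailbox, CLP, or API chain (the API chain path falls back to
+ * the mailbox on the SPU host). Firmware update/activation commands get an
+ * extended timeout via get_up_timeout_val().
+ */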
+int send_to_mpu(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ enum hinic3_mod_type mod;
+ u32 timeout;
+ int ret = 0;
+ u16 cmd;
+
+ mod = (enum hinic3_mod_type)nt_msg->mpu_cmd.mod;
+ cmd = nt_msg->mpu_cmd.cmd;
+
+ if (nt_msg->mpu_cmd.api_type == API_TYPE_MBOX || nt_msg->mpu_cmd.api_type == API_TYPE_CLP) {
+ timeout = get_up_timeout_val(mod, cmd);
+
+ if (nt_msg->mpu_cmd.api_type == API_TYPE_MBOX)
+ ret = hinic3_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in, (u16)in_size,
+ buf_out, (u16 *)(u8 *)out_size, timeout,
+ HINIC3_CHANNEL_DEFAULT);
+ else
+ ret = hinic3_clp_to_mgmt(hwdev, mod, cmd, buf_in, (u16)in_size,
+ buf_out, (u16 *)out_size);
+ if (ret) {
+ pr_err("Message to mgmt cpu return fail, mod: %d, cmd: %u\n", mod, cmd);
+ return ret;
+ }
+ } else if (nt_msg->mpu_cmd.api_type == API_TYPE_API_CHAIN_BYPASS) {
+ pr_err("Unsupported api_type %u\n", nt_msg->mpu_cmd.api_type);
+ return -EINVAL;
+ } else if (nt_msg->mpu_cmd.api_type == API_TYPE_API_CHAIN_TO_MPU) {
+ timeout = get_up_timeout_val(mod, cmd);
+ if (hinic3_pcie_itf_id(hwdev) != SPU_HOST_ID)
+ ret = hinic3_msg_to_mgmt_api_chain_sync(hwdev, mod, cmd, buf_in,
+ (u16)in_size, buf_out,
+ (u16 *)(u8 *)out_size, timeout);
+ else
+ ret = hinic3_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in, (u16)in_size,
+ buf_out, (u16 *)(u8 *)out_size, timeout,
+ HINIC3_CHANNEL_DEFAULT);
+ if (ret) {
+ pr_err("Message to mgmt api chain cpu return fail, mod: %d, cmd: %u\n",
+ mod, cmd);
+ return ret;
+ }
+ } else {
+ pr_err("Unsupported api_type %u\n", nt_msg->mpu_cmd.api_type);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+int send_to_npu(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ int ret = 0;
+ u8 cmd;
+ enum hinic3_mod_type mod;
+
+ mod = (enum hinic3_mod_type)nt_msg->npu_cmd.mod;
+ cmd = nt_msg->npu_cmd.cmd;
+
+ if (nt_msg->npu_cmd.direct_resp) {
+ ret = hinic3_cmdq_direct_resp(hwdev, mod, cmd,
+ buf_in, buf_out, 0,
+ HINIC3_CHANNEL_DEFAULT);
+ if (ret)
+ pr_err("Send direct cmdq failed, err: %d\n", ret);
+ } else {
+ ret = hinic3_cmdq_detail_resp(hwdev, mod, cmd, buf_in, buf_out,
+ NULL, 0, HINIC3_CHANNEL_DEFAULT);
+ if (ret)
+ pr_err("Send detail cmdq failed, err: %d\n", ret);
+ }
+
+ return ret;
+}
+
+static int sm_rd16(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u16 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd16(hwdev, node, instance, id, &val1);
+ if (ret != 0) {
+ pr_err("Get sm ctr information (16 bits)failed!\n");
+ val1 = 0xffff;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd16_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u16 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd16_clear(hwdev, node, instance, id, &val1);
+ if (ret != 0) {
+ pr_err("Get sm ctr clear information (16 bits)failed!\n");
+ val1 = 0xffff;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd32(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u32 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd32(hwdev, node, instance, id, &val1);
+ if (ret) {
+ pr_err("Get sm ctr information (32 bits)failed!\n");
+ val1 = ~0;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd32_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u32 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd32_clear(hwdev, node, instance, id, &val1);
+ if (ret) {
+ pr_err("Get sm ctr clear information(32 bits) failed!\n");
+ val1 = ~0;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd64_pair(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1 = 0, val2 = 0;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64_pair(hwdev, node, instance, id, &val1, &val2);
+ if (ret) {
+ pr_err("Get sm ctr information (64 bits pair)failed!\n");
+ val1 = ~0;
+ val2 = ~0;
+ }
+
+ buf_out->val1 = val1;
+ buf_out->val2 = val2;
+
+ return ret;
+}
+
+static int sm_rd64_pair_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1 = 0;
+ u64 val2 = 0;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64_pair_clear(hwdev, node, instance, id, &val1,
+ &val2);
+ if (ret) {
+ pr_err("Get sm ctr clear information(64 bits pair) failed!\n");
+ val1 = ~0;
+ val2 = ~0;
+ }
+
+ buf_out->val1 = val1;
+ buf_out->val2 = val2;
+
+ return ret;
+}
+
+static int sm_rd64(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64(hwdev, node, instance, id, &val1);
+ if (ret) {
+ pr_err("Get sm ctr information (64 bits)failed!\n");
+ val1 = ~0;
+ }
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd64_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64_clear(hwdev, node, instance, id, &val1);
+ if (ret) {
+ pr_err("Get sm ctr clear information(64 bits) failed!\n");
+ val1 = ~0;
+ }
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+typedef int (*sm_module)(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out);
+
+struct sm_module_handle {
+ enum sm_cmd_type sm_cmd_name;
+ sm_module sm_func;
+};
+
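+/* Dispatch table mapping each SM counter command to its read handler. */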
+const struct sm_module_handle sm_module_cmd_handle[] = {
+ {SM_CTR_RD16, sm_rd16},
+ {SM_CTR_RD32, sm_rd32},
+ {SM_CTR_RD64_PAIR, sm_rd64_pair},
+ {SM_CTR_RD64, sm_rd64},
+ {SM_CTR_RD16_CLEAR, sm_rd16_clear},
+ {SM_CTR_RD32_CLEAR, sm_rd32_clear},
+ {SM_CTR_RD64_PAIR_CLEAR, sm_rd64_pair_clear},
+ {SM_CTR_RD64_CLEAR, sm_rd64_clear}
+};
+
+int send_to_sm(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct sm_in_st *sm_in = buf_in;
+ struct sm_out_st *sm_out = buf_out;
+ u32 msg_formate;
+ int index, num_cmds = ARRAY_LEN(sm_module_cmd_handle);
+ int ret = 0;
+
+ if (!nt_msg || !buf_in || !buf_out ||
+ in_size != sizeof(*sm_in) || *out_size != sizeof(*sm_out)) {
+ pr_err("Unexpect out buf size :%u, in buf size: %u\n",
+ *out_size, in_size);
+ return -EINVAL;
+ }
+ msg_formate = nt_msg->msg_formate;
+
+ for (index = 0; index < num_cmds; index++) {
+ if (msg_formate != sm_module_cmd_handle[index].sm_cmd_name)
+ continue;
+
+ ret = sm_module_cmd_handle[index].sm_func(hwdev, (u32)sm_in->id,
+ (u8)sm_in->instance,
+ (u8)sm_in->node, sm_out);
+ break;
+ }
+
+ if (index == num_cmds) {
+ pr_err("Can't find callback for %d\n", msg_formate);
+ return -EINVAL;
+ }
+ if (ret != 0)
+ pr_err("Get sm information fail, id:%d, instance:%d, node:%d\n",
+ sm_in->id, sm_in->instance, sm_in->node);
+
+ *out_size = sizeof(struct sm_out_st);
+
+ return ret;
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h
new file mode 100644
index 0000000..9330200
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_MT_H
+#define HINIC3_HW_MT_H
+
+#include "hinic3_lld.h"
+
+struct sm_in_st {
+ int node;
+ int id;
+ int instance;
+};
+
+struct sm_out_st {
+ u64 val1;
+ u64 val2;
+};
+
+struct up_log_msg_st {
+ u32 rd_len;
+ u32 addr;
+};
+
+struct csr_write_st {
+ u32 rd_len;
+ u32 addr;
+ u8 *data;
+};
+
+int get_func_type(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_func_id(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int clear_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_self_test_result(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_chip_faults_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c
new file mode 100644
index 0000000..ac80b63
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c
@@ -0,0 +1,2222 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/completion.h>
+#include <linux/semaphore.h>
+#include <linux/interrupt.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_eqs.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_mbox.h"
+#include "hinic3_cmdq.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_multi_host_mgmt.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_cqm.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_devlink.h"
+#include "hinic3_hwdev.h"
+
+static unsigned int wq_page_order = HINIC3_MAX_WQ_PAGE_SIZE_ORDER;
+module_param(wq_page_order, uint, 0444);
+MODULE_PARM_DESC(wq_page_order, "Set wq page size order, wq page size is 4K * (2 ^ wq_page_order) - default is 8");
+
+enum hinic3_pcie_nosnoop {
+ HINIC3_PCIE_SNOOP = 0,
+ HINIC3_PCIE_NO_SNOOP = 1,
+};
+
+enum hinic3_pcie_tph {
+ HINIC3_PCIE_TPH_DISABLE = 0,
+ HINIC3_PCIE_TPH_ENABLE = 1,
+};
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_SHIFT 0
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_MASK 0x3FF
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_SET(val, member) \
+ (((u32)(val) & HINIC3_DMA_ATTR_INDIR_##member##_MASK) << \
+ HINIC3_DMA_ATTR_INDIR_##member##_SHIFT)
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_CLEAR(val, member) \
+ ((val) & (~(HINIC3_DMA_ATTR_INDIR_##member##_MASK \
+ << HINIC3_DMA_ATTR_INDIR_##member##_SHIFT)))
+
+#define HINIC3_DMA_ATTR_ENTRY_ST_SHIFT 0
+#define HINIC3_DMA_ATTR_ENTRY_AT_SHIFT 8
+#define HINIC3_DMA_ATTR_ENTRY_PH_SHIFT 10
+#define HINIC3_DMA_ATTR_ENTRY_NO_SNOOPING_SHIFT 12
+#define HINIC3_DMA_ATTR_ENTRY_TPH_EN_SHIFT 13
+
+#define HINIC3_DMA_ATTR_ENTRY_ST_MASK 0xFF
+#define HINIC3_DMA_ATTR_ENTRY_AT_MASK 0x3
+#define HINIC3_DMA_ATTR_ENTRY_PH_MASK 0x3
+#define HINIC3_DMA_ATTR_ENTRY_NO_SNOOPING_MASK 0x1
+#define HINIC3_DMA_ATTR_ENTRY_TPH_EN_MASK 0x1
+
+#define HINIC3_DMA_ATTR_ENTRY_SET(val, member) \
+ (((u32)(val) & HINIC3_DMA_ATTR_ENTRY_##member##_MASK) << \
+ HINIC3_DMA_ATTR_ENTRY_##member##_SHIFT)
+
+#define HINIC3_DMA_ATTR_ENTRY_CLEAR(val, member) \
+ ((val) & (~(HINIC3_DMA_ATTR_ENTRY_##member##_MASK \
+ << HINIC3_DMA_ATTR_ENTRY_##member##_SHIFT)))
+
+#define HINIC3_PCIE_ST_DISABLE 0
+#define HINIC3_PCIE_AT_DISABLE 0
+#define HINIC3_PCIE_PH_DISABLE 0
+
+#define PCIE_MSIX_ATTR_ENTRY 0
+
+#define HINIC3_CHIP_PRESENT 1
+#define HINIC3_CHIP_ABSENT 0
+
+#define HINIC3_DEAULT_EQ_MSIX_PENDING_LIMIT 0
+#define HINIC3_DEAULT_EQ_MSIX_COALESC_TIMER_CFG 0xFF
+#define HINIC3_DEAULT_EQ_MSIX_RESEND_TIMER_CFG 7
+
+#define HINIC3_HWDEV_WQ_NAME "hinic3_hardware"
+#define HINIC3_WQ_MAX_REQ 10
+
+#define SLAVE_HOST_STATUS_CLEAR(host_id, val) ((val) & (~(1U << (host_id))))
+#define SLAVE_HOST_STATUS_SET(host_id, enable) (((u8)(enable) & 1U) << (host_id))
+#define SLAVE_HOST_STATUS_GET(host_id, val) (!!((val) & (1U << (host_id))))
+
+#ifdef HAVE_HOT_REPLACE_FUNC
+ extern int get_partition_id(void);
+#else
+ static int get_partition_id(void) { return 0; }
+#endif
+
+void set_slave_host_enable(void *hwdev, u8 host_id, bool enable)
+{
+ u32 reg_val;
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF)
+ return;
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_HOST_SLAVE_STATUS_ADDR);
+
+ reg_val = SLAVE_HOST_STATUS_CLEAR(host_id, reg_val);
+ reg_val |= SLAVE_HOST_STATUS_SET(host_id, enable);
+ hinic3_hwif_write_reg(dev->hwif, HINIC3_MULT_HOST_SLAVE_STATUS_ADDR, reg_val);
+
+ sdk_info(dev->dev_hdl, "Set slave host %d status %d, reg value: 0x%x\n",
+ host_id, enable, reg_val);
+}
+
+int hinic3_get_slave_host_enable(void *hwdev, u8 host_id, u8 *slave_en)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ u32 reg_val;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "hwdev should be ppf\n");
+ return -EINVAL;
+ }
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_HOST_SLAVE_STATUS_ADDR);
+ *slave_en = SLAVE_HOST_STATUS_GET(host_id, reg_val);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_slave_host_enable);
+
+int hinic3_get_slave_bitmap(void *hwdev, u8 *slave_host_bitmap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct service_cap *cap = NULL;
+
+ if (!dev || !slave_host_bitmap)
+ return -EINVAL;
+
+ cap = &dev->cfg_mgmt->svc_cap;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "hwdev should be ppf\n");
+ return -EINVAL;
+ }
+
+ *slave_host_bitmap = cap->host_valid_bitmap & (~(1U << cap->master_host_id));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_slave_bitmap);
+
+void set_func_host_mode(struct hinic3_hwdev *hwdev, enum hinic3_func_mode mode)
+{
+ switch (mode) {
+ case FUNC_MOD_MULTI_BM_MASTER:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host BM master host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_BM_MASTER;
+ break;
+ case FUNC_MOD_MULTI_BM_SLAVE:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host BM slave host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_BM_SLAVE;
+ break;
+ case FUNC_MOD_MULTI_VM_MASTER:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host VM master host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_VM_MASTER;
+ break;
+ case FUNC_MOD_MULTI_VM_SLAVE:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host VM slave host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_VM_SLAVE;
+ break;
+ default:
+ hwdev->func_mode = FUNC_MOD_NORMAL_HOST;
+ break;
+ }
+}
+
+static void hinic3_init_host_mode_pre(struct hinic3_hwdev *hwdev)
+{
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+ u8 host_id = hwdev->hwif->attr.pci_intf_idx;
+
+ switch (cap->srv_multi_host_mode) {
+ case HINIC3_SDI_MODE_BM:
+ if (host_id == cap->master_host_id)
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_BM_MASTER);
+ else
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_BM_SLAVE);
+ break;
+ case HINIC3_SDI_MODE_VM:
+ if (host_id == cap->master_host_id)
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_VM_MASTER);
+ else
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_VM_SLAVE);
+ break;
+ default:
+ set_func_host_mode(hwdev, FUNC_MOD_NORMAL_HOST);
+ break;
+ }
+}
+
+static void hinic3_init_hot_plug_status(struct hinic3_hwdev *hwdev)
+{
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+
+ if (cap->hot_plug_disable) {
+ hwdev->hot_plug_mode = HOT_PLUG_DISABLE;
+ } else {
+ hwdev->hot_plug_mode = HOT_PLUG_ENABLE;
+ }
+}
+
+static void hinic3_init_os_hot_replace(struct hinic3_hwdev *hwdev)
+{
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+
+ if (cap->os_hot_replace) {
+ hwdev->hot_replace_mode = HOT_REPLACE_ENABLE;
+ } else {
+ hwdev->hot_replace_mode = HOT_REPLACE_DISABLE;
+ }
+}
+
+static u8 hinic3_nic_sw_aeqe_handler(void *hwdev, u8 event, u8 *data)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev)
+ return 0;
+
+ sdk_err(dev->dev_hdl, "Received nic ucode aeq event type: 0x%x, data: 0x%llx\n",
+ event, *((u64 *)data));
+
+ if (event < HINIC3_NIC_FATAL_ERROR_MAX)
+ atomic_inc(&dev->hw_stats.nic_ucode_event_stats[event]);
+
+ return 0;
+}
+
+static void hinic3_init_heartbeat_detect(struct hinic3_hwdev *hwdev);
+static void hinic3_destroy_heartbeat_detect(struct hinic3_hwdev *hwdev);
+
+typedef void (*mgmt_event_cb)(void *handle, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+struct mgmt_event_handle {
+ u16 cmd;
+ mgmt_event_cb proc;
+};
+
+static int pf_handle_vf_comm_mbox(void *pri_handle,
+ u16 vf_id, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = pri_handle;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ sdk_warn(hwdev->dev_hdl, "Unsupported vf mbox event %u to process\n",
+ cmd);
+
+ return 0;
+}
+
+static int vf_handle_pf_comm_mbox(void *pri_handle,
+ u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = pri_handle;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ sdk_warn(hwdev->dev_hdl, "Unsupported pf mbox event %u to process\n",
+ cmd);
+ return 0;
+}
+
+static void chip_fault_show(struct hinic3_hwdev *hwdev,
+ struct hinic3_fault_event *event)
+{
+ char fault_level[FAULT_LEVEL_MAX][FAULT_SHOW_STR_LEN + 1] = {
+ "fatal", "reset", "host", "flr", "general", "suggestion"};
+ char level_str[FAULT_SHOW_STR_LEN + 1];
+ u8 level;
+ int ret;
+
+ memset(level_str, 0, FAULT_SHOW_STR_LEN + 1);
+ level = event->event.chip.err_level;
+ if (level < FAULT_LEVEL_MAX) {
+ ret = strscpy(level_str, fault_level[level],
+ FAULT_SHOW_STR_LEN);
+ if (ret < 0)
+ return;
+ } else {
+ ret = strscpy(level_str, "Unknown", FAULT_SHOW_STR_LEN);
+ if (ret < 0)
+ return;
+ }
+
+ if (level == FAULT_LEVEL_SERIOUS_FLR)
+ dev_err(hwdev->dev_hdl, "err_level: %u [%s], flr func_id: %u\n",
+ level, level_str, event->event.chip.func_id);
+
+ dev_err(hwdev->dev_hdl,
+ "Module_id: 0x%x, err_type: 0x%x, err_level: %u[%s], err_csr_addr: 0x%08x, err_csr_value: 0x%08x\n",
+ event->event.chip.node_id,
+ event->event.chip.err_type, level, level_str,
+ event->event.chip.err_csr_addr,
+ event->event.chip.err_csr_value);
+}
+
+static void fault_report_show(struct hinic3_hwdev *hwdev,
+ struct hinic3_fault_event *event)
+{
+ char fault_type[FAULT_TYPE_MAX][FAULT_SHOW_STR_LEN + 1] = {
+ "chip", "ucode", "mem rd timeout", "mem wr timeout",
+ "reg rd timeout", "reg wr timeout", "phy fault", "tsensor fault"};
+ char type_str[FAULT_SHOW_STR_LEN + 1] = {0};
+ struct fault_event_stats *fault = NULL;
+ int ret;
+
+ sdk_err(hwdev->dev_hdl, "Fault event report received, func_id: %u\n",
+ hinic3_global_func_id(hwdev));
+
+ fault = &hwdev->hw_stats.fault_event_stats;
+
+ if (event->type < FAULT_TYPE_MAX) {
+ ret = strscpy(type_str, fault_type[event->type], sizeof(type_str));
+ if (ret < 0)
+ return;
+ atomic_inc(&fault->fault_type_stat[event->type]);
+ } else {
+ ret = strscpy(type_str, "Unknown", sizeof(type_str));
+ if (ret < 0)
+ return;
+ }
+
+ sdk_err(hwdev->dev_hdl, "Fault type: %u [%s]\n", event->type, type_str);
+ /* Dump the first four words of event->event.val */
+ sdk_err(hwdev->dev_hdl, "Fault val[0]: 0x%08x, val[1]: 0x%08x, val[2]: 0x%08x, val[3]: 0x%08x\n",
+ event->event.val[0x0], event->event.val[0x1],
+ event->event.val[0x2], event->event.val[0x3]);
+
+ hinic3_show_chip_err_info(hwdev);
+
+ switch (event->type) {
+ case FAULT_TYPE_CHIP:
+ chip_fault_show(hwdev, event);
+ break;
+ case FAULT_TYPE_UCODE:
+ sdk_err(hwdev->dev_hdl, "Cause_id: %u, core_id: %u, c_id: %u, epc: 0x%08x\n",
+ event->event.ucode.cause_id, event->event.ucode.core_id,
+ event->event.ucode.c_id, event->event.ucode.epc);
+ break;
+ case FAULT_TYPE_MEM_RD_TIMEOUT:
+ case FAULT_TYPE_MEM_WR_TIMEOUT:
+ sdk_err(hwdev->dev_hdl, "Err_csr_ctrl: 0x%08x, err_csr_data: 0x%08x, ctrl_tab: 0x%08x, mem_index: 0x%08x\n",
+ event->event.mem_timeout.err_csr_ctrl,
+ event->event.mem_timeout.err_csr_data,
+ event->event.mem_timeout.ctrl_tab, event->event.mem_timeout.mem_index);
+ break;
+ case FAULT_TYPE_REG_RD_TIMEOUT:
+ case FAULT_TYPE_REG_WR_TIMEOUT:
+ sdk_err(hwdev->dev_hdl, "Err_csr: 0x%08x\n", event->event.reg_timeout.err_csr);
+ break;
+ case FAULT_TYPE_PHY_FAULT:
+ sdk_err(hwdev->dev_hdl, "Op_type: %u, port_id: %u, dev_ad: %u, csr_addr: 0x%08x, op_data: 0x%08x\n",
+ event->event.phy_fault.op_type, event->event.phy_fault.port_id,
+ event->event.phy_fault.dev_ad, event->event.phy_fault.csr_addr,
+ event->event.phy_fault.op_data);
+ break;
+ default:
+ break;
+ }
+}
+
+static void fault_event_handler(void *dev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_cmd_fault_event *fault_event = NULL;
+ struct hinic3_fault_event *fault = NULL;
+ struct hinic3_event_info event_info;
+ struct hinic3_hwdev *hwdev = dev;
+ u8 fault_src = HINIC3_FAULT_SRC_TYPE_MAX;
+ u8 fault_level;
+
+ if (in_size != sizeof(*fault_event)) {
+ sdk_err(hwdev->dev_hdl, "Invalid fault event report, length: %u, should be %ld\n",
+ in_size, sizeof(*fault_event));
+ return;
+ }
+
+ fault_event = buf_in;
+ fault_report_show(hwdev, &fault_event->event);
+
+ if (fault_event->event.type == FAULT_TYPE_CHIP)
+ fault_level = fault_event->event.event.chip.err_level;
+ else
+ fault_level = FAULT_LEVEL_FATAL;
+
+ if (hwdev->event_callback) {
+ event_info.service = EVENT_SRV_COMM;
+ event_info.type = EVENT_COMM_FAULT;
+ fault = (void *)event_info.event_data;
+ memcpy(fault, &fault_event->event, sizeof(struct hinic3_fault_event));
+ fault->fault_level = fault_level;
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+ }
+
+ if (fault_event->event.type <= FAULT_TYPE_REG_WR_TIMEOUT)
+ fault_src = fault_event->event.type;
+ else if (fault_event->event.type == FAULT_TYPE_PHY_FAULT)
+ fault_src = HINIC3_FAULT_SRC_HW_PHY_FAULT;
+
+ hisdk3_fault_post_process(hwdev, fault_src, fault_level);
+}
+
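+/*
+ * Record a first-failure-message (FFM) interrupt. A repeat of the previous
+ * error (same CSR address and value) only increments the repeat counter of
+ * the last record; otherwise a new record is stored together with a local
+ * (GMT+8) timestamp, up to FFM_RECORD_NUM_MAX entries.
+ */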
+static void ffm_event_record(struct hinic3_hwdev *dev, struct dbgtool_k_glb_info *dbgtool_info,
+ struct ffm_intr_info *intr)
+{
+ struct rtc_time rctm;
+ struct timeval txc;
+ u32 ffm_idx;
+ u32 last_err_csr_addr;
+ u32 last_err_csr_value;
+
+ ffm_idx = dbgtool_info->ffm->ffm_num;
+ last_err_csr_addr = dbgtool_info->ffm->last_err_csr_addr;
+ last_err_csr_value = dbgtool_info->ffm->last_err_csr_value;
+ if (ffm_idx < FFM_RECORD_NUM_MAX) {
+ if (intr->err_csr_addr == last_err_csr_addr &&
+ intr->err_csr_value == last_err_csr_value) {
+ dbgtool_info->ffm->ffm[ffm_idx - 1].times++;
+ sdk_err(dev->dev_hdl, "Receive intr same, ffm_idx: %u\n", ffm_idx - 1);
+ return;
+ }
+ sdk_err(dev->dev_hdl, "Receive intr, ffm_idx: %u\n", ffm_idx);
+
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.node_id = intr->node_id;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_level = intr->err_level;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_type = intr->err_type;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_csr_addr = intr->err_csr_addr;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_csr_value = intr->err_csr_value;
+ dbgtool_info->ffm->last_err_csr_addr = intr->err_csr_addr;
+ dbgtool_info->ffm->last_err_csr_value = intr->err_csr_value;
+ dbgtool_info->ffm->ffm[ffm_idx].times = 1;
+
+ /* Obtain the current UTC time */
+ do_gettimeofday(&txc);
+
+ /* Convert to local time (GMT+8) by adding 8 * 60 * 60 seconds, then break it into tm */
+ rtc_time_to_tm((unsigned long)txc.tv_sec + 60 * 60 * 8, &rctm);
+
+ /* tm_year starts from 1900; 0->1900, 1->1901, and so on */
+ dbgtool_info->ffm->ffm[ffm_idx].year = (u16)(rctm.tm_year + 1900);
+ /* tm_mon starts from 0, 0 indicates January, and so on */
+ dbgtool_info->ffm->ffm[ffm_idx].mon = (u8)rctm.tm_mon + 1;
+ dbgtool_info->ffm->ffm[ffm_idx].mday = (u8)rctm.tm_mday;
+ dbgtool_info->ffm->ffm[ffm_idx].hour = (u8)rctm.tm_hour;
+ dbgtool_info->ffm->ffm[ffm_idx].min = (u8)rctm.tm_min;
+ dbgtool_info->ffm->ffm[ffm_idx].sec = (u8)rctm.tm_sec;
+
+ dbgtool_info->ffm->ffm_num++;
+ }
+}
+
+static void ffm_event_msg_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+ struct hinic3_hwdev *dev = hwdev;
+ struct card_node *card_info = NULL;
+ struct ffm_intr_info *intr = NULL;
+
+ if (in_size != sizeof(*intr)) {
+ sdk_err(dev->dev_hdl, "Invalid fault event report, length: %u, should be %ld.\n",
+ in_size, sizeof(*intr));
+ return;
+ }
+
+ intr = buf_in;
+
+ sdk_err(dev->dev_hdl, "node_id: 0x%x, err_type: 0x%x, err_level: %u, err_csr_addr: 0x%08x, err_csr_value: 0x%08x\n",
+ intr->node_id, intr->err_type, intr->err_level,
+ intr->err_csr_addr, intr->err_csr_value);
+
+ hinic3_show_chip_err_info(hwdev);
+
+ card_info = dev->chip_node;
+ dbgtool_info = card_info->dbgtool_info;
+
+ *out_size = sizeof(*intr);
+
+ if (!dbgtool_info)
+ return;
+
+ if (!dbgtool_info->ffm)
+ return;
+
+ ffm_event_record(dev, dbgtool_info, intr);
+}
+
+#define X_CSR_INDEX 30
+
+static void sw_watchdog_timeout_info_show(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct comm_info_sw_watchdog *watchdog_info = buf_in;
+ u32 stack_len, i, j, tmp;
+ u32 *dump_addr = NULL;
+ u64 *reg = NULL;
+
+ if (in_size != sizeof(*watchdog_info)) {
+ sdk_err(hwdev->dev_hdl, "Invalid mgmt watchdog report, length: %d, should be %ld\n",
+ in_size, sizeof(*watchdog_info));
+ return;
+ }
+
+ sdk_err(hwdev->dev_hdl, "Mgmt deadloop time: 0x%x 0x%x, task id: 0x%x, sp: 0x%llx\n",
+ watchdog_info->curr_time_h, watchdog_info->curr_time_l,
+ watchdog_info->task_id, watchdog_info->sp);
+ sdk_err(hwdev->dev_hdl,
+ "Stack current used: 0x%x, peak used: 0x%x, overflow flag: 0x%x, top: 0x%llx, bottom: 0x%llx\n",
+ watchdog_info->curr_used, watchdog_info->peak_used,
+ watchdog_info->is_overflow, watchdog_info->stack_top, watchdog_info->stack_bottom);
+
+ sdk_err(hwdev->dev_hdl, "Mgmt pc: 0x%llx, elr: 0x%llx, spsr: 0x%llx, far: 0x%llx, esr: 0x%llx, xzr: 0x%llx\n",
+ watchdog_info->pc, watchdog_info->elr, watchdog_info->spsr, watchdog_info->far,
+ watchdog_info->esr, watchdog_info->xzr);
+
+ sdk_err(hwdev->dev_hdl, "Mgmt register info\n");
+ reg = &watchdog_info->x30;
+ for (i = 0; i <= X_CSR_INDEX; i++)
+ sdk_err(hwdev->dev_hdl, "x%02u:0x%llx\n",
+ X_CSR_INDEX - i, reg[i]);
+
+ if (watchdog_info->stack_actlen <= DATA_LEN_1K) {
+ stack_len = watchdog_info->stack_actlen;
+ } else {
+ sdk_err(hwdev->dev_hdl, "Oops stack length: 0x%x is wrong\n",
+ watchdog_info->stack_actlen);
+ stack_len = DATA_LEN_1K;
+ }
+
+ sdk_err(hwdev->dev_hdl, "Mgmt dump stack, 16 bytes per line(start from sp)\n");
+ for (i = 0; i < (stack_len / DUMP_16B_PER_LINE); i++) {
+ dump_addr = (u32 *)(watchdog_info->stack_data + (u32)(i * DUMP_16B_PER_LINE));
+ sdk_err(hwdev->dev_hdl, "0x%08x 0x%08x 0x%08x 0x%08x\n",
+ *dump_addr, *(dump_addr + 0x1), *(dump_addr + 0x2), *(dump_addr + 0x3));
+ }
+
+ tmp = (stack_len % DUMP_16B_PER_LINE) / DUMP_4_VAR_PER_LINE;
+ for (j = 0; j < tmp; j++) {
+ dump_addr = (u32 *)(watchdog_info->stack_data +
+ (u32)(i * DUMP_16B_PER_LINE + j * DUMP_4_VAR_PER_LINE));
+ sdk_err(hwdev->dev_hdl, "0x%08x ", *dump_addr);
+ }
+
+ *out_size = sizeof(*watchdog_info);
+ watchdog_info = buf_out;
+ watchdog_info->head.status = 0;
+}
+
+static void mgmt_watchdog_timeout_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_event_info event_info = { 0 };
+ struct hinic3_hwdev *dev = hwdev;
+
+ sw_watchdog_timeout_info_show(dev, buf_in, in_size, buf_out, out_size);
+
+ if (dev->event_callback) {
+ event_info.type = EVENT_COMM_MGMT_WATCHDOG;
+ dev->event_callback(dev->event_pri_handle, &event_info);
+ }
+}
+
+static void show_exc_info(struct hinic3_hwdev *hwdev, struct tag_exc_info *exc_info)
+{
+ u32 i;
+
+ /* key information */
+ sdk_err(hwdev->dev_hdl, "==================== Exception Info Begin ====================\n");
+ sdk_err(hwdev->dev_hdl, "Exception CpuTick : 0x%08x 0x%08x\n",
+ exc_info->cpu_tick.cnt_hi, exc_info->cpu_tick.cnt_lo);
+ sdk_err(hwdev->dev_hdl, "Exception Cause : %u\n", exc_info->exc_cause);
+ sdk_err(hwdev->dev_hdl, "Os Version : %s\n", exc_info->os_ver);
+ sdk_err(hwdev->dev_hdl, "App Version : %s\n", exc_info->app_ver);
+ sdk_err(hwdev->dev_hdl, "CPU Type : 0x%08x\n", exc_info->cpu_type);
+ sdk_err(hwdev->dev_hdl, "CPU ID : 0x%08x\n", exc_info->cpu_id);
+ sdk_err(hwdev->dev_hdl, "Thread Type : 0x%08x\n", exc_info->thread_type);
+ sdk_err(hwdev->dev_hdl, "Thread ID : 0x%08x\n", exc_info->thread_id);
+ sdk_err(hwdev->dev_hdl, "Byte Order : 0x%08x\n", exc_info->byte_order);
+ sdk_err(hwdev->dev_hdl, "Nest Count : 0x%08x\n", exc_info->nest_cnt);
+ sdk_err(hwdev->dev_hdl, "Fatal Error Num : 0x%08x\n", exc_info->fatal_errno);
+ sdk_err(hwdev->dev_hdl, "Current SP : 0x%016llx\n", exc_info->uw_sp);
+ sdk_err(hwdev->dev_hdl, "Stack Bottom : 0x%016llx\n", exc_info->stack_bottom);
+
+ /* register field */
+ sdk_err(hwdev->dev_hdl, "Register contents when exception occur.\n");
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "TTBR0",
+ exc_info->reg_info.ttbr0, "TTBR1", exc_info->reg_info.ttbr1);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "TCR",
+ exc_info->reg_info.tcr, "MAIR", exc_info->reg_info.mair);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "SCTLR",
+ exc_info->reg_info.sctlr, "VBAR", exc_info->reg_info.vbar);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "CURRENTE1",
+ exc_info->reg_info.current_el, "SP", exc_info->reg_info.sp);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "ELR",
+ exc_info->reg_info.elr, "SPSR", exc_info->reg_info.spsr);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "FAR",
+ exc_info->reg_info.far_r, "ESR", exc_info->reg_info.esr);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx\n", "XZR", exc_info->reg_info.xzr);
+
+ for (i = 0; i < XREGS_NUM - 1; i += 0x2)
+ sdk_err(hwdev->dev_hdl, "XREGS[%02u]%-5s: 0x%016llx \t XREGS[%02u]%-5s: 0x%016llx",
+ i, " ", exc_info->reg_info.xregs[i],
+ (u32)(i + 0x1U), " ", exc_info->reg_info.xregs[(u32)(i + 0x1U)]);
+
+ sdk_err(hwdev->dev_hdl, "XREGS[%02u]%-5s: 0x%016llx \t ", XREGS_NUM - 1, " ",
+ exc_info->reg_info.xregs[XREGS_NUM - 1]);
+}
+
+#define FOUR_REG_LEN 16
+
+static void mgmt_lastword_report_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct tag_comm_info_up_lastword *lastword_info = buf_in;
+ struct tag_exc_info *exc_info = &lastword_info->stack_info;
+ u32 stack_len = lastword_info->stack_actlen;
+ struct hinic3_hwdev *dev = hwdev;
+ u32 *curr_reg = NULL;
+ u32 reg_i, cnt;
+
+ if (in_size != sizeof(*lastword_info)) {
+ sdk_err(dev->dev_hdl, "Invalid mgmt lastword, length: %u, should be %ld\n",
+ in_size, sizeof(*lastword_info));
+ return;
+ }
+
+ show_exc_info(dev, exc_info);
+
+ /* call stack dump */
+ sdk_err(dev->dev_hdl, "Dump stack when exceptioin occurs, 16Bytes per line.\n");
+
+ cnt = stack_len / FOUR_REG_LEN;
+ for (reg_i = 0; reg_i < cnt; reg_i++) {
+ curr_reg = (u32 *)(lastword_info->stack_data + ((u64)(u32)(reg_i * FOUR_REG_LEN)));
+ sdk_err(dev->dev_hdl, "0x%08x 0x%08x 0x%08x 0x%08x\n",
+ *curr_reg, *(curr_reg + 0x1), *(curr_reg + 0x2), *(curr_reg + 0x3));
+ }
+
+ sdk_err(dev->dev_hdl, "==================== Exception Info End ====================\n");
+}
+
+const struct mgmt_event_handle mgmt_event_proc[] = {
+ {
+ .cmd = COMM_MGMT_CMD_FAULT_REPORT,
+ .proc = fault_event_handler,
+ },
+
+ {
+ .cmd = COMM_MGMT_CMD_FFM_SET,
+ .proc = ffm_event_msg_handler,
+ },
+
+ {
+ .cmd = COMM_MGMT_CMD_WATCHDOG_INFO,
+ .proc = mgmt_watchdog_timeout_event_handler,
+ },
+
+ {
+ .cmd = COMM_MGMT_CMD_LASTWORD_GET,
+ .proc = mgmt_lastword_report_event_handler,
+ },
+};
+
+static void pf_handle_mgmt_comm_event(void *handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = handle;
+ u32 i, event_num = ARRAY_LEN(mgmt_event_proc);
+
+ if (!hwdev)
+ return;
+
+ for (i = 0; i < event_num; i++) {
+ if (cmd == mgmt_event_proc[i].cmd) {
+ if (mgmt_event_proc[i].proc)
+ mgmt_event_proc[i].proc(handle, buf_in, in_size,
+ buf_out, out_size);
+
+ return;
+ }
+ }
+
+ sdk_warn(hwdev->dev_hdl, "Unsupported mgmt cpu event %u to process\n",
+ cmd);
+ *out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+}
+
+static void hinic3_set_chip_present(struct hinic3_hwdev *hwdev)
+{
+ hwdev->chip_present_flag = HINIC3_CHIP_PRESENT;
+}
+
+static void hinic3_set_chip_absent(struct hinic3_hwdev *hwdev)
+{
+ sdk_err(hwdev->dev_hdl, "Card not present\n");
+ hwdev->chip_present_flag = HINIC3_CHIP_ABSENT;
+}
+
+int hinic3_get_chip_present_flag(const void *hwdev)
+{
+ if (!hwdev)
+ return 0;
+
+ return ((struct hinic3_hwdev *)hwdev)->chip_present_flag;
+}
+EXPORT_SYMBOL(hinic3_get_chip_present_flag);
+
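+/**
+ * hinic3_force_complete_all - wake up all waiters of in-flight messages
+ * @dev: the pointer to hw device
+ * Mark pending mgmt/mbox requests as timed out and flush sync cmdq commands
+ * so callers are not blocked when the chip becomes absent.
+ **/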
+void hinic3_force_complete_all(void *dev)
+{
+ struct hinic3_recv_msg *recv_resp_msg = NULL;
+ struct hinic3_hwdev *hwdev = dev;
+ struct hinic3_mbox *func_to_func = NULL;
+
+ spin_lock_bh(&hwdev->channel_lock);
+ if (hinic3_func_type(hwdev) != TYPE_VF &&
+ test_bit(HINIC3_HWDEV_MGMT_INITED, &hwdev->func_state)) {
+ recv_resp_msg = &hwdev->pf_to_mgmt->recv_resp_msg_from_mgmt;
+ spin_lock_bh(&hwdev->pf_to_mgmt->sync_event_lock);
+ if (hwdev->pf_to_mgmt->event_flag == SEND_EVENT_START) {
+ complete(&recv_resp_msg->recv_done);
+ hwdev->pf_to_mgmt->event_flag = SEND_EVENT_TIMEOUT;
+ }
+ spin_unlock_bh(&hwdev->pf_to_mgmt->sync_event_lock);
+ }
+
+ if (test_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state)) {
+ func_to_func = hwdev->func_to_func;
+ spin_lock(&func_to_func->mbox_lock);
+ if (func_to_func->event_flag == EVENT_START)
+ func_to_func->event_flag = EVENT_TIMEOUT;
+ spin_unlock(&func_to_func->mbox_lock);
+ }
+
+ if (test_bit(HINIC3_HWDEV_CMDQ_INITED, &hwdev->func_state))
+ hinic3_cmdq_flush_sync_cmd(hwdev);
+
+ spin_unlock_bh(&hwdev->channel_lock);
+}
+EXPORT_SYMBOL(hinic3_force_complete_all);
+
+void hinic3_detect_hw_present(void *hwdev)
+{
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev)) {
+ hinic3_set_chip_absent(hwdev);
+ hinic3_force_complete_all(hwdev);
+ }
+}
+
+/**
+ * dma_attr_table_init - initialize the default dma attributes
+ * @hwdev: the pointer to hw device
+ **/
+static int dma_attr_table_init(struct hinic3_hwdev *hwdev)
+{
+ u32 addr, val, dst_attr;
+
+ /* Indirect access requires setting the entry index first */
+ addr = HINIC3_CSR_DMA_ATTR_INDIR_IDX_ADDR;
+ val = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ val = HINIC3_DMA_ATTR_INDIR_IDX_CLEAR(val, IDX);
+
+ val |= HINIC3_DMA_ATTR_INDIR_IDX_SET(PCIE_MSIX_ATTR_ENTRY, IDX);
+
+ hinic3_hwif_write_reg(hwdev->hwif, addr, val);
+
+ wmb(); /* write index before config */
+
+ addr = HINIC3_CSR_DMA_ATTR_TBL_ADDR;
+ val = hinic3_hwif_read_reg(hwdev->hwif, addr);
+
+ dst_attr = HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_ST_DISABLE, ST) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_AT_DISABLE, AT) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_PH_DISABLE, PH) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_SNOOP, NO_SNOOPING) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_TPH_DISABLE, TPH_EN);
+
+ if (val == dst_attr)
+ return 0;
+
+ return hinic3_set_dma_attr_tbl(hwdev, PCIE_MSIX_ATTR_ENTRY, HINIC3_PCIE_ST_DISABLE,
+ HINIC3_PCIE_AT_DISABLE, HINIC3_PCIE_PH_DISABLE,
+ HINIC3_PCIE_SNOOP, HINIC3_PCIE_TPH_DISABLE);
+}
+
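+/**
+ * init_aeqs_msix_attr - set default interrupt coalescing for every AEQ vector
+ * @hwdev: the pointer to hw device
+ **/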
+static int init_aeqs_msix_attr(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ struct interrupt_info info = {0};
+ struct hinic3_eq *eq = NULL;
+ int q_id;
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = HINIC3_DEAULT_EQ_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = HINIC3_DEAULT_EQ_MSIX_COALESC_TIMER_CFG;
+ info.resend_timer_cfg = HINIC3_DEAULT_EQ_MSIX_RESEND_TIMER_CFG;
+
+ for (q_id = aeqs->num_aeqs - 1; q_id >= 0; q_id--) {
+ eq = &aeqs->aeq[q_id];
+ info.msix_index = eq->eq_irq.msix_entry_idx;
+ err = hinic3_set_interrupt_cfg_direct(hwdev, &info,
+ HINIC3_CHANNEL_COMM);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Set msix attr for aeq %d failed\n",
+ q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int init_ceqs_msix_attr(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_ceqs *ceqs = hwdev->ceqs;
+ struct interrupt_info info = {0};
+ struct hinic3_eq *eq = NULL;
+ u16 q_id;
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = HINIC3_DEAULT_EQ_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = HINIC3_DEAULT_EQ_MSIX_COALESC_TIMER_CFG;
+ info.resend_timer_cfg = HINIC3_DEAULT_EQ_MSIX_RESEND_TIMER_CFG;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++) {
+ eq = &ceqs->ceq[q_id];
+ info.msix_index = eq->eq_irq.msix_entry_idx;
+ err = hinic3_set_interrupt_cfg(hwdev, info,
+ HINIC3_CHANNEL_COMM);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Set msix attr for ceq %u failed\n",
+ q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int hinic3_comm_clp_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF || !COMM_SUPPORT_CLP(hwdev))
+ return 0;
+
+ err = hinic3_clp_pf_to_mgmt_init(hwdev);
+ if (err != 0)
+ return err;
+
+ return 0;
+}
+
+static void hinic3_comm_clp_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) == TYPE_VF || !COMM_SUPPORT_CLP(hwdev))
+ return;
+
+ hinic3_clp_pf_to_mgmt_free(hwdev);
+}
+
+static int hinic3_comm_aeqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info aeq_irqs[HINIC3_MAX_AEQS] = {{0} };
+ u16 num_aeqs, resp_num_irq = 0, i;
+ int err;
+
+ num_aeqs = HINIC3_HWIF_NUM_AEQS(hwdev->hwif);
+ if (num_aeqs > HINIC3_MAX_AEQS) {
+ sdk_warn(hwdev->dev_hdl, "Adjust aeq num to %d\n",
+ HINIC3_MAX_AEQS);
+ num_aeqs = HINIC3_MAX_AEQS;
+ }
+ err = hinic3_alloc_irqs(hwdev, SERVICE_T_INTF, num_aeqs, aeq_irqs,
+ &resp_num_irq);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc aeq irqs, num_aeqs: %u\n",
+ num_aeqs);
+ return err;
+ }
+
+ if (resp_num_irq < num_aeqs) {
+ sdk_warn(hwdev->dev_hdl, "Adjust aeq num to %u\n",
+ resp_num_irq);
+ num_aeqs = resp_num_irq;
+ }
+
+ err = hinic3_aeqs_init(hwdev, num_aeqs, aeq_irqs);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeqs\n");
+ goto aeqs_init_err;
+ }
+
+ return 0;
+
+aeqs_init_err:
+ for (i = 0; i < num_aeqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, aeq_irqs[i].irq_id);
+
+ return err;
+}
+
+static void hinic3_comm_aeqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info aeq_irqs[HINIC3_MAX_AEQS] = {{0} };
+ u16 num_irqs, i;
+
+ hinic3_get_aeq_irqs(hwdev, (struct irq_info *)aeq_irqs, &num_irqs);
+
+ hinic3_aeqs_free(hwdev);
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, aeq_irqs[i].irq_id);
+}
+
+static int hinic3_comm_ceqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info ceq_irqs[HINIC3_MAX_CEQS] = {{0} };
+ u16 num_ceqs, resp_num_irq = 0, i;
+ int err;
+
+ num_ceqs = HINIC3_HWIF_NUM_CEQS(hwdev->hwif);
+ if (num_ceqs > HINIC3_MAX_CEQS) {
+ sdk_warn(hwdev->dev_hdl, "Adjust ceq num to %d\n",
+ HINIC3_MAX_CEQS);
+ num_ceqs = HINIC3_MAX_CEQS;
+ }
+
+ err = hinic3_alloc_irqs(hwdev, SERVICE_T_INTF, num_ceqs, ceq_irqs,
+ &resp_num_irq);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc ceq irqs, num_ceqs: %u\n",
+ num_ceqs);
+ return err;
+ }
+
+ if (resp_num_irq < num_ceqs) {
+ sdk_warn(hwdev->dev_hdl, "Adjust ceq num to %u\n",
+ resp_num_irq);
+ num_ceqs = resp_num_irq;
+ }
+
+ err = hinic3_ceqs_init(hwdev, num_ceqs, ceq_irqs);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to init ceqs, err:%d\n", err);
+ goto ceqs_init_err;
+ }
+
+ return 0;
+
+ceqs_init_err:
+ for (i = 0; i < num_ceqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, ceq_irqs[i].irq_id);
+
+ return err;
+}
+
+static void hinic3_comm_ceqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info ceq_irqs[HINIC3_MAX_CEQS] = {{0} };
+ u16 num_irqs;
+ int i;
+
+ hinic3_get_ceq_irqs(hwdev, (struct irq_info *)ceq_irqs, &num_irqs);
+
+ hinic3_ceqs_free(hwdev);
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, ceq_irqs[i].irq_id);
+}
+
+static int hinic3_comm_func_to_func_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_func_to_func_init(hwdev);
+ if (err != 0)
+ return err;
+
+ hinic3_aeq_register_hw_cb(hwdev, hwdev, HINIC3_MBX_FROM_FUNC,
+ hinic3_mbox_func_aeqe_handler);
+ hinic3_aeq_register_hw_cb(hwdev, hwdev, HINIC3_MSG_FROM_MGMT_CPU,
+ hinic3_mgmt_msg_aeqe_handler);
+
+ if (!HINIC3_IS_VF(hwdev)) {
+ hinic3_register_pf_mbox_cb(hwdev, HINIC3_MOD_COMM, hwdev, pf_handle_vf_comm_mbox);
+ hinic3_register_pf_mbox_cb(hwdev, HINIC3_MOD_SW_FUNC,
+ hwdev, sw_func_pf_mbox_handler);
+ } else {
+ hinic3_register_vf_mbox_cb(hwdev, HINIC3_MOD_COMM, hwdev, vf_handle_pf_comm_mbox);
+ }
+
+ set_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state);
+
+ return 0;
+}
+
+static void hinic3_comm_func_to_func_free(struct hinic3_hwdev *hwdev)
+{
+ spin_lock_bh(&hwdev->channel_lock);
+ clear_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state);
+ spin_unlock_bh(&hwdev->channel_lock);
+
+ hinic3_aeq_unregister_hw_cb(hwdev, HINIC3_MBX_FROM_FUNC);
+
+ if (!HINIC3_IS_VF(hwdev)) {
+ hinic3_unregister_pf_mbox_cb(hwdev, HINIC3_MOD_COMM);
+ } else {
+ hinic3_unregister_vf_mbox_cb(hwdev, HINIC3_MOD_COMM);
+
+ hinic3_aeq_unregister_hw_cb(hwdev, HINIC3_MSG_FROM_MGMT_CPU);
+ }
+
+ hinic3_func_to_func_free(hwdev);
+}
+
+static int hinic3_comm_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ err = hinic3_pf_to_mgmt_init(hwdev);
+ if (err != 0)
+ return err;
+
+ hinic3_register_mgmt_msg_cb(hwdev, HINIC3_MOD_COMM, hwdev,
+ pf_handle_mgmt_comm_event);
+
+ set_bit(HINIC3_HWDEV_MGMT_INITED, &hwdev->func_state);
+
+ return 0;
+}
+
+static void hinic3_comm_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return;
+
+ spin_lock_bh(&hwdev->channel_lock);
+ clear_bit(HINIC3_HWDEV_MGMT_INITED, &hwdev->func_state);
+ spin_unlock_bh(&hwdev->channel_lock);
+
+ hinic3_unregister_mgmt_msg_cb(hwdev, HINIC3_MOD_COMM);
+
+ hinic3_aeq_unregister_hw_cb(hwdev, HINIC3_MSG_FROM_MGMT_CPU);
+
+ hinic3_pf_to_mgmt_free(hwdev);
+}
+
+static int hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_cmdqs_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmd queues\n");
+ return err;
+ }
+
+ hinic3_ceq_register_cb(hwdev, hwdev, HINIC3_CMDQ, hinic3_cmdq_ceq_handler);
+
+ err = hinic3_set_cmdq_depth(hwdev, HINIC3_CMDQ_DEPTH);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to set cmdq depth\n");
+ goto set_cmdq_depth_err;
+ }
+
+ set_bit(HINIC3_HWDEV_CMDQ_INITED, &hwdev->func_state);
+
+ return 0;
+
+set_cmdq_depth_err:
+ hinic3_cmdqs_free(hwdev);
+
+ return err;
+}
+
+static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev)
+{
+ spin_lock_bh(&hwdev->channel_lock);
+ clear_bit(HINIC3_HWDEV_CMDQ_INITED, &hwdev->func_state);
+ spin_unlock_bh(&hwdev->channel_lock);
+
+ hinic3_ceq_unregister_cb(hwdev, HINIC3_CMDQ);
+ hinic3_cmdqs_free(hwdev);
+}
+
+static void hinic3_sync_mgmt_func_state(struct hinic3_hwdev *hwdev)
+{
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_ACTIVE_FLAG);
+}
+
+static void hinic3_unsync_mgmt_func_state(struct hinic3_hwdev *hwdev)
+{
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_INIT);
+}
+
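+/**
+ * init_basic_attributes - read board info, comm features and global attributes
+ * @hwdev: the pointer to hw device
+ * The reported hardware features are masked with the driver-supported
+ * feature set before being used.
+ **/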
+static int init_basic_attributes(struct hinic3_hwdev *hwdev)
+{
+ u64 drv_features[COMM_MAX_FEATURE_QWORD] = {HINIC3_DRV_FEATURE_QW0, 0, 0, 0};
+ int err, i;
+
+ if (hinic3_func_type(hwdev) == TYPE_PPF)
+ drv_features[0] |= COMM_F_CHANNEL_DETECT;
+
+ err = hinic3_get_board_info(hwdev, &hwdev->board_info,
+ HINIC3_CHANNEL_COMM);
+ if (err != 0)
+ return err;
+
+ err = hinic3_get_comm_features(hwdev, hwdev->features,
+ COMM_MAX_FEATURE_QWORD);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Get comm features failed\n");
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Comm hw features: 0x%llx, drv features: 0x%llx\n",
+ hwdev->features[0], drv_features[0]);
+
+ for (i = 0; i < COMM_MAX_FEATURE_QWORD; i++)
+ hwdev->features[i] &= drv_features[i];
+
+ err = hinic3_get_global_attr(hwdev, &hwdev->glb_attr);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to get global attribute\n");
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl,
+ "global attribute: max_host: 0x%x, max_pf: 0x%x, vf_id_start: 0x%x, mgmt node id: 0x%x, cmdq_num: 0x%x\n",
+ hwdev->glb_attr.max_host_num, hwdev->glb_attr.max_pf_num,
+ hwdev->glb_attr.vf_id_start,
+ hwdev->glb_attr.mgmt_host_node_id,
+ hwdev->glb_attr.cmdq_num);
+
+ return 0;
+}
+
+static int init_basic_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_comm_aeqs_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init async event queues\n");
+ return err;
+ }
+
+ err = hinic3_comm_func_to_func_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init mailbox\n");
+ goto func_to_func_init_err;
+ }
+
+ err = init_aeqs_msix_attr(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeqs msix attr\n");
+ goto aeqs_msix_attr_init_err;
+ }
+
+ return 0;
+
+aeqs_msix_attr_init_err:
+ hinic3_comm_func_to_func_free(hwdev);
+
+func_to_func_init_err:
+ hinic3_comm_aeqs_free(hwdev);
+
+ return err;
+}
+
+static void free_base_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ hinic3_comm_func_to_func_free(hwdev);
+ hinic3_comm_aeqs_free(hwdev);
+}
+
+static int init_pf_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_comm_clp_to_mgmt_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init clp\n");
+ return err;
+ }
+
+ err = hinic3_comm_pf_to_mgmt_init(hwdev);
+ if (err != 0) {
+ hinic3_comm_clp_to_mgmt_free(hwdev);
+ sdk_err(hwdev->dev_hdl, "Failed to init pf to mgmt\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static void free_pf_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ hinic3_comm_clp_to_mgmt_free(hwdev);
+ hinic3_comm_pf_to_mgmt_free(hwdev);
+}
+
+static int init_mgmt_channel_post(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ /* mbox host channel resources will be freed in
+ * hinic3_func_to_func_free
+ */
+ if (HINIC3_IS_PPF(hwdev)) {
+ err = hinic3_mbox_init_host_msg_channel(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init mbox host channel\n");
+ return err;
+ }
+ }
+
+ err = init_pf_mgmt_channel(hwdev);
+ if (err != 0)
+ return err;
+
+ return 0;
+}
+
+static void free_mgmt_msg_channel_post(struct hinic3_hwdev *hwdev)
+{
+ free_pf_mgmt_channel(hwdev);
+}
+
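+/**
+ * init_cmdqs_channel - bring up the cmdq channel
+ * @hwdev: the pointer to hw device
+ * Initialize the dma attribute table, CEQs and their msix attributes,
+ * configure the wq page size and finally create the cmd queues.
+ **/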
+static int init_cmdqs_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = dma_attr_table_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init dma attr table\n");
+ goto dma_attr_init_err;
+ }
+
+ err = hinic3_comm_ceqs_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init completion event queues\n");
+ goto ceqs_init_err;
+ }
+
+ err = init_ceqs_msix_attr(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init ceqs msix attr\n");
+ goto init_ceq_msix_err;
+ }
+
+ /* set default wq page_size */
+ if (wq_page_order > HINIC3_MAX_WQ_PAGE_SIZE_ORDER) {
+ sdk_info(hwdev->dev_hdl, "wq_page_order exceed limit[0, %d], reset to %d\n",
+ HINIC3_MAX_WQ_PAGE_SIZE_ORDER,
+ HINIC3_MAX_WQ_PAGE_SIZE_ORDER);
+ wq_page_order = HINIC3_MAX_WQ_PAGE_SIZE_ORDER;
+ }
+ hwdev->wq_page_size = HINIC3_HW_WQ_PAGE_SIZE * (1U << wq_page_order);
+ sdk_info(hwdev->dev_hdl, "WQ page size: 0x%x\n", hwdev->wq_page_size);
+ err = hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ hwdev->wq_page_size, HINIC3_CHANNEL_COMM);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to set wq page size\n");
+ goto init_wq_pg_size_err;
+ }
+
+ err = hinic3_comm_cmdqs_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmd queues\n");
+ goto cmdq_init_err;
+ }
+
+ return 0;
+
+cmdq_init_err:
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_HW_WQ_PAGE_SIZE,
+ HINIC3_CHANNEL_COMM);
+init_wq_pg_size_err:
+init_ceq_msix_err:
+ hinic3_comm_ceqs_free(hwdev);
+
+ceqs_init_err:
+dma_attr_init_err:
+
+ return err;
+}
+
+static void hinic3_free_cmdqs_channel(struct hinic3_hwdev *hwdev)
+{
+ hinic3_comm_cmdqs_free(hwdev);
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_HW_WQ_PAGE_SIZE, HINIC3_CHANNEL_COMM);
+
+ hinic3_comm_ceqs_free(hwdev);
+}
+
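+/**
+ * hinic3_init_comm_ch - initialize the communication channels
+ * @hwdev: the pointer to hw device
+ * Bring up the mailbox/aeq based management channel, reset the function,
+ * negotiate attributes and create the cmdq channel.
+ **/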
+static int hinic3_init_comm_ch(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = init_basic_mgmt_channel(hwdev);
+ if (err != 0)
+ return err;
+
+ err = hinic3_func_reset(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_COMM_RES, HINIC3_CHANNEL_COMM);
+ if (err != 0)
+ goto func_reset_err;
+
+ err = init_basic_attributes(hwdev);
+ if (err != 0)
+ goto init_basic_attr_err;
+
+ err = init_mgmt_channel_post(hwdev);
+ if (err != 0)
+ goto init_mgmt_channel_post_err;
+
+ err = hinic3_set_func_svc_used_state(hwdev, SVC_T_COMM, 1, HINIC3_CHANNEL_COMM);
+ if (err != 0)
+ goto set_used_state_err;
+
+ err = init_cmdqs_channel(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmdq channel\n");
+ goto init_cmdqs_channel_err;
+ }
+
+ hinic3_sync_mgmt_func_state(hwdev);
+
+ if (HISDK3_F_CHANNEL_LOCK_EN(hwdev)) {
+ hinic3_mbox_enable_channel_lock(hwdev, true);
+ hinic3_cmdq_enable_channel_lock(hwdev, true);
+ }
+
+ err = hinic3_aeq_register_swe_cb(hwdev, hwdev, HINIC3_STATELESS_EVENT,
+ hinic3_nic_sw_aeqe_handler);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to register sw aeqe handler\n");
+ goto register_ucode_aeqe_err;
+ }
+
+ return 0;
+
+register_ucode_aeqe_err:
+ hinic3_unsync_mgmt_func_state(hwdev);
+ hinic3_free_cmdqs_channel(hwdev);
+init_cmdqs_channel_err:
+ hinic3_set_func_svc_used_state(hwdev, SVC_T_COMM, 0, HINIC3_CHANNEL_COMM);
+set_used_state_err:
+ free_mgmt_msg_channel_post(hwdev);
+init_mgmt_channel_post_err:
+init_basic_attr_err:
+func_reset_err:
+ free_base_mgmt_channel(hwdev);
+
+ return err;
+}
+
+static void hinic3_uninit_comm_ch(struct hinic3_hwdev *hwdev)
+{
+ hinic3_aeq_unregister_swe_cb(hwdev, HINIC3_STATELESS_EVENT);
+
+ hinic3_unsync_mgmt_func_state(hwdev);
+
+ hinic3_free_cmdqs_channel(hwdev);
+
+ hinic3_set_func_svc_used_state(hwdev, SVC_T_COMM, 0, HINIC3_CHANNEL_COMM);
+
+ free_mgmt_msg_channel_post(hwdev);
+
+ free_base_mgmt_channel(hwdev);
+}
+
+static void hinic3_auto_sync_time_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_hwdev *hwdev = container_of(delay, struct hinic3_hwdev, sync_time_task);
+ int err;
+
+ err = hinic3_sync_time(hwdev, ossl_get_real_time());
+ if (err != 0)
+ sdk_err(hwdev->dev_hdl, "Synchronize UTC time to firmware failed, errno:%d.\n",
+ err);
+
+ queue_delayed_work(hwdev->workq, &hwdev->sync_time_task,
+ msecs_to_jiffies(HINIC3_SYNFW_TIME_PERIOD));
+}
+
+static void hinic3_auto_channel_detect_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_hwdev *hwdev = container_of(delay, struct hinic3_hwdev, channel_detect_task);
+ struct card_node *chip_node = NULL;
+
+ hinic3_comm_channel_detect(hwdev);
+
+ chip_node = hwdev->chip_node;
+ if (!atomic_read(&chip_node->channel_busy_cnt))
+ queue_delayed_work(hwdev->workq, &hwdev->channel_detect_task,
+ msecs_to_jiffies(HINIC3_CHANNEL_DETECT_PERIOD));
+}
+
+static int hinic3_init_ppf_work(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return 0;
+
+ INIT_DELAYED_WORK(&hwdev->sync_time_task, hinic3_auto_sync_time_work);
+ queue_delayed_work(hwdev->workq, &hwdev->sync_time_task,
+ msecs_to_jiffies(HINIC3_SYNFW_TIME_PERIOD));
+
+ if (COMM_SUPPORT_CHANNEL_DETECT(hwdev)) {
+ INIT_DELAYED_WORK(&hwdev->channel_detect_task,
+ hinic3_auto_channel_detect_work);
+ queue_delayed_work(hwdev->workq, &hwdev->channel_detect_task,
+ msecs_to_jiffies(HINIC3_CHANNEL_DETECT_PERIOD));
+ }
+
+ return 0;
+}
+
+static void hinic3_free_ppf_work(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return;
+
+ if (COMM_SUPPORT_CHANNEL_DETECT(hwdev)) {
+ hwdev->features[0] &= ~(COMM_F_CHANNEL_DETECT);
+ cancel_delayed_work_sync(&hwdev->channel_detect_task);
+ }
+
+ cancel_delayed_work_sync(&hwdev->sync_time_task);
+}
+
+static int init_hwdev(struct hinic3_init_para *para)
+{
+ struct hinic3_hwdev *hwdev;
+
+ hwdev = kzalloc(sizeof(*hwdev), GFP_KERNEL);
+ if (!hwdev)
+ return -ENOMEM;
+
+ *para->hwdev = hwdev;
+ hwdev->adapter_hdl = para->adapter_hdl;
+ hwdev->pcidev_hdl = para->pcidev_hdl;
+ hwdev->dev_hdl = para->dev_hdl;
+ hwdev->chip_node = para->chip_node;
+ hwdev->poll = para->poll;
+ hwdev->probe_fault_level = para->probe_fault_level;
+ hwdev->func_state = 0;
+ sema_init(&hwdev->ppf_sem, 1);
+
+ hwdev->chip_fault_stats = vzalloc(HINIC3_CHIP_FAULT_SIZE);
+ if (!hwdev->chip_fault_stats)
+ goto alloc_chip_fault_stats_err;
+
+ hwdev->stateful_ref_cnt = 0;
+ memset(hwdev->features, 0, sizeof(hwdev->features));
+
+ spin_lock_init(&hwdev->channel_lock);
+ mutex_init(&hwdev->stateful_mutex);
+
+ return 0;
+
+alloc_chip_fault_stats_err:
+ sema_deinit(&hwdev->ppf_sem);
+ para->probe_fault_level = hwdev->probe_fault_level;
+ kfree(hwdev);
+ *para->hwdev = NULL;
+ return -EFAULT;
+}
+
+int hinic3_init_hwdev(struct hinic3_init_para *para)
+{
+ struct hinic3_hwdev *hwdev = NULL;
+ int err;
+
+ err = init_hwdev(para);
+ if (err != 0)
+ return err;
+
+ hwdev = *para->hwdev;
+ err = hinic3_init_hwif(hwdev, para->cfg_reg_base, para->intr_reg_base, para->mgmt_reg_base,
+ para->db_base_phy, para->db_base, para->db_dwqe_len);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init hwif\n");
+ goto init_hwif_err;
+ }
+
+ hinic3_set_chip_present(hwdev);
+
+ hisdk3_init_profile_adapter(hwdev);
+
+ hwdev->workq = alloc_workqueue(HINIC3_HWDEV_WQ_NAME, WQ_MEM_RECLAIM, HINIC3_WQ_MAX_REQ);
+ if (!hwdev->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc hardware workq\n");
+ goto alloc_workq_err;
+ }
+
+ hinic3_init_heartbeat_detect(hwdev);
+
+ err = init_cfg_mgmt(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init config mgmt\n");
+ goto init_cfg_mgmt_err;
+ }
+
+ err = hinic3_init_comm_ch(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init communication channel\n");
+ goto init_comm_ch_err;
+ }
+
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+ err = hinic3_init_devlink(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init devlink\n");
+ goto init_devlink_err;
+ }
+#endif
+
+ err = init_capability(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init capability\n");
+ goto init_cap_err;
+ }
+
+ hinic3_init_host_mode_pre(hwdev);
+
+ hinic3_init_hot_plug_status(hwdev);
+
+ hinic3_init_os_hot_replace(hwdev);
+
+ err = hinic3_multi_host_mgmt_init(hwdev);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to init function mode\n");
+ goto init_multi_host_fail;
+ }
+
+ /* When hot_replace_mode is enabled, run the PPF work only if
+  * partition_id is 0; otherwise run the PPF work directly.
+  */
+ if (hwdev->hot_replace_mode == HOT_REPLACE_ENABLE) {
+ if (get_partition_id() == 0) {
+ err = hinic3_init_ppf_work(hwdev);
+ if (err != 0) {
+ goto init_ppf_work_fail;
+ }
+ }
+ } else {
+ err = hinic3_init_ppf_work(hwdev);
+ if (err != 0)
+ goto init_ppf_work_fail;
+ }
+
+ err = hinic3_set_comm_features(hwdev, hwdev->features, COMM_MAX_FEATURE_QWORD);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to set comm features\n");
+ goto set_feature_err;
+ }
+
+ return 0;
+
+set_feature_err:
+ hinic3_free_ppf_work(hwdev);
+
+init_ppf_work_fail:
+ hinic3_multi_host_mgmt_free(hwdev);
+
+init_multi_host_fail:
+ free_capability(hwdev);
+
+init_cap_err:
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+ hinic3_uninit_devlink(hwdev);
+
+init_devlink_err:
+#endif
+ hinic3_uninit_comm_ch(hwdev);
+
+init_comm_ch_err:
+ free_cfg_mgmt(hwdev);
+
+init_cfg_mgmt_err:
+ hinic3_destroy_heartbeat_detect(hwdev);
+ destroy_workqueue(hwdev->workq);
+
+alloc_workq_err:
+ hisdk3_deinit_profile_adapter(hwdev);
+
+ hinic3_free_hwif(hwdev);
+
+init_hwif_err:
+ spin_lock_deinit(&hwdev->channel_lock);
+ vfree(hwdev->chip_fault_stats);
+ para->probe_fault_level = hwdev->probe_fault_level;
+ kfree(hwdev);
+ *para->hwdev = NULL;
+
+ return -EFAULT;
+}
+
+void hinic3_free_hwdev(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u64 drv_features[COMM_MAX_FEATURE_QWORD];
+
+ memset(drv_features, 0, sizeof(drv_features));
+ hinic3_set_comm_features(hwdev, drv_features, COMM_MAX_FEATURE_QWORD);
+
+ hinic3_free_ppf_work(dev);
+
+ hinic3_multi_host_mgmt_free(dev);
+
+ hinic3_func_rx_tx_flush(hwdev, HINIC3_CHANNEL_COMM, true);
+
+ free_capability(dev);
+
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+ hinic3_uninit_devlink(dev);
+#endif
+
+ hinic3_uninit_comm_ch(dev);
+
+ free_cfg_mgmt(dev);
+ hinic3_destroy_heartbeat_detect(hwdev);
+ destroy_workqueue(dev->workq);
+
+ hisdk3_deinit_profile_adapter(hwdev);
+ hinic3_free_hwif(dev);
+
+ spin_lock_deinit(&dev->channel_lock);
+ vfree(dev->chip_fault_stats);
+
+ kfree(dev);
+}
+
+void *hinic3_get_pcidev_hdl(void *hwdev)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev)
+ return NULL;
+
+ return dev->pcidev_hdl;
+}
+
+int hinic3_register_service_adapter(void *hwdev, void *service_adapter,
+ enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || !service_adapter || type >= SERVICE_T_MAX)
+ return -EINVAL;
+
+ if (dev->service_adapter[type])
+ return -EINVAL;
+
+ dev->service_adapter[type] = service_adapter;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_service_adapter);
+
+void hinic3_unregister_service_adapter(void *hwdev,
+ enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || type >= SERVICE_T_MAX)
+ return;
+
+ dev->service_adapter[type] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_service_adapter);
+
+void *hinic3_get_service_adapter(void *hwdev, enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || type >= SERVICE_T_MAX)
+ return NULL;
+
+ return dev->service_adapter[type];
+}
+EXPORT_SYMBOL(hinic3_get_service_adapter);
+
+int hinic3_dbg_get_hw_stats(const void *hwdev, u8 *hw_stats, const u32 *out_size)
+{
+ struct hinic3_hw_stats *tmp_hw_stats = (struct hinic3_hw_stats *)hw_stats;
+ struct card_node *chip_node = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (*out_size != sizeof(struct hinic3_hw_stats) || !hw_stats) {
+ pr_err("Unexpect out buf size from user :%u, expect: %lu\n",
+ *out_size, sizeof(struct hinic3_hw_stats));
+ return -EFAULT;
+ }
+
+ memcpy(hw_stats,
+ &((struct hinic3_hwdev *)hwdev)->hw_stats, sizeof(struct hinic3_hw_stats));
+
+ chip_node = ((struct hinic3_hwdev *)hwdev)->chip_node;
+
+ atomic_set(&tmp_hw_stats->nic_ucode_event_stats[HINIC3_CHANNEL_BUSY],
+ atomic_read(&chip_node->channel_busy_cnt));
+
+ return 0;
+}
+
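+/**
+ * hinic3_dbg_clear_hw_stats - clear the hw statistics
+ * @hwdev: the pointer to hw device
+ * Also restart the channel detect work if it was stopped because the
+ * channel had been marked busy.
+ * Return: size of the cleared statistics structure
+ **/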
+u16 hinic3_dbg_clear_hw_stats(void *hwdev)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_hwdev *dev = hwdev;
+
+ memset((void *)&dev->hw_stats, 0, sizeof(struct hinic3_hw_stats));
+ memset((void *)dev->chip_fault_stats, 0, HINIC3_CHIP_FAULT_SIZE);
+
+ chip_node = dev->chip_node;
+ if (COMM_SUPPORT_CHANNEL_DETECT(dev) && atomic_read(&chip_node->channel_busy_cnt)) {
+ atomic_set(&chip_node->channel_busy_cnt, 0);
+ dev->aeq_busy_cnt = 0;
+ queue_delayed_work(dev->workq, &dev->channel_detect_task,
+ msecs_to_jiffies(HINIC3_CHANNEL_DETECT_PERIOD));
+ }
+
+ return sizeof(struct hinic3_hw_stats);
+}
+
+void hinic3_get_chip_fault_stats(const void *hwdev, u8 *chip_fault_stats,
+ u32 offset)
+{
+ if (offset >= HINIC3_CHIP_FAULT_SIZE) {
+ pr_err("Invalid chip offset value: %d\n", offset);
+ return;
+ }
+
+ if (offset + MAX_DRV_BUF_SIZE <= HINIC3_CHIP_FAULT_SIZE)
+ memcpy(chip_fault_stats,
+ ((struct hinic3_hwdev *)hwdev)->chip_fault_stats
+ + offset, MAX_DRV_BUF_SIZE);
+ else
+ memcpy(chip_fault_stats,
+ ((struct hinic3_hwdev *)hwdev)->chip_fault_stats
+ + offset, HINIC3_CHIP_FAULT_SIZE - offset);
+}
+
+void hinic3_event_register(void *dev, void *pri_handle,
+ hinic3_event_handler callback)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for register event\n");
+ return;
+ }
+
+ hwdev->event_callback = callback;
+ hwdev->event_pri_handle = pri_handle;
+}
+
+void hinic3_event_unregister(void *dev)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for register event\n");
+ return;
+ }
+
+ hwdev->event_callback = NULL;
+ hwdev->event_pri_handle = NULL;
+}
+
+void hinic3_event_callback(void *hwdev, struct hinic3_event_info *event)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("Hwdev pointer is NULL for event callback\n");
+ return;
+ }
+
+ if (!dev->event_callback) {
+ sdk_info(dev->dev_hdl, "Event callback function not register\n");
+ return;
+ }
+
+ dev->event_callback(dev->event_pri_handle, event);
+}
+EXPORT_SYMBOL(hinic3_event_callback);
+
+void hinic3_set_pcie_order_cfg(void *handle)
+{
+}
+
+void hinic3_disable_mgmt_msg_report(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = (struct hinic3_hwdev *)hwdev;
+
+ hinic3_set_pf_status(hw_dev->hwif, HINIC3_PF_STATUS_INIT);
+}
+
+void hinic3_record_pcie_error(void *hwdev)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev)
+ return;
+
+ atomic_inc(&dev->hw_stats.fault_event_stats.pcie_fault_stats);
+}
+
+bool hinic3_need_init_stateful_default(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u16 chip_svc_type = dev->cfg_mgmt->svc_cap.svc_type;
+
+ /* Current virtio net have to init cqm in PPF. */
+ if (hinic3_func_type(hwdev) == TYPE_PPF && (chip_svc_type & CFG_SERVICE_MASK_VIRTIO) != 0)
+ return true;
+
+ /* vroce have to init cqm */
+ if (IS_MASTER_HOST(dev) &&
+ (hinic3_func_type(hwdev) != TYPE_PPF) &&
+ ((chip_svc_type & CFG_SERVICE_MASK_ROCE) != 0))
+ return true;
+
+ /* In SDI5.1 VM mode (nano OS), PF0 acts as PPF and must do stateful
+  * init, otherwise the mailbox will fail.
+  */
+ if (hinic3_func_type(hwdev) == TYPE_PPF && hinic3_is_vm_slave_host(hwdev))
+ return true;
+
+ /* Other service type will init cqm when uld call. */
+ return false;
+}
+
+static inline void stateful_uninit(struct hinic3_hwdev *hwdev)
+{
+ u32 stateful_en;
+
+ cqm_uninit(hwdev);
+
+ stateful_en = IS_FT_TYPE(hwdev) | IS_RDMA_TYPE(hwdev);
+ if (stateful_en)
+ hinic3_ppf_ext_db_deinit(hwdev);
+}
+
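+/**
+ * hinic3_stateful_init - reference-counted init of stateful resources
+ * @hwdev: the pointer to hw device
+ * Initialize the PPF extended doorbell (for FT/RDMA types) and cqm on the
+ * first caller; later callers only increase the reference count.
+ **/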
+int hinic3_stateful_init(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int stateful_en;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ if (!hinic3_get_stateful_enable(dev))
+ return 0;
+
+ mutex_lock(&dev->stateful_mutex);
+ if (dev->stateful_ref_cnt++) {
+ mutex_unlock(&dev->stateful_mutex);
+ return 0;
+ }
+
+ stateful_en = (int)(IS_FT_TYPE(dev) | IS_RDMA_TYPE(dev));
+ if (stateful_en != 0 && HINIC3_IS_PPF(dev)) {
+ err = hinic3_ppf_ext_db_init(dev);
+ if (err != 0)
+ goto out;
+ }
+
+ err = cqm_init(dev);
+ if (err != 0) {
+ sdk_err(dev->dev_hdl, "Failed to init cqm, err: %d\n", err);
+ goto init_cqm_err;
+ }
+
+ mutex_unlock(&dev->stateful_mutex);
+ sdk_info(dev->dev_hdl, "Initialize stateful resource success\n");
+
+ return 0;
+
+init_cqm_err:
+ if (stateful_en != 0)
+ hinic3_ppf_ext_db_deinit(dev);
+
+out:
+ dev->stateful_ref_cnt--;
+ mutex_unlock(&dev->stateful_mutex);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_stateful_init);
+
+void hinic3_stateful_deinit(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev || !hinic3_get_stateful_enable(dev))
+ return;
+
+ mutex_lock(&dev->stateful_mutex);
+ if (!dev->stateful_ref_cnt || --dev->stateful_ref_cnt) {
+ mutex_unlock(&dev->stateful_mutex);
+ return;
+ }
+
+ stateful_uninit(hwdev);
+ mutex_unlock(&dev->stateful_mutex);
+
+ sdk_info(dev->dev_hdl, "Clear stateful resource success\n");
+}
+EXPORT_SYMBOL(hinic3_stateful_deinit);
+
+void hinic3_free_stateful(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev || !hinic3_get_stateful_enable(dev) || !dev->stateful_ref_cnt)
+ return;
+
+ if (!hinic3_need_init_stateful_default(hwdev) || dev->stateful_ref_cnt > 1)
+ sdk_info(dev->dev_hdl, "Current stateful resource ref is incorrect, ref_cnt:%u\n",
+ dev->stateful_ref_cnt);
+
+ stateful_uninit(hwdev);
+
+ sdk_info(dev->dev_hdl, "Clear stateful resource success\n");
+}
+
+int hinic3_get_card_present_state(void *hwdev, bool *card_present_state)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || !card_present_state)
+ return -EINVAL;
+
+ *card_present_state = get_card_present_state(dev);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_card_present_state);
+
+void hinic3_link_event_stats(void *dev, u8 link)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (link)
+ atomic_inc(&hwdev->hw_stats.link_event_stats.link_up_stats);
+ else
+ atomic_inc(&hwdev->hw_stats.link_event_stats.link_down_stats);
+}
+EXPORT_SYMBOL(hinic3_link_event_stats);
+
+int hinic3_get_link_event_stats(void *dev, int *link_state)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!hwdev || !link_state)
+ return -EINVAL;
+
+ *link_state = hwdev->hw_stats.link_event_stats.link_down_stats.counter;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_link_event_stats);
+
+u8 hinic3_max_pf_num(void *hwdev)
+{
+ if (!hwdev)
+ return 0;
+
+ return HINIC3_MAX_PF_NUM((struct hinic3_hwdev *)hwdev);
+}
+EXPORT_SYMBOL(hinic3_max_pf_num);
+
+void hinic3_fault_event_report(void *hwdev, u16 src, u16 level)
+{
+ if (!hwdev)
+ return;
+
+ sdk_info(((struct hinic3_hwdev *)hwdev)->dev_hdl, "Fault event report, src: %u, level: %u\n",
+ src, level);
+
+ hisdk3_fault_post_process(hwdev, src, level);
+}
+EXPORT_SYMBOL(hinic3_fault_event_report);
+
+int hinic3_is_slave_func(const void *hwdev, bool *is_slave_func)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ *is_slave_func = IS_SLAVE_HOST((struct hinic3_hwdev *)hwdev);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_is_slave_func);
+
+int hinic3_is_master_func(const void *hwdev, bool *is_master_func)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ *is_master_func = IS_MASTER_HOST((struct hinic3_hwdev *)hwdev);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_is_master_func);
+
+void hinic3_probe_success(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ hisdk3_probe_success(hwdev);
+}
+
+#define HINIC3_CHANNEL_BUSY_TIMEOUT 25
+
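+/**
+ * hinic3_update_channel_status - detect a stuck aeq channel on the PPF
+ * @hwdev: the pointer to hw device
+ * If no new aeq events are received for more than
+ * HINIC3_CHANNEL_BUSY_TIMEOUT heartbeat periods, mark the channel busy.
+ **/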
+static void hinic3_update_channel_status(struct hinic3_hwdev *hwdev)
+{
+ struct card_node *chip_node = hwdev->chip_node;
+
+ if (!chip_node)
+ return;
+
+ if (hinic3_func_type(hwdev) != TYPE_PPF || !COMM_SUPPORT_CHANNEL_DETECT(hwdev) ||
+ atomic_read(&chip_node->channel_busy_cnt))
+ return;
+
+ if (test_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state)) {
+ if (hwdev->last_recv_aeq_cnt != hwdev->cur_recv_aeq_cnt) {
+ hwdev->aeq_busy_cnt = 0;
+ hwdev->last_recv_aeq_cnt = hwdev->cur_recv_aeq_cnt;
+ } else {
+ hwdev->aeq_busy_cnt++;
+ }
+
+ if (hwdev->aeq_busy_cnt > HINIC3_CHANNEL_BUSY_TIMEOUT) {
+ atomic_inc(&chip_node->channel_busy_cnt);
+ sdk_err(hwdev->dev_hdl, "Detect channel busy\n");
+ }
+ }
+}
+
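+/**
+ * hinic3_heartbeat_lost_handler - report heartbeat loss or pcie link down
+ * @work: the heartbeat_lost_work embedded in struct hinic3_hwdev
+ * Notify the registered event callback and post the fault for further
+ * processing.
+ **/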
+static void hinic3_heartbeat_lost_handler(struct work_struct *work)
+{
+ struct hinic3_event_info event_info = { 0 };
+ struct hinic3_hwdev *hwdev = container_of(work, struct hinic3_hwdev,
+ heartbeat_lost_work);
+ u16 src, level;
+
+ atomic_inc(&hwdev->hw_stats.heart_lost_stats);
+
+ if (hwdev->event_callback) {
+ event_info.service = EVENT_SRV_COMM;
+ event_info.type =
+ hwdev->pcie_link_down ? EVENT_COMM_PCIE_LINK_DOWN :
+ EVENT_COMM_HEART_LOST;
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+ }
+
+ if (hwdev->pcie_link_down) {
+ src = HINIC3_FAULT_SRC_PCIE_LINK_DOWN;
+ level = FAULT_LEVEL_HOST;
+ sdk_err(hwdev->dev_hdl, "Detect pcie is link down\n");
+ } else {
+ src = HINIC3_FAULT_SRC_HOST_HEARTBEAT_LOST;
+ level = FAULT_LEVEL_FATAL;
+ sdk_err(hwdev->dev_hdl, "Heart lost report received, func_id: %d\n",
+ hinic3_global_func_id(hwdev));
+ }
+
+ hinic3_show_chip_err_info(hwdev);
+
+ hisdk3_fault_post_process(hwdev, src, level);
+}
+
+#define DETECT_PCIE_LINK_DOWN_RETRY 2
+#define HINIC3_HEARTBEAT_START_EXPIRE 5000
+#define HINIC3_HEARTBEAT_PERIOD 1000
+
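+/**
+ * hinic3_is_hw_abnormal - check the heartbeat status of the chip
+ * @hwdev: the pointer to hw device
+ * Return: true if the heartbeat is lost or the PCIe link is considered
+ * down after DETECT_PCIE_LINK_DOWN_RETRY failed BAR reads.
+ **/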
+static bool hinic3_is_hw_abnormal(struct hinic3_hwdev *hwdev)
+{
+ u32 status;
+
+ if (!hinic3_get_chip_present_flag(hwdev))
+ return false;
+
+ status = hinic3_get_heartbeat_status(hwdev);
+ if (status == HINIC3_PCIE_LINK_DOWN) {
+ sdk_warn(hwdev->dev_hdl, "Detect BAR register read failed\n");
+ hwdev->rd_bar_err_cnt++;
+ if (hwdev->rd_bar_err_cnt >= DETECT_PCIE_LINK_DOWN_RETRY) {
+ hinic3_set_chip_absent(hwdev);
+ hinic3_force_complete_all(hwdev);
+ hwdev->pcie_link_down = true;
+ return true;
+ }
+
+ return false;
+ }
+
+ if (status) {
+ hwdev->heartbeat_lost = true;
+ return true;
+ }
+
+ hwdev->rd_bar_err_cnt = 0;
+
+ return false;
+}
+
+#ifdef HAVE_TIMER_SETUP
+static void hinic3_heartbeat_timer_handler(struct timer_list *t)
+#else
+static void hinic3_heartbeat_timer_handler(unsigned long data)
+#endif
+{
+#ifdef HAVE_TIMER_SETUP
+ struct hinic3_hwdev *hwdev = from_timer(hwdev, t, heartbeat_timer);
+#else
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)data;
+#endif
+
+ if (hinic3_is_hw_abnormal(hwdev)) {
+ stop_timer(&hwdev->heartbeat_timer);
+ queue_work(hwdev->workq, &hwdev->heartbeat_lost_work);
+ } else {
+ mod_timer(&hwdev->heartbeat_timer,
+ jiffies + msecs_to_jiffies(HINIC3_HEARTBEAT_PERIOD));
+ }
+
+ hinic3_update_channel_status(hwdev);
+}
+
+static void hinic3_init_heartbeat_detect(struct hinic3_hwdev *hwdev)
+{
+#ifdef HAVE_TIMER_SETUP
+ timer_setup(&hwdev->heartbeat_timer, hinic3_heartbeat_timer_handler, 0);
+#else
+ initialize_timer(hwdev->adapter_hdl, &hwdev->heartbeat_timer);
+ hwdev->heartbeat_timer.data = (u64)hwdev;
+ hwdev->heartbeat_timer.function = hinic3_heartbeat_timer_handler;
+#endif
+
+ hwdev->heartbeat_timer.expires =
+ jiffies + msecs_to_jiffies(HINIC3_HEARTBEAT_START_EXPIRE);
+
+ add_to_timer(&hwdev->heartbeat_timer, HINIC3_HEARTBEAT_PERIOD);
+
+ INIT_WORK(&hwdev->heartbeat_lost_work, hinic3_heartbeat_lost_handler);
+}
+
+static void hinic3_destroy_heartbeat_detect(struct hinic3_hwdev *hwdev)
+{
+ destroy_work(&hwdev->heartbeat_lost_work);
+ stop_timer(&hwdev->heartbeat_timer);
+ delete_timer(&hwdev->heartbeat_timer);
+}
+
+void hinic3_set_api_stop(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return;
+
+ dev->chip_present_flag = HINIC3_CHIP_ABSENT;
+ sdk_info(dev->dev_hdl, "Set card absent\n");
+ hinic3_force_complete_all(dev);
+ sdk_info(dev->dev_hdl, "All messages interacting with the chip will stop\n");
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h
new file mode 100644
index 0000000..7c2cfc2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h
@@ -0,0 +1,234 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HWDEV_H
+#define HINIC3_HWDEV_H
+
+#include <linux/workqueue.h>
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "mpu_inband_cmd_defs.h"
+#include "hinic3_profile.h"
+#include "vram_common.h"
+
+struct cfg_mgmt_info;
+
+struct hinic3_hwif;
+struct hinic3_aeqs;
+struct hinic3_ceqs;
+struct hinic3_mbox;
+struct hinic3_msg_pf_to_mgmt;
+struct hinic3_hwdev;
+
+#define HINIC3_CHANNEL_DETECT_PERIOD (5 * 1000)
+
+struct hinic3_page_addr {
+ void *virt_addr;
+ u64 phys_addr;
+};
+
+struct mqm_addr_trans_tbl_info {
+ u32 chunk_num;
+ u32 search_gpa_num;
+ u32 page_size;
+ u32 page_num;
+ struct hinic3_dma_addr_align *brm_srch_page_addr;
+};
+
+struct hinic3_devlink {
+ struct hinic3_hwdev *hwdev;
+ u8 activate_fw; /* 0 ~ 7 */
+ u8 switch_cfg; /* 0 ~ 7 */
+};
+
+enum hinic3_func_mode {
+ /* single host */
+ FUNC_MOD_NORMAL_HOST,
+ /* multi host, bare-metal, sdi side */
+ FUNC_MOD_MULTI_BM_MASTER,
+ /* multi host, bare-metal, host side */
+ FUNC_MOD_MULTI_BM_SLAVE,
+ /* multi host, vm mode, sdi side */
+ FUNC_MOD_MULTI_VM_MASTER,
+ /* multi host, vm mode, host side */
+ FUNC_MOD_MULTI_VM_SLAVE,
+};
+
+#define IS_BMGW_MASTER_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_BM_MASTER)
+#define IS_BMGW_SLAVE_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_BM_SLAVE)
+#define IS_VM_MASTER_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_VM_MASTER)
+#define IS_VM_SLAVE_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_VM_SLAVE)
+
+#define IS_MASTER_HOST(hwdev) \
+ (IS_BMGW_MASTER_HOST(hwdev) || IS_VM_MASTER_HOST(hwdev))
+
+#define IS_SLAVE_HOST(hwdev) \
+ (IS_BMGW_SLAVE_HOST(hwdev) || IS_VM_SLAVE_HOST(hwdev))
+
+#define IS_MULTI_HOST(hwdev) \
+ (IS_BMGW_MASTER_HOST(hwdev) || IS_BMGW_SLAVE_HOST(hwdev) || \
+ IS_VM_MASTER_HOST(hwdev) || IS_VM_SLAVE_HOST(hwdev))
+
+#define NEED_MBOX_FORWARD(hwdev) IS_BMGW_SLAVE_HOST(hwdev)
+
+enum hinic3_host_mode_e {
+ HINIC3_MODE_NORMAL = 0,
+ HINIC3_SDI_MODE_VM,
+ HINIC3_SDI_MODE_BM,
+ HINIC3_SDI_MODE_MAX,
+};
+
+enum hinic3_hot_plug_mode {
+ HOT_PLUG_ENABLE,
+ HOT_PLUG_DISABLE,
+};
+
+enum hinic3_os_hot_replace_mode {
+ HOT_REPLACE_DISABLE,
+ HOT_REPLACE_ENABLE,
+};
+
+#define UNSUPPORT_HOT_PLUG(hwdev) \
+ ((hwdev)->hot_plug_mode == HOT_PLUG_DISABLE)
+
+#define SUPPORT_HOT_PLUG(hwdev) \
+ ((hwdev)->hot_plug_mode == HOT_PLUG_ENABLE)
+
+#define MULTI_HOST_CHIP_MODE_SHIFT 0
+#define MULTI_HOST_MASTER_MBX_STS_SHIFT 17
+#define MULTI_HOST_PRIV_DATA_SHIFT 0x8
+
+#define MULTI_HOST_CHIP_MODE_MASK 0xF
+#define MULTI_HOST_MASTER_MBX_STS_MASK 0x1
+#define MULTI_HOST_PRIV_DATA_MASK 0xFFFF
+
+#define MULTI_HOST_REG_SET(val, member) \
+ (((val) & MULTI_HOST_##member##_MASK) \
+ << MULTI_HOST_##member##_SHIFT)
+#define MULTI_HOST_REG_GET(val, member) \
+ (((val) >> MULTI_HOST_##member##_SHIFT) \
+ & MULTI_HOST_##member##_MASK)
+#define MULTI_HOST_REG_CLEAR(val, member) \
+ ((val) & (~(MULTI_HOST_##member##_MASK \
+ << MULTI_HOST_##member##_SHIFT)))
+
+struct mqm_eqm_vram_name_s {
+ char vram_name[VRAM_NAME_MAX_LEN];
+};
+
+struct hinic3_hwdev {
+ void *adapter_hdl; /* pointer to hinic3_pcidev or NDIS_Adapter */
+ void *pcidev_hdl; /* pointer to pcidev or Handler */
+ void *dev_hdl; /* pointer to pcidev->dev or Handler, for
+ * sdk_err() or dma_alloc()
+ */
+
+ void *service_adapter[SERVICE_T_MAX];
+ void *chip_node;
+ struct semaphore ppf_sem;
+ void *ppf_hwdev;
+
+ u32 wq_page_size;
+ int chip_present_flag;
+ bool poll; /* use polling mode or int mode */
+ u32 rsvd1;
+
+ struct hinic3_hwif *hwif; /* include void __iomem *bar */
+ struct comm_global_attr glb_attr;
+ u64 features[COMM_MAX_FEATURE_QWORD];
+
+ struct cfg_mgmt_info *cfg_mgmt;
+
+ struct hinic3_cmdqs *cmdqs;
+ struct hinic3_aeqs *aeqs;
+ struct hinic3_ceqs *ceqs;
+ struct hinic3_mbox *func_to_func;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt;
+
+ void *cqm_hdl;
+ struct mqm_addr_trans_tbl_info mqm_att;
+ struct hinic3_page_addr page_pa0;
+ struct hinic3_page_addr page_pa1;
+ u32 stateful_ref_cnt;
+ u32 rsvd2;
+
+ struct hinic3_multi_host_mgmt *mhost_mgmt;
+ char mhost_mgmt_name[VRAM_NAME_MAX_LEN];
+
+ struct mqm_eqm_vram_name_s *mqm_eqm_vram_name;
+
+ struct mutex stateful_mutex; /* protect cqm init and deinit */
+
+ struct hinic3_hw_stats hw_stats;
+ u8 *chip_fault_stats;
+
+ hinic3_event_handler event_callback;
+ void *event_pri_handle;
+
+ struct hinic3_board_info board_info;
+
+ struct delayed_work sync_time_task;
+ struct delayed_work channel_detect_task;
+ struct hisdk3_prof_attr *prof_attr;
+ struct hinic3_prof_adapter *prof_adap;
+
+ struct workqueue_struct *workq;
+
+ u32 rd_bar_err_cnt;
+ bool pcie_link_down;
+ bool heartbeat_lost;
+ struct timer_list heartbeat_timer;
+ struct work_struct heartbeat_lost_work;
+
+ ulong func_state;
+ spinlock_t channel_lock; /* protect channel init and deinit */
+
+ u16 probe_fault_level;
+
+ struct hinic3_devlink *devlink_dev;
+
+ enum hinic3_func_mode func_mode;
+ enum hinic3_hot_plug_mode hot_plug_mode;
+
+ enum hinic3_os_hot_replace_mode hot_replace_mode;
+ u32 rsvd5;
+
+ DECLARE_BITMAP(func_probe_in_host, MAX_FUNCTION_NUM);
+ DECLARE_BITMAP(netdev_setup_state, MAX_FUNCTION_NUM);
+
+ u64 cur_recv_aeq_cnt;
+ u64 last_recv_aeq_cnt;
+ u16 aeq_busy_cnt;
+
+ u64 mbox_send_cnt;
+ u64 mbox_ack_cnt;
+
+ u64 rsvd4[5];
+};
+
+#define HINIC3_DRV_FEATURE_QW0 \
+ (COMM_F_API_CHAIN | COMM_F_CLP | COMM_F_MBOX_SEGMENT | \
+ COMM_F_CMDQ_NUM | COMM_F_VIRTIO_VQ_SIZE)
+
+#define HINIC3_MAX_HOST_NUM(hwdev) ((hwdev)->glb_attr.max_host_num)
+#define HINIC3_MAX_PF_NUM(hwdev) ((hwdev)->glb_attr.max_pf_num)
+#define HINIC3_MGMT_CPU_NODE_ID(hwdev) ((hwdev)->glb_attr.mgmt_host_node_id)
+
+#define COMM_FEATURE_QW0(hwdev, feature) \
+ ((hwdev)->features[0] & COMM_F_##feature)
+#define COMM_SUPPORT_API_CHAIN(hwdev) COMM_FEATURE_QW0(hwdev, API_CHAIN)
+#define COMM_SUPPORT_CLP(hwdev) COMM_FEATURE_QW0(hwdev, CLP)
+#define COMM_SUPPORT_CHANNEL_DETECT(hwdev) COMM_FEATURE_QW0(hwdev, CHANNEL_DETECT)
+#define COMM_SUPPORT_MBOX_SEGMENT(hwdev) (hinic3_pcie_itf_id(hwdev) == SPU_HOST_ID)
+#define COMM_SUPPORT_CMDQ_NUM(hwdev) COMM_FEATURE_QW0(hwdev, CMDQ_NUM)
+#define COMM_SUPPORT_VIRTIO_VQ_SIZE(hwdev) COMM_FEATURE_QW0(hwdev, VIRTIO_VQ_SIZE)
+
+void set_func_host_mode(struct hinic3_hwdev *hwdev, enum hinic3_func_mode mode);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c
new file mode 100644
index 0000000..8590f70
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c
@@ -0,0 +1,1050 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_csr.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+
+#ifndef CONFIG_MODULE_PROF
+#define WAIT_HWIF_READY_TIMEOUT 10000
+#else
+#define WAIT_HWIF_READY_TIMEOUT 30000
+#endif
+
+#define HINIC3_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT 60000
+
+#define MAX_MSIX_ENTRY 2048
+
+#define DB_IDX(db, db_base) \
+ ((u32)(((ulong)(db) - (ulong)(db_base)) / \
+ HINIC3_DB_PAGE_SIZE))
+
+#define HINIC3_AF0_FUNC_GLOBAL_IDX_SHIFT 0
+#define HINIC3_AF0_P2P_IDX_SHIFT 12
+#define HINIC3_AF0_PCI_INTF_IDX_SHIFT 17
+#define HINIC3_AF0_VF_IN_PF_SHIFT 20
+#define HINIC3_AF0_FUNC_TYPE_SHIFT 28
+
+#define HINIC3_AF0_FUNC_GLOBAL_IDX_MASK 0xFFF
+#define HINIC3_AF0_P2P_IDX_MASK 0x1F
+#define HINIC3_AF0_PCI_INTF_IDX_MASK 0x7
+#define HINIC3_AF0_VF_IN_PF_MASK 0xFF
+#define HINIC3_AF0_FUNC_TYPE_MASK 0x1
+
+#define HINIC3_AF0_GET(val, member) \
+ (((val) >> HINIC3_AF0_##member##_SHIFT) & HINIC3_AF0_##member##_MASK)
+
+#define HINIC3_AF1_PPF_IDX_SHIFT 0
+#define HINIC3_AF1_AEQS_PER_FUNC_SHIFT 8
+#define HINIC3_AF1_MGMT_INIT_STATUS_SHIFT 30
+#define HINIC3_AF1_PF_INIT_STATUS_SHIFT 31
+
+#define HINIC3_AF1_PPF_IDX_MASK 0x3F
+#define HINIC3_AF1_AEQS_PER_FUNC_MASK 0x3
+#define HINIC3_AF1_MGMT_INIT_STATUS_MASK 0x1
+#define HINIC3_AF1_PF_INIT_STATUS_MASK 0x1
+
+#define HINIC3_AF1_GET(val, member) \
+ (((val) >> HINIC3_AF1_##member##_SHIFT) & HINIC3_AF1_##member##_MASK)
+
+#define HINIC3_AF2_CEQS_PER_FUNC_SHIFT 0
+#define HINIC3_AF2_DMA_ATTR_PER_FUNC_SHIFT 9
+#define HINIC3_AF2_IRQS_PER_FUNC_SHIFT 16
+
+#define HINIC3_AF2_CEQS_PER_FUNC_MASK 0x1FF
+#define HINIC3_AF2_DMA_ATTR_PER_FUNC_MASK 0x7
+#define HINIC3_AF2_IRQS_PER_FUNC_MASK 0x7FF
+
+#define HINIC3_AF2_GET(val, member) \
+ (((val) >> HINIC3_AF2_##member##_SHIFT) & HINIC3_AF2_##member##_MASK)
+
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_NXT_PF_SHIFT 0
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_PF_SHIFT 16
+
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_NXT_PF_MASK 0xFFF
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_PF_MASK 0xFFF
+
+#define HINIC3_AF3_GET(val, member) \
+ (((val) >> HINIC3_AF3_##member##_SHIFT) & HINIC3_AF3_##member##_MASK)
+
+#define HINIC3_AF4_DOORBELL_CTRL_SHIFT 0
+#define HINIC3_AF4_DOORBELL_CTRL_MASK 0x1
+
+#define HINIC3_AF4_GET(val, member) \
+ (((val) >> HINIC3_AF4_##member##_SHIFT) & HINIC3_AF4_##member##_MASK)
+
+#define HINIC3_AF4_SET(val, member) \
+ (((val) & HINIC3_AF4_##member##_MASK) << HINIC3_AF4_##member##_SHIFT)
+
+#define HINIC3_AF4_CLEAR(val, member) \
+ ((val) & (~(HINIC3_AF4_##member##_MASK << HINIC3_AF4_##member##_SHIFT)))
+
+#define HINIC3_AF5_OUTBOUND_CTRL_SHIFT 0
+#define HINIC3_AF5_OUTBOUND_CTRL_MASK 0x1
+
+#define HINIC3_AF5_GET(val, member) \
+ (((val) >> HINIC3_AF5_##member##_SHIFT) & HINIC3_AF5_##member##_MASK)
+
+#define HINIC3_AF5_SET(val, member) \
+ (((val) & HINIC3_AF5_##member##_MASK) << HINIC3_AF5_##member##_SHIFT)
+
+#define HINIC3_AF5_CLEAR(val, member) \
+ ((val) & (~(HINIC3_AF5_##member##_MASK << HINIC3_AF5_##member##_SHIFT)))
+
+#define HINIC3_AF6_PF_STATUS_SHIFT 0
+#define HINIC3_AF6_PF_STATUS_MASK 0xFFFF
+
+#define HINIC3_AF6_FUNC_MAX_SQ_SHIFT 23
+#define HINIC3_AF6_FUNC_MAX_SQ_MASK 0x1FF
+
+#define HINIC3_AF6_MSIX_FLEX_EN_SHIFT 22
+#define HINIC3_AF6_MSIX_FLEX_EN_MASK 0x1
+
+#define HINIC3_AF6_SET(val, member) \
+ ((((u32)(val)) & HINIC3_AF6_##member##_MASK) << \
+ HINIC3_AF6_##member##_SHIFT)
+
+#define HINIC3_AF6_GET(val, member) \
+ (((u32)(val) >> HINIC3_AF6_##member##_SHIFT) & HINIC3_AF6_##member##_MASK)
+
+#define HINIC3_AF6_CLEAR(val, member) \
+ ((u32)(val) & (~(HINIC3_AF6_##member##_MASK << \
+ HINIC3_AF6_##member##_SHIFT)))
+
+#define HINIC3_PPF_ELECT_PORT_IDX_SHIFT 0
+
+#define HINIC3_PPF_ELECT_PORT_IDX_MASK 0x3F
+
+#define HINIC3_PPF_ELECT_PORT_GET(val, member) \
+ (((val) >> HINIC3_PPF_ELECT_PORT_##member##_SHIFT) & \
+ HINIC3_PPF_ELECT_PORT_##member##_MASK)
+
+#define HINIC3_PPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC3_PPF_ELECTION_IDX_MASK 0x3F
+
+#define HINIC3_PPF_ELECTION_SET(val, member) \
+ (((val) & HINIC3_PPF_ELECTION_##member##_MASK) << \
+ HINIC3_PPF_ELECTION_##member##_SHIFT)
+
+#define HINIC3_PPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC3_PPF_ELECTION_##member##_SHIFT) & \
+ HINIC3_PPF_ELECTION_##member##_MASK)
+
+#define HINIC3_PPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC3_PPF_ELECTION_##member##_MASK << \
+ HINIC3_PPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC3_MPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC3_MPF_ELECTION_IDX_MASK 0x1F
+
+#define HINIC3_MPF_ELECTION_SET(val, member) \
+ (((val) & HINIC3_MPF_ELECTION_##member##_MASK) << \
+ HINIC3_MPF_ELECTION_##member##_SHIFT)
+
+#define HINIC3_MPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC3_MPF_ELECTION_##member##_SHIFT) & \
+ HINIC3_MPF_ELECTION_##member##_MASK)
+
+#define HINIC3_MPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC3_MPF_ELECTION_##member##_MASK << \
+ HINIC3_MPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC3_GET_REG_FLAG(reg) ((reg) & (~(HINIC3_REGS_FLAG_MAKS)))
+
+#define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MAKS))
+
+u32 hinic3_hwif_read_reg(struct hinic3_hwif *hwif, u32 reg)
+{
+ if (HINIC3_GET_REG_FLAG(reg) == HINIC3_MGMT_REGS_FLAG)
+ return be32_to_cpu(readl(hwif->mgmt_regs_base +
+ HINIC3_GET_REG_ADDR(reg)));
+ else
+ return be32_to_cpu(readl(hwif->cfg_regs_base +
+ HINIC3_GET_REG_ADDR(reg)));
+}
+
+void hinic3_hwif_write_reg(struct hinic3_hwif *hwif, u32 reg, u32 val)
+{
+ if (HINIC3_GET_REG_FLAG(reg) == HINIC3_MGMT_REGS_FLAG)
+ writel(cpu_to_be32(val),
+ hwif->mgmt_regs_base + HINIC3_GET_REG_ADDR(reg));
+ else
+ writel(cpu_to_be32(val),
+ hwif->cfg_regs_base + HINIC3_GET_REG_ADDR(reg));
+}
+
+bool get_card_present_state(struct hinic3_hwdev *hwdev)
+{
+ u32 attr1;
+
+ attr1 = hinic3_hwif_read_reg(hwdev->hwif, HINIC3_CSR_FUNC_ATTR1_ADDR);
+ if (attr1 == HINIC3_PCIE_LINK_DOWN) {
+ sdk_warn(hwdev->dev_hdl, "Card is not present\n");
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * hinic3_get_heartbeat_status - get the heartbeat status
+ * @hwdev: the pointer to hw device
+ * Return: 0 - normal, 1 - heartbeat lost, 0xFFFFFFFF - PCIe link down
+ **/
+u32 hinic3_get_heartbeat_status(void *hwdev)
+{
+ u32 attr1;
+
+ if (!hwdev)
+ return HINIC3_PCIE_LINK_DOWN;
+
+ attr1 = hinic3_hwif_read_reg(((struct hinic3_hwdev *)hwdev)->hwif,
+ HINIC3_CSR_FUNC_ATTR1_ADDR);
+ if (attr1 == HINIC3_PCIE_LINK_DOWN)
+ return attr1;
+
+ return !HINIC3_AF1_GET(attr1, MGMT_INIT_STATUS);
+}
+EXPORT_SYMBOL(hinic3_get_heartbeat_status);
+
+#define MIGRATE_HOST_STATUS_CLEAR(host_id, val) ((val) & (~(1U << (host_id))))
+#define MIGRATE_HOST_STATUS_SET(host_id, enable) (((u8)(enable) & 1U) << (host_id))
+#define MIGRATE_HOST_STATUS_GET(host_id, val) (!!((val) & (1U << (host_id))))
+
+int hinic3_set_host_migrate_enable(void *hwdev, u8 host_id, bool enable)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ u32 reg_val;
+
+ if (!dev || host_id > SPU_HOST_ID)
+ return -EINVAL;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "hwdev should be ppf\n");
+ return -EINVAL;
+ }
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR);
+ reg_val = MIGRATE_HOST_STATUS_CLEAR(host_id, reg_val);
+ reg_val |= MIGRATE_HOST_STATUS_SET(host_id, enable);
+
+ hinic3_hwif_write_reg(dev->hwif, HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR, reg_val);
+
+ sdk_info(dev->dev_hdl, "Set migrate host %d status %d, reg value: 0x%x\n",
+ host_id, enable, reg_val);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_host_migrate_enable);
+
+int hinic3_get_host_migrate_enable(void *hwdev, u8 host_id, u8 *migrate_en)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ u32 reg_val;
+
+ if (!dev || !migrate_en || host_id > SPU_HOST_ID)
+ return -EINVAL;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "hwdev should be ppf\n");
+ return -EINVAL;
+ }
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR);
+ *migrate_en = MIGRATE_HOST_STATUS_GET(host_id, reg_val);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_host_migrate_enable);
+
+static enum hinic3_wait_return check_hwif_ready_handler(void *priv_data)
+{
+ u32 status;
+
+ status = hinic3_get_heartbeat_status(priv_data);
+ if (status == HINIC3_PCIE_LINK_DOWN)
+ return WAIT_PROCESS_ERR;
+ else if (!status)
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+static int wait_hwif_ready(struct hinic3_hwdev *hwdev)
+{
+ int ret;
+
+ ret = hinic3_wait_for_timeout(hwdev, check_hwif_ready_handler,
+ WAIT_HWIF_READY_TIMEOUT, USEC_PER_MSEC);
+ if (ret == -ETIMEDOUT) {
+ hwdev->probe_fault_level = FAULT_LEVEL_FATAL;
+ sdk_err(hwdev->dev_hdl, "Wait for hwif timeout\n");
+ }
+
+ return ret;
+}
+
+/**
+ * set_hwif_attr - set the attributes as members in hwif
+ * @hwif: the hardware interface of a pci function device
+ * @attr0: the first attribute that was read from the hw
+ * @attr1: the second attribute that was read from the hw
+ * @attr2: the third attribute that was read from the hw
+ * @attr3: the fourth attribute that was read from the hw
+ **/
+static void set_hwif_attr(struct hinic3_hwif *hwif, u32 attr0, u32 attr1,
+ u32 attr2, u32 attr3, u32 attr6)
+{
+ hwif->attr.func_global_idx = HINIC3_AF0_GET(attr0, FUNC_GLOBAL_IDX);
+ hwif->attr.port_to_port_idx = HINIC3_AF0_GET(attr0, P2P_IDX);
+ hwif->attr.pci_intf_idx = HINIC3_AF0_GET(attr0, PCI_INTF_IDX);
+ hwif->attr.vf_in_pf = HINIC3_AF0_GET(attr0, VF_IN_PF);
+ hwif->attr.func_type = HINIC3_AF0_GET(attr0, FUNC_TYPE);
+
+ hwif->attr.ppf_idx = HINIC3_AF1_GET(attr1, PPF_IDX);
+ hwif->attr.num_aeqs = BIT(HINIC3_AF1_GET(attr1, AEQS_PER_FUNC));
+ hwif->attr.num_ceqs = (u8)HINIC3_AF2_GET(attr2, CEQS_PER_FUNC);
+ hwif->attr.num_irqs = HINIC3_AF2_GET(attr2, IRQS_PER_FUNC);
+ if (hwif->attr.num_irqs > MAX_MSIX_ENTRY)
+ hwif->attr.num_irqs = MAX_MSIX_ENTRY;
+
+ hwif->attr.num_dma_attr = BIT(HINIC3_AF2_GET(attr2, DMA_ATTR_PER_FUNC));
+
+ hwif->attr.global_vf_id_of_pf = HINIC3_AF3_GET(attr3,
+ GLOBAL_VF_ID_OF_PF);
+
+ hwif->attr.num_sq = HINIC3_AF6_GET(attr6, FUNC_MAX_SQ);
+ hwif->attr.msix_flex_en = HINIC3_AF6_GET(attr6, MSIX_FLEX_EN);
+}
+
+/**
+ * get_hwif_attr - read and set the attributes as members in hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static int get_hwif_attr(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr0, attr1, attr2, attr3, attr6;
+
+ addr = HINIC3_CSR_FUNC_ATTR0_ADDR;
+ attr0 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr0 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR1_ADDR;
+ attr1 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr1 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR2_ADDR;
+ attr2 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr2 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR3_ADDR;
+ attr3 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr3 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR6_ADDR;
+ attr6 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr6 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ set_hwif_attr(hwif, attr0, attr1, attr2, attr3, attr6);
+
+ return 0;
+}
+
+void hinic3_set_pf_status(struct hinic3_hwif *hwif,
+ enum hinic3_pf_status status)
+{
+ u32 attr6 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR);
+
+ attr6 = HINIC3_AF6_CLEAR(attr6, PF_STATUS);
+ attr6 |= HINIC3_AF6_SET(status, PF_STATUS);
+
+ if (hwif->attr.func_type == TYPE_VF)
+ return;
+
+ hinic3_hwif_write_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR, attr6);
+}
+
+enum hinic3_pf_status hinic3_get_pf_status(struct hinic3_hwif *hwif)
+{
+ u32 attr6 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR);
+
+ return HINIC3_AF6_GET(attr6, PF_STATUS);
+}
+
+static enum hinic3_doorbell_ctrl hinic3_get_doorbell_ctrl_status(struct hinic3_hwif *hwif)
+{
+ u32 attr4 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR4_ADDR);
+
+ return HINIC3_AF4_GET(attr4, DOORBELL_CTRL);
+}
+
+static enum hinic3_outbound_ctrl hinic3_get_outbound_ctrl_status(struct hinic3_hwif *hwif)
+{
+ u32 attr5 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR5_ADDR);
+
+ return HINIC3_AF5_GET(attr5, OUTBOUND_CTRL);
+}
+
+void hinic3_enable_doorbell(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = HINIC3_CSR_FUNC_ATTR4_ADDR;
+ attr4 = hinic3_hwif_read_reg(hwif, addr);
+
+ attr4 = HINIC3_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= HINIC3_AF4_SET(ENABLE_DOORBELL, DOORBELL_CTRL);
+
+ hinic3_hwif_write_reg(hwif, addr, attr4);
+}
+
+void hinic3_disable_doorbell(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = HINIC3_CSR_FUNC_ATTR4_ADDR;
+ attr4 = hinic3_hwif_read_reg(hwif, addr);
+
+ attr4 = HINIC3_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= HINIC3_AF4_SET(DISABLE_DOORBELL, DOORBELL_CTRL);
+
+ hinic3_hwif_write_reg(hwif, addr, attr4);
+}
+
+/**
+ * set_ppf - try to set hwif as ppf and set the type of hwif in this case
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void set_ppf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 addr, val, ppf_election;
+
+ /* Read Modify Write */
+ addr = HINIC3_CSR_PPF_ELECTION_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+ val = HINIC3_PPF_ELECTION_CLEAR(val, IDX);
+
+ ppf_election = HINIC3_PPF_ELECTION_SET(attr->func_global_idx, IDX);
+ val |= ppf_election;
+
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ /* Check PPF */
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ attr->ppf_idx = HINIC3_PPF_ELECTION_GET(val, IDX);
+ if (attr->ppf_idx == attr->func_global_idx)
+ attr->func_type = TYPE_PPF;
+}
+
+/**
+ * get_mpf - get the mpf index into the hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void get_mpf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 mpf_election, addr;
+
+ addr = HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ mpf_election = hinic3_hwif_read_reg(hwif, addr);
+ attr->mpf_idx = HINIC3_MPF_ELECTION_GET(mpf_election, IDX);
+}
+
+/**
+ * set_mpf - try to set hwif as mpf and set the mpf idx in hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void set_mpf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 addr, val, mpf_election;
+
+ /* Read Modify Write */
+ addr = HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ val = HINIC3_MPF_ELECTION_CLEAR(val, IDX);
+ mpf_election = HINIC3_MPF_ELECTION_SET(attr->func_global_idx, IDX);
+
+ val |= mpf_election;
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+static int init_hwif(struct hinic3_hwdev *hwdev, void *cfg_reg_base, void *intr_reg_base,
+ void *mgmt_regs_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ hwif = kzalloc(sizeof(*hwif), GFP_KERNEL);
+ if (!hwif)
+ return -ENOMEM;
+
+ hwdev->hwif = hwif;
+ hwif->pdev = hwdev->pcidev_hdl;
+
+ /* if function is VF, mgmt_regs_base will be NULL */
+ hwif->cfg_regs_base = mgmt_regs_base ? cfg_reg_base :
+ (u8 *)cfg_reg_base + HINIC3_VF_CFG_REG_OFFSET;
+
+ hwif->intr_regs_base = intr_reg_base;
+ hwif->mgmt_regs_base = mgmt_regs_base;
+
+ return 0;
+}
+
+static int init_db_area_idx(struct hinic3_hwif *hwif, u64 db_base_phy, u8 *db_base,
+ u64 db_dwqe_len)
+{
+ struct hinic3_free_db_area *free_db_area = &hwif->free_db_area;
+ u32 db_max_areas;
+
+ hwif->db_base_phy = db_base_phy;
+ hwif->db_base = db_base;
+ hwif->db_dwqe_len = db_dwqe_len;
+
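+ /* size the doorbell bitmap from the bar length, up to HINIC3_DB_MAX_AREAS pages */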
+ db_max_areas = (db_dwqe_len > HINIC3_DB_DWQE_SIZE) ?
+ HINIC3_DB_MAX_AREAS :
+ (u32)(db_dwqe_len / HINIC3_DB_PAGE_SIZE);
+ free_db_area->db_bitmap_array = bitmap_zalloc(db_max_areas, GFP_KERNEL);
+ if (!free_db_area->db_bitmap_array) {
+ pr_err("Failed to allocate db area.\n");
+ return -ENOMEM;
+ }
+ free_db_area->db_max_areas = db_max_areas;
+ spin_lock_init(&free_db_area->idx_lock);
+ return 0;
+}
+
+static void free_db_area(struct hinic3_free_db_area *free_db_area)
+{
+ spin_lock_deinit(&free_db_area->idx_lock);
+ kfree(free_db_area->db_bitmap_array);
+ free_db_area->db_bitmap_array = NULL;
+}
+
+static int get_db_idx(struct hinic3_hwif *hwif, u32 *idx)
+{
+ struct hinic3_free_db_area *free_db_area = &hwif->free_db_area;
+ u32 pg_idx;
+
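+ /* allocate a doorbell page by claiming the first free bit in the bitmap */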
+ spin_lock(&free_db_area->idx_lock);
+ pg_idx = (u32)find_first_zero_bit(free_db_area->db_bitmap_array,
+ free_db_area->db_max_areas);
+ if (pg_idx == free_db_area->db_max_areas) {
+ spin_unlock(&free_db_area->idx_lock);
+ return -ENOMEM;
+ }
+ set_bit(pg_idx, free_db_area->db_bitmap_array);
+ spin_unlock(&free_db_area->idx_lock);
+
+ *idx = pg_idx;
+
+ return 0;
+}
+
+static void free_db_idx(struct hinic3_hwif *hwif, u32 idx)
+{
+ struct hinic3_free_db_area *free_db_area = &hwif->free_db_area;
+
+ if (idx >= free_db_area->db_max_areas)
+ return;
+
+ spin_lock(&free_db_area->idx_lock);
+ clear_bit((int)idx, free_db_area->db_bitmap_array);
+
+ spin_unlock(&free_db_area->idx_lock);
+}
+
+void hinic3_free_db_addr(void *hwdev, const void __iomem *db_base,
+ void __iomem *dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx;
+
+ if (!hwdev || !db_base)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+ idx = DB_IDX(db_base, hwif->db_base);
+
+ free_db_idx(hwif, idx);
+}
+EXPORT_SYMBOL(hinic3_free_db_addr);
+
+int hinic3_alloc_db_addr(void *hwdev, void __iomem **db_base,
+ void __iomem **dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx = 0;
+ int err;
+
+ if (!hwdev || !db_base)
+ return -EINVAL;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ err = get_db_idx(hwif, &idx);
+ if (err)
+ return -EFAULT;
+
+ *db_base = hwif->db_base + idx * HINIC3_DB_PAGE_SIZE;
+
+ if (!dwqe_base)
+ return 0;
+
+ *dwqe_base = (u8 *)*db_base + HINIC3_DWQE_OFFSET;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_db_addr);
+
+void hinic3_free_db_phy_addr(void *hwdev, u64 db_base, u64 dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+ idx = DB_IDX(db_base, hwif->db_base_phy);
+
+ free_db_idx(hwif, idx);
+}
+EXPORT_SYMBOL(hinic3_free_db_phy_addr);
+
+int hinic3_alloc_db_phy_addr(void *hwdev, u64 *db_base, u64 *dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx;
+ int err;
+
+ if (!hwdev || !db_base || !dwqe_base)
+ return -EINVAL;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ err = get_db_idx(hwif, &idx);
+ if (err)
+ return -EFAULT;
+
+ *db_base = hwif->db_base_phy + idx * HINIC3_DB_PAGE_SIZE;
+ *dwqe_base = *db_base + HINIC3_DWQE_OFFSET;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_db_phy_addr);
+
+void hinic3_set_msix_auto_mask_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_auto_mask flag)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 mask_bits;
+ u32 addr;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ if (flag)
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(1, AUTO_MSK_SET);
+ else
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(1, AUTO_MSK_CLR);
+
+ mask_bits = mask_bits |
+ HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, mask_bits);
+}
+EXPORT_SYMBOL(hinic3_set_msix_auto_mask_state);
+
+void hinic3_set_msix_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_state flag)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 mask_bits;
+ u32 addr;
+ u8 int_msk = 1;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ if (flag)
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(int_msk, INT_MSK_SET);
+ else
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(int_msk, INT_MSK_CLR);
+ mask_bits = mask_bits |
+ HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, mask_bits);
+}
+EXPORT_SYMBOL(hinic3_set_msix_state);
+
+static void disable_all_msix(struct hinic3_hwdev *hwdev)
+{
+ u16 num_irqs = hwdev->hwif->attr.num_irqs;
+ u16 i;
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_set_msix_state(hwdev, i, HINIC3_MSIX_DISABLE);
+}
+
+static void enable_all_msix(struct hinic3_hwdev *hwdev)
+{
+ u16 num_irqs = hwdev->hwif->attr.num_irqs;
+ u16 i;
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_set_msix_state(hwdev, i, HINIC3_MSIX_ENABLE);
+}
+
+static enum hinic3_wait_return check_db_outbound_enable_handler(void *priv_data)
+{
+ struct hinic3_hwif *hwif = priv_data;
+ enum hinic3_doorbell_ctrl db_ctrl;
+ enum hinic3_outbound_ctrl outbound_ctrl;
+
+ db_ctrl = hinic3_get_doorbell_ctrl_status(hwif);
+ outbound_ctrl = hinic3_get_outbound_ctrl_status(hwif);
+ if (outbound_ctrl == ENABLE_OUTBOUND && db_ctrl == ENABLE_DOORBELL)
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+static int wait_until_doorbell_and_outbound_enabled(struct hinic3_hwif *hwif)
+{
+ return hinic3_wait_for_timeout(hwif, check_db_outbound_enable_handler,
+ HINIC3_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT, USEC_PER_MSEC);
+}
+
+static void select_ppf_mpf(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+
+ if (!HINIC3_IS_VF(hwdev)) {
+ set_ppf(hwif);
+
+ if (HINIC3_IS_PPF(hwdev))
+ set_mpf(hwif);
+
+ get_mpf(hwif);
+ }
+}
+
+/**
+ * hinic3_init_hwif - initialize the hw interface
+ * @hwdev: the pointer to hw device
+ * @cfg_reg_base: mapped base address of the configuration registers
+ * @intr_reg_base: mapped base address of the interrupt registers
+ * @mgmt_regs_base: mapped base address of the management registers, NULL for VF
+ * @db_base_phy: physical base address of the doorbell area
+ * @db_base: mapped base address of the doorbell area
+ * @db_dwqe_len: length of the doorbell and direct wqe area
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_init_hwif(struct hinic3_hwdev *hwdev, void *cfg_reg_base,
+ void *intr_reg_base, void *mgmt_regs_base, u64 db_base_phy,
+ void *db_base, u64 db_dwqe_len)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 attr1, attr4, attr5;
+ int err;
+
+ err = init_hwif(hwdev, cfg_reg_base, intr_reg_base, mgmt_regs_base);
+ if (err)
+ return err;
+
+ hwif = hwdev->hwif;
+
+ err = init_db_area_idx(hwif, db_base_phy, db_base, db_dwqe_len);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init db area.\n");
+ goto init_db_area_err;
+ }
+
+ err = wait_hwif_ready(hwdev);
+ if (err) {
+ attr1 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR1_ADDR);
+ sdk_err(hwdev->dev_hdl, "Chip status is not ready, attr1:0x%x\n", attr1);
+ goto hwif_ready_err;
+ }
+
+ err = get_hwif_attr(hwif);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Get hwif attr failed\n");
+ goto hwif_ready_err;
+ }
+
+ err = wait_until_doorbell_and_outbound_enabled(hwif);
+ if (err) {
+ attr4 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR4_ADDR);
+ attr5 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR5_ADDR);
+ sdk_err(hwdev->dev_hdl, "Hw doorbell/outbound is disabled, attr4 0x%x attr5 0x%x\n",
+ attr4, attr5);
+ goto hwif_ready_err;
+ }
+
+ select_ppf_mpf(hwdev);
+
+ disable_all_msix(hwdev);
+ /* prevent the mgmt cpu from reporting any event */
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_INIT);
+
+ sdk_info(hwdev->dev_hdl, "global_func_idx: %u, func_type: %d, host_id: %u, ppf: %u, mpf: %u\n",
+ hwif->attr.func_global_idx, hwif->attr.func_type, hwif->attr.pci_intf_idx,
+ hwif->attr.ppf_idx, hwif->attr.mpf_idx);
+
+ return 0;
+
+hwif_ready_err:
+ hinic3_show_chip_err_info(hwdev);
+ free_db_area(&hwif->free_db_area);
+init_db_area_err:
+ kfree(hwif);
+
+ return err;
+}
+
+/**
+ * hinic3_free_hwif - free the hw interface
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_free_hwif(struct hinic3_hwdev *hwdev)
+{
+ free_db_area(&hwdev->hwif->free_db_area);
+ enable_all_msix(hwdev);
+ kfree(hwdev->hwif);
+ hwdev->hwif = NULL;
+}
+
+u16 hinic3_global_func_id(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_global_idx;
+}
+EXPORT_SYMBOL(hinic3_global_func_id);
+
+/**
+ * hinic3_global_func_id_hw - get function id from register, used by the sriov hot migration process
+ * @hwdev: the pointer to hw device
+ */
+u16 hinic3_global_func_id_hw(void *hwdev)
+{
+ u32 addr, attr0;
+ struct hinic3_hwdev *dev;
+
+ dev = (struct hinic3_hwdev *)hwdev;
+ addr = HINIC3_CSR_FUNC_ATTR0_ADDR;
+ attr0 = hinic3_hwif_read_reg(dev->hwif, addr);
+
+ return HINIC3_AF0_GET(attr0, FUNC_GLOBAL_IDX);
+}
+
+/**
+ * hinic3_global_func_id_get - get function id, used by the sriov hot migration process
+ * @hwdev: the pointer to hw device
+ * @func_id: function id
+ */
+int hinic3_global_func_id_get(void *hwdev, u16 *func_id)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev || !func_id)
+ return -EINVAL;
+
+ /* only a vf reads func_id from the chip register for sriov migration */
+ if (!HINIC3_IS_VF(dev)) {
+ *func_id = hinic3_global_func_id(hwdev);
+ return 0;
+ }
+
+ *func_id = hinic3_global_func_id_hw(dev);
+ return 0;
+}
+
+u16 hinic3_intr_num(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.num_irqs;
+}
+EXPORT_SYMBOL(hinic3_intr_num);
+
+u8 hinic3_pf_id_of_vf(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.port_to_port_idx;
+}
+EXPORT_SYMBOL(hinic3_pf_id_of_vf);
+
+u8 hinic3_pcie_itf_id(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.pci_intf_idx;
+}
+EXPORT_SYMBOL(hinic3_pcie_itf_id);
+
+u8 hinic3_vf_in_pf(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.vf_in_pf;
+}
+EXPORT_SYMBOL(hinic3_vf_in_pf);
+
+enum func_type hinic3_func_type(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_type;
+}
+EXPORT_SYMBOL(hinic3_func_type);
+
+u8 hinic3_ceq_num(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.num_ceqs;
+}
+EXPORT_SYMBOL(hinic3_ceq_num);
+
+u16 hinic3_glb_pf_vf_offset(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.global_vf_id_of_pf;
+}
+EXPORT_SYMBOL(hinic3_glb_pf_vf_offset);
+
+u8 hinic3_ppf_idx(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.ppf_idx;
+}
+EXPORT_SYMBOL(hinic3_ppf_idx);
+
+u8 hinic3_host_ppf_idx(struct hinic3_hwdev *hwdev, u8 host_id)
+{
+ u32 ppf_elect_port_addr;
+ u32 val;
+
+ if (!hwdev)
+ return 0;
+
+ ppf_elect_port_addr = HINIC3_CSR_FUNC_PPF_ELECT(host_id);
+ val = hinic3_hwif_read_reg(hwdev->hwif, ppf_elect_port_addr);
+
+ return HINIC3_PPF_ELECT_PORT_GET(val, IDX);
+}
+
+u32 hinic3_get_self_test_result(void *hwdev)
+{
+ struct hinic3_hwif *hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hinic3_hwif_read_reg(hwif, HINIC3_MGMT_HEALTH_STATUS_ADDR);
+}
+
+void hinic3_show_chip_err_info(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+ u32 value;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return;
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_CHIP_BASE_INFO_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip base info: 0x%08x\n", value);
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_MGMT_HEALTH_STATUS_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Mgmt CPU health status: 0x%08x\n", value);
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_CHIP_ERR_STATUS0_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip fatal error status0: 0x%08x\n", value);
+ value = hinic3_hwif_read_reg(hwif, HINIC3_CHIP_ERR_STATUS1_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip fatal error status1: 0x%08x\n", value);
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_ERR_INFO0_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip exception info0: 0x%08x\n", value);
+ value = hinic3_hwif_read_reg(hwif, HINIC3_ERR_INFO1_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip exception info1: 0x%08x\n", value);
+ value = hinic3_hwif_read_reg(hwif, HINIC3_ERR_INFO2_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip exception info2: 0x%08x\n", value);
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h
new file mode 100644
index 0000000..b204b21
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HWIF_H
+#define HINIC3_HWIF_H
+
+#include "hinic3_hwdev.h"
+
+#define HINIC3_PCIE_LINK_DOWN 0xFFFFFFFF
+
+struct hinic3_free_db_area {
+ unsigned long *db_bitmap_array;
+ u32 db_max_areas;
+ /* spinlock for allocating doorbell area */
+ spinlock_t idx_lock;
+};
+
+struct hinic3_func_attr {
+ u16 func_global_idx;
+ u8 port_to_port_idx;
+ u8 pci_intf_idx;
+ u8 vf_in_pf;
+ u8 rsvd1;
+ u16 rsvd2;
+ enum func_type func_type;
+
+ u8 mpf_idx;
+
+ u8 ppf_idx;
+
+ u16 num_irqs; /* max: 2 ^ 15 */
+ u8 num_aeqs; /* max: 2 ^ 3 */
+ u8 num_ceqs; /* max: 2 ^ 7 */
+
+ u16 num_sq; /* max: 2 ^ 8 */
+ u8 num_dma_attr; /* max: 2 ^ 6 */
+ u8 msix_flex_en;
+
+ u16 global_vf_id_of_pf;
+};
+
+struct hinic3_hwif {
+ u8 __iomem *cfg_regs_base;
+ u8 __iomem *intr_regs_base;
+ u8 __iomem *mgmt_regs_base;
+ u64 db_base_phy;
+ u64 db_dwqe_len;
+ u8 __iomem *db_base;
+
+ struct hinic3_free_db_area free_db_area;
+
+ struct hinic3_func_attr attr;
+
+ void *pdev;
+ u64 rsvd;
+};
+
+enum hinic3_outbound_ctrl {
+ ENABLE_OUTBOUND = 0x0,
+ DISABLE_OUTBOUND = 0x1,
+};
+
+enum hinic3_doorbell_ctrl {
+ ENABLE_DOORBELL = 0x0,
+ DISABLE_DOORBELL = 0x1,
+};
+
+enum hinic3_pf_status {
+ HINIC3_PF_STATUS_INIT = 0x0,
+ HINIC3_PF_STATUS_ACTIVE_FLAG = 0x11,
+ HINIC3_PF_STATUS_FLR_START_FLAG = 0x12,
+ HINIC3_PF_STATUS_FLR_FINISH_FLAG = 0x13,
+};
+
+#define HINIC3_HWIF_NUM_AEQS(hwif) ((hwif)->attr.num_aeqs)
+#define HINIC3_HWIF_NUM_CEQS(hwif) ((hwif)->attr.num_ceqs)
+#define HINIC3_HWIF_NUM_IRQS(hwif) ((hwif)->attr.num_irqs)
+#define HINIC3_HWIF_GLOBAL_IDX(hwif) ((hwif)->attr.func_global_idx)
+#define HINIC3_HWIF_GLOBAL_VF_OFFSET(hwif) ((hwif)->attr.global_vf_id_of_pf)
+#define HINIC3_HWIF_PPF_IDX(hwif) ((hwif)->attr.ppf_idx)
+#define HINIC3_PCI_INTF_IDX(hwif) ((hwif)->attr.pci_intf_idx)
+
+#define HINIC3_FUNC_TYPE(dev) ((dev)->hwif->attr.func_type)
+#define HINIC3_IS_PF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_PF)
+#define HINIC3_IS_VF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_VF)
+#define HINIC3_IS_PPF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_PPF)
+
+u32 hinic3_hwif_read_reg(struct hinic3_hwif *hwif, u32 reg);
+
+void hinic3_hwif_write_reg(struct hinic3_hwif *hwif, u32 reg, u32 val);
+
+void hinic3_set_pf_status(struct hinic3_hwif *hwif,
+ enum hinic3_pf_status status);
+
+enum hinic3_pf_status hinic3_get_pf_status(struct hinic3_hwif *hwif);
+
+void hinic3_disable_doorbell(struct hinic3_hwif *hwif);
+
+void hinic3_enable_doorbell(struct hinic3_hwif *hwif);
+
+int hinic3_init_hwif(struct hinic3_hwdev *hwdev, void *cfg_reg_base,
+ void *intr_reg_base, void *mgmt_regs_base, u64 db_base_phy,
+ void *db_base, u64 db_dwqe_len);
+
+void hinic3_free_hwif(struct hinic3_hwdev *hwdev);
+
+void hinic3_show_chip_err_info(struct hinic3_hwdev *hwdev);
+
+u8 hinic3_host_ppf_idx(struct hinic3_hwdev *hwdev, u8 host_id);
+
+bool get_card_present_state(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c
new file mode 100644
index 0000000..82a26ae
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c
@@ -0,0 +1,2460 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <net/addrconf.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/io-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/inetdevice.h>
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/aer.h>
+#include <linux/debugfs.h>
+#include <linux/notifier.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_common.h"
+#include "hinic3_crm.h"
+#include "hinic3_pci_id_tbl.h"
+#include "hinic3_sriov.h"
+#include "hinic3_dev_mgmt.h"
+#include "hinic3_nictool.h"
+#include "hinic3_hw.h"
+#include "hinic3_lld.h"
+
+#include "hinic3_profile.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_multi_host_mgmt.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_devlink.h"
+
+#include "vram_common.h"
+
+enum partition_dev_type {
+ PARTITION_DEV_NONE = 0,
+ PARTITION_DEV_SHARED,
+ PARTITION_DEV_EXCLUSIVE,
+ PARTITION_DEV_BACKUP,
+};
+
+#ifdef HAVE_HOT_REPLACE_FUNC
+extern int vpci_set_partition_attrs(struct pci_dev *dev, unsigned int dev_type, unsigned int partition_id);
+extern int get_partition_id(void);
+#else
+static int vpci_set_partition_attrs(struct pci_dev *dev, unsigned int dev_type, unsigned int partition_id) { return 0; }
+static int get_partition_id(void) { return 0; }
+#endif
+
+static bool disable_vf_load;
+module_param(disable_vf_load, bool, 0444);
+MODULE_PARM_DESC(disable_vf_load,
+ "Disable virtual functions probe or not - default is false");
+
+static bool g_is_pf_migrated;
+static bool disable_attach;
+module_param(disable_attach, bool, 0444);
+MODULE_PARM_DESC(disable_attach, "disable_attach or not - default is false");
+
+#define HINIC3_WAIT_SRIOV_CFG_TIMEOUT 15000
+
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+static DEVICE_ATTR(sriov_numvfs, 0664,
+ hinic3_sriov_numvfs_show, hinic3_sriov_numvfs_store);
+static DEVICE_ATTR(sriov_totalvfs, 0444,
+ hinic3_sriov_totalvfs_show, NULL);
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+
+static struct attribute *hinic3_attributes[] = {
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+ &dev_attr_sriov_numvfs.attr,
+ &dev_attr_sriov_totalvfs.attr,
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+ NULL
+};
+
+static const struct attribute_group hinic3_attr_group = {
+ .attrs = hinic3_attributes,
+};
+
+struct hinic3_uld_info g_uld_info[SERVICE_T_MAX] = { {0} };
+
+#define HINIC3_EVENT_PROCESS_TIMEOUT 10000
+#define HINIC3_WAIT_EVENT_PROCESS_TIMEOUT 100
+struct mutex g_uld_mutex;
+#define BUS_MAX_DEV_NUM 256
+#define HINIC3_SLAVE_WORK_MAX_NUM 20
+
+typedef struct vf_offset_info {
+ u8 valid;
+ u16 vf_offset_from_pf[CMD_MAX_MAX_PF_NUM];
+} VF_OFFSET_INFO_S;
+
+static VF_OFFSET_INFO_S g_vf_offset;
+DEFINE_MUTEX(g_vf_offset_lock);
+
+void hinic3_uld_lock_init(void)
+{
+ mutex_init(&g_uld_mutex);
+}
+
+static const char *s_uld_name[SERVICE_T_MAX] = {
+ "nic", "ovs", "roce", "toe", "ioe",
+ "fc", "vbs", "ipsec", "virtio", "migrate",
+ "ppa", "custom", "vroce", "crypt", "vsock", "bifur"};
+
+const char **hinic3_get_uld_names(void)
+{
+ return s_uld_name;
+}
+
+#ifdef CONFIG_PCI_IOV
+static int hinic3_get_pf_device_id(struct pci_dev *pdev)
+{
+ struct pci_dev *pf_dev = pci_physfn(pdev);
+
+ return pf_dev->device;
+}
+#endif
+
+static int attach_uld(struct hinic3_pcidev *dev, enum hinic3_service_type type,
+ const struct hinic3_uld_info *uld_info)
+{
+ void *uld_dev = NULL;
+ int err;
+
+ mutex_lock(&dev->pdev_mutex);
+
+ if (dev->uld_dev[type]) {
+ sdk_err(&dev->pcidev->dev,
+ "%s driver has attached to pcie device\n",
+ s_uld_name[type]);
+ err = 0;
+ goto out_unlock;
+ }
+
+ atomic_set(&dev->uld_ref_cnt[type], 0);
+
+ if (!uld_info->probe) {
+ err = 0;
+ goto out_unlock;
+ }
+ err = uld_info->probe(&dev->lld_dev, &uld_dev, dev->uld_dev_name[type]);
+ if (err) {
+ sdk_err(&dev->pcidev->dev,
+ "Failed to add object for %s driver to pcie device\n",
+ s_uld_name[type]);
+ goto probe_failed;
+ }
+
+ dev->uld_dev[type] = uld_dev;
+ set_bit(type, &dev->uld_state);
+ mutex_unlock(&dev->pdev_mutex);
+
+ sdk_info(&dev->pcidev->dev,
+ "Attach %s driver to pcie device succeed\n", s_uld_name[type]);
+ return 0;
+
+probe_failed:
+out_unlock:
+ mutex_unlock(&dev->pdev_mutex);
+
+ return err;
+}
+
+static void wait_uld_unused(struct hinic3_pcidev *dev, enum hinic3_service_type type)
+{
+ u32 loop_cnt = 0;
+
+ while (atomic_read(&dev->uld_ref_cnt[type])) {
+ loop_cnt++;
+ if (loop_cnt % PRINT_ULD_DETACH_TIMEOUT_INTERVAL == 0)
+ sdk_err(&dev->pcidev->dev, "Wait for uld unused for %lds, reference count: %d\n",
+ loop_cnt / MSEC_PER_SEC, atomic_read(&dev->uld_ref_cnt[type]));
+
+ usleep_range(ULD_LOCK_MIN_USLEEP_TIME, ULD_LOCK_MAX_USLEEP_TIME);
+ }
+}
+
+static void detach_uld(struct hinic3_pcidev *dev,
+ enum hinic3_service_type type)
+{
+ struct hinic3_uld_info *uld_info = &g_uld_info[type];
+ unsigned long end;
+ bool timeout = true;
+
+ mutex_lock(&dev->pdev_mutex);
+ if (!dev->uld_dev[type]) {
+ mutex_unlock(&dev->pdev_mutex);
+ return;
+ }
+
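+ /* claim the per-service state bit so no new events are dispatched to this uld, and wait for any in-flight event handler to release it */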
+ end = jiffies + msecs_to_jiffies(HINIC3_EVENT_PROCESS_TIMEOUT);
+ do {
+ if (!test_and_set_bit(type, &dev->state)) {
+ timeout = false;
+ break;
+ }
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+ } while (time_before(jiffies, end));
+
+ if (timeout && !test_and_set_bit(type, &dev->state))
+ timeout = false;
+
+ spin_lock_bh(&dev->uld_lock);
+ clear_bit(type, &dev->uld_state);
+ spin_unlock_bh(&dev->uld_lock);
+
+ wait_uld_unused(dev, type);
+
+ if (!uld_info->remove) {
+ mutex_unlock(&dev->pdev_mutex);
+ return;
+ }
+ uld_info->remove(&dev->lld_dev, dev->uld_dev[type]);
+
+ dev->uld_dev[type] = NULL;
+ if (!timeout)
+ clear_bit(type, &dev->state);
+
+ sdk_info(&dev->pcidev->dev,
+ "Detach %s driver from pcie device succeed\n",
+ s_uld_name[type]);
+ mutex_unlock(&dev->pdev_mutex);
+}
+
+static void attach_ulds(struct hinic3_pcidev *dev)
+{
+ enum hinic3_service_type type;
+ struct pci_dev *pdev = dev->pcidev;
+
+ int is_in_kexec = vram_get_kexec_flag();
+ /* no need to hold the lld lock when drivers load in parallel during spu hot replace */
+ if (is_in_kexec == 0) {
+ lld_hold();
+ }
+
+ mutex_lock(&g_uld_mutex);
+
+ for (type = SERVICE_T_OVS; type < SERVICE_T_MAX; type++) {
+ if (g_uld_info[type].probe) {
+ if (pdev->is_virtfn &&
+ (!hinic3_get_vf_service_load(pdev, (u16)type))) {
+ sdk_info(&pdev->dev, "VF device service_type = %d load is disabled in host\n",
+ type);
+ continue;
+ }
+ attach_uld(dev, type, &g_uld_info[type]);
+ }
+ }
+ mutex_unlock(&g_uld_mutex);
+
+ if (is_in_kexec == 0) {
+ lld_put();
+ }
+}
+
+static void detach_ulds(struct hinic3_pcidev *dev)
+{
+ enum hinic3_service_type type;
+
+ lld_hold();
+ mutex_lock(&g_uld_mutex);
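+ /* detach the other services first; nic is the base driver, so detach it last */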
+ for (type = SERVICE_T_MAX - 1; type > SERVICE_T_NIC; type--) {
+ if (g_uld_info[type].probe)
+ detach_uld(dev, type);
+ }
+
+ if (g_uld_info[SERVICE_T_NIC].probe)
+ detach_uld(dev, SERVICE_T_NIC);
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+}
+
+int hinic3_register_uld(enum hinic3_service_type type,
+ struct hinic3_uld_info *uld_info)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ struct list_head *chip_list = NULL;
+
+ if (type >= SERVICE_T_MAX) {
+ pr_err("Unknown type %d of up layer driver to register\n",
+ type);
+ return -EINVAL;
+ }
+
+ if (!uld_info || !uld_info->probe || !uld_info->remove) {
+ pr_err("Invalid information of %s driver to register\n",
+ s_uld_name[type]);
+ return -EINVAL;
+ }
+
+ lld_hold();
+ mutex_lock(&g_uld_mutex);
+
+ if (g_uld_info[type].probe) {
+ pr_err("%s driver has registered\n", s_uld_name[type]);
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+ return -EINVAL;
+ }
+
+ chip_list = get_hinic3_chip_list();
+ memcpy(&g_uld_info[type], uld_info, sizeof(struct hinic3_uld_info));
+ list_for_each_entry(chip_node, chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (attach_uld(dev, type, uld_info) != 0) {
+ sdk_err(&dev->pcidev->dev,
+ "Attach %s driver to pcie device failed\n",
+ s_uld_name[type]);
+#ifdef CONFIG_MODULE_PROF
+ hinic3_probe_fault_process(dev->pcidev, FAULT_LEVEL_HOST);
+ break;
+#else
+ continue;
+#endif
+ }
+ }
+ }
+
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+
+ pr_info("Register %s driver succeed\n", s_uld_name[type]);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_uld);
+
+void hinic3_unregister_uld(enum hinic3_service_type type)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ struct hinic3_uld_info *uld_info = NULL;
+ struct list_head *chip_list = NULL;
+
+ if (type >= SERVICE_T_MAX) {
+ pr_err("Unknown type %d of up layer driver to unregister\n",
+ type);
+ return;
+ }
+
+ lld_hold();
+ mutex_lock(&g_uld_mutex);
+ chip_list = get_hinic3_chip_list();
+ list_for_each_entry(chip_node, chip_list, node) {
+ /* detach vf first */
+ list_for_each_entry(dev, &chip_node->func_list, node)
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ detach_uld(dev, type);
+
+ list_for_each_entry(dev, &chip_node->func_list, node)
+ if (hinic3_func_type(dev->hwdev) == TYPE_PF)
+ detach_uld(dev, type);
+
+ list_for_each_entry(dev, &chip_node->func_list, node)
+ if (hinic3_func_type(dev->hwdev) == TYPE_PPF)
+ detach_uld(dev, type);
+ }
+
+ uld_info = &g_uld_info[type];
+ memset(uld_info, 0, sizeof(struct hinic3_uld_info));
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+}
+EXPORT_SYMBOL(hinic3_unregister_uld);
+
+int hinic3_attach_nic(struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev)
+ return -EINVAL;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ return attach_uld(dev, SERVICE_T_NIC, &g_uld_info[SERVICE_T_NIC]);
+}
+EXPORT_SYMBOL(hinic3_attach_nic);
+
+void hinic3_detach_nic(const struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev)
+ return;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ detach_uld(dev, SERVICE_T_NIC);
+}
+EXPORT_SYMBOL(hinic3_detach_nic);
+
+int hinic3_attach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev || type >= SERVICE_T_MAX)
+ return -EINVAL;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ return attach_uld(dev, type, &g_uld_info[type]);
+}
+EXPORT_SYMBOL(hinic3_attach_service);
+
+void hinic3_detach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev || type >= SERVICE_T_MAX)
+ return;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ detach_uld(dev, type);
+}
+EXPORT_SYMBOL(hinic3_detach_service);
+
+void hinic3_module_get(void *hwdev, enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev || type >= SERVICE_T_MAX)
+ return;
+ __module_get(THIS_MODULE);
+}
+EXPORT_SYMBOL(hinic3_module_get);
+
+void hinic3_module_put(void *hwdev, enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev || type >= SERVICE_T_MAX)
+ return;
+ module_put(THIS_MODULE);
+}
+EXPORT_SYMBOL(hinic3_module_put);
+
+static void hinic3_sync_time_to_fmw(struct hinic3_pcidev *pdev_pri)
+{
+ struct timeval tv = {0};
+ struct rtc_time rt_time = {0};
+ u64 tv_msec;
+ int err;
+
+ do_gettimeofday(&tv);
+
+ tv_msec = (u64)(tv.tv_sec * MSEC_PER_SEC + tv.tv_usec / USEC_PER_MSEC);
+ err = hinic3_sync_time(pdev_pri->hwdev, tv_msec);
+ if (err) {
+ sdk_err(&pdev_pri->pcidev->dev, "Synchronize UTC time to firmware failed, errno:%d.\n",
+ err);
+ } else {
+ rtc_time_to_tm((unsigned long)(tv.tv_sec), &rt_time);
+ sdk_info(&pdev_pri->pcidev->dev,
+ "Synchronize UTC time to firmware succeed. UTC time %d-%02d-%02d %02d:%02d:%02d.\n",
+ rt_time.tm_year + HINIC3_SYNC_YEAR_OFFSET,
+ rt_time.tm_mon + HINIC3_SYNC_MONTH_OFFSET,
+ rt_time.tm_mday, rt_time.tm_hour,
+ rt_time.tm_min, rt_time.tm_sec);
+ }
+}
+
+static void send_uld_dev_event(struct hinic3_pcidev *dev,
+ struct hinic3_event_info *event)
+{
+ enum hinic3_service_type type;
+
+ for (type = SERVICE_T_NIC; type < SERVICE_T_MAX; type++) {
+ if (test_and_set_bit(type, &dev->state)) {
+ sdk_warn(&dev->pcidev->dev, "Svc: 0x%x, event: 0x%x cannot be handled, %s is detaching\n",
+ event->service, event->type, s_uld_name[type]);
+ continue;
+ }
+
+ if (g_uld_info[type].event)
+ g_uld_info[type].event(&dev->lld_dev,
+ dev->uld_dev[type], event);
+ clear_bit(type, &dev->state);
+ }
+}
+
+static void send_event_to_dst_pf(struct hinic3_pcidev *dev, u16 func_id,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_pcidev *des_dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(des_dev, &dev->chip_node->func_list, node) {
+ if (dev->lld_state == HINIC3_IN_REMOVE)
+ continue;
+
+ if (hinic3_func_type(des_dev->hwdev) == TYPE_VF)
+ continue;
+
+ if (hinic3_global_func_id(des_dev->hwdev) == func_id) {
+ send_uld_dev_event(des_dev, event);
+ break;
+ }
+ }
+ lld_put();
+}
+
+static void send_event_to_all_pf(struct hinic3_pcidev *dev,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_pcidev *des_dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(des_dev, &dev->chip_node->func_list, node) {
+ if (dev->lld_state == HINIC3_IN_REMOVE)
+ continue;
+
+ if (hinic3_func_type(des_dev->hwdev) == TYPE_VF)
+ continue;
+
+ send_uld_dev_event(des_dev, event);
+ }
+ lld_put();
+}
+
+u32 hinic3_pdev_is_virtfn(struct pci_dev *pdev)
+{
+#ifdef CONFIG_PCI_IOV
+ return pdev->is_virtfn;
+#else
+ return 0;
+#endif
+}
+
+static int hinic3_get_function_enable(struct pci_dev *pdev, bool *en)
+{
+ struct pci_dev *pf_pdev = pdev->physfn;
+ struct hinic3_pcidev *pci_adapter = NULL;
+ void *pf_hwdev = NULL;
+ u16 global_func_id;
+ int err;
+
+ /* a PF in the host os or any function in a guest os probes the sdk by default */
+ if (!hinic3_pdev_is_virtfn(pdev) || !pf_pdev) {
+ *en = true;
+ return 0;
+ }
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter || !pci_adapter->hwdev) {
+ /* vf in host and pf sdk not probed */
+ return -EFAULT;
+ }
+ pf_hwdev = pci_adapter->hwdev;
+
+ err = hinic3_get_vfid_by_vfpci(NULL, pdev, &global_func_id);
+ if (err) {
+ sdk_err(&pci_adapter->pcidev->dev, "hinic3_get_vfid_by_vfpci failed, err: %d\n", err);
+ return err;
+ }
+
+ err = hinic3_get_func_nic_enable(pf_hwdev, global_func_id, en);
+ if (!!err) {
+ sdk_info(&pdev->dev, "Failed to get function nic status, err %d.\n", err);
+ return err;
+ }
+
+ return 0;
+}
+
+int hinic3_set_func_probe_in_host(void *hwdev, u16 func_id, bool probe)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return -EINVAL;
+
+ if (probe)
+ set_bit(func_id, dev->func_probe_in_host);
+ else
+ clear_bit(func_id, dev->func_probe_in_host);
+
+ return 0;
+}
+
+bool hinic3_get_func_probe_in_host(void *hwdev, u16 func_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_hwdev *ppf_dev = NULL;
+ bool probed = false;
+
+ if (!hwdev)
+ return false;
+
+ down(&dev->ppf_sem);
+ ppf_dev = hinic3_get_ppf_hwdev_by_pdev(dev->pcidev_hdl);
+ if (!ppf_dev || hinic3_func_type(ppf_dev) != TYPE_PPF) {
+ up(&dev->ppf_sem);
+ return false;
+ }
+
+ probed = !!test_bit(func_id, ppf_dev->func_probe_in_host);
+ up(&dev->ppf_sem);
+
+ return probed;
+}
+
+void *hinic3_get_ppf_hwdev_by_pdev(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!pdev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ chip_node = pci_adapter->chip_node;
+ lld_dev_hold(&pci_adapter->lld_dev);
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (dev->lld_state == HINIC3_IN_REMOVE)
+ continue;
+
+ if (dev->hwdev && hinic3_func_type(dev->hwdev) == TYPE_PPF) {
+ lld_dev_put(&pci_adapter->lld_dev);
+ return dev->hwdev;
+ }
+ }
+ lld_dev_put(&pci_adapter->lld_dev);
+
+ return NULL;
+}
+
+static int hinic3_set_vf_nic_used_state(void *hwdev, u16 func_id, bool opened)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_hwdev *ppf_dev = NULL;
+
+ if (!dev || func_id >= MAX_FUNCTION_NUM)
+ return -EINVAL;
+
+ down(&dev->ppf_sem);
+ ppf_dev = hinic3_get_ppf_hwdev_by_pdev(dev->pcidev_hdl);
+ if (!ppf_dev || hinic3_func_type(ppf_dev) != TYPE_PPF) {
+ up(&dev->ppf_sem);
+ return -EINVAL;
+ }
+
+ if (opened)
+ set_bit(func_id, ppf_dev->netdev_setup_state);
+ else
+ clear_bit(func_id, ppf_dev->netdev_setup_state);
+
+ up(&dev->ppf_sem);
+
+ return 0;
+}
+
+static void set_vf_func_in_use(struct pci_dev *pdev, bool in_use)
+{
+ struct pci_dev *pf_pdev = pdev->physfn;
+ struct hinic3_pcidev *pci_adapter = NULL;
+ void *pf_hwdev = NULL;
+ u16 global_func_id;
+
+ /* only need to be set when VF is on the host */
+ if (!hinic3_pdev_is_virtfn(pdev) || !pf_pdev)
+ return;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter || !pci_adapter->hwdev)
+ return;
+
+ pf_hwdev = pci_adapter->hwdev;
+
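+ /* the vf's global function id is its pci devfn plus the pf's global vf offset */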
+ global_func_id = (u16)pdev->devfn + hinic3_glb_pf_vf_offset(pf_hwdev);
+ (void)hinic3_set_vf_nic_used_state(pf_hwdev, global_func_id, in_use);
+}
+
+static int hinic3_pf_get_vf_offset_info(struct hinic3_pcidev *des_dev, u16 *vf_offset)
+{
+ int err, i;
+ struct hinic3_hw_pf_infos *pf_infos = NULL;
+ u16 pf_func_id;
+ struct hinic3_pcidev *pf_pci_adapter = NULL;
+
+ pf_pci_adapter = (hinic3_pdev_is_virtfn(des_dev->pcidev)) ? pci_get_drvdata(des_dev->pcidev->physfn) : des_dev;
+ pf_func_id = hinic3_global_func_id(pf_pci_adapter->hwdev);
+ if (pf_func_id >= CMD_MAX_MAX_PF_NUM || !vf_offset)
+ return -EINVAL;
+
+ mutex_lock(&g_vf_offset_lock);
+ if (g_vf_offset.valid == 0) {
+ pf_infos = kzalloc(sizeof(*pf_infos), GFP_KERNEL);
+ if (!pf_infos) {
+ sdk_err(&pf_pci_adapter->pcidev->dev, "Malloc pf_infos fail\n");
+ err = -ENOMEM;
+ goto err_malloc;
+ }
+
+ err = hinic3_get_hw_pf_infos(pf_pci_adapter->hwdev, pf_infos, HINIC3_CHANNEL_COMM);
+ if (err) {
+ sdk_warn(&pf_pci_adapter->pcidev->dev, "Hinic3_get_hw_pf_infos fail err %d\n", err);
+ err = -EFAULT;
+ goto err_out;
+ }
+
+ g_vf_offset.valid = 1;
+ for (i = 0; i < CMD_MAX_MAX_PF_NUM; i++) {
+ g_vf_offset.vf_offset_from_pf[i] = pf_infos->infos[i].vf_offset;
+ }
+
+ kfree(pf_infos);
+ }
+
+ *vf_offset = g_vf_offset.vf_offset_from_pf[pf_func_id];
+
+ mutex_unlock(&g_vf_offset_lock);
+
+ return 0;
+
+err_out:
+ kfree(pf_infos);
+err_malloc:
+ mutex_unlock(&g_vf_offset_lock);
+ return err;
+}
+
+static struct pci_dev *get_vf_pdev_by_pf(struct hinic3_pcidev *des_dev,
+ u16 func_id)
+{
+ int err;
+ u16 bus_num;
+ u16 vf_start, vf_end;
+ u16 des_fn, pf_func_id, vf_offset;
+
+ vf_start = hinic3_glb_pf_vf_offset(des_dev->hwdev);
+ vf_end = vf_start + hinic3_func_max_vf(des_dev->hwdev);
+ pf_func_id = hinic3_global_func_id(des_dev->hwdev);
+ if (func_id <= vf_start || func_id > vf_end || pf_func_id >= CMD_MAX_MAX_PF_NUM)
+ return NULL;
+
+ err = hinic3_pf_get_vf_offset_info(des_dev, &vf_offset);
+ if (err) {
+ sdk_warn(&des_dev->pcidev->dev, "Hinic3_pf_get_vf_offset_info fail\n");
+ return NULL;
+ }
+
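+ /* translate the global function id into the destination vf's pci bus number and devfn */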
+ des_fn = ((func_id - vf_start) - 1) + pf_func_id + vf_offset;
+ bus_num = des_dev->pcidev->bus->number + des_fn / BUS_MAX_DEV_NUM;
+
+ return pci_get_domain_bus_and_slot(0, bus_num, (des_fn % BUS_MAX_DEV_NUM));
+}
+
+static struct hinic3_pcidev *get_des_pci_adapter(struct hinic3_pcidev *des_dev,
+ u16 func_id)
+{
+ struct pci_dev *des_pdev = NULL;
+ u16 vf_start, vf_end;
+ bool probe_in_host = false;
+
+ if (hinic3_global_func_id(des_dev->hwdev) == func_id)
+ return des_dev;
+
+ vf_start = hinic3_glb_pf_vf_offset(des_dev->hwdev);
+ vf_end = vf_start + hinic3_func_max_vf(des_dev->hwdev);
+ if (func_id <= vf_start || func_id > vf_end)
+ return NULL;
+
+ des_pdev = get_vf_pdev_by_pf(des_dev, func_id);
+ if (!des_pdev)
+ return NULL;
+
+ pci_dev_put(des_pdev);
+
+ probe_in_host = hinic3_get_func_probe_in_host(des_dev->hwdev, func_id);
+ if (!probe_in_host)
+ return NULL;
+
+ return pci_get_drvdata(des_pdev);
+}
+
+int __set_vroce_func_state(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ u16 func_id;
+ int err;
+ u8 enable_vroce = false;
+
+ func_id = hinic3_global_func_id(pci_adapter->hwdev);
+
+ err = hinic3_get_func_vroce_enable(pci_adapter->hwdev, func_id, &enable_vroce);
+ if (0 != err) {
+ sdk_err(&pdev->dev, "Failed to get vroce state.\n");
+ return err;
+ }
+
+ mutex_lock(&g_uld_mutex);
+
+ if (!!enable_vroce) {
+ if (!g_uld_info[SERVICE_T_ROCE].probe) {
+ sdk_info(&pdev->dev, "Uld(roce_info) has not been registered!\n");
+ mutex_unlock(&g_uld_mutex);
+ return 0;
+ }
+
+ err = attach_uld(pci_adapter, SERVICE_T_ROCE, &g_uld_info[SERVICE_T_ROCE]);
+ if (0 != err) {
+ sdk_err(&pdev->dev, "Failed to initialize VROCE.\n");
+ mutex_unlock(&g_uld_mutex);
+ return err;
+ }
+ } else {
+ sdk_info(&pdev->dev, "Func %hu vroce state: disable.\n", func_id);
+ if (g_uld_info[SERVICE_T_ROCE].remove)
+ detach_uld(pci_adapter, SERVICE_T_ROCE);
+ }
+
+ mutex_unlock(&g_uld_mutex);
+
+ return 0;
+}
+
+void slave_host_mgmt_vroce_work(struct work_struct *work)
+{
+ struct hinic3_pcidev *pci_adapter =
+ container_of(work, struct hinic3_pcidev, slave_vroce_work);
+
+ __set_vroce_func_state(pci_adapter);
+}
+
+void *hinic3_get_roce_uld_by_pdev(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ return pci_adapter->uld_dev[SERVICE_T_ROCE];
+}
+
+static int __func_service_state_process(struct hinic3_pcidev *event_dev,
+ struct hinic3_pcidev *des_dev,
+ struct hinic3_mhost_nic_func_state *state, u16 cmd)
+{
+ int err = 0;
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)event_dev->hwdev;
+
+ switch (cmd) {
+ case HINIC3_MHOST_GET_VROCE_STATE:
+ state->enable = hinic3_get_roce_uld_by_pdev(des_dev->pcidev) ? 1 : 0;
+ break;
+ case HINIC3_MHOST_NIC_STATE_CHANGE:
+ sdk_info(&des_dev->pcidev->dev, "Receive nic[%u] state changed event, state: %u\n",
+ state->func_idx, state->enable);
+ if (event_dev->multi_host_mgmt_workq) {
+ queue_work(event_dev->multi_host_mgmt_workq, &des_dev->slave_nic_work);
+ } else {
+ sdk_err(&des_dev->pcidev->dev, "Can not schedule slave nic work\n");
+ err = -EFAULT;
+ }
+ break;
+ case HINIC3_MHOST_VROCE_STATE_CHANGE:
+ sdk_info(&des_dev->pcidev->dev, "Receive vroce[%u] state changed event, state: %u\n",
+ state->func_idx, state->enable);
+ queue_work_on(hisdk3_get_work_cpu_affinity(dev, WORK_TYPE_MBOX),
+ event_dev->multi_host_mgmt_workq,
+ &des_dev->slave_vroce_work);
+ break;
+ default:
+ sdk_warn(&des_dev->pcidev->dev, "Service state process with unknown cmd: %u\n", cmd);
+ err = -EFAULT;
+ break;
+ }
+
+ return err;
+}
+
+static void __multi_host_mgmt(struct hinic3_pcidev *dev,
+ struct hinic3_multi_host_mgmt_event *mhost_mgmt)
+{
+ struct hinic3_pcidev *cur_dev = NULL;
+ struct hinic3_pcidev *des_dev = NULL;
+ struct hinic3_mhost_nic_func_state *nic_state = NULL;
+ u16 sub_cmd = mhost_mgmt->sub_cmd;
+
+ switch (sub_cmd) {
+ case HINIC3_MHOST_GET_VROCE_STATE:
+ case HINIC3_MHOST_VROCE_STATE_CHANGE:
+ case HINIC3_MHOST_NIC_STATE_CHANGE:
+ nic_state = mhost_mgmt->data;
+ nic_state->status = 0;
+ if (!dev->hwdev)
+ return;
+
+ if (!IS_BMGW_SLAVE_HOST((struct hinic3_hwdev *)dev->hwdev))
+ return;
+
+ /* find func_idx pci_adapter and disable or enable nic */
+ lld_dev_hold(&dev->lld_dev);
+ list_for_each_entry(cur_dev, &dev->chip_node->func_list, node) {
+ if (cur_dev->lld_state == HINIC3_IN_REMOVE || hinic3_pdev_is_virtfn(cur_dev->pcidev))
+ continue;
+
+ des_dev = get_des_pci_adapter(cur_dev, nic_state->func_idx);
+ if (!des_dev)
+ continue;
+
+ if (__func_service_state_process(dev, des_dev, nic_state, sub_cmd))
+ nic_state->status = 1;
+ break;
+ }
+ lld_dev_put(&dev->lld_dev);
+ break;
+ default:
+ sdk_warn(&dev->pcidev->dev, "Received unknown multi-host mgmt event: %u\n",
+ mhost_mgmt->sub_cmd);
+ break;
+ }
+}
+
+static void hinic3_event_process(void *adapter, struct hinic3_event_info *event)
+{
+ struct hinic3_pcidev *dev = adapter;
+ struct hinic3_fault_event *fault = (void *)event->event_data;
+ struct hinic3_multi_host_mgmt_event *mhost_event = (void *)event->event_data;
+ u16 func_id;
+
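+ /* multi-host mgmt events are handled here, serious flr faults are forwarded to the pf that owns the faulting function, mgmt watchdog events are broadcast to every pf, and all other events go to this function's ulds */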
+ switch (HINIC3_SRV_EVENT_TYPE(event->service, event->type)) {
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_MULTI_HOST_MGMT):
+ __multi_host_mgmt(dev, mhost_event);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_FAULT):
+ if (fault->fault_level == FAULT_LEVEL_SERIOUS_FLR &&
+ fault->event.chip.func_id < hinic3_max_pf_num(dev->hwdev)) {
+ func_id = fault->event.chip.func_id;
+ return send_event_to_dst_pf(adapter, func_id, event);
+ }
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_MGMT_WATCHDOG):
+ send_event_to_all_pf(adapter, event);
+ break;
+ default:
+ send_uld_dev_event(adapter, event);
+ break;
+ }
+}
+
+static void uld_def_init(struct hinic3_pcidev *pci_adapter)
+{
+ int type;
+
+ for (type = 0; type < SERVICE_T_MAX; type++) {
+ atomic_set(&pci_adapter->uld_ref_cnt[type], 0);
+ clear_bit(type, &pci_adapter->uld_state);
+ }
+
+ spin_lock_init(&pci_adapter->uld_lock);
+}
+
+static int mapping_bar(struct pci_dev *pdev,
+ struct hinic3_pcidev *pci_adapter)
+{
+ int cfg_bar;
+
+ cfg_bar = HINIC3_IS_VF_DEV(pdev) ?
+ HINIC3_VF_PCI_CFG_REG_BAR : HINIC3_PF_PCI_CFG_REG_BAR;
+
+ pci_adapter->cfg_reg_base = pci_ioremap_bar(pdev, cfg_bar);
+ if (!pci_adapter->cfg_reg_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map configuration regs\n");
+ return -ENOMEM;
+ }
+
+ pci_adapter->intr_reg_base = pci_ioremap_bar(pdev,
+ HINIC3_PCI_INTR_REG_BAR);
+ if (!pci_adapter->intr_reg_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map interrupt regs\n");
+ goto map_intr_bar_err;
+ }
+
+ if (!HINIC3_IS_VF_DEV(pdev)) {
+ pci_adapter->mgmt_reg_base =
+ pci_ioremap_bar(pdev, HINIC3_PCI_MGMT_REG_BAR);
+ if (!pci_adapter->mgmt_reg_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map mgmt regs\n");
+ goto map_mgmt_bar_err;
+ }
+ }
+
+ pci_adapter->db_base_phy = pci_resource_start(pdev, HINIC3_PCI_DB_BAR);
+ pci_adapter->db_dwqe_len = pci_resource_len(pdev, HINIC3_PCI_DB_BAR);
+ pci_adapter->db_base = pci_ioremap_bar(pdev, HINIC3_PCI_DB_BAR);
+ if (!pci_adapter->db_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map doorbell regs\n");
+ goto map_db_err;
+ }
+
+ return 0;
+
+map_db_err:
+ if (!HINIC3_IS_VF_DEV(pdev))
+ iounmap(pci_adapter->mgmt_reg_base);
+
+map_mgmt_bar_err:
+ iounmap(pci_adapter->intr_reg_base);
+
+map_intr_bar_err:
+ iounmap(pci_adapter->cfg_reg_base);
+
+ return -ENOMEM;
+}
+
+static void unmapping_bar(struct hinic3_pcidev *pci_adapter)
+{
+ iounmap(pci_adapter->db_base);
+
+ if (!HINIC3_IS_VF_DEV(pci_adapter->pcidev))
+ iounmap(pci_adapter->mgmt_reg_base);
+
+ iounmap(pci_adapter->intr_reg_base);
+ iounmap(pci_adapter->cfg_reg_base);
+}
+
+static int hinic3_pci_init(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ int err;
+
+ pci_adapter = kzalloc(sizeof(*pci_adapter), GFP_KERNEL);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev,
+ "Failed to alloc pci device adapter\n");
+ return -ENOMEM;
+ }
+ pci_adapter->pcidev = pdev;
+ mutex_init(&pci_adapter->pdev_mutex);
+
+ pci_set_drvdata(pdev, pci_adapter);
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to enable PCI device\n");
+ goto pci_enable_err;
+ }
+
+ err = pci_request_regions(pdev, HINIC3_DRV_NAME);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to request regions\n");
+ goto pci_regions_err;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+
+ pci_set_master(pdev);
+
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64)); /* 64 bit DMA mask */
+ if (err) {
+ sdk_warn(&pdev->dev, "Couldn't set 64-bit DMA mask\n");
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); /* 32 bit DMA mask */
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to set DMA mask\n");
+ goto dma_mask_err;
+ }
+ }
+
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); /* 64 bit DMA mask */
+ if (err) {
+ sdk_warn(&pdev->dev,
+ "Couldn't set 64-bit coherent DMA mask\n");
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); /* 32 bit DMA mask */
+ if (err) {
+ sdk_err(&pdev->dev,
+ "Failed to set coherent DMA mask\n");
+ goto dma_consistent_mask_err;
+ }
+ }
+
+ return 0;
+
+dma_consistent_mask_err:
+dma_mask_err:
+ pci_clear_master(pdev);
+ pci_disable_pcie_error_reporting(pdev);
+ pci_release_regions(pdev);
+
+pci_regions_err:
+ pci_disable_device(pdev);
+
+pci_enable_err:
+ pci_set_drvdata(pdev, NULL);
+ kfree(pci_adapter);
+
+ return err;
+}
+
+static void hinic3_pci_deinit(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ pci_clear_master(pdev);
+ pci_release_regions(pdev);
+ pci_disable_pcie_error_reporting(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ kfree(pci_adapter);
+}
+
+static void set_vf_load_state(struct pci_dev *pdev, struct hinic3_pcidev *pci_adapter)
+{
+ /* In bm mode, the slave host loads vfs by default */
+ if (IS_BMGW_SLAVE_HOST(((struct hinic3_hwdev *)pci_adapter->hwdev)) &&
+ hinic3_func_type(pci_adapter->hwdev) != TYPE_VF)
+ hinic3_set_vf_load_state(pdev, false);
+
+ if (!disable_attach) {
+ if ((hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) &&
+ hinic3_is_bm_slave_host(pci_adapter->hwdev)) {
+ if (hinic3_func_max_vf(pci_adapter->hwdev) == 0) {
+ sdk_warn(&pdev->dev, "The sriov enabling process is skipped, vfs_num: 0.\n");
+ return;
+ }
+ hinic3_pci_sriov_enable(pdev, hinic3_func_max_vf(pci_adapter->hwdev));
+ }
+ }
+}
+
+static void hinic3_init_ppf_hwdev(struct hinic3_hwdev *hwdev)
+{
+ if (!hwdev) {
+ pr_err("[%s:%d] null hwdev pointer\n", __FILE__, __LINE__);
+ return;
+ }
+
+ hwdev->ppf_hwdev = hinic3_get_ppf_hwdev_by_pdev(hwdev->pcidev_hdl);
+ return;
+}
+
+static int set_nic_func_state(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ u16 func_id;
+ int err;
+ bool enable_nic = false;
+
+ func_id = hinic3_global_func_id(pci_adapter->hwdev);
+
+ err = hinic3_get_func_nic_enable(pci_adapter->hwdev, func_id, &enable_nic);
+ if (0 != err) {
+ sdk_err(&pdev->dev, "Failed to get nic state.\n");
+ return err;
+ }
+
+ if (!enable_nic) {
+ sdk_info(&pdev->dev, "Func %hu nic state: disable.\n", func_id);
+ detach_uld(pci_adapter, SERVICE_T_NIC);
+ return 0;
+ }
+
+ if (IS_BMGW_SLAVE_HOST((struct hinic3_hwdev *)pci_adapter->hwdev))
+ (void)hinic3_init_vf_dev_cap(pci_adapter->hwdev);
+
+ if (g_uld_info[SERVICE_T_NIC].probe) {
+ err = attach_uld(pci_adapter, SERVICE_T_NIC, &g_uld_info[SERVICE_T_NIC]);
+ if (0 != err) {
+ sdk_err(&pdev->dev, "Initialize NIC failed\n");
+ return err;
+ }
+ }
+
+ return 0;
+}
+
+static int hinic3_func_init(struct pci_dev *pdev, struct hinic3_pcidev *pci_adapter)
+{
+ struct hinic3_init_para init_para = {0};
+ bool cqm_init_en = false;
+ int err;
+
+ init_para.adapter_hdl = pci_adapter;
+ init_para.pcidev_hdl = pdev;
+ init_para.dev_hdl = &pdev->dev;
+ init_para.cfg_reg_base = pci_adapter->cfg_reg_base;
+ init_para.intr_reg_base = pci_adapter->intr_reg_base;
+ init_para.mgmt_reg_base = pci_adapter->mgmt_reg_base;
+ init_para.db_base = pci_adapter->db_base;
+ init_para.db_base_phy = pci_adapter->db_base_phy;
+ init_para.db_dwqe_len = pci_adapter->db_dwqe_len;
+ init_para.hwdev = &pci_adapter->hwdev;
+ init_para.chip_node = pci_adapter->chip_node;
+ init_para.probe_fault_level = pci_adapter->probe_fault_level;
+ err = hinic3_init_hwdev(&init_para);
+ if (err) {
+ pci_adapter->hwdev = NULL;
+ pci_adapter->probe_fault_level = init_para.probe_fault_level;
+ sdk_err(&pdev->dev, "Failed to initialize hardware device\n");
+ return -EFAULT;
+ }
+
+ cqm_init_en = hinic3_need_init_stateful_default(pci_adapter->hwdev);
+ if (cqm_init_en) {
+ err = hinic3_stateful_init(pci_adapter->hwdev);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to init stateful\n");
+ goto stateful_init_err;
+ }
+ }
+
+ pci_adapter->lld_dev.pdev = pdev;
+
+ pci_adapter->lld_dev.hwdev = pci_adapter->hwdev;
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF)
+ set_bit(HINIC3_FUNC_PERSENT, &pci_adapter->sriov_info.state);
+
+ hinic3_event_register(pci_adapter->hwdev, pci_adapter,
+ hinic3_event_process);
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF)
+ hinic3_sync_time_to_fmw(pci_adapter);
+
+ /* dbgtool init */
+ lld_lock_chip_node();
+ err = nictool_k_init(pci_adapter->hwdev, pci_adapter->chip_node);
+ if (err) {
+ lld_unlock_chip_node();
+ sdk_err(&pdev->dev, "Failed to initialize dbgtool\n");
+ goto nictool_init_err;
+ }
+ list_add_tail(&pci_adapter->node, &pci_adapter->chip_node->func_list);
+ lld_unlock_chip_node();
+
+ hinic3_init_ppf_hwdev((struct hinic3_hwdev *)pci_adapter->hwdev);
+
+ set_vf_load_state(pdev, pci_adapter);
+
+ if (!disable_attach) {
+ /* NIC is base driver, probe firstly */
+ err = set_nic_func_state(pci_adapter);
+ if (err)
+ goto set_nic_func_state_err;
+
+ attach_ulds(pci_adapter);
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) {
+ err = sysfs_create_group(&pdev->dev.kobj,
+ &hinic3_attr_group);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to create sysfs group\n");
+ goto create_sysfs_err;
+ }
+ }
+ }
+
+ return 0;
+
+create_sysfs_err:
+ detach_ulds(pci_adapter);
+
+set_nic_func_state_err:
+ lld_lock_chip_node();
+ list_del(&pci_adapter->node);
+ lld_unlock_chip_node();
+
+ wait_lld_dev_unused(pci_adapter);
+
+ lld_lock_chip_node();
+ nictool_k_uninit(pci_adapter->hwdev, pci_adapter->chip_node);
+ lld_unlock_chip_node();
+
+nictool_init_err:
+ hinic3_event_unregister(pci_adapter->hwdev);
+ if (cqm_init_en)
+ hinic3_stateful_deinit(pci_adapter->hwdev);
+stateful_init_err:
+ hinic3_free_hwdev(pci_adapter->hwdev);
+
+ return err;
+}
+
+static void hinic3_func_deinit(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ /* When the function is deinitialized, first stop the mgmt cpu from
+ * reporting events on its own initiative, then flush the mgmt work-queue.
+ */
+ hinic3_disable_mgmt_msg_report(pci_adapter->hwdev);
+
+ hinic3_flush_mgmt_workq(pci_adapter->hwdev);
+
+ lld_lock_chip_node();
+ list_del(&pci_adapter->node);
+ lld_unlock_chip_node();
+
+ detach_ulds(pci_adapter);
+
+ wait_lld_dev_unused(pci_adapter);
+
+ lld_lock_chip_node();
+ nictool_k_uninit(pci_adapter->hwdev, pci_adapter->chip_node);
+ lld_unlock_chip_node();
+
+ hinic3_event_unregister(pci_adapter->hwdev);
+
+ hinic3_free_stateful(pci_adapter->hwdev);
+
+ hinic3_free_hwdev(pci_adapter->hwdev);
+ pci_adapter->hwdev = NULL;
+}
+
+static void wait_sriov_cfg_complete(struct hinic3_pcidev *pci_adapter)
+{
+ struct hinic3_sriov_info *sriov_info;
+ unsigned long end;
+
+ sriov_info = &pci_adapter->sriov_info;
+ clear_bit(HINIC3_FUNC_PERSENT, &sriov_info->state);
+ usleep_range(9900, 10000); /* sleep 9900 us ~ 10000 us */
+
+ end = jiffies + msecs_to_jiffies(HINIC3_WAIT_SRIOV_CFG_TIMEOUT);
+ do {
+ if (!test_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state) &&
+ !test_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state))
+ return;
+
+ usleep_range(9900, 10000); /* sleep 9900 us ~ 10000 us */
+ } while (time_before(jiffies, end));
+}
+
+static bool hinic3_get_vf_nic_en_status(struct pci_dev *pdev)
+{
+ bool nic_en = false;
+ u16 global_func_id;
+ struct pci_dev *pf_pdev = NULL;
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return false;
+ }
+
+ if (pdev->is_virtfn)
+ pf_pdev = pdev->physfn;
+ else
+ return false;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return false;
+ }
+
+ if (!IS_BMGW_SLAVE_HOST((struct hinic3_hwdev *)pci_adapter->hwdev))
+ return false;
+
+ if (hinic3_get_vfid_by_vfpci(NULL, pdev, &global_func_id)) {
+ sdk_err(&pdev->dev, "Get vf id by vfpci failed\n");
+ return false;
+ }
+
+ if (hinic3_get_mhost_func_nic_enable(pci_adapter->hwdev,
+ global_func_id, &nic_en)) {
+ sdk_err(&pdev->dev, "Get function nic status failed\n");
+ return false;
+ }
+
+ sdk_info(&pdev->dev, "Func %hu %s default probe in host\n",
+ global_func_id, (nic_en) ? "enable" : "disable");
+
+ return nic_en;
+}
+
+bool hinic3_get_vf_load_state(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return false;
+ }
+
+ /* vf used in vm */
+ if (pci_is_root_bus(pdev->bus))
+ return false;
+
+ if (pdev->is_virtfn)
+ pf_pdev = pdev->physfn;
+ else
+ pf_pdev = pdev;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return false;
+ }
+
+ return !pci_adapter->disable_vf_load;
+}
+
+int hinic3_set_vf_load_state(struct pci_dev *pdev, bool vf_load_state)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return -EINVAL;
+ }
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return -EINVAL;
+ }
+
+ if (hinic3_func_type(pci_adapter->hwdev) == TYPE_VF)
+ return 0;
+
+ pci_adapter->disable_vf_load = !vf_load_state;
+	sdk_info(&pci_adapter->pcidev->dev, "Current function %s vf load in host\n",
+		 vf_load_state ? "enables" : "disables");
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_vf_load_state);
+
+bool hinic3_get_vf_service_load(struct pci_dev *pdev, u16 service)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return false;
+ }
+
+ if (pdev->is_virtfn)
+ pf_pdev = pdev->physfn;
+ else
+ pf_pdev = pdev;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return false;
+ }
+
+ if (service >= SERVICE_T_MAX) {
+		sdk_err(&pdev->dev, "service_type = %u is invalid\n",
+ service);
+ return false;
+ }
+
+ return !pci_adapter->disable_srv_load[service];
+}
+
+int hinic3_set_vf_service_load(struct pci_dev *pdev, u16 service,
+ bool vf_srv_load)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return -EINVAL;
+ }
+
+ if (service >= SERVICE_T_MAX) {
+		sdk_err(&pdev->dev, "service_type = %u is invalid\n",
+ service);
+ return -EFAULT;
+ }
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return -EINVAL;
+ }
+
+ if (hinic3_func_type(pci_adapter->hwdev) == TYPE_VF)
+ return 0;
+
+ pci_adapter->disable_srv_load[service] = !vf_srv_load;
+	sdk_info(&pci_adapter->pcidev->dev, "Current function %s vf service load in host\n",
+		 vf_srv_load ? "enables" : "disables");
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_vf_service_load);
+
+static bool hinic3_is_host_vmsec_enable(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (pdev->is_virtfn) {
+ pf_pdev = pdev->physfn;
+ } else {
+ pf_pdev = pdev;
+ }
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ pr_err("Pci_adapter is null.\n");
+ return false;
+ }
+
+ /* pf/vf used in host */
+ if (IS_VM_SLAVE_HOST((struct hinic3_hwdev *)pci_adapter->hwdev) &&
+ (hinic3_func_type(pci_adapter->hwdev) == TYPE_PF) &&
+ IS_RDMA_TYPE((struct hinic3_hwdev *)pci_adapter->hwdev)) {
+ return true;
+ }
+
+ return false;
+}
+
+static int hinic3_remove_func(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ if (pci_adapter->lld_state != HINIC3_PROBE_OK) {
+		sdk_warn(&pdev->dev, "Current function does not need to be removed\n");
+ mutex_unlock(&pci_adapter->pdev_mutex);
+ return 0;
+ }
+ pci_adapter->lld_state = HINIC3_IN_REMOVE;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+
+ if (!(pdev->is_virtfn) && (hinic3_is_host_vmsec_enable(pdev) == true) &&
+ (hinic3_func_type((struct hinic3_hwdev *)pci_adapter->hwdev) == TYPE_PF)) {
+ cancel_delayed_work_sync(&pci_adapter->migration_probe_dwork);
+ flush_workqueue(pci_adapter->migration_probe_workq);
+ destroy_workqueue(pci_adapter->migration_probe_workq);
+ }
+
+ hinic3_detect_hw_present(pci_adapter->hwdev);
+
+ hisdk3_remove_pre_process(pci_adapter->hwdev);
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) {
+ sysfs_remove_group(&pdev->dev.kobj, &hinic3_attr_group);
+ wait_sriov_cfg_complete(pci_adapter);
+ hinic3_pci_sriov_disable(pdev);
+ }
+
+ hinic3_func_deinit(pdev);
+
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+
+ unmapping_bar(pci_adapter);
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ pci_adapter->lld_state = HINIC3_NOT_PROBE;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+
+ sdk_info(&pdev->dev, "Pcie device removed function\n");
+
+ set_vf_func_in_use(pdev, false);
+
+ return 0;
+}
+
+int hinic3_get_vfid_by_vfpci(void *hwdev, struct pci_dev *pdev, u16 *global_func_id)
+{
+ struct pci_dev *pf_pdev = NULL;
+ struct hinic3_pcidev *pci_adapter = NULL;
+ u16 pf_bus, vf_bus, vf_offset;
+ int err;
+
+ if (!pdev || !global_func_id || !hinic3_pdev_is_virtfn(pdev))
+ return -EINVAL;
+ (void)hwdev;
+ pf_pdev = pdev->physfn;
+
+ vf_bus = pdev->bus->number;
+ pf_bus = pf_pdev->bus->number;
+
+ if (pdev->vendor == HINIC3_VIRTIO_VNEDER_ID) {
+ return -EPERM;
+ }
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return -EINVAL;
+ }
+
+ err = hinic3_pf_get_vf_offset_info(pci_adapter, &vf_offset);
+ if (err) {
+		sdk_err(&pdev->dev, "hinic3_pf_get_vf_offset_info failed\n");
+ return -EFAULT;
+ }
+
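+	/* Derive the VF's global function id from its bus/devfn offset relative to the parent PF */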
+ *global_func_id = (u16)((vf_bus - pf_bus) * BUS_MAX_DEV_NUM) + (u16)pdev->devfn +
+ (u16)(CMD_MAX_MAX_PF_NUM - g_vf_offset.vf_offset_from_pf[0]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_vfid_by_vfpci);
+
+static void hinic3_set_vf_status_in_host(struct pci_dev *pdev, bool status)
+{
+ struct pci_dev *pf_pdev = pdev->physfn;
+ struct hinic3_pcidev *pci_adapter = NULL;
+ void *pf_hwdev = NULL;
+ void *ppf_hwdev = NULL;
+ u16 global_func_id;
+ int ret;
+
+ if (!pf_pdev)
+ return;
+
+ if (!hinic3_pdev_is_virtfn(pdev))
+ return;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ pf_hwdev = pci_adapter->hwdev;
+ ppf_hwdev = hinic3_get_ppf_hwdev_by_pdev(pf_pdev);
+ if (!pf_hwdev || !ppf_hwdev)
+ return;
+
+ ret = hinic3_get_vfid_by_vfpci(NULL, pdev, &global_func_id);
+ if (ret) {
+		sdk_err(&pci_adapter->pcidev->dev, "hinic3_get_vfid_by_vfpci failed, ret: %d\n", ret);
+ return;
+ }
+
+ ret = hinic3_set_func_probe_in_host(ppf_hwdev, global_func_id, status);
+ if (ret)
+ sdk_err(&pci_adapter->pcidev->dev, "Set the function probe status in host failed\n");
+}
+#ifdef CONFIG_PCI_IOV
+static bool check_pdev_type_and_state(struct pci_dev *pdev)
+{
+ if (!(pdev->is_virtfn)) {
+ return false;
+ }
+
+ if ((hinic3_get_pf_device_id(pdev) != HINIC3_DEV_ID_SDI_5_1_PF) &&
+ (hinic3_get_pf_device_id(pdev) != HINIC3_DEV_ID_SDI_5_0_PF)) {
+ return false;
+ }
+
+ if (!hinic3_get_vf_load_state(pdev)) {
+ return false;
+ }
+
+ return true;
+}
+#endif
+
+static void hinic3_remove(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ sdk_info(&pdev->dev, "Pcie device remove begin\n");
+
+ if (!pci_adapter)
+ goto out;
+#ifdef CONFIG_PCI_IOV
+ if (check_pdev_type_and_state(pdev)) {
+ goto out;
+ }
+#endif
+
+ cancel_work_sync(&pci_adapter->slave_nic_work);
+ cancel_work_sync(&pci_adapter->slave_vroce_work);
+
+ hinic3_remove_func(pci_adapter);
+
+ if (!pci_adapter->pcidev->is_virtfn &&
+ pci_adapter->multi_host_mgmt_workq)
+ destroy_workqueue(pci_adapter->multi_host_mgmt_workq);
+
+ hinic3_pci_deinit(pdev);
+ hinic3_probe_pre_unprocess(pdev);
+
+out:
+ hinic3_set_vf_status_in_host(pdev, false);
+
+ sdk_info(&pdev->dev, "Pcie device removed\n");
+}
+
+static int probe_func_param_init(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = NULL;
+
+ if (!pci_adapter)
+ return -EFAULT;
+
+ pdev = pci_adapter->pcidev;
+ if (!pdev)
+ return -EFAULT;
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ if (pci_adapter->lld_state >= HINIC3_PROBE_START) {
+		sdk_warn(&pdev->dev, "Do not probe repeatedly\n");
+ mutex_unlock(&pci_adapter->pdev_mutex);
+ return -EEXIST;
+ }
+ pci_adapter->lld_state = HINIC3_PROBE_START;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+
+ return 0;
+}
+
+static void hinic3_probe_success_process(struct hinic3_pcidev *pci_adapter)
+{
+ hinic3_probe_success(pci_adapter->hwdev);
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ pci_adapter->lld_state = HINIC3_PROBE_OK;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+}
+
+static int hinic3_probe_func(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ int err;
+
+ err = probe_func_param_init(pci_adapter);
+ if (err == -EEXIST)
+ return 0;
+ else if (err)
+ return err;
+
+ set_vf_func_in_use(pdev, true);
+
+ err = mapping_bar(pdev, pci_adapter);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to map bar\n");
+ goto map_bar_failed;
+ }
+
+ uld_def_init(pci_adapter);
+
+	/* if chip information for this pcie function already exists, add the function to that chip */
+ lld_lock_chip_node();
+ err = alloc_chip_node(pci_adapter);
+ if (err) {
+ lld_unlock_chip_node();
+ sdk_err(&pdev->dev, "Failed to add new chip node to global list\n");
+ goto alloc_chip_node_fail;
+ }
+ lld_unlock_chip_node();
+
+ err = hinic3_func_init(pdev, pci_adapter);
+ if (err)
+ goto func_init_err;
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) {
+ err = hinic3_set_bdf_ctxt(pci_adapter->hwdev, pdev->bus->number,
+ PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to set BDF info to MPU\n");
+ goto set_bdf_err;
+ }
+ }
+
+ hinic3_probe_success_process(pci_adapter);
+
+ return 0;
+
+set_bdf_err:
+ hinic3_func_deinit(pdev);
+
+func_init_err:
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+
+alloc_chip_node_fail:
+ unmapping_bar(pci_adapter);
+
+map_bar_failed:
+ set_vf_func_in_use(pdev, false);
+ sdk_err(&pdev->dev, "Pcie device probe function failed\n");
+ return err;
+}
+
+void hinic3_set_func_state(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ int err;
+ bool enable_func = false;
+
+ err = hinic3_get_function_enable(pdev, &enable_func);
+ if (err) {
+ sdk_info(&pdev->dev, "Get function enable failed\n");
+ return;
+ }
+
+ sdk_info(&pdev->dev, "%s function resource start\n",
+ enable_func ? "Initialize" : "Free");
+ if (enable_func) {
+ err = hinic3_probe_func(pci_adapter);
+ if (err)
+ sdk_info(&pdev->dev, "Function probe failed\n");
+ } else {
+ hinic3_remove_func(pci_adapter);
+ }
+ if (err == 0)
+ sdk_info(&pdev->dev, "%s function resource end\n",
+ enable_func ? "Initialize" : "Free");
+}
+
+void slave_host_mgmt_work(struct work_struct *work)
+{
+ struct hinic3_pcidev *pci_adapter =
+ container_of(work, struct hinic3_pcidev, slave_nic_work);
+
+ if (hinic3_pdev_is_virtfn(pci_adapter->pcidev))
+ hinic3_set_func_state(pci_adapter);
+ else
+ set_nic_func_state(pci_adapter);
+}
+
+static int pci_adapter_assign_val(struct hinic3_pcidev **ppci_adapter,
+ struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ *ppci_adapter = pci_get_drvdata(pdev);
+ (*ppci_adapter)->disable_vf_load = disable_vf_load;
+ (*ppci_adapter)->id = *id;
+ (*ppci_adapter)->lld_state = HINIC3_NOT_PROBE;
+ (*ppci_adapter)->probe_fault_level = FAULT_LEVEL_SERIOUS_FLR;
+ lld_dev_cnt_init(*ppci_adapter);
+
+ (*ppci_adapter)->multi_host_mgmt_workq =
+ alloc_workqueue("hinic_mhost_mgmt", WQ_UNBOUND,
+ HINIC3_SLAVE_WORK_MAX_NUM);
+ if (!(*ppci_adapter)->multi_host_mgmt_workq) {
+ hinic3_pci_deinit(pdev);
+ sdk_err(&pdev->dev, "Alloc multi host mgmt workqueue failed\n");
+ return -ENOMEM;
+ }
+
+ INIT_WORK(&(*ppci_adapter)->slave_nic_work, slave_host_mgmt_work);
+ INIT_WORK(&(*ppci_adapter)->slave_vroce_work,
+ slave_host_mgmt_vroce_work);
+
+ return 0;
+}
+
+static void slave_host_vfio_probe_delay_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_pcidev *pci_adapter = container_of(delay, struct hinic3_pcidev, migration_probe_dwork);
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ int (*dev_migration_probe)(struct pci_dev *);
+ int rc;
+
+ if (hinic3_func_type((struct hinic3_hwdev *)pci_adapter->hwdev) != TYPE_PF) {
+ return;
+ }
+
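+	/* The migration driver may not be loaded yet; if its symbol is missing, requeue this work and retry */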
+ dev_migration_probe = __symbol_get("migration_dev_migration_probe");
+ if (!(dev_migration_probe)) {
+ sdk_err(&pdev->dev,
+			"Failed to find symbol: migration_dev_migration_probe\n");
+ queue_delayed_work(pci_adapter->migration_probe_workq,
+ &pci_adapter->migration_probe_dwork, WAIT_TIME * HZ);
+ } else {
+ rc = dev_migration_probe(pdev);
+ __symbol_put("migration_dev_migration_probe");
+ if (rc) {
+ sdk_err(&pdev->dev,
+				"Failed to call migration_dev_migration_probe, rc: 0x%x, pf migrated(%d).\n",
+ rc, g_is_pf_migrated);
+ } else {
+ g_is_pf_migrated = true;
+ sdk_info(&pdev->dev,
+				 "Succeeded in migration_dev_migration_probe, pf migrated(%d).\n",
+ g_is_pf_migrated);
+ }
+ }
+
+ return;
+}
+
+struct vf_add_delaywork {
+ struct pci_dev *vf_pdev;
+ struct delayed_work migration_vf_add_dwork;
+};
+
+static void slave_host_migration_vf_add_delay_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct vf_add_delaywork *vf_add = container_of(delay, struct vf_add_delaywork, migration_vf_add_dwork);
+ struct pci_dev *vf_pdev = vf_add->vf_pdev;
+ struct pci_dev *pf_pdev = NULL;
+ int (*migration_dev_add_vf)(struct pci_dev *);
+ int ret;
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!vf_pdev) {
+ pr_err("vf pdev is null.\n");
+ goto err1;
+ }
+ if (!vf_pdev->is_virtfn) {
+ sdk_err(&vf_pdev->dev, "Pdev is not virtfn.\n");
+ goto err1;
+ }
+
+ pf_pdev = vf_pdev->physfn;
+ if (!pf_pdev) {
+ sdk_err(&vf_pdev->dev, "pf_pdev is null.\n");
+ goto err1;
+ }
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&vf_pdev->dev, "Pci_adapter is null.\n");
+ goto err1;
+ }
+
+ if (!g_is_pf_migrated) {
+		sdk_info(&vf_pdev->dev, "pf is not migrated yet, vf will retry later.\n");
+ goto delay_work;
+ }
+
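+	/* PF has migrated: try to hand this VF over to the migration driver */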
+ migration_dev_add_vf = __symbol_get("migration_dev_add_vf");
+ if (migration_dev_add_vf) {
+ ret = migration_dev_add_vf(vf_pdev);
+ __symbol_put("migration_dev_add_vf");
+ if (ret) {
+ sdk_err(&vf_pdev->dev,
+				"vf got migration symbol, but dev add vf failed, ret: %d.\n",
+ ret);
+ } else {
+ sdk_info(&vf_pdev->dev,
+				 "vf got migration symbol and dev add vf succeeded.\n");
+ }
+ goto err1;
+ }
+	sdk_info(&vf_pdev->dev, "pf is migrated, but vf failed to get migration symbol.\n");
+
+delay_work:
+ queue_delayed_work(pci_adapter->migration_probe_workq,
+ &vf_add->migration_vf_add_dwork, WAIT_TIME * HZ);
+ return;
+
+err1:
+ kfree(vf_add);
+ return;
+}
+
+static void hinic3_probe_vf_add_dwork(struct pci_dev *pdev)
+{
+ struct pci_dev *pf_pdev = NULL;
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!hinic3_is_host_vmsec_enable(pdev)) {
+ return;
+ }
+
+#if defined(CONFIG_SP_VID_DID)
+ if ((pdev->vendor == PCI_VENDOR_ID_SPNIC) && (pdev->device == HINIC3_DEV_SDI_5_1_ID_VF)) {
+#elif defined(CONFIG_NF_VID_DID)
+ if ((pdev->vendor == PCI_VENDOR_ID_NF) && (pdev->device == NFNIC_DEV_ID_VF)) {
+#else
+ if ((pdev->vendor == PCI_VENDOR_ID_HUAWEI) && (pdev->device == HINIC3_DEV_SDI_5_0_ID_VF)) {
+#endif
+ struct vf_add_delaywork *vf_add = kmalloc(sizeof(struct vf_add_delaywork), GFP_ATOMIC);
+ if (!vf_add) {
+			sdk_info(&pdev->dev, "Failed to alloc vf_add delay work\n");
+ return;
+ }
+ vf_add->vf_pdev = pdev;
+
+ pf_pdev = pdev->physfn;
+
+ if (!pf_pdev) {
+ sdk_info(&pdev->dev, "Vf-pf_pdev is null.\n");
+ kfree(vf_add);
+ return;
+ }
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_info(&pdev->dev, "Pci_adapter is null.\n");
+ kfree(vf_add);
+ return;
+ }
+
+ INIT_DELAYED_WORK(&vf_add->migration_vf_add_dwork,
+ slave_host_migration_vf_add_delay_work);
+
+ queue_delayed_work(pci_adapter->migration_probe_workq,
+ &vf_add->migration_vf_add_dwork,
+ WAIT_TIME * HZ);
+ }
+
+ return;
+}
+
+static int hinic3_probe_migration_dwork(struct pci_dev *pdev, struct hinic3_pcidev *pci_adapter)
+{
+ if (!hinic3_is_host_vmsec_enable(pdev)) {
+		sdk_info(&pdev->dev, "Probe migration: hinic3_is_host_vmsec_enable is 0\n");
+ return 0;
+ }
+
+ if (IS_VM_SLAVE_HOST((struct hinic3_hwdev *)pci_adapter->hwdev) &&
+ hinic3_func_type((struct hinic3_hwdev *)pci_adapter->hwdev) == TYPE_PF) {
+ pci_adapter->migration_probe_workq =
+ create_singlethread_workqueue("hinic3_migration_probe_delay");
+ if (!pci_adapter->migration_probe_workq) {
+ sdk_err(&pdev->dev, "Failed to create work queue:%s\n",
+ "hinic3_migration_probe_delay");
+ return -EINVAL;
+ }
+
+ INIT_DELAYED_WORK(&pci_adapter->migration_probe_dwork,
+ slave_host_vfio_probe_delay_work);
+
+ queue_delayed_work(pci_adapter->migration_probe_workq,
+ &pci_adapter->migration_probe_dwork, WAIT_TIME * HZ);
+ }
+
+ return 0;
+}
+
+static bool hinic3_os_hot_replace_allow(struct hinic3_pcidev *pci_adapter)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)pci_adapter->hwdev;
+	// check hot replace is enabled and dev is not a VF
+ if (hinic3_func_type(hwdev) == TYPE_VF || hwdev->hot_replace_mode == HOT_REPLACE_DISABLE)
+ return false;
+
+ return true;
+}
+
+static bool hinic3_os_hot_replace_process(struct hinic3_pcidev *pci_adapter)
+{
+ struct hinic3_board_info *board_info;
+ u16 cur_pf_id = hinic3_global_func_id(pci_adapter->hwdev);
+ u8 cur_partion_id;
+ board_info = &((struct hinic3_hwdev *)(pci_adapter->hwdev))->board_info;
+ // probe to os
+ vpci_set_partition_attrs(pci_adapter->pcidev, PARTITION_DEV_EXCLUSIVE,
+ get_function_partition(cur_pf_id, board_info->port_num));
+
+ // check pf_id is in the right partition_id
+ cur_partion_id = get_partition_id();
+ if (get_function_partition(cur_pf_id, board_info->port_num) == cur_partion_id) {
+ return true;
+ }
+
+ pci_adapter->probe_fault_level = FAULT_LEVEL_SUGGESTION;
+ return false;
+}
+
+static int hinic3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ u16 probe_fault_level = FAULT_LEVEL_SERIOUS_FLR;
+ u32 device_id, function_id;
+ int err;
+
+ sdk_info(&pdev->dev, "Pcie device probe begin\n");
+#ifdef CONFIG_PCI_IOV
+ hinic3_set_vf_status_in_host(pdev, true);
+ if (check_pdev_type_and_state(pdev)) {
+		sdk_info(&pdev->dev, "VFs are not bound to hinic\n");
+ hinic3_probe_vf_add_dwork(pdev);
+ return -EINVAL;
+ }
+#endif
+ err = hinic3_probe_pre_process(pdev);
+ if (err != 0 && err != HINIC3_NOT_PROBE)
+ goto out;
+
+ if (err == HINIC3_NOT_PROBE)
+ return 0;
+
+ if (hinic3_pci_init(pdev))
+ goto pci_init_err;
+
+ if (pci_adapter_assign_val(&pci_adapter, pdev, id))
+		goto alloc_queue_err;
+
+ if (pdev->is_virtfn && (!hinic3_get_vf_load_state(pdev)) &&
+ (!hinic3_get_vf_nic_en_status(pdev))) {
+ sdk_info(&pdev->dev, "VF device disable load in host\n");
+ return 0;
+ }
+
+ if (hinic3_probe_func(pci_adapter))
+ goto hinic3_probe_func_fail;
+
+ if (hinic3_os_hot_replace_allow(pci_adapter)) {
+ if (!hinic3_os_hot_replace_process(pci_adapter)) {
+ device_id = PCI_SLOT(pdev->devfn);
+ function_id = PCI_FUNC(pdev->devfn);
+ sdk_info(&pdev->dev,
+				 "os hot replace: skip function %d:%d for partition %d\n",
+ device_id, function_id, get_partition_id());
+			goto os_hot_replace_not_allow;
+ }
+ }
+
+ if (hinic3_probe_migration_dwork(pdev, pci_adapter))
+ goto hinic3_probe_func_fail;
+
+ sdk_info(&pdev->dev, "Pcie device probed\n");
+ return 0;
+
+os_hot_replace_not_allow:
+ hinic3_func_deinit(pdev);
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+ unmapping_bar(pci_adapter);
+ set_vf_func_in_use(pdev, false);
+
+hinic3_probe_func_fail:
+ destroy_workqueue(pci_adapter->multi_host_mgmt_workq);
+ cancel_work_sync(&pci_adapter->slave_nic_work);
+ cancel_work_sync(&pci_adapter->slave_vroce_work);
+alloc_queue_err:
+ probe_fault_level = pci_adapter->probe_fault_level;
+ hinic3_pci_deinit(pdev);
+pci_init_err:
+ hinic3_probe_pre_unprocess(pdev);
+
+out:
+ hinic3_probe_fault_process(pdev, probe_fault_level);
+ sdk_err(&pdev->dev, "Pcie device probe failed\n");
+ return err;
+}
+
+static int hinic3_get_pf_info(struct pci_dev *pdev, u16 service,
+ struct hinic3_hw_pf_infos **pf_infos)
+{
+ struct hinic3_pcidev *dev = pci_get_drvdata(pdev);
+ int err;
+
+ if (service >= SERVICE_T_MAX) {
+		sdk_err(&pdev->dev, "Current vf does not support setting service_type = %u state in host\n",
+ service);
+ return -EFAULT;
+ }
+
+ *pf_infos = kzalloc(sizeof(struct hinic3_hw_pf_infos), GFP_KERNEL);
+ if (*pf_infos == NULL) {
+ sdk_err(&pdev->dev, "pf_infos kzalloc failed\n");
+ return -EFAULT;
+ }
+ err = hinic3_get_hw_pf_infos(dev->hwdev, *pf_infos, HINIC3_CHANNEL_COMM);
+ if (err) {
+ kfree(*pf_infos);
+		sdk_err(&pdev->dev, "Get chip pf info failed, ret %d\n", err);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_func_en(struct pci_dev *des_pdev, struct hinic3_pcidev *dst_dev,
+ bool en, u16 vf_func_id)
+{
+ int err;
+
+ mutex_lock(&dst_dev->pdev_mutex);
+	/* reject an unload request with a mismatched vf func id */
+ if (!en && vf_func_id != hinic3_global_func_id(dst_dev->hwdev) &&
+ !strcmp(des_pdev->driver->name, HINIC3_DRV_NAME)) {
+ pr_err("dst_dev func id:%u, vf_func_id:%u\n",
+ hinic3_global_func_id(dst_dev->hwdev), vf_func_id);
+ mutex_unlock(&dst_dev->pdev_mutex);
+ return -EFAULT;
+ }
+
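+	/* Tear the function down when disabling, probe it when enabling; otherwise leave it as is */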
+ if (!en && dst_dev->lld_state == HINIC3_PROBE_OK) {
+ mutex_unlock(&dst_dev->pdev_mutex);
+ hinic3_remove_func(dst_dev);
+ } else if (en && dst_dev->lld_state == HINIC3_NOT_PROBE) {
+ mutex_unlock(&dst_dev->pdev_mutex);
+ err = hinic3_probe_func(dst_dev);
+ if (err)
+ return -EFAULT;
+ } else {
+ mutex_unlock(&dst_dev->pdev_mutex);
+ }
+
+ return 0;
+}
+
+static int get_vf_service_state_param(struct pci_dev *pdev, struct hinic3_pcidev **dev_ptr,
+ u16 service, struct hinic3_hw_pf_infos **pf_infos)
+{
+ int err;
+
+ if (!pdev)
+ return -EINVAL;
+
+ *dev_ptr = pci_get_drvdata(pdev);
+ if (!(*dev_ptr))
+ return -EINVAL;
+
+ err = hinic3_get_pf_info(pdev, service, pf_infos);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int hinic3_dst_pdev_valid(struct hinic3_pcidev *dst_dev, struct pci_dev **des_pdev_ptr,
+ u16 vf_devfn, bool en)
+{
+ u16 bus;
+
+ bus = dst_dev->pcidev->bus->number + vf_devfn / BUS_MAX_DEV_NUM;
+ *des_pdev_ptr = pci_get_domain_bus_and_slot(pci_domain_nr(dst_dev->pcidev->bus),
+ bus, vf_devfn % BUS_MAX_DEV_NUM);
+ if (!(*des_pdev_ptr)) {
+ pr_err("des_pdev is NULL\n");
+ return -EFAULT;
+ }
+
+ if ((*des_pdev_ptr)->driver == NULL) {
+ pr_err("des_pdev_ptr->driver is NULL\n");
+ return -EFAULT;
+ }
+
+	/* OVS SR-IOV hardware scenario: return an error when the VF is bound to vf-io. */
+ if ((!en && strcmp((*des_pdev_ptr)->driver->name, HINIC3_DRV_NAME))) {
+ pr_err("vf bind driver:%s\n", (*des_pdev_ptr)->driver->name);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int parameter_is_unexpected(struct hinic3_pcidev *dst_dev, u16 *func_id, u16 *vf_start,
+ u16 *vf_end, u16 vf_func_id)
+{
+ if (hinic3_func_type(dst_dev->hwdev) == TYPE_VF)
+ return -EPERM;
+
+ *func_id = hinic3_global_func_id(dst_dev->hwdev);
+ *vf_start = hinic3_glb_pf_vf_offset(dst_dev->hwdev) + 1;
+ *vf_end = *vf_start + hinic3_func_max_vf(dst_dev->hwdev);
+ if (vf_func_id < *vf_start || vf_func_id > *vf_end)
+ return -EPERM;
+
+ return 0;
+}
+
+int hinic3_set_vf_service_state(struct pci_dev *pdev, u16 vf_func_id, u16 service, bool en)
+{
+ struct hinic3_hw_pf_infos *pf_infos = NULL;
+ struct hinic3_pcidev *dev = NULL, *dst_dev = NULL;
+ struct pci_dev *des_pdev = NULL;
+ u16 vf_start, vf_end, vf_devfn, func_id;
+ int err;
+ bool find_dst_dev = false;
+
+ err = get_vf_service_state_param(pdev, &dev, service, &pf_infos);
+ if (err)
+ return err;
+
+ lld_hold();
+ list_for_each_entry(dst_dev, &dev->chip_node->func_list, node) {
+		if (parameter_is_unexpected(dst_dev, &func_id, &vf_start, &vf_end, vf_func_id) != 0)
+ continue;
+
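+		/* Translate the VF's global function id into a devfn under this PF and look up its pci_dev */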
+ vf_devfn = pf_infos->infos[func_id].vf_offset + (vf_func_id - vf_start) +
+ (u16)dst_dev->pcidev->devfn;
+ err = hinic3_dst_pdev_valid(dst_dev, &des_pdev, vf_devfn, en);
+ if (err) {
+ sdk_err(&pdev->dev, "Can not get vf func_id %u from pf %u\n",
+ vf_func_id, func_id);
+ lld_put();
+ goto free_pf_info;
+ }
+
+ dst_dev = pci_get_drvdata(des_pdev);
+		/* When enabling the VF, if it is bound to vf-io, return ok */
+ if (strcmp(des_pdev->driver->name, HINIC3_DRV_NAME) ||
+ !dst_dev || (!en && dst_dev->lld_state != HINIC3_PROBE_OK) ||
+ (en && dst_dev->lld_state != HINIC3_NOT_PROBE)) {
+ lld_put();
+ goto free_pf_info;
+ }
+
+ if (en)
+ pci_dev_put(des_pdev);
+ find_dst_dev = true;
+ break;
+ }
+ lld_put();
+
+ if (!find_dst_dev) {
+ err = -EFAULT;
+		sdk_err(&pdev->dev, "Invalid parameter vf_id %u\n", vf_func_id);
+ goto free_pf_info;
+ }
+
+ err = hinic3_set_func_en(des_pdev, dst_dev, en, vf_func_id);
+
+free_pf_info:
+ kfree(pf_infos);
+ return err;
+}
+EXPORT_SYMBOL(hinic3_set_vf_service_state);
+
+static const struct pci_device_id hinic3_pci_table[] = {
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SPU), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_STANDARD), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SDI_5_1_PF), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SDI_5_0_PF), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_DPU_PF), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_SDI_5_1_ID_VF), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_VF), 0},
+ {0, 0}
+};
+
+MODULE_DEVICE_TABLE(pci, hinic3_pci_table);
+
+/**
+ * hinic3_io_error_detected - called when PCI error is detected
+ * @pdev: Pointer to PCI device
+ * @state: The current pci connection state
+ *
+ * This function is called after a PCI bus error affecting
+ * this device has been detected.
+ *
+ * Since we only need error detection, not error handling, we
+ * always return PCI_ERS_RESULT_CAN_RECOVER to tell the AER
+ * driver that we don't need a reset (error handling).
+ */
+static pci_ers_result_t hinic3_io_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ sdk_err(&pdev->dev,
+ "Uncorrectable error detected, log and cleanup error status: 0x%08x\n",
+ state);
+
+ pci_cleanup_aer_uncorrect_error_status(pdev);
+ pci_adapter = pci_get_drvdata(pdev);
+ if (pci_adapter)
+ hinic3_record_pcie_error(pci_adapter->hwdev);
+
+ return PCI_ERS_RESULT_CAN_RECOVER;
+}
+
+static void hinic3_timer_disable(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ if (hinic3_get_stateful_enable(hwdev) && hinic3_get_timer_enable(hwdev))
+ (void)hinic3_func_tmr_bitmap_set(hwdev, hinic3_global_func_id(hwdev), false);
+
+ return;
+}
+
+static void hinic3_shutdown(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ sdk_info(&pdev->dev, "Shutdown device\n");
+
+ if (pci_adapter) {
+ hinic3_timer_disable(pci_adapter->hwdev);
+ hinic3_shutdown_hwdev(pci_adapter->hwdev);
+ }
+
+ pci_disable_device(pdev);
+
+ if (pci_adapter)
+ hinic3_set_api_stop(pci_adapter->hwdev);
+}
+
+#ifdef HAVE_RHEL6_SRIOV_CONFIGURE
+static struct pci_driver_rh hinic3_driver_rh = {
+ .sriov_configure = hinic3_pci_sriov_configure,
+};
+#endif
+
+/* Because we only need error detection, not error handling, the error_detected
+ * callback alone is enough.
+ */
+static struct pci_error_handlers hinic3_err_handler = {
+ .error_detected = hinic3_io_error_detected,
+};
+
+static struct pci_driver hinic3_driver = {
+ .name = HINIC3_DRV_NAME,
+ .id_table = hinic3_pci_table,
+ .probe = hinic3_probe,
+ .remove = hinic3_remove,
+ .shutdown = hinic3_shutdown,
+#ifdef CONFIG_PARTITION_DEVICE
+ .driver.probe_concurrency = true,
+#endif
+#if defined(HAVE_SRIOV_CONFIGURE)
+ .sriov_configure = hinic3_pci_sriov_configure,
+#elif defined(HAVE_RHEL6_SRIOV_CONFIGURE)
+ .rh_reserved = &hinic3_driver_rh,
+#endif
+ .err_handler = &hinic3_err_handler
+};
+
+int hinic3_lld_init(void)
+{
+ int err;
+
+ pr_info("%s - version %s\n", HINIC3_DRV_DESC, HINIC3_DRV_VERSION);
+ memset(g_uld_info, 0, sizeof(g_uld_info));
+
+ hinic3_lld_lock_init();
+ hinic3_uld_lock_init();
+
+ err = hinic3_module_pre_init();
+ if (err) {
+ pr_err("Init custom failed\n");
+ goto module_pre_init_err;
+ }
+
+ err = pci_register_driver(&hinic3_driver);
+ if (err) {
+ pr_err("sdk3 pci register driver failed\n");
+ goto register_pci_driver_err;
+ }
+
+ return 0;
+
+register_pci_driver_err:
+ hinic3_module_post_exit();
+module_pre_init_err:
+ return err;
+}
+
+void hinic3_lld_exit(void)
+{
+ pci_unregister_driver(&hinic3_driver);
+
+ hinic3_module_post_exit();
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c
new file mode 100644
index 0000000..5398a34
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c
@@ -0,0 +1,1884 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/types.h>
+#include <linux/semaphore.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_eqs.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_common.h"
+#include "hinic3_mbox.h"
+
+#define HINIC3_MBOX_USEC_50 50
+
+#define HINIC3_MBOX_INT_DST_AEQN_SHIFT 10
+#define HINIC3_MBOX_INT_SRC_RESP_AEQN_SHIFT 12
+#define HINIC3_MBOX_INT_STAT_DMA_SHIFT 14
+/* The size of the data to be sent (in units of 4 bytes) */
+#define HINIC3_MBOX_INT_TX_SIZE_SHIFT 20
+/* SO_RO(strong order, relax order) */
+#define HINIC3_MBOX_INT_STAT_DMA_SO_RO_SHIFT 25
+#define HINIC3_MBOX_INT_WB_EN_SHIFT 28
+
+#define HINIC3_MBOX_INT_DST_AEQN_MASK 0x3
+#define HINIC3_MBOX_INT_SRC_RESP_AEQN_MASK 0x3
+#define HINIC3_MBOX_INT_STAT_DMA_MASK 0x3F
+#define HINIC3_MBOX_INT_TX_SIZE_MASK 0x1F
+#define HINIC3_MBOX_INT_STAT_DMA_SO_RO_MASK 0x3
+#define HINIC3_MBOX_INT_WB_EN_MASK 0x1
+
+#define HINIC3_MBOX_INT_SET(val, field) \
+ (((val) & HINIC3_MBOX_INT_##field##_MASK) << \
+ HINIC3_MBOX_INT_##field##_SHIFT)
+
+enum hinic3_mbox_tx_status {
+ TX_NOT_DONE = 1,
+};
+
+#define HINIC3_MBOX_CTRL_TRIGGER_AEQE_SHIFT 0
+/* Specifies the issue request for the message data:
+ * 0 - Tx request is done;
+ * 1 - Tx request is in progress.
+ */
+#define HINIC3_MBOX_CTRL_TX_STATUS_SHIFT 1
+#define HINIC3_MBOX_CTRL_DST_FUNC_SHIFT 16
+
+#define HINIC3_MBOX_CTRL_TRIGGER_AEQE_MASK 0x1
+#define HINIC3_MBOX_CTRL_TX_STATUS_MASK 0x1
+#define HINIC3_MBOX_CTRL_DST_FUNC_MASK 0x1FFF
+
+#define HINIC3_MBOX_CTRL_SET(val, field) \
+ (((val) & HINIC3_MBOX_CTRL_##field##_MASK) << \
+ HINIC3_MBOX_CTRL_##field##_SHIFT)
+
+#define MBOX_SEGLEN_MASK \
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_HEADER_SEG_LEN_MASK, SEG_LEN)
+
+#define MBOX_MSG_WAIT_ONCE_TIME_US 10
+#define MBOX_MSG_POLLING_TIMEOUT 8000
+#define HINIC3_MBOX_COMP_TIME 40000U
+
+/* MBOX size is 64B, 8B for mbox_header, 8B reserved */
+#define MBOX_SEG_LEN 48
+#define MBOX_SEG_LEN_ALIGN 4
+#define MBOX_WB_STATUS_LEN 16UL
+
+#define SEQ_ID_START_VAL 0
+#define SEQ_ID_MAX_VAL 42
+#define MBOX_LAST_SEG_MAX_LEN (MBOX_MAX_BUF_SZ - \
+ SEQ_ID_MAX_VAL * MBOX_SEG_LEN)
+
+/* mbox write back status is 16B, only first 4B is used */
+#define MBOX_WB_STATUS_ERRCODE_MASK 0xFFFF
+#define MBOX_WB_STATUS_MASK 0xFF
+#define MBOX_WB_ERROR_CODE_MASK 0xFF00
+#define MBOX_WB_STATUS_FINISHED_SUCCESS 0xFF
+#define MBOX_WB_STATUS_FINISHED_WITH_ERR 0xFE
+#define MBOX_WB_STATUS_NOT_FINISHED 0x00
+
+#define MBOX_STATUS_FINISHED(wb) \
+ (((wb) & MBOX_WB_STATUS_MASK) != MBOX_WB_STATUS_NOT_FINISHED)
+#define MBOX_STATUS_SUCCESS(wb) \
+ (((wb) & MBOX_WB_STATUS_MASK) == MBOX_WB_STATUS_FINISHED_SUCCESS)
+#define MBOX_STATUS_ERRCODE(wb) \
+ ((wb) & MBOX_WB_ERROR_CODE_MASK)
+
+#define DST_AEQ_IDX_DEFAULT_VAL 0
+#define SRC_AEQ_IDX_DEFAULT_VAL 0
+#define NO_DMA_ATTRIBUTE_VAL 0
+
+#define MBOX_MSG_NO_DATA_LEN 1
+
+#define MBOX_BODY_FROM_HDR(header) ((u8 *)(header) + MBOX_HEADER_SZ)
+#define MBOX_AREA(hwif) \
+ ((hwif)->cfg_regs_base + HINIC3_FUNC_CSR_MAILBOX_DATA_OFF)
+
+#define MBOX_DMA_MSG_QUEUE_DEPTH 32
+
+#define MBOX_MQ_CI_OFFSET (HINIC3_CFG_REGS_FLAG + HINIC3_FUNC_CSR_MAILBOX_DATA_OFF + \
+ MBOX_HEADER_SZ + MBOX_SEG_LEN)
+
+#define MBOX_MQ_SYNC_CI_SHIFT 0
+#define MBOX_MQ_ASYNC_CI_SHIFT 8
+
+#define MBOX_MQ_SYNC_CI_MASK 0xFF
+#define MBOX_MQ_ASYNC_CI_MASK 0xFF
+
+#define MBOX_MQ_CI_SET(val, field) \
+ (((val) & MBOX_MQ_##field##_CI_MASK) << MBOX_MQ_##field##_CI_SHIFT)
+#define MBOX_MQ_CI_GET(val, field) \
+ (((val) >> MBOX_MQ_##field##_CI_SHIFT) & MBOX_MQ_##field##_CI_MASK)
+#define MBOX_MQ_CI_CLEAR(val, field) \
+ ((val) & (~(MBOX_MQ_##field##_CI_MASK << MBOX_MQ_##field##_CI_SHIFT)))
+
+#define IS_PF_OR_PPF_SRC(hwdev, src_func_idx) \
+ ((src_func_idx) < HINIC3_MAX_PF_NUM(hwdev))
+
+#define MBOX_RESPONSE_ERROR 0x1
+#define MBOX_MSG_ID_MASK 0xF
+#define MBOX_MSG_ID(func_to_func) ((func_to_func)->send_msg_id)
+#define MBOX_MSG_ID_INC(func_to_func) \
+ (MBOX_MSG_ID(func_to_func) = \
+ (MBOX_MSG_ID(func_to_func) + 1) & MBOX_MSG_ID_MASK)
+
+/* max number of messages waiting to be processed for one function */
+#define HINIC3_MAX_MSG_CNT_TO_PROCESS 10
+
+#define MBOX_MSG_CHANNEL_STOP(func_to_func) \
+ ((((func_to_func)->lock_channel_en) && \
+ test_bit((func_to_func)->cur_msg_channel, \
+ &(func_to_func)->channel_stop)) ? true : false)
+
+enum mbox_ordering_type {
+ STRONG_ORDER,
+};
+
+enum mbox_write_back_type {
+ WRITE_BACK = 1,
+};
+
+enum mbox_aeq_trig_type {
+ NOT_TRIGGER,
+ TRIGGER,
+};
+
+static int send_mbox_msg(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ void *msg, u16 msg_len, u16 dst_func,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info);
+
+static struct hinic3_msg_desc *get_mbox_msg_desc(struct hinic3_mbox *func_to_func,
+ u64 dir, u64 src_func_id);
+
+/**
+ * hinic3_register_ppf_mbox_cb - register mbox callback for ppf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ * @pri_handle: specific mod's private data that will be used in callback
+ * @callback: callback function
+ * Return: 0 - success, negative - failure
+ */
+int hinic3_register_ppf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_ppf_mbox_cb callback)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return -EFAULT;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ func_to_func->ppf_mbox_cb[mod] = callback;
+ func_to_func->ppf_mbox_data[mod] = pri_handle;
+
+ set_bit(HINIC3_PPF_MBOX_CB_REG, &func_to_func->ppf_mbox_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_ppf_mbox_cb);
+
+/**
+ * hinic3_register_pf_mbox_cb - register mbox callback for pf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ * @pri_handle: specific mod's private data that will be used in callback
+ * @callback: callback function
+ * Return: 0 - success, negative - failure
+ */
+int hinic3_register_pf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_pf_mbox_cb callback)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return -EFAULT;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ func_to_func->pf_mbox_cb[mod] = callback;
+ func_to_func->pf_mbox_data[mod] = pri_handle;
+
+ set_bit(HINIC3_PF_MBOX_CB_REG, &func_to_func->pf_mbox_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_pf_mbox_cb);
+
+/**
+ * hinic3_register_vf_mbox_cb - register mbox callback for vf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ * @pri_handle: specific mod's private data that will be used in callback
+ * @callback: callback function
+ * Return: 0 - success, negative - failure
+ */
+int hinic3_register_vf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_vf_mbox_cb callback)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return -EFAULT;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ func_to_func->vf_mbox_cb[mod] = callback;
+ func_to_func->vf_mbox_data[mod] = pri_handle;
+
+ set_bit(HINIC3_VF_MBOX_CB_REG, &func_to_func->vf_mbox_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_vf_mbox_cb);
+
+/**
+ * hinic3_unregister_ppf_mbox_cb - unregister the mbox callback for ppf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_ppf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_PPF_MBOX_CB_REG,
+ &func_to_func->ppf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_PPF_MBOX_CB_RUNNING,
+ &func_to_func->ppf_mbox_cb_state[mod]))
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->ppf_mbox_data[mod] = NULL;
+ func_to_func->ppf_mbox_cb[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_ppf_mbox_cb);
+
+/**
+ * hinic3_unregister_pf_mbox_cb - unregister the mbox callback for pf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_pf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_PF_MBOX_CB_REG, &func_to_func->pf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_PF_MBOX_CB_RUNNING, &func_to_func->pf_mbox_cb_state[mod]) != 0)
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->pf_mbox_data[mod] = NULL;
+ func_to_func->pf_mbox_cb[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_pf_mbox_cb);
+
+/**
+ * hinic3_unregister_vf_mbox_cb - unregister the mbox callback for vf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_vf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_VF_MBOX_CB_REG, &func_to_func->vf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_VF_MBOX_CB_RUNNING, &func_to_func->vf_mbox_cb_state[mod]) != 0)
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->vf_mbox_data[mod] = NULL;
+ func_to_func->vf_mbox_cb[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_vf_mbox_cb);
+
+/**
+ * hinic3_unregister_ppf_to_pf_mbox_cb - unregister the mbox callback for pf from ppf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_ppf_to_pf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_PPF_TO_PF_MBOX_CB_REG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]))
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->pf_recv_ppf_mbox_data[mod] = NULL;
+ func_to_func->pf_recv_ppf_mbox_cb[mod] = NULL;
+}
+
+static int recv_vf_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ void *buf_out, u16 *out_size)
+{
+ hinic3_vf_mbox_cb cb;
+ int ret;
+
+ if (recv_mbox->mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %hhu\n",
+ recv_mbox->mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_VF_MBOX_CB_RUNNING,
+ &func_to_func->vf_mbox_cb_state[recv_mbox->mod]);
+
+ cb = func_to_func->vf_mbox_cb[recv_mbox->mod];
+ if (cb && test_bit(HINIC3_VF_MBOX_CB_REG,
+ &func_to_func->vf_mbox_cb_state[recv_mbox->mod])) {
+ ret = cb(func_to_func->vf_mbox_data[recv_mbox->mod],
+ recv_mbox->cmd, recv_mbox->msg,
+ recv_mbox->msg_len, buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "VF mbox cb is not registered\n");
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_VF_MBOX_CB_RUNNING,
+ &func_to_func->vf_mbox_cb_state[recv_mbox->mod]);
+
+ return ret;
+}
+
+static int recv_pf_from_ppf_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ void *buf_out, u16 *out_size)
+{
+ hinic3_pf_recv_from_ppf_mbox_cb cb;
+ enum hinic3_mod_type mod = recv_mbox->mod;
+ int ret;
+
+ if (mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %d\n",
+ mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]);
+
+ cb = func_to_func->pf_recv_ppf_mbox_cb[mod];
+ if (cb && test_bit(HINIC3_PPF_TO_PF_MBOX_CB_REG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]) != 0) {
+ ret = cb(func_to_func->pf_recv_ppf_mbox_data[mod],
+ recv_mbox->cmd, recv_mbox->msg, recv_mbox->msg_len,
+ buf_out, out_size);
+ } else {
+		sdk_warn(func_to_func->hwdev->dev_hdl, "Callback for PF to receive PPF mailbox is not registered\n");
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]);
+
+ return ret;
+}
+
+static int recv_ppf_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ u8 pf_id, void *buf_out, u16 *out_size)
+{
+ hinic3_ppf_mbox_cb cb;
+ u16 vf_id = 0;
+ int ret;
+
+ if (recv_mbox->mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %hhu\n",
+ recv_mbox->mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_PPF_MBOX_CB_RUNNING,
+ &func_to_func->ppf_mbox_cb_state[recv_mbox->mod]);
+
+ cb = func_to_func->ppf_mbox_cb[recv_mbox->mod];
+ if (cb && test_bit(HINIC3_PPF_MBOX_CB_REG,
+ &func_to_func->ppf_mbox_cb_state[recv_mbox->mod])) {
+ ret = cb(func_to_func->ppf_mbox_data[recv_mbox->mod],
+ pf_id, vf_id, recv_mbox->cmd, recv_mbox->msg,
+ recv_mbox->msg_len, buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "PPF mbox cb is not registered, mod = %hhu\n",
+ recv_mbox->mod);
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_PPF_MBOX_CB_RUNNING,
+ &func_to_func->ppf_mbox_cb_state[recv_mbox->mod]);
+
+ return ret;
+}
+
+static int recv_pf_from_vf_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ u16 src_func_idx, void *buf_out,
+ u16 *out_size)
+{
+ hinic3_pf_mbox_cb cb;
+ u16 vf_id = 0;
+ int ret;
+
+ if (recv_mbox->mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %hhu\n",
+ recv_mbox->mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_PF_MBOX_CB_RUNNING,
+ &func_to_func->pf_mbox_cb_state[recv_mbox->mod]);
+
+ cb = func_to_func->pf_mbox_cb[recv_mbox->mod];
+ if (cb && test_bit(HINIC3_PF_MBOX_CB_REG,
+ &func_to_func->pf_mbox_cb_state[recv_mbox->mod]) != 0) {
+ vf_id = src_func_idx -
+ hinic3_glb_pf_vf_offset(func_to_func->hwdev);
+ ret = cb(func_to_func->pf_mbox_data[recv_mbox->mod],
+ vf_id, recv_mbox->cmd, recv_mbox->msg,
+ recv_mbox->msg_len, buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "PF mbox mod(0x%x) cb is not registered\n",
+ recv_mbox->mod);
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_PF_MBOX_CB_RUNNING,
+ &func_to_func->pf_mbox_cb_state[recv_mbox->mod]);
+
+ return ret;
+}
+
+static void response_for_recv_func_mbox(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ int err, u16 out_size, u16 src_func_idx)
+{
+ struct mbox_msg_info msg_info = {0};
+ u16 size = out_size;
+
+ msg_info.msg_id = recv_mbox->msg_id;
+ if (err)
+ msg_info.status = HINIC3_MBOX_PF_SEND_ERR;
+
+	/* if there is no data to respond with, set out_size to 1 */
+ if (!out_size || err)
+ size = MBOX_MSG_NO_DATA_LEN;
+
+ if (size > HINIC3_MBOX_DATA_SIZE) {
+		sdk_err(func_to_func->hwdev->dev_hdl, "Response msg len(%d) exceeds limit(%d)\n",
+ size, HINIC3_MBOX_DATA_SIZE);
+ size = HINIC3_MBOX_DATA_SIZE;
+ }
+
+ send_mbox_msg(func_to_func, recv_mbox->mod, recv_mbox->cmd,
+ recv_mbox->resp_buff, size, src_func_idx,
+ HINIC3_MSG_RESPONSE, HINIC3_MSG_NO_ACK, &msg_info);
+}
+
+static void recv_func_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox)
+{
+ struct hinic3_hwdev *dev = func_to_func->hwdev;
+ void *buf_out = recv_mbox->resp_buff;
+ u16 src_func_idx = recv_mbox->src_func_idx;
+ u16 out_size = HINIC3_MBOX_DATA_SIZE;
+ int err = 0;
+
+ if (HINIC3_IS_VF(dev)) {
+ err = recv_vf_mbox_handler(func_to_func, recv_mbox, buf_out,
+ &out_size);
+ } else { /* pf/ppf process */
+ if (IS_PF_OR_PPF_SRC(dev, src_func_idx)) {
+ if (HINIC3_IS_PPF(dev)) {
+ err = recv_ppf_mbox_handler(func_to_func,
+ recv_mbox,
+ (u8)src_func_idx,
+ buf_out, &out_size);
+ if (err)
+ goto out;
+ } else {
+ err = recv_pf_from_ppf_handler(func_to_func,
+ recv_mbox,
+ buf_out,
+ &out_size);
+ if (err)
+ goto out;
+ }
+ /* The source is neither PF nor PPF, so it is from VF */
+ } else {
+ err = recv_pf_from_vf_mbox_handler(func_to_func,
+ recv_mbox,
+ src_func_idx,
+ buf_out, &out_size);
+ }
+ }
+
+out:
+ if (recv_mbox->ack_type == HINIC3_MSG_ACK)
+ response_for_recv_func_mbox(func_to_func, recv_mbox, err,
+ out_size, src_func_idx);
+}
+
+static struct hinic3_recv_mbox *alloc_recv_mbox(void)
+{
+ struct hinic3_recv_mbox *recv_msg = NULL;
+
+ recv_msg = kzalloc(sizeof(*recv_msg), GFP_KERNEL);
+ if (!recv_msg)
+ return NULL;
+
+ recv_msg->msg = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!recv_msg->msg)
+ goto alloc_msg_err;
+
+ recv_msg->resp_buff = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!recv_msg->resp_buff)
+ goto alloc_resp_bff_err;
+
+ return recv_msg;
+
+alloc_resp_bff_err:
+ kfree(recv_msg->msg);
+
+alloc_msg_err:
+ kfree(recv_msg);
+
+ return NULL;
+}
+
+static void free_recv_mbox(struct hinic3_recv_mbox *recv_msg)
+{
+ kfree(recv_msg->resp_buff);
+ kfree(recv_msg->msg);
+ kfree(recv_msg);
+}
+
+static void recv_func_mbox_work_handler(struct work_struct *work)
+{
+ struct hinic3_mbox_work *mbox_work =
+ container_of(work, struct hinic3_mbox_work, work);
+
+ recv_func_mbox_handler(mbox_work->func_to_func, mbox_work->recv_mbox);
+
+ atomic_dec(&mbox_work->msg_ch->recv_msg_cnt);
+
+ destroy_work(&mbox_work->work);
+
+ free_recv_mbox(mbox_work->recv_mbox);
+ kfree(mbox_work);
+}
+
+static void resp_mbox_handler(struct hinic3_mbox *func_to_func,
+ const struct hinic3_msg_desc *msg_desc)
+{
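+	/* Mark the in-flight request complete if the response matches the message id being waited on */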
+ spin_lock(&func_to_func->mbox_lock);
+ if (msg_desc->msg_info.msg_id == func_to_func->send_msg_id &&
+ func_to_func->event_flag == EVENT_START)
+ func_to_func->event_flag = EVENT_SUCCESS;
+ else
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Mbox response timeout, current send msg id(0x%x), recv msg id(0x%x), status(0x%x)\n",
+ func_to_func->send_msg_id, msg_desc->msg_info.msg_id,
+ msg_desc->msg_info.status);
+ spin_unlock(&func_to_func->mbox_lock);
+}
+
+static void recv_mbox_msg_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_msg_desc *msg_desc,
+ u64 mbox_header)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ struct hinic3_recv_mbox *recv_msg = NULL;
+ struct hinic3_mbox_work *mbox_work = NULL;
+ struct hinic3_msg_channel *msg_ch =
+ container_of(msg_desc, struct hinic3_msg_channel, recv_msg);
+ u16 src_func_idx = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
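+	/* Drop the message if too many from this function are already queued for processing */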
+ if (atomic_read(&msg_ch->recv_msg_cnt) >
+ HINIC3_MAX_MSG_CNT_TO_PROCESS) {
+		sdk_warn(hwdev->dev_hdl, "This function(%u) has %d messages waiting to be processed, can't add to work queue\n",
+ src_func_idx, atomic_read(&msg_ch->recv_msg_cnt));
+ return;
+ }
+
+ recv_msg = alloc_recv_mbox();
+ if (!recv_msg) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc receive mbox message buffer\n");
+ return;
+ }
+ recv_msg->msg_len = msg_desc->msg_len;
+ memcpy(recv_msg->msg, msg_desc->msg, recv_msg->msg_len);
+
+ recv_msg->msg_id = msg_desc->msg_info.msg_id;
+ recv_msg->mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_msg->cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ recv_msg->ack_type = HINIC3_MSG_HEADER_GET(mbox_header, NO_ACK);
+ recv_msg->src_func_idx = src_func_idx;
+
+ mbox_work = kzalloc(sizeof(*mbox_work), GFP_KERNEL);
+ if (!mbox_work) {
+		sdk_err(hwdev->dev_hdl, "Failed to allocate mbox work memory\n");
+ free_recv_mbox(recv_msg);
+ return;
+ }
+
+ atomic_inc(&msg_ch->recv_msg_cnt);
+
+ mbox_work->func_to_func = func_to_func;
+ mbox_work->recv_mbox = recv_msg;
+ mbox_work->msg_ch = msg_ch;
+
+ INIT_WORK(&mbox_work->work, recv_func_mbox_work_handler);
+ queue_work_on(hisdk3_get_work_cpu_affinity(hwdev, WORK_TYPE_MBOX),
+ func_to_func->workq, &mbox_work->work);
+}
+
+static bool check_mbox_segment(struct hinic3_mbox *func_to_func,
+ struct hinic3_msg_desc *msg_desc,
+ u64 mbox_header, void *mbox_body)
+{
+ u8 seq_id, seg_len, msg_id, mod;
+ u16 src_func_idx, cmd;
+
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ seg_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+ mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ src_func_idx = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ if (seq_id > SEQ_ID_MAX_VAL || seg_len > MBOX_SEG_LEN ||
+ (seq_id == SEQ_ID_MAX_VAL && seg_len > MBOX_LAST_SEG_MAX_LEN))
+ goto seg_err;
+
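+	/* seq_id 0 starts a new multi-segment message; later segments must continue the same msg_id, mod and cmd */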
+ if (seq_id == 0) {
+ msg_desc->seq_id = seq_id;
+ msg_desc->msg_info.msg_id = msg_id;
+ msg_desc->mod = mod;
+ msg_desc->cmd = cmd;
+ } else {
+ if (seq_id != msg_desc->seq_id + 1 || msg_id != msg_desc->msg_info.msg_id ||
+ mod != msg_desc->mod || cmd != msg_desc->cmd)
+ goto seg_err;
+
+ msg_desc->seq_id = seq_id;
+ }
+
+ return true;
+
+seg_err:
+ sdk_err(func_to_func->hwdev->dev_hdl,
+		"Mailbox segment check failed, src func id: 0x%x, previous seg info: seq id: 0x%x, msg id: 0x%x, mod: 0x%x, cmd: 0x%x\n",
+ src_func_idx, msg_desc->seq_id, msg_desc->msg_info.msg_id,
+ msg_desc->mod, msg_desc->cmd);
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Current seg info: seg len: 0x%x, seq id: 0x%x, msg id: 0x%x, mod: 0x%x, cmd: 0x%x\n",
+ seg_len, seq_id, msg_id, mod, cmd);
+
+ return false;
+}
+
+static void recv_mbox_handler(struct hinic3_mbox *func_to_func,
+ u64 *header, struct hinic3_msg_desc *msg_desc)
+{
+ u64 mbox_header = *header;
+ void *mbox_body = MBOX_BODY_FROM_HDR(((void *)header));
+ u8 seq_id, seg_len;
+ int pos;
+
+ if (!check_mbox_segment(func_to_func, msg_desc, mbox_header, mbox_body)) {
+ msg_desc->seq_id = SEQ_ID_MAX_VAL;
+ return;
+ }
+
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ seg_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+
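+	/* Copy this segment into its offset within the reassembly buffer */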
+ pos = seq_id * MBOX_SEG_LEN;
+ memcpy((u8 *)msg_desc->msg + pos, mbox_body, seg_len);
+
+ if (!HINIC3_MSG_HEADER_GET(mbox_header, LAST))
+ return;
+
+ msg_desc->msg_len = HINIC3_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ msg_desc->msg_info.status = HINIC3_MSG_HEADER_GET(mbox_header, STATUS);
+
+ if (HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+ HINIC3_MSG_RESPONSE) {
+ resp_mbox_handler(func_to_func, msg_desc);
+ return;
+ }
+
+ recv_mbox_msg_handler(func_to_func, msg_desc, mbox_header);
+}
+
+void hinic3_mbox_func_aeqe_handler(void *handle, u8 *header, u8 size)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+ struct hinic3_msg_desc *msg_desc = NULL;
+ u64 mbox_header = *((u64 *)header);
+ u64 src, dir;
+
+ func_to_func = ((struct hinic3_hwdev *)handle)->func_to_func;
+
+ dir = HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION);
+ src = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ msg_desc = get_mbox_msg_desc(func_to_func, dir, src);
+ if (!msg_desc) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Mailbox source function id: %u is invalid for current function\n",
+ (u32)src);
+ return;
+ }
+
+ recv_mbox_handler(func_to_func, (u64 *)header, msg_desc);
+}
+
+static int init_mbox_dma_queue(struct hinic3_hwdev *hwdev, struct mbox_dma_queue *mq)
+{
+ u32 size;
+
+ mq->depth = MBOX_DMA_MSG_QUEUE_DEPTH;
+ mq->prod_idx = 0;
+ mq->cons_idx = 0;
+
+ size = mq->depth * MBOX_MAX_BUF_SZ;
+ mq->dma_buff_vaddr = dma_zalloc_coherent(hwdev->dev_hdl, size, &mq->dma_buff_paddr,
+ GFP_KERNEL);
+ if (!mq->dma_buff_vaddr) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc dma_buffer\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void deinit_mbox_dma_queue(struct hinic3_hwdev *hwdev, struct mbox_dma_queue *mq)
+{
+ dma_free_coherent(hwdev->dev_hdl, mq->depth * MBOX_MAX_BUF_SZ,
+ mq->dma_buff_vaddr, mq->dma_buff_paddr);
+}
+
+static int hinic3_init_mbox_dma_queue(struct hinic3_mbox *func_to_func)
+{
+ u32 val;
+ int err;
+
+ err = init_mbox_dma_queue(func_to_func->hwdev, &func_to_func->sync_msg_queue);
+ if (err)
+ return err;
+
+ err = init_mbox_dma_queue(func_to_func->hwdev, &func_to_func->async_msg_queue);
+ if (err) {
+ deinit_mbox_dma_queue(func_to_func->hwdev, &func_to_func->sync_msg_queue);
+ return err;
+ }
+
+ val = hinic3_hwif_read_reg(func_to_func->hwdev->hwif, MBOX_MQ_CI_OFFSET);
+ val = MBOX_MQ_CI_CLEAR(val, SYNC);
+ val = MBOX_MQ_CI_CLEAR(val, ASYNC);
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif, MBOX_MQ_CI_OFFSET, val);
+
+ return 0;
+}
+
+static void hinic3_deinit_mbox_dma_queue(struct hinic3_mbox *func_to_func)
+{
+ deinit_mbox_dma_queue(func_to_func->hwdev, &func_to_func->sync_msg_queue);
+ deinit_mbox_dma_queue(func_to_func->hwdev, &func_to_func->async_msg_queue);
+}
+
+#define MBOX_DMA_MSG_INIT_XOR_VAL 0x5a5a5a5a
+#define MBOX_XOR_DATA_ALIGN 4
+static u32 mbox_dma_msg_xor(u32 *data, u16 msg_len)
+{
+ u32 mbox_xor = MBOX_DMA_MSG_INIT_XOR_VAL;
+ u16 dw_len = msg_len / sizeof(u32);
+ u16 i;
+
+ for (i = 0; i < dw_len; i++)
+ mbox_xor ^= data[i];
+
+ return mbox_xor;
+}
+
+#define MQ_ID_MASK(mq, idx) ((idx) & ((mq)->depth - 1))
+#define IS_MSG_QUEUE_FULL(mq) (MQ_ID_MASK(mq, (mq)->prod_idx + 1) == \
+ MQ_ID_MASK(mq, (mq)->cons_idx))
+
+static int mbox_prepare_dma_entry(struct hinic3_mbox *func_to_func, struct mbox_dma_queue *mq,
+ struct mbox_dma_msg *dma_msg, void *msg, u16 msg_len)
+{
+ u64 dma_addr, offset;
+ void *dma_vaddr = NULL;
+
+ if (IS_MSG_QUEUE_FULL(mq)) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Mbox sync message queue is busy, pi: %u, ci: %u\n",
+ mq->prod_idx, MQ_ID_MASK(mq, mq->cons_idx));
+ return -EBUSY;
+ }
+
+ /* copy data to DMA buffer */
+ offset = mq->prod_idx * MBOX_MAX_BUF_SZ;
+ dma_vaddr = (u8 *)mq->dma_buff_vaddr + offset;
+ memcpy(dma_vaddr, msg, msg_len);
+
+ dma_addr = mq->dma_buff_paddr + offset;
+ dma_msg->dma_addr_high = upper_32_bits(dma_addr);
+ dma_msg->dma_addr_low = lower_32_bits(dma_addr);
+ dma_msg->msg_len = msg_len;
+	/* The firmware reads the message with 4-byte alignment. */
+ dma_msg->xor = mbox_dma_msg_xor(dma_vaddr, ALIGN(msg_len, MBOX_XOR_DATA_ALIGN));
+
+ mq->prod_idx++;
+ mq->prod_idx = MQ_ID_MASK(mq, mq->prod_idx);
+
+ return 0;
+}
+
+static int mbox_prepare_dma_msg(struct hinic3_mbox *func_to_func, enum hinic3_msg_ack_type ack_type,
+ struct mbox_dma_msg *dma_msg, void *msg, u16 msg_len)
+{
+ struct mbox_dma_queue *mq = NULL;
+ u32 val;
+
+ val = hinic3_hwif_read_reg(func_to_func->hwdev->hwif, MBOX_MQ_CI_OFFSET);
+ if (ack_type == HINIC3_MSG_ACK) {
+ mq = &func_to_func->sync_msg_queue;
+ mq->cons_idx = MBOX_MQ_CI_GET(val, SYNC);
+ } else {
+ mq = &func_to_func->async_msg_queue;
+ mq->cons_idx = MBOX_MQ_CI_GET(val, ASYNC);
+ }
+
+ return mbox_prepare_dma_entry(func_to_func, mq, dma_msg, msg, msg_len);
+}
+
+static void clear_mbox_status(struct hinic3_send_mbox *mbox)
+{
+ *mbox->wb_status = 0;
+
+ /* clear mailbox write back status */
+ wmb();
+}
+
+static void mbox_copy_header(struct hinic3_hwdev *hwdev,
+ struct hinic3_send_mbox *mbox, u64 *header)
+{
+ u32 *data = (u32 *)header;
+ u32 i, idx_max = MBOX_HEADER_SZ / sizeof(u32);
+
+ for (i = 0; i < idx_max; i++) {
+ __raw_writel(cpu_to_be32(*(data + i)),
+ mbox->data + i * sizeof(u32));
+ }
+}
+
+static int mbox_copy_send_data(struct hinic3_hwdev *hwdev,
+ struct hinic3_send_mbox *mbox, void *seg, u16 seg_len)
+{
+ u32 *data = seg;
+ u32 data_len, chk_sz = sizeof(u32);
+ u32 i, idx_max;
+ u8 mbox_max_buf[MBOX_SEG_LEN] = {0};
+
+	/* The mbox message should be aligned to 4 bytes. */
+ if (seg_len % chk_sz) {
+ memcpy(mbox_max_buf, seg, seg_len);
+ data = (u32 *)mbox_max_buf;
+ }
+
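+	/* Write the segment to the mailbox data area 4 bytes at a time, converting to big endian */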
+ data_len = seg_len;
+ idx_max = ALIGN(data_len, chk_sz) / chk_sz;
+
+ for (i = 0; i < idx_max; i++) {
+ __raw_writel(cpu_to_be32(*(data + i)),
+ mbox->data + MBOX_HEADER_SZ + i * sizeof(u32));
+ }
+
+ return 0;
+}
+
+static void write_mbox_msg_attr(struct hinic3_mbox *func_to_func,
+ u16 dst_func, u16 dst_aeqn, u16 seg_len)
+{
+ u32 mbox_int, mbox_ctrl;
+ u16 func = dst_func;
+
+	/* for VF-to-PF messages, the dest func id is learned automatically by HW */
+ if (HINIC3_IS_VF(func_to_func->hwdev) && dst_func != HINIC3_MGMT_SRC_ID)
+ func = 0; /* the destination is the VF's PF */
+
+ mbox_int = HINIC3_MBOX_INT_SET(dst_aeqn, DST_AEQN) |
+ HINIC3_MBOX_INT_SET(0, SRC_RESP_AEQN) |
+ HINIC3_MBOX_INT_SET(NO_DMA_ATTRIBUTE_VAL, STAT_DMA) |
+ HINIC3_MBOX_INT_SET(ALIGN(seg_len + MBOX_HEADER_SZ,
+ MBOX_SEG_LEN_ALIGN) >> 2,
+ TX_SIZE) |
+ HINIC3_MBOX_INT_SET(STRONG_ORDER, STAT_DMA_SO_RO) |
+ HINIC3_MBOX_INT_SET(WRITE_BACK, WB_EN);
+
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF, mbox_int);
+
+ wmb(); /* writing the mbox int attributes */
+ mbox_ctrl = HINIC3_MBOX_CTRL_SET(TX_NOT_DONE, TX_STATUS);
+
+ mbox_ctrl |= HINIC3_MBOX_CTRL_SET(NOT_TRIGGER, TRIGGER_AEQE);
+
+ mbox_ctrl |= HINIC3_MBOX_CTRL_SET(func, DST_FUNC);
+
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF, mbox_ctrl);
+}
+
+static void dump_mbox_reg(struct hinic3_hwdev *hwdev)
+{
+ u32 val;
+
+ val = hinic3_hwif_read_reg(hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF);
+ sdk_err(hwdev->dev_hdl, "Mailbox control reg: 0x%x\n", val);
+ val = hinic3_hwif_read_reg(hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF);
+ sdk_err(hwdev->dev_hdl, "Mailbox interrupt offset: 0x%x\n", val);
+}
+
+static u16 get_mbox_status(const struct hinic3_send_mbox *mbox)
+{
+ /* write back is 16B, but only the first 4B are used */
+ u64 wb_val = be64_to_cpu(*mbox->wb_status);
+
+ rmb(); /* make sure the status is read before it is checked */
+
+ return (u16)(wb_val & MBOX_WB_STATUS_ERRCODE_MASK);
+}
+
+static enum hinic3_wait_return check_mbox_wb_status(void *priv_data)
+{
+ struct hinic3_mbox *func_to_func = priv_data;
+ u16 wb_status;
+
+ if (MBOX_MSG_CHANNEL_STOP(func_to_func) || !func_to_func->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ wb_status = get_mbox_status(&func_to_func->send_mbox);
+
+ return MBOX_STATUS_FINISHED(wb_status) ?
+ WAIT_PROCESS_CPL : WAIT_PROCESS_WAITING;
+}
+
+static int send_mbox_seg(struct hinic3_mbox *func_to_func, u64 header,
+ u16 dst_func, void *seg, u16 seg_len, void *msg_info)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ u8 num_aeqs = hwdev->hwif->attr.num_aeqs;
+ u16 dst_aeqn, wb_status = 0, errcode;
+ u16 seq_dir = HINIC3_MSG_HEADER_GET(header, DIRECTION);
+ int err;
+
+ /* for mbox to mgmt cpu, hardware does not care about the dst aeq id */
+ if (num_aeqs > HINIC3_MBOX_RSP_MSG_AEQ)
+ dst_aeqn = (seq_dir == HINIC3_MSG_DIRECT_SEND) ?
+ HINIC3_ASYNC_MSG_AEQ : HINIC3_MBOX_RSP_MSG_AEQ;
+ else
+ dst_aeqn = 0;
+
+ clear_mbox_status(send_mbox);
+
+ mbox_copy_header(hwdev, send_mbox, &header);
+
+ err = mbox_copy_send_data(hwdev, send_mbox, seg, seg_len);
+ if (err != 0)
+ return err;
+
+ write_mbox_msg_attr(func_to_func, dst_func, dst_aeqn, seg_len);
+
+ wmb(); /* commit the mbox msg attributes before polling for completion */
+
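+ /* Poll the hardware write-back status until this segment is reported done or the timeout expires. */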
+ err = hinic3_wait_for_timeout(func_to_func, check_mbox_wb_status,
+ MBOX_MSG_POLLING_TIMEOUT, MBOX_MSG_WAIT_ONCE_TIME_US);
+ wb_status = get_mbox_status(send_mbox);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Send mailbox segment timeout, wb status: 0x%x\n",
+ wb_status);
+ dump_mbox_reg(hwdev);
+ return -ETIMEDOUT;
+ }
+
+ if (!MBOX_STATUS_SUCCESS(wb_status)) {
+ sdk_err(hwdev->dev_hdl, "Send mailbox segment to function %u error, wb status: 0x%x\n",
+ dst_func, wb_status);
+ errcode = MBOX_STATUS_ERRCODE(wb_status);
+ return errcode ? errcode : -EFAULT;
+ }
+
+ return 0;
+}
+
+static int send_mbox_msg(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ void *msg, u16 msg_len, u16 dst_func,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ struct mbox_dma_msg dma_msg = {0};
+ enum hinic3_data_type data_type = HINIC3_DATA_INLINE;
+ int err = 0;
+ u32 seq_id = 0;
+ u16 seg_len = MBOX_SEG_LEN;
+ u16 rsp_aeq_id, left;
+ u8 *msg_seg = NULL;
+ u64 header = 0;
+
+ if (hwdev->poll || hwdev->hwif->attr.num_aeqs >= 0x2)
+ rsp_aeq_id = HINIC3_MBOX_RSP_MSG_AEQ;
+ else
+ rsp_aeq_id = 0;
+
+ mutex_lock(&func_to_func->msg_send_lock);
+
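+ /* Messages to the management CPU use the DMA queue when mailbox segmentation is not supported; only the DMA descriptor is sent inline. */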
+ if (IS_DMA_MBX_MSG(dst_func) && !COMM_SUPPORT_MBOX_SEGMENT(hwdev)) {
+ err = mbox_prepare_dma_msg(func_to_func, ack_type, &dma_msg, msg, msg_len);
+ if (err != 0)
+ goto send_err;
+
+ msg = &dma_msg;
+ msg_len = sizeof(dma_msg);
+ data_type = HINIC3_DATA_DMA;
+ }
+
+ msg_seg = (u8 *)msg;
+ left = msg_len;
+
+ header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(seg_len, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(data_type, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(SEQ_ID_START_VAL, SEQID) |
+ HINIC3_MSG_HEADER_SET(NOT_LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+ /* The VF's offset to its associated PF */
+ HINIC3_MSG_HEADER_SET(msg_info->msg_id, MSG_ID) |
+ HINIC3_MSG_HEADER_SET(rsp_aeq_id, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_FROM_MBOX, SOURCE) |
+ HINIC3_MSG_HEADER_SET(!!msg_info->status, STATUS) |
+ HINIC3_MSG_HEADER_SET(hinic3_global_func_id(hwdev),
+ SRC_GLB_FUNC_IDX);
+
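+ /* Send the payload in MBOX_SEG_LEN chunks; the LAST bit in the header marks the final segment. */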
+ while (!(HINIC3_MSG_HEADER_GET(header, LAST))) {
+ if (left <= MBOX_SEG_LEN) {
+ header &= ~MBOX_SEGLEN_MASK;
+ header |= HINIC3_MSG_HEADER_SET(left, SEG_LEN);
+ header |= HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST);
+
+ seg_len = left;
+ }
+
+ err = send_mbox_seg(func_to_func, header, dst_func, msg_seg,
+ seg_len, msg_info);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to send mbox seg, seq_id=0x%llx\n",
+ HINIC3_MSG_HEADER_GET(header, SEQID));
+ goto send_err;
+ }
+
+ left -= MBOX_SEG_LEN;
+ msg_seg += MBOX_SEG_LEN;
+
+ seq_id++;
+ header &= ~(HINIC3_MSG_HEADER_SET(HINIC3_MSG_HEADER_SEQID_MASK,
+ SEQID));
+ header |= HINIC3_MSG_HEADER_SET(seq_id, SEQID);
+ }
+
+send_err:
+ mutex_unlock(&func_to_func->msg_send_lock);
+
+ return err;
+}
+
+static void set_mbox_to_func_event(struct hinic3_mbox *func_to_func,
+ enum mbox_event_state event_flag)
+{
+ spin_lock(&func_to_func->mbox_lock);
+ func_to_func->event_flag = event_flag;
+ spin_unlock(&func_to_func->mbox_lock);
+}
+
+static enum hinic3_wait_return check_mbox_msg_finish(void *priv_data)
+{
+ struct hinic3_mbox *func_to_func = priv_data;
+
+ if (MBOX_MSG_CHANNEL_STOP(func_to_func) || func_to_func->hwdev->chip_present_flag == 0)
+ return WAIT_PROCESS_ERR;
+
+ return (func_to_func->event_flag == EVENT_SUCCESS) ?
+ WAIT_PROCESS_CPL : WAIT_PROCESS_WAITING;
+}
+
+static int wait_mbox_msg_completion(struct hinic3_mbox *func_to_func,
+ u32 timeout)
+{
+ u32 wait_time;
+ int err;
+
+ wait_time = (timeout != 0) ? timeout : HINIC3_MBOX_COMP_TIME;
+ err = hinic3_wait_for_timeout(func_to_func, check_mbox_msg_finish,
+ wait_time, WAIT_USEC_50);
+ if (err) {
+ set_mbox_to_func_event(func_to_func, EVENT_TIMEOUT);
+ return -ETIMEDOUT;
+ }
+
+ set_mbox_to_func_event(func_to_func, EVENT_END);
+
+ return 0;
+}
+
+#define TRY_MBOX_LOCK_SLEEP 1000
+static int send_mbox_msg_lock(struct hinic3_mbox *func_to_func, u16 channel)
+{
+ if (!func_to_func->lock_channel_en) {
+ mutex_lock(&func_to_func->mbox_send_lock);
+ return 0;
+ }
+
+ while (test_bit(channel, &func_to_func->channel_stop) == 0) {
+ if (mutex_trylock(&func_to_func->mbox_send_lock) != 0)
+ return 0;
+
+ usleep_range(TRY_MBOX_LOCK_SLEEP - 1, TRY_MBOX_LOCK_SLEEP);
+ }
+
+ return -EAGAIN;
+}
+
+static void send_mbox_msg_unlock(struct hinic3_mbox *func_to_func)
+{
+ mutex_unlock(&func_to_func->mbox_send_lock);
+}
+
+int hinic3_mbox_to_func(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ u16 dst_func, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel)
+{
+ /* use the response msg descriptor to hold data returned by the other function */
+ struct hinic3_msg_desc *msg_desc = NULL;
+ struct mbox_msg_info msg_info = {0};
+ int err;
+
+ if (func_to_func->hwdev->chip_present_flag == 0)
+ return -EPERM;
+
+ /* expect response message */
+ msg_desc = get_mbox_msg_desc(func_to_func, HINIC3_MSG_RESPONSE, dst_func);
+ if (!msg_desc) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "msg_desc null\n");
+ return -EFAULT;
+ }
+
+ err = send_mbox_msg_lock(func_to_func, channel);
+ if (err)
+ return err;
+
+ func_to_func->cur_msg_channel = channel;
+ msg_info.msg_id = MBOX_MSG_ID_INC(func_to_func);
+
+ set_mbox_to_func_event(func_to_func, EVENT_START);
+
+ err = send_mbox_msg(func_to_func, mod, cmd, buf_in, in_size, dst_func,
+ HINIC3_MSG_DIRECT_SEND, HINIC3_MSG_ACK, &msg_info);
+ if (err) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Send mailbox mod %u, cmd %u failed, msg_id: %u, err: %d\n",
+ mod, cmd, msg_info.msg_id, err);
+ set_mbox_to_func_event(func_to_func, EVENT_FAIL);
+ goto send_err;
+ }
+ func_to_func->hwdev->mbox_send_cnt++;
+
+ if (wait_mbox_msg_completion(func_to_func, timeout) != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Send mbox msg timeout, msg_id: %u\n", msg_info.msg_id);
+ hinic3_dump_aeq_info(func_to_func->hwdev);
+ err = -ETIMEDOUT;
+ goto send_err;
+ }
+ func_to_func->hwdev->mbox_ack_cnt++;
+
+ if (mod != msg_desc->mod || cmd != msg_desc->cmd) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Invalid response mbox message, mod: 0x%x, cmd: 0x%x, expect mod: 0x%x, cmd: 0x%x\n",
+ msg_desc->mod, msg_desc->cmd, mod, cmd);
+ err = -EFAULT;
+ goto send_err;
+ }
+
+ if (msg_desc->msg_info.status) {
+ err = msg_desc->msg_info.status;
+ goto send_err;
+ }
+
+ if (buf_out && out_size) {
+ if (*out_size < msg_desc->msg_len) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Invalid response mbox message length: %u for mod %d cmd %u, should less than: %u\n",
+ msg_desc->msg_len, mod, cmd, *out_size);
+ err = -EFAULT;
+ goto send_err;
+ }
+
+ if (msg_desc->msg_len)
+ memcpy(buf_out, msg_desc->msg, msg_desc->msg_len);
+ *out_size = msg_desc->msg_len;
+ }
+
+send_err:
+ send_mbox_msg_unlock(func_to_func);
+
+ return err;
+}
+
+static int mbox_func_params_valid(struct hinic3_mbox *func_to_func,
+ void *buf_in, u16 in_size, u16 channel)
+{
+ if (!buf_in || !in_size)
+ return -EINVAL;
+
+ if (in_size > HINIC3_MBOX_DATA_SIZE) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Mbox msg len %u exceed limit: [1, %u]\n",
+ in_size, HINIC3_MBOX_DATA_SIZE);
+ return -EINVAL;
+ }
+
+ if (channel >= HINIC3_CHANNEL_MAX) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Invalid channel id: 0x%x\n", channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_mbox_to_host(struct hinic3_hwdev *hwdev, u16 dest_host_ppf_id, enum hinic3_mod_type mod,
+ u8 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ int err;
+
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size, channel);
+ if (err)
+ return err;
+
+ if (!HINIC3_IS_PPF(hwdev)) {
+ sdk_err(hwdev->dev_hdl, "Params error, only PPF can send message to other host, func_type: %d\n",
+ hinic3_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ return hinic3_mbox_to_func(func_to_func, mod, cmd, dest_host_ppf_id, buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+
+int hinic3_mbox_to_func_no_ack(struct hinic3_hwdev *hwdev, u16 func_idx,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size, u16 channel)
+{
+ struct mbox_msg_info msg_info = {0};
+ int err = mbox_func_params_valid(hwdev->func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ err = send_mbox_msg_lock(hwdev->func_to_func, channel);
+ if (err)
+ return err;
+
+ err = send_mbox_msg(hwdev->func_to_func, mod, cmd, buf_in, in_size,
+ func_idx, HINIC3_MSG_DIRECT_SEND,
+ HINIC3_MSG_NO_ACK, &msg_info);
+ if (err)
+ sdk_err(hwdev->dev_hdl, "Send mailbox no ack failed\n");
+
+ send_mbox_msg_unlock(hwdev->func_to_func);
+
+ return err;
+}
+
+int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ int err = mbox_func_params_valid(func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ /* TODO: MPU has not implemented this cmd yet */
+ if (mod == HINIC3_MOD_COMM && cmd == COMM_MGMT_CMD_SEND_API_ACK_BY_UP)
+ return 0;
+
+ return hinic3_mbox_to_func(func_to_func, mod, cmd, HINIC3_MGMT_SRC_ID,
+ buf_in, in_size, buf_out, out_size, timeout,
+ channel);
+}
+
+void hinic3_response_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 msg_id)
+{
+ struct mbox_msg_info msg_info;
+
+ msg_info.msg_id = (u8)msg_id;
+ msg_info.status = 0;
+
+ send_mbox_msg(hwdev->func_to_func, mod, cmd, buf_in, in_size,
+ HINIC3_MGMT_SRC_ID, HINIC3_MSG_RESPONSE,
+ HINIC3_MSG_NO_ACK, &msg_info);
+}
+
+int hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 channel)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ int err = mbox_func_params_valid(func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ return hinic3_mbox_to_func_no_ack(hwdev, HINIC3_MGMT_SRC_ID, mod, cmd,
+ buf_in, in_size, channel);
+}
+
+int hinic3_mbox_ppf_to_host(void *hwdev, u8 mod, u16 cmd, u8 host_id,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u16 dst_ppf_func;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!(dev->chip_present_flag))
+ return -EPERM;
+
+ err = mbox_func_params_valid(dev->func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ if (!HINIC3_IS_PPF(dev)) {
+ sdk_err(dev->dev_hdl, "Params error, only ppf support send mbox to ppf. func_type: %d\n",
+ hinic3_func_type(dev));
+ return -EINVAL;
+ }
+
+ if (host_id >= HINIC3_MAX_HOST_NUM(dev) ||
+ host_id == HINIC3_PCI_INTF_IDX(dev->hwif)) {
+ sdk_err(dev->dev_hdl, "Params error, host id: %u\n", host_id);
+ return -EINVAL;
+ }
+
+ dst_ppf_func = hinic3_host_ppf_idx(dev, host_id);
+ if (dst_ppf_func >= HINIC3_MAX_PF_NUM(dev)) {
+ sdk_err(dev->dev_hdl, "Dest host(%u) have not elect ppf(0x%x).\n",
+ host_id, dst_ppf_func);
+ return -EINVAL;
+ }
+
+ return hinic3_mbox_to_func(dev->func_to_func, mod, cmd,
+ dst_ppf_func, buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_ppf_to_host);
+
+int hinic3_mbox_to_pf(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!(dev->chip_present_flag))
+ return -EPERM;
+
+ err = mbox_func_params_valid(dev->func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ if (!HINIC3_IS_VF(dev)) {
+ sdk_err(dev->dev_hdl, "Params error, func_type: %d\n",
+ hinic3_func_type(dev));
+ return -EINVAL;
+ }
+
+ return hinic3_mbox_to_func(dev->func_to_func, mod, cmd,
+ hinic3_pf_id_of_vf(dev), buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_to_pf);
+
+int hinic3_mbox_to_vf(void *hwdev, u16 vf_id, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u32 timeout,
+ u16 channel)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+ int err = 0;
+ u16 dst_func_idx;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size, channel);
+ if (err != 0)
+ return err;
+
+ if (HINIC3_IS_VF((struct hinic3_hwdev *)hwdev)) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl, "Params error, func_type: %d\n",
+ hinic3_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ if (!vf_id) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "VF id(%u) error!\n", vf_id);
+ return -EINVAL;
+ }
+
+ /* vf_offset_to_pf + vf_id is the VF's global function id within
+ * this PF
+ */
+ dst_func_idx = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+
+ return hinic3_mbox_to_func(func_to_func, mod, cmd, dst_func_idx, buf_in,
+ in_size, buf_out, out_size, timeout,
+ channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_to_vf);
+
+int hinic3_mbox_to_vf_no_ack(void *hwdev, u16 vf_id, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size, u16 channel)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+ int err = 0;
+ u16 dst_func_idx;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size, channel);
+ if (err != 0)
+ return err;
+
+ if (HINIC3_IS_VF((struct hinic3_hwdev *)hwdev)) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl, "Params error, func_type: %d\n",
+ hinic3_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ if (!vf_id) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "VF id(%u) error!\n", vf_id);
+ return -EINVAL;
+ }
+
+ /* vf_offset_to_pf + vf_id is the VF's global function id within
+ * this PF
+ */
+ dst_func_idx = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+
+ return hinic3_mbox_to_func_no_ack(hwdev, dst_func_idx, mod, cmd,
+ buf_in, in_size, channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_to_vf_no_ack);
+
+void hinic3_mbox_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable)
+{
+ hwdev->func_to_func->lock_channel_en = enable;
+
+ sdk_info(hwdev->dev_hdl, "%s mbox channel lock\n",
+ enable ? "Enable" : "Disable");
+}
+
+static int alloc_mbox_msg_channel(struct hinic3_msg_channel *msg_ch)
+{
+ msg_ch->resp_msg.msg = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!msg_ch->resp_msg.msg)
+ return -ENOMEM;
+
+ msg_ch->recv_msg.msg = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!msg_ch->recv_msg.msg) {
+ kfree(msg_ch->resp_msg.msg);
+ return -ENOMEM;
+ }
+
+ msg_ch->resp_msg.seq_id = SEQ_ID_MAX_VAL;
+ msg_ch->recv_msg.seq_id = SEQ_ID_MAX_VAL;
+ atomic_set(&msg_ch->recv_msg_cnt, 0);
+
+ return 0;
+}
+
+static void free_mbox_msg_channel(struct hinic3_msg_channel *msg_ch)
+{
+ kfree(msg_ch->recv_msg.msg);
+ kfree(msg_ch->resp_msg.msg);
+}
+
+static int init_mgmt_msg_channel(struct hinic3_mbox *func_to_func)
+{
+ int err;
+
+ err = alloc_mbox_msg_channel(&func_to_func->mgmt_msg);
+ if (err != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to alloc mgmt message channel\n");
+ return err;
+ }
+
+ err = hinic3_init_mbox_dma_queue(func_to_func);
+ if (err != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to init mbox dma queue\n");
+ free_mbox_msg_channel(&func_to_func->mgmt_msg);
+ }
+
+ return err;
+}
+
+static void deinit_mgmt_msg_channel(struct hinic3_mbox *func_to_func)
+{
+ hinic3_deinit_mbox_dma_queue(func_to_func);
+ free_mbox_msg_channel(&func_to_func->mgmt_msg);
+}
+
+int hinic3_mbox_init_host_msg_channel(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ u8 host_num = HINIC3_MAX_HOST_NUM(hwdev);
+ int i, host_id, err;
+
+ if (host_num == 0)
+ return 0;
+
+ func_to_func->host_msg = kcalloc(host_num,
+ sizeof(*func_to_func->host_msg),
+ GFP_KERNEL);
+ if (!func_to_func->host_msg) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to alloc host message array\n");
+ return -ENOMEM;
+ }
+
+ for (host_id = 0; host_id < host_num; host_id++) {
+ err = alloc_mbox_msg_channel(&func_to_func->host_msg[host_id]);
+ if (err) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Failed to alloc host %d message channel\n",
+ host_id);
+ goto alloc_msg_ch_err;
+ }
+ }
+
+ func_to_func->support_h2h_msg = true;
+
+ return 0;
+
+alloc_msg_ch_err:
+ for (i = 0; i < host_id; i++)
+ free_mbox_msg_channel(&func_to_func->host_msg[i]);
+
+ kfree(func_to_func->host_msg);
+ func_to_func->host_msg = NULL;
+
+ return -ENOMEM;
+}
+
+static void deinit_host_msg_channel(struct hinic3_mbox *func_to_func)
+{
+ int i;
+
+ if (!func_to_func->host_msg)
+ return;
+
+ for (i = 0; i < HINIC3_MAX_HOST_NUM(func_to_func->hwdev); i++)
+ free_mbox_msg_channel(&func_to_func->host_msg[i]);
+
+ kfree(func_to_func->host_msg);
+ func_to_func->host_msg = NULL;
+}
+
+int hinic3_init_func_mbox_msg_channel(void *hwdev, u16 num_func)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_mbox *func_to_func = NULL;
+ u16 func_id, i;
+ int err;
+
+ if (!hwdev || !num_func || num_func > HINIC3_MAX_FUNCTIONS)
+ return -EINVAL;
+
+ func_to_func = dev->func_to_func;
+ if (func_to_func->func_msg)
+ return (func_to_func->num_func_msg == num_func) ? 0 : -EFAULT;
+
+ func_to_func->func_msg =
+ kcalloc(num_func, sizeof(*func_to_func->func_msg), GFP_KERNEL);
+ if (!func_to_func->func_msg) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to alloc func message array\n");
+ return -ENOMEM;
+ }
+
+ for (func_id = 0; func_id < num_func; func_id++) {
+ err = alloc_mbox_msg_channel(&func_to_func->func_msg[func_id]);
+ if (err != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Failed to alloc func %hu message channel\n",
+ func_id);
+ goto alloc_msg_ch_err;
+ }
+ }
+
+ func_to_func->num_func_msg = num_func;
+
+ return 0;
+
+alloc_msg_ch_err:
+ for (i = 0; i < func_id; i++)
+ free_mbox_msg_channel(&func_to_func->func_msg[i]);
+
+ kfree(func_to_func->func_msg);
+ func_to_func->func_msg = NULL;
+
+ return -ENOMEM;
+}
+
+static void hinic3_deinit_func_mbox_msg_channel(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ u16 i;
+
+ if (!func_to_func->func_msg)
+ return;
+
+ for (i = 0; i < func_to_func->num_func_msg; i++)
+ free_mbox_msg_channel(&func_to_func->func_msg[i]);
+
+ kfree(func_to_func->func_msg);
+ func_to_func->func_msg = NULL;
+}
+
+static struct hinic3_msg_desc *get_mbox_msg_desc(struct hinic3_mbox *func_to_func,
+ u64 dir, u64 src_func_id)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ struct hinic3_msg_channel *msg_ch = NULL;
+ u16 id;
+
+ if (src_func_id == HINIC3_MGMT_SRC_ID) {
+ msg_ch = &func_to_func->mgmt_msg;
+ } else if (HINIC3_IS_VF(hwdev)) {
+ /* message from pf */
+ msg_ch = func_to_func->func_msg;
+ if (src_func_id != hinic3_pf_id_of_vf(hwdev) || !msg_ch)
+ return NULL;
+ } else if (src_func_id > hinic3_glb_pf_vf_offset(hwdev)) {
+ /* message from vf */
+ id = (u16)(src_func_id - 1U) - hinic3_glb_pf_vf_offset(hwdev);
+ if (id >= func_to_func->num_func_msg)
+ return NULL;
+
+ msg_ch = &func_to_func->func_msg[id];
+ } else {
+ /* message from other host's ppf */
+ if (!func_to_func->support_h2h_msg)
+ return NULL;
+
+ for (id = 0; id < HINIC3_MAX_HOST_NUM(hwdev); id++) {
+ if (src_func_id == hinic3_host_ppf_idx(hwdev, (u8)id))
+ break;
+ }
+
+ if (id == HINIC3_MAX_HOST_NUM(hwdev) || !func_to_func->host_msg)
+ return NULL;
+
+ msg_ch = &func_to_func->host_msg[id];
+ }
+
+ return (dir == HINIC3_MSG_DIRECT_SEND) ?
+ &msg_ch->recv_msg : &msg_ch->resp_msg;
+}
+
+static void prepare_send_mbox(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+
+ send_mbox->data = MBOX_AREA(func_to_func->hwdev->hwif);
+}
+
+static int alloc_mbox_wb_status(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ u32 addr_h, addr_l;
+
+ send_mbox->wb_vaddr = dma_zalloc_coherent(hwdev->dev_hdl,
+ MBOX_WB_STATUS_LEN,
+ &send_mbox->wb_paddr,
+ GFP_KERNEL);
+ if (!send_mbox->wb_vaddr)
+ return -ENOMEM;
+
+ send_mbox->wb_status = send_mbox->wb_vaddr;
+
+ addr_h = upper_32_bits(send_mbox->wb_paddr);
+ addr_l = lower_32_bits(send_mbox->wb_paddr);
+
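+ /* Program the write-back status DMA address so hardware can report mailbox send results. */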
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+ addr_h);
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+ addr_l);
+
+ return 0;
+}
+
+static void free_mbox_wb_status(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+ 0);
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+ 0);
+
+ dma_free_coherent(hwdev->dev_hdl, MBOX_WB_STATUS_LEN,
+ send_mbox->wb_vaddr, send_mbox->wb_paddr);
+}
+
+int hinic3_func_to_func_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func;
+ int err = -ENOMEM;
+
+ func_to_func = kzalloc(sizeof(*func_to_func), GFP_KERNEL);
+ if (!func_to_func)
+ return -ENOMEM;
+
+ hwdev->func_to_func = func_to_func;
+ func_to_func->hwdev = hwdev;
+ mutex_init(&func_to_func->mbox_send_lock);
+ mutex_init(&func_to_func->msg_send_lock);
+ spin_lock_init(&func_to_func->mbox_lock);
+ func_to_func->workq = create_singlethread_workqueue(HINIC3_MBOX_WQ_NAME);
+ if (!func_to_func->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize MBOX workqueue\n");
+ goto create_mbox_workq_err;
+ }
+
+ err = init_mgmt_msg_channel(func_to_func);
+ if (err)
+ goto init_mgmt_msg_ch_err;
+
+ if (HINIC3_IS_VF(hwdev)) {
+ /* VF to PF mbox message channel */
+ err = hinic3_init_func_mbox_msg_channel(hwdev, 1);
+ if (err)
+ goto init_func_msg_ch_err;
+ }
+
+ err = alloc_mbox_wb_status(func_to_func);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc mbox write back status\n");
+ goto alloc_wb_status_err;
+ }
+
+ prepare_send_mbox(func_to_func);
+
+ return 0;
+
+alloc_wb_status_err:
+ if (HINIC3_IS_VF(hwdev))
+ hinic3_deinit_func_mbox_msg_channel(hwdev);
+
+init_func_msg_ch_err:
+ deinit_mgmt_msg_channel(func_to_func);
+
+init_mgmt_msg_ch_err:
+ destroy_workqueue(func_to_func->workq);
+
+create_mbox_workq_err:
+ spin_lock_deinit(&func_to_func->mbox_lock);
+ mutex_deinit(&func_to_func->msg_send_lock);
+ mutex_deinit(&func_to_func->mbox_send_lock);
+ kfree(func_to_func);
+
+ return err;
+}
+
+void hinic3_func_to_func_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+
+ /* destroy the workqueue before freeing related mbox resources to
+ * prevent illegal resource access
+ */
+ destroy_workqueue(func_to_func->workq);
+
+ free_mbox_wb_status(func_to_func);
+ if (HINIC3_IS_PPF(hwdev))
+ deinit_host_msg_channel(func_to_func);
+ hinic3_deinit_func_mbox_msg_channel(hwdev);
+ deinit_mgmt_msg_channel(func_to_func);
+ spin_lock_deinit(&func_to_func->mbox_lock);
+ mutex_deinit(&func_to_func->mbox_send_lock);
+ mutex_deinit(&func_to_func->msg_send_lock);
+
+ kfree(func_to_func);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h
new file mode 100644
index 0000000..226f8d6
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h
@@ -0,0 +1,281 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MBOX_H
+#define HINIC3_MBOX_H
+
+#include "hinic3_crm.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_MBOX_PF_SEND_ERR 0x1
+
+#define HINIC3_MGMT_SRC_ID 0x1FFF
+#define HINIC3_MAX_FUNCTIONS 4096
+
+/* message header define */
+#define HINIC3_MSG_HEADER_SRC_GLB_FUNC_IDX_SHIFT 0
+#define HINIC3_MSG_HEADER_STATUS_SHIFT 13
+#define HINIC3_MSG_HEADER_SOURCE_SHIFT 15
+#define HINIC3_MSG_HEADER_AEQ_ID_SHIFT 16
+#define HINIC3_MSG_HEADER_MSG_ID_SHIFT 18
+#define HINIC3_MSG_HEADER_CMD_SHIFT 22
+
+#define HINIC3_MSG_HEADER_MSG_LEN_SHIFT 32
+#define HINIC3_MSG_HEADER_MODULE_SHIFT 43
+#define HINIC3_MSG_HEADER_SEG_LEN_SHIFT 48
+#define HINIC3_MSG_HEADER_NO_ACK_SHIFT 54
+#define HINIC3_MSG_HEADER_DATA_TYPE_SHIFT 55
+#define HINIC3_MSG_HEADER_SEQID_SHIFT 56
+#define HINIC3_MSG_HEADER_LAST_SHIFT 62
+#define HINIC3_MSG_HEADER_DIRECTION_SHIFT 63
+
+#define HINIC3_MSG_HEADER_SRC_GLB_FUNC_IDX_MASK 0x1FFF
+#define HINIC3_MSG_HEADER_STATUS_MASK 0x1
+#define HINIC3_MSG_HEADER_SOURCE_MASK 0x1
+#define HINIC3_MSG_HEADER_AEQ_ID_MASK 0x3
+#define HINIC3_MSG_HEADER_MSG_ID_MASK 0xF
+#define HINIC3_MSG_HEADER_CMD_MASK 0x3FF
+
+#define HINIC3_MSG_HEADER_MSG_LEN_MASK 0x7FF
+#define HINIC3_MSG_HEADER_MODULE_MASK 0x1F
+#define HINIC3_MSG_HEADER_SEG_LEN_MASK 0x3F
+#define HINIC3_MSG_HEADER_NO_ACK_MASK 0x1
+#define HINIC3_MSG_HEADER_DATA_TYPE_MASK 0x1
+#define HINIC3_MSG_HEADER_SEQID_MASK 0x3F
+#define HINIC3_MSG_HEADER_LAST_MASK 0x1
+#define HINIC3_MSG_HEADER_DIRECTION_MASK 0x1
+
+#define MBOX_MAX_BUF_SZ 2048U
+#define MBOX_HEADER_SZ 8
+#define HINIC3_MBOX_DATA_SIZE (MBOX_MAX_BUF_SZ - MBOX_HEADER_SZ)
+
+#define HINIC3_MSG_HEADER_GET(val, field) \
+ (((val) >> HINIC3_MSG_HEADER_##field##_SHIFT) & \
+ HINIC3_MSG_HEADER_##field##_MASK)
+#define HINIC3_MSG_HEADER_SET(val, field) \
+ ((u64)(((u64)(val)) & HINIC3_MSG_HEADER_##field##_MASK) << \
+ HINIC3_MSG_HEADER_##field##_SHIFT)
+
+#define IS_DMA_MBX_MSG(dst_func) ((dst_func) == HINIC3_MGMT_SRC_ID)
+
+enum hinic3_msg_direction_type {
+ HINIC3_MSG_DIRECT_SEND = 0,
+ HINIC3_MSG_RESPONSE = 1,
+};
+
+enum hinic3_msg_segment_type {
+ NOT_LAST_SEGMENT = 0,
+ LAST_SEGMENT = 1,
+};
+
+enum hinic3_msg_ack_type {
+ HINIC3_MSG_ACK,
+ HINIC3_MSG_NO_ACK,
+};
+
+enum hinic3_data_type {
+ HINIC3_DATA_INLINE = 0,
+ HINIC3_DATA_DMA = 1,
+};
+
+enum hinic3_msg_src_type {
+ HINIC3_MSG_FROM_MGMT = 0,
+ HINIC3_MSG_FROM_MBOX = 1,
+};
+
+enum hinic3_msg_aeq_type {
+ HINIC3_ASYNC_MSG_AEQ = 0,
+ /* indicates which aeq the dest func or mgmt cpu uses to respond to mbox messages */
+ HINIC3_MBOX_RSP_MSG_AEQ = 1,
+ /* indicates which aeq the mgmt cpu uses to respond to api cmd messages */
+ HINIC3_MGMT_RSP_MSG_AEQ = 2,
+};
+
+#define HINIC3_MBOX_WQ_NAME "hinic3_mbox"
+
+struct mbox_msg_info {
+ u8 msg_id;
+ u8 status; /* can only use 1 bit */
+};
+
+struct hinic3_msg_desc {
+ void *msg;
+ u16 msg_len;
+ u8 seq_id;
+ u8 mod;
+ u16 cmd;
+ struct mbox_msg_info msg_info;
+};
+
+struct hinic3_msg_channel {
+ struct hinic3_msg_desc resp_msg;
+ struct hinic3_msg_desc recv_msg;
+
+ atomic_t recv_msg_cnt;
+};
+
+/* Receive other functions mbox message */
+struct hinic3_recv_mbox {
+ void *msg;
+ u16 msg_len;
+ u8 msg_id;
+ u8 mod;
+ u16 cmd;
+ u16 src_func_idx;
+
+ enum hinic3_msg_ack_type ack_type;
+ u32 rsvd1;
+
+ void *resp_buff;
+};
+
+struct hinic3_send_mbox {
+ u8 *data;
+
+ u64 *wb_status; /* write back status */
+ void *wb_vaddr;
+ dma_addr_t wb_paddr;
+};
+
+enum mbox_event_state {
+ EVENT_START = 0,
+ EVENT_FAIL,
+ EVENT_SUCCESS,
+ EVENT_TIMEOUT,
+ EVENT_END,
+};
+
+enum hinic3_mbox_cb_state {
+ HINIC3_VF_MBOX_CB_REG = 0,
+ HINIC3_VF_MBOX_CB_RUNNING,
+ HINIC3_PF_MBOX_CB_REG,
+ HINIC3_PF_MBOX_CB_RUNNING,
+ HINIC3_PPF_MBOX_CB_REG,
+ HINIC3_PPF_MBOX_CB_RUNNING,
+ HINIC3_PPF_TO_PF_MBOX_CB_REG,
+ HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+};
+
+enum hinic3_mbox_ack_type {
+ MBOX_ACK,
+ MBOX_NO_ACK,
+};
+
+struct mbox_dma_msg {
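+ /* XOR checksum of the 4-byte-aligned DMA message payload */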
+ u32 xor;
+ u32 dma_addr_high;
+ u32 dma_addr_low;
+ u32 msg_len;
+ u64 rsvd;
+};
+
+struct mbox_dma_queue {
+ void *dma_buff_vaddr;
+ dma_addr_t dma_buff_paddr;
+
+ u16 depth;
+ u16 prod_idx;
+ u16 cons_idx;
+};
+
+struct hinic3_mbox {
+ struct hinic3_hwdev *hwdev;
+
+ bool lock_channel_en;
+ unsigned long channel_stop;
+ u16 cur_msg_channel;
+ u32 rsvd1;
+
+ /* lock for send mbox message and ack message */
+ struct mutex mbox_send_lock;
+ /* lock for send mbox message */
+ struct mutex msg_send_lock;
+ struct hinic3_send_mbox send_mbox;
+
+ struct mbox_dma_queue sync_msg_queue;
+ struct mbox_dma_queue async_msg_queue;
+
+ struct workqueue_struct *workq;
+
+ struct hinic3_msg_channel mgmt_msg; /* driver and MGMT CPU */
+ struct hinic3_msg_channel *host_msg; /* PPF message between hosts */
+ struct hinic3_msg_channel *func_msg; /* PF to VF or VF to PF */
+ u16 num_func_msg;
+ bool support_h2h_msg; /* host to host */
+
+ /* vf receive pf/ppf callback */
+ hinic3_vf_mbox_cb vf_mbox_cb[HINIC3_MOD_MAX];
+ void *vf_mbox_data[HINIC3_MOD_MAX];
+ /* pf/ppf receive vf callback */
+ hinic3_pf_mbox_cb pf_mbox_cb[HINIC3_MOD_MAX];
+ void *pf_mbox_data[HINIC3_MOD_MAX];
+ /* ppf receive pf/ppf callback */
+ hinic3_ppf_mbox_cb ppf_mbox_cb[HINIC3_MOD_MAX];
+ void *ppf_mbox_data[HINIC3_MOD_MAX];
+ /* pf receive ppf callback */
+ hinic3_pf_recv_from_ppf_mbox_cb pf_recv_ppf_mbox_cb[HINIC3_MOD_MAX];
+ void *pf_recv_ppf_mbox_data[HINIC3_MOD_MAX];
+ unsigned long ppf_to_pf_mbox_cb_state[HINIC3_MOD_MAX];
+ unsigned long ppf_mbox_cb_state[HINIC3_MOD_MAX];
+ unsigned long pf_mbox_cb_state[HINIC3_MOD_MAX];
+ unsigned long vf_mbox_cb_state[HINIC3_MOD_MAX];
+
+ u8 send_msg_id;
+ u16 rsvd2;
+ enum mbox_event_state event_flag;
+ /* lock for mbox event flag */
+ spinlock_t mbox_lock;
+ u64 rsvd3;
+};
+
+struct hinic3_mbox_work {
+ struct work_struct work;
+ struct hinic3_mbox *func_to_func;
+ struct hinic3_recv_mbox *recv_mbox;
+ struct hinic3_msg_channel *msg_ch;
+};
+
+struct vf_cmd_check_handle {
+ u16 cmd;
+ bool (*check_cmd)(struct hinic3_hwdev *hwdev, u16 src_func_idx,
+ void *buf_in, u16 in_size);
+};
+
+void hinic3_mbox_func_aeqe_handler(void *handle, u8 *header, u8 size);
+
+bool hinic3_mbox_check_cmd_valid(struct hinic3_hwdev *hwdev,
+ struct vf_cmd_check_handle *cmd_handle,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ u8 size);
+
+int hinic3_func_to_func_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_func_to_func_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_mbox_to_host(struct hinic3_hwdev *hwdev, u16 dest_host_ppf_id,
+ enum hinic3_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u32 timeout, u16 channel);
+
+int hinic3_mbox_to_func_no_ack(struct hinic3_hwdev *hwdev, u16 func_idx,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size,
+ u16 channel);
+
+int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel);
+
+void hinic3_response_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 msg_id);
+
+int hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 channel);
+int hinic3_mbox_to_func(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ u16 dst_func, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout, u16 channel);
+
+int hinic3_mbox_init_host_msg_channel(struct hinic3_hwdev *hwdev);
+
+void hinic3_mbox_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable);
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c
new file mode 100644
index 0000000..0d75177
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c
@@ -0,0 +1,1571 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/completion.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/semaphore.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "mpu_inband_cmd.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_eqs.h"
+#include "hinic3_mbox.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_csr.h"
+#include "hinic3_mgmt.h"
+
+#define HINIC3_MSG_TO_MGMT_MAX_LEN 2016
+
+#define HINIC3_API_CHAIN_AEQ_ID 2
+#define MAX_PF_MGMT_BUF_SIZE 2048UL
+#define SEGMENT_LEN 48
+#define ASYNC_MSG_FLAG 0x8
+#define MGMT_MSG_MAX_SEQ_ID (ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, \
+ SEGMENT_LEN) / SEGMENT_LEN)
+
+#define MGMT_MSG_LAST_SEG_MAX_LEN (MAX_PF_MGMT_BUF_SIZE - \
+ SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID)
+
+#define BUF_OUT_DEFAULT_SIZE 1
+
+#define MGMT_MSG_SIZE_MIN 20
+#define MGMT_MSG_SIZE_STEP 16
+#define MGMT_MSG_RSVD_FOR_DEV 8
+
+#define SYNC_MSG_ID_MASK 0x7
+#define ASYNC_MSG_ID_MASK 0x7
+
+#define SYNC_FLAG 0
+#define ASYNC_FLAG 1
+
+#define MSG_NO_RESP 0xFFFF
+
+#define MGMT_MSG_TIMEOUT 20000 /* millisecond */
+
+#define SYNC_MSG_ID(pf_to_mgmt) ((pf_to_mgmt)->sync_msg_id)
+
+#define SYNC_MSG_ID_INC(pf_to_mgmt) (SYNC_MSG_ID(pf_to_mgmt) = \
+ (SYNC_MSG_ID(pf_to_mgmt) + 1) & SYNC_MSG_ID_MASK)
+#define ASYNC_MSG_ID(pf_to_mgmt) ((pf_to_mgmt)->async_msg_id)
+
+#define ASYNC_MSG_ID_INC(pf_to_mgmt) (ASYNC_MSG_ID(pf_to_mgmt) = \
+ ((ASYNC_MSG_ID(pf_to_mgmt) + 1) & ASYNC_MSG_ID_MASK) \
+ | ASYNC_MSG_FLAG)
+
+static void pf_to_mgmt_send_event_set(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ int event_flag)
+{
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ pf_to_mgmt->event_flag = event_flag;
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+}
+
+/**
+ * hinic3_register_mgmt_msg_cb - register sync msg handler for a module
+ * @hwdev: the pointer to hw device
+ * @mod: module in the chip whose sync messages this handler will handle
+ * @pri_handle: the module's private data, passed to the callback
+ * @callback: the handler that will process sync messages for this module
+ **/
+int hinic3_register_mgmt_msg_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_mgmt_msg_cb callback)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+
+ if (mod >= HINIC3_MOD_HW_MAX || !hwdev)
+ return -EFAULT;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return -EINVAL;
+
+ pf_to_mgmt->recv_mgmt_msg_cb[mod] = callback;
+ pf_to_mgmt->recv_mgmt_msg_data[mod] = pri_handle;
+
+ set_bit(HINIC3_MGMT_MSG_CB_REG, &pf_to_mgmt->mgmt_msg_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_mgmt_msg_cb);
+
+/**
+ * hinic3_unregister_mgmt_msg_cb - unregister sync msg handler for a module
+ * @hwdev: the pointer to hw device
+ * @mod: module in the chip whose sync msg handler will be unregistered
+ **/
+void hinic3_unregister_mgmt_msg_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+
+ if (!hwdev || mod >= HINIC3_MOD_HW_MAX)
+ return;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return;
+
+ clear_bit(HINIC3_MGMT_MSG_CB_REG, &pf_to_mgmt->mgmt_msg_cb_state[mod]);
+
+ while (test_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[mod]))
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ pf_to_mgmt->recv_mgmt_msg_cb[mod] = NULL;
+ pf_to_mgmt->recv_mgmt_msg_data[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_mgmt_msg_cb);
+
+/**
+ * mgmt_msg_len - calculate the total message length
+ * @msg_data_len: the length of the message data
+ * Return: the total message length
+ **/
+static u16 mgmt_msg_len(u16 msg_data_len)
+{
+ /* u64 - the size of the header */
+ u16 msg_size;
+
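+ /* e.g. msg_data_len = 10: 8 (rsvd) + 8 (header) + 10 = 26 -> 20 + ALIGN(26 - 20, 16) = 36 bytes */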
+ msg_size = (u16)(MGMT_MSG_RSVD_FOR_DEV + sizeof(u64) + msg_data_len);
+
+ if (msg_size > MGMT_MSG_SIZE_MIN)
+ msg_size = MGMT_MSG_SIZE_MIN +
+ ALIGN((msg_size - MGMT_MSG_SIZE_MIN),
+ MGMT_MSG_SIZE_STEP);
+ else
+ msg_size = MGMT_MSG_SIZE_MIN;
+
+ return msg_size;
+}
+
+/**
+ * prepare_header - prepare the header of the message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @header: pointer of the header to prepare
+ * @msg_len: the length of the message
+ * @mod: module in the chip that will get the message
+ * @ack_type: ack type of the message
+ * @direction: the direction of the original message
+ * @cmd: command of the message
+ * @msg_id: message id
+ **/
+static void prepare_header(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u64 *header, u16 msg_len, u8 mod,
+ enum hinic3_msg_ack_type ack_type,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_mgmt_cmd cmd, u32 msg_id)
+{
+ struct hinic3_hwif *hwif = pf_to_mgmt->hwdev->hwif;
+
+ *header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(msg_len, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(HINIC3_DATA_INLINE, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(0, SEQID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_API_CHAIN_AEQ_ID, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_FROM_MGMT, SOURCE) |
+ HINIC3_MSG_HEADER_SET(hwif->attr.func_global_idx,
+ SRC_GLB_FUNC_IDX) |
+ HINIC3_MSG_HEADER_SET(msg_id, MSG_ID);
+}
+
+static void clp_prepare_header(struct hinic3_hwdev *hwdev, u64 *header,
+ u16 msg_len, u8 mod,
+ enum hinic3_msg_ack_type ack_type,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_mgmt_cmd cmd, u32 msg_id)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+
+ *header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(msg_len, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(HINIC3_DATA_INLINE, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(0, SEQID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_API_CHAIN_AEQ_ID, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+ HINIC3_MSG_HEADER_SET(hwif->attr.func_global_idx,
+ SRC_GLB_FUNC_IDX) |
+ HINIC3_MSG_HEADER_SET(msg_id, MSG_ID);
+}
+
+/**
+ * prepare_mgmt_cmd - prepare the mgmt command
+ * @mgmt_cmd: pointer to the command to prepare
+ * @header: pointer of the header to prepare
+ * @msg: the data of the message
+ * @msg_len: the length of the message
+ **/
+static int prepare_mgmt_cmd(u8 *mgmt_cmd, u64 *header, const void *msg, int msg_len)
+{
+ u8 *mgmt_cmd_new = mgmt_cmd;
+
+ memset(mgmt_cmd_new, 0, MGMT_MSG_RSVD_FOR_DEV);
+
+ mgmt_cmd_new += MGMT_MSG_RSVD_FOR_DEV;
+ memcpy(mgmt_cmd_new, header, sizeof(*header));
+
+ mgmt_cmd_new += sizeof(*header);
+ memcpy(mgmt_cmd_new, msg, (size_t)(u32)msg_len);
+
+ return 0;
+}
+
+/**
+ * send_msg_to_mgmt_sync - send a sync message to the mgmt cpu
+ * @pf_to_mgmt: PF to MGMT channel
+ * @mod: module in the chip that will get the message
+ * @cmd: command of the message
+ * @msg: the msg data
+ * @msg_len: the msg data length
+ * @ack_type: ack type of the message
+ * @direction: the direction of the original message
+ * @resp_msg_id: msg id to respond to
+ * Return: 0 - success, negative - failure
+ **/
+static int send_msg_to_mgmt_sync(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, const void *msg, u16 msg_len,
+ enum hinic3_msg_ack_type ack_type,
+ enum hinic3_msg_direction_type direction,
+ u16 resp_msg_id)
+{
+ void *mgmt_cmd = pf_to_mgmt->sync_msg_buf;
+ struct hinic3_api_cmd_chain *chain = NULL;
+ u8 node_id = HINIC3_MGMT_CPU_NODE_ID(pf_to_mgmt->hwdev);
+ u64 header;
+ u16 cmd_size = mgmt_msg_len(msg_len);
+ int ret;
+
+ if (hinic3_get_chip_present_flag(pf_to_mgmt->hwdev) == 0)
+ return -EFAULT;
+
+ if (cmd_size > HINIC3_MSG_TO_MGMT_MAX_LEN)
+ return -EFAULT;
+
+ if (direction == HINIC3_MSG_RESPONSE)
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, ack_type,
+ direction, cmd, resp_msg_id);
+ else
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, ack_type,
+ direction, cmd, SYNC_MSG_ID_INC(pf_to_mgmt));
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_WRITE_TO_MGMT_CPU];
+
+ if (ack_type == HINIC3_MSG_ACK)
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_START);
+
+ ret = prepare_mgmt_cmd((u8 *)mgmt_cmd, &header, msg, msg_len);
+ if (ret != 0)
+ return ret;
+
+ return hinic3_api_cmd_write(chain, node_id, mgmt_cmd, cmd_size);
+}
+
+/**
+ * send_msg_to_mgmt_async - send async message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @mod: module in the chip that will get the message
+ * @cmd: command of the message
+ * @msg: the data of the message
+ * @msg_len: the length of the message
+ * @direction: the direction of the original message
+ * Return: 0 - success, negative - failure
+ **/
+static int send_msg_to_mgmt_async(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, const void *msg, u16 msg_len,
+ enum hinic3_msg_direction_type direction)
+{
+ void *mgmt_cmd = pf_to_mgmt->async_msg_buf;
+ struct hinic3_api_cmd_chain *chain = NULL;
+ u8 node_id = HINIC3_MGMT_CPU_NODE_ID(pf_to_mgmt->hwdev);
+ u64 header;
+ u16 cmd_size = mgmt_msg_len(msg_len);
+ int ret;
+
+ if (hinic3_get_chip_present_flag(pf_to_mgmt->hwdev) == 0)
+ return -EFAULT;
+
+ if (cmd_size > HINIC3_MSG_TO_MGMT_MAX_LEN)
+ return -EFAULT;
+
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, HINIC3_MSG_NO_ACK,
+ direction, cmd, ASYNC_MSG_ID(pf_to_mgmt));
+
+ ret = prepare_mgmt_cmd((u8 *)mgmt_cmd, &header, msg, msg_len);
+ if (ret != 0)
+ return ret;
+
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU];
+
+ return hinic3_api_cmd_write(chain, node_id, mgmt_cmd, cmd_size);
+}
+
+static inline int msg_to_mgmt_pre(u8 mod, void *buf_in, u16 in_size)
+{
+ struct hinic3_msg_head *msg_head = NULL;
+
+ /* the aeq num is fixed at 3, so the response aeq id must stay below it */
+ if (mod == HINIC3_MOD_COMM || mod == HINIC3_MOD_L2NIC) {
+ if (in_size < sizeof(struct hinic3_msg_head))
+ return -EINVAL;
+
+ msg_head = buf_in;
+
+ if (msg_head->resp_aeq_num >= HINIC3_MAX_AEQS)
+ msg_head->resp_aeq_num = 0;
+ }
+
+ return 0;
+}
+
+int hinic3_pf_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ void *dev = ((struct hinic3_hwdev *)hwdev)->dev_hdl;
+ struct hinic3_recv_msg *recv_msg = NULL;
+ struct completion *recv_done = NULL;
+ ulong timeo;
+ int err;
+ ulong ret;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ if ((buf_in == NULL) || (in_size == 0))
+ return -EINVAL;
+
+ ret = msg_to_mgmt_pre(mod, buf_in, in_size);
+ if (ret != 0)
+ return -EINVAL;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+
+ /* Lock the sync_msg_buf */
+ down(&pf_to_mgmt->sync_msg_lock);
+ recv_msg = &pf_to_mgmt->recv_resp_msg_from_mgmt;
+ recv_done = &recv_msg->recv_done;
+
+ init_completion(recv_done);
+
+ err = send_msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ HINIC3_MSG_ACK, HINIC3_MSG_DIRECT_SEND,
+ MSG_NO_RESP);
+ if (err) {
+ sdk_err(dev, "Failed to send sync msg to mgmt, sync_msg_id: %u\n",
+ pf_to_mgmt->sync_msg_id);
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_FAIL);
+ goto unlock_sync_msg;
+ }
+
+ timeo = msecs_to_jiffies(timeout ? timeout : MGMT_MSG_TIMEOUT);
+
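+ /* Wait for the AEQ receive path to complete recv_done when the matching sync msg id arrives. */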
+ ret = wait_for_completion_timeout(recv_done, timeo);
+ if (!ret) {
+ sdk_err(dev, "Mgmt response sync cmd timeout, sync_msg_id: %u\n",
+ pf_to_mgmt->sync_msg_id);
+ hinic3_dump_aeq_info((struct hinic3_hwdev *)hwdev);
+ err = -ETIMEDOUT;
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_TIMEOUT);
+ goto unlock_sync_msg;
+ }
+
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ if (pf_to_mgmt->event_flag == SEND_EVENT_TIMEOUT) {
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+ err = -ETIMEDOUT;
+ goto unlock_sync_msg;
+ }
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_END);
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag)) {
+ destroy_completion(recv_done);
+ up(&pf_to_mgmt->sync_msg_lock);
+ return -ETIMEDOUT;
+ }
+
+ if (buf_out && out_size) {
+ if (*out_size < recv_msg->msg_len) {
+ sdk_err(dev, "Invalid response message length: %u for mod %d cmd %u from mgmt, should less than: %u\n",
+ recv_msg->msg_len, mod, cmd, *out_size);
+ err = -EFAULT;
+ goto unlock_sync_msg;
+ }
+
+ if (recv_msg->msg_len)
+ memcpy(buf_out, recv_msg->msg, recv_msg->msg_len);
+ *out_size = recv_msg->msg_len;
+ }
+
+unlock_sync_msg:
+ destroy_completion(recv_done);
+ up(&pf_to_mgmt->sync_msg_lock);
+
+ return err;
+}
+
+int hinic3_pf_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ void *dev = ((struct hinic3_hwdev *)hwdev)->dev_hdl;
+ int err;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+
+ /* Lock the async_msg_buf */
+ spin_lock_bh(&pf_to_mgmt->async_msg_lock);
+ ASYNC_MSG_ID_INC(pf_to_mgmt);
+
+ err = send_msg_to_mgmt_async(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ HINIC3_MSG_DIRECT_SEND);
+ spin_unlock_bh(&pf_to_mgmt->async_msg_lock);
+
+ if (err) {
+ sdk_err(dev, "Failed to send async mgmt msg\n");
+ return err;
+ }
+
+ return 0;
+}
+
+/* This function is only used by tx/rx flush */
+int hinic3_pf_to_mgmt_no_ack(void *hwdev, enum hinic3_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ void *dev = NULL;
+ int err = -EINVAL;
+ struct hinic3_hwdev *tmp_hwdev = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ tmp_hwdev = (struct hinic3_hwdev *)hwdev;
+ dev = tmp_hwdev->dev_hdl;
+ pf_to_mgmt = tmp_hwdev->pf_to_mgmt;
+
+ if (in_size > HINIC3_MBOX_DATA_SIZE) {
+ sdk_err(dev, "Mgmt msg buffer size: %u is invalid\n", in_size);
+ return -EINVAL;
+ }
+
+ if (!(tmp_hwdev->chip_present_flag))
+ return -EPERM;
+
+ /* lock the sync_msg_buf */
+ down(&pf_to_mgmt->sync_msg_lock);
+
+ err = send_msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size, HINIC3_MSG_NO_ACK,
+ HINIC3_MSG_DIRECT_SEND, MSG_NO_RESP);
+
+ up(&pf_to_mgmt->sync_msg_lock);
+
+ return err;
+}
+
+int hinic3_pf_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ if (in_size > HINIC3_MSG_TO_MGMT_MAX_LEN)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ return hinic3_pf_to_mgmt_sync(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+}
+
+int hinic3_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ return hinic3_send_mbox_to_mgmt(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_sync);
+
+int hinic3_msg_to_mgmt_no_ack(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ return hinic3_send_mbox_to_mgmt_no_ack(hwdev, mod, cmd, buf_in,
+ in_size, channel);
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_no_ack);
+
+int hinic3_msg_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel)
+{
+ return hinic3_msg_to_mgmt_api_chain_async(hwdev, mod, cmd, buf_in,
+ in_size);
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_async);
+
+int hinic3_msg_to_mgmt_api_chain_sync(void *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev)) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "PF don't support api chain\n");
+ return -EPERM;
+ }
+
+ return hinic3_pf_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+}
+
+int hinic3_msg_to_mgmt_api_chain_async(void *hwdev, u8 mod, u16 cmd,
+ const void *buf_in, u16 in_size)
+{
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ err = -EFAULT;
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "VF don't support async cmd\n");
+ } else if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev)) {
+ err = -EPERM;
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "PF don't support api chain\n");
+ } else {
+ err = hinic3_pf_to_mgmt_async(hwdev, mod, cmd, buf_in, in_size);
+ }
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_api_chain_async);
+
+static void send_mgmt_ack(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size,
+ u16 msg_id)
+{
+ u16 buf_size;
+
+ if (!in_size)
+ buf_size = BUF_OUT_DEFAULT_SIZE;
+ else
+ buf_size = in_size;
+
+ hinic3_response_mbox_to_mgmt(pf_to_mgmt->hwdev, mod, cmd, buf_in,
+ buf_size, msg_id);
+}
+
+static void mgmt_recv_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size,
+ u16 msg_id, int need_resp)
+{
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+ void *buf_out = pf_to_mgmt->mgmt_ack_buf;
+ enum hinic3_mod_type tmp_mod = mod;
+ bool ack_first = false;
+ u16 out_size = 0;
+
+ memset(buf_out, 0, MAX_PF_MGMT_BUF_SIZE);
+
+ if (mod >= HINIC3_MOD_HW_MAX) {
+ sdk_warn(dev, "Receive illegal message from mgmt cpu, mod = %d\n",
+ mod);
+ goto unsupported;
+ }
+
+ set_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+
+ if (!pf_to_mgmt->recv_mgmt_msg_cb[mod] ||
+ !test_bit(HINIC3_MGMT_MSG_CB_REG,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod])) {
+ sdk_warn(dev, "Receive mgmt callback is null, mod = %u, cmd=%u\n", mod, cmd);
+ clear_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+ goto unsupported;
+ }
+
+ pf_to_mgmt->recv_mgmt_msg_cb[tmp_mod](pf_to_mgmt->recv_mgmt_msg_data[tmp_mod],
+ cmd, buf_in, in_size,
+ buf_out, &out_size);
+
+ clear_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+
+ goto resp;
+
+unsupported:
+ out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+
+resp:
+ if (!ack_first && need_resp)
+ send_mgmt_ack(pf_to_mgmt, mod, cmd, buf_out, out_size, msg_id);
+}
+
+/**
+ * mgmt_resp_msg_handler - handler for response message from mgmt cpu
+ * @pf_to_mgmt: PF to MGMT channel
+ * @recv_msg: received message details
+ **/
+static void mgmt_resp_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ struct hinic3_recv_msg *recv_msg)
+{
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+
+ /* async msgs expect no response; drop them */
+ if (recv_msg->msg_id & ASYNC_MSG_FLAG)
+ return;
+
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ if (recv_msg->msg_id == pf_to_mgmt->sync_msg_id &&
+ pf_to_mgmt->event_flag == SEND_EVENT_START) {
+ pf_to_mgmt->event_flag = SEND_EVENT_SUCCESS;
+ complete(&recv_msg->recv_done);
+ } else if (recv_msg->msg_id != pf_to_mgmt->sync_msg_id) {
+ sdk_err(dev, "Send msg id(0x%x) recv msg id(0x%x) dismatch, event state=%d\n",
+ pf_to_mgmt->sync_msg_id, recv_msg->msg_id,
+ pf_to_mgmt->event_flag);
+ } else {
+ sdk_err(dev, "Wait timeout, send msg id(0x%x) recv msg id(0x%x), event state=%d!\n",
+ pf_to_mgmt->sync_msg_id, recv_msg->msg_id,
+ pf_to_mgmt->event_flag);
+ }
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+}
+
+static void recv_mgmt_msg_work_handler(struct work_struct *work)
+{
+ struct hinic3_mgmt_msg_handle_work *mgmt_work =
+ container_of(work, struct hinic3_mgmt_msg_handle_work, work);
+
+ mgmt_recv_msg_handler(mgmt_work->pf_to_mgmt, mgmt_work->mod,
+ mgmt_work->cmd, mgmt_work->msg,
+ mgmt_work->msg_len, mgmt_work->msg_id,
+ !mgmt_work->async_mgmt_to_pf);
+
+ destroy_work(&mgmt_work->work);
+
+ kfree(mgmt_work->msg);
+ kfree(mgmt_work);
+}
+
+static bool check_mgmt_head_info(struct hinic3_recv_msg *recv_msg,
+ u8 seq_id, u8 seg_len, u16 msg_id)
+{
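+ /* Segments must arrive in order with a matching msg_id; seq_id 0 starts a new message. */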
+ if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN ||
+ (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN))
+ return false;
+
+ if (seq_id == 0) {
+ recv_msg->seq_id = seq_id;
+ recv_msg->msg_id = msg_id;
+ } else {
+ if (seq_id != recv_msg->seq_id + 1 || msg_id != recv_msg->msg_id)
+ return false;
+
+ recv_msg->seq_id = seq_id;
+ }
+
+ return true;
+}
+
+static void init_mgmt_msg_work(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ struct hinic3_recv_msg *recv_msg)
+{
+ struct hinic3_mgmt_msg_handle_work *mgmt_work = NULL;
+ struct hinic3_hwdev *hwdev = pf_to_mgmt->hwdev;
+
+ mgmt_work = kzalloc(sizeof(*mgmt_work), GFP_KERNEL);
+ if (!mgmt_work) {
+ sdk_err(hwdev->dev_hdl, "Allocate mgmt work memory failed\n");
+ return;
+ }
+
+ if (recv_msg->msg_len) {
+ mgmt_work->msg = kzalloc(recv_msg->msg_len, GFP_KERNEL);
+ if (!mgmt_work->msg) {
+ sdk_err(hwdev->dev_hdl, "Allocate mgmt msg memory failed\n");
+ kfree(mgmt_work);
+ return;
+ }
+ }
+
+ mgmt_work->pf_to_mgmt = pf_to_mgmt;
+ mgmt_work->msg_len = recv_msg->msg_len;
+ memcpy(mgmt_work->msg, recv_msg->msg, recv_msg->msg_len);
+ mgmt_work->msg_id = recv_msg->msg_id;
+ mgmt_work->mod = recv_msg->mod;
+ mgmt_work->cmd = recv_msg->cmd;
+ mgmt_work->async_mgmt_to_pf = recv_msg->async_mgmt_to_pf;
+
+ INIT_WORK(&mgmt_work->work, recv_mgmt_msg_work_handler);
+ queue_work_on(hisdk3_get_work_cpu_affinity(hwdev, WORK_TYPE_MGMT_MSG),
+ pf_to_mgmt->workq, &mgmt_work->work);
+}
+
+/**
+ * recv_mgmt_msg_handler - handler a message from mgmt cpu
+ * @pf_to_mgmt: PF to MGMT channel
+ * @header: the header of the message
+ * @recv_msg: received message details
+ **/
+static void recv_mgmt_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 *header, struct hinic3_recv_msg *recv_msg)
+{
+ struct hinic3_hwdev *hwdev = pf_to_mgmt->hwdev;
+ u64 mbox_header = *((u64 *)header);
+ void *msg_body = header + sizeof(mbox_header);
+ u8 seq_id, seq_len;
+ u16 msg_id;
+ u32 offset;
+ u64 dir;
+
+ /* Don't need to get anything from hw when cmd is async */
+ dir = HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION);
+ if (dir == HINIC3_MSG_RESPONSE &&
+ (HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID) & ASYNC_MSG_FLAG))
+ return;
+
+ seq_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+ if (!check_mgmt_head_info(recv_msg, seq_id, seq_len, msg_id)) {
+ sdk_err(hwdev->dev_hdl, "Mgmt msg sequence id and segment length check failed\n");
+ sdk_err(hwdev->dev_hdl,
+ "Front seq_id: 0x%x,current seq_id: 0x%x, seg len: 0x%x, front msg_id: %d, cur: %d\n",
+ recv_msg->seq_id, seq_id, seq_len, recv_msg->msg_id, msg_id);
+ /* set seq_id to invalid seq_id */
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+ return;
+ }
+
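+ /* Copy this segment into the reassembly buffer at its sequence offset. */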
+ offset = seq_id * SEGMENT_LEN;
+ memcpy((u8 *)recv_msg->msg + offset, msg_body, seq_len);
+
+ if (!HINIC3_MSG_HEADER_GET(mbox_header, LAST))
+ return;
+
+ recv_msg->cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ recv_msg->mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_msg->async_mgmt_to_pf = HINIC3_MSG_HEADER_GET(mbox_header,
+ NO_ACK);
+ recv_msg->msg_len = HINIC3_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ recv_msg->msg_id = msg_id;
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ if (HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+ HINIC3_MSG_RESPONSE) {
+ mgmt_resp_msg_handler(pf_to_mgmt, recv_msg);
+ return;
+ }
+
+ init_mgmt_msg_work(pf_to_mgmt, recv_msg);
+}
+
+/**
+ * hinic3_mgmt_msg_aeqe_handler - handler for a mgmt message event
+ * @hwdev: the pointer to hw device
+ * @header: the header of the message
+ * @size: message size, forwarded to the mbox aeqe handler
+ **/
+void hinic3_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct hinic3_recv_msg *recv_msg = NULL;
+ bool is_send_dir = false;
+
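+ /* messages originating from the mailbox are dispatched to the mbox AEQE handler */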
+ if ((HINIC3_MSG_HEADER_GET(*(u64 *)header, SOURCE) ==
+ HINIC3_MSG_FROM_MBOX)) {
+ hinic3_mbox_func_aeqe_handler(hwdev, header, size);
+ return;
+ }
+
+ pf_to_mgmt = dev->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return;
+
+ is_send_dir = (HINIC3_MSG_HEADER_GET(*(u64 *)header, DIRECTION) ==
+ HINIC3_MSG_DIRECT_SEND) ? true : false;
+
+ recv_msg = is_send_dir ? &pf_to_mgmt->recv_msg_from_mgmt :
+ &pf_to_mgmt->recv_resp_msg_from_mgmt;
+
+ recv_mgmt_msg_handler(pf_to_mgmt, header, recv_msg);
+}
+
+/**
+ * alloc_recv_msg - allocate received message memory
+ * @recv_msg: pointer that will hold the allocated data
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_recv_msg(struct hinic3_recv_msg *recv_msg)
+{
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ recv_msg->msg = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!recv_msg->msg)
+ return -ENOMEM;
+
+ return 0;
+}
+
+/**
+ * free_recv_msg - free received message memory
+ * @recv_msg: pointer that holds the allocated data
+ **/
+static void free_recv_msg(struct hinic3_recv_msg *recv_msg)
+{
+ kfree(recv_msg->msg);
+ recv_msg->msg = NULL;
+}
+
+/**
+ * alloc_msg_buf - allocate all the message buffers of PF to MGMT channel
+ * @pf_to_mgmt: PF to MGMT channel
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_msg_buf(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ int err;
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate recv msg\n");
+ return err;
+ }
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate resp recv msg\n");
+ goto alloc_msg_for_resp_err;
+ }
+
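+ /* buffers used to build async/sync requests and management acks */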
+ pf_to_mgmt->async_msg_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->async_msg_buf) {
+ err = -ENOMEM;
+ goto async_msg_buf_err;
+ }
+
+ pf_to_mgmt->sync_msg_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->sync_msg_buf) {
+ err = -ENOMEM;
+ goto sync_msg_buf_err;
+ }
+
+ pf_to_mgmt->mgmt_ack_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->mgmt_ack_buf) {
+ err = -ENOMEM;
+ goto ack_msg_buf_err;
+ }
+
+ return 0;
+
+ack_msg_buf_err:
+ kfree(pf_to_mgmt->sync_msg_buf);
+
+sync_msg_buf_err:
+ kfree(pf_to_mgmt->async_msg_buf);
+
+async_msg_buf_err:
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+
+alloc_msg_for_resp_err:
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ return err;
+}
+
+/**
+ * free_msg_buf - free all the message buffers of PF to MGMT channel
+ * @pf_to_mgmt: PF to MGMT channel
+ **/
+static void free_msg_buf(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ kfree(pf_to_mgmt->mgmt_ack_buf);
+ kfree(pf_to_mgmt->sync_msg_buf);
+ kfree(pf_to_mgmt->async_msg_buf);
+
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ pf_to_mgmt->mgmt_ack_buf = NULL;
+ pf_to_mgmt->sync_msg_buf = NULL;
+ pf_to_mgmt->async_msg_buf = NULL;
+}
+
+/**
+ * hinic3_pf_to_mgmt_init - initialize PF to MGMT channel
+ * @hwdev: the pointer to hw device
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+ void *dev = hwdev->dev_hdl;
+ int err;
+
+ pf_to_mgmt = kzalloc(sizeof(*pf_to_mgmt), GFP_KERNEL);
+ if (!pf_to_mgmt)
+ return -ENOMEM;
+
+ hwdev->pf_to_mgmt = pf_to_mgmt;
+ pf_to_mgmt->hwdev = hwdev;
+ spin_lock_init(&pf_to_mgmt->async_msg_lock);
+ spin_lock_init(&pf_to_mgmt->sync_event_lock);
+ sema_init(&pf_to_mgmt->sync_msg_lock, 1);
+ pf_to_mgmt->workq = create_singlethread_workqueue(HINIC3_MGMT_WQ_NAME);
+ if (!pf_to_mgmt->workq) {
+ sdk_err(dev, "Failed to initialize MGMT workqueue\n");
+ err = -ENOMEM;
+ goto create_mgmt_workq_err;
+ }
+
+ err = alloc_msg_buf(pf_to_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate msg buffers\n");
+ goto alloc_msg_buf_err;
+ }
+
+ err = hinic3_api_cmd_init(hwdev, pf_to_mgmt->cmd_chain);
+ if (err) {
+ sdk_err(dev, "Failed to init the api cmd chains\n");
+ goto api_cmd_init_err;
+ }
+
+ return 0;
+
+api_cmd_init_err:
+ free_msg_buf(pf_to_mgmt);
+
+alloc_msg_buf_err:
+ destroy_workqueue(pf_to_mgmt->workq);
+
+create_mgmt_workq_err:
+ spin_lock_deinit(&pf_to_mgmt->sync_event_lock);
+ spin_lock_deinit(&pf_to_mgmt->async_msg_lock);
+ sema_deinit(&pf_to_mgmt->sync_msg_lock);
+ kfree(pf_to_mgmt);
+
+ return err;
+}
+
+/**
+ * hinic3_pf_to_mgmt_free - free PF to MGMT channel
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = hwdev->pf_to_mgmt;
+
+ /* destroy the workqueue before freeing the related pf_to_mgmt resources
+ * to avoid illegal resource access
+ */
+ destroy_workqueue(pf_to_mgmt->workq);
+ hinic3_api_cmd_free(hwdev, pf_to_mgmt->cmd_chain);
+
+ free_msg_buf(pf_to_mgmt);
+ spin_lock_deinit(&pf_to_mgmt->sync_event_lock);
+ spin_lock_deinit(&pf_to_mgmt->async_msg_lock);
+ sema_deinit(&pf_to_mgmt->sync_msg_lock);
+ kfree(pf_to_mgmt);
+}
+
+void hinic3_flush_mgmt_workq(void *hwdev)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ flush_workqueue(dev->aeqs->workq);
+
+ if (hinic3_func_type(dev) != TYPE_VF)
+ flush_workqueue(dev->pf_to_mgmt->workq);
+}
+
+int hinic3_api_cmd_read_ack(void *hwdev, u8 dest, const void *cmd,
+ u16 size, void *ack, u16 ack_size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct hinic3_api_cmd_chain *chain = NULL;
+
+ if (!hwdev || !cmd || (ack_size && !ack) || size > MAX_PF_MGMT_BUF_SIZE)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_POLL_READ];
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -EPERM;
+
+ return hinic3_api_cmd_read(chain, dest, cmd, size, ack, ack_size);
+}
+
+/**
+ * api cmd write or read bypass default use poll, if want to use aeq interrupt,
+ * please set wb_trigger_aeqe to 1
+ **/
+int hinic3_api_cmd_write_nack(void *hwdev, u8 dest, const void *cmd, u16 size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct hinic3_api_cmd_chain *chain = NULL;
+
+ if (!hwdev || !size || !cmd || size > MAX_PF_MGMT_BUF_SIZE)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_POLL_WRITE];
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -EPERM;
+
+ return hinic3_api_cmd_write(chain, dest, cmd, size);
+}
+
+static int get_clp_reg(void *hwdev, enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 *reg_addr)
+{
+ switch (reg_type) {
+ case HINIC3_CLP_BA_HOST:
+ *reg_addr = (data_type == HINIC3_CLP_REQ_HOST) ?
+ HINIC3_CLP_REG(REQBASE) :
+ HINIC3_CLP_REG(RSPBASE);
+ break;
+
+ case HINIC3_CLP_SIZE_HOST:
+ *reg_addr = HINIC3_CLP_REG(SIZE);
+ break;
+
+ case HINIC3_CLP_LEN_HOST:
+ *reg_addr = (data_type == HINIC3_CLP_REQ_HOST) ?
+ HINIC3_CLP_REG(REQ) : HINIC3_CLP_REG(RSP);
+ break;
+
+ case HINIC3_CLP_START_REQ_HOST:
+ *reg_addr = HINIC3_CLP_REG(REQ);
+ break;
+
+ case HINIC3_CLP_READY_RSP_HOST:
+ *reg_addr = HINIC3_CLP_REG(RSP);
+ break;
+
+ default:
+ *reg_addr = 0;
+ break;
+ }
+ if (*reg_addr == 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static inline int clp_param_valid(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type)
+{
+ if (data_type == HINIC3_CLP_REQ_HOST &&
+ reg_type == HINIC3_CLP_READY_RSP_HOST)
+ return -EINVAL;
+
+ if (data_type == HINIC3_CLP_RSP_HOST &&
+ reg_type == HINIC3_CLP_START_REQ_HOST)
+ return -EINVAL;
+
+ return 0;
+}
+
+static u32 get_clp_reg_value(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 reg_addr)
+{
+ u32 value;
+
+ value = hinic3_hwif_read_reg(hwdev->hwif, reg_addr);
+
+ switch (reg_type) {
+ case HINIC3_CLP_BA_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(BASE)) &
+ HINIC3_CLP_MASK(BASE));
+ break;
+
+ case HINIC3_CLP_SIZE_HOST:
+ if (data_type == HINIC3_CLP_REQ_HOST)
+ value = ((value >> HINIC3_CLP_OFFSET(REQ_SIZE)) &
+ HINIC3_CLP_MASK(SIZE));
+ else
+ value = ((value >> HINIC3_CLP_OFFSET(RSP_SIZE)) &
+ HINIC3_CLP_MASK(SIZE));
+ break;
+
+ case HINIC3_CLP_LEN_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(LEN)) &
+ HINIC3_CLP_MASK(LEN));
+ break;
+
+ case HINIC3_CLP_START_REQ_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(START)) &
+ HINIC3_CLP_MASK(START));
+ break;
+
+ case HINIC3_CLP_READY_RSP_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(READY)) &
+ HINIC3_CLP_MASK(READY));
+ break;
+
+ default:
+ break;
+ }
+
+ return value;
+}
+
+static int hinic3_read_clp_reg(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 *read_value)
+{
+ u32 reg_addr;
+ int err;
+
+ err = clp_param_valid(hwdev, data_type, reg_type);
+ if (err)
+ return err;
+
+ err = get_clp_reg(hwdev, data_type, reg_type, ®_addr);
+ if (err)
+ return err;
+
+ *read_value = get_clp_reg_value(hwdev, data_type, reg_type, reg_addr);
+
+ return 0;
+}
+
+static int check_data_type(enum clp_data_type data_type,
+ enum clp_reg_type reg_type)
+{
+ if (data_type == HINIC3_CLP_REQ_HOST &&
+ reg_type == HINIC3_CLP_READY_RSP_HOST)
+ return -EINVAL;
+ if (data_type == HINIC3_CLP_RSP_HOST &&
+ reg_type == HINIC3_CLP_START_REQ_HOST)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int check_reg_value(enum clp_reg_type reg_type, u32 value)
+{
+ if (reg_type == HINIC3_CLP_BA_HOST &&
+ value > HINIC3_CLP_SRAM_BASE_REG_MAX)
+ return -EINVAL;
+
+ if (reg_type == HINIC3_CLP_SIZE_HOST &&
+ value > HINIC3_CLP_SRAM_SIZE_REG_MAX)
+ return -EINVAL;
+
+ if (reg_type == HINIC3_CLP_LEN_HOST &&
+ value > HINIC3_CLP_LEN_REG_MAX)
+ return -EINVAL;
+
+ if ((reg_type == HINIC3_CLP_START_REQ_HOST ||
+ reg_type == HINIC3_CLP_READY_RSP_HOST) &&
+ value > HINIC3_CLP_START_OR_READY_REG_MAX)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int hinic3_check_clp_init_status(struct hinic3_hwdev *hwdev)
+{
+ int err;
+ u32 reg_value = 0;
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_BA_HOST, ®_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong req ba value: 0x%x\n",
+ reg_value);
+ return -EINVAL;
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_BA_HOST, ®_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong rsp ba value: 0x%x\n",
+ reg_value);
+ return -EINVAL;
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_SIZE_HOST, ®_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong req size\n");
+ return -EINVAL;
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_SIZE_HOST, ®_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong rsp size\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void hinic3_write_clp_reg(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 value)
+{
+ u32 reg_addr, reg_value;
+
+ if (check_data_type(data_type, reg_type))
+ return;
+
+ if (check_reg_value(reg_type, value))
+ return;
+
+ if (get_clp_reg(hwdev, data_type, reg_type, ®_addr))
+ return;
+
+ reg_value = hinic3_hwif_read_reg(hwdev->hwif, reg_addr);
+
+ switch (reg_type) {
+ case HINIC3_CLP_LEN_HOST:
+ reg_value = reg_value &
+ (~(HINIC3_CLP_MASK(LEN) << HINIC3_CLP_OFFSET(LEN)));
+ reg_value = reg_value | (value << HINIC3_CLP_OFFSET(LEN));
+ break;
+
+ case HINIC3_CLP_START_REQ_HOST:
+ reg_value = reg_value &
+ (~(HINIC3_CLP_MASK(START) <<
+ HINIC3_CLP_OFFSET(START)));
+ reg_value = reg_value | (value << HINIC3_CLP_OFFSET(START));
+ break;
+
+ case HINIC3_CLP_READY_RSP_HOST:
+ reg_value = reg_value &
+ (~(HINIC3_CLP_MASK(READY) <<
+ HINIC3_CLP_OFFSET(READY)));
+ reg_value = reg_value | (value << HINIC3_CLP_OFFSET(READY));
+ break;
+
+ default:
+ return;
+ }
+
+ hinic3_hwif_write_reg(hwdev->hwif, reg_addr, reg_value);
+}
+
+static int hinic3_read_clp_data(struct hinic3_hwdev *hwdev,
+ void *buf_out, u16 *out_size)
+{
+ int err;
+ u32 reg = HINIC3_CLP_DATA(RSP);
+ u32 ready, delay_cnt;
+ u32 *ptr = (u32 *)buf_out;
+ u32 temp_out_size = 0;
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, &ready);
+ if (err)
+ return err;
+
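+ /* poll the ready bit until the response is available or we time out */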
+ delay_cnt = 0;
+ while (ready == 0) {
+ usleep_range(9000, 10000); /* sleep 9000 us ~ 10000 us */
+ delay_cnt++;
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, &ready);
+ if (err || delay_cnt > HINIC3_CLP_DELAY_CNT_MAX) {
+ sdk_err(hwdev->dev_hdl, "Timeout with delay_cnt: %u\n",
+ delay_cnt);
+ return -EINVAL;
+ }
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_LEN_HOST, &temp_out_size);
+ if (err)
+ return err;
+
+ if (temp_out_size > HINIC3_CLP_SRAM_SIZE_REG_MAX || !temp_out_size) {
+ sdk_err(hwdev->dev_hdl, "Invalid temp_out_size: %u\n",
+ temp_out_size);
+ return -EINVAL;
+ }
+
+ *out_size = (u16)temp_out_size;
+ for (; temp_out_size > 0; temp_out_size--) {
+ *ptr = hinic3_hwif_read_reg(hwdev->hwif, reg);
+ ptr++;
+ /* read 4 bytes every time */
+ reg = reg + 4;
+ }
+
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, (u32)0x0);
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_RSP_HOST, HINIC3_CLP_LEN_HOST,
+ (u32)0x0);
+
+ return 0;
+}
+
+static int hinic3_write_clp_data(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size)
+{
+ int err;
+ u32 reg = HINIC3_CLP_DATA(REQ);
+ u32 start = 1;
+ u32 delay_cnt = 0;
+ u32 *ptr = (u32 *)buf_in;
+ u16 size_in = in_size;
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_START_REQ_HOST, &start);
+ if (err != 0)
+ return err;
+
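+ /* wait for the previous request to complete (start bit cleared) */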
+ while (start == 1) {
+ usleep_range(9000, 10000); /* sleep 9000 us ~ 10000 us */
+ delay_cnt++;
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_START_REQ_HOST, &start);
+ if (err || delay_cnt > HINIC3_CLP_DELAY_CNT_MAX)
+ return -EINVAL;
+ }
+
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_LEN_HOST, size_in);
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_START_REQ_HOST, (u32)0x1);
+
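+ /* write the request payload one 32-bit word at a time */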
+ for (; size_in > 0; size_in--) {
+ hinic3_hwif_write_reg(hwdev->hwif, reg, *ptr);
+ ptr++;
+ reg = reg + sizeof(u32);
+ }
+
+ return 0;
+}
+
+static void hinic3_clear_clp_data(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type)
+{
+ u32 reg = (data_type == HINIC3_CLP_REQ_HOST) ?
+ HINIC3_CLP_DATA(REQ) : HINIC3_CLP_DATA(RSP);
+ u32 count = HINIC3_CLP_INPUT_BUF_LEN_HOST / HINIC3_CLP_DATA_UNIT_HOST;
+
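+ /* zero the whole CLP data area one 32-bit word at a time */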
+ for (; count > 0; count--) {
+ hinic3_hwif_write_reg(hwdev->hwif, reg, 0x0);
+ reg = reg + sizeof(u32);
+ }
+}
+
+int hinic3_pf_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt = NULL;
+ struct hinic3_hwdev *dev = hwdev;
+ u64 header;
+ u16 real_size;
+ u8 *clp_msg_buf = NULL;
+ int err;
+
+ if (!COMM_SUPPORT_CLP(dev))
+ return -EPERM;
+
+ clp_pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->clp_pf_to_mgmt;
+ if (!clp_pf_to_mgmt)
+ return -EPERM;
+
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+
+ /* 4 bytes alignment */
+ if (in_size % HINIC3_CLP_DATA_UNIT_HOST)
+ real_size = (in_size + (u16)sizeof(header) +
+ HINIC3_CLP_DATA_UNIT_HOST);
+ else
+ real_size = in_size + (u16)sizeof(header);
+ real_size = real_size / HINIC3_CLP_DATA_UNIT_HOST;
+
+ if (real_size >
+ (HINIC3_CLP_INPUT_BUF_LEN_HOST / HINIC3_CLP_DATA_UNIT_HOST)) {
+ sdk_err(dev->dev_hdl, "Invalid real_size: %u\n", real_size);
+ return -EINVAL;
+ }
+ down(&clp_pf_to_mgmt->clp_msg_lock);
+
+ err = hinic3_check_clp_init_status(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Check clp init status failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return err;
+ }
+
+ hinic3_clear_clp_data(dev, HINIC3_CLP_RSP_HOST);
+ hinic3_write_clp_reg(dev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, 0x0);
+
+ /* Send request */
+ memset(clp_msg_buf, 0x0, HINIC3_CLP_INPUT_BUF_LEN_HOST);
+ clp_prepare_header(dev, &header, in_size, mod, 0, 0, cmd, 0);
+
+ memcpy(clp_msg_buf, &header, sizeof(header));
+
+ clp_msg_buf += sizeof(header);
+ memcpy(clp_msg_buf, buf_in, in_size);
+
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+
+ hinic3_clear_clp_data(dev, HINIC3_CLP_REQ_HOST);
+ err = hinic3_write_clp_data(hwdev,
+ clp_pf_to_mgmt->clp_msg_buf, real_size);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Send clp request failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ /* Get response */
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+ memset(clp_msg_buf, 0x0, HINIC3_CLP_INPUT_BUF_LEN_HOST);
+ err = hinic3_read_clp_data(hwdev, clp_msg_buf, &real_size);
+ hinic3_clear_clp_data(dev, HINIC3_CLP_RSP_HOST);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Read clp response failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ real_size = (u16)((real_size * HINIC3_CLP_DATA_UNIT_HOST) & 0xffff);
+ if (real_size <= sizeof(header) || real_size > HINIC3_CLP_INPUT_BUF_LEN_HOST) {
+ sdk_err(dev->dev_hdl, "Invalid response size: %u", real_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+ real_size = real_size - sizeof(header);
+ if (real_size != *out_size) {
+ sdk_err(dev->dev_hdl, "Invalid real_size:%u, out_size: %u\n",
+ real_size, *out_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ memcpy(buf_out, (clp_msg_buf + sizeof(header)), real_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+
+ return 0;
+}
+
+int hinic3_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ if (!dev->chip_present_flag)
+ return -EPERM;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_CLP(dev))
+ return -EPERM;
+
+ err = hinic3_pf_clp_to_mgmt(dev, mod, cmd, buf_in, in_size, buf_out,
+ out_size);
+
+ return err;
+}
+
+int hinic3_clp_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt = NULL;
+
+ if (!COMM_SUPPORT_CLP(hwdev))
+ return 0;
+
+ clp_pf_to_mgmt = kzalloc(sizeof(*clp_pf_to_mgmt), GFP_KERNEL);
+ if (!clp_pf_to_mgmt)
+ return -ENOMEM;
+
+ clp_pf_to_mgmt->clp_msg_buf = kzalloc(HINIC3_CLP_INPUT_BUF_LEN_HOST,
+ GFP_KERNEL);
+ if (!clp_pf_to_mgmt->clp_msg_buf) {
+ kfree(clp_pf_to_mgmt);
+ return -ENOMEM;
+ }
+ sema_init(&clp_pf_to_mgmt->clp_msg_lock, 1);
+
+ hwdev->clp_pf_to_mgmt = clp_pf_to_mgmt;
+
+ return 0;
+}
+
+void hinic3_clp_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt = hwdev->clp_pf_to_mgmt;
+
+ if (!COMM_SUPPORT_CLP(hwdev))
+ return;
+
+ sema_deinit(&clp_pf_to_mgmt->clp_msg_lock);
+ kfree(clp_pf_to_mgmt->clp_msg_buf);
+ kfree(clp_pf_to_mgmt);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h
new file mode 100644
index 0000000..48970e3
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MGMT_H
+#define HINIC3_MGMT_H
+
+#include <linux/types.h>
+#include <linux/completion.h>
+#include <linux/semaphore.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+
+#include "mpu_cmd_base_defs.h"
+#include "hinic3_hw.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_MGMT_WQ_NAME "hinic3_mgmt"
+
+#define HINIC3_CLP_REG_GAP 0x20
+#define HINIC3_CLP_INPUT_BUF_LEN_HOST 4096UL
+#define HINIC3_CLP_DATA_UNIT_HOST 4UL
+
+enum clp_data_type {
+ HINIC3_CLP_REQ_HOST = 0,
+ HINIC3_CLP_RSP_HOST = 1
+};
+
+enum clp_reg_type {
+ HINIC3_CLP_BA_HOST = 0,
+ HINIC3_CLP_SIZE_HOST = 1,
+ HINIC3_CLP_LEN_HOST = 2,
+ HINIC3_CLP_START_REQ_HOST = 3,
+ HINIC3_CLP_READY_RSP_HOST = 4
+};
+
+#define HINIC3_CLP_REQ_SIZE_OFFSET 0
+#define HINIC3_CLP_RSP_SIZE_OFFSET 16
+#define HINIC3_CLP_BASE_OFFSET 0
+#define HINIC3_CLP_LEN_OFFSET 0
+#define HINIC3_CLP_START_OFFSET 31
+#define HINIC3_CLP_READY_OFFSET 31
+#define HINIC3_CLP_OFFSET(member) (HINIC3_CLP_##member##_OFFSET)
+
+#define HINIC3_CLP_SIZE_MASK 0x7ffUL
+#define HINIC3_CLP_BASE_MASK 0x7ffffffUL
+#define HINIC3_CLP_LEN_MASK 0x7ffUL
+#define HINIC3_CLP_START_MASK 0x1UL
+#define HINIC3_CLP_READY_MASK 0x1UL
+#define HINIC3_CLP_MASK(member) (HINIC3_CLP_##member##_MASK)
+
+#define HINIC3_CLP_DELAY_CNT_MAX 200UL
+#define HINIC3_CLP_SRAM_SIZE_REG_MAX 0x3ff
+#define HINIC3_CLP_SRAM_BASE_REG_MAX 0x7ffffff
+#define HINIC3_CLP_LEN_REG_MAX 0x3ff
+#define HINIC3_CLP_START_OR_READY_REG_MAX 0x1
+
+struct hinic3_recv_msg {
+ void *msg;
+
+ u16 msg_len;
+ u16 rsvd1;
+ enum hinic3_mod_type mod;
+
+ u16 cmd;
+ u8 seq_id;
+ u8 rsvd2;
+ u16 msg_id;
+ u16 rsvd3;
+
+ int async_mgmt_to_pf;
+ u32 rsvd4;
+
+ struct completion recv_done;
+};
+
+struct hinic3_msg_head {
+ u8 status;
+ u8 version;
+ u8 resp_aeq_num;
+ u8 rsvd0[5];
+};
+
+enum comm_pf_to_mgmt_event_state {
+ SEND_EVENT_UNINIT = 0,
+ SEND_EVENT_START,
+ SEND_EVENT_SUCCESS,
+ SEND_EVENT_FAIL,
+ SEND_EVENT_TIMEOUT,
+ SEND_EVENT_END,
+};
+
+enum hinic3_mgmt_msg_cb_state {
+ HINIC3_MGMT_MSG_CB_REG = 0,
+ HINIC3_MGMT_MSG_CB_RUNNING,
+};
+
+struct hinic3_clp_pf_to_mgmt {
+ struct semaphore clp_msg_lock;
+ void *clp_msg_buf;
+};
+
+struct hinic3_msg_pf_to_mgmt {
+ struct hinic3_hwdev *hwdev;
+
+ /* Async cmds cannot be scheduled */
+ spinlock_t async_msg_lock;
+ struct semaphore sync_msg_lock;
+
+ struct workqueue_struct *workq;
+
+ void *async_msg_buf;
+ void *sync_msg_buf;
+ void *mgmt_ack_buf;
+
+ struct hinic3_recv_msg recv_msg_from_mgmt;
+ struct hinic3_recv_msg recv_resp_msg_from_mgmt;
+
+ u16 async_msg_id;
+ u16 sync_msg_id;
+ u32 rsvd1;
+ struct hinic3_api_cmd_chain *cmd_chain[HINIC3_API_CMD_MAX];
+
+ hinic3_mgmt_msg_cb recv_mgmt_msg_cb[HINIC3_MOD_HW_MAX];
+ void *recv_mgmt_msg_data[HINIC3_MOD_HW_MAX];
+ unsigned long mgmt_msg_cb_state[HINIC3_MOD_HW_MAX];
+
+ void *async_msg_cb_data[HINIC3_MOD_HW_MAX];
+
+ /* lock when sending msg */
+ spinlock_t sync_event_lock;
+ enum comm_pf_to_mgmt_event_state event_flag;
+ u64 rsvd2;
+};
+
+struct hinic3_mgmt_msg_handle_work {
+ struct work_struct work;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+
+ void *msg;
+ u16 msg_len;
+ u16 rsvd1;
+
+ enum hinic3_mod_type mod;
+ u16 cmd;
+ u16 msg_id;
+
+ int async_mgmt_to_pf;
+};
+
+void hinic3_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size);
+
+int hinic3_pf_to_mgmt_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_pf_to_mgmt_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_pf_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout);
+int hinic3_pf_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size);
+
+int hinic3_pf_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout);
+
+int hinic3_pf_to_mgmt_no_ack(void *hwdev, enum hinic3_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size);
+
+int hinic3_api_cmd_read_ack(void *hwdev, u8 dest, const void *cmd, u16 size,
+ void *ack, u16 ack_size);
+
+int hinic3_api_cmd_write_nack(void *hwdev, u8 dest, const void *cmd, u16 size);
+
+int hinic3_pf_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+
+int hinic3_clp_pf_to_mgmt_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_clp_pf_to_mgmt_free(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.c
new file mode 100644
index 0000000..b619800
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.c
@@ -0,0 +1,1259 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/semaphore.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/completion.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_mbox.h"
+#include "hinic3_multi_host_mgmt.h"
+#include "hinic3_hw_cfg.h"
+
+#define HINIC3_SUPPORT_MAX_PF_NUM 32
+#define HINIC3_MBOX_PF_BUSY_ACTIVE_FW 0x2
+
+void set_master_host_mbox_enable(struct hinic3_hwdev *hwdev, bool enable)
+{
+ u32 reg_val;
+
+ if (!IS_MASTER_HOST(hwdev) || HINIC3_FUNC_TYPE(hwdev) != TYPE_PPF)
+ return;
+
+ reg_val = hinic3_hwif_read_reg(hwdev->hwif, HINIC3_MULT_HOST_MASTER_MBOX_STATUS_ADDR);
+ reg_val = MULTI_HOST_REG_CLEAR(reg_val, MASTER_MBX_STS);
+ reg_val |= MULTI_HOST_REG_SET((u8)enable, MASTER_MBX_STS);
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_MULT_HOST_MASTER_MBOX_STATUS_ADDR, reg_val);
+
+ sdk_info(hwdev->dev_hdl, "Multi-host status: %d, reg value: 0x%x\n",
+ enable, reg_val);
+}
+
+static bool hinic3_get_master_host_mbox_enable(void *hwdev)
+{
+ u32 reg_val;
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_SLAVE_HOST(dev) || HINIC3_FUNC_TYPE(dev) == TYPE_VF)
+ return true;
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_HOST_MASTER_MBOX_STATUS_ADDR);
+
+ return !!MULTI_HOST_REG_GET(reg_val, MASTER_MBX_STS);
+}
+
+bool hinic3_is_multi_bm(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ return IS_BMGW_SLAVE_HOST(hw_dev) || IS_BMGW_MASTER_HOST(hw_dev);
+}
+EXPORT_SYMBOL(hinic3_is_multi_bm);
+
+bool hinic3_is_slave_host(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("hwdev is null\n");
+ return false;
+ }
+
+ return IS_BMGW_SLAVE_HOST(hw_dev) || IS_VM_SLAVE_HOST(hw_dev);
+}
+EXPORT_SYMBOL(hinic3_is_slave_host);
+
+bool hinic3_is_vm_slave_host(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("hwdev is null\n");
+ return false;
+ }
+
+ return IS_VM_SLAVE_HOST(hw_dev);
+}
+EXPORT_SYMBOL(hinic3_is_vm_slave_host);
+
+bool hinic3_is_bm_slave_host(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("hwdev is null\n");
+ return false;
+ }
+
+ return IS_BMGW_SLAVE_HOST(hw_dev);
+}
+EXPORT_SYMBOL(hinic3_is_bm_slave_host);
+
+static int __send_mbox_to_host(struct hinic3_hwdev *mbox_hwdev,
+ struct hinic3_hwdev *hwdev,
+ enum hinic3_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout,
+ enum hinic3_mbox_ack_type ack_type, u16 channel)
+{
+ u8 dst_host_func_idx;
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+
+ if (!mbox_hwdev->chip_present_flag)
+ return -EPERM;
+
+ if (!hinic3_get_master_host_mbox_enable(hwdev)) {
+ sdk_err(hwdev->dev_hdl, "Master host not initialized\n");
+ return -EFAULT;
+ }
+
+ if (!mbox_hwdev->mhost_mgmt) {
+ /* send to master host in default */
+ dst_host_func_idx = hinic3_host_ppf_idx(hwdev, cap->master_host_id);
+ } else {
+ dst_host_func_idx = IS_MASTER_HOST(hwdev) ?
+ mbox_hwdev->mhost_mgmt->shost_ppf_idx :
+ mbox_hwdev->mhost_mgmt->mhost_ppf_idx;
+ }
+
+ if (ack_type == MBOX_ACK)
+ return hinic3_mbox_to_host(mbox_hwdev, dst_host_func_idx,
+ mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+ else
+ return hinic3_mbox_to_func_no_ack(mbox_hwdev, dst_host_func_idx,
+ mod, cmd, buf_in, in_size, channel);
+}
+
+static int __mbox_to_host(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod,
+ u8 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout,
+ enum hinic3_mbox_ack_type ack_type, u16 channel)
+{
+ struct hinic3_hwdev *mbox_hwdev = hwdev;
+ int err;
+
+ if (!IS_MULTI_HOST(hwdev) || HINIC3_IS_VF(hwdev))
+ return -EPERM;
+
+ if (hinic3_func_type(hwdev) == TYPE_PF) {
+ down(&hwdev->ppf_sem);
+ mbox_hwdev = hwdev->ppf_hwdev;
+ if (!mbox_hwdev) {
+ err = -EINVAL;
+ goto release_lock;
+ }
+
+ if (!test_bit(HINIC3_HWDEV_MBOX_INITED, &mbox_hwdev->func_state)) {
+ err = -EPERM;
+ goto release_lock;
+ }
+ }
+
+ err = __send_mbox_to_host(mbox_hwdev, hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout, ack_type, channel);
+
+release_lock:
+ if (hinic3_func_type(hwdev) == TYPE_PF)
+ up(&hwdev->ppf_sem);
+
+ return err;
+}
+
+int hinic3_mbox_to_host_sync(void *hwdev, enum hinic3_mod_type mod,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout, u16 channel)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return __mbox_to_host((struct hinic3_hwdev *)hwdev, mod, cmd, buf_in,
+ in_size, buf_out, out_size, timeout, MBOX_ACK, channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_to_host_sync);
+
+int hinic3_mbox_to_host_no_ack(struct hinic3_hwdev *hwdev,
+ enum hinic3_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size, u16 channel)
+{
+ return __mbox_to_host(hwdev, mod, cmd, buf_in, in_size, NULL, NULL,
+ 0, MBOX_NO_ACK, channel);
+}
+
+static int __get_func_nic_state_from_pf(struct hinic3_hwdev *hwdev,
+ u16 glb_func_idx, u8 *en);
+static int __get_func_vroce_state_from_pf(struct hinic3_hwdev *hwdev,
+ u16 glb_func_idx, u8 *en);
+
+int sw_func_pf_mbox_handler(void *pri_handle, u16 vf_id, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = pri_handle;
+ struct hinic3_slave_func_nic_state *nic_state = NULL;
+ struct hinic3_slave_func_nic_state *out_state = NULL;
+ int err;
+
+ switch (cmd) {
+ case HINIC3_SW_CMD_GET_SLAVE_FUNC_NIC_STATE:
+ nic_state = buf_in;
+ out_state = buf_out;
+ *out_size = sizeof(*nic_state);
+
+ /* find nic state in PPF func_nic_en bitmap */
+ err = __get_func_nic_state_from_pf(hwdev, nic_state->func_idx,
+ &out_state->enable);
+ out_state->status = err ? 1 : 0;
+
+ break;
+ case HINIC3_SW_CMD_GET_SLAVE_FUNC_VROCE_STATE:
+ nic_state = buf_in;
+ out_state = buf_out;
+ *out_size = sizeof(*nic_state);
+
+ err = __get_func_vroce_state_from_pf(hwdev, nic_state->func_idx,
+ &out_state->enable);
+ out_state->status = err ? 1 : 0;
+
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int __master_host_sw_func_handler(struct hinic3_hwdev *hwdev, u16 pf_idx,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = hwdev->mhost_mgmt;
+ struct register_slave_host *out_shost = NULL;
+ struct register_slave_host *slave_host = NULL;
+ u64 *vroce_en = NULL;
+ int err = 0;
+
+ if (!mhost_mgmt)
+ return -ENXIO;
+
+ switch (cmd) {
+ case HINIC3_SW_CMD_SLAVE_HOST_PPF_REGISTER:
+ slave_host = buf_in;
+ out_shost = buf_out;
+ *out_size = sizeof(*slave_host);
+ vroce_en = out_shost->funcs_vroce_en;
+
+ /* just get information about function nic enable */
+ if (slave_host->get_nic_en) {
+ bitmap_copy((ulong *)out_shost->funcs_nic_en,
+ mhost_mgmt->func_nic_en,
+ HINIC3_MAX_MGMT_FUNCTIONS);
+
+ if (IS_MASTER_HOST(hwdev))
+ bitmap_copy((ulong *)vroce_en,
+ mhost_mgmt->func_vroce_en,
+ HINIC3_MAX_MGMT_FUNCTIONS);
+ out_shost->status = 0;
+ break;
+ }
+
+ mhost_mgmt->shost_registered = true;
+ mhost_mgmt->shost_host_idx = slave_host->host_id;
+ mhost_mgmt->shost_ppf_idx = slave_host->ppf_idx;
+
+ bitmap_copy((ulong *)out_shost->funcs_nic_en,
+ mhost_mgmt->func_nic_en, HINIC3_MAX_MGMT_FUNCTIONS);
+
+ if (IS_MASTER_HOST(hwdev))
+ bitmap_copy((ulong *)vroce_en,
+ mhost_mgmt->func_vroce_en,
+ HINIC3_MAX_MGMT_FUNCTIONS);
+
+ sdk_info(hwdev->dev_hdl, "Slave host registers PPF, host_id: %u, ppf_idx: %u\n",
+ slave_host->host_id, slave_host->ppf_idx);
+
+ out_shost->status = 0;
+ break;
+ case HINIC3_SW_CMD_SLAVE_HOST_PPF_UNREGISTER:
+ slave_host = buf_in;
+ mhost_mgmt->shost_registered = false;
+ sdk_info(hwdev->dev_hdl, "Slave host unregisters PPF, host_id: %u, ppf_idx: %u\n",
+ slave_host->host_id, slave_host->ppf_idx);
+
+ *out_size = sizeof(*slave_host);
+ ((struct register_slave_host *)buf_out)->status = 0;
+ break;
+
+ default:
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+static int __event_func_service_state_handler(struct hinic3_hwdev *hwdev,
+ u8 sub_cmd, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_event_info event_info = {0};
+ struct hinic3_mhost_nic_func_state state = {0};
+ struct hinic3_slave_func_nic_state *out_state = NULL;
+ struct hinic3_slave_func_nic_state *in_state = buf_in;
+
+ if (!hwdev->event_callback)
+ return 0;
+
+ event_info.type = EVENT_COMM_MULTI_HOST_MGMT;
+ ((struct hinic3_multi_host_mgmt_event *)(void *)event_info.event_data)->sub_cmd = sub_cmd;
+ ((struct hinic3_multi_host_mgmt_event *)(void *)event_info.event_data)->data = &state;
+
+ state.func_idx = in_state->func_idx;
+ state.enable = in_state->enable;
+
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+
+ *out_size = sizeof(*out_state);
+ out_state = buf_out;
+ out_state->status = state.status;
+ if (sub_cmd == HINIC3_MHOST_GET_VROCE_STATE)
+ out_state->opened = state.enable;
+
+ return state.status;
+}
+
+static int __event_set_func_nic_state(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return __event_func_service_state_handler(hwdev,
+ HINIC3_MHOST_NIC_STATE_CHANGE,
+ buf_in, in_size,
+ buf_out, out_size);
+}
+
+static int __event_set_func_vroce_state(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return __event_func_service_state_handler(hwdev,
+ HINIC3_MHOST_VROCE_STATE_CHANGE,
+ buf_in, in_size,
+ buf_out, out_size);
+}
+
+static int __event_get_func_vroce_state(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return __event_func_service_state_handler(hwdev,
+ HINIC3_MHOST_GET_VROCE_STATE,
+ buf_in, in_size,
+ buf_out, out_size);
+}
+
+int vf_sw_func_handler(void *hwdev, u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ int err = 0;
+
+ switch (cmd) {
+ case HINIC3_SW_CMD_SET_SLAVE_FUNC_VROCE_STATE:
+ err = __event_set_func_vroce_state(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_GET_SLAVE_VROCE_DEVICE_STATE:
+ err = __event_get_func_vroce_state(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+static int multi_host_event_handler(struct hinic3_hwdev *hwdev,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ int err;
+
+ switch (cmd) {
+ case HINIC3_SW_CMD_SET_SLAVE_FUNC_VROCE_STATE:
+ err = __event_set_func_vroce_state(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_SET_SLAVE_FUNC_NIC_STATE:
+ err = __event_set_func_nic_state(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_GET_SLAVE_VROCE_DEVICE_STATE:
+ err = __event_get_func_vroce_state(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+static int sw_set_slave_func_nic_state(struct hinic3_hwdev *hwdev, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_slave_func_nic_state *nic_state = buf_in;
+ struct hinic3_slave_func_nic_state *nic_state_out = buf_out;
+ struct hinic3_multi_host_mgmt *mhost_mgmt = hwdev->mhost_mgmt;
+
+ *out_size = sizeof(*nic_state);
+ nic_state_out->status = 0;
+ sdk_info(hwdev->dev_hdl, "Slave func %u %s nic\n",
+ nic_state->func_idx,
+ nic_state->enable ? "register" : "unregister");
+
+ if (nic_state->enable) {
+ set_bit(nic_state->func_idx, mhost_mgmt->func_nic_en);
+ } else {
+ if ((test_bit(nic_state->func_idx, mhost_mgmt->func_nic_en)) &&
+ nic_state->func_idx >= HINIC3_SUPPORT_MAX_PF_NUM &&
+ (!test_bit(nic_state->func_idx, hwdev->func_probe_in_host))) {
+ sdk_warn(hwdev->dev_hdl, "VF%u in vm, delete tap port failed\n",
+ nic_state->func_idx);
+ nic_state_out->status = HINIC3_VF_IN_VM;
+ return 0;
+ }
+ clear_bit(nic_state->func_idx, mhost_mgmt->func_nic_en);
+ }
+
+ return multi_host_event_handler(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size);
+}
+
+static int sw_set_slave_vroce_state(struct hinic3_hwdev *hwdev, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_slave_func_nic_state *nic_state = buf_in;
+ struct hinic3_slave_func_nic_state *nic_state_out = buf_out;
+ struct hinic3_multi_host_mgmt *mhost_mgmt = hwdev->mhost_mgmt;
+ int err;
+
+ nic_state = buf_in;
+ *out_size = sizeof(*nic_state);
+ nic_state_out->status = 0;
+
+ sdk_info(hwdev->dev_hdl, "Slave func %u %s vroce\n", nic_state->func_idx,
+ nic_state->enable ? "register" : "unregister");
+
+ if (nic_state->enable)
+ set_bit(nic_state->func_idx,
+ mhost_mgmt->func_vroce_en);
+ else
+ clear_bit(nic_state->func_idx,
+ mhost_mgmt->func_vroce_en);
+
+ err = multi_host_event_handler(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+
+ return err;
+}
+
+static int sw_get_slave_vroce_device_state(struct hinic3_hwdev *hwdev, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_slave_func_nic_state *nic_state_out = buf_out;
+ int err;
+
+ *out_size = sizeof(struct hinic3_slave_func_nic_state);
+ nic_state_out->status = 0;
+ err = multi_host_event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+
+ return err;
+}
+
+static void sw_get_slave_netdev_state(struct hinic3_hwdev *hwdev, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_slave_func_nic_state *nic_state = buf_in;
+ struct hinic3_slave_func_nic_state *nic_state_out = buf_out;
+
+ *out_size = sizeof(*nic_state);
+ nic_state_out->status = 0;
+ nic_state_out->opened =
+ test_bit(nic_state->func_idx,
+ hwdev->netdev_setup_state) ? 1 : 0;
+}
+
+static int __slave_host_sw_func_handler(struct hinic3_hwdev *hwdev, u16 pf_idx,
+ u8 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = hwdev->mhost_mgmt;
+ int err = 0;
+
+ if (!mhost_mgmt)
+ return -ENXIO;
+
+ switch (cmd) {
+ case HINIC3_SW_CMD_SET_SLAVE_FUNC_NIC_STATE:
+ err = sw_set_slave_func_nic_state(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_SET_SLAVE_FUNC_VROCE_STATE:
+ err = sw_set_slave_vroce_state(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_GET_SLAVE_VROCE_DEVICE_STATE:
+ err = sw_get_slave_vroce_device_state(hwdev, cmd,
+ buf_in, in_size,
+ buf_out, out_size);
+ break;
+ case HINIC3_SW_CMD_GET_SLAVE_NETDEV_STATE:
+ sw_get_slave_netdev_state(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+static int sw_func_ppf_mbox_handler(void *handle, u16 pf_idx, u16 vf_id, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = handle;
+ int err;
+
+ if (IS_MASTER_HOST(hwdev))
+ err = __master_host_sw_func_handler(hwdev, pf_idx, (u8)cmd, buf_in,
+ in_size, buf_out, out_size);
+ else if (IS_SLAVE_HOST(hwdev))
+ err = __slave_host_sw_func_handler(hwdev, pf_idx, (u8)cmd, buf_in,
+ in_size, buf_out, out_size);
+ else
+ err = -EINVAL;
+
+ if (err)
+ sdk_err(hwdev->dev_hdl, "PPF process sw funcs cmd %u failed, err: %d\n",
+ cmd, err);
+
+ return err;
+}
+
+static int __ppf_process_mbox_msg(struct hinic3_hwdev *hwdev, u16 pf_idx, u16 vf_id,
+ enum hinic3_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ /* return an error when not supported */
+ int err = -EFAULT;
+
+ if (IS_SLAVE_HOST(hwdev)) {
+ err = hinic3_mbox_to_host_sync(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err)
+ sdk_err(hwdev->dev_hdl, "Send mailbox to mPF failed, err: %d\n",
+ err);
+ } else if (IS_MASTER_HOST(hwdev)) {
+ if (mod == HINIC3_MOD_COMM && cmd == COMM_MGMT_CMD_START_FLR)
+ err = hinic3_pf_to_mgmt_no_ack(hwdev, mod, cmd, buf_in,
+ in_size);
+ else
+ err = hinic3_pf_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in,
+ in_size, buf_out,
+ out_size, 0U);
+ if (err && err != HINIC3_MBOX_PF_BUSY_ACTIVE_FW)
+ sdk_err(hwdev->dev_hdl, "PF mbox mod %d cmd %u callback handler err: %d\n",
+ mod, cmd, err);
+ }
+
+ return err;
+}
+
+static int hinic3_ppf_process_mbox_msg(struct hinic3_hwdev *hwdev, u16 pf_idx, u16 vf_id,
+ enum hinic3_mod_type mod, u8 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ bool same_host = false;
+ int err = -EFAULT;
+
+ /* Currently, only the master ppf and slave ppf communicate with each
+ * other through ppf messages. If other PF/VFs need to communicate
+ * with the PPF, modify the same_host based on the
+ * hinic3_get_hw_pf_infos information.
+ */
+
+ switch (hwdev->func_mode) {
+ case FUNC_MOD_MULTI_VM_MASTER:
+ case FUNC_MOD_MULTI_BM_MASTER:
+ if (!same_host)
+ err = __ppf_process_mbox_msg(hwdev, pf_idx, vf_id,
+ mod, cmd, buf_in, in_size,
+ buf_out, out_size);
+ else
+ sdk_warn(hwdev->dev_hdl, "Doesn't support PPF mbox message in BM master\n");
+
+ break;
+ case FUNC_MOD_MULTI_VM_SLAVE:
+ case FUNC_MOD_MULTI_BM_SLAVE:
+ same_host = true;
+ if (same_host)
+ err = __ppf_process_mbox_msg(hwdev, pf_idx, vf_id,
+ mod, cmd, buf_in, in_size,
+ buf_out, out_size);
+ else
+ sdk_warn(hwdev->dev_hdl, "Doesn't support receiving control messages from BM master\n");
+
+ break;
+ default:
+ sdk_warn(hwdev->dev_hdl, "Doesn't support PPF mbox message\n");
+
+ break;
+ }
+
+ return err;
+}
+
+static int comm_ppf_mbox_handler(void *handle, u16 pf_idx, u16 vf_id, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ return hinic3_ppf_process_mbox_msg(handle, pf_idx, vf_id, HINIC3_MOD_COMM,
+ (u8)cmd, buf_in, in_size, buf_out,
+ out_size);
+}
+
+static int hilink_ppf_mbox_handler(void *handle, u16 pf_idx, u16 vf_id, u16 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return hinic3_ppf_process_mbox_msg(handle, pf_idx, vf_id,
+ HINIC3_MOD_HILINK, (u8)cmd, buf_in,
+ in_size, buf_out, out_size);
+}
+
+static int hinic3_nic_ppf_mbox_handler(void *handle, u16 pf_idx, u16 vf_id, u16 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return hinic3_ppf_process_mbox_msg(handle, pf_idx, vf_id,
+ HINIC3_MOD_L2NIC, (u8)cmd, buf_in, in_size,
+ buf_out, out_size);
+}
+
+static int hinic3_register_slave_ppf(struct hinic3_hwdev *hwdev, bool registered)
+{
+ struct register_slave_host *host_info = NULL;
+ u16 out_size = sizeof(struct register_slave_host);
+ u8 cmd;
+ int err;
+
+ if (!IS_SLAVE_HOST(hwdev))
+ return -EINVAL;
+
+ /* if hot plug is not supported, there is nothing to register */
+ if (UNSUPPORT_HOT_PLUG((struct hinic3_hwdev *)hwdev))
+ return 0;
+
+ host_info = kcalloc(1, sizeof(struct register_slave_host), GFP_KERNEL);
+ if (!host_info)
+ return -ENOMEM;
+
+ cmd = registered ? HINIC3_SW_CMD_SLAVE_HOST_PPF_REGISTER :
+ HINIC3_SW_CMD_SLAVE_HOST_PPF_UNREGISTER;
+
+ host_info->host_id = hinic3_pcie_itf_id(hwdev);
+ host_info->ppf_idx = hinic3_ppf_idx(hwdev);
+
+ err = hinic3_mbox_to_host_sync(hwdev, HINIC3_MOD_SW_FUNC, cmd,
+ host_info, sizeof(struct register_slave_host), host_info,
+ &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (!!err || !out_size || host_info->status) {
+ sdk_err(hwdev->dev_hdl, "Failed to %s slave host, err: %d, out_size: 0x%x, status: 0x%x\n",
+ registered ? "register" : "unregister", err, out_size, host_info->status);
+
+ kfree(host_info);
+ return -EFAULT;
+ }
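+
+ /* cache the nic/vroce enable bitmaps reported by the master host */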
+ bitmap_copy(hwdev->mhost_mgmt->func_nic_en,
+ (ulong *)host_info->funcs_nic_en,
+ HINIC3_MAX_MGMT_FUNCTIONS);
+
+ if (IS_SLAVE_HOST(hwdev))
+ bitmap_copy(hwdev->mhost_mgmt->func_vroce_en,
+ (ulong *)host_info->funcs_vroce_en,
+ HINIC3_MAX_MGMT_FUNCTIONS);
+
+ kfree(host_info);
+ return 0;
+}
+
+static int get_host_id_by_func_id(struct hinic3_hwdev *hwdev, u16 func_idx,
+ u8 *host_id)
+{
+ struct hinic3_hw_pf_infos *pf_infos = NULL;
+ u16 vf_id_start, vf_id_end;
+ int i;
+
+ if (!hwdev || !host_id || !hwdev->mhost_mgmt)
+ return -EINVAL;
+
+ pf_infos = &hwdev->mhost_mgmt->pf_infos;
+
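+ /* the index may be a PF itself or one of the VFs in a PF's VF range */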
+ for (i = 0; i < pf_infos->num_pfs; i++) {
+ if (func_idx == pf_infos->infos[i].glb_func_idx) {
+ *host_id = pf_infos->infos[i].itf_idx;
+ return 0;
+ }
+
+ vf_id_start = pf_infos->infos[i].glb_pf_vf_offset + 1;
+ vf_id_end = pf_infos->infos[i].glb_pf_vf_offset +
+ pf_infos->infos[i].max_vfs;
+ if (func_idx >= vf_id_start && func_idx <= vf_id_end) {
+ *host_id = pf_infos->infos[i].itf_idx;
+ return 0;
+ }
+ }
+
+ return -EFAULT;
+}
+
+static int set_slave_func_nic_state(struct hinic3_hwdev *hwdev,
+ struct hinic3_func_nic_state *state)
+{
+ struct hinic3_slave_func_nic_state nic_state = {0};
+ u16 out_size = sizeof(nic_state);
+ u8 cmd = HINIC3_SW_CMD_SET_SLAVE_FUNC_NIC_STATE;
+ int err;
+
+ nic_state.func_idx = state->func_idx;
+ nic_state.enable = state->state;
+ nic_state.vroce_flag = state->vroce_flag;
+
+ if (state->vroce_flag)
+ cmd = HINIC3_SW_CMD_SET_SLAVE_FUNC_VROCE_STATE;
+
+ err = hinic3_mbox_to_host_sync(hwdev, HINIC3_MOD_SW_FUNC,
+ cmd, &nic_state, sizeof(nic_state),
+ &nic_state, &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+ sdk_warn(hwdev->dev_hdl,
+ "Can not notify func %u %s state because slave host isn't initialized\n",
+ state->func_idx, state->vroce_flag ? "vroce" : "nic");
+ } else if (err || !out_size || nic_state.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to set slave %s state, err: %d, out_size: 0x%x, status: 0x%x\n",
+ state->vroce_flag ? "vroce" : "nic",
+ err, out_size, nic_state.status);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int get_slave_func_netdev_state(struct hinic3_hwdev *hwdev, u16 func_idx, int *opened)
+{
+ struct hinic3_slave_func_nic_state nic_state = {0};
+ u16 out_size = sizeof(nic_state);
+ int err;
+
+ nic_state.func_idx = func_idx;
+ err = hinic3_mbox_to_host_sync(hwdev, HINIC3_MOD_SW_FUNC,
+ HINIC3_SW_CMD_GET_SLAVE_NETDEV_STATE,
+ &nic_state, sizeof(nic_state), &nic_state,
+ &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+ sdk_warn(hwdev->dev_hdl,
+ "Can not get func %u netdev state because slave host isn't initialized\n",
+ func_idx);
+ } else if (err || !out_size || nic_state.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to get netdev state, err: %d, out_size: 0x%x, status: 0x%x\n",
+ err, out_size, nic_state.status);
+ return -EFAULT;
+ }
+
+ *opened = nic_state.opened;
+ return 0;
+}
+
+static int set_nic_state_params_valid(void *hwdev,
+ struct hinic3_func_nic_state *state)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = NULL;
+ struct hinic3_hwdev *ppf_hwdev = hwdev;
+
+ if (!hwdev || !state)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ ppf_hwdev = ((struct hinic3_hwdev *)hwdev)->ppf_hwdev;
+
+ if (!ppf_hwdev || !IS_MASTER_HOST(ppf_hwdev))
+ return -EINVAL;
+
+ mhost_mgmt = ppf_hwdev->mhost_mgmt;
+ if (!mhost_mgmt || state->func_idx >= HINIC3_MAX_MGMT_FUNCTIONS)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int get_func_current_state(struct hinic3_multi_host_mgmt *mhost_mgmt,
+ struct hinic3_func_nic_state *state,
+ int *old_state)
+{
+ ulong *func_bitmap = NULL;
+
+ if (state->vroce_flag == 1)
+ func_bitmap = mhost_mgmt->func_vroce_en;
+ else
+ func_bitmap = mhost_mgmt->func_nic_en;
+
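+ /* remember the previous state so the caller can roll back on failure */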
+ *old_state = test_bit(state->func_idx, func_bitmap) ? 1 : 0;
+ if (state->state == HINIC3_FUNC_NIC_DEL)
+ clear_bit(state->func_idx, func_bitmap);
+ else if (state->state == HINIC3_FUNC_NIC_ADD)
+ set_bit(state->func_idx, func_bitmap);
+ else
+ return -EINVAL;
+
+ return 0;
+}
+
+static bool check_vroce_state(struct hinic3_multi_host_mgmt *mhost_mgmt,
+ struct hinic3_func_nic_state *state)
+{
+ bool is_ready = true;
+ ulong *func_bitmap = mhost_mgmt->func_vroce_en;
+
+ if (!state->vroce_flag && state->state == HINIC3_FUNC_NIC_DEL)
+ is_ready = test_bit(state->func_idx, func_bitmap) ? false : true;
+
+ return is_ready;
+}
+
+int hinic3_set_func_nic_state(void *hwdev, struct hinic3_func_nic_state *state)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = NULL;
+ struct hinic3_hwdev *ppf_hwdev = hwdev;
+ u8 host_enable;
+ int err, old_state = 0;
+ u8 host_id = 0;
+
+ err = set_nic_state_params_valid(hwdev, state);
+ if (err)
+ return err;
+
+ mhost_mgmt = ppf_hwdev->mhost_mgmt;
+
+ if (IS_MASTER_HOST(ppf_hwdev) &&
+ !check_vroce_state(mhost_mgmt, state)) {
+ sdk_warn(ppf_hwdev->dev_hdl,
+ "Should disable vroce before disable nic for function %u\n",
+ state->func_idx);
+ return -EFAULT;
+ }
+
+ err = get_func_current_state(mhost_mgmt, state, &old_state);
+ if (err) {
+ sdk_err(ppf_hwdev->dev_hdl, "Failed to get function %u current state, err: %d\n",
+ state->func_idx, err);
+ return err;
+ }
+
+ err = get_host_id_by_func_id(ppf_hwdev, state->func_idx, &host_id);
+ if (err) {
+ sdk_err(ppf_hwdev->dev_hdl,
+ "Failed to get function %u host id, err: %d\n", state->func_idx, err);
+ if (state->vroce_flag)
+ return -EFAULT;
+
+ old_state ? set_bit(state->func_idx, mhost_mgmt->func_nic_en) :
+ clear_bit(state->func_idx, mhost_mgmt->func_nic_en);
+ return -EFAULT;
+ }
+
+ err = hinic3_get_slave_host_enable(hwdev, host_id, &host_enable);
+ if (err != 0) {
+ sdk_err(ppf_hwdev->dev_hdl,
+ "Get slave host %u enable failed, ret %d\n", host_id, err);
+ return err;
+ }
+ sdk_info(ppf_hwdev->dev_hdl, "Set slave host %u(status: %u) func %u %s %s\n",
+ host_id, host_enable, state->func_idx,
+ state->state ? "enable" : "disable", state->vroce_flag ? "vroce" : "nic");
+
+ if (!host_enable)
+ return 0;
+
+ /* notify slave host */
+ err = set_slave_func_nic_state(hwdev, state);
+ if (err) {
+ if (state->vroce_flag)
+ return -EFAULT;
+
+ old_state ? set_bit(state->func_idx, mhost_mgmt->func_nic_en) :
+ clear_bit(state->func_idx, mhost_mgmt->func_nic_en);
+ return err;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_func_nic_state);
+
+int hinic3_get_netdev_state(void *hwdev, u16 func_idx, int *opened)
+{
+ struct hinic3_hwdev *ppf_hwdev = hwdev;
+ int err;
+ u8 host_enable;
+ u8 host_id = 0;
+ struct hinic3_func_nic_state state = {0};
+
+ *opened = 0;
+ state.func_idx = func_idx;
+ err = set_nic_state_params_valid(hwdev, &state);
+ if (err)
+ return err;
+
+ err = get_host_id_by_func_id(ppf_hwdev, func_idx, &host_id);
+ if (err) {
+ sdk_err(ppf_hwdev->dev_hdl, "Failed to get function %u host id, err: %d\n",
+ func_idx, err);
+ return -EFAULT;
+ }
+
+ err = hinic3_get_slave_host_enable(hwdev, host_id, &host_enable);
+ if (err != 0) {
+ sdk_err(ppf_hwdev->dev_hdl, "Get slave host %u enable failed, ret %d\n",
+ host_id, err);
+ return err;
+ }
+ if (!host_enable)
+ return 0;
+
+ return get_slave_func_netdev_state(hwdev, func_idx, opened);
+}
+EXPORT_SYMBOL(hinic3_get_netdev_state);
+
+static int __get_func_nic_state_from_pf(struct hinic3_hwdev *hwdev,
+ u16 glb_func_idx, u8 *en)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = NULL;
+ struct hinic3_hwdev *ppf_hwdev = hwdev;
+
+ down(&hwdev->ppf_sem);
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ ppf_hwdev = ((struct hinic3_hwdev *)hwdev)->ppf_hwdev;
+
+ if (!ppf_hwdev || !ppf_hwdev->mhost_mgmt) {
+ up(&hwdev->ppf_sem);
+ return -EFAULT;
+ }
+
+ mhost_mgmt = ppf_hwdev->mhost_mgmt;
+ *en = !!test_bit(glb_func_idx, mhost_mgmt->func_nic_en);
+ up(&hwdev->ppf_sem);
+
+ return 0;
+}
+
+static int __get_func_vroce_state_from_pf(struct hinic3_hwdev *hwdev,
+ u16 glb_func_idx, u8 *en)
+{
+ struct hinic3_multi_host_mgmt *mhost_mgmt = NULL;
+ struct hinic3_hwdev *ppf_hwdev = hwdev;
+
+ down(&hwdev->ppf_sem);
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ ppf_hwdev = ((struct hinic3_hwdev *)hwdev)->ppf_hwdev;
+
+ if (!ppf_hwdev || !ppf_hwdev->mhost_mgmt) {
+ up(&hwdev->ppf_sem);
+ return -EFAULT;
+ }
+
+ mhost_mgmt = ppf_hwdev->mhost_mgmt;
+ *en = !!test_bit(glb_func_idx, mhost_mgmt->func_vroce_en);
+ up(&hwdev->ppf_sem);
+
+ return 0;
+}
+
+static int __get_vf_func_nic_state(struct hinic3_hwdev *hwdev, u16 glb_func_idx,
+ bool *en)
+{
+ struct hinic3_slave_func_nic_state nic_state = {0};
+ u16 out_size = sizeof(nic_state);
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ nic_state.func_idx = glb_func_idx;
+ err = hinic3_mbox_to_pf(hwdev, HINIC3_MOD_SW_FUNC,
+ HINIC3_SW_CMD_GET_SLAVE_FUNC_NIC_STATE,
+ &nic_state, sizeof(nic_state),
+ &nic_state, &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err || !out_size || nic_state.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to get vf %u state, err: %d, out_size: %u, status: 0x%x\n",
+ glb_func_idx, err, out_size, nic_state.status);
+ return -EFAULT;
+ }
+
+ *en = !!nic_state.enable;
+
+ return 0;
+ }
+
+ return -EFAULT;
+}
+
+static int __get_func_vroce_state(struct hinic3_hwdev *hwdev, u16 glb_func_idx,
+ u8 *en)
+{
+ struct hinic3_slave_func_nic_state vroce_state = {0};
+ u16 out_size = sizeof(vroce_state);
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ vroce_state.func_idx = glb_func_idx;
+ err = hinic3_mbox_to_pf(hwdev, HINIC3_MOD_SW_FUNC,
+ HINIC3_SW_CMD_GET_SLAVE_FUNC_VROCE_STATE,
+ &vroce_state, sizeof(vroce_state),
+ &vroce_state, &out_size, 0, HINIC3_CHANNEL_COMM);
+ if (err || !out_size || vroce_state.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to get vf %u state, err: %d, out_size: %u, status: 0x%x\n",
+ glb_func_idx, err, out_size, vroce_state.status);
+ return -EFAULT;
+ }
+
+ *en = !!vroce_state.enable;
+
+ return 0;
+ }
+
+ return __get_func_vroce_state_from_pf(hwdev, glb_func_idx, en);
+}
+
+int hinic3_get_func_vroce_enable(void *hwdev, u16 glb_func_idx, u8 *en)
+{
+ if (!hwdev || !en)
+ return -EINVAL;
+
+ return __get_func_vroce_state(hwdev, glb_func_idx, en);
+}
+EXPORT_SYMBOL(hinic3_get_func_vroce_enable);
+
+int hinic3_get_func_nic_enable(void *hwdev, u16 glb_func_idx, bool *en)
+{
+ u8 nic_en;
+ int err;
+
+ if (!hwdev || !en)
+ return -EINVAL;
+
+ /* single host or hot plug not supported: nic is always enabled */
+ if (!IS_MULTI_HOST((struct hinic3_hwdev *)hwdev) ||
+ UNSUPPORT_HOT_PLUG((struct hinic3_hwdev *)hwdev)) {
+ *en = true;
+ return 0;
+ }
+
+ if (!IS_SLAVE_HOST((struct hinic3_hwdev *)hwdev)) {
+ /* if card mode is OVS, VFs don't need attach_uld, so return false. */
+ if (hinic3_func_type(hwdev) == TYPE_VF &&
+ hinic3_support_ovs(hwdev, NULL))
+ *en = false;
+ else
+ *en = true;
+
+ return 0;
+ }
+
+ /* PFs in the slave host should be probed in CHIP_MODE_VMGW
+ * mode for PXE install.
+ * The PF index must be in the range (0 ~ 31).
+ */
+ if (hinic3_func_type(hwdev) != TYPE_VF &&
+ IS_VM_SLAVE_HOST((struct hinic3_hwdev *)hwdev) &&
+ glb_func_idx < HINIC3_SUPPORT_MAX_PF_NUM) {
+ *en = true;
+ return 0;
+ }
+
+ /* try to get function nic state in sdk directly */
+ err = __get_func_nic_state_from_pf(hwdev, glb_func_idx, &nic_en);
+ if (err) {
+ if (glb_func_idx < HINIC3_SUPPORT_MAX_PF_NUM)
+ return err;
+ } else {
+ *en = !!nic_en;
+ return 0;
+ }
+
+ return __get_vf_func_nic_state(hwdev, glb_func_idx, en);
+}
+
+static int slave_host_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ if (IS_SLAVE_HOST(hwdev)) {
+ /* PXE doesn't support receiving mbox messages from the master host */
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), true);
+ if ((IS_VM_SLAVE_HOST(hwdev) &&
+ hinic3_get_master_host_mbox_enable(hwdev)) ||
+ IS_BMGW_SLAVE_HOST(hwdev)) {
+ err = hinic3_register_slave_ppf(hwdev, true);
+ if (err) {
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), false);
+ return err;
+ }
+ }
+ } else {
+ /* the slave host can send messages to the mgmt cpu
+ * only after the master mbox is set up
+ */
+ set_master_host_mbox_enable(hwdev, true);
+ }
+
+ return 0;
+}
+
+int hinic3_multi_host_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+ int is_use_vram, is_in_kexec;
+
+ if (!IS_MULTI_HOST(hwdev) || !HINIC3_IS_PPF(hwdev))
+ return 0;
+
+ is_use_vram = get_use_vram_flag();
+ if (is_use_vram != 0) {
+ snprintf(hwdev->mhost_mgmt_name, VRAM_NAME_MAX_LEN, "%s", VRAM_NIC_MHOST_MGMT);
+ hwdev->mhost_mgmt = hi_vram_kalloc(hwdev->mhost_mgmt_name, sizeof(*hwdev->mhost_mgmt));
+ } else {
+ hwdev->mhost_mgmt = kcalloc(1, sizeof(*hwdev->mhost_mgmt), GFP_KERNEL);
+ }
+ if (!hwdev->mhost_mgmt)
+ return -ENOMEM;
+
+ hwdev->mhost_mgmt->shost_ppf_idx = hinic3_host_ppf_idx(hwdev, HINIC3_MGMT_SHOST_HOST_ID);
+ hwdev->mhost_mgmt->mhost_ppf_idx = hinic3_host_ppf_idx(hwdev, cap->master_host_id);
+
+ err = hinic3_get_hw_pf_infos(hwdev, &hwdev->mhost_mgmt->pf_infos, HINIC3_CHANNEL_COMM);
+ if (err)
+ goto out_free_mhost_mgmt;
+
+ hinic3_register_ppf_mbox_cb(hwdev, HINIC3_MOD_COMM, hwdev, comm_ppf_mbox_handler);
+ hinic3_register_ppf_mbox_cb(hwdev, HINIC3_MOD_L2NIC, hwdev, hinic3_nic_ppf_mbox_handler);
+ hinic3_register_ppf_mbox_cb(hwdev, HINIC3_MOD_HILINK, hwdev, hilink_ppf_mbox_handler);
+ hinic3_register_ppf_mbox_cb(hwdev, HINIC3_MOD_SW_FUNC, hwdev, sw_func_ppf_mbox_handler);
+
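+ /* keep the enable bitmaps across kexec; otherwise start from a clean state */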
+ is_in_kexec = vram_get_kexec_flag();
+ if (is_in_kexec == 0) {
+ bitmap_zero(hwdev->mhost_mgmt->func_nic_en, HINIC3_MAX_MGMT_FUNCTIONS);
+ bitmap_zero(hwdev->mhost_mgmt->func_vroce_en, HINIC3_MAX_MGMT_FUNCTIONS);
+ }
+
+ /* Slave host:
+ * register slave host ppf functions
+ * Get function's nic state
+ */
+ err = slave_host_init(hwdev);
+ if (err)
+ goto out_free_mhost_mgmt;
+
+ return 0;
+
+out_free_mhost_mgmt:
+ if (is_use_vram != 0) {
+ hi_vram_kfree((void *)hwdev->mhost_mgmt,
+ hwdev->mhost_mgmt_name,
+ sizeof(*hwdev->mhost_mgmt));
+ } else {
+ kfree(hwdev->mhost_mgmt);
+ }
+ hwdev->mhost_mgmt = NULL;
+
+ return err;
+}
+
+int hinic3_multi_host_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ int is_use_vram;
+ if (!IS_MULTI_HOST(hwdev) || !HINIC3_IS_PPF(hwdev))
+ return 0;
+
+ if (IS_SLAVE_HOST(hwdev)) {
+ hinic3_register_slave_ppf(hwdev, false);
+
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), false);
+ } else {
+ set_master_host_mbox_enable(hwdev, false);
+ }
+
+ hinic3_unregister_ppf_mbox_cb(hwdev, HINIC3_MOD_COMM);
+ hinic3_unregister_ppf_mbox_cb(hwdev, HINIC3_MOD_L2NIC);
+ hinic3_unregister_ppf_mbox_cb(hwdev, HINIC3_MOD_HILINK);
+ hinic3_unregister_ppf_mbox_cb(hwdev, HINIC3_MOD_SW_FUNC);
+
+ is_use_vram = get_use_vram_flag();
+ if (is_use_vram != 0) {
+ hi_vram_kfree((void *)hwdev->mhost_mgmt,
+ hwdev->mhost_mgmt_name,
+ sizeof(*hwdev->mhost_mgmt));
+ } else {
+ kfree(hwdev->mhost_mgmt);
+ }
+ hwdev->mhost_mgmt = NULL;
+
+ return 0;
+}
+
+int hinic3_get_mhost_func_nic_enable(void *hwdev, u16 func_id, bool *en)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u8 func_en;
+ int ret;
+
+ if (!hwdev || !en || func_id >= HINIC3_MAX_MGMT_FUNCTIONS || !IS_MULTI_HOST(dev))
+ return -EINVAL;
+
+ ret = __get_func_nic_state_from_pf(hwdev, func_id, &func_en);
+ if (ret)
+ return ret;
+
+ *en = !!func_en;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_mhost_func_nic_enable);
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.h
new file mode 100644
index 0000000..fb25160
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_multi_host_mgmt.h
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2022 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MULTI_HOST_MGMT_H
+#define HINIC3_MULTI_HOST_MGMT_H
+
+#define HINIC3_VF_IN_VM 0x3
+
+#define HINIC3_MGMT_SHOST_HOST_ID 0
+#define HINIC3_MAX_MGMT_FUNCTIONS 1024
+#define HINIC3_MAX_MGMT_FUNCTIONS_64 (HINIC3_MAX_MGMT_FUNCTIONS / 64)
+
+struct hinic3_multi_host_mgmt {
+ struct hinic3_hwdev *hwdev;
+
+ /* slave host registered */
+ bool shost_registered;
+ u8 shost_host_idx;
+ u8 shost_ppf_idx;
+
+ u8 mhost_ppf_idx;
+ u8 rsvd1;
+
+	/* slave host functions support nic enable */
+ DECLARE_BITMAP(func_nic_en, HINIC3_MAX_MGMT_FUNCTIONS);
+ DECLARE_BITMAP(func_vroce_en, HINIC3_MAX_MGMT_FUNCTIONS);
+
+ struct hinic3_hw_pf_infos pf_infos;
+
+ u64 rsvd2;
+};
+
+struct hinic3_host_fwd_head {
+ unsigned short dst_glb_func_idx;
+ unsigned char dst_itf_idx;
+ unsigned char mod;
+
+ unsigned char cmd;
+ unsigned char rsv[3];
+};
+
+/* software cmds, vf->pf and multi-host */
+enum hinic3_sw_funcs_cmd {
+ HINIC3_SW_CMD_SLAVE_HOST_PPF_REGISTER = 0x0,
+ HINIC3_SW_CMD_SLAVE_HOST_PPF_UNREGISTER,
+ HINIC3_SW_CMD_GET_SLAVE_FUNC_NIC_STATE,
+ HINIC3_SW_CMD_SET_SLAVE_FUNC_NIC_STATE,
+ HINIC3_SW_CMD_SEND_MSG_TO_VF,
+ HINIC3_SW_CMD_MIGRATE_READY,
+ HINIC3_SW_CMD_GET_SLAVE_NETDEV_STATE,
+
+ HINIC3_SW_CMD_GET_SLAVE_FUNC_VROCE_STATE,
+ HINIC3_SW_CMD_SET_SLAVE_FUNC_VROCE_STATE,
+	HINIC3_SW_CMD_GET_SLAVE_VROCE_DEVICE_STATE = 0x9, /* matches the macro in vroce_cfg_vf_do.h */
+};
+
+/* multi host mgmt event sub cmd */
+enum hinic3_mhost_even_type {
+ HINIC3_MHOST_NIC_STATE_CHANGE = 1,
+ HINIC3_MHOST_VROCE_STATE_CHANGE = 2,
+ HINIC3_MHOST_GET_VROCE_STATE = 3,
+};
+
+struct hinic3_mhost_nic_func_state {
+ u8 status;
+ u8 enable;
+ u16 func_idx;
+};
+
+struct hinic3_multi_host_mgmt_event {
+ u16 sub_cmd;
+ u16 rsvd[3];
+
+ void *data;
+};
+
+int hinic3_multi_host_mgmt_init(struct hinic3_hwdev *hwdev);
+int hinic3_multi_host_mgmt_free(struct hinic3_hwdev *hwdev);
+int hinic3_mbox_to_host_no_ack(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size, u16 channel);
+
+struct register_slave_host {
+ u8 status;
+ u8 version;
+ u8 rsvd[6];
+
+ u8 host_id;
+ u8 ppf_idx;
+ u8 get_nic_en;
+ u8 rsvd2[5];
+
+ /* 16 * 64 bits for max 1024 functions */
+ u64 funcs_nic_en[HINIC3_MAX_MGMT_FUNCTIONS_64];
+ /* 16 * 64 bits for max 1024 functions */
+ u64 funcs_vroce_en[HINIC3_MAX_MGMT_FUNCTIONS_64];
+};
+
+struct hinic3_slave_func_nic_state {
+ u8 status;
+ u8 version;
+ u8 rsvd[6];
+
+ u16 func_idx;
+ u8 enable;
+ u8 opened;
+ u8 vroce_flag;
+ u8 rsvd2[7];
+};
+
+void set_master_host_mbox_enable(struct hinic3_hwdev *hwdev, bool enable);
+
+int sw_func_pf_mbox_handler(void *pri_handle, u16 vf_id, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+
+int vf_sw_func_handler(void *hwdev, u8 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+int hinic3_set_func_probe_in_host(void *hwdev, u16 func_id, bool probe);
+bool hinic3_get_func_probe_in_host(void *hwdev, u16 func_id);
+
+void *hinic3_get_ppf_hwdev_by_pdev(struct pci_dev *pdev);
+
+int hinic3_get_func_nic_enable(void *hwdev, u16 glb_func_idx, bool *en);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c
new file mode 100644
index 0000000..ee7afef
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c
@@ -0,0 +1,1021 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <net/sock.h>
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_dev_mgmt.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_lld.h"
+#include "hinic3_hw_mt.h"
+#include "hinic3_nictool.h"
+
+static int g_nictool_ref_cnt;
+
+static dev_t g_dev_id = {0};
+static struct class *g_nictool_class;
+static struct cdev g_nictool_cdev;
+
+#define HINIC3_MAX_BUF_SIZE (2048 * 1024)
+
+void *g_card_node_array[MAX_CARD_NUM] = {0};
+void *g_card_vir_addr[MAX_CARD_NUM] = {0};
+u64 g_card_phy_addr[MAX_CARD_NUM] = {0};
+int card_id;
+
+#define HIADM3_DEV_PATH "/dev/hinic3_nictool_dev"
+#define HIADM3_DEV_CLASS "hinic3_nictool_class"
+#define HIADM3_DEV_NAME "hinic3_nictool_dev"
+
+typedef int (*hw_driv_module)(struct hinic3_lld_dev *lld_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size);
+
+struct hw_drv_module_handle {
+ enum driver_cmd_type driv_cmd_name;
+ hw_driv_module driv_func;
+};
+
+static int get_single_card_info(struct hinic3_lld_dev *lld_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ if (!buf_out || *out_size != sizeof(struct card_info)) {
+ pr_err("buf_out is NULL, or out_size != %lu\n", sizeof(struct card_info));
+ return -EINVAL;
+ }
+
+ hinic3_get_card_info(hinic3_get_sdk_hwdev_by_lld(lld_dev), buf_out);
+
+ return 0;
+}
+
+static int is_driver_in_vm(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ bool in_host = false;
+
+ if (!buf_out || (*out_size != sizeof(u8))) {
+ pr_err("buf_out is NULL, or out_size != %lu\n", sizeof(u8));
+ return -EINVAL;
+ }
+
+ in_host = hinic3_is_in_host();
+ if (in_host)
+ *((u8 *)buf_out) = 0;
+ else
+ *((u8 *)buf_out) = 1;
+
+ return 0;
+}
+
+static int get_all_chip_id_cmd(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ if (*out_size != sizeof(struct nic_card_id) || !buf_out) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+ *out_size, sizeof(struct nic_card_id));
+ return -EFAULT;
+ }
+
+ hinic3_get_all_chip_id(buf_out);
+
+ return 0;
+}
+
+static int get_os_hot_replace_info(struct hinic3_lld_dev *lld_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ if (*out_size != sizeof(struct os_hot_replace_info) || !buf_out) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+ *out_size, sizeof(struct os_hot_replace_info));
+ return -EFAULT;
+ }
+
+ hinic3_get_os_hot_replace_info(buf_out);
+
+ return 0;
+}
+
+static int get_card_usr_api_chain_mem(int card_idx)
+{
+ unsigned char *tmp = NULL;
+ int i;
+
+ card_id = card_idx;
+ if (!g_card_vir_addr[card_idx]) {
+ g_card_vir_addr[card_idx] =
+ (void *)ossl_get_free_pages(GFP_KERNEL,
+ DBGTOOL_PAGE_ORDER);
+ if (!g_card_vir_addr[card_idx]) {
+ pr_err("Alloc api chain memory fail for card %d!\n", card_idx);
+ return -EFAULT;
+ }
+
+ memset(g_card_vir_addr[card_idx], 0,
+ PAGE_SIZE * (1 << DBGTOOL_PAGE_ORDER));
+
+ g_card_phy_addr[card_idx] =
+ virt_to_phys(g_card_vir_addr[card_idx]);
+ if (!g_card_phy_addr[card_idx]) {
+ pr_err("phy addr for card %d is 0\n", card_idx);
+ free_pages((unsigned long)g_card_vir_addr[card_idx], DBGTOOL_PAGE_ORDER);
+ g_card_vir_addr[card_idx] = NULL;
+ return -EFAULT;
+ }
+
+ tmp = g_card_vir_addr[card_idx];
+ for (i = 0; i < (1 << DBGTOOL_PAGE_ORDER); i++) {
+ SetPageReserved(virt_to_page(tmp));
+ tmp += PAGE_SIZE;
+ }
+ }
+
+ return 0;
+}
+
+static void chipif_get_all_pf_dev_info(struct pf_dev_info *dev_info, int card_idx,
+ void **g_func_handle_array)
+{
+ u32 func_idx;
+ void *hwdev = NULL;
+ struct pci_dev *pdev = NULL;
+
+ for (func_idx = 0; func_idx < PF_DEV_INFO_NUM; func_idx++) {
+ hwdev = (void *)g_func_handle_array[func_idx];
+
+ dev_info[func_idx].phy_addr = g_card_phy_addr[card_idx];
+
+ if (!hwdev) {
+ dev_info[func_idx].bar0_size = 0;
+ dev_info[func_idx].bus = 0;
+ dev_info[func_idx].slot = 0;
+ dev_info[func_idx].func = 0;
+ } else {
+ pdev = (struct pci_dev *)hinic3_get_pcidev_hdl(hwdev);
+ dev_info[func_idx].bar0_size =
+ pci_resource_len(pdev, 0);
+ dev_info[func_idx].bus = pdev->bus->number;
+ dev_info[func_idx].slot = PCI_SLOT(pdev->devfn);
+ dev_info[func_idx].func = PCI_FUNC(pdev->devfn);
+ }
+ }
+}
+
+static int get_pf_dev_info(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct pf_dev_info *dev_info = buf_out;
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ int id, err;
+
+ if (!buf_out || *out_size != sizeof(struct pf_dev_info) * PF_DEV_INFO_NUM) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+ *out_size, sizeof(*dev_info) * PF_DEV_INFO_NUM);
+ return -EFAULT;
+ }
+
+ err = sscanf(card_info->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ return err;
+ }
+
+ if (id >= MAX_CARD_NUM || id < 0) {
+ pr_err("chip id %d exceed limit[0-%d]\n", id, MAX_CARD_NUM - 1);
+ return -EINVAL;
+ }
+
+ chipif_get_all_pf_dev_info(dev_info, id, card_info->func_handle_array);
+
+ err = get_card_usr_api_chain_mem(id);
+ if (err) {
+		pr_err("Failed to get api chain memory for userspace %s\n",
+ card_info->chip_name);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static void dbgtool_knl_free_mem(int id)
+{
+ unsigned char *tmp = NULL;
+ int i;
+
+ if (id < 0 || id >= MAX_CARD_NUM) {
+ pr_err("Invalid card id\n");
+ return;
+ }
+
+ if (!g_card_vir_addr[id])
+ return;
+
+ tmp = g_card_vir_addr[id];
+ for (i = 0; i < (1 << DBGTOOL_PAGE_ORDER); i++) {
+ ClearPageReserved(virt_to_page(tmp));
+ tmp += PAGE_SIZE;
+ }
+
+ free_pages((unsigned long)g_card_vir_addr[id], DBGTOOL_PAGE_ORDER);
+ g_card_vir_addr[id] = NULL;
+ g_card_phy_addr[id] = 0;
+}
+
+static int free_knl_mem(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ int id, err;
+
+ err = sscanf(card_info->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ return err;
+ }
+
+ if (id >= MAX_CARD_NUM || id < 0) {
+ pr_err("chip id %d exceed limit[0-%d]\n", id, MAX_CARD_NUM - 1);
+ return -EINVAL;
+ }
+
+ dbgtool_knl_free_mem(id);
+
+ return 0;
+}
+
+static int card_info_param_valid(const char *dev_name, const void *buf_out,
+ u32 buf_out_size, int *id)
+{
+ int err;
+
+ if (!buf_out || buf_out_size != sizeof(struct hinic3_card_func_info)) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+ buf_out_size, sizeof(struct hinic3_card_func_info));
+ return -EINVAL;
+ }
+
+ err = memcmp(dev_name, HINIC3_CHIP_NAME, strlen(HINIC3_CHIP_NAME));
+ if (err) {
+ pr_err("Invalid chip name %s\n", dev_name);
+ return err;
+ }
+
+ err = sscanf(dev_name, HINIC3_CHIP_NAME "%d", id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ return err;
+ }
+
+ if (*id >= MAX_CARD_NUM || *id < 0) {
+ pr_err("chip id %d exceed limit[0-%d]\n",
+ *id, MAX_CARD_NUM - 1);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int get_card_func_info(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct hinic3_card_func_info *card_func_info = buf_out;
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ int err, id = 0;
+
+ err = card_info_param_valid(card_info->chip_name, buf_out, *out_size, &id);
+ if (err)
+ return err;
+
+ hinic3_get_card_func_info_by_card_name(card_info->chip_name, card_func_info);
+
+ if (!card_func_info->num_pf) {
+		pr_err("No function found for %s\n", card_info->chip_name);
+ return -EFAULT;
+ }
+
+ err = get_card_usr_api_chain_mem(id);
+ if (err) {
+		pr_err("Failed to get api chain memory for userspace %s\n",
+ card_info->chip_name);
+ return -EFAULT;
+ }
+
+ card_func_info->usr_api_phy_addr = g_card_phy_addr[id];
+
+ return 0;
+}
+
+static int get_pf_cap_info(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct service_cap *func_cap = NULL;
+ struct hinic3_hwdev *hwdev = NULL;
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ struct svc_cap_info *svc_cap_info_in = (struct svc_cap_info *)buf_in;
+ struct svc_cap_info *svc_cap_info_out = (struct svc_cap_info *)buf_out;
+
+ if (*out_size != sizeof(struct svc_cap_info) || in_size != sizeof(struct svc_cap_info) ||
+ !buf_in || !buf_out) {
+ pr_err("Invalid parameter: out_buf_size %u, in_size: %u, expect %lu\n",
+ *out_size, in_size, sizeof(struct svc_cap_info));
+ return -EINVAL;
+ }
+
+ if (svc_cap_info_in->func_idx >= MAX_FUNCTION_NUM) {
+ pr_err("func_idx is illegal. func_idx: %u, max_num: %u\n",
+ svc_cap_info_in->func_idx, MAX_FUNCTION_NUM);
+ return -EINVAL;
+ }
+
+ lld_hold();
+ hwdev = (struct hinic3_hwdev *)(card_info->func_handle_array)[svc_cap_info_in->func_idx];
+ if (!hwdev) {
+ lld_put();
+ return -EINVAL;
+ }
+
+ func_cap = &hwdev->cfg_mgmt->svc_cap;
+ memcpy(&svc_cap_info_out->cap, func_cap, sizeof(struct service_cap));
+ lld_put();
+
+ return 0;
+}
+
+static int get_hw_drv_version(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct drv_version_info *ver_info = buf_out;
+ int err;
+
+ if (!buf_out) {
+ pr_err("Buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(*ver_info)) {
+		pr_err("Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*ver_info));
+ return -EINVAL;
+ }
+
+ err = snprintf(ver_info->ver, sizeof(ver_info->ver), "%s %s", HINIC3_DRV_VERSION,
+ "2025-05-01_00:00:03");
+ if (err < 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int get_pf_id(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct hinic3_pf_info *pf_info = NULL;
+ struct card_node *chip_node = hinic3_get_chip_node_by_lld(lld_dev);
+ u32 port_id;
+ int err;
+
+ if (!chip_node)
+ return -ENODEV;
+
+ if (!buf_out || (*out_size != sizeof(*pf_info)) || !buf_in || in_size != sizeof(u32)) {
+		pr_err("Unexpected out buf size from user: %u, expect: %lu, in size: %u\n",
+ *out_size, sizeof(*pf_info), in_size);
+ return -EINVAL;
+ }
+
+ port_id = *((u32 *)buf_in);
+ pf_info = (struct hinic3_pf_info *)buf_out;
+ err = hinic3_get_pf_id(chip_node, port_id, &pf_info->pf_id, &pf_info->isvalid);
+ if (err)
+ return err;
+
+ *out_size = sizeof(*pf_info);
+
+ return 0;
+}
+
+static int get_mbox_cnt(struct hinic3_lld_dev *lld_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ if (buf_out == NULL || *out_size != sizeof(struct card_mbox_cnt_info)) {
+ pr_err("buf_out is NULL, or out_size != %lu\n",
+			sizeof(struct card_mbox_cnt_info));
+ return -EINVAL;
+ }
+
+ hinic3_get_mbox_cnt(hinic3_get_sdk_hwdev_by_lld(lld_dev), buf_out);
+
+ return 0;
+}
+
+struct hw_drv_module_handle hw_driv_module_cmd_handle[] = {
+ {FUNC_TYPE, get_func_type},
+ {GET_FUNC_IDX, get_func_id},
+ {GET_HW_STATS, (hw_driv_module)get_hw_driver_stats},
+ {CLEAR_HW_STATS, clear_hw_driver_stats},
+ {GET_SELF_TEST_RES, get_self_test_result},
+ {GET_CHIP_FAULT_STATS, (hw_driv_module)get_chip_faults_stats},
+ {GET_SINGLE_CARD_INFO, (hw_driv_module)get_single_card_info},
+ {IS_DRV_IN_VM, is_driver_in_vm},
+ {GET_CHIP_ID, get_all_chip_id_cmd},
+ {GET_PF_DEV_INFO, get_pf_dev_info},
+ {CMD_FREE_MEM, free_knl_mem},
+ {GET_CHIP_INFO, get_card_func_info},
+ {GET_FUNC_CAP, get_pf_cap_info},
+ {GET_DRV_VERSION, get_hw_drv_version},
+ {GET_PF_ID, get_pf_id},
+ {GET_OS_HOT_REPLACE_INFO, get_os_hot_replace_info},
+ {GET_MBOX_CNT, (hw_driv_module)get_mbox_cnt},
+};
+
+static int alloc_tmp_buf(void *hwdev, struct msg_module *nt_msg, u32 in_size,
+ void **buf_in, u32 out_size, void **buf_out)
+{
+ int ret;
+
+ ret = alloc_buff_in(hwdev, nt_msg, in_size, buf_in);
+ if (ret) {
+ pr_err("Alloc tool cmd buff in failed\n");
+ return ret;
+ }
+
+ ret = alloc_buff_out(hwdev, nt_msg, out_size, buf_out);
+ if (ret) {
+ pr_err("Alloc tool cmd buff out failed\n");
+ goto out_free_buf_in;
+ }
+
+ return 0;
+
+out_free_buf_in:
+ free_buff_in(hwdev, nt_msg, *buf_in);
+
+ return ret;
+}
+
+static void free_tmp_buf(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, void *buf_out)
+{
+ free_buff_out(hwdev, nt_msg, buf_out);
+ free_buff_in(hwdev, nt_msg, buf_in);
+}
+
+static int send_to_hw_driver(struct hinic3_lld_dev *lld_dev, struct msg_module *nt_msg,
+ const void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ int index, num_cmds = (int)(sizeof(hw_driv_module_cmd_handle) /
+ sizeof(hw_driv_module_cmd_handle[0]));
+ enum driver_cmd_type cmd_type =
+ (enum driver_cmd_type)(nt_msg->msg_formate);
+ int err = 0;
+
+ for (index = 0; index < num_cmds; index++) {
+ if (cmd_type ==
+ hw_driv_module_cmd_handle[index].driv_cmd_name) {
+ err = hw_driv_module_cmd_handle[index].driv_func
+ (lld_dev, buf_in, in_size, buf_out, out_size);
+ break;
+ }
+ }
+
+ if (index == num_cmds) {
+ pr_err("Can't find callback for %d\n", cmd_type);
+ return -EINVAL;
+ }
+
+ return err;
+}
+
+static int send_to_service_driver(struct hinic3_lld_dev *lld_dev, struct msg_module *nt_msg,
+ const void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ const char **service_name = NULL;
+ enum hinic3_service_type type;
+ void *uld_dev = NULL;
+ int ret = -EINVAL;
+
+ service_name = hinic3_get_uld_names();
+ type = nt_msg->module - SEND_TO_SRV_DRV_BASE;
+ if (type >= SERVICE_T_MAX) {
+		pr_err("Ioctl input module id: %u is incorrect\n", nt_msg->module);
+ return -EINVAL;
+ }
+
+ uld_dev = hinic3_get_uld_dev(lld_dev, type);
+ if (!uld_dev) {
+ if (nt_msg->msg_formate == GET_DRV_VERSION)
+ return 0;
+
+		pr_err("Can not get the uld dev: %s driver may not be registered\n",
+ service_name[type]);
+ return -EINVAL;
+ }
+
+ if (g_uld_info[type].ioctl)
+ ret = g_uld_info[type].ioctl(uld_dev, nt_msg->msg_formate,
+ buf_in, in_size, buf_out, out_size);
+ uld_dev_put(lld_dev, type);
+
+ return ret;
+}
+
+static int nictool_exec_cmd(struct hinic3_lld_dev *lld_dev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ int ret = 0;
+
+ switch (nt_msg->module) {
+ case SEND_TO_HW_DRIVER:
+ ret = send_to_hw_driver(lld_dev, nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ case SEND_TO_MPU:
+ ret = send_to_mpu(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ case SEND_TO_SM:
+ ret = send_to_sm(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ case SEND_TO_NPU:
+ ret = send_to_npu(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ default:
+ ret = send_to_service_driver(lld_dev, nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ }
+
+ return ret;
+}
+
+static int cmd_parameter_valid(struct msg_module *nt_msg, unsigned long arg,
+ u32 *out_size_expect, u32 *in_size)
+{
+ if (copy_from_user(nt_msg, (void *)arg, sizeof(*nt_msg))) {
+ pr_err("Copy information from user failed\n");
+ return -EFAULT;
+ }
+
+ *out_size_expect = nt_msg->buf_out_size;
+ *in_size = nt_msg->buf_in_size;
+ if (*out_size_expect > HINIC3_MAX_BUF_SIZE ||
+ *in_size > HINIC3_MAX_BUF_SIZE) {
+ pr_err("Invalid in size: %u or out size: %u\n",
+ *in_size, *out_size_expect);
+ return -EFAULT;
+ }
+
+ nt_msg->device_name[IFNAMSIZ - 1] = '\0';
+
+ return 0;
+}
+
+static struct hinic3_lld_dev *get_lld_dev_by_nt_msg(struct msg_module *nt_msg)
+{
+ struct hinic3_lld_dev *lld_dev = NULL;
+
+ if (nt_msg->module == SEND_TO_NIC_DRIVER &&
+ (nt_msg->msg_formate == GET_XSFP_INFO ||
+ nt_msg->msg_formate == GET_XSFP_PRESENT ||
+ nt_msg->msg_formate == GET_XSFP_INFO_COMP_CMIS)) {
+ lld_dev = hinic3_get_lld_dev_by_chip_and_port(nt_msg->device_name, nt_msg->port_id);
+ } else if (nt_msg->module == SEND_TO_CUSTOM_DRIVER &&
+ nt_msg->msg_formate == CMD_CUSTOM_BOND_GET_CHIP_NAME) {
+ lld_dev = hinic3_get_lld_dev_by_dev_name(nt_msg->device_name,
+ SERVICE_T_MAX);
+ } else if (nt_msg->module == SEND_TO_VBS_DRIVER ||
+ nt_msg->module == SEND_TO_BIFUR_DRIVER) {
+ lld_dev = hinic3_get_lld_dev_by_chip_name(nt_msg->device_name);
+ } else if (nt_msg->module >= SEND_TO_SRV_DRV_BASE &&
+ nt_msg->module < SEND_TO_DRIVER_MAX &&
+ nt_msg->msg_formate != GET_DRV_VERSION) {
+ lld_dev = hinic3_get_lld_dev_by_dev_name(nt_msg->device_name,
+ nt_msg->module - SEND_TO_SRV_DRV_BASE);
+ } else {
+ lld_dev = hinic3_get_lld_dev_by_chip_name(nt_msg->device_name);
+ if (!lld_dev)
+ lld_dev = hinic3_get_lld_dev_by_dev_name(nt_msg->device_name, SERVICE_T_MAX);
+ }
+
+ return lld_dev;
+}
+
+static long hinicadm_k_unlocked_ioctl(struct file *pfile, unsigned long arg)
+{
+ struct hinic3_lld_dev *lld_dev = NULL;
+ struct msg_module nt_msg;
+ void *buf_out = NULL;
+ void *buf_in = NULL;
+ u32 out_size_expect = 0;
+ u32 out_size = 0;
+ u32 in_size = 0;
+ int ret = 0;
+
+ memset(&nt_msg, 0, sizeof(nt_msg));
+ if (cmd_parameter_valid(&nt_msg, arg, &out_size_expect, &in_size))
+ return -EFAULT;
+
+ lld_dev = get_lld_dev_by_nt_msg(&nt_msg);
+ if (!lld_dev) {
+ if (nt_msg.msg_formate != DEV_NAME_TEST)
+ pr_err("Can not find device %s for module %u\n",
+ nt_msg.device_name, nt_msg.module);
+
+ return -ENODEV;
+ }
+
+ if (nt_msg.msg_formate == DEV_NAME_TEST) {
+ lld_dev_put(lld_dev);
+ return 0;
+ }
+
+ ret = alloc_tmp_buf(hinic3_get_sdk_hwdev_by_lld(lld_dev), &nt_msg,
+ in_size, &buf_in, out_size_expect, &buf_out);
+ if (ret) {
+ pr_err("Alloc tmp buff failed\n");
+ goto out_free_lock;
+ }
+
+ out_size = out_size_expect;
+
+ ret = nictool_exec_cmd(lld_dev, &nt_msg, buf_in, in_size, buf_out, &out_size);
+ if (ret) {
+ pr_err("nictool_exec_cmd failed, module: %u, ret: %d.\n", nt_msg.module, ret);
+ goto out_free_buf;
+ }
+
+ if (out_size > out_size_expect) {
+ ret = -EFAULT;
+ pr_err("Out size is greater than expected out size from user: %u, out size: %u\n",
+ out_size_expect, out_size);
+ goto out_free_buf;
+ }
+
+ ret = copy_buf_out_to_user(&nt_msg, out_size, buf_out);
+ if (ret)
+ pr_err("Copy information to user failed\n");
+
+out_free_buf:
+ free_tmp_buf(hinic3_get_sdk_hwdev_by_lld(lld_dev), &nt_msg, buf_in, buf_out);
+
+out_free_lock:
+ lld_dev_put(lld_dev);
+ return (long)ret;
+}
+
+/**
+ * dbgtool_knl_ffm_info_rd - Read ffm information
+ * @para: the dbgtool parameter
+ * @dbgtool_info: the dbgtool info
+ **/
+static long dbgtool_knl_ffm_info_rd(struct dbgtool_param *para,
+ struct dbgtool_k_glb_info *dbgtool_info)
+{
+ if (!para->param.ffm_rd || !dbgtool_info->ffm)
+ return -EINVAL;
+
+ /* Copy the ffm_info to user mode */
+ if (copy_to_user(para->param.ffm_rd, dbgtool_info->ffm,
+ (unsigned int)sizeof(struct ffm_record_info))) {
+ pr_err("Copy ffm_info to user fail\n");
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static long dbgtool_k_unlocked_ioctl(struct file *pfile,
+ unsigned int real_cmd,
+ unsigned long arg)
+{
+ long ret;
+ struct dbgtool_param param;
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+ struct card_node *card_info = NULL;
+ int i;
+
+ (void)memset(¶m, 0, sizeof(param));
+
+ if (copy_from_user(¶m, (void *)arg, sizeof(param))) {
+ pr_err("Copy param from user fail\n");
+ return -EFAULT;
+ }
+
+ lld_hold();
+ for (i = 0; i < MAX_CARD_NUM; i++) {
+ card_info = (struct card_node *)g_card_node_array[i];
+ if (!card_info)
+ continue;
+ if (memcmp(param.chip_name, card_info->chip_name,
+ strlen(card_info->chip_name) + 1) == 0)
+ break;
+ }
+
+ if (i == MAX_CARD_NUM || !card_info) {
+ lld_put();
+ pr_err("Can't find this card.\n");
+ return -EFAULT;
+ }
+
+ card_id = i;
+ dbgtool_info = (struct dbgtool_k_glb_info *)card_info->dbgtool_info;
+
+ down(&dbgtool_info->dbgtool_sem);
+
+ switch (real_cmd) {
+ case DBGTOOL_CMD_FFM_RD:
+ ret = dbgtool_knl_ffm_info_rd(¶m, dbgtool_info);
+ break;
+ case DBGTOOL_CMD_MSG_2_UP:
+		pr_err("Not supposed to use this cmd(0x%x).\n", real_cmd);
+ ret = 0;
+ break;
+
+ default:
+ pr_err("Dbgtool cmd(0x%x) not support now\n", real_cmd);
+ ret = -EFAULT;
+ break;
+ }
+
+ up(&dbgtool_info->dbgtool_sem);
+
+ lld_put();
+
+ return ret;
+}
+
+static int nictool_k_release(struct inode *pnode, struct file *pfile)
+{
+ return 0;
+}
+
+static int nictool_k_open(struct inode *pnode, struct file *pfile)
+{
+ return 0;
+}
+
+static ssize_t nictool_k_read(struct file *pfile, char __user *ubuf,
+ size_t size, loff_t *ppos)
+{
+ return 0;
+}
+
+static ssize_t nictool_k_write(struct file *pfile, const char __user *ubuf,
+ size_t size, loff_t *ppos)
+{
+ return 0;
+}
+
+static long nictool_k_unlocked_ioctl(struct file *pfile,
+ unsigned int cmd, unsigned long arg)
+{
+ unsigned int real_cmd;
+
+ real_cmd = _IOC_NR(cmd);
+
+ return (real_cmd == NICTOOL_CMD_TYPE) ?
+ hinicadm_k_unlocked_ioctl(pfile, arg) :
+ dbgtool_k_unlocked_ioctl(pfile, real_cmd, arg);
+}
+
+static int hinic3_mem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ pgprot_t vm_page_prot;
+ unsigned long vmsize = vma->vm_end - vma->vm_start;
+ phys_addr_t offset = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
+ phys_addr_t phy_addr;
+ int err = 0;
+
+ if (vmsize > (PAGE_SIZE * (1 << DBGTOOL_PAGE_ORDER))) {
+		pr_err("Map size = %lu is bigger than allocated size\n", vmsize);
+ return -EAGAIN;
+ }
+
+	/* old versions of the tool set vma->vm_pgoff to 0 */
+ phy_addr = offset ? offset : g_card_phy_addr[card_id];
+ /* check phy_addr valid */
+ if (phy_addr != g_card_phy_addr[card_id]) {
+ err = hinic3_bar_mmap_param_valid(phy_addr, vmsize);
+ if (err != 0) {
+ pr_err("mmap param invalid, err: %d\n", err);
+ return err;
+ }
+ }
+
+ /* Disable cache and write buffer in the mapping area */
+ vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ vma->vm_page_prot = vm_page_prot;
+ if (remap_pfn_range(vma, vma->vm_start, (phy_addr >> PAGE_SHIFT),
+ vmsize, vma->vm_page_prot)) {
+ pr_err("Remap pfn range failed.\n");
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static const struct file_operations fifo_operations = {
+ .owner = THIS_MODULE,
+ .release = nictool_k_release,
+ .open = nictool_k_open,
+ .read = nictool_k_read,
+ .write = nictool_k_write,
+ .unlocked_ioctl = nictool_k_unlocked_ioctl,
+ .mmap = hinic3_mem_mmap,
+};
+
+static void free_dbgtool_info(void *hwdev, struct card_node *chip_info)
+{
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ chip_info->func_handle_array[hinic3_global_func_id(hwdev)] = NULL;
+
+ if (--chip_info->func_num)
+ return;
+
+ if (chip_info->chip_id >= 0 && chip_info->chip_id < MAX_CARD_NUM)
+ g_card_node_array[chip_info->chip_id] = NULL;
+
+ dbgtool_info = chip_info->dbgtool_info;
+ /* FFM deinit */
+ if (dbgtool_info && dbgtool_info->ffm) {
+ kfree(dbgtool_info->ffm);
+ dbgtool_info->ffm = NULL;
+ }
+
+ if (dbgtool_info)
+ kfree(dbgtool_info);
+
+ chip_info->dbgtool_info = NULL;
+
+ if (chip_info->chip_id >= 0 && chip_info->chip_id < MAX_CARD_NUM)
+ dbgtool_knl_free_mem(chip_info->chip_id);
+}
+
+static int alloc_dbgtool_info(void *hwdev, struct card_node *chip_info)
+{
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+ int err, id = 0;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ chip_info->func_handle_array[hinic3_global_func_id(hwdev)] = hwdev;
+
+ if (chip_info->func_num++)
+ return 0;
+
+ dbgtool_info = (struct dbgtool_k_glb_info *)
+ kzalloc(sizeof(struct dbgtool_k_glb_info), GFP_KERNEL);
+ if (!dbgtool_info) {
+ pr_err("Failed to allocate dbgtool_info\n");
+ goto dbgtool_info_fail;
+ }
+
+ chip_info->dbgtool_info = dbgtool_info;
+
+ /* FFM init */
+ dbgtool_info->ffm = (struct ffm_record_info *)
+ kzalloc(sizeof(struct ffm_record_info), GFP_KERNEL);
+ if (!dbgtool_info->ffm) {
+		pr_err("Failed to allocate ffm record info\n");
+ goto dbgtool_info_ffm_fail;
+ }
+
+ sema_init(&dbgtool_info->dbgtool_sem, 1);
+
+ err = sscanf(chip_info->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ goto sscanf_chdev_fail;
+ }
+
+ g_card_node_array[id] = chip_info;
+
+ return 0;
+
+sscanf_chdev_fail:
+ kfree(dbgtool_info->ffm);
+
+dbgtool_info_ffm_fail:
+ kfree(dbgtool_info);
+ chip_info->dbgtool_info = NULL;
+
+dbgtool_info_fail:
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ chip_info->func_handle_array[hinic3_global_func_id(hwdev)] = NULL;
+ chip_info->func_num--;
+ return -ENOMEM;
+}
+
+/**
+ * nictool_k_init - initialize the hw interface
+ **/
+/* temp for dbgtool_info */
+int nictool_k_init(void *hwdev, void *chip_node)
+{
+ struct card_node *chip_info = (struct card_node *)chip_node;
+ struct device *pdevice = NULL;
+ int err;
+
+ err = alloc_dbgtool_info(hwdev, chip_info);
+ if (err)
+ return err;
+
+ if (g_nictool_ref_cnt++) {
+ /* already initialized */
+ return 0;
+ }
+
+ err = alloc_chrdev_region(&g_dev_id, 0, 1, HIADM3_DEV_NAME);
+ if (err) {
+ pr_err("Register nictool_dev failed(0x%x)\n", err);
+ goto alloc_chdev_fail;
+ }
+
+	/* Create the device class */
+ g_nictool_class = class_create(THIS_MODULE, HIADM3_DEV_CLASS);
+ if (IS_ERR(g_nictool_class)) {
+ pr_err("Create nictool_class fail\n");
+ err = -EFAULT;
+ goto class_create_err;
+ }
+
+ /* Initializing the character device */
+ cdev_init(&g_nictool_cdev, &fifo_operations);
+
+ /* Add devices to the operating system */
+ err = cdev_add(&g_nictool_cdev, g_dev_id, 1);
+ if (err < 0) {
+ pr_err("Add nictool_dev to operating system fail(0x%x)\n", err);
+ goto cdev_add_err;
+ }
+
+ /* Export device information to user space
+ * (/sys/class/class name/device name)
+ */
+ pdevice = device_create(g_nictool_class, NULL,
+ g_dev_id, NULL, HIADM3_DEV_NAME);
+ if (IS_ERR(pdevice)) {
+ pr_err("Export nictool device information to user space fail\n");
+ err = -EFAULT;
+ goto device_create_err;
+ }
+
+ pr_info("Register nictool_dev to system succeed\n");
+
+ return 0;
+
+device_create_err:
+ cdev_del(&g_nictool_cdev);
+
+cdev_add_err:
+ class_destroy(g_nictool_class);
+
+class_create_err:
+ g_nictool_class = NULL;
+ unregister_chrdev_region(g_dev_id, 1);
+
+alloc_chdev_fail:
+ g_nictool_ref_cnt--;
+ free_dbgtool_info(hwdev, chip_info);
+
+ return err;
+}
+
+void nictool_k_uninit(void *hwdev, void *chip_node)
+{
+ struct card_node *chip_info = (struct card_node *)chip_node;
+
+ free_dbgtool_info(hwdev, chip_info);
+
+ if (!g_nictool_ref_cnt)
+ return;
+
+ if (--g_nictool_ref_cnt)
+ return;
+
+ if (!g_nictool_class || IS_ERR(g_nictool_class)) {
+ pr_err("Nictool class is NULL.\n");
+ return;
+ }
+
+ device_destroy(g_nictool_class, g_dev_id);
+ cdev_del(&g_nictool_cdev);
+ class_destroy(g_nictool_class);
+ g_nictool_class = NULL;
+
+ unregister_chrdev_region(g_dev_id, 1);
+
+ pr_info("Unregister nictool_dev succeed\n");
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h
new file mode 100644
index 0000000..c943dfc
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NICTOOL_H
+#define HINIC3_NICTOOL_H
+
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+
+#ifndef MAX_SIZE
+#define MAX_SIZE (16)
+#endif
+
+#define DBGTOOL_PAGE_ORDER (10)
+
+#define MAX_CARD_NUM (64)
+
+int nictool_k_init(void *hwdev, void *chip_node);
+void nictool_k_uninit(void *hwdev, void *chip_node);
+
+void hinic3_get_os_hot_replace_info(void *oshr_info);
+
+void hinic3_get_all_chip_id(void *id_info);
+
+void hinic3_get_card_func_info_by_card_name(const char *chip_name,
+					    struct hinic3_card_func_info *card_func);
+
+void hinic3_get_card_info(const void *hwdev, void *bufin);
+
+bool hinic3_is_in_host(void);
+
+int hinic3_get_pf_id(struct card_node *chip_node, u32 port_id, u32 *pf_id, u32 *isvalid);
+
+void hinic3_get_mbox_cnt(const void *hwdev, void *bufin);
+
+extern struct hinic3_uld_info g_uld_info[SERVICE_T_MAX];
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h
new file mode 100644
index 0000000..6f145a0
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_PCI_ID_TBL_H
+#define HINIC3_PCI_ID_TBL_H
+
+#define HINIC3_VIRTIO_VNEDER_ID 0x1AF4
+#ifdef CONFIG_SP_VID_DID
+#define PCI_VENDOR_ID_SPNIC 0x1F3F
+#define HINIC3_DEV_ID_STANDARD 0x9020
+#define HINIC3_DEV_ID_SDI_5_1_PF 0x9032
+#define HINIC3_DEV_ID_SDI_5_0_PF 0x9031
+#define HINIC3_DEV_ID_DPU_PF 0x9030
+#define HINIC3_DEV_ID_SPN120 0x9021
+#define HINIC3_DEV_ID_VF 0x9001
+#define HINIC3_DEV_ID_VF_HV 0x9002
+#define HINIC3_DEV_SDI_5_1_ID_VF 0x9003
+#define HINIC3_DEV_SDI_5_1_ID_VF_HV 0x9004
+#define HINIC3_DEV_ID_SPU 0xAC00
+#define HINIC3_DEV_SDI_5_1_SSDID_VF 0x1000
+#define HINIC3_DEV_SDI_V100_SSDID_MASK (3 << 12)
+#elif defined(CONFIG_NF_VID_DID)
+#define PCI_VENDOR_ID_NF 0x2036
+#define NFNIC_DEV_ID_STANDARD 0x1618
+#define NFNIC_DEV_ID_SDI_5_1_PF 0x0226
+#define NFNIC_DEV_ID_SDI_5_0_PF 0x0225
+#define NFNIC_DEV_ID_DPU_PF 0x0224
+#define NFNIC_DEV_ID_VF 0x1619
+#define NFNIC_DEV_ID_VF_HV 0x379F
+#define NFNIC_DEV_SDI_5_1_ID_VF 0x375F
+#define NFNIC_DEV_SDI_5_0_ID_VF 0x375F
+#define NFNIC_DEV_SDI_5_1_ID_VF_HV 0x379F
+#define NFNIC_DEV_ID_SPU 0xAC00
+#define NFNIC_DEV_SDI_5_1_SSDID_VF 0x1000
+#define NFNIC_DEV_SDI_V100_SSDID_MASK (3 << 12)
+#else
+#define PCI_VENDOR_ID_HUAWEI 0x19e5
+#define HINIC3_DEV_ID_STANDARD 0x0222
+#define HINIC3_DEV_ID_SDI_5_1_PF 0x0226
+#define HINIC3_DEV_ID_SDI_5_0_PF 0x0225
+#define HINIC3_DEV_ID_DPU_PF 0x0224
+#define HINIC3_DEV_ID_VF 0x375F
+#define HINIC3_DEV_ID_VF_HV 0x379F
+#define HINIC3_DEV_SDI_5_1_ID_VF 0x375F
+#define HINIC3_DEV_SDI_5_0_ID_VF 0x375F
+#define HINIC3_DEV_SDI_5_1_ID_VF_HV 0x379F
+#define HINIC3_DEV_ID_SPU 0xAC00
+#define HINIC3_DEV_SDI_5_1_SSDID_VF 0x1000
+#define HINIC3_DEV_SDI_V100_SSDID_MASK (3 << 12)
+#endif
+
+#define NFNIC_DEV_SSID_2X25G_NF 0x0860
+#define NFNIC_DEV_SSID_4X25G_NF 0x0861
+#define NFNIC_DEV_SSID_2x100G_NF 0x0862
+#define NFNIC_DEV_SSID_2x200G_NF 0x0863
+
+#define HINIC3_DEV_SSID_2X10G 0x0035
+#define HINIC3_DEV_SSID_2X25G 0x0051
+#define HINIC3_DEV_SSID_4X25G 0x0052
+#define HINIC3_DEV_SSID_4X25G_BD 0x0252
+#define HINIC3_DEV_SSID_4X25G_SMARTNIC 0x0152
+#define HINIC3_DEV_SSID_6X25G_VL 0x0356
+#define HINIC3_DEV_SSID_2X100G 0x00A1
+#define HINIC3_DEV_SSID_2X100G_SMARTNIC 0x01A1
+#define HINIC3_DEV_SSID_2X200G 0x04B1
+#define HINIC3_DEV_SSID_2X100G_VF 0x1000
+#define HINIC3_DEV_SSID_HPC_4_HOST_NIC 0x005A
+#define HINIC3_DEV_SSID_2X200G_VL 0x00B1
+#define HINIC3_DEV_SSID_1X100G 0x02A4
+
+#define BIFUR_RESOURCE_PF_SSID 0x05a1
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c
new file mode 100644
index 0000000..fbb6198
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/semaphore.h>
+#include <linux/workqueue.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_profile.h"
+#include "hinic3_prof_adap.h"
+
+static bool is_match_prof_default_adapter(void *device)
+{
+ /* always match default profile adapter in standard scene */
+ return true;
+}
+
+struct hinic3_prof_adapter prof_adap_objs[] = {
+ /* Add prof adapter before default profile */
+ {
+ .type = PROF_ADAP_TYPE_DEFAULT,
+ .match = is_match_prof_default_adapter,
+ .init = NULL,
+ .deinit = NULL,
+ },
+};
+
+void hisdk3_init_profile_adapter(struct hinic3_hwdev *hwdev)
+{
+ u16 num_adap = ARRAY_SIZE(prof_adap_objs);
+
+ hwdev->prof_adap = hinic3_prof_init(hwdev, prof_adap_objs, num_adap,
+ (void *)&hwdev->prof_attr);
+ if (hwdev->prof_adap)
+		sdk_info(hwdev->dev_hdl, "Found profile adapter type: %d\n", hwdev->prof_adap->type);
+}
+
+void hisdk3_deinit_profile_adapter(struct hinic3_hwdev *hwdev)
+{
+ hinic3_prof_deinit(hwdev->prof_adap, hwdev->prof_attr);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h
new file mode 100644
index 0000000..e244d11
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_PROF_ADAP_H
+#define HINIC3_PROF_ADAP_H
+
+#include <linux/workqueue.h>
+
+#include "hinic3_profile.h"
+#include "hinic3_hwdev.h"
+
+enum cpu_affinity_work_type {
+ WORK_TYPE_AEQ,
+ WORK_TYPE_MBOX,
+ WORK_TYPE_MGMT_MSG,
+ WORK_TYPE_COMM,
+};
+
+enum hisdk3_sw_features {
+ HISDK3_SW_F_CHANNEL_LOCK = BIT(0),
+};
+
+struct hisdk3_prof_ops {
+ void (*fault_recover)(void *data, u16 src, u16 level);
+ int (*get_work_cpu_affinity)(void *data, u32 work_type);
+ void (*probe_success)(void *data);
+ void (*remove_pre_handle)(struct hinic3_hwdev *hwdev);
+};
+
+struct hisdk3_prof_attr {
+ void *priv_data;
+ u64 hw_feature_cap;
+ u64 sw_feature_cap;
+ u64 dft_hw_feature;
+ u64 dft_sw_feature;
+
+ struct hisdk3_prof_ops *ops;
+};
+
+#define GET_PROF_ATTR_OPS(hwdev) \
+ ((hwdev)->prof_attr ? (hwdev)->prof_attr->ops : NULL)
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline int hisdk3_get_work_cpu_affinity(struct hinic3_hwdev *hwdev,
+ enum cpu_affinity_work_type type)
+{
+ struct hisdk3_prof_ops *ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->get_work_cpu_affinity)
+ return ops->get_work_cpu_affinity(hwdev->prof_attr->priv_data, type);
+
+ return WORK_CPU_UNBOUND;
+}
+
+static inline void hisdk3_fault_post_process(struct hinic3_hwdev *hwdev,
+ u16 src, u16 level)
+{
+ struct hisdk3_prof_ops *ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->fault_recover)
+ ops->fault_recover(hwdev->prof_attr->priv_data, src, level);
+}
+
+static inline void hisdk3_probe_success(struct hinic3_hwdev *hwdev)
+{
+ struct hisdk3_prof_ops *ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->probe_success)
+ ops->probe_success(hwdev->prof_attr->priv_data);
+}
+
+static inline bool hisdk3_sw_feature_en(const struct hinic3_hwdev *hwdev,
+ u64 feature_bit)
+{
+ if (!hwdev->prof_attr)
+ return false;
+
+ return (hwdev->prof_attr->sw_feature_cap & feature_bit) &&
+ (hwdev->prof_attr->dft_sw_feature & feature_bit);
+}
+
+#ifdef CONFIG_MODULE_PROF
+static inline void hisdk3_remove_pre_process(struct hinic3_hwdev *hwdev)
+{
+ struct hisdk3_prof_ops *ops = NULL;
+
+ if (!hwdev)
+ return;
+
+ ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->remove_pre_handle)
+ ops->remove_pre_handle(hwdev);
+}
+#else
+static inline void hisdk3_remove_pre_process(struct hinic3_hwdev *hwdev) {}
+#endif
+#define SW_FEATURE_EN(hwdev, f_bit) \
+ hisdk3_sw_feature_en(hwdev, HISDK3_SW_F_##f_bit)
+#define HISDK3_F_CHANNEL_LOCK_EN(hwdev) SW_FEATURE_EN(hwdev, CHANNEL_LOCK)
+
+void hisdk3_init_profile_adapter(struct hinic3_hwdev *hwdev);
+void hisdk3_deinit_profile_adapter(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h
new file mode 100644
index 0000000..e204a98
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CHIPIF_SM_LT_H
+#define CHIPIF_SM_LT_H
+
+#include <linux/types.h>
+
+#define SM_LT_LOAD (0x12)
+#define SM_LT_STORE (0x14)
+
+#define SM_LT_NUM_OFFSET 13
+#define SM_LT_ABUF_FLG_OFFSET 12
+#define SM_LT_BC_OFFSET 11
+
+#define SM_LT_ENTRY_16B 16
+#define SM_LT_ENTRY_32B 32
+#define SM_LT_ENTRY_48B 48
+#define SM_LT_ENTRY_64B 64
+
+#define TBL_LT_OFFSET_DEFAULT 0
+
+#define SM_CACHE_LINE_SHFT 4 /* log2(16) */
+#define SM_CACHE_LINE_SIZE 16 /* the size of cache line */
+
+#define MAX_SM_LT_READ_LINE_NUM 4
+#define MAX_SM_LT_WRITE_LINE_NUM 3
+
+#define SM_LT_FULL_BYTEENB 0xFFFF
+
+#define TBL_GET_ENB3_MASK(bitmask) ((u16)(((bitmask) >> 32) & 0xFFFF))
+#define TBL_GET_ENB2_MASK(bitmask) ((u16)(((bitmask) >> 16) & 0xFFFF))
+#define TBL_GET_ENB1_MASK(bitmask) ((u16)((bitmask) & 0xFFFF))
+
+enum {
+ SM_LT_NUM_0 = 0, /* lt num = 0, load/store 16B */
+ SM_LT_NUM_1, /* lt num = 1, load/store 32B */
+ SM_LT_NUM_2, /* lt num = 2, load/store 48B */
+ SM_LT_NUM_3 /* lt num = 3, load 64B */
+};
+
+/* lt load request */
+union sml_lt_req_head {
+ struct {
+ u32 offset:8;
+ u32 pad:3;
+ u32 bc:1;
+ u32 abuf_flg:1;
+ u32 num:2;
+ u32 ack:1;
+ u32 op_id:5;
+ u32 instance:6;
+ u32 src:5;
+ } bs;
+
+ u32 value;
+};
+
+struct sml_lt_load_req {
+ u32 extra;
+ union sml_lt_req_head head;
+ u32 index;
+ u32 pad0;
+ u32 pad1;
+};
+
+struct sml_lt_store_req {
+ u32 extra;
+ union sml_lt_req_head head;
+ u32 index;
+ u32 byte_enb[2];
+ u8 write_data[48];
+};
+
+enum {
+ SM_LT_OFFSET_1 = 1,
+ SM_LT_OFFSET_2,
+ SM_LT_OFFSET_3,
+ SM_LT_OFFSET_4,
+ SM_LT_OFFSET_5,
+ SM_LT_OFFSET_6,
+ SM_LT_OFFSET_7,
+ SM_LT_OFFSET_8,
+ SM_LT_OFFSET_9,
+ SM_LT_OFFSET_10,
+ SM_LT_OFFSET_11,
+ SM_LT_OFFSET_12,
+ SM_LT_OFFSET_13,
+ SM_LT_OFFSET_14,
+ SM_LT_OFFSET_15
+};
+
+enum HINIC_CSR_API_DATA_OPERATION_ID {
+ HINIC_CSR_OPERATION_WRITE_CSR = 0x1E,
+ HINIC_CSR_OPERATION_READ_CSR = 0x1F
+};
+
+enum HINIC_CSR_API_DATA_NEED_RESPONSE_DATA {
+ HINIC_CSR_NO_RESP_DATA = 0,
+ HINIC_CSR_NEED_RESP_DATA = 1
+};
+
+enum HINIC_CSR_API_DATA_DATA_SIZE {
+ HINIC_CSR_DATA_SZ_32 = 0,
+ HINIC_CSR_DATA_SZ_64 = 1
+};
+
+struct hinic_csr_request_api_data {
+ u32 dw0;
+
+ union {
+ struct {
+ u32 reserved1:13;
+ /* this field indicates the write/read data size:
+ * 2'b00: 32 bits
+ * 2'b01: 64 bits
+ * 2'b10~2'b11:reserved
+ */
+ u32 data_size:2;
+			/* this field indicates whether the requestor expects to
+			 * receive response data.
+			 * 1'b0: does not expect to receive response data.
+			 * 1'b1: expects to receive response data.
+ */
+ u32 need_response:1;
+ /* this field indicates the operation that the requestor
+			 * expects.
+ * 5'b1_1110: write value to csr space.
+ * 5'b1_1111: read register from csr space.
+ */
+ u32 operation_id:5;
+ u32 reserved2:6;
+ /* this field specifies the Src node ID for this API
+ * request message.
+ */
+ u32 src_node_id:5;
+ } bits;
+
+ u32 val32;
+ } dw1;
+
+ union {
+ struct {
+ /* it specifies the CSR address. */
+ u32 csr_addr:26;
+ u32 reserved3:6;
+ } bits;
+
+ u32 val32;
+ } dw2;
+
+	/* if data_size = 2'b01, this is the high 32 bits of the write data;
+	 * otherwise it is 32'hFFFF_FFFF.
+	 */
+ u32 csr_write_data_h;
+ /* the low 32 bits of write data. */
+ u32 csr_write_data_l;
+};
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c
new file mode 100644
index 0000000..b802104
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+#include "hinic3_sm_lt.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+
+#define ACK 1
+#define NOACK 0
+
+#define LT_LOAD16_API_SIZE (16 + 4)
+#define LT_STORE16_API_SIZE (32 + 4)
+
+#ifndef HTONL
+#define HTONL(x) \
+ ((((x) & 0x000000ff) << 24) \
+ | (((x) & 0x0000ff00) << 8) \
+ | (((x) & 0x00ff0000) >> 8) \
+ | (((x) & 0xff000000) >> 24))
+#endif
+
+static inline void sm_lt_build_head(union sml_lt_req_head *head,
+ u8 instance_id,
+ u8 op_id, u8 ack,
+ u8 offset, u8 num)
+{
+ head->value = 0;
+ head->bs.instance = instance_id;
+ head->bs.op_id = op_id;
+ head->bs.ack = ack;
+ head->bs.num = num;
+ head->bs.abuf_flg = 0;
+ head->bs.bc = 1;
+ head->bs.offset = offset;
+ head->value = HTONL((head->value));
+}
+
+static inline void sm_lt_load_build_req(struct sml_lt_load_req *req,
+ u8 instance_id,
+ u8 op_id, u8 ack,
+ u32 lt_index,
+ u8 offset, u8 num)
+{
+ sm_lt_build_head(&req->head, instance_id, op_id, ack, offset, num);
+ req->extra = 0;
+ req->index = lt_index;
+ req->index = HTONL(req->index);
+ req->pad0 = 0;
+ req->pad1 = 0;
+}
+
+static void sml_lt_store_data(u32 *dst, const u32 *src, u8 num)
+{
+ switch (num) {
+ case SM_LT_NUM_2:
+ *(dst + SM_LT_OFFSET_11) = *(src + SM_LT_OFFSET_11);
+ *(dst + SM_LT_OFFSET_10) = *(src + SM_LT_OFFSET_10);
+ *(dst + SM_LT_OFFSET_9) = *(src + SM_LT_OFFSET_9);
+ *(dst + SM_LT_OFFSET_8) = *(src + SM_LT_OFFSET_8);
+		fallthrough;
+ case SM_LT_NUM_1:
+ *(dst + SM_LT_OFFSET_7) = *(src + SM_LT_OFFSET_7);
+ *(dst + SM_LT_OFFSET_6) = *(src + SM_LT_OFFSET_6);
+ *(dst + SM_LT_OFFSET_5) = *(src + SM_LT_OFFSET_5);
+ *(dst + SM_LT_OFFSET_4) = *(src + SM_LT_OFFSET_4);
+		fallthrough;
+ case SM_LT_NUM_0:
+ *(dst + SM_LT_OFFSET_3) = *(src + SM_LT_OFFSET_3);
+ *(dst + SM_LT_OFFSET_2) = *(src + SM_LT_OFFSET_2);
+ *(dst + SM_LT_OFFSET_1) = *(src + SM_LT_OFFSET_1);
+ *dst = *src;
+ break;
+ default:
+ break;
+ }
+}
+
+static inline void sm_lt_store_build_req(struct sml_lt_store_req *req,
+ u8 instance_id,
+ u8 op_id, u8 ack,
+ u32 lt_index,
+ u8 offset,
+ u8 num,
+ u16 byte_enb3,
+ u16 byte_enb2,
+ u16 byte_enb1,
+ u8 *data)
+{
+ sm_lt_build_head(&req->head, instance_id, op_id, ack, offset, num);
+ req->index = lt_index;
+ req->index = HTONL(req->index);
+ req->extra = 0;
+ req->byte_enb[0] = (u32)(byte_enb3);
+ req->byte_enb[0] = HTONL(req->byte_enb[0]);
+ req->byte_enb[1] = HTONL((((u32)byte_enb2) << 16) | byte_enb1);
+ sml_lt_store_data((u32 *)req->write_data, (u32 *)(void *)data, num);
+}
+
+int hinic3_dbg_lt_rd_16byte(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data)
+{
+ struct sml_lt_load_req req;
+ int ret;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ sm_lt_load_build_req(&req, instance, SM_LT_LOAD, ACK, lt_index, 0, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, dest, (u8 *)(&req),
+ LT_LOAD16_API_SIZE, (void *)data, 0x10);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Read linear table 16byte fail, err: %d\n", ret);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_dbg_lt_wr_16byte_mask(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data, u16 mask)
+{
+ struct sml_lt_store_req req;
+ int ret;
+
+ if (!hwdev || !data)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ sm_lt_store_build_req(&req, instance, SM_LT_STORE, NOACK, lt_index,
+ 0, 0, 0, 0, mask, data);
+
+ ret = hinic3_api_cmd_write_nack(hwdev, dest, &req, LT_STORE16_API_SIZE);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Write linear table 16byte fail, err: %d\n", ret);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c
new file mode 100644
index 0000000..c8258ff
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c
@@ -0,0 +1,279 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_lld.h"
+#include "hinic3_dev_mgmt.h"
+#include "hinic3_sriov.h"
+
+static int hinic3_init_vf_hw(void *hwdev, u16 start_vf_id, u16 end_vf_id)
+{
+ u16 i, func_idx;
+ int err;
+
+ /* mbox msg channel resources will be freed during remove process */
+ err = hinic3_init_func_mbox_msg_channel(hwdev,
+ hinic3_func_max_vf(hwdev));
+ if (err != 0)
+ return err;
+
+ /* vf use 256K as default wq page size, and can't change it */
+ for (i = start_vf_id; i <= end_vf_id; i++) {
+ func_idx = hinic3_glb_pf_vf_offset(hwdev) + i;
+ err = hinic3_set_wq_page_size(hwdev, func_idx,
+ HINIC3_DEFAULT_WQ_PAGE_SIZE,
+ HINIC3_CHANNEL_COMM);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int hinic3_deinit_vf_hw(void *hwdev, u16 start_vf_id, u16 end_vf_id)
+{
+ u16 func_idx, idx;
+
+ for (idx = start_vf_id; idx <= end_vf_id; idx++) {
+ func_idx = hinic3_glb_pf_vf_offset(hwdev) + idx;
+ hinic3_set_wq_page_size(hwdev, func_idx,
+ HINIC3_HW_WQ_PAGE_SIZE,
+ HINIC3_CHANNEL_COMM);
+ }
+
+ return 0;
+}
+
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+ssize_t hinic3_sriov_totalvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ return sprintf(buf, "%d\n", pci_sriov_get_totalvfs(pdev));
+}
+
+ssize_t hinic3_sriov_numvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ return sprintf(buf, "%d\n", pci_num_vf(pdev));
+}
+
+ssize_t hinic3_sriov_numvfs_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ int ret;
+ u16 num_vfs;
+ int cur_vfs, total_vfs;
+
+ ret = kstrtou16(buf, 0, &num_vfs);
+ if (ret < 0)
+ return ret;
+
+ cur_vfs = pci_num_vf(pdev);
+ total_vfs = pci_sriov_get_totalvfs(pdev);
+ if (num_vfs > total_vfs)
+ return -ERANGE;
+
+ if (num_vfs == cur_vfs)
+ return count; /* no change */
+
+ if (num_vfs == 0) {
+ /* disable VFs */
+ ret = hinic3_pci_sriov_configure(pdev, 0);
+ if (ret < 0)
+ return ret;
+ return count;
+ }
+
+ /* enable VFs */
+ if (cur_vfs) {
+ nic_warn(&pdev->dev, "%d VFs already enabled. Disable before enabling %d VFs\n",
+ cur_vfs, num_vfs);
+ return -EBUSY;
+ }
+
+ ret = hinic3_pci_sriov_configure(pdev, num_vfs);
+ if (ret < 0)
+ return ret;
+
+ if (ret != num_vfs)
+ nic_warn(&pdev->dev, "%d VFs requested; only %d enabled\n",
+ num_vfs, ret);
+
+ return count;
+}
+
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+
+int hinic3_pci_sriov_disable(struct pci_dev *dev)
+{
+#ifdef CONFIG_PCI_IOV
+ struct hinic3_sriov_info *sriov_info = NULL;
+ struct hinic3_event_info event = {0};
+ void *hwdev = NULL;
+ u16 tmp_vfs;
+
+ sriov_info = hinic3_get_sriov_info_by_pcidev(dev);
+ hwdev = hinic3_get_hwdev_by_pcidev(dev);
+ if (!hwdev) {
+ sdk_err(&dev->dev, "SR-IOV disable is not permitted, please wait...\n");
+ return -EPERM;
+ }
+
+ /* if SR-IOV is already disabled then there is nothing to do */
+ if (!sriov_info->sriov_enabled)
+ return 0;
+
+ if (test_and_set_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state)) {
+		sdk_err(&dev->dev, "SR-IOV disable in progress, please wait\n");
+ return -EPERM;
+ }
+
+ /* If our VFs are assigned we cannot shut down SR-IOV
+ * without causing issues, so just leave the hardware
+ * available but disabled
+ */
+ if (pci_vfs_assigned(dev)) {
+ clear_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state);
+ sdk_warn(&dev->dev, "Unloading driver while VFs are assigned - VFs will not be deallocated\n");
+ return -EPERM;
+ }
+
+ event.service = EVENT_SRV_COMM;
+ event.type = EVENT_COMM_SRIOV_STATE_CHANGE;
+ ((struct hinic3_sriov_state_info *)(void *)event.event_data)->enable = 0;
+ hinic3_event_callback(hwdev, &event);
+
+ sriov_info->sriov_enabled = false;
+
+ /* disable iov and allow time for transactions to clear */
+ pci_disable_sriov(dev);
+
+ tmp_vfs = (u16)sriov_info->num_vfs;
+ sriov_info->num_vfs = 0;
+ hinic3_deinit_vf_hw(hwdev, 1, tmp_vfs);
+
+ clear_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state);
+
+#endif
+
+ return 0;
+}
+
+#ifdef CONFIG_PCI_IOV
+int hinic3_pci_sriov_check(struct hinic3_sriov_info *sriov_info, struct pci_dev *dev, int num_vfs)
+{
+ int pre_existing_vfs;
+ int err;
+
+ if (test_and_set_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state)) {
+ sdk_err(&dev->dev,
+			"SR-IOV enable in progress, please wait, num_vfs %d\n",
+ num_vfs);
+ return -EPERM;
+ }
+
+ pre_existing_vfs = pci_num_vf(dev);
+
+ if (num_vfs > pci_sriov_get_totalvfs(dev)) {
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return -ERANGE;
+ }
+
+ if (pre_existing_vfs && pre_existing_vfs != num_vfs) {
+ err = hinic3_pci_sriov_disable(dev);
+ if (err) {
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return err;
+ }
+ } else if (pre_existing_vfs == num_vfs) {
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return num_vfs;
+ }
+
+ return 0;
+}
+#endif
+
+
+int hinic3_pci_sriov_enable(struct pci_dev *dev, int num_vfs)
+{
+#ifdef CONFIG_PCI_IOV
+ struct hinic3_sriov_info *sriov_info = NULL;
+ struct hinic3_event_info event = {0};
+ void *hwdev = NULL;
+ int err = 0;
+
+ sriov_info = hinic3_get_sriov_info_by_pcidev(dev);
+ hwdev = hinic3_get_hwdev_by_pcidev(dev);
+ if (!hwdev) {
+ sdk_err(&dev->dev, "SR-IOV enable is not permitted, please wait...\n");
+ return -EPERM;
+ }
+
+ err = hinic3_pci_sriov_check(sriov_info, dev, num_vfs);
+ if (err != 0) {
+ return err;
+ }
+
+ err = hinic3_init_vf_hw(hwdev, 1, (u16)num_vfs);
+ if (err) {
+ sdk_err(&dev->dev, "Failed to init vf in hardware before enable sriov, error %d\n",
+ err);
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return err;
+ }
+
+ err = pci_enable_sriov(dev, num_vfs);
+ if (err) {
+ sdk_err(&dev->dev, "Failed to enable SR-IOV, error %d\n", err);
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return err;
+ }
+
+ sriov_info->sriov_enabled = true;
+ sriov_info->num_vfs = num_vfs;
+
+ event.service = EVENT_SRV_COMM;
+ event.type = EVENT_COMM_SRIOV_STATE_CHANGE;
+ ((struct hinic3_sriov_state_info *)(void *)event.event_data)->enable = 1;
+ ((struct hinic3_sriov_state_info *)(void *)event.event_data)->num_vfs = (u16)num_vfs;
+ hinic3_event_callback(hwdev, &event);
+
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+
+ return num_vfs;
+#else
+
+ return 0;
+#endif
+}
+
+int hinic3_pci_sriov_configure(struct pci_dev *dev, int num_vfs)
+{
+ struct hinic3_sriov_info *sriov_info = NULL;
+
+ sriov_info = hinic3_get_sriov_info_by_pcidev(dev);
+ if (!sriov_info)
+ return -EFAULT;
+
+ if (!test_bit(HINIC3_FUNC_PERSENT, &sriov_info->state))
+ return -EFAULT;
+
+	if (num_vfs == 0)
+		return hinic3_pci_sriov_disable(dev);
+
+	return hinic3_pci_sriov_enable(dev, num_vfs);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h
new file mode 100644
index 0000000..4a640ad
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_SRIOV_H
+#define HINIC3_SRIOV_H
+#include <linux/types.h>
+#include <linux/pci.h>
+
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+ssize_t hinic3_sriov_totalvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ssize_t hinic3_sriov_numvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ssize_t hinic3_sriov_numvfs_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count);
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+
+enum hinic3_sriov_state {
+ HINIC3_SRIOV_DISABLE,
+ HINIC3_SRIOV_ENABLE,
+ HINIC3_FUNC_PERSENT,
+};
+
+struct hinic3_sriov_info {
+ bool sriov_enabled;
+ unsigned int num_vfs;
+ unsigned long state;
+};
+
+struct hinic3_sriov_info *hinic3_get_sriov_info_by_pcidev(struct pci_dev *pdev);
+int hinic3_pci_sriov_disable(struct pci_dev *dev);
+int hinic3_pci_sriov_enable(struct pci_dev *dev, int num_vfs);
+int hinic3_pci_sriov_configure(struct pci_dev *dev, int num_vfs);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c
new file mode 100644
index 0000000..4f8acd6
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c
@@ -0,0 +1,159 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_wq.h"
+
+#define WQ_MIN_DEPTH 64
+#define WQ_MAX_DEPTH 65536
+#define WQ_MAX_NUM_PAGES (PAGE_SIZE / sizeof(u64))
+
+static int wq_init_wq_block(struct hinic3_wq *wq)
+{
+ int i;
+
+ if (WQ_IS_0_LEVEL_CLA(wq)) {
+ wq->wq_block_paddr = wq->wq_pages[0].align_paddr;
+ wq->wq_block_vaddr = wq->wq_pages[0].align_vaddr;
+
+ return 0;
+ }
+
+ if (wq->num_wq_pages > WQ_MAX_NUM_PAGES) {
+		sdk_err(wq->dev_hdl, "num_wq_pages exceeds limit: %lu\n",
+ WQ_MAX_NUM_PAGES);
+ return -EFAULT;
+ }
+
+ wq->wq_block_vaddr = dma_zalloc_coherent(wq->dev_hdl, PAGE_SIZE,
+ &wq->wq_block_paddr,
+ GFP_KERNEL);
+ if (!wq->wq_block_vaddr) {
+ sdk_err(wq->dev_hdl, "Failed to alloc wq block\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < wq->num_wq_pages; i++)
+ wq->wq_block_vaddr[i] =
+ cpu_to_be64(wq->wq_pages[i].align_paddr);
+
+ return 0;
+}
+
+static int wq_alloc_pages(struct hinic3_wq *wq)
+{
+ int i, page_idx, err;
+
+ wq->wq_pages = kcalloc(wq->num_wq_pages, sizeof(*wq->wq_pages),
+ GFP_KERNEL);
+ if (!wq->wq_pages) {
+ sdk_err(wq->dev_hdl, "Failed to alloc wq pages handle\n");
+ return -ENOMEM;
+ }
+
+ for (page_idx = 0; page_idx < wq->num_wq_pages; page_idx++) {
+ err = hinic3_dma_zalloc_coherent_align(wq->dev_hdl,
+ wq->wq_page_size,
+ wq->wq_page_size,
+ GFP_KERNEL,
+ &wq->wq_pages[page_idx]);
+ if (err) {
+ sdk_err(wq->dev_hdl, "Failed to alloc wq page\n");
+ goto free_wq_pages;
+ }
+ }
+
+ err = wq_init_wq_block(wq);
+ if (err)
+ goto free_wq_pages;
+
+ return 0;
+
+free_wq_pages:
+ for (i = 0; i < page_idx; i++)
+ hinic3_dma_free_coherent_align(wq->dev_hdl, &wq->wq_pages[i]);
+
+ kfree(wq->wq_pages);
+ wq->wq_pages = NULL;
+
+ return -ENOMEM;
+}
+
+static void wq_free_pages(struct hinic3_wq *wq)
+{
+ int i;
+
+ if (!WQ_IS_0_LEVEL_CLA(wq))
+ dma_free_coherent(wq->dev_hdl, PAGE_SIZE, wq->wq_block_vaddr,
+ wq->wq_block_paddr);
+
+ for (i = 0; i < wq->num_wq_pages; i++)
+ hinic3_dma_free_coherent_align(wq->dev_hdl, &wq->wq_pages[i]);
+
+ kfree(wq->wq_pages);
+ wq->wq_pages = NULL;
+}
+
+int hinic3_wq_create(void *hwdev, struct hinic3_wq *wq, u32 q_depth,
+ u16 wqebb_size)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u32 wq_page_size;
+
+ if (!wq || !dev) {
+ pr_err("Invalid wq or dev_hdl\n");
+ return -EINVAL;
+ }
+
+ if (q_depth < WQ_MIN_DEPTH || q_depth > WQ_MAX_DEPTH ||
+ (q_depth & (q_depth - 1)) || !wqebb_size ||
+ (wqebb_size & (wqebb_size - 1))) {
+ sdk_err(dev->dev_hdl, "Wq q_depth(%u) or wqebb_size(%u) is invalid\n",
+ q_depth, wqebb_size);
+ return -EINVAL;
+ }
+
+ wq_page_size = ALIGN(dev->wq_page_size, PAGE_SIZE);
+
+ memset(wq, 0, sizeof(struct hinic3_wq));
+ wq->dev_hdl = dev->dev_hdl;
+ wq->q_depth = q_depth;
+ wq->idx_mask = (u16)(q_depth - 1);
+ wq->wqebb_size = wqebb_size;
+ wq->wqebb_size_shift = (u16)ilog2(wq->wqebb_size);
+ wq->wq_page_size = wq_page_size;
+
+ wq->wqebbs_per_page = wq_page_size / wqebb_size;
+	/* In case wq_page_size is larger than q_depth * wqebb_size */
+ if (wq->wqebbs_per_page > q_depth)
+ wq->wqebbs_per_page = q_depth;
+ wq->wqebbs_per_page_shift = (u16)ilog2(wq->wqebbs_per_page);
+ wq->wqebbs_per_page_mask = (u16)(wq->wqebbs_per_page - 1);
+ wq->num_wq_pages = (u16)(ALIGN(((u32)q_depth * wqebb_size),
+ wq_page_size) / wq_page_size);
+
+ return wq_alloc_pages(wq);
+}
+EXPORT_SYMBOL(hinic3_wq_create);
+
+void hinic3_wq_destroy(struct hinic3_wq *wq)
+{
+ if (!wq)
+ return;
+
+ wq_free_pages(wq);
+}
+EXPORT_SYMBOL(hinic3_wq_destroy);
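
[Editor's note, not part of the diff: hinic3_wq_create rejects a q_depth outside [64, 65536] or not a power of two, and a wqebb_size that is zero or not a power of two. A minimal, hypothetical create/destroy pairing might look like the sketch below; the depth and WQEBB size values are illustrative only.]

/* Usage sketch under the constraints checked by hinic3_wq_create(). */
static int example_create_wq(void *hwdev, struct hinic3_wq *wq)
{
	int err;

	/* 256 WQEBBs of 64 bytes each; both values are powers of two */
	err = hinic3_wq_create(hwdev, wq, 256, 64);
	if (err)
		return err;

	/* ... produce/consume work queue entries ... */

	hinic3_wq_destroy(wq);
	return 0;
}
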
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c b/drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c
new file mode 100644
index 0000000..f8aea696
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/vmalloc.h>
+#include "ossl_knl_linux.h"
+
+#define OSSL_MINUTE_BASE (60)
+
+struct file *file_creat(const char *file_name)
+{
+ return filp_open(file_name, O_CREAT | O_RDWR | O_APPEND, 0);
+}
+
+struct file *file_open(const char *file_name)
+{
+ return filp_open(file_name, O_RDONLY, 0);
+}
+
+void file_close(struct file *file_handle)
+{
+ (void)filp_close(file_handle, NULL);
+}
+
+u32 get_file_size(struct file *file_handle)
+{
+ struct inode *file_inode = NULL;
+
+ file_inode = file_handle->f_inode;
+
+ return (u32)(file_inode->i_size);
+}
+
+void set_file_position(struct file *file_handle, u32 position)
+{
+ file_handle->f_pos = position;
+}
+
+int file_read(struct file *file_handle, char *log_buffer, u32 rd_length,
+ u32 *file_pos)
+{
+ return (int)kernel_read(file_handle, log_buffer, rd_length,
+ &file_handle->f_pos);
+}
+
+u32 file_write(struct file *file_handle, const char *log_buffer, u32 wr_length)
+{
+ return (u32)kernel_write(file_handle, log_buffer, wr_length,
+ &file_handle->f_pos);
+}
+
+static int _linux_thread_func(void *thread)
+{
+ struct sdk_thread_info *info = (struct sdk_thread_info *)thread;
+
+ while (!kthread_should_stop())
+ info->thread_fn(info->data);
+
+ return 0;
+}
+
+int creat_thread(struct sdk_thread_info *thread_info)
+{
+ thread_info->thread_obj = kthread_run(_linux_thread_func, thread_info,
+ thread_info->name);
+ if (!thread_info->thread_obj)
+ return -EFAULT;
+
+ return 0;
+}
+
+void stop_thread(struct sdk_thread_info *thread_info)
+{
+ if (thread_info->thread_obj)
+ (void)kthread_stop(thread_info->thread_obj);
+}
+
+void utctime_to_localtime(u64 utctime, u64 *localtime)
+{
+ *localtime = utctime - (u64)(sys_tz.tz_minuteswest * OSSL_MINUTE_BASE); /*lint !e647 !e571*/
+}
+
+#ifndef HAVE_TIMER_SETUP
+void initialize_timer(const void *adapter_hdl, struct timer_list *timer)
+{
+ if (!adapter_hdl || !timer)
+ return;
+
+ init_timer(timer);
+}
+#endif
+
+void add_to_timer(struct timer_list *timer, u64 period)
+{
+ if (!timer)
+ return;
+
+ add_timer(timer);
+}
+
+void stop_timer(struct timer_list *timer) {}
+
+void delete_timer(struct timer_list *timer)
+{
+ if (!timer)
+ return;
+
+ del_timer_sync(timer);
+}
+
+u64 ossl_get_real_time(void)
+{
+ struct timeval tv = {0};
+ u64 tv_msec;
+
+ do_gettimeofday(&tv);
+
+ tv_msec = (u64)tv.tv_sec * MSEC_PER_SEC + (u64)tv.tv_usec / USEC_PER_MSEC;
+ return tv_msec;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/include/bond/bond_common_defs.h b/drivers/net/ethernet/huawei/hinic3/include/bond/bond_common_defs.h
new file mode 100644
index 0000000..bfb4499
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/bond/bond_common_defs.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) Huawei Technologies Co., Ltd. 2021. All rights reserved. */
+
+#ifndef BOND_COMMON_DEFS_H
+#define BOND_COMMON_DEFS_H
+
+#define BOND_NAME_MAX_LEN 16
+#define BOND_PORT_MAX_NUM 4
+#define BOND_ID_INVALID 0xFFFF
+#define OVS_PORT_NUM_MAX BOND_PORT_MAX_NUM
+#define DEFAULT_ROCE_BOND_FUNC 0xFFFFFFFF
+
+#define BOND_ID_IS_VALID(_id) \
+ (((_id) >= BOND_FIRST_ID) && ((_id) <= BOND_MAX_ID))
+#define BOND_ID_IS_INVALID(_id) (!(BOND_ID_IS_VALID(_id)))
+
+enum bond_group_id {
+ BOND_FIRST_ID = 1,
+ BOND_MAX_ID = 4,
+ BOND_MAX_NUM,
+};
+
+#pragma pack(push, 4)
+/**
+ * bond per port statistics
+ */
+struct tag_bond_port_stat {
+	/** provided by the MPU */
+ u64 rx_pkts;
+ u64 rx_bytes;
+ u64 rx_drops;
+ u64 rx_errors;
+
+ u64 tx_pkts;
+ u64 tx_bytes;
+ u64 tx_drops;
+ u64 tx_errors;
+};
+
+#pragma pack(pop)
+
+/**
+ * bond port attribute
+ */
+struct tag_bond_port_attr {
+ u8 duplex;
+ u8 status;
+ u8 rsvd0[2];
+ u32 speed;
+};
+
+/**
+ * Get bond information command struct definition
+ * @see OVS_MPU_CMD_BOND_GET_ATTR
+ */
+struct tag_bond_get {
+	u16 bond_id_vld; /* 1: use bond_id to get bond info, 0: use bond_name */
+ u16 bond_id; /* if bond_id_vld=1 input, else output */
+ u8 bond_name[BOND_NAME_MAX_LEN]; /* if bond_id_vld=0 input, else output */
+
+	u16 bond_mode; /* 1 for active-backup, 2 for balance-xor, 4 for 802.3ad */
+ u8 active_slaves; /* active port slaves(bitmaps) */
+ u8 slaves; /* bond port id bitmaps */
+
+ u8 lacp_collect_slaves; /* bond port id bitmaps */
+	u8 xmit_hash_policy; /* xmit hash: 0 for layer 2, 1 for layer 2+3, 2 for layer 3+4 */
+	u16 rsvd0; /* padding for 4-byte alignment */
+
+ struct tag_bond_port_stat stat[BOND_PORT_MAX_NUM];
+ struct tag_bond_port_attr attr[BOND_PORT_MAX_NUM];
+};
+
+#endif /** BOND_COMMON_DEFS_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd.h b/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd.h
new file mode 100644
index 0000000..a13b66d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CFG_MGMT_MPU_CMD_H
+#define CFG_MGMT_MPU_CMD_H
+
+enum cfg_cmd {
+ CFG_CMD_GET_DEV_CAP = 0, /**< Device capability of pf/vf, @see cfg_cmd_dev_cap */
+ CFG_CMD_GET_HOST_TIMER = 1, /**< Capability of host timer, @see cfg_cmd_host_timer */
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd_defs.h
new file mode 100644
index 0000000..f9737ea
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/cfg_mgmt/cfg_mgmt_mpu_cmd_defs.h
@@ -0,0 +1,221 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CFG_MGMT_MPU_CMD_DEFS_H
+#define CFG_MGMT_MPU_CMD_DEFS_H
+
+#include "mpu_cmd_base_defs.h"
+
+enum servic_bit_define {
+ SERVICE_BIT_NIC = 0,
+ SERVICE_BIT_ROCE = 1,
+ SERVICE_BIT_VBS = 2,
+ SERVICE_BIT_TOE = 3,
+ SERVICE_BIT_IPSEC = 4,
+ SERVICE_BIT_FC = 5,
+ SERVICE_BIT_VIRTIO = 6,
+ SERVICE_BIT_OVS = 7,
+ SERVICE_BIT_NVME = 8,
+ SERVICE_BIT_ROCEAA = 9,
+ SERVICE_BIT_CURRENET = 10,
+ SERVICE_BIT_PPA = 11,
+ SERVICE_BIT_MIGRATE = 12,
+ SERVICE_BIT_VROCE = 13,
+ SERVICE_BIT_BIFUR = 14,
+ SERVICE_BIT_MAX
+};
+
+#define CFG_SERVICE_MASK_NIC (0x1 << SERVICE_BIT_NIC)
+#define CFG_SERVICE_MASK_ROCE (0x1 << SERVICE_BIT_ROCE)
+#define CFG_SERVICE_MASK_VBS (0x1 << SERVICE_BIT_VBS)
+#define CFG_SERVICE_MASK_TOE (0x1 << SERVICE_BIT_TOE)
+#define CFG_SERVICE_MASK_IPSEC (0x1 << SERVICE_BIT_IPSEC)
+#define CFG_SERVICE_MASK_FC (0x1 << SERVICE_BIT_FC)
+#define CFG_SERVICE_MASK_VIRTIO (0x1 << SERVICE_BIT_VIRTIO)
+#define CFG_SERVICE_MASK_OVS (0x1 << SERVICE_BIT_OVS)
+#define CFG_SERVICE_MASK_NVME (0x1 << SERVICE_BIT_NVME)
+#define CFG_SERVICE_MASK_ROCEAA (0x1 << SERVICE_BIT_ROCEAA)
+#define CFG_SERVICE_MASK_CURRENET (0x1 << SERVICE_BIT_CURRENET)
+#define CFG_SERVICE_MASK_PPA (0x1 << SERVICE_BIT_PPA)
+#define CFG_SERVICE_MASK_MIGRATE (0x1 << SERVICE_BIT_MIGRATE)
+#define CFG_SERVICE_MASK_VROCE (0x1 << SERVICE_BIT_VROCE)
+#define CFG_SERVICE_MASK_BIFUR (0x1 << SERVICE_BIT_BIFUR)
+
+/* Definition of the scenario ID in the cfg_data, which is used for SML memory allocation. */
+enum scenes_id_define {
+ SCENES_ID_FPGA_ETH = 0,
+ SCENES_ID_COMPUTE_STANDARD = 1,
+ SCENES_ID_STORAGE_ROCEAA_2x100 = 2,
+ SCENES_ID_STORAGE_ROCEAA_4x25 = 3,
+ SCENES_ID_CLOUD = 4,
+ SCENES_ID_FC = 5,
+ SCENES_ID_STORAGE_ROCE = 6,
+ SCENES_ID_COMPUTE_ROCE = 7,
+ SCENES_ID_STORAGE_TOE = 8,
+ SCENES_ID_COMPUTE_DPU = 100,
+ SCENES_ID_COMPUTE_SMART_NIC = 101,
+ SCENES_ID_MAX
+};
+
+/* struct cfg_cmd_dev_cap.sf_svc_attr */
+enum {
+ SF_SVC_FT_BIT = (1 << 0),
+ SF_SVC_RDMA_BIT = (1 << 1),
+};
+
+struct cfg_cmd_host_timer {
+ struct mgmt_msg_head head;
+
+ u8 host_id;
+ u8 rsvd1;
+
+ u8 timer_pf_num;
+ u8 timer_pf_id_start;
+ u16 timer_vf_num;
+ u16 timer_vf_id_start;
+ u32 rsvd2[8];
+};
+
+struct cfg_cmd_dev_cap {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1;
+
+ /* Public resources */
+ u8 host_id;
+ u8 ep_id;
+ u8 er_id;
+ u8 port_id;
+
+ u16 host_total_func;
+ u8 host_pf_num;
+ u8 pf_id_start;
+ u16 host_vf_num;
+ u16 vf_id_start;
+ u8 host_oq_id_mask_val;
+ u8 timer_en;
+ u8 host_valid_bitmap;
+ u8 rsvd_host;
+
+ u16 svc_cap_en;
+ u16 max_vf;
+ u8 flexq_en;
+ u8 valid_cos_bitmap;
+ /* Reserved for func_valid_cos_bitmap */
+ u8 port_cos_valid_bitmap;
+ u8 rsvd_func1;
+ u32 rsvd_func2;
+
+ u8 sf_svc_attr;
+ u8 func_sf_en;
+ u8 lb_mode;
+ u8 smf_pg;
+
+ u32 max_conn_num;
+ u16 max_stick2cache_num;
+ u16 max_bfilter_start_addr;
+ u16 bfilter_len;
+ u16 hash_bucket_num;
+
+ /* shared resource */
+ u8 host_sf_en;
+ u8 master_host_id;
+ u8 srv_multi_host_mode;
+ u8 virtio_vq_size;
+
+ u8 hot_plug_disable;
+ u8 bond_create_mode;
+ u8 lro_enable;
+ u8 os_hot_replace;
+
+ u32 rsvd_func3[4];
+
+ /* l2nic */
+ u16 nic_max_sq_id;
+ u16 nic_max_rq_id;
+ u16 nic_default_num_queues;
+ u16 outband_vlan_cfg_en;
+ u32 rsvd2_nic[2];
+
+ /* RoCE */
+ u32 roce_max_qp;
+ u32 roce_max_cq;
+ u32 roce_max_srq;
+ u32 roce_max_mpt;
+ u32 roce_max_drc_qp;
+
+ u32 roce_cmtt_cl_start;
+ u32 roce_cmtt_cl_end;
+ u32 roce_cmtt_cl_size;
+
+ u32 roce_dmtt_cl_start;
+ u32 roce_dmtt_cl_end;
+ u32 roce_dmtt_cl_size;
+
+ u32 roce_wqe_cl_start;
+ u32 roce_wqe_cl_end;
+ u32 roce_wqe_cl_size;
+ u8 roce_srq_container_mode;
+ u8 rsvd_roce1[3];
+ u32 rsvd_roce2[5];
+
+ /* IPsec */
+ u32 ipsec_max_sactx;
+ u16 ipsec_max_cq;
+ u16 rsvd_ipsec1;
+ u32 rsvd_ipsec[2];
+
+ /* OVS */
+ u32 ovs_max_qpc;
+ u32 rsvd_ovs1[3];
+
+ /* ToE */
+ u32 toe_max_pctx;
+ u32 toe_max_cq;
+ u16 toe_max_srq;
+ u16 toe_srq_id_start;
+ u16 toe_max_mpt;
+ u16 toe_rsvd_1;
+ u32 toe_max_cctxt;
+ u32 rsvd_toe[1];
+
+ /* FC */
+ u32 fc_max_pctx;
+ u32 fc_max_scq;
+ u32 fc_max_srq;
+
+ u32 fc_max_cctx;
+ u32 fc_cctx_id_start;
+
+ u8 fc_vp_id_start;
+ u8 fc_vp_id_end;
+ u8 rsvd_fc1[2];
+ u32 rsvd_fc2[5];
+
+ /* VBS */
+ u16 vbs_max_volq;
+ u8 vbs_main_pf_enable;
+ u8 vbs_vsock_pf_enable;
+ u8 vbs_fushion_queue_pf_enable;
+ u8 rsvd0_vbs;
+ u16 rsvd1_vbs;
+ u32 rsvd2_vbs[2];
+
+ u16 fake_vf_start_id;
+ u16 fake_vf_num;
+ u32 fake_vf_max_pctx;
+ u16 fake_vf_bfilter_start_addr;
+ u16 fake_vf_bfilter_len;
+
+ u32 map_host_id : 3;
+ u32 fake_vf_en : 1;
+ u32 fake_vf_start_bit : 4;
+ u32 fake_vf_end_bit : 4;
+ u32 fake_vf_page_bit : 4;
+ u32 rsvd2 : 16;
+
+ u32 rsvd_glb[7];
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd.h b/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd.h
new file mode 100644
index 0000000..d4e33f7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_NPU_CMD_H
+#define CQM_NPU_CMD_H
+
+enum cqm_cmd_type {
+ CQM_CMD_T_INVALID = 0, /* < Invalid command */
+	CQM_CMD_T_BAT_UPDATE, /* < Update the bat configuration of the function,
+ * @see struct tag_cqm_cmdq_bat_update
+ */
+	CQM_CMD_T_CLA_UPDATE, /* < Update the cla configuration of the function,
+ * @see struct tag_cqm_cla_update_cmd
+ */
+	CQM_CMD_T_BLOOMFILTER_SET, /* < Set the bloomfilter configuration of the function,
+ * @see struct tag_cqm_bloomfilter_cmd
+ */
+	CQM_CMD_T_BLOOMFILTER_CLEAR, /* < Clear the bloomfilter configuration of the function,
+ * @see struct tag_cqm_bloomfilter_cmd
+ */
+ CQM_CMD_T_RSVD, /* < Unused */
+ CQM_CMD_T_CLA_CACHE_INVALID, /* < Invalidate the cla cacheline,
+ * @see struct tag_cqm_cla_cache_invalid_cmd
+ */
+	CQM_CMD_T_BLOOMFILTER_INIT, /* < Init the bloomfilter configuration of the function,
+ * @see struct tag_cqm_bloomfilter_init_cmd
+ */
+ CQM_CMD_T_MAX
+};
+
+#endif /* CQM_NPU_CMD_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd_defs.h
new file mode 100644
index 0000000..28b83ed
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/cqm/cqm_npu_cmd_defs.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_NPU_CMD_DEFS_H
+#define CQM_NPU_CMD_DEFS_H
+
+struct tag_cqm_cla_cache_invalid_cmd {
+ u32 gpa_h;
+ u32 gpa_l;
+
+ u32 cache_size; /* CLA cache size=4096B */
+
+ u32 smf_id;
+ u32 func_id;
+};
+
+struct tag_cqm_cla_update_cmd {
+ /* Gpa address to be updated */
+ u32 gpa_h; // byte addr
+ u32 gpa_l; // byte addr
+
+ /* Updated Value */
+ u32 value_h;
+ u32 value_l;
+
+ u32 smf_id;
+ u32 func_id;
+};
+
+struct tag_cqm_bloomfilter_cmd {
+ u32 rsv1;
+
+#if (BYTE_ORDER == LITTLE_ENDIAN)
+ u32 k_en : 4;
+ u32 func_id : 16;
+ u32 rsv2 : 12;
+#else
+ u32 rsv2 : 12;
+ u32 func_id : 16;
+ u32 k_en : 4;
+#endif
+
+ u32 index_h;
+ u32 index_l;
+};
+
+#define CQM_BAT_MAX_SIZE 256
+struct tag_cqm_cmdq_bat_update {
+	u32 offset; // byte offset, 16-byte aligned
+	u32 byte_len; // max size: 256 bytes
+ u8 data[CQM_BAT_MAX_SIZE];
+ u32 smf_id;
+ u32 func_id;
+};
+
+struct tag_cqm_bloomfilter_init_cmd {
+	u32 bloom_filter_len; // 16-byte aligned
+ u32 bloom_filter_addr;
+};
+
+#endif /* CQM_NPU_CMD_DEFS_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/include/hinic3_common.h b/drivers/net/ethernet/huawei/hinic3/include/hinic3_common.h
new file mode 100644
index 0000000..6c5b995
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/hinic3_common.h
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_COMMON_H
+#define HINIC3_COMMON_H
+
+#include <linux/types.h>
+
+struct hinic3_dma_addr_align {
+ u32 real_size;
+
+ void *ori_vaddr;
+ dma_addr_t ori_paddr;
+
+ void *align_vaddr;
+ dma_addr_t align_paddr;
+};
+
+enum hinic3_wait_return {
+ WAIT_PROCESS_CPL = 0,
+ WAIT_PROCESS_WAITING = 1,
+ WAIT_PROCESS_ERR = 2,
+};
+
+struct hinic3_sge {
+ u32 hi_addr;
+ u32 lo_addr;
+ u32 len;
+};
+
+/**
+ * hinic3_cpu_to_be32 - convert data to big endian 32 bit format
+ * @data: the data to convert
+ * @len: length of data to convert, must be a multiple of 4 bytes
+ */
+static inline void hinic3_cpu_to_be32(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ int data_len = len;
+ u32 *mem = (u32 *)data;
+
+ if (!data)
+ return;
+
+ data_len = data_len / chunk_sz;
+
+ for (i = 0; i < data_len; i++) {
+ *mem = cpu_to_be32(*mem);
+ mem++;
+ }
+}
+
+/**
+ * hinic3_be32_to_cpu - convert data from big endian 32 bit format
+ * @data: the data to convert
+ * @len: length of data to convert, must be a multiple of 4 bytes
+ */
+static inline void hinic3_be32_to_cpu(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ int data_len = len;
+ u32 *mem = (u32 *)data;
+
+ if (!data)
+ return;
+
+ data_len = data_len / chunk_sz;
+
+ for (i = 0; i < data_len; i++) {
+ *mem = be32_to_cpu(*mem);
+ mem++;
+ }
+}
+
+/**
+ * hinic3_set_sge - set dma area in scatter gather entry
+ * @sge: scatter gather entry
+ * @addr: dma address
+ * @len: length of relevant data in the dma address
+ */
+static inline void hinic3_set_sge(struct hinic3_sge *sge, dma_addr_t addr,
+ int len)
+{
+ sge->hi_addr = upper_32_bits(addr);
+ sge->lo_addr = lower_32_bits(addr);
+ sge->len = (u32)len;
+}
+
+#define hinic3_hw_be32(val) (val)
+#define hinic3_hw_cpu32(val) (val)
+#define hinic3_hw_cpu16(val) (val)
+
+static inline void hinic3_hw_be32_len(void *data, int len)
+{
+}
+
+static inline void hinic3_hw_cpu32_len(void *data, int len)
+{
+}
+
+int hinic3_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
+ unsigned int flag,
+ struct hinic3_dma_addr_align *mem_align);
+
+void hinic3_dma_free_coherent_align(void *dev_hdl,
+ struct hinic3_dma_addr_align *mem_align);
+
+typedef enum hinic3_wait_return (*wait_cpl_handler)(void *priv_data);
+
+int hinic3_wait_for_timeout(void *priv_data, wait_cpl_handler handler,
+ u32 wait_total_ms, u32 wait_once_us);
+
+/* func_attr.glb_func_idx, global function index */
+u16 hinic3_global_func_id(void *hwdev);
+
+int hinic3_global_func_id_get(void *hwdev, u16 *func_id);
+
+/* func_attr.p2p_idx, belongs to which pf */
+u8 hinic3_pf_id_of_vf(void *hwdev);
+
+/* func_attr.itf_idx, pcie interface index */
+u8 hinic3_pcie_itf_id(void *hwdev);
+int hinic3_get_vfid_by_vfpci(void *hwdev, struct pci_dev *pdev, u16 *global_func_id);
+/* func_attr.vf_in_pf, the vf offset in pf */
+u8 hinic3_vf_in_pf(void *hwdev);
+
+/* func_attr.func_type, 0-PF 1-VF 2-PPF */
+enum func_type hinic3_func_type(void *hwdev);
+
+/* The PF func_attr.glb_pf_vf_offset,
+ * PF use only
+ */
+u16 hinic3_glb_pf_vf_offset(void *hwdev);
+
+/* func_attr.mpf_idx, mpf global function index,
+ * This value is valid only when it is PF
+ */
+u8 hinic3_mpf_idx(void *hwdev);
+
+u8 hinic3_ppf_idx(void *hwdev);
+
+/* func_attr.intr_num, MSI-X table entry in function */
+u16 hinic3_intr_num(void *hwdev);
+
+#endif
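
[Editor's note, not part of the diff: the polling helper declared above takes a wait_cpl_handler and, presumably, keeps calling it (sleeping wait_once_us between polls) until it returns WAIT_PROCESS_CPL or wait_total_ms expires. A hedged sketch follows; the ready flag and both example_* functions are hypothetical.]

/* Sketch: poll a caller-owned flag until it becomes non-zero. */
static enum hinic3_wait_return example_ready_handler(void *priv_data)
{
	const u32 *ready = priv_data;	/* hypothetical state owned by the caller */

	return (*ready != 0) ? WAIT_PROCESS_CPL : WAIT_PROCESS_WAITING;
}

static int example_wait_until_ready(u32 *ready)
{
	/* wait up to 1000 ms, polling roughly every 100 us */
	return hinic3_wait_for_timeout(ready, example_ready_handler, 1000, 100);
}
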
diff --git a/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm.h b/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm.h
new file mode 100644
index 0000000..47857a3
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm.h
@@ -0,0 +1,353 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CQM_H
+#define CQM_H
+
+#include <linux/completion.h>
+
+#ifndef HIUDK_SDK
+
+#include "hinic3_cqm_define.h"
+#include "vram_common.h"
+
+#define CQM_SUCCESS 0
+#define CQM_FAIL (-1)
+#define CQM_CONTINUE 1
+
+#define CQM_WQE_WF_LINK 1
+#define CQM_WQE_WF_NORMAL 0
+
+#define CQM_QUEUE_LINK_MODE 0
+#define CQM_QUEUE_RING_MODE 1
+#define CQM_QUEUE_TOE_SRQ_LINK_MODE 2
+#define CQM_QUEUE_RDMA_QUEUE_MODE 3
+
+struct tag_cqm_linkwqe {
+ u32 rsv1 : 14;
+ u32 wf : 1;
+ u32 rsv2 : 14;
+ u32 ctrlsl : 2;
+ u32 o : 1;
+
+ u32 rsv3 : 31;
+	u32 lp : 1; /* lp indicates whether the o-bit flips */
+
+ u32 next_page_gpa_h; /* Record the upper 32 bits of the PADDR of the next page */
+ u32 next_page_gpa_l; /* Record the lower 32 bits of the PADDR of the next page */
+
+ u32 next_buffer_addr_h; /* Record the upper 32 bits of the VADDR of the next page */
+ u32 next_buffer_addr_l; /* Record the lower 32 bits of the VADDR of the next page */
+};
+
+/* The WQE size cannot exceed the common RQE size. */
+struct tag_cqm_srq_linkwqe {
+ struct tag_cqm_linkwqe linkwqe;
+ u32 current_buffer_gpa_h;
+ u32 current_buffer_gpa_l;
+ u32 current_buffer_addr_h;
+ u32 current_buffer_addr_l;
+
+ u32 fast_link_page_addr_h;
+ u32 fast_link_page_addr_l;
+
+ u32 fixed_next_buffer_addr_h;
+ u32 fixed_next_buffer_addr_l;
+};
+
+/* First 64B of standard 128B WQE */
+union tag_cqm_linkwqe_first64B {
+ struct tag_cqm_linkwqe basic_linkwqe;
+ struct tag_cqm_srq_linkwqe toe_srq_linkwqe;
+ u32 value[16];
+};
+
+/* Last 64 bytes of the standard 128-byte WQE */
+struct tag_cqm_linkwqe_second64B {
+ u32 rsvd0[4];
+ u32 rsvd1[4];
+ union {
+ struct {
+ u32 rsvd0[3];
+ u32 rsvd1 : 29;
+ u32 toe_o : 1;
+ u32 resvd2 : 2;
+ } bs;
+ u32 value[4];
+ } third_16B;
+
+ union {
+ struct {
+ u32 rsvd0[2];
+ u32 rsvd1 : 31;
+ u32 ifoe_o : 1;
+ u32 rsvd2;
+ } bs;
+ u32 value[4];
+ } forth_16B;
+};
+
+/* Standard 128B WQE structure */
+struct tag_cqm_linkwqe_128B {
+ union tag_cqm_linkwqe_first64B first64B;
+ struct tag_cqm_linkwqe_second64B second64B;
+};
+
+enum cqm_aeq_event_type {
+ CQM_AEQ_BASE_T_NIC = 0,
+ CQM_AEQ_BASE_T_ROCE = 16,
+ CQM_AEQ_BASE_T_FC = 48,
+ CQM_AEQ_BASE_T_IOE = 56,
+ CQM_AEQ_BASE_T_TOE = 64,
+ CQM_AEQ_BASE_T_VBS = 96,
+ CQM_AEQ_BASE_T_IPSEC = 112,
+ CQM_AEQ_BASE_T_MAX = 128
+};
+
+struct tag_service_register_template {
+ u32 service_type;
+ u32 srq_ctx_size;
+ u32 scq_ctx_size;
+	void *service_handle; /* handle passed back in the ceq/aeq callbacks */
+ void (*shared_cq_ceq_callback)(void *service_handle, u32 cqn, void *cq_priv);
+ void (*embedded_cq_ceq_callback)(void *service_handle, u32 xid, void *qpc_priv);
+ void (*no_cq_ceq_callback)(void *service_handle, u32 xid, u32 qid, void *qpc_priv);
+ u8 (*aeq_level_callback)(void *service_handle, u8 event_type, u8 *val);
+ void (*aeq_callback)(void *service_handle, u8 event_type, u8 *val);
+};
+
+enum cqm_object_type {
+ CQM_OBJECT_ROOT_CTX = 0, ///<0:root context
+ CQM_OBJECT_SERVICE_CTX, ///<1:QPC
+ CQM_OBJECT_MPT, ///<2:RDMA
+
+ CQM_OBJECT_NONRDMA_EMBEDDED_RQ = 10,
+ CQM_OBJECT_NONRDMA_EMBEDDED_SQ,
+ CQM_OBJECT_NONRDMA_SRQ,
+ CQM_OBJECT_NONRDMA_EMBEDDED_CQ,
+ CQM_OBJECT_NONRDMA_SCQ,
+
+ CQM_OBJECT_RESV = 20,
+
+ CQM_OBJECT_RDMA_QP = 30,
+ CQM_OBJECT_RDMA_SRQ,
+ CQM_OBJECT_RDMA_SCQ,
+
+ CQM_OBJECT_MTT = 50,
+ CQM_OBJECT_RDMARC,
+};
+
+#define CQM_INDEX_INVALID ~(0U)
+#define CQM_INDEX_RESERVED (0xfffff)
+
+#define CQM_RDMA_Q_ROOM_1 (1)
+#define CQM_RDMA_Q_ROOM_2 (2)
+
+#define CQM_HARDWARE_DOORBELL (1)
+#define CQM_SOFTWARE_DOORBELL (2)
+
+struct tag_cqm_buf_list {
+ void *va;
+ dma_addr_t pa;
+ u32 refcount;
+};
+
+struct tag_cqm_buf {
+ struct tag_cqm_buf_list *buf_list;
+ struct tag_cqm_buf_list direct;
+ u32 page_number;
+ u32 buf_number;
+ u32 buf_size;
+ struct vram_buf_info buf_info;
+ u32 bat_entry_type;
+};
+
+struct completion;
+
+struct tag_cqm_object {
+ u32 service_type;
+ u32 object_type;
+ u32 object_size;
+ atomic_t refcount;
+ struct completion free;
+ void *cqm_handle;
+};
+
+struct tag_cqm_qpc_mpt {
+ struct tag_cqm_object object;
+ u32 xid;
+ dma_addr_t paddr;
+ void *priv;
+ u8 *vaddr;
+};
+
+struct tag_cqm_queue_header {
+ u64 doorbell_record;
+ u64 ci_record;
+ u64 rsv1;
+ u64 rsv2;
+};
+
+struct tag_cqm_queue {
+ struct tag_cqm_object object;
+ u32 index;
+ void *priv;
+ u32 current_q_doorbell;
+ u32 current_q_room;
+ struct tag_cqm_buf q_room_buf_1;
+ struct tag_cqm_buf q_room_buf_2;
+ struct tag_cqm_queue_header *q_header_vaddr;
+ dma_addr_t q_header_paddr;
+ u8 *q_ctx_vaddr;
+ dma_addr_t q_ctx_paddr;
+ u32 valid_wqe_num;
+ u8 *tail_container;
+ u8 *head_container;
+ u8 queue_link_mode;
+};
+
+struct tag_cqm_mtt_rdmarc {
+ struct tag_cqm_object object;
+ u32 index_base;
+ u32 index_number;
+ u8 *vaddr;
+};
+
+struct tag_cqm_cmd_buf {
+ void *buf;
+ dma_addr_t dma;
+ u16 size;
+};
+
+enum cqm_cmd_ack_type_e {
+ CQM_CMD_ACK_TYPE_CMDQ = 0,
+ CQM_CMD_ACK_TYPE_SHARE_CQN = 1,
+ CQM_CMD_ACK_TYPE_APP_CQN = 2
+};
+
+#define CQM_CMD_BUF_LEN 0x800
+
+#endif
+
+#define hiudk_cqm_object_delete(x, y) cqm_object_delete(y)
+#define hiudk_cqm_object_funcid(x, y) cqm_object_funcid(y)
+#define hiudk_cqm_object_offset_addr(x, y, z, m) cqm_object_offset_addr(y, z, m)
+#define hiudk_cqm_object_put(x, y) cqm_object_put(y)
+#define hiudk_cqm_object_resize_alloc_new(x, y, z) cqm_object_resize_alloc_new(y, z)
+#define hiudk_cqm_object_resize_free_new(x, y) cqm_object_resize_free_new(y)
+#define hiudk_cqm_object_resize_free_old(x, y) cqm_object_resize_free_old(y)
+#define hiudk_cqm_object_share_recv_queue_add_container(x, y) \
+ cqm_object_share_recv_queue_add_container(y)
+#define hiudk_cqm_object_srq_add_container_free(x, y, z) cqm_object_srq_add_container_free(y, z)
+#define hiudk_cqm_ring_software_db(x, y, z) cqm_ring_software_db(y, z)
+#define hiudk_cqm_srq_used_rq_container_delete(x, y, z) cqm_srq_used_rq_container_delete(y, z)
+
+s32 cqm3_init(void *ex_handle);
+void cqm3_uninit(void *ex_handle);
+
+s32 cqm3_service_register(void *ex_handle,
+ struct tag_service_register_template *service_template);
+void cqm3_service_unregister(void *ex_handle, u32 service_type);
+s32 cqm3_fake_vf_num_set(void *ex_handle, u16 fake_vf_num_cfg);
+bool cqm3_need_secure_mem(void *ex_handle);
+struct tag_cqm_queue *cqm3_object_fc_srq_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+struct tag_cqm_queue *cqm3_object_recv_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 init_rq_num, u32 container_size,
+ u32 wqe_size, void *object_priv);
+struct tag_cqm_queue *cqm3_object_share_recv_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 container_number, u32 container_size,
+ u32 wqe_size);
+struct tag_cqm_qpc_mpt *cqm3_object_qpc_mpt_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ u32 index, bool low2bit_align_en);
+
+struct tag_cqm_queue *cqm3_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+struct tag_cqm_queue *cqm3_object_rdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ bool room_header_alloc, u32 xid);
+struct tag_cqm_mtt_rdmarc *cqm3_object_rdma_table_get(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 index_base, u32 index_number);
+struct tag_cqm_object *cqm3_object_get(void *ex_handle, enum cqm_object_type object_type,
+ u32 index, bool bh);
+struct tag_cqm_cmd_buf *cqm3_cmd_alloc(void *ex_handle);
+void cqm3_cmd_free(void *ex_handle, struct tag_cqm_cmd_buf *cmd_buf);
+
+s32 cqm3_send_cmd_box(void *ex_handle, u8 mod, u8 cmd,
+ struct tag_cqm_cmd_buf *buf_in, struct tag_cqm_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+
+s32 cqm3_lb_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, u8 cos_id,
+ struct tag_cqm_cmd_buf *buf_in, struct tag_cqm_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+s32 cqm3_lb_send_cmd_box_async(void *ex_handle, u8 mod, u8 cmd, u8 cos_id,
+ struct tag_cqm_cmd_buf *buf_in, u16 channel);
+
+s32 cqm3_db_addr_alloc(void *ex_handle, void __iomem **db_addr, void __iomem **dwqe_addr);
+void cqm3_db_addr_free(void *ex_handle, const void __iomem *db_addr,
+ void __iomem *dwqe_addr);
+
+void *cqm3_get_db_addr(void *ex_handle, u32 service_type);
+s32 cqm3_ring_hardware_db(void *ex_handle, u32 service_type, u8 db_count, u64 db);
+
+s32 cqm_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count, u8 pagenum, u64 db);
+s32 cqm3_ring_hardware_db_update_pri(void *ex_handle, u32 service_type, u8 db_count, u64 db);
+s32 cqm3_bloomfilter_inc(void *ex_handle, u16 func_id, u64 id);
+s32 cqm3_bloomfilter_dec(void *ex_handle, u16 func_id, u64 id);
+void *cqm3_gid_base(void *ex_handle);
+void *cqm3_timer_base(void *ex_handle);
+void cqm3_function_timer_clear(void *ex_handle, u32 function_id);
+void cqm3_function_hash_buf_clear(void *ex_handle, s32 global_funcid);
+s32 cqm3_ring_direct_wqe_db(void *ex_handle, u32 service_type, u8 db_count, void *direct_wqe);
+s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type, void *direct_wqe);
+
+s32 cqm3_object_share_recv_queue_add_container(struct tag_cqm_queue *common);
+s32 cqm3_object_srq_add_container_free(struct tag_cqm_queue *common, u8 **container_addr);
+
+s32 cqm3_ring_software_db(struct tag_cqm_object *object, u64 db_record);
+void cqm3_object_put(struct tag_cqm_object *object);
+
+/**
+ * @brief Obtains the function ID of an object.
+ * @param Object Pointer
+ * @retval >=0 function's ID
+ * @retval -1 Fails
+ */
+s32 cqm3_object_funcid(struct tag_cqm_object *object);
+
+s32 cqm3_object_resize_alloc_new(struct tag_cqm_object *object, u32 object_size);
+void cqm3_object_resize_free_new(struct tag_cqm_object *object);
+void cqm3_object_resize_free_old(struct tag_cqm_object *object);
+
+/**
+ * @brief Releasing a container
+ * @param Object Pointer
+ * @param container Pointer to the container to be released
+ * @retval void
+ */
+void cqm3_srq_used_rq_container_delete(struct tag_cqm_object *object, u8 *container);
+
+void cqm3_object_delete(struct tag_cqm_object *object);
+
+/**
+ * @brief Obtains the PADDR and VADDR of the specified offset in the object buffer.
+ * @details Only rdma table lookup is supported
+ * @param Object Pointer
+ * @param offset For an RDMA table, the offset is the absolute index number.
+ * @param paddr The physical address is returned only for the RDMA table.
+ * @retval u8 *buffer Virtual address at specified offset
+ */
+u8 *cqm3_object_offset_addr(struct tag_cqm_object *object, u32 offset, dma_addr_t *paddr);
+
+#endif /* CQM_H */
+
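
[Editor's note, not part of the diff: the command-buffer helpers above are used in an alloc / fill / send / free pattern. A rough sketch is shown below; the module id, command id, channel and timeout values are placeholders rather than real firmware opcodes, and passing NULL for buf_out/out_param is an assumption for commands that need no response payload.]

/* Sketch: send a small inline command through the CQM command box. */
static int example_send_cqm_cmd(void *ex_handle, const void *req, u16 req_len)
{
	struct tag_cqm_cmd_buf *buf_in = NULL;
	int err;

	if (req_len > CQM_CMD_BUF_LEN)
		return -EINVAL;

	buf_in = cqm3_cmd_alloc(ex_handle);
	if (!buf_in)
		return -ENOMEM;

	memcpy(buf_in->buf, req, req_len);
	buf_in->size = req_len;

	/* mod/cmd/channel/timeout below are illustrative placeholders only */
	err = cqm3_send_cmd_box(ex_handle, 0, 0, buf_in, NULL, NULL, 0, 0);

	cqm3_cmd_free(ex_handle, buf_in);
	return err;
}
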
diff --git a/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm_define.h b/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm_define.h
new file mode 100644
index 0000000..71d6166
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/hinic3_cqm_define.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CQM_DEFINE_H
+#define HINIC3_CQM_DEFINE_H
+#if !defined(HIUDK_ULD) && !defined(HIUDK_SDK_ADPT)
+#define cqm_init cqm3_init
+#define cqm_uninit cqm3_uninit
+#define cqm_service_register cqm3_service_register
+#define cqm_service_unregister cqm3_service_unregister
+#define cqm_bloomfilter_dec cqm3_bloomfilter_dec
+#define cqm_bloomfilter_inc cqm3_bloomfilter_inc
+#define cqm_cmd_alloc cqm3_cmd_alloc
+#define cqm_cmd_free cqm3_cmd_free
+#define cqm_send_cmd_box cqm3_send_cmd_box
+#define cqm_lb_send_cmd_box cqm3_lb_send_cmd_box
+#define cqm_lb_send_cmd_box_async cqm3_lb_send_cmd_box_async
+#define cqm_db_addr_alloc cqm3_db_addr_alloc
+#define cqm_db_addr_free cqm3_db_addr_free
+#define cqm_ring_hardware_db cqm3_ring_hardware_db
+#define cqm_ring_software_db cqm3_ring_software_db
+#define cqm_object_fc_srq_create cqm3_object_fc_srq_create
+#define cqm_object_share_recv_queue_create cqm3_object_share_recv_queue_create
+#define cqm_object_share_recv_queue_add_container cqm3_object_share_recv_queue_add_container
+#define cqm_object_srq_add_container_free cqm3_object_srq_add_container_free
+#define cqm_object_recv_queue_create cqm3_object_recv_queue_create
+#define cqm_object_qpc_mpt_create cqm3_object_qpc_mpt_create
+#define cqm_object_nonrdma_queue_create cqm3_object_nonrdma_queue_create
+#define cqm_object_rdma_queue_create cqm3_object_rdma_queue_create
+#define cqm_object_rdma_table_get cqm3_object_rdma_table_get
+#define cqm_object_delete cqm3_object_delete
+#define cqm_object_offset_addr cqm3_object_offset_addr
+#define cqm_object_get cqm3_object_get
+#define cqm_object_put cqm3_object_put
+#define cqm_object_funcid cqm3_object_funcid
+#define cqm_object_resize_alloc_new cqm3_object_resize_alloc_new
+#define cqm_object_resize_free_new cqm3_object_resize_free_new
+#define cqm_object_resize_free_old cqm3_object_resize_free_old
+#define cqm_function_timer_clear cqm3_function_timer_clear
+#define cqm_function_hash_buf_clear cqm3_function_hash_buf_clear
+#define cqm_srq_used_rq_container_delete cqm3_srq_used_rq_container_delete
+#define cqm_timer_base cqm3_timer_base
+#define cqm_get_db_addr cqm3_get_db_addr
+#define cqm_ring_direct_wqe_db cqm3_ring_direct_wqe_db
+#define cqm_fake_vf_num_set cqm3_fake_vf_num_set
+#define cqm_need_secure_mem cqm3_need_secure_mem
+#endif
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/hinic3_lld.h b/drivers/net/ethernet/huawei/hinic3/include/hinic3_lld.h
new file mode 100644
index 0000000..e36ba1d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/hinic3_lld.h
@@ -0,0 +1,225 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_LLD_H
+#define HINIC3_LLD_H
+
+#include "hinic3_crm.h"
+
+#define WAIT_TIME 1
+
+#ifdef HIUDK_SDK
+
+int hwsdk_set_vf_load_state(struct hinic3_lld_dev *lld_dev, bool vf_load_state);
+
+int hwsdk_set_vf_service_load(struct hinic3_lld_dev *lld_dev, u16 service,
+ bool vf_srv_load);
+
+int hwsdk_set_vf_service_state(struct hinic3_lld_dev *lld_dev, u16 vf_func_id,
+ u16 service, bool en);
+#else
+struct hinic3_lld_dev {
+ struct pci_dev *pdev;
+ void *hwdev;
+};
+
+struct hinic3_uld_info {
+ /* When the function does not need to initialize the corresponding uld,
+ * @probe needs to return 0 and uld_dev is set to NULL;
+ * if uld_dev is NULL, @remove will not be called when uninstalling
+ */
+ int (*probe)(struct hinic3_lld_dev *lld_dev, void **uld_dev, char *uld_dev_name);
+ void (*remove)(struct hinic3_lld_dev *lld_dev, void *uld_dev);
+ int (*suspend)(struct hinic3_lld_dev *lld_dev, void *uld_dev, pm_message_t state);
+ int (*resume)(struct hinic3_lld_dev *lld_dev, void *uld_dev);
+ void (*event)(struct hinic3_lld_dev *lld_dev, void *uld_dev,
+ struct hinic3_event_info *event);
+ int (*ioctl)(void *uld_dev, u32 cmd, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+};
+#endif
+
+#ifndef HIUDK_ULD
+/* hinic3_register_uld - register an upper-layer driver
+ * @type: uld service type
+ * @uld_info: uld callback
+ *
+ * Registers an upper-layer driver.
+ * Traverse existing devices and call @probe to initialize the uld device.
+ */
+int hinic3_register_uld(enum hinic3_service_type type, struct hinic3_uld_info *uld_info);
+
+/**
+ * hinic3_unregister_uld - unregister an upper-layer driver
+ * @type: uld service type
+ *
+ * Traverse existing devices and call @remove to uninstall the uld device.
+ * Unregisters an existing upper-layer driver.
+ */
+void hinic3_unregister_uld(enum hinic3_service_type type);
+
+void lld_hold(void);
+void lld_put(void);
+
+/**
+ * @brief hinic3_get_lld_dev_by_chip_name - get lld device by chip name
+ * @param chip_name: chip name
+ *
+ * The value of lld_dev reference increases when lld_dev is obtained. The caller needs
+ * to release the reference by calling lld_dev_put.
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_name(const char *chip_name);
+
+/**
+ * @brief lld_dev_hold - get reference to lld_dev
+ * @param dev: lld device
+ *
+ * Hold reference to device to keep it from being freed
+ **/
+void lld_dev_hold(struct hinic3_lld_dev *dev);
+
+/**
+ * @brief lld_dev_put - release reference to lld_dev
+ * @param dev: lld device
+ *
+ * Release reference to device to allow it to be freed
+ **/
+void lld_dev_put(struct hinic3_lld_dev *dev);
+
+/**
+ * @brief hinic3_get_lld_dev_by_dev_name - get lld device by uld device name
+ * @param dev_name: uld device name
+ * @param type: uld service type; when the type is SERVICE_T_MAX, try to match
+ * all ULD names to get uld_dev
+ *
+ * The value of lld_dev reference increases when lld_dev is obtained. The caller needs
+ * to release the reference by calling lld_dev_put.
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name(const char *dev_name,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_lld_dev_by_dev_name_unsafe - get lld device by uld device name
+ * @param dev_name: uld device name
+ * @param type: uld service type; when the type is SERVICE_T_MAX, try to match
+ * all ULD names to get uld_dev
+ *
+ * hinic3_get_lld_dev_by_dev_name_unsafe() is completely analogous to
+ * hinic3_get_lld_dev_by_dev_name(); the only difference is that the reference
+ * of lld_dev is not increased when lld_dev is obtained.
+ *
+ * The caller must ensure that lld_dev will not be freed during the remove process
+ * when using lld_dev.
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name_unsafe(const char *dev_name,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_lld_dev_by_chip_and_port - get lld device by chip name and port id
+ * @param chip_name: chip name
+ * @param port_id: port id
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_and_port(const char *chip_name, u8 port_id);
+
+/**
+ * @brief hinic3_get_ppf_dev - get ppf device without depend on input parameter
+ **/
+void *hinic3_get_ppf_dev(void);
+
+/**
+ * @brief hinic3_get_ppf_lld_dev - get ppf lld device by current function's lld device
+ * @param lld_dev: current function's lld device
+ *
+ * The value of lld_dev reference increases when lld_dev is obtained. The caller needs
+ * to release the reference by calling lld_dev_put.
+ **/
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev(struct hinic3_lld_dev *lld_dev);
+
+/**
+ * @brief hinic3_get_ppf_lld_dev_unsafe - get ppf lld device by current function's lld device
+ * @param lld_dev: current function's lld device
+ *
+ * hinic3_get_ppf_lld_dev_unsafe() is completely analogous to hinic3_get_ppf_lld_dev();
+ * the only difference is that the reference of lld_dev is not increased when lld_dev is obtained.
+ *
+ * The caller must ensure that ppf's lld_dev will not be freed during the remove process
+ * when using ppf lld_dev.
+ **/
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev_unsafe(struct hinic3_lld_dev *lld_dev);
+
+/**
+ * @brief uld_dev_hold - get reference to uld_dev
+ * @param lld_dev: lld device
+ * @param type: uld service type
+ *
+ * Hold reference to uld device to keep it from being freed
+ **/
+void uld_dev_hold(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief uld_dev_put - release reference to lld_dev
+ * @param dev: lld device
+ * @param type: uld service type
+ *
+ * Release reference to uld device to allow it to be freed
+ **/
+void uld_dev_put(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_uld_dev - get uld device by lld device
+ * @param lld_dev: lld device
+ * @param type: uld service type
+ *
+ * The value of uld_dev reference increases when uld_dev is obtained. The caller needs
+ * to release the reference by calling uld_dev_put.
+ **/
+void *hinic3_get_uld_dev(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_uld_dev_unsafe - get uld device by lld device
+ * @param lld_dev: lld device
+ * @param type: uld service type
+ *
+ * hinic3_get_uld_dev_unsafe() is completely analogous to hinic3_get_uld_dev();
+ * the only difference is that the reference of uld_dev is not increased when uld_dev is obtained.
+ *
+ * The caller must ensure that uld_dev will not be freed during the remove process
+ * when using uld_dev.
+ **/
+void *hinic3_get_uld_dev_unsafe(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_chip_name - get chip name by lld device
+ * @param lld_dev: lld device
+ * @param chip_name: String for storing the chip name
+ * @param max_len: Maximum number of characters to be copied for chip_name
+ **/
+int hinic3_get_chip_name(struct hinic3_lld_dev *lld_dev, char *chip_name, u16 max_len);
+
+struct card_node *hinic3_get_chip_node_by_lld(struct hinic3_lld_dev *lld_dev);
+
+struct hinic3_hwdev *hinic3_get_sdk_hwdev_by_lld(struct hinic3_lld_dev *lld_dev);
+
+bool hinic3_get_vf_service_load(struct pci_dev *pdev, u16 service);
+
+int hinic3_set_vf_service_load(struct pci_dev *pdev, u16 service,
+ bool vf_srv_load);
+
+int hinic3_set_vf_service_state(struct pci_dev *pdev, u16 vf_func_id,
+ u16 service, bool en);
+
+bool hinic3_get_vf_load_state(struct pci_dev *pdev);
+
+int hinic3_set_vf_load_state(struct pci_dev *pdev, bool vf_load_state);
+
+int hinic3_attach_nic(struct hinic3_lld_dev *lld_dev);
+
+void hinic3_detach_nic(const struct hinic3_lld_dev *lld_dev);
+
+int hinic3_attach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+void hinic3_detach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+const char **hinic3_get_uld_names(void);
+int hinic3_lld_init(void);
+void hinic3_lld_exit(void);
+#endif
+#endif
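
[Editor's note, not part of the diff: to make the registration contract above concrete, an upper-layer driver fills a hinic3_uld_info and registers it once; returning 0 from @probe with *uld_dev left NULL means @remove is skipped for that device. In the hedged sketch below, SERVICE_T_NIC is assumed to be defined in hinic3_crm.h and the example_* callbacks are illustrative.]

static int example_uld_probe(struct hinic3_lld_dev *lld_dev, void **uld_dev,
			     char *uld_dev_name)
{
	*uld_dev = NULL;	/* set to a real per-device context when needed */
	return 0;		/* 0 with a NULL uld_dev means remove is skipped */
}

static void example_uld_remove(struct hinic3_lld_dev *lld_dev, void *uld_dev)
{
}

static struct hinic3_uld_info example_uld_info = {
	.probe  = example_uld_probe,
	.remove = example_uld_remove,
};

static int example_uld_register(void)
{
	/* SERVICE_T_NIC is assumed to come from hinic3_crm.h */
	return hinic3_register_uld(SERVICE_T_NIC, &example_uld_info);
}
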
diff --git a/drivers/net/ethernet/huawei/hinic3/include/hinic3_profile.h b/drivers/net/ethernet/huawei/hinic3/include/hinic3_profile.h
new file mode 100644
index 0000000..e0bd256
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/hinic3_profile.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_PROFILE_H
+#define HINIC3_PROFILE_H
+
+typedef bool (*hinic3_is_match_prof)(void *device);
+typedef void *(*hinic3_init_prof_attr)(void *device);
+typedef void (*hinic3_deinit_prof_attr)(void *porf_attr);
+
+enum prof_adapter_type {
+ PROF_ADAP_TYPE_INVALID,
+ PROF_ADAP_TYPE_PANGEA = 1,
+
+ /* Add prof adapter type before default */
+ PROF_ADAP_TYPE_DEFAULT,
+};
+
+/**
+ * struct hinic3_prof_adapter - custom scene's profile adapter
+ * @type: adapter type
+ * @match: Check whether the current function is used in the custom scene.
+ * Implemented in the current source file
+ * @init: When @match return true, the initialization function called in probe.
+ * Implemented in the source file of the custom scene
+ * @deinit: When @match return true, the deinitialization function called when
+ * remove. Implemented in the source file of the custom scene
+ */
+struct hinic3_prof_adapter {
+ enum prof_adapter_type type;
+ hinic3_is_match_prof match;
+ hinic3_init_prof_attr init;
+ hinic3_deinit_prof_attr deinit;
+};
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline struct hinic3_prof_adapter *hinic3_prof_init(void *device,
+ struct hinic3_prof_adapter *adap_objs,
+ int num_adap, void **prof_attr)
+{
+ struct hinic3_prof_adapter *prof_obj = NULL;
+ int i;
+
+ for (i = 0; i < num_adap; i++) {
+ prof_obj = &adap_objs[i];
+ if (!(prof_obj->match && prof_obj->match(device)))
+ continue;
+
+ *prof_attr = prof_obj->init ? prof_obj->init(device) : NULL;
+
+ return prof_obj;
+ }
+
+ return NULL;
+}
+
+static inline void hinic3_prof_deinit(struct hinic3_prof_adapter *prof_obj, void *prof_attr)
+{
+ if (!prof_obj)
+ return;
+
+ if (prof_obj->deinit)
+ prof_obj->deinit(prof_attr);
+}
+
+/* module-level interface */
+#ifdef CONFIG_MODULE_PROF
+struct hinic3_module_ops {
+ int (*module_prof_init)(void);
+ void (*module_prof_exit)(void);
+ void (*probe_fault_process)(void *pdev, u16 level);
+ int (*probe_pre_process)(void *pdev);
+ void (*probe_pre_unprocess)(void *pdev);
+};
+
+struct hinic3_module_ops *hinic3_get_module_prof_ops(void);
+
+static inline void hinic3_probe_fault_process(void *pdev, u16 level)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (ops && ops->probe_fault_process)
+ ops->probe_fault_process(pdev, level);
+}
+
+static inline int hinic3_module_pre_init(void)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (!ops || !ops->module_prof_init)
+ return -EINVAL;
+
+ return ops->module_prof_init();
+}
+
+static inline void hinic3_module_post_exit(void)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (ops && ops->module_prof_exit)
+ ops->module_prof_exit();
+}
+
+static inline int hinic3_probe_pre_process(void *pdev)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (!ops || !ops->probe_pre_process)
+ return -EINVAL;
+
+ return ops->probe_pre_process(pdev);
+}
+
+static inline void hinic3_probe_pre_unprocess(void *pdev)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (ops && ops->probe_pre_unprocess)
+ ops->probe_pre_unprocess(pdev);
+}
+#else
+static inline void hinic3_probe_fault_process(void *pdev, u16 level) { };
+
+static inline int hinic3_module_pre_init(void)
+{
+ return 0;
+}
+
+static inline void hinic3_module_post_exit(void) { };
+
+static inline int hinic3_probe_pre_process(void *pdev)
+{
+ return 0;
+}
+
+static inline void hinic3_probe_pre_unprocess(void *pdev) { };
+#endif
+
+#ifdef LLT_STATIC_DEF_SAVED
+#define static
+#undef LLT_STATIC_DEF_SAVED
+#endif
+
+#endif
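
[Editor's note, not part of the diff: as a usage sketch for the adapter table above, callers declare an array of hinic3_prof_adapter entries and let hinic3_prof_init pick the first entry whose match() accepts the device. The pangea_* callbacks below are hypothetical names used only for illustration.]

static bool pangea_match(void *device);			/* hypothetical */
static void *pangea_init_attr(void *device);		/* hypothetical */
static void pangea_deinit_attr(void *prof_attr);	/* hypothetical */

static struct hinic3_prof_adapter example_adapters[] = {
	{
		.type   = PROF_ADAP_TYPE_PANGEA,
		.match  = pangea_match,
		.init   = pangea_init_attr,
		.deinit = pangea_deinit_attr,
	},
};

static void example_select_profile(void *device)
{
	struct hinic3_prof_adapter *adap = NULL;
	void *prof_attr = NULL;

	adap = hinic3_prof_init(device, example_adapters,
				ARRAY_SIZE(example_adapters), &prof_attr);
	/* ... use prof_attr while the device is active ... */
	hinic3_prof_deinit(adap, prof_attr);
}
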
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mag_mpu_cmd.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mag_mpu_cmd.h
new file mode 100644
index 0000000..199f17a
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mag_mpu_cmd.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MAG_MPU_CMD_H
+#define MAG_MPU_CMD_H
+
+/* Definition of the SerDes/MAG message command word */
+enum mag_cmd {
+ SERDES_CMD_PROCESS = 0, /* serdes cmd @see struct serdes_cmd_in */
+
+ MAG_CMD_SET_PORT_CFG = 1, /* set port cfg function @see struct mag_cmd_set_port_cfg */
+ MAG_CMD_SET_PORT_ADAPT = 2, /* set port adapt mode @see struct mag_cmd_set_port_adapt */
+ MAG_CMD_CFG_LOOPBACK_MODE = 3, /* set port loopback mode @see mag_cmd_cfg_loopback_mode */
+
+ MAG_CMD_GET_PORT_ENABLE = 5, /* get port enable status @see mag_cmd_get_port_enable */
+ MAG_CMD_SET_PORT_ENABLE = 6, /* set port enable mode @see mag_cmd_set_port_enable */
+ MAG_CMD_GET_LINK_STATUS = 7, /* get port link status @see mag_cmd_get_link_status */
+ MAG_CMD_SET_LINK_FOLLOW = 8, /* set port link_follow mode @see mag_cmd_set_link_follow */
+ MAG_CMD_SET_PMA_ENABLE = 9, /* set pma enable mode @see struct mag_cmd_set_pma_enable */
+ MAG_CMD_CFG_FEC_MODE = 10, /* set port fec mode @see struct mag_cmd_cfg_fec_mode */
+ MAG_CMD_GET_BOND_STATUS = 11, /* reserved for future use */
+
+ MAG_CMD_CFG_AN_TYPE = 12, /* reserved for future use */
+ MAG_CMD_CFG_LINK_TIME = 13, /* get link time @see struct mag_cmd_get_link_time */
+
+ MAG_CMD_SET_PANGEA_ADAPT = 15, /* set pangea adapt mode @see mag_cmd_set_pangea_adapt */
+
+ /* Bios link configuration dependency 30-49 */
+ MAG_CMD_CFG_BIOS_LINK_CFG = 31, /* reserved for future use */
+ MAG_CMD_RESTORE_LINK_CFG = 32, /* restore link cfg @see mag_cmd_restore_link_cfg */
+ MAG_CMD_ACTIVATE_BIOS_LINK_CFG = 33, /* active bios link cfg */
+
+	/* Optical module, LED, PHY and other peripheral configuration management 50-99 */
+ /* LED */
+ MAG_CMD_SET_LED_CFG = 50, /* set led cfg @see struct mag_cmd_set_led_cfg */
+
+ /* PHY */
+ MAG_CMD_GET_PHY_INIT_STATUS = 55, /* reserved for future use */
+
+ /* Optical module */
+ MAG_CMD_GET_XSFP_INFO = 60, /* get xsfp info @see struct mag_cmd_get_xsfp_info */
+ MAG_CMD_SET_XSFP_ENABLE = 61, /* set xsfp enable mode @see mag_cmd_set_xsfp_enable */
+ MAG_CMD_GET_XSFP_PRESENT = 62, /* get xsfp present status @see mag_cmd_get_xsfp_present */
+ MAG_CMD_SET_XSFP_RW = 63, /* sfp/qsfp single byte read/write, @see mag_cmd_set_xsfp_rw */
+ MAG_CMD_CFG_XSFP_TEMPERATURE = 64, /* get xsfp temp @see mag_cmd_sfp_temp_out_info */
+ /**< set xsfp tlv info @see struct mag_cmd_set_xsfp_tlv_req */
+ MAG_CMD_SET_XSFP_TLV_INFO = 65,
+ /**< get xsfp tlv info @see struct drv_mag_cmd_get_xsfp_tlv_rsp */
+ MAG_CMD_GET_XSFP_TLV_INFO = 66,
+
+ /* Event reported 100-149 */
+ MAG_CMD_WIRE_EVENT = 100,
+ MAG_CMD_LINK_ERR_EVENT = 101,
+
+	/* DFX, Counter */
+ MAG_CMD_EVENT_PORT_INFO = 150, /* get port event info @see mag_cmd_event_port_info */
+ MAG_CMD_GET_PORT_STAT = 151, /* get port state @see struct mag_cmd_get_port_stat */
+ MAG_CMD_CLR_PORT_STAT = 152, /* clear port state @see struct mag_cmd_port_stats_info */
+ MAG_CMD_GET_PORT_INFO = 153, /* get port info @see struct mag_cmd_get_port_info */
+ MAG_CMD_GET_PCS_ERR_CNT = 154, /* pcs err count @see struct mag_cmd_event_port_info */
+ MAG_CMD_GET_MAG_CNT = 155, /* fec code count @see struct mag_cmd_get_mag_cnt */
+ MAG_CMD_DUMP_ANTRAIN_INFO = 156, /* dump anlt info @see mag_cmd_dump_antrain_info */
+
+ /* patch reserve cmd */
+ MAG_CMD_PATCH_RSVD_0 = 200,
+ MAG_CMD_PATCH_RSVD_1 = 201,
+ MAG_CMD_PATCH_RSVD_2 = 202,
+ MAG_CMD_PATCH_RSVD_3 = 203,
+ MAG_CMD_PATCH_RSVD_4 = 204,
+
+ MAG_CMD_MAX = 0xFF
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_board_defs.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_board_defs.h
new file mode 100644
index 0000000..88a9c0d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_board_defs.h
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MPU_BOARD_DEFS_H
+#define MPU_BOARD_DEFS_H
+
+#define BOARD_TYPE_TEST_RANGE_START 1
+#define BOARD_TYPE_TEST_RANGE_END 29
+#define BOARD_TYPE_STRG_RANGE_START 30
+#define BOARD_TYPE_STRG_RANGE_END 99
+#define BOARD_TYPE_CAL_RANGE_START 100
+#define BOARD_TYPE_CAL_RANGE_END 169
+#define BOARD_TYPE_CLD_RANGE_START 170
+#define BOARD_TYPE_CLD_RANGE_END 239
+#define BOARD_TYPE_RSVD_RANGE_START 240
+#define BOARD_TYPE_RSVD_RANGE_END 255
+
+enum board_type_define_e {
+ BOARD_TYPE_MPU_DEFAULT = 0,
+ BOARD_TYPE_TEST_EVB_4X25G = 1,
+ BOARD_TYPE_TEST_CEM_2X100G = 2,
+ BOARD_TYPE_STRG_SMARTIO_4X32G_FC = 30,
+ BOARD_TYPE_STRG_SMARTIO_4X25G_TIOE = 31,
+ BOARD_TYPE_STRG_SMARTIO_4X25G_ROCE = 32,
+ BOARD_TYPE_STRG_SMARTIO_4X25G_ROCE_AA = 33,
+ BOARD_TYPE_STRG_SMARTIO_4X25G_SRIOV = 34,
+ BOARD_TYPE_STRG_SMARTIO_4X25G_SRIOV_SW = 35,
+ BOARD_TYPE_STRG_4X25G_COMSTORAGE = 36,
+ BOARD_TYPE_STRG_2X100G_TIOE = 40,
+ BOARD_TYPE_STRG_2X100G_ROCE = 41,
+ BOARD_TYPE_STRG_2X100G_ROCE_AA = 42,
+ BOARD_TYPE_CAL_2X25G_NIC_75MPPS = 100,
+ BOARD_TYPE_CAL_2X25G_NIC_40MPPS = 101,
+ BOARD_TYPE_CAL_2X100G_DPU_VL = 102,
+ BOARD_TYPE_CAL_4X25G_NIC_120MPPS = 105,
+ BOARD_TYPE_CAL_4X25G_COMSTORAGE = 106,
+ BOARD_TYPE_CAL_2X32G_FC_HBA = 110,
+ BOARD_TYPE_CAL_2X16G_FC_HBA = 111,
+ BOARD_TYPE_CAL_2X100G_NIC_120MPPS = 115,
+ BOARD_TYPE_CAL_2X25G_DPU_BD = 116,
+ BOARD_TYPE_CAL_2X100G_TCE_BACKPLANE = 117,
+ BOARD_TYPE_CAL_4X25G_DPU_VL = 118,
+ BOARD_TYPE_CAL_4X25G_SMARTNIC_120MPPS = 119,
+ BOARD_TYPE_CAL_2X100G_SMARTNIC_120MPPS = 120,
+ BOARD_TYPE_CAL_6X25G_DPU_VL = 121,
+ BOARD_TYPE_CAL_4X25G_DPU_BD = 122,
+ BOARD_TYPE_CAL_2X25G_NIC_4HOST = 123,
+ BOARD_TYPE_CAL_2X10G_LOW_POWER = 125,
+ BOARD_TYPE_CAL_2X200G_NIC_INTERNET = 127,
+ BOARD_TYPE_CAL_1X100GR2_OCP = 129,
+ BOARD_TYPE_CAL_2X200G_DPU_VL = 130,
+ BOARD_TYPE_CLD_2X100G_SDI5_1 = 170,
+ BOARD_TYPE_CLD_2X25G_SDI5_0_LITE = 171,
+ BOARD_TYPE_CLD_2X100G_SDI5_0 = 172,
+ BOARD_TYPE_CLD_4X25G_SDI5_0_C = 175,
+ BOARD_TYPE_MAX_INDEX = 0xFF
+};
+
+static inline u32 spu_board_type_valid(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CLD_2X25G_SDI5_0_LITE) ||
+ ((board_type) == BOARD_TYPE_CLD_2X100G_SDI5_0) ||
+ ((board_type) == BOARD_TYPE_CLD_4X25G_SDI5_0_C) ||
+ ((board_type) == BOARD_TYPE_CAL_2X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_2X200G_DPU_VL);
+}
+
+static inline int board_type_is_sdi_50(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CLD_2X25G_SDI5_0_LITE) ||
+ ((board_type) == BOARD_TYPE_CLD_2X100G_SDI5_0) ||
+ ((board_type) == BOARD_TYPE_CLD_4X25G_SDI5_0_C);
+}
+
+static inline int board_type_is_sdi(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CLD_2X100G_SDI5_1) ||
+ ((board_type) == BOARD_TYPE_CLD_2X25G_SDI5_0_LITE) ||
+ ((board_type) == BOARD_TYPE_CLD_2X100G_SDI5_0) ||
+ ((board_type) == BOARD_TYPE_CLD_4X25G_SDI5_0_C);
+}
+
+static inline int board_type_is_dpu_spu(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CAL_2X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_2X200G_DPU_VL);
+}
+
+static inline int board_type_is_dpu(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CAL_2X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_DPU_BD) ||
+ ((board_type) == BOARD_TYPE_CAL_6X25G_DPU_VL) ||
+ ((board_type) == BOARD_TYPE_CAL_2X200G_DPU_VL);
+}
+
+static inline int board_type_is_smartnic(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CAL_4X25G_SMARTNIC_120MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_SMARTNIC_120MPPS);
+}
+
+/* This interface checks whether the card is a distributed-storage standard card or a
+ * compute standard card (with the RoCE feature); it is used only when handling command
+ * words that conflict with the LLDP TX function.
+ */
+static inline int board_type_is_compute(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CAL_2X25G_NIC_75MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_2X25G_NIC_40MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_NIC_120MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_COMSTORAGE) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_NIC_120MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_2X10G_LOW_POWER) ||
+ ((board_type) == BOARD_TYPE_CAL_2X200G_NIC_INTERNET) ||
+ ((board_type) == BOARD_TYPE_CAL_1X100GR2_OCP) ||
+ ((board_type) == BOARD_TYPE_CAL_4X25G_SMARTNIC_120MPPS) ||
+ ((board_type) == BOARD_TYPE_CAL_2X25G_NIC_4HOST) ||
+ ((board_type) == BOARD_TYPE_CAL_2X100G_SMARTNIC_120MPPS);
+}
+
+/* This interface checks whether the NIC needs to be reset when the server issues a reboot */
+static inline int board_type_is_multi_socket(u32 board_type)
+{
+ return ((board_type) == BOARD_TYPE_CAL_1X100GR2_OCP);
+}
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_cmd_base_defs.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_cmd_base_defs.h
new file mode 100644
index 0000000..e65c206
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_cmd_base_defs.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) Huawei Technologies Co., Ltd. 2021-2023. All rights reserved.
+ * Description : common definitions
+ */
+
+#ifndef COMM_DEFS_H
+#define COMM_DEFS_H
+
+/** MPU CMD MODULE TYPE */
+enum hinic3_mod_type {
+ HINIC3_MOD_COMM = 0, /* HW communication module */
+ HINIC3_MOD_L2NIC = 1, /* L2NIC module */
+ HINIC3_MOD_ROCE = 2,
+ HINIC3_MOD_PLOG = 3,
+ HINIC3_MOD_TOE = 4,
+ HINIC3_MOD_FLR = 5,
+ HINIC3_MOD_VROCE = 6,
+ HINIC3_MOD_CFGM = 7, /* Configuration management */
+ HINIC3_MOD_CQM = 8,
+ HINIC3_MOD_VMSEC = 9,
+ COMM_MOD_FC = 10,
+ HINIC3_MOD_OVS = 11,
+ HINIC3_MOD_DSW = 12,
+ HINIC3_MOD_MIGRATE = 13,
+ HINIC3_MOD_HILINK = 14,
+ HINIC3_MOD_CRYPT = 15, /* secure crypto module */
+ HINIC3_MOD_VIO = 16,
+ HINIC3_MOD_IMU = 17,
+ HINIC3_MOD_DFX = 18, /* DFX */
+ HINIC3_MOD_HW_MAX = 19, /* hardware max module id */
+ /* Software module id, for PF/VF and multi-host */
+ HINIC3_MOD_SW_FUNC = 20,
+ HINIC3_MOD_MAX,
+};
+
+/* func reset flags, indicating which resources to clean up */
+enum func_reset_flag_e {
+ RES_TYPE_FLUSH_BIT = 0,
+ RES_TYPE_MQM,
+ RES_TYPE_SMF,
+ RES_TYPE_PF_BW_CFG,
+
+ RES_TYPE_COMM = 10,
+ RES_TYPE_COMM_MGMT_CH, /* clear mbox and aeq, The RES_TYPE_COMM bit must be set */
+ RES_TYPE_COMM_CMD_CH, /* clear cmdq and ceq, The RES_TYPE_COMM bit must be set */
+ RES_TYPE_NIC,
+ RES_TYPE_OVS,
+ RES_TYPE_VBS,
+ RES_TYPE_ROCE,
+ RES_TYPE_FC,
+ RES_TYPE_TOE,
+ RES_TYPE_IPSEC,
+ RES_TYPE_MAX,
+};
+
+#define HINIC3_COMM_RES \
+ ((1 << RES_TYPE_COMM) | (1 << RES_TYPE_COMM_CMD_CH) | \
+ (1 << RES_TYPE_FLUSH_BIT) | (1 << RES_TYPE_MQM) | \
+ (1 << RES_TYPE_SMF) | (1 << RES_TYPE_PF_BW_CFG))
+
+#define HINIC3_NIC_RES (1 << RES_TYPE_NIC)
+#define HINIC3_OVS_RES (1 << RES_TYPE_OVS)
+#define HINIC3_VBS_RES (1 << RES_TYPE_VBS)
+#define HINIC3_ROCE_RES (1 << RES_TYPE_ROCE)
+#define HINIC3_FC_RES (1 << RES_TYPE_FC)
+#define HINIC3_TOE_RES (1 << RES_TYPE_TOE)
+#define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC)
+
+/* MODE: OVS, NIC, UNKNOWN */
+#define HINIC3_WORK_MODE_OVS 0
+#define HINIC3_WORK_MODE_UNKNOWN 1
+#define HINIC3_WORK_MODE_NIC 2
+
+#define DEVICE_TYPE_L2NIC 0
+#define DEVICE_TYPE_NVME 1
+#define DEVICE_TYPE_VIRTIO_NET 2
+#define DEVICE_TYPE_VIRTIO_BLK 3
+#define DEVICE_TYPE_VIRTIO_VSOCK 4
+#define DEVICE_TYPE_VIRTIO_NET_TRANSITION 5
+#define DEVICE_TYPE_VIRTIO_BLK_TRANSITION 6
+#define DEVICE_TYPE_VIRTIO_SCSI_TRANSITION 7
+#define DEVICE_TYPE_VIRTIO_HPC 8
+#define DEVICE_TYPE_VIRTIO_FS 9
+
+#define IS_STORAGE_DEVICE_TYPE(dev_type) \
+ ((dev_type) == DEVICE_TYPE_VIRTIO_BLK || \
+ (dev_type) == DEVICE_TYPE_VIRTIO_BLK_TRANSITION || \
+ (dev_type) == DEVICE_TYPE_VIRTIO_SCSI_TRANSITION || \
+ (dev_type) == DEVICE_TYPE_VIRTIO_FS)
+
+#define MGMT_MSG_CMD_OP_SET 1
+#define MGMT_MSG_CMD_OP_GET 0
+
+#define MGMT_MSG_CMD_OP_START 1
+#define MGMT_MSG_CMD_OP_STOP 0
+
+#define HOT_REPLACE_PARTITION_NUM 2
+
+enum hinic3_svc_type {
+ SVC_T_COMM = 0,
+ SVC_T_NIC,
+ SVC_T_OVS,
+ SVC_T_ROCE,
+ SVC_T_TOE,
+ SVC_T_IOE,
+ SVC_T_FC,
+ SVC_T_VBS,
+ SVC_T_IPSEC,
+ SVC_T_VIRTIO,
+ SVC_T_MIGRATE,
+ SVC_T_PPA,
+ SVC_T_MAX,
+};
+
+/**
+ * Common header control information of the COMM message interaction command word between the driver and PF
+ * struct mgmt_msg_head and struct comm_info_head are the same structure
+ */
+struct mgmt_msg_head {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+};
+
+/**
+ * Common header control information of the COMM message interaction command word between the driver and PF
+ */
+struct comm_info_head {
+ /** response status code, 0: success, others: error code */
+ u8 status;
+
+ /** firmware version for command */
+ u8 version;
+
+ /** response aeq number, unused for now */
+ u8 rep_aeq_num;
+ u8 rsvd[5];
+};
+
+
+static inline u32 get_function_partition(u32 function_id, u32 port_num)
+{
+ return (function_id / port_num) % HOT_REPLACE_PARTITION_NUM;
+}
+
+static inline u32 is_primary_function(u32 function_id, u32 port_num)
+{
+ return (function_id / port_num) % HOT_REPLACE_PARTITION_NUM == 0;
+}
+
+static inline u32 mpu_nic_get_primary_function(u32 function_id, u32 port_num)
+{
+ return ((function_id / port_num) % HOT_REPLACE_PARTITION_NUM == 0) ?
+ function_id : (function_id - port_num);
+}
+
+// when func_id is in partition 0/1, it will get its another func_id in partition 1/0
+static inline u32 mpu_nic_get_backup_function(u32 function_id, u32 port_num)
+{
+ return ((function_id / port_num) % HOT_REPLACE_PARTITION_NUM == 0) ?
+ (function_id + port_num) : (function_id - port_num);
+}
+
+
+#endif
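
A short sketch (not part of the patch) of how the hot-replace partition helpers
above behave, assuming a 2-port board (port_num == 2); the pr_info reporting is
illustrative only:

	#include <linux/printk.h>
	#include <linux/types.h>

	static void example_partition_mapping(void)
	{
		u32 port_num = 2;	/* assumed 2-port board */
		u32 func_id = 5;	/* (5 / 2) % 2 == 0, so partition 0 (primary) */

		if (is_primary_function(func_id, port_num))
			/* backup id for func 5 is 5 + port_num == 7 */
			pr_info("func %u backup is %u\n", func_id,
				mpu_nic_get_backup_function(func_id, port_num));
		else
			/* primary id would be func_id - port_num */
			pr_info("func %u primary is %u\n", func_id,
				mpu_nic_get_primary_function(func_id, port_num));
	}
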
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd.h
new file mode 100644
index 0000000..fd0401f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd.h
@@ -0,0 +1,192 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MPU_INBAND_CMD_H
+#define MPU_INBAND_CMD_H
+
+enum hinic3_mgmt_cmd {
+ COMM_MGMT_CMD_FUNC_RESET = 0, /* reset function @see comm_cmd_func_reset */
+ COMM_MGMT_CMD_FEATURE_NEGO, /* feature negotiation @see comm_cmd_feature_nego */
+ COMM_MGMT_CMD_FLUSH_DOORBELL, /* clear doorbell @see comm_cmd_clear_doorbell */
+ COMM_MGMT_CMD_START_FLUSH, /* clear stateful business txrx resource
+ * @see comm_cmd_clear_resource
+ */
+ COMM_MGMT_CMD_SET_FUNC_FLR, /* set function flr @see comm_cmd_func_flr_set */
+ COMM_MGMT_CMD_GET_GLOBAL_ATTR, /* get global attr @see comm_cmd_get_glb_attr */
+ COMM_MGMT_CMD_SET_PPF_FLR_TYPE, /* set ppf flr type @see comm_cmd_ppf_flr_type_set */
+ COMM_MGMT_CMD_SET_FUNC_SVC_USED_STATE, /* set function service used state
+ * @see comm_cmd_func_svc_used_state
+ */
+ COMM_MGMT_CMD_START_FLR, /* MPU not use */
+
+ COMM_MGMT_CMD_CFG_MSIX_NUM = 10, /**< set msix num @see comm_cmd_cfg_msix_num */
+
+ COMM_MGMT_CMD_SET_CMDQ_CTXT = 20, /* set commandq context @see comm_cmd_cmdq_ctxt */
+ COMM_MGMT_CMD_SET_VAT, /**< set vat table info @see comm_cmd_root_ctxt */
+ COMM_MGMT_CMD_CFG_PAGESIZE, /**< set rootctx pagesize @see comm_cmd_wq_page_size */
+ COMM_MGMT_CMD_CFG_MSIX_CTRL_REG, /* config msix ctrl register @see comm_cmd_msix_config */
+ COMM_MGMT_CMD_SET_CEQ_CTRL_REG, /**< set ceq ctrl register @see comm_cmd_ceq_ctrl_reg */
+ COMM_MGMT_CMD_SET_DMA_ATTR, /**< set PF/VF DMA table attr @see comm_cmd_dma_attr_config */
+ COMM_MGMT_CMD_SET_PPF_TBL_HTR_FLG, /* set PPF func table os hotreplace flag
+ * @see comm_cmd_ppf_tbl_htrp_config
+ */
+
+ COMM_MGMT_CMD_GET_MQM_FIX_INFO = 40, /**< get mqm fix info @see comm_cmd_get_eqm_num */
+ COMM_MGMT_CMD_SET_MQM_CFG_INFO, /**< set mqm config info @see comm_cmd_eqm_cfg */
+ COMM_MGMT_CMD_SET_MQM_SRCH_GPA, /* set mqm search gpa info @see comm_cmd_eqm_search_gpa */
+ COMM_MGMT_CMD_SET_PPF_TMR, /**< set ppf tmr @see comm_cmd_ppf_tmr_op */
+ COMM_MGMT_CMD_SET_PPF_HT_GPA, /**< set ppf ht gpa @see comm_cmd_ht_gpa */
+ COMM_MGMT_CMD_SET_FUNC_TMR_BITMAT, /* @see comm_cmd_func_tmr_bitmap_op */
+ COMM_MGMT_CMD_SET_MBX_CRDT, /**< reserved */
+ COMM_MGMT_CMD_CFG_TEMPLATE, /**< config template @see comm_cmd_cfg_template */
+ COMM_MGMT_CMD_SET_MQM_LIMIT, /**< set mqm limit @see comm_cmd_set_mqm_limit */
+
+ COMM_MGMT_CMD_GET_FW_VERSION = 60, /**< get firmware version @see comm_cmd_get_fw_version */
+ COMM_MGMT_CMD_GET_BOARD_INFO, /**< get board info @see comm_cmd_board_info */
+ COMM_MGMT_CMD_SYNC_TIME, /**< synchronize host time to MPU @see comm_cmd_sync_time */
+ COMM_MGMT_CMD_GET_HW_PF_INFOS, /**< get pf info @see comm_cmd_hw_pf_infos */
+ COMM_MGMT_CMD_SEND_BDF_INFO, /**< send bdf info @see comm_cmd_bdf_info */
+ COMM_MGMT_CMD_GET_VIRTIO_BDF_INFO, /**< get virtio bdf info @see mpu_pcie_device_info_s */
+ COMM_MGMT_CMD_GET_SML_TABLE_INFO, /**< get sml table info @see comm_cmd_get_sml_tbl_data */
+ COMM_MGMT_CMD_GET_SDI_INFO, /**< get sdi info @see comm_cmd_sdi_info */
+ COMM_MGMT_CMD_ROOT_CTX_LOAD, /* get root context info @see comm_cmd_root_ctx_load_req_s */
+ COMM_MGMT_CMD_GET_HW_BOND, /**< get bond info @see comm_cmd_hw_bond_infos */
+
+ COMM_MGMT_CMD_UPDATE_FW = 80, /* update firmware @see cmd_update_fw @see comm_info_head */
+ COMM_MGMT_CMD_ACTIVE_FW, /**< cold active firmware @see cmd_active_firmware */
+ COMM_MGMT_CMD_HOT_ACTIVE_FW, /**< hot active firmware @see cmd_hot_active_fw */
+ COMM_MGMT_CMD_HOT_ACTIVE_DONE_NOTICE, /**< reserved */
+ COMM_MGMT_CMD_SWITCH_CFG, /**< switch config file @see cmd_switch_cfg */
+ COMM_MGMT_CMD_CHECK_FLASH, /**< check flash @see comm_info_check_flash */
+ COMM_MGMT_CMD_CHECK_FLASH_RW, /* check whether flash reads and writes normally
+ * @see comm_cmd_hw_bond_infos
+ */
+ COMM_MGMT_CMD_RESOURCE_CFG, /**< reserved */
+ COMM_MGMT_CMD_UPDATE_BIOS, /**< update bios firmware @see cmd_update_fw */
+ COMM_MGMT_CMD_MPU_GIT_CODE, /**< get mpu git tag @see cmd_get_mpu_git_code */
+
+ COMM_MGMT_CMD_FAULT_REPORT = 100, /**< report fault event to driver */
+ COMM_MGMT_CMD_WATCHDOG_INFO, /* report software watchdog timeout to driver
+ * @see comm_info_sw_watchdog
+ */
+ COMM_MGMT_CMD_MGMT_RESET, /**< report mpu chip reset to driver */
+ COMM_MGMT_CMD_FFM_SET, /* report except interrupt to driver */
+
+ COMM_MGMT_CMD_GET_LOG = 120, /* get the log of the dictionary @see nic_log_info_request */
+ COMM_MGMT_CMD_TEMP_OP, /* temperature operation @see comm_temp_in_info
+ * @see comm_temp_out_info
+ */
+ COMM_MGMT_CMD_EN_AUTO_RST_CHIP, /* @see comm_cmd_enable_auto_rst_chip */
+ COMM_MGMT_CMD_CFG_REG, /**< reserved */
+ COMM_MGMT_CMD_GET_CHIP_ID, /**< get chip id @see comm_chip_id_info */
+ COMM_MGMT_CMD_SYSINFO_DFX, /**< reserved */
+ COMM_MGMT_CMD_PCIE_DFX_NTC, /**< reserved */
+ COMM_MGMT_CMD_DICT_LOG_STATUS, /* @see mpu_log_status_info */
+ COMM_MGMT_CMD_MSIX_INFO, /**< read msix map table @see comm_cmd_msix_info */
+ COMM_MGMT_CMD_CHANNEL_DETECT, /**< auto channel detect @see comm_cmd_channel_detect */
+ COMM_MGMT_CMD_DICT_COUNTER_STATUS, /**< get flash counter status @see flash_counter_info */
+ COMM_MGMT_CMD_UCODE_SM_COUNTER, /* get ucode sm counter @see comm_read_ucode_sm_req
+ * @see comm_read_ucode_sm_resp
+ */
+ COMM_MGMT_CMD_CLEAR_LOG, /**< clear log @see comm_cmd_clear_log_s */
+ COMM_MGMT_CMD_UCODE_SM_COUNTER_PER,
+ /**< get ucode sm counter @see struct comm_read_ucode_sm_per_req
+ * @see struct comm_read_ucode_sm_per_resp
+ */
+
+ COMM_MGMT_CMD_CHECK_IF_SWITCH_WORKMODE = 140, /* check if switch workmode reserved
+ * @see comm_cmd_check_if_switch_workmode
+ */
+ COMM_MGMT_CMD_SWITCH_WORKMODE, /* switch workmode reserved @see comm_cmd_switch_workmode */
+
+ COMM_MGMT_CMD_MIGRATE_DFX_HPA = 150, /* query migrate variable @see comm_cmd_migrate_dfx */
+ COMM_MGMT_CMD_BDF_INFO, /**< get bdf info @see cmd_get_bdf_info_s */
+ COMM_MGMT_CMD_NCSI_CFG_INFO_GET_PROC, /**< get ncsi config info @see comm_cmd_ncsi_cfg_s */
+ COMM_MGMT_CMD_CPI_TCAM_DBG, /* enable or disable the scheduled cpi tcam task,
+ * set task interval time @see comm_cmd_cpi_tcam_dbg_s
+ */
+ COMM_MGMT_CMD_LLDP_TX_FUNC_SET,
+
+ COMM_MGMT_CMD_SECTION_RSVD_0 = 160, /**< rsvd0 section */
+ COMM_MGMT_CMD_SECTION_RSVD_1 = 170, /**< rsvd1 section */
+ COMM_MGMT_CMD_SECTION_RSVD_2 = 180, /**< rsvd2 section */
+ COMM_MGMT_CMD_SECTION_RSVD_3 = 190, /**< rsvd3 section */
+
+ COMM_MGMT_CMD_GET_TDIE_ID = 199, /**< get totem die id @see comm_cmd_get_totem_die_id */
+ COMM_MGMT_CMD_GET_UDIE_ID = 200, /**< get unicorn die id @see comm_cmd_get_die_id */
+ COMM_MGMT_CMD_GET_EFUSE_TEST, /**< reserved */
+ COMM_MGMT_CMD_EFUSE_INFO_CFG, /**< set efuse config @see comm_efuse_cfg_info */
+ COMM_MGMT_CMD_GPIO_CTL, /**< reserved */
+ COMM_MGMT_CMD_HI30_SERLOOP_START, /* set serloop start @see comm_cmd_hi30_serloop */
+ COMM_MGMT_CMD_HI30_SERLOOP_STOP, /* set serloop stop @see comm_cmd_hi30_serloop */
+ COMM_MGMT_CMD_HI30_MBIST_SET_FLAG, /**< reserved */
+ COMM_MGMT_CMD_HI30_MBIST_GET_RESULT, /**< reserved */
+ COMM_MGMT_CMD_ECC_TEST, /**< reserved */
+ COMM_MGMT_CMD_FUNC_BIST_TEST, /**< reserved */
+
+ COMM_MGMT_CMD_VPD_SET = 210, /**< reserved */
+ COMM_MGMT_CMD_VPD_GET, /**< reserved */
+
+ COMM_MGMT_CMD_ERASE_FLASH, /**< erase flash sector @see cmd_sector_info */
+ COMM_MGMT_CMD_QUERY_FW_INFO, /**< get firmware info @see cmd_query_fw */
+ COMM_MGMT_CMD_GET_CFG_INFO, /* get cfg in flash reserved @see comm_cmd_get_cfg_info_t */
+ COMM_MGMT_CMD_GET_UART_LOG, /* collect hinicshell log @see nic_cmd_get_uart_log_info */
+ COMM_MGMT_CMD_SET_UART_CMD, /* hinicshell command to mpu @see nic_cmd_set_uart_log_cmd */
+ COMM_MGMT_CMD_SPI_TEST, /**< reserved */
+
+ /* TODO: ALL reg read/write merge to COMM_MGMT_CMD_CFG_REG */
+ COMM_MGMT_CMD_MPU_REG_GET, /**< get mpu register value @see dbgtool_up_reg_opt_info */
+ COMM_MGMT_CMD_MPU_REG_SET, /**< set mpu register value @see dbgtool_up_reg_opt_info */
+
+ COMM_MGMT_CMD_REG_READ = 220, /**< read register value @see comm_info_reg_read_write */
+ COMM_MGMT_CMD_REG_WRITE, /**< write register value @see comm_info_reg_read_write */
+ COMM_MGMT_CMD_MAG_REG_WRITE, /**< write mag register value @see comm_info_dfx_mag_reg */
+ COMM_MGMT_CMD_ANLT_REG_WRITE, /**< read register value @see comm_info_dfx_anlt_reg */
+
+ COMM_MGMT_CMD_HEART_EVENT, /**< ncsi heart event @see comm_cmd_heart_event */
+ COMM_MGMT_CMD_NCSI_OEM_GET_DRV_INFO, /**< ncsi oem get driver info */
+ COMM_MGMT_CMD_LASTWORD_GET, /**< report lastword to driver @see comm_info_up_lastword_s */
+ COMM_MGMT_CMD_READ_BIN_DATA, /**< reserved */
+ COMM_MGMT_CMD_GET_REG_VAL, /**< read register value @see comm_cmd_mbox_csr_rd_req */
+ COMM_MGMT_CMD_SET_REG_VAL, /**< write register value @see comm_cmd_mbox_csr_wt_req */
+
+ /* TODO: check if needed */
+ COMM_MGMT_CMD_SET_VIRTIO_DEV = 230, /* set the virtio device
+ * @see comm_cmd_set_virtio_dev
+ */
+ COMM_MGMT_CMD_SET_MAC, /**< set mac address @see comm_info_mac */
+ /* MPU patch cmd */
+ COMM_MGMT_CMD_LOAD_PATCH, /**< load hot patch @see cmd_update_fw */
+ COMM_MGMT_CMD_REMOVE_PATCH, /**< remove hot patch @see cmd_patch_remove */
+ COMM_MGMT_CMD_PATCH_ACTIVE, /**< activate hot patch @see cmd_patch_active */
+ COMM_MGMT_CMD_PATCH_DEACTIVE, /**< deactivate hot patch @see cmd_patch_deactive */
+ COMM_MGMT_CMD_PATCH_SRAM_OPTIMIZE, /**< set hot patch sram optimize */
+ /* container host process */
+ COMM_MGMT_CMD_CONTAINER_HOST_PROC, /* container host process reserved
+ * @see comm_cmd_con_sel_sta
+ */
+ /* ncsi counter */
+ COMM_MGMT_CMD_NCSI_COUNTER_PROC, /* get ncsi counter @see nsci_counter_in_info_s */
+ COMM_MGMT_CMD_CHANNEL_STATUS_CHECK, /* check channel status reserved
+ * @see channel_status_check_info_s
+ */
+
+ COMM_MGMT_CMD_RSVD_0 = 240, /**< hot patch reserved cmd */
+ COMM_MGMT_CMD_RSVD_1, /**< hot patch reserved cmd */
+ COMM_MGMT_CMD_RSVD_2, /**< hot patch reserved cmd */
+ COMM_MGMT_CMD_RSVD_3, /**< hot patch reserved cmd */
+ COMM_MGMT_CMD_RSVD_4, /**< hot patch reserved cmd */
+ COMM_MGMT_CMD_SEND_API_ACK_BY_UP, /**< reserved */
+
+ /* for tool ver compatible info */
+ COMM_MGMT_CMD_GET_VER_COMPATIBLE_INFO = 254, /* get compatible info
+ * @see comm_cmd_compatible_info
+ */
+ /* When adding a command word, you cannot change the value of an existing command word.
+ * Add the command word in the rsvd section. In principle,
+ * the cmd tables of all branches are the same.
+ */
+ COMM_MGMT_CMD_MAX = 255,
+};
+
+#endif
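
The command ids above are always carried behind a struct mgmt_msg_head (see
mpu_cmd_base_defs.h); a hedged sketch of the usual request/response pattern,
where hinic3_send_mgmt_cmd() is a hypothetical transport helper, not an API
added by this patch, and comm_cmd_sync_time comes from mpu_inband_cmd_defs.h:

	#include <linux/errno.h>
	#include <linux/types.h>

	/* hypothetical transport helper, assumed for illustration only */
	int hinic3_send_mgmt_cmd(void *hwdev, u8 mod, u16 cmd, void *buf, u32 size);

	static int example_sync_time(void *hwdev, u64 now_ms)
	{
		struct comm_cmd_sync_time cmd = { 0 };
		int err;

		cmd.mstime = now_ms;
		err = hinic3_send_mgmt_cmd(hwdev, HINIC3_MOD_COMM,
					   COMM_MGMT_CMD_SYNC_TIME,
					   &cmd, sizeof(cmd));
		if (err)
			return err;

		return cmd.head.status ? -EIO : 0;	/* MPU echoes a status byte */
	}
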
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd_defs.h
new file mode 100644
index 0000000..fd3a7dd
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_inband_cmd_defs.h
@@ -0,0 +1,1104 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MPU_INBAND_CMD_DEFS_H
+#define MPU_INBAND_CMD_DEFS_H
+
+#include "mpu_cmd_base_defs.h"
+#include "mpu_outband_ncsi_cmd_defs.h"
+
+#define HARDWARE_ID_1XX3V100_TAG 31 /* 1xx3v100 tag */
+#define DUMP_16B_PER_LINE 16
+#define DUMP_8_VAR_PER_LINE 8
+#define DUMP_4_VAR_PER_LINE 4
+#define FW_UPDATE_MGMT_TIMEOUT 3000000U
+
+#define FUNC_RESET_FLAG_MAX_VALUE ((1U << (RES_TYPE_MAX + 1)) - 1)
+struct comm_cmd_func_reset {
+ struct mgmt_msg_head head;
+ u16 func_id; /**< function id */
+ u16 rsvd1[3];
+ u64 reset_flag; /**< reset function type flag @see enum func_reset_flag_e */
+};
+
+enum {
+ COMM_F_API_CHAIN = 1U << 0,
+ COMM_F_CLP = 1U << 1,
+ COMM_F_CHANNEL_DETECT = 1U << 2,
+ COMM_F_MBOX_SEGMENT = 1U << 3,
+ COMM_F_CMDQ_NUM = 1U << 4,
+ COMM_F_VIRTIO_VQ_SIZE = 1U << 5,
+};
+
+#define COMM_MAX_FEATURE_QWORD 4
+enum COMM_FEATURE_NEGO_OPCODE {
+ COMM_FEATURE_NEGO_OPCODE_GET = 0,
+ COMM_FEATURE_NEGO_OPCODE_SET = 1
+};
+
+struct comm_cmd_feature_nego {
+ struct mgmt_msg_head head;
+ u16 func_id; /**< function id */
+ u8 opcode; /**< operate type 0: get, 1: set */
+ u8 rsvd;
+ u64 s_feature[COMM_MAX_FEATURE_QWORD]; /**< feature info */
+};
+
+struct comm_cmd_func_flr_set {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u8 type; /**< 1: flr enable */
+ u8 isall; /**< flr type 0: specify PF and associated VF flr, 1: all functions flr */
+ u32 rsvd;
+};
+
+struct comm_cmd_clear_doorbell {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u16 rsvd1[3];
+};
+
+struct comm_cmd_clear_resource {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u16 rsvd1[3];
+};
+
+struct comm_global_attr {
+ u8 max_host_num; /**< maximum number of host */
+ u8 max_pf_num; /**< maximum number of pf */
+ u16 vf_id_start; /**< VF function id start */
+
+ u8 mgmt_host_node_id; /**< node id */
+ u8 cmdq_num; /**< cmdq num */
+ u8 rsvd1[2];
+ u32 rsvd2[8];
+};
+
+struct comm_cmd_get_glb_attr {
+ struct mgmt_msg_head head;
+ struct comm_global_attr attr; /**< global attr @see struct comm_global_attr */
+};
+
+struct comm_cmd_ppf_flr_type_set {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 func_service_type;
+ u8 rsvd1;
+ u32 ppf_flr_type; /**< function flr type 1:stateful 0:stateless */
+};
+
+struct comm_cmd_func_svc_used_state {
+ struct mgmt_msg_head head;
+ u16 func_id;
+ u16 svc_type;
+ u8 used_state;
+ u8 rsvd[35];
+};
+
+struct comm_cmd_cfg_msix_num {
+ struct comm_info_head head;
+
+ u16 func_id;
+ u8 op_code; /**< operate type 1: alloc 0: free */
+ u8 rsvd0;
+
+ u16 msix_num;
+ u16 rsvd1;
+};
+
+struct cmdq_ctxt_info {
+ u64 curr_wqe_page_pfn;
+ u64 wq_block_pfn;
+};
+
+struct comm_cmd_cmdq_ctxt {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 cmdq_id;
+ u8 rsvd1[5];
+
+ struct cmdq_ctxt_info ctxt;
+};
+
+struct comm_cmd_root_ctxt {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 set_cmdq_depth;
+ u8 cmdq_depth;
+ u16 rx_buf_sz;
+ u8 lro_en;
+ u8 rsvd1;
+ u16 sq_depth;
+ u16 rq_depth;
+ u64 rsvd2;
+};
+
+struct comm_cmd_wq_page_size {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u8 opcode; /**< operate type 0:get , 1:set */
+ /* real_size=4KB*2^page_size, range(0~20) must be checked by driver */
+ u8 page_size;
+
+ u32 rsvd1;
+};
+
+struct comm_cmd_msix_config {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u8 opcode; /**< operate type 0:get , 1:set */
+ u8 rsvd1;
+ u16 msix_index;
+ u8 pending_cnt;
+ u8 coalesce_timer_cnt;
+ u8 resend_timer_cnt;
+ u8 lli_timer_cnt;
+ u8 lli_credit_cnt;
+ u8 rsvd2[5];
+};
+
+struct comm_cmd_ceq_ctrl_reg {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u16 q_id;
+ u32 ctrl0;
+ u32 ctrl1;
+ u32 rsvd1;
+};
+
+struct comm_cmd_dma_attr_config {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u8 entry_idx;
+ u8 st;
+ u8 at;
+ u8 ph;
+ u8 no_snooping;
+ u8 tph_en;
+ u32 resv1;
+};
+
+struct comm_cmd_ppf_tbl_htrp_config {
+ struct mgmt_msg_head head;
+
+ u32 hotreplace_flag;
+};
+
+struct comm_cmd_get_eqm_num {
+ struct mgmt_msg_head head;
+
+ u8 host_id; /**< host id */
+ u8 rsvd1[3];
+ u32 chunk_num;
+ u32 search_gpa_num;
+};
+
+struct comm_cmd_eqm_cfg {
+ struct mgmt_msg_head head;
+
+ u8 host_id; /**< host id */
+ u8 valid; /**< 0:clear config , 1:set config */
+ u16 rsvd1;
+ u32 page_size; /**< page size */
+ u32 rsvd2;
+};
+
+struct comm_cmd_eqm_search_gpa {
+ struct mgmt_msg_head head;
+
+ u8 host_id; /**< host id, deprecated and not used */
+ u8 rsvd1[3];
+ u32 start_idx; /**< start index */
+ u32 num;
+ u32 rsvd2;
+ u64 gpa_hi52[0]; /**< gpa data */
+};
+
+struct comm_cmd_ppf_tmr_op {
+ struct mgmt_msg_head head;
+
+ u8 ppf_id; /**< ppf function id */
+ u8 opcode; /**< operation type 1: start timer, 0: stop timer */
+ u8 rsvd1[6];
+};
+
+struct comm_cmd_ht_gpa {
+ struct mgmt_msg_head head;
+
+ u8 host_id; /**< host id */
+ u8 rsvd0[3];
+ u32 rsvd1[7];
+ u64 page_pa0;
+ u64 page_pa1;
+};
+
+struct comm_cmd_func_tmr_bitmap_op {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u8 opcode; /**< operation type 1: start timer, 0: stop timer */
+ u8 rsvd1[5];
+};
+
+#define DD_CFG_TEMPLATE_MAX_IDX 12
+#define DD_CFG_TEMPLATE_MAX_TXT_LEN 64
+#define CFG_TEMPLATE_OP_QUERY 0
+#define CFG_TEMPLATE_OP_SET 1
+#define CFG_TEMPLATE_SET_MODE_BY_IDX 0
+#define CFG_TEMPLATE_SET_MODE_BY_NAME 1
+
+struct comm_cmd_cfg_template {
+ struct mgmt_msg_head head;
+ u8 opt_type; /**< operation type 0: query 1: set */
+ u8 set_mode; /**< set mode 0:index mode 1:name mode. */
+ u8 tp_err;
+ u8 rsvd0;
+
+ u8 cur_index; /**< current cfg template index. */
+ u8 cur_max_index; /**< max supported cfg template index. */
+ u8 rsvd1[2];
+ u8 cur_name[DD_CFG_TEMPLATE_MAX_TXT_LEN]; /**< current cfg template name. */
+ u8 cur_cfg_temp_info[DD_CFG_TEMPLATE_MAX_IDX][DD_CFG_TEMPLATE_MAX_TXT_LEN];
+
+ u8 next_index; /**< next reset cfg template index. */
+ u8 next_max_index; /**< max supported cfg template index. */
+ u8 rsvd2[2];
+ u8 next_name[DD_CFG_TEMPLATE_MAX_TXT_LEN]; /**< next reset cfg template name. */
+ u8 next_cfg_temp_info[DD_CFG_TEMPLATE_MAX_IDX][DD_CFG_TEMPLATE_MAX_TXT_LEN];
+};
+
+#define MQM_SUPPORT_COS_NUM 8
+#define MQM_INVALID_WEIGHT 256
+#define MQM_LIMIT_SET_FLAG_READ 0
+#define MQM_LIMIT_SET_FLAG_WRITE 1
+struct comm_cmd_set_mqm_limit {
+ struct mgmt_msg_head head;
+
+ u16 set_flag; /**< operation type 0: read 1: write */
+ u16 func_id; /**< function id */
+ /* Indicates the weight of cos_id. The value ranges from 0 to 255.
+ * The value 0 indicates SP scheduling.
+ */
+ u16 cos_weight[MQM_SUPPORT_COS_NUM]; /**< cos weight range[0,255] */
+ u32 host_min_rate; /**< current host minimum rate */
+ u32 func_min_rate; /**< current function minimum rate,unit:Mbps */
+ u32 func_max_rate; /**< current function maximum rate,unit:Mbps */
+ u8 rsvd[64]; /* Reserved */
+};
+
+#define HINIC3_FW_VERSION_LEN 16
+#define HINIC3_FW_COMPILE_TIME_LEN 20
+
+enum hinic3_fw_ver_type {
+ HINIC3_FW_VER_TYPE_BOOT,
+ HINIC3_FW_VER_TYPE_MPU,
+ HINIC3_FW_VER_TYPE_NPU,
+ HINIC3_FW_VER_TYPE_SMU_L0,
+ HINIC3_FW_VER_TYPE_SMU_L1,
+ HINIC3_FW_VER_TYPE_CFG,
+};
+
+struct comm_cmd_get_fw_version {
+ struct mgmt_msg_head head;
+
+ u16 fw_type; /**< firmware type @see enum hinic3_fw_ver_type */
+ u16 fw_dfx_vld : 1; /**< 0: release, 1: debug */
+ u16 rsvd1 : 15;
+ u8 ver[HINIC3_FW_VERSION_LEN]; /**< firmware version */
+ u8 time[HINIC3_FW_COMPILE_TIME_LEN]; /**< firmware compile time */
+};
+
+struct hinic3_board_info {
+ u8 board_type; /**< board type */
+ u8 port_num; /**< current port number */
+ u8 port_speed; /**< port speed */
+ u8 pcie_width; /**< pcie width */
+ u8 host_num; /**< host number */
+ u8 pf_num; /**< pf number */
+ u16 vf_total_num; /**< vf total number */
+ u8 tile_num; /**< tile number */
+ u8 qcm_num; /**< qcm number */
+ u8 core_num; /**< core number */
+ u8 work_mode; /**< work mode */
+ u8 service_mode; /**< service mode */
+ u8 pcie_mode; /**< pcie mode */
+ u8 boot_sel; /**< boot sel */
+ u8 board_id; /**< board id */
+ u32 rsvd;
+ u32 service_en_bitmap; /**< service en bitmap */
+ u8 scenes_id; /**< scenes id */
+ u8 cfg_template_id; /**< cfg template index */
+ u8 hardware_id; /**< hardware id */
+ u8 spu_en; /**< spu enable flag */
+ u16 pf_vendor_id; /**< pf vendor id */
+ u8 tile_bitmap; /**< used tile bitmap */
+ u8 sm_bitmap; /**< used sm bitmap */
+};
+
+struct comm_cmd_board_info {
+ struct mgmt_msg_head head;
+
+ struct hinic3_board_info info; /**< board info @see struct hinic3_board_info */
+ u32 rsvd[22];
+};
+
+struct comm_cmd_sync_time {
+ struct mgmt_msg_head head;
+
+ u64 mstime; /**< time,unit:ms */
+ u64 rsvd1;
+};
+
+struct hw_pf_info {
+ u16 glb_func_idx; /**< function id */
+ u16 glb_pf_vf_offset;
+ u8 p2p_idx;
+ u8 itf_idx; /**< host id */
+ u16 max_vfs; /**< max vf number */
+ u16 max_queue_num; /**< max queue number */
+ u16 vf_max_queue_num;
+ u16 port_id;
+ u16 rsvd0;
+ u32 pf_service_en_bitmap;
+ u32 vf_service_en_bitmap;
+ u16 rsvd1[2];
+
+ u8 device_type;
+ u8 bus_num; /**< bdf info */
+ u16 vf_stride; /**< vf stride */
+ u16 vf_offset; /**< vf offset */
+ u8 rsvd[2];
+};
+
+#define CMD_MAX_MAX_PF_NUM 32
+struct hinic3_hw_pf_infos {
+ u8 num_pfs; /**< pf number */
+ u8 rsvd1[3];
+
+ struct hw_pf_info infos[CMD_MAX_MAX_PF_NUM]; /**< pf info @see struct hw_pf_info */
+};
+
+struct comm_cmd_hw_pf_infos {
+ struct mgmt_msg_head head;
+
+ struct hinic3_hw_pf_infos infos; /**< all pf info @see struct hinic3_hw_pf_infos */
+};
+
+struct comm_cmd_bdf_info {
+ struct mgmt_msg_head head;
+
+ u16 function_idx; /**< function id */
+ u8 rsvd1[2];
+ u8 bus; /**< bus info */
+ u8 device; /**< device info */
+ u8 function; /**< function info */
+ u8 rsvd2[5];
+};
+
+#define TABLE_INDEX_MAX 129
+struct sml_table_id_info {
+ u8 node_id;
+ u8 instance_id;
+};
+
+struct comm_cmd_get_sml_tbl_data {
+ struct comm_info_head head; /* 8B */
+ u8 tbl_data[512]; /**< sml table data */
+};
+
+struct comm_cmd_sdi_info {
+ struct mgmt_msg_head head;
+ u32 cfg_sdi_mode; /**< host mode, 0:normal 1:virtual machine 2:bare metal */
+};
+
+#define HINIC_OVS_BOND_DEFAULT_ID 1
+struct hinic3_hw_bond_infos {
+ u8 bond_id;
+ u8 valid;
+ u8 rsvd1[2];
+};
+
+struct comm_cmd_hw_bond_infos {
+ struct mgmt_msg_head head;
+ struct hinic3_hw_bond_infos infos; /**< bond info @see struct hinic3_hw_bond_infos */
+};
+
+/* Tool data payload is 1536 bytes (1.5 KB); the tool sends at most 2 KB including the header */
+struct cmd_update_fw {
+ struct comm_info_head head; // 8B
+ u16 fw_flag; /**< subfirmware flag, bit 0: last slice flag, bit 1 first slice flag */
+ u16 slice_len; /**< current slice length */
+ u32 fw_crc; /**< subfirmware crc */
+ u32 fw_type; /**< subfirmware type */
+ u32 bin_total_len; /**< total firmware length, only first slice is effective */
+ u32 bin_section_len; /**< subfirmware length */
+ u32 fw_verion; /**< subfirmware version */
+ u32 fw_offset; /**< current slice offset of current subfirmware */
+ u32 data[0]; /**< data */
+};
+
+struct cmd_switch_cfg {
+ struct comm_info_head msg_head;
+ u8 index; /**< index, range[0,7] */
+ u8 data[7];
+};
+
+struct cmd_active_firmware {
+ struct comm_info_head msg_head;
+ u8 index; /* 0 ~ 7 */
+ u8 data[7];
+};
+
+#define HOT_ACTIVE_MPU 1
+#define HOT_ACTIVE_NPU 2
+#define HOT_ACTIVE_MNPU 3
+struct cmd_hot_active_fw {
+ struct comm_info_head head;
+ u32 type; /**< hot active firmware type 1: mpu; 2: ucode; 3: mpu & npu */
+ u32 data[3];
+};
+
+#define FLASH_CHECK_OK 1
+#define FLASH_CHECK_ERR 2
+#define FLASH_CHECK_DISMATCH 3
+
+struct comm_info_check_flash {
+ struct comm_info_head head;
+
+ u8 status; /**< flash check status */
+ u8 rsv[3];
+};
+
+struct cmd_get_mpu_git_code {
+ struct comm_info_head head; /* 8B */
+ u32 rsvd; /* reserve */
+ char mpu_git_code[64]; /**< mpu git tag and compile time */
+};
+
+#define DATA_LEN_1K 1024
+struct comm_info_sw_watchdog {
+ struct comm_info_head head;
+
+ u32 curr_time_h; /**< time when the infinite loop occurred, high cycles */
+ u32 curr_time_l; /**< time when the infinite loop occurred, low cycles */
+ u32 task_id; /**< id of the task that was in the infinite loop */
+ u32 rsv;
+
+ u64 pc;
+
+ u64 elr;
+ u64 spsr;
+ u64 far;
+ u64 esr;
+ u64 xzr;
+ u64 x30;
+ u64 x29;
+ u64 x28;
+ u64 x27;
+ u64 x26;
+ u64 x25;
+ u64 x24;
+ u64 x23;
+ u64 x22;
+ u64 x21;
+ u64 x20;
+ u64 x19;
+ u64 x18;
+ u64 x17;
+ u64 x16;
+ u64 x15;
+ u64 x14;
+ u64 x13;
+ u64 x12;
+ u64 x11;
+ u64 x10;
+ u64 x09;
+ u64 x08;
+ u64 x07;
+ u64 x06;
+ u64 x05;
+ u64 x04;
+ u64 x03;
+ u64 x02;
+ u64 x01;
+ u64 x00;
+
+ u64 stack_top; /**< stack top */
+ u64 stack_bottom; /**< stack bottom */
+ u64 sp; /**< sp pointer */
+ u32 curr_used; /**< the size currently used by the stack */
+ u32 peak_used; /**< historical peak of stack usage */
+ u32 is_overflow; /**< stack overflow flag */
+
+ u32 stack_actlen; /**< actual stack length(<=1024) */
+ u8 stack_data[DATA_LEN_1K]; /* If the value exceeds 1024, it will be truncated. */
+};
+
+struct nic_log_info_request {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u32 offset;
+ u8 log_or_index; /* 0:log 1:index */
+ u8 type; /* log type 0:up 1:ucode 2:smu 3:mpu lastword 4.npu lastword */
+ u8 area; /* area 0:ram 1:flash (this bit is valid only when log_or_index is 0) */
+ u8 rsvd1; /* reserved */
+};
+
+#define MPU_TEMP_OP_GET 0
+#define MPU_TEMP_THRESHOLD_OP_CFG 1
+#define MPU_TEMP_MCTP_DFX_INFO_GET 2
+struct comm_temp_in_info {
+ struct comm_info_head head;
+ u8 opt_type; /**< operation type 0:read operation 1:cfg operation */
+ u8 rsv[3];
+ s32 max_temp; /**< maximum threshold of temperature */
+ s32 min_temp; /**< minimum threshold of temperature */
+};
+
+struct comm_temp_out_info {
+ struct comm_info_head head;
+ s32 temp_data; /**< current temperature */
+ s32 max_temp_threshold; /**< maximum threshold of temperature */
+ s32 min_temp_threshold; /**< minimum threshold of temperature */
+ s32 max_temp; /**< maximum temperature */
+ s32 min_temp; /**< minimum temperature */
+};
+
+/* Disable chip auto-reset */
+struct comm_cmd_enable_auto_rst_chip {
+ struct comm_info_head head;
+ u8 op_code; /**< operation type 0:get operation 1:set operation */
+ u8 enable; /* auto reset status 0: disable auto reset chip 1: enable */
+ u8 rsvd[2];
+};
+
+struct comm_chip_id_info {
+ struct comm_info_head head;
+ u8 chip_id; /**< chip id */
+ u8 rsvd[3];
+};
+
+struct mpu_log_status_info {
+ struct comm_info_head head;
+ u8 type; /**< operation type 0:read operation 1:write operation */
+ u8 log_status; /**< log status 0:idle 1:busy */
+ u8 rsvd[2];
+};
+
+struct comm_cmd_msix_info {
+ struct comm_info_head head;
+ u8 rsvd1;
+ u8 flag; /**< table flag 0:second table, 1:actual table */
+ u8 rsvd[2];
+};
+
+struct comm_cmd_channel_detect {
+ struct mgmt_msg_head head;
+
+ u16 func_id; /**< function id */
+ u16 rsvd1[3];
+ u32 rsvd2[2];
+};
+
+#define MAX_LOG_BUF_SIZE 1024
+#define FLASH_NPU_COUNTER_HEAD_MAGIC (0x5a)
+#define FLASH_NPU_COUNTER_NIC_TYPE 0
+#define FLASH_NPU_COUNTER_FC_TYPE 1
+
+struct flash_npu_counter_head_s {
+ u8 magic;
+ u8 tbl_type;
+ u8 count_type; /**< 0:nic;1:fc */
+ u8 count_num; /**< current count number */
+ u16 base_offset; /**< address offset */
+ u16 base_count;
+};
+
+struct flash_counter_info {
+ struct comm_info_head head;
+
+ u32 length; /**< flash counter buff len */
+ u32 offset; /**< flash counter buff offset */
+ u8 data[MAX_LOG_BUF_SIZE]; /**< flash counter data */
+};
+
+enum mpu_sm_cmd_type {
+ COMM_SM_CTR_RD16 = 1,
+ COMM_SM_CTR_RD32,
+ COMM_SM_CTR_RD64_PAIR,
+ COMM_SM_CTR_RD64,
+ COMM_SM_CTR_RD32_CLEAR,
+ COMM_SM_CTR_RD64_PAIR_CLEAR,
+ COMM_SM_CTR_RD64_CLEAR,
+ COMM_SM_CTR_RD16_CLEAR,
+};
+
+struct comm_read_ucode_sm_req {
+ struct mgmt_msg_head msg_head;
+
+ u32 node; /**< node id @see enum INTERNAL_RING_NODE_ID_E */
+ u32 count_id; /**< count id */
+ u32 instanse; /**< instance id */
+ u32 type; /**< read type @see enum mpu_sm_cmd_type */
+};
+
+struct comm_read_ucode_sm_resp {
+ struct mgmt_msg_head msg_head;
+
+ u64 val1;
+ u64 val2;
+};
+
+#define PER_REQ_MAX_DATA_LEN 0x600
+
+struct comm_read_ucode_sm_per_req {
+ struct mgmt_msg_head msg_head;
+
+ u32 tbl_type;
+ u32 count_id;
+};
+
+struct comm_read_ucode_sm_per_resp {
+ struct mgmt_msg_head msg_head;
+
+ u8 data[PER_REQ_MAX_DATA_LEN];
+};
+
+struct ucode_sm_counter_get_info {
+ u32 width_type;
+ u32 tbl_type;
+ unsigned int base_count;
+ unsigned int count_num;
+};
+
+enum log_type {
+ MPU_LOG_CLEAR = 0,
+ SMU_LOG_CLEAR = 1,
+ NPU_LOG_CLEAR = 2,
+ SPU_LOG_CLEAR = 3,
+ ALL_LOG_CLEAR = 4,
+};
+
+#define ABLESWITCH 1
+#define IMABLESWITCH 2
+enum switch_workmode_op {
+ SWITCH_WORKMODE_SWITCH = 0,
+ SWITCH_WORKMODE_OTHER = 1
+};
+
+enum switch_workmode_obj {
+ SWITCH_WORKMODE_FC = 0,
+ SWITCH_WORKMODE_TOE = 1,
+ SWITCH_WORKMODE_ROCE_AND_NOF = 2,
+ SWITCH_WORKMODE_NOF_AA = 3,
+ SWITCH_WORKMODE_ETH_CNTR = 4,
+ SWITCH_WORKMODE_NOF_CNTR = 5,
+};
+
+struct comm_cmd_check_if_switch_workmode {
+ struct mgmt_msg_head head;
+ u8 switch_able;
+ u8 rsvd1;
+ u16 rsvd2[3];
+ u32 rsvd3[3];
+};
+
+#define MIG_NOR_VM_ONE_MAX_SGE_MEM (64 * 8)
+#define MIG_NOR_VM_ONE_MAX_MEM (MIG_NOR_VM_ONE_MAX_SGE_MEM + 16)
+#define MIG_VM_MAX_SML_ENTRY_NUM 24
+
+struct comm_cmd_migrate_dfx_s {
+ struct mgmt_msg_head head;
+ u32 hpa_entry_id; /**< hpa entry id */
+ u8 vm_hpa[MIG_NOR_VM_ONE_MAX_MEM]; /**< vm hpa info */
+};
+
+#define BDF_BUS_BIT 8
+struct pf_bdf_info {
+ u8 itf_idx; /**< host id */
+ u16 bdf; /**< bdf info */
+ u8 pf_bdf_info_vld; /**< pf bdf info valid */
+};
+
+struct vf_bdf_info {
+ u16 glb_pf_vf_offset; /**< global_func_id offset of 1st vf in pf */
+ u16 max_vfs; /**< vf number */
+ u16 vf_stride; /**< VF_RID_SETTING.vf_stride */
+ u16 vf_offset; /**< VF_RID_SETTING.vf_offset */
+ u8 bus_num; /**< tl_cfg_bus_num */
+ u8 rsv[3];
+};
+
+struct cmd_get_bdf_info_s {
+ struct mgmt_msg_head head;
+ struct pf_bdf_info pf_bdf_info[CMD_MAX_MAX_PF_NUM];
+ struct vf_bdf_info vf_bdf_info[CMD_MAX_MAX_PF_NUM];
+ u32 vf_num; /**< vf num */
+};
+
+#define CPI_TCAM_DBG_CMD_SET_TASK_ENABLE_VALID 0x1
+#define CPI_TCAM_DBG_CMD_SET_TIME_INTERVAL_VALID 0x2
+#define CPI_TCAM_DBG_CMD_TYPE_SET 0
+#define CPI_TCAM_DBG_CMD_TYPE_GET 1
+
+#define UDIE_ID_DATA_LEN 8
+#define TDIE_ID_DATA_LEN 18
+struct comm_cmd_get_die_id {
+ struct comm_info_head head;
+
+ u32 die_id_data[UDIE_ID_DATA_LEN]; /**< die id data */
+};
+
+struct comm_cmd_get_totem_die_id {
+ struct comm_info_head head;
+
+ u32 die_id_data[TDIE_ID_DATA_LEN]; /**< die id data */
+};
+
+#define MAX_EFUSE_INFO_BUF_SIZE 1024
+
+enum comm_efuse_opt_type {
+ EFUSE_OPT_UNICORN_EFUSE_BURN = 1, /**< burn unicorn efuse bin */
+ EFUSE_OPT_UPDATE_SWSB = 2, /**< hw rotpk switch to guest rotpk */
+ EFUSE_OPT_TOTEM_EFUSE_BURN = 3 /**< burn totem efuse bin */
+};
+
+struct comm_efuse_cfg_info {
+ struct comm_info_head head;
+ u8 opt_type; /**< operation type @see enum comm_efuse_opt_type */
+ u8 rsvd[3];
+ u32 total_len; /**< entire package length */
+ u32 data_csum; /**< data csum */
+ u8 data[MAX_EFUSE_INFO_BUF_SIZE]; /**< efuse cfg data, size 768byte */
+};
+
+/* serloop module interface */
+struct comm_cmd_hi30_serloop {
+ struct comm_info_head head;
+
+ u32 macro;
+ u32 lane;
+ u32 prbs_pattern;
+ u32 result;
+};
+
+struct cmd_sector_info {
+ struct comm_info_head head;
+ u32 offset; /**< flash addr */
+ u32 len; /**< flash length */
+};
+
+struct cmd_query_fw {
+ struct comm_info_head head;
+ u32 offset; /**< offset addr */
+ u32 len; /**< length */
+};
+
+struct nic_cmd_get_uart_log_info {
+ struct comm_info_head head;
+ struct {
+ u32 ret : 8;
+ u32 version : 8;
+ u32 log_elem_real_num : 16;
+ } log_head;
+ char uart_log[MAX_LOG_BUF_SIZE];
+};
+
+#define MAX_LOG_CMD_BUF_SIZE 128
+
+struct nic_cmd_set_uart_log_cmd {
+ struct comm_info_head head;
+ struct {
+ u32 ret : 8;
+ u32 version : 8;
+ u32 cmd_elem_real_num : 16;
+ } log_head;
+ char uart_cmd[MAX_LOG_CMD_BUF_SIZE];
+};
+
+struct dbgtool_up_reg_opt_info {
+ struct comm_info_head head;
+
+ u8 len;
+ u8 is_car;
+ u8 car_clear_flag;
+ u32 csr_addr; /**< register addr */
+ u32 csr_value; /**< register value */
+};
+
+struct comm_info_reg_read_write {
+ struct comm_info_head head;
+
+ u32 reg_addr; /**< register address */
+ u32 val_length; /**< register value length */
+
+ u32 data[2]; /**< register value */
+};
+
+#ifndef DFX_MAG_MAX_REG_NUM
+#define DFX_MAG_MAX_REG_NUM (32)
+#endif
+struct comm_info_dfx_mag_reg {
+ struct comm_info_head head;
+ u32 write; /**< read or write flag: 0:read; 1:write */
+ u32 reg_addr; /**< register address */
+ u32 reg_cnt; /**< register num , up to 32 */
+ u32 clear; /**< clear flag: 0:do not clear after read 1:clear after read */
+ u32 data[DFX_MAG_MAX_REG_NUM]; /**< register data */
+};
+
+struct comm_info_dfx_anlt_reg {
+ struct comm_info_head head;
+ u32 write; /**< read or write flag: 0:read; 1:write */
+ u32 reg_addr; /**< register address */
+ u32 reg_cnt; /**< register num , up to 32 */
+ u32 clear; /**< clear flag: 0:do not clear after read 1:clear after read */
+ u32 data[DFX_MAG_MAX_REG_NUM]; /**< register data */
+};
+
+#define MAX_DATA_NUM (240)
+struct csr_msg {
+ struct {
+ u32 node_id : 5; // [4:0]
+ u32 data_width : 10; // [14:5]
+ u32 rsvd : 17; // [31:15]
+ } bits;
+ u32 addr;
+};
+
+struct comm_cmd_heart_event {
+ struct mgmt_msg_head head;
+
+ u8 init_sta; /* 0: mpu init ok, 1: mpu init error. */
+ u8 rsvd1[3];
+ u32 heart; /* add one by one */
+ u32 heart_handshake; /* should always be 0x5A5A5A5A */
+};
+
+#define XREGS_NUM 31
+struct tag_cpu_tick {
+ u32 cnt_hi;
+ u32 cnt_lo;
+};
+
+struct tag_ax_exc_reg_info {
+ u64 ttbr0;
+ u64 ttbr1;
+ u64 tcr;
+ u64 mair;
+ u64 sctlr;
+ u64 vbar;
+ u64 current_el;
+ u64 sp;
+ /* The memory layout of the following fields is the same as that of TskContext. */
+ u64 elr; /* 返回地址 */
+ u64 spsr;
+ u64 far_r;
+ u64 esr;
+ u64 xzr;
+ u64 xregs[XREGS_NUM]; /* 0~30: x30~x0 */
+};
+
+struct tag_exc_info {
+ char os_ver[48]; /**< os version */
+ char app_ver[64]; /**< application version */
+ u32 exc_cause; /**< exception reason */
+ u32 thread_type; /**< Thread type before exception */
+ u32 thread_id; /**< Thread PID before exception */
+ u16 byte_order; /**< byte order */
+ u16 cpu_type; /**< CPU type */
+ u32 cpu_id; /**< CPU ID */
+ struct tag_cpu_tick cpu_tick; /**< CPU Tick */
+ u32 nest_cnt; /**< exception nesting count */
+ u32 fatal_errno; /**< fatal error code, valid when a fatal error occurs */
+ u64 uw_sp; /**< stack pointer before the exception */
+ u64 stack_bottom; /**< bottom of stack before exception */
+ /* Context information of the core register when an exception occurs.
+ * 82\57 must be located at byte 152; if any change is made,
+ * the OS_EXC_REGINFO_OFFSET macro in sre_platform.eh needs to be updated.
+ */
+ struct tag_ax_exc_reg_info reg_info; /**< register info @see EXC_REGS_S */
+};
+
+/* MPU lastword interface reported to the driver */
+#define MPU_LASTWORD_SIZE 1024
+struct tag_comm_info_up_lastword {
+ struct comm_info_head head;
+
+ struct tag_exc_info stack_info;
+ u32 stack_actlen; /**< actual stack length (<=1024) */
+ u8 stack_data[MPU_LASTWORD_SIZE];
+};
+
+struct comm_cmd_mbox_csr_rd_req {
+ struct mgmt_msg_head head;
+ struct csr_msg csr_info[MAX_DATA_NUM];
+ u32 data_num;
+};
+
+struct comm_cmd_mbox_csr_wt_req {
+ struct mgmt_msg_head head;
+ struct csr_msg csr_info;
+ u64 value;
+};
+
+struct comm_cmd_mbox_csr_rd_ret {
+ struct mgmt_msg_head head;
+ u64 value[MAX_DATA_NUM];
+};
+
+struct comm_cmd_mbox_csr_wt_ret {
+ struct mgmt_msg_head head;
+};
+
+enum comm_virtio_dev_type {
+ COMM_VIRTIO_NET_TYPE = 0,
+ COMM_VIRTIO_BLK_TYPE = 1,
+ COMM_VIRTIO_SCSI_TYPE = 4,
+};
+
+struct comm_virtio_dev_cmd {
+ u16 device_type; /**< device type @see enum comm_virtio_dev_type */
+ u16 device_id;
+ u32 devid_switch;
+ u32 sub_vendor_id;
+ u32 sub_class_code;
+ u32 flash_en;
+};
+
+struct comm_virtio_dev_ctl {
+ u32 device_type_mark;
+ u32 devid_switch_mark;
+ u32 sub_vendor_id_mark;
+ u32 sub_class_code_mark;
+ u32 flash_en_mark;
+};
+
+struct comm_cmd_set_virtio_dev {
+ struct comm_info_head head;
+ struct comm_virtio_dev_cmd virtio_dev_cmd; /**< @see struct comm_virtio_dev_cmd_s */
+ struct comm_virtio_dev_ctl virtio_dev_ctl; /**< @see struct comm_virtio_dev_ctl_s */
+};
+
+/* Interfaces of the MAC Module */
+#ifndef MAC_ADDRESS_BYTE_NUM
+#define MAC_ADDRESS_BYTE_NUM (6)
+#endif
+struct comm_info_mac {
+ struct comm_info_head head;
+
+ u16 is_valid;
+ u16 rsvd0;
+ u8 data[MAC_ADDRESS_BYTE_NUM];
+ u16 rsvd1;
+};
+
+struct cmd_patch_active {
+ struct comm_info_head head;
+ u32 fw_type; /**< firmware type */
+ u32 data[3]; /**< reserved */
+};
+
+struct cmd_patch_deactive {
+ struct comm_info_head head;
+ u32 fw_type; /**< firmware type */
+ u32 data[3]; /**< reserved */
+};
+
+struct cmd_patch_remove {
+ struct comm_info_head head;
+ u32 fw_type; /**< firmware type */
+ u32 data[3]; /**< reserved */
+};
+
+struct cmd_patch_sram_optimize {
+ struct comm_info_head head;
+ u32 data[4]; /**< reserved */
+};
+
+/* ncsi counter */
+struct nsci_counter_in_info_s {
+ struct comm_info_head head;
+ u8 opt_type; /**< operate type 0:read counter 1:counter clear */
+ u8 rsvd[3];
+};
+
+struct channel_status_check_info_s {
+ struct comm_info_head head;
+ u32 rsvd1;
+ u32 rsvd2;
+};
+
+struct comm_cmd_compatible_info {
+ struct mgmt_msg_head head;
+ u8 chip_ver;
+ u8 host_env;
+ u8 rsv[13];
+
+ u8 cmd_count;
+ union {
+ struct {
+ u8 module;
+ u8 mod_type;
+ u16 cmd;
+ } cmd_desc;
+ u32 cmd_desc_val;
+ } cmds_desc[24];
+ u8 cmd_ver[24];
+};
+
+struct tag_ncsi_chan_info {
+ u8 aen_en; /**< aen enable */
+ u8 index; /**< index of channel */
+ u8 port; /**< net port number */
+ u8 state; /**< ncsi state */
+ u8 ncsi_port_en; /**< ncsi port enable flag (1:enable 0:disable) */
+ u8 rsv[3];
+ struct tag_ncsi_chan_capa capabilities; /**< ncsi channel capabilities */
+ struct tg_g_ncsi_parameters parameters; /**< ncsi channel parameters */
+};
+
+struct comm_cmd_ncsi_settings {
+ u8 ncsi_ver; /**< ncsi version */
+ u8 ncsi_pkg_id;
+ u8 arb_en; /**< arbitration en */
+ u8 duplex_set; /**< duplex mode */
+ u8 chan_num; /**< Number of virtual channels */
+ u8 iid; /**< identify new instances of a command */
+ u8 lldp_over_ncsi_enable;
+ u8 lldp_over_mctp_enable;
+ u32 magicwd;
+ u8 lldp_tx_enable;
+ u8 rsvd[3];
+ u32 crc;
+ struct tag_ncsi_chan_info ncsi_chan_info;
+};
+
+struct comm_cmd_ncsi_cfg {
+ struct comm_info_head head;
+ u8 ncsi_cable_state; /**< ncsi cable status 0: cable absent, 1: cable present */
+ u8 setting_type; /**< ncsi info type: 0: ram config, 1: flash config */
+ u8 port; /**< net port number */
+ u8 erase_flag; /**< flash erase flag, 1: erase flash info */
+ struct comm_cmd_ncsi_settings setting_info;
+};
+
+#define MQM_ATT_PAGE_NUM 128
+
+/* Maximum segment data length of the upgrade command */
+#define MAX_FW_FRAGMENT_LEN (1536)
+
+#endif
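
A sketch (not part of the patch) of composing the reset_flag for
COMM_MGMT_CMD_FUNC_RESET from the func_reset_flag_e bits and the resource masks
defined earlier; the choice of clearing COMM plus NIC resources is only an example:

	#include <linux/errno.h>
	#include <linux/string.h>
	#include <linux/types.h>

	static int example_build_func_reset(struct comm_cmd_func_reset *cmd,
					    u16 func_id)
	{
		u64 flag = HINIC3_COMM_RES | HINIC3_NIC_RES;	/* example selection */

		/* every bit must correspond to a defined resource type */
		if (flag > FUNC_RESET_FLAG_MAX_VALUE)
			return -EINVAL;

		memset(cmd, 0, sizeof(*cmd));
		cmd->func_id = func_id;
		cmd->reset_flag = flag;
		return 0;
	}
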
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_outband_ncsi_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_outband_ncsi_cmd_defs.h
new file mode 100644
index 0000000..767f886
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/mpu_outband_ncsi_cmd_defs.h
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MPU_OUTBAND_NCSI_CMD_DEFS_H
+#define MPU_OUTBAND_NCSI_CMD_DEFS_H
+
+#pragma pack(push, 1)
+
+enum NCSI_RESPONSE_CODE_E {
+ COMMAND_COMPLETED = 0x00, /**< command completed */
+ COMMAND_FAILED = 0x01, /**< command failed */
+ COMMAND_UNAVAILABLE = 0x02, /**< command unavailable */
+ COMMAND_UNSPORRTED = 0x03 /**< command unsupported */
+};
+
+enum NCSI_REASON_CODE_E {
+ NO_ERROR = 0x00, /**< no error */
+ INTERFACE_INIT_REQUIRED = 0x01, /**< interface init required */
+ INVALID_PARA = 0x02, /**< invalid parameter */
+ CHAN_NOT_READY = 0x03, /**< channel not ready */
+ PKG_NOT_READY = 0x04, /**< package not ready */
+ INVALID_PAYLOAD_LEN = 0x05, /**< invalid payload len */
+ LINK_STATUS_ERROR = 0xA06, /**< failed to get link status */
+ VLAN_TAG_INVALID = 0xB07, /**< vlan tag invalid */
+ MAC_ADD_IS_ZERO = 0xE08, /**< mac address is zero */
+ FLOW_CONTROL_UNSUPPORTED = 0x09, /**< flow control unsupported */
+ CHECKSUM_ERR = 0xA, /**< checksum error */
+ /**< the command type is unsupported only when the response code is 0x03 */
+ UNSUPPORTED_COMMAND_TYPE = 0x7FFF
+};
+
+enum NCSI_CLIENT_TYPE_E {
+ NCSI_RMII_TYPE = 1, /**< rmii client */
+ NCSI_MCTP_TYPE = 2, /**< MCTP client */
+ NCSI_AEN_TYPE = 3 /**< AEN client */
+};
+
+/**
+ * @brief ncsi ctrl packet header
+ */
+struct tag_ncsi_ctrl_packet_header {
+ u8 mc_id; /**< management control ID */
+ u8 head_revision; /**< head revision */
+ u8 reserved0; /**< reserved */
+ u8 iid; /**< instance ID */
+ u8 pkt_type; /**< packet type */
+#ifdef NCSI_BIG_ENDIAN
+ u8 pkg_id : 3; /**< packet ID */
+ u8 inter_chan_id : 5; /**< channel ID */
+#else
+ u8 inter_chan_id : 5; /**< channel ID */
+ u8 pkg_id : 3; /**< packet ID */
+#endif
+#ifdef BD_BIG_ENDIAN
+ u8 reserved1 : 4; /**< reserved1 */
+ u8 payload_len_hi : 4; /**< payload length has 12 bits in total */
+#else
+ u8 payload_len_hi : 4; /**< payload length has 12 bits in total */
+ u8 reserved1 : 4; /**< reserved1 */
+#endif
+ u8 payload_len_lo; /**< payload len lo */
+ u32 reserved2; /**< reserved2 */
+ u32 reserved3; /**< reserved3 */
+};
+
+#define NCSI_MAX_PAYLOAD_LEN 1500
+#define NCSI_MAC_LEN 6
+
+/**
+ * @brief ncsi control packet definition
+ *
+ */
+struct tag_ncsi_ctrl_packet {
+ struct tag_ncsi_ctrl_packet_header packet_head; /**< ncsi ctrl packet header */
+ u8 payload[NCSI_MAX_PAYLOAD_LEN]; /**< ncsi ctrl packet payload */
+};
+
+/**
+ * @brief ethernet header description
+ *
+ */
+struct tag_ethernet_header {
+ u8 dst_addr[NCSI_MAC_LEN]; /**< ethernet destination address */
+ u8 src_addr[NCSI_MAC_LEN]; /**< ethernet source address */
+ u16 ether_type; /**< ethernet type */
+};
+
+/**
+ * @brief ncsi common packet description
+ *
+ */
+struct tg_ncsi_common_packet {
+ struct tag_ethernet_header frame_head; /**< common packet ethernet frame header */
+ struct tag_ncsi_ctrl_packet ctrl_packet; /**< common packet ncsi ctrl packet */
+};
+
+/**
+ * @brief ncsi client info definition
+ */
+struct tag_ncsi_client_info {
+ u8 *name; /**< client info client name */
+ u32 type; /**< client info type of ncsi media @see enum NCSI_CLIENT_TYPE_E */
+ u8 bmc_mac[NCSI_MAC_LEN]; /**< client info BMC mac addr */
+ u8 ncsi_mac[NCSI_MAC_LEN]; /**< client info local mac addr */
+ u8 reserve[2]; /**< client info reserved, Four-byte alignment */
+ u32 rsp_len; /**< response length, including pad */
+ struct tg_ncsi_common_packet ncsi_packet_rsp; /**< ncsi common packet response */
+};
+
+/* AEN Enable Command (0x08) */
+#define AEN_ENABLE_REQ_LEN 8
+#define AEN_ENABLE_RSP_LEN 4
+#define AEN_CTRL_LINK_STATUS_SHIFT 0
+#define AEN_CTRL_CONFIG_REQ_SHIFT 1
+#define AEN_CTRL_DRV_CHANGE_SHIFT 2
+
+/* AEN Type */
+enum aen_type_e {
+ AEN_LINK_STATUS_CHANGE_TYPE = 0x0,
+ AEN_CONFIG_REQUIRED_TYPE = 0x1,
+ OEM_AEN_CONFIG_REQUEST_TYPE = 0x80,
+ AEN_TYPE_MAX = 0x100
+};
+
+/* get link status 0x0A */
+#define GET_LINK_STATUS_REQ_LEN 0
+#define GET_LINK_STATUS_RSP_LEN 16
+/* link speed (fc link speed is mapped to unknown) */
+enum NCSI_CMD_LINK_SPEED_E {
+ LINK_SPEED_10M = 0x2, /**< 10M */
+ LINK_SPEED_100M = 0x5, /**< 100M */
+ LINK_SPEED_1G = 0x7, /**< 1G */
+ LINK_SPEED_10G = 0x8, /**< 10G */
+ LINK_SPEED_20G = 0x9, /**< 20G */
+ LINK_SPEED_25G = 0xa, /**< 25G */
+ LINK_SPEED_40G = 0xb, /**< 40G */
+ LINK_SPEED_50G = 0xc, /**< 50G */
+ LINK_SPEED_100G = 0xd, /**< 100G */
+ LINK_SPEED_2_5G = 0xe, /**< 2.5G */
+ LINK_SPEED_UNKNOWN = 0xf
+};
+
+/* Set Vlan Filter (0x0B) */
+/* Only VLAN-tagged packets that match the enabled VLAN Filter settings are accepted. */
+#define VLAN_MODE_UNSET 0X00
+#define VLAN_ONLY 0x01
+/* if the MAC address matches, both vlan-tagged and non-vlan-tagged packets are accepted */
+#define ANYVLAN_NONVLAN 0x03
+#define VLAN_MODE_SUPPORT 0x05
+
+/* channel vlan filter enable */
+#define CHNL_VALN_FL_ENABLE 0x01
+#define CHNL_VALN_FL_DISABLE 0x00
+
+/* vlan id valid flag */
+#define VLAN_ID_VALID 0x01
+#define VLAN_ID_INVALID 0x00
+
+/* VLAN ID */
+#define SET_VLAN_FILTER_REQ_LEN 8
+#define SET_VLAN_FILTER_RSP_LEN 4
+
+/* ncsi_get_controller_packet_statistics_config */
+#define NO_INFORMATION_STATISTICS 0xff
+
+/* Enable VLAN Command (0x0C) */
+#define ENABLE_VLAN_REQ_LEN 4
+#define ENABLE_VLAN_RSP_LEN 4
+#define VLAN_FL_MAX_ID 8
+
+/* NCSI channel capabilities */
+struct tag_ncsi_chan_capa {
+ u32 capa_flags; /**< NCSI channel capabilities capa flags */
+ u32 bcast_filter; /**< NCSI channel capabilities bcast filter */
+ u32 multicast_filter; /**< NCSI channel capabilities multicast filter */
+ u32 buffering; /**< NCSI channel capabilities buffering */
+ u32 aen_ctrl; /**< NCSI channel capabilities aen ctrl */
+ u8 vlan_count; /**< NCSI channel capabilities vlan count */
+ u8 mixed_count; /**< NCSI channel capabilities mixed count */
+ u8 multicast_count; /**< NCSI channel capabilities multicast count */
+ u8 unicast_count; /**< NCSI channel capabilities unicast count */
+ u16 rsvd; /**< NCSI channel capabilities reserved */
+ u8 vlan_mode; /**< NCSI channel capabilities vlan mode */
+ u8 chan_count; /**< NCSI channel capabilities channel count */
+};
+
+struct tg_g_ncsi_parameters {
+ u8 mac_address_count;
+ u8 reserved1[2];
+ u8 mac_address_flags;
+ u8 vlan_tag_count;
+ u8 reserved2;
+ u16 vlan_tag_flags;
+ u32 link_settings;
+ u32 broadcast_packet_filter_settings;
+ u8 broadcast_packet_filter_status : 1;
+ u8 channel_enable : 1;
+ u8 channel_network_tx_enable : 1;
+ u8 global_mulicast_packet_filter_status : 1;
+ /**< bit0-3: mac_add0-mac_add3 address type: 0 unicast, 1 multicast */
+ u8 config_flags_reserved1 : 4;
+ u8 config_flags_reserved2[3];
+ u8 vlan_mode; /**< current vlan mode */
+ u8 flow_control_enable;
+ u16 reserved3;
+ u32 AEN_control;
+ u8 mac_add[4][6];
+ u16 vlan_tag[VLAN_FL_MAX_ID];
+};
+
+#pragma pack(pop)
+
+#endif
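
The NCSI control-packet header above splits the 12-bit payload length into a
4-bit high part and an 8-bit low part; a small packing/unpacking sketch (not
part of the patch), assuming the bit-field layout as declared:

	#include <linux/types.h>

	static u16 example_ncsi_payload_len(const struct tag_ncsi_ctrl_packet_header *hdr)
	{
		return ((u16)hdr->payload_len_hi << 8) | hdr->payload_len_lo;
	}

	static void example_ncsi_set_payload_len(struct tag_ncsi_ctrl_packet_header *hdr,
						 u16 len)
	{
		hdr->payload_len_hi = (len >> 8) & 0xf;	/* upper 4 of 12 bits */
		hdr->payload_len_lo = len & 0xff;	/* lower 8 bits */
	}
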
diff --git a/drivers/net/ethernet/huawei/hinic3/include/mpu/nic_cfg_comm.h b/drivers/net/ethernet/huawei/hinic3/include/mpu/nic_cfg_comm.h
new file mode 100644
index 0000000..83b75f9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/mpu/nic_cfg_comm.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C), 2001-2021, Huawei Tech. Co., Ltd.
+ * File Name : nic_cfg_comm.h
+ * Version : Initial Draft
+ * Description : nic config common header file
+ * Function List :
+ * History :
+ * Modification: Created file
+ */
+
+#ifndef NIC_CFG_COMM_H
+#define NIC_CFG_COMM_H
+
+/* rss */
+#define HINIC3_RSS_TYPE_VALID_SHIFT 23
+#define HINIC3_RSS_TYPE_TCP_IPV6_EXT_SHIFT 24
+#define HINIC3_RSS_TYPE_IPV6_EXT_SHIFT 25
+#define HINIC3_RSS_TYPE_TCP_IPV6_SHIFT 26
+#define HINIC3_RSS_TYPE_IPV6_SHIFT 27
+#define HINIC3_RSS_TYPE_TCP_IPV4_SHIFT 28
+#define HINIC3_RSS_TYPE_IPV4_SHIFT 29
+#define HINIC3_RSS_TYPE_UDP_IPV6_SHIFT 30
+#define HINIC3_RSS_TYPE_UDP_IPV4_SHIFT 31
+
+#define HINIC3_RSS_TYPE_SET(val, member) (((u32)(val) & 0x1) << HINIC3_RSS_TYPE_##member##_SHIFT)
+#define HINIC3_RSS_TYPE_GET(val, member) (((u32)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1)
+
+enum nic_rss_hash_type {
+ NIC_RSS_HASH_TYPE_XOR = 0,
+ NIC_RSS_HASH_TYPE_TOEP,
+
+ NIC_RSS_HASH_TYPE_MAX /* MUST BE THE LAST ONE */
+};
+
+#define NIC_RSS_INDIR_SIZE 256
+#define NIC_RSS_KEY_SIZE 40
+
+/*
+ * Definition of the NIC receiving mode
+ */
+#define NIC_RX_MODE_UC 0x01
+#define NIC_RX_MODE_MC 0x02
+#define NIC_RX_MODE_BC 0x04
+#define NIC_RX_MODE_MC_ALL 0x08
+#define NIC_RX_MODE_PROMISC 0x10
+#define NIC_RX_DB_COS_MAX 0x4
+
+/* IEEE 802.1Qaz std */
+#define NIC_DCB_COS_MAX 0x8
+#define NIC_DCB_UP_MAX 0x8
+#define NIC_DCB_TC_MAX 0x8
+#define NIC_DCB_PG_MAX 0x8
+#define NIC_DCB_TSA_SP 0x0
+#define NIC_DCB_TSA_CBS 0x1 /* NOT supported by hi1822 */
+#define NIC_DCB_TSA_ETS 0x2
+#define NIC_DCB_DSCP_NUM 0x8
+#define NIC_DCB_IP_PRI_MAX 0x40
+
+#define NIC_DCB_PRIO_DWRR 0x0
+#define NIC_DCB_PRIO_STRICT 0x1
+
+#define NIC_DCB_MAX_PFC_NUM 0x4
+
+#define NIC_ETS_PERCENT_WEIGHT 100
+
+#endif
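
A usage sketch for the HINIC3_RSS_TYPE_SET/GET helpers above (not part of the
patch); enabling IPv4 TCP/UDP hashing here is purely illustrative:

	#include <linux/types.h>

	static u32 example_rss_type_ipv4(void)
	{
		u32 rss_type = 0;

		rss_type |= HINIC3_RSS_TYPE_SET(1, VALID);
		rss_type |= HINIC3_RSS_TYPE_SET(1, IPV4);
		rss_type |= HINIC3_RSS_TYPE_SET(1, TCP_IPV4);
		rss_type |= HINIC3_RSS_TYPE_SET(1, UDP_IPV4);

		/* HINIC3_RSS_TYPE_GET(rss_type, TCP_IPV4) now reads back 1 */
		return rss_type;
	}
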
diff --git a/drivers/net/ethernet/huawei/hinic3/include/ossl_types.h b/drivers/net/ethernet/huawei/hinic3/include/ossl_types.h
new file mode 100644
index 0000000..c646e7c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/ossl_types.h
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef _OSSL_TYPES_H
+#define _OSSL_TYPES_H
+
+#undef NULL
+#if defined(__cplusplus)
+#define NULL 0
+#else
+#define NULL ((void *)0)
+#endif
+
+#if defined(__LINUX__)
+#ifdef __USER__ /* linux user */
+#if defined(__ia64__) || defined(__x86_64__) || defined(__aarch64__)
+#define s64 long
+#define u64 unsigned long
+#else
+#define s64 long long
+#define u64 unsigned long long
+#endif
+#define s32 int
+#define u32 unsigned int
+#define s16 short
+#define u16 unsigned short
+
+#ifdef __hinic_arm__
+#define s8 signed char
+#else
+#define s8 char
+#endif
+
+#ifndef dma_addr_t
+typedef u64 dma_addr_t;
+#endif
+
+#define u8 unsigned char
+#define ulong unsigned long
+#define uint unsigned int
+
+#define ushort unsigned short
+
+#endif
+#endif
+
+#define uda_handle void *
+
+#define UDA_TRUE 1
+#define UDA_FALSE 0
+
+#if defined(__USER__) || defined(USER)
+#ifndef F_OK
+#define F_OK 0
+#endif
+#ifndef F_FAILED
+#define F_FAILED (-1)
+#endif
+
+#define uda_status int
+#define TOOL_REAL_PATH_MAX_LEN 512
+#define SAFE_FUNCTION_ERR (-1)
+
+enum {
+ UDA_SUCCESS = 0x0, // run success
+ UDA_FAIL, // run failed
+ UDA_ENXIO, // no device
+ UDA_ENONMEM, // alloc memory failed
+ UDA_EBUSY, // card busy or restart
+ UDA_ECRC, // CRC check error
+ UDA_EINVAL, // invalid parameter
+ UDA_EFAULT, // invalid address
+ UDA_ELEN, // invalid length
+ UDA_ECMD, // error occurs when execute the cmd
+ UDA_ENODRIVER, // driver is not installed
+ UDA_EXIST, // has existed
+ UDA_EOVERSTEP, // over step
+ UDA_ENOOBJ, // have no object
+ UDA_EOBJ, // error object
+ UDA_ENOMATCH, // driver does not match to firmware
+ UDA_ETIMEOUT, // timeout
+
+ UDA_CONTOP,
+
+ UDA_REBOOT = 0xFD,
+ UDA_CANCEL = 0xFE,
+ UDA_KILLED = 0xFF,
+};
+
+enum {
+ UDA_FLOCK_NOBLOCK = 0,
+ UDA_FLOCK_BLOCK = 1,
+};
+
+/* array index */
+#define ARRAY_INDEX_0 0
+#define ARRAY_INDEX_1 1
+#define ARRAY_INDEX_2 2
+#define ARRAY_INDEX_3 3
+#define ARRAY_INDEX_4 4
+#define ARRAY_INDEX_5 5
+#define ARRAY_INDEX_6 6
+#define ARRAY_INDEX_7 7
+#define ARRAY_INDEX_8 8
+#define ARRAY_INDEX_12 12
+#define ARRAY_INDEX_13 13
+
+/* define shift bits */
+#define SHIFT_BIT_1 1
+#define SHIFT_BIT_2 2
+#define SHIFT_BIT_3 3
+#define SHIFT_BIT_4 4
+#define SHIFT_BIT_6 6
+#define SHIFT_BIT_7 7
+#define SHIFT_BIT_8 8
+#define SHIFT_BIT_11 11
+#define SHIFT_BIT_12 12
+#define SHIFT_BIT_15 15
+#define SHIFT_BIT_16 16
+#define SHIFT_BIT_17 17
+#define SHIFT_BIT_19 19
+#define SHIFT_BIT_20 20
+#define SHIFT_BIT_23 23
+#define SHIFT_BIT_24 24
+#define SHIFT_BIT_25 25
+#define SHIFT_BIT_26 26
+#define SHIFT_BIT_28 28
+#define SHIFT_BIT_29 29
+#define SHIFT_BIT_32 32
+#define SHIFT_BIT_35 35
+#define SHIFT_BIT_37 37
+#define SHIFT_BIT_39 39
+#define SHIFT_BIT_40 40
+#define SHIFT_BIT_43 43
+#define SHIFT_BIT_48 48
+#define SHIFT_BIT_51 51
+#define SHIFT_BIT_56 56
+#define SHIFT_BIT_57 57
+#define SHIFT_BIT_59 59
+#define SHIFT_BIT_60 60
+#define SHIFT_BIT_61 61
+
+#endif
+#endif /* OSSL_TYPES_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/include/public/npu_cmdq_base_defs.h b/drivers/net/ethernet/huawei/hinic3/include/public/npu_cmdq_base_defs.h
new file mode 100644
index 0000000..78236c9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/public/npu_cmdq_base_defs.h
@@ -0,0 +1,232 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef NPU_CMDQ_BASE_DEFS_H
+#define NPU_CMDQ_BASE_DEFS_H
+
+/* CmdQ Common subtype */
+enum comm_cmdq_cmd {
+ COMM_CMD_UCODE_ARM_BIT_SET = 2,
+ COMM_CMD_SEND_NPU_DFT_CMD,
+};
+
+/* Cmdq ack type */
+enum hinic3_ack_type {
+ HINIC3_ACK_TYPE_CMDQ,
+ HINIC3_ACK_TYPE_SHARE_CQN,
+ HINIC3_ACK_TYPE_APP_CQN,
+
+ HINIC3_MOD_ACK_MAX = 15,
+};
+
+/* Defines the queue type of the set arm bit. */
+enum {
+ SET_ARM_BIT_FOR_CMDQ = 0,
+ SET_ARM_BIT_FOR_L2NIC_SQ,
+ SET_ARM_BIT_FOR_L2NIC_RQ,
+ SET_ARM_BIT_TYPE_NUM
+};
+
+/* Defines the type. Each function supports a maximum of eight CMDQ types. */
+enum {
+ CMDQ_0 = 0,
+ CMDQ_1 = 1, /* dedicated and non-blocking queues */
+ CMDQ_NUM
+};
+
+/* *******************cmd common command data structure ************************ */
+// Func -> ucode: used to set the arm bit data.
+// The microcode needs to perform big-endian conversion.
+struct comm_info_ucode_set_arm_bit {
+ u32 q_type;
+ u32 q_id;
+};
+
+/* *******************WQE data structure ************************ */
+union cmdq_wqe_cs_dw0 {
+ struct {
+ u32 err_status : 29;
+ u32 error_code : 2;
+ u32 rsvd : 1;
+ } bs;
+ u32 val;
+};
+
+union cmdq_wqe_cs_dw1 {
+ struct {
+ u32 token : 16; // [15:0]
+ u32 cmd : 8; // [23:16]
+ u32 mod : 5; // [28:24]
+ u32 ack_type : 2; // [30:29]
+ u32 obit : 1; // [31]
+ } drv_wr; // This structure is used when the driver writes the wqe.
+
+ struct {
+ u32 mod : 5; // [4:0]
+ u32 ack_type : 3; // [7:5]
+ u32 cmd : 8; // [15:8]
+ u32 arm : 1; // [16]
+ u32 rsvd : 14; // [30:17]
+ u32 obit : 1; // [31]
+ } wb;
+ u32 val;
+};
+
+/* CmdQ BD information or write back buffer information */
+struct cmdq_sge {
+ u32 pa_h; // Upper 32 bits of the physical address
+	u32 pa_l; // Lower 32 bits of the physical address
+ u32 len; // Invalid bit[31].
+ u32 resv;
+};
+
+/* Ctrls section definition of WQE */
+struct cmdq_wqe_ctrls {
+ union {
+ struct {
+ u32 bdsl : 8; // [7:0]
+ u32 drvsl : 2; // [9:8]
+ u32 rsv : 4; // [13:10]
+ u32 wf : 1; // [14]
+ u32 cf : 1; // [15]
+ u32 tsl : 5; // [20:16]
+ u32 va : 1; // [21]
+ u32 df : 1; // [22]
+ u32 cr : 1; // [23]
+ u32 difsl : 3; // [26:24]
+ u32 csl : 2; // [28:27]
+ u32 ctrlsl : 2; // [30:29]
+ u32 obit : 1; // [31]
+ } bs;
+ u32 val;
+ } header;
+ u32 qsf;
+};
+
+/* Complete section definition of WQE */
+struct cmdq_wqe_cs {
+ union cmdq_wqe_cs_dw0 dw0;
+ union cmdq_wqe_cs_dw1 dw1;
+ union {
+ struct cmdq_sge sge;
+ u32 dw2_5[4];
+ } ack;
+};
+
+/* Inline header in WQE inline, describing the length of inline data */
+union cmdq_wqe_inline_header {
+ struct {
+ u32 buf_len : 11; // [10:0] inline data len
+ u32 rsv : 21; // [31:11]
+ } bs;
+ u32 val;
+};
+
+/* Definition of buffer descriptor section in WQE */
+union cmdq_wqe_bds {
+ struct {
+ struct cmdq_sge bds_sge;
+		u32 rsvd[4]; /* reserved; used to carry the buffer's virtual address */
+ } lcmd; /* Long command, non-inline, and SGE describe the buffer information. */
+};
+
+/* Definition of CMDQ WQE */
+/* (long cmd, 64B)
+ * +----------------------------------------+
+ * | ctrl section(8B) |
+ * +----------------------------------------+
+ * | |
+ * | complete section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | |
+ * | buffer descriptor section(16B) |
+ * | |
+ * +----------------------------------------+
+ * | driver section(16B) |
+ * +----------------------------------------+
+ *
+ *
+ * (middle cmd, 128B)
+ * +----------------------------------------+
+ * | ctrl section(8B) |
+ * +----------------------------------------+
+ * | |
+ * | complete section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | |
+ * | buffer descriptor section(88B) |
+ * | |
+ * +----------------------------------------+
+ * | driver section(8B) |
+ * +----------------------------------------+
+ *
+ *
+ * (short cmd, 64B)
+ * +----------------------------------------+
+ * | ctrl section(8B) |
+ * +----------------------------------------+
+ * | |
+ * | complete section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | |
+ * | buffer descriptor section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | driver section(8B) |
+ * +----------------------------------------+
+ */
+struct cmdq_wqe {
+ struct cmdq_wqe_ctrls ctrls;
+ struct cmdq_wqe_cs cs;
+ union cmdq_wqe_bds bds;
+};
+
+/* Definition of ctrls section in inline WQE */
+struct cmdq_wqe_ctrls_inline {
+ union {
+ struct {
+ u32 bdsl : 8; // [7:0]
+ u32 drvsl : 2; // [9:8]
+ u32 rsv : 4; // [13:10]
+ u32 wf : 1; // [14]
+ u32 cf : 1; // [15]
+ u32 tsl : 5; // [20:16]
+ u32 va : 1; // [21]
+ u32 df : 1; // [22]
+ u32 cr : 1; // [23]
+ u32 difsl : 3; // [26:24]
+ u32 csl : 2; // [28:27]
+ u32 ctrlsl : 2; // [30:29]
+ u32 obit : 1; // [31]
+ } bs;
+ u32 val;
+ } header;
+ u32 qsf;
+ u64 db;
+};
+
+/* Buffer descriptor section definition of WQE */
+union cmdq_wqe_bds_inline {
+ struct {
+ union cmdq_wqe_inline_header header;
+ u32 rsvd;
+ u8 data_inline[80];
+ } mcmd; /* Middle command, inline mode */
+
+ struct {
+ union cmdq_wqe_inline_header header;
+ u32 rsvd;
+ u8 data_inline[16];
+ } scmd; /* Short command, inline mode */
+};
+
+struct cmdq_wqe_inline {
+ struct cmdq_wqe_ctrls_inline ctrls;
+ struct cmdq_wqe_cs cs;
+ union cmdq_wqe_bds_inline bds;
+};
+
+#endif
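
For context, the ASCII layout in the header above describes the 64B long-command WQE as an 8B ctrl section, a 24B completion section, a buffer descriptor carrying one SGE, and a driver section. The fragment below is a minimal usage sketch and not part of the patch: it assumes the npu_cmdq_base_defs.h added above is on the include path, that u8/u16/u32/u64 come from linux/types.h, and that buf_dma/buf_len are produced by the caller's own DMA mapping code.

#include <linux/types.h>
#include <linux/string.h>
#include "npu_cmdq_base_defs.h"	/* header added by this patch (include path assumed) */

/* Fill a long-command (lcmd) WQE whose payload is described by a single SGE.
 * Only the obviously named fields are shown; the remaining ctrl bits follow a
 * hardware contract that is not spelled out in this header.
 */
static void cmdq_prepare_lcmd_wqe(struct cmdq_wqe *wqe, u64 buf_dma,
				  u32 buf_len, u8 mod, u8 cmd, u16 token)
{
	memset(wqe, 0, sizeof(*wqe));

	/* driver-write view of completion-section dword 1 */
	wqe->cs.dw1.drv_wr.mod = mod;
	wqe->cs.dw1.drv_wr.cmd = cmd;
	wqe->cs.dw1.drv_wr.token = token;
	wqe->cs.dw1.drv_wr.ack_type = HINIC3_ACK_TYPE_CMDQ;

	/* one SGE pointing at the caller-mapped command buffer */
	wqe->bds.lcmd.bds_sge.pa_h = (u32)(buf_dma >> 32);
	wqe->bds.lcmd.bds_sge.pa_l = (u32)buf_dma;
	wqe->bds.lcmd.bds_sge.len = buf_len;
}
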
diff --git a/drivers/net/ethernet/huawei/hinic3/include/readme.txt b/drivers/net/ethernet/huawei/hinic3/include/readme.txt
new file mode 100644
index 0000000..895f213
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/readme.txt
@@ -0,0 +1 @@
+This directory holds interfaces shared internally between services.
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/include/vmsec/vmsec_mpu_common.h b/drivers/net/ethernet/huawei/hinic3/include/vmsec/vmsec_mpu_common.h
new file mode 100644
index 0000000..d78dba8
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/vmsec/vmsec_mpu_common.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef VMSEC_MPU_COMMON_H
+#define VMSEC_MPU_COMMON_H
+
+#include "mpu_cmd_base_defs.h"
+
+#define VM_GPA_INFO_MODE_MIG 0
+#define VM_GPA_INFO_MODE_NMIG 1
+
+/**
+ * Commands between VMSEC and MPU
+ */
+enum tag_vmsec_mpu_cmd {
+ /* vmsec ctx gpa */
+ VMSEC_MPU_CMD_CTX_GPA_SET = 0,
+ VMSEC_MPU_CMD_CTX_GPA_SHOW,
+ VMSEC_MPU_CMD_CTX_GPA_DEL,
+
+ /* vmsec pci hole */
+ VMSEC_MPU_CMD_PCI_HOLE_SET,
+ VMSEC_MPU_CMD_PCI_HOLE_SHOW,
+ VMSEC_MPU_CMD_PCI_HOLE_DEL,
+
+ /* vmsec func cfg */
+ VMSEC_MPU_CMD_FUN_CFG_ENTRY_IDX_SET,
+ VMSEC_MPU_CMD_FUN_CFG_ENTRY_IDX_SHOW,
+
+ VMSEC_MPU_CMD_MAX
+};
+
+struct vmsec_ctx_gpa_entry {
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 func_id : 16;
+ u32 mode : 8;
+ u32 rsvd : 8;
+#else
+ u32 rsvd : 8;
+ u32 mode : 8;
+ u32 func_id : 16;
+#endif
+
+ /* sml tbl to wr */
+ u32 gpa_addr0_hi;
+ u32 gpa_addr0_lo;
+ u32 gpa_len0;
+};
+
+struct vmsec_pci_hole_idx {
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 entry_idx : 5;
+ u32 rsvd : 27;
+#else
+ u32 rsvd : 27;
+ u32 entry_idx : 5;
+#endif
+};
+
+struct vmsec_pci_hole_entry {
+ /* sml tbl to wr */
+ /* pcie hole 32-bit region */
+ u32 gpa_addr0_hi;
+ u32 gpa_addr0_lo;
+ u32 gpa_len0_hi;
+ u32 gpa_len0_lo;
+
+ /* pcie hole 64-bit region */
+ u32 gpa_addr1_hi;
+ u32 gpa_addr1_lo;
+ u32 gpa_len1_hi;
+ u32 gpa_len1_lo;
+
+ /* ctrl info used by drv */
+ u32 domain_id; /* unique vm id */
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 rsvd1 : 21;
+ u32 vf_nums : 11;
+#else
+ u32 rsvd1 : 21;
+ u32 vf_nums : 11;
+#endif
+ u32 vroce_vf_bitmap;
+};
+
+struct vmsec_funcfg_info_entry {
+ /* funcfg to update */
+#if defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)
+ u32 func_id : 16;
+ u32 entry_vld : 1;
+ u32 entry_idx : 5;
+ u32 rsvd : 10;
+#else
+ u32 rsvd : 10;
+ u32 entry_idx : 5;
+ u32 entry_vld : 1;
+ u32 func_id : 16;
+#endif
+};
+
+/* set/get/del */
+struct vmsec_cfg_ctx_gpa_entry_cmd {
+ struct comm_info_head head;
+ struct vmsec_ctx_gpa_entry entry;
+};
+
+#endif /* VMSEC_MPU_COMMON_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/include/vram_common.h b/drivers/net/ethernet/huawei/hinic3/include/vram_common.h
new file mode 100644
index 0000000..9f93f7e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/include/vram_common.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef VRAM_COMMON_H
+#define VRAM_COMMON_H
+
+#include <linux/pci.h>
+#include <linux/notifier.h>
+
+#define VRAM_BLOCK_SIZE_2M 0x200000UL
+#define KEXEC_SIGN "hinic-in-kexec"
+// The current maximum vram_name length is 14; keep this limit in mind when adding other vram entries.
+#define VRAM_NAME_MAX_LEN 16
+
+#define VRAM_CQM_GLB_FUNC_BASE "F"
+#define VRAM_CQM_FAKE_MEM_BASE "FK"
+#define VRAM_CQM_CLA_BASE "C"
+#define VRAM_CQM_CLA_TYPE_BASE "T"
+#define VRAM_CQM_CLA_SMF_BASE "SMF"
+#define VRAM_CQM_CLA_COORD_X "X"
+#define VRAM_CQM_CLA_COORD_Y "Y"
+#define VRAM_CQM_CLA_COORD_Z "Z"
+#define VRAM_CQM_BITMAP_BASE "B"
+
+#define VRAM_NIC_DCB "DCB"
+#define VRAM_NIC_MHOST_MGMT "MHOST_MGMT"
+#define VRAM_NIC_VRAM "NIC_VRAM"
+#define VRAM_NIC_IRQ_VRAM "NIC_IRQ"
+
+#define VRAM_NIC_MQM "NM"
+
+#define VRAM_VBS_BASE_IOCB "BASE_IOCB"
+#define VRAM_VBS_EX_IOCB "EX_IOCB"
+#define VRAM_VBS_RXQS_CQE "RXQS_CQE"
+
+#define VRAM_VBS_VOLQ_MTT "VOLQ_MTT"
+#define VRAM_VBS_VOLQ_MTT_PAGE "MTT_PAGE"
+
+#define VRAM_VROCE_ENTRY_POOL "VROCE_ENTRY"
+#define VRAM_VROCE_GROUP_POOL "VROCE_GROUP"
+#define VRAM_VROCE_UUID "VROCE_UUID"
+#define VRAM_VROCE_VID "VROCE_VID"
+#define VRAM_VROCE_BASE "VROCE_BASE"
+#define VRAM_VROCE_DSCP "VROCE_DSCP"
+#define VRAM_VROCE_QOS "VROCE_QOS"
+#define VRAM_VROCE_DEV "VROCE_DEV"
+#define VRAM_VROCE_RGROUP_HT_CNT "RGROUP_CNT"
+#define VRAM_VROCE_RACL_HT_CNT "RACL_CNT"
+
+#define VRAM_NAME_APPLY_LEN 64
+
+#define MPU_OS_HOTREPLACE_FLAG 0x1
+struct vram_buf_info {
+ char buf_vram_name[VRAM_NAME_APPLY_LEN];
+ int use_vram;
+};
+
+enum KUP_HOOK_POINT {
+ PRE_FREEZE,
+ FREEZE_TO_KILL,
+ PRE_UPDATE_KERNEL,
+ POST_UPDATE_KERNEL,
+ UNFREEZE_TO_RUN,
+ POST_RUN,
+ KUP_HOOK_MAX,
+};
+
+#define hi_vram_kalloc(name, size) 0
+#define hi_vram_kfree(vaddr, name, size)
+#define get_use_vram_flag(void) 0
+#define vram_get_kexec_flag(void) 0
+#define hi_vram_get_gfp_vram(void) 0
+
+#endif /* VRAM_COMMON_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/mag_mpu_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/mag_mpu_cmd_defs.h
new file mode 100644
index 0000000..e77d7d5
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/mag_mpu_cmd_defs.h
@@ -0,0 +1,1143 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef MAG_MPU_CMD_DEFS_H
+#define MAG_MPU_CMD_DEFS_H
+
+#include "mpu_cmd_base_defs.h"
+
+/* serdes cmd struct define */
+#define CMD_ARRAY_BUF_SIZE 64
+#define SERDES_CMD_DATA_BUF_SIZE 512
+#define RATE_MBPS_TO_GBPS 1000
+struct serdes_in_info {
+ u32 chip_id : 16;
+ u32 macro_id : 16;
+ u32 start_sds_id : 16;
+ u32 sds_num : 16;
+
+ u32 cmd_type : 8; /* reserved for iotype */
+ u32 sub_cmd : 8;
+ u32 rw : 1; /* 0: read, 1: write */
+ u32 rsvd : 15;
+
+ u32 val;
+ union {
+ char field[CMD_ARRAY_BUF_SIZE];
+ u32 addr;
+ u8 *ex_param;
+ };
+};
+
+struct serdes_out_info {
+ u32 str_len; /* out_str length */
+ u32 result_offset;
+ u32 type; /* 0:data; 1:string */
+ char out_str[SERDES_CMD_DATA_BUF_SIZE];
+};
+
+struct serdes_cmd_in {
+ struct mgmt_msg_head head;
+
+ struct serdes_in_info serdes_in;
+};
+
+struct serdes_cmd_out {
+ struct mgmt_msg_head head;
+
+ struct serdes_out_info serdes_out;
+};
+
+enum mag_cmd_port_speed {
+ PORT_SPEED_NOT_SET = 0,
+ PORT_SPEED_10MB = 1,
+ PORT_SPEED_100MB = 2,
+ PORT_SPEED_1GB = 3,
+ PORT_SPEED_10GB = 4,
+ PORT_SPEED_25GB = 5,
+ PORT_SPEED_40GB = 6,
+ PORT_SPEED_50GB = 7,
+ PORT_SPEED_100GB = 8,
+ PORT_SPEED_200GB = 9,
+ PORT_SPEED_UNKNOWN
+};
+
+enum mag_cmd_port_an {
+ PORT_AN_NOT_SET = 0,
+ PORT_CFG_AN_ON = 1,
+ PORT_CFG_AN_OFF = 2
+};
+
+enum mag_cmd_port_adapt {
+ PORT_ADAPT_NOT_SET = 0,
+ PORT_CFG_ADAPT_ON = 1,
+ PORT_CFG_ADAPT_OFF = 2
+};
+
+enum mag_cmd_port_sriov {
+ PORT_SRIOV_NOT_SET = 0,
+ PORT_CFG_SRIOV_ON = 1,
+ PORT_CFG_SRIOV_OFF = 2
+};
+
+enum mag_cmd_port_fec {
+ PORT_FEC_NOT_SET = 0,
+ PORT_FEC_RSFEC = 1,
+ PORT_FEC_BASEFEC = 2,
+ PORT_FEC_NOFEC = 3,
+ PORT_FEC_LLRSFEC = 4,
+ PORT_FEC_AUTO = 5
+};
+
+enum mag_cmd_port_lanes {
+ PORT_LANES_NOT_SET = 0,
+ PORT_LANES_X1 = 1,
+ PORT_LANES_X2 = 2,
+ PORT_LANES_X4 = 4,
+ PORT_LANES_X8 = 8 /* reserved for future use */
+};
+
+enum mag_cmd_port_duplex {
+ PORT_DUPLEX_HALF = 0,
+ PORT_DUPLEX_FULL = 1
+};
+
+enum mag_cmd_wire_node {
+ WIRE_NODE_UNDEF = 0,
+ CABLE_10G = 1,
+ FIBER_10G = 2,
+ CABLE_25G = 3,
+ FIBER_25G = 4,
+ CABLE_40G = 5,
+ FIBER_40G = 6,
+ CABLE_50G = 7,
+ FIBER_50G = 8,
+ CABLE_100G = 9,
+ FIBER_100G = 10,
+ CABLE_200G = 11,
+ FIBER_200G = 12,
+ WIRE_NODE_NUM
+};
+
+enum mag_cmd_cnt_type {
+ MAG_RX_RSFEC_DEC_CW_CNT = 0,
+ MAG_RX_RSFEC_CORR_CW_CNT = 1,
+ MAG_RX_RSFEC_UNCORR_CW_CNT = 2,
+ MAG_RX_PCS_BER_CNT = 3,
+ MAG_RX_PCS_ERR_BLOCK_CNT = 4,
+ MAG_RX_PCS_E_BLK_CNT = 5,
+ MAG_RX_PCS_DEC_ERR_BLK_CNT = 6,
+ MAG_RX_PCS_LANE_BIP_ERR_CNT = 7,
+ MAG_RX_RSFEC_ERR_CW_CNT = 8,
+ MAG_CNT_NUM
+};
+
+/* mag_cmd_set_port_cfg config bitmap */
+#define MAG_CMD_SET_SPEED 0x1
+#define MAG_CMD_SET_AUTONEG 0x2
+#define MAG_CMD_SET_FEC 0x4
+#define MAG_CMD_SET_LANES 0x8
+struct mag_cmd_set_port_cfg {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u32 config_bitmap;
+ u8 speed;
+ u8 autoneg;
+ u8 fec;
+ u8 lanes;
+ u8 rsvd1[20];
+};
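
As a hedged illustration of the config_bitmap convention above (firmware applies only the fields whose MAG_CMD_SET_* bit is set), a caller forcing 25G with RS-FEC while leaving autoneg and lane count untouched might build the command as below. The include name is assumed from this patch, and sending the message goes through the driver's usual management-channel helper, which is outside this header.

#include <linux/types.h>
#include <linux/string.h>
#include "mag_mpu_cmd_defs.h"	/* header added by this patch (include path assumed) */

static void mag_build_set_port_cfg_25g_rsfec(struct mag_cmd_set_port_cfg *cfg,
					     u8 port_id)
{
	memset(cfg, 0, sizeof(*cfg));
	cfg->port_id = port_id;

	/* firmware applies only the fields flagged in config_bitmap */
	cfg->config_bitmap = MAG_CMD_SET_SPEED | MAG_CMD_SET_FEC;
	cfg->speed = PORT_SPEED_25GB;
	cfg->fec = PORT_FEC_RSFEC;
}
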
+
+/* mag supported/advertised link mode bitmap */
+enum mag_cmd_link_mode {
+ LINK_MODE_GE = 0,
+ LINK_MODE_10GE_BASE_R = 1,
+ LINK_MODE_25GE_BASE_R = 2,
+ LINK_MODE_40GE_BASE_R4 = 3,
+ LINK_MODE_50GE_BASE_R = 4,
+ LINK_MODE_50GE_BASE_R2 = 5,
+ LINK_MODE_100GE_BASE_R = 6,
+ LINK_MODE_100GE_BASE_R2 = 7,
+ LINK_MODE_100GE_BASE_R4 = 8,
+ LINK_MODE_200GE_BASE_R2 = 9,
+ LINK_MODE_200GE_BASE_R4 = 10,
+ LINK_MODE_MAX_NUMBERS,
+
+ LINK_MODE_UNKNOWN = 0xFFFF
+};
+
+#define LINK_MODE_GE_BIT 0x1u
+#define LINK_MODE_10GE_BASE_R_BIT 0x2u
+#define LINK_MODE_25GE_BASE_R_BIT 0x4u
+#define LINK_MODE_40GE_BASE_R4_BIT 0x8u
+#define LINK_MODE_50GE_BASE_R_BIT 0x10u
+#define LINK_MODE_50GE_BASE_R2_BIT 0x20u
+#define LINK_MODE_100GE_BASE_R_BIT 0x40u
+#define LINK_MODE_100GE_BASE_R2_BIT 0x80u
+#define LINK_MODE_100GE_BASE_R4_BIT 0x100u
+#define LINK_MODE_200GE_BASE_R2_BIT 0x200u
+#define LINK_MODE_200GE_BASE_R4_BIT 0x400u
+
+#define CABLE_10GE_BASE_R_BIT LINK_MODE_10GE_BASE_R_BIT
+#define CABLE_25GE_BASE_R_BIT (LINK_MODE_25GE_BASE_R_BIT | LINK_MODE_10GE_BASE_R_BIT)
+#define CABLE_40GE_BASE_R4_BIT LINK_MODE_40GE_BASE_R4_BIT
+#define CABLE_50GE_BASE_R_BIT (LINK_MODE_50GE_BASE_R_BIT | LINK_MODE_25GE_BASE_R_BIT | \
+ LINK_MODE_10GE_BASE_R_BIT)
+#define CABLE_50GE_BASE_R2_BIT LINK_MODE_50GE_BASE_R2_BIT
+#define CABLE_100GE_BASE_R2_BIT (LINK_MODE_100GE_BASE_R2_BIT | LINK_MODE_50GE_BASE_R2_BIT)
+#define CABLE_100GE_BASE_R4_BIT (LINK_MODE_100GE_BASE_R4_BIT | LINK_MODE_40GE_BASE_R4_BIT)
+#define CABLE_200GE_BASE_R4_BIT (LINK_MODE_200GE_BASE_R4_BIT | LINK_MODE_100GE_BASE_R4_BIT | \
+ LINK_MODE_40GE_BASE_R4_BIT)
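
The *_BIT macros above are one-hot encodings of enum mag_cmd_link_mode (bit N corresponds to enum value N). A small sketch, assuming mag_mpu_cmd_defs.h is included and relying on the enum and the bit macros being kept in lockstep:

static u32 mag_link_mode_to_bit(enum mag_cmd_link_mode mode)
{
	/* LINK_MODE_GE (0) maps to LINK_MODE_GE_BIT (0x1), and so on */
	if (mode >= LINK_MODE_MAX_NUMBERS)
		return 0;
	return 1U << mode;
}
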
+
+struct mag_cmd_get_port_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u8 wire_type;
+ u8 an_support;
+ u8 an_en;
+ u8 duplex;
+
+ u8 speed;
+ u8 fec;
+ u8 lanes;
+ u8 rsvd1;
+
+ u32 supported_mode;
+ u32 advertised_mode;
+ u32 supported_fec_mode;
+ u16 bond_speed;
+ u8 rsvd2[2];
+};
+
+#define MAG_CMD_OPCODE_GET 0
+#define MAG_CMD_OPCODE_SET 1
+struct mag_cmd_set_port_adapt {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get adapt info 1:set adapt */
+ u8 enable;
+ u8 rsvd0;
+ u32 speed_mode;
+ u32 rsvd1[3];
+};
+
+#define MAG_CMD_LP_MODE_SDS_S_TX2RX 1
+#define MAG_CMD_LP_MODE_SDS_P_RX2TX 2
+#define MAG_CMD_LP_MODE_SDS_P_TX2RX 3
+#define MAG_CMD_LP_MODE_MAC_RX2TX 4
+#define MAG_CMD_LP_MODE_MAC_TX2RX 5
+#define MAG_CMD_LP_MODE_TXDP2RXDP 6
+struct mag_cmd_cfg_loopback_mode {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get loopback mode 1:set loopback mode */
+ u8 lp_mode;
+ u8 lp_en; /* 0:disable 1:enable */
+
+ u32 rsvd0[2];
+};
+
+#define MAG_CMD_PORT_DISABLE 0x0
+#define MAG_CMD_TX_ENABLE 0x1
+#define MAG_CMD_RX_ENABLE 0x2
+/* The physical port is disabled only when all PFs of the port are set to down;
+ * if any PF is enabled, the port is enabled.
+ */
+struct mag_cmd_set_port_enable {
+ struct mgmt_msg_head head;
+
+	u16 function_id; /* function_id should not exceed the max supported pf_id (32) */
+ u16 rsvd0;
+
+ u8 state; /* bitmap bit0:tx_en bit1:rx_en */
+ u8 rsvd1[3];
+};
+
+struct mag_cmd_get_port_enable {
+ struct mgmt_msg_head head;
+
+ u8 port;
+ u8 state; /* bitmap bit0:tx_en bit1:rx_en */
+ u8 rsvd0[2];
+};
+
+#define PMA_FOLLOW_DEFAULT 0x0
+#define PMA_FOLLOW_ENABLE 0x1
+#define PMA_FOLLOW_DISABLE 0x2
+#define PMA_FOLLOW_GET 0x4
+/* The physical port disables link follow only when all PFs of the port are set to follow disable. */
+struct mag_cmd_set_link_follow {
+ struct mgmt_msg_head head;
+
+	u16 function_id; /* function_id should not exceed the max supported pf_id (32) */
+ u16 rsvd0;
+
+ u8 follow;
+ u8 rsvd1[3];
+};
+
+/* firmware also use this cmd report link event to driver */
+struct mag_cmd_get_link_status {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 status; /* 0:link down 1:link up */
+ u8 rsvd0[2];
+};
+
+/* firmware also use this cmd report bond event to driver */
+struct mag_cmd_get_bond_status {
+ struct mgmt_msg_head head;
+
+ u8 status; /* 0:bond down 1:bond up */
+ u8 rsvd0[3];
+};
+
+struct mag_cmd_set_pma_enable {
+ struct mgmt_msg_head head;
+
+	u16 function_id; /* function_id should not exceed the max supported pf_id (32) */
+ u16 enable;
+};
+
+struct mag_cmd_cfg_an_type {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get an type 1:set an type */
+ u8 rsvd0[2];
+
+	u32 an_type; /* 0:ieee 1:25G/50G eth consortium */
+};
+
+struct mag_cmd_get_link_time {
+ struct mgmt_msg_head head;
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u32 link_up_begin;
+ u32 link_up_end;
+ u32 link_down_begin;
+ u32 link_down_end;
+};
+
+struct mag_cmd_cfg_fec_mode {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get fec mode 1:set fec mode */
+ u8 advertised_fec;
+ u8 supported_fec;
+};
+
+/* speed */
+#define PANGEA_ADAPT_10G_BITMAP 0xd
+#define PANGEA_ADAPT_25G_BITMAP 0x72
+#define PANGEA_ADAPT_40G_BITMAP 0x680
+#define PANGEA_ADAPT_100G_BITMAP 0x1900
+
+/* speed and fec */
+#define PANGEA_10G_NO_BITMAP 0x8
+#define PANGEA_10G_BASE_BITMAP 0x4
+#define PANGEA_25G_NO_BITMAP 0x10
+#define PANGEA_25G_BASE_BITMAP 0x20
+#define PANGEA_25G_RS_BITMAP 0x40
+#define PANGEA_40G_NO_BITMAP 0x400
+#define PANGEA_40G_BASE_BITMAP 0x200
+#define PANGEA_100G_NO_BITMAP 0x800
+#define PANGEA_100G_RS_BITMAP 0x1000
+
+/* adapt or fec */
+#define PANGEA_ADAPT_ADAPT_BITMAP 0x183
+#define PANGEA_ADAPT_NO_BITMAP 0xc18
+#define PANGEA_ADAPT_BASE_BITMAP 0x224
+#define PANGEA_ADAPT_RS_BITMAP 0x1040
+
+/* default cfg */
+#define PANGEA_ADAPT_CFG_10G_CR 0x200d
+#define PANGEA_ADAPT_CFG_10G_SRLR 0xd
+#define PANGEA_ADAPT_CFG_25G_CR 0x207f
+#define PANGEA_ADAPT_CFG_25G_SRLR 0x72
+#define PANGEA_ADAPT_CFG_40G_CR4 0x2680
+#define PANGEA_ADAPT_CFG_40G_SRLR4 0x680
+#define PANGEA_ADAPT_CFG_100G_CR4 0x3f80
+#define PANGEA_ADAPT_CFG_100G_SRLR4 0x1900
+
+union pangea_adapt_bitmap_u {
+ struct {
+ u32 adapt_10g : 1; /* [0] adapt_10g */
+ u32 adapt_25g : 1; /* [1] adapt_25g */
+ u32 base_10g : 1; /* [2] base_10g */
+ u32 no_10g : 1; /* [3] no_10g */
+ u32 no_25g : 1; /* [4] no_25g */
+ u32 base_25g : 1; /* [5] base_25g */
+ u32 rs_25g : 1; /* [6] rs_25g */
+ u32 adapt_40g : 1; /* [7] adapt_40g */
+ u32 adapt_100g : 1; /* [8] adapt_100g */
+ u32 base_40g : 1; /* [9] base_40g */
+ u32 no_40g : 1; /* [10] no_40g */
+ u32 no_100g : 1; /* [11] no_100g */
+ u32 rs_100g : 1; /* [12] rs_100g */
+ u32 auto_neg : 1; /* [13] auto_neg */
+ u32 rsvd0 : 18; /* [31:14] reserved */
+ } bits;
+
+ u32 value;
+};
+
+#define PANGEA_ADAPT_GET 0x0
+#define PANGEA_ADAPT_SET 0x1
+struct mag_cmd_set_pangea_adapt {
+ struct mgmt_msg_head head;
+
+ u16 port_id;
+ u8 opcode; /* 0:get adapt info 1:cfg adapt info */
+ u8 wire_type;
+
+ union pangea_adapt_bitmap_u cfg_bitmap;
+ union pangea_adapt_bitmap_u cur_bitmap;
+ u32 rsvd1[3];
+};
+
+struct mag_cmd_cfg_bios_link_cfg {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get bios link info 1:set bios link cfg */
+ u8 clear;
+ u8 rsvd0;
+
+ u32 wire_type;
+ u8 an_en;
+ u8 speed;
+ u8 fec;
+ u8 rsvd1;
+ u32 speed_mode;
+ u32 rsvd2[3];
+};
+
+struct mag_cmd_restore_link_cfg {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd[7];
+};
+
+struct mag_cmd_activate_bios_link_cfg {
+ struct mgmt_msg_head head;
+
+ u32 rsvd[8];
+};
+
+/* led type */
+enum mag_led_type {
+ MAG_CMD_LED_TYPE_ALARM = 0x0,
+ MAG_CMD_LED_TYPE_LOW_SPEED = 0x1,
+ MAG_CMD_LED_TYPE_HIGH_SPEED = 0x2
+};
+
+/* led mode */
+enum mag_led_mode {
+ MAG_CMD_LED_MODE_DEFAULT = 0x0,
+ MAG_CMD_LED_MODE_FORCE_ON = 0x1,
+ MAG_CMD_LED_MODE_FORCE_OFF = 0x2,
+ MAG_CMD_LED_MODE_FORCE_BLINK_1HZ = 0x3,
+ MAG_CMD_LED_MODE_FORCE_BLINK_2HZ = 0x4,
+ MAG_CMD_LED_MODE_FORCE_BLINK_4HZ = 0x5,
+ MAG_CMD_LED_MODE_1HZ = 0x6,
+ MAG_CMD_LED_MODE_2HZ = 0x7,
+ MAG_CMD_LED_MODE_4HZ = 0x8
+};
+
+/* The LED reports an alarm when any PF of the port is in the alarm state. */
+struct mag_cmd_set_led_cfg {
+ struct mgmt_msg_head head;
+
+ u16 function_id;
+ u8 type;
+ u8 mode;
+};
+
+#define XSFP_INFO_MAX_SIZE 640
+/* xsfp wire type, refer to cmis protocol definition */
+enum mag_wire_type {
+ MAG_CMD_WIRE_TYPE_UNKNOWN = 0x0,
+ MAG_CMD_WIRE_TYPE_MM = 0x1,
+ MAG_CMD_WIRE_TYPE_SM = 0x2,
+ MAG_CMD_WIRE_TYPE_COPPER = 0x3,
+ MAG_CMD_WIRE_TYPE_ACC = 0x4,
+ MAG_CMD_WIRE_TYPE_BASET = 0x5,
+ MAG_CMD_WIRE_TYPE_AOC = 0x40,
+ MAG_CMD_WIRE_TYPE_ELECTRIC = 0x41,
+ MAG_CMD_WIRE_TYPE_BACKPLANE = 0x42
+};
+
+struct mag_cmd_get_xsfp_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 wire_type;
+ u16 out_len;
+ u32 rsvd;
+ u8 sfp_info[XSFP_INFO_MAX_SIZE];
+};
+
+#define MAG_CMD_XSFP_DISABLE 0x0
+#define MAG_CMD_XSFP_ENABLE 0x1
+/* The SFP is disabled only when all PFs of the port set the SFP down;
+ * if any PF is enabled, the SFP is enabled.
+ */
+struct mag_cmd_set_xsfp_enable {
+ struct mgmt_msg_head head;
+
+ u32 port_id;
+ u32 status; /* 0:on 1:off */
+};
+
+#define MAG_CMD_XSFP_PRESENT 0x0
+#define MAG_CMD_XSFP_ABSENT 0x1
+struct mag_cmd_get_xsfp_present {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 abs_status; /* 0:present, 1:absent */
+ u8 rsvd[2];
+};
+
+#define MAG_CMD_XSFP_READ 0x0
+#define MAG_CMD_XSFP_WRITE 0x1
+struct mag_cmd_set_xsfp_rw {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 operation; /* 0: read; 1: write */
+ u8 value;
+ u8 rsvd0;
+ u32 devaddr;
+ u32 offset;
+ u32 rsvd1;
+};
+
+struct mag_cmd_cfg_xsfp_temperature {
+ struct mgmt_msg_head head;
+
+ u8 opcode; /* 0:read 1:write */
+ u8 rsvd0[3];
+ s32 max_temp;
+ s32 min_temp;
+};
+
+struct mag_cmd_get_xsfp_temperature {
+ struct mgmt_msg_head head;
+
+ s16 sfp_temp[8];
+ u8 rsvd[32];
+ s32 max_temp;
+ s32 min_temp;
+};
+
+/* xsfp plug event */
+struct mag_cmd_wire_event {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 status; /* 0:present, 1:absent */
+ u8 rsvd[2];
+};
+
+/* link err type definition */
+#define MAG_CMD_ERR_XSFP_UNKNOWN 0x0
+struct mag_cmd_link_err_event {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 link_err_type;
+ u8 rsvd[2];
+};
+
+#define MAG_PARAM_TYPE_DEFAULT_CFG 0x0
+#define MAG_PARAM_TYPE_BIOS_CFG 0x1
+#define MAG_PARAM_TYPE_TOOL_CFG 0x2
+#define MAG_PARAM_TYPE_FINAL_CFG 0x3
+#define MAG_PARAM_TYPE_WIRE_INFO 0x4
+#define MAG_PARAM_TYPE_ADAPT_INFO 0x5
+#define MAG_PARAM_TYPE_MAX_CNT 0x6
+struct param_head {
+ u8 valid_len;
+ u8 info_type;
+ u8 rsvd[2];
+};
+
+struct mag_port_link_param {
+ struct param_head head;
+
+ u8 an;
+ u8 fec;
+ u8 speed;
+ u8 rsvd0;
+
+ u32 used;
+ u32 an_fec_ability;
+ u32 an_speed_ability;
+ u32 an_pause_ability;
+};
+
+struct mag_port_wire_info {
+ struct param_head head;
+
+ u8 status;
+ u8 rsvd0[3];
+
+ u8 wire_type;
+ u8 default_fec;
+ u8 speed;
+ u8 rsvd1;
+ u32 speed_ability;
+};
+
+struct mag_port_adapt_info {
+ struct param_head head;
+
+ u32 adapt_en;
+ u32 flash_adapt;
+ u32 rsvd0[2];
+
+ u32 wire_node;
+ u32 an_en;
+ u32 speed;
+ u32 fec;
+};
+
+struct mag_port_param_info {
+ u8 parameter_cnt;
+ u8 lane_id;
+ u8 lane_num;
+ u8 rsvd0;
+
+ struct mag_port_link_param default_cfg;
+ struct mag_port_link_param bios_cfg;
+ struct mag_port_link_param tool_cfg;
+ struct mag_port_link_param final_cfg;
+
+ struct mag_port_wire_info wire_info;
+ struct mag_port_adapt_info adapt_info;
+};
+
+#define XSFP_VENDOR_NAME_LEN 16
+struct mag_cmd_event_port_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 event_type;
+ u8 rsvd0[2];
+
+ u8 vendor_name[XSFP_VENDOR_NAME_LEN];
+ u32 port_type; /* fiber / copper */
+ u32 port_sub_type; /* sr / lr */
+ u32 cable_length; /* 1/3/5m */
+ u8 cable_temp; /* temp */
+ u8 max_speed; /* Maximum rate of an optical module */
+ u8 sfp_type; /* sfp/qsfp/dsfp */
+ u8 rsvd1;
+ u32 power[4]; /* Optical Power */
+
+ u8 an_state;
+ u8 fec;
+ u16 speed;
+
+ u8 gpio_insert; /* 0:present 1:absent */
+ u8 alos;
+ u8 rx_los;
+ u8 pma_ctrl;
+
+ u32 pma_fifo_reg;
+ u32 pma_signal_ok_reg;
+ u32 pcs_64_66b_reg;
+ u32 rf_lf;
+ u8 pcs_link;
+ u8 pcs_mac_link;
+ u8 tx_enable;
+ u8 rx_enable;
+ u32 pcs_err_cnt;
+
+ u8 eq_data[38];
+ u8 rsvd2[2];
+
+ u32 his_link_machine_state;
+ u32 cur_link_machine_state;
+ u8 his_machine_state_data[128];
+ u8 cur_machine_state_data[128];
+ u8 his_machine_state_length;
+ u8 cur_machine_state_length;
+
+ struct mag_port_param_info param_info;
+ u8 rsvd3[360];
+};
+
+struct mag_cmd_rsfec_stats {
+ u32 rx_err_lane_phy;
+};
+
+struct mag_cmd_port_stats {
+ u64 mac_tx_fragment_pkt_num;
+ u64 mac_tx_undersize_pkt_num;
+ u64 mac_tx_undermin_pkt_num;
+ u64 mac_tx_64_oct_pkt_num;
+ u64 mac_tx_65_127_oct_pkt_num;
+ u64 mac_tx_128_255_oct_pkt_num;
+ u64 mac_tx_256_511_oct_pkt_num;
+ u64 mac_tx_512_1023_oct_pkt_num;
+ u64 mac_tx_1024_1518_oct_pkt_num;
+ u64 mac_tx_1519_2047_oct_pkt_num;
+ u64 mac_tx_2048_4095_oct_pkt_num;
+ u64 mac_tx_4096_8191_oct_pkt_num;
+ u64 mac_tx_8192_9216_oct_pkt_num;
+ u64 mac_tx_9217_12287_oct_pkt_num;
+ u64 mac_tx_12288_16383_oct_pkt_num;
+ u64 mac_tx_1519_max_bad_pkt_num;
+ u64 mac_tx_1519_max_good_pkt_num;
+ u64 mac_tx_oversize_pkt_num;
+ u64 mac_tx_jabber_pkt_num;
+ u64 mac_tx_bad_pkt_num;
+ u64 mac_tx_bad_oct_num;
+ u64 mac_tx_good_pkt_num;
+ u64 mac_tx_good_oct_num;
+ u64 mac_tx_total_pkt_num;
+ u64 mac_tx_total_oct_num;
+ u64 mac_tx_uni_pkt_num;
+ u64 mac_tx_multi_pkt_num;
+ u64 mac_tx_broad_pkt_num;
+ u64 mac_tx_pause_num;
+ u64 mac_tx_pfc_pkt_num;
+ u64 mac_tx_pfc_pri0_pkt_num;
+ u64 mac_tx_pfc_pri1_pkt_num;
+ u64 mac_tx_pfc_pri2_pkt_num;
+ u64 mac_tx_pfc_pri3_pkt_num;
+ u64 mac_tx_pfc_pri4_pkt_num;
+ u64 mac_tx_pfc_pri5_pkt_num;
+ u64 mac_tx_pfc_pri6_pkt_num;
+ u64 mac_tx_pfc_pri7_pkt_num;
+ u64 mac_tx_control_pkt_num;
+ u64 mac_tx_err_all_pkt_num;
+ u64 mac_tx_from_app_good_pkt_num;
+ u64 mac_tx_from_app_bad_pkt_num;
+
+ u64 mac_rx_fragment_pkt_num;
+ u64 mac_rx_undersize_pkt_num;
+ u64 mac_rx_undermin_pkt_num;
+ u64 mac_rx_64_oct_pkt_num;
+ u64 mac_rx_65_127_oct_pkt_num;
+ u64 mac_rx_128_255_oct_pkt_num;
+ u64 mac_rx_256_511_oct_pkt_num;
+ u64 mac_rx_512_1023_oct_pkt_num;
+ u64 mac_rx_1024_1518_oct_pkt_num;
+ u64 mac_rx_1519_2047_oct_pkt_num;
+ u64 mac_rx_2048_4095_oct_pkt_num;
+ u64 mac_rx_4096_8191_oct_pkt_num;
+ u64 mac_rx_8192_9216_oct_pkt_num;
+ u64 mac_rx_9217_12287_oct_pkt_num;
+ u64 mac_rx_12288_16383_oct_pkt_num;
+ u64 mac_rx_1519_max_bad_pkt_num;
+ u64 mac_rx_1519_max_good_pkt_num;
+ u64 mac_rx_oversize_pkt_num;
+ u64 mac_rx_jabber_pkt_num;
+ u64 mac_rx_bad_pkt_num;
+ u64 mac_rx_bad_oct_num;
+ u64 mac_rx_good_pkt_num;
+ u64 mac_rx_good_oct_num;
+ u64 mac_rx_total_pkt_num;
+ u64 mac_rx_total_oct_num;
+ u64 mac_rx_uni_pkt_num;
+ u64 mac_rx_multi_pkt_num;
+ u64 mac_rx_broad_pkt_num;
+ u64 mac_rx_pause_num;
+ u64 mac_rx_pfc_pkt_num;
+ u64 mac_rx_pfc_pri0_pkt_num;
+ u64 mac_rx_pfc_pri1_pkt_num;
+ u64 mac_rx_pfc_pri2_pkt_num;
+ u64 mac_rx_pfc_pri3_pkt_num;
+ u64 mac_rx_pfc_pri4_pkt_num;
+ u64 mac_rx_pfc_pri5_pkt_num;
+ u64 mac_rx_pfc_pri6_pkt_num;
+ u64 mac_rx_pfc_pri7_pkt_num;
+ u64 mac_rx_control_pkt_num;
+ u64 mac_rx_sym_err_pkt_num;
+ u64 mac_rx_fcs_err_pkt_num;
+ u64 mac_rx_send_app_good_pkt_num;
+ u64 mac_rx_send_app_bad_pkt_num;
+ u64 mac_rx_unfilter_pkt_num;
+};
+
+struct mag_port_stats {
+ u64 tx_frag_pkts_port;
+ u64 tx_under_frame_pkts_port;
+ u64 tx_under_min_pkts_port;
+ u64 tx_64_oct_pkts_port;
+ u64 tx_127_oct_pkts_port;
+ u64 tx_255_oct_pkts_port;
+ u64 tx_511_oct_pkts_port;
+ u64 tx_1023_oct_pkts_port;
+ u64 tx_1518_oct_pkts_port;
+ u64 tx_2047_oct_pkts_port;
+ u64 tx_4095_oct_pkts_port;
+ u64 tx_8191_oct_pkts_port;
+ u64 tx_9216_oct_pkts_port;
+ u64 tx_12287_oct_pkts_port;
+ u64 tx_16383_oct_pkts_port;
+ u64 tx_1519_to_max_bad_pkts_port;
+ u64 tx_1519_to_max_good_pkts_port;
+ u64 tx_oversize_pkts_port;
+ u64 tx_jabber_pkts_port;
+ u64 tx_bad_pkts_port;
+ u64 tx_bad_octs_port;
+ u64 tx_good_pkts_port;
+ u64 tx_good_octs_port;
+ u64 tx_total_pkts_port;
+ u64 tx_total_octs_port;
+ u64 tx_unicast_pkts_port;
+ u64 tx_multicast_pkts_port;
+ u64 tx_broadcast_pkts_port;
+ u64 tx_pause_pkts_port;
+ u64 tx_pfc_pkts_port;
+ u64 tx_pri_0_pkts_port;
+ u64 tx_pri_1_pkts_port;
+ u64 tx_pri_2_pkts_port;
+ u64 tx_pri_3_pkts_port;
+ u64 tx_pri_4_pkts_port;
+ u64 tx_pri_5_pkts_port;
+ u64 tx_pri_6_pkts_port;
+ u64 tx_pri_7_pkts_port;
+ u64 tx_mac_control_pkts_port;
+ u64 tx_y1731_pkts_port;
+ u64 tx_1588_pkts_port;
+ u64 tx_error_pkts_port;
+ u64 tx_app_good_pkts_port;
+ u64 tx_app_bad_pkts_port;
+ u64 rx_frag_pkts_port;
+ u64 rx_under_frame_pkts_port;
+ u64 rx_under_min_pkts_port;
+ u64 rx_64_oct_pkts_port;
+ u64 rx_127_oct_pkts_port;
+ u64 rx_255_oct_pkts_port;
+ u64 rx_511_oct_pkts_port;
+ u64 rx_1023_oct_pkts_port;
+ u64 rx_1518_oct_pkts_port;
+ u64 rx_2047_oct_pkts_port;
+ u64 rx_4095_oct_pkts_port;
+ u64 rx_8191_oct_pkts_port;
+ u64 rx_9216_oct_pkts_port;
+ u64 rx_12287_oct_pkts_port;
+ u64 rx_16383_oct_pkts_port;
+ u64 rx_1519_to_max_bad_pkts_port;
+ u64 rx_1519_to_max_good_pkts_port;
+ u64 rx_oversize_pkts_port;
+ u64 rx_jabber_pkts_port;
+ u64 rx_bad_pkts_port;
+ u64 rx_bad_octs_port;
+ u64 rx_good_pkts_port;
+ u64 rx_good_octs_port;
+ u64 rx_total_pkts_port;
+ u64 rx_total_octs_port;
+ u64 rx_unicast_pkts_port;
+ u64 rx_multicast_pkts_port;
+ u64 rx_broadcast_pkts_port;
+ u64 rx_pause_pkts_port;
+ u64 rx_pfc_pkts_port;
+ u64 rx_pri_0_pkts_port;
+ u64 rx_pri_1_pkts_port;
+ u64 rx_pri_2_pkts_port;
+ u64 rx_pri_3_pkts_port;
+ u64 rx_pri_4_pkts_port;
+ u64 rx_pri_5_pkts_port;
+ u64 rx_pri_6_pkts_port;
+ u64 rx_pri_7_pkts_port;
+ u64 rx_mac_control_pkts_port;
+ u64 rx_y1731_pkts_port;
+ u64 rx_sym_err_pkts_port;
+ u64 rx_fcs_err_pkts_port;
+ u64 rx_app_good_pkts_port;
+ u64 rx_app_bad_pkts_port;
+ u64 rx_unfilter_pkts_port;
+};
+
+struct mag_cmd_port_stats_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+};
+
+struct mag_cmd_get_port_stat {
+ struct mgmt_msg_head head;
+
+ struct mag_cmd_port_stats counter;
+ u64 rsvd1[15];
+};
+
+struct mag_cmd_get_pcs_err_cnt {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u32 pcs_err_cnt;
+};
+
+struct mag_cmd_get_mag_cnt {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 len;
+ u8 rsvd0[2];
+
+ u32 mag_csr[128];
+};
+
+struct mag_cmd_dump_antrain_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 len;
+ u8 rsvd0[2];
+
+ u32 antrain_csr[256];
+};
+
+#define MAG_SFP_PORT_NUM 24
+struct mag_cmd_sfp_temp_in_info {
+ struct mgmt_msg_head head; /* 8B */
+ u8 opt_type; /* 0:read operation 1:cfg operation */
+ u8 rsv[3];
+ s32 max_temp; /* Chip optical module threshold */
+ s32 min_temp; /* Chip optical module threshold */
+};
+
+struct mag_cmd_sfp_temp_out_info {
+ struct mgmt_msg_head head; /* 8B */
+ s16 sfp_temp_data[MAG_SFP_PORT_NUM]; /* Temperature read */
+ s32 max_temp; /* Chip optical module threshold */
+ s32 min_temp; /* Chip optical module threshold */
+};
+
+#define XSFP_CMIS_PARSE_PAGE_NUM 6
+#define XSFP_CMIS_INFO_MAX_SIZE 1536
+#define QSFP_CMIS_PAGE_SIZE 128
+#define QSFP_CMIS_MAX_CHANNEL_NUM 0x8
+
+/* Lower: Control and Essentials, Upper: Administrative Information */
+#define QSFP_CMIS_PAGE_00H 0x00
+/* Advertising */
+#define QSFP_CMIS_PAGE_01H 0x01
+/* Module and lane Thresholds */
+#define QSFP_CMIS_PAGE_02H 0x02
+/* User EEPROM */
+#define QSFP_CMIS_PAGE_03H 0x03
+/* Laser Capabilities Advertising (Page 04h, Optional) */
+#define QSFP_CMIS_PAGE_04H 0x04
+#define QSFP_CMIS_PAGE_05H 0x05
+/* Lane and Data Path Control */
+#define QSFP_CMIS_PAGE_10H 0x10
+/* Lane Status */
+#define QSFP_CMIS_PAGE_11H 0x11
+#define QSFP_CMIS_PAGE_12H 0x12
+
+#define MGMT_TLV_U8_SIZE 1
+#define MGMT_TLV_U16_SIZE 2
+#define MGMT_TLV_U32_SIZE 4
+
+#define MGMT_TLV_GET_U8(addr) (*((u8 *)(void *)(addr)))
+#define MGMT_TLV_SET_U8(addr, value) \
+ ((*((u8 *)(void *)(addr))) = ((u8)(value)))
+
+#define MGMT_TLV_GET_U16(addr) (*((u16 *)(void *)(addr)))
+#define MGMT_TLV_SET_U16(addr, value) \
+ ((*((u16 *)(void *)(addr))) = ((u16)(value)))
+
+#define MGMT_TLV_GET_U32(addr) (*((u32 *)(void *)(addr)))
+#define MGMT_TLV_SET_U32(addr, value) \
+ ((*((u32 *)(void *)(addr))) = ((u32)(value)))
+
+#define MGMT_TLV_TYPE_END 0xFFFF
+
+enum mag_xsfp_type {
+ MAG_XSFP_TYPE_PAGE = 0x01,
+ MAG_XSFP_TYPE_WIRE_TYPE = 0x02,
+ MAG_XSFP_TYPE_END = MGMT_TLV_TYPE_END
+};
+
+struct qsfp_cmis_lower_page_00_s {
+ u8 resv0[14];
+ u8 temperature_msb;
+ u8 temperature_lsb;
+ u8 volt_supply[2];
+ u8 resv1[67];
+ u8 media_type;
+ u8 electrical_interface_id;
+ u8 media_interface_id;
+ u8 lane_count;
+ u8 resv2[39];
+};
+
+struct qsfp_cmis_upper_page_00_s {
+ u8 identifier;
+ u8 vendor_name[16];
+ u8 vendor_oui[3];
+ u8 vendor_pn[16];
+ u8 vendor_rev[2];
+ u8 vendor_sn[16];
+ u8 date_code[8];
+ u8 clei_code[10];
+ u8 power_character[2];
+ u8 cable_len;
+ u8 connector;
+ u8 copper_cable_attenuation[6];
+ u8 near_end_implementation;
+ u8 far_end_config;
+ u8 media_technology;
+ u8 resv0[43];
+};
+
+struct qsfp_cmis_upper_page_01_s {
+ u8 firmware_rev[2];
+ u8 hardware_rev[2];
+ u8 smf_len_km;
+ u8 om5_len;
+ u8 om4_len;
+ u8 om3_len;
+ u8 om2_len;
+ u8 resv0;
+ u8 wavelength[2];
+ u8 wavelength_tolerance[2];
+ u8 pages_implement;
+ u8 resv1[16];
+ u8 monitor_implement[2];
+ u8 resv2[95];
+};
+
+struct qsfp_cmis_upper_page_02_s {
+ u8 temperature_high_alarm[2];
+ u8 temperature_low_alarm[2];
+ u8 temperature_high_warn[2];
+ u8 temperature_low_warn[2];
+ u8 volt_high_alarm[2];
+ u8 volt_low_alarm[2];
+ u8 volt_high_warn[2];
+ u8 volt_low_warn[2];
+ u8 resv0[32];
+ u8 tx_power_high_alarm[2];
+ u8 tx_power_low_alarm[2];
+ u8 tx_power_high_warn[2];
+ u8 tx_power_low_warn[2];
+ u8 tx_bias_high_alarm[2];
+ u8 tx_bias_low_alarm[2];
+ u8 tx_bias_high_warn[2];
+ u8 tx_bias_low_warn[2];
+ u8 rx_power_high_alarm[2];
+ u8 rx_power_low_alarm[2];
+ u8 rx_power_high_warn[2];
+ u8 rx_power_low_warn[2];
+ u8 resv1[56];
+};
+
+struct qsfp_cmis_upper_page_03_s {
+ u8 resv0[QSFP_CMIS_PAGE_SIZE]; /* Reg 128-255: Upper Memory: Page 03H */
+};
+
+struct qsfp_cmis_upper_page_10_s {
+ u8 resv0[2]; /* Reg 128-129: Upper Memory: Page 10H */
+ u8 tx_disable; /* Reg 130: Tx disable, 0b=enabled, 1b=disabled */
+ u8 resv1[125]; /* Reg 131-255 */
+};
+
+struct qsfp_cmis_upper_page_11_s {
+ u8 resv0[7];
+ u8 tx_fault;
+ u8 tx_los;
+ u8 resv1[10];
+ u8 rx_los;
+ u8 resv2[6];
+ u8 tx_power[16];
+ u8 tx_bias[16];
+ u8 rx_power[16];
+ u8 resv3[54];
+};
+
+struct qsfp_cmis_info_s {
+ struct qsfp_cmis_lower_page_00_s lower_page_00;
+ struct qsfp_cmis_upper_page_00_s upper_page_00;
+ struct qsfp_cmis_upper_page_01_s upper_page_01;
+ struct qsfp_cmis_upper_page_02_s upper_page_02;
+ struct qsfp_cmis_upper_page_10_s upper_page_10;
+ struct qsfp_cmis_upper_page_11_s upper_page_11;
+};
+
+struct qsfp_cmis_comm_power_s {
+ u32 chl_power[QSFP_CMIS_MAX_CHANNEL_NUM];
+};
+
+struct qsfp_cmis_wire_info_s {
+ struct qsfp_cmis_comm_power_s rx_power;
+ u8 rx_los;
+ u8 resv0[3];
+};
+
+struct mgmt_tlv_info {
+ u16 type;
+ u16 length;
+ u8 value[];
+};
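
The MGMT_TLV_* accessors and struct mgmt_tlv_info above describe the TLV stream carried in the xsfp tlv_buf responses below. A hedged walking sketch, assuming mag_mpu_cmd_defs.h is included; the assumption that 'length' counts only the value bytes after the 4-byte type/length header is mine, since the header itself does not state it:

static const struct mgmt_tlv_info *mgmt_tlv_next(const struct mgmt_tlv_info *tlv)
{
	/* assumption: 'length' covers only the value[] bytes */
	return (const struct mgmt_tlv_info *)((const u8 *)tlv + sizeof(*tlv) +
					      tlv->length);
}

static void mgmt_tlv_walk(const u8 *buf, u16 buf_len)
{
	const struct mgmt_tlv_info *tlv = (const struct mgmt_tlv_info *)buf;
	const u8 *end = buf + buf_len;

	while ((const u8 *)tlv + sizeof(*tlv) <= end &&
	       tlv->type != MGMT_TLV_TYPE_END) {
		switch (tlv->type) {
		case MAG_XSFP_TYPE_PAGE:
			/* tlv->value holds one raw CMIS page */
			break;
		case MAG_XSFP_TYPE_WIRE_TYPE:
			/* tlv->value holds a wire-type word */
			break;
		default:
			break;
		}
		tlv = mgmt_tlv_next(tlv);
	}
}
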
+
+struct mag_cmd_set_xsfp_tlv_req {
+ struct mgmt_msg_head head;
+
+ u8 tlv_buf[];
+};
+
+struct mag_cmd_set_xsfp_tlv_rsp {
+ struct mgmt_msg_head head;
+};
+
+struct mag_cmd_get_xsfp_tlv_req {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd;
+ u16 rsp_buf_len;
+};
+
+struct mag_cmd_get_xsfp_tlv_rsp {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd[3];
+
+ u8 tlv_buf[];
+};
+
+struct parse_tlv_info {
+ u8 tlv_page_info[XSFP_CMIS_INFO_MAX_SIZE + 1];
+ u32 tlv_page_info_len;
+ u32 tlv_page_num[XSFP_CMIS_PARSE_PAGE_NUM];
+ u32 wire_type;
+ u8 id;
+};
+
+struct drv_mag_cmd_get_xsfp_tlv_rsp {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd[3];
+
+ u8 tlv_buf[XSFP_CMIS_INFO_MAX_SIZE];
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd.h b/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd.h
new file mode 100644
index 0000000..8e0fa89
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd.h
@@ -0,0 +1,174 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C), 2001-2011, Huawei Tech. Co., Ltd.
+ * File Name : nic_mpu_cmd.h
+ * Version : Initial Draft
+ * Created : 2019/4/25
+ * Last Modified :
+ * Description : NIC Commands between Driver and MPU
+ * Function List :
+ */
+
+#ifndef NIC_MPU_CMD_H
+#define NIC_MPU_CMD_H
+
+/* Commands between NIC and MPU */
+enum hinic3_nic_cmd {
+ HINIC3_NIC_CMD_VF_REGISTER = 0, /* only for PFD and VFD */
+
+ /* FUNC CFG */
+ HINIC3_NIC_CMD_SET_FUNC_TBL = 5,
+ HINIC3_NIC_CMD_SET_VPORT_ENABLE,
+ HINIC3_NIC_CMD_SET_RX_MODE,
+ HINIC3_NIC_CMD_SQ_CI_ATTR_SET,
+ HINIC3_NIC_CMD_GET_VPORT_STAT,
+ HINIC3_NIC_CMD_CLEAN_VPORT_STAT,
+ HINIC3_NIC_CMD_CLEAR_QP_RESOURCE,
+ HINIC3_NIC_CMD_CFG_FLEX_QUEUE,
+ /* LRO CFG */
+ HINIC3_NIC_CMD_CFG_RX_LRO,
+ HINIC3_NIC_CMD_CFG_LRO_TIMER,
+ HINIC3_NIC_CMD_FEATURE_NEGO,
+ HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE,
+
+ HINIC3_NIC_CMD_CACHE_OUT_QP_RES,
+ HINIC3_NIC_CMD_SET_FUNC_ER_FWD_ID,
+
+ /** MAC & VLAN CFG & VXLAN CFG */
+ HINIC3_NIC_CMD_GET_MAC = 20,
+ HINIC3_NIC_CMD_SET_MAC,
+ HINIC3_NIC_CMD_DEL_MAC,
+ HINIC3_NIC_CMD_UPDATE_MAC,
+ HINIC3_NIC_CMD_GET_ALL_DEFAULT_MAC,
+
+ HINIC3_NIC_CMD_CFG_FUNC_VLAN,
+ HINIC3_NIC_CMD_SET_VLAN_FILTER_EN,
+ HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD,
+ HINIC3_NIC_CMD_SMAC_CHECK_STATE,
+ HINIC3_NIC_CMD_OUTBAND_SET_FUNC_VLAN,
+
+ HINIC3_NIC_CMD_CFG_VXLAN_PORT,
+ HINIC3_NIC_CMD_RX_RATE_CFG,
+ HINIC3_NIC_CMD_WR_ORDERING_CFG,
+
+ /* SR-IOV */
+ HINIC3_NIC_CMD_CFG_VF_VLAN = 40,
+ HINIC3_NIC_CMD_SET_SPOOPCHK_STATE,
+ /* RATE LIMIT */
+ HINIC3_NIC_CMD_SET_MAX_MIN_RATE,
+
+ /* RSS CFG */
+ HINIC3_NIC_CMD_RSS_CFG = 60,
+	HINIC3_NIC_CMD_RSS_TEMP_MGR, /* TODO: delete after the nego cmd is implemented */
+ HINIC3_NIC_CMD_GET_RSS_CTX_TBL, /* TODO: delete: move to ucode cmd */
+ HINIC3_NIC_CMD_CFG_RSS_HASH_KEY,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_ENGINE,
+ HINIC3_NIC_CMD_SET_RSS_CTX_TBL_INTO_FUNC,
+ /* IP checksum error packets, enable rss quadruple hash */
+ HINIC3_NIC_CMD_IPCS_ERR_RSS_ENABLE_OP = 66,
+ HINIC3_NIC_CMD_GTP_INNER_PARSE_STATUS,
+
+ /* PPA/FDIR */
+ HINIC3_NIC_CMD_ADD_TC_FLOW = 80,
+ HINIC3_NIC_CMD_DEL_TC_FLOW,
+ HINIC3_NIC_CMD_GET_TC_FLOW,
+ HINIC3_NIC_CMD_FLUSH_TCAM,
+ HINIC3_NIC_CMD_CFG_TCAM_BLOCK,
+ HINIC3_NIC_CMD_ENABLE_TCAM,
+ HINIC3_NIC_CMD_GET_TCAM_BLOCK,
+ HINIC3_NIC_CMD_CFG_PPA_TABLE_ID,
+ HINIC3_NIC_CMD_SET_PPA_EN = 88,
+ HINIC3_NIC_CMD_CFG_PPA_MODE,
+ HINIC3_NIC_CMD_CFG_PPA_FLUSH,
+ HINIC3_NIC_CMD_SET_FDIR_STATUS,
+ HINIC3_NIC_CMD_GET_PPA_COUNTER,
+ HINIC3_NIC_CMD_SET_FUNC_FLOW_BIFUR_ENABLE,
+ HINIC3_NIC_CMD_SET_BOND_MASK,
+ HINIC3_NIC_CMD_GET_BLOCK_TC_FLOWS,
+ HINIC3_NIC_CMD_GET_BOND_MASK,
+
+ /* PORT CFG */
+ HINIC3_NIC_CMD_SET_PORT_ENABLE = 100,
+ HINIC3_NIC_CMD_CFG_PAUSE_INFO,
+
+ HINIC3_NIC_CMD_SET_PORT_CAR,
+ HINIC3_NIC_CMD_SET_ER_DROP_PKT,
+
+ HINIC3_NIC_CMD_VF_COS,
+ HINIC3_NIC_CMD_SETUP_COS_MAPPING,
+ HINIC3_NIC_CMD_SET_ETS,
+ HINIC3_NIC_CMD_SET_PFC,
+ HINIC3_NIC_CMD_QOS_ETS,
+ HINIC3_NIC_CMD_QOS_PFC,
+ HINIC3_NIC_CMD_QOS_DCB_STATE,
+ HINIC3_NIC_CMD_QOS_PORT_CFG,
+ HINIC3_NIC_CMD_QOS_MAP_CFG,
+ HINIC3_NIC_CMD_FORCE_PKT_DROP,
+ HINIC3_NIC_CMD_CFG_TX_PROMISC_SKIP = 114,
+ HINIC3_NIC_CMD_SET_PORT_FLOW_BIFUR_ENABLE = 117,
+ HINIC3_NIC_CMD_TX_PAUSE_EXCP_NOTICE = 118,
+ HINIC3_NIC_CMD_INQUIRT_PAUSE_CFG = 119,
+
+ /* MISC */
+ HINIC3_NIC_CMD_BIOS_CFG = 120,
+ HINIC3_NIC_CMD_SET_FIRMWARE_CUSTOM_PACKETS_MSG,
+
+ /* BOND */
+ HINIC3_NIC_CMD_BOND_DEV_CREATE = 134,
+ HINIC3_NIC_CMD_BOND_DEV_DELETE,
+ HINIC3_NIC_CMD_BOND_DEV_OPEN_CLOSE,
+ HINIC3_NIC_CMD_BOND_INFO_GET,
+ HINIC3_NIC_CMD_BOND_ACTIVE_INFO_GET,
+ HINIC3_NIC_CMD_BOND_ACTIVE_NOTICE,
+
+ /* DFX */
+ HINIC3_NIC_CMD_GET_SM_TABLE = 140,
+ HINIC3_NIC_CMD_RD_LINE_TBL,
+
+ HINIC3_NIC_CMD_SET_UCAPTURE_OPT = 160, /* TODO: move to roce */
+ HINIC3_NIC_CMD_SET_VHD_CFG,
+
+ /* OUT OF BAND */
+ HINIC3_NIC_CMD_GET_OUTBAND_CFG = 170,
+ HINIC3_NIC_CMD_OUTBAND_CFG_NOTICE,
+
+ /* TODO: move to HILINK */
+ HINIC3_NIC_CMD_GET_PORT_STAT = 200,
+ HINIC3_NIC_CMD_CLEAN_PORT_STAT,
+ HINIC3_NIC_CMD_CFG_LOOPBACK_MODE,
+ HINIC3_NIC_CMD_GET_SFP_QSFP_INFO,
+ HINIC3_NIC_CMD_SET_SFP_STATUS,
+ HINIC3_NIC_CMD_GET_LIGHT_MODULE_ABS,
+ HINIC3_NIC_CMD_GET_LINK_INFO,
+ HINIC3_NIC_CMD_CFG_AN_TYPE,
+ HINIC3_NIC_CMD_GET_PORT_INFO,
+ HINIC3_NIC_CMD_SET_LINK_SETTINGS,
+ HINIC3_NIC_CMD_ACTIVATE_BIOS_LINK_CFG,
+ HINIC3_NIC_CMD_RESTORE_LINK_CFG,
+ HINIC3_NIC_CMD_SET_LINK_FOLLOW,
+ HINIC3_NIC_CMD_GET_LINK_STATE,
+ HINIC3_NIC_CMD_LINK_STATUS_REPORT,
+ HINIC3_NIC_CMD_CABLE_PLUG_EVENT,
+ HINIC3_NIC_CMD_LINK_ERR_EVENT,
+ HINIC3_NIC_CMD_SET_LED_STATUS,
+
+ /* mig */
+ HINIC3_NIC_CMD_MIG_SET_CEQ_CTRL = 230,
+ HINIC3_NIC_CMD_MIG_CFG_MSIX_INFO,
+ HINIC3_NIC_CMD_MIG_CFG_FUNC_VAT_TBL,
+ HINIC3_NIC_CMD_MIG_GET_VF_INFO,
+ HINIC3_NIC_CMD_MIG_CHK_MBX_EMPTY,
+ HINIC3_NIC_CMD_MIG_SET_VPORT_ENABLE,
+ HINIC3_NIC_CMD_MIG_CFG_SQ_CI,
+ HINIC3_NIC_CMD_MIG_CFG_RSS_TBL,
+ HINIC3_NIC_CMD_MIG_CFG_MAC_TBL,
+ HINIC3_NIC_CMD_MIG_TMP_SET_CMDQ_CTX,
+
+ HINIC3_OSHR_CMD_ACTIVE_FUNCTION = 240,
+ HINIC3_NIC_CMD_GET_RQ_INFO = 241,
+
+ HINIC3_NIC_CMD_MAX = 256,
+};
+
+#endif /* NIC_MPU_CMD_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd_defs.h b/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd_defs.h
new file mode 100644
index 0000000..ee6bf20
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/nic_mpu_cmd_defs.h
@@ -0,0 +1,1420 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2024 Huawei Technologies Co., Ltd */
+
+#ifndef NIC_MPU_CMD_DEFS_H
+#define NIC_MPU_CMD_DEFS_H
+
+#include "nic_cfg_comm.h"
+#include "mpu_cmd_base_defs.h"
+
+#ifndef ETH_ALEN
+#define ETH_ALEN 6
+#endif
+
+#define HINIC3_CMD_OP_SET 1
+#define HINIC3_CMD_OP_GET 0
+
+#define HINIC3_CMD_OP_ADD 1
+#define HINIC3_CMD_OP_DEL 0
+
+#define NIC_TCAM_BLOCK_LARGE_NUM 256
+#define NIC_TCAM_BLOCK_LARGE_SIZE 16
+
+#define TRAFFIC_BIFUR_MODEL_TYPE 2
+
+#define NIC_TCAM_FLOW_BIFUR_FLAG (1 << 0)
+
+#ifndef BIT
+#define BIT(n) (1UL << (n))
+#endif
+
+enum nic_feature_cap {
+ NIC_F_CSUM = BIT(0),
+ NIC_F_SCTP_CRC = BIT(1),
+ NIC_F_TSO = BIT(2),
+ NIC_F_LRO = BIT(3),
+ NIC_F_UFO = BIT(4),
+ NIC_F_RSS = BIT(5),
+ NIC_F_RX_VLAN_FILTER = BIT(6),
+ NIC_F_RX_VLAN_STRIP = BIT(7),
+ NIC_F_TX_VLAN_INSERT = BIT(8),
+ NIC_F_VXLAN_OFFLOAD = BIT(9),
+ NIC_F_IPSEC_OFFLOAD = BIT(10),
+ NIC_F_FDIR = BIT(11),
+ NIC_F_PROMISC = BIT(12),
+ NIC_F_ALLMULTI = BIT(13),
+ NIC_F_XSFP_REPORT = BIT(14),
+ NIC_F_VF_MAC = BIT(15),
+ NIC_F_RATE_LIMIT = BIT(16),
+ NIC_F_RXQ_RECOVERY = BIT(17),
+};
+
+#define NIC_F_ALL_MASK 0x3FFFF /* enable all features */
+
+struct hinic3_mgmt_msg_head {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+};
+
+#define NIC_MAX_FEATURE_QWORD 4
+struct hinic3_cmd_feature_nego {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: set, 0: get */
+ u8 rsvd;
+ u64 s_feature[NIC_MAX_FEATURE_QWORD];
+};
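
A brief sketch of how the negotiation message above might be filled to request a subset of enum nic_feature_cap; glb_func_id is a hypothetical caller-supplied function index, and the include name is assumed from this patch:

#include <linux/types.h>
#include <linux/string.h>
#include "nic_mpu_cmd_defs.h"	/* header added by this patch (include path assumed) */

static void nic_build_feature_nego(struct hinic3_cmd_feature_nego *nego,
				   u16 glb_func_id, bool set)
{
	memset(nego, 0, sizeof(*nego));
	nego->func_id = glb_func_id;
	nego->opcode = set ? HINIC3_CMD_OP_SET : HINIC3_CMD_OP_GET;

	if (set)	/* requested capability bitmap lives in s_feature[0] */
		nego->s_feature[0] = NIC_F_CSUM | NIC_F_TSO | NIC_F_RSS |
				     NIC_F_RX_VLAN_STRIP;
}
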
+
+struct hinic3_port_mac_set {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 mac[ETH_ALEN];
+};
+
+struct hinic3_port_mac_update {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 old_mac[ETH_ALEN];
+ u16 rsvd2;
+ u8 new_mac[ETH_ALEN];
+};
+
+struct hinic3_vport_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+struct hinic3_port_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+#define HINIC3_SET_PORT_CAR_PROFILE 0
+#define HINIC3_SET_PORT_CAR_STATE 1
+#define HINIC3_GET_PORT_CAR_LIMIT_SPEED 2
+
+struct hinic3_port_car_info {
+ u32 cir; /* unit: kbps, range:[1,400*1000*1000], i.e. 1Kbps~400Gbps(400M*kbps) */
+ u32 xir; /* unit: kbps, range:[1,400*1000*1000], i.e. 1Kbps~400Gbps(400M*kbps) */
+ u32 cbs; /* unit: Byte, range:[1,320*1000*1000], i.e. 1byte~2560Mbit */
+ u32 xbs; /* unit: Byte, range:[1,320*1000*1000], i.e. 1byte~2560Mbit */
+};
+
+struct hinic3_cmd_set_port_car {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode; /* 0--set car profile, 1--set car state */
+ u8 state; /* 0--disable, 1--enable */
+ u8 level;
+
+ struct hinic3_port_car_info car;
+};
+
+struct hinic3_cmd_clear_qp_resource {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_cmd_cache_out_qp_resource {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_port_stats_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_vport_stats {
+ u64 tx_unicast_pkts_vport;
+ u64 tx_unicast_bytes_vport;
+ u64 tx_multicast_pkts_vport;
+ u64 tx_multicast_bytes_vport;
+ u64 tx_broadcast_pkts_vport;
+ u64 tx_broadcast_bytes_vport;
+
+ u64 rx_unicast_pkts_vport;
+ u64 rx_unicast_bytes_vport;
+ u64 rx_multicast_pkts_vport;
+ u64 rx_multicast_bytes_vport;
+ u64 rx_broadcast_pkts_vport;
+ u64 rx_broadcast_bytes_vport;
+
+ u64 tx_discard_vport;
+ u64 rx_discard_vport;
+ u64 tx_err_vport;
+ u64 rx_err_vport;
+};
+
+struct hinic3_phy_fpga_port_stats {
+ u64 mac_rx_total_octs_port;
+ u64 mac_tx_total_octs_port;
+ u64 mac_rx_under_frame_pkts_port;
+ u64 mac_rx_frag_pkts_port;
+ u64 mac_rx_64_oct_pkts_port;
+ u64 mac_rx_127_oct_pkts_port;
+ u64 mac_rx_255_oct_pkts_port;
+ u64 mac_rx_511_oct_pkts_port;
+ u64 mac_rx_1023_oct_pkts_port;
+ u64 mac_rx_max_oct_pkts_port;
+ u64 mac_rx_over_oct_pkts_port;
+ u64 mac_tx_64_oct_pkts_port;
+ u64 mac_tx_127_oct_pkts_port;
+ u64 mac_tx_255_oct_pkts_port;
+ u64 mac_tx_511_oct_pkts_port;
+ u64 mac_tx_1023_oct_pkts_port;
+ u64 mac_tx_max_oct_pkts_port;
+ u64 mac_tx_over_oct_pkts_port;
+ u64 mac_rx_good_pkts_port;
+ u64 mac_rx_crc_error_pkts_port;
+ u64 mac_rx_broadcast_ok_port;
+ u64 mac_rx_multicast_ok_port;
+ u64 mac_rx_mac_frame_ok_port;
+ u64 mac_rx_length_err_pkts_port;
+ u64 mac_rx_vlan_pkts_port;
+ u64 mac_rx_pause_pkts_port;
+ u64 mac_rx_unknown_mac_frame_port;
+ u64 mac_tx_good_pkts_port;
+ u64 mac_tx_broadcast_ok_port;
+ u64 mac_tx_multicast_ok_port;
+ u64 mac_tx_underrun_pkts_port;
+ u64 mac_tx_mac_frame_ok_port;
+ u64 mac_tx_vlan_pkts_port;
+ u64 mac_tx_pause_pkts_port;
+};
+
+struct hinic3_port_stats {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ struct hinic3_phy_fpga_port_stats stats;
+};
+
+struct hinic3_cmd_vport_stats {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u32 stats_size;
+ u32 rsvd1;
+ struct hinic3_vport_stats stats;
+ u64 rsvd2[6];
+};
+
+struct hinic3_cmd_qpn {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 base_qpn;
+};
+
+enum hinic3_func_tbl_cfg_bitmap {
+ FUNC_CFG_INIT,
+ FUNC_CFG_RX_BUF_SIZE,
+ FUNC_CFG_MTU,
+};
+
+struct hinic3_func_tbl_cfg {
+ u16 rx_wqe_buf_size;
+ u16 mtu;
+ u32 rsvd[9];
+};
+
+struct hinic3_cmd_set_func_tbl {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+
+ u32 cfg_bitmap;
+ struct hinic3_func_tbl_cfg tbl_cfg;
+};
+
+struct hinic3_cmd_cons_idx_attr {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_idx;
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u32 rsvd;
+ u64 ci_addr;
+};
+
+union sm_tbl_args {
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } mac_table_arg;
+ struct {
+ u32 er_id;
+ u32 vlan_id;
+ } vlan_elb_table_arg;
+ struct {
+ u32 func_id;
+ } vlan_filter_arg;
+ struct {
+ u32 mc_id;
+ } mc_elb_arg;
+ struct {
+ u32 func_id;
+ } func_tbl_arg;
+ struct {
+ u32 port_id;
+ } port_tbl_arg;
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } fdir_io_table_arg;
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } flexq_table_arg;
+ u32 args[4];
+};
+
+#define DFX_SM_TBL_BUF_MAX (768)
+
+struct nic_cmd_dfx_sm_table {
+ struct hinic3_mgmt_msg_head msg_head;
+ u32 tbl_type;
+ union sm_tbl_args args;
+ u8 tbl_buf[DFX_SM_TBL_BUF_MAX];
+};
+
+struct hinic3_cmd_vlan_offload {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 vlan_offload;
+ u8 rsvd1[5];
+};
+
+/* ucode capture cfg info */
+struct nic_cmd_capture_info {
+ struct hinic3_mgmt_msg_head msg_head;
+ u32 op_type;
+ u32 func_port;
+ u32 is_en_trx;
+ u32 offset_cos;
+ u32 data_vlan;
+};
+
+struct hinic3_cmd_lro_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 lro_ipv4_en;
+ u8 lro_ipv6_en;
+ u8 lro_max_pkt_len; /* unit is 1K */
+ u8 resv2[13];
+};
+
+struct hinic3_cmd_lro_timer {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 opcode; /* 1: set timer value, 0: get timer value */
+ u8 rsvd1;
+ u16 rsvd2;
+ u32 timer;
+};
+
+struct hinic3_cmd_local_lro_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 0: get state, 1: set state */
+ u8 state; /* 0: disable, 1: enable */
+};
+
+struct hinic3_cmd_gtp_inner_parse_status {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 0: get state, 1: set state */
+ u8 status; /* 0: disable, 1: enable */
+};
+
+struct hinic3_cmd_vf_vlan_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 vlan_id;
+ u8 qos;
+ u8 rsvd2[5];
+};
+
+struct hinic3_cmd_spoofchk_set {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 state;
+ u8 rsvd1;
+};
+
+struct hinic3_cmd_tx_rate_cfg {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 rsvd1;
+ u8 direct;
+ u32 min_rate;
+ u32 max_rate;
+ u8 rsvd2[8];
+};
+
+struct hinic3_cmd_port_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+ u16 rsvd2;
+ u32 rsvd3[4];
+};
+
+struct hinic3_cmd_register_vf {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 op_register; /* 0 - unregister, 1 - register */
+ u8 rsvd1[3];
+ u32 support_extra_feature;
+ u8 rsvd2[32];
+};
+
+struct hinic3_cmd_link_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 state;
+ u16 rsvd1;
+};
+
+struct hinic3_cmd_vlan_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 outband_defvid_flag;
+ u16 vlan_id;
+ u8 blacklist_flag;
+ u8 rsvd2;
+};
+
+#define VLAN_BLACKLIST_ENABLE 1
+#define VLAN_BLACKLIST_DISABLE 0
+
+struct hinic3_cmd_vxlan_port_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 cfg_mode;
+ u16 vxlan_port;
+ u16 rsvd2;
+};
+
+/* set vlan filter */
+struct hinic3_cmd_set_vlan_filter {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 resvd[2];
+ u32 vlan_filter_ctrl; /* bit0:vlan filter en; bit1:broadcast_filter_en */
+};
+
+struct hinic3_cmd_link_ksettings_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+
+ u32 valid_bitmap;
+ u8 speed; /* enum nic_speed_level */
+ u8 autoneg; /* 0 - off, 1 - on */
+ u8 fec; /* 0 - RSFEC, 1 - BASEFEC, 2 - NOFEC */
+ u8 rsvd2[21]; /* reserved for duplex, port, etc. */
+};
+
+struct mpu_lt_info {
+ u8 node;
+ u8 inst;
+ u8 entry_size;
+ u8 rsvd;
+ u32 lt_index;
+ u32 offset;
+ u32 len;
+};
+
+struct nic_mpu_lt_opera {
+ struct hinic3_mgmt_msg_head msg_head;
+ struct mpu_lt_info net_lt_cmd;
+ u8 data[100];
+};
+
+struct hinic3_force_pkt_drop {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port;
+ u8 rsvd1[3];
+};
+
+struct hinic3_rx_mode_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 rx_mode;
+};
+
+/* rss */
+struct hinic3_rss_context_table {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 context;
+};
+
+struct hinic3_cmd_rss_engine_type {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 hash_engine;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_hash_key {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 key[NIC_RSS_KEY_SIZE];
+};
+
+struct hinic3_rss_indir_table {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 indir[NIC_RSS_INDIR_SIZE];
+};
+
+#define NIC_RSS_CMD_TEMP_ALLOC 0x01
+#define NIC_RSS_CMD_TEMP_FREE 0x02
+
+struct hinic3_rss_template_mgmt {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 cmd;
+ u8 template_id;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 rss_en;
+ u8 rq_priority_number;
+ u8 prio_tc[NIC_DCB_COS_MAX];
+ u16 num_qps;
+ u16 rsvd1;
+};
+
+struct hinic3_dcb_state {
+ u8 dcb_on;
+ u8 default_cos;
+ u8 trust;
+ u8 rsvd1;
+ u8 pcp2cos[NIC_DCB_UP_MAX];
+ u8 dscp2cos[64];
+ u32 rsvd2[7];
+};
+
+struct hinic3_cmd_vf_dcb_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ struct hinic3_dcb_state state;
+};
+
+struct hinic3_up_ets_cfg { /* to be deleted */
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+
+ u8 cos_tc[NIC_DCB_COS_MAX];
+ u8 tc_bw[NIC_DCB_TC_MAX];
+ u8 cos_prio[NIC_DCB_COS_MAX];
+ u8 cos_bw[NIC_DCB_COS_MAX];
+ u8 tc_prio[NIC_DCB_TC_MAX];
+};
+
+#define CMD_QOS_ETS_COS_TC BIT(0)
+#define CMD_QOS_ETS_TC_BW BIT(1)
+#define CMD_QOS_ETS_COS_PRIO BIT(2)
+#define CMD_QOS_ETS_COS_BW BIT(3)
+#define CMD_QOS_ETS_TC_PRIO BIT(4)
+#define CMD_QOS_ETS_TC_RATELIMIT BIT(5)
+
+struct hinic3_cmd_ets_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 1 - set, 0 - get */
+ /* bit0 - cos_tc, bit1 - tc_bw, bit2 - cos_prio, bit3 - cos_bw, bit4 - tc_prio */
+ u8 cfg_bitmap;
+ u8 rsvd;
+
+ u8 cos_tc[NIC_DCB_COS_MAX];
+ u8 tc_bw[NIC_DCB_TC_MAX];
+ u8 cos_prio[NIC_DCB_COS_MAX]; /* 0 - DWRR, 1 - STRICT */
+ u8 cos_bw[NIC_DCB_COS_MAX];
+ u8 tc_prio[NIC_DCB_TC_MAX]; /* 0 - DWRR, 1 - STRICT */
+ u8 rate_limit[NIC_DCB_TC_MAX];
+};
+
+struct hinic3_cmd_set_dcb_state {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 op_code; /* 0 - get dcb state, 1 - set dcb state */
+ u8 state; /* 0 - disable, 1 - enable dcb */
+ u8 port_state; /* 0 - disable, 1 - enable dcb */
+ u8 rsvd[7];
+};
+
+#define PFC_BIT_MAP_NUM 8
+struct hinic3_cmd_set_pfc {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 0:get 1: set pfc_en 2: set pfc_bitmap 3: set all */
+	u8 pfc_en; /* pfc_en and pfc_bitmap must be set together */
+ u8 pfc_bitmap;
+ u8 rsvd[4];
+};
+
+#define CMD_QOS_PORT_TRUST BIT(0)
+#define CMD_QOS_PORT_DFT_COS BIT(1)
+struct hinic3_cmd_qos_port_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 0 - get, 1 - set */
+ u8 cfg_bitmap; /* bit0 - trust, bit1 - dft_cos */
+ u8 rsvd0;
+
+ u8 trust;
+ u8 dft_cos;
+ u8 rsvd1[18];
+};
+
+#define MAP_COS_MAX_NUM 8
+#define CMD_QOS_MAP_PCP2COS BIT(0)
+#define CMD_QOS_MAP_DSCP2COS BIT(1)
+struct hinic3_cmd_qos_map_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 op_code;
+ u8 cfg_bitmap; /* bit0 - pcp2cos, bit1 - dscp2cos */
+ u16 rsvd0;
+
+ u8 pcp2cos[8]; /* 8 must be configured together */
+	/* If a dscp2cos entry is set to 0xFF, the MPU ignores that DSCP priority.
+	 * Multiple mappings between DSCP values and CoS values can be configured at a time.
+	 */
+ u8 dscp2cos[64];
+ u32 rsvd1[4];
+};
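
Following the 0xFF convention documented above, a sketch (assuming the header above is included) that remaps a single DSCP value and leaves every other DSCP mapping untouched; dscp must be below 64:

static void nic_build_dscp2cos_one(struct hinic3_cmd_qos_map_cfg *map_cfg,
				   u8 dscp, u8 cos)
{
	memset(map_cfg, 0, sizeof(*map_cfg));
	map_cfg->op_code = HINIC3_CMD_OP_SET;
	map_cfg->cfg_bitmap = CMD_QOS_MAP_DSCP2COS;

	/* 0xFF entries are ignored by the MPU, so only 'dscp' is changed */
	memset(map_cfg->dscp2cos, 0xFF, sizeof(map_cfg->dscp2cos));
	map_cfg->dscp2cos[dscp] = cos;
}
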
+
+struct hinic3_cos_up_map {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 cos_valid_mask; /* every bit indicate index of map is valid 1 or not 0 */
+ u16 rsvd1;
+
+ /* user priority in cos(index:cos, value: up pri) */
+ u8 map[NIC_DCB_UP_MAX];
+};
+
+struct hinic3_cmd_pause_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u16 rsvd1;
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+ u8 rsvd2[5];
+};
+
+struct nic_cmd_pause_inquiry_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 valid;
+
+ u32 type; /* 1: set, 2: get */
+
+ u32 cos_id;
+
+ u32 rx_inquiry_pause_drop_pkts_en;
+ u32 rx_inquiry_pause_period_ms;
+ u32 rx_inquiry_pause_times;
+ /* rx pause Detection Threshold, Default PAUSE_FRAME_THD_10G/25G/40G/100 */
+ u32 rx_inquiry_pause_frame_thd;
+ u32 rx_inquiry_tx_total_pkts;
+
+ u32 tx_inquiry_pause_en; /* tx pause detect enable */
+ u32 tx_inquiry_pause_period_ms; /* tx pause Default Detection Period 200ms */
+ u32 tx_inquiry_pause_times; /* tx pause Default Times Period 5 */
+ u32 tx_inquiry_pause_frame_thd; /* tx pause Detection Threshold */
+ u32 tx_inquiry_rx_total_pkts;
+ u32 rsvd[3];
+};
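
Given the per-field comments above (200 ms default detection period, 5 detection rounds), a sketch of a 'set' request that enables only the tx-pause storm check. The meaning of 'valid' and the threshold defaults are not defined in this header, so those assignments are assumptions:

static void nic_build_tx_pause_inquiry(struct nic_cmd_pause_inquiry_cfg *cfg)
{
	memset(cfg, 0, sizeof(*cfg));
	cfg->valid = 1;	/* assumption: nonzero marks the request as valid */
	cfg->type = 1;	/* 1: set, 2: get (per the comment above) */

	cfg->tx_inquiry_pause_en = 1;
	cfg->tx_inquiry_pause_period_ms = 200;	/* default detection period */
	cfg->tx_inquiry_pause_times = 5;	/* default detection rounds */
}
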
+
+/* pfc/pause Storm TX exception reporting */
+struct nic_cmd_tx_pause_notice {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 tx_pause_except; /* 1: abnormality,0: normal */
+ u32 except_level;
+ u32 rsvd;
+};
+
+#define HINIC3_CMD_OP_FREE 0
+#define HINIC3_CMD_OP_ALLOC 1
+
+struct hinic3_cmd_cfg_qps {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: alloc qp, 0: free qp */
+ u8 rsvd1;
+ u16 num_qps;
+ u16 rsvd2;
+};
+
+struct hinic3_cmd_led_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port;
+ u8 type;
+ u8 mode;
+ u8 rsvd1;
+};
+
+struct hinic3_cmd_port_loopback {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u8 mode;
+ u8 en;
+ u32 rsvd1[2];
+};
+
+struct hinic3_cmd_get_light_module_abs {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 abs_status; /* 0:present, 1:absent */
+ u8 rsv[2];
+};
+
+#define STD_SFP_INFO_MAX_SIZE 640
+struct hinic3_cmd_get_std_sfp_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 wire_type;
+ u16 eeprom_len;
+ u32 rsvd;
+ u8 sfp_info[STD_SFP_INFO_MAX_SIZE];
+};
+
+struct hinic3_cable_plug_event {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 plugged; /* 0: unplugged, 1: plugged */
+ u8 port_id;
+};
+
+struct nic_cmd_mac_info {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 valid_bitmap;
+ u16 rsvd;
+
+ u8 host_id[32];
+ u8 port_id[32];
+ u8 mac_addr[192];
+};
+
+struct nic_cmd_set_tcam_enable {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 tcam_enable;
+ u8 rsvd1;
+ u32 rsvd2;
+};
+
+struct nic_cmd_set_fdir_status {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 pkt_type_en;
+ u8 pkt_type;
+ u8 qid;
+ u8 rsvd2;
+};
+
+#define HINIC3_TCAM_BLOCK_ENABLE 1
+#define HINIC3_TCAM_BLOCK_DISABLE 0
+#define HINIC3_MAX_TCAM_RULES_NUM 4096
+
+/* tcam block type, according to tcam block size */
+enum {
+ NIC_TCAM_BLOCK_TYPE_LARGE = 0, /* block_size: 16 */
+ NIC_TCAM_BLOCK_TYPE_SMALL, /* block_size: 0 */
+ NIC_TCAM_BLOCK_TYPE_MAX
+};
+
+/* alloc tcam block input struct */
+struct nic_cmd_ctrl_tcam_block_in {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u8 alloc_en; /* 0: Releases the allocated TCAM block. 1: Applies for a new TCAM block */
+ /* 0: 16 size tcam block, 1: 0 size tcam block, other reserved. */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ /* Size of the block that the driver wants to allocate
+ * Interface returned by the UP to the driver,
+ * indicating the size of the allocated TCAM block supported by the UP
+ */
+ u16 alloc_block_num;
+};
+
+/* alloc tcam block output struct */
+struct nic_cmd_ctrl_tcam_block_out {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u8 alloc_en; /* 0: Releases the allocated TCAM block. 1: Applies for a new TCAM block */
+ /* 0: 16 size tcam block, 1: 0 size tcam block, other reserved. */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ /* Size of the block that the driver wants to allocate
+ * Interface returned by the UP to the driver,
+ * indicating the size of the allocated TCAM block supported by the UP
+ */
+ u16 mpu_alloc_block_size;
+};
+
+struct nic_cmd_flush_tcam_rules {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u16 rsvd;
+};
+
+struct nic_cmd_dfx_fdir_tcam_block_table {
+ struct hinic3_mgmt_msg_head head;
+ u8 tcam_type;
+ u8 valid;
+ u16 tcam_block_index;
+ u16 use_function_id;
+ u16 rsvd;
+};
+
+struct tcam_result {
+ u32 qid;
+ u32 rsvd;
+};
+
+#define TCAM_FLOW_KEY_SIZE (44)
+
+struct tcam_key_x_y {
+ u8 x[TCAM_FLOW_KEY_SIZE];
+ u8 y[TCAM_FLOW_KEY_SIZE];
+};
+
+struct nic_tcam_cfg_rule {
+ u32 index;
+ struct tcam_result data;
+ struct tcam_key_x_y key;
+};
+
+#define TCAM_RULE_FDIR_TYPE 0
+#define TCAM_RULE_PPA_TYPE 1
+
+struct nic_cmd_fdir_add_rule {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type;
+ u8 fdir_ext; /* 0x1: flow bifur en bit */
+ struct nic_tcam_cfg_rule rule;
+};
+
+struct nic_cmd_fdir_del_rules {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type;
+ u8 rsvd;
+ u32 index_start;
+ u32 index_num;
+};
+
+struct nic_cmd_fdir_get_rule {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 index;
+ u8 valid;
+ u8 type;
+ u16 rsvd;
+ struct tcam_key_x_y key;
+ struct tcam_result data;
+ u64 packet_count;
+ u64 byte_count;
+};
+
+struct nic_cmd_fdir_get_block_rules {
+ struct hinic3_mgmt_msg_head head;
+ u8 tcam_block_type; // only NIC_TCAM_BLOCK_TYPE_LARGE
+ u8 tcam_table_type; // TCAM_RULE_PPA_TYPE or TCAM_RULE_FDIR_TYPE
+ u16 tcam_block_index;
+ u8 valid[NIC_TCAM_BLOCK_LARGE_SIZE];
+ struct tcam_key_x_y key[NIC_TCAM_BLOCK_LARGE_SIZE];
+ struct tcam_result data[NIC_TCAM_BLOCK_LARGE_SIZE];
+};
+
+struct hinic3_tcam_key_ipv4_mem {
+ u32 rsvd1 : 1;
+ u32 bifur_flag : 2;
+ u32 model : 1;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+ u32 sipv4_h : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+ u32 dipv4_h : 16;
+ u32 sipv4_l : 16;
+ u32 vlan_id : 15;
+ u32 vlan_flag : 1;
+ u32 dipv4_l : 16;
+ u32 rsvd3;
+ u32 dport : 16;
+ u32 rsvd4 : 16;
+ u32 rsvd5 : 16;
+ u32 sport : 16;
+ u32 outer_sipv4_h : 16;
+ u32 rsvd6 : 16;
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+ u32 rsvd7 : 16;
+ u32 vni_l : 16;
+};
+
+union hinic3_tag_tcam_ext_info {
+ struct {
+ u32 id : 16; /* id */
+ u32 type : 4; /* type: 0-func, 1-vmdq, 2-port, 3-rsvd, 4-trunk, 5-dp, 6-mc */
+ u32 host_id : 3;
+ u32 rss_q_num : 8; /* rss queue num */
+ u32 ext : 1;
+ } bs;
+ u32 value;
+};
+
+struct hinic3_tcam_key_ipv6_mem {
+ u32 bifur_flag : 2;
+ u32 vlan_flag : 1;
+ u32 outer_ip_type : 1;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+ u32 sipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+ u32 sipv6_key2 : 16;
+ u32 sipv6_key1 : 16;
+ u32 sipv6_key4 : 16;
+ u32 sipv6_key3 : 16;
+ u32 sipv6_key6 : 16;
+ u32 sipv6_key5 : 16;
+ u32 dport : 16;
+ u32 sipv6_key7 : 16;
+ u32 dipv6_key0 : 16;
+ u32 sport : 16;
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+ u32 rsvd2 : 16;
+ u32 dipv6_key7 : 16;
+};
+
+struct hinic3_tcam_key_vxlan_ipv6_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+
+ u32 dipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+
+ u32 dport : 16;
+ u32 dipv6_key7 : 16;
+
+ u32 rsvd2 : 16;
+ u32 sport : 16;
+
+ u32 outer_sipv4_h : 16;
+ u32 rsvd3 : 16;
+
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+
+ u32 rsvd4 : 16;
+ u32 vni_l : 16;
+};
+
+struct tag_tcam_key {
+ union {
+ struct hinic3_tcam_key_ipv4_mem key_info;
+ struct hinic3_tcam_key_ipv6_mem key_info_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6;
+ };
+
+ union {
+ struct hinic3_tcam_key_ipv4_mem key_mask;
+ struct hinic3_tcam_key_ipv6_mem key_mask_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6;
+ };
+};
+
+enum {
+ PPA_TABLE_ID_CLEAN_CMD = 0,
+ PPA_TABLE_ID_ADD_CMD,
+ PPA_TABLE_ID_DEL_CMD,
+ FDIR_TABLE_ID_ADD_CMD,
+ FDIR_TABLE_ID_DEL_CMD,
+ PPA_TABEL_ID_MAX
+};
+
+struct hinic3_ppa_cfg_table_id_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u16 cmd;
+ u16 table_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_cfg_ppa_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 ppa_en;
+ u8 ppa_miss_drop_en;
+};
+
+struct hinic3_func_flow_bifur_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 func_id;
+ u8 flow_bifur_en;
+ u8 rsvd[5];
+};
+
+struct hinic3_port_flow_bifur_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 port_id;
+ u8 flow_bifur_en;
+ u8 flow_bifur_type; /* 0->vf bifur, 2->traffic bifur */
+ u8 rsvd[4];
+};
+
+struct hinic3_bond_mask_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 func_id;
+ u8 bond_mask;
+ u8 bond_en;
+ u8 func_valid;
+ u8 rsvd[3];
+};
+
+struct hinic3_func_er_value_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+ u16 vf_id;
+ u16 er_fwd_id;
+};
+
+#define HINIC3_TX_SET_PROMISC_SKIP 0
+#define HINIC3_TX_GET_PROMISC_SKIP 1
+
+#define HINIC3_GET_TRAFFIC_BIFUR_STATE 0
+#define HINIC3_SET_TRAFFIC_BIFUR_STATE 1
+
+struct hinic3_tx_promisc_cfg {
+ struct hinic3_mgmt_msg_head msg_head;
+ u8 port_id;
+ u8 promisc_skip_en; /* 0: disable tx promisc replication, 1: enable */
+ u8 opcode; /* 0: set, 1: get */
+ u8 rsvd1;
+};
+
+struct hinic3_ppa_cfg_mode_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 ppa_mode;
+ u8 qpc_func_nums;
+ u16 base_qpc_func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_flush_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 flush_en; /* 0 flush done, 1 in flush operation */
+ u8 rsvd1;
+};
+
+struct hinic3_ppa_fdir_query_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u32 index;
+ u32 rsvd;
+ u64 pkt_nums;
+ u64 pkt_bytes;
+};
+
+/* BIOS CONF */
+enum {
+ NIC_NVM_DATA_SET = BIT(0), /* 1-save, 0-read */
+ NIC_NVM_DATA_PXE = BIT(1),
+ NIC_NVM_DATA_VLAN = BIT(2),
+ NIC_NVM_DATA_VLAN_PRI = BIT(3),
+ NIC_NVM_DATA_VLAN_ID = BIT(4),
+ NIC_NVM_DATA_WORK_MODE = BIT(5),
+ NIC_NVM_DATA_PF_TX_SPEED_LIMIT = BIT(6),
+ NIC_NVM_DATA_GE_MODE = BIT(7),
+ NIC_NVM_DATA_AUTO_NEG = BIT(8),
+ NIC_NVM_DATA_LINK_FEC = BIT(9),
+ NIC_NVM_DATA_PF_ADAPTIVE_LINK = BIT(10),
+ NIC_NVM_DATA_SRIOV_CONTROL = BIT(11),
+ NIC_NVM_DATA_EXTEND_MODE = BIT(12),
+ NIC_NVM_DATA_LEGACY_VLAN = BIT(13),
+ NIC_NVM_DATA_LEGACY_VLAN_PRI = BIT(14),
+ NIC_NVM_DATA_LEGACY_VLAN_ID = BIT(15),
+ NIC_NVM_DATA_RESET = BIT(31),
+};
+
+#define BIOS_CFG_SIGNATURE 0x1923E518
+#define BIOS_OP_CFG_ALL(op_code_val) \
+ ((((op_code_val) >> 1) & (0xFFFFFFFF)) != 0)
+#define BIOS_OP_CFG_WRITE(op_code_val) \
+ ((((op_code_val) & NIC_NVM_DATA_SET)) != 0)
+#define BIOS_OP_CFG_PXE_EN(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_PXE) != 0)
+#define BIOS_OP_CFG_VLAN_EN(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_VLAN) != 0)
+#define BIOS_OP_CFG_VLAN_PRI(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_VLAN_PRI) != 0)
+#define BIOS_OP_CFG_VLAN_ID(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_VLAN_ID) != 0)
+#define BIOS_OP_CFG_WORK_MODE(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_WORK_MODE) != 0)
+#define BIOS_OP_CFG_PF_BW(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_PF_TX_SPEED_LIMIT) != 0)
+#define BIOS_OP_CFG_GE_SPEED(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_GE_MODE) != 0)
+#define BIOS_OP_CFG_AUTO_NEG(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_AUTO_NEG) != 0)
+#define BIOS_OP_CFG_LINK_FEC(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_LINK_FEC) != 0)
+#define BIOS_OP_CFG_AUTO_ADPAT(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_PF_ADAPTIVE_LINK) != 0)
+#define BIOS_OP_CFG_SRIOV_ENABLE(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_SRIOV_CONTROL) != 0)
+#define BIOS_OP_CFG_EXTEND_MODE(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_EXTEND_MODE) != 0)
+#define BIOS_OP_CFG_LEGACY_VLAN_EN(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_LEGACY_VLAN) != 0)
+#define BIOS_OP_CFG_LEGACY_VLAN_PRI(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_LEGACY_VLAN_PRI) != 0)
+#define BIOS_OP_CFG_LEGACY_VLAN_ID(op_code_val) \
+ (((op_code_val) & NIC_NVM_DATA_LEGACY_VLAN_ID) != 0)
+#define BIOS_OP_CFG_RST_DEF_SET(op_code_val) \
+ (((op_code_val) & (u32)NIC_NVM_DATA_RESET) != 0)
+
+
+#define NIC_BIOS_CFG_MAX_PF_BW 100
+
+struct nic_legacy_vlan_cfg {
+ /* Legacy mode PXE VLAN enable: 0 - disable 1 - enable */
+ u8 pxe_vlan_en : 1;
+ /* Legacy mode PXE VLAN priority: 0-7 */
+ u8 pxe_vlan_pri : 3;
+ /* Legacy mode PXE VLAN ID 1-4094 */
+ u16 pxe_vlan_id : 12;
+};
+
+/* Note: This structure must be 4-byte aligned. */
+struct nic_bios_cfg {
+ u32 signature;
+ u8 pxe_en;
+ u8 extend_mode;
+ struct nic_legacy_vlan_cfg nlvc;
+ u8 pxe_vlan_en;
+ u8 pxe_vlan_pri;
+ u16 pxe_vlan_id;
+ u32 service_mode;
+ u32 pf_tx_bw;
+ u8 speed;
+ u8 auto_neg;
+ u8 lanes;
+ u8 fec;
+ u8 auto_adapt;
+ u8 func_valid;
+ u8 func_id;
+ u8 sriov_en;
+};
+
+struct nic_cmd_bios_cfg {
+ struct hinic3_mgmt_msg_head head;
+ u32 op_code; /* Operation Code: Bit0 [0: read, 1: write], Bit1-6: cfg_mask */
+ struct nic_bios_cfg bios_cfg;
+};
+
+struct nic_rx_rate_bios_cfg {
+ struct mgmt_msg_head msg_head;
+
+ u32 op_code; /* Operation Code:[0:read 1:write] */
+ u8 rx_rate_limit;
+ u8 func_id;
+};
+
+struct nic_cmd_vhd_config {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 vhd_type;
+ u8 virtio_small_enable; /* 0: mergeable mode, 1: small mode */
+};
+
+/* BOND */
+struct hinic3_create_bond_info {
+ u32 bond_id;
+ u32 master_slave_port_id;
+ u32 slave_bitmap; /* bond port id bitmap */
+ u32 poll_timeout; /* Bond device link check time */
+ u32 up_delay; /* Temporarily reserved */
+ u32 down_delay; /* Temporarily reserved */
+ u32 bond_mode; /* Temporarily reserved */
+ u32 active_pf; /* bond use active pf id */
+ u32 active_port_max_num; /* Maximum number of active bond member interfaces */
+ u32 active_port_min_num; /* Minimum number of active bond member interfaces */
+ u32 xmit_hash_policy;
+ u32 default_param_flag;
+ u32 rsvd;
+};
+
+struct hinic3_cmd_create_bond {
+ struct hinic3_mgmt_msg_head head;
+ struct hinic3_create_bond_info create_bond_info;
+};
+
+struct hinic3_cmd_delete_bond {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 rsvd[2];
+};
+
+struct hinic3_open_close_bond_info {
+ u32 bond_id;
+ u32 open_close_flag; /* Bond flag. 1: open; 0: close. */
+ u32 rsvd[2];
+};
+
+struct hinic3_cmd_open_close_bond {
+ struct hinic3_mgmt_msg_head head;
+ struct hinic3_open_close_bond_info open_close_bond_info;
+};
+
+struct lacp_port_params {
+ u16 port_number;
+ u16 port_priority;
+ u16 key;
+ u16 system_priority;
+ u8 system[ETH_ALEN];
+ u8 port_state;
+ u8 rsvd;
+};
+
+struct lacp_port_info {
+ u32 selected;
+ u32 aggregator_port_id;
+
+ struct lacp_port_params actor;
+ struct lacp_port_params partner;
+
+ u64 tx_lacp_pkts;
+ u64 rx_lacp_pkts;
+ u64 rx_8023ad_drop;
+ u64 tx_8023ad_drop;
+ u64 unknown_pkt_drop;
+ u64 rx_marker_pkts;
+ u64 tx_marker_pkts;
+};
+
+struct hinic3_bond_status_info {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 bon_mmi_status;
+ u32 active_bitmap;
+ u32 port_count;
+
+ struct lacp_port_info port_info[4];
+
+ u64 success_report_cnt[4];
+ u64 fail_report_cnt[4];
+
+ u64 poll_timeout;
+ u64 fast_periodic_timeout;
+ u64 slow_periodic_timeout;
+ u64 short_timeout;
+ u64 long_timeout;
+ u64 aggregate_wait_timeout;
+ u64 tx_period_timeout;
+ u64 rx_marker_timer;
+};
+
+struct hinic3_bond_active_report_info {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 bon_mmi_status;
+ u32 active_bitmap;
+
+ u8 rsvd[16];
+};
+
+/* For packets with IP checksum errors, enable RSS four-tuple hashing. */
+struct hinic3_ipcs_err_rss_enable_operation_s {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 en_tag;
+ u8 type; /* 1: set 0: get */
+ u8 rsvd[2];
+};
+
+struct hinic3_smac_check_state {
+ struct hinic3_mgmt_msg_head head;
+ u8 smac_check_en; /* 1: enable 0: disable */
+ u8 op_code; /* 1: set 0: get */
+ u8 flash_en; /* 1: enable 0: disable */
+ u8 rsvd;
+};
+
+struct hinic3_clear_log_state {
+ struct hinic3_mgmt_msg_head head;
+ u32 type;
+};
+
+struct hinic3_outband_cfg_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 outband_default_vid;
+ u16 func_id;
+};
+
+struct hinic3_wr_ordering {
+ struct hinic3_mgmt_msg_head head;
+ u8 op_code; /* 1: set 0: get */
+ u8 wr_pkt_so_ro;
+ u8 rd_pkt_so_ro;
+ u8 rsvd;
+};
+
+struct hinic3_function_active_info {
+ struct hinic3_mgmt_msg_head head;
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_rq_info {
+ struct hinic3_mgmt_msg_head head;
+ u16 func_id;
+ u16 rq_depth;
+ u16 rq_num;
+ u16 pf_num;
+ u16 port_num;
+};
+
+#endif /* HINIC_MGMT_INTERFACE_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/nic_npu_cmd.h b/drivers/net/ethernet/huawei/hinic3/nic_npu_cmd.h
new file mode 100644
index 0000000..97eda43
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/nic_npu_cmd.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C), 2001-2011, Huawei Tech. Co., Ltd.
+ * File Name : nic_npu_cmd.h
+ * Version : Initial Draft
+ * Created : 2019/4/25
+ * Last Modified :
+ * Description : NIC Commands between Driver and NPU
+ * Function List :
+ */
+
+#ifndef NIC_NPU_CMD_H
+#define NIC_NPU_CMD_H
+
+/* NIC CMDQ MODE */
+enum hinic3_ucode_cmd {
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0,
+ HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ HINIC3_UCODE_CMD_ARM_SQ, /**< Unused */
+ HINIC3_UCODE_CMD_ARM_RQ, /**< Unused */
+ HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE, /**< Unused */
+ HINIC3_UCODE_CMD_SET_IQ_ENABLE, /**< Unused */
+ HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10,
+ HINIC3_UCODE_CMD_MODIFY_VLAN_CTX,
+ HINIC3_UCODE_CMD_PPA_HASH_TABLE,
+ HINIC3_UCODE_CMD_RXQ_INFO_GET = 13,
+ HINIC3_UCODE_MIG_CFG_Q_CTX = 14,
+ HINIC3_UCODE_MIG_CHK_SQ_STOP,
+ HINIC3_UCODE_CHK_RQ_STOP,
+ HINIC3_UCODE_MIG_CFG_BAT_INFO,
+};
+
+#endif /* NIC_NPU_CMD_H */
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/ossl_knl.h b/drivers/net/ethernet/huawei/hinic3/ossl_knl.h
new file mode 100644
index 0000000..bb658cb
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/ossl_knl.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef OSSL_KNL_H
+#define OSSL_KNL_H
+
+#include "ossl_knl_linux.h"
+#include <linux/types.h>
+
+#define sdk_err(dev, format, ...) dev_err(dev, "[COMM]" format, ##__VA_ARGS__)
+#define sdk_warn(dev, format, ...) dev_warn(dev, "[COMM]" format, ##__VA_ARGS__)
+#define sdk_notice(dev, format, ...) dev_notice(dev, "[COMM]" format, ##__VA_ARGS__)
+#define sdk_info(dev, format, ...) dev_info(dev, "[COMM]" format, ##__VA_ARGS__)
+
+#define nic_err(dev, format, ...) dev_err(dev, "[NIC]" format, ##__VA_ARGS__)
+#define nic_warn(dev, format, ...) dev_warn(dev, "[NIC]" format, ##__VA_ARGS__)
+#define nic_notice(dev, format, ...) dev_notice(dev, "[NIC]" format, ##__VA_ARGS__)
+#define nic_info(dev, format, ...) dev_info(dev, "[NIC]" format, ##__VA_ARGS__)
+
+#ifndef BIG_ENDIAN
+#define BIG_ENDIAN 0x4321
+#endif
+
+#ifndef LITTLE_ENDIAN
+#define LITTLE_ENDIAN 0x1234
+#endif
+
+#ifdef BYTE_ORDER
+#undef BYTE_ORDER
+#endif
+/* X86 */
+#define BYTE_ORDER LITTLE_ENDIAN
+#define USEC_PER_MSEC 1000L
+#define MSEC_PER_SEC 1000L
+
+/* Waiting for 50 us */
+#define WAIT_USEC_50 50L
+
+#endif /* OSSL_KNL_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h b/drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h
new file mode 100644
index 0000000..b815d7c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h
@@ -0,0 +1,371 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef OSSL_KNL_LINUX_H_
+#define OSSL_KNL_LINUX_H_
+
+#include <net/ipv6.h>
+#include <net/devlink.h>
+#include <linux/string.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/version.h>
+#include <linux/ethtool.h>
+#include <linux/fs.h>
+#include <linux/kthread.h>
+#include <linux/if_vlan.h>
+#include <linux/udp.h>
+#include <linux/highmem.h>
+#include <linux/list.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+#include <linux/proc_fs.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/filter.h>
+#include <linux/aer.h>
+#include <linux/socket.h>
+
+#ifndef NETIF_F_SCTP_CSUM
+#define NETIF_F_SCTP_CSUM 0
+#endif
+
+#ifndef __GFP_COLD
+#define __GFP_COLD 0
+#endif
+
+#ifndef __GFP_COMP
+#define __GFP_COMP 0
+#endif
+
+#undef __always_unused
+#define __always_unused __attribute__((__unused__))
+
+#define ossl_get_free_pages __get_free_pages
+
+#ifndef ETHTOOL_LINK_MODE_100000baseKR_Full_BIT
+#define ETHTOOL_LINK_MODE_100000baseKR_Full_BIT 75
+#define ETHTOOL_LINK_MODE_100000baseCR_Full_BIT 78
+#define ETHTOOL_LINK_MODE_100000baseSR_Full_BIT 76
+#endif
+#ifndef ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT
+#define ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT 80
+#define ETHTOOL_LINK_MODE_200000baseSR2_Full_BIT 81
+#define ETHTOOL_LINK_MODE_200000baseCR2_Full_BIT 84
+#endif
+
+#ifndef high_16_bits
+#define low_16_bits(x) ((x) & 0xFFFF)
+#define high_16_bits(x) (((x) & 0xFFFF0000) >> 16)
+#endif
+
+#ifndef U8_MAX
+#define U8_MAX 0xFF
+#endif
+
+#define ETH_TYPE_TRANS_SETS_DEV
+#define HAVE_NETDEV_STATS_IN_NETDEV
+
+#ifndef HAVE_SET_RX_MODE
+#define HAVE_SET_RX_MODE
+#endif
+
+#define HAVE_INET6_IFADDR_LIST
+#define HAVE_NDO_GET_STATS64
+
+#ifndef HAVE_MQPRIO
+#define HAVE_MQPRIO
+#endif
+#ifndef HAVE_SETUP_TC
+#define HAVE_SETUP_TC
+#endif
+
+#ifndef HAVE_NDO_SET_FEATURES
+#define HAVE_NDO_SET_FEATURES
+#endif
+#define HAVE_IRQ_AFFINITY_NOTIFY
+#define HAVE_ETHTOOL_SET_PHYS_ID
+#define HAVE_NETDEV_WANTED_FEAUTES
+
+#ifndef HAVE_PCI_DEV_FLAGS_ASSIGNED
+#define HAVE_PCI_DEV_FLAGS_ASSIGNED
+#define HAVE_VF_SPOOFCHK_CONFIGURE
+#endif
+#ifndef HAVE_SKB_L4_RXHASH
+#define HAVE_SKB_L4_RXHASH
+#endif
+
+#define HAVE_ETHTOOL_GRXFHINDIR_SIZE
+#define HAVE_INT_NDO_VLAN_RX_ADD_VID
+#ifdef ETHTOOL_SRXNTUPLE
+#undef ETHTOOL_SRXNTUPLE
+#endif
+
+#define _kc_kmap_atomic(page) kmap_atomic(page)
+#define _kc_kunmap_atomic(addr) kunmap_atomic(addr)
+
+#include <linux/of_net.h>
+#define HAVE_FDB_OPS
+#define HAVE_ETHTOOL_GET_TS_INFO
+
+#define HAVE_NAPI_GRO_FLUSH_OLD
+
+#ifndef HAVE_SRIOV_CONFIGURE
+#define HAVE_SRIOV_CONFIGURE
+#endif
+
+#define HAVE_ENCAP_TSO_OFFLOAD
+#define HAVE_SKB_INNER_NETWORK_HEADER
+
+
+#define HAVE_NDO_SET_VF_LINK_STATE
+#define HAVE_SKB_INNER_PROTOCOL
+#define HAVE_MPLS_FEATURES
+
+#define HAVE_NDO_GET_PHYS_PORT_ID
+#define HAVE_NETIF_SET_XPS_QUEUE_CONST_MASK
+
+#define HAVE_VXLAN_CHECKS
+#define HAVE_NET_GET_RANDOM_ONCE
+#define HAVE_HWMON_DEVICE_REGISTER_WITH_GROUPS
+
+#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
+
+
+#define HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+#define HAVE_VLAN_FIND_DEV_DEEP_RCU
+
+#define HAVE_SKBUFF_CSUM_LEVEL
+#define HAVE_MULTI_VLAN_OFFLOAD_EN
+#define HAVE_ETH_GET_HEADLEN_FUNC
+
+
+#define HAVE_RXFH_HASHFUNC
+#define HAVE_NDO_SET_VF_TRUST
+
+#include <net/devlink.h>
+
+#define HAVE_IO_MAP_WC_SIZE
+
+#define HAVE_NETDEVICE_MIN_MAX_MTU
+
+
+#define HAVE_VOID_NDO_GET_STATS64
+#define HAVE_VM_OPS_FAULT_NO_VMA
+
+#define HAVE_HWTSTAMP_FILTER_NTP_ALL
+#define HAVE_NDO_SETUP_TC_CHAIN_INDEX
+#define HAVE_PCI_ERROR_HANDLER_RESET_PREPARE
+#define HAVE_PTP_CLOCK_DO_AUX_WORK
+
+
+#define HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
+
+#define HAVE_XDP_SUPPORT
+#if (KERNEL_VERSION(5, 9, 0) > LINUX_VERSION_CODE)
+#define HAVE_XDP_QUERY_PROG
+#endif
+
+#define HAVE_NDO_BPF_NETDEV_BPF
+#define HAVE_TIMER_SETUP
+#define HAVE_XDP_DATA_META
+
+#define HAVE_MACRO_VM_FAULT_T
+
+#define HAVE_NDO_SELECT_QUEUE_SB_DEV
+
+
+#define dev_open(x) dev_open(x, NULL)
+#define HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+
+#ifndef get_ds
+#define get_ds() (KERNEL_DS)
+#endif
+
+#ifndef dma_zalloc_coherent
+#define dma_zalloc_coherent(d, s, h, f) _hinic3_dma_zalloc_coherent(d, s, h, f)
+static inline void *_hinic3_dma_zalloc_coherent(struct device *dev,
+ size_t size, dma_addr_t *dma_handle,
+ gfp_t gfp)
+{
+ /* Since kernel 5.0, dma_alloc_coherent() zeroes the returned memory on
+ * all architectures, so dma_zalloc_coherent() can be a thin wrapper
+ * around dma_alloc_coherent().
+ */
+ return dma_alloc_coherent(dev, size, dma_handle, gfp);
+}
+#endif
+
+#if (KERNEL_VERSION(5, 6, 0) <= LINUX_VERSION_CODE)
+#ifndef DT_KNL_EMU
+struct timeval {
+ __kernel_old_time_t tv_sec; /* seconds */
+ __kernel_suseconds_t tv_usec; /* microseconds */
+};
+#endif
+#endif
+
+#ifndef do_gettimeofday
+#define do_gettimeofday(time) _kc_do_gettimeofday(time)
+static inline void _kc_do_gettimeofday(struct timeval *tv)
+{
+ struct timespec64 ts;
+
+ ktime_get_real_ts64(&ts);
+ tv->tv_sec = ts.tv_sec;
+ tv->tv_usec = ts.tv_nsec / NSEC_PER_USEC;
+}
+#endif
+
+
+
+#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
+#define HAVE_NDO_SELECT_QUEUE_SB_DEV
+#define HAVE_GENL_OPS_FIELD_VALIDATE
+#define ETH_MODULE_SFF_8436_MAX_LEN 640
+#define ETH_MODULE_SFF_8636_MAX_LEN 640
+#define SPEED_200000 200000
+
+#ifndef FIELD_SIZEOF
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#endif
+
+/*****************************************************************************/
+#if (KERNEL_VERSION(5, 5, 0) > LINUX_VERSION_CODE)
+#else /* >= 5.5.0 */
+#define HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+#endif /* 5.5.0 */
+
+/*****************************************************************************/
+#if (KERNEL_VERSION(5, 6, 0) > LINUX_VERSION_CODE)
+#else /* >= 5.6.0 */
+#ifndef rtc_time_to_tm
+#define rtc_time_to_tm rtc_time64_to_tm
+#endif
+#define HAVE_NDO_TX_TIMEOUT_TXQ
+#define HAVE_PROC_OPS
+#endif /* 5.6.0 */
+
+/*****************************************************************************/
+#if (KERNEL_VERSION(5, 7, 0) > LINUX_VERSION_CODE)
+#else /* >= 5.7.0 */
+#define SUPPORTED_COALESCE_PARAMS
+
+#ifndef pci_cleanup_aer_uncorrect_error_status
+#define pci_cleanup_aer_uncorrect_error_status pci_aer_clear_nonfatal_status
+#endif
+#endif /* 5.7.0 */
+
+/* ************************************************************************ */
+#if (KERNEL_VERSION(5, 9, 0) > LINUX_VERSION_CODE)
+
+#else /* >= 5.9.0 */
+#define HAVE_XDP_FRAME_SZ
+#endif /* 5.9.0 */
+
+/* ************************************************************************ */
+#if (KERNEL_VERSION(5, 10, 0) > LINUX_VERSION_CODE)
+#define HAVE_DEVLINK_FW_FILE_NAME_PARAM
+#else /* >= 5.10.0 */
+#endif /* 5.10.0 */
+
+#define HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+
+/* ************************************************************************ */
+#if (KERNEL_VERSION(5, 10, 0) > LINUX_VERSION_CODE)
+
+#else /* >= 5.10.0 */
+#if !defined(HAVE_ETHTOOL_COALESCE_EXTACK) && \
+ !defined(NO_ETHTOOL_COALESCE_EXTACK)
+#define HAVE_ETHTOOL_COALESCE_EXTACK
+#endif
+#endif /* 5.10.0 */
+
+/* ************************************************************************ */
+#if (KERNEL_VERSION(5, 10, 0) > LINUX_VERSION_CODE)
+
+#else /* >= 5.10.0 */
+#if !defined(HAVE_ETHTOOL_RINGPARAM_EXTACK) && \
+ !defined(NO_ETHTOOL_RINGPARAM_EXTACK)
+#define HAVE_ETHTOOL_RINGPARAM_EXTACK
+#endif
+#endif /* 5.10.0 */
+/* ************************************************************************ */
+#define HAVE_NDO_UDP_TUNNEL_ADD
+#define HAVE_ENCAPSULATION_TSO
+#define HAVE_ENCAPSULATION_CSUM
+
+#ifndef eth_zero_addr
+static inline void hinic3_eth_zero_addr(u8 *addr)
+{
+ (void)memset(addr, 0x00, ETH_ALEN);
+}
+
+#define eth_zero_addr(_addr) hinic3_eth_zero_addr(_addr)
+#endif
+
+#ifndef netdev_hw_addr_list_for_each
+#define netdev_hw_addr_list_for_each(ha, l) \
+ list_for_each_entry(ha, &(l)->list, list)
+#endif
+
+#define spin_lock_deinit(lock)
+
+struct file *file_creat(const char *file_name);
+
+struct file *file_open(const char *file_name);
+
+void file_close(struct file *file_handle);
+
+u32 get_file_size(struct file *file_handle);
+
+void set_file_position(struct file *file_handle, u32 position);
+
+int file_read(struct file *file_handle, char *log_buffer, u32 rd_length,
+ u32 *file_pos);
+
+u32 file_write(struct file *file_handle, const char *log_buffer, u32 wr_length);
+
+struct sdk_thread_info {
+ struct task_struct *thread_obj;
+ char *name;
+ void (*thread_fn)(void *x);
+ void *thread_event;
+ void *data;
+};
+
+int creat_thread(struct sdk_thread_info *thread_info);
+
+void stop_thread(struct sdk_thread_info *thread_info);
+
+#define destroy_work(work)
+void utctime_to_localtime(u64 utctime, u64 *localtime);
+#ifndef HAVE_TIMER_SETUP
+void initialize_timer(const void *adapter_hdl, struct timer_list *timer);
+#endif
+void add_to_timer(struct timer_list *timer, u64 period);
+void stop_timer(struct timer_list *timer);
+void delete_timer(struct timer_list *timer);
+u64 ossl_get_real_time(void);
+
+#define nicif_err(priv, type, dev, fmt, args...) \
+ netif_level(err, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_warn(priv, type, dev, fmt, args...) \
+ netif_level(warn, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_notice(priv, type, dev, fmt, args...) \
+ netif_level(notice, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_info(priv, type, dev, fmt, args...) \
+ netif_level(info, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_dbg(priv, type, dev, fmt, args...) \
+ netif_level(dbg, priv, type, dev, "[NIC]" fmt, ##args)
+
+#define destroy_completion(completion)
+#define sema_deinit(lock)
+#define mutex_deinit(lock)
+#define rwlock_deinit(lock)
+
+#define tasklet_state(tasklet) ((tasklet)->state)
+
+#endif
+/* ************************************************************************ */
--
2.28.0.windows.1
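For readers of the management interface header above: the BIOS_OP_CFG_* helpers simply test individual NIC_NVM_DATA_* bits in the op_code word of struct nic_cmd_bios_cfg. A minimal decoding sketch follows, using only the macros and fields defined in this hunk; the request values and the demo helper name are hypothetical.
/* Minimal sketch, assuming a caller builds a BIOS config write request;
 * only macros and structures from the header above are used.
 */
static void hinic3_bios_cfg_demo(void) /* hypothetical helper */
{
	struct nic_cmd_bios_cfg cfg = {
		.op_code = NIC_NVM_DATA_SET | NIC_NVM_DATA_VLAN | NIC_NVM_DATA_VLAN_ID,
		.bios_cfg = {
			.signature = BIOS_CFG_SIGNATURE,
			.pxe_vlan_en = 1,
			.pxe_vlan_id = 100, /* hypothetical VLAN ID */
		},
	};

	/* A write request only applies the fields whose mask bits are set. */
	if (BIOS_OP_CFG_WRITE(cfg.op_code) && BIOS_OP_CFG_VLAN_EN(cfg.op_code) &&
	    BIOS_OP_CFG_VLAN_ID(cfg.op_code))
		pr_info("BIOS cfg write: PXE VLAN %u\n", cfg.bios_cfg.pxe_vlan_id);
}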
13 Oct '25
driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/ID1K5J
----------------------------------------------------------------------
Yushan Wang (2):
soc cache: support L3 cache lock in framework
soc cache: L3 cache lockdown support for HiSilicon SoC
arch/arm64/configs/openeuler_defconfig | 3 +-
drivers/soc/hisilicon/Kconfig | 2 +-
.../soc/hisilicon/hisi_soc_cache_framework.c | 271 +++++++-
.../soc/hisilicon/hisi_soc_cache_framework.h | 15 +
drivers/soc/hisilicon/hisi_soc_l3c.c | 610 ++++++++++++++++++
5 files changed, 897 insertions(+), 3 deletions(-)
create mode 100644 drivers/soc/hisilicon/hisi_soc_l3c.c
--
2.33.0
13 Oct '25
From: "Borislav Petkov (AMD)" <bp(a)alien8.de>
stable inclusion
from stable-v5.10.214
commit cc6ddd6fa93eb59ac6f63158a6466e45ad0ca94c
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/ID1J7D
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=…
--------------------------------
The Link tag has all the details but basically due to missing upstream
commits, the header which contains __text_gen_insn() is not in the
includes in paravirt.c, leading to:
arch/x86/kernel/paravirt.c: In function 'paravirt_patch_call':
arch/x86/kernel/paravirt.c:65:9: error: implicit declaration of function '__text_gen_insn' \
[-Werror=implicit-function-declaration]
65 | __text_gen_insn(insn_buff, CALL_INSN_OPCODE,
| ^~~~~~~~~~~~~~~
Add the missing include.
Reported-by: Omar Sandoval <osandov(a)osandov.com>
Signed-off-by: Borislav Petkov (AMD) <bp(a)alien8.de>
Link: https://lore.kernel.org/r/ZeYXvd1-rVkPGvvW@telecaster
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Bowen You <youbowen2(a)huawei.com>
---
arch/x86/kernel/paravirt.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index a32e73f83f49..2da5c225bf04 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -31,6 +31,7 @@
#include <asm/special_insns.h>
#include <asm/tlb.h>
#include <asm/io_bitmap.h>
+#include <asm/text-patching.h>
/*
* nop stub, which must not clobber anything *including the stack* to
--
2.34.1
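For context on the build error quoted above: __text_gen_insn() and the CALL_INSN_* constants are declared in <asm/text-patching.h>, which is why the single added include is sufficient. The sketch below approximates the upstream shape of the call site the diagnostic points at; the exact OLK-5.10 body may differ.
/* Approximation of the call site the error refers to, not the exact
 * OLK-5.10 source. __text_gen_insn(), CALL_INSN_OPCODE and CALL_INSN_SIZE
 * come from <asm/text-patching.h>.
 */
unsigned int paravirt_patch_call(void *insn_buff, const void *target,
				 unsigned long addr, unsigned int len)
{
	/* Emit a 5-byte direct call from addr to target into insn_buff. */
	__text_gen_insn(insn_buff, CALL_INSN_OPCODE,
			(void *)addr, target, CALL_INSN_SIZE);
	return CALL_INSN_SIZE;
}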