Kernel

[Meeting Notice] openEuler kernel technical sharing session #13 & biweekly regular meeting. Time: 2021-10-15 14:00-16:30
by Meeting Book 15 Oct '21
Backport perf bugfix patches from the mailing list.
Li Huafei (2):
perf env: Normalize aarch64.* and arm64.* to arm64 in normalize_arch()
perf annotate: Add error log in symbol__annotate()
tools/perf/util/annotate.c | 4 +++-
tools/perf/util/env.c | 2 +-
2 files changed, 4 insertions(+), 2 deletions(-)
--
2.20.1
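
For readers following the first patch above: the normalize_arch() change folds the assorted aarch64.*/arm64.* machine strings onto the single canonical name "arm64", so the rest of perf only has to match one spelling. A minimal, self-contained sketch of that kind of normalization (the helper name and string table below are illustrative assumptions, not the patch itself):

#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch only, not the actual patch: fold the many
 * machine strings an environment may report (e.g. "aarch64_be",
 * "arm64") into one canonical name, the way perf's normalize_arch()
 * treats aarch64.* and arm64.* as plain "arm64".
 */
static const char *normalize_arch_example(const char *arch)
{
	if (!strncmp(arch, "aarch64", 7) || !strncmp(arch, "arm64", 5))
		return "arm64";
	if (!strncmp(arch, "arm", 3))
		return "arm";
	if (!strcmp(arch, "x86_64") || !strcmp(arch, "amd64"))
		return "x86_64";
	return arch;	/* unknown strings pass through unchanged */
}

int main(void)
{
	printf("%s -> %s\n", "aarch64_be", normalize_arch_example("aarch64_be"));
	printf("%s -> %s\n", "arm64", normalize_arch_example("arm64"));
	return 0;
}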
Backport LTS 5.10.54 from upstream.
Adrian Hunter (1):
driver core: Prevent warning when removing a device link from
unregistered consumer
Alain Volmat (1):
spi: stm32: fixes pm_runtime calls in probe/remove
Alan Young (1):
ALSA: pcm: Call substream ack() method upon compat mmap commit
Aleksandr Loktionov (1):
igb: Check if num of q_vectors is smaller than max before array access
Aleksandr Nogikh (1):
net: add kcov handle to skb extensions
Alexander Egorenkov (1):
s390/boot: fix use of expolines in the DMA code
Alexander Tsoy (1):
ALSA: usb-audio: Add registration quirk for JBL Quantum headsets
Alexandru Tachici (1):
spi: spi-bcm2835: Fix deadlock
Amelie Delaunay (1):
usb: typec: stusb160x: register role switch before interrupt
registration
Anand Jain (1):
btrfs: check for missing device in btrfs_trim_fs
Axel Lin (2):
regulator: hi6421: Use correct variable type for regmap api val
argument
regulator: hi6421: Fix getting wrong drvdata
Bhaumik Bhatt (1):
bus: mhi: core: Validate channel ID when processing command
completions
Casey Chen (1):
nvme-pci: do not call nvme_dev_remove_admin from nvme_remove
Charles Baylis (1):
drm: Return -ENOTTY for non-drm ioctls
Charles Keepax (1):
ASoC: wm_adsp: Correct wm_coeff_tlv_get handling
Christoph Hellwig (1):
nvme: set the PRACT bit when using Write Zeroes with T10 PI
Christophe JAILLET (7):
ixgbe: Fix an error handling path in 'ixgbe_probe()'
igc: Fix an error handling path in 'igc_probe()'
igb: Fix an error handling path in 'igb_probe()'
fm10k: Fix an error handling path in 'fm10k_probe()'
e1000e: Fix an error handling path in 'e1000_probe()'
iavf: Fix an error handling path in 'iavf_probe()'
gve: Fix an error handling path in 'gve_probe()'
Clark Wang (1):
spi: imx: add a check for speed_hz before calculating the clock
Colin Ian King (2):
liquidio: Fix unintentional sign extension issue on left shift of u16
s390/bpf: Perform r1 range checking before accessing jit->seen_reg[r1]
Colin Xu (1):
drm/i915/gvt: Clear d3_entered on elsp cmd submission.
Daniel Borkmann (1):
bpf: Fix tail_call_reachable rejection for interpreter when jit failed
David Howells (1):
afs: Fix tracepoint string placement with built-in AFS
David Jeffery (1):
usb: ehci: Prevent missed ehci interrupts with edge-triggered MSI
Dmitry Bogdanov (1):
scsi: target: Fix protect handling in WRITE SAME(32)
Dongliang Mu (1):
usb: hso: fix error handling code of hso_create_net_device
Eric Dumazet (1):
net/tcp_fastopen: fix data races around tfo_active_disable_stamp
Evan Quan (1):
PCI: Mark AMD Navi14 GPU ATS as broken
Florian Fainelli (1):
skbuff: Fix build with SKB extensions disabled
Frederic Weisbecker (1):
posix-cpu-timers: Fix rearm racing against process tick
Greg Kroah-Hartman (1):
nds32: fix up stack guard gap
Greg Thelen (1):
usb: xhci: avoid renesas_usb_fw.mem when it's unusable
Gustavo A. R. Silva (1):
media: ngene: Fix out-of-bounds bug in ngene_command_config_free_buf()
Hangbin Liu (2):
selftests: icmp_redirect: remove from checking for IPv6 route get
selftests: icmp_redirect: IPv6 PMTU info should be cleared after
redirect
Haoran Luo (1):
tracing: Fix bug in rb_per_cpu_empty() that might cause deadloop.
Hui Wang (1):
ALSA: hda/realtek: Fix pop noise and 2 Front Mic issues on a machine
Ian Ray (1):
USB: serial: cp210x: fix comments for GE CS1000
Ilya Dryomov (2):
rbd: don't hold lock_rwsem while running_list is being drained
rbd: always kick acquire on "acquired" and "released" notifications
Jakub Sitnicki (1):
bpf, sockmap, udp: sk_prot needs inuse_idx set for proc stats
Jedrzej Jagielski (1):
igb: Fix position of assignment to *ring
Jianguo Wu (1):
mptcp: fix warning in __skb_flow_dissect() when do syn cookie for
subflow join
John Fastabend (2):
bpf, sockmap: Fix potential memory leak on unlikely error case
bpf, sockmap, tcp: sk_prot needs inuse_idx set for proc stats
John Keeping (1):
USB: serial: cp210x: add ID for CEL EM3588 USB ZigBee stick
Julian Sikorski (1):
USB: usb-storage: Add LaCie Rugged USB3-FW to IGNORE_UAS
Jérôme Glisse (1):
misc: eeprom: at24: Always append device id even if label property is
set.
Kalesh AP (1):
bnxt_en: don't disable an already disabled PCI device
Like Xu (1):
KVM: x86/pmu: Clear anythread deprecated bit when 0xa leaf is
unsupported on the SVM
Likun Gao (1):
drm/amdgpu: update golden setting for sienna_cichlid
Luis Henriques (1):
ceph: don't WARN if we're still opening a session to an MDS
Mahesh Bandewar (1):
bonding: fix build issue
Marc Zyngier (1):
firmware/efi: Tell memblock about EFI iomem reservations
Marcelo Henrique Cerri (1):
proc: Avoid mixing integer types in mem_rw()
Marco De Marco (1):
USB: serial: option: add support for u-blox LARA-R6 family
Marek Behún (2):
net: dsa: mv88e6xxx: enable SerDes RX stats for Topaz
net: dsa: mv88e6xxx: enable SerDes PCS register dump via ethtool -d on
Topaz
Marek Vasut (1):
spi: cadence: Correct initialisation of runtime PM again
Mark Tomlinson (1):
usb: max-3421: Prevent corruption of freed memory
Markus Boehme (1):
ixgbe: Fix packet corruption due to missing DMA sync
Mathias Nyman (4):
xhci: Fix lost USB 2 remote wake
usb: hub: Disable USB 3 device initiated lpm if exit latency is too
high
usb: hub: Fix link power management max exit latency (MEL)
calculations
xhci: add xhci_get_virt_ep() helper
Maxim Schwalm (1):
ASoC: rt5631: Fix regcache sync errors on resume
Maxime Ripard (1):
drm/panel: raspberrypi-touchscreen: Prevent double-free
Michael Chan (3):
bnxt_en: Refresh RoCE capabilities in bnxt_ulp_probe()
bnxt_en: Add missing check for BNXT_STATE_ABORT_ERR in
bnxt_fw_rset_task()
bnxt_en: Validate vlan protocol ID on RX packets
Michal Suchanek (1):
efi/tpm: Differentiate missing and invalid final event log table.
Mike Christie (1):
scsi: iscsi: Fix iface sysfs attr detection
Mike Kravetz (1):
hugetlbfs: fix mount mode command line processing
Mike Rapoport (1):
memblock: make for_each_mem_range() traverse MEMBLOCK_HOTPLUG regions
Minas Harutyunyan (2):
usb: dwc2: gadget: Fix GOUTNAK flow for Slave mode.
usb: dwc2: gadget: Fix sending zero length packet in DDMA mode.
Moritz Fischer (1):
Revert "usb: renesas-xhci: Fix handling of unknown ROM state"
Nguyen Dinh Phi (1):
netrom: Decrease sock refcount when sock timers expire
Nicholas Piggin (4):
KVM: PPC: Book3S: Fix CONFIG_TRANSACTIONAL_MEM=n crash
KVM: PPC: Fix kvm_arch_vcpu_ioctl vcpu_load leak
KVM: PPC: Book3S: Fix H_RTAS rets buffer overflow
KVM: PPC: Book3S HV Nested: Sanitise H_ENTER_NESTED TM state
Nicolas Dichtel (1):
ipv6: fix 'disable_policy' for fwd packets
Nicolas Saenz Julienne (1):
timers: Fix get_next_timer_interrupt() with no timers pending
Paolo Abeni (1):
ipv6: fix another slab-out-of-bounds in fib6_nh_flush_exceptions
Paul Blakey (1):
skbuff: Release nfct refcount on napi stolen or re-used skbs
Pavel Begunkov (2):
io_uring: explicitly count entries for poll reqs
io_uring: remove double poll entry on arm failure
Pavel Skripkin (1):
net: sched: fix memory leak in tcindex_partial_destroy_work
Peilin Ye (1):
net/sched: act_skbmod: Skip non-Ethernet packets
Peter Collingbourne (2):
selftest: use mmap instead of posix_memalign to allocate memory
userfaultfd: do not untag user pointers
Peter Hess (1):
spi: mediatek: fix fifo rx mode
Pierre-Louis Bossart (1):
ALSA: hda: intel-dsp-cfg: add missing ElkhartLake PCI ID
Randy Dunlap (1):
net: hisilicon: rename CACHE_LINE_MASK to avoid redefinition
Riccardo Mancini (13):
perf inject: Fix dso->nsinfo refcounting
perf map: Fix dso->nsinfo refcounting
perf probe: Fix dso->nsinfo refcounting
perf env: Fix sibling_dies memory leak
perf test session_topology: Delete session->evlist
perf test event_update: Fix memory leak of evlist
perf dso: Fix memory leak in dso__new_map()
perf test maps__merge_in: Fix memory leak of maps
perf env: Fix memory leak of cpu_pmu_caps
perf report: Free generated help strings for sort option
perf script: Fix memory 'threads' and 'cpus' leaks on exit
perf lzma: Close lzma stream on exit
perf inject: Close inject.output on exit
Robert Richter (2):
ACPI: Kconfig: Fix table override from built-in initrd
Documentation: Fix intiramfs script name
Roman Skakun (1):
dma-mapping: handle vmalloc addresses in dma_common_{mmap,get_sgtable}
Ronnie Sahlberg (2):
cifs: only write 64kb at a time when fallocating a small region of a
file
cifs: fix fallocate when trying to allocate a hole.
Sayanta Pattanayak (1):
r8169: Avoid duplicate sysfs entry creation error
Shahjada Abul Husain (1):
cxgb4: fix IRQ free race during driver unload
Somnath Kotur (1):
bnxt_en: Check abort error state in bnxt_half_open_nic()
Stephen Boyd (1):
mmc: core: Don't allocate IDA for OF aliases
Steven Rostedt (VMware) (3):
tracepoints: Update static_call before tp_funcs when adding a
tracepoint
tracing/histogram: Rename "cpu" to "common_cpu"
tracing: Synthetic event field_pos is an index not a boolean
Taehee Yoo (8):
bonding: fix suspicious RCU usage in bond_ipsec_add_sa()
bonding: fix null dereference in bond_ipsec_add_sa()
ixgbevf: use xso.real_dev instead of xso.dev in callback functions of
struct xfrmdev_ops
bonding: fix suspicious RCU usage in bond_ipsec_del_sa()
bonding: disallow setting nested bonding + ipsec offload
bonding: Add struct bond_ipesc to manage SA
bonding: fix suspicious RCU usage in bond_ipsec_offload_ok()
bonding: fix incorrect return value of bond_ipsec_offload_ok()
Takashi Iwai (4):
ALSA: usb-audio: Add missing proc text entry for BESPOKEN type
ALSA: sb: Fix potential ABBA deadlock in CSP driver
ALSA: hdmi: Expose all pins on MSI MS-7C94 board
ALSA: pcm: Fix mmap capability check
Tobias Klauser (1):
bpftool: Check malloc return value in mount_bpffs_for_pin
Tom Rix (1):
igc: change default return of igc_read_phy_reg()
Uwe Kleine-König (1):
pwm: sprd: Ensure configuring period and duty_cycle isn't wrongly
skipped
Vasily Gorbik (1):
s390/ftrace: fix ftrace_update_ftrace_func implementation
Vincent Palatin (1):
Revert "USB: quirks: ignore remote wake-up on Fibocom L850-GL LTE
modem"
Vinicius Costa Gomes (2):
igc: Fix use-after-free error during reset
igb: Fix use-after-free error during reset
Vladimir Oltean (1):
net: dsa: sja1105: make VID 4095 a bridge VLAN too
Wei Wang (1):
tcp: disable TFO blackhole logic by default
Xin Long (2):
sctp: trim optlen when it's a huge value in sctp_setsockopt
sctp: update active_key for asoc when old key is being replaced
Xuan Zhuo (2):
bpf, test: fix NULL pointer dereference on invalid
expected_attach_type
xdp, net: Fix use-after-free in bpf_xdp_link_release
Yajun Deng (2):
net: decnet: Fix sleeping inside in af_decnet
net: sched: cls_api: Fix the the wrong parameter
Yang Jihong (1):
perf sched: Fix record failure when CONFIG_SCHEDSTATS is not set
Yoshihiro Shimoda (1):
usb: renesas_usbhs: Fix superfluous irqs happen after usb_pkt_pop()
YueHaibing (1):
stmmac: platform: Fix signedness bug in stmmac_probe_config_dt()
Zhang Qilong (1):
usb: gadget: Fix Unbalanced pm_runtime_enable in tegra_xudc_probe
Zhihao Cheng (1):
nvme-pci: don't WARN_ON in nvme_reset_work if ctrl.state is not
RESETTING
Ziyang Xuan (1):
net: fix uninit-value in caif_seqpkt_sendmsg
Íñigo Huguet (1):
sfc: ensure correct number of XDP queues
Documentation/arm64/tagged-address-abi.rst | 26 ++-
.../early_userspace_support.rst | 8 +-
.../filesystems/ramfs-rootfs-initramfs.rst | 2 +-
Documentation/networking/ip-sysctl.rst | 2 +-
Documentation/trace/histogram.rst | 2 +-
arch/nds32/mm/mmap.c | 2 +-
arch/powerpc/kvm/book3s_hv.c | 2 +
arch/powerpc/kvm/book3s_hv_nested.c | 20 ++
arch/powerpc/kvm/book3s_rtas.c | 25 ++-
arch/powerpc/kvm/powerpc.c | 4 +-
arch/s390/boot/text_dma.S | 19 +-
arch/s390/include/asm/ftrace.h | 1 +
arch/s390/kernel/ftrace.c | 2 +
arch/s390/kernel/mcount.S | 4 +-
arch/s390/net/bpf_jit_comp.c | 2 +-
arch/x86/kvm/cpuid.c | 3 +-
drivers/acpi/Kconfig | 2 +-
drivers/base/core.c | 6 +-
drivers/block/rbd.c | 32 ++-
drivers/bus/mhi/core/main.c | 17 +-
drivers/firmware/efi/efi.c | 13 +-
drivers/firmware/efi/tpm.c | 8 +-
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 1 +
drivers/gpu/drm/drm_ioctl.c | 3 +
drivers/gpu/drm/i915/gvt/handlers.c | 15 ++
.../drm/panel/panel-raspberrypi-touchscreen.c | 1 -
drivers/media/pci/ngene/ngene-core.c | 2 +-
drivers/media/pci/ngene/ngene.h | 14 +-
drivers/misc/eeprom/at24.c | 17 +-
drivers/mmc/core/host.c | 20 +-
drivers/net/bonding/bond_main.c | 183 +++++++++++++++---
drivers/net/dsa/mv88e6xxx/chip.c | 10 +
drivers/net/dsa/mv88e6xxx/serdes.c | 6 +-
drivers/net/dsa/sja1105/sja1105_main.c | 6 +
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 34 +++-
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c | 9 +-
.../cavium/liquidio/cn23xx_pf_device.c | 2 +-
.../net/ethernet/chelsio/cxgb4/cxgb4_main.c | 18 +-
.../net/ethernet/chelsio/cxgb4/cxgb4_uld.c | 3 +
drivers/net/ethernet/google/gve/gve_main.c | 5 +-
drivers/net/ethernet/hisilicon/hip04_eth.c | 6 +-
drivers/net/ethernet/intel/e1000e/netdev.c | 1 +
drivers/net/ethernet/intel/fm10k/fm10k_pci.c | 1 +
drivers/net/ethernet/intel/iavf/iavf_main.c | 1 +
drivers/net/ethernet/intel/igb/igb_main.c | 15 +-
drivers/net/ethernet/intel/igc/igc.h | 2 +-
drivers/net/ethernet/intel/igc/igc_main.c | 3 +
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 4 +-
drivers/net/ethernet/intel/ixgbevf/ipsec.c | 20 +-
drivers/net/ethernet/realtek/r8169_main.c | 3 +-
drivers/net/ethernet/sfc/efx_channels.c | 13 +-
.../ethernet/stmicro/stmmac/stmmac_platform.c | 8 +-
drivers/net/usb/hso.c | 33 +++-
drivers/nvme/host/core.c | 5 +-
drivers/nvme/host/pci.c | 5 +-
drivers/pci/quirks.c | 4 +-
drivers/pwm/pwm-sprd.c | 11 +-
drivers/regulator/hi6421-regulator.c | 30 +--
drivers/scsi/scsi_transport_iscsi.c | 90 ++++-----
drivers/spi/spi-bcm2835.c | 12 +-
drivers/spi/spi-cadence.c | 14 +-
drivers/spi/spi-imx.c | 37 ++--
drivers/spi/spi-mt65xx.c | 16 +-
drivers/spi/spi-stm32.c | 9 +-
drivers/target/target_core_sbc.c | 35 ++--
drivers/usb/core/hub.c | 120 ++++++++----
drivers/usb/core/quirks.c | 4 -
drivers/usb/dwc2/gadget.c | 31 ++-
drivers/usb/gadget/udc/tegra-xudc.c | 1 +
drivers/usb/host/ehci-hcd.c | 18 +-
drivers/usb/host/max3421-hcd.c | 44 ++---
drivers/usb/host/xhci-hub.c | 3 +-
drivers/usb/host/xhci-pci-renesas.c | 16 +-
drivers/usb/host/xhci-pci.c | 7 +
drivers/usb/host/xhci-ring.c | 58 ++++--
drivers/usb/host/xhci.h | 3 +-
drivers/usb/renesas_usbhs/fifo.c | 7 +
drivers/usb/serial/cp210x.c | 5 +-
drivers/usb/serial/option.c | 3 +
drivers/usb/storage/unusual_uas.h | 7 +
drivers/usb/typec/stusb160x.c | 11 +-
fs/afs/cmservice.c | 25 +--
fs/btrfs/extent-tree.c | 3 +
fs/ceph/mds_client.c | 2 +-
fs/cifs/smb2ops.c | 49 +++--
fs/hugetlbfs/inode.c | 2 +-
fs/io_uring.c | 18 +-
fs/proc/base.c | 2 +-
fs/userfaultfd.c | 24 ++-
include/drm/drm_ioctl.h | 1 +
include/linux/memblock.h | 4 +-
include/linux/skbuff.h | 33 ++++
include/net/bonding.h | 9 +-
include/trace/events/afs.h | 67 ++++++-
kernel/bpf/verifier.c | 2 +
kernel/dma/ops_helpers.c | 12 +-
kernel/time/posix-cpu-timers.c | 10 +-
kernel/time/timer.c | 8 +-
kernel/trace/ring_buffer.c | 28 ++-
kernel/trace/trace.c | 4 +
kernel/trace/trace_events_hist.c | 22 ++-
kernel/trace/trace_synth.h | 2 +-
kernel/tracepoint.c | 2 +-
lib/Kconfig.debug | 1 +
mm/memblock.c | 3 +-
net/bpf/test_run.c | 3 +
net/caif/caif_socket.c | 3 +-
net/core/dev.c | 28 ++-
net/core/skbuff.c | 12 ++
net/core/skmsg.c | 16 +-
net/decnet/af_decnet.c | 27 ++-
net/ipv4/tcp_bpf.c | 2 +-
net/ipv4/tcp_fastopen.c | 28 ++-
net/ipv4/tcp_ipv4.c | 2 +-
net/ipv4/udp_bpf.c | 2 +-
net/ipv6/ip6_output.c | 4 +-
net/ipv6/route.c | 2 +-
net/mptcp/syncookies.c | 16 +-
net/netrom/nr_timer.c | 20 +-
net/sched/act_skbmod.c | 12 +-
net/sched/cls_api.c | 2 +-
net/sched/cls_tcindex.c | 5 +-
net/sctp/auth.c | 2 +
net/sctp/socket.c | 4 +
sound/core/pcm_native.c | 25 ++-
sound/hda/intel-dsp-config.c | 4 +
sound/isa/sb/sb16_csp.c | 4 +
sound/pci/hda/patch_hdmi.c | 1 +
sound/pci/hda/patch_realtek.c | 1 +
sound/soc/codecs/rt5631.c | 2 +
sound/soc/codecs/wm_adsp.c | 2 +-
sound/usb/mixer.c | 10 +-
sound/usb/quirks.c | 3 +
tools/bpf/bpftool/common.c | 5 +
tools/perf/builtin-inject.c | 13 +-
tools/perf/builtin-report.c | 33 ++--
tools/perf/builtin-sched.c | 33 +++-
tools/perf/builtin-script.c | 7 +
tools/perf/tests/event_update.c | 2 +-
tools/perf/tests/maps.c | 2 +
tools/perf/tests/topology.c | 1 +
tools/perf/util/dso.c | 4 +-
tools/perf/util/env.c | 2 +
tools/perf/util/lzma.c | 8 +-
tools/perf/util/map.c | 2 +
tools/perf/util/probe-event.c | 4 +-
tools/perf/util/sort.c | 2 +-
tools/perf/util/sort.h | 2 +-
tools/testing/selftests/net/icmp_redirect.sh | 5 +-
tools/testing/selftests/vm/userfaultfd.c | 6 +-
150 files changed, 1381 insertions(+), 592 deletions(-)
--
2.20.1
sched: load tracking optimization.
Yu Jiahua (3):
sched: Introcude config option SCHED_OPTIMIZE_LOAD_TRACKING
sched: Add switch for update_blocked_averages
sched: Add frequency control for load update in scheduler_tick
include/linux/sched/sysctl.h | 13 ++++
init/Kconfig | 8 +++
kernel/sched/fair.c | 126 +++++++++++++++++++++++++++++++++++
kernel/sysctl.c | 27 ++++++++
4 files changed, 174 insertions(+)
--
2.20.1
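
The series above gates load-tracking work behind a build-time option and runtime switches, and rate-limits how often the load update runs from scheduler_tick(). As a rough, self-contained sketch of the frequency-control idea only (the knob name, default period, and print statement below are placeholders, not the actual kernel code):

#include <stdio.h>

/*
 * Illustrative sketch only: rate-limit a periodic update so it runs
 * once every `update_period` ticks, mirroring the idea of adding
 * frequency control for load updates in scheduler_tick().  The knob
 * name and default value are placeholders, not the real sysctl.
 */
static unsigned int update_period = 4;

static void maybe_update_load(unsigned long tick)
{
	if (update_period && (tick % update_period) != 0)
		return;		/* skip most ticks when the knob is > 1 */
	printf("tick %lu: updating load averages\n", tick);
}

int main(void)
{
	for (unsigned long tick = 1; tick <= 12; tick++)
		maybe_update_load(tick);
	return 0;
}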
Backport LTS 5.10.53 from upstream.
Alexander Ovechkin (1):
net: send SYNACK packet with accepted fwmark
Alexandre Torgue (6):
ARM: dts: stm32: fix gpio-keys node on STM32 MCU boards
ARM: dts: stm32: fix RCC node name on stm32f429 MCU
ARM: dts: stm32: fix timer nodes on STM32 MCU to prevent warnings
ARM: dts: stm32: fix i2c node name on stm32f746 to prevent warnings
ARM: dts: stm32: move stmmac axi config in ethernet node on stm32mp15
ARM: dts: stm32: fix stpmic node for stm32mp1 boards
Andrew Jeffery (1):
ARM: dts: tacoma: Add phase corrections for eMMC
Benjamin Gaignard (1):
ARM: dts: rockchip: Fix IOMMU nodes properties on rk322x
Bixuan Cui (1):
rtc: mxc_v2: add missing MODULE_DEVICE_TABLE
Colin Ian King (1):
scsi: aic7xxx: Fix unintentional sign extension issue on left shift of
u8
Corentin Labbe (2):
ARM: dts: gemini: rename mdio to the right name
ARM: dts: gemini: add device_type on pci
Daniel Rosenberg (1):
f2fs: Show casefolding support only when supported
Dmitry Osipenko (4):
ARM: tegra: wm8903: Fix polarity of headphones-detection GPIO in
device-trees
ARM: tegra: nexus7: Correct 3v3 regulator GPIO of PM269 variant
memory: tegra: Fix compilation warnings on 64bit platforms
thermal/core/thermal_of: Stop zone device before unregistering it
Doug Berger (1):
net: bcmgenet: ensure EXT_ENERGY_DET_MASK is clear
Elaine Zhang (6):
ARM: dts: rockchip: Fix power-controller node names for rk3066a
ARM: dts: rockchip: Fix power-controller node names for rk3188
ARM: dts: rockchip: Fix power-controller node names for rk3288
arm64: dts: rockchip: Fix power-controller node names for px30
arm64: dts: rockchip: Fix power-controller node names for rk3328
arm64: dts: rockchip: Fix power-controller node names for rk3399
Eric Dumazet (3):
tcp: annotate data races around tp->mtu_info
ipv6: tcp: drop silly ICMPv6 packet too big messages
udp: annotate data races around unix_sk(sk)->gso_size
Etienne Carriere (1):
firmware: arm_scmi: Add SMCCC discovery dependency in Kconfig
Ezequiel Garcia (2):
ARM: dts: rockchip: Fix thermal sensor cells o rk322x
ARM: dts: rockchip: Fix the timer clocks order
Florian Fainelli (1):
net: bcmgenet: Ensure all TX/RX queues DMAs are disabled
Geert Uytterhoeven (1):
thermal/drivers/rcar_gen3_thermal: Do not shadow rcar_gen3_ths_tj_1
Greg Kroah-Hartman (2):
Revert "swap: fix do_swap_page() race with swapoff"
Revert "mm/shmem: fix shmem_swapin() race with swapoff"
Grygorii Strashko (4):
ARM: dts: am57xx-cl-som-am57x: fix ti,no-reset-on-init flag for gpios
ARM: dts: am437x-gp-evm: fix ti,no-reset-on-init flag for gpios
ARM: dts: am335x: fix ti,no-reset-on-init flag for gpios
arm64: dts: ti: k3-am654x/j721e/j7200-common-proc-board: Fix
MCU_RGMII1_TXC direction
Grzegorz Szymaszek (2):
ARM: dts: stm32: fix stm32mp157c-odyssey card detect pin
ARM: dts: stm32: fix the Odyssey SoM eMMC VQMMC supply
Gu Shengxian (1):
bpftool: Properly close va_list 'ap' by va_end() on error
Hangbin Liu (1):
net: ip_tunnel: fix mtu calculation for ETHER tunnel devices
Heiko Carstens (1):
s390: introduce proper type handling call_on_stack() macro
Ilya Leoshkevich (1):
s390/traps: do not test MONITOR CALL without CONFIG_BUG
Jason Ekstrand (1):
dma-buf/sync_file: Don't leak fences on merge failure
Javed Hasan (2):
scsi: libfc: Fix array index out of bound exception
scsi: qedf: Add check to synchronize abort and flush
Joel Stanley (1):
ARM: dts: aspeed: Fix AST2600 machines line names
Johan Jonker (4):
ARM: dts: rockchip: fix pinctrl sleep nodename for rk3036-kylin and
rk3288
arm64: dts: rockchip: fix pinctrl sleep nodename for rk3399.dtsi
arm64: dts: rockchip: fix regulator-gpio states array
ARM: dts: rockchip: fix supply properties in io-domains nodes
John Fastabend (1):
bpf: Track subprog poke descriptors correctly and fix use-after-free
Jonathan Neuschäfer (1):
ARM: imx: pm-imx5: Fix references to imx5_cpu_suspend_info
Kan Liang (1):
perf/x86/intel/uncore: Clean up error handling path of iio mapping
Konstantin Porotchkin (1):
arch/arm64/boot/dts/marvell: fix NAND partitioning scheme
Krzysztof Kozlowski (3):
thermal/drivers/imx_sc: Add missing of_node_put for loop iteration
thermal/drivers/sprd: Add missing of_node_put for loop iteration
rtc: max77686: Do not enforce (incorrect) interrupt trigger type
Linus Walleij (2):
ARM: dts: ux500: Fix orientation of accelerometer
drm/panel: nt35510: Do not fail if DSI read fails
Louis Peens (1):
net/sched: act_ct: remove and free nf_table callbacks
Lucas Stach (1):
arm64: dts: imx8mq: assign PCIe clocks
Marek Behún (4):
net: dsa: mv88e6xxx: enable .port_set_policy() on Topaz
net: dsa: mv88e6xxx: use correct .stats_set_histogram() on Topaz
net: dsa: mv88e6xxx: enable .rmu_disable() on Topaz
net: dsa: mv88e6xxx: enable devlink ATU hash param for Topaz
Marek Vasut (4):
ARM: dts: stm32: Remove extra size-cells on dhcom-pdk2
ARM: dts: stm32: Fix touchscreen node on dhcom-pdk2
ARM: dts: stm32: Drop unused linux,wakeup from touchscreen node on
DHCOM SoM
ARM: dts: stm32: Rename spi-flash/mx66l51235l@N to flash@N on DHCOM
SoM
Masahiro Yamada (2):
kbuild: sink stdout from cmd for silent build
kbuild: do not suppress Kconfig prompts for silent build
Matthias Maennich (1):
kbuild: mkcompile_h: consider timestamp if KBUILD_BUILD_TIMESTAMP is
set
Mian Yousaf Kaukab (1):
arm64: dts: ls208xa: remove bus-num from dspi node
Mike Rapoport (1):
mm/page_alloc: fix memory map initialization for descending nodes
Nguyen Dinh Phi (1):
tcp: fix tcp_init_transfer() to not reset icsk_ca_initialized
Odin Ugedal (1):
sched/fair: Fix CFS bandwidth hrtimer expiry type
Oleksij Rempel (1):
ARM: dts: imx6dl-riotboard: configure PHY clock and set proper EEE
value
Pali Rohár (2):
firmware: turris-mox-rwtm: add marvell,armada-3700-rwtm-firmware
compatible string
arm64: dts: marvell: armada-37xx: move firmware node to generic dtsi
file
Paolo Abeni (1):
tcp: consistently disable header prediction for mptcp
Paulo Alcantara (1):
cifs: prevent NULL deref in cifs_compose_mount_options()
Pavel Skripkin (4):
net: moxa: fix UAF in moxart_mac_probe
net: qcom/emac: fix UAF in emac_remove
net: ti: fix UAF in tlan_remove_one
net: fddi: fix UAF in fza_probe
Peter Xu (2):
mm/thp: simplify copying of huge zero page pmd when fork
mm/userfaultfd: fix uffd-wp special cases for fork()
Philipp Zabel (1):
reset: ti-syscon: fix to_ti_syscon_reset_data macro
Primoz Fiser (1):
ARM: dts: imx6: phyFLEX: Fix UART hardware flow control
Rafał Miłecki (5):
ARM: brcmstb: dts: fix NAND nodes names
ARM: Cygnus: dts: fix NAND nodes names
ARM: NSP: dts: fix NAND nodes names
ARM: dts: BCM63xx: Fix NAND nodes names
ARM: dts: Hurricane 2: Fix NAND nodes names
Ronak Doshi (1):
vmxnet3: fix cksum offload issues for tunnels with non-default udp
ports
Sanket Parmar (1):
usb: cdns3: Enable TDL_CHK only for OUT ep
Sebastian Reichel (2):
ARM: dts: ux500: Fix interrupt cells
ARM: dts: ux500: Rename gpio-controller node
Stefan Wahren (2):
ARM: dts: bcm283x: Fix up MMC node names
ARM: dts: bcm283x: Fix up GPIO LED node names
Sudeep Holla (2):
firmware: arm_scmi: Fix the build when CONFIG_MAILBOX is not selected
arm64: dts: juno: Update SCPI nodes as per the YAML schema
Sujit Kautkar (1):
arm64: dts: qcom: sc7180: Move rmtfs memory region
Suman Anna (1):
ARM: dts: OMAP2+: Replace underscores in sub-mailbox node names
Taehee Yoo (2):
net: netdevsim: use xso.real_dev instead of xso.dev in callback
functions of struct xfrmdev_ops
net: validate lwtstate->data before returning from skb_tunnel_info()
Talal Ahmad (1):
tcp: call sk_wmem_schedule before sk_mem_charge in zerocopy path
Thierry Reding (2):
soc/tegra: fuse: Fix Tegra234-only builds
firmware: tegra: bpmp: Fix Tegra234-only builds
Tony Lindgren (1):
ARM: OMAP2+: Block suspend for am3 and am4 if PM is not configured
Vadim Fedorenko (1):
net: ipv6: fix return value of ip6_skb_dst_mtu
Vasily Averin (1):
netfilter: ctnetlink: suspicious RCU usage in ctnetlink_dump_helpinfo
Vladimir Oltean (1):
net: dsa: properly check for the bridge_leave methods in
dsa_switch_bridge_leave()
Wei Li (1):
tools: bpf: Fix error in 'make -C tools/ bpf_install'
Wolfgang Bumiller (1):
net: bridge: sync fdb to new unicast-filtering ports
Yang Yingliang (1):
thermal/core: Correct function name thermal_zone_device_unregister()
wenxu (1):
net/sched: act_ct: fix err check for nf_conntrack_confirm
Makefile | 9 +-
arch/arm/boot/dts/am335x-baltos.dtsi | 4 +-
arch/arm/boot/dts/am335x-evmsk.dts | 2 +-
.../boot/dts/am335x-moxa-uc-2100-common.dtsi | 2 +-
.../boot/dts/am335x-moxa-uc-8100-common.dtsi | 2 +-
arch/arm/boot/dts/am33xx-l4.dtsi | 2 +-
arch/arm/boot/dts/am437x-gp-evm.dts | 5 +-
arch/arm/boot/dts/am437x-l4.dtsi | 2 +-
arch/arm/boot/dts/am57xx-cl-som-am57x.dts | 13 +--
arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts | 5 +-
arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts | 6 +-
arch/arm/boot/dts/bcm-cygnus.dtsi | 2 +-
arch/arm/boot/dts/bcm-hr2.dtsi | 2 +-
arch/arm/boot/dts/bcm-nsp.dtsi | 2 +-
arch/arm/boot/dts/bcm2711-rpi-4-b.dts | 4 +-
arch/arm/boot/dts/bcm2711.dtsi | 2 +-
arch/arm/boot/dts/bcm2835-rpi-a-plus.dts | 4 +-
arch/arm/boot/dts/bcm2835-rpi-a.dts | 2 +-
arch/arm/boot/dts/bcm2835-rpi-b-plus.dts | 4 +-
arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts | 2 +-
arch/arm/boot/dts/bcm2835-rpi-b.dts | 2 +-
arch/arm/boot/dts/bcm2835-rpi-cm1.dtsi | 2 +-
arch/arm/boot/dts/bcm2835-rpi-zero-w.dts | 2 +-
arch/arm/boot/dts/bcm2835-rpi-zero.dts | 2 +-
arch/arm/boot/dts/bcm2835-rpi.dtsi | 2 +-
arch/arm/boot/dts/bcm2836-rpi-2-b.dts | 4 +-
arch/arm/boot/dts/bcm2837-rpi-3-a-plus.dts | 4 +-
arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts | 4 +-
arch/arm/boot/dts/bcm2837-rpi-3-b.dts | 2 +-
arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi | 2 +-
arch/arm/boot/dts/bcm283x.dtsi | 2 +-
arch/arm/boot/dts/bcm63138.dtsi | 2 +-
arch/arm/boot/dts/bcm7445-bcm97445svmb.dts | 4 +-
arch/arm/boot/dts/bcm7445.dtsi | 2 +-
arch/arm/boot/dts/bcm911360_entphn.dts | 4 +-
arch/arm/boot/dts/bcm958300k.dts | 4 +-
arch/arm/boot/dts/bcm958305k.dts | 4 +-
arch/arm/boot/dts/bcm958522er.dts | 4 +-
arch/arm/boot/dts/bcm958525er.dts | 4 +-
arch/arm/boot/dts/bcm958525xmc.dts | 4 +-
arch/arm/boot/dts/bcm958622hr.dts | 4 +-
arch/arm/boot/dts/bcm958623hr.dts | 4 +-
arch/arm/boot/dts/bcm958625hr.dts | 4 +-
arch/arm/boot/dts/bcm958625k.dts | 4 +-
arch/arm/boot/dts/bcm963138dvt.dts | 4 +-
arch/arm/boot/dts/bcm988312hr.dts | 4 +-
arch/arm/boot/dts/dm816x.dtsi | 2 +-
arch/arm/boot/dts/dra7-ipu-dsp-common.dtsi | 6 +-
arch/arm/boot/dts/dra7-l4.dtsi | 4 +-
arch/arm/boot/dts/dra72x.dtsi | 6 +-
arch/arm/boot/dts/dra74-ipu-dsp-common.dtsi | 2 +-
arch/arm/boot/dts/dra74x.dtsi | 8 +-
arch/arm/boot/dts/gemini-dlink-dns-313.dts | 2 +-
arch/arm/boot/dts/gemini-nas4220b.dts | 2 +-
arch/arm/boot/dts/gemini-rut1xx.dts | 2 +-
arch/arm/boot/dts/gemini-wbd111.dts | 2 +-
arch/arm/boot/dts/gemini-wbd222.dts | 2 +-
arch/arm/boot/dts/gemini.dtsi | 1 +
arch/arm/boot/dts/imx6dl-riotboard.dts | 2 +
arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi | 5 +-
arch/arm/boot/dts/omap4-l4.dtsi | 4 +-
arch/arm/boot/dts/omap5-l4.dtsi | 4 +-
arch/arm/boot/dts/rk3036-kylin.dts | 2 +-
arch/arm/boot/dts/rk3066a.dtsi | 6 +-
arch/arm/boot/dts/rk3188.dtsi | 14 +--
arch/arm/boot/dts/rk322x.dtsi | 12 +-
arch/arm/boot/dts/rk3288-rock2-som.dtsi | 2 +-
arch/arm/boot/dts/rk3288-vyasa.dts | 4 +-
arch/arm/boot/dts/rk3288.dtsi | 14 +--
arch/arm/boot/dts/ste-ab8500.dtsi | 28 ++---
arch/arm/boot/dts/ste-ab8505.dtsi | 24 ++--
arch/arm/boot/dts/ste-href-ab8500.dtsi | 2 +-
arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi | 3 +
arch/arm/boot/dts/ste-href.dtsi | 2 +-
arch/arm/boot/dts/ste-snowball.dts | 2 +-
arch/arm/boot/dts/stm32429i-eval.dts | 8 +-
arch/arm/boot/dts/stm32746g-eval.dts | 6 +-
arch/arm/boot/dts/stm32f429-disco.dts | 6 +-
arch/arm/boot/dts/stm32f429.dtsi | 10 +-
arch/arm/boot/dts/stm32f469-disco.dts | 6 +-
arch/arm/boot/dts/stm32f746.dtsi | 12 +-
arch/arm/boot/dts/stm32f769-disco.dts | 6 +-
arch/arm/boot/dts/stm32h743.dtsi | 4 -
arch/arm/boot/dts/stm32mp151.dtsi | 12 +-
arch/arm/boot/dts/stm32mp157a-stinger96.dtsi | 7 +-
.../arm/boot/dts/stm32mp157c-odyssey-som.dtsi | 7 +-
arch/arm/boot/dts/stm32mp157c-odyssey.dts | 2 +-
arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi | 7 +-
arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi | 7 +-
arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi | 2 +-
arch/arm/boot/dts/stm32mp15xx-osd32.dtsi | 7 +-
.../boot/dts/tegra20-acer-a500-picasso.dts | 2 +-
arch/arm/boot/dts/tegra20-harmony.dts | 2 +-
arch/arm/boot/dts/tegra20-medcom-wide.dts | 2 +-
arch/arm/boot/dts/tegra20-plutux.dts | 2 +-
arch/arm/boot/dts/tegra20-seaboard.dts | 2 +-
arch/arm/boot/dts/tegra20-tec.dts | 2 +-
arch/arm/boot/dts/tegra20-ventana.dts | 2 +-
.../tegra30-asus-nexus7-grouper-ti-pmic.dtsi | 2 +-
arch/arm/boot/dts/tegra30-cardhu.dtsi | 2 +-
arch/arm/mach-imx/suspend-imx53.S | 4 +-
arch/arm/mach-omap2/pm33xx-core.c | 40 +++++++
arch/arm64/boot/dts/arm/juno-base.dtsi | 6 +-
.../arm64/boot/dts/freescale/fsl-ls208xa.dtsi | 1 -
arch/arm64/boot/dts/freescale/imx8mq.dtsi | 16 +++
.../dts/marvell/armada-3720-turris-mox.dts | 6 +-
arch/arm64/boot/dts/marvell/armada-37xx.dtsi | 8 ++
arch/arm64/boot/dts/marvell/cn9130-db.dts | 2 +-
arch/arm64/boot/dts/qcom/sc7180-idp.dts | 2 +-
arch/arm64/boot/dts/rockchip/px30.dtsi | 16 +--
.../arm64/boot/dts/rockchip/rk3308-roc-cc.dts | 4 +-
.../boot/dts/rockchip/rk3328-nanopi-r2s.dts | 4 +-
.../arm64/boot/dts/rockchip/rk3328-roc-cc.dts | 4 +-
arch/arm64/boot/dts/rockchip/rk3328.dtsi | 6 +-
.../boot/dts/rockchip/rk3399-gru-scarlet.dtsi | 2 +-
arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi | 4 +-
arch/arm64/boot/dts/rockchip/rk3399.dtsi | 42 +++----
.../arm64/boot/dts/ti/k3-am654-base-board.dts | 2 +-
.../dts/ti/k3-j7200-common-proc-board.dts | 2 +-
.../dts/ti/k3-j721e-common-proc-board.dts | 2 +-
arch/ia64/include/asm/pgtable.h | 5 +-
arch/ia64/mm/init.c | 6 +-
arch/s390/include/asm/stacktrace.h | 97 ++++++++++++++++
arch/s390/kernel/traps.c | 2 +
arch/x86/events/intel/uncore_snbep.c | 6 +-
arch/x86/net/bpf_jit_comp.c | 3 +
drivers/dma-buf/sync_file.c | 13 ++-
drivers/firmware/Kconfig | 2 +-
drivers/firmware/arm_scmi/common.h | 2 +-
drivers/firmware/arm_scmi/driver.c | 2 +
drivers/firmware/tegra/Makefile | 1 +
drivers/firmware/tegra/bpmp-private.h | 3 +-
drivers/firmware/tegra/bpmp.c | 3 +-
drivers/firmware/turris-mox-rwtm.c | 1 +
drivers/gpu/drm/panel/panel-novatek-nt35510.c | 4 +-
drivers/memory/tegra/tegra124-emc.c | 4 +-
drivers/memory/tegra/tegra30-emc.c | 4 +-
drivers/net/dsa/mv88e6xxx/chip.c | 12 +-
.../net/ethernet/broadcom/genet/bcmgenet.c | 23 ++--
.../ethernet/broadcom/genet/bcmgenet_wol.c | 6 -
drivers/net/ethernet/moxa/moxart_ether.c | 4 +-
drivers/net/ethernet/qualcomm/emac/emac.c | 3 +-
drivers/net/ethernet/ti/tlan.c | 3 +-
drivers/net/fddi/defza.c | 3 +-
drivers/net/netdevsim/ipsec.c | 8 +-
drivers/net/vmxnet3/vmxnet3_ethtool.c | 22 +++-
drivers/reset/reset-ti-syscon.c | 4 +-
drivers/rtc/rtc-max77686.c | 4 +-
drivers/rtc/rtc-mxc_v2.c | 1 +
drivers/scsi/aic7xxx/aic7xxx_core.c | 2 +-
drivers/scsi/libfc/fc_rport.c | 13 ++-
drivers/scsi/qedf/qedf_io.c | 22 +++-
drivers/soc/tegra/fuse/fuse-tegra30.c | 3 +-
drivers/thermal/imx_sc_thermal.c | 3 +
drivers/thermal/rcar_gen3_thermal.c | 5 +-
drivers/thermal/sprd_thermal.c | 15 ++-
drivers/thermal/thermal_core.c | 2 +-
drivers/thermal/thermal_of.c | 3 +
drivers/usb/cdns3/gadget.c | 8 +-
fs/cifs/cifs_dfs_ref.c | 3 +
fs/f2fs/sysfs.c | 4 +
include/linux/bpf.h | 1 +
include/linux/huge_mm.h | 2 +-
include/linux/swap.h | 9 --
include/linux/swapops.h | 2 +
include/net/dst_metadata.h | 4 +-
include/net/ip6_route.h | 2 +-
include/net/tcp.h | 4 +
kernel/bpf/core.c | 8 +-
kernel/bpf/verifier.c | 60 ++++------
kernel/sched/fair.c | 4 +-
mm/huge_memory.c | 36 +++---
mm/memory.c | 36 +++---
mm/page_alloc.c | 106 +++++++++++-------
mm/shmem.c | 14 +--
net/bridge/br_if.c | 17 ++-
net/dsa/switch.c | 4 +-
net/ipv4/ip_tunnel.c | 18 ++-
net/ipv4/tcp.c | 3 +
net/ipv4/tcp_input.c | 2 +-
net/ipv4/tcp_ipv4.c | 4 +-
net/ipv4/tcp_output.c | 1 +
net/ipv4/udp.c | 6 +-
net/ipv6/tcp_ipv6.c | 21 +++-
net/ipv6/udp.c | 2 +-
net/ipv6/xfrm6_output.c | 2 +-
net/netfilter/nf_conntrack_netlink.c | 3 +
net/sched/act_ct.c | 14 ++-
scripts/Kbuild.include | 7 +-
scripts/mkcompile_h | 14 ++-
tools/bpf/Makefile | 7 +-
tools/bpf/bpftool/jit_disasm.c | 6 +-
192 files changed, 815 insertions(+), 567 deletions(-)
--
2.20.1
Backport LTS 5.10.52 from upstream.
Alex Bee (2):
arm64: dts: rockchip: Re-add regulator-boot-on, regulator-always-on
for vdd_gpu on rk3399-roc-pc
arm64: dts: rockchip: Re-add regulator-always-on for vcc_sdio for
rk3399-roc-pc
Alexander Shishkin (1):
intel_th: Wait until port is in reset before programming it
Aneesh Kumar K.V (1):
powerpc/mm/book3s64: Fix possible build error
Arnd Bergmann (2):
partitions: msdos: fix one-byte get_unaligned()
mips: always link byteswap helpers into decompressor
Aswath Govindraju (2):
ARM: dts: am335x: align ti,pindir-d0-out-d1-in property with dt-shema
ARM: dts: am437x: align ti,pindir-d0-out-d1-in property with dt-shema
Athira Rajeev (1):
selftests/powerpc: Fix "no_handler" EBB selftest
Benjamin Herrenschmidt (1):
powerpc/boot: Fixup device-tree on little endian
Bixuan Cui (1):
power: reset: gpio-poweroff: add missing MODULE_DEVICE_TABLE
Chandrakanth Patil (2):
scsi: megaraid_sas: Fix resource leak in case of probe failure
scsi: megaraid_sas: Handle missing interrupts while re-enabling IRQs
Chang S. Bae (1):
x86/signal: Detect and prevent an alternate signal stack overflow
Chao Yu (4):
f2fs: atgc: fix to set default age threshold
f2fs: add MODULE_SOFTDEP to ensure crc32 is included in the initramfs
f2fs: compress: fix to disallow temp extension
f2fs: fix to avoid adding tab before doc section
Christian Brauner (1):
cgroup: verify that source is a string
Christoph Niedermaier (3):
ARM: dts: imx6q-dhcom: Fix ethernet reset time properties
ARM: dts: imx6q-dhcom: Fix ethernet plugin detection problems
ARM: dts: imx6q-dhcom: Add gpios pinctrl for i2c bus recovery
Christophe JAILLET (3):
tty: serial: 8250: serial_cs: Fix a memory leak in error handling path
remoteproc: k3-r5: Fix an error message
scsi: be2iscsi: Fix an error handling path in beiscsi_dev_probe()
Chuck Lever (1):
NFSD: Fix TP_printk() format specifier in nfsd_clid_class
Chunfeng Yun (1):
usb: common: usb-conn-gpio: fix NULL pointer dereference of charger
Chunyan Zhang (1):
thermal/drivers/sprd: Add missing MODULE_DEVICE_TABLE
Corentin Labbe (1):
ARM: dts: gemini-rut1xx: remove duplicate ethernet node
Cristian Marussi (1):
firmware: arm_scmi: Reset Rx buffer to max size during async commands
Dan Carpenter (2):
rtc: fix snprintf() checking in is_rtc_hctosys()
scsi: scsi_dh_alua: Fix signedness bug in alua_rtpg()
Daniel Mack (1):
serial: tty: uartlite: fix console setup
Dimitri John Ledkov (1):
lib/decompress_unlz4.c: correctly handle zero-padding around initrds.
Dmitry Torokhov (1):
i2c: core: Disable client irq on reboot/shutdown
Eli Cohen (3):
vdpa/mlx5: Fix umem sizes assignments on VQ create
vdpa/mlx5: Fix possible failure in umem size calculation
vdpa/mlx5: Clear vq ready indication upon device reset
Fabio Aiuto (1):
staging: rtl8723bs: fix macro value for 2.4Ghz only device
Fabrice Fontaine (1):
s390: disable SSP when needed
Frederic Weisbecker (1):
srcu: Fix broken node geometry after early ssp init
Gao Xiang (1):
nfs: fix acl memory leak of posix_acl_create()
Geert Uytterhoeven (6):
reset: RESET_BRCMSTB_RESCAL should depend on ARCH_BRCMSTB
reset: RESET_INTEL_GW should depend on X86
ARM: dts: r8a7779, marzen: Fix DU clock names
arm64: dts: renesas: Add missing opp-suspend properties
arm64: dts: renesas: r8a7796[01]: Fix OPP table entry voltages
arm64: dts: renesas: r8a779a0: Drop power-domains property from GIC
node
Geoff Levand (1):
powerpc/ps3: Add dma_mask to ps3_dma_region
Geoffrey D. Bennett (4):
ALSA: usb-audio: scarlett2: Fix 18i8 Gen 2 PCM Input count
ALSA: usb-audio: scarlett2: Fix data_mutex lock
ALSA: usb-audio: scarlett2: Fix scarlett2_*_ctl_put() return values
ALSA: usb-audio: scarlett2: Fix 6i6 Gen 2 line out descriptions
Gowtham Tammana (1):
ARM: dts: dra7: Fix duplicate USB4 target module node
Greg Kroah-Hartman (1):
Revert "drm/ast: Remove reference to struct drm_device.pdev"
Hannes Reinecke (2):
scsi: core: Fixup calling convention for scsi_mode_sense()
scsi: scsi_dh_alua: Check for negative result value
Hans de Goede (1):
ACPI: video: Add quirk for the Dell Vostro 3350
Heiko Carstens (4):
s390/processor: always inline stap() and __load_psw_mask()
s390/ipl_parm: fix program check new psw handling
s390/mem_detect: fix diag260() program check new psw handling
s390/mem_detect: fix tprot() program check new psw handling
Icenowy Zheng (1):
arm64: dts: allwinner: a64-sopine-baseboard: change RGMII mode to TXID
James Smart (2):
scsi: lpfc: Fix "Unexpected timeout" error in direct attach topology
scsi: lpfc: Fix crash when lpfc_sli4_hba_setup() fails to initialize
the SGLs
Jan Kiszka (1):
watchdog: iTCO_wdt: Account for rebooting on second timeout
Jaroslav Kysela (1):
ASoC: soc-pcm: fix the return value in dpcm_apply_symmetry()
Javier Martinez Canillas (1):
PCI: rockchip: Register IRQ handlers after device and data are ready
Jeff Layton (1):
ceph: remove bogus checks and WARN_ONs from ceph_set_page_dirty
Jiajun Cao (1):
ALSA: hda: Add IRQ check for platform_get_irq()
Jiapeng Chong (1):
fs/jfs: Fix missing error code in lmLogInit()
Jing Xiangfeng (1):
drm/gma500: Add the missed drm_gem_object_put() in
psb_user_framebuffer_create()
John Garry (1):
scsi: core: Cap scsi_host cmd_per_lun at can_queue
Jon Hunter (1):
PCI: tegra194: Fix tegra_pcie_ep_raise_msi_irq() ill-defined shift
Jonathan Cameron (2):
iio: gyro: fxa21002c: Balance runtime pm + use
pm_runtime_resume_and_get().
iio: magn: bmc150: Balance runtime pm + use
pm_runtime_resume_and_get()
José Roberto de Souza (1):
drm/dp_mst: Add missing drm parameters to recently added call to
drm_dbg_kms()
Kashyap Desai (1):
scsi: megaraid_sas: Early detection of VD deletion through RaidMap
update
Kefeng Wang (1):
KVM: mmio: Fix use-after-free Read in
kvm_vm_ioctl_unregister_coalesced_mmio
Kishon Vijay Abraham I (1):
arm64: dts: ti: k3-j721e-main: Fix external refclk input to SERDES
Koby Elbaz (2):
habanalabs/gaudi: set the correct cpu_id on MME2_QM failure
habanalabs: remove node from list before freeing the node
Krzysztof Kozlowski (10):
power: supply: max17042: Do not enforce (incorrect) interrupt trigger
type
reset: a10sr: add missing of_match_table reference
ARM: exynos: add missing of_node_put for loop iteration
ARM: dts: exynos: fix PWM LED max brightness on Odroid XU/XU3
ARM: dts: exynos: fix PWM LED max brightness on Odroid HC1
ARM: dts: exynos: fix PWM LED max brightness on Odroid XU4
memory: stm32-fmc2-ebi: add missing of_node_put for loop iteration
memory: atmel-ebi: add missing of_node_put for loop iteration
memory: fsl_ifc: fix leak of IO mapping on probe failure
memory: fsl_ifc: fix leak of private memory on probe failure
Krzysztof Wilczyński (1):
PCI/sysfs: Fix dsm_label_utf16s_to_utf8s() buffer overrun
Lai Jiangshan (1):
KVM: X86: Disable hardware breakpoints unconditionally before
kvm_x86->run()
Liguang Zhang (1):
ACPI: AMBA: Fix resource name in /proc/iomem
Linus Torvalds (1):
certs: add 'x509_revocation_list' to gitignore
Linus Walleij (1):
power: supply: ab8500: Avoid NULL pointers
Logan Gunthorpe (1):
PCI/P2PDMA: Avoid pci_get_slot(), which may sleep
Long Li (1):
PCI: hv: Fix a race condition when removing the device
Luiz Sampaio (1):
w1: ds2438: fixing bug that would always get page0
Lukas Wunner (1):
PCI: pciehp: Ignore Link Down/Up caused by DPC
Lv Yunlong (1):
misc/libmasm/module: Fix two use after free in ibmasm_init_one
Marco Elver (1):
kcov: add __no_sanitize_coverage to fix noinstr for all architectures
Marek Behún (2):
firmware: turris-mox-rwtm: fix reply status decoding function
firmware: turris-mox-rwtm: report failures better
Marek Vasut (2):
ARM: dts: stm32: Connect PHY IRQ line on DH STM32MP1 SoM
ARM: dts: stm32: Rework LAN8710Ai PHY reset on DHCOM SoM
Martin Blumenstingl (1):
PCI: intel-gw: Fix INTx enable
Martin Fäcknitz (1):
MIPS: vdso: Invalid GIC access through VDSO
Matthew Auld (1):
drm/i915/gtt: drop the page table optimisation
Maurizio Lombardi (1):
nvme-tcp: can't set sk_user_data without write_lock
Michael Kelley (1):
scsi: storvsc: Correctly handle multiple flags in srb_status
Michael S. Tsirkin (1):
virtio_net: move tx vq operation under tx queue lock
Michael Walle (1):
serial: fsl_lpuart: disable DMA for console and fix sysrq
Mike Christie (7):
scsi: iscsi: Add iscsi_cls_conn refcount helpers
scsi: iscsi: Fix conn use after free during resets
scsi: iscsi: Fix shost->max_id use
scsi: qedi: Fix null ref during abort handling
scsi: qedi: Fix race during abort timeouts
scsi: qedi: Fix TMF session block/unblock use
scsi: qedi: Fix cleanup session block/unblock use
Mike Marshall (1):
orangefs: fix orangefs df output.
Nathan Chancellor (2):
hexagon: handle {,SOFT}IRQENTRY_TEXT in linker script
hexagon: use common DISCARDS macro
NeilBrown (1):
SUNRPC: prevent port reuse on transports which don't request it.
Nick Desaulniers (1):
ARM: 9087/1: kprobes: test-thumb: fix for LLVM_IAS=1
Nicolas Ferre (1):
dt-bindings: i2c: at91: fix example for scl-gpios
Niklas Söderlund (1):
thermal/drivers/rcar_gen3_thermal: Fix coefficient calculations
Nikolay Aleksandrov (2):
net: bridge: multicast: fix PIM hello router port marking race
net: bridge: multicast: fix MRD advertisement router port marking race
Pali Rohár (2):
firmware: turris-mox-rwtm: fail probing when firmware does not support
hwrng
firmware: turris-mox-rwtm: show message about HWRNG registration
Paul Cercueil (2):
drm/ingenic: Fix non-OSD mode
drm/ingenic: Switch IPU plane to type OVERLAY
Paul E. McKenney (1):
rcu: Reject RCU_LOCKDEP_WARN() false positives
Paulo Alcantara (1):
cifs: handle reconnect of tcon when there is no cached dfs referral
Peter Robinson (1):
gpio: pca953x: Add support for the On Semi pca9655
Peter Zijlstra (2):
jump_label: Fix jump_label_text_reserved() vs __init
static_call: Fix static_call_text_reserved() vs __init
Philip Yang (1):
drm/amdkfd: fix sysfs kobj leak
Philipp Zabel (1):
reset: bail if try_module_get() fails
Pierre-Louis Bossart (2):
ASoC: Intel: sof_sdw: add mutual exclusion between PCH DMIC and RT715
ASoC: Intel: kbl_da7219_max98357a: shrink platform_id below 20
characters
Po-Hsu Lin (1):
selftests: timers: rtcpie: skip test if default RTC device does not
exist
Rafał Miłecki (1):
ARM: dts: BCM5301X: Fixup SPI binding
Randy Dunlap (2):
PCI: ftpci100: Rename macro name collision
mips: disable branch profiling in boot/decompress.o
Rashmi A (1):
phy: intel: Fix for warnings due to EMMC clock 175Mhz change in FIP
Robin Gong (1):
dmaengine: fsl-qdma: check dma_set_mask return value
Roger Quadros (1):
arm64: dts: ti: j7200-main: Enable USB2 PHY RX sensitivity workaround
Ruslan Bilovol (1):
usb: gadget: f_hid: fix endianness issue with descriptors
Salvatore Bonaccorso (1):
ARM: dts: sun8i: h3: orangepi-plus: Fix ethernet phy-mode
Sandor Bodo-Merle (2):
PCI: iproc: Fix multi-MSI base vector number allocation
PCI: iproc: Support multi-MSI only on uniprocessor kernel
Sascha Hauer (1):
ubifs: Fix off-by-one error
Sean Christopherson (2):
KVM: x86: Use guest MAXPHYADDR from CPUID.0x8000_0008 iff TDP is
enabled
KVM: x86/mmu: Do not apply HPA (memory encryption) mask to GPAs
Sherry Sun (1):
tty: serial: fsl_lpuart: fix the potential risk of division or modulo
by zero
Siddharth Gupta (1):
remoteproc: core: Fix cdev remove and rproc del
Srinivas Neeli (2):
gpio: zynq: Check return value of pm_runtime_get_sync
gpio: zynq: Check return value of irq_get_irq_data
Stefan Eichenberger (1):
watchdog: imx_sc_wdt: fix pretimeout
Steffen Maier (1):
scsi: zfcp: Report port fc_security as unknown early during remote
cable pull
Stephan Gerhold (1):
power: supply: rt5033_battery: Fix device tree enumeration
Stephen Boyd (1):
arm64: dts: qcom: trogdor: Add no-hpd to DSI bridge node
Steven Rostedt (VMware) (1):
tracing: Do not reference char * as a string in histograms
Suganath Prabu S (1):
scsi: mpt3sas: Fix deadlock while cancelling the running firmware
event
Takashi Iwai (3):
ALSA: usx2y: Avoid camelCase
ALSA: usx2y: Don't call free_pages_exact() with NULL address
ALSA: sb: Fix potential double-free of CSP mixer elements
Takashi Sakamoto (3):
Revert "ALSA: bebob/oxfw: fix Kconfig entry for Mackie d.2 Pro"
ALSA: bebob: add support for ToneWeal FW66
ALSA: firewire-motu: fix detection for S/PDIF source on optical
interface in v2 protocol
Tao Ren (1):
watchdog: aspeed: fix hardware timeout calculation
Thomas Gleixner (3):
x86/fpu: Return proper error codes from user access functions
x86/fpu: Fix copy_xstate_to_kernel() gap handling
x86/fpu: Limit xstate copy size in xstateregs_set()
Tong Zhang (2):
misc: alcor_pci: fix null-ptr-deref when there is no PCI bridge
misc: alcor_pci: fix inverted branch condition
Tony Lindgren (1):
mfd: cpcap: Fix cpcap dmamask not set warnings
Trond Myklebust (8):
NFSv4: Fix delegation return in cases where we have to retry
NFS: nfs_find_open_context() may only select open files
NFSv4: Initialise connection to the server in nfs4_alloc_client()
NFSv4: Fix an Oops in pnfs_mark_request_commit() when doing O_DIRECT
nfsd: Reduce contention for the nfsd_file nf_rwsem
NFSv4/pnfs: Fix the layout barrier update
NFSv4/pnfs: Fix layoutget behaviour after invalidation
NFSv4/pNFS: Don't call _nfs4_pnfs_v3_ds_connect multiple times
Tyrel Datwyler (1):
scsi: core: Fix bad pointer dereference when ehandler kthread is
invalid
Uwe Kleine-König (4):
backlight: lm3630a: Fix return code of .update_status() callback
pwm: spear: Don't modify HW state in .remove callback
pwm: tegra: Don't modify HW state in .remove callback
pwm: imx1: Don't disable clocks at device remove time
Valentin Vidic (1):
s390/sclp_vt220: fix console name to match device
Valentine Barshak (1):
arm64: dts: renesas: v3msk: Fix memory size
Ville Syrjälä (1):
drm/i915/gt: Fix -EDEADLK handling regression
Vitaly Kuznetsov (1):
KVM: nSVM: Check the value written to MSR_VM_HSAVE_PA
Wayne Lin (2):
drm/dp_mst: Do not set proposed vcpi directly
drm/dp_mst: Avoid to mess up payload table by ports in stale topology
Wei Yongjun (1):
watchdog: jz4740: Fix return value check in jz4740_wdt_probe()
Xie Yongji (3):
virtio-blk: Fix memory leak among suspend/resume procedure
virtio_net: Fix error handling in virtnet_restore()
virtio_console: Assure used length from device is limited
Xiyu Yang (2):
iommu/arm-smmu: Fix arm_smmu_device refcount leak when
arm_smmu_rpm_get fails
iommu/arm-smmu: Fix arm_smmu_device refcount leak in address
translation
Xuewen Yan (1):
sched/uclamp: Ignore max aggregation if rq is idle
Yang Yingliang (3):
leds: tlc591xx: fix return value check in tlc591xx_probe()
ALSA: ppc: fix error return code in snd_pmac_probe()
usb: gadget: hid: fix error return code in hid_bind()
Yizhuo Zhai (1):
Input: hideep - fix the uninitialized use in hideep_nvm_unlock()
Yufen Yu (2):
ALSA: ac97: fix PM reference leak in ac97_bus_remove()
ASoC: img: Fix PM reference leak in img_i2s_in_probe()
Zhen Lei (8):
fbmem: Do not delete the mode that is still in use
ASoC: soc-core: Fix the error return code in
snd_soc_of_parse_audio_routing()
um: fix error return code in slip_open()
um: fix error return code in winch_tramp()
ubifs: journal: Fix error return code in ubifs_jnl_write_inode()
ALSA: isa: Fix error return code in snd_cmi8330_probe()
memory: pl353: Fix error return code in pl353_smc_probe()
firmware: tegra: Fix error return code in tegra210_bpmp_init()
Zhihao Cheng (1):
ubifs: Set/Clear I_LINKABLE under i_lock for whiteout inode
Zou Wei (14):
ASoC: intel/boards: add missing MODULE_DEVICE_TABLE
mfd: da9052/stmpe: Add and modify MODULE_DEVICE_TABLE
fsi: Add missing MODULE_DEVICE_TABLE
leds: turris-omnia: add missing MODULE_DEVICE_TABLE
power: supply: sc27xx: Add missing MODULE_DEVICE_TABLE
power: supply: sc2731_charger: Add missing MODULE_DEVICE_TABLE
watchdog: Fix possible use-after-free in wdt_startup()
watchdog: sc520_wdt: Fix possible use-after-free in wdt_turnoff()
watchdog: Fix possible use-after-free by calling del_timer_sync()
PCI: tegra: Add missing MODULE_DEVICE_TABLE
power: supply: charger-manager: add missing MODULE_DEVICE_TABLE
power: supply: ab8500: add missing MODULE_DEVICE_TABLE
pwm: img: Fix PM reference leak in img_pwm_enable()
reset: brcmstb: Add missing MODULE_DEVICE_TABLE
ching Huang (2):
scsi: arcmsr: Fix the wrong CDB payload report to IOP
scsi: arcmsr: Fix doorbell status being updated late on ARC-1886
.../devicetree/bindings/i2c/i2c-at91.txt | 2 +-
Documentation/filesystems/f2fs.rst | 16 +-
arch/arm/boot/dts/am335x-cm-t335.dts | 2 +-
arch/arm/boot/dts/am43x-epos-evm.dts | 4 +-
arch/arm/boot/dts/am5718.dtsi | 6 +-
arch/arm/boot/dts/bcm5301x.dtsi | 18 +-
arch/arm/boot/dts/dra7-l4.dtsi | 22 -
arch/arm/boot/dts/dra71x.dtsi | 4 -
arch/arm/boot/dts/dra72x.dtsi | 4 -
arch/arm/boot/dts/dra74x.dtsi | 92 ++--
arch/arm/boot/dts/exynos5422-odroidhc1.dts | 2 +-
arch/arm/boot/dts/exynos5422-odroidxu4.dts | 2 +-
.../boot/dts/exynos54xx-odroidxu-leds.dtsi | 4 +-
arch/arm/boot/dts/gemini-rut1xx.dts | 12 -
arch/arm/boot/dts/imx6q-dhcom-som.dtsi | 41 +-
arch/arm/boot/dts/r8a7779-marzen.dts | 2 +-
arch/arm/boot/dts/r8a7779.dtsi | 1 +
arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi | 10 +-
arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts | 2 +-
arch/arm/mach-exynos/exynos.c | 2 +
arch/arm/probes/kprobes/test-thumb.c | 10 +-
.../allwinner/sun50i-a64-sopine-baseboard.dts | 2 +-
arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi | 2 +
arch/arm64/boot/dts/renesas/r8a774a1.dtsi | 1 +
arch/arm64/boot/dts/renesas/r8a77960.dtsi | 7 +-
arch/arm64/boot/dts/renesas/r8a77961.dtsi | 7 +-
.../arm64/boot/dts/renesas/r8a77970-v3msk.dts | 2 +-
arch/arm64/boot/dts/renesas/r8a779a0.dtsi | 1 -
.../boot/dts/rockchip/rk3399-roc-pc.dtsi | 3 +
arch/arm64/boot/dts/ti/k3-j7200-main.dtsi | 1 +
.../dts/ti/k3-j721e-common-proc-board.dts | 4 +
arch/arm64/boot/dts/ti/k3-j721e-main.dtsi | 58 +--
arch/hexagon/kernel/vmlinux.lds.S | 9 +-
arch/mips/boot/compressed/Makefile | 4 +-
arch/mips/boot/compressed/decompress.c | 2 +
arch/mips/include/asm/vdso/vdso.h | 2 +-
arch/powerpc/boot/devtree.c | 59 ++-
arch/powerpc/boot/ns16550.c | 9 +-
arch/powerpc/include/asm/ps3.h | 2 +
arch/powerpc/mm/book3s64/radix_tlb.c | 26 +-
arch/powerpc/platforms/ps3/mm.c | 12 +
arch/s390/Makefile | 1 +
arch/s390/boot/ipl_parm.c | 19 +-
arch/s390/boot/mem_detect.c | 47 +-
arch/s390/include/asm/processor.h | 4 +-
arch/s390/kernel/setup.c | 2 +-
arch/s390/purgatory/Makefile | 1 +
arch/um/drivers/chan_user.c | 3 +-
arch/um/drivers/slip_user.c | 3 +-
arch/x86/include/asm/fpu/internal.h | 19 +-
arch/x86/kernel/fpu/regset.c | 2 +-
arch/x86/kernel/fpu/xstate.c | 105 ++--
arch/x86/kernel/signal.c | 24 +-
arch/x86/kvm/cpuid.c | 8 +-
arch/x86/kvm/mmu/mmu.c | 2 +
arch/x86/kvm/mmu/paging.h | 14 +
arch/x86/kvm/mmu/paging_tmpl.h | 4 +-
arch/x86/kvm/mmu/spte.h | 6 -
arch/x86/kvm/svm/svm.c | 11 +-
arch/x86/kvm/x86.c | 2 +
block/partitions/ldm.c | 2 +-
block/partitions/ldm.h | 3 -
block/partitions/msdos.c | 24 +-
certs/.gitignore | 1 +
drivers/acpi/acpi_amba.c | 1 +
drivers/acpi/acpi_video.c | 9 +
drivers/block/virtio_blk.c | 2 +
drivers/char/virtio_console.c | 4 +-
drivers/dma/fsl-qdma.c | 6 +-
drivers/firmware/arm_scmi/driver.c | 4 +
drivers/firmware/tegra/bpmp-tegra210.c | 2 +-
drivers/firmware/turris-mox-rwtm.c | 55 ++-
drivers/fsi/fsi-master-aspeed.c | 1 +
drivers/fsi/fsi-master-ast-cf.c | 1 +
drivers/fsi/fsi-master-gpio.c | 1 +
drivers/fsi/fsi-occ.c | 1 +
drivers/gpio/gpio-pca953x.c | 1 +
drivers/gpio/gpio-zynq.c | 15 +-
drivers/gpu/drm/amd/amdkfd/kfd_process.c | 14 +-
.../amd/amdkfd/kfd_process_queue_manager.c | 1 +
drivers/gpu/drm/ast/ast_main.c | 5 +-
drivers/gpu/drm/drm_dp_mst_topology.c | 68 ++-
drivers/gpu/drm/gma500/framebuffer.c | 7 +-
drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 5 +-
drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c | 2 +-
drivers/gpu/drm/ingenic/ingenic-drm-drv.c | 20 +-
drivers/gpu/drm/ingenic/ingenic-ipu.c | 2 +-
drivers/hwtracing/intel_th/core.c | 17 +
drivers/hwtracing/intel_th/gth.c | 16 +
drivers/hwtracing/intel_th/intel_th.h | 3 +
drivers/i2c/i2c-core-base.c | 3 +
drivers/iio/gyro/fxas21002c_core.c | 11 +-
drivers/iio/magnetometer/bmc150_magn.c | 10 +-
drivers/input/touchscreen/hideep.c | 13 +-
drivers/iommu/arm/arm-smmu/arm-smmu.c | 10 +-
drivers/leds/leds-tlc591xx.c | 8 +-
drivers/leds/leds-turris-omnia.c | 1 +
drivers/memory/atmel-ebi.c | 4 +-
drivers/memory/fsl_ifc.c | 8 +-
drivers/memory/pl353-smc.c | 1 +
drivers/memory/stm32-fmc2-ebi.c | 4 +
drivers/mfd/da9052-i2c.c | 1 +
drivers/mfd/motorola-cpcap.c | 4 +
drivers/mfd/stmpe-i2c.c | 2 +-
drivers/misc/cardreader/alcor_pci.c | 8 +-
drivers/misc/habanalabs/gaudi/gaudi.c | 3 +-
drivers/misc/habanalabs/goya/goya.c | 1 +
drivers/misc/ibmasm/module.c | 5 +-
drivers/net/virtio_net.c | 27 +-
drivers/nvme/target/tcp.c | 1 -
drivers/pci/controller/dwc/pcie-intel-gw.c | 10 +-
drivers/pci/controller/dwc/pcie-tegra194.c | 2 +-
drivers/pci/controller/pci-ftpci100.c | 30 +-
drivers/pci/controller/pci-hyperv.c | 30 +-
drivers/pci/controller/pci-tegra.c | 1 +
drivers/pci/controller/pcie-iproc-msi.c | 29 +-
drivers/pci/controller/pcie-rockchip-host.c | 12 +-
drivers/pci/hotplug/pciehp_hpc.c | 36 ++
drivers/pci/p2pdma.c | 34 +-
drivers/pci/pci-label.c | 2 +-
drivers/pci/pci.h | 4 +
drivers/pci/pcie/dpc.c | 74 ++-
drivers/phy/intel/phy-intel-keembay-emmc.c | 3 +-
drivers/power/reset/gpio-poweroff.c | 1 +
drivers/power/supply/Kconfig | 3 +-
drivers/power/supply/ab8500_btemp.c | 1 +
drivers/power/supply/ab8500_charger.c | 19 +-
drivers/power/supply/ab8500_fg.c | 1 +
drivers/power/supply/charger-manager.c | 1 +
drivers/power/supply/max17042_battery.c | 2 +-
drivers/power/supply/rt5033_battery.c | 7 +
drivers/power/supply/sc2731_charger.c | 1 +
drivers/power/supply/sc27xx_fuel_gauge.c | 1 +
drivers/pwm/pwm-img.c | 2 +-
drivers/pwm/pwm-imx1.c | 2 -
drivers/pwm/pwm-spear.c | 4 -
drivers/pwm/pwm-tegra.c | 13 -
drivers/remoteproc/remoteproc_cdev.c | 2 +-
drivers/remoteproc/remoteproc_core.c | 2 +-
drivers/remoteproc/ti_k3_r5_remoteproc.c | 2 +-
drivers/reset/Kconfig | 4 +-
drivers/reset/core.c | 5 +-
drivers/reset/reset-a10sr.c | 1 +
drivers/reset/reset-brcmstb.c | 1 +
drivers/rtc/proc.c | 4 +-
drivers/s390/char/sclp_vt220.c | 4 +-
drivers/s390/scsi/zfcp_sysfs.c | 1 +
drivers/scsi/arcmsr/arcmsr_hba.c | 19 +-
drivers/scsi/be2iscsi/be_main.c | 5 +-
drivers/scsi/bnx2i/bnx2i_iscsi.c | 2 +-
drivers/scsi/cxgbi/libcxgbi.c | 4 +-
drivers/scsi/device_handler/scsi_dh_alua.c | 11 +-
drivers/scsi/hosts.c | 4 +
drivers/scsi/libiscsi.c | 122 +++--
drivers/scsi/lpfc/lpfc_els.c | 9 +
drivers/scsi/lpfc/lpfc_sli.c | 5 +-
drivers/scsi/megaraid/megaraid_sas.h | 12 +
drivers/scsi/megaraid/megaraid_sas_base.c | 96 +++-
drivers/scsi/megaraid/megaraid_sas_fp.c | 6 +-
drivers/scsi/megaraid/megaraid_sas_fusion.c | 10 +-
drivers/scsi/mpt3sas/mpt3sas_scsih.c | 22 +
drivers/scsi/qedi/qedi.h | 1 +
drivers/scsi/qedi/qedi_fw.c | 24 +-
drivers/scsi/qedi/qedi_iscsi.c | 37 +-
drivers/scsi/qedi/qedi_main.c | 2 +-
drivers/scsi/scsi_lib.c | 10 +-
drivers/scsi/scsi_transport_iscsi.c | 12 +
drivers/scsi/scsi_transport_sas.c | 9 +-
drivers/scsi/sd.c | 12 +-
drivers/scsi/sr.c | 2 +-
drivers/scsi/storvsc_drv.c | 61 +--
drivers/staging/rtl8723bs/hal/odm.h | 5 +-
drivers/thermal/rcar_gen3_thermal.c | 2 +-
drivers/thermal/sprd_thermal.c | 1 +
drivers/tty/serial/8250/serial_cs.c | 11 +-
drivers/tty/serial/fsl_lpuart.c | 9 +
drivers/tty/serial/uartlite.c | 27 +-
drivers/usb/common/usb-conn-gpio.c | 44 +-
drivers/usb/gadget/function/f_hid.c | 2 +-
drivers/usb/gadget/legacy/hid.c | 4 +-
drivers/vdpa/mlx5/net/mlx5_vnet.c | 28 +-
drivers/video/backlight/lm3630a_bl.c | 12 +-
drivers/video/fbdev/core/fbmem.c | 12 +-
drivers/w1/slaves/w1_ds2438.c | 4 +-
drivers/watchdog/aspeed_wdt.c | 2 +-
drivers/watchdog/iTCO_wdt.c | 12 +-
drivers/watchdog/imx_sc_wdt.c | 11 +-
drivers/watchdog/jz4740_wdt.c | 4 +-
drivers/watchdog/lpc18xx_wdt.c | 2 +-
drivers/watchdog/sbc60xxwdt.c | 2 +-
drivers/watchdog/sc520_wdt.c | 2 +-
drivers/watchdog/w83877f_wdt.c | 2 +-
fs/ceph/addr.c | 10 +-
fs/cifs/connect.c | 6 +-
fs/f2fs/gc.c | 1 +
fs/f2fs/namei.c | 16 +-
fs/f2fs/super.c | 1 +
fs/jfs/jfs_logmgr.c | 1 +
fs/nfs/delegation.c | 71 ++-
fs/nfs/delegation.h | 1 +
fs/nfs/direct.c | 17 +-
fs/nfs/inode.c | 4 +
fs/nfs/nfs3proc.c | 4 +-
fs/nfs/nfs4_fs.h | 1 +
fs/nfs/nfs4client.c | 82 ++--
fs/nfs/pnfs.c | 40 +-
fs/nfs/pnfs_nfs.c | 52 +-
fs/nfsd/nfs4state.c | 3 -
fs/nfsd/trace.h | 29 --
fs/nfsd/vfs.c | 18 +-
fs/orangefs/super.c | 2 +-
fs/ubifs/dir.c | 7 +
fs/ubifs/journal.c | 3 +-
fs/ubifs/xattr.c | 2 +-
include/linux/compiler-clang.h | 17 +
include/linux/compiler-gcc.h | 6 +
include/linux/compiler_types.h | 2 +-
include/linux/nfs_fs.h | 1 +
include/linux/rcupdate.h | 2 +-
include/linux/sched/signal.h | 19 +-
include/scsi/libiscsi.h | 11 +-
include/scsi/scsi_transport_iscsi.h | 2 +
kernel/cgroup/cgroup-v1.c | 2 +
kernel/jump_label.c | 13 +-
kernel/rcu/rcu.h | 2 +
kernel/rcu/srcutree.c | 3 +
kernel/rcu/tree.c | 16 +-
kernel/rcu/update.c | 2 +-
kernel/sched/sched.h | 21 +-
kernel/static_call.c | 13 +-
kernel/trace/trace_events_hist.c | 6 +-
lib/decompress_unlz4.c | 8 +
net/bridge/br_multicast.c | 6 +
net/sunrpc/xprtsock.c | 3 +-
sound/ac97/bus.c | 2 +-
sound/firewire/Kconfig | 5 +-
sound/firewire/bebob/bebob.c | 5 +-
sound/firewire/motu/motu-protocol-v2.c | 13 +-
sound/firewire/oxfw/oxfw.c | 2 +-
sound/isa/cmi8330.c | 2 +-
sound/isa/sb/sb16_csp.c | 8 +-
sound/pci/hda/hda_tegra.c | 3 +
sound/ppc/powermac.c | 6 +-
sound/soc/img/img-i2s-in.c | 2 +-
sound/soc/intel/boards/kbl_da7219_max98357a.c | 4 +-
sound/soc/intel/boards/sof_da7219_max98373.c | 1 +
sound/soc/intel/boards/sof_rt5682.c | 1 +
sound/soc/intel/boards/sof_sdw.c | 19 +-
sound/soc/intel/boards/sof_sdw_common.h | 1 +
.../intel/common/soc-acpi-intel-kbl-match.c | 2 +-
sound/soc/soc-core.c | 2 +-
sound/soc/soc-pcm.c | 2 +-
sound/usb/mixer_scarlett_gen2.c | 39 +-
sound/usb/usx2y/usX2Yhwdep.c | 56 +--
sound/usb/usx2y/usX2Yhwdep.h | 2 +-
sound/usb/usx2y/usb_stream.c | 7 +-
sound/usb/usx2y/usbus428ctldefs.h | 102 ++--
sound/usb/usx2y/usbusx2y.c | 218 ++++-----
sound/usb/usx2y/usbusx2y.h | 58 +--
sound/usb/usx2y/usbusx2yaudio.c | 448 +++++++++---------
sound/usb/usx2y/usx2yhwdeppcm.c | 410 ++++++++--------
sound/usb/usx2y/usx2yhwdeppcm.h | 4 +-
.../powerpc/pmu/ebb/no_handler_test.c | 2 -
tools/testing/selftests/timers/rtcpie.c | 10 +-
virt/kvm/coalesced_mmio.c | 2 +-
265 files changed, 2508 insertions(+), 1668 deletions(-)
create mode 100644 arch/x86/kvm/mmu/paging.h
--
2.20.1
Backport 5.10.51 LTS patches
Aaron Liu (1):
drm/amdgpu: enable sdma0 tmz for Raven/Renoir(V2)
Al Cooper (1):
mmc: sdhci: Fix warning message when accessing RPMB in HS400 mode
Alex Bee (2):
drm: rockchip: add missing registers for RK3188
drm: rockchip: add missing registers for RK3066
Amber Lin (1):
drm/amdkfd: Fix circular lock in nocpsch path
Amit Cohen (1):
selftests: Clean forgotten resources as part of cleanup()
Andrey Grodzovsky (2):
drm/scheduler: Fix hang when sched_entity released
drm/sched: Avoid data corruptions
Andy Shevchenko (1):
net: pch_gbe: Use proper accessors to BE data in pch_ptp_match()
Ansuel Smith (1):
net: mdio: ipq8064: add regmap config to disable REGCACHE
Arnd Bergmann (1):
media: subdev: disallow ioctl for saa6588/davinci
Arturo Giusti (1):
udf: Fix NULL pointer dereference in udf_symlink function
Benjamin Drung (1):
media: uvcvideo: Fix pixel format change for Elgato Cam Link 4K
Bibo Mao (1):
hugetlb: clear huge pte during flush function on mips platform
Bixuan Cui (1):
pinctrl: equilibrium: Add missing MODULE_DEVICE_TABLE
Brandon Syu (1):
drm/amd/display: fix HDCP reset sequence on reinitialize
Cameron Nemo (2):
arm64: dts: rockchip: add rk3328 dwc3 usb controller node
arm64: dts: rockchip: Enable USB3 for rk3328 Rock64
Chao Yu (1):
f2fs: fix to avoid racing on fsync_entry_slab by multi filesystem
instances
Christian Löhle (1):
mmc: core: Allow UHS-I voltage switch for SDSC cards if supported
Christophe JAILLET (1):
nvmem: core: add a missing of_node_put
Christophe Leroy (1):
powerpc/mm: Fix lockup on kernel exec fault
Damien Le Moal (2):
dm: Fix dm_accept_partial_bio() relative to zone management commands
dm zoned: check zone capacity
Dan Carpenter (2):
drm/vc4: fix argument ordering in vc4_crtc_get_margins()
ath11k: unlock on error path in ath11k_mac_op_add_interface()
Daniel Borkmann (1):
bpf: Fix up register-based shifts in interpreter to silence KUBSAN
Daniel Lenski (1):
Bluetooth: btusb: Add a new QCA_ROME device (0cf3:e500)
Daniel Vetter (4):
drm/tegra: Don't set allow_fb_modifiers explicitly
drm/msm/mdp4: Fix modifier support enabling
drm/arm/malidp: Always list modifiers
drm/nouveau: Don't set allow_fb_modifiers explicitly
Davide Caratti (1):
net/sched: cls_api: increase max_reclassify_loop
Dinghao Liu (1):
clk: renesas: rcar-usb2-clock-sel: Fix error handling in .probe()
Dmitry Osipenko (3):
clk: tegra: Fix refcounting of gate clocks
clk: tegra: Ensure that PLLU configuration is applied properly
ASoC: tegra: Set driver_name=tegra for all machine drivers
Dmytro Laktyushkin (1):
drm/amd/display: fix use_max_lb flag for 420 pixel formats
Eli Cohen (1):
net/mlx5: Fix lag port remapping logic
Felix Fietkau (1):
mt76: mt7615: fix fixed-rate tx status reporting
Ferry Toth (1):
extcon: intel-mrfld: Sync hardware and software state on init
Fugang Duan (1):
net: fec: add ndo_select_queue to fix TX bandwidth fluctuations
Gerd Rausch (1):
RDMA/cma: Fix rdma_resolve_route() memory leak
Gioh Kim (1):
RDMA/rtrs: Change MAX_SESS_QUEUE_DEPTH
Guchun Chen (1):
drm/amd/display: fix incorrrect valid irq check
Gustavo A. R. Silva (1):
wireless: wext-spy: Fix out-of-bounds warning
Hans de Goede (1):
mmc: sdhci-acpi: Disable write protect detection on Toshiba Encore 2
WT8-B
Haren Myneni (1):
powerpc/powernv/vas: Release reference to tgid during window close
Harry Wentland (1):
drm/amd/display: Reject non-zero src_y and src_x for video planes
Heiner Kallweit (1):
r8169: avoid link-up interrupt issue on RTL8106e if user enables ASPM
Hilda Wu (1):
Bluetooth: btusb: Add support USB ALT 3 for WBS
Horatiu Vultur (1):
net: bridge: mrp: Update ring transitions.
Huang Pei (1):
MIPS: add PMD table accounting into MIPS'pmd_alloc_one
Huy Nguyen (1):
net/mlx5e: IPsec/rep_tc: Fix rep_tc_update_skb drops IPsec packet
Jack Zhang (1):
drm/amd/amdgpu/sriov disable all ip hw status by default
Jacob Keller (2):
ice: fix incorrect payload indicator on PTYPE
ice: mark PTYPE 2 as reserved
Jakub Kicinski (1):
net: ip: avoid OOM kills with large UDP sends over loopback
Jan Kara (1):
rq-qos: fix missed wake-ups in rq_qos_throttle try two
Jeremy Linton (1):
coresight: Propagate symlink failure
Jesse Brandeburg (4):
e100: handle eeprom as little endian
igb: handle vlan types with checker enabled
igb: fix assignment on big endian machines
i40e: fix PTP on 5Gb links
Jian Shen (1):
net: fix mistake path for netdev_features_strings
Jiansong Chen (1):
drm/amdgpu: remove unsafe optimization to drop preamble ib
Jiapeng Chong (1):
RDMA/cxgb4: Fix missing error code in create_qp()
Jing Xiangfeng (1):
drm/radeon: Add the missed drm_gem_object_put() in
radeon_user_framebuffer_create()
Joakim Zhang (1):
net: phy: realtek: add delay to fix RXC generation issue
Joe Thornber (1):
dm space maps: don't reset space map allocation cursor when committing
Johan Hovold (3):
media: dtv5100: fix control-request directions
media: gspca/sq905: fix control-request direction
media: gspca/sunplus: fix zero-length control requests
Johannes Berg (4):
iwlwifi: mvm: don't change band on bound PHY contexts
iwlwifi: pcie: free IML DMA memory allocation
iwlwifi: pcie: fix context info freeing
mac80211: consider per-CPU statistics if present
Jonathan Kim (1):
drm/amdkfd: fix circular locking on get_wave_state
Joseph Greathouse (1):
drm/amdgpu: Update NV SIMD-per-CU to 2
Kai-Heng Feng (1):
Bluetooth: Shutdown controller after workqueues are flushed or
cancelled
Kees Cook (4):
drm/amd/display: Avoid HDCP over-read and corruption
drm/i915/display: Do not zero past infoframes.vsc
lkdtm/bugs: XFAIL UNALIGNED_LOAD_STORE_WRITE
selftests/lkdtm: Fix expected text for CR4 pinning
Kiran K (1):
Bluetooth: Fix alt settings for incoming SCO with transparent coding
format
Konstantin Kharlamov (1):
PCI: Leave Apple Thunderbolt controllers on for s2idle or standby
Kuninori Morimoto (1):
clk: renesas: r8a77995: Add ZA2 clock
KuoHsiang Chou (1):
drm/ast: Fixed CVE for DP501
Lee Gibson (1):
wl1251: Fix possible buffer overflow in wl1251_cmd_scan
Limeng (1):
mfd: syscon: Free the allocated name field of struct regmap_config
Linus Walleij (1):
power: supply: ab8500: Fix an old bug
Liu Ying (1):
drm/bridge: nwl-dsi: Force a full modeset when crtc_state->active is
changed to be true
Liwei Song (1):
ice: set the value of global config lock timeout longer
Longpeng(Mike) (1):
vsock: notify server to shutdown when client has pending signal
Luiz Augusto von Dentz (2):
Bluetooth: L2CAP: Fix invalid access if ECRED Reconfigure fails
Bluetooth: L2CAP: Fix invalid access on ECRED Connection response
Lv Yunlong (1):
ipack/carriers/tpci200: Fix a double free in tpci200_pci_probe
Lyude Paul (1):
drm/dp: Handle zeroed port counts in drm_dp_read_downstream_info()
Marcelo Ricardo Leitner (2):
sctp: validate from_addr_param return
sctp: add size validation when walking chunks
Mark Yacoub (1):
drm/amd/display: Verify Gamma & Degamma LUT sizes in
amdgpu_dm_atomic_check
Mateusz Kwiatkowski (1):
drm/vc4: Fix clock source for VEC PixelValve on BCM2711
Max Gurtovoy (1):
IB/isert: Align target max I/O size to initiator size
Maxime Ripard (3):
drm/vc4: txp: Properly set the possible_crtcs mask
drm/vc4: crtc: Skip the TXP
drm/vc4: hdmi: Prevent clock unbalance
Maximilian Luz (1):
pinctrl/amd: Add device HID for new AMD GPIO controller
Mikulas Patocka (4):
dm writecache: don't split bios when overwriting contiguous cache
content
dm writecache: commit just one block, not a full page
dm writecache: flush origin device when writing and cache is full
dm writecache: write at least 4k when committing
Minchan Kim (1):
selinux: use __GFP_NOWARN with GFP_NOWAIT in the AVC
Nathan Chancellor (2):
powerpc/barrier: Avoid collision with clang's __lwsync macro
qemu_fw_cfg: Make fw_cfg_rev_attr a proper kobj_attribute
Nick Desaulniers (1):
MIPS: set mips32r5 for virt extensions
Nikola Cornij (1):
drm/amd/display: Fix DCN 3.01 DSCCLK validation
Nirmoy Das (1):
drm/amdkfd: use allowed domain for vmbo validation
Odin Ugedal (1):
sched/fair: Ensure _sum and _avg values stay consistent
Pali Rohár (2):
PCI: aardvark: Fix checking for PIO Non-posted Request
PCI: aardvark: Implement workaround for the readback value of VEND_ID
Pascal Terjan (1):
rtl8xxxu: Fix device info for RTL8192EU devices
Paul Burton (2):
tracing: Simplify & fix saved_tgids logic
tracing: Resize tgid_map to pid_max, not PID_MAX_DEFAULT
Paul Cercueil (3):
MIPS: cpu-probe: Fix FPU detection on Ingenic JZ4760(B)
MIPS: ingenic: Select CPU_SUPPORTS_CPUFREQ && MIPS_EXTERNAL_TIMER
MIPS: MT extensions are not available on MIPS32r1
Paul M Stillwell Jr (1):
ice: fix clang warning regarding deadcode.DeadStores
Pavel Begunkov (1):
io_uring: fix false WARN_ONCE
Pavel Skripkin (3):
reiserfs: add check for invalid 1st journal block
media: zr364xx: fix memory leak in zr364xx_start_readpipe
jfs: fix GPF in diFree
Petr Pavlu (1):
ipmi/watchdog: Stop watchdog timer when the current action is 'none'
Ping-Ke Shih (1):
cfg80211: fix default HE tx bitrate mask in 2G band
Radim Pavlik (1):
pinctrl: mcp23s08: fix race condition in irq handler
Roman Li (1):
drm/amd/display: Update scaling settings on modeset
Russ Weight (1):
fpga: stratix10-soc: Add missing fpga_mgr_free() call
Rustam Kovhaev (1):
bpf: Fix false positive kmemleak report in bpf_ringbuf_area_alloc()
Ryder Lee (1):
mt76: mt7915: fix IEEE80211_HE_PHY_CAP7_MAX_NC for station mode
Sai Prakash Ranjan (1):
coresight: tmc-etf: Fix global-out-of-bounds in
tmc_update_etf_buffer()
Samuel Holland (1):
clocksource/arm_arch_timer: Improve Allwinner A64 timer workaround
Sean Young (1):
media, bpf: Do not copy more entries than user space requested
Sebastian Andrzej Siewior (1):
net: Treat __napi_schedule_irqoff() as __napi_schedule() on PREEMPT_RT
Shaul Triebitz (1):
iwlwifi: mvm: fix error print when session protection ends
Srinivas Pandruvada (1):
thermal/drivers/int340x/processor_thermal: Fix tcc setting
Stanley.Yang (1):
drm/amdgpu: fix bad address translation for sienna_cichlid
Steffen Klassert (1):
xfrm: Fix error reporting in xfrm_state_construct.
Tedd Ho-Jeong An (1):
Bluetooth: mgmt: Fix the command returns garbage parameter value
Tetsuo Handa (1):
smackfs: restrict bytes count in smk_set_cipso()
Thomas Gleixner (1):
cpu/hotplug: Cure the cpusets trainwreck
Thomas Hebb (1):
drm/rockchip: dsi: remove extra component_del() call
Thomas Zimmermann (3):
drm/mxsfb: Don't select DRM_KMS_FB_HELPER
drm/zte: Don't select DRM_KMS_FB_HELPER
drm/ast: Remove reference to struct drm_device.pdev
Tiezhu Yang (1):
drm/radeon: Call radeon_suspend_kms() in radeon_pci_shutdown() for
Loongson64
Tim Jiang (1):
Bluetooth: btusb: fix bt fiwmare downloading failure issue for qca
btsoc.
Timo Sigurdsson (1):
ata: ahci_sunxi: Disable DIPM
Tony Lindgren (1):
wlcore/wl12xx: Fix wl12xx get_mac error if device is in ELP
Vladimir Oltean (2):
net: mdio: provide shim implementation of devm_of_mdiobus_register
net: stmmac: the XPCS obscures a potential "PHY not found" error
Vladimir Stempen (1):
drm/amd/display: Release MST resources on switch from MST to SST
Wang Li (1):
drm/mediatek: Fix PM reference leak in mtk_crtc_ddp_hw_init()
Weilun Du (1):
mac80211_hwsim: add concurrent channels scanning support over virtio
Wesley Chalmers (2):
drm/amd/display: Set DISPCLK_MAX_ERRDET_CYCLES to 7
drm/amd/display: Fix off-by-one error in DML
Willy Tarreau (1):
ipv6: use prandom_u32() for ID generation
Wolfram Sang (1):
mmc: core: clear flags before allowing to retune
Xianting Tian (1):
virtio_net: Remove BUG() to avoid machine dead
Xiao Yang (1):
RDMA/rxe: Don't overwrite errno from ib_umem_get()
Xiaochen Shen (1):
selftests/resctrl: Fix incorrect parsing of option "-t"
Xie Yongji (2):
drm/virtio: Fix double free on probe failure
virtio-net: Add validation for used length
Yang Yingliang (10):
net: mscc: ocelot: check return value after calling
platform_get_resource()
net: bcmgenet: check return value after calling
platform_get_resource()
net: mvpp2: check return value after calling platform_get_resource()
net: micrel: check return value after calling platform_get_resource()
net: moxa: Use devm_platform_get_and_ioremap_resource()
net: sgi: ioc3-eth: check return value after calling
platform_get_resource()
fjes: check return value after calling platform_get_resource()
net: ipa: Add missing of_node_put() in ipa_firmware_load()
net: sched: fix error return code in tcf_del_walker()
io_uring: fix clear IORING_SETUP_R_DISABLED in wrong function
Yu Kuai (1):
drm: bridge: cdns-mhdp8546: Fix PM reference leak in
Yu Liu (1):
Bluetooth: Fix the HCI to MGMT status conversion table
Yuchung Cheng (1):
net: tcp better handling of reordering then loss cases
Yun Zhou (1):
seq_buf: Fix overflow in seq_buf_putmem_hex()
Zhenyu Ye (1):
arm64: tlb: fix the TTL value of tlb_get_level
Zheyu Ma (2):
atm: nicstar: use 'dma_free_coherent' instead of 'kfree'
atm: nicstar: register the interrupt handler in the right place
Zou Wei (8):
atm: iphase: fix possible use-after-free in ia_module_exit()
mISDN: fix possible use-after-free in HFC_cleanup()
atm: nicstar: Fix possible use-after-free in nicstar_cleanup()
drm/bridge: lt9611: Add missing MODULE_DEVICE_TABLE
drm/vc4: hdmi: Fix PM reference leak in vc4_hdmi_encoder_pre_crtc_co()
drm/bridge: cdns: Fix PM reference leak in cdns_dsi_transfer()
cw1200: add missing MODULE_DEVICE_TABLE
pinctrl: mcp23s08: Fix missing unlock on error in mcp23s08_irq()
gushengxian (1):
flow_offload: action should not be NULL when it is referenced
mark-yw.chen (1):
Bluetooth: btusb: Fixed too many in-token issue for Mediatek Chip.
xinhui pan (1):
drm/amdkfd: Walk through list with dqm lock hold
zhanglianjie (1):
MIPS: loongsoon64: Reserve memory below starting pfn to prevent Oops
Íñigo Huguet (2):
sfc: avoid double pci_remove of VFs
sfc: error code if SRIOV cannot be disabled
.../arm64/boot/dts/rockchip/rk3328-rock64.dts | 5 +
arch/arm64/boot/dts/rockchip/rk3328.dtsi | 19 +++
arch/arm64/include/asm/tlb.h | 4 +
arch/mips/Kconfig | 2 +
arch/mips/include/asm/cpu-features.h | 4 +-
arch/mips/include/asm/hugetlb.h | 8 +-
arch/mips/include/asm/mipsregs.h | 8 +-
arch/mips/include/asm/pgalloc.h | 10 +-
arch/mips/kernel/cpu-probe.c | 5 +
arch/mips/loongson64/numa.c | 3 +
arch/powerpc/include/asm/barrier.h | 2 +
arch/powerpc/mm/fault.c | 4 +-
arch/powerpc/platforms/powernv/vas-window.c | 9 +-
block/blk-rq-qos.c | 4 +-
drivers/ata/ahci_sunxi.c | 2 +-
drivers/atm/iphase.c | 2 +-
drivers/atm/nicstar.c | 26 ++--
drivers/bluetooth/btusb.c | 24 ++-
drivers/char/ipmi/ipmi_watchdog.c | 22 +--
drivers/clk/renesas/r8a77995-cpg-mssr.c | 1 +
drivers/clk/renesas/rcar-usb2-clock-sel.c | 24 +--
drivers/clk/tegra/clk-periph-gate.c | 72 +++++----
drivers/clk/tegra/clk-periph.c | 11 ++
drivers/clk/tegra/clk-pll.c | 9 +-
drivers/clocksource/arm_arch_timer.c | 2 +-
drivers/extcon/extcon-intel-mrfld.c | 9 ++
drivers/firmware/qemu_fw_cfg.c | 8 +-
drivers/fpga/stratix10-soc.c | 1 +
.../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 21 +--
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 11 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h | 5 +
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 4 +-
drivers/gpu/drm/amd/amdgpu/umc_v8_7.c | 2 +-
.../drm/amd/amdkfd/kfd_device_queue_manager.c | 68 +++++----
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 24 ++-
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 1 +
.../amd/display/amdgpu_dm/amdgpu_dm_color.c | 41 +++++-
.../gpu/drm/amd/display/dc/core/dc_link_dp.c | 2 +
.../drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c | 9 +-
.../drm/amd/display/dc/dcn20/dcn20_hwseq.c | 2 +-
.../dc/dml/dcn30/display_mode_vba_30.c | 78 ++++------
drivers/gpu/drm/amd/display/dc/irq_types.h | 2 +-
.../gpu/drm/amd/display/modules/hdcp/hdcp.c | 1 -
.../display/modules/hdcp/hdcp1_execution.c | 4 +-
drivers/gpu/drm/amd/include/navi10_enum.h | 2 +-
drivers/gpu/drm/arm/malidp_planes.c | 9 +-
drivers/gpu/drm/ast/ast_dp501.c | 139 +++++++++++++-----
drivers/gpu/drm/ast/ast_drv.h | 12 ++
drivers/gpu/drm/ast/ast_main.c | 11 +-
.../drm/bridge/cadence/cdns-mhdp8546-core.c | 4 +-
drivers/gpu/drm/bridge/cdns-dsi.c | 2 +-
drivers/gpu/drm/bridge/lontium-lt9611.c | 1 +
drivers/gpu/drm/bridge/nwl-dsi.c | 61 +++++---
drivers/gpu/drm/drm_dp_helper.c | 7 +
drivers/gpu/drm/i915/display/intel_dp.c | 2 +-
drivers/gpu/drm/mediatek/mtk_drm_crtc.c | 2 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 2 -
drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c | 8 +-
drivers/gpu/drm/mxsfb/Kconfig | 1 -
drivers/gpu/drm/nouveau/nouveau_display.c | 1 -
drivers/gpu/drm/radeon/radeon_display.c | 1 +
drivers/gpu/drm/radeon/radeon_drv.c | 8 +-
.../gpu/drm/rockchip/dw-mipi-dsi-rockchip.c | 4 -
drivers/gpu/drm/rockchip/rockchip_vop_reg.c | 21 ++-
drivers/gpu/drm/scheduler/sched_entity.c | 8 +-
drivers/gpu/drm/scheduler/sched_main.c | 24 +++
drivers/gpu/drm/tegra/dc.c | 10 +-
drivers/gpu/drm/tegra/drm.c | 2 -
drivers/gpu/drm/vc4/vc4_crtc.c | 5 +-
drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
drivers/gpu/drm/vc4/vc4_hdmi.c | 10 +-
drivers/gpu/drm/vc4/vc4_txp.c | 2 +-
drivers/gpu/drm/virtio/virtgpu_kms.c | 1 +
drivers/gpu/drm/zte/Kconfig | 1 -
drivers/hwtracing/coresight/coresight-core.c | 2 +-
.../hwtracing/coresight/coresight-tmc-etf.c | 2 +-
drivers/infiniband/core/cma.c | 3 +-
drivers/infiniband/hw/cxgb4/qp.c | 1 +
drivers/infiniband/sw/rxe/rxe_mr.c | 2 +-
drivers/infiniband/ulp/isert/ib_isert.c | 4 +-
drivers/infiniband/ulp/isert/ib_isert.h | 3 -
drivers/infiniband/ulp/rtrs/rtrs-pri.h | 13 +-
drivers/ipack/carriers/tpci200.c | 5 +-
drivers/isdn/hardware/mISDN/hfcpci.c | 2 +-
drivers/md/dm-writecache.c | 48 ++++--
drivers/md/dm-zoned-metadata.c | 7 +
drivers/md/dm.c | 8 +-
.../md/persistent-data/dm-space-map-disk.c | 9 +-
.../persistent-data/dm-space-map-metadata.c | 9 +-
drivers/media/i2c/saa6588.c | 4 +-
drivers/media/pci/bt8xx/bttv-driver.c | 6 +-
drivers/media/pci/saa7134/saa7134-video.c | 6 +-
drivers/media/platform/davinci/vpbe_display.c | 2 +-
drivers/media/platform/davinci/vpbe_venc.c | 6 +-
drivers/media/rc/bpf-lirc.c | 3 +-
drivers/media/usb/dvb-usb/dtv5100.c | 7 +-
drivers/media/usb/gspca/sq905.c | 2 +-
drivers/media/usb/gspca/sunplus.c | 8 +-
drivers/media/usb/uvc/uvc_video.c | 27 ++++
drivers/media/usb/zr364xx/zr364xx.c | 1 +
drivers/mfd/syscon.c | 2 +-
drivers/misc/lkdtm/bugs.c | 3 +
drivers/mmc/core/core.c | 7 +-
drivers/mmc/core/sd.c | 10 +-
drivers/mmc/host/sdhci-acpi.c | 11 ++
drivers/mmc/host/sdhci.c | 4 +
drivers/mmc/host/sdhci.h | 1 +
drivers/net/dsa/ocelot/seville_vsc9953.c | 5 +
drivers/net/ethernet/broadcom/genet/bcmmii.c | 4 +
drivers/net/ethernet/freescale/fec_main.c | 32 ++++
drivers/net/ethernet/intel/e100.c | 12 +-
drivers/net/ethernet/intel/i40e/i40e_ptp.c | 8 +-
drivers/net/ethernet/intel/ice/ice_ethtool.c | 6 +-
.../net/ethernet/intel/ice/ice_lan_tx_rx.h | 4 +-
drivers/net/ethernet/intel/ice/ice_type.h | 2 +-
drivers/net/ethernet/intel/igb/igb_main.c | 9 +-
drivers/net/ethernet/intel/igbvf/netdev.c | 4 +-
.../net/ethernet/marvell/mvpp2/mvpp2_main.c | 4 +
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 6 +-
drivers/net/ethernet/mellanox/mlx5/core/lag.c | 19 ++-
drivers/net/ethernet/micrel/ks8842.c | 4 +
drivers/net/ethernet/moxa/moxart_ether.c | 5 +-
.../ethernet/oki-semi/pch_gbe/pch_gbe_main.c | 19 +--
drivers/net/ethernet/realtek/r8169_main.c | 1 -
drivers/net/ethernet/sfc/ef10_sriov.c | 25 ++--
drivers/net/ethernet/sgi/ioc3-eth.c | 4 +
.../net/ethernet/stmicro/stmmac/stmmac_mdio.c | 21 ++-
drivers/net/fjes/fjes_main.c | 4 +
drivers/net/ipa/ipa_main.c | 1 +
drivers/net/mdio/mdio-ipq8064.c | 33 +++--
drivers/net/phy/realtek.c | 15 +-
drivers/net/virtio_net.c | 22 ++-
drivers/net/wireless/ath/ath11k/mac.c | 4 +-
.../net/wireless/intel/iwlwifi/mvm/mac80211.c | 24 ++-
.../wireless/intel/iwlwifi/mvm/time-event.c | 4 +
.../intel/iwlwifi/pcie/ctxt-info-gen3.c | 15 +-
.../wireless/intel/iwlwifi/pcie/internal.h | 3 +
.../wireless/intel/iwlwifi/pcie/trans-gen2.c | 3 +-
drivers/net/wireless/mac80211_hwsim.c | 48 ++++--
.../net/wireless/mediatek/mt76/mt7615/mac.c | 10 +-
.../net/wireless/mediatek/mt76/mt7915/init.c | 6 +-
.../net/wireless/realtek/rtl8xxxu/rtl8xxxu.h | 11 +-
.../realtek/rtl8xxxu/rtl8xxxu_8192e.c | 59 +++++++-
drivers/net/wireless/st/cw1200/cw1200_sdio.c | 1 +
drivers/net/wireless/ti/wl1251/cmd.c | 9 +-
drivers/net/wireless/ti/wl12xx/main.c | 7 +
drivers/nvmem/core.c | 9 +-
drivers/pci/controller/pci-aardvark.c | 13 +-
drivers/pci/quirks.c | 11 ++
drivers/pinctrl/pinctrl-amd.c | 1 +
drivers/pinctrl/pinctrl-equilibrium.c | 1 +
drivers/pinctrl/pinctrl-mcp23s08.c | 10 +-
.../processor_thermal_device.c | 20 ++-
fs/f2fs/f2fs.h | 2 +
fs/f2fs/recovery.c | 23 +--
fs/f2fs/super.c | 8 +-
fs/io-wq.c | 5 +-
fs/io_uring.c | 2 +-
fs/jfs/inode.c | 3 +-
fs/reiserfs/journal.c | 14 ++
fs/udf/namei.c | 4 +
include/linux/mfd/abx500/ux500_chargalg.h | 2 +-
include/linux/netdev_features.h | 2 +-
include/linux/of_mdio.h | 7 +
include/linux/wait.h | 2 +-
include/media/v4l2-subdev.h | 4 +
include/net/flow_offload.h | 12 +-
include/net/sctp/structs.h | 2 +-
include/uapi/linux/ethtool.h | 4 +-
kernel/bpf/core.c | 61 +++++---
kernel/bpf/ringbuf.c | 2 +
kernel/cpu.c | 49 ++++++
kernel/sched/fair.c | 6 +-
kernel/sched/wait.c | 9 +-
kernel/trace/trace.c | 91 +++++++-----
lib/seq_buf.c | 4 +-
net/bluetooth/hci_core.c | 16 +-
net/bluetooth/hci_event.c | 6 +-
net/bluetooth/l2cap_core.c | 8 +-
net/bluetooth/mgmt.c | 5 +
net/bridge/br_mrp.c | 6 +-
net/core/dev.c | 11 +-
net/ipv4/ip_output.c | 32 ++--
net/ipv4/tcp_input.c | 45 +++---
net/ipv6/ip6_output.c | 32 ++--
net/ipv6/output_core.c | 28 +---
net/mac80211/sta_info.c | 11 +-
net/sched/act_api.c | 3 +-
net/sched/cls_api.c | 2 +-
net/sctp/bind_addr.c | 19 ++-
net/sctp/input.c | 8 +-
net/sctp/ipv6.c | 7 +-
net/sctp/protocol.c | 7 +-
net/sctp/sm_make_chunk.c | 29 ++--
net/vmw_vsock/af_vsock.c | 2 +-
net/wireless/nl80211.c | 9 +-
net/wireless/wext-spy.c | 14 +-
net/xfrm/xfrm_user.c | 28 ++--
security/selinux/avc.c | 13 +-
security/smack/smackfs.c | 2 +
sound/soc/tegra/tegra_alc5632.c | 1 +
sound/soc/tegra/tegra_max98090.c | 1 +
sound/soc/tegra/tegra_rt5640.c | 1 +
sound/soc/tegra/tegra_rt5677.c | 1 +
sound/soc/tegra/tegra_sgtl5000.c | 1 +
sound/soc/tegra/tegra_wm8753.c | 1 +
sound/soc/tegra/tegra_wm8903.c | 1 +
sound/soc/tegra/tegra_wm9712.c | 1 +
sound/soc/tegra/trimslice.c | 1 +
.../net/mlxsw/devlink_trap_l3_drops.sh | 3 +
.../net/mlxsw/devlink_trap_l3_exceptions.sh | 3 +
.../drivers/net/mlxsw/qos_dscp_bridge.sh | 2 +
tools/testing/selftests/lkdtm/tests.txt | 2 +-
.../selftests/net/forwarding/pedit_dsfield.sh | 2 +
.../selftests/net/forwarding/pedit_l4port.sh | 2 +
.../net/forwarding/skbedit_priority.sh | 2 +
tools/testing/selftests/resctrl/README | 2 +-
.../testing/selftests/resctrl/resctrl_tests.c | 4 +-
219 files changed, 1625 insertions(+), 773 deletions(-)
--
2.20.1

14 Oct '21
Backport some bugfix patches for the fs/perf/network/sched modules.
Liu Jian (1):
igmp: Add ip_mc_list lock in ip_check_mc_rcu
Marco Elver (1):
kcsan: Never set up watchpoints on NULL pointers
Riccardo Mancini (3):
perf probe-file: Delete namelist in del_events() on the error path
perf test bpf: Free obj_buf
perf data: Close all files in close_dir()
Theodore Ts'o (1):
ext4: inline jbd2_journal_[un]register_shrinker()
Xiongfeng Wang (1):
ACPI / PPTT: get PPTT table in the first beginning
Zhang Yi (9):
jbd2: remove the out label in __jbd2_journal_remove_checkpoint()
jbd2: ensure abort the journal if detect IO error when writing
original buffer back
jbd2: don't abort the journal when freeing buffers
jbd2: remove redundant buffer io error checks
jbd2,ext4: add a shrinker to release checkpointed buffers
jbd2: simplify journal_clean_one_cp_list()
ext4: remove bdev_try_to_free_page() callback
fs: remove bdev_try_to_free_page callback
jbd2: export jbd2_journal_[un]register_shrinker()
Zheng Zucheng (1):
Revert "[Huawei] sched: export sched_setscheduler symbol"
arch/arm64/kernel/topology.c | 6 +-
drivers/acpi/pptt.c | 83 ++++++--------
fs/block_dev.c | 15 ---
fs/ext4/super.c | 21 ----
fs/jbd2/checkpoint.c | 206 ++++++++++++++++++++++++++++-------
fs/jbd2/journal.c | 74 +++++++++++++
fs/jbd2/transaction.c | 17 ---
include/linux/acpi.h | 1 +
include/linux/fs.h | 1 -
include/linux/jbd2.h | 35 ++++++
include/trace/events/jbd2.h | 101 +++++++++++++++++
kernel/kcsan/encoding.h | 6 +-
kernel/sched/core.c | 1 -
net/ipv4/igmp.c | 2 +
tools/perf/tests/bpf.c | 2 +
tools/perf/util/data.c | 2 +-
tools/perf/util/probe-file.c | 4 +-
17 files changed, 430 insertions(+), 147 deletions(-)
--
2.20.1
Ramaxel inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4DBD7
CVE: NA
Initial commit of the spfc module for the Ramaxel Super FC adapter
Signed-off-by: Yanling Song <songyl(a)ramaxel.com>
---
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
drivers/scsi/Kconfig | 1 +
drivers/scsi/Makefile | 1 +
drivers/scsi/spfc/Kconfig | 17 +
drivers/scsi/spfc/Makefile | 47 +
drivers/scsi/spfc/common/unf_common.h | 1755 +++++++
drivers/scsi/spfc/common/unf_disc.c | 1276 +++++
drivers/scsi/spfc/common/unf_disc.h | 51 +
drivers/scsi/spfc/common/unf_event.c | 517 ++
drivers/scsi/spfc/common/unf_event.h | 84 +
drivers/scsi/spfc/common/unf_exchg.c | 2317 +++++++++
drivers/scsi/spfc/common/unf_exchg.h | 436 ++
drivers/scsi/spfc/common/unf_exchg_abort.c | 825 +++
drivers/scsi/spfc/common/unf_exchg_abort.h | 23 +
drivers/scsi/spfc/common/unf_fcstruct.h | 459 ++
drivers/scsi/spfc/common/unf_gs.c | 2521 +++++++++
drivers/scsi/spfc/common/unf_gs.h | 58 +
drivers/scsi/spfc/common/unf_init.c | 353 ++
drivers/scsi/spfc/common/unf_io.c | 1220 +++++
drivers/scsi/spfc/common/unf_io.h | 96 +
drivers/scsi/spfc/common/unf_io_abnormal.c | 986 ++++
drivers/scsi/spfc/common/unf_io_abnormal.h | 19 +
drivers/scsi/spfc/common/unf_log.h | 178 +
drivers/scsi/spfc/common/unf_lport.c | 1008 ++++
drivers/scsi/spfc/common/unf_lport.h | 519 ++
drivers/scsi/spfc/common/unf_ls.c | 4884 ++++++++++++++++++
drivers/scsi/spfc/common/unf_ls.h | 61 +
drivers/scsi/spfc/common/unf_npiv.c | 1005 ++++
drivers/scsi/spfc/common/unf_npiv.h | 47 +
drivers/scsi/spfc/common/unf_npiv_portman.c | 360 ++
drivers/scsi/spfc/common/unf_npiv_portman.h | 17 +
drivers/scsi/spfc/common/unf_portman.c | 2431 +++++++++
drivers/scsi/spfc/common/unf_portman.h | 96 +
drivers/scsi/spfc/common/unf_rport.c | 2286 ++++++++
drivers/scsi/spfc/common/unf_rport.h | 301 ++
drivers/scsi/spfc/common/unf_scsi.c | 1463 ++++++
drivers/scsi/spfc/common/unf_scsi_common.h | 570 ++
drivers/scsi/spfc/common/unf_service.c | 1439 ++++++
drivers/scsi/spfc/common/unf_service.h | 66 +
drivers/scsi/spfc/common/unf_type.h | 216 +
drivers/scsi/spfc/hw/spfc_chipitf.c | 1105 ++++
drivers/scsi/spfc/hw/spfc_chipitf.h | 797 +++
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c | 1646 ++++++
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h | 215 +
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c | 891 ++++
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h | 65 +
drivers/scsi/spfc/hw/spfc_cqm_main.c | 1257 +++++
drivers/scsi/spfc/hw/spfc_cqm_main.h | 414 ++
drivers/scsi/spfc/hw/spfc_cqm_object.c | 959 ++++
drivers/scsi/spfc/hw/spfc_cqm_object.h | 279 +
drivers/scsi/spfc/hw/spfc_hba.c | 1724 +++++++
drivers/scsi/spfc/hw/spfc_hba.h | 341 ++
drivers/scsi/spfc/hw/spfc_hw_wqe.h | 1645 ++++++
drivers/scsi/spfc/hw/spfc_io.c | 1193 +++++
drivers/scsi/spfc/hw/spfc_io.h | 138 +
drivers/scsi/spfc/hw/spfc_lld.c | 998 ++++
drivers/scsi/spfc/hw/spfc_lld.h | 76 +
drivers/scsi/spfc/hw/spfc_module.h | 297 ++
drivers/scsi/spfc/hw/spfc_parent_context.h | 269 +
drivers/scsi/spfc/hw/spfc_queue.c | 4857 +++++++++++++++++
drivers/scsi/spfc/hw/spfc_queue.h | 711 +++
drivers/scsi/spfc/hw/spfc_service.c | 2169 ++++++++
drivers/scsi/spfc/hw/spfc_service.h | 282 +
drivers/scsi/spfc/hw/spfc_utils.c | 102 +
drivers/scsi/spfc/hw/spfc_utils.h | 202 +
drivers/scsi/spfc/hw/spfc_wqe.c | 646 +++
drivers/scsi/spfc/hw/spfc_wqe.h | 239 +
68 files changed, 53528 insertions(+)
create mode 100644 drivers/scsi/spfc/Kconfig
create mode 100644 drivers/scsi/spfc/Makefile
create mode 100644 drivers/scsi/spfc/common/unf_common.h
create mode 100644 drivers/scsi/spfc/common/unf_disc.c
create mode 100644 drivers/scsi/spfc/common/unf_disc.h
create mode 100644 drivers/scsi/spfc/common/unf_event.c
create mode 100644 drivers/scsi/spfc/common/unf_event.h
create mode 100644 drivers/scsi/spfc/common/unf_exchg.c
create mode 100644 drivers/scsi/spfc/common/unf_exchg.h
create mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.c
create mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.h
create mode 100644 drivers/scsi/spfc/common/unf_fcstruct.h
create mode 100644 drivers/scsi/spfc/common/unf_gs.c
create mode 100644 drivers/scsi/spfc/common/unf_gs.h
create mode 100644 drivers/scsi/spfc/common/unf_init.c
create mode 100644 drivers/scsi/spfc/common/unf_io.c
create mode 100644 drivers/scsi/spfc/common/unf_io.h
create mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.c
create mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.h
create mode 100644 drivers/scsi/spfc/common/unf_log.h
create mode 100644 drivers/scsi/spfc/common/unf_lport.c
create mode 100644 drivers/scsi/spfc/common/unf_lport.h
create mode 100644 drivers/scsi/spfc/common/unf_ls.c
create mode 100644 drivers/scsi/spfc/common/unf_ls.h
create mode 100644 drivers/scsi/spfc/common/unf_npiv.c
create mode 100644 drivers/scsi/spfc/common/unf_npiv.h
create mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.c
create mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.h
create mode 100644 drivers/scsi/spfc/common/unf_portman.c
create mode 100644 drivers/scsi/spfc/common/unf_portman.h
create mode 100644 drivers/scsi/spfc/common/unf_rport.c
create mode 100644 drivers/scsi/spfc/common/unf_rport.h
create mode 100644 drivers/scsi/spfc/common/unf_scsi.c
create mode 100644 drivers/scsi/spfc/common/unf_scsi_common.h
create mode 100644 drivers/scsi/spfc/common/unf_service.c
create mode 100644 drivers/scsi/spfc/common/unf_service.h
create mode 100644 drivers/scsi/spfc/common/unf_type.h
create mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.c
create mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.h
create mode 100644 drivers/scsi/spfc/hw/spfc_hba.c
create mode 100644 drivers/scsi/spfc/hw/spfc_hba.h
create mode 100644 drivers/scsi/spfc/hw/spfc_hw_wqe.h
create mode 100644 drivers/scsi/spfc/hw/spfc_io.c
create mode 100644 drivers/scsi/spfc/hw/spfc_io.h
create mode 100644 drivers/scsi/spfc/hw/spfc_lld.c
create mode 100644 drivers/scsi/spfc/hw/spfc_lld.h
create mode 100644 drivers/scsi/spfc/hw/spfc_module.h
create mode 100644 drivers/scsi/spfc/hw/spfc_parent_context.h
create mode 100644 drivers/scsi/spfc/hw/spfc_queue.c
create mode 100644 drivers/scsi/spfc/hw/spfc_queue.h
create mode 100644 drivers/scsi/spfc/hw/spfc_service.c
create mode 100644 drivers/scsi/spfc/hw/spfc_service.h
create mode 100644 drivers/scsi/spfc/hw/spfc_utils.c
create mode 100644 drivers/scsi/spfc/hw/spfc_utils.h
create mode 100644 drivers/scsi/spfc/hw/spfc_wqe.c
create mode 100644 drivers/scsi/spfc/hw/spfc_wqe.h
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 8345f906f5fc..0bdb678bff3a 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -7135,3 +7135,4 @@ CONFIG_ETMEM_SCAN=m
CONFIG_ETMEM_SWAP=m
CONFIG_NET_VENDOR_RAMAXEL=y
CONFIG_SPNIC=m
+CONFIG_SPFC=m
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index c1304f2e7de4..57631bbc8839 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -8514,3 +8514,4 @@ CONFIG_ETMEM_SWAP=m
CONFIG_USERSWAP=y
CONFIG_NET_VENDOR_RAMAXEL=y
CONFIG_SPNIC=m
+CONFIG_SPFC=m
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 0fbe4edeccd0..170d59df48d1 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1151,6 +1151,7 @@ source "drivers/scsi/qla2xxx/Kconfig"
source "drivers/scsi/qla4xxx/Kconfig"
source "drivers/scsi/qedi/Kconfig"
source "drivers/scsi/qedf/Kconfig"
+source "drivers/scsi/spfc/Kconfig"
source "drivers/scsi/huawei/Kconfig"
config SCSI_LPFC
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index 78a3c832394c..299d3318fac8 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -85,6 +85,7 @@ obj-$(CONFIG_PCMCIA_QLOGIC) += qlogicfas408.o
obj-$(CONFIG_SCSI_QLOGIC_1280) += qla1280.o
obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx/
obj-$(CONFIG_SCSI_QLA_ISCSI) += libiscsi.o qla4xxx/
+obj-$(CONFIG_SPFC) += spfc/
obj-$(CONFIG_SCSI_LPFC) += lpfc/
obj-$(CONFIG_SCSI_HUAWEI_FC) += huawei/
obj-$(CONFIG_SCSI_BFA_FC) += bfa/
diff --git a/drivers/scsi/spfc/Kconfig b/drivers/scsi/spfc/Kconfig
new file mode 100644
index 000000000000..1021089f355c
--- /dev/null
+++ b/drivers/scsi/spfc/Kconfig
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Ramaxel SPFC driver configuration
+#
+
+config SPFC
+ tristate "Ramaxel Fabric Channel Host Adapter Support"
+ default m
+ depends on PCI && SCSI
+ depends on SCSI_FC_ATTRS
+ depends on ARM64 || X86_64
+ help
+ This driver supports the Ramaxel Fabric Channel PCIe host adapter.
+ To compile this driver as part of the kernel, choose Y here;
+ to compile it as a module, choose M.
+ If unsure, choose N.
+
diff --git a/drivers/scsi/spfc/Makefile b/drivers/scsi/spfc/Makefile
new file mode 100644
index 000000000000..02fe0213e048
--- /dev/null
+++ b/drivers/scsi/spfc/Makefile
@@ -0,0 +1,47 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_SPFC) += spfc.o
+
+subdir-ccflags-y += -I$(src)/../../net/ethernet/ramaxel/spnic/hw
+subdir-ccflags-y += -I$(src)/hw
+subdir-ccflags-y += -I$(src)/common
+
+spfc-objs := common/unf_init.o \
+ common/unf_event.o \
+ common/unf_exchg.o \
+ common/unf_exchg_abort.o \
+ common/unf_io.o \
+ common/unf_io_abnormal.o \
+ common/unf_lport.o \
+ common/unf_npiv.o \
+ common/unf_npiv_portman.o \
+ common/unf_disc.o \
+ common/unf_rport.o \
+ common/unf_service.o \
+ common/unf_ls.o \
+ common/unf_gs.o \
+ common/unf_portman.o \
+ common/unf_scsi.o \
+ hw/spfc_utils.o \
+ hw/spfc_lld.o \
+ hw/spfc_io.o \
+ hw/spfc_wqe.o \
+ hw/spfc_service.o \
+ hw/spfc_chipitf.o \
+ hw/spfc_queue.o \
+ hw/spfc_hba.o \
+ hw/spfc_cqm_bat_cla.o \
+ hw/spfc_cqm_bitmap_table.o \
+ hw/spfc_cqm_main.o \
+ hw/spfc_cqm_object.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_hwdev.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_common.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_hwif.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_wq.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_eqs.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_mbox.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.o
diff --git a/drivers/scsi/spfc/common/unf_common.h b/drivers/scsi/spfc/common/unf_common.h
new file mode 100644
index 000000000000..bf9d156e07ce
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_common.h
@@ -0,0 +1,1755 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_COMMON_H
+#define UNF_COMMON_H
+
+#include "unf_type.h"
+#include "unf_fcstruct.h"
+
+/* version num */
+#define SPFC_DRV_VERSION "B101"
+#define SPFC_DRV_DESC "Ramaxel Memory Technology Fibre Channel Driver"
+
+#define UNF_MAX_SECTORS 0xffff
+#define UNF_ORIGIN_HOTTAG_MASK 0x7fff
+#define UNF_HOTTAG_FLAG (1 << 15)
+#define UNF_PKG_FREE_OXID 0x0
+#define UNF_PKG_FREE_RXID 0x1
+
+#define UNF_SPFC_MAXRPORT_NUM (2048)
+#define SPFC_DEFAULT_RPORT_INDEX (UNF_SPFC_MAXRPORT_NUM - 1)
+
+/* session use sq num */
+#define UNF_SQ_NUM_PER_SESSION 3
+
+extern atomic_t fc_mem_ref;
+extern u32 unf_dgb_level;
+extern u32 spfc_dif_type;
+extern u32 spfc_dif_enable;
+extern u8 spfc_guard;
+extern int link_lose_tmo;
+
+/* define bits */
+#define UNF_BIT(n) (0x1UL << (n))
+#define UNF_BIT_0 UNF_BIT(0)
+#define UNF_BIT_1 UNF_BIT(1)
+#define UNF_BIT_2 UNF_BIT(2)
+#define UNF_BIT_3 UNF_BIT(3)
+#define UNF_BIT_4 UNF_BIT(4)
+#define UNF_BIT_5 UNF_BIT(5)
+
+#define UNF_BITS_PER_BYTE 8
+
+#define UNF_NOTIFY_UP_CLEAN_FLASH 2
+
+/* Echo macro define */
+#define ECHO_MG_VERSION_LOCAL 1
+#define ECHO_MG_VERSION_REMOTE 2
+
+#define SPFC_WIN_NPIV_NUM 32
+
+#define UNF_GET_NAME_HIGH_WORD(name) (((name) >> 32) & 0xffffffff)
+#define UNF_GET_NAME_LOW_WORD(name) ((name) & 0xffffffff)
+
+#define UNF_FIRST_LPORT_ID_MASK 0xffffff00
+#define UNF_PORT_ID_MASK 0x000000ff
+#define UNF_FIRST_LPORT_ID 0x00000000
+#define UNF_SECOND_LPORT_ID 0x00000001
+#define UNF_EIGHTH_LPORT_ID 0x00000007
+#define SPFC_MAX_COUNTER_TYPE 128
+
+#define UNF_EVENT_ASYN 0
+#define UNF_EVENT_SYN 1
+#define UNF_GLOBAL_EVENT_ASYN 2
+#define UNF_GLOBAL_EVENT_SYN 3
+
+#define UNF_GET_SLOT_ID_BY_PORTID(port_id) (((port_id) & 0x001f00) >> 8)
+#define UNF_GET_FUNC_ID_BY_PORTID(port_id) ((port_id) & 0x0000ff)
+#define UNF_GET_BOARD_TYPE_AND_SLOT_ID_BY_PORTID(port_id) \
+ (((port_id) & 0x00FF00) >> 8)
+
+#define UNF_FC_SERVER_BOARD_8_G 13 /* 8G mode */
+#define UNF_FC_SERVER_BOARD_16_G 7 /* 16G mode */
+#define UNF_FC_SERVER_BOARD_32_G 6 /* 32G mode */
+
+#define UNF_PORT_TYPE_FC_QSFP 1
+#define UNF_PORT_TYPE_FC_SFP 0
+#define UNF_PORT_UNGRADE_FW_RESET_ACTIVE 0
+#define UNF_PORT_UNGRADE_FW_RESET_INACTIVE 1
+
+enum unf_rport_qos_level {
+ UNF_QOS_LEVEL_DEFAULT = 0,
+ UNF_QOS_LEVEL_MIDDLE,
+ UNF_QOS_LEVEL_HIGH,
+ UNF_QOS_LEVEL_BUTT
+};
+
+struct buff_list {
+ u8 *vaddr;
+ dma_addr_t paddr;
+};
+
+struct buf_describe {
+ struct buff_list *buflist;
+ u32 buf_size;
+ u32 buf_num;
+};
+
+#define IO_STATICS
+struct unf_port_info {
+ u32 local_nport_id;
+ u32 nport_id;
+ u32 rport_index;
+ u64 port_name;
+ enum unf_rport_qos_level qos_level;
+ u8 cs_ctrl;
+ u8 rsvd0[3];
+ u32 sqn_base;
+};
+
+struct unf_cfg_item {
+ char *puc_name;
+ u32 min_value;
+ u32 default_value;
+ u32 max_value;
+};
+
+struct unf_port_param {
+ u32 ra_tov;
+ u32 ed_tov;
+};
+
+/* get wwpn and wwnn */
+struct unf_get_chip_info_argout {
+ u8 board_type;
+ u64 wwpn;
+ u64 wwnn;
+ u64 sys_mac;
+};
+
+/* get sfp info: present and speed */
+struct unf_get_port_info_argout {
+ u8 sfp_speed;
+ u8 present;
+ u8 rsvd[2];
+};
+
+/* SFF-8436(QSFP+) Rev 4.7 */
+struct unf_sfp_plus_field_a0 {
+ u8 identifier;
+ /* offset 1~2 */
+ struct {
+ u8 reserved;
+ u8 status;
+ } status_indicator;
+ /* offset 3~21 */
+ struct {
+ u8 rx_tx_los;
+ u8 tx_fault;
+ u8 all_resv;
+
+ u8 ini_complete : 1;
+ u8 bit_resv : 3;
+ u8 temp_low_warn : 1;
+ u8 temp_high_warn : 1;
+ u8 temp_low_alarm : 1;
+ u8 temp_high_alarm : 1;
+
+ u8 resv : 4;
+ u8 vcc_low_warn : 1;
+ u8 vcc_high_warn : 1;
+ u8 vcc_low_alarm : 1;
+ u8 vcc_high_alarm : 1;
+
+ u8 resv8;
+ u8 rx_pow[2];
+ u8 tx_bias[2];
+ u8 reserved[6];
+ u8 vendor_specifics[3];
+ } interrupt_flag;
+ /* offset 22~33 */
+ struct {
+ u8 temp[2];
+ u8 reserved[2];
+ u8 supply_vol[2];
+ u8 reserveds[2];
+ u8 vendor_specific[4];
+ } module_monitors;
+ /* offset 34~81 */
+ struct {
+ u8 rx_pow[8];
+ u8 tx_bias[8];
+ u8 reserved[16];
+ u8 vendor_specific[16];
+ } channel_monitor_val;
+
+ /* offset 82~85 */
+ u8 reserved[4];
+
+ /* offset 86~97 */
+ struct {
+ /* 86~88 */
+ u8 tx_disable;
+ u8 rx_rate_select;
+ u8 tx_rate_select;
+
+ /* 89~92 */
+ u8 rx4_app_select;
+ u8 rx3_app_select;
+ u8 rx2_app_select;
+ u8 rx1_app_select;
+ /* 93 */
+ u8 power_override : 1;
+ u8 power_set : 1;
+ u8 reserved : 6;
+
+ /* 94~97 */
+ u8 tx4_app_select;
+ u8 tx3_app_select;
+ u8 tx2_app_select;
+ u8 tx1_app_select;
+ /* 98~99 */
+ u8 reserved2[2];
+ } control;
+ /* 100~106 */
+ struct {
+ /* 100 */
+ u8 m_rx1_los : 1;
+ u8 m_rx2_los : 1;
+ u8 m_rx3_los : 1;
+ u8 m_rx4_los : 1;
+ u8 m_tx1_los : 1;
+ u8 m_tx2_los : 1;
+ u8 m_tx3_los : 1;
+ u8 m_tx4_los : 1;
+ /* 101 */
+ u8 m_tx1_fault : 1;
+ u8 m_tx2_fault : 1;
+ u8 m_tx3_fault : 1;
+ u8 m_tx4_fault : 1;
+ u8 reserved : 4;
+ /* 102 */
+ u8 reserved1;
+ /* 103 */
+ u8 mini_cmp_flag : 1;
+ u8 rsv : 3;
+ u8 m_temp_low_warn : 1;
+ u8 m_temp_high_warn : 1;
+ u8 m_temp_low_alarm : 1;
+ u8 m_temp_high_alarm : 1;
+ /* 104 */
+ u8 rsv1 : 4;
+ u8 m_vcc_low_warn : 1;
+ u8 m_vcc_high_warn : 1;
+ u8 m_vcc_low_alarm : 1;
+ u8 m_vcc_high_alarm : 1;
+ /* 105~106 */
+ u8 vendor_specific[2];
+ } module_channel_mask_bit;
+ /* 107~118 */
+ u8 resv[12];
+ /* 119~126 */
+ u8 password_reserved[8];
+ /* 127 */
+ u8 page_select;
+};
+
+/* page 00 */
+struct unf_sfp_plus_field_00 {
+ /* 128~191 */
+ struct {
+ u8 id;
+ u8 id_ext;
+ u8 connector;
+ u8 speci_com[6];
+ u8 mode;
+ u8 speed;
+ u8 encoding;
+ u8 br_nominal;
+ u8 ext_rate_select_com;
+ u8 length_smf;
+ u8 length_om3;
+ u8 length_om2;
+ u8 length_om1;
+ u8 length_copper;
+ u8 device_tech;
+ u8 vendor_name[16];
+ u8 ex_module;
+ u8 vendor_oui[3];
+ u8 vendor_pn[16];
+ u8 vendor_rev[2];
+ /* Wave length or Copper cable Attenuation*/
+ u8 wave_or_copper_attenuation[2];
+ u8 wave_length_toler[2]; /* Wavelength tolerance */
+ u8 max_temp;
+ u8 cc_base;
+ } base_id_fields;
+ /* 192~223 */
+ struct {
+ u8 options[4];
+ u8 vendor_sn[16];
+ u8 date_code[8];
+ u8 diagn_monit_type;
+ u8 enhance_opt;
+ u8 reserved;
+ u8 ccext;
+ } ext_id_fields;
+ /* 224~255 */
+ u8 vendor_spec_eeprom[32];
+};
+
+/* page 01 */
+struct unf_sfp_plus_field_01 {
+ u8 optional01[128];
+};
+
+/* page 02 */
+struct unf_sfp_plus_field_02 {
+ u8 optional02[128];
+};
+
+/* page 03 */
+struct unf_sfp_plus_field_03 {
+ u8 temp_high_alarm[2];
+ u8 temp_low_alarm[2];
+ u8 temp_high_warn[2];
+ u8 temp_low_warn[2];
+
+ u8 reserved1[8];
+
+ u8 vcc_high_alarm[2];
+ u8 vcc_low_alarm[2];
+ u8 vcc_high_warn[2];
+ u8 vcc_low_warn[2];
+
+ u8 reserved2[8];
+ u8 vendor_specific1[16];
+
+ u8 pow_high_alarm[2];
+ u8 pow_low_alarm[2];
+ u8 pow_high_warn[2];
+ u8 pow_low_warn[2];
+
+ u8 bias_high_alarm[2];
+ u8 bias_low_alarm[2];
+ u8 bias_high_warn[2];
+ u8 bias_low_warn[2];
+
+ u8 tx_power_high_alarm[2];
+ u8 tx_power_low_alarm[2];
+ u8 reserved3[4];
+
+ u8 reserved4[8];
+
+ u8 vendor_specific2[16];
+ u8 reserved5[2];
+ u8 vendor_specific3[12];
+ u8 rx_ampl[2];
+ u8 rx_tx_sq_disable;
+ u8 rx_output_disable;
+ u8 chan_monit_mask[12];
+ u8 reserved6[2];
+};
+
+struct unf_sfp_plus_info {
+ struct unf_sfp_plus_field_a0 sfp_plus_info_a0;
+ struct unf_sfp_plus_field_00 sfp_plus_info_00;
+ struct unf_sfp_plus_field_01 sfp_plus_info_01;
+ struct unf_sfp_plus_field_02 sfp_plus_info_02;
+ struct unf_sfp_plus_field_03 sfp_plus_info_03;
+};
+
+struct unf_sfp_data_field_a0 {
+ /* Offset 0~63 */
+ struct {
+ u8 id;
+ u8 id_ext;
+ u8 connector;
+ u8 transceiver[8];
+ u8 encoding;
+ u8 br_nominal; /* Nominal signalling rate, units of 100MBd. */
+ u8 rate_identifier; /* Type of rate select functionality */
+ /* Link length supported for single mode fiber, units of km */
+ u8 length_smk_km;
+ /* Link length supported for single mode fiber,
+ *units of 100 m
+ */
+ u8 length_smf;
+ /* Link length supported for 50 um OM2 fiber,units of 10 m */
+ u8 length_smf_om2;
+ /* Link length supported for 62.5 um OM1 fiber, units of 10 m */
+ u8 length_smf_om1;
+ /*Link length supported for copper/direct attach cable,
+ *units of m
+ */
+ u8 length_cable;
+ /* Link length supported for 50 um OM3 fiber, units of 10m */
+ u8 length_om3;
+ u8 vendor_name[16]; /* ASCII */
+ /* Code for electronic or optical compatibility*/
+ u8 transceiver2;
+ u8 vendor_oui[3]; /* SFP vendor IEEE company ID */
+ u8 vendor_pn[16]; /* Part number provided by SFP vendor (ASCII)
+ */
+ /* Revision level for part number provided by vendor (ASCII) */
+ u8 vendor_rev[4];
+ /* Laser wavelength (Passive/Active Cable
+ *Specification Compliance)
+ */
+ u8 wave_length[2];
+ u8 unallocated;
+ /* Check code for Base ID Fields (addresses 0 to 62)*/
+ u8 cc_base;
+ } base_id_fields;
+
+ /* Offset 64~95 */
+ struct {
+ u8 options[2];
+ u8 br_max;
+ u8 br_min;
+ u8 vendor_sn[16];
+ u8 date_code[8];
+ u8 diag_monitoring_type;
+ u8 enhanced_options;
+ u8 sff8472_compliance;
+ u8 cc_ext;
+ } ext_id_fields;
+
+ /* Offset 96~255 */
+ struct {
+ u8 vendor_spec_eeprom[32];
+ u8 rsvd[128];
+ } vendor_spec_id_fields;
+};
+
+struct unf_sfp_data_field_a2 {
+ /* Offset 0~119 */
+ struct {
+ /* 0~39 */
+ struct {
+ u8 temp_alarm_high[2];
+ u8 temp_alarm_low[2];
+ u8 temp_warning_high[2];
+ u8 temp_warning_low[2];
+
+ u8 vcc_alarm_high[2];
+ u8 vcc_alarm_low[2];
+ u8 vcc_warning_high[2];
+ u8 vcc_warning_low[2];
+
+ u8 bias_alarm_high[2];
+ u8 bias_alarm_low[2];
+ u8 bias_warning_high[2];
+ u8 bias_warning_low[2];
+
+ u8 tx_alarm_high[2];
+ u8 tx_alarm_low[2];
+ u8 tx_warning_high[2];
+ u8 tx_warning_low[2];
+
+ u8 rx_alarm_high[2];
+ u8 rx_alarm_low[2];
+ u8 rx_warning_high[2];
+ u8 rx_warning_low[2];
+ } alarm_warn_th;
+
+ u8 unallocated0[16];
+ u8 ext_cal_constants[36];
+ u8 unallocated1[3];
+ u8 cc_dmi;
+
+ /* 96~105 */
+ struct {
+ u8 temp[2];
+ u8 vcc[2];
+ u8 tx_bias[2];
+ u8 tx_power[2];
+ u8 rx_power[2];
+ } diag;
+
+ u8 unallocated2[4];
+
+ struct {
+ u8 data_rdy_bar_state : 1;
+ u8 rx_los : 1;
+ u8 tx_fault_state : 1;
+ u8 soft_rate_select_state : 1;
+ u8 rate_select_state : 1;
+ u8 rs_state : 1;
+ u8 soft_tx_disable_select : 1;
+ u8 tx_disable_state : 1;
+ } status_ctrl;
+ u8 rsvd;
+
+ /* 112~113 */
+ struct {
+ /* 112 */
+ u8 tx_alarm_low : 1;
+ u8 tx_alarm_high : 1;
+ u8 tx_bias_alarm_low : 1;
+ u8 tx_bias_alarm_high : 1;
+ u8 vcc_alarm_low : 1;
+ u8 vcc_alarm_high : 1;
+ u8 temp_alarm_low : 1;
+ u8 temp_alarm_high : 1;
+
+ /* 113 */
+ u8 rsvd : 6;
+ u8 rx_alarm_low : 1;
+ u8 rx_alarm_high : 1;
+ } alarm;
+
+ u8 unallocated3[2];
+
+ /* 116~117 */
+ struct {
+ /* 116 */
+ u8 tx_warn_lo : 1;
+ u8 tx_warn_hi : 1;
+ u8 bias_warn_lo : 1;
+ u8 bias_warn_hi : 1;
+ u8 vcc_warn_lo : 1;
+ u8 vcc_warn_hi : 1;
+ u8 temp_warn_lo : 1;
+ u8 temp_warn_hi : 1;
+
+ /* 117 */
+ u8 rsvd : 6;
+ u8 rx_warn_lo : 1;
+ u8 rx_warn_hi : 1;
+ } warning;
+
+ u8 ext_status_and_ctrl[2];
+ } diag;
+
+ /* Offset 120~255 */
+ struct {
+ u8 vendor_spec[8];
+ u8 user_eeprom[120];
+ u8 vendor_ctrl[8];
+ } general_use_fields;
+};
+
+struct unf_sfp_info {
+ struct unf_sfp_data_field_a0 sfp_info_a0;
+ struct unf_sfp_data_field_a2 sfp_info_a2;
+};
+
+struct unf_sfp_err_rome_info {
+ struct unf_sfp_info sfp_info;
+ struct unf_sfp_plus_info sfp_plus_info;
+};
+
+struct unf_err_code {
+ u32 loss_of_signal_count;
+ u32 bad_rx_char_count;
+ u32 loss_of_sync_count;
+ u32 link_fail_count;
+ u32 rx_eof_a_count;
+ u32 dis_frame_count;
+ u32 bad_crc_count;
+ u32 proto_error_count;
+};
+
+/* config file */
+enum unf_port_mode {
+ UNF_PORT_MODE_UNKNOWN = 0x00,
+ UNF_PORT_MODE_TGT = 0x10,
+ UNF_PORT_MODE_INI = 0x20,
+ UNF_PORT_MODE_BOTH = 0x30
+};
+
+enum unf_port_upgrade {
+ UNF_PORT_UNSUPPORT_UPGRADE_REPORT = 0x00,
+ UNF_PORT_SUPPORT_UPGRADE_REPORT = 0x01,
+ UNF_PORT_UPGRADE_BUTT
+};
+
+#define UNF_BYTES_OF_DWORD 0x4
+static inline void __attribute__((unused)) unf_big_end_to_cpu(u8 *buffer, u32 size)
+{
+ u32 *buf = NULL;
+ u32 word_sum = 0;
+ u32 index = 0;
+
+ if (!buffer)
+ return;
+
+ buf = (u32 *)buffer;
+
+ /* byte to word */
+ if (size % UNF_BYTES_OF_DWORD == 0)
+ word_sum = size / UNF_BYTES_OF_DWORD;
+ else
+ return;
+
+ /* convert each 32-bit word from big-endian to CPU byte order */
+ while (index < word_sum) {
+ *buf = be32_to_cpu(*buf);
+ buf++;
+ index++;
+ }
+}
+
+static inline void __attribute__((unused)) unf_cpu_to_big_end(void *buffer, u32 size)
+{
+#define DWORD_BIT 32
+#define BYTE_BIT 8
+ u32 *buf = NULL;
+ u32 word_sum = 0;
+ u32 index = 0;
+ u32 tmp = 0;
+
+ if (!buffer)
+ return;
+
+ buf = (u32 *)buffer;
+
+ /* byte to dword */
+ word_sum = size / UNF_BYTES_OF_DWORD;
+
+ /* convert each dword from CPU byte order to big-endian */
+ while (index < word_sum) {
+ *buf = cpu_to_be32(*buf);
+ buf++;
+ index++;
+ }
+
+ if (size % UNF_BYTES_OF_DWORD) {
+ tmp = cpu_to_be32(*buf);
+ tmp =
+ tmp >> (DWORD_BIT - (size % UNF_BYTES_OF_DWORD) * BYTE_BIT);
+ memcpy(buf, &tmp, (size % UNF_BYTES_OF_DWORD));
+ }
+}
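+
+/*
+ * Usage sketch for the two helpers above (illustrative only; "payload" and
+ * "payload_len" are hypothetical caller variables). A big-endian wire buffer
+ * can be converted in place before its fields are read, and converted back
+ * before it is sent out again:
+ *
+ *	unf_big_end_to_cpu((u8 *)payload, payload_len);
+ *	... read CPU-order fields ...
+ *	unf_cpu_to_big_end(payload, payload_len);
+ *
+ * Note that unf_big_end_to_cpu() silently returns without converting when
+ * payload_len is not a multiple of 4, while unf_cpu_to_big_end() also
+ * converts a trailing partial dword.
+ */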
+
+#define UNF_TOP_AUTO_MASK 0x0f
+#define UNF_TOP_UNKNOWN 0xff
+#define SPFC_TOP_AUTO 0x0
+
+#define UNF_NORMAL_MODE 0
+#define UNF_SET_NOMAL_MODE(mode) ((mode) = UNF_NORMAL_MODE)
+
+/*
+ * * SCSI status
+ */
+#define SCSI_GOOD 0x00
+#define SCSI_CHECK_CONDITION 0x02
+#define SCSI_CONDITION_MET 0x04
+#define SCSI_BUSY 0x08
+#define SCSI_INTERMEDIATE 0x10
+#define SCSI_INTERMEDIATE_COND_MET 0x14
+#define SCSI_RESERVATION_CONFLICT 0x18
+#define SCSI_TASK_SET_FULL 0x28
+#define SCSI_ACA_ACTIVE 0x30
+#define SCSI_TASK_ABORTED 0x40
+
+enum unf_act_topo {
+ UNF_ACT_TOP_PUBLIC_LOOP = 0x1,
+ UNF_ACT_TOP_PRIVATE_LOOP = 0x2,
+ UNF_ACT_TOP_P2P_DIRECT = 0x4,
+ UNF_ACT_TOP_P2P_FABRIC = 0x8,
+ UNF_TOP_LOOP_MASK = 0x03,
+ UNF_TOP_P2P_MASK = 0x0c,
+ UNF_TOP_FCOE_MASK = 0x30,
+ UNF_ACT_TOP_UNKNOWN
+};
+
+#define UNF_FL_PORT_LOOP_ADDR 0x00
+#define UNF_INVALID_LOOP_ADDR 0xff
+
+#define UNF_LOOP_ROLE_MASTER_OR_SLAVE 0x0
+#define UNF_LOOP_ROLE_ONLY_SLAVE 0x1
+
+#define UNF_TOU16_CHECK(dest, src, over_action) \
+ do { \
+ if (unlikely(0xFFFF < (src))) { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, \
+ UNF_ERR, "ToU16 error, src 0x%x ", \
+ (src)); \
+ over_action; \
+ } \
+ ((dest) = (u16)(src)); \
+ } while (0)
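+
+/*
+ * Usage sketch (illustrative only; "dest16" and "src32" are hypothetical
+ * names): UNF_TOU16_CHECK() assigns a wider value to a u16 destination and,
+ * when the value exceeds 0xFFFF, logs an error and executes the
+ * caller-supplied over_action statement, e.g.:
+ *
+ *	UNF_TOU16_CHECK(dest16, src32, return);
+ */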
+
+#define UNF_PORT_SPEED_AUTO 0
+#define UNF_PORT_SPEED_2_G 2
+#define UNF_PORT_SPEED_4_G 4
+#define UNF_PORT_SPEED_8_G 8
+#define UNF_PORT_SPEED_10_G 10
+#define UNF_PORT_SPEED_16_G 16
+#define UNF_PORT_SPEED_32_G 32
+
+#define UNF_PORT_SPEED_UNKNOWN (~0)
+#define UNF_PORT_SFP_SPEED_ERR 0xFF
+
+#define UNF_OP_DEBUG_DUMP 0x0001
+#define UNF_OP_FCPORT_INFO 0x0002
+#define UNF_OP_FCPORT_LINK_CMD_TEST 0x0003
+#define UNF_OP_TEST_MBX 0x0004
+
+/* max frame size */
+#define UNF_MAX_FRAME_SIZE 2112
+
+/* default */
+#define UNF_DEFAULT_FRAME_SIZE 2048
+#define UNF_DEFAULT_EDTOV 2000
+#define UNF_DEFAULT_RATOV 10000
+#define UNF_DEFAULT_FABRIC_RATOV 10000
+#define UNF_MAX_RETRY_COUNT 3
+#define UNF_RRQ_MIN_TIMEOUT_INTERVAL 30000
+#define UNF_LOGO_TIMEOUT_INTERVAL 3000
+#define UNF_SFS_MIN_TIMEOUT_INTERVAL 15000
+#define UNF_WRITE_RRQ_SENDERR_INTERVAL 3000
+#define UNF_REC_TOV 3000
+
+#define UNF_WAIT_SEM_TIMEOUT (5000UL)
+#define UNF_WAIT_ABTS_RSP_TIMEOUT (20000UL)
+#define UNF_MAX_ABTS_WAIT_INTERVAL ((UNF_WAIT_SEM_TIMEOUT - 500) / 1000)
+
+#define UNF_TGT_RRQ_REDUNDANT_TIME 2000
+#define UNF_INI_RRQ_REDUNDANT_TIME 500
+#define UNF_INI_ELS_REDUNDANT_TIME 2000
+
+/* ELS command values */
+#define UNF_ELS_CMND_HIGH_MASK 0xff000000
+#define UNF_ELS_CMND_RJT 0x01000000
+#define UNF_ELS_CMND_ACC 0x02000000
+#define UNF_ELS_CMND_PLOGI 0x03000000
+#define UNF_ELS_CMND_FLOGI 0x04000000
+#define UNF_ELS_CMND_LOGO 0x05000000
+#define UNF_ELS_CMND_RLS 0x0F000000
+#define UNF_ELS_CMND_ECHO 0x10000000
+#define UNF_ELS_CMND_REC 0x13000000
+#define UNF_ELS_CMND_RRQ 0x12000000
+#define UNF_ELS_CMND_PRLI 0x20000000
+#define UNF_ELS_CMND_PRLO 0x21000000
+#define UNF_ELS_CMND_PDISC 0x50000000
+#define UNF_ELS_CMND_FDISC 0x51000000
+#define UNF_ELS_CMND_ADISC 0x52000000
+#define UNF_ELS_CMND_FAN 0x60000000
+#define UNF_ELS_CMND_RSCN 0x61000000
+#define UNF_FCP_CMND_SRR 0x14000000
+#define UNF_GS_CMND_SCR 0x62000000
+
+#define UNF_PLOGI_VERSION_UPPER 0x20
+#define UNF_PLOGI_VERSION_LOWER 0x20
+#define UNF_PLOGI_CONCURRENT_SEQ 0x00FF
+#define UNF_PLOGI_RO_CATEGORY 0x00FE
+#define UNF_PLOGI_SEQ_PER_XCHG 0x0001
+#define UNF_LGN_INFRAMESIZE 2048
+
+/* CT_IU pream defines */
+#define UNF_REV_NPORTID_INIT 0x01000000
+#define UNF_FSTYPE_OPT_INIT 0xfc020000
+#define UNF_FSTYPE_RFT_ID 0x02170000
+#define UNF_FSTYPE_GID_PT 0x01A10000
+#define UNF_FSTYPE_GID_FT 0x01710000
+#define UNF_FSTYPE_RFF_ID 0x021F0000
+#define UNF_FSTYPE_GFF_ID 0x011F0000
+#define UNF_FSTYPE_GNN_ID 0x01130000
+#define UNF_FSTYPE_GPN_ID 0x01120000
+
+#define UNF_CT_IU_RSP_MASK 0xffff0000
+#define UNF_CT_IU_REASON_MASK 0x00ff0000
+#define UNF_CT_IU_EXPLAN_MASK 0x0000ff00
+#define UNF_CT_IU_REJECT 0x80010000
+#define UNF_CT_IU_ACCEPT 0x80020000
+
+#define UNF_FABRIC_FULL_REG 0x00000003
+
+#define UNF_FC4_SCSI_BIT8 0x00000100
+#define UNF_FC4_FCP_TYPE 0x00000008
+#define UNF_FRAG_REASON_VENDOR 0
+
+/* GID_PT, GID_FT */
+#define UNF_GID_PT_TYPE 0x7F000000
+#define UNF_GID_FT_TYPE 0x00000008
+
+/*
+ *FC4 defines
+ */
+#define UNF_FC4_FRAME_PAGE_SIZE 0x10
+#define UNF_FC4_FRAME_PAGE_SIZE_SHIFT 16
+
+#define UNF_FC4_FRAME_PARM_0_FCP 0x08000000
+#define UNF_FC4_FRAME_PARM_0_I_PAIR 0x00002000
+#define UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE 0x00000100
+#define UNF_FC4_FRAME_PARM_0_MASK \
+ (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_I_PAIR | \
+ UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE)
+#define UNF_FC4_FRAME_PARM_3_INI 0x00000020
+#define UNF_FC4_FRAME_PARM_3_TGT 0x00000010
+#define UNF_FC4_FRAME_PARM_3_BOTH \
+ (UNF_FC4_FRAME_PARM_3_INI | UNF_FC4_FRAME_PARM_3_TGT)
+#define UNF_FC4_FRAME_PARM_3_R_XFER_DIS 0x00000002
+#define UNF_FC4_FRAME_PARM_3_W_XFER_DIS 0x00000001
+#define UNF_FC4_FRAME_PARM_3_REC_SUPPORT 0x00000400 /* bit 10 */
+#define UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT 0x00000200 /* bit 9 */
+#define UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT 0x00000100 /* bit 8 */
+#define UNF_FC4_FRAME_PARM_3_CONF_ALLOW 0x00000080 /* bit 7 */
+
+#define UNF_FC4_FRAME_PARM_3_MASK \
+ (UNF_FC4_FRAME_PARM_3_INI | UNF_FC4_FRAME_PARM_3_TGT | \
+ UNF_FC4_FRAME_PARM_3_R_XFER_DIS)
+
+#define UNF_FC4_TYPE_SHIFT 24
+#define UNF_FC4_TYPE_MASK 0xff
+/* FC4 feature we support */
+#define UNF_GFF_ACC_MASK 0xFF000000
+
+/* Reject CT_IU Reason Codes */
+#define UNF_CTIU_RJT_MASK 0xffff0000
+#define UNF_CTIU_RJT_INVALID_COMMAND 0x00010000
+#define UNF_CTIU_RJT_INVALID_VERSION 0x00020000
+#define UNF_CTIU_RJT_LOGIC_ERR 0x00030000
+#define UNF_CTIU_RJT_INVALID_SIZE 0x00040000
+#define UNF_CTIU_RJT_LOGIC_BUSY 0x00050000
+#define UNF_CTIU_RJT_PROTOCOL_ERR 0x00070000
+#define UNF_CTIU_RJT_UNABLE_PERFORM 0x00090000
+#define UNF_CTIU_RJT_NOT_SUPPORTED 0x000B0000
+
+/* FS_RJT Reason code explanations, FC-GS-2 6.5 */
+#define UNF_CTIU_RJT_EXP_MASK 0x0000FF00
+#define UNF_CTIU_RJT_EXP_NO_ADDTION 0x00000000
+#define UNF_CTIU_RJT_EXP_PORTID_NO_REG 0x00000100
+#define UNF_CTIU_RJT_EXP_PORTNAME_NO_REG 0x00000200
+#define UNF_CTIU_RJT_EXP_NODENAME_NO_REG 0x00000300
+#define UNF_CTIU_RJT_EXP_FC4TYPE_NO_REG 0x00000700
+#define UNF_CTIU_RJT_EXP_PORTTYPE_NO_REG 0x00000A00
+
+/*
+ * LS_RJT defines
+ */
+#define UNF_FC_LS_RJT_REASON_MASK 0x00ff0000
+
+/*
+ * LS_RJT reason code defines
+ */
+#define UNF_LS_OK 0x00000000
+#define UNF_LS_RJT_INVALID_COMMAND 0x00010000
+#define UNF_LS_RJT_LOGICAL_ERROR 0x00030000
+#define UNF_LS_RJT_BUSY 0x00050000
+#define UNF_LS_RJT_PROTOCOL_ERROR 0x00070000
+#define UNF_LS_RJT_REQUEST_DENIED 0x00090000
+#define UNF_LS_RJT_NOT_SUPPORTED 0x000b0000
+#define UNF_LS_RJT_CLASS_ERROR 0x000c0000
+
+/*
+ * LS_RJT code explanation
+ */
+#define UNF_LS_RJT_NO_ADDITIONAL_INFO 0x00000000
+#define UNF_LS_RJT_INV_DATA_FIELD_SIZE 0x00000700
+#define UNF_LS_RJT_INV_COMMON_SERV_PARAM 0x00000F00
+#define UNF_LS_RJT_INVALID_OXID_RXID 0x00001700
+#define UNF_LS_RJT_COMMAND_IN_PROGRESS 0x00001900
+#define UNF_LS_RJT_INSUFFICIENT_RESOURCES 0x00002900
+#define UNF_LS_RJT_COMMAND_NOT_SUPPORTED 0x00002C00
+#define UNF_LS_RJT_UNABLE_TO_SUPLY_REQ_DATA 0x00002A00
+#define UNF_LS_RJT_INVALID_PAYLOAD_LENGTH 0x00002D00
+
+#define UNF_P2P_LOCAL_NPORT_ID 0x000000EF
+#define UNF_P2P_REMOTE_NPORT_ID 0x000000D6
+
+#define UNF_BBCREDIT_MANAGE_NFPORT 0
+#define UNF_BBCREDIT_MANAGE_LPORT 1
+#define UNF_BBCREDIT_LPORT 0
+#define UNF_CONTIN_INCREASE_SUPPORT 1
+#define UNF_CLASS_VALID 1
+#define UNF_CLASS_INVALID 0
+#define UNF_NOT_MEANINGFUL 0
+#define UNF_NO_SERVICE_PARAMS 0
+#define UNF_CLEAN_ADDRESS_DEFAULT 0
+#define UNF_PRIORITY_ENABLE 1
+#define UNF_PRIORITY_DISABLE 0
+#define UNF_SEQUEN_DELIVERY_REQ 1 /* Sequential delivery requested */
+
+#define UNF_FC_PROTOCOL_CLASS_3 0x0
+#define UNF_FC_PROTOCOL_CLASS_2 0x1
+#define UNF_FC_PROTOCOL_CLASS_1 0x2
+#define UNF_FC_PROTOCOL_CLASS_F 0x3
+#define UNF_FC_PROTOCOL_CLASS_OTHER 0x4
+
+#define UNF_RSCN_PORT_ADDR 0x0
+#define UNF_RSCN_AREA_ADDR_GROUP 0x1
+#define UNF_RSCN_DOMAIN_ADDR_GROUP 0x2
+#define UNF_RSCN_FABRIC_ADDR_GROUP 0x3
+
+#define UNF_GET_RSCN_PLD_LEN(cmnd) ((cmnd) & 0x0000ffff)
+#define UNF_RSCN_PAGE_LEN 0x4
+
+#define UNF_PORT_LINK_UP 0x0000
+#define UNF_PORT_LINK_DOWN 0x0001
+#define UNF_PORT_RESET_START 0x0002
+#define UNF_PORT_RESET_END 0x0003
+#define UNF_PORT_LINK_UNKNOWN 0x0004
+#define UNF_PORT_NOP 0x0005
+#define UNF_PORT_CORE_FATAL_ERROR 0x0006
+#define UNF_PORT_CORE_UNRECOVERABLE_ERROR 0x0007
+#define UNF_PORT_CORE_RECOVERABLE_ERROR 0x0008
+#define UNF_PORT_LOGOUT 0x0009
+#define UNF_PORT_CLEAR_VLINK 0x000a
+#define UNF_PORT_UPDATE_PROCESS 0x000b
+#define UNF_PORT_DEBUG_DUMP 0x000c
+#define UNF_PORT_GET_FWLOG 0x000d
+#define UNF_PORT_CLEAN_DONE 0x000e
+#define UNF_PORT_BEGIN_REMOVE 0x000f
+#define UNF_PORT_RELEASE_RPORT_INDEX 0x0010
+#define UNF_PORT_ABNORMAL_RESET 0x0012
+
+/*
+ * SCSI begin
+ */
+#define SCSIOPC_TEST_UNIT_READY 0x00
+#define SCSIOPC_INQUIRY 0x12
+#define SCSIOPC_MODE_SENSE_6 0x1A
+#define SCSIOPC_MODE_SENSE_10 0x5A
+#define SCSIOPC_MODE_SELECT_6 0x15
+#define SCSIOPC_RESERVE 0x16
+#define SCSIOPC_RELEASE 0x17
+#define SCSIOPC_START_STOP_UNIT 0x1B
+#define SCSIOPC_READ_CAPACITY_10 0x25
+#define SCSIOPC_READ_CAPACITY_16 0x9E
+#define SCSIOPC_READ_6 0x08
+#define SCSIOPC_READ_10 0x28
+#define SCSIOPC_READ_12 0xA8
+#define SCSIOPC_READ_16 0x88
+#define SCSIOPC_WRITE_6 0x0A
+#define SCSIOPC_WRITE_10 0x2A
+#define SCSIOPC_WRITE_12 0xAA
+#define SCSIOPC_WRITE_16 0x8A
+#define SCSIOPC_WRITE_VERIFY 0x2E
+#define SCSIOPC_VERIFY_10 0x2F
+#define SCSIOPC_VERIFY_12 0xAF
+#define SCSIOPC_VERIFY_16 0x8F
+#define SCSIOPC_REQUEST_SENSE 0x03
+#define SCSIOPC_REPORT_LUN 0xA0
+#define SCSIOPC_FORMAT_UNIT 0x04
+#define SCSIOPC_SEND_DIAGNOSTIC 0x1D
+#define SCSIOPC_WRITE_SAME_10 0x41
+#define SCSIOPC_WRITE_SAME_16 0x93
+#define SCSIOPC_READ_BUFFER 0x3C
+#define SCSIOPC_WRITE_BUFFER 0x3B
+
+#define SCSIOPC_LOG_SENSE 0x4D
+#define SCSIOPC_MODE_SELECT_10 0x55
+#define SCSIOPC_SYNCHRONIZE_CACHE_10 0x35
+#define SCSIOPC_SYNCHRONIZE_CACHE_16 0x91
+#define SCSIOPC_WRITE_AND_VERIFY_10 0x2E
+#define SCSIOPC_WRITE_AND_VERIFY_12 0xAE
+#define SCSIOPC_WRITE_AND_VERIFY_16 0x8E
+#define SCSIOPC_READ_MEDIA_SERIAL_NUMBER 0xAB
+#define SCSIOPC_REASSIGN_BLOCKS 0x07
+#define SCSIOPC_ATA_PASSTHROUGH_16 0x85
+#define SCSIOPC_ATA_PASSTHROUGH_12 0xa1
+
+/*
+ * SCSI end
+ */
+#define IS_READ_COMMAND(opcode) \
+ ((opcode) == SCSIOPC_READ_6 || (opcode) == SCSIOPC_READ_10 || \
+ (opcode) == SCSIOPC_READ_12 || (opcode) == SCSIOPC_READ_16)
+#define IS_WRITE_COMMAND(opcode) \
+ ((opcode) == SCSIOPC_WRITE_6 || (opcode) == SCSIOPC_WRITE_10 || \
+ (opcode) == SCSIOPC_WRITE_12 || (opcode) == SCSIOPC_WRITE_16)
+
+#define IS_VERIFY_COMMAND(opcode) \
+ ((opcode) == SCSIOPC_VERIFY_10 || (opcode) == SCSIOPC_VERIFY_12 || \
+ (opcode) == SCSIOPC_VERIFY_16)
+
+#define FCP_RSP_LEN_VALID_MASK 0x1
+#define FCP_SNS_LEN_VALID_MASK 0x2
+#define FCP_RESID_OVER_MASK 0x4
+#define FCP_RESID_UNDER_MASK 0x8
+#define FCP_CONF_REQ_MASK 0x10
+#define FCP_SCSI_STATUS_GOOD 0x0
+
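+/*
+ * Synchronously cancel a delayed work item: 'ret' is set to RETURN_OK when a
+ * pending worker was cancelled, or UNF_RETURN_ERROR (with a log entry) when
+ * there was no pending worker to cancel.
+ */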
+#define UNF_DELAYED_WORK_SYNC(ret, port_id, work, work_symb) \
+ do { \
+ if (!cancel_delayed_work_sync(work)) { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, \
+ UNF_INFO, \
+ "[info]LPort or RPort(0x%x) %s worker " \
+ "can't destroy, or no " \
+ "worker", \
+ port_id, work_symb); \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = RETURN_OK; \
+ } \
+ } while (0)
+
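+/*
+ * The accessors below interpret a frame package's command payload buffer as
+ * the SFS union and pick out the payload pointer and length of a specific
+ * ELS or GS frame (FLOGI, PLOGI, PRLI, GID_xx, RFT_ID, ...).
+ */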
+#define UNF_GET_SFS_ENTRY(pkg) ((union unf_sfs_u *)(void *)(((struct unf_frame_pkg *)(pkg)) \
+ ->unf_cmnd_pload_bl.buffer_ptr))
+/* FLOGI */
+#define UNF_GET_FLOGI_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->flogi.flogi_payload))
+#define UNF_FLOGI_PAYLOAD_LEN sizeof(struct unf_flogi_fdisc_payload)
+
+/* FLOGI ACC */
+#define UNF_GET_FLOGI_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg))) \
+ ->flogi_acc.flogi_payload))
+#define UNF_FLOGI_ACC_PAYLOAD_LEN sizeof(struct unf_flogi_fdisc_payload)
+
+/* FDISC */
+#define UNF_FDISC_PAYLOAD_LEN UNF_FLOGI_PAYLOAD_LEN
+#define UNF_FDISC_ACC_PAYLOAD_LEN UNF_FLOGI_ACC_PAYLOAD_LEN
+
+/* PLOGI */
+#define UNF_GET_PLOGI_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->plogi.payload))
+#define UNF_PLOGI_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
+
+/* PLOGI ACC */
+#define UNF_GET_PLOGI_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->plogi_acc.payload))
+#define UNF_PLOGI_ACC_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
+
+/* LOGO */
+#define UNF_GET_LOGO_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->logo.payload))
+#define UNF_LOGO_PAYLOAD_LEN sizeof(struct unf_logo_payload)
+
+/* ECHO */
+#define UNF_GET_ECHO_PAYLOAD(pkg) \
+ (((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->echo.echo_pld)
+
+/* ECHO PHYADDR */
+#define UNF_GET_ECHO_PAYLOAD_PHYADDR(pkg) \
+ (((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->echo.phy_echo_addr)
+
+#define UNF_ECHO_PAYLOAD_LEN sizeof(struct unf_echo_payload)
+
+/* REC */
+#define UNF_GET_REC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->rec.rec_pld))
+
+#define UNF_REC_PAYLOAD_LEN sizeof(struct unf_rec_pld)
+
+/* ECHO ACC */
+#define UNF_GET_ECHO_ACC_PAYLOAD(pkg) \
+ (((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->echo_acc.echo_pld)
+#define UNF_ECHO_ACC_PAYLOAD_LEN sizeof(struct unf_echo_payload)
+
+/* RRQ */
+#define UNF_GET_RRQ_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->rrq.cmnd))
+#define UNF_RRQ_PAYLOAD_LEN \
+ (sizeof(struct unf_rrq) - sizeof(struct unf_fc_head))
+
+/* PRLI */
+#define UNF_GET_PRLI_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prli.payload))
+#define UNF_PRLI_PAYLOAD_LEN sizeof(struct unf_prli_payload)
+
+/* PRLI ACC */
+#define UNF_GET_PRLI_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prli_acc.payload))
+#define UNF_PRLI_ACC_PAYLOAD_LEN sizeof(struct unf_prli_payload)
+
+/* PRLO */
+#define UNF_GET_PRLO_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prlo.payload))
+#define UNF_PRLO_PAYLOAD_LEN sizeof(struct unf_prli_payload)
+
+#define UNF_GET_PRLO_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prlo_acc.payload))
+#define UNF_PRLO_ACC_PAYLOAD_LEN sizeof(struct unf_prli_payload)
+
+/* PDISC */
+#define UNF_GET_PDISC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->pdisc.payload))
+#define UNF_PDISC_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
+
+/* PDISC ACC */
+#define UNF_GET_PDISC_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->pdisc_acc.payload))
+#define UNF_PDISC_ACC_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
+
+/* ADISC */
+#define UNF_GET_ADISC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->adisc.adisc_payl))
+#define UNF_ADISC_PAYLOAD_LEN sizeof(struct unf_adisc_payload)
+
+/* ADISC ACC */
+#define UNF_GET_ADISC_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->adisc_acc.adisc_payl))
+#define UNF_ADISC_ACC_PAYLOAD_LEN sizeof(struct unf_adisc_payload)
+
+/* RSCN ACC */
+#define UNF_GET_RSCN_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
+#define UNF_RSCN_ACC_PAYLOAD_LEN \
+ (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
+
+/* LOGO ACC */
+#define UNF_GET_LOGO_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
+#define UNF_LOGO_ACC_PAYLOAD_LEN \
+ (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
+
+/* RRQ ACC */
+#define UNF_GET_RRQ_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
+#define UNF_RRQ_ACC_PAYLOAD_LEN \
+ (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
+
+/* REC ACC */
+#define UNF_GET_REC_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
+#define UNF_REC_ACC_PAYLOAD_LEN \
+ (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
+
+/* GPN_ID */
+#define UNF_GET_GPNID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gpn_id.ctiu_pream))
+#define UNF_GPNID_PAYLOAD_LEN \
+ (sizeof(struct unf_gpnid) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_GPNID_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gpn_id_rsp.ctiu_pream))
+#define UNF_GPNID_RSP_PAYLOAD_LEN \
+ (sizeof(struct unf_gpnid_rsp) - sizeof(struct unf_fc_head))
+
+/* GNN_ID */
+#define UNF_GET_GNNID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gnn_id.ctiu_pream))
+#define UNF_GNNID_PAYLOAD_LEN \
+ (sizeof(struct unf_gnnid) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_GNNID_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gnn_id_rsp.ctiu_pream))
+#define UNF_GNNID_RSP_PAYLOAD_LEN \
+ (sizeof(struct unf_gnnid_rsp) - sizeof(struct unf_fc_head))
+
+/* GFF_ID */
+#define UNF_GET_GFFID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gff_id.ctiu_pream))
+#define UNF_GFFID_PAYLOAD_LEN \
+ (sizeof(struct unf_gffid) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_GFFID_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gff_id_rsp.ctiu_pream))
+#define UNF_GFFID_RSP_PAYLOAD_LEN \
+ (sizeof(struct unf_gffid_rsp) - sizeof(struct unf_fc_head))
+
+/* GID_FT/GID_PT */
+#define UNF_GET_GID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg)) \
+ ->get_id.gid_req.ctiu_pream))
+
+#define UNF_GID_PAYLOAD_LEN (sizeof(struct unf_ctiu_prem) + sizeof(u32))
+#define UNF_GET_GID_ACC_PAYLOAD(pkg) \
+ (((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg)) \
+ ->get_id.gid_rsp.gid_acc_pld)
+#define UNF_GID_ACC_PAYLOAD_LEN sizeof(struct unf_gid_acc_pld)
+
+/* RFT_ID */
+#define UNF_GET_RFTID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rft_id.ctiu_pream))
+#define UNF_RFTID_PAYLOAD_LEN \
+ (sizeof(struct unf_rftid) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_RFTID_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rft_id_rsp.ctiu_pream))
+#define UNF_RFTID_RSP_PAYLOAD_LEN sizeof(struct unf_ctiu_prem)
+
+/* RFF_ID */
+#define UNF_GET_RFFID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rff_id.ctiu_pream))
+#define UNF_RFFID_PAYLOAD_LEN \
+ (sizeof(struct unf_rffid) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_RFFID_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rff_id_rsp.ctiu_pream))
+#define UNF_RFFID_RSP_PAYLOAD_LEN sizeof(struct unf_ctiu_prem)
+
+/* ACC&RJT */
+#define UNF_GET_ELS_ACC_RJT_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->els_rjt.cmnd))
+#define UNF_ELS_ACC_RJT_LEN \
+ (sizeof(struct unf_els_rjt) - sizeof(struct unf_fc_head))
+
+/* SCR */
+#define UNF_SCR_PAYLOAD(pkg) \
+ (((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->scr.payload)
+#define UNF_SCR_PAYLOAD_LEN \
+ (sizeof(struct unf_scr) - sizeof(struct unf_fc_head))
+
+#define UNF_SCR_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->els_acc.cmnd))
+#define UNF_SCR_RSP_PAYLOAD_LEN \
+ (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
+
+#define UNF_GS_RSP_PAYLOAD_LEN \
+ (sizeof(union unf_sfs_u) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_XCHG_TAG(pkg) \
+ (((struct unf_frame_pkg *)(pkg)) \
+ ->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX])
+#define UNF_GET_ABTS_XCHG_TAG(pkg) \
+ ((u16)(((pkg)->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]) >> 16))
+#define UNF_GET_IO_XCHG_TAG(pkg) \
+ ((u16)((pkg)->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]))
+
+#define UNF_GET_HOTPOOL_TAG(pkg) \
+ (((struct unf_frame_pkg *)(pkg)) \
+ ->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX])
+#define UNF_GET_SID(pkg) \
+ (((struct unf_frame_pkg *)(pkg))->frame_head.csctl_sid & \
+ UNF_NPORTID_MASK)
+#define UNF_GET_DID(pkg) \
+ (((struct unf_frame_pkg *)(pkg))->frame_head.rctl_did & \
+ UNF_NPORTID_MASK)
+#define UNF_GET_OXID(pkg) \
+ (((struct unf_frame_pkg *)(pkg))->frame_head.oxid_rxid >> 16)
+#define UNF_GET_RXID(pkg) \
+ ((u16)((struct unf_frame_pkg *)(pkg))->frame_head.oxid_rxid)
+#define UNF_GET_XID_RELEASE_TIMER(pkg) \
+ (((struct unf_frame_pkg *)(pkg))->release_task_id_timer)
+#define UNF_GETXCHGALLOCTIME(pkg) \
+ (((struct unf_frame_pkg *)(pkg)) \
+ ->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME])
+
+#define UNF_SET_XCHG_ALLOC_TIME(pkg, xchg) \
+ (((struct unf_frame_pkg *)(pkg)) \
+ ->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = \
+ (((struct unf_xchg *)(xchg)) \
+ ->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]))
+#define UNF_SET_ABORT_INFO_IOTYPE(pkg, xchg) \
+ (((struct unf_frame_pkg *)(pkg)) \
+ ->private_data[PKG_PRIVATE_XCHG_ABORT_INFO] |= \
+ (((u8)(((struct unf_xchg *)(xchg))->data_direction & 0x7)) \
+ << 2))
+
+#define UNF_CHECK_NPORT_FPORT_BIT(els_payload) \
+ (((struct unf_flogi_fdisc_payload *)(els_payload)) \
+ ->fabric_parms.co_parms.nport)
+
+#define UNF_GET_RSP_BUF(pkg) \
+ ((void *)(((struct unf_frame_pkg *)(pkg))->unf_rsp_pload_bl.buffer_ptr))
+#define UNF_GET_RSP_LEN(pkg) \
+ (((struct unf_frame_pkg *)(pkg))->unf_rsp_pload_bl.length)
+
+#define UNF_N_PORT 0
+#define UNF_F_PORT 1
+
+#define UNF_GET_RA_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_a_tov)
+#define UNF_GET_RT_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_t_tov)
+#define UNF_GET_E_D_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov)
+#define UNF_GET_E_D_TOV_RESOLUTION_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov_resolution)
+#define UNF_GET_BB_SC_N_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.bbscn)
+#define UNF_GET_BB_CREDIT_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.bb_credit)
+
+enum unf_pcie_error_code {
+ UNF_PCIE_ERROR_NONE = 0,
+ UNF_PCIE_DATAPARITYDETECTED = 1,
+ UNF_PCIE_SIGNALTARGETABORT,
+ UNF_PCIE_RECEIVEDTARGETABORT,
+ UNF_PCIE_RECEIVEDMASTERABORT,
+ UNF_PCIE_SIGNALEDSYSTEMERROR,
+ UNF_PCIE_DETECTEDPARITYERROR,
+ UNF_PCIE_CORRECTABLEERRORDETECTED,
+ UNF_PCIE_NONFATALERRORDETECTED,
+ UNF_PCIE_FATALERRORDETECTED,
+ UNF_PCIE_UNSUPPORTEDREQUESTDETECTED,
+ UNF_PCIE_AUXILIARYPOWERDETECTED,
+ UNF_PCIE_TRANSACTIONSPENDING,
+
+ UNF_PCIE_UNCORRECTINTERERRSTATUS,
+ UNF_PCIE_UNSUPPORTREQERRSTATUS,
+ UNF_PCIE_ECRCERRORSTATUS,
+ UNF_PCIE_MALFORMEDTLPSTATUS,
+ UNF_PCIE_RECEIVEROVERFLOWSTATUS,
+ UNF_PCIE_UNEXPECTCOMPLETESTATUS,
+ UNF_PCIE_COMPLETERABORTSTATUS,
+ UNF_PCIE_COMPLETIONTIMEOUTSTATUS,
+ UNF_PCIE_FLOWCTRLPROTOCOLERRSTATUS,
+ UNF_PCIE_POISONEDTLPSTATUS,
+ UNF_PCIE_SURPRISEDOWNERRORSTATUS,
+ UNF_PCIE_DATALINKPROTOCOLERRSTATUS,
+ UNF_PCIE_ADVISORYNONFATALERRSTATUS,
+ UNF_PCIE_REPLAYTIMERTIMEOUTSTATUS,
+ UNF_PCIE_REPLAYNUMROLLOVERSTATUS,
+ UNF_PCIE_BADDLLPSTATUS,
+ UNF_PCIE_BADTLPSTATUS,
+ UNF_PCIE_RECEIVERERRORSTATUS,
+
+ UNF_PCIE_BUTT
+};
+
+#define UNF_DMA_HI32(a) (((a) >> 32) & 0xffffffff)
+#define UNF_DMA_LO32(a) ((a) & 0xffffffff)
+
+#define UNF_WWN_LEN 8
+#define UNF_MAC_LEN 6
+
+/* send BLS/ELS/BLS REPLY/ELS REPLY/GS/ */
+/* rcvd BLS/ELS/REQ DONE/REPLY DONE */
+#define UNF_PKG_BLS_REQ 0x0100
+#define UNF_PKG_BLS_REQ_DONE 0x0101
+#define UNF_PKG_BLS_REPLY 0x0102
+#define UNF_PKG_BLS_REPLY_DONE 0x0103
+
+#define UNF_PKG_ELS_REQ 0x0200
+#define UNF_PKG_ELS_REQ_DONE 0x0201
+
+#define UNF_PKG_ELS_REPLY 0x0202
+#define UNF_PKG_ELS_REPLY_DONE 0x0203
+
+#define UNF_PKG_GS_REQ 0x0300
+#define UNF_PKG_GS_REQ_DONE 0x0301
+
+#define UNF_PKG_TGT_XFER 0x0400
+#define UNF_PKG_TGT_RSP 0x0401
+#define UNF_PKG_TGT_RSP_NOSGL 0x0402
+#define UNF_PKG_TGT_RSP_STATUS 0x0403
+
+#define UNF_PKG_INI_IO 0x0500
+#define UNF_PKG_INI_RCV_TGT_RSP 0x0507
+
+/* external sgl struct start */
+struct unf_esgl_page {
+ u64 page_address;
+ dma_addr_t esgl_phy_addr;
+ u32 page_size;
+};
+
+/* external sgl struct end */
+struct unf_esgl {
+ struct list_head entry_esgl;
+ struct unf_esgl_page page;
+};
+
+#define UNF_RESPONE_DATA_LEN 8
+struct unf_frame_payld {
+ u8 *buffer_ptr;
+ dma_addr_t buf_dma_addr;
+ u32 length;
+};
+
+enum pkg_private_index {
+ PKG_PRIVATE_LOWLEVEL_XCHG_ADD = 0,
+ PKG_PRIVATE_XCHG_HOT_POOL_INDEX = 1, /* Hot Pool Index */
+ PKG_PRIVATE_XCHG_RPORT_INDEX = 2, /* RPort index */
+ PKG_PRIVATE_XCHG_VP_INDEX = 3, /* VPort index */
+ PKG_PRIVATE_XCHG_SSQ_INDEX,
+ PKG_PRIVATE_RPORT_RX_SIZE,
+ PKG_PRIVATE_XCHG_TIMEER,
+ PKG_PRIVATE_XCHG_ALLOC_TIME,
+ PKG_PRIVATE_XCHG_ABORT_INFO,
+ PKG_PRIVATE_ECHO_CMD_SND_TIME, /* local send echo cmd time stamp */
+ PKG_PRIVATE_ECHO_ACC_RCV_TIME, /* local receive echo acc time stamp */
+ PKG_PRIVATE_ECHO_CMD_RCV_TIME, /* remote receive echo cmd time stamp */
+ PKG_PRIVATE_ECHO_RSP_SND_TIME, /* remote send echo rsp time stamp */
+ PKG_MAX_PRIVATE_DATA_SIZE
+};
+
+extern u32 dix_flag;
+extern u32 dif_sgl_mode;
+extern u32 dif_app_esc_check;
+extern u32 dif_ref_esc_check;
+
+#define UNF_DIF_ACTION_NONE 0
+
+enum unf_adm_dif_mode_E {
+ UNF_SWITCH_DIF_DIX = 0,
+ UNF_APP_REF_ESCAPE,
+ ALL_DIF_MODE = 20,
+};
+
+#define UNF_DIF_CRC_ERR 0x1001
+#define UNF_DIF_APP_ERR 0x1002
+#define UNF_DIF_LBA_ERR 0x1003
+
+#define UNF_VERIFY_CRC_MASK (1 << 1)
+#define UNF_VERIFY_APP_MASK (1 << 2)
+#define UNF_VERIFY_LBA_MASK (1 << 3)
+
+#define UNF_REPLACE_CRC_MASK (1 << 8)
+#define UNF_REPLACE_APP_MASK (1 << 9)
+#define UNF_REPLACE_LBA_MASK (1 << 10)
+
+#define UNF_DIF_ACTION_MASK (0xff << 16)
+#define UNF_DIF_ACTION_INSERT (0x1 << 16)
+#define UNF_DIF_ACTION_VERIFY_AND_DELETE (0x2 << 16)
+#define UNF_DIF_ACTION_VERIFY_AND_FORWARD (0x3 << 16)
+#define UNF_DIF_ACTION_VERIFY_AND_REPLACE (0x4 << 16)
+
+#define UNF_DIF_ACTION_NO_INCREASE_REFTAG (0x1 << 24)
+
+#define UNF_DEFAULT_CRC_GUARD_SEED (0)
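+/*
+ * DIF/DIX block-count helpers: with protection information every sector is
+ * followed by an 8-byte PI field, hence the (sector_size + 8) divisor.
+ */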
+#define UNF_CAL_512_BLOCK_CNT(data_len) ((data_len) >> 9)
+#define UNF_CAL_BLOCK_CNT(data_len, sector_size) ((data_len) / (sector_size))
+#define UNF_CAL_CRC_BLK_CNT(crc_data_len, sector_size) \
+ ((crc_data_len) / ((sector_size) + 8))
+
+#define UNF_DIF_DOUBLE_SGL (1 << 1)
+#define UNF_DIF_SECTSIZE_4KB (1 << 2)
+#define UNF_DIF_SECTSIZE_512 (0 << 2)
+#define UNF_DIF_LBA_NONE_INCREASE (1 << 3)
+#define UNF_DIF_TYPE3 (1 << 4)
+
+#define SECTOR_SIZE_512 512
+#define SECTOR_SIZE_4096 4096
+#define SPFC_DIF_APP_REF_ESC_NOT_CHECK 1
+#define SPFC_DIF_APP_REF_ESC_CHECK 0
+
+struct unf_dif {
+ u16 crc;
+ u16 app_tag;
+ u32 lba;
+};
+
+enum unf_io_state { UNF_INI_IO = 0, UNF_TGT_XFER = 1, UNF_TGT_RSP = 2 };
+
+#define UNF_PKG_LAST_RESPONSE 0
+#define UNF_PKG_NOT_LAST_RESPONSE 1
+
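+/*
+ * Per-frame descriptor exchanged between the common layer and the low-level
+ * driver: frame header, payload buffers, DIF control and per-exchange
+ * private data (see enum pkg_private_index).
+ */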
+struct unf_frame_pkg {
+ /* pkt type:BLS/ELS/FC4LS/CMND/XFER/RSP */
+ u32 type;
+ u32 last_pkg_flag;
+ u32 fcp_conf_flag;
+
+#define UNF_FCP_RESPONSE_VALID 0x01
+#define UNF_FCP_SENSE_VALID 0x02
+ u32 response_and_sense_valid_flag; /* response and sense valid flag */
+ u32 cmnd;
+ struct unf_fc_head frame_head;
+ u32 entry_count;
+ void *xchg_contex;
+ u32 transfer_len;
+ u32 residus_len;
+ u32 status;
+ u32 status_sub_code;
+ enum unf_io_state io_state;
+ u32 qos_level;
+ u32 private_data[PKG_MAX_PRIVATE_DATA_SIZE];
+ struct unf_fcp_cmnd *fcp_cmnd;
+ struct unf_dif_control_info dif_control;
+ struct unf_frame_payld unf_cmnd_pload_bl;
+ struct unf_frame_payld unf_rsp_pload_bl;
+ struct unf_frame_payld unf_sense_pload_bl;
+ void *upper_cmd;
+ u32 abts_maker_status;
+ u32 release_task_id_timer;
+ u8 byte_orders;
+ u8 rx_or_ox_id;
+ u8 class_mode;
+ u8 rsvd;
+ u8 *peresp;
+ u32 rcvrsp_len;
+ ulong timeout;
+ u32 origin_hottag;
+ u32 origin_magicnum;
+};
+
+#define UNF_MAX_SFS_XCHG 2048
+#define UNF_RESERVE_SFS_XCHG 128 /* times on exchange mgr num */
+
+struct unf_lport_cfg_item {
+ u32 port_id;
+ u32 port_mode; /* INI(0x20), TGT(0x10), BOTH(0x30) */
+ u32 port_topology; /* 0x3: loop, 0xc: p2p, 0xf: auto */
+ u32 max_queue_depth;
+ u32 max_io; /* Recommended Value 512-4096 */
+ u32 max_login;
+ u32 max_sfs_xchg;
+ u32 port_speed; /* 0:auto 1:1Gbps 2:2Gbps 4:4Gbps 8:8Gbps 16:16Gbps */
+ u32 tape_support; /* tape support */
+ u32 fcp_conf; /* fcp confirm support */
+ u32 bbscn;
+};
+
+struct unf_port_dynamic_info {
+ u32 sfp_posion;
+ u32 sfp_valid;
+ u32 phy_link;
+ u32 firmware_state;
+ u32 cur_speed;
+ u32 mailbox_timeout_cnt;
+};
+
+struct unf_port_intr_coalsec {
+ u32 delay_timer;
+ u32 depth;
+};
+
+struct unf_port_topo {
+ u32 topo_cfg;
+ enum unf_act_topo topo_act;
+};
+
+struct unf_port_transfer_para {
+ u32 type;
+ u32 value;
+};
+
+struct unf_buf {
+ u8 *buf;
+ u32 buf_len;
+};
+
+/* get ucode & up ver */
+#define SPFC_VER_LEN (16)
+#define SPFC_COMPILE_TIME_LEN (20)
+struct unf_fw_version {
+ u32 message_type;
+ u8 fw_version[SPFC_VER_LEN];
+};
+
+struct unf_port_wwn {
+ u64 sys_port_wwn;
+ u64 sys_node_name;
+};
+
+enum unf_port_config_set_op {
+ UNF_PORT_CFG_SET_SPEED,
+ UNF_PORT_CFG_SET_PORT_SWITCH,
+ UNF_PORT_CFG_SET_POWER_STATE,
+ UNF_PORT_CFG_SET_PORT_STATE,
+ UNF_PORT_CFG_UPDATE_WWN,
+ UNF_PORT_CFG_TEST_FLASH,
+ UNF_PORT_CFG_UPDATE_FABRIC_PARAM,
+ UNF_PORT_CFG_UPDATE_PLOGI_PARAM,
+ UNF_PORT_CFG_SET_BUTT
+};
+
+enum unf_port_cfg_get_op {
+ UNF_PORT_CFG_GET_TOPO_ACT,
+ UNF_PORT_CFG_GET_LOOP_MAP,
+ UNF_PORT_CFG_GET_SFP_PRESENT,
+ UNF_PORT_CFG_GET_FW_VER,
+ UNF_PORT_CFG_GET_HW_VER,
+ UNF_PORT_CFG_GET_WORKBALE_BBCREDIT,
+ UNF_PORT_CFG_GET_WORKBALE_BBSCN,
+ UNF_PORT_CFG_GET_FC_SERDES,
+ UNF_PORT_CFG_GET_LOOP_ALPA,
+ UNF_PORT_CFG_GET_MAC_ADDR,
+ UNF_PORT_CFG_GET_SFP_VER,
+ UNF_PORT_CFG_GET_SFP_SUPPORT_UPDATE,
+ UNF_PORT_CFG_GET_SFP_LOG,
+ UNF_PORT_CFG_GET_PCIE_LINK_STATE,
+ UNF_PORT_CFG_GET_FLASH_DATA_INFO,
+ UNF_PORT_CFG_GET_BUTT,
+};
+
+enum unf_port_config_state {
+ UNF_PORT_CONFIG_STATE_START,
+ UNF_PORT_CONFIG_STATE_STOP,
+ UNF_PORT_CONFIG_STATE_RESET,
+ UNF_PORT_CONFIG_STATE_STOP_INTR,
+ UNF_PORT_CONFIG_STATE_BUTT
+};
+
+enum unf_port_config_update {
+ UNF_PORT_CONFIG_UPDATE_FW_MINIMUM,
+ UNF_PORT_CONFIG_UPDATE_FW_ALL,
+ UNF_PORT_CONFIG_UPDATE_BUTT
+};
+
+enum unf_disable_vp_mode {
+ UNF_DISABLE_VP_MODE_ONLY = 0x8,
+ UNF_DISABLE_VP_MODE_REINIT_LINK = 0x9,
+ UNF_DISABLE_VP_MODE_NOFAB_LOGO = 0xA,
+ UNF_DISABLE_VP_MODE_LOGO_ALL = 0xB
+};
+
+struct unf_vport_info {
+ u16 vp_index;
+ u64 node_name;
+ u64 port_name;
+ u32 port_mode; /* INI, TGT or both */
+ enum unf_disable_vp_mode disable_mode;
+ u32 nport_id; /* may be acquired by the low level and updated to the common layer */
+ void *vport;
+};
+
+struct unf_port_login_parms {
+ enum unf_act_topo act_topo;
+
+ u32 rport_index;
+ u32 seq_cnt : 1;
+ u32 ed_tov : 1;
+ u32 reserved : 14;
+ u32 tx_mfs : 16;
+ u32 ed_tov_timer_val;
+
+ u8 remote_rttov_tag;
+ u8 remote_edtov_tag;
+ u16 remote_bb_credit;
+ u16 compared_bbscn;
+ u32 compared_edtov_val;
+ u32 compared_ratov_val;
+ u32 els_cmnd_code;
+};
+
+struct unf_mbox_head_info {
+ /* mbox header */
+ u8 cmnd_type;
+ u8 length;
+ u8 port_id;
+ u8 pad0;
+
+ /* operation */
+ u32 opcode : 4;
+ u32 pad1 : 28;
+};
+
+struct unf_mbox_head_sts {
+ /* mbox header */
+ u8 cmnd_type;
+ u8 length;
+ u8 port_id;
+ u8 pad0;
+
+ /* operation */
+ u16 pad1;
+ u8 pad2;
+ u8 status;
+};
+
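+/*
+ * I/O send hooks implemented by the low-level driver; each takes the
+ * low-level handle plus a frame package describing the frame to transmit.
+ */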
+struct unf_low_level_service_op {
+ u32 (*unf_ls_gs_send)(void *hba, struct unf_frame_pkg *pkg);
+ u32 (*unf_bls_send)(void *hba, struct unf_frame_pkg *pkg);
+ u32 (*unf_cmnd_send)(void *hba, struct unf_frame_pkg *pkg);
+ u32 (*unf_rsp_send)(void *handle, struct unf_frame_pkg *pkg);
+ u32 (*unf_release_rport_res)(void *handle, struct unf_port_info *rport_info);
+ u32 (*unf_flush_ini_resp_que)(void *handle);
+ u32 (*unf_alloc_rport_res)(void *handle, struct unf_port_info *rport_info);
+ u32 (*ll_release_xid)(void *handle, struct unf_frame_pkg *pkg);
+ u32 (*unf_xfer_send)(void *handle, struct unf_frame_pkg *pkg);
+};
+
+struct unf_low_level_port_mgr_op {
+ /* fcport/opcode/input parameter */
+ u32 (*ll_port_config_set)(void *fc_port, enum unf_port_config_set_op opcode, void *para_in);
+
+ /* fcport/opcode/output parameter */
+ u32 (*ll_port_config_get)(void *fc_port, enum unf_port_cfg_get_op opcode, void *para_out);
+};
+
+struct unf_chip_info {
+ u8 chip_type;
+ u8 chip_work_mode;
+ u8 disable_err_flag;
+};
+
+struct unf_low_level_functioon_op {
+ struct unf_chip_info chip_info;
+ /* low level type */
+ u32 low_level_type;
+ const char *name;
+ struct pci_dev *dev;
+ u64 sys_node_name;
+ u64 sys_port_name;
+ struct unf_lport_cfg_item lport_cfg_items;
+#define UNF_LOW_LEVEL_MGR_TYPE_ACTIVE 0
+#define UNF_LOW_LEVEL_MGR_TYPE_PASSTIVE 1
+ const u32 xchg_mgr_type;
+
+#define UNF_NO_EXTRA_ABTS_XCHG 0x0
+#define UNF_LL_IOC_ABTS_XCHG 0x1
+ const u32 abts_xchg;
+
+#define UNF_CM_RPORT_SET_QUALIFIER 0x0
+#define UNF_CM_RPORT_SET_QUALIFIER_REUSE 0x1
+#define UNF_CM_RPORT_SET_QUALIFIER_SPFC 0x2
+
+ /* low level pass-through flag. */
+#define UNF_LOW_LEVEL_PASS_THROUGH_FIP 0x0
+#define UNF_LOW_LEVEL_PASS_THROUGH_FABRIC_LOGIN 0x1
+#define UNF_LOW_LEVEL_PASS_THROUGH_PORT_LOGIN 0x2
+ u32 passthrough_flag;
+
+ /* low level parameter */
+ u32 support_max_npiv_num;
+ u32 support_max_ssq_num;
+ u32 support_max_speed;
+ u32 support_min_speed;
+ u32 fc_ser_max_speed;
+
+ u32 support_max_rport;
+
+ u32 support_max_hot_tag_range;
+ u32 sfp_type;
+ u32 update_fw_reset_active;
+ u32 support_upgrade_report;
+ u32 multi_conf_support;
+ u32 port_type;
+#define UNF_LOW_LEVEL_RELEASE_RPORT_SYNC 0x0
+#define UNF_LOW_LEVEL_RELEASE_RPORT_ASYNC 0x1
+ u8 rport_release_type;
+#define UNF_LOW_LEVEL_SIRT_PAGE_MODE_FIXED 0x0
+#define UNF_LOW_LEVEL_SIRT_PAGE_MODE_XCHG 0x1
+ u8 sirt_page_mode;
+ u8 sfp_speed;
+
+ /* IO reference */
+ struct unf_low_level_service_op service_op;
+
+ /* Port Mgr reference */
+ struct unf_low_level_port_mgr_op port_mgr_op;
+
+ u8 chip_id;
+};
+
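+/*
+ * Callback table exported by the common layer (see unf_get_cm_handle_ops());
+ * the low-level driver uses it to report received frames, completions and
+ * port events back to the common layer.
+ */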
+struct unf_cm_handle_op {
+ /* return:L_Port */
+ void *(*unf_alloc_local_port)(void *private_data,
+ struct unf_low_level_functioon_op *low_level_op);
+
+ /* input para:L_Port */
+ u32 (*unf_release_local_port)(void *lport);
+
+ /* input para:L_Port, FRAME_PKG_S */
+ u32 (*unf_receive_ls_gs_pkg)(void *lport, struct unf_frame_pkg *pkg);
+
+ /* input para:L_Port, FRAME_PKG_S */
+ u32 (*unf_receive_bls_pkg)(void *lport, struct unf_frame_pkg *pkg);
+ /* input para:L_Port, FRAME_PKG_S */
+ u32 (*unf_send_els_done)(void *lport, struct unf_frame_pkg *pkg);
+
+ /* input para:L_Port, FRAME_PKG_S */
+ u32 (*unf_receive_marker_status)(void *lport, struct unf_frame_pkg *pkg);
+ u32 (*unf_receive_abts_marker_status)(void *lport, struct unf_frame_pkg *pkg);
+ /* input para:L_Port, FRAME_PKG_S */
+ u32 (*unf_receive_ini_response)(void *lport, struct unf_frame_pkg *pkg);
+
+ int (*unf_get_cfg_parms)(char *section_name,
+ struct unf_cfg_item *cfg_parm, u32 *cfg_value,
+ u32 item_num);
+
+ /* TGT IO interface */
+ u32 (*unf_process_fcp_cmnd)(void *lport, struct unf_frame_pkg *pkg);
+
+ /* TGT IO Done */
+ u32 (*unf_tgt_cmnd_xfer_or_rsp_echo)(void *lport, struct unf_frame_pkg *pkg);
+
+ u32 (*unf_cm_get_sgl_entry)(void *pkg, char **buf, u32 *buf_len);
+ u32 (*unf_cm_get_dif_sgl_entry)(void *pkg, char **buf, u32 *buf_len);
+
+ struct unf_esgl_page *(*unf_get_one_free_esgl_page)(void *lport, struct unf_frame_pkg *pkg);
+
+ /* input para:L_Port, EVENT */
+ u32 (*unf_fc_port_event)(void *lport, u32 events, void *input);
+
+ int (*unf_drv_start_work)(void *lport);
+
+ void (*unf_card_rport_chip_err)(struct pci_dev const *pci_dev);
+};
+
+u32 unf_get_cm_handle_ops(struct unf_cm_handle_op *cm_handle);
+int unf_common_init(void);
+void unf_common_exit(void);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_disc.c b/drivers/scsi/spfc/common/unf_disc.c
new file mode 100644
index 000000000000..c48d0ba670d4
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_disc.c
@@ -0,0 +1,1276 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_disc.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "unf_event.h"
+#include "unf_lport.h"
+#include "unf_rport.h"
+#include "unf_exchg.h"
+#include "unf_ls.h"
+#include "unf_gs.h"
+#include "unf_portman.h"
+
+#define UNF_LIST_RSCN_PAGE_CNT 2560
+#define UNF_MAX_PORTS_PRI_LOOP 2
+#define UNF_MAX_GS_SEND_NUM 8
+#define UNF_OS_REMOVE_CARD_TIMEOUT (60 * 1000)
+
+static void unf_set_disc_state(struct unf_disc *disc,
+ enum unf_disc_state states)
+{
+ FC_CHECK_RETURN_VOID(disc);
+
+ if (states != disc->states) {
+ /* Reset disc retry count */
+ disc->retry_count = 0;
+ }
+
+ disc->states = states;
+}
+
+static inline u32 unf_get_loop_map(struct unf_lport *lport, u8 loop_map[], u32 loop_map_size)
+{
+ struct unf_buf buf = {0};
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport->low_level_func.port_mgr_op.ll_port_config_get,
+ UNF_RETURN_ERROR);
+
+ buf.buf = loop_map;
+ buf.buf_len = loop_map_size;
+
+ ret = lport->low_level_func.port_mgr_op.ll_port_config_get(lport->fc_port,
+ UNF_PORT_CFG_GET_LOOP_MAP,
+ (void *)&buf);
+ return ret;
+}
+
+static void unf_login_with_loop_node(struct unf_lport *lport, u32 alpa)
+{
+ /* Only used for Private Loop LOGIN */
+ struct unf_rport *unf_rport = NULL;
+ ulong rport_flag = 0;
+ u32 port_feature = 0;
+ u32 ret;
+
+ /* Check AL_PA validity */
+ if (lport->nport_id == alpa) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) is the same as RPort with AL_PA(0x%x), do nothing",
+ lport->port_id, alpa);
+ return;
+ }
+
+ if (alpa == 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) is fabric, do nothing",
+ lport->port_id, alpa);
+ return;
+ }
+
+ /* Get & set R_Port: reuse only */
+ unf_rport = unf_get_rport_by_nport_id(lport, alpa);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) RPort(0x%x_0x%p) login with private loop",
+ lport->port_id, lport->nport_id, alpa, unf_rport);
+
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, alpa);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) allocate new RPort(0x%x) failed",
+ lport->port_id, lport->nport_id, alpa);
+ return;
+ }
+
+ /* Update R_Port state & N_Port_ID */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
+ unf_rport->nport_id = alpa;
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
+
+ /* Private Loop: check whether the PLOGI needs to be delayed */
+ port_feature = unf_rport->options;
+
+ /* check Rport and Lport feature */
+ if (port_feature == UNF_PORT_MODE_UNKNOWN &&
+ lport->options == UNF_PORT_MODE_INI) {
+ /* Start to send PLOGI */
+ ret = unf_send_plogi(lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI to RPort(0x%x) failed",
+ lport->port_id, lport->nport_id, unf_rport->nport_id);
+
+ unf_rport_error_recovery(unf_rport);
+ }
+ } else {
+ unf_check_rport_need_delay_plogi(lport, unf_rport, port_feature);
+ }
+}
+
+static int unf_discover_private_loop(void *arg_in, void *arg_out)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)arg_in;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 i = 0;
+ u8 loop_id = 0;
+ u32 alpa_index = 0;
+ u8 loop_map[UNF_LOOPMAP_COUNT];
+
+ FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
+ memset(loop_map, 0x0, UNF_LOOPMAP_COUNT);
+
+ /* Get Port Loop Map */
+ ret = unf_get_loop_map(unf_lport, loop_map, UNF_LOOPMAP_COUNT);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) get loop map failed", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Check Loop Map Ports Count */
+ if (loop_map[ARRAY_INDEX_0] > UNF_MAX_PORTS_PRI_LOOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has more than %d ports(%u) in private loop",
+ unf_lport->port_id, UNF_MAX_PORTS_PRI_LOOP, loop_map[ARRAY_INDEX_0]);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* AL_PA = 0 means Public Loop */
+ if (loop_map[ARRAY_INDEX_1] == UNF_FL_PORT_LOOP_ADDR ||
+ loop_map[ARRAY_INDEX_2] == UNF_FL_PORT_LOOP_ADDR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) one or more AL_PA is 0x00, indicate it's FL_Port",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Discovery Private Loop Ports */
+ for (i = 0; i < loop_map[ARRAY_INDEX_0]; i++) {
+ alpa_index = i + 1;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) start to disc(0x%x) with count(0x%x)",
+ unf_lport->port_id, loop_map[alpa_index], i);
+
+ /* Check whether the PLOGI needs to be delayed */
+ loop_id = loop_map[alpa_index];
+ unf_login_with_loop_node(unf_lport, (u32)loop_id);
+ }
+
+ return RETURN_OK;
+}
+
+u32 unf_disc_start(void *lport)
+{
+ /*
+ * Called by:
+ * 1. Private Loop login entry
+ * 2. RSCN payload analysis
+ * 3. SCR callback
+ *
+ * Actions:
+ * Fabric/Public Loop: send GID_PT
+ * Private Loop: send PLOGI (possibly delayed) or send LOGO immediately
+ * P2P: do nothing
+ */
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_cm_event_report *event = NULL;
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+ enum unf_act_topo act_topo = UNF_ACT_TOP_UNKNOWN;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ act_topo = unf_lport->act_topo;
+ disc = &unf_lport->disc;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x) with topo(0x%x) begin to discovery",
+ unf_lport->port_id, act_topo);
+
+ if (act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ act_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
+ /* 1. Fabric or Public Loop Topology: for directory server */
+ unf_rport = unf_get_rport_by_nport_id(unf_lport,
+ UNF_FC_FID_DIR_SERV); /* 0xfffffc */
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) unable to get SNS RPort(0xfffffc)",
+ unf_lport->port_id);
+
+ unf_rport = unf_rport_get_free_and_init(unf_lport, UNF_PORT_TYPE_FC,
+ UNF_FC_FID_DIR_SERV);
+ if (!unf_rport)
+ return UNF_RETURN_ERROR;
+
+ unf_rport->nport_id = UNF_FC_FID_DIR_SERV;
+ }
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_set_disc_state(disc, UNF_DISC_ST_START); /* disc start */
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_NORMAL_ENTER);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /*
+ * NOTE: Send GID_PT
+ * The Name Server shall, when it receives a GID_PT request,
+ * return all Port Identifiers having registered support for the
+ * specified Port Type. One or more Port Identifiers, having
+ * registered as the specified Port Type, are returned.
+ */
+ ret = unf_send_gid_pt(unf_lport, unf_rport);
+ if (ret != RETURN_OK)
+ unf_disc_error_recovery(unf_lport);
+ } else if (act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ /* Private Loop: to thread process */
+ event = unf_get_one_event_node(unf_lport);
+ FC_CHECK_RETURN_VALUE(event, UNF_RETURN_ERROR);
+
+ event->lport = unf_lport;
+ event->event_asy_flag = UNF_EVENT_ASYN;
+ event->unf_event_task = unf_discover_private_loop;
+ event->para_in = (void *)unf_lport;
+
+ unf_post_one_event_node(unf_lport, event);
+ } else {
+ /* P2P topology mode: do nothing */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) with topo(0x%x) need do nothing",
+ unf_lport->port_id, act_topo);
+ }
+
+ return ret;
+}
+
+static u32 unf_disc_stop(void *lport)
+{
+ /* Called by the GID_ACC processing path */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *root_lport = NULL;
+ struct unf_rport *sns_port = NULL;
+ struct unf_disc_rport *disc_rport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_disc *root_disc = NULL;
+ struct list_head *node = NULL;
+ ulong flag = 0;
+ u32 ret = RETURN_OK;
+ u32 nport_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)lport;
+ disc = &unf_lport->disc;
+ root_lport = (struct unf_lport *)unf_lport->root_lport;
+ root_disc = &root_lport->disc;
+
+ /* Get R_Port for Directory server */
+ sns_port = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find fabric RPort(0xfffffc) failed",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* for R_Port from disc pool busy list */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ if (list_empty(&disc->disc_rport_mgr.list_disc_rports_busy)) {
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ return RETURN_OK;
+ }
+
+ node = UNF_OS_LIST_NEXT(&disc->disc_rport_mgr.list_disc_rports_busy);
+ do {
+ /* Delete from Disc busy list */
+ disc_rport = list_entry(node, struct unf_disc_rport, entry_rport);
+ nport_id = disc_rport->nport_id;
+ list_del_init(node);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Add back to (free) Disc R_Port pool (list) */
+ spin_lock_irqsave(&root_disc->rport_busy_pool_lock, flag);
+ list_add_tail(node, &root_disc->disc_rport_mgr.list_disc_rports_pool);
+ spin_unlock_irqrestore(&root_disc->rport_busy_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x_0x%x) remove nportid:0x%x from rportbusy list",
+ unf_lport->port_id, unf_lport->nport_id, disc_rport->nport_id);
+ /* Send GNN_ID to Name Server */
+ ret = unf_get_and_post_disc_event(unf_lport, sns_port, nport_id,
+ UNF_DISC_GET_NODE_NAME);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ unf_lport->nport_id, UNF_DISC_GET_NODE_NAME, nport_id);
+
+ /* NOTE: go to next stage */
+ unf_rcv_gnn_id_rsp_unknown(unf_lport, sns_port, nport_id);
+ }
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ node = UNF_OS_LIST_NEXT(&disc->disc_rport_mgr.list_disc_rports_busy);
+ } while (node != &disc->disc_rport_mgr.list_disc_rports_busy);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ return ret;
+}
+
+static u32 unf_init_rport_pool(struct unf_lport *lport)
+{
+ struct unf_rport_pool *rport_pool = NULL;
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = RETURN_OK;
+ u32 i = 0;
+ u32 bitmap_cnt = 0;
+ ulong flag = 0;
+ u32 max_login = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* Init RPort Pool info */
+ rport_pool = &lport->rport_pool;
+ max_login = lport->low_level_func.lport_cfg_items.max_login;
+ rport_pool->rport_pool_completion = NULL;
+ rport_pool->rport_pool_count = max_login;
+ spin_lock_init(&rport_pool->rport_free_pool_lock);
+ INIT_LIST_HEAD(&rport_pool->list_rports_pool); /* free RPort pool */
+
+ /* 1. Alloc RPort Pool buffer/resource (memory) */
+ rport_pool->rport_pool_add = vmalloc((size_t)(max_login * sizeof(struct unf_rport)));
+ if (!rport_pool->rport_pool_add) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) allocate RPort(s) resource failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(rport_pool->rport_pool_add, 0, (max_login * sizeof(struct unf_rport)));
+
+ /* 2. Alloc R_Port Pool bitmap */
+ bitmap_cnt = (lport->low_level_func.support_max_rport) / BITS_PER_LONG + 1;
+ rport_pool->rpi_bitmap = vmalloc((size_t)(bitmap_cnt * sizeof(ulong)));
+ if (!rport_pool->rpi_bitmap) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) allocate RPort Bitmap failed", lport->port_id);
+
+ vfree(rport_pool->rport_pool_add);
+ rport_pool->rport_pool_add = NULL;
+ return UNF_RETURN_ERROR;
+ }
+ memset(rport_pool->rpi_bitmap, 0, (bitmap_cnt * sizeof(ulong)));
+
+ /* 3. RPort resource management: add RPorts (buffer) to the RPort pool list */
+ unf_rport = (struct unf_rport *)(rport_pool->rport_pool_add);
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ for (i = 0; i < rport_pool->rport_pool_count; i++) {
+ spin_lock_init(&unf_rport->rport_state_lock);
+ list_add_tail(&unf_rport->entry_rport, &rport_pool->list_rports_pool);
+ sema_init(&unf_rport->task_sema, 0);
+ unf_rport++;
+ }
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+
+ return ret;
+}
+
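+/*
+ * Free the RPort pool: wait (bounded by UNF_OS_REMOVE_CARD_TIMEOUT) until all
+ * RPorts are returned to the free pool; on timeout mark the pool dirty and
+ * leave the memory allocated instead of freeing it.
+ */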
+static void unf_free_rport_pool(struct unf_lport *lport)
+{
+ struct unf_rport_pool *rport_pool = NULL;
+ bool wait = false;
+ ulong flag = 0;
+ u32 remain = 0;
+ u64 timeout = 0;
+ u32 max_login = 0;
+ u32 i;
+ struct unf_rport *unf_rport = NULL;
+ struct completion rport_pool_completion;
+
+ init_completion(&rport_pool_completion);
+ FC_CHECK_RETURN_VOID(lport);
+
+ rport_pool = &lport->rport_pool;
+ max_login = lport->low_level_func.lport_cfg_items.max_login;
+
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ if (rport_pool->rport_pool_count != max_login) {
+ rport_pool->rport_pool_completion = &rport_pool_completion;
+ remain = max_login - rport_pool->rport_pool_count;
+ wait = true;
+ }
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+
+ if (wait) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to wait for RPort pool completion, remain(0x%x)",
+ lport->port_id, remain);
+
+ unf_show_all_rport(lport);
+
+ timeout = wait_for_completion_timeout(rport_pool->rport_pool_completion,
+ msecs_to_jiffies(UNF_OS_REMOVE_CARD_TIMEOUT));
+ if (timeout == 0)
+ unf_cm_mark_dirty_mem(lport, UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) wait for RPort pool completion end",
+ lport->port_id);
+
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ rport_pool->rport_pool_completion = NULL;
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+ }
+
+ unf_rport = (struct unf_rport *)(rport_pool->rport_pool_add);
+ for (i = 0; i < rport_pool->rport_pool_count; i++) {
+ if (!unf_rport)
+ break;
+ unf_rport++;
+ }
+
+ if ((lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY) == 0) {
+ vfree(rport_pool->rport_pool_add);
+ rport_pool->rport_pool_add = NULL;
+ vfree(rport_pool->rpi_bitmap);
+ rport_pool->rpi_bitmap = NULL;
+ }
+}
+
+static void unf_init_rscn_node(struct unf_port_id_page *port_id_page)
+{
+ FC_CHECK_RETURN_VOID(port_id_page);
+
+ port_id_page->addr_format = 0;
+ port_id_page->event_qualifier = 0;
+ port_id_page->reserved = 0;
+ port_id_page->port_id_area = 0;
+ port_id_page->port_id_domain = 0;
+ port_id_page->port_id_port = 0;
+}
+
+struct unf_port_id_page *unf_get_free_rscn_node(void *rscn_mg)
+{
+ /* Called when saving an RSCN Port_ID */
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ struct unf_port_id_page *port_id_node = NULL;
+ struct list_head *list_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(rscn_mg, NULL);
+ rscn_mgr = (struct unf_rscn_mgr *)rscn_mg;
+
+ spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
+ if (list_empty(&rscn_mgr->list_free_rscn_page)) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
+ "[warn]No RSCN node anymore");
+
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+ return NULL;
+ }
+
+ /* Get from list_free_RSCN_page */
+ list_node = UNF_OS_LIST_NEXT(&rscn_mgr->list_free_rscn_page);
+ list_del(list_node);
+ rscn_mgr->free_rscn_count--;
+ port_id_node = list_entry(list_node, struct unf_port_id_page, list_node_rscn);
+ unf_init_rscn_node(port_id_node);
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+
+ return port_id_node;
+}
+
+static void unf_release_rscn_node(void *rscn_mg, void *port_id_node)
+{
+ /* Called from RSCN GID_ACC handling */
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ struct unf_port_id_page *port_id_page = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(rscn_mg);
+ FC_CHECK_RETURN_VOID(port_id_node);
+ rscn_mgr = (struct unf_rscn_mgr *)rscn_mg;
+ port_id_page = (struct unf_port_id_page *)port_id_node;
+
+ /* Back to list_free_RSCN_page */
+ spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
+ rscn_mgr->free_rscn_count++;
+ unf_init_rscn_node(port_id_page);
+ list_add_tail(&port_id_page->list_node_rscn, &rscn_mgr->list_free_rscn_page);
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+}
+
+static u32 unf_init_rscn_pool(struct unf_lport *lport)
+{
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ struct unf_port_id_page *port_id_page = NULL;
+ u32 ret = RETURN_OK;
+ u32 i = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ rscn_mgr = &lport->disc.rscn_mgr;
+
+ /* Get RSCN Pool buffer */
+ rscn_mgr->rscn_pool_add = vmalloc(UNF_LIST_RSCN_PAGE_CNT * sizeof(struct unf_port_id_page));
+ if (!rscn_mgr->rscn_pool_add) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate RSCN pool failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(rscn_mgr->rscn_pool_add, 0,
+ UNF_LIST_RSCN_PAGE_CNT * sizeof(struct unf_port_id_page));
+
+ spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
+ port_id_page = (struct unf_port_id_page *)(rscn_mgr->rscn_pool_add);
+ for (i = 0; i < UNF_LIST_RSCN_PAGE_CNT; i++) {
+ /* Add tail to list_free_RSCN_page */
+ list_add_tail(&port_id_page->list_node_rscn, &rscn_mgr->list_free_rscn_page);
+
+ rscn_mgr->free_rscn_count++;
+ port_id_page++;
+ }
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+
+ return ret;
+}
+
+static void unf_freerscn_pool(struct unf_lport *lport)
+{
+ struct unf_disc *disc = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ disc = &lport->disc;
+ if (disc->rscn_mgr.rscn_pool_add) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO, "[info]Port(0x%x) free RSCN pool", lport->nport_id);
+
+ vfree(disc->rscn_mgr.rscn_pool_add);
+ disc->rscn_mgr.rscn_pool_add = NULL;
+ }
+}
+
+static u32 unf_init_rscn_mgr(struct unf_lport *lport)
+{
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ rscn_mgr = &lport->disc.rscn_mgr;
+
+ INIT_LIST_HEAD(&rscn_mgr->list_free_rscn_page); /* free RSCN page list */
+ INIT_LIST_HEAD(&rscn_mgr->list_using_rscn_page); /* busy RSCN page list */
+ spin_lock_init(&rscn_mgr->rscn_id_list_lock);
+ rscn_mgr->free_rscn_count = 0;
+ rscn_mgr->unf_get_free_rscn_node = unf_get_free_rscn_node;
+ rscn_mgr->unf_release_rscn_node = unf_release_rscn_node;
+
+ ret = unf_init_rscn_pool(lport);
+ return ret;
+}
+
+static void unf_destroy_rscn_mngr(struct unf_lport *lport)
+{
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ rscn_mgr = &lport->disc.rscn_mgr;
+
+ rscn_mgr->free_rscn_count = 0;
+ rscn_mgr->unf_get_free_rscn_node = NULL;
+ rscn_mgr->unf_release_rscn_node = NULL;
+
+ unf_freerscn_pool(lport);
+}
+
+static u32 unf_init_disc_rport_pool(struct unf_lport *lport)
+{
+ struct unf_disc_rport_mg *disc_mgr = NULL;
+ struct unf_disc_rport *disc_rport = NULL;
+ u32 i = 0;
+ u32 max_log_in = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ max_log_in = lport->low_level_func.lport_cfg_items.max_login;
+ disc_mgr = &lport->disc.disc_rport_mgr;
+
+ /* Alloc R_Port Disc Pool buffer */
+ disc_mgr->disc_pool_add =
+ vmalloc(max_log_in * sizeof(struct unf_disc_rport));
+ if (!disc_mgr->disc_pool_add) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate disc RPort pool failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(disc_mgr->disc_pool_add, 0, (max_log_in * sizeof(struct unf_disc_rport)));
+
+ /* Add R_Port to (free) DISC R_Port Pool */
+ spin_lock_irqsave(&lport->disc.rport_busy_pool_lock, flag);
+ disc_rport = (struct unf_disc_rport *)(disc_mgr->disc_pool_add);
+ for (i = 0; i < max_log_in; i++) {
+ /* Add tail to list_disc_Rport_pool */
+ list_add_tail(&disc_rport->entry_rport, &disc_mgr->list_disc_rports_pool);
+
+ disc_rport++;
+ }
+ spin_unlock_irqrestore(&lport->disc.rport_busy_pool_lock, flag);
+
+ return RETURN_OK;
+}
+
+static void unf_free_disc_rport_pool(struct unf_lport *lport)
+{
+ struct unf_disc *disc = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ disc = &lport->disc;
+ if (disc->disc_rport_mgr.disc_pool_add) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO, "[info]Port(0x%x) free disc RPort pool", lport->port_id);
+
+ vfree(disc->disc_rport_mgr.disc_pool_add);
+ disc->disc_rport_mgr.disc_pool_add = NULL;
+ }
+}
+
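+/*
+ * Process one queued discovery GS event: send GPN_ID/GFF_ID/GNN_ID to the
+ * name server and, if the send fails, fall back to the corresponding
+ * unf_rcv_*_rsp_unknown() path; the event node is freed here in either case.
+ */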
+int unf_discover_port_info(void *arg_in)
+{
+ struct unf_disc_gs_event_info *disc_gs_info = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(arg_in, UNF_RETURN_ERROR);
+
+ disc_gs_info = (struct unf_disc_gs_event_info *)arg_in;
+ unf_lport = (struct unf_lport *)disc_gs_info->lport;
+ unf_rport = (struct unf_rport *)disc_gs_info->rport;
+
+ switch (disc_gs_info->type) {
+ case UNF_DISC_GET_PORT_NAME:
+ ret = unf_send_gpn_id(unf_lport, unf_rport, disc_gs_info->rport_id);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send GPN_ID failed RPort(0x%x)",
+ unf_lport->nport_id, disc_gs_info->rport_id);
+ unf_rcv_gpn_id_rsp_unknown(unf_lport, disc_gs_info->rport_id);
+ }
+ break;
+ case UNF_DISC_GET_FEATURE:
+ ret = unf_send_gff_id(unf_lport, unf_rport, disc_gs_info->rport_id);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send GFF_ID failed to get RPort(0x%x)'s feature",
+ unf_lport->port_id, disc_gs_info->rport_id);
+
+ unf_rcv_gff_id_rsp_unknown(unf_lport, disc_gs_info->rport_id);
+ }
+ break;
+ case UNF_DISC_GET_NODE_NAME:
+ ret = unf_send_gnn_id(unf_lport, unf_rport, disc_gs_info->rport_id);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) GNN_ID send failed with NPort ID(0x%x)",
+ unf_lport->port_id, disc_gs_info->rport_id);
+
+ /* NOTE: Continue to next stage */
+ unf_rcv_gnn_id_rsp_unknown(unf_lport, unf_rport, disc_gs_info->rport_id);
+ }
+ break;
+ default:
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]Send GS packet type(0x%x) is unknown", disc_gs_info->type);
+ }
+
+ kfree(disc_gs_info);
+
+ return (int)ret;
+}
+
+u32 unf_get_and_post_disc_event(void *lport, void *sns_port, u32 nport_id,
+ enum unf_disc_type type)
+{
+ struct unf_disc_gs_event_info *disc_gs_info = NULL;
+ ulong flag = 0;
+ struct unf_lport *root_lport = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc_manage_info *disc_info = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)lport;
+
+ if (unf_lport->link_up == UNF_PORT_LINK_DOWN)
+ return RETURN_OK;
+
+ root_lport = unf_lport->root_lport;
+ disc_info = &root_lport->disc.disc_thread_info;
+
+ if (disc_info->thread_exit)
+ return RETURN_OK;
+
+ disc_gs_info = kmalloc(sizeof(struct unf_disc_gs_event_info), GFP_ATOMIC);
+ if (!disc_gs_info)
+ return UNF_RETURN_ERROR;
+
+ disc_gs_info->type = type;
+ disc_gs_info->lport = unf_lport;
+ disc_gs_info->rport = sns_port;
+ disc_gs_info->rport_id = nport_id;
+
+ INIT_LIST_HEAD(&disc_gs_info->list_entry);
+
+ spin_lock_irqsave(&disc_info->disc_event_list_lock, flag);
+ list_add_tail(&disc_gs_info->list_entry, &disc_info->list_head);
+ spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flag);
+ wake_up_process(disc_info->thread);
+ return RETURN_OK;
+}
+
+static int unf_disc_event_process(void *arg)
+{
+ struct list_head *node = NULL;
+ struct unf_disc_gs_event_info *disc_gs_info = NULL;
+ ulong flags = 0;
+ struct unf_disc *disc = (struct unf_disc *)arg;
+ struct unf_disc_manage_info *disc_info = &disc->disc_thread_info;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) enter discovery thread.", disc->lport->port_id);
+
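+ /*
+ * Sleep while the event list is empty or the GS send window
+ * (disc_contrl_size) is exhausted; otherwise dequeue one event and
+ * hand it to unf_discover_port_info().
+ */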
+ while (!kthread_should_stop()) {
+ if (disc_info->thread_exit)
+ break;
+
+ spin_lock_irqsave(&disc_info->disc_event_list_lock, flags);
+ if ((list_empty(&disc_info->list_head)) ||
+ (atomic_read(&disc_info->disc_contrl_size) == 0)) {
+ spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flags);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout((long)msecs_to_jiffies(UNF_S_TO_MS));
+ } else {
+ node = UNF_OS_LIST_NEXT(&disc_info->list_head);
+ list_del_init(node);
+ disc_gs_info = list_entry(node, struct unf_disc_gs_event_info, list_entry);
+ spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flags);
+ unf_discover_port_info(disc_gs_info);
+ }
+ }
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "Port(0x%x) discovery thread over.", disc->lport->port_id);
+
+ return RETURN_OK;
+}
+
+void unf_flush_disc_event(void *disc, void *vport)
+{
+ struct unf_disc *unf_disc = (struct unf_disc *)disc;
+ struct unf_disc_manage_info *disc_info = NULL;
+ struct list_head *list = NULL;
+ struct list_head *list_tmp = NULL;
+ struct unf_disc_gs_event_info *disc_gs_info = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(disc);
+
+ disc_info = &unf_disc->disc_thread_info;
+
+ spin_lock_irqsave(&disc_info->disc_event_list_lock, flag);
+ list_for_each_safe(list, list_tmp, &disc_info->list_head) {
+ disc_gs_info = list_entry(list, struct unf_disc_gs_event_info, list_entry);
+
+ if (!vport || disc_gs_info->lport == vport) {
+ list_del_init(&disc_gs_info->list_entry);
+ kfree(disc_gs_info);
+ }
+ }
+
+ if (!vport)
+ atomic_set(&disc_info->disc_contrl_size, UNF_MAX_GS_SEND_NUM);
+ spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flag);
+}
+
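+/*
+ * Return one credit to the GS send window when a GPN_ID/GNN_ID/GFF_ID
+ * exchange completes, capped at UNF_MAX_GS_SEND_NUM.
+ */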
+void unf_disc_ctrl_size_inc(void *lport, u32 cmnd)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_lport = unf_lport->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ if (atomic_read(&unf_lport->disc.disc_thread_info.disc_contrl_size) ==
+ UNF_MAX_GS_SEND_NUM)
+ return;
+
+ if (cmnd == NS_GPN_ID || cmnd == NS_GNN_ID || cmnd == NS_GFF_ID)
+ atomic_inc(&unf_lport->disc.disc_thread_info.disc_contrl_size);
+}
+
+void unf_destroy_disc_thread(void *disc)
+{
+ struct unf_disc_manage_info *disc_info = NULL;
+ struct unf_disc *unf_disc = (struct unf_disc *)disc;
+
+ FC_CHECK_RETURN_VOID(unf_disc);
+
+ disc_info = &unf_disc->disc_thread_info;
+
+ disc_info->thread_exit = true;
+ unf_flush_disc_event(unf_disc, NULL);
+
+ wake_up_process(disc_info->thread);
+ kthread_stop(disc_info->thread);
+ disc_info->thread = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) destroy discovery thread succeed.",
+ unf_disc->lport->port_id);
+}
+
+u32 unf_crerate_disc_thread(void *disc)
+{
+ struct unf_disc_manage_info *disc_info = NULL;
+ struct unf_disc *unf_disc = (struct unf_disc *)disc;
+
+ FC_CHECK_RETURN_VALUE(unf_disc, UNF_RETURN_ERROR);
+
+	/* Initialize the thread management info and create a new discovery thread */
+ disc_info = &unf_disc->disc_thread_info;
+
+ memset(disc_info, 0, sizeof(struct unf_disc_manage_info));
+
+ INIT_LIST_HEAD(&disc_info->list_head);
+ spin_lock_init(&disc_info->disc_event_list_lock);
+ atomic_set(&disc_info->disc_contrl_size, UNF_MAX_GS_SEND_NUM);
+
+ disc_info->thread_exit = false;
+ disc_info->thread = kthread_create(unf_disc_event_process, unf_disc, "%x_DiscT",
+ unf_disc->lport->port_id);
+
+ if (IS_ERR(disc_info->thread) || !disc_info->thread) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "Port(0x%x) create discovery thread(0x%p) failed.",
+ unf_disc->lport->port_id, disc_info->thread);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ wake_up_process(disc_info->thread);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+		     "Port(0x%x) create discovery thread succeeded.", unf_disc->lport->port_id);
+
+ return RETURN_OK;
+}
+
+void unf_disc_ref_cnt_dec(struct unf_disc *disc)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(disc);
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ if (atomic_dec_and_test(&disc->disc_ref_cnt)) {
+ if (disc->disc_completion)
+ complete(disc->disc_completion);
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+}
+
+void unf_wait_disc_complete(struct unf_lport *lport)
+{
+ struct unf_disc *disc = NULL;
+ bool wait = false;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u64 time_out = 0;
+
+ struct completion disc_completion;
+
+ init_completion(&disc_completion);
+ disc = &lport->disc;
+
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id), (&disc->disc_work),
+ "Disc_work");
+ if (ret == RETURN_OK)
+ unf_disc_ref_cnt_dec(disc);
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ if (atomic_read(&disc->disc_ref_cnt) != 0) {
+ disc->disc_completion = &disc_completion;
+ wait = true;
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ if (wait) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to wait for discover completion",
+ lport->port_id);
+
+ time_out =
+ wait_for_completion_timeout(disc->disc_completion,
+ msecs_to_jiffies(UNF_OS_REMOVE_CARD_TIMEOUT));
+ if (time_out == 0)
+ unf_cm_mark_dirty_mem(lport, UNF_LPORT_DIRTY_FLAG_DISC_DIRTY);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) wait for discover completion end", lport->port_id);
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ disc->disc_completion = NULL;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ }
+}
+
+void unf_disc_mgr_destroy(void *lport)
+{
+ struct unf_disc *disc = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ unf_lport = (struct unf_lport *)lport;
+
+ disc = &unf_lport->disc;
+ disc->retry_count = 0;
+ disc->disc_temp.unf_disc_start = NULL;
+ disc->disc_temp.unf_disc_stop = NULL;
+ disc->disc_temp.unf_disc_callback = NULL;
+
+ unf_free_disc_rport_pool(unf_lport);
+ unf_destroy_rscn_mngr(unf_lport);
+ unf_wait_disc_complete(unf_lport);
+
+ if (unf_lport->root_lport != unf_lport)
+ return;
+
+ unf_destroy_disc_thread(disc);
+ unf_free_rport_pool(unf_lport);
+ unf_lport->destroy_step = UNF_LPORT_DESTROY_STEP_6_DESTROY_DISC_MGR;
+}
+
+void unf_disc_error_recovery(void *lport)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_disc *disc = NULL;
+ ulong delay = 0;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = (struct unf_lport *)lport;
+ disc = &unf_lport->disc;
+
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) find RPort failed", unf_lport->port_id);
+ return;
+ }
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+
+	/* Delayed work is already pending */
+ if (delayed_work_pending(&disc->disc_work)) {
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+			     "[info]Port(0x%x) disc_work is already pending, do nothing",
+ unf_lport->port_id);
+ return;
+ }
+
+ /* Continue to retry */
+ if (disc->retry_count < disc->max_retry_count) {
+ disc->retry_count++;
+ delay = (ulong)unf_lport->ed_tov;
+ if (queue_delayed_work(unf_wq, &disc->disc_work,
+ (ulong)msecs_to_jiffies((u32)delay)))
+ atomic_inc(&disc->disc_ref_cnt);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ } else {
+ /* Go to next stage */
+ if (disc->states == UNF_DISC_ST_GIDPT_WAIT) {
+ /* GID_PT_WAIT --->>> Send GID_FT */
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_RETRY_TIMEOUT);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ while ((ret != RETURN_OK) &&
+ (disc->retry_count < disc->max_retry_count)) {
+ ret = unf_send_gid_ft(unf_lport, unf_rport);
+ disc->retry_count++;
+ }
+ } else if (disc->states == UNF_DISC_ST_GIDFT_WAIT) {
+ /* GID_FT_WAIT --->>> Send LOGO */
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_RETRY_TIMEOUT);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ } else {
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ }
+ }
+}
+
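+/*
+ * Discovery state machine transition handlers. The normal flow is
+ * START -> GIDPT_WAIT -> (on retry timeout) GIDFT_WAIT -> END, and a link
+ * down event from any waiting/end state returns the machine to START.
+ */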
+enum unf_disc_state unf_disc_stat_start(enum unf_disc_state old_state,
+ enum unf_disc_event event)
+{
+ enum unf_disc_state next_state = UNF_DISC_ST_END;
+
+ if (event == UNF_EVENT_DISC_NORMAL_ENTER)
+ next_state = UNF_DISC_ST_GIDPT_WAIT;
+ else
+ next_state = old_state;
+
+ return next_state;
+}
+
+enum unf_disc_state unf_disc_stat_gid_pt_wait(enum unf_disc_state old_state,
+ enum unf_disc_event event)
+{
+ enum unf_disc_state next_state = UNF_DISC_ST_END;
+
+ switch (event) {
+ case UNF_EVENT_DISC_FAILED:
+ next_state = UNF_DISC_ST_GIDPT_WAIT;
+ break;
+
+ case UNF_EVENT_DISC_RETRY_TIMEOUT:
+ next_state = UNF_DISC_ST_GIDFT_WAIT;
+ break;
+
+ case UNF_EVENT_DISC_SUCCESS:
+ next_state = UNF_DISC_ST_END;
+ break;
+
+ case UNF_EVENT_DISC_LINKDOWN:
+ next_state = UNF_DISC_ST_START;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+enum unf_disc_state unf_disc_stat_gid_ft_wait(enum unf_disc_state old_state,
+ enum unf_disc_event event)
+{
+ enum unf_disc_state next_state = UNF_DISC_ST_END;
+
+ switch (event) {
+ case UNF_EVENT_DISC_FAILED:
+ next_state = UNF_DISC_ST_GIDFT_WAIT;
+ break;
+
+ case UNF_EVENT_DISC_RETRY_TIMEOUT:
+ next_state = UNF_DISC_ST_END;
+ break;
+
+ case UNF_EVENT_DISC_LINKDOWN:
+ next_state = UNF_DISC_ST_START;
+ break;
+
+ case UNF_EVENT_DISC_SUCCESS:
+ next_state = UNF_DISC_ST_END;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+enum unf_disc_state unf_disc_stat_end(enum unf_disc_state old_state, enum unf_disc_event event)
+{
+ enum unf_disc_state next_state = UNF_DISC_ST_END;
+
+ if (event == UNF_EVENT_DISC_LINKDOWN)
+ next_state = UNF_DISC_ST_START;
+ else
+ next_state = old_state;
+
+ return next_state;
+}
+
+void unf_disc_state_ma(struct unf_lport *lport, enum unf_disc_event event)
+{
+ struct unf_disc *disc = NULL;
+ enum unf_disc_state old_state = UNF_DISC_ST_START;
+ enum unf_disc_state next_state = UNF_DISC_ST_START;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ disc = &lport->disc;
+ old_state = disc->states;
+
+ switch (disc->states) {
+ case UNF_DISC_ST_START:
+ next_state = unf_disc_stat_start(old_state, event);
+ break;
+
+ case UNF_DISC_ST_GIDPT_WAIT:
+ next_state = unf_disc_stat_gid_pt_wait(old_state, event);
+ break;
+
+ case UNF_DISC_ST_GIDFT_WAIT:
+ next_state = unf_disc_stat_gid_ft_wait(old_state, event);
+ break;
+
+ case UNF_DISC_ST_END:
+ next_state = unf_disc_stat_end(old_state, event);
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ unf_set_disc_state(disc, next_state);
+}
+
+static void unf_lport_disc_timeout(struct work_struct *work)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_disc *disc = NULL;
+ enum unf_disc_state state = UNF_DISC_ST_END;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ disc = container_of(work, struct unf_disc, disc_work.work);
+ if (!disc) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Get discover pointer failed");
+
+ return;
+ }
+
+ unf_lport = disc->lport;
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Find Port by discovery work failed");
+
+ unf_disc_ref_cnt_dec(disc);
+ return;
+ }
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ state = disc->states;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV); /* 0xfffffc */
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find fabric RPort failed", unf_lport->port_id);
+
+ unf_disc_ref_cnt_dec(disc);
+ return;
+ }
+
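+	/* On timeout, resend the name server query that matches the current
+	 * discovery state (GID_PT or GID_FT); other states need no action.
+	 */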
+ switch (state) {
+ case UNF_DISC_ST_START:
+ break;
+
+ case UNF_DISC_ST_GIDPT_WAIT:
+ (void)unf_send_gid_pt(unf_lport, unf_rport);
+ break;
+
+ case UNF_DISC_ST_GIDFT_WAIT:
+ (void)unf_send_gid_ft(unf_lport, unf_rport);
+ break;
+
+ case UNF_DISC_ST_END:
+ break;
+
+ default:
+ break;
+ }
+
+ unf_disc_ref_cnt_dec(disc);
+}
+
+u32 unf_init_disc_mgr(struct unf_lport *lport)
+{
+ struct unf_disc *disc = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ disc = &lport->disc;
+ disc->max_retry_count = UNF_DISC_RETRY_TIMES;
+ disc->retry_count = 0;
+ disc->disc_flag = UNF_DISC_NONE;
+ INIT_LIST_HEAD(&disc->list_busy_rports);
+ INIT_LIST_HEAD(&disc->list_delete_rports);
+ INIT_LIST_HEAD(&disc->list_destroy_rports);
+ spin_lock_init(&disc->rport_busy_pool_lock);
+
+ disc->disc_rport_mgr.disc_pool_add = NULL;
+ INIT_LIST_HEAD(&disc->disc_rport_mgr.list_disc_rports_pool);
+ INIT_LIST_HEAD(&disc->disc_rport_mgr.list_disc_rports_busy);
+
+ disc->disc_completion = NULL;
+ disc->lport = lport;
+ INIT_DELAYED_WORK(&disc->disc_work, unf_lport_disc_timeout);
+ disc->disc_temp.unf_disc_start = unf_disc_start;
+ disc->disc_temp.unf_disc_stop = unf_disc_stop;
+ disc->disc_temp.unf_disc_callback = NULL;
+ atomic_set(&disc->disc_ref_cnt, 0);
+
+ /* Init RSCN Manager */
+ ret = unf_init_rscn_mgr(lport);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ if (lport->root_lport != lport)
+ return ret;
+
+ ret = unf_crerate_disc_thread(disc);
+ if (ret != RETURN_OK) {
+ unf_destroy_rscn_mngr(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Init R_Port free Pool */
+ ret = unf_init_rport_pool(lport);
+ if (ret != RETURN_OK) {
+ unf_destroy_disc_thread(disc);
+ unf_destroy_rscn_mngr(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Init R_Port free disc Pool */
+ ret = unf_init_disc_rport_pool(lport);
+ if (ret != RETURN_OK) {
+ unf_destroy_disc_thread(disc);
+ unf_free_rport_pool(lport);
+ unf_destroy_rscn_mngr(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return ret;
+}
diff --git a/drivers/scsi/spfc/common/unf_disc.h b/drivers/scsi/spfc/common/unf_disc.h
new file mode 100644
index 000000000000..7ecad3eec424
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_disc.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_DISC_H
+#define UNF_DISC_H
+
+#include "unf_type.h"
+
+#define UNF_DISC_RETRY_TIMES 3
+#define UNF_DISC_NONE 0
+#define UNF_DISC_FABRIC 1
+#define UNF_DISC_LOOP 2
+
+enum unf_disc_state {
+ UNF_DISC_ST_START = 0x3000,
+ UNF_DISC_ST_GIDPT_WAIT,
+ UNF_DISC_ST_GIDFT_WAIT,
+ UNF_DISC_ST_END
+};
+
+enum unf_disc_event {
+ UNF_EVENT_DISC_NORMAL_ENTER = 0x8000,
+ UNF_EVENT_DISC_FAILED = 0x8001,
+ UNF_EVENT_DISC_SUCCESS = 0x8002,
+ UNF_EVENT_DISC_RETRY_TIMEOUT = 0x8003,
+ UNF_EVENT_DISC_LINKDOWN = 0x8004
+};
+
+enum unf_disc_type {
+ UNF_DISC_GET_PORT_NAME = 0,
+ UNF_DISC_GET_NODE_NAME,
+ UNF_DISC_GET_FEATURE
+};
+
+struct unf_disc_gs_event_info {
+ void *lport;
+ void *rport;
+ u32 rport_id;
+ enum unf_disc_type type;
+ struct list_head list_entry;
+};
+
+u32 unf_get_and_post_disc_event(void *lport, void *sns_port, u32 nport_id,
+ enum unf_disc_type type);
+
+void unf_flush_disc_event(void *disc, void *vport);
+void unf_disc_ctrl_size_inc(void *lport, u32 cmnd);
+void unf_disc_error_recovery(void *lport);
+void unf_disc_mgr_destroy(void *lport);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_event.c b/drivers/scsi/spfc/common/unf_event.c
new file mode 100644
index 000000000000..cf51c31ca4a3
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_event.c
@@ -0,0 +1,517 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_event.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "unf_lport.h"
+
+struct unf_event_list fc_event_list;
+struct unf_global_event_queue global_event_queue;
+
+/* Maximum number of global event nodes */
+#define UNF_MAX_GLOBAL_ENENT_NODE 24
+
+u32 unf_init_event_msg(struct unf_lport *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ u32 ret = RETURN_OK;
+ u32 index = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ event_mgr = &lport->event_mgr;
+
+	/* Allocate and initialize the event node resources */
+ event_mgr->mem_add = vmalloc((size_t)event_mgr->free_event_count *
+ sizeof(struct unf_cm_event_report));
+ if (!event_mgr->mem_add) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate event manager failed",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(event_mgr->mem_add, 0,
+ ((size_t)event_mgr->free_event_count * sizeof(struct unf_cm_event_report)));
+
+ event_node = (struct unf_cm_event_report *)(event_mgr->mem_add);
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, flag);
+ for (index = 0; index < event_mgr->free_event_count; index++) {
+ INIT_LIST_HEAD(&event_node->list_entry);
+ list_add_tail(&event_node->list_entry, &event_mgr->list_free_event);
+ event_node++;
+ }
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flag);
+
+ return ret;
+}
+
+static void unf_del_event_center_fun_op(struct unf_lport *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ event_mgr = &lport->event_mgr;
+ event_mgr->unf_get_free_event_func = NULL;
+ event_mgr->unf_release_event = NULL;
+ event_mgr->unf_post_event_func = NULL;
+}
+
+void unf_init_event_node(struct unf_cm_event_report *event_node)
+{
+ FC_CHECK_RETURN_VOID(event_node);
+
+ event_node->event = UNF_EVENT_TYPE_REQUIRE;
+ event_node->event_asy_flag = UNF_EVENT_ASYN;
+ event_node->delay_times = 0;
+ event_node->para_in = NULL;
+ event_node->para_out = NULL;
+ event_node->result = 0;
+ event_node->lport = NULL;
+ event_node->unf_event_task = NULL;
+}
+
+struct unf_cm_event_report *unf_get_free_event_node(void *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ struct list_head *list_node = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ unf_lport = (struct unf_lport *)lport;
+ unf_lport = unf_lport->root_lport;
+
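+	/* Event nodes are managed by the root port; refuse allocation once
+	 * the port has entered the no-operation (NOP) state.
+	 */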
+ if (unlikely(atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP))
+ return NULL;
+
+ event_mgr = &unf_lport->event_mgr;
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, flags);
+ if (list_empty(&event_mgr->list_free_event)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) has no free event nodes left",
+ unf_lport->port_id);
+
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flags);
+ return NULL;
+ }
+
+ list_node = UNF_OS_LIST_NEXT(&event_mgr->list_free_event);
+ list_del(list_node);
+ event_mgr->free_event_count--;
+ event_node = list_entry(list_node, struct unf_cm_event_report, list_entry);
+
+ unf_init_event_node(event_node);
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flags);
+
+ return event_node;
+}
+
+void unf_post_event(void *lport, void *event_node)
+{
+ struct unf_cm_event_report *cm_event_node = NULL;
+ struct unf_chip_manage_info *card_thread_info = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(event_node);
+ cm_event_node = (struct unf_cm_event_report *)event_node;
+
+ /* If null, post to global event center */
+ if (!lport) {
+ spin_lock_irqsave(&fc_event_list.fc_event_list_lock, flags);
+ fc_event_list.list_num++;
+ list_add_tail(&cm_event_node->list_entry, &fc_event_list.list_head);
+ spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
+
+ wake_up_process(event_task_thread);
+ } else {
+ unf_lport = (struct unf_lport *)lport;
+ unf_lport = unf_lport->root_lport;
+ card_thread_info = unf_lport->chip_info;
+
+ /* Post to global event center */
+ if (!card_thread_info) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
+ "[warn]Port(0x%x) has strange event with type(0x%x)",
+ unf_lport->nport_id, cm_event_node->event);
+
+ spin_lock_irqsave(&fc_event_list.fc_event_list_lock, flags);
+ fc_event_list.list_num++;
+ list_add_tail(&cm_event_node->list_entry, &fc_event_list.list_head);
+ spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
+
+ wake_up_process(event_task_thread);
+ } else {
+ spin_lock_irqsave(&card_thread_info->chip_event_list_lock, flags);
+ card_thread_info->list_num++;
+ list_add_tail(&cm_event_node->list_entry, &card_thread_info->list_head);
+ spin_unlock_irqrestore(&card_thread_info->chip_event_list_lock, flags);
+
+ wake_up_process(card_thread_info->thread);
+ }
+ }
+}
+
+void unf_check_event_mgr_status(struct unf_event_mgr *event_mgr)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(event_mgr);
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, flag);
+ if (event_mgr->emg_completion && event_mgr->free_event_count == UNF_MAX_EVENT_NODE)
+ complete(event_mgr->emg_completion);
+
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flag);
+}
+
+void unf_release_event(void *lport, void *event_node)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_event_report *cm_event_node = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(event_node);
+
+ cm_event_node = (struct unf_cm_event_report *)event_node;
+ unf_lport = (struct unf_lport *)lport;
+ unf_lport = unf_lport->root_lport;
+ event_mgr = &unf_lport->event_mgr;
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, flags);
+ event_mgr->free_event_count++;
+ unf_init_event_node(cm_event_node);
+ list_add_tail(&cm_event_node->list_entry, &event_mgr->list_free_event);
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flags);
+
+ unf_check_event_mgr_status(event_mgr);
+}
+
+void unf_release_global_event(void *event_node)
+{
+ ulong flag = 0;
+ struct unf_cm_event_report *cm_event_node = NULL;
+
+ FC_CHECK_RETURN_VOID(event_node);
+ cm_event_node = (struct unf_cm_event_report *)event_node;
+
+ unf_init_event_node(cm_event_node);
+
+ spin_lock_irqsave(&global_event_queue.global_event_list_lock, flag);
+ global_event_queue.list_number++;
+ list_add_tail(&cm_event_node->list_entry, &global_event_queue.global_event_list);
+ spin_unlock_irqrestore(&global_event_queue.global_event_list_lock, flag);
+}
+
+u32 unf_init_event_center(void *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ u32 ret = RETURN_OK;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ unf_lport = (struct unf_lport *)lport;
+
+	/* Initialize the event manager */
+ event_mgr = &unf_lport->event_mgr;
+ event_mgr->free_event_count = UNF_MAX_EVENT_NODE;
+ event_mgr->unf_get_free_event_func = unf_get_free_event_node;
+ event_mgr->unf_release_event = unf_release_event;
+ event_mgr->unf_post_event_func = unf_post_event;
+
+ INIT_LIST_HEAD(&event_mgr->list_free_event);
+ spin_lock_init(&event_mgr->port_event_lock);
+ event_mgr->emg_completion = NULL;
+
+ ret = unf_init_event_msg(unf_lport);
+
+ return ret;
+}
+
+void unf_wait_event_mgr_complete(struct unf_event_mgr *event_mgr)
+{
+ struct unf_event_mgr *event_mgr_temp = NULL;
+ bool wait = false;
+ ulong mg_flag = 0;
+
+ struct completion fc_event_completion;
+
+ init_completion(&fc_event_completion);
+ FC_CHECK_RETURN_VOID(event_mgr);
+ event_mgr_temp = event_mgr;
+
+ spin_lock_irqsave(&event_mgr_temp->port_event_lock, mg_flag);
+ if (event_mgr_temp->free_event_count != UNF_MAX_EVENT_NODE) {
+ event_mgr_temp->emg_completion = &fc_event_completion;
+ wait = true;
+ }
+ spin_unlock_irqrestore(&event_mgr_temp->port_event_lock, mg_flag);
+
+ if (wait)
+ wait_for_completion(event_mgr_temp->emg_completion);
+
+ spin_lock_irqsave(&event_mgr_temp->port_event_lock, mg_flag);
+ event_mgr_temp->emg_completion = NULL;
+ spin_unlock_irqrestore(&event_mgr_temp->port_event_lock, mg_flag);
+}
+
+u32 unf_event_center_destroy(void *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ struct list_head *list = NULL;
+ struct list_head *list_tmp = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+ ulong list_lock_flag = 0;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ unf_lport = (struct unf_lport *)lport;
+ event_mgr = &unf_lport->event_mgr;
+
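+	/* Return any of this port's events still queued on the global event
+	 * list to the port's free pool (completing synchronous waiters with
+	 * an error), then wait until every outstanding event node is back.
+	 */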
+ spin_lock_irqsave(&fc_event_list.fc_event_list_lock, list_lock_flag);
+ if (!list_empty(&fc_event_list.list_head)) {
+ list_for_each_safe(list, list_tmp, &fc_event_list.list_head) {
+ event_node = list_entry(list, struct unf_cm_event_report, list_entry);
+
+ if (event_node->lport == unf_lport) {
+ list_del_init(&event_node->list_entry);
+ if (event_node->event_asy_flag == UNF_EVENT_SYN) {
+ event_node->result = UNF_RETURN_ERROR;
+ complete(&event_node->event_comp);
+ }
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, flag);
+ event_mgr->free_event_count++;
+ list_add_tail(&event_node->list_entry,
+ &event_mgr->list_free_event);
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flag);
+ }
+ }
+ }
+ spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, list_lock_flag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to wait event",
+ unf_lport->port_id);
+
+ unf_wait_event_mgr_complete(event_mgr);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) wait event process end",
+ unf_lport->port_id);
+
+ unf_del_event_center_fun_op(unf_lport);
+
+ vfree(event_mgr->mem_add);
+ event_mgr->mem_add = NULL;
+ unf_lport->destroy_step = UNF_LPORT_DESTROY_STEP_3_DESTROY_EVENT_CENTER;
+
+ return ret;
+}
+
+static void unf_procee_asyn_event(struct unf_cm_event_report *event_node)
+{
+ struct unf_lport *lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ lport = (struct unf_lport *)event_node->lport;
+
+ FC_CHECK_RETURN_VOID(lport);
+ if (event_node->unf_event_task) {
+ ret = (u32)event_node->unf_event_task(event_node->para_in,
+ event_node->para_out);
+ }
+
+ if (lport->event_mgr.unf_release_event)
+ lport->event_mgr.unf_release_event(lport, event_node);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
+ "[warn]Port(0x%x) handle event(0x%x) failed",
+ lport->port_id, event_node->event);
+ }
+}
+
+void unf_handle_event(struct unf_cm_event_report *event_node)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 event = 0;
+ u32 event_asy_flag = UNF_EVENT_ASYN;
+
+ FC_CHECK_RETURN_VOID(event_node);
+
+ event = event_node->event;
+ event_asy_flag = event_node->event_asy_flag;
+
+ switch (event_asy_flag) {
+ case UNF_EVENT_SYN: /* synchronous event node */
+ case UNF_GLOBAL_EVENT_SYN:
+ if (event_node->unf_event_task)
+ ret = (u32)event_node->unf_event_task(event_node->para_in,
+ event_node->para_out);
+
+ event_node->result = ret;
+ complete(&event_node->event_comp);
+ break;
+
+ case UNF_EVENT_ASYN: /* asynchronous event node */
+ unf_procee_asyn_event(event_node);
+ break;
+
+ case UNF_GLOBAL_EVENT_ASYN:
+ if (event_node->unf_event_task) {
+ ret = (u32)event_node->unf_event_task(event_node->para_in,
+ event_node->para_out);
+ }
+
+ unf_release_global_event(event_node);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
+ "[warn]handle global event(0x%x) failed", event);
+ }
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
+ "[warn]Unknown event(0x%x)", event);
+ break;
+ }
+}
+
+u32 unf_init_global_event_msg(void)
+{
+ struct unf_cm_event_report *event_node = NULL;
+ u32 ret = RETURN_OK;
+ u32 index = 0;
+ ulong flag = 0;
+
+ INIT_LIST_HEAD(&global_event_queue.global_event_list);
+ spin_lock_init(&global_event_queue.global_event_list_lock);
+ global_event_queue.list_number = 0;
+
+ global_event_queue.global_event_add = vmalloc(UNF_MAX_GLOBAL_ENENT_NODE *
+ sizeof(struct unf_cm_event_report));
+ if (!global_event_queue.global_event_add) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Can't allocate global event queue");
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(global_event_queue.global_event_add, 0,
+ (UNF_MAX_GLOBAL_ENENT_NODE * sizeof(struct unf_cm_event_report)));
+
+ event_node = (struct unf_cm_event_report *)(global_event_queue.global_event_add);
+
+ spin_lock_irqsave(&global_event_queue.global_event_list_lock, flag);
+ for (index = 0; index < UNF_MAX_GLOBAL_ENENT_NODE; index++) {
+ INIT_LIST_HEAD(&event_node->list_entry);
+ list_add_tail(&event_node->list_entry, &global_event_queue.global_event_list);
+
+ global_event_queue.list_number++;
+ event_node++;
+ }
+ spin_unlock_irqrestore(&global_event_queue.global_event_list_lock, flag);
+
+ return ret;
+}
+
+void unf_destroy_global_event_msg(void)
+{
+ if (global_event_queue.list_number != UNF_MAX_GLOBAL_ENENT_NODE) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
+			     "[warn]Global event release incomplete, remaining nodes(0x%x)",
+ global_event_queue.list_number);
+ }
+
+ vfree(global_event_queue.global_event_add);
+}
+
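+/*
+ * Take a node from the global event pool, attach the caller's task to it and
+ * post it to the global event thread. Synchronous callers block until the
+ * task has run and then return its result; asynchronous nodes are recycled
+ * by the event handler itself.
+ */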
+u32 unf_schedule_global_event(void *para_in, u32 event_asy_flag,
+ int (*unf_event_task)(void *arg_in, void *arg_out))
+{
+ struct list_head *list_node = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ spinlock_t *event_list_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(unf_event_task, UNF_RETURN_ERROR);
+
+ if (event_asy_flag != UNF_GLOBAL_EVENT_ASYN && event_asy_flag != UNF_GLOBAL_EVENT_SYN) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+			     "[warn]Event async flag(0x%x) is invalid",
+ event_asy_flag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ event_list_lock = &global_event_queue.global_event_list_lock;
+ spin_lock_irqsave(event_list_lock, flag);
+ if (list_empty(&global_event_queue.global_event_list)) {
+ spin_unlock_irqrestore(event_list_lock, flag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ list_node = UNF_OS_LIST_NEXT(&global_event_queue.global_event_list);
+ list_del_init(list_node);
+ global_event_queue.list_number--;
+ event_node = list_entry(list_node, struct unf_cm_event_report, list_entry);
+ spin_unlock_irqrestore(event_list_lock, flag);
+
+	/* Initialize the global event node */
+ unf_init_event_node(event_node);
+ init_completion(&event_node->event_comp);
+ event_node->event_asy_flag = event_asy_flag;
+ event_node->unf_event_task = unf_event_task;
+ event_node->para_in = (void *)para_in;
+ event_node->para_out = NULL;
+
+ unf_post_event(NULL, event_node);
+
+ if (event_asy_flag == UNF_GLOBAL_EVENT_SYN) {
+		/* must wait for completion */
+ wait_for_completion(&event_node->event_comp);
+ ret = event_node->result;
+ unf_release_global_event(event_node);
+ } else {
+ ret = RETURN_OK;
+ }
+
+ return ret;
+}
+
+struct unf_cm_event_report *unf_get_one_event_node(void *lport)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(unf_lport->event_mgr.unf_get_free_event_func, NULL);
+
+ return unf_lport->event_mgr.unf_get_free_event_func((void *)unf_lport);
+}
+
+void unf_post_one_event_node(void *lport, struct unf_cm_event_report *event)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(event);
+
+ FC_CHECK_RETURN_VOID(unf_lport->event_mgr.unf_post_event_func);
+
+ unf_lport->event_mgr.unf_post_event_func((void *)unf_lport, event);
+}
diff --git a/drivers/scsi/spfc/common/unf_event.h b/drivers/scsi/spfc/common/unf_event.h
new file mode 100644
index 000000000000..4d23f11986af
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_event.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_EVENT_H
+#define UNF_EVENT_H
+
+#include "unf_type.h"
+
+#define UNF_MAX_EVENT_NODE 256
+
+enum unf_event_type {
+ UNF_EVENT_TYPE_ALARM = 0, /* Alarm */
+ UNF_EVENT_TYPE_REQUIRE, /* Require */
+ UNF_EVENT_TYPE_RECOVERY, /* Recovery */
+ UNF_EVENT_TYPE_BUTT
+};
+
+struct unf_cm_event_report {
+ /* event type */
+ u32 event;
+
+ /* ASY flag */
+ u32 event_asy_flag;
+
+	/* Delay times, must be async event */
+ u32 delay_times;
+
+ struct list_head list_entry;
+
+ void *lport;
+
+ /* parameter */
+ void *para_in;
+ void *para_out;
+ u32 result;
+
+ /* recovery strategy */
+ int (*unf_event_task)(void *arg_in, void *arg_out);
+
+ struct completion event_comp;
+};
+
+struct unf_event_mgr {
+ spinlock_t port_event_lock;
+ u32 free_event_count;
+
+ struct list_head list_free_event;
+
+ struct completion *emg_completion;
+
+ void *mem_add;
+ struct unf_cm_event_report *(*unf_get_free_event_func)(void *lport);
+ void (*unf_release_event)(void *lport, void *event_node);
+ void (*unf_post_event_func)(void *lport, void *event_node);
+};
+
+struct unf_global_event_queue {
+ void *global_event_add;
+ u32 list_number;
+ struct list_head global_event_list;
+ spinlock_t global_event_list_lock;
+};
+
+struct unf_event_list {
+ struct list_head list_head;
+ spinlock_t fc_event_list_lock;
+ u32 list_num; /* list node number */
+};
+
+void unf_handle_event(struct unf_cm_event_report *event_node);
+u32 unf_init_global_event_msg(void);
+void unf_destroy_global_event_msg(void);
+u32 unf_schedule_global_event(void *para_in, u32 event_asy_flag,
+ int (*unf_event_task)(void *arg_in, void *arg_out));
+struct unf_cm_event_report *unf_get_one_event_node(void *lport);
+void unf_post_one_event_node(void *lport, struct unf_cm_event_report *event);
+u32 unf_event_center_destroy(void *lport);
+u32 unf_init_event_center(void *lport);
+
+extern struct task_struct *event_task_thread;
+extern struct unf_global_event_queue global_event_queue;
+extern struct unf_event_list fc_event_list;
+#endif
+
diff --git a/drivers/scsi/spfc/common/unf_exchg.c b/drivers/scsi/spfc/common/unf_exchg.c
new file mode 100644
index 000000000000..830cad7e6962
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_exchg.c
@@ -0,0 +1,2317 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_exchg.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "unf_rport.h"
+#include "unf_service.h"
+#include "unf_io.h"
+#include "unf_exchg_abort.h"
+
+#define SPFC_XCHG_TYPE_MASK 0xFFFF
+#define UNF_DEL_XCHG_TIMER_SAFE(xchg) \
+ do { \
+ if (cancel_delayed_work(&((xchg)->timeout_work))) { \
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR, \
+ "Exchange(0x%p) is free, but timer is pending.", \
+ xchg); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, \
+ UNF_CRITICAL, \
+ "Exchange(0x%p) is free, but timer is running.", \
+ xchg); \
+ } \
+ } while (0)
+
+static struct unf_io_flow_id io_stage_table[] = {
+ {"XCHG_ALLOC"}, {"TGT_RECEIVE_ABTS"},
+ {"TGT_ABTS_DONE"}, {"TGT_IO_SRR"},
+ {"SFS_RESPONSE"}, {"SFS_TIMEOUT"},
+ {"INI_SEND_CMND"}, {"INI_RESPONSE_DONE"},
+ {"INI_EH_ABORT"}, {"INI_EH_DEVICE_RESET"},
+ {"INI_EH_BLS_DONE"}, {"INI_IO_TIMEOUT"},
+ {"INI_REQ_TIMEOUT"}, {"XCHG_CANCEL_TIMER"},
+ {"XCHG_FREE_XCHG"}, {"SEND_ELS"},
+ {"IO_XCHG_WAIT"},
+};
+
+static void unf_init_xchg_attribute(struct unf_xchg *xchg);
+static void unf_delay_work_del_syn(struct unf_xchg *xchg);
+static void unf_free_lport_sfs_xchg(struct unf_xchg_mgr *xchg_mgr,
+ bool done_ini_flag);
+static void unf_free_lport_destroy_xchg(struct unf_xchg_mgr *xchg_mgr);
+
+void unf_wake_up_scsi_task_cmnd(struct unf_lport *lport)
+{
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong hot_pool_lock_flags = 0;
+ ulong xchg_flag = 0;
+ struct unf_xchg_mgr *xchg_mgrs = NULL;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
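+	/* Walk the INI busy list of every exchange manager and wake any
+	 * exchange that is blocked in an up-layer task management command.
+	 */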
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ xchg_mgrs = unf_get_xchg_mgr_by_lport(lport, i);
+
+ if (!xchg_mgrs) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MINOR,
+ "Can't find LPort(0x%x) MgrIdx %u exchange manager.",
+ lport->port_id, i);
+ continue;
+ }
+
+ spin_lock_irqsave(&xchg_mgrs->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ list_for_each_safe(node, next_node,
+ (&xchg_mgrs->hot_pool->ini_busylist)) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ if (INI_IO_STATE_UPTASK & xchg->io_state &&
+ (atomic_read(&xchg->ref_cnt) > 0)) {
+ UNF_SET_SCSI_CMND_RESULT(xchg, UNF_IO_SUCCESS);
+ up(&xchg->task_sema);
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MINOR,
+ "Wake up task command exchange(0x%p), Hot Pool Tag(0x%x).",
+ xchg, xchg->hotpooltag);
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+ }
+
+ spin_unlock_irqrestore(&xchg_mgrs->hot_pool->xchg_hotpool_lock,
+ hot_pool_lock_flags);
+ }
+}
+
+void *unf_cm_get_free_xchg(void *lport, u32 xchg_type)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
+
+ FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
+
+ /* Find the corresponding Lport Xchg management template. */
+ FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_xchg_get_free_and_init), NULL);
+
+ return xchg_mgr_temp->unf_xchg_get_free_and_init(unf_lport, xchg_type);
+}
+
+void unf_cm_free_xchg(void *lport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
+
+ FC_CHECK_RETURN_VOID(unlikely(lport));
+ FC_CHECK_RETURN_VOID(unlikely(xchg));
+
+ unf_lport = (struct unf_lport *)lport;
+ xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
+ FC_CHECK_RETURN_VOID(unlikely(xchg_mgr_temp->unf_xchg_release));
+
+ /*
+ * unf_cm_free_xchg --->>> unf_free_xchg
+ * --->>> unf_xchg_ref_dec --->>> unf_free_fcp_xchg --->>>
+ * unf_done_ini_xchg
+ */
+ xchg_mgr_temp->unf_xchg_release(lport, xchg);
+}
+
+void *unf_cm_lookup_xchg_by_tag(void *lport, u16 hot_pool_tag)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
+
+ FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
+
+ /* Find the corresponding Lport Xchg management template */
+ unf_lport = (struct unf_lport *)lport;
+ xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
+
+ FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_look_up_xchg_by_tag), NULL);
+
+ return xchg_mgr_temp->unf_look_up_xchg_by_tag(lport, hot_pool_tag);
+}
+
+void *unf_cm_lookup_xchg_by_id(void *lport, u16 ox_id, u32 oid)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
+
+ FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
+
+ /* Find the corresponding Lport Xchg management template */
+ FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_look_up_xchg_by_id), NULL);
+
+ return xchg_mgr_temp->unf_look_up_xchg_by_id(lport, ox_id, oid);
+}
+
+struct unf_xchg *unf_cm_lookup_xchg_by_cmnd_sn(void *lport, u64 command_sn,
+ u32 world_id, void *pinitiator)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
+ struct unf_xchg *xchg = NULL;
+
+ FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
+
+ FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_look_up_xchg_by_cmnd_sn), NULL);
+
+ xchg = (struct unf_xchg *)xchg_mgr_temp->unf_look_up_xchg_by_cmnd_sn(unf_lport,
+ command_sn,
+ world_id,
+ pinitiator);
+
+ return xchg;
+}
+
+static u32 unf_init_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr,
+ u32 xchg_sum, u32 sfs_sum)
+{
+ struct unf_xchg *xchg_mem = NULL;
+ union unf_sfs_u *sfs_mm_start = NULL;
+ dma_addr_t sfs_dma_addr;
+ struct unf_xchg *xchg = NULL;
+ struct unf_xchg_free_pool *free_pool = NULL;
+ ulong flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VALUE((sfs_sum <= xchg_sum), UNF_RETURN_ERROR);
+
+ free_pool = &xchg_mgr->free_pool;
+ xchg_mem = xchg_mgr->fcp_mm_start;
+ xchg = xchg_mem;
+
+ sfs_mm_start = (union unf_sfs_u *)xchg_mgr->sfs_mm_start;
+ sfs_dma_addr = xchg_mgr->sfs_phy_addr;
+ /* 1. Allocate the SFS UNION memory to each SFS XCHG
+ * and mount the SFS XCHG to the corresponding FREE linked list
+ */
+ free_pool->total_sfs_xchg = 0;
+ free_pool->sfs_xchg_sum = sfs_sum;
+ for (i = 0; i < sfs_sum; i++) {
+ INIT_LIST_HEAD(&xchg->list_xchg_entry);
+ INIT_LIST_HEAD(&xchg->list_esgls);
+ spin_lock_init(&xchg->xchg_state_lock);
+ sema_init(&xchg->task_sema, 0);
+ sema_init(&xchg->echo_info.echo_sync_sema, 0);
+
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr = sfs_mm_start;
+ xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr = sfs_dma_addr;
+ xchg->fcp_sfs_union.sfs_entry.sfs_buff_len = sizeof(*sfs_mm_start);
+ list_add_tail(&xchg->list_xchg_entry, &free_pool->list_sfs_xchg_list);
+ free_pool->total_sfs_xchg++;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+
+ sfs_mm_start++;
+ sfs_dma_addr = sfs_dma_addr + sizeof(union unf_sfs_u);
+ xchg++;
+ }
+
+ free_pool->total_fcp_xchg = 0;
+
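+	/* 2. Mount the remaining exchanges to the FCP free list; unlike the
+	 * SFS exchanges they carry no preallocated SFS buffer.
+	 */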
+ for (i = 0; (i < xchg_sum - sfs_sum); i++) {
+ INIT_LIST_HEAD(&xchg->list_xchg_entry);
+
+ INIT_LIST_HEAD(&xchg->list_esgls);
+ spin_lock_init(&xchg->xchg_state_lock);
+ sema_init(&xchg->task_sema, 0);
+ sema_init(&xchg->echo_info.echo_sync_sema, 0);
+
+ /* alloc dma buffer for fcp_rsp_iu */
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+ list_add_tail(&xchg->list_xchg_entry, &free_pool->list_free_xchg_list);
+ free_pool->total_fcp_xchg++;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+
+ xchg++;
+ }
+
+ free_pool->fcp_xchg_sum = free_pool->total_fcp_xchg;
+
+ return RETURN_OK;
+}
+
+static u32 unf_get_xchg_config_sum(struct unf_lport *lport, u32 *xchg_sum)
+{
+ struct unf_lport_cfg_item *lport_cfg_items = NULL;
+
+ lport_cfg_items = &lport->low_level_func.lport_cfg_items;
+
+	/* This has already been validated at the lower layer, so no further
+	 * check is needed here.
+	 */
+ *xchg_sum = lport_cfg_items->max_sfs_xchg + lport_cfg_items->max_io;
+ if ((*xchg_sum / UNF_EXCHG_MGR_NUM) == 0 ||
+ lport_cfg_items->max_sfs_xchg / UNF_EXCHG_MGR_NUM == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) Xchgsum(%u) or SfsXchg(%u) is less than ExchangeMgrNum(%u).",
+ lport->port_id, *xchg_sum, lport_cfg_items->max_sfs_xchg,
+ UNF_EXCHG_MGR_NUM);
+ return UNF_RETURN_ERROR;
+ }
+
+ if (*xchg_sum > (INVALID_VALUE16 - 1)) {
+		/* ox_id/rx_id are 16-bit values, so an exchange count beyond
+		 * this range is not supported.
+		 */
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "Port(0x%x) exchange number(0x%x) is too large.",
+ lport->port_id, *xchg_sum);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+static void unf_xchg_cancel_timer(void *xchg)
+{
+ struct unf_xchg *tmp_xchg = NULL;
+ bool need_dec_xchg_ref = false;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ tmp_xchg = (struct unf_xchg *)xchg;
+
+ spin_lock_irqsave(&tmp_xchg->xchg_state_lock, flag);
+ if (cancel_delayed_work(&tmp_xchg->timeout_work))
+ need_dec_xchg_ref = true;
+
+ spin_unlock_irqrestore(&tmp_xchg->xchg_state_lock, flag);
+
+ if (need_dec_xchg_ref)
+ unf_xchg_ref_dec(xchg, XCHG_CANCEL_TIMER);
+}
+
+void unf_show_all_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ unf_lport = lport;
+
+ /* hot Xchg */
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN, "INI busy :");
+ list_for_each_safe(xchg_node, next_xchg_node, &xchg_mgr->hot_pool->ini_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
+ xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
+ (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid, (u32)xchg->did,
+ atomic_read(&xchg->ref_cnt), (u32)xchg->io_state, xchg->alloc_jif);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN, "SFS :");
+ list_for_each_safe(xchg_node, next_xchg_node, &xchg_mgr->hot_pool->sfs_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "0x%p---0x%x---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
+ xchg, xchg->cmnd_code, (u32)xchg->hotpooltag,
+ (u32)xchg->xchg_type, (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid,
+ (u32)xchg->did, atomic_read(&xchg->ref_cnt),
+ (u32)xchg->io_state, xchg->alloc_jif);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN, "Destroy list.");
+ list_for_each_safe(xchg_node, next_xchg_node, &xchg_mgr->hot_pool->list_destroy_xchg) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
+ xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
+ (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid, (u32)xchg->did,
+ atomic_read(&xchg->ref_cnt), (u32)xchg->io_state, xchg->alloc_jif);
+ }
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, flags);
+}
+
+static u32 unf_free_lport_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
+{
+#define UNF_OS_WAITIO_TIMEOUT (10 * 1000)
+
+ ulong free_pool_lock_flags = 0;
+ bool wait = false;
+ u32 total_xchg = 0;
+ u32 total_xchg_sum = 0;
+ u32 ret = RETURN_OK;
+ u64 time_out = 0;
+ struct completion xchg_mgr_completion;
+
+ init_completion(&xchg_mgr_completion);
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg_mgr->hot_pool, UNF_RETURN_ERROR);
+
+ unf_free_lport_sfs_xchg(xchg_mgr, false);
+
+ /* free INI Mode exchanges belong to L_Port */
+ unf_free_lport_ini_xchg(xchg_mgr, false);
+
+ spin_lock_irqsave(&xchg_mgr->free_pool.xchg_freepool_lock, free_pool_lock_flags);
+ total_xchg = xchg_mgr->free_pool.total_fcp_xchg + xchg_mgr->free_pool.total_sfs_xchg;
+ total_xchg_sum = xchg_mgr->free_pool.fcp_xchg_sum + xchg_mgr->free_pool.sfs_xchg_sum;
+ if (total_xchg != total_xchg_sum) {
+ xchg_mgr->free_pool.xchg_mgr_completion = &xchg_mgr_completion;
+ wait = true;
+ }
+ spin_unlock_irqrestore(&xchg_mgr->free_pool.xchg_freepool_lock, free_pool_lock_flags);
+
+ if (wait) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to wait for exchange manager completion (0x%x:0x%x)",
+ lport->port_id, total_xchg, total_xchg_sum);
+
+ unf_show_all_xchg(lport, xchg_mgr);
+
+ time_out = wait_for_completion_timeout(xchg_mgr->free_pool.xchg_mgr_completion,
+ msecs_to_jiffies(UNF_OS_WAITIO_TIMEOUT));
+ if (time_out == 0)
+ unf_free_lport_destroy_xchg(xchg_mgr);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) wait for exchange manager completion end",
+ lport->port_id);
+
+ spin_lock_irqsave(&xchg_mgr->free_pool.xchg_freepool_lock, free_pool_lock_flags);
+ xchg_mgr->free_pool.xchg_mgr_completion = NULL;
+ spin_unlock_irqrestore(&xchg_mgr->free_pool.xchg_freepool_lock,
+ free_pool_lock_flags);
+ }
+
+ return ret;
+}
+
+void unf_free_lport_all_xchg(struct unf_lport *lport)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ u32 i;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+		xchg_mgr = unf_get_xchg_mgr_by_lport(lport, i);
+ if (unlikely(!xchg_mgr)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) hot pool is NULL",
+ lport->port_id);
+
+ continue;
+ }
+ unf_free_lport_sfs_xchg(xchg_mgr, false);
+
+ /* free INI Mode exchanges belong to L_Port */
+ unf_free_lport_ini_xchg(xchg_mgr, false);
+
+ unf_free_lport_destroy_xchg(xchg_mgr);
+ }
+}
+
+static void unf_delay_work_del_syn(struct unf_xchg *xchg)
+{
+ FC_CHECK_RETURN_VOID(xchg);
+
+ /* synchronous release timer */
+ if (!cancel_delayed_work_sync(&xchg->timeout_work)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Exchange(0x%p), State(0x%x) can't delete work timer, timer is running or no timer.",
+ xchg, xchg->io_state);
+ } else {
+ /* The reference count cannot be directly subtracted.
+ * This prevents the XCHG from being moved to the Free linked
+ * list when the card is unloaded.
+ */
+ unf_cm_free_xchg(xchg->lport, xchg);
+ }
+}
+
+static void unf_free_lport_sfs_xchg(struct unf_xchg_mgr *xchg_mgr, bool done_ini_flag)
+{
+ struct list_head *list = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong hot_pool_lock_flags = 0;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+ FC_CHECK_RETURN_VOID(xchg_mgr->hot_pool);
+
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ while (!list_empty(&xchg_mgr->hot_pool->sfs_busylist)) {
+ list = UNF_OS_LIST_NEXT(&xchg_mgr->hot_pool->sfs_busylist);
+ list_del_init(list);
+
+		/* Mount the SFS xchg to the destroy list first so that it
+		 * cannot be accessed again while it is being freed.
+		 */
+ list_add_tail(list, &xchg_mgr->hot_pool->list_destroy_xchg);
+
+ xchg = list_entry(list, struct unf_xchg, list_xchg_entry);
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ unf_delay_work_del_syn(xchg);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Free SFS Exchange(0x%p), State(0x%x), Reference count(%d), Start time(%llu).",
+ xchg, xchg->io_state, atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
+
+ unf_cm_free_xchg(xchg->lport, xchg);
+
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ }
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+}
+
+void unf_free_lport_ini_xchg(struct unf_xchg_mgr *xchg_mgr, bool done_ini_flag)
+{
+ struct list_head *list = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong hot_pool_lock_flags = 0;
+ u32 up_status = 0;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+ FC_CHECK_RETURN_VOID(xchg_mgr->hot_pool);
+
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ while (!list_empty(&xchg_mgr->hot_pool->ini_busylist)) {
+ /* for each INI busy_list (exchange) node */
+ list = UNF_OS_LIST_NEXT(&xchg_mgr->hot_pool->ini_busylist);
+
+		/* Move the exchange node to destroy_list to prevent it from being completed repeatedly */
+ list_del_init(list);
+ list_add_tail(list, &xchg_mgr->hot_pool->list_destroy_xchg);
+ xchg = list_entry(list, struct unf_xchg, list_xchg_entry);
+ if (atomic_read(&xchg->ref_cnt) <= 0)
+ continue;
+
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock,
+ hot_pool_lock_flags);
+ unf_delay_work_del_syn(xchg);
+
+		/* When an INI command is completed here, force a failure
+		 * status to avoid the data inconsistency that a spurious OK
+		 * completion would cause.
+		 */
+ up_status = unf_get_up_level_cmnd_errcode(xchg->scsi_cmnd_info.err_code_table,
+ xchg->scsi_cmnd_info.err_code_table_cout,
+ UNF_IO_PORT_LOGOUT);
+
+ if (INI_IO_STATE_UPABORT & xchg->io_state) {
+ /*
+ * About L_Port destroy:
+ * UP_ABORT ---to--->>> ABORT_Port_Removing
+ */
+ up_status = UNF_IO_ABORT_PORT_REMOVING;
+ }
+
+ xchg->scsi_cmnd_info.result = up_status;
+ up(&xchg->task_sema);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Free INI exchange(0x%p) state(0x%x) reference count(%d) start time(%llu)",
+ xchg, xchg->io_state, atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
+
+ unf_cm_free_xchg(xchg->lport, xchg);
+
+ /* go to next INI busy_list (exchange) node */
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ }
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+}
+
+static void unf_free_lport_destroy_xchg(struct unf_xchg_mgr *xchg_mgr)
+{
+#define UNF_WAIT_DESTROY_EMPTY_STEP_MS 1000
+#define UNF_WAIT_IO_STATE_TGT_FRONT_MS (10 * 1000)
+
+ struct unf_xchg *xchg = NULL;
+ struct list_head *next_xchg_node = NULL;
+ ulong hot_pool_lock_flags = 0;
+ ulong xchg_flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+ FC_CHECK_RETURN_VOID(xchg_mgr->hot_pool);
+
+	/* Cancel the timer of every exchange left on the destroy list; all
+	 * that remains is to make sure each timer has really been released.
+	 */
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ while (!list_empty(&xchg_mgr->hot_pool->list_destroy_xchg)) {
+ next_xchg_node = UNF_OS_LIST_NEXT(&xchg_mgr->hot_pool->list_destroy_xchg);
+ xchg = list_entry(next_xchg_node, struct unf_xchg, list_xchg_entry);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Free Exchange(0x%p), Type(0x%x), State(0x%x), Reference count(%d), Start time(%llu)",
+ xchg, xchg->xchg_type, xchg->io_state,
+ atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+
+ /* This interface can be invoked to ensure that the timer is
+ * successfully canceled or wait until the timer execution is
+ * complete
+ */
+ unf_delay_work_del_syn(xchg);
+
+		/*
+		 * If the timer was cancelled successfully, free the Xchg.
+		 * If the timer has already fired, the Xchg may have been
+		 * released, in which case freeing it here will fail.
+		 */
+ unf_cm_free_xchg(xchg->lport, xchg);
+
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+	}
+
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+}
+
+static void unf_free_all_big_sfs(struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_big_sfs *big_sfs = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ /* Release the free resources in the busy state */
+ spin_lock_irqsave(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
+ list_for_each_safe(node, next_node, &xchg_mgr->big_sfs_pool.list_busypool) {
+ list_del(node);
+ list_add_tail(node, &xchg_mgr->big_sfs_pool.list_freepool);
+ }
+
+ list_for_each_safe(node, next_node, &xchg_mgr->big_sfs_pool.list_freepool) {
+ list_del(node);
+ big_sfs = list_entry(node, struct unf_big_sfs, entry_bigsfs);
+ if (big_sfs->addr)
+ big_sfs->addr = NULL;
+ }
+ spin_unlock_irqrestore(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
+
+ if (xchg_mgr->big_sfs_buf_list.buflist) {
+ for (i = 0; i < xchg_mgr->big_sfs_buf_list.buf_num; i++) {
+ kfree(xchg_mgr->big_sfs_buf_list.buflist[i].vaddr);
+ xchg_mgr->big_sfs_buf_list.buflist[i].vaddr = NULL;
+ }
+
+ kfree(xchg_mgr->big_sfs_buf_list.buflist);
+ xchg_mgr->big_sfs_buf_list.buflist = NULL;
+ }
+}
+
+static void unf_free_big_sfs_pool(struct unf_xchg_mgr *xchg_mgr)
+{
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Free Big SFS Pool, Count(0x%x).",
+ xchg_mgr->big_sfs_pool.free_count);
+
+ unf_free_all_big_sfs(xchg_mgr);
+ xchg_mgr->big_sfs_pool.free_count = 0;
+
+ if (xchg_mgr->big_sfs_pool.big_sfs_pool) {
+ vfree(xchg_mgr->big_sfs_pool.big_sfs_pool);
+ xchg_mgr->big_sfs_pool.big_sfs_pool = NULL;
+ }
+}
+
+static void unf_free_xchg_mgr_mem(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 i = 0;
+ u32 xchg_sum = 0;
+ struct unf_xchg_free_pool *free_pool = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ unf_free_big_sfs_pool(xchg_mgr);
+
+	/* The SFS memory is released first; sfs_mm_start was obtained from
+	 * the coherent DMA allocator, so it is compared against 0 rather than
+	 * NULL.
+	 */
+ if (xchg_mgr->sfs_mm_start != 0) {
+ dma_free_coherent(&lport->low_level_func.dev->dev, xchg_mgr->sfs_mem_size,
+ xchg_mgr->sfs_mm_start, xchg_mgr->sfs_phy_addr);
+ xchg_mgr->sfs_mm_start = 0;
+ }
+
+ /* Release Xchg first */
+ if (xchg_mgr->fcp_mm_start) {
+ unf_get_xchg_config_sum(lport, &xchg_sum);
+ xchg_sum = xchg_sum / UNF_EXCHG_MGR_NUM;
+
+ xchg = xchg_mgr->fcp_mm_start;
+ for (i = 0; i < xchg_sum; i++) {
+ if (!xchg)
+ break;
+ xchg++;
+ }
+
+ vfree(xchg_mgr->fcp_mm_start);
+ xchg_mgr->fcp_mm_start = NULL;
+ }
+
+ /* release the hot pool */
+ if (xchg_mgr->hot_pool) {
+ vfree(xchg_mgr->hot_pool);
+ xchg_mgr->hot_pool = NULL;
+ }
+
+ free_pool = &xchg_mgr->free_pool;
+
+ vfree(xchg_mgr);
+}
+
+static void unf_free_xchg_mgr(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
+{
+ ulong flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ /* 1. At first, free exchanges for this Exch_Mgr */
+ ret = unf_free_lport_xchg(lport, xchg_mgr);
+
+ /* 2. Delete this Exch_Mgr entry */
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ list_del_init(&xchg_mgr->xchg_mgr_entry);
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ /* 3. free Exch_Mgr memory if necessary */
+ if (ret == RETURN_OK) {
+ /* free memory directly */
+ unf_free_xchg_mgr_mem(lport, xchg_mgr);
+ } else {
+ /* Add it to Dirty list */
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ list_add_tail(&xchg_mgr->xchg_mgr_entry, &lport->list_drty_xchg_mgr_head);
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ /* Mark dirty flag */
+ unf_cm_mark_dirty_mem(lport, UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY);
+ }
+}
+
+void unf_free_all_xchg_mgr(struct unf_lport *lport)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ ulong flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ /* for each L_Port->Exch_Mgr_List */
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ while (!list_empty(&lport->list_xchg_mgr_head)) {
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ xchg_mgr = unf_get_xchg_mgr_by_lport(lport, i);
+ unf_free_xchg_mgr(lport, xchg_mgr);
+ if (i < UNF_EXCHG_MGR_NUM)
+ lport->xchg_mgr[i] = NULL;
+
+ i++;
+
+ /* go to next */
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ }
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_4_DESTROY_EXCH_MGR;
+}
+
+static u32 unf_init_xchg_mgr(struct unf_xchg_mgr *xchg_mgr)
+{
+ FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
+
+ memset(xchg_mgr, 0, sizeof(struct unf_xchg_mgr));
+
+ INIT_LIST_HEAD(&xchg_mgr->xchg_mgr_entry);
+ xchg_mgr->fcp_mm_start = NULL;
+ xchg_mgr->mem_szie = sizeof(struct unf_xchg_mgr);
+
+ return RETURN_OK;
+}
+
+static u32 unf_init_xchg_mgr_free_pool(struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_xchg_free_pool *free_pool = NULL;
+
+ FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
+
+ free_pool = &xchg_mgr->free_pool;
+ INIT_LIST_HEAD(&free_pool->list_free_xchg_list);
+ INIT_LIST_HEAD(&free_pool->list_sfs_xchg_list);
+ spin_lock_init(&free_pool->xchg_freepool_lock);
+ free_pool->fcp_xchg_sum = 0;
+ free_pool->xchg_mgr_completion = NULL;
+
+ return RETURN_OK;
+}
+
+static u32 unf_init_xchg_hot_pool(struct unf_lport *lport, struct unf_xchg_hot_pool *hot_pool,
+ u32 xchg_sum)
+{
+ FC_CHECK_RETURN_VALUE(hot_pool, UNF_RETURN_ERROR);
+
+ INIT_LIST_HEAD(&hot_pool->sfs_busylist);
+ INIT_LIST_HEAD(&hot_pool->ini_busylist);
+ spin_lock_init(&hot_pool->xchg_hotpool_lock);
+ INIT_LIST_HEAD(&hot_pool->list_destroy_xchg);
+ hot_pool->total_xchges = 0;
+ hot_pool->wait_state = false;
+ hot_pool->lport = lport;
+
+ /* Slab Pool Index */
+ hot_pool->slab_next_index = 0;
+ UNF_TOU16_CHECK(hot_pool->slab_total_sum, xchg_sum, return UNF_RETURN_ERROR);
+
+ return RETURN_OK;
+}
+
+static u32 unf_alloc_and_init_big_sfs_pool(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
+{
+#define UNF_MAX_RESOURCE_RESERVED_FOR_RSCN 20
+#define UNF_BIG_SFS_POOL_TYPES 6
+ u32 i = 0;
+ u32 size = 0;
+ u32 align_size = 0;
+ u32 npiv_cnt = 0;
+ struct unf_big_sfs_pool *big_sfs_pool = NULL;
+ struct unf_big_sfs *big_sfs_buff = NULL;
+ u32 buf_total_size;
+ u32 buf_num;
+ u32 buf_cnt_per_huge_buf;
+ u32 alloc_idx;
+ u32 cur_buf_idx = 0;
+ u32 cur_buf_offset = 0;
+
+ FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ big_sfs_pool = &xchg_mgr->big_sfs_pool;
+
+ INIT_LIST_HEAD(&big_sfs_pool->list_freepool);
+ INIT_LIST_HEAD(&big_sfs_pool->list_busypool);
+ spin_lock_init(&big_sfs_pool->big_sfs_pool_lock);
+ npiv_cnt = lport->low_level_func.support_max_npiv_num;
+
+	/*
+	 * The factor of 6 covers GID_PT/GID_FT, RSCN and ECHO, plus the case
+	 * where a new command arrives while a previous one is still being
+	 * answered. An extra 20 resources are reserved for RSCN: testing
+	 * showed that bursts of RSCNs can otherwise exhaust the pool and make
+	 * discovery fail.
+	 */
+ big_sfs_pool->free_count = (npiv_cnt + 1) * UNF_BIG_SFS_POOL_TYPES +
+ UNF_MAX_RESOURCE_RESERVED_FOR_RSCN;
+ big_sfs_buff =
+ (struct unf_big_sfs *)vmalloc(big_sfs_pool->free_count * sizeof(struct unf_big_sfs));
+ if (!big_sfs_buff) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Allocate Big SFS buf fail.");
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(big_sfs_buff, 0, big_sfs_pool->free_count * sizeof(struct unf_big_sfs));
+ xchg_mgr->mem_szie += (u32)(big_sfs_pool->free_count * sizeof(struct unf_big_sfs));
+ big_sfs_pool->big_sfs_pool = (void *)big_sfs_buff;
+
+ /*
+ * Use the larger of sizeof(struct unf_gid_acc_pld) and
+ * sizeof(struct unf_rscn_pld) to avoid the icp error; the larger value
+ * is assigned directly instead of being compared at run time.
+ */
+ size = sizeof(struct unf_gid_acc_pld);
+ align_size = ALIGN(size, PAGE_SIZE);
+
+ buf_total_size = align_size * big_sfs_pool->free_count;
+ xchg_mgr->big_sfs_buf_list.buf_size =
+ buf_total_size > BUF_LIST_PAGE_SIZE ? BUF_LIST_PAGE_SIZE
+ : buf_total_size;
+
+ buf_cnt_per_huge_buf = xchg_mgr->big_sfs_buf_list.buf_size / align_size;
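+ /* Number of huge buffers needed: ceil(free_count / buf_cnt_per_huge_buf) */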
+ buf_num = big_sfs_pool->free_count % buf_cnt_per_huge_buf
+ ? big_sfs_pool->free_count / buf_cnt_per_huge_buf + 1
+ : big_sfs_pool->free_count / buf_cnt_per_huge_buf;
+
+ xchg_mgr->big_sfs_buf_list.buflist = (struct buff_list *)kmalloc(buf_num *
+ sizeof(struct buff_list), GFP_KERNEL);
+ xchg_mgr->big_sfs_buf_list.buf_num = buf_num;
+
+ if (!xchg_mgr->big_sfs_buf_list.buflist) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate BigSfs pool buf list failed out of memory");
+ goto free_buff;
+ }
+ memset(xchg_mgr->big_sfs_buf_list.buflist, 0, buf_num * sizeof(struct buff_list));
+ for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
+ xchg_mgr->big_sfs_buf_list.buflist[alloc_idx].vaddr =
+ kmalloc(xchg_mgr->big_sfs_buf_list.buf_size, GFP_ATOMIC);
+ if (xchg_mgr->big_sfs_buf_list.buflist[alloc_idx].vaddr ==
+ NULL) {
+ goto free_buff;
+ }
+ memset(xchg_mgr->big_sfs_buf_list.buflist[alloc_idx].vaddr, 0,
+ xchg_mgr->big_sfs_buf_list.buf_size);
+ }
+
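+ /* Carve one aligned slice per big SFS entry out of the huge buffers and
+ * link each entry into the free pool.
+ */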
+ for (i = 0; i < big_sfs_pool->free_count; i++) {
+ if (i != 0 && !(i % buf_cnt_per_huge_buf))
+ cur_buf_idx++;
+
+ cur_buf_offset = align_size * (i % buf_cnt_per_huge_buf);
+ big_sfs_buff->addr = xchg_mgr->big_sfs_buf_list.buflist[cur_buf_idx].vaddr +
+ cur_buf_offset;
+ big_sfs_buff->size = size;
+ xchg_mgr->mem_szie += size;
+ list_add_tail(&big_sfs_buff->entry_bigsfs, &big_sfs_pool->list_freepool);
+ big_sfs_buff++;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[EVENT]Allocate BigSfs pool size:%d,align_size:%d,buf_num:%u,buf_size:%u",
+ size, align_size, xchg_mgr->big_sfs_buf_list.buf_num,
+ xchg_mgr->big_sfs_buf_list.buf_size);
+ return RETURN_OK;
+free_buff:
+ unf_free_all_big_sfs(xchg_mgr);
+ vfree(big_sfs_buff);
+ big_sfs_pool->big_sfs_pool = NULL;
+ return UNF_RETURN_ERROR;
+}
+
+static void unf_free_one_big_sfs(struct unf_xchg *xchg)
+{
+ ulong flag = 0;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ xchg_mgr = xchg->xchg_mgr;
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+ if (!xchg->big_sfs_buf)
+ return;
+
+ if (xchg->cmnd_code != NS_GID_PT && xchg->cmnd_code != NS_GID_FT &&
+ xchg->cmnd_code != ELS_ECHO &&
+ xchg->cmnd_code != (UNF_SET_ELS_ACC_TYPE(ELS_ECHO)) && xchg->cmnd_code != ELS_RSCN &&
+ xchg->cmnd_code != (UNF_SET_ELS_ACC_TYPE(ELS_RSCN))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Exchange(0x%p), Command(0x%x) big SFS buf is not NULL.",
+ xchg, xchg->cmnd_code);
+ }
+
+ spin_lock_irqsave(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
+ list_del(&xchg->big_sfs_buf->entry_bigsfs);
+ list_add_tail(&xchg->big_sfs_buf->entry_bigsfs,
+ &xchg_mgr->big_sfs_pool.list_freepool);
+ xchg_mgr->big_sfs_pool.free_count++;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Free one big SFS buf(0x%p), Count(0x%x), Exchange(0x%p), Command(0x%x).",
+ xchg->big_sfs_buf->addr, xchg_mgr->big_sfs_pool.free_count,
+ xchg, xchg->cmnd_code);
+ spin_unlock_irqrestore(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
+}
+
+static void unf_free_exchg_mgr_info(struct unf_lport *lport)
+{
+ u32 i;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ list_for_each_safe(node, next_node, &lport->list_xchg_mgr_head) {
+ list_del(node);
+ xchg_mgr = list_entry(node, struct unf_xchg_mgr, xchg_mgr_entry);
+ }
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ xchg_mgr = lport->xchg_mgr[i];
+
+ if (xchg_mgr) {
+ unf_free_big_sfs_pool(xchg_mgr);
+
+ if (xchg_mgr->sfs_mm_start) {
+ dma_free_coherent(&lport->low_level_func.dev->dev,
+ xchg_mgr->sfs_mem_size, xchg_mgr->sfs_mm_start,
+ xchg_mgr->sfs_phy_addr);
+ xchg_mgr->sfs_mm_start = 0;
+ }
+
+ if (xchg_mgr->fcp_mm_start) {
+ vfree(xchg_mgr->fcp_mm_start);
+ xchg_mgr->fcp_mm_start = NULL;
+ }
+
+ if (xchg_mgr->hot_pool) {
+ vfree(xchg_mgr->hot_pool);
+ xchg_mgr->hot_pool = NULL;
+ }
+
+ vfree(xchg_mgr);
+ lport->xchg_mgr[i] = NULL;
+ }
+ }
+}
+
+static u32 unf_alloc_and_init_xchg_mgr(struct unf_lport *lport)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct unf_xchg *xchg_mem = NULL;
+ void *sfs_mm_start = 0;
+ dma_addr_t sfs_phy_addr = 0;
+ u32 xchg_sum = 0;
+ u32 sfs_xchg_sum = 0;
+ ulong flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 slab_num = 0;
+ u32 i = 0;
+
+ ret = unf_get_xchg_config_sum(lport, &xchg_sum);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) can't get Exchange.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* SFS Exchange Sum */
+ sfs_xchg_sum = lport->low_level_func.lport_cfg_items.max_sfs_xchg /
+ UNF_EXCHG_MGR_NUM;
+ xchg_sum = xchg_sum / UNF_EXCHG_MGR_NUM;
+ slab_num = lport->low_level_func.support_max_hot_tag_range / UNF_EXCHG_MGR_NUM;
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ /* Alloc Exchange Manager */
+ xchg_mgr = (struct unf_xchg_mgr *)vmalloc(sizeof(struct unf_xchg_mgr));
+ if (!xchg_mgr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) allocate Exchange Manager Memory Fail.",
+ lport->port_id);
+ goto exit;
+ }
+
+ /* Init Exchange Manager */
+ ret = unf_init_xchg_mgr(xchg_mgr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) initialization Exchange Manager unsuccessful.",
+ lport->port_id);
+ goto free_xchg_mgr;
+ }
+
+ /* Initialize the Exchange Free Pool resource */
+ ret = unf_init_xchg_mgr_free_pool(xchg_mgr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) initialization Exchange Manager Free Pool unsuccessful.",
+ lport->port_id);
+ goto free_xchg_mgr;
+ }
+
+ /* Allocate memory for Hot Pool and Xchg slab */
+ hot_pool = vmalloc(sizeof(struct unf_xchg_hot_pool) +
+ sizeof(struct unf_xchg *) * slab_num);
+ if (!hot_pool) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) allocate Hot Pool Memory Fail.",
+ lport->port_id);
+ goto free_xchg_mgr;
+ }
+ memset(hot_pool, 0,
+ sizeof(struct unf_xchg_hot_pool) + sizeof(struct unf_xchg *) * slab_num);
+
+ xchg_mgr->mem_szie += (u32)(sizeof(struct unf_xchg_hot_pool) +
+ sizeof(struct unf_xchg *) * slab_num);
+ /* Initialize the Exchange Hot Pool resource */
+ ret = unf_init_xchg_hot_pool(lport, hot_pool, slab_num);
+ if (ret != RETURN_OK)
+ goto free_hot_pool;
+
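+ /* Each manager's hot pool owns a disjoint tag window starting at i * slab_num */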
+ hot_pool->base += (u16)(i * slab_num);
+ /* Allocate the memory of all Xchg (IO/SFS) */
+ xchg_mem = vmalloc(sizeof(struct unf_xchg) * xchg_sum);
+ if (!xchg_mem) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) allocate Exchange Memory Fail.",
+ lport->port_id);
+ goto free_hot_pool;
+ }
+ memset(xchg_mem, 0, sizeof(struct unf_xchg) * xchg_sum);
+
+ xchg_mgr->mem_szie += (u32)(sizeof(struct unf_xchg) * xchg_sum);
+ xchg_mgr->hot_pool = hot_pool;
+ xchg_mgr->fcp_mm_start = xchg_mem;
+ /* Allocate the memory used by the SFS Xchg to carry the
+ * ELS/BLS/GS command and response
+ */
+ xchg_mgr->sfs_mem_size = (u32)(sizeof(union unf_sfs_u) * sfs_xchg_sum);
+
+ /* Allocate the DMA space used to send SFS frames. With a 32-bit DMA
+ * mask the buffers stay below 4 GB, so crossing the 4 GB boundary
+ * cannot occur.
+ */
+ sfs_mm_start = dma_alloc_coherent(&lport->low_level_func.dev->dev,
+ xchg_mgr->sfs_mem_size,
+ &sfs_phy_addr, GFP_KERNEL);
+ if (!sfs_mm_start) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) Get Free Pagers Fail .",
+ lport->port_id);
+ goto free_xchg_mem;
+ }
+ memset(sfs_mm_start, 0, sizeof(union unf_sfs_u) * sfs_xchg_sum);
+
+ xchg_mgr->mem_szie += xchg_mgr->sfs_mem_size;
+ xchg_mgr->sfs_mm_start = sfs_mm_start;
+ xchg_mgr->sfs_phy_addr = sfs_phy_addr;
+ /* The Xchg is initialized and mounted to the Free Pool */
+ ret = unf_init_xchg(lport, xchg_mgr, xchg_sum, sfs_xchg_sum);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) initialization Exchange unsuccessful, Exchange Number(%d), SFS Exchange number(%d).",
+ lport->port_id, xchg_sum, sfs_xchg_sum);
+ dma_free_coherent(&lport->low_level_func.dev->dev, xchg_mgr->sfs_mem_size,
+ xchg_mgr->sfs_mm_start, xchg_mgr->sfs_phy_addr);
+ xchg_mgr->sfs_mm_start = 0;
+ goto free_xchg_mem;
+ }
+
+ /* Apply for the memory used by GID_PT, GID_FT, and RSCN */
+ ret = unf_alloc_and_init_big_sfs_pool(lport, xchg_mgr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) allocate big SFS fail", lport->port_id);
+ dma_free_coherent(&lport->low_level_func.dev->dev, xchg_mgr->sfs_mem_size,
+ xchg_mgr->sfs_mm_start, xchg_mgr->sfs_phy_addr);
+ xchg_mgr->sfs_mm_start = 0;
+ goto free_xchg_mem;
+ }
+
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ lport->xchg_mgr[i] = (void *)xchg_mgr;
+ list_add_tail(&xchg_mgr->xchg_mgr_entry, &lport->list_xchg_mgr_head);
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) ExchangeMgr:(0x%p),Base:(0x%x).",
+ lport->port_id, lport->xchg_mgr[i], hot_pool->base);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) allocate Exchange Manager size(0x%x).",
+ lport->port_id, xchg_mgr->mem_szie);
+ return RETURN_OK;
+free_xchg_mem:
+ vfree(xchg_mem);
+free_hot_pool:
+ vfree(hot_pool);
+free_xchg_mgr:
+ vfree(xchg_mgr);
+exit:
+ unf_free_exchg_mgr_info(lport);
+ return UNF_RETURN_ERROR;
+}
+
+void unf_xchg_mgr_destroy(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_free_all_xchg_mgr(lport);
+}
+
+u32 unf_alloc_xchg_resource(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ INIT_LIST_HEAD(&lport->list_drty_xchg_mgr_head);
+ INIT_LIST_HEAD(&lport->list_xchg_mgr_head);
+ spin_lock_init(&lport->xchg_mgr_lock);
+
+ /* LPort Xchg Management Unit Alloc */
+ if (unf_alloc_and_init_xchg_mgr(lport) != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ return RETURN_OK;
+}
+
+void unf_destroy_dirty_xchg(struct unf_lport *lport, bool show_only)
+{
+ u32 dirty_xchg = 0;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ ulong flags = 0;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY) {
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ list_for_each_safe(node, next_node, &lport->list_drty_xchg_mgr_head) {
+ xchg_mgr = list_entry(node, struct unf_xchg_mgr, xchg_mgr_entry);
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+ if (xchg_mgr) {
+ dirty_xchg = (xchg_mgr->free_pool.total_fcp_xchg +
+ xchg_mgr->free_pool.total_sfs_xchg);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has %u dirty exchange(s)",
+ lport->port_id, dirty_xchg);
+
+ unf_show_all_xchg(lport, xchg_mgr);
+
+ if (!show_only) {
+ /* Delete Dirty Exchange Mgr entry */
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ list_del_init(&xchg_mgr->xchg_mgr_entry);
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ /* Free Dirty Exchange Mgr memory */
+ unf_free_xchg_mgr_mem(lport, xchg_mgr);
+ }
+ }
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ }
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+ }
+}
+
+struct unf_xchg_mgr *unf_get_xchg_mgr_by_lport(struct unf_lport *lport, u32 idx)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE((idx < UNF_EXCHG_MGR_NUM), NULL);
+
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ xchg_mgr = lport->xchg_mgr[idx];
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ return xchg_mgr;
+}
+
+struct unf_xchg_hot_pool *unf_get_hot_pool_by_lport(struct unf_lport *lport,
+ u32 mgr_idx)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)(lport->root_lport);
+
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
+ /* Get Xchg Manager */
+ xchg_mgr = unf_get_xchg_mgr_by_lport(unf_lport, mgr_idx);
+ if (!xchg_mgr) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) Exchange Manager is NULL.",
+ unf_lport->port_id);
+
+ return NULL;
+ }
+
+ /* Get Xchg Manager Hot Pool */
+ return xchg_mgr->hot_pool;
+}
+
+static inline void unf_hot_pool_slab_set(struct unf_xchg_hot_pool *hot_pool,
+ u16 slab_index, struct unf_xchg *xchg)
+{
+ FC_CHECK_RETURN_VOID(hot_pool);
+
+ hot_pool->xchg_slab[slab_index] = xchg;
+}
+
+static inline struct unf_xchg *unf_get_xchg_by_xchg_tag(struct unf_xchg_hot_pool *hot_pool,
+ u16 slab_index)
+{
+ FC_CHECK_RETURN_VALUE(hot_pool, NULL);
+
+ return hot_pool->xchg_slab[slab_index];
+}
+
+static void *unf_look_up_xchg_by_tag(void *lport, u16 hot_pool_tag)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ u32 exchg_mgr_idx = 0;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* In the case of NPIV, lport is the Vport pointer and the
+ * RootLport's ExchMgr is shared
+ */
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
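+ /* Hot pool tags are split evenly across the exchange managers, so the
+ * owning manager index can be recovered from the tag by scaling.
+ */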
+ exchg_mgr_idx = (hot_pool_tag * UNF_EXCHG_MGR_NUM) /
+ unf_lport->low_level_func.support_max_hot_tag_range;
+ if (unlikely(exchg_mgr_idx >= UNF_EXCHG_MGR_NUM)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) Get ExchgMgr %u err",
+ unf_lport->port_id, exchg_mgr_idx);
+
+ return NULL;
+ }
+
+ xchg_mgr = unf_lport->xchg_mgr[exchg_mgr_idx];
+
+ if (unlikely(!xchg_mgr)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) ExchgMgr %u is null",
+ unf_lport->port_id, exchg_mgr_idx);
+
+ return NULL;
+ }
+
+ hot_pool = xchg_mgr->hot_pool;
+
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) Hot Pool is NULL.",
+ unf_lport->port_id);
+
+ return NULL;
+ }
+
+ if (unlikely(hot_pool_tag >= (hot_pool->slab_total_sum + hot_pool->base))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]LPort(0x%x) can't Input Tag(0x%x), Max(0x%x).",
+ unf_lport->port_id, hot_pool_tag,
+ (hot_pool->slab_total_sum + hot_pool->base));
+
+ return NULL;
+ }
+
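+ /* The tag carries the per-manager base; subtract it to get the slab index */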
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ xchg = unf_get_xchg_by_xchg_tag(hot_pool, hot_pool_tag - hot_pool->base);
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ return (void *)xchg;
+}
+
+static void *unf_find_xchg_by_ox_id(void *lport, u16 ox_id, u32 oid)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flags = 0;
+ ulong xchg_flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* In the case of NPIV, the lport is the Vport pointer,
+ * and the RootLport's ExchMgr is shared
+ */
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) MgrIdex %u Hot Pool is NULL.",
+ unf_lport->port_id, i);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* 1. Traverse sfs_busy list */
+ list_for_each_safe(node, next_node, &hot_pool->sfs_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flags);
+ if (unf_check_oxid_matched(ox_id, oid, xchg)) {
+ atomic_inc(&xchg->ref_cnt);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flags);
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ return xchg;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flags);
+ }
+
+ /* 2. Traverse INI_Busy List */
+ list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flags);
+ if (unf_check_oxid_matched(ox_id, oid, xchg)) {
+ atomic_inc(&xchg->ref_cnt);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flags);
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ return xchg;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock,
+ xchg_flags);
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+
+ return NULL;
+}
+
+static inline bool unf_check_xchg_matched(struct unf_xchg *xchg, u64 command_sn,
+ u32 world_id, void *pinitiator)
+{
+ bool matched = false;
+
+ matched = (command_sn == xchg->cmnd_sn);
+ if (matched && (atomic_read(&xchg->ref_cnt) > 0))
+ return true;
+ else
+ return false;
+}
+
+static void *unf_look_up_xchg_by_cmnd_sn(void *lport, u64 command_sn,
+ u32 world_id, void *pinitiator)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* In NPIV, lport is a Vport pointer. Idle resources are shared through
+ * the RootLport's ExchMgr, but busy resources are mounted on each
+ * vport, so the vport itself must be used here.
+ */
+ unf_lport = (struct unf_lport *)lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) hot pool is NULL",
+ unf_lport->port_id);
+
+ continue;
+ }
+
+ /* from busy_list */
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ if (unf_check_xchg_matched(xchg, command_sn, world_id, pinitiator)) {
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ return xchg;
+ }
+ }
+
+ /* vport: from destroy_list */
+ if (unf_lport != unf_lport->root_lport) {
+ list_for_each_safe(node, next_node, &hot_pool->list_destroy_xchg) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ if (unf_check_xchg_matched(xchg, command_sn, world_id,
+ pinitiator)) {
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) lookup exchange from destroy list",
+ unf_lport->port_id);
+
+ return xchg;
+ }
+ }
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+
+ return NULL;
+}
+
+static inline u32 unf_alloc_hot_pool_slab(struct unf_xchg_hot_pool *hot_pool, struct unf_xchg *xchg)
+{
+ u16 slab_index = 0;
+
+ FC_CHECK_RETURN_VALUE(hot_pool, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ /* Check whether the hot pool tag falls into the specified sirt range.
+ * If it does, set up the management relationship; otherwise handle it
+ * as a normal I/O. If the sirt bitmap is in use but the tag is already
+ * occupied, the I/O is discarded.
+ */
+
+ hot_pool->slab_next_index = (u16)hot_pool->slab_next_index;
+ slab_index = hot_pool->slab_next_index;
+ while (unf_get_xchg_by_xchg_tag(hot_pool, slab_index)) {
+ slab_index++;
+ slab_index = slab_index % hot_pool->slab_total_sum;
+
+ /* Rewind occurs */
+ if (slab_index == hot_pool->slab_next_index) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
+ "There is No Slab At Hot Pool(0x%p) for xchg(0x%p).",
+ hot_pool, xchg);
+
+ return UNF_RETURN_ERROR;
+ }
+ }
+
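+ /* Record the slab slot and export the tag with the per-manager base added,
+ * so tags stay unique across all exchange managers.
+ */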
+ unf_hot_pool_slab_set(hot_pool, slab_index, xchg);
+ xchg->hotpooltag = slab_index + hot_pool->base;
+ slab_index++;
+ hot_pool->slab_next_index = slab_index % hot_pool->slab_total_sum;
+
+ return RETURN_OK;
+}
+
+struct unf_esgl_page *
+unf_get_and_add_one_free_esgl_page(struct unf_lport *lport, struct unf_xchg *xchg)
+{
+ struct unf_esgl *esgl = NULL;
+ struct list_head *list_head = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+
+ /* Obtain a new Esgl from the EsglPool and add it to the list_esgls of
+ * the Xchg
+ */
+ spin_lock_irqsave(&lport->esgl_pool.esgl_pool_lock, flag);
+ if (!list_empty(&lport->esgl_pool.list_esgl_pool)) {
+ list_head = UNF_OS_LIST_NEXT(&lport->esgl_pool.list_esgl_pool);
+ list_del(list_head);
+ lport->esgl_pool.esgl_pool_count--;
+ list_add_tail(list_head, &xchg->list_esgls);
+
+ esgl = list_entry(list_head, struct unf_esgl, entry_esgl);
+ atomic_inc(&xchg->esgl_cnt);
+ spin_unlock_irqrestore(&lport->esgl_pool.esgl_pool_lock, flag);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) esgl pool is empty",
+ lport->nport_id);
+
+ spin_unlock_irqrestore(&lport->esgl_pool.esgl_pool_lock, flag);
+ return NULL;
+ }
+
+ return &esgl->page;
+}
+
+void unf_release_esgls(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct list_head *list = NULL;
+ struct list_head *list_tmp = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ FC_CHECK_RETURN_VOID(xchg->lport);
+
+ if (atomic_read(&xchg->esgl_cnt) <= 0)
+ return;
+
+ /* In the case of NPIV, the Vport pointer is saved in the exchange,
+ * and the RootLport's EsglPool is shared.
+ */
+ unf_lport = (xchg->lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ spin_lock_irqsave(&unf_lport->esgl_pool.esgl_pool_lock, flag);
+ if (!list_empty(&xchg->list_esgls)) {
+ list_for_each_safe(list, list_tmp, &xchg->list_esgls) {
+ list_del(list);
+ list_add_tail(list, &unf_lport->esgl_pool.list_esgl_pool);
+ unf_lport->esgl_pool.esgl_pool_count++;
+ atomic_dec(&xchg->esgl_cnt);
+ }
+ }
+ spin_unlock_irqrestore(&unf_lport->esgl_pool.esgl_pool_lock, flag);
+}
+
+static void unf_add_back_to_fcp_list(struct unf_xchg_free_pool *free_pool, struct unf_xchg *xchg)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(free_pool);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_init_xchg_attribute(xchg);
+
+ /* The released I/O resources are added to the queue tail to facilitate
+ * fault locating
+ */
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+ list_add_tail(&xchg->list_xchg_entry, &free_pool->list_free_xchg_list);
+ free_pool->total_fcp_xchg++;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+}
+
+static void unf_check_xchg_mgr_status(struct unf_xchg_mgr *xchg_mgr)
+{
+ ulong flags = 0;
+ u32 total_xchg = 0;
+ u32 total_xchg_sum = 0;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ spin_lock_irqsave(&xchg_mgr->free_pool.xchg_freepool_lock, flags);
+
+ total_xchg = xchg_mgr->free_pool.total_fcp_xchg + xchg_mgr->free_pool.total_sfs_xchg;
+ total_xchg_sum = xchg_mgr->free_pool.fcp_xchg_sum + xchg_mgr->free_pool.sfs_xchg_sum;
+
+ if (xchg_mgr->free_pool.xchg_mgr_completion && total_xchg == total_xchg_sum)
+ complete(xchg_mgr->free_pool.xchg_mgr_completion);
+
+ spin_unlock_irqrestore(&xchg_mgr->free_pool.xchg_freepool_lock, flags);
+}
+
+static void unf_free_fcp_xchg(struct unf_xchg *xchg)
+{
+ struct unf_xchg_free_pool *free_pool = NULL;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ /* Releasing a Specified INI I/O and Invoking the scsi_done Process */
+ unf_done_ini_xchg(xchg);
+ free_pool = xchg->free_pool;
+ xchg_mgr = xchg->xchg_mgr;
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+
+ atomic_dec(&unf_rport->pending_io_cnt);
+ /* Release the Esgls in the Xchg structure and return it to the EsglPool
+ * of the Lport
+ */
+ unf_release_esgls(xchg);
+
+ if (unlikely(xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu)) {
+ kfree(xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu);
+ xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu = NULL;
+ }
+
+ /* Mount I/O resources to the FCP Free linked list */
+ unf_add_back_to_fcp_list(free_pool, xchg);
+
+ /* The Xchg is released synchronously and then forcibly released to
+ * prevent the normal I/O process from accessing a freed Xchg
+ */
+ if (unlikely(unf_lport->port_removing))
+ unf_check_xchg_mgr_status(xchg_mgr);
+}
+
+static void unf_init_io_xchg_param(struct unf_xchg *xchg, struct unf_lport *lport,
+ struct unf_xchg_mgr *xchg_mgr)
+{
+ static atomic64_t exhd_id;
+
+ xchg->start_jif = atomic64_inc_return(&exhd_id);
+ xchg->xchg_mgr = xchg_mgr;
+ xchg->free_pool = &xchg_mgr->free_pool;
+ xchg->hot_pool = xchg_mgr->hot_pool;
+ xchg->lport = lport;
+ xchg->xchg_type = UNF_XCHG_TYPE_INI;
+ xchg->free_xchg = unf_free_fcp_xchg;
+ xchg->scsi_or_tgt_cmnd_func = NULL;
+ xchg->io_state = UNF_IO_STATE_NEW;
+ xchg->io_send_stage = TGT_IO_SEND_STAGE_NONE;
+ xchg->io_send_result = TGT_IO_SEND_RESULT_INVALID;
+ xchg->io_send_abort = false;
+ xchg->io_abort_result = false;
+ xchg->oxid = INVALID_VALUE16;
+ xchg->abort_oxid = INVALID_VALUE16;
+ xchg->rxid = INVALID_VALUE16;
+ xchg->sid = INVALID_VALUE32;
+ xchg->did = INVALID_VALUE32;
+ xchg->oid = INVALID_VALUE32;
+ xchg->seq_id = INVALID_VALUE8;
+ xchg->cmnd_code = INVALID_VALUE32;
+ xchg->data_len = 0;
+ xchg->resid_len = 0;
+ xchg->data_direction = DMA_NONE;
+ xchg->may_consume_res_cnt = 0;
+ xchg->fast_consume_res_cnt = 0;
+ xchg->io_front_jif = 0;
+ xchg->tmf_state = 0;
+ xchg->ucode_abts_state = INVALID_VALUE32;
+ xchg->abts_state = 0;
+ xchg->rport_bind_jifs = INVALID_VALUE64;
+ xchg->scsi_id = INVALID_VALUE32;
+ xchg->qos_level = 0;
+ xchg->world_id = INVALID_VALUE32;
+
+ memset(&xchg->dif_control, 0, sizeof(struct unf_dif_control_info));
+ memset(&xchg->req_sgl_info, 0, sizeof(struct unf_req_sgl_info));
+ memset(&xchg->dif_sgl_info, 0, sizeof(struct unf_req_sgl_info));
+ xchg->scsi_cmnd_info.result = 0;
+
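+ /* Draw an alloc-time index; 0 and INVALID_VALUE32 are reserved values,
+ * so draw again if either comes up.
+ */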
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+
+ if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] == INVALID_VALUE32) {
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+ }
+
+ if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] == 0) {
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+ }
+
+ atomic_set(&xchg->ref_cnt, 0);
+ atomic_set(&xchg->delay_flag, 0);
+
+ if (delayed_work_pending(&xchg->timeout_work))
+ UNF_DEL_XCHG_TIMER_SAFE(xchg);
+
+ INIT_DELAYED_WORK(&xchg->timeout_work, unf_fc_ini_io_xchg_time_out);
+}
+
+static struct unf_xchg *unf_alloc_io_xchg(struct unf_lport *lport,
+ struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_xchg *xchg = NULL;
+ struct list_head *list_node = NULL;
+ struct unf_xchg_free_pool *free_pool = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(xchg_mgr, NULL);
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ free_pool = &xchg_mgr->free_pool;
+ hot_pool = xchg_mgr->hot_pool;
+ FC_CHECK_RETURN_VALUE(free_pool, NULL);
+ FC_CHECK_RETURN_VALUE(hot_pool, NULL);
+
+ /* 1. Free Pool */
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+ if (unlikely(list_empty(&free_pool->list_free_xchg_list))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "Port(0x%x) have no Exchange anymore.",
+ lport->port_id);
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+ return NULL;
+ }
+
+ /* Select an idle node from free pool */
+ list_node = UNF_OS_LIST_NEXT(&free_pool->list_free_xchg_list);
+ list_del(list_node);
+ free_pool->total_fcp_xchg--;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+
+ xchg = list_entry(list_node, struct unf_xchg, list_xchg_entry);
+ /*
+ * Hot Pool:
+ * When the xchg is mounted to the Hot Pool, its mount mode and release
+ * mode must be specified; the xchg is then added to the ini busy list.
+ */
+ flags = 0;
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ if (unf_alloc_hot_pool_slab(hot_pool, xchg) != RETURN_OK) {
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ unf_add_back_to_fcp_list(free_pool, xchg);
+ if (unlikely(lport->port_removing))
+ unf_check_xchg_mgr_status(xchg_mgr);
+
+ return NULL;
+ }
+ list_add_tail(&xchg->list_xchg_entry, &hot_pool->ini_busylist);
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* 3. Exchange State */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_init_io_xchg_param(xchg, lport, xchg_mgr);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return xchg;
+}
+
+static void unf_add_back_to_sfs_list(struct unf_xchg_free_pool *free_pool,
+ struct unf_xchg *xchg)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(free_pool);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_init_xchg_attribute(xchg);
+
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+
+ list_add_tail(&xchg->list_xchg_entry, &free_pool->list_sfs_xchg_list);
+ free_pool->total_sfs_xchg++;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+}
+
+static void unf_free_sfs_xchg(struct unf_xchg *xchg)
+{
+ struct unf_xchg_free_pool *free_pool = NULL;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ free_pool = xchg->free_pool;
+ unf_lport = xchg->lport;
+ xchg_mgr = xchg->xchg_mgr;
+
+ /* The memory is allocated when a GID_PT/GID_FT is sent.
+ * If no response is received, it must be forcibly released here.
+ */
+
+ unf_free_one_big_sfs(xchg);
+
+ unf_add_back_to_sfs_list(free_pool, xchg);
+
+ if (unlikely(unf_lport->port_removing))
+ unf_check_xchg_mgr_status(xchg_mgr);
+}
+
+static void unf_fc_xchg_add_timer(void *xchg, ulong time_ms,
+ enum unf_timer_type time_type)
+{
+ ulong flag = 0;
+ struct unf_xchg *unf_xchg = NULL;
+ ulong times_ms = time_ms;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_xchg = (struct unf_xchg *)xchg;
+ unf_lport = unf_xchg->lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ /* update timeout */
+ switch (time_type) {
+ /* The processing of TGT RRQ timeout is the same as that of TGT IO
+ * timeout. The timeout period is different.
+ */
+ case UNF_TIMER_TYPE_TGT_RRQ:
+ times_ms = times_ms + UNF_TGT_RRQ_REDUNDANT_TIME;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "TGT RRQ Timer set.");
+ break;
+
+ case UNF_TIMER_TYPE_INI_RRQ:
+ times_ms = times_ms - UNF_INI_RRQ_REDUNDANT_TIME;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "INI RRQ Timer set.");
+ break;
+
+ case UNF_TIMER_TYPE_SFS:
+ times_ms = times_ms + UNF_INI_ELS_REDUNDANT_TIME;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "INI ELS Timer set.");
+ break;
+ default:
+ break;
+ }
+
+ /* The xchg of the timer must be valid. If the reference count of xchg
+ * is 0, the timer must not be added
+ */
+ if (atomic_read(&unf_xchg->ref_cnt) <= 0) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[warn]Abnormal Exchange(0x%p), Reference count(0x%x), Can't add timer.",
+ unf_xchg, atomic_read(&unf_xchg->ref_cnt));
+ return;
+ }
+
+ /* Delay Work: Hold for timer */
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, flag);
+ if (queue_delayed_work(unf_lport->xchg_wq, &unf_xchg->timeout_work,
+ (ulong)msecs_to_jiffies((u32)times_ms))) {
+ /* hold for timer */
+ atomic_inc(&unf_xchg->ref_cnt);
+ }
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flag);
+}
+
+static void unf_init_sfs_xchg_param(struct unf_xchg *xchg,
+ struct unf_lport *lport,
+ struct unf_xchg_mgr *xchg_mgr)
+{
+ xchg->free_pool = &xchg_mgr->free_pool;
+ xchg->hot_pool = xchg_mgr->hot_pool;
+ xchg->lport = lport;
+ xchg->xchg_mgr = xchg_mgr;
+ xchg->free_xchg = unf_free_sfs_xchg;
+ xchg->xchg_type = UNF_XCHG_TYPE_SFS;
+ xchg->io_state = UNF_IO_STATE_NEW;
+ xchg->scsi_cmnd_info.result = 0;
+ xchg->ob_callback_sts = UNF_IO_SUCCESS;
+
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+
+ if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] ==
+ INVALID_VALUE32) {
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+ }
+
+ if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] == 0) {
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+ }
+
+ if (delayed_work_pending(&xchg->timeout_work))
+ UNF_DEL_XCHG_TIMER_SAFE(xchg);
+
+ INIT_DELAYED_WORK(&xchg->timeout_work, unf_sfs_xchg_time_out);
+}
+
+static struct unf_xchg *unf_alloc_sfs_xchg(struct unf_lport *lport,
+ struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_xchg *xchg = NULL;
+ struct list_head *list_node = NULL;
+ struct unf_xchg_free_pool *free_pool = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(xchg_mgr, NULL);
+ free_pool = &xchg_mgr->free_pool;
+ hot_pool = xchg_mgr->hot_pool;
+ FC_CHECK_RETURN_VALUE(free_pool, NULL);
+ FC_CHECK_RETURN_VALUE(hot_pool, NULL);
+
+ /* Select an idle node from free pool */
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+ if (list_empty(&free_pool->list_sfs_xchg_list)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) have no Exchange anymore.",
+ lport->port_id);
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+ return NULL;
+ }
+
+ list_node = UNF_OS_LIST_NEXT(&free_pool->list_sfs_xchg_list);
+ list_del(list_node);
+ free_pool->total_sfs_xchg--;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+
+ xchg = list_entry(list_node, struct unf_xchg, list_xchg_entry);
+ /*
+ * The xchg is mounted to the Hot Pool.
+ * The mount mode and release mode of the xchg must be specified
+ * and stored in the sfs linked list.
+ */
+ flags = 0;
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ if (unf_alloc_hot_pool_slab(hot_pool, xchg) != RETURN_OK) {
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ unf_add_back_to_sfs_list(free_pool, xchg);
+ if (unlikely(lport->port_removing))
+ unf_check_xchg_mgr_status(xchg_mgr);
+
+ return NULL;
+ }
+
+ list_add_tail(&xchg->list_xchg_entry, &hot_pool->sfs_busylist);
+ hot_pool->total_xchges++;
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_init_sfs_xchg_param(xchg, lport, xchg_mgr);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return xchg;
+}
+
+static void *unf_get_new_xchg(void *lport, u32 xchg_type)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 exchg_type = 0;
+ u16 xchg_mgr_type;
+ u32 rtry_cnt = 0;
+ u32 last_exchg_mgr_idx;
+
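+ /* The upper 16 bits of xchg_type select the manager mode (fixed/random),
+ * the lower bits give the exchange type (INI or SFS).
+ */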
+ xchg_mgr_type = (xchg_type >> UNF_SHIFT_16);
+ exchg_type = xchg_type & SPFC_XCHG_TYPE_MASK;
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* In the case of NPIV, the lport is the Vport pointer,
+ * and the share uses the ExchMgr of the RootLport.
+ */
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
+ if (unlikely((atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) ||
+ (atomic_read(&((struct unf_lport *)lport)->lport_no_operate_flag) ==
+ UNF_LPORT_NOP))) {
+ return NULL;
+ }
+
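+ /* Pick a manager round-robin via the atomic counter; if it cannot supply
+ * an exchange, try_next_mgr moves on to the next manager.
+ */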
+ last_exchg_mgr_idx = (u32)atomic64_inc_return(&unf_lport->last_exchg_mgr_idx);
+try_next_mgr:
+ rtry_cnt++;
+ if (unlikely(rtry_cnt > UNF_EXCHG_MGR_NUM))
+ return NULL;
+
+ /* In Fixed mode, only XchgMgr 0 is used */
+ if (unlikely(xchg_mgr_type == UNF_XCHG_MGR_TYPE_FIXED)) {
+ xchg_mgr = (struct unf_xchg_mgr *)unf_lport->xchg_mgr[ARRAY_INDEX_0];
+ } else {
+ xchg_mgr = (struct unf_xchg_mgr *)unf_lport
+ ->xchg_mgr[last_exchg_mgr_idx % UNF_EXCHG_MGR_NUM];
+ }
+ if (unlikely(!xchg_mgr)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) get exchangemgr %u is null.",
+ unf_lport->port_id, last_exchg_mgr_idx % UNF_EXCHG_MGR_NUM);
+ return NULL;
+ }
+ last_exchg_mgr_idx++;
+
+ /* Allocate entries based on the Exchange type */
+ switch (exchg_type) {
+ case UNF_XCHG_TYPE_SFS:
+ xchg = unf_alloc_sfs_xchg(lport, xchg_mgr);
+ break;
+ case UNF_XCHG_TYPE_INI:
+ xchg = unf_alloc_io_xchg(lport, xchg_mgr);
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) unwonted, Exchange type(0x%x).",
+ unf_lport->port_id, exchg_type);
+ break;
+ }
+
+ if (likely(xchg)) {
+ xchg->oxid = INVALID_VALUE16;
+ xchg->abort_oxid = INVALID_VALUE16;
+ xchg->rxid = INVALID_VALUE16;
+ xchg->debug_hook = false;
+ xchg->alloc_jif = jiffies;
+
+ atomic_set(&xchg->ref_cnt, 1);
+ atomic_set(&xchg->esgl_cnt, 0);
+ } else {
+ goto try_next_mgr;
+ }
+
+ return xchg;
+}
+
+static void unf_free_xchg(void *lport, void *xchg)
+{
+ struct unf_xchg *unf_xchg = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_xchg = (struct unf_xchg *)xchg;
+ unf_xchg_ref_dec(unf_xchg, XCHG_FREE_XCHG);
+}
+
+u32 unf_init_xchg_mgr_temp(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ lport->xchg_mgr_temp.unf_xchg_get_free_and_init = unf_get_new_xchg;
+ lport->xchg_mgr_temp.unf_xchg_release = unf_free_xchg;
+ lport->xchg_mgr_temp.unf_look_up_xchg_by_tag = unf_look_up_xchg_by_tag;
+ lport->xchg_mgr_temp.unf_look_up_xchg_by_id = unf_find_xchg_by_ox_id;
+ lport->xchg_mgr_temp.unf_xchg_add_timer = unf_fc_xchg_add_timer;
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer = unf_xchg_cancel_timer;
+ lport->xchg_mgr_temp.unf_xchg_abort_all_io = unf_xchg_abort_all_xchg;
+ lport->xchg_mgr_temp.unf_look_up_xchg_by_cmnd_sn = unf_look_up_xchg_by_cmnd_sn;
+ lport->xchg_mgr_temp.unf_xchg_abort_by_lun = unf_xchg_abort_by_lun;
+ lport->xchg_mgr_temp.unf_xchg_abort_by_session = unf_xchg_abort_by_session;
+ lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort = unf_xchg_mgr_io_xchg_abort;
+ lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort = unf_xchg_mgr_sfs_xchg_abort;
+
+ return RETURN_OK;
+}
+
+void unf_release_xchg_mgr_temp(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) has dirty exchange, Don't release exchange manager template.",
+ lport->port_id);
+
+ return;
+ }
+
+ memset(&lport->xchg_mgr_temp, 0, sizeof(struct unf_cm_xchg_mgr_template));
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_7_DESTROY_XCHG_MGR_TMP;
+}
+
+void unf_set_hot_pool_wait_state(struct unf_lport *lport, bool wait_state)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ ulong pool_lock_flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) hot pool is NULL",
+ lport->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ hot_pool->wait_state = wait_state;
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ }
+}
+
+u32 unf_xchg_ref_inc(struct unf_xchg *xchg, enum unf_ioflow_id io_stage)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ ulong flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ if (unlikely(xchg->debug_hook)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Xchg(0x%p) State(0x%x) SID_DID(0x%x_0x%x) OX_ID_RX_ID(0x%x_0x%x) AllocJiff(%llu) Refcnt(%d) Stage(%s)",
+ xchg, xchg->io_state, xchg->sid, xchg->did,
+ xchg->oxid, xchg->rxid, xchg->alloc_jif,
+ atomic_read(&xchg->ref_cnt),
+ io_stage_table[io_stage].stage);
+ }
+
+ hot_pool = xchg->hot_pool;
+ FC_CHECK_RETURN_VALUE(hot_pool, UNF_RETURN_ERROR);
+
+ /* Exchange -> Hot Pool Tag check */
+ if (unlikely((xchg->hotpooltag >= (hot_pool->slab_total_sum + hot_pool->base)) ||
+ xchg->hotpooltag < hot_pool->base)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Xchg(0x%p) S_ID(%xh) D_ID(0x%x) hot_pool_tag(0x%x) is bigger than slab total num(0x%x) base(0x%x)",
+ xchg, xchg->sid, xchg->did, xchg->hotpooltag,
+ hot_pool->slab_total_sum + hot_pool->base, hot_pool->base);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* atomic read & inc */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (unlikely(atomic_read(&xchg->ref_cnt) <= 0)) {
+ ret = UNF_RETURN_ERROR;
+ } else {
+ if (unf_get_xchg_by_xchg_tag(hot_pool, xchg->hotpooltag - hot_pool->base) == xchg) {
+ atomic_inc(&xchg->ref_cnt);
+ ret = RETURN_OK;
+ } else {
+ ret = UNF_RETURN_ERROR;
+ }
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return ret;
+}
+
+void unf_xchg_ref_dec(struct unf_xchg *xchg, enum unf_ioflow_id io_stage)
+{
+ /* Atomic dec ref_cnt & test, free exchange if necessary (ref_cnt==0) */
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ void (*free_xchg)(struct unf_xchg *) = NULL;
+ ulong flags = 0;
+ ulong xchg_lock_flags = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ if (xchg->debug_hook) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Xchg(0x%p) State(0x%x) SID_DID(0x%x_0x%x) OXID_RXID(0x%x_0x%x) AllocJiff(%llu) Refcnt(%d) Statge %s",
+ xchg, xchg->io_state, xchg->sid, xchg->did, xchg->oxid,
+ xchg->rxid, xchg->alloc_jif,
+ atomic_read(&xchg->ref_cnt),
+ io_stage_table[io_stage].stage);
+ }
+
+ hot_pool = xchg->hot_pool;
+ FC_CHECK_RETURN_VOID(hot_pool);
+ FC_CHECK_RETURN_VOID((xchg->hotpooltag >= hot_pool->base));
+
+ /*
+ * 1. Atomic dec & test
+ * 2. Free exchange if necessary (ref_cnt == 0)
+ */
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_falgs);
+ if (atomic_dec_and_test(&xchg->ref_cnt)) {
+ free_xchg = xchg->free_xchg;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_falgs);
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ unf_hot_pool_slab_set(hot_pool,
+ xchg->hotpooltag - hot_pool->base, NULL);
+ /* Delete exchange list entry */
+ list_del_init(&xchg->list_xchg_entry);
+ hot_pool->total_xchges--;
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* unf_free_fcp_xchg --->>> unf_done_ini_xchg */
+ if (free_xchg)
+ free_xchg(xchg);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_falgs);
+ }
+}
+
+static void unf_init_xchg_attribute(struct unf_xchg *xchg)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->xchg_mgr = NULL;
+ xchg->free_pool = NULL;
+ xchg->hot_pool = NULL;
+ xchg->lport = NULL;
+ xchg->rport = NULL;
+ xchg->disc_rport = NULL;
+ xchg->io_state = UNF_IO_STATE_NEW;
+ xchg->io_send_stage = TGT_IO_SEND_STAGE_NONE;
+ xchg->io_send_result = TGT_IO_SEND_RESULT_INVALID;
+ xchg->io_send_abort = false;
+ xchg->io_abort_result = false;
+ xchg->abts_state = 0;
+ xchg->oxid = INVALID_VALUE16;
+ xchg->abort_oxid = INVALID_VALUE16;
+ xchg->rxid = INVALID_VALUE16;
+ xchg->sid = INVALID_VALUE32;
+ xchg->did = INVALID_VALUE32;
+ xchg->oid = INVALID_VALUE32;
+ xchg->disc_portid = INVALID_VALUE32;
+ xchg->seq_id = INVALID_VALUE8;
+ xchg->cmnd_code = INVALID_VALUE32;
+ xchg->cmnd_sn = INVALID_VALUE64;
+ xchg->data_len = 0;
+ xchg->resid_len = 0;
+ xchg->data_direction = DMA_NONE;
+ xchg->hotpooltag = INVALID_VALUE16;
+ xchg->big_sfs_buf = NULL;
+ xchg->may_consume_res_cnt = 0;
+ xchg->fast_consume_res_cnt = 0;
+ xchg->io_front_jif = INVALID_VALUE64;
+ xchg->ob_callback_sts = UNF_IO_SUCCESS;
+ xchg->start_jif = 0;
+ xchg->rport_bind_jifs = INVALID_VALUE64;
+ xchg->scsi_id = INVALID_VALUE32;
+ xchg->qos_level = 0;
+ xchg->world_id = INVALID_VALUE32;
+
+ memset(&xchg->seq, 0, sizeof(struct unf_seq));
+ memset(&xchg->fcp_cmnd, 0, sizeof(struct unf_fcp_cmnd));
+ memset(&xchg->scsi_cmnd_info, 0, sizeof(struct unf_scsi_cmd_info));
+ memset(&xchg->dif_info, 0, sizeof(struct dif_info));
+ memset(xchg->private_data, 0, (PKG_MAX_PRIVATE_DATA_SIZE * sizeof(u32)));
+ xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_OK;
+ xchg->echo_info.response_time = 0;
+
+ if (xchg->xchg_type == UNF_XCHG_TYPE_SFS) {
+ if (xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
+ memset(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr, 0,
+ sizeof(union unf_sfs_u));
+ xchg->fcp_sfs_union.sfs_entry.cur_offset = 0;
+ }
+ } else if (xchg->xchg_type != UNF_XCHG_TYPE_INI) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Exchange Type(0x%x) SFS Union uninited.",
+ xchg->xchg_type);
+ }
+ xchg->xchg_type = UNF_XCHG_TYPE_INVALID;
+ xchg->xfer_or_rsp_echo = NULL;
+ xchg->scsi_or_tgt_cmnd_func = NULL;
+ xchg->ob_callback = NULL;
+ xchg->callback = NULL;
+ xchg->free_xchg = NULL;
+
+ atomic_set(&xchg->ref_cnt, 0);
+ atomic_set(&xchg->esgl_cnt, 0);
+ atomic_set(&xchg->delay_flag, 0);
+
+ if (delayed_work_pending(&xchg->timeout_work))
+ UNF_DEL_XCHG_TIMER_SAFE(xchg);
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+}
+
+bool unf_busy_io_completed(struct unf_lport *lport)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ ulong pool_lock_flags = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VALUE(lport, true);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ xchg_mgr = unf_get_xchg_mgr_by_lport(lport, i);
+ if (unlikely(!xchg_mgr)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) Exchange Manager is NULL",
+ lport->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock,
+ pool_lock_flags);
+ if (!list_empty(&xchg_mgr->hot_pool->ini_busylist)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) ini busylist is not empty.",
+ lport->port_id);
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock,
+ pool_lock_flags);
+ return false;
+ }
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock,
+ pool_lock_flags);
+ }
+ return true;
+}
diff --git a/drivers/scsi/spfc/common/unf_exchg.h b/drivers/scsi/spfc/common/unf_exchg.h
new file mode 100644
index 000000000000..5390f932fe44
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_exchg.h
@@ -0,0 +1,436 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_FCEXCH_H
+#define UNF_FCEXCH_H
+
+#include "unf_type.h"
+#include "unf_fcstruct.h"
+#include "unf_lport.h"
+#include "unf_scsi_common.h"
+
+enum unf_ioflow_id {
+ XCHG_ALLOC = 0,
+ TGT_RECEIVE_ABTS,
+ TGT_ABTS_DONE,
+ TGT_IO_SRR,
+ SFS_RESPONSE,
+ SFS_TIMEOUT,
+ INI_SEND_CMND,
+ INI_RESPONSE_DONE,
+ INI_EH_ABORT,
+ INI_EH_DEVICE_RESET,
+ INI_EH_BLS_DONE,
+ INI_IO_TIMEOUT,
+ INI_REQ_TIMEOUT,
+ XCHG_CANCEL_TIMER,
+ XCHG_FREE_XCHG,
+ SEND_ELS,
+ IO_XCHG_WAIT,
+ XCHG_BUTT
+};
+
+enum unf_xchg_type {
+ UNF_XCHG_TYPE_INI = 0, /* INI IO */
+ UNF_XCHG_TYPE_SFS = 1,
+ UNF_XCHG_TYPE_INVALID
+};
+
+enum unf_xchg_mgr_type {
+ UNF_XCHG_MGR_TYPE_RANDOM = 0,
+ UNF_XCHG_MGR_TYPE_FIXED = 1,
+ UNF_XCHG_MGR_TYPE_INVALID
+};
+
+enum tgt_io_send_stage {
+ TGT_IO_SEND_STAGE_NONE = 0,
+ TGT_IO_SEND_STAGE_DOING = 1, /* xfer/rsp into queue */
+ TGT_IO_SEND_STAGE_DONE = 2, /* xfer/rsp into queue complete */
+ TGT_IO_SEND_STAGE_ECHO = 3, /* driver handled TSTS */
+ TGT_IO_SEND_STAGE_INVALID
+};
+
+enum tgt_io_send_result {
+ TGT_IO_SEND_RESULT_OK = 0, /* xfer/rsp enqueue succeed */
+ TGT_IO_SEND_RESULT_FAIL = 1, /* xfer/rsp enqueue fail */
+ TGT_IO_SEND_RESULT_INVALID
+};
+
+struct unf_io_flow_id {
+ char *stage;
+};
+
+#define unf_check_oxid_matched(ox_id, oid, xchg) \
+ (((ox_id) == (xchg)->oxid) && ((oid) == (xchg)->oid) && \
+ (atomic_read(&(xchg)->ref_cnt) > 0))
+
+#define UNF_CHECK_ALLOCTIME_VALID(lport, xchg_tag, exchg, pkg_alloc_time, \
+ xchg_alloc_time) \
+ do { \
+ if (unlikely(((pkg_alloc_time) != 0) && \
+ ((pkg_alloc_time) != (xchg_alloc_time)))) { \
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR, \
+ "Lport(0x%x_0x%x_0x%x_0x%p) AllocTime is not " \
+ "equal,PKG " \
+ "AllocTime:0x%x,Exhg AllocTime:0x%x", \
+ (lport)->port_id, (lport)->nport_id, xchg_tag, \
+ exchg, pkg_alloc_time, xchg_alloc_time); \
+ return UNF_RETURN_ERROR; \
+ }; \
+ if (unlikely((pkg_alloc_time) == 0)) { \
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR, \
+ "Lport(0x%x_0x%x_0x%x_0x%p) pkgtime err,PKG " \
+ "AllocTime:0x%x,Exhg AllocTime:0x%x", \
+ (lport)->port_id, (lport)->nport_id, xchg_tag, \
+ exchg, pkg_alloc_time, xchg_alloc_time); \
+ }; \
+ } while (0)
+
+#define UNF_SET_SCSI_CMND_RESULT(xchg, cmnd_result) \
+ ((xchg)->scsi_cmnd_info.result = (cmnd_result))
+
+#define UNF_GET_GS_SFS_XCHG_TIMER(lport) (3 * (ulong)(lport)->ra_tov)
+
+#define UNF_GET_BLS_SFS_XCHG_TIMER(lport) (2 * (ulong)(lport)->ra_tov)
+
+#define UNF_GET_ELS_SFS_XCHG_TIMER(lport) (2 * (ulong)(lport)->ra_tov)
+
+#define UNF_ELS_ECHO_RESULT_OK 0
+#define UNF_ELS_ECHO_RESULT_FAIL 1
+
+struct unf_xchg;
+/* Xchg hot pool, busy IO lookup Xchg */
+struct unf_xchg_hot_pool {
+ /* Xchg sum, in hot pool */
+ u16 total_xchges;
+ bool wait_state;
+
+ /* pool lock */
+ spinlock_t xchg_hotpool_lock;
+
+ /* Xchg position lists */
+ struct list_head sfs_busylist;
+ struct list_head ini_busylist;
+ struct list_head list_destroy_xchg;
+
+ /* Next free hot point */
+ u16 slab_next_index;
+ u16 slab_total_sum;
+ u16 base;
+
+ struct unf_lport *lport;
+
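+ /* Trailing slab array; its storage is allocated together with the hot pool */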
+ struct unf_xchg *xchg_slab[ARRAY_INDEX_0];
+};
+
+/* Xchg's FREE POOL */
+struct unf_xchg_free_pool {
+ spinlock_t xchg_freepool_lock;
+
+ u32 fcp_xchg_sum;
+
+ /* IO used Xchg */
+ struct list_head list_free_xchg_list;
+ u32 total_fcp_xchg;
+
+ /* SFS used Xchg */
+ struct list_head list_sfs_xchg_list;
+ u32 total_sfs_xchg;
+ u32 sfs_xchg_sum;
+
+ struct completion *xchg_mgr_completion;
+};
+
+struct unf_big_sfs {
+ struct list_head entry_bigsfs;
+ void *addr;
+ u32 size;
+};
+
+struct unf_big_sfs_pool {
+ void *big_sfs_pool;
+ u32 free_count;
+ struct list_head list_freepool;
+ struct list_head list_busypool;
+ spinlock_t big_sfs_pool_lock;
+};
+
+/* Xchg Manager for vport Xchg */
+struct unf_xchg_mgr {
+ /* MG type */
+ u32 mgr_type;
+
+ /* MG entry */
+ struct list_head xchg_mgr_entry;
+
+ /* MG attribution */
+ u32 mem_szie;
+
+ /* MG alloced resource */
+ void *fcp_mm_start;
+
+ u32 sfs_mem_size;
+ void *sfs_mm_start;
+ dma_addr_t sfs_phy_addr;
+
+ struct unf_xchg_free_pool free_pool;
+ struct unf_xchg_hot_pool *hot_pool;
+
+ struct unf_big_sfs_pool big_sfs_pool;
+
+ struct buf_describe big_sfs_buf_list;
+};
+
+struct unf_seq {
+ /* Seq ID */
+ u8 seq_id;
+
+ /* Seq Cnt */
+ u16 seq_cnt;
+
+ /* Seq state and len,maybe used for fcoe */
+ u16 seq_stat;
+ u32 rec_data_len;
+};
+
+union unf_xchg_fcp_sfs {
+ struct unf_sfs_entry sfs_entry;
+ struct unf_fcp_rsp_iu_entry fcp_rsp_entry;
+};
+
+#define UNF_IO_STATE_NEW 0
+#define TGT_IO_STATE_SEND_XFERRDY (1 << 2) /* succeed to send XFer rdy */
+#define TGT_IO_STATE_RSP (1 << 5) /* chip send rsp */
+#define TGT_IO_STATE_ABORT (1 << 7)
+
+#define INI_IO_STATE_UPTASK \
+ (1 << 15) /* INI Upper-layer Task Management Commands */
+#define INI_IO_STATE_UPABORT (1 << 16) /* INI Upper-layer timeout Abort flag */
+#define INI_IO_STATE_DRABORT (1 << 17) /* INI driver Abort flag */
+#define INI_IO_STATE_DONE (1 << 18) /* INI complete flag */
+#define INI_IO_STATE_WAIT_RRQ (1 << 19) /* INI wait send rrq */
+#define INI_IO_STATE_UPSEND_ERR (1 << 20) /* INI send fail flag */
+/* INI only clear firmware resource flag */
+#define INI_IO_STATE_ABORT_RESOURCE (1 << 21)
+/* ioc abort:INI send ABTS ,5S timeout Semaphore,than set 1 */
+#define INI_IO_STATE_ABORT_TIMEOUT (1 << 22)
+#define INI_IO_STATE_RRQSEND_ERR (1 << 23) /* INI send RRQ fail flag */
+#define INI_IO_STATE_LOGO (1 << 24) /* INI busy IO session logo status */
+#define INI_IO_STATE_TMF_ABORT (1 << 25) /* INI TMF ABORT IO flag */
+#define INI_IO_STATE_REC_TIMEOUT_WAIT (1 << 26) /* INI REC TIMEOUT WAIT */
+#define INI_IO_STATE_REC_TIMEOUT (1 << 27) /* INI REC TIMEOUT */
+
+#define TMF_RESPONSE_RECEIVED (1 << 0)
+#define MARKER_STS_RECEIVED (1 << 1)
+#define ABTS_RESPONSE_RECEIVED (1 << 2)
+
+struct unf_scsi_cmd_info {
+ ulong time_out;
+ ulong abort_time_out;
+ void *scsi_cmnd;
+ void (*done)(struct unf_scsi_cmnd *scsi_cmd);
+ ini_get_sgl_entry_buf unf_get_sgl_entry_buf;
+ struct unf_ini_error_code *err_code_table; /* error code table */
+ char *sense_buf;
+ u32 err_code_table_cout; /* Size of the error code table */
+ u32 buf_len;
+ u32 entry_cnt;
+ u32 result; /* Stores command execution results */
+ u32 port_id;
+/* Re-search for the rport based on scsi_id during retry; otherwise,
+ * data inconsistency will occur.
+ */
+ u32 scsi_id;
+ void *sgl;
+ uplevel_cmd_done uplevel_done;
+};
+
+struct unf_req_sgl_info {
+ void *sgl;
+ void *sgl_start;
+ u32 req_index;
+ u32 entry_index;
+};
+
+struct unf_els_echo_info {
+ u64 response_time;
+ struct semaphore echo_sync_sema;
+ u32 echo_result;
+};
+
+struct unf_xchg {
+ /* Mg resource relative */
+ /* list delete from HotPool */
+ struct unf_xchg_hot_pool *hot_pool;
+
+ /* attach to FreePool */
+ struct unf_xchg_free_pool *free_pool;
+ struct unf_xchg_mgr *xchg_mgr;
+ struct unf_lport *lport; /* Local LPort/VLPort */
+ struct unf_rport *rport; /* Remote Port */
+ struct unf_rport *disc_rport; /* Discovered Remote Port */
+ struct list_head list_xchg_entry;
+ struct list_head list_abort_xchg_entry;
+ spinlock_t xchg_state_lock;
+
+ /* Xchg reference */
+ atomic_t ref_cnt;
+ atomic_t esgl_cnt;
+ bool debug_hook;
+ /* Xchg attribution */
+ u16 hotpooltag;
+ u16 abort_oxid;
+ u32 xchg_type; /* LS,TGT CMND ,REQ,or SCSI Cmnd */
+ u16 oxid;
+ u16 rxid;
+ u32 sid;
+ u32 did;
+ u32 oid; /* ID of the exchange initiator */
+ u32 disc_portid; /* Send GNN_ID/GFF_ID NPortId */
+ u8 seq_id;
+ u8 byte_orders; /* Byte order */
+ struct unf_seq seq;
+
+ u32 cmnd_code;
+ u32 world_id;
+ /* Dif control */
+ struct unf_dif_control_info dif_control;
+ struct dif_info dif_info;
+ /* IO status Abort,timer out */
+ u32 io_state; /* TGT_IO_STATE_E */
+ u32 tmf_state; /* TMF STATE */
+ u32 ucode_abts_state;
+ u32 abts_state;
+
+ /* IO Enqueuing */
+ enum tgt_io_send_stage io_send_stage; /* tgt_io_send_stage */
+ /* IO Enqueuing result, success or failure */
+ enum tgt_io_send_result io_send_result; /* tgt_io_send_result */
+
+ u8 io_send_abort; /* is or not send io abort */
+ /*result of io abort cmd(succ:true; fail:false)*/
+ u8 io_abort_result;
+ /* For INI, indicates the length of the data transmitted over the PCI
+ * link
+ */
+ u32 data_len;
+ /* ResidLen: greater than 0 means underflow, less than 0 means overflow */
+ int resid_len;
+ /* +++++++++++++++++IO Special++++++++++++++++++++ */
+ /* point to tgt cmnd/req/scsi cmnd */
+ /* Fcp cmnd */
+ struct unf_fcp_cmnd fcp_cmnd;
+
+ struct unf_scsi_cmd_info scsi_cmnd_info;
+
+ struct unf_req_sgl_info req_sgl_info;
+
+ struct unf_req_sgl_info dif_sgl_info;
+
+ u64 cmnd_sn;
+ void *pinitiator;
+
+ /* timestamp */
+ u64 start_jif;
+ u64 alloc_jif;
+
+ u64 io_front_jif;
+
+ u32 may_consume_res_cnt;
+ u32 fast_consume_res_cnt;
+
+ /* scsi req info */
+ u32 data_direction;
+
+ struct unf_big_sfs *big_sfs_buf;
+
+ /* scsi cmnd sense_buffer pointer */
+ union unf_xchg_fcp_sfs fcp_sfs_union;
+
+ /* One exchange may use several External Sgls */
+ struct list_head list_esgls;
+ struct unf_els_echo_info echo_info;
+ struct semaphore task_sema;
+
+ /* for RRQ ,IO Xchg add to SFS Xchg */
+ void *io_xchg;
+
+ /* Xchg delay work */
+ struct delayed_work timeout_work;
+
+ void (*xfer_or_rsp_echo)(struct unf_xchg *xchg, u32 status);
+
+ /* wait list XCHG send function */
+ int (*scsi_or_tgt_cmnd_func)(struct unf_xchg *xchg);
+
+ /* send result callback */
+ void (*ob_callback)(struct unf_xchg *xchg);
+
+ /* Response IO callback */
+ void (*callback)(void *lport, void *rport, void *xchg);
+
+ /* Xchg release function */
+ void (*free_xchg)(struct unf_xchg *xchg);
+
+ /* +++++++++++++++++low level Special++++++++++++++++++++ */
+ /* private data,provide for low level */
+ u32 private_data[PKG_MAX_PRIVATE_DATA_SIZE];
+
+ u64 rport_bind_jifs;
+
+ /* sfs exchg ob callback status */
+ u32 ob_callback_sts;
+ u32 scsi_id;
+ u32 qos_level;
+ void *ls_rsp_addr;
+ void *ls_req;
+ u32 status;
+ atomic_t delay_flag;
+ void *upper_ct;
+};
+
+struct unf_esgl_page *
+unf_get_and_add_one_free_esgl_page(struct unf_lport *lport,
+ struct unf_xchg *xchg);
+void unf_release_xchg_mgr_temp(struct unf_lport *lport);
+u32 unf_init_xchg_mgr_temp(struct unf_lport *lport);
+u32 unf_alloc_xchg_resource(struct unf_lport *lport);
+void unf_free_all_xchg_mgr(struct unf_lport *lport);
+void unf_xchg_mgr_destroy(struct unf_lport *lport);
+u32 unf_xchg_ref_inc(struct unf_xchg *xchg, enum unf_ioflow_id io_stage);
+void unf_xchg_ref_dec(struct unf_xchg *xchg, enum unf_ioflow_id io_stage);
+struct unf_xchg_mgr *unf_get_xchg_mgr_by_lport(struct unf_lport *lport,
+ u32 mgr_idx);
+struct unf_xchg_hot_pool *unf_get_hot_pool_by_lport(struct unf_lport *lport,
+ u32 mgr_idx);
+void unf_free_lport_ini_xchg(struct unf_xchg_mgr *xchg_mgr, bool done_ini_flag);
+struct unf_xchg *unf_cm_lookup_xchg_by_cmnd_sn(void *lport, u64 command_sn,
+ u32 world_id, void *pinitiator);
+void *unf_cm_lookup_xchg_by_id(void *lport, u16 ox_id, u32 oid);
+void unf_cm_xchg_abort_by_lun(struct unf_lport *lport, struct unf_rport *rport,
+ u64 lun_id, void *tm_xchg,
+ bool abort_all_lun_flag);
+void unf_cm_xchg_abort_by_session(struct unf_lport *lport,
+ struct unf_rport *rport);
+
+void unf_cm_xchg_mgr_abort_io_by_id(struct unf_lport *lport,
+ struct unf_rport *rport, u32 sid, u32 did,
+ u32 extra_io_stat);
+void unf_cm_xchg_mgr_abort_sfs_by_id(struct unf_lport *lport,
+ struct unf_rport *rport, u32 sid, u32 did);
+void unf_cm_free_xchg(void *lport, void *xchg);
+void *unf_cm_get_free_xchg(void *lport, u32 xchg_type);
+void *unf_cm_lookup_xchg_by_tag(void *lport, u16 hot_pool_tag);
+void unf_release_esgls(struct unf_xchg *xchg);
+void unf_show_all_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr);
+void unf_destroy_dirty_xchg(struct unf_lport *lport, bool show_only);
+void unf_wake_up_scsi_task_cmnd(struct unf_lport *lport);
+void unf_set_hot_pool_wait_state(struct unf_lport *lport, bool wait_state);
+void unf_free_lport_all_xchg(struct unf_lport *lport);
+extern u32 unf_get_up_level_cmnd_errcode(struct unf_ini_error_code *err_table,
+ u32 err_table_count, u32 drv_err_code);
+bool unf_busy_io_completed(struct unf_lport *lport);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_exchg_abort.c b/drivers/scsi/spfc/common/unf_exchg_abort.c
new file mode 100644
index 000000000000..68f751be04aa
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_exchg_abort.c
@@ -0,0 +1,825 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_exchg_abort.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "unf_rport.h"
+#include "unf_service.h"
+#include "unf_ls.h"
+#include "unf_io.h"
+
+void unf_cm_xchg_mgr_abort_io_by_id(struct unf_lport *lport, struct unf_rport *rport, u32 sid,
+ u32 did, u32 extra_io_state)
+{
+ /*
+ * for target session: set ABORT
+ * 1. R_Port remove
+ * 2. Send PLOGI_ACC callback
+ * 3. RCVD PLOGI
+ * 4. RCVD LOGO
+ */
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort) {
+ /* The SID/DID of the Xchg is in reverse direction in different
+ * phases. Therefore, the reverse direction needs to be
+ * considered
+ */
+ lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort(lport, rport, sid, did,
+ extra_io_state);
+ lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort(lport, rport, did, sid,
+ extra_io_state);
+ }
+}
+
+void unf_cm_xchg_mgr_abort_sfs_by_id(struct unf_lport *lport,
+ struct unf_rport *rport, u32 sid, u32 did)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort) {
+ /* The SID/DID of the Xchg is in reverse direction in different
+ * phases, therefore, the reverse direction needs to be
+ * considered
+ */
+ lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort(lport, rport, sid, did);
+ lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort(lport, rport, did, sid);
+ }
+}
+
+void unf_cm_xchg_abort_by_lun(struct unf_lport *lport, struct unf_rport *rport,
+ u64 lun_id, void *xchg, bool abort_all_lun_flag)
+{
+ /*
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ void (*unf_xchg_abort_by_lun)(void *, void *, u64, void *, bool) = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_xchg_abort_by_lun = lport->xchg_mgr_temp.unf_xchg_abort_by_lun;
+ if (unf_xchg_abort_by_lun)
+ unf_xchg_abort_by_lun((void *)lport, (void *)rport, lun_id,
+ xchg, abort_all_lun_flag);
+}
+
+void unf_cm_xchg_abort_by_session(struct unf_lport *lport, struct unf_rport *rport)
+{
+ void (*unf_xchg_abort_by_session)(void *, void *) = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_xchg_abort_by_session = lport->xchg_mgr_temp.unf_xchg_abort_by_session;
+ if (unf_xchg_abort_by_session)
+ unf_xchg_abort_by_session((void *)lport, (void *)rport);
+}
+
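+/* When clean is false, walk every hot pool SFS busy list and mark in-use
+ * exchanges (ref_cnt > 0) with TGT_IO_STATE_ABORT; the clean pass does
+ * nothing here.
+ */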
+static void unf_xchg_abort_all_sfs_xchg(struct unf_lport *lport, bool clean)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong pool_lock_falgs = 0;
+ ulong xchg_lock_flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT,
+ UNF_MAJOR, "Port(0x%x) Hot Pool is NULL.", lport->port_id);
+
+ continue;
+ }
+
+ if (!clean) {
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+
+ /* Clearing the SFS_Busy_list Exchange Resource */
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->sfs_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
+ if (atomic_read(&xchg->ref_cnt) > 0)
+ xchg->io_state |= TGT_IO_STATE_ABORT;
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+ } else {
+ continue;
+ }
+ }
+}
+
+static void unf_xchg_abort_ini_io_xchg(struct unf_lport *lport, bool clean)
+{
+ /* Clean L_Port/V_Port Link Down I/O: Abort */
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong pool_lock_falgs = 0;
+ ulong xchg_lock_flags = 0;
+ u32 io_state = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) hot pool is NULL",
+ lport->port_id);
+
+ continue;
+ }
+
+ if (!clean) {
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+
+ /* 1. Abort INI_Busy_List IO */
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
+ if (atomic_read(&xchg->ref_cnt) > 0)
+ xchg->io_state |= INI_IO_STATE_DRABORT | io_state;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+ } else {
+ /* Nothing to do for the clean pass */
+ continue;
+ }
+ }
+}
+
+void unf_xchg_abort_all_xchg(void *lport, u32 xchg_type, bool clean)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ unf_lport = (struct unf_lport *)lport;
+
+ switch (xchg_type) {
+ case UNF_XCHG_TYPE_SFS:
+ unf_xchg_abort_all_sfs_xchg(unf_lport, clean);
+ break;
+ /* Clean L_Port/V_Port Link Down I/O: Abort */
+ case UNF_XCHG_TYPE_INI:
+ unf_xchg_abort_ini_io_xchg(unf_lport, clean);
+ break;
+ default:
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) unknown exch type(0x%x)",
+ unf_lport->port_id, xchg_type);
+ break;
+ }
+}
+
+static void unf_xchg_abort_ini_send_tm_cmd(void *lport, void *rport, u64 lun_id)
+{
+ /*
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ ulong xchg_flag = 0;
+ u32 i = 0;
+ u64 raw_lun_id = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+ unf_rport = (struct unf_rport *)rport;
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) hot pool is NULL",
+ unf_lport->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* 1. for each exchange from busy list */
+ list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+
+ raw_lun_id = *(u64 *)(xchg->fcp_cmnd.lun) >> UNF_SHIFT_16 &
+ UNF_RAW_LUN_ID_MASK;
+ if (lun_id == raw_lun_id && unf_rport == xchg->rport) {
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ xchg->io_state |= INI_IO_STATE_TMF_ABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Exchange(%p) state(0x%x) S_ID(0x%x) D_ID(0x%x) tag(0x%x) abort by TMF CMD",
+ xchg, xchg->io_state,
+ ((struct unf_lport *)lport)->nport_id,
+ unf_rport->nport_id, xchg->hotpooltag);
+ }
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+}
+
+static void unf_xchg_abort_ini_tmf_target_reset(void *lport, void *rport)
+{
+ /*
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ ulong xchg_flag = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+ unf_rport = (struct unf_rport *)rport;
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) hot pool is NULL",
+ unf_lport->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* 1. for each exchange from busy_list */
+ list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ if (unf_rport == xchg->rport) {
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ xchg->io_state |= INI_IO_STATE_TMF_ABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Exchange(%p) state(0x%x) S_ID(0x%x) D_ID(0x%x) tag(0x%x) abort by TMF CMD",
+ xchg, xchg->io_state, unf_lport->nport_id,
+ unf_rport->nport_id, xchg->hotpooltag);
+ }
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+}
+
+void unf_xchg_abort_by_lun(void *lport, void *rport, u64 lun_id, void *xchg,
+ bool abort_all_lun_flag)
+{
+ /* ABORT: set UP_ABORT tag for target LUN I/O */
+ struct unf_xchg *tm_xchg = (struct unf_xchg *)xchg;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) LUN_ID(0x%llx) TM_EXCH(0x%p) flag(%d)",
+ ((struct unf_lport *)lport)->port_id, lun_id, xchg,
+ abort_all_lun_flag);
+
+ /* for INI Mode */
+ if (!tm_xchg) {
+ /*
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ unf_xchg_abort_ini_send_tm_cmd(lport, rport, lun_id);
+
+ return;
+ }
+}
+
+void unf_xchg_abort_by_session(void *lport, void *rport)
+{
+ /*
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) Rport(0x%x) start session reset with TMF",
+ ((struct unf_lport *)lport)->port_id, ((struct unf_rport *)rport)->nport_id);
+
+ unf_xchg_abort_ini_tmf_target_reset(lport, rport);
+}
+
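+/* Mark every busy INI exchange that belongs to this L_Port and scsi_id, and
+ * is not already being aborted, with INI_IO_STATE_UPABORT.
+ */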
+void unf_xchg_up_abort_io_by_scsi_id(void *lport, u32 scsi_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ ulong xchg_flag = 0;
+ u32 i;
+ u32 io_abort_flag = INI_IO_STATE_UPABORT | INI_IO_STATE_UPSEND_ERR |
+ INI_IO_STATE_TMF_ABORT;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) hot pool is NULL",
+ unf_lport->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* 1. for each exchange from busy_list */
+ list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ if (lport == xchg->lport && scsi_id == xchg->scsi_id &&
+ !(xchg->io_state & io_abort_flag)) {
+ xchg->io_state |= INI_IO_STATE_UPABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Exchange(%p) scsi_cmd(0x%p) state(0x%x) scsi_id(0x%x) tag(0x%x) upabort by scsi id",
+ xchg, xchg->scsi_cmnd_info.scsi_cmnd,
+ xchg->io_state, scsi_id, xchg->hotpooltag);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+ }
+ }
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+}
+
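+/* Walk one hot pool INI busy list (hot pool lock held by the caller) and set
+ * INI_IO_STATE_DRABORT plus extra_io_state on exchanges matching the given
+ * SID/DID and R_Port.
+ */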
+static void unf_ini_busy_io_xchg_abort(void *xchg_hot_pool, void *rport,
+ u32 sid, u32 did, u32 extra_io_state)
+{
+ /*
+ * for target session: Set (DRV) ABORT
+ * 1. R_Port remove
+ * 2. Send PLOGI_ACC callback
+ * 3. RCVD PLOGI
+ * 4. RCVD LOGO
+ */
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong xchg_lock_flags = 0;
+
+ unf_rport = (struct unf_rport *)rport;
+ hot_pool = (struct unf_xchg_hot_pool *)xchg_hot_pool;
+
+ /* ABORT INI IO: INI_BUSY_LIST */
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
+ if (did == xchg->did && sid == xchg->sid &&
+ unf_rport == xchg->rport &&
+ (atomic_read(&xchg->ref_cnt) > 0)) {
+ xchg->scsi_cmnd_info.result = UNF_SCSI_HOST(DID_IMM_RETRY);
+ xchg->io_state |= INI_IO_STATE_DRABORT;
+ xchg->io_state |= extra_io_state;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Abort INI:0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
+ xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
+ (u32)xchg->oxid, (u32)xchg->rxid,
+ (u32)xchg->sid, (u32)xchg->did, (u32)xchg->io_state,
+ atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
+ }
+}
+
+void unf_xchg_mgr_io_xchg_abort(void *lport, void *rport, u32 sid, u32 did, u32 extra_io_state)
+{
+ /*
+ * for target session: set ABORT
+ * 1. R_Port remove
+ * 2. Send PLOGI_ACC callback
+ * 3. RCVD PLOGI
+ * 4. RCVD LOGO
+ */
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong pool_lock_falgs = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) hot pool is NULL",
+ unf_lport->port_id);
+
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+
+ /* 1. Clear INI (session) IO: INI Mode */
+ unf_ini_busy_io_xchg_abort(hot_pool, rport, sid, did, extra_io_state);
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+ }
+}
+
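+/* For each exchange manager, mark SFS busy-list exchanges that match the
+ * given SID/DID and R_Port with TGT_IO_STATE_ABORT.
+ */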
+void unf_xchg_mgr_sfs_xchg_abort(void *lport, void *rport, u32 sid, u32 did)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong pool_lock_falgs = 0;
+ ulong xchg_lock_flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (!hot_pool) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT,
+ UNF_MAJOR, "Port(0x%x) Hot Pool is NULL.",
+ unf_lport->port_id);
+
+ continue;
+ }
+
+ unf_rport = (struct unf_rport *)rport;
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+
+ /* Clear the SFS exchange of the corresponding connection */
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->sfs_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
+ if (did == xchg->did && sid == xchg->sid &&
+ unf_rport == xchg->rport && (atomic_read(&xchg->ref_cnt) > 0)) {
+ xchg->io_state |= TGT_IO_STATE_ABORT;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Abort SFS:0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
+ xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
+ (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid,
+ (u32)xchg->did, (u32)xchg->io_state,
+ atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+ }
+}
+
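+/* Wait up to 2s on the exchange semaphore for the ABTS marker. On success the
+ * SCSI command is completed with DID_BUS_BUSY; otherwise the exchange timer
+ * is cancelled and INI_IO_STATE_UPABORT is cleared.
+ */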
+static void unf_fc_wait_abts_complete(struct unf_lport *lport, struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = lport;
+ struct unf_scsi_cmnd scsi_cmnd = {0};
+ ulong flag = 0;
+ u32 time_out_value = 2000;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ u32 io_result;
+
+ scsi_cmnd.scsi_id = xchg->scsi_cmnd_info.scsi_id;
+ scsi_cmnd.upper_cmnd = xchg->scsi_cmnd_info.scsi_cmnd;
+ scsi_cmnd.done = xchg->scsi_cmnd_info.done;
+ scsi_image_table = &unf_lport->rport_scsi_table;
+
+ if (down_timeout(&xchg->task_sema, (s64)msecs_to_jiffies(time_out_value))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) recv ABTS marker timeout, Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x)",
+ unf_lport->port_id, xchg, xchg->oxid, xchg->rxid);
+ goto ABTS_FAILED;
+ }
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ if (xchg->ucode_abts_state == UNF_IO_SUCCESS ||
+ xchg->scsi_cmnd_info.result == UNF_IO_ABORT_PORT_REMOVING) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) Send ABTS succeed and recv marker Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) marker status(0x%x)",
+ unf_lport->port_id, xchg, xchg->oxid, xchg->rxid,
+ xchg->ucode_abts_state);
+ io_result = DID_BUS_BUSY;
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_cmnd.scsi_id, io_result);
+ unf_complete_cmnd(&scsi_cmnd, io_result << UNF_SHIFT_16);
+ return;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ABTS failed. Exch(0x%p) hot_tag(0x%x) ret(0x%x) xchg->io_state (0x%x)",
+ unf_lport->port_id, xchg, xchg->hotpooltag,
+ xchg->scsi_cmnd_info.result, xchg->io_state);
+ goto ABTS_FAILED;
+
+ABTS_FAILED:
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+}
+
+void unf_fc_abort_time_out_cmnd(struct unf_lport *lport, struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = lport;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ if (xchg->io_state & INI_IO_STATE_UPABORT) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "LPort(0x%x) exchange(0x%p) OX_ID(0x%x), RX_ID(0x%x) Cmdsn(0x%lx) has been aborted.",
+ unf_lport->port_id, xchg, xchg->oxid,
+ xchg->rxid, (ulong)xchg->cmnd_sn);
+ return;
+ }
+ xchg->io_state |= INI_IO_STATE_UPABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_KEVENT,
+ "LPort(0x%x) exchg(0x%p) OX_ID(0x%x) RX_ID(0x%x) Cmdsn(0x%lx) timeout abort it",
+ unf_lport->port_id, xchg, xchg->oxid, xchg->rxid, (ulong)xchg->cmnd_sn);
+
+ unf_lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg,
+ (ulong)UNF_WAIT_ABTS_RSP_TIMEOUT, UNF_TIMER_TYPE_INI_ABTS);
+
+ sema_init(&xchg->task_sema, 0);
+
+ if (unf_send_abts(unf_lport, xchg) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "LPort(0x%x) send ABTS failed. Exchange OX_ID(0x%x), RX_ID(0x%x).",
+ unf_lport->port_id, xchg->oxid, xchg->rxid);
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ return;
+ }
+ unf_fc_wait_abts_complete(unf_lport, xchg);
+}
+
+static void unf_fc_ini_io_rec_wait_time_out(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ ulong time_out = 0;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) Rec timeout exchange OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ lport->port_id, rport->nport_id, xchg, xchg->oxid,
+ xchg->rxid, xchg->io_state);
+
+ if (xchg->rport_bind_jifs == rport->rport_alloc_jifs) {
+ unf_send_rec(lport, rport, xchg);
+
+ if (xchg->scsi_cmnd_info.abort_time_out > 0) {
+ time_out = (xchg->scsi_cmnd_info.abort_time_out > UNF_REC_TOV) ?
+ (xchg->scsi_cmnd_info.abort_time_out - UNF_REC_TOV) : 0;
+ if (time_out > 0) {
+ lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, time_out,
+ UNF_TIMER_TYPE_REQ_IO);
+ } else {
+ unf_fc_abort_time_out_cmnd(lport, xchg);
+ }
+ }
+ }
+}
+
+static void unf_fc_ini_send_abts_time_out(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ if (xchg->rport_bind_jifs == rport->rport_alloc_jifs &&
+ xchg->rport_bind_jifs != INVALID_VALUE64) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) first time to send abts timeout, retry again OX_ID(0x%x) RX_ID(0x%x) HotTag(0x%x) state(0x%x)",
+ lport->port_id, rport->nport_id, xchg, xchg->oxid,
+ xchg->rxid, xchg->hotpooltag, xchg->io_state);
+
+ lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg,
+ (ulong)UNF_WAIT_ABTS_RSP_TIMEOUT, UNF_TIMER_TYPE_INI_ABTS);
+
+ if (unf_send_abts(lport, xchg) != RETURN_OK) {
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ unf_abts_timeout_recovery_default(rport, xchg);
+
+ unf_cm_free_xchg(lport, xchg);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) rport is invalid, exchg rport jiff(0x%llx 0x%llx), free exchange OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ lport->port_id, rport->nport_id, xchg,
+ xchg->rport_bind_jifs, rport->rport_alloc_jifs,
+ xchg->oxid, xchg->rxid, xchg->io_state);
+
+ unf_cm_free_xchg(lport, xchg);
+ }
+}
+
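+/* Delayed-work handler for an INI IO exchange timeout. Depending on io_state
+ * it frees the exchange after a failed RRQ, retries or gives up on ABTS,
+ * sends RRQ after IO done, handles the REC wait case, or falls back to
+ * aborting the timed-out command.
+ */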
+void unf_fc_ini_io_xchg_time_out(struct work_struct *work)
+{
+ struct unf_xchg *xchg = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 port_valid_flag = 0;
+
+ xchg = container_of(work, struct unf_xchg, timeout_work.work);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ ret = unf_xchg_ref_inc(xchg, INI_IO_TIMEOUT);
+ FC_CHECK_RETURN_VOID(ret == RETURN_OK);
+
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+
+ port_valid_flag = (!unf_lport) || (!unf_rport);
+ if (port_valid_flag) {
+ unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
+ unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
+ return;
+ }
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ /* 1. Timer timeout after RRQ send failed */
+ if (INI_IO_STATE_RRQSEND_ERR & xchg->io_state) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[info]LPort(0x%x) RPort(0x%x) Exch(0x%p) waited long enough after RRQ send failure OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id, xchg,
+ xchg->oxid, xchg->rxid, xchg->io_state);
+ unf_notify_chip_free_xid(xchg);
+ unf_cm_free_xchg(unf_lport, xchg);
+ }
+ /* Second ABTS timeout and enter LOGO process */
+ else if ((INI_IO_STATE_ABORT_TIMEOUT & xchg->io_state) &&
+ (!(ABTS_RESPONSE_RECEIVED & xchg->abts_state))) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) waited long enough after second ABTS send OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id, xchg,
+ xchg->oxid, xchg->rxid, xchg->io_state);
+ unf_abts_timeout_recovery_default(unf_rport, xchg);
+ unf_cm_free_xchg(unf_lport, xchg);
+ }
+ /* First time to send ABTS, timeout and retry to send ABTS again */
+ else if ((INI_IO_STATE_UPABORT & xchg->io_state) &&
+ (!(ABTS_RESPONSE_RECEIVED & xchg->abts_state))) {
+ xchg->io_state |= INI_IO_STATE_ABORT_TIMEOUT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_fc_ini_send_abts_time_out(unf_lport, unf_rport, xchg);
+ }
+ /* 3. IO_DONE */
+ else if ((INI_IO_STATE_DONE & xchg->io_state) &&
+ (ABTS_RESPONSE_RECEIVED & xchg->abts_state)) {
+ /*
+ * for IO_DONE:
+ * 1. INI ABTS first timer time out
+ * 2. INI RCVD ABTS Response
+ * 3. Normal case for I/O Done
+ */
+ /* Send ABTS & RCVD RSP & no timeout */
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ if (unf_send_rrq(unf_lport, unf_rport, xchg) == RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]LPort(0x%x) send RRQ succeed to RPort(0x%x) Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id, xchg,
+ xchg->oxid, xchg->rxid, xchg->io_state);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]LPort(0x%x) can't send RRQ to RPort(0x%x) Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id, xchg,
+ xchg->oxid, xchg->rxid, xchg->io_state);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->io_state |= INI_IO_STATE_RRQSEND_ERR;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg,
+ (ulong)UNF_WRITE_RRQ_SENDERR_INTERVAL, UNF_TIMER_TYPE_INI_IO);
+ }
+ } else if (INI_IO_STATE_REC_TIMEOUT_WAIT & xchg->io_state) {
+ xchg->io_state &= ~INI_IO_STATE_REC_TIMEOUT_WAIT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_fc_ini_io_rec_wait_time_out(unf_lport, unf_rport, xchg);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_fc_abort_time_out_cmnd(unf_lport, xchg);
+ }
+
+ unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
+ unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
+}
+
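+/* Delayed-work handler for an SFS exchange timeout. Already-aborted exchanges
+ * (except RRQ/LOGO) are ignored; ELS replies trigger L_Port or R_Port error
+ * recovery, other exchanges get their ob_callback invoked.
+ */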
+void unf_sfs_xchg_time_out(struct work_struct *work)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(work);
+ xchg = container_of(work, struct unf_xchg, timeout_work.work);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ ret = unf_xchg_ref_inc(xchg, SFS_TIMEOUT);
+ FC_CHECK_RETURN_VOID(ret == RETURN_OK);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]SFS Exch(%p) Cmnd(0x%x) IO Exch(0x%p) Sid_Did(0x%x:0x%x) HotTag(0x%x) State(0x%x) Timeout.",
+ xchg, xchg->cmnd_code, xchg->io_xchg, xchg->sid, xchg->did,
+ xchg->hotpooltag, xchg->io_state);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if ((xchg->io_state & TGT_IO_STATE_ABORT) &&
+ xchg->cmnd_code != ELS_RRQ && xchg->cmnd_code != ELS_LOGO) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "SFS Exch(0x%p) Cmnd(0x%x) Hot Pool Tag(0x%x) timeout, but aborted, no need to handle.",
+ xchg, xchg->cmnd_code, xchg->hotpooltag);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
+ unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
+
+ return;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ /* The SFS request timed out. If it is an ELS reply, trigger
+ * unf_lport_error_recovery/unf_rport_error_recovery; otherwise,
+ * invoke the corresponding ob_callback.
+ */
+ if (UNF_XCHG_IS_ELS_REPLY(xchg) && unf_rport) {
+ if (unf_rport->nport_id >= UNF_FC_FID_DOM_MGR)
+ unf_lport_error_recovery(unf_lport);
+ else
+ unf_rport_error_recovery(unf_rport);
+
+ } else if (xchg->ob_callback) {
+ xchg->ob_callback(xchg);
+ } else {
+ /* Do nothing */
+ }
+ unf_notify_chip_free_xid(xchg);
+ unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
+ unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
+}
diff --git a/drivers/scsi/spfc/common/unf_exchg_abort.h b/drivers/scsi/spfc/common/unf_exchg_abort.h
new file mode 100644
index 000000000000..b55f4eea2cce
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_exchg_abort.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_FCEXCH_ABORT_H
+#define UNF_FCEXCH_ABORT_H
+
+#include "unf_type.h"
+#include "unf_exchg.h"
+
+#define UNF_RAW_LUN_ID_MASK 0x000000000000ffff
+
+void unf_xchg_abort_by_lun(void *lport, void *rport, u64 lun_id, void *tm_xchg,
+ bool abort_all_lun_flag);
+void unf_xchg_abort_by_session(void *lport, void *rport);
+void unf_xchg_mgr_io_xchg_abort(void *lport, void *rport, u32 sid, u32 did,
+ u32 extra_io_state);
+void unf_xchg_mgr_sfs_xchg_abort(void *lport, void *rport, u32 sid, u32 did);
+void unf_xchg_abort_all_xchg(void *lport, u32 xchg_type, bool clean);
+void unf_fc_abort_time_out_cmnd(struct unf_lport *lport, struct unf_xchg *xchg);
+void unf_fc_ini_io_xchg_time_out(struct work_struct *work);
+void unf_sfs_xchg_time_out(struct work_struct *work);
+void unf_xchg_up_abort_io_by_scsi_id(void *lport, u32 scsi_id);
+#endif
diff --git a/drivers/scsi/spfc/common/unf_fcstruct.h b/drivers/scsi/spfc/common/unf_fcstruct.h
new file mode 100644
index 000000000000..d6eb8592994b
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_fcstruct.h
@@ -0,0 +1,459 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_FCSTRUCT_H
+#define UNF_FCSTRUCT_H
+
+#include "unf_type.h"
+#include "unf_scsi_common.h"
+
+#define FC_RCTL_BLS 0x80000000
+
+/*
+ * R_CTL Basic Link Data defines
+ */
+
+#define FC_RCTL_BLS_ACC (FC_RCTL_BLS | 0x04000000)
+#define FC_RCTL_BLS_RJT (FC_RCTL_BLS | 0x05000000)
+
+/*
+ * BA_RJT reason code defines
+ */
+#define FCXLS_BA_RJT_LOGICAL_ERROR 0x00030000
+
+/*
+ * BA_RJT code explanation
+ */
+
+#define FCXLS_LS_RJT_INVALID_OXID_RXID 0x00001700
+
+/*
+ * ELS ACC
+ */
+struct unf_els_acc {
+ struct unf_fc_head frame_hdr;
+ u32 cmnd;
+};
+
+/*
+ * ELS RJT
+ */
+struct unf_els_rjt {
+ struct unf_fc_head frame_hdr;
+ u32 cmnd;
+ u32 reason_code;
+};
+
+/*
+ * FLOGI payload:
+ * FC-LS-2 FLOGI, PLOGI, FDISC or LS_ACC Payload
+ */
+struct unf_flogi_fdisc_payload {
+ u32 cmnd;
+ struct unf_fabric_parm fabric_parms;
+};
+
+/*
+ * Flogi and Flogi accept frames. They are the same structure
+ */
+struct unf_flogi_fdisc_acc {
+ struct unf_fc_head frame_hdr;
+ struct unf_flogi_fdisc_payload flogi_payload;
+};
+
+/*
+ * Fdisc and Fdisc accept frames. They are the same structure
+ */
+
+struct unf_fdisc_acc {
+ struct unf_fc_head frame_hdr;
+ struct unf_flogi_fdisc_payload fdisc_payload;
+};
+
+/*
+ * PLOGI payload
+ */
+struct unf_plogi_payload {
+ u32 cmnd;
+ struct unf_lgn_parm stparms;
+};
+
+/*
+ * Plogi, Plogi accept, Pdisc and Pdisc accept frames. They are all the same
+ * structure.
+ */
+struct unf_plogi_pdisc {
+ struct unf_fc_head frame_hdr;
+ struct unf_plogi_payload payload;
+};
+
+/*
+ * LOGO logout link service requests invalidation of service parameters and
+ * port name.
+ * See FC-PH 4.3 Section 21.4.8
+ */
+struct unf_logo_payload {
+ u32 cmnd;
+ u32 nport_id;
+ u32 high_port_name;
+ u32 low_port_name;
+};
+
+/*
+ * payload to hold LOGO command
+ */
+struct unf_logo {
+ struct unf_fc_head frame_hdr;
+ struct unf_logo_payload payload;
+};
+
+/*
+ * payload for ECHO command, refer to FC-LS-2 4.2.4
+ */
+struct unf_echo_payload {
+ u32 cmnd;
+#define UNF_FC_ECHO_PAYLOAD_LENGTH 255 /* Length in words */
+ u32 data[UNF_FC_ECHO_PAYLOAD_LENGTH];
+};
+
+struct unf_echo {
+ struct unf_fc_head frame_hdr;
+ struct unf_echo_payload *echo_pld;
+ dma_addr_t phy_echo_addr;
+};
+
+#define UNF_PRLI_SIRT_EXTRA_SIZE 12
+
+/*
+ * payload for PRLI and PRLO
+ */
+struct unf_prli_payload {
+ u32 cmnd;
+#define UNF_FC_PRLI_PAYLOAD_LENGTH 7 /* Length in words */
+ u32 parms[UNF_FC_PRLI_PAYLOAD_LENGTH];
+};
+
+/*
+ * FCHS structure with payload
+ */
+struct unf_prli_prlo {
+ struct unf_fc_head frame_hdr;
+ struct unf_prli_payload payload;
+};
+
+struct unf_adisc_payload {
+ u32 cmnd;
+ u32 hard_address;
+ u32 high_port_name;
+ u32 low_port_name;
+ u32 high_node_name;
+ u32 low_node_name;
+ u32 nport_id;
+};
+
+/*
+ * FCHS structure with payload
+ */
+struct unf_adisc {
+ struct unf_fc_head frame_hdr; /* FCHS structure */
+ struct unf_adisc_payload adisc_payl; /* Payload data containing ADISC info */
+};
+
+/*
+ * RLS payload
+ */
+struct unf_rls_payload {
+ u32 cmnd;
+ u32 nport_id; /* in little endian format */
+};
+
+/*
+ * RLS
+ */
+struct unf_rls {
+ struct unf_fc_head frame_hdr; /* FCHS structure */
+ struct unf_rls_payload rls; /* payload data containing the RLS info */
+};
+
+/*
+ * RLS accept payload
+ */
+struct unf_rls_acc_payload {
+ u32 cmnd;
+ u32 link_failure_count;
+ u32 loss_of_sync_count;
+ u32 loss_of_signal_count;
+ u32 primitive_seq_count;
+ u32 invalid_trans_word_count;
+ u32 invalid_crc_count;
+};
+
+/*
+ * RLS accept
+ */
+struct unf_rls_acc {
+ struct unf_fc_head frame_hdr; /* FCHS structure */
+ struct unf_rls_acc_payload rls; /* payload data containing the RLS ACC info */
+};
+
+/*
+ * FCHS structure with payload
+ */
+struct unf_rrq {
+ struct unf_fc_head frame_hdr;
+ u32 cmnd;
+ u32 sid;
+ u32 oxid_rxid;
+};
+
+#define UNF_SCR_PAYLOAD_CNT 2
+struct unf_scr {
+ struct unf_fc_head frame_hdr;
+ u32 payload[UNF_SCR_PAYLOAD_CNT];
+};
+
+struct unf_ctiu_prem {
+ u32 rev_inid;
+ u32 gstype_gssub_options;
+ u32 cmnd_rsp_size;
+ u32 frag_reason_exp_vend;
+};
+
+#define UNF_FC4TYPE_CNT 8
+struct unf_rftid {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 nport_id;
+ u32 fc4_types[UNF_FC4TYPE_CNT];
+};
+
+struct unf_rffid {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 nport_id;
+ u32 fc4_feature;
+};
+
+struct unf_rffid_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+};
+
+struct unf_gffid {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 nport_id;
+};
+
+struct unf_gffid_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 fc4_feature[32];
+};
+
+struct unf_gnnid {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 nport_id;
+};
+
+struct unf_gnnid_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 node_name[2];
+};
+
+struct unf_gpnid {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 nport_id;
+};
+
+struct unf_gpnid_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 port_name[2];
+};
+
+struct unf_rft_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+};
+
+struct unf_ls_rjt_pld {
+ u32 srr_op; /* 01000000h */
+ u8 vandor;
+ u8 reason_exp;
+ u8 reason;
+ u8 reserved;
+};
+
+struct unf_ls_rjt {
+ struct unf_fc_head frame_hdr;
+ struct unf_ls_rjt_pld pld;
+};
+
+struct unf_rec_pld {
+ u32 rec_cmnd;
+ u32 xchg_org_sid; /* bit0-bit23 */
+ u16 rx_id;
+ u16 ox_id;
+};
+
+struct unf_rec {
+ struct unf_fc_head frame_hdr;
+ struct unf_rec_pld rec_pld;
+};
+
+struct unf_rec_acc_pld {
+ u32 cmnd;
+ u16 rx_id;
+ u16 ox_id;
+ u32 org_addr_id; /* bit0-bit23 */
+ u32 rsp_addr_id; /* bit0-bit23 */
+};
+
+struct unf_rec_acc {
+ struct unf_fc_head frame_hdr;
+ struct unf_rec_acc_pld payload;
+};
+
+struct unf_gid {
+ struct unf_ctiu_prem ctiu_pream;
+ u32 scope_type;
+};
+
+struct unf_gid_acc {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+};
+
+#define UNF_LOOPMAP_COUNT 128
+struct unf_loop_init {
+ struct unf_fc_head frame_hdr;
+ u32 cmnd;
+#define UNF_FC_ALPA_BIT_MAP_SIZE 4
+ u32 alpha_bit_map[UNF_FC_ALPA_BIT_MAP_SIZE];
+};
+
+struct unf_loop_map {
+ struct unf_fc_head frame_hdr;
+ u32 cmnd;
+ u32 loop_map[32];
+};
+
+struct unf_ctiu_rjt {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+};
+
+struct unf_gid_acc_pld {
+ struct unf_ctiu_prem ctiu_pream;
+
+ u32 gid_port_id[UNF_GID_PORT_CNT];
+};
+
+struct unf_gid_rsp {
+ struct unf_gid_acc_pld *gid_acc_pld;
+};
+
+struct unf_gid_req_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_gid gid_req;
+ struct unf_gid_rsp gid_rsp;
+};
+
+/* FC-LS-2 Table 31 RSCN Payload */
+struct unf_rscn_port_id_page {
+ u8 port_id_port;
+ u8 port_id_area;
+ u8 port_id_domain;
+
+ u8 addr_format : 2;
+ u8 event_qualifier : 4;
+ u8 reserved : 2;
+};
+
+struct unf_rscn_pld {
+ u32 cmnd;
+ struct unf_rscn_port_id_page port_id_page[UNF_RSCN_PAGE_SUM];
+};
+
+struct unf_rscn {
+ struct unf_fc_head frame_hdr;
+ struct unf_rscn_pld *rscn_pld;
+};
+
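+/* Aggregates every ELS/CT service frame format that can be carried in one SFS buffer */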
+union unf_sfs_u {
+ struct {
+ struct unf_fc_head frame_head;
+ u8 data[0];
+ } sfs_common;
+ struct unf_els_acc els_acc;
+ struct unf_els_rjt els_rjt;
+ struct unf_plogi_pdisc plogi;
+ struct unf_logo logo;
+ struct unf_echo echo;
+ struct unf_echo echo_acc;
+ struct unf_prli_prlo prli;
+ struct unf_prli_prlo prlo;
+ struct unf_rls rls;
+ struct unf_rls_acc rls_acc;
+ struct unf_plogi_pdisc pdisc;
+ struct unf_adisc adisc;
+ struct unf_rrq rrq;
+ struct unf_flogi_fdisc_acc flogi;
+ struct unf_fdisc_acc fdisc;
+ struct unf_scr scr;
+ struct unf_rec rec;
+ struct unf_rec_acc rec_acc;
+ struct unf_ls_rjt ls_rjt;
+ struct unf_rscn rscn;
+ struct unf_gid_req_rsp get_id;
+ struct unf_rftid rft_id;
+ struct unf_rft_rsp rft_id_rsp;
+ struct unf_rffid rff_id;
+ struct unf_rffid_rsp rff_id_rsp;
+ struct unf_gffid gff_id;
+ struct unf_gffid_rsp gff_id_rsp;
+ struct unf_gnnid gnn_id;
+ struct unf_gnnid_rsp gnn_id_rsp;
+ struct unf_gpnid gpn_id;
+ struct unf_gpnid_rsp gpn_id_rsp;
+ struct unf_plogi_pdisc plogi_acc;
+ struct unf_plogi_pdisc pdisc_acc;
+ struct unf_adisc adisc_acc;
+ struct unf_prli_prlo prli_acc;
+ struct unf_prli_prlo prlo_acc;
+ struct unf_flogi_fdisc_acc flogi_acc;
+ struct unf_fdisc_acc fdisc_acc;
+ struct unf_loop_init lpi;
+ struct unf_loop_map loop_map;
+ struct unf_ctiu_rjt ctiu_rjt;
+};
+
+struct unf_sfs_entry {
+ union unf_sfs_u *fc_sfs_entry_ptr; /* Virtual addr of SFS buffer */
+ u64 sfs_buff_phy_addr; /* Physical addr of SFS buffer */
+ u32 sfs_buff_len; /* Length of bytes in SFS buffer */
+ u32 cur_offset;
+};
+
+struct unf_fcp_rsp_iu_entry {
+ u8 *fcp_rsp_iu;
+ u32 fcp_sense_len;
+};
+
+struct unf_rjt_info {
+ u32 els_cmnd_code;
+ u32 reason_code;
+ u32 reason_explanation;
+ u8 class_mode;
+ u8 ucrsvd[3];
+};
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_gs.c b/drivers/scsi/spfc/common/unf_gs.c
new file mode 100644
index 000000000000..cb5fc1a5d246
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_gs.c
@@ -0,0 +1,2521 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_gs.h"
+#include "unf_log.h"
+#include "unf_exchg.h"
+#include "unf_rport.h"
+#include "unf_service.h"
+#include "unf_portman.h"
+#include "unf_ls.h"
+
+static void unf_gpn_id_callback(void *lport, void *sns_port, void *xchg);
+static void unf_gpn_id_ob_callback(struct unf_xchg *xchg);
+static void unf_gnn_id_ob_callback(struct unf_xchg *xchg);
+static void unf_scr_callback(void *lport, void *rport, void *xchg);
+static void unf_scr_ob_callback(struct unf_xchg *xchg);
+static void unf_gff_id_ob_callback(struct unf_xchg *xchg);
+static void unf_gff_id_callback(void *lport, void *sns_port, void *xchg);
+static void unf_gnn_id_callback(void *lport, void *sns_port, void *xchg);
+static void unf_gid_ft_ob_callback(struct unf_xchg *xchg);
+static void unf_gid_ft_callback(void *lport, void *rport, void *xchg);
+static void unf_gid_pt_ob_callback(struct unf_xchg *xchg);
+static void unf_gid_pt_callback(void *lport, void *rport, void *xchg);
+static void unf_rft_id_ob_callback(struct unf_xchg *xchg);
+static void unf_rft_id_callback(void *lport, void *rport, void *xchg);
+static void unf_rff_id_callback(void *lport, void *rport, void *xchg);
+static void unf_rff_id_ob_callback(struct unf_xchg *xchg);
+
+#define UNF_GET_DOMAIN_ID(x) (((x) & 0xFF0000) >> 16)
+#define UNF_GET_AREA_ID(x) (((x) & 0x00FF00) >> 8)
+
+#define UNF_GID_LAST_PORT_ID 0x80
+#define UNF_GID_CONTROL(nport_id) ((nport_id) >> 24)
+#define UNF_GET_PORT_OPTIONS(fc_4feature) ((fc_4feature) >> 20)
+
+#define UNF_SERVICE_GET_NPORTID_FORM_GID_PAGE(port_id_page) \
+ (((u32)(port_id_page)->port_id_domain << 16) | \
+ ((u32)(port_id_page)->port_id_area << 8) | \
+ ((u32)(port_id_page)->port_id_port))
+
+#define UNF_GNN_GFF_ID_RJT_REASON(rjt_reason) \
+ ((UNF_CTIU_RJT_UNABLE_PERFORM == \
+ ((rjt_reason) & UNF_CTIU_RJT_MASK)) && \
+ ((UNF_CTIU_RJT_EXP_PORTID_NO_REG == \
+ ((rjt_reason) & UNF_CTIU_RJT_EXP_MASK)) || \
+ (UNF_CTIU_RJT_EXP_PORTNAME_NO_REG == \
+ ((rjt_reason) & UNF_CTIU_RJT_EXP_MASK)) || \
+ (UNF_CTIU_RJT_EXP_NODENAME_NO_REG == \
+ ((rjt_reason) & UNF_CTIU_RJT_EXP_MASK))))
+
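+/* Send SCR (State Change Registration, full registration) to the fabric
+ * controller; called after RFF_ID has been accepted.
+ */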
+u32 unf_send_scr(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* after RCVD RFF_ID ACC */
+ struct unf_scr *scr = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, NULL, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for SCR",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_SCR;
+
+ xchg->callback = unf_scr_callback;
+ xchg->ob_callback = unf_scr_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ scr = &fc_entry->scr;
+ memset(scr, 0, sizeof(struct unf_scr));
+ scr->payload[ARRAY_INDEX_0] = (UNF_GS_CMND_SCR); /* SCR is 0x62 */
+ scr->payload[ARRAY_INDEX_1] = (UNF_FABRIC_FULL_REG); /* Full registration */
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: SCR send %s. Port(0x%x_0x%x)--->RPort(0x%x) with hottag(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, rport->nport_id, xchg->hotpooltag);
+
+ return ret;
+}
+
+static void unf_fill_gff_id_pld(struct unf_gffid *gff_id, u32 nport_id)
+{
+ FC_CHECK_RETURN_VOID(gff_id);
+
+ gff_id->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ gff_id->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ gff_id->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GFF_ID);
+ gff_id->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+ gff_id->nport_id = nport_id;
+}
+
+static void unf_ctpass_thru_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_gid_acc_pld *gid_acc_pld = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ union unf_sfs_u *sfs = NULL;
+ u32 cmnd_rsp_size = 0;
+
+ struct send_com_trans_out *out_send = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+ sfs = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+
+ gid_acc_pld = sfs->get_id.gid_rsp.gid_acc_pld;
+ if (!gid_acc_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) CT PassThru response payload is NULL",
+ unf_lport->port_id);
+
+ return;
+ }
+
+ out_send = (struct send_com_trans_out *)unf_xchg->upper_ct;
+
+ cmnd_rsp_size = (gid_acc_pld->ctiu_pream.cmnd_rsp_size);
+ if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ out_send->hba_status = 0; /* HBA_STATUS_OK 0 */
+ out_send->total_resp_buffer_cnt = unf_xchg->fcp_sfs_union.sfs_entry.cur_offset;
+ out_send->actual_resp_buffer_cnt = unf_xchg->fcp_sfs_union.sfs_entry.cur_offset;
+ unf_cpu_to_big_end(out_send->resp_buffer, (u32)out_send->total_resp_buffer_cnt);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) CT PassThru response received, len is 0x%x",
+ unf_lport->port_id, unf_lport->nport_id,
+ out_send->total_resp_buffer_cnt);
+ } else if (UNF_CT_IU_REJECT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ out_send->hba_status = 13; /* HBA_STATUS_ERROR_ELS_REJECT 13 */
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) CT PassThru was rejected",
+ unf_lport->port_id, unf_lport->nport_id);
+ } else {
+ out_send->hba_status = 1; /* HBA_STATUS_ERROR 1 */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) CT PassThru was UNKNOWN",
+ unf_lport->port_id, unf_lport->nport_id);
+ }
+
+ up(&unf_lport->wmi_task_sema);
+}
+
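+/* Pass a user-supplied CT IU through to the fabric: GIEL goes to the
+ * management server, GA_NXT to the directory server; the response is copied
+ * back into the caller's buffer by unf_ctpass_thru_callback().
+ */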
+u32 unf_send_ctpass_thru(struct unf_lport *lport, void *buffer, u32 bufflen)
+{
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_rport *sns_port = NULL;
+ struct send_com_trans_in *in_send = (struct send_com_trans_in *)buffer;
+ struct send_com_trans_out *out_send =
+ (struct send_com_trans_out *)buffer;
+ struct unf_ctiu_prem *ctiu_pream = NULL;
+ struct unf_gid *gs_pld = NULL;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buffer, UNF_RETURN_ERROR);
+
+ ctiu_pream = (struct unf_ctiu_prem *)in_send->req_buffer;
+ unf_cpu_to_big_end(ctiu_pream, sizeof(struct unf_gid));
+
+ if (ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16 == NS_GIEL) {
+ sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_MGMT_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't find SNS port",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ } else if (ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16 == NS_GA_NXT) {
+ sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't find SNS port",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]%s: unsupported cmnd(0x%x)", __func__,
+ ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id, sns_port, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for GFF_ID",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ xchg->cmnd_code = ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16;
+ xchg->upper_ct = buffer;
+ xchg->ob_callback = NULL;
+ xchg->callback = unf_ctpass_thru_callback;
+ xchg->oxid = xchg->hotpooltag;
+ unf_fill_package(&pkg, xchg, sns_port);
+ pkg.type = UNF_PKG_GS_REQ;
+ xchg->fcp_sfs_union.sfs_entry.sfs_buff_len = bufflen;
+ gs_pld = &fc_entry->get_id.gid_req; /* GID req payload */
+ memset(gs_pld, 0, sizeof(struct unf_gid));
+ memcpy(gs_pld, (struct unf_gid *)in_send->req_buffer, sizeof(struct unf_gid));
+ fc_entry->get_id.gid_rsp.gid_acc_pld = (struct unf_gid_acc_pld *)out_send->resp_buffer;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+
+ return ret;
+}
+
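+/* Query the name server (GFF_ID) for the FC-4 features of nport_id; if no
+ * exchange is available the query is re-posted as a UNF_DISC_GET_FEATURE
+ * discovery event.
+ */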
+u32 unf_send_gff_id(struct unf_lport *lport, struct unf_rport *sns_port,
+ u32 nport_id)
+{
+ struct unf_gffid *gff_id = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ struct unf_frame_pkg pkg;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+
+ if (unf_is_lport_valid(lport) != RETURN_OK)
+ /* Lport is invalid, no retry or handle required, return ok */
+ return RETURN_OK;
+
+ unf_lport = (struct unf_lport *)lport->root_lport;
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id, sns_port, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for GFF_ID",
+ lport->port_id);
+
+ return unf_get_and_post_disc_event(lport, sns_port, nport_id, UNF_DISC_GET_FEATURE);
+ }
+
+ xchg->cmnd_code = NS_GFF_ID;
+ xchg->disc_portid = nport_id;
+
+ xchg->ob_callback = unf_gff_id_ob_callback;
+ xchg->callback = unf_gff_id_callback;
+
+ unf_fill_package(&pkg, xchg, sns_port);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ gff_id = &fc_entry->gff_id;
+ memset(gff_id, 0, sizeof(struct unf_gffid));
+ unf_fill_gff_id_pld(gff_id, nport_id);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ else
+ atomic_dec(&unf_lport->disc.disc_thread_info.disc_contrl_size);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: GFF_ID send %s. Port(0x%x)--->RPort(0x%x). Inquire RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ sns_port->nport_id, nport_id);
+
+ return ret;
+}
+
+static void unf_fill_gnnid_pld(struct unf_gnnid *gnnid_pld, u32 nport_id)
+{
+ /* Inquiry R_Port node name from SW */
+ FC_CHECK_RETURN_VOID(gnnid_pld);
+
+ gnnid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ gnnid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ gnnid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GNN_ID);
+ gnnid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+
+ gnnid_pld->nport_id = nport_id;
+}
+
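+/* Query the name server (GNN_ID) for the node name of nport_id; if no
+ * exchange is available the query is re-posted as a UNF_DISC_GET_NODE_NAME
+ * discovery event.
+ */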
+u32 unf_send_gnn_id(struct unf_lport *lport, struct unf_rport *sns_port,
+ u32 nport_id)
+{
+ /* from DISC stop/re-login */
+ struct unf_gnnid *unf_gnnid = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x_0x%x) send gnnid to 0x%x.", lport->port_id,
+ lport->nport_id, nport_id);
+
+ if (unf_is_lport_valid(lport) != RETURN_OK)
+ /* Lport is invalid, no retry or handle required, return ok */
+ return RETURN_OK;
+
+ unf_lport = (struct unf_lport *)lport->root_lport;
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id,
+ sns_port, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) exchange can't be NULL for GNN_ID",
+ lport->port_id);
+
+ return unf_get_and_post_disc_event(lport, sns_port, nport_id,
+ UNF_DISC_GET_NODE_NAME);
+ }
+
+ xchg->cmnd_code = NS_GNN_ID;
+ xchg->disc_portid = nport_id;
+
+ xchg->ob_callback = unf_gnn_id_ob_callback;
+ xchg->callback = unf_gnn_id_callback;
+
+ unf_fill_package(&pkg, xchg, sns_port);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ unf_gnnid = &fc_entry->gnn_id; /* GNNID payload */
+ memset(unf_gnnid, 0, sizeof(struct unf_gnnid));
+ unf_fill_gnnid_pld(unf_gnnid, nport_id);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ else
+ atomic_dec(&unf_lport->disc.disc_thread_info.disc_contrl_size);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: GNN_ID send %s. Port(0x%x_0x%x)--->RPort(0x%x) inquire Nportid(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, sns_port->nport_id, nport_id);
+
+ return ret;
+}
+
+static void unf_fill_gpnid_pld(struct unf_gpnid *gpnid_pld, u32 nport_id)
+{
+ FC_CHECK_RETURN_VOID(gpnid_pld);
+
+ gpnid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ gpnid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ gpnid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GPN_ID);
+ gpnid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+
+ /* Inquiry WWN from SW */
+ gpnid_pld->nport_id = nport_id;
+}
+
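+/* Query the name server (GPN_ID) for the port name of nport_id; if no
+ * exchange is available the query is re-posted as a UNF_DISC_GET_PORT_NAME
+ * discovery event.
+ */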
+u32 unf_send_gpn_id(struct unf_lport *lport, struct unf_rport *sns_port,
+ u32 nport_id)
+{
+ struct unf_gpnid *gpnid_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+
+ if (unf_is_lport_valid(lport) != RETURN_OK)
+ /* Lport is invalid, no retry or handle required, return ok */
+ return RETURN_OK;
+
+ unf_lport = (struct unf_lport *)lport->root_lport;
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id,
+ sns_port, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for GPN_ID",
+ lport->port_id);
+
+ return unf_get_and_post_disc_event(lport, sns_port, nport_id,
+ UNF_DISC_GET_PORT_NAME);
+ }
+
+ xchg->cmnd_code = NS_GPN_ID;
+ xchg->disc_portid = nport_id;
+
+ xchg->callback = unf_gpn_id_callback;
+ xchg->ob_callback = unf_gpn_id_ob_callback;
+
+ unf_fill_package(&pkg, xchg, sns_port);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ gpnid_pld = &fc_entry->gpn_id;
+ memset(gpnid_pld, 0, sizeof(struct unf_gpnid));
+ unf_fill_gpnid_pld(gpnid_pld, nport_id);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ else
+ atomic_dec(&unf_lport->disc.disc_thread_info.disc_contrl_size);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: GPN_ID send %s. Port(0x%x)--->RPort(0x%x). Inquire RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ sns_port->nport_id, nport_id);
+
+ return ret;
+}
+
+static void unf_fill_gid_ft_pld(struct unf_gid *gid_pld)
+{
+ FC_CHECK_RETURN_VOID(gid_pld);
+
+ gid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ gid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ gid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GID_FT);
+ gid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+
+ gid_pld->scope_type = (UNF_GID_FT_TYPE);
+}
+
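+/* Query the name server (GID_FT) for registered N_Port IDs; the accept
+ * payload is placed in a big SFS buffer allocated for the exchange.
+ */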
+u32 unf_send_gid_ft(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_gid *gid_pld = NULL;
+ struct unf_gid_rsp *gid_rsp = NULL;
+ struct unf_gid_acc_pld *gid_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
+ rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for GID_FT",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = NS_GID_FT;
+
+ xchg->ob_callback = unf_gid_ft_ob_callback;
+ xchg->callback = unf_gid_ft_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ gid_pld = &fc_entry->get_id.gid_req; /* GID req payload */
+ unf_fill_gid_ft_pld(gid_pld);
+ gid_rsp = &fc_entry->get_id.gid_rsp; /* GID rsp payload */
+
+ gid_acc_pld = (struct unf_gid_acc_pld *)unf_get_one_big_sfs_buf(xchg);
+ if (!gid_acc_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate GID_FT response buffer failed",
+ lport->port_id);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+ memset(gid_acc_pld, 0, sizeof(struct unf_gid_acc_pld));
+ gid_rsp->gid_acc_pld = gid_acc_pld;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: GID_FT send %s. Port(0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_gid_pt_pld(struct unf_gid *gid_pld,
+ struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(gid_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ gid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ gid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ gid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GID_PT);
+ gid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+
+ /* 0x7F000000 means NX_Port */
+ gid_pld->scope_type = (UNF_GID_PT_TYPE);
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, gid_pld,
+ sizeof(struct unf_gid));
+}
+
+u32 unf_send_gid_pt(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* from DISC start */
+ struct unf_gid *gid_pld = NULL;
+ struct unf_gid_rsp *gid_rsp = NULL;
+ struct unf_gid_acc_pld *gid_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
+ rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for GID_PT",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = NS_GID_PT;
+
+ xchg->ob_callback = unf_gid_pt_ob_callback;
+ xchg->callback = unf_gid_pt_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ gid_pld = &fc_entry->get_id.gid_req; /* GID req payload */
+ unf_fill_gid_pt_pld(gid_pld, lport);
+ gid_rsp = &fc_entry->get_id.gid_rsp; /* GID rsp payload */
+
+ gid_acc_pld = (struct unf_gid_acc_pld *)unf_get_one_big_sfs_buf(xchg);
+ if (!gid_acc_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%0x) Allocate GID_PT response buffer failed",
+ lport->port_id);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+ memset(gid_acc_pld, 0, sizeof(struct unf_gid_acc_pld));
+ gid_rsp->gid_acc_pld = gid_acc_pld;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: GID_PT send %s. Port(0x%x_0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_rft_id_pld(struct unf_rftid *rftid_pld,
+ struct unf_lport *lport)
+{
+ u32 index = 1;
+
+ FC_CHECK_RETURN_VOID(rftid_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ rftid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ rftid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ rftid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_RFT_ID);
+ rftid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+ rftid_pld->nport_id = (lport->nport_id);
+ rftid_pld->fc4_types[ARRAY_INDEX_0] = (UNF_FC4_SCSI_BIT8);
+
+ for (index = ARRAY_INDEX_2; index < UNF_FC4TYPE_CNT; index++)
+ rftid_pld->fc4_types[index] = 0;
+}
+
+u32 unf_send_rft_id(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* After PLOGI process */
+ struct unf_rftid *rft_id = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
+ rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for RFT_ID",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = NS_RFT_ID;
+
+ xchg->callback = unf_rft_id_callback;
+ xchg->ob_callback = unf_rft_id_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ rft_id = &fc_entry->rft_id;
+ memset(rft_id, 0, sizeof(struct unf_rftid));
+ unf_fill_rft_id_pld(rft_id, lport);
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: RFT_ID send %s. Port(0x%x_0x%x)--->RPort(0x%x). rport(0x%p) wwpn(0x%llx) ",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, rport->nport_id, rport, rport->port_name);
+
+ return ret;
+}
+
+static void unf_fill_rff_id_pld(struct unf_rffid *rffid_pld,
+ struct unf_lport *lport, u32 fc4_type)
+{
+ FC_CHECK_RETURN_VOID(rffid_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ rffid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ rffid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ rffid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_RFF_ID);
+ rffid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+ rffid_pld->nport_id = (lport->nport_id);
+ rffid_pld->fc4_feature = (fc4_type | (lport->options << UNF_SHIFT_4));
+}
+
+u32 unf_send_rff_id(struct unf_lport *lport, struct unf_rport *rport,
+ u32 fc4_type)
+{
+ /* from RFT_ID, then Send SCR */
+ struct unf_rffid *rff_id = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO, "%s Enter", __func__);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
+ rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for RFF_ID",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = NS_RFF_ID;
+
+ xchg->callback = unf_rff_id_callback;
+ xchg->ob_callback = unf_rff_id_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ rff_id = &fc_entry->rff_id;
+ memset(rff_id, 0, sizeof(struct unf_rffid));
+ unf_fill_rff_id_pld(rff_id, lport, fc4_type);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: RFF_ID feature 0x%x(10:TGT,20:INI,30:COM) send %s. Port(0x%x_0x%x)--->RPortid(0x%x) rport(0x%p)",
+ lport->options, (ret != RETURN_OK) ? "failed" : "succeed",
+ lport->port_id, lport->nport_id, rport->nport_id, rport);
+
+ return ret;
+}
+
+void unf_handle_init_gid_acc(struct unf_gid_acc_pld *gid_acc_pld,
+ struct unf_lport *lport)
+{
+ /*
+ * from SCR ACC callback
+ * NOTE: inquiry disc R_Port used for NPIV
+ */
+ struct unf_disc_rport *disc_rport = NULL;
+ struct unf_disc *disc = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 gid_port_id = 0;
+ u32 nport_id = 0;
+ u32 index = 0;
+ u8 control = 0;
+
+ FC_CHECK_RETURN_VOID(gid_acc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ /*
+ * 1. Find & Check & Get (new) R_Port from list_disc_rports_pool
+ * then, Add to R_Port Disc_busy_list
+ */
+ while (index < UNF_GID_PORT_CNT) {
+ gid_port_id = (gid_acc_pld->gid_port_id[index]);
+ nport_id = UNF_NPORTID_MASK & gid_port_id;
+ control = UNF_GID_CONTROL(gid_port_id);
+
+ /* for each N_Port_ID from GID_ACC payload */
+ if (lport->nport_id != nport_id && nport_id != 0 &&
+ (!unf_lookup_lport_by_nportid(lport, nport_id))) {
+ /* for New Port, not L_Port */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) get nportid(0x%x) from GID_ACC",
+ lport->port_id, lport->nport_id, nport_id);
+
+ /* Get R_Port from list of RPort Disc Pool */
+ disc_rport = unf_rport_get_free_and_init(lport,
+ UNF_PORT_TYPE_DISC, nport_id);
+ if (!disc_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) can't allocate new rport(0x%x) from disc pool",
+ lport->port_id, lport->nport_id,
+ nport_id);
+
+ index++;
+ continue;
+ }
+ }
+
+ if (UNF_GID_LAST_PORT_ID == (UNF_GID_LAST_PORT_ID & control))
+ break;
+
+ index++;
+ }
+
+ /*
+ * 2. Do port disc stop operation:
+ * NOTE: Do DISC & release R_Port from busy_list back to
+ * list_disc_rports_pool
+ */
+ disc = &lport->disc;
+ if (!disc->disc_temp.unf_disc_stop) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) disc stop function is NULL",
+ lport->port_id, lport->nport_id);
+
+ return;
+ }
+
+ ret = disc->disc_temp.unf_disc_stop(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) do disc stop failed",
+ lport->port_id, lport->nport_id);
+ }
+}
+
+u32 unf_rport_relogin(struct unf_lport *lport, u32 nport_id)
+{
+ /* Send GNN_ID */
+ struct unf_rport *sns_port = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* Get SNS R_Port */
+ sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find fabric Port", lport->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Send GNN_ID now to SW */
+ ret = unf_get_and_post_disc_event(lport, sns_port, nport_id,
+ UNF_DISC_GET_NODE_NAME);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ lport->nport_id, UNF_DISC_GET_NODE_NAME, nport_id);
+
+ /* NOTE: Continue to next stage */
+ unf_rcv_gnn_id_rsp_unknown(lport, sns_port, nport_id);
+ }
+
+ return ret;
+}
+
+u32 unf_rport_check_wwn(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* Send GPN_ID */
+ struct unf_rport *sns_port = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ /* Get SNS R_Port */
+ sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find fabric Port", lport->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Send GPN_ID to SW */
+ ret = unf_get_and_post_disc_event(lport, sns_port, rport->nport_id,
+ UNF_DISC_GET_PORT_NAME);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ lport->nport_id, UNF_DISC_GET_PORT_NAME,
+ rport->nport_id);
+
+ unf_rcv_gpn_id_rsp_unknown(lport, rport->nport_id);
+ }
+
+ return ret;
+}
+
+u32 unf_handle_rscn_port_not_indisc(struct unf_lport *lport, u32 rscn_nport_id)
+{
+ /* RSCN Port_ID not in GID_ACC payload table: Link Down */
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* from R_Port busy list by N_Port_ID */
+ unf_rport = unf_get_rport_by_nport_id(lport, rscn_nport_id);
+ if (unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) RPort(0x%x) wwpn(0x%llx) has been removed and link down it",
+ lport->port_id, rscn_nport_id, unf_rport->port_name);
+
+ unf_rport_linkdown(lport, unf_rport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) has no RPort(0x%x) and do nothing",
+ lport->nport_id, rscn_nport_id);
+ }
+
+ return ret;
+}
+
+u32 unf_handle_rscn_port_indisc(struct unf_lport *lport, u32 rscn_nport_id)
+{
+ /* Send GPN_ID or re-login(GNN_ID) */
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* from R_Port busy list by N_Port_ID */
+ unf_rport = unf_get_rport_by_nport_id(lport, rscn_nport_id);
+ if (unf_rport) {
+ /* R_Port exist: send GPN_ID */
+ ret = unf_rport_check_wwn(lport, unf_rport);
+ } else {
+ if (UNF_PORT_MODE_INI == (lport->options & UNF_PORT_MODE_INI))
+ /* Re-LOGIN with INI mode: Send GNN_ID */
+ ret = unf_rport_relogin(lport, rscn_nport_id);
+ else
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) with no INI feature. Do nothing",
+ lport->nport_id);
+ }
+
+ return ret;
+}
+
+static u32 unf_handle_rscn_port_addr(struct unf_port_id_page *portid_page,
+ struct unf_gid_acc_pld *gid_acc_pld,
+ struct unf_lport *lport)
+{
+ /*
+ * Input parameters:
+ * 1. Port_ID_page: saved from RSCN payload
+ * 2. GID_ACC_payload: returned by GID_ACC (GID_PT or GID_FT)
+ *
+ * Check whether the RSCN Port_ID is within the GID_ACC payload,
+ * then re-login or link down the rport accordingly
+ */
+ u32 rscn_nport_id = 0;
+ u32 gid_port_id = 0;
+ u32 nport_id = 0;
+ u32 index = 0;
+ u8 control = 0;
+ u32 ret = RETURN_OK;
+ bool have_same_id = false;
+
+ FC_CHECK_RETURN_VALUE(portid_page, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(gid_acc_pld, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* 1. get RSCN_NPort_ID from (L_Port->Disc->RSCN_Mgr)->RSCN_Port_ID_Page
+ */
+ rscn_nport_id = UNF_SERVICE_GET_NPORTID_FORM_GID_PAGE(portid_page);
+
+ /*
+ * 2. for RSCN_NPort_ID
+ * check whether RSCN_NPort_ID within GID_ACC_Payload or not
+ */
+ while (index < UNF_GID_PORT_CNT) {
+ gid_port_id = (gid_acc_pld->gid_port_id[index]);
+ nport_id = UNF_NPORTID_MASK & gid_port_id;
+ control = UNF_GID_CONTROL(gid_port_id);
+
+ if (lport->nport_id != nport_id && nport_id != 0) {
+ /* is not L_Port */
+ if (nport_id == rscn_nport_id) {
+ /* RSCN Port_ID within GID_ACC payload */
+ have_same_id = true;
+ break;
+ }
+ }
+
+ if (UNF_GID_LAST_PORT_ID == (UNF_GID_LAST_PORT_ID & control))
+ break;
+
+ index++;
+ }
+
+ /* 3. RSCN_Port_ID not within GID_ACC payload table */
+ if (!have_same_id) {
+ /* rport has been removed */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[warn]Port(0x%x_0x%x) find RSCN N_Port_ID(0x%x) in GID_ACC table failed",
+ lport->port_id, lport->nport_id, rscn_nport_id);
+
+ /* Link down rport */
+ ret = unf_handle_rscn_port_not_indisc(lport, rscn_nport_id);
+
+ } else { /* 4. RSCN_Port_ID within GID_ACC payload table */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x_0x%x) find RSCN N_Port_ID(0x%x) in GID_ACC table succeed",
+ lport->port_id, lport->nport_id, rscn_nport_id);
+
+ /* Re-login with INI mode */
+ ret = unf_handle_rscn_port_indisc(lport, rscn_nport_id);
+ }
+
+ return ret;
+}
+
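+ /*
+ * Mark an R_Port as needing RSCN processing when its N_Port_ID falls
+ * inside the address scope (area, domain or whole fabric) carried by
+ * the RSCN page
+ */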
+void unf_check_rport_rscn_process(struct unf_rport *rport,
+ struct unf_port_id_page *portid_page)
+{
+ struct unf_rport *unf_rport = rport;
+ struct unf_port_id_page *unf_portid_page = portid_page;
+ u8 addr_format = unf_portid_page->addr_format;
+
+ switch (addr_format) {
+ /* domain+area */
+ case UNF_RSCN_AREA_ADDR_GROUP:
+ if (UNF_GET_DOMAIN_ID(unf_rport->nport_id) == unf_portid_page->port_id_domain &&
+ UNF_GET_AREA_ID(unf_rport->nport_id) == unf_portid_page->port_id_area)
+ unf_rport->rscn_position = UNF_RPORT_NEED_PROCESS;
+
+ break;
+ /* domain */
+ case UNF_RSCN_DOMAIN_ADDR_GROUP:
+ if (UNF_GET_DOMAIN_ID(unf_rport->nport_id) == unf_portid_page->port_id_domain)
+ unf_rport->rscn_position = UNF_RPORT_NEED_PROCESS;
+
+ break;
+ /* all */
+ case UNF_RSCN_FABRIC_ADDR_GROUP:
+ unf_rport->rscn_position = UNF_RPORT_NEED_PROCESS;
+ break;
+ default:
+ break;
+ }
+}
+
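+ /*
+ * Walk the busy R_Port list and flag the ports covered by the RSCN page;
+ * ports at or above UNF_FC_FID_DOM_MGR (well-known addresses) are excluded
+ */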
+static void unf_set_rport_rscn_position(struct unf_lport *lport,
+ struct unf_port_id_page *portid_page)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct list_head *list_node = NULL;
+ struct list_head *list_nextnode = NULL;
+ struct unf_disc *disc = NULL;
+ ulong disc_flag = 0;
+ ulong rport_flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ disc = &lport->disc;
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ list_for_each_safe(list_node, list_nextnode, &disc->list_busy_rports) {
+ unf_rport = list_entry(list_node, struct unf_rport, entry_rport);
+ spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
+
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ if (unf_rport->rscn_position == UNF_RPORT_NOT_NEED_PROCESS)
+ unf_check_rport_rscn_process(unf_rport, portid_page);
+ } else {
+ unf_rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ }
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+}
+
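+ /*
+ * R_Ports still flagged NEED_PROCESS after comparing with the GID_ACC
+ * payload were not reported by the name server: downgrade them to
+ * ONLY_IN_LOCAL_PROCESS so that they are linked down later
+ */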
+static void unf_set_rport_rscn_position_local(struct unf_lport *lport)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct list_head *list_node = NULL;
+ struct list_head *list_nextnode = NULL;
+ struct unf_disc *disc = NULL;
+ ulong disc_flag = 0;
+ ulong rport_flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ disc = &lport->disc;
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ list_for_each_safe(list_node, list_nextnode, &disc->list_busy_rports) {
+ unf_rport = list_entry(list_node, struct unf_rport, entry_rport);
+ spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
+
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ if (unf_rport->rscn_position == UNF_RPORT_NEED_PROCESS)
+ unf_rport->rscn_position = UNF_RPORT_ONLY_IN_LOCAL_PROCESS;
+ } else {
+ unf_rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ }
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+}
+
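+ /* Reset all busy R_Ports to NOT_NEED_PROCESS; called with
+ * disc->rport_busy_pool_lock held
+ */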
+static void unf_reset_rport_rscn_setting(struct unf_lport *lport)
+{
+ struct unf_rport *rport = NULL;
+ struct list_head *list_node = NULL;
+ struct list_head *list_nextnode = NULL;
+ struct unf_disc *disc = NULL;
+ ulong rport_flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ disc = &lport->disc;
+
+ list_for_each_safe(list_node, list_nextnode, &disc->list_busy_rports) {
+ rport = list_entry(list_node, struct unf_rport, entry_rport);
+ spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
+ rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+ }
+}
+
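+ /*
+ * For an N_Port_ID reported by GID_ACC and matching the RSCN page scope:
+ * re-login with GNN_ID if no local R_Port exists (INI mode only),
+ * otherwise mark the existing R_Port as seen both in the fabric response
+ * and in the local list
+ */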
+void unf_compare_nport_id_with_rport_list(struct unf_lport *lport, u32 nport_id,
+ struct unf_port_id_page *portid_page)
+{
+ struct unf_rport *rport = NULL;
+ ulong rport_flag = 0;
+ u8 addr_format = portid_page->addr_format;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ switch (addr_format) {
+ /* domain+area */
+ case UNF_RSCN_AREA_ADDR_GROUP:
+ if ((UNF_GET_DOMAIN_ID(nport_id) != portid_page->port_id_domain) ||
+ (UNF_GET_AREA_ID(nport_id) != portid_page->port_id_area))
+ return;
+
+ break;
+ /* domain */
+ case UNF_RSCN_DOMAIN_ADDR_GROUP:
+ if (UNF_GET_DOMAIN_ID(nport_id) != portid_page->port_id_domain)
+ return;
+
+ break;
+ /* all */
+ case UNF_RSCN_FABRIC_ADDR_GROUP:
+ break;
+ /* cannot reach this branch: guaranteed by the caller */
+ default:
+ break;
+ }
+
+ rport = unf_get_rport_by_nport_id(lport, nport_id);
+
+ if (!rport) {
+ if (UNF_PORT_MODE_INI == (lport->options & UNF_PORT_MODE_INI)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) Find Rport(0x%x) by RSCN",
+ lport->nport_id, nport_id);
+ unf_rport_relogin(lport, nport_id);
+ }
+ } else {
+ spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
+ if (rport->rscn_position == UNF_RPORT_NEED_PROCESS)
+ rport->rscn_position = UNF_RPORT_IN_DISC_AND_LOCAL_PROCESS;
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+ }
+}
+
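+ /*
+ * Compare every N_Port_ID in the GID_ACC payload with the local busy
+ * R_Port list, then downgrade the R_Ports that were not reported
+ */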
+static void unf_compare_disc_with_local_rport(struct unf_lport *lport,
+ struct unf_gid_acc_pld *pld,
+ struct unf_port_id_page *page)
+{
+ u32 gid_port_id = 0;
+ u32 nport_id = 0;
+ u32 index = 0;
+ u8 control = 0;
+
+ FC_CHECK_RETURN_VOID(pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ while (index < UNF_GID_PORT_CNT) {
+ gid_port_id = (pld->gid_port_id[index]);
+ nport_id = UNF_NPORTID_MASK & gid_port_id;
+ control = UNF_GID_CONTROL(gid_port_id);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO, "[info]Port(0x%x) DISC N_Port_ID(0x%x)",
+ lport->nport_id, nport_id);
+
+ if (nport_id != 0 &&
+ (!unf_lookup_lport_by_nportid(lport, nport_id)))
+ unf_compare_nport_id_with_rport_list(lport, nport_id, page);
+
+ if (UNF_GID_LAST_PORT_ID == (UNF_GID_LAST_PORT_ID & control))
+ break;
+
+ index++;
+ }
+
+ unf_set_rport_rscn_position_local(lport);
+}
+
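+ /*
+ * Post-RSCN action for a single R_Port: verify its WWPN with GPN_ID when
+ * it is in both lists, link it down when it exists only locally
+ */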
+static u32 unf_process_each_rport_after_rscn(struct unf_lport *lport,
+ struct unf_rport *sns_port,
+ struct unf_rport *rport)
+{
+ ulong rport_flag = 0;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
+
+ if (rport->rscn_position == UNF_RPORT_IN_DISC_AND_LOCAL_PROCESS) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) rescan position(0x%x), check wwpn",
+ lport->port_id, lport->nport_id, rport->nport_id,
+ rport->rscn_position);
+ rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+ ret = unf_rport_check_wwn(lport, rport);
+ } else if (rport->rscn_position == UNF_RPORT_ONLY_IN_LOCAL_PROCESS) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x_0x%x) RPort(0x%x) rescan position(0x%x), linkdown it",
+ lport->port_id, lport->nport_id, rport->nport_id,
+ rport->rscn_position);
+ rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+ unf_rport_linkdown(lport, rport);
+ } else {
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+ }
+
+ return ret;
+}
+
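+ /*
+ * Walk the busy list and handle each flagged R_Port, dropping the pool
+ * lock around the per-R_Port action and restarting from the list head
+ */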
+static u32 unf_process_local_rport_after_rscn(struct unf_lport *lport,
+ struct unf_rport *sns_port)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct list_head *list_node = NULL;
+ struct unf_disc *disc = NULL;
+ ulong disc_flag = 0;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+ disc = &lport->disc;
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ if (list_empty(&disc->list_busy_rports)) {
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ list_node = UNF_OS_LIST_NEXT(&disc->list_busy_rports);
+
+ do {
+ unf_rport = list_entry(list_node, struct unf_rport, entry_rport);
+
+ if (unf_rport->rscn_position == UNF_RPORT_NOT_NEED_PROCESS) {
+ list_node = UNF_OS_LIST_NEXT(list_node);
+ continue;
+ } else {
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+ ret = unf_process_each_rport_after_rscn(lport, sns_port, unf_rport);
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ list_node = UNF_OS_LIST_NEXT(&disc->list_busy_rports);
+ }
+ } while (list_node != &disc->list_busy_rports);
+
+ unf_reset_rport_rscn_setting(lport);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+
+ return ret;
+}
+
+static u32 unf_handle_rscn_group_addr(struct unf_port_id_page *portid_page,
+ struct unf_gid_acc_pld *gid_acc_pld,
+ struct unf_lport *lport)
+{
+ struct unf_rport *sns_port = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(portid_page, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(gid_acc_pld, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find fabric port failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_set_rport_rscn_position(lport, portid_page);
+ unf_compare_disc_with_local_rport(lport, gid_acc_pld, portid_page);
+
+ ret = unf_process_local_rport_after_rscn(lport, sns_port);
+
+ return ret;
+}
+
+static void unf_handle_rscn_gid_acc(struct unf_gid_acc_pld *gid_acc_pid,
+ struct unf_lport *lport)
+{
+ /* process the N_Port_ID table returned for the RSCN */
+ struct unf_port_id_page *port_id_page = NULL;
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ struct list_head *list_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(gid_acc_pid);
+ FC_CHECK_RETURN_VOID(lport);
+ rscn_mgr = &lport->disc.rscn_mgr;
+
+ spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
+ while (!list_empty(&rscn_mgr->list_using_rscn_page)) {
+ /*
+ * For each RSCN_Using_Page (Port_ID_Page) in
+ * L_Port->Disc->RSCN_Mgr->list_using_rscn_page:
+ * NOTE: check whether the page's Port_ID is within the GID_ACC
+ * payload or not
+ */
+ list_node = UNF_OS_LIST_NEXT(&rscn_mgr->list_using_rscn_page);
+ port_id_page = list_entry(list_node, struct unf_port_id_page, list_node_rscn);
+ list_del(list_node); /* NOTE: here delete node (from RSCN using Page) */
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+
+ switch (port_id_page->addr_format) {
+ /* each RSCN page corresponds to a single N_Port_ID */
+ case UNF_RSCN_PORT_ADDR:
+ (void)unf_handle_rscn_port_addr(port_id_page, gid_acc_pid, lport);
+ break;
+
+ /* each RSCN page corresponds to an address group */
+ case UNF_RSCN_AREA_ADDR_GROUP:
+ case UNF_RSCN_DOMAIN_ADDR_GROUP:
+ case UNF_RSCN_FABRIC_ADDR_GROUP:
+ (void)unf_handle_rscn_group_addr(port_id_page, gid_acc_pid, lport);
+ break;
+
+ default:
+ break;
+ }
+
+ /* NOTE: release this RSCN_Node */
+ rscn_mgr->unf_release_rscn_node(rscn_mgr, port_id_page);
+
+ /* go to next */
+ spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
+ }
+
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+}
+
+static void unf_gid_acc_handle(struct unf_gid_acc_pld *gid_acc_pid,
+ struct unf_lport *lport)
+{
+#define UNF_NONE_DISC 0x0 /* before entering DISC */
+ struct unf_disc *disc = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(gid_acc_pid);
+ FC_CHECK_RETURN_VOID(lport);
+ disc = &lport->disc;
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ switch (disc->disc_option) {
+ case UNF_INIT_DISC: /* from SCR callback with INI mode */
+ disc->disc_option = UNF_NONE_DISC;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_handle_init_gid_acc(gid_acc_pid, lport); /* R_Port from Disc_list */
+ break;
+
+ case UNF_RSCN_DISC: /* from RSCN payload parse(analysis) */
+ disc->disc_option = UNF_NONE_DISC;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_handle_rscn_gid_acc(gid_acc_pid, lport); /* R_Port from busy_list */
+ break;
+
+ default:
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x)'s disc option(0x%x) is abnormal",
+ lport->port_id, lport->nport_id, disc->disc_option);
+ break;
+ }
+}
+
+static void unf_gid_ft_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do recovery */
+ struct unf_lport *lport = NULL;
+ union unf_sfs_u *sfs_ptr = NULL;
+ struct unf_disc *disc = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ sfs_ptr = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!sfs_ptr)
+ return;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ if (!lport)
+ return;
+
+ disc = &lport->disc;
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(lport, UNF_EVENT_DISC_FAILED);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Do DISC recovery operation */
+ unf_disc_error_recovery(lport);
+}
+
+static void unf_gid_ft_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_gid_acc_pld *gid_acc_pld = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ union unf_sfs_u *sfs_ptr = NULL;
+ u32 cmnd_rsp_size = 0;
+ u32 rjt_reason = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+ disc = &unf_lport->disc;
+
+ sfs_ptr = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ gid_acc_pld = sfs_ptr->get_id.gid_rsp.gid_acc_pld;
+ if (!gid_acc_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) GID_FT response payload is NULL",
+ unf_lport->port_id);
+
+ return;
+ }
+
+ cmnd_rsp_size = gid_acc_pld->ctiu_pream.cmnd_rsp_size;
+ if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Process GID_FT ACC */
+ unf_gid_acc_handle(gid_acc_pld, unf_lport);
+ } else if (UNF_CT_IU_REJECT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ rjt_reason = (gid_acc_pld->ctiu_pream.frag_reason_exp_vend);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) GID_FT was rejected with reason code(0x%x)",
+ unf_lport->port_id, rjt_reason);
+
+ if (UNF_CTIU_RJT_EXP_FC4TYPE_NO_REG ==
+ (rjt_reason & UNF_CTIU_RJT_EXP_MASK)) {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_gid_acc_handle(gid_acc_pld, unf_lport);
+ } else {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ }
+ } else {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_FAILED);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Do DISC recovery operation */
+ unf_disc_error_recovery(unf_lport);
+ }
+}
+
+static void unf_gid_pt_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do recovery */
+ struct unf_lport *lport = NULL;
+ union unf_sfs_u *sfs_ptr = NULL;
+ struct unf_disc *disc = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ sfs_ptr = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!sfs_ptr)
+ return;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ if (!lport)
+ return;
+
+ disc = &lport->disc;
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(lport, UNF_EVENT_DISC_FAILED);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Do DISC recovery operation */
+ unf_disc_error_recovery(lport);
+}
+
+static void unf_gid_pt_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_gid_acc_pld *gid_acc_pld = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ union unf_sfs_u *sfs_ptr = NULL;
+ u32 cmnd_rsp_size = 0;
+ u32 rjt_reason = 0;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_rport = (struct unf_rport *)rport;
+ disc = &unf_lport->disc;
+ unf_xchg = (struct unf_xchg *)xchg;
+ sfs_ptr = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+
+ gid_acc_pld = sfs_ptr->get_id.gid_rsp.gid_acc_pld;
+ if (!gid_acc_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) GID_PT response payload is NULL",
+ unf_lport->port_id);
+ return;
+ }
+
+ cmnd_rsp_size = (gid_acc_pld->ctiu_pream.cmnd_rsp_size);
+ if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_gid_acc_handle(gid_acc_pld, unf_lport);
+ } else if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_REJECT) {
+ rjt_reason = (gid_acc_pld->ctiu_pream.frag_reason_exp_vend);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) GID_PT was rejected with reason code(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, rjt_reason);
+
+ if ((rjt_reason & UNF_CTIU_RJT_EXP_MASK) ==
+ UNF_CTIU_RJT_EXP_PORTTYPE_NO_REG) {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_gid_acc_handle(gid_acc_pld, unf_lport);
+ } else {
+ ret = unf_send_gid_ft(unf_lport, unf_rport);
+ if (ret != RETURN_OK)
+ goto SEND_GID_PT_FT_FAILED;
+ }
+ } else {
+ goto SEND_GID_PT_FT_FAILED;
+ }
+
+ return;
+SEND_GID_PT_FT_FAILED:
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_FAILED);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ unf_disc_error_recovery(unf_lport);
+}
+
+static void unf_gnn_id_ob_callback(struct unf_xchg *xchg)
+{
+ /* Send GFF_ID */
+ struct unf_lport *lport = NULL;
+ struct unf_rport *sns_port = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 nport_id = 0;
+ struct unf_lport *root_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ lport = xchg->lport;
+ FC_CHECK_RETURN_VOID(lport);
+ sns_port = xchg->rport;
+ FC_CHECK_RETURN_VOID(sns_port);
+ nport_id = xchg->disc_portid;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send GNN_ID failed to inquire RPort(0x%x)",
+ lport->port_id, nport_id);
+
+ root_lport = (struct unf_lport *)lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ /* NOTE: continue next stage */
+ ret = unf_get_and_post_disc_event(lport, sns_port, nport_id, UNF_DISC_GET_FEATURE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ lport->port_id, UNF_DISC_GET_FEATURE, nport_id);
+
+ unf_rcv_gff_id_rsp_unknown(lport, nport_id);
+ }
+}
+
+static void unf_rcv_gnn_id_acc(struct unf_lport *lport,
+ struct unf_rport *sns_port,
+ struct unf_gnnid_rsp *gnnid_rsp_pld,
+ u32 nport_id)
+{
+ /* Send GFF_ID or Link down immediately */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_sns_port = sns_port;
+ struct unf_gnnid_rsp *unf_gnnid_rsp_pld = gnnid_rsp_pld;
+ struct unf_rport *rport = NULL;
+ u64 node_name = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+ FC_CHECK_RETURN_VOID(gnnid_rsp_pld);
+
+ node_name = ((u64)(unf_gnnid_rsp_pld->node_name[ARRAY_INDEX_0]) << UNF_SHIFT_32) |
+ ((u64)(unf_gnnid_rsp_pld->node_name[ARRAY_INDEX_1]));
+
+ if (unf_lport->node_name == node_name) {
+ /* R_Port & L_Port with same Node Name */
+ rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ if (rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) has the same node name(0x%llx) with RPort(0x%x), linkdown it",
+ unf_lport->port_id, node_name, nport_id);
+
+ /* Destroy immediately */
+ unf_rport_immediate_link_down(unf_lport, rport);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x) got RPort(0x%x) with node name(0x%llx) by GNN_ID",
+ unf_lport->port_id, nport_id, node_name);
+
+ /* Start to Send GFF_ID */
+ ret = unf_get_and_post_disc_event(unf_lport, unf_sns_port,
+ nport_id, UNF_DISC_GET_FEATURE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ unf_lport->port_id, UNF_DISC_GET_FEATURE, nport_id);
+
+ unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
+ }
+ }
+}
+
+static void unf_rcv_gnn_id_rjt(struct unf_lport *lport,
+ struct unf_rport *sns_port,
+ struct unf_gnnid_rsp *gnnid_rsp_pld,
+ u32 nport_id)
+{
+ /* Send GFF_ID */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_sns_port = sns_port;
+ struct unf_gnnid_rsp *unf_gnnid_rsp_pld = gnnid_rsp_pld;
+ u32 rjt_reason = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+ FC_CHECK_RETURN_VOID(gnnid_rsp_pld);
+
+ rjt_reason = (unf_gnnid_rsp_pld->ctiu_pream.frag_reason_exp_vend);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) GNN_ID was rejected with reason code(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, rjt_reason);
+
+ if (!UNF_GNN_GFF_ID_RJT_REASON(rjt_reason)) {
+ /* Node existence: Continue next stage */
+ ret = unf_get_and_post_disc_event(unf_lport, unf_sns_port,
+ nport_id, UNF_DISC_GET_FEATURE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ unf_lport->port_id, UNF_DISC_GET_FEATURE, nport_id);
+
+ unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
+ }
+ }
+}
+
+void unf_rcv_gnn_id_rsp_unknown(struct unf_lport *lport,
+ struct unf_rport *sns_port, u32 nport_id)
+{
+ /* Send GFF_ID */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_sns_port = sns_port;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) Rportid(0x%x) GNN_ID response is unknown. Sending GFF_ID",
+ unf_lport->port_id, unf_lport->nport_id, nport_id);
+
+ ret = unf_get_and_post_disc_event(unf_lport, unf_sns_port, nport_id, UNF_DISC_GET_FEATURE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ unf_lport->port_id, UNF_DISC_GET_FEATURE,
+ nport_id);
+
+ /* NOTE: go to next stage */
+ unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
+ }
+}
+
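+ /*
+ * GNN_ID response callback: release one discovery-control slot, then
+ * dispatch on the CT IU response code (ACC/RJT/unknown)
+ */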
+static void unf_gnn_id_callback(void *lport, void *sns_port, void *xchg)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_rport *unf_sns_port = (struct unf_rport *)sns_port;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_gnnid_rsp *gnnid_rsp_pld = NULL;
+ u32 cmnd_rsp_size = 0;
+ u32 nport_id = 0;
+ struct unf_lport *root_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ nport_id = unf_xchg->disc_portid;
+ gnnid_rsp_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->gnn_id_rsp;
+ cmnd_rsp_size = gnnid_rsp_pld->ctiu_pream.cmnd_rsp_size;
+
+ root_lport = (struct unf_lport *)unf_lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
+ /* Case ACC: send GFF_ID or Link down immediately */
+ unf_rcv_gnn_id_acc(unf_lport, unf_sns_port, gnnid_rsp_pld, nport_id);
+ } else if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_REJECT) {
+ /* Case RJT: send GFF_ID */
+ unf_rcv_gnn_id_rjt(unf_lport, unf_sns_port, gnnid_rsp_pld, nport_id);
+ } else { /* NOTE: continue next stage */
+ /* Case unknown: send GFF_ID */
+ unf_rcv_gnn_id_rsp_unknown(unf_lport, unf_sns_port, nport_id);
+ }
+}
+
+static void unf_gff_id_ob_callback(struct unf_xchg *xchg)
+{
+ /* Send PLOGI */
+ struct unf_lport *lport = NULL;
+ struct unf_lport *root_lport = NULL;
+ struct unf_rport *rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 nport_id = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ nport_id = xchg->disc_portid;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ root_lport = (struct unf_lport *)lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ /* Get (safe) R_Port */
+ rport = unf_get_rport_by_nport_id(lport, nport_id);
+ rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ if (!rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't allocate new RPort(0x%x)",
+ lport->port_id, nport_id);
+ return;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) send GFF_ID(0x%x_0x%x) to RPort(0x%x_0x%x) abnormal",
+ lport->port_id, lport->nport_id, xchg->oxid, xchg->rxid,
+ rport->rport_index, rport->nport_id);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* NOTE: Start to send PLOGI */
+ ret = unf_send_plogi(lport, rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send PLOGI failed, enter recovry",
+ lport->port_id);
+
+ /* Do R_Port recovery */
+ unf_rport_error_recovery(rport);
+ }
+}
+
+void unf_rcv_gff_id_acc(struct unf_lport *lport,
+ struct unf_gffid_rsp *gffid_rsp_pld, u32 nport_id)
+{
+ /* Delay to LOGIN */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *rport = NULL;
+ struct unf_gffid_rsp *unf_gffid_rsp_pld = gffid_rsp_pld;
+ u32 fc_4feacture = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(gffid_rsp_pld);
+
+ fc_4feacture = unf_gffid_rsp_pld->fc4_feature[ARRAY_INDEX_1];
+ if ((UNF_GFF_ACC_MASK & fc_4feacture) == 0)
+ fc_4feacture = be32_to_cpu(unf_gffid_rsp_pld->fc4_feature[ARRAY_INDEX_1]);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) RPort(0x%x) received GFF_ID ACC. FC4 feature is 0x%x(1:TGT,2:INI,3:COM)",
+ unf_lport->port_id, unf_lport->nport_id, nport_id, fc_4feacture);
+
+ /* Check (& Get new) R_Port */
+ rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ if (rport)
+ rport = unf_find_rport(unf_lport, nport_id, rport->port_name);
+
+ if (rport || (UNF_GET_PORT_OPTIONS(fc_4feacture) != UNF_PORT_MODE_INI)) {
+ rport = unf_get_safe_rport(unf_lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+ } else {
+ return;
+ }
+
+ if ((fc_4feacture & UNF_GFF_ACC_MASK) != 0) {
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->options = UNF_GET_PORT_OPTIONS(fc_4feacture);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ } else if (rport->port_name != INVALID_WWPN) {
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->options = unf_get_port_feature(rport->port_name);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+
+ /* NOTE: Send PLOGI if necessary */
+ unf_check_rport_need_delay_plogi(unf_lport, rport, rport->options);
+}
+
+void unf_rcv_gff_id_rjt(struct unf_lport *lport,
+ struct unf_gffid_rsp *gffid_rsp_pld, u32 nport_id)
+{
+ /* Delay LOGIN or LOGO */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *rport = NULL;
+ struct unf_gffid_rsp *unf_gffid_rsp_pld = gffid_rsp_pld;
+ u32 rjt_reason = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(gffid_rsp_pld);
+
+ /* Check (& Get new) R_Port */
+ rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ if (rport)
+ rport = unf_find_rport(unf_lport, nport_id, rport->port_name);
+
+ if (!rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) get RPort by N_Port_ID(0x%x) failed and alloc new",
+ unf_lport->port_id, nport_id);
+
+ rport = unf_rport_get_free_and_init(unf_lport, UNF_PORT_TYPE_FC, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+
+ rjt_reason = unf_gffid_rsp_pld->ctiu_pream.frag_reason_exp_vend;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send GFF_ID for RPort(0x%x) but was rejected. Reason code(0x%x)",
+ unf_lport->port_id, nport_id, rjt_reason);
+
+ if (!UNF_GNN_GFF_ID_RJT_REASON(rjt_reason)) {
+ rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Delay to send PLOGI */
+ unf_rport_delay_login(rport);
+ } else {
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ if (rport->rp_state == UNF_RPORT_ST_INIT) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Enter closing state */
+ unf_rport_enter_logo(unf_lport, rport);
+ } else {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+ }
+}
+
+void unf_rcv_gff_id_rsp_unknown(struct unf_lport *lport, u32 nport_id)
+{
+ /* Send PLOGI */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *rport = NULL;
+ ulong flag = 0;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send GFF_ID for RPort(0x%x) but response is unknown",
+ unf_lport->port_id, nport_id);
+
+ /* Get (Safe) R_Port & Set State */
+ rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ if (rport)
+ rport = unf_find_rport(unf_lport, nport_id, rport->port_name);
+
+ if (!rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) can't get RPort by NPort ID(0x%x), allocate new RPort",
+ unf_lport->port_id, unf_lport->nport_id, nport_id);
+
+ rport = unf_rport_get_free_and_init(unf_lport, UNF_PORT_TYPE_FC, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+
+ rport = unf_get_safe_rport(unf_lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Start to send PLOGI */
+ ret = unf_send_plogi(unf_lport, rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) can not send PLOGI for RPort(0x%x), enter recovery",
+ unf_lport->port_id, nport_id);
+
+ unf_rport_error_recovery(rport);
+ }
+}
+
+static void unf_gff_id_callback(void *lport, void *sns_port, void *xchg)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_lport *root_lport = NULL;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_gffid_rsp *gffid_rsp_pld = NULL;
+ u32 cmnd_rsp_size = 0;
+ u32 nport_id = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ nport_id = unf_xchg->disc_portid;
+
+ gffid_rsp_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->gff_id_rsp;
+ cmnd_rsp_size = (gffid_rsp_pld->ctiu_pream.cmnd_rsp_size);
+
+ root_lport = (struct unf_lport *)unf_lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
+ /* Case for GFF_ID ACC: (Delay)PLOGI */
+ unf_rcv_gff_id_acc(unf_lport, gffid_rsp_pld, nport_id);
+ } else if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_REJECT) {
+ /* Case for GFF_ID RJT: Delay PLOGI or LOGO directly */
+ unf_rcv_gff_id_rjt(unf_lport, gffid_rsp_pld, nport_id);
+ } else {
+ /* Send PLOGI */
+ unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
+ }
+}
+
+static void unf_rcv_gpn_id_acc(struct unf_lport *lport,
+ u32 nport_id, u64 port_name)
+{
+ /* then PLOGI or re-login */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ rport = unf_find_valid_rport(unf_lport, port_name, nport_id);
+ if (rport) {
+ /* R_Port with TGT mode & L_Port with INI mode:
+ * send PLOGI with INIT state
+ */
+ if ((rport->options & UNF_PORT_MODE_TGT) == UNF_PORT_MODE_TGT) {
+ rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_INIT, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Start to send PLOGI */
+ ret = unf_send_plogi(unf_lport, rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI failed for 0x%x, enter recovry",
+ unf_lport->port_id, unf_lport->nport_id, nport_id);
+
+ unf_rport_error_recovery(rport);
+ }
+ } else {
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ if (rport->rp_state != UNF_RPORT_ST_PLOGI_WAIT &&
+ rport->rp_state != UNF_RPORT_ST_PRLI_WAIT &&
+ rport->rp_state != UNF_RPORT_ST_READY) {
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Do LOGO operation */
+ unf_rport_enter_logo(unf_lport, rport);
+ } else {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+ }
+ } else {
+ /* Send GNN_ID */
+ (void)unf_rport_relogin(unf_lport, nport_id);
+ }
+}
+
+static void unf_rcv_gpn_id_rjt(struct unf_lport *lport, u32 nport_id)
+{
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *rport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ if (rport)
+ /* Do R_Port Link down */
+ unf_rport_linkdown(unf_lport, rport);
+}
+
+void unf_rcv_gpn_id_rsp_unknown(struct unf_lport *lport, u32 nport_id)
+{
+ struct unf_lport *unf_lport = lport;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) wrong response of GPN_ID with RPort(0x%x)",
+ unf_lport->port_id, nport_id);
+
+ /* NOTE: go to next stage */
+ (void)unf_rport_relogin(unf_lport, nport_id);
+}
+
+static void unf_gpn_id_ob_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *lport = NULL;
+ u32 nport_id = 0;
+ struct unf_lport *root_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ lport = xchg->lport;
+ nport_id = xchg->disc_portid;
+ FC_CHECK_RETURN_VOID(lport);
+
+ root_lport = (struct unf_lport *)lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send GPN_ID failed to inquire RPort(0x%x)",
+ lport->port_id, nport_id);
+
+ /* NOTE: go to next stage */
+ (void)unf_rport_relogin(lport, nport_id);
+}
+
+static void unf_gpn_id_callback(void *lport, void *sns_port, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_gpnid_rsp *gpnid_rsp_pld = NULL;
+ u64 port_name = 0;
+ u32 cmnd_rsp_size = 0;
+ u32 nport_id = 0;
+ struct unf_lport *root_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+ nport_id = unf_xchg->disc_portid;
+
+ root_lport = (struct unf_lport *)unf_lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ gpnid_rsp_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->gpn_id_rsp;
+ cmnd_rsp_size = gpnid_rsp_pld->ctiu_pream.cmnd_rsp_size;
+ if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ /* GPN_ID ACC */
+ port_name = ((u64)(gpnid_rsp_pld->port_name[ARRAY_INDEX_0])
+ << UNF_SHIFT_32) |
+ ((u64)(gpnid_rsp_pld->port_name[ARRAY_INDEX_1]));
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x) GPN_ID ACC with WWN(0x%llx) RPort NPort ID(0x%x)",
+ unf_lport->port_id, port_name, nport_id);
+
+ /* Send PLOGI or LOGO or GNN_ID */
+ unf_rcv_gpn_id_acc(unf_lport, nport_id, port_name);
+ } else if (UNF_CT_IU_REJECT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ /* GPN_ID RJT: Link Down */
+ unf_rcv_gpn_id_rjt(unf_lport, nport_id);
+ } else {
+ /* GPN_ID response type unknown: Send GNN_ID */
+ unf_rcv_gpn_id_rsp_unknown(unf_lport, nport_id);
+ }
+}
+
+static void unf_rff_id_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do recovery */
+ struct unf_lport *lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send RFF_ID failed",
+ lport->port_id, lport->nport_id);
+
+ unf_lport_error_recovery(lport);
+}
+
+static void unf_rff_id_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_ctiu_prem *ctiu_prem = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 cmnd_rsp_size = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+ if (unlikely(!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr))
+ return;
+
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_FCTRL);
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport,
+ UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FCTRL);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't allocate RPort(0x%x)",
+ unf_lport->port_id, UNF_FC_FID_FCTRL);
+ return;
+ }
+
+ unf_rport->nport_id = UNF_FC_FID_FCTRL;
+ ctiu_prem = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rff_id_rsp.ctiu_pream;
+ cmnd_rsp_size = ctiu_prem->cmnd_rsp_size;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x_0x%x) RFF_ID rsp is (0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ (cmnd_rsp_size & UNF_CT_IU_RSP_MASK));
+
+ /* RSP type check: some switches do not support RFF_ID; go to the next stage anyway */
+ if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) receive RFF ACC(0x%x) in state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ (cmnd_rsp_size & UNF_CT_IU_RSP_MASK), unf_lport->states);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) receive RFF RJT(0x%x) in state(0x%x) with RJT reason code(0x%x) explanation(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ (cmnd_rsp_size & UNF_CT_IU_RSP_MASK), unf_lport->states,
+ (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_REASON_MASK,
+ (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_EXPLAN_MASK);
+ }
+
+ /* L_Port state check */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_RFF_ID_WAIT) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) receive RFF reply in state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
+
+ return;
+ }
+ /* LPort: RFF_ID_WAIT --> SCR_WAIT */
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ ret = unf_send_scr(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send SCR failed",
+ unf_lport->port_id, unf_lport->nport_id);
+ unf_lport_error_recovery(unf_lport);
+ }
+}
+
+static void unf_rft_id_ob_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send RFT_ID failed",
+ lport->port_id, lport->nport_id);
+ unf_lport_error_recovery(lport);
+}
+
+static void unf_rft_id_callback(void *lport, void *rport, void *xchg)
+{
+ /* RFT_ID --->>> RFF_ID */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_ctiu_prem *ctiu_prem = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 cmnd_rsp_size = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_rport = (struct unf_rport *)rport;
+ unf_xchg = (struct unf_xchg *)xchg;
+
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) SFS entry is NULL with state(0x%x)",
+ unf_lport->port_id, unf_lport->states);
+ return;
+ }
+
+ ctiu_prem = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr
+ ->rft_id_rsp.ctiu_pream;
+ cmnd_rsp_size = (ctiu_prem->cmnd_rsp_size);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) RFT_ID response is (0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ (cmnd_rsp_size & UNF_CT_IU_RSP_MASK));
+
+ if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ /* Case for RFT_ID ACC: send RFF_ID */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_RFT_ID_WAIT) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) receive RFT_ID ACC in state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_lport->states);
+
+ return;
+ }
+
+ /* LPort: RFT_ID_WAIT --> RFF_ID_WAIT */
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ /* Start to send RFF_ID GS command */
+ ret = unf_send_rff_id(unf_lport, unf_rport, UNF_FC4_FCP_TYPE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send RFF_ID failed",
+ unf_lport->port_id, unf_lport->nport_id);
+ unf_lport_error_recovery(unf_lport);
+ }
+ } else {
+ /* Case for RFT_ID RJT: do recovery */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) receive RFT_ID RJT with reason_code(0x%x) explanation(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_REASON_MASK,
+ (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_EXPLAN_MASK);
+
+ /* Do L_Port recovery */
+ unf_lport_error_recovery(unf_lport);
+ }
+}
+
+static void unf_scr_ob_callback(struct unf_xchg *xchg)
+{
+	/* Callback function for exception: do L_Port error recovery */
+ struct unf_lport *lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send SCR failed and do port recovery",
+ lport->port_id);
+
+ unf_lport_error_recovery(lport);
+}
+
+static void unf_scr_callback(void *lport, void *rport, void *xchg)
+{
+ /* Callback function for SCR response: Send GID_PT with INI mode */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_els_acc *els_acc = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong port_flag = 0;
+ ulong disc_flag = 0;
+ u32 cmnd = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+ disc = &unf_lport->disc;
+
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr)
+ return;
+
+ els_acc = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->els_acc;
+ if (unf_xchg->byte_orders & UNF_BIT_2)
+ cmnd = be32_to_cpu(els_acc->cmnd);
+ else
+ cmnd = (els_acc->cmnd);
+
+ if ((cmnd & UNF_ELS_CMND_HIGH_MASK) == UNF_ELS_CMND_ACC) {
+ spin_lock_irqsave(&unf_lport->lport_state_lock, port_flag);
+ if (unf_lport->states != UNF_LPORT_ST_SCR_WAIT) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock,
+ port_flag);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) receive SCR ACC with error state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_lport->states);
+ return;
+ }
+
+ /* LPort: SCR_WAIT --> READY */
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
+ if (unf_lport->states == UNF_LPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) enter READY state when received SCR response",
+ unf_lport->port_id, unf_lport->nport_id);
+ }
+
+ /* Start to Discovery with INI mode: GID_PT */
+ if ((unf_lport->options & UNF_PORT_MODE_INI) ==
+ UNF_PORT_MODE_INI) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock,
+ port_flag);
+
+ if (unf_lport->disc.disc_temp.unf_disc_start) {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock,
+ disc_flag);
+ unf_lport->disc.disc_option = UNF_INIT_DISC;
+ disc->last_disc_jiff = jiffies;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+
+ ret = unf_lport->disc.disc_temp.unf_disc_start(unf_lport);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x) DISC %s with INI mode",
+ unf_lport->port_id,
+ (ret != RETURN_OK) ? "failed" : "succeed");
+ }
+ return;
+ }
+
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, port_flag);
+ /* NOTE: set state with UNF_DISC_ST_END used for
+ * RSCN process
+ */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ unf_lport->disc.states = UNF_DISC_ST_END;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+			     "[info]Port(0x%x) is in TGT mode, no need for discovery",
+ unf_lport->port_id);
+
+ return;
+ }
+ unf_lport_error_recovery(unf_lport);
+}
+
+void unf_check_rport_need_delay_plogi(struct unf_lport *lport,
+ struct unf_rport *rport, u32 port_feature)
+{
+ /*
+ * Called by:
+ * 1. Private loop
+ * 2. RCVD GFF_ID ACC
+ */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ ulong flag = 0;
+ u32 nport_id = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ nport_id = unf_rport->nport_id;
+
+ /*
+ * Send GFF_ID means L_Port has INI attribute
+ * *
+ * When to send PLOGI:
+ * 1. R_Port has TGT mode (COM or TGT), send PLOGI immediately
+ * 2. R_Port only with INI, send LOGO immediately
+ * 3. R_Port with unknown attribute, delay to send PLOGI
+ */
+ if ((UNF_PORT_MODE_TGT & port_feature) ||
+ (UNF_LPORT_ENHANCED_FEATURE_ENHANCED_GFF &
+ unf_lport->enhanced_features)) {
+ /* R_Port has TGT mode: send PLOGI immediately */
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = nport_id;
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Start to send PLOGI */
+ ret = unf_send_plogi(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI to RPort(0x%x) failed",
+ unf_lport->port_id, unf_lport->nport_id,
+ nport_id);
+
+ unf_rport_error_recovery(unf_rport);
+ }
+ } else if (port_feature == UNF_PORT_MODE_INI) {
+ /* R_Port only with INI mode: can't send PLOGI
+ * --->>> LOGO/nothing
+ */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ if (unf_rport->rp_state == UNF_RPORT_ST_INIT) {
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send LOGO to RPort(0x%x) which only with INI mode",
+ unf_lport->port_id, unf_lport->nport_id, nport_id);
+
+ /* Enter Closing state */
+ unf_rport_enter_logo(unf_lport, unf_rport);
+ } else {
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+ }
+ } else {
+ /* Unknown R_Port attribute: Delay to send PLOGI */
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = nport_id;
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_delay_login(unf_rport);
+ }
+}
diff --git a/drivers/scsi/spfc/common/unf_gs.h b/drivers/scsi/spfc/common/unf_gs.h
new file mode 100644
index 000000000000..d9856133b3cd
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_gs.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_GS_H
+#define UNF_GS_H
+
+#include "unf_type.h"
+#include "unf_lport.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+u32 unf_send_scr(struct unf_lport *lport,
+ struct unf_rport *rport);
+u32 unf_send_ctpass_thru(struct unf_lport *lport,
+ void *buffer, u32 bufflen);
+
+u32 unf_send_gid_ft(struct unf_lport *lport,
+ struct unf_rport *rport);
+u32 unf_send_gid_pt(struct unf_lport *lport,
+ struct unf_rport *rport);
+u32 unf_send_gpn_id(struct unf_lport *lport,
+ struct unf_rport *sns_port, u32 nport_id);
+u32 unf_send_gnn_id(struct unf_lport *lport,
+ struct unf_rport *sns_port, u32 nport_id);
+u32 unf_send_gff_id(struct unf_lport *lport,
+ struct unf_rport *sns_port, u32 nport_id);
+
+u32 unf_send_rff_id(struct unf_lport *lport,
+ struct unf_rport *rport, u32 fc4_type);
+u32 unf_send_rft_id(struct unf_lport *lport,
+ struct unf_rport *rport);
+void unf_rcv_gnn_id_rsp_unknown(struct unf_lport *lport,
+ struct unf_rport *sns_port, u32 nport_id);
+void unf_rcv_gpn_id_rsp_unknown(struct unf_lport *lport, u32 nport_id);
+void unf_rcv_gff_id_rsp_unknown(struct unf_lport *lport, u32 nport_id);
+void unf_check_rport_need_delay_plogi(struct unf_lport *lport,
+ struct unf_rport *rport, u32 port_feature);
+
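+/* Request and response buffer layouts, presumably consumed by
+ * unf_send_ctpass_thru() for CT passthrough requests.
+ */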
+struct send_com_trans_in {
+ unsigned char port_wwn[8];
+ u32 req_buffer_count;
+ unsigned char req_buffer[ARRAY_INDEX_1];
+};
+
+struct send_com_trans_out {
+ u32 hba_status;
+ u32 total_resp_buffer_cnt;
+ u32 actual_resp_buffer_cnt;
+ unsigned char resp_buffer[ARRAY_INDEX_1];
+};
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_init.c b/drivers/scsi/spfc/common/unf_init.c
new file mode 100644
index 000000000000..7e6f98d16977
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_init.c
@@ -0,0 +1,353 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_type.h"
+#include "unf_log.h"
+#include "unf_scsi_common.h"
+#include "unf_event.h"
+#include "unf_exchg.h"
+#include "unf_portman.h"
+#include "unf_rport.h"
+#include "unf_service.h"
+#include "unf_io.h"
+#include "unf_io_abnormal.h"
+
+#define UNF_PID 12
+#define MY_PID UNF_PID
+
+#define RPORT_FEATURE_POOL_SIZE 4096
+struct task_struct *event_task_thread;
+struct workqueue_struct *unf_wq;
+
+atomic_t fc_mem_ref;
+
+struct unf_global_card_thread card_thread_mgr;
+u32 unf_dgb_level = UNF_MAJOR;
+u32 log_print_level = UNF_INFO;
+u32 log_limited_times = UNF_LOGIN_ATT_PRINT_TIMES;
+
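+/*
+ * esgl-page allocation hook handed to the low-level driver through
+ * unf_cm_handle_ops: resolves the exchange from the frame package and takes
+ * one free extended-SGL page for it.
+ */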
+static struct unf_esgl_page *unf_get_one_free_esgl_page
+ (void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(pkg, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)pkg->xchg_contex;
+
+ return unf_get_and_add_one_free_esgl_page(unf_lport, unf_xchg);
+}
+
+static int unf_get_cfg_parms(char *section_name, struct unf_cfg_item *cfg_itm,
+ u32 *cfg_value, u32 itemnum)
+{
+ /* Maximum length of a configuration item value, including the end
+ * character
+ */
+#define UNF_MAX_ITEM_VALUE_LEN (256)
+
+ u32 *unf_cfg_value = NULL;
+ struct unf_cfg_item *unf_cfg_itm = NULL;
+ u32 i = 0;
+
+ unf_cfg_itm = cfg_itm;
+ unf_cfg_value = cfg_value;
+
+ for (i = 0; i < itemnum; i++) {
+ if (!unf_cfg_itm || !unf_cfg_value) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR,
+ "[err]Config name or value is NULL");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (strcmp("End", unf_cfg_itm->puc_name) == 0x0)
+ break;
+
+ if (strcmp("fw_path", unf_cfg_itm->puc_name) == 0x0) {
+ unf_cfg_itm++;
+ unf_cfg_value += UNF_MAX_ITEM_VALUE_LEN / sizeof(u32);
+ continue;
+ }
+
+ *unf_cfg_value = unf_cfg_itm->default_value;
+ unf_cfg_itm++;
+ unf_cfg_value++;
+ }
+
+ return RETURN_OK;
+}
+
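+/*
+ * Callback table handed to the low-level (hardware) driver via
+ * unf_get_cm_handle_ops(); the target-mode hooks are left NULL.
+ */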
+struct unf_cm_handle_op unf_cm_handle_ops = {
+ .unf_alloc_local_port = unf_lport_create_and_init,
+ .unf_release_local_port = unf_release_local_port,
+ .unf_receive_ls_gs_pkg = unf_receive_ls_gs_pkg,
+ .unf_receive_bls_pkg = unf_receive_bls_pkg,
+ .unf_send_els_done = unf_send_els_done,
+ .unf_receive_ini_response = unf_ini_scsi_completed,
+ .unf_get_cfg_parms = unf_get_cfg_parms,
+ .unf_receive_marker_status = unf_recv_tmf_marker_status,
+ .unf_receive_abts_marker_status = unf_recv_abts_marker_status,
+
+ .unf_process_fcp_cmnd = NULL,
+ .unf_tgt_cmnd_xfer_or_rsp_echo = NULL,
+ .unf_cm_get_sgl_entry = unf_ini_get_sgl_entry,
+ .unf_cm_get_dif_sgl_entry = unf_ini_get_dif_sgl_entry,
+ .unf_get_one_free_esgl_page = unf_get_one_free_esgl_page,
+ .unf_fc_port_event = unf_fc_port_link_event,
+};
+
+u32 unf_get_cm_handle_ops(struct unf_cm_handle_op *cm_handle)
+{
+ FC_CHECK_RETURN_VALUE(cm_handle, UNF_RETURN_ERROR);
+
+ memcpy(cm_handle, &unf_cm_handle_ops, sizeof(struct unf_cm_handle_op));
+
+ return RETURN_OK;
+}
+
+static void unf_deinit_cm_handle_ops(void)
+{
+ memset(&unf_cm_handle_ops, 0, sizeof(struct unf_cm_handle_op));
+}
+
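+/*
+ * Event thread body: drains fc_event_list under its spinlock and handles each
+ * node via unf_handle_event(); when the list is empty it sleeps for
+ * UNF_S_TO_MS milliseconds, and it exits once kthread_should_stop() is set.
+ */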
+int unf_event_process(void *worker_ptr)
+{
+ struct list_head *event_list = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ struct completion *create_done = (struct completion *)worker_ptr;
+ ulong flags = 0;
+
+ set_user_nice(current, UNF_OS_THRD_PRI_LOW);
+ recalc_sigpending();
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[event]Enter event thread");
+
+ if (create_done)
+ complete(create_done);
+
+ do {
+ spin_lock_irqsave(&fc_event_list.fc_event_list_lock, flags);
+ if (list_empty(&fc_event_list.list_head)) {
+ spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout((long)msecs_to_jiffies(UNF_S_TO_MS));
+ } else {
+ event_list = UNF_OS_LIST_NEXT(&fc_event_list.list_head);
+ list_del_init(event_list);
+ fc_event_list.list_num--;
+ event_node = list_entry(event_list,
+ struct unf_cm_event_report,
+ list_entry);
+ spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
+
+ /* Process event node */
+ unf_handle_event(event_node);
+ }
+ } while (!kthread_should_stop());
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "[event]Event thread exit");
+
+ return RETURN_OK;
+}
+
+static int unf_creat_event_center(void)
+{
+ struct completion create_done;
+
+ init_completion(&create_done);
+ INIT_LIST_HEAD(&fc_event_list.list_head);
+ fc_event_list.list_num = 0;
+ spin_lock_init(&fc_event_list.fc_event_list_lock);
+
+ event_task_thread = kthread_run(unf_event_process, &create_done, "spfc_event");
+ if (IS_ERR(event_task_thread)) {
+ complete_and_exit(&create_done, 0);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create event thread failed(0x%p)",
+ event_task_thread);
+
+ return UNF_RETURN_ERROR;
+ }
+ wait_for_completion(&create_done);
+ return RETURN_OK;
+}
+
+static void unf_cm_event_thread_exit(void)
+{
+ if (event_task_thread)
+ kthread_stop(event_task_thread);
+}
+
+static void unf_init_card_mgr_list(void)
+{
+ /* So far, do not care */
+ INIT_LIST_HEAD(&card_thread_mgr.card_list_head);
+
+ spin_lock_init(&card_thread_mgr.global_card_list_lock);
+
+ card_thread_mgr.card_num = 0;
+}
+
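+/*
+ * Pre-allocates RPORT_FEATURE_POOL_SIZE R_Port feature records and links them
+ * onto the pool's free list; both lists are protected by port_fea_pool_lock.
+ */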
+int unf_port_feature_pool_init(void)
+{
+ u32 index = 0;
+ u32 rport_feature_pool_size = 0;
+ struct unf_rport_feature_recard *rport_feature = NULL;
+ unsigned long flags = 0;
+
+ rport_feature_pool_size = sizeof(struct unf_rport_feature_pool);
+ port_feature_pool = vmalloc(rport_feature_pool_size);
+ if (!port_feature_pool) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]cannot allocate rport feature pool");
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(port_feature_pool, 0, rport_feature_pool_size);
+ spin_lock_init(&port_feature_pool->port_fea_pool_lock);
+ INIT_LIST_HEAD(&port_feature_pool->list_busy_head);
+ INIT_LIST_HEAD(&port_feature_pool->list_free_head);
+
+ port_feature_pool->port_feature_pool_addr =
+ vmalloc((size_t)(RPORT_FEATURE_POOL_SIZE * sizeof(struct unf_rport_feature_recard)));
+ if (!port_feature_pool->port_feature_pool_addr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]cannot allocate rport feature pool address");
+
+ vfree(port_feature_pool);
+ port_feature_pool = NULL;
+
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(port_feature_pool->port_feature_pool_addr, 0,
+ RPORT_FEATURE_POOL_SIZE * sizeof(struct unf_rport_feature_recard));
+ rport_feature = (struct unf_rport_feature_recard *)
+ port_feature_pool->port_feature_pool_addr;
+
+ spin_lock_irqsave(&port_feature_pool->port_fea_pool_lock, flags);
+ for (index = 0; index < RPORT_FEATURE_POOL_SIZE; index++) {
+ list_add_tail(&rport_feature->entry_feature, &port_feature_pool->list_free_head);
+ rport_feature++;
+ }
+ spin_unlock_irqrestore(&port_feature_pool->port_fea_pool_lock, flags);
+
+ return RETURN_OK;
+}
+
+void unf_free_port_feature_pool(void)
+{
+ if (port_feature_pool->port_feature_pool_addr) {
+ vfree(port_feature_pool->port_feature_pool_addr);
+ port_feature_pool->port_feature_pool_addr = NULL;
+ }
+
+ vfree(port_feature_pool);
+ port_feature_pool = NULL;
+}
+
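+/*
+ * Common-layer bring-up: feature pool -> INI transport registration ->
+ * port/card managers -> global event messages -> event thread -> workqueue.
+ * Any failure unwinds the earlier steps through the goto labels below.
+ */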
+int unf_common_init(void)
+{
+ int ret = RETURN_OK;
+
+ unf_dgb_level = UNF_MAJOR;
+ log_print_level = UNF_KEVENT;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "UNF Driver Version:%s.", SPFC_DRV_VERSION);
+
+ atomic_set(&fc_mem_ref, 0);
+ ret = unf_port_feature_pool_init();
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port Feature Pool init failed");
+ return ret;
+ }
+
+ ret = (int)unf_register_ini_transport();
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]INI interface init failed");
+ goto REG_INITRANSPORT_FAIL;
+ }
+
+ unf_port_mgmt_init();
+ unf_init_card_mgr_list();
+ ret = (int)unf_init_global_event_msg();
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create global event center failed");
+ goto CREAT_GLBEVENTMSG_FAIL;
+ }
+
+ ret = (int)unf_creat_event_center();
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create event center (thread) failed");
+ goto CREAT_EVENTCENTER_FAIL;
+ }
+
+ unf_wq = create_workqueue("unf_wq");
+ if (!unf_wq) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create work queue failed");
+ goto CREAT_WORKQUEUE_FAIL;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Init common layer succeed");
+ return ret;
+CREAT_WORKQUEUE_FAIL:
+ unf_cm_event_thread_exit();
+CREAT_EVENTCENTER_FAIL:
+ unf_destroy_global_event_msg();
+CREAT_GLBEVENTMSG_FAIL:
+ unf_unregister_ini_transport();
+REG_INITRANSPORT_FAIL:
+ unf_free_port_feature_pool();
+ return UNF_RETURN_ERROR;
+}
+
+static void unf_destroy_dirty_port(void)
+{
+	u32 dirty_port_num = 0;
+
+	unf_show_dirty_port(false, &dirty_port_num);
+
+	FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+		     "[info]Sys has %u dirty L_Port(s)", dirty_port_num);
+}
+
+void unf_common_exit(void)
+{
+ unf_free_port_feature_pool();
+
+ unf_destroy_dirty_port();
+
+ flush_workqueue(unf_wq);
+ destroy_workqueue(unf_wq);
+ unf_wq = NULL;
+
+ unf_cm_event_thread_exit();
+
+ unf_destroy_global_event_msg();
+
+ unf_deinit_cm_handle_ops();
+
+ unf_port_mgmt_deinit();
+
+ unf_unregister_ini_transport();
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[info]SPFC module remove succeed, memory reference count is %d",
+ atomic_read(&fc_mem_ref));
+}
diff --git a/drivers/scsi/spfc/common/unf_io.c b/drivers/scsi/spfc/common/unf_io.c
new file mode 100644
index 000000000000..b1255ecba88c
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_io.c
@@ -0,0 +1,1220 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_io.h"
+#include "unf_log.h"
+#include "unf_portman.h"
+#include "unf_service.h"
+#include "unf_io_abnormal.h"
+
+u32 sector_size_flag;
+
+#define UNF_GET_FCP_CTL(pkg) ((((pkg)->status) >> UNF_SHIFT_8) & 0xFF)
+#define UNF_GET_SCSI_STATUS(pkg) (((pkg)->status) & 0xFF)
+
+static u32 unf_io_success_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status);
+static u32 unf_ini_error_default_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg,
+ u32 up_status);
+static u32 unf_io_underflow_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status);
+static u32 unf_ini_dif_error_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status);
+
+struct unf_ini_error_handler_s {
+ u32 ini_error_code;
+ u32 (*unf_ini_error_handler)(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status);
+};
+
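+/*
+ * Dispatch table mapping low-level I/O completion codes to INI handlers;
+ * codes without a dedicated handler use the default handler.
+ */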
+struct unf_ini_error_handler_s ini_error_handler_table[] = {
+ {UNF_IO_SUCCESS, unf_io_success_handler},
+ {UNF_IO_ABORTED, unf_ini_error_default_handler},
+ {UNF_IO_FAILED, unf_ini_error_default_handler},
+ {UNF_IO_ABORT_ABTS, unf_ini_error_default_handler},
+ {UNF_IO_ABORT_LOGIN, unf_ini_error_default_handler},
+ {UNF_IO_ABORT_REET, unf_ini_error_default_handler},
+ {UNF_IO_ABORT_FAILED, unf_ini_error_default_handler},
+ {UNF_IO_OUTOF_ORDER, unf_ini_error_default_handler},
+ {UNF_IO_FTO, unf_ini_error_default_handler},
+ {UNF_IO_LINK_FAILURE, unf_ini_error_default_handler},
+ {UNF_IO_OVER_FLOW, unf_ini_error_default_handler},
+ {UNF_IO_RSP_OVER, unf_ini_error_default_handler},
+ {UNF_IO_LOST_FRAME, unf_ini_error_default_handler},
+ {UNF_IO_UNDER_FLOW, unf_io_underflow_handler},
+ {UNF_IO_HOST_PROG_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_SEST_PROG_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_INVALID_ENTRY, unf_ini_error_default_handler},
+ {UNF_IO_ABORT_SEQ_NOT, unf_ini_error_default_handler},
+ {UNF_IO_REJECT, unf_ini_error_default_handler},
+ {UNF_IO_EDC_IN_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_EDC_OUT_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_UNINIT_KEK_ERR, unf_ini_error_default_handler},
+ {UNF_IO_DEK_OUTOF_RANGE, unf_ini_error_default_handler},
+ {UNF_IO_KEY_UNWRAP_ERR, unf_ini_error_default_handler},
+ {UNF_IO_KEY_TAG_ERR, unf_ini_error_default_handler},
+ {UNF_IO_KEY_ECC_ERR, unf_ini_error_default_handler},
+ {UNF_IO_BLOCK_SIZE_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_ILLEGAL_CIPHER_MODE, unf_ini_error_default_handler},
+ {UNF_IO_CLEAN_UP, unf_ini_error_default_handler},
+ {UNF_IO_ABORTED_BY_TARGET, unf_ini_error_default_handler},
+ {UNF_IO_TRANSPORT_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_LINK_FLASH, unf_ini_error_default_handler},
+ {UNF_IO_TIMEOUT, unf_ini_error_default_handler},
+ {UNF_IO_DMA_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_DIF_ERROR, unf_ini_dif_error_handler},
+ {UNF_IO_INCOMPLETE, unf_ini_error_default_handler},
+ {UNF_IO_DIF_REF_ERROR, unf_ini_dif_error_handler},
+ {UNF_IO_DIF_GEN_ERROR, unf_ini_dif_error_handler},
+ {UNF_IO_NO_XCHG, unf_ini_error_default_handler}
+ };
+
+void unf_done_ini_xchg(struct unf_xchg *xchg)
+{
+ /*
+ * About I/O Done
+ * 1. normal case
+ * 2. Send ABTS & RCVD RSP
+ * 3. Send ABTS & timer timeout
+ */
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ ulong flags = 0;
+ struct unf_scsi_cmd_info *scsi_cmnd_info = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ u32 scsi_id = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ if (unlikely(!xchg->scsi_cmnd_info.scsi_cmnd))
+ return;
+
+ /* 1. Free RX_ID for INI SIRT: Do not care */
+
+ /*
+ * 2. set & check exchange state
+ * *
+ * for Set UP_ABORT Tag:
+ * 1) L_Port destroy
+ * 2) LUN reset
+ * 3) Target/Session reset
+ * 4) SCSI send Abort(ABTS)
+ */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->io_state |= INI_IO_STATE_DONE;
+ if (unlikely(xchg->io_state &
+ (INI_IO_STATE_UPABORT | INI_IO_STATE_UPSEND_ERR | INI_IO_STATE_TMF_ABORT))) {
+ /*
+ * a. UPABORT: scsi have send ABTS
+ * --->>> do not call SCSI_Done, return directly
+		 * b. UPSEND_ERR: error happened while the LLDD was sending the SCSI_CMD
+ * --->>> do not call SCSI_Done, scsi need retry
+ */
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[event]Exchange(0x%p) Cmdsn:0x%lx upCmd:%p hottag(0x%x) with state(0x%x) has been aborted or send error",
+ xchg, (ulong)xchg->cmnd_sn, xchg->scsi_cmnd_info.scsi_cmnd,
+ xchg->hotpooltag, xchg->io_state);
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ return;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ scsi_cmnd_info = &xchg->scsi_cmnd_info;
+
+ /*
+ * 3. Set:
+ * scsi_cmnd;
+ * cmnd_done_func;
+ * cmnd up_level_done;
+ * sense_buff_addr;
+ * resid_length;
+ * cmnd_result;
+ * dif_info
+ * **
+ * UNF_SCSI_CMND <<-- UNF_SCSI_CMND_INFO
+ */
+ UNF_SET_HOST_CMND((&scsi_cmd), scsi_cmnd_info->scsi_cmnd);
+ UNF_SER_CMND_DONE_FUNC((&scsi_cmd), scsi_cmnd_info->done);
+ UNF_SET_UP_LEVEL_CMND_DONE_FUNC(&scsi_cmd, scsi_cmnd_info->uplevel_done);
+ scsi_cmd.drv_private = xchg->lport;
+ if (unlikely((UNF_SCSI_STATUS(xchg->scsi_cmnd_info.result)) & FCP_SNS_LEN_VALID_MASK)) {
+ unf_save_sense_data(scsi_cmd.upper_cmnd,
+ (char *)xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu,
+ (int)xchg->fcp_sfs_union.fcp_rsp_entry.fcp_sense_len);
+ }
+ UNF_SET_RESID((&scsi_cmd), (u32)xchg->resid_len);
+ UNF_SET_CMND_RESULT((&scsi_cmd), scsi_cmnd_info->result);
+ memcpy(&scsi_cmd.dif_info, &xchg->dif_info, sizeof(struct dif_info));
+
+ scsi_id = scsi_cmnd_info->scsi_id;
+
+ UNF_DONE_SCSI_CMND((&scsi_cmd));
+
+ /* 4. Update IO result CNT */
+ if (likely(xchg->lport)) {
+ scsi_image_table = &xchg->lport->rport_scsi_table;
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id,
+ (scsi_cmnd_info->result >> UNF_SHIFT_16));
+ }
+}
+
+static inline u32 unf_ini_get_sgl_entry_buf(ini_get_sgl_entry_buf ini_get_sgl,
+ void *cmnd, void *driver_sgl,
+ void **upper_sgl, u32 *req_index,
+ u32 *index, char **buf,
+ u32 *buf_len)
+{
+ if (unlikely(!ini_get_sgl)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Command(0x%p) Get sgl Entry func Null.", cmnd);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return ini_get_sgl(cmnd, driver_sgl, upper_sgl, req_index, index, buf, buf_len);
+}
+
+u32 unf_ini_get_sgl_entry(void *pkg, char **buf, u32 *buf_len)
+{
+ struct unf_frame_pkg *unf_pkg = (struct unf_frame_pkg *)pkg;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf_len, UNF_RETURN_ERROR);
+
+ unf_xchg = (struct unf_xchg *)unf_pkg->xchg_contex;
+ FC_CHECK_RETURN_VALUE(unf_xchg, UNF_RETURN_ERROR);
+
+ /* Get SGL Entry buffer for INI Mode */
+ ret = unf_ini_get_sgl_entry_buf(unf_xchg->scsi_cmnd_info.unf_get_sgl_entry_buf,
+ unf_xchg->scsi_cmnd_info.scsi_cmnd, NULL,
+ &unf_xchg->req_sgl_info.sgl,
+ &unf_xchg->scsi_cmnd_info.port_id,
+ &((unf_xchg->req_sgl_info).entry_index), buf, buf_len);
+
+ return ret;
+}
+
+u32 unf_ini_get_dif_sgl_entry(void *pkg, char **buf, u32 *buf_len)
+{
+ struct unf_frame_pkg *unf_pkg = (struct unf_frame_pkg *)pkg;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf_len, UNF_RETURN_ERROR);
+
+ unf_xchg = (struct unf_xchg *)unf_pkg->xchg_contex;
+ FC_CHECK_RETURN_VALUE(unf_xchg, UNF_RETURN_ERROR);
+
+ /* Get SGL Entry buffer for INI Mode */
+ ret = unf_ini_get_sgl_entry_buf(unf_xchg->scsi_cmnd_info.unf_get_sgl_entry_buf,
+ unf_xchg->scsi_cmnd_info.scsi_cmnd, NULL,
+ &unf_xchg->dif_sgl_info.sgl,
+ &unf_xchg->scsi_cmnd_info.port_id,
+ &((unf_xchg->dif_sgl_info).entry_index), buf, buf_len);
+ return ret;
+}
+
+u32 unf_get_up_level_cmnd_errcode(struct unf_ini_error_code *err_table,
+ u32 err_table_count, u32 drv_err_code)
+{
+ u32 loop = 0;
+
+	/* if lookup fails, return DID_ERROR and let the upper level adjust it */
+ if (unlikely(!err_table)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Error Code Table is Null, Error Code(0x%x).", drv_err_code);
+
+ return (u32)UNF_SCSI_HOST(DID_ERROR);
+ }
+
+ for (loop = 0; loop < err_table_count; loop++) {
+ if (err_table[loop].drv_errcode == drv_err_code)
+ return err_table[loop].ap_errcode;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Unsupported Ap Error code by Error Code(0x%x).", drv_err_code);
+
+ return (u32)UNF_SCSI_HOST(DID_ERROR);
+}
+
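+/*
+ * Translates the low-level completion code of the package into an upper-level
+ * SCSI result and invokes the matching handler from ini_error_handler_table;
+ * unknown codes are logged and handled as UNF_IO_SOFT_ERR.
+ */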
+static u32 unf_ini_status_handle(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg)
+{
+ u32 loop = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 up_status = 0;
+
+ for (loop = 0; loop < sizeof(ini_error_handler_table) /
+ sizeof(struct unf_ini_error_handler_s); loop++) {
+ if (UNF_GET_LL_ERR(pkg) == ini_error_handler_table[loop].ini_error_code) {
+ up_status =
+ unf_get_up_level_cmnd_errcode(xchg->scsi_cmnd_info.err_code_table,
+ xchg->scsi_cmnd_info.err_code_table_cout,
+ UNF_GET_LL_ERR(pkg));
+
+ if (ini_error_handler_table[loop].unf_ini_error_handler) {
+ ret = ini_error_handler_table[loop]
+ .unf_ini_error_handler(xchg, pkg, up_status);
+ } else {
+ /* set exchange->result ---to--->>>scsi_result */
+ ret = unf_ini_error_default_handler(xchg, pkg, up_status);
+ }
+
+ return ret;
+ }
+ }
+
+ up_status = unf_get_up_level_cmnd_errcode(xchg->scsi_cmnd_info.err_code_table,
+ xchg->scsi_cmnd_info.err_code_table_cout,
+ UNF_IO_SOFT_ERR);
+
+ ret = unf_ini_error_default_handler(xchg, pkg, up_status);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Can not find com status, SID(0x%x) exchange(0x%p) com_status(0x%x) DID(0x%x) hot_pool_tag(0x%x)",
+ xchg->sid, xchg, pkg->status, xchg->did, xchg->hotpooltag);
+
+ return ret;
+}
+
+static void unf_analysis_response_info(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg,
+ u32 *up_status)
+{
+ u8 *resp_buf = NULL;
+
+	/* LL_Driver uses little-endian; copy RSP_INFO to COM_Driver */
+ if (unlikely(pkg->unf_rsp_pload_bl.length > UNF_RESPONE_DATA_LEN)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Receive FCP response resp buffer len is invalid 0x%x",
+ pkg->unf_rsp_pload_bl.length);
+ return;
+ }
+
+ resp_buf = (u8 *)pkg->unf_rsp_pload_bl.buffer_ptr;
+ if (resp_buf) {
+ /* If chip use Little End, then change it to Big End */
+ if ((pkg->byte_orders & UNF_BIT_3) == 0)
+ unf_cpu_to_big_end(resp_buf, pkg->unf_rsp_pload_bl.length);
+
+		/* Chip DMA data is big-endian */
+ if (resp_buf[ARRAY_INDEX_3] != UNF_FCP_TM_RSP_COMPLETE) {
+ *up_status = UNF_SCSI_HOST(DID_BUS_BUSY);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%p) DID bus busy, scsi_status(0x%x)",
+ xchg->lport, UNF_GET_SCSI_STATUS(pkg));
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Receive FCP response, resp buffer is NULL resp buffer len is 0x%x",
+ pkg->unf_rsp_pload_bl.length);
+ }
+}
+
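+/*
+ * Copies the sense data that follows the FCP response payload into a freshly
+ * allocated (GFP_ATOMIC) buffer attached to the exchange; the buffer is later
+ * consumed by unf_done_ini_xchg() through unf_save_sense_data().
+ */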
+static void unf_analysis_sense_info(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 *up_status)
+{
+ u32 length = 0;
+
+ /* 4 bytes Align */
+ length = MIN(SCSI_SENSE_DATA_LEN, pkg->unf_sense_pload_bl.length);
+
+ if (unlikely(pkg->unf_sense_pload_bl.length > SCSI_SENSE_DATA_LEN)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[info]Receive FCP response resp buffer len is 0x%x",
+ pkg->unf_sense_pload_bl.length);
+ }
+	/*
+	 * If sense info is present, copy it directly;
+	 * otherwise the chip has already DMAed the data into the sense buffer.
+	 */
+
+ if (length != 0 && pkg->unf_rsp_pload_bl.buffer_ptr) {
+ /* has been dma to exchange buffer */
+ if (unlikely(pkg->unf_rsp_pload_bl.length > UNF_RESPONE_DATA_LEN)) {
+ *up_status = UNF_SCSI_HOST(DID_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Receive FCP response resp buffer len is invalid 0x%x",
+ pkg->unf_rsp_pload_bl.length);
+
+ return;
+ }
+
+ xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu = (u8 *)kmalloc(length, GFP_ATOMIC);
+ if (!xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Alloc FCP sense buffer failed");
+ return;
+ }
+
+ memcpy(xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu,
+ ((u8 *)(pkg->unf_rsp_pload_bl.buffer_ptr)) +
+ pkg->unf_rsp_pload_bl.length, length);
+
+ xchg->fcp_sfs_union.fcp_rsp_entry.fcp_sense_len = length;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Receive FCP response, sense buffer is NULL sense buffer len is 0x%x",
+ length);
+ }
+}
+
+static u32 unf_io_success_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status)
+{
+ u8 scsi_status = 0;
+ u8 control = 0;
+ u32 status = up_status;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ control = UNF_GET_FCP_CTL(pkg);
+ scsi_status = UNF_GET_SCSI_STATUS(pkg);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Port(0x%p), Exchange(0x%p) Completed, Control(0x%x), Scsi Status(0x%x)",
+ xchg->lport, xchg, control, scsi_status);
+
+ if (control & FCP_SNS_LEN_VALID_MASK) {
+ /* has sense info */
+ if (scsi_status == FCP_SCSI_STATUS_GOOD)
+ scsi_status = SCSI_CHECK_CONDITION;
+
+ unf_analysis_sense_info(xchg, pkg, &status);
+ } else {
+ /*
+ * When the FCP_RSP_LEN_VALID bit is set to one,
+ * the content of the SCSI STATUS CODE field is not reliable
+ * and shall be ignored by the application client.
+ */
+ if (control & FCP_RSP_LEN_VALID_MASK)
+ unf_analysis_response_info(xchg, pkg, &status);
+ }
+
+ xchg->scsi_cmnd_info.result = status | UNF_SCSI_STATUS(scsi_status);
+
+ return RETURN_OK;
+}
+
+static u32 unf_ini_error_default_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg,
+ u32 up_status)
+{
+ /* set exchange->result ---to--->>> scsi_cmnd->result */
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
+ "[warn]SID(0x%x) exchange(0x%p) com_status(0x%x) up_status(0x%x) DID(0x%x) hot_pool_tag(0x%x) response_len(0x%x)",
+ xchg->sid, xchg, pkg->status, up_status, xchg->did,
+ xchg->hotpooltag, pkg->residus_len);
+
+ xchg->scsi_cmnd_info.result =
+ up_status | UNF_SCSI_STATUS(UNF_GET_SCSI_STATUS(pkg));
+
+ return RETURN_OK;
+}
+
+static u32 unf_ini_dif_error_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status)
+{
+ u8 *sense_data = NULL;
+ u16 sense_code = 0;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+	/*
+	 * According to the DIF scheme, the driver sets CHECK CONDITION (0x2)
+	 * when a DIF error occurs and returns a value based on the
+	 * upper-layer verification result.
+	 * Check sequence: CRC, LBA, App;
+	 * if a CRC error is found, the subsequent checks are not performed.
+	 */
+ xchg->scsi_cmnd_info.result = UNF_SCSI_STATUS(SCSI_CHECK_CONDITION);
+
+ sense_code = (u16)pkg->status_sub_code;
+ sense_data = (u8 *)kmalloc(SCSI_SENSE_DATA_LEN, GFP_ATOMIC);
+ if (!sense_data) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Alloc FCP sense buffer failed");
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(sense_data, 0, SCSI_SENSE_DATA_LEN);
+ sense_data[ARRAY_INDEX_0] = SENSE_DATA_RESPONSE_CODE; /* response code:0x70 */
+ sense_data[ARRAY_INDEX_2] = ILLEGAL_REQUEST; /* sense key:0x05; */
+ sense_data[ARRAY_INDEX_7] = ADDITINONAL_SENSE_LEN; /* additional sense length:0x7 */
+ sense_data[ARRAY_INDEX_12] = (u8)(sense_code >> UNF_SHIFT_8);
+ sense_data[ARRAY_INDEX_13] = (u8)sense_code;
+
+ xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu = sense_data;
+ xchg->fcp_sfs_union.fcp_rsp_entry.fcp_sense_len = SCSI_SENSE_DATA_LEN;
+
+ /* valid sense data length snscode[13] */
+ return RETURN_OK;
+}
+
+static u32 unf_io_underflow_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status)
+{
+ /* under flow: residlen > 0 */
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (xchg->fcp_cmnd.cdb[ARRAY_INDEX_0] != SCSIOPC_REPORT_LUN &&
+ xchg->fcp_cmnd.cdb[ARRAY_INDEX_0] != SCSIOPC_INQUIRY) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]IO under flow: SID(0x%x) exchange(0x%p) com status(0x%x) up_status(0x%x) DID(0x%x) hot_pool_tag(0x%x) response SID(0x%x)",
+ xchg->sid, xchg, pkg->status, up_status,
+ xchg->did, xchg->hotpooltag, pkg->residus_len);
+ }
+
+ xchg->resid_len = (int)pkg->residus_len;
+ (void)unf_io_success_handler(xchg, pkg, up_status);
+
+ return RETURN_OK;
+}
+
+void unf_complete_cmnd(struct unf_scsi_cmnd *scsi_cmnd, u32 result_size)
+{
+ /*
+ * Exception during process Que_CMND
+ * 1. L_Port == NULL;
+ * 2. L_Port == removing;
+ * 3. R_Port == NULL;
+ * 4. Xchg == NULL.
+ */
+ FC_CHECK_RETURN_VOID((UNF_GET_CMND_DONE_FUNC(scsi_cmnd)));
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Command(0x%p), Result(0x%x).", scsi_cmnd, result_size);
+
+ UNF_SET_CMND_RESULT(scsi_cmnd, result_size);
+
+ UNF_DONE_SCSI_CMND(scsi_cmnd);
+}
+
+static inline void unf_bind_xchg_scsi_cmd(struct unf_xchg *xchg,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ struct unf_scsi_cmd_info *scsi_cmnd_info = NULL;
+
+ scsi_cmnd_info = &xchg->scsi_cmnd_info;
+
+ /* UNF_SCSI_CMND_INFO <<-- UNF_SCSI_CMND */
+ scsi_cmnd_info->err_code_table = UNF_GET_ERR_CODE_TABLE(scsi_cmnd);
+ scsi_cmnd_info->err_code_table_cout = UNF_GET_ERR_CODE_TABLE_COUNT(scsi_cmnd);
+ scsi_cmnd_info->done = UNF_GET_CMND_DONE_FUNC(scsi_cmnd);
+ scsi_cmnd_info->scsi_cmnd = UNF_GET_HOST_CMND(scsi_cmnd);
+ scsi_cmnd_info->sense_buf = (char *)UNF_GET_SENSE_BUF_ADDR(scsi_cmnd);
+ scsi_cmnd_info->uplevel_done = UNF_GET_UP_LEVEL_CMND_DONE(scsi_cmnd);
+ scsi_cmnd_info->unf_get_sgl_entry_buf = UNF_GET_SGL_ENTRY_BUF_FUNC(scsi_cmnd);
+ scsi_cmnd_info->sgl = UNF_GET_CMND_SGL(scsi_cmnd);
+ scsi_cmnd_info->time_out = scsi_cmnd->time_out;
+ scsi_cmnd_info->entry_cnt = scsi_cmnd->entry_count;
+ scsi_cmnd_info->port_id = (u32)scsi_cmnd->port_id;
+ scsi_cmnd_info->scsi_id = UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd);
+}
+
+u32 unf_ini_scsi_completed(void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_fcp_cmnd *fcp_cmnd = NULL;
+ u32 control = 0;
+ u16 xchg_tag = 0x0ffff;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong xchg_flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)lport;
+ xchg_tag = (u16)pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX];
+
+ /* 1. Find Exchange Context */
+ unf_xchg = unf_cm_lookup_xchg_by_tag(lport, (u16)xchg_tag);
+ if (unlikely(!unf_xchg)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) can not find exchange by tag(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, xchg_tag);
+
+ /* NOTE: return directly */
+ return UNF_RETURN_ERROR;
+ }
+
+ /* 2. Consistency check */
+ UNF_CHECK_ALLOCTIME_VALID(unf_lport, xchg_tag, unf_xchg,
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ unf_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
+
+ /* 3. Increase ref_cnt for exchange protecting */
+ ret = unf_xchg_ref_inc(unf_xchg, INI_RESPONSE_DONE); /* hold */
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
+
+ fcp_cmnd = &unf_xchg->fcp_cmnd;
+ control = fcp_cmnd->control;
+ control = UNF_GET_TASK_MGMT_FLAGS(control);
+
+ /* 4. Cancel timer if necessary */
+ if (unf_xchg->scsi_cmnd_info.time_out != 0)
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(unf_xchg);
+
+ /* 5. process scsi TMF if necessary */
+ if (control != 0) {
+ unf_process_scsi_mgmt_result(pkg, unf_xchg);
+ unf_xchg_ref_dec(unf_xchg, INI_RESPONSE_DONE); /* cancel hold */
+
+ /* NOTE: return directly */
+ return RETURN_OK;
+ }
+
+ /* 6. Xchg Abort state check */
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, xchg_flag);
+ unf_xchg->oxid = UNF_GET_OXID(pkg);
+ unf_xchg->rxid = UNF_GET_RXID(pkg);
+ if (INI_IO_STATE_UPABORT & unf_xchg->io_state) {
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, xchg_flag);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]Port(0x%x) find exchange(%p) state(0x%x) has been aborted",
+ unf_lport->port_id, unf_xchg, unf_xchg->io_state);
+
+ /* NOTE: release exchange during SCSI ABORT(ABTS) */
+ unf_xchg_ref_dec(unf_xchg, INI_RESPONSE_DONE); /* cancel hold */
+
+ return ret;
+ }
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, xchg_flag);
+
+ /*
+ * 7. INI SCSI CMND Status process
+ * set exchange->result ---to--->>> scsi_result
+ */
+ ret = unf_ini_status_handle(unf_xchg, pkg);
+
+	/* 8. release exchange if necessary */
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+
+ /* 9. dec exch ref_cnt */
+ unf_xchg_ref_dec(unf_xchg, INI_RESPONSE_DONE); /* cancel hold: release resource now */
+
+ return ret;
+}
+
+u32 unf_hardware_start_io(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ if (unlikely(!lport->low_level_func.service_op.unf_cmnd_send)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) low level send scsi function is NULL",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return lport->low_level_func.service_op.unf_cmnd_send(lport->fc_port, pkg);
+}
+
+struct unf_rport *unf_find_rport_by_scsi_id(struct unf_lport *lport,
+ struct unf_ini_error_code *err_code_table,
+ u32 err_code_table_cout, u32 scsi_id, u32 *scsi_result)
+{
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ struct unf_wwpn_rport_info *wwpn_rport_info = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+
+ /* scsi_table -> session_table ->image_table */
+ scsi_image_table = &lport->rport_scsi_table;
+
+ /* 1. Scsi_Id validity check */
+ if (unlikely(scsi_id >= scsi_image_table->max_scsi_id)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Input scsi_id(0x%x) bigger than max_scsi_id(0x%x).",
+ scsi_id, scsi_image_table->max_scsi_id);
+
+ *scsi_result = unf_get_up_level_cmnd_errcode(err_code_table, err_code_table_cout,
+ UNF_IO_SOFT_ERR); /* did_soft_error */
+
+ return NULL;
+ }
+
+ /* 2. GetR_Port_Info/R_Port: use Scsi_Id find from L_Port's
+ * Rport_Scsi_Table (image table)
+ */
+ spin_lock_irqsave(&scsi_image_table->scsi_image_table_lock, flags);
+ wwpn_rport_info = &scsi_image_table->wwn_rport_info_table[scsi_id];
+ unf_rport = wwpn_rport_info->rport;
+ spin_unlock_irqrestore(&scsi_image_table->scsi_image_table_lock, flags);
+
+ if (unlikely(!unf_rport)) {
+ *scsi_result = unf_get_up_level_cmnd_errcode(err_code_table,
+ err_code_table_cout,
+ UNF_IO_PORT_LOGOUT);
+
+ return NULL;
+ }
+
+ return unf_rport;
+}
+
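+/*
+ * Builds the FCP_CMND from the SCSI command: copies the CDB, rejects commands
+ * whose DMA direction contradicts the CDB opcode (read vs. write), converts
+ * the CDB to CPU byte order and records the transfer length.
+ */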
+static u32 unf_build_xchg_fcpcmnd(struct unf_fcp_cmnd *fcp_cmnd,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ memcpy(fcp_cmnd->cdb, &UNF_GET_FCP_CMND(scsi_cmnd), scsi_cmnd->cmnd_len);
+
+ if ((fcp_cmnd->control == UNF_FCP_WR_DATA &&
+ (IS_READ_COMMAND(fcp_cmnd->cdb[ARRAY_INDEX_0]))) ||
+ (fcp_cmnd->control == UNF_FCP_RD_DATA &&
+ (IS_WRITE_COMMAND(fcp_cmnd->cdb[ARRAY_INDEX_0])))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MINOR,
+ "Scsi command direction inconsistent, CDB[ARRAY_INDEX_0](0x%x), direction(0x%x).",
+ fcp_cmnd->cdb[ARRAY_INDEX_0], fcp_cmnd->control);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ memcpy(fcp_cmnd->lun, scsi_cmnd->lun_id, sizeof(fcp_cmnd->lun));
+
+ unf_big_end_to_cpu((void *)fcp_cmnd->cdb, sizeof(fcp_cmnd->cdb));
+ fcp_cmnd->data_length = UNF_GET_DATA_LEN(scsi_cmnd);
+
+ return RETURN_OK;
+}
+
+static void unf_adjust_xchg_len(struct unf_xchg *xchg, u32 scsi_cmnd)
+{
+ switch (scsi_cmnd) {
+ case SCSIOPC_REQUEST_SENSE: /* requires different buffer */
+ xchg->data_len = UNF_SCSI_SENSE_BUFFERSIZE;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MINOR, "Request Sense new.");
+ break;
+
+ case SCSIOPC_TEST_UNIT_READY:
+ case SCSIOPC_RESERVE:
+ case SCSIOPC_RELEASE:
+ case SCSIOPC_START_STOP_UNIT:
+ xchg->data_len = 0;
+ break;
+
+ default:
+ break;
+ }
+}
+
+static void unf_copy_dif_control(struct unf_dif_control_info *dif_control,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ dif_control->fcp_dl = scsi_cmnd->dif_control.fcp_dl;
+ dif_control->protect_opcode = scsi_cmnd->dif_control.protect_opcode;
+ dif_control->start_lba = scsi_cmnd->dif_control.start_lba;
+ dif_control->app_tag = scsi_cmnd->dif_control.app_tag;
+
+ dif_control->flags = scsi_cmnd->dif_control.flags;
+ dif_control->dif_sge_count = scsi_cmnd->dif_control.dif_sge_count;
+ dif_control->dif_sgl = scsi_cmnd->dif_control.dif_sgl;
+}
+
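+/*
+ * Adjusts fcp_dl for DIF-protected I/O: one UNF_DIF_AREA_SIZE chunk is added
+ * per sector-sized block whenever the transfer carries protection data in
+ * addition to user data. For example, assuming UNF_DIF_AREA_SIZE is the usual
+ * 8-byte DIF tuple, a 4096-byte INSERT write with 512-byte sectors gives
+ * fcp_dl = 4096 + 8 * 8 = 4160.
+ */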
+static void unf_adjust_dif_pci_transfer_len(struct unf_xchg *xchg, u32 direction)
+{
+ struct unf_dif_control_info *dif_control = NULL;
+ u32 sector_size = 0;
+
+ dif_control = &xchg->dif_control;
+
+ if (dif_control->protect_opcode == UNF_DIF_ACTION_NONE)
+ return;
+ if ((dif_control->flags & UNF_DIF_SECTSIZE_4KB) == 0)
+ sector_size = SECTOR_SIZE_512;
+ else
+ sector_size = SECTOR_SIZE_4096;
+ switch (dif_control->protect_opcode & UNF_DIF_ACTION_MASK) {
+ case UNF_DIF_ACTION_INSERT:
+ if (direction == DMA_TO_DEVICE) {
+			/* Write I/O, INSERT: data with DIF is transmitted
+			 * over the link.
+			 */
+ dif_control->fcp_dl = xchg->data_len +
+ UNF_CAL_BLOCK_CNT(xchg->data_len, sector_size) * UNF_DIF_AREA_SIZE;
+ } else {
+			/* Read I/O, INSERT: DIF is carried internally and
+			 * the link does not carry DIF.
+			 */
+ dif_control->fcp_dl = xchg->data_len;
+ }
+ break;
+
+ case UNF_DIF_ACTION_VERIFY_AND_DELETE:
+ if (direction == DMA_TO_DEVICE) {
+			/* Write I/O, DELETE: DIF is carried internally and
+			 * the link does not carry DIF.
+			 */
+ dif_control->fcp_dl = xchg->data_len;
+ } else {
+			/* Read I/O, DELETE: data with DIF is carried on the
+			 * link and no DIF is carried internally.
+			 */
+ dif_control->fcp_dl = xchg->data_len +
+ UNF_CAL_BLOCK_CNT(xchg->data_len, sector_size) * UNF_DIF_AREA_SIZE;
+ }
+ break;
+
+ case UNF_DIF_ACTION_VERIFY_AND_FORWARD:
+ dif_control->fcp_dl = xchg->data_len +
+ UNF_CAL_BLOCK_CNT(xchg->data_len, sector_size) * UNF_DIF_AREA_SIZE;
+ break;
+
+ default:
+ dif_control->fcp_dl = xchg->data_len;
+ break;
+ }
+
+ xchg->fcp_cmnd.data_length = dif_control->fcp_dl;
+}
+
+static void unf_get_dma_direction(struct unf_fcp_cmnd *fcp_cmnd,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ if (UNF_GET_DATA_DIRECTION(scsi_cmnd) == DMA_TO_DEVICE) {
+ fcp_cmnd->control = UNF_FCP_WR_DATA;
+ } else if (UNF_GET_DATA_DIRECTION(scsi_cmnd) == DMA_FROM_DEVICE) {
+ fcp_cmnd->control = UNF_FCP_RD_DATA;
+ } else {
+ /* DMA Direction None */
+ fcp_cmnd->control = 0;
+ }
+}
+
+static int unf_save_scsi_cmnd_to_xchg(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_xchg *xchg,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ struct unf_xchg *unf_xchg = xchg;
+ u32 result_size = 0;
+
+ scsi_cmnd->driver_scribble = (void *)unf_xchg->start_jif;
+ unf_xchg->rport = unf_rport;
+ unf_xchg->rport_bind_jifs = unf_rport->rport_alloc_jifs;
+
+ /* Build Xchg SCSI_CMND info */
+ unf_bind_xchg_scsi_cmd(unf_xchg, scsi_cmnd);
+
+ unf_xchg->data_len = UNF_GET_DATA_LEN(scsi_cmnd);
+ unf_xchg->data_direction = UNF_GET_DATA_DIRECTION(scsi_cmnd);
+ unf_xchg->sid = unf_lport->nport_id;
+ unf_xchg->did = unf_rport->nport_id;
+ unf_xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = unf_rport->rport_index;
+ unf_xchg->world_id = scsi_cmnd->world_id;
+ unf_xchg->cmnd_sn = scsi_cmnd->cmnd_sn;
+ unf_xchg->pinitiator = scsi_cmnd->pinitiator;
+ unf_xchg->scsi_id = scsi_cmnd->scsi_id;
+ if (scsi_cmnd->qos_level == UNF_QOS_LEVEL_DEFAULT)
+ unf_xchg->qos_level = unf_rport->qos_level;
+ else
+ unf_xchg->qos_level = scsi_cmnd->qos_level;
+
+ unf_get_dma_direction(&unf_xchg->fcp_cmnd, scsi_cmnd);
+ result_size = unf_build_xchg_fcpcmnd(&unf_xchg->fcp_cmnd, scsi_cmnd);
+ if (unlikely(result_size != RETURN_OK))
+ return UNF_RETURN_ERROR;
+
+ unf_adjust_xchg_len(unf_xchg, UNF_GET_FCP_CMND(scsi_cmnd));
+
+ /* Dif (control) info */
+ unf_copy_dif_control(&unf_xchg->dif_control, scsi_cmnd);
+ memcpy(&unf_xchg->dif_info, &scsi_cmnd->dif_info, sizeof(struct dif_info));
+ unf_adjust_dif_pci_transfer_len(unf_xchg, UNF_GET_DATA_DIRECTION(scsi_cmnd));
+
+ /* single sgl info */
+ if (unf_xchg->data_direction != DMA_NONE && UNF_GET_CMND_SGL(scsi_cmnd)) {
+ unf_xchg->req_sgl_info.sgl = UNF_GET_CMND_SGL(scsi_cmnd);
+ unf_xchg->req_sgl_info.sgl_start = unf_xchg->req_sgl_info.sgl;
+ /* Save the sgl header for easy
+ * location and printing.
+ */
+ unf_xchg->req_sgl_info.req_index = 0;
+ unf_xchg->req_sgl_info.entry_index = 0;
+ }
+
+ if (scsi_cmnd->dif_control.dif_sgl) {
+ unf_xchg->dif_sgl_info.sgl = UNF_INI_GET_DIF_SGL(scsi_cmnd);
+ unf_xchg->dif_sgl_info.entry_index = 0;
+ unf_xchg->dif_sgl_info.req_index = 0;
+ unf_xchg->dif_sgl_info.sgl_start = unf_xchg->dif_sgl_info.sgl;
+ }
+
+ return RETURN_OK;
+}
+
+static int unf_send_fcpcmnd(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+#define UNF_MAX_PENDING_IO_CNT 3
+ struct unf_scsi_cmd_info *scsi_cmnd_info = NULL;
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ struct unf_xchg *unf_xchg = xchg;
+ struct unf_frame_pkg pkg = {0};
+ u32 result_size = 0;
+ ulong flags = 0;
+
+ memcpy(&pkg.dif_control, &unf_xchg->dif_control, sizeof(struct unf_dif_control_info));
+ pkg.dif_control.fcp_dl = unf_xchg->dif_control.fcp_dl;
+ pkg.transfer_len = unf_xchg->data_len; /* Pcie data transfer length */
+ pkg.xchg_contex = unf_xchg;
+ pkg.qos_level = unf_xchg->qos_level;
+ scsi_cmnd_info = &xchg->scsi_cmnd_info;
+ pkg.entry_count = unf_xchg->scsi_cmnd_info.entry_cnt;
+ if (unf_xchg->data_direction == DMA_NONE || !scsi_cmnd_info->sgl)
+ pkg.entry_count = 0;
+
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ unf_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+ pkg.private_data[PKG_PRIVATE_XCHG_VP_INDEX] = unf_lport->vp_index;
+ pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = unf_rport->rport_index;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] =
+ unf_xchg->hotpooltag | UNF_HOTTAG_FLAG;
+
+ unf_select_sq(unf_xchg, &pkg);
+ pkg.fcp_cmnd = &unf_xchg->fcp_cmnd;
+ pkg.frame_head.csctl_sid = unf_lport->nport_id;
+ pkg.frame_head.rctl_did = unf_rport->nport_id;
+ pkg.upper_cmd = unf_xchg->scsi_cmnd_info.scsi_cmnd;
+
+ /* exch->fcp_rsp_id --->>> pkg->buffer_ptr */
+ pkg.frame_head.oxid_rxid = ((u32)unf_xchg->oxid << (u32)UNF_SHIFT_16 | unf_xchg->rxid);
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "[info]LPort (0x%p), Nport ID(0x%x) RPort ID(0x%x) direction(0x%x) magic number(0x%x) IO to entry count(0x%x) hottag(0x%x)",
+ unf_lport, unf_lport->nport_id, unf_rport->nport_id,
+ xchg->data_direction, pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ pkg.entry_count, unf_xchg->hotpooltag);
+
+ atomic_inc(&unf_rport->pending_io_cnt);
+ if (unf_rport->tape_support_needed &&
+ (atomic_read(&unf_rport->pending_io_cnt) <= UNF_MAX_PENDING_IO_CNT)) {
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_xchg->io_state |= INI_IO_STATE_REC_TIMEOUT_WAIT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ scsi_cmnd_info->abort_time_out = scsi_cmnd_info->time_out;
+ scsi_cmnd_info->time_out = UNF_REC_TOV;
+ }
+ /* 3. add INI I/O timer if necessary */
+ if (scsi_cmnd_info->time_out != 0) {
+ /* I/O inner timer, do not used at this time */
+ unf_lport->xchg_mgr_temp.unf_xchg_add_timer(unf_xchg,
+ scsi_cmnd_info->time_out, UNF_TIMER_TYPE_REQ_IO);
+ }
+
+ /* 4. R_Port state check */
+ if (unlikely(unf_rport->lport_ini_state != UNF_PORT_STATE_LINKUP ||
+ unf_rport->rp_state > UNF_RPORT_ST_READY)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[info]Port(0x%x) RPort(0x%p) NPortId(0x%x) inistate(0x%x): RPort state(0x%x) pUpperCmd(0x%p) is not ready",
+ unf_lport->port_id, unf_rport, unf_rport->nport_id,
+ unf_rport->lport_ini_state, unf_rport->rp_state, pkg.upper_cmd);
+
+ result_size = unf_get_up_level_cmnd_errcode(scsi_cmnd_info->err_code_table,
+ scsi_cmnd_info->err_code_table_cout,
+ UNF_IO_INCOMPLETE);
+ scsi_cmnd_info->result = result_size;
+
+ if (scsi_cmnd_info->time_out != 0)
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(unf_xchg);
+
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+
+ /* DID_IMM_RETRY */
+ return RETURN_OK;
+ } else if (unf_rport->rp_state < UNF_RPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[info]Port(0x%x) RPort(0x%p) NPortId(0x%x) inistate(0x%x): RPort state(0x%x) pUpperCmd(0x%p) is not ready",
+ unf_lport->port_id, unf_rport, unf_rport->nport_id,
+ unf_rport->lport_ini_state, unf_rport->rp_state, pkg.upper_cmd);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_xchg->io_state |= INI_IO_STATE_UPSEND_ERR; /* need retry */
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ if (unlikely(scsi_cmnd_info->time_out != 0))
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)unf_xchg);
+
+ /* Host busy & need scsi retry */
+ return UNF_RETURN_ERROR;
+ }
+
+ /* 5. send scsi_cmnd to FC_LL Driver */
+ if (unf_hardware_start_io(unf_lport, &pkg) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port (0x%x) pUpperCmd(0x%p) Hardware Send IO failed.",
+ unf_lport->port_id, pkg.upper_cmd);
+
+ unf_release_esgls(unf_xchg);
+
+ result_size = unf_get_up_level_cmnd_errcode(scsi_cmnd_info->err_code_table,
+ scsi_cmnd_info->err_code_table_cout,
+ UNF_IO_INCOMPLETE);
+ scsi_cmnd_info->result = result_size;
+
+ if (scsi_cmnd_info->time_out != 0)
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(unf_xchg);
+
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+
+ /* SCSI_DONE */
+ return RETURN_OK;
+ }
+
+ return RETURN_OK;
+}
+
+int unf_prefer_to_send_scsi_cmnd(struct unf_xchg *xchg)
+{
+ /*
+ * About INI_IO_STATE_DRABORT:
+ * 1. Set ABORT tag: Clean L_Port/V_Port Link Down I/O
+ * with: INI_busy_list, delay_list, delay_transfer_list, wait_list
+ * *
+ * 2. Set ABORT tag: for target session:
+ * with: INI_busy_list, delay_list, delay_transfer_list, wait_list
+ * a. R_Port remove
+ * b. Send PLOGI_ACC callback
+ * c. RCVD PLOGI
+ * d. RCVD LOGO
+ * *
+ * 3. if set ABORT: prevent send scsi_cmnd to target
+ */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ int ret = RETURN_OK;
+ ulong flags = 0;
+
+ unf_lport = xchg->lport;
+
+ unf_rport = xchg->rport;
+ if (unlikely(!unf_lport || !unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%p) or RPort(0x%p) is NULL", unf_lport, unf_rport);
+
+ /* if happened (never happen): need retry */
+ return UNF_RETURN_ERROR;
+ }
+
+ /* 1. inc ref_cnt to protect exchange */
+ ret = (int)unf_xchg_ref_inc(xchg, INI_SEND_CMND);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) exhg(%p) exception ref(%d) ", unf_lport->port_id,
+ xchg, atomic_read(&xchg->ref_cnt));
+ /* exchange exception, need retry */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->io_state |= INI_IO_STATE_UPSEND_ERR;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ /* INI_IO_STATE_UPSEND_ERR: Host busy --->>> need retry */
+ return UNF_RETURN_ERROR;
+ }
+
+ /* 2. Xchg Abort state check: Free EXCH if necessary */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (unlikely((xchg->io_state & INI_IO_STATE_UPABORT) ||
+ (xchg->io_state & INI_IO_STATE_DRABORT))) {
+ /* Prevent to send: UP_ABORT/DRV_ABORT */
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ xchg->scsi_cmnd_info.result = UNF_SCSI_HOST(DID_IMM_RETRY);
+ ret = RETURN_OK;
+
+ unf_xchg_ref_dec(xchg, INI_SEND_CMND);
+ unf_cm_free_xchg(unf_lport, xchg);
+
+ /*
+ * Release exchange & return directly:
+ * 1. FC LLDD rcvd ABTS before scsi_cmnd: do nothing
+ * 2. INI_IO_STATE_UPABORT/INI_IO_STATE_DRABORT: discard this
+ * cmnd directly
+ */
+ return ret;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ /* 3. Send FCP_CMND to FC_LL Driver */
+ ret = unf_send_fcpcmnd(unf_lport, unf_rport, xchg);
+ if (unlikely(ret != RETURN_OK)) {
+ /* exchange exception, need retry */
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send exhg(%p) hottag(0x%x) to Rport(%p) NPortID(0x%x) state(0x%x) scsi_id(0x%x) failed",
+ unf_lport->port_id, xchg, xchg->hotpooltag, unf_rport,
+ unf_rport->nport_id, unf_rport->rp_state, unf_rport->scsi_id);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+
+ xchg->io_state |= INI_IO_STATE_UPSEND_ERR;
+ /* need retry */
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ /* INI_IO_STATE_UPSEND_ERR: Host busy --->>> need retry */
+ unf_cm_free_xchg(unf_lport, xchg);
+ }
+
+ /* 4. dec ref_cnt */
+ unf_xchg_ref_dec(xchg, INI_SEND_CMND);
+
+ return ret;
+}
+
+struct unf_lport *unf_find_lport_by_scsi_cmd(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ /* cmd -->> L_Port */
+ unf_lport = (struct unf_lport *)UNF_GET_HOST_PORT_BY_CMND(scsi_cmnd);
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Find Port by scsi_cmnd(0x%p) failed", scsi_cmnd);
+
+ /* cmnd -->> scsi_host_id -->> L_Port */
+ unf_lport = unf_find_lport_by_scsi_hostid(UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
+ }
+
+ return unf_lport;
+}
+
+int unf_cm_queue_command(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /* SCSI Command --->>> FC FCP Command */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ u32 cmnd_result = 0;
+ int ret = RETURN_OK;
+ ulong flags = 0;
+ u32 scsi_id = 0;
+ u32 exhg_mgr_type = UNF_XCHG_MGR_TYPE_RANDOM;
+
+ /* 1. Get L_Port */
+ unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
+
+	/*
+	 * Covers the hot-plug/removal and card-removal scenarios: the L_Port
+	 * is normally found via SCSI_HOST_ID, but slave_alloc is not invoked
+	 * until LUNs are scanned, so the L_Port may not be bound yet and must
+	 * then be taken from the L_Port linked list.
+	 * *
+	 * After FC link-up the first SCSI command is INQUIRY, and SCSI
+	 * delivers slave_alloc before INQUIRY.
+	 */
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Find Port by scsi cmd(0x%p) failed", scsi_cmnd);
+
+ /* find from ini_error_code_table1 */
+ cmnd_result = unf_get_up_level_cmnd_errcode(scsi_cmnd->err_code_table,
+ scsi_cmnd->err_code_table_cout,
+ UNF_IO_NO_LPORT);
+
+ /* DID_NOT_CONNECT & SCSI_DONE & RETURN_OK(0) & I/O error */
+ unf_complete_cmnd(scsi_cmnd, cmnd_result);
+ return RETURN_OK;
+ }
+
+ /* Get Local SCSI_Image_table & SCSI_ID */
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ scsi_id = scsi_cmnd->scsi_id;
+
+ /* 2. L_Port State check */
+ if (unlikely(unf_lport->port_removing || unf_lport->pcie_link_down)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is removing(%d) or pcielinkdown(%d) and return with scsi_id(0x%x)",
+ unf_lport->port_id, unf_lport->port_removing,
+ unf_lport->pcie_link_down, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ cmnd_result = unf_get_up_level_cmnd_errcode(scsi_cmnd->err_code_table,
+ scsi_cmnd->err_code_table_cout,
+ UNF_IO_NO_LPORT);
+
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, (cmnd_result >> UNF_SHIFT_16));
+
+ /* DID_NOT_CONNECT & SCSI_DONE & RETURN_OK(0) & I/O error */
+ unf_complete_cmnd(scsi_cmnd, cmnd_result);
+ return RETURN_OK;
+ }
+
+ /* 3. Get R_Port */
+ unf_rport = unf_find_rport_by_scsi_id(unf_lport, scsi_cmnd->err_code_table,
+ scsi_cmnd->err_code_table_cout,
+ UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd), &cmnd_result);
+ if (unlikely(!unf_rport)) {
+ /* never happen: do not care */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) find RPort by scsi_id(0x%x) failed",
+ unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, (cmnd_result >> UNF_SHIFT_16));
+
+ /* DID_NOT_CONNECT/DID_SOFT_ERROR & SCSI_DONE & RETURN_OK(0) &
+ * I/O error
+ */
+ unf_complete_cmnd(scsi_cmnd, cmnd_result);
+ return RETURN_OK;
+ }
+
+	/* 4. No free exchange: return host busy so the upper level retries */
+ unf_xchg = (struct unf_xchg *)unf_cm_get_free_xchg(unf_lport,
+ exhg_mgr_type << UNF_SHIFT_16 | UNF_XCHG_TYPE_INI);
+ if (unlikely(!unf_xchg)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[err]Port(0x%x) get free exchange for INI IO(0x%x) failed",
+ unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ /* NOTE: need scsi retry */
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_xchg->scsi_cmnd_info.result = UNF_SCSI_HOST(DID_ERROR);
+
+ /* 5. Save the SCSI CMND information in advance. */
+ ret = unf_save_scsi_cmnd_to_xchg(unf_lport, unf_rport, unf_xchg, scsi_cmnd);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[err]Port(0x%x) save scsi_cmnd info(0x%x) to exchange failed",
+ unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, flags);
+ unf_xchg->io_state |= INI_IO_STATE_UPSEND_ERR;
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+
+ /* INI_IO_STATE_UPSEND_ERR: Don't Do SCSI_DONE, need retry I/O */
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+
+ /* NOTE: need scsi retry */
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Get exchange(0x%p) hottag(0x%x) for Pcmd:%p,Cmdsn:0x%lx,WorldId:%d",
+ unf_xchg, unf_xchg->hotpooltag, scsi_cmnd->upper_cmnd,
+ (ulong)scsi_cmnd->cmnd_sn, scsi_cmnd->world_id);
+ /* 6. Send SCSI CMND */
+ ret = unf_prefer_to_send_scsi_cmnd(unf_xchg);
+
+ return ret;
+}
diff --git a/drivers/scsi/spfc/common/unf_io.h b/drivers/scsi/spfc/common/unf_io.h
new file mode 100644
index 000000000000..d8e50eb8035e
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_io.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_IO_H
+#define UNF_IO_H
+
+#include "unf_type.h"
+#include "unf_scsi_common.h"
+#include "unf_exchg.h"
+#include "unf_rport.h"
+
+#define UNF_MAX_TARGET_NUMBER 2048
+#define UNF_DEFAULT_MAX_LUN 0xFFFF
+#define UNF_MAX_DMA_SEGS 0x400
+#define UNF_MAX_SCSI_CMND_LEN 16
+#define UNF_MAX_BUS_CHANNEL 0
+#define UNF_DMA_BOUNDARY 0xffffffffffffffff
+#define UNF_MAX_CMND_PER_LUN 64 /* LUN max command */
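+/* Match helper: lun_id must equal raw_lun_id (or be INVALID_VALUE64 as a
+ * wildcard) and scsi_id must equal the exchange's scsi_id.
+ */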
+#define UNF_CHECK_LUN_ID_MATCH(lun_id, raw_lun_id, scsi_id, xchg) \
+ (((lun_id) == (raw_lun_id) || (lun_id) == INVALID_VALUE64) && \
+ ((scsi_id) == (xchg)->scsi_id))
+
+#define NO_SENSE 0x00
+#define RECOVERED_ERROR 0x01
+#define NOT_READY 0x02
+#define MEDIUM_ERROR 0x03
+#define HARDWARE_ERROR 0x04
+#define ILLEGAL_REQUEST 0x05
+#define UNIT_ATTENTION 0x06
+#define DATA_PROTECT 0x07
+#define BLANK_CHECK 0x08
+#define COPY_ABORTED 0x0a
+#define ABORTED_COMMAND 0x0b
+#define VOLUME_OVERFLOW 0x0d
+#define MISCOMPARE 0x0e
+
+#define SENSE_DATA_RESPONSE_CODE 0x70
+#define ADDITINONAL_SENSE_LEN 0x7
+
+extern u32 sector_size_flag;
+
+#define UNF_GET_SCSI_HOST_ID_BY_CMND(cmd) ((cmd)->scsi_host_id)
+#define UNF_GET_SCSI_ID_BY_CMND(cmd) ((cmd)->scsi_id)
+#define UNF_GET_HOST_PORT_BY_CMND(cmd) ((cmd)->drv_private)
+#define UNF_GET_FCP_CMND(cmd) ((cmd)->pcmnd[ARRAY_INDEX_0])
+#define UNF_GET_DATA_LEN(cmd) ((cmd)->transfer_len)
+#define UNF_GET_DATA_DIRECTION(cmd) ((cmd)->data_direction)
+
+#define UNF_GET_HOST_CMND(cmd) ((cmd)->upper_cmnd)
+#define UNF_GET_CMND_DONE_FUNC(cmd) ((cmd)->done)
+#define UNF_GET_UP_LEVEL_CMND_DONE(cmd) ((cmd)->uplevel_done)
+#define UNF_GET_SGL_ENTRY_BUF_FUNC(cmd) ((cmd)->unf_ini_get_sgl_entry)
+#define UNF_GET_SENSE_BUF_ADDR(cmd) ((cmd)->sense_buf)
+#define UNF_GET_ERR_CODE_TABLE(cmd) ((cmd)->err_code_table)
+#define UNF_GET_ERR_CODE_TABLE_COUNT(cmd) ((cmd)->err_code_table_cout)
+
+#define UNF_SET_HOST_CMND(cmd, host_cmd) ((cmd)->upper_cmnd = (host_cmd))
+#define UNF_SER_CMND_DONE_FUNC(cmd, pfn) ((cmd)->done = (pfn))
+#define UNF_SET_UP_LEVEL_CMND_DONE_FUNC(cmd, pfn) ((cmd)->uplevel_done = (pfn))
+
+#define UNF_SET_RESID(cmd, uiresid) ((cmd)->resid = (uiresid))
+#define UNF_SET_CMND_RESULT(cmd, uiresult) ((cmd)->result = ((int)(uiresult)))
+
+#define UNF_DONE_SCSI_CMND(cmd) ((cmd)->done(cmd))
+
+#define UNF_GET_CMND_SGL(cmd) ((cmd)->sgl)
+#define UNF_INI_GET_DIF_SGL(cmd) ((cmd)->dif_control.dif_sgl)
+
+u32 unf_ini_scsi_completed(void *lport, struct unf_frame_pkg *pkg);
+u32 unf_ini_get_sgl_entry(void *pkg, char **buf, u32 *buf_len);
+u32 unf_ini_get_dif_sgl_entry(void *pkg, char **buf, u32 *buf_len);
+void unf_complete_cmnd(struct unf_scsi_cmnd *scsi_cmnd, u32 result_size);
+void unf_done_ini_xchg(struct unf_xchg *xchg);
+u32 unf_tmf_timeout_recovery_special(void *rport, void *xchg);
+u32 unf_tmf_timeout_recovery_default(void *rport, void *xchg);
+void unf_abts_timeout_recovery_default(void *rport, void *xchg);
+int unf_cm_queue_command(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_cm_eh_abort_handler(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_cm_eh_device_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_cm_target_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_cm_bus_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_cm_virtual_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
+struct unf_rport *unf_find_rport_by_scsi_id(struct unf_lport *lport,
+ struct unf_ini_error_code *errcode_table,
+ u32 errcode_table_count,
+ u32 scsi_id, u32 *scsi_result);
+u32 UNF_IOExchgDelayProcess(struct unf_lport *lport, struct unf_xchg *xchg);
+struct unf_lport *unf_find_lport_by_scsi_cmd(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_send_scsi_mgmt_cmnd(struct unf_xchg *xchg, struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_scsi_cmnd *scsi_cmnd,
+ enum unf_task_mgmt_cmd task_mgnt_cmd_type);
+void unf_tmf_abnormal_recovery(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_io_abnormal.c b/drivers/scsi/spfc/common/unf_io_abnormal.c
new file mode 100644
index 000000000000..fece7aa5f441
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_io_abnormal.c
@@ -0,0 +1,986 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_io_abnormal.h"
+#include "unf_log.h"
+#include "unf_scsi_common.h"
+#include "unf_rport.h"
+#include "unf_io.h"
+#include "unf_portman.h"
+#include "unf_service.h"
+
+static int unf_send_abts_success(struct unf_lport *lport, struct unf_xchg *xchg,
+ struct unf_scsi_cmnd *scsi_cmnd,
+ u32 time_out_value)
+{
+ bool need_wait_marker = true;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ u32 scsi_id = 0;
+ u32 return_value = 0;
+ ulong xchg_flag = 0;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ need_wait_marker = (xchg->abts_state & MARKER_STS_RECEIVED) ? false : true;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
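+	/* If the ABTS marker has not been received yet, wait on the task
+	 * semaphore until it arrives or the timeout expires.
+	 */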
+ if (need_wait_marker) {
+ if (down_timeout(&xchg->task_sema, (s64)msecs_to_jiffies(time_out_value))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+				     "[warn]Port(0x%x) recv abts marker timeout, Exch(0x%p) OX_ID(0x%x 0x%x) RX_ID(0x%x)",
+ lport->port_id, xchg, xchg->oxid,
+ xchg->hotpooltag, xchg->rxid);
+
+ /* Cancel abts rsp timer when sema timeout */
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+			/* Cancel the INI_IO_STATE_UPABORT flag and let TMF
+			 * handle the I/O
+			 */
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+ xchg->io_state |= INI_IO_STATE_TMF_ABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ return UNF_SCSI_ABORT_FAIL;
+ }
+ } else {
+ xchg->ucode_abts_state = UNF_IO_SUCCESS;
+ }
+
+ scsi_image_table = &lport->rport_scsi_table;
+ scsi_id = scsi_cmnd->scsi_id;
+
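+	/* ABTS completed in ucode or the port is being removed: complete the
+	 * command with DID_RESET.
+	 */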
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ if (xchg->ucode_abts_state == UNF_IO_SUCCESS ||
+ xchg->scsi_cmnd_info.result == UNF_IO_ABORT_PORT_REMOVING) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) Send ABTS succeed and recv marker Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) marker status(0x%x)",
+ lport->port_id, xchg, xchg->oxid, xchg->rxid, xchg->ucode_abts_state);
+ return_value = DID_RESET;
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, return_value);
+ unf_complete_cmnd(scsi_cmnd, DID_RESET << UNF_SHIFT_16);
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+ xchg->io_state |= INI_IO_STATE_TMF_ABORT;
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ /* Cancel abts rsp timer when sema timeout */
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ABTS failed. Exch(0x%p) oxid(0x%x) hot_tag(0x%x) ret(0x%x) xchg->io_state (0x%x)",
+ lport->port_id, xchg, xchg->oxid, xchg->hotpooltag,
+ xchg->scsi_cmnd_info.result, xchg->io_state);
+
+ /* return fail and then enter TMF */
+ return UNF_SCSI_ABORT_FAIL;
+}
+
+static int unf_ini_abort_cmnd(struct unf_lport *lport, struct unf_xchg *xchg,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /*
+ * About INI_IO_STATE_UPABORT:
+ * *
+ * 1. Check: L_Port destroy
+ * 2. Check: I/O XCHG timeout
+ * 3. Set ABORT: send ABTS
+ * 4. Set ABORT: LUN reset
+ * 5. Set ABORT: Target reset
+ * 6. Check: Prevent to send I/O to target
+ * (unf_prefer_to_send_scsi_cmnd)
+ * 7. Check: Done INI XCHG --->>> do not call scsi_done, return directly
+ * 8. Check: INI SCSI Complete --->>> do not call scsi_done, return
+ * directly
+ */
+#define UNF_RPORT_NOTREADY_WAIT_SEM_TIMEOUT (2000)
+
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong rport_flag = 0;
+ ulong xchg_flag = 0;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ u32 scsi_id = 0;
+ u32 time_out_value = (u32)UNF_WAIT_SEM_TIMEOUT;
+ u32 return_value = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_SCSI_ABORT_FAIL);
+ unf_lport = lport;
+
+ /* 1. Xchg State Set: INI_IO_STATE_UPABORT */
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ xchg->io_state |= INI_IO_STATE_UPABORT;
+ unf_rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ /* 2. R_Port check */
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ABTS but no RPort, OX_ID(0x%x) RX_ID(0x%x)",
+ unf_lport->port_id, xchg->oxid, xchg->rxid);
+
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
+ if (unlikely(unf_rport->rp_state != UNF_RPORT_ST_READY)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find RPort's state(0x%x) is not ready but send ABTS also, exchange(0x%p) tag(0x%x)",
+ unf_lport->port_id, unf_rport->rp_state, xchg, xchg->hotpooltag);
+
+		/*
+		 * Important: send ABTS anyway & update the timer.
+		 * Purpose: only used to release chip (uCode) resources,
+		 * then continue.
+		 */
+ time_out_value = UNF_RPORT_NOTREADY_WAIT_SEM_TIMEOUT;
+ }
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
+
+ /* 3. L_Port State check */
+ if (unlikely(unf_lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is removing", unf_lport->port_id);
+
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+
+ return UNF_SCSI_ABORT_FAIL;
+ }
+
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ scsi_id = scsi_cmnd->scsi_id;
+
+ /* If pcie linkdown, complete this io and flush all io */
+ if (unlikely(unf_lport->pcie_link_down)) {
+ return_value = DID_RESET;
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, return_value);
+ unf_complete_cmnd(scsi_cmnd, DID_RESET << UNF_SHIFT_16);
+ unf_free_lport_all_xchg(lport);
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[abort]Port(0x%x) Exchg(0x%p) delay(%llu) SID(0x%x) DID(0x%x) wwpn(0x%llx) hottag(0x%x) scsi_id(0x%x) lun_id(0x%x) cmdsn(0x%llx) Ini:%p",
+ unf_lport->port_id, xchg,
+ (u64)jiffies_to_msecs(jiffies) - (u64)jiffies_to_msecs(xchg->alloc_jif),
+ xchg->sid, xchg->did, unf_rport->port_name, xchg->hotpooltag,
+ scsi_cmnd->scsi_id, (u32)scsi_cmnd->raw_lun_id, scsi_cmnd->cmnd_sn,
+ scsi_cmnd->pinitiator);
+
+ /* Init abts marker semaphore */
+ sema_init(&xchg->task_sema, 0);
+
+ if (xchg->scsi_cmnd_info.time_out != 0)
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(xchg);
+
+ lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, (ulong)UNF_WAIT_ABTS_RSP_TIMEOUT,
+ UNF_TIMER_TYPE_INI_ABTS);
+
+ /* 4. Send INI ABTS CMND */
+ if (unf_send_abts(unf_lport, xchg) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) Send ABTS failed. Exch(0x%p) hottag(0x%x)",
+ unf_lport->port_id, xchg, xchg->hotpooltag);
+
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+ xchg->io_state |= INI_IO_STATE_TMF_ABORT;
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ return UNF_SCSI_ABORT_FAIL;
+ }
+
+ return unf_send_abts_success(unf_lport, xchg, scsi_cmnd, time_out_value);
+}
+
+static void unf_flush_ini_resp_que(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (lport->low_level_func.service_op.unf_flush_ini_resp_que)
+ (void)lport->low_level_func.service_op.unf_flush_ini_resp_que(lport->fc_port);
+}
+
+int unf_cm_eh_abort_handler(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /*
+ * SCSI ABORT Command --->>> FC ABTS Command
+ * If return ABORT_FAIL, then enter TMF process
+ */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_lport *xchg_lport = NULL;
+ int ret = UNF_SCSI_ABORT_SUCCESS;
+ ulong flag = 0;
+
+ /* 1. Get L_Port: Point to Scsi_host */
+ unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can't find port by scsi host id(0x%x)",
+ UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
+ return UNF_SCSI_ABORT_FAIL;
+ }
+
+ /* 2. find target Xchg for INI Abort CMND */
+ unf_xchg = unf_cm_lookup_xchg_by_cmnd_sn(unf_lport, scsi_cmnd->cmnd_sn,
+ scsi_cmnd->world_id,
+ scsi_cmnd->pinitiator);
+ if (unlikely(!unf_xchg)) {
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
+ "[warn]Port(0x%x) can't find exchange by Cmdsn(0x%lx),Ini:%p",
+ unf_lport->port_id, (ulong)scsi_cmnd->cmnd_sn,
+ scsi_cmnd->pinitiator);
+
+ unf_flush_ini_resp_que(unf_lport);
+
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ /* 3. increase ref_cnt to protect exchange */
+ ret = (int)unf_xchg_ref_inc(unf_xchg, INI_EH_ABORT);
+ if (unlikely(ret != RETURN_OK)) {
+ unf_flush_ini_resp_que(unf_lport);
+
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ scsi_cmnd->upper_cmnd = unf_xchg->scsi_cmnd_info.scsi_cmnd;
+ unf_xchg->debug_hook = true;
+
+	/* 4. Exchange L_Port/R_Port get & check */
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, flag);
+ xchg_lport = unf_xchg->lport;
+ unf_rport = unf_xchg->rport;
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flag);
+
+ if (unlikely(!xchg_lport || !unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Exchange(0x%p)'s L_Port or R_Port is NULL, state(0x%x)",
+ unf_xchg, unf_xchg->io_state);
+
+ unf_xchg_ref_dec(unf_xchg, INI_EH_ABORT);
+
+ if (!xchg_lport)
+ /* for L_Port */
+ return UNF_SCSI_ABORT_FAIL;
+ /* for R_Port */
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ /* 5. Send INI Abort Cmnd */
+ ret = unf_ini_abort_cmnd(xchg_lport, unf_xchg, scsi_cmnd);
+
+ /* 6. decrease exchange ref_cnt */
+ unf_xchg_ref_dec(unf_xchg, INI_EH_ABORT);
+
+ return ret;
+}
+
+u32 unf_tmf_timeout_recovery_default(void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_rport *unf_rport = (struct unf_rport *)rport;
+
+ unf_lport = unf_xchg->lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_enter_logo(unf_lport, unf_rport);
+
+ return RETURN_OK;
+}
+
+void unf_abts_timeout_recovery_default(void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+ ulong flags = 0;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_rport *unf_rport = (struct unf_rport *)rport;
+
+ unf_lport = unf_xchg->lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, flags);
+ if (INI_IO_STATE_DONE & unf_xchg->io_state) {
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+
+ return;
+ }
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+
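+	/* Only do LOGO if the exchange is still bound to the current R_Port
+	 * allocation.
+	 */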
+ if (unf_xchg->rport_bind_jifs != unf_rport->rport_alloc_jifs)
+ return;
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_enter_logo(unf_lport, unf_rport);
+}
+
+u32 unf_tmf_timeout_recovery_special(void *rport, void *xchg)
+{
+ /* Do port reset or R_Port LOGO */
+ int ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_rport *unf_rport = (struct unf_rport *)rport;
+
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(unf_xchg->lport, UNF_RETURN_ERROR);
+
+ unf_lport = unf_xchg->lport->root_lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
+
+ /* 1. TMF response timeout & Marker STS timeout */
+ if (!(unf_xchg->tmf_state &
+ (MARKER_STS_RECEIVED | TMF_RESPONSE_RECEIVED))) {
+ /* TMF timeout & marker timeout */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive marker status timeout and do recovery",
+ unf_lport->port_id);
+
+ /* Do port reset */
+ ret = unf_cm_reset_port(unf_lport->port_id);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) do reset failed",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+ }
+
+ /* 2. default case: Do LOGO process */
+ unf_tmf_timeout_recovery_default(unf_rport, unf_xchg);
+
+ return RETURN_OK;
+}
+
+void unf_tmf_abnormal_recovery(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ /*
+ * for device(lun)/target(session) reset:
+ * Do port reset or R_Port LOGO
+ */
+ if (lport->unf_tmf_abnormal_recovery)
+ lport->unf_tmf_abnormal_recovery((void *)rport, (void *)xchg);
+}
+
+int unf_cm_eh_device_reset_handler(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /* SCSI Device/LUN Reset Command --->>> FC LUN/Device Reset Command */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 cmnd_result = 0;
+ int ret = SUCCESS;
+
+ FC_CHECK_RETURN_VALUE(scsi_cmnd, FAILED);
+ FC_CHECK_RETURN_VALUE(scsi_cmnd->lun_id, FAILED);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[event]Enter device/LUN reset handler");
+
+ /* 1. Get L_Port */
+ unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can't find port by scsi_host_id(0x%x)",
+ UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
+
+ return FAILED;
+ }
+
+ /* 2. L_Port State checking */
+ if (unlikely(unf_lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%p) is removing", unf_lport);
+
+ return FAILED;
+ }
+
+	/*
+	 * 3. Get R_Port: if no rport is found or the rport is not ready, return OK.
+	 * Lookup path: L_Port -->> rport_scsi_table (image table) -->>
+	 * rport_info_table
+	 */
+ unf_rport = unf_find_rport_by_scsi_id(unf_lport, scsi_cmnd->err_code_table,
+ scsi_cmnd->err_code_table_cout,
+ UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd), &cmnd_result);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) Can't find rport by scsi_id(0x%x)",
+ unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ return SUCCESS;
+ }
+
+ /*
+ * 4. Set the I/O of the corresponding LUN to abort.
+ * *
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ unf_cm_xchg_abort_by_lun(unf_lport, unf_rport, *((u64 *)scsi_cmnd->lun_id), NULL, false);
+
+ /* 5. R_Port state check */
+ if (unlikely(unf_rport->rp_state != UNF_RPORT_ST_READY)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) state(0x%x) SCSI Command(0x%p), rport is not ready",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_rport->rp_state, scsi_cmnd);
+
+ return SUCCESS;
+ }
+
+ /* 6. Get & inc ref_cnt free Xchg for Device reset */
+ unf_xchg = (struct unf_xchg *)unf_cm_get_free_xchg(unf_lport, UNF_XCHG_TYPE_INI);
+ if (unlikely(!unf_xchg)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%p) can't get free exchange", unf_lport);
+
+ return FAILED;
+ }
+
+ /* increase ref_cnt for protecting exchange */
+ ret = (int)unf_xchg_ref_inc(unf_xchg, INI_EH_DEVICE_RESET);
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), FAILED);
+
+ /* 7. Send Device/LUN Reset to Low level */
+ ret = unf_send_scsi_mgmt_cmnd(unf_xchg, unf_lport, unf_rport, scsi_cmnd,
+ UNF_FCP_TM_LOGICAL_UNIT_RESET);
+ if (unlikely(ret == FAILED)) {
+ /*
+ * Do port reset or R_Port LOGO:
+ * 1. FAILED: send failed
+ * 2. FAILED: semaphore timeout
+		 * 3. SUCCESS: rcvd rsp & semaphore has been woken up
+ */
+ unf_tmf_abnormal_recovery(unf_lport, unf_rport, unf_xchg);
+ }
+
+ /*
+ * 8. Release resource immediately if necessary
+	 * NOTE: here, semaphore timeout or rcvd rsp (semaphore has been woken
+	 * up)
+ */
+ if (likely(!unf_lport->port_removing || unf_lport->root_lport != unf_lport))
+ unf_cm_free_xchg(unf_xchg->lport, unf_xchg);
+
+ /* decrease ref_cnt */
+ unf_xchg_ref_dec(unf_xchg, INI_EH_DEVICE_RESET);
+
+ return SUCCESS;
+}
+
+int unf_cm_target_reset_handler(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /* SCSI Target Reset Command --->>> FC Session Reset/Delete Command */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 cmnd_result = 0;
+ int ret = SUCCESS;
+
+ FC_CHECK_RETURN_VALUE(scsi_cmnd, FAILED);
+ FC_CHECK_RETURN_VALUE(scsi_cmnd->lun_id, FAILED);
+
+ /* 1. Get L_Port */
+ unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can't find port by scsi_host_id(0x%x)",
+ UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
+
+ return FAILED;
+ }
+
+ /* 2. L_Port State check */
+ if (unlikely(unf_lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%p) is removing", unf_lport);
+
+ return FAILED;
+ }
+
+	/*
+	 * 3. Get R_Port: if no rport is found or the rport is not ready, return OK.
+	 * Lookup path: L_Port -->> rport_scsi_table (image table) -->>
+	 * rport_info_table
+	 */
+ unf_rport = unf_find_rport_by_scsi_id(unf_lport, scsi_cmnd->err_code_table,
+ scsi_cmnd->err_code_table_cout,
+ UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd), &cmnd_result);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can't find rport by scsi_id(0x%x)",
+ UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ return SUCCESS;
+ }
+
+ /*
+ * 4. set UP_ABORT on Target IO and Session IO
+ * *
+	 * Target Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ unf_cm_xchg_abort_by_session(unf_lport, unf_rport);
+
+ /* 5. R_Port state check */
+ if (unlikely(unf_rport->rp_state != UNF_RPORT_ST_READY)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) state(0x%x) is not ready, SCSI Command(0x%p)",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_rport->rp_state, scsi_cmnd);
+
+ return SUCCESS;
+ }
+
+ /* 6. Get free Xchg for Target Reset CMND */
+ unf_xchg = (struct unf_xchg *)unf_cm_get_free_xchg(unf_lport, UNF_XCHG_TYPE_INI);
+ if (unlikely(!unf_xchg)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%p) can't get free exchange", unf_lport);
+
+ return FAILED;
+ }
+
+ /* increase ref_cnt to protect exchange */
+ ret = (int)unf_xchg_ref_inc(unf_xchg, INI_EH_DEVICE_RESET);
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), FAILED);
+
+ /* 7. Send Target Reset Cmnd to low-level */
+ ret = unf_send_scsi_mgmt_cmnd(unf_xchg, unf_lport, unf_rport, scsi_cmnd,
+ UNF_FCP_TM_TARGET_RESET);
+ if (unlikely(ret == FAILED)) {
+ /*
+ * Do port reset or R_Port LOGO:
+ * 1. FAILED: send failed
+ * 2. FAILED: semaphore timeout
+		 * 3. SUCCESS: rcvd rsp & semaphore has been woken up
+ */
+ unf_tmf_abnormal_recovery(unf_lport, unf_rport, unf_xchg);
+ }
+
+ /*
+ * 8. Release resource immediately if necessary
+	 * NOTE: here, semaphore timeout or rcvd rsp (semaphore has been woken
+	 * up)
+ */
+ if (likely(!unf_lport->port_removing || unf_lport->root_lport != unf_lport))
+ unf_cm_free_xchg(unf_xchg->lport, unf_xchg);
+
+ /* decrease exchange ref_cnt */
+ unf_xchg_ref_dec(unf_xchg, INI_EH_DEVICE_RESET);
+
+ return SUCCESS;
+}
+
+int unf_cm_bus_reset_handler(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /* SCSI BUS Reset Command --->>> FC Port Reset Command */
+ struct unf_lport *unf_lport = NULL;
+ int cmnd_result = 0;
+
+ /* 1. Get L_Port */
+ unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can't find port by scsi_host_id(0x%x)",
+ UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
+
+ return FAILED;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[event]Do port reset with scsi_bus_reset");
+
+ cmnd_result = unf_cm_reset_port(unf_lport->port_id);
+ if (unlikely(cmnd_result == UNF_RETURN_ERROR))
+ return FAILED;
+ else
+ return SUCCESS;
+}
+
+void unf_process_scsi_mgmt_result(struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg)
+{
+ u8 *rsp_info = NULL;
+ u8 rsp_code = 0;
+ u32 code_index = 0;
+
+	/*
+	 * LLT found that RSP_CODE is the third byte of FCP_RSP_INFO; on
+	 * little endian it is byte 0. For details see FCP-4 Table 26,
+	 * FCP_RSP_INFO field format.
+	 *
+	 * 1. state setting
+	 * 2. wake up semaphore
+	 */
+ FC_CHECK_RETURN_VOID(pkg);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ xchg->tmf_state |= TMF_RESPONSE_RECEIVED;
+
+ if (UNF_GET_LL_ERR(pkg) != UNF_IO_SUCCESS ||
+ pkg->unf_rsp_pload_bl.length > UNF_RESPONE_DATA_LEN) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Send scsi manage command failed with error code(0x%x) resp len(0x%x)",
+ UNF_GET_LL_ERR(pkg), pkg->unf_rsp_pload_bl.length);
+
+ xchg->scsi_cmnd_info.result = UNF_IO_FAILED;
+
+ /* wakeup semaphore & return */
+ up(&xchg->task_sema);
+
+ return;
+ }
+
+ rsp_info = pkg->unf_rsp_pload_bl.buffer_ptr;
+ if (rsp_info && pkg->unf_rsp_pload_bl.length != 0) {
+ /* change to little end if necessary */
+ if (pkg->byte_orders & UNF_BIT_3)
+ unf_big_end_to_cpu(rsp_info, pkg->unf_rsp_pload_bl.length);
+ }
+
+ if (!rsp_info) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]FCP response data pointer is NULL with Xchg TAG(0x%x)",
+ xchg->hotpooltag);
+
+ xchg->scsi_cmnd_info.result = UNF_IO_SUCCESS;
+
+ /* wakeup semaphore & return */
+ up(&xchg->task_sema);
+
+ return;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]FCP response data length(0x%x), RSP_CODE(0x%x:%x:%x:%x:%x:%x:%x:%x)",
+ pkg->unf_rsp_pload_bl.length, rsp_info[ARRAY_INDEX_0],
+ rsp_info[ARRAY_INDEX_1], rsp_info[ARRAY_INDEX_2],
+ rsp_info[ARRAY_INDEX_3], rsp_info[ARRAY_INDEX_4],
+ rsp_info[ARRAY_INDEX_5], rsp_info[ARRAY_INDEX_6],
+ rsp_info[ARRAY_INDEX_7]);
+
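+	/* RSP_CODE is taken from byte 0 (code_index) of the response payload */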
+ rsp_code = rsp_info[code_index];
+ if (rsp_code == UNF_FCP_TM_RSP_COMPLETE || rsp_code == UNF_FCP_TM_RSP_SUCCEED)
+ xchg->scsi_cmnd_info.result = UNF_IO_SUCCESS;
+ else
+ xchg->scsi_cmnd_info.result = UNF_IO_FAILED;
+
+ /* wakeup semaphore & return */
+ up(&xchg->task_sema);
+}
+
+static void unf_build_task_mgmt_fcp_cmnd(struct unf_fcp_cmnd *fcp_cmnd,
+ struct unf_scsi_cmnd *scsi_cmnd,
+ enum unf_task_mgmt_cmd task_mgmt)
+{
+ FC_CHECK_RETURN_VOID(fcp_cmnd);
+ FC_CHECK_RETURN_VOID(scsi_cmnd);
+
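+	/* Convert the 8-byte LUN ID from big-endian to CPU order and adjust it
+	 * before copying into the FCP_CMND LUN field.
+	 */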
+ unf_big_end_to_cpu((void *)scsi_cmnd->lun_id, UNF_FCP_LUNID_LEN_8);
+ (*(u64 *)(scsi_cmnd->lun_id)) >>= UNF_SHIFT_8;
+ memcpy(fcp_cmnd->lun, scsi_cmnd->lun_id, sizeof(fcp_cmnd->lun));
+
+ /*
+ * If the TASK MANAGEMENT FLAGS field is set to a nonzero value,
+ * the FCP_CDB field, the FCP_DL field, the TASK ATTRIBUTE field,
+ * the RDDATA bit, and the WRDATA bit shall be ignored and the
+ * FCP_BIDIRECTIONAL_READ_DL field shall not be included in the FCP_CMND
+ * IU payload
+ */
+ fcp_cmnd->control = UNF_SET_TASK_MGMT_FLAGS((u32)(task_mgmt));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "SCSI cmnd(0x%x) is task mgmt cmnd. ntrl Flag(LITTLE END) is 0x%x.",
+ task_mgmt, fcp_cmnd->control);
+}
+
+int unf_send_scsi_mgmt_cmnd(struct unf_xchg *xchg, struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_scsi_cmnd *scsi_cmnd,
+ enum unf_task_mgmt_cmd task_mgnt_cmd_type)
+{
+ /*
+ * 1. Device/LUN reset
+ * 2. Target/Session reset
+ */
+ struct unf_xchg *unf_xchg = NULL;
+ int ret = SUCCESS;
+ struct unf_frame_pkg pkg = {0};
+ ulong xchg_flag = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, FAILED);
+ FC_CHECK_RETURN_VALUE(rport, FAILED);
+ FC_CHECK_RETURN_VALUE(xchg, FAILED);
+ FC_CHECK_RETURN_VALUE(scsi_cmnd, FAILED);
+ FC_CHECK_RETURN_VALUE(task_mgnt_cmd_type <= UNF_FCP_TM_TERMINATE_TASK &&
+ task_mgnt_cmd_type >= UNF_FCP_TM_QUERY_TASK_SET, FAILED);
+
+ unf_xchg = xchg;
+ unf_xchg->lport = lport;
+ unf_xchg->rport = rport;
+
+ /* 1. State: Up_Task */
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, xchg_flag);
+ unf_xchg->io_state |= INI_IO_STATE_UPTASK;
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, xchg_flag);
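+	/* OX_ID occupies the high 16 bits and RX_ID the low 16 bits */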
+ pkg.frame_head.oxid_rxid = ((u32)unf_xchg->oxid << (u32)UNF_SHIFT_16) | unf_xchg->rxid;
+
+ /* 2. Set TASK MANAGEMENT FLAGS of FCP_CMND to the corresponding task
+ * management command
+ */
+ unf_build_task_mgmt_fcp_cmnd(&unf_xchg->fcp_cmnd, scsi_cmnd, task_mgnt_cmd_type);
+
+ pkg.xchg_contex = unf_xchg;
+ pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = rport->rport_index;
+ pkg.fcp_cmnd = &unf_xchg->fcp_cmnd;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = unf_xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg.frame_head.csctl_sid = lport->nport_id;
+ pkg.frame_head.rctl_did = rport->nport_id;
+
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+
+ if (unlikely(lport->pcie_link_down)) {
+ unf_free_lport_all_xchg(lport);
+ return SUCCESS;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) Hottag(0x%x) lunid(0x%llx)",
+ lport->port_id, task_mgnt_cmd_type, rport->nport_id,
+ unf_xchg->hotpooltag, *((u64 *)scsi_cmnd->lun_id));
+
+ /* 3. Init exchange task semaphore */
+ sema_init(&unf_xchg->task_sema, 0);
+
+ /* 4. Send Mgmt Task to low-level */
+ if (unf_hardware_start_io(lport, &pkg) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) failed",
+ lport->port_id, task_mgnt_cmd_type, rport->nport_id);
+
+ return FAILED;
+ }
+
+	/*
+	 * Semaphore timeout handling.
+	 *
+	 * Code review: the second input parameter needs to be converted to
+	 * jiffies. The semaphore is set up after the message has been sent
+	 * successfully; it returns either when it times out or when it is
+	 * woken up.
+	 *
+	 * 5. The semaphore is cleared and counted when the Mgmt Task message
+	 * is sent, and is woken up when the RSP message is received. If it is
+	 * not woken up, the timeout fires, i.e. no RSP message was received
+	 * within the timeout period.
+	 */
+ if (down_timeout(&unf_xchg->task_sema, (s64)msecs_to_jiffies((u32)UNF_WAIT_SEM_TIMEOUT))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) timeout scsi id(0x%x) lun id(0x%x)",
+ lport->nport_id, task_mgnt_cmd_type,
+ rport->nport_id, scsi_cmnd->scsi_id,
+ (u32)scsi_cmnd->raw_lun_id);
+ unf_notify_chip_free_xid(unf_xchg);
+ /* semaphore timeout */
+ ret = FAILED;
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ if (lport->states == UNF_LPORT_ST_RESET)
+ ret = SUCCESS;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ return ret;
+ }
+
+ /*
+ * 6. NOTE: no timeout (has been waken up)
+ * Do Scsi_Cmnd(Mgmt Task) result checking
+ * *
+ * FAILED: with error code or RSP is error
+ * SUCCESS: others
+ */
+ if (unf_xchg->scsi_cmnd_info.result == UNF_IO_SUCCESS) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) and receive rsp succeed",
+ lport->nport_id, task_mgnt_cmd_type, rport->nport_id);
+
+ ret = SUCCESS;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) and receive rsp failed scsi id(0x%x) lun id(0x%x)",
+ lport->nport_id, task_mgnt_cmd_type, rport->nport_id,
+ scsi_cmnd->scsi_id, (u32)scsi_cmnd->raw_lun_id);
+
+ ret = FAILED;
+ }
+
+ return ret;
+}
+
+u32 unf_recv_tmf_marker_status(void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 uret = RETURN_OK;
+ struct unf_xchg *unf_xchg = NULL;
+ u16 hot_pool_tag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ unf_lport = (struct unf_lport *)lport;
+
+	/* Find the exchange that the marker status points to */
+ if (!unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) tag function is NULL", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hot_pool_tag =
+ (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
+
+ unf_xchg =
+ (struct unf_xchg *)(unf_lport->xchg_mgr_temp
+ .unf_look_up_xchg_by_tag((void *)unf_lport, hot_pool_tag));
+ if (!unf_xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
+ unf_lport->port_id, unf_lport->nport_id, hot_pool_tag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /*
+ * NOTE: set exchange TMF state with MARKER_STS_RECEIVED
+ * *
+ * About TMF state
+ * 1. STS received
+ * 2. Response received
+ * 3. Do check if necessary
+ */
+ unf_xchg->tmf_state |= MARKER_STS_RECEIVED;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Marker STS: D_ID(0x%x) S_ID(0x%x) OX_ID(0x%x) RX_ID(0x%x), EXCH: D_ID(0x%x) S_ID(0x%x) OX_ID(0x%x) RX_ID(0x%x)",
+ pkg->frame_head.rctl_did & UNF_NPORTID_MASK,
+ pkg->frame_head.csctl_sid & UNF_NPORTID_MASK,
+ (u16)(pkg->frame_head.oxid_rxid >> UNF_SHIFT_16),
+ (u16)(pkg->frame_head.oxid_rxid), unf_xchg->did, unf_xchg->sid,
+ unf_xchg->oxid, unf_xchg->rxid);
+
+ return uret;
+}
+
+u32 unf_recv_abts_marker_status(void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ u16 hot_pool_tag = 0;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ unf_lport = (struct unf_lport *)lport;
+
+ /* Find exchange by tag */
+ if (!unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) tag function is NULL", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
+
+ unf_xchg =
+ (struct unf_xchg *)(unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag((void *)unf_lport,
+ hot_pool_tag));
+ if (!unf_xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
+ unf_lport->port_id, unf_lport->nport_id, hot_pool_tag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /*
+ * NOTE: set exchange ABTS state with MARKER_STS_RECEIVED
+ * *
+ * About exchange ABTS state
+ * 1. STS received
+ * 2. Response received
+ * 3. Do check if necessary
+ * *
+ * About Exchange status get from low level
+ * 1. Set: when RCVD ABTS Marker
+ * 2. Set: when RCVD ABTS Req Done
+ * 3. value: set value with pkg->status
+ */
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, flags);
+ unf_xchg->ucode_abts_state = pkg->status;
+ unf_xchg->abts_state |= MARKER_STS_RECEIVED;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) wake up SEMA for Abts marker exchange(0x%p) oxid(0x%x 0x%x) hottag(0x%x) status(0x%x)",
+ unf_lport->port_id, unf_xchg, unf_xchg->oxid, unf_xchg->rxid,
+ unf_xchg->hotpooltag, pkg->abts_maker_status);
+
+	/*
+	 * NOTE: if the ABTS marker is received for a second time, or the ABTS
+	 * response has already been received, there is no need to wake up the
+	 * semaphore
+	 */
+ if ((INI_IO_STATE_ABORT_TIMEOUT & unf_xchg->io_state) ||
+ (ABTS_RESPONSE_RECEIVED & unf_xchg->abts_state)) {
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) no need to wake up SEMA for Abts marker ABTS_STATE(0x%x) IO_STATE(0x%x)",
+ unf_lport->port_id, unf_xchg->abts_state, unf_xchg->io_state);
+
+ return RETURN_OK;
+ }
+
+ if (unf_xchg->io_state & INI_IO_STATE_TMF_ABORT) {
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) receive Abts marker, exchange(%p) state(0x%x) free it",
+ unf_lport->port_id, unf_xchg, unf_xchg->io_state);
+
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+ } else {
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+ up(&unf_xchg->task_sema);
+ }
+
+ return RETURN_OK;
+}
diff --git a/drivers/scsi/spfc/common/unf_io_abnormal.h b/drivers/scsi/spfc/common/unf_io_abnormal.h
new file mode 100644
index 000000000000..6eced45c6497
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_io_abnormal.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_IO_ABNORMAL_H
+#define UNF_IO_ABNORMAL_H
+
+#include "unf_type.h"
+#include "unf_lport.h"
+#include "unf_exchg.h"
+
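+/* The low-level error code is carried in the high 16 bits of pkg->status */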
+#define UNF_GET_LL_ERR(pkg) (((pkg)->status) >> 16)
+
+void unf_process_scsi_mgmt_result(struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg);
+u32 unf_hardware_start_io(struct unf_lport *lport, struct unf_frame_pkg *pkg);
+u32 unf_recv_abts_marker_status(void *lport, struct unf_frame_pkg *pkg);
+u32 unf_recv_tmf_marker_status(void *lport, struct unf_frame_pkg *pkg);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_log.h b/drivers/scsi/spfc/common/unf_log.h
new file mode 100644
index 000000000000..801e23ac0829
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_log.h
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_LOG_H
+#define UNF_LOG_H
+#include "unf_type.h"
+
+#define UNF_CRITICAL 1
+#define UNF_ERR 2
+#define UNF_WARN 3
+#define UNF_KEVENT 4
+#define UNF_MAJOR 5
+#define UNF_MINOR 6
+#define UNF_INFO 7
+#define UNF_DATA 7
+#define UNF_ALL 7
+
+enum unf_debug_type {
+ UNF_DEBUG_TYPE_MML = 0,
+ UNF_DEBUG_TYPE_DIAGNOSE = 1,
+ UNF_DEBUG_TYPE_MESSAGE = 2,
+ UNF_DEBUG_TYPE_BUTT
+};
+
+enum unf_log_attr {
+ UNF_LOG_LOGIN_ATT = 0x1,
+ UNF_LOG_IO_ATT = 0x2,
+ UNF_LOG_EQUIP_ATT = 0x4,
+ UNF_LOG_REG_ATT = 0x8,
+ UNF_LOG_REG_MML_TEST = 0x10,
+ UNF_LOG_EVENT = 0x20,
+ UNF_LOG_NORMAL = 0x40,
+	UNF_LOG_ABNORMAL = 0x80,
+ UNF_LOG_BUTT
+};
+
+enum event_log {
+ UNF_EVTLOG_DRIVER_SUC = 0,
+ UNF_EVTLOG_DRIVER_INFO,
+ UNF_EVTLOG_DRIVER_WARN,
+ UNF_EVTLOG_DRIVER_ERR,
+ UNF_EVTLOG_LINK_SUC,
+ UNF_EVTLOG_LINK_INFO,
+ UNF_EVTLOG_LINK_WARN,
+ UNF_EVTLOG_LINK_ERR,
+ UNF_EVTLOG_IO_SUC,
+ UNF_EVTLOG_IO_INFO,
+ UNF_EVTLOG_IO_WARN,
+ UNF_EVTLOG_IO_ERR,
+ UNF_EVTLOG_TOOL_SUC,
+ UNF_EVTLOG_TOOL_INFO,
+ UNF_EVTLOG_TOOL_WARN,
+ UNF_EVTLOG_TOOL_ERR,
+ UNF_EVTLOG_BUT
+};
+
+#define UNF_IO_ATT_PRINT_TIMES 2
+#define UNF_LOGIN_ATT_PRINT_TIMES 100
+
+#define UNF_IO_ATT_PRINT_LIMIT msecs_to_jiffies(2 * 1000)
+
+extern u32 unf_dgb_level;
+extern u32 log_print_level;
+extern u32 log_limited_times;
+
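+/* Rate-limited print: allow a burst of messages, then suppress further output
+ * until UNF_IO_ATT_PRINT_LIMIT jiffies have elapsed.
+ */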
+#define DRV_LOG_LIMIT(module_id, log_level, log_att, format, ...) \
+ do { \
+ static unsigned long pre; \
+ static int should_print = UNF_LOGIN_ATT_PRINT_TIMES; \
+ if (time_after_eq(jiffies, pre + (UNF_IO_ATT_PRINT_LIMIT))) { \
+ if (log_att == UNF_LOG_ABNORMAL) { \
+ should_print = UNF_IO_ATT_PRINT_TIMES; \
+ } else { \
+ should_print = log_limited_times; \
+ } \
+ } \
+ if (should_print < 0) { \
+ if (log_att != UNF_LOG_ABNORMAL) \
+ pre = jiffies; \
+ break; \
+ } \
+ if (should_print-- > 0) { \
+ printk(log_level "[%d][FC_UNF]" format "[%s][%-5d]\n", \
+ smp_processor_id(), ##__VA_ARGS__, __func__, \
+ __LINE__); \
+ } \
+ if (should_print == 0) { \
+ printk(log_level "[FC_UNF]log is limited[%s][%-5d]\n", \
+ __func__, __LINE__); \
+ } \
+ pre = jiffies; \
+ } while (0)
+
+#define FC_CHECK_RETURN_VALUE(condition, ret) \
+ do { \
+ if (unlikely(!(condition))) { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, \
+ UNF_ERR, "Para check(%s) invalid", \
+ #condition); \
+ return ret; \
+ } \
+ } while (0)
+
+#define FC_CHECK_RETURN_VOID(condition) \
+ do { \
+ if (unlikely(!(condition))) { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, \
+ UNF_ERR, "Para check(%s) invalid", \
+ #condition); \
+ return; \
+ } \
+ } while (0)
+
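+/* Map the driver log level to the corresponding kernel printk level and emit
+ * the message through the rate-limited helper above.
+ */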
+#define FC_DRV_PRINT(log_att, log_level, format, ...) \
+ do { \
+ if (unlikely((log_level) <= log_print_level)) { \
+ if (log_level == UNF_CRITICAL) { \
+ DRV_LOG_LIMIT(UNF_PID, KERN_CRIT, \
+ log_att, format, ##__VA_ARGS__); \
+ } else if (log_level == UNF_WARN) { \
+ DRV_LOG_LIMIT(UNF_PID, KERN_WARNING, \
+ log_att, format, ##__VA_ARGS__); \
+ } else if (log_level == UNF_ERR) { \
+ DRV_LOG_LIMIT(UNF_PID, KERN_ERR, \
+ log_att, format, ##__VA_ARGS__); \
+ } else if (log_level == UNF_MAJOR || \
+ log_level == UNF_MINOR || \
+ log_level == UNF_KEVENT) { \
+ DRV_LOG_LIMIT(UNF_PID, KERN_NOTICE, \
+ log_att, format, ##__VA_ARGS__); \
+ } else if (log_level == UNF_INFO || \
+ log_level == UNF_DATA) { \
+ DRV_LOG_LIMIT(UNF_PID, KERN_INFO, \
+ log_att, format, ##__VA_ARGS__); \
+ } \
+ } \
+ } while (0)
+
+#define UNF_PRINT_SFS(dbg_level, portid, data, size) \
+ do { \
+ if ((dbg_level) <= log_print_level) { \
+ u32 cnt = 0; \
+ printk(KERN_INFO "[INFO]Port(0x%x) sfs:0x", (portid)); \
+ for (cnt = 0; cnt < (size) / 4; cnt++) { \
+ printk(KERN_INFO "%08x ", \
+ ((u32 *)(data))[cnt]); \
+ } \
+ printk(KERN_INFO "[FC_UNF][%s]\n", __func__); \
+ } \
+ } while (0)
+
+#define UNF_PRINT_SFS_LIMIT(dbg_level, portid, data, size) \
+ do { \
+ if ((dbg_level) <= log_print_level) { \
+ static ulong pre; \
+ static int should_print = UNF_LOGIN_ATT_PRINT_TIMES; \
+ if (time_after_eq( \
+ jiffies, pre + UNF_IO_ATT_PRINT_LIMIT)) { \
+ should_print = log_limited_times; \
+ } \
+ if (should_print < 0) { \
+ pre = jiffies; \
+ break; \
+ } \
+ if (should_print-- > 0) { \
+ UNF_PRINT_SFS(dbg_level, portid, data, size); \
+ } \
+ if (should_print == 0) { \
+ printk( \
+ KERN_INFO \
+ "[FC_UNF]sfs log is limited[%s][%-5d]\n", \
+ __func__, __LINE__); \
+ } \
+ pre = jiffies; \
+ } \
+ } while (0)
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_lport.c b/drivers/scsi/spfc/common/unf_lport.c
new file mode 100644
index 000000000000..66d3ac14d676
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_lport.c
@@ -0,0 +1,1008 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_lport.h"
+#include "unf_log.h"
+#include "unf_rport.h"
+#include "unf_exchg.h"
+#include "unf_service.h"
+#include "unf_ls.h"
+#include "unf_gs.h"
+#include "unf_portman.h"
+
+static void unf_lport_config(struct unf_lport *lport);
+void unf_cm_mark_dirty_mem(struct unf_lport *lport, enum unf_lport_dirty_flag type)
+{
+ FC_CHECK_RETURN_VOID((lport));
+
+ lport->dirty_flag |= (u32)type;
+}
+
+u32 unf_init_lport_route(struct unf_lport *lport)
+{
+ u32 ret = RETURN_OK;
+ int ret_val = 0;
+
+ FC_CHECK_RETURN_VALUE((lport), UNF_RETURN_ERROR);
+
+ /* Init L_Port route work */
+ INIT_DELAYED_WORK(&lport->route_timer_work, unf_lport_route_work);
+
+ /* Delay route work */
+ ret_val = queue_delayed_work(unf_wq, &lport->route_timer_work,
+ (ulong)msecs_to_jiffies(UNF_LPORT_POLL_TIMER));
+ if (unlikely((!(bool)(ret_val)))) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
+ "[warn]Port(0x%x) schedule route work failed",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
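+	/* Take a reference that is dropped when the route work is cancelled in
+	 * unf_destroy_lport_route().
+	 */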
+ ret = unf_lport_ref_inc(lport);
+ return ret;
+}
+
+void unf_destroy_lport_route(struct unf_lport *lport)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ /* Cancel (route timer) delay work */
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id), (&lport->route_timer_work),
+ "Route Timer work");
+ if (ret == RETURN_OK)
+ /* Corresponding to ADD operation */
+ unf_lport_ref_dec(lport);
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_2_CLOSE_ROUTE;
+}
+
+void unf_init_port_parms(struct unf_lport *lport)
+{
+ INIT_LIST_HEAD(&lport->list_vports_head);
+ INIT_LIST_HEAD(&lport->list_intergrad_vports);
+ INIT_LIST_HEAD(&lport->list_destroy_vports);
+ INIT_LIST_HEAD(&lport->entry_lport);
+ INIT_LIST_HEAD(&lport->list_qos_head);
+
+ spin_lock_init(&lport->qos_mgr_lock);
+ spin_lock_init(&lport->lport_state_lock);
+
+ lport->max_frame_size = max_frame_size;
+ lport->ed_tov = UNF_DEFAULT_EDTOV;
+ lport->ra_tov = UNF_DEFAULT_RATOV;
+ lport->fabric_node_name = 0;
+ lport->qos_level = UNF_QOS_LEVEL_DEFAULT;
+ lport->qos_cs_ctrl = false;
+ lport->priority = (bool)UNF_PRIORITY_DISABLE;
+ lport->port_dirt_exchange = false;
+
+ unf_lport_config(lport);
+
+ unf_set_lport_state(lport, UNF_LPORT_ST_ONLINE);
+
+ lport->link_up = UNF_PORT_LINK_DOWN;
+ lport->port_removing = false;
+ lport->lport_free_completion = NULL;
+ lport->last_tx_fault_jif = 0;
+ lport->enhanced_features = 0;
+ lport->destroy_step = INVALID_VALUE32;
+ lport->dirty_flag = 0;
+ lport->switch_state = false;
+ lport->bbscn_support = false;
+ lport->loop_back_test_mode = false;
+ lport->start_work_state = UNF_START_WORK_STOP;
+ lport->sfp_power_fault_count = 0;
+ lport->sfp_9545_fault_count = 0;
+
+ atomic_set(&lport->lport_no_operate_flag, UNF_LPORT_NORMAL);
+ atomic_set(&lport->port_ref_cnt, 0);
+ atomic_set(&lport->scsi_session_add_success, 0);
+ atomic_set(&lport->scsi_session_add_failed, 0);
+ atomic_set(&lport->scsi_session_del_success, 0);
+ atomic_set(&lport->scsi_session_del_failed, 0);
+ atomic_set(&lport->add_start_work_failed, 0);
+ atomic_set(&lport->add_closing_work_failed, 0);
+ atomic_set(&lport->alloc_scsi_id, 0);
+ atomic_set(&lport->resume_scsi_id, 0);
+ atomic_set(&lport->reuse_scsi_id, 0);
+ atomic_set(&lport->device_alloc, 0);
+ atomic_set(&lport->device_destroy, 0);
+ atomic_set(&lport->session_loss_tmo, 0);
+ atomic_set(&lport->host_no, 0);
+ atomic64_set(&lport->exchg_index, 0x1000);
+ atomic_inc(&lport->port_ref_cnt);
+
+ memset(&lport->port_dynamic_info, 0, sizeof(struct unf_port_dynamic_info));
+ memset(&lport->link_service_info, 0, sizeof(struct unf_link_service_collect));
+ memset(&lport->err_code_sum, 0, sizeof(struct unf_err_code));
+}
+
+void unf_reset_lport_params(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = lport;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport->link_up = UNF_PORT_LINK_DOWN;
+ unf_lport->nport_id = 0;
+ unf_lport->max_frame_size = max_frame_size;
+ unf_lport->ed_tov = UNF_DEFAULT_EDTOV;
+ unf_lport->ra_tov = UNF_DEFAULT_RATOV;
+ unf_lport->fabric_node_name = 0;
+}
+
+static enum unf_lport_login_state
+unf_lport_state_online(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_LINK_UP:
+ next_state = UNF_LPORT_ST_LINK_UP;
+ break;
+
+ case UNF_EVENT_LPORT_NORMAL_ENTER:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_initial(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_LINK_UP:
+ next_state = UNF_LPORT_ST_LINK_UP;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_linkup(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_NORMAL_ENTER:
+ next_state = UNF_LPORT_ST_FLOGI_WAIT;
+ break;
+
+ case UNF_EVENT_LPORT_READY:
+ next_state = UNF_LPORT_ST_READY;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_flogi_wait(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_REMOTE_ACC:
+ next_state = UNF_LPORT_ST_PLOGI_WAIT;
+ break;
+
+ case UNF_EVENT_LPORT_READY:
+ next_state = UNF_LPORT_ST_READY;
+ break;
+
+ case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_plogi_wait(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_REMOTE_ACC:
+ next_state = UNF_LPORT_ST_RFT_ID_WAIT;
+ break;
+
+ case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state
+unf_lport_state_rftid_wait(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_REMOTE_ACC:
+ next_state = UNF_LPORT_ST_RFF_ID_WAIT;
+ break;
+
+ case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_rffid_wait(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_REMOTE_ACC:
+ next_state = UNF_LPORT_ST_SCR_WAIT;
+ break;
+
+ case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_scr_wait(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_REMOTE_ACC:
+ next_state = UNF_LPORT_ST_READY;
+ break;
+
+ case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state
+unf_lport_state_logo(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_NORMAL_ENTER:
+ next_state = UNF_LPORT_ST_OFFLINE;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_offline(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_ONLINE:
+ next_state = UNF_LPORT_ST_ONLINE;
+ break;
+
+ case UNF_EVENT_LPORT_RESET:
+ next_state = UNF_LPORT_ST_RESET;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_reset(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_NORMAL_ENTER:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_ready(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ case UNF_EVENT_LPORT_RESET:
+ next_state = UNF_LPORT_ST_RESET;
+ break;
+
+ case UNF_EVENT_LPORT_OFFLINE:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
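+/* Table-driven login state machine: each entry maps a login state to the
+ * handler that computes the next state for an event.
+ */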
+static struct unf_lport_state_ma lport_state[] = {
+ {UNF_LPORT_ST_ONLINE, unf_lport_state_online},
+ {UNF_LPORT_ST_INITIAL, unf_lport_state_initial},
+ {UNF_LPORT_ST_LINK_UP, unf_lport_state_linkup},
+ {UNF_LPORT_ST_FLOGI_WAIT, unf_lport_state_flogi_wait},
+ {UNF_LPORT_ST_PLOGI_WAIT, unf_lport_state_plogi_wait},
+ {UNF_LPORT_ST_RFT_ID_WAIT, unf_lport_state_rftid_wait},
+ {UNF_LPORT_ST_RFF_ID_WAIT, unf_lport_state_rffid_wait},
+ {UNF_LPORT_ST_SCR_WAIT, unf_lport_state_scr_wait},
+ {UNF_LPORT_ST_LOGO, unf_lport_state_logo},
+ {UNF_LPORT_ST_OFFLINE, unf_lport_state_offline},
+ {UNF_LPORT_ST_RESET, unf_lport_state_reset},
+ {UNF_LPORT_ST_READY, unf_lport_state_ready},
+};
+
+void unf_lport_state_ma(struct unf_lport *lport,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state old_state = UNF_LPORT_ST_ONLINE;
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+ u32 index = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ old_state = lport->states;
+
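+	/* Look up the transition handler for the current login state */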
+ while (index < (sizeof(lport_state) / sizeof(struct unf_lport_state_ma))) {
+ if (lport->states == lport_state[index].lport_state) {
+ next_state = lport_state[index].lport_state_ma(old_state, lport_event);
+ break;
+ }
+ index++;
+ }
+
+ if (index >= (sizeof(lport_state) / sizeof(struct unf_lport_state_ma))) {
+ next_state = old_state;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "[info]Port(0x%x) hold state(0x%x)",
+ lport->port_id, lport->states);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) with old state(0x%x) event(0x%x) next state(0x%x)",
+ lport->port_id, old_state, lport_event, next_state);
+
+ unf_set_lport_state(lport, next_state);
+}
+
+u32 unf_lport_retry_flogi(struct unf_lport *lport)
+{
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* Get (new) R_Port */
+ unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_FLOGI);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FLOGI);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) allocate RPort failed",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Check L_Port state */
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ if (lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) no need to retry FLOGI with state(0x%x)",
+ lport->port_id, lport->states);
+
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+ return RETURN_OK;
+ }
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = UNF_FC_FID_FLOGI;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Send FLOGI or FDISC */
+ if (lport->root_lport != lport) {
+ ret = unf_send_fdisc(lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send FDISC failed", lport->port_id);
+
+ /* Do L_Port recovery */
+ unf_lport_error_recovery(lport);
+ }
+ } else {
+ ret = unf_send_flogi(lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+					     "[warn]LOGIN: Port(0x%x) send FLOGI failed", lport->port_id);
+
+ /* Do L_Port recovery */
+ unf_lport_error_recovery(lport);
+ }
+ }
+
+ return ret;
+}
+
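+/* Resend the name-server/fabric request matching the given login state
+ * (PLOGI, RFT_ID, RFF_ID or SCR) to the directory server or fabric
+ * controller R_Port; failures trigger L_Port error recovery.
+ */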
+u32 unf_lport_name_server_register(struct unf_lport *lport,
+ enum unf_lport_login_state state)
+{
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 fabric_id = UNF_FC_FID_DIR_SERV;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ if (state == UNF_LPORT_ST_SCR_WAIT)
+ fabric_id = UNF_FC_FID_FCTRL;
+
+ /* Get (safe) R_Port */
+	unf_rport = unf_get_rport_by_nport_id(lport, fabric_id);

+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY,
+ fabric_id);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) allocate RPort failed",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Update R_Port & L_Port state */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = fabric_id;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ switch (state) {
+ /* RFT_ID */
+ case UNF_LPORT_ST_RFT_ID_WAIT:
+ ret = unf_send_rft_id(lport, unf_rport);
+ break;
+ /* RFF_ID */
+ case UNF_LPORT_ST_RFF_ID_WAIT:
+ ret = unf_send_rff_id(lport, unf_rport, UNF_FC4_FCP_TYPE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) register SCSI FC4Type to fabric(0xfffffc) failed",
+ lport->nport_id);
+ unf_lport_error_recovery(lport);
+ }
+ break;
+
+ /* SCR */
+ case UNF_LPORT_ST_SCR_WAIT:
+ ret = unf_send_scr(lport, unf_rport);
+ break;
+
+ /* PLOGI */
+ case UNF_LPORT_ST_PLOGI_WAIT:
+ default:
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ ret = unf_send_plogi(lport, unf_rport);
+ break;
+ }
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) register fabric(0xfffffc) failed",
+ lport->nport_id);
+
+ /* Do L_Port recovery */
+ unf_lport_error_recovery(lport);
+ }
+
+ return ret;
+}
+
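+/* Send LOGO to the directory server R_Port (or the given R_Port) and
+ * advance the L_Port state machine; without an R_Port only the state
+ * machine is advanced.
+ */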
+u32 unf_lport_enter_sns_logo(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ if (!rport)
+ unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
+ else
+ unf_rport = rport;
+
+ if (!unf_rport) {
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ return RETURN_OK;
+ }
+
+ /* Update L_Port & R_Port state */
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Do R_Port LOGO state */
+ unf_rport_enter_logo(lport, unf_rport);
+
+ return ret;
+}
+
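+/* Fabric/public-loop login with the name server: (re)allocate the
+ * directory server R_Port and send PLOGI to 0xfffffc; failures trigger
+ * L_Port error recovery.
+ */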
+void unf_lport_enter_sns_plogi(struct unf_lport *lport)
+{
+ /* Fabric or Public Loop Mode: Login with Name server */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ /* Get (safe) R_Port */
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
+ if (unf_rport) {
+ /* for port swap: Delete old R_Port if necessary */
+ if (unf_rport->local_nport_id != lport->nport_id) {
+ unf_rport_immediate_link_down(lport, unf_rport);
+ unf_rport = NULL;
+ }
+ }
+
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY,
+ UNF_FC_FID_DIR_SERV);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) allocate RPort failed",
+ lport->port_id);
+
+ unf_lport_error_recovery(unf_lport);
+ return;
+ }
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = UNF_FC_FID_DIR_SERV;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Send PLOGI to Fabric(0xfffffc) */
+ ret = unf_send_plogi(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send PLOGI to name server failed",
+ lport->port_id);
+
+ unf_lport_error_recovery(unf_lport);
+ }
+}
+
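+/* Event callback that resets R_A_TOV/E_D_TOV to their defaults and, in
+ * fabric topologies, applies them to the L_Port.
+ */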
+int unf_get_port_params(void *arg_in, void *arg_out)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)arg_in;
+ struct unf_low_level_port_mgr_op *port_mgr = NULL;
+ struct unf_port_param port_params = {0};
+
+ FC_CHECK_RETURN_VALUE(arg_in, UNF_RETURN_ERROR);
+
+ port_mgr = &unf_lport->low_level_func.port_mgr_op;
+ if (!port_mgr->ll_port_config_get) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
+ "[warn]Port(0x%x) low level port_config_get function is NULL",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+		     "[info]Port(0x%x) get parameters with default:R_A_TOV(%d) E_D_TOV(%d)",
+ unf_lport->port_id, UNF_DEFAULT_FABRIC_RATOV,
+ UNF_DEFAULT_EDTOV);
+
+ port_params.ra_tov = UNF_DEFAULT_FABRIC_RATOV;
+ port_params.ed_tov = UNF_DEFAULT_EDTOV;
+
+ /* Update parameters with Fabric mode */
+ if (unf_lport->act_topo == UNF_ACT_TOP_PUBLIC_LOOP ||
+ unf_lport->act_topo == UNF_ACT_TOP_P2P_FABRIC) {
+ unf_lport->ra_tov = port_params.ra_tov;
+ unf_lport->ed_tov = port_params.ed_tov;
+ }
+
+ return RETURN_OK;
+}
+
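+/* Start fabric login: allocate the FLOGI R_Port, post an asynchronous
+ * event to refresh the port timeout parameters, then send FLOGI for a
+ * physical port or FDISC for an NPIV vport.
+ */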
+u32 unf_lport_enter_flogi(struct unf_lport *lport)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_cm_event_report *event = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 nport_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* Get (safe) R_Port */
+ nport_id = UNF_FC_FID_FLOGI;
+ unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_FLOGI);
+
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate RPort failed",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+	/* Update L_Port state */
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ /* Update R_Port N_Port_ID */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = UNF_FC_FID_FLOGI;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ event = unf_get_one_event_node(lport);
+ if (event) {
+ event->lport = lport;
+ event->event_asy_flag = UNF_EVENT_ASYN;
+ event->unf_event_task = unf_get_port_params;
+ event->para_in = (void *)lport;
+ unf_post_one_event_node(lport, event);
+ }
+
+ if (lport->root_lport != lport) {
+ /* for NPIV */
+ ret = unf_send_fdisc(lport, unf_rport);
+ if (ret != RETURN_OK)
+ unf_lport_error_recovery(lport);
+ } else {
+ /* for Physical Port */
+ ret = unf_send_flogi(lport, unf_rport);
+ if (ret != RETURN_OK)
+ unf_lport_error_recovery(lport);
+ }
+
+ return ret;
+}
+
+void unf_set_lport_state(struct unf_lport *lport, enum unf_lport_login_state state)
+{
+ FC_CHECK_RETURN_VOID(lport);
+ if (lport->states != state)
+ lport->retries = 0;
+
+ lport->states = state;
+}
+
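+/* Delayed-work handler for login retries: resend FLOGI or the pending
+ * name-server request according to the current login state, then drop
+ * the reference taken when the work was queued.
+ */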
+static void unf_lport_timeout(struct work_struct *work)
+{
+ struct unf_lport *unf_lport = NULL;
+ enum unf_lport_login_state state = UNF_LPORT_ST_READY;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(work);
+ unf_lport = container_of(work, struct unf_lport, retry_work.work);
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ state = unf_lport->states;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is timeout with state(0x%x)",
+ unf_lport->port_id, state);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ switch (state) {
+ /* FLOGI retry */
+ case UNF_LPORT_ST_FLOGI_WAIT:
+ (void)unf_lport_retry_flogi(unf_lport);
+ break;
+
+ case UNF_LPORT_ST_PLOGI_WAIT:
+ case UNF_LPORT_ST_RFT_ID_WAIT:
+ case UNF_LPORT_ST_RFF_ID_WAIT:
+ case UNF_LPORT_ST_SCR_WAIT:
+ (void)unf_lport_name_server_register(unf_lport, state);
+ break;
+
+ /* Send LOGO External */
+ case UNF_LPORT_ST_LOGO:
+ break;
+
+ /* Do nothing */
+ case UNF_LPORT_ST_OFFLINE:
+ case UNF_LPORT_ST_READY:
+ case UNF_LPORT_ST_RESET:
+ case UNF_LPORT_ST_ONLINE:
+ case UNF_LPORT_ST_INITIAL:
+ case UNF_LPORT_ST_LINK_UP:
+
+ unf_lport->retries = 0;
+ break;
+ default:
+ break;
+ }
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+}
+
+static void unf_lport_config(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ INIT_DELAYED_WORK(&lport->retry_work, unf_lport_timeout);
+
+ lport->max_retry_count = UNF_MAX_RETRY_COUNT;
+ lport->retries = 0;
+}
+
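+/* Schedule a login retry after E_D_TOV while retries remain; once the
+ * retry limit is reached, treat it as a remote timeout and send LOGO to
+ * the name server.
+ */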
+void unf_lport_error_recovery(struct unf_lport *lport)
+{
+ ulong delay = 0;
+ ulong flag = 0;
+ int ret_val = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ ret = unf_lport_ref_inc(lport);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is removing and no need process",
+ lport->port_id);
+ return;
+ }
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+
+ /* Port State: removing */
+ if (lport->port_removing) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is removing and no need process",
+ lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(lport);
+ return;
+ }
+
+ /* Port State: offline */
+ if (lport->states == UNF_LPORT_ST_OFFLINE) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is offline and no need process",
+ lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(lport);
+ return;
+ }
+
+ /* Queue work state check */
+ if (delayed_work_pending(&lport->retry_work)) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ unf_lport_ref_dec_to_destroy(lport);
+ return;
+ }
+
+ /* Do retry operation */
+ if (lport->retries < lport->max_retry_count) {
+ lport->retries++;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) enter recovery and retry %u times",
+ lport->port_id, lport->nport_id, lport->retries);
+
+ delay = (ulong)lport->ed_tov;
+ ret_val = queue_delayed_work(unf_wq, &lport->retry_work,
+ (ulong)msecs_to_jiffies((u32)delay));
+ if (ret_val != 0) {
+ atomic_inc(&lport->port_ref_cnt);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) queue work success and reference count is %d",
+ lport->port_id,
+ atomic_read(&lport->port_ref_cnt));
+ }
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+ } else {
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_REMOTE_TIMEOUT);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) register operation timeout and do LOGO",
+ lport->port_id);
+
+ (void)unf_lport_enter_sns_logo(lport, NULL);
+ }
+
+ unf_lport_ref_dec_to_destroy(lport);
+}
+
+struct unf_lport *unf_cm_lookup_vport_by_vp_index(struct unf_lport *lport, u16 vp_index)
+{
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ if (vp_index == 0)
+ return lport;
+
+ if (!lport->lport_mgr_temp.unf_look_up_vport_by_index) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) function do look up vport by index is NULL",
+ lport->port_id);
+
+ return NULL;
+ }
+
+ return lport->lport_mgr_temp.unf_look_up_vport_by_index(lport, vp_index);
+}
+
+struct unf_lport *unf_cm_lookup_vport_by_did(struct unf_lport *lport, u32 did)
+{
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ if (!lport->lport_mgr_temp.unf_look_up_vport_by_did) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) function do look up vport by D_ID is NULL",
+ lport->port_id);
+
+ return NULL;
+ }
+
+ return lport->lport_mgr_temp.unf_look_up_vport_by_did(lport, did);
+}
+
+struct unf_lport *unf_cm_lookup_vport_by_wwpn(struct unf_lport *lport, u64 wwpn)
+{
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ if (!lport->lport_mgr_temp.unf_look_up_vport_by_wwpn) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) function do look up vport by WWPN is NULL",
+ lport->port_id);
+
+ return NULL;
+ }
+
+ return lport->lport_mgr_temp.unf_look_up_vport_by_wwpn(lport, wwpn);
+}
+
+void unf_cm_vport_remove(struct unf_lport *vport)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(vport);
+ unf_lport = vport->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ if (!unf_lport->lport_mgr_temp.unf_vport_remove) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) function do vport remove is NULL",
+ unf_lport->port_id);
+ return;
+ }
+
+ unf_lport->lport_mgr_temp.unf_vport_remove(vport);
+}
diff --git a/drivers/scsi/spfc/common/unf_lport.h b/drivers/scsi/spfc/common/unf_lport.h
new file mode 100644
index 000000000000..dbd531f15b13
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_lport.h
@@ -0,0 +1,519 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_LPORT_H
+#define UNF_LPORT_H
+
+#include "unf_type.h"
+#include "unf_disc.h"
+#include "unf_event.h"
+#include "unf_common.h"
+
+#define UNF_PORT_TYPE_FC 0
+#define UNF_PORT_TYPE_DISC 1
+#define UNF_FW_UPDATE_PATH_LEN_MAX 255
+#define UNF_EXCHG_MGR_NUM (4)
+#define UNF_ERR_CODE_PRINT_TIME 10  /* max times an error code is printed */
+#define UNF_MAX_IO_TYPE_STAT_NUM 48 /* max number of abnormal IO type counters */
+#define UNF_MAX_IO_RETURN_VALUE 0x12
+#define UNF_MAX_SCSI_CMD 0xFF
+#define UNF_MAX_LPRT_SCSI_ID_MAP 2048
+
+enum unf_scsi_error_handle_type {
+ UNF_SCSI_ABORT_IO_TYPE = 0,
+ UNF_SCSI_DEVICE_RESET_TYPE,
+ UNF_SCSI_TARGET_RESET_TYPE,
+ UNF_SCSI_BUS_RESET_TYPE,
+ UNF_SCSI_HOST_RESET_TYPE,
+ UNF_SCSI_VIRTUAL_RESET_TYPE,
+ UNF_SCSI_ERROR_HANDLE_BUTT
+};
+
+enum unf_lport_destroy_step {
+ UNF_LPORT_DESTROY_STEP_0_SET_REMOVING = 0,
+ UNF_LPORT_DESTROY_STEP_1_REPORT_PORT_OUT,
+ UNF_LPORT_DESTROY_STEP_2_CLOSE_ROUTE,
+ UNF_LPORT_DESTROY_STEP_3_DESTROY_EVENT_CENTER,
+ UNF_LPORT_DESTROY_STEP_4_DESTROY_EXCH_MGR,
+ UNF_LPORT_DESTROY_STEP_5_DESTROY_ESGL_POOL,
+ UNF_LPORT_DESTROY_STEP_6_DESTROY_DISC_MGR,
+ UNF_LPORT_DESTROY_STEP_7_DESTROY_XCHG_MGR_TMP,
+ UNF_LPORT_DESTROY_STEP_8_DESTROY_RPORT_MG_TMP,
+ UNF_LPORT_DESTROY_STEP_9_DESTROY_LPORT_MG_TMP,
+ UNF_LPORT_DESTROY_STEP_10_DESTROY_SCSI_TABLE,
+ UNF_LPORT_DESTROY_STEP_11_UNREG_TGT_HOST,
+ UNF_LPORT_DESTROY_STEP_12_UNREG_SCSI_HOST,
+ UNF_LPORT_DESTROY_STEP_13_DESTROY_LW_INTERFACE,
+ UNF_LPORT_DESTROY_STEP_BUTT
+};
+
+enum unf_lport_enhanced_feature {
+ /* Enhance GFF feature connect even if fail to get GFF feature */
+ UNF_LPORT_ENHANCED_FEATURE_ENHANCED_GFF = 0x0001,
+ UNF_LPORT_ENHANCED_FEATURE_IO_TRANSFERLIST = 0x0002, /* Enhance IO balance */
+ UNF_LPORT_ENHANCED_FEATURE_IO_CHECKPOINT = 0x0004, /* Enhance IO check */
+ UNF_LPORT_ENHANCED_FEATURE_CLOSE_FW_ROUTE = 0x0008, /* Close FW ROUTE */
+ /* lowest frequency read SFP information */
+ UNF_LPORT_ENHANCED_FEATURE_READ_SFP_ONCE = 0x0010,
+ UNF_LPORT_ENHANCED_FEATURE_BUTT
+};
+
+enum unf_lport_login_state {
+ UNF_LPORT_ST_ONLINE = 0x2000, /* uninitialized */
+ UNF_LPORT_ST_INITIAL, /* initialized and LinkDown */
+ UNF_LPORT_ST_LINK_UP, /* initialized and Link UP */
+ UNF_LPORT_ST_FLOGI_WAIT, /* waiting for FLOGI completion */
+ UNF_LPORT_ST_PLOGI_WAIT, /* waiting for PLOGI completion */
+ UNF_LPORT_ST_RNN_ID_WAIT, /* waiting for RNN_ID completion */
+ UNF_LPORT_ST_RSNN_NN_WAIT, /* waiting for RSNN_NN completion */
+ UNF_LPORT_ST_RSPN_ID_WAIT, /* waiting for RSPN_ID completion */
+ UNF_LPORT_ST_RPN_ID_WAIT, /* waiting for RPN_ID completion */
+ UNF_LPORT_ST_RFT_ID_WAIT, /* waiting for RFT_ID completion */
+ UNF_LPORT_ST_RFF_ID_WAIT, /* waiting for RFF_ID completion */
+ UNF_LPORT_ST_SCR_WAIT, /* waiting for SCR completion */
+ UNF_LPORT_ST_READY, /* ready for use */
+ UNF_LPORT_ST_LOGO, /* waiting for LOGO completion */
+ UNF_LPORT_ST_RESET, /* being reset and will restart */
+ UNF_LPORT_ST_OFFLINE, /* offline */
+ UNF_LPORT_ST_BUTT
+};
+
+enum unf_lport_event {
+ UNF_EVENT_LPORT_NORMAL_ENTER = 0x8000, /* next state enter */
+ UNF_EVENT_LPORT_ONLINE = 0x8001, /* LPort link up */
+ UNF_EVENT_LPORT_LINK_UP = 0x8002, /* LPort link up */
+ UNF_EVENT_LPORT_LINK_DOWN = 0x8003, /* LPort link down */
+	UNF_EVENT_LPORT_OFFLINE = 0x8004, /* L_Port being stopped */
+ UNF_EVENT_LPORT_RESET = 0x8005,
+ UNF_EVENT_LPORT_REMOTE_ACC = 0x8006, /* next state enter */
+ UNF_EVENT_LPORT_REMOTE_RJT = 0x8007, /* rport reject */
+ UNF_EVENT_LPORT_REMOTE_TIMEOUT = 0x8008, /* rport time out */
+ UNF_EVENT_LPORT_READY = 0x8009,
+ UNF_EVENT_LPORT_REMOTE_BUTT
+};
+
+struct unf_cm_disc_mg_template {
+ /* start input:L_Port,return:ok/fail */
+ u32 (*unf_disc_start)(void *lport);
+ /* stop input: L_Port,return:ok/fail */
+ u32 (*unf_disc_stop)(void *lport);
+
+ /* Callback after disc complete[with event:ok/fail]. */
+ void (*unf_disc_callback)(void *lport, u32 result);
+};
+
+struct unf_chip_manage_info {
+ struct list_head list_chip_thread_entry;
+ struct list_head list_head;
+ spinlock_t chip_event_list_lock;
+ struct task_struct *thread;
+ u32 list_num;
+ u32 slot_id;
+ u8 chip_id;
+ u8 rsv;
+ u8 sfp_9545_fault;
+ u8 sfp_power_fault;
+ atomic_t ref_cnt;
+ u32 thread_exit;
+ struct unf_chip_info chip_info;
+ atomic_t card_loop_test_flag;
+ spinlock_t card_loop_back_state_lock;
+ char update_path[UNF_FW_UPDATE_PATH_LEN_MAX];
+};
+
+enum unf_timer_type {
+ UNF_TIMER_TYPE_TGT_IO,
+ UNF_TIMER_TYPE_INI_IO,
+ UNF_TIMER_TYPE_REQ_IO,
+ UNF_TIMER_TYPE_TGT_RRQ,
+ UNF_TIMER_TYPE_INI_RRQ,
+ UNF_TIMER_TYPE_SFS,
+ UNF_TIMER_TYPE_INI_ABTS
+};
+
+struct unf_cm_xchg_mgr_template {
+ void *(*unf_xchg_get_free_and_init)(void *lport, u32 xchg_type);
+ void *(*unf_look_up_xchg_by_id)(void *lport, u16 ox_id, u32 oid);
+ void *(*unf_look_up_xchg_by_tag)(void *lport, u16 hot_pool_tag);
+ void (*unf_xchg_release)(void *lport, void *xchg);
+ void (*unf_xchg_mgr_io_xchg_abort)(void *lport, void *rport, u32 sid, u32 did,
+ u32 extra_io_state);
+ void (*unf_xchg_mgr_sfs_xchg_abort)(void *lport, void *rport, u32 sid, u32 did);
+ void (*unf_xchg_add_timer)(void *xchg, ulong time_ms, enum unf_timer_type time_type);
+ void (*unf_xchg_cancel_timer)(void *xchg);
+ void (*unf_xchg_abort_all_io)(void *lport, u32 xchg_type, bool clean);
+ void *(*unf_look_up_xchg_by_cmnd_sn)(void *lport, u64 command_sn,
+ u32 world_id, void *pinitiator);
+ void (*unf_xchg_abort_by_lun)(void *lport, void *rport, u64 lun_id, void *xchg,
+ bool abort_all_lun_flag);
+
+ void (*unf_xchg_abort_by_session)(void *lport, void *rport);
+};
+
+struct unf_cm_lport_template {
+ void *(*unf_look_up_vport_by_index)(void *lport, u16 vp_index);
+ void *(*unf_look_up_vport_by_port_id)(void *lport, u32 port_id);
+ void *(*unf_look_up_vport_by_wwpn)(void *lport, u64 wwpn);
+ void *(*unf_look_up_vport_by_did)(void *lport, u32 did);
+ void (*unf_vport_remove)(void *vport);
+};
+
+struct unf_lport_state_ma {
+ enum unf_lport_login_state lport_state;
+ enum unf_lport_login_state (*lport_state_ma)(enum unf_lport_login_state old_state,
+ enum unf_lport_event event);
+};
+
+struct unf_rport_pool {
+ u32 rport_pool_count;
+ void *rport_pool_add;
+ struct list_head list_rports_pool;
+ spinlock_t rport_free_pool_lock;
+ /* for synchronous reuse RPort POOL completion */
+ struct completion *rport_pool_completion;
+ ulong *rpi_bitmap;
+};
+
+struct unf_vport_pool {
+ u16 vport_pool_count;
+ void *vport_pool_addr;
+ struct list_head list_vport_pool;
+ spinlock_t vport_pool_lock;
+ struct completion *vport_pool_completion;
+ u16 slab_next_index; /* Next free vport */
+ u16 slab_total_sum; /* Total Vport num */
+ struct unf_lport *vport_slab[ARRAY_INDEX_0];
+};
+
+struct unf_esgl_pool {
+ u32 esgl_pool_count;
+ void *esgl_pool_addr;
+ struct list_head list_esgl_pool;
+ spinlock_t esgl_pool_lock;
+ struct buf_describe esgl_buff_list;
+};
+
+/* little endian */
+struct unf_port_id_page {
+ struct list_head list_node_rscn;
+ u8 port_id_port;
+ u8 port_id_area;
+ u8 port_id_domain;
+ u8 addr_format : 2;
+ u8 event_qualifier : 4;
+ u8 reserved : 2;
+};
+
+struct unf_rscn_mgr {
+ spinlock_t rscn_id_list_lock;
+ u32 free_rscn_count;
+ struct list_head list_free_rscn_page;
+ struct list_head list_using_rscn_page;
+ void *rscn_pool_add;
+ struct unf_port_id_page *(*unf_get_free_rscn_node)(void *rscn_mg);
+ void (*unf_release_rscn_node)(void *rscn_mg, void *rscn_node);
+};
+
+struct unf_disc_rport_mg {
+ void *disc_pool_add;
+ struct list_head list_disc_rports_pool;
+ struct list_head list_disc_rports_busy;
+};
+
+struct unf_disc_manage_info {
+ struct list_head list_head;
+ spinlock_t disc_event_list_lock;
+ atomic_t disc_contrl_size;
+
+ u32 thread_exit;
+ struct task_struct *thread;
+};
+
+struct unf_disc {
+ u32 retry_count;
+ u32 max_retry_count;
+ u32 disc_flag;
+
+ struct completion *disc_completion;
+ atomic_t disc_ref_cnt;
+
+ struct list_head list_busy_rports;
+ struct list_head list_delete_rports;
+ struct list_head list_destroy_rports;
+
+ spinlock_t rport_busy_pool_lock;
+
+ struct unf_lport *lport;
+ enum unf_disc_state states;
+ struct delayed_work disc_work;
+
+ /* Disc operation template */
+ struct unf_cm_disc_mg_template disc_temp;
+
+ /* UNF_INIT_DISC/UNF_RSCN_DISC */
+ u32 disc_option;
+
+ /* RSCN list */
+ struct unf_rscn_mgr rscn_mgr;
+ struct unf_disc_rport_mg disc_rport_mgr;
+ struct unf_disc_manage_info disc_thread_info;
+
+ u64 last_disc_jiff;
+};
+
+enum unf_service_item {
+ UNF_SERVICE_ITEM_FLOGI = 0,
+ UNF_SERVICE_ITEM_PLOGI,
+ UNF_SERVICE_ITEM_PRLI,
+ UNF_SERVICE_ITEM_RSCN,
+ UNF_SERVICE_ITEM_ABTS,
+ UNF_SERVICE_ITEM_PDISC,
+ UNF_SERVICE_ITEM_ADISC,
+ UNF_SERVICE_ITEM_LOGO,
+ UNF_SERVICE_ITEM_SRR,
+ UNF_SERVICE_ITEM_RRQ,
+ UNF_SERVICE_ITEM_ECHO,
+ UNF_SERVICE_BUTT
+};
+
+/* Link service counter */
+struct unf_link_service_collect {
+ u64 service_cnt[UNF_SERVICE_BUTT];
+};
+
+struct unf_pcie_error_count {
+ u32 pcie_error_count[UNF_PCIE_BUTT];
+};
+
+#define INVALID_WWPN 0
+
+enum unf_device_scsi_state {
+ UNF_SCSI_ST_INIT = 0,
+ UNF_SCSI_ST_OFFLINE,
+ UNF_SCSI_ST_ONLINE,
+ UNF_SCSI_ST_DEAD,
+ UNF_SCSI_ST_BUTT
+};
+
+struct unf_wwpn_dfx_counter_info {
+ atomic64_t io_done_cnt[UNF_MAX_IO_RETURN_VALUE];
+ atomic64_t scsi_cmd_cnt[UNF_MAX_SCSI_CMD];
+ atomic64_t target_busy;
+ atomic64_t host_busy;
+ atomic_t error_handle[UNF_SCSI_ERROR_HANDLE_BUTT];
+ atomic_t error_handle_result[UNF_SCSI_ERROR_HANDLE_BUTT];
+ atomic_t device_alloc;
+ atomic_t device_destroy;
+};
+
+#define UNF_MAX_LUN_PER_TARGET 256
+struct unf_wwpn_rport_info {
+ u64 wwpn;
+	struct unf_rport *rport; /* R_Port which is link up */
+ void *lport; /* Lport */
+ u32 target_id; /* target_id distribute by scsi */
+ u32 las_ten_scsi_state;
+ atomic_t scsi_state;
+ struct unf_wwpn_dfx_counter_info *dfx_counter;
+ struct delayed_work loss_tmo_work;
+ bool need_scan;
+ struct list_head fc_lun_list;
+ u8 *lun_qos_level;
+};
+
+struct unf_rport_scsi_id_image {
+ spinlock_t scsi_image_table_lock;
+	struct unf_wwpn_rport_info *wwn_rport_info_table;
+ u32 max_scsi_id;
+};
+
+enum unf_lport_dirty_flag {
+ UNF_LPORT_DIRTY_FLAG_NONE = 0,
+ UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY = 0x100,
+ UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY = 0x200,
+ UNF_LPORT_DIRTY_FLAG_DISC_DIRTY = 0x400,
+ UNF_LPORT_DIRTY_FLAG_BUTT
+};
+
+typedef struct unf_rport *(*unf_rport_set_qualifier)(struct unf_lport *lport,
+ struct unf_rport *rport_by_nport_id,
+ struct unf_rport *rport_by_wwpn,
+ u64 wwpn, u32 sid);
+
+typedef u32 (*unf_tmf_status_recovery)(void *rport, void *xchg);
+
+enum unf_start_work_state {
+ UNF_START_WORK_STOP,
+ UNF_START_WORK_BEGIN,
+ UNF_START_WORK_COMPLETE
+};
+
+struct unf_qos_info {
+ u64 wwpn;
+ u32 nport_id;
+ enum unf_rport_qos_level qos_level;
+ struct list_head entry_qos_info;
+};
+
+struct unf_ini_private_info {
+ u32 driver_type; /* Driver Type */
+ void *lower; /* driver private pointer */
+};
+
+struct unf_product_host_info {
+ void *tgt_host;
+ struct Scsi_Host *host;
+ struct unf_ini_private_info drv_private_info;
+ struct Scsi_Host scsihost;
+};
+
+struct unf_lport {
+ u32 port_type; /* Port Type, fc or fcoe */
+ atomic_t port_ref_cnt; /* LPort reference counter */
+ void *fc_port; /* hard adapter hba pointer */
+ void *rport, *drport; /* Used for SCSI interface */
+ void *vport;
+ ulong system_io_bus_num;
+
+ struct unf_product_host_info host_info; /* scsi host mg */
+ struct unf_rport_scsi_id_image rport_scsi_table;
+ bool port_removing;
+ bool io_allowed;
+ bool port_dirt_exchange;
+
+ spinlock_t xchg_mgr_lock;
+ struct list_head list_xchg_mgr_head;
+ struct list_head list_drty_xchg_mgr_head;
+ void *xchg_mgr[UNF_EXCHG_MGR_NUM];
+ bool qos_cs_ctrl;
+ bool priority;
+ enum unf_rport_qos_level qos_level;
+ spinlock_t qos_mgr_lock;
+ struct list_head list_qos_head;
+ struct list_head list_vports_head; /* Vport Mg */
+ struct list_head list_intergrad_vports; /* Vport intergrad list */
+ struct list_head list_destroy_vports; /* Vport destroy list */
+
+ struct list_head entry_vport; /* VPort entry, hook in list_vports_head */
+
+ struct list_head entry_lport; /* LPort entry */
+ spinlock_t lport_state_lock; /* UL Port Lock */
+ struct unf_disc disc; /* Disc and rport Mg */
+ struct unf_rport_pool rport_pool; /* rport pool,Vport share Lport pool */
+ struct unf_esgl_pool esgl_pool; /* external sgl pool */
+	u32 port_id; /* Port management ID, 0x11000 etc. */
+ enum unf_lport_login_state states;
+ u32 link_up;
+ u32 speed;
+
+ u64 node_name;
+ u64 port_name;
+ u64 fabric_node_name;
+ u32 nport_id;
+ u32 max_frame_size;
+ u32 ed_tov;
+ u32 ra_tov;
+ u32 class_of_service;
+ u32 options; /* ini or tgt */
+ u32 retries;
+ u32 max_retry_count;
+ enum unf_act_topo act_topo;
+ bool switch_state; /* TRUE---->ON,false---->OFF */
+ bool last_switch_state; /* TRUE---->ON,false---->OFF */
+ bool bbscn_support; /* TRUE---->ON,false---->OFF */
+
+ enum unf_start_work_state start_work_state;
+ struct unf_cm_xchg_mgr_template xchg_mgr_temp; /* Xchg Mg operation template */
+	struct unf_cm_lport_template lport_mgr_temp; /* LPort operation template */
+ struct unf_low_level_functioon_op low_level_func;
+	struct unf_event_mgr event_mgr; /* Event manager */
+ struct delayed_work retry_work; /* poll work or delay work */
+
+ struct workqueue_struct *link_event_wq;
+ struct workqueue_struct *xchg_wq;
+ atomic64_t io_stat[UNF_MAX_IO_TYPE_STAT_NUM];
+ struct unf_err_code err_code_sum; /* Error code counter */
+ struct unf_port_dynamic_info port_dynamic_info;
+ struct unf_link_service_collect link_service_info;
+ struct unf_pcie_error_count pcie_error_cnt;
+ unf_rport_set_qualifier unf_qualify_rport; /* Qualify Rport */
+
+ unf_tmf_status_recovery unf_tmf_abnormal_recovery; /* tmf marker recovery */
+
+ struct delayed_work route_timer_work; /* L_Port timer route */
+
+ u16 vp_index; /* Vport Index, Lport:0 */
+ u16 path_id;
+ struct unf_vport_pool *vport_pool; /* Only for Lport */
+ void *lport_mgr[UNF_MAX_LPRT_SCSI_ID_MAP];
+ bool vport_remove_flags;
+
+ void *root_lport; /* Point to physic Lport */
+
+ struct completion *lport_free_completion; /* Free LPort Completion */
+
+#define UNF_LPORT_NOP 1
+#define UNF_LPORT_NORMAL 0
+
+ atomic_t lport_no_operate_flag;
+
+ bool loop_back_test_mode;
+ bool switch_state_before_test_mode; /* TRUE---->ON,false---->OFF */
+ u32 enhanced_features; /* Enhanced Features */
+
+ u32 destroy_step;
+ u32 dirty_flag;
+ struct unf_chip_manage_info *chip_info;
+
+ u8 unique_position;
+ u8 sfp_power_fault_count;
+ u8 sfp_9545_fault_count;
+ u64 last_tx_fault_jif; /* SFP last tx fault jiffies */
+ u32 target_cnt;
+ /* Server card: UNF_FC_SERVER_BOARD_32_G(6) for 32G mode,
+ * UNF_FC_SERVER_BOARD_16_G(7) for 16G mode
+ */
+ u32 card_type;
+ atomic_t scsi_session_add_success;
+ atomic_t scsi_session_add_failed;
+ atomic_t scsi_session_del_success;
+ atomic_t scsi_session_del_failed;
+ atomic_t add_start_work_failed;
+ atomic_t add_closing_work_failed;
+ atomic_t device_alloc;
+ atomic_t device_destroy;
+ atomic_t session_loss_tmo;
+ atomic_t alloc_scsi_id;
+ atomic_t resume_scsi_id;
+ atomic_t reuse_scsi_id;
+ atomic64_t last_exchg_mgr_idx;
+ atomic_t host_no;
+ atomic64_t exchg_index;
+ int scan_world_id;
+ struct semaphore wmi_task_sema;
+ bool ready_to_remove;
+ u32 pcie_link_down_cnt;
+ bool pcie_link_down;
+ u8 fw_version[SPFC_VER_LEN];
+ atomic_t link_lose_tmo;
+ u32 max_ssq_num;
+};
+
+void unf_lport_state_ma(struct unf_lport *lport, enum unf_lport_event lport_event);
+void unf_lport_error_recovery(struct unf_lport *lport);
+void unf_set_lport_state(struct unf_lport *lport, enum unf_lport_login_state state);
+void unf_init_port_parms(struct unf_lport *lport);
+u32 unf_lport_enter_flogi(struct unf_lport *lport);
+void unf_lport_enter_sns_plogi(struct unf_lport *lport);
+u32 unf_init_disc_mgr(struct unf_lport *lport);
+u32 unf_init_lport_route(struct unf_lport *lport);
+void unf_destroy_lport_route(struct unf_lport *lport);
+void unf_reset_lport_params(struct unf_lport *lport);
+void unf_cm_mark_dirty_mem(struct unf_lport *lport, enum unf_lport_dirty_flag type);
+struct unf_lport *unf_cm_lookup_vport_by_vp_index(struct unf_lport *lport, u16 vp_index);
+struct unf_lport *unf_cm_lookup_vport_by_did(struct unf_lport *lport, u32 did);
+struct unf_lport *unf_cm_lookup_vport_by_wwpn(struct unf_lport *lport, u64 wwpn);
+void unf_cm_vport_remove(struct unf_lport *vport);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_ls.c b/drivers/scsi/spfc/common/unf_ls.c
new file mode 100644
index 000000000000..6dab1f9cbb46
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_ls.c
@@ -0,0 +1,4884 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_ls.h"
+#include "unf_log.h"
+#include "unf_service.h"
+#include "unf_portman.h"
+#include "unf_gs.h"
+#include "unf_npiv.h"
+
+static void unf_flogi_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_plogi_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_prli_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_rscn_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_pdisc_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_adisc_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_logo_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_logo_ob_callback(struct unf_xchg *xchg);
+static void unf_logo_callback(void *lport, void *rport, void *xchg);
+static void unf_rrq_callback(void *lport, void *rport, void *xchg);
+static void unf_rrq_ob_callback(struct unf_xchg *xchg);
+static void unf_lport_update_nport_id(struct unf_lport *lport, u32 nport_id);
+static void
+unf_lport_update_time_params(struct unf_lport *lport,
+ struct unf_flogi_fdisc_payload *flogi_payload);
+
+static void unf_login_with_rport_in_n2n(struct unf_lport *lport,
+ u64 remote_port_name,
+ u64 remote_node_name);
+#define UNF_LOWLEVEL_BBCREDIT 0x6
+#define UNF_DEFAULT_BB_SC_N 0
+
+#define UNF_ECHO_REQ_SIZE 0
+#define UNF_ECHO_WAIT_SEM_TIMEOUT(lport) (2 * (ulong)(lport)->ra_tov)
+
+#define UNF_SERVICE_COLLECT(service_collect, item) \
+ do { \
+ if ((item) < UNF_SERVICE_BUTT) { \
+ (service_collect).service_cnt[(item)]++; \
+ } \
+ } while (0)
+
+static void unf_check_rport_need_delay_prli(struct unf_lport *lport,
+ struct unf_rport *rport,
+ u32 port_feature)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ port_feature &= UNF_PORT_MODE_BOTH;
+
+ /* Used for: L_Port has INI mode & R_Port is not SW */
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ /*
+ * 1. immediately: R_Port only with TGT, or
+ * L_Port only with INI & R_Port has TGT mode, send PRLI
+ * immediately
+ */
+ if ((port_feature == UNF_PORT_MODE_TGT ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) ||
+ (UNF_PORT_MODE_TGT == (port_feature & UNF_PORT_MODE_TGT))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) send PRLI",
+ lport->port_id, lport->nport_id,
+ rport->nport_id, port_feature);
+ ret = unf_send_prli(lport, rport, ELS_PRLI);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) send PRLI failed",
+ lport->port_id, lport->nport_id,
+ rport->nport_id, port_feature);
+
+ unf_rport_error_recovery(rport);
+ }
+ }
+ /* 2. R_Port has BOTH mode or unknown, Delay to send PRLI */
+ else if (port_feature != UNF_PORT_MODE_INI) {
+ /* Prevent: PRLI done before PLOGI */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) delay to send PRLI",
+ lport->port_id, lport->nport_id,
+ rport->nport_id, port_feature);
+
+ /* Delay to send PRLI to R_Port */
+ unf_rport_delay_login(rport);
+ } else {
+ /* 3. R_Port only with INI mode: wait for R_Port's PRLI:
+ * Do not care
+ */
+ /* Cancel recovery(timer) work */
+ if (delayed_work_pending(&rport->recovery_work)) {
+ if (cancel_delayed_work(&rport->recovery_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) is pure INI",
+ lport->port_id,
+ lport->nport_id,
+ rport->nport_id,
+ port_feature);
+
+ unf_rport_ref_dec(rport);
+ }
+ }
+
+			/* Server: R_Port only supports INI, nothing more to do */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) wait for PRLI",
+ lport->port_id, lport->nport_id,
+ rport->nport_id, port_feature);
+ }
+ }
+}
+
+static u32 unf_low_level_bb_credit(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 bb_credit = UNF_LOWLEVEL_BBCREDIT;
+
+ if (unlikely(!lport))
+ return bb_credit;
+
+ unf_lport = lport;
+
+ if (unlikely(!unf_lport->low_level_func.port_mgr_op.ll_port_config_get))
+ return bb_credit;
+
+ ret = unf_lport->low_level_func.port_mgr_op.ll_port_config_get((void *)unf_lport->fc_port,
+ UNF_PORT_CFG_GET_WORKBALE_BBCREDIT,
+ (void *)&bb_credit);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[warn]Port(0x%x) get BB_Credit failed, use default value(%d)",
+ unf_lport->port_id, UNF_LOWLEVEL_BBCREDIT);
+
+ bb_credit = UNF_LOWLEVEL_BBCREDIT;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) with BB_Credit(%u)", unf_lport->port_id,
+ bb_credit);
+
+ return bb_credit;
+}
+
+u32 unf_low_level_bb_scn(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_low_level_port_mgr_op *port_mgr = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 bb_scn = UNF_DEFAULT_BB_SC_N;
+
+ if (unlikely(!lport))
+ return bb_scn;
+
+ unf_lport = lport;
+ port_mgr = &unf_lport->low_level_func.port_mgr_op;
+
+ if (unlikely(!port_mgr->ll_port_config_get))
+ return bb_scn;
+
+ ret = port_mgr->ll_port_config_get((void *)unf_lport->fc_port,
+ UNF_PORT_CFG_GET_WORKBALE_BBSCN,
+ (void *)&bb_scn);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[warn]Port(0x%x) get bbscn failed, use default value(%d)",
+ unf_lport->port_id, UNF_DEFAULT_BB_SC_N);
+
+ bb_scn = UNF_DEFAULT_BB_SC_N;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x)'s bbscn(%d)", unf_lport->port_id, bb_scn);
+
+ return bb_scn;
+}
+
+static void unf_fill_rec_pld(struct unf_rec_pld *rec_pld, u32 sid)
+{
+ FC_CHECK_RETURN_VOID(rec_pld);
+
+ rec_pld->rec_cmnd = (UNF_ELS_CMND_REC);
+ rec_pld->xchg_org_sid = sid;
+ rec_pld->ox_id = INVALID_VALUE16;
+ rec_pld->rx_id = INVALID_VALUE16;
+}
+
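+/* Send a REC ELS for the given IO exchange to query its state at the
+ * remote port; the originator hot-pool tag and magic number are carried
+ * in the frame package.
+ */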
+u32 unf_send_rec(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *io_xchg)
+{
+ struct unf_rec_pld *rec_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(io_xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) exchange can't be NULL for REC",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_REC;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+ pkg.origin_hottag = io_xchg->hotpooltag;
+ pkg.origin_magicnum = io_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+ rec_pld = &fc_entry->rec.rec_pld;
+ memset(rec_pld, 0, sizeof(struct unf_rec_pld));
+
+ unf_fill_rec_pld(rec_pld, lport->nport_id);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]LOGIN: Send REC %s. Port(0x%x_0x%x_0x%llx)--->RPort(0x%x_0x%llx) with hottag(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, lport->port_name, rport->nport_id,
+ rport->port_name, xchg->hotpooltag);
+
+ return ret;
+}
+
+static void unf_fill_flogi_pld(struct unf_flogi_fdisc_payload *flogi_pld,
+ struct unf_lport *lport)
+{
+ struct unf_fabric_parm *fabric_parms = NULL;
+
+ FC_CHECK_RETURN_VOID(flogi_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ fabric_parms = &flogi_pld->fabric_parms;
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT ||
+ lport->act_topo == UNF_TOP_P2P_MASK) {
+ /* Fabric or P2P or FCoE VN2VN topology */
+ fabric_parms->co_parms.bb_credit = unf_low_level_bb_credit(lport);
+ fabric_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ fabric_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ fabric_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ fabric_parms->co_parms.bbscn = unf_low_level_bb_scn(lport);
+ } else {
+ /* Loop topology here */
+ fabric_parms->co_parms.clean_address = UNF_CLEAN_ADDRESS_DEFAULT;
+ fabric_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
+ fabric_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ fabric_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ fabric_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
+ fabric_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ }
+
+ if (lport->low_level_func.support_max_npiv_num != 0)
+ /* support NPIV */
+ fabric_parms->co_parms.clean_address = 1;
+
+ fabric_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID;
+
+	/* set the priority according to the user configuration */
+ if (lport->qos_cs_ctrl)
+ fabric_parms->cl_parms[ARRAY_INDEX_2].priority = UNF_PRIORITY_ENABLE;
+ else
+ fabric_parms->cl_parms[ARRAY_INDEX_2].priority = UNF_PRIORITY_DISABLE;
+
+ fabric_parms->cl_parms[ARRAY_INDEX_2].sequential_delivery = UNF_SEQUEN_DELIVERY_REQ;
+ fabric_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
+
+ fabric_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ fabric_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ fabric_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ fabric_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+}
+
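+/* Build and send a FLOGI ELS to the fabric: fill the common fabric
+ * parameters and register callbacks for the ACC/RJT response and for
+ * send failure.
+ */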
+u32 unf_send_flogi(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_xchg *xchg = NULL;
+ struct unf_flogi_fdisc_payload *flogi_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for FLOGI",
+ lport->port_id);
+
+ return ret;
+ }
+
+ /* FLOGI */
+ xchg->cmnd_code = ELS_FLOGI;
+
+	/* for received FLOGI ACC/RJT processing */
+	xchg->callback = unf_flogi_callback;
+	/* for FLOGI send failure processing */
+	xchg->ob_callback = unf_flogi_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ flogi_pld = &fc_entry->flogi.flogi_payload;
+ memset(flogi_pld, 0, sizeof(struct unf_flogi_fdisc_payload));
+ unf_fill_flogi_pld(flogi_pld, lport);
+ flogi_pld->cmnd = (UNF_ELS_CMND_FLOGI);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Begin to send FLOGI. Port(0x%x)--->RPort(0x%x) with hottag(0x%x)",
+ lport->port_id, rport->nport_id, xchg->hotpooltag);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, flogi_pld,
+ sizeof(struct unf_flogi_fdisc_payload));
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]LOGIN: Send FLOGI failed. Port(0x%x)--->RPort(0x%x)",
+ lport->port_id, rport->nport_id);
+
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ }
+
+ return ret;
+}
+
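+/* NPIV variant of fabric login: send FDISC with the same payload layout
+ * as FLOGI to obtain an N_Port_ID for the vport.
+ */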
+u32 unf_send_fdisc(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_xchg *exch = NULL;
+ struct unf_flogi_fdisc_payload *fdisc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ exch = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!exch) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for FDISC",
+ lport->port_id);
+
+ return ret;
+ }
+
+ exch->cmnd_code = ELS_FDISC;
+
+ exch->callback = unf_fdisc_callback;
+ exch->ob_callback = unf_fdisc_ob_callback;
+
+ unf_fill_package(&pkg, exch, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ fdisc_pld = &fc_entry->fdisc.fdisc_payload;
+ memset(fdisc_pld, 0, sizeof(struct unf_flogi_fdisc_payload));
+ unf_fill_flogi_pld(fdisc_pld, lport);
+ fdisc_pld->cmnd = UNF_ELS_CMND_FDISC;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, exch);
+
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)exch);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: FDISC send %s. Port(0x%x)--->RPort(0x%x) with hottag(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, exch->hotpooltag);
+
+ return ret;
+}
+
+static void unf_fill_plogi_pld(struct unf_plogi_payload *plogi_pld,
+ struct unf_lport *lport)
+{
+ struct unf_lgn_parm *login_parms = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(plogi_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = lport->root_lport;
+ plogi_pld->cmnd = (UNF_ELS_CMND_PLOGI);
+ login_parms = &plogi_pld->stparms;
+
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ /* P2P or Fabric mode or FCoE VN2VN */
+ login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT;
+ login_parms->co_parms.bbscn =
+ (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
+ ? 0
+ : unf_low_level_bb_scn(lport);
+ } else {
+ /* Public loop & Private loop mode */
+ login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
+ }
+
+ login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
+ login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
+ login_parms->co_parms.e_d_tov = UNF_DEFAULT_EDTOV;
+ if (unf_lport->priority == UNF_PRIORITY_ENABLE) {
+ login_parms->cl_parms[ARRAY_INDEX_2].priority =
+ UNF_PRIORITY_ENABLE;
+ } else {
+ login_parms->cl_parms[ARRAY_INDEX_2].priority =
+ UNF_PRIORITY_DISABLE;
+ }
+
+ /* for class_3 */
+ login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID;
+ login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
+ login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
+
+ login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, plogi_pld, sizeof(struct unf_plogi_payload));
+}
+
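+/* Send a PLOGI ELS to an R_Port: abort any IO exchanges still pending
+ * for this S_ID/D_ID pair, fill the login parameters and register the
+ * response and failure callbacks.
+ */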
+u32 unf_send_plogi(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_plogi_payload *plogi_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PLOGI",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_PLOGI;
+
+ xchg->callback = unf_plogi_callback;
+ xchg->ob_callback = unf_plogi_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+ unf_cm_xchg_mgr_abort_io_by_id(lport, rport, xchg->sid, xchg->did, 0);
+
+ plogi_pld = &fc_entry->plogi.payload;
+ memset(plogi_pld, 0, sizeof(struct unf_plogi_payload));
+ unf_fill_plogi_pld(plogi_pld, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send PLOGI %s. Port(0x%x_0x%x_0x%llx)--->RPort(0x%x_0x%llx) with hottag(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, lport->port_name, rport->nport_id,
+ rport->port_name, xchg->hotpooltag);
+
+ return ret;
+}
+
+static void unf_fill_logo_pld(struct unf_logo_payload *logo_pld,
+ struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(logo_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ logo_pld->cmnd = (UNF_ELS_CMND_LOGO);
+ logo_pld->nport_id = (lport->nport_id);
+ logo_pld->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ logo_pld->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, logo_pld, sizeof(struct unf_logo_payload));
+}
+
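+/* Send a LOGO ELS to an R_Port and count the retry; the callback either
+ * retries or forces the R_Port link down, the ob_callback is a no-op.
+ */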
+u32 unf_send_logo(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_logo_payload *logo_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for LOGO",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_LOGO;
+ /* retry or link down immediately */
+ xchg->callback = unf_logo_callback;
+ /* do nothing */
+ xchg->ob_callback = unf_logo_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ logo_pld = &fc_entry->logo.payload;
+ memset(logo_pld, 0, sizeof(struct unf_logo_payload));
+ unf_fill_logo_pld(logo_pld, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ rport->logo_retries++;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]LOGIN: LOGO send %s. Port(0x%x)--->RPort(0x%x) hottag(0x%x) Retries(%d)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, xchg->hotpooltag, rport->logo_retries);
+
+ return ret;
+}
+
+u32 unf_send_logo_by_did(struct unf_lport *lport, u32 did)
+{
+ struct unf_logo_payload *logo_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, did, NULL, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for LOGO",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_LOGO;
+
+ unf_fill_package(&pkg, xchg, NULL);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ logo_pld = &fc_entry->logo.payload;
+ memset(logo_pld, 0, sizeof(struct unf_logo_payload));
+ unf_fill_logo_pld(logo_pld, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: LOGO send %s. Port(0x%x)--->RPort(0x%x) with hottag(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ did, xchg->hotpooltag);
+
+ return ret;
+}
+
+static void unf_echo_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_rport *unf_rport = (struct unf_rport *)rport;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_echo_payload *echo_rsp_pld = NULL;
+ u32 cmnd = 0;
+ u32 mag_ver_local = 0;
+ u32 mag_ver_remote = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_xchg = (struct unf_xchg *)xchg;
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr)
+ return;
+
+ echo_rsp_pld = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc.echo_pld;
+ FC_CHECK_RETURN_VOID(echo_rsp_pld);
+
+ if (unf_xchg->byte_orders & UNF_BIT_2) {
+ unf_big_end_to_cpu((u8 *)echo_rsp_pld, sizeof(struct unf_echo_payload));
+ cmnd = echo_rsp_pld->cmnd;
+ } else {
+ cmnd = echo_rsp_pld->cmnd;
+ }
+
+ mag_ver_local = echo_rsp_pld->data[ARRAY_INDEX_0];
+ mag_ver_remote = echo_rsp_pld->data[ARRAY_INDEX_1];
+
+ if (UNF_ELS_CMND_ACC == (cmnd & UNF_ELS_CMND_HIGH_MASK)) {
+ if (mag_ver_local == ECHO_MG_VERSION_LOCAL &&
+ mag_ver_remote == ECHO_MG_VERSION_REMOTE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "LPort(0x%x) send ECHO to RPort(0x%x), received ACC. local snd echo:(0x%x), remote rcv echo:(0x%x), remote snd echo acc:(0x%x), local rcv echo acc:(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME],
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME],
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME],
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME]);
+ } else if ((mag_ver_local == ECHO_MG_VERSION_LOCAL) &&
+ (mag_ver_remote != ECHO_MG_VERSION_REMOTE)) {
+			/* the peer doesn't support smartping, only local send
+			 * and receive response time stamps are reported
+			 */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "LPort(0x%x) send ECHO to RPort(0x%x), received ACC. local snd echo:(0x%x), local rcv echo acc:(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME],
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME]);
+ } else if ((mag_ver_local != ECHO_MG_VERSION_LOCAL) &&
+ (mag_ver_remote != ECHO_MG_VERSION_REMOTE)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR,
+ "LPort(0x%x) send ECHO to RPort(0x%x), received ACC. local and remote is not FC HBA",
+ unf_lport->port_id, unf_rport->nport_id);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ECHO to RPort(0x%x) and received RJT",
+ unf_lport->port_id, unf_rport->nport_id);
+ }
+
+ unf_xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_OK;
+ unf_xchg->echo_info.response_time = jiffies - unf_xchg->echo_info.response_time;
+
+ /* wake up semaphore */
+ up(&unf_xchg->echo_info.echo_sync_sema);
+}
+
+static void unf_echo_ob_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_lport = xchg->lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+ unf_rport = xchg->rport;
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ECHO to RPort(0x%x) but timeout",
+ unf_lport->port_id, unf_rport->nport_id);
+
+ xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_FAIL;
+
+ /* wake up semaphore */
+ up(&xchg->echo_info.echo_sync_sema);
+}
+
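+/* Send an ECHO ELS and wait synchronously (up to two R_A_TOV) for the
+ * response; on success *time returns the link round-trip time derived
+ * from the command/response timestamps.
+ */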
+u32 unf_send_echo(struct unf_lport *lport, struct unf_rport *rport, u32 *time)
+{
+ struct unf_echo_payload *echo_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+ ulong delay = 0;
+ dma_addr_t phy_echo_addr;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(time, UNF_RETURN_ERROR);
+
+ delay = UNF_ECHO_WAIT_SEM_TIMEOUT(lport);
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for ECHO",
+ lport->port_id);
+
+ return ret;
+ }
+
+ /* ECHO */
+ xchg->cmnd_code = ELS_ECHO;
+ xchg->fcp_sfs_union.sfs_entry.cur_offset = UNF_ECHO_REQ_SIZE;
+
+ /* Set callback function, wake up semaphore */
+ xchg->callback = unf_echo_callback;
+ /* wake up semaphore */
+ xchg->ob_callback = unf_echo_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ echo_pld = (struct unf_echo_payload *)unf_get_one_big_sfs_buf(xchg);
+ if (!echo_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't allocate buffer for ECHO",
+ lport->port_id);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ fc_entry->echo.echo_pld = echo_pld;
+ phy_echo_addr = pci_map_single(lport->low_level_func.dev, echo_pld,
+ UNF_ECHO_PAYLOAD_LEN,
+ DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(lport->low_level_func.dev, phy_echo_addr)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) pci map err", lport->port_id);
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+ fc_entry->echo.phy_echo_addr = phy_echo_addr;
+ memset(echo_pld, 0, sizeof(struct unf_echo_payload));
+ echo_pld->cmnd = (UNF_ELS_CMND_ECHO);
+ echo_pld->data[ARRAY_INDEX_0] = ECHO_MG_VERSION_LOCAL;
+
+ ret = unf_xchg_ref_inc(xchg, SEND_ELS);
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
+
+ xchg->echo_info.response_time = jiffies;
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK) {
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ } else {
+ if (down_timeout(&xchg->echo_info.echo_sync_sema,
+ (long)msecs_to_jiffies((u32)delay))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]ECHO send %s. Port(0x%x)--->RPort(0x%x) but response timeout ",
+ (ret != RETURN_OK) ? "failed" : "succeed",
+ lport->port_id, rport->nport_id);
+
+ xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_FAIL;
+ }
+
+ if (xchg->echo_info.echo_result == UNF_ELS_ECHO_RESULT_FAIL) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "Echo send fail or timeout");
+
+ ret = UNF_RETURN_ERROR;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+				     "echo acc rsp, echo_cmd_snd(0x%xus)-->echo_cmd_rcv(0x%xus)-->echo_acc_snd(0x%xus)-->echo_acc_rcv(0x%xus).",
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME],
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME],
+ xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME],
+ xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME]);
+
+ *time =
+ (xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME] -
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME]) -
+ (xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME] -
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME]);
+ }
+ }
+
+ pci_unmap_single(lport->low_level_func.dev, phy_echo_addr,
+ UNF_ECHO_PAYLOAD_LEN, DMA_BIDIRECTIONAL);
+ fc_entry->echo.phy_echo_addr = 0;
+ unf_xchg_ref_dec(xchg, SEND_ELS);
+
+ return ret;
+}
+
+static void unf_fill_prli_pld(struct unf_prli_payload *prli_pld,
+ struct unf_lport *lport)
+{
+ u32 pld_len = 0;
+
+ FC_CHECK_RETURN_VOID(prli_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ pld_len = sizeof(struct unf_prli_payload) - UNF_PRLI_SIRT_EXTRA_SIZE;
+ prli_pld->cmnd =
+ (UNF_ELS_CMND_PRLI |
+ ((u32)UNF_FC4_FRAME_PAGE_SIZE << UNF_FC4_FRAME_PAGE_SIZE_SHIFT) |
+ ((u32)pld_len));
+
+ prli_pld->parms[ARRAY_INDEX_0] = (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_I_PAIR);
+ prli_pld->parms[ARRAY_INDEX_1] = UNF_NOT_MEANINGFUL;
+ prli_pld->parms[ARRAY_INDEX_2] = UNF_NOT_MEANINGFUL;
+
+ /* About Read Xfer_rdy disable */
+ prli_pld->parms[ARRAY_INDEX_3] = (UNF_FC4_FRAME_PARM_3_R_XFER_DIS | lport->options);
+
+ /* About FCP confirm */
+ if (lport->low_level_func.lport_cfg_items.fcp_conf)
+ prli_pld->parms[ARRAY_INDEX_3] |= UNF_FC4_FRAME_PARM_3_CONF_ALLOW;
+
+ /* About Tape support */
+ if (lport->low_level_func.lport_cfg_items.tape_support)
+ prli_pld->parms[ARRAY_INDEX_3] |=
+ (UNF_FC4_FRAME_PARM_3_REC_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_CONF_ALLOW);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x)'s PRLI payload: options(0x%x) parameter-3(0x%x)",
+ lport->port_id, lport->options,
+ prli_pld->parms[ARRAY_INDEX_3]);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prli_pld, sizeof(struct unf_prli_payload));
+}
+
+u32 unf_send_prli(struct unf_lport *lport, struct unf_rport *rport,
+ u32 cmnd_code)
+{
+ struct unf_prli_payload *prli_pal = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PRLI",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = cmnd_code;
+
+ /* Callback for received PRLI ACC/RJT processing */
+ xchg->callback = unf_prli_callback;
+ /* Callback for PRLI send-failure processing */
+ xchg->ob_callback = unf_prli_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ prli_pal = &fc_entry->prli.payload;
+ memset(prli_pal, 0, sizeof(struct unf_prli_payload));
+ unf_fill_prli_pld(prli_pal, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PRLI send %s. Port(0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_prlo_pld(struct unf_prli_payload *prlo_pld,
+ struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(prlo_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ prlo_pld->cmnd = (UNF_ELS_CMND_PRLO);
+ prlo_pld->parms[ARRAY_INDEX_0] = (UNF_FC4_FRAME_PARM_0_FCP);
+ prlo_pld->parms[ARRAY_INDEX_1] = UNF_NOT_MEANINGFUL;
+ prlo_pld->parms[ARRAY_INDEX_2] = UNF_NOT_MEANINGFUL;
+ prlo_pld->parms[ARRAY_INDEX_3] = UNF_NO_SERVICE_PARAMS;
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prlo_pld, sizeof(struct unf_prli_payload));
+}
+
+u32 unf_send_prlo(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_prli_payload *prlo_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PRLO", lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_PRLO;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ prlo_pld = &fc_entry->prlo.payload;
+ memset(prlo_pld, 0, sizeof(struct unf_prli_payload));
+ unf_fill_prlo_pld(prlo_pld, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PRLO send %s. Port(0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_pdisc_pld(struct unf_plogi_payload *pdisc_pld,
+ struct unf_lport *lport)
+{
+ struct unf_lgn_parm *login_parms = NULL;
+
+ FC_CHECK_RETURN_VOID(pdisc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ pdisc_pld->cmnd = (UNF_ELS_CMND_PDISC);
+ login_parms = &pdisc_pld->stparms;
+
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ /* P2P & Fabric */
+ login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT;
+ login_parms->co_parms.bbscn =
+ (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
+ ? 0
+ : unf_low_level_bb_scn(lport);
+ } else {
+ /* Public loop & Private loop */
+ login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
+ /* Alternate BB credit management: 1 */
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
+ }
+
+ login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
+ login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
+ login_parms->co_parms.e_d_tov = (lport->ed_tov);
+
+ login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ /* class-3 */
+ login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID;
+ login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
+ login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, pdisc_pld, sizeof(struct unf_plogi_payload));
+}
+
+u32 unf_send_pdisc(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* PLOGI/PDISC with same payload */
+ struct unf_plogi_payload *pdisc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PDISC",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_PDISC;
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ pdisc_pld = &fc_entry->pdisc.payload;
+ memset(pdisc_pld, 0, sizeof(struct unf_plogi_payload));
+ unf_fill_pdisc_pld(pdisc_pld, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PDISC send %s. Port(0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id, rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_adisc_pld(struct unf_adisc_payload *adisc_pld,
+ struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(adisc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ adisc_pld->cmnd = (UNF_ELS_CMND_ADISC);
+ adisc_pld->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ adisc_pld->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ adisc_pld->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ adisc_pld->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, adisc_pld, sizeof(struct unf_adisc_payload));
+}
+
+u32 unf_send_adisc(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_adisc_payload *adisc_pal = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for ADISC", lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_ADISC;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ adisc_pal = &fc_entry->adisc.adisc_payl;
+ memset(adisc_pal, 0, sizeof(struct unf_adisc_payload));
+ unf_fill_adisc_pld(adisc_pal, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: ADISC send %s. Port(0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_rrq_pld(struct unf_rrq *rrq_pld, struct unf_xchg *xchg)
+{
+ FC_CHECK_RETURN_VOID(rrq_pld);
+ FC_CHECK_RETURN_VOID(xchg);
+
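+ /* RRQ payload: originator S_ID and the OX_ID/RX_ID of the exchange to be released */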
+ rrq_pld->cmnd = UNF_ELS_CMND_RRQ;
+ rrq_pld->sid = xchg->sid;
+ rrq_pld->oxid_rxid = ((u32)xchg->oxid << UNF_SHIFT_16 | xchg->rxid);
+}
+
+u32 unf_send_rrq(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ /* after ABTS Done */
+ struct unf_rrq *rrq_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ unf_xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!unf_xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for RRQ",
+ lport->port_id);
+
+ return ret;
+ }
+
+ unf_xchg->cmnd_code = ELS_RRQ; /* RRQ */
+
+ unf_xchg->callback = unf_rrq_callback; /* release I/O exchange context */
+ unf_xchg->ob_callback = unf_rrq_ob_callback; /* release I/O exchange context */
+ unf_xchg->io_xchg = xchg; /* pointer to IO XCHG */
+
+ unf_fill_package(&pkg, unf_xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+ rrq_pld = &fc_entry->rrq;
+ memset(rrq_pld, 0, sizeof(struct unf_rrq));
+ unf_fill_rrq_pld(rrq_pld, xchg);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, unf_xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)unf_xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]RRQ send %s. Port(0x%x)--->RPort(0x%x) free old exchange(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, xchg->hotpooltag);
+
+ return ret;
+}
+
+u32 unf_send_flogi_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_flogi_fdisc_payload *flogi_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_FLOGI);
+
+ xchg->did = 0; /* D_ID must be 0 */
+ xchg->sid = UNF_FC_FID_FLOGI; /* S_ID must be 0xfffffe */
+ xchg->oid = xchg->sid;
+ xchg->callback = NULL;
+ xchg->lport = lport;
+ xchg->rport = rport;
+ xchg->ob_callback = unf_flogi_acc_ob_callback; /* callback for sending FLOGI response */
+
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ flogi_acc_pld = &fc_entry->flogi_acc.flogi_payload;
+ flogi_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
+ unf_fill_flogi_pld(flogi_acc_pld, lport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]LOGIN: FLOGI ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+ return ret;
+}
+
+static void unf_fill_plogi_acc_pld(struct unf_plogi_payload *plogi_acc_pld,
+ struct unf_lport *lport)
+{
+ struct unf_lgn_parm *login_parms = NULL;
+
+ FC_CHECK_RETURN_VOID(plogi_acc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ plogi_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
+ login_parms = &plogi_acc_pld->stparms;
+
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT; /* 0 */
+ login_parms->co_parms.bbscn =
+ (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
+ ? 0
+ : unf_low_level_bb_scn(lport);
+ } else {
+ login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT; /* 1 */
+ }
+
+ login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
+ login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
+ login_parms->co_parms.e_d_tov = (lport->ed_tov);
+ login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID; /* class-3 */
+ login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
+ login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
+ login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, plogi_acc_pld,
+ sizeof(struct unf_plogi_payload));
+}
+
+u32 unf_send_plogi_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_plogi_payload *plogi_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PLOGI);
+
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->callback = NULL;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->ob_callback = unf_plogi_acc_ob_callback; /* call back for sending PLOGI ACC */
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ plogi_acc_pld = &fc_entry->plogi_acc.payload;
+ unf_fill_plogi_acc_pld(plogi_acc_pld, lport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PLOGI ACC send %s. Port(0x%x_0x%x_0x%llx)--->RPort(0x%x_0x%llx) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed",
+ lport->port_id, lport->nport_id, lport->port_name,
+ rport->nport_id, rport->port_name, ox_id, rx_id);
+ }
+
+ return ret;
+}
+
+static void unf_fill_prli_acc_pld(struct unf_prli_payload *prli_acc_pld,
+ struct unf_lport *lport,
+ struct unf_rport *rport)
+{
+ u32 port_mode = UNF_FC4_FRAME_PARM_3_TGT;
+
+ FC_CHECK_RETURN_VOID(prli_acc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ prli_acc_pld->cmnd =
+ (UNF_ELS_CMND_ACC |
+ ((u32)UNF_FC4_FRAME_PAGE_SIZE << UNF_FC4_FRAME_PAGE_SIZE_SHIFT) |
+ ((u32)(sizeof(struct unf_prli_payload) - UNF_PRLI_SIRT_EXTRA_SIZE)));
+
+ prli_acc_pld->parms[ARRAY_INDEX_0] =
+ (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_I_PAIR |
+ UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE);
+ prli_acc_pld->parms[ARRAY_INDEX_1] = UNF_NOT_MEANINGFUL;
+ prli_acc_pld->parms[ARRAY_INDEX_2] = UNF_NOT_MEANINGFUL;
+
+ /* About INI/TGT mode */
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ /* Return INI (0x20): R_Port is in TGT mode, L_Port is in INI mode */
+ port_mode = UNF_FC4_FRAME_PARM_3_INI;
+ } else {
+ port_mode = lport->options;
+ }
+
+ /* About Read xfer_rdy disable */
+ prli_acc_pld->parms[ARRAY_INDEX_3] =
+ (UNF_FC4_FRAME_PARM_3_R_XFER_DIS | port_mode); /* 0x2 */
+
+ /* About Tape support */
+ if (rport->tape_support_needed) {
+ prli_acc_pld->parms[ARRAY_INDEX_3] |=
+ (UNF_FC4_FRAME_PARM_3_REC_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_CONF_ALLOW);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "PRLI ACC tape support");
+ }
+
+ /* About confirm */
+ if (lport->low_level_func.lport_cfg_items.fcp_conf)
+ prli_acc_pld->parms[ARRAY_INDEX_3] |=
+ UNF_FC4_FRAME_PARM_3_CONF_ALLOW; /* 0x80 */
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prli_acc_pld,
+ sizeof(struct unf_prli_payload));
+}
+
+u32 unf_send_prli_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_prli_payload *prli_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PRLI);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_prli_acc_ob_callback; /* callback when sending succeeds */
+
+ unf_fill_package(&pkg, xchg, rport);
+
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ prli_acc_pld = &fc_entry->prli_acc.payload;
+ unf_fill_prli_acc_pld(prli_acc_pld, lport, rport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PRLI ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed",
+ lport->port_id, rport->nport_id, ox_id, rx_id);
+ }
+
+ return ret;
+}
+
+u32 unf_send_rec_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ /* Reserved */
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ return RETURN_OK;
+}
+
+static void unf_rrq_acc_ob_callback(struct unf_xchg *xchg)
+{
+ FC_CHECK_RETURN_VOID(xchg);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]RRQ ACC Xchg(0x%p) tag(0x%x)", xchg,
+ xchg->hotpooltag);
+}
+
+static void unf_fill_els_acc_pld(struct unf_els_acc *els_acc_pld)
+{
+ FC_CHECK_RETURN_VOID(els_acc_pld);
+
+ els_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
+}
+
+u32 unf_send_rscn_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_els_acc *rscn_acc = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_RSCN);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_rscn_acc_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ rscn_acc = &fc_entry->els_acc;
+ unf_fill_els_acc_pld(rscn_acc);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: RSCN ACC send %s. Port(0x%x)--->RPort(0x%x) with OXID(0x%x) RXID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+
+ return ret;
+}
+
+u32 unf_send_logo_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_els_acc *logo_acc = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_LOGO);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_logo_acc_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ logo_acc = &fc_entry->els_acc;
+ unf_fill_els_acc_pld(logo_acc);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: LOGO ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed",
+ lport->port_id, rport->nport_id, ox_id, rx_id);
+ }
+
+ return ret;
+}
+
+static u32 unf_send_rrq_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_els_acc *rrq_acc = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+ xchg->callback = NULL; /* do nothing */
+
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ rrq_acc = &fc_entry->els_acc;
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_RRQ);
+ xchg->ob_callback = unf_rrq_acc_ob_callback; /* only logs the RRQ ACC */
+ unf_fill_els_acc_pld(rrq_acc);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]RRQ ACC send %s. Port(0x%x)--->RPort(0x%x) with Xchg(0x%p) OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, xchg, ox_id, rx_id);
+
+ return ret;
+}
+
+static void unf_fill_pdisc_acc_pld(struct unf_plogi_payload *pdisc_acc_pld,
+ struct unf_lport *lport)
+{
+ struct unf_lgn_parm *login_parms = NULL;
+
+ FC_CHECK_RETURN_VOID(pdisc_acc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ pdisc_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
+ login_parms = &pdisc_acc_pld->stparms;
+
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT;
+ login_parms->co_parms.bbscn =
+ (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
+ ? 0
+ : unf_low_level_bb_scn(lport);
+ } else {
+ login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
+ }
+
+ login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
+ login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
+ login_parms->co_parms.e_d_tov = (lport->ed_tov);
+
+ login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID; /* class-3 */
+ login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
+ login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
+
+ login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, pdisc_acc_pld,
+ sizeof(struct unf_plogi_payload));
+}
+
+u32 unf_send_pdisc_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_plogi_payload *pdisc_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PDISC);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_pdisc_acc_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ pdisc_acc_pld = &fc_entry->pdisc_acc.payload;
+ unf_fill_pdisc_acc_pld(pdisc_acc_pld, lport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send PDISC ACC %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+
+ return ret;
+}
+
+static void unf_fill_adisc_acc_pld(struct unf_adisc_payload *adisc_acc_pld,
+ struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(adisc_acc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ adisc_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
+
+ adisc_acc_pld->hard_address = (lport->nport_id & UNF_ALPA_MASK);
+ adisc_acc_pld->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ adisc_acc_pld->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ adisc_acc_pld->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ adisc_acc_pld->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+ adisc_acc_pld->nport_id = lport->nport_id;
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, adisc_acc_pld,
+ sizeof(struct unf_adisc_payload));
+}
+
+u32 unf_send_adisc_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_adisc_payload *adisc_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_ADISC);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_adisc_acc_ob_callback;
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ adisc_acc_pld = &fc_entry->adisc_acc.adisc_payl;
+ unf_fill_adisc_acc_pld(adisc_acc_pld, lport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send ADISC ACC %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+
+ return ret;
+}
+
+static void unf_fill_prlo_acc_pld(struct unf_prli_prlo *prlo_acc,
+ struct unf_lport *lport)
+{
+ struct unf_prli_payload *prlo_acc_pld = NULL;
+
+ FC_CHECK_RETURN_VOID(prlo_acc);
+
+ prlo_acc_pld = &prlo_acc->payload;
+ prlo_acc_pld->cmnd =
+ (UNF_ELS_CMND_ACC |
+ ((u32)UNF_FC4_FRAME_PAGE_SIZE << UNF_FC4_FRAME_PAGE_SIZE_SHIFT) |
+ ((u32)sizeof(struct unf_prli_payload)));
+ prlo_acc_pld->parms[ARRAY_INDEX_0] =
+ (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE);
+ prlo_acc_pld->parms[ARRAY_INDEX_1] = 0;
+ prlo_acc_pld->parms[ARRAY_INDEX_2] = 0;
+ prlo_acc_pld->parms[ARRAY_INDEX_3] = 0;
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prlo_acc_pld,
+ sizeof(struct unf_prli_payload));
+}
+
+u32 unf_send_prlo_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_prli_prlo *prlo_acc = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PRLO);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ prlo_acc = &fc_entry->prlo_acc;
+ unf_fill_prlo_acc_pld(prlo_acc, lport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send PRLO ACC %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+
+ return ret;
+}
+
+static void unf_prli_acc_ob_callback(struct unf_xchg *xchg)
+{
+ /* Report R_Port scsi Link Up */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+ enum unf_rport_login_state rport_state = UNF_RPORT_ST_INIT;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ /* Update & Report Link Up */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_READY);
+ rport_state = unf_rport->rp_state;
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]LOGIN: Port(0x%x) RPort(0x%x) state(0x%x) WWN(0x%llx) prliacc",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_rport->rp_state, unf_rport->port_name);
+ }
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ if (rport_state == UNF_RPORT_ST_READY) {
+ unf_rport->logo_retries = 0;
+ unf_update_lport_state_by_linkup_event(unf_lport, unf_rport,
+ unf_rport->options);
+ }
+}
+
+static void unf_schedule_open_work(struct unf_lport *lport,
+ struct unf_rport *rport)
+{
+ /* Only used when the L_Port is TGT-only or the R_Port is INI-only */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ ulong delay = 0;
+ ulong flag = 0;
+ u32 ret = 0;
+ u32 port_feature = INVALID_VALUE32;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
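+ /* Defer the R_Port open work by one E_D_TOV */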
+ delay = (ulong)unf_lport->ed_tov;
+ port_feature = unf_rport->options & UNF_PORT_MODE_BOTH;
+
+ if (unf_lport->options == UNF_PORT_MODE_TGT ||
+ port_feature == UNF_PORT_MODE_INI) {
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+
+ ret = unf_rport_ref_inc(unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) RPort(0x%x) abnormal, no need open",
+ unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+ return;
+ }
+
+ /* Delay work pending check */
+ if (delayed_work_pending(&unf_rport->open_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) RPort(0x%x) open work is running, no need re-open",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+ unf_rport_ref_dec(unf_rport);
+ return;
+ }
+
+ /* start open work */
+ if (queue_delayed_work(unf_wq, &unf_rport->open_work,
+ (ulong)msecs_to_jiffies((u32)delay))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) start open work",
+ unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
+
+ (void)unf_rport_ref_inc(unf_rport);
+ }
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_ref_dec(unf_rport);
+ }
+}
+
+static void unf_plogi_acc_ob_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ FC_CHECK_RETURN_VOID(unf_lport);
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ /*
+ * 1. According to FC-LS 4.2.7.1:
+ * after RCVD PLOGI or sending PLOGI ACC, need to terminate open EXCH
+ */
+ unf_cm_xchg_mgr_abort_io_by_id(unf_lport, unf_rport,
+ unf_rport->nport_id, unf_lport->nport_id, 0);
+
+ /* 2. Sending PLOGI ACC failed */
+ if (xchg->ob_callback_sts != UNF_IO_SUCCESS) {
+ /* Do R_Port recovery */
+ unf_rport_error_recovery(unf_rport);
+
+ /* Don't care about the result: only relevant when the L_Port
+ * is TGT-only or the R_Port is INI-only
+ */
+ unf_schedule_open_work(unf_lport, unf_rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x_0x%x) send PLOGI ACC failed(0x%x) with RPort(0x%x) feature(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_lport->options, xchg->ob_callback_sts,
+ unf_rport->nport_id, unf_rport->options);
+
+ return;
+ }
+
+ /* 3. Private Loop: check whether or not need to send PRLI */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ if (unf_lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP &&
+ (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT ||
+ unf_rport->rp_state == UNF_RPORT_ST_READY)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) with State(0x%x) return directly",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id, unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+ return;
+ }
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PRLI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* 4. Set Port Feature with BOTH: cancel */
+ if (unf_rport->options == UNF_PORT_MODE_UNKNOWN && unf_rport->port_name != INVALID_WWPN)
+ unf_rport->options = unf_get_port_feature(unf_rport->port_name);
+
+ /*
+ * 5. Check whether need to send PRLI delay
+ * Call by: RCVD PLOGI ACC or callback for sending PLOGI ACC succeed
+ */
+ unf_check_rport_need_delay_prli(unf_lport, unf_rport, unf_rport->options);
+
+ /* 6. Don't care about the result: only relevant when the L_Port
+ * is TGT-only or the R_Port is INI-only
+ */
+ unf_schedule_open_work(unf_lport, unf_rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x_0x%x) send PLOGI ACC succeed with RPort(0x%x) feature(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->options,
+ unf_rport->nport_id, unf_rport->options);
+}
+
+static void unf_flogi_acc_ob_callback(struct unf_xchg *xchg)
+{
+ /* Callback for Sending FLOGI ACC succeed */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+ u64 rport_port_name = 0;
+ u64 rport_node_name = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ FC_CHECK_RETURN_VOID(xchg->lport);
+ FC_CHECK_RETURN_VOID(xchg->rport);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ if (unf_rport->port_name == 0 && unf_rport->node_name == 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x_0x%x) already send Plogi with RPort(0x%x) feature(0x%x).",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->options,
+ unf_rport->nport_id, unf_rport->options);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+ return;
+ }
+
+ rport_port_name = unf_rport->port_name;
+ rport_node_name = unf_rport->node_name;
+
+ /* Swap case: set WWPN & WWNN to zero */
+ unf_rport->port_name = 0;
+ unf_rport->node_name = 0;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* Enter PLOGI stage: after send FLOGI ACC succeed */
+ unf_login_with_rport_in_n2n(unf_lport, rport_port_name, rport_node_name);
+}
+
+static void unf_rscn_acc_ob_callback(struct unf_xchg *xchg)
+{
+}
+
+static void unf_logo_acc_ob_callback(struct unf_xchg *xchg)
+{
+}
+
+static void unf_adisc_acc_ob_callback(struct unf_xchg *xchg)
+{
+}
+
+static void unf_pdisc_acc_ob_callback(struct unf_xchg *xchg)
+{
+}
+
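+ /* BB_SC_N negotiation: disabled if either side advertises 0, otherwise the larger value wins */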
+static inline u8 unf_determin_bbscn(u8 local_bbscn, u8 remote_bbscn)
+{
+ if (remote_bbscn == 0 || local_bbscn == 0)
+ local_bbscn = 0;
+ else
+ local_bbscn = local_bbscn > remote_bbscn ? local_bbscn : remote_bbscn;
+
+ return local_bbscn;
+}
+
+static void unf_cfg_lowlevel_fabric_params(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_fabric_parm *login_parms)
+{
+ struct unf_port_login_parms login_co_parms = {0};
+ u32 remote_edtov = 0;
+ u32 ret = 0;
+ u8 remote_edtov_resolution = 0; /* 0:ms; 1:ns */
+
+ if (!lport->low_level_func.port_mgr_op.ll_port_config_set)
+ return;
+
+ login_co_parms.remote_rttov_tag = (u8)UNF_GET_RT_TOV_FROM_PARAMS(login_parms);
+ login_co_parms.remote_edtov_tag = 0;
+ login_co_parms.remote_bb_credit = (u16)UNF_GET_BB_CREDIT_FROM_PARAMS(login_parms);
+ login_co_parms.compared_bbscn =
+ (u32)unf_determin_bbscn((u8)lport->low_level_func.lport_cfg_items.bbscn,
+ (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
+
+ remote_edtov_resolution = (u8)UNF_GET_E_D_TOV_RESOLUTION_FROM_PARAMS(login_parms);
+ remote_edtov = UNF_GET_E_D_TOV_FROM_PARAMS(login_parms);
+ login_co_parms.compared_edtov_val =
+ remote_edtov_resolution ? (remote_edtov / UNF_OS_MS_TO_NS)
+ : remote_edtov;
+
+ login_co_parms.compared_ratov_val = UNF_GET_RA_TOV_FROM_PARAMS(login_parms);
+ login_co_parms.els_cmnd_code = ELS_FLOGI;
+
+ if (UNF_TOP_P2P_MASK & (u32)lport->act_topo) {
+ login_co_parms.act_topo = (login_parms->co_parms.nport == UNF_F_PORT)
+ ? UNF_ACT_TOP_P2P_FABRIC
+ : UNF_ACT_TOP_P2P_DIRECT;
+ } else {
+ login_co_parms.act_topo = lport->act_topo;
+ }
+
+ ret = lport->low_level_func.port_mgr_op.ll_port_config_set((void *)lport->fc_port,
+ UNF_PORT_CFG_UPDATE_FABRIC_PARAM, (void *)&login_co_parms);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Lowlevel unsupport fabric config");
+ }
+}
+
+u32 unf_check_flogi_params(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_fabric_parm *fabric_parms)
+{
+ u32 ret = RETURN_OK;
+ u32 high_port_name;
+ u32 low_port_name;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(fabric_parms, UNF_RETURN_ERROR);
+
+ if (fabric_parms->cl_parms[ARRAY_INDEX_2].valid == UNF_CLASS_INVALID) {
+ /* Discard directly */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) NPort_ID(0x%x) FLOGI not support class3",
+ lport->port_id, rport->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+ if (fabric_parms->high_port_name == high_port_name &&
+ fabric_parms->low_port_name == low_port_name) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]The wwpn(0x%x%x) of lport(0x%x) is same as the wwpn of rport(0x%x)",
+ high_port_name, low_port_name, lport->port_id, rport->nport_id);
+ return UNF_RETURN_ERROR;
+ }
+
+ return ret;
+}
+
+static void unf_save_fabric_params(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_fabric_parm *fabric_parms)
+{
+ u64 fabric_node_name = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(fabric_parms);
+
+ fabric_node_name = (u64)(((u64)(fabric_parms->high_node_name) << UNF_SHIFT_32) |
+ ((u64)(fabric_parms->low_node_name)));
+
+ /* The R_Port at 0xfffffe is used for FLOGI; no need to save its WWN */
+ if (fabric_parms->co_parms.bb_receive_data_field_size > UNF_MAX_FRAME_SIZE)
+ rport->max_frame_size = UNF_MAX_FRAME_SIZE; /* 2112 */
+ else
+ rport->max_frame_size = fabric_parms->co_parms.bb_receive_data_field_size;
+
+ /* with Fabric attribute */
+ if (fabric_parms->co_parms.nport == UNF_F_PORT) {
+ rport->ed_tov = fabric_parms->co_parms.e_d_tov;
+ rport->ra_tov = fabric_parms->co_parms.r_a_tov;
+ lport->ed_tov = fabric_parms->co_parms.e_d_tov;
+ lport->ra_tov = fabric_parms->co_parms.r_a_tov;
+ lport->fabric_node_name = fabric_node_name;
+ }
+
+ /* Configure info from FLOGI to chip */
+ unf_cfg_lowlevel_fabric_params(lport, rport, fabric_parms);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) Rport(0x%x) login parameter: E_D_TOV = %u. LPort E_D_TOV = %u. fabric nodename: 0x%x%x",
+ lport->port_id, rport->nport_id, (fabric_parms->co_parms.e_d_tov),
+ lport->ed_tov, fabric_parms->high_node_name, fabric_parms->low_node_name);
+}
+
+u32 unf_flogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_flogi_fdisc_acc *flogi_frame = NULL;
+ struct unf_fabric_parm *fabric_login_parms = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong flag = 0;
+ u64 wwpn = 0;
+ u64 wwnn = 0;
+ enum unf_act_topo unf_active_topo;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x)<---RPort(0x%x) Receive FLOGI with OX_ID(0x%x)",
+ lport->port_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_FLOGI);
+
+ /* Check L_Port state: Offline */
+ if (lport->states >= UNF_LPORT_ST_OFFLINE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with state(0x%x) not need to handle FLOGI",
+ lport->port_id, lport->states);
+
+ unf_cm_free_xchg(lport, xchg);
+ return ret;
+ }
+
+ flogi_frame = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->flogi;
+ fabric_login_parms = &flogi_frame->flogi_payload.fabric_parms;
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &flogi_frame->flogi_payload,
+ sizeof(struct unf_flogi_fdisc_payload));
+ wwpn = (u64)(((u64)(fabric_login_parms->high_port_name) << UNF_SHIFT_32) |
+ ((u64)fabric_login_parms->low_port_name));
+ wwnn = (u64)(((u64)(fabric_login_parms->high_node_name) << UNF_SHIFT_32) |
+ ((u64)fabric_login_parms->low_node_name));
+
+ /* Get (new) R_Port: reuse only */
+ unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_FLOGI);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FLOGI);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has no RPort. do nothing", lport->port_id);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Update R_Port info */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->port_name = wwpn;
+ unf_rport->node_name = wwnn;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Check RCVD FLOGI parameters: only for class-3 */
+ ret = unf_check_flogi_params(lport, unf_rport, fabric_login_parms);
+ if (ret != RETURN_OK) {
+ /* Discard directly */
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Save fabric parameters */
+ unf_save_fabric_params(lport, unf_rport, fabric_login_parms);
+
+ if ((u32)lport->act_topo & UNF_TOP_P2P_MASK) {
+ unf_active_topo =
+ (fabric_login_parms->co_parms.nport == UNF_F_PORT)
+ ? UNF_ACT_TOP_P2P_FABRIC
+ : UNF_ACT_TOP_P2P_DIRECT;
+ unf_lport_update_topo(lport, unf_active_topo);
+ }
+ /* Send ACC for FLOGI */
+ ret = unf_send_flogi_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send FLOGI ACC failed and do recover",
+ lport->port_id);
+
+ /* Do L_Port recovery */
+ unf_lport_error_recovery(lport);
+ }
+
+ return ret;
+}
+
+static void unf_cfg_lowlevel_port_params(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_lgn_parm *login_parms,
+ u32 cmd_type)
+{
+ struct unf_port_login_parms login_co_parms = {0};
+ u32 ret = 0;
+
+ if (!lport->low_level_func.port_mgr_op.ll_port_config_set)
+ return;
+
+ login_co_parms.rport_index = rport->rport_index;
+ login_co_parms.seq_cnt = 0;
+ login_co_parms.ed_tov = 0; /* ms */
+ login_co_parms.ed_tov_timer_val = lport->ed_tov;
+ login_co_parms.tx_mfs = rport->max_frame_size;
+
+ login_co_parms.remote_rttov_tag = (u8)UNF_GET_RT_TOV_FROM_PARAMS(login_parms);
+ login_co_parms.remote_edtov_tag = 0;
+ login_co_parms.remote_bb_credit = (u16)UNF_GET_BB_CREDIT_FROM_PARAMS(login_parms);
+ login_co_parms.els_cmnd_code = cmd_type;
+
+ if (lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ login_co_parms.compared_bbscn = 0;
+ } else {
+ login_co_parms.compared_bbscn =
+ (u32)unf_determin_bbscn((u8)lport->low_level_func.lport_cfg_items.bbscn,
+ (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
+ }
+
+ login_co_parms.compared_edtov_val = lport->ed_tov;
+ login_co_parms.compared_ratov_val = lport->ra_tov;
+
+ ret = lport->low_level_func.port_mgr_op.ll_port_config_set((void *)lport->fc_port,
+ UNF_PORT_CFG_UPDATE_PLOGI_PARAM, (void *)&login_co_parms);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) Lowlevel unsupport port config", lport->port_id);
+ }
+}
+
+u32 unf_check_plogi_params(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_lgn_parm *login_parms)
+{
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(login_parms, UNF_RETURN_ERROR);
+
+ /* Parameters check: Class-type */
+ if (login_parms->cl_parms[ARRAY_INDEX_2].valid == UNF_CLASS_INVALID ||
+ login_parms->co_parms.bb_receive_data_field_size == 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort N_Port_ID(0x%x) with PLOGI parameters invalid: class3(%u), BBReceiveDataFieldSize(0x%x), send LOGO",
+ lport->port_id, rport->nport_id,
+ login_parms->cl_parms[ARRAY_INDEX_2].valid,
+ login_parms->co_parms.bb_receive_data_field_size);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LOGO); /* --->>> LOGO */
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Enter LOGO stage */
+ unf_rport_enter_logo(lport, rport);
+ return UNF_RETURN_ERROR;
+ }
+
+ /* On 16G FC Brocade switches, the Domain Controller's PLOGI advertises
+ * both CLASS-1 and CLASS-2
+ */
+ if (login_parms->cl_parms[ARRAY_INDEX_0].valid == UNF_CLASS_VALID ||
+ login_parms->cl_parms[ARRAY_INDEX_1].valid == UNF_CLASS_VALID) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) get PLOGI class1(%u) class2(%u) from N_Port_ID(0x%x)",
+ lport->port_id, login_parms->cl_parms[ARRAY_INDEX_0].valid,
+ login_parms->cl_parms[ARRAY_INDEX_1].valid, rport->nport_id);
+ }
+
+ return ret;
+}
+
+static void unf_save_plogi_params(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_lgn_parm *login_parms,
+ u32 cmd_code)
+{
+#define UNF_DELAY_TIME 100 /* extra delay (ms) before sending PRLI when the local WWPN is smaller (COM mode) */
+
+ u64 wwpn = INVALID_VALUE64;
+ u64 wwnn = INVALID_VALUE64;
+ u32 ed_tov = 0;
+ u32 remote_edtov = 0;
+
+ if (login_parms->co_parms.bb_receive_data_field_size > UNF_MAX_FRAME_SIZE)
+ rport->max_frame_size = UNF_MAX_FRAME_SIZE; /* 2112 */
+ else
+ rport->max_frame_size = login_parms->co_parms.bb_receive_data_field_size;
+
+ wwnn = (u64)(((u64)(login_parms->high_node_name) << UNF_SHIFT_32) |
+ ((u64)login_parms->low_node_name));
+ wwpn = (u64)(((u64)(login_parms->high_port_name) << UNF_SHIFT_32) |
+ ((u64)login_parms->low_port_name));
+
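+ /* E_D_TOV resolution bit set means the peer reports nanoseconds: convert to ms */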
+ remote_edtov = login_parms->co_parms.e_d_tov;
+ ed_tov = login_parms->co_parms.e_d_tov_resolution
+ ? (remote_edtov / UNF_OS_MS_TO_NS)
+ : remote_edtov;
+
+ rport->port_name = wwpn;
+ rport->node_name = wwnn;
+ rport->local_nport_id = lport->nport_id;
+
+ if (lport->act_topo == UNF_ACT_TOP_P2P_DIRECT ||
+ lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ /* P2P or Private Loop or FCoE VN2VN */
+ lport->ed_tov = (lport->ed_tov > ed_tov) ? lport->ed_tov : ed_tov;
+ lport->ra_tov = 2 * lport->ed_tov; /* 2 * E_D_TOV */
+
+ if (ed_tov != 0)
+ rport->ed_tov = ed_tov;
+ else
+ rport->ed_tov = UNF_DEFAULT_EDTOV;
+ } else {
+ /* SAN: E_D_TOV updated by FLOGI */
+ rport->ed_tov = lport->ed_tov;
+ }
+
+ /* WWPN smaller: delay to send PRLI */
+ if (rport->port_name > lport->port_name)
+ rport->ed_tov += UNF_DELAY_TIME; /* 100ms */
+
+ /* Configure port parameters to low level (chip) */
+ unf_cfg_lowlevel_port_params(lport, rport, login_parms, cmd_code);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) RPort(0x%x) with WWPN(0x%llx) WWNN(0x%llx) login: ED_TOV(%u) Port: ED_TOV(%u)",
+ lport->port_id, rport->nport_id, rport->port_name, rport->node_name,
+ ed_tov, lport->ed_tov);
+}
+
+static bool unf_check_bbscn_is_enabled(u8 local_bbscn, u8 remote_bbscn)
+{
+ return unf_determin_bbscn(local_bbscn, remote_bbscn) ? true : false;
+}
+
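+ /* Defer ELS processing from interrupt context to a thread via the L_Port
+ * event manager; takes an SFS_RESPONSE reference on the exchange that the
+ * async handler releases after processing
+ */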
+static u32 unf_irq_process_switch2thread(void *lport, struct unf_xchg *xchg,
+ unf_event_task evt_task)
+{
+ struct unf_cm_event_report *event = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 ret = 0;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ unf_lport = lport;
+ unf_xchg = xchg;
+
+ if (unlikely(!unf_lport->event_mgr.unf_get_free_event_func ||
+ !unf_lport->event_mgr.unf_post_event_func ||
+ !unf_lport->event_mgr.unf_release_event)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) event function is NULL",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ ret = unf_xchg_ref_inc(unf_xchg, SFS_RESPONSE);
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
+
+ event = unf_lport->event_mgr.unf_get_free_event_func((void *)lport);
+ FC_CHECK_RETURN_VALUE(event, UNF_RETURN_ERROR);
+
+ event->lport = unf_lport;
+ event->event_asy_flag = UNF_EVENT_ASYN;
+ event->unf_event_task = evt_task;
+ event->para_in = xchg;
+ unf_lport->event_mgr.unf_post_event_func(unf_lport, event);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) start to switch thread process now",
+ unf_lport->port_id);
+
+ return ret;
+}
+
+u32 unf_plogi_handler_com_process(struct unf_xchg *xchg)
+{
+ struct unf_xchg *unf_xchg = xchg;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_plogi_pdisc *plogi_frame = NULL;
+ struct unf_lgn_parm *login_parms = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(unf_xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(unf_xchg->lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(unf_xchg->rport, UNF_RETURN_ERROR);
+
+ unf_lport = unf_xchg->lport;
+ unf_rport = unf_xchg->rport;
+ plogi_frame = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi;
+ login_parms = &plogi_frame->payload.stparms;
+
+ unf_save_plogi_params(unf_lport, unf_rport, login_parms, ELS_PLOGI);
+
+ /* Update state: PLOGI_WAIT */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = unf_xchg->sid;
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Send PLOGI ACC to remote port */
+ ret = unf_send_plogi_acc(unf_lport, unf_rport, unf_xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send PLOGI ACC failed",
+ unf_lport->port_id);
+
+ /* NOTE: the exchange has already been freed inside unf_send_plogi_acc() */
+ unf_rport_error_recovery(unf_rport);
+ return ret;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x) send PLOGI ACC to Port(0x%x) succeed",
+ unf_lport->port_id, unf_rport->nport_id);
+
+ return ret;
+}
+
+int unf_plogi_async_handle(void *argc_in, void *argc_out)
+{
+ struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ ret = unf_plogi_handler_com_process(xchg);
+
+ unf_xchg_ref_dec(xchg, SFS_RESPONSE);
+
+ return (int)ret;
+}
+
+u32 unf_plogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_xchg *unf_xchg = xchg;
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_plogi_pdisc *plogi_frame = NULL;
+ struct unf_lgn_parm *login_parms = NULL;
+ struct unf_rjt_info rjt_info = {0};
+ u64 wwpn = INVALID_VALUE64;
+ u32 ret = UNF_RETURN_ERROR;
+ bool bbscn_enabled = false;
+ bool switch2thread = false;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ /* 1. Maybe: PLOGI is sent by Name server */
+ if (sid < UNF_FC_FID_DOM_MGR ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Receive PLOGI. Port(0x%x_0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ lport->port_id, lport->nport_id, sid, xchg->oxid);
+ }
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_PLOGI);
+
+ /* 2. State check: Offline */
+ if (unf_lport->states >= UNF_LPORT_ST_OFFLINE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) received PLOGI with state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
+
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Get R_Port by WWpn */
+ plogi_frame = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi;
+ login_parms = &plogi_frame->payload.stparms;
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, unf_lport->port_id, &plogi_frame->payload,
+ sizeof(struct unf_plogi_payload));
+
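+ /* Build the 64-bit WWPN from the high/low 32-bit port name fields */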
+ wwpn = (u64)(((u64)(login_parms->high_port_name) << UNF_SHIFT_32) |
+ ((u64)login_parms->low_port_name));
+
+ /* 3. Get (new) R_Port (by wwpn) */
+ unf_rport = unf_find_rport(unf_lport, sid, wwpn);
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport, UNF_RPORT_REUSE_ONLY, sid);
+ if (!unf_rport) {
+ memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
+ rjt_info.els_cmnd_code = ELS_PLOGI;
+ rjt_info.reason_code = UNF_LS_RJT_BUSY;
+ rjt_info.reason_explanation = UNF_LS_RJT_INSUFFICIENT_RESOURCES;
+
+ /* R_Port is NULL: Send ELS RJT for PLOGI */
+ (void)unf_send_els_rjt_by_did(unf_lport, unf_xchg, sid, &rjt_info);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has no RPort and send PLOGI reject",
+ unf_lport->port_id);
+ return RETURN_OK;
+ }
+
+ /*
+ * 4. According to FC-LS 4.2.7.1:
+ * After receiving PLOGI or sending PLOGI ACC, open exchanges need to be terminated
+ */
+ unf_cm_xchg_mgr_abort_io_by_id(unf_lport, unf_rport, sid, unf_lport->nport_id, 0);
+
+ /* 5. Cancel recovery timer work after RCVD PLOGI */
+ if (cancel_delayed_work(&unf_rport->recovery_work))
+ atomic_dec(&unf_rport->rport_ref_cnt);
+
+ /*
+ * 6. Plogi parameters check
+ * Call by: (RCVD) PLOGI handler & callback function for RCVD PLOGI_ACC
+ */
+ ret = unf_check_plogi_params(unf_lport, unf_rport, login_parms);
+ if (ret != RETURN_OK) {
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_xchg->lport = lport;
+ unf_xchg->rport = unf_rport;
+ unf_xchg->sid = sid;
+
+ /* 7. About bbscn for context change */
+ bbscn_enabled =
+ unf_check_bbscn_is_enabled((u8)unf_lport->low_level_func.lport_cfg_items.bbscn,
+ (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
+ if (unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT && bbscn_enabled) {
+ switch2thread = true;
+ unf_lport->bbscn_support = true;
+ }
+
+ /* 8. Process PLOGI Frame: switch to thread if necessary */
+ if (switch2thread && unf_lport->root_lport == unf_lport) {
+ /* Wait for LR complete sync */
+ ret = unf_irq_process_switch2thread(unf_lport, unf_xchg, unf_plogi_async_handle);
+ } else {
+ ret = unf_plogi_handler_com_process(unf_xchg);
+ }
+
+ return ret;
+}
+
+static void unf_obtain_tape_capacity(struct unf_lport *lport,
+ struct unf_rport *rport, u32 tape_parm)
+{
+ u32 rec_support = 0;
+ u32 task_retry_support = 0;
+ u32 retry_support = 0;
+
+ rec_support = tape_parm & UNF_FC4_FRAME_PARM_3_REC_SUPPORT;
+ task_retry_support =
+ tape_parm & UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT;
+ retry_support = tape_parm & UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT;
+
+ if (lport->low_level_func.lport_cfg_items.tape_support &&
+ rec_support && task_retry_support && retry_support) {
+ rport->tape_support_needed = true;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) FC_tape is needed for RPort(0x%x)",
+ lport->port_id, lport->nport_id, rport->nport_id);
+ }
+
+ if ((tape_parm & UNF_FC4_FRAME_PARM_3_CONF_ALLOW) &&
+ lport->low_level_func.lport_cfg_items.fcp_conf) {
+ rport->fcp_conf_needed = true;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) FCP confirm is needed for RPort(0x%x)",
+ lport->port_id, lport->nport_id, rport->nport_id);
+ }
+}
+
+static u32 unf_prli_handler_com_process(struct unf_xchg *xchg)
+{
+ struct unf_prli_prlo *prli = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong flags = 0;
+ u32 sid = 0;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+
+ unf_xchg = xchg;
+ FC_CHECK_RETURN_VALUE(unf_xchg->lport, UNF_RETURN_ERROR);
+ unf_lport = unf_xchg->lport;
+ sid = xchg->sid;
+
+ UNF_SERVICE_COLLECT(unf_lport->link_service_info, UNF_SERVICE_ITEM_PRLI);
+
+ /* 1. Get R_Port: for each R_Port from rport_busy_list */
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, sid);
+ if (!unf_rport) {
+ /* no session (R_Port) exists */
+ (void)unf_send_logo_by_did(unf_lport, sid);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) received PRLI but no RPort SID(0x%x) OX_ID(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, sid, xchg->oxid);
+
+ unf_cm_free_xchg(unf_lport, xchg);
+ return ret;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Receive PRLI. Port(0x%x)<---RPort(0x%x) with S_ID(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id, sid);
+
+ /* 2. Get PRLI info */
+ prli = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->prli;
+ if (sid < UNF_FC_FID_DOM_MGR || unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Receive PRLI. Port(0x%x_0x%x)<---RPort(0x%x) parameter-3(0x%x) OX_ID(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, sid,
+ prli->payload.parms[ARRAY_INDEX_3], xchg->oxid);
+ }
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, unf_lport->port_id, &prli->payload,
+ sizeof(struct unf_prli_payload));
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+
+ /* 3. Increase R_Port ref_cnt */
+ ret = unf_rport_ref_inc(unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x_0x%p) is removing and do nothing",
+ unf_lport->port_id, unf_rport->nport_id, unf_rport);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ unf_cm_free_xchg(unf_lport, xchg);
+ return RETURN_ERROR;
+ }
+
+ /* 4. Cancel R_Port Open work */
+ if (cancel_delayed_work(&unf_rport->open_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) cancel open work succeed",
+ unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
+
+ /* This is not the last counter */
+ atomic_dec(&unf_rport->rport_ref_cnt);
+ }
+
+ /* 5. Check R_Port state */
+ if (unf_rport->rp_state != UNF_RPORT_ST_PRLI_WAIT &&
+ unf_rport->rp_state != UNF_RPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) RPort(0x%x) with state(0x%x) when received PRLI, send LOGO",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id, unf_rport->rp_state);
+
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* NOTE: Start to send LOGO */
+ unf_rport_enter_logo(unf_lport, unf_rport);
+
+ unf_cm_free_xchg(unf_lport, xchg);
+ unf_rport_ref_dec(unf_rport);
+
+ return RETURN_ERROR;
+ }
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* 6. Update R_Port options(INI/TGT/BOTH) */
+ unf_rport->options =
+ prli->payload.parms[ARRAY_INDEX_3] &
+ (UNF_FC4_FRAME_PARM_3_TGT | UNF_FC4_FRAME_PARM_3_INI);
+
+ unf_update_port_feature(unf_rport->port_name, unf_rport->options);
+
+ /* for Confirm */
+ unf_rport->fcp_conf_needed = false;
+
+ unf_obtain_tape_capacity(unf_lport, unf_rport, prli->payload.parms[ARRAY_INDEX_3]);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) parameter-3(0x%x) options(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id,
+ prli->payload.parms[ARRAY_INDEX_3], unf_rport->options);
+
+ /* 7. Send PRLI ACC */
+ ret = unf_send_prli_acc(unf_lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) RPort(0x%x) send PRLI ACC failed",
+ unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
+
+ /* NOTE: the exchange has already been freed inside the send path */
+ unf_rport_error_recovery(unf_rport);
+ }
+
+ /* 8. Decrease R_Port ref_cnt */
+ unf_rport_ref_dec(unf_rport);
+
+ return ret;
+}
+
+int unf_prli_async_handle(void *argc_in, void *argc_out)
+{
+ struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ ret = unf_prli_handler_com_process(xchg);
+
+ unf_xchg_ref_dec(xchg, SFS_RESPONSE);
+
+ return (int)ret;
+}
+
+u32 unf_prli_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ bool switch2thread = false;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->sid = sid;
+ xchg->lport = lport;
+ unf_lport = lport;
+
+ if (lport->bbscn_support &&
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT)
+ switch2thread = true;
+
+ if (switch2thread && unf_lport->root_lport == unf_lport) {
+ /* Wait for LR done sync */
+ ret = unf_irq_process_switch2thread(lport, xchg, unf_prli_async_handle);
+ } else {
+ ret = unf_prli_handler_com_process(xchg);
+ }
+
+ return ret;
+}
+
+static void unf_save_rscn_port_id(struct unf_rscn_mgr *rscn_mg,
+ struct unf_rscn_port_id_page *rscn_port_id)
+{
+ struct unf_port_id_page *exit_port_id_page = NULL;
+ struct unf_port_id_page *new_port_id_page = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ bool is_repeat = false;
+
+ FC_CHECK_RETURN_VOID(rscn_mg);
+ FC_CHECK_RETURN_VOID(rscn_port_id);
+
+ /* 1. Check whether the new RSCN Port_ID (RSCN_Page) is already within RSCN_Mgr */
+ spin_lock_irqsave(&rscn_mg->rscn_id_list_lock, flag);
+ if (list_empty(&rscn_mg->list_using_rscn_page)) {
+ is_repeat = false;
+ } else {
+ /* Check for repeats: walk each existing RSCN page in the RSCN_Mgr page list */
+ list_for_each_safe(node, next_node, &rscn_mg->list_using_rscn_page) {
+ exit_port_id_page = list_entry(node, struct unf_port_id_page,
+ list_node_rscn);
+ if (exit_port_id_page->port_id_port == rscn_port_id->port_id_port &&
+ exit_port_id_page->port_id_area == rscn_port_id->port_id_area &&
+ exit_port_id_page->port_id_domain == rscn_port_id->port_id_domain) {
+ is_repeat = true;
+ break;
+ }
+ }
+ }
+ spin_unlock_irqrestore(&rscn_mg->rscn_id_list_lock, flag);
+
+ FC_CHECK_RETURN_VOID(rscn_mg->unf_get_free_rscn_node);
+
+ /* 2. Get a free RSCN node & add it to RSCN_Mgr */
+ if (!is_repeat) {
+ new_port_id_page = rscn_mg->unf_get_free_rscn_node(rscn_mg);
+ if (!new_port_id_page) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_ERR, "[err]Get free RSCN node failed");
+
+ return;
+ }
+
+ new_port_id_page->addr_format = rscn_port_id->addr_format;
+ new_port_id_page->event_qualifier = rscn_port_id->event_qualifier;
+ new_port_id_page->reserved = rscn_port_id->reserved;
+ new_port_id_page->port_id_domain = rscn_port_id->port_id_domain;
+ new_port_id_page->port_id_area = rscn_port_id->port_id_area;
+ new_port_id_page->port_id_port = rscn_port_id->port_id_port;
+
+ /* Add entry to list: using_rscn_page */
+ spin_lock_irqsave(&rscn_mg->rscn_id_list_lock, flag);
+ list_add_tail(&new_port_id_page->list_node_rscn, &rscn_mg->list_using_rscn_page);
+ spin_unlock_irqrestore(&rscn_mg->rscn_id_list_lock, flag);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has repeat RSCN node with domain(0x%x) area(0x%x)",
+ rscn_port_id->port_id_domain, rscn_port_id->port_id_area,
+ rscn_port_id->port_id_port);
+ }
+}
+
+static u32 unf_analysis_rscn_payload(struct unf_lport *lport,
+ struct unf_rscn_pld *rscn_pld)
+{
+#define UNF_OS_DISC_REDISC_TIME 10000
+
+ struct unf_rscn_port_id_page *rscn_port_id = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ u32 index = 0;
+ u32 pld_len = 0;
+ u32 port_id_page_cnt = 0;
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+ bool eb_need_disc_flag = false;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rscn_pld, UNF_RETURN_ERROR);
+
+ /* This field is the length in bytes of the entire Payload, inclusive of
+ * the word 0
+ */
+ pld_len = UNF_GET_RSCN_PLD_LEN(rscn_pld->cmnd);
+ pld_len -= sizeof(rscn_pld->cmnd);
+ port_id_page_cnt = pld_len / UNF_RSCN_PAGE_LEN;
+
+ /* The number of pages within the payload must not exceed 255 */
+ if (port_id_page_cnt > UNF_RSCN_PAGE_SUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x_0x%x) page num(0x%x) exceed 255 in RSCN",
+ lport->port_id, lport->nport_id, port_id_page_cnt);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* L_Port-->Disc-->Rscn_Mgr */
+ disc = &lport->disc;
+ rscn_mgr = &disc->rscn_mgr;
+
+ /* For each ID from the RSCN pages: check whether a Disc is needed */
+ while (index < port_id_page_cnt) {
+ rscn_port_id = &rscn_pld->port_id_page[index];
+ if (unf_lookup_lport_by_nportid(lport, *(u32 *)rscn_port_id)) {
+ /* Prevent creating a session with an L_Port that has the same N_Port_ID */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) find local N_Port_ID(0x%x) within RSCN payload",
+ ((struct unf_lport *)(lport->root_lport))->nport_id,
+ *(u32 *)rscn_port_id);
+ } else {
+ /* New RSCN_Page ID found, save it to RSCN_Mgr */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x_0x%x) save RSCN N_Port_ID(0x%x)",
+ lport->port_id, lport->nport_id,
+ *(u32 *)rscn_port_id);
+
+ /* 1. New RSCN_Page ID found, save it to RSCN_Mgr */
+ unf_save_rscn_port_id(rscn_mgr, rscn_port_id);
+ eb_need_disc_flag = true;
+ }
+ index++;
+ }
+
+ if (!eb_need_disc_flag) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Port(0x%x) find all N_Port_ID and do not need to disc",
+ ((struct unf_lport *)(lport->root_lport))->nport_id);
+
+ return RETURN_OK;
+ }
+
+ /* 2. Do/Start Disc: Check & do Disc (GID_PT) process */
+ if (!disc->disc_temp.unf_disc_start) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) DISC start function is NULL",
+ lport->port_id, lport->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ if (disc->states == UNF_DISC_ST_END ||
+ ((jiffies - disc->last_disc_jiff) > msecs_to_jiffies(UNF_OS_DISC_REDISC_TIME))) {
+ disc->disc_option = UNF_RSCN_DISC;
+ disc->last_disc_jiff = jiffies;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ ret = disc->disc_temp.unf_disc_start(lport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_INFO,
+ "[info]Port(0x%x_0x%x) DISC state(0x%x) with last time(%llu) and don't do DISC",
+ lport->port_id, lport->nport_id, disc->states,
+ disc->last_disc_jiff);
+
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ }
+
+ return ret;
+}
+
+u32 unf_rscn_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ /*
+ * An RSCN ELS shall be sent to registered Nx_Ports
+ * when an event occurs that may have affected the state of
+ * one or more Nx_Ports, or the ULP state within the Nx_Port.
+ * *
+ * The Payload of a RSCN Request includes a list
+ * containing the addresses of the affected Nx_Ports.
+ * *
+ * Each affected Port_ID page contains the ID of the Nx_Port,
+ * Fabric Controller, E_Port, domain, or area for which the event was
+ * detected.
+ */
+ struct unf_rscn_pld *rscn_pld = NULL;
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 pld_len = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Receive RSCN Port(0x%x_0x%x)<---RPort(0x%x) OX_ID(0x%x)",
+ lport->port_id, lport->nport_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_RSCN);
+
+ /* 1. Get R_Port by S_ID */
+ unf_rport = unf_get_rport_by_nport_id(lport, sid); /* rport busy_list */
+ if (!unf_rport) {
+ unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC, sid);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) received RSCN but has no RPort(0x%x) with OX_ID(0x%x)",
+ lport->port_id, lport->nport_id, sid, xchg->oxid);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_rport->nport_id = sid;
+ }
+
+ rscn_pld = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld;
+ FC_CHECK_RETURN_VALUE(rscn_pld, UNF_RETURN_ERROR);
+ pld_len = UNF_GET_RSCN_PLD_LEN(rscn_pld->cmnd);
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, rscn_pld, pld_len);
+
+ /* 2. NOTE: Analyse the RSCN payload (save & start disc if necessary) */
+ ret = unf_analysis_rscn_payload(lport, rscn_pld);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) analysis RSCN failed",
+ lport->port_id, lport->nport_id);
+ }
+
+ /* 3. send rscn_acc after analysis payload */
+ ret = unf_send_rscn_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) send RSCN response failed",
+ lport->port_id, lport->nport_id);
+ }
+
+ return ret;
+}
+
+static void unf_analysis_pdisc_pld(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_plogi_pdisc *pdisc)
+{
+ struct unf_lgn_parm *pdisc_params = NULL;
+ u64 wwpn = INVALID_VALUE64;
+ u64 wwnn = INVALID_VALUE64;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(pdisc);
+
+ pdisc_params = &pdisc->payload.stparms;
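+ /* Clamp the advertised BB receive data field size to the local maximum frame size */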
+ if (pdisc_params->co_parms.bb_receive_data_field_size > UNF_MAX_FRAME_SIZE)
+ rport->max_frame_size = UNF_MAX_FRAME_SIZE;
+ else
+ rport->max_frame_size = pdisc_params->co_parms.bb_receive_data_field_size;
+
+ wwnn = (u64)(((u64)(pdisc_params->high_node_name) << UNF_SHIFT_32) |
+ ((u64)pdisc_params->low_node_name));
+ wwpn = (u64)(((u64)(pdisc_params->high_port_name) << UNF_SHIFT_32) |
+ ((u64)pdisc_params->low_port_name));
+
+ rport->port_name = wwpn;
+ rport->node_name = wwnn;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) save PDISC parameters to Rport(0x%x) WWPN(0x%llx) WWNN(0x%llx)",
+ lport->port_id, rport->nport_id, rport->port_name,
+ rport->node_name);
+}
+
+u32 unf_send_pdisc_rjt(struct unf_lport *lport, struct unf_rport *rport, struct unf_xchg *xchg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_rjt_info rjt_info;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
+ rjt_info.els_cmnd_code = ELS_PDISC;
+ rjt_info.reason_code = UNF_LS_RJT_LOGICAL_ERROR;
+ rjt_info.reason_explanation = UNF_LS_RJT_NO_ADDITIONAL_INFO;
+
+ ret = unf_send_els_rjt_by_rport(lport, xchg, rport, &rjt_info);
+
+ return ret;
+}
+
+u32 unf_pdisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_plogi_pdisc *pdisc = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+ u32 ret = RETURN_OK;
+ u64 wwpn = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Receive PDISC. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ lport->port_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_PDISC);
+ pdisc = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->pdisc;
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &pdisc->payload,
+ sizeof(struct unf_plogi_payload));
+ wwpn = (u64)(((u64)(pdisc->payload.stparms.high_port_name) << UNF_SHIFT_32) |
+ ((u64)pdisc->payload.stparms.low_port_name));
+
+ unf_rport = unf_find_rport(lport, sid, wwpn);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find RPort by NPort ID(0x%x). Free exchange and send LOGO",
+ lport->port_id, sid);
+
+ unf_cm_free_xchg(lport, xchg);
+ (void)unf_send_logo_by_did(lport, sid);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MINOR,
+ "[info]Port(0x%x) get exist RPort(0x%x) when receive PDISC with S_Id(0x%x)",
+ lport->port_id, unf_rport->nport_id, sid);
+
+ if (sid >= UNF_FC_FID_DOM_MGR)
+ return unf_send_pdisc_rjt(lport, unf_rport, xchg);
+
+ unf_analysis_pdisc_pld(lport, unf_rport, pdisc);
+
+ /* State: READY */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ if (unf_rport->rp_state == UNF_RPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find RPort(0x%x) state is READY when receiving PDISC",
+ lport->port_id, sid);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ ret = unf_send_pdisc_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) handle PDISC failed",
+ lport->port_id);
+
+ return ret;
+ }
+
+ /* Report Down/Up event to scsi */
+ unf_update_lport_state_by_linkup_event(lport,
+ unf_rport, unf_rport->options);
+ } else if ((unf_rport->rp_state == UNF_RPORT_ST_CLOSING) &&
+ (unf_rport->session)) {
+ /* State: Closing */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving PDISC",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ unf_cm_free_xchg(lport, xchg);
+ (void)unf_send_logo_by_did(lport, sid);
+ } else if (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT) {
+ /* State: PRLI_WAIT */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving PDISC",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ ret = unf_send_pdisc_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) handle PDISC failed",
+ lport->port_id);
+
+ return ret;
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving PDISC, send LOGO",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ unf_rport_enter_logo(lport, unf_rport);
+ unf_cm_free_xchg(lport, xchg);
+ }
+ }
+
+ return ret;
+}
+
+static void unf_analysis_adisc_pld(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_adisc_payload *adisc_pld)
+{
+ u64 wwpn = INVALID_VALUE64;
+ u64 wwnn = INVALID_VALUE64;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(adisc_pld);
+
+ wwnn = (u64)(((u64)(adisc_pld->high_node_name) << UNF_SHIFT_32) |
+ ((u64)adisc_pld->low_node_name));
+ wwpn = (u64)(((u64)(adisc_pld->high_port_name) << UNF_SHIFT_32) |
+ ((u64)adisc_pld->low_port_name));
+
+ rport->port_name = wwpn;
+ rport->node_name = wwnn;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) save ADISC parameters to RPort(0x%x), WWPN(0x%llx) WWNN(0x%llx) NPort ID(0x%x)",
+ lport->port_id, rport->nport_id, rport->port_name,
+ rport->node_name, adisc_pld->nport_id);
+}
+
+u32 unf_adisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_adisc_payload *adisc_pld = NULL;
+ ulong flags = 0;
+ u64 wwpn = 0;
+ u32 ret = RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Receive ADISC. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ lport->port_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_ADISC);
+ adisc_pld = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->adisc.adisc_payl;
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, adisc_pld, sizeof(struct unf_adisc_payload));
+ wwpn = (u64)(((u64)(adisc_pld->high_port_name) << UNF_SHIFT_32) |
+ ((u64)adisc_pld->low_port_name));
+
+ unf_rport = unf_find_rport(lport, sid, wwpn);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find RPort by NPort ID(0x%x). Free exchange and send LOGO",
+ lport->port_id, sid);
+
+ unf_cm_free_xchg(lport, xchg);
+ (void)unf_send_logo_by_did(lport, sid);
+
+ return ret;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MINOR,
+ "[info]Port(0x%x) get exist RPort(0x%x) when receive ADISC with S_ID(0x%x)",
+ lport->port_id, unf_rport->nport_id, sid);
+
+ unf_analysis_adisc_pld(lport, unf_rport, adisc_pld);
+
+ /* State: READY */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ if (unf_rport->rp_state == UNF_RPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find RPort(0x%x) state is READY when receiving ADISC",
+ lport->port_id, sid);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* Return ACC directly */
+ ret = unf_send_adisc_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ADISC ACC failed", lport->port_id);
+
+ return ret;
+ }
+
+ /* Report Down/Up event to SCSI */
+ unf_update_lport_state_by_linkup_event(lport, unf_rport, unf_rport->options);
+ }
+ /* State: Closing */
+ else if ((unf_rport->rp_state == UNF_RPORT_ST_CLOSING) &&
+ (unf_rport->session)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving ADISC",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ unf_rport = unf_get_safe_rport(lport, unf_rport,
+ UNF_RPORT_REUSE_RECOVER,
+ unf_rport->nport_id);
+ if (unf_rport) {
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ unf_rport->nport_id = sid;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ ret = unf_send_adisc_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ADISC ACC failed",
+ lport->port_id);
+
+ return ret;
+ }
+
+ unf_update_lport_state_by_linkup_event(lport,
+ unf_rport, unf_rport->options);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find RPort by NPort_ID(0x%x). Free exchange and send LOGO",
+ lport->port_id, sid);
+
+ unf_cm_free_xchg(lport, xchg);
+ (void)unf_send_logo_by_did(lport, sid);
+ }
+ } else if (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT) {
+ /* State: PRLI_WAIT */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving ADISC",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ ret = unf_send_adisc_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ADISC ACC failed", lport->port_id);
+
+ return ret;
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving ADISC, send LOGO",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ unf_rport_enter_logo(lport, unf_rport);
+ unf_cm_free_xchg(lport, xchg);
+ }
+
+ return ret;
+}
+
+u32 unf_rec_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x) receive REC", lport->port_id);
+
+ /* Send rec acc */
+ ret = unf_send_rec_acc(lport, unf_rport, xchg); /* discard directly */
+
+ return ret;
+}
+
+u32 unf_rrq_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rrq *rrq = NULL;
+ struct unf_xchg *xchg_reused = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ u32 unf_sid = 0;
+ ulong flags = 0;
+ struct unf_rjt_info rjt_info = {0};
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_RRQ);
+ rrq = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rrq;
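+ /* OX_ID is the upper 16 bits of oxid_rxid, RX_ID the lower 16 bits */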
+ ox_id = (u16)(rrq->oxid_rxid >> UNF_SHIFT_16);
+ rx_id = (u16)(rrq->oxid_rxid);
+ unf_sid = rrq->sid & UNF_NPORTID_MASK;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[warn]Receive RRQ. Port(0x%x)<---RPort(0x%x) sfsXchg(0x%p) OX_ID(0x%x,0x%x) RX_ID(0x%x)",
+ lport->port_id, sid, xchg, ox_id, xchg->oxid, rx_id);
+
+ /* Get R_Port */
+ unf_rport = unf_get_rport_by_nport_id(lport, sid);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive RRQ but has no RPort(0x%x)",
+ lport->port_id, sid);
+
+ /* NOTE: send LOGO */
+ unf_send_logo_by_did(lport, unf_sid);
+
+ unf_cm_free_xchg(lport, xchg);
+ return ret;
+ }
+
+ /* Get Target (Abort I/O) exchange context */
+ xchg_reused = unf_cm_lookup_xchg_by_id(lport, ox_id, unf_sid); /* unf_find_xchg_by_ox_id */
+ if (!xchg_reused) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) cannot find exchange with OX_ID(0x%x) RX_ID(0x%x) S_ID(0x%x)",
+ lport->port_id, ox_id, rx_id, unf_sid);
+
+ rjt_info.els_cmnd_code = ELS_RRQ;
+ rjt_info.reason_code = FCXLS_BA_RJT_LOGICAL_ERROR | FCXLS_LS_RJT_INVALID_OXID_RXID;
+
+ /* NOTE: send ELS RJT */
+ if (unf_send_els_rjt_by_rport(lport, xchg, unf_rport, &rjt_info) != RETURN_OK) {
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+ }
+
+ hot_pool = xchg_reused->hot_pool;
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) OxId(0x%x) Rxid(0x%x) Sid(0x%x) Hot Pool is NULL.",
+ lport->port_id, ox_id, rx_id, unf_sid);
+
+ return ret;
+ }
+
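+ /* Invalidate the reclaimed exchange's OX_ID/RX_ID under the hot-pool lock */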
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ xchg_reused->oxid = INVALID_VALUE16;
+ xchg_reused->rxid = INVALID_VALUE16;
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* NOTE: release I/O exchange context */
+ unf_xchg_ref_dec(xchg_reused, SFS_RESPONSE);
+
+ /* Send RRQ ACC */
+ ret = unf_send_rrq_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can not send RRQ rsp. Xchg(0x%p) Ioxchg(0x%p) OX_RX_ID(0x%x 0x%x) S_ID(0x%x)",
+ lport->port_id, xchg, xchg_reused, ox_id, rx_id, unf_sid);
+
+ unf_cm_free_xchg(lport, xchg);
+ }
+
+ return ret;
+}
+
+u32 unf_logo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rport *logo_rport = NULL;
+ struct unf_logo *logo = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 nport_id = 0;
+ struct unf_rjt_info rjt_info = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_LOGO);
+ logo = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->logo;
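+ /* N_Port_ID carried in the LOGO payload */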
+ nport_id = logo->payload.nport_id & UNF_NPORTID_MASK;
+
+ if (sid < UNF_FC_FID_DOM_MGR) {
+ /* R_Port is not fabric port */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]LOGIN: Receive LOGO. Port(0x%x)<---RPort(0x%x) NPort_ID(0x%x) OXID(0x%x)",
+ lport->port_id, sid, nport_id, xchg->oxid);
+ }
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &logo->payload,
+ sizeof(struct unf_logo_payload));
+
+ /*
+ * 1. S_ID unequal to NPort_ID:
+ * immediately link down the R_Port found by NPort_ID
+ */
+ if (sid != nport_id) {
+ logo_rport = unf_get_rport_by_nport_id(lport, nport_id);
+ if (logo_rport)
+ unf_rport_immediate_link_down(lport, logo_rport);
+ }
+
+ /* 2. Get R_Port by S_ID (frame header) */
+ unf_rport = unf_get_rport_by_nport_id(lport, sid);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_INIT, sid); /* INIT */
+ if (!unf_rport) {
+ memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
+ rjt_info.els_cmnd_code = ELS_LOGO;
+ rjt_info.reason_code = UNF_LS_RJT_LOGICAL_ERROR;
+ rjt_info.reason_explanation = UNF_LS_RJT_NO_ADDITIONAL_INFO;
+ ret = unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive LOGO but has no RPort(0x%x)",
+ lport->port_id, sid);
+
+ return ret;
+ }
+
+ /*
+ * 3. I/O resource release: set ABORT tag
+ * *
+ * Call by: R_Port remove; RCVD LOGO; RCVD PLOGI; send PLOGI ACC
+ */
+ unf_cm_xchg_mgr_abort_io_by_id(lport, unf_rport, sid, lport->nport_id, INI_IO_STATE_LOGO);
+
+ /* 4. Send LOGO ACC */
+ ret = unf_send_logo_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) send LOGO failed", lport->port_id);
+ }
+ /*
+ * 5. Do the same operations as for RCVD LOGO/PRLO & Send LOGO:
+ * retry (LOGIN or LOGO) or link down immediately
+ */
+ unf_process_rport_after_logo(lport, unf_rport);
+
+ return ret;
+}
+
+u32 unf_prlo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_prli_prlo *prlo = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Receive PRLO. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ lport->port_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_LOGO);
+
+ /* Get (new) R_Port */
+ unf_rport = unf_get_rport_by_nport_id(lport, sid);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_INIT, sid); /* INIT */
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive PRLO but has no RPort",
+ lport->port_id);
+
+ /* Discard directly */
+ unf_cm_free_xchg(lport, xchg);
+ return ret;
+ }
+
+ prlo = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->prlo;
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &prlo->payload,
+ sizeof(struct unf_prli_payload));
+
+ /* Send PRLO ACC to remote */
+ ret = unf_send_prlo_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send PRLO ACC failed", lport->port_id);
+ }
+
+ /* Enter Enhanced action after LOGO (retry LOGIN or LOGO) */
+ unf_process_rport_after_logo(lport, unf_rport);
+
+ return ret;
+}
+
+static void unf_fill_echo_acc_pld(struct unf_echo *echo_acc)
+{
+ struct unf_echo_payload *echo_acc_pld = NULL;
+
+ FC_CHECK_RETURN_VOID(echo_acc);
+
+ echo_acc_pld = echo_acc->echo_pld;
+ FC_CHECK_RETURN_VOID(echo_acc_pld);
+
+ echo_acc_pld->cmnd = UNF_ELS_CMND_ACC;
+}
+
+static void unf_echo_acc_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = xchg->lport;
+
+ FC_CHECK_RETURN_VOID(unf_lport);
+ if (xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc.phy_echo_addr) {
+ pci_unmap_single(unf_lport->low_level_func.dev,
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc
+ .phy_echo_addr,
+ UNF_ECHO_PAYLOAD_LEN, DMA_BIDIRECTIONAL);
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc.phy_echo_addr = 0;
+ }
+}
+
+static u32 unf_send_echo_acc(struct unf_lport *lport, u32 did,
+ struct unf_xchg *xchg)
+{
+ struct unf_echo *echo_acc = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg;
+ dma_addr_t phy_echo_acc_addr;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_ECHO);
+ xchg->did = did;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_echo_acc_callback;
+
+ unf_fill_package(&pkg, xchg, xchg->rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ echo_acc = &fc_entry->echo_acc;
+ unf_fill_echo_acc_pld(echo_acc);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
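+ /* DMA-map the ECHO ACC payload buffer before sending the reply */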
+ phy_echo_acc_addr = pci_map_single(lport->low_level_func.dev,
+ echo_acc->echo_pld,
+ UNF_ECHO_PAYLOAD_LEN,
+ DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(lport->low_level_func.dev, phy_echo_acc_addr)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) pci map err",
+ lport->port_id);
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+ echo_acc->phy_echo_addr = phy_echo_acc_addr;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK) {
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ pci_unmap_single(lport->low_level_func.dev,
+ phy_echo_acc_addr, UNF_ECHO_PAYLOAD_LEN,
+ DMA_BIDIRECTIONAL);
+ echo_acc->phy_echo_addr = 0;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]ECHO ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ did, ox_id, rx_id);
+
+ return ret;
+}
+
+u32 unf_echo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_echo_payload *echo_pld = NULL;
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 data_len = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ data_len = xchg->fcp_sfs_union.sfs_entry.cur_offset;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Receive ECHO. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x))",
+ lport->port_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_ECHO);
+ echo_pld = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld;
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, echo_pld, data_len);
+ unf_rport = unf_get_rport_by_nport_id(lport, sid);
+ xchg->rport = unf_rport;
+
+ ret = unf_send_echo_acc(lport, sid, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) send ECHO ACC failed", lport->port_id);
+ }
+
+ return ret;
+}
+
+static void unf_login_with_rport_in_n2n(struct unf_lport *lport,
+ u64 remote_port_name,
+ u64 remote_node_name)
+{
+ /*
+ * Call by (P2P):
+ * 1. RCVD FLOGI ACC
+ * 2. Send FLOGI ACC succeeded
+ * *
+ * Compare WWNs; the larger one becomes master and then sends PLOGI
+ */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = NULL;
+ ulong lport_flag = 0;
+ ulong rport_flag = 0;
+ u64 port_name = 0;
+ u64 node_name = 0;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ spin_lock_irqsave(&unf_lport->lport_state_lock, lport_flag);
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_READY); /* LPort: FLOGI_WAIT --> READY */
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, lport_flag);
+
+ port_name = remote_port_name;
+ node_name = remote_node_name;
+
+ if (unf_lport->port_name > port_name) {
+ /* Master case: send PLOGI */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x)'s WWN(0x%llx) is larger than rport(0x%llx), should be master",
+ unf_lport->port_id, unf_lport->port_name, port_name);
+
+ /* Update N_Port_ID now: 0xEF */
+ unf_lport->nport_id = UNF_P2P_LOCAL_NPORT_ID;
+
+ unf_rport = unf_find_valid_rport(lport, port_name, UNF_P2P_REMOTE_NPORT_ID);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY,
+ UNF_P2P_REMOTE_NPORT_ID);
+ if (unf_rport) {
+ unf_rport->node_name = node_name;
+ unf_rport->port_name = port_name;
+ unf_rport->nport_id = UNF_P2P_REMOTE_NPORT_ID; /* 0xD6 */
+ unf_rport->local_nport_id = UNF_P2P_LOCAL_NPORT_ID; /* 0xEF */
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
+ if (unf_rport->rp_state == UNF_RPORT_ST_PLOGI_WAIT ||
+ unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT ||
+ unf_rport->rp_state == UNF_RPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x) Rport(0x%x) have sent PLOGI or PRLI with state(0x%x)",
+ unf_lport->port_id,
+ unf_rport->nport_id,
+ unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock,
+ rport_flag);
+ return;
+ }
+ /* Update R_Port state: PLOGI_WAIT */
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
+
+ /* P2P with master: Start to Send PLOGI */
+ ret = unf_send_plogi(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) with WWN(0x%llx) send PLOGI to(0x%llx) failed",
+ unf_lport->port_id,
+ unf_lport->port_name, port_name);
+
+ unf_rport_error_recovery(unf_rport);
+ }
+ } else {
+ /* Get/Alloc R_Port failed */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with WWN(0x%llx) allocate RPort(ID:0x%x,WWPN:0x%llx) failed",
+ unf_lport->port_id, unf_lport->port_name,
+ UNF_P2P_REMOTE_NPORT_ID, port_name);
+ }
+ } else {
+ /* Slave case: L_Port's Port Name is smaller than R_Port's */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) with WWN(0x%llx) is smaller than rport(0x%llx), do nothing",
+ unf_lport->port_id, unf_lport->port_name, port_name);
+ }
+}
+
+void unf_lport_enter_mns_plogi(struct unf_lport *lport)
+{
+ /* Fabric or Public Loop Mode: Login with Name server */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_plogi_payload *plogi_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ /* Get (safe) R_Port */
+ unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC, UNF_FC_FID_MGMT_SERV);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate RPort failed", lport->port_id);
+ return;
+ }
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = UNF_FC_FID_MGMT_SERV; /* 0xfffffa */
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ /* Get & Set new free exchange */
+ xchg = unf_cm_get_free_xchg(lport, UNF_XCHG_TYPE_SFS);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PLOGI", lport->port_id);
+
+ return;
+ }
+
+ xchg->cmnd_code = ELS_PLOGI; /* PLOGI */
+ xchg->did = unf_rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = unf_lport;
+ xchg->rport = unf_rport;
+
+ /* Set callback function */
+ xchg->callback = NULL; /* for received PLOGI ACC/RJT processing */
+ xchg->ob_callback = NULL; /* for send-PLOGI-failed processing */
+
+ unf_fill_package(&pkg, xchg, unf_rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+ /* Fill PLOGI payload */
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return;
+ }
+
+ plogi_pld = &fc_entry->plogi.payload;
+ memset(plogi_pld, 0, sizeof(struct unf_plogi_payload));
+ unf_fill_plogi_pld(plogi_pld, lport);
+
+ /* Start to Send PLOGI command */
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+}
+
+static void unf_register_to_switch(struct unf_lport *lport)
+{
+ /* Register to Fabric, used for: FABRIC & PUBLIC LOOP */
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_REMOTE_ACC);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ /* Login with Name server: PLOGI */
+ unf_lport_enter_sns_plogi(lport);
+
+ unf_lport_enter_mns_plogi(lport);
+
+ /* Physical Port */
+ if (lport->root_lport == lport &&
+ lport->act_topo == UNF_ACT_TOP_P2P_FABRIC) {
+ unf_linkup_all_vports(lport);
+ }
+}
+
+void unf_fdisc_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do recovery */
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ unf_lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: FDISC send failed");
+
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ /* Do L_Port error recovery */
+ unf_lport_error_recovery(unf_lport);
+}
+
+void unf_fdisc_callback(void *lport, void *rport, void *exch)
+{
+ /* Register to Name Server or Do recovery */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_flogi_fdisc_payload *fdisc_pld = NULL;
+ ulong flag = 0;
+ u32 cmd = 0;
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_rport = (struct unf_rport *)rport;
+ xchg = (struct unf_xchg *)exch;
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(exch);
+ FC_CHECK_RETURN_VOID(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr);
+ fdisc_pld = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->fdisc_acc.fdisc_payload;
+ if (xchg->byte_orders & UNF_BIT_2)
+ unf_big_end_to_cpu((u8 *)fdisc_pld, sizeof(struct unf_flogi_fdisc_payload));
+
+ cmd = fdisc_pld->cmnd;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: FDISC response is (0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ cmd, unf_lport->port_id, unf_rport->nport_id, xchg->oxid);
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_FLOGI);
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport,
+ UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FLOGI);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has no Rport", unf_lport->port_id);
+ return;
+ }
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = UNF_FC_FID_FLOGI;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ if ((cmd & UNF_ELS_CMND_HIGH_MASK) == UNF_ELS_CMND_ACC) {
+ /* Case for ACC */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) receive Flogi/Fdisc ACC in state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
+
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+ return;
+ }
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ unf_lport_update_nport_id(unf_lport, xchg->sid);
+ unf_lport_update_time_params(unf_lport, fdisc_pld);
+ unf_register_to_switch(unf_lport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: FDISC response is (0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ cmd, unf_lport->port_id, unf_rport->nport_id, xchg->oxid);
+
+ /* Case for RJT: Do L_Port recovery */
+ unf_lport_error_recovery(unf_lport);
+ }
+}
+
+void unf_flogi_ob_callback(struct unf_xchg *xchg)
+{
+ /* Send FLOGI failed & Do L_Port recovery */
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ /* Get L_port from exchange context */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ unf_lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send FLOGI failed",
+ unf_lport->port_id);
+
+ /* Check L_Port state */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send FLOGI failed with state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
+
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+ return;
+ }
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ /* Do L_Port error recovery */
+ unf_lport_error_recovery(unf_lport);
+}
+
+static void unf_lport_update_nport_id(struct unf_lport *lport, u32 nport_id)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ lport->nport_id = nport_id;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+}
+
+static void
+unf_lport_update_time_params(struct unf_lport *lport,
+ struct unf_flogi_fdisc_payload *flogi_payload)
+{
+ ulong flag = 0;
+ u32 ed_tov = 0;
+ u32 ra_tov = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(flogi_payload);
+
+ ed_tov = flogi_payload->fabric_parms.co_parms.e_d_tov;
+ ra_tov = flogi_payload->fabric_parms.co_parms.r_a_tov;
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+
+ /* FC-FS-3: 21.3.4, 21.3.5 */
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
+ lport->ed_tov = ed_tov;
+ lport->ra_tov = ra_tov;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) with topo(0x%x) no need to save time parameters",
+ lport->port_id, lport->nport_id, lport->act_topo);
+ }
+
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+}
+
+static void unf_rcv_flogi_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_flogi_fdisc_payload *flogi_pld,
+ u32 nport_id, struct unf_xchg *xchg)
+{
+ /* PLOGI to Name server or remote port */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ struct unf_flogi_fdisc_payload *unf_flogi_pld = flogi_pld;
+ struct unf_fabric_parm *fabric_params = NULL;
+ u64 port_name = 0;
+ u64 node_name = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(flogi_pld);
+
+ /* Check L_Port state: FLOGI_WAIT */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[info]Port(0x%x_0x%x) receive FLOGI ACC with state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
+
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+ return;
+ }
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ fabric_params = &unf_flogi_pld->fabric_parms;
+ node_name =
+ (u64)(((u64)(fabric_params->high_node_name) << UNF_SHIFT_32) |
+ ((u64)(fabric_params->low_node_name)));
+ port_name =
+ (u64)(((u64)(fabric_params->high_port_name) << UNF_SHIFT_32) |
+ ((u64)(fabric_params->low_port_name)));
+
+ /* FLOGI ACC payload class 3 service priority value */
+ if (unf_lport->root_lport == unf_lport && unf_lport->qos_cs_ctrl &&
+ fabric_params->cl_parms[ARRAY_INDEX_2].priority == UNF_PRIORITY_ENABLE)
+ unf_lport->priority = (bool)UNF_PRIORITY_ENABLE;
+ else
+ unf_lport->priority = (bool)UNF_PRIORITY_DISABLE;
+
+ /* Save Flogi parameters */
+ unf_save_fabric_params(unf_lport, unf_rport, fabric_params);
+
+ if (UNF_CHECK_NPORT_FPORT_BIT(unf_flogi_pld) == UNF_N_PORT) {
+ /* P2P Mode */
+ unf_lport_update_topo(unf_lport, UNF_ACT_TOP_P2P_DIRECT);
+ unf_login_with_rport_in_n2n(unf_lport, port_name, node_name);
+ } else {
+ /* for:
+ * UNF_ACT_TOP_PUBLIC_LOOP/UNF_ACT_TOP_P2P_FABRIC
+ * /UNF_TOP_P2P_MASK
+ */
+ if (unf_lport->act_topo != UNF_ACT_TOP_PUBLIC_LOOP)
+ unf_lport_update_topo(unf_lport, UNF_ACT_TOP_P2P_FABRIC);
+
+ unf_lport_update_nport_id(unf_lport, nport_id);
+ unf_lport_update_time_params(unf_lport, unf_flogi_pld);
+
+ /* Save process both for Public loop & Fabric */
+ unf_register_to_switch(unf_lport);
+ }
+}
+
+static void unf_flogi_acc_com_process(struct unf_xchg *xchg)
+{
+ /* Maybe within interrupt or thread context */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_flogi_fdisc_payload *flogi_pld = NULL;
+ u32 nport_id = 0;
+ u32 cmnd = 0;
+ ulong flags = 0;
+ struct unf_xchg *unf_xchg = xchg;
+
+ FC_CHECK_RETURN_VOID(unf_xchg);
+ FC_CHECK_RETURN_VOID(unf_xchg->lport);
+
+ unf_lport = unf_xchg->lport;
+ unf_rport = unf_xchg->rport;
+ flogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->flogi_acc.flogi_payload;
+ cmnd = flogi_pld->cmnd;
+
+ /* Get N_Port_ID & R_Port */
+ /* Others: 0xFFFFFE */
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_FLOGI);
+ nport_id = UNF_FC_FID_FLOGI;
+
+ /* Get Safe R_Port: reuse only */
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can not allocate new Rport", unf_lport->port_id);
+
+ return;
+ }
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ unf_rport->nport_id = UNF_FC_FID_FLOGI;
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* Process FLOGI ACC or RJT */
+ if ((cmnd & UNF_ELS_CMND_HIGH_MASK) == UNF_ELS_CMND_ACC) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: FLOGI response is(0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ cmnd, unf_lport->port_id, unf_rport->nport_id, unf_xchg->oxid);
+
+ /* Case for ACC */
+ unf_rcv_flogi_acc(unf_lport, unf_rport, flogi_pld, unf_xchg->sid, unf_xchg);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: FLOGI response is(0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ cmnd, unf_lport->port_id, unf_rport->nport_id,
+ unf_xchg->oxid);
+
+ /* Case for RJT: do L_Port error recovery */
+ unf_lport_error_recovery(unf_lport);
+ }
+}
+
+static int unf_rcv_flogi_acc_async_callback(void *argc_in, void *argc_out)
+{
+ struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ unf_flogi_acc_com_process(xchg);
+
+ unf_xchg_ref_dec(xchg, SFS_RESPONSE);
+
+ return RETURN_OK;
+}
+
+void unf_flogi_callback(void *lport, void *rport, void *xchg)
+{
+ /* Callback function for FLOGI ACC or RJT */
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_flogi_fdisc_payload *flogi_pld = NULL;
+ bool bbscn_enabled = false;
+ enum unf_act_topo act_topo = UNF_ACT_TOP_UNKNOWN;
+ bool switch2thread = false;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+ FC_CHECK_RETURN_VOID(unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr);
+
+ unf_xchg->lport = lport;
+ flogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->flogi_acc.flogi_payload;
+
+ if (unf_xchg->byte_orders & UNF_BIT_2)
+ unf_big_end_to_cpu((u8 *)flogi_pld, sizeof(struct unf_flogi_fdisc_payload));
+
+ if (unf_lport->act_topo != UNF_ACT_TOP_PUBLIC_LOOP &&
+ (UNF_CHECK_NPORT_FPORT_BIT(flogi_pld) == UNF_F_PORT))
+ /* Get Top Mode (P2P_F) --->>> used for BBSCN */
+ act_topo = UNF_ACT_TOP_P2P_FABRIC;
+
+ bbscn_enabled =
+ unf_check_bbscn_is_enabled((u8)unf_lport->low_level_func.lport_cfg_items.bbscn,
+ (u8)UNF_GET_BB_SC_N_FROM_PARAMS(&flogi_pld->fabric_parms));
+ if (act_topo == UNF_ACT_TOP_P2P_FABRIC && bbscn_enabled) {
+ /* BBSCN Enable or not --->>> used for Context change */
+ unf_lport->bbscn_support = true;
+ switch2thread = true;
+ }
+
+ if (switch2thread && unf_lport->root_lport == unf_lport) {
+ /* Wait for LR done sync: for Root Port */
+ (void)unf_irq_process_switch2thread(unf_lport, unf_xchg,
+ unf_rcv_flogi_acc_async_callback);
+ } else {
+ /* Process FLOGI response directly */
+ unf_flogi_acc_com_process(unf_xchg);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ALL,
+ "[info]Port(0x%x) process FLOGI response: switch(%d) to thread done",
+ unf_lport->port_id, switch2thread);
+}
+
+void unf_plogi_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do L_Port or R_Port recovery */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_CHECK_RETURN_VOID(unf_lport);
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI(0x%x_0x%x) to RPort(%p:0x%x_0x%x) failed",
+ unf_lport->port_id, unf_lport->nport_id, xchg->oxid,
+ xchg->rxid, unf_rport, unf_rport->rport_index,
+ unf_rport->nport_id);
+
+ /* Start to recovery */
+ if (unf_rport->nport_id > UNF_FC_FID_DOM_MGR) {
+ /* with Name server: R_Port is fabric --->>> L_Port error
+ * recovery
+ */
+ unf_lport_error_recovery(unf_lport);
+ } else {
+ /* R_Port is not fabric --->>> R_Port error recovery */
+ unf_rport_error_recovery(unf_rport);
+ }
+}
+
+void unf_rcv_plogi_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_lgn_parm *login_parms)
+{
+ /* PLOGI ACC: PRLI(non fabric) or RFT_ID(fabric) */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ struct unf_lgn_parm *unf_login_parms = login_parms;
+ u64 node_name = 0;
+ u64 port_name = 0;
+ ulong flag = 0;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(login_parms);
+
+ node_name = (u64)(((u64)(unf_login_parms->high_node_name) << UNF_SHIFT_32) |
+ ((u64)(unf_login_parms->low_node_name)));
+ port_name = (u64)(((u64)(unf_login_parms->high_port_name) << UNF_SHIFT_32) |
+ ((u64)(unf_login_parms->low_port_name)));
+
+ /* ACC & Case for: R_Port is fabric (RFT_ID) */
+ if (unf_rport->nport_id >= UNF_FC_FID_DOM_MGR) {
+ /* Check L_Port state */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_PLOGI_WAIT) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive PLOGI ACC with error state(0x%x)",
+ lport->port_id, unf_lport->states);
+
+ return;
+ }
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ /* PLOGI parameters save */
+ unf_save_plogi_params(unf_lport, unf_rport, unf_login_parms, ELS_ACC);
+
+ /* Update R_Port WWPN & WWNN */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->node_name = node_name;
+ unf_rport->port_name = port_name;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Start to Send RFT_ID */
+ ret = unf_send_rft_id(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send RFT_ID failed",
+ lport->port_id);
+
+ unf_lport_error_recovery(unf_lport);
+ }
+ } else {
+ /* ACC & Case for: R_Port is not fabric */
+ if (unf_rport->options == UNF_PORT_MODE_UNKNOWN &&
+ unf_rport->port_name != INVALID_WWPN)
+ unf_rport->options = unf_get_port_feature(port_name);
+
+ /* Set Port Feature with BOTH: cancel */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->node_name = node_name;
+ unf_rport->port_name = port_name;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x)<---LS_ACC(DID:0x%x SID:0x%x) for PLOGI ACC with RPort state(0x%x) NodeName(0x%llx) E_D_TOV(%u)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id, unf_rport->rp_state,
+ unf_rport->node_name, unf_rport->ed_tov);
+
+ if (unf_lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP &&
+ (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT ||
+ unf_rport->rp_state == UNF_RPORT_ST_READY)) {
+ /* Do nothing, return directly */
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+ return;
+ }
+
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PRLI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* PLOGI parameters save */
+ unf_save_plogi_params(unf_lport, unf_rport, unf_login_parms, ELS_ACC);
+
+ /*
+ * Need Delay to Send PRLI or not
+ * Used for: L_Port with INI mode & R_Port is not Fabric
+ */
+ unf_check_rport_need_delay_prli(unf_lport, unf_rport, unf_rport->options);
+
+ /* Do not care: Just used for L_Port only is TGT mode or R_Port
+ * only is INI mode
+ */
+ unf_schedule_open_work(unf_lport, unf_rport);
+ }
+}
+
+void unf_plogi_acc_com_process(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_plogi_payload *plogi_pld = NULL;
+ struct unf_lgn_parm *login_parms = NULL;
+ ulong flag = 0;
+ u64 port_name = 0;
+ u32 rport_nport_id = 0;
+ u32 cmnd = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(unf_xchg);
+ FC_CHECK_RETURN_VOID(unf_xchg->lport);
+ FC_CHECK_RETURN_VOID(unf_xchg->rport);
+
+ unf_lport = unf_xchg->lport;
+ unf_rport = unf_xchg->rport;
+ rport_nport_id = unf_rport->nport_id;
+ plogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi_acc.payload;
+ login_parms = &plogi_pld->stparms;
+ cmnd = (plogi_pld->cmnd);
+
+ if (UNF_ELS_CMND_ACC == (cmnd & UNF_ELS_CMND_HIGH_MASK)) {
+ /* Case for PLOGI ACC: Go to next stage */
+ port_name =
+ (u64)(((u64)(login_parms->high_port_name) << UNF_SHIFT_32) |
+ ((u64)(login_parms->low_port_name)));
+
+ /* Get (new) R_Port: 0xfffffc has same WWN with 0xfffcxx */
+ unf_rport = unf_find_rport(unf_lport, rport_nport_id, port_name);
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport,
+ UNF_RPORT_REUSE_ONLY, rport_nport_id);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) alloc new RPort with wwpn(0x%llx) failed",
+ unf_lport->port_id, unf_lport->nport_id, port_name);
+ return;
+ }
+
+ /* PLOGI parameters check */
+ ret = unf_check_plogi_params(unf_lport, unf_rport, login_parms);
+ if (ret != RETURN_OK)
+ return;
+
+ /* Update R_Port state */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = rport_nport_id;
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Start to process PLOGI ACC */
+ unf_rcv_plogi_acc(unf_lport, unf_rport, login_parms);
+ } else {
+ /* Case for PLOGI RJT: L_Port or R_Port recovery */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x)<---RPort(0x%p) with LS_RJT(DID:0x%x SID:0x%x) for PLOGI",
+ unf_lport->port_id, unf_rport, unf_lport->nport_id,
+ unf_rport->nport_id);
+
+ if (unf_rport->nport_id >= UNF_FC_FID_DOM_MGR)
+ unf_lport_error_recovery(unf_lport);
+ else
+ unf_rport_error_recovery(unf_rport);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PLOGI response(0x%x). Port(0x%x_0x%x)<---RPort(0x%x_0x%p) wwpn(0x%llx) OX_ID(0x%x)",
+ cmnd, unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id,
+ unf_rport, port_name, unf_xchg->oxid);
+}
+
+static int unf_rcv_plogi_acc_async_callback(void *argc_in, void *argc_out)
+{
+ struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ unf_plogi_acc_com_process(xchg);
+
+ unf_xchg_ref_dec(xchg, SFS_RESPONSE);
+
+ return RETURN_OK;
+}
+
+void unf_plogi_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_plogi_payload *plogi_pld = NULL;
+ struct unf_lgn_parm *login_parms = NULL;
+ bool bbscn_enabled = false;
+ bool switch2thread = false;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+ FC_CHECK_RETURN_VOID(unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr);
+
+ plogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi_acc.payload;
+ login_parms = &plogi_pld->stparms;
+ unf_xchg->lport = lport;
+
+ if (unf_xchg->byte_orders & UNF_BIT_2)
+ unf_big_end_to_cpu((u8 *)plogi_pld, sizeof(struct unf_plogi_payload));
+
+ bbscn_enabled =
+ unf_check_bbscn_is_enabled((u8)unf_lport->low_level_func.lport_cfg_items.bbscn,
+ (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
+ if ((bbscn_enabled) &&
+ unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ switch2thread = true;
+ unf_lport->bbscn_support = true;
+ }
+
+ if (switch2thread && unf_lport->root_lport == unf_lport) {
+ /* Wait for LR done sync: just for ROOT Port */
+ (void)unf_irq_process_switch2thread(unf_lport, unf_xchg,
+ unf_rcv_plogi_acc_async_callback);
+ } else {
+ unf_plogi_acc_com_process(unf_xchg);
+ }
+}
+
+static void unf_logo_ob_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *lport = NULL;
+ struct unf_rport *rport = NULL;
+ struct unf_rport *old_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 nport_id = 0;
+ u32 logo_retry = 0;
+ u32 max_frame_size = 0;
+ u64 port_name = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_xchg = xchg;
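+ /* Snapshot retry count, max frame size and WWPN before closing the old rport; they are re-applied to the re-acquired rport below */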
+ old_rport = unf_xchg->rport;
+ logo_retry = old_rport->logo_retries;
+ max_frame_size = old_rport->max_frame_size;
+ port_name = old_rport->port_name;
+ unf_rport_enter_closing(old_rport);
+
+ lport = unf_xchg->lport;
+ if (unf_is_lport_valid(lport) != RETURN_OK)
+ return;
+
+ /* Get R_Port by exchange info: Init state */
+ nport_id = unf_xchg->did;
+ rport = unf_get_rport_by_nport_id(lport, nport_id);
+ rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_INIT, nport_id);
+ if (!rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) cannot allocate RPort", lport->port_id);
+ return;
+ }
+
+ rport->logo_retries = logo_retry;
+ rport->max_frame_size = max_frame_size;
+ rport->port_name = port_name;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[info]LOGIN: Port(0x%x) received LOGO RSP timeout topo(0x%x) retries(%u)",
+ lport->port_id, lport->act_topo, rport->logo_retries);
+
+ /* RCVD LOGO/PRLO & SEND LOGO: the same process */
+ if (rport->logo_retries < UNF_MAX_RETRY_COUNT) {
+ /* <: retry (LOGIN or LOGO) if necessary */
+ unf_process_rport_after_logo(lport, rport);
+ } else {
+ /* >=: Link down */
+ unf_rport_immediate_link_down(lport, rport);
+ }
+}
+
+static void unf_logo_callback(void *lport, void *rport, void *xchg)
+{
+ /* RCVD LOGO ACC/RJT: retry(LOGIN/LOGO) or link down immediately */
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rport *old_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_els_rjt *els_acc_rjt = NULL;
+ u32 cmnd = 0;
+ u32 nport_id = 0;
+ u32 logo_retry = 0;
+ u32 max_frame_size = 0;
+ u64 port_name = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_xchg = (struct unf_xchg *)xchg;
+ old_rport = unf_xchg->rport;
+
+ logo_retry = old_rport->logo_retries;
+ max_frame_size = old_rport->max_frame_size;
+ port_name = old_rport->port_name;
+ unf_rport_enter_closing(old_rport);
+
+ if (unf_is_lport_valid(lport) != RETURN_OK)
+ return;
+
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr)
+ return;
+
+ /* Get R_Port by exchange info: Init state */
+ els_acc_rjt = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->els_rjt;
+ nport_id = unf_xchg->did;
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport, UNF_RPORT_REUSE_INIT, nport_id);
+
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) cannot allocate RPort",
+ unf_lport->port_id);
+ return;
+ }
+
+ unf_rport->logo_retries = logo_retry;
+ unf_rport->max_frame_size = max_frame_size;
+ unf_rport->port_name = port_name;
+ cmnd = be32_to_cpu(els_acc_rjt->cmnd);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x) received LOGO RSP(0x%x),topo(0x%x) Port options(0x%x) RPort options(0x%x) retries(%u)",
+ unf_lport->port_id, (cmnd & UNF_ELS_CMND_HIGH_MASK),
+ unf_lport->act_topo, unf_lport->options, unf_rport->options,
+ unf_rport->logo_retries);
+
+ /* RCVD LOGO/PRLO & SEND LOGO: the same process */
+ if (unf_rport->logo_retries < UNF_MAX_RETRY_COUNT) {
+ /* <: retry (LOGIN or LOGO) if necessary */
+ unf_process_rport_after_logo(unf_lport, unf_rport);
+ } else {
+ /* >=: Link down */
+ unf_rport_immediate_link_down(unf_lport, unf_rport);
+ }
+}
+
+void unf_prli_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do R_Port recovery */
+ struct unf_lport *lport = NULL;
+ struct unf_rport *rport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) RPort(0x%x) send PRLI failed and do recovery",
+ lport->port_id, lport->nport_id, rport->nport_id);
+
+ /* Start to do R_Port error recovery */
+ unf_rport_error_recovery(rport);
+}
+
+void unf_prli_callback(void *lport, void *rport, void *xchg)
+{
+ /* RCVD PRLI RSP: ACC or RJT --->>> SCSI Link Up */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_prli_payload *prli_acc_pld = NULL;
+ ulong flag = 0;
+ u32 cmnd = 0;
+ u32 options = 0;
+ u32 fcp_conf = 0;
+ u32 rec_support = 0;
+ u32 task_retry_support = 0;
+ u32 retry_support = 0;
+ u32 tape_support = 0;
+ u32 fc4_type = 0;
+ enum unf_rport_login_state rport_state = UNF_RPORT_ST_INIT;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_lport = (struct unf_lport *)lport;
+ unf_rport = (struct unf_rport *)rport;
+ unf_xchg = (struct unf_xchg *)xchg;
+
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange(%p) entry is NULL",
+ unf_lport->port_id, unf_xchg);
+ return;
+ }
+
+ /* Get PRLI ACC payload */
+ prli_acc_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->prli_acc.payload;
+ if (unf_xchg->byte_orders & UNF_BIT_2) {
+ /* Change to little End, About INI/TGT mode & confirm info */
+ options = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ (UNF_FC4_FRAME_PARM_3_TGT | UNF_FC4_FRAME_PARM_3_INI);
+
+ cmnd = be32_to_cpu(prli_acc_pld->cmnd);
+ fcp_conf = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ UNF_FC4_FRAME_PARM_3_CONF_ALLOW;
+ rec_support = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ UNF_FC4_FRAME_PARM_3_REC_SUPPORT;
+ task_retry_support = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT;
+ retry_support = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT;
+ fc4_type = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_0]) >>
+ UNF_FC4_TYPE_SHIFT & UNF_FC4_TYPE_MASK;
+ } else {
+ options = (prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ (UNF_FC4_FRAME_PARM_3_TGT | UNF_FC4_FRAME_PARM_3_INI);
+
+ cmnd = (prli_acc_pld->cmnd);
+ fcp_conf = prli_acc_pld->parms[ARRAY_INDEX_3] & UNF_FC4_FRAME_PARM_3_CONF_ALLOW;
+ rec_support = prli_acc_pld->parms[ARRAY_INDEX_3] & UNF_FC4_FRAME_PARM_3_REC_SUPPORT;
+ task_retry_support = prli_acc_pld->parms[ARRAY_INDEX_3] &
+ UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT;
+ retry_support = prli_acc_pld->parms[ARRAY_INDEX_3] &
+ UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT;
+ fc4_type = prli_acc_pld->parms[ARRAY_INDEX_0] >> UNF_FC4_TYPE_SHIFT;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PRLI RSP: RPort(0x%x) parameter-3(0x%x) option(0x%x) cmd(0x%x) uiRecSupport:%u",
+ unf_rport->nport_id, prli_acc_pld->parms[ARRAY_INDEX_3],
+ options, cmnd, rec_support);
+
+ /* PRLI ACC: R_Port READY & Report R_Port Link Up */
+ if (UNF_ELS_CMND_ACC == (cmnd & UNF_ELS_CMND_HIGH_MASK)) {
+ /* Update R_Port options(INI/TGT/BOTH) */
+ unf_rport->options = options;
+
+ unf_update_port_feature(unf_rport->port_name, unf_rport->options);
+
+ /* NOTE: R_Port only with INI mode, send LOGO */
+ if (unf_rport->options == UNF_PORT_MODE_INI) {
+ /* Update R_Port state: LOGO */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* NOTE: Start to Send LOGO */
+ unf_rport_enter_logo(unf_lport, unf_rport);
+ return;
+ }
+
+ /* About confirm */
+ if (fcp_conf && unf_lport->low_level_func.lport_cfg_items.fcp_conf) {
+ unf_rport->fcp_conf_needed = true;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) FCP config is need for RPort(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id);
+ }
+
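+ /* Tape support requires REC, task-retry-ID and retry support all granted in the PRLI ACC parameters */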
+ tape_support = (rec_support && task_retry_support && retry_support);
+ if (tape_support && unf_lport->low_level_func.lport_cfg_items.tape_support) {
+ unf_rport->tape_support_needed = true;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x_0x%x) Rec is enabled for RPort(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id);
+ }
+
+ /* Update R_Port state: READY */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_READY);
+ rport_state = unf_rport->rp_state;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Report R_Port online (Link Up) event to SCSI */
+ if (rport_state == UNF_RPORT_ST_READY) {
+ unf_rport->logo_retries = 0;
+ unf_update_lport_state_by_linkup_event(unf_lport, unf_rport,
+ unf_rport->options);
+ }
+ } else {
+ /* PRLI RJT: Do R_Port error recovery */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x)<---LS_RJT(DID:0x%x SID:0x%x) for PRLI. RPort(0x%p) OX_ID(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id, unf_rport, unf_xchg->oxid);
+
+ unf_rport_error_recovery(unf_rport);
+ }
+}
+
+static void unf_rrq_callback(void *lport, void *rport, void *xchg)
+{
+ /* Release I/O */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_xchg *io_xchg = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) exchange(0x%p) SfsEntryPtr is NULL",
+ unf_lport->port_id, unf_xchg);
+ return;
+ }
+
+ io_xchg = (struct unf_xchg *)unf_xchg->io_xchg;
+ if (!io_xchg) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) IO exchange is NULL. RRQ cb sfs xchg(0x%p) tag(0x%x)",
+ unf_lport->port_id, unf_xchg, unf_xchg->hotpooltag);
+ return;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) release IO exch(0x%p) tag(0x%x). RRQ cb sfs xchg(0x%p) tag(0x%x)",
+ unf_lport->port_id, unf_xchg->io_xchg, io_xchg->hotpooltag,
+ unf_xchg, unf_xchg->hotpooltag);
+
+ /* After RRQ Success, Free xid */
+ unf_notify_chip_free_xid(io_xchg);
+
+ /* NOTE: release I/O exchange resource */
+ unf_xchg_ref_dec(io_xchg, XCHG_ALLOC);
+}
+
+static void unf_rrq_ob_callback(struct unf_xchg *xchg)
+{
+ /* Release I/O */
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_xchg *io_xchg = NULL;
+
+ unf_xchg = (struct unf_xchg *)xchg;
+ if (!unf_xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Exchange can't be NULL");
+ return;
+ }
+
+ io_xchg = (struct unf_xchg *)unf_xchg->io_xchg;
+ if (!io_xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]IO exchange can't be NULL with Sfs exch(0x%p) tag(0x%x)",
+ unf_xchg, unf_xchg->hotpooltag);
+ return;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]send RRQ failed: SFS exch(0x%p) tag(0x%x) exch(0x%p) tag(0x%x) OXID_RXID(0x%x_0x%x) SID_DID(0x%x_0x%x)",
+ unf_xchg, unf_xchg->hotpooltag, io_xchg, io_xchg->hotpooltag,
+ io_xchg->oxid, io_xchg->rxid, io_xchg->sid, io_xchg->did);
+
+ /* If RRQ fails or times out, free the xid. */
+ unf_notify_chip_free_xid(io_xchg);
+
+ /* NOTE: Free I/O exchange resource */
+ unf_xchg_ref_dec(io_xchg, XCHG_ALLOC);
+}
+
diff --git a/drivers/scsi/spfc/common/unf_ls.h b/drivers/scsi/spfc/common/unf_ls.h
new file mode 100644
index 000000000000..5fdd9e1a258d
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_ls.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_LS_H
+#define UNF_LS_H
+
+#include "unf_type.h"
+#include "unf_exchg.h"
+#include "unf_rport.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+u32 unf_send_adisc(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_pdisc(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_flogi(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_fdisc(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_plogi(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_prli(struct unf_lport *lport, struct unf_rport *rport,
+ u32 cmnd_code);
+u32 unf_send_prlo(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_logo(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_logo_by_did(struct unf_lport *lport, u32 did);
+u32 unf_send_echo(struct unf_lport *lport, struct unf_rport *rport, u32 *time);
+u32 unf_send_plogi_rjt_by_did(struct unf_lport *lport, u32 did);
+u32 unf_send_rrq(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg);
+void unf_flogi_ob_callback(struct unf_xchg *xchg);
+void unf_flogi_callback(void *lport, void *rport, void *xchg);
+void unf_fdisc_ob_callback(struct unf_xchg *xchg);
+void unf_fdisc_callback(void *lport, void *rport, void *xchg);
+
+void unf_plogi_ob_callback(struct unf_xchg *xchg);
+void unf_plogi_callback(void *lport, void *rport, void *xchg);
+void unf_prli_ob_callback(struct unf_xchg *xchg);
+void unf_prli_callback(void *lport, void *rport, void *xchg);
+u32 unf_flogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_plogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_rec_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_prli_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_prlo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_rscn_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_logo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_echo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_pdisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_send_pdisc_rjt(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg);
+u32 unf_adisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_rrq_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_send_rec(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *io_xchg);
+
+u32 unf_low_level_bb_scn(struct unf_lport *lport);
+typedef int (*unf_event_task)(void *arg_in, void *arg_out);
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#endif /* UNF_LS_H */
diff --git a/drivers/scsi/spfc/common/unf_npiv.c b/drivers/scsi/spfc/common/unf_npiv.c
new file mode 100644
index 000000000000..0d441f1c9e06
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_npiv.c
@@ -0,0 +1,1005 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_npiv.h"
+#include "unf_log.h"
+#include "unf_rport.h"
+#include "unf_exchg.h"
+#include "unf_portman.h"
+#include "unf_npiv_portman.h"
+
+#define UNF_DELETE_VPORT_MAX_WAIT_TIME_MS 60000
+
+u32 unf_init_vport_pool(struct unf_lport *lport)
+{
+ u32 ret = RETURN_OK;
+ u32 i;
+ u16 vport_cnt = 0;
+ struct unf_lport *vport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ u32 vport_pool_size;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, RETURN_ERROR);
+
+ UNF_TOU16_CHECK(vport_cnt, lport->low_level_func.support_max_npiv_num,
+ return RETURN_ERROR);
+ if (vport_cnt == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) do not support NPIV",
+ lport->port_id);
+
+ return RETURN_OK;
+ }
+
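+ /* Allocate the pool header plus one struct unf_lport * slot per supported vport */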
+ vport_pool_size = sizeof(struct unf_vport_pool) + sizeof(struct unf_lport *) * vport_cnt;
+ lport->vport_pool = vmalloc(vport_pool_size);
+ if (!lport->vport_pool) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) cannot allocate vport pool",
+ lport->port_id);
+
+ return RETURN_ERROR;
+ }
+ memset(lport->vport_pool, 0, vport_pool_size);
+ vport_pool = lport->vport_pool;
+ vport_pool->vport_pool_count = vport_cnt;
+ vport_pool->vport_pool_completion = NULL;
+ spin_lock_init(&vport_pool->vport_pool_lock);
+ INIT_LIST_HEAD(&vport_pool->list_vport_pool);
+
+ vport_pool->vport_pool_addr =
+ vmalloc((size_t)(vport_cnt * sizeof(struct unf_lport)));
+ if (!vport_pool->vport_pool_addr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) cannot allocate vport pool address",
+ lport->port_id);
+ vfree(lport->vport_pool);
+ lport->vport_pool = NULL;
+
+ return RETURN_ERROR;
+ }
+
+ memset(vport_pool->vport_pool_addr, 0,
+ vport_cnt * sizeof(struct unf_lport));
+ vport = (struct unf_lport *)vport_pool->vport_pool_addr;
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ for (i = 0; i < vport_cnt; i++) {
+ list_add_tail(&vport->entry_vport, &vport_pool->list_vport_pool);
+ vport++;
+ }
+
+ vport_pool->slab_next_index = 0;
+ vport_pool->slab_total_sum = vport_cnt;
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ return ret;
+}
+
+void unf_free_vport_pool(struct unf_lport *lport)
+{
+ struct unf_vport_pool *vport_pool = NULL;
+ bool wait = false;
+ ulong flag = 0;
+ u32 remain = 0;
+ struct completion vport_pool_completion;
+
+ init_completion(&vport_pool_completion);
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(lport->vport_pool);
+ vport_pool = lport->vport_pool;
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+
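+ /* Some vports are still allocated (slab_total_sum != vport_pool_count): wait for them to be returned */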
+ if (vport_pool->slab_total_sum != vport_pool->vport_pool_count) {
+ vport_pool->vport_pool_completion = &vport_pool_completion;
+ remain = vport_pool->slab_total_sum - vport_pool->vport_pool_count;
+ wait = true;
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ if (wait) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to wait for vport pool completion remain(0x%x)",
+ lport->port_id, remain);
+
+ wait_for_completion(vport_pool->vport_pool_completion);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) wait for vport pool completion end",
+ lport->port_id);
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ vport_pool->vport_pool_completion = NULL;
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ }
+
+ if (lport->vport_pool->vport_pool_addr) {
+ vfree(lport->vport_pool->vport_pool_addr);
+ lport->vport_pool->vport_pool_addr = NULL;
+ }
+
+ vfree(lport->vport_pool);
+ lport->vport_pool = NULL;
+}
+
+struct unf_lport *unf_get_vport_by_slab_index(struct unf_vport_pool *vport_pool,
+ u16 slab_index)
+{
+ FC_CHECK_RETURN_VALUE(vport_pool, NULL);
+
+ return vport_pool->vport_slab[slab_index];
+}
+
+static inline void unf_vport_pool_slab_set(struct unf_vport_pool *vport_pool,
+ u16 slab_index,
+ struct unf_lport *vport)
+{
+ FC_CHECK_RETURN_VOID(vport_pool);
+
+ vport_pool->vport_slab[slab_index] = vport;
+}
+
+u32 unf_alloc_vp_index(struct unf_vport_pool *vport_pool,
+ struct unf_lport *vport, u16 vpid)
+{
+ u16 slab_index;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(vport_pool, RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ if (vpid == 0) {
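+ /* No index requested: scan round-robin from slab_next_index for a free slot */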
+ slab_index = vport_pool->slab_next_index;
+ while (unf_get_vport_by_slab_index(vport_pool, slab_index)) {
+ slab_index = (slab_index + 1) % vport_pool->slab_total_sum;
+
+ if (vport_pool->slab_next_index == slab_index) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]VPort pool has no slab ");
+
+ return RETURN_ERROR;
+ }
+ }
+ } else {
+ slab_index = vpid - 1;
+ if (unf_get_vport_by_slab_index(vport_pool, slab_index)) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_WARN,
+ "[warn]VPort Index(0x%x) is occupy", vpid);
+
+ return RETURN_ERROR;
+ }
+ }
+
+ unf_vport_pool_slab_set(vport_pool, slab_index, vport);
+
+ vport_pool->slab_next_index = (slab_index + 1) % vport_pool->slab_total_sum;
+
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ spin_lock_irqsave(&vport->lport_state_lock, flags);
+ vport->vp_index = slab_index + 1;
+ spin_unlock_irqrestore(&vport->lport_state_lock, flags);
+
+ return RETURN_OK;
+}
+
+void unf_free_vp_index(struct unf_vport_pool *vport_pool,
+ struct unf_lport *vport)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(vport_pool);
+ FC_CHECK_RETURN_VOID(vport);
+
+ if (vport->vp_index == 0 ||
+ vport->vp_index > vport_pool->slab_total_sum) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Input vpoot index(0x%x) is beyond the normal range, min(0x1), max(0x%x).",
+ vport->vp_index, vport_pool->slab_total_sum);
+ return;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ unf_vport_pool_slab_set(vport_pool, vport->vp_index - 1,
+ NULL); /* SlabIndex=VpIndex-1 */
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ spin_lock_irqsave(&vport->lport_state_lock, flags);
+ vport->vp_index = INVALID_VALUE16;
+ spin_unlock_irqrestore(&vport->lport_state_lock, flags);
+}
+
+struct unf_lport *unf_get_free_vport(struct unf_lport *lport)
+{
+ struct unf_lport *vport = NULL;
+ struct list_head *list_head = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(lport->vport_pool, NULL);
+
+ vport_pool = lport->vport_pool;
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ if (!list_empty(&vport_pool->list_vport_pool)) {
+ list_head = UNF_OS_LIST_NEXT(&vport_pool->list_vport_pool);
+ list_del(list_head);
+ vport_pool->vport_pool_count--;
+ list_add_tail(list_head, &lport->list_vports_head);
+ vport = list_entry(list_head, struct unf_lport, entry_vport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]LPort(0x%x)'s vport pool is empty", lport->port_id);
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ return NULL;
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ return vport;
+}
+
+void unf_vport_back_to_pool(void *vport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *list = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(vport);
+ unf_vport = vport;
+ unf_lport = (struct unf_lport *)(unf_vport->root_lport);
+ FC_CHECK_RETURN_VOID(unf_lport);
+ FC_CHECK_RETURN_VOID(unf_lport->vport_pool);
+
+ unf_free_vp_index(unf_lport->vport_pool, unf_vport);
+
+ spin_lock_irqsave(&unf_lport->vport_pool->vport_pool_lock, flag);
+
+ list = &unf_vport->entry_vport;
+ list_del(list);
+ list_add_tail(list, &unf_lport->vport_pool->list_vport_pool);
+ unf_lport->vport_pool->vport_pool_count++;
+
+ spin_unlock_irqrestore(&unf_lport->vport_pool->vport_pool_lock, flag);
+}
+
+void unf_init_vport_from_lport(struct unf_lport *vport, struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(vport);
+ FC_CHECK_RETURN_VOID(lport);
+
+ vport->port_type = lport->port_type;
+ vport->fc_port = lport->fc_port;
+ vport->act_topo = lport->act_topo;
+ vport->root_lport = lport;
+ vport->unf_qualify_rport = lport->unf_qualify_rport;
+ vport->link_event_wq = lport->link_event_wq;
+ vport->xchg_wq = lport->xchg_wq;
+
+ memcpy(&vport->xchg_mgr_temp, &lport->xchg_mgr_temp,
+ sizeof(struct unf_cm_xchg_mgr_template));
+
+ memcpy(&vport->event_mgr, &lport->event_mgr, sizeof(struct unf_event_mgr));
+
+ memset(&vport->lport_mgr_temp, 0, sizeof(struct unf_cm_lport_template));
+
+ memcpy(&vport->low_level_func, &lport->low_level_func,
+ sizeof(struct unf_low_level_functioon_op));
+}
+
+void unf_check_vport_pool_status(struct unf_lport *lport)
+{
+ struct unf_vport_pool *vport_pool = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ vport_pool = lport->vport_pool;
+ FC_CHECK_RETURN_VOID(vport_pool);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+
+ if (vport_pool->vport_pool_completion &&
+ vport_pool->slab_total_sum == vport_pool->vport_pool_count) {
+ complete(vport_pool->vport_pool_completion);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+}
+
+void unf_vport_fabric_logo(struct unf_lport *vport)
+{
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+
+ unf_rport = unf_get_rport_by_nport_id(vport, UNF_FC_FID_FLOGI);
+ FC_CHECK_RETURN_VOID(unf_rport);
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_enter_logo(vport, unf_rport);
+}
+
+void unf_vport_deinit(void *vport)
+{
+ struct unf_lport *unf_vport = NULL;
+
+ FC_CHECK_RETURN_VOID(vport);
+ unf_vport = (struct unf_lport *)vport;
+
+ unf_unregister_scsi_host(unf_vport);
+
+ unf_disc_mgr_destroy(unf_vport);
+
+ unf_release_xchg_mgr_temp(unf_vport);
+
+ unf_release_vport_mgr_temp(unf_vport);
+
+ unf_destroy_scsi_id_table(unf_vport);
+
+ unf_lport_release_lw_funop(unf_vport);
+ unf_vport->fc_port = NULL;
+ unf_vport->vport = NULL;
+
+ if (unf_vport->lport_free_completion) {
+ complete(unf_vport->lport_free_completion);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]VPort(0x%x) point(0x%p) completion free function is NULL",
+ unf_vport->port_id, unf_vport);
+ dump_stack();
+ }
+}
+
+void unf_vport_ref_dec(struct unf_lport *vport)
+{
+ FC_CHECK_RETURN_VOID(vport);
+
+ if (atomic_dec_and_test(&vport->port_ref_cnt)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]VPort(0x%x) point(0x%p) reference count is 0 and freevport",
+ vport->port_id, vport);
+
+ unf_vport_deinit(vport);
+ }
+}
+
+u32 unf_vport_init(void *vport)
+{
+ struct unf_lport *unf_vport = NULL;
+
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+ unf_vport = (struct unf_lport *)vport;
+
+ unf_vport->options = UNF_PORT_MODE_INI;
+ unf_vport->nport_id = 0;
+
+ if (unf_init_scsi_id_table(unf_vport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Vport(0x%x) can not initialize SCSI ID table",
+ unf_vport->port_id);
+
+ return RETURN_ERROR;
+ }
+
+ if (unf_init_disc_mgr(unf_vport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Vport(0x%x) can not initialize discover manager",
+ unf_vport->port_id);
+ unf_destroy_scsi_id_table(unf_vport);
+
+ return RETURN_ERROR;
+ }
+
+ if (unf_register_scsi_host(unf_vport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Vport(0x%x) vport can not register SCSI host",
+ unf_vport->port_id);
+ unf_disc_mgr_destroy(unf_vport);
+ unf_destroy_scsi_id_table(unf_vport);
+
+ return RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Vport(0x%x) Create succeed with wwpn(0x%llx)",
+ unf_vport->port_id, unf_vport->port_name);
+
+ return RETURN_OK;
+}
+
+void unf_vport_remove(void *vport)
+{
+ struct unf_lport *unf_vport = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct completion port_free_completion;
+
+ init_completion(&port_free_completion);
+ FC_CHECK_RETURN_VOID(vport);
+ unf_vport = (struct unf_lport *)vport;
+ unf_lport = (struct unf_lport *)(unf_vport->root_lport);
+ unf_vport->lport_free_completion = &port_free_completion;
+
+ unf_set_lport_removing(unf_vport);
+
+ unf_vport_ref_dec(unf_vport);
+
+ wait_for_completion(unf_vport->lport_free_completion);
+ unf_vport_back_to_pool(unf_vport);
+
+ unf_check_vport_pool_status(unf_lport);
+}
+
+u32 unf_npiv_conf(u32 port_id, u64 wwpn, enum unf_rport_qos_level qos_level)
+{
+#define VPORT_WWN_MASK 0xff00ffffffffffff
+#define VPORT_WWN_SHIFT 48
+
+ struct fc_vport_identifiers vid = {0};
+ struct Scsi_Host *host = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *unf_vport = NULL;
+ u16 vport_id = 0;
+
+ unf_lport = unf_find_lport_by_port_id(port_id);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Cannot find LPort by (0x%x).", port_id);
+
+ return RETURN_ERROR;
+ }
+
+ unf_vport = unf_cm_lookup_vport_by_wwpn(unf_lport, wwpn);
+ if (unf_vport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Port(0x%x) has find vport with wwpn(0x%llx), can't create again",
+ unf_lport->port_id, wwpn);
+
+ return RETURN_ERROR;
+ }
+
+ unf_vport = unf_get_free_vport(unf_lport);
+ if (!unf_vport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Can not get free vport from pool");
+
+ return RETURN_ERROR;
+ }
+
+ unf_init_port_parms(unf_vport);
+ unf_init_vport_from_lport(unf_vport, unf_lport);
+
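+ /* If the requested WWPN matches the local port WWN in all but the vport-index byte, take the vport index from that byte (falling back to the local port's own index byte when it is zero) */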
+ if ((unf_lport->port_name & VPORT_WWN_MASK) == (wwpn & VPORT_WWN_MASK)) {
+ vport_id = (wwpn & ~VPORT_WWN_MASK) >> VPORT_WWN_SHIFT;
+ if (vport_id == 0)
+ vport_id = (unf_lport->port_name & ~VPORT_WWN_MASK) >> VPORT_WWN_SHIFT;
+ }
+
+ if (unf_alloc_vp_index(unf_lport->vport_pool, unf_vport, vport_id) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Vport can not allocate vport index");
+ unf_vport_back_to_pool(unf_vport);
+
+ return RETURN_ERROR;
+ }
+ unf_vport->port_id = (((u32)unf_vport->vp_index) << PORTID_VPINDEX_SHIT) |
+ unf_lport->port_id;
+
+ vid.roles = FC_PORT_ROLE_FCP_INITIATOR;
+ vid.vport_type = FC_PORTTYPE_NPIV;
+ vid.disable = false;
+ vid.node_name = unf_lport->node_name;
+
+ if (wwpn) {
+ vid.port_name = wwpn;
+ } else {
+ if ((unf_lport->port_name & ~VPORT_WWN_MASK) >> VPORT_WWN_SHIFT !=
+ unf_vport->vp_index) {
+ vid.port_name = (unf_lport->port_name & VPORT_WWN_MASK) |
+ (((u64)unf_vport->vp_index) << VPORT_WWN_SHIFT);
+ } else {
+ vid.port_name = (unf_lport->port_name & VPORT_WWN_MASK);
+ }
+ }
+
+ unf_vport->port_name = vid.port_name;
+
+ host = unf_lport->host_info.host;
+
+ if (!fc_vport_create(host, 0, &vid)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) Cannot Failed to create vport wwpn=%llx",
+ unf_lport->port_id, vid.port_name);
+
+ unf_vport_back_to_pool(unf_vport);
+
+ return RETURN_ERROR;
+ }
+
+ unf_vport->qos_level = qos_level;
+ return RETURN_OK;
+}
+
+struct unf_lport *unf_creat_vport(struct unf_lport *lport,
+ struct vport_config *vport_config)
+{
+ u32 ret = RETURN_OK;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *vport = NULL;
+ enum unf_act_topo lport_topo;
+ enum unf_lport_login_state lport_state;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(vport_config, NULL);
+
+ if (vport_config->port_mode != FC_PORT_ROLE_FCP_INITIATOR) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Only support INITIATOR port mode(0x%x)",
+ vport_config->port_mode);
+
+ return NULL;
+ }
+ unf_lport = lport;
+
+ if (unf_lport->root_lport != unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) not root port return",
+ unf_lport->port_id);
+
+ return NULL;
+ }
+
+ vport = unf_cm_lookup_vport_by_wwpn(unf_lport, vport_config->port_name);
+ if (!vport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Port(0x%x) can not find vport with wwpn(0x%llx)",
+ unf_lport->port_id, vport_config->port_name);
+
+ return NULL;
+ }
+
+ ret = unf_vport_init(vport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]VPort(0x%x) can not initialize vport",
+ vport->port_id);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ lport_topo = unf_lport->act_topo;
+ lport_state = unf_lport->states;
+
+ vport_config->node_name = unf_lport->node_name;
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ vport->port_name = vport_config->port_name;
+ vport->node_name = vport_config->node_name;
+
+ if (lport_topo == UNF_ACT_TOP_P2P_FABRIC &&
+ lport_state >= UNF_LPORT_ST_PLOGI_WAIT &&
+ lport_state <= UNF_LPORT_ST_READY) {
+ vport->link_up = unf_lport->link_up;
+ (void)unf_lport_login(vport, lport_topo);
+ }
+
+ return vport;
+}
+
+u32 unf_drop_vport(struct unf_lport *vport)
+{
+ u32 ret = RETURN_ERROR;
+ struct fc_vport *unf_vport = NULL;
+
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+
+ unf_vport = vport->vport;
+ if (!unf_vport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]VPort(0x%x) find vport in scsi is NULL",
+ vport->port_id);
+
+ return ret;
+ }
+
+ ret = fc_vport_terminate(unf_vport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]VPort(0x%x) terminate vport(%p) in scsi failed",
+ vport->port_id, unf_vport);
+
+ return ret;
+ }
+ return ret;
+}
+
+u32 unf_delete_vport(u32 port_id, u32 vp_index)
+{
+ struct unf_lport *unf_lport = NULL;
+ u16 unf_vp_index = 0;
+ struct unf_lport *vport = NULL;
+
+ unf_lport = unf_find_lport_by_port_id(port_id);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can not be found by portid", port_id);
+
+ return RETURN_ERROR;
+ }
+
+ if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) is in NOP, destroy all vports function will be called",
+ unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ UNF_TOU16_CHECK(unf_vp_index, vp_index, return RETURN_ERROR);
+ vport = unf_cm_lookup_vport_by_vp_index(unf_lport, unf_vp_index);
+ if (!vport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Can not lookup VPort by VPort index(0x%x)",
+ unf_vp_index);
+
+ return RETURN_ERROR;
+ }
+
+ return unf_drop_vport(vport);
+}
+
+void unf_vport_abort_all_sfs_exch(struct unf_lport *vport)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *exch = NULL;
+ ulong pool_lock_flags = 0;
+ ulong exch_lock_flags = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VOID(vport);
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport((struct unf_lport *)(vport->root_lport), i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) hot pool is NULL",
+ ((struct unf_lport *)(vport->root_lport))->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->sfs_busylist) {
+ exch = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&exch->xchg_state_lock, exch_lock_flags);
+ if (vport == exch->lport && (atomic_read(&exch->ref_cnt) > 0)) {
+ exch->io_state |= TGT_IO_STATE_ABORT;
+ spin_unlock_irqrestore(&exch->xchg_state_lock, exch_lock_flags);
+ unf_disc_ctrl_size_inc(vport, exch->cmnd_code);
+ /* Transfer exch to destroy chain */
+ list_del(xchg_node);
+ list_add_tail(xchg_node, &hot_pool->list_destroy_xchg);
+ } else {
+ spin_unlock_irqrestore(&exch->xchg_state_lock, exch_lock_flags);
+ }
+ }
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ }
+}
+
+void unf_vport_abort_ini_io_exch(struct unf_lport *vport)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *exch = NULL;
+ ulong pool_lock_flags = 0;
+ ulong exch_lock_flags = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VOID(vport);
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport((struct unf_lport *)(vport->root_lport), i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) MgrIdex %u hot pool is NULL",
+ ((struct unf_lport *)(vport->root_lport))->port_id, i);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->ini_busylist) {
+ exch = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+
+ if (vport == exch->lport && atomic_read(&exch->ref_cnt) > 0) {
+ /* Transfer exch to destroy chain */
+ list_del(xchg_node);
+ list_add_tail(xchg_node, &hot_pool->list_destroy_xchg);
+
+ spin_lock_irqsave(&exch->xchg_state_lock, exch_lock_flags);
+ exch->io_state |= INI_IO_STATE_DRABORT;
+ spin_unlock_irqrestore(&exch->xchg_state_lock, exch_lock_flags);
+ }
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ }
+}
+
+void unf_vport_abort_exch(struct unf_lport *vport)
+{
+ FC_CHECK_RETURN_VOID(vport);
+
+ unf_vport_abort_all_sfs_exch(vport);
+
+ unf_vport_abort_ini_io_exch(vport);
+}
+
+u32 unf_vport_wait_all_exch_removed(struct unf_lport *vport)
+{
+#define UNF_WAIT_EXCH_REMOVE_ONE_TIME_MS 1000
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *exch = NULL;
+ u32 vport_uses = 0;
+ ulong flags = 0;
+ u32 wait_timeout = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+
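+ /* Poll the destroy list until no exchange on this vport remains, giving up after UNF_DELETE_VPORT_MAX_WAIT_TIME_MS */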
+ while (1) {
+ vport_uses = 0;
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool =
+ unf_get_hot_pool_by_lport((struct unf_lport *)(vport->root_lport), i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) hot Pool is NULL",
+ ((struct unf_lport *)(vport->root_lport))->port_id);
+
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ list_for_each_safe(xchg_node, next_xchg_node,
+ &hot_pool->list_destroy_xchg) {
+ exch = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+
+ if (exch->lport != vport)
+ continue;
+ vport_uses++;
+ if (wait_timeout >=
+ UNF_DELETE_VPORT_MAX_WAIT_TIME_MS) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[error]VPort(0x%x) Abort Exch(0x%p) Type(0x%x) OxRxid(0x%x 0x%x),sid did(0x%x 0x%x) SeqId(0x%x) IOState(0x%x) Ref(0x%x)",
+ vport->port_id, exch,
+ (u32)exch->xchg_type,
+ (u32)exch->oxid,
+ (u32)exch->rxid, (u32)exch->sid,
+ (u32)exch->did, (u32)exch->seq_id,
+ (u32)exch->io_state,
+ atomic_read(&exch->ref_cnt));
+ }
+ }
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+
+ if (vport_uses == 0) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]VPort(0x%x) has removed all exchanges it used",
+ vport->port_id);
+ break;
+ }
+
+ if (wait_timeout >= UNF_DELETE_VPORT_MAX_WAIT_TIME_MS)
+ return RETURN_ERROR;
+
+ msleep(UNF_WAIT_EXCH_REMOVE_ONE_TIME_MS);
+ wait_timeout += UNF_WAIT_EXCH_REMOVE_ONE_TIME_MS;
+ }
+
+ return RETURN_OK;
+}
+
+u32 unf_vport_wait_rports_removed(struct unf_lport *vport)
+{
+#define UNF_WAIT_RPORT_REMOVE_ONE_TIME_MS 5000
+
+ struct unf_disc *disc = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ u32 vport_uses = 0;
+ ulong flags = 0;
+ u32 wait_timeout = 0;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+ disc = &vport->disc;
+
+ while (1) {
+ vport_uses = 0;
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flags);
+ list_for_each_safe(node, next_node, &disc->list_delete_rports) {
+ unf_rport = list_entry(node, struct unf_rport, entry_rport);
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Vport(0x%x) Rport(0x%x) point(%p) is in Delete",
+ vport->port_id, unf_rport->nport_id, unf_rport);
+ vport_uses++;
+ }
+
+ list_for_each_safe(node, next_node, &disc->list_destroy_rports) {
+ unf_rport = list_entry(node, struct unf_rport, entry_rport);
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Vport(0x%x) Rport(0x%x) point(%p) is in Destroy",
+ vport->port_id, unf_rport->nport_id, unf_rport);
+ vport_uses++;
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flags);
+
+ if (vport_uses == 0) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]VPort(0x%x) has removed all RPorts it used",
+ vport->port_id);
+ break;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Vport(0x%x) has %u RPorts not removed wait timeout(%u ms)",
+ vport->port_id, vport_uses, wait_timeout);
+
+ if (wait_timeout >= UNF_DELETE_VPORT_MAX_WAIT_TIME_MS)
+ return RETURN_ERROR;
+
+ msleep(UNF_WAIT_RPORT_REMOVE_ONE_TIME_MS);
+ wait_timeout += UNF_WAIT_RPORT_REMOVE_ONE_TIME_MS;
+ }
+
+ return RETURN_OK;
+}
+
+u32 unf_destroy_one_vport(struct unf_lport *vport)
+{
+ u32 ret;
+ struct unf_lport *root_port = NULL;
+
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+
+ root_port = (struct unf_lport *)vport->root_lport;
+
+ unf_vport_fabric_logo(vport);
+
+ /* 1 set NOP */
+ atomic_set(&vport->lport_no_operate_flag, UNF_LPORT_NOP);
+ vport->port_removing = true;
+
+ /* 2 report linkdown to scsi and delete rport */
+ unf_linkdown_one_vport(vport);
+
+ /* 3 set abort for exchange */
+ unf_vport_abort_exch(vport);
+
+ /* 4 wait for exchanges to return to the free pool */
+ if (!root_port->port_dirt_exchange) {
+ ret = unf_vport_wait_all_exch_removed(vport);
+ if (ret != RETURN_OK) {
+ if (!root_port->port_removing) {
+ vport->port_removing = false;
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]VPort(0x%x) can not wait Exchange return freepool",
+ vport->port_id);
+
+ return RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]Port(0x%x) is removing, there is dirty exchange, continue",
+ root_port->port_id);
+
+ root_port->port_dirt_exchange = true;
+ }
+ }
+
+ /* wait for rports to return to the rport pool */
+ ret = unf_vport_wait_rports_removed(vport);
+ if (ret != RETURN_OK) {
+ vport->port_removing = false;
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]VPort(0x%x) can not wait Rport return freepool",
+ vport->port_id);
+
+ return RETURN_ERROR;
+ }
+
+ unf_cm_vport_remove(vport);
+
+ return RETURN_OK;
+}
+
+void unf_destroy_all_vports(struct unf_lport *lport)
+{
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+
+ unf_lport = lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Lport(0x%x) VPort pool is NULL", unf_lport->port_id);
+
+ return;
+ }
+
+ /* Transfer to the transition chain */
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ vport = list_entry(node, struct unf_lport, entry_vport);
+ list_del_init(&vport->entry_vport);
+ list_add_tail(&vport->entry_vport, &unf_lport->list_destroy_vports);
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ vport = list_entry(node, struct unf_lport, entry_vport);
+ list_del_init(&vport->entry_vport);
+ list_add_tail(&vport->entry_vport, &unf_lport->list_destroy_vports);
+ atomic_dec(&vport->port_ref_cnt);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
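+ /* Drop each vport with the pool lock released, re-acquiring the lock to fetch the next entry */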
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ while (!list_empty(&unf_lport->list_destroy_vports)) {
+ node = UNF_OS_LIST_NEXT(&unf_lport->list_destroy_vports);
+ vport = list_entry(node, struct unf_lport, entry_vport);
+
+ list_del_init(&vport->entry_vport);
+ list_add_tail(&vport->entry_vport, &unf_lport->list_vports_head);
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]VPort(0x%x) Destroy begin", vport->port_id);
+ unf_drop_vport(vport);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[info]VPort(0x%x) Destroy end", vport->port_id);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+}
+
+u32 unf_init_vport_mgr_temp(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ lport->lport_mgr_temp.unf_look_up_vport_by_index = unf_lookup_vport_by_index;
+ lport->lport_mgr_temp.unf_look_up_vport_by_port_id = unf_lookup_vport_by_portid;
+ lport->lport_mgr_temp.unf_look_up_vport_by_did = unf_lookup_vport_by_did;
+ lport->lport_mgr_temp.unf_look_up_vport_by_wwpn = unf_lookup_vport_by_wwpn;
+ lport->lport_mgr_temp.unf_vport_remove = unf_vport_remove;
+
+ return RETURN_OK;
+}
+
+void unf_release_vport_mgr_temp(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ memset(&lport->lport_mgr_temp, 0, sizeof(struct unf_cm_lport_template));
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_9_DESTROY_LPORT_MG_TMP;
+}
diff --git a/drivers/scsi/spfc/common/unf_npiv.h b/drivers/scsi/spfc/common/unf_npiv.h
new file mode 100644
index 000000000000..6f522470f47a
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_npiv.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_NPIV_H
+#define UNF_NPIV_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "unf_lport.h"
+
+/* product VPORT configuration */
+struct vport_config {
+ u64 node_name;
+ u64 port_name;
+ u32 port_mode; /* INI, TGT or both */
+};
+
+/* product Vport functions */
+#define PORTID_VPINDEX_MASK 0xff000000
+#define PORTID_VPINDEX_SHIT 24
+u32 unf_npiv_conf(u32 port_id, u64 wwpn, enum unf_rport_qos_level qos_level);
+struct unf_lport *unf_creat_vport(struct unf_lport *lport,
+ struct vport_config *vport_config);
+u32 unf_delete_vport(u32 port_id, u32 vp_index);
+
+/* Vport pool create and release functions */
+u32 unf_init_vport_pool(struct unf_lport *lport);
+void unf_free_vport_pool(struct unf_lport *lport);
+
+/* Lport register stLPortMgTemp function */
+void unf_vport_remove(void *vport);
+void unf_vport_ref_dec(struct unf_lport *vport);
+
+/* link down all Vports after receiving a linkdown event */
+void unf_linkdown_all_vports(void *lport);
+/* after Lport receives FLOGI ACC, link up all Vports */
+void unf_linkup_all_vports(struct unf_lport *lport);
+/* on Lport removal, delete all Vports */
+void unf_destroy_all_vports(struct unf_lport *lport);
+void unf_vport_fabric_logo(struct unf_lport *vport);
+u32 unf_destroy_one_vport(struct unf_lport *vport);
+u32 unf_drop_vport(struct unf_lport *vport);
+u32 unf_init_vport_mgr_temp(struct unf_lport *lport);
+void unf_release_vport_mgr_temp(struct unf_lport *lport);
+struct unf_lport *unf_get_vport_by_slab_index(struct unf_vport_pool *vport_pool,
+ u16 slab_index);
+#endif
diff --git a/drivers/scsi/spfc/common/unf_npiv_portman.c b/drivers/scsi/spfc/common/unf_npiv_portman.c
new file mode 100644
index 000000000000..b4f393f2e732
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_npiv_portman.c
@@ -0,0 +1,360 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_npiv_portman.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "unf_rport.h"
+#include "unf_npiv.h"
+#include "unf_portman.h"
+
+void *unf_lookup_vport_by_index(void *lport, u16 vp_index)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return NULL;
+ }
+
+ if (vp_index == 0 || vp_index > vport_pool->slab_total_sum) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) input vport index(0x%x) is beyond the normal range(0x1~0x%x)",
+ unf_lport->port_id, vp_index, vport_pool->slab_total_sum);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ unf_vport = unf_get_vport_by_slab_index(vport_pool, vp_index - 1);
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ return (void *)unf_vport;
+}
+
+void *unf_lookup_vport_by_portid(void *lport, u32 port_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->port_id == port_id) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->port_id == port_id) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has no vport ID(0x%x).",
+ unf_lport->port_id, port_id);
+ return NULL;
+}
+
+void *unf_lookup_vport_by_did(void *lport, u32 did)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->nport_id == did) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ return unf_vport;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->nport_id == did) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has no vport Nport ID(0x%x)", unf_lport->port_id, did);
+ return NULL;
+}
+
+void *unf_lookup_vport_by_wwpn(void *lport, u64 wwpn)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->port_name == wwpn) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ return unf_vport;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->port_name == wwpn) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has no vport WWPN(0x%llx)",
+ unf_lport->port_id, wwpn);
+
+ return NULL;
+}
+
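+/* Bring one vport link-down: clear its N_Port ID, run the link-down state
+ * machine, flush its pending discovery events and clean up its R_Ports.
+ */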
+void unf_linkdown_one_vport(struct unf_lport *vport)
+{
+ ulong flag = 0;
+ struct unf_lport *root_lport = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[info]VPort(0x%x) linkdown", vport->port_id);
+
+ spin_lock_irqsave(&vport->lport_state_lock, flag);
+ vport->link_up = UNF_PORT_LINK_DOWN;
+	vport->nport_id = 0; /* clear nport_id before sending FDISC again */
+ unf_lport_state_ma(vport, UNF_EVENT_LPORT_LINK_DOWN);
+ spin_unlock_irqrestore(&vport->lport_state_lock, flag);
+
+ root_lport = (struct unf_lport *)vport->root_lport;
+
+ unf_flush_disc_event(&root_lport->disc, vport);
+
+ unf_clean_linkdown_rport(vport);
+}
+
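+/* Link down every vport of the L_Port: move them to the transition list
+ * while holding a reference, then link each one down and drop the reference.
+ */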
+void unf_linkdown_all_vports(void *lport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = (struct unf_lport *)lport;
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) VPort pool is NULL", unf_lport->port_id);
+
+ return;
+ }
+
+ /* Transfer to the transition chain */
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ list_del_init(&unf_vport->entry_vport);
+ list_add_tail(&unf_vport->entry_vport, &unf_lport->list_intergrad_vports);
+ (void)unf_lport_ref_inc(unf_vport);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ while (!list_empty(&unf_lport->list_intergrad_vports)) {
+ node = UNF_OS_LIST_NEXT(&unf_lport->list_intergrad_vports);
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+
+ list_del_init(&unf_vport->entry_vport);
+ list_add_tail(&unf_vport->entry_vport, &unf_lport->list_vports_head);
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ unf_linkdown_one_vport(unf_vport);
+
+ unf_vport_ref_dec(unf_vport);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+}
+
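+/* Event task run after the root port links up: in P2P fabric topology each
+ * vport is logged in one by one, otherwise the vports are linked down again.
+ */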
+int unf_process_vports_linkup(void *arg_in, void *arg_out)
+{
+#define UNF_WAIT_VPORT_LOGIN_ONE_TIME_MS 100
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+ int ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(arg_in, RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)arg_in;
+
+ if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is NOP don't continue", unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ if (unf_lport->link_up != UNF_PORT_LINK_UP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is not linkup don't continue.",
+ unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) VPort pool is NULL.", unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ /* Transfer to the transition chain */
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ list_del_init(&unf_vport->entry_vport);
+ list_add_tail(&unf_vport->entry_vport, &unf_lport->list_intergrad_vports);
+ (void)unf_lport_ref_inc(unf_vport);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ while (!list_empty(&unf_lport->list_intergrad_vports)) {
+ node = UNF_OS_LIST_NEXT(&unf_lport->list_intergrad_vports);
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+
+ list_del_init(&unf_vport->entry_vport);
+ list_add_tail(&unf_vport->entry_vport, &unf_lport->list_vports_head);
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ if (atomic_read(&unf_vport->lport_no_operate_flag) == UNF_LPORT_NOP) {
+ unf_vport_ref_dec(unf_vport);
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ continue;
+ }
+
+ if (unf_lport->link_up == UNF_PORT_LINK_UP &&
+ unf_lport->act_topo == UNF_ACT_TOP_P2P_FABRIC) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Vport(0x%x) begin login", unf_vport->port_id);
+
+ unf_vport->link_up = UNF_PORT_LINK_UP;
+ (void)unf_lport_login(unf_vport, unf_lport->act_topo);
+
+ msleep(UNF_WAIT_VPORT_LOGIN_ONE_TIME_MS);
+ } else {
+ unf_linkdown_one_vport(unf_vport);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Vport(0x%x) login failed because root port linkdown",
+ unf_vport->port_id);
+ }
+
+ unf_vport_ref_dec(unf_vport);
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ return ret;
+}
+
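+/* Post an asynchronous event that triggers login of all vports; the work is
+ * done in unf_process_vports_linkup().
+ */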
+void unf_linkup_all_vports(struct unf_lport *lport)
+{
+ struct unf_cm_event_report *event = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (unlikely(!lport->event_mgr.unf_get_free_event_func ||
+ !lport->event_mgr.unf_post_event_func ||
+ !lport->event_mgr.unf_release_event)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) Event fun is NULL",
+ lport->port_id);
+ return;
+ }
+
+ event = lport->event_mgr.unf_get_free_event_func((void *)lport);
+ FC_CHECK_RETURN_VOID(event);
+
+ event->lport = lport;
+ event->event_asy_flag = UNF_EVENT_ASYN;
+ event->unf_event_task = unf_process_vports_linkup;
+ event->para_in = (void *)lport;
+
+ lport->event_mgr.unf_post_event_func(lport, event);
+}
diff --git a/drivers/scsi/spfc/common/unf_npiv_portman.h b/drivers/scsi/spfc/common/unf_npiv_portman.h
new file mode 100644
index 000000000000..284c23c9abe4
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_npiv_portman.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_NPIV_PORTMAN_H
+#define UNF_NPIV_PORTMAN_H
+
+#include "unf_type.h"
+#include "unf_lport.h"
+
+/* Functions registered in the L_Port vport management template (stLPortMgTemp) */
+void *unf_lookup_vport_by_index(void *lport, u16 vp_index);
+void *unf_lookup_vport_by_portid(void *lport, u32 port_id);
+void *unf_lookup_vport_by_did(void *lport, u32 did);
+void *unf_lookup_vport_by_wwpn(void *lport, u64 wwpn);
+void unf_linkdown_one_vport(struct unf_lport *vport);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_portman.c b/drivers/scsi/spfc/common/unf_portman.c
new file mode 100644
index 000000000000..ef8f90eb3777
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_portman.c
@@ -0,0 +1,2431 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_portman.h"
+#include "unf_log.h"
+#include "unf_exchg.h"
+#include "unf_rport.h"
+#include "unf_io.h"
+#include "unf_npiv.h"
+#include "unf_scsi_common.h"
+
+#define UNF_LPORT_CHIP_ERROR(unf_lport) \
+ ((unf_lport)->pcie_error_cnt.pcie_error_count[UNF_PCIE_FATALERRORDETECTED])
+
+struct unf_global_lport global_lport_mgr;
+
+static int unf_port_switch(struct unf_lport *lport, bool switch_flag);
+static u32 unf_build_lport_wwn(struct unf_lport *lport);
+static int unf_lport_destroy(void *lport, void *arg_out);
+static u32 unf_port_linkup(struct unf_lport *lport, void *input);
+static u32 unf_port_linkdown(struct unf_lport *lport, void *input);
+static u32 unf_port_abnormal_reset(struct unf_lport *lport, void *input);
+static u32 unf_port_reset_start(struct unf_lport *lport, void *input);
+static u32 unf_port_reset_end(struct unf_lport *lport, void *input);
+static u32 unf_port_nop(struct unf_lport *lport, void *input);
+static void unf_destroy_card_thread(struct unf_lport *lport);
+static u32 unf_creat_card_thread(struct unf_lport *lport);
+static u32 unf_find_card_thread(struct unf_lport *lport);
+static u32 unf_port_begin_remove(struct unf_lport *lport, void *input);
+
+static struct unf_port_action g_lport_action[] = {
+ {UNF_PORT_LINK_UP, unf_port_linkup},
+ {UNF_PORT_LINK_DOWN, unf_port_linkdown},
+ {UNF_PORT_RESET_START, unf_port_reset_start},
+ {UNF_PORT_RESET_END, unf_port_reset_end},
+ {UNF_PORT_NOP, unf_port_nop},
+ {UNF_PORT_BEGIN_REMOVE, unf_port_begin_remove},
+ {UNF_PORT_RELEASE_RPORT_INDEX, unf_port_release_rport_index},
+ {UNF_PORT_ABNORMAL_RESET, unf_port_abnormal_reset},
+};
+
+static void unf_destroy_dirty_rport(struct unf_lport *lport, bool show_only)
+{
+ u32 dirty_rport = 0;
+
+ /* for whole L_Port */
+ if (lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY) {
+ dirty_rport = lport->rport_pool.rport_pool_count;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has %d dirty RPort(s)",
+ lport->port_id, dirty_rport);
+
+ /* Show L_Port's R_Port(s) from busy_list & destroy_list */
+ unf_show_all_rport(lport);
+
+ /* free R_Port pool memory & bitmap */
+ if (!show_only) {
+ vfree(lport->rport_pool.rport_pool_add);
+ lport->rport_pool.rport_pool_add = NULL;
+ vfree(lport->rport_pool.rpi_bitmap);
+ lport->rport_pool.rpi_bitmap = NULL;
+ }
+ }
+}
+
+void unf_show_dirty_port(bool show_only, u32 *dirty_port_num)
+{
+ struct list_head *node = NULL;
+ struct list_head *node_next = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flags = 0;
+ u32 port_num = 0;
+
+ FC_CHECK_RETURN_VOID(dirty_port_num);
+
+ /* for each dirty L_Port from global L_Port list */
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ list_for_each_safe(node, node_next, &global_lport_mgr.dirty_list_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has dirty data(0x%x)",
+ unf_lport->port_id, unf_lport->dirty_flag);
+
+ /* Destroy dirty L_Port's exchange(s) & R_Port(s) */
+ unf_destroy_dirty_xchg(unf_lport, show_only);
+ unf_destroy_dirty_rport(unf_lport, show_only);
+
+ /* Delete (dirty L_Port) list entry if necessary */
+ if (!show_only) {
+ list_del_init(node);
+ vfree(unf_lport);
+ }
+
+ port_num++;
+ }
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+
+ *dirty_port_num = port_num;
+}
+
+void unf_show_all_rport(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_disc *disc = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ u32 rport_cnt = 0;
+ u32 target_cnt = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = lport;
+ disc = &unf_lport->disc;
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Port(0x%x) disc state(0x%x)", unf_lport->port_id, disc->states);
+
+ /* for each R_Port from busy_list */
+ list_for_each_safe(node, next_node, &disc->list_busy_rports) {
+ unf_rport = list_entry(node, struct unf_rport, entry_rport);
+ rport_cnt++;
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Port(0x%x) busy RPorts(%u_%p) WWPN(0x%016llx) scsi_id(0x%x) local N_Port_ID(0x%x) N_Port_ID(0x%06x). State(0x%04x) options(0x%04x) index(0x%04x) ref(%d) pend:%d",
+ unf_lport->port_id, rport_cnt, unf_rport,
+ unf_rport->port_name, unf_rport->scsi_id,
+ unf_rport->local_nport_id, unf_rport->nport_id,
+ unf_rport->rp_state, unf_rport->options,
+ unf_rport->rport_index,
+ atomic_read(&unf_rport->rport_ref_cnt),
+ atomic_read(&unf_rport->pending_io_cnt));
+
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR)
+ target_cnt++;
+ }
+
+ unf_lport->target_cnt = target_cnt;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) targetnum=(%u)", unf_lport->port_id,
+ unf_lport->target_cnt);
+
+ /* for each R_Port from destroy_list */
+ list_for_each_safe(node, next_node, &disc->list_destroy_rports) {
+ unf_rport = list_entry(node, struct unf_rport, entry_rport);
+ rport_cnt++;
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Port(0x%x) destroy RPorts(%u) WWPN(0x%016llx) N_Port_ID(0x%06x) State(0x%04x) options(0x%04x) index(0x%04x) ref(%d)",
+ unf_lport->port_id, rport_cnt, unf_rport->port_name,
+ unf_rport->nport_id, unf_rport->rp_state,
+ unf_rport->options, unf_rport->rport_index,
+ atomic_read(&unf_rport->rport_ref_cnt));
+ }
+
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+}
+
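+/* Take a reference on the L_Port; fails once the reference count has already
+ * dropped to zero (the port is being destroyed).
+ */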
+u32 unf_lport_ref_inc(struct unf_lport *lport)
+{
+ ulong lport_flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&lport->lport_state_lock, lport_flags);
+ if (atomic_read(&lport->port_ref_cnt) <= 0) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ atomic_inc(&lport->port_ref_cnt);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%p) port_id(0x%x) reference count is %d",
+ lport, lport->port_id, atomic_read(&lport->port_ref_cnt));
+
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+
+ return RETURN_OK;
+}
+
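+/* Drop a reference on the L_Port; the last reference moves the port to the
+ * global destroy list and tears it down via unf_lport_destroy().
+ */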
+void unf_lport_ref_dec(struct unf_lport *lport)
+{
+ ulong flags = 0;
+ ulong lport_flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "LPort(0x%p), port ID(0x%x), reference count is %d.",
+ lport, lport->port_id, atomic_read(&lport->port_ref_cnt));
+
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ spin_lock_irqsave(&lport->lport_state_lock, lport_flags);
+ if (atomic_dec_and_test(&lport->port_ref_cnt)) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+ list_del(&lport->entry_lport);
+ global_lport_mgr.lport_sum--;
+
+ /* attaches the lport to the destroy linked list for dfx */
+ list_add_tail(&lport->entry_lport, &global_lport_mgr.destroy_list_head);
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+
+ (void)unf_lport_destroy(lport, NULL);
+ } else {
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+ }
+}
+
+void unf_lport_update_topo(struct unf_lport *lport,
+ enum unf_act_topo active_topo)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (active_topo > UNF_ACT_TOP_UNKNOWN || active_topo < UNF_ACT_TOP_PUBLIC_LOOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) set invalid topology(0x%x) with current value(0x%x)",
+ lport->nport_id, active_topo, lport->act_topo);
+
+ return;
+ }
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ lport->act_topo = active_topo;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+}
+
+void unf_set_lport_removing(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ lport->fc_port = NULL;
+ lport->port_removing = true;
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_0_SET_REMOVING;
+}
+
+u32 unf_release_local_port(void *lport)
+{
+ struct unf_lport *unf_lport = lport;
+ struct completion lport_free_completion;
+
+ init_completion(&lport_free_completion);
+ FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
+
+ unf_lport->lport_free_completion = &lport_free_completion;
+ unf_set_lport_removing(unf_lport);
+ unf_lport_ref_dec(unf_lport);
+ wait_for_completion(unf_lport->lport_free_completion);
+ /* for dirty case */
+ if (unf_lport->dirty_flag == 0)
+ vfree(unf_lport);
+
+ return RETURN_OK;
+}
+
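+/* Release the ESGL pool pages: drop the pool list entries and free the
+ * backing DMA-coherent buffers.
+ */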
+static void unf_free_all_esgl_pages(struct unf_lport *lport)
+{
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VOID(lport);
+ spin_lock_irqsave(&lport->esgl_pool.esgl_pool_lock, flag);
+ list_for_each_safe(node, next_node, &lport->esgl_pool.list_esgl_pool) {
+ list_del(node);
+ }
+
+ spin_unlock_irqrestore(&lport->esgl_pool.esgl_pool_lock, flag);
+ if (lport->esgl_pool.esgl_buff_list.buflist) {
+ for (i = 0; i < lport->esgl_pool.esgl_buff_list.buf_num; i++) {
+ if (lport->esgl_pool.esgl_buff_list.buflist[i].vaddr) {
+ dma_free_coherent(&lport->low_level_func.dev->dev,
+ lport->esgl_pool.esgl_buff_list.buf_size,
+ lport->esgl_pool.esgl_buff_list.buflist[i].vaddr,
+ lport->esgl_pool.esgl_buff_list.buflist[i].paddr);
+ lport->esgl_pool.esgl_buff_list.buflist[i].vaddr = NULL;
+ }
+ }
+ kfree(lport->esgl_pool.esgl_buff_list.buflist);
+ lport->esgl_pool.esgl_buff_list.buflist = NULL;
+ }
+}
+
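+/* Allocate the extended SGL (ESGL) pool: one entry per supported I/O, each
+ * backed by a page carved out of DMA-coherent huge buffers.
+ */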
+static u32 unf_init_esgl_pool(struct unf_lport *lport)
+{
+ struct unf_esgl *esgl = NULL;
+ u32 ret = RETURN_OK;
+ u32 index = 0;
+ u32 buf_total_size;
+ u32 buf_num;
+ u32 alloc_idx;
+ u32 curbuf_idx = 0;
+ u32 curbuf_offset = 0;
+ u32 buf_cnt_perhugebuf;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ lport->esgl_pool.esgl_pool_count = lport->low_level_func.lport_cfg_items.max_io;
+ spin_lock_init(&lport->esgl_pool.esgl_pool_lock);
+ INIT_LIST_HEAD(&lport->esgl_pool.list_esgl_pool);
+
+ lport->esgl_pool.esgl_pool_addr =
+ vmalloc((size_t)((lport->esgl_pool.esgl_pool_count) * sizeof(struct unf_esgl)));
+ if (!lport->esgl_pool.esgl_pool_addr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "LPort(0x%x) cannot allocate ESGL Pool.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ esgl = (struct unf_esgl *)lport->esgl_pool.esgl_pool_addr;
+ memset(esgl, 0, ((lport->esgl_pool.esgl_pool_count) * sizeof(struct unf_esgl)));
+
+ buf_total_size = (u32)(PAGE_SIZE * lport->esgl_pool.esgl_pool_count);
+
+ lport->esgl_pool.esgl_buff_list.buf_size =
+ buf_total_size > BUF_LIST_PAGE_SIZE ? BUF_LIST_PAGE_SIZE : buf_total_size;
+ buf_cnt_perhugebuf = lport->esgl_pool.esgl_buff_list.buf_size / PAGE_SIZE;
+ buf_num = lport->esgl_pool.esgl_pool_count % buf_cnt_perhugebuf
+ ? lport->esgl_pool.esgl_pool_count / buf_cnt_perhugebuf + 1
+ : lport->esgl_pool.esgl_pool_count / buf_cnt_perhugebuf;
+ lport->esgl_pool.esgl_buff_list.buflist =
+ (struct buff_list *)kmalloc(buf_num * sizeof(struct buff_list), GFP_KERNEL);
+ lport->esgl_pool.esgl_buff_list.buf_num = buf_num;
+
+ if (!lport->esgl_pool.esgl_buff_list.buflist) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate Esgl pool buf list failed out of memory");
+ goto free_buff;
+ }
+ memset(lport->esgl_pool.esgl_buff_list.buflist, 0, buf_num * sizeof(struct buff_list));
+
+ for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
+ lport->esgl_pool.esgl_buff_list.buflist[alloc_idx]
+ .vaddr = dma_alloc_coherent(&lport->low_level_func.dev->dev,
+ lport->esgl_pool.esgl_buff_list.buf_size,
+ &lport->esgl_pool.esgl_buff_list.buflist[alloc_idx].paddr, GFP_KERNEL);
+ if (!lport->esgl_pool.esgl_buff_list.buflist[alloc_idx].vaddr)
+ goto free_buff;
+ memset(lport->esgl_pool.esgl_buff_list.buflist[alloc_idx].vaddr, 0,
+ lport->esgl_pool.esgl_buff_list.buf_size);
+ }
+
+	/* Assign each ESGL entry a page (virtual address, size and DMA
+	 * address) carved from the huge buffers
+	 */
+ for (index = 0; index < lport->esgl_pool.esgl_pool_count; index++) {
+ if (index != 0 && !(index % buf_cnt_perhugebuf))
+ curbuf_idx++;
+ curbuf_offset = (u32)(PAGE_SIZE * (index % buf_cnt_perhugebuf));
+ esgl->page.page_address =
+ (u64)lport->esgl_pool.esgl_buff_list.buflist[curbuf_idx].vaddr + curbuf_offset;
+ esgl->page.page_size = PAGE_SIZE;
+ esgl->page.esgl_phy_addr =
+ lport->esgl_pool.esgl_buff_list.buflist[curbuf_idx].paddr + curbuf_offset;
+ list_add_tail(&esgl->entry_esgl, &lport->esgl_pool.list_esgl_pool);
+ esgl++;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[EVENT]Allocate bufnum:%u,buf_total_size:%u", buf_num, buf_total_size);
+
+ return ret;
+free_buff:
+ unf_free_all_esgl_pages(lport);
+ vfree(lport->esgl_pool.esgl_pool_addr);
+
+ return UNF_RETURN_ERROR;
+}
+
+static void unf_free_esgl_pool(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_free_all_esgl_pages(lport);
+ lport->esgl_pool.esgl_pool_count = 0;
+
+ if (lport->esgl_pool.esgl_pool_addr) {
+ vfree(lport->esgl_pool.esgl_pool_addr);
+ lport->esgl_pool.esgl_pool_addr = NULL;
+ }
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_5_DESTROY_ESGL_POOL;
+}
+
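+/* Map a port ID to its L_Port; the vport index encoded in the ID
+ * (PORTID_VPINDEX_MASK) selects the corresponding vport. Both the global
+ * and the transition L_Port lists are searched.
+ */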
+struct unf_lport *unf_find_lport_by_port_id(u32 port_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+ u32 portid = port_id & (~PORTID_VPINDEX_MASK);
+ u16 vport_index;
+ spinlock_t *lport_list_lock = NULL;
+
+ lport_list_lock = &global_lport_mgr.global_lport_list_lock;
+ vport_index = (port_id & PORTID_VPINDEX_MASK) >> PORTID_VPINDEX_SHIT;
+ spin_lock_irqsave(lport_list_lock, flags);
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.lport_list_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+ if (unf_lport->port_id == portid && !unf_lport->port_removing) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_cm_lookup_vport_by_vp_index(unf_lport, vport_index);
+ }
+ }
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.intergrad_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+ if (unf_lport->port_id == portid && !unf_lport->port_removing) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_cm_lookup_vport_by_vp_index(unf_lport, vport_index);
+ }
+ }
+
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return NULL;
+}
+
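+/* Check whether @vport still belongs to @lport and is not being removed. */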
+u32 unf_is_vport_valid(struct unf_lport *lport, struct unf_lport *vport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ spinlock_t *vport_pool_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(vport, UNF_RETURN_ERROR);
+
+ unf_lport = lport;
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ vport_pool_lock = &vport_pool->vport_pool_lock;
+ spin_lock_irqsave(vport_pool_lock, flag);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+
+ if (unf_vport == vport && !unf_vport->port_removing) {
+ spin_unlock_irqrestore(vport_pool_lock, flag);
+
+ return RETURN_OK;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+
+ if (unf_vport == vport && !unf_vport->port_removing) {
+ spin_unlock_irqrestore(vport_pool_lock, flag);
+
+ return RETURN_OK;
+ }
+ }
+ spin_unlock_irqrestore(vport_pool_lock, flag);
+
+ return UNF_RETURN_ERROR;
+}
+
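+/* Check whether @lport is a live L_Port or vport on any of the global
+ * L_Port lists and is not being removed.
+ */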
+u32 unf_is_lport_valid(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+ spinlock_t *lport_list_lock = NULL;
+
+ lport_list_lock = &global_lport_mgr.global_lport_list_lock;
+ spin_lock_irqsave(lport_list_lock, flags);
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.lport_list_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+
+ if (unf_lport == lport && !unf_lport->port_removing) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+
+ if (unf_is_vport_valid(unf_lport, lport) == RETURN_OK) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.intergrad_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+
+ if (unf_lport == lport && !unf_lport->port_removing) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+
+ if (unf_is_vport_valid(unf_lport, lport) == RETURN_OK) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.destroy_list_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+
+ if (unf_lport == lport && !unf_lport->port_removing) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+
+ if (unf_is_vport_valid(unf_lport, lport) == RETURN_OK) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+ }
+
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return UNF_RETURN_ERROR;
+}
+
+static void unf_clean_linkdown_io(struct unf_lport *lport, bool clean_flag)
+{
+ /* Clean L_Port/V_Port Link Down I/O: Set Abort Tag */
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(lport->xchg_mgr_temp.unf_xchg_abort_all_io);
+
+ lport->xchg_mgr_temp.unf_xchg_abort_all_io(lport, UNF_XCHG_TYPE_INI, clean_flag);
+ lport->xchg_mgr_temp.unf_xchg_abort_all_io(lport, UNF_XCHG_TYPE_SFS, clean_flag);
+}
+
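+/* Dispatch a low-level port event (link up/down, reset, NOP, remove, ...) to
+ * its handler in g_lport_action while holding an L_Port reference.
+ */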
+u32 unf_fc_port_link_event(void *lport, u32 events, void *input)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 index = 0;
+
+ if (unlikely(!lport))
+ return UNF_RETURN_ERROR;
+ unf_lport = (struct unf_lport *)lport;
+
+ ret = unf_lport_ref_inc(unf_lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) is removing and do nothing",
+ unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ /* process port event */
+ while (index < (sizeof(g_lport_action) / sizeof(struct unf_port_action))) {
+ if (g_lport_action[index].action == events) {
+ ret = g_lport_action[index].unf_action(unf_lport, input);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+
+ return ret;
+ }
+ index++;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive unknown event(0x%x)",
+ unf_lport->port_id, events);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+
+ return ret;
+}
+
+void unf_port_mgmt_init(void)
+{
+ memset(&global_lport_mgr, 0, sizeof(struct unf_global_lport));
+
+ INIT_LIST_HEAD(&global_lport_mgr.lport_list_head);
+
+ INIT_LIST_HEAD(&global_lport_mgr.intergrad_head);
+
+ INIT_LIST_HEAD(&global_lport_mgr.destroy_list_head);
+
+ INIT_LIST_HEAD(&global_lport_mgr.dirty_list_head);
+
+ spin_lock_init(&global_lport_mgr.global_lport_list_lock);
+
+ UNF_SET_NOMAL_MODE(global_lport_mgr.dft_mode);
+
+ global_lport_mgr.start_work = true;
+}
+
+void unf_port_mgmt_deinit(void)
+{
+ if (global_lport_mgr.lport_sum != 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]There are %u port pool memory giveaway",
+ global_lport_mgr.lport_sum);
+ }
+
+ memset(&global_lport_mgr, 0, sizeof(struct unf_global_lport));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Common port manager exit succeed");
+}
+
+static void unf_port_register(struct unf_lport *lport)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Register LPort(0x%p), port ID(0x%x).", lport, lport->port_id);
+
+ /* Add to the global management linked list header */
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ list_add_tail(&lport->entry_lport, &global_lport_mgr.lport_list_head);
+ global_lport_mgr.lport_sum++;
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+}
+
+static void unf_port_unregister(struct unf_lport *lport)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Unregister LPort(0x%p), port ID(0x%x).", lport, lport->port_id);
+
+ /* Remove from the global management linked list header */
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ list_del(&lport->entry_lport);
+ global_lport_mgr.lport_sum--;
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+}
+
+int unf_port_start_work(struct unf_lport *lport)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ if (lport->start_work_state != UNF_START_WORK_STOP) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ return RETURN_OK;
+ }
+ lport->start_work_state = UNF_START_WORK_COMPLETE;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ /* switch sfp to start work */
+ (void)unf_port_switch(lport, true);
+
+ return RETURN_OK;
+}
+
+static u32
+unf_lport_init_lw_funop(struct unf_lport *lport,
+ struct unf_low_level_functioon_op *low_level_op)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(low_level_op, UNF_RETURN_ERROR);
+
+ lport->port_id = low_level_op->lport_cfg_items.port_id;
+ lport->port_name = low_level_op->sys_port_name;
+ lport->node_name = low_level_op->sys_node_name;
+ lport->options = low_level_op->lport_cfg_items.port_mode;
+ lport->act_topo = UNF_ACT_TOP_UNKNOWN;
+ lport->max_ssq_num = low_level_op->support_max_ssq_num;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) .", lport->port_id);
+
+ memcpy(&lport->low_level_func, low_level_op, sizeof(struct unf_low_level_functioon_op));
+
+ return RETURN_OK;
+}
+
+void unf_lport_release_lw_funop(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ memset(&lport->low_level_func, 0, sizeof(struct unf_low_level_functioon_op));
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_13_DESTROY_LW_INTERFACE;
+}
+
+struct unf_lport *unf_find_lport_by_scsi_hostid(u32 scsi_host_id)
+{
+ struct list_head *node = NULL, *next_node = NULL;
+ struct list_head *vp_node = NULL, *next_vp_node = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *unf_vport = NULL;
+ ulong flags = 0;
+ ulong pool_flags = 0;
+ spinlock_t *vp_pool_lock = NULL;
+ spinlock_t *lport_list_lock = &global_lport_mgr.global_lport_list_lock;
+
+ spin_lock_irqsave(lport_list_lock, flags);
+ list_for_each_safe(node, next_node, &global_lport_mgr.lport_list_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+ vp_pool_lock = &unf_lport->vport_pool->vport_pool_lock;
+ if (scsi_host_id == UNF_GET_SCSI_HOST_ID(unf_lport->host_info.host)) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_lport;
+ }
+
+ /* support NPIV */
+ if (unf_lport->vport_pool) {
+ spin_lock_irqsave(vp_pool_lock, pool_flags);
+ list_for_each_safe(vp_node, next_vp_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(vp_node, struct unf_lport, entry_vport);
+
+ if (scsi_host_id ==
+ UNF_GET_SCSI_HOST_ID(unf_vport->host_info.host)) {
+ spin_unlock_irqrestore(vp_pool_lock, pool_flags);
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(vp_pool_lock, pool_flags);
+ }
+ }
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.intergrad_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+ vp_pool_lock = &unf_lport->vport_pool->vport_pool_lock;
+ if (scsi_host_id ==
+ UNF_GET_SCSI_HOST_ID(unf_lport->host_info.host)) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_lport;
+ }
+
+ /* support NPIV */
+ if (unf_lport->vport_pool) {
+ spin_lock_irqsave(vp_pool_lock, pool_flags);
+ list_for_each_safe(vp_node, next_vp_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(vp_node, struct unf_lport, entry_vport);
+
+ if (scsi_host_id ==
+ UNF_GET_SCSI_HOST_ID(unf_vport->host_info.host)) {
+ spin_unlock_irqrestore(vp_pool_lock, pool_flags);
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(vp_pool_lock, pool_flags);
+ }
+ }
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can not find port by scsi_host_id(0x%x), may be removing",
+ scsi_host_id);
+
+ return NULL;
+}
+
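+/* Build the SCSI ID table that maps SCSI IDs to WWPN/R_Port information and
+ * initialize the per-entry session-loss delayed work.
+ */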
+u32 unf_init_scsi_id_table(struct unf_lport *lport)
+{
+ struct unf_rport_scsi_id_image *rport_scsi_id_image = NULL;
+ struct unf_wwpn_rport_info *wwpn_port_info = NULL;
+ u32 idx;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ rport_scsi_id_image = &lport->rport_scsi_table;
+ rport_scsi_id_image->max_scsi_id = UNF_MAX_SCSI_ID;
+
+	/* If the maximum number of remote connections supported by the L_Port
+	 * is zero, treat it as an error
+	 */
+ if (rport_scsi_id_image->max_scsi_id == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x), supported maximum login is zero.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ rport_scsi_id_image->wwn_rport_info_table =
+ vmalloc(rport_scsi_id_image->max_scsi_id * sizeof(struct unf_wwpn_rport_info));
+ if (!rport_scsi_id_image->wwn_rport_info_table) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't allocate SCSI ID Table(0x%x).",
+ lport->port_id, rport_scsi_id_image->max_scsi_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(rport_scsi_id_image->wwn_rport_info_table, 0,
+ rport_scsi_id_image->max_scsi_id * sizeof(struct unf_wwpn_rport_info));
+
+ wwpn_port_info = rport_scsi_id_image->wwn_rport_info_table;
+
+ for (idx = 0; idx < rport_scsi_id_image->max_scsi_id; idx++) {
+ INIT_DELAYED_WORK(&wwpn_port_info->loss_tmo_work, unf_sesion_loss_timeout);
+ INIT_LIST_HEAD(&wwpn_port_info->fc_lun_list);
+ wwpn_port_info->lport = lport;
+ wwpn_port_info->target_id = INVALID_VALUE32;
+ wwpn_port_info++;
+ }
+
+ spin_lock_init(&rport_scsi_id_image->scsi_image_table_lock);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) supported maximum login is %u.",
+ lport->port_id, rport_scsi_id_image->max_scsi_id);
+
+ return RETURN_OK;
+}
+
+void unf_destroy_scsi_id_table(struct unf_lport *lport)
+{
+ struct unf_rport_scsi_id_image *rport_scsi_id_image = NULL;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ u32 i = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ rport_scsi_id_image = &lport->rport_scsi_table;
+ if (rport_scsi_id_image->wwn_rport_info_table) {
+ for (i = 0; i < UNF_MAX_SCSI_ID; i++) {
+ wwn_rport_info = &rport_scsi_id_image->wwn_rport_info_table[i];
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
+ (&wwn_rport_info->loss_tmo_work),
+ "loss tmo Timer work");
+ if (wwn_rport_info->lun_qos_level) {
+ vfree(wwn_rport_info->lun_qos_level);
+ wwn_rport_info->lun_qos_level = NULL;
+ }
+ }
+
+ if (ret) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x) cancel loss tmo work success", lport->port_id);
+ }
+ vfree(rport_scsi_id_image->wwn_rport_info_table);
+ rport_scsi_id_image->wwn_rport_info_table = NULL;
+ }
+
+ rport_scsi_id_image->max_scsi_id = 0;
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_10_DESTROY_SCSI_TABLE;
+}
+
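+/* Core L_Port initialization: bind the low-level operations, create the work
+ * queues, SCSI ID table, exchange and ESGL resources, disc manager, event
+ * center, route and vport pool; everything is undone again on failure.
+ */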
+static u32 unf_lport_init(struct unf_lport *lport, void *private_data,
+ struct unf_low_level_functioon_op *low_level_op)
+{
+ u32 ret = RETURN_OK;
+ char work_queue_name[13];
+
+ unf_init_port_parms(lport);
+
+ /* Associating LPort with FCPort */
+ lport->fc_port = private_data;
+
+	/* vp_index 0 is reserved for the L_Port itself, and root_lport points to itself */
+ lport->vp_index = 0;
+ lport->root_lport = lport;
+ lport->chip_info = NULL;
+
+ /* Initialize the units related to L_Port and lw func */
+ ret = unf_lport_init_lw_funop(lport, low_level_op);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) initialize lowlevel function unsuccessful.",
+ lport->port_id);
+
+ return ret;
+ }
+
+ /* Init Linkevent workqueue */
+ snprintf(work_queue_name, sizeof(work_queue_name), "%x_lkq", lport->port_id);
+
+ lport->link_event_wq = create_singlethread_workqueue(work_queue_name);
+ if (!lport->link_event_wq) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]Port(0x%x) creat link event work queue failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ snprintf(work_queue_name, sizeof(work_queue_name), "%x_xchgwq", lport->port_id);
+ lport->xchg_wq = create_workqueue(work_queue_name);
+ if (!lport->xchg_wq) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]Port(0x%x) creat Exchg work queue failed",
+ lport->port_id);
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+ return UNF_RETURN_ERROR;
+ }
+	/* SCSI table (R_Port) required for initializing INI:
+	 * initialize the SCSI ID table that manages the mapping between
+	 * SCSI ID, WWN and R_Port.
+	 */
+
+ ret = unf_init_scsi_id_table(lport);
+ if (ret != RETURN_OK) {
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+ return ret;
+ }
+
+ /* Initialize the EXCH resource */
+ ret = unf_alloc_xchg_resource(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) can't allocate exchange resource.", lport->port_id);
+
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+ unf_destroy_scsi_id_table(lport);
+
+ return ret;
+ }
+
+ /* Initialize the ESGL resource pool used by Lport */
+ ret = unf_init_esgl_pool(lport);
+ if (ret != RETURN_OK) {
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+ unf_free_all_xchg_mgr(lport);
+ unf_destroy_scsi_id_table(lport);
+
+ return ret;
+ }
+ /* Initialize the disc manager under Lport */
+ ret = unf_init_disc_mgr(lport);
+ if (ret != RETURN_OK) {
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+ unf_free_esgl_pool(lport);
+ unf_free_all_xchg_mgr(lport);
+ unf_destroy_scsi_id_table(lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) initialize discover manager unsuccessful.",
+ lport->port_id);
+
+ return ret;
+ }
+
+ /* Initialize the LPort manager */
+ ret = unf_init_vport_mgr_temp(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) initialize RPort manager unsuccessful.", lport->port_id);
+
+ goto RELEASE_LPORT;
+ }
+
+ /* Initialize the EXCH manager */
+ ret = unf_init_xchg_mgr_temp(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) initialize exchange manager unsuccessful.",
+ lport->port_id);
+ goto RELEASE_LPORT;
+ }
+ /* Initialize the resources required by the event processing center */
+ ret = unf_init_event_center(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) initialize event center unsuccessful.", lport->port_id);
+ goto RELEASE_LPORT;
+ }
+ /* Initialize the initialization status of Lport */
+ unf_set_lport_state(lport, UNF_LPORT_ST_INITIAL);
+
+ /* Initialize the Lport route test case */
+ ret = unf_init_lport_route(lport);
+ if (ret != RETURN_OK) {
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+ (void)unf_event_center_destroy(lport);
+ unf_disc_mgr_destroy(lport);
+ unf_free_esgl_pool(lport);
+ unf_free_all_xchg_mgr(lport);
+ unf_destroy_scsi_id_table(lport);
+
+ return ret;
+ }
+	/* NPIV support: initialize the vport pool */
+ ret = unf_init_vport_pool(lport);
+ if (ret != RETURN_OK) {
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+
+ unf_destroy_lport_route(lport);
+ (void)unf_event_center_destroy(lport);
+ unf_disc_mgr_destroy(lport);
+ unf_free_esgl_pool(lport);
+ unf_free_all_xchg_mgr(lport);
+ unf_destroy_scsi_id_table(lport);
+
+ return ret;
+ }
+
+ /* qualifier rport callback */
+ lport->unf_qualify_rport = unf_rport_set_qualifier_key_reuse;
+ lport->unf_tmf_abnormal_recovery = unf_tmf_timeout_recovery_special;
+ return RETURN_OK;
+RELEASE_LPORT:
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+
+ unf_disc_mgr_destroy(lport);
+ unf_free_esgl_pool(lport);
+ unf_free_all_xchg_mgr(lport);
+ unf_destroy_scsi_id_table(lport);
+
+ return ret;
+}
+
+void unf_free_qos_info(struct unf_lport *lport)
+{
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_qos_info *qos_info = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ spin_lock_irqsave(&lport->qos_mgr_lock, flag);
+ list_for_each_safe(node, next_node, &lport->list_qos_head) {
+ qos_info = (struct unf_qos_info *)list_entry(node,
+ struct unf_qos_info, entry_qos_info);
+ list_del_init(&qos_info->entry_qos_info);
+ kfree(qos_info);
+ }
+
+ spin_unlock_irqrestore(&lport->qos_mgr_lock, flag);
+}
+
+u32 unf_lport_deinit(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ unf_free_qos_info(lport);
+
+ unf_unregister_scsi_host(lport);
+
+	/* If the card is unloaded normally, the thread has already been
+	 * stopped once; stopping it again here causes no problem.
+	 */
+ unf_destroy_lport_route(lport);
+
+	/* Decrease the reference count of the card event thread; the last
+	 * port destroys the card thread
+	 */
+ unf_destroy_card_thread(lport);
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ (void)unf_event_center_destroy(lport);
+ unf_free_vport_pool(lport);
+ unf_xchg_mgr_destroy(lport);
+
+ unf_free_esgl_pool(lport);
+
+	/* Reliability review: the disc manager must be released after the
+	 * exchange manager, so destroy the disc manager here
+	 */
+ unf_disc_mgr_destroy(lport);
+
+ unf_release_xchg_mgr_temp(lport);
+
+ unf_release_vport_mgr_temp(lport);
+
+ unf_destroy_scsi_id_table(lport);
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+
+ /* Releasing the lw Interface Template */
+ unf_lport_release_lw_funop(lport);
+ lport->fc_port = NULL;
+
+ return RETURN_OK;
+}
+
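+/* Per-chip event thread: pop events from the chip event list and handle
+ * them, sleeping briefly while the list is empty, until asked to exit.
+ */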
+static int unf_card_event_process(void *arg)
+{
+ struct list_head *node = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ ulong flags = 0;
+ struct unf_chip_manage_info *chip_info = (struct unf_chip_manage_info *)arg;
+
+ set_user_nice(current, UNF_OS_THRD_PRI_LOW);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Slot(%u) chip(0x%x) enter event thread.",
+ chip_info->slot_id, chip_info->chip_id);
+
+ while (!kthread_should_stop()) {
+ if (chip_info->thread_exit)
+ break;
+
+ spin_lock_irqsave(&chip_info->chip_event_list_lock, flags);
+ if (list_empty(&chip_info->list_head)) {
+ spin_unlock_irqrestore(&chip_info->chip_event_list_lock, flags);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout((long)msecs_to_jiffies(UNF_S_TO_MS));
+ } else {
+ node = UNF_OS_LIST_NEXT(&chip_info->list_head);
+ list_del_init(node);
+ chip_info->list_num--;
+ event_node = list_entry(node, struct unf_cm_event_report, list_entry);
+ spin_unlock_irqrestore(&chip_info->chip_event_list_lock, flags);
+ unf_handle_event(event_node);
+ }
+ }
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "Slot(%u) chip(0x%x) exit event thread.",
+ chip_info->slot_id, chip_info->chip_id);
+
+ return RETURN_OK;
+}
+
+static void unf_destroy_card_thread(struct unf_lport *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ struct unf_chip_manage_info *chip_info = NULL;
+ struct list_head *list = NULL;
+ struct list_head *list_tmp = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ ulong event_lock_flag = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+	/* Nothing to do if no event thread is attached to this port. */
+ chip_info = lport->chip_info;
+ if (!chip_info) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) has no event thread.", lport->port_id);
+ return;
+ }
+ event_mgr = &lport->event_mgr;
+
+ spin_lock_irqsave(&chip_info->chip_event_list_lock, flag);
+ if (!list_empty(&chip_info->list_head)) {
+ list_for_each_safe(list, list_tmp, &chip_info->list_head) {
+ event_node = list_entry(list, struct unf_cm_event_report, list_entry);
+
+			/* Recycle events belonging to this L_Port back to its free event list. */
+ if (event_node->lport == lport) {
+ list_del_init(&event_node->list_entry);
+ if (event_node->event_asy_flag == UNF_EVENT_SYN) {
+ event_node->result = UNF_RETURN_ERROR;
+ complete(&event_node->event_comp);
+ }
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, event_lock_flag);
+ event_mgr->free_event_count++;
+ list_add_tail(&event_node->list_entry, &event_mgr->list_free_event);
+ spin_unlock_irqrestore(&event_mgr->port_event_lock,
+ event_lock_flag);
+ }
+ }
+ }
+ spin_unlock_irqrestore(&chip_info->chip_event_list_lock, flag);
+
+	/* If the event thread is no longer referenced by any port, it is no
+	 * longer needed and its resources are released here
+	 */
+ if (atomic_dec_and_test(&chip_info->ref_cnt)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) destroy slot(%u) chip(0x%x) event thread succeed.",
+ lport->port_id, chip_info->slot_id, chip_info->chip_id);
+ chip_info->thread_exit = true;
+ wake_up_process(chip_info->thread);
+ kthread_stop(chip_info->thread);
+ chip_info->thread = NULL;
+
+ spin_lock_irqsave(&card_thread_mgr.global_card_list_lock, flag);
+ list_del_init(&chip_info->list_chip_thread_entry);
+ card_thread_mgr.card_num--;
+ spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
+
+ vfree(chip_info);
+ }
+
+ lport->chip_info = NULL;
+}
+
+static u32 unf_creat_card_thread(struct unf_lport *lport)
+{
+ ulong flag = 0;
+ struct unf_chip_manage_info *chip_manage_info = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* If the thread cannot be found, apply for a new thread. */
+ chip_manage_info = (struct unf_chip_manage_info *)
+ vmalloc(sizeof(struct unf_chip_manage_info));
+ if (!chip_manage_info) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) cannot allocate thread memory.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(chip_manage_info, 0, sizeof(struct unf_chip_manage_info));
+
+ memcpy(&chip_manage_info->chip_info, &lport->low_level_func.chip_info,
+ sizeof(struct unf_chip_info));
+ chip_manage_info->slot_id = UNF_GET_BOARD_TYPE_AND_SLOT_ID_BY_PORTID(lport->port_id);
+ chip_manage_info->chip_id = lport->low_level_func.chip_id;
+ chip_manage_info->list_num = 0;
+ chip_manage_info->sfp_9545_fault = false;
+ chip_manage_info->sfp_power_fault = false;
+ atomic_set(&chip_manage_info->ref_cnt, 1);
+ atomic_set(&chip_manage_info->card_loop_test_flag, false);
+ spin_lock_init(&chip_manage_info->card_loop_back_state_lock);
+ INIT_LIST_HEAD(&chip_manage_info->list_head);
+ spin_lock_init(&chip_manage_info->chip_event_list_lock);
+
+ chip_manage_info->thread_exit = false;
+ chip_manage_info->thread = kthread_create(unf_card_event_process,
+ chip_manage_info, "%x_et", lport->port_id);
+
+ if (IS_ERR(chip_manage_info->thread) || !chip_manage_info->thread) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) creat event thread(0x%p) unsuccessful.",
+ lport->port_id, chip_manage_info->thread);
+
+ vfree(chip_manage_info);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ lport->chip_info = chip_manage_info;
+ wake_up_process(chip_manage_info->thread);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) creat slot(%u) chip(0x%x) event thread succeed.",
+ lport->port_id, chip_manage_info->slot_id,
+ chip_manage_info->chip_id);
+
+ spin_lock_irqsave(&card_thread_mgr.global_card_list_lock, flag);
+ list_add_tail(&chip_manage_info->list_chip_thread_entry, &card_thread_mgr.card_list_head);
+ card_thread_mgr.card_num++;
+ spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
+
+ return RETURN_OK;
+}
+
+static u32 unf_find_card_thread(struct unf_lport *lport)
+{
+ ulong flag = 0;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_chip_manage_info *chip_info = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ spin_lock_irqsave(&card_thread_mgr.global_card_list_lock, flag);
+ list_for_each_safe(node, next_node, &card_thread_mgr.card_list_head) {
+ chip_info = list_entry(node, struct unf_chip_manage_info, list_chip_thread_entry);
+
+ if (chip_info->chip_id == lport->low_level_func.chip_id &&
+ chip_info->slot_id ==
+ UNF_GET_BOARD_TYPE_AND_SLOT_ID_BY_PORTID(lport->port_id)) {
+ atomic_inc(&chip_info->ref_cnt);
+ lport->chip_info = chip_info;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) find card(%u) chip(0x%x) event thread succeed.",
+ lport->port_id, chip_info->slot_id, chip_info->chip_id);
+
+ spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
+
+ return RETURN_OK;
+ }
+ }
+ spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
+
+ ret = unf_creat_card_thread(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) creat event thread unsuccessful. Destroy LPort.",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ } else {
+ return RETURN_OK;
+ }
+}
+
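+/* Allocate and initialize an L_Port for a low-level FC port: init the port,
+ * attach or create the chip event thread, register it globally, build its
+ * WWNs, register the SCSI host and start work if allowed.
+ */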
+void *unf_lport_create_and_init(void *private_data, struct unf_low_level_functioon_op *low_level_op)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ if (!private_data) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Private Data is NULL");
+
+ return NULL;
+ }
+ if (!low_level_op) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LowLevel port(0x%p) function is NULL", private_data);
+
+ return NULL;
+ }
+
+ /* 1. vmalloc & Memset L_Port */
+ unf_lport = vmalloc(sizeof(struct unf_lport));
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Alloc LPort memory failed.");
+
+ return NULL;
+ }
+ memset(unf_lport, 0, sizeof(struct unf_lport));
+
+ /* 2. L_Port Init */
+ if (unf_lport_init(unf_lport, private_data, low_level_op) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort initialize unsuccessful.");
+
+ vfree(unf_lport);
+
+ return NULL;
+ }
+
+ /* 4. Get or Create Chip Thread
+ * Chip_ID & Slot_ID
+ */
+ ret = unf_find_card_thread(unf_lport);
+ if (ret != RETURN_OK) {
+ (void)unf_lport_deinit(unf_lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) Find Chip thread unsuccessful. Destroy LPort.",
+ unf_lport->port_id);
+
+ vfree(unf_lport);
+ return NULL;
+ }
+
+	/* 5. Register in the global port management linked list */
+ unf_port_register(unf_lport);
+ /* update WWN */
+ if (unf_build_lport_wwn(unf_lport) != RETURN_OK) {
+ unf_port_unregister(unf_lport);
+ (void)unf_lport_deinit(unf_lport);
+ vfree(unf_lport);
+ return NULL;
+ }
+
+	/* TODO: unf_init_link_lose_tmo(unf_lport); */
+
+ /* initialize Scsi Host */
+ if (unf_register_scsi_host(unf_lport) != RETURN_OK) {
+ unf_port_unregister(unf_lport);
+ (void)unf_lport_deinit(unf_lport);
+ vfree(unf_lport);
+ return NULL;
+ }
+ /* 7. Here, start work now */
+ if (global_lport_mgr.start_work) {
+ if (unf_port_start_work(unf_lport) != RETURN_OK) {
+ unf_port_unregister(unf_lport);
+
+ (void)unf_lport_deinit(unf_lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) start work failed", unf_lport->port_id);
+ vfree(unf_lport);
+ return NULL;
+ }
+ }
+
+ return unf_lport;
+}
+
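+/* Final teardown of an L_Port: destroy all vports, deinit the port, move it
+ * to the dirty list if dirty memory remains and complete the waiter in
+ * unf_release_local_port().
+ */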
+static int unf_lport_destroy(void *lport, void *arg_out)
+{
+ struct unf_lport *unf_lport = NULL;
+ ulong flags = 0;
+
+ if (!lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR, "LPort is NULL.");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_lport = (struct unf_lport *)lport;
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "Destroy LPort(0x%p), ID(0x%x).", unf_lport, unf_lport->port_id);
+ /* NPIV Ensure that all Vport are deleted */
+ unf_destroy_all_vports(unf_lport);
+ unf_lport->destroy_step = UNF_LPORT_DESTROY_STEP_1_REPORT_PORT_OUT;
+
+ (void)unf_lport_deinit(lport);
+
+ /* The port is removed from the destroy linked list. The next step is to
+ * release the memory
+ */
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ list_del(&unf_lport->entry_lport);
+
+ /* If the port has dirty memory, the port is mounted to the linked list
+ * of dirty ports
+ */
+ if (unf_lport->dirty_flag)
+ list_add_tail(&unf_lport->entry_lport, &global_lport_mgr.dirty_list_head);
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+
+ if (unf_lport->lport_free_completion) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Complete LPort(0x%p), port ID(0x%x)'s Free Completion.",
+ unf_lport, unf_lport->port_id);
+ complete(unf_lport->lport_free_completion);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%p), port ID(0x%x)'s Free Completion is NULL.",
+ unf_lport, unf_lport->port_id);
+ dump_stack();
+ }
+
+ return RETURN_OK;
+}
+
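+/* Switch the physical port on or off through the low-level
+ * UNF_PORT_CFG_SET_PORT_SWITCH hook and record the new switch state.
+ */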
+static int unf_port_switch(struct unf_lport *lport, bool switch_flag)
+{
+ struct unf_lport *unf_lport = lport;
+ int ret = UNF_RETURN_ERROR;
+ bool flag = false;
+
+ FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
+
+ if (!unf_lport->low_level_func.port_mgr_op.ll_port_config_set) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
+ "[warn]Port(0x%x)'s config(switch) function is NULL",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ flag = switch_flag ? true : false;
+
+ ret = (int)unf_lport->low_level_func.port_mgr_op.ll_port_config_set(unf_lport->fc_port,
+ UNF_PORT_CFG_SET_PORT_SWITCH, (void *)&flag);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_WARN, "[warn]Port(0x%x) switch %s failed",
+ unf_lport->port_id, switch_flag ? "On" : "Off");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_lport->switch_state = (bool)flag;
+
+ return RETURN_OK;
+}
+
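+/* Post @func as an event to the L_Port identified by @port_id; for a
+ * synchronous event the caller blocks until the handler completes and its
+ * result is returned.
+ */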
+static int unf_send_event(u32 port_id, u32 syn_flag, void *argc_in, void *argc_out,
+ int (*func)(void *argc_in, void *argc_out))
+{
+ struct unf_lport *lport = NULL;
+ struct unf_cm_event_report *event = NULL;
+ int ret = 0;
+
+ lport = unf_find_lport_by_port_id(port_id);
+ if (!lport) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_INFO, "Cannot find LPort(0x%x).", port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (unf_lport_ref_inc(lport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "LPort(0x%x) is removing, no need process.",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ if (unlikely(!lport->event_mgr.unf_get_free_event_func ||
+ !lport->event_mgr.unf_post_event_func ||
+ !lport->event_mgr.unf_release_event)) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_MAJOR, "Event function is NULL.");
+
+ unf_lport_ref_dec_to_destroy(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (lport->port_removing) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "LPort(0x%x) is removing, no need process.",
+ lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ event = lport->event_mgr.unf_get_free_event_func((void *)lport);
+ if (!event) {
+ unf_lport_ref_dec_to_destroy(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ init_completion(&event->event_comp);
+ event->lport = lport;
+ event->event_asy_flag = syn_flag;
+ event->unf_event_task = func;
+ event->para_in = argc_in;
+ event->para_out = argc_out;
+ lport->event_mgr.unf_post_event_func(lport, event);
+
+ if (event->event_asy_flag) {
+		/* Wait for the event handler to finish before releasing the
+		 * event; otherwise the event list may become inconsistent.
+		 */
+ wait_for_completion(&event->event_comp);
+ ret = (int)event->result;
+ lport->event_mgr.unf_release_event(lport, event);
+ } else {
+ ret = RETURN_OK;
+ }
+
+ unf_lport_ref_dec_to_destroy(lport);
+ return ret;
+}
+
+static int unf_reset_port(void *arg_in, void *arg_out)
+{
+ struct unf_reset_port_argin *input = (struct unf_reset_port_argin *)arg_in;
+ struct unf_lport *lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ enum unf_port_config_state port_state = UNF_PORT_CONFIG_STATE_RESET;
+
+ FC_CHECK_RETURN_VALUE(input, UNF_RETURN_ERROR);
+
+ lport = unf_find_lport_by_port_id(input->port_id);
+ if (!lport) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_MAJOR, "Not find LPort(0x%x).",
+ input->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* reset port */
+ if (!lport->low_level_func.port_mgr_op.ll_port_config_set) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
+ "Port(0x%x)'s corresponding function is NULL.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ lport->act_topo = UNF_ACT_TOP_UNKNOWN;
+ lport->speed = UNF_PORT_SPEED_UNKNOWN;
+ lport->fabric_node_name = 0;
+
+ ret = lport->low_level_func.port_mgr_op.ll_port_config_set(lport->fc_port,
+ UNF_PORT_CFG_SET_PORT_STATE,
+ (void *)&port_state);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_MAJOR, "Reset port(0x%x) unsuccessful.",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+int unf_cm_reset_port(u32 port_id)
+{
+ int ret = UNF_RETURN_ERROR;
+
+ ret = unf_send_event(port_id, UNF_EVENT_SYN, (void *)&port_id,
+ (void *)NULL, unf_reset_port);
+ return ret;
+}
+
+int unf_lport_reset_port(struct unf_lport *lport, u32 flag)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ return unf_send_event(lport->port_id, flag, (void *)&lport->port_id,
+ (void *)NULL, unf_reset_port);
+}
+
+static inline u32 unf_get_loop_alpa(struct unf_lport *lport, void *loop_alpa)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport->low_level_func.port_mgr_op.ll_port_config_get,
+ UNF_RETURN_ERROR);
+
+ ret = lport->low_level_func.port_mgr_op.ll_port_config_get(lport->fc_port,
+ UNF_PORT_CFG_GET_LOOP_ALPA, loop_alpa);
+
+ return ret;
+}
+
+static u32 unf_lport_enter_private_loop_login(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = lport;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_READY); /* LPort: LINK_UP --> READY */
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ unf_lport_update_topo(unf_lport, UNF_ACT_TOP_PRIVATE_LOOP);
+
+ /* NOP: check L_Port state */
+ if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "[info]Port(0x%x) is NOP, do nothing",
+ unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ /* INI: check L_Port mode */
+ if (UNF_PORT_MODE_INI != (unf_lport->options & UNF_PORT_MODE_INI)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has no INI feature(0x%x), do nothing",
+ unf_lport->port_id, unf_lport->options);
+
+ return RETURN_OK;
+ }
+
+ if (unf_lport->disc.disc_temp.unf_disc_start) {
+ ret = unf_lport->disc.disc_temp.unf_disc_start(unf_lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with nportid(0x%x) start discovery failed",
+ unf_lport->port_id, unf_lport->nport_id);
+ }
+ }
+
+ return ret;
+}
+
+u32 unf_lport_login(struct unf_lport *lport, enum unf_act_topo act_topo)
+{
+ u32 loop_alpa = 0;
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* 1. Update (set) L_Port topo which get from low level */
+ unf_lport_update_topo(lport, act_topo);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+
+ /* 2. Link state check */
+ if (lport->link_up != UNF_PORT_LINK_UP) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with link_state(0x%x) port_state(0x%x) when login",
+ lport->port_id, lport->link_up, lport->states);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* 3. Update L_Port state */
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_LINK_UP); /* LPort: INITIAL --> LINK UP */
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x) start to login with topology(0x%x)",
+ lport->port_id, lport->act_topo);
+
+	/* 4. Start login */
+ if (act_topo == UNF_TOP_P2P_MASK ||
+ act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ /* P2P or Fabric mode */
+ ret = unf_lport_enter_flogi(lport);
+ } else if (act_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
+ /* Public loop */
+ (void)unf_get_loop_alpa(lport, &loop_alpa);
+
+		/* Before FLOGI the ALPA is only the low 8 bits; after FLOGI
+		 * ACC, the switch assigns the complete address
+		 */
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ lport->nport_id = loop_alpa;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ ret = unf_lport_enter_flogi(lport);
+ } else if (act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ /* Private loop */
+ (void)unf_get_loop_alpa(lport, &loop_alpa);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ lport->nport_id = loop_alpa;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ ret = unf_lport_enter_private_loop_login(lport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]LOGIN: Port(0x%x) login with unknown topology(0x%x)",
+ lport->port_id, lport->act_topo);
+ }
+
+ return ret;
+}
+
+static u32 unf_port_linkup(struct unf_lport *lport, void *input)
+{
+ struct unf_lport *unf_lport = lport;
+ u32 ret = RETURN_OK;
+ enum unf_act_topo act_topo = UNF_ACT_TOP_UNKNOWN;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* If NOP state, stop */
+ if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) is NOP and do nothing", unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ /* Update port state */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ unf_lport->link_up = UNF_PORT_LINK_UP;
+ unf_lport->speed = *((u32 *)input);
+ unf_set_lport_state(lport, UNF_LPORT_ST_INITIAL); /* INITIAL state */
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ /* set hot pool wait state: so far, do not care */
+ unf_set_hot_pool_wait_state(unf_lport, true);
+
+ unf_lport->enhanced_features |= UNF_LPORT_ENHANCED_FEATURE_READ_SFP_ONCE;
+
+	/* Get port active topology (from low level) */
+ if (!unf_lport->low_level_func.port_mgr_op.ll_port_config_get) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) get topo function is NULL", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ ret = unf_lport->low_level_func.port_mgr_op.ll_port_config_get(unf_lport->fc_port,
+ UNF_PORT_CFG_GET_TOPO_ACT, (void *)&act_topo);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) get topo from low level failed",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Start Login process */
+ ret = unf_lport_login(unf_lport, act_topo);
+
+ return ret;
+}
+
+static u32 unf_port_linkdown(struct unf_lport *lport, void *input)
+{
+ ulong flag = 0;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ unf_lport = lport;
+
+ /* To prevent repeated reporting linkdown */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ unf_lport->speed = UNF_PORT_SPEED_UNKNOWN;
+ unf_lport->act_topo = UNF_ACT_TOP_UNKNOWN;
+ if (unf_lport->link_up == UNF_PORT_LINK_DOWN) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ return RETURN_OK;
+ }
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_LINK_DOWN);
+ unf_reset_lport_params(unf_lport);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ unf_set_hot_pool_wait_state(unf_lport, false);
+
+	/*
+	 * Clear I/O:
+	 * 1. INI does ABORT only,
+	 * 2. TGT needs to do source clear with Wait_IO.
+	 *
+	 * For INI (busy/delay/delay_transfer/wait):
+	 * clean L_Port/V_Port link down I/O by setting the ABORT tag only.
+	 */
+ unf_flush_disc_event(&unf_lport->disc, NULL);
+
+ unf_clean_linkdown_io(unf_lport, false);
+
+ /* for L_Port's R_Ports */
+ unf_clean_linkdown_rport(unf_lport);
+ /* for L_Port's all Vports */
+ unf_linkdown_all_vports(lport);
+ return RETURN_OK;
+}
+
+static u32 unf_port_abnormal_reset(struct unf_lport *lport, void *input)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ unf_lport = lport;
+
+ ret = (u32)unf_lport_reset_port(unf_lport, UNF_EVENT_ASYN);
+
+ return ret;
+}
+
+static u32 unf_port_reset_start(struct unf_lport *lport, void *input)
+{
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_set_lport_state(lport, UNF_LPORT_ST_RESET);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x) begin to reset.", lport->port_id);
+
+ return ret;
+}
+
+static u32 unf_port_reset_end(struct unf_lport *lport, void *input)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x) reset end.", lport->port_id);
+
+ /* Task management command returns success and avoid repair measures
+ * case offline device
+ */
+ unf_wake_up_scsi_task_cmnd(lport);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_set_lport_state(lport, UNF_LPORT_ST_INITIAL);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ return RETURN_OK;
+}
+
+static u32 unf_port_nop(struct unf_lport *lport, void *input)
+{
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ unf_lport = lport;
+
+ atomic_set(&unf_lport->lport_no_operate_flag, UNF_LPORT_NOP);
+
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_LINK_DOWN);
+ unf_reset_lport_params(unf_lport);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ /* Set Tag prevent pending I/O to wait_list when close sfp failed */
+ unf_set_hot_pool_wait_state(unf_lport, false);
+
+ unf_flush_disc_event(&unf_lport->disc, NULL);
+
+ /* L_Port/V_Port's I/O(s): Clean Link Down I/O: Set Abort Tag */
+ unf_clean_linkdown_io(unf_lport, false);
+
+ /* L_Port/V_Port's R_Port(s): report link down event to scsi & clear
+ * resource
+ */
+ unf_clean_linkdown_rport(unf_lport);
+ unf_linkdown_all_vports(unf_lport);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) report NOP event done", unf_lport->nport_id);
+
+ return RETURN_OK;
+}
+
+static u32 unf_port_begin_remove(struct unf_lport *lport, void *input)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ /* Cancel route timer delay work */
+ unf_destroy_lport_route(lport);
+
+ return RETURN_OK;
+}
+
+static u32 unf_get_pcie_link_state(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = lport;
+ bool linkstate = true;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(unf_lport->low_level_func.port_mgr_op.ll_port_config_get,
+ UNF_RETURN_ERROR);
+
+ ret = unf_lport->low_level_func.port_mgr_op.ll_port_config_get(unf_lport->fc_port,
+ UNF_PORT_CFG_GET_PCIE_LINK_STATE, (void *)&linkstate);
+ if (ret != RETURN_OK || linkstate != true) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_KEVENT, "[err]Can't Get Pcie Link State");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
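+/*
+ * Drop one reference on the root L_Port. When the count reaches zero the
+ * port is moved from the global list to the destroy list and an
+ * asynchronous global event is scheduled to run unf_lport_destroy().
+ */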
+void unf_root_lport_ref_dec(struct unf_lport *lport)
+{
+ ulong flags = 0;
+ ulong lport_flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%p) port_id(0x%x) reference count is %d",
+ lport, lport->port_id, atomic_read(&lport->port_ref_cnt));
+
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ spin_lock_irqsave(&lport->lport_state_lock, lport_flags);
+ if (atomic_dec_and_test(&lport->port_ref_cnt)) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+
+ list_del(&lport->entry_lport);
+ global_lport_mgr.lport_sum--;
+
+		/* Put L_Port to destroy list for debugging */
+ list_add_tail(&lport->entry_lport, &global_lport_mgr.destroy_list_head);
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+
+ ret = unf_schedule_global_event((void *)lport, UNF_GLOBAL_EVENT_ASYN,
+ unf_lport_destroy);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
+				     "[warn]Schedule global event failed. remaining nodes(0x%x)",
+ global_event_queue.list_number);
+ }
+ } else {
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+ }
+}
+
+void unf_lport_ref_dec_to_destroy(struct unf_lport *lport)
+{
+ if (lport->root_lport != lport)
+ unf_vport_ref_dec(lport);
+ else
+ unf_root_lport_ref_dec(lport);
+}
+
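+/*
+ * Periodic route work: poll the PCIe link state and re-arm the delayed
+ * work every UNF_LPORT_POLL_TIMER ms. The work stops re-arming (and drops
+ * its L_Port reference) after UNF_MAX_PCIE_LINK_DOWN_TIMES consecutive
+ * PCIe link failures, on chip error, when the user closes FW routing, or
+ * when the port is being removed.
+ */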
+void unf_lport_route_work(struct work_struct *work)
+{
+#define UNF_MAX_PCIE_LINK_DOWN_TIMES 3
+ struct unf_lport *unf_lport = NULL;
+ int ret = 0;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ unf_lport = container_of(work, struct unf_lport, route_timer_work.work);
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_KEVENT, "[err]LPort is NULL");
+
+ return;
+ }
+
+ if (unlikely(unf_lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
+ "[warn]LPort(0x%x) route work is closing.", unf_lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+
+ return;
+ }
+
+ if (unlikely(unf_get_pcie_link_state(unf_lport)))
+ unf_lport->pcie_link_down_cnt++;
+ else
+ unf_lport->pcie_link_down_cnt = 0;
+
+ if (unf_lport->pcie_link_down_cnt >= UNF_MAX_PCIE_LINK_DOWN_TIMES) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
+ "[warn]LPort(0x%x) detected pcie linkdown, closing route work",
+ unf_lport->port_id);
+ unf_lport->pcie_link_down = true;
+ unf_free_lport_all_xchg(unf_lport);
+ unf_lport_ref_dec_to_destroy(unf_lport);
+ return;
+ }
+
+ if (unlikely(UNF_LPORT_CHIP_ERROR(unf_lport))) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
+			     "[warn]LPort(0x%x) reported chip error, closing route work.",
+ unf_lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+
+ return;
+ }
+
+ if (unf_lport->enhanced_features &
+ UNF_LPORT_ENHANCED_FEATURE_CLOSE_FW_ROUTE) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
+			     "[warn]User closed LPort(0x%x) route work.", unf_lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+
+ return;
+ }
+
+ /* Scheduling 1 second */
+ ret = queue_delayed_work(unf_wq, &unf_lport->route_timer_work,
+ (ulong)msecs_to_jiffies(UNF_LPORT_POLL_TIMER));
+ if (ret == 0) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
+ "[warn]LPort(0x%x) schedule work unsuccessful.", unf_lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+ }
+}
+
+static int unf_cm_get_mac_adr(void *argc_in, void *argc_out)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_get_chip_info_argout *chip_info = NULL;
+
+ FC_CHECK_RETURN_VALUE(argc_in, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(argc_out, UNF_RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)argc_in;
+ chip_info = (struct unf_get_chip_info_argout *)argc_out;
+
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_MAJOR, " LPort is null.");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (!unf_lport->low_level_func.port_mgr_op.ll_port_config_get) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x)'s corresponding function is NULL.", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (unf_lport->low_level_func.port_mgr_op.ll_port_config_get(unf_lport->fc_port,
+ UNF_PORT_CFG_GET_MAC_ADDR,
+ chip_info) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+			     "Port(0x%x) get MAC address failed.", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+int unf_build_sys_wwn(u32 port_id, u64 *sys_port_name, u64 *sys_node_name)
+{
+ struct unf_get_chip_info_argout wwn = {0};
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE((sys_port_name), UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE((sys_node_name), UNF_RETURN_ERROR);
+
+ unf_lport = unf_find_lport_by_port_id(port_id);
+ if (!unf_lport)
+ return UNF_RETURN_ERROR;
+
+ ret = (u32)unf_send_event(unf_lport->port_id, UNF_EVENT_SYN,
+ (void *)unf_lport, (void *)&wwn, unf_cm_get_mac_adr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+			     "send event(port get mac adr) failed.");
+ return UNF_RETURN_ERROR;
+ }
+
+ /* save card mode: UNF_FC_SERVER_BOARD_32_G(6):32G;
+ * UNF_FC_SERVER_BOARD_16_G(7):16G MODE
+ */
+ unf_lport->card_type = wwn.board_type;
+
+ /* update port max speed */
+ if (wwn.board_type == UNF_FC_SERVER_BOARD_32_G)
+ unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_32_G;
+ else if (wwn.board_type == UNF_FC_SERVER_BOARD_16_G)
+ unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_16_G;
+ else if (wwn.board_type == UNF_FC_SERVER_BOARD_8_G)
+ unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_8_G;
+ else
+ unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_32_G;
+
+ *sys_port_name = wwn.wwpn;
+ *sys_node_name = wwn.wwnn;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+		     "Port(0x%x) Port Name(0x%llx), Node Name(0x%llx).",
+ port_id, *sys_port_name, *sys_node_name);
+
+ return RETURN_OK;
+}
+
+static u32 unf_update_port_wwn(struct unf_lport *lport,
+ struct unf_port_wwn *port_wwn)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(port_wwn, UNF_RETURN_ERROR);
+
+	/* Now notify the low level to update */
+ if (!lport->low_level_func.port_mgr_op.ll_port_config_set) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x)'s corresponding function is NULL.",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (lport->low_level_func.port_mgr_op.ll_port_config_set(lport->fc_port,
+ UNF_PORT_CFG_UPDATE_WWN,
+ port_wwn) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) update WWN unsuccessful.",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) update WWN: previous(0x%llx, 0x%llx), now(0x%llx, 0x%llx).",
+ lport->port_id, lport->port_name, lport->node_name,
+ port_wwn->sys_port_wwn, port_wwn->sys_node_name);
+
+ lport->port_name = port_wwn->sys_port_wwn;
+ lport->node_name = port_wwn->sys_node_name;
+
+ return RETURN_OK;
+}
+
+static u32 unf_build_lport_wwn(struct unf_lport *lport)
+{
+ struct unf_port_wwn port_wwn = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ if (unf_build_sys_wwn(lport->port_id, &port_wwn.sys_port_wwn,
+ &port_wwn.sys_node_name) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) build WWN unsuccessful.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) build WWN succeed", lport->port_id);
+
+ if (unf_update_port_wwn(lport, &port_wwn) != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ return RETURN_OK;
+}
+
+u32 unf_port_release_rport_index(struct unf_lport *lport, void *input)
+{
+ u32 rport_index = INVALID_VALUE32;
+ ulong flag = 0;
+ struct unf_rport_pool *rport_pool = NULL;
+ struct unf_lport *unf_lport = NULL;
+ spinlock_t *rport_pool_lock = NULL;
+
+	FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+	unf_lport = (struct unf_lport *)lport->root_lport;
+
+ if (input) {
+ rport_index = *(u32 *)input;
+ if (rport_index < lport->low_level_func.support_max_rport) {
+ rport_pool = &unf_lport->rport_pool;
+ rport_pool_lock = &rport_pool->rport_free_pool_lock;
+ spin_lock_irqsave(rport_pool_lock, flag);
+ if (test_bit((int)rport_index, rport_pool->rpi_bitmap)) {
+ clear_bit((int)rport_index, rport_pool->rpi_bitmap);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) try to release a free rport index(0x%x)",
+ lport->port_id, rport_index);
+ }
+ spin_unlock_irqrestore(rport_pool_lock, flag);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+				     "[warn]Port(0x%x) try to release a nonexistent rport index(0x%x)",
+ lport->port_id, rport_index);
+ }
+ }
+
+ return RETURN_OK;
+}
+
+void *unf_lookup_lport_by_nportid(void *lport, u32 nport_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_lport = unf_lport->root_lport;
+ vport_pool = unf_lport->vport_pool;
+
+ if (unf_lport->nport_id == nport_id)
+ return unf_lport;
+
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->nport_id == nport_id) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->nport_id == nport_id) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x) has no vport Nport ID(0x%x)",
+ unf_lport->port_id, nport_id);
+
+ return NULL;
+}
+
+int unf_get_link_lose_tmo(struct unf_lport *lport)
+{
+ u32 tmo_value = 0;
+
+ if (!lport)
+ return UNF_LOSE_TMO;
+
+ tmo_value = atomic_read(&lport->link_lose_tmo);
+
+ if (!tmo_value)
+ tmo_value = UNF_LOSE_TMO;
+
+ return (int)tmo_value;
+}
+
+u32 unf_register_scsi_host(struct unf_lport *lport)
+{
+ struct unf_host_param host_param = {0};
+
+ struct Scsi_Host **scsi_host = NULL;
+ struct unf_lport_cfg_item *lport_cfg_items = NULL;
+
+ FC_CHECK_RETURN_VALUE((lport), UNF_RETURN_ERROR);
+
+ /* Point to -->> L_port->Scsi_host */
+ scsi_host = &lport->host_info.host;
+
+ lport_cfg_items = &lport->low_level_func.lport_cfg_items;
+ host_param.can_queue = (int)lport_cfg_items->max_queue_depth;
+
+ /* Performance optimization */
+ host_param.cmnd_per_lun = UNF_MAX_CMND_PER_LUN;
+
+ host_param.sg_table_size = UNF_MAX_DMA_SEGS;
+ host_param.max_id = UNF_MAX_TARGET_NUMBER;
+ host_param.max_lun = UNF_DEFAULT_MAX_LUN;
+ host_param.max_channel = UNF_MAX_BUS_CHANNEL;
+ host_param.max_cmnd_len = UNF_MAX_SCSI_CMND_LEN; /* CDB-16 */
+ host_param.dma_boundary = UNF_DMA_BOUNDARY;
+ host_param.max_sectors = UNF_MAX_SECTORS;
+ host_param.port_id = lport->port_id;
+ host_param.lport = lport;
+ host_param.pdev = &lport->low_level_func.dev->dev;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+		     "[info]Port(0x%x) allocate scsi host: can queue(%u), commands per LUN(%u), max lun(%u)",
+ lport->port_id, host_param.can_queue, host_param.cmnd_per_lun,
+ host_param.max_lun);
+
+ if (unf_alloc_scsi_host(scsi_host, &host_param) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) allocate scsi host failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) allocate scsi host(0x%x) succeed",
+ lport->port_id, UNF_GET_SCSI_HOST_ID(*scsi_host));
+
+ return RETURN_OK;
+}
+
+void unf_unregister_scsi_host(struct unf_lport *lport)
+{
+ struct Scsi_Host *scsi_host = NULL;
+ u32 host_no = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ scsi_host = lport->host_info.host;
+
+ if (scsi_host) {
+ host_no = UNF_GET_SCSI_HOST_ID(scsi_host);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) starting unregister scsi host(0x%x)",
+ lport->port_id, host_no);
+ unf_free_scsi_host(scsi_host);
+		/* can't set scsi_host to NULL, since it is not allocated by itself */
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+			     "[warn]Port(0x%x) unregister scsi host, invalid scsi_host",
+ lport->port_id);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) unregister scsi host(0x%x) succeed",
+ lport->port_id, host_no);
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_12_UNREG_SCSI_HOST;
+}
diff --git a/drivers/scsi/spfc/common/unf_portman.h b/drivers/scsi/spfc/common/unf_portman.h
new file mode 100644
index 000000000000..c05d31197bd7
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_portman.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_PORT_MAN_H
+#define UNF_PORT_MAN_H
+
+#include "unf_type.h"
+#include "unf_lport.h"
+
+#define UNF_LPORT_POLL_TIMER ((u32)(1 * 1000))
+#define UNF_TX_CREDIT_REG_32_G 0x2289420
+#define UNF_RX_CREDIT_REG_32_G 0x228950c
+#define UNF_CREDIT_REG_16_G 0x2283418
+#define UNF_PORT_OFFSET_BASE 0x10000
+#define UNF_CREDIT_EMU_VALUE 0x20
+#define UNF_CREDIT_VALUE_32_G 0x8
+#define UNF_CREDIT_VALUE_16_G 0x8000000080008
+
+struct unf_nportid_map {
+ u32 sid;
+ u32 did;
+ void *rport[1024];
+ void *lport;
+};
+
+struct unf_global_card_thread {
+ struct list_head card_list_head;
+ spinlock_t global_card_list_lock;
+ u32 card_num;
+};
+
+/* Global L_Port manager, manages all L_Ports */
+struct unf_global_lport {
+	struct list_head lport_list_head;
+
+	/* Temporary list, used in hold list traverse */
+	struct list_head intergrad_head;
+
+	/* Destroy list, used in card remove */
+	struct list_head destroy_list_head;
+
+	/* Dirty list, abnormal ports */
+	struct list_head dirty_list_head;
+ spinlock_t global_lport_list_lock;
+ u32 lport_sum;
+ u8 dft_mode;
+ bool start_work;
+};
+
+struct unf_port_action {
+ u32 action;
+ u32 (*unf_action)(struct unf_lport *lport, void *input);
+};
+
+struct unf_reset_port_argin {
+ u32 port_id;
+};
+
+extern struct unf_global_lport global_lport_mgr;
+extern struct unf_global_card_thread card_thread_mgr;
+extern struct workqueue_struct *unf_wq;
+
+struct unf_lport *unf_find_lport_by_port_id(u32 port_id);
+struct unf_lport *unf_find_lport_by_scsi_hostid(u32 scsi_host_id);
+void *
+unf_lport_create_and_init(void *private_data,
+ struct unf_low_level_functioon_op *low_level_op);
+u32 unf_fc_port_link_event(void *lport, u32 events, void *input);
+u32 unf_release_local_port(void *lport);
+void unf_lport_route_work(struct work_struct *work);
+void unf_lport_update_topo(struct unf_lport *lport,
+ enum unf_act_topo active_topo);
+void unf_lport_ref_dec(struct unf_lport *lport);
+u32 unf_lport_ref_inc(struct unf_lport *lport);
+void unf_lport_ref_dec_to_destroy(struct unf_lport *lport);
+void unf_port_mgmt_deinit(void);
+void unf_port_mgmt_init(void);
+void unf_show_dirty_port(bool show_only, u32 *dirty_port_num);
+void *unf_lookup_lport_by_nportid(void *lport, u32 nport_id);
+u32 unf_is_lport_valid(struct unf_lport *lport);
+int unf_lport_reset_port(struct unf_lport *lport, u32 flag);
+int unf_cm_ops_handle(u32 type, void **arg_in);
+u32 unf_register_scsi_host(struct unf_lport *lport);
+void unf_unregister_scsi_host(struct unf_lport *lport);
+void unf_destroy_scsi_id_table(struct unf_lport *lport);
+u32 unf_lport_login(struct unf_lport *lport, enum unf_act_topo act_topo);
+u32 unf_init_scsi_id_table(struct unf_lport *lport);
+void unf_set_lport_removing(struct unf_lport *lport);
+void unf_lport_release_lw_funop(struct unf_lport *lport);
+void unf_show_all_rport(struct unf_lport *lport);
+void unf_disc_state_ma(struct unf_lport *lport, enum unf_disc_event evnet);
+int unf_get_link_lose_tmo(struct unf_lport *lport);
+u32 unf_port_release_rport_index(struct unf_lport *lport, void *input);
+int unf_cm_reset_port(u32 port_id);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_rport.c b/drivers/scsi/spfc/common/unf_rport.c
new file mode 100644
index 000000000000..aa4967fc0ab6
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_rport.c
@@ -0,0 +1,2286 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_rport.h"
+#include "unf_log.h"
+#include "unf_exchg.h"
+#include "unf_ls.h"
+#include "unf_service.h"
+#include "unf_portman.h"
+
+/* rport state:ready --->>> link_down --->>> closing --->>> timeout --->>> delete */
+struct unf_rport_feature_pool *port_feature_pool;
+
+void unf_sesion_loss_timeout(struct work_struct *work)
+{
+ struct unf_wwpn_rport_info *wwpn_rport_info = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ wwpn_rport_info = container_of(work, struct unf_wwpn_rport_info, loss_tmo_work.work);
+ if (unlikely(!wwpn_rport_info)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]wwpn_rport_info is NULL");
+ return;
+ }
+
+ atomic_set(&wwpn_rport_info->scsi_state, UNF_SCSI_ST_DEAD);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) wwpn(0x%llx) set target(0x%x) scsi state to dead",
+ ((struct unf_lport *)(wwpn_rport_info->lport))->port_id,
+ wwpn_rport_info->wwpn, wwpn_rport_info->target_id);
+}
+
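+/*
+ * Allocate a scsi_id for @rport from the image table in three passes:
+ * reuse an entry whose WWPN already matches (re-plug case), otherwise take
+ * a never-used entry (wwpn == INVALID_WWPN), otherwise recycle an entry
+ * whose SCSI state is DEAD. Returns INVALID_VALUE32 when no entry is free.
+ */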
+u32 unf_alloc_scsi_id(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ ulong flags = 0;
+ u32 index = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ spinlock_t *rport_scsi_tb_lock = NULL;
+
+ rport_scsi_table = &lport->rport_scsi_table;
+ rport_scsi_tb_lock = &rport_scsi_table->scsi_image_table_lock;
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+
+ /* 1. At first, existence check */
+ for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
+ if (rport->port_name == wwn_rport_info->wwpn) {
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
+ (&wwn_rport_info->loss_tmo_work),
+ "loss tmo Timer work");
+
+ /* Plug case: reuse again */
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+ wwn_rport_info->rport = rport;
+ wwn_rport_info->las_ten_scsi_state =
+ atomic_read(&wwn_rport_info->scsi_state);
+ atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find the same scsi_id(0x%x) by wwpn(0x%llx) RPort(%p) N_Port_ID(0x%x)",
+ lport->port_id, index, wwn_rport_info->wwpn, rport,
+ rport->nport_id);
+
+ atomic_inc(&lport->resume_scsi_id);
+ goto find;
+ }
+ }
+
+ /* 2. Alloc new SCSI ID */
+ for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
+ if (wwn_rport_info->wwpn == INVALID_WWPN) {
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
+ (&wwn_rport_info->loss_tmo_work),
+ "loss tmo Timer work");
+ /* Use the free space */
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+ wwn_rport_info->rport = rport;
+ wwn_rport_info->wwpn = rport->port_name;
+ wwn_rport_info->las_ten_scsi_state =
+ atomic_read(&wwn_rport_info->scsi_state);
+ atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+				     "[info]Port(0x%x) alloc new scsi_id(0x%x) by wwpn(0x%llx) RPort(%p) N_Port_ID(0x%x)",
+ lport->port_id, index, wwn_rport_info->wwpn, rport,
+ rport->nport_id);
+
+ atomic_inc(&lport->alloc_scsi_id);
+ goto find;
+ }
+ }
+
+ /* 3. Reuse space has been used */
+ for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
+ if (atomic_read(&wwn_rport_info->scsi_state) == UNF_SCSI_ST_DEAD) {
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
+ (&wwn_rport_info->loss_tmo_work),
+ "loss tmo Timer work");
+
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+ if (wwn_rport_info->dfx_counter) {
+ memset(wwn_rport_info->dfx_counter, 0,
+ sizeof(struct unf_wwpn_dfx_counter_info));
+ }
+ if (wwn_rport_info->lun_qos_level) {
+ memset(wwn_rport_info->lun_qos_level, 0,
+ sizeof(u8) * UNF_MAX_LUN_PER_TARGET);
+ }
+ wwn_rport_info->rport = rport;
+ wwn_rport_info->wwpn = rport->port_name;
+ wwn_rport_info->las_ten_scsi_state =
+ atomic_read(&wwn_rport_info->scsi_state);
+ atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[info]Port(0x%x) reuse a dead scsi_id(0x%x) by wwpn(0x%llx) RPort(%p) N_Port_ID(0x%x)",
+ lport->port_id, index, wwn_rport_info->wwpn, rport,
+ rport->nport_id);
+
+ atomic_inc(&lport->reuse_scsi_id);
+ goto find;
+ }
+ }
+
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+		     "[warn]Port(0x%x) has no free scsi_id, max value(0x%x)",
+ lport->port_id, index);
+
+ return INVALID_VALUE32;
+
+find:
+ if (!wwn_rport_info->dfx_counter) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) allocate Rport(0x%x) DFX buffer",
+ lport->port_id, wwn_rport_info->rport->nport_id);
+ wwn_rport_info->dfx_counter = vmalloc(sizeof(struct unf_wwpn_dfx_counter_info));
+ if (!wwn_rport_info->dfx_counter) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) allocate DFX buffer fail",
+ lport->port_id);
+
+ return INVALID_VALUE32;
+ }
+
+ memset(wwn_rport_info->dfx_counter, 0, sizeof(struct unf_wwpn_dfx_counter_info));
+ }
+
+ return index;
+}
+
+u32 unf_get_scsi_id_by_wwpn(struct unf_lport *lport, u64 wwpn)
+{
+ struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ ulong flags = 0;
+ u32 index = 0;
+ spinlock_t *rport_scsi_tb_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, INVALID_VALUE32);
+ rport_scsi_table = &lport->rport_scsi_table;
+ rport_scsi_tb_lock = &rport_scsi_table->scsi_image_table_lock;
+
+ if (wwpn == 0)
+ return INVALID_VALUE32;
+
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+
+ for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
+ if (wwn_rport_info->wwpn == wwpn) {
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+ return index;
+ }
+ }
+
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ return INVALID_VALUE32;
+}
+
+void unf_set_device_state(struct unf_lport *lport, u32 scsi_id, int scsi_state)
+{
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ struct unf_wwpn_rport_info *wwpn_rport_info = NULL;
+
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) RPort scsi_id(0x%x) is larger than max 0x%x",
+ lport->port_id, scsi_id, UNF_MAX_SCSI_ID);
+ return;
+ }
+
+ scsi_image_table = &lport->rport_scsi_table;
+ wwpn_rport_info = &scsi_image_table->wwn_rport_info_table[scsi_id];
+ atomic_set(&wwpn_rport_info->scsi_state, scsi_state);
+}
+
+void unf_rport_linkdown(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /*
+ * 1. port_logout
+ * 2. rcvd_rscn_port_not_in_disc
+ * 3. each_rport_after_rscn
+ * 4. rcvd_gpnid_rjt
+ * 5. rport_after_logout(rport is fabric port)
+ */
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* 1. Update R_Port state: Link Down Event --->>> closing state */
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+	/* 2. R_Port enters the closing (then Delete) process */
+ unf_rport_enter_closing(rport);
+}
+
+static struct unf_rport *unf_rport_is_changed(struct unf_lport *lport,
+ struct unf_rport *rport, u32 sid)
+{
+ if (rport) {
+ /* S_ID or D_ID has been changed */
+ if (rport->nport_id != sid || rport->local_nport_id != lport->nport_id) {
+ /* 1. Swap case: (SID or DID changed): Report link down
+ * & delete immediately
+ */
+ unf_rport_immediate_link_down(lport, rport);
+ return NULL;
+ }
+ }
+
+ return rport;
+}
+
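+/*
+ * Qualify which R_Port to reuse when the lookups by N_Port_ID and by WWPN
+ * disagree: stale or conflicting R_Ports are reported link down and deleted
+ * immediately, and the single surviving R_Port (if any) is reused.
+ */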
+struct unf_rport *unf_rport_set_qualifier_key_reuse(struct unf_lport *lport,
+ struct unf_rport *rport_by_nport_id,
+ struct unf_rport *rport_by_wwpn,
+ u64 wwpn, u32 sid)
+{
+ /* Used for SPFC Chip */
+ struct unf_rport *rport = NULL;
+ struct unf_rport *rporta = NULL;
+ struct unf_rport *rportb = NULL;
+ bool wwpn_flag = false;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* About R_Port by N_Port_ID */
+ rporta = unf_rport_is_changed(lport, rport_by_nport_id, sid);
+
+ /* About R_Port by WWpn */
+ rportb = unf_rport_is_changed(lport, rport_by_wwpn, sid);
+
+ if (!rporta && !rportb) {
+ return NULL;
+ } else if (!rporta && rportb) {
+ /* 3. Plug case: reuse again */
+ rport = rportb;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%p) WWPN(0x%llx) S_ID(0x%x) D_ID(0x%x) reused by wwpn",
+ lport->port_id, rport, rport->port_name,
+ rport->nport_id, rport->local_nport_id);
+
+ return rport;
+ } else if (rporta && !rportb) {
+ wwpn_flag = (rporta->port_name != wwpn && rporta->port_name != 0 &&
+ rporta->port_name != INVALID_VALUE64);
+ if (wwpn_flag) {
+ /* 4. WWPN changed: Report link down & delete
+ * immediately
+ */
+ unf_rport_immediate_link_down(lport, rporta);
+ return NULL;
+ }
+
+		/* Update WWPN */
+ rporta->port_name = wwpn;
+ rport = rporta;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%p) WWPN(0x%llx) S_ID(0x%x) D_ID(0x%x) reused by N_Port_ID",
+ lport->port_id, rport, rport->port_name,
+ rport->nport_id, rport->local_nport_id);
+
+ return rport;
+ }
+
+ /* 5. Case for A == B && A != NULL && B != NULL */
+ if (rportb == rporta) {
+ rport = rporta;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find the same RPort(0x%p) WWPN(0x%llx) S_ID(0x%x) D_ID(0x%x)",
+ lport->port_id, rport, rport->port_name, rport->nport_id,
+ rport->local_nport_id);
+
+ return rport;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+		     "[warn]Port(0x%x) found two duplicate logins. RPort(A:0x%p, WWPN:0x%llx, S_ID:0x%x, D_ID:0x%x) RPort(B:0x%p, WWPN:0x%llx, S_ID:0x%x, D_ID:0x%x)",
+ lport->port_id, rporta, rporta->port_name, rporta->nport_id,
+ rporta->local_nport_id, rportb, rportb->port_name, rportb->nport_id,
+ rportb->local_nport_id);
+
+ /* 6. Case for A != B && A != NULL && B != NULL: Immediate
+ * Report && Deletion
+ */
+ unf_rport_immediate_link_down(lport, rporta);
+ unf_rport_immediate_link_down(lport, rportb);
+
+ return NULL;
+}
+
+struct unf_rport *unf_find_valid_rport(struct unf_lport *lport, u64 wwpn, u32 sid)
+{
+ struct unf_rport *rport = NULL;
+ struct unf_rport *rport_by_nport_id = NULL;
+ struct unf_rport *rport_by_wwpn = NULL;
+ ulong flags = 0;
+ spinlock_t *rport_state_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(lport->unf_qualify_rport, NULL);
+
+ /* Get R_Port by WWN & N_Port_ID */
+ rport_by_nport_id = unf_get_rport_by_nport_id(lport, sid);
+ rport_by_wwpn = unf_get_rport_by_wwn(lport, wwpn);
+
+	/* R_Port check: by WWPN (take the lock only if the R_Port exists) */
+	if (rport_by_wwpn) {
+		rport_state_lock = &rport_by_wwpn->rport_state_lock;
+		spin_lock_irqsave(rport_state_lock, flags);
+ if (rport_by_wwpn->nport_id == UNF_FC_FID_FLOGI) {
+ spin_unlock_irqrestore(rport_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[err]Port(0x%x) RPort(0x%p) find by WWPN(0x%llx) is invalid",
+ lport->port_id, rport_by_wwpn, wwpn);
+
+ rport_by_wwpn = NULL;
+ } else {
+ spin_unlock_irqrestore(rport_state_lock, flags);
+ }
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%p) find by N_Port_ID(0x%x) and RPort(0x%p) by WWPN(0x%llx)",
+ lport->port_id, lport->nport_id, rport_by_nport_id, sid, rport_by_wwpn, wwpn);
+
+ /* R_Port validity check: get by WWPN & N_Port_ID */
+ rport = lport->unf_qualify_rport(lport, rport_by_nport_id,
+ rport_by_wwpn, wwpn, sid);
+
+ return rport;
+}
+
+void unf_rport_delay_login(struct unf_rport *rport)
+{
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* Do R_Port recovery: PLOGI or PRLI or LOGO */
+ unf_rport_error_recovery(rport);
+}
+
+void unf_rport_enter_logo(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /*
+ * 1. TMF/ABTS timeout recovery :Y
+ * 2. L_Port error recovery --->>> larger than retry_count :Y
+ * 3. R_Port error recovery --->>> larger than retry_count :Y
+ * 4. Check PLOGI parameter --->>> parameter is error :Y
+ * 5. PRLI handler --->>> R_Port state is error :Y
+ * 6. PDISC handler --->>> R_Port state is not PRLI_WAIT :Y
+ * 7. ADISC handler --->>> R_Port state is not PRLI_WAIT :Y
+ * 8. PLOGI wait timeout with R_PORT is INI mode :Y
+ * 9. RCVD GFFID_RJT --->>> R_Port state is INIT :Y
+ * 10. RCVD GPNID_ACC --->>> R_Port state is error :Y
+ * 11. Private Loop mode with LOGO case :Y
+ * 12. P2P mode with LOGO case :Y
+ * 13. Fabric mode with LOGO case :Y
+ * 14. RCVD PRLI_ACC with R_Port is INI :Y
+ * 15. TGT RCVD BLS_REQ with session is error :Y
+ */
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+
+ if (rport->rp_state == UNF_RPORT_ST_CLOSING ||
+ rport->rp_state == UNF_RPORT_ST_DELETE) {
+ /* 1. Already within Closing or Delete: Do nothing */
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ return;
+ } else if (rport->rp_state == UNF_RPORT_ST_LOGO) {
+ /* 2. Update R_Port state: Normal Enter Event --->>> closing
+ * state
+ */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_NORMAL_ENTER);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ /* Send Logo if necessary */
+ if (unf_send_logo(lport, rport) != RETURN_OK)
+ unf_rport_enter_closing(rport);
+ } else {
+ /* 3. Update R_Port state: Link Down Event --->>> closing state
+ */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ unf_rport_enter_closing(rport);
+ }
+}
+
+u32 unf_free_scsi_id(struct unf_lport *lport, u32 scsi_id)
+{
+ ulong flags = 0;
+ struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ spinlock_t *rport_scsi_tb_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ if (unlikely(lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) is removing and do nothing",
+ lport->port_id, lport->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x_0x%x) scsi_id(0x%x) is bigger than %d",
+ lport->port_id, lport->nport_id, scsi_id, UNF_MAX_SCSI_ID);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ rport_scsi_table = &lport->rport_scsi_table;
+ rport_scsi_tb_lock = &rport_scsi_table->scsi_image_table_lock;
+ if (rport_scsi_table->wwn_rport_info_table) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[warn]Port(0x%x_0x%x) RPort(0x%p) free scsi_id(0x%x) wwpn(0x%llx) target_id(0x%x) succeed",
+ lport->port_id, lport->nport_id,
+ rport_scsi_table->wwn_rport_info_table[scsi_id].rport,
+ scsi_id, rport_scsi_table->wwn_rport_info_table[scsi_id].wwpn,
+ rport_scsi_table->wwn_rport_info_table[scsi_id].target_id);
+
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[scsi_id];
+ if (wwn_rport_info->rport) {
+ wwn_rport_info->rport->rport = NULL;
+ wwn_rport_info->rport = NULL;
+ }
+ wwn_rport_info->target_id = INVALID_VALUE32;
+ atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_DEAD);
+
+ /* NOTE: remain WWPN/Port_Name unchanged(un-cleared) */
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ return RETURN_OK;
+ }
+
+ return UNF_RETURN_ERROR;
+}
+
+static void unf_report_ini_linkwown_event(struct unf_lport *lport, struct unf_rport *rport)
+{
+ u32 scsi_id = 0;
+ struct fc_rport *unf_rport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+	/*
+	 * 1. Set local device (rport/rport_info_table) state to OFF_LINE.
+	 *
+	 * Note: rport->scsi_id is valid from rport link up until link down.
+	 */
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ scsi_id = rport->scsi_id;
+ unf_set_device_state(lport, scsi_id, UNF_SCSI_ST_OFFLINE);
+
+ /* 2. delete scsi's rport */
+ unf_rport = (struct fc_rport *)rport->rport;
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ if (unf_rport) {
+ fc_remote_port_delete(unf_rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x_0x%x) delete RPort(0x%x) wwpn(0x%llx) scsi_id(0x%x) succeed",
+ lport->port_id, lport->nport_id, rport->nport_id,
+ rport->port_name, scsi_id);
+
+ atomic_inc(&lport->scsi_session_del_success);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[warn]Port(0x%x_0x%x) delete RPort(0x%x_0x%p) failed",
+ lport->port_id, lport->nport_id, rport->nport_id, rport);
+ }
+}
+
+static void unf_report_ini_linkup_event(struct unf_lport *lport, struct unf_rport *rport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[event]Port(0x%x) RPort(0x%x_0x%p) put INI link up work(%p) to work_queue",
+ lport->port_id, rport->nport_id, rport, &rport->start_work);
+
+ if (unlikely(!queue_work(lport->link_event_wq, &rport->start_work))) {
+ atomic_inc(&lport->add_start_work_failed);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]Port(0x%x) RPort(0x%x_0x%p) put INI link up to work_queue failed",
+ lport->port_id, rport->nport_id, rport);
+ }
+}
+
+void unf_update_lport_state_by_linkup_event(struct unf_lport *lport,
+ struct unf_rport *rport,
+ u32 rport_att)
+{
+ /* Report R_Port Link Up/Down Event */
+ ulong flag = 0;
+ enum unf_port_state lport_state = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+
+ /* 1. R_Port does not has TGT mode any more */
+ if (((rport_att & UNF_FC4_FRAME_PARM_3_TGT) == 0) &&
+ rport->lport_ini_state == UNF_PORT_STATE_LINKUP) {
+ rport->last_lport_ini_state = rport->lport_ini_state;
+ rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) does not have TGT attribute(0x%x) any more",
+ lport->port_id, rport->nport_id, rport_att);
+ }
+
+ /* 2. R_Port with TGT mode, L_Port with INI mode */
+ if ((rport_att & UNF_FC4_FRAME_PARM_3_TGT) &&
+ (lport->options & UNF_FC4_FRAME_PARM_3_INI)) {
+ rport->last_lport_ini_state = rport->lport_ini_state;
+ rport->lport_ini_state = UNF_PORT_STATE_LINKUP;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[warn]Port(0x%x) update INI state with last(0x%x) and now(0x%x)",
+ lport->port_id, rport->last_lport_ini_state,
+ rport->lport_ini_state);
+ }
+
+ /* 3. Report L_Port INI/TGT Down/Up event to SCSI */
+ if (rport->last_lport_ini_state == rport->lport_ini_state) {
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x %p) INI state(0x%x) has not been changed",
+ lport->port_id, rport->nport_id, rport,
+ rport->lport_ini_state);
+ }
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ return;
+ }
+
+ lport_state = rport->lport_ini_state;
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ switch (lport_state) {
+ case UNF_PORT_STATE_LINKDOWN:
+ unf_report_ini_linkwown_event(lport, rport);
+ break;
+ case UNF_PORT_STATE_LINKUP:
+ unf_report_ini_linkup_event(lport, rport);
+ break;
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with unknown link status(0x%x)",
+ lport->port_id, rport->lport_ini_state);
+ break;
+ }
+}
+
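+/*
+ * R_Port closing callback: mark the INI/TGT session state link down and,
+ * if the INI state actually changed, report the link down to the SCSI layer.
+ */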
+static void unf_rport_callback(void *rport, void *lport, u32 result)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(lport);
+ unf_rport = (struct unf_rport *)rport;
+ unf_lport = (struct unf_lport *)lport;
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->last_lport_ini_state = unf_rport->lport_ini_state;
+ unf_rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+ unf_rport->last_lport_tgt_state = unf_rport->lport_tgt_state;
+ unf_rport->lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
+
+ /* Report R_Port Link Down Event to scsi */
+ if (unf_rport->last_lport_ini_state == unf_rport->lport_ini_state) {
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x %p) INI state(0x%x) has not been changed",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_rport, unf_rport->lport_ini_state);
+ }
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ return;
+ }
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_report_ini_linkwown_event(unf_lport, unf_rport);
+}
+
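+/*
+ * R_Port recovery timer: re-send PLOGI or PRLI according to the current
+ * login state; if sending fails, R_Port error recovery is triggered again.
+ */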
+static void unf_rport_recovery_timeout(struct work_struct *work)
+{
+ struct unf_lport *lport = NULL;
+ struct unf_rport *rport = NULL;
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+ enum unf_rport_login_state rp_state = UNF_RPORT_ST_INIT;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ rport = container_of(work, struct unf_rport, recovery_work.work);
+ if (unlikely(!rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort is NULL");
+
+ return;
+ }
+
+ lport = rport->lport;
+ if (unlikely(!lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort(0x%x) Port is NULL", rport->nport_id);
+
+ /* for timer */
+ unf_rport_ref_dec(rport);
+ return;
+ }
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rp_state = rport->rp_state;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) state(0x%x) recovery timer timeout",
+ lport->port_id, lport->nport_id, rport->nport_id, rp_state);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ switch (rp_state) {
+ case UNF_RPORT_ST_PLOGI_WAIT:
+ if ((lport->act_topo == UNF_ACT_TOP_P2P_DIRECT &&
+ lport->port_name > rport->port_name) ||
+ lport->act_topo != UNF_ACT_TOP_P2P_DIRECT) {
+ /* P2P: Name is master with P2P_D
+ * or has INI Mode
+ */
+ ret = unf_send_plogi(rport->lport, rport);
+ }
+ break;
+ case UNF_RPORT_ST_PRLI_WAIT:
+ ret = unf_send_prli(rport->lport, rport, ELS_PRLI);
+ if (ret != RETURN_OK)
+ unf_rport_error_recovery(rport);
+ fallthrough;
+ default:
+ break;
+ }
+
+ if (ret != RETURN_OK)
+ unf_rport_error_recovery(rport);
+
+ /* company with timer */
+ unf_rport_ref_dec(rport);
+}
+
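+/*
+ * Push an R_Port towards closing: cancel pending recovery/open delayed
+ * work, queue the closing work on the L_Port link event workqueue and,
+ * for N_Port_IDs at or below UNF_FC_FID_DOM_MGR, arm the session loss
+ * (loss_tmo) delayed work with the link-lose timeout.
+ */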
+void unf_schedule_closing_work(struct unf_lport *lport, struct unf_rport *rport)
+{
+ ulong flags = 0;
+ struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ u32 scsi_id = 0;
+ u32 ret = 0;
+ u32 delay = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ delay = (u32)(unf_get_link_lose_tmo(lport) * 1000);
+
+ rport_scsi_table = &lport->rport_scsi_table;
+ scsi_id = rport->scsi_id;
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+
+ /* 1. Cancel recovery_work */
+ if (cancel_delayed_work(&rport->recovery_work)) {
+ atomic_dec(&rport->rport_ref_cnt);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x_0x%p) cancel recovery work succeed",
+ lport->port_id, lport->nport_id, rport->nport_id, rport);
+ }
+
+ /* 2. Cancel Open_work */
+ if (cancel_delayed_work(&rport->open_work)) {
+ atomic_dec(&rport->rport_ref_cnt);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x_0x%p) cancel open work succeed",
+ lport->port_id, lport->nport_id, rport->nport_id, rport);
+ }
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ /* 3. Work in-queue (switch to thread context) */
+ if (!queue_work(lport->link_event_wq, &rport->closing_work)) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[warn]Port(0x%x) RPort(0x%x_0x%p) add link down to work queue failed",
+ lport->port_id, rport->nport_id, rport);
+
+ atomic_inc(&lport->add_closing_work_failed);
+ } else {
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+ (void)unf_rport_ref_inc(rport);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%x_0x%p) add link down to work(%p) queue succeed",
+ lport->port_id, rport->nport_id, rport,
+ &rport->closing_work);
+ }
+
+ if (rport->nport_id > UNF_FC_FID_DOM_MGR)
+ return;
+
+ if (scsi_id >= UNF_MAX_SCSI_ID) {
+ scsi_id = unf_get_scsi_id_by_wwpn(lport, rport->port_name);
+ if (scsi_id >= UNF_MAX_SCSI_ID) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+				     "[warn]Port(0x%x) RPort(0x%p) NPortId(0x%x) wwpn(0x%llx) option(0x%x) scsi_id(0x%x) is larger than max(0x%x)",
+ lport->port_id, rport, rport->nport_id,
+ rport->port_name, rport->options, scsi_id,
+ UNF_MAX_SCSI_ID);
+
+ return;
+ }
+ }
+
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[scsi_id];
+ ret = queue_delayed_work(unf_wq, &wwn_rport_info->loss_tmo_work,
+ (ulong)msecs_to_jiffies((u32)delay));
+ if (!ret) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+			     "[info]Port(0x%x) add RPort(0x%p) NPortId(0x%x) scsi_id(0x%x) wwpn(0x%llx) loss timeout work failed",
+ lport->port_id, rport, rport->nport_id, scsi_id,
+ rport->port_name);
+ }
+}
+
+static void unf_rport_closing_timeout(struct work_struct *work)
+{
+ /* closing --->>>(timeout)--->>> delete */
+ struct unf_rport *rport = NULL;
+ struct unf_lport *lport = NULL;
+ struct unf_disc *disc = NULL;
+ ulong rport_flag = 0;
+ ulong disc_flag = 0;
+ void (*unf_rport_callback)(void *, void *, u32) = NULL;
+ enum unf_rport_login_state old_state;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ /* Get R_Port & L_Port & Disc */
+ rport = container_of(work, struct unf_rport, closing_work);
+ if (unlikely(!rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort is NULL");
+ return;
+ }
+
+ lport = rport->lport;
+ if (unlikely(!lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort(0x%x_0x%p) Port is NULL",
+ rport->nport_id, rport);
+
+ /* Release directly (for timer) */
+ unf_rport_ref_dec(rport);
+ return;
+ }
+ disc = &lport->disc;
+
+ spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
+
+ old_state = rport->rp_state;
+ /* 1. Update R_Port state: event_timeout --->>> state_delete */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_CLS_TIMEOUT);
+
+ /* Check R_Port state */
+ if (rport->rp_state != UNF_RPORT_ST_DELETE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x_0x%x) RPort(0x%x_0x%p) closing timeout with error state(0x%x->0x%x)",
+ lport->port_id, lport->nport_id, rport->nport_id,
+ rport, old_state, rport->rp_state);
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+
+ /* Dec ref_cnt for timer */
+ unf_rport_ref_dec(rport);
+ return;
+ }
+
+ unf_rport_callback = rport->unf_rport_callback;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+
+ /* 2. Put R_Port to delete list */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ list_del_init(&rport->entry_rport);
+ list_add_tail(&rport->entry_rport, &disc->list_delete_rports);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+
+ /* 3. Report rport link down event to scsi */
+ if (unf_rport_callback) {
+ unf_rport_callback((void *)rport, (void *)rport->lport, RETURN_OK);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort(0x%x) callback is NULL",
+ rport->nport_id);
+ }
+
+ /* 4. Remove/delete R_Port */
+ unf_rport_ref_dec(rport);
+ unf_rport_ref_dec(rport);
+}
+
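+/*
+ * Link-up work (thread context): allocate a scsi_id for the R_Port,
+ * register it with the FC transport via fc_remote_port_add(), set the FCP
+ * target role and record the scsi rport in the image table.
+ */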
+static void unf_rport_linkup_to_scsi(struct work_struct *work)
+{
+ struct fc_rport_identifiers rport_ids;
+ struct fc_rport *rport = NULL;
+ ulong flags = RETURN_OK;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
+ u32 scsi_id = 0;
+
+ struct unf_lport *lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ unf_rport = container_of(work, struct unf_rport, start_work);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort is NULL for work(%p)", work);
+ return;
+ }
+
+ lport = unf_rport->lport;
+ if (unlikely(!lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort(0x%x_0x%p) Port is NULL",
+ unf_rport->nport_id, unf_rport);
+ return;
+ }
+
+ /* 1. Alloc R_Port SCSI_ID (image table) */
+ unf_rport->scsi_id = unf_alloc_scsi_id(lport, unf_rport);
+ if (unlikely(unf_rport->scsi_id == INVALID_VALUE32)) {
+ atomic_inc(&lport->scsi_session_add_failed);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[err]Port(0x%x_0x%x) RPort(0x%x_0x%p) wwpn(0x%llx) scsi_id(0x%x) is invalid",
+ lport->port_id, lport->nport_id,
+ unf_rport->nport_id, unf_rport,
+ unf_rport->port_name, unf_rport->scsi_id);
+
+ /* NOTE: return */
+ return;
+ }
+
+ /* 2. Add rport to scsi */
+ scsi_id = unf_rport->scsi_id;
+ rport_ids.node_name = unf_rport->node_name;
+ rport_ids.port_name = unf_rport->port_name;
+ rport_ids.port_id = unf_rport->nport_id;
+ rport_ids.roles = FC_RPORT_ROLE_UNKNOWN;
+ rport = fc_remote_port_add(lport->host_info.host, 0, &rport_ids);
+ if (unlikely(!rport)) {
+ atomic_inc(&lport->scsi_session_add_failed);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x_0x%x) RPort(0x%x_0x%p) wwpn(0x%llx) report link up to scsi failed",
+ lport->port_id, lport->nport_id, unf_rport->nport_id, unf_rport,
+ unf_rport->port_name);
+
+ unf_free_scsi_id(lport, scsi_id);
+ return;
+ }
+
+ /* 3. Change rport role */
+ *((u32 *)rport->dd_data) = scsi_id; /* save local SCSI_ID to scsi rport */
+ rport->supported_classes = FC_COS_CLASS3;
+ rport_ids.roles |= FC_PORT_ROLE_FCP_TARGET;
+ rport->dev_loss_tmo = (u32)unf_get_link_lose_tmo(lport); /* default 30s */
+ fc_remote_port_rolechg(rport, rport_ids.roles);
+
+ /* 4. Save scsi rport info to local R_Port */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ unf_rport->rport = rport;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ rport_scsi_table = &lport->rport_scsi_table;
+ spin_lock_irqsave(&rport_scsi_table->scsi_image_table_lock, flags);
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[scsi_id];
+ wwn_rport_info->target_id = rport->scsi_target_id;
+ wwn_rport_info->rport = unf_rport;
+ atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
+ spin_unlock_irqrestore(&rport_scsi_table->scsi_image_table_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x_0x%x) RPort(0x%x) wwpn(0x%llx) scsi_id(0x%x) link up to scsi succeed",
+ lport->port_id, lport->nport_id, unf_rport->nport_id,
+ unf_rport->port_name, scsi_id);
+
+ atomic_inc(&lport->scsi_session_add_success);
+}
+
+static void unf_rport_open_timeout(struct work_struct *work)
+{
+ struct unf_rport *rport = NULL;
+ struct unf_lport *lport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ rport = container_of(work, struct unf_rport, open_work.work);
+ if (!rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]RPort is NULL");
+
+ return;
+ }
+
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+ lport = rport->lport;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) RPort(0x%x) open work timeout with state(0x%x)",
+ lport->port_id, lport->nport_id, rport->nport_id,
+ rport->rp_state);
+
+ /* NOTE: R_Port state check */
+ if (rport->rp_state != UNF_RPORT_ST_PRLI_WAIT) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ /* Dec ref_cnt for timer case */
+ unf_rport_ref_dec(rport);
+ return;
+ }
+
+ /* Report R_Port Link Down event */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ unf_rport_enter_closing(rport);
+ /* Dec ref_cnt for timer case */
+ unf_rport_ref_dec(rport);
+}
+
+static u32 unf_alloc_index_for_rport(struct unf_lport *lport, struct unf_rport *rport)
+{
+ ulong rport_flag = 0;
+ ulong pool_flag = 0;
+ u32 alloc_indx = 0;
+ u32 max_rport = 0;
+ struct unf_rport_pool *rport_pool = NULL;
+ spinlock_t *rport_scsi_tb_lock = NULL;
+
+ rport_pool = &lport->rport_pool;
+ rport_scsi_tb_lock = &rport_pool->rport_free_pool_lock;
+ max_rport = lport->low_level_func.lport_cfg_items.max_login;
+
+ max_rport = max_rport > SPFC_DEFAULT_RPORT_INDEX ? SPFC_DEFAULT_RPORT_INDEX : max_rport;
+
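+	/* Scan the RPI bitmap for the first free index below the configured login limit */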
+ spin_lock_irqsave(rport_scsi_tb_lock, pool_flag);
+ while (alloc_indx < max_rport) {
+ if (!test_bit((int)alloc_indx, rport_pool->rpi_bitmap)) {
+ /* Case for SPFC */
+ if (unlikely(atomic_read(&lport->lport_no_operate_flag) == UNF_LPORT_NOP)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is within NOP", lport->port_id);
+
+ spin_unlock_irqrestore(rport_scsi_tb_lock, pool_flag);
+ return UNF_RETURN_ERROR;
+ }
+
+ spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
+ rport->rport_index = alloc_indx;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) RPort(0x%x) alloc index(0x%x) succeed",
+				     lport->port_id, rport->nport_id, alloc_indx);
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+
+ /* Set (index) bit */
+ set_bit((int)alloc_indx, rport_pool->rpi_bitmap);
+
+ /* Break here */
+ break;
+ }
+ alloc_indx++;
+ }
+ spin_unlock_irqrestore(rport_scsi_tb_lock, pool_flag);
+
+ if (max_rport == alloc_indx)
+ return UNF_RETURN_ERROR;
+ return RETURN_OK;
+}
+
+static void unf_check_rport_pool_status(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport_pool *rport_pool = NULL;
+ ulong flags = 0;
+ u32 max_rport = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ rport_pool = &unf_lport->rport_pool;
+
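+	/* Signal the waiter on rport_pool_completion once every R_Port is back in the free pool */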
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flags);
+ max_rport = unf_lport->low_level_func.lport_cfg_items.max_login;
+ if (rport_pool->rport_pool_completion &&
+ rport_pool->rport_pool_count == max_rport) {
+ complete(rport_pool->rport_pool_completion);
+ }
+
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flags);
+}
+
+static void unf_init_rport_sq_num(struct unf_rport *rport, struct unf_lport *lport)
+{
+ u32 session_order;
+ u32 ssq_average_session_num;
+
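+	/* Each session owns UNF_SQ_NUM_PER_SESSION consecutive SQs: hash the R_Port index into a session slot and derive its base SQ number */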
+ ssq_average_session_num = (lport->max_ssq_num - 1) / UNF_SQ_NUM_PER_SESSION;
+ session_order = (rport->rport_index) % ssq_average_session_num;
+ rport->sqn_base = (session_order * UNF_SQ_NUM_PER_SESSION);
+}
+
+void unf_init_rport_params(struct unf_rport *rport, struct unf_lport *lport)
+{
+ struct unf_rport *unf_rport = rport;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(unf_rport);
+ FC_CHECK_RETURN_VOID(lport);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_set_rport_state(unf_rport, UNF_RPORT_ST_INIT);
+ unf_rport->unf_rport_callback = unf_rport_callback;
+ unf_rport->lport = lport;
+ unf_rport->fcp_conf_needed = false;
+ unf_rport->tape_support_needed = false;
+ unf_rport->max_retries = UNF_MAX_RETRY_COUNT;
+ unf_rport->logo_retries = 0;
+ unf_rport->retries = 0;
+ unf_rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ unf_rport->last_lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+ unf_rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+ unf_rport->last_lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
+ unf_rport->lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
+ unf_rport->node_name = 0;
+ unf_rport->port_name = INVALID_WWPN;
+ unf_rport->disc_done = 0;
+ unf_rport->scsi_id = INVALID_VALUE32;
+ unf_rport->data_thread = NULL;
+ sema_init(&unf_rport->task_sema, 0);
+ atomic_set(&unf_rport->rport_ref_cnt, 0);
+ atomic_set(&unf_rport->pending_io_cnt, 0);
+ unf_rport->rport_alloc_jifs = jiffies;
+
+ unf_rport->ed_tov = UNF_DEFAULT_EDTOV + 500;
+ unf_rport->ra_tov = UNF_DEFAULT_RATOV;
+
+ INIT_WORK(&unf_rport->closing_work, unf_rport_closing_timeout);
+ INIT_WORK(&unf_rport->start_work, unf_rport_linkup_to_scsi);
+ INIT_DELAYED_WORK(&unf_rport->recovery_work, unf_rport_recovery_timeout);
+ INIT_DELAYED_WORK(&unf_rport->open_work, unf_rport_open_timeout);
+
+ atomic_inc(&unf_rport->rport_ref_cnt);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+}
+
+static u32 unf_alloc_ll_rport_resource(struct unf_lport *lport,
+ struct unf_rport *rport, u32 nport_id)
+{
+ u32 ret = RETURN_OK;
+ struct unf_port_info rport_info = {0};
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_qos_info *qos_info = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+
+ unf_lport = lport->root_lport;
+
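+	/* With a low level allocation hook present, resolve the per-N_Port_ID QoS level and map it to a CS_CTL value before allocating the session resource */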
+ if (unf_lport->low_level_func.service_op.unf_alloc_rport_res) {
+ spin_lock_irqsave(&lport->qos_mgr_lock, flag);
+ rport_info.qos_level = lport->qos_level;
+ list_for_each_safe(node, next_node, &lport->list_qos_head) {
+ qos_info = (struct unf_qos_info *)list_entry(node, struct unf_qos_info,
+ entry_qos_info);
+
+ if (qos_info && qos_info->nport_id == nport_id) {
+ rport_info.qos_level = qos_info->qos_level;
+ break;
+ }
+ }
+
+ spin_unlock_irqrestore(&lport->qos_mgr_lock, flag);
+
+ unf_init_rport_sq_num(rport, unf_lport);
+
+ rport->qos_level = rport_info.qos_level;
+ rport_info.nport_id = nport_id;
+ rport_info.rport_index = rport->rport_index;
+ rport_info.local_nport_id = lport->nport_id;
+ rport_info.port_name = 0;
+ rport_info.cs_ctrl = UNF_CSCTRL_INVALID;
+ rport_info.sqn_base = rport->sqn_base;
+
+ if (unf_lport->priority == UNF_PRIORITY_ENABLE) {
+ if (rport_info.qos_level == UNF_QOS_LEVEL_DEFAULT)
+ rport_info.cs_ctrl = UNF_CSCTRL_LOW;
+ else if (rport_info.qos_level == UNF_QOS_LEVEL_MIDDLE)
+ rport_info.cs_ctrl = UNF_CSCTRL_MIDDLE;
+ else if (rport_info.qos_level == UNF_QOS_LEVEL_HIGH)
+ rport_info.cs_ctrl = UNF_CSCTRL_HIGH;
+ }
+
+ ret = unf_lport->low_level_func.service_op.unf_alloc_rport_res(unf_lport->fc_port,
+ &rport_info);
+ } else {
+ ret = RETURN_OK;
+ }
+
+ return ret;
+}
+
+static void *unf_add_rport_to_busy_list(struct unf_lport *lport,
+ struct unf_rport *new_rport,
+ u32 nport_id)
+{
+ struct unf_rport_pool *rport_pool = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_rport *unf_new_rport = new_rport;
+ struct unf_rport *old_rport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ spinlock_t *rport_free_lock = NULL;
+ spinlock_t *rport_busy_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(new_rport, NULL);
+
+ unf_lport = lport->root_lport;
+ disc = &lport->disc;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+ rport_pool = &unf_lport->rport_pool;
+ rport_free_lock = &rport_pool->rport_free_pool_lock;
+ rport_busy_lock = &disc->rport_busy_pool_lock;
+
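+	/* Reuse an existing busy R_Port with the same N_Port_ID; otherwise allocate low level resources and commit the new R_Port to the busy list */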
+ spin_lock_irqsave(rport_busy_lock, flag);
+ list_for_each_safe(node, next_node, &disc->list_busy_rports) {
+ /* According to N_Port_ID */
+ old_rport = list_entry(node, struct unf_rport, entry_rport);
+ if (old_rport->nport_id == nport_id)
+ break;
+ old_rport = NULL;
+ }
+
+ if (old_rport) {
+ spin_unlock_irqrestore(rport_busy_lock, flag);
+
+ /* Use old R_Port & Add new R_Port back to R_Port Pool */
+ spin_lock_irqsave(rport_free_lock, flag);
+ clear_bit((int)unf_new_rport->rport_index, rport_pool->rpi_bitmap);
+ list_add_tail(&unf_new_rport->entry_rport, &rport_pool->list_rports_pool);
+ rport_pool->rport_pool_count++;
+ spin_unlock_irqrestore(rport_free_lock, flag);
+
+ unf_check_rport_pool_status(unf_lport);
+ return (void *)old_rport;
+ }
+ spin_unlock_irqrestore(rport_busy_lock, flag);
+ if (nport_id != UNF_FC_FID_FLOGI) {
+ if (unf_alloc_ll_rport_resource(lport, unf_new_rport, nport_id) != RETURN_OK) {
+ /* Add new R_Port back to R_Port Pool */
+ spin_lock_irqsave(rport_free_lock, flag);
+ clear_bit((int)unf_new_rport->rport_index, rport_pool->rpi_bitmap);
+ list_add_tail(&unf_new_rport->entry_rport, &rport_pool->list_rports_pool);
+ rport_pool->rport_pool_count++;
+ spin_unlock_irqrestore(rport_free_lock, flag);
+ unf_check_rport_pool_status(unf_lport);
+
+ return NULL;
+ }
+ }
+
+ spin_lock_irqsave(rport_busy_lock, flag);
+ /* Add new R_Port to busy list */
+ list_add_tail(&unf_new_rport->entry_rport, &disc->list_busy_rports);
+ unf_new_rport->nport_id = nport_id;
+ unf_new_rport->local_nport_id = lport->nport_id;
+ spin_unlock_irqrestore(rport_busy_lock, flag);
+ unf_init_rport_params(unf_new_rport, lport);
+
+ return (void *)unf_new_rport;
+}
+
+void *unf_rport_get_free_and_init(void *lport, u32 port_type, u32 nport_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport_pool *rport_pool = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_disc *vport_disc = NULL;
+ struct unf_rport *rport = NULL;
+ struct list_head *list_head = NULL;
+ ulong flag = 0;
+ struct unf_disc_rport *disc_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
+ /* Check L_Port state: NOP */
+ if (unlikely(atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP))
+ return NULL;
+
+ rport_pool = &unf_lport->rport_pool;
+ disc = &unf_lport->disc;
+
+ /* 1. UNF_PORT_TYPE_DISC: Get from disc_rport_pool */
+ if (port_type == UNF_PORT_TYPE_DISC) {
+ vport_disc = &((struct unf_lport *)lport)->disc;
+ /* NOTE: list_disc_rports_pool used with list_disc_rports_busy */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ if (!list_empty(&disc->disc_rport_mgr.list_disc_rports_pool)) {
+ /* Get & delete from Disc R_Port Pool & Add it to Busy list */
+ list_head = UNF_OS_LIST_NEXT(&disc->disc_rport_mgr.list_disc_rports_pool);
+ list_del_init(list_head);
+ disc_rport = list_entry(list_head, struct unf_disc_rport, entry_rport);
+ disc_rport->nport_id = nport_id;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Add to list_disc_rports_busy */
+ spin_lock_irqsave(&vport_disc->rport_busy_pool_lock, flag);
+ list_add_tail(list_head, &vport_disc->disc_rport_mgr.list_disc_rports_busy);
+ spin_unlock_irqrestore(&vport_disc->rport_busy_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x_0x%x) add nportid:0x%x to rportbusy list",
+ unf_lport->port_id, unf_lport->nport_id,
+ disc_rport->nport_id);
+ } else {
+ disc_rport = NULL;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ }
+
+ /* NOTE: return */
+ return disc_rport;
+ }
+
+ /* 2. UNF_PORT_TYPE_FC (rport_pool): Get from list_rports_pool */
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ if (!list_empty(&rport_pool->list_rports_pool)) {
+ /* Get & delete from R_Port free Pool */
+ list_head = UNF_OS_LIST_NEXT(&rport_pool->list_rports_pool);
+ list_del_init(list_head);
+ rport_pool->rport_pool_count--;
+ rport = list_entry(list_head, struct unf_rport, entry_rport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) RPort pool is empty",
+ unf_lport->port_id, unf_lport->nport_id);
+
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+
+ return NULL;
+ }
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+
+ /* 3. Alloc (& set bit) R_Port index */
+ if (unf_alloc_index_for_rport(unf_lport, rport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate index for new RPort failed",
+ unf_lport->nport_id);
+
+ /* Alloc failed: Add R_Port back to R_Port Pool */
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ list_add_tail(&rport->entry_rport, &rport_pool->list_rports_pool);
+ rport_pool->rport_pool_count++;
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+ unf_check_rport_pool_status(unf_lport);
+ return NULL;
+ }
+
+ /* 4. Add R_Port to busy list */
+ rport = unf_add_rport_to_busy_list(lport, rport, nport_id);
+
+ return (void *)rport;
+}
+
+u32 unf_release_rport_res(struct unf_lport *lport, struct unf_rport *rport)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_port_info rport_info;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ memset(&rport_info, 0, sizeof(struct unf_port_info));
+
+ rport_info.rport_index = rport->rport_index;
+ rport_info.nport_id = rport->nport_id;
+ rport_info.port_name = rport->port_name;
+ rport_info.sqn_base = rport->sqn_base;
+
+ /* 2. release R_Port(parent context/Session) resource */
+ if (!lport->low_level_func.service_op.unf_release_rport_res) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) release rport resource function can't be NULL",
+ lport->port_id);
+
+ return ret;
+ }
+
+ ret = lport->low_level_func.service_op.unf_release_rport_res(lport->fc_port, &rport_info);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) rport_index(0x%x, %p) send release session CMND failed",
+ lport->port_id, rport_info.rport_index, rport);
+ }
+
+ return ret;
+}
+
+static void unf_reset_rport_attribute(struct unf_rport *rport)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->unf_rport_callback = NULL;
+ rport->lport = NULL;
+ rport->node_name = INVALID_VALUE64;
+ rport->port_name = INVALID_WWPN;
+ rport->nport_id = INVALID_VALUE32;
+ rport->local_nport_id = INVALID_VALUE32;
+ rport->max_frame_size = UNF_MAX_FRAME_SIZE;
+ rport->ed_tov = UNF_DEFAULT_EDTOV;
+ rport->ra_tov = UNF_DEFAULT_RATOV;
+ rport->rport_index = INVALID_VALUE32;
+ rport->scsi_id = INVALID_VALUE32;
+ rport->rport_alloc_jifs = INVALID_VALUE64;
+
+ /* ini or tgt */
+ rport->options = 0;
+
+ /* fcp conf */
+ rport->fcp_conf_needed = false;
+
+ /* special req retry times */
+ rport->retries = 0;
+ rport->logo_retries = 0;
+
+ /* special req retry times */
+ rport->max_retries = UNF_MAX_RETRY_COUNT;
+
+ /* for target mode */
+ rport->session = NULL;
+ rport->last_lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+ rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+ rport->rp_state = UNF_RPORT_ST_INIT;
+ rport->last_lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
+ rport->lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
+ rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ rport->disc_done = 0;
+ rport->sqn_base = 0;
+
+ /* for scsi */
+ rport->data_thread = NULL;
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+}
+
+u32 unf_rport_remove(void *rport)
+{
+ struct unf_lport *lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rport_pool *rport_pool = NULL;
+ ulong flag = 0;
+ u32 rport_index = 0;
+ u32 nport_id = 0;
+
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ unf_rport = (struct unf_rport *)rport;
+ lport = unf_rport->lport;
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ rport_pool = &((struct unf_lport *)lport->root_lport)->rport_pool;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Remove RPort(0x%p) with remote_nport_id(0x%x) local_nport_id(0x%x)",
+ unf_rport, unf_rport->nport_id, unf_rport->local_nport_id);
+
+ /* 1. Terminate open exchange before rport remove: set ABORT tag */
+ unf_cm_xchg_mgr_abort_io_by_id(lport, unf_rport, unf_rport->nport_id, lport->nport_id, 0);
+
+ /* 2. Abort sfp exchange before rport remove */
+ unf_cm_xchg_mgr_abort_sfs_by_id(lport, unf_rport, unf_rport->nport_id, lport->nport_id);
+
+ /* 3. Release R_Port resource: session reset/delete */
+ if (likely(unf_rport->nport_id != UNF_FC_FID_FLOGI))
+ (void)unf_release_rport_res(lport, unf_rport);
+
+ nport_id = unf_rport->nport_id;
+
+ /* 4.1 Delete R_Port from disc destroy/delete list */
+ spin_lock_irqsave(&lport->disc.rport_busy_pool_lock, flag);
+ list_del_init(&unf_rport->entry_rport);
+ spin_unlock_irqrestore(&lport->disc.rport_busy_pool_lock, flag);
+
+ rport_index = unf_rport->rport_index;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) release RPort(0x%x_%p) with index(0x%x)",
+ lport->port_id, unf_rport->nport_id, unf_rport,
+ unf_rport->rport_index);
+
+ unf_reset_rport_attribute(unf_rport);
+
+ /* 4.2 Add rport to --->>> rport_pool (free pool) & clear bitmap */
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ if (unlikely(nport_id == UNF_FC_FID_FLOGI)) {
+ if (test_bit((int)rport_index, rport_pool->rpi_bitmap))
+ clear_bit((int)rport_index, rport_pool->rpi_bitmap);
+ }
+
+ list_add_tail(&unf_rport->entry_rport, &rport_pool->list_rports_pool);
+ rport_pool->rport_pool_count++;
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+
+ unf_check_rport_pool_status((struct unf_lport *)lport->root_lport);
+ up(&unf_rport->task_sema);
+
+ return RETURN_OK;
+}
+
+u32 unf_rport_ref_inc(struct unf_rport *rport)
+{
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ if (atomic_read(&rport->rport_ref_cnt) <= 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Rport(0x%x) reference count is wrong %d",
+ rport->nport_id,
+ atomic_read(&rport->rport_ref_cnt));
+ return UNF_RETURN_ERROR;
+ }
+
+ atomic_inc(&rport->rport_ref_cnt);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Rport(0x%x) reference count is %d", rport->nport_id,
+ atomic_read(&rport->rport_ref_cnt));
+
+ return RETURN_OK;
+}
+
+void unf_rport_ref_dec(struct unf_rport *rport)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Rport(0x%x) reference count is %d", rport->nport_id,
+ atomic_read(&rport->rport_ref_cnt));
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ if (atomic_dec_and_test(&rport->rport_ref_cnt)) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ (void)unf_rport_remove(rport);
+ } else {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+}
+
+void unf_set_rport_state(struct unf_rport *rport,
+ enum unf_rport_login_state states)
+{
+ FC_CHECK_RETURN_VOID(rport);
+
+ if (rport->rp_state != states) {
+ /* Reset R_Port retry count */
+ rport->retries = 0;
+ }
+
+ rport->rp_state = states;
+}
+
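+/*
+ * R_Port login state machine: one transition helper per state,
+ * each returning the next state for the given event.
+ */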
+static enum unf_rport_login_state
+unf_rport_stat_init(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_LOGO:
+ next_state = UNF_RPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_RPORT_ENTER_PLOGI:
+ next_state = UNF_RPORT_ST_PLOGI_WAIT;
+ break;
+
+ case UNF_EVENT_RPORT_LINK_DOWN:
+ next_state = UNF_RPORT_ST_CLOSING;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_rport_login_state unf_rport_stat_plogi_wait(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_ENTER_PRLI:
+ next_state = UNF_RPORT_ST_PRLI_WAIT;
+ break;
+
+ case UNF_EVENT_RPORT_LINK_DOWN:
+ next_state = UNF_RPORT_ST_CLOSING;
+ break;
+
+ case UNF_EVENT_RPORT_LOGO:
+ next_state = UNF_RPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_RPORT_RECOVERY:
+ next_state = UNF_RPORT_ST_READY;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_rport_login_state unf_rport_stat_prli_wait(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_READY:
+ next_state = UNF_RPORT_ST_READY;
+ break;
+
+ case UNF_EVENT_RPORT_LOGO:
+ next_state = UNF_RPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_RPORT_LINK_DOWN:
+ next_state = UNF_RPORT_ST_CLOSING;
+ break;
+
+ case UNF_EVENT_RPORT_RECOVERY:
+ next_state = UNF_RPORT_ST_READY;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_rport_login_state unf_rport_stat_ready(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_LOGO:
+ next_state = UNF_RPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_RPORT_LINK_DOWN:
+ next_state = UNF_RPORT_ST_CLOSING;
+ break;
+
+ case UNF_EVENT_RPORT_ENTER_PLOGI:
+ next_state = UNF_RPORT_ST_PLOGI_WAIT;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_rport_login_state unf_rport_stat_closing(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_CLS_TIMEOUT:
+ next_state = UNF_RPORT_ST_DELETE;
+ break;
+
+ case UNF_EVENT_RPORT_RELOGIN:
+ next_state = UNF_RPORT_ST_INIT;
+ break;
+
+ case UNF_EVENT_RPORT_RECOVERY:
+ next_state = UNF_RPORT_ST_READY;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_rport_login_state unf_rport_stat_logo(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_NORMAL_ENTER:
+ next_state = UNF_RPORT_ST_CLOSING;
+ break;
+
+ case UNF_EVENT_RPORT_RECOVERY:
+ next_state = UNF_RPORT_ST_READY;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+void unf_rport_state_ma(struct unf_rport *rport, enum unf_rport_event event)
+{
+ enum unf_rport_login_state old_state = UNF_RPORT_ST_INIT;
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ FC_CHECK_RETURN_VOID(rport);
+
+ old_state = rport->rp_state;
+
+ switch (rport->rp_state) {
+ case UNF_RPORT_ST_INIT:
+ next_state = unf_rport_stat_init(old_state, event);
+ break;
+ case UNF_RPORT_ST_PLOGI_WAIT:
+ next_state = unf_rport_stat_plogi_wait(old_state, event);
+ break;
+ case UNF_RPORT_ST_PRLI_WAIT:
+ next_state = unf_rport_stat_prli_wait(old_state, event);
+ break;
+ case UNF_RPORT_ST_LOGO:
+ next_state = unf_rport_stat_logo(old_state, event);
+ break;
+ case UNF_RPORT_ST_CLOSING:
+ next_state = unf_rport_stat_closing(old_state, event);
+ break;
+ case UNF_RPORT_ST_READY:
+ next_state = unf_rport_stat_ready(old_state, event);
+ break;
+ case UNF_RPORT_ST_DELETE:
+ default:
+ next_state = UNF_RPORT_ST_INIT;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "[info]RPort(0x%x) hold state(0x%x)",
+ rport->nport_id, rport->rp_state);
+ break;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MINOR,
+ "[info]RPort(0x%x) with oldstate(0x%x) event(0x%x) nextstate(0x%x)",
+ rport->nport_id, old_state, event, next_state);
+
+ unf_set_rport_state(rport, next_state);
+}
+
+void unf_clean_linkdown_rport(struct unf_lport *lport)
+{
+ /* for L_Port's R_Port(s) */
+ struct unf_disc *disc = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_rport *rport = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong disc_lock_flag = 0;
+ ulong rport_lock_flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ disc = &lport->disc;
+
+ /* for each busy R_Port */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_lock_flag);
+ list_for_each_safe(node, next_node, &disc->list_busy_rports) {
+ rport = list_entry(node, struct unf_rport, entry_rport);
+
+		/* 1. Prevent repeated processing: already in Closing */
+ spin_lock_irqsave(&rport->rport_state_lock, rport_lock_flag);
+ if (rport->rp_state == UNF_RPORT_ST_CLOSING) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
+ continue;
+ }
+
+ /* 2. Increase ref_cnt to protect R_Port */
+ if (unf_rport_ref_inc(rport) != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
+ continue;
+ }
+
+		/* 3. Update R_Port state: Link Down Event --->>> closing state */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
+
+ /* 4. Put R_Port from busy to destroy list */
+ list_del_init(&rport->entry_rport);
+ list_add_tail(&rport->entry_rport, &disc->list_destroy_rports);
+
+ unf_lport = rport->lport;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
+
+ /* 5. Schedule Closing work (Enqueuing workqueue) */
+ unf_schedule_closing_work(unf_lport, rport);
+
+ /* 6. decrease R_Port ref_cnt (company with 2) */
+ unf_rport_ref_dec(rport);
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_lock_flag);
+}
+
+void unf_rport_enter_closing(struct unf_rport *rport)
+{
+	/*
+	 * Called by:
+	 * 1. the RSCN handler
+	 * 2. the LOGOUT handler
+	 *
+	 * Triggered from:
+	 * 1. R_Port Link Down
+	 * 2. R_Port entering LOGO
+	 */
+ ulong rport_lock_flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *lport = NULL;
+ struct unf_disc *disc = NULL;
+
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* 1. Increase ref_cnt to protect R_Port */
+ spin_lock_irqsave(&rport->rport_state_lock, rport_lock_flag);
+ ret = unf_rport_ref_inc(rport);
+ if (ret != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) is removing and no need process",
+ rport->nport_id, rport);
+
+ return;
+ }
+
+ /* NOTE: R_Port state has been set(with closing) */
+
+ lport = rport->lport;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
+
+ /* 2. Put R_Port from busy to destroy list */
+ disc = &lport->disc;
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, rport_lock_flag);
+ list_del_init(&rport->entry_rport);
+ list_add_tail(&rport->entry_rport, &disc->list_destroy_rports);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, rport_lock_flag);
+
+ /* 3. Schedule Closing work (Enqueuing workqueue) */
+ unf_schedule_closing_work(lport, rport);
+
+ /* 4. dec R_Port ref_cnt */
+ unf_rport_ref_dec(rport);
+}
+
+void unf_rport_error_recovery(struct unf_rport *rport)
+{
+ ulong delay = 0;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+
+ ret = unf_rport_ref_inc(rport);
+ if (ret != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) is removing and no need process",
+ rport->nport_id, rport);
+ return;
+ }
+
+ /* Check R_Port state */
+ if (rport->rp_state == UNF_RPORT_ST_CLOSING ||
+ rport->rp_state == UNF_RPORT_ST_DELETE) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]RPort(0x%x_0x%p) offline and no need process",
+ rport->nport_id, rport);
+
+ unf_rport_ref_dec(rport);
+ return;
+ }
+
+ /* Check repeatability with recovery work */
+ if (delayed_work_pending(&rport->recovery_work)) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]RPort(0x%x_0x%p) recovery work is running and no need process",
+ rport->nport_id, rport);
+
+ unf_rport_ref_dec(rport);
+ return;
+ }
+
+ /* NOTE: Re-login or Logout directly (recovery work) */
+ if (rport->retries < rport->max_retries) {
+ rport->retries++;
+ delay = UNF_DEFAULT_EDTOV / 4;
+
+ if (queue_delayed_work(unf_wq, &rport->recovery_work,
+ (ulong)msecs_to_jiffies((u32)delay))) {
+ /* Inc ref_cnt: corresponding to this work timer */
+ (void)unf_rport_ref_inc(rport);
+ }
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) state(0x%x) retry login failed",
+ rport->nport_id, rport, rport->rp_state);
+
+ /* Update R_Port state: LOGO event --->>> ST_LOGO */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ unf_rport_enter_logo(rport->lport, rport);
+ }
+
+ unf_rport_ref_dec(rport);
+}
+
+static u32 unf_rport_reuse_only(struct unf_rport *rport)
+{
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ ret = unf_rport_ref_inc(rport);
+ if (ret != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* R_Port with delete state */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) is removing and no need process",
+ rport->nport_id, rport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* R_Port State check: delete */
+ if (rport->rp_state == UNF_RPORT_ST_DELETE ||
+ rport->rp_state == UNF_RPORT_ST_CLOSING) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) state(0x%x) is delete or closing no need process",
+ rport->nport_id, rport, rport->rp_state);
+
+ ret = UNF_RETURN_ERROR;
+ }
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ unf_rport_ref_dec(rport);
+
+ return ret;
+}
+
+static u32 unf_rport_reuse_recover(struct unf_rport *rport)
+{
+ ulong flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+ ret = unf_rport_ref_inc(rport);
+ if (ret != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ /* R_Port with delete state */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) is removing and no need process",
+ rport->nport_id, rport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* R_Port state check: delete */
+ if (rport->rp_state == UNF_RPORT_ST_DELETE ||
+ rport->rp_state == UNF_RPORT_ST_CLOSING) {
+ ret = UNF_RETURN_ERROR;
+ }
+
+ /* Update R_Port state: recovery --->>> ready */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_RECOVERY);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ unf_rport_ref_dec(rport);
+
+ return ret;
+}
+
+static u32 unf_rport_reuse_init(struct unf_rport *rport)
+{
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ ret = unf_rport_ref_inc(rport);
+ if (ret != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* R_Port with delete state */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) is removing and no need process",
+ rport->nport_id, rport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]RPort(0x%x)'s state is 0x%x with use_init flag",
+ rport->nport_id, rport->rp_state);
+
+ /* R_Port State check: delete */
+ if (rport->rp_state == UNF_RPORT_ST_DELETE ||
+ rport->rp_state == UNF_RPORT_ST_CLOSING) {
+ ret = UNF_RETURN_ERROR;
+ } else {
+ /* Update R_Port state: re-enter Init state */
+ unf_set_rport_state(rport, UNF_RPORT_ST_INIT);
+ }
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ unf_rport_ref_dec(rport);
+
+ return ret;
+}
+
+struct unf_rport *unf_get_rport_by_nport_id(struct unf_lport *lport,
+ u32 nport_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_rport *rport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ struct unf_rport *find_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ unf_lport = (struct unf_lport *)lport;
+ disc = &unf_lport->disc;
+
+ /* for each r_port from rport_busy_list: compare N_Port_ID */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ list_for_each_safe(node, next_node, &disc->list_busy_rports) {
+ rport = list_entry(node, struct unf_rport, entry_rport);
+ if (rport && rport->nport_id == nport_id) {
+ find_rport = rport;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ return find_rport;
+}
+
+struct unf_rport *unf_get_rport_by_wwn(struct unf_lport *lport, u64 wwpn)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_rport *rport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ struct unf_rport *find_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ unf_lport = (struct unf_lport *)lport;
+ disc = &unf_lport->disc;
+
+ /* for each r_port from busy_list: compare wwpn(port name) */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ list_for_each_safe(node, next_node, &disc->list_busy_rports) {
+ rport = list_entry(node, struct unf_rport, entry_rport);
+ if (rport && rport->port_name == wwpn) {
+ find_rport = rport;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ return find_rport;
+}
+
+struct unf_rport *unf_get_safe_rport(struct unf_lport *lport,
+ struct unf_rport *rport,
+ enum unf_rport_reuse_flag reuse_flag,
+ u32 nport_id)
+{
+ /*
+	 * Reuse flag used by each caller (new add or plug):
+	 *
+ * retry_flogi --->>> reuse_only
+ * name_server_register --->>> reuse_only
+ * SNS_plogi --->>> reuse_only
+ * enter_flogi --->>> reuse_only
+ * logout --->>> reuse_only
+ * flogi_handler --->>> reuse_only
+ * plogi_handler --->>> reuse_only
+ * adisc_handler --->>> reuse_recovery
+ * logout_handler --->>> reuse_init
+ * prlo_handler --->>> reuse_init
+ * login_with_loop --->>> reuse_only
+ * gffid_callback --->>> reuse_only
+ * delay_plogi --->>> reuse_only
+ * gffid_rjt --->>> reuse_only
+ * gffid_rsp_unknown --->>> reuse_only
+ * gpnid_acc --->>> reuse_init
+ * fdisc_callback --->>> reuse_only
+ * flogi_acc --->>> reuse_only
+ * plogi_acc --->>> reuse_only
+ * logo_callback --->>> reuse_init
+ * rffid_callback --->>> reuse_only
+ */
+#define UNF_AVOID_LINK_FLASH_TIME 3000
+
+ struct unf_rport *unf_rport = rport;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* 1. Alloc New R_Port or Update R_Port Property */
+ if (!unf_rport) {
+ /* If NULL, get/Alloc new node (R_Port from R_Port pool)
+ * directly
+ */
+ unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC, nport_id);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO,
+ "[info]Port(0x%x) get exist RPort(0x%x) with state(0x%x) and reuse_flag(0x%x)",
+ lport->port_id, unf_rport->nport_id,
+ unf_rport->rp_state, reuse_flag);
+
+ switch (reuse_flag) {
+ case UNF_RPORT_REUSE_ONLY:
+ ret = unf_rport_reuse_only(unf_rport);
+ if (ret != RETURN_OK) {
+ /* R_Port within delete list: need get new */
+ unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC,
+ nport_id);
+ }
+ break;
+
+ case UNF_RPORT_REUSE_INIT:
+ ret = unf_rport_reuse_init(unf_rport);
+ if (ret != RETURN_OK) {
+ /* R_Port within delete list: need get new */
+ unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC,
+ nport_id);
+ }
+ break;
+
+ case UNF_RPORT_REUSE_RECOVER:
+ ret = unf_rport_reuse_recover(unf_rport);
+ if (ret != RETURN_OK) {
+ /* R_Port within delete list,
+ * NOTE: do nothing
+ */
+ unf_rport = NULL;
+ }
+ break;
+
+ default:
+ break;
+ }
+	} /* end else: R_Port != NULL */
+
+ return unf_rport;
+}
+
+u32 unf_get_port_feature(u64 wwpn)
+{
+ struct unf_rport_feature_recard *port_fea = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+ struct list_head list_temp_node;
+ struct list_head *list_busy_head = NULL;
+ struct list_head *list_free_head = NULL;
+ spinlock_t *feature_lock = NULL;
+
+ list_busy_head = &port_feature_pool->list_busy_head;
+ list_free_head = &port_feature_pool->list_free_head;
+ feature_lock = &port_feature_pool->port_fea_pool_lock;
+ spin_lock_irqsave(feature_lock, flags);
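+	/* Look up the WWPN in the busy list first, then in the free list; a hit is moved to the head of the busy list */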
+ list_for_each_safe(node, next_node, list_busy_head) {
+ port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
+
+ if (port_fea->wwpn == wwpn) {
+ list_del(&port_fea->entry_feature);
+ list_add(&port_fea->entry_feature, list_busy_head);
+ spin_unlock_irqrestore(feature_lock, flags);
+
+ return port_fea->port_feature;
+ }
+ }
+
+ list_for_each_safe(node, next_node, list_free_head) {
+ port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
+
+ if (port_fea->wwpn == wwpn) {
+ list_del(&port_fea->entry_feature);
+ list_add(&port_fea->entry_feature, list_busy_head);
+ spin_unlock_irqrestore(feature_lock, flags);
+
+ return port_fea->port_feature;
+ }
+ }
+
+ /* can't find wwpn */
+ if (list_empty(list_free_head)) {
+		/* free list is empty: swap the busy and free lists so entries can be recycled */
+ list_temp_node = port_feature_pool->list_free_head;
+ port_feature_pool->list_free_head = port_feature_pool->list_busy_head;
+ port_feature_pool->list_busy_head = list_temp_node;
+ }
+
+ port_fea = list_entry(UNF_OS_LIST_PREV(list_free_head),
+ struct unf_rport_feature_recard,
+ entry_feature);
+ list_del(&port_fea->entry_feature);
+ list_add(&port_fea->entry_feature, list_busy_head);
+
+ port_fea->wwpn = wwpn;
+ port_fea->port_feature = UNF_PORT_MODE_UNKNOWN;
+
+ spin_unlock_irqrestore(feature_lock, flags);
+ return UNF_PORT_MODE_UNKNOWN;
+}
+
+void unf_update_port_feature(u64 wwpn, u32 port_feature)
+{
+ struct unf_rport_feature_recard *port_fea = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct list_head *busy_head = NULL;
+ struct list_head *free_head = NULL;
+ ulong flags = 0;
+ spinlock_t *feature_lock = NULL;
+
+ feature_lock = &port_feature_pool->port_fea_pool_lock;
+ busy_head = &port_feature_pool->list_busy_head;
+ free_head = &port_feature_pool->list_free_head;
+
+ spin_lock_irqsave(feature_lock, flags);
+ list_for_each_safe(node, next_node, busy_head) {
+ port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
+
+ if (port_fea->wwpn == wwpn) {
+ port_fea->port_feature = port_feature;
+ list_del(&port_fea->entry_feature);
+ list_add(&port_fea->entry_feature, busy_head);
+ spin_unlock_irqrestore(feature_lock, flags);
+
+ return;
+ }
+ }
+
+ list_for_each_safe(node, next_node, free_head) {
+ port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
+
+ if (port_fea->wwpn == wwpn) {
+ port_fea->port_feature = port_feature;
+ list_del(&port_fea->entry_feature);
+ list_add(&port_fea->entry_feature, busy_head);
+
+ spin_unlock_irqrestore(feature_lock, flags);
+
+ return;
+ }
+ }
+
+ spin_unlock_irqrestore(feature_lock, flags);
+}
diff --git a/drivers/scsi/spfc/common/unf_rport.h b/drivers/scsi/spfc/common/unf_rport.h
new file mode 100644
index 000000000000..a9d58cb29b8a
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_rport.h
@@ -0,0 +1,301 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_RPORT_H
+#define UNF_RPORT_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "unf_lport.h"
+
+extern struct unf_rport_feature_pool *port_feature_pool;
+
+#define UNF_MAX_SCSI_ID 2048
+#define UNF_LOSE_TMO 30
+#define UNF_RPORT_INVALID_INDEX 0xffff
+
+/* RSCN compare DISC list with local RPort macro */
+#define UNF_RPORT_NEED_PROCESS 0x1
+#define UNF_RPORT_ONLY_IN_DISC_PROCESS 0x2
+#define UNF_RPORT_ONLY_IN_LOCAL_PROCESS 0x3
+#define UNF_RPORT_IN_DISC_AND_LOCAL_PROCESS 0x4
+#define UNF_RPORT_NOT_NEED_PROCESS 0x5
+
+#define UNF_ECHO_SEND_MAX_TIMES 1
+
+/* csctrl level value */
+#define UNF_CSCTRL_LOW 0x81
+#define UNF_CSCTRL_MIDDLE 0x82
+#define UNF_CSCTRL_HIGH 0x83
+#define UNF_CSCTRL_INVALID 0x0
+
+enum unf_rport_login_state {
+ UNF_RPORT_ST_INIT = 0x1000, /* initialized */
+ UNF_RPORT_ST_PLOGI_WAIT, /* waiting for PLOGI completion */
+ UNF_RPORT_ST_PRLI_WAIT, /* waiting for PRLI completion */
+ UNF_RPORT_ST_READY, /* ready for use */
+ UNF_RPORT_ST_LOGO, /* port logout sent */
+ UNF_RPORT_ST_CLOSING, /* being closed */
+ UNF_RPORT_ST_DELETE, /* port being deleted */
+ UNF_RPORT_ST_BUTT
+};
+
+enum unf_rport_event {
+ UNF_EVENT_RPORT_NORMAL_ENTER = 0x9000,
+ UNF_EVENT_RPORT_ENTER_PLOGI = 0x9001,
+ UNF_EVENT_RPORT_ENTER_PRLI = 0x9002,
+ UNF_EVENT_RPORT_READY = 0x9003,
+ UNF_EVENT_RPORT_LOGO = 0x9004,
+ UNF_EVENT_RPORT_CLS_TIMEOUT = 0x9005,
+ UNF_EVENT_RPORT_RECOVERY = 0x9006,
+ UNF_EVENT_RPORT_RELOGIN = 0x9007,
+ UNF_EVENT_RPORT_LINK_DOWN = 0x9008,
+ UNF_EVENT_RPORT_BUTT
+};
+
+/* RPort local link state */
+enum unf_port_state {
+ UNF_PORT_STATE_LINKUP = 0x1001,
+ UNF_PORT_STATE_LINKDOWN = 0x1002
+};
+
+enum unf_rport_reuse_flag {
+ UNF_RPORT_REUSE_ONLY = 0x1001,
+ UNF_RPORT_REUSE_INIT = 0x1002,
+ UNF_RPORT_REUSE_RECOVER = 0x1003
+};
+
+struct unf_disc_rport {
+ /* RPort entry */
+ struct list_head entry_rport;
+
+ u32 nport_id; /* Remote port NPortID */
+ u32 disc_done; /* 1:Disc done */
+};
+
+struct unf_rport_feature_pool {
+ struct list_head list_busy_head;
+ struct list_head list_free_head;
+ void *port_feature_pool_addr;
+ spinlock_t port_fea_pool_lock;
+};
+
+struct unf_rport_feature_recard {
+ struct list_head entry_feature;
+ u64 wwpn;
+ u32 port_feature;
+ u32 reserved;
+};
+
+struct unf_os_thread_private_data {
+ struct list_head list;
+ spinlock_t spin_lock;
+ struct task_struct *thread;
+ unsigned int in_process;
+ unsigned int cpu_id;
+ atomic_t user_count;
+};
+
+/* Remote Port struct */
+struct unf_rport {
+ u32 max_frame_size;
+ u32 supported_classes;
+
+ /* Dynamic Attributes */
+ /* Remote Port loss timeout in seconds. */
+ u32 dev_loss_tmo;
+
+ u64 node_name;
+ u64 port_name;
+ u32 nport_id; /* Remote port NPortID */
+ u32 local_nport_id;
+
+ u32 roles;
+
+ /* Remote port local INI state */
+ enum unf_port_state lport_ini_state;
+ enum unf_port_state last_lport_ini_state;
+
+ /* Remote port local TGT state */
+ enum unf_port_state lport_tgt_state;
+ enum unf_port_state last_lport_tgt_state;
+
+	/* Port type: FC or FCoE */
+ u32 port_type;
+
+ /* RPort reference counter */
+ atomic_t rport_ref_cnt;
+
+ /* Pending IO count */
+ atomic_t pending_io_cnt;
+
+ /* RPort entry */
+ struct list_head entry_rport;
+
+	/* Port state; delay reclaim until rp_state is complete */
+ enum unf_rport_login_state rp_state;
+ u32 disc_done; /* 1:Disc done */
+
+ struct unf_lport *lport;
+ void *rport;
+ spinlock_t rport_state_lock;
+
+ /* Port attribution */
+ u32 ed_tov;
+ u32 ra_tov;
+ u32 options; /* ini or tgt */
+ u32 last_report_link_up_options;
+ u32 fcp_conf_needed; /* INI Rport send FCP CONF flag */
+ u32 tape_support_needed; /* INI tape support flag */
+ u32 retries; /* special req retry times */
+ u32 logo_retries; /* logo error recovery retry times */
+ u32 max_retries; /* special req retry times */
+ u64 rport_alloc_jifs; /* Rport alloc jiffies */
+
+ void *session;
+
+ /* binding with SCSI */
+ u32 scsi_id;
+
+ /* disc list compare flag */
+ u32 rscn_position;
+
+ u32 rport_index;
+
+ u32 sqn_base;
+ enum unf_rport_qos_level qos_level;
+
+ /* RPort timer,closing status */
+ struct work_struct closing_work;
+
+ /* RPort timer,rport linkup */
+ struct work_struct start_work;
+
+ /* RPort timer,recovery */
+ struct delayed_work recovery_work;
+
+ /* RPort timer,TGT mode,PRLI waiting */
+ struct delayed_work open_work;
+
+ struct semaphore task_sema;
+	/* Callback after rport ready/delete (with state ok/fail); create/free the TGT session here */
+	/* input: L_Port, R_Port, state: ready -> create session; delete -> free session */
+ void (*unf_rport_callback)(void *rport, void *lport, u32 result);
+
+ struct unf_os_thread_private_data *data_thread;
+};
+
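+/*
+ * Per-scsi_id DFX counters: each macro validates the counter index, the scsi_id and the
+ * wwn_rport_info table before bumping the corresponding atomic counter, and logs an error otherwise.
+ */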
+#define UNF_IO_RESULT_CNT(scsi_table, scsi_id, io_result) \
+ do { \
+ if (likely(((io_result) < UNF_MAX_IO_RETURN_VALUE) && \
+ ((scsi_id) < UNF_MAX_SCSI_ID) && \
+ ((scsi_table)->wwn_rport_info_table) && \
+ (((scsi_table)->wwn_rport_info_table[scsi_id].dfx_counter)))) {\
+ atomic64_inc(&((scsi_table)->wwn_rport_info_table[scsi_id] \
+ .dfx_counter->io_done_cnt[(io_result)])); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
+ UNF_ERR, \
+ "[err] io return value(0x%x) or " \
+ "scsi id(0x%x) is invalid", \
+ io_result, scsi_id); \
+ } \
+ } while (0)
+
+#define UNF_SCSI_CMD_CNT(scsi_table, scsi_id, io_type) \
+ do { \
+ if (likely(((io_type) < UNF_MAX_SCSI_CMD) && \
+ ((scsi_id) < UNF_MAX_SCSI_ID) && \
+ ((scsi_table)->wwn_rport_info_table) && \
+ (((scsi_table)->wwn_rport_info_table[scsi_id].dfx_counter)))) { \
+ atomic64_inc(&(((scsi_table)->wwn_rport_info_table[scsi_id]) \
+ .dfx_counter->scsi_cmd_cnt[io_type])); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
+ UNF_ERR, \
+ "[err] scsi_cmd(0x%x) or scsi id(0x%x) " \
+ "is invalid", \
+ io_type, scsi_id); \
+ } \
+ } while (0)
+
+#define UNF_SCSI_ERROR_HANDLE_CNT(scsi_table, scsi_id, io_type) \
+ do { \
+ if (likely(((io_type) < UNF_SCSI_ERROR_HANDLE_BUTT) && \
+ ((scsi_id) < UNF_MAX_SCSI_ID) && \
+ ((scsi_table)->wwn_rport_info_table) && \
+ (((scsi_table)->wwn_rport_info_table[scsi_id] \
+ .dfx_counter)))) { \
+ atomic_inc(&((scsi_table)->wwn_rport_info_table[scsi_id] \
+ .dfx_counter->error_handle[io_type])); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
+ UNF_ERR, \
+ "[err] scsi_cmd(0x%x) or scsi id(0x%x) " \
+ "is invalid", \
+ (io_type), (scsi_id)); \
+ } \
+ } while (0)
+
+#define UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_table, scsi_id, io_type) \
+	do { \
+		if (likely(((io_type) < UNF_SCSI_ERROR_HANDLE_BUTT) && \
+		    ((scsi_id) < UNF_MAX_SCSI_ID) && \
+		    ((scsi_table)->wwn_rport_info_table) && \
+		    (((scsi_table)->wwn_rport_info_table[scsi_id] \
+		    .dfx_counter)))) { \
+			atomic_inc(&((scsi_table)->wwn_rport_info_table[scsi_id] \
+			    .dfx_counter->error_handle_result[io_type])); \
+		} else { \
+			FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
+				     UNF_ERR, \
+				     "[err] scsi_cmd(0x%x) or scsi id(0x%x) " \
+				     "is invalid", \
+				     io_type, scsi_id); \
+		} \
+	} while (0)
+
+void unf_rport_state_ma(struct unf_rport *rport, enum unf_rport_event event);
+void unf_update_lport_state_by_linkup_event(struct unf_lport *lport,
+ struct unf_rport *rport,
+ u32 rport_att);
+
+void unf_set_rport_state(struct unf_rport *rport, enum unf_rport_login_state states);
+void unf_rport_enter_closing(struct unf_rport *rport);
+u32 unf_release_rport_res(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_initrport_mgr_temp(struct unf_lport *lport);
+void unf_clean_linkdown_rport(struct unf_lport *lport);
+void unf_rport_error_recovery(struct unf_rport *rport);
+struct unf_rport *unf_get_rport_by_nport_id(struct unf_lport *lport, u32 nport_id);
+struct unf_rport *unf_get_rport_by_wwn(struct unf_lport *lport, u64 wwpn);
+void unf_rport_enter_logo(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_rport_ref_inc(struct unf_rport *rport);
+void unf_rport_ref_dec(struct unf_rport *rport);
+
+struct unf_rport *unf_rport_set_qualifier_key_reuse(struct unf_lport *lport,
+ struct unf_rport *rport_by_nport_id,
+ struct unf_rport *rport_by_wwpn,
+ u64 wwpn, u32 sid);
+void unf_rport_delay_login(struct unf_rport *rport);
+struct unf_rport *unf_find_valid_rport(struct unf_lport *lport, u64 wwpn,
+ u32 sid);
+void unf_rport_linkdown(struct unf_lport *lport, struct unf_rport *rport);
+void unf_apply_for_session(struct unf_lport *lport, struct unf_rport *rport);
+struct unf_rport *unf_get_safe_rport(struct unf_lport *lport,
+ struct unf_rport *rport,
+ enum unf_rport_reuse_flag reuse_flag,
+ u32 nport_id);
+void *unf_rport_get_free_and_init(void *lport, u32 port_type, u32 nport_id);
+
+void unf_set_device_state(struct unf_lport *lport, u32 scsi_id, int scsi_state);
+u32 unf_get_scsi_id_by_wwpn(struct unf_lport *lport, u64 wwpn);
+u32 unf_get_device_state(struct unf_lport *lport, u32 scsi_id);
+u32 unf_free_scsi_id(struct unf_lport *lport, u32 scsi_id);
+void unf_schedule_closing_work(struct unf_lport *lport, struct unf_rport *rport);
+void unf_sesion_loss_timeout(struct work_struct *work);
+u32 unf_get_port_feature(u64 wwpn);
+void unf_update_port_feature(u64 wwpn, u32 port_feature);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_scsi.c b/drivers/scsi/spfc/common/unf_scsi.c
new file mode 100644
index 000000000000..961e5dd782c6
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_scsi.c
@@ -0,0 +1,1463 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_type.h"
+#include "unf_log.h"
+#include "unf_scsi_common.h"
+#include "unf_lport.h"
+#include "unf_rport.h"
+#include "unf_portman.h"
+#include "unf_exchg.h"
+#include "unf_exchg_abort.h"
+#include "unf_npiv.h"
+#include "unf_io.h"
+
+#define UNF_LUN_ID_MASK 0x00000000ffff0000
+#define UNF_CMD_PER_LUN 3
+
+static int unf_scsi_queue_cmd(struct Scsi_Host *phost, struct scsi_cmnd *pcmd);
+static int unf_scsi_abort_scsi_cmnd(struct scsi_cmnd *v_cmnd);
+static int unf_scsi_device_reset_handler(struct scsi_cmnd *v_cmnd);
+static int unf_scsi_bus_reset_handler(struct scsi_cmnd *v_cmnd);
+static int unf_scsi_target_reset_handler(struct scsi_cmnd *v_cmnd);
+static int unf_scsi_slave_alloc(struct scsi_device *sdev);
+static void unf_scsi_destroy_slave(struct scsi_device *sdev);
+static int unf_scsi_slave_configure(struct scsi_device *sdev);
+static int unf_scsi_scan_finished(struct Scsi_Host *shost, unsigned long time);
+static void unf_scsi_scan_start(struct Scsi_Host *shost);
+
+static struct scsi_transport_template *scsi_transport_template;
+static struct scsi_transport_template *scsi_transport_template_v;
+
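+/* Map driver internal IO completion codes to the SCSI midlayer host byte (plus a check-condition status for DIF errors) */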
+struct unf_ini_error_code ini_error_code_table1[] = {
+ {UNF_IO_SUCCESS, UNF_SCSI_HOST(DID_OK)},
+ {UNF_IO_ABORTED, UNF_SCSI_HOST(DID_ABORT)},
+ {UNF_IO_FAILED, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_ABORT_ABTS, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_ABORT_LOGIN, UNF_SCSI_HOST(DID_NO_CONNECT)},
+ {UNF_IO_ABORT_REET, UNF_SCSI_HOST(DID_RESET)},
+ {UNF_IO_ABORT_FAILED, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_OUTOF_ORDER, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_FTO, UNF_SCSI_HOST(DID_TIME_OUT)},
+ {UNF_IO_LINK_FAILURE, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_OVER_FLOW, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_RSP_OVER, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_LOST_FRAME, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_UNDER_FLOW, UNF_SCSI_HOST(DID_OK)},
+ {UNF_IO_HOST_PROG_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_SEST_PROG_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_INVALID_ENTRY, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_ABORT_SEQ_NOT, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_REJECT, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_EDC_IN_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_EDC_OUT_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_UNINIT_KEK_ERR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_DEK_OUTOF_RANGE, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_KEY_UNWRAP_ERR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_KEY_TAG_ERR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_KEY_ECC_ERR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_BLOCK_SIZE_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_ILLEGAL_CIPHER_MODE, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_CLEAN_UP, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_ABORTED_BY_TARGET, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_TRANSPORT_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_LINK_FLASH, UNF_SCSI_HOST(DID_NO_CONNECT)},
+ {UNF_IO_TIMEOUT, UNF_SCSI_HOST(DID_TIME_OUT)},
+ {UNF_IO_DMA_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_NO_LPORT, UNF_SCSI_HOST(DID_NO_CONNECT)},
+ {UNF_IO_NO_XCHG, UNF_SCSI_HOST(DID_SOFT_ERROR)},
+ {UNF_IO_SOFT_ERR, UNF_SCSI_HOST(DID_SOFT_ERROR)},
+ {UNF_IO_PORT_LOGOUT, UNF_SCSI_HOST(DID_NO_CONNECT)},
+ {UNF_IO_ERREND, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_DIF_ERROR, (UNF_SCSI_HOST(DID_OK) | UNF_SCSI_STATUS(SCSI_CHECK_CONDITION))},
+ {UNF_IO_INCOMPLETE, UNF_SCSI_HOST(DID_IMM_RETRY)},
+ {UNF_IO_DIF_REF_ERROR, (UNF_SCSI_HOST(DID_OK) | UNF_SCSI_STATUS(SCSI_CHECK_CONDITION))},
+ {UNF_IO_DIF_GEN_ERROR, (UNF_SCSI_HOST(DID_OK) | UNF_SCSI_STATUS(SCSI_CHECK_CONDITION))}
+};
+
+u32 ini_err_code_table_cnt1 = sizeof(ini_error_code_table1) / sizeof(struct unf_ini_error_code);
+
+static void unf_set_rport_loss_tmo(struct fc_rport *rport, u32 timeout)
+{
+ if (timeout)
+ rport->dev_loss_tmo = timeout;
+ else
+ rport->dev_loss_tmo = 1;
+}
+
+static void unf_get_host_port_id(struct Scsi_Host *shost)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
+ return;
+ }
+
+ fc_host_port_id(shost) = unf_lport->port_id;
+}
+
+static void unf_get_host_speed(struct Scsi_Host *shost)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 speed = FC_PORTSPEED_UNKNOWN;
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
+ return;
+ }
+
+ switch (unf_lport->speed) {
+ case UNF_PORT_SPEED_2_G:
+ speed = FC_PORTSPEED_2GBIT;
+ break;
+ case UNF_PORT_SPEED_4_G:
+ speed = FC_PORTSPEED_4GBIT;
+ break;
+ case UNF_PORT_SPEED_8_G:
+ speed = FC_PORTSPEED_8GBIT;
+ break;
+ case UNF_PORT_SPEED_16_G:
+ speed = FC_PORTSPEED_16GBIT;
+ break;
+ case UNF_PORT_SPEED_32_G:
+ speed = FC_PORTSPEED_32GBIT;
+ break;
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with unknown speed(0x%x) for FC mode",
+ unf_lport->port_id, unf_lport->speed);
+ break;
+ }
+
+ fc_host_speed(shost) = speed;
+}
+
+static void unf_get_host_port_type(struct Scsi_Host *shost)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 port_type = FC_PORTTYPE_UNKNOWN;
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
+ return;
+ }
+
+ switch (unf_lport->act_topo) {
+ case UNF_ACT_TOP_PRIVATE_LOOP:
+ port_type = FC_PORTTYPE_LPORT;
+ break;
+ case UNF_ACT_TOP_PUBLIC_LOOP:
+ port_type = FC_PORTTYPE_NLPORT;
+ break;
+ case UNF_ACT_TOP_P2P_DIRECT:
+ port_type = FC_PORTTYPE_PTP;
+ break;
+ case UNF_ACT_TOP_P2P_FABRIC:
+ port_type = FC_PORTTYPE_NPORT;
+ break;
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with unknown topo type(0x%x) for FC mode",
+ unf_lport->port_id, unf_lport->act_topo);
+ break;
+ }
+
+ fc_host_port_type(shost) = port_type;
+}
+
+static void unf_get_symbolic_name(struct Scsi_Host *shost)
+{
+ u8 *name = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ unf_lport = (struct unf_lport *)(uintptr_t)shost->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Check l_port failed");
+ return;
+ }
+
+ name = fc_host_symbolic_name(shost);
+ if (name)
+ snprintf(name, FC_SYMBOLIC_NAME_SIZE, "SPFC_FW_RELEASE:%s SPFC_DRV_RELEASE:%s",
+ unf_lport->fw_version, SPFC_DRV_VERSION);
+}
+
+static void unf_get_host_fabric_name(struct Scsi_Host *shost)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
+ return;
+ }
+ fc_host_fabric_name(shost) = unf_lport->fabric_node_name;
+}
+
+static void unf_get_host_port_state(struct Scsi_Host *shost)
+{
+ struct unf_lport *unf_lport = NULL;
+ enum fc_port_state port_state;
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
+ return;
+ }
+
+ switch (unf_lport->link_up) {
+ case UNF_PORT_LINK_DOWN:
+ port_state = FC_PORTSTATE_OFFLINE;
+ break;
+ case UNF_PORT_LINK_UP:
+ port_state = FC_PORTSTATE_ONLINE;
+ break;
+ default:
+ port_state = FC_PORTSTATE_UNKNOWN;
+ break;
+ }
+
+ fc_host_port_state(shost) = port_state;
+}
+
+static void unf_dev_loss_timeout_callbk(struct fc_rport *rport)
+{
+ /*
+ * NOTE: about rport->dd_data
+ * --->>> local SCSI_ID
+	 * 1. Assigned when the scsi rport link comes up
+	 * 2. Released when the scsi rport link goes down & times out (30s)
+	 * 3. Used in the scsi slave_alloc callback
+ */
+ struct Scsi_Host *host = NULL;
+ struct unf_lport *unf_lport = NULL;
+ u32 scsi_id = 0;
+
+ if (unlikely(!rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]SCSI rport is null");
+ return;
+ }
+
+ host = rport_to_shost(rport);
+ if (unlikely(!host)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Host is null");
+ return;
+ }
+
+ scsi_id = *(u32 *)(rport->dd_data); /* according to Local SCSI_ID */
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]rport(0x%p) scsi_id(0x%x) exceeds max(0x%x)",
+ rport, scsi_id, UNF_MAX_SCSI_ID);
+ return;
+ }
+
+ unf_lport = (struct unf_lport *)host->hostdata[0];
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[event]Port(0x%x_0x%x) rport(0x%p) scsi_id(0x%x) target_id(0x%x) loss timeout",
+ unf_lport->port_id, unf_lport->nport_id, rport,
+ scsi_id, rport->scsi_target_id);
+
+ atomic_inc(&unf_lport->session_loss_tmo);
+
+ /* Free SCSI ID & set table state with DEAD */
+ (void)unf_free_scsi_id(unf_lport, scsi_id);
+ unf_xchg_up_abort_io_by_scsi_id(unf_lport, scsi_id);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(%p) is invalid", unf_lport);
+ }
+
+ *((u32 *)rport->dd_data) = INVALID_VALUE32;
+}
+
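+/* FC transport vport_create callback: create an NPIV vport on the L_Port
+ * and bind it to the fc_vport through dd_data.
+ */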
+int unf_scsi_create_vport(struct fc_vport *fc_port, bool disabled)
+{
+ struct unf_lport *vport = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct Scsi_Host *shost = NULL;
+ struct vport_config vport_config = {0};
+
+ shost = vport_to_shost(fc_port);
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+ if (unf_is_lport_valid(unf_lport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(%p) is invalid", unf_lport);
+
+ return RETURN_ERROR;
+ }
+
+ vport_config.port_name = fc_port->port_name;
+
+ vport_config.port_mode = fc_port->roles;
+
+ vport = unf_creat_vport(unf_lport, &vport_config);
+ if (!vport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) create vport failed in low level driver",
+ unf_lport->port_id);
+
+ return RETURN_ERROR;
+ }
+
+ fc_port->dd_data = vport;
+ vport->vport = fc_port;
+
+ return RETURN_OK;
+}
+
+int unf_scsi_delete_vport(struct fc_vport *fc_port)
+{
+ int ret = RETURN_ERROR;
+ struct unf_lport *vport = NULL;
+
+ vport = (struct unf_lport *)fc_port->dd_data;
+ if (unf_is_lport_valid(vport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]VPort(%p) is invalid or is removing", vport);
+
+ fc_port->dd_data = NULL;
+
+ return ret;
+ }
+
+ ret = (int)unf_destroy_one_vport(vport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]VPort(0x%x) destroy failed in driver", vport->port_id);
+
+ return ret;
+ }
+
+ fc_port->dd_data = NULL;
+ return ret;
+}
+
+struct fc_function_template function_template = {
+ .show_host_node_name = 1,
+ .show_host_port_name = 1,
+ .show_host_supported_classes = 1,
+ .show_host_supported_speeds = 1,
+
+ .get_host_port_id = unf_get_host_port_id,
+ .show_host_port_id = 1,
+ .get_host_speed = unf_get_host_speed,
+ .show_host_speed = 1,
+ .get_host_port_type = unf_get_host_port_type,
+ .show_host_port_type = 1,
+ .get_host_symbolic_name = unf_get_symbolic_name,
+ .show_host_symbolic_name = 1,
+ .set_host_system_hostname = NULL,
+ .show_host_system_hostname = 1,
+ .get_host_fabric_name = unf_get_host_fabric_name,
+ .show_host_fabric_name = 1,
+ .get_host_port_state = unf_get_host_port_state,
+ .show_host_port_state = 1,
+
+ .dd_fcrport_size = sizeof(void *),
+ .show_rport_supported_classes = 1,
+
+ .get_starget_node_name = NULL,
+ .show_starget_node_name = 1,
+ .get_starget_port_name = NULL,
+ .show_starget_port_name = 1,
+ .get_starget_port_id = NULL,
+ .show_starget_port_id = 1,
+
+ .set_rport_dev_loss_tmo = unf_set_rport_loss_tmo,
+ .show_rport_dev_loss_tmo = 0,
+
+ .issue_fc_host_lip = NULL,
+ .dev_loss_tmo_callbk = unf_dev_loss_timeout_callbk,
+ .terminate_rport_io = NULL,
+ .get_fc_host_stats = NULL,
+
+ .vport_create = unf_scsi_create_vport,
+ .vport_disable = NULL,
+ .vport_delete = unf_scsi_delete_vport,
+ .bsg_request = NULL,
+ .bsg_timeout = NULL,
+};
+
+struct fc_function_template function_template_v = {
+ .show_host_node_name = 1,
+ .show_host_port_name = 1,
+ .show_host_supported_classes = 1,
+ .show_host_supported_speeds = 1,
+
+ .get_host_port_id = unf_get_host_port_id,
+ .show_host_port_id = 1,
+ .get_host_speed = unf_get_host_speed,
+ .show_host_speed = 1,
+ .get_host_port_type = unf_get_host_port_type,
+ .show_host_port_type = 1,
+ .get_host_symbolic_name = unf_get_symbolic_name,
+ .show_host_symbolic_name = 1,
+ .set_host_system_hostname = NULL,
+ .show_host_system_hostname = 1,
+ .get_host_fabric_name = unf_get_host_fabric_name,
+ .show_host_fabric_name = 1,
+ .get_host_port_state = unf_get_host_port_state,
+ .show_host_port_state = 1,
+
+ .dd_fcrport_size = sizeof(void *),
+ .show_rport_supported_classes = 1,
+
+ .get_starget_node_name = NULL,
+ .show_starget_node_name = 1,
+ .get_starget_port_name = NULL,
+ .show_starget_port_name = 1,
+ .get_starget_port_id = NULL,
+ .show_starget_port_id = 1,
+
+ .set_rport_dev_loss_tmo = unf_set_rport_loss_tmo,
+ .show_rport_dev_loss_tmo = 0,
+
+ .issue_fc_host_lip = NULL,
+ .dev_loss_tmo_callbk = unf_dev_loss_timeout_callbk,
+ .terminate_rport_io = NULL,
+ .get_fc_host_stats = NULL,
+
+ .vport_create = NULL,
+ .vport_disable = NULL,
+ .vport_delete = NULL,
+ .bsg_request = NULL,
+ .bsg_timeout = NULL,
+};
+
+struct scsi_host_template scsi_host_template = {
+ .module = THIS_MODULE,
+ .name = "SPFC",
+
+ .queuecommand = unf_scsi_queue_cmd,
+ .eh_timed_out = fc_eh_timed_out,
+ .eh_abort_handler = unf_scsi_abort_scsi_cmnd,
+ .eh_device_reset_handler = unf_scsi_device_reset_handler,
+
+ .eh_target_reset_handler = unf_scsi_target_reset_handler,
+ .eh_bus_reset_handler = unf_scsi_bus_reset_handler,
+ .eh_host_reset_handler = NULL,
+
+ .slave_configure = unf_scsi_slave_configure,
+ .slave_alloc = unf_scsi_slave_alloc,
+ .slave_destroy = unf_scsi_destroy_slave,
+
+ .scan_finished = unf_scsi_scan_finished,
+ .scan_start = unf_scsi_scan_start,
+
+ .this_id = -1, /* this_id: -1 */
+ .cmd_per_lun = UNF_CMD_PER_LUN,
+ .shost_attrs = NULL,
+ .sg_tablesize = SG_ALL,
+ .max_sectors = UNF_MAX_SECTORS,
+ .supported_mode = MODE_INITIATOR,
+};
+
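+/* Unmap the DIF protection scatter-gather list of a command when DIF is
+ * enabled and protection data was mapped.
+ */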
+void unf_unmap_prot_sgl(struct scsi_cmnd *cmnd)
+{
+ struct device *dev = NULL;
+
+ if ((scsi_get_prot_op(cmnd) != SCSI_PROT_NORMAL) && spfc_dif_enable &&
+ (scsi_prot_sg_count(cmnd))) {
+ dev = cmnd->device->host->dma_dev;
+ dma_unmap_sg(dev, scsi_prot_sglist(cmnd),
+ (int)scsi_prot_sg_count(cmnd),
+ cmnd->sc_data_direction);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "scsi done cmd:%p op:%u, difsglcount:%u", cmnd,
+ scsi_get_prot_op(cmnd), scsi_prot_sg_count(cmnd));
+ }
+}
+
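+/* Common I/O completion: copy residual and result back to the midlayer
+ * command, unmap the data and protection SGLs, then call scsi_done().
+ */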
+void unf_scsi_done(struct unf_scsi_cmnd *scsi_cmd)
+{
+ struct scsi_cmnd *cmd = NULL;
+
+	FC_CHECK_RETURN_VOID(scsi_cmd);
+	cmd = (struct scsi_cmnd *)scsi_cmd->upper_cmnd;
+ FC_CHECK_RETURN_VOID(cmd);
+ FC_CHECK_RETURN_VOID(cmd->scsi_done);
+ scsi_set_resid(cmd, (int)scsi_cmd->resid);
+
+ cmd->result = scsi_cmd->result;
+ scsi_dma_unmap(cmd);
+ unf_unmap_prot_sgl(cmd);
+	cmd->scsi_done(cmd);
+}
+
+static void unf_get_protect_op(struct scsi_cmnd *cmd,
+ struct unf_dif_control_info *dif_control_info)
+{
+ switch (scsi_get_prot_op(cmd)) {
+ /* OS-HBA: Unprotected, HBA-Target: Protected */
+ case SCSI_PROT_READ_STRIP:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_DELETE;
+ break;
+ case SCSI_PROT_WRITE_INSERT:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_INSERT;
+ break;
+
+ /* OS-HBA: Protected, HBA-Target: Unprotected */
+ case SCSI_PROT_READ_INSERT:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_INSERT;
+ break;
+ case SCSI_PROT_WRITE_STRIP:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_DELETE;
+ break;
+
+ /* OS-HBA: Protected, HBA-Target: Protected */
+ case SCSI_PROT_READ_PASS:
+ case SCSI_PROT_WRITE_PASS:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_FORWARD;
+ break;
+
+ default:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_FORWARD;
+ break;
+ }
+}
+
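+/* Fill the DIF control information (protect opcode, sector size, start LBA)
+ * and DMA-map the protection SGL of a protected command.
+ */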
+int unf_get_protect_mode(struct unf_lport *lport, struct scsi_cmnd *scsi_cmd,
+ struct unf_scsi_cmnd *unf_scsi_cmd)
+{
+ struct scsi_cmnd *cmd = NULL;
+ int dif_seg_cnt = 0;
+ struct unf_dif_control_info *dif_control_info = NULL;
+
+ cmd = scsi_cmd;
+ dif_control_info = &unf_scsi_cmd->dif_control;
+
+ unf_get_protect_op(cmd, dif_control_info);
+
+ if (dif_sgl_mode)
+ dif_control_info->flags |= UNF_DIF_DOUBLE_SGL;
+ dif_control_info->flags |= ((cmd->device->sector_size) == SECTOR_SIZE_4096)
+ ? UNF_DIF_SECTSIZE_4KB : UNF_DIF_SECTSIZE_512;
+ dif_control_info->protect_opcode |= UNF_VERIFY_CRC_MASK | UNF_VERIFY_LBA_MASK;
+ dif_control_info->dif_sge_count = scsi_prot_sg_count(cmd);
+ dif_control_info->dif_sgl = scsi_prot_sglist(cmd);
+ dif_control_info->start_lba = cpu_to_le32(((uint32_t)(0xffffffff & scsi_get_lba(cmd))));
+
+ if (cmd->device->sector_size == SECTOR_SIZE_4096)
+ dif_control_info->start_lba = dif_control_info->start_lba >> UNF_SHIFT_3;
+
+ if (scsi_prot_sg_count(cmd)) {
+ dif_seg_cnt = dma_map_sg(&lport->low_level_func.dev->dev, scsi_prot_sglist(cmd),
+ (int)scsi_prot_sg_count(cmd), cmd->sc_data_direction);
+ if (unlikely(!dif_seg_cnt)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) cmd:%p map dif sgl err",
+ lport->port_id, cmd);
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "build scsi cmd:%p op:%u,difsglcount:%u,difsegcnt:%u", cmd,
+ scsi_get_prot_op(cmd), scsi_prot_sg_count(cmd),
+ dif_seg_cnt);
+ return RETURN_OK;
+}
+
+static u32 unf_get_rport_qos_level(struct scsi_cmnd *cmd, u32 scsi_id,
+ struct unf_rport_scsi_id_image *scsi_image_table)
+{
+ enum unf_rport_qos_level level = 0;
+
+ if (!scsi_image_table->wwn_rport_info_table[scsi_id].lun_qos_level ||
+ cmd->device->lun >= UNF_MAX_LUN_PER_TARGET) {
+ level = 0;
+ } else {
+ level = (scsi_image_table->wwn_rport_info_table[scsi_id]
+ .lun_qos_level[cmd->device->lun]);
+ }
+ return level;
+}
+
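+/* SGL walk callback used by the low level driver: return the DMA address and
+ * length of the current SGE and advance to the next entry.
+ */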
+u32 unf_get_frame_entry_buf(void *up_cmnd, void *driver_sgl, void **upper_sgl,
+ u32 *port_id, u32 *index, char **buf, u32 *buf_len)
+{
+#define SPFC_MAX_DMA_LENGTH (0x20000 - 1)
+ struct scatterlist *scsi_sgl = *upper_sgl;
+
+ if (unlikely(!scsi_sgl)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Command(0x%p) can not get SGL.", up_cmnd);
+ return RETURN_ERROR;
+ }
+ *buf = (char *)sg_dma_address(scsi_sgl);
+ *buf_len = sg_dma_len(scsi_sgl);
+ *upper_sgl = (void *)sg_next(scsi_sgl);
+ if (unlikely((*buf_len > SPFC_MAX_DMA_LENGTH) || (*buf_len == 0))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Command(0x%p) dmalen:0x%x is not support.",
+ up_cmnd, *buf_len);
+ return RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
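+/* Translate a midlayer scsi_cmnd into the driver's unf_scsi_cmnd
+ * representation used by the common I/O path.
+ */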
+static void unf_init_scsi_cmnd(struct Scsi_Host *host, struct scsi_cmnd *cmd,
+ struct unf_scsi_cmnd *scsi_cmnd,
+ struct unf_rport_scsi_id_image *scsi_image_table,
+ int datasegcnt)
+{
+ static atomic64_t count;
+ enum unf_rport_qos_level level = 0;
+ u32 scsi_id = 0;
+
+ scsi_id = (u32)((u64)cmd->device->hostdata);
+ level = unf_get_rport_qos_level(cmd, scsi_id, scsi_image_table);
+ scsi_cmnd->scsi_host_id = host->host_no; /* save host_no to scsi_cmnd->scsi_host_id */
+ scsi_cmnd->scsi_id = scsi_id;
+ scsi_cmnd->raw_lun_id = ((u64)cmd->device->lun << 16) & UNF_LUN_ID_MASK;
+ scsi_cmnd->data_direction = cmd->sc_data_direction;
+ scsi_cmnd->under_flow = cmd->underflow;
+ scsi_cmnd->cmnd_len = cmd->cmd_len;
+ scsi_cmnd->pcmnd = cmd->cmnd;
+ scsi_cmnd->transfer_len = cpu_to_le32((uint32_t)scsi_bufflen(cmd));
+ scsi_cmnd->sense_buflen = UNF_SCSI_SENSE_BUFFERSIZE;
+ scsi_cmnd->sense_buf = cmd->sense_buffer;
+ scsi_cmnd->time_out = 0;
+ scsi_cmnd->upper_cmnd = cmd;
+ scsi_cmnd->drv_private = (void *)(*(u64 *)shost_priv(host));
+ scsi_cmnd->entry_count = datasegcnt;
+ scsi_cmnd->sgl = scsi_sglist(cmd);
+ scsi_cmnd->unf_ini_get_sgl_entry = unf_get_frame_entry_buf;
+ scsi_cmnd->done = unf_scsi_done;
+ scsi_cmnd->lun_id = (u8 *)&scsi_cmnd->raw_lun_id;
+ scsi_cmnd->err_code_table_cout = ini_err_code_table_cnt1;
+ scsi_cmnd->err_code_table = ini_error_code_table1;
+ scsi_cmnd->world_id = INVALID_WORLD_ID;
+ scsi_cmnd->cmnd_sn = atomic64_inc_return(&count);
+ scsi_cmnd->qos_level = level;
+ if (unlikely(scsi_cmnd->cmnd_sn == 0))
+ scsi_cmnd->cmnd_sn = atomic64_inc_return(&count);
+}
+
+static void unf_io_error_done(struct scsi_cmnd *cmd,
+ struct unf_rport_scsi_id_image *scsi_image_table,
+ u32 scsi_id, u32 result)
+{
+ cmd->result = (int)(result << UNF_SHIFT_16);
+ cmd->scsi_done(cmd);
+ if (scsi_image_table)
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, result);
+}
+
+static bool unf_scan_device_cmd(struct scsi_cmnd *cmd)
+{
+ return ((cmd->cmnd[0] == INQUIRY) || (cmd->cmnd[0] == REPORT_LUNS));
+}
+
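+/* queuecommand entry: validate L_Port, scsi_id and rport state, DMA-map the
+ * data (and DIF) SGLs, build the unf_scsi_cmnd and hand it to the common
+ * layer for transmission.
+ */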
+static int unf_scsi_queue_cmd(struct Scsi_Host *phost, struct scsi_cmnd *pcmd)
+{
+ struct Scsi_Host *host = NULL;
+ struct scsi_cmnd *cmd = NULL;
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ u32 scsi_id = 0;
+ u32 scsi_state = 0;
+ int ret = SCSI_MLQUEUE_HOST_BUSY;
+ struct unf_lport *unf_lport = NULL;
+ struct fc_rport *rport = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ struct unf_rport *unf_rport = NULL;
+ u32 cmnd_result = 0;
+ u32 rport_state_err = 0;
+ bool scan_device_cmd = false;
+ int datasegcnt = 0;
+
+ host = phost;
+ cmd = pcmd;
+ FC_CHECK_RETURN_VALUE(host, RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(cmd, RETURN_ERROR);
+
+ /* Get L_Port from scsi_cmd */
+ unf_lport = (struct unf_lport *)host->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Check l_port failed, cmd(%p)", cmd);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ /* Check device/session local state by device_id */
+ scsi_id = (u32)((u64)cmd->device->hostdata);
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) scsi_id(0x%x) exceeds max %d",
+ unf_lport->port_id, scsi_id, UNF_MAX_SCSI_ID);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ UNF_SCSI_CMD_CNT(scsi_image_table, scsi_id, cmd->cmnd[0]);
+
+ /* Get scsi r_port */
+ rport = starget_to_rport(scsi_target(cmd->device));
+ if (unlikely(!rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) cmd(%p) to get scsi rport failed",
+ unf_lport->port_id, cmd);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ if (unlikely(!scsi_image_table->wwn_rport_info_table)) {
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
+			     "[warn]LPort port_id(0x%x) WwnRportInfoTable NULL",
+ unf_lport->port_id);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ if (unlikely(unf_lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
+ "[warn]Port(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p) unf_lport removing",
+ unf_lport->port_id, scsi_id, rport, rport->scsi_target_id, cmd);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ scsi_state = atomic_read(&scsi_image_table->wwn_rport_info_table[scsi_id].scsi_state);
+ if (unlikely(scsi_state != UNF_SCSI_ST_ONLINE)) {
+ if (scsi_state == UNF_SCSI_ST_OFFLINE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) scsi_state(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p), target is busy",
+ unf_lport->port_id, scsi_state, scsi_id, rport,
+ rport->scsi_target_id, cmd);
+
+ scan_device_cmd = unf_scan_device_cmd(cmd);
+			/* For REPORT LUNS or INQUIRY commands do not retry on
+			 * failure, to prevent deadlocking on the scsi host
+			 * scan_mutex
+			 */
+ if (scan_device_cmd) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) host(0x%x) scsi_id(0x%x) lun(0x%llx) cmd(0x%x) DID_NO_CONNECT",
+ unf_lport->port_id, host->host_no, scsi_id,
+ (u64)cmd->device->lun, cmd->cmnd[0]);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ if (likely(scsi_image_table->wwn_rport_info_table)) {
+ if (likely(scsi_image_table->wwn_rport_info_table[scsi_id]
+ .dfx_counter)) {
+ atomic64_inc(&(scsi_image_table
+ ->wwn_rport_info_table[scsi_id]
+ .dfx_counter->target_busy));
+ }
+ }
+
+ /* Target busy: need scsi retry */
+ return SCSI_MLQUEUE_TARGET_BUSY;
+ }
+ /* timeout(DEAD): scsi_done & return 0 & I/O error */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p), target is loss timeout",
+ unf_lport->port_id, scsi_id, rport,
+ rport->scsi_target_id, cmd);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ if (scsi_sg_count(cmd)) {
+ datasegcnt = dma_map_sg(&unf_lport->low_level_func.dev->dev, scsi_sglist(cmd),
+ (int)scsi_sg_count(cmd), cmd->sc_data_direction);
+ if (unlikely(!datasegcnt)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p), dma map sg err",
+ unf_lport->port_id, scsi_id, rport,
+ rport->scsi_target_id, cmd);
+			/* cmd is completed with DID_BUS_BUSY so the midlayer
+			 * will retry it; do not also return a busy status
+			 */
+			unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_BUS_BUSY);
+			return 0;
+ }
+ }
+
+ /* Construct local SCSI CMND info */
+ unf_init_scsi_cmnd(host, cmd, &scsi_cmd, scsi_image_table, datasegcnt);
+
+ if ((scsi_get_prot_op(cmd) != SCSI_PROT_NORMAL) && spfc_dif_enable) {
+ ret = unf_get_protect_mode(unf_lport, cmd, &scsi_cmd);
+ if (ret != RETURN_OK) {
+			/* unmap before completing: unf_io_error_done() hands
+			 * the cmd back to the midlayer, so it must not be
+			 * touched afterwards
+			 */
+			scsi_dma_unmap(cmd);
+			unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_BUS_BUSY);
+			return 0;
+ }
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) host(0x%x) scsi_id(0x%x) lun(0x%llx) transfer length(0x%x) cmd_len(0x%x) direction(0x%x) cmd(0x%x) under_flow(0x%x) protect_opcode is (0x%x) dif_sgl_mode is %d, sector size(%d)",
+ unf_lport->port_id, host->host_no, scsi_id, (u64)cmd->device->lun,
+ scsi_cmd.transfer_len, scsi_cmd.cmnd_len, cmd->sc_data_direction,
+ scsi_cmd.pcmnd[0], scsi_cmd.under_flow,
+ scsi_cmd.dif_control.protect_opcode, dif_sgl_mode,
+ (cmd->device->sector_size));
+
+	/* Save the command serial number (cmnd_sn) in host_scribble so the
+	 * exchange bound to this scsi_cmd can be located later
+	 */
+ cmd->host_scribble = (unsigned char *)scsi_cmd.cmnd_sn;
+ ret = unf_cm_queue_command(&scsi_cmd);
+ if (ret != RETURN_OK) {
+ unf_rport = unf_find_rport_by_scsi_id(unf_lport, ini_error_code_table1,
+ ini_err_code_table_cnt1,
+ scsi_id, &cmnd_result);
+ rport_state_err = (!unf_rport) ||
+ (unf_rport->lport_ini_state != UNF_PORT_STATE_LINKUP) ||
+ (unf_rport->rp_state == UNF_RPORT_ST_CLOSING);
+ scan_device_cmd = unf_scan_device_cmd(cmd);
+
+		/* For REPORT LUNS or INQUIRY commands do not retry on failure,
+		 * to prevent deadlocking on the scsi host scan_mutex
+		 */
+ if (rport_state_err && scan_device_cmd) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) host(0x%x) scsi_id(0x%x) lun(0x%llx) cmd(0x%x) cmResult(0x%x) DID_NO_CONNECT",
+ unf_lport->port_id, host->host_no, scsi_id,
+ (u64)cmd->device->lun, cmd->cmnd[0],
+ cmnd_result);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ scsi_dma_unmap(cmd);
+ unf_unmap_prot_sgl(cmd);
+ return 0;
+ }
+
+ /* Host busy: scsi need to retry */
+ ret = SCSI_MLQUEUE_HOST_BUSY;
+ if (likely(scsi_image_table->wwn_rport_info_table)) {
+ if (likely(scsi_image_table->wwn_rport_info_table[scsi_id].dfx_counter)) {
+ atomic64_inc(&(scsi_image_table->wwn_rport_info_table[scsi_id]
+ .dfx_counter->host_busy));
+ }
+ }
+ scsi_dma_unmap(cmd);
+ unf_unmap_prot_sgl(cmd);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) return(0x%x), process INI IO failed",
+ unf_lport->port_id, ret);
+ }
+ return ret;
+}
+
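+/* Build a minimal unf_scsi_cmnd used by the abort and task management error
+ * handlers from the midlayer command.
+ */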
+static void unf_init_abts_tmf_scsi_cmd(struct scsi_cmnd *cmnd,
+ struct unf_scsi_cmnd *scsi_cmd,
+ bool abort_cmd)
+{
+ struct Scsi_Host *scsi_host = NULL;
+
+ scsi_host = cmnd->device->host;
+ scsi_cmd->scsi_host_id = scsi_host->host_no;
+ scsi_cmd->scsi_id = (u32)((u64)cmnd->device->hostdata);
+ scsi_cmd->raw_lun_id = (u64)cmnd->device->lun;
+ scsi_cmd->upper_cmnd = cmnd;
+ scsi_cmd->drv_private = (void *)(*(u64 *)shost_priv(scsi_host));
+ scsi_cmd->cmnd_sn = (u64)(cmnd->host_scribble);
+ scsi_cmd->lun_id = (u8 *)&scsi_cmd->raw_lun_id;
+ if (abort_cmd) {
+ scsi_cmd->done = unf_scsi_done;
+ scsi_cmd->world_id = INVALID_WORLD_ID;
+ }
+}
+
+int unf_scsi_abort_scsi_cmnd(struct scsi_cmnd *cmnd)
+{
+ /* SCSI ABORT Command --->>> FC ABTS */
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ int ret = FAILED;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ struct unf_lport *unf_lport = NULL;
+ u32 scsi_id = 0;
+ u32 err_handle = 0;
+
+ FC_CHECK_RETURN_VALUE(cmnd, FAILED);
+
+ unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
+ scsi_id = (u32)((u64)cmnd->device->hostdata);
+
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_ABORT_IO_TYPE;
+ UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[abort]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
+ unf_lport->port_id, scsi_id,
+ (u32)cmnd->device->lun, cmnd->cmnd[0]);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Lport(%p) is moving or null", unf_lport);
+ return UNF_SCSI_ABORT_FAIL;
+ }
+
+ /* Check local SCSI_ID validity */
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]scsi_id(0x%x) exceeds max(0x%x)", scsi_id,
+ UNF_MAX_SCSI_ID);
+ return UNF_SCSI_ABORT_FAIL;
+ }
+
+ /* Block scsi (check rport state -> whether offline or not) */
+ ret = fc_block_scsi_eh(cmnd);
+ if (unlikely(ret != 0)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Block scsi eh failed(0x%x)", ret);
+ return ret;
+ }
+
+ unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, true);
+ /* Process scsi Abort cmnd */
+ ret = unf_cm_eh_abort_handler(&scsi_cmd);
+ if (ret == UNF_SCSI_ABORT_SUCCESS) {
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_ABORT_IO_TYPE;
+ UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table,
+ scsi_id, err_handle);
+ }
+ }
+
+ return ret;
+}
+
+int unf_scsi_device_reset_handler(struct scsi_cmnd *cmnd)
+{
+ /* LUN reset */
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ int ret = FAILED;
+ struct unf_lport *unf_lport = NULL;
+ u32 scsi_id = 0;
+ u32 err_handle = 0;
+
+ FC_CHECK_RETURN_VALUE(cmnd, FAILED);
+
+	unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
+	scsi_id = (u32)((u64)cmnd->device->hostdata);
+	if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_DEVICE_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[device_reset]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
+ unf_lport->port_id, scsi_id, (u32)cmnd->device->lun, cmnd->cmnd[0]);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is invalid");
+
+ return FAILED;
+ }
+
+	/* Check local SCSI_ID validity */
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]scsi_id(0x%x) exceeds max(0x%x)", scsi_id,
+ UNF_MAX_SCSI_ID);
+
+ return FAILED;
+ }
+
+ /* Block scsi (check rport state -> whether offline or not) */
+ ret = fc_block_scsi_eh(cmnd);
+ if (unlikely(ret != 0)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Block scsi eh failed(0x%x)", ret);
+
+ return ret;
+ }
+
+ unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, false);
+ /* Process scsi device/LUN reset cmnd */
+ ret = unf_cm_eh_device_reset_handler(&scsi_cmd);
+ if (ret == UNF_SCSI_ABORT_SUCCESS) {
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_DEVICE_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table,
+ scsi_id, err_handle);
+ }
+ }
+
+ return ret;
+}
+
+int unf_scsi_bus_reset_handler(struct scsi_cmnd *cmnd)
+{
+ /* BUS Reset */
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ int ret = FAILED;
+ u32 scsi_id = 0;
+ u32 err_handle = 0;
+
+ FC_CHECK_RETURN_VALUE(cmnd, FAILED);
+
+ unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port is null");
+
+ return FAILED;
+ }
+
+ /* Check local SCSI_ID validity */
+ scsi_id = (u32)((u64)cmnd->device->hostdata);
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]scsi_id(0x%x) exceeds max(0x%x)", scsi_id,
+ UNF_MAX_SCSI_ID);
+
+ return FAILED;
+ }
+
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_BUS_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+			     "[bus_reset]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
+ unf_lport->port_id, scsi_id, (u32)cmnd->device->lun,
+ cmnd->cmnd[0]);
+ }
+
+ /* Block scsi (check rport state -> whether offline or not) */
+ ret = fc_block_scsi_eh(cmnd);
+ if (unlikely(ret != 0)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Block scsi eh failed(0x%x)", ret);
+
+ return ret;
+ }
+
+ unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, false);
+ /* Process scsi BUS Reset cmnd */
+ ret = unf_cm_bus_reset_handler(&scsi_cmd);
+ if (ret == UNF_SCSI_ABORT_SUCCESS) {
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_BUS_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table, scsi_id, err_handle);
+ }
+ }
+
+ return ret;
+}
+
+int unf_scsi_target_reset_handler(struct scsi_cmnd *cmnd)
+{
+ /* Session reset/delete */
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ int ret = FAILED;
+ struct unf_lport *unf_lport = NULL;
+ u32 scsi_id = 0;
+ u32 err_handle = 0;
+
+	FC_CHECK_RETURN_VALUE(cmnd, FAILED);
+
+ unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port is null");
+
+ return FAILED;
+ }
+
+ /* Check local SCSI_ID validity */
+ scsi_id = (u32)((u64)cmnd->device->hostdata);
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]scsi_id(0x%x) exceeds max(0x%x)", scsi_id, UNF_MAX_SCSI_ID);
+
+ return FAILED;
+ }
+
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_TARGET_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[target_reset]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
+ unf_lport->port_id, scsi_id, (u32)cmnd->device->lun, cmnd->cmnd[0]);
+ }
+
+ /* Block scsi (check rport state -> whether offline or not) */
+ ret = fc_block_scsi_eh(cmnd);
+ if (unlikely(ret != 0)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Block scsi eh failed(0x%x)", ret);
+
+ return ret;
+ }
+
+ unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, false);
+ /* Process scsi Target/Session reset/delete cmnd */
+ ret = unf_cm_target_reset_handler(&scsi_cmd);
+ if (ret == UNF_SCSI_ABORT_SUCCESS) {
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_TARGET_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table, scsi_id, err_handle);
+ }
+ }
+
+ return ret;
+}
+
+static int unf_scsi_slave_alloc(struct scsi_device *sdev)
+{
+ struct fc_rport *rport = NULL;
+ u32 scsi_id = 0;
+ struct unf_lport *unf_lport = NULL;
+ struct Scsi_Host *host = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+
+ /* About device */
+ if (unlikely(!sdev)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]SDev is null");
+ return -ENXIO;
+ }
+
+ /* About scsi rport */
+ rport = starget_to_rport(scsi_target(sdev));
+ if (unlikely(!rport || fc_remote_port_chkready(rport))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]SCSI rport is null");
+
+ if (rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]SCSI rport is not ready(0x%x)",
+ fc_remote_port_chkready(rport));
+ }
+
+ return -ENXIO;
+ }
+
+ /* About host */
+ host = rport_to_shost(rport);
+ if (unlikely(!host)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Host is null");
+
+ return -ENXIO;
+ }
+
+ /* About Local Port */
+ unf_lport = (struct unf_lport *)host->hostdata[0];
+ if (unf_is_lport_valid(unf_lport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is invalid");
+
+ return -ENXIO;
+ }
+
+ /* About Local SCSI_ID */
+	scsi_id = *(u32 *)rport->dd_data;
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]scsi_id(0x%x) exceeds max(0x%x)", scsi_id, UNF_MAX_SCSI_ID);
+
+ return -ENXIO;
+ }
+
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ if (scsi_image_table->wwn_rport_info_table[scsi_id].dfx_counter) {
+ atomic_inc(&scsi_image_table->wwn_rport_info_table[scsi_id]
+ .dfx_counter->device_alloc);
+ }
+ atomic_inc(&unf_lport->device_alloc);
+ sdev->hostdata = (void *)(u64)scsi_id;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) use scsi_id(%u) to alloc device[%u:%u:%u:%u]",
+ unf_lport->port_id, scsi_id, host->host_no, sdev->channel, sdev->id,
+ (u32)sdev->lun);
+
+ return 0;
+}
+
+static void unf_scsi_destroy_slave(struct scsi_device *sdev)
+{
+ /*
+ * NOTE: about sdev->hostdata
+ * --->>> pointing to local SCSI_ID
+	 * 1. Assigned during slave allocation
+	 * 2. Released in the slave destroy callback
+ * 3. Used during: Queue_CMND, Abort CMND, Device Reset, Target Reset &
+ * Bus Reset
+ */
+ struct fc_rport *rport = NULL;
+ u32 scsi_id = 0;
+ struct unf_lport *unf_lport = NULL;
+ struct Scsi_Host *host = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+
+ /* About scsi device */
+ if (unlikely(!sdev)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]SDev is null");
+
+ return;
+ }
+
+ /* About scsi rport */
+ rport = starget_to_rport(scsi_target(sdev));
+ if (unlikely(!rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]SCSI rport is null or remote port is not ready");
+ return;
+ }
+
+ /* About host */
+ host = rport_to_shost(rport);
+ if (unlikely(!host)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Host is null");
+
+ return;
+ }
+
+ /* About L_Port */
+ unf_lport = (struct unf_lport *)host->hostdata[0];
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ atomic_inc(&unf_lport->device_destroy);
+
+ scsi_id = (u32)((u64)sdev->hostdata);
+ if (scsi_id < UNF_MAX_SCSI_ID && scsi_image_table->wwn_rport_info_table) {
+ if (scsi_image_table->wwn_rport_info_table[scsi_id].dfx_counter) {
+ atomic_inc(&scsi_image_table->wwn_rport_info_table[scsi_id]
+ .dfx_counter->device_destroy);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) with scsi_id(%u) to destroy slave device[%u:%u:%u:%u]",
+ unf_lport->port_id, scsi_id, host->host_no,
+ sdev->channel, sdev->id, (u32)sdev->lun);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+				     "[warn]Port(0x%x) scsi_id(%u) is invalid, destroy device[%u:%u:%u:%u]",
+ unf_lport->port_id, scsi_id, host->host_no,
+ sdev->channel, sdev->id, (u32)sdev->lun);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(%p) is invalid", unf_lport);
+ }
+
+ sdev->hostdata = NULL;
+}
+
+static int unf_scsi_slave_configure(struct scsi_device *sdev)
+{
+#define UNF_SCSI_DEV_DEPTH 32
+ blk_queue_update_dma_alignment(sdev->request_queue, 0x7);
+
+ scsi_change_queue_depth(sdev, UNF_SCSI_DEV_DEPTH);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+		     "[event]Enter slave configure, queue depth set to %d, sdev->tagged_supported is (%d)",
+ UNF_SCSI_DEV_DEPTH, sdev->tagged_supported);
+
+ return 0;
+}
+
+static int unf_scsi_scan_finished(struct Scsi_Host *shost, unsigned long time)
+{
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Scan finished");
+
+ return 1;
+}
+
+static void unf_scsi_scan_start(struct Scsi_Host *shost)
+{
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Start scsi scan...");
+}
+
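+/* Initialize the fc_host attributes (node/port names, dev_loss_tmo, NPIV
+ * limits and supported speed mask) from the L_Port configuration.
+ */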
+void unf_host_init_attr_setting(struct Scsi_Host *scsi_host)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 speed = FC_PORTSPEED_UNKNOWN;
+
+ unf_lport = (struct unf_lport *)scsi_host->hostdata[0];
+ fc_host_supported_classes(scsi_host) = FC_COS_CLASS3;
+ fc_host_dev_loss_tmo(scsi_host) = (u32)unf_get_link_lose_tmo(unf_lport);
+ fc_host_node_name(scsi_host) = unf_lport->node_name;
+ fc_host_port_name(scsi_host) = unf_lport->port_name;
+
+ fc_host_max_npiv_vports(scsi_host) = (u16)((unf_lport == unf_lport->root_lport) ?
+ unf_lport->low_level_func.support_max_npiv_num
+ : 0);
+ fc_host_npiv_vports_inuse(scsi_host) = 0;
+ fc_host_next_vport_number(scsi_host) = 0;
+
+ /* About speed mode */
+ if (unf_lport->low_level_func.fc_ser_max_speed == UNF_PORT_SPEED_32_G &&
+ unf_lport->card_type == UNF_FC_SERVER_BOARD_32_G) {
+ speed = FC_PORTSPEED_32GBIT | FC_PORTSPEED_16GBIT | FC_PORTSPEED_8GBIT;
+ } else if (unf_lport->low_level_func.fc_ser_max_speed == UNF_PORT_SPEED_16_G &&
+ unf_lport->card_type == UNF_FC_SERVER_BOARD_16_G) {
+ speed = FC_PORTSPEED_16GBIT | FC_PORTSPEED_8GBIT | FC_PORTSPEED_4GBIT;
+ } else if (unf_lport->low_level_func.fc_ser_max_speed == UNF_PORT_SPEED_8_G &&
+ unf_lport->card_type == UNF_FC_SERVER_BOARD_8_G) {
+ speed = FC_PORTSPEED_8GBIT | FC_PORTSPEED_4GBIT | FC_PORTSPEED_2GBIT;
+ }
+
+ fc_host_supported_speeds(scsi_host) = speed;
+}
+
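+/* Allocate and register a Scsi_Host for the L_Port, enable DIF/DIX
+ * protection if configured, and set up the fc_host attributes.
+ */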
+int unf_alloc_scsi_host(struct Scsi_Host **unf_scsi_host,
+ struct unf_host_param *host_param)
+{
+ int ret = RETURN_ERROR;
+ struct Scsi_Host *scsi_host = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(unf_scsi_host, RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(host_param, RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR, "[event]Alloc scsi host...");
+
+ /* Check L_Port validity */
+ unf_lport = (struct unf_lport *)(host_param->lport);
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port is NULL and return directly");
+
+ return RETURN_ERROR;
+ }
+
+ scsi_host_template.can_queue = host_param->can_queue;
+ scsi_host_template.cmd_per_lun = host_param->cmnd_per_lun;
+ scsi_host_template.sg_tablesize = host_param->sg_table_size;
+ scsi_host_template.max_sectors = host_param->max_sectors;
+
+ /* Alloc scsi host */
+ scsi_host = scsi_host_alloc(&scsi_host_template, sizeof(u64));
+ if (unlikely(!scsi_host)) {
+		FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Alloc scsi host failed");
+
+ return RETURN_ERROR;
+ }
+
+ scsi_host->max_channel = host_param->max_channel;
+ scsi_host->max_lun = host_param->max_lun;
+ scsi_host->max_cmd_len = host_param->max_cmnd_len;
+ scsi_host->unchecked_isa_dma = 0;
+ scsi_host->hostdata[0] = (unsigned long)(uintptr_t)unf_lport; /* save lport to scsi */
+ scsi_host->unique_id = scsi_host->host_no;
+ scsi_host->max_id = host_param->max_id;
+ scsi_host->transportt = (unf_lport == unf_lport->root_lport)
+ ? scsi_transport_template
+ : scsi_transport_template_v;
+
+ /* register DIF/DIX protection */
+ if (spfc_dif_enable) {
+ /* Enable DIF and DIX function */
+ scsi_host_set_prot(scsi_host, spfc_dif_type);
+
+ spfc_guard = SHOST_DIX_GUARD_CRC;
+ /* Enable IP checksum algorithm in DIX */
+ if (dix_flag)
+ spfc_guard |= SHOST_DIX_GUARD_IP;
+ scsi_host_set_guard(scsi_host, spfc_guard);
+ }
+
+ /* Add scsi host */
+ ret = scsi_add_host(scsi_host, host_param->pdev);
+ if (unlikely(ret)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Add scsi host failed with return value %d", ret);
+
+ scsi_host_put(scsi_host);
+ return RETURN_ERROR;
+ }
+
+ /* Set scsi host attribute */
+ unf_host_init_attr_setting(scsi_host);
+ *unf_scsi_host = scsi_host;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Alloc and add scsi host(0x%llx) succeed",
+ (u64)scsi_host);
+
+ return RETURN_OK;
+}
+
+void unf_free_scsi_host(struct Scsi_Host *unf_scsi_host)
+{
+ struct Scsi_Host *scsi_host = NULL;
+
+ scsi_host = unf_scsi_host;
+ fc_remove_host(scsi_host);
+ scsi_remove_host(scsi_host);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Remove scsi host(%u) succeed", scsi_host->host_no);
+
+ scsi_host_put(scsi_host);
+}
+
+u32 unf_register_ini_transport(void)
+{
+ /* Register INI Transport */
+ scsi_transport_template = fc_attach_transport(&function_template);
+
+ if (!scsi_transport_template) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Register FC transport to scsi failed");
+
+ return RETURN_ERROR;
+ }
+
+ scsi_transport_template_v = fc_attach_transport(&function_template_v);
+ if (!scsi_transport_template_v) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Register FC vport transport to scsi failed");
+
+ fc_release_transport(scsi_transport_template);
+
+ return RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Register FC transport to scsi succeed");
+
+ return RETURN_OK;
+}
+
+void unf_unregister_ini_transport(void)
+{
+ fc_release_transport(scsi_transport_template);
+ fc_release_transport(scsi_transport_template_v);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Unregister FC transport succeed");
+}
+
+void unf_save_sense_data(void *scsi_cmd, const char *sense, int sens_len)
+{
+ struct scsi_cmnd *cmd = NULL;
+
+ FC_CHECK_RETURN_VOID(scsi_cmd);
+ FC_CHECK_RETURN_VOID(sense);
+
+ cmd = (struct scsi_cmnd *)scsi_cmd;
+ memcpy(cmd->sense_buffer, sense, sens_len);
+}
+
diff --git a/drivers/scsi/spfc/common/unf_scsi_common.h b/drivers/scsi/spfc/common/unf_scsi_common.h
new file mode 100644
index 000000000000..c73b5c3d56ce
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_scsi_common.h
@@ -0,0 +1,570 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_SCSI_COMMON
+#define UNF_SCSI_COMMON
+
+#include "unf_type.h"
+
+#define SCSI_SENSE_DATA_LEN 96
+#define DRV_SCSI_CDB_LEN 16
+#define DRV_SCSI_LUN_LEN 8
+
+#define DRV_ENTRY_PER_SGL 64 /* Number of SGE entries in one SGL */
+
+#define UNF_DIF_AREA_SIZE (8)
+
+struct unf_dif_control_info {
+ u16 app_tag;
+ u16 flags;
+ u32 protect_opcode;
+ u32 fcp_dl;
+ u32 start_lba;
+ u8 actual_dif[UNF_DIF_AREA_SIZE];
+ u8 expected_dif[UNF_DIF_AREA_SIZE];
+ u32 dif_sge_count;
+ void *dif_sgl;
+};
+
+struct dif_result_info {
+ unsigned char actual_idf[UNF_DIF_AREA_SIZE];
+ unsigned char expect_dif[UNF_DIF_AREA_SIZE];
+};
+
+struct drv_sge {
+ char *buf;
+ void *page_ctrl;
+ u32 Length;
+ u32 offset;
+};
+
+struct drv_scsi_cmd_result {
+ u32 Status;
+ u16 sense_data_length; /* sense data length */
+ u8 sense_data[SCSI_SENSE_DATA_LEN]; /* fail sense info */
+};
+
+enum drv_io_direction {
+ DRV_IO_BIDIRECTIONAL = 0,
+ DRV_IO_DIRECTION_WRITE = 1,
+ DRV_IO_DIRECTION_READ = 2,
+ DRV_IO_DIRECTION_NONE = 3,
+};
+
+struct drv_sgl {
+	struct drv_sgl *next_sgl; /* points to the next SGL in the SGL list */
+ unsigned short num_sges_in_chain;
+ unsigned short num_sges_in_sgl;
+ u32 flag;
+ u64 serial_num;
+ struct drv_sge sge[DRV_ENTRY_PER_SGL];
+ struct list_head node;
+ u32 cpu_id;
+};
+
+struct dif_info {
+	/* Result returned when the data protection information is
+	 * inconsistent
+	 */
+	struct dif_result_info dif_result;
+	/* Data protection information operation code
+	 * bit[31-24] other operation code
+	 * bit[23-16] data protection information operation
+	 * bit[15-8]  data protection information verification
+	 * bit[7-0]   data protection information replace
+	 */
+ u32 protect_opcode;
+ unsigned short apptag;
+ u64 start_lba; /* IO start LBA */
+ struct drv_sgl *protection_sgl;
+};
+
+struct drv_device_address {
+ u16 initiator_id; /* ini id */
+ u16 bus_id; /* device bus id */
+ u16 target_id; /* device target id,for PCIe SSD,device id */
+ u16 function_id; /* function id */
+};
+
+struct drv_ini_cmd {
+ struct drv_scsi_cmd_result result;
+ void *upper; /* product private pointer */
+ void *lower; /* driver private pointer */
+ u8 cdb[DRV_SCSI_CDB_LEN]; /* CDB edit by product */
+ u8 lun[DRV_SCSI_LUN_LEN];
+ u16 cmd_len;
+ u16 tag; /* SCSI cmd add by driver */
+ enum drv_io_direction io_direciton;
+ u32 data_length;
+ u32 underflow;
+ u32 overflow;
+ u32 resid;
+ u64 port_id;
+ u64 cmd_sn;
+ struct drv_device_address addr;
+ struct drv_sgl *sgl;
+ void *device;
+ void (*done)(struct drv_ini_cmd *cmd); /* callback pointer */
+ struct dif_info dif_info;
+};
+
+typedef void (*uplevel_cmd_done)(struct drv_ini_cmd *scsi_cmnd);
+
+#ifndef SUCCESS
+#define SUCCESS 0x2002
+#endif
+#ifndef FAILED
+#define FAILED 0x2003
+#endif
+
+#ifndef DRIVER_OK
+#define DRIVER_OK 0x00 /* Driver status */
+#endif
+
+#ifndef PCI_FUNC
+#define PCI_FUNC(devfn) ((devfn) & 0x07)
+#endif
+
+#define UNF_SCSI_ABORT_SUCCESS SUCCESS
+#define UNF_SCSI_ABORT_FAIL FAILED
+
+#define UNF_SCSI_STATUS(byte) (byte)
+#define UNF_SCSI_MSG(byte) ((byte) << 8)
+#define UNF_SCSI_HOST(byte) ((byte) << 16)
+#define UNF_SCSI_DRIVER(byte) ((byte) << 24)
+#define UNF_GET_SCSI_HOST_ID(scsi_host) ((scsi_host)->host_no)
+
+struct unf_ini_error_code {
+ u32 drv_errcode; /* driver error code */
+ u32 ap_errcode; /* up level error code */
+};
+
+typedef u32 (*ini_get_sgl_entry_buf)(void *upper_cmnd, void *driver_sgl,
+ void **upper_sgl, u32 *req_index,
+ u32 *index, char **buf,
+ u32 *buf_len);
+
+#define UNF_SCSI_SENSE_BUFFERSIZE 96
+struct unf_scsi_cmnd {
+ u32 scsi_host_id;
+ u32 scsi_id; /* cmd->dev->id */
+ u64 raw_lun_id;
+ u64 port_id;
+ u32 under_flow; /* Underflow */
+ u32 transfer_len; /* Transfer Length */
+ u32 resid; /* Resid */
+ u32 sense_buflen;
+ int result;
+ u32 entry_count; /* IO Buffer counter */
+ u32 abort;
+ u32 err_code_table_cout; /* error code size */
+ u64 cmnd_sn;
+ ulong time_out; /* EPL driver add timer */
+ u16 cmnd_len; /* Cdb length */
+ u8 data_direction; /* data direction */
+ u8 *pcmnd; /* SCSI CDB */
+ u8 *sense_buf;
+	void *drv_private; /* driver host pointer */
+	void *driver_scribble; /* Xchg pointer */
+ void *upper_cmnd; /* UpperCmnd pointer by driver */
+ u8 *lun_id; /* new lunid */
+ u32 world_id;
+ struct unf_dif_control_info dif_control; /* DIF control */
+ struct unf_ini_error_code *err_code_table; /* error code table */
+ void *sgl; /* Sgl pointer */
+ ini_get_sgl_entry_buf unf_ini_get_sgl_entry;
+ void (*done)(struct unf_scsi_cmnd *cmd);
+ uplevel_cmd_done uplevel_done;
+ struct dif_info dif_info;
+ u32 qos_level;
+ void *pinitiator;
+};
+
+#ifndef FC_PORTSPEED_32GBIT
+#define FC_PORTSPEED_32GBIT 0x40
+#endif
+
+#define UNF_GID_PORT_CNT 2048
+#define UNF_RSCN_PAGE_SUM 255
+
+#define UNF_CPU_ENDIAN
+
+#define UNF_NPORTID_MASK 0x00FFFFFF
+#define UNF_DOMAIN_MASK 0x00FF0000
+#define UNF_AREA_MASK 0x0000FF00
+#define UNF_ALPA_MASK 0x000000FF
+
+struct unf_fc_head {
+ u32 rctl_did; /* Routing control and Destination address of the seq */
+ u32 csctl_sid; /* Class control and Source address of the sequence */
+ u32 type_fctl; /* Data type and Initial frame control value of the seq
+ */
+ u32 seqid_dfctl_seqcnt; /* Seq ID, Data Field and Initial seq count */
+ u32 oxid_rxid; /* Originator & Responder exchange IDs for the sequence
+ */
+ u32 parameter; /* Relative offset of the first frame of the sequence */
+};
+
+#define UNF_FCPRSP_CTL_LEN (24)
+#define UNF_MAX_RSP_INFO_LEN (8)
+#define UNF_RSP_LEN_VLD (1 << 0)
+#define UNF_SENSE_LEN_VLD (1 << 1)
+#define UNF_RESID_OVERRUN (1 << 2)
+#define UNF_RESID_UNDERRUN (1 << 3)
+#define UNF_FCP_CONF_REQ (1 << 4)
+
+/* T10: FCP2r.07 9.4.1 Overview and format of FCP_RSP IU */
+struct unf_fcprsp_iu {
+ u32 reserved[2];
+ u8 reserved2[2];
+ u8 control;
+ u8 fcp_status;
+ u32 fcp_residual;
+ u32 fcp_sense_len; /* Length of sense info field */
+ u32 fcp_response_len; /* Length of response info field in bytes 0,4 or 8
+ */
+ u8 fcp_resp_info[UNF_MAX_RSP_INFO_LEN]; /* Buffer for response info */
+ u8 fcp_sense_info[SCSI_SENSE_DATA_LEN]; /* Buffer for sense info */
+} __attribute__((packed));
+
+#define UNF_CMD_REF_MASK 0xFF000000
+#define UNF_TASK_ATTR_MASK 0x00070000
+#define UNF_TASK_MGMT_MASK 0x0000FF00
+#define UNF_FCP_WR_DATA 0x00000001
+#define UNF_FCP_RD_DATA 0x00000002
+#define UNF_CDB_LEN_MASK 0x0000007C
+#define UNF_FCP_CDB_LEN_16 (16)
+#define UNF_FCP_CDB_LEN_32 (32)
+#define UNF_FCP_LUNID_LEN_8 (8)
+
+/* FCP-4 :Table 27 - RSP_CODE field */
+#define UNF_FCP_TM_RSP_COMPLETE (0)
+#define UNF_FCP_TM_INVALID_CMND (0x2)
+#define UNF_FCP_TM_RSP_REJECT (0x4)
+#define UNF_FCP_TM_RSP_FAIL (0x5)
+#define UNF_FCP_TM_RSP_SUCCEED (0x8)
+#define UNF_FCP_TM_RSP_INCRECT_LUN (0x9)
+
+#define UNF_SET_TASK_MGMT_FLAGS(fcp_tm_code) ((fcp_tm_code) << 8)
+#define UNF_GET_TASK_MGMT_FLAGS(control) (((control) & UNF_TASK_MGMT_MASK) >> 8)
+
+enum unf_task_mgmt_cmd {
+ UNF_FCP_TM_QUERY_TASK_SET = (1 << 0),
+ UNF_FCP_TM_ABORT_TASK_SET = (1 << 1),
+ UNF_FCP_TM_CLEAR_TASK_SET = (1 << 2),
+ UNF_FCP_TM_QUERY_UNIT_ATTENTION = (1 << 3),
+ UNF_FCP_TM_LOGICAL_UNIT_RESET = (1 << 4),
+ UNF_FCP_TM_TARGET_RESET = (1 << 5),
+ UNF_FCP_TM_CLEAR_ACA = (1 << 6),
+ UNF_FCP_TM_TERMINATE_TASK = (1 << 7) /* obsolete */
+};
+
+struct unf_fcp_cmnd {
+ u8 lun[UNF_FCP_LUNID_LEN_8]; /* Logical unit number */
+ u32 control;
+ u8 cdb[UNF_FCP_CDB_LEN_16]; /* Payload data containing cdb info */
+ u32 data_length; /* Number of bytes expected to be transferred */
+} __attribute__((packed));
+
+struct unf_fcp_cmd_hdr {
+ struct unf_fc_head frame_hdr; /* FCHS structure */
+ struct unf_fcp_cmnd fcp_cmnd; /* Fcp Cmnd struct */
+};
+
+/* FC-LS-2 Common Service Parameter applicability */
+struct unf_fabric_coparm {
+#if defined(UNF_CPU_ENDIAN)
+ u32 bb_credit : 16; /* 0 [0-15] */
+ u32 lowest_version : 8; /* 0 [16-23] */
+ u32 highest_version : 8; /* 0 [24-31] */
+#else
+ u32 highest_version : 8; /* 0 [24-31] */
+ u32 lowest_version : 8; /* 0 [16-23] */
+ u32 bb_credit : 16; /* 0 [0-15] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
+ u32 bbscn : 4; /* 1 [12-15] */
+ u32 payload_length : 1; /* 1 [16] */
+ u32 seq_cnt : 1; /* 1 [17] */
+ u32 dynamic_half_duplex : 1; /* 1 [18] */
+ u32 r_t_tov : 1; /* 1 [19] */
+ u32 reserved_co2 : 6; /* 1 [20-25] */
+ u32 e_d_tov_resolution : 1; /* 1 [26] */
+ u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
+ u32 nport : 1; /* 1 [28] */
+ u32 mnid_assignment : 1; /* 1 [29] */
+ u32 random_relative_offset : 1; /* 1 [30] */
+ u32 clean_address : 1; /* 1 [31] */
+#else
+ u32 reserved_co2 : 2; /* 1 [24-25] */
+ u32 e_d_tov_resolution : 1; /* 1 [26] */
+ u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
+ u32 nport : 1; /* 1 [28] */
+ u32 mnid_assignment : 1; /* 1 [29] */
+ u32 random_relative_offset : 1; /* 1 [30] */
+ u32 clean_address : 1; /* 1 [31] */
+
+ u32 payload_length : 1; /* 1 [16] */
+ u32 seq_cnt : 1; /* 1 [17] */
+ u32 dynamic_half_duplex : 1; /* 1 [18] */
+ u32 r_t_tov : 1; /* 1 [19] */
+ u32 reserved_co5 : 4; /* 1 [20-23] */
+
+ u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
+ u32 bbscn : 4; /* 1 [12-15] */
+#endif
+ u32 r_a_tov; /* 2 [0-31] */
+ u32 e_d_tov; /* 3 [0-31] */
+};
+
+/* FC-LS-2 Common Service Parameter applicability */
+/*Common Service Parameters - PLOGI and PLOGI LS_ACC */
+struct lgn_port_coparm {
+#if defined(UNF_CPU_ENDIAN)
+ u32 bb_credit : 16; /* 0 [0-15] */
+ u32 lowest_version : 8; /* 0 [16-23] */
+ u32 highest_version : 8; /* 0 [24-31] */
+#else
+ u32 highest_version : 8; /* 0 [24-31] */
+ u32 lowest_version : 8; /* 0 [16-23] */
+ u32 bb_credit : 16; /* 0 [0-15] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
+ u32 bbscn : 4; /* 1 [12-15] */
+ u32 payload_length : 1; /* 1 [16] */
+ u32 seq_cnt : 1; /* 1 [17] */
+ u32 dynamic_half_duplex : 1; /* 1 [18] */
+ u32 reserved_co2 : 7; /* 1 [19-25] */
+ u32 e_d_tov_resolution : 1; /* 1 [26] */
+ u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
+ u32 nport : 1; /* 1 [28] */
+ u32 vendor_version_level : 1; /* 1 [29] */
+ u32 random_relative_offset : 1; /* 1 [30] */
+ u32 continuously_increasing : 1; /* 1 [31] */
+#else
+ u32 reserved_co2 : 2; /* 1 [24-25] */
+ u32 e_d_tov_resolution : 1; /* 1 [26] */
+ u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
+ u32 nport : 1; /* 1 [28] */
+ u32 vendor_version_level : 1; /* 1 [29] */
+ u32 random_relative_offset : 1; /* 1 [30] */
+ u32 continuously_increasing : 1; /* 1 [31] */
+
+ u32 payload_length : 1; /* 1 [16] */
+ u32 seq_cnt : 1; /* 1 [17] */
+ u32 dynamic_half_duplex : 1; /* 1 [18] */
+ u32 reserved_co5 : 5; /* 1 [19-23] */
+
+ u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
+ u32 reserved_co1 : 4; /* 1 [12-15] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 relative_offset : 16; /* 2 [0-15] */
+ u32 nport_total_concurrent_sequences : 16; /* 2 [16-31] */
+#else
+ u32 nport_total_concurrent_sequences : 16; /* 2 [16-31] */
+ u32 relative_offset : 16; /* 2 [0-15] */
+#endif
+
+ u32 e_d_tov;
+};
+
+/* FC-LS-2 Class Service Parameters Applicability */
+struct unf_lgn_port_clparm {
+#if defined(UNF_CPU_ENDIAN)
+ u32 reserved_cl1 : 6; /* 0 [0-5] */
+ u32 ic_data_compression_history_buffer_size : 2; /* 0 [6-7] */
+ u32 ic_data_compression_capable : 1; /* 0 [8] */
+
+ u32 ic_ack_generation_assistance : 1; /* 0 [9] */
+ u32 ic_ack_n_capable : 1; /* 0 [10] */
+ u32 ic_ack_o_capable : 1; /* 0 [11] */
+ u32 ic_initial_responder_processes_accociator : 2; /* 0 [12-13] */
+ u32 ic_x_id_reassignment : 2; /* 0 [14-15] */
+
+ u32 reserved_cl2 : 7; /* 0 [16-22] */
+ u32 priority : 1; /* 0 [23] */
+ u32 buffered_class : 1; /* 0 [24] */
+ u32 camp_on : 1; /* 0 [25] */
+ u32 dedicated_simplex : 1; /* 0 [26] */
+ u32 sequential_delivery : 1; /* 0 [27] */
+ u32 stacked_connect_request : 2; /* 0 [28-29] */
+ u32 intermix_mode : 1; /* 0 [30] */
+ u32 valid : 1; /* 0 [31] */
+#else
+ u32 buffered_class : 1; /* 0 [24] */
+ u32 camp_on : 1; /* 0 [25] */
+ u32 dedicated_simplex : 1; /* 0 [26] */
+ u32 sequential_delivery : 1; /* 0 [27] */
+ u32 stacked_connect_request : 2; /* 0 [28-29] */
+ u32 intermix_mode : 1; /* 0 [30] */
+ u32 valid : 1; /* 0 [31] */
+ u32 reserved_cl2 : 7; /* 0 [16-22] */
+ u32 priority : 1; /* 0 [23] */
+ u32 ic_data_compression_capable : 1; /* 0 [8] */
+ u32 ic_ack_generation_assistance : 1; /* 0 [9] */
+ u32 ic_ack_n_capable : 1; /* 0 [10] */
+ u32 ic_ack_o_capable : 1; /* 0 [11] */
+ u32 ic_initial_responder_processes_accociator : 2; /* 0 [12-13] */
+ u32 ic_x_id_reassignment : 2; /* 0 [14-15] */
+
+ u32 reserved_cl1 : 6; /* 0 [0-5] */
+ u32 ic_data_compression_history_buffer_size : 2; /* 0 [6-7] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 received_data_field_size : 16; /* 1 [0-15] */
+
+ u32 reserved_cl3 : 5; /* 1 [16-20] */
+ u32 rc_data_compression_history_buffer_size : 2; /* 1 [21-22] */
+ u32 rc_data_compression_capable : 1; /* 1 [23] */
+
+ u32 rc_data_categories_per_sequence : 2; /* 1 [24-25] */
+ u32 reserved_cl4 : 1; /* 1 [26] */
+ u32 rc_error_policy_supported : 2; /* 1 [27-28] */
+ u32 rc_x_id_interlock : 1; /* 1 [29] */
+ u32 rc_ack_n_capable : 1; /* 1 [30] */
+ u32 rc_ack_o_capable : 1; /* 1 [31] */
+#else
+ u32 rc_data_categories_per_sequence : 2; /* 1 [24-25] */
+ u32 reserved_cl4 : 1; /* 1 [26] */
+ u32 rc_error_policy_supported : 2; /* 1 [27-28] */
+ u32 rc_x_id_interlock : 1; /* 1 [29] */
+ u32 rc_ack_n_capable : 1; /* 1 [30] */
+ u32 rc_ack_o_capable : 1; /* 1 [31] */
+
+ u32 reserved_cl3 : 5; /* 1 [16-20] */
+ u32 rc_data_compression_history_buffer_size : 2; /* 1 [21-22] */
+ u32 rc_data_compression_capable : 1; /* 1 [23] */
+
+ u32 received_data_field_size : 16; /* 1 [0-15] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 nport_end_to_end_credit : 15; /* 2 [0-14] */
+ u32 reserved_cl5 : 1; /* 2 [15] */
+
+ u32 concurrent_sequences : 16; /* 2 [16-31] */
+#else
+ u32 concurrent_sequences : 16; /* 2 [16-31] */
+
+ u32 nport_end_to_end_credit : 15; /* 2 [0-14] */
+ u32 reserved_cl5 : 1; /* 2 [15] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 reserved_cl6 : 16; /* 3 [0-15] */
+ u32 open_sequence_per_exchange : 16; /* 3 [16-31] */
+#else
+ u32 open_sequence_per_exchange : 16; /* 3 [16-31] */
+ u32 reserved_cl6 : 16; /* 3 [0-15] */
+#endif
+};
+
+struct unf_fabric_parm {
+ struct unf_fabric_coparm co_parms;
+ u32 high_port_name;
+ u32 low_port_name;
+ u32 high_node_name;
+ u32 low_node_name;
+ struct unf_lgn_port_clparm cl_parms[3];
+ u32 reserved_1[4];
+ u32 vendor_version_level[4];
+};
+
+struct unf_lgn_parm {
+ struct lgn_port_coparm co_parms;
+ u32 high_port_name;
+ u32 low_port_name;
+ u32 high_node_name;
+ u32 low_node_name;
+ struct unf_lgn_port_clparm cl_parms[3];
+ u32 reserved_1[4];
+ u32 vendor_version_level[4];
+};
+
+#define ELS_RJT 0x1
+#define ELS_ACC 0x2
+#define ELS_PLOGI 0x3
+#define ELS_FLOGI 0x4
+#define ELS_LOGO 0x5
+#define ELS_ECHO 0x10
+#define ELS_RRQ 0x12
+#define ELS_REC 0x13
+#define ELS_PRLI 0x20
+#define ELS_PRLO 0x21
+#define ELS_TPRLO 0x24
+#define ELS_PDISC 0x50
+#define ELS_FDISC 0x51
+#define ELS_ADISC 0x52
+#define ELS_RSCN 0x61 /* registered state change notification */
+#define ELS_SCR 0x62 /* state change registration */
+
+#define NS_GIEL 0X0101
+#define NS_GA_NXT 0X0100
+#define NS_GPN_ID 0x0112 /* get port name by ID */
+#define NS_GNN_ID 0x0113 /* get node name by ID */
+#define NS_GFF_ID 0x011f /* get FC-4 features by ID */
+#define NS_GID_PN 0x0121 /* get ID for port name */
+#define NS_GID_NN 0x0131 /* get IDs for node name */
+#define NS_GID_FT 0x0171 /* get IDs by FC4 type */
+#define NS_GPN_FT 0x0172 /* get port names by FC4 type */
+#define NS_GID_PT 0x01a1 /* get IDs by port type */
+#define NS_RFT_ID 0x0217 /* reg FC4 type for ID */
+#define NS_RPN_ID 0x0212 /* reg port name for ID */
+#define NS_RNN_ID 0x0213 /* reg node name for ID */
+#define NS_RSNPN 0x0218 /* reg symbolic port name */
+#define NS_RFF_ID 0x021f /* reg FC4 Features for ID */
+#define NS_RSNN 0x0239 /* reg symbolic node name */
+#define ST_NULL 0xffff /* null/invalid command code */
+
+#define BLS_ABTS 0xA001 /* ABTS */
+
+#define FCP_SRR 0x14 /* Sequence Retransmission Request */
+#define UNF_FC_FID_DOM_MGR 0xfffc00 /* domain manager base */
+enum unf_fc_well_known_fabric_id {
+ UNF_FC_FID_NONE = 0x000000, /* No destination */
+ UNF_FC_FID_DOM_CTRL = 0xfffc01, /* domain controller */
+ UNF_FC_FID_BCAST = 0xffffff, /* broadcast */
+ UNF_FC_FID_FLOGI = 0xfffffe, /* fabric login */
+ UNF_FC_FID_FCTRL = 0xfffffd, /* fabric controller */
+ UNF_FC_FID_DIR_SERV = 0xfffffc, /* directory server */
+ UNF_FC_FID_TIME_SERV = 0xfffffb, /* time server */
+ UNF_FC_FID_MGMT_SERV = 0xfffffa, /* management server */
+ UNF_FC_FID_QOS = 0xfffff9, /* QoS Facilitator */
+ UNF_FC_FID_ALIASES = 0xfffff8, /* alias server (FC-PH2) */
+ UNF_FC_FID_SEC_KEY = 0xfffff7, /* Security key dist. server */
+ UNF_FC_FID_CLOCK = 0xfffff6, /* clock synch server */
+ UNF_FC_FID_MCAST_SERV = 0xfffff5 /* multicast server */
+};
+
+#define INVALID_WORLD_ID 0xfffffffc
+
+struct unf_host_param {
+ int can_queue;
+ u16 sg_table_size;
+ short cmnd_per_lun;
+ u32 max_id;
+ u32 max_lun;
+ u32 max_channel;
+ u16 max_cmnd_len;
+ u16 max_sectors;
+ u64 dma_boundary;
+ u32 port_id;
+ void *lport;
+ struct device *pdev;
+};
+
+int unf_alloc_scsi_host(struct Scsi_Host **unf_scsi_host, struct unf_host_param *host_param);
+void unf_free_scsi_host(struct Scsi_Host *unf_scsi_host);
+u32 unf_register_ini_transport(void);
+void unf_unregister_ini_transport(void);
+void unf_save_sense_data(void *scsi_cmd, const char *sense, int sens_len);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_service.c b/drivers/scsi/spfc/common/unf_service.c
new file mode 100644
index 000000000000..8f72f6470647
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_service.c
@@ -0,0 +1,1439 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_service.h"
+#include "unf_log.h"
+#include "unf_rport.h"
+#include "unf_ls.h"
+#include "unf_gs.h"
+
+struct unf_els_handle_table els_handle_table[] = {
+ {ELS_PLOGI, unf_plogi_handler}, {ELS_FLOGI, unf_flogi_handler},
+ {ELS_LOGO, unf_logo_handler}, {ELS_ECHO, unf_echo_handler},
+ {ELS_RRQ, unf_rrq_handler}, {ELS_REC, unf_rec_handler},
+ {ELS_PRLI, unf_prli_handler}, {ELS_PRLO, unf_prlo_handler},
+ {ELS_PDISC, unf_pdisc_handler}, {ELS_ADISC, unf_adisc_handler},
+ {ELS_RSCN, unf_rscn_handler} };
+
+u32 max_frame_size = UNF_DEFAULT_FRAME_SIZE;
+
+#define UNF_NEED_BIG_RESPONSE_BUFF(cmnd_code) \
+ (((cmnd_code) == ELS_ECHO) || ((cmnd_code) == NS_GID_PT) || \
+ ((cmnd_code) == NS_GID_FT))
+
+#define NEED_REFRESH_NPORTID(pkg) \
+ ((((pkg)->cmnd == ELS_PLOGI) || ((pkg)->cmnd == ELS_PDISC) || \
+ ((pkg)->cmnd == ELS_ADISC)))
+
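+/*
+ * Pick the SQ for this exchange: spread exchanges across the R_Port's
+ * per-session SQs by hot-pool tag, falling back to SQ 0 when the
+ * exchange or its R_Port is not available.
+ */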
+void unf_select_sq(struct unf_xchg *xchg, struct unf_frame_pkg *pkg)
+{
+ u32 ssq_index = 0;
+ struct unf_rport *unf_rport = NULL;
+
+ if (likely(xchg)) {
+ unf_rport = xchg->rport;
+
+ if (unf_rport) {
+ ssq_index = (xchg->hotpooltag % UNF_SQ_NUM_PER_SESSION) +
+ unf_rport->sqn_base;
+ }
+ }
+
+ pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX] = ssq_index;
+}
+
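+/*
+ * Send an ELS/GS request through the low level driver. A per-exchange
+ * timer is armed first (GS vs ELS timeout, with a larger minimum for RRQ
+ * and a fixed interval for LOGO) and cancelled again if the send fails.
+ */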
+u32 unf_ls_gs_cmnd_send(struct unf_lport *lport, struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ ulong time_out = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ if (unlikely(!lport->low_level_func.service_op.unf_ls_gs_send)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) LS/GS send function is NULL",
+ lport->port_id);
+
+ return ret;
+ }
+
+ if (pkg->type == UNF_PKG_GS_REQ)
+ time_out = UNF_GET_GS_SFS_XCHG_TIMER(lport);
+ else
+ time_out = UNF_GET_ELS_SFS_XCHG_TIMER(lport);
+
+ if (xchg->cmnd_code == ELS_RRQ) {
+ time_out = ((ulong)UNF_GET_ELS_SFS_XCHG_TIMER(lport) > UNF_RRQ_MIN_TIMEOUT_INTERVAL)
+ ? (ulong)UNF_GET_ELS_SFS_XCHG_TIMER(lport)
+ : UNF_RRQ_MIN_TIMEOUT_INTERVAL;
+ } else if (xchg->cmnd_code == ELS_LOGO) {
+ time_out = UNF_LOGO_TIMEOUT_INTERVAL;
+ }
+
+ pkg->private_data[PKG_PRIVATE_XCHG_TIMEER] = (u32)time_out;
+ lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, time_out, UNF_TIMER_TYPE_SFS);
+
+ unf_select_sq(xchg, pkg);
+
+ ret = lport->low_level_func.service_op.unf_ls_gs_send(lport->fc_port, pkg);
+ if (unlikely(ret != RETURN_OK))
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ return ret;
+}
+
+static u32 unf_bls_cmnd_send(struct unf_lport *lport, struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ pkg->private_data[PKG_PRIVATE_XCHG_TIMEER] = (u32)UNF_GET_BLS_SFS_XCHG_TIMER(lport);
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+
+ unf_select_sq(xchg, pkg);
+
+ return lport->low_level_func.service_op.unf_bls_send(lport->fc_port, pkg);
+}
+
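+/*
+ * Copy addressing, exchange IDs and SFS payload information from the
+ * exchange into the frame package handed to the low level driver;
+ * rport may be NULL, in which case invalid R_Port index/RX size
+ * markers are filled in.
+ */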
+void unf_fill_package(struct unf_frame_pkg *pkg, struct unf_xchg *xchg,
+ struct unf_rport *rport)
+{
+	/* rport may be NULL */
+ FC_CHECK_RETURN_VOID(pkg);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ pkg->cmnd = xchg->cmnd_code;
+ pkg->fcp_cmnd = &xchg->fcp_cmnd;
+ pkg->frame_head.csctl_sid = xchg->sid;
+ pkg->frame_head.rctl_did = xchg->did;
+ pkg->frame_head.oxid_rxid = ((u32)xchg->oxid << UNF_SHIFT_16 | xchg->rxid);
+ pkg->xchg_contex = xchg;
+
+ FC_CHECK_RETURN_VOID(xchg->lport);
+ pkg->private_data[PKG_PRIVATE_XCHG_VP_INDEX] = xchg->lport->vp_index;
+
+ if (!rport) {
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = UNF_RPORT_INVALID_INDEX;
+ pkg->private_data[PKG_PRIVATE_RPORT_RX_SIZE] = INVALID_VALUE32;
+ } else {
+ if (likely(rport->nport_id != UNF_FC_FID_FLOGI))
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = rport->rport_index;
+ else
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = SPFC_DEFAULT_RPORT_INDEX;
+
+ pkg->private_data[PKG_PRIVATE_RPORT_RX_SIZE] = rport->max_frame_size;
+ }
+
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+ pkg->private_data[PKG_PRIVATE_LOWLEVEL_XCHG_ADD] =
+ xchg->private_data[PKG_PRIVATE_LOWLEVEL_XCHG_ADD];
+ pkg->unf_cmnd_pload_bl.buffer_ptr =
+ (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ pkg->unf_cmnd_pload_bl.buf_dma_addr =
+ xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr;
+
+ /* Low level need to know payload length if send ECHO response */
+ pkg->unf_cmnd_pload_bl.length = xchg->fcp_sfs_union.sfs_entry.cur_offset;
+}
+
+struct unf_xchg *unf_get_sfs_free_xchg_and_init(struct unf_lport *lport, u32 did,
+ struct unf_rport *rport,
+ union unf_sfs_u **fc_entry)
+{
+ struct unf_xchg *xchg = NULL;
+ union unf_sfs_u *sfs_fc_entry = NULL;
+
+ xchg = unf_cm_get_free_xchg(lport, UNF_XCHG_TYPE_SFS);
+ if (!xchg)
+ return NULL;
+
+ xchg->did = did;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+ xchg->disc_rport = NULL;
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ sfs_fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!sfs_fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return NULL;
+ }
+
+ *fc_entry = sfs_fc_entry;
+
+ return xchg;
+}
+
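+/*
+ * Take one buffer from the exchange manager's big SFS free pool (moving
+ * it to the busy list) for payloads that do not fit into the inline SFS
+ * entry, e.g. RSCN, ECHO and name server GID responses.
+ */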
+void *unf_get_one_big_sfs_buf(struct unf_xchg *xchg)
+{
+ struct unf_big_sfs *big_sfs = NULL;
+ struct list_head *list_head = NULL;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ ulong flag = 0;
+ spinlock_t *big_sfs_pool_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+ xchg_mgr = xchg->xchg_mgr;
+ FC_CHECK_RETURN_VALUE(xchg_mgr, NULL);
+ big_sfs_pool_lock = &xchg_mgr->big_sfs_pool.big_sfs_pool_lock;
+
+ spin_lock_irqsave(big_sfs_pool_lock, flag);
+ if (!list_empty(&xchg_mgr->big_sfs_pool.list_freepool)) {
+ /* from free to busy */
+ list_head = UNF_OS_LIST_NEXT(&xchg_mgr->big_sfs_pool.list_freepool);
+ list_del(list_head);
+ xchg_mgr->big_sfs_pool.free_count--;
+ list_add_tail(list_head, &xchg_mgr->big_sfs_pool.list_busypool);
+ big_sfs = list_entry(list_head, struct unf_big_sfs, entry_bigsfs);
+ } else {
+ spin_unlock_irqrestore(big_sfs_pool_lock, flag);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Allocate big sfs buf failed, count(0x%x) exchange(0x%p) command(0x%x)",
+ xchg_mgr->big_sfs_pool.free_count, xchg, xchg->cmnd_code);
+
+ return NULL;
+ }
+ spin_unlock_irqrestore(big_sfs_pool_lock, flag);
+
+ xchg->big_sfs_buf = big_sfs;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Allocate one big sfs buffer(0x%p), remaining count(0x%x) exchange(0x%p) command(0x%x)",
+ big_sfs->addr, xchg_mgr->big_sfs_pool.free_count, xchg,
+ xchg->cmnd_code);
+
+ return big_sfs->addr;
+}
+
+static void unf_fill_rjt_pld(struct unf_els_rjt *els_rjt, u32 reason_code,
+ u32 reason_explanation)
+{
+ FC_CHECK_RETURN_VOID(els_rjt);
+
+ els_rjt->cmnd = UNF_ELS_CMND_RJT;
+ els_rjt->reason_code = (reason_code | reason_explanation);
+}
+
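+/*
+ * Build a BLS package from the aborted exchange and ask the low level
+ * driver to send an ABTS to the target; the result is only logged here,
+ * completion is handled when the BLS response comes back.
+ */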
+u32 unf_send_abts(struct unf_lport *lport, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ unf_rport = xchg->rport;
+ FC_CHECK_RETURN_VALUE(unf_rport, UNF_RETURN_ERROR);
+
+ /* set pkg info */
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+ pkg.type = UNF_PKG_BLS_REQ;
+ pkg.frame_head.csctl_sid = xchg->sid;
+ pkg.frame_head.rctl_did = xchg->did;
+ pkg.frame_head.oxid_rxid = ((u32)xchg->oxid << UNF_SHIFT_16 | xchg->rxid);
+ pkg.xchg_contex = xchg;
+ pkg.unf_cmnd_pload_bl.buffer_ptr = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+
+ pkg.unf_cmnd_pload_bl.buf_dma_addr = xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+
+ UNF_SET_XCHG_ALLOC_TIME(&pkg, xchg);
+ UNF_SET_ABORT_INFO_IOTYPE(&pkg, xchg);
+
+ pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] =
+ xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
+
+ /* Send ABTS frame to target */
+ ret = unf_bls_cmnd_send(lport, &pkg, xchg);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) send ABTS %s. Abort exch(0x%p) Cmdsn:0x%lx, tag(0x%x) iotype(0x%x)",
+ lport->port_id, lport->nport_id,
+ (ret == UNF_RETURN_ERROR) ? "failed" : "succeed", xchg,
+ (ulong)xchg->cmnd_sn, xchg->hotpooltag, xchg->data_direction);
+
+ return ret;
+}
+
+u32 unf_send_els_rjt_by_rport(struct unf_lport *lport, struct unf_xchg *xchg,
+ struct unf_rport *rport, struct unf_rjt_info *rjt_info)
+{
+ struct unf_els_rjt *els_rjt = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_RJT_TYPE(rjt_info->els_cmnd_code);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+ xchg->disc_rport = NULL;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.class_mode = UNF_FC_PROTOCOL_CLASS_3;
+ pkg.type = UNF_PKG_ELS_REPLY;
+
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ els_rjt = &fc_entry->els_rjt;
+ memset(els_rjt, 0, sizeof(struct unf_els_rjt));
+ unf_fill_rjt_pld(els_rjt, rjt_info->reason_code, rjt_info->reason_explanation);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send LS_RJT for 0x%x %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ rjt_info->els_cmnd_code,
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+
+ return ret;
+}
+
+u32 unf_send_els_rjt_by_did(struct unf_lport *lport, struct unf_xchg *xchg,
+ u32 did, struct unf_rjt_info *rjt_info)
+{
+ struct unf_els_rjt *els_rjt = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_RJT_TYPE(rjt_info->els_cmnd_code);
+ xchg->did = did;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = NULL;
+ xchg->disc_rport = NULL;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ unf_fill_package(&pkg, xchg, NULL);
+ pkg.class_mode = UNF_FC_PROTOCOL_CLASS_3;
+ pkg.type = UNF_PKG_ELS_REPLY;
+
+ if (rjt_info->reason_code == UNF_LS_RJT_CLASS_ERROR &&
+ rjt_info->class_mode != UNF_FC_PROTOCOL_CLASS_3) {
+ pkg.class_mode = rjt_info->class_mode;
+ }
+
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ els_rjt = &fc_entry->els_rjt;
+ memset(els_rjt, 0, sizeof(struct unf_els_rjt));
+ unf_fill_rjt_pld(els_rjt, rjt_info->reason_code, rjt_info->reason_explanation);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send LS_RJT %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id, did, ox_id, rx_id);
+
+ return ret;
+}
+
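+/*
+ * Default handler for ELS commands that have no entry in
+ * els_handle_table: log the command and answer with LS_RJT
+ * (command not supported).
+ */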
+static u32 unf_els_cmnd_default_handler(struct unf_lport *lport, struct unf_xchg *xchg, u32 sid,
+ u32 els_cmnd_code)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rjt_info rjt_info = {0};
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_KEVENT,
+ "[info]Receive Unknown ELS command(0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ els_cmnd_code, lport->port_id, sid, xchg->oxid);
+
+ memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
+ rjt_info.els_cmnd_code = els_cmnd_code;
+ rjt_info.reason_code = UNF_LS_RJT_NOT_SUPPORTED;
+
+ unf_rport = unf_get_rport_by_nport_id(lport, sid);
+ if (unf_rport) {
+ if (unf_rport->rport_index !=
+ xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX]) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) NPort handle(0x%x) from low level is not equal to RPort index(0x%x)",
+ lport->port_id, lport->nport_id,
+ xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX],
+ unf_rport->rport_index);
+ }
+ ret = unf_send_els_rjt_by_rport(lport, xchg, unf_rport, &rjt_info);
+ } else {
+ ret = unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
+ }
+
+ return ret;
+}
+
+static struct unf_xchg *unf_alloc_xchg_for_rcv_cmnd(struct unf_lport *lport,
+ struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ u32 i = 0;
+ u32 offset = 0;
+ u8 *cmnd_pld = NULL;
+ u32 first_dword = 0;
+ u32 alloc_time = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(pkg, NULL);
+
+ if (!pkg->xchg_contex) {
+ xchg = unf_cm_get_free_xchg(lport, UNF_XCHG_TYPE_SFS);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[warn]Port(0x%x) get new exchange failed",
+ lport->port_id);
+
+ return NULL;
+ }
+
+ offset = (xchg->fcp_sfs_union.sfs_entry.cur_offset);
+ cmnd_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld;
+ first_dword = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr
+ ->sfs_common.frame_head.rctl_did;
+
+ if (cmnd_pld || first_dword != 0 || offset != 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) exchange(0x%p) abnormal, maybe data overrun, start(%llu) command(0x%x)",
+ lport->port_id, xchg, xchg->alloc_jif, pkg->cmnd);
+
+ UNF_PRINT_SFS(UNF_INFO, lport->port_id,
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr,
+ sizeof(union unf_sfs_u));
+ }
+
+ memset(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr, 0, sizeof(union unf_sfs_u));
+
+ pkg->xchg_contex = (void *)xchg;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->fcp_sfs_union.sfs_entry.cur_offset = 0;
+ alloc_time = xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+ for (i = 0; i < PKG_MAX_PRIVATE_DATA_SIZE; i++)
+ xchg->private_data[i] = pkg->private_data[i];
+
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = alloc_time;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ } else {
+ xchg = (struct unf_xchg *)pkg->xchg_contex;
+ }
+
+ if (!xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ return NULL;
+ }
+
+ return xchg;
+}
+
+static u8 *unf_calc_big_cmnd_pld_buffer(struct unf_xchg *xchg, u32 cmnd_code)
+{
+ u8 *cmnd_pld = NULL;
+ void *buf = NULL;
+ u8 *dest = NULL;
+
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+
+ if (cmnd_code == ELS_RSCN)
+ cmnd_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld;
+ else
+ cmnd_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld;
+
+ if (!cmnd_pld) {
+ buf = unf_get_one_big_sfs_buf(xchg);
+ if (!buf)
+ return NULL;
+
+ if (cmnd_code == ELS_RSCN) {
+ memset(buf, 0, sizeof(struct unf_rscn_pld));
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld = buf;
+ } else {
+ memset(buf, 0, sizeof(struct unf_echo_payload));
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld = buf;
+ }
+
+ dest = (u8 *)buf;
+ } else {
+ dest = (u8 *)(cmnd_pld + xchg->fcp_sfs_union.sfs_entry.cur_offset);
+ }
+
+ return dest;
+}
+
+static u8 *unf_calc_other_pld_buffer(struct unf_xchg *xchg)
+{
+ u8 *dest = NULL;
+ u32 offset = 0;
+
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+
+ offset = (sizeof(struct unf_fc_head)) + (xchg->fcp_sfs_union.sfs_entry.cur_offset);
+ dest = (u8 *)((u8 *)(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) + offset);
+
+ return dest;
+}
+
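+/*
+ * Copy the received frame header and payload from the package into the
+ * exchange's SFS area, using a big SFS buffer for RSCN/ECHO and dropping
+ * payloads that would overrun the inline SFS entry.
+ */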
+struct unf_xchg *unf_mv_data_2_xchg(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ u8 *dest = NULL;
+ u32 length = 0;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(pkg, NULL);
+
+ xchg = unf_alloc_xchg_for_rcv_cmnd(lport, pkg);
+ if (!xchg)
+ return NULL;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+
+ memcpy(&xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->sfs_common.frame_head,
+ &pkg->frame_head, sizeof(pkg->frame_head));
+
+ if (pkg->cmnd == ELS_RSCN || pkg->cmnd == ELS_ECHO)
+ dest = unf_calc_big_cmnd_pld_buffer(xchg, pkg->cmnd);
+ else
+ dest = unf_calc_other_pld_buffer(xchg);
+
+ if (!dest) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ return NULL;
+ }
+
+ if (((xchg->fcp_sfs_union.sfs_entry.cur_offset +
+ pkg->unf_cmnd_pload_bl.length) > (u32)sizeof(union unf_sfs_u)) &&
+ pkg->cmnd != ELS_RSCN && pkg->cmnd != ELS_ECHO) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) exchange(0x%p) command(0x%x,0x%x) copy payload overrun(0x%x:0x%x:0x%x)",
+ lport->port_id, xchg, pkg->cmnd, xchg->hotpooltag,
+ xchg->fcp_sfs_union.sfs_entry.cur_offset,
+ pkg->unf_cmnd_pload_bl.length, (u32)sizeof(union unf_sfs_u));
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ return NULL;
+ }
+
+ length = pkg->unf_cmnd_pload_bl.length;
+ if (length > 0)
+ memcpy(dest, pkg->unf_cmnd_pload_bl.buffer_ptr, length);
+
+ xchg->fcp_sfs_union.sfs_entry.cur_offset += length;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return xchg;
+}
+
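+/*
+ * Sanity check an inbound ELS request: only class 3 is accepted, FLOGI
+ * is rejected in private loop topology, a PLOGI addressed to a
+ * well-known address is rejected, and the destination N_Port_ID must
+ * match this L_Port or one of its vports.
+ */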
+static u32 unf_check_els_cmnd_valid(struct unf_lport *lport, struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg)
+{
+ struct unf_rjt_info rjt_info = {0};
+ struct unf_lport *vport = NULL;
+ u32 sid = 0;
+ u32 did = 0;
+
+ sid = (pkg->frame_head.csctl_sid) & UNF_NPORTID_MASK;
+ did = (pkg->frame_head.rctl_did) & UNF_NPORTID_MASK;
+
+ memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
+
+ if (pkg->class_mode != UNF_FC_PROTOCOL_CLASS_3) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) unsupported class 0x%x cmd 0x%x, send LS_RJT",
+ lport->port_id, pkg->class_mode, pkg->cmnd);
+
+ rjt_info.reason_code = UNF_LS_RJT_CLASS_ERROR;
+ rjt_info.els_cmnd_code = pkg->cmnd;
+ rjt_info.class_mode = pkg->class_mode;
+ (void)unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ rjt_info.reason_code = UNF_LS_RJT_NOT_SUPPORTED;
+
+ if (pkg->cmnd == ELS_FLOGI && lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]LOGIN: Port(0x%x) received FLOGI in topology(0x%x), send LS_RJT",
+ lport->port_id, lport->act_topo);
+
+ rjt_info.els_cmnd_code = ELS_FLOGI;
+ (void)unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (pkg->cmnd == ELS_PLOGI && did >= UNF_FC_FID_DOM_MGR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) received PLOGI with well-known address(0x%x), send LS_RJT",
+ lport->port_id, did);
+
+ rjt_info.els_cmnd_code = ELS_PLOGI;
+ (void)unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if ((lport->nport_id == 0 || lport->nport_id == INVALID_VALUE32) &&
+ (NEED_REFRESH_NPORTID(pkg))) {
+ lport->nport_id = did;
+ } else if ((lport->nport_id != did) && (pkg->cmnd != ELS_FLOGI)) {
+ vport = unf_cm_lookup_vport_by_did(lport, did);
+ if (!vport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive ELS cmd(0x%x) with abnormal D_ID(0x%x)",
+ lport->nport_id, pkg->cmnd, did);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ return RETURN_OK;
+}
+
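+/*
+ * Entry point for inbound ELS requests: stage the payload into an SFS
+ * exchange (possibly across several packages), validate the command and
+ * route it to the matching handler in els_handle_table, or fall back to
+ * the default LS_RJT handler.
+ */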
+static u32 unf_rcv_els_cmnd_req(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 i = 0;
+ u32 sid = 0;
+ u32 did = 0;
+ struct unf_lport *vport = NULL;
+ u32 (*els_cmnd_handler)(struct unf_lport *, u32, struct unf_xchg *) = NULL;
+
+ sid = (pkg->frame_head.csctl_sid) & UNF_NPORTID_MASK;
+ did = (pkg->frame_head.rctl_did) & UNF_NPORTID_MASK;
+
+ xchg = unf_mv_data_2_xchg(lport, pkg);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive ElsCmnd(0x%x), exchange is NULL",
+ lport->port_id, pkg->cmnd);
+ return UNF_RETURN_ERROR;
+ }
+
+ if (!pkg->last_pkg_flag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Exchange(%u) waiting for last WQE",
+ xchg->hotpooltag);
+ return RETURN_OK;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Exchange(%u) get last WQE", xchg->hotpooltag);
+
+ xchg->oxid = UNF_GET_OXID(pkg);
+ xchg->abort_oxid = xchg->oxid;
+ xchg->rxid = UNF_GET_RXID(pkg);
+ xchg->cmnd_code = pkg->cmnd;
+
+ ret = unf_check_els_cmnd_valid(lport, pkg, xchg);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ if (lport->nport_id != did && pkg->cmnd != ELS_FLOGI) {
+ vport = unf_cm_lookup_vport_by_did(lport, did);
+ if (!vport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) received unknown ELS command with S_ID(0x%x) D_ID(0x%x))",
+ lport->port_id, sid, did);
+ return UNF_RETURN_ERROR;
+ }
+ lport = vport;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]VPort(0x%x) received ELS command with S_ID(0x%x) D_ID(0x%x)",
+ lport->port_id, sid, did);
+ }
+
+ do {
+ if (pkg->cmnd == els_handle_table[i].cmnd) {
+ els_cmnd_handler = els_handle_table[i].els_cmnd_handler;
+ break;
+ }
+ i++;
+ } while (i < (sizeof(els_handle_table) / sizeof(struct unf_els_handle_table)));
+
+ if (els_cmnd_handler) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) receive ELS(0x%x) from RPort(0x%x) and process it",
+ lport->port_id, pkg->cmnd, sid);
+ ret = els_cmnd_handler(lport, sid, xchg);
+ } else {
+ ret = unf_els_cmnd_default_handler(lport, xchg, sid, pkg->cmnd);
+ }
+ return ret;
+}
+
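+/*
+ * Completion path for an ELS response we sent: look up the exchange by
+ * hot-pool tag, cancel its timer, run the ob_callback unless the
+ * exchange was aborted, then free the exchange.
+ */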
+u32 unf_send_els_rsp_succ(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 ret = RETURN_OK;
+ u16 hot_pool_tag = 0;
+ ulong flags = 0;
+ void (*ob_callback)(struct unf_xchg *) = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (!lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) lookup exchange by tag function is NULL",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
+ xchg = (struct unf_xchg *)(lport->xchg_mgr_temp.unf_look_up_xchg_by_tag((void *)lport,
+ hot_pool_tag));
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) find exchange by tag(0x%x) failed",
+ lport->port_id, hot_pool_tag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (xchg->ob_callback &&
+ (!(xchg->io_state & TGT_IO_STATE_ABORT))) {
+ ob_callback = xchg->ob_callback;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) with exchange(0x%p) tag(0x%x) do callback",
+ lport->port_id, xchg, hot_pool_tag);
+
+ ob_callback(xchg);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ }
+
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ return ret;
+}
+
+static u8 *unf_calc_big_resp_pld_buffer(struct unf_xchg *xchg, u32 cmnd_code)
+{
+ u8 *resp_pld = NULL;
+ u8 *dest = NULL;
+
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+
+ if (cmnd_code == ELS_ECHO) {
+ resp_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld;
+ } else {
+ resp_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr
+ ->get_id.gid_rsp.gid_acc_pld;
+ }
+
+ if (resp_pld)
+ dest = (u8 *)(resp_pld + xchg->fcp_sfs_union.sfs_entry.cur_offset);
+
+ return dest;
+}
+
+static u8 *unf_calc_other_resp_pld_buffer(struct unf_xchg *xchg)
+{
+ u8 *dest = NULL;
+ u32 offset = 0;
+
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+
+ offset = xchg->fcp_sfs_union.sfs_entry.cur_offset;
+ dest = (u8 *)((u8 *)(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) + offset);
+
+ return dest;
+}
+
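+/*
+ * Append one response package's payload to the exchange's SFS buffer,
+ * selecting the big response buffer for ECHO/GID style replies and
+ * truncating anything that would overflow the destination buffer.
+ */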
+u32 unf_mv_resp_2_xchg(struct unf_xchg *xchg, struct unf_frame_pkg *pkg)
+{
+ u8 *dest = NULL;
+ u32 length = 0;
+ u32 offset = 0;
+ u32 max_frame_len = 0;
+ ulong flags = 0;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+
+ if (UNF_NEED_BIG_RESPONSE_BUFF(xchg->cmnd_code)) {
+ dest = unf_calc_big_resp_pld_buffer(xchg, xchg->cmnd_code);
+ offset = 0;
+ max_frame_len = sizeof(struct unf_gid_acc_pld);
+ } else if (NS_GA_NXT == xchg->cmnd_code || NS_GIEL == xchg->cmnd_code) {
+ dest = unf_calc_big_resp_pld_buffer(xchg, xchg->cmnd_code);
+ offset = 0;
+ max_frame_len = xchg->fcp_sfs_union.sfs_entry.sfs_buff_len;
+ } else {
+ dest = unf_calc_other_resp_pld_buffer(xchg);
+ offset = sizeof(struct unf_fc_head);
+ max_frame_len = sizeof(union unf_sfs_u);
+ }
+
+ if (!dest) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (xchg->fcp_sfs_union.sfs_entry.cur_offset == 0) {
+ xchg->fcp_sfs_union.sfs_entry.cur_offset += offset;
+ dest = dest + offset;
+ }
+
+ length = pkg->unf_cmnd_pload_bl.length;
+
+ if ((xchg->fcp_sfs_union.sfs_entry.cur_offset + length) >
+ max_frame_len) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Exchange(0x%p) command(0x%x) hotpooltag(0x%x) OX_RX_ID(0x%x) S_ID(0x%x) D_ID(0x%x) copy payload overrun(0x%x:0x%x:0x%x)",
+ xchg, xchg->cmnd_code, xchg->hotpooltag, pkg->frame_head.oxid_rxid,
+ pkg->frame_head.csctl_sid & UNF_NPORTID_MASK,
+ pkg->frame_head.rctl_did & UNF_NPORTID_MASK,
+ xchg->fcp_sfs_union.sfs_entry.cur_offset,
+ pkg->unf_cmnd_pload_bl.length, max_frame_len);
+
+ length = max_frame_len - xchg->fcp_sfs_union.sfs_entry.cur_offset;
+ }
+
+ if (length > 0)
+ memcpy(dest, pkg->unf_cmnd_pload_bl.buffer_ptr, length);
+
+ xchg->fcp_sfs_union.sfs_entry.cur_offset += length;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return RETURN_OK;
+}
+
+static void unf_ls_gs_do_callback(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg)
+{
+ ulong flags = 0;
+ void (*callback)(void *, void *, void *) = NULL;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (xchg->callback &&
+ (xchg->cmnd_code == ELS_RRQ ||
+ xchg->cmnd_code == ELS_LOGO ||
+ !(xchg->io_state & TGT_IO_STATE_ABORT))) {
+ callback = xchg->callback;
+
+ if (xchg->cmnd_code == ELS_FLOGI || xchg->cmnd_code == ELS_FDISC)
+ xchg->sid = pkg->frame_head.rctl_did & UNF_NPORTID_MASK;
+
+ if (xchg->cmnd_code == ELS_ECHO) {
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME] =
+ pkg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME];
+ xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME] =
+ pkg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME];
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME] =
+ pkg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME];
+ xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME] =
+ pkg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME];
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ callback(xchg->lport, xchg->rport, xchg);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ }
+}
+
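+/*
+ * Completion path for a successfully sent ELS/GS request: look up the
+ * exchange by hot-pool tag, accumulate multi-package responses, and on
+ * the last package cancel the timer, run the callback and free the
+ * exchange.
+ */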
+u32 unf_send_ls_gs_cmnd_succ(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 ret = RETURN_OK;
+ u16 hot_pool_tag = 0;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ unf_lport = lport;
+
+ if (!unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) lookup exchange by tag function can't be NULL",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
+ xchg = (struct unf_xchg *)(unf_lport->xchg_mgr_temp
+ .unf_look_up_xchg_by_tag((void *)unf_lport, hot_pool_tag));
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
+ unf_lport->port_id, unf_lport->nport_id, hot_pool_tag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ UNF_CHECK_ALLOCTIME_VALID(unf_lport, hot_pool_tag, xchg,
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
+
+ if ((pkg->frame_head.csctl_sid & UNF_NPORTID_MASK) != xchg->did) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) find exchange invalid, package S_ID(0x%x) exchange S_ID(0x%x) D_ID(0x%x)",
+ unf_lport->port_id, pkg->frame_head.csctl_sid, xchg->sid, xchg->did);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (pkg->last_pkg_flag == UNF_PKG_NOT_LAST_RESPONSE) {
+ ret = unf_mv_resp_2_xchg(xchg, pkg);
+ return ret;
+ }
+
+ xchg->byte_orders = pkg->byte_orders;
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+ unf_ls_gs_do_callback(xchg, pkg);
+ unf_cm_free_xchg((void *)unf_lport, (void *)xchg);
+ return ret;
+}
+
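+/*
+ * Completion path for a failed ELS/GS request: cancel the timer and run
+ * the exchange's ob_callback (always for RRQ and LOGO, otherwise only
+ * when the exchange was not aborted) before freeing the exchange.
+ */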
+u32 unf_send_ls_gs_cmnd_failed(struct unf_lport *lport,
+ struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 ret = RETURN_OK;
+ u16 hot_pool_tag = 0;
+ ulong flags = 0;
+ void (*ob_callback)(struct unf_xchg *) = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (!lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) lookup exchange by tag function can't be NULL",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
+ xchg = (struct unf_xchg *)(lport->xchg_mgr_temp.unf_look_up_xchg_by_tag((void *)lport,
+ hot_pool_tag));
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
+ lport->port_id, lport->nport_id, hot_pool_tag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ UNF_CHECK_ALLOCTIME_VALID(lport, hot_pool_tag, xchg,
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
+
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (xchg->ob_callback &&
+ (xchg->cmnd_code == ELS_RRQ || xchg->cmnd_code == ELS_LOGO ||
+ (!(xchg->io_state & TGT_IO_STATE_ABORT)))) {
+ ob_callback = xchg->ob_callback;
+ xchg->ob_callback_sts = pkg->status;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ ob_callback(xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) exchange(0x%p) tag(0x%x) do callback",
+ lport->port_id, xchg, hot_pool_tag);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ }
+
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ return ret;
+}
+
+static u32 unf_rcv_ls_gs_cmnd_reply(struct unf_lport *lport,
+ struct unf_frame_pkg *pkg)
+{
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (pkg->status == UNF_IO_SUCCESS || pkg->status == UNF_IO_UNDER_FLOW)
+ ret = unf_send_ls_gs_cmnd_succ(lport, pkg);
+ else
+ ret = unf_send_ls_gs_cmnd_failed(lport, pkg);
+
+ return ret;
+}
+
+u32 unf_receive_ls_gs_pkg(void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ unf_lport = (struct unf_lport *)lport;
+
+ switch (pkg->type) {
+ case UNF_PKG_ELS_REQ_DONE:
+ case UNF_PKG_GS_REQ_DONE:
+ ret = unf_rcv_ls_gs_cmnd_reply(unf_lport, pkg);
+ break;
+
+ case UNF_PKG_ELS_REQ:
+ ret = unf_rcv_els_cmnd_req(unf_lport, pkg);
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) with exchange type(0x%x) abnormal",
+ unf_lport->port_id, unf_lport->nport_id, pkg->type);
+ break;
+ }
+
+ return ret;
+}
+
+u32 unf_send_els_done(void *lport, struct unf_frame_pkg *pkg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (pkg->type == UNF_PKG_ELS_REPLY_DONE) {
+ if (pkg->status == UNF_IO_SUCCESS || pkg->status == UNF_IO_UNDER_FLOW)
+ ret = unf_send_els_rsp_succ(lport, pkg);
+ else
+ ret = unf_send_ls_gs_cmnd_failed(lport, pkg);
+ }
+
+ return ret;
+}
+
+void unf_rport_immediate_link_down(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* Swap case: Report Link Down immediately & release R_Port */
+ ulong flags = 0;
+ struct unf_disc *disc = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+ /* 1. Inc R_Port ref_cnt */
+ if (unf_rport_ref_inc(rport) != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) Rport(0x%p,0x%x) is being removed, no need to process",
+ lport->port_id, rport, rport->nport_id);
+
+ return;
+ }
+
+ /* 2. R_PORT state update: Link Down Event --->>> closing state */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ /* 3. Put R_Port from busy to destroy list */
+ disc = &lport->disc;
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flags);
+ list_del_init(&rport->entry_rport);
+ list_add_tail(&rport->entry_rport, &disc->list_destroy_rports);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flags);
+
+ /* 4. Schedule Closing work (Enqueuing workqueue) */
+ unf_schedule_closing_work(lport, rport);
+
+ unf_rport_ref_dec(rport);
+}
+
+struct unf_rport *unf_find_rport(struct unf_lport *lport, u32 rport_nport_id,
+ u64 lport_name)
+{
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ if (rport_nport_id >= UNF_FC_FID_DOM_MGR) {
+ /* R_Port is Fabric: by N_Port_ID */
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, rport_nport_id);
+ } else {
+ /* Others: by WWPN & N_Port_ID */
+ unf_rport = unf_find_valid_rport(unf_lport, lport_name, rport_nport_id);
+ }
+
+ return unf_rport;
+}
+
+void unf_process_logo_in_pri_loop(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* Send PLOGI or LOGO */
+ struct unf_rport *unf_rport = rport;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI); /* PLOGI WAIT */
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Private Loop with INI mode, Avoid COM Mode problem */
+ unf_rport_delay_login(unf_rport);
+}
+
+void unf_process_logo_in_n2n(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* Send PLOGI or LOGO */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ if (unf_lport->port_name > unf_rport->port_name) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+			     "[info]Port(0x%x) WWN(0x%llx) is larger than RPort WWN(0x%llx), acting as master",
+ unf_lport->port_id, unf_lport->port_name, unf_rport->port_name);
+
+ ret = unf_send_plogi(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send PLOGI failed, enter recovery",
+ lport->port_id);
+
+ unf_rport_error_recovery(unf_rport);
+ }
+ } else {
+ unf_rport_enter_logo(unf_lport, unf_rport);
+ }
+}
+
+void unf_process_logo_in_fabric(struct unf_lport *lport,
+ struct unf_rport *rport)
+{
+ /* Send GFF_ID or LOGO */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ struct unf_rport *sns_port = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* L_Port with INI Mode: Send GFF_ID */
+ sns_port = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find fabric port",
+ unf_lport->port_id);
+ return;
+ }
+
+ ret = unf_get_and_post_disc_event(lport, sns_port, unf_rport->nport_id,
+ UNF_DISC_GET_FEATURE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ unf_lport->port_id, UNF_DISC_GET_FEATURE,
+ unf_rport->nport_id);
+
+ unf_rcv_gff_id_rsp_unknown(unf_lport, unf_rport->nport_id);
+ }
+}
+
+void unf_process_rport_after_logo(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /*
+ * 1. LOGO handler
+	 * 2. PRLO handler
+ * 3. LOGO_CALL_BACK (send LOGO ACC) handler
+ */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ /* R_Port is not fabric port (retry LOGIN or LOGO) */
+ if (unf_lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ /* Private Loop: PLOGI or LOGO */
+ unf_process_logo_in_pri_loop(unf_lport, unf_rport);
+ } else if (unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ /* Point to Point: LOGIN or LOGO */
+ unf_process_logo_in_n2n(unf_lport, unf_rport);
+ } else {
+ /* Fabric or Public Loop: GFF_ID or LOGO */
+ unf_process_logo_in_fabric(unf_lport, unf_rport);
+ }
+ } else {
+ /* Rport is fabric port: link down now */
+ unf_rport_linkdown(unf_lport, unf_rport);
+ }
+}
+
+static u32 unf_rcv_bls_req_done(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ /*
+ * About I/O resource:
+	 * 1. normal: Release I/O resource during RRQ processing
+ * 2. exception: Release I/O resource immediately
+ */
+ struct unf_xchg *xchg = NULL;
+ u16 hot_pool_tag = 0;
+ ulong flags = 0;
+ ulong time_ms = 0;
+ u32 ret = RETURN_OK;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ unf_lport = lport;
+
+ hot_pool_tag = (u16)pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX];
+ xchg = (struct unf_xchg *)unf_cm_lookup_xchg_by_tag((void *)unf_lport, hot_pool_tag);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find exchange by tag(0x%x) when receiving ABTS response",
+ unf_lport->port_id, hot_pool_tag);
+ return UNF_RETURN_ERROR;
+ }
+
+ UNF_CHECK_ALLOCTIME_VALID(lport, hot_pool_tag, xchg,
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
+
+ ret = unf_xchg_ref_inc(xchg, TGT_ABTS_DONE);
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->oxid = UNF_GET_OXID(pkg);
+ xchg->rxid = UNF_GET_RXID(pkg);
+ xchg->io_state |= INI_IO_STATE_DONE;
+ xchg->abts_state |= ABTS_RESPONSE_RECEIVED;
+ if (!(INI_IO_STATE_UPABORT & xchg->io_state)) {
+ /* NOTE: I/O exchange has been released and used again */
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) SID(0x%x) exch(0x%p) (0x%x:0x%x:0x%x:0x%x) state(0x%x) is abnormal with cnt(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, xchg->sid,
+ xchg, xchg->hotpooltag, xchg->oxid, xchg->rxid,
+ xchg->oid, xchg->io_state,
+ atomic_read(&xchg->ref_cnt));
+
+ unf_xchg_ref_dec(xchg, TGT_ABTS_DONE);
+ return UNF_RETURN_ERROR;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+	/*
+	 * Exchange I/O status check: on success add the RRQ timer.
+	 * pkg->status is mapped into scsi_cmnd->result.
+	 * FAILED: error code or X_ID mismatch, or bad BA_RSP type.
+	 */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (pkg->status == UNF_IO_SUCCESS) {
+ /* Succeed: PKG status -->> EXCH status -->> scsi status */
+ UNF_SET_SCSI_CMND_RESULT(xchg, UNF_IO_SUCCESS);
+ xchg->io_state |= INI_IO_STATE_WAIT_RRQ;
+ xchg->rxid = UNF_GET_RXID(pkg);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ /* Add RRQ timer */
+ time_ms = (ulong)(unf_lport->ra_tov);
+ unf_lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, time_ms,
+ UNF_TIMER_TYPE_INI_RRQ);
+ } else {
+ /* Failed: PKG status -->> EXCH status -->> scsi status */
+ UNF_SET_SCSI_CMND_RESULT(xchg, UNF_IO_FAILED);
+ if (MARKER_STS_RECEIVED & xchg->abts_state) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ /* NOTE: release I/O resource immediately */
+ unf_cm_free_xchg(unf_lport, xchg);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) exch(0x%p) OX_RX(0x%x:0x%x) IOstate(0x%x) ABTSstate(0x%x) receive response abnormal ref(0x%x)",
+ unf_lport->port_id, xchg, xchg->oxid, xchg->rxid,
+ xchg->io_state, xchg->abts_state, atomic_read(&xchg->ref_cnt));
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ }
+ }
+
+	/*
+	 * If the ABTS response arrived before the marker status was
+	 * received, just wake up the ABTS marker semaphore.
+	 */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (!(MARKER_STS_RECEIVED & xchg->abts_state)) {
+ xchg->ucode_abts_state = pkg->status;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ up(&xchg->task_sema);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ }
+
+ unf_xchg_ref_dec(xchg, TGT_ABTS_DONE);
+ return ret;
+}
+
+u32 unf_receive_bls_pkg(void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ unf_lport = (struct unf_lport *)lport;
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (pkg->type == UNF_PKG_BLS_REQ_DONE) {
+ /* INI: RCVD BLS Req Done */
+ ret = unf_rcv_bls_req_done(lport, pkg);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) received BLS packet with invalid type(0x%x)",
+ unf_lport->port_id, pkg->type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return ret;
+}
+
+static void unf_fill_free_xid_pkg(struct unf_xchg *xchg, struct unf_frame_pkg *pkg)
+{
+ pkg->frame_head.csctl_sid = xchg->sid;
+ pkg->frame_head.rctl_did = xchg->did;
+ pkg->frame_head.oxid_rxid = (u32)(((u32)xchg->oxid << UNF_SHIFT_16) | xchg->rxid);
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ UNF_SET_XCHG_ALLOC_TIME(pkg, xchg);
+
+ if (xchg->xchg_type == UNF_XCHG_TYPE_SFS) {
+ if (UNF_XCHG_IS_ELS_REPLY(xchg)) {
+ pkg->type = UNF_PKG_ELS_REPLY;
+ pkg->rx_or_ox_id = UNF_PKG_FREE_RXID;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE32;
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = INVALID_VALUE32;
+ } else {
+ pkg->type = UNF_PKG_ELS_REQ;
+ pkg->rx_or_ox_id = UNF_PKG_FREE_OXID;
+ }
+ } else if (xchg->xchg_type == UNF_XCHG_TYPE_INI) {
+ pkg->type = UNF_PKG_INI_IO;
+ pkg->rx_or_ox_id = UNF_PKG_FREE_OXID;
+ }
+}
+
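+/*
+ * Ask the low level driver to release the OX_ID/RX_ID held by an
+ * exchange (e.g. after a wait timeout) so the chip can reuse the
+ * exchange ID.
+ */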
+void unf_notify_chip_free_xid(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_lport = xchg->lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ unf_fill_free_xid_pkg(xchg, &pkg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+		     "[info]Sid_Did(0x%x)(0x%x) Xchg(0x%p) RXorOX(0x%x) tag(0x%x) xid(0x%x) magic(0x%x) Stat(0x%x) type(0x%x) wait timeout.",
+ xchg->sid, xchg->did, xchg, pkg.rx_or_ox_id,
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX], pkg.frame_head.oxid_rxid,
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME], xchg->io_state, pkg.type);
+
+ ret = unf_lport->low_level_func.service_op.ll_release_xid(unf_lport->fc_port, &pkg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Free xid abnormal: Sid_Did(0x%x 0x%x) Xchg(0x%p) RXorOX(0x%x) xid(0x%x) Stat(0x%x) tag(0x%x) magic(0x%x) type(0x%x).",
+ xchg->sid, xchg->did, xchg, pkg.rx_or_ox_id,
+ pkg.frame_head.oxid_rxid, xchg->io_state,
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX],
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ pkg.type);
+ }
+}
diff --git a/drivers/scsi/spfc/common/unf_service.h b/drivers/scsi/spfc/common/unf_service.h
new file mode 100644
index 000000000000..0dd2975c6a7b
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_service.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_SERVICE_H
+#define UNF_SERVICE_H
+
+#include "unf_type.h"
+#include "unf_exchg.h"
+#include "unf_rport.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+extern u32 max_frame_size;
+#define UNF_INIT_DISC 0x1 /* first time DISC */
+#define UNF_RSCN_DISC 0x2 /* RSCN Port Addr DISC */
+#define UNF_SET_ELS_ACC_TYPE(els_cmd) ((u32)(els_cmd) << 16 | ELS_ACC)
+#define UNF_SET_ELS_RJT_TYPE(els_cmd) ((u32)(els_cmd) << 16 | ELS_RJT)
+#define UNF_XCHG_IS_ELS_REPLY(xchg) \
+ ((ELS_ACC == ((xchg)->cmnd_code & 0x0ffff)) || \
+ (ELS_RJT == ((xchg)->cmnd_code & 0x0ffff)))
+
+struct unf_els_handle_table {
+ u32 cmnd;
+ u32 (*els_cmnd_handler)(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+};
+
+void unf_select_sq(struct unf_xchg *xchg, struct unf_frame_pkg *pkg);
+void unf_fill_package(struct unf_frame_pkg *pkg, struct unf_xchg *xchg,
+ struct unf_rport *rport);
+struct unf_xchg *unf_get_sfs_free_xchg_and_init(struct unf_lport *lport,
+ u32 did,
+ struct unf_rport *rport,
+ union unf_sfs_u **fc_entry);
+void *unf_get_one_big_sfs_buf(struct unf_xchg *xchg);
+u32 unf_mv_resp_2_xchg(struct unf_xchg *xchg, struct unf_frame_pkg *pkg);
+void unf_rport_immediate_link_down(struct unf_lport *lport,
+ struct unf_rport *rport);
+struct unf_rport *unf_find_rport(struct unf_lport *lport, u32 rport_nport_id,
+ u64 port_name);
+void unf_process_logo_in_fabric(struct unf_lport *lport,
+ struct unf_rport *rport);
+void unf_notify_chip_free_xid(struct unf_xchg *xchg);
+
+u32 unf_ls_gs_cmnd_send(struct unf_lport *lport, struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg);
+u32 unf_receive_ls_gs_pkg(void *lport, struct unf_frame_pkg *pkg);
+struct unf_xchg *unf_mv_data_2_xchg(struct unf_lport *lport,
+ struct unf_frame_pkg *pkg);
+u32 unf_receive_bls_pkg(void *lport, struct unf_frame_pkg *pkg);
+u32 unf_send_els_done(void *lport, struct unf_frame_pkg *pkg);
+u32 unf_send_els_rjt_by_did(struct unf_lport *lport, struct unf_xchg *xchg,
+ u32 did, struct unf_rjt_info *rjt_info);
+u32 unf_send_els_rjt_by_rport(struct unf_lport *lport, struct unf_xchg *xchg,
+ struct unf_rport *rport,
+ struct unf_rjt_info *rjt_info);
+u32 unf_send_abts(struct unf_lport *lport, struct unf_xchg *xchg);
+void unf_process_rport_after_logo(struct unf_lport *lport,
+ struct unf_rport *rport);
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#endif /* UNF_SERVICE_H */
diff --git a/drivers/scsi/spfc/common/unf_type.h b/drivers/scsi/spfc/common/unf_type.h
new file mode 100644
index 000000000000..28e163d0543c
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_type.h
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_TYPE_H
+#define UNF_TYPE_H
+
+#include <linux/sched.h>
+#include <linux/kthread.h>
+#include <linux/fs.h>
+#include <linux/vmalloc.h>
+#include <linux/version.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/delay.h>
+#include <linux/workqueue.h>
+#include <linux/kref.h>
+#include <linux/scatterlist.h>
+#include <linux/crc-t10dif.h>
+#include <linux/ctype.h>
+#include <linux/types.h>
+#include <linux/compiler.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/random.h>
+#include <linux/jiffies.h>
+#include <linux/cpufreq.h>
+#include <linux/semaphore.h>
+#include <linux/jiffies.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_transport_fc.h>
+#include <linux/sched/signal.h>
+
+#ifndef SPFC_FT
+#define SPFC_FT
+#endif
+
+#define BUF_LIST_PAGE_SIZE (PAGE_SIZE << 8)
+
+#define UNF_S_TO_NS (1000000000)
+#define UNF_S_TO_MS (1000)
+
+enum UNF_OS_THRD_PRI_E {
+ UNF_OS_THRD_PRI_HIGHEST = 0,
+ UNF_OS_THRD_PRI_HIGH,
+ UNF_OS_THRD_PRI_SUBHIGH,
+ UNF_OS_THRD_PRI_MIDDLE,
+ UNF_OS_THRD_PRI_LOW,
+ UNF_OS_THRD_PRI_BUTT
+};
+
+#define UNF_OS_LIST_NEXT(a) ((a)->next)
+#define UNF_OS_LIST_PREV(a) ((a)->prev)
+
+#define UNF_OS_PER_NS (1000000000)
+#define UNF_OS_MS_TO_NS (1000000)
+
+#ifndef MIN
+#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
+#endif
+
+#ifndef MAX
+#define MAX(X, Y) ((X) > (Y) ? (X) : (Y))
+#endif
+
+#ifndef INVALID_VALUE64
+#define INVALID_VALUE64 0xFFFFFFFFFFFFFFFFULL
+#endif /* INVALID_VALUE64 */
+
+#ifndef INVALID_VALUE32
+#define INVALID_VALUE32 0xFFFFFFFF
+#endif /* INVALID_VALUE32 */
+
+#ifndef INVALID_VALUE16
+#define INVALID_VALUE16 0xFFFF
+#endif /* INVALID_VALUE16 */
+
+#ifndef INVALID_VALUE8
+#define INVALID_VALUE8 0xFF
+#endif /* INVALID_VALUE8 */
+
+#ifndef RETURN_OK
+#define RETURN_OK 0
+#endif
+
+#ifndef RETURN_ERROR
+#define RETURN_ERROR (~0)
+#endif
+#define UNF_RETURN_ERROR (~0)
+
+/* define shift bits */
+#define UNF_SHIFT_1 1
+#define UNF_SHIFT_2 2
+#define UNF_SHIFT_3 3
+#define UNF_SHIFT_4 4
+#define UNF_SHIFT_6 6
+#define UNF_SHIFT_7 7
+#define UNF_SHIFT_8 8
+#define UNF_SHIFT_11 11
+#define UNF_SHIFT_12 12
+#define UNF_SHIFT_15 15
+#define UNF_SHIFT_16 16
+#define UNF_SHIFT_17 17
+#define UNF_SHIFT_19 19
+#define UNF_SHIFT_20 20
+#define UNF_SHIFT_23 23
+#define UNF_SHIFT_24 24
+#define UNF_SHIFT_25 25
+#define UNF_SHIFT_26 26
+#define UNF_SHIFT_28 28
+#define UNF_SHIFT_29 29
+#define UNF_SHIFT_32 32
+#define UNF_SHIFT_35 35
+#define UNF_SHIFT_37 37
+#define UNF_SHIFT_39 39
+#define UNF_SHIFT_40 40
+#define UNF_SHIFT_43 43
+#define UNF_SHIFT_48 48
+#define UNF_SHIFT_51 51
+#define UNF_SHIFT_56 56
+#define UNF_SHIFT_57 57
+#define UNF_SHIFT_59 59
+#define UNF_SHIFT_60 60
+#define UNF_SHIFT_61 61
+
+/* array index */
+#define ARRAY_INDEX_0 0
+#define ARRAY_INDEX_1 1
+#define ARRAY_INDEX_2 2
+#define ARRAY_INDEX_3 3
+#define ARRAY_INDEX_4 4
+#define ARRAY_INDEX_5 5
+#define ARRAY_INDEX_6 6
+#define ARRAY_INDEX_7 7
+#define ARRAY_INDEX_8 8
+#define ARRAY_INDEX_10 10
+#define ARRAY_INDEX_11 11
+#define ARRAY_INDEX_12 12
+#define ARRAY_INDEX_13 13
+
+/* define mask bits */
+#define UNF_MASK_BIT_7_0 0xff
+#define UNF_MASK_BIT_15_0 0x0000ffff
+#define UNF_MASK_BIT_31_16 0xffff0000
+
+#define UNF_IO_SUCCESS 0x00000000
+#define UNF_IO_ABORTED 0x00000001 /* the host system aborted the command */
+#define UNF_IO_FAILED 0x00000002
+#define UNF_IO_ABORT_ABTS 0x00000003
+#define UNF_IO_ABORT_LOGIN 0x00000004 /* abort login */
+#define UNF_IO_ABORT_REET 0x00000005 /* reset event aborted the transport */
+#define UNF_IO_ABORT_FAILED 0x00000006 /* abort failed */
+/* data out of order, data reassembly error */
+#define UNF_IO_OUTOF_ORDER 0x00000007
+#define UNF_IO_FTO 0x00000008 /* frame time out */
+#define UNF_IO_LINK_FAILURE 0x00000009
+#define UNF_IO_OVER_FLOW 0x0000000a /* data overrun */
+#define UNF_IO_RSP_OVER 0x0000000b
+#define UNF_IO_LOST_FRAME 0x0000000c
+#define UNF_IO_UNDER_FLOW 0x0000000d /* data underrun */
+#define UNF_IO_HOST_PROG_ERROR 0x0000000e
+#define UNF_IO_SEST_PROG_ERROR 0x0000000f
+#define UNF_IO_INVALID_ENTRY 0x00000010
+#define UNF_IO_ABORT_SEQ_NOT 0x00000011
+#define UNF_IO_REJECT 0x00000012
+#define UNF_IO_RS_INFO 0x00000013
+#define UNF_IO_EDC_IN_ERROR 0x00000014
+#define UNF_IO_EDC_OUT_ERROR 0x00000015
+#define UNF_IO_UNINIT_KEK_ERR 0x00000016
+#define UNF_IO_DEK_OUTOF_RANGE 0x00000017
+#define UNF_IO_KEY_UNWRAP_ERR 0x00000018
+#define UNF_IO_KEY_TAG_ERR 0x00000019
+#define UNF_IO_KEY_ECC_ERR 0x0000001a
+#define UNF_IO_BLOCK_SIZE_ERROR 0x0000001b
+#define UNF_IO_ILLEGAL_CIPHER_MODE 0x0000001c
+#define UNF_IO_CLEAN_UP 0x0000001d
+#define UNF_SRR_RECEIVE 0x0000001e /* receive srr */
+/* The target device sent an ABTS to abort the I/O. */
+#define UNF_IO_ABORTED_BY_TARGET 0x0000001f
+#define UNF_IO_TRANSPORT_ERROR 0x00000020
+#define UNF_IO_LINK_FLASH 0x00000021
+#define UNF_IO_TIMEOUT 0x00000022
+#define UNF_IO_PORT_UNAVAILABLE 0x00000023
+#define UNF_IO_PORT_LOGOUT 0x00000024
+#define UNF_IO_PORT_CFG_CHG 0x00000025
+#define UNF_IO_FIRMWARE_RES_UNAVAILABLE 0x00000026
+#define UNF_IO_TASK_MGT_OVERRUN 0x00000027
+#define UNF_IO_DMA_ERROR 0x00000028
+#define UNF_IO_DIF_ERROR 0x00000029
+#define UNF_IO_NO_LPORT 0x0000002a
+#define UNF_IO_NO_XCHG 0x0000002b
+#define UNF_IO_SOFT_ERR 0x0000002c
+#define UNF_IO_XCHG_ADD_ERROR 0x0000002d
+#define UNF_IO_NO_LOGIN 0x0000002e
+#define UNF_IO_NO_BUFFER 0x0000002f
+#define UNF_IO_DID_ERROR 0x00000030
+#define UNF_IO_UNSUPPORT 0x00000031
+#define UNF_IO_NOREADY 0x00000032
+#define UNF_IO_NPORTID_REUSED 0x00000033
+#define UNF_IO_NPORT_HANDLE_REUSED 0x00000034
+#define UNF_IO_NO_NPORT_HANDLE 0x00000035
+#define UNF_IO_ABORT_BY_FW 0x00000036
+#define UNF_IO_ABORT_PORT_REMOVING 0x00000037
+#define UNF_IO_INCOMPLETE 0x00000038
+#define UNF_IO_DIF_REF_ERROR 0x00000039
+#define UNF_IO_DIF_GEN_ERROR 0x0000003a
+
+#define UNF_IO_ERREND 0xFFFFFFFF
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_chipitf.c b/drivers/scsi/spfc/hw/spfc_chipitf.c
new file mode 100644
index 000000000000..be6073ff4dc0
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_chipitf.c
@@ -0,0 +1,1105 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_chipitf.h"
+#include "sphw_hw.h"
+#include "sphw_crm.h"
+
+#define SPFC_MBOX_TIME_SEC_MAX (60)
+
+#define SPFC_LINK_UP_COUNT 1
+#define SPFC_LINK_DOWN_COUNT 2
+#define SPFC_FC_DELETE_CMND_COUNT 3
+
+#define SPFC_MBX_MAX_TIMEOUT 10000
+
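+/*
+ * Query chip information (board type, WWNN and WWPN) from the firmware
+ * with a synchronous mailbox command and copy it into the caller's
+ * argout structure.
+ */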
+u32 spfc_get_chip_msg(void *hba, void *mac)
+{
+ struct spfc_hba_info *spfc_hba = NULL;
+ struct unf_get_chip_info_argout *wwn = NULL;
+ struct spfc_inmbox_get_chip_info get_chip_info;
+ union spfc_outmbox_generic *get_chip_info_sts = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(mac, UNF_RETURN_ERROR);
+
+ spfc_hba = (struct spfc_hba_info *)hba;
+ wwn = (struct unf_get_chip_info_argout *)mac;
+
+ memset(&get_chip_info, 0, sizeof(struct spfc_inmbox_get_chip_info));
+
+ get_chip_info_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!get_chip_info_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(get_chip_info_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ get_chip_info.header.cmnd_type = SPFC_MBOX_GET_CHIP_INFO;
+ get_chip_info.header.length =
+ SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_get_chip_info));
+
+ if (spfc_mb_send_and_wait_mbox(spfc_hba, &get_chip_info,
+ sizeof(struct spfc_inmbox_get_chip_info),
+ get_chip_info_sts) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]spfc can't send and wait mailbox, command type: 0x%x.",
+ get_chip_info.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (get_chip_info_sts->get_chip_info_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]Port(0x%x) mailbox status incorrect(0x%x).",
+ spfc_hba->port_cfg.port_id,
+ get_chip_info_sts->get_chip_info_sts.status);
+
+ goto exit;
+ }
+
+ if (get_chip_info_sts->get_chip_info_sts.header.cmnd_type != SPFC_MBOX_GET_CHIP_INFO_STS) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "Port(0x%x) received incorrect mailbox type: 0x%x.",
+ spfc_hba->port_cfg.port_id,
+ get_chip_info_sts->get_chip_info_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ wwn->board_type = get_chip_info_sts->get_chip_info_sts.board_type;
+ spfc_hba->card_info.card_type = get_chip_info_sts->get_chip_info_sts.board_type;
+ wwn->wwnn = get_chip_info_sts->get_chip_info_sts.wwnn;
+ wwn->wwpn = get_chip_info_sts->get_chip_info_sts.wwpn;
+
+ ret = RETURN_OK;
+exit:
+ kfree(get_chip_info_sts);
+
+ return ret;
+}
+
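+/*
+ * Variant of spfc_get_chip_msg() that talks to the management CPU
+ * directly through sphw_msg_to_mgmt_sync() using only the hwdev handle;
+ * it returns the chip WWNN and WWPN.
+ */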
+u32 spfc_get_chip_capability(void *hwdev_handle,
+ struct spfc_chip_info *chip_info)
+{
+ struct spfc_inmbox_get_chip_info get_chip_info;
+ union spfc_outmbox_generic *get_chip_info_sts = NULL;
+ u16 out_size = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hwdev_handle, UNF_RETURN_ERROR);
+
+ memset(&get_chip_info, 0, sizeof(struct spfc_inmbox_get_chip_info));
+
+ get_chip_info_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!get_chip_info_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(get_chip_info_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ get_chip_info.header.cmnd_type = SPFC_MBOX_GET_CHIP_INFO;
+ get_chip_info.header.length =
+ SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_get_chip_info));
+ get_chip_info.header.port_id = (u8)sphw_global_func_id(hwdev_handle);
+ out_size = sizeof(union spfc_outmbox_generic);
+
+ if (sphw_msg_to_mgmt_sync(hwdev_handle, COMM_MOD_FC, SPFC_MBOX_GET_CHIP_INFO,
+ (void *)&get_chip_info.header,
+ sizeof(struct spfc_inmbox_get_chip_info),
+ (union spfc_outmbox_generic *)(get_chip_info_sts), &out_size,
+ (SPFC_MBX_MAX_TIMEOUT), SPHW_CHANNEL_FC) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "spfc can't send and wait mailbox, command type: 0x%x.",
+ SPFC_MBOX_GET_CHIP_INFO);
+
+ goto exit;
+ }
+
+ if (get_chip_info_sts->get_chip_info_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]Port mailbox status incorrect: status(0x%x).",
+ get_chip_info_sts->get_chip_info_sts.status);
+
+ goto exit;
+ }
+
+ if (get_chip_info_sts->get_chip_info_sts.header.cmnd_type != SPFC_MBOX_GET_CHIP_INFO_STS) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port receive mailbox type incorrect type: 0x%x.",
+ get_chip_info_sts->get_chip_info_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ chip_info->wwnn = get_chip_info_sts->get_chip_info_sts.wwnn;
+ chip_info->wwpn = get_chip_info_sts->get_chip_info_sts.wwpn;
+
+ ret = RETURN_OK;
+exit:
+ kfree(get_chip_info_sts);
+
+ return ret;
+}
+
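+/*
+ * Build the SPFC_MBOX_CONFIG_API mailbox from the configured topology, speed
+ * and the default BB credit / escheduler values, then send it synchronously
+ * to the firmware.
+ */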
+u32 spfc_config_port_table(struct spfc_hba_info *hba)
+{
+ struct spfc_inmbox_config_api config_api;
+ union spfc_outmbox_generic *out_mbox = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ memset(&config_api, 0, sizeof(config_api));
+ out_mbox = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!out_mbox) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(out_mbox, 0, sizeof(union spfc_outmbox_generic));
+
+ config_api.header.cmnd_type = SPFC_MBOX_CONFIG_API;
+ config_api.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_config_api));
+
+ config_api.op_code = UNDEFINEOPCODE;
+
+	/* Convert the topology mode requested by CM into the encoding that the
+	 * firmware (up) recognizes: UNF_TOP_P2P_MASK from CM means P2P
+	 * topology, for which the low level uses SPFC_TOP_NON_LOOP_MASK.
+	 */
+ if (((u8)(hba->port_topo_cfg)) == UNF_TOP_P2P_MASK) {
+ config_api.topy_mode = 0x2;
+	/* UNF_TOP_LOOP_MASK from CM means loop topology, for which the low
+	 * level uses SPFC_TOP_LOOP_MASK.
+	 */
+ } else if (((u8)(hba->port_topo_cfg)) == UNF_TOP_LOOP_MASK) {
+ config_api.topy_mode = 0x1;
+	/* UNF_TOP_AUTO_MASK from CM means auto topology, for which the low
+	 * level uses SPFC_TOP_AUTO_MASK.
+	 */
+ } else if (((u8)(hba->port_topo_cfg)) == UNF_TOP_AUTO_MASK) {
+ config_api.topy_mode = 0x0;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) topo cmd is error, command type: 0x%x",
+ hba->port_cfg.port_id, (u8)(hba->port_topo_cfg));
+
+ goto exit;
+ }
+
+ /* About speed */
+ config_api.sfp_speed = (u8)(hba->port_speed_cfg);
+ config_api.max_speed = (u8)(hba->max_support_speed);
+
+ config_api.rx_6432g_bb_credit = SPFC_LOWLEVEL_DEFAULT_32G_BB_CREDIT;
+ config_api.rx_16g_bb_credit = SPFC_LOWLEVEL_DEFAULT_16G_BB_CREDIT;
+ config_api.rx_84g_bb_credit = SPFC_LOWLEVEL_DEFAULT_8G_BB_CREDIT;
+ config_api.rdy_cnt_bf_fst_frm = SPFC_LOWLEVEL_DEFAULT_LOOP_BB_CREDIT;
+ config_api.esch_32g_value = SPFC_LOWLEVEL_DEFAULT_32G_ESCH_VALUE;
+ config_api.esch_16g_value = SPFC_LOWLEVEL_DEFAULT_16G_ESCH_VALUE;
+ config_api.esch_8g_value = SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE;
+ config_api.esch_4g_value = SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE;
+ config_api.esch_64g_value = SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE;
+ config_api.esch_bust_size = SPFC_LOWLEVEL_DEFAULT_ESCH_BUST_SIZE;
+
+ /* default value:0xFF */
+ config_api.hard_alpa = 0xFF;
+ memcpy(config_api.port_name, hba->sys_port_name, UNF_WWN_LEN);
+
+	/* 1 if the port acts only as a loop slave; 0 if it takes part in loop
+	 * master selection.
+	 */
+ config_api.slave = hba->port_loop_role;
+
+ /* 1:auto negotiate, 0:fixed mode negotiate */
+ if (config_api.sfp_speed == 0)
+ config_api.auto_sneg = 0x1;
+ else
+ config_api.auto_sneg = 0x0;
+
+ if (spfc_mb_send_and_wait_mbox(hba, &config_api, sizeof(config_api),
+ out_mbox) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[warn]Port(0x%x) SPFC can't send and wait mailbox, command type: 0x%x",
+ hba->port_cfg.port_id,
+ config_api.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (out_mbox->config_api_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive mailbox type(0x%x) with status(0x%x) error",
+ hba->port_cfg.port_id,
+ out_mbox->config_api_sts.header.cmnd_type,
+ out_mbox->config_api_sts.status);
+
+ goto exit;
+ }
+
+ if (out_mbox->config_api_sts.header.cmnd_type != SPFC_MBOX_CONFIG_API_STS) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive mailbox type(0x%x) error",
+ hba->port_cfg.port_id,
+ out_mbox->config_api_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ ret = RETURN_OK;
+exit:
+ kfree(out_mbox);
+
+ return ret;
+}
+
+u32 spfc_port_switch(struct spfc_hba_info *hba, bool turn_on)
+{
+ struct spfc_inmbox_port_switch port_switch;
+ union spfc_outmbox_generic *port_switch_sts = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ memset(&port_switch, 0, sizeof(port_switch));
+
+ port_switch_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!port_switch_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(port_switch_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ port_switch.header.cmnd_type = SPFC_MBOX_PORT_SWITCH;
+ port_switch.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_port_switch));
+ port_switch.op_code = (u8)turn_on;
+
+ if (spfc_mb_send_and_wait_mbox(hba, &port_switch, sizeof(port_switch),
+ (union spfc_outmbox_generic *)((void *)port_switch_sts)) !=
+ RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[warn]Port(0x%x) SPFC can't send and wait mailbox, command type(0x%x) opcode(0x%x)",
+ hba->port_cfg.port_id,
+ port_switch.header.cmnd_type, port_switch.op_code);
+
+ goto exit;
+ }
+
+ if (port_switch_sts->port_switch_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive mailbox type(0x%x) status(0x%x) error",
+ hba->port_cfg.port_id,
+ port_switch_sts->port_switch_sts.header.cmnd_type,
+ port_switch_sts->port_switch_sts.status);
+
+ goto exit;
+ }
+
+ if (port_switch_sts->port_switch_sts.header.cmnd_type != SPFC_MBOX_PORT_SWITCH_STS) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive mailbox type(0x%x) error",
+ hba->port_cfg.port_id,
+ port_switch_sts->port_switch_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) switch succeed, turns to %s",
+ hba->port_cfg.port_id, (turn_on) ? "on" : "off");
+
+ ret = RETURN_OK;
+exit:
+ kfree(port_switch_sts);
+
+ return ret;
+}
+
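+/*
+ * Push the negotiated login parameters (BB credit, E_D_TOV/R_A_TOV, BB_SC_N)
+ * to the firmware: synchronously via the mailbox when BB_SC_N is non-zero,
+ * otherwise as an asynchronous management message.
+ */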
+u32 spfc_config_login_api(struct spfc_hba_info *hba,
+ struct unf_port_login_parms *login_parms)
+{
+#define SPFC_LOOP_RDYNUM 8
+ int iret = RETURN_OK;
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_inmbox_config_login config_login;
+ union spfc_outmbox_generic *cfg_login_sts = NULL;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ memset(&config_login, 0, sizeof(config_login));
+ cfg_login_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!cfg_login_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(cfg_login_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ config_login.header.cmnd_type = SPFC_MBOX_CONFIG_LOGIN_API;
+ config_login.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_config_login));
+ config_login.header.port_id = hba->port_index;
+
+ config_login.op_code = UNDEFINEOPCODE;
+
+ config_login.tx_bb_credit = hba->remote_bb_credit;
+
+ config_login.etov = hba->compared_edtov_val;
+ config_login.rtov = hba->compared_ratov_val;
+
+ config_login.rt_tov_tag = hba->remote_rttov_tag;
+ config_login.ed_tov_tag = hba->remote_edtov_tag;
+ config_login.bb_credit = hba->remote_bb_credit;
+ config_login.bb_scn = SPFC_LSB(hba->compared_bb_scn);
+
+ if (config_login.bb_scn) {
+ config_login.lr_flag = (login_parms->els_cmnd_code == ELS_PLOGI) ? 0 : 1;
+ ret = spfc_mb_send_and_wait_mbox(hba, &config_login, sizeof(config_login),
+ (union spfc_outmbox_generic *)cfg_login_sts);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) SPFC can't send and wait mailbox, command type: 0x%x.",
+ hba->port_cfg.port_id, config_login.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (cfg_login_sts->config_login_sts.header.cmnd_type !=
+ SPFC_MBOX_CONFIG_LOGIN_API_STS) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) Receive mailbox type incorrect. Type: 0x%x.",
+ hba->port_cfg.port_id,
+ cfg_login_sts->config_login_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (cfg_login_sts->config_login_sts.status != STATUS_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "Port(0x%x) Receive mailbox type(0x%x) status incorrect. Status: 0x%x.",
+ hba->port_cfg.port_id,
+ cfg_login_sts->config_login_sts.header.cmnd_type,
+ cfg_login_sts->config_login_sts.status);
+
+ goto exit;
+ }
+ } else {
+ iret = sphw_msg_to_mgmt_async(hba->dev_handle, COMM_MOD_FC,
+ SPFC_MBOX_CONFIG_LOGIN_API, &config_login,
+ sizeof(config_login), SPHW_CHANNEL_FC);
+
+ if (iret != 0) {
+ SPFC_MAILBOX_STAT(hba, SPFC_SEND_CONFIG_LOGINAPI_FAIL);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) spfc can't send config login cmd to up,ret:%d.",
+ hba->port_cfg.port_id, iret);
+
+ goto exit;
+ }
+
+ SPFC_MAILBOX_STAT(hba, SPFC_SEND_CONFIG_LOGINAPI);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) Topo(0x%x) Config login param to up: txbbcredit(0x%x), BB_SC_N(0x%x).",
+ hba->port_cfg.port_id, hba->active_topo,
+ config_login.tx_bb_credit, config_login.bb_scn);
+
+ ret = RETURN_OK;
+exit:
+ kfree(cfg_login_sts);
+
+ return ret;
+}
+
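+/*
+ * Send a mailbox command to the firmware (up) and wait synchronously for the
+ * reply. Use of the single mailbox is serialized through hba->mbox_complete:
+ * the caller first waits for the previous command to finish and re-signals
+ * the completion once sphw_msg_to_mgmt_sync() returns, on both success and
+ * failure paths.
+ */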
+u32 spfc_mb_send_and_wait_mbox(struct spfc_hba_info *hba, const void *in_mbox,
+ u16 in_size,
+ union spfc_outmbox_generic *out_mbox)
+{
+ void *handle = NULL;
+ u16 out_size = 0;
+ ulong time_out = 0;
+ int ret = 0;
+ struct spfc_mbox_header *header = NULL;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(in_mbox, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(out_mbox, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(hba->dev_handle, UNF_RETURN_ERROR);
+ header = (struct spfc_mbox_header *)in_mbox;
+ out_size = sizeof(union spfc_outmbox_generic);
+ handle = hba->dev_handle;
+ header->port_id = (u8)sphw_global_func_id(handle);
+
+	/* Wait for the last mailbox to complete: */
+ time_out = wait_for_completion_timeout(&hba->mbox_complete,
+ (ulong)msecs_to_jiffies(SPFC_MBOX_TIME_SEC_MAX *
+ UNF_S_TO_MS));
+ if (time_out == SPFC_ZERO) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[err]Port(0x%x) wait mailbox(0x%x) completion timeout: %d sec",
+ hba->port_cfg.port_id, header->cmnd_type,
+ SPFC_MBOX_TIME_SEC_MAX);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Send Msg to uP Sync: timer 10s */
+ ret = sphw_msg_to_mgmt_sync(handle, COMM_MOD_FC, header->cmnd_type,
+ (void *)in_mbox, in_size,
+ (union spfc_outmbox_generic *)out_mbox,
+ &out_size, (SPFC_MBX_MAX_TIMEOUT),
+ SPHW_CHANNEL_FC);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[warn]Port(0x%x) can not send mailbox(0x%x) with ret:%d",
+ hba->port_cfg.port_id, header->cmnd_type, ret);
+
+ complete(&hba->mbox_complete);
+ return UNF_RETURN_ERROR;
+ }
+
+ complete(&hba->mbox_complete);
+
+ return RETURN_OK;
+}
+
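+/* Reset the link-related runtime state (speed, topology, loop map, queue-set
+ * stage, SRQ delay info) under hba_lock.
+ */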
+void spfc_initial_dynamic_info(struct spfc_hba_info *fc_port)
+{
+ struct spfc_hba_info *hba = fc_port;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ spin_lock_irqsave(&hba->hba_lock, flag);
+ hba->active_port_speed = UNF_PORT_SPEED_UNKNOWN;
+ hba->active_topo = UNF_ACT_TOP_UNKNOWN;
+ hba->phy_link = UNF_PORT_LINK_DOWN;
+ hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_INIT;
+ hba->loop_map_valid = LOOP_MAP_INVALID;
+ hba->srq_delay_info.srq_delay_flag = 0;
+ hba->srq_delay_info.root_rq_rcvd_flag = 0;
+ spin_unlock_irqrestore(&hba->hba_lock, flag);
+}
+
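+/*
+ * Link-up event handler: record the negotiated speed and LED state, derive
+ * the active topology (public loop if the loop map contains the FL_Port
+ * address, private loop otherwise, or P2P), clear the flush/clear flags and
+ * report UNF_PORT_LINK_UP to the upper layer.
+ */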
+static u32 spfc_recv_fc_linkup(struct spfc_hba_info *hba, void *buf_in)
+{
+#define SPFC_LOOP_MASK 0x1
+#define SPFC_LOOPMAP_COUNT 128
+
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_link_event *link_event = NULL;
+
+ link_event = (struct spfc_link_event *)buf_in;
+ hba->phy_link = UNF_PORT_LINK_UP;
+ hba->active_port_speed = link_event->speed;
+ hba->led_states.green_speed_led = (u8)(link_event->green_speed_led);
+ hba->led_states.yellow_speed_led = (u8)(link_event->yellow_speed_led);
+ hba->led_states.ac_led = (u8)(link_event->ac_led);
+
+ if (link_event->top_type == SPFC_LOOP_MASK &&
+ (link_event->loop_map_info[ARRAY_INDEX_1] == UNF_FL_PORT_LOOP_ADDR ||
+ link_event->loop_map_info[ARRAY_INDEX_2] == UNF_FL_PORT_LOOP_ADDR)) {
+ hba->active_topo = UNF_ACT_TOP_PUBLIC_LOOP; /* Public Loop */
+ hba->active_alpa = link_event->alpa_value; /* AL_PA */
+ memcpy(hba->loop_map, link_event->loop_map_info, SPFC_LOOPMAP_COUNT);
+ hba->loop_map_valid = LOOP_MAP_VALID;
+ } else if (link_event->top_type == SPFC_LOOP_MASK) {
+ hba->active_topo = UNF_ACT_TOP_PRIVATE_LOOP; /* Private Loop */
+ hba->active_alpa = link_event->alpa_value; /* AL_PA */
+ memcpy(hba->loop_map, link_event->loop_map_info, SPFC_LOOPMAP_COUNT);
+ hba->loop_map_valid = LOOP_MAP_VALID;
+ } else {
+ hba->active_topo = UNF_TOP_P2P_MASK; /* P2P_D or P2P_F */
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
+ "[event]Port(0x%x) receive link up event(0x%x) with speed(0x%x) uP_topo(0x%x) driver_topo(0x%x)",
+ hba->port_cfg.port_id, link_event->link_event,
+ link_event->speed, link_event->top_type, hba->active_topo);
+
+ /* Set clear & flush state */
+ spfc_set_hba_clear_state(hba, false);
+ spfc_set_hba_flush_state(hba, false);
+ spfc_set_rport_flush_state(hba, false);
+
+ /* Report link up event to COM */
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_LINK_UP,
+ &hba->active_port_speed);
+
+ SPFC_LINK_EVENT_STAT(hba, SPFC_LINK_UP_COUNT);
+
+ return ret;
+}
+
+static u32 spfc_recv_fc_linkdown(struct spfc_hba_info *hba, void *buf_in)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_link_event *link_event = NULL;
+
+ link_event = (struct spfc_link_event *)buf_in;
+
+ /* 1. Led state setting */
+ hba->led_states.green_speed_led = (u8)(link_event->green_speed_led);
+ hba->led_states.yellow_speed_led = (u8)(link_event->yellow_speed_led);
+ hba->led_states.ac_led = (u8)(link_event->ac_led);
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
+ "[event]Port(0x%x) receive link down event(0x%x) reason(0x%x)",
+ hba->port_cfg.port_id, link_event->link_event, link_event->reason);
+
+ spfc_initial_dynamic_info(hba);
+
+ /* 2. set HBA flush state */
+ spfc_set_hba_flush_state(hba, true);
+
+ /* 3. set R_Port (parent SQ) flush state */
+ spfc_set_rport_flush_state(hba, true);
+
+ /* 4. Report link down event to COM */
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_LINK_DOWN, 0);
+
+ /* DFX setting */
+ SPFC_LINK_REASON_STAT(hba, link_event->reason);
+ SPFC_LINK_EVENT_STAT(hba, SPFC_LINK_DOWN_COUNT);
+
+ return ret;
+}
+
+static u32 spfc_recv_fc_delcmd(struct spfc_hba_info *hba, void *buf_in)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_link_event *link_event = NULL;
+
+ link_event = (struct spfc_link_event *)buf_in;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) receive delete cmd event(0x%x)",
+ hba->port_cfg.port_id, link_event->link_event);
+
+ /* Send buffer clear cmnd */
+ ret = spfc_clear_fetched_sq_wqe(hba);
+
+ hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_SCANNING;
+ SPFC_LINK_EVENT_STAT(hba, SPFC_FC_DELETE_CMND_COUNT);
+
+ return ret;
+}
+
+static u32 spfc_recv_fc_error(struct spfc_hba_info *hba, void *buf_in)
+{
+#define FC_ERR_LEVEL_DEAD 0
+#define FC_ERR_LEVEL_HIGH 1
+#define FC_ERR_LEVEL_LOW 2
+
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_up_error_event *up_error_event = NULL;
+
+ up_error_event = (struct spfc_up_error_event *)buf_in;
+ if (up_error_event->error_type >= SPFC_UP_ERR_BUTT ||
+ up_error_event->error_value >= SPFC_ERR_VALUE_BUTT) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "Port(0x%x) received an unsupported UP Error Event Type(0x%x) Value(0x%x).",
+ hba->port_cfg.port_id, up_error_event->error_type,
+ up_error_event->error_value);
+ return ret;
+ }
+
+ switch (up_error_event->error_level) {
+ case FC_ERR_LEVEL_DEAD:
+ ret = RETURN_OK;
+ break;
+
+ case FC_ERR_LEVEL_HIGH:
+ /* port reset */
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport,
+ UNF_PORT_ABNORMAL_RESET, NULL);
+ break;
+
+ case FC_ERR_LEVEL_LOW:
+ ret = RETURN_OK;
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "Port(0x%x) received an unsupported UP Error Event Level(0x%x), can not process.",
+ hba->port_cfg.port_id,
+ up_error_event->error_level);
+ return ret;
+ }
+ if (up_error_event->error_value < SPFC_ERR_VALUE_BUTT)
+ SPFC_UP_ERR_EVENT_STAT(hba, up_error_event->error_value);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) process UP Error Event Level(0x%x) Type(0x%x) Value(0x%x) %s.",
+ hba->port_cfg.port_id, up_error_event->error_level,
+ up_error_event->error_type, up_error_event->error_value,
+ (ret == UNF_RETURN_ERROR) ? "ERROR" : "OK");
+
+ return ret;
+}
+
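+/* Dispatch table from firmware (up-to-driver) event codes to handlers; walked
+ * by spfc_up_msg2driver_proc() below.
+ */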
+static struct spfc_up2drv_msg_handle up_msg_handle[] = {
+ {SPFC_MBOX_RECV_FC_LINKUP, spfc_recv_fc_linkup},
+ {SPFC_MBOX_RECV_FC_LINKDOWN, spfc_recv_fc_linkdown},
+ {SPFC_MBOX_RECV_FC_DELCMD, spfc_recv_fc_delcmd},
+ {SPFC_MBOX_RECV_FC_ERROR, spfc_recv_fc_error}
+};
+
+void spfc_up_msg2driver_proc(void *hwdev_handle, void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 index = 0;
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_mbox_header *mbx_header = NULL;
+
+ FC_CHECK_RETURN_VOID(hwdev_handle);
+ FC_CHECK_RETURN_VOID(pri_handle);
+ FC_CHECK_RETURN_VOID(buf_in);
+ FC_CHECK_RETURN_VOID(buf_out);
+ FC_CHECK_RETURN_VOID(out_size);
+
+ hba = (struct spfc_hba_info *)pri_handle;
+ if (!hba) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR, "[err]Hba is null");
+ return;
+ }
+
+ mbx_header = (struct spfc_mbox_header *)buf_in;
+ if (mbx_header->cmnd_type != cmd) {
+ *out_size = sizeof(struct spfc_link_event);
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR,
+ "[err]Port(0x%x) cmd(0x%x) is not matched with header cmd type(0x%x)",
+ hba->port_cfg.port_id, cmd, mbx_header->cmnd_type);
+ return;
+ }
+
+ while (index < (sizeof(up_msg_handle) / sizeof(struct spfc_up2drv_msg_handle))) {
+ if (up_msg_handle[index].cmd == cmd &&
+ up_msg_handle[index].spfc_msg_up2driver_handler) {
+ ret = up_msg_handle[index].spfc_msg_up2driver_handler(hba, buf_in);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR,
+ "[warn]Port(0x%x) process up cmd(0x%x) failed",
+ hba->port_cfg.port_id, cmd);
+ }
+ *out_size = sizeof(struct spfc_link_event);
+ return;
+ }
+ index++;
+ }
+
+ *out_size = sizeof(struct spfc_link_event);
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR,
+ "[err]Port(0x%x) process up cmd(0x%x) failed",
+ hba->port_cfg.port_id, cmd);
+}
+
+u32 spfc_get_topo_act(void *hba, void *topo_act)
+{
+ struct spfc_hba_info *spfc_hba = hba;
+ enum unf_act_topo *pen_topo_act = topo_act;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(topo_act, UNF_RETURN_ERROR);
+
+ /* Get topo from low_level */
+ *pen_topo_act = spfc_hba->active_topo;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Get active topology: 0x%x", *pen_topo_act);
+
+ return RETURN_OK;
+}
+
+u32 spfc_get_loop_alpa(void *hba, void *alpa)
+{
+ ulong flags = 0;
+ struct spfc_hba_info *spfc_hba = hba;
+ u8 *alpa_temp = alpa;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(alpa, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&spfc_hba->hba_lock, flags);
+ *alpa_temp = spfc_hba->active_alpa;
+ spin_unlock_irqrestore(&spfc_hba->hba_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Get active AL_PA(0x%x)", *alpa_temp);
+
+ return RETURN_OK;
+}
+
+static void spfc_get_fabric_login_params(struct spfc_hba_info *hba,
+ struct unf_port_login_parms *params_addr)
+{
+ ulong flag = 0;
+
+ spin_lock_irqsave(&hba->hba_lock, flag);
+ hba->active_topo = params_addr->act_topo;
+ hba->compared_ratov_val = params_addr->compared_ratov_val;
+ hba->compared_edtov_val = params_addr->compared_edtov_val;
+ hba->compared_bb_scn = params_addr->compared_bbscn;
+ hba->remote_edtov_tag = params_addr->remote_edtov_tag;
+ hba->remote_rttov_tag = params_addr->remote_rttov_tag;
+ hba->remote_bb_credit = params_addr->remote_bb_credit;
+ spin_unlock_irqrestore(&hba->hba_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) topo(0x%x) get fabric params: R_A_TOV(0x%x) E_D_TOV(%u) BB_CREDIT(0x%x) BB_SC_N(0x%x)",
+ hba->port_cfg.port_id, hba->active_topo,
+ hba->compared_ratov_val, hba->compared_edtov_val,
+ hba->remote_bb_credit, hba->compared_bb_scn);
+}
+
+static void spfc_get_port_login_params(struct spfc_hba_info *hba,
+ struct unf_port_login_parms *params_addr)
+{
+ ulong flag = 0;
+
+ spin_lock_irqsave(&hba->hba_lock, flag);
+ hba->compared_ratov_val = params_addr->compared_ratov_val;
+ hba->compared_edtov_val = params_addr->compared_edtov_val;
+ hba->compared_bb_scn = params_addr->compared_bbscn;
+ hba->remote_edtov_tag = params_addr->remote_edtov_tag;
+ hba->remote_rttov_tag = params_addr->remote_rttov_tag;
+ hba->remote_bb_credit = params_addr->remote_bb_credit;
+ spin_unlock_irqrestore(&hba->hba_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) Topo(0x%x) Get Port Params: R_A_TOV(0x%x), E_D_TOV(0x%x), BB_CREDIT(0x%x), BB_SC_N(0x%x).",
+ hba->port_cfg.port_id, hba->active_topo,
+ hba->compared_ratov_val, hba->compared_edtov_val,
+ hba->remote_bb_credit, hba->compared_bb_scn);
+}
+
+u32 spfc_update_fabric_param(void *hba, void *para_in)
+{
+ u32 ret = RETURN_OK;
+ struct spfc_hba_info *spfc_hba = hba;
+ struct unf_port_login_parms *login_coparms = para_in;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
+
+ spfc_get_fabric_login_params(spfc_hba, login_coparms);
+
+ if (spfc_hba->active_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ spfc_hba->active_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
+ if (spfc_hba->work_mode == SPFC_SMARTIO_WORK_MODE_FC)
+ ret = spfc_config_login_api(spfc_hba, login_coparms);
+ }
+
+ return ret;
+}
+
+u32 spfc_update_port_param(void *hba, void *para_in)
+{
+ u32 ret = RETURN_OK;
+ struct spfc_hba_info *spfc_hba = hba;
+ struct unf_port_login_parms *login_coparms =
+ (struct unf_port_login_parms *)para_in;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
+
+ if (spfc_hba->active_topo == UNF_ACT_TOP_PRIVATE_LOOP ||
+ spfc_hba->active_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ spfc_get_port_login_params(spfc_hba, login_coparms);
+ ret = spfc_config_login_api(spfc_hba, login_coparms);
+ }
+
+ spfc_save_login_parms_in_sq_info(spfc_hba, login_coparms);
+
+ return ret;
+}
+
+u32 spfc_get_workable_bb_credit(void *hba, void *bb_credit)
+{
+ u32 *bb_credit_temp = (u32 *)bb_credit;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(bb_credit, UNF_RETURN_ERROR);
+ if (spfc_hba->active_port_speed == UNF_PORT_SPEED_32_G)
+ *bb_credit_temp = SPFC_LOWLEVEL_DEFAULT_32G_BB_CREDIT;
+ else if (spfc_hba->active_port_speed == UNF_PORT_SPEED_16_G)
+ *bb_credit_temp = SPFC_LOWLEVEL_DEFAULT_16G_BB_CREDIT;
+ else
+ *bb_credit_temp = SPFC_LOWLEVEL_DEFAULT_8G_BB_CREDIT;
+
+ return RETURN_OK;
+}
+
+u32 spfc_get_workable_bb_scn(void *hba, void *bb_scn)
+{
+ u32 *bb_scn_temp = (u32 *)bb_scn;
+ struct spfc_hba_info *spfc_hba = (struct spfc_hba_info *)hba;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(bb_scn, UNF_RETURN_ERROR);
+
+ *bb_scn_temp = spfc_hba->port_bb_scn_cfg;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Return BBSCN(0x%x) to CM", *bb_scn_temp);
+
+ return RETURN_OK;
+}
+
+u32 spfc_get_loop_map(void *hba, void *buf)
+{
+ ulong flags = 0;
+ struct unf_buf *buf_temp = (struct unf_buf *)buf;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf_temp, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf_temp->buf, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf_temp->buf_len, UNF_RETURN_ERROR);
+
+ if (buf_temp->buf_len > UNF_LOOPMAP_COUNT)
+ return UNF_RETURN_ERROR;
+
+ spin_lock_irqsave(&spfc_hba->hba_lock, flags);
+ if (spfc_hba->loop_map_valid != LOOP_MAP_VALID) {
+ spin_unlock_irqrestore(&spfc_hba->hba_lock, flags);
+ return UNF_RETURN_ERROR;
+ }
+ memcpy(buf_temp->buf, spfc_hba->loop_map, buf_temp->buf_len);
+ spin_unlock_irqrestore(&spfc_hba->hba_lock, flags);
+
+ return RETURN_OK;
+}
+
+u32 spfc_mb_reset_chip(struct spfc_hba_info *hba, u8 sub_type)
+{
+ struct spfc_inmbox_port_reset port_reset;
+ union spfc_outmbox_generic *port_reset_sts = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ memset(&port_reset, 0, sizeof(port_reset));
+
+ port_reset_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!port_reset_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(port_reset_sts, 0, sizeof(union spfc_outmbox_generic));
+ port_reset.header.cmnd_type = SPFC_MBOX_PORT_RESET;
+ port_reset.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_port_reset));
+ port_reset.op_code = sub_type;
+
+ if (spfc_mb_send_and_wait_mbox(hba, &port_reset, sizeof(port_reset),
+ port_reset_sts) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[warn]Port(0x%x) can't send and wait mailbox with command type(0x%x)",
+ hba->port_cfg.port_id, port_reset.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (port_reset_sts->port_reset_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[warn]Port(0x%x) receive mailbox type(0x%x) status(0x%x) incorrect",
+ hba->port_cfg.port_id,
+ port_reset_sts->port_reset_sts.header.cmnd_type,
+ port_reset_sts->port_reset_sts.status);
+
+ goto exit;
+ }
+
+ if (port_reset_sts->port_reset_sts.header.cmnd_type != SPFC_MBOX_PORT_RESET_STS) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[warn]Port(0x%x) recv mailbox type(0x%x) incorrect",
+ hba->port_cfg.port_id,
+ port_reset_sts->port_reset_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) reset chip mailbox success",
+ hba->port_cfg.port_id);
+
+ ret = RETURN_OK;
+exit:
+ kfree(port_reset_sts);
+
+ return ret;
+}
+
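+/* Asynchronously notify the firmware that the buffer clear is done and move
+ * the queue-set stage to FLUSHDONE.
+ */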
+u32 spfc_clear_sq_wqe_done(struct spfc_hba_info *hba)
+{
+ int ret1 = RETURN_OK;
+ u32 ret2 = RETURN_OK;
+ struct spfc_inmbox_clear_done clear_done;
+
+ clear_done.header.cmnd_type = SPFC_MBOX_BUFFER_CLEAR_DONE;
+ clear_done.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_clear_done));
+ clear_done.header.port_id = hba->port_index;
+
+ ret1 = sphw_msg_to_mgmt_async(hba->dev_handle, COMM_MOD_FC,
+ SPFC_MBOX_BUFFER_CLEAR_DONE, &clear_done,
+ sizeof(clear_done), SPHW_CHANNEL_FC);
+
+ if (ret1 != 0) {
+ SPFC_MAILBOX_STAT(hba, SPFC_SEND_CLEAR_DONE_FAIL);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC Port(0x%x) can't send clear done cmd to up, ret:%d",
+ hba->port_cfg.port_id, ret1);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ SPFC_MAILBOX_STAT(hba, SPFC_SEND_CLEAR_DONE);
+ hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_FLUSHDONE;
+ hba->next_clear_sq = 0;
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
+ "[info]Port(0x%x) clear done msg(0x%x) sent to up succeed with stage(0x%x)",
+ hba->port_cfg.port_id, clear_done.header.cmnd_type,
+ hba->queue_set_stage);
+
+ return ret2;
+}
+
+u32 spfc_mbx_get_fw_clear_stat(struct spfc_hba_info *hba, u32 *clear_state)
+{
+ struct spfc_inmbox_get_clear_state get_clr_state;
+ union spfc_outmbox_generic *port_clear_state_sts = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(clear_state, UNF_RETURN_ERROR);
+
+ memset(&get_clr_state, 0, sizeof(get_clr_state));
+
+ port_clear_state_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!port_clear_state_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(port_clear_state_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ get_clr_state.header.cmnd_type = SPFC_MBOX_GET_CLEAR_STATE;
+ get_clr_state.header.length =
+ SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_get_clear_state));
+
+ if (spfc_mb_send_and_wait_mbox(hba, &get_clr_state, sizeof(get_clr_state),
+ port_clear_state_sts) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "spfc can't send and wait mailbox, command type: 0x%x",
+ get_clr_state.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (port_clear_state_sts->get_clr_state_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "Port(0x%x) Receive mailbox type(0x%x) status incorrect. Status: 0x%x, state 0x%x.",
+ hba->port_cfg.port_id,
+ port_clear_state_sts->get_clr_state_sts.header.cmnd_type,
+ port_clear_state_sts->get_clr_state_sts.status,
+ port_clear_state_sts->get_clr_state_sts.state);
+
+ goto exit;
+ }
+
+ if (port_clear_state_sts->get_clr_state_sts.header.cmnd_type !=
+ SPFC_MBOX_GET_CLEAR_STATE_STS) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "Port(0x%x) recv mailbox type(0x%x) incorrect.",
+ hba->port_cfg.port_id,
+ port_clear_state_sts->get_clr_state_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "Port(0x%x) get port clear state 0x%x.",
+ hba->port_cfg.port_id,
+ port_clear_state_sts->get_clr_state_sts.state);
+
+ *clear_state = port_clear_state_sts->get_clr_state_sts.state;
+
+ ret = RETURN_OK;
+exit:
+ kfree(port_clear_state_sts);
+
+ return ret;
+}
+
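+/* Report (flag != 0) or clear (flag == 0) the default SQ context (CID/XID)
+ * to the firmware via SPFC_MBOX_SEND_DEFAULT_SQ_INFO.
+ */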
+u32 spfc_mbx_config_default_session(void *hba, u32 flag)
+{
+ struct spfc_hba_info *spfc_hba = NULL;
+ struct spfc_inmbox_default_sq_info default_sq_info;
+ union spfc_outmbox_generic default_sq_info_sts;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ spfc_hba = (struct spfc_hba_info *)hba;
+
+ memset(&default_sq_info, 0, sizeof(struct spfc_inmbox_default_sq_info));
+ memset(&default_sq_info_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ default_sq_info.header.cmnd_type = SPFC_MBOX_SEND_DEFAULT_SQ_INFO;
+ default_sq_info.header.length =
+ SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_default_sq_info));
+ default_sq_info.func_id = sphw_global_func_id(spfc_hba->dev_handle);
+
+	/* flag == 1: set the default SQ info at probe time;
+	 * flag == 0: clear it at remove time.
+	 */
+ if (flag) {
+ default_sq_info.sq_cid = spfc_hba->default_sq_info.sq_cid;
+ default_sq_info.sq_xid = spfc_hba->default_sq_info.sq_xid;
+ default_sq_info.valid = 1;
+ }
+
+ ret =
+ spfc_mb_send_and_wait_mbox(spfc_hba, &default_sq_info, sizeof(default_sq_info),
+ (union spfc_outmbox_generic *)(void *)&default_sq_info_sts);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "spfc can't send and wait mailbox, command type: 0x%x.",
+ default_sq_info.header.cmnd_type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (default_sq_info_sts.default_sq_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "Port(0x%x) mailbox status incorrect: status(0x%x).",
+ spfc_hba->port_cfg.port_id,
+ default_sq_info_sts.default_sq_sts.status);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (SPFC_MBOX_SEND_DEFAULT_SQ_INFO_STS !=
+ default_sq_info_sts.default_sq_sts.header.cmnd_type) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) receive mailbox type incorrect type: 0x%x.",
+ spfc_hba->port_cfg.port_id,
+ default_sq_info_sts.default_sq_sts.header.cmnd_type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_chipitf.h b/drivers/scsi/spfc/hw/spfc_chipitf.h
new file mode 100644
index 000000000000..acd770514edf
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_chipitf.h
@@ -0,0 +1,797 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_CHIPITF_H
+#define SPFC_CHIPITF_H
+
+#include "unf_type.h"
+#include "unf_log.h"
+#include "spfc_utils.h"
+#include "spfc_module.h"
+
+#include "spfc_service.h"
+
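+/*
+ * Mailbox command codes. Where a request below has a matching *_STS reply
+ * code, the reply code is the request code plus 0xA0.
+ */
+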
+/* CONF_API_CMND */
+#define SPFC_MBOX_CONFIG_API (0x00)
+#define SPFC_MBOX_CONFIG_API_STS (0xA0)
+
+/* GET_CHIP_INFO_API_CMD */
+#define SPFC_MBOX_GET_CHIP_INFO (0x01)
+#define SPFC_MBOX_GET_CHIP_INFO_STS (0xA1)
+
+/* PORT_RESET */
+#define SPFC_MBOX_PORT_RESET (0x02)
+#define SPFC_MBOX_PORT_RESET_STS (0xA2)
+
+/* SFP_SWITCH_API_CMND */
+#define SPFC_MBOX_PORT_SWITCH (0x03)
+#define SPFC_MBOX_PORT_SWITCH_STS (0xA3)
+
+/* CONF_AF_LOGIN_API_CMND */
+#define SPFC_MBOX_CONFIG_LOGIN_API (0x06)
+#define SPFC_MBOX_CONFIG_LOGIN_API_STS (0xA6)
+
+/* BUFFER_CLEAR_DONE_CMND */
+#define SPFC_MBOX_BUFFER_CLEAR_DONE (0x07)
+#define SPFC_MBOX_BUFFER_CLEAR_DONE_STS (0xA7)
+
+#define SPFC_MBOX_GET_UP_STATE (0x09)
+#define SPFC_MBOX_GET_UP_STATE_STS (0xA9)
+
+/* GET CLEAR DONE STATE */
+#define SPFC_MBOX_GET_CLEAR_STATE (0x0E)
+#define SPFC_MBOX_GET_CLEAR_STATE_STS (0xAE)
+
+/* CONFIG TIMER */
+#define SPFC_MBOX_CONFIG_TIMER (0x10)
+#define SPFC_MBOX_CONFIG_TIMER_STS (0xB0)
+
+/* Led Test */
+#define SPFC_MBOX_LED_TEST (0x12)
+#define SPFC_MBOX_LED_TEST_STS (0xB2)
+
+/* set esch */
+#define SPFC_MBOX_SET_ESCH (0x13)
+#define SPFC_MBOX_SET_ESCH_STS (0xB3)
+
+/* set get tx serdes */
+#define SPFC_MBOX_SET_GET_SERDES_TX (0x14)
+#define SPFC_MBOX_SET_GET_SERDES_TX_STS (0xB4)
+
+/* get rx serdes */
+#define SPFC_MBOX_GET_SERDES_RX (0x15)
+#define SPFC_MBOX_GET_SERDES_RX_STS (0xB5)
+
+/* i2c read write */
+#define SPFC_MBOX_I2C_WR_RD (0x16)
+#define SPFC_MBOX_I2C_WR_RD_STS (0xB6)
+
+/* GET UCODE STATS CMD */
+#define SPFC_MBOX_GET_UCODE_STAT (0x18)
+#define SPFC_MBOX_GET_UCODE_STAT_STS (0xB8)
+
+/* gpio read write */
+#define SPFC_MBOX_GPIO_WR_RD (0x19)
+#define SPFC_MBOX_GPIO_WR_RD_STS (0xB9)
+
+#define SPFC_MBOX_SEND_DEFAULT_SQ_INFO (0x26)
+#define SPFC_MBOX_SEND_DEFAULT_SQ_INFO_STS (0xc6)
+
+/* FC: DRV->UP */
+#define SPFC_MBOX_SEND_ELS_CMD (0x2A)
+#define SPFC_MBOX_SEND_VPORT_INFO (0x2B)
+
+/* FC: UP->DRV */
+#define SPFC_MBOX_RECV_FC_LINKUP (0x40)
+#define SPFC_MBOX_RECV_FC_LINKDOWN (0x41)
+#define SPFC_MBOX_RECV_FC_DELCMD (0x42)
+#define SPFC_MBOX_RECV_FC_ERROR (0x43)
+
+#define LOOP_MAP_VALID (1)
+#define LOOP_MAP_INVALID (0)
+
+#define SPFC_MBOX_SIZE (1024)
+#define SPFC_MBOX_HEADER_SIZE (4)
+
+#define UNDEFINEOPCODE (0)
+
+#define VALUEMASK_L 0x00000000FFFFFFFF
+#define VALUEMASK_H 0xFFFFFFFF00000000
+
+#define STATUS_OK (0)
+#define STATUS_FAIL (1)
+
+enum spfc_drv2up_unblock_msg_cmd_code {
+ SPFC_SEND_ELS_CMD,
+ SPFC_SEND_ELS_CMD_FAIL,
+ SPFC_RCV_ELS_CMD_RSP,
+ SPFC_SEND_CONFIG_LOGINAPI,
+ SPFC_SEND_CONFIG_LOGINAPI_FAIL,
+ SPFC_RCV_CONFIG_LOGIN_API_RSP,
+ SPFC_SEND_CLEAR_DONE,
+ SPFC_SEND_CLEAR_DONE_FAIL,
+ SPFC_RCV_CLEAR_DONE_RSP,
+ SPFC_SEND_VPORT_INFO_DONE,
+ SPFC_SEND_VPORT_INFO_FAIL,
+ SPFC_SEND_VPORT_INFO_RSP,
+ SPFC_MBOX_CMD_BUTT
+};
+
+/* up to driver cmd code */
+enum spfc_up2drv_msg_cmd_code {
+ SPFC_UP2DRV_MSG_CMD_LINKUP = 0x1,
+ SPFC_UP2DRV_MSG_CMD_LINKDOWN = 0x2,
+ SPFC_UP2DRV_MSG_CMD_BUTT
+};
+
+/* up to driver handler template */
+struct spfc_up2drv_msg_handle {
+ u8 cmd;
+ u32 (*spfc_msg_up2driver_handler)(struct spfc_hba_info *hba, void *buf_in);
+};
+
+/* tile to driver cmd code */
+enum spfc_tile2drv_msg_cmd_code {
+ SPFC_TILE2DRV_MSG_CMD_SCAN_DONE,
+ SPFC_TILE2DRV_MSG_CMD_FLUSH_DONE,
+ SPFC_TILE2DRV_MSG_CMD_BUTT
+};
+
+/* tile to driver handler template */
+struct spfc_tile2drv_msg_handle {
+ u8 cmd;
+ u32 (*spfc_msg_tile2driver_handler)(struct spfc_hba_info *hba, u8 cmd, u64 val);
+};
+
+/* Mbox Common Header */
+struct spfc_mbox_header {
+ u8 cmnd_type;
+ u8 length;
+ u8 port_id;
+ u8 reserved;
+};
+
+/* open or close the sfp */
+struct spfc_inmbox_port_switch {
+ struct spfc_mbox_header header;
+ u32 op_code : 8;
+ u32 rsvd0 : 24;
+ u32 rsvd1[6];
+};
+
+struct spfc_inmbox_send_vport_info {
+ struct spfc_mbox_header header;
+
+ u64 sys_port_wwn;
+ u64 sys_node_name;
+
+ u32 nport_id : 24;
+ u32 vpi : 8;
+};
+
+struct spfc_outmbox_port_switch_sts {
+ struct spfc_mbox_header header;
+
+ u16 reserved1;
+ u8 reserved2;
+ u8 status;
+};
+
+/* config API */
+struct spfc_inmbox_config_api {
+ struct spfc_mbox_header header;
+
+ u32 op_code : 8;
+ u32 reserved1 : 24;
+
+ u8 topy_mode;
+ u8 sfp_speed;
+ u8 max_speed;
+ u8 hard_alpa;
+
+ u8 port_name[UNF_WWN_LEN];
+
+ u32 slave : 1;
+ u32 auto_sneg : 1;
+ u32 reserved2 : 30;
+
+ u32 rx_6432g_bb_credit : 16; /* 160 */
+ u32 rx_16g_bb_credit : 16; /* 80 */
+ u32 rx_84g_bb_credit : 16; /* 50 */
+ u32 rdy_cnt_bf_fst_frm : 16; /* 8 */
+
+ u32 esch_32g_value;
+ u32 esch_16g_value;
+ u32 esch_8g_value;
+ u32 esch_4g_value;
+ u32 esch_64g_value;
+ u32 esch_bust_size;
+};
+
+struct spfc_outmbox_config_api_sts {
+ struct spfc_mbox_header header;
+ u16 reserved1;
+ u8 reserved2;
+ u8 status;
+};
+
+/* Get chip info */
+struct spfc_inmbox_get_chip_info {
+ struct spfc_mbox_header header;
+};
+
+struct spfc_outmbox_get_chip_info_sts {
+ struct spfc_mbox_header header;
+ u8 status;
+ u8 board_type;
+ u8 rvsd0[2];
+ u64 wwpn;
+ u64 wwnn;
+ u64 rsvd1;
+};
+
+/* Get reg info */
+struct spfc_inmbox_get_reg_info {
+ struct spfc_mbox_header header;
+ u32 op_code : 1;
+ u32 reg_len : 8;
+ u32 rsvd1 : 23;
+ u32 reg_addr;
+ u32 reg_value_l32;
+ u32 reg_value_h32;
+ u32 rsvd2[27];
+};
+
+/* Get reg info sts */
+struct spfc_outmbox_get_reg_info_sts {
+ struct spfc_mbox_header header;
+
+ u16 rsvd0;
+ u8 rsvd1;
+ u8 status;
+ u32 reg_value_l32;
+ u32 reg_value_h32;
+ u32 rsvd2[28];
+};
+
+/* Config login API */
+struct spfc_inmbox_config_login {
+ struct spfc_mbox_header header;
+
+ u32 op_code : 8;
+ u32 reserved1 : 24;
+
+ u16 tx_bb_credit;
+ u16 reserved2;
+
+ u32 rtov;
+ u32 etov;
+
+ u32 rt_tov_tag : 1;
+ u32 ed_tov_tag : 1;
+ u32 bb_credit : 6;
+ u32 bb_scn : 8;
+ u32 lr_flag : 16;
+};
+
+struct spfc_outmbox_config_login_sts {
+ struct spfc_mbox_header header;
+
+ u16 reserved1;
+ u8 reserved2;
+ u8 status;
+};
+
+/* port reset */
+#define SPFC_MBOX_SUBTYPE_LIGHT_RESET (0x0)
+#define SPFC_MBOX_SUBTYPE_HEAVY_RESET (0x1)
+
+struct spfc_inmbox_port_reset {
+ struct spfc_mbox_header header;
+
+ u32 op_code : 8;
+ u32 reserved : 24;
+};
+
+struct spfc_outmbox_port_reset_sts {
+ struct spfc_mbox_header header;
+
+ u16 reserved1;
+ u8 reserved2;
+ u8 status;
+};
+
+/* led test */
+struct spfc_inmbox_led_test {
+ struct spfc_mbox_header header;
+
+	/* 0->act type; 1->low speed; 2->high speed */
+ u8 led_type;
+	/* 0:blink; 1:light on; 2:light off; 0xff:default */
+ u8 led_mode;
+ u8 resvd[ARRAY_INDEX_2];
+};
+
+struct spfc_outmbox_led_test_sts {
+ struct spfc_mbox_header header;
+
+ u16 rsvd1;
+ u8 rsvd2;
+ u8 status;
+};
+
+/* set esch */
+struct spfc_inmbox_set_esch {
+ struct spfc_mbox_header header;
+
+ u32 esch_value;
+ u32 esch_bust_size;
+};
+
+struct spfc_outmbox_set_esch_sts {
+ struct spfc_mbox_header header;
+
+ u16 rsvd1;
+ u8 rsvd2;
+ u8 status;
+};
+
+struct spfc_inmbox_set_serdes_tx {
+ struct spfc_mbox_header header;
+
+ u8 swing; /* amplitude setting */
+ char serdes_pre1; /* pre1 setting */
+ char serdes_pre2; /* pre2 setting */
+ char serdes_post; /* post setting */
+ u8 serdes_main; /* main setting */
+ u8 op_code; /* opcode,0:setting;1:read */
+ u8 rsvd[ARRAY_INDEX_2];
+};
+
+struct spfc_outmbox_set_serdes_tx_sts {
+ struct spfc_mbox_header header;
+ u16 rvsd0;
+ u8 rvsd1;
+ u8 status;
+ u8 swing;
+ char serdes_pre1;
+ char serdes_pre2;
+ char serdes_post;
+ u8 serdes_main;
+ u8 rsvd2[ARRAY_INDEX_3];
+};
+
+struct spfc_inmbox_i2c_wr_rd {
+ struct spfc_mbox_header header;
+ u8 op_code; /* 0 write, 1 read */
+ u8 rsvd[ARRAY_INDEX_3];
+
+ u32 dev_addr;
+ u32 offset;
+ u32 wr_data;
+};
+
+struct spfc_outmbox_i2c_wr_rd_sts {
+ struct spfc_mbox_header header;
+ u8 status;
+ u8 resvd[ARRAY_INDEX_3];
+
+ u32 rd_data;
+};
+
+struct spfc_inmbox_gpio_wr_rd {
+ struct spfc_mbox_header header;
+ u8 op_code; /* 0 write,1 read */
+ u8 rsvd[ARRAY_INDEX_3];
+
+ u32 pin;
+ u32 wr_data;
+};
+
+struct spfc_outmbox_gpio_wr_rd_sts {
+ struct spfc_mbox_header header;
+ u8 status;
+ u8 resvd[ARRAY_INDEX_3];
+
+ u32 rd_data;
+};
+
+struct spfc_inmbox_get_serdes_rx {
+ struct spfc_mbox_header header;
+
+ u8 op_code;
+ u8 h16_macro;
+ u8 h16_lane;
+ u8 rsvd;
+};
+
+struct spfc_inmbox_get_serdes_rx_sts {
+ struct spfc_mbox_header header;
+ u16 rvsd0;
+ u8 rvsd1;
+ u8 status;
+ int left_eye;
+ int right_eye;
+ int low_eye;
+ int high_eye;
+};
+
+struct spfc_ser_op_m_l {
+ u8 op_code;
+ u8 h16_macro;
+ u8 h16_lane;
+ u8 rsvd;
+};
+
+/* get sfp info */
+#define SPFC_MBOX_GET_SFP_INFO_MB_LENGTH 1
+#define OFFSET_TWO_DWORD 2
+#define OFFSET_ONE_DWORD 1
+
+struct spfc_inmbox_get_sfp_info {
+ struct spfc_mbox_header header;
+};
+
+struct spfc_outmbox_get_sfp_info_sts {
+ struct spfc_mbox_header header;
+
+ u32 rcvd : 8;
+ u32 length : 16;
+ u32 status : 8;
+};
+
+/* get ucode stats */
+#define SPFC_UCODE_STAT_NUM 64
+
+struct spfc_outmbox_get_ucode_stat {
+ struct spfc_mbox_header header;
+};
+
+struct spfc_outmbox_get_ucode_stat_sts {
+ struct spfc_mbox_header header;
+
+ u16 rsvd;
+ u8 rsvd2;
+ u8 status;
+
+ u32 ucode_stat[SPFC_UCODE_STAT_NUM];
+};
+
+/* uP-->Driver async event API */
+struct spfc_link_event {
+ struct spfc_mbox_header header;
+
+ u8 link_event;
+ u8 reason;
+ u8 speed;
+ u8 top_type;
+
+ u8 alpa_value;
+ u8 reserved1;
+ u16 paticpate : 1;
+ u16 ac_led : 1;
+ u16 yellow_speed_led : 1;
+ u16 green_speed_led : 1;
+ u16 reserved2 : 12;
+
+ u8 loop_map_info[128];
+};
+
+enum spfc_up_err_type {
+ SPFC_UP_ERR_DRV_PARA = 0,
+ SPFC_UP_ERR_SFP = 1,
+ SPFC_UP_ERR_32G_PUB = 2,
+ SPFC_UP_ERR_32G_UA = 3,
+ SPFC_UP_ERR_32G_MAC = 4,
+ SPFC_UP_ERR_NON32G_DFX = 5,
+ SPFC_UP_ERR_NON32G_MAC = 6,
+ SPFC_UP_ERR_BUTT
+
+};
+
+enum spfc_up_err_value {
+ /* ERR type 0 */
+ SPFC_DRV_2_UP_PARA_ERR = 0,
+
+ /* ERR type 1 */
+ SPFC_SFP_SPEED_ERR,
+
+ /* ERR type 2 */
+ SPFC_32GPUB_UA_RXESCH_FIFO_OF,
+ SPFC_32GPUB_UA_RXESCH_FIFO_UCERR,
+
+ /* ERR type 3 */
+ SPFC_32G_UA_UATX_LEN_ABN,
+ SPFC_32G_UA_RXAFIFO_OF,
+ SPFC_32G_UA_TXAFIFO_OF,
+ SPFC_32G_UA_RXAFIFO_UCERR,
+ SPFC_32G_UA_TXAFIFO_UCERR,
+
+ /* ERR type 4 */
+ SPFC_32G_MAC_RX_BBC_FATAL,
+ SPFC_32G_MAC_TX_BBC_FATAL,
+ SPFC_32G_MAC_TXFIFO_UF,
+ SPFC_32G_MAC_PCS_TXFIFO_UF,
+ SPFC_32G_MAC_RXBBC_CRDT_TO,
+ SPFC_32G_MAC_PCS_RXAFIFO_OF,
+ SPFC_32G_MAC_PCS_TXFIFO_OF,
+ SPFC_32G_MAC_FC2P_RXFIFO_OF,
+ SPFC_32G_MAC_FC2P_TXFIFO_OF,
+ SPFC_32G_MAC_FC2P_CAFIFO_OF,
+ SPFC_32G_MAC_PCS_RXRSFECM_UCEER,
+ SPFC_32G_MAC_PCS_RXAFIFO_UCEER,
+ SPFC_32G_MAC_PCS_TXFIFO_UCEER,
+ SPFC_32G_MAC_FC2P_RXFIFO_UCEER,
+ SPFC_32G_MAC_FC2P_TXFIFO_UCEER,
+
+ /* ERR type 5 */
+ SPFC_NON32G_DFX_FC1_DFX_BF_FIFO,
+ SPFC_NON32G_DFX_FC1_DFX_BP_FIFO,
+ SPFC_NON32G_DFX_FC1_DFX_RX_AFIFO_ERR,
+ SPFC_NON32G_DFX_FC1_DFX_TX_AFIFO_ERR,
+ SPFC_NON32G_DFX_FC1_DFX_DIRQ_RXBUF_FIFO1,
+ SPFC_NON32G_DFX_FC1_DFX_DIRQ_RXBBC_TO,
+ SPFC_NON32G_DFX_FC1_DFX_DIRQ_TXDAT_FIFO,
+ SPFC_NON32G_DFX_FC1_DFX_DIRQ_TXCMD_FIFO,
+ SPFC_NON32G_DFX_FC1_ERR_R_RDY,
+
+ /* ERR type 6 */
+ SPFC_NON32G_MAC_FC1_FAIRNESS_ERROR,
+
+ SPFC_ERR_VALUE_BUTT
+
+};
+
+struct spfc_up_error_event {
+ struct spfc_mbox_header header;
+
+ u8 link_event;
+ u8 error_level;
+ u8 error_type;
+ u8 error_value;
+};
+
+struct spfc_inmbox_clear_done {
+ struct spfc_mbox_header header;
+};
+
+/* receive els cmd */
+struct spfc_inmbox_rcv_els {
+ struct spfc_mbox_header header;
+ u16 pkt_type;
+ u16 pkt_len;
+ u8 frame[ARRAY_INDEX_0];
+};
+
+/* FCF event type */
+enum spfc_fcf_event_type {
+ SPFC_FCF_SELECTED = 0,
+ SPFC_FCF_DEAD,
+ SPFC_FCF_CLEAR_VLINK,
+ SPFC_FCF_CLEAR_VLINK_APPOINTED
+};
+
+struct spfc_nport_id_info {
+ u32 nport_id : 24;
+ u32 vp_index : 8;
+};
+
+struct spfc_inmbox_fcf_event {
+ struct spfc_mbox_header header;
+
+ u8 fcf_map[ARRAY_INDEX_3];
+ u8 event_type;
+
+ u8 fcf_mac_h4[ARRAY_INDEX_4];
+
+ u16 vlan_info;
+ u8 fcf_mac_l2[ARRAY_INDEX_2];
+
+ struct spfc_nport_id_info nport_id_info[UNF_SPFC_MAXNPIV_NUM + 1];
+};
+
+/* send els cmd */
+struct spfc_inmbox_send_els {
+ struct spfc_mbox_header header;
+
+ u8 oper_code;
+ u8 rsvd[ARRAY_INDEX_3];
+
+ u8 resvd;
+ u8 els_cmd_type;
+ u16 pkt_len;
+
+ u8 fcf_mac_h4[ARRAY_INDEX_4];
+
+ u16 vlan_info;
+ u8 fcf_mac_l2[ARRAY_INDEX_2];
+
+ u8 fc_frame[SPFC_FC_HEAD_LEN + UNF_FLOGI_PAYLOAD_LEN];
+};
+
+struct spfc_inmbox_send_els_sts {
+ struct spfc_mbox_header header;
+
+ u16 rx_id;
+ u16 err_code;
+
+ u16 ox_id;
+ u16 rsvd;
+};
+
+struct spfc_inmbox_get_clear_state {
+ struct spfc_mbox_header header;
+ u32 resvd[31];
+};
+
+struct spfc_outmbox_get_clear_state_sts {
+ struct spfc_mbox_header header;
+ u16 rsvd1;
+	u8 state;  /* 1: clear in progress, 0: clear done */
+	u8 status; /* 0: ok, non-zero: fail */
+ u32 rsvd2[30];
+};
+
+#define SPFC_FIP_MODE_VN2VF (0)
+#define SPFC_FIP_MODE_VN2VN (1)
+
+/* get up state */
+struct spfc_inmbox_get_up_state {
+ struct spfc_mbox_header header;
+
+ u64 cur_jiff_time;
+};
+
+/* get port state */
+struct spfc_inmbox_get_port_info {
+ struct spfc_mbox_header header;
+};
+
+struct spfc_outmbox_get_up_state_sts {
+ struct spfc_mbox_header header;
+
+ u8 status;
+ u8 rsv0;
+ u16 rsv1;
+ struct unf_port_dynamic_info dymic_info;
+};
+
+struct spfc_outmbox_get_port_info_sts {
+ struct spfc_mbox_header header;
+
+ u32 status : 8;
+ u32 fe_16g_cvis_tts : 8;
+ u32 bb_scn : 8;
+ u32 loop_credit : 8;
+
+ u32 non_loop_rx_credit : 8;
+ u32 non_loop_tx_credit : 8;
+ u32 sfp_speed : 8;
+ u32 present : 8;
+};
+
+struct spfc_inmbox_config_timer {
+ struct spfc_mbox_header header;
+
+ u16 op_code;
+ u16 fun_id;
+ u32 user_data;
+};
+
+struct spfc_inmbox_config_srqc {
+ struct spfc_mbox_header header;
+
+ u16 valid;
+ u16 fun_id;
+ u32 srqc_gpa_hi;
+ u32 srqc_gpa_lo;
+};
+
+struct spfc_outmbox_config_timer_sts {
+ struct spfc_mbox_header header;
+
+ u8 status;
+ u8 rsv[ARRAY_INDEX_3];
+};
+
+struct spfc_outmbox_config_srqc_sts {
+ struct spfc_mbox_header header;
+
+ u8 status;
+ u8 rsv[ARRAY_INDEX_3];
+};
+
+struct spfc_inmbox_default_sq_info {
+ struct spfc_mbox_header header;
+ u32 sq_cid;
+ u32 sq_xid;
+ u16 func_id;
+ u16 valid;
+};
+
+struct spfc_outmbox_default_sq_info_sts {
+ struct spfc_mbox_header header;
+ u8 status;
+ u8 rsv[ARRAY_INDEX_3];
+};
+
+/* Generic Inmailbox and Outmailbox */
+union spfc_inmbox_generic {
+ struct {
+ struct spfc_mbox_header header;
+ u32 rsvd[(SPFC_MBOX_SIZE - SPFC_MBOX_HEADER_SIZE) / sizeof(u32)];
+ } generic;
+
+ struct spfc_inmbox_port_switch port_switch;
+ struct spfc_inmbox_config_api config_api;
+ struct spfc_inmbox_get_chip_info get_chip_info;
+ struct spfc_inmbox_config_login config_login;
+ struct spfc_inmbox_port_reset port_reset;
+ struct spfc_inmbox_set_esch esch_set;
+ struct spfc_inmbox_led_test led_test;
+ struct spfc_inmbox_get_sfp_info get_sfp_info;
+ struct spfc_inmbox_clear_done clear_done;
+ struct spfc_outmbox_get_ucode_stat get_ucode_stat;
+ struct spfc_inmbox_get_clear_state get_clr_state;
+ struct spfc_inmbox_send_vport_info send_vport_info;
+ struct spfc_inmbox_get_up_state get_up_state;
+ struct spfc_inmbox_config_timer timer_config;
+ struct spfc_inmbox_config_srqc config_srqc;
+ struct spfc_inmbox_get_port_info get_port_info;
+};
+
+union spfc_outmbox_generic {
+ struct {
+ struct spfc_mbox_header header;
+ u32 rsvd[(SPFC_MBOX_SIZE - SPFC_MBOX_HEADER_SIZE) / sizeof(u32)];
+ } generic;
+
+ struct spfc_outmbox_port_switch_sts port_switch_sts;
+ struct spfc_outmbox_config_api_sts config_api_sts;
+ struct spfc_outmbox_get_chip_info_sts get_chip_info_sts;
+ struct spfc_outmbox_get_reg_info_sts get_reg_info_sts;
+ struct spfc_outmbox_config_login_sts config_login_sts;
+ struct spfc_outmbox_port_reset_sts port_reset_sts;
+ struct spfc_outmbox_led_test_sts led_test_sts;
+ struct spfc_outmbox_set_esch_sts esch_set_sts;
+ struct spfc_inmbox_get_serdes_rx_sts serdes_rx_get_sts;
+ struct spfc_outmbox_set_serdes_tx_sts serdes_tx_set_sts;
+ struct spfc_outmbox_i2c_wr_rd_sts i2c_wr_rd_sts;
+ struct spfc_outmbox_gpio_wr_rd_sts gpio_wr_rd_sts;
+ struct spfc_outmbox_get_sfp_info_sts get_sfp_info_sts;
+ struct spfc_outmbox_get_ucode_stat_sts get_ucode_stat_sts;
+ struct spfc_outmbox_get_clear_state_sts get_clr_state_sts;
+ struct spfc_outmbox_get_up_state_sts get_up_state_sts;
+ struct spfc_outmbox_config_timer_sts timer_config_sts;
+ struct spfc_outmbox_config_srqc_sts config_srqc_sts;
+ struct spfc_outmbox_get_port_info_sts get_port_info_sts;
+ struct spfc_outmbox_default_sq_info_sts default_sq_sts;
+};
+
+u32 spfc_get_chip_msg(void *hba, void *mac);
+u32 spfc_config_port_table(struct spfc_hba_info *hba);
+u32 spfc_port_switch(struct spfc_hba_info *hba, bool turn_on);
+u32 spfc_get_loop_map(void *hba, void *buf);
+u32 spfc_get_workable_bb_credit(void *hba, void *bb_credit);
+u32 spfc_get_workable_bb_scn(void *hba, void *bb_scn);
+u32 spfc_get_port_current_info(void *hba, void *port_info);
+u32 spfc_get_port_fec(void *hba, void *para_out);
+
+u32 spfc_get_loop_alpa(void *hba, void *alpa);
+u32 spfc_get_topo_act(void *hba, void *topo_act);
+u32 spfc_config_login_api(struct spfc_hba_info *hba, struct unf_port_login_parms *login_parms);
+u32 spfc_mb_send_and_wait_mbox(struct spfc_hba_info *hba, const void *in_mbox, u16 in_size,
+ union spfc_outmbox_generic *out_mbox);
+void spfc_up_msg2driver_proc(void *hwdev_handle, void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+u32 spfc_mb_reset_chip(struct spfc_hba_info *hba, u8 sub_type);
+u32 spfc_clear_sq_wqe_done(struct spfc_hba_info *hba);
+u32 spfc_update_fabric_param(void *hba, void *para_in);
+u32 spfc_update_port_param(void *hba, void *para_in);
+u32 spfc_update_fdisc_param(void *hba, void *vport_info);
+u32 spfc_mbx_get_fw_clear_stat(struct spfc_hba_info *hba, u32 *clear_state);
+u32 spfc_get_chip_capability(void *hwdev_handle, struct spfc_chip_info *chip_info);
+u32 spfc_mbx_config_default_session(void *hba, u32 flag);
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c b/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
new file mode 100644
index 000000000000..30fb56a9bfed
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
@@ -0,0 +1,1646 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include "sphw_crm.h"
+#include "sphw_hw.h"
+#include "sphw_hwdev.h"
+#include "sphw_hwif.h"
+
+#include "spfc_cqm_object.h"
+#include "spfc_cqm_bitmap_table.h"
+#include "spfc_cqm_bat_cla.h"
+#include "spfc_cqm_main.h"
+
+static unsigned char cqm_ver = 8;
+module_param(cqm_ver, byte, 0644);
+MODULE_PARM_DESC(cqm_ver, "for cqm version control (default=8)");
+
+static void
+cqm_bat_fill_cla_common_gpa(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ struct cqm_bat_entry_standerd *bat_entry_standerd)
+{
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ struct sphw_func_attr *func_attr = NULL;
+ struct cqm_bat_entry_vf2pf gpa = {0};
+ u32 cla_gpa_h = 0;
+ dma_addr_t pa;
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0)
+ pa = cla_table->cla_z_buf.buf_list[0].pa;
+ else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
+ pa = cla_table->cla_y_buf.buf_list[0].pa;
+ else
+ pa = cla_table->cla_x_buf.buf_list[0].pa;
+
+ gpa.cla_gpa_h = CQM_ADDR_HI(pa) & CQM_CHIP_GPA_HIMASK;
+
+	/* On the SPU, the value of spu_en in the GPA address
+	 * in the BAT is determined by the host ID and function index.
+	 */
+ if (sphw_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
+ func_attr = &cqm_handle->func_attribute;
+ gpa.acs_spu_en = func_attr->func_global_idx & 0x1;
+ } else {
+ gpa.acs_spu_en = 0;
+ }
+
+ /* In fake mode, fake_vf_en in the GPA address of the BAT
+ * must be set to 1.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD) {
+ gpa.fake_vf_en = 1;
+ func_attr = &cqm_handle->parent_cqm_handle->func_attribute;
+ gpa.pf_id = func_attr->func_global_idx;
+ } else {
+ gpa.fake_vf_en = 0;
+ }
+
+ memcpy(&cla_gpa_h, &gpa, sizeof(u32));
+ bat_entry_standerd->cla_gpa_h = cla_gpa_h;
+
+ /* GPA is valid when gpa[0] = 1.
+ * CQM_BAT_ENTRY_T_REORDER does not support GPA validity check.
+ */
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa);
+ else
+ bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa) | gpa_check_enable;
+}
+
+static void cqm_bat_fill_cla_common(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 *entry_base_addr)
+{
+ struct cqm_bat_entry_standerd *bat_entry_standerd = NULL;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 cache_line = 0;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER && cqm_ver == 8)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't init bat entry\n",
+ cla_table->type);
+ return;
+ }
+
+ bat_entry_standerd = (struct cqm_bat_entry_standerd *)entry_base_addr;
+
+	/* The QPC object size is 256/512/1024 bytes and the timer object size
+	 * is 512 bytes; all other types use the 256B cacheline.
+	 * The size conversion is performed inside the chip.
+	 */
+ if (cla_table->obj_size > cache_line) {
+ if (cla_table->obj_size == CQM_OBJECT_512)
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
+ else
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_1024;
+ bat_entry_standerd->max_number = cla_table->max_buffer_size / cla_table->obj_size;
+ } else {
+ if (cache_line == CQM_CHIP_CACHELINE) {
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_256;
+ bat_entry_standerd->max_number = cla_table->max_buffer_size / cache_line;
+ } else {
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
+ bat_entry_standerd->max_number = cla_table->max_buffer_size / cache_line;
+ }
+ }
+
+ bat_entry_standerd->max_number = bat_entry_standerd->max_number - 1;
+
+ bat_entry_standerd->bypass = CQM_BAT_NO_BYPASS_CACHE;
+ bat_entry_standerd->z = cla_table->cacheline_z;
+ bat_entry_standerd->y = cla_table->cacheline_y;
+ bat_entry_standerd->x = cla_table->cacheline_x;
+ bat_entry_standerd->cla_level = cla_table->cla_lvl;
+
+ cqm_bat_fill_cla_common_gpa(cqm_handle, cla_table, bat_entry_standerd);
+}
+
+static void cqm_bat_fill_cla_cfg(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct cqm_bat_entry_cfg *bat_entry_cfg = NULL;
+
+ bat_entry_cfg = (struct cqm_bat_entry_cfg *)(*entry_base_addr);
+ bat_entry_cfg->cur_conn_cache = 0;
+ bat_entry_cfg->max_conn_cache =
+ func_cap->flow_table_based_conn_cache_number;
+ bat_entry_cfg->cur_conn_num_h_4 = 0;
+ bat_entry_cfg->cur_conn_num_l_16 = 0;
+ bat_entry_cfg->max_conn_num = func_cap->flow_table_based_conn_number;
+
+	/* The bucket count is expressed in units of 64 buckets, i.e.
+	 * hash_number shifted right by 6 bits. The field is 16 bits wide, so
+	 * up to 4M buckets are supported. 1 is subtracted so the value can be
+	 * ANDed directly with the hash value.
+	 */
+ if ((func_cap->hash_number >> CQM_HASH_NUMBER_UNIT) != 0) {
+ bat_entry_cfg->bucket_num = ((func_cap->hash_number >>
+ CQM_HASH_NUMBER_UNIT) - 1);
+ }
+ if (func_cap->bloomfilter_length != 0) {
+ bat_entry_cfg->bloom_filter_len = func_cap->bloomfilter_length -
+ 1;
+ bat_entry_cfg->bloom_filter_addr = func_cap->bloomfilter_addr;
+ }
+
+ (*entry_base_addr) += sizeof(struct cqm_bat_entry_cfg);
+}
+
+static void cqm_bat_fill_cla_other(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ cqm_bat_fill_cla_common(cqm_handle, cla_table, *entry_base_addr);
+
+ (*entry_base_addr) += sizeof(struct cqm_bat_entry_standerd);
+}
+
+static void cqm_bat_fill_cla_taskmap(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ struct cqm_bat_entry_taskmap *bat_entry_taskmap = NULL;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ int i;
+
+ if (cqm_handle->func_capability.taskmap_number != 0) {
+ bat_entry_taskmap =
+ (struct cqm_bat_entry_taskmap *)(*entry_base_addr);
+ for (i = 0; i < CQM_BAT_ENTRY_TASKMAP_NUM; i++) {
+ bat_entry_taskmap->addr[i].gpa_h =
+ (u32)(cla_table->cla_z_buf.buf_list[i].pa >>
+ CQM_CHIP_GPA_HSHIFT);
+ bat_entry_taskmap->addr[i].gpa_l =
+ (u32)(cla_table->cla_z_buf.buf_list[i].pa &
+ CQM_CHIP_GPA_LOMASK);
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: taskmap bat entry: 0x%x 0x%x\n",
+ bat_entry_taskmap->addr[i].gpa_h,
+ bat_entry_taskmap->addr[i].gpa_l);
+ }
+ }
+
+ (*entry_base_addr) += sizeof(struct cqm_bat_entry_taskmap);
+}
+
+static void cqm_bat_fill_cla_timer(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ /* Only the PPF allocates timer resources. */
+ if (cqm_handle->func_attribute.func_type != CQM_PPF) {
+ (*entry_base_addr) += CQM_BAT_ENTRY_SIZE;
+ } else {
+ cqm_bat_fill_cla_common(cqm_handle, cla_table, *entry_base_addr);
+
+ (*entry_base_addr) += sizeof(struct cqm_bat_entry_standerd);
+ }
+}
+
+static void cqm_bat_fill_cla_invalid(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ (*entry_base_addr) += CQM_BAT_ENTRY_SIZE;
+}
+
+static void cqm_bat_fill_cla(struct cqm_handle *cqm_handle)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = NULL;
+ u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
+ u8 *entry_base_addr = NULL;
+ u32 i = 0;
+
+ /* Fills each item in the BAT table according to the BAT format. */
+ entry_base_addr = bat_table->bat;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+ cla_table = &bat_table->entry[i];
+
+ if (entry_type == CQM_BAT_ENTRY_T_CFG)
+ cqm_bat_fill_cla_cfg(cqm_handle, cla_table, &entry_base_addr);
+ else if (entry_type == CQM_BAT_ENTRY_T_TASKMAP)
+ cqm_bat_fill_cla_taskmap(cqm_handle, cla_table, &entry_base_addr);
+ else if (entry_type == CQM_BAT_ENTRY_T_INVALID)
+ cqm_bat_fill_cla_invalid(cqm_handle, cla_table, &entry_base_addr);
+ else if (entry_type == CQM_BAT_ENTRY_T_TIMER)
+ cqm_bat_fill_cla_timer(cqm_handle, cla_table, &entry_base_addr);
+ else
+ cqm_bat_fill_cla_other(cqm_handle, cla_table, &entry_base_addr);
+
+		/* Check whether entry_base_addr runs past the end of the bat array. */
+ if (entry_base_addr >= (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
+ break;
+ }
+}
+
+u32 cqm_funcid2smfid(struct cqm_handle *cqm_handle)
+{
+ u32 funcid = 0;
+ u32 smf_sel = 0;
+ u32 smf_id = 0;
+ u32 smf_pg_partial = 0;
+	/* SMF_Selection is determined by the lower
+	 * two bits of the function id.
+	 */
+ u32 lbf_smfsel[4] = {0, 2, 1, 3};
+ /* SMFID is selected based on SMF_PG[1:0] and SMF_Selection(0-1) */
+ u32 smfsel_smfid01[4][2] = { {0, 0}, {0, 0}, {1, 1}, {0, 1} };
+ /* SMFID is selected based on SMF_PG[3:2] and SMF_Selection(2-4) */
+ u32 smfsel_smfid23[4][2] = { {2, 2}, {2, 2}, {3, 3}, {2, 3} };
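+	/* Illustrative example: with smf_pg = 0xF (all four SMFs enabled), a
+	 * function whose low two bits are 1 gets smf_sel = lbf_smfsel[1] = 2,
+	 * smf_pg[3:2] = 3, and therefore smf_id = smfsel_smfid23[3][0] = 2.
+	 */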
+
+ /* When the LB mode is disabled, SMF0 is always returned. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL) {
+ smf_id = 0;
+ } else {
+ funcid = cqm_handle->func_attribute.func_global_idx & 0x3;
+ smf_sel = lbf_smfsel[funcid];
+
+ if (smf_sel < 2) {
+ smf_pg_partial = cqm_handle->func_capability.smf_pg & 0x3;
+ smf_id = smfsel_smfid01[smf_pg_partial][smf_sel];
+ } else {
+ smf_pg_partial = (cqm_handle->func_capability.smf_pg >> 2) & 0x3;
+ smf_id = smfsel_smfid23[smf_pg_partial][smf_sel - 2];
+ }
+ }
+
+ return smf_id;
+}
+
+/* This function is used in LB mode 1/2, where the timer info of each SMF's
+ * independent address space must be configured for all four SMFs.
+ */
+static void cqm_update_timer_gpa(struct cqm_handle *cqm_handle, u32 smf_id)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = NULL;
+ u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
+ u8 *entry_base_addr = NULL;
+ u32 i = 0;
+
+ if (cqm_handle->func_attribute.func_type != CQM_PPF)
+ return;
+
+ if (cqm_handle->func_capability.lb_mode != CQM_LB_MODE_1 &&
+ cqm_handle->func_capability.lb_mode != CQM_LB_MODE_2)
+ return;
+
+ cla_table = &bat_table->timer_entry[smf_id];
+ entry_base_addr = bat_table->bat;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+
+ if (entry_type == CQM_BAT_ENTRY_T_TIMER) {
+ cqm_bat_fill_cla_timer(cqm_handle, cla_table, &entry_base_addr);
+ break;
+ }
+
+ if (entry_type == CQM_BAT_ENTRY_T_TASKMAP)
+ entry_base_addr += sizeof(struct cqm_bat_entry_taskmap);
+ else
+ entry_base_addr += CQM_BAT_ENTRY_SIZE;
+
+		/* Check whether entry_base_addr runs past the end of the bat array. */
+ if (entry_base_addr >=
+ (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
+ break;
+ }
+}
+
+static s32 cqm_bat_update_cmd(struct cqm_handle *cqm_handle, struct cqm_cmd_buf *buf_in,
+ u32 smf_id, u32 func_id)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmdq_bat_update *bat_update_cmd = NULL;
+ s32 ret = CQM_FAIL;
+
+ bat_update_cmd = (struct cqm_cmdq_bat_update *)(buf_in->buf);
+ bat_update_cmd->offset = 0;
+
+ if (cqm_handle->bat_table.bat_size > CQM_BAT_MAX_SIZE) {
+ cqm_err(handle->dev_hdl,
+ "bat_size = %u, which is more than %d.\n",
+ cqm_handle->bat_table.bat_size, CQM_BAT_MAX_SIZE);
+ return CQM_FAIL;
+ }
+ bat_update_cmd->byte_len = cqm_handle->bat_table.bat_size;
+
+ memcpy(bat_update_cmd->data, cqm_handle->bat_table.bat, bat_update_cmd->byte_len);
+
+ bat_update_cmd->smf_id = smf_id;
+ bat_update_cmd->func_id = func_id;
+
+ cqm_info(handle->dev_hdl, "Bat update: smf_id=%u\n", bat_update_cmd->smf_id);
+ cqm_info(handle->dev_hdl, "Bat update: func_id=%u\n", bat_update_cmd->func_id);
+
+ cqm_swab32((u8 *)bat_update_cmd, sizeof(struct cqm_cmdq_bat_update) >> CQM_DW_SHIFT);
+
+ ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_BAT_UPDATE, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
+ cqm_err(handle->dev_hdl, "%s: send_cmd_box ret=%d\n", __func__,
+ ret);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_bat_update(struct cqm_handle *cqm_handle)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmd_buf *buf_in = NULL;
+ s32 ret = CQM_FAIL;
+ u32 smf_id = 0;
+ u32 func_id = 0;
+ u32 i = 0;
+
+ buf_in = cqm3_cmd_alloc((void *)(cqm_handle->ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+ buf_in->size = sizeof(struct cqm_cmdq_bat_update);
+
+ /* In non-fake mode, func_id is set to 0xffff, indicating the current func.
+ * In fake mode, the value of func_id is specified. This is a fake func_id.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD)
+ func_id = cqm_handle->func_attribute.func_global_idx;
+ else
+ func_id = 0xffff;
+
+	/* The LB scenario is supported.
+	 * The normal mode is the traditional mode, configured on SMF0 only.
+	 * In mode 0, load is balanced across the four SMFs based on the func
+	 * ID (except for the PPF func ID). The PPF in mode 0 must be
+	 * configured on all four SMFs so that the timer resources can be
+	 * shared by the four timer engines.
+	 * Modes 1/2 balance load across the four SMFs per flow, so one
+	 * function must be configured on all four SMFs.
+	 */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_bat_update_cmd(cqm_handle, buf_in, smf_id, func_id);
+ } else if ((cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1) ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2) ||
+ ((cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0) &&
+ (cqm_handle->func_attribute.func_type == CQM_PPF))) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cqm_update_timer_gpa(cqm_handle, i);
+
+			/* smf_pg is a bitmask of the currently enabled SMFs. */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ smf_id = i;
+ ret = cqm_bat_update_cmd(cqm_handle, buf_in, smf_id, func_id);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+		cqm_err(handle->dev_hdl, "Bat update: unsupported lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+s32 cqm_bat_init_ft(struct cqm_handle *cqm_handle, struct cqm_bat_table *bat_table,
+ enum func_type function_type)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 i = 0;
+
+ bat_table->bat_entry_type[CQM_BAT_INDEX0] = CQM_BAT_ENTRY_T_CFG;
+ bat_table->bat_entry_type[CQM_BAT_INDEX1] = CQM_BAT_ENTRY_T_HASH;
+ bat_table->bat_entry_type[CQM_BAT_INDEX2] = CQM_BAT_ENTRY_T_QPC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX3] = CQM_BAT_ENTRY_T_SCQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX4] = CQM_BAT_ENTRY_T_LUN;
+ bat_table->bat_entry_type[CQM_BAT_INDEX5] = CQM_BAT_ENTRY_T_TASKMAP;
+
+ if (function_type == CQM_PF || function_type == CQM_PPF) {
+ bat_table->bat_entry_type[CQM_BAT_INDEX6] = CQM_BAT_ENTRY_T_L3I;
+ bat_table->bat_entry_type[CQM_BAT_INDEX7] = CQM_BAT_ENTRY_T_CHILDC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX8] = CQM_BAT_ENTRY_T_TIMER;
+ bat_table->bat_entry_type[CQM_BAT_INDEX9] = CQM_BAT_ENTRY_T_XID2CID;
+ bat_table->bat_entry_type[CQM_BAT_INDEX10] = CQM_BAT_ENTRY_T_REORDER;
+ bat_table->bat_size = CQM_BAT_SIZE_FT_PF;
+ } else if (function_type == CQM_VF) {
+ bat_table->bat_size = CQM_BAT_SIZE_FT_VF;
+ } else {
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(function_type));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_bat_init(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *capability = &cqm_handle->func_capability;
+ enum func_type function_type = cqm_handle->func_attribute.func_type;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ memset(bat_table, 0, sizeof(struct cqm_bat_table));
+
+ /* Initialize the type of each bat entry. */
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ /* Select BATs based on service types. Currently,
+ * feature-related resources of the VF are stored in the BATs of the VF.
+ */
+ if (capability->ft_enable)
+ return cqm_bat_init_ft(cqm_handle, bat_table, function_type);
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(capability->ft_enable));
+
+ return CQM_FAIL;
+}
+
+void cqm_bat_uninit(struct cqm_handle *cqm_handle)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ memset(bat_table->bat, 0, CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE);
+
+ /* Instruct the chip to update the BAT table. */
+ if (cqm_bat_update(cqm_handle) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
+}
+
+s32 cqm_cla_fill_buf(struct cqm_handle *cqm_handle, struct cqm_buf *cla_base_buf,
+ struct cqm_buf *cla_sub_buf, u8 gpa_check_enable)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct sphw_func_attr *func_attr = NULL;
+ dma_addr_t *base = NULL;
+ u64 fake_en = 0;
+ u64 spu_en = 0;
+ u64 pf_id = 0;
+ u32 i = 0;
+ u32 addr_num;
+ u32 buf_index = 0;
+
+ /* Apply for space for base_buf */
+ if (!cla_base_buf->buf_list) {
+ if (cqm_buf_alloc(cqm_handle, cla_base_buf, false) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(cla_base_buf));
+ return CQM_FAIL;
+ }
+ }
+
+ /* Apply for space for sub_buf */
+ if (!cla_sub_buf->buf_list) {
+ if (cqm_buf_alloc(cqm_handle, cla_sub_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(cla_sub_buf));
+ cqm_buf_free(cla_base_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ }
+
+	/* Fill base_buf with the GPAs of sub_buf */
+ addr_num = cla_base_buf->buf_size / sizeof(dma_addr_t);
+ base = (dma_addr_t *)(cla_base_buf->buf_list[0].va);
+ for (i = 0; i < cla_sub_buf->buf_number; i++) {
+ /* The SPU SMF supports load balancing from the SMF to the CPI,
+ * depending on the host ID and func ID.
+ */
+ if (sphw_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
+ func_attr = &cqm_handle->func_attribute;
+ spu_en = (u64)(func_attr->func_global_idx & 0x1) << 63;
+ } else {
+ spu_en = 0;
+ }
+
+ /* fake enable */
+ if (cqm_handle->func_capability.fake_func_type ==
+ CQM_FAKE_FUNC_CHILD) {
+ fake_en = 1ULL << 62;
+ func_attr =
+ &cqm_handle->parent_cqm_handle->func_attribute;
+ pf_id = func_attr->func_global_idx;
+ pf_id = (pf_id & 0x1f) << 57;
+ } else {
+ fake_en = 0;
+ pf_id = 0;
+ }
+
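+		/* The 64-bit GPA written to the parent level packs control
+		 * bits into the address: bit 63 = acs_spu_en, bit 62 =
+		 * fake_vf_en, bits 61:57 = parent PF id, and gpa_check_enable
+		 * is OR'd into bit 0 as the GPA-valid flag (see struct
+		 * cqm_bat_entry_vf2pf).
+		 */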
+ *base = (((((cla_sub_buf->buf_list[i].pa & CQM_CHIP_GPA_MASK) |
+ spu_en) |
+ fake_en) |
+ pf_id) |
+ gpa_check_enable);
+
+ cqm_swab64((u8 *)base, 1);
+ if ((i + 1) % addr_num == 0) {
+ buf_index++;
+ if (buf_index < cla_base_buf->buf_number)
+ base = cla_base_buf->buf_list[buf_index].va;
+ } else {
+ base++;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_xyz_lvl1(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 trunk_size)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf *cla_y_buf = NULL;
+ struct cqm_buf *cla_z_buf = NULL;
+ s32 shift = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ u32 cache_line = 0;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER && cqm_ver == 8)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ gpa_check_enable = 0;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_1;
+
+ shift = cqm_shift(trunk_size / cla_table->obj_size);
+ cla_table->z = shift ? (shift - 1) : (shift);
+ cla_table->y = CQM_MAX_INDEX_BIT;
+ cla_table->x = 0;
+
+ if (cla_table->obj_size >= cache_line) {
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+ } else {
+ shift = cqm_shift(trunk_size / cache_line);
+ cla_table->cacheline_z = shift ? (shift - 1) : (shift);
+ cla_table->cacheline_y = CQM_MAX_INDEX_BIT;
+ cla_table->cacheline_x = 0;
+ }
+
+ /* Applying for CLA_Y_BUF Space */
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_y_buf->buf_size = trunk_size;
+ cla_y_buf->buf_number = 1;
+ cla_y_buf->page_number = cla_y_buf->buf_number <<
+ cla_table->trunk_order;
+ ret = cqm_buf_alloc(cqm_handle, cla_y_buf, false);
+ CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
+ CQM_ALLOC_FAIL(lvl_1_y_buf));
+
+ /* Applying for CLA_Z_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number =
+ (ALIGN(cla_table->max_buffer_size, trunk_size)) / trunk_size;
+ cla_z_buf->page_number = cla_z_buf->buf_number <<
+ cla_table->trunk_order;
+ /* All buffer space must be statically allocated. */
+ if (cla_table->alloc_static) {
+ ret = cqm_cla_fill_buf(cqm_handle, cla_y_buf, cla_z_buf,
+ gpa_check_enable);
+ CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
+ CQM_FUNCTION_FAIL(cqm_cla_fill_buf));
+ } else { /* Only the buffer list space is initialized. The buffer space
+ * is dynamically allocated in services.
+ */
+ cla_z_buf->buf_list = vmalloc(cla_z_buf->buf_number *
+ sizeof(struct cqm_buf_list));
+ if (!cla_z_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_1_z_buf));
+ cqm_buf_free(cla_y_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ memset(cla_z_buf->buf_list, 0,
+ cla_z_buf->buf_number * sizeof(struct cqm_buf_list));
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_xyz_lvl2(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 trunk_size)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf *cla_x_buf = NULL;
+ struct cqm_buf *cla_y_buf = NULL;
+ struct cqm_buf *cla_z_buf = NULL;
+ s32 shift = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ u32 cache_line = 0;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER && cqm_ver == 8)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ gpa_check_enable = 0;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_2;
+
+ shift = cqm_shift(trunk_size / cla_table->obj_size);
+ cla_table->z = shift ? (shift - 1) : (shift);
+ shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
+ cla_table->y = cla_table->z + shift;
+ cla_table->x = CQM_MAX_INDEX_BIT;
+
+ if (cla_table->obj_size >= cache_line) {
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+ } else {
+ shift = cqm_shift(trunk_size / cache_line);
+ cla_table->cacheline_z = shift ? (shift - 1) : (shift);
+ shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
+ cla_table->cacheline_y = cla_table->cacheline_z + shift;
+ cla_table->cacheline_x = CQM_MAX_INDEX_BIT;
+ }
+
+ /* Apply for CLA_X_BUF Space */
+ cla_x_buf = &cla_table->cla_x_buf;
+ cla_x_buf->buf_size = trunk_size;
+ cla_x_buf->buf_number = 1;
+ cla_x_buf->page_number = cla_x_buf->buf_number <<
+ cla_table->trunk_order;
+ ret = cqm_buf_alloc(cqm_handle, cla_x_buf, false);
+ CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
+ CQM_ALLOC_FAIL(lvl_2_x_buf));
+
+ /* Apply for CLA_Z_BUF and CLA_Y_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number =
+ (ALIGN(cla_table->max_buffer_size, trunk_size)) / trunk_size;
+ cla_z_buf->page_number = cla_z_buf->buf_number <<
+ cla_table->trunk_order;
+
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_y_buf->buf_size = trunk_size;
+ cla_y_buf->buf_number =
+ (ALIGN(cla_z_buf->buf_number * sizeof(dma_addr_t), trunk_size)) /
+ trunk_size;
+ cla_y_buf->page_number = cla_y_buf->buf_number <<
+ cla_table->trunk_order;
+ /* All buffer space must be statically allocated. */
+ if (cla_table->alloc_static) {
+ /* Apply for y buf and z buf, and fill the gpa of
+ * z buf list in y buf
+ */
+ if (cqm_cla_fill_buf(cqm_handle, cla_y_buf, cla_z_buf,
+ gpa_check_enable) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_fill_buf));
+ cqm_buf_free(cla_x_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+
+		/* Fill the GPAs of the y buf list into the x buf. Once the x
+		 * and y bufs have been allocated this call cannot fail, so its
+		 * return value is deliberately ignored (cast to void).
+		 */
+ (void)cqm_cla_fill_buf(cqm_handle, cla_x_buf, cla_y_buf,
+ gpa_check_enable);
+ } else { /* Only the buffer list space is initialized. The buffer space
+ * is dynamically allocated in services.
+ */
+ cla_z_buf->buf_list = vmalloc(cla_z_buf->buf_number *
+ sizeof(struct cqm_buf_list));
+ if (!cla_z_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_z_buf));
+ cqm_buf_free(cla_x_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ memset(cla_z_buf->buf_list, 0,
+ cla_z_buf->buf_number * sizeof(struct cqm_buf_list));
+
+ cla_y_buf->buf_list = vmalloc(cla_y_buf->buf_number *
+ sizeof(struct cqm_buf_list));
+ if (!cla_y_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_y_buf));
+ cqm_buf_free(cla_z_buf, cqm_handle->dev);
+ cqm_buf_free(cla_x_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ memset(cla_y_buf->buf_list, 0,
+ cla_y_buf->buf_number * sizeof(struct cqm_buf_list));
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_xyz_check(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 *size)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 trunk_size = 0;
+
+ /* If the capability(obj_num) is set to 0, the CLA does not need to be
+ * initialized and exits directly.
+ */
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't alloc buffer\n",
+ cla_table->type);
+ return CQM_SUCCESS;
+ }
+
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0x%x, gpa_check_enable=%d\n",
+ cla_table->type, cla_table->obj_num,
+ cqm_handle->func_capability.gpa_check_enable);
+
+ /* Check whether obj_size is 2^n-aligned. An error is reported when
+ * obj_size is 0 or 1.
+ */
+ if (!cqm_check_align(cla_table->obj_size)) {
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_size 0x%x is not align on 2^n\n",
+ cla_table->type, cla_table->obj_size);
+ return CQM_FAIL;
+ }
+
+ trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+
+ if (trunk_size < cla_table->obj_size) {
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla type %u, obj_size 0x%x is out of trunk size\n",
+ cla_table->type, cla_table->obj_size);
+ return CQM_FAIL;
+ }
+
+ *size = trunk_size;
+
+ return CQM_CONTINUE;
+}
+
+s32 cqm_cla_xyz(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf *cla_z_buf = NULL;
+ u32 trunk_size = 0;
+ s32 ret = CQM_FAIL;
+
+ ret = cqm_cla_xyz_check(cqm_handle, cla_table, &trunk_size);
+ if (ret != CQM_CONTINUE)
+ return ret;
+
+ /* Level-0 CLA occupies a small space.
+ * Only CLA_Z_BUF can be allocated during initialization.
+ */
+ if (cla_table->max_buffer_size <= trunk_size) {
+ cla_table->cla_lvl = CQM_CLA_LVL_0;
+
+ cla_table->z = CQM_MAX_INDEX_BIT;
+ cla_table->y = 0;
+ cla_table->x = 0;
+
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+
+ /* Applying for CLA_Z_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+		cla_z_buf->buf_size = trunk_size; /* = PAGE_SIZE << cla_table->trunk_order */
+ cla_z_buf->buf_number = 1;
+ cla_z_buf->page_number = cla_z_buf->buf_number << cla_table->trunk_order;
+ ret = cqm_buf_alloc(cqm_handle, cla_z_buf, false);
+ CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
+ CQM_ALLOC_FAIL(lvl_0_z_buf));
+ }
+ /* Level-1 CLA
+ * Allocates CLA_Y_BUF and CLA_Z_BUF during initialization.
+ */
+ else if (cla_table->max_buffer_size <= (trunk_size * (trunk_size / sizeof(dma_addr_t)))) {
+ if (cqm_cla_xyz_lvl1(cqm_handle, cla_table, trunk_size) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl1));
+ return CQM_FAIL;
+ }
+ }
+ /* Level-2 CLA
+ * Allocates CLA_X_BUF, CLA_Y_BUF, and CLA_Z_BUF during initialization.
+ */
+ else if (cla_table->max_buffer_size <=
+ (trunk_size * (trunk_size / sizeof(dma_addr_t)) *
+ (trunk_size / sizeof(dma_addr_t)))) {
+ if (cqm_cla_xyz_lvl2(cqm_handle, cla_table, trunk_size) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl2));
+ return CQM_FAIL;
+ }
+	} else { /* The current memory management mode cannot address such a
+		  * large buffer; the trunk_order value needs to be increased.
+		  */
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla max_buffer_size 0x%x exceeds support range\n",
+ cla_table->max_buffer_size);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+void cqm_cla_init_entry_normal(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ struct cqm_func_capability *capability)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_HASH:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->hash_number * capability->hash_basic_size;
+ cla_table->obj_size = capability->hash_basic_size;
+ cla_table->obj_num = capability->hash_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_QPC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->qpc_number * capability->qpc_basic_size;
+ cla_table->obj_size = capability->qpc_basic_size;
+ cla_table->obj_num = capability->qpc_number;
+ cla_table->alloc_static = capability->qpc_alloc_static;
+ cqm_info(handle->dev_hdl, "Cla alloc: qpc alloc_static=%d\n",
+ cla_table->alloc_static);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->mpt_number * capability->mpt_basic_size;
+ cla_table->obj_size = capability->mpt_basic_size;
+ cla_table->obj_num = capability->mpt_number;
+		/* Per CCB decision, MPT uses only static allocation. */
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->scqc_number * capability->scqc_basic_size;
+ cla_table->obj_size = capability->scqc_basic_size;
+ cla_table->obj_num = capability->scqc_number;
+ cla_table->alloc_static = capability->scqc_alloc_static;
+ cqm_info(handle->dev_hdl, "Cla alloc: scqc alloc_static=%d\n",
+ cla_table->alloc_static);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->srqc_number * capability->srqc_basic_size;
+ cla_table->obj_size = capability->srqc_basic_size;
+ cla_table->obj_num = capability->srqc_number;
+ cla_table->alloc_static = false;
+ break;
+ default:
+ break;
+ }
+}
+
+void cqm_cla_init_entry_extern(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ struct cqm_func_capability *capability)
+{
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_GID:
+ /* Level-0 CLA table required */
+ cla_table->max_buffer_size = capability->gid_number * capability->gid_basic_size;
+ cla_table->trunk_order =
+ (u32)cqm_shift(ALIGN(cla_table->max_buffer_size, PAGE_SIZE) / PAGE_SIZE);
+ cla_table->obj_size = capability->gid_basic_size;
+ cla_table->obj_num = capability->gid_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_LUN:
+ cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->lun_number * capability->lun_basic_size;
+ cla_table->obj_size = capability->lun_basic_size;
+ cla_table->obj_num = capability->lun_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_TASKMAP:
+ cla_table->trunk_order = CQM_4K_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->taskmap_number *
+ capability->taskmap_basic_size;
+ cla_table->obj_size = capability->taskmap_basic_size;
+ cla_table->obj_num = capability->taskmap_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_L3I:
+ cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->l3i_number * capability->l3i_basic_size;
+ cla_table->obj_size = capability->l3i_basic_size;
+ cla_table->obj_num = capability->l3i_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_CHILDC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->childc_number *
+ capability->childc_basic_size;
+ cla_table->obj_size = capability->childc_basic_size;
+ cla_table->obj_num = capability->childc_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_TIMER:
+ /* Ensure that the basic size of the timer buffer page does not
+ * exceed 128 x 4 KB. Otherwise, clearing the timer buffer of
+ * the function is complex.
+ */
+ cla_table->trunk_order = CQM_4K_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->timer_number *
+ capability->timer_basic_size;
+ cla_table->obj_size = capability->timer_basic_size;
+ cla_table->obj_num = capability->timer_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_XID2CID:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->xid2cid_number *
+ capability->xid2cid_basic_size;
+ cla_table->obj_size = capability->xid2cid_basic_size;
+ cla_table->obj_num = capability->xid2cid_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_REORDER:
+ /* This entry supports only IWARP and does not support GPA validity check. */
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->reorder_number *
+ capability->reorder_basic_size;
+ cla_table->obj_size = capability->reorder_basic_size;
+ cla_table->obj_num = capability->reorder_number;
+ cla_table->alloc_static = true;
+ break;
+ default:
+ break;
+ }
+}
+
+s32 cqm_cla_init_entry_condition(struct cqm_handle *cqm_handle, u32 entry_type)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = &bat_table->entry[entry_type];
+ struct cqm_cla_table *cla_table_timer = NULL;
+ u32 i;
+
+ /* When the timer is in LB mode 1 or 2, the timer needs to be
+ * configured for four SMFs and the address space is independent.
+ */
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER &&
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cla_table_timer = &bat_table->timer_entry[i];
+ memcpy(cla_table_timer, cla_table, sizeof(struct cqm_cla_table));
+
+ if (cqm_cla_xyz(cqm_handle, cla_table_timer) == CQM_FAIL) {
+ cqm_cla_uninit(cqm_handle, entry_type);
+ return CQM_FAIL;
+ }
+ }
+ }
+
+ if (cqm_cla_xyz(cqm_handle, cla_table) == CQM_FAIL) {
+ cqm_cla_uninit(cqm_handle, entry_type);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_init_entry(struct cqm_handle *cqm_handle,
+ struct cqm_func_capability *capability)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = NULL;
+ s32 ret;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ cla_table->type = bat_table->bat_entry_type[i];
+
+ cqm_cla_init_entry_normal(cqm_handle, cla_table, capability);
+ cqm_cla_init_entry_extern(cqm_handle, cla_table, capability);
+
+ /* Allocate CLA entry space at each level. */
+ if (cla_table->type < CQM_BAT_ENTRY_T_HASH ||
+ cla_table->type > CQM_BAT_ENTRY_T_REORDER) {
+ mutex_init(&cla_table->lock);
+ continue;
+ }
+
+		/* For the PPF, timer resources (8 wheels x 2k scales x 32B x
+		 * func_num) must be allocated and the timer entry in the BAT
+		 * table must be filled in. For a PF, no timer resources are
+		 * allocated and the timer entry in the BAT table is left
+		 * empty.
+		 */
+ if (!(cla_table->type == CQM_BAT_ENTRY_T_TIMER &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ ret = cqm_cla_init_entry_condition(cqm_handle, i);
+ if (ret != CQM_SUCCESS)
+ return CQM_FAIL;
+ }
+ mutex_init(&cla_table->lock);
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_init(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ s32 ret;
+
+ /* Applying for CLA Entries */
+ ret = cqm_cla_init_entry(cqm_handle, capability);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init_entry));
+ return ret;
+ }
+
+ /* After the CLA entry is applied, the address is filled in the BAT table. */
+ cqm_bat_fill_cla(cqm_handle);
+
+ /* Instruct the chip to update the BAT table. */
+ ret = cqm_bat_update(cqm_handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
+ goto err;
+ }
+
+ cqm_info(handle->dev_hdl, "Timer start: func_type=%d, timer_enable=%u\n",
+ cqm_handle->func_attribute.func_type,
+ cqm_handle->func_capability.timer_enable);
+
+ if (cqm_handle->func_attribute.func_type == CQM_PPF &&
+ cqm_handle->func_capability.timer_enable == CQM_TIMER_ENABLE) {
+ /* Enable the timer after the timer resources are applied for */
+ cqm_info(handle->dev_hdl, "Timer start: spfc ppf timer start\n");
+ ret = sphw_ppf_tmr_start((void *)(cqm_handle->ex_handle));
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Timer start: spfc ppf timer start, ret=%d\n",
+ ret);
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+ return CQM_FAIL;
+}
+
+void cqm_cla_uninit(struct cqm_handle *cqm_handle, u32 entry_numb)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = NULL;
+ s32 inv_flag = 0;
+ u32 i;
+
+ for (i = 0; i < entry_numb; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_x_buf, &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_y_buf, &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_z_buf, &inv_flag);
+ }
+ }
+
+ /* When the lb mode is 1/2, the timer space allocated to the 4 SMFs
+ * needs to be released.
+ */
+ if (cqm_handle->func_attribute.func_type == CQM_PPF &&
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cla_table = &bat_table->timer_entry[i];
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_x_buf, &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_y_buf, &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_z_buf, &inv_flag);
+ }
+ }
+}
+
+s32 cqm_cla_update_cmd(struct cqm_handle *cqm_handle, struct cqm_cmd_buf *buf_in,
+ struct cqm_cla_update_cmd *cmd)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cla_update_cmd *cla_update_cmd = NULL;
+ s32 ret = CQM_FAIL;
+
+ cla_update_cmd = (struct cqm_cla_update_cmd *)(buf_in->buf);
+
+ cla_update_cmd->gpa_h = cmd->gpa_h;
+ cla_update_cmd->gpa_l = cmd->gpa_l;
+ cla_update_cmd->value_h = cmd->value_h;
+ cla_update_cmd->value_l = cmd->value_l;
+ cla_update_cmd->smf_id = cmd->smf_id;
+ cla_update_cmd->func_id = cmd->func_id;
+
+ cqm_swab32((u8 *)cla_update_cmd,
+ (sizeof(struct cqm_cla_update_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_CLA_UPDATE, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Cla alloc: cqm_cla_update, cqm3_send_cmd_box_ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl, "Cla alloc: cqm_cla_update, cla_update_cmd: 0x%x 0x%x 0x%x 0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->value_h, cmd->value_l);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_update(struct cqm_handle *cqm_handle, struct cqm_buf_list *buf_node_parent,
+ struct cqm_buf_list *buf_node_child, u32 child_index, u8 cla_update_mode)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmd_buf *buf_in = NULL;
+ struct cqm_cla_update_cmd cmd;
+ dma_addr_t pa = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ u32 i = 0;
+ u64 spu_en;
+
+ buf_in = cqm3_cmd_alloc(cqm_handle->ex_handle);
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+ buf_in->size = sizeof(struct cqm_cla_update_cmd);
+
+ /* Fill command format, convert to big endian. */
+ /* SPU function sets bit63: acs_spu_en based on function id. */
+ if (sphw_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID)
+ spu_en = ((u64)(cqm_handle->func_attribute.func_global_idx &
+ 0x1)) << 63;
+ else
+ spu_en = 0;
+
+ pa = ((buf_node_parent->pa + (child_index * sizeof(dma_addr_t))) |
+ spu_en);
+ cmd.gpa_h = CQM_ADDR_HI(pa);
+ cmd.gpa_l = CQM_ADDR_LW(pa);
+
+ pa = (buf_node_child->pa | spu_en);
+ cmd.value_h = CQM_ADDR_HI(pa);
+ cmd.value_l = CQM_ADDR_LW(pa);
+
+ /* current CLA GPA CHECK */
+ if (gpa_check_enable) {
+ switch (cla_update_mode) {
+ /* gpa[0]=1 means this GPA is valid */
+ case CQM_CLA_RECORD_NEW_GPA:
+ cmd.value_l |= 1;
+ break;
+ /* gpa[0]=0 means this GPA is valid */
+ case CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID:
+ case CQM_CLA_DEL_GPA_WITH_CACHE_INVALID:
+ cmd.value_l &= (~1);
+ break;
+ default:
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: %s, wrong cla_update_mode=%u\n",
+ __func__, cla_update_mode);
+ break;
+ }
+ }
+
+	/* In non-fake mode, func_id is set to 0xffff, indicating the current
+	 * function. In fake mode, func_id is set to the specified value,
+	 * which is a fake func_id.
+	 */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD)
+ cmd.func_id = cqm_handle->func_attribute.func_global_idx;
+ else
+ cmd.func_id = 0xffff;
+
+ /* Mode 0 is hashed to 4 SMF engines (excluding PPF) by func ID. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ cmd.smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_cla_update_cmd(cqm_handle, buf_in, &cmd);
+ }
+ /* Modes 1/2 are allocated to four SMF engines by flow.
+ * Therefore, one function needs to be allocated to four SMF engines.
+ */
+ /* Mode 0 PPF needs to be configured on 4 engines,
+ * and the timer resources need to be shared by the 4 engines.
+ */
+ else if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2 ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type == CQM_PPF)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+			/* smf_pg is a bitmask of the currently enabled SMFs. */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ cmd.smf_id = i;
+ ret = cqm_cla_update_cmd(cqm_handle, buf_in,
+ &cmd);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+		cqm_err(handle->dev_hdl, "Cla update: unsupported lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+s32 cqm_cla_alloc(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ struct cqm_buf_list *buf_node_parent,
+ struct cqm_buf_list *buf_node_child, u32 child_index)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ s32 ret = CQM_FAIL;
+
+ /* Apply for trunk page */
+ buf_node_child->va = (u8 *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ cla_table->trunk_order);
+ CQM_PTR_CHECK_RET(buf_node_child->va, CQM_FAIL, CQM_ALLOC_FAIL(va));
+
+ /* PCI mapping */
+ buf_node_child->pa = pci_map_single(cqm_handle->dev, buf_node_child->va,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, buf_node_child->pa)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_node_child->pa));
+ goto err1;
+ }
+
+ /* Notify the chip of trunk_pa so that the chip fills in cla entry */
+ ret = cqm_cla_update(cqm_handle, buf_node_parent, buf_node_child,
+ child_index, CQM_CLA_RECORD_NEW_GPA);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
+ goto err2;
+ }
+
+ return CQM_SUCCESS;
+
+err2:
+ pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+err1:
+ free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
+ buf_node_child->va = NULL;
+ return CQM_FAIL;
+}
+
+void cqm_cla_free(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ struct cqm_buf_list *buf_node_parent,
+ struct cqm_buf_list *buf_node_child, u32 child_index, u8 cla_update_mode)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 trunk_size;
+
+ if (cqm_cla_update(cqm_handle, buf_node_parent, buf_node_child,
+ child_index, cla_update_mode) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
+ return;
+ }
+
+ if (cla_update_mode == CQM_CLA_DEL_GPA_WITH_CACHE_INVALID) {
+ trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ if (cqm_cla_cache_invalid(cqm_handle, buf_node_child->pa,
+ trunk_size) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_cache_invalid));
+ return;
+ }
+ }
+
+ /* Remove PCI mapping from the trunk page */
+ pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+
+	/* Release trunk page */
+ free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
+ buf_node_child->va = NULL;
+}
+
+u8 *cqm_cla_get_unlock_lvl0(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+
+ /* Level 0 CLA pages are statically allocated. */
+ offset = index * cla_table->obj_size;
+ ret_addr = (u8 *)(cla_z_buf->buf_list->va) + offset;
+ *pa = cla_z_buf->buf_list->pa + offset;
+
+ return ret_addr;
+}
+
+u8 *cqm_cla_get_unlock_lvl1(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_list *buf_node_y = NULL;
+ struct cqm_buf_list *buf_node_z = NULL;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+
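+	/* The object index is split into bit fields: bits [z:0] locate the
+	 * object inside a z-buf trunk, and the remaining high bits select
+	 * which z-buf trunk (i.e. which GPA entry of the single y-buf trunk).
+	 */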
+ z_index = index & ((1U << (cla_table->z + 1)) - 1);
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla get: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ return NULL;
+ }
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ buf_node_y = cla_y_buf->buf_list;
+
+	/* The z buf node does not exist yet; allocate a page for it first. */
+ if (!buf_node_z->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_y, buf_node_z,
+ y_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ cqm_err(handle->dev_hdl,
+ "Cla get: cla_table->type=%u\n",
+ cla_table->type);
+ return NULL;
+ }
+ }
+
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+
+ return ret_addr;
+}
+
+u8 *cqm_cla_get_unlock_lvl2(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct cqm_buf *cla_x_buf = &cla_table->cla_x_buf;
+ struct cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_list *buf_node_x = NULL;
+ struct cqm_buf_list *buf_node_y = NULL;
+ struct cqm_buf_list *buf_node_z = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u32 trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+ u64 tmp;
+
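+	/* The object index is split into bit fields: bits [z:0] locate the
+	 * object inside a z-buf trunk, bits [y:z+1] select the z-buf trunk
+	 * within a y-buf trunk, and the bits above y select the y-buf trunk
+	 * referenced by the x-level table.
+	 */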
+ z_index = index & ((1U << (cla_table->z + 1)) - 1);
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1U << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+ tmp = x_index * (trunk_size / sizeof(dma_addr_t)) + y_index;
+
+ if (x_index >= cla_y_buf->buf_number || tmp >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla get: index exceeds buf_number, x_index %u, y_index %u, y_buf_number %u, z_buf_number %u\n",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ return NULL;
+ }
+
+ buf_node_x = cla_x_buf->buf_list;
+ buf_node_y = &cla_y_buf->buf_list[x_index];
+ buf_node_z = &cla_z_buf->buf_list[tmp];
+
+	/* The y buf node does not exist yet; allocate pages for it. */
+ if (!buf_node_y->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_x, buf_node_y,
+ x_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ return NULL;
+ }
+ }
+
+	/* The z buf node does not exist yet; allocate pages for it. */
+ if (!buf_node_z->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_y, buf_node_z,
+ y_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ if (buf_node_y->refcount == 0)
+ /* To release node Y, cache_invalid is
+ * required.
+ */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_x, buf_node_y, x_index,
+ CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
+ return NULL;
+ }
+
+ /* reference counting of the y buffer node needs to increase
+ * by 1.
+ */
+ buf_node_y->refcount++;
+ }
+
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+
+ return ret_addr;
+}
+
+u8 *cqm_cla_get_unlock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ u8 *ret_addr = NULL;
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0)
+ ret_addr = cqm_cla_get_unlock_lvl0(cqm_handle, cla_table, index,
+ count, pa);
+ else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
+ ret_addr = cqm_cla_get_unlock_lvl1(cqm_handle, cla_table, index,
+ count, pa);
+ else
+ ret_addr = cqm_cla_get_unlock_lvl2(cqm_handle, cla_table, index,
+ count, pa);
+
+ return ret_addr;
+}
+
+u8 *cqm_cla_get_lock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ u8 *ret_addr = NULL;
+
+ mutex_lock(&cla_table->lock);
+
+ ret_addr = cqm_cla_get_unlock(cqm_handle, cla_table, index, count, pa);
+
+ mutex_unlock(&cla_table->lock);
+
+ return ret_addr;
+}
+
+void cqm_cla_put(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count)
+{
+ struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct cqm_buf *cla_x_buf = &cla_table->cla_x_buf;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_list *buf_node_z = NULL;
+ struct cqm_buf_list *buf_node_y = NULL;
+ struct cqm_buf_list *buf_node_x = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ u64 tmp;
+
+	/* The buffer is statically allocated, so its reference count does not
+	 * need to be tracked here.
+	 */
+ if (cla_table->alloc_static)
+ return;
+
+ mutex_lock(&cla_table->lock);
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_1) {
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla put: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ cqm_err(handle->dev_hdl,
+ "Cla put: cla_table->type=%u\n",
+ cla_table->type);
+ mutex_unlock(&cla_table->lock);
+ return;
+ }
+
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ buf_node_y = cla_y_buf->buf_list;
+
+ /* When the value of reference counting on the z node page is 0,
+ * the z node page is released.
+ */
+ buf_node_z->refcount -= count;
+ if (buf_node_z->refcount == 0)
+ /* The cache invalid is not required for the Z node. */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_y,
+ buf_node_z, y_index,
+ CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
+ } else if (cla_table->cla_lvl == CQM_CLA_LVL_2) {
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1U << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+ tmp = x_index * (trunk_size / sizeof(dma_addr_t)) + y_index;
+
+ if (x_index >= cla_y_buf->buf_number || tmp >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla put: index exceeds buf_number, x_index %u, y_index %u, y_buf_number %u, z_buf_number %u\n",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ mutex_unlock(&cla_table->lock);
+ return;
+ }
+
+ buf_node_x = cla_x_buf->buf_list;
+ buf_node_y = &cla_y_buf->buf_list[x_index];
+ buf_node_z = &cla_z_buf->buf_list[tmp];
+
+ /* When the value of reference counting on the z node page is 0,
+ * the z node page is released.
+ */
+ buf_node_z->refcount -= count;
+ if (buf_node_z->refcount == 0) {
+ cqm_cla_free(cqm_handle, cla_table, buf_node_y,
+ buf_node_z, y_index,
+ CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
+
+ /* When the value of reference counting on the y node
+ * page is 0, the y node page is released.
+ */
+ buf_node_y->refcount--;
+ if (buf_node_y->refcount == 0)
+ /* Node y requires cache to be invalid. */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_x, buf_node_y, x_index,
+ CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
+ }
+ }
+
+ mutex_unlock(&cla_table->lock);
+}
+
+struct cqm_cla_table *cqm_cla_table_get(struct cqm_bat_table *bat_table, u32 entry_type)
+{
+ struct cqm_cla_table *cla_table = NULL;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (entry_type == cla_table->type)
+ return cla_table;
+ }
+
+ return NULL;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h b/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
new file mode 100644
index 000000000000..d3b871ca3c82
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
@@ -0,0 +1,215 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef CQM_BAT_CLA_H
+#define CQM_BAT_CLA_H
+
+/* When the connection check is enabled, the maximum number of connections
+ * supported by the chip is 1M - 63, which cannot reach 1M
+ */
+#define CQM_BAT_MAX_CONN_NUM (0x100000 - 63)
+#define CQM_BAT_MAX_CACHE_CONN_NUM (0x100000 - 63)
+
+#define CLA_TABLE_PAGE_ORDER 0
+#define CQM_4K_PAGE_ORDER 0
+#define CQM_4K_PAGE_SIZE 4096
+
+#define CQM_BAT_ENTRY_MAX 16
+#define CQM_BAT_ENTRY_SIZE 16
+
+#define CQM_BAT_SIZE_FT_PF 192
+#define CQM_BAT_SIZE_FT_VF 112
+
+#define CQM_BAT_INDEX0 0
+#define CQM_BAT_INDEX1 1
+#define CQM_BAT_INDEX2 2
+#define CQM_BAT_INDEX3 3
+#define CQM_BAT_INDEX4 4
+#define CQM_BAT_INDEX5 5
+#define CQM_BAT_INDEX6 6
+#define CQM_BAT_INDEX7 7
+#define CQM_BAT_INDEX8 8
+#define CQM_BAT_INDEX9 9
+#define CQM_BAT_INDEX10 10
+#define CQM_BAT_INDEX11 11
+#define CQM_BAT_INDEX12 12
+#define CQM_BAT_INDEX13 13
+#define CQM_BAT_INDEX14 14
+#define CQM_BAT_INDEX15 15
+
+enum cqm_bat_entry_type {
+ CQM_BAT_ENTRY_T_CFG = 0,
+ CQM_BAT_ENTRY_T_HASH = 1,
+ CQM_BAT_ENTRY_T_QPC = 2,
+ CQM_BAT_ENTRY_T_SCQC = 3,
+ CQM_BAT_ENTRY_T_SRQC = 4,
+ CQM_BAT_ENTRY_T_MPT = 5,
+ CQM_BAT_ENTRY_T_GID = 6,
+ CQM_BAT_ENTRY_T_LUN = 7,
+ CQM_BAT_ENTRY_T_TASKMAP = 8,
+ CQM_BAT_ENTRY_T_L3I = 9,
+ CQM_BAT_ENTRY_T_CHILDC = 10,
+ CQM_BAT_ENTRY_T_TIMER = 11,
+ CQM_BAT_ENTRY_T_XID2CID = 12,
+ CQM_BAT_ENTRY_T_REORDER = 13,
+ CQM_BAT_ENTRY_T_INVALID = 14,
+ CQM_BAT_ENTRY_T_MAX = 15,
+};
+
+/* CLA update mode */
+#define CQM_CLA_RECORD_NEW_GPA 0
+#define CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID 1
+#define CQM_CLA_DEL_GPA_WITH_CACHE_INVALID 2
+
+#define CQM_CLA_LVL_0 0
+#define CQM_CLA_LVL_1 1
+#define CQM_CLA_LVL_2 2
+
+#define CQM_MAX_INDEX_BIT 19
+
+#define CQM_CHIP_CACHELINE 256
+#define CQM_CHIP_TIMER_CACHELINE 512
+#define CQM_OBJECT_256 256
+#define CQM_OBJECT_512 512
+#define CQM_OBJECT_1024 1024
+#define CQM_CHIP_GPA_MASK 0x1ffffffffffffff
+#define CQM_CHIP_GPA_HIMASK 0x1ffffff
+#define CQM_CHIP_GPA_LOMASK 0xffffffff
+#define CQM_CHIP_GPA_HSHIFT 32
+
+/* Aligns with 64 buckets and shifts rightward by 6 bits */
+#define CQM_HASH_NUMBER_UNIT 6
+
+struct cqm_cla_table {
+ u32 type;
+ u32 max_buffer_size;
+ u32 obj_num;
+ bool alloc_static; /* Whether the buffer is statically allocated */
+ u32 cla_lvl;
+ u32 cacheline_x; /* x value calculated based on cacheline, used by the chip */
+ u32 cacheline_y; /* y value calculated based on cacheline, used by the chip */
+ u32 cacheline_z; /* z value calculated based on cacheline, used by the chip */
+ u32 x; /* x value calculated based on obj_size, used by software */
+ u32 y; /* y value calculated based on obj_size, used by software */
+ u32 z; /* z value calculated based on obj_size, used by software */
+ struct cqm_buf cla_x_buf;
+ struct cqm_buf cla_y_buf;
+ struct cqm_buf cla_z_buf;
+	u32 trunk_order; /* A trunk is a contiguous physical block of 2^order pages */
+ u32 obj_size;
+ struct mutex lock; /* Lock for cla buffer allocation and free */
+
+ struct cqm_bitmap bitmap;
+
+ struct cqm_object_table obj_table; /* Mapping table between indexes and objects */
+};
+
+struct cqm_bat_entry_cfg {
+ u32 cur_conn_num_h_4 : 4;
+ u32 rsv1 : 4;
+ u32 max_conn_num : 20;
+ u32 rsv2 : 4;
+
+ u32 max_conn_cache : 10;
+ u32 rsv3 : 6;
+ u32 cur_conn_num_l_16 : 16;
+
+ u32 bloom_filter_addr : 16;
+ u32 cur_conn_cache : 10;
+ u32 rsv4 : 6;
+
+ u32 bucket_num : 16;
+ u32 bloom_filter_len : 16;
+};
+
+#define CQM_BAT_NO_BYPASS_CACHE 0
+#define CQM_BAT_BYPASS_CACHE 1
+
+#define CQM_BAT_ENTRY_SIZE_256 0
+#define CQM_BAT_ENTRY_SIZE_512 1
+#define CQM_BAT_ENTRY_SIZE_1024 2
+
+struct cqm_bat_entry_standerd {
+ u32 entry_size : 2;
+ u32 rsv1 : 6;
+ u32 max_number : 20;
+ u32 rsv2 : 4;
+
+ u32 cla_gpa_h : 32;
+
+ u32 cla_gpa_l : 32;
+
+ u32 rsv3 : 8;
+ u32 z : 5;
+ u32 y : 5;
+ u32 x : 5;
+ u32 rsv24 : 1;
+ u32 bypass : 1;
+ u32 cla_level : 2;
+ u32 rsv5 : 5;
+};
+
+struct cqm_bat_entry_vf2pf {
+ u32 cla_gpa_h : 25;
+ u32 pf_id : 5;
+ u32 fake_vf_en : 1;
+ u32 acs_spu_en : 1;
+};
+
+#define CQM_BAT_ENTRY_TASKMAP_NUM 4
+struct cqm_bat_entry_taskmap_addr {
+ u32 gpa_h;
+ u32 gpa_l;
+};
+
+struct cqm_bat_entry_taskmap {
+ struct cqm_bat_entry_taskmap_addr addr[CQM_BAT_ENTRY_TASKMAP_NUM];
+};
+
+struct cqm_bat_table {
+ u32 bat_entry_type[CQM_BAT_ENTRY_MAX];
+ u8 bat[CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE];
+ struct cqm_cla_table entry[CQM_BAT_ENTRY_MAX];
+	/* In LB mode 1/2, the timer must be configured on all 4 SMFs, and the
+	 * GPAs must be different and independent.
+	 */
+ struct cqm_cla_table timer_entry[4];
+ u32 bat_size;
+};
+
+#define CQM_BAT_MAX_SIZE 256
+struct cqm_cmdq_bat_update {
+ u32 offset;
+ u32 byte_len;
+ u8 data[CQM_BAT_MAX_SIZE];
+ u32 smf_id;
+ u32 func_id;
+};
+
+struct cqm_cla_update_cmd {
+ /* Gpa address to be updated */
+ u32 gpa_h;
+ u32 gpa_l;
+
+ /* Updated Value */
+ u32 value_h;
+ u32 value_l;
+
+ u32 smf_id;
+ u32 func_id;
+};
+
+s32 cqm_bat_init(struct cqm_handle *cqm_handle);
+void cqm_bat_uninit(struct cqm_handle *cqm_handle);
+s32 cqm_cla_init(struct cqm_handle *cqm_handle);
+void cqm_cla_uninit(struct cqm_handle *cqm_handle, u32 entry_numb);
+u8 *cqm_cla_get_unlock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa);
+u8 *cqm_cla_get_lock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa);
+void cqm_cla_put(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count);
+struct cqm_cla_table *cqm_cla_table_get(struct cqm_bat_table *bat_table, u32 entry_type);
+u32 cqm_funcid2smfid(struct cqm_handle *cqm_handle);
+
+#endif /* CQM_BAT_CLA_H */
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c b/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
new file mode 100644
index 000000000000..4e482776a14f
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
@@ -0,0 +1,891 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/mm.h>
+#include <linux/gfp.h>
+
+#include "sphw_crm.h"
+#include "sphw_hw.h"
+#include "sphw_hwdev.h"
+#include "sphw_hwif.h"
+
+#include "spfc_cqm_object.h"
+#include "spfc_cqm_bitmap_table.h"
+#include "spfc_cqm_bat_cla.h"
+#include "spfc_cqm_main.h"
+
+#define common_section
+
+void cqm_swab64(u8 *addr, u32 cnt)
+{
+ u64 *temp = (u64 *)addr;
+ u64 value = 0;
+ u32 i;
+
+ for (i = 0; i < cnt; i++) {
+ value = __swab64(*temp);
+ *temp = value;
+ temp++;
+ }
+}
+
+void cqm_swab32(u8 *addr, u32 cnt)
+{
+ u32 *temp = (u32 *)addr;
+ u32 value = 0;
+ u32 i;
+
+ for (i = 0; i < cnt; i++) {
+ value = __swab32(*temp);
+ *temp = value;
+ temp++;
+ }
+}
+
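+/* Return floor(log2(data)), e.g. cqm_shift(4096) = 12 and cqm_shift(6) = 2;
+ * both cqm_shift(0) and cqm_shift(1) return 0.
+ */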
+s32 cqm_shift(u32 data)
+{
+ s32 shift = -1;
+
+ do {
+ data >>= 1;
+ shift++;
+ } while (data);
+
+ return shift;
+}
+
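+/* Return true only when data is a power of two greater than 1, e.g.
+ * cqm_check_align(256) is true while cqm_check_align(0), cqm_check_align(1)
+ * and cqm_check_align(384) are all false.
+ */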
+bool cqm_check_align(u32 data)
+{
+ if (data == 0)
+ return false;
+
+ do {
+ /* When the value can be exactly divided by 2,
+ * the value of data is shifted right by one bit, that is,
+ * divided by 2.
+ */
+ if ((data & 0x1) == 0)
+ data >>= 1;
+ /* If the value cannot be divisible by 2, the value is
+ * not 2^n-aligned and false is returned.
+ */
+ else
+ return false;
+ } while (data != 1);
+
+ return true;
+}
+
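+/* Allocate size bytes aligned to a 2^align_order boundary: a larger block is
+ * kmalloc'd, the returned pointer is rounded up past a hidden void* slot to
+ * the alignment boundary, and the original address is stored in that slot so
+ * that cqm_kfree_align() can recover and free it.
+ */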
+void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order)
+{
+ void *orig_addr = NULL;
+ void *align_addr = NULL;
+ void *index_addr = NULL;
+
+ orig_addr = kmalloc(size + ((u64)1 << align_order) + sizeof(void *),
+ flags);
+ if (!orig_addr)
+ return NULL;
+
+ index_addr = (void *)((char *)orig_addr + sizeof(void *));
+ align_addr =
+ (void *)((((u64)index_addr + ((u64)1 << align_order) - 1) >>
+ align_order) << align_order);
+
+ /* Record the original memory address for memory release. */
+ index_addr = (void *)((char *)align_addr - sizeof(void *));
+ *(void **)index_addr = orig_addr;
+
+ return align_addr;
+}
+
+void cqm_kfree_align(void *addr)
+{
+ void *index_addr = NULL;
+
+ /* Release the original memory address. */
+ index_addr = (void *)((char *)addr - sizeof(void *));
+
+ kfree(*(void **)index_addr);
+}
+
+void cqm_write_lock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ write_lock_bh(lock);
+ else
+ write_lock(lock);
+}
+
+void cqm_write_unlock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ write_unlock_bh(lock);
+ else
+ write_unlock(lock);
+}
+
+void cqm_read_lock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ read_lock_bh(lock);
+ else
+ read_lock(lock);
+}
+
+void cqm_read_unlock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ read_unlock_bh(lock);
+ else
+ read_unlock(lock);
+}
+
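+/* When direct is true, vmap() the per-trunk pages of the buffer into one
+ * contiguous kernel virtual range so the whole buffer can be addressed
+ * linearly through buf->direct.va; otherwise direct.va is simply cleared.
+ */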
+s32 cqm_buf_alloc_direct(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct page **pages = NULL;
+ u32 i, j, order;
+
+ order = get_order(buf->buf_size);
+
+ if (!direct) {
+ buf->direct.va = NULL;
+ return CQM_SUCCESS;
+ }
+
+ pages = vmalloc(sizeof(struct page *) * buf->page_number);
+ if (!pages) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(pages));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < buf->buf_number; i++) {
+ for (j = 0; j < ((u32)1 << order); j++)
+ pages[(ulong)(unsigned int)((i << order) + j)] =
+ (void *)virt_to_page((u8 *)(buf->buf_list[i].va) +
+ (PAGE_SIZE * j));
+ }
+
+ buf->direct.va = vmap(pages, buf->page_number, VM_MAP, PAGE_KERNEL);
+ vfree(pages);
+ if (!buf->direct.va) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf->direct.va));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_buf_alloc_page(struct cqm_handle *cqm_handle, struct cqm_buf *buf)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct page *newpage = NULL;
+ u32 order;
+ void *va = NULL;
+ s32 i, node;
+
+ order = get_order(buf->buf_size);
+	/* Allocate pages for each buffer (non-OVS service mode) */
+ if (handle->board_info.service_mode != 0) {
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ va = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ order);
+ if (!va) {
+ cqm_err(handle->dev_hdl,
+ CQM_ALLOC_FAIL(buf_page));
+ break;
+ }
+			/* Zero the page right after allocation. If hash
+			 * entries are involved, the memory must start out
+			 * as all zeroes.
+			 */
+ memset(va, 0, buf->buf_size);
+ buf->buf_list[i].va = va;
+ }
+ } else {
+ node = dev_to_node(handle->dev_hdl);
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ newpage = alloc_pages_node(node,
+ GFP_KERNEL | __GFP_ZERO,
+ order);
+ if (!newpage) {
+ cqm_err(handle->dev_hdl,
+ CQM_ALLOC_FAIL(buf_page));
+ break;
+ }
+ va = (void *)page_address(newpage);
+			/* Zero the page right after allocation. If hash
+			 * entries are involved, the memory must start out
+			 * as all zeroes.
+			 */
+ memset(va, 0, buf->buf_size);
+ buf->buf_list[i].va = va;
+ }
+ }
+
+ if (i != buf->buf_number) {
+ i--;
+ for (; i >= 0; i--) {
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_buf_alloc_map(struct cqm_handle *cqm_handle, struct cqm_buf *buf)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ void *va = NULL;
+ s32 i;
+
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ va = buf->buf_list[i].va;
+ buf->buf_list[i].pa = pci_map_single(dev, va, buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(dev, buf->buf_list[i].pa)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_list));
+ break;
+ }
+ }
+
+ if (i != buf->buf_number) {
+ i--;
+ for (; i >= 0; i--)
+ pci_unmap_single(dev, buf->buf_list[i].pa,
+ buf->buf_size, PCI_DMA_BIDIRECTIONAL);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
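+/* Allocate a cqm_buf step by step: the buf_list descriptor array, the
+ * physical pages for each of buf_number blocks of buf_size bytes, their
+ * streaming DMA mappings, and (optionally) a contiguous vmap of the whole
+ * buffer (see cqm_buf_alloc_direct()).
+ */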
+s32 cqm_buf_alloc(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ u32 order;
+ s32 i;
+
+ order = get_order(buf->buf_size);
+
+	/* Allocate the buffer list descriptor array */
+ buf->buf_list = vmalloc(buf->buf_number * sizeof(struct cqm_buf_list));
+ CQM_PTR_CHECK_RET(buf->buf_list, CQM_FAIL,
+ CQM_ALLOC_FAIL(linux_buf_list));
+ memset(buf->buf_list, 0, buf->buf_number * sizeof(struct cqm_buf_list));
+
+	/* Allocate pages for each buffer */
+ if (cqm_buf_alloc_page(cqm_handle, buf) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(linux_cqm_buf_alloc_page));
+ goto err1;
+ }
+
+ /* PCI mapping of the buffer */
+ if (cqm_buf_alloc_map(cqm_handle, buf) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(linux_cqm_buf_alloc_map));
+ goto err2;
+ }
+
+ /* direct remapping */
+ if (cqm_buf_alloc_direct(cqm_handle, buf, direct) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc_direct));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ for (i = 0; i < (s32)buf->buf_number; i++)
+ pci_unmap_single(dev, buf->buf_list[i].pa, buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+err2:
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+err1:
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+ return CQM_FAIL;
+}
+
+void cqm_buf_free(struct cqm_buf *buf, struct pci_dev *dev)
+{
+ u32 order;
+ s32 i;
+
+ order = get_order(buf->buf_size);
+
+ if (buf->direct.va) {
+ vunmap(buf->direct.va);
+ buf->direct.va = NULL;
+ }
+
+ if (buf->buf_list) {
+ for (i = 0; i < (s32)(buf->buf_number); i++) {
+ if (buf->buf_list[i].va) {
+ pci_unmap_single(dev, buf->buf_list[i].pa,
+ buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+ }
+
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+ }
+}
+
+s32 cqm_cla_cache_invalid_cmd(struct cqm_handle *cqm_handle, struct cqm_cmd_buf *buf_in,
+ struct cqm_cla_cache_invalid_cmd *cmd)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cla_cache_invalid_cmd *cla_cache_invalid_cmd = NULL;
+ s32 ret;
+
+ cla_cache_invalid_cmd = (struct cqm_cla_cache_invalid_cmd *)(buf_in->buf);
+ cla_cache_invalid_cmd->gpa_h = cmd->gpa_h;
+ cla_cache_invalid_cmd->gpa_l = cmd->gpa_l;
+ cla_cache_invalid_cmd->cache_size = cmd->cache_size;
+ cla_cache_invalid_cmd->smf_id = cmd->smf_id;
+ cla_cache_invalid_cmd->func_id = cmd->func_id;
+
+ cqm_swab32((u8 *)cla_cache_invalid_cmd,
+		   /* shift right by 2 bits to get the length in dwords (4B) */
+ (sizeof(struct cqm_cla_cache_invalid_cmd) >> 2));
+
+ /* Send the cmdq command. */
+ ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_CLA_CACHE_INVALID, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
+ cqm_err(handle->dev_hdl,
+ "Cla cache invalid: cqm3_send_cmd_box_ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl,
+ "Cla cache invalid: cla_cache_invalid_cmd: 0x%x 0x%x 0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->cache_size);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_cache_invalid(struct cqm_handle *cqm_handle, dma_addr_t gpa, u32 cache_size)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmd_buf *buf_in = NULL;
+ struct cqm_cla_cache_invalid_cmd cmd;
+ s32 ret = CQM_FAIL;
+ u32 i;
+
+ buf_in = cqm3_cmd_alloc((void *)(cqm_handle->ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+ buf_in->size = sizeof(struct cqm_cla_cache_invalid_cmd);
+
+ /* Fill command and convert it to big endian */
+ cmd.cache_size = cache_size;
+ cmd.gpa_h = CQM_ADDR_HI(gpa);
+ cmd.gpa_l = CQM_ADDR_LW(gpa);
+
+	/* For a fake child function, func_id carries the fake function's
+	 * global index; in all other modes func_id is set to 0xffff to
+	 * indicate non-fake mode.
+	 */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD)
+ cmd.func_id = cqm_handle->func_attribute.func_global_idx;
+ else
+ cmd.func_id = 0xffff;
+
+ /* Mode 0 is hashed to 4 SMF engines (excluding PPF) by func ID. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ cmd.smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_cla_cache_invalid_cmd(cqm_handle, buf_in, &cmd);
+ }
+ /* Mode 1/2 are allocated to 4 SMF engines by flow. Therefore,
+ * one function needs to be allocated to 4 SMF engines.
+ */
+ /* The PPF in mode 0 needs to be configured on 4 engines,
+ * and the timer resources need to be shared by the 4 engines.
+ */
+ else if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2 ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type == CQM_PPF)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+			/* smf_pg is a bitmask of the currently enabled SMF engines. */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ cmd.smf_id = i;
+ ret = cqm_cla_cache_invalid_cmd(cqm_handle,
+ buf_in, &cmd);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+		cqm_err(handle->dev_hdl, "Cla cache invalid: unsupported lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+static void free_cache_inv(struct cqm_handle *cqm_handle, struct cqm_buf *buf,
+ s32 *inv_flag)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 order;
+ s32 i;
+
+ order = get_order(buf->buf_size);
+
+ if (!handle->chip_present_flag)
+ return;
+
+ if (!buf->buf_list)
+ return;
+
+ for (i = 0; i < (s32)(buf->buf_number); i++) {
+ if (!buf->buf_list[i].va)
+ continue;
+
+ if (*inv_flag != CQM_SUCCESS)
+ continue;
+
+ /* In the Pangea environment, if the cmdq times out,
+ * no subsequent message is sent.
+ */
+ *inv_flag = cqm_cla_cache_invalid(cqm_handle, buf->buf_list[i].pa,
+ (u32)(PAGE_SIZE << order));
+ if (*inv_flag != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl,
+ "Buffer free: fail to invalid buf_list pa cache, inv_flag=%d\n",
+ *inv_flag);
+ }
+}
+
+void cqm_buf_free_cache_inv(struct cqm_handle *cqm_handle, struct cqm_buf *buf,
+ s32 *inv_flag)
+{
+ /* Send a command to the chip to kick out the cache. */
+ free_cache_inv(cqm_handle, buf, inv_flag);
+
+ /* Clear host resources */
+ cqm_buf_free(buf, cqm_handle->dev);
+}
+
+#define bitmap_section
+
+s32 cqm_single_bitmap_init(struct cqm_bitmap *bitmap)
+{
+ u32 bit_number;
+
+ spin_lock_init(&bitmap->lock);
+
+	/* Round max_num up to a multiple of 8 bits, then shift right by
+	 * 3 bits to obtain the number of bytes required.
+	 */
+ bit_number = (ALIGN(bitmap->max_num, CQM_NUM_BIT_BYTE) >> CQM_BYTE_BIT_SHIFT);
+ bitmap->table = vmalloc(bit_number);
+ CQM_PTR_CHECK_RET(bitmap->table, CQM_FAIL, CQM_ALLOC_FAIL(bitmap->table));
+ memset(bitmap->table, 0, bit_number);
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_bitmap_init(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_bitmap *bitmap = NULL;
+ s32 ret = CQM_SUCCESS;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't init bitmap\n",
+ cla_table->type);
+ continue;
+ }
+
+ bitmap = &cla_table->bitmap;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_QPC:
+ bitmap->max_num = capability->qpc_number;
+ bitmap->reserved_top = capability->qpc_reserved;
+ bitmap->last = capability->qpc_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ bitmap->max_num = capability->mpt_number;
+ bitmap->reserved_top = capability->mpt_reserved;
+ bitmap->last = capability->mpt_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ bitmap->max_num = capability->scqc_number;
+ bitmap->reserved_top = capability->scq_reserved;
+ bitmap->last = capability->scq_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ bitmap->max_num = capability->srqc_number;
+ bitmap->reserved_top = capability->srq_reserved;
+ bitmap->last = capability->srq_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ default:
+ break;
+ }
+
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Bitmap init: failed to init cla_table_type=%u, obj_num=0x%x\n",
+ cla_table->type, cla_table->obj_num);
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_bitmap_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+void cqm_bitmap_uninit(struct cqm_handle *cqm_handle)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_bitmap *bitmap = NULL;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ bitmap = &cla_table->bitmap;
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ if (bitmap->table) {
+ vfree(bitmap->table);
+ bitmap->table = NULL;
+ }
+ }
+ }
+}
+
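+/* Check whether the range [begin, begin + count) is free and does not cross
+ * a 'step'-aligned boundary. Returns 'begin' on success; otherwise returns
+ * the index from which the caller should resume searching.
+ *
+ * Example: step = 64, begin = 60, count = 8 gives end = 67; even if bits
+ * 61..67 are free, the run crosses the step boundary at index 64, so 64 is
+ * returned and cqm_bitmap_find() restarts the search there.
+ */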
+u32 cqm_bitmap_check_range(const ulong *table, u32 step, u32 max_num, u32 begin,
+ u32 count)
+{
+ u32 end = (begin + (count - 1));
+ u32 i;
+
+ /* Single-bit check is not performed. */
+ if (count == 1)
+ return begin;
+
+ /* The end value exceeds the threshold. */
+ if (end >= max_num)
+ return max_num;
+
+ /* Bit check, the next bit is returned when a non-zero bit is found. */
+ for (i = (begin + 1); i <= end; i++) {
+ if (test_bit((s32)i, table))
+ return i + 1;
+ }
+
+ /* Check whether it's in different steps. */
+ if ((begin & (~(step - 1))) != (end & (~(step - 1))))
+ return (end & (~(step - 1)));
+
+ /* If the check succeeds, begin is returned. */
+ return begin;
+}
+
+void cqm_bitmap_find(struct cqm_bitmap *bitmap, u32 *index, u32 last, u32 step, u32 count)
+{
+ u32 max_num = bitmap->max_num;
+ ulong *table = bitmap->table;
+
+ do {
+ *index = (u32)find_next_zero_bit(table, max_num, last);
+ if (*index < max_num)
+ last = cqm_bitmap_check_range(table, step, max_num,
+ *index, count);
+ else
+ break;
+ } while (last != *index);
+}
+
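+/* Allocate 'count' consecutive bits that lie within one 'step'-aligned
+ * window. The search starts from bitmap->last and, if that fails, wraps
+ * around to bitmap->reserved_top. Returns the first index of the run on
+ * success, or a value >= bitmap->max_num on failure.
+ */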
+u32 cqm_bitmap_alloc(struct cqm_bitmap *bitmap, u32 step, u32 count, bool update_last)
+{
+ u32 index = 0;
+ u32 max_num = bitmap->max_num;
+ u32 last = bitmap->last;
+ ulong *table = bitmap->table;
+ u32 i;
+
+ spin_lock(&bitmap->lock);
+
+ /* Search for an idle bit from the last position. */
+ cqm_bitmap_find(bitmap, &index, last, step, count);
+
+ /* The preceding search fails. Search for an idle bit
+ * from the beginning.
+ */
+ if (index >= max_num) {
+ last = bitmap->reserved_top;
+ cqm_bitmap_find(bitmap, &index, last, step, count);
+ }
+
+ /* Set the found bit to 1 and reset last. */
+ if (index < max_num) {
+ for (i = index; i < (index + count); i++)
+ set_bit(i, table);
+
+ if (update_last) {
+ bitmap->last = (index + count);
+ if (bitmap->last >= bitmap->max_num)
+ bitmap->last = bitmap->reserved_top;
+ }
+ }
+
+ spin_unlock(&bitmap->lock);
+ return index;
+}
+
+u32 cqm_bitmap_alloc_reserved(struct cqm_bitmap *bitmap, u32 count, u32 index)
+{
+ ulong *table = bitmap->table;
+ u32 ret_index;
+
+ if (index >= bitmap->reserved_top || index >= bitmap->max_num || count != 1)
+ return CQM_INDEX_INVALID;
+
+ spin_lock(&bitmap->lock);
+
+ if (test_bit((s32)index, table)) {
+ ret_index = CQM_INDEX_INVALID;
+ } else {
+ set_bit(index, table);
+ ret_index = index;
+ }
+
+ spin_unlock(&bitmap->lock);
+ return ret_index;
+}
+
+void cqm_bitmap_free(struct cqm_bitmap *bitmap, u32 index, u32 count)
+{
+ u32 i;
+
+ spin_lock(&bitmap->lock);
+
+ for (i = index; i < (index + count); i++)
+ clear_bit((s32)i, bitmap->table);
+
+ spin_unlock(&bitmap->lock);
+}
+
+#define obj_table_section
+s32 cqm_single_object_table_init(struct cqm_object_table *obj_table)
+{
+ rwlock_init(&obj_table->lock);
+
+ obj_table->table = vmalloc(obj_table->max_num * sizeof(void *));
+ CQM_PTR_CHECK_RET(obj_table->table, CQM_FAIL, CQM_ALLOC_FAIL(table));
+ memset(obj_table->table, 0, obj_table->max_num * sizeof(void *));
+ return CQM_SUCCESS;
+}
+
+s32 cqm_object_table_init(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object_table *obj_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ s32 ret = CQM_SUCCESS;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Obj table init: cla_table_type %u, obj_num=0, don't init obj table\n",
+ cla_table->type);
+ continue;
+ }
+
+ obj_table = &cla_table->obj_table;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_QPC:
+ obj_table->max_num = capability->qpc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ obj_table->max_num = capability->mpt_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ obj_table->max_num = capability->scqc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ obj_table->max_num = capability->srqc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ default:
+ break;
+ }
+
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Obj table init: failed to init cla_table_type=%u, obj_num=0x%x\n",
+ cla_table->type, cla_table->obj_num);
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_object_table_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+void cqm_object_table_uninit(struct cqm_handle *cqm_handle)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_object_table *obj_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ obj_table = &cla_table->obj_table;
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ if (obj_table->table) {
+ vfree(obj_table->table);
+ obj_table->table = NULL;
+ }
+ }
+ }
+}
+
+s32 cqm_object_table_insert(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, struct cqm_object *obj, bool bh)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table insert: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return CQM_FAIL;
+ }
+
+ cqm_write_lock(&object_table->lock, bh);
+
+ if (!object_table->table[index]) {
+ object_table->table[index] = obj;
+ cqm_write_unlock(&object_table->lock, bh);
+ return CQM_SUCCESS;
+ }
+
+ cqm_write_unlock(&object_table->lock, bh);
+ cqm_err(handle->dev_hdl,
+ "Obj table insert: object_table->table[0x%x] has been inserted\n",
+ index);
+
+ return CQM_FAIL;
+}
+
+void cqm_object_table_remove(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, const struct cqm_object *obj, bool bh)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table remove: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return;
+ }
+
+ cqm_write_lock(&object_table->lock, bh);
+
+ if (object_table->table[index] && object_table->table[index] == obj)
+ object_table->table[index] = NULL;
+ else
+ cqm_err(handle->dev_hdl,
+ "Obj table remove: object_table->table[0x%x] has been removed\n",
+ index);
+
+ cqm_write_unlock(&object_table->lock, bh);
+}
+
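+/* Look up the object at 'index'. On success the object's refcount is
+ * incremented before it is returned, so the caller is expected to release
+ * that reference when done; NULL is returned for an out-of-range index or
+ * an empty slot.
+ */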
+struct cqm_object *cqm_object_table_get(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, bool bh)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object *obj = NULL;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table get: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return NULL;
+ }
+
+ cqm_read_lock(&object_table->lock, bh);
+
+ obj = object_table->table[index];
+ if (obj)
+ atomic_inc(&obj->refcount);
+
+ cqm_read_unlock(&object_table->lock, bh);
+
+ return obj;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h b/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
new file mode 100644
index 000000000000..4a1b353794bf
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef CQM_BITMAP_TABLE_H
+#define CQM_BITMAP_TABLE_H
+
+struct cqm_bitmap {
+ ulong *table;
+ u32 max_num;
+ u32 last;
+ u32 reserved_top; /* reserved index */
+ spinlock_t lock;
+};
+
+struct cqm_object_table {
+	/* Currently a flat array; may later be optimized into a red-black tree. */
+ struct cqm_object **table;
+ u32 max_num;
+ rwlock_t lock;
+};
+
+struct cqm_cla_cache_invalid_cmd {
+ u32 gpa_h;
+ u32 gpa_l;
+
+ u32 cache_size; /* CLA cache size=4096B */
+
+ u32 smf_id;
+ u32 func_id;
+};
+
+struct cqm_handle;
+
+s32 cqm_bitmap_init(struct cqm_handle *cqm_handle);
+void cqm_bitmap_uninit(struct cqm_handle *cqm_handle);
+u32 cqm_bitmap_alloc(struct cqm_bitmap *bitmap, u32 step, u32 count, bool update_last);
+u32 cqm_bitmap_alloc_reserved(struct cqm_bitmap *bitmap, u32 count, u32 index);
+void cqm_bitmap_free(struct cqm_bitmap *bitmap, u32 index, u32 count);
+s32 cqm_object_table_init(struct cqm_handle *cqm_handle);
+void cqm_object_table_uninit(struct cqm_handle *cqm_handle);
+s32 cqm_object_table_insert(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, struct cqm_object *obj, bool bh);
+void cqm_object_table_remove(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, const struct cqm_object *obj, bool bh);
+struct cqm_object *cqm_object_table_get(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, bool bh);
+
+void cqm_swab64(u8 *addr, u32 cnt);
+void cqm_swab32(u8 *addr, u32 cnt);
+bool cqm_check_align(u32 data);
+s32 cqm_shift(u32 data);
+s32 cqm_buf_alloc(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct);
+s32 cqm_buf_alloc_direct(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct);
+void cqm_buf_free(struct cqm_buf *buf, struct pci_dev *dev);
+void cqm_buf_free_cache_inv(struct cqm_handle *cqm_handle, struct cqm_buf *buf,
+ s32 *inv_flag);
+s32 cqm_cla_cache_invalid(struct cqm_handle *cqm_handle, dma_addr_t gpa,
+ u32 cache_size);
+void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order);
+void cqm_kfree_align(void *addr);
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_main.c b/drivers/scsi/spfc/hw/spfc_cqm_main.c
new file mode 100644
index 000000000000..eecf385ec0f3
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_main.c
@@ -0,0 +1,1257 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/vmalloc.h>
+
+#include "sphw_crm.h"
+#include "sphw_hw.h"
+#include "sphw_hw_cfg.h"
+
+#include "spfc_cqm_main.h"
+
+static unsigned char cqm_lb_mode = CQM_LB_MODE_NORMAL;
+module_param(cqm_lb_mode, byte, 0644);
+MODULE_PARM_DESC(cqm_lb_mode, "for cqm lb mode (default=0xff)");
+
+static unsigned char cqm_fake_mode = CQM_FAKE_MODE_DISABLE;
+module_param(cqm_fake_mode, byte, 0644);
+MODULE_PARM_DESC(cqm_fake_mode, "for cqm fake mode (default=0 disable)");
+
+static unsigned char cqm_platform_mode = CQM_FPGA_MODE;
+module_param(cqm_platform_mode, byte, 0644);
+MODULE_PARM_DESC(cqm_platform_mode, "for cqm platform mode (default=0 FPGA)");
+
+s32 cqm3_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ s32 ret;
+
+ CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
+
+ cqm_handle = kmalloc(sizeof(*cqm_handle), GFP_KERNEL | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(cqm_handle, CQM_FAIL, CQM_ALLOC_FAIL(cqm_handle));
+
+	/* Clear the memory explicitly, in case the allocation
+	 * was not already zeroed.
+	 */
+ memset(cqm_handle, 0, sizeof(struct cqm_handle));
+
+ cqm_handle->ex_handle = handle;
+ cqm_handle->dev = (struct pci_dev *)(handle->pcidev_hdl);
+ handle->cqm_hdl = (void *)cqm_handle;
+
+ /* Clearing Statistics */
+ memset(&handle->hw_stats.cqm_stats, 0, sizeof(struct cqm_stats));
+
+ /* Reads VF/PF information. */
+ cqm_handle->func_attribute = handle->hwif->attr;
+ cqm_info(handle->dev_hdl, "Func init: function[%u] type %d(0:PF,1:VF,2:PPF)\n",
+ cqm_handle->func_attribute.func_global_idx,
+ cqm_handle->func_attribute.func_type);
+
+ /* Read capability from configuration management module */
+ ret = cqm_capability_init(ex_handle);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_capability_init));
+ goto err1;
+ }
+
+	/* For a conflicting fake child function, only the function's timer
+	 * bitmap is enabled and no other resources are initialized;
+	 * initializing them would overwrite the fake function's configuration.
+	 */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD_CONFLICT) {
+ if (sphw_func_tmr_bitmap_set(ex_handle, true) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, "Timer start: enable timer bitmap failed\n");
+
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+ return CQM_SUCCESS;
+ }
+
+ /* Initialize memory entries such as BAT, CLA, and bitmap. */
+ if (cqm_mem_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_mem_init));
+ goto err1;
+ }
+
+ /* Event callback initialization */
+ if (cqm_event_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_event_init));
+ goto err2;
+ }
+
+ /* Doorbell initiation */
+ if (cqm_db_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_db_init));
+ goto err3;
+ }
+
+ /* Initialize the bloom filter. */
+ if (cqm_bloomfilter_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bloomfilter_init));
+ goto err4;
+ }
+
+ /* The timer bitmap is set directly at the beginning of the CQM.
+ * The ifconfig up/down command is not used to set or clear the bitmap.
+ */
+ if (sphw_func_tmr_bitmap_set(ex_handle, true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Timer start: enable timer bitmap failed\n");
+ goto err5;
+ }
+
+ return CQM_SUCCESS;
+
+err5:
+ cqm_bloomfilter_uninit(ex_handle);
+err4:
+ cqm_db_uninit(ex_handle);
+err3:
+ cqm_event_uninit(ex_handle);
+err2:
+ cqm_mem_uninit(ex_handle);
+err1:
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+ return CQM_FAIL;
+}
+
+void cqm3_uninit(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ s32 ret;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_NO_RET(cqm_handle, CQM_PTR_NULL(cqm_handle));
+
+ /* The timer bitmap is set directly at the beginning of the CQM.
+ * The ifconfig up/down command is not used to set or clear the bitmap.
+ */
+ cqm_info(handle->dev_hdl, "Timer stop: disable timer\n");
+ if (sphw_func_tmr_bitmap_set(ex_handle, false) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, "Timer stop: disable timer bitmap failed\n");
+
+ /* After the TMR timer stops, the system releases resources
+ * after a delay of one or two milliseconds.
+ */
+ if (cqm_handle->func_attribute.func_type == CQM_PPF &&
+ cqm_handle->func_capability.timer_enable == CQM_TIMER_ENABLE) {
+ cqm_info(handle->dev_hdl, "Timer stop: spfc ppf timer stop\n");
+ ret = sphw_ppf_tmr_stop(handle);
+ if (ret != CQM_SUCCESS)
+ /* The timer fails to be stopped,
+ * and the resource release is not affected.
+ */
+ cqm_info(handle->dev_hdl, "Timer stop: spfc ppf timer stop, ret=%d\n",
+ ret);
+		/* A delay of about 1 ms is required here; the sleep is not exact. */
+ usleep_range(900, 1000);
+ }
+
+ /* Release Bloom Filter Table */
+ cqm_bloomfilter_uninit(ex_handle);
+
+ /* Release hardware doorbell */
+ cqm_db_uninit(ex_handle);
+
+	/* Unregister the event callback */
+ cqm_event_uninit(ex_handle);
+
+ /* Release various memory tables and require the service
+ * to release all objects.
+ */
+ cqm_mem_uninit(ex_handle);
+
+ /* Release cqm_handle */
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+}
+
+void cqm_test_mode_init(struct cqm_handle *cqm_handle,
+ struct service_cap *service_capability)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+
+ if (service_capability->test_mode == 0)
+ return;
+
+ cqm_info(handle->dev_hdl, "Enter CQM test mode\n");
+
+ func_cap->qpc_number = service_capability->test_qpc_num;
+ func_cap->qpc_reserved =
+ GET_MAX(func_cap->qpc_reserved,
+ service_capability->test_qpc_resvd_num);
+ func_cap->xid_alloc_mode = service_capability->test_xid_alloc_mode;
+ func_cap->gpa_check_enable = service_capability->test_gpa_check_enable;
+ func_cap->pagesize_reorder = service_capability->test_page_size_reorder;
+ func_cap->qpc_alloc_static =
+ (bool)(service_capability->test_qpc_alloc_mode);
+ func_cap->scqc_alloc_static =
+ (bool)(service_capability->test_scqc_alloc_mode);
+ func_cap->flow_table_based_conn_number =
+ service_capability->test_max_conn_num;
+ func_cap->flow_table_based_conn_cache_number =
+ service_capability->test_max_cache_conn_num;
+ func_cap->scqc_number = service_capability->test_scqc_num;
+ func_cap->mpt_number = service_capability->test_mpt_num;
+ func_cap->mpt_reserved = service_capability->test_mpt_recvd_num;
+ func_cap->reorder_number = service_capability->test_reorder_num;
+ /* 256K buckets, 256K*64B = 16MB */
+ func_cap->hash_number = service_capability->test_hash_num;
+}
+
+void cqm_service_capability_update(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+
+ func_cap->qpc_number = GET_MIN(CQM_MAX_QPC_NUM, func_cap->qpc_number);
+ func_cap->scqc_number = GET_MIN(CQM_MAX_SCQC_NUM, func_cap->scqc_number);
+ func_cap->srqc_number = GET_MIN(CQM_MAX_SRQC_NUM, func_cap->srqc_number);
+ func_cap->childc_number = GET_MIN(CQM_MAX_CHILDC_NUM, func_cap->childc_number);
+}
+
+void cqm_service_valid_init(struct cqm_handle *cqm_handle,
+ struct service_cap *service_capability)
+{
+ enum cfg_svc_type_en type = service_capability->chip_svc_type;
+ struct cqm_service *svc = cqm_handle->service;
+
+ svc[CQM_SERVICE_T_FC].valid = ((u32)type & CFG_SVC_FC_BIT5) ? true : false;
+}
+
+void cqm_service_capability_init_fc(struct cqm_handle *cqm_handle, void *pra)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct fc_service_cap *fc_cap = &service_capability->fc_cap;
+ struct dev_fc_svc_cap *dev_fc_cap = &fc_cap->dev_fc_cap;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: fc is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: fc qpc 0x%x, scqc 0x%x, srqc 0x%x\n",
+ dev_fc_cap->max_parent_qpc_num, dev_fc_cap->scq_num,
+ dev_fc_cap->srq_num);
+ func_cap->hash_number += dev_fc_cap->max_parent_qpc_num;
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ func_cap->qpc_number += dev_fc_cap->max_parent_qpc_num;
+ func_cap->qpc_basic_size = GET_MAX(fc_cap->parent_qpc_size,
+ func_cap->qpc_basic_size);
+ func_cap->qpc_alloc_static = true;
+ func_cap->scqc_number += dev_fc_cap->scq_num;
+ func_cap->scqc_basic_size = GET_MAX(fc_cap->scqc_size,
+ func_cap->scqc_basic_size);
+ func_cap->srqc_number += dev_fc_cap->srq_num;
+ func_cap->srqc_basic_size = GET_MAX(fc_cap->srqc_size,
+ func_cap->srqc_basic_size);
+ func_cap->lun_number = CQM_LUN_FC_NUM;
+ func_cap->lun_basic_size = CQM_LUN_SIZE_8;
+ func_cap->taskmap_number = CQM_TASKMAP_FC_NUM;
+ func_cap->taskmap_basic_size = PAGE_SIZE;
+ func_cap->childc_number += dev_fc_cap->max_child_qpc_num;
+ func_cap->childc_basic_size = GET_MAX(fc_cap->child_qpc_size,
+ func_cap->childc_basic_size);
+ func_cap->pagesize_reorder = CQM_FC_PAGESIZE_ORDER;
+}
+
+void cqm_service_capability_init(struct cqm_handle *cqm_handle,
+ struct service_cap *service_capability)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ cqm_handle->service[i].valid = false;
+ cqm_handle->service[i].has_register = false;
+ cqm_handle->service[i].buf_order = 0;
+ }
+
+ cqm_service_valid_init(cqm_handle, service_capability);
+
+ cqm_info(handle->dev_hdl, "Cap init: service type %d\n",
+ service_capability->chip_svc_type);
+
+ if (cqm_handle->service[CQM_SERVICE_T_FC].valid)
+ cqm_service_capability_init_fc(cqm_handle, (void *)service_capability);
+}
+
+s32 cqm_get_fake_func_type(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ u32 parent_func, child_func_start, child_func_number, i;
+ u32 idx = cqm_handle->func_attribute.func_global_idx;
+
+ /* Currently, only one set of fake configurations is implemented.
+ * fake_cfg_number = 1
+ */
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ parent_func = func_cap->fake_cfg[i].parent_func;
+ child_func_start = func_cap->fake_cfg[i].child_func_start;
+ child_func_number = func_cap->fake_cfg[i].child_func_number;
+
+ if (idx == parent_func) {
+ return CQM_FAKE_FUNC_PARENT;
+ } else if ((idx >= child_func_start) &&
+ (idx < (child_func_start + child_func_number))) {
+ return CQM_FAKE_FUNC_CHILD_CONFLICT;
+ }
+ }
+
+ return CQM_FAKE_FUNC_NORMAL;
+}
+
+s32 cqm_get_child_func_start(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct sphw_func_attr *func_attr = &cqm_handle->func_attribute;
+ u32 i;
+
+ /* Currently, only one set of fake configurations is implemented.
+ * fake_cfg_number = 1
+ */
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ if (func_attr->func_global_idx ==
+ func_cap->fake_cfg[i].parent_func)
+ return (s32)(func_cap->fake_cfg[i].child_func_start);
+ }
+
+ return CQM_FAIL;
+}
+
+s32 cqm_get_child_func_number(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct sphw_func_attr *func_attr = &cqm_handle->func_attribute;
+ u32 i;
+
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ if (func_attr->func_global_idx ==
+ func_cap->fake_cfg[i].parent_func)
+ return (s32)(func_cap->fake_cfg[i].child_func_number);
+ }
+
+ return CQM_FAIL;
+}
+
+/* Set func_type in fake_cqm_handle to ppf, pf, or vf. */
+void cqm_set_func_type(struct cqm_handle *cqm_handle)
+{
+ u32 idx = cqm_handle->func_attribute.func_global_idx;
+
+ if (idx == 0)
+ cqm_handle->func_attribute.func_type = CQM_PPF;
+ else if (idx < CQM_MAX_PF_NUM)
+ cqm_handle->func_attribute.func_type = CQM_PF;
+ else
+ cqm_handle->func_attribute.func_type = CQM_VF;
+}
+
+void cqm_lb_fake_mode_init(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct cqm_fake_cfg *cfg = func_cap->fake_cfg;
+
+ func_cap->lb_mode = cqm_lb_mode;
+ func_cap->fake_mode = cqm_fake_mode;
+
+ /* Initializing the LB Mode */
+ if (func_cap->lb_mode == CQM_LB_MODE_NORMAL) {
+ func_cap->smf_pg = 0;
+ } else {
+		/* The LB mode is cut down on the FPGA:
+		 * only SMF0 and SMF2 are instantiated.
+		 */
+ if (cqm_platform_mode == CQM_FPGA_MODE)
+ func_cap->smf_pg = 0x5;
+ else
+ func_cap->smf_pg = 0xF;
+ }
+
+ /* Initializing the FAKE Mode */
+ if (func_cap->fake_mode == CQM_FAKE_MODE_DISABLE) {
+ func_cap->fake_cfg_number = 0;
+ func_cap->fake_func_type = CQM_FAKE_FUNC_NORMAL;
+ } else {
+ func_cap->fake_cfg_number = 1;
+
+		/* When configuring fake mode, make sure the parent function is
+		 * not included in the child function range; otherwise it would
+		 * be initialized twice.
+		 */
+ cfg[0].child_func_start = CQM_FAKE_CFUNC_START;
+ func_cap->fake_func_type = cqm_get_fake_func_type(cqm_handle);
+ }
+}
+
+s32 cqm_capability_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+ struct sphw_func_attr *func_attr = &cqm_handle->func_attribute;
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ u32 total_function_num = 0;
+ int err = 0;
+
+ /* Initializes the PPF capabilities: include timer, pf, vf. */
+ if (func_attr->func_type == CQM_PPF) {
+ total_function_num = service_capability->host_total_function;
+ func_cap->timer_enable = service_capability->timer_en;
+ func_cap->pf_num = service_capability->pf_num;
+ func_cap->pf_id_start = service_capability->pf_id_start;
+ func_cap->vf_num = service_capability->vf_num;
+ func_cap->vf_id_start = service_capability->vf_id_start;
+
+ cqm_info(handle->dev_hdl, "Cap init: total function num 0x%x\n",
+ total_function_num);
+ cqm_info(handle->dev_hdl, "Cap init: pf_num 0x%x, pf_id_start 0x%x, vf_num 0x%x, vf_id_start 0x%x\n",
+ func_cap->pf_num, func_cap->pf_id_start,
+ func_cap->vf_num, func_cap->vf_id_start);
+ cqm_info(handle->dev_hdl, "Cap init: timer_enable %u (1: enable; 0: disable)\n",
+ func_cap->timer_enable);
+ }
+
+ func_cap->flow_table_based_conn_number = service_capability->max_connect_num;
+ func_cap->flow_table_based_conn_cache_number = service_capability->max_stick2cache_num;
+ cqm_info(handle->dev_hdl, "Cap init: cfg max_conn_num 0x%x, max_cache_conn_num 0x%x\n",
+ func_cap->flow_table_based_conn_number,
+ func_cap->flow_table_based_conn_cache_number);
+
+ func_cap->bloomfilter_enable = service_capability->bloomfilter_en;
+ cqm_info(handle->dev_hdl, "Cap init: bloomfilter_enable %u (1: enable; 0: disable)\n",
+ func_cap->bloomfilter_enable);
+
+ if (func_cap->bloomfilter_enable) {
+ func_cap->bloomfilter_length = service_capability->bfilter_len;
+ func_cap->bloomfilter_addr =
+ service_capability->bfilter_start_addr;
+ if (func_cap->bloomfilter_length != 0 &&
+ !cqm_check_align(func_cap->bloomfilter_length)) {
+			cqm_err(handle->dev_hdl, "Cap init: bloomfilter_length %u is not a power of 2\n",
+ func_cap->bloomfilter_length);
+
+ err = CQM_FAIL;
+ goto out;
+ }
+ }
+
+ cqm_info(handle->dev_hdl, "Cap init: bloomfilter_length 0x%x, bloomfilter_addr 0x%x\n",
+ func_cap->bloomfilter_length, func_cap->bloomfilter_addr);
+
+ func_cap->qpc_reserved = 0;
+ func_cap->mpt_reserved = 0;
+ func_cap->scq_reserved = 0;
+ func_cap->srq_reserved = 0;
+ func_cap->qpc_alloc_static = false;
+ func_cap->scqc_alloc_static = false;
+
+ func_cap->l3i_number = CQM_L3I_COMM_NUM;
+ func_cap->l3i_basic_size = CQM_L3I_SIZE_8;
+
+ func_cap->timer_number = CQM_TIMER_ALIGN_SCALE_NUM * total_function_num;
+ func_cap->timer_basic_size = CQM_TIMER_SIZE_32;
+
+ func_cap->gpa_check_enable = true;
+
+ cqm_lb_fake_mode_init(cqm_handle);
+ cqm_info(handle->dev_hdl, "Cap init: lb_mode=%u\n", func_cap->lb_mode);
+ cqm_info(handle->dev_hdl, "Cap init: smf_pg=%u\n", func_cap->smf_pg);
+ cqm_info(handle->dev_hdl, "Cap init: fake_mode=%u\n", func_cap->fake_mode);
+ cqm_info(handle->dev_hdl, "Cap init: fake_func_type=%u\n", func_cap->fake_func_type);
+ cqm_info(handle->dev_hdl, "Cap init: fake_cfg_number=%u\n", func_cap->fake_cfg_number);
+
+ cqm_service_capability_init(cqm_handle, service_capability);
+
+ cqm_test_mode_init(cqm_handle, service_capability);
+
+ cqm_service_capability_update(cqm_handle);
+
+ func_cap->ft_enable = service_capability->sf_svc_attr.ft_en;
+ func_cap->rdma_enable = service_capability->sf_svc_attr.rdma_en;
+
+ cqm_info(handle->dev_hdl, "Cap init: pagesize_reorder %u\n", func_cap->pagesize_reorder);
+ cqm_info(handle->dev_hdl, "Cap init: xid_alloc_mode %d, gpa_check_enable %d\n",
+ func_cap->xid_alloc_mode, func_cap->gpa_check_enable);
+ cqm_info(handle->dev_hdl, "Cap init: qpc_alloc_mode %d, scqc_alloc_mode %d\n",
+ func_cap->qpc_alloc_static, func_cap->scqc_alloc_static);
+ cqm_info(handle->dev_hdl, "Cap init: hash_number 0x%x\n", func_cap->hash_number);
+ cqm_info(handle->dev_hdl, "Cap init: qpc_number 0x%x, qpc_reserved 0x%x, qpc_basic_size 0x%x\n",
+ func_cap->qpc_number, func_cap->qpc_reserved, func_cap->qpc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: scqc_number 0x%x scqc_reserved 0x%x, scqc_basic_size 0x%x\n",
+ func_cap->scqc_number, func_cap->scq_reserved, func_cap->scqc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: srqc_number 0x%x, srqc_basic_size 0x%x\n",
+ func_cap->srqc_number, func_cap->srqc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: mpt_number 0x%x, mpt_reserved 0x%x\n",
+ func_cap->mpt_number, func_cap->mpt_reserved);
+ cqm_info(handle->dev_hdl, "Cap init: gid_number 0x%x, lun_number 0x%x\n",
+ func_cap->gid_number, func_cap->lun_number);
+ cqm_info(handle->dev_hdl, "Cap init: taskmap_number 0x%x, l3i_number 0x%x\n",
+ func_cap->taskmap_number, func_cap->l3i_number);
+ cqm_info(handle->dev_hdl, "Cap init: timer_number 0x%x, childc_number 0x%x\n",
+ func_cap->timer_number, func_cap->childc_number);
+ cqm_info(handle->dev_hdl, "Cap init: childc_basic_size 0x%x\n",
+ func_cap->childc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: xid2cid_number 0x%x, reorder_number 0x%x\n",
+ func_cap->xid2cid_number, func_cap->reorder_number);
+ cqm_info(handle->dev_hdl, "Cap init: ft_enable %d, rdma_enable %d\n",
+ func_cap->ft_enable, func_cap->rdma_enable);
+
+ return CQM_SUCCESS;
+
+out:
+ if (func_attr->func_type == CQM_PPF)
+ func_cap->timer_enable = 0;
+
+ return err;
+}
+
+void cqm_fake_uninit(struct cqm_handle *cqm_handle)
+{
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type !=
+ CQM_FAKE_FUNC_PARENT)
+ return;
+
+ for (i = 0; i < CQM_FAKE_FUNC_MAX; i++) {
+ kfree(cqm_handle->fake_cqm_handle[i]);
+ cqm_handle->fake_cqm_handle[i] = NULL;
+ }
+}
+
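+/* In fake-parent mode, clone one child cqm_handle from the parent for each
+ * child function: each clone gets its own global function index, is typed
+ * via cqm_set_func_type(), and is marked CQM_FAKE_FUNC_CHILD.
+ */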
+s32 cqm_fake_init(struct cqm_handle *cqm_handle)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_func_capability *func_cap = NULL;
+ struct cqm_handle *fake_cqm_handle = NULL;
+ struct sphw_func_attr *func_attr = NULL;
+ s32 child_func_start, child_func_number;
+ u32 i;
+
+ func_cap = &cqm_handle->func_capability;
+ if (func_cap->fake_func_type != CQM_FAKE_FUNC_PARENT)
+ return CQM_SUCCESS;
+
+ child_func_start = cqm_get_child_func_start(cqm_handle);
+ if (child_func_start == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_start));
+ return CQM_FAIL;
+ }
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = kmalloc(sizeof(*fake_cqm_handle), GFP_KERNEL | __GFP_ZERO);
+ if (!fake_cqm_handle) {
+ cqm_err(handle->dev_hdl,
+ CQM_ALLOC_FAIL(fake_cqm_handle));
+ goto err;
+ }
+
+		/* Copy the attributes of the parent CQM handle to the child
+		 * CQM handle and adjust the per-function values.
+		 */
+ memcpy(fake_cqm_handle, cqm_handle, sizeof(struct cqm_handle));
+ func_attr = &fake_cqm_handle->func_attribute;
+ func_cap = &fake_cqm_handle->func_capability;
+ func_attr->func_global_idx = (u16)(child_func_start + i);
+ cqm_set_func_type(fake_cqm_handle);
+ func_cap->fake_func_type = CQM_FAKE_FUNC_CHILD;
+ cqm_info(handle->dev_hdl, "Fake func init: function[%u] type %d(0:PF,1:VF,2:PPF)\n",
+ func_attr->func_global_idx, func_attr->func_type);
+
+ fake_cqm_handle->parent_cqm_handle = cqm_handle;
+ cqm_handle->fake_cqm_handle[i] = fake_cqm_handle;
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_fake_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+void cqm_fake_mem_uninit(struct cqm_handle *cqm_handle)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type != CQM_FAKE_FUNC_PARENT)
+ return;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+ cqm_object_table_uninit(fake_cqm_handle);
+ cqm_bitmap_uninit(fake_cqm_handle);
+ cqm_cla_uninit(fake_cqm_handle, CQM_BAT_ENTRY_MAX);
+ cqm_bat_uninit(fake_cqm_handle);
+ }
+}
+
+s32 cqm_fake_mem_init(struct cqm_handle *cqm_handle)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type !=
+ CQM_FAKE_FUNC_PARENT)
+ return CQM_SUCCESS;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+
+ if (cqm_bat_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bat_init));
+ goto err;
+ }
+
+ if (cqm_cla_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_init));
+ goto err;
+ }
+
+ if (cqm_bitmap_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_init));
+ goto err;
+ }
+
+ if (cqm_object_table_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_init));
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_fake_mem_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+s32 cqm_mem_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+
+ if (cqm_fake_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fake_init));
+ return CQM_FAIL;
+ }
+
+ if (cqm_fake_mem_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fake_mem_init));
+ goto err1;
+ }
+
+ if (cqm_bat_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_init));
+ goto err2;
+ }
+
+ if (cqm_cla_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init));
+ goto err3;
+ }
+
+ if (cqm_bitmap_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_init));
+ goto err4;
+ }
+
+ if (cqm_object_table_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_init));
+ goto err5;
+ }
+
+ return CQM_SUCCESS;
+
+err5:
+ cqm_bitmap_uninit(cqm_handle);
+err4:
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+err3:
+ cqm_bat_uninit(cqm_handle);
+err2:
+ cqm_fake_mem_uninit(cqm_handle);
+err1:
+ cqm_fake_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+void cqm_mem_uninit(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+
+ cqm_object_table_uninit(cqm_handle);
+ cqm_bitmap_uninit(cqm_handle);
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+ cqm_bat_uninit(cqm_handle);
+ cqm_fake_mem_uninit(cqm_handle);
+ cqm_fake_uninit(cqm_handle);
+}
+
+s32 cqm_event_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ if (sphw_aeq_register_swe_cb(ex_handle, SPHW_STATEFULL_EVENT,
+ cqm_aeq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register aeq callback\n");
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+void cqm_event_uninit(void *ex_handle)
+{
+ sphw_aeq_unregister_swe_cb(ex_handle, SPHW_STATEFULL_EVENT);
+}
+
+u32 cqm_aeq_event2type(u8 event)
+{
+ u32 service_type;
+
+ /* Distributes events to different service modules
+ * based on the event type.
+ */
+ if (event >= CQM_AEQ_BASE_T_FC && event < CQM_AEQ_MAX_T_FC)
+ service_type = CQM_SERVICE_T_FC;
+ else
+ service_type = CQM_SERVICE_T_MAX;
+
+ return service_type;
+}
+
+u8 cqm_aeq_callback(void *ex_handle, u8 event, u8 *data)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct service_register_template *service_template = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ u8 event_level = FAULT_LEVEL_MAX;
+ u32 service_type;
+
+ CQM_PTR_CHECK_RET(ex_handle, event_level,
+ CQM_PTR_NULL(aeq_callback_ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_aeq_callback_cnt[event]);
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, event_level,
+ CQM_PTR_NULL(aeq_callback_cqm_handle));
+
+ /* Distributes events to different service modules
+ * based on the event type.
+ */
+ service_type = cqm_aeq_event2type(event);
+ if (service_type == CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(event));
+ return event_level;
+ }
+
+ service = &cqm_handle->service[service_type];
+ service_template = &service->service_template;
+
+ if (!service_template->aeq_level_callback)
+ cqm_err(handle->dev_hdl, "Event: service_type %u aeq_level_callback unregistered\n",
+ service_type);
+ else
+ event_level = service_template->aeq_level_callback(service_template->service_handle,
+ event, data);
+
+ if (!service_template->aeq_callback)
+ cqm_err(handle->dev_hdl, "Event: service_type %u aeq_callback unregistered\n",
+ service_type);
+ else
+ service_template->aeq_callback(service_template->service_handle,
+ event, data);
+
+ return event_level;
+}
+
+s32 cqm3_service_register(void *ex_handle, struct service_register_template *service_template)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+
+ CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, CQM_FAIL, CQM_PTR_NULL(cqm_handle));
+ CQM_PTR_CHECK_RET(service_template, CQM_FAIL,
+ CQM_PTR_NULL(service_template));
+
+ if (service_template->service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl,
+ CQM_WRONG_VALUE(service_template->service_type));
+ return CQM_FAIL;
+ }
+ service = &cqm_handle->service[service_template->service_type];
+ if (!service->valid) {
+ cqm_err(handle->dev_hdl, "Service register: service_type %u is invalid\n",
+ service_template->service_type);
+ return CQM_FAIL;
+ }
+
+ if (service->has_register) {
+ cqm_err(handle->dev_hdl, "Service register: service_type %u has registered\n",
+ service_template->service_type);
+ return CQM_FAIL;
+ }
+
+ service->has_register = true;
+ (void)memcpy((void *)(&service->service_template),
+ (void *)service_template,
+ sizeof(struct service_register_template));
+
+ return CQM_SUCCESS;
+}
+
+void cqm3_service_unregister(void *ex_handle, u32 service_type)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_NO_RET(cqm_handle, CQM_PTR_NULL(cqm_handle));
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return;
+ }
+
+ service = &cqm_handle->service[service_type];
+ if (!service->valid)
+ cqm_err(handle->dev_hdl, "Service unregister: service_type %u is disable\n",
+ service_type);
+
+ service->has_register = false;
+ memset(&service->service_template, 0, sizeof(struct service_register_template));
+}
+
+struct cqm_cmd_buf *cqm3_cmd_alloc(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_alloc_cnt);
+
+ return (struct cqm_cmd_buf *)sphw_alloc_cmd_buf(ex_handle);
+}
+
+void cqm3_cmd_free(void *ex_handle, struct cqm_cmd_buf *cmd_buf)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
+ CQM_PTR_CHECK_NO_RET(cmd_buf, CQM_PTR_NULL(cmd_buf));
+ CQM_PTR_CHECK_NO_RET(cmd_buf->buf, CQM_PTR_NULL(buf));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_free_cnt);
+
+ sphw_free_cmd_buf(ex_handle, (struct sphw_cmd_buf *)cmd_buf);
+}
+
+s32 cqm3_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, struct cqm_cmd_buf *buf_in,
+ struct cqm_cmd_buf *buf_out, u64 *out_param, u32 timeout,
+ u16 channel)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_PTR_NULL(buf_in));
+ CQM_PTR_CHECK_RET(buf_in->buf, CQM_FAIL, CQM_PTR_NULL(buf));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_send_cmd_box_cnt);
+
+ return sphw_cmdq_detail_resp(ex_handle, mod, cmd,
+ (struct sphw_cmd_buf *)buf_in,
+ (struct sphw_cmd_buf *)buf_out,
+ out_param, timeout, channel);
+}
+
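+/* Carve out the FC doorbell base: the physical doorbell base is viewed in
+ * doorbell-page units (db_base_phy >> SPFC_DB_ADDR_RSVD); if that page index
+ * is not a multiple of SPFC_DB_MASK (128), 'idx' pages are skipped so the
+ * returned db_base starts on the next 128-page boundary.
+ */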
+int cqm_alloc_fc_db_addr(void *hwdev, void __iomem **db_base,
+ void __iomem **dwqe_base)
+{
+ struct sphw_hwif *hwif = NULL;
+ u32 idx = 0;
+#define SPFC_DB_ADDR_RSVD 12
+#define SPFC_DB_MASK 128
+ u64 db_base_phy_fc;
+
+ if (!hwdev || !db_base)
+ return -EINVAL;
+
+ hwif = ((struct sphw_hwdev *)hwdev)->hwif;
+
+ db_base_phy_fc = hwif->db_base_phy >> SPFC_DB_ADDR_RSVD;
+
+ if (db_base_phy_fc & (SPFC_DB_MASK - 1))
+		idx = SPFC_DB_MASK - (db_base_phy_fc & (SPFC_DB_MASK - 1));
+
+ *db_base = hwif->db_base + idx * SPHW_DB_PAGE_SIZE;
+
+ if (!dwqe_base)
+ return 0;
+
+ *dwqe_base = (u8 *)*db_base + SPHW_DWQE_OFFSET;
+
+ return 0;
+}
+
+s32 cqm3_db_addr_alloc(void *ex_handle, void __iomem **db_addr,
+ void __iomem **dwqe_addr)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
+ CQM_PTR_CHECK_RET(db_addr, CQM_FAIL, CQM_PTR_NULL(db_addr));
+ CQM_PTR_CHECK_RET(dwqe_addr, CQM_FAIL, CQM_PTR_NULL(dwqe_addr));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_alloc_cnt);
+
+ return cqm_alloc_fc_db_addr(ex_handle, db_addr, dwqe_addr);
+}
+
+s32 cqm_db_phy_addr_alloc(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr)
+{
+ return sphw_alloc_db_phy_addr(ex_handle, db_paddr, dwqe_addr);
+}
+
+void cqm3_db_addr_free(void *ex_handle, const void __iomem *db_addr,
+ void __iomem *dwqe_addr)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_free_cnt);
+
+ sphw_free_db_addr(ex_handle, db_addr, dwqe_addr);
+}
+
+void cqm_db_phy_addr_free(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr)
+{
+ sphw_free_db_phy_addr(ex_handle, *db_paddr, *dwqe_addr);
+}
+
+s32 cqm_db_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ s32 i;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+
+ /* Allocate hardware doorbells to services. */
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ service = &cqm_handle->service[i];
+ if (!service->valid)
+ continue;
+
+ if (cqm3_db_addr_alloc(ex_handle, &service->hardware_db_vaddr,
+ &service->dwqe_vaddr) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_db_addr_alloc));
+ break;
+ }
+
+ if (cqm_db_phy_addr_alloc(handle, &service->hardware_db_paddr,
+ &service->dwqe_paddr) != CQM_SUCCESS) {
+ cqm3_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_db_phy_addr_alloc));
+ break;
+ }
+ }
+
+ if (i != CQM_SERVICE_T_MAX) {
+ i--;
+ for (; i >= 0; i--) {
+ service = &cqm_handle->service[i];
+ if (!service->valid)
+ continue;
+
+ cqm3_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ cqm_db_phy_addr_free(ex_handle,
+ &service->hardware_db_paddr,
+ &service->dwqe_paddr);
+ }
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+void cqm_db_uninit(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ s32 i;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+
+ /* Release hardware doorbell. */
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ service = &cqm_handle->service[i];
+ if (service->valid)
+ cqm3_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ }
+}
+
+s32 cqm3_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count,
+ u8 pagenum, u64 db)
+{
+#define SPFC_DB_FAKE_VF_OFFSET 32
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ struct sphw_hwdev *handle = NULL;
+ void *dbaddr = NULL;
+
+ handle = (struct sphw_hwdev *)ex_handle;
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
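+	/* Order the earlier WQE memory writes before the doorbell write. */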
+ wmb();
+ dbaddr = (u8 *)service->hardware_db_vaddr +
+ ((pagenum + SPFC_DB_FAKE_VF_OFFSET) * SPHW_DB_PAGE_SIZE);
+ *((u64 *)dbaddr + db_count) = db;
+ return CQM_SUCCESS;
+}
+
+s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type,
+ void *direct_wqe)
+{
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ struct sphw_hwdev *handle = NULL;
+ u64 *tmp = (u64 *)direct_wqe;
+ int i;
+
+ handle = (struct sphw_hwdev *)ex_handle;
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
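+	/* Copy the 256B direct WQE into the dwqe area: the first 32B is
+	 * written with its two 16B halves swapped (qwords 2,3 then 0,1),
+	 * and the remaining 224B is copied in order.
+	 */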
+ *((u64 *)service->dwqe_vaddr + 0) = tmp[2];
+ *((u64 *)service->dwqe_vaddr + 1) = tmp[3];
+ *((u64 *)service->dwqe_vaddr + 2) = tmp[0];
+ *((u64 *)service->dwqe_vaddr + 3) = tmp[1];
+ tmp += 4;
+
+	/* FC uses 256B WQEs. The direct wqe is written at block 0,
+	 * and the length is 256B.
+	 */
+ for (i = 4; i < 32; i++)
+ *((u64 *)service->dwqe_vaddr + i) = *tmp++;
+
+ return CQM_SUCCESS;
+}
+
+static s32 bloomfilter_init_cmd(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ struct cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct cqm_bloomfilter_init_cmd *cmd = NULL;
+ struct cqm_cmd_buf *buf_in = NULL;
+ s32 ret;
+
+ buf_in = cqm3_cmd_alloc((void *)(cqm_handle->ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+
+ /* Fill the command format and convert it to big-endian. */
+ buf_in->size = sizeof(struct cqm_bloomfilter_init_cmd);
+ cmd = (struct cqm_bloomfilter_init_cmd *)(buf_in->buf);
+ cmd->bloom_filter_addr = capability->bloomfilter_addr;
+ cmd->bloom_filter_len = capability->bloomfilter_length;
+
+ cqm_swab32((u8 *)cmd, (sizeof(struct cqm_bloomfilter_init_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle),
+ CQM_MOD_CQM, CQM_CMD_T_BLOOMFILTER_INIT, buf_in,
+ NULL, NULL, CQM_CMD_TIMEOUT,
+ SPHW_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Bloomfilter: %s ret=%d\n", __func__,
+ ret);
+ cqm_err(handle->dev_hdl, "Bloomfilter: %s: 0x%x 0x%x\n",
+ __func__, cmd->bloom_filter_addr,
+ cmd->bloom_filter_len);
+ cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_FAIL;
+ }
+ cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_SUCCESS;
+}
+
+s32 cqm_bloomfilter_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_bloomfilter_table *bloomfilter_table = NULL;
+ struct cqm_func_capability *capability = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+ u32 array_size;
+ s32 ret;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ bloomfilter_table = &cqm_handle->bloomfilter_table;
+ capability = &cqm_handle->func_capability;
+
+ if (capability->bloomfilter_length == 0) {
+ cqm_info(handle->dev_hdl,
+ "Bloomfilter: bf_length=0, don't need to init bloomfilter\n");
+ return CQM_SUCCESS;
+ }
+
+	/* The unit of bloomfilter_length is 64B (512 bits). Each bit is a table
+	 * node, so the value must be shifted left by 9 bits.
+	 */
+ bloomfilter_table->table_size = capability->bloomfilter_length <<
+ CQM_BF_LENGTH_UNIT;
+	/* The unit of bloomfilter_length is 64B. The unit of an array entry is 32B.
+	 */
+ array_size = capability->bloomfilter_length << 1;
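+	/* For example, bloomfilter_length = 2 (two 64B units, 1024 bits) gives
+	 * table_size = 2 << 9 = 1024 table nodes and array_size = 2 << 1 = 4
+	 * entries, so array_mask = 0x3.
+	 */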
+ if (array_size == 0 || array_size > CQM_BF_BITARRAY_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(array_size));
+ return CQM_FAIL;
+ }
+
+ bloomfilter_table->array_mask = array_size - 1;
+	/* This table is not a bitmap; it holds a counter for each corresponding bit.
+	 */
+ bloomfilter_table->table = vmalloc(bloomfilter_table->table_size * (sizeof(u32)));
+ CQM_PTR_CHECK_RET(bloomfilter_table->table, CQM_FAIL, CQM_ALLOC_FAIL(table));
+
+ memset(bloomfilter_table->table, 0,
+ (bloomfilter_table->table_size * sizeof(u32)));
+
+	/* The bloomfilter must be initialized to 0 by ucode,
+	 * because the bloomfilter is in mem mode.
+	 */
+ if (cqm_handle->func_capability.bloomfilter_enable) {
+ ret = bloomfilter_init_cmd(ex_handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Bloomfilter: bloomfilter_init_cmd ret=%d\n",
+ ret);
+ vfree(bloomfilter_table->table);
+ bloomfilter_table->table = NULL;
+ return CQM_FAIL;
+ }
+ }
+
+ mutex_init(&bloomfilter_table->lock);
+ return CQM_SUCCESS;
+}
+
+void cqm_bloomfilter_uninit(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_bloomfilter_table *bloomfilter_table = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ bloomfilter_table = &cqm_handle->bloomfilter_table;
+
+ if (bloomfilter_table->table) {
+ vfree(bloomfilter_table->table);
+ bloomfilter_table->table = NULL;
+ }
+}
+
+s32 cqm_bloomfilter_cmd(void *ex_handle, u32 op, u32 k_flag, u64 id)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_cmd_buf *buf_in = NULL;
+ struct cqm_bloomfilter_cmd *cmd = NULL;
+ s32 ret;
+
+ buf_in = cqm3_cmd_alloc(ex_handle);
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+
+ /* Fill the command format and convert it to big-endian. */
+ buf_in->size = sizeof(struct cqm_bloomfilter_cmd);
+ cmd = (struct cqm_bloomfilter_cmd *)(buf_in->buf);
+ memset((void *)cmd, 0, sizeof(struct cqm_bloomfilter_cmd));
+ cmd->k_en = k_flag;
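+	/* The 64-bit id is split into two 32-bit halves: index_h holds bits
+	 * 63:32 and index_l holds bits 31:0.
+	 */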
+ cmd->index_h = (u32)(id >> CQM_DW_OFFSET);
+ cmd->index_l = (u32)(id & CQM_DW_MASK);
+
+ cqm_swab32((u8 *)cmd, (sizeof(struct cqm_bloomfilter_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm3_send_cmd_box(ex_handle, CQM_MOD_CQM, (u8)op, buf_in, NULL,
+ NULL, CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Bloomfilter: bloomfilter_cmd ret=%d\n", ret);
+ cqm_err(handle->dev_hdl, "Bloomfilter: op=0x%x, cmd: 0x%x 0x%x 0x%x 0x%x\n",
+ op, *((u32 *)cmd), *(((u32 *)cmd) + CQM_DW_INDEX1),
+ *(((u32 *)cmd) + CQM_DW_INDEX2),
+ *(((u32 *)cmd) + CQM_DW_INDEX3));
+ cqm3_cmd_free(ex_handle, buf_in);
+ return CQM_FAIL;
+ }
+
+ cqm3_cmd_free(ex_handle, buf_in);
+
+ return CQM_SUCCESS;
+}
+
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_main.h b/drivers/scsi/spfc/hw/spfc_cqm_main.h
new file mode 100644
index 000000000000..c8fb37a631bf
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_main.h
@@ -0,0 +1,414 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef CQM_MAIN_H
+#define CQM_MAIN_H
+
+#include "sphw_hwdev.h"
+#include "sphw_hwif.h"
+#include "spfc_cqm_object.h"
+#include "spfc_cqm_bitmap_table.h"
+#include "spfc_cqm_bat_cla.h"
+
+#define GET_MAX(a, b) ((a) > (b) ? (a) : (b))
+#define GET_MIN(a, b) ((a) < (b) ? (a) : (b))
+#define CQM_DW_SHIFT 2
+#define CQM_QW_SHIFT 3
+#define CQM_BYTE_BIT_SHIFT 3
+#define CQM_NUM_BIT_BYTE 8
+
+#define CHIPIF_SUCCESS 0
+#define CHIPIF_FAIL (-1)
+
+#define CQM_TIMER_ENABLE 1
+#define CQM_TIMER_DISABLE 0
+
+/* The value must be the same as that of sphw_service_type in sphw_crm.h. */
+#define CQM_SERVICE_T_FC SERVICE_T_FC
+#define CQM_SERVICE_T_MAX SERVICE_T_MAX
+
+struct cqm_service {
+ bool valid; /* Whether to enable this service on the function. */
+ bool has_register; /* Registered or Not */
+ u64 hardware_db_paddr;
+ void __iomem *hardware_db_vaddr;
+ u64 dwqe_paddr;
+ void __iomem *dwqe_vaddr;
+ u32 buf_order; /* The size of each buf node is 2^buf_order pages. */
+ struct service_register_template service_template;
+};
+
+struct cqm_fake_cfg {
+ u32 parent_func; /* The parent func_id of the fake vfs. */
+ u32 child_func_start; /* The start func_id of the child fake vfs. */
+ u32 child_func_number; /* The number of the child fake vfs. */
+};
+
+#define CQM_MAX_FACKVF_GROUP 4
+
+struct cqm_func_capability {
+ /* BAT_PTR table(SMLC) */
+	bool ft_enable; /* BAT for flow table enable: supports the FC service */
+ bool rdma_enable; /* BAT for rdma enable: support RoCE */
+ /* VAT table(SMIR) */
+	bool ft_pf_enable; /* Same as ft_enable. BAT entry for FC on the PF */
+ bool rdma_pf_enable; /* Same as rdma_enable. BAT entry for rdma on pf */
+
+ /* Dynamic or static memory allocation during the application of
+ * specified QPC/SCQC for each service.
+ */
+ bool qpc_alloc_static;
+ bool scqc_alloc_static;
+
+ u8 timer_enable; /* Whether the timer function is enabled */
+	u8 bloomfilter_enable; /* Whether the bloomfilter function is enabled */
+	/* Maximum number of connections for FC, which cannot exceed qpc_number */
+ u32 flow_table_based_conn_number;
+ u32 flow_table_based_conn_cache_number; /* Maximum number of sticky caches */
+ u32 bloomfilter_length; /* Size of the bloomfilter table, 64-byte aligned */
+ u32 bloomfilter_addr; /* Start position of the bloomfilter table in the SMF main cache. */
+ u32 qpc_reserved; /* Reserved bit in bitmap */
+ u32 mpt_reserved; /* The ROCE/IWARP MPT also has a reserved bit. */
+
+ /* All basic_size must be 2^n-aligned. */
+	/* The number of hash buckets. The size of the BAT table is aligned to 64 buckets.
+	 * At least 64 buckets are required.
+	 */
+ u32 hash_number;
+	/* The basic size of a hash bucket is 64B, including 5 valid entries and one next entry. */
+ u32 hash_basic_size;
+ u32 qpc_number;
+ u32 qpc_basic_size;
+
+	/* Number of PFs/VFs on the current host */
+ u32 pf_num;
+ u32 pf_id_start;
+ u32 vf_num;
+ u32 vf_id_start;
+
+ u32 lb_mode;
+	/* Only the lower 4 bits are valid, indicating which SMFs are enabled.
+ * For example, 0101B indicates that SMF0 and SMF2 are enabled.
+ */
+ u32 smf_pg;
+
+ u32 fake_mode;
+ /* Whether the current function belongs to the fake group (parent or child) */
+ u32 fake_func_type;
+ u32 fake_cfg_number; /* Number of current configuration groups */
+ struct cqm_fake_cfg fake_cfg[CQM_MAX_FACKVF_GROUP];
+
+	/* Note: for cqm special test */
+ u32 pagesize_reorder;
+ bool xid_alloc_mode;
+ bool gpa_check_enable;
+ u32 scq_reserved;
+ u32 srq_reserved;
+
+ u32 mpt_number;
+ u32 mpt_basic_size;
+ u32 scqc_number;
+ u32 scqc_basic_size;
+ u32 srqc_number;
+ u32 srqc_basic_size;
+
+ u32 gid_number;
+ u32 gid_basic_size;
+ u32 lun_number;
+ u32 lun_basic_size;
+ u32 taskmap_number;
+ u32 taskmap_basic_size;
+ u32 l3i_number;
+ u32 l3i_basic_size;
+ u32 childc_number;
+ u32 childc_basic_size;
+	u32 child_qpc_id_start; /* The FC service child CTX uses global addressing. */
+ u32 childc_number_all_function; /* The chip supports a maximum of 8096 child CTXs. */
+ u32 timer_number;
+ u32 timer_basic_size;
+ u32 xid2cid_number;
+ u32 xid2cid_basic_size;
+ u32 reorder_number;
+ u32 reorder_basic_size;
+};
+
+#define CQM_PF TYPE_PF
+#define CQM_VF TYPE_VF
+#define CQM_PPF TYPE_PPF
+#define CQM_UNKNOWN TYPE_UNKNOWN
+#define CQM_MAX_PF_NUM 32
+
+#define CQM_LB_MODE_NORMAL 0xff
+#define CQM_LB_MODE_0 0
+#define CQM_LB_MODE_1 1
+#define CQM_LB_MODE_2 2
+
+#define CQM_LB_SMF_MAX 4
+
+#define CQM_FPGA_MODE 0
+#define CQM_EMU_MODE 1
+#define CQM_FAKE_MODE_DISABLE 0
+#define CQM_FAKE_CFUNC_START 32
+
+#define CQM_FAKE_FUNC_NORMAL 0
+#define CQM_FAKE_FUNC_PARENT 1
+#define CQM_FAKE_FUNC_CHILD 2
+#define CQM_FAKE_FUNC_CHILD_CONFLICT 3
+#define CQM_FAKE_FUNC_MAX 32
+
+#define CQM_SPU_HOST_ID 4
+
+#define CQM_QPC_ROCE_PER_DRCT 12
+#define CQM_QPC_NORMAL_RESERVE_DRC 0
+#define CQM_QPC_ROCEAA_ENABLE 1
+#define CQM_QPC_ROCE_VBS_MODE 2
+#define CQM_QPC_NORMAL_WITHOUT_RSERVER_DRC 3
+
+struct cqm_db_common {
+ u32 rsvd1 : 23;
+ u32 c : 1;
+ u32 cos : 3;
+ u32 service_type : 5;
+
+ u32 rsvd2;
+};
+
+struct cqm_bloomfilter_table {
+ u32 *table;
+ u32 table_size; /* The unit is bit */
+	u32 array_mask; /* The unit of an array entry is 32B; used to address an entry */
+ struct mutex lock;
+};
+
+struct cqm_bloomfilter_init_cmd {
+ u32 bloom_filter_len;
+ u32 bloom_filter_addr;
+};
+
+struct cqm_bloomfilter_cmd {
+ u32 rsv1;
+
+ u32 k_en : 4;
+ u32 rsv2 : 28;
+
+ u32 index_h;
+ u32 index_l;
+};
+
+struct cqm_handle {
+ struct sphw_hwdev *ex_handle;
+ struct pci_dev *dev;
+ struct sphw_func_attr func_attribute; /* vf/pf attributes */
+ struct cqm_func_capability func_capability; /* function capability set */
+ struct cqm_service service[CQM_SERVICE_T_MAX]; /* Service-related structure */
+ struct cqm_bat_table bat_table;
+ struct cqm_bloomfilter_table bloomfilter_table;
+ /* fake-vf-related structure */
+ struct cqm_handle *fake_cqm_handle[CQM_FAKE_FUNC_MAX];
+ struct cqm_handle *parent_cqm_handle;
+};
+
+enum cqm_cmd_type {
+ CQM_CMD_T_INVALID = 0,
+ CQM_CMD_T_BAT_UPDATE,
+ CQM_CMD_T_CLA_UPDATE,
+ CQM_CMD_T_CLA_CACHE_INVALID = 6,
+ CQM_CMD_T_BLOOMFILTER_INIT,
+ CQM_CMD_T_MAX
+};
+
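+/* CEQE data layout: bits 19:0 carry the cqn/xid, bits 22:20 the queue id,
+ * and bits 25:23 the type.
+ */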
+#define CQM_CQN_FROM_CEQE(data) ((data) & 0xfffff)
+#define CQM_XID_FROM_CEQE(data) ((data) & 0xfffff)
+#define CQM_QID_FROM_CEQE(data) (((data) >> 20) & 0x7)
+#define CQM_TYPE_FROM_CEQE(data) (((data) >> 23) & 0x7)
+
+#define CQM_HASH_BUCKET_SIZE_64 64
+
+#define CQM_MAX_QPC_NUM 0x100000
+#define CQM_MAX_SCQC_NUM 0x100000
+#define CQM_MAX_SRQC_NUM 0x100000
+#define CQM_MAX_CHILDC_NUM 0x100000
+
+#define CQM_QPC_SIZE_256 256
+#define CQM_QPC_SIZE_512 512
+#define CQM_QPC_SIZE_1024 1024
+
+#define CQM_SCQC_SIZE_32 32
+#define CQM_SCQC_SIZE_64 64
+#define CQM_SCQC_SIZE_128 128
+
+#define CQM_SRQC_SIZE_32 32
+#define CQM_SRQC_SIZE_64 64
+#define CQM_SRQC_SIZE_128 128
+
+#define CQM_MPT_SIZE_64 64
+
+#define CQM_GID_SIZE_32 32
+
+#define CQM_LUN_SIZE_8 8
+
+#define CQM_L3I_SIZE_8 8
+
+#define CQM_TIMER_SIZE_32 32
+
+#define CQM_XID2CID_SIZE_8 8
+
+#define CQM_XID2CID_SIZE_8K 8192
+
+#define CQM_REORDER_SIZE_256 256
+
+#define CQM_CHILDC_SIZE_256 256
+
+#define CQM_XID2CID_VBS_NUM (18 * 1024) /* 16K virtio VQ + 2K nvme Q */
+
+#define CQM_VBS_QPC_NUM 2048 /* 2K VOLQ */
+
+#define CQM_VBS_QPC_SIZE 512
+
+#define CQM_XID2CID_VIRTIO_NUM (16 * 1024)
+
+#define CQM_GID_RDMA_NUM 128
+
+#define CQM_LUN_FC_NUM 64
+
+#define CQM_TASKMAP_FC_NUM 4
+
+#define CQM_L3I_COMM_NUM 64
+
+#define CQM_CHILDC_ROCE_NUM (8 * 1024)
+#define CQM_CHILDC_OVS_VBS_NUM (8 * 1024)
+#define CQM_CHILDC_TOE_NUM 256
+#define CQM_CHILDC_IPSEC_NUM (4 * 1024)
+
+#define CQM_TIMER_SCALE_NUM (2 * 1024)
+#define CQM_TIMER_ALIGN_WHEEL_NUM 8
+#define CQM_TIMER_ALIGN_SCALE_NUM \
+ (CQM_TIMER_SCALE_NUM * CQM_TIMER_ALIGN_WHEEL_NUM)
+
+#define CQM_QPC_OVS_RSVD (1024 * 1024)
+#define CQM_QPC_ROCE_RSVD 2
+#define CQM_QPC_ROCEAA_SWITCH_QP_NUM 4
+#define CQM_QPC_ROCEAA_RSVD \
+ (4 * 1024 + CQM_QPC_ROCEAA_SWITCH_QP_NUM) /* 4096 Normal QP + 4 Switch QP */
+#define CQM_CQ_ROCEAA_RSVD 64
+#define CQM_SRQ_ROCEAA_RSVD 64
+#define CQM_QPC_ROCE_VBS_RSVD \
+ (1024 + CQM_QPC_ROCE_RSVD) /* (204800 + CQM_QPC_ROCE_RSVD) */
+
+#define CQM_OVS_PAGESIZE_ORDER 8
+#define CQM_OVS_MAX_TIMER_FUNC 48
+
+#define CQM_FC_PAGESIZE_ORDER 0
+
+#define CQM_QHEAD_ALIGN_ORDER 6
+
+#define CQM_CMD_TIMEOUT 300000 /* ms */
+
+#define CQM_DW_MASK 0xffffffff
+#define CQM_DW_OFFSET 32
+#define CQM_DW_INDEX0 0
+#define CQM_DW_INDEX1 1
+#define CQM_DW_INDEX2 2
+#define CQM_DW_INDEX3 3
+
+/* The unit of bloomfilter_length is 64B(512bits). */
+#define CQM_BF_LENGTH_UNIT 9
+#define CQM_BF_BITARRAY_MAX BIT(17)
+
+typedef void (*serv_cap_init_cb)(struct cqm_handle *, void *);
+
+/* Only for llt test */
+s32 cqm_capability_init(void *ex_handle);
+/* Can be defined as static */
+s32 cqm_mem_init(void *ex_handle);
+void cqm_mem_uninit(void *ex_handle);
+s32 cqm_event_init(void *ex_handle);
+void cqm_event_uninit(void *ex_handle);
+u8 cqm_aeq_callback(void *ex_handle, u8 event, u8 *data);
+s32 cqm_get_fake_func_type(struct cqm_handle *cqm_handle);
+s32 cqm_get_child_func_start(struct cqm_handle *cqm_handle);
+s32 cqm_get_child_func_number(struct cqm_handle *cqm_handle);
+
+s32 cqm3_init(void *ex_handle);
+void cqm3_uninit(void *ex_handle);
+s32 cqm3_service_register(void *ex_handle, struct service_register_template *service_template);
+void cqm3_service_unregister(void *ex_handle, u32 service_type);
+
+struct cqm_cmd_buf *cqm3_cmd_alloc(void *ex_handle);
+void cqm3_cmd_free(void *ex_handle, struct cqm_cmd_buf *cmd_buf);
+s32 cqm3_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, struct cqm_cmd_buf *buf_in,
+ struct cqm_cmd_buf *buf_out, u64 *out_param, u32 timeout,
+ u16 channel);
+
+s32 cqm3_db_addr_alloc(void *ex_handle, void __iomem **db_addr, void __iomem **dwqe_addr);
+s32 cqm_db_phy_addr_alloc(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr);
+s32 cqm_db_init(void *ex_handle);
+void cqm_db_uninit(void *ex_handle);
+
+s32 cqm_bloomfilter_cmd(void *ex_handle, u32 op, u32 k_flag, u64 id);
+s32 cqm_bloomfilter_init(void *ex_handle);
+void cqm_bloomfilter_uninit(void *ex_handle);
+
+#define CQM_LOG_ID 0
+
+#define CQM_PTR_NULL(x) "%s: " #x " is null\n", __func__
+#define CQM_ALLOC_FAIL(x) "%s: " #x " alloc fail\n", __func__
+#define CQM_MAP_FAIL(x) "%s: " #x " map fail\n", __func__
+#define CQM_FUNCTION_FAIL(x) "%s: " #x " return failure\n", __func__
+#define CQM_WRONG_VALUE(x) "%s: " #x " %u is wrong\n", __func__, (u32)(x)
+
+#define cqm_err(dev, format, ...) dev_err(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_warn(dev, format, ...) dev_warn(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_notice(dev, format, ...) \
+ dev_notice(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_info(dev, format, ...) dev_info(dev, "[CQM]" format, ##__VA_ARGS__)
+
+#define CQM_32_ALIGN_CHECK_RET(dev_hdl, x, ret, desc) \
+ do { \
+ if (unlikely(((x) & 0x1f) != 0)) { \
+ cqm_err(dev_hdl, desc); \
+ return ret; \
+ } \
+ } while (0)
+#define CQM_64_ALIGN_CHECK_RET(dev_hdl, x, ret, desc) \
+ do { \
+ if (unlikely(((x) & 0x3f) != 0)) { \
+ cqm_err(dev_hdl, desc); \
+ return ret; \
+ } \
+ } while (0)
+
+#define CQM_PTR_CHECK_RET(ptr, ret, desc) \
+ do { \
+ if (unlikely((ptr) == NULL)) { \
+ pr_err("[CQM]" desc); \
+ return ret; \
+ } \
+ } while (0)
+
+#define CQM_PTR_CHECK_NO_RET(ptr, desc) \
+ do { \
+ if (unlikely((ptr) == NULL)) { \
+ pr_err("[CQM]" desc); \
+ return; \
+ } \
+ } while (0)
+#define CQM_CHECK_EQUAL_RET(dev_hdl, actual, expect, ret, desc) \
+ do { \
+ if (unlikely((expect) != (actual))) { \
+ cqm_err(dev_hdl, desc); \
+ return ret; \
+ } \
+ } while (0)
+#define CQM_CHECK_EQUAL_NO_RET(dev_hdl, actual, expect, desc) \
+ do { \
+ if (unlikely((expect) != (actual))) { \
+ cqm_err(dev_hdl, desc); \
+ return; \
+ } \
+ } while (0)
+
+#endif /* CQM_MAIN_H */
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_object.c b/drivers/scsi/spfc/hw/spfc_cqm_object.c
new file mode 100644
index 000000000000..7e41ee633689
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_object.c
@@ -0,0 +1,959 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/mm.h>
+
+#include "sphw_crm.h"
+#include "sphw_hw.h"
+#include "sphw_hwdev.h"
+#include "sphw_hwif.h"
+
+#include "spfc_cqm_object.h"
+#include "spfc_cqm_bitmap_table.h"
+#include "spfc_cqm_bat_cla.h"
+#include "spfc_cqm_main.h"
+
+s32 cqm_qpc_mpt_bitmap_alloc(struct cqm_object *object, struct cqm_cla_table *cla_table)
+{
+ struct cqm_qpc_mpt *common = container_of(object, struct cqm_qpc_mpt, object);
+ struct cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct cqm_qpc_mpt_info,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_bitmap *bitmap = &cla_table->bitmap;
+ u32 index, count;
+
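+	/* count is the number of consecutive bitmap indexes the object occupies,
+	 * e.g. a 1024B context with a 256B obj_size needs 4 indexes.
+	 */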
+ count = (ALIGN(object->object_size, cla_table->obj_size)) / cla_table->obj_size;
+ qpc_mpt_info->index_count = count;
+
+ if (qpc_mpt_info->common.xid == CQM_INDEX_INVALID) {
+ /* apply for an index normally */
+ index = cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
+ count, func_cap->xid_alloc_mode);
+ if (index < bitmap->max_num) {
+ qpc_mpt_info->common.xid = index;
+ } else {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+ } else {
+ /* apply for index to be reserved */
+ index = cqm_bitmap_alloc_reserved(bitmap, count,
+ qpc_mpt_info->common.xid);
+ if (index != qpc_mpt_info->common.xid) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_alloc_reserved));
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_qpc_mpt_create(struct cqm_object *object)
+{
+ struct cqm_qpc_mpt *common = container_of(object, struct cqm_qpc_mpt, object);
+ struct cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct cqm_qpc_mpt_info,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object_table *object_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_bitmap *bitmap = NULL;
+ u32 index, count;
+
+ /* find the corresponding cla table */
+ if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ return CQM_FAIL;
+ }
+
+ CQM_PTR_CHECK_RET(cla_table, CQM_FAIL,
+ CQM_FUNCTION_FAIL(cqm_cla_table_get));
+
+ /* Bitmap applies for index. */
+ if (cqm_qpc_mpt_bitmap_alloc(object, cla_table) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_qpc_mpt_bitmap_alloc));
+ return CQM_FAIL;
+ }
+
+ bitmap = &cla_table->bitmap;
+ index = qpc_mpt_info->common.xid;
+ count = qpc_mpt_info->index_count;
+
+ /* Find the trunk page from the BAT/CLA and allocate the buffer.
+ * Ensure that the released buffer has been cleared.
+ */
+ if (cla_table->alloc_static)
+ qpc_mpt_info->common.vaddr = cqm_cla_get_unlock(cqm_handle,
+ cla_table,
+ index, count,
+ &common->paddr);
+ else
+ qpc_mpt_info->common.vaddr = cqm_cla_get_lock(cqm_handle,
+ cla_table, index,
+ count,
+ &common->paddr);
+
+ if (!qpc_mpt_info->common.vaddr) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_get_lock));
+ cqm_err(handle->dev_hdl, "Qpc mpt init: qpc mpt vaddr is null, cla_table->alloc_static=%d\n",
+ cla_table->alloc_static);
+ goto err1;
+ }
+
+ /* Indexes are associated with objects, and FC is executed
+ * in the interrupt context.
+ */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC) {
+ if (cqm_object_table_insert(cqm_handle, object_table, index,
+ object, false) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_insert));
+ goto err2;
+ }
+ } else {
+ if (cqm_object_table_insert(cqm_handle, object_table, index,
+ object, true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_insert));
+ goto err2;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err2:
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+err1:
+ cqm_bitmap_free(bitmap, index, count);
+ return CQM_FAIL;
+}
+
+struct cqm_qpc_mpt *cqm3_object_qpc_mpt_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ u32 index)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_qpc_mpt_info *qpc_mpt_info = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+ s32 ret = CQM_FAIL;
+ u32 relative_index;
+ u32 fake_func_id;
+
+ CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_create_cnt);
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, NULL, CQM_PTR_NULL(cqm_handle));
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+	/* check the exceptional case that the service is not registered */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ if (object_type != CQM_OBJECT_SERVICE_CTX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ /* fake vf adaption, switch to corresponding VF. */
+ if (cqm_handle->func_capability.fake_func_type ==
+ CQM_FAKE_FUNC_PARENT) {
+ fake_func_id = index / cqm_handle->func_capability.qpc_number;
+ relative_index = index % cqm_handle->func_capability.qpc_number;
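+		/* For example, with qpc_number = 1024 a global index of 2050 maps
+		 * to fake_func_id = 2 and relative_index = 2.
+		 */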
+
+ cqm_info(handle->dev_hdl, "qpc create: fake_func_id=%u, relative_index=%u\n",
+ fake_func_id, relative_index);
+
+ if ((s32)fake_func_id >=
+ cqm_get_child_func_number(cqm_handle)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(fake_func_id));
+ return NULL;
+ }
+
+ index = relative_index;
+ cqm_handle = cqm_handle->fake_cqm_handle[fake_func_id];
+ }
+
+ qpc_mpt_info = kmalloc(sizeof(*qpc_mpt_info), GFP_ATOMIC | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(qpc_mpt_info, NULL, CQM_ALLOC_FAIL(qpc_mpt_info));
+
+ qpc_mpt_info->common.object.service_type = service_type;
+ qpc_mpt_info->common.object.object_type = object_type;
+ qpc_mpt_info->common.object.object_size = object_size;
+ atomic_set(&qpc_mpt_info->common.object.refcount, 1);
+ init_completion(&qpc_mpt_info->common.object.free);
+ qpc_mpt_info->common.object.cqm_handle = cqm_handle;
+ qpc_mpt_info->common.xid = index;
+
+ qpc_mpt_info->common.priv = object_priv;
+
+ ret = cqm_qpc_mpt_create(&qpc_mpt_info->common.object);
+ if (ret == CQM_SUCCESS)
+ return &qpc_mpt_info->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_qpc_mpt_create));
+ kfree(qpc_mpt_info);
+ return NULL;
+}
+
+void cqm_linkwqe_fill(struct cqm_buf *buf, u32 wqe_per_buf, u32 wqe_size,
+ u32 wqe_number, bool tail, u8 link_mode)
+{
+ struct cqm_linkwqe_128B *linkwqe = NULL;
+ struct cqm_linkwqe *wqe = NULL;
+ dma_addr_t addr;
+ u8 *tmp = NULL;
+ u8 *va = NULL;
+ u32 i;
+
+ /* The linkwqe of other buffer except the last buffer
+ * is directly filled to the tail.
+ */
+ for (i = 0; i < buf->buf_number; i++) {
+ va = (u8 *)(buf->buf_list[i].va);
+
+ if (i != (buf->buf_number - 1)) {
+ wqe = (struct cqm_linkwqe *)(va + (u32)(wqe_size * wqe_per_buf));
+ wqe->wf = CQM_WQE_WF_LINK;
+ wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+ wqe->lp = CQM_LINK_WQE_LP_INVALID;
+ /* The valid value of link wqe needs to be set to 1.
+ * Each service ensures that o-bit=1 indicates that
+ * link wqe is valid and o-bit=0 indicates that
+ * link wqe is invalid.
+ */
+ wqe->o = CQM_LINK_WQE_OWNER_VALID;
+ addr = buf->buf_list[(u32)(i + 1)].pa;
+ wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
+ wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
+ } else { /* linkwqe special padding of the last buffer */
+ if (tail) {
+ /* must be filled at the end of the page */
+ tmp = va + (u32)(wqe_size * wqe_per_buf);
+ wqe = (struct cqm_linkwqe *)tmp;
+ } else {
+ /* The last linkwqe is filled
+ * following the last wqe.
+ */
+ tmp = va + (u32)(wqe_size * (wqe_number -
+ wqe_per_buf *
+ (buf->buf_number -
+ 1)));
+ wqe = (struct cqm_linkwqe *)tmp;
+ }
+ wqe->wf = CQM_WQE_WF_LINK;
+ wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+
+ /* In link mode, the last link WQE is invalid;
+ * In ring mode, the last link wqe is valid, pointing to
+ * the home page, and the lp is set.
+ */
+ if (link_mode == CQM_QUEUE_LINK_MODE) {
+ wqe->o = CQM_LINK_WQE_OWNER_INVALID;
+ } else {
+ /* The lp field of the last link_wqe is set to
+ * 1, indicating that the meaning of the o-bit
+ * is reversed.
+ */
+ wqe->lp = CQM_LINK_WQE_LP_VALID;
+ wqe->o = CQM_LINK_WQE_OWNER_VALID;
+ addr = buf->buf_list[0].pa;
+ wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
+ wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
+ }
+ }
+
+ if (wqe_size == CQM_LINKWQE_128B) {
+ /* After the B800 version, the WQE obit scheme is
+ * changed. The 64B bits before and after the 128B WQE
+ * need to be assigned a value:
+ * ifoe the 63rd bit from the end of the last 64B is
+ * obit;
+ * toe the 157th bit from the end of the last 64B is
+ * obit.
+ */
+ linkwqe = (struct cqm_linkwqe_128B *)wqe;
+ linkwqe->second64B.forth_16B.bs.ifoe_o = CQM_LINK_WQE_OWNER_VALID;
+
+ /* shift 2 bits by right to get length of dw(4B) */
+ cqm_swab32((u8 *)wqe, sizeof(struct cqm_linkwqe_128B) >> 2);
+ } else {
+ /* shift 2 bits by right to get length of dw(4B) */
+ cqm_swab32((u8 *)wqe, sizeof(struct cqm_linkwqe) >> 2);
+ }
+ }
+}
+
+s32 cqm_nonrdma_queue_ctx_create(struct cqm_object *object)
+{
+ struct cqm_queue *common = container_of(object, struct cqm_queue, object);
+ struct cqm_nonrdma_qinfo *qinfo = container_of(common, struct cqm_nonrdma_qinfo,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object_table *object_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_bitmap *bitmap = NULL;
+ s32 shift;
+
+ if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+ shift = cqm_shift(qinfo->q_ctx_size);
+ common->q_ctx_vaddr = cqm_kmalloc_align(qinfo->q_ctx_size,
+ GFP_KERNEL | __GFP_ZERO,
+ (u16)shift);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_ctx_vaddr));
+ return CQM_FAIL;
+ }
+
+ common->q_ctx_paddr = pci_map_single(cqm_handle->dev,
+ common->q_ctx_vaddr,
+ qinfo->q_ctx_size,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev,
+ common->q_ctx_paddr)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_ctx_vaddr));
+ cqm_kfree_align(common->q_ctx_vaddr);
+ common->q_ctx_vaddr = NULL;
+ return CQM_FAIL;
+ }
+ } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ /* find the corresponding cla table */
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(nonrdma_cqm_cla_table_get));
+ return CQM_FAIL;
+ }
+
+ /* bitmap applies for index */
+ bitmap = &cla_table->bitmap;
+ qinfo->index_count =
+ (ALIGN(qinfo->q_ctx_size, cla_table->obj_size)) /
+ cla_table->obj_size;
+ qinfo->common.index = cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
+ qinfo->index_count,
+ cqm_handle->func_capability.xid_alloc_mode);
+ if (qinfo->common.index >= bitmap->max_num) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(nonrdma_cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+
+ /* find the trunk page from BAT/CLA and allocate the buffer */
+ common->q_ctx_vaddr = cqm_cla_get_lock(cqm_handle, cla_table,
+ qinfo->common.index,
+ qinfo->index_count,
+ &common->q_ctx_paddr);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(nonrdma_cqm_cla_get_lock));
+ cqm_bitmap_free(bitmap, qinfo->common.index,
+ qinfo->index_count);
+ return CQM_FAIL;
+ }
+
+ /* index and object association */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC) {
+ if (cqm_object_table_insert(cqm_handle, object_table,
+ qinfo->common.index, object,
+ false) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(nonrdma_cqm_object_table_insert));
+ cqm_cla_put(cqm_handle, cla_table,
+ qinfo->common.index,
+ qinfo->index_count);
+ cqm_bitmap_free(bitmap, qinfo->common.index,
+ qinfo->index_count);
+ return CQM_FAIL;
+ }
+ } else {
+ if (cqm_object_table_insert(cqm_handle, object_table,
+ qinfo->common.index, object,
+ true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(nonrdma_cqm_object_table_insert));
+ cqm_cla_put(cqm_handle, cla_table,
+ qinfo->common.index,
+ qinfo->index_count);
+ cqm_bitmap_free(bitmap, qinfo->common.index,
+ qinfo->index_count);
+ return CQM_FAIL;
+ }
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_nonrdma_queue_create(struct cqm_object *object)
+{
+ struct cqm_queue *common = container_of(object, struct cqm_queue, object);
+ struct cqm_nonrdma_qinfo *qinfo = container_of(common, struct cqm_nonrdma_qinfo,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_service *service = cqm_handle->service + object->service_type;
+ struct cqm_buf *q_room_buf = &common->q_room_buf_1;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 wqe_number = qinfo->common.object.object_size;
+ u32 wqe_size = qinfo->wqe_size;
+ u32 order = service->buf_order;
+ u32 buf_number, buf_size;
+ bool tail = false; /* determine whether the linkwqe is at the end of the page */
+
+	/* When creating a CQ/SCQ queue, the page size is 4 KB and
+	 * the linkwqe must be at the end of the page.
+	 */
+ if (object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_CQ ||
+ object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ /* depth: 2^n-aligned; depth range: 256-32 K */
+ if (wqe_number < CQM_CQ_DEPTH_MIN ||
+ wqe_number > CQM_CQ_DEPTH_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_number));
+ return CQM_FAIL;
+ }
+ if (!cqm_check_align(wqe_number)) {
+			cqm_err(handle->dev_hdl, "Nonrdma queue alloc: wqe_number is not aligned to 2^n\n");
+ return CQM_FAIL;
+ }
+
+ order = CQM_4K_PAGE_ORDER; /* wqe page 4k */
+ tail = true; /* The linkwqe must be at the end of the page. */
+ buf_size = CQM_4K_PAGE_SIZE;
+ } else {
+ buf_size = (u32)(PAGE_SIZE << order);
+ }
+
+ /* Calculate the total number of buffers required,
+ * -1 indicates that the link wqe in a buffer is deducted.
+ */
+ qinfo->wqe_per_buf = (buf_size / wqe_size) - 1;
+	/* buf_number is also the number of link wqes included in the depth
+	 * transferred by the service.
+	 */
+ buf_number = ALIGN((wqe_size * wqe_number), buf_size) / buf_size;
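+	/* For example, 4KB buffers with 64B WQEs give wqe_per_buf = 63; a depth
+	 * of 256 WQEs then requires buf_number = ALIGN(64 * 256, 4096) / 4096 = 4.
+	 */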
+
+ /* apply for buffer */
+ q_room_buf->buf_number = buf_number;
+ q_room_buf->buf_size = buf_size;
+ q_room_buf->page_number = buf_number << order;
+ if (cqm_buf_alloc(cqm_handle, q_room_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+ /* fill link wqe, wqe_number - buf_number is the number of wqe without
+ * link wqe
+ */
+ cqm_linkwqe_fill(q_room_buf, qinfo->wqe_per_buf, wqe_size,
+ wqe_number - buf_number, tail,
+ common->queue_link_mode);
+
+ /* create queue header */
+ qinfo->common.q_header_vaddr = cqm_kmalloc_align(sizeof(struct cqm_queue_header),
+ GFP_KERNEL | __GFP_ZERO,
+ CQM_QHEAD_ALIGN_ORDER);
+ if (!qinfo->common.q_header_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_header_vaddr));
+ goto err1;
+ }
+
+ common->q_header_paddr = pci_map_single(cqm_handle->dev,
+ qinfo->common.q_header_vaddr,
+ sizeof(struct cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, common->q_header_paddr)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_header_vaddr));
+ goto err2;
+ }
+
+ /* create queue ctx */
+ if (cqm_nonrdma_queue_ctx_create(object) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_nonrdma_queue_ctx_create));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
+ sizeof(struct cqm_queue_header), PCI_DMA_BIDIRECTIONAL);
+err2:
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+err1:
+ cqm_buf_free(q_room_buf, cqm_handle->dev);
+ return CQM_FAIL;
+}
+
+struct cqm_queue *cqm3_object_fc_srq_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ u32 valid_wqe_per_buffer;
+ u32 wqe_sum; /* include linkwqe, normal wqe */
+ u32 buf_size;
+ u32 buf_num;
+ s32 ret;
+
+ CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_fc_srq_create_cnt);
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, NULL, CQM_PTR_NULL(cqm_handle));
+
+ /* service_type must be fc */
+ if (service_type != CQM_SERVICE_T_FC) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+	/* check the exceptional case that the service is not registered */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ /* wqe_size cannot exceed PAGE_SIZE and must be 2^n aligned. */
+ if (wqe_size >= PAGE_SIZE || (!cqm_check_align(wqe_size))) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return NULL;
+ }
+
+ /* FC RQ is SRQ. (Different from the SRQ concept of TOE, FC indicates
+ * that packets received by all flows are placed on the same RQ.
+ * The SRQ of TOE is similar to the RQ resource pool.)
+ */
+ if (object_type != CQM_OBJECT_NONRDMA_SRQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ service = &cqm_handle->service[service_type];
+ buf_size = (u32)(PAGE_SIZE << (service->buf_order));
+ /* subtract 1 link wqe */
+ valid_wqe_per_buffer = buf_size / wqe_size - 1;
+ buf_num = wqe_number / valid_wqe_per_buffer;
+ if (wqe_number % valid_wqe_per_buffer != 0)
+ buf_num++;
+
+ /* calculate the total number of WQEs */
+ wqe_sum = buf_num * (valid_wqe_per_buffer + 1);
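+	/* For example, a 4KB buffer (buf_order 0) with 128B WQEs gives
+	 * valid_wqe_per_buffer = 31; wqe_number = 100 then needs buf_num = 4
+	 * and wqe_sum = 4 * 32 = 128.
+	 */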
+ nonrdma_qinfo = kmalloc(sizeof(*nonrdma_qinfo), GFP_KERNEL | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(nonrdma_qinfo, NULL, CQM_ALLOC_FAIL(nonrdma_qinfo));
+
+ /* initialize object member */
+ nonrdma_qinfo->common.object.service_type = service_type;
+ nonrdma_qinfo->common.object.object_type = object_type;
+ /* total number of WQEs */
+ nonrdma_qinfo->common.object.object_size = wqe_sum;
+ atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
+ init_completion(&nonrdma_qinfo->common.object.free);
+ nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
+
+ /* Initialize the doorbell used by the current queue.
+ * The default doorbell is the hardware doorbell.
+ */
+ nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+ /* Currently, the connection mode is fixed. In the future,
+ * the service needs to transfer the connection mode.
+ */
+ nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
+
+ /* initialize public members */
+ nonrdma_qinfo->common.priv = object_priv;
+ nonrdma_qinfo->common.valid_wqe_num = wqe_sum - buf_num;
+
+ /* initialize internal private members */
+ nonrdma_qinfo->wqe_size = wqe_size;
+	/* For the RQ (also called the SRQ in FC) created by the FC service,
+	 * a CTX needs to be created.
+	 */
+ nonrdma_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+
+ ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &nonrdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fc_queue_create));
+ kfree(nonrdma_qinfo);
+ return NULL;
+}
+
+struct cqm_queue *cqm3_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ s32 ret;
+
+ CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_create_cnt);
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, NULL, CQM_PTR_NULL(cqm_handle));
+
+	/* check the exceptional case that the service is not registered */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+	/* wqe_size cannot exceed PAGE_SIZE, cannot be zero, and must be a power
+	 * of 2; cqm_check_align() performs these checks.
+	 */
+ if (wqe_size >= PAGE_SIZE || (!cqm_check_align(wqe_size))) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return NULL;
+ }
+
+ /* nonrdma supports: RQ, SQ, SRQ, CQ, SCQ */
+ if (object_type < CQM_OBJECT_NONRDMA_EMBEDDED_RQ ||
+ object_type > CQM_OBJECT_NONRDMA_SCQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ nonrdma_qinfo = kmalloc(sizeof(*nonrdma_qinfo), GFP_KERNEL | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(nonrdma_qinfo, NULL, CQM_ALLOC_FAIL(nonrdma_qinfo));
+
+ nonrdma_qinfo->common.object.service_type = service_type;
+ nonrdma_qinfo->common.object.object_type = object_type;
+ nonrdma_qinfo->common.object.object_size = wqe_number;
+ atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
+ init_completion(&nonrdma_qinfo->common.object.free);
+ nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
+
+ /* Initialize the doorbell used by the current queue.
+ * The default value is hardware doorbell
+ */
+ nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+ /* Currently, the link mode is hardcoded and needs to be transferred by
+ * the service side.
+ */
+ nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
+
+ nonrdma_qinfo->common.priv = object_priv;
+
+ /* Initialize internal private members */
+ nonrdma_qinfo->wqe_size = wqe_size;
+ service = &cqm_handle->service[service_type];
+ switch (object_type) {
+ case CQM_OBJECT_NONRDMA_SCQ:
+ nonrdma_qinfo->q_ctx_size =
+ service->service_template.scq_ctx_size;
+ break;
+ case CQM_OBJECT_NONRDMA_SRQ:
+ /* Currently, the SRQ of the service is created through a
+ * dedicated interface.
+ */
+ nonrdma_qinfo->q_ctx_size =
+ service->service_template.srq_ctx_size;
+ break;
+ default:
+ break;
+ }
+
+ ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &nonrdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_nonrdma_queue_create));
+ kfree(nonrdma_qinfo);
+ return NULL;
+}
+
+void cqm_qpc_mpt_delete(struct cqm_object *object)
+{
+ struct cqm_qpc_mpt *common = container_of(object, struct cqm_qpc_mpt, object);
+ struct cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct cqm_qpc_mpt_info,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object_table *object_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ u32 count = qpc_mpt_info->index_count;
+ u32 index = qpc_mpt_info->common.xid;
+ struct cqm_bitmap *bitmap = NULL;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_delete_cnt);
+
+ /* find the corresponding cla table */
+ if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ return;
+ }
+
+ CQM_PTR_CHECK_NO_RET(cla_table,
+ CQM_FUNCTION_FAIL(cqm_cla_table_get_qpc));
+
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC)
+ cqm_object_table_remove(cqm_handle, object_table, index, object,
+ false);
+ else
+ cqm_object_table_remove(cqm_handle, object_table, index, object,
+ true);
+
+ /* wait for completion to ensure that all references to
+ * the QPC are complete
+ */
+ if (atomic_dec_and_test(&object->refcount))
+ complete(&object->free);
+ else
+		cqm_err(handle->dev_hdl, "Qpc mpt del: object is referenced by others, has to wait for completion\n");
+
+	/* Static QPC allocation must be non-blocking.
+	 * Services must ensure that the QPC is no longer referenced
+	 * when the QPC is deleted.
+	 */
+ if (!cla_table->alloc_static)
+ wait_for_completion(&object->free);
+
+ /* release qpc buffer */
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* release the index to the bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+}
+
+s32 cqm_qpc_mpt_delete_ret(struct cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ cqm_qpc_mpt_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+void cqm_nonrdma_queue_delete(struct cqm_object *object)
+{
+ struct cqm_queue *common = container_of(object, struct cqm_queue, object);
+ struct cqm_nonrdma_qinfo *qinfo = container_of(common, struct cqm_nonrdma_qinfo,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_buf *q_room_buf = &common->q_room_buf_1;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object_table *object_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_bitmap *bitmap = NULL;
+ u32 index = qinfo->common.index;
+ u32 count = qinfo->index_count;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_delete_cnt);
+
+ /* The SCQ has an independent SCQN association. */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ CQM_PTR_CHECK_NO_RET(cla_table, CQM_FUNCTION_FAIL(cqm_cla_table_get_queue));
+
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC)
+ cqm_object_table_remove(cqm_handle, object_table, index,
+ object, false);
+ else
+ cqm_object_table_remove(cqm_handle, object_table, index,
+ object, true);
+ }
+
+ /* wait for completion to ensure that all references to
+ * the QPC are complete
+ */
+ if (atomic_dec_and_test(&object->refcount))
+ complete(&object->free);
+ else
+		cqm_err(handle->dev_hdl, "Nonrdma queue del: object is referenced by others, has to wait for completion\n");
+
+ wait_for_completion(&object->free);
+
+	/* If the queue header exists, release it. */
+ if (qinfo->common.q_header_vaddr) {
+ pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
+ sizeof(struct cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+ }
+
+ cqm_buf_free(q_room_buf, cqm_handle->dev);
+	/* SRQ and SCQ have independent CTXs, which are released here. */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+		/* The CTX of the nonrdma SRQ is
+		 * allocated independently.
+		 */
+ if (common->q_ctx_vaddr) {
+ pci_unmap_single(cqm_handle->dev, common->q_ctx_paddr,
+ qinfo->q_ctx_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(common->q_ctx_vaddr);
+ common->q_ctx_vaddr = NULL;
+ }
+ } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+		/* The CTX of the nonrdma SCQ is managed by BAT/CLA. */
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* release the index to the bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+ }
+}
+
+s32 cqm_nonrdma_queue_delete_ret(struct cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_NONRDMA_EMBEDDED_RQ:
+ case CQM_OBJECT_NONRDMA_EMBEDDED_SQ:
+ case CQM_OBJECT_NONRDMA_EMBEDDED_CQ:
+ case CQM_OBJECT_NONRDMA_SCQ:
+ cqm_nonrdma_queue_delete(object);
+ return CQM_SUCCESS;
+ case CQM_OBJECT_NONRDMA_SRQ:
+ cqm_nonrdma_queue_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
+void cqm3_object_delete(struct cqm_object *object)
+{
+ struct cqm_handle *cqm_handle = NULL;
+ struct sphw_hwdev *handle = NULL;
+
+ CQM_PTR_CHECK_NO_RET(object, CQM_PTR_NULL(object));
+ if (!object->cqm_handle) {
+ pr_err("[CQM]object del: cqm_handle is null, service type %u, refcount %d\n",
+ object->service_type, (int)object->refcount.counter);
+ kfree(object);
+ return;
+ }
+
+ cqm_handle = (struct cqm_handle *)object->cqm_handle;
+
+ if (!cqm_handle->ex_handle) {
+ pr_err("[CQM]object del: ex_handle is null, service type %u, refcount %d\n",
+ object->service_type, (int)object->refcount.counter);
+ kfree(object);
+ return;
+ }
+
+ handle = cqm_handle->ex_handle;
+
+ if (object->service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->service_type));
+ kfree(object);
+ return;
+ }
+
+ if (cqm_qpc_mpt_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ if (cqm_nonrdma_queue_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ kfree(object);
+}
+
+struct cqm_object *cqm3_object_get(void *ex_handle, enum cqm_object_type object_type,
+ u32 index, bool bh)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_object_table *object_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_object *object = NULL;
+
+ /* The data flow path takes performance into consideration and
+ * does not check input parameters.
+ */
+ switch (object_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ break;
+ case CQM_OBJECT_NONRDMA_SCQ:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ break;
+ default:
+ return NULL;
+ }
+
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_table_get));
+ return NULL;
+ }
+
+ object_table = &cla_table->obj_table;
+ object = cqm_object_table_get(cqm_handle, object_table, index, bh);
+ return object;
+}
+
+void cqm3_object_put(struct cqm_object *object)
+{
+ /* The data flow path takes performance into consideration and
+ * does not check input parameters.
+ */
+ if (atomic_dec_and_test(&object->refcount))
+ complete(&object->free);
+}
+
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_object.h b/drivers/scsi/spfc/hw/spfc_cqm_object.h
new file mode 100644
index 000000000000..3fbaa6a42af0
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_object.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef CQM_OBJECT_H
+#define CQM_OBJECT_H
+
+#ifdef __cplusplus
+#if __cplusplus
+extern "C" {
+#endif
+#endif /* __cplusplus */
+
+#define CQM_SUCCESS 0
+#define CQM_FAIL (-1)
+/* Ignore the return value and continue */
+#define CQM_CONTINUE 1
+
+/* type of WQE is LINK WQE */
+#define CQM_WQE_WF_LINK 1
+
+/* chain queue mode */
+#define CQM_QUEUE_LINK_MODE 0
+/* RING queue mode */
+#define CQM_QUEUE_RING_MODE 1
+
+#define CQM_CQ_DEPTH_MAX 32768
+#define CQM_CQ_DEPTH_MIN 256
+
+/* linkwqe */
+#define CQM_LINK_WQE_CTRLSL_VALUE 2
+#define CQM_LINK_WQE_LP_VALID 1
+#define CQM_LINK_WQE_LP_INVALID 0
+#define CQM_LINK_WQE_OWNER_VALID 1
+#define CQM_LINK_WQE_OWNER_INVALID 0
+
+#define CQM_ADDR_HI(addr) ((u32)((u64)(addr) >> 32))
+#define CQM_ADDR_LW(addr) ((u32)((u64)(addr) & 0xffffffff))
+
+#define CQM_QPC_LAYOUT_TABLE_SIZE 16
+
+#define CQM_MOD_CQM 8
+
+/* generic linkwqe structure */
+struct cqm_linkwqe {
+ u32 rsv1 : 14; /* <reserved field */
+ u32 wf : 1; /* <wf */
+ u32 rsv2 : 14; /* <reserved field */
+ u32 ctrlsl : 2; /* <ctrlsl */
+ u32 o : 1; /* <o bit */
+
+ u32 rsv3 : 31; /* <reserved field */
+ u32 lp : 1; /* The lp field determines whether the o-bit meaning is reversed. */
+ u32 next_page_gpa_h;
+ u32 next_page_gpa_l;
+ u32 next_buffer_addr_h;
+ u32 next_buffer_addr_l;
+};
+
+/* SRQ linkwqe structure. The wqe size must not exceed the common RQE size. */
+struct cqm_srq_linkwqe {
+ struct cqm_linkwqe linkwqe; /* <generic linkwqe structure */
+ u32 current_buffer_gpa_h;
+ u32 current_buffer_gpa_l;
+ u32 current_buffer_addr_h;
+ u32 current_buffer_addr_l;
+
+ u32 fast_link_page_addr_h;
+ u32 fast_link_page_addr_l;
+
+ u32 fixed_next_buffer_addr_h;
+ u32 fixed_next_buffer_addr_l;
+};
+
+#define CQM_LINKWQE_128B 128
+
+/* first 64B of standard 128B WQE */
+union cqm_linkwqe_first64B {
+ struct cqm_linkwqe basic_linkwqe; /* <generic linkwqe structure */
+ u32 value[16]; /* <reserved field */
+};
+
+/* second 64B of standard 128B WQE */
+struct cqm_linkwqe_second64B {
+ u32 rsvd0[4]; /* <first 16B reserved field */
+ u32 rsvd1[4]; /* <second 16B reserved field */
+ u32 rsvd2[4];
+
+ union {
+ struct {
+ u32 rsvd0[2];
+ u32 rsvd1 : 31;
+ u32 ifoe_o : 1; /* <o bit of ifoe */
+ u32 rsvd2;
+ } bs;
+ u32 value[4];
+ } forth_16B; /* <fourth 16B */
+};
+
+/* standard 128B WQE structure */
+struct cqm_linkwqe_128B {
+ union cqm_linkwqe_first64B first64B; /* <first 64B of standard 128B WQE */
+ struct cqm_linkwqe_second64B second64B; /* <back 64B of standard 128B WQE */
+};
+
+/* AEQ type definition */
+enum cqm_aeq_event_type {
+ CQM_AEQ_BASE_T_FC = 48, /* <FC consists of 8 events:48~55 */
+ CQM_AEQ_MAX_T_FC = 56
+};
+
+/* service registration template */
+struct service_register_template {
+ u32 service_type; /* <service type */
+ u32 srq_ctx_size; /* <SRQ context size */
+ u32 scq_ctx_size; /* <SCQ context size */
+ void *service_handle;
+ u8 (*aeq_level_callback)(void *service_handle, u8 event_type, u8 *val);
+ void (*aeq_callback)(void *service_handle, u8 event_type, u8 *val);
+};
+
+/* object operation type definition */
+enum cqm_object_type {
+ CQM_OBJECT_ROOT_CTX = 0, /* <0:root context, which is compatible with root CTX management */
+ CQM_OBJECT_SERVICE_CTX, /* <1:QPC, connection management object */
+ CQM_OBJECT_NONRDMA_EMBEDDED_RQ = 10, /* <10:RQ of non-RDMA services, managed by LINKWQE */
+ CQM_OBJECT_NONRDMA_EMBEDDED_SQ, /* <11:SQ of non-RDMA services, managed by LINKWQE */
+ /* <12:SRQ of non-RDMA services, managed by MTT, but the CQM needs to apply for MTT. */
+ CQM_OBJECT_NONRDMA_SRQ,
+ /* <13:Embedded CQ for non-RDMA services, managed by LINKWQE */
+ CQM_OBJECT_NONRDMA_EMBEDDED_CQ,
+ CQM_OBJECT_NONRDMA_SCQ, /* <14:SCQ of non-RDMA services, managed by LINKWQE */
+};
+
+/* return value of the failure to apply for the BITMAP table */
+#define CQM_INDEX_INVALID (~(0U))
+
+/* doorbell mode selected by the current Q, hardware doorbell */
+#define CQM_HARDWARE_DOORBELL 1
+
+/* single-node structure of the CQM buffer */
+struct cqm_buf_list {
+ void *va; /* <virtual address */
+ dma_addr_t pa; /* <physical address */
+ u32 refcount; /* <reference count of the buf, which is used for internal buf management. */
+};
+
+/* common management structure of the CQM buffer */
+struct cqm_buf {
+ struct cqm_buf_list *buf_list; /* <buffer list */
+ /* <map the discrete buffer list to a group of consecutive addresses */
+ struct cqm_buf_list direct;
+ u32 page_number; /* <buf_number in quantity of page_number=2^n */
+ u32 buf_number; /* <number of buf_list nodes */
+ u32 buf_size; /* <PAGE_SIZE in quantity of buf_size=2^n */
+};
+
+/* CQM object structure, which can be considered
+ * as the base class abstracted from all queues/CTX.
+ */
+struct cqm_object {
+ u32 service_type; /* <service type */
+ u32 object_type; /* <object type, such as context, queue, mpt, and mtt, etc */
+ u32 object_size; /* <object Size, for queue/CTX/MPT, the unit is Byte*/
+ atomic_t refcount; /* <reference counting */
+ struct completion free; /* <release completed quantity */
+ void *cqm_handle; /* <cqm_handle */
+};
+
+/* structure of the QPC and MPT objects of the CQM */
+struct cqm_qpc_mpt {
+ struct cqm_object object;
+ u32 xid;
+ dma_addr_t paddr; /* <physical address of the QPC/MTT memory */
+ void *priv; /* <private information about the object of the service driver. */
+ u8 *vaddr; /* <virtual address of the QPC/MTT memory */
+};
+
+/* queue header structure */
+struct cqm_queue_header {
+ u64 doorbell_record; /* <SQ/RQ DB content */
+ u64 ci_record; /* <CQ DB content */
+ u64 rsv1;
+ u64 rsv2;
+};
+
+/* queue management structure: for queues of non-RDMA services, embedded queues
+ * are managed by LinkWQE, SRQ and SCQ are managed by MTT, but MTT needs to be
+ * applied by CQM; the queue of the RDMA service is managed by the MTT.
+ */
+struct cqm_queue {
+ struct cqm_object object; /* <object base class */
+ /* <The embedded queue and QP do not have indexes, but the SRQ and SCQ do. */
+ u32 index;
+ /* <private information about the object of the service driver */
+ void *priv;
+ /* <doorbell type selected by the current queue. HW/SW are used for the roce QP. */
+ u32 current_q_doorbell;
+ u32 current_q_room;
+ struct cqm_buf q_room_buf_1; /* <nonrdma:only q_room_buf_1 can be set to q_room_buf */
+ struct cqm_buf q_room_buf_2; /* <The CQ of RDMA reallocates the size of the queue room. */
+ struct cqm_queue_header *q_header_vaddr; /* <queue header virtual address */
+ dma_addr_t q_header_paddr; /* <physical address of the queue header */
+ u8 *q_ctx_vaddr; /* <CTX virtual addresses of SRQ and SCQ */
+ dma_addr_t q_ctx_paddr; /* <CTX physical addresses of SRQ and SCQ */
+ u32 valid_wqe_num; /* <number of valid WQEs that are successfully created */
+ u8 *tail_container; /* <tail pointer of the SRQ container */
+ u8 *head_container; /* <head pointer of SRQ container */
+ /* <Determine the connection mode during queue creation, such as link and ring. */
+ u8 queue_link_mode;
+};
+
+struct cqm_qpc_layout_table_node {
+ u32 type;
+ u32 size;
+ u32 offset;
+ struct cqm_object *object;
+};
+
+struct cqm_qpc_mpt_info {
+ struct cqm_qpc_mpt common;
+	/* Different services have different QPCs.
+	 * A large QPC/MPT occupies several consecutive indexes in the bitmap.
+	 */
+ u32 index_count;
+ struct cqm_qpc_layout_table_node qpc_layout_table[CQM_QPC_LAYOUT_TABLE_SIZE];
+};
+
+struct cqm_nonrdma_qinfo {
+ struct cqm_queue common;
+ u32 wqe_size;
+ /* Number of WQEs in each buffer (excluding link WQEs)
+ * For SRQ, the value is the number of WQEs contained in a container.
+ */
+ u32 wqe_per_buf;
+ u32 q_ctx_size;
+ /* When different services use CTXs of different sizes,
+ * a large CTX occupies multiple consecutive indexes in the bitmap.
+ */
+ u32 index_count;
+ /* add for srq */
+ u32 container_size;
+};
+
+/* sending command structure */
+struct cqm_cmd_buf {
+ void *buf;
+ dma_addr_t dma;
+ u16 size;
+};
+
+struct cqm_queue *cqm3_object_fc_srq_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+struct cqm_qpc_mpt *cqm3_object_qpc_mpt_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ u32 index);
+struct cqm_queue *cqm3_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+void cqm3_object_delete(struct cqm_object *object);
+struct cqm_object *cqm3_object_get(void *ex_handle, enum cqm_object_type object_type,
+ u32 index, bool bh);
+void cqm3_object_put(struct cqm_object *object);
+
+s32 cqm3_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count,
+ u8 pagenum, u64 db);
+s32 cqm_ring_direct_wqe_db(void *ex_handle, u32 service_type, u8 db_count, void *direct_wqe);
+s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type, void *direct_wqe);
+
+#ifdef __cplusplus
+#if __cplusplus
+}
+#endif
+#endif /* __cplusplus */
+
+#endif /* CQM_OBJECT_H */
diff --git a/drivers/scsi/spfc/hw/spfc_hba.c b/drivers/scsi/spfc/hw/spfc_hba.c
new file mode 100644
index 000000000000..179f49ddd7ad
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_hba.c
@@ -0,0 +1,1724 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_hba.h"
+#include "spfc_module.h"
+#include "spfc_utils.h"
+#include "spfc_chipitf.h"
+#include "spfc_io.h"
+#include "spfc_lld.h"
+#include "sphw_hw.h"
+#include "spfc_cqm_main.h"
+
+struct spfc_hba_info *spfc_hba[SPFC_HBA_PORT_MAX_NUM];
+ulong probe_bit_map[SPFC_MAX_PROBE_PORT_NUM / SPFC_PORT_NUM_PER_TABLE];
+static ulong card_num_bit_map[SPFC_MAX_PROBE_PORT_NUM / SPFC_PORT_NUM_PER_TABLE];
+static struct spfc_card_num_manage card_num_manage[SPFC_MAX_CARD_NUM];
+spinlock_t probe_spin_lock;
+u32 max_parent_qpc_num;
+
+static int spfc_probe(struct spfc_lld_dev *lld_dev, void **uld_dev, char *uld_dev_name);
+static void spfc_remove(struct spfc_lld_dev *lld_dev, void *uld_dev);
+static u32 spfc_initial_chip_access(struct spfc_hba_info *hba);
+static void spfc_release_chip_access(struct spfc_hba_info *hba);
+static u32 spfc_port_config_set(void *hba, enum unf_port_config_set_op opcode, void *var_in);
+static u32 spfc_port_config_get(void *hba, enum unf_port_cfg_get_op opcode, void *para_out);
+static u32 spfc_port_update_wwn(void *hba, void *para_in);
+static u32 spfc_get_chip_info(struct spfc_hba_info *hba);
+static u32 spfc_delete_scqc_via_cmdq_sync(struct spfc_hba_info *hba, u32 scqn);
+static u32 spfc_delete_srqc_via_cmdq_sync(struct spfc_hba_info *hba, u64 srqc_gpa);
+static u32 spfc_get_hba_pcie_link_state(void *hba, void *link_state);
+static u32 spfc_port_check_fw_ready(struct spfc_hba_info *hba);
+
+struct spfc_uld_info fc_uld_info = {
+ .probe = spfc_probe,
+ .remove = spfc_remove,
+ .resume = NULL,
+ .event = NULL,
+ .suspend = NULL,
+ .ioctl = NULL
+};
+
+struct service_register_template service_cqm_temp = {
+ .service_type = SERVICE_T_FC,
+ .scq_ctx_size = SPFC_SCQ_CNTX_SIZE,
+ .srq_ctx_size = SPFC_SRQ_CNTX_SIZE, /* srq, scq context_size configuration */
+ .aeq_callback = spfc_process_aeqe, /* the API of asynchronous event from TILE to driver */
+};
+
+/* default configuration: auto speed, auto topology, INI+TGT */
+static struct unf_cfg_item spfc_port_cfg_parm[] = {
+ {"port_id", 0, 0x110000, 0xffffff},
+ /* port mode:INI(0x20), TGT(0x10), BOTH(0x30) */
+ {"port_mode", 0, 0x20, 0xff},
+ /* port topology, 0x3: loop , 0xc:p2p, 0xf:auto, 0x10:vn2vn */
+ {"port_topology", 0, 0xf, 0x20},
+ {"port_alpa", 0, 0xdead, 0xffff}, /* alpa address of port */
+ /* queue depth of originator registered to SCSI midlayer */
+ {"max_queue_depth", 0, 128, 128},
+ {"sest_num", 0, 2048, 2048},
+ {"max_login", 0, 2048, 2048},
+ /* node name, bits 32-63 */
+ {"node_name_high", 0, 0x1000286e, 0xffffffff},
+ /* node name, bits 0-31 */
+ {"node_name_low", 0, 0xd4bbf12f, 0xffffffff},
+ /* port name, bits 32-63 */
+ {"port_name_high", 0, 0x2000286e, 0xffffffff},
+ /* port name, bits 0-31 */
+ {"port_name_low", 0, 0xd4bbf12f, 0xffffffff},
+ /* port speed 0:auto 1:1Gbps 2:2Gbps 3:4Gbps 4:8Gbps 5:16Gbps */
+ {"port_speed", 0, 0, 32},
+ {"interrupt_delay", 0, 0, 100}, /* unit: us */
+ {"tape_support", 0, 0, 1}, /* tape support */
+ {"End", 0, 0, 0}
+};
+
+struct unf_low_level_functioon_op spfc_func_op = {
+ .low_level_type = UNF_SPFC_FC,
+ .name = "SPFC",
+ .xchg_mgr_type = UNF_LOW_LEVEL_MGR_TYPE_PASSTIVE,
+ .abts_xchg = UNF_NO_EXTRA_ABTS_XCHG,
+ .passthrough_flag = UNF_LOW_LEVEL_PASS_THROUGH_PORT_LOGIN,
+ .support_max_npiv_num = UNF_SPFC_MAXNPIV_NUM,
+ .support_max_ssq_num = SPFC_MAX_SSQ_NUM - 1,
+ .chip_id = 0,
+ .support_max_speed = UNF_PORT_SPEED_32_G,
+ .support_max_rport = UNF_SPFC_MAXRPORT_NUM,
+ .sfp_type = UNF_PORT_TYPE_FC_SFP,
+ .rport_release_type = UNF_LOW_LEVEL_RELEASE_RPORT_ASYNC,
+ .sirt_page_mode = UNF_LOW_LEVEL_SIRT_PAGE_MODE_XCHG,
+
+ /* Link service */
+ .service_op = {
+ .unf_ls_gs_send = spfc_send_ls_gs_cmnd,
+ .unf_bls_send = spfc_send_bls_cmnd,
+ .unf_cmnd_send = spfc_send_scsi_cmnd,
+ .unf_release_rport_res = spfc_free_parent_resource,
+ .unf_flush_ini_resp_que = spfc_flush_ini_resp_queue,
+ .unf_alloc_rport_res = spfc_alloc_parent_resource,
+ .ll_release_xid = spfc_free_xid,
+ },
+
+ /* Port Mgr */
+ .port_mgr_op = {
+ .ll_port_config_set = spfc_port_config_set,
+ .ll_port_config_get = spfc_port_config_get,
+ }
+};
+
+struct spfc_port_cfg_op {
+ enum unf_port_config_set_op opcode;
+ u32 (*spfc_operation)(void *hba, void *para);
+};
+
+struct spfc_port_cfg_op spfc_config_set_op[] = {
+ {UNF_PORT_CFG_SET_PORT_SWITCH, spfc_sfp_switch},
+ {UNF_PORT_CFG_UPDATE_WWN, spfc_port_update_wwn},
+ {UNF_PORT_CFG_UPDATE_FABRIC_PARAM, spfc_update_fabric_param},
+ {UNF_PORT_CFG_UPDATE_PLOGI_PARAM, spfc_update_port_param},
+ {UNF_PORT_CFG_SET_BUTT, NULL}
+};
+
+struct spfc_port_cfg_get_op {
+ enum unf_port_cfg_get_op opcode;
+ u32 (*spfc_operation)(void *hba, void *para);
+};
+
+struct spfc_port_cfg_get_op spfc_config_get_op[] = {
+ {UNF_PORT_CFG_GET_TOPO_ACT, spfc_get_topo_act},
+ {UNF_PORT_CFG_GET_LOOP_MAP, spfc_get_loop_map},
+ {UNF_PORT_CFG_GET_WORKBALE_BBCREDIT, spfc_get_workable_bb_credit},
+ {UNF_PORT_CFG_GET_WORKBALE_BBSCN, spfc_get_workable_bb_scn},
+ {UNF_PORT_CFG_GET_LOOP_ALPA, spfc_get_loop_alpa},
+ {UNF_PORT_CFG_GET_MAC_ADDR, spfc_get_chip_msg},
+ {UNF_PORT_CFG_GET_PCIE_LINK_STATE, spfc_get_hba_pcie_link_state},
+ {UNF_PORT_CFG_GET_BUTT, NULL},
+};
+
+static u32 spfc_port_update_wwn(void *hba, void *para_in)
+{
+ struct unf_port_wwn *port_wwn = NULL;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
+
+ port_wwn = (struct unf_port_wwn *)para_in;
+
+ /* Save the updated WWNs into the HBA */
+ *(u64 *)spfc_hba->sys_node_name = port_wwn->sys_node_name;
+ *(u64 *)spfc_hba->sys_port_name = port_wwn->sys_port_wwn;
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "[info]Port(0x%x) updates WWNN(0x%llx) WWPN(0x%llx)",
+ spfc_hba->port_cfg.port_id,
+ *(u64 *)spfc_hba->sys_node_name,
+ *(u64 *)spfc_hba->sys_port_name);
+
+ return RETURN_OK;
+}
+
+static u32 spfc_port_config_set(void *hba, enum unf_port_config_set_op opcode,
+ void *var_in)
+{
+ u32 op_idx = 0;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ for (op_idx = 0; op_idx < sizeof(spfc_config_set_op) /
+ sizeof(struct spfc_port_cfg_op); op_idx++) {
+ if (opcode == spfc_config_set_op[op_idx].opcode) {
+ if (!spfc_config_set_op[op_idx].spfc_operation) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Null operation for configuration, opcode(0x%x), operation ID(0x%x)",
+ opcode, op_idx);
+
+ return UNF_RETURN_ERROR;
+ }
+ return spfc_config_set_op[op_idx].spfc_operation(hba, var_in);
+ }
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]No operation code for configuration, opcode(0x%x)",
+ opcode);
+
+ return UNF_RETURN_ERROR;
+}
+
+static u32 spfc_port_config_get(void *hba, enum unf_port_cfg_get_op opcode,
+ void *para_out)
+{
+ u32 op_idx = 0;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ for (op_idx = 0; op_idx < sizeof(spfc_config_get_op) /
+ sizeof(struct spfc_port_cfg_get_op); op_idx++) {
+ if (opcode == spfc_config_get_op[op_idx].opcode) {
+ if (!spfc_config_get_op[op_idx].spfc_operation) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Null operation to get configuration, opcode(0x%x), operation ID(0x%x)",
+ opcode, op_idx);
+ return UNF_RETURN_ERROR;
+ }
+ return spfc_config_get_op[op_idx].spfc_operation(hba, para_out);
+ }
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]No operation to get configuration, opcode(0x%x)",
+ opcode);
+
+ return UNF_RETURN_ERROR;
+}
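+
+/* Usage sketch (illustrative only; real callers live in the UNF layer): a get
+ * opcode is dispatched through spfc_config_get_op[] above, for example
+ *
+ * u32 topo = 0; // output type assumed here for illustration
+ *
+ * if (spfc_port_config_get(hba, UNF_PORT_CFG_GET_TOPO_ACT, &topo) != RETURN_OK)
+ *         return UNF_RETURN_ERROR; // hypothetical error handling
+ */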
+
+static u32 spfc_fc_mode_check(void *hw_dev_handle)
+{
+ FC_CHECK_RETURN_VALUE(hw_dev_handle, UNF_RETURN_ERROR);
+
+ if (!sphw_support_fc(hw_dev_handle, NULL)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Work mode is incorrect");
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Selected work mode is FC");
+
+ return RETURN_OK;
+}
+
+static u32 spfc_check_port_cfg(const struct spfc_port_cfg *port_cfg)
+{
+ bool topo_condition = false;
+ bool speed_condition = false;
+ /* About Work Topology */
+ topo_condition = ((port_cfg->port_topology != UNF_TOP_LOOP_MASK) &&
+ (port_cfg->port_topology != UNF_TOP_P2P_MASK) &&
+ (port_cfg->port_topology != UNF_TOP_AUTO_MASK));
+ if (topo_condition) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Configured port topology(0x%x) is incorrect",
+ port_cfg->port_topology);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* About Work Mode */
+ if (port_cfg->port_mode != UNF_PORT_MODE_INI) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Configured port mode(0x%x) is incorrect",
+ port_cfg->port_mode);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* About Work Speed */
+ speed_condition = ((port_cfg->port_speed != UNF_PORT_SPEED_AUTO) &&
+ (port_cfg->port_speed != UNF_PORT_SPEED_2_G) &&
+ (port_cfg->port_speed != UNF_PORT_SPEED_4_G) &&
+ (port_cfg->port_speed != UNF_PORT_SPEED_8_G) &&
+ (port_cfg->port_speed != UNF_PORT_SPEED_16_G) &&
+ (port_cfg->port_speed != UNF_PORT_SPEED_32_G));
+ if (speed_condition) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Configured port speed(0x%x) is incorrect",
+ port_cfg->port_speed);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Check port configuration OK");
+
+ return RETURN_OK;
+}
+
+static u32 spfc_get_port_cfg(struct spfc_hba_info *hba,
+ struct spfc_chip_info *chip_info, u8 card_num)
+{
+#define UNF_CONFIG_ITEM_LEN 15
+ /* Maximum length of a configuration item name, including the end
+ * character
+ */
+#define UNF_MAX_ITEM_NAME_LEN (32 + 1)
+
+ /* Get and check parameters */
+ char cfg_item[UNF_MAX_ITEM_NAME_LEN];
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ memset((void *)cfg_item, 0, sizeof(cfg_item));
+
+ spfc_hba->card_info.func_num = (sphw_global_func_id(hba->dev_handle)) & UNF_FUN_ID_MASK;
+ spfc_hba->card_info.card_num = card_num;
+
+ /* The range of PF of FC server is from PF1 to PF2 */
+ snprintf(cfg_item, UNF_MAX_ITEM_NAME_LEN, "spfc_cfg_%1u", (spfc_hba->card_info.func_num));
+
+ cfg_item[UNF_MAX_ITEM_NAME_LEN - 1] = 0;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Get port configuration: %s", cfg_item);
+
+ /* Get configuration parameters from file */
+ UNF_LOWLEVEL_GET_CFG_PARMS(ret, cfg_item, &spfc_port_cfg_parm[ARRAY_INDEX_0],
+ (u32 *)(void *)(&spfc_hba->port_cfg),
+ sizeof(spfc_port_cfg_parm) / sizeof(struct unf_cfg_item));
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't get configuration",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+ }
+
+ if (max_parent_qpc_num <= SPFC_MAX_PARENT_QPC_NUM) {
+ spfc_hba->port_cfg.sest_num = UNF_SPFC_MAXRPORT_NUM;
+ spfc_hba->port_cfg.max_login = UNF_SPFC_MAXRPORT_NUM;
+ }
+
+ spfc_hba->port_cfg.port_id &= SPFC_PORT_ID_MASK;
+ spfc_hba->port_cfg.port_id |= spfc_hba->card_info.card_num << UNF_SHIFT_8;
+ spfc_hba->port_cfg.port_id |= spfc_hba->card_info.func_num;
+ spfc_hba->port_cfg.tape_support = (u32)chip_info->tape_support;
+
+ /* Parameters check */
+ ret = spfc_check_port_cfg(&spfc_hba->port_cfg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) check configuration incorrect",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+ }
+
+ /* Apply the configuration read from the file */
+ spfc_hba->port_speed_cfg = spfc_hba->port_cfg.port_speed;
+ spfc_hba->port_topo_cfg = spfc_hba->port_cfg.port_topology;
+ spfc_hba->port_mode = (enum unf_port_mode)(spfc_hba->port_cfg.port_mode);
+
+ return ret;
+}
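+
+/* Worked example (illustrative, not from the original patch): after the
+ * masking above, the final port_id is composed as
+ *
+ *   port_id = (cfg_port_id & SPFC_PORT_ID_MASK) | (card_num << 8) | func_num
+ *
+ * e.g. a configured id of 0x110000 with card_num 0x1 and func_num 0x2 yields
+ * 0x110102. The numbers are examples only.
+ */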
+
+void spfc_generate_sys_wwn(struct spfc_hba_info *hba)
+{
+ FC_CHECK_RETURN_VOID(hba);
+
+ *(u64 *)hba->sys_node_name = (((u64)hba->port_cfg.node_name_hi << UNF_SHIFT_32) |
+ (hba->port_cfg.node_name_lo));
+ *(u64 *)hba->sys_port_name = (((u64)hba->port_cfg.port_name_hi << UNF_SHIFT_32) |
+ (hba->port_cfg.port_name_lo));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]NodeName = 0x%llx, PortName = 0x%llx",
+ *(u64 *)hba->sys_node_name, *(u64 *)hba->sys_port_name);
+}
+
+static u32 spfc_create_queues(struct spfc_hba_info *hba)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ SPFC_FUNCTION_ENTER;
+
+ /* Initialize shared resources of SCQ and SRQ in parent queue */
+ ret = spfc_create_common_share_queues(hba);
+ if (ret != RETURN_OK)
+ goto out_create_common_queue_fail;
+
+ /* Initialize parent queue manager resources */
+ ret = spfc_alloc_parent_queue_mgr(hba);
+ if (ret != RETURN_OK)
+ goto out_free_share_queue_resource;
+
+ /* Initialize shared WQE page pool in parent SQ */
+ ret = spfc_alloc_parent_sq_wqe_page_pool(hba);
+ if (ret != RETURN_OK)
+ goto out_free_parent_queue_resource;
+
+ ret = spfc_create_ssq(hba);
+ if (ret != RETURN_OK)
+ goto out_free_parent_wqe_page_pool;
+
+ /*
+ * Notice: the configuration of SQ and QID(default_sqid)
+ * must be the same in FC
+ */
+ hba->next_clear_sq = 0;
+ hba->default_sqid = SPFC_QID_SQ;
+
+ SPFC_FUNCTION_RETURN;
+ return RETURN_OK;
+out_free_parent_wqe_page_pool:
+ spfc_free_parent_sq_wqe_page_pool(hba);
+
+out_free_parent_queue_resource:
+ spfc_free_parent_queue_mgr(hba);
+
+out_free_share_queue_resource:
+ spfc_flush_scq_ctx(hba);
+ spfc_flush_srq_ctx(hba);
+ spfc_destroy_common_share_queues(hba);
+
+out_create_common_queue_fail:
+ SPFC_FUNCTION_RETURN;
+
+ return ret;
+}
+
+static u32 spfc_alloc_dma_buffers(struct spfc_hba_info *hba)
+{
+ struct pci_dev *pci_dev = NULL;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ pci_dev = hba->pci_dev;
+ FC_CHECK_RETURN_VALUE(pci_dev, UNF_RETURN_ERROR);
+
+ hba->sfp_buf = dma_alloc_coherent(&hba->pci_dev->dev,
+ sizeof(struct unf_sfp_err_rome_info),
+ &hba->sfp_dma_addr, GFP_KERNEL);
+ if (!hba->sfp_buf) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't allocate SFP DMA buffer",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(hba->sfp_buf, 0, sizeof(struct unf_sfp_err_rome_info));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) allocate sfp buffer(0x%p 0x%llx)",
+ hba->port_cfg.port_id, hba->sfp_buf,
+ (u64)hba->sfp_dma_addr);
+
+ return RETURN_OK;
+}
+
+static void spfc_free_dma_buffers(struct spfc_hba_info *hba)
+{
+ struct pci_dev *pci_dev = NULL;
+
+ FC_CHECK_RETURN_VOID(hba);
+ pci_dev = hba->pci_dev;
+ FC_CHECK_RETURN_VOID(pci_dev);
+
+ if (hba->sfp_buf) {
+ dma_free_coherent(&pci_dev->dev, sizeof(struct unf_sfp_err_rome_info),
+ hba->sfp_buf, hba->sfp_dma_addr);
+
+ hba->sfp_buf = NULL;
+ hba->sfp_dma_addr = 0;
+ }
+}
+
+static void spfc_destroy_queues(struct spfc_hba_info *hba)
+{
+ /* Free ssq */
+ spfc_free_ssq(hba, SPFC_MAX_SSQ_NUM);
+
+ /* Free parent queue resource */
+ spfc_free_parent_queues(hba);
+
+ /* Free queue manager resource */
+ spfc_free_parent_queue_mgr(hba);
+
+ /* Free linked List SQ and WQE page pool resource */
+ spfc_free_parent_sq_wqe_page_pool(hba);
+
+ /* Free shared SRQ and SCQ queue resource */
+ spfc_destroy_common_share_queues(hba);
+}
+
+static u32 spfc_alloc_default_session(struct spfc_hba_info *hba)
+{
+ struct unf_port_info rport_info = {0};
+ u32 wait_sq_cnt = 0;
+
+ rport_info.nport_id = 0xffffff;
+ rport_info.rport_index = SPFC_DEFAULT_RPORT_INDEX;
+ rport_info.local_nport_id = 0xffffff;
+ rport_info.port_name = 0;
+ rport_info.cs_ctrl = 0x81;
+
+ if (spfc_alloc_parent_resource((void *)hba, &rport_info) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Alloc default session resource failed");
+ goto failed;
+ }
+
+ for (;;) {
+ if (hba->default_sq_info.default_sq_flag == 1)
+ break;
+
+ msleep(SPFC_WAIT_SESS_ENABLE_ONE_TIME_MS);
+ wait_sq_cnt++;
+ if (wait_sq_cnt >= SPFC_MAX_WAIT_LOOP_TIMES) {
+ hba->default_sq_info.default_sq_flag = 0xF;
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Wait Default Session enable timeout");
+ goto failed;
+ }
+ }
+
+ if (spfc_mbx_config_default_session(hba, 1) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Notify up config default session table fail");
+ goto failed;
+ }
+
+ return RETURN_OK;
+
+failed:
+ spfc_sess_resource_free_sync((void *)hba, &rport_info);
+ return UNF_RETURN_ERROR;
+}
+
+static u32 spfc_init_host_res(struct spfc_hba_info *hba)
+{
+ u32 ret = RETURN_OK;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+
+ SPFC_FUNCTION_ENTER;
+
+ /* Initialize spin lock */
+ spin_lock_init(&spfc_hba->hba_lock);
+ spin_lock_init(&spfc_hba->flush_state_lock);
+ spin_lock_init(&spfc_hba->clear_state_lock);
+ spin_lock_init(&spfc_hba->spin_lock);
+ spin_lock_init(&spfc_hba->srq_delay_info.srq_lock);
+ /* Initialize init_completion */
+ init_completion(&spfc_hba->hba_init_complete);
+ init_completion(&spfc_hba->mbox_complete);
+ init_completion(&spfc_hba->vpf_complete);
+ init_completion(&spfc_hba->fcfi_complete);
+ init_completion(&spfc_hba->get_sfp_complete);
+ /* Step-1: initialize the communication channel between driver and uP */
+ ret = spfc_initial_chip_access(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't initialize chip access",
+ spfc_hba->port_cfg.port_id);
+
+ goto out_unmap_memory;
+ }
+ /* Step-2: get chip configuration information before creating
+ * queue resources
+ */
+ ret = spfc_get_chip_info(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't get chip information",
+ spfc_hba->port_cfg.port_id);
+
+ goto out_unmap_memory;
+ }
+
+ /* Step-3: create queue resources */
+ ret = spfc_create_queues(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't create queues",
+ spfc_hba->port_cfg.port_id);
+
+ goto out_release_chip_access;
+ }
+ /* Allocate DMA buffer (SFP information) */
+ ret = spfc_alloc_dma_buffers(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't allocate DMA buffers",
+ spfc_hba->port_cfg.port_id);
+
+ goto out_destroy_queues;
+ }
+ /* Initialize status parameters */
+ spfc_hba->active_port_speed = UNF_PORT_SPEED_UNKNOWN;
+ spfc_hba->active_topo = UNF_ACT_TOP_UNKNOWN;
+ spfc_hba->sfp_on = false;
+ spfc_hba->port_loop_role = UNF_LOOP_ROLE_MASTER_OR_SLAVE;
+ spfc_hba->phy_link = UNF_PORT_LINK_DOWN;
+ spfc_hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_INIT;
+
+ /* Initialize parameters referring to the lowlevel */
+ spfc_hba->remote_rttov_tag = 0;
+ spfc_hba->port_bb_scn_cfg = SPFC_LOWLEVEL_DEFAULT_BB_SCN;
+
+ /* Initialize timer, and the unit of E_D_TOV is ms */
+ spfc_hba->remote_edtov_tag = 0;
+ spfc_hba->remote_bb_credit = 0;
+ spfc_hba->compared_bb_scn = 0;
+ spfc_hba->compared_edtov_val = UNF_DEFAULT_EDTOV;
+ spfc_hba->compared_ratov_val = UNF_DEFAULT_RATOV;
+ spfc_hba->removing = false;
+ spfc_hba->dev_present = true;
+
+ /* Initialize parameters about cos */
+ spfc_hba->cos_bitmap = cos_bit_map;
+ memset(spfc_hba->cos_rport_cnt, 0, SPFC_MAX_COS_NUM * sizeof(atomic_t));
+
+ /* Mailbox access completion */
+ complete(&spfc_hba->mbox_complete);
+
+ ret = spfc_alloc_default_session(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't allocate Default Session",
+ spfc_hba->port_cfg.port_id);
+
+ goto out_destroy_dma_buff;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]SPFC port(0x%x) initialize host resources succeeded",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+
+out_destroy_dma_buff:
+ spfc_free_dma_buffers(spfc_hba);
+out_destroy_queues:
+ spfc_flush_scq_ctx(spfc_hba);
+ spfc_flush_srq_ctx(spfc_hba);
+ spfc_destroy_queues(spfc_hba);
+
+out_release_chip_access:
+ spfc_release_chip_access(spfc_hba);
+
+out_unmap_memory:
+ return ret;
+}
+
+static u32 spfc_get_chip_info(struct spfc_hba_info *hba)
+{
+ u32 ret = RETURN_OK;
+ u32 exi_count = 0;
+ u32 exi_base = 0;
+ u32 exi_stride = 0;
+ u32 fun_idx = 0;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ hba->vpid_start = hba->service_cap.dev_fc_cap.vp_id_start;
+ hba->vpid_end = hba->service_cap.dev_fc_cap.vp_id_end;
+ fun_idx = sphw_global_func_id(hba->dev_handle);
+
+ exi_count = (max_parent_qpc_num <= SPFC_MAX_PARENT_QPC_NUM) ?
+ exit_count >> UNF_SHIFT_1 : exit_count;
+ exi_stride = (max_parent_qpc_num <= SPFC_MAX_PARENT_QPC_NUM) ?
+ exit_stride >> UNF_SHIFT_1 : exit_stride;
+ exi_base = exit_base;
+
+ exi_base += (fun_idx * exi_stride);
+ hba->exi_base = SPFC_LSW(exi_base);
+ hba->exi_count = SPFC_LSW(exi_count);
+ hba->max_support_speed = max_speed;
+ hba->port_index = SPFC_LSB(fun_idx);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) base information: PortIndex=0x%x, ExiBase=0x%x, ExiCount=0x%x, VpIdStart=0x%x, VpIdEnd=0x%x, MaxSpeed=0x%x, Speed=0x%x, Topo=0x%x",
+ hba->port_cfg.port_id, hba->port_index, hba->exi_base,
+ hba->exi_count, hba->vpid_start, hba->vpid_end,
+ hba->max_support_speed, hba->port_speed_cfg, hba->port_topo_cfg);
+
+ return ret;
+}
+
+static u32 spfc_initial_chip_access(struct spfc_hba_info *hba)
+{
+ int ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ /* 1. Initialize cqm access related with scq, emb cq, aeq(ucode-->driver) */
+ service_cqm_temp.service_handle = hba;
+
+ ret = cqm3_service_register(hba->dev_handle, &service_cqm_temp);
+ if (ret != CQM_SUCCESS)
+ return UNF_RETURN_ERROR;
+
+ /* 2. Initialize mailbox(driver-->up), aeq(up--->driver) access */
+ ret = sphw_register_mgmt_msg_cb(hba->dev_handle, COMM_MOD_FC, hba,
+ spfc_up_msg2driver_proc);
+ if (ret != CQM_SUCCESS)
+ goto out_unreg_cqm;
+
+ return RETURN_OK;
+
+out_unreg_cqm:
+ cqm3_service_unregister(hba->dev_handle, SERVICE_T_FC);
+
+ return UNF_RETURN_ERROR;
+}
+
+static void spfc_release_chip_access(struct spfc_hba_info *hba)
+{
+ FC_CHECK_RETURN_VOID(hba);
+ FC_CHECK_RETURN_VOID(hba->dev_handle);
+
+ sphw_unregister_mgmt_msg_cb(hba->dev_handle, COMM_MOD_FC);
+
+ cqm3_service_unregister(hba->dev_handle, SERVICE_T_FC);
+}
+
+static void spfc_update_lport_config(struct spfc_hba_info *hba,
+ struct unf_low_level_functioon_op *lowlevel_func)
+{
+#define SPFC_MULTI_CONF_NONSUPPORT 0
+
+ struct unf_lport_cfg_item *lport_cfg = NULL;
+
+ lport_cfg = &lowlevel_func->lport_cfg_items;
+
+ if (hba->port_cfg.max_login < lowlevel_func->support_max_rport)
+ lport_cfg->max_login = hba->port_cfg.max_login;
+ else
+ lport_cfg->max_login = lowlevel_func->support_max_rport;
+
+ if (hba->port_cfg.sest_num >> UNF_SHIFT_1 < UNF_RESERVE_SFS_XCHG)
+ lport_cfg->max_io = hba->port_cfg.sest_num;
+ else
+ lport_cfg->max_io = hba->port_cfg.sest_num - UNF_RESERVE_SFS_XCHG;
+
+ lport_cfg->max_sfs_xchg = UNF_MAX_SFS_XCHG;
+ lport_cfg->port_id = hba->port_cfg.port_id;
+ lport_cfg->port_mode = hba->port_cfg.port_mode;
+ lport_cfg->port_topology = hba->port_cfg.port_topology;
+ lport_cfg->max_queue_depth = hba->port_cfg.max_queue_depth;
+
+ lport_cfg->port_speed = hba->port_cfg.port_speed;
+ lport_cfg->tape_support = hba->port_cfg.tape_support;
+
+ lowlevel_func->sys_port_name = *(u64 *)hba->sys_port_name;
+ lowlevel_func->sys_node_name = *(u64 *)hba->sys_node_name;
+
+ /* Update chip information */
+ lowlevel_func->dev = hba->pci_dev;
+ lowlevel_func->chip_info.chip_work_mode = hba->work_mode;
+ lowlevel_func->chip_info.chip_type = hba->chip_type;
+ lowlevel_func->chip_info.disable_err_flag = 0;
+ lowlevel_func->support_max_speed = hba->max_support_speed;
+ lowlevel_func->support_min_speed = hba->min_support_speed;
+
+ lowlevel_func->chip_id = 0;
+
+ lowlevel_func->sfp_type = UNF_PORT_TYPE_FC_SFP;
+
+ lowlevel_func->multi_conf_support = SPFC_MULTI_CONF_NONSUPPORT;
+ lowlevel_func->support_max_hot_tag_range = hba->port_cfg.sest_num;
+ lowlevel_func->update_fw_reset_active = UNF_PORT_UNGRADE_FW_RESET_INACTIVE;
+ lowlevel_func->port_type = 0; /* DRV_PORT_ENTITY_TYPE_PHYSICAL */
+
+ if ((lport_cfg->port_id & UNF_FIRST_LPORT_ID_MASK) == lport_cfg->port_id)
+ lowlevel_func->support_upgrade_report = UNF_PORT_SUPPORT_UPGRADE_REPORT;
+ else
+ lowlevel_func->support_upgrade_report = UNF_PORT_UNSUPPORT_UPGRADE_REPORT;
+}
+
+static u32 spfc_create_lport(struct spfc_hba_info *hba)
+{
+ void *lport = NULL;
+ struct unf_low_level_functioon_op lowlevel_func;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ spfc_func_op.dev = hba->pci_dev;
+ memcpy(&lowlevel_func, &spfc_func_op, sizeof(struct unf_low_level_functioon_op));
+
+ /* Update port configuration table */
+ spfc_update_lport_config(hba, &lowlevel_func);
+
+ /* Apply for lport resources */
+ UNF_LOWLEVEL_ALLOC_LPORT(lport, hba, &lowlevel_func);
+ if (!lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't allocate Lport",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ hba->lport = lport;
+
+ return RETURN_OK;
+}
+
+void spfc_release_probe_index(u32 probe_index)
+{
+ if (probe_index >= SPFC_MAX_PROBE_PORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Probe index(0x%x) is invalid", probe_index);
+
+ return;
+ }
+
+ spin_lock(&probe_spin_lock);
+ if (!test_bit((int)probe_index, (const ulong *)probe_bit_map)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Probe index(0x%x) is not probed",
+ probe_index);
+
+ spin_unlock(&probe_spin_lock);
+
+ return;
+ }
+
+ clear_bit((int)probe_index, probe_bit_map);
+ spin_unlock(&probe_spin_lock);
+}
+
+static void spfc_delete_default_session(struct spfc_hba_info *hba)
+{
+ struct unf_port_info rport_info = {0};
+
+ rport_info.nport_id = 0xffffff;
+ rport_info.rport_index = SPFC_DEFAULT_RPORT_INDEX;
+ rport_info.local_nport_id = 0xffffff;
+ rport_info.port_name = 0;
+ rport_info.cs_ctrl = 0x81;
+
+ /* Tell the uP to disable the default session table first, then delete the default session */
+ (void)spfc_mbx_config_default_session(hba, 0);
+ spfc_sess_resource_free_sync((void *)hba, &rport_info);
+}
+
+static void spfc_release_host_res(struct spfc_hba_info *hba)
+{
+ spfc_free_dma_buffers(hba);
+
+ spfc_destroy_queues(hba);
+
+ spfc_release_chip_access(hba);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) release low level resource done",
+ hba->port_cfg.port_id);
+}
+
+static struct spfc_hba_info *spfc_init_hba(struct pci_dev *pci_dev,
+ void *hw_dev_handle,
+ struct spfc_chip_info *chip_info,
+ u8 card_num)
+{
+ u32 ret = RETURN_OK;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VALUE(pci_dev, NULL);
+ FC_CHECK_RETURN_VALUE(hw_dev_handle, NULL);
+
+ /* Allocate HBA */
+ hba = kmalloc(sizeof(struct spfc_hba_info), GFP_ATOMIC);
+ FC_CHECK_RETURN_VALUE(hba, NULL);
+ memset(hba, 0, sizeof(struct spfc_hba_info));
+
+ /* Heartbeat default */
+ hba->heart_status = 1;
+ /* Private data in pciDev */
+ hba->pci_dev = pci_dev;
+ hba->dev_handle = hw_dev_handle;
+
+ /* Work mode */
+ hba->work_mode = chip_info->work_mode;
+ /* Create work queue */
+ hba->work_queue = create_singlethread_workqueue("spfc");
+ if (!hba->work_queue) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Spfc create workqueue failed");
+
+ goto out_free_hba;
+ }
+ /* Init delay work */
+ INIT_DELAYED_WORK(&hba->srq_delay_info.del_work, spfc_rcvd_els_from_srq_timeout);
+ INIT_WORK(&hba->els_srq_clear_work, spfc_wq_destroy_els_srq);
+
+ /* Notice: Only use FC features */
+ (void)sphw_support_fc(hw_dev_handle, &hba->service_cap);
+ /* Check parent context available */
+ if (hba->service_cap.dev_fc_cap.max_parent_qpc_num == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]FC parent context is not allocated in this function");
+
+ goto out_destroy_workqueue;
+ }
+ max_parent_qpc_num = hba->service_cap.dev_fc_cap.max_parent_qpc_num;
+
+ /* Get port configuration */
+ ret = spfc_get_port_cfg(hba, chip_info, card_num);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Can't get port configuration");
+
+ goto out_destroy_workqueue;
+ }
+ /* Get WWN */
+ spfc_generate_sys_wwn(hba);
+
+ /* Initialize host resources */
+ ret = spfc_init_host_res(hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't initialize host resource",
+ hba->port_cfg.port_id);
+
+ goto out_destroy_workqueue;
+ }
+ /* Local Port create */
+ ret = spfc_create_lport(hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't create lport",
+ hba->port_cfg.port_id);
+ goto out_release_host_res;
+ }
+ complete(&hba->hba_init_complete);
+
+ /* Print reference count */
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) probe succeeded. Memory reference is 0x%x",
+ hba->port_cfg.port_id, atomic_read(&fc_mem_ref));
+
+ return hba;
+
+out_release_host_res:
+ spfc_delete_default_session(hba);
+ spfc_flush_scq_ctx(hba);
+ spfc_flush_srq_ctx(hba);
+ spfc_release_host_res(hba);
+
+out_destroy_workqueue:
+ flush_workqueue(hba->work_queue);
+ destroy_workqueue(hba->work_queue);
+ hba->work_queue = NULL;
+
+out_free_hba:
+ kfree(hba);
+
+ return NULL;
+}
+
+void spfc_get_total_probed_num(u32 *probe_cnt)
+{
+ u32 i = 0;
+ u32 cnt = 0;
+
+ spin_lock(&probe_spin_lock);
+ for (i = 0; i < SPFC_MAX_PROBE_PORT_NUM; i++) {
+ if (test_bit((int)i, (const ulong *)probe_bit_map))
+ cnt++;
+ }
+
+ *probe_cnt = cnt;
+ spin_unlock(&probe_spin_lock);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Probed port total number is 0x%x", cnt);
+}
+
+u32 spfc_assign_card_num(struct spfc_lld_dev *lld_dev,
+ struct spfc_chip_info *chip_info, u8 *card_num)
+{
+ u8 i = 0;
+ u64 card_index = 0;
+
+ card_index = (!pci_is_root_bus(lld_dev->pdev->bus)) ?
+ lld_dev->pdev->bus->parent->number : lld_dev->pdev->bus->number;
+
+ spin_lock(&probe_spin_lock);
+
+ for (i = 0; i < SPFC_MAX_CARD_NUM; i++) {
+ if (test_bit((int)i, (const ulong *)card_num_bit_map)) {
+ if (card_num_manage[i].card_number ==
+ card_index && !card_num_manage[i].is_removing
+ ) {
+ card_num_manage[i].port_count++;
+ *card_num = i;
+ spin_unlock(&probe_spin_lock);
+ return RETURN_OK;
+ }
+ }
+ }
+
+ for (i = 0; i < SPFC_MAX_CARD_NUM; i++) {
+ if (!test_bit((int)i, (const ulong *)card_num_bit_map)) {
+ card_num_manage[i].card_number = card_index;
+ card_num_manage[i].port_count = 1;
+ card_num_manage[i].is_removing = false;
+
+ *card_num = i;
+ set_bit(i, card_num_bit_map);
+
+ spin_unlock(&probe_spin_lock);
+
+ return RETURN_OK;
+ }
+ }
+
+ spin_unlock(&probe_spin_lock);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Probed more than 0x%x ports, probe failed", i);
+
+ return UNF_RETURN_ERROR;
+}
+
+static void spfc_dec_and_free_card_num(u8 card_num)
+{
+ /* 2 ports per card */
+ if (card_num >= SPFC_MAX_CARD_NUM) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Card number(0x%x) is invalid", card_num);
+
+ return;
+ }
+
+ spin_lock(&probe_spin_lock);
+
+ if (test_bit((int)card_num, (const ulong *)card_num_bit_map)) {
+ card_num_manage[card_num].port_count--;
+ card_num_manage[card_num].is_removing = true;
+
+ if (card_num_manage[card_num].port_count == 0) {
+ card_num_manage[card_num].card_number = 0;
+ card_num_manage[card_num].is_removing = false;
+ clear_bit((int)card_num, card_num_bit_map);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Can not find card number(0x%x)", card_num);
+ }
+
+ spin_unlock(&probe_spin_lock);
+}
+
+u32 spfc_assign_probe_index(u32 *probe_index)
+{
+ u32 i = 0;
+
+ spin_lock(&probe_spin_lock);
+ for (i = 0; i < SPFC_MAX_PROBE_PORT_NUM; i++) {
+ if (!test_bit((int)i, (const ulong *)probe_bit_map)) {
+ *probe_index = i;
+ set_bit(i, probe_bit_map);
+
+ spin_unlock(&probe_spin_lock);
+
+ return RETURN_OK;
+ }
+ }
+ spin_unlock(&probe_spin_lock);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Probed more than 0x%x ports, probe failed", i);
+
+ return UNF_RETURN_ERROR;
+}
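+
+/* Usage sketch (illustrative; mirrors what spfc_probe()/spfc_exit() do): a
+ * probe index is taken from the bitmap before HBA setup and must be returned
+ * with spfc_release_probe_index() on every failure path, e.g.
+ *
+ * u32 idx = 0;
+ *
+ * if (spfc_assign_probe_index(&idx) != RETURN_OK)
+ *         return UNF_RETURN_ERROR_S32;
+ * if (setup_hba() != RETURN_OK) { // setup_hba() is a hypothetical step
+ *         spfc_release_probe_index(idx);
+ *         return UNF_RETURN_ERROR_S32;
+ * }
+ */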
+
+u32 spfc_get_probe_index_by_port_id(u32 port_id, u32 *probe_index)
+{
+ u32 total_probe_num = 0;
+ u32 i = 0;
+ u32 probe_cnt = 0;
+
+ spfc_get_total_probed_num(&total_probe_num);
+
+ for (i = 0; i < SPFC_MAX_PROBE_PORT_NUM; i++) {
+ if (!spfc_hba[i])
+ continue;
+
+ if (total_probe_num == probe_cnt)
+ break;
+
+ if (port_id == spfc_hba[i]->port_cfg.port_id) {
+ *probe_index = spfc_hba[i]->probe_index;
+
+ return RETURN_OK;
+ }
+
+ probe_cnt++;
+ }
+
+ return UNF_RETURN_ERROR;
+}
+
+static int spfc_probe(struct spfc_lld_dev *lld_dev, void **uld_dev,
+ char *uld_dev_name)
+{
+ struct pci_dev *pci_dev = NULL;
+ struct spfc_hba_info *hba = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ const u8 work_mode = SPFC_SMARTIO_WORK_MODE_FC;
+ u32 probe_index = 0;
+ u32 probe_total_num = 0;
+ u8 card_num = INVALID_VALUE8;
+ struct spfc_chip_info chip_info;
+
+ FC_CHECK_RETURN_VALUE(lld_dev, UNF_RETURN_ERROR_S32);
+ FC_CHECK_RETURN_VALUE(lld_dev->hwdev, UNF_RETURN_ERROR_S32);
+ FC_CHECK_RETURN_VALUE(lld_dev->pdev, UNF_RETURN_ERROR_S32);
+ FC_CHECK_RETURN_VALUE(uld_dev, UNF_RETURN_ERROR_S32);
+ FC_CHECK_RETURN_VALUE(uld_dev_name, UNF_RETURN_ERROR_S32);
+
+ pci_dev = lld_dev->pdev;
+ memset(&chip_info, 0, sizeof(struct spfc_chip_info));
+ /* 1. Get & check Total_Probed_number */
+ spfc_get_total_probed_num(&probe_total_num);
+ if (probe_total_num >= allowed_probe_num) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Total probe num (0x%x) is larger than allowed number(0x%x)",
+ probe_total_num, allowed_probe_num);
+
+ return UNF_RETURN_ERROR_S32;
+ }
+ /* 2. Check device work mode */
+ ret = spfc_fc_mode_check(lld_dev->hwdev);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR_S32;
+
+ /* 3. Assign & Get new Probe index */
+ ret = spfc_assign_probe_index(&probe_index);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]AssignProbeIndex fail");
+
+ return UNF_RETURN_ERROR_S32;
+ }
+
+ ret = spfc_get_chip_capability((void *)lld_dev->hwdev, &chip_info);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]GetChipCapability fail");
+ return UNF_RETURN_ERROR_S32;
+ }
+ chip_info.work_mode = work_mode;
+
+ /* Assign & Get new Card number */
+ ret = spfc_assign_card_num(lld_dev, &chip_info, &card_num);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]spfc_assign_card_num fail");
+ spfc_release_probe_index(probe_index);
+
+ return UNF_RETURN_ERROR_S32;
+ }
+
+ /* Init HBA resource */
+ hba = spfc_init_hba(pci_dev, lld_dev->hwdev, &chip_info, card_num);
+ if (!hba) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Probe HBA(0x%x) failed. Memory reference = 0x%x",
+ probe_index, atomic_read(&fc_mem_ref));
+
+ spfc_release_probe_index(probe_index);
+ spfc_dec_and_free_card_num(card_num);
+
+ return UNF_RETURN_ERROR_S32;
+ }
+
+ /* Name by the order of probe */
+ *uld_dev = hba;
+ snprintf(uld_dev_name, SPFC_PORT_NAME_STR_LEN, "%s%02x%02x",
+ SPFC_PORT_NAME_LABEL, hba->card_info.card_num,
+ hba->card_info.func_num);
+ memcpy(hba->port_name, uld_dev_name, SPFC_PORT_NAME_STR_LEN);
+ hba->probe_index = probe_index;
+ spfc_hba[probe_index] = hba;
+
+ return RETURN_OK;
+}
+
+u32 spfc_sfp_switch(void *hba, void *para_in)
+{
+ struct spfc_hba_info *spfc_hba = (struct spfc_hba_info *)hba;
+ bool turn_on = false;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
+
+ /* Redundancy check */
+ turn_on = *((bool *)para_in);
+ if ((u32)turn_on == (u32)spfc_hba->sfp_on) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) FC physical port is already %s",
+ spfc_hba->port_cfg.port_id, (turn_on) ? "on" : "off");
+
+ return ret;
+ }
+
+ if (turn_on) {
+ ret = spfc_port_check_fw_ready(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Get port(0x%x) clear state failed, turn on fail",
+ spfc_hba->port_cfg.port_id);
+ return ret;
+ }
+ /* At first, configure port table info if necessary */
+ ret = spfc_config_port_table(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't configure port table",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+ }
+ }
+
+ /* Switch physical port */
+ ret = spfc_port_switch(spfc_hba, turn_on);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Port(0x%x) switch failed",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+ }
+
+ /* Update HBA's sfp state */
+ spfc_hba->sfp_on = turn_on;
+
+ return ret;
+}
+
+static u32 spfc_destroy_lport(struct spfc_hba_info *hba)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ UNF_LOWLEVEL_RELEASE_LOCAL_PORT(ret, hba->lport);
+ hba->lport = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) destroy L_Port done",
+ hba->port_cfg.port_id);
+
+ return ret;
+}
+
+static u32 spfc_port_check_fw_ready(struct spfc_hba_info *hba)
+{
+#define SPFC_PORT_CLEAR_DONE 0
+#define SPFC_PORT_CLEAR_DOING 1
+#define SPFC_WAIT_ONE_TIME_MS 1000
+#define SPFC_LOOP_TIMES 30
+
+ u32 clear_state = SPFC_PORT_CLEAR_DOING;
+ u32 ret = RETURN_OK;
+ u32 wait_timeout = 0;
+
+ do {
+ msleep(SPFC_WAIT_ONE_TIME_MS);
+ wait_timeout += SPFC_WAIT_ONE_TIME_MS;
+ ret = spfc_mbx_get_fw_clear_stat(hba, &clear_state);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ /* Give up if the clear state is still not done after waiting more than 30s in total */
+ if (wait_timeout > SPFC_LOOP_TIMES * SPFC_WAIT_ONE_TIME_MS &&
+ clear_state != SPFC_PORT_CLEAR_DONE)
+ return UNF_RETURN_ERROR;
+ } while (clear_state != SPFC_PORT_CLEAR_DONE);
+
+ return RETURN_OK;
+}
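+
+/* Timing note (illustrative arithmetic): the loop above polls once per second,
+ * so the worst-case wait before giving up is
+ *
+ *   SPFC_LOOP_TIMES * SPFC_WAIT_ONE_TIME_MS = 30 * 1000 ms = 30 s
+ *
+ * after which a still-pending clear state is reported as an error.
+ */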
+
+u32 spfc_port_reset(struct spfc_hba_info *hba)
+{
+ u32 ret = RETURN_OK;
+ ulong timeout = 0;
+ bool sfp_before_reset = false;
+ bool off_para_in = false;
+ struct pci_dev *pci_dev = NULL;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ pci_dev = spfc_hba->pci_dev;
+ FC_CHECK_RETURN_VALUE(pci_dev, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) reset HBA begin",
+ spfc_hba->port_cfg.port_id);
+
+ /* Wait for last init/reset completion */
+ timeout = wait_for_completion_timeout(&spfc_hba->hba_init_complete,
+ (ulong)SPFC_PORT_INIT_TIME_SEC_MAX * HZ);
+
+ if (timeout == SPFC_ZERO) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Last HBA initialize/reset timeout: %d second",
+ SPFC_PORT_INIT_TIME_SEC_MAX);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Save current port state */
+ sfp_before_reset = spfc_hba->sfp_on;
+
+ /* Inform the reset event to CM level before beginning */
+ UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_RESET_START, NULL);
+ spfc_hba->reset_time = jiffies;
+
+ /* Close SFP */
+ ret = spfc_sfp_switch(spfc_hba, &off_para_in);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't close SFP",
+ spfc_hba->port_cfg.port_id);
+ spfc_hba->sfp_on = sfp_before_reset;
+
+ complete(&spfc_hba->hba_init_complete);
+
+ return ret;
+ }
+
+ ret = spfc_port_check_fw_ready(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Get port(0x%x) clear state failed, hang port and report chip error",
+ spfc_hba->port_cfg.port_id);
+
+ complete(&spfc_hba->hba_init_complete);
+
+ return ret;
+ }
+
+ spfc_queue_pre_process(spfc_hba, false);
+
+ ret = spfc_mb_reset_chip(spfc_hba, SPFC_MBOX_SUBTYPE_LIGHT_RESET);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't reset chip mailbox",
+ spfc_hba->port_cfg.port_id);
+
+ UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_GET_FWLOG, NULL);
+ UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_DEBUG_DUMP, NULL);
+ }
+
+ /* Inform the success to CM level */
+ UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_RESET_END, NULL);
+
+ /* Queue open */
+ spfc_queue_post_process(spfc_hba);
+
+ /* Open SFP */
+ (void)spfc_sfp_switch(spfc_hba, &sfp_before_reset);
+
+ complete(&spfc_hba->hba_init_complete);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) reset HBA done",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+#undef SPFC_WAIT_LINKDOWN_EVENT_MS
+}
+
+static u32 spfc_delete_scqc_via_cmdq_sync(struct spfc_hba_info *hba, u32 scqn)
+{
+ /* Via CMND Queue */
+#define SPFC_DEL_SCQC_TIMEOUT 3000
+
+ int ret;
+ struct spfc_cmdqe_delete_scqc del_scqc_cmd;
+ struct sphw_cmd_buf *cmd_buf;
+
+ /* Alloc cmd buffer */
+ cmd_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmd_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf alloc failed");
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SCQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Build & Send Cmnd */
+ memset(&del_scqc_cmd, 0, sizeof(del_scqc_cmd));
+ del_scqc_cmd.wd0.task_type = SPFC_TASK_T_DEL_SCQC;
+ del_scqc_cmd.wd1.scqn = SPFC_LSW(scqn);
+ spfc_cpu_to_big32(&del_scqc_cmd, sizeof(del_scqc_cmd));
+ memcpy(cmd_buf->buf, &del_scqc_cmd, sizeof(del_scqc_cmd));
+ cmd_buf->size = sizeof(del_scqc_cmd);
+
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0, cmd_buf,
+ NULL, NULL, SPFC_DEL_SCQC_TIMEOUT,
+ SPHW_CHANNEL_FC);
+
+ /* Free cmnd buffer */
+ sphw_free_cmd_buf(hba->dev_handle, cmd_buf);
+
+ if (ret) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Send del scqc via cmdq failed, ret=0x%x",
+ ret);
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SCQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ SPFC_IO_STAT(hba, SPFC_TASK_T_DEL_SCQC);
+
+ return RETURN_OK;
+}
+
+static u32 spfc_delete_srqc_via_cmdq_sync(struct spfc_hba_info *hba, u64 srqc_gpa)
+{
+ /* Via CMND Queue */
+#define SPFC_DEL_SRQC_TIMEOUT 3000
+
+ int ret;
+ struct spfc_cmdqe_delete_srqc del_srqc_cmd;
+ struct sphw_cmd_buf *cmd_buf;
+
+ /* Alloc Cmnd buffer */
+ cmd_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmd_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf allocate failed");
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SRQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Build & Send Cmnd */
+ memset(&del_srqc_cmd, 0, sizeof(del_srqc_cmd));
+ del_srqc_cmd.wd0.task_type = SPFC_TASK_T_DEL_SRQC;
+ del_srqc_cmd.srqc_gpa_h = SPFC_HIGH_32_BITS(srqc_gpa);
+ del_srqc_cmd.srqc_gpa_l = SPFC_LOW_32_BITS(srqc_gpa);
+ spfc_cpu_to_big32(&del_srqc_cmd, sizeof(del_srqc_cmd));
+ memcpy(cmd_buf->buf, &del_srqc_cmd, sizeof(del_srqc_cmd));
+ cmd_buf->size = sizeof(del_srqc_cmd);
+
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0, cmd_buf,
+ NULL, NULL, SPFC_DEL_SRQC_TIMEOUT,
+ SPHW_CHANNEL_FC);
+
+ /* Free Cmnd Buffer */
+ sphw_free_cmd_buf(hba->dev_handle, cmd_buf);
+
+ if (ret) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Send del srqc via cmdq failed, ret=0x%x",
+ ret);
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SRQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ SPFC_IO_STAT(hba, SPFC_TASK_T_DEL_SRQC);
+
+ return RETURN_OK;
+}
+
+void spfc_flush_scq_ctx(struct spfc_hba_info *hba)
+{
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Start destroy total 0x%x SCQC", SPFC_TOTAL_SCQ_NUM);
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ (void)spfc_delete_scqc_via_cmdq_sync(hba, 0);
+}
+
+void spfc_flush_srq_ctx(struct spfc_hba_info *hba)
+{
+ struct spfc_srq_info *srq_info = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Start destroy ELS&IMMI SRQC");
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ /* Check state to avoid to flush SRQC again */
+ srq_info = &hba->els_srq_info;
+ if (srq_info->srq_type == SPFC_SRQ_ELS && srq_info->enable) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[event]HBA(0x%x) flush ELS SRQC",
+ hba->port_index);
+
+ (void)spfc_delete_srqc_via_cmdq_sync(hba, srq_info->cqm_srq_info->q_ctx_paddr);
+ }
+}
+
+void spfc_set_hba_flush_state(struct spfc_hba_info *hba, bool in_flush)
+{
+ ulong flag = 0;
+
+ spin_lock_irqsave(&hba->flush_state_lock, flag);
+ hba->in_flushing = in_flush;
+ spin_unlock_irqrestore(&hba->flush_state_lock, flag);
+}
+
+void spfc_set_hba_clear_state(struct spfc_hba_info *hba, bool clear_flag)
+{
+ ulong flag = 0;
+
+ spin_lock_irqsave(&hba->clear_state_lock, flag);
+ hba->port_is_cleared = clear_flag;
+ spin_unlock_irqrestore(&hba->clear_state_lock, flag);
+}
+
+bool spfc_hba_is_present(struct spfc_hba_info *hba)
+{
+ int ret_val = RETURN_OK;
+ bool present_flag = false;
+ u32 vendor_id = 0;
+
+ ret_val = pci_read_config_dword(hba->pci_dev, 0, &vendor_id);
+ vendor_id &= SPFC_PCI_VENDOR_ID_MASK;
+ if (ret_val == RETURN_OK && vendor_id == SPFC_PCI_VENDOR_ID_RAMAXEL) {
+ present_flag = true;
+ } else {
+ present_flag = false;
+ hba->dev_present = false;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[info]Port %s remove: vendor_id=0x%x, ret=0x%x",
+ present_flag ? "normal" : "surprise", vendor_id, ret_val);
+
+ return present_flag;
+}
+
+static void spfc_exit(struct pci_dev *pci_dev, struct spfc_hba_info *hba)
+{
+#define SPFC_WAIT_CLR_RESOURCE_MS 1000
+ u32 ret = UNF_RETURN_ERROR;
+ bool sfp_switch = false;
+ bool present_flag = true;
+
+ FC_CHECK_RETURN_VOID(pci_dev);
+ FC_CHECK_RETURN_VOID(hba);
+
+ hba->removing = true;
+
+ /* 1. Check HBA present or not */
+ present_flag = spfc_hba_is_present(hba);
+ if (present_flag) {
+ if (hba->phy_link == UNF_PORT_LINK_DOWN)
+ hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_FLUSHDONE;
+
+ /* At first, close sfp */
+ sfp_switch = false;
+ (void)spfc_sfp_switch((void *)hba, (void *)&sfp_switch);
+ }
+
+ /* 2. Report COM with HBA removing: delete route timer delay work */
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_BEGIN_REMOVE, NULL);
+
+ /* 3. Report COM with HBA NOP; COM forcibly releases I/O(s) & R_Port(s) */
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_NOP, NULL);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]PCI device(%p) remove port(0x%x) failed",
+ pci_dev, hba->port_index);
+ }
+
+ spfc_delete_default_session(hba);
+
+ if (present_flag)
+ /* 4.1 Wait for all SQ empty, free SRQ buffer & SRQC */
+ spfc_queue_pre_process(hba, true);
+
+ /* 5. Destroy L_Port */
+ (void)spfc_destroy_lport(hba);
+
+ /* 6. With HBA is present */
+ if (present_flag) {
+ /* Enable Queues dispatch */
+ spfc_queue_post_process(hba);
+
+ /* Need reset port if necessary */
+ (void)spfc_mb_reset_chip(hba, SPFC_MBOX_SUBTYPE_HEAVY_RESET);
+
+ /* Flush SCQ context */
+ spfc_flush_scq_ctx(hba);
+
+ /* Flush SRQ context */
+ spfc_flush_srq_ctx(hba);
+
+ sphw_func_rx_tx_flush(hba->dev_handle, SPHW_CHANNEL_FC);
+
+ /* NOTE: while flushing tx/rx, the hash buckets are cleaned out in the
+ * uP. Wait for the resources to be cleared completely.
+ */
+ msleep(SPFC_WAIT_CLR_RESOURCE_MS);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) flush scq & srq & root context done",
+ hba->port_cfg.port_id);
+ }
+
+ /* 7. Release host resources */
+ spfc_release_host_res(hba);
+
+ /* 8. Destroy FC work queue */
+ if (hba->work_queue) {
+ flush_workqueue(hba->work_queue);
+ destroy_workqueue(hba->work_queue);
+ hba->work_queue = NULL;
+ }
+
+ /* 9. Release Probe index & Decrease card number */
+ spfc_release_probe_index(hba->probe_index);
+ spfc_dec_and_free_card_num((u8)hba->card_info.card_num);
+
+ /* 10. Free HBA memory */
+ kfree(hba);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]PCI device(%p) remove succeed, memory reference is 0x%x",
+ pci_dev, atomic_read(&fc_mem_ref));
+}
+
+static void spfc_remove(struct spfc_lld_dev *lld_dev, void *uld_dev)
+{
+ struct pci_dev *pci_dev = NULL;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)uld_dev;
+ u32 probe_total_num = 0;
+ u32 probe_index = 0;
+
+ FC_CHECK_RETURN_VOID(lld_dev);
+ FC_CHECK_RETURN_VOID(uld_dev);
+ FC_CHECK_RETURN_VOID(lld_dev->hwdev);
+ FC_CHECK_RETURN_VOID(lld_dev->pdev);
+
+ pci_dev = hba->pci_dev;
+
+ /* Get total probed port number */
+ spfc_get_total_probed_num(&probe_total_num);
+ if (probe_total_num < 1) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port manager is empty and no need to remove");
+ return;
+ }
+
+ /* check pci vendor id */
+ if (pci_dev->vendor != SPFC_PCI_VENDOR_ID_RAMAXEL) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Wrong vendor id(0x%x) and exit",
+ pci_dev->vendor);
+ return;
+ }
+
+ /* Check function ability */
+ if (!sphw_support_fc(lld_dev->hwdev, NULL)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]FC is not enable in this function");
+ return;
+ }
+
+ /* Get probe index */
+ probe_index = hba->probe_index;
+
+ /* Parent context alloc check */
+ if (hba->service_cap.dev_fc_cap.max_parent_qpc_num == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]FC parent context not allocated in this function");
+ return;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]HBA(0x%x) start removing...", hba->port_index);
+
+ /* HBA removing... */
+ spfc_exit(pci_dev, hba);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) pci device removed, vendorid(0x%04x) devid(0x%04x)",
+ probe_index, pci_dev->vendor, pci_dev->device);
+
+ /* Probe index check */
+ if (probe_index < SPFC_HBA_PORT_MAX_NUM) {
+ spfc_hba[probe_index] = NULL;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Probe index(0x%x) is invalid and remove failed",
+ probe_index);
+ }
+
+ spfc_get_total_probed_num(&probe_total_num);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]Removed index=%u, RemainNum=%u, AllowNum=%u",
+ probe_index, probe_total_num, allowed_probe_num);
+}
+
+static u32 spfc_get_hba_pcie_link_state(void *hba, void *link_state)
+{
+ bool *link_state_info = link_state;
+ bool present_flag = true;
+ struct spfc_hba_info *spfc_hba = hba;
+ int ret;
+ bool last_dev_state = true;
+ bool cur_dev_state = true;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(link_state, UNF_RETURN_ERROR);
+ last_dev_state = spfc_hba->dev_present;
+ ret = sphw_get_card_present_state(spfc_hba->dev_handle, (bool *)&present_flag);
+ if (ret || !present_flag) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]port(0x%x) is not present, ret:%d, present_flag:%d",
+ spfc_hba->port_cfg.port_id, ret, present_flag);
+ cur_dev_state = false;
+ } else {
+ cur_dev_state = true;
+ }
+
+ spfc_hba->dev_present = cur_dev_state;
+
+ /* To prevent false alarms, the heartbeat is considered lost only
+ * when the device is found absent on two consecutive checks.
+ */
+ if (!last_dev_state && !cur_dev_state)
+ spfc_hba->heart_status = false;
+
+ *link_state_info = spfc_hba->dev_present;
+
+ return RETURN_OK;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_hba.h b/drivers/scsi/spfc/hw/spfc_hba.h
new file mode 100644
index 000000000000..937f00ea8fc7
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_hba.h
@@ -0,0 +1,341 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_HBA_H
+#define SPFC_HBA_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "spfc_queue.h"
+#include "sphw_crm.h"
+#define SPFC_PCI_VENDOR_ID_MASK (0xffff)
+
+#define FW_VER_LEN (32)
+#define HW_VER_LEN (32)
+#define FW_SUB_VER_LEN (24)
+
+#define SPFC_LOWLEVEL_RTTOV_TAG 0
+#define SPFC_LOWLEVEL_EDTOV_TAG 0
+#define SPFC_LOWLEVEL_DEFAULT_LOOP_BB_CREDIT (8)
+#define SPFC_LOWLEVEL_DEFAULT_32G_BB_CREDIT (255)
+#define SPFC_LOWLEVEL_DEFAULT_16G_BB_CREDIT (255)
+#define SPFC_LOWLEVEL_DEFAULT_8G_BB_CREDIT (255)
+#define SPFC_LOWLEVEL_DEFAULT_BB_SCN 0
+#define SPFC_LOWLEVEL_DEFAULT_RA_TOV UNF_DEFAULT_RATOV
+#define SPFC_LOWLEVEL_DEFAULT_ED_TOV UNF_DEFAULT_EDTOV
+
+#define SPFC_LOWLEVEL_DEFAULT_32G_ESCH_VALUE 28081
+#define SPFC_LOWLEVEL_DEFAULT_16G_ESCH_VALUE 14100
+#define SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE 7000
+#define SPFC_LOWLEVEL_DEFAULT_ESCH_BUST_SIZE 0x2000
+
+#define SPFC_PCI_STATUS 0x06
+
+#define SPFC_SMARTIO_WORK_MODE_FC 0x1
+#define SPFC_SMARTIO_WORK_MODE_OTHER 0xF
+#define UNF_FUN_ID_MASK 0x07
+
+#define UNF_SPFC_FC (0x01)
+#define UNF_SPFC_MAXNPIV_NUM 64 /* Set to 0 if NPIV is not supported */
+
+#define SPFC_MAX_COS_NUM (8)
+
+#define SPFC_INTR_ENABLE 0x5
+#define SPFC_INTR_DISABLE 0x0
+#define SPFC_CLEAR_FW_INTR 0x1
+#define SPFC_REG_ENABLE_INTR 0x00000200
+
+#define SPFC_PCI_VENDOR_ID_RAMAXEL 0x1E81
+
+#define SPFC_SCQ_CNTX_SIZE 32
+#define SPFC_SRQ_CNTX_SIZE 64
+
+#define SPFC_PORT_INIT_TIME_SEC_MAX 1
+
+#define SPFC_PORT_NAME_LABEL "spfc"
+#define SPFC_PORT_NAME_STR_LEN (16)
+
+#define SPFC_MAX_PROBE_PORT_NUM (64)
+#define SPFC_PORT_NUM_PER_TABLE (64)
+#define SPFC_MAX_CARD_NUM (32)
+
+#define SPFC_HBA_PORT_MAX_NUM SPFC_MAX_PROBE_PORT_NUM
+#define SPFC_SIRT_MIN_RXID 0
+#define SPFC_SIRT_MAX_RXID 255
+
+#define SPFC_GET_HBA_PORT_ID(hba) ((hba)->port_index)
+
+#define SPFC_MAX_WAIT_LOOP_TIMES 10000
+#define SPFC_WAIT_SESS_ENABLE_ONE_TIME_MS 1
+#define SPFC_WAIT_SESS_FREE_ONE_TIME_MS 1
+
+#define SPFC_PORT_ID_MASK 0xff0000
+
+#define SPFC_MAX_PARENT_QPC_NUM 2048
+struct spfc_port_cfg {
+ u32 port_id; /* Port ID */
+ u32 port_mode; /* Port mode:INI(0x20), TGT(0x10), BOTH(0x30) */
+ u32 port_topology; /* Port topo:0x3:loop,0xc:p2p,0xf:auto */
+ u32 port_alpa; /* Port ALPA */
+ u32 max_queue_depth; /* Max queue depth registered with the SCSI midlayer */
+ u32 sest_num; /* IO burst num:512-4096 */
+ u32 max_login; /* Max Login Session. */
+ u32 node_name_hi; /* nodename high 32 bits */
+ u32 node_name_lo; /* nodename low 32 bits */
+ u32 port_name_hi; /* portname high 32 bits */
+ u32 port_name_lo; /* portname low 32 bits */
+ u32 port_speed; /* Port speed 0:auto 4:4Gbps 8:8Gbps 16:16Gbps */
+ u32 interrupt_delay; /* Delay times(ms) in interrupt */
+ u32 tape_support; /* tape support */
+};
+
+#define SPFC_VER_INFO_SIZE 128
+struct spfc_drv_version {
+ char ver[SPFC_VER_INFO_SIZE];
+};
+
+struct spfc_card_info {
+ u32 card_num : 8;
+ u32 func_num : 8;
+ u32 base_func : 8;
+ /* Card type:UNF_FC_SERVER_BOARD_32_G(6) 32G mode,
+ * UNF_FC_SERVER_BOARD_16_G(7)16G mode
+ */
+ u32 card_type : 8;
+};
+
+struct spfc_card_num_manage {
+ bool is_removing;
+ u32 port_count;
+ u64 card_number;
+};
+
+struct spfc_sim_ini_err {
+ u32 err_code;
+ u32 times;
+};
+
+struct spfc_sim_pcie_err {
+ u32 err_code;
+ u32 times;
+};
+
+struct spfc_led_state {
+ u8 green_speed_led;
+ u8 yellow_speed_led;
+ u8 ac_led;
+ u8 rsvd;
+};
+
+enum spfc_led_activity {
+ SPFC_LED_CFG_ACTVE_FRAME = 0,
+ SPFC_LED_CFG_ACTVE_FC = 3
+};
+
+enum spfc_queue_set_stage {
+ SPFC_QUEUE_SET_STAGE_INIT = 0,
+ SPFC_QUEUE_SET_STAGE_SCANNING,
+ SPFC_QUEUE_SET_STAGE_FLUSHING,
+ SPFC_QUEUE_SET_STAGE_FLUSHDONE,
+ SPFC_QUEUE_SET_STAGE_BUTT
+};
+
+struct spfc_vport_info {
+ u64 node_name;
+ u64 port_name;
+ u32 port_mode; /* INI, TGT or both */
+ u32 nport_id; /* may be acquired by the low level and updated to the common layer */
+ void *vport;
+ u16 vp_index;
+};
+
+struct spfc_srq_delay_info {
+ u8 srq_delay_flag; /* Whether SRQ processing needs to be delayed */
+ u8 root_rq_rcvd_flag;
+ u16 rsd;
+
+ spinlock_t srq_lock;
+ struct unf_frame_pkg frame_pkg;
+
+ struct delayed_work del_work;
+};
+
+struct spfc_fw_ver_detail {
+ u8 ucode_ver[SPFC_VER_LEN];
+ u8 ucode_compile_time[SPFC_COMPILE_TIME_LEN];
+
+ u8 up_ver[SPFC_VER_LEN];
+ u8 up_compile_time[SPFC_COMPILE_TIME_LEN];
+
+ u8 boot_ver[SPFC_VER_LEN];
+ u8 boot_compile_time[SPFC_COMPILE_TIME_LEN];
+};
+
+/* get wwpn and wwnn */
+struct spfc_chip_info {
+ u8 work_mode;
+ u8 tape_support;
+ u64 wwpn;
+ u64 wwnn;
+};
+
+/* Default SQ info */
+struct spfc_default_sq_info {
+ u32 sq_cid;
+ u32 sq_xid;
+ u32 fun_cid;
+ u32 default_sq_flag;
+};
+
+struct spfc_hba_info {
+ struct pci_dev *pci_dev;
+ void *dev_handle;
+
+ struct fc_service_cap service_cap; /* struct fc_service_cap pstFcoeServiceCap; */
+
+ struct spfc_scq_info scq_info[SPFC_TOTAL_SCQ_NUM];
+ struct spfc_srq_info els_srq_info;
+
+ struct spfc_vport_info vport_info[UNF_SPFC_MAXNPIV_NUM + 1];
+
+ /* PCI IO Memory */
+ void __iomem *bar0;
+ u32 bar0_len;
+
+ struct spfc_parent_queue_mgr *parent_queue_mgr;
+
+ /* Link list Sq WqePage Pool */
+ struct spfc_sq_wqepage_pool sq_wpg_pool;
+
+ enum spfc_queue_set_stage queue_set_stage;
+ u32 next_clear_sq;
+ u32 default_sqid;
+
+ /* Port parameters, Obtained through firmware */
+ u16 queue_set_max_count;
+ u8 port_type; /* FC or FCoE Port */
+ u8 port_index; /* Phy Port */
+ u32 default_scqn;
+ char fw_ver[FW_VER_LEN]; /* FW version */
+ char hw_ver[HW_VER_LEN]; /* HW version */
+ char mst_fw_ver[FW_SUB_VER_LEN];
+ char fc_fw_ver[FW_SUB_VER_LEN];
+ u8 chip_type; /* chiptype:Smart or fc */
+ u8 work_mode;
+ struct spfc_card_info card_info;
+ char port_name[SPFC_PORT_NAME_STR_LEN];
+ u32 probe_index;
+
+ u16 exi_base;
+ u16 exi_count;
+ u16 vpf_count;
+ u8 vpid_start;
+ u8 vpid_end;
+
+ spinlock_t flush_state_lock;
+ bool in_flushing;
+
+ spinlock_t clear_state_lock;
+ bool port_is_cleared;
+
+ struct spfc_port_cfg port_cfg; /* Obtained through Config */
+
+ void *lport; /* Used in UNF level */
+
+ u8 sys_node_name[UNF_WWN_LEN];
+ u8 sys_port_name[UNF_WWN_LEN];
+
+ struct completion hba_init_complete;
+ struct completion mbox_complete;
+ struct completion vpf_complete;
+ struct completion fcfi_complete;
+ struct completion get_sfp_complete;
+
+ u16 init_stage;
+ u16 removing;
+ bool sfp_on;
+ bool dev_present;
+ bool heart_status;
+ spinlock_t hba_lock;
+ u32 port_topo_cfg;
+ u32 port_bb_scn_cfg;
+ u32 port_loop_role;
+ u32 port_speed_cfg;
+ u32 max_support_speed;
+ u32 min_support_speed;
+ u32 server_max_speed;
+
+ u8 remote_rttov_tag;
+ u8 remote_edtov_tag;
+ u16 compared_bb_scn;
+ u16 remote_bb_credit;
+ u32 compared_edtov_val;
+ u32 compared_ratov_val;
+ enum unf_act_topo active_topo;
+ u32 active_port_speed;
+ u32 active_rxbb_credit;
+ u32 active_bb_scn;
+
+ u32 phy_link;
+
+ enum unf_port_mode port_mode;
+
+ u32 fcp_cfg;
+
+ /* loop */
+ u8 active_alpa;
+ u8 loop_map_valid;
+ u8 loop_map[UNF_LOOPMAP_COUNT];
+
+ /* sfp info dma */
+ void *sfp_buf;
+ dma_addr_t sfp_dma_addr;
+ u32 sfp_status;
+ int chip_temp;
+ u32 sfp_posion;
+
+ u32 cos_bitmap;
+ atomic_t cos_rport_cnt[SPFC_MAX_COS_NUM];
+
+ /* fw debug dma buffer */
+ void *debug_buf;
+ dma_addr_t debug_buf_dma_addr;
+ void *log_buf;
+ dma_addr_t fw_log_dma_addr;
+
+ void *dma_addr;
+ dma_addr_t update_dma_addr;
+
+ struct spfc_sim_ini_err sim_ini_err;
+ struct spfc_sim_pcie_err sim_pcie_err;
+
+ struct spfc_led_state led_states;
+
+ u32 fec_status;
+
+ struct workqueue_struct *work_queue;
+ struct work_struct els_srq_clear_work;
+ u64 reset_time;
+
+ spinlock_t spin_lock;
+
+ struct spfc_srq_delay_info srq_delay_info;
+ struct spfc_fw_ver_detail hardinfo;
+ struct spfc_default_sq_info default_sq_info;
+};
+
+extern struct spfc_hba_info *spfc_hba[SPFC_HBA_PORT_MAX_NUM];
+extern spinlock_t probe_spin_lock;
+extern ulong probe_bit_map[SPFC_MAX_PROBE_PORT_NUM / SPFC_PORT_NUM_PER_TABLE];
+
+u32 spfc_port_reset(struct spfc_hba_info *hba);
+void spfc_flush_scq_ctx(struct spfc_hba_info *hba);
+void spfc_flush_srq_ctx(struct spfc_hba_info *hba);
+void spfc_set_hba_flush_state(struct spfc_hba_info *hba, bool in_flush);
+void spfc_set_hba_clear_state(struct spfc_hba_info *hba, bool clear_flag);
+u32 spfc_get_probe_index_by_port_id(u32 port_id, u32 *probe_index);
+void spfc_get_total_probed_num(u32 *probe_cnt);
+u32 spfc_sfp_switch(void *hba, void *para_in);
+bool spfc_hba_is_present(struct spfc_hba_info *hba);
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_hw_wqe.h b/drivers/scsi/spfc/hw/spfc_hw_wqe.h
new file mode 100644
index 000000000000..e03d24a98579
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_hw_wqe.h
@@ -0,0 +1,1645 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_HW_WQE_H
+#define SPFC_HW_WQE_H
+
+#define FC_ICQ_EN
+#define FC_SCSI_CMDIU_LEN 48
+#define FC_NVME_CMDIU_LEN 96
+#define FC_LS_GS_USERID_CNT_MAX 10
+#define FC_SENSEDATA_USERID_CNT_MAX 2
+#define FC_INVALID_MAGIC_NUM 0xFFFFFFFF
+#define FC_INVALID_HOTPOOLTAG 0xFFFF
+
+/* TASK TYPE: in order to be compatible with EDA, please add new types before BUTT. */
+enum spfc_task_type {
+ SPFC_TASK_T_EMPTY = 0, /* SCQE TYPE: means task type not initialize */
+
+ SPFC_TASK_T_IWRITE = 1, /* SQE TYPE: ini send FCP Write Command */
+ SPFC_TASK_T_IREAD = 2, /* SQE TYPE: ini send FCP Read Command */
+ SPFC_TASK_T_IRESP = 3, /* SCQE TYPE: ini recv fcp rsp for IREAD/IWRITE/ITMF */
+ SPFC_TASK_T_TCMND = 4, /* NA */
+ SPFC_TASK_T_TREAD = 5, /* SQE TYPE: tgt send FCP Read Command */
+ SPFC_TASK_T_TWRITE = 6, /* SQE TYPE: tgt send FCP Write Command (XFER_RDY) */
+ SPFC_TASK_T_TRESP = 7, /* SQE TYPE: tgt send fcp rsp of Read/Write */
+ SPFC_TASK_T_TSTS = 8, /* SCQE TYPE: tgt sts for TREAD/TWRITE/TRESP */
+ SPFC_TASK_T_ABTS = 9, /* SQE TYPE: ini send abts request Command */
+ SPFC_TASK_T_IELS = 10, /* NA */
+ SPFC_TASK_T_ITMF = 11, /* SQE TYPE: ini send tmf request Command */
+ SPFC_TASK_T_CLEAN_UP = 12, /* NA */
+ SPFC_TASK_T_CLEAN_UP_ALL = 13, /* NA */
+ SPFC_TASK_T_UNSOLICITED = 14, /* NA */
+ SPFC_TASK_T_ERR_WARN = 15, /* NA */
+ SPFC_TASK_T_SESS_EN = 16, /* CMDQ TYPE: enable session */
+ SPFC_TASK_T_SESS_DIS = 17, /* NA */
+ SPFC_TASK_T_SESS_DEL = 18, /* NA */
+ SPFC_TASK_T_RQE_REPLENISH = 19, /* NA */
+
+ SPFC_TASK_T_RCV_TCMND = 20, /* SCQE TYPE: tgt recv fcp cmd */
+ SPFC_TASK_T_RCV_ELS_CMD = 21, /* SCQE TYPE: tgt recv els cmd */
+ SPFC_TASK_T_RCV_ABTS_CMD = 22, /* SCQE TYPE: tgt recv abts cmd */
+ SPFC_TASK_T_RCV_IMMEDIATE = 23, /* SCQE TYPE: tgt recv immediate data */
+	/* SQE TYPE: send ELS rsp. PLOGI_ACC and PRLI_ACC will carry the
+	 * parent context parameter indication.
+	 */
+ SPFC_TASK_T_ELS_RSP = 24,
+ SPFC_TASK_T_ELS_RSP_STS = 25, /* SCQE TYPE: ELS rsp sts */
+ SPFC_TASK_T_ABTS_RSP = 26, /* CMDQ TYPE: tgt send abts rsp */
+ SPFC_TASK_T_ABTS_RSP_STS = 27, /* SCQE TYPE: tgt abts rsp sts */
+
+ SPFC_TASK_T_ABORT = 28, /* CMDQ TYPE: tgt send Abort Command */
+ SPFC_TASK_T_ABORT_STS = 29, /* SCQE TYPE: Abort sts */
+
+ SPFC_TASK_T_ELS = 30, /* SQE TYPE: send ELS request Command */
+ SPFC_TASK_T_RCV_ELS_RSP = 31, /* SCQE TYPE: recv ELS response */
+
+ SPFC_TASK_T_GS = 32, /* SQE TYPE: send GS request Command */
+ SPFC_TASK_T_RCV_GS_RSP = 33, /* SCQE TYPE: recv GS response */
+
+ SPFC_TASK_T_SESS_EN_STS = 34, /* SCQE TYPE: enable session sts */
+ SPFC_TASK_T_SESS_DIS_STS = 35, /* NA */
+ SPFC_TASK_T_SESS_DEL_STS = 36, /* NA */
+
+ SPFC_TASK_T_RCV_ABTS_RSP = 37, /* SCQE TYPE: ini recv abts rsp */
+
+ SPFC_TASK_T_BUFFER_CLEAR = 38, /* CMDQ TYPE: Buffer Clear */
+ SPFC_TASK_T_BUFFER_CLEAR_STS = 39, /* SCQE TYPE: Buffer Clear sts */
+ SPFC_TASK_T_FLUSH_SQ = 40, /* CMDQ TYPE: flush sq */
+ SPFC_TASK_T_FLUSH_SQ_STS = 41, /* SCQE TYPE: flush sq sts */
+
+ SPFC_TASK_T_SESS_RESET = 42, /* SQE TYPE: Reset session */
+ SPFC_TASK_T_SESS_RESET_STS = 43, /* SCQE TYPE: Reset session sts */
+ SPFC_TASK_T_RQE_REPLENISH_STS = 44, /* NA */
+ SPFC_TASK_T_DUMP_EXCH = 45, /* CMDQ TYPE: dump exch */
+ SPFC_TASK_T_INIT_SRQC = 46, /* CMDQ TYPE: init SRQC */
+ SPFC_TASK_T_CLEAR_SRQ = 47, /* CMDQ TYPE: clear SRQ */
+ SPFC_TASK_T_CLEAR_SRQ_STS = 48, /* SCQE TYPE: clear SRQ sts */
+ SPFC_TASK_T_INIT_SCQC = 49, /* CMDQ TYPE: init SCQC */
+ SPFC_TASK_T_DEL_SCQC = 50, /* CMDQ TYPE: delete SCQC */
+ SPFC_TASK_T_TMF_RESP = 51, /* SQE TYPE: tgt send tmf rsp */
+ SPFC_TASK_T_DEL_SRQC = 52, /* CMDQ TYPE: delete SRQC */
+ SPFC_TASK_T_RCV_IMMI_CONTINUE = 53, /* SCQE TYPE: tgt recv continue immediate data */
+
+ SPFC_TASK_T_ITMF_RESP = 54, /* SCQE TYPE: ini recv tmf rsp */
+ SPFC_TASK_T_ITMF_MARKER_STS = 55, /* SCQE TYPE: tmf marker sts */
+ SPFC_TASK_T_TACK = 56,
+ SPFC_TASK_T_SEND_AEQERR = 57,
+ SPFC_TASK_T_ABTS_MARKER_STS = 58, /* SCQE TYPE: abts marker sts */
+ SPFC_TASK_T_FLR_CLEAR_IO = 59, /* FLR clear io type */
+ SPFC_TASK_T_CREATE_SSQ_CONTEXT = 60,
+ SPFC_TASK_T_CLEAR_SSQ_CONTEXT = 61,
+ SPFC_TASK_T_EXCH_ID_FREE = 62,
+ SPFC_TASK_T_DIFX_RESULT_STS = 63,
+ SPFC_TASK_T_EXCH_ID_FREE_ABORT = 64,
+ SPFC_TASK_T_EXCH_ID_FREE_ABORT_STS = 65,
+ SPFC_TASK_T_PARAM_CHECK_FAIL = 66,
+ SPFC_TASK_T_TGT_UNKNOWN = 67,
+ SPFC_TASK_T_NVME_LS = 70, /* SQE TYPE: Snd Ls Req */
+ SPFC_TASK_T_RCV_NVME_LS_RSP = 71, /* SCQE TYPE: Rcv Ls Rsp */
+
+ SPFC_TASK_T_NVME_LS_RSP = 72, /* SQE TYPE: Snd Ls Rsp */
+ SPFC_TASK_T_RCV_NVME_LS_RSP_STS = 73, /* SCQE TYPE: Rcv Ls Rsp sts */
+
+ SPFC_TASK_T_RCV_NVME_LS_CMD = 74, /* SCQE TYPE: Rcv ls cmd */
+
+ SPFC_TASK_T_NVME_IREAD = 75, /* SQE TYPE: Ini Snd Nvme Read Cmd */
+ SPFC_TASK_T_NVME_IWRITE = 76, /* SQE TYPE: Ini Snd Nvme write Cmd */
+
+ SPFC_TASK_T_NVME_TREAD = 77, /* SQE TYPE: Tgt Snd Nvme Read Cmd */
+ SPFC_TASK_T_NVME_TWRITE = 78, /* SQE TYPE: Tgt Snd Nvme write Cmd */
+
+ SPFC_TASK_T_NVME_IRESP = 79, /* SCQE TYPE: Ini recv nvme rsp for NVMEIREAD/NVMEIWRITE */
+
+ SPFC_TASK_T_INI_IO_ABORT = 80, /* SQE type: INI Abort Cmd */
+ SPFC_TASK_T_INI_IO_ABORT_STS = 81, /* SCQE type: INI Abort sts */
+
+ SPFC_TASK_T_INI_LS_ABORT = 82, /* SQE type: INI ls abort Cmd */
+ SPFC_TASK_T_INI_LS_ABORT_STS = 83, /* SCQE type: INI ls abort sts */
+ SPFC_TASK_T_EXCHID_TIMEOUT_STS = 84, /* SCQE TYPE: EXCH_ID TIME OUT */
+ SPFC_TASK_T_PARENT_ERR_STS = 85, /* SCQE TYPE: PARENT ERR */
+
+ SPFC_TASK_T_NOP = 86,
+ SPFC_TASK_T_NOP_STS = 87,
+
+ SPFC_TASK_T_DFX_INFO = 126,
+ SPFC_TASK_T_BUTT
+};
+
+/* error code for error report */
+
+enum spfc_err_code {
+ FC_CQE_COMPLETED = 0, /* Successful */
+ FC_SESS_HT_INSERT_FAIL = 1, /* Offload fail: hash insert fail */
+ FC_SESS_HT_INSERT_DUPLICATE = 2, /* Offload fail: duplicate offload */
+ FC_SESS_HT_BIT_SET_FAIL = 3, /* Offload fail: bloom filter set fail */
+ FC_SESS_HT_DELETE_FAIL = 4, /* Offload fail: hash delete fail(duplicate delete) */
+ FC_CQE_BUFFER_CLEAR_IO_COMPLETED = 5, /* IO done in buffer clear */
+ FC_CQE_SESSION_ONLY_CLEAR_IO_COMPLETED = 6, /* IO done in session rst mode=1 */
+ FC_CQE_SESSION_RST_CLEAR_IO_COMPLETED = 7, /* IO done in session rst mode=3 */
+ FC_CQE_TMF_RSP_IO_COMPLETED = 8, /* IO done in tgt tmf rsp */
+ FC_CQE_TMF_IO_COMPLETED = 9, /* IO done in ini tmf */
+ FC_CQE_DRV_ABORT_IO_COMPLETED = 10, /* IO done in tgt abort */
+ /*
+	 * IO done in fcp rsp process. Used for the scenarios: 1. abort before cmd;
+	 * 2. send fcp rsp directly after recv cmd.
+ */
+ FC_CQE_DRV_ABORT_IO_IN_RSP_COMPLETED = 11,
+ /*
+	 * IO done in fcp cmd process. Used for the scenarios: 1. abort before cmd; 2. child setup fail.
+ */
+ FC_CQE_DRV_ABORT_IO_IN_CMD_COMPLETED = 12,
+ FC_CQE_WQE_FLUSH_IO_COMPLETED = 13, /* IO done in FLUSH SQ */
+ FC_ERROR_CODE_DATA_DIFX_FAILED = 14, /* fcp data format check: DIFX check error */
+ /* fcp data format check: task_type is not read */
+ FC_ERROR_CODE_DATA_TASK_TYPE_INCORRECT = 15,
+ FC_ERROR_CODE_DATA_OOO_RO = 16, /* fcp data format check: data offset is not continuous */
+ FC_ERROR_CODE_DATA_EXCEEDS_DATA2TRNS = 17, /* fcp data format check: data is over run */
+ /* fcp rsp format check: payload is too short */
+ FC_ERROR_CODE_FCP_RSP_INVALID_LENGTH_FIELD = 18,
+	/* fcp rsp format check: fcp_conf needed, but exch doesn't hold seq initiative */
+ FC_ERROR_CODE_FCP_RSP_CONF_REQ_NOT_SUPPORTED_YET = 19,
+ /* fcp rsp format check: fcp_conf is required, but it's the last seq */
+ FC_ERROR_CODE_FCP_RSP_OPENED_SEQ = 20,
+ /* xfer rdy format check: payload is too short */
+ FC_ERROR_CODE_XFER_INVALID_PAYLOAD_SIZE = 21,
+	/* xfer rdy format check: last data out hasn't finished */
+ FC_ERROR_CODE_XFER_PEND_XFER_SET = 22,
+ /* xfer rdy format check: data offset is not continuous */
+ FC_ERROR_CODE_XFER_OOO_RO = 23,
+ FC_ERROR_CODE_XFER_NULL_BURST_LEN = 24, /* xfer rdy format check: burst len is 0 */
+ FC_ERROR_CODE_REC_TIMER_EXPIRE = 25, /* Timer expire: REC_TIMER */
+ FC_ERROR_CODE_E_D_TIMER_EXPIRE = 26, /* Timer expire: E_D_TIMER */
+ FC_ERROR_CODE_ABORT_TIMER_EXPIRE = 27, /* Timer expire: Abort timer */
+ FC_ERROR_CODE_ABORT_MAGIC_NUM_NOT_MATCH = 28, /* Abort IO magic number mismatch */
+ FC_IMMI_CMDPKT_SETUP_FAIL = 29, /* RX immediate data cmd pkt child setup fail */
+ FC_ERROR_CODE_DATA_SEQ_ID_NOT_EQUAL = 30, /* RX fcp data sequence id not equal */
+ FC_ELS_GS_RSP_EXCH_CHECK_FAIL = 31, /* ELS/GS exch info check fail */
+ FC_CQE_ELS_GS_SRQE_GET_FAIL = 32, /* ELS/GS process get SRQE fail */
+ FC_CQE_DATA_DMA_REQ_FAIL = 33, /* SMF soli-childdma rsp error */
+ FC_CQE_SESSION_CLOSED = 34, /* Session is closed */
+ FC_SCQ_IS_FULL = 35, /* SCQ is full */
+ FC_SRQ_IS_FULL = 36, /* SRQ is full */
+ FC_ERROR_DUCHILDCTX_SETUP_FAIL = 37, /* dpchild ctx setup fail */
+ FC_ERROR_INVALID_TXMFS = 38, /* invalid txmfs */
+	FC_ERROR_OFFLOAD_LACKOF_SCQE_FAIL = 39, /* offload fail, lack of SCQE, reported through AEQ */
+	FC_ERROR_INVALID_TASK_ID = 40, /* tx invalid task id */
+	FC_ERROR_INVALID_PKT_LEN = 41, /* tx els gs packet len check */
+ FC_CQE_ELS_GS_REQ_CLR_IO_COMPLETED = 42, /* IO done in els gs tx */
+ FC_CQE_ELS_RSP_CLR_IO_COMPLETED = 43, /* IO done in els rsp tx */
+ FC_ERROR_CODE_RESID_UNDER_ERR = 44, /* FCP RSP RESID ERROR */
+ FC_ERROR_EXCH_ID_FREE_ERR = 45, /* Abnormal free xid failed */
+ FC_ALLOC_EXCH_ID_FAILED = 46, /* ucode alloc EXCH ID failed */
+ FC_ERROR_DUPLICATE_IO_RECEIVED = 47, /* Duplicate tcmnd or tmf rsp received */
+ FC_ERROR_RXID_MISCOMPARE = 48,
+ FC_ERROR_FAILOVER_CLEAR_VALID_HOST = 49, /* Failover cleared valid host io */
+ FC_ERROR_EXCH_ID_NOT_MATCH = 50, /* SCQ TYPE: xid not match */
+ FC_ERROR_ABORT_FAIL = 51, /* SCQ TYPE: abort fail */
+ FC_ERROR_SHARD_TABLE_OP_FAIL = 52, /* SCQ TYPE: shard table OP fail */
+ FC_ERROR_E0E1_FAIL = 53,
+ FC_INSERT_EXCH_ID_HASH_FAILED = 54, /* ucode INSERT EXCH ID HASH failed */
+ FC_ERROR_CODE_FCP_RSP_UPDMA_FAILED = 55, /* up dma req failed,while fcp rsp is rcving */
+ FC_ERROR_CODE_SID_DID_NOT_MATCH = 56, /* sid or did not match */
+ FC_ERROR_DATA_NOT_REL_OFF = 57, /* data not rel off */
+ FC_ERROR_CODE_EXCH_ID_TIMEOUT = 58, /* exch id timeout */
+ FC_ERROR_PARENT_CHECK_FAIL = 59,
+ FC_ERROR_RECV_REC_REJECT = 60, /* RECV REC RSP REJECT */
+ FC_ERROR_RECV_SRR_REJECT = 61, /* RECV REC SRR REJECT */
+ FC_ERROR_REC_NOT_FIND_EXID_INVALID = 62,
+ FC_ERROR_RECV_REC_NO_ERR = 63,
+ FC_ERROR_PARENT_CTX_ERR = 64
+};
+
+/* AEQ EVENT TYPE */
+enum spfc_aeq_evt_type {
+	/* SCQ and SRQ not enough, HOST will initiate an operation on the associated SCQ/SRQ */
+	FC_AEQ_EVENT_QUEUE_ERROR = 48,
+	FC_AEQ_EVENT_WQE_FATAL_ERROR = 49, /* WQE MSN check error, HOST will reset the port */
+	FC_AEQ_EVENT_CTX_FATAL_ERROR = 50, /* serious chip error, HOST will reset the chip */
+ FC_AEQ_EVENT_OFFLOAD_ERROR = 51,
+ FC_FC_AEQ_EVENT_TYPE_LAST
+};
+
+enum spfc_protocol_class {
+ FC_PROTOCOL_CLASS_3 = 0x0,
+ FC_PROTOCOL_CLASS_2 = 0x1,
+ FC_PROTOCOL_CLASS_1 = 0x2,
+ FC_PROTOCOL_CLASS_F = 0x3,
+ FC_PROTOCOL_CLASS_OTHER = 0x4
+};
+
+enum spfc_aeq_evt_err_code {
+ /* detail type of resource lack */
+ FC_SCQ_IS_FULL_ERR = 0,
+ FC_SRQ_IS_FULL_ERR,
+
+ /* detail type of FC_AEQ_EVENT_WQE_FATAL_ERROR */
+ FC_SQE_CHILD_SETUP_WQE_MSN_ERR = 2,
+ FC_SQE_CHILD_SETUP_WQE_GPA_ERR,
+ FC_CMDPKT_CHILD_SETUP_INVALID_WQE_ERR_1,
+ FC_CMDPKT_CHILD_SETUP_INVALID_WQE_ERR_2,
+ FC_CLEAEQ_WQE_ERR,
+ FC_WQEFETCH_WQE_MSN_ERR,
+ FC_WQEFETCH_QUINFO_ERR,
+
+ /* detail type of FC_AEQ_EVENT_CTX_FATAL_ERROR */
+ FC_SCQE_ERR_BIT_ERR = 9,
+ FC_UPDMA_ADDR_REQ_SRQ_ERR,
+ FC_SOLICHILDDMA_ADDR_REQ_ERR,
+ FC_UNSOLICHILDDMA_ADDR_REQ_ERR,
+ FC_SQE_CHILD_SETUP_QINFO_ERR_1,
+ FC_SQE_CHILD_SETUP_QINFO_ERR_2,
+ FC_CMDPKT_CHILD_SETUP_QINFO_ERR_1,
+ FC_CMDPKT_CHILD_SETUP_QINFO_ERR_2,
+ FC_CMDPKT_CHILD_SETUP_PMSN_ERR,
+ FC_CLEAEQ_CTX_ERR,
+ FC_WQEFETCH_CTX_ERR,
+ FC_FLUSH_QPC_ERR_LQP,
+ FC_FLUSH_QPC_ERR_SMF,
+ FC_PREFETCH_QPC_ERR_PCM_MHIT_LQP,
+ FC_PREFETCH_QPC_ERR_PCM_MHIT_FQG,
+ FC_PREFETCH_QPC_ERR_PCM_ABM_FQG,
+ FC_PREFETCH_QPC_ERR_MAP_FQG,
+ FC_PREFETCH_QPC_ERR_MAP_LQP,
+ FC_PREFETCH_QPC_ERR_SMF_RTN,
+ FC_PREFETCH_QPC_ERR_CFG,
+ FC_PREFETCH_QPC_ERR_FLSH_HIT,
+ FC_PREFETCH_QPC_ERR_FLSH_ACT,
+ FC_PREFETCH_QPC_ERR_ABM_W_RSC,
+ FC_PREFETCH_QPC_ERR_RW_ABM,
+ FC_PREFETCH_QPC_ERR_DEFAULT,
+ FC_CHILDHASH_INSERT_SW_ERR,
+ FC_CHILDHASH_LOOKUP_SW_ERR,
+ FC_CHILDHASH_DEL_SW_ERR,
+ FC_EXCH_ID_FREE_SW_ERR,
+ FC_FLOWHASH_INSERT_SW_ERR,
+ FC_FLOWHASH_LOOKUP_SW_ERR,
+ FC_FLOWHASH_DEL_SW_ERR,
+ FC_FLUSH_QPC_ERR_USED,
+ FC_FLUSH_QPC_ERR_OUTER_LOCK,
+ FC_SETUP_SESSION_ERR,
+
+ FC_AEQ_EVT_ERR_CODE_BUTT
+
+};
+
+/* AEQ data structure */
+struct spfc_aqe_data {
+ union {
+ struct {
+ u32 conn_id : 16;
+ u32 rsvd : 8;
+ u32 evt_code : 8;
+ } wd0;
+
+ u32 data0;
+ };
+
+ union {
+ struct {
+ u32 xid : 20;
+ u32 rsvd : 12;
+ } wd1;
+
+ u32 data1;
+ };
+};
+
+/* Control Section: Common Header */
+struct spfc_wqe_ctrl_ch {
+ union {
+ struct {
+ u32 bdsl : 8;
+ u32 drv_sl : 2;
+ u32 rsvd0 : 4;
+ u32 wf : 1;
+ u32 cf : 1;
+ u32 tsl : 5;
+ u32 va : 1;
+ u32 df : 1;
+ u32 cr : 1;
+ u32 dif_sl : 3;
+ u32 csl : 2;
+ u32 ctrl_sl : 2;
+ u32 owner : 1;
+ } wd0;
+
+ u32 ctrl_ch_val;
+ };
+};
+
+/* Control Section: Queue Specific Field */
+struct spfc_wqe_ctrl_qsf {
+ u32 wqe_sn : 16;
+ u32 dump_wqe_sn : 16;
+};
+
+/* DIF info definition in WQE */
+struct spfc_fc_dif_info {
+ struct {
+ u32 app_tag_ctrl : 3; /* DIF/DIX APP TAG Control */
+		/* Bit 0: scenario of the reference tag verify mode.
+		 * Bit 1: scenario of the reference tag insert/replace mode.
+		 */
+		u32 ref_tag_mode : 2;
+		/* 0: fixed; 1: increment; */
+ u32 ref_tag_ctrl : 3; /* The DIF/DIX Reference tag control */
+ u32 grd_agm_ini_ctrl : 3;
+ u32 grd_agm_ctrl : 2; /* Bit 0: DIF/DIX guard verify algorithm control */
+ /* Bit 1: DIF/DIX guard replace or insert algorithm control */
+ u32 grd_ctrl : 3; /* The DIF/DIX Guard control */
+ u32 dif_verify_type : 2; /* verify type */
+ u32 difx_ref_esc : 1; /* Check blocks whose reference tag contains 0xFFFF flag */
+ u32 difx_app_esc : 1;/* Check blocks whose application tag contains 0xFFFF flag */
+ u32 rsvd : 8;
+ u32 sct_size : 1; /* Sector size, 1: 4K; 0: 512 */
+ u32 smd_tp : 2;
+ u32 difx_en : 1;
+ } wd0;
+
+ struct {
+ u32 cmp_app_tag_msk : 16;
+ u32 rsvd : 7;
+ u32 lun_qos_en : 2;
+ u32 vpid : 7;
+ } wd1;
+
+ u16 cmp_app_tag;
+ u16 rep_app_tag;
+
+ u32 cmp_ref_tag;
+ u32 rep_ref_tag;
+};
+
+/* Task Section: TMF SQE for INI */
+struct spfc_tmf_info {
+ union {
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } bs;
+ u32 value;
+ } w0;
+
+ union {
+ struct {
+ u32 reset_did : 24;
+ u32 reset_type : 2;
+ u32 marker_sts : 1;
+ u32 rsvd0 : 5;
+ } bs;
+ u32 value;
+ } w1;
+
+ union {
+ struct {
+ u32 reset_sid : 24;
+ u32 rsvd0 : 8;
+ } bs;
+ u32 value;
+ } w2;
+
+ u8 reset_lun[8];
+};
+
+/* Task Section: CMND SQE for INI */
+struct spfc_sqe_icmnd {
+ u8 fcp_cmnd_iu[FC_SCSI_CMDIU_LEN];
+ union {
+ struct spfc_fc_dif_info dif_info;
+ struct spfc_tmf_info tmf;
+ } info;
+};
+
+/* Task Section: ABTS SQE */
+struct spfc_sqe_abts {
+ u32 fh_parm_abts;
+ u32 hotpooltag;
+ u32 release_timer;
+};
+
+struct spfc_keys {
+ struct {
+ u32 smac1 : 8;
+ u32 smac0 : 8;
+ u32 rsv : 16;
+ } wd0;
+
+ u8 smac[4];
+
+ u8 dmac[6];
+ u8 sid[3];
+ u8 did[3];
+
+ struct {
+ u32 port_id : 3;
+ u32 host_id : 2;
+ u32 rsvd : 27;
+ } wd5;
+ u32 rsvd;
+};
+
+/* BDSL: the Session Enable WQE keys field only uses 26 bytes */
+struct spfc_cmdqe_sess_en {
+ struct {
+ u32 rx_id : 16;
+ u32 port_id : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 cid : 20;
+ u32 rsvd1 : 12;
+ } wd1;
+
+ struct {
+ u32 conn_id : 16;
+ u32 scqn : 16;
+ } wd2;
+
+ struct {
+ u32 xid_p : 20;
+ u32 rsvd3 : 12;
+ } wd3;
+
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+ struct spfc_keys keys;
+ u32 context[64];
+};
+
+/* Control Section */
+struct spfc_wqe_ctrl {
+ struct spfc_wqe_ctrl_ch ch;
+ struct spfc_wqe_ctrl_qsf qsf;
+};
+
+struct spfc_sqe_els_rsp {
+ struct {
+ u32 echo_flag : 16;
+ u32 data_len : 16;
+ } wd0;
+
+ struct {
+ u32 rsvd1 : 27;
+ u32 offload_flag : 1;
+ u32 lp_bflag : 1;
+ u32 clr_io : 1;
+ u32 para_update : 2;
+ } wd1;
+
+ struct {
+ u32 seq_cnt : 1;
+ u32 e_d_tov : 1;
+ u32 rsvd2 : 6;
+ u32 class_mode : 8; /* 0:class3, 1:class2*/
+ u32 tx_mfs : 16;
+ } wd2;
+
+ u32 e_d_tov_timer_val;
+
+ struct {
+ u32 conf : 1;
+ u32 rec : 1;
+ u32 xfer_dis : 1;
+ u32 immi_taskid_cnt : 13;
+ u32 immi_taskid_start : 16;
+ } wd4;
+
+ u32 first_burst_len;
+
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } wd6;
+
+ struct {
+ u32 scqn : 16;
+ u32 hotpooltag : 16;
+ } wd7;
+
+ u32 magic_local;
+ u32 magic_remote;
+ u32 ts_rcv_echo_req;
+ u32 sid;
+ u32 did;
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+};
+
+struct spfc_sqe_reset_session {
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } wd0;
+
+ struct {
+ u32 reset_did : 24;
+ u32 mode : 2;
+ u32 rsvd : 6;
+ } wd1;
+
+ struct {
+ u32 reset_sid : 24;
+ u32 rsvd : 8;
+ } wd2;
+
+ struct {
+ u32 scqn : 16;
+ u32 rsvd : 16;
+ } wd3;
+};
+
+struct spfc_sqe_nop_sq {
+ struct {
+ u32 scqn : 16;
+ u32 rsvd : 16;
+ } wd0;
+ u32 magic_num;
+};
+
+struct spfc_sqe_t_els_gs {
+ u16 echo_flag;
+ u16 data_len;
+
+ struct {
+ u32 rsvd1 : 9;
+ u32 offload_flag : 1;
+ u32 origin_hottag : 16;
+ u32 rec_flag : 1;
+ u32 rec_support : 1;
+ u32 lp_bflag : 1;
+ u32 clr_io : 1;
+ u32 para_update : 2;
+ } wd4;
+
+ struct {
+ u32 seq_cnt : 1;
+ u32 e_d_tov : 1;
+ u32 rsvd2 : 14;
+ u32 tx_mfs : 16;
+ } wd5;
+
+ u32 e_d_tov_timer_val;
+
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } wd6;
+
+ struct {
+ u32 scqn : 16;
+ u32 hotpooltag : 16; /* used for send ELS rsp */
+ } wd7;
+
+ u32 sid;
+ u32 did;
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+ u32 origin_magicnum;
+};
+
+struct spfc_sqe_els_gs_elsrsp_comm {
+ u16 rsvd;
+ u16 data_len;
+};
+
+struct spfc_sqe_lpb_msg {
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } w0;
+
+ struct {
+ u32 reset_did : 24;
+ u32 reset_type : 2;
+ u32 rsvd0 : 6;
+ } w1;
+
+ struct {
+ u32 reset_sid : 24;
+ u32 rsvd0 : 8;
+ } w2;
+
+ u16 tmf_exch_id;
+ u16 rsvd1;
+
+ u8 reset_lun[8];
+};
+
+/* SQE Task Section's Contents except Common Header */
+union spfc_sqe_ts_cont {
+ struct spfc_sqe_icmnd icmnd;
+ struct spfc_sqe_abts abts;
+ struct spfc_sqe_els_rsp els_rsp;
+ struct spfc_sqe_t_els_gs t_els_gs;
+ struct spfc_sqe_els_gs_elsrsp_comm els_gs_elsrsp_comm;
+ struct spfc_sqe_reset_session reset_session;
+ struct spfc_sqe_lpb_msg lpb_msg;
+ struct spfc_sqe_nop_sq nop_sq;
+ u32 value[17];
+};
+
+struct spfc_sqe_nvme_icmnd_part2 {
+ u8 nvme_cmnd_iu_part2_data[FC_NVME_CMDIU_LEN - FC_SCSI_CMDIU_LEN];
+};
+
+union spfc_sqe_ts_ex {
+ struct spfc_sqe_nvme_icmnd_part2 nvme_icmnd_part2;
+ u32 value[12];
+};
+
+struct spfc_sqe_ts {
+ /* SQE Task Section's Common Header */
+	u32 local_xid : 16; /* local exch_id; for icmnd/els send it is used as hotpooltag */
+ u32 crc_inj : 1;
+ u32 immi_std : 1;
+ u32 cdb_type : 1; /* cdb_type = 0:CDB_LEN = 16B, cdb_type = 1:CDB_LEN = 32B */
+ u32 rsvd : 5; /* used for loopback saving bdsl's num */
+ u32 task_type : 8;
+
+ struct {
+ u16 conn_id;
+ u16 remote_xid;
+ } wd0;
+
+ u32 xid : 20;
+ u32 sqn : 12;
+ u32 cid;
+ u32 magic_num;
+ union spfc_sqe_ts_cont cont;
+};
+
+struct spfc_constant_sge {
+ u32 buf_addr_hi;
+ u32 buf_addr_lo;
+};
+
+struct spfc_variable_sge {
+ u32 buf_addr_hi;
+ u32 buf_addr_lo;
+
+ struct {
+ u32 buf_len : 31;
+ u32 r_flag : 1;
+ } wd0;
+
+ struct {
+ u32 buf_addr_gpa : 16;
+ u32 xid : 14;
+ u32 extension_flag : 1;
+ u32 last_flag : 1;
+ } wd1;
+};
+
+#define FC_WQE_SIZE 256
+/* SQE, should not be over 256B */
+struct spfc_sqe {
+ struct spfc_wqe_ctrl ctrl_sl;
+ u32 sid;
+ u32 did;
+ u64 wqe_gpa; /* gpa shift 6 bit to right*/
+ u64 db_val;
+ union spfc_sqe_ts_ex ts_ex;
+ struct spfc_variable_sge esge[3];
+ struct spfc_wqe_ctrl ectrl_sl;
+ struct spfc_sqe_ts ts_sl;
+ struct spfc_variable_sge sge[2];
+};
+
+struct spfc_rqe_ctrl {
+ struct spfc_wqe_ctrl_ch ch;
+
+ struct {
+ u16 wqe_msn;
+ u16 dump_wqe_msn;
+ } wd0;
+};
+
+struct spfc_rqe_drv {
+ struct {
+ u32 rsvd0 : 16;
+ u32 user_id : 16;
+ } wd0;
+
+ u32 rsvd1;
+};
+
+/* RQE, should not be over 32B */
+struct spfc_rqe {
+ struct spfc_rqe_ctrl ctrl_sl;
+ u32 cqe_gpa_h;
+ u32 cqe_gpa_l;
+ struct spfc_constant_sge bds_sl;
+ struct spfc_rqe_drv drv_sl;
+};
+
+struct spfc_cmdqe_abort {
+ struct {
+ u32 rx_id : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 ox_id : 16;
+ u32 rsvd1 : 12;
+ u32 trsp_send : 1;
+ u32 tcmd_send : 1;
+ u32 immi : 1;
+ u32 reply_sts : 1;
+ } wd1;
+
+ struct {
+ u32 conn_id : 16;
+ u32 scqn : 16;
+ } wd2;
+
+ struct {
+ u32 xid : 20;
+ u32 rsvd : 12;
+ } wd3;
+
+ struct {
+ u32 cid : 20;
+ u32 rsvd : 12;
+ } wd4;
+ struct {
+ u32 hotpooltag : 16;
+ u32 rsvd : 16;
+ } wd5; /* v6 new define */
+	/* Abort timeout. Used when the abort and the io cmd reach the ucode via
+	 * different paths and the io cmd will not arrive.
+	 */
+ u32 time_out;
+ u32 magic_num;
+};
+
+struct spfc_cmdqe_abts_rsp {
+ struct {
+ u32 rx_id : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 ox_id : 16;
+ u32 rsvd1 : 4;
+ u32 port_id : 4;
+ u32 payload_len : 7;
+ u32 rsp_type : 1;
+ } wd1;
+
+ struct {
+ u32 conn_id : 16;
+ u32 scqn : 16;
+ } wd2;
+
+ struct {
+ u32 xid : 20;
+ u32 rsvd : 12;
+ } wd3;
+
+ struct {
+ u32 cid : 20;
+ u32 rsvd : 12;
+ } wd4;
+
+ struct {
+ u32 req_rx_id : 16;
+ u32 hotpooltag : 16;
+ } wd5;
+
+	/* payload length is 1 DWORD or 3 DWORDs according to rsp_type */
+ u32 payload[3];
+};
+
+struct spfc_cmdqe_buffer_clear {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 wqe_type : 8;
+ } wd0;
+
+ struct {
+ u32 rx_id_end : 16;
+ u32 rx_id_start : 16;
+ } wd1;
+
+ u32 scqn;
+ u32 wd3;
+};
+
+struct spfc_cmdqe_flush_sq {
+ struct {
+ u32 entry_count : 16;
+ u32 rsvd : 8;
+ u32 wqe_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 port_id : 4;
+ u32 pos : 11;
+ u32 last_wqe : 1;
+ } wd1;
+
+ struct {
+ u32 rsvd : 4;
+ u32 clr_pos : 12;
+ u32 pkt_ptr : 16;
+ } wd2;
+
+ struct {
+ u32 first_sq_xid : 24;
+ u32 sqqid_start_per_session : 4;
+ u32 sqcnt_per_session : 4;
+ } wd3;
+};
+
+struct spfc_cmdqe_dump_exch {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ u16 oqid_wr;
+ u16 oqid_rd;
+
+ u32 host_id;
+ u32 func_id;
+ u32 cache_id;
+ u32 exch_id;
+};
+
+struct spfc_cmdqe_creat_srqc {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ u32 srqc_gpa_h;
+ u32 srqc_gpa_l;
+
+ u32 srqc[16]; /* srqc_size=64B */
+};
+
+struct spfc_cmdqe_delete_srqc {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ u32 srqc_gpa_h;
+ u32 srqc_gpa_l;
+};
+
+struct spfc_cmdqe_clr_srq {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 srq_type : 16;
+ } wd1;
+
+ u32 srqc_gpa_h;
+ u32 srqc_gpa_l;
+};
+
+struct spfc_cmdqe_creat_scqc {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 rsvd2 : 16;
+ } wd1;
+
+ u32 scqc[16]; /* scqc_size=64B */
+};
+
+struct spfc_cmdqe_delete_scqc {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 rsvd2 : 16;
+ } wd1;
+};
+
+struct spfc_cmdqe_creat_ssqc {
+ struct {
+ u32 rsvd1 : 4;
+ u32 xid : 20;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 rsvd2 : 16;
+ } wd1;
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+
+ u32 ssqc[64]; /* ssqc_size=256B */
+};
+
+struct spfc_cmdqe_delete_ssqc {
+ struct {
+ u32 entry_count : 4;
+ u32 xid : 20;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 rsvd2 : 16;
+ } wd1;
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+};
+
+/* add xid free via cmdq */
+struct spfc_cmdqe_exch_id_free {
+ struct {
+ u32 task_id : 16;
+ u32 port_id : 8;
+ u32 rsvd0 : 8;
+ } wd0;
+
+ u32 magic_num;
+
+ struct {
+ u32 scqn : 16;
+ u32 hotpool_tag : 16;
+ } wd2;
+ struct {
+ u32 rsvd1 : 31;
+ u32 clear_abort_flag : 1;
+ } wd3;
+ u32 sid;
+ u32 did;
+ u32 type; /* ELS/ELS RSP/IO */
+};
+
+struct spfc_cmdqe_cmdqe_dfx {
+ struct {
+ u32 rsvd1 : 4;
+ u32 xid : 20;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 qid_crclen : 12;
+ u32 cid : 20;
+ } wd1;
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+ u32 dfx_type;
+
+ u32 rsv[16];
+};
+
+struct spfc_sqe_t_rsp {
+ struct {
+ u32 rsvd1 : 16;
+ u32 fcp_rsp_len : 8;
+ u32 busy_rsp : 3;
+ u32 immi : 1;
+ u32 mode : 1;
+ u32 conf : 1;
+ u32 fill : 2;
+ } wd0;
+
+ u32 hotpooltag;
+
+ union {
+ struct {
+ u32 addr_h;
+ u32 addr_l;
+ } gpa;
+
+ struct {
+ u32 data[23]; /* FCP_RESP payload buf, 92B rsvd */
+ } buf;
+ } payload;
+};
+
+struct spfc_sqe_tmf_t_rsp {
+ struct {
+ u32 scqn : 16;
+ u32 fcp_rsp_len : 8;
+		u32 pkt_nosnd_flag : 3; /* tmf rsp snd flag, 0: snd, 1: not snd; Driver ignores it */
+ u32 reset_type : 2;
+ u32 conf : 1;
+ u32 fill : 2;
+ } wd0;
+
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } wd1;
+
+ struct {
+		u16 hotpooltag; /* tmf rsp hotpooltag, Driver ignores it */
+ u16 rsvd;
+ } wd2;
+
+ u8 lun[8]; /* Lun ID */
+ u32 data[20]; /* FCP_RESP payload buf, 80B rsvd */
+};
+
+struct spfc_sqe_tresp_ts {
+ /* SQE Task Section's Common Header */
+ u16 local_xid;
+ u8 rsvd0;
+ u8 task_type;
+
+ struct {
+ u16 conn_id;
+ u16 remote_xid;
+ } wd0;
+
+ u32 xid : 20;
+ u32 sqn : 12;
+ u32 cid;
+ u32 magic_num;
+ struct spfc_sqe_t_rsp t_rsp;
+};
+
+struct spfc_sqe_tmf_resp_ts {
+ /* SQE Task Section's Common Header */
+ u16 local_xid;
+ u8 rsvd0;
+ u8 task_type;
+
+ struct {
+ u16 conn_id;
+ u16 remote_xid;
+ } wd0;
+
+ u32 xid : 20;
+ u32 sqn : 12;
+ u32 cid;
+ u32 magic_num; /* magic num */
+ struct spfc_sqe_tmf_t_rsp tmf_rsp;
+};
+
+/* SQE for fcp response, max TSL is 120B */
+struct spfc_sqe_tresp {
+ struct spfc_wqe_ctrl ctrl_sl;
+ u64 taskrsvd;
+ u64 wqe_gpa;
+ u64 db_val;
+ union spfc_sqe_ts_ex ts_ex;
+ struct spfc_variable_sge esge[3];
+ struct spfc_wqe_ctrl ectrl_sl;
+ struct spfc_sqe_tresp_ts ts_sl;
+};
+
+/* SQE for tmf response, max TSL is 120B */
+struct spfc_sqe_tmf_rsp {
+ struct spfc_wqe_ctrl ctrl_sl;
+ u64 taskrsvd;
+ u64 wqe_gpa;
+ u64 db_val;
+ union spfc_sqe_ts_ex ts_ex;
+ struct spfc_variable_sge esge[3];
+ struct spfc_wqe_ctrl ectrl_sl;
+ struct spfc_sqe_tmf_resp_ts ts_sl;
+};
+
+/* SCQE Common Header */
+struct spfc_scqe_ch {
+ struct {
+ u32 task_type : 8;
+ u32 sqn : 13;
+ u32 cqe_remain_cnt : 3;
+ u32 err_code : 7;
+ u32 owner : 1;
+ } wd0;
+};
+
+struct spfc_scqe_type {
+ struct spfc_scqe_ch ch;
+
+ u32 rsvd0;
+
+ u16 conn_id;
+ u16 rsvd4;
+
+ u32 rsvd1[12];
+
+ struct {
+ u32 done : 1;
+ u32 rsvd : 23;
+ u32 dif_vry_rst : 8;
+ } wd0;
+};
+
+struct spfc_scqe_sess_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 xid_qpn : 20;
+ u32 rsvd1 : 12;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 rsvd3 : 16;
+ } wd1;
+
+ struct {
+ u32 cid : 20;
+ u32 rsvd2 : 12;
+ } wd2;
+
+ u64 rsvd3;
+};
+
+struct spfc_scqe_comm_rsp_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
+ } wd1;
+
+ u32 magic_num;
+};
+
+struct spfc_scqe_iresp {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 rsvd0 : 3;
+ u32 user_id_num : 8;
+ u32 dif_info : 5;
+ } wd1;
+
+ struct {
+ u32 scsi_status : 8;
+ u32 fcp_flag : 8;
+ u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
+ } wd2;
+
+ u32 fcp_resid;
+ u32 fcp_sns_len;
+ u32 fcp_rsp_len;
+ u32 magic_num;
+ u16 user_id[FC_SENSEDATA_USERID_CNT_MAX];
+ u32 rsv1;
+};
+
+struct spfc_scqe_nvme_iresp {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 eresp_flag : 8;
+ u32 user_id_num : 8;
+ } wd1;
+
+ struct {
+ u32 scsi_status : 8;
+ u32 fcp_flag : 8;
+ u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
+ } wd2;
+ u32 magic_num;
+ u32 eresp[8];
+};
+
+#pragma pack(1)
+struct spfc_dif_result {
+ u8 vrd_rpt;
+ u16 pad;
+ u8 rcv_pi_vb;
+ u32 rcv_pi_h;
+ u32 rcv_pi_l;
+ u16 vrf_agm_imm;
+ u16 ri_agm_imm;
+};
+
+#pragma pack()
+
+struct spfc_scqe_dif_result {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 rsvd0 : 11;
+ u32 dif_info : 5;
+ } wd1;
+
+ struct {
+ u32 scsi_status : 8;
+ u32 fcp_flag : 8;
+ u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
+ } wd2;
+
+ u32 fcp_resid;
+ u32 fcp_sns_len;
+ u32 fcp_rsp_len;
+ u32 magic_num;
+
+ u32 rsv1[3];
+ struct spfc_dif_result difinfo;
+};
+
+struct spfc_scqe_rcv_abts_rsp {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 hotpooltag : 16;
+ } wd1;
+
+ struct {
+ u32 fh_rctrl : 8;
+ u32 rsvd0 : 24;
+ } wd2;
+
+ struct {
+ u32 did : 24;
+ u32 rsvd1 : 8;
+ } wd3;
+
+ struct {
+ u32 sid : 24;
+ u32 rsvd2 : 8;
+ } wd4;
+
+	/* payload length is 1 DWORD or 3 DWORDs according to fh_rctrl */
+ u32 payload[3];
+ u32 magic_num;
+};
+
+struct spfc_scqe_fcp_rsp_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 rsvd0 : 10;
+ u32 immi : 1;
+ u32 dif_info : 5;
+ } wd1;
+
+ u32 magic_num;
+ u32 hotpooltag;
+ u32 xfer_rsp;
+ u32 rsvd[5];
+
+ u32 dif_tmp[4]; /* HW will overwrite it */
+};
+
+struct spfc_scqe_rcv_els_cmd {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 did : 24;
+ u32 class_mode : 8; /* 0:class3, 1:class2 */
+ } wd0;
+
+ struct {
+ u32 sid : 24;
+ u32 rsvd1 : 8;
+ } wd1;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd2;
+
+ struct {
+ u32 user_id_num : 16;
+ u32 data_len : 16;
+ } wd3;
+	/* User ID of SRQ SGE, used for driver buffer release */
+ u16 user_id[FC_LS_GS_USERID_CNT_MAX];
+ u32 ts;
+};
+
+struct spfc_scqe_param_check_scq {
+ struct spfc_scqe_ch ch;
+
+ u8 rsvd0[3];
+ u8 port_id;
+
+ u16 scqn;
+ u16 check_item;
+
+ u16 exch_id_load;
+ u16 exch_id;
+
+ u16 historty_type;
+ u16 entry_count;
+
+ u32 xid;
+
+ u32 gpa_h;
+ u32 gpa_l;
+
+ u32 magic_num;
+ u32 hotpool_tag;
+
+ u32 payload_len;
+ u32 sub_err;
+
+ u32 rsvd2[3];
+};
+
+struct spfc_scqe_rcv_abts_cmd {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 did : 24;
+ u32 rsvd0 : 8;
+ } wd0;
+
+ struct {
+ u32 sid : 24;
+ u32 rsvd1 : 8;
+ } wd1;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd2;
+};
+
+struct spfc_scqe_rcv_els_gs_rsp {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd1;
+
+ struct {
+ u32 conn_id : 16;
+ u32 data_len : 16; /* ELS/GS RSP Payload length */
+ } wd2;
+
+ struct {
+ u32 did : 24;
+ u32 rsvd : 6;
+ u32 echo_rsp : 1;
+ u32 end_rsp : 1;
+ } wd3;
+
+ struct {
+ u32 sid : 24;
+ u32 user_id_num : 8;
+ } wd4;
+
+ struct {
+ u32 rsvd : 16;
+ u32 hotpooltag : 16;
+ } wd5;
+
+ u32 magic_num;
+ u16 user_id[FC_LS_GS_USERID_CNT_MAX];
+};
+
+struct spfc_scqe_rcv_flush_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rsvd0 : 4;
+ u32 clr_pos : 12;
+ u32 port_id : 8;
+ u32 last_flush : 8;
+ } wd0;
+};
+
+struct spfc_scqe_rcv_clear_buf_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rsvd0 : 24;
+ u32 port_id : 8;
+ } wd0;
+};
+
+struct spfc_scqe_clr_srq_rsp {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 srq_type : 16;
+ u32 cur_wqe_msn : 16;
+ } wd0;
+};
+
+struct spfc_scqe_itmf_marker_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd1;
+
+ struct {
+ u32 did : 24;
+ u32 end_rsp : 8;
+ } wd2;
+
+ struct {
+ u32 sid : 24;
+ u32 rsvd1 : 8;
+ } wd3;
+
+ struct {
+ u32 hotpooltag : 16;
+ u32 rsvd : 16;
+ } wd4;
+
+ u32 magic_num;
+};
+
+struct spfc_scqe_abts_marker_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd1;
+
+ struct {
+ u32 did : 24;
+ u32 end_rsp : 8;
+ } wd2;
+
+ struct {
+ u32 sid : 24;
+ u32 io_state : 8;
+ } wd3;
+
+ struct {
+ u32 hotpooltag : 16;
+ u32 rsvd : 16;
+ } wd4;
+
+ u32 magic_num;
+};
+
+struct spfc_scqe_ini_abort_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd1;
+
+ struct {
+ u32 did : 24;
+ u32 rsvd : 8;
+ } wd2;
+
+ struct {
+ u32 sid : 24;
+ u32 io_state : 8;
+ } wd3;
+
+ struct {
+ u32 hotpooltag : 16;
+ u32 rsvd : 16;
+ } wd4;
+
+ u32 magic_num;
+};
+
+struct spfc_scqe_sq_nop_sts {
+ struct spfc_scqe_ch ch;
+ struct {
+ u32 rsvd : 16;
+ u32 sqn : 16;
+ } wd0;
+ struct {
+ u32 rsvd : 16;
+ u32 conn_id : 16;
+ } wd1;
+ u32 magic_num;
+};
+
+/* SCQE, should not be over 64B */
+#define FC_SCQE_SIZE 64
+union spfc_scqe {
+ struct spfc_scqe_type common;
+ struct spfc_scqe_sess_sts sess_sts; /* session enable/disable/delete sts */
+ struct spfc_scqe_comm_rsp_sts comm_sts; /* aborts/abts_rsp/els rsp sts */
+ struct spfc_scqe_rcv_clear_buf_sts clear_sts; /* clear buffer sts */
+ struct spfc_scqe_rcv_flush_sts flush_sts; /* flush sq sts */
+ struct spfc_scqe_iresp iresp;
+ struct spfc_scqe_rcv_abts_rsp rcv_abts_rsp; /* recv abts rsp */
+ struct spfc_scqe_fcp_rsp_sts fcp_rsp_sts; /* Read/Write/Rsp sts */
+ struct spfc_scqe_rcv_els_cmd rcv_els_cmd; /* recv els cmd */
+ struct spfc_scqe_rcv_abts_cmd rcv_abts_cmd; /* recv abts cmd */
+ struct spfc_scqe_rcv_els_gs_rsp rcv_els_gs_rsp; /* recv els/gs rsp */
+ struct spfc_scqe_clr_srq_rsp clr_srq_sts;
+ struct spfc_scqe_itmf_marker_sts itmf_marker_sts; /* tmf marker */
+ struct spfc_scqe_abts_marker_sts abts_marker_sts; /* abts marker */
+ struct spfc_scqe_dif_result dif_result;
+ struct spfc_scqe_param_check_scq param_check_sts;
+ struct spfc_scqe_nvme_iresp nvme_iresp;
+ struct spfc_scqe_ini_abort_sts ini_abort_sts;
+ struct spfc_scqe_sq_nop_sts sq_nop_sts;
+};
+
+struct spfc_cmdqe_type {
+ struct {
+ u32 rx_id : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+};
+
+struct spfc_cmdqe_send_ack {
+ struct {
+ u32 rx_id : 16;
+ u32 immi_stand : 1;
+ u32 rsvd0 : 7;
+ u32 task_type : 8;
+ } wd0;
+
+ u32 xid;
+ u32 cid;
+};
+
+struct spfc_cmdqe_send_aeq_err {
+ struct {
+ u32 errorevent : 8;
+ u32 errortype : 8;
+ u32 portid : 8;
+ u32 task_type : 8;
+ } wd0;
+};
+
+/* CMDQE, variable length */
+union spfc_cmdqe {
+ struct spfc_cmdqe_type common;
+ struct spfc_cmdqe_sess_en session_enable;
+ struct spfc_cmdqe_abts_rsp snd_abts_rsp;
+ struct spfc_cmdqe_abort snd_abort;
+ struct spfc_cmdqe_buffer_clear buffer_clear;
+ struct spfc_cmdqe_flush_sq flush_sq;
+ struct spfc_cmdqe_dump_exch dump_exch;
+ struct spfc_cmdqe_creat_srqc create_srqc;
+ struct spfc_cmdqe_delete_srqc delete_srqc;
+ struct spfc_cmdqe_clr_srq clear_srq;
+ struct spfc_cmdqe_creat_scqc create_scqc;
+ struct spfc_cmdqe_delete_scqc delete_scqc;
+ struct spfc_cmdqe_send_ack send_ack;
+ struct spfc_cmdqe_send_aeq_err send_aeqerr;
+ struct spfc_cmdqe_creat_ssqc createssqc;
+ struct spfc_cmdqe_delete_ssqc deletessqc;
+ struct spfc_cmdqe_cmdqe_dfx dfx_info;
+ struct spfc_cmdqe_exch_id_free xid_free;
+};
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_io.c b/drivers/scsi/spfc/hw/spfc_io.c
new file mode 100644
index 000000000000..2b1d1c607b13
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_io.c
@@ -0,0 +1,1193 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_io.h"
+#include "spfc_module.h"
+#include "spfc_service.h"
+
+#define SPFC_SGE_WD1_XID_MASK 0x3fff
+
+u32 dif_protect_opcode = INVALID_VALUE32;
+u32 dif_app_esc_check = SPFC_DIF_APP_REF_ESC_CHECK;
+u32 dif_ref_esc_check = SPFC_DIF_APP_REF_ESC_CHECK;
+u32 grd_agm_ini_ctrl = SPFC_DIF_CRC_CS_INITIAL_CONFIG_BY_BIT0_1;
+u32 ref_tag_no_increase;
+u32 dix_flag;
+u32 grd_ctrl;
+u32 grd_agm_ctrl = SPFC_DIF_GUARD_VERIFY_ALGORITHM_CTL_T10_CRC16;
+u32 cmp_app_tag_mask = 0xffff;
+u32 app_tag_ctrl;
+u32 ref_tag_ctrl;
+u32 ref_tag_mod = INVALID_VALUE32;
+u32 rep_ref_tag;
+u32 rx_rep_ref_tag;
+u16 cmp_app_tag;
+u16 rep_app_tag;
+
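+/* Update the TX or RX DIF error counters according to the error flags carried in @info */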
+static void spfc_dif_err_count(struct spfc_hba_info *hba, u8 info)
+{
+ u8 dif_info = info;
+
+ if (dif_info & SPFC_TX_DIF_ERROR_FLAG) {
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_ALL);
+ if (dif_info & SPFC_DIF_ERROR_CODE_CRC)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_CRC);
+
+ if (dif_info & SPFC_DIF_ERROR_CODE_APP)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_APP);
+
+ if (dif_info & SPFC_DIF_ERROR_CODE_REF)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_REF);
+ } else {
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_ALL);
+ if (dif_info & SPFC_DIF_ERROR_CODE_CRC)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_CRC);
+
+ if (dif_info & SPFC_DIF_ERROR_CODE_APP)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_APP);
+
+ if (dif_info & SPFC_DIF_ERROR_CODE_REF)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_REF);
+ }
+}
+
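+/* Fill the WQE DIF section for an I/O without DIF protection: DIF disabled, QoS fields set */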
+void spfc_build_no_dif_control(struct unf_frame_pkg *pkg,
+ struct spfc_fc_dif_info *info)
+{
+ struct spfc_fc_dif_info *dif_info = info;
+
+ /* dif enable or disable */
+ dif_info->wd0.difx_en = SPFC_DIF_DISABLE;
+
+ dif_info->wd1.vpid = pkg->qos_level;
+ dif_info->wd1.lun_qos_en = 1;
+}
+
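+/* Set guard/ref/app tag controls for the verify-and-forward/replace DIF actions */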
+void spfc_dif_action_forward(struct spfc_fc_dif_info *dif_info_l1,
+ struct unf_dif_control_info *dif_ctrl_u1)
+{
+ dif_info_l1->wd0.grd_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_CRC_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.grd_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_REPLACE_CRC_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_REPLACE
+ : SPFC_DIF_GARD_REF_APP_CTRL_FORWARD;
+
+ dif_info_l1->wd0.ref_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_LBA_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.ref_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_REPLACE_LBA_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_REPLACE
+ : SPFC_DIF_GARD_REF_APP_CTRL_FORWARD;
+
+ dif_info_l1->wd0.app_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_APP_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.app_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_REPLACE_APP_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_REPLACE
+ : SPFC_DIF_GARD_REF_APP_CTRL_FORWARD;
+}
+
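+/* Set guard/ref/app tag controls for the verify-and-delete DIF action */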
+void spfc_dif_action_delete(struct spfc_fc_dif_info *dif_info_l1,
+ struct unf_dif_control_info *dif_ctrl_u1)
+{
+ dif_info_l1->wd0.grd_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_CRC_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.grd_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_DELETE;
+
+ dif_info_l1->wd0.ref_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_LBA_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.ref_tag_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_DELETE;
+
+ dif_info_l1->wd0.app_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_APP_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.app_tag_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_DELETE;
+}
+
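+/* Convert the upper-layer DIF protect opcode into WQE guard/ref/app tag controls */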
+static void spfc_convert_dif_action(struct unf_dif_control_info *dif_ctrl,
+ struct spfc_fc_dif_info *dif_info)
+{
+ struct spfc_fc_dif_info *dif_info_l1 = NULL;
+ struct unf_dif_control_info *dif_ctrl_u1 = NULL;
+
+ dif_info_l1 = dif_info;
+ dif_ctrl_u1 = dif_ctrl;
+
+ switch (UNF_DIF_ACTION_MASK & dif_ctrl_u1->protect_opcode) {
+ case UNF_DIF_ACTION_VERIFY_AND_REPLACE:
+ case UNF_DIF_ACTION_VERIFY_AND_FORWARD:
+ spfc_dif_action_forward(dif_info_l1, dif_ctrl_u1);
+ break;
+
+ case UNF_DIF_ACTION_INSERT:
+ dif_info_l1->wd0.grd_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.grd_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_INSERT;
+ dif_info_l1->wd0.ref_tag_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.ref_tag_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_INSERT;
+ dif_info_l1->wd0.app_tag_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.app_tag_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_INSERT;
+ break;
+
+ case UNF_DIF_ACTION_VERIFY_AND_DELETE:
+ spfc_dif_action_delete(dif_info_l1, dif_ctrl_u1);
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "Unknown dif protect opcode 0x%x",
+ dif_ctrl_u1->protect_opcode);
+ break;
+ }
+}
+
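+/* Fill the compare/replace app and ref tags, applying the global overrides when they are non-zero */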
+void spfc_get_dif_info_l1(struct spfc_fc_dif_info *dif_info_l1,
+ struct unf_dif_control_info *dif_ctrl_u1)
+{
+ dif_info_l1->wd1.cmp_app_tag_msk = cmp_app_tag_mask;
+
+ dif_info_l1->rep_app_tag = dif_ctrl_u1->app_tag;
+ dif_info_l1->rep_ref_tag = dif_ctrl_u1->start_lba;
+
+ dif_info_l1->cmp_app_tag = dif_ctrl_u1->app_tag;
+ dif_info_l1->cmp_ref_tag = dif_ctrl_u1->start_lba;
+
+ if (cmp_app_tag != 0)
+ dif_info_l1->cmp_app_tag = cmp_app_tag;
+
+ if (rep_app_tag != 0)
+ dif_info_l1->rep_app_tag = rep_app_tag;
+
+ if (rep_ref_tag != 0)
+ dif_info_l1->rep_ref_tag = rep_ref_tag;
+}
+
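+/* Build the WQE DIF section for a protected I/O from the packet's DIF control information */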
+void spfc_build_dif_control(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg,
+ struct spfc_fc_dif_info *dif_info)
+{
+ struct spfc_fc_dif_info *dif_info_l1 = NULL;
+ struct unf_dif_control_info *dif_ctrl_u1 = NULL;
+
+ dif_info_l1 = dif_info;
+ dif_ctrl_u1 = &pkg->dif_control;
+
+ /* dif enable or disable */
+ dif_info_l1->wd0.difx_en = SPFC_DIF_ENABLE;
+
+ dif_info_l1->wd1.vpid = pkg->qos_level;
+ dif_info_l1->wd1.lun_qos_en = 1;
+
+ /* 512B + 8 size mode */
+ dif_info_l1->wd0.sct_size = (dif_ctrl_u1->flags & UNF_DIF_SECTSIZE_4KB)
+ ? SPFC_DIF_SECTOR_4KB_MODE
+ : SPFC_DIF_SECTOR_512B_MODE;
+
+ /* dif type 1 */
+ dif_info_l1->wd0.dif_verify_type = dif_type;
+
+	/* Check whether the 0xffff app or ref domain is escaped */
+	/* If the type1 app tag is all 0xff, the sector still needs to be checked:
+	 * dif_info_l1->wd0.difx_app_esc = SPFC_DIF_APP_REF_ESC_CHECK
+	 */
+
+ dif_info_l1->wd0.difx_app_esc = dif_app_esc_check;
+
+	/* If the type1 ref tag is all 0xff, the sector check is still required */
+ dif_info_l1->wd0.difx_ref_esc = dif_ref_esc_check;
+
+ /* Currently, only t10 crc is supported */
+ dif_info_l1->wd0.grd_agm_ctrl = 0;
+
+ /* Set this parameter based on the values of bit zero and bit one.
+ * The initial value is 0, and the value is UNF_DEFAULT_CRC_GUARD_SEED
+ */
+ dif_info_l1->wd0.grd_agm_ini_ctrl = grd_agm_ini_ctrl;
+ dif_info_l1->wd0.app_tag_ctrl = 0;
+ dif_info_l1->wd0.grd_ctrl = 0;
+ dif_info_l1->wd0.ref_tag_ctrl = 0;
+
+ /* Convert the verify operation, replace, forward, insert,
+ * and delete operations based on the actual operation code of the upper
+ * layer
+ */
+ if (dif_protect_opcode != INVALID_VALUE32) {
+ dif_ctrl_u1->protect_opcode =
+ dif_protect_opcode |
+ (dif_ctrl_u1->protect_opcode & UNF_DIF_ACTION_MASK);
+ }
+
+ spfc_convert_dif_action(dif_ctrl_u1, dif_info_l1);
+ dif_info_l1->wd0.app_tag_ctrl |= app_tag_ctrl;
+
+ /* Address self-increase mode */
+ dif_info_l1->wd0.ref_tag_mode =
+ (dif_ctrl_u1->protect_opcode & UNF_DIF_ACTION_NO_INCREASE_REFTAG)
+ ? (BOTH_NONE)
+ : (BOTH_INCREASE);
+
+ if (ref_tag_mod != INVALID_VALUE32)
+ dif_info_l1->wd0.ref_tag_mode = ref_tag_mod;
+
+ /* This parameter is used only when type 3 is set to 0xffff. */
+ spfc_get_dif_info_l1(dif_info_l1, dif_ctrl_u1);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) sid_did(0x%x_0x%x) package type(0x%x) apptag(0x%x) flag(0x%x) opcode(0x%x) fcpdl(0x%x) statlba(0x%x)",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did, pkg->type, pkg->dif_control.app_tag,
+ pkg->dif_control.flags, pkg->dif_control.protect_opcode,
+ pkg->dif_control.fcp_dl, pkg->dif_control.start_lba);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) cover dif control info, app:cmp_tag(0x%x) cmp_tag_mask(0x%x) rep_tag(0x%x), ref:tag_mode(0x%x) cmp_tag(0x%x) rep_tag(0x%x).",
+ hba->port_cfg.port_id, dif_info_l1->cmp_app_tag,
+ dif_info_l1->wd1.cmp_app_tag_msk, dif_info_l1->rep_app_tag,
+ dif_info_l1->wd0.ref_tag_mode, dif_info_l1->cmp_ref_tag,
+ dif_info_l1->rep_ref_tag);
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) cover dif control info, ctrl:grd(0x%x) ref(0x%x) app(0x%x).",
+ hba->port_cfg.port_id, dif_info_l1->wd0.grd_ctrl,
+ dif_info_l1->wd0.ref_tag_ctrl,
+ dif_info_l1->wd0.app_tag_ctrl);
+}
+
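+/* Fill external SGL pages with sges; the last sge of a full page links to the next page */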
+static u32 spfc_fill_external_sgl_page(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg,
+ struct unf_esgl_page *esgl_page,
+ u32 sge_num, int direction,
+ u32 context_id, u32 dif_flag)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 index = 0;
+ u32 sge_num_per_page = 0;
+ u32 buffer_addr = 0;
+ u32 buf_len = 0;
+ char *buf = NULL;
+ ulong phys = 0;
+ struct unf_esgl_page *unf_esgl_page = NULL;
+ struct spfc_variable_sge *sge = NULL;
+
+ unf_esgl_page = esgl_page;
+ while (sge_num > 0) {
+ /* Obtains the initial address of the sge page */
+ sge = (struct spfc_variable_sge *)unf_esgl_page->page_address;
+
+ /* Calculate the number of sge on each page */
+ sge_num_per_page = (unf_esgl_page->page_size) / sizeof(struct spfc_variable_sge);
+
+ /* Fill in sgl page. The last sge of each page is link sge by
+ * default
+ */
+ for (index = 0; index < (sge_num_per_page - 1); index++) {
+ UNF_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len, dif_flag);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+ phys = (ulong)buf;
+ sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
+ sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
+ sge[index].wd0.buf_len = buf_len;
+ sge[index].wd0.r_flag = 0;
+ sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ /* Parity bit */
+ sge[index].wd1.buf_addr_gpa = (sge[index].buf_addr_lo >> UNF_SHIFT_16);
+ sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
+
+ spfc_cpu_to_big32(&sge[index], sizeof(struct spfc_variable_sge));
+
+ sge_num--;
+ if (sge_num == 0)
+ break;
+ }
+
+		/* Set the end flag on the last sge of the page if all the
+		 * sges have been filled.
+		 */
+ if (sge_num == 0) {
+ sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sge[index].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+
+ /* Parity bit */
+ buffer_addr = be32_to_cpu(sge[index].buf_addr_lo);
+ sge[index].wd1.buf_addr_gpa = (buffer_addr >> UNF_SHIFT_16);
+ sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
+
+ spfc_cpu_to_big32(&sge[index].wd1, SPFC_DWORD_BYTE);
+ }
+ /* If only one sge is left empty, the sge reserved on the page
+ * is used for filling.
+ */
+ else if (sge_num == 1) {
+ UNF_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len,
+ dif_flag);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+ phys = (ulong)buf;
+ sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
+ sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
+ sge[index].wd0.buf_len = buf_len;
+ sge[index].wd0.r_flag = 0;
+ sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sge[index].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+
+ /* Parity bit */
+ sge[index].wd1.buf_addr_gpa = (sge[index].buf_addr_lo >> UNF_SHIFT_16);
+ sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
+
+ spfc_cpu_to_big32(&sge[index], sizeof(struct spfc_variable_sge));
+
+ sge_num--;
+ } else {
+ /* Apply for a new sgl page and fill in link sge */
+ UNF_GET_FREE_ESGL_PAGE(unf_esgl_page, hba->lport, pkg);
+ if (!unf_esgl_page) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Get free esgl page failed.");
+ return UNF_RETURN_ERROR;
+ }
+ phys = unf_esgl_page->esgl_phy_addr;
+ sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
+ sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
+
+ /* For the cascaded wqe, you only need to enter the
+ * cascading buffer address and extension flag, and do
+ * not need to fill in other fields
+ */
+ sge[index].wd0.buf_len = 0;
+ sge[index].wd0.r_flag = 0;
+ sge[index].wd1.extension_flag = SPFC_WQE_SGE_EXTEND_FLAG;
+ sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ /* parity bit */
+ sge[index].wd1.buf_addr_gpa = (sge[index].buf_addr_lo >> UNF_SHIFT_16);
+ sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
+
+ spfc_cpu_to_big32(&sge[index], sizeof(struct spfc_variable_sge));
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Port(0x%x) SID(0x%x) DID(0x%x) RXID(0x%x) build esgl left sge num: %u.",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did,
+ pkg->frame_head.oxid_rxid, sge_num);
+ }
+
+ return RETURN_OK;
+}
+
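+/* Mount the DIF buffer as a single local sge placed right after the BD sges in the SQE */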
+static u32 spfc_build_local_dif_sgl(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
+ int direction, u32 bd_sge_num)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ char *buf = NULL;
+ u32 buf_len = 0;
+ ulong phys = 0;
+ u32 dif_sge_place = 0;
+
+ /* DIF SGE must be followed by BD SGE */
+ dif_sge_place = ((bd_sge_num <= pkg->entry_count) ? bd_sge_num : pkg->entry_count);
+
+	/* The dif_sge_count == 0 case needs to be specially processed: no buffer
+	 * is mounted, len is set to zero, the Last-bit is set to one, and the
+	 * E-bit is set to 0.
+	 */
+ if (pkg->dif_control.dif_sge_count == 0) {
+ sqe->sge[dif_sge_place].buf_addr_hi = 0;
+ sqe->sge[dif_sge_place].buf_addr_lo = 0;
+ sqe->sge[dif_sge_place].wd0.buf_len = 0;
+ } else {
+ UNF_CM_GET_DIF_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "DOUBLE DIF Get Dif Buf Fail.");
+ return UNF_RETURN_ERROR;
+ }
+ phys = (ulong)buf;
+ sqe->sge[dif_sge_place].buf_addr_hi = UNF_DMA_HI32(phys);
+ sqe->sge[dif_sge_place].buf_addr_lo = UNF_DMA_LO32(phys);
+ sqe->sge[dif_sge_place].wd0.buf_len = buf_len;
+ }
+
+	/* rdma flag. Not used by FC, set to 0. */
+ sqe->sge[dif_sge_place].wd0.r_flag = 0;
+
+ /* parity bit */
+ sqe->sge[dif_sge_place].wd1.buf_addr_gpa = 0;
+ sqe->sge[dif_sge_place].wd1.xid = 0;
+
+ /* The local sgl does not use the cascading SGE. Therefore, the value of
+ * this field is always 0.
+ */
+ sqe->sge[dif_sge_place].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sqe->sge[dif_sge_place].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+
+ spfc_cpu_to_big32(&sqe->sge[dif_sge_place], sizeof(struct spfc_variable_sge));
+
+ return RETURN_OK;
+}
+
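+/* Mount the DIF buffers through a cascading (external) SGL page linked after the BD sges */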
+static u32 spfc_build_external_dif_sgl(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg,
+ struct spfc_sqe *sqe, int direction,
+ u32 bd_sge_num)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_esgl_page *esgl_page = NULL;
+ ulong phys = 0;
+ u32 left_sge_num = 0;
+ u32 dif_sge_place = 0;
+ struct spfc_parent_ssq_info *ssq = NULL;
+ u32 ssqn = 0;
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ ssq = &hba->parent_queue_mgr->shared_queue[ssqn].parent_ssq_info;
+
+ /* DIF SGE must be followed by BD SGE */
+ dif_sge_place = ((bd_sge_num <= pkg->entry_count) ? bd_sge_num : pkg->entry_count);
+
+ /* Allocate the first page first */
+ UNF_GET_FREE_ESGL_PAGE(esgl_page, hba->lport, pkg);
+ if (!esgl_page) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "DOUBLE DIF Get External Page Fail.");
+ return UNF_RETURN_ERROR;
+ }
+
+ phys = esgl_page->esgl_phy_addr;
+
+ /* Configuring the Address of the Cascading Page */
+ sqe->sge[dif_sge_place].buf_addr_hi = UNF_DMA_HI32(phys);
+ sqe->sge[dif_sge_place].buf_addr_lo = UNF_DMA_LO32(phys);
+
+ /* Configuring Control Information About the Cascading Page */
+ sqe->sge[dif_sge_place].wd0.buf_len = 0;
+ sqe->sge[dif_sge_place].wd0.r_flag = 0;
+ sqe->sge[dif_sge_place].wd1.extension_flag = SPFC_WQE_SGE_EXTEND_FLAG;
+ sqe->sge[dif_sge_place].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ /* parity bit */
+ sqe->sge[dif_sge_place].wd1.buf_addr_gpa = 0;
+ sqe->sge[dif_sge_place].wd1.xid = 0;
+
+ spfc_cpu_to_big32(&sqe->sge[dif_sge_place], sizeof(struct spfc_variable_sge));
+
+ /* Fill in the sge information on the cascading page */
+ left_sge_num = pkg->dif_control.dif_sge_count;
+ ret = spfc_fill_external_sgl_page(hba, pkg, esgl_page, left_sge_num,
+ direction, ssq->context_id, true);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ return RETURN_OK;
+}
+
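+/* Build the data SGL inside the SQE when all sges fit locally, without a cascading page */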
+static u32 spfc_build_local_sgl(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
+ int direction)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ char *buf = NULL;
+ u32 buf_len = 0;
+ u32 index = 0;
+ ulong phys = 0;
+
+ for (index = 0; index < pkg->entry_count; index++) {
+ UNF_CM_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ phys = (ulong)buf;
+ sqe->sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
+ sqe->sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
+ sqe->sge[index].wd0.buf_len = buf_len;
+
+		/* rdma flag. Not used by FC, set to 0. */
+ sqe->sge[index].wd0.r_flag = 0;
+
+ /* parity bit */
+ sqe->sge[index].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
+ sqe->sge[index].wd1.xid = 0;
+
+ /* The local sgl does not use the cascading SGE. Therefore, the
+ * value of this field is always 0.
+ */
+ sqe->sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ if (index == (pkg->entry_count - 1)) {
+ /* Sets the last WQE end flag 1 */
+ sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+ }
+
+ spfc_cpu_to_big32(&sqe->sge[index], sizeof(struct spfc_variable_sge));
+ }
+
+ /* Adjust the length of the BDSL field in the CTRL domain. */
+ SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.bdsl,
+ SPFC_BYTES_TO_QW_NUM((pkg->entry_count *
+ sizeof(struct spfc_variable_sge))));
+
+	/* The entry_count == 0 case needs to be specially processed: no buffer
+	 * is mounted, len is set to zero, the Last-bit is set to one, and the
+	 * E-bit is set to 0.
+	 */
+ if (pkg->entry_count == 0) {
+ sqe->sge[ARRAY_INDEX_0].buf_addr_hi = 0;
+ sqe->sge[ARRAY_INDEX_0].buf_addr_lo = 0;
+ sqe->sge[ARRAY_INDEX_0].wd0.buf_len = 0;
+
+		/* RDMA flag; not used by FC, so set to 0. */
+ sqe->sge[ARRAY_INDEX_0].wd0.r_flag = 0;
+
+ /* parity bit */
+ sqe->sge[ARRAY_INDEX_0].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
+ sqe->sge[ARRAY_INDEX_0].wd1.xid = 0;
+
+ /* The local sgl does not use the cascading SGE. Therefore, the
+ * value of this field is always 0.
+ */
+ sqe->sge[ARRAY_INDEX_0].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sqe->sge[ARRAY_INDEX_0].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+
+ spfc_cpu_to_big32(&sqe->sge[ARRAY_INDEX_0], sizeof(struct spfc_variable_sge));
+
+ /* Adjust the length of the BDSL field in the CTRL domain. */
+ SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.bdsl,
+ SPFC_BYTES_TO_QW_NUM(sizeof(struct spfc_variable_sge)));
+ }
+
+ return RETURN_OK;
+}
+
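+/*
+ * Build the BD SGL with an external (cascading) ESGL page: the first
+ * bd_sge_num - 1 entries are placed inline, the last inline SGE points to
+ * an ESGL page holding the remaining entries, and those entries are also
+ * copied into the SQE's extended SGE area.
+ */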
+static u32 spfc_build_external_sgl(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
+ int direction, u32 bd_sge_num)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ char *buf = NULL;
+ struct unf_esgl_page *esgl_page = NULL;
+ ulong phys = 0;
+ u32 buf_len = 0;
+ u32 index = 0;
+ u32 left_sge_num = 0;
+ u32 local_sge_num = 0;
+ struct spfc_parent_ssq_info *ssq = NULL;
+ u16 ssqn = 0;
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ ssq = &hba->parent_queue_mgr->shared_queue[ssqn].parent_ssq_info;
+
+	/* The caller guarantees bd_sge_num >= 1; the last inline SGE slot is
+	 * reserved for the cascade pointer.
+	 */
+ local_sge_num = bd_sge_num - 1;
+
+ for (index = 0; index < local_sge_num; index++) {
+ UNF_CM_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len);
+ if (unlikely(ret != RETURN_OK))
+ return UNF_RETURN_ERROR;
+
+ phys = (ulong)buf;
+
+ sqe->sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
+ sqe->sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
+ sqe->sge[index].wd0.buf_len = buf_len;
+
+ /* RDMA flag, which is not used by FC. */
+ sqe->sge[index].wd0.r_flag = 0;
+ sqe->sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ /* parity bit */
+ sqe->sge[index].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
+ sqe->sge[index].wd1.xid = 0;
+
+ spfc_cpu_to_big32(&sqe->sge[index], sizeof(struct spfc_variable_sge));
+ }
+
+ /* Calculate the number of remaining sge. */
+ left_sge_num = pkg->entry_count - local_sge_num;
+ /* Adjust the length of the BDSL field in the CTRL domain. */
+ SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.bdsl,
+ SPFC_BYTES_TO_QW_NUM((bd_sge_num * sizeof(struct spfc_variable_sge))));
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+		     "[info]Allocate extended sgl page, remaining sge: %u", left_sge_num);
+ /* Allocating the first cascading page */
+ UNF_GET_FREE_ESGL_PAGE(esgl_page, hba->lport, pkg);
+ if (unlikely(!esgl_page))
+ return UNF_RETURN_ERROR;
+
+ phys = esgl_page->esgl_phy_addr;
+
+ /* Configuring the Address of the Cascading Page */
+ sqe->sge[index].buf_addr_hi = (u32)UNF_DMA_HI32(phys);
+ sqe->sge[index].buf_addr_lo = (u32)UNF_DMA_LO32(phys);
+
+ /* Configuring Control Information About the Cascading Page */
+ sqe->sge[index].wd0.buf_len = 0;
+ sqe->sge[index].wd0.r_flag = 0;
+ sqe->sge[index].wd1.extension_flag = SPFC_WQE_SGE_EXTEND_FLAG;
+ sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ /* parity bit */
+ sqe->sge[index].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
+ sqe->sge[index].wd1.xid = 0;
+
+ spfc_cpu_to_big32(&sqe->sge[index], sizeof(struct spfc_variable_sge));
+
+ /* Fill in the sge information on the cascading page. */
+ ret = spfc_fill_external_sgl_page(hba, pkg, esgl_page, left_sge_num,
+ direction, ssq->context_id, false);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+ /* Copy the extended data sge to the extended sge of the extended wqe.*/
+ if (left_sge_num > 0) {
+ memcpy(sqe->esge, (void *)esgl_page->page_address,
+ SPFC_WQE_MAX_ESGE_NUM * sizeof(struct spfc_variable_sge));
+ }
+
+ return RETURN_OK;
+}
+
+u32 spfc_build_sgl_by_local_sge_num(struct unf_frame_pkg *pkg,
+ struct spfc_hba_info *hba, struct spfc_sqe *sqe,
+ int direction, u32 bd_sge_num)
+{
+ u32 ret = RETURN_OK;
+
+ if (pkg->entry_count <= bd_sge_num)
+ ret = spfc_build_local_sgl(hba, pkg, sqe, direction);
+ else
+ ret = spfc_build_external_sgl(hba, pkg, sqe, direction, bd_sge_num);
+
+ return ret;
+}
+
+u32 spfc_conf_dual_sgl_info(struct unf_frame_pkg *pkg,
+ struct spfc_hba_info *hba, struct spfc_sqe *sqe,
+ int direction, u32 bd_sge_num, bool double_sgl)
+{
+ u32 ret = RETURN_OK;
+
+ if (double_sgl) {
+ /* Adjust the length of the DIF_SL field in the CTRL domain */
+ SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.dif_sl,
+ SPFC_BYTES_TO_QW_NUM(sizeof(struct spfc_variable_sge)));
+
+ if (pkg->dif_control.dif_sge_count <= SPFC_WQE_SGE_DIF_ENTRY_NUM)
+ ret = spfc_build_local_dif_sgl(hba, pkg, sqe, direction, bd_sge_num);
+ else
+ ret = spfc_build_external_dif_sgl(hba, pkg, sqe, direction, bd_sge_num);
+ }
+
+ return ret;
+}
+
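+/*
+ * Build the data SGL for an I/O: when the packet requests a separate DIF
+ * SGL, part of the inline SGE space is reserved for DIF; the BD SGL is
+ * built (local or external) and the DIF SGL is then configured.
+ */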
+u32 spfc_build_sgl(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
+ struct spfc_sqe *sqe, int direction, u32 dif_flag)
+{
+#define SPFC_ESGE_CNT 3
+ u32 ret = RETURN_OK;
+ u32 bd_sge_num = SPFC_WQE_SGE_ENTRY_NUM;
+ bool double_sgl = false;
+
+ if (dif_flag != 0 && (pkg->dif_control.flags & UNF_DIF_DOUBLE_SGL)) {
+ bd_sge_num = SPFC_WQE_SGE_ENTRY_NUM - SPFC_WQE_SGE_DIF_ENTRY_NUM;
+ double_sgl = true;
+ }
+
+ /* Only one wqe local sge can be loaded. If more than one wqe local sge
+ * is used, use the esgl
+ */
+ ret = spfc_build_sgl_by_local_sge_num(pkg, hba, sqe, direction, bd_sge_num);
+
+ if (unlikely(ret != RETURN_OK))
+ return ret;
+
+ /* Configuring Dual SGL Information for DIF */
+ ret = spfc_conf_dual_sgl_info(pkg, hba, sqe, direction, bd_sge_num, double_sgl);
+
+ return ret;
+}
+
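+/*
+ * Adjust the DIX guard settings: when the global dix_flag switch is set,
+ * select the IP-checksum/CRC16 guard replacement according to the task
+ * direction and protect opcode; non-zero grd_agm_ctrl/grd_ctrl module
+ * parameters override the computed values.
+ */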
+void spfc_adjust_dix(struct unf_frame_pkg *pkg, struct spfc_fc_dif_info *dif_info,
+ u8 task_type)
+{
+ u8 tasktype = task_type;
+ struct spfc_fc_dif_info *dif_info_l1 = NULL;
+
+ dif_info_l1 = dif_info;
+
+ if (dix_flag == 1) {
+ if (tasktype == SPFC_SQE_FCP_IWRITE ||
+ tasktype == SPFC_SQE_FCP_TRD) {
+ if ((UNF_DIF_ACTION_MASK & pkg->dif_control.protect_opcode) ==
+ UNF_DIF_ACTION_VERIFY_AND_FORWARD) {
+ dif_info_l1->wd0.grd_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_REPLACE;
+ dif_info_l1->wd0.grd_agm_ctrl =
+ SPFC_DIF_GUARD_VERIFY_IP_CHECKSUM_REPLACE_CRC16;
+ }
+
+ if ((UNF_DIF_ACTION_MASK & pkg->dif_control.protect_opcode) ==
+ UNF_DIF_ACTION_VERIFY_AND_DELETE) {
+ dif_info_l1->wd0.grd_agm_ctrl =
+ SPFC_DIF_GUARD_VERIFY_IP_CHECKSUM_REPLACE_CRC16;
+ }
+ }
+
+ if (tasktype == SPFC_SQE_FCP_IREAD ||
+ tasktype == SPFC_SQE_FCP_TWR) {
+ if ((UNF_DIF_ACTION_MASK &
+ pkg->dif_control.protect_opcode) ==
+ UNF_DIF_ACTION_VERIFY_AND_FORWARD) {
+ dif_info_l1->wd0.grd_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_REPLACE;
+ dif_info_l1->wd0.grd_agm_ctrl =
+ SPFC_DIF_GUARD_VERIFY_CRC16_REPLACE_IP_CHECKSUM;
+ }
+
+ if ((UNF_DIF_ACTION_MASK &
+ pkg->dif_control.protect_opcode) ==
+ UNF_DIF_ACTION_INSERT) {
+ dif_info_l1->wd0.grd_agm_ctrl =
+ SPFC_DIF_GUARD_VERIFY_CRC16_REPLACE_IP_CHECKSUM;
+ }
+ }
+ }
+
+ if (grd_agm_ctrl != 0)
+ dif_info_l1->wd0.grd_agm_ctrl = grd_agm_ctrl;
+
+ if (grd_ctrl != 0)
+ dif_info_l1->wd0.grd_ctrl = grd_ctrl;
+}
+
+void spfc_get_dma_direction_by_fcp_cmnd(const struct unf_fcp_cmnd *fcp_cmnd,
+ int *dma_direction, u8 *task_type)
+{
+ if (UNF_FCP_WR_DATA & fcp_cmnd->control) {
+ *task_type = SPFC_SQE_FCP_IWRITE;
+ *dma_direction = DMA_TO_DEVICE;
+ } else if (UNF_GET_TASK_MGMT_FLAGS(fcp_cmnd->control) != 0) {
+ *task_type = SPFC_SQE_FCP_ITMF;
+ *dma_direction = DMA_FROM_DEVICE;
+ } else {
+ *task_type = SPFC_SQE_FCP_IREAD;
+ *dma_direction = DMA_FROM_DEVICE;
+ }
+}
+
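+/*
+ * Assemble the initiator command WQE: fill the task header and control
+ * sections, the task section, the DIF information (when protection is
+ * enabled) and finally the data SGL.
+ */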
+static inline u32 spfc_build_icmnd_wqe(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg,
+ struct spfc_sqe *sge)
+{
+ u32 ret = RETURN_OK;
+ int direction = 0;
+ u8 tasktype = 0;
+ struct unf_fcp_cmnd *fcp_cmnd = NULL;
+ struct spfc_sqe *sqe = sge;
+ u32 dif_flag = 0;
+
+ fcp_cmnd = pkg->fcp_cmnd;
+ if (unlikely(!fcp_cmnd)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+			     "[err]Package's FCP command pointer is NULL.");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ spfc_get_dma_direction_by_fcp_cmnd(fcp_cmnd, &direction, &tasktype);
+
+ spfc_build_icmnd_wqe_ts_header(pkg, sqe, tasktype, hba->exi_base, hba->port_index);
+
+ spfc_build_icmnd_wqe_ctrls(pkg, sqe);
+
+ spfc_build_icmnd_wqe_ts(hba, pkg, &sqe->ts_sl, &sqe->ts_ex);
+
+ if (sqe->ts_sl.task_type != SPFC_SQE_FCP_ITMF) {
+ if (pkg->dif_control.protect_opcode == UNF_DIF_ACTION_NONE) {
+ dif_flag = 0;
+ spfc_build_no_dif_control(pkg, &sqe->ts_sl.cont.icmnd.info.dif_info);
+ } else {
+ dif_flag = 1;
+ spfc_build_dif_control(hba, pkg, &sqe->ts_sl.cont.icmnd.info.dif_info);
+ spfc_adjust_dix(pkg,
+ &sqe->ts_sl.cont.icmnd.info.dif_info,
+ tasktype);
+ }
+ }
+
+ ret = spfc_build_sgl(hba, pkg, sqe, direction, dif_flag);
+
+ sqe->sid = UNF_GET_SID(pkg);
+ sqe->did = UNF_GET_DID(pkg);
+
+ return ret;
+}
+
+u32 spfc_send_scsi_cmnd(void *hba, struct unf_frame_pkg *pkg)
+{
+ struct spfc_hba_info *spfc_hba = NULL;
+ struct spfc_parent_sq_info *parent_sq = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_sqe sqe;
+ u16 ssqn;
+ struct spfc_parent_queue_info *parent_queue = NULL;
+
+ /* input param check */
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ SPFC_CHECK_PKG_ALLOCTIME(pkg);
+ memset(&sqe, 0, sizeof(struct spfc_sqe));
+ spfc_hba = hba;
+
+ /* 1. find parent sq for scsi_cmnd(pkg) */
+ parent_sq = spfc_find_parent_sq_by_pkg(spfc_hba, pkg);
+ if (unlikely(!parent_sq)) {
+ /* Do not need to print info */
+ return UNF_RETURN_ERROR;
+ }
+
+ pkg->qos_level += spfc_hba->vpid_start;
+
+ /* 2. build cmnd wqe (to sqe) for scsi_cmnd(pkg) */
+ ret = spfc_build_icmnd_wqe(spfc_hba, pkg, &sqe);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[fail]Port(0x%x) Build WQE failed, SID(0x%x) DID(0x%x) pkg type(0x%x) hottag(0x%x).",
+ spfc_hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did, pkg->type, UNF_GET_XCHG_TAG(pkg));
+
+ return ret;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) RPort(0x%x) send FCP_CMND TYPE(0x%x) Local_Xid(0x%x) hottag(0x%x) LBA(0x%llx)",
+ spfc_hba->port_cfg.port_id, parent_sq->rport_index,
+ sqe.ts_sl.task_type, sqe.ts_sl.local_xid,
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX],
+ *((u64 *)pkg->fcp_cmnd->cdb));
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ if (sqe.ts_sl.task_type == SPFC_SQE_FCP_ITMF) {
+ parent_queue = container_of(parent_sq, struct spfc_parent_queue_info,
+ parent_sq_info);
+ ret = spfc_suspend_sqe_and_send_nop(spfc_hba, parent_queue, &sqe, pkg);
+ return ret;
+ }
+ /* 3. En-Queue Parent SQ for scsi_cmnd(pkg) sqe */
+ ret = spfc_parent_sq_enqueue(parent_sq, &sqe, ssqn);
+
+ return ret;
+}
+
+static void spfc_ini_status_default_handler(struct spfc_scqe_iresp *iresp,
+ struct unf_frame_pkg *pkg)
+{
+ u8 control = 0;
+ u16 com_err_code = 0;
+
+ control = iresp->wd2.fcp_flag & SPFC_CTRL_MASK;
+
+ if (iresp->fcp_resid != 0) {
+ com_err_code = UNF_IO_FAILED;
+ pkg->residus_len = iresp->fcp_resid;
+ } else {
+ com_err_code = UNF_IO_SUCCESS;
+ pkg->residus_len = 0;
+ }
+
+ pkg->status = spfc_fill_pkg_status(com_err_code, control, iresp->wd2.scsi_status);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Fill package with status: 0x%x, residus len: 0x%x",
+ pkg->status, pkg->residus_len);
+}
+
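+/*
+ * Parse the FCP_RSP IU flags: map RESID_UNDER/RESID_OVER to under/over
+ * flow status, and record the response/sense payload lengths when the
+ * RSP_LEN/SNS_LEN valid bits are set.
+ */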
+static void spfc_check_fcp_rsp_iu(struct spfc_scqe_iresp *iresp,
+ struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg)
+{
+ u8 scsi_status = 0;
+ u8 control = 0;
+
+ control = (u8)iresp->wd2.fcp_flag;
+ scsi_status = (u8)iresp->wd2.scsi_status;
+
+	/* The FCP_RSP IU fields arrive little-endian in the IOB WQE and are
+	 * passed on to the COM layer's pkg.
+	 */
+ if (control & FCP_RESID_UNDER_MASK) {
+ /* under flow: usually occurs in inquiry */
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]I_STS IOB posts under flow with residus len: %u, FCP residue: %u.",
+ pkg->residus_len, iresp->fcp_resid);
+
+ if (pkg->residus_len != iresp->fcp_resid)
+ pkg->status = spfc_fill_pkg_status(UNF_IO_FAILED, control, scsi_status);
+ else
+ pkg->status = spfc_fill_pkg_status(UNF_IO_UNDER_FLOW, control, scsi_status);
+ }
+
+ if (control & FCP_RESID_OVER_MASK) {
+ /* over flow: error happened */
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]I_STS IOB posts over flow with residus len: %u, FCP residue: %u.",
+ pkg->residus_len, iresp->fcp_resid);
+
+ if (pkg->residus_len != iresp->fcp_resid)
+ pkg->status = spfc_fill_pkg_status(UNF_IO_FAILED, control, scsi_status);
+ else
+ pkg->status = spfc_fill_pkg_status(UNF_IO_OVER_FLOW, control, scsi_status);
+ }
+
+ pkg->unf_rsp_pload_bl.length = 0;
+ pkg->unf_sense_pload_bl.length = 0;
+
+ if (control & FCP_RSP_LEN_VALID_MASK) {
+ /* dma by chip */
+ pkg->unf_rsp_pload_bl.buffer_ptr = NULL;
+
+ pkg->unf_rsp_pload_bl.length = iresp->fcp_rsp_len;
+ pkg->byte_orders |= UNF_BIT_3;
+ }
+
+ if (control & FCP_SNS_LEN_VALID_MASK) {
+ /* dma by chip */
+ pkg->unf_sense_pload_bl.buffer_ptr = NULL;
+
+ pkg->unf_sense_pload_bl.length = iresp->fcp_sns_len;
+ pkg->byte_orders |= UNF_BIT_4;
+ }
+
+ if (iresp->wd1.user_id_num == 1 &&
+ (pkg->unf_sense_pload_bl.length + pkg->unf_rsp_pload_bl.length > 0)) {
+ pkg->unf_rsp_pload_bl.buffer_ptr =
+ (u8 *)spfc_get_els_buf_by_user_id(hba, (u16)iresp->user_id[ARRAY_INDEX_0]);
+ } else if (iresp->wd1.user_id_num > 1) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]receive buff num 0x%x > 1 0x%x",
+ iresp->wd1.user_id_num, control);
+ }
+}
+
+u16 spfc_get_com_err_code(struct unf_frame_pkg *pkg)
+{
+ u16 com_err_code = UNF_IO_FAILED;
+ u32 status_subcode = 0;
+
+ status_subcode = pkg->status_sub_code;
+
+ if (likely(status_subcode == 0))
+ com_err_code = 0;
+ else if (status_subcode == UNF_DIF_CRC_ERR)
+ com_err_code = UNF_IO_DIF_ERROR;
+ else if (status_subcode == UNF_DIF_LBA_ERR)
+ com_err_code = UNF_IO_DIF_REF_ERROR;
+ else if (status_subcode == UNF_DIF_APP_ERR)
+ com_err_code = UNF_IO_DIF_GEN_ERROR;
+
+ return com_err_code;
+}
+
+void spfc_process_ini_fail_io(struct spfc_hba_info *hba, union spfc_scqe *iresp,
+ struct unf_frame_pkg *pkg)
+{
+ u16 com_err_code = UNF_IO_FAILED;
+
+ /* 1. error stats process */
+ if (SPFC_GET_SCQE_STATUS((union spfc_scqe *)(void *)iresp) != 0) {
+ switch (SPFC_GET_SCQE_STATUS((union spfc_scqe *)(void *)iresp)) {
+ /* I/O not complete: 1.session reset; 2.clear buffer */
+ case FC_CQE_BUFFER_CLEAR_IO_COMPLETED:
+ case FC_CQE_SESSION_RST_CLEAR_IO_COMPLETED:
+ case FC_CQE_SESSION_ONLY_CLEAR_IO_COMPLETED:
+ case FC_CQE_WQE_FLUSH_IO_COMPLETED:
+ com_err_code = UNF_IO_CLEAN_UP;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[warn]Port(0x%x) INI IO not complete, OX_ID(0x%x) RX_ID(0x%x) status(0x%x)",
+ hba->port_cfg.port_id,
+ ((struct spfc_scqe_iresp *)iresp)->wd0.ox_id,
+ ((struct spfc_scqe_iresp *)iresp)->wd0.rx_id,
+ com_err_code);
+
+ break;
+ /* Allocate task id(oxid) fail */
+ case FC_ERROR_INVALID_TASK_ID:
+ com_err_code = UNF_IO_NO_XCHG;
+ break;
+ case FC_ALLOC_EXCH_ID_FAILED:
+ com_err_code = UNF_IO_NO_XCHG;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[warn]Port(0x%x) INI IO, tag 0x%x alloc oxid fail.",
+ hba->port_cfg.port_id,
+ ((struct spfc_scqe_iresp *)iresp)->wd2.hotpooltag);
+ break;
+ case FC_ERROR_CODE_DATA_DIFX_FAILED:
+ com_err_code = pkg->status >> UNF_SHIFT_16;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[warn]Port(0x%x) INI IO, tag 0x%x tx dif error.",
+ hba->port_cfg.port_id,
+ ((struct spfc_scqe_iresp *)iresp)->wd2.hotpooltag);
+ break;
+ /* any other: I/O failed --->>> DID error */
+ default:
+ com_err_code = UNF_IO_FAILED;
+ break;
+ }
+
+ /* fill pkg status & return directly */
+ pkg->status =
+ spfc_fill_pkg_status(com_err_code,
+ ((struct spfc_scqe_iresp *)iresp)->wd2.fcp_flag,
+ ((struct spfc_scqe_iresp *)iresp)->wd2.scsi_status);
+
+ return;
+ }
+
+ /* 2. default stats process */
+ spfc_ini_status_default_handler((struct spfc_scqe_iresp *)iresp, pkg);
+
+ /* 3. FCP RSP IU check */
+ spfc_check_fcp_rsp_iu((struct spfc_scqe_iresp *)iresp, hba, pkg);
+}
+
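+/*
+ * Translate the DIF verify result carried in the SCQE into a status
+ * subcode (CRC/LBA/APP error) and fold it into pkg->status.
+ */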
+void spfc_process_dif_result(struct spfc_hba_info *hba, union spfc_scqe *wqe,
+ struct unf_frame_pkg *pkg)
+{
+ u16 com_err_code = UNF_IO_FAILED;
+ u8 dif_info = 0;
+
+ dif_info = wqe->common.wd0.dif_vry_rst;
+ if (dif_info == SPFC_TX_DIF_ERROR_FLAG) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) TGT received an abnormal tx DIF result.",
+ hba->port_cfg.port_id);
+ }
+
+ pkg->status_sub_code =
+ (dif_info & SPFC_DIF_ERROR_CODE_CRC)
+ ? UNF_DIF_CRC_ERR
+ : ((dif_info & SPFC_DIF_ERROR_CODE_REF)
+ ? UNF_DIF_LBA_ERR
+ : ((dif_info & SPFC_DIF_ERROR_CODE_APP) ? UNF_DIF_APP_ERR : 0));
+ com_err_code = spfc_get_com_err_code(pkg);
+ pkg->status = (u32)(com_err_code) << UNF_SHIFT_16;
+
+ if (unlikely(com_err_code != 0)) {
+ spfc_dif_err_count(hba, dif_info);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[error]Port(0x%x) INI io status with dif result(0x%x),subcode(0x%x) pkg->status(0x%x)",
+ hba->port_cfg.port_id, dif_info,
+ pkg->status_sub_code, pkg->status);
+ }
+}
+
+u32 spfc_scq_recv_iresp(struct spfc_hba_info *hba, union spfc_scqe *wqe)
+{
+#define SPFC_IRSP_USERID_LEN ((FC_SENSEDATA_USERID_CNT_MAX + 1) / 2)
+ struct spfc_scqe_iresp *iresp = NULL;
+ struct unf_frame_pkg pkg;
+ u32 ret = RETURN_OK;
+ u16 hot_tag;
+
+ FC_CHECK_RETURN_VALUE((hba), UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE((wqe), UNF_RETURN_ERROR);
+
+ iresp = (struct spfc_scqe_iresp *)(void *)wqe;
+
+ /* 1. Constraints: I_STS remain cnt must be zero */
+ if (unlikely(SPFC_GET_SCQE_REMAIN_CNT(wqe) != 0)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) ini_wqe(OX_ID:0x%x RX_ID:0x%x) HotTag(0x%x) remain_cnt(0x%x) abnormal, status(0x%x)",
+ hba->port_cfg.port_id, iresp->wd0.ox_id,
+ iresp->wd0.rx_id, iresp->wd2.hotpooltag,
+ SPFC_GET_SCQE_REMAIN_CNT(wqe),
+ SPFC_GET_SCQE_STATUS(wqe));
+
+ UNF_PRINT_SFS_LIMIT(UNF_MAJOR, hba->port_cfg.port_id, wqe, sizeof(union spfc_scqe));
+
+ /* return directly */
+ return UNF_RETURN_ERROR;
+ }
+
+ spfc_swap_16_in_32((u32 *)iresp->user_id, SPFC_IRSP_USERID_LEN);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = iresp->magic_num;
+ pkg.frame_head.oxid_rxid = (((iresp->wd0.ox_id) << UNF_SHIFT_16) | (iresp->wd0.rx_id));
+
+ hot_tag = (u16)iresp->wd2.hotpooltag & UNF_ORIGIN_HOTTAG_MASK;
+ /* 2. HotTag validity check */
+ if (likely(hot_tag >= hba->exi_base && (hot_tag < hba->exi_base + hba->exi_count))) {
+ pkg.status = UNF_IO_SUCCESS;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] =
+ hot_tag - hba->exi_base;
+ } else {
+ /* OX_ID error: return by COM */
+ pkg.status = UNF_IO_FAILED;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE16;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) ini_cmnd_wqe(OX_ID:0x%x RX_ID:0x%x) ox_id invalid, status(0x%x)",
+ hba->port_cfg.port_id, iresp->wd0.ox_id, iresp->wd0.rx_id,
+ SPFC_GET_SCQE_STATUS(wqe));
+
+ UNF_PRINT_SFS_LIMIT(UNF_MAJOR, hba->port_cfg.port_id, wqe,
+ sizeof(union spfc_scqe));
+ }
+
+ /* process dif result */
+ spfc_process_dif_result(hba, wqe, &pkg);
+
+ /* 3. status check */
+ if (unlikely(SPFC_GET_SCQE_STATUS(wqe) ||
+ iresp->wd2.scsi_status != 0 || iresp->fcp_resid != 0 ||
+ ((iresp->wd2.fcp_flag & SPFC_CTRL_MASK) != 0))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[warn]Port(0x%x) scq_status(0x%x) scsi_status(0x%x) fcp_resid(0x%x) fcp_flag(0x%x)",
+ hba->port_cfg.port_id, SPFC_GET_SCQE_STATUS(wqe),
+ iresp->wd2.scsi_status, iresp->fcp_resid,
+ iresp->wd2.fcp_flag);
+
+ /* set pkg status & check fcp_rsp IU */
+ spfc_process_ini_fail_io(hba, (union spfc_scqe *)iresp, &pkg);
+ }
+
+ /* 4. LL_Driver ---to--->>> COM_Driver */
+ UNF_LOWLEVEL_SCSI_COMPLETED(ret, hba->lport, &pkg);
+ if (iresp->wd1.user_id_num == 1 &&
+ (pkg.unf_sense_pload_bl.length + pkg.unf_rsp_pload_bl.length > 0)) {
+ spfc_post_els_srq_wqe(&hba->els_srq_info, (u16)iresp->user_id[ARRAY_INDEX_0]);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) rport(0x%x) recv(%s) hottag(0x%x) OX_ID(0x%x) RX_ID(0x%x) return(%s)",
+ hba->port_cfg.port_id, iresp->wd1.conn_id,
+ (SPFC_SCQE_FCP_IRSP == (SPFC_GET_SCQE_TYPE(wqe)) ? "IRESP" : "ITMF_RSP"),
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX], iresp->wd0.ox_id,
+ iresp->wd0.rx_id, (ret == RETURN_OK) ? "OK" : "ERROR");
+
+ return ret;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_io.h b/drivers/scsi/spfc/hw/spfc_io.h
new file mode 100644
index 000000000000..26d10a51bbe4
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_io.h
@@ -0,0 +1,138 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_IO_H
+#define SPFC_IO_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "spfc_hba.h"
+
+#ifdef __cplusplus
+#if __cplusplus
+extern "C" {
+#endif
+#endif /* __cplusplus */
+
+#define BYTE_PER_DWORD 4
+#define SPFC_TRESP_DIRECT_CARRY_LEN (23 * 4)
+#define FCP_RESP_IU_LEN_BYTE_GOOD_STATUS 24
+#define SPFC_TRSP_IU_CONTROL_OFFSET 2
+#define SPFC_TRSP_IU_FCP_CONF_REP (1 << 12)
+
+struct spfc_dif_io_param {
+ u32 all_len;
+ u32 buf_len;
+ char **buf;
+ char *in_buf;
+ int drect;
+};
+
+enum dif_mode_type {
+ DIF_MODE_NONE = 0x0,
+ DIF_MODE_INSERT = 0x1,
+ DIF_MODE_REMOVE = 0x2,
+ DIF_MODE_FORWARD_OR_REPLACE = 0x3
+};
+
+enum ref_tag_mode_type {
+ BOTH_NONE = 0x0,
+ RECEIVE_INCREASE = 0x1,
+ REPLACE_INCREASE = 0x2,
+ BOTH_INCREASE = 0x3
+};
+
+#define SPFC_DIF_DISABLE 0
+#define SPFC_DIF_ENABLE 1
+#define SPFC_DIF_SINGLE_SGL 0
+#define SPFC_DIF_DOUBLE_SGL 1
+#define SPFC_DIF_SECTOR_512B_MODE 0
+#define SPFC_DIF_SECTOR_4KB_MODE 1
+#define SPFC_DIF_TYPE1 0x01
+#define SPFC_DIF_TYPE3 0x03
+#define SPFC_DIF_GUARD_VERIFY_ALGORITHM_CTL_T10_CRC16 0x0
+#define SPFC_DIF_GUARD_VERIFY_CRC16_REPLACE_IP_CHECKSUM 0x1
+#define SPFC_DIF_GUARD_VERIFY_IP_CHECKSUM_REPLACE_CRC16 0x2
+#define SPFC_DIF_GUARD_VERIFY_ALGORITHM_CTL_IP_CHECKSUM 0x3
+#define SPFC_DIF_CRC16_INITIAL_SELECTOR_DEFAUL 0
+#define SPFC_DIF_CRC_CS_INITIAL_CONFIG_BY_REGISTER 0
+#define SPFC_DIF_CRC_CS_INITIAL_CONFIG_BY_BIT0_1 0x4
+
+#define SPFC_DIF_GARD_REF_APP_CTRL_VERIFY 0x4
+#define SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY 0x0
+#define SPFC_DIF_GARD_REF_APP_CTRL_INSERT 0x0
+#define SPFC_DIF_GARD_REF_APP_CTRL_DELETE 0x1
+#define SPFC_DIF_GARD_REF_APP_CTRL_FORWARD 0x2
+#define SPFC_DIF_GARD_REF_APP_CTRL_REPLACE 0x3
+
+#define SPFC_BUILD_RESPONSE_INFO_NON_GAP_MODE0 0
+#define SPFC_BUILD_RESPONSE_INFO_GPA_MODE1 1
+#define SPFC_CONF_SUPPORT 1
+#define SPFC_CONF_NOT_SUPPORT 0
+#define SPFC_XID_INTERVAL 2048
+
+#define SPFC_DIF_ERROR_CODE_MASK 0xe
+#define SPFC_DIF_ERROR_CODE_CRC 0x2
+#define SPFC_DIF_ERROR_CODE_REF 0x4
+#define SPFC_DIF_ERROR_CODE_APP 0x8
+#define SPFC_TX_DIF_ERROR_FLAG (1 << 7)
+
+#define SPFC_DIF_PAYLOAD_TYPE (1 << 0)
+#define SPFC_DIF_CRC_TYPE (1 << 1)
+#define SPFC_DIF_APP_TYPE (1 << 2)
+#define SPFC_DIF_REF_TYPE (1 << 3)
+
+#define SPFC_DIF_SEND_DIFERR_ALL (0)
+#define SPFC_DIF_SEND_DIFERR_CRC (1)
+#define SPFC_DIF_SEND_DIFERR_APP (2)
+#define SPFC_DIF_SEND_DIFERR_REF (3)
+#define SPFC_DIF_RECV_DIFERR_ALL (4)
+#define SPFC_DIF_RECV_DIFERR_CRC (5)
+#define SPFC_DIF_RECV_DIFERR_APP (6)
+#define SPFC_DIF_RECV_DIFERR_REF (7)
+#define SPFC_DIF_ERR_ENABLE (382855)
+#define SPFC_DIF_ERR_DISABLE (0)
+
+#define SPFC_DIF_LENGTH (8)
+#define SPFC_SECT_SIZE_512 (512)
+#define SPFC_SECT_SIZE_4096 (4096)
+#define SPFC_SECT_SIZE_512_8 (520)
+#define SPFC_SECT_SIZE_4096_8 (4104)
+#define SPFC_DIF_SECT_SIZE_APP_OFFSET (2)
+#define SPFC_DIF_SECT_SIZE_LBA_OFFSET (4)
+
+#define SPFC_MAX_IO_TAG (2048)
+#define SPFC_PRINT_WORD (8)
+
+extern u32 dif_protect_opcode;
+extern u32 dif_sect_size;
+extern u32 no_dif_sect_size;
+extern u32 grd_agm_ini_ctrl;
+extern u32 ref_tag_mod;
+extern u32 grd_ctrl;
+extern u32 grd_agm_ctrl;
+extern u32 cmp_app_tag_mask;
+extern u32 app_tag_ctrl;
+extern u32 ref_tag_ctrl;
+extern u32 rep_ref_tag;
+extern u32 rx_rep_ref_tag;
+extern u16 cmp_app_tag;
+extern u16 rep_app_tag;
+
+#define spfc_fill_pkg_status(com_err_code, control, scsi_status) \
+ (((u32)(com_err_code) << 16) | ((u32)(control) << 8) | \
+ (u32)(scsi_status))
+#define SPFC_CTRL_MASK 0x1f
+
+u32 spfc_send_scsi_cmnd(void *hba, struct unf_frame_pkg *pkg);
+u32 spfc_scq_recv_iresp(struct spfc_hba_info *hba, union spfc_scqe *wqe);
+void spfc_process_dif_result(struct spfc_hba_info *hba, union spfc_scqe *wqe,
+ struct unf_frame_pkg *pkg);
+
+#ifdef __cplusplus
+#if __cplusplus
+}
+#endif
+#endif /* __cplusplus */
+
+#endif /* SPFC_IO_H */
diff --git a/drivers/scsi/spfc/hw/spfc_lld.c b/drivers/scsi/spfc/hw/spfc_lld.c
new file mode 100644
index 000000000000..6c252cc60bd6
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_lld.c
@@ -0,0 +1,998 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/io-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/inetdevice.h>
+#include <net/addrconf.h>
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/aer.h>
+#include <linux/debugfs.h>
+
+#include "spfc_lld.h"
+#include "sphw_hw.h"
+#include "sphw_mt.h"
+#include "sphw_hw_cfg.h"
+#include "sphw_hw_comm.h"
+#include "sphw_common.h"
+#include "spfc_cqm_main.h"
+#include "spfc_module.h"
+
+#define SPFC_DRV_NAME "spfc"
+#define SPFC_CHIP_NAME "spfc"
+
+#define PCI_VENDOR_ID_RAMAXEL 0x1E81
+#define SPFC_DEV_ID_PF_STD 0x9010
+#define SPFC_DEV_ID_VF 0x9008
+
+#define SPFC_VF_PCI_CFG_REG_BAR 0
+#define SPFC_PF_PCI_CFG_REG_BAR 1
+
+#define SPFC_PCI_INTR_REG_BAR 2
+#define SPFC_PCI_MGMT_REG_BAR 3
+#define SPFC_PCI_DB_BAR 4
+
+#define SPFC_SECOND_BASE (1000)
+#define SPFC_SYNC_YEAR_OFFSET (1900)
+#define SPFC_SYNC_MONTH_OFFSET (1)
+#define SPFC_MINUTE_BASE (60)
+#define SPFC_WAIT_TOOL_CNT_TIMEOUT 10000
+
+#define SPFC_MIN_TIME_IN_USECS 900
+#define SPFC_MAX_TIME_IN_USECS 1000
+#define SPFC_MAX_LOOP_TIMES 10000
+
+#define SPFC_TOOL_MIN_TIME_IN_USECS 9900
+#define SPFC_TOOL_MAX_TIME_IN_USECS 10000
+
+#define SPFC_EVENT_PROCESS_TIMEOUT 10000
+
+#define FIND_BIT(num, n) (((num) & (1UL << (n))) ? 1 : 0)
+#define SET_BIT(num, n) ((num) | (1UL << (n)))
+#define CLEAR_BIT(num, n) ((num) & (~(1UL << (n))))
+
+#define MAX_CARD_ID 64
+static unsigned long card_bit_map;
+LIST_HEAD(g_spfc_chip_list);
+struct spfc_uld_info g_uld_info[SERVICE_T_MAX] = { {0} };
+
+struct unf_cm_handle_op spfc_cm_op_handle = {0};
+
+u32 allowed_probe_num = SPFC_MAX_PORT_NUM;
+u32 dif_sgl_mode;
+u32 max_speed = SPFC_SPEED_32G;
+u32 accum_db_num = 1;
+u32 dif_type = 0x1;
+u32 wqe_page_size = 4096;
+u32 wqe_pre_load = 6;
+u32 combo_length = 128;
+u32 cos_bit_map = 0x1f;
+u32 spfc_dif_type;
+u32 spfc_dif_enable;
+u8 spfc_guard;
+int link_lose_tmo = 30;
+
+u32 exit_count = 4096;
+u32 exit_stride = 4096;
+u32 exit_base;
+
+/* dfx counter */
+atomic64_t rx_tx_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t rx_tx_err[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t scq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t aeq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t dif_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t mail_box_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t up_err_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+u64 link_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_EVENT_CNT];
+u64 link_reason_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_REASON_CNT];
+u64 hba_stat[SPFC_MAX_PORT_NUM][SPFC_HBA_STAT_BUTT];
+atomic64_t com_up_event_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+
+#ifndef MAX_SIZE
+#define MAX_SIZE (16)
+#endif
+
+struct spfc_lld_lock g_lld_lock;
+
+/* g_device_mutex */
+struct mutex g_device_mutex;
+
+/* pci device initialize lock */
+struct mutex g_pci_init_mutex;
+
+#define WAIT_LLD_DEV_HOLD_TIMEOUT (10 * 60 * 1000) /* 10minutes */
+#define WAIT_LLD_DEV_NODE_CHANGED (10 * 60 * 1000) /* 10minutes */
+#define WAIT_LLD_DEV_REF_CNT_EMPTY (2 * 60 * 1000) /* 2minutes */
+
+void lld_dev_cnt_init(struct spfc_pcidev *pci_adapter)
+{
+ atomic_set(&pci_adapter->ref_cnt, 0);
+}
+
+void lld_dev_hold(struct spfc_lld_dev *dev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(dev->pdev);
+
+ atomic_inc(&pci_adapter->ref_cnt);
+}
+
+void lld_dev_put(struct spfc_lld_dev *dev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(dev->pdev);
+
+ atomic_dec(&pci_adapter->ref_cnt);
+}
+
+static void spfc_sync_time_to_fmw(struct spfc_pcidev *pdev_pri)
+{
+ struct tm tm = {0};
+ u64 tv_msec;
+ int err;
+
+ tv_msec = ktime_to_ms(ktime_get_real());
+ err = sphw_sync_time(pdev_pri->hwdev, tv_msec);
+ if (err) {
+ sdk_err(&pdev_pri->pcidev->dev, "Synchronize UTC time to firmware failed, errno:%d.\n",
+ err);
+ } else {
+ time64_to_tm(tv_msec / MSEC_PER_SEC, 0, &tm);
+		sdk_info(&pdev_pri->pcidev->dev, "Synchronized UTC time to firmware successfully. UTC time %ld-%02d-%02d %02d:%02d:%02d.\n",
+ tm.tm_year + 1900, tm.tm_mon + 1,
+ tm.tm_mday, tm.tm_hour,
+ tm.tm_min, tm.tm_sec);
+ }
+}
+
+void wait_lld_dev_unused(struct spfc_pcidev *pci_adapter)
+{
+ u32 loop_cnt = 0;
+
+ while (loop_cnt < SPFC_WAIT_TOOL_CNT_TIMEOUT) {
+ if (!atomic_read(&pci_adapter->ref_cnt))
+ return;
+
+ usleep_range(SPFC_TOOL_MIN_TIME_IN_USECS, SPFC_TOOL_MAX_TIME_IN_USECS);
+ loop_cnt++;
+ }
+}
+
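+/*
+ * Serialize chip-node list changes: claim the SPFC_NODE_CHANGE bit under
+ * lld_mutex, then wait (bounded) for the device reference count to drain.
+ * lld_unlock_chip_node() releases the bit.
+ */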
+static void lld_lock_chip_node(void)
+{
+ u32 loop_cnt;
+
+ mutex_lock(&g_lld_lock.lld_mutex);
+
+ loop_cnt = 0;
+ while (loop_cnt < WAIT_LLD_DEV_NODE_CHANGED) {
+ if (!test_and_set_bit(SPFC_NODE_CHANGE, &g_lld_lock.status))
+ break;
+
+ loop_cnt++;
+
+ if (loop_cnt % SPFC_MAX_LOOP_TIMES == 0)
+ pr_warn("[warn]Wait for lld node change complete for %us",
+ loop_cnt / UNF_S_TO_MS);
+
+ usleep_range(SPFC_MIN_TIME_IN_USECS, SPFC_MAX_TIME_IN_USECS);
+ }
+
+ if (loop_cnt == WAIT_LLD_DEV_NODE_CHANGED)
+ pr_warn("[warn]Wait for lld node change complete timeout when try to get lld lock");
+
+ loop_cnt = 0;
+ while (loop_cnt < WAIT_LLD_DEV_REF_CNT_EMPTY) {
+ if (!atomic_read(&g_lld_lock.dev_ref_cnt))
+ break;
+
+ loop_cnt++;
+
+ if (loop_cnt % SPFC_MAX_LOOP_TIMES == 0)
+ pr_warn("[warn]Wait for lld dev unused for %us, reference count: %d",
+ loop_cnt / UNF_S_TO_MS, atomic_read(&g_lld_lock.dev_ref_cnt));
+
+ usleep_range(SPFC_MIN_TIME_IN_USECS, SPFC_MAX_TIME_IN_USECS);
+ }
+
+ if (loop_cnt == WAIT_LLD_DEV_REF_CNT_EMPTY)
+ pr_warn("[warn]Wait for lld dev unused timeout");
+
+ mutex_unlock(&g_lld_lock.lld_mutex);
+}
+
+static void lld_unlock_chip_node(void)
+{
+ clear_bit(SPFC_NODE_CHANGE, &g_lld_lock.status);
+}
+
+void lld_hold(void)
+{
+ u32 loop_cnt = 0;
+
+ /* ensure there have not any chip node in changing */
+ mutex_lock(&g_lld_lock.lld_mutex);
+
+ while (loop_cnt < WAIT_LLD_DEV_HOLD_TIMEOUT) {
+ if (!test_bit(SPFC_NODE_CHANGE, &g_lld_lock.status))
+ break;
+
+ loop_cnt++;
+
+ if (loop_cnt % SPFC_MAX_LOOP_TIMES == 0)
+			pr_warn("[warn]Waiting for lld node change to complete for %us",
+				loop_cnt / UNF_S_TO_MS);
+
+ usleep_range(SPFC_MIN_TIME_IN_USECS, SPFC_MAX_TIME_IN_USECS);
+ }
+
+ if (loop_cnt == WAIT_LLD_DEV_HOLD_TIMEOUT)
+		pr_warn("[warn]Wait for lld node change complete timed out when trying to hold lld dev, waited %us",
+			loop_cnt / UNF_S_TO_MS);
+
+ atomic_inc(&g_lld_lock.dev_ref_cnt);
+
+ mutex_unlock(&g_lld_lock.lld_mutex);
+}
+
+void lld_put(void)
+{
+ atomic_dec(&g_lld_lock.dev_ref_cnt);
+}
+
+static void spfc_lld_lock_init(void)
+{
+ mutex_init(&g_lld_lock.lld_mutex);
+ atomic_set(&g_lld_lock.dev_ref_cnt, 0);
+}
+
+static void spfc_realease_cmo_op_handle(void)
+{
+ memset(&spfc_cm_op_handle, 0, sizeof(struct unf_cm_handle_op));
+}
+
+static void spfc_check_module_para(void)
+{
+ if (spfc_dif_enable) {
+ dif_sgl_mode = true;
+ spfc_dif_type = SHOST_DIF_TYPE1_PROTECTION | SHOST_DIX_TYPE1_PROTECTION;
+ dix_flag = 1;
+ }
+
+ if (dif_sgl_mode != 0)
+ dif_sgl_mode = 1;
+}
+
+void spfc_event_process(void *adapter, struct sphw_event_info *event)
+{
+ struct spfc_pcidev *dev = adapter;
+
+ if (test_and_set_bit(SERVICE_T_FC, &dev->state)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+			     "[warn]Event: fc is detaching");
+ return;
+ }
+
+ if (g_uld_info[SERVICE_T_FC].event)
+ g_uld_info[SERVICE_T_FC].event(&dev->lld_dev, dev->uld_dev[SERVICE_T_FC], event);
+
+ clear_bit(SERVICE_T_FC, &dev->state);
+}
+
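+/*
+ * Reference-counted stateful resource init: on first use, set up the
+ * extended doorbell area (FT/RDMA capable PPF only) and initialize the CQM.
+ */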
+int spfc_stateful_init(void *hwdev)
+{
+ struct sphw_hwdev *dev = hwdev;
+ int stateful_en;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ if (dev->statufull_ref_cnt++)
+ return 0;
+
+ stateful_en = IS_FT_TYPE(dev) | IS_RDMA_TYPE(dev);
+ if (stateful_en && SPHW_IS_PPF(dev)) {
+ err = sphw_ppf_ext_db_init(dev);
+ if (err)
+ goto out;
+ }
+
+ err = cqm3_init(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to init cqm, err: %d\n", err);
+ goto init_cqm_err;
+ }
+
+	sdk_info(dev->dev_hdl, "Initialized stateful resources successfully\n");
+
+ return 0;
+
+init_cqm_err:
+ if (stateful_en)
+ sphw_ppf_ext_db_deinit(dev);
+
+out:
+ dev->statufull_ref_cnt--;
+
+ return err;
+}
+
+void spfc_stateful_deinit(void *hwdev)
+{
+ struct sphw_hwdev *dev = hwdev;
+ u32 stateful_en;
+
+ if (!dev || !dev->statufull_ref_cnt)
+ return;
+
+ if (--dev->statufull_ref_cnt)
+ return;
+
+ cqm3_uninit(hwdev);
+
+ stateful_en = IS_FT_TYPE(dev) | IS_RDMA_TYPE(dev);
+ if (stateful_en)
+ sphw_ppf_ext_db_deinit(hwdev);
+
+	sdk_info(dev->dev_hdl, "Cleared stateful resources successfully\n");
+}
+
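+/*
+ * Attach the FC upper-layer driver to this PCI function: initialize the
+ * stateful resources and invoke the registered probe callback to create
+ * the uld (fc) device.
+ */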
+static int attach_uld(struct spfc_pcidev *dev, struct spfc_uld_info *uld_info)
+{
+ void *uld_dev = NULL;
+ int err;
+
+ mutex_lock(&dev->pdev_mutex);
+ if (dev->uld_dev[SERVICE_T_FC]) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]fc driver has attached to pcie device");
+ err = 0;
+ goto out_unlock;
+ }
+
+ err = spfc_stateful_init(dev->hwdev);
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]Failed to initialize stateful resources");
+ goto out_unlock;
+ }
+
+ err = uld_info->probe(&dev->lld_dev, &uld_dev,
+ dev->uld_dev_name[SERVICE_T_FC]);
+ if (err || !uld_dev) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to add object for fc driver to pcie device");
+ goto probe_failed;
+ }
+
+ dev->uld_dev[SERVICE_T_FC] = uld_dev;
+ mutex_unlock(&dev->pdev_mutex);
+
+ return RETURN_OK;
+
+probe_failed:
+ spfc_stateful_deinit(dev->hwdev);
+
+out_unlock:
+ mutex_unlock(&dev->pdev_mutex);
+
+ return err;
+}
+
+static void detach_uld(struct spfc_pcidev *dev)
+{
+ struct spfc_uld_info *uld_info = &g_uld_info[SERVICE_T_FC];
+ u32 cnt = 0;
+
+ mutex_lock(&dev->pdev_mutex);
+ if (!dev->uld_dev[SERVICE_T_FC]) {
+ mutex_unlock(&dev->pdev_mutex);
+ return;
+ }
+
+ while (cnt < SPFC_EVENT_PROCESS_TIMEOUT) {
+ if (!test_and_set_bit(SERVICE_T_FC, &dev->state))
+ break;
+ usleep_range(900, 1000);
+ cnt++;
+ }
+
+ uld_info->remove(&dev->lld_dev, dev->uld_dev[SERVICE_T_FC]);
+ dev->uld_dev[SERVICE_T_FC] = NULL;
+ spfc_stateful_deinit(dev->hwdev);
+ if (cnt < SPFC_EVENT_PROCESS_TIMEOUT)
+ clear_bit(SERVICE_T_FC, &dev->state);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+		     "Detached fc driver from pcie device successfully");
+ mutex_unlock(&dev->pdev_mutex);
+}
+
+int spfc_register_uld(struct spfc_uld_info *uld_info)
+{
+ memset(g_uld_info, 0, sizeof(g_uld_info));
+ spfc_lld_lock_init();
+ mutex_init(&g_device_mutex);
+ mutex_init(&g_pci_init_mutex);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Module Init Success, wait for pci init and probe");
+
+ if (!uld_info || !uld_info->probe || !uld_info->remove) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Invalid information of fc driver to register");
+ return -EINVAL;
+ }
+
+ lld_hold();
+
+ if (g_uld_info[SERVICE_T_FC].probe) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]fc driver has registered");
+ lld_put();
+ return -EINVAL;
+ }
+
+ memcpy(&g_uld_info[SERVICE_T_FC], uld_info, sizeof(*uld_info));
+
+ lld_put();
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+		     "[KEVENT]Registered spfc driver successfully");
+ return RETURN_OK;
+}
+
+void spfc_unregister_uld(void)
+{
+ struct spfc_uld_info *uld_info = NULL;
+
+ lld_hold();
+ uld_info = &g_uld_info[SERVICE_T_FC];
+ memset(uld_info, 0, sizeof(*uld_info));
+ lld_put();
+}
+
+static int spfc_pci_init(struct pci_dev *pdev)
+{
+ struct spfc_pcidev *pci_adapter = NULL;
+ int err = 0;
+
+ pci_adapter = kzalloc(sizeof(*pci_adapter), GFP_KERNEL);
+ if (!pci_adapter) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to alloc pci device adapter");
+ return -ENOMEM;
+ }
+ pci_adapter->pcidev = pdev;
+ mutex_init(&pci_adapter->pdev_mutex);
+
+ pci_set_drvdata(pdev, pci_adapter);
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to enable PCI device");
+ goto pci_enable_err;
+ }
+
+ err = pci_request_regions(pdev, SPFC_DRV_NAME);
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to request regions");
+ goto pci_regions_err;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+
+ pci_set_master(pdev);
+
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Couldn't set 64-bit DMA mask");
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR, "[err]Failed to set DMA mask");
+ goto dma_mask_err;
+ }
+ }
+
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Couldn't set 64-bit coherent DMA mask");
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR,
+ "[err]Failed to set coherent DMA mask");
+			goto dma_consistent_mask_err;
+ }
+ }
+
+ return 0;
+
+dma_consistent_mask_err:
+dma_mask_err:
+ pci_clear_master(pdev);
+ pci_release_regions(pdev);
+
+pci_regions_err:
+ pci_disable_device(pdev);
+
+pci_enable_err:
+ pci_set_drvdata(pdev, NULL);
+ kfree(pci_adapter);
+
+ return err;
+}
+
+static void spfc_pci_deinit(struct pci_dev *pdev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ pci_clear_master(pdev);
+ pci_release_regions(pdev);
+ pci_disable_pcie_error_reporting(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ kfree(pci_adapter);
+}
+
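+/*
+ * Find or create the chip node for this function: reuse an existing node
+ * with the same bus number (or any existing node for a VF on the root bus);
+ * otherwise allocate a free card id from the bitmap and add a new node to
+ * the global chip list.
+ */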
+static int alloc_chip_node(struct spfc_pcidev *pci_adapter)
+{
+ struct card_node *chip_node = NULL;
+ unsigned char i;
+ unsigned char bus_number = 0;
+
+ if (!pci_is_root_bus(pci_adapter->pcidev->bus))
+ bus_number = pci_adapter->pcidev->bus->number;
+
+ if (bus_number != 0) {
+ list_for_each_entry(chip_node, &g_spfc_chip_list, node) {
+ if (chip_node->bus_num == bus_number) {
+ pci_adapter->chip_node = chip_node;
+ return 0;
+ }
+ }
+ } else if (pci_adapter->pcidev->device == SPFC_DEV_ID_VF) {
+ list_for_each_entry(chip_node, &g_spfc_chip_list, node) {
+ if (chip_node) {
+ pci_adapter->chip_node = chip_node;
+ return 0;
+ }
+ }
+ }
+
+ for (i = 0; i < MAX_CARD_ID; i++) {
+ if (!test_and_set_bit(i, &card_bit_map))
+ break;
+ }
+
+ if (i == MAX_CARD_ID) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to alloc card id");
+ return -EFAULT;
+ }
+
+ chip_node = kzalloc(sizeof(*chip_node), GFP_KERNEL);
+ if (!chip_node) {
+ clear_bit(i, &card_bit_map);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to alloc chip node");
+ return -ENOMEM;
+ }
+
+ /* bus number */
+ chip_node->bus_num = bus_number;
+
+ snprintf(chip_node->chip_name, IFNAMSIZ, "%s%d", SPFC_CHIP_NAME, i);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+		     "[INFO]Added new chip %s to global list successfully",
+ chip_node->chip_name);
+
+ list_add_tail(&chip_node->node, &g_spfc_chip_list);
+
+ INIT_LIST_HEAD(&chip_node->func_list);
+ pci_adapter->chip_node = chip_node;
+
+ return 0;
+}
+
+#ifdef CONFIG_X86
+void cfg_order_reg(struct spfc_pcidev *pci_adapter)
+{
+ u8 cpu_model[] = {0x3c, 0x3f, 0x45, 0x46, 0x3d, 0x47, 0x4f, 0x56};
+ struct cpuinfo_x86 *cpuinfo = NULL;
+ u32 i;
+
+ if (sphw_func_type(pci_adapter->hwdev) == TYPE_VF)
+ return;
+
+ cpuinfo = &cpu_data(0);
+
+ for (i = 0; i < sizeof(cpu_model); i++) {
+ if (cpu_model[i] == cpuinfo->x86_model)
+ sphw_set_pcie_order_cfg(pci_adapter->hwdev);
+ }
+}
+#endif
+
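+/* Map the configuration, interrupt, management (PF only) and doorbell BARs. */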
+static int mapping_bar(struct pci_dev *pdev, struct spfc_pcidev *pci_adapter)
+{
+ int cfg_bar;
+
+ cfg_bar = pdev->is_virtfn ? SPFC_VF_PCI_CFG_REG_BAR : SPFC_PF_PCI_CFG_REG_BAR;
+
+ pci_adapter->cfg_reg_base = pci_ioremap_bar(pdev, cfg_bar);
+ if (!pci_adapter->cfg_reg_base) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Failed to map configuration regs");
+ return -ENOMEM;
+ }
+
+ pci_adapter->intr_reg_base = pci_ioremap_bar(pdev, SPFC_PCI_INTR_REG_BAR);
+ if (!pci_adapter->intr_reg_base) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Failed to map interrupt regs");
+ goto map_intr_bar_err;
+ }
+
+ if (!pdev->is_virtfn) {
+ pci_adapter->mgmt_reg_base = pci_ioremap_bar(pdev, SPFC_PCI_MGMT_REG_BAR);
+ if (!pci_adapter->mgmt_reg_base) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR, "Failed to map mgmt regs");
+ goto map_mgmt_bar_err;
+ }
+ }
+
+ pci_adapter->db_base_phy = pci_resource_start(pdev, SPFC_PCI_DB_BAR);
+ pci_adapter->db_dwqe_len = pci_resource_len(pdev, SPFC_PCI_DB_BAR);
+ pci_adapter->db_base = pci_ioremap_bar(pdev, SPFC_PCI_DB_BAR);
+ if (!pci_adapter->db_base) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Failed to map doorbell regs");
+ goto map_db_err;
+ }
+
+ return 0;
+
+map_db_err:
+ if (!pdev->is_virtfn)
+ iounmap(pci_adapter->mgmt_reg_base);
+
+map_mgmt_bar_err:
+ iounmap(pci_adapter->intr_reg_base);
+
+map_intr_bar_err:
+ iounmap(pci_adapter->cfg_reg_base);
+
+ return -ENOMEM;
+}
+
+static void unmapping_bar(struct spfc_pcidev *pci_adapter)
+{
+ iounmap(pci_adapter->db_base);
+
+ if (!pci_adapter->pcidev->is_virtfn)
+ iounmap(pci_adapter->mgmt_reg_base);
+
+ iounmap(pci_adapter->intr_reg_base);
+ iounmap(pci_adapter->cfg_reg_base);
+}
+
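+/*
+ * Per-function initialization: create the hardware device, register the
+ * event callback, sync time to firmware (PF only), link the function into
+ * its chip node and attach the FC upper-layer driver.
+ */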
+static int spfc_func_init(struct pci_dev *pdev, struct spfc_pcidev *pci_adapter)
+{
+ struct sphw_init_para init_para = {0};
+ int err;
+
+ init_para.adapter_hdl = pci_adapter;
+ init_para.pcidev_hdl = pdev;
+ init_para.dev_hdl = &pdev->dev;
+ init_para.cfg_reg_base = pci_adapter->cfg_reg_base;
+ init_para.intr_reg_base = pci_adapter->intr_reg_base;
+ init_para.mgmt_reg_base = pci_adapter->mgmt_reg_base;
+ init_para.db_base = pci_adapter->db_base;
+ init_para.db_base_phy = pci_adapter->db_base_phy;
+ init_para.db_dwqe_len = pci_adapter->db_dwqe_len;
+ init_para.hwdev = &pci_adapter->hwdev;
+ init_para.chip_node = pci_adapter->chip_node;
+ err = sphw_init_hwdev(&init_para);
+ if (err) {
+ pci_adapter->hwdev = NULL;
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to initialize hardware device");
+ return -EFAULT;
+ }
+
+ pci_adapter->lld_dev.pdev = pdev;
+ pci_adapter->lld_dev.hwdev = pci_adapter->hwdev;
+
+ sphw_event_register(pci_adapter->hwdev, pci_adapter, spfc_event_process);
+
+ if (sphw_func_type(pci_adapter->hwdev) != TYPE_VF)
+ spfc_sync_time_to_fmw(pci_adapter);
+ lld_lock_chip_node();
+ list_add_tail(&pci_adapter->node, &pci_adapter->chip_node->func_list);
+ lld_unlock_chip_node();
+ err = attach_uld(pci_adapter, &g_uld_info[SERVICE_T_FC]);
+
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Spfc3 attach uld fail");
+ goto attach_fc_err;
+ }
+
+#ifdef CONFIG_X86
+ cfg_order_reg(pci_adapter);
+#endif
+
+ return 0;
+
+attach_fc_err:
+ lld_lock_chip_node();
+ list_del(&pci_adapter->node);
+ lld_unlock_chip_node();
+ wait_lld_dev_unused(pci_adapter);
+
+ return err;
+}
+
+static void spfc_func_deinit(struct pci_dev *pdev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ lld_lock_chip_node();
+ list_del(&pci_adapter->node);
+ lld_unlock_chip_node();
+ wait_lld_dev_unused(pci_adapter);
+
+ detach_uld(pci_adapter);
+ sphw_disable_mgmt_msg_report(pci_adapter->hwdev);
+ sphw_flush_mgmt_workq(pci_adapter->hwdev);
+ sphw_event_unregister(pci_adapter->hwdev);
+ sphw_free_hwdev(pci_adapter->hwdev);
+}
+
+static void free_chip_node(struct spfc_pcidev *pci_adapter)
+{
+ struct card_node *chip_node = pci_adapter->chip_node;
+ int id, err;
+
+ if (list_empty(&chip_node->func_list)) {
+ list_del(&chip_node->node);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+			     "[INFO]Deleted chip %s from global list successfully",
+ chip_node->chip_name);
+ err = sscanf(chip_node->chip_name, SPFC_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR, "[err]Failed to get spfc id");
+ }
+
+ clear_bit(id, &card_bit_map);
+
+ kfree(chip_node);
+ }
+}
+
+static int spfc_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct spfc_pcidev *pci_adapter = NULL;
+ int err;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Spfc3 Pcie device probe begin");
+
+ mutex_lock(&g_pci_init_mutex);
+ err = spfc_pci_init(pdev);
+ if (err) {
+ mutex_unlock(&g_pci_init_mutex);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]pci init fail, return %d", err);
+ return err;
+ }
+ pci_adapter = pci_get_drvdata(pdev);
+ err = mapping_bar(pdev, pci_adapter);
+ if (err) {
+ mutex_unlock(&g_pci_init_mutex);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to map bar");
+ goto map_bar_failed;
+ }
+ mutex_unlock(&g_pci_init_mutex);
+ pci_adapter->id = *id;
+ lld_dev_cnt_init(pci_adapter);
+
+ /* if chip information of pcie function exist, add the function into chip */
+ lld_lock_chip_node();
+ err = alloc_chip_node(pci_adapter);
+ if (err) {
+ lld_unlock_chip_node();
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to add new chip node to global list");
+ goto alloc_chip_node_fail;
+ }
+
+ lld_unlock_chip_node();
+ err = spfc_func_init(pdev, pci_adapter);
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]spfc func init fail");
+ goto func_init_err;
+ }
+
+ return 0;
+
+func_init_err:
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+
+alloc_chip_node_fail:
+ unmapping_bar(pci_adapter);
+
+map_bar_failed:
+ spfc_pci_deinit(pdev);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Pcie device probe failed");
+ return err;
+}
+
+static void spfc_remove(struct pci_dev *pdev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ if (!pci_adapter)
+ return;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[INFO]Pcie device remove begin");
+ sphw_detect_hw_present(pci_adapter->hwdev);
+ spfc_func_deinit(pdev);
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+ unmapping_bar(pci_adapter);
+ mutex_lock(&g_pci_init_mutex);
+ spfc_pci_deinit(pdev);
+ mutex_unlock(&g_pci_init_mutex);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[INFO]Pcie device removed");
+}
+
+static void spfc_shutdown(struct pci_dev *pdev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Shutdown device");
+
+ if (pci_adapter)
+ sphw_shutdown_hwdev(pci_adapter->hwdev);
+
+ pci_disable_device(pdev);
+}
+
+static pci_ers_result_t spfc_io_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct spfc_pcidev *pci_adapter = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Uncorrectable error detected, log and cleanup error status: 0x%08x",
+ state);
+
+ pci_aer_clear_nonfatal_status(pdev);
+ pci_adapter = pci_get_drvdata(pdev);
+
+ if (pci_adapter)
+ sphw_record_pcie_error(pci_adapter->hwdev);
+
+ return PCI_ERS_RESULT_CAN_RECOVER;
+}
+
+static int unf_global_value_init(void)
+{
+ memset(rx_tx_stat, 0, sizeof(rx_tx_stat));
+ memset(rx_tx_err, 0, sizeof(rx_tx_err));
+ memset(scq_err_stat, 0, sizeof(scq_err_stat));
+ memset(aeq_err_stat, 0, sizeof(aeq_err_stat));
+ memset(dif_err_stat, 0, sizeof(dif_err_stat));
+ memset(link_event_stat, 0, sizeof(link_event_stat));
+ memset(link_reason_stat, 0, sizeof(link_reason_stat));
+ memset(hba_stat, 0, sizeof(hba_stat));
+ memset(&spfc_cm_op_handle, 0, sizeof(struct unf_cm_handle_op));
+ memset(up_err_event_stat, 0, sizeof(up_err_event_stat));
+ memset(mail_box_stat, 0, sizeof(mail_box_stat));
+ memset(spfc_hba, 0, sizeof(spfc_hba));
+
+ spin_lock_init(&probe_spin_lock);
+
+ /* 4. Get COM Handlers used for low_level */
+ if (unf_get_cm_handle_ops(&spfc_cm_op_handle) != RETURN_OK) {
+ spfc_realease_cmo_op_handle();
+ return RETURN_ERROR_S32;
+ }
+
+ return RETURN_OK;
+}
+
+static const struct pci_device_id spfc_pci_table[] = {
+ {PCI_VDEVICE(RAMAXEL, SPFC_DEV_ID_PF_STD), 0},
+ {0, 0}
+};
+
+MODULE_DEVICE_TABLE(pci, spfc_pci_table);
+
+static struct pci_error_handlers spfc_err_handler = {
+ .error_detected = spfc_io_error_detected,
+};
+
+static struct pci_driver spfc_driver = {.name = SPFC_DRV_NAME,
+ .id_table = spfc_pci_table,
+ .probe = spfc_probe,
+ .remove = spfc_remove,
+ .shutdown = spfc_shutdown,
+ .err_handler = &spfc_err_handler};
+
+static __init int spfc_lld_init(void)
+{
+ if (unf_common_init() != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]UNF_Common_init failed");
+ return RETURN_ERROR_S32;
+ }
+
+ spfc_check_module_para();
+
+ if (unf_global_value_init() != RETURN_OK)
+ return RETURN_ERROR_S32;
+
+ spfc_register_uld(&fc_uld_info);
+ return pci_register_driver(&spfc_driver);
+}
+
+static __exit void spfc_lld_exit(void)
+{
+ pci_unregister_driver(&spfc_driver);
+ spfc_unregister_uld();
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]SPFC module removing...");
+
+ spfc_realease_cmo_op_handle();
+
+ /* 2. Unregister FC COM module(level) */
+ unf_common_exit();
+}
+
+module_init(spfc_lld_init);
+module_exit(spfc_lld_exit);
+
+MODULE_AUTHOR("Ramaxel Memory Technology, Ltd");
+MODULE_DESCRIPTION(SPFC_DRV_DESC);
+MODULE_VERSION(SPFC_DRV_VERSION);
+MODULE_LICENSE("GPL");
+
+module_param(allowed_probe_num, uint, 0444);
+module_param(dif_sgl_mode, uint, 0444);
+module_param(max_speed, uint, 0444);
+module_param(wqe_page_size, uint, 0444);
+module_param(combo_length, uint, 0444);
+module_param(cos_bit_map, uint, 0444);
+module_param(spfc_dif_enable, uint, 0444);
+MODULE_PARM_DESC(spfc_dif_enable, "enable/disable DIF (1/0); default is 0 (disabled).");
+module_param(link_lose_tmo, int, 0444);
+MODULE_PARM_DESC(link_lose_tmo, "link loss timeout in seconds; default is 30.");
+
diff --git a/drivers/scsi/spfc/hw/spfc_lld.h b/drivers/scsi/spfc/hw/spfc_lld.h
new file mode 100644
index 000000000000..f7b4a5e5ce07
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_lld.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_LLD_H
+#define SPFC_LLD_H
+
+#include "sphw_crm.h"
+
+struct spfc_lld_dev {
+ struct pci_dev *pdev;
+ void *hwdev;
+};
+
+struct spfc_uld_info {
+	/* uld_dev: must not be returned as NULL, even when the function's
+	 * capability does not support the upper-layer driver.
+	 * uld_dev_name: the NIC driver should copy the net device name, the
+	 * FC driver may copy the fc device name; other upper-layer drivers
+	 * do not need to copy anything.
+	 */
+ int (*probe)(struct spfc_lld_dev *lld_dev, void **uld_dev,
+ char *uld_dev_name);
+ void (*remove)(struct spfc_lld_dev *lld_dev, void *uld_dev);
+ int (*suspend)(struct spfc_lld_dev *lld_dev, void *uld_dev,
+ pm_message_t state);
+ int (*resume)(struct spfc_lld_dev *lld_dev, void *uld_dev);
+ void (*event)(struct spfc_lld_dev *lld_dev, void *uld_dev,
+ struct sphw_event_info *event);
+ int (*ioctl)(void *uld_dev, u32 cmd, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+};
+
+/* Structure pcidev private */
+struct spfc_pcidev {
+ struct pci_dev *pcidev;
+ void *hwdev;
+ struct card_node *chip_node;
+ struct spfc_lld_dev lld_dev;
+ /* such as fc_dev */
+ void *uld_dev[SERVICE_T_MAX];
+ /* Record the service object name */
+ char uld_dev_name[SERVICE_T_MAX][IFNAMSIZ];
+	/* Node in the global list that the driver uses to manage
+	 * all function devices
+	 */
+ struct list_head node;
+ void __iomem *cfg_reg_base;
+ void __iomem *intr_reg_base;
+ void __iomem *mgmt_reg_base;
+ u64 db_dwqe_len;
+ u64 db_base_phy;
+ void __iomem *db_base;
+ /* lock for attach/detach uld */
+ struct mutex pdev_mutex;
+	/* set while the uld driver is processing an event */
+ unsigned long state;
+ struct pci_device_id id;
+ atomic_t ref_cnt;
+};
+
+enum spfc_lld_status {
+ SPFC_NODE_CHANGE = BIT(0),
+};
+
+struct spfc_lld_lock {
+ /* lock for chip list */
+ struct mutex lld_mutex;
+ unsigned long status;
+ atomic_t dev_ref_cnt;
+};
+
+#ifndef MAX_SIZE
+#define MAX_SIZE (16)
+#endif
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_module.h b/drivers/scsi/spfc/hw/spfc_module.h
new file mode 100644
index 000000000000..153d59955339
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_module.h
@@ -0,0 +1,297 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_MODULE_H
+#define SPFC_MODULE_H
+#include "unf_type.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "spfc_utils.h"
+#include "spfc_hba.h"
+
+#define SPFC_FT_ENABLE (1)
+#define SPFC_FC_DISABLE (0)
+
+#define SPFC_P2P_DIRECT (0)
+#define SPFC_P2P_FABRIC (1)
+#define SPFC_LOOP (2)
+#define SPFC_ATUOSPEED (1)
+#define SPFC_FIXEDSPEED (0)
+#define SPFC_AUTOTOPO (0)
+#define SPFC_P2PTOPO (0x2)
+#define SPFC_LOOPTOPO (0x1)
+#define SPFC_SPEED_2G (0x2)
+#define SPFC_SPEED_4G (0x4)
+#define SPFC_SPEED_8G (0x8)
+#define SPFC_SPEED_16G (0x10)
+#define SPFC_SPEED_32G (0x20)
+
+#define SPFC_MAX_PORT_NUM SPFC_MAX_PROBE_PORT_NUM
+#define SPFC_MAX_PORT_TASK_TYPE_STAT_NUM (128)
+#define SPFC_MAX_LINK_EVENT_CNT (4)
+#define SPFC_MAX_LINK_REASON_CNT (256)
+
+#define SPFC_MML_LOGCTRO_NUM (14)
+
+#define WWN_SIZE 8 /* Size of WWPN, WWN & WWNN */
+
+/*
+ * Define the data type
+ */
+struct spfc_log_ctrl {
+ char *log_option;
+ u32 state;
+};
+
+/*
+ * Declare the global function.
+ */
+extern struct unf_cm_handle_op spfc_cm_op_handle;
+extern struct spfc_uld_info fc_uld_info;
+extern u32 allowed_probe_num;
+extern u32 max_speed;
+extern u32 accum_db_num;
+extern u32 wqe_page_size;
+extern u32 dif_type;
+extern u32 wqe_pre_load;
+extern u32 combo_length;
+extern u32 cos_bit_map;
+extern u32 exit_count;
+extern u32 exit_stride;
+extern u32 exit_base;
+
+extern atomic64_t rx_tx_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t rx_tx_err[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t scq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t aeq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t dif_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t mail_box_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t com_up_event_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern u64 link_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_EVENT_CNT];
+extern u64 link_reason_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_REASON_CNT];
+extern atomic64_t up_err_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern u64 hba_stat[SPFC_MAX_PORT_NUM][SPFC_HBA_STAT_BUTT];
+#define SPFC_LINK_EVENT_STAT(hba, link_ent) \
+ (link_event_stat[(hba)->probe_index][link_ent]++)
+#define SPFC_LINK_REASON_STAT(hba, link_rsn) \
+ (link_reason_stat[(hba)->probe_index][link_rsn]++)
+#define SPFC_HBA_STAT(hba, hba_stat_type) \
+ (hba_stat[(hba)->probe_index][hba_stat_type]++)
+
+#define SPFC_UP_ERR_EVENT_STAT(hba, err_type) \
+ (atomic64_inc(&up_err_event_stat[(hba)->probe_index][err_type]))
+#define SPFC_UP_ERR_EVENT_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&up_err_event_stat[probe_index][io_type]))
+#define SPFC_DIF_ERR_STAT(hba, dif_err) \
+ (atomic64_inc(&dif_err_stat[(hba)->probe_index][dif_err]))
+#define SPFC_DIF_ERR_STAT_READ(probe_index, dif_err) \
+ (atomic64_read(&dif_err_stat[probe_index][dif_err]))
+
+#define SPFC_IO_STAT(hba, io_type) \
+ (atomic64_inc(&rx_tx_stat[(hba)->probe_index][io_type]))
+#define SPFC_IO_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&rx_tx_stat[probe_index][io_type]))
+
+#define SPFC_ERR_IO_STAT(hba, io_type) \
+ (atomic64_inc(&rx_tx_err[(hba)->probe_index][io_type]))
+#define SPFC_ERR_IO_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&rx_tx_err[probe_index][io_type]))
+
+#define SPFC_SCQ_ERR_TYPE_STAT(hba, err_type) \
+ (atomic64_inc(&scq_err_stat[(hba)->probe_index][err_type]))
+#define SPFC_SCQ_ERR_TYPE_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&scq_err_stat[probe_index][io_type]))
+#define SPFC_AEQ_ERR_TYPE_STAT(hba, err_type) \
+ (atomic64_inc(&aeq_err_stat[(hba)->probe_index][err_type]))
+#define SPFC_AEQ_ERR_TYPE_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&aeq_err_stat[probe_index][io_type]))
+
+#define SPFC_MAILBOX_STAT(hba, io_type) \
+ (atomic64_inc(&mail_box_stat[(hba)->probe_index][io_type]))
+#define SPFC_MAILBOX_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&mail_box_stat[probe_index][io_type]))
+
+#define SPFC_COM_UP_ERR_EVENT_STAT(hba, err_type) \
+ (atomic64_inc( \
+ &com_up_event_err_stat[(hba)->probe_index][err_type]))
+#define SPFC_COM_UP_ERR_EVENT_STAT_READ(probe_index, err_type) \
+ (atomic64_read(&com_up_event_err_stat[probe_index][err_type]))
+
+#define UNF_LOWLEVEL_ALLOC_LPORT(lport, fc_port, low_level) \
+ do { \
+ if (spfc_cm_op_handle.unf_alloc_local_port) { \
+ lport = spfc_cm_op_handle.unf_alloc_local_port( \
+ (fc_port), (low_level)); \
+ } else { \
+ lport = NULL; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, fc_port, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_receive_ls_gs_pkg) { \
+ ret = spfc_cm_op_handle.unf_receive_ls_gs_pkg( \
+ (fc_port), (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_SEND_ELS_DONE(ret, fc_port, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_send_els_done) { \
+ ret = spfc_cm_op_handle.unf_send_els_done((fc_port), \
+ (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_GET_CFG_PARMS(ret, section_name, cfg_parm, cfg_value, \
+ item_num) \
+ do { \
+ if (spfc_cm_op_handle.unf_get_cfg_parms) { \
+ ret = (u32)spfc_cm_op_handle.unf_get_cfg_parms( \
+ (section_name), (cfg_parm), (cfg_value), \
+ (item_num)); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN, \
+ "Get config parameter function is NULL."); \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RELEASE_LOCAL_PORT(ret, lport) \
+ do { \
+ if (unlikely(!spfc_cm_op_handle.unf_release_local_port)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = \
+ spfc_cm_op_handle.unf_release_local_port(lport); \
+ } \
+ } while (0)
+
+#define UNF_CM_GET_SGL_ENTRY(ret, pkg, buf, buf_len) \
+ do { \
+ if (unlikely(!spfc_cm_op_handle.unf_cm_get_sgl_entry)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = spfc_cm_op_handle.unf_cm_get_sgl_entry( \
+ pkg, buf, buf_len); \
+ } \
+ } while (0)
+
+#define UNF_CM_GET_DIF_SGL_ENTRY(ret, pkg, buf, buf_len) \
+ do { \
+ if (unlikely(!spfc_cm_op_handle.unf_cm_get_dif_sgl_entry)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = spfc_cm_op_handle.unf_cm_get_dif_sgl_entry( \
+ pkg, buf, buf_len); \
+ } \
+ } while (0)
+
+#define UNF_GET_SGL_ENTRY(ret, pkg, buf, buf_len, dif_flag) \
+ do { \
+ if (dif_flag) { \
+ UNF_CM_GET_DIF_SGL_ENTRY(ret, pkg, buf, buf_len); \
+ } else { \
+ UNF_CM_GET_SGL_ENTRY(ret, pkg, buf, buf_len); \
+ } \
+ } while (0)
+
+#define UNF_GET_FREE_ESGL_PAGE(ret, lport, pkg) \
+ do { \
+ if (unlikely( \
+ !spfc_cm_op_handle.unf_get_one_free_esgl_page)) { \
+ ret = NULL; \
+ } else { \
+ ret = \
+ spfc_cm_op_handle.unf_get_one_free_esgl_page( \
+ lport, pkg); \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_FCP_CMND_RECEIVED(ret, lport, pkg) \
+ do { \
+ if (unlikely(!spfc_cm_op_handle.unf_process_fcp_cmnd)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = spfc_cm_op_handle.unf_process_fcp_cmnd(lport, \
+ pkg); \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_SCSI_COMPLETED(ret, lport, pkg) \
+ do { \
+ if (unlikely(NULL == \
+ spfc_cm_op_handle.unf_receive_ini_response)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = spfc_cm_op_handle.unf_receive_ini_response( \
+ lport, pkg); \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_PORT_EVENT(ret, lport, event, input) \
+ do { \
+ if (unlikely(!spfc_cm_op_handle.unf_fc_port_event)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = spfc_cm_op_handle.unf_fc_port_event( \
+ lport, event, input); \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RECEIVE_FC4LS_PKG(ret, fc_port, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_receive_fc4ls_pkg) { \
+ ret = spfc_cm_op_handle.unf_receive_fc4ls_pkg( \
+ (fc_port), (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_SEND_FC4LS_DONE(ret, lport, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_send_fc4ls_done) { \
+ ret = spfc_cm_op_handle.unf_send_fc4ls_done((lport), \
+ (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RECEIVE_BLS_PKG(ret, lport, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_receive_bls_pkg) { \
+ ret = spfc_cm_op_handle.unf_receive_bls_pkg((lport), \
+ (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RECEIVE_MARKER_STS(ret, lport, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_receive_marker_status) { \
+ ret = spfc_cm_op_handle.unf_receive_marker_status( \
+ (lport), (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RECEIVE_ABTS_MARKER_STS(ret, lport, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_receive_abts_marker_status) { \
+ ret = \
+ spfc_cm_op_handle.unf_receive_abts_marker_status( \
+ (lport), (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_parent_context.h b/drivers/scsi/spfc/hw/spfc_parent_context.h
new file mode 100644
index 000000000000..dc4baffe5c44
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_parent_context.h
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_PARENT_CONTEXT_H
+#define SPFC_PARENT_CONTEXT_H
+
+enum fc_parent_status {
+ FC_PARENT_STATUS_INVALID = 0,
+ FC_PARENT_STATUS_NORMAL,
+ FC_PARENT_STATUS_CLOSING
+};
+
+#define SPFC_PARENT_CONTEXT_KEY_ALIGN_SIZE (48)
+
+#define SPFC_PARENT_CONTEXT_TIMER_SIZE (32) /* 24+2*N,N=timer count */
+
+#define FC_CALC_CID(_xid) \
+ (((((_xid) >> 5) & 0x1ff) << 11) | ((((_xid) >> 5) & 0x1ff) << 2) | \
+ (((_xid) >> 3) & 0x3))
+
+#define MAX_PKT_SIZE_PER_DISPATCH (fc_child_ctx_ex->per_xmit_data_size)
+
+/* immediate data DIF info definition in parent context */
+struct immi_dif_info {
+ union {
+ u32 value;
+ struct {
+ u32 app_tag_ctrl : 3; /* DIF/DIX APP TAG Control */
+			u32 ref_tag_mode : 2; /* Bit 0: reference tag verify mode */
+			/* Bit 1: reference tag insert/replace mode
+			 * 0: fixed; 1: incremental
+			 */
+ u32 ref_tag_ctrl : 3; /* The DIF/DIX Reference tag control */
+ u32 grd_agm_ini_ctrl : 3;
+ u32 grd_agm_ctrl : 2; /* Bit 0: DIF/DIX guard verify algorithm control */
+ /* Bit 1: DIF/DIX guard replace or insert algorithm control */
+ u32 grd_ctrl : 3; /* The DIF/DIX Guard control */
+ u32 dif_verify_type : 2; /* verify type */
+ /* Check blocks whose reference tag contains 0xFFFF flag */
+ u32 difx_ref_esc : 1;
+ /* Check blocks whose application tag contains 0xFFFF flag */
+ u32 difx_app_esc : 1;
+ u32 rsvd : 8;
+ u32 sct_size : 1; /* Sector size, 1: 4K; 0: 512 */
+ u32 smd_tp : 2;
+ u32 difx_en : 1;
+ } info;
+ } dif_dw3;
+
+ u32 cmp_app_tag : 16;
+ u32 rep_app_tag : 16;
+	/* Ref tag value used for verify/compare; ref tag replace or insert is not supported */
+ u32 cmp_ref_tag;
+ u32 rep_ref_tag;
+
+ u32 rsv1 : 16;
+ u32 cmp_app_tag_msk : 16;
+};
+
+/* parent context SW section definition: SW(80B) */
+struct spfc_sw_section {
+ u16 scq_num_rcv_cmd;
+ u16 scq_num_max_scqn;
+
+ struct {
+ u32 xid : 13;
+ u32 vport : 7;
+ u32 csctrl : 8;
+ u32 rsvd0 : 4;
+ } sw_ctxt_vport_xid;
+
+ u32 scq_num_scqn_mask : 12;
+ u32 cid : 20; /* ucode init */
+
+ u16 conn_id;
+ u16 immi_rq_page_size;
+
+ u16 immi_taskid_min;
+ u16 immi_taskid_max;
+
+ union {
+ u32 pctxt_val0;
+ struct {
+ u32 srv_type : 5; /* driver init */
+			u32 srr_support : 2; /* sequence retransmission support flag */
+ u32 rsvd1 : 5;
+ u32 port_id : 4; /* driver init */
+ u32 vlan_id : 16; /* driver init */
+ } dw;
+ } sw_ctxt_misc;
+
+ u32 rsvd2;
+ u32 per_xmit_data_size;
+
+ /* RW fields */
+ u32 cmd_scq_gpa_h;
+ u32 cmd_scq_gpa_l;
+ u32 e_d_tov_timer_val; /* E_D_TOV timer value: value should be set on ms by driver */
+	u16 mfs_unaligned_bytes; /* MFS unaligned bytes per 64 KB dispatch */
+ u16 tx_mfs; /* remote port max receive fc payload length */
+ u32 xfer_rdy_dis_max_len_remote; /* max data len allowed in xfer_rdy dis scenario */
+ u32 xfer_rdy_dis_max_len_local;
+
+ union {
+ struct {
+ u32 priority : 3; /* vlan priority */
+ u32 rsvd4 : 2;
+ u32 status : 8; /* status of flow */
+ u32 cos : 3; /* doorbell cos value */
+ u32 oq_cos_data : 3; /* esch oq cos for data */
+ u32 oq_cos_cmd : 3; /* esch oq cos for cmd/xferrdy/rsp */
+			/* used for parent context cache consistency judgment, 1: done */
+ u32 flush_done : 1;
+ u32 work_mode : 2; /* 0:Target, 1:Initiator, 2:Target&Initiator */
+ u32 seq_cnt : 1; /* seq_cnt */
+ u32 e_d_tov : 1; /* E_D_TOV resolution */
+ u32 vlan_enable : 1; /* Vlan enable flag */
+ u32 conf_support : 1; /* Response confirm support flag */
+ u32 rec_support : 1; /* REC support flag */
+ u32 write_xfer_rdy : 1; /* WRITE Xfer_Rdy disable or enable */
+ u32 sgl_num : 1; /* Double or single SGL, 1: double; 0: single */
+ } dw;
+ u32 pctxt_val1;
+ } sw_ctxt_config;
+ struct immi_dif_info immi_dif_info; /* immediate data dif control info(20B) */
+};
+
+struct spfc_hw_rsvd_queue {
+ /* bitmap[0]:255-192 */
+ /* bitmap[1]:191-128 */
+ /* bitmap[2]:127-64 */
+ /* bitmap[3]:63-0 */
+ u64 seq_id_bitmap[4];
+ struct {
+ u64 last_req_seq_id : 8;
+ u64 xid : 20;
+ u64 rsvd0 : 36;
+ } wd0;
+};
+
+struct spfc_sq_qinfo {
+ u64 rsvd_0 : 10;
+ u64 pmsn_type : 1; /* 0: get pmsn from queue header; 1: get pmsn from ucode */
+ u64 rsvd_1 : 4;
+ u64 cur_wqe_o : 1; /* should be opposite from loop_o */
+ u64 rsvd_2 : 48;
+
+ u64 cur_sqe_gpa;
+ u64 pmsn_gpa; /* sq's queue header gpa */
+
+ u64 sqe_dmaattr_idx : 6;
+ u64 sq_so_ro : 2;
+ u64 rsvd_3 : 2;
+ u64 ring : 1; /* 0: link; 1: ring */
+ u64 loop_o : 1; /* init to be the first round o-bit */
+ u64 rsvd_4 : 4;
+ u64 zerocopy_dmaattr_idx : 6;
+ u64 zerocopy_so_ro : 2;
+ u64 parity : 8;
+ u64 r : 1;
+ u64 s : 1;
+ u64 enable_256 : 1;
+ u64 rsvd_5 : 23;
+ u64 pcie_template : 6;
+};
+
+struct spfc_cq_qinfo {
+ u64 pcie_template_hi : 3;
+ u64 parity_2 : 1;
+ u64 cur_cqe_gpa : 60;
+
+ u64 pi : 15;
+ u64 pi_o : 1;
+ u64 ci : 15;
+ u64 ci_o : 1;
+	u64 c_eqn_msi_x : 10; /* if init_mode = 2, MSI/MSI-X; otherwise the low 5 bits are c_eqn */
+ u64 parity_1 : 1;
+ u64 ci_type : 1; /* 0: get ci from queue header; 1: get ci from ucode */
+ u64 cq_depth : 3; /* valid when ring = 1 */
+ u64 armq : 1; /* 0: IDLE state; 1: NEXT state */
+ u64 cur_cqe_cnt : 8;
+ u64 cqe_max_cnt : 8;
+
+ u64 cqe_dmaattr_idx : 6;
+ u64 cq_so_ro : 2;
+ u64 init_mode : 2; /* 1: armQ; 2: msi/msi-x; others: rsvd */
+	u64 next_o : 1; /* next page valid o-bit */
+ u64 loop_o : 1; /* init to be the first round o-bit */
+ u64 next_cq_wqe_page_gpa : 52;
+
+ u64 pcie_template_lo : 3;
+ u64 parity_0 : 1;
+ u64 ci_gpa : 60; /* cq's queue header gpa */
+};
+
+struct spfc_scq_qinfo {
+ union {
+ struct {
+ u64 scq_n : 20; /* scq number */
+ u64 sq_min_preld_cache_num : 4;
+ u64 sq_th0_preld_cache_num : 5;
+ u64 sq_th1_preld_cache_num : 5;
+ u64 sq_th2_preld_cache_num : 5;
+ u64 rq_min_preld_cache_num : 4;
+ u64 rq_th0_preld_cache_num : 5;
+ u64 rq_th1_preld_cache_num : 5;
+ u64 rq_th2_preld_cache_num : 5;
+ u64 parity : 6;
+ } info;
+
+ u64 pctxt_val1;
+ } hw_scqc_config;
+};
+
+struct spfc_srq_qinfo {
+ u64 parity : 4;
+ u64 srqc_gpa : 60;
+};
+
+/* here is the layout of service type 12/13 */
+struct spfc_parent_context {
+ u8 key[SPFC_PARENT_CONTEXT_KEY_ALIGN_SIZE];
+ struct spfc_scq_qinfo resp_scq_qinfo;
+ struct spfc_srq_qinfo imm_srq_info;
+ struct spfc_sq_qinfo sq_qinfo;
+ u8 timer_section[SPFC_PARENT_CONTEXT_TIMER_SIZE];
+ struct spfc_hw_rsvd_queue hw_rsvdq;
+ struct spfc_srq_qinfo els_srq_info;
+ struct spfc_sw_section sw_section;
+};
+
+/* here is the layout of service type 13 */
+struct spfc_ssq_parent_context {
+ u8 rsvd0[64];
+ struct spfc_sq_qinfo sq1_qinfo;
+ u8 rsvd1[32];
+ struct spfc_sq_qinfo sq2_qinfo;
+ u8 rsvd2[32];
+ struct spfc_sq_qinfo sq3_qinfo;
+ struct spfc_scq_qinfo sq_pretchinfo;
+ u8 rsvd3[24];
+};
+
+/* FC Key Section */
+struct spfc_fc_key_section {
+ u32 xid_h : 4;
+ u32 key_size : 2;
+ u32 rsvd1 : 1;
+ u32 srv_type : 5;
+ u32 csize : 2;
+ u32 rsvd0 : 17;
+ u32 v : 1;
+
+ u32 tag_fp_h : 4;
+ u32 rsvd2 : 12;
+ u32 xid_l : 16;
+
+ u16 tag_fp_l;
+ u8 smac[6]; /* Source MAC */
+ u8 dmac[6]; /* Dest MAC */
+ u8 sid[3]; /* Source FC ID */
+ u8 did[3]; /* Dest FC ID */
+ u8 svlan[4]; /* Svlan */
+ u8 cvlan[4]; /* Cvlan */
+
+ u32 next_ptr_h;
+};
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_queue.c b/drivers/scsi/spfc/hw/spfc_queue.c
new file mode 100644
index 000000000000..3f73fa26aad1
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_queue.c
@@ -0,0 +1,4857 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_queue.h"
+#include "unf_log.h"
+#include "unf_lport.h"
+#include "spfc_module.h"
+#include "spfc_utils.h"
+#include "spfc_service.h"
+#include "spfc_chipitf.h"
+#include "spfc_parent_context.h"
+#include "sphw_hw.h"
+#include "sphw_crm.h"
+
+#define SPFC_UCODE_CMD_MODIFY_QUEUE_CONTEXT 0
+
+#define SPFC_DONE_MASK (0x00000001)
+#define SPFC_OWNER_MASK (0x80000000)
+
+#define SPFC_SQ_LINK_PRE (1 << 2)
+
+#define SPFC_SQ_HEADER_ADDR_ALIGN_SIZE (64)
+#define SPFC_SQ_HEADER_ADDR_ALIGN_SIZE_MASK (SPFC_SQ_HEADER_ADDR_ALIGN_SIZE - 1)
+
+#define SPFC_ADDR_64_ALIGN(addr) \
+ (((addr) + (SPFC_SQ_HEADER_ADDR_ALIGN_SIZE_MASK)) & \
+ ~(SPFC_SQ_HEADER_ADDR_ALIGN_SIZE_MASK))
+
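+/*
+ * Treat src_data as a (row x col) bit matrix stored column-major across the
+ * u64 words: XOR each row's bits together, invert the result and pack the
+ * per-row parity bits into the returned value (row i -> bit i).
+ */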
+u32 spfc_get_parity_value(u64 *src_data, u32 row, u32 col)
+{
+ u32 i = 0;
+ u32 j = 0;
+ u32 offset = 0;
+ u32 group = 0;
+ u32 bit_offset = 0;
+ u32 bit_val = 0;
+ u32 tmp_val = 0;
+ u32 dest_data = 0;
+
+ for (i = 0; i < row; i++) {
+ for (j = 0; j < col; j++) {
+ offset = (row * j + i);
+ group = offset / (sizeof(src_data[ARRAY_INDEX_0]) * UNF_BITS_PER_BYTE);
+ bit_offset = offset % (sizeof(src_data[ARRAY_INDEX_0]) * UNF_BITS_PER_BYTE);
+ tmp_val = (src_data[group] >> bit_offset) & SPFC_PARITY_MASK;
+
+ if (j == 0) {
+ bit_val = tmp_val;
+ continue;
+ }
+
+ bit_val ^= tmp_val;
+ }
+
+ bit_val = (~bit_val) & SPFC_PARITY_MASK;
+
+ dest_data |= (bit_val << i);
+ }
+
+ return dest_data;
+}
+
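+/*
+ * Advance the producer/consumer index; when it wraps back to 0 the owner
+ * (O) bit is toggled so hardware can tell a new round of the queue from the
+ * previous one.
+ */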
+static void spfc_update_producer_info(u16 q_depth, u16 *pus_pi, u16 *pus_owner)
+{
+ u16 current_pi = 0;
+ u16 next_pi = 0;
+ u16 owner = 0;
+
+ current_pi = *pus_pi;
+ next_pi = current_pi + 1;
+
+ if (next_pi < q_depth) {
+ *pus_pi = next_pi;
+ } else {
+ /* PI reversal */
+ *pus_pi = 0;
+
+ /* obit reversal */
+ owner = *pus_owner;
+ *pus_owner = !owner;
+ }
+}
+
+static void spfc_update_consumer_info(u16 q_depth, u16 *pus_ci, u16 *pus_owner)
+{
+ u16 current_ci = 0;
+ u16 next_ci = 0;
+ u16 owner = 0;
+
+ current_ci = *pus_ci;
+ next_ci = current_ci + 1;
+
+ if (next_ci < q_depth) {
+ *pus_ci = next_ci;
+ } else {
+ /* CI reversal */
+ *pus_ci = 0;
+
+ /* obit reversal */
+ owner = *pus_owner;
+ *pus_owner = !owner;
+ }
+}
+
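+/*
+ * The CQ/SRQ queue headers are kept big-endian in memory: convert to CPU
+ * order, update the CMSN/PMSN record together with the owner bit, convert
+ * back and copy the record out after a write barrier.
+ */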
+static inline void spfc_update_cq_header(struct ci_record *ci_record, u16 ci,
+ u16 owner)
+{
+ u32 size = 0;
+ struct ci_record record = {0};
+
+ size = sizeof(struct ci_record);
+ memcpy(&record, ci_record, size);
+ spfc_big_to_cpu64(&record, size);
+ record.cmsn = ci + (u16)(owner << SPFC_CQ_HEADER_OWNER_SHIFT);
+ record.dump_cmsn = record.cmsn;
+ spfc_cpu_to_big64(&record, size);
+
+ wmb();
+ memcpy(ci_record, &record, size);
+}
+
+static void spfc_update_srq_header(struct db_record *pmsn_record, u16 pmsn)
+{
+ u32 size = 0;
+ struct db_record record = {0};
+
+ size = sizeof(struct db_record);
+ memcpy(&record, pmsn_record, size);
+ spfc_big_to_cpu64(&record, size);
+ record.pmsn = pmsn;
+ record.dump_pmsn = record.pmsn;
+ spfc_cpu_to_big64(&record, sizeof(struct db_record));
+
+ wmb();
+ memcpy(pmsn_record, &record, size);
+}
+
+static void spfc_set_srq_wqe_owner_be(struct spfc_wqe_ctrl *sqe_ctrl_in_wp,
+ u32 owner)
+{
+ struct spfc_wqe_ctrl_ch wqe_ctrl_ch;
+
+ mb();
+
+ wqe_ctrl_ch.ctrl_ch_val = be32_to_cpu(sqe_ctrl_in_wp->ch.ctrl_ch_val);
+ wqe_ctrl_ch.wd0.owner = owner;
+ sqe_ctrl_in_wp->ch.ctrl_ch_val = cpu_to_be32(wqe_ctrl_ch.ctrl_ch_val);
+
+ mb();
+}
+
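+/*
+ * The helpers below set or clear the owner bits in both the base WQE and its
+ * extended copy (at SPFC_EXTEND_WQE_OFFSET) in big-endian form, with memory
+ * barriers so the rest of the WQE is visible before the O-bit change.
+ */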
+static inline void spfc_set_sq_wqe_owner_be(void *sqe)
+{
+ u32 *sqe_dw = (u32 *)sqe;
+ u32 *e_sqe_dw = (u32 *)((u8 *)sqe + SPFC_EXTEND_WQE_OFFSET);
+
+ /* Ensure that the write of WQE is complete */
+ mb();
+ e_sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
+ e_sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
+ sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
+ sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
+ mb();
+}
+
+void spfc_clear_sq_wqe_owner_be(struct spfc_sqe *sqe)
+{
+ u32 *sqe_dw = (u32 *)sqe;
+ u32 *e_sqe_dw = (u32 *)((u8 *)sqe + SPFC_EXTEND_WQE_OFFSET);
+
+ mb();
+ sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
+ mb();
+ sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
+ e_sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
+ e_sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
+}
+
+static void spfc_set_direct_wqe_owner_be(void *sqe, u16 owner)
+{
+ if (owner)
+ spfc_set_sq_wqe_owner_be(sqe);
+ else
+ spfc_clear_sq_wqe_owner_be(sqe);
+}
+
+static void spfc_set_srq_link_wqe_owner_be(struct spfc_linkwqe *link_wqe,
+ u32 owner, u16 pmsn)
+{
+ struct spfc_linkwqe local_lw;
+
+ mb();
+ local_lw.val_wd1 = be32_to_cpu(link_wqe->val_wd1);
+ local_lw.wd1.msn = pmsn;
+ local_lw.wd1.dump_msn = (local_lw.wd1.msn & SPFC_LOCAL_LW_WD1_DUMP_MSN_MASK);
+ link_wqe->val_wd1 = cpu_to_be32(local_lw.val_wd1);
+
+ local_lw.val_wd0 = be32_to_cpu(link_wqe->val_wd0);
+ local_lw.wd0.o = owner;
+ link_wqe->val_wd0 = cpu_to_be32(local_lw.val_wd0);
+ mb();
+}
+
+static inline bool spfc_is_scq_link_wqe(struct spfc_scq_info *scq_info)
+{
+ u16 custom_scqe_num = 0;
+
+ custom_scqe_num = scq_info->ci + 1;
+
+ if ((custom_scqe_num % scq_info->wqe_num_per_buf == 0) ||
+ scq_info->valid_wqe_num == custom_scqe_num)
+ return true;
+ else
+ return false;
+}
+
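+/*
+ * Take a free WQE page from the per-HBA WQE page pool and append it to the
+ * SQ's linked list of pages; returns NULL if the pool is empty.
+ */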
+static struct spfc_wqe_page *
+spfc_add_tail_wqe_page(struct spfc_parent_ssq_info *ssq)
+{
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_wqe_page *esgl = NULL;
+ struct list_head *free_list_head = NULL;
+ ulong flag = 0;
+
+ hba = (struct spfc_hba_info *)ssq->hba;
+
+ spin_lock_irqsave(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+
+ /* Get a WqePage from hba->sq_wpg_pool.list_free_wpg_pool, and add to
+ * sq.list_SqTailWqePage
+ */
+ if (!list_empty(&hba->sq_wpg_pool.list_free_wpg_pool)) {
+ free_list_head = UNF_OS_LIST_NEXT(&hba->sq_wpg_pool.list_free_wpg_pool);
+ list_del(free_list_head);
+ list_add_tail(free_list_head, &ssq->list_linked_list_sq);
+ esgl = list_entry(free_list_head, struct spfc_wqe_page, entry_wpg);
+
+ /* WqePage Pool counter */
+ atomic_inc(&hba->sq_wpg_pool.wpg_in_use);
+ } else {
+ esgl = NULL;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]SQ pool is empty when SQ(0x%x) try to get wqe page",
+ ssq->sqn);
+ SPFC_HBA_STAT(hba, SPFC_STAT_SQ_POOL_EMPTY);
+ }
+
+ spin_unlock_irqrestore(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+
+ return esgl;
+}
+
+static inline struct spfc_sqe *spfc_get_wqe_page_entry(struct spfc_wqe_page *wpg,
+ u32 wqe_offset)
+{
+ struct spfc_sqe *sqe_wpg = NULL;
+
+ sqe_wpg = (struct spfc_sqe *)(wpg->wpg_addr);
+ sqe_wpg += wqe_offset;
+
+ return sqe_wpg;
+}
+
+static void spfc_free_head_wqe_page(struct spfc_parent_ssq_info *ssq)
+{
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_wqe_page *sq_wpg = NULL;
+ struct list_head *entry_head_wqe_page = NULL;
+ ulong flag = 0;
+
+ atomic_dec(&ssq->wqe_page_cnt);
+
+ hba = (struct spfc_hba_info *)ssq->hba;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "Port(0x%x) free wqe page nowpagecnt:%d",
+ hba->port_cfg.port_id,
+ atomic_read(&ssq->wqe_page_cnt));
+ sq_wpg = SPFC_GET_SQ_HEAD(ssq);
+
+ memset((void *)sq_wpg->wpg_addr, WQE_MARKER_0, hba->sq_wpg_pool.wpg_size);
+
+ spin_lock_irqsave(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+ entry_head_wqe_page = &sq_wpg->entry_wpg;
+ list_del(entry_head_wqe_page);
+ list_add_tail(entry_head_wqe_page, &hba->sq_wpg_pool.list_free_wpg_pool);
+
+ /* WqePage Pool counter */
+ atomic_dec(&hba->sq_wpg_pool.wpg_in_use);
+ spin_unlock_irqrestore(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+}
+
+static void spfc_free_link_list_wpg(struct spfc_parent_ssq_info *ssq)
+{
+ ulong flag = 0;
+ struct spfc_hba_info *hba = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct list_head *entry_head_wqe_page = NULL;
+ struct spfc_wqe_page *sq_wpg = NULL;
+
+ hba = (struct spfc_hba_info *)ssq->hba;
+
+ list_for_each_safe(node, next_node, &ssq->list_linked_list_sq) {
+ sq_wpg = list_entry(node, struct spfc_wqe_page, entry_wpg);
+ memset((void *)sq_wpg->wpg_addr, WQE_MARKER_0, hba->sq_wpg_pool.wpg_size);
+
+ spin_lock_irqsave(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+ entry_head_wqe_page = &sq_wpg->entry_wpg;
+ list_del(entry_head_wqe_page);
+ list_add_tail(entry_head_wqe_page, &hba->sq_wpg_pool.list_free_wpg_pool);
+
+ /* WqePage Pool counter */
+ atomic_dec(&ssq->wqe_page_cnt);
+ atomic_dec(&hba->sq_wpg_pool.wpg_in_use);
+
+ spin_unlock_irqrestore(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Port(0x%x) RPort(0x%x) Sq(0x%x) link list destroyed, Sq.WqePageCnt=0x%x, SqWpgPool.wpg_in_use=0x%x",
+ hba->port_cfg.port_id, ssq->sqn, ssq->context_id,
+ atomic_read(&ssq->wqe_page_cnt), atomic_read(&hba->sq_wpg_pool.wpg_in_use));
+}
+
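+/*
+ * Add one WQE page to the SQ: zero the control fields of every SQE slot and
+ * write a link WQE into the last slot. For a ring-style queue the link WQE
+ * points back to this page; for a link-style queue the next-page address is
+ * left as 0 here.
+ */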
+struct spfc_wqe_page *
+spfc_add_one_wqe_page(struct spfc_parent_ssq_info *ssq)
+{
+ u32 wqe_inx = 0;
+ struct spfc_wqe_page *wqe_page = NULL;
+ struct spfc_sqe *sqe_in_wp = NULL;
+ struct spfc_linkwqe *link_wqe_in_wpg = NULL;
+ struct spfc_linkwqe link_wqe;
+
+ /* Add a new Wqe Page */
+ wqe_page = spfc_add_tail_wqe_page(ssq);
+
+ if (!wqe_page)
+ return NULL;
+
+ for (wqe_inx = 0; wqe_inx <= ssq->wqe_num_per_buf; wqe_inx++) {
+ sqe_in_wp = spfc_get_wqe_page_entry(wqe_page, wqe_inx);
+ sqe_in_wp->ctrl_sl.ch.ctrl_ch_val = 0;
+ sqe_in_wp->ectrl_sl.ch.ctrl_ch_val = 0;
+ }
+
+	/* Set the last WQE slot of the page as a link WQE */
+ link_wqe_in_wpg = (struct spfc_linkwqe *)spfc_get_wqe_page_entry(wqe_page,
+ ssq->wqe_num_per_buf);
+ link_wqe.val_wd0 = 0;
+ link_wqe.val_wd1 = 0;
+ link_wqe.next_page_addr_hi = (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
+ ? SPFC_MSD(wqe_page->wpg_phy_addr)
+ : 0;
+ link_wqe.next_page_addr_lo = (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
+ ? SPFC_LSD(wqe_page->wpg_phy_addr)
+ : 0;
+ link_wqe.wd0.wf = CQM_WQE_WF_LINK;
+ link_wqe.wd0.ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+ link_wqe.wd0.o = !(ssq->last_pi_owner);
+ link_wqe.wd1.lp = (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
+ ? CQM_LINK_WQE_LP_VALID
+ : CQM_LINK_WQE_LP_INVALID;
+ spfc_cpu_to_big32(&link_wqe, sizeof(struct spfc_linkwqe));
+ memcpy(link_wqe_in_wpg, &link_wqe, sizeof(struct spfc_linkwqe));
+ memcpy((u8 *)link_wqe_in_wpg + SPFC_EXTEND_WQE_OFFSET,
+ &link_wqe, sizeof(struct spfc_linkwqe));
+
+ return wqe_page;
+}
+
+static inline struct spfc_scqe_type *
+spfc_get_scq_entry(struct spfc_scq_info *scq_info)
+{
+ u32 buf_id = 0;
+ u16 buf_offset = 0;
+ u16 ci = 0;
+ struct cqm_buf_list *buf = NULL;
+
+ FC_CHECK_RETURN_VALUE(scq_info, NULL);
+
+ ci = scq_info->ci;
+ buf_id = ci / scq_info->wqe_num_per_buf;
+ buf = &scq_info->cqm_scq_info->q_room_buf_1.buf_list[buf_id];
+ buf_offset = (u16)(ci % scq_info->wqe_num_per_buf);
+
+ return (struct spfc_scqe_type *)(buf->va) + buf_offset;
+}
+
+static inline bool spfc_is_cqe_done(u32 *done, u32 *owner, u16 driver_owner)
+{
+ return ((((u16)(!!(*done & SPFC_DONE_MASK)) == driver_owner) &&
+ ((u16)(!!(*owner & SPFC_OWNER_MASK)) == driver_owner)) ? true : false);
+}
+
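+/*
+ * SCQ polling loop driven from the tasklet: skip link WQEs (only advancing
+ * CI), stop when the done/owner bits show no more valid CQEs, otherwise copy
+ * the CQE out, hand it to spfc_rcv_scq_entry_from_scq() and advance CI and
+ * the CI record in the queue header. If proc_cnt entries were consumed the
+ * tasklet is re-scheduled to keep draining.
+ */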
+u32 spfc_process_scq_cqe_entity(ulong info, u32 proc_cnt)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 index = 0;
+ struct wq_header *queue_header = NULL;
+ struct spfc_scqe_type *scqe = NULL;
+ struct spfc_scqe_type tmp_scqe;
+ struct spfc_scq_info *scq_info = (struct spfc_scq_info *)info;
+
+ FC_CHECK_RETURN_VALUE(scq_info, ret);
+ SPFC_FUNCTION_ENTER;
+
+ queue_header = (struct wq_header *)(void *)(scq_info->cqm_scq_info->q_header_vaddr);
+
+ for (index = 0; index < proc_cnt;) {
+ /* If linked wqe, then update CI */
+ if (spfc_is_scq_link_wqe(scq_info)) {
+ spfc_update_consumer_info(scq_info->valid_wqe_num,
+ &scq_info->ci,
+ &scq_info->ci_owner);
+ spfc_update_cq_header(&queue_header->ci_record,
+ scq_info->ci, scq_info->ci_owner);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_INFO,
+ "[info]Current wqe is a linked wqe");
+ continue;
+ }
+
+ /* Get SCQE and then check obit & donebit whether been set */
+ scqe = spfc_get_scq_entry(scq_info);
+ if (unlikely(!scqe)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Scqe is NULL");
+ break;
+ }
+
+ if (!spfc_is_cqe_done((u32 *)(void *)&scqe->wd0,
+ (u32 *)(void *)&scqe->ch.wd0,
+ scq_info->ci_owner)) {
+ atomic_set(&scq_info->flush_stat, SPFC_QUEUE_FLUSH_DONE);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_INFO, "[info]Now has no valid scqe");
+ break;
+ }
+
+ /* rmb & do memory copy */
+ rmb();
+ memcpy(&tmp_scqe, scqe, sizeof(struct spfc_scqe_type));
+ /* process SCQ entry */
+ ret = spfc_rcv_scq_entry_from_scq(scq_info->hba, (void *)&tmp_scqe,
+ scq_info->queue_id);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]QueueId(0x%x) scqn(0x%x) scqe process error at CI(0x%x)",
+ scq_info->queue_id, scq_info->scqn, scq_info->ci);
+ }
+
+ /* Update Driver's CI & Obit */
+ spfc_update_consumer_info(scq_info->valid_wqe_num,
+ &scq_info->ci, &scq_info->ci_owner);
+ spfc_update_cq_header(&queue_header->ci_record, scq_info->ci,
+ scq_info->ci_owner);
+ index++;
+ }
+
+	/* Re-schedule if necessary */
+ if (index == proc_cnt)
+ tasklet_schedule(&scq_info->tasklet);
+
+ SPFC_FUNCTION_RETURN;
+
+ return index;
+}
+
+void spfc_set_scq_irg_cfg(struct spfc_hba_info *hba, u32 mode, u16 msix_index)
+{
+#define SPFC_POLLING_MODE_ITERRUPT_PENDING_CNT 5
+#define SPFC_POLLING_MODE_ITERRUPT_COALESC_TIMER_CFG 10
+ u8 pending_limt = 0;
+ u8 coalesc_timer_cfg = 0;
+
+ struct interrupt_info info = {0};
+
+ if (mode != SPFC_SCQ_INTR_LOW_LATENCY_MODE) {
+ pending_limt = SPFC_POLLING_MODE_ITERRUPT_PENDING_CNT;
+ coalesc_timer_cfg =
+ SPFC_POLLING_MODE_ITERRUPT_COALESC_TIMER_CFG;
+ }
+
+ memset(&info, 0, sizeof(info));
+ info.interrupt_coalesc_set = 1;
+ info.lli_set = 0;
+ info.pending_limt = pending_limt;
+ info.coalesc_timer_cfg = coalesc_timer_cfg;
+ info.resend_timer_cfg = 0;
+ info.msix_index = msix_index;
+
+ sphw_set_interrupt_cfg(hba->dev_handle, info, SPHW_CHANNEL_FC);
+}
+
+void spfc_process_scq_cqe(ulong info)
+{
+ struct spfc_scq_info *scq_info = (struct spfc_scq_info *)info;
+
+ FC_CHECK_RETURN_VOID(scq_info);
+
+ spfc_process_scq_cqe_entity(info, SPFC_CQE_MAX_PROCESS_NUM_PER_INTR);
+}
+
+irqreturn_t spfc_scq_irq(int irq, void *scq_info)
+{
+ SPFC_FUNCTION_ENTER;
+
+ FC_CHECK_RETURN_VALUE(scq_info, IRQ_NONE);
+
+ tasklet_schedule(&((struct spfc_scq_info *)scq_info)->tasklet);
+
+ SPFC_FUNCTION_RETURN;
+
+ return IRQ_HANDLED;
+}
+
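+/*
+ * Allocate one MSI-X vector for the SCQ, initialize the per-SCQ tasklet and
+ * request the IRQ; on any failure the vector is released again.
+ */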
+static u32 spfc_alloc_scq_int(struct spfc_scq_info *scq_info)
+{
+ int ret = UNF_RETURN_ERROR_S32;
+ u16 act_num = 0;
+ struct irq_info irq_info;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VALUE(scq_info, UNF_RETURN_ERROR);
+
+ /* 1. Alloc & check SCQ IRQ */
+ hba = (struct spfc_hba_info *)(scq_info->hba);
+ ret = sphw_alloc_irqs(hba->dev_handle, SERVICE_T_FC, SPFC_INT_NUM_PER_QUEUE,
+ &irq_info, &act_num);
+ if (ret != RETURN_OK || act_num != SPFC_INT_NUM_PER_QUEUE) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate scq irq failed, return %d", ret);
+ return UNF_RETURN_ERROR;
+ }
+
+ if (irq_info.msix_entry_idx >= SPFC_SCQ_INT_ID_MAX) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SCQ irq id exceed %d, msix_entry_idx %d",
+ SPFC_SCQ_INT_ID_MAX, irq_info.msix_entry_idx);
+ sphw_free_irq(hba->dev_handle, SERVICE_T_FC, irq_info.irq_id);
+ return UNF_RETURN_ERROR;
+ }
+
+ scq_info->irq_id = (u32)(irq_info.irq_id);
+ scq_info->msix_entry_idx = (u16)(irq_info.msix_entry_idx);
+
+ snprintf(scq_info->irq_name, SPFC_IRQ_NAME_MAX, "fc_scq%u_%x_msix%u",
+ scq_info->queue_id, hba->port_cfg.port_id, scq_info->msix_entry_idx);
+
+ /* 2. SCQ IRQ tasklet init */
+ tasklet_init(&scq_info->tasklet, spfc_process_scq_cqe, (ulong)(uintptr_t)scq_info);
+
+ /* 3. Request IRQ for SCQ */
+ ret = request_irq(scq_info->irq_id, spfc_scq_irq, 0, scq_info->irq_name, scq_info);
+
+ sphw_set_msix_state(hba->dev_handle, scq_info->msix_entry_idx, SPHW_MSIX_ENABLE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Request SCQ irq failed, SCQ Index = %u, return %d",
+ scq_info->queue_id, ret);
+ sphw_free_irq(hba->dev_handle, SERVICE_T_FC, scq_info->irq_id);
+ memset(scq_info->irq_name, 0, SPFC_IRQ_NAME_MAX);
+ scq_info->irq_id = 0;
+ scq_info->msix_entry_idx = 0;
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+static void spfc_free_scq_int(struct spfc_scq_info *scq_info)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(scq_info);
+
+ hba = (struct spfc_hba_info *)(scq_info->hba);
+ sphw_set_msix_state(hba->dev_handle, scq_info->msix_entry_idx, SPHW_MSIX_DISABLE);
+ free_irq(scq_info->irq_id, scq_info);
+ tasklet_kill(&scq_info->tasklet);
+ sphw_free_irq(hba->dev_handle, SERVICE_T_FC, scq_info->irq_id);
+ memset(scq_info->irq_name, 0, SPFC_IRQ_NAME_MAX);
+ scq_info->irq_id = 0;
+ scq_info->msix_entry_idx = 0;
+}
+
+static void spfc_init_scq_info(struct spfc_hba_info *hba, struct cqm_queue *cqm_scq,
+ u32 queue_id, struct spfc_scq_info **scq_info)
+{
+ FC_CHECK_RETURN_VOID(hba);
+ FC_CHECK_RETURN_VOID(cqm_scq);
+ FC_CHECK_RETURN_VOID(scq_info);
+
+ *scq_info = &hba->scq_info[queue_id];
+ (*scq_info)->queue_id = queue_id;
+ (*scq_info)->scqn = cqm_scq->index;
+ (*scq_info)->hba = (void *)hba;
+
+ (*scq_info)->cqm_scq_info = cqm_scq;
+ (*scq_info)->wqe_num_per_buf =
+ cqm_scq->q_room_buf_1.buf_size / SPFC_SCQE_SIZE;
+ (*scq_info)->wqe_size = SPFC_SCQE_SIZE;
+ (*scq_info)->valid_wqe_num = (SPFC_SCQ_IS_STS(queue_id) ? SPFC_STS_SCQ_DEPTH
+ : SPFC_CMD_SCQ_DEPTH);
+ (*scq_info)->scqc_cq_depth = (SPFC_SCQ_IS_STS(queue_id) ? SPFC_STS_SCQC_CQ_DEPTH
+ : SPFC_CMD_SCQC_CQ_DEPTH);
+ (*scq_info)->scqc_ci_type = SPFC_STS_SCQ_CI_TYPE;
+ (*scq_info)->ci = 0;
+ (*scq_info)->ci_owner = 1;
+}
+
+static void spfc_init_scq_header(struct wq_header *queue_header)
+{
+ FC_CHECK_RETURN_VOID(queue_header);
+
+ memset(queue_header, 0, sizeof(struct wq_header));
+
+ /* Obit default is 1 */
+ queue_header->db_record.pmsn = 1 << UNF_SHIFT_15;
+ queue_header->db_record.dump_pmsn = queue_header->db_record.pmsn;
+ queue_header->ci_record.cmsn = 1 << UNF_SHIFT_15;
+ queue_header->ci_record.dump_cmsn = queue_header->ci_record.cmsn;
+
+ /* Big endian convert */
+ spfc_cpu_to_big64((void *)queue_header, sizeof(struct wq_header));
+}
+
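+/*
+ * Fill the SCQ context from the CQM queue info, compute the parity bits over
+ * the packed queue_info_bus words and convert the whole context to
+ * big-endian for the hardware.
+ */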
+static void spfc_cfg_scq_ctx(struct spfc_scq_info *scq_info,
+ struct spfc_cq_qinfo *scq_ctx)
+{
+ struct cqm_queue *cqm_scq_info = NULL;
+ struct spfc_queue_info_bus queue_bus;
+ u64 parity = 0;
+
+ FC_CHECK_RETURN_VOID(scq_info);
+
+ cqm_scq_info = scq_info->cqm_scq_info;
+
+ scq_ctx->pcie_template_hi = 0;
+ scq_ctx->cur_cqe_gpa = cqm_scq_info->q_room_buf_1.buf_list->pa >> SPFC_CQE_GPA_SHIFT;
+ scq_ctx->pi = 0;
+ scq_ctx->pi_o = 1;
+ scq_ctx->ci = scq_info->ci;
+ scq_ctx->ci_o = scq_info->ci_owner;
+ scq_ctx->c_eqn_msi_x = scq_info->msix_entry_idx;
+ scq_ctx->ci_type = scq_info->scqc_ci_type;
+ scq_ctx->cq_depth = scq_info->scqc_cq_depth;
+ scq_ctx->armq = SPFC_ARMQ_IDLE;
+ scq_ctx->cur_cqe_cnt = 0;
+ scq_ctx->cqe_max_cnt = 0;
+ scq_ctx->cqe_dmaattr_idx = 0;
+ scq_ctx->cq_so_ro = 0;
+ scq_ctx->init_mode = SPFC_CQ_INT_MODE;
+ scq_ctx->next_o = 1;
+ scq_ctx->loop_o = 1;
+ scq_ctx->next_cq_wqe_page_gpa = cqm_scq_info->q_room_buf_1.buf_list[ARRAY_INDEX_1].pa >>
+ SPFC_NEXT_CQE_GPA_SHIFT;
+ scq_ctx->pcie_template_lo = 0;
+
+ scq_ctx->ci_gpa = (cqm_scq_info->q_header_paddr + offsetof(struct wq_header, ci_record)) >>
+ SPFC_CQE_GPA_SHIFT;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] |= ((u64)(scq_info->scqn & SPFC_SCQN_MASK)); /* bits 20 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->pcie_template_lo)) << UNF_SHIFT_20);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->ci_gpa & SPFC_SCQ_CTX_CI_GPA_MASK)) <<
+ UNF_SHIFT_23); /* bits 28 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->cqe_dmaattr_idx)) << UNF_SHIFT_51);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->cq_so_ro)) << UNF_SHIFT_57); /* bits 2 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->init_mode)) << UNF_SHIFT_59); /* bits 2 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->c_eqn_msi_x &
+ SPFC_SCQ_CTX_C_EQN_MSI_X_MASK)) << UNF_SHIFT_61);
+ queue_bus.bus[ARRAY_INDEX_1] |= ((u64)(scq_ctx->c_eqn_msi_x >> UNF_SHIFT_3)); /* bits 7 */
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->ci_type)) << UNF_SHIFT_7); /* bits 1 */
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->cq_depth)) << UNF_SHIFT_8); /* bits 3 */
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->cqe_max_cnt)) << UNF_SHIFT_11);
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->pcie_template_hi)) << UNF_SHIFT_19);
+
+ parity = spfc_get_parity_value(queue_bus.bus, SPFC_SCQC_BUS_ROW, SPFC_SCQC_BUS_COL);
+ scq_ctx->parity_0 = parity & SPFC_PARITY_MASK;
+ scq_ctx->parity_1 = (parity >> UNF_SHIFT_1) & SPFC_PARITY_MASK;
+ scq_ctx->parity_2 = (parity >> UNF_SHIFT_2) & SPFC_PARITY_MASK;
+
+ spfc_cpu_to_big64((void *)scq_ctx, sizeof(struct spfc_cq_qinfo));
+}
+
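+/*
+ * Create the SCQ context in hardware by sending an SPFC_TASK_T_INIT_SCQC
+ * command through the synchronous command queue.
+ */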
+static u32 spfc_creat_scqc_via_cmdq_sync(struct spfc_hba_info *hba,
+ struct spfc_cq_qinfo *scqc, u32 scqn)
+{
+#define SPFC_INIT_SCQC_TIMEOUT 3000
+ int ret;
+ u32 covrt_size;
+ struct spfc_cmdqe_creat_scqc init_scqc_cmd;
+ struct sphw_cmd_buf *cmdq_in_buf;
+
+ cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmdq_in_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf alloc failed");
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SCQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(&init_scqc_cmd, 0, sizeof(init_scqc_cmd));
+ init_scqc_cmd.wd0.task_type = SPFC_TASK_T_INIT_SCQC;
+ init_scqc_cmd.wd1.scqn = SPFC_LSW(scqn);
+ covrt_size = sizeof(init_scqc_cmd) - sizeof(init_scqc_cmd.scqc);
+ spfc_cpu_to_big32(&init_scqc_cmd, covrt_size);
+
+ /* scqc is already big endian */
+ memcpy(init_scqc_cmd.scqc, scqc, sizeof(*scqc));
+ memcpy(cmdq_in_buf->buf, &init_scqc_cmd, sizeof(init_scqc_cmd));
+ cmdq_in_buf->size = sizeof(init_scqc_cmd);
+
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
+ cmdq_in_buf, NULL, NULL,
+ SPFC_INIT_SCQC_TIMEOUT, SPHW_CHANNEL_FC);
+ sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
+ if (ret) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Send creat scqc via cmdq failed, ret=%d",
+ ret);
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SCQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ SPFC_IO_STAT(hba, SPFC_TASK_T_INIT_SCQC);
+
+ return RETURN_OK;
+}
+
+static u32 spfc_delete_ssqc_via_cmdq_sync(struct spfc_hba_info *hba, u32 xid,
+ u64 context_gpa, u32 entry_count)
+{
+#define SPFC_DELETE_SSQC_TIMEOUT 3000
+ int ret = RETURN_OK;
+ struct spfc_cmdqe_delete_ssqc delete_ssqc_cmd;
+ struct sphw_cmd_buf *cmdq_in_buf = NULL;
+
+ cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmdq_in_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf alloc failed");
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(&delete_ssqc_cmd, 0, sizeof(delete_ssqc_cmd));
+ delete_ssqc_cmd.wd0.task_type = SPFC_TASK_T_CLEAR_SSQ_CONTEXT;
+ delete_ssqc_cmd.wd0.xid = xid;
+ delete_ssqc_cmd.wd0.entry_count = entry_count;
+ delete_ssqc_cmd.wd1.scqn = SPFC_LSW(0);
+ delete_ssqc_cmd.context_gpa_hi = SPFC_HIGH_32_BITS(context_gpa);
+ delete_ssqc_cmd.context_gpa_lo = SPFC_LOW_32_BITS(context_gpa);
+ spfc_cpu_to_big32(&delete_ssqc_cmd, sizeof(delete_ssqc_cmd));
+ memcpy(cmdq_in_buf->buf, &delete_ssqc_cmd, sizeof(delete_ssqc_cmd));
+ cmdq_in_buf->size = sizeof(delete_ssqc_cmd);
+
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
+ cmdq_in_buf, NULL, NULL,
+ SPFC_DELETE_SSQC_TIMEOUT,
+ SPHW_CHANNEL_FC);
+
+ sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
+
+ return ret;
+}
+
+static void spfc_free_ssq_qpc(struct spfc_hba_info *hba, u32 free_sq_num)
+{
+ u32 global_sq_index = 0;
+ u32 qid = 0;
+ struct spfc_parent_shared_queue_info *ssq_info = NULL;
+
+ SPFC_FUNCTION_ENTER;
+ for (global_sq_index = 0; global_sq_index < free_sq_num;) {
+ for (qid = 1; qid <= SPFC_SQ_NUM_PER_QPC; qid++) {
+ ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index];
+ if (qid == SPFC_SQ_NUM_PER_QPC ||
+ global_sq_index == free_sq_num - 1) {
+ if (ssq_info->parent_ctx.cqm_parent_ctx_obj) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[INFO]qid 0x%x, global_sq_index 0x%x, free_sq_num 0x%x",
+ qid, global_sq_index, free_sq_num);
+ cqm3_object_delete(&ssq_info->parent_ctx
+ .cqm_parent_ctx_obj->object);
+ ssq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
+ }
+ }
+ global_sq_index++;
+ if (global_sq_index >= free_sq_num)
+ break;
+ }
+ }
+}
+
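+/*
+ * Tear down the shared SQs: free each SQ's WQE page list and queue header,
+ * ask the ucode to clear the SSQ contexts (one command per parent QPC, which
+ * hosts up to SPFC_SQ_NUM_PER_QPC SQs), then release the CQM context objects
+ * after a short grace period.
+ */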
+void spfc_free_ssq(void *handle, u32 free_sq_num)
+{
+#define SPFC_FREE_SSQ_WAIT_MS 1000
+ u32 global_sq_index = 0;
+ u32 qid = 0;
+ struct spfc_parent_shared_queue_info *ssq_info = NULL;
+ struct spfc_parent_ssq_info *sq_ctrl = NULL;
+ struct cqm_qpc_mpt *prnt_ctx = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 entry_count = 0;
+ struct spfc_hba_info *hba = NULL;
+
+ SPFC_FUNCTION_ENTER;
+
+ hba = (struct spfc_hba_info *)handle;
+ for (global_sq_index = 0; global_sq_index < free_sq_num;) {
+ for (qid = 1; qid <= SPFC_SQ_NUM_PER_QPC; qid++) {
+ ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index];
+ sq_ctrl = &ssq_info->parent_ssq_info;
+ /* Free data cos */
+ spfc_free_link_list_wpg(sq_ctrl);
+ if (sq_ctrl->queue_head_original) {
+ pci_unmap_single(hba->pci_dev,
+ sq_ctrl->queue_hdr_phy_addr_original,
+ sizeof(struct spfc_queue_header) +
+ SPFC_SQ_HEADER_ADDR_ALIGN_SIZE,
+ DMA_BIDIRECTIONAL);
+ kfree(sq_ctrl->queue_head_original);
+ sq_ctrl->queue_head_original = NULL;
+ }
+ if (qid == SPFC_SQ_NUM_PER_QPC || global_sq_index == free_sq_num - 1) {
+ if (ssq_info->parent_ctx.cqm_parent_ctx_obj) {
+ prnt_ctx = ssq_info->parent_ctx.cqm_parent_ctx_obj;
+ entry_count = (qid == SPFC_SQ_NUM_PER_QPC ?
+ SPFC_SQ_NUM_PER_QPC :
+ free_sq_num - global_sq_index);
+ ret = spfc_delete_ssqc_via_cmdq_sync(hba, prnt_ctx->xid,
+ prnt_ctx->paddr,
+ entry_count);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]ucode delete ssq fail, glbindex 0x%x, qid 0x%x, glsqindex 0x%x",
+ global_sq_index, qid, free_sq_num);
+ }
+ }
+ }
+ global_sq_index++;
+ if (global_sq_index >= free_sq_num)
+ break;
+ }
+ }
+
+ msleep(SPFC_FREE_SSQ_WAIT_MS);
+
+ spfc_free_ssq_qpc(hba, free_sq_num);
+}
+
+u32 spfc_creat_ssqc_via_cmdq_sync(struct spfc_hba_info *hba,
+ struct spfc_ssq_parent_context *ssqc,
+ u32 xid, u64 context_gpa)
+{
+#define SPFC_INIT_SSQC_TIMEOUT 3000
+ int ret;
+ u32 covrt_size;
+ struct spfc_cmdqe_creat_ssqc create_ssqc_cmd;
+ struct sphw_cmd_buf *cmdq_in_buf = NULL;
+
+ cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmdq_in_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf alloc failed");
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(&create_ssqc_cmd, 0, sizeof(create_ssqc_cmd));
+ create_ssqc_cmd.wd0.task_type = SPFC_TASK_T_CREATE_SSQ_CONTEXT;
+ create_ssqc_cmd.wd0.xid = xid;
+ create_ssqc_cmd.wd1.scqn = SPFC_LSW(0);
+ create_ssqc_cmd.context_gpa_hi = SPFC_HIGH_32_BITS(context_gpa);
+ create_ssqc_cmd.context_gpa_lo = SPFC_LOW_32_BITS(context_gpa);
+ covrt_size = sizeof(create_ssqc_cmd) - sizeof(create_ssqc_cmd.ssqc);
+ spfc_cpu_to_big32(&create_ssqc_cmd, covrt_size);
+
+ /* scqc is already big endian */
+ memcpy(create_ssqc_cmd.ssqc, ssqc, sizeof(*ssqc));
+ memcpy(cmdq_in_buf->buf, &create_ssqc_cmd, sizeof(create_ssqc_cmd));
+ cmdq_in_buf->size = sizeof(create_ssqc_cmd);
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
+ cmdq_in_buf, NULL, NULL,
+ SPFC_INIT_SSQC_TIMEOUT, SPHW_CHANNEL_FC);
+ sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
+ if (ret)
+ return UNF_RETURN_ERROR;
+ return RETURN_OK;
+}
+
+void spfc_init_sq_prnt_ctxt_sq_qinfo(struct spfc_sq_qinfo *sq_info,
+ struct spfc_parent_ssq_info *ssq)
+{
+ struct spfc_wqe_page *head_wqe_page = NULL;
+ struct spfc_sq_qinfo *prnt_sq_ctx = NULL;
+ struct spfc_queue_info_bus queue_bus;
+
+ SPFC_FUNCTION_ENTER;
+
+ /* Obtains the Parent Context address */
+ head_wqe_page = SPFC_GET_SQ_HEAD(ssq);
+
+ prnt_sq_ctx = sq_info;
+
+ /* The PMSN is updated by the host driver */
+ prnt_sq_ctx->pmsn_type = SPFC_PMSN_CI_TYPE_FROM_HOST;
+
+	/* O-bit value of a valid SQE in the current round of the SQ.
+	 * For a linked-list SQ it is always 1; 0 is invalid.
+	 */
+ prnt_sq_ctx->loop_o =
+ SPFC_OWNER_DRIVER_PRODUCT; /* current valid o-bit */
+
+ /* should be opposite from loop_o */
+ prnt_sq_ctx->cur_wqe_o = ~(prnt_sq_ctx->loop_o);
+
+ /* the first sqe's gpa */
+ prnt_sq_ctx->cur_sqe_gpa = head_wqe_page->wpg_phy_addr;
+
+	/* GPA of the queue header initialized for the SQ in host memory.
+	 * The value must be 16-byte aligned.
+	 */
+ prnt_sq_ctx->pmsn_gpa = ssq->queue_hdr_phy_addr;
+ if (wqe_pre_load != 0)
+ prnt_sq_ctx->pmsn_gpa |= SPFC_SQ_LINK_PRE;
+
+ /* This field is used to fill in the dmaattr_idx field of the ComboDMA.
+ * The default value is 0
+ */
+ prnt_sq_ctx->sqe_dmaattr_idx = SPFC_DMA_ATTR_OFST;
+
+ /* This field is filled using the value of RO_SO in the SGL0 of the
+ * ComboDMA
+ */
+ prnt_sq_ctx->sq_so_ro = SPFC_PCIE_RELAXED_ORDERING;
+
+ prnt_sq_ctx->ring = ssq->queue_style;
+
+ /* This field is used to set the SGL0 field of the Child solicDMA */
+ prnt_sq_ctx->zerocopy_dmaattr_idx = SPFC_DMA_ATTR_OFST;
+
+ prnt_sq_ctx->zerocopy_so_ro = SPFC_PCIE_RELAXED_ORDERING;
+ prnt_sq_ctx->enable_256 = SPFC_256BWQE_ENABLE;
+
+ /* PCIe attribute information */
+ prnt_sq_ctx->pcie_template = SPFC_PCIE_TEMPLATE;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] |= ((u64)(ssq->context_id & SPFC_SSQ_CTX_MASK)); /* bits 20 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->sqe_dmaattr_idx)) << UNF_SHIFT_20);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->sq_so_ro)) << UNF_SHIFT_26);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->ring)) << UNF_SHIFT_28); /* bits 1 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->zerocopy_dmaattr_idx))
+ << UNF_SHIFT_29); /* bits 6 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->zerocopy_so_ro)) << UNF_SHIFT_35);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->pcie_template)) << UNF_SHIFT_37);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->pmsn_gpa >> UNF_SHIFT_4))
+ << UNF_SHIFT_43); /* bits 21 */
+ queue_bus.bus[ARRAY_INDEX_1] |= ((u64)(prnt_sq_ctx->pmsn_gpa >> UNF_SHIFT_25));
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(prnt_sq_ctx->pmsn_type)) << UNF_SHIFT_39);
+ prnt_sq_ctx->parity = spfc_get_parity_value(queue_bus.bus, SPFC_SQC_BUS_ROW,
+ SPFC_SQC_BUS_COL);
+ spfc_cpu_to_big64(prnt_sq_ctx, sizeof(struct spfc_sq_qinfo));
+
+ SPFC_FUNCTION_RETURN;
+}
+
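+/*
+ * Create all shared SQs. Each 256B parent context object carries
+ * SPFC_SQ_NUM_PER_QPC SQs: for every SQ the software control structure, the
+ * 64B-aligned queue header and the first WQE page are set up, then the whole
+ * parent context (key section, per-SQ qinfo and prefetch config) is pushed
+ * to the ucode with spfc_creat_ssqc_via_cmdq_sync().
+ */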
+u32 spfc_create_ssq(void *handle)
+{
+ u32 ret = RETURN_OK;
+ u32 global_sq_index = 0;
+ u32 qid = 0;
+ struct cqm_qpc_mpt *prnt_ctx = NULL;
+ struct spfc_parent_shared_queue_info *ssq_info = NULL;
+ struct spfc_parent_ssq_info *sq_ctrl = NULL;
+ u32 queue_header_alloc_size = 0;
+ struct spfc_wqe_page *head_wpg = NULL;
+ struct spfc_ssq_parent_context prnt_ctx_info;
+ struct spfc_sq_qinfo *sq_info = NULL;
+ struct spfc_scq_qinfo *psq_pretchinfo = NULL;
+ struct spfc_queue_info_bus queue_bus;
+ struct spfc_fc_key_section *keysection = NULL;
+ struct spfc_hba_info *hba = NULL;
+ dma_addr_t origin_addr;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ hba = (struct spfc_hba_info *)handle;
+ for (global_sq_index = 0; global_sq_index < SPFC_MAX_SSQ_NUM;) {
+ qid = 0;
+ prnt_ctx = cqm3_object_qpc_mpt_create(hba->dev_handle, SERVICE_T_FC,
+ CQM_OBJECT_SERVICE_CTX,
+ SPFC_CNTX_SIZE_256B, NULL,
+ CQM_INDEX_INVALID);
+ if (!prnt_ctx) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create ssq context failed, CQM_INDEX is 0x%x",
+ CQM_INDEX_INVALID);
+ goto ssq_ctx_create_fail;
+ }
+ memset(&prnt_ctx_info, 0, sizeof(prnt_ctx_info));
+ keysection = (struct spfc_fc_key_section *)&prnt_ctx_info;
+ keysection->xid_h = (prnt_ctx->xid >> UNF_SHIFT_16) & SPFC_KEYSECTION_XID_H_MASK;
+ keysection->xid_l = prnt_ctx->xid & SPFC_KEYSECTION_XID_L_MASK;
+ spfc_cpu_to_big32(keysection, sizeof(struct spfc_fc_key_section));
+ for (qid = 0; qid < SPFC_SQ_NUM_PER_QPC; qid++) {
+ sq_info = (struct spfc_sq_qinfo *)((u8 *)(&prnt_ctx_info) + ((qid + 1) *
+ SPFC_SQ_SPACE_OFFSET));
+ ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index];
+ ssq_info->parent_ctx.cqm_parent_ctx_obj = prnt_ctx;
+ /* Initialize struct spfc_parent_sq_info */
+ sq_ctrl = &ssq_info->parent_ssq_info;
+ sq_ctrl->hba = (void *)hba;
+ sq_ctrl->context_id = prnt_ctx->xid;
+ sq_ctrl->sq_queue_id = qid + SPFC_SQ_QID_START_PER_QPC;
+ sq_ctrl->cache_id = FC_CALC_CID(prnt_ctx->xid);
+ sq_ctrl->sqn = global_sq_index;
+ sq_ctrl->max_sqe_num = hba->exi_count;
+ /* Reduce one Link Wqe */
+ sq_ctrl->wqe_num_per_buf = hba->sq_wpg_pool.wqe_per_wpg - 1;
+ sq_ctrl->wqe_size = SPFC_SQE_SIZE;
+ sq_ctrl->wqe_offset = 0;
+ sq_ctrl->head_start_cmsn = 0;
+ sq_ctrl->head_end_cmsn = SPFC_GET_WP_END_CMSN(0, sq_ctrl->wqe_num_per_buf);
+ sq_ctrl->last_cmsn = 0;
+ /* Linked List SQ Owner Bit 1 valid,0 invalid */
+ sq_ctrl->last_pi_owner = 1;
+ atomic_set(&sq_ctrl->sq_valid, true);
+ sq_ctrl->accum_wqe_cnt = 0;
+ sq_ctrl->service_type = SPFC_SERVICE_TYPE_FC_SQ;
+ sq_ctrl->queue_style = (global_sq_index == SPFC_DIRECTWQE_SQ_INDEX) ?
+ SPFC_QUEUE_RING_STYLE : SPFC_QUEUE_LINK_STYLE;
+ INIT_LIST_HEAD(&sq_ctrl->list_linked_list_sq);
+ atomic_set(&sq_ctrl->wqe_page_cnt, 0);
+ atomic_set(&sq_ctrl->sq_db_cnt, 0);
+ atomic_set(&sq_ctrl->sqe_minus_cqe_cnt, 1);
+ atomic_set(&sq_ctrl->sq_wqe_cnt, 0);
+ atomic_set(&sq_ctrl->sq_cqe_cnt, 0);
+ spin_lock_init(&sq_ctrl->parent_sq_enqueue_lock);
+ memset(sq_ctrl->io_stat, 0, sizeof(sq_ctrl->io_stat));
+
+			/* Allocate and initialize the queue header space.
+			 * 64B alignment is required; an extra 64B is allocated
+			 * to allow for alignment.
+			 */
+ queue_header_alloc_size = sizeof(struct spfc_queue_header) +
+ SPFC_SQ_HEADER_ADDR_ALIGN_SIZE;
+ sq_ctrl->queue_head_original = kmalloc(queue_header_alloc_size, GFP_ATOMIC);
+ if (!sq_ctrl->queue_head_original) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SQ(0x%x) create SQ queue header failed",
+ global_sq_index);
+ goto ssq_qheader_create_fail;
+ }
+
+ memset((u8 *)sq_ctrl->queue_head_original, 0, queue_header_alloc_size);
+
+ sq_ctrl->queue_hdr_phy_addr_original =
+ pci_map_single(hba->pci_dev, sq_ctrl->queue_head_original,
+ queue_header_alloc_size, DMA_BIDIRECTIONAL);
+ origin_addr = sq_ctrl->queue_hdr_phy_addr_original;
+ if (pci_dma_mapping_error(hba->pci_dev, origin_addr)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SQ(0x%x) SQ queue header DMA mapping failed",
+ global_sq_index);
+ goto ssq_qheader_dma_map_fail;
+ }
+
+ /* Obtains the 64B alignment address */
+ sq_ctrl->queue_header = (struct spfc_queue_header *)(uintptr_t)
+ SPFC_ADDR_64_ALIGN((u64)((uintptr_t)(sq_ctrl->queue_head_original)));
+ sq_ctrl->queue_hdr_phy_addr = SPFC_ADDR_64_ALIGN(origin_addr);
+
+ /* Each SQ is allocated with a Wqe Page by default. The
+ * WqePageCnt is incremented by one
+ */
+ head_wpg = spfc_add_one_wqe_page(sq_ctrl);
+ if (!head_wpg) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]SQ(0x%x) create SQ first wqe page failed",
+ global_sq_index);
+ goto ssq_headwpg_create_fail;
+ }
+
+ atomic_inc(&sq_ctrl->wqe_page_cnt);
+ spfc_init_sq_prnt_ctxt_sq_qinfo(sq_info, sq_ctrl);
+ global_sq_index++;
+ if (global_sq_index == SPFC_MAX_SSQ_NUM)
+ break;
+ }
+ psq_pretchinfo = &prnt_ctx_info.sq_pretchinfo;
+ psq_pretchinfo->hw_scqc_config.info.rq_th2_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.rq_th1_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.rq_th0_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.rq_min_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.sq_th2_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.sq_th1_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.sq_th0_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.sq_min_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.scq_n = (u64)0;
+ psq_pretchinfo->hw_scqc_config.info.parity = 0;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] = psq_pretchinfo->hw_scqc_config.pctxt_val1;
+ psq_pretchinfo->hw_scqc_config.info.parity =
+ spfc_get_parity_value(queue_bus.bus, SPFC_HW_SCQC_BUS_ROW,
+ SPFC_HW_SCQC_BUS_COL);
+ spfc_cpu_to_big64(psq_pretchinfo, sizeof(struct spfc_scq_qinfo));
+ ret = spfc_creat_ssqc_via_cmdq_sync(hba, &prnt_ctx_info,
+ prnt_ctx->xid, prnt_ctx->paddr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]SQ(0x%x) create ssqc failed.",
+ global_sq_index);
+ goto ssq_cmdqsync_fail;
+ }
+ }
+
+ return RETURN_OK;
+
+ssq_headwpg_create_fail:
+ pci_unmap_single(hba->pci_dev, sq_ctrl->queue_hdr_phy_addr_original,
+ queue_header_alloc_size, DMA_BIDIRECTIONAL);
+
+ssq_qheader_dma_map_fail:
+ kfree(sq_ctrl->queue_head_original);
+ sq_ctrl->queue_head_original = NULL;
+
+ssq_qheader_create_fail:
+ cqm3_object_delete(&prnt_ctx->object);
+ ssq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
+ if (qid > 0) {
+ while (qid--) {
+ ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index - qid];
+ ssq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
+ }
+ }
+
+ssq_ctx_create_fail:
+ssq_cmdqsync_fail:
+ if (global_sq_index > 0)
+ spfc_free_ssq(hba, global_sq_index);
+
+ return UNF_RETURN_ERROR;
+}
+
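+/*
+ * Create the SCQs via CQM: even-numbered queues carry commands, odd-numbered
+ * queues carry status, and the last one is kept as the default SCQ used when
+ * clearing the buffer. Each SCQ gets its interrupt, queue header and
+ * hardware context.
+ */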
+static u32 spfc_create_scq(struct spfc_hba_info *hba)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 scq_index = 0;
+ u32 scq_cfg_num = 0;
+ struct cqm_queue *cqm_scq = NULL;
+ void *handle = NULL;
+ struct spfc_scq_info *scq_info = NULL;
+ struct spfc_cq_qinfo cq_qinfo;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ handle = hba->dev_handle;
+ /* Create SCQ by CQM interface */
+ for (scq_index = 0; scq_index < SPFC_TOTAL_SCQ_NUM; scq_index++) {
+		/*
+		 * 1. Create/Allocate SCQ
+		 *
+		 * Notice: SCQ[0, 2, 4 ...]--->CMD SCQ,
+		 * SCQ[1, 3, 5 ...]--->STS SCQ,
+		 * SCQ[SPFC_TOTAL_SCQ_NUM-1]--->Default SCQ
+		 */
+ cqm_scq = cqm3_object_nonrdma_queue_create(handle, SERVICE_T_FC,
+ CQM_OBJECT_NONRDMA_SCQ,
+ SPFC_SCQ_IS_STS(scq_index) ?
+ SPFC_STS_SCQ_DEPTH :
+ SPFC_CMD_SCQ_DEPTH,
+ SPFC_SCQE_SIZE, hba);
+ if (!cqm_scq) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_WARN, "[err]Create scq failed");
+
+ goto free_scq;
+ }
+
+ /* 2. Initialize SCQ (info) */
+ spfc_init_scq_info(hba, cqm_scq, scq_index, &scq_info);
+
+ /* 3. Allocate & Initialize SCQ interrupt */
+ ret = spfc_alloc_scq_int(scq_info);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate scq interrupt failed");
+
+ cqm3_object_delete(&cqm_scq->object);
+ memset(scq_info, 0, sizeof(struct spfc_scq_info));
+ goto free_scq;
+ }
+
+ /* 4. Initialize SCQ queue header */
+ spfc_init_scq_header((struct wq_header *)(void *)cqm_scq->q_header_vaddr);
+
+ /* 5. Initialize & Create SCQ CTX */
+ memset(&cq_qinfo, 0, sizeof(cq_qinfo));
+ spfc_cfg_scq_ctx(scq_info, &cq_qinfo);
+ ret = spfc_creat_scqc_via_cmdq_sync(hba, &cq_qinfo, scq_info->scqn);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Create scq context failed");
+
+ cqm3_object_delete(&cqm_scq->object);
+ memset(scq_info, 0, sizeof(struct spfc_scq_info));
+ goto free_scq;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Create SCQ[%u] Scqn=%u WqeNum=%u WqeSize=%u WqePerBuf=%u CqDepth=%u CiType=%u irq=%u msix=%u",
+ scq_info->queue_id, scq_info->scqn,
+ scq_info->valid_wqe_num, scq_info->wqe_size,
+ scq_info->wqe_num_per_buf, scq_info->scqc_cq_depth,
+ scq_info->scqc_ci_type, scq_info->irq_id,
+ scq_info->msix_entry_idx);
+ }
+
+ /* Last SCQ is used to handle SCQE delivery access when clearing buffer
+ */
+ hba->default_scqn = scq_info->scqn;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Default Scqn=%u CqmScqIndex=%u", hba->default_scqn,
+ cqm_scq->index);
+
+ return RETURN_OK;
+
+free_scq:
+ spfc_flush_scq_ctx(hba);
+
+ scq_cfg_num = scq_index;
+ for (scq_index = 0; scq_index < scq_cfg_num; scq_index++) {
+ scq_info = &hba->scq_info[scq_index];
+ spfc_free_scq_int(scq_info);
+ cqm_scq = scq_info->cqm_scq_info;
+ cqm3_object_delete(&cqm_scq->object);
+ memset(scq_info, 0, sizeof(struct spfc_scq_info));
+ }
+
+ return UNF_RETURN_ERROR;
+}
+
+static void spfc_destroy_scq(struct spfc_hba_info *hba)
+{
+ u32 scq_index = 0;
+ struct cqm_queue *cqm_scq = NULL;
+ struct spfc_scq_info *scq_info = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Start destroy total %d SCQ", SPFC_TOTAL_SCQ_NUM);
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ /* Use CQM to delete SCQ */
+ for (scq_index = 0; scq_index < SPFC_TOTAL_SCQ_NUM; scq_index++) {
+ scq_info = &hba->scq_info[scq_index];
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ALL,
+ "[info]Destroy SCQ%u, Scqn=%u, Irq=%u, msix=%u, name=%s",
+ scq_index, scq_info->scqn, scq_info->irq_id,
+ scq_info->msix_entry_idx, scq_info->irq_name);
+
+ spfc_free_scq_int(scq_info);
+ cqm_scq = scq_info->cqm_scq_info;
+ cqm3_object_delete(&cqm_scq->object);
+ memset(scq_info, 0, sizeof(struct spfc_scq_info));
+ }
+}
+
+static void spfc_init_srq_info(struct spfc_hba_info *hba, struct cqm_queue *cqm_srq,
+ struct spfc_srq_info *srq_info)
+{
+ FC_CHECK_RETURN_VOID(hba);
+ FC_CHECK_RETURN_VOID(cqm_srq);
+ FC_CHECK_RETURN_VOID(srq_info);
+
+ srq_info->hba = (void *)hba;
+
+ srq_info->cqm_srq_info = cqm_srq;
+ srq_info->wqe_num_per_buf = cqm_srq->q_room_buf_1.buf_size / SPFC_SRQE_SIZE - 1;
+ srq_info->wqe_size = SPFC_SRQE_SIZE;
+ srq_info->valid_wqe_num = cqm_srq->valid_wqe_num;
+ srq_info->pi = 0;
+ srq_info->pi_owner = SPFC_SRQ_INIT_LOOP_O;
+ srq_info->pmsn = 0;
+ srq_info->srqn = cqm_srq->index;
+ srq_info->first_rqe_recv_dma = 0;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Init srq info(srq index 0x%x) valid wqe num 0x%x, buffer size 0x%x, wqe num per buf 0x%x",
+ cqm_srq->index, srq_info->valid_wqe_num,
+ cqm_srq->q_room_buf_1.buf_size, srq_info->wqe_num_per_buf);
+}
+
+static void spfc_init_srq_header(struct wq_header *queue_header)
+{
+ FC_CHECK_RETURN_VOID(queue_header);
+
+ memset(queue_header, 0, sizeof(struct wq_header));
+}
+
+/*
+ * Function Name       : spfc_get_srq_entry
+ * Function Description: Obtain the RQE in the SRQ via the PI.
+ * Input Parameters    : *srq_info, **linked_rqe, position
+ * Output Parameters   : N/A
+ * Return Type         : struct spfc_rqe *
+ */
+static struct spfc_rqe *spfc_get_srq_entry(struct spfc_srq_info *srq_info,
+ struct spfc_rqe **linked_rqe, u16 position)
+{
+ u32 buf_id = 0;
+ u32 wqe_num_per_buf = 0;
+ u16 buf_offset = 0;
+ struct cqm_buf_list *buf = NULL;
+
+ FC_CHECK_RETURN_VALUE(srq_info, NULL);
+
+ wqe_num_per_buf = srq_info->wqe_num_per_buf;
+
+ buf_id = position / wqe_num_per_buf;
+ buf = &srq_info->cqm_srq_info->q_room_buf_1.buf_list[buf_id];
+ buf_offset = position % ((u16)wqe_num_per_buf);
+
+ if (buf_offset + 1 == wqe_num_per_buf)
+ *linked_rqe = (struct spfc_rqe *)(buf->va) + wqe_num_per_buf;
+ else
+ *linked_rqe = NULL;
+
+ return (struct spfc_rqe *)(buf->va) + buf_offset;
+}
+
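+/*
+ * Post one receive buffer to the ELS SRQ: build the RQE in a local copy,
+ * convert it to big-endian, publish the owner bit last, then advance PI/PMSN
+ * and update the PMSN doorbell record in the queue header.
+ */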
+void spfc_post_els_srq_wqe(struct spfc_srq_info *srq_info, u16 buf_id)
+{
+ struct spfc_rqe *rqe = NULL;
+ struct spfc_rqe tmp_rqe;
+ struct spfc_rqe *linked_rqe = NULL;
+ struct wq_header *wq_header = NULL;
+ struct spfc_drq_buff_entry *buff_entry = NULL;
+
+ FC_CHECK_RETURN_VOID(srq_info);
+ FC_CHECK_RETURN_VOID(buf_id < srq_info->valid_wqe_num);
+
+ buff_entry = srq_info->els_buff_entry_head + buf_id;
+
+ spin_lock(&srq_info->srq_spin_lock);
+
+	/* Obtain an RQE, excluding the link WQE */
+ rqe = spfc_get_srq_entry(srq_info, &linked_rqe, srq_info->pi);
+ if (!rqe) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]post els srq,get srqe failed, valid wqe num 0x%x, pi 0x%x, pmsn 0x%x",
+ srq_info->valid_wqe_num, srq_info->pi,
+ srq_info->pmsn);
+
+ spin_unlock(&srq_info->srq_spin_lock);
+ return;
+ }
+
+ /* Initialize RQE */
+ /* cs section is not used */
+ memset(&tmp_rqe, 0, sizeof(struct spfc_rqe));
+
+	/* The O-bit defaults to invalid and is set valid at the end */
+ spfc_build_srq_wqe_ctrls(&tmp_rqe, !srq_info->pi_owner, srq_info->pmsn + 1);
+
+ tmp_rqe.bds_sl.buf_addr_hi = SPFC_HIGH_32_BITS(buff_entry->buff_dma);
+ tmp_rqe.bds_sl.buf_addr_lo = SPFC_LOW_32_BITS(buff_entry->buff_dma);
+ tmp_rqe.drv_sl.wd0.user_id = buf_id;
+
+ /* convert to big endian */
+ spfc_cpu_to_big32(&tmp_rqe, sizeof(struct spfc_rqe));
+
+ memcpy(rqe, &tmp_rqe, sizeof(struct spfc_rqe));
+
+ /* reset Obit */
+ spfc_set_srq_wqe_owner_be((struct spfc_wqe_ctrl *)(void *)(&rqe->ctrl_sl),
+ srq_info->pi_owner);
+
+ if (linked_rqe) {
+ /* Update Obit in linked WQE */
+ spfc_set_srq_link_wqe_owner_be((struct spfc_linkwqe *)(void *)linked_rqe,
+ srq_info->pi_owner, srq_info->pmsn + 1);
+ }
+
+ /* Update PI and PMSN */
+ spfc_update_producer_info((u16)(srq_info->valid_wqe_num),
+ &srq_info->pi, &srq_info->pi_owner);
+
+	/* pmsn is 16 bits wide; it wraps automatically after reaching
+	 * its maximum value
+	 */
+ srq_info->pmsn++;
+
+ /* Update pmsn in queue header */
+ wq_header = (struct wq_header *)(void *)srq_info->cqm_srq_info->q_header_vaddr;
+ spfc_update_srq_header(&wq_header->db_record, srq_info->pmsn);
+
+ spin_unlock(&srq_info->srq_spin_lock);
+}
+
+/*
+ *Function Name : spfc_cfg_srq_ctx
+ *Function Description: Initialize the CTX of the SRQ that receives the
+ * immediate data. The RQE-related fields of the CTX need
+ * to be updated again when the RQE is filled.
+ *Input Parameters : *srq_info,
+ * *srq_ctx,
+ * sge_size,
+ * rqe_gpa
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_cfg_srq_ctx(struct spfc_srq_info *srq_info,
+ struct spfc_srq_ctx *ctx, u32 sge_size,
+ u64 rqe_gpa)
+{
+ struct spfc_srq_ctx *srq_ctx = NULL;
+ struct cqm_queue *cqm_srq_info = NULL;
+ struct spfc_queue_info_bus queue_bus;
+
+ FC_CHECK_RETURN_VOID(srq_info);
+ FC_CHECK_RETURN_VOID(ctx);
+
+ cqm_srq_info = srq_info->cqm_srq_info;
+ srq_ctx = ctx;
+ srq_ctx->last_rq_pmsn = 0;
+ srq_ctx->cur_rqe_msn = 0;
+ srq_ctx->pcie_template = 0;
+	/* The value of CTX needs to be updated
+	 * when RQE is configured
+	 */
+ srq_ctx->cur_rqe_gpa = rqe_gpa;
+ srq_ctx->cur_sge_v = 0;
+ srq_ctx->cur_sge_l = 0;
+	/* The information received by the SRQ is reported through the
+	 * SCQ. The interrupt and ArmCQ are disabled.
+	 */
+ srq_ctx->int_mode = 0;
+ srq_ctx->ceqn_msix = 0;
+ srq_ctx->cur_sge_remain_len = 0;
+ srq_ctx->cur_sge_id = 0;
+ srq_ctx->consant_sge_len = sge_size;
+ srq_ctx->cur_wqe_o = 0;
+ srq_ctx->pmsn_type = SPFC_PMSN_CI_TYPE_FROM_HOST;
+ srq_ctx->bdsl = 0;
+ srq_ctx->cr = 0;
+ srq_ctx->csl = 0;
+ srq_ctx->cf = 0;
+ srq_ctx->ctrl_sl = 0;
+ srq_ctx->cur_sge_gpa = 0;
+ srq_ctx->cur_pmsn_gpa = cqm_srq_info->q_header_paddr;
+ srq_ctx->prefetch_max_masn = 0;
+ srq_ctx->cqe_max_cnt = 0;
+ srq_ctx->cur_cqe_cnt = 0;
+ srq_ctx->arm_q = 0;
+ srq_ctx->cq_so_ro = 0;
+ srq_ctx->cqe_dma_attr_idx = 0;
+ srq_ctx->rq_so_ro = 0;
+ srq_ctx->rqe_dma_attr_idx = 0;
+ srq_ctx->loop_o = SPFC_SRQ_INIT_LOOP_O;
+ srq_ctx->ring = SPFC_QUEUE_RING;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] |= ((u64)(cqm_srq_info->q_ctx_paddr >> UNF_SHIFT_4));
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(srq_ctx->rqe_dma_attr_idx &
+ SPFC_SRQ_CTX_rqe_dma_attr_idx_MASK))
+ << UNF_SHIFT_60); /* bits 4 */
+
+ queue_bus.bus[ARRAY_INDEX_1] |= ((u64)(srq_ctx->rqe_dma_attr_idx >> UNF_SHIFT_4));
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(srq_ctx->rq_so_ro)) << UNF_SHIFT_2); /* bits 2 */
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(srq_ctx->cur_pmsn_gpa >> UNF_SHIFT_4))
+ << UNF_SHIFT_4); /* bits 60 */
+
+ queue_bus.bus[ARRAY_INDEX_2] |= ((u64)(srq_ctx->consant_sge_len)); /* bits 17 */
+ queue_bus.bus[ARRAY_INDEX_2] |= (((u64)(srq_ctx->pcie_template)) << UNF_SHIFT_17);
+
+ srq_ctx->parity = spfc_get_parity_value((void *)queue_bus.bus, SPFC_SRQC_BUS_ROW,
+ SPFC_SRQC_BUS_COL);
+
+ spfc_cpu_to_big64((void *)srq_ctx, sizeof(struct spfc_srq_ctx));
+}
+
+static u32 spfc_creat_srqc_via_cmdq_sync(struct spfc_hba_info *hba,
+ struct spfc_srq_ctx *srqc,
+ u64 ctx_gpa)
+{
+#define SPFC_INIT_SRQC_TIMEOUT 3000
+
+ int ret;
+ u32 covrt_size;
+ struct spfc_cmdqe_creat_srqc init_srq_cmd;
+ struct sphw_cmd_buf *cmdq_in_buf;
+
+ cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmdq_in_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf alloc failed");
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SRQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(&init_srq_cmd, 0, sizeof(init_srq_cmd));
+ init_srq_cmd.wd0.task_type = SPFC_TASK_T_INIT_SRQC;
+ init_srq_cmd.srqc_gpa_h = SPFC_HIGH_32_BITS(ctx_gpa);
+ init_srq_cmd.srqc_gpa_l = SPFC_LOW_32_BITS(ctx_gpa);
+ covrt_size = sizeof(init_srq_cmd) - sizeof(init_srq_cmd.srqc);
+ spfc_cpu_to_big32(&init_srq_cmd, covrt_size);
+
+ /* srqc is already big-endian */
+ memcpy(init_srq_cmd.srqc, srqc, sizeof(*srqc));
+ memcpy(cmdq_in_buf->buf, &init_srq_cmd, sizeof(init_srq_cmd));
+ cmdq_in_buf->size = sizeof(init_srq_cmd);
+
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
+ cmdq_in_buf, NULL, NULL,
+ SPFC_INIT_SRQC_TIMEOUT, SPHW_CHANNEL_FC);
+
+ sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
+
+ if (ret) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+			     "[err]Send create srqc via cmdq failed, ret=%d",
+ ret);
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SRQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ SPFC_IO_STAT(hba, SPFC_TASK_T_INIT_SRQC);
+
+ return RETURN_OK;
+}
+
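+/*
+ *Function Name : spfc_init_els_srq_wqe
+ *Function Description: Pre-post RQEs for all buffer entries except the last
+ * one after the ELS SRQ has been created.
+ *Input Parameters : *srq_info
+ *Output Parameters : N/A
+ *Return Type : void
+ */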
+static void spfc_init_els_srq_wqe(struct spfc_srq_info *srq_info)
+{
+ u32 rqe_index = 0;
+ struct spfc_drq_buff_entry *buf_entry = NULL;
+
+ FC_CHECK_RETURN_VOID(srq_info);
+
+ for (rqe_index = 0; rqe_index < srq_info->valid_wqe_num - 1; rqe_index++) {
+ buf_entry = srq_info->els_buff_entry_head + rqe_index;
+ spfc_post_els_srq_wqe(srq_info, buf_entry->buff_id);
+ }
+}
+
+static void spfc_free_els_srq_buff(struct spfc_hba_info *hba, u32 srq_valid_wqe)
+{
+ u32 buff_index = 0;
+ struct spfc_srq_info *srq_info = NULL;
+ struct spfc_drq_buff_entry *buff_entry = NULL;
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ srq_info = &hba->els_srq_info;
+
+ if (!srq_info->els_buff_entry_head)
+ return;
+
+ for (buff_index = 0; buff_index < srq_valid_wqe; buff_index++) {
+ buff_entry = &srq_info->els_buff_entry_head[buff_index];
+ buff_entry->buff_addr = NULL;
+ }
+
+ if (srq_info->buf_list.buflist) {
+ for (buff_index = 0; buff_index < srq_info->buf_list.buf_num;
+ buff_index++) {
+ if (srq_info->buf_list.buflist[buff_index].paddr != 0) {
+ pci_unmap_single(hba->pci_dev,
+ srq_info->buf_list.buflist[buff_index].paddr,
+ srq_info->buf_list.buf_size,
+ DMA_FROM_DEVICE);
+ srq_info->buf_list.buflist[buff_index].paddr = 0;
+ }
+ kfree(srq_info->buf_list.buflist[buff_index].vaddr);
+ srq_info->buf_list.buflist[buff_index].vaddr = NULL;
+ }
+
+ kfree(srq_info->buf_list.buflist);
+ srq_info->buf_list.buflist = NULL;
+ }
+
+ kfree(srq_info->els_buff_entry_head);
+ srq_info->els_buff_entry_head = NULL;
+}
+
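+/*
+ *Function Name : spfc_alloc_els_srq_buff
+ *Function Description: Allocate the ELS SRQ buffer entry array and the large
+ * DMA-mapped buffers, then carve each large buffer into
+ * SPFC_SRQ_ELS_SGE_LEN sized slices, one per entry.
+ *Input Parameters : *hba,
+ * srq_valid_wqe
+ *Output Parameters : N/A
+ *Return Type : u32
+ */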
+static u32 spfc_alloc_els_srq_buff(struct spfc_hba_info *hba, u32 srq_valid_wqe)
+{
+ u32 req_buff_size = 0;
+ u32 buff_index = 0;
+ struct spfc_srq_info *srq_info = NULL;
+ struct spfc_drq_buff_entry *buff_entry = NULL;
+ u32 buf_total_size;
+ u32 buf_num;
+ u32 alloc_idx;
+ u32 cur_buf_idx = 0;
+ u32 cur_buf_offset = 0;
+ u32 buf_cnt_perhugebuf;
+
+ srq_info = &hba->els_srq_info;
+
+ /* Apply for entry buffer */
+ req_buff_size = (u32)(srq_valid_wqe * sizeof(struct spfc_drq_buff_entry));
+ srq_info->els_buff_entry_head = (struct spfc_drq_buff_entry *)kmalloc(req_buff_size,
+ GFP_KERNEL);
+ if (!srq_info->els_buff_entry_head) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate ELS Srq receive buffer entries failed");
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(srq_info->els_buff_entry_head, 0, req_buff_size);
+
+ buf_total_size = SPFC_SRQ_ELS_SGE_LEN * srq_valid_wqe;
+
+ srq_info->buf_list.buf_size = buf_total_size > BUF_LIST_PAGE_SIZE
+ ? BUF_LIST_PAGE_SIZE
+ : buf_total_size;
+ buf_cnt_perhugebuf = srq_info->buf_list.buf_size / SPFC_SRQ_ELS_SGE_LEN;
+ buf_num = srq_valid_wqe % buf_cnt_perhugebuf ?
+ srq_valid_wqe / buf_cnt_perhugebuf + 1 :
+ srq_valid_wqe / buf_cnt_perhugebuf;
+ srq_info->buf_list.buflist = (struct buff_list *)kmalloc(buf_num * sizeof(struct buff_list),
+ GFP_KERNEL);
+ srq_info->buf_list.buf_num = buf_num;
+
+ if (!srq_info->buf_list.buflist) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate ELS buf list failed out of memory");
+ goto free_buff;
+ }
+ memset(srq_info->buf_list.buflist, 0, buf_num * sizeof(struct buff_list));
+
+ for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
+ srq_info->buf_list.buflist[alloc_idx].vaddr = kmalloc(srq_info->buf_list.buf_size,
+ GFP_KERNEL);
+ if (!srq_info->buf_list.buflist[alloc_idx].vaddr)
+ goto free_buff;
+
+ memset(srq_info->buf_list.buflist[alloc_idx].vaddr, 0, srq_info->buf_list.buf_size);
+
+ srq_info->buf_list.buflist[alloc_idx].paddr =
+ pci_map_single(hba->pci_dev, srq_info->buf_list.buflist[alloc_idx].vaddr,
+ srq_info->buf_list.buf_size, DMA_FROM_DEVICE);
+ if (pci_dma_mapping_error(hba->pci_dev,
+ srq_info->buf_list.buflist[alloc_idx].paddr)) {
+ srq_info->buf_list.buflist[alloc_idx].paddr = 0;
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Map els srq buffer failed");
+
+ goto free_buff;
+ }
+ }
+
+	/* Assign a slice of the receive buffer to each entry */
+ for (buff_index = 0; buff_index < srq_valid_wqe; buff_index++) {
+ buff_entry = &srq_info->els_buff_entry_head[buff_index];
+ cur_buf_idx = buff_index / buf_cnt_perhugebuf;
+ cur_buf_offset = SPFC_SRQ_ELS_SGE_LEN * (buff_index % buf_cnt_perhugebuf);
+ buff_entry->buff_addr = srq_info->buf_list.buflist[cur_buf_idx].vaddr +
+ cur_buf_offset;
+ buff_entry->buff_dma = srq_info->buf_list.buflist[cur_buf_idx].paddr +
+ cur_buf_offset;
+ buff_entry->buff_id = (u16)buff_index;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[EVENT]Allocate bufnum:%u,buf_total_size:%u", buf_num,
+ buf_total_size);
+
+ return RETURN_OK;
+
+free_buff:
+ spfc_free_els_srq_buff(hba, srq_valid_wqe);
+ return UNF_RETURN_ERROR;
+}
+
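+/*
+ *Function Name : spfc_send_clear_srq_cmd
+ *Function Description: Build a CLEAR_SRQ command, start the delayed retry
+ * work, and enqueue the command on the root CMDQ.
+ *Input Parameters : *hba,
+ * *srq_info
+ *Output Parameters : N/A
+ *Return Type : void
+ */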
+void spfc_send_clear_srq_cmd(struct spfc_hba_info *hba,
+ struct spfc_srq_info *srq_info)
+{
+ union spfc_cmdqe cmdqe;
+ struct cqm_queue *cqm_fcp_srq = NULL;
+ ulong flag = 0;
+
+ memset(&cmdqe, 0, sizeof(union spfc_cmdqe));
+
+ spin_lock_irqsave(&srq_info->srq_spin_lock, flag);
+ cqm_fcp_srq = srq_info->cqm_srq_info;
+ if (!cqm_fcp_srq) {
+ srq_info->state = SPFC_CLEAN_DONE;
+ spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
+ return;
+ }
+
+ cmdqe.clear_srq.wd0.task_type = SPFC_TASK_T_CLEAR_SRQ;
+ cmdqe.clear_srq.wd1.scqn = SPFC_LSW(hba->default_scqn);
+ cmdqe.clear_srq.wd1.srq_type = srq_info->srq_type;
+ cmdqe.clear_srq.srqc_gpa_h = SPFC_HIGH_32_BITS(cqm_fcp_srq->q_ctx_paddr);
+ cmdqe.clear_srq.srqc_gpa_l = SPFC_LOW_32_BITS(cqm_fcp_srq->q_ctx_paddr);
+
+ (void)queue_delayed_work(hba->work_queue, &srq_info->del_work,
+ (ulong)msecs_to_jiffies(SPFC_SRQ_DEL_STAGE_TIMEOUT_MS));
+ spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port 0x%x begin to clear srq 0x%x(0x%x,0x%llx)",
+ hba->port_cfg.port_id, srq_info->srq_type,
+ SPFC_LSW(hba->default_scqn),
+ (u64)cqm_fcp_srq->q_ctx_paddr);
+
+ /* Run the ROOT CMDQ command to issue the clear srq command. If the
+ * command fails to be delivered, retry upon timeout.
+ */
+ (void)spfc_root_cmdq_enqueue(hba, &cmdqe, sizeof(cmdqe.clear_srq));
+}
+
+/*
+ *Function Name : spfc_srq_clr_timeout
+ *Function Description: Retry the clear SRQ command when the previous clear
+ * operation times out.
+ *Input Parameters : *work
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_srq_clr_timeout(struct work_struct *work)
+{
+#define SPFC_MAX_DEL_SRQ_RETRY_TIMES 2
+ struct spfc_srq_info *srq = NULL;
+ struct spfc_hba_info *hba = NULL;
+ struct cqm_queue *cqm_fcp_imm_srq = NULL;
+ ulong flag = 0;
+
+ srq = container_of(work, struct spfc_srq_info, del_work.work);
+
+ spin_lock_irqsave(&srq->srq_spin_lock, flag);
+ hba = srq->hba;
+ cqm_fcp_imm_srq = srq->cqm_srq_info;
+ spin_unlock_irqrestore(&srq->srq_spin_lock, flag);
+
+ if (hba && cqm_fcp_imm_srq) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port 0x%x clear srq 0x%x stat 0x%x timeout",
+ hba->port_cfg.port_id, srq->srq_type, srq->state);
+
+ /* If the delivery fails or the execution times out after the
+ * delivery, try again once
+ */
+ srq->del_retry_time++;
+ if (srq->del_retry_time < SPFC_MAX_DEL_SRQ_RETRY_TIMES)
+ spfc_send_clear_srq_cmd(hba, srq);
+ else
+ srq->del_retry_time = 0;
+ }
+}
+
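+/*
+ *Function Name : spfc_create_els_srq
+ *Function Description: Create the ELS SRQ: create the CQM SRQ object,
+ * initialize the SRQ info, lock and queue header,
+ * allocate the receive buffers, pre-post the RQEs, and
+ * deliver the SRQ context to the chip via the command
+ * queue.
+ *Input Parameters : *hba
+ *Output Parameters : N/A
+ *Return Type : u32
+ */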
+static u32 spfc_create_els_srq(struct spfc_hba_info *hba)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct cqm_queue *cqm_srq = NULL;
+ struct wq_header *wq_header = NULL;
+ struct spfc_srq_info *srq_info = NULL;
+ struct spfc_srq_ctx srq_ctx = {0};
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ cqm_srq = cqm3_object_fc_srq_create(hba->dev_handle, SERVICE_T_FC,
+ CQM_OBJECT_NONRDMA_SRQ, SPFC_SRQ_ELS_DATA_DEPTH,
+ SPFC_SRQE_SIZE, hba);
+ if (!cqm_srq) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Create Els Srq failed");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Initialize SRQ */
+ srq_info = &hba->els_srq_info;
+ spfc_init_srq_info(hba, cqm_srq, srq_info);
+ srq_info->srq_type = SPFC_SRQ_ELS;
+ srq_info->enable = true;
+ srq_info->state = SPFC_CLEAN_DONE;
+ srq_info->del_retry_time = 0;
+
+	/* Initialize the srq lock; the SRQ can be created repeatedly */
+ spin_lock_init(&srq_info->srq_spin_lock);
+ srq_info->spin_lock_init = true;
+
+ /* Initialize queue header */
+ wq_header = (struct wq_header *)(void *)cqm_srq->q_header_vaddr;
+ spfc_init_srq_header(wq_header);
+ INIT_DELAYED_WORK(&srq_info->del_work, spfc_srq_clr_timeout);
+
+ /* Apply for RQ buffer */
+ ret = spfc_alloc_els_srq_buff(hba, srq_info->valid_wqe_num);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate Els Srq buffer failed");
+
+ cqm3_object_delete(&cqm_srq->object);
+ memset(srq_info, 0, sizeof(struct spfc_srq_info));
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Fill RQE, update queue header */
+ spfc_init_els_srq_wqe(srq_info);
+
+ /* Fill SRQ CTX */
+ memset(&srq_ctx, 0, sizeof(srq_ctx));
+ spfc_cfg_srq_ctx(srq_info, &srq_ctx, SPFC_SRQ_ELS_SGE_LEN,
+ srq_info->cqm_srq_info->q_room_buf_1.buf_list->pa);
+
+ ret = spfc_creat_srqc_via_cmdq_sync(hba, &srq_ctx, srq_info->cqm_srq_info->q_ctx_paddr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]Create Els Srqc failed");
+
+ spfc_free_els_srq_buff(hba, srq_info->valid_wqe_num);
+ cqm3_object_delete(&cqm_srq->object);
+ memset(srq_info, 0, sizeof(struct spfc_srq_info));
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+void spfc_wq_destroy_els_srq(struct work_struct *work)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+ hba =
+ container_of(work, struct spfc_hba_info, els_srq_clear_work);
+ spfc_destroy_els_srq(hba);
+}
+
+void spfc_destroy_els_srq(void *handle)
+{
+	/*
+	 * After the clear ELS SRQ status is received, destroy the ELS SRQ
+	 */
+ struct spfc_srq_info *srq_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(handle);
+
+ hba = (struct spfc_hba_info *)handle;
+ srq_info = &hba->els_srq_info;
+
+ /* release receive buffer */
+ spfc_free_els_srq_buff(hba, srq_info->valid_wqe_num);
+
+ /* release srq info */
+ if (srq_info->cqm_srq_info) {
+ cqm3_object_delete(&srq_info->cqm_srq_info->object);
+ srq_info->cqm_srq_info = NULL;
+ }
+ if (srq_info->spin_lock_init)
+ srq_info->spin_lock_init = false;
+ srq_info->hba = NULL;
+ srq_info->enable = false;
+ srq_info->state = SPFC_CLEAN_DONE;
+}
+
+/*
+ *Function Name : spfc_create_srq
+ *Function Description: Create the SRQ, which contains four SRQs for receiving
+ * immediate data and one SRQ for receiving ELS data.
+ *Input Parameters : *hba
+ *Output Parameters : N/A
+ *Return Type : u32
+ */
+static u32 spfc_create_srq(struct spfc_hba_info *hba)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ /* Create ELS SRQ */
+ ret = spfc_create_els_srq(hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Create Els Srq failed");
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+/*
+ *Function Name : spfc_destroy_srq
+ *Function Description: Release the SRQ resources, including the SRQ for
+ * receiving the immediate data and the SRQ for
+ * receiving the ELS data.
+ *Input Parameters : *hba
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_destroy_srq(struct spfc_hba_info *hba)
+{
+ FC_CHECK_RETURN_VOID(hba);
+
+ spfc_destroy_els_srq(hba);
+}
+
+u32 spfc_create_common_share_queues(void *handle)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ hba = (struct spfc_hba_info *)handle;
+ /* Create & Init 8 pairs SCQ */
+ ret = spfc_create_scq(hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Create scq failed");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Alloc SRQ resource for SIRT & ELS */
+ ret = spfc_create_srq(hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Create srq failed");
+
+ spfc_flush_scq_ctx(hba);
+ spfc_destroy_scq(hba);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+void spfc_destroy_common_share_queues(void *hba)
+{
+ FC_CHECK_RETURN_VOID(hba);
+
+ spfc_destroy_scq((struct spfc_hba_info *)hba);
+ spfc_destroy_srq((struct spfc_hba_info *)hba);
+}
+
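+/*
+ *Function Name : spfc_map_fcp_data_cos
+ *Function Description: Select the valid FC data CoS with the fewest rports
+ * (excluding the CoS reserved for commands) and increase
+ * its rport counter.
+ *Input Parameters : *hba
+ *Output Parameters : N/A
+ *Return Type : u8
+ */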
+static u8 spfc_map_fcp_data_cos(struct spfc_hba_info *hba)
+{
+ u8 i = 0;
+ u8 min_cnt_index = SPFC_PACKET_COS_FC_DATA;
+ bool get_init_index = false;
+
+ for (i = 0; i < SPFC_MAX_COS_NUM; i++) {
+		/* The CoS must be valid for FC and must not be the CoS
+		 * occupied by the CMD
+		 */
+ if ((!(hba->cos_bitmap & ((u32)1 << i))) || i == SPFC_PACKET_COS_FC_CMD)
+ continue;
+
+ if (!get_init_index) {
+ min_cnt_index = i;
+ get_init_index = true;
+ continue;
+ }
+
+ if (atomic_read(&hba->cos_rport_cnt[i]) <
+ atomic_read(&hba->cos_rport_cnt[min_cnt_index]))
+ min_cnt_index = i;
+ }
+
+ atomic_inc(&hba->cos_rport_cnt[min_cnt_index]);
+
+ return min_cnt_index;
+}
+
+static void spfc_update_cos_rport_cnt(struct spfc_hba_info *hba, u8 cos_index)
+{
+ if (cos_index >= SPFC_MAX_COS_NUM ||
+ cos_index == SPFC_PACKET_COS_FC_CMD ||
+ (!(hba->cos_bitmap & ((u32)1 << cos_index))) ||
+ (atomic_read(&hba->cos_rport_cnt[cos_index]) == 0))
+ return;
+
+ atomic_dec(&hba->cos_rport_cnt[cos_index]);
+}
+
+void spfc_invalid_parent_sq(struct spfc_parent_sq_info *sq_info)
+{
+ sq_info->rport_index = INVALID_VALUE32;
+ sq_info->context_id = INVALID_VALUE32;
+ sq_info->sq_queue_id = INVALID_VALUE32;
+ sq_info->cache_id = INVALID_VALUE32;
+ sq_info->local_port_id = INVALID_VALUE32;
+ sq_info->remote_port_id = INVALID_VALUE32;
+ sq_info->hba = NULL;
+ sq_info->del_start_jiff = INVALID_VALUE64;
+ sq_info->port_in_flush = false;
+ sq_info->sq_in_sess_rst = false;
+ sq_info->oqid_rd = INVALID_VALUE16;
+ sq_info->oqid_wr = INVALID_VALUE16;
+ sq_info->srq_ctx_addr = 0;
+ sq_info->sqn_base = 0;
+ atomic_set(&sq_info->sq_cached, false);
+ sq_info->vport_id = 0;
+ sq_info->sirt_dif_control.protect_opcode = UNF_DIF_ACTION_NONE;
+ sq_info->need_offloaded = INVALID_VALUE8;
+ atomic_set(&sq_info->sq_valid, false);
+ atomic_set(&sq_info->flush_done_wait_cnt, 0);
+ memset(&sq_info->delay_sqe, 0, sizeof(struct spfc_delay_sqe_ctrl_info));
+ memset(sq_info->io_stat, 0, sizeof(sq_info->io_stat));
+}
+
+static void spfc_parent_sq_opreate_timeout(struct work_struct *work)
+{
+ ulong flag = 0;
+ struct spfc_parent_sq_info *parent_sq = NULL;
+ struct spfc_parent_queue_info *parent_queue = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ parent_sq = container_of(work, struct spfc_parent_sq_info, del_work.work);
+ parent_queue = container_of(parent_sq, struct spfc_parent_queue_info, parent_sq_info);
+ hba = (struct spfc_hba_info *)parent_sq->hba;
+ FC_CHECK_RETURN_VOID(hba);
+
+ spin_lock_irqsave(&parent_queue->parent_queue_state_lock, flag);
+ if (parent_queue->offload_state == SPFC_QUEUE_STATE_DESTROYING) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "Port(0x%x) sq rport index(0x%x) local nportid(0x%x),remote nportid(0x%x) reset timeout.",
+ hba->port_cfg.port_id, parent_sq->rport_index,
+ parent_sq->local_port_id,
+ parent_sq->remote_port_id);
+ }
+ spin_unlock_irqrestore(&parent_queue->parent_queue_state_lock, flag);
+}
+
+static void spfc_parent_sq_wait_flush_done_timeout(struct work_struct *work)
+{
+ ulong flag = 0;
+ struct spfc_parent_sq_info *parent_sq = NULL;
+ struct spfc_parent_queue_info *parent_queue = NULL;
+ struct spfc_hba_info *hba = NULL;
+ u32 ctx_flush_done;
+ u32 *ctx_dw = NULL;
+ int ret;
+ int sq_state = SPFC_STAT_PARENT_SQ_QUEUE_DELAYED_WORK;
+ spinlock_t *prtq_state_lock = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ parent_sq = container_of(work, struct spfc_parent_sq_info, flush_done_timeout_work.work);
+
+ FC_CHECK_RETURN_VOID(parent_sq);
+
+ parent_queue = container_of(parent_sq, struct spfc_parent_queue_info, parent_sq_info);
+ prtq_state_lock = &parent_queue->parent_queue_state_lock;
+ hba = (struct spfc_hba_info *)parent_sq->hba;
+ FC_CHECK_RETURN_VOID(hba);
+ FC_CHECK_RETURN_VOID(parent_queue);
+
+ spin_lock_irqsave(prtq_state_lock, flag);
+ if (parent_queue->offload_state != SPFC_QUEUE_STATE_DESTROYING) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) sq rport index(0x%x) is not destroying status,offloadsts is %d",
+ hba->port_cfg.port_id, parent_sq->rport_index,
+ parent_queue->offload_state);
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ return;
+ }
+
+ if (parent_queue->parent_ctx.cqm_parent_ctx_obj) {
+ ctx_dw = (u32 *)((void *)(parent_queue->parent_ctx.cqm_parent_ctx_obj->vaddr));
+ ctx_flush_done = ctx_dw[SPFC_CTXT_FLUSH_DONE_DW_POS] & SPFC_CTXT_FLUSH_DONE_MASK_BE;
+ if (ctx_flush_done == 0) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ if (atomic_read(&parent_queue->parent_sq_info.flush_done_wait_cnt) <
+ SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_CNT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[info]Port(0x%x) sq rport index(0x%x) wait flush done timeout %d times",
+ hba->port_cfg.port_id, parent_sq->rport_index,
+ atomic_read(&(parent_queue->parent_sq_info
+ .flush_done_wait_cnt)));
+
+ atomic_inc(&parent_queue->parent_sq_info.flush_done_wait_cnt);
+
+ /* Delay Free Sq info */
+ ret = queue_delayed_work(hba->work_queue,
+ &(parent_queue->parent_sq_info
+ .flush_done_timeout_work),
+ (ulong)msecs_to_jiffies((u32)
+ SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_MS));
+ if (!ret) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) rport(0x%x) queue delayed work failed ret:%d",
+ hba->port_cfg.port_id,
+ parent_sq->rport_index, ret);
+ SPFC_HBA_STAT(hba, sq_state);
+ }
+
+ return;
+ }
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) sq rport index(0x%x) has wait flush done %d times,do not free sq",
+ hba->port_cfg.port_id,
+ parent_sq->rport_index,
+ atomic_read(&(parent_queue->parent_sq_info
+ .flush_done_wait_cnt)));
+
+ SPFC_HBA_STAT(hba, SPFC_STAT_CTXT_FLUSH_DONE);
+ return;
+ }
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) sq rport index(0x%x) flush done bit is ok,free sq now",
+ hba->port_cfg.port_id, parent_sq->rport_index);
+
+ spfc_free_parent_queue_info(hba, parent_queue);
+}
+
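+/*
+ *Function Name : spfc_free_parent_sq
+ *Function Description: Cancel and free any suspended SQEs, release the data
+ * CoS counter, wait for the context flush-done bit when
+ * the queue is being destroyed, then delete the CQM
+ * parent context object and invalidate the SQ info.
+ *Input Parameters : *hba,
+ * *parq_info
+ *Output Parameters : N/A
+ *Return Type : void
+ */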
+static void spfc_free_parent_sq(struct spfc_hba_info *hba,
+ struct spfc_parent_queue_info *parq_info)
+{
+#define SPFC_WAIT_PRT_CTX_FUSH_DONE_LOOP_TIMES 100
+ u32 ctx_flush_done = 0;
+ u32 *ctx_dw = NULL;
+ struct spfc_parent_sq_info *sq_info = NULL;
+ u32 uidelaycnt = 0;
+ struct list_head *list = NULL;
+ struct spfc_suspend_sqe_info *suspend_sqe = NULL;
+ ulong flag = 0;
+
+ sq_info = &parq_info->parent_sq_info;
+
+ spin_lock_irqsave(&parq_info->parent_queue_state_lock, flag);
+ while (!list_empty(&sq_info->suspend_sqe_list)) {
+ list = UNF_OS_LIST_NEXT(&sq_info->suspend_sqe_list);
+ list_del(list);
+ suspend_sqe = list_entry(list, struct spfc_suspend_sqe_info, list_sqe_entry);
+ if (suspend_sqe) {
+ if (!cancel_delayed_work(&suspend_sqe->timeout_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[warn]reset worker timer maybe timeout");
+ }
+
+ kfree(suspend_sqe);
+ }
+ }
+ spin_unlock_irqrestore(&parq_info->parent_queue_state_lock, flag);
+
+ /* Free data cos */
+ spfc_update_cos_rport_cnt(hba, parq_info->queue_data_cos);
+
+ if (parq_info->parent_ctx.cqm_parent_ctx_obj) {
+ ctx_dw = (u32 *)((void *)(parq_info->parent_ctx.cqm_parent_ctx_obj->vaddr));
+ ctx_flush_done = ctx_dw[SPFC_CTXT_FLUSH_DONE_DW_POS] & SPFC_CTXT_FLUSH_DONE_MASK_BE;
+ mb();
+ if (parq_info->offload_state == SPFC_QUEUE_STATE_DESTROYING &&
+ ctx_flush_done == 0) {
+ do {
+ ctx_flush_done = ctx_dw[SPFC_CTXT_FLUSH_DONE_DW_POS] &
+ SPFC_CTXT_FLUSH_DONE_MASK_BE;
+ mb();
+ if (ctx_flush_done != 0)
+ break;
+ uidelaycnt++;
+ } while (uidelaycnt < SPFC_WAIT_PRT_CTX_FUSH_DONE_LOOP_TIMES);
+
+ if (ctx_flush_done == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) Rport(0x%x) flush done is not set",
+ hba->port_cfg.port_id,
+ sq_info->rport_index);
+ }
+ }
+
+ cqm3_object_delete(&parq_info->parent_ctx.cqm_parent_ctx_obj->object);
+ parq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
+ }
+
+ spfc_invalid_parent_sq(sq_info);
+}
+
+u32 spfc_alloc_parent_sq(struct spfc_hba_info *hba,
+ struct spfc_parent_queue_info *parq_info,
+ struct unf_port_info *rport_info)
+{
+ struct spfc_parent_sq_info *sq_ctrl = NULL;
+ struct cqm_qpc_mpt *prnt_ctx = NULL;
+ ulong flag = 0;
+
+	/* Create parent context via CQM */
+ prnt_ctx = cqm3_object_qpc_mpt_create(hba->dev_handle, SERVICE_T_FC,
+ CQM_OBJECT_SERVICE_CTX, SPFC_CNTX_SIZE_256B,
+ parq_info, CQM_INDEX_INVALID);
+ if (!prnt_ctx) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create parent context failed, CQM_INDEX is 0x%x",
+ CQM_INDEX_INVALID);
+ goto parent_create_fail;
+ }
+
+ parq_info->parent_ctx.cqm_parent_ctx_obj = prnt_ctx;
+ /* Initialize struct spfc_parent_sq_info */
+ sq_ctrl = &parq_info->parent_sq_info;
+ sq_ctrl->hba = (void *)hba;
+ sq_ctrl->rport_index = rport_info->rport_index;
+ sq_ctrl->sqn_base = rport_info->sqn_base;
+ sq_ctrl->context_id = prnt_ctx->xid;
+ sq_ctrl->sq_queue_id = SPFC_QID_SQ;
+ sq_ctrl->cache_id = INVALID_VALUE32;
+ sq_ctrl->local_port_id = INVALID_VALUE32;
+ sq_ctrl->remote_port_id = INVALID_VALUE32;
+ sq_ctrl->sq_in_sess_rst = false;
+ atomic_set(&sq_ctrl->sq_valid, true);
+ sq_ctrl->del_start_jiff = INVALID_VALUE64;
+ sq_ctrl->service_type = SPFC_SERVICE_TYPE_FC;
+ sq_ctrl->vport_id = (u8)rport_info->qos_level;
+ sq_ctrl->cs_ctrl = (u8)rport_info->cs_ctrl;
+ sq_ctrl->sirt_dif_control.protect_opcode = UNF_DIF_ACTION_NONE;
+ sq_ctrl->need_offloaded = INVALID_VALUE8;
+ atomic_set(&sq_ctrl->flush_done_wait_cnt, 0);
+
+ /* Check whether the HBA is in the Linkdown state. Note that
+ * offload_state must be in the non-FREE state.
+ */
+ spin_lock_irqsave(&hba->flush_state_lock, flag);
+ sq_ctrl->port_in_flush = hba->in_flushing;
+ spin_unlock_irqrestore(&hba->flush_state_lock, flag);
+ memset(sq_ctrl->io_stat, 0, sizeof(sq_ctrl->io_stat));
+
+ INIT_DELAYED_WORK(&sq_ctrl->del_work, spfc_parent_sq_opreate_timeout);
+ INIT_DELAYED_WORK(&sq_ctrl->flush_done_timeout_work,
+ spfc_parent_sq_wait_flush_done_timeout);
+ INIT_LIST_HEAD(&sq_ctrl->suspend_sqe_list);
+
+ memset(&sq_ctrl->delay_sqe, 0, sizeof(struct spfc_delay_sqe_ctrl_info));
+
+ return RETURN_OK;
+
+parent_create_fail:
+ parq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
+
+ return UNF_RETURN_ERROR;
+}
+
+static void
+spfc_init_prnt_ctxt_scq_qinfo(void *hba,
+ struct spfc_parent_queue_info *prnt_qinfo)
+{
+ u32 resp_scqn = 0;
+ struct spfc_parent_context *ctx = NULL;
+ struct spfc_scq_qinfo *resp_prnt_scq_ctxt = NULL;
+ struct spfc_queue_info_bus queue_bus;
+
+ /* Obtains the queue id of the scq returned by the CQM when the SCQ is
+ * created
+ */
+ resp_scqn = prnt_qinfo->parent_sts_scq_info.cqm_queue_id;
+
+ /* Obtains the Parent Context address */
+ ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
+
+ resp_prnt_scq_ctxt = &ctx->resp_scq_qinfo;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.rq_th2_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.rq_th1_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.rq_th0_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.rq_min_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.sq_th2_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.sq_th1_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.sq_th0_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.sq_min_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.scq_n = (u64)resp_scqn;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.parity = 0;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] = resp_prnt_scq_ctxt->hw_scqc_config.pctxt_val1;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.parity = spfc_get_parity_value(queue_bus.bus,
+ SPFC_HW_SCQC_BUS_ROW,
+ SPFC_HW_SCQC_BUS_COL
+ );
+ spfc_cpu_to_big64(resp_prnt_scq_ctxt, sizeof(struct spfc_scq_qinfo));
+}
+
+static void
+spfc_init_prnt_ctxt_srq_qinfo(void *handle, struct spfc_parent_queue_info *prnt_qinfo)
+{
+ struct spfc_parent_context *ctx = NULL;
+ struct cqm_queue *cqm_els_srq = NULL;
+ struct spfc_parent_sq_info *sq = NULL;
+ struct spfc_queue_info_bus queue_bus;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ /* Obtains the SQ address */
+ sq = &prnt_qinfo->parent_sq_info;
+
+ /* Obtains the Parent Context address */
+ ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
+
+ cqm_els_srq = hba->els_srq_info.cqm_srq_info;
+
+ /* Initialize the Parent SRQ INFO used when the ELS is received */
+ ctx->els_srq_info.srqc_gpa = cqm_els_srq->q_ctx_paddr >> UNF_SHIFT_4;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] = ctx->els_srq_info.srqc_gpa;
+ ctx->els_srq_info.parity = spfc_get_parity_value(queue_bus.bus, SPFC_HW_SRQC_BUS_ROW,
+ SPFC_HW_SRQC_BUS_COL);
+ spfc_cpu_to_big64(&ctx->els_srq_info, sizeof(struct spfc_srq_qinfo));
+
+ ctx->imm_srq_info.srqc_gpa = 0;
+ sq->srq_ctx_addr = 0;
+}
+
+static u16 spfc_get_max_sequence_id(void)
+{
+ return SPFC_HRQI_SEQ_ID_MAX;
+}
+
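+/*
+ *Function Name : spfc_init_prnt_rsvd_qinfo
+ *Function Description: Initialize the reserved-queue area of the parent
+ * context: build the sequence-id bitmap (sequence id 0 is
+ * kept by the ucode for fcp-cmd, ids above the maximum
+ * are masked as unavailable) and fill last_req_seq_id and
+ * xid, all stored in big-endian.
+ *Input Parameters : *prnt_qinfo
+ *Output Parameters : N/A
+ *Return Type : void
+ */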
+static void spfc_init_prnt_rsvd_qinfo(struct spfc_parent_queue_info *prnt_qinfo)
+{
+ struct spfc_parent_context *ctx = NULL;
+ struct spfc_hw_rsvd_queue *hw_rsvd_qinfo = NULL;
+ u16 max_seq = 0;
+ u32 each = 0, seq_index = 0;
+
+ /* Obtains the Parent Context address */
+ ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
+ hw_rsvd_qinfo = (struct spfc_hw_rsvd_queue *)&ctx->hw_rsvdq;
+ memset(hw_rsvd_qinfo->seq_id_bitmap, 0, sizeof(hw_rsvd_qinfo->seq_id_bitmap));
+
+ max_seq = spfc_get_max_sequence_id();
+
+ /* special set for sequence id 0, which is always kept by ucode for
+ * sending fcp-cmd
+ */
+ hw_rsvd_qinfo->seq_id_bitmap[SPFC_HRQI_SEQ_SEPCIAL_ID] = 1;
+ seq_index = SPFC_HRQI_SEQ_SEPCIAL_ID - (max_seq >> SPFC_HRQI_SEQ_INDEX_SHIFT);
+
+ /* Set the unavailable mask to start from max + 1 */
+ for (each = (max_seq % SPFC_HRQI_SEQ_INDEX_MAX) + 1;
+ each < SPFC_HRQI_SEQ_INDEX_MAX; each++) {
+ hw_rsvd_qinfo->seq_id_bitmap[seq_index] |= ((u64)0x1) << each;
+ }
+
+ hw_rsvd_qinfo->seq_id_bitmap[seq_index] =
+ cpu_to_be64(hw_rsvd_qinfo->seq_id_bitmap[seq_index]);
+
+	/* special set for sequence id 0 */
+ if (seq_index != SPFC_HRQI_SEQ_SEPCIAL_ID)
+ hw_rsvd_qinfo->seq_id_bitmap[SPFC_HRQI_SEQ_SEPCIAL_ID] =
+ cpu_to_be64(hw_rsvd_qinfo->seq_id_bitmap[SPFC_HRQI_SEQ_SEPCIAL_ID]);
+
+ for (each = 0; each < seq_index; each++)
+ hw_rsvd_qinfo->seq_id_bitmap[each] = SPFC_HRQI_SEQ_INVALID_ID;
+
+	/* regardless of the seq id range, last_req_seq_id is fixed at the
+	 * value 0xff
+	 */
+ hw_rsvd_qinfo->wd0.last_req_seq_id = SPFC_HRQI_SEQ_ID_MAX;
+ hw_rsvd_qinfo->wd0.xid = prnt_qinfo->parent_sq_info.context_id;
+
+ *(u64 *)&hw_rsvd_qinfo->wd0 =
+ cpu_to_be64(*(u64 *)&hw_rsvd_qinfo->wd0);
+}
+
+/*
+ *Function Name : spfc_init_prnt_sw_section_info
+ *Function Description: Initialize the SW Section area that can be accessed by
+ * the Parent Context uCode.
+ *Input Parameters : *hba,
+ * *prnt_qinfo
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_init_prnt_sw_section_info(struct spfc_hba_info *hba,
+ struct spfc_parent_queue_info *prnt_qinfo)
+{
+#define SPFC_VLAN_ENABLE (1)
+#define SPFC_MB_PER_KB 1024
+ u16 rport_index;
+ struct spfc_parent_context *ctx = NULL;
+ struct spfc_sw_section *sw_setion = NULL;
+ u16 total_scq_num = SPFC_TOTAL_SCQ_NUM;
+ u32 queue_id;
+ dma_addr_t queue_hdr_paddr;
+
+ /* Obtains the Parent Context address */
+ ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
+ sw_setion = &ctx->sw_section;
+
+ /* xid+vPortId */
+ sw_setion->sw_ctxt_vport_xid.xid = prnt_qinfo->parent_sq_info.context_id;
+ spfc_cpu_to_big32(&sw_setion->sw_ctxt_vport_xid, sizeof(sw_setion->sw_ctxt_vport_xid));
+
+ /* conn_id */
+ rport_index = SPFC_LSW(prnt_qinfo->parent_sq_info.rport_index);
+ sw_setion->conn_id = cpu_to_be16(rport_index);
+
+ /* Immediate parameters */
+ sw_setion->immi_rq_page_size = 0;
+
+ /* Parent SCQ INFO used for sending packets to the Cmnd */
+ sw_setion->scq_num_rcv_cmd = cpu_to_be16((u16)prnt_qinfo->parent_cmd_scq_info.cqm_queue_id);
+ sw_setion->scq_num_max_scqn = cpu_to_be16(total_scq_num);
+
+ /* sw_ctxt_misc */
+ sw_setion->sw_ctxt_misc.dw.srv_type = prnt_qinfo->parent_sq_info.service_type;
+ sw_setion->sw_ctxt_misc.dw.port_id = hba->port_index;
+
+ /* only the VN2VF mode is supported */
+ sw_setion->sw_ctxt_misc.dw.vlan_id = 0;
+ spfc_cpu_to_big32(&sw_setion->sw_ctxt_misc.pctxt_val0,
+ sizeof(sw_setion->sw_ctxt_misc.pctxt_val0));
+
+ /* Configuring the combo length */
+ sw_setion->per_xmit_data_size = cpu_to_be32(combo_length * SPFC_MB_PER_KB);
+ sw_setion->sw_ctxt_config.dw.work_mode = SPFC_PORT_MODE_INI;
+ sw_setion->sw_ctxt_config.dw.status = FC_PARENT_STATUS_INVALID;
+ sw_setion->sw_ctxt_config.dw.cos = 0;
+ sw_setion->sw_ctxt_config.dw.oq_cos_cmd = SPFC_PACKET_COS_FC_CMD;
+ sw_setion->sw_ctxt_config.dw.oq_cos_data = prnt_qinfo->queue_data_cos;
+ sw_setion->sw_ctxt_config.dw.priority = 0;
+ sw_setion->sw_ctxt_config.dw.vlan_enable = SPFC_VLAN_ENABLE;
+ sw_setion->sw_ctxt_config.dw.sgl_num = dif_sgl_mode;
+ spfc_cpu_to_big32(&sw_setion->sw_ctxt_config.pctxt_val1,
+ sizeof(sw_setion->sw_ctxt_config.pctxt_val1));
+ spfc_cpu_to_big32(&sw_setion->immi_dif_info, sizeof(sw_setion->immi_dif_info));
+
+ queue_id = prnt_qinfo->parent_cmd_scq_info.local_queue_id;
+ queue_hdr_paddr = hba->scq_info[queue_id].cqm_scq_info->q_header_paddr;
+ sw_setion->cmd_scq_gpa_h = SPFC_HIGH_32_BITS(queue_hdr_paddr);
+ sw_setion->cmd_scq_gpa_l = SPFC_LOW_32_BITS(queue_hdr_paddr);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) RPort(0x%x) CmdLocalScqn(0x%x) QheaderGpaH(0x%x) QheaderGpaL(0x%x)",
+ hba->port_cfg.port_id, prnt_qinfo->parent_sq_info.rport_index, queue_id,
+ sw_setion->cmd_scq_gpa_h, sw_setion->cmd_scq_gpa_l);
+
+ spfc_cpu_to_big32(&sw_setion->cmd_scq_gpa_h, sizeof(sw_setion->cmd_scq_gpa_h));
+ spfc_cpu_to_big32(&sw_setion->cmd_scq_gpa_l, sizeof(sw_setion->cmd_scq_gpa_l));
+}
+
+static void spfc_init_parent_context(void *hba, struct spfc_parent_queue_info *prnt_qinfo)
+{
+ struct spfc_parent_context *ctx = NULL;
+
+ ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
+
+ /* Initialize Parent Context */
+ memset(ctx, 0, SPFC_CNTX_SIZE_256B);
+
+ /* Initialize the Queue Info hardware area */
+ spfc_init_prnt_ctxt_scq_qinfo(hba, prnt_qinfo);
+ spfc_init_prnt_ctxt_srq_qinfo(hba, prnt_qinfo);
+ spfc_init_prnt_rsvd_qinfo(prnt_qinfo);
+
+ /* Initialize Software Section */
+ spfc_init_prnt_sw_section_info(hba, prnt_qinfo);
+}
+
+void spfc_map_shared_queue_qid(struct spfc_hba_info *hba,
+ struct spfc_parent_queue_info *parent_queue_info,
+ u32 rport_index)
+{
+ u32 cmd_scqn_local = 0;
+ u32 sts_scqn_local = 0;
+
+	/* The SCQ is used for each connection based on the balanced
+	 * distribution of commands and responses
+	 */
+ cmd_scqn_local = SPFC_RPORTID_TO_CMD_SCQN(rport_index);
+ sts_scqn_local = SPFC_RPORTID_TO_STS_SCQN(rport_index);
+ parent_queue_info->parent_cmd_scq_info.local_queue_id = cmd_scqn_local;
+ parent_queue_info->parent_sts_scq_info.local_queue_id = sts_scqn_local;
+ parent_queue_info->parent_cmd_scq_info.cqm_queue_id =
+ hba->scq_info[cmd_scqn_local].scqn;
+ parent_queue_info->parent_sts_scq_info.cqm_queue_id =
+ hba->scq_info[sts_scqn_local].scqn;
+
+	/* Each session shares the immediate SRQ and the ELS SRQ */
+ parent_queue_info->parent_els_srq_info.local_queue_id = 0;
+ parent_queue_info->parent_els_srq_info.cqm_queue_id = hba->els_srq_info.srqn;
+
+ /* Allocate fcp data cos value */
+ parent_queue_info->queue_data_cos = spfc_map_fcp_data_cos(hba);
+
+ /* Allocate Parent SQ vPort */
+ parent_queue_info->parent_sq_info.vport_id += parent_queue_info->queue_vport_id;
+}
+
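+/*
+ *Function Name : spfc_send_session_enable
+ *Function Description: Update the software section (tx_mfs, E_D_TOV, port id)
+ * of the parent context, copy the context into the CQM
+ * context memory, build the SESS_EN command with the
+ * S_ID/D_ID keys and the context GPA, and enqueue it on
+ * the root CMDQ.
+ *Input Parameters : *hba,
+ * *rport_info
+ *Output Parameters : N/A
+ *Return Type : u32
+ */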
+u32 spfc_send_session_enable(struct spfc_hba_info *hba, struct unf_port_info *rport_info)
+{
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ dma_addr_t ctx_phy_addr = 0;
+ void *ctx_addr = NULL;
+ union spfc_cmdqe session_enable;
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_parent_context *ctx = NULL;
+ struct spfc_sw_section *sw_setion = NULL;
+ struct spfc_host_keys key;
+ u32 tx_mfs = 2048;
+ u32 edtov_timer = 2000;
+ ulong flag = 0;
+ spinlock_t *prtq_state_lock = NULL;
+ u32 index;
+
+ memset(&session_enable, 0, sizeof(union spfc_cmdqe));
+ memset(&key, 0, sizeof(struct spfc_host_keys));
+ index = rport_info->rport_index;
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ ctx = (struct spfc_parent_context *)(parent_queue_info->parent_ctx.parent_ctx);
+ sw_setion = &ctx->sw_section;
+
+ sw_setion->tx_mfs = cpu_to_be16((u16)(tx_mfs));
+ sw_setion->e_d_tov_timer_val = cpu_to_be32(edtov_timer);
+
+ spfc_big_to_cpu32(&sw_setion->sw_ctxt_misc.pctxt_val0,
+ sizeof(sw_setion->sw_ctxt_misc.pctxt_val0));
+ sw_setion->sw_ctxt_misc.dw.port_id = SPFC_GET_NETWORK_PORT_ID(hba);
+ spfc_cpu_to_big32(&sw_setion->sw_ctxt_misc.pctxt_val0,
+ sizeof(sw_setion->sw_ctxt_misc.pctxt_val0));
+
+ spfc_big_to_cpu32(&sw_setion->sw_ctxt_config.pctxt_val1,
+ sizeof(sw_setion->sw_ctxt_config.pctxt_val1));
+ spfc_cpu_to_big32(&sw_setion->sw_ctxt_config.pctxt_val1,
+ sizeof(sw_setion->sw_ctxt_config.pctxt_val1));
+
+ parent_queue_info->parent_sq_info.rport_index = rport_info->rport_index;
+ parent_queue_info->parent_sq_info.local_port_id = rport_info->local_nport_id;
+ parent_queue_info->parent_sq_info.remote_port_id = rport_info->nport_id;
+ parent_queue_info->parent_sq_info.context_id =
+ parent_queue_info->parent_ctx.cqm_parent_ctx_obj->xid;
+
+	/* Fill in the context for the chip */
+ ctx_phy_addr = parent_queue_info->parent_ctx.cqm_parent_ctx_obj->paddr;
+ ctx_addr = parent_queue_info->parent_ctx.cqm_parent_ctx_obj->vaddr;
+ memcpy(ctx_addr, parent_queue_info->parent_ctx.parent_ctx,
+ sizeof(struct spfc_parent_context));
+ session_enable.session_enable.wd0.task_type = SPFC_TASK_T_SESS_EN;
+ session_enable.session_enable.wd2.conn_id = rport_info->rport_index;
+ session_enable.session_enable.wd2.scqn = hba->default_scqn;
+ session_enable.session_enable.wd3.xid_p =
+ parent_queue_info->parent_ctx.cqm_parent_ctx_obj->xid;
+ session_enable.session_enable.context_gpa_hi = SPFC_HIGH_32_BITS(ctx_phy_addr);
+ session_enable.session_enable.context_gpa_lo = SPFC_LOW_32_BITS(ctx_phy_addr);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ key.wd3.sid_2 = (rport_info->local_nport_id & SPFC_KEY_WD3_SID_2_MASK) >> UNF_SHIFT_16;
+ key.wd3.sid_1 = (rport_info->local_nport_id & SPFC_KEY_WD3_SID_1_MASK) >> UNF_SHIFT_8;
+ key.wd4.sid_0 = rport_info->local_nport_id & SPFC_KEY_WD3_SID_0_MASK;
+ key.wd4.did_0 = rport_info->nport_id & SPFC_KEY_WD4_DID_0_MASK;
+ key.wd4.did_1 = (rport_info->nport_id & SPFC_KEY_WD4_DID_1_MASK) >> UNF_SHIFT_8;
+ key.wd4.did_2 = (rport_info->nport_id & SPFC_KEY_WD4_DID_2_MASK) >> UNF_SHIFT_16;
+ key.wd5.host_id = 0;
+ key.wd5.port_id = hba->port_index;
+
+ memcpy(&session_enable.session_enable.keys, &key, sizeof(struct spfc_host_keys));
+
+ memcpy((void *)(uintptr_t)session_enable.session_enable.context,
+ parent_queue_info->parent_ctx.parent_ctx,
+ sizeof(struct spfc_parent_context));
+ spfc_big_to_cpu32((void *)(uintptr_t)session_enable.session_enable.context,
+ sizeof(struct spfc_parent_context));
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info] xid:0x%x, sid:0x%x,did:0x%x parentcontext:",
+ parent_queue_info->parent_ctx.cqm_parent_ctx_obj->xid,
+ rport_info->local_nport_id, rport_info->nport_id);
+
+ ret = spfc_root_cmdq_enqueue(hba, &session_enable, sizeof(session_enable.session_enable));
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]RootCMDQEnqueue Error, free default session parent resource");
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) send default session enable success,rport index(0x%x),context id(0x%x) SID=(0x%x), DID=(0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ parent_queue_info->parent_sq_info.context_id,
+ rport_info->local_nport_id, rport_info->nport_id);
+
+ return RETURN_OK;
+}
+
+u32 spfc_alloc_parent_resource(void *handle, struct unf_port_info *rport_info)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ ulong flag = 0;
+ spinlock_t *prtq_state_lock = NULL;
+ u32 index;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport_info, UNF_RETURN_ERROR);
+
+ hba = (struct spfc_hba_info *)handle;
+ if (!hba->parent_queue_mgr) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) cannot find parent queue pool",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ index = rport_info->rport_index;
+ if (index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) allocate parent resource failed, invalid rport index(0x%x), rport nportid(0x%x)",
+ hba->port_cfg.port_id, index,
+ rport_info->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (parent_queue_info->offload_state != SPFC_QUEUE_STATE_FREE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) allocate parent resource failed, invalid rport index(0x%x), rport nportid(0x%x), offload state(0x%x)",
+ hba->port_cfg.port_id, index, rport_info->nport_id,
+ parent_queue_info->offload_state);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ return UNF_RETURN_ERROR;
+ }
+
+ parent_queue_info->offload_state = SPFC_QUEUE_STATE_INITIALIZED;
+ /* Create Parent Context and Link List SQ */
+ ret = spfc_alloc_parent_sq(hba, parent_queue_info, rport_info);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "Port(0x%x) alloc session resource failed. rport index(0x%x), rport nportid(0x%x).",
+ hba->port_cfg.port_id, index,
+ rport_info->nport_id);
+
+ parent_queue_info->offload_state = SPFC_QUEUE_STATE_FREE;
+ spfc_invalid_parent_sq(&parent_queue_info->parent_sq_info);
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Allocate the corresponding queue xid to each parent */
+ spfc_map_shared_queue_qid(hba, parent_queue_info, rport_info->rport_index);
+
+ /* Initialize Parent Context, including hardware area and ucode area */
+ spfc_init_parent_context(hba, parent_queue_info);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+	/* Only the default session is enabled explicitly here; other sessions are enabled implicitly later */
+ if (unlikely(rport_info->rport_index == SPFC_DEFAULT_RPORT_INDEX))
+ return spfc_send_session_enable(handle, rport_info);
+
+ parent_queue_info->parent_sq_info.local_port_id = rport_info->local_nport_id;
+ parent_queue_info->parent_sq_info.remote_port_id = rport_info->nport_id;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) allocate parent sq success,rport index(0x%x),rport nportid(0x%x),context id(0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.context_id);
+
+ return ret;
+}
+
+u32 spfc_free_parent_resource(void *handle, struct unf_port_info *rport_info)
+{
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ ulong flag = 0;
+ ulong rst_flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ enum spfc_session_reset_mode mode = SPFC_SESS_RST_DELETE_IO_CONN_BOTH;
+ struct spfc_hba_info *hba = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+ spinlock_t *sq_enq_lock = NULL;
+ u32 index;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport_info, UNF_RETURN_ERROR);
+
+ hba = (struct spfc_hba_info *)handle;
+ if (!hba->parent_queue_mgr) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) cannot find parent queue pool",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* get parent queue info (by rport index) */
+ if (rport_info->rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[warn]Port(0x%x) free parent resource failed, invalid rport_index(%u) rport_nport_id(0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index, rport_info->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ index = rport_info->rport_index;
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ sq_enq_lock = &parent_queue_info->parent_sq_info.parent_sq_enqueue_lock;
+
+ spin_lock_irqsave(prtq_state_lock, flag);
+	/* 1. the session has already been offloaded */
+ if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_OFFLOADED) {
+ parent_queue_info->offload_state = SPFC_QUEUE_STATE_DESTROYING;
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+		/* set the reset state to prevent I/O from entering the SQ */
+ spin_lock_irqsave(sq_enq_lock, rst_flag);
+ parent_queue_info->parent_sq_info.sq_in_sess_rst = true;
+ spin_unlock_irqrestore(sq_enq_lock, rst_flag);
+
+ /* check pcie device state */
+ if (!hba->dev_present) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) hba is not present, free directly. rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ parent_queue_info->parent_sq_info.rport_index,
+ parent_queue_info->parent_sq_info.local_port_id,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.remote_port_id);
+
+ spfc_free_parent_queue_info(hba, parent_queue_info);
+ return RETURN_OK;
+ }
+
+ parent_queue_info->parent_sq_info.del_start_jiff = jiffies;
+ (void)queue_delayed_work(hba->work_queue,
+ &parent_queue_info->parent_sq_info.del_work,
+ (ulong)msecs_to_jiffies((u32)
+ SPFC_SQ_DEL_STAGE_TIMEOUT_MS));
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to reset parent session, rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ parent_queue_info->parent_sq_info.rport_index,
+ parent_queue_info->parent_sq_info.local_port_id,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.remote_port_id);
+		/* Forcibly use the delete both I/O and connection mode */
+ mode = SPFC_SESS_RST_DELETE_IO_CONN_BOTH;
+ ret = spfc_send_session_rst_cmd(hba, parent_queue_info, mode);
+
+ return ret;
+ } else if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_INITIALIZED) {
+		/* 2. resources have been allocated but not yet offloaded */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) parent sq is not offloaded, free directly. rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ parent_queue_info->parent_sq_info.rport_index,
+ parent_queue_info->parent_sq_info.local_port_id,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.remote_port_id);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ spfc_free_parent_queue_info(hba, parent_queue_info);
+
+ return RETURN_OK;
+ } else if (parent_queue_info->offload_state ==
+ SPFC_QUEUE_STATE_OFFLOADING) {
+		/* 3. the driver has already delivered the offloading CMND to the ucode */
+ spfc_push_destroy_parent_queue_sqe(hba, parent_queue_info, rport_info);
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) parent sq is offloading, push to delay free. rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ parent_queue_info->parent_sq_info.rport_index,
+ parent_queue_info->parent_sq_info.local_port_id,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.remote_port_id);
+
+ return RETURN_OK;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) parent sq is not created, do not need free state(0x%x) rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
+ hba->port_cfg.port_id, parent_queue_info->offload_state,
+ rport_info->rport_index,
+ parent_queue_info->parent_sq_info.rport_index,
+ parent_queue_info->parent_sq_info.local_port_id,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.remote_port_id);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ return RETURN_OK;
+}
+
+void spfc_free_parent_queue_mgr(void *handle)
+{
+ u32 index = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(handle);
+
+ hba = (struct spfc_hba_info *)handle;
+ if (!hba->parent_queue_mgr)
+ return;
+ parent_queue_mgr = hba->parent_queue_mgr;
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ if (parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx)
+ parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx = NULL;
+ }
+
+ if (parent_queue_mgr->parent_sq_buf_list.buflist) {
+ for (index = 0; index < parent_queue_mgr->parent_sq_buf_list.buf_num; index++) {
+ if (parent_queue_mgr->parent_sq_buf_list.buflist[index].paddr != 0) {
+ pci_unmap_single(hba->pci_dev,
+ parent_queue_mgr->parent_sq_buf_list
+ .buflist[index].paddr,
+ parent_queue_mgr->parent_sq_buf_list.buf_size,
+ DMA_BIDIRECTIONAL);
+ parent_queue_mgr->parent_sq_buf_list.buflist[index].paddr = 0;
+ }
+ kfree(parent_queue_mgr->parent_sq_buf_list.buflist[index].vaddr);
+ parent_queue_mgr->parent_sq_buf_list.buflist[index].vaddr = NULL;
+ }
+
+ kfree(parent_queue_mgr->parent_sq_buf_list.buflist);
+ parent_queue_mgr->parent_sq_buf_list.buflist = NULL;
+ }
+
+ vfree(parent_queue_mgr);
+ hba->parent_queue_mgr = NULL;
+}
+
+void spfc_free_parent_queues(void *handle)
+{
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ struct spfc_hba_info *hba = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ FC_CHECK_RETURN_VOID(handle);
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_mgr = hba->parent_queue_mgr;
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (SPFC_QUEUE_STATE_DESTROYING ==
+ parent_queue_mgr->parent_queue[index].offload_state) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ (void)cancel_delayed_work_sync(&parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.del_work);
+ (void)cancel_delayed_work_sync(&parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.flush_done_timeout_work);
+
+ /* free parent queue */
+ spfc_free_parent_queue_info(hba, &parent_queue_mgr->parent_queue[index]);
+ continue;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ }
+}
+
+/*
+ *Function Name : spfc_alloc_parent_queue_mgr
+ *Function Description: Allocate and initialize parent queue manager.
+ *Input Parameters : *handle
+ *Output Parameters : N/A
+ *Return Type : u32
+ */
+u32 spfc_alloc_parent_queue_mgr(void *handle)
+{
+ u32 index = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ u32 buf_total_size;
+ u32 buf_num;
+ u32 alloc_idx;
+ u32 cur_buf_idx = 0;
+ u32 cur_buf_offset = 0;
+ u32 prt_ctx_size = sizeof(struct spfc_parent_context);
+ u32 buf_cnt_perhugebuf;
+ struct spfc_hba_info *hba = NULL;
+ u32 init_val = INVALID_VALUE32;
+ dma_addr_t paddr;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_mgr = (struct spfc_parent_queue_mgr *)vmalloc(sizeof
+ (struct spfc_parent_queue_mgr));
+ if (!parent_queue_mgr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) cannot allocate queue manager",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hba->parent_queue_mgr = parent_queue_mgr;
+ memset(parent_queue_mgr, 0, sizeof(struct spfc_parent_queue_mgr));
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ spin_lock_init(&parent_queue_mgr->parent_queue[index].parent_queue_state_lock);
+ parent_queue_mgr->parent_queue[index].offload_state = SPFC_QUEUE_STATE_FREE;
+ spin_lock_init(&(parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.parent_sq_enqueue_lock));
+ parent_queue_mgr->parent_queue[index].parent_cmd_scq_info.cqm_queue_id = init_val;
+ parent_queue_mgr->parent_queue[index].parent_sts_scq_info.cqm_queue_id = init_val;
+ parent_queue_mgr->parent_queue[index].parent_els_srq_info.cqm_queue_id = init_val;
+ parent_queue_mgr->parent_queue[index].parent_sq_info.del_start_jiff = init_val;
+ parent_queue_mgr->parent_queue[index].queue_vport_id = hba->vpid_start;
+ }
+
+ buf_total_size = prt_ctx_size * UNF_SPFC_MAXRPORT_NUM;
+ parent_queue_mgr->parent_sq_buf_list.buf_size = buf_total_size > BUF_LIST_PAGE_SIZE ?
+ BUF_LIST_PAGE_SIZE : buf_total_size;
+ buf_cnt_perhugebuf = parent_queue_mgr->parent_sq_buf_list.buf_size / prt_ctx_size;
+ buf_num = UNF_SPFC_MAXRPORT_NUM % buf_cnt_perhugebuf ?
+ UNF_SPFC_MAXRPORT_NUM / buf_cnt_perhugebuf + 1 :
+ UNF_SPFC_MAXRPORT_NUM / buf_cnt_perhugebuf;
+ parent_queue_mgr->parent_sq_buf_list.buflist =
+ (struct buff_list *)kmalloc(buf_num * sizeof(struct buff_list), GFP_KERNEL);
+ parent_queue_mgr->parent_sq_buf_list.buf_num = buf_num;
+
+ if (!parent_queue_mgr->parent_sq_buf_list.buflist) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate QueuMgr buf list failed out of memory");
+ goto free_parent_queue;
+ }
+ memset(parent_queue_mgr->parent_sq_buf_list.buflist, 0, buf_num * sizeof(struct buff_list));
+
+ for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
+ parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr =
+ kmalloc(parent_queue_mgr->parent_sq_buf_list.buf_size, GFP_KERNEL);
+ if (!parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr)
+ goto free_parent_queue;
+
+ memset(parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr, 0,
+ parent_queue_mgr->parent_sq_buf_list.buf_size);
+
+ parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].paddr =
+ pci_map_single(hba->pci_dev,
+ parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr,
+ parent_queue_mgr->parent_sq_buf_list.buf_size,
+ DMA_BIDIRECTIONAL);
+ paddr = parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].paddr;
+ if (pci_dma_mapping_error(hba->pci_dev, paddr)) {
+ parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].paddr = 0;
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Map QueuMgr address failed");
+
+ goto free_parent_queue;
+ }
+ }
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ cur_buf_idx = index / buf_cnt_perhugebuf;
+ cur_buf_offset = prt_ctx_size * (index % buf_cnt_perhugebuf);
+
+ parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx =
+ parent_queue_mgr->parent_sq_buf_list.buflist[cur_buf_idx].vaddr +
+ cur_buf_offset;
+ parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx_addr =
+ parent_queue_mgr->parent_sq_buf_list.buflist[cur_buf_idx].paddr +
+ cur_buf_offset;
+ }
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[EVENT]Allocate bufnum:%u,buf_total_size:%u", buf_num, buf_total_size);
+
+ return RETURN_OK;
+
+free_parent_queue:
+ spfc_free_parent_queue_mgr(hba);
+ return UNF_RETURN_ERROR;
+}
+
+static void spfc_rlease_all_wqe_pages(struct spfc_hba_info *hba)
+{
+ u32 index;
+ struct spfc_wqe_page *wpg = NULL;
+
+ FC_CHECK_RETURN_VOID((hba));
+
+ wpg = hba->sq_wpg_pool.wpg_pool_addr;
+
+ for (index = 0; index < hba->sq_wpg_pool.wpg_cnt; index++) {
+ if (wpg->wpg_addr) {
+ dma_pool_free(hba->sq_wpg_pool.wpg_dma_pool,
+ wpg->wpg_addr, wpg->wpg_phy_addr);
+ wpg->wpg_addr = NULL;
+ wpg->wpg_phy_addr = 0;
+ }
+
+ wpg++;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port[%u] free total %u wqepages", hba->port_index,
+ index);
+}
+
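+/*
+ *Function Name : spfc_alloc_parent_sq_wqe_page_pool
+ *Function Description: Create the WQE page DMA pool, allocate the WQE page
+ * descriptor array, pre-allocate every WQE page from the
+ * pool and add it to the free list; roll back on failure.
+ *Input Parameters : *handle
+ *Output Parameters : N/A
+ *Return Type : u32
+ */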
+u32 spfc_alloc_parent_sq_wqe_page_pool(void *handle)
+{
+ u32 index = 0;
+ struct spfc_sq_wqepage_pool *wpg_pool = NULL;
+ struct spfc_wqe_page *wpg = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ wpg_pool = &hba->sq_wpg_pool;
+
+ INIT_LIST_HEAD(&wpg_pool->list_free_wpg_pool);
+ spin_lock_init(&wpg_pool->wpg_pool_lock);
+ atomic_set(&wpg_pool->wpg_in_use, 0);
+
+	/* Calculate the number of Wqe Pages required in the pool */
+ wpg_pool->wpg_size = wqe_page_size;
+ wpg_pool->wpg_cnt = SPFC_MIN_WP_NUM * SPFC_MAX_SSQ_NUM +
+ ((hba->exi_count * SPFC_SQE_SIZE) / wpg_pool->wpg_size);
+ wpg_pool->wqe_per_wpg = wpg_pool->wpg_size / SPFC_SQE_SIZE;
+
+	/* Create DMA POOL */
+ wpg_pool->wpg_dma_pool = dma_pool_create("spfc_wpg_pool",
+ &hba->pci_dev->dev,
+ wpg_pool->wpg_size,
+ SPFC_SQE_SIZE, 0);
+ if (!wpg_pool->wpg_dma_pool) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Cannot allocate SQ WqePage DMA pool");
+
+ goto out_create_dma_pool_err;
+ }
+
+ /* Allocate an array to record all WqePage addresses */
+ wpg_pool->wpg_pool_addr = (struct spfc_wqe_page *)vmalloc(wpg_pool->wpg_cnt *
+ sizeof(struct spfc_wqe_page));
+ if (!wpg_pool->wpg_pool_addr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Allocate SQ WqePageAddr array failed");
+
+ goto out_alloc_wpg_array_err;
+ }
+ wpg = wpg_pool->wpg_pool_addr;
+ memset(wpg, 0, wpg_pool->wpg_cnt * sizeof(struct spfc_wqe_page));
+
+ for (index = 0; index < wpg_pool->wpg_cnt; index++) {
+ wpg->wpg_addr = dma_pool_alloc(wpg_pool->wpg_dma_pool, GFP_KERNEL,
+ (u64 *)&wpg->wpg_phy_addr);
+ if (!wpg->wpg_addr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR, "[err]Dma pool allocated failed");
+ break;
+ }
+
+ /* To ensure security, clear the memory */
+ memset(wpg->wpg_addr, 0, wpg_pool->wpg_size);
+
+ /* Add to the idle linked list */
+ INIT_LIST_HEAD(&wpg->entry_wpg);
+ list_add_tail(&wpg->entry_wpg, &wpg_pool->list_free_wpg_pool);
+
+ wpg++;
+ }
+ /* ALL allocated successfully */
+ if (wpg_pool->wpg_cnt == index)
+ return RETURN_OK;
+
+ spfc_rlease_all_wqe_pages(hba);
+ vfree(wpg_pool->wpg_pool_addr);
+ wpg_pool->wpg_pool_addr = NULL;
+
+out_alloc_wpg_array_err:
+ dma_pool_destroy(wpg_pool->wpg_dma_pool);
+ wpg_pool->wpg_dma_pool = NULL;
+
+out_create_dma_pool_err:
+ return UNF_RETURN_ERROR;
+}
+
+void spfc_free_parent_sq_wqe_page_pool(void *handle)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID((handle));
+ hba = (struct spfc_hba_info *)handle;
+ spfc_rlease_all_wqe_pages(hba);
+ hba->sq_wpg_pool.wpg_cnt = 0;
+
+ if (hba->sq_wpg_pool.wpg_pool_addr) {
+ vfree(hba->sq_wpg_pool.wpg_pool_addr);
+ hba->sq_wpg_pool.wpg_pool_addr = NULL;
+ }
+
+ dma_pool_destroy(hba->sq_wpg_pool.wpg_dma_pool);
+ hba->sq_wpg_pool.wpg_dma_pool = NULL;
+}
+
+static u32 spfc_parent_sq_ring_direct_wqe_doorbell(struct spfc_parent_ssq_info *sq, u8 *direct_wqe)
+{
+ u32 ret = RETURN_OK;
+ int ravl;
+ u16 pmsn;
+ u64 queue_hdr_db_val;
+ struct spfc_hba_info *hba;
+
+ hba = (struct spfc_hba_info *)sq->hba;
+ pmsn = sq->last_cmsn;
+
+ if (sq->cache_id == INVALID_VALUE32) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]SQ(0x%x) invalid cid", sq->context_id);
+ return RETURN_ERROR;
+ }
+ /* Fill Doorbell Record */
+ queue_hdr_db_val = sq->queue_header->door_bell_record;
+ queue_hdr_db_val &= (u64)(~(0xFFFFFFFF));
+ queue_hdr_db_val |= (u64)((u64)pmsn << UNF_SHIFT_16 | pmsn);
+ sq->queue_header->door_bell_record =
+ cpu_to_be64(queue_hdr_db_val);
+
+ ravl = cqm_ring_direct_wqe_db_fc(hba->dev_handle, SERVICE_T_FC, direct_wqe);
+ if (unlikely(ravl != CQM_SUCCESS)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]SQ(0x%x) send DB failed", sq->context_id);
+
+ ret = RETURN_ERROR;
+ }
+
+ atomic_inc(&sq->sq_db_cnt);
+
+ return ret;
+}
+
+u32 spfc_parent_sq_ring_doorbell(struct spfc_parent_ssq_info *sq, u8 qos_level, u32 c)
+{
+ u32 ret = RETURN_OK;
+ int ravl;
+ u16 pmsn;
+ u8 pmsn_lo;
+ u8 pmsn_hi;
+ u64 db_val_qw;
+ struct spfc_hba_info *hba;
+ struct spfc_parent_sq_db door_bell;
+
+ hba = (struct spfc_hba_info *)sq->hba;
+ pmsn = sq->last_cmsn;
+ /* Obtain the low 8 Bit of PMSN */
+ pmsn_lo = (u8)(pmsn & SPFC_PMSN_MASK);
+ /* Obtain the high 8 Bit of PMSN */
+ pmsn_hi = (u8)((pmsn >> UNF_SHIFT_8) & SPFC_PMSN_MASK);
+ door_bell.wd0.service_type = SPFC_LSW(sq->service_type);
+ door_bell.wd0.cos = 0;
+ /* c = 0 is the data type, c = 1 is the control type; the two types are handled differently in MQM */
+ door_bell.wd0.c = c;
+ door_bell.wd0.arm = SPFC_DB_ARM_DISABLE;
+ door_bell.wd0.cntx_size = SPFC_CNTX_SIZE_T_256B;
+ door_bell.wd0.xid = sq->context_id;
+ door_bell.wd1.sm_data = sq->cache_id;
+ door_bell.wd1.qid = sq->sq_queue_id;
+ door_bell.wd1.pi_hi = (u32)pmsn_hi;
+
+ if (sq->cache_id == INVALID_VALUE32) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]SQ(0x%x) invalid cid", sq->context_id);
+ return UNF_RETURN_ERROR;
+ }
+ /* Fill Doorbell Record */
+ db_val_qw = sq->queue_header->door_bell_record;
+ db_val_qw &= (u64)(~(SPFC_DB_VAL_MASK));
+ db_val_qw |= (u64)((u64)pmsn << UNF_SHIFT_16 | pmsn);
+ sq->queue_header->door_bell_record = cpu_to_be64(db_val_qw);
+
+ /* ring doorbell */
+ db_val_qw = *(u64 *)&door_bell;
+ ravl = cqm3_ring_hardware_db_fc(hba->dev_handle, SERVICE_T_FC, pmsn_lo,
+ (qos_level & SPFC_QOS_LEVEL_MASK),
+ db_val_qw);
+ if (unlikely(ravl != CQM_SUCCESS)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]SQ(0x%x) send DB(0x%llx) failed",
+ sq->context_id, db_val_qw);
+
+ ret = UNF_RETURN_ERROR;
+ }
+
+ /* Doorbell success counter */
+ atomic_inc(&sq->sq_db_cnt);
+
+ return ret;
+}
+
+u32 spfc_direct_sq_enqueue(struct spfc_parent_ssq_info *ssq, struct spfc_sqe *io_sqe, u8 wqe_type)
+{
+ u32 ret = RETURN_OK;
+ u32 msn_wd = INVALID_VALUE32;
+ u16 link_wqe_msn = 0;
+ ulong flag = 0;
+ struct spfc_wqe_page *tail_wpg = NULL;
+ struct spfc_sqe *sqe_in_wp = NULL;
+ struct spfc_linkwqe *link_wqe = NULL;
+ struct spfc_linkwqe *link_wqe_last_part = NULL;
+ u64 wqe_gpa;
+ struct spfc_direct_wqe_db dre_door_bell;
+
+ spin_lock_irqsave(&ssq->parent_sq_enqueue_lock, flag);
+ tail_wpg = SPFC_GET_SQ_TAIL(ssq);
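+ /* Ring-style SQ: when the current page is full, refresh the link wqe
+ * MSN and wrap the offset back to the start of the same page
+ */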
+ if (ssq->wqe_offset == ssq->wqe_num_per_buf) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
+ "[info]Ssq(0x%x), xid(0x%x) qid(0x%x) add wqepage at Pmsn(0x%x), sqe_minus_cqe_cnt(0x%x)",
+ ssq->sqn, ssq->context_id, ssq->sq_queue_id,
+ ssq->last_cmsn,
+ atomic_read(&ssq->sqe_minus_cqe_cnt));
+
+ link_wqe_msn = SPFC_MSN_DEC(ssq->last_cmsn);
+ link_wqe = (struct spfc_linkwqe *)spfc_get_wqe_page_entry(tail_wpg,
+ ssq->wqe_offset);
+ msn_wd = be32_to_cpu(link_wqe->val_wd1);
+ msn_wd |= ((u32)(link_wqe_msn & SPFC_MSNWD_L_MASK));
+ msn_wd |= (((u32)(link_wqe_msn & SPFC_MSNWD_H_MASK)) << UNF_SHIFT_16);
+ link_wqe->val_wd1 = cpu_to_be32(msn_wd);
+ link_wqe_last_part = (struct spfc_linkwqe *)((u8 *)link_wqe +
+ SPFC_EXTEND_WQE_OFFSET);
+ link_wqe_last_part->val_wd1 = link_wqe->val_wd1;
+ spfc_set_direct_wqe_owner_be(link_wqe, ssq->last_pi_owner);
+ ssq->wqe_offset = 0;
+ ssq->last_pi_owner = !ssq->last_pi_owner;
+ }
+ sqe_in_wp =
+ (struct spfc_sqe *)spfc_get_wqe_page_entry(tail_wpg, ssq->wqe_offset);
+ spfc_build_wqe_owner_pmsn(io_sqe, (ssq->last_pi_owner), ssq->last_cmsn);
+ SPFC_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
+
+ wqe_gpa = tail_wpg->wpg_phy_addr + (ssq->wqe_offset * sizeof(struct spfc_sqe));
+ io_sqe->wqe_gpa = (wqe_gpa >> UNF_SHIFT_6);
+
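+ /* For direct wqe mode the doorbell value is carried inside the sqe
+ * itself and rung later via cqm_ring_direct_wqe_db_fc()
+ */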
+ dre_door_bell.wd0.ddb = IWARP_FC_DDB_TYPE;
+ dre_door_bell.wd0.cos = 0;
+ dre_door_bell.wd0.c = 0;
+ dre_door_bell.wd0.pi_hi =
+ (u32)(ssq->last_cmsn >> UNF_SHIFT_12) & SPFC_DB_WD0_PI_H_MASK;
+ dre_door_bell.wd0.cntx_size = SPFC_CNTX_SIZE_T_256B;
+ dre_door_bell.wd0.xid = ssq->context_id;
+ dre_door_bell.wd1.sm_data = ssq->cache_id;
+ dre_door_bell.wd1.pi_lo = (u32)(ssq->last_cmsn & SPFC_DB_WD0_PI_L_MASK);
+ io_sqe->db_val = *(u64 *)&dre_door_bell;
+
+ spfc_convert_parent_wqe_to_big_endian(io_sqe);
+ memcpy(sqe_in_wp, io_sqe, sizeof(struct spfc_sqe));
+ spfc_set_direct_wqe_owner_be(sqe_in_wp, ssq->last_pi_owner);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
+ "[INFO]Ssq(0x%x) xid:0x%x,qid:0x%x wqegpa:0x%llx,o:0x%x,outstandind:0x%x,pmsn:0x%x,cmsn:0x%x",
+ ssq->sqn, ssq->context_id, ssq->sq_queue_id, wqe_gpa,
+ ssq->last_pi_owner, atomic_read(&ssq->sqe_minus_cqe_cnt),
+ ssq->last_cmsn, SPFC_GET_QUEUE_CMSN(ssq));
+
+ ssq->accum_wqe_cnt++;
+ if (ssq->accum_wqe_cnt == accum_db_num) {
+ ret = spfc_parent_sq_ring_direct_wqe_doorbell(ssq, (void *)sqe_in_wp);
+ if (unlikely(ret != RETURN_OK))
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
+ ssq->accum_wqe_cnt = 0;
+ }
+
+ ssq->wqe_offset += 1;
+ ssq->last_cmsn = SPFC_MSN_INC(ssq->last_cmsn);
+ atomic_inc(&ssq->sq_wqe_cnt);
+ atomic_inc(&ssq->sqe_minus_cqe_cnt);
+ SPFC_SQ_IO_STAT(ssq, wqe_type);
+ spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
+ return ret;
+}
+
+u32 spfc_parent_ssq_enqueue(struct spfc_parent_ssq_info *ssq, struct spfc_sqe *io_sqe, u8 wqe_type)
+{
+ u32 ret = RETURN_OK;
+ u32 addr_wd = INVALID_VALUE32;
+ u32 msn_wd = INVALID_VALUE32;
+ u16 link_wqe_msn = 0;
+ ulong flag = 0;
+ struct spfc_wqe_page *new_wqe_page = NULL;
+ struct spfc_wqe_page *tail_wpg = NULL;
+ struct spfc_sqe *sqe_in_wp = NULL;
+ struct spfc_linkwqe *link_wqe = NULL;
+ struct spfc_linkwqe *link_wqe_last_part = NULL;
+ u32 cur_cmsn = 0;
+ u8 qos_level = (u8)io_sqe->ts_sl.cont.icmnd.info.dif_info.wd1.vpid;
+ u32 c = SPFC_DB_C_BIT_CONTROL_TYPE;
+
+ if (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
+ return spfc_direct_sq_enqueue(ssq, io_sqe, wqe_type);
+
+ spin_lock_irqsave(&ssq->parent_sq_enqueue_lock, flag);
+ tail_wpg = SPFC_GET_SQ_TAIL(ssq);
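+ /* Linked-list style SQ: when the current page is full, allocate a new
+ * wqe page and chain it to the tail through the link wqe
+ */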
+ if (ssq->wqe_offset == ssq->wqe_num_per_buf) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
+ "[info]Ssq(0x%x), xid(0x%x) qid(0x%x) add wqepage at Pmsn(0x%x), WpgCnt(0x%x)",
+ ssq->sqn, ssq->context_id, ssq->sq_queue_id,
+ ssq->last_cmsn,
+ atomic_read(&ssq->wqe_page_cnt));
+ cur_cmsn = SPFC_GET_QUEUE_CMSN(ssq);
+ spfc_free_sq_wqe_page(ssq, cur_cmsn);
+ new_wqe_page = spfc_add_one_wqe_page(ssq);
+ if (unlikely(!new_wqe_page)) {
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
+ spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
+ return UNF_RETURN_ERROR;
+ }
+ link_wqe = (struct spfc_linkwqe *)spfc_get_wqe_page_entry(tail_wpg,
+ ssq->wqe_offset);
+ addr_wd = SPFC_MSD(new_wqe_page->wpg_phy_addr);
+ link_wqe->next_page_addr_hi = cpu_to_be32(addr_wd);
+ addr_wd = SPFC_LSD(new_wqe_page->wpg_phy_addr);
+ link_wqe->next_page_addr_lo = cpu_to_be32(addr_wd);
+ link_wqe_msn = SPFC_MSN_DEC(ssq->last_cmsn);
+ msn_wd = be32_to_cpu(link_wqe->val_wd1);
+ msn_wd |= ((u32)(link_wqe_msn & SPFC_MSNWD_L_MASK));
+ msn_wd |= (((u32)(link_wqe_msn & SPFC_MSNWD_H_MASK)) << UNF_SHIFT_16);
+ link_wqe->val_wd1 = cpu_to_be32(msn_wd);
+ link_wqe_last_part = (struct spfc_linkwqe *)((u8 *)link_wqe +
+ SPFC_EXTEND_WQE_OFFSET);
+ link_wqe_last_part->next_page_addr_hi = link_wqe->next_page_addr_hi;
+ link_wqe_last_part->next_page_addr_lo = link_wqe->next_page_addr_lo;
+ link_wqe_last_part->val_wd1 = link_wqe->val_wd1;
+ spfc_set_sq_wqe_owner_be(link_wqe);
+ ssq->wqe_offset = 0;
+ tail_wpg = SPFC_GET_SQ_TAIL(ssq);
+ atomic_inc(&ssq->wqe_page_cnt);
+ }
+
+ spfc_build_wqe_owner_pmsn(io_sqe, !(ssq->last_pi_owner), ssq->last_cmsn);
+ SPFC_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
+ spfc_convert_parent_wqe_to_big_endian(io_sqe);
+ sqe_in_wp = (struct spfc_sqe *)spfc_get_wqe_page_entry(tail_wpg, ssq->wqe_offset);
+ memcpy(sqe_in_wp, io_sqe, sizeof(struct spfc_sqe));
+ spfc_set_sq_wqe_owner_be(sqe_in_wp);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
+ "[INFO]Ssq(0x%x) xid:0x%x,qid:0x%x wqegpa:0x%llx, qos_level:0x%x, c:0x%x",
+ ssq->sqn, ssq->context_id, ssq->sq_queue_id,
+ virt_to_phys(sqe_in_wp), qos_level, c);
+
+ ssq->accum_wqe_cnt++;
+ if (ssq->accum_wqe_cnt == accum_db_num) {
+ ret = spfc_parent_sq_ring_doorbell(ssq, qos_level, c);
+ if (unlikely(ret != RETURN_OK))
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
+ ssq->accum_wqe_cnt = 0;
+ }
+ ssq->wqe_offset += 1;
+ ssq->last_cmsn = SPFC_MSN_INC(ssq->last_cmsn);
+ atomic_inc(&ssq->sq_wqe_cnt);
+ atomic_inc(&ssq->sqe_minus_cqe_cnt);
+ SPFC_SQ_IO_STAT(ssq, wqe_type);
+ spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
+ return ret;
+}
+
+u32 spfc_parent_sq_enqueue(struct spfc_parent_sq_info *sq, struct spfc_sqe *io_sqe, u16 ssqn)
+{
+ u8 wqe_type = 0;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)sq->hba;
+ struct spfc_parent_ssq_info *ssq = NULL;
+
+ if (unlikely(ssqn >= SPFC_MAX_SSQ_NUM)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Ssqn 0x%x is invalid.", ssqn);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ wqe_type = (u8)SPFC_GET_WQE_TYPE(io_sqe);
+
+ /* Serial enqueue */
+ io_sqe->ts_sl.xid = sq->context_id;
+ io_sqe->ts_sl.cid = sq->cache_id;
+ io_sqe->ts_sl.sqn = ssqn;
+
+ /* Choose SSQ */
+ ssq = &hba->parent_queue_mgr->shared_queue[ssqn].parent_ssq_info;
+
+ /* If the SQ is invalid, the wqe is discarded */
+ if (unlikely(!atomic_read(&sq->sq_valid))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]SQ is invalid, reject wqe(0x%x)", wqe_type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* When the heartbeat status is down, only control sessions may still
+ * enqueue; I/O WQEs are rejected
+ */
+ if (unlikely(!hba->heart_status && SPFC_WQE_IS_IO(io_sqe))) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]Heart status is false");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (sq->need_offloaded != SPFC_NEED_DO_OFFLOAD) {
+ /* Ensure to be offloaded */
+ if (unlikely(!atomic_read(&sq->sq_cached))) {
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)sq->hba, wqe_type);
+ SPFC_HBA_STAT((struct spfc_hba_info *)sq->hba,
+ SPFC_STAT_PARENT_SQ_NOT_OFFLOADED);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]RPort(0x%x) Session(0x%x) is not offloaded, reject wqe(0x%x)",
+ sq->rport_index, sq->context_id, wqe_type);
+
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ /* If the SQ is in the flush state, reject I/O WQEs; control sessions
+ * are temporarily allowed to enqueue.
+ */
+ if (unlikely(sq->port_in_flush && SPFC_WQE_IS_IO(io_sqe))) {
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)sq->hba, wqe_type);
+ SPFC_HBA_STAT((struct spfc_hba_info *)sq->hba, SPFC_STAT_PARENT_IO_FLUSHED);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Session(0x%x) in flush, Sqn(0x%x) cmsn(0x%x), reject wqe(0x%x)",
+ sq->context_id, ssqn, SPFC_GET_QUEUE_CMSN(ssq),
+ wqe_type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* If the SQ is in the session deletion state and the WQE belongs to the
+ * I/O path, an I/O failure is returned directly
+ */
+ if (unlikely(sq->sq_in_sess_rst && SPFC_WQE_IS_IO(io_sqe))) {
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)sq->hba, wqe_type);
+ SPFC_HBA_STAT((struct spfc_hba_info *)sq->hba, SPFC_STAT_PARENT_IO_FLUSHED);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Session(0x%x) in session reset, reject wqe(0x%x)",
+ sq->context_id, wqe_type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return spfc_parent_ssq_enqueue(ssq, io_sqe, wqe_type);
+}
+
+static bool spfc_msn_in_wqe_page(u32 start_msn, u32 end_msn, u32 cur_cmsn)
+{
+ bool ret = true;
+
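+ /* The MSN is a wrapping counter, so handle both the normal range and
+ * the wrapped-around range
+ */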
+ if (end_msn >= start_msn) {
+ if (cur_cmsn < start_msn || cur_cmsn > end_msn)
+ ret = false;
+ else
+ ret = true;
+ } else {
+ if (cur_cmsn > end_msn && cur_cmsn < start_msn)
+ ret = false;
+ else
+ ret = true;
+ }
+
+ return ret;
+}
+
+void spfc_free_sq_wqe_page(struct spfc_parent_ssq_info *ssq, u32 cur_cmsn)
+{
+ u16 wpg_start_cmsn = 0;
+ u16 wpg_end_cmsn = 0;
+ bool wqe_page_in_use = false;
+
+ /* If there is only zero or one Wqe Page, no release is required */
+ if (atomic_read(&ssq->wqe_page_cnt) <= SPFC_MIN_WP_NUM)
+ return;
+
+ /* Check whether the current MSN is within the MSN range covered by the
+ * WqePage
+ */
+ wpg_start_cmsn = ssq->head_start_cmsn;
+ wpg_end_cmsn = ssq->head_end_cmsn;
+ wqe_page_in_use = spfc_msn_in_wqe_page(wpg_start_cmsn, wpg_end_cmsn, cur_cmsn);
+
+ /* If the value of CMSN is within the current Wqe Page, no release is
+ * required
+ */
+ if (wqe_page_in_use)
+ return;
+
+ /* If the next WqePage is available and the CMSN is not in the current
+ * WqePage, the current WqePage is released
+ */
+ while (!wqe_page_in_use &&
+ (atomic_read(&ssq->wqe_page_cnt) > SPFC_MIN_WP_NUM)) {
+ /* Free WqePage */
+ spfc_free_head_wqe_page(ssq);
+
+ /* Obtain the start MSN of the next WqePage */
+ wpg_start_cmsn = SPFC_MSN_INC(wpg_end_cmsn);
+
+ /* obtain the end MSN of the next WqePage */
+ wpg_end_cmsn =
+ SPFC_GET_WP_END_CMSN(wpg_start_cmsn, ssq->wqe_num_per_buf);
+
+ /* Set new MSN range */
+ ssq->head_start_cmsn = wpg_start_cmsn;
+ ssq->head_end_cmsn = wpg_end_cmsn;
+ cur_cmsn = SPFC_GET_QUEUE_CMSN(ssq);
+ /* Check whether the current MSN is within the MSN range covered
+ * by the WqePage
+ */
+ wqe_page_in_use = spfc_msn_in_wqe_page(wpg_start_cmsn, wpg_end_cmsn, cur_cmsn);
+ }
+}
+
+/*
+ *Function Name : spfc_update_sq_wqe_completion_stat
+ *Function Description: Update the completion statistics of the CQE that
+ *corresponds to a WQE on the connection SQ.
+ *Input Parameters : *ssq, *scqe
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_update_sq_wqe_completion_stat(struct spfc_parent_ssq_info *ssq,
+ union spfc_scqe *scqe)
+{
+ struct spfc_scqe_rcv_els_gs_rsp *els_gs_rsp = NULL;
+
+ els_gs_rsp = (struct spfc_scqe_rcv_els_gs_rsp *)scqe;
+
+ /* For intermediate ELS/GS RSP frames and for CQEs whose error is not
+ * reported to the CM layer, no statistics are required
+ */
+ if (unlikely(SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_RSP) ||
+ (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_GS_RSP)) {
+ if (!els_gs_rsp->wd3.end_rsp || !SPFC_SCQE_ERR_TO_CM(scqe))
+ return;
+ }
+
+ /* When the SQ statistics are updated, the PlogiAcc or PlogiAccSts that
+ * is implicitly unloaded also enters here, and one more CQE is counted
+ */
+ atomic_inc(&ssq->sq_cqe_cnt);
+ atomic_dec(&ssq->sqe_minus_cqe_cnt);
+ SPFC_SQ_IO_STAT(ssq, SPFC_GET_SCQE_TYPE(scqe));
+}
+
+/*
+ *Function Name : spfc_reclaim_sq_wqe_page
+ *Function Description: Reclaim the Wqe Page that has been used up in the
+ * linked list SQ.
+ *Input Parameters : *handle,
+ * *scqe
+ *Output Parameters : N/A
+ *Return Type : u32
+ */
+u32 spfc_reclaim_sq_wqe_page(void *handle, union spfc_scqe *scqe)
+{
+ u32 ret = RETURN_OK;
+ u32 cur_cmsn = 0;
+ u32 sqn = INVALID_VALUE32;
+ struct spfc_parent_ssq_info *ssq = NULL;
+ struct spfc_parent_shared_queue_info *parent_queue_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+ ulong flag = 0;
+
+ hba = (struct spfc_hba_info *)handle;
+ sqn = SPFC_GET_SCQE_SQN(scqe);
+ if (sqn >= SPFC_MAX_SSQ_NUM) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) do not have sqn: 0x%x",
+ hba->port_cfg.port_id, sqn);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ parent_queue_info = &hba->parent_queue_mgr->shared_queue[sqn];
+ ssq = &parent_queue_info->parent_ssq_info;
+ /* If there is only zero or one Wqe Page, no release is required */
+ if (atomic_read(&ssq->wqe_page_cnt) <= SPFC_MIN_WP_NUM) {
+ spfc_update_sq_wqe_completion_stat(ssq, scqe);
+ return RETURN_OK;
+ }
+
+ spin_lock_irqsave(&ssq->parent_sq_enqueue_lock, flag);
+ cur_cmsn = SPFC_GET_QUEUE_CMSN(ssq);
+ spfc_free_sq_wqe_page(ssq, cur_cmsn);
+ spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
+
+ spfc_update_sq_wqe_completion_stat(ssq, scqe);
+
+ return ret;
+}
+
+u32 spfc_root_cmdq_enqueue(void *handle, union spfc_cmdqe *cmdqe, u16 cmd_len)
+{
+#define SPFC_ROOTCMDQ_TIMEOUT_MS 3000
+ u8 wqe_type = 0;
+ int cmq_ret = 0;
+ struct sphw_cmd_buf *cmd_buf = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ wqe_type = (u8)cmdqe->common.wd0.task_type;
+ SPFC_IO_STAT(hba, wqe_type);
+
+ cmd_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmd_buf) {
+ SPFC_ERR_IO_STAT(hba, wqe_type);
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) CqmHandle(0x%p) allocate cmdq buffer failed",
+ hba->port_cfg.port_id, hba->dev_handle);
+
+ return UNF_RETURN_ERROR;
+ }
+
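+ /* Copy the command into the DMA-capable buffer and convert it to big
+ * endian before handing it to the firmware
+ */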
+ memcpy(cmd_buf->buf, cmdqe, cmd_len);
+ spfc_cpu_to_big32(cmd_buf->buf, cmd_len);
+ cmd_buf->size = cmd_len;
+
+ cmq_ret = sphw_cmdq_async(hba->dev_handle, COMM_MOD_FC, 0, cmd_buf, SPHW_CHANNEL_FC);
+
+ if (cmq_ret != RETURN_OK) {
+ sphw_free_cmd_buf(hba->dev_handle, cmd_buf);
+ SPFC_ERR_IO_STAT(hba, wqe_type);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) CqmHandle(0x%p) send buff clear cmnd failed(0x%x)",
+ hba->port_cfg.port_id, hba->dev_handle, cmq_ret);
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+struct spfc_parent_queue_info *
+spfc_find_parent_queue_info_by_pkg(void *handle, struct unf_frame_pkg *pkg)
+{
+ u32 rport_index = 0;
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ rport_index = pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
+
+ if (unlikely(rport_index >= UNF_SPFC_MAXRPORT_NUM)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[warn]Port(0x%x) send pkg sid_did(0x%x_0x%x), but uplevel allocate invalid rport index: 0x%x",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did, rport_index);
+
+ return NULL;
+ }
+
+ /* parent -->> session */
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[rport_index];
+
+ return parent_queue_info;
+}
+
+struct spfc_parent_queue_info *spfc_find_parent_queue_info_by_id(struct spfc_hba_info *hba,
+ u32 local_id, u32 remote_id)
+{
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+ u32 lport_id;
+ u32 rport_id;
+
+ parent_queue_mgr = hba->parent_queue_mgr;
+ if (!parent_queue_mgr)
+ return NULL;
+
+ /* rport_number -->> parent_number -->> session_number */
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
+ lport_id = parent_queue_mgr->parent_queue[index].parent_sq_info.local_port_id;
+ rport_id = parent_queue_mgr->parent_queue[index].parent_sq_info.remote_port_id;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ /* local_id & remote_id & offload */
+ if (local_id == lport_id && remote_id == rport_id &&
+ parent_queue_mgr->parent_queue[index].offload_state ==
+ SPFC_QUEUE_STATE_OFFLOADED) {
+ parent_queue_info = &parent_queue_mgr->parent_queue[index];
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ return parent_queue_info;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ }
+
+ return NULL;
+}
+
+struct spfc_parent_queue_info *spfc_find_offload_parent_queue(void *handle, u32 local_id,
+ u32 remote_id, u32 rport_index)
+{
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_mgr = hba->parent_queue_mgr;
+ if (!parent_queue_mgr)
+ return NULL;
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ if (rport_index == index)
+ continue;
+ prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (local_id == parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.local_port_id &&
+ remote_id == parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.remote_port_id &&
+ parent_queue_mgr->parent_queue[index].offload_state !=
+ SPFC_QUEUE_STATE_FREE &&
+ parent_queue_mgr->parent_queue[index].offload_state !=
+ SPFC_QUEUE_STATE_INITIALIZED) {
+ parent_queue_info = &parent_queue_mgr->parent_queue[index];
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ return parent_queue_info;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ }
+
+ return NULL;
+}
+
+struct spfc_parent_sq_info *spfc_find_parent_sq_by_pkg(void *handle, struct unf_frame_pkg *pkg)
+{
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ struct cqm_qpc_mpt *cqm_parent_ctxt_obj = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_info = spfc_find_parent_queue_info_by_pkg(hba, pkg);
+ if (unlikely(!parent_queue_info)) {
+ parent_queue_info = spfc_find_parent_queue_info_by_id(hba,
+ pkg->frame_head.csctl_sid &
+ UNF_NPORTID_MASK,
+ pkg->frame_head.rctl_did &
+ UNF_NPORTID_MASK);
+ if (!parent_queue_info) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[err]Port(0x%x) send pkg sid_did(0x%x_0x%x), get a null parent queue information",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return NULL;
+ }
+ }
+
+ cqm_parent_ctxt_obj = (parent_queue_info->parent_ctx.cqm_parent_ctx_obj);
+ if (unlikely(!cqm_parent_ctxt_obj)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[err]Port(0x%x) send pkg sid_did(0x%x_0x%x) with this rport has not alloc parent sq information",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return NULL;
+ }
+
+ return &parent_queue_info->parent_sq_info;
+}
+
+u32 spfc_check_all_parent_queue_free(struct spfc_hba_info *hba)
+{
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ parent_queue_mgr = hba->parent_queue_mgr;
+ if (!parent_queue_mgr) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[err]Port(0x%x) get a null parent queue mgr",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (parent_queue_mgr->parent_queue[index].offload_state != SPFC_QUEUE_STATE_FREE) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ return UNF_RETURN_ERROR;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ }
+
+ return RETURN_OK;
+}
+
+void spfc_flush_specific_scq(struct spfc_hba_info *hba, u32 index)
+{
+ /* Schedule the SCQ tasklet and wait for the flush to complete within
+ * the 2 second timeout period
+ */
+ struct spfc_scq_info *scq_info = NULL;
+ u32 flush_done_time = 0;
+
+ scq_info = &hba->scq_info[index];
+ atomic_set(&scq_info->flush_stat, SPFC_QUEUE_FLUSH_DOING);
+ tasklet_schedule(&scq_info->tasklet);
+
+ /* Wait for a maximum of 2 seconds. If the SCQ soft interrupt is not
+ * scheduled within 2 seconds, only a timeout is reported
+ */
+ while ((atomic_read(&scq_info->flush_stat) != SPFC_QUEUE_FLUSH_DONE) &&
+ (flush_done_time < SPFC_QUEUE_FLUSH_WAIT_TIMEOUT_MS)) {
+ msleep(SPFC_QUEUE_FLUSH_WAIT_MS);
+ flush_done_time += SPFC_QUEUE_FLUSH_WAIT_MS;
+ tasklet_schedule(&scq_info->tasklet);
+ }
+
+ if (atomic_read(&scq_info->flush_stat) != SPFC_QUEUE_FLUSH_DONE) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]Port(0x%x) special scq(0x%x) flush timeout",
+ hba->port_cfg.port_id, index);
+ }
+}
+
+static void spfc_flush_cmd_scq(struct spfc_hba_info *hba)
+{
+ u32 index = 0;
+
+ for (index = SPFC_CMD_SCQN_START; index < SPFC_SESSION_SCQ_NUM;
+ index += SPFC_SCQS_PER_SESSION) {
+ spfc_flush_specific_scq(hba, index);
+ }
+}
+
+static void spfc_flush_sts_scq(struct spfc_hba_info *hba)
+{
+ u32 index = 0;
+
+ /* for each STS SCQ */
+ for (index = SPFC_STS_SCQN_START; index < SPFC_SESSION_SCQ_NUM;
+ index += SPFC_SCQS_PER_SESSION) {
+ spfc_flush_specific_scq(hba, index);
+ }
+}
+
+static void spfc_flush_all_scq(struct spfc_hba_info *hba)
+{
+ spfc_flush_cmd_scq(hba);
+ spfc_flush_sts_scq(hba);
+ /* Flush Default SCQ */
+ spfc_flush_specific_scq(hba, SPFC_SESSION_SCQ_NUM);
+}
+
+void spfc_wait_all_queues_empty(struct spfc_hba_info *hba)
+{
+ spfc_flush_all_scq(hba);
+}
+
+void spfc_set_rport_flush_state(void *handle, bool in_flush)
+{
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_mgr = hba->parent_queue_mgr;
+ if (!parent_queue_mgr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) parent queue manager is empty",
+ hba->port_cfg.port_id);
+ return;
+ }
+
+ /*
+ * for each HBA's R_Port(SQ),
+ * set state with been flushing or flush done
+ */
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ spin_lock_irqsave(&parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.parent_sq_enqueue_lock, flag);
+ if (parent_queue_mgr->parent_queue[index].offload_state != SPFC_QUEUE_STATE_FREE) {
+ parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.port_in_flush = in_flush;
+ }
+ spin_unlock_irqrestore(&parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.parent_sq_enqueue_lock, flag);
+ }
+}
+
+u32 spfc_clear_fetched_sq_wqe(void *handle)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ union spfc_cmdqe cmdqe;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+
+ hba = (struct spfc_hba_info *)handle;
+ /*
+ * After link down the driver stops enqueuing new WQEs, but the hardware
+ * may already have fetched WQEs from the SQ that have not yet completed.
+ * Send a BUFFER_CLEAR command covering the whole exchange ID range so
+ * that the firmware returns those fetched-but-unfinished WQEs.
+ */
+ memset(&cmdqe, 0, sizeof(union spfc_cmdqe));
+ spfc_build_cmdqe_common(&cmdqe, SPFC_TASK_T_BUFFER_CLEAR, 0);
+ cmdqe.buffer_clear.wd1.rx_id_start = hba->exi_base;
+ cmdqe.buffer_clear.wd1.rx_id_end = hba->exi_base + hba->exi_count - 1;
+ cmdqe.buffer_clear.scqn = hba->default_scqn;
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "[info]Port(0x%x) start clear all fetched wqe in start(0x%x) - end(0x%x) scqn(0x%x) stage(0x%x)",
+ hba->port_cfg.port_id, cmdqe.buffer_clear.wd1.rx_id_start,
+ cmdqe.buffer_clear.wd1.rx_id_end, cmdqe.buffer_clear.scqn,
+ hba->queue_set_stage);
+
+ /* Send BUFFER_CLEAR command via ROOT CMDQ */
+ ret = spfc_root_cmdq_enqueue(hba, &cmdqe, sizeof(cmdqe.buffer_clear));
+
+ return ret;
+}
+
+u32 spfc_clear_pending_sq_wqe(void *handle)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 cmdqe_len = 0;
+ ulong flag = 0;
+ struct spfc_parent_ssq_info *ssq_info = NULL;
+ union spfc_cmdqe cmdqe;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
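+ /* Build a FLUSH_SQ command that covers all shared SQs and deliver it
+ * through the root cmdq
+ */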
+ memset(&cmdqe, 0, sizeof(union spfc_cmdqe));
+ spfc_build_cmdqe_common(&cmdqe, SPFC_TASK_T_FLUSH_SQ, 0);
+ cmdqe.flush_sq.wd0.wqe_type = SPFC_TASK_T_FLUSH_SQ;
+ cmdqe.flush_sq.wd1.scqn = SPFC_LSW(hba->default_scqn);
+ cmdqe.flush_sq.wd1.port_id = hba->port_index;
+
+ ssq_info = &hba->parent_queue_mgr->shared_queue[ARRAY_INDEX_0].parent_ssq_info;
+
+ spin_lock_irqsave(&ssq_info->parent_sq_enqueue_lock, flag);
+ cmdqe.flush_sq.wd3.first_sq_xid = ssq_info->context_id;
+ spin_unlock_irqrestore(&ssq_info->parent_sq_enqueue_lock, flag);
+ cmdqe.flush_sq.wd0.entry_count = SPFC_MAX_SSQ_NUM;
+ cmdqe.flush_sq.wd3.sqqid_start_per_session = SPFC_SQ_QID_START_PER_QPC;
+ cmdqe.flush_sq.wd3.sqcnt_per_session = SPFC_SQ_NUM_PER_QPC;
+ cmdqe.flush_sq.wd1.last_wqe = 1;
+
+ /* Clear pending Queue */
+ cmdqe_len = (u32)(sizeof(cmdqe.flush_sq));
+ ret = spfc_root_cmdq_enqueue(hba, &cmdqe, (u16)cmdqe_len);
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "[info]Port(0x%x) clear total 0x%x SQ in this CMDQE(last=%u), stage (0x%x)",
+ hba->port_cfg.port_id, SPFC_MAX_SSQ_NUM,
+ cmdqe.flush_sq.wd1.last_wqe, hba->queue_set_stage);
+
+ return ret;
+}
+
+u32 spfc_wait_queue_set_flush_done(struct spfc_hba_info *hba)
+{
+ u32 flush_done_time = 0;
+ u32 ret = RETURN_OK;
+
+ while ((hba->queue_set_stage != SPFC_QUEUE_SET_STAGE_FLUSHDONE) &&
+ (flush_done_time < SPFC_QUEUE_FLUSH_WAIT_TIMEOUT_MS)) {
+ msleep(SPFC_QUEUE_FLUSH_WAIT_MS);
+ flush_done_time += SPFC_QUEUE_FLUSH_WAIT_MS;
+ }
+
+ if (hba->queue_set_stage != SPFC_QUEUE_SET_STAGE_FLUSHDONE) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]Port(0x%x) queue sets flush timeout with stage(0x%x)",
+ hba->port_cfg.port_id, hba->queue_set_stage);
+
+ ret = UNF_RETURN_ERROR;
+ }
+
+ return ret;
+}
+
+void spfc_disable_all_scq_schedule(struct spfc_hba_info *hba)
+{
+ struct spfc_scq_info *scq_info = NULL;
+ u32 index = 0;
+
+ for (index = 0; index < SPFC_TOTAL_SCQ_NUM; index++) {
+ scq_info = &hba->scq_info[index];
+ tasklet_disable(&scq_info->tasklet);
+ }
+}
+
+void spfc_disable_queues_dispatch(struct spfc_hba_info *hba)
+{
+ spfc_disable_all_scq_schedule(hba);
+}
+
+void spfc_enable_all_scq_schedule(struct spfc_hba_info *hba)
+{
+ struct spfc_scq_info *scq_info = NULL;
+ u32 index = 0;
+
+ for (index = 0; index < SPFC_TOTAL_SCQ_NUM; index++) {
+ scq_info = &hba->scq_info[index];
+ tasklet_enable(&scq_info->tasklet);
+ }
+}
+
+void spfc_enalbe_queues_dispatch(void *handle)
+{
+ spfc_enable_all_scq_schedule((struct spfc_hba_info *)handle);
+}
+
+/*
+ *Function Name : spfc_clear_els_srq
+ *Function Description: When the port is being removed, the resources
+ *related to the ELS SRQ are released.
+ *Input Parameters : *hba
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+void spfc_clear_els_srq(struct spfc_hba_info *hba)
+{
+#define SPFC_WAIT_CLR_SRQ_CTX_MS 500
+#define SPFC_WAIT_CLR_SRQ_CTX_LOOP_TIMES 60
+
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_srq_info *srq_info = NULL;
+
+ srq_info = &hba->els_srq_info;
+
+ spin_lock_irqsave(&srq_info->srq_spin_lock, flag);
+ if (!srq_info->enable || srq_info->state == SPFC_CLEAN_DOING) {
+ spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
+
+ return;
+ }
+ srq_info->enable = false;
+ srq_info->state = SPFC_CLEAN_DOING;
+ spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
+
+ spfc_send_clear_srq_cmd(hba, &hba->els_srq_info);
+
+ /* Wait for the uCode to clear the SRQ context; the timeout is 30s */
+ while ((srq_info->state != SPFC_CLEAN_DONE) &&
+ (index < SPFC_WAIT_CLR_SRQ_CTX_LOOP_TIMES)) {
+ msleep(SPFC_WAIT_CLR_SRQ_CTX_MS);
+ index++;
+ }
+
+ if (srq_info->state != SPFC_CLEAN_DONE) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]SPFC Port(0x%x) clear els srq timeout",
+ hba->port_cfg.port_id);
+ }
+}
+
+u32 spfc_wait_all_parent_queue_free(struct spfc_hba_info *hba)
+{
+#define SPFC_MAX_LOOP_TIMES 6000
+#define SPFC_WAIT_ONE_TIME_MS 5
+ u32 index = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ do {
+ ret = spfc_check_all_parent_queue_free(hba);
+ if (ret == RETURN_OK)
+ break;
+
+ index++;
+ msleep(SPFC_WAIT_ONE_TIME_MS);
+ } while (index < SPFC_MAX_LOOP_TIMES);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[warn]Port(0x%x) wait all parent queue state free timeout",
+ hba->port_cfg.port_id);
+ }
+
+ return ret;
+}
+
+/*
+ *Function Name : spfc_queue_pre_process
+ *Function Description: When the port is being reset or removed, the queues
+ * need to be pre-processed.
+ *Input Parameters : *handle,
+ * clean
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+void spfc_queue_pre_process(void *handle, bool clean)
+{
+#define SPFC_WAIT_LINKDOWN_EVENT_MS 500
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ /* From port reset & port remove */
+ /* 1. Wait for 2s and wait for QUEUE to be FLUSH Done. */
+ if (spfc_wait_queue_set_flush_done(hba) != RETURN_OK) {
+ /*
+ * During the process of removing the card, if the port is
+ * disabled and the flush done is not available, the chip is
+ * powered off or the pcie link is disconnected. In this case,
+ * you can proceed with the next step.
+ */
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]SPFC Port(0x%x) clean queue sets timeout",
+ hba->port_cfg.port_id);
+ }
+
+ /*
+ * 2. Port remove:
+ * 2.1 free parent queue
+ * 2.2 clear & destroy ELS/SIRT SRQ
+ */
+ if (clean) {
+ if (spfc_wait_all_parent_queue_free(hba) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_WARN,
+ "[warn]SPFC Port(0x%x) free all parent queue timeout",
+ hba->port_cfg.port_id);
+ }
+
+ /* clear & then destroy ELS/SIRT SRQ */
+ spfc_clear_els_srq(hba);
+ }
+
+ msleep(SPFC_WAIT_LINKDOWN_EVENT_MS);
+
+ /*
+ * 3. The internal resources of the port chip have been flushed. However,
+ * residual scqe or rq entries may remain in the queues, so the tasklet
+ * scheduling is forced once more.
+ */
+ spfc_wait_all_queues_empty(hba);
+
+ /* 4. Disable tasklet scheduling for upstream queues on the software
+ * layer
+ */
+ spfc_disable_queues_dispatch(hba);
+}
+
+void spfc_queue_post_process(void *hba)
+{
+ spfc_enalbe_queues_dispatch((struct spfc_hba_info *)hba);
+}
+
+/*
+ *Function Name : spfc_push_delay_sqe
+ *Function Description: Check whether the target sq is still being deleted.
+ * If so, cache the sqe so that it can be re-sent later.
+ *Input Parameters : *hba,
+ * *offload_parent_queue,
+ * *sqe,
+ * *pkg
+ *Output Parameters : N/A
+ *Return Type : u32
+ */
+u32 spfc_push_delay_sqe(void *hba,
+ struct spfc_parent_queue_info *offload_parent_queue,
+ struct spfc_sqe *sqe, struct unf_frame_pkg *pkg)
+{
+ ulong flag = 0;
+ spinlock_t *prtq_state_lock = NULL;
+
+ prtq_state_lock = &offload_parent_queue->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (offload_parent_queue->offload_state != SPFC_QUEUE_STATE_INITIALIZED &&
+ offload_parent_queue->offload_state != SPFC_QUEUE_STATE_FREE) {
+ memcpy(&offload_parent_queue->parent_sq_info.delay_sqe.sqe,
+ sqe, sizeof(struct spfc_sqe));
+ offload_parent_queue->parent_sq_info.delay_sqe.start_jiff = jiffies;
+ offload_parent_queue->parent_sq_info.delay_sqe.time_out =
+ pkg->private_data[PKG_PRIVATE_XCHG_TIMEER];
+ offload_parent_queue->parent_sq_info.delay_sqe.valid = true;
+ offload_parent_queue->parent_sq_info.delay_sqe.rport_index =
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
+ offload_parent_queue->parent_sq_info.delay_sqe.sid =
+ pkg->frame_head.csctl_sid & UNF_NPORTID_MASK;
+ offload_parent_queue->parent_sq_info.delay_sqe.did =
+ pkg->frame_head.rctl_did & UNF_NPORTID_MASK;
+ offload_parent_queue->parent_sq_info.delay_sqe.xid =
+ sqe->ts_sl.xid;
+ offload_parent_queue->parent_sq_info.delay_sqe.ssqn =
+ (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%x) delay send ELS, OXID(0x%x), RXID(0x%x)",
+ ((struct spfc_hba_info *)hba)->port_cfg.port_id,
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX],
+ UNF_GET_OXID(pkg), UNF_GET_RXID(pkg));
+
+ return RETURN_OK;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ return UNF_RETURN_ERROR;
+}
+
+static u32 spfc_pop_session_valid_check(struct spfc_hba_info *hba,
+ struct spfc_delay_sqe_ctrl_info *sqe_info, u32 rport_index)
+{
+ if (!sqe_info->valid)
+ return UNF_RETURN_ERROR;
+
+ if (jiffies_to_msecs(jiffies - sqe_info->start_jiff) >= sqe_info->time_out) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) pop delay enable session failed, start time 0x%llx, timeout value 0x%x",
+ hba->port_cfg.port_id, sqe_info->start_jiff,
+ sqe_info->time_out);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) pop delay enable session failed, rport index(0x%x) is invalid",
+ hba->port_cfg.port_id, rport_index);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+/*
+ *Function Name : spfc_pop_delay_sqe
+ *Function Description: The sqe that was delayed due to the deletion of the
+ * old connection is sent to the parent sq for processing.
+ *Input Parameters : *hba, *sqe_info
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_pop_delay_sqe(struct spfc_hba_info *hba,
+ struct spfc_delay_sqe_ctrl_info *sqe_info)
+{
+ ulong flag;
+ u32 delay_rport_index = INVALID_VALUE32;
+ struct spfc_parent_queue_info *parent_queue = NULL;
+ enum spfc_parent_queue_state offload_state =
+ SPFC_QUEUE_STATE_DESTROYING;
+ struct spfc_delay_destroy_ctrl_info destroy_sqe_info;
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_parent_sq_info *sq_info = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ memset(&destroy_sqe_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
+ delay_rport_index = sqe_info->rport_index;
+
+ /* According to the sequence, the rport index id is reported and then
+ * the sqe of the new link setup request is delivered.
+ */
+ ret = spfc_pop_session_valid_check(hba, sqe_info, delay_rport_index);
+
+ if (ret != RETURN_OK)
+ return;
+
+ parent_queue = &hba->parent_queue_mgr->parent_queue[delay_rport_index];
+ sq_info = &parent_queue->parent_sq_info;
+ prtq_state_lock = &parent_queue->parent_queue_state_lock;
+ /* Before delivering to the sq, check the state again to ensure the
+ * queue is still in the initialized state and has not been torn down.
+ * Requests in any other state are discarded directly.
+ */
+ spin_lock_irqsave(prtq_state_lock, flag);
+ offload_state = parent_queue->offload_state;
+
+ /* Before re-enqueuing, check whether the offload state and connection
+ * information are consistent, to prevent the old request from being
+ * sent after the connection state has changed.
+ */
+ if (offload_state == SPFC_QUEUE_STATE_INITIALIZED &&
+ parent_queue->parent_sq_info.local_port_id == sqe_info->sid &&
+ parent_queue->parent_sq_info.remote_port_id == sqe_info->did &&
+ SPFC_CHECK_XID_MATCHED(parent_queue->parent_sq_info.context_id,
+ sqe_info->sqe.ts_sl.xid)) {
+ parent_queue->offload_state = SPFC_QUEUE_STATE_OFFLOADING;
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) pop up delay session enable, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, offload state 0x%x",
+ hba->port_cfg.port_id, sqe_info->start_jiff,
+ sqe_info->time_out, delay_rport_index, offload_state);
+
+ if (spfc_parent_sq_enqueue(sq_info, &sqe_info->sqe, sqe_info->ssqn) != RETURN_OK) {
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (parent_queue->offload_state == SPFC_QUEUE_STATE_OFFLOADING)
+ parent_queue->offload_state = offload_state;
+
+ if (parent_queue->parent_sq_info.destroy_sqe.valid) {
+ memcpy(&destroy_sqe_info,
+ &parent_queue->parent_sq_info.destroy_sqe,
+ sizeof(struct spfc_delay_destroy_ctrl_info));
+
+ parent_queue->parent_sq_info.destroy_sqe.valid = false;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ spfc_pop_destroy_parent_queue_sqe((void *)hba, &destroy_sqe_info);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) pop up delay session enable fail, recover offload state 0x%x",
+ hba->port_cfg.port_id, parent_queue->offload_state);
+ return;
+ }
+ } else {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port 0x%x pop delay session enable failed, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, offload state 0x%x",
+ hba->port_cfg.port_id, sqe_info->start_jiff,
+ sqe_info->time_out, delay_rport_index,
+ offload_state);
+ }
+}
+
+void spfc_push_destroy_parent_queue_sqe(void *hba,
+ struct spfc_parent_queue_info *offloading_parent_queue,
+ struct unf_port_info *rport_info)
+{
+ offloading_parent_queue->parent_sq_info.destroy_sqe.valid = true;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.rport_index = rport_info->rport_index;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.time_out =
+ SPFC_SQ_DEL_STAGE_TIMEOUT_MS;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.start_jiff = jiffies;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.rport_info.nport_id =
+ rport_info->nport_id;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.rport_info.rport_index =
+ rport_info->rport_index;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.rport_info.port_name =
+ rport_info->port_name;
+}
+
+/*
+ *Function Name : spfc_pop_destroy_parent_queue_sqe
+ *Function Description: The delete-connection sqe that was delayed due to
+ * connection uninstallation is sent to the parent sq for processing.
+ *Input Parameters : *handle, *destroy_sqe_info
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+void spfc_pop_destroy_parent_queue_sqe(void *handle,
+ struct spfc_delay_destroy_ctrl_info *destroy_sqe_info)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ ulong flag;
+ u32 index = INVALID_VALUE32;
+ struct spfc_parent_queue_info *parent_queue = NULL;
+ enum spfc_parent_queue_state offload_state =
+ SPFC_QUEUE_STATE_DESTROYING;
+ struct spfc_hba_info *hba = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ if (!destroy_sqe_info->valid)
+ return;
+
+ if (jiffies_to_msecs(jiffies - destroy_sqe_info->start_jiff) < destroy_sqe_info->time_out) {
+ index = destroy_sqe_info->rport_index;
+ parent_queue = &hba->parent_queue_mgr->parent_queue[index];
+ prtq_state_lock = &parent_queue->parent_queue_state_lock;
+ /* Before delivery, check the state again to ensure the queue is still
+ * offloaded or initialized. Requests in any other state are discarded
+ * directly.
+ */
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ offload_state = parent_queue->offload_state;
+ if (offload_state == SPFC_QUEUE_STATE_OFFLOADED ||
+ offload_state == SPFC_QUEUE_STATE_INITIALIZED) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port 0x%x pop up delay destroy parent sq, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, offload state 0x%x",
+ hba->port_cfg.port_id,
+ destroy_sqe_info->start_jiff,
+ destroy_sqe_info->time_out,
+ index, offload_state);
+ ret = spfc_free_parent_resource(hba, &destroy_sqe_info->rport_info);
+ } else {
+ ret = UNF_RETURN_ERROR;
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ }
+ }
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port 0x%x pop delay destroy parent sq failed, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, rport nport id 0x%x,offload state 0x%x",
+ hba->port_cfg.port_id, destroy_sqe_info->start_jiff,
+ destroy_sqe_info->time_out, index,
+ destroy_sqe_info->rport_info.nport_id, offload_state);
+ }
+}
+
+void spfc_free_parent_queue_info(void *handle, struct spfc_parent_queue_info *parent_queue_info)
+{
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 rport_index = INVALID_VALUE32;
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_delay_sqe_ctrl_info sqe_info;
+ spinlock_t *prtq_state_lock = NULL;
+
+ memset(&sqe_info, 0, sizeof(struct spfc_delay_sqe_ctrl_info));
+ hba = (struct spfc_hba_info *)handle;
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) begin to free parent sq, rport_index(0x%x)",
+ hba->port_cfg.port_id, parent_queue_info->parent_sq_info.rport_index);
+
+ if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_FREE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[info]Port(0x%x) duplicate free parent sq, rport_index(0x%x)",
+ hba->port_cfg.port_id,
+ parent_queue_info->parent_sq_info.rport_index);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ return;
+ }
+
+ if (parent_queue_info->parent_sq_info.delay_sqe.valid) {
+ memcpy(&sqe_info, &parent_queue_info->parent_sq_info.delay_sqe,
+ sizeof(struct spfc_delay_sqe_ctrl_info));
+ }
+
+ rport_index = parent_queue_info->parent_sq_info.rport_index;
+
+ /* Release the parent context and SQ information that were associated
+ * with this parent queue during initialization
+ */
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ spfc_free_parent_sq(hba, parent_queue_info);
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ /* Reset all queue ids to the invalid value */
+ parent_queue_info->parent_cmd_scq_info.cqm_queue_id = INVALID_VALUE32;
+ parent_queue_info->parent_sts_scq_info.cqm_queue_id = INVALID_VALUE32;
+ parent_queue_info->parent_els_srq_info.cqm_queue_id = INVALID_VALUE32;
+ parent_queue_info->offload_state = SPFC_QUEUE_STATE_FREE;
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_RELEASE_RPORT_INDEX,
+ (void *)&rport_index);
+
+ spfc_pop_delay_sqe(hba, &sqe_info);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) free parent sq with rport_index(0x%x) failed",
+ hba->port_cfg.port_id, rport_index);
+ }
+}
+
+static void spfc_do_port_reset(struct work_struct *work)
+{
+ struct spfc_suspend_sqe_info *suspend_sqe = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ suspend_sqe = container_of(work, struct spfc_suspend_sqe_info,
+ timeout_work.work);
+ hba = (struct spfc_hba_info *)suspend_sqe->hba;
+ FC_CHECK_RETURN_VOID(hba);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) magic num (0x%x)do port reset.",
+ hba->port_cfg.port_id, suspend_sqe->magic_num);
+
+ spfc_port_reset(hba);
+}
+
+static void
+spfc_push_sqe_suspend(void *hba, struct spfc_parent_queue_info *parent_queue,
+ struct spfc_sqe *sqe, struct unf_frame_pkg *pkg, u32 magic_num)
+{
+#define SPFC_SQ_NOP_TIMEOUT_MS 1000
+ ulong flag = 0;
+ u32 sqn_base;
+ struct spfc_parent_sq_info *sq = NULL;
+ struct spfc_suspend_sqe_info *suspend_sqe = NULL;
+
+ sq = &parent_queue->parent_sq_info;
+ suspend_sqe =
+ kmalloc(sizeof(struct spfc_suspend_sqe_info), GFP_ATOMIC);
+ if (!suspend_sqe) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[err]alloc suspend sqe memory failed");
+ return;
+ }
+ memset(suspend_sqe, 0, sizeof(struct spfc_suspend_sqe_info));
+ memcpy(&suspend_sqe->sqe, sqe, sizeof(struct spfc_sqe));
+ suspend_sqe->magic_num = magic_num;
+ suspend_sqe->old_offload_sts = sq->need_offloaded;
+ suspend_sqe->hba = sq->hba;
+
+ if (pkg) {
+ memcpy(&suspend_sqe->pkg, pkg, sizeof(struct unf_frame_pkg));
+ } else {
+ sqn_base = sq->sqn_base;
+ suspend_sqe->pkg.private_data[PKG_PRIVATE_XCHG_SSQ_INDEX] =
+ sqn_base;
+ }
+
+ INIT_DELAYED_WORK(&suspend_sqe->timeout_work, spfc_do_port_reset);
+ INIT_LIST_HEAD(&suspend_sqe->list_sqe_entry);
+
+ spin_lock_irqsave(&parent_queue->parent_queue_state_lock, flag);
+ list_add_tail(&suspend_sqe->list_sqe_entry, &sq->suspend_sqe_list);
+ spin_unlock_irqrestore(&parent_queue->parent_queue_state_lock, flag);
+
+ (void)queue_delayed_work(((struct spfc_hba_info *)hba)->work_queue,
+ &suspend_sqe->timeout_work,
+ (ulong)msecs_to_jiffies((u32)SPFC_SQ_NOP_TIMEOUT_MS));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) magic num(0x%x)suspend sqe",
+ ((struct spfc_hba_info *)hba)->port_cfg.port_id, magic_num);
+}
+
+u32 spfc_pop_suspend_sqe(void *handle, struct spfc_parent_queue_info *parent_queue,
+ struct spfc_suspend_sqe_info *suspen_sqe)
+{
+ ulong flag;
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_parent_sq_info *sq = NULL;
+ u16 ssqn;
+ struct unf_frame_pkg *pkg = NULL;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
+ u8 task_type;
+ spinlock_t *prtq_state_lock = NULL;
+
+ sq = &parent_queue->parent_sq_info;
+ task_type = suspen_sqe->sqe.ts_sl.task_type;
+ pkg = &suspen_sqe->pkg;
+ if (!pkg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "[error]pkt is null.");
+ return UNF_RETURN_ERROR;
+ }
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) pop up suspend wqe sqn (0x%x) TaskType(0x%x)",
+ hba->port_cfg.port_id, ssqn, task_type);
+
+ prtq_state_lock = &parent_queue->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+ if (SPFC_RPORT_NOT_OFFLOADED(parent_queue) &&
+ (task_type == SPFC_SQE_ELS_RSP ||
+ task_type == SPFC_TASK_T_ELS)) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ /* Send PLOGI, PLOGI ACC or SCR via the default session when the session is not offloaded */
+ ret = spfc_send_els_via_default_session(hba, &suspen_sqe->sqe, pkg, parent_queue);
+ } else {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ ret = spfc_parent_sq_enqueue(sq, &suspen_sqe->sqe, ssqn);
+ }
+ return ret;
+}
+
+static void spfc_build_nop_sqe(struct spfc_hba_info *hba, struct spfc_parent_sq_info *sq,
+ struct spfc_sqe *sqe, u32 magic_num, u32 scqn)
+{
+ sqe->ts_sl.task_type = SPFC_SQE_NOP;
+ sqe->ts_sl.wd0.conn_id = (u16)(sq->rport_index);
+ sqe->ts_sl.cont.nop_sq.wd0.scqn = scqn;
+ sqe->ts_sl.cont.nop_sq.magic_num = magic_num;
+ spfc_build_common_wqe_ctrls(&sqe->ctrl_sl,
+ sizeof(struct spfc_sqe_ts) / SPFC_WQE_SECTION_CHUNK_SIZE);
+}
+
+u32 spfc_send_nop_cmd(void *handle, struct spfc_parent_sq_info *parent_sq_info,
+ u32 magic_num, u16 sqn)
+{
+ struct spfc_sqe empty_sq_sqe;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
+ u32 ret;
+
+ memset(&empty_sq_sqe, 0, sizeof(struct spfc_sqe));
+
+ spfc_build_nop_sqe(hba, parent_sq_info, &empty_sq_sqe, magic_num, hba->default_scqn);
+ ret = spfc_parent_sq_enqueue(parent_sq_info, &empty_sq_sqe, sqn);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]send nop cmd scqn(0x%x) sq(0x%x).",
+ hba->default_scqn, sqn);
+ return ret;
+}
+
+u32 spfc_suspend_sqe_and_send_nop(void *handle,
+ struct spfc_parent_queue_info *parent_queue,
+ struct spfc_sqe *sqe, struct unf_frame_pkg *pkg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 magic_num;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
+ struct spfc_parent_sq_info *parent_sq = &parent_queue->parent_sq_info;
+ struct unf_lport *lport = (struct unf_lport *)hba->lport;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
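+ /* The same magic number is carried in both the suspended sqe and the
+ * NOP command so that they can be correlated on completion
+ */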
+ if (pkg) {
+ magic_num = pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+ } else {
+ magic_num = (u32)atomic64_inc_return(&((struct unf_lport *)
+ lport->root_lport)->exchg_index);
+ }
+
+ spfc_push_sqe_suspend(hba, parent_queue, sqe, pkg, magic_num);
+ if (SPFC_RPORT_NOT_OFFLOADED(parent_queue))
+ parent_sq->need_offloaded = SPFC_NEED_DO_OFFLOAD;
+
+ ret = spfc_send_nop_cmd(hba, parent_sq, magic_num,
+ (u16)parent_sq->sqn_base);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[err]Port(0x%x) rport_index(0x%x)send sq empty failed.",
+ hba->port_cfg.port_id, parent_sq->rport_index);
+ }
+ return ret;
+}
+
+void spfc_build_session_rst_wqe(void *handle, struct spfc_parent_sq_info *sq,
+ struct spfc_sqe *sqe, enum spfc_session_reset_mode mode, u32 scqn)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ /* The reset session command does not occupy xid. Therefore,
+ * 0xffff can be used to align with the microcode.
+ */
+ sqe->ts_sl.task_type = SPFC_SQE_SESS_RST;
+ sqe->ts_sl.local_xid = 0xffff;
+ sqe->ts_sl.wd0.conn_id = (u16)(sq->rport_index);
+ sqe->ts_sl.wd0.remote_xid = 0xffff;
+ sqe->ts_sl.cont.reset_session.wd0.reset_exch_start = hba->exi_base;
+ sqe->ts_sl.cont.reset_session.wd0.reset_exch_end = hba->exi_base + (hba->exi_count - 1);
+ sqe->ts_sl.cont.reset_session.wd1.reset_did = sq->remote_port_id;
+ sqe->ts_sl.cont.reset_session.wd1.mode = mode;
+ sqe->ts_sl.cont.reset_session.wd2.reset_sid = sq->local_port_id;
+ sqe->ts_sl.cont.reset_session.wd3.scqn = scqn;
+
+ spfc_build_common_wqe_ctrls(&sqe->ctrl_sl,
+ sizeof(struct spfc_sqe_ts) / SPFC_WQE_SECTION_CHUNK_SIZE);
+}
+
+u32 spfc_send_session_rst_cmd(void *handle,
+ struct spfc_parent_queue_info *parent_queue_info,
+ enum spfc_session_reset_mode mode)
+{
+ struct spfc_parent_sq_info *sq = NULL;
+ struct spfc_sqe rst_sess_sqe;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 sts_scqn = 0;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ memset(&rst_sess_sqe, 0, sizeof(struct spfc_sqe));
+ sq = &parent_queue_info->parent_sq_info;
+ sts_scqn = hba->default_scqn;
+
+ spfc_build_session_rst_wqe(hba, sq, &rst_sess_sqe, mode, sts_scqn);
+ ret = spfc_suspend_sqe_and_send_nop(hba, parent_queue_info, &rst_sess_sqe, NULL);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]RPort(0x%x) send SESS_RST(%d) start_exch_id(0x%x) end_exch_id(0x%x), scqn(0x%x) ctx_id(0x%x) cid(0x%x)",
+ sq->rport_index, mode,
+ rst_sess_sqe.ts_sl.cont.reset_session.wd0.reset_exch_start,
+ rst_sess_sqe.ts_sl.cont.reset_session.wd0.reset_exch_end,
+ rst_sess_sqe.ts_sl.cont.reset_session.wd3.scqn,
+ sq->context_id, sq->cache_id);
+ return ret;
+}
+
+void spfc_rcvd_els_from_srq_timeout(struct work_struct *work)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ hba = container_of(work, struct spfc_hba_info, srq_delay_info.del_work.work);
+
+ /* If the delayed frame has not been processed yet, push it to the CM
+ * layer; it may already have been handled when the root rq received data.
+ */
+ if (hba->srq_delay_info.srq_delay_flag) {
+ spfc_recv_els_cmnd(hba, &hba->srq_delay_info.frame_pkg,
+ hba->srq_delay_info.frame_pkg.unf_cmnd_pload_bl.buffer_ptr,
+ 0, false);
+ hba->srq_delay_info.srq_delay_flag = 0;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) srq delay work timeout, send saved plgoi to CM",
+ hba->port_cfg.port_id);
+ }
+}
+
+u32 spfc_flush_ini_resp_queue(void *handle)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ hba = (struct spfc_hba_info *)handle;
+
+ spfc_flush_sts_scq(hba);
+
+ return RETURN_OK;
+}
+
+static void spfc_handle_aeq_queue_error(struct spfc_hba_info *hba,
+ struct spfc_aqe_data *aeq_msg)
+{
+ u32 sts_scqn_local = 0;
+ u32 full_ci = INVALID_VALUE32;
+ u32 full_ci_owner = INVALID_VALUE32;
+ struct spfc_scq_info *scq_info = NULL;
+
+ sts_scqn_local = SPFC_RPORTID_TO_STS_SCQN(aeq_msg->wd0.conn_id);
+ scq_info = &hba->scq_info[sts_scqn_local];
+ full_ci = scq_info->ci;
+ full_ci_owner = scq_info->ci_owner;
+
+ /* Force a flush of the STS SCQ by scheduling its tasklet; the AEQE is
+ * acknowledged regardless of whether the scq has been processed
+ */
+ tasklet_schedule(&scq_info->tasklet);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%x) LocalScqn(0x%x) CqmScqn(0x%x) is full, force flush CI from (%u|0x%x) to (%u|0x%x)",
+ hba->port_cfg.port_id, aeq_msg->wd0.conn_id,
+ sts_scqn_local, scq_info->scqn, full_ci_owner, full_ci,
+ scq_info->ci_owner, scq_info->ci);
+}
+
+void spfc_process_aeqe(void *handle, u8 event_type, u8 *val)
+{
+ u32 ret = RETURN_OK;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
+ struct spfc_aqe_data aeq_msg;
+ u8 event_code = INVALID_VALUE8;
+ u64 event_val = *((u64 *)val);
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ memcpy(&aeq_msg, (struct spfc_aqe_data *)&event_val, sizeof(struct spfc_aqe_data));
+ event_code = (u8)aeq_msg.wd0.evt_code;
+
+ switch (event_type) {
+ case FC_AEQ_EVENT_QUEUE_ERROR:
+ spfc_handle_aeq_queue_error(hba, &aeq_msg);
+ break;
+
+ case FC_AEQ_EVENT_WQE_FATAL_ERROR:
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport,
+ UNF_PORT_ABNORMAL_RESET, NULL);
+ break;
+
+ case FC_AEQ_EVENT_CTX_FATAL_ERROR:
+ break;
+
+ case FC_AEQ_EVENT_OFFLOAD_ERROR:
+ ret = spfc_handle_aeq_off_load_err(hba, &aeq_msg);
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[warn]Port(0x%x) receive an unsupported AEQ EventType(0x%x) EventVal(0x%llx).",
+ hba->port_cfg.port_id, event_type, (u64)event_val);
+ return;
+ }
+
+ if (event_code < FC_AEQ_EVT_ERR_CODE_BUTT)
+ SPFC_AEQ_ERR_TYPE_STAT(hba, aeq_msg.wd0.evt_code);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) receive AEQ EventType(0x%x) EventVal(0x%llx) EvtCode(0x%x) Conn_id(0x%x) Xid(0x%x) %s",
+ hba->port_cfg.port_id, event_type, (u64)event_val, event_code,
+ aeq_msg.wd0.conn_id, aeq_msg.wd1.xid,
+ (ret == UNF_RETURN_ERROR) ? "ERROR" : "OK");
+}
+
+void spfc_sess_resource_free_sync(void *handle,
+ struct unf_port_info *rport_info)
+{
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ ulong flag = 0;
+ u32 wait_sq_cnt = 0;
+ struct spfc_hba_info *hba = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+ u32 index = SPFC_DEFAULT_RPORT_INDEX;
+
+ FC_CHECK_RETURN_VOID(handle);
+ FC_CHECK_RETURN_VOID(rport_info);
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ (void)spfc_free_parent_resource((void *)hba, rport_info);
+
+ for (;;) {
+ spin_lock_irqsave(prtq_state_lock, flag);
+ if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_FREE) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ break;
+ }
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ msleep(SPFC_WAIT_SESS_FREE_ONE_TIME_MS);
+ wait_sq_cnt++;
+ if (wait_sq_cnt >= SPFC_MAX_WAIT_LOOP_TIMES)
+ break;
+ }
+}
diff --git a/drivers/scsi/spfc/hw/spfc_queue.h b/drivers/scsi/spfc/hw/spfc_queue.h
new file mode 100644
index 000000000000..b1184eb17556
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_queue.h
@@ -0,0 +1,711 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_QUEUE_H
+#define SPFC_QUEUE_H
+
+#include "unf_type.h"
+#include "spfc_wqe.h"
+#include "spfc_cqm_main.h"
+#define SPFC_MIN_WP_NUM (2)
+#define SPFC_EXTEND_WQE_OFFSET (128)
+#define SPFC_SQE_SIZE (256)
+#define WQE_MARKER_0 (0x0)
+#define WQE_MARKER_6B (0x6b)
+
+/* PARENT SQ & Context defines */
+#define SPFC_MAX_MSN (65535)
+#define SPFC_MSN_MASK (0xffff000000000000LL)
+#define SPFC_SQE_TS_SIZE (72)
+#define SPFC_SQE_FIRST_OBIT_DW_POS (0)
+#define SPFC_SQE_SECOND_OBIT_DW_POS (30)
+#define SPFC_SQE_OBIT_SET_MASK_BE (0x80)
+#define SPFC_SQE_OBIT_CLEAR_MASK_BE (0xffffff7f)
+#define SPFC_MAX_SQ_TASK_TYPE_CNT (128)
+#define SPFC_SQ_NUM_PER_QPC (3)
+#define SPFC_SQ_QID_START_PER_QPC 0
+#define SPFC_SQ_SPACE_OFFSET (64)
+#define SPFC_MAX_SSQ_NUM (SPFC_SQ_NUM_PER_QPC * 63 + 1) /* must be a multiple of 3 */
+#define SPFC_DIRECTWQE_SQ_INDEX (SPFC_MAX_SSQ_NUM - 1)
+
+/* Note: if the location of the flush done bit changes, the definition must be
+ * modified again
+ */
+#define SPFC_CTXT_FLUSH_DONE_DW_POS (58)
+#define SPFC_CTXT_FLUSH_DONE_MASK_BE (0x4000)
+#define SPFC_CTXT_FLUSH_DONE_MASK_LE (0x400000)
+
+#define SPFC_PCIE_TEMPLATE (0)
+#define SPFC_DMA_ATTR_OFST (0)
+
+/*
+ * When the driver assembles a WQE SGE, the GPA parity bits are multiplexed
+ * as follows: {rsvd'2,zerocopysoro'2,zerocopy_dmaattr_idx'6,pcie_template'6}
+ */
+#define SPFC_PCIE_TEMPLATE_OFFSET 0
+#define SPFC_PCIE_ZEROCOPY_DMAATTR_IDX_OFFSET 6
+#define SPFC_PCIE_ZEROCOPY_SO_RO_OFFSET 12
+#define SPFC_PCIE_RELAXED_ORDERING (1)
+#define SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE \
+ (SPFC_PCIE_RELAXED_ORDERING << SPFC_PCIE_ZEROCOPY_SO_RO_OFFSET | \
+ SPFC_DMA_ATTR_OFST << SPFC_PCIE_ZEROCOPY_DMAATTR_IDX_OFFSET | \
+ SPFC_PCIE_TEMPLATE)
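+
+/*
+ * Illustrative value of the multiplexed field above: with relaxed ordering
+ * (1) in the so_ro bits, dma_attr index 0 and pcie template 0,
+ * SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE evaluates to
+ * (1 << 12) | (0 << 6) | 0 = 0x1000.
+ */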
+
+#define SPFC_GET_SQ_HEAD(sq) \
+ list_entry(UNF_OS_LIST_NEXT(&(sq)->list_linked_list_sq), \
+ struct spfc_wqe_page, entry_wpg)
+#define SPFC_GET_SQ_TAIL(sq) \
+ list_entry(UNF_OS_LIST_PREV(&(sq)->list_linked_list_sq), \
+ struct spfc_wqe_page, entry_wpg)
+#define SPFC_SQ_IO_STAT(ssq, io_type) \
+ (atomic_inc(&(ssq)->io_stat[io_type]))
+#define SPFC_SQ_IO_STAT_READ(ssq, io_type) \
+ (atomic_read(&(ssq)->io_stat[io_type]))
+#define SPFC_GET_QUEUE_CMSN(ssq) \
+ ((u32)(be64_to_cpu(((((ssq)->queue_header)->ci_record) & SPFC_MSN_MASK))))
+#define SPFC_GET_WP_END_CMSN(head_start_cmsn, wqe_num_per_buf) \
+ ((u16)(((u32)(head_start_cmsn) + (u32)(wqe_num_per_buf) - 1) % (SPFC_MAX_MSN + 1)))
+#define SPFC_MSN_INC(msn) (((SPFC_MAX_MSN) == (msn)) ? 0 : ((msn) + 1))
+#define SPFC_MSN_DEC(msn) (((msn) == 0) ? (SPFC_MAX_MSN) : ((msn) - 1))
+#define SPFC_QUEUE_MSN_OFFSET(start_cmsn, end_cmsn) \
+ ((u32)((((u32)(end_cmsn) + (SPFC_MAX_MSN)) - (u32)(start_cmsn)) % (SPFC_MAX_MSN + 1)))
+#define SPFC_MSN32_ADD(msn, inc) (((msn) + (inc)) % (SPFC_MAX_MSN + 1))
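+
+/*
+ * The MSN helpers above wrap modulo SPFC_MAX_MSN + 1 (65536); a few sample
+ * evaluations (illustration only):
+ *   SPFC_MSN_INC(SPFC_MAX_MSN)      == 0
+ *   SPFC_MSN_DEC(0)                 == SPFC_MAX_MSN
+ *   SPFC_MSN32_ADD(65530, 10)       == 4
+ *   SPFC_QUEUE_MSN_OFFSET(10, 20)   == 9
+ */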
+
+/*
+ *SCQ defines
+ */
+#define SPFC_INT_NUM_PER_QUEUE (1)
+#define SPFC_SCQ_INT_ID_MAX (2048) /* 11BIT */
+#define SPFC_SCQE_SIZE (64)
+#define SPFC_CQE_GPA_SHIFT (4)
+#define SPFC_NEXT_CQE_GPA_SHIFT (12)
+/* 1-Update Ci by Tile, 0-Update Ci by Hardware */
+#define SPFC_PMSN_CI_TYPE_FROM_HOST (0)
+#define SPFC_PMSN_CI_TYPE_FROM_UCODE (1)
+#define SPFC_ARMQ_IDLE (0)
+#define SPFC_CQ_INT_MODE (2)
+#define SPFC_CQ_HEADER_OWNER_SHIFT (15)
+
+/* SCQC_CQ_DEPTH 0-256, 1-512, 2-1k, 3-2k, 4-4k, 5-8k, 6-16k, 7-32k.
+ * include LinkWqe
+ */
+#define SPFC_CMD_SCQ_DEPTH (4096)
+#define SPFC_STS_SCQ_DEPTH (8192)
+
+#define SPFC_CMD_SCQC_CQ_DEPTH (spfc_log2n(SPFC_CMD_SCQ_DEPTH >> 8))
+#define SPFC_STS_SCQC_CQ_DEPTH (spfc_log2n(SPFC_STS_SCQ_DEPTH >> 8))
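+
+/*
+ * Sketch of the depth encoding, assuming spfc_log2n() returns log2 of its
+ * argument: SPFC_CMD_SCQ_DEPTH 4096 >> 8 = 16, log2(16) = 4 ("4-4k");
+ * SPFC_STS_SCQ_DEPTH 8192 >> 8 = 32, log2(32) = 5 ("5-8k").
+ */
+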
+#define SPFC_STS_SCQ_CI_TYPE SPFC_PMSN_CI_TYPE_FROM_HOST
+
+#define SPFC_CMD_SCQ_CI_TYPE SPFC_PMSN_CI_TYPE_FROM_UCODE
+
+#define SPFC_SCQ_INTR_LOW_LATENCY_MODE 0
+#define SPFC_SCQ_INTR_POLLING_MODE 1
+#define SPFC_SCQ_PROC_CNT_PER_SECOND_THRESHOLD (30000)
+
+#define SPFC_CQE_MAX_PROCESS_NUM_PER_INTR (128)
+#define SPFC_SESSION_SCQ_NUM (16)
+
+/* SCQ[0, 2, 4 ...] are CMD SCQs, SCQ[1, 3, 5 ...] are STS SCQs,
+ * SCQ[SPFC_TOTAL_SCQ_NUM - 1] is the default SCQ
+ */
+#define SPFC_CMD_SCQN_START (0)
+#define SPFC_STS_SCQN_START (1)
+#define SPFC_SCQS_PER_SESSION (2)
+
+#define SPFC_TOTAL_SCQ_NUM (SPFC_SESSION_SCQ_NUM + 1)
+
+#define SPFC_SCQ_IS_STS(scq_index) \
+ (((scq_index) % SPFC_SCQS_PER_SESSION) || ((scq_index) == SPFC_SESSION_SCQ_NUM))
+#define SPFC_SCQ_IS_CMD(scq_index) (!SPFC_SCQ_IS_STS(scq_index))
+#define SPFC_RPORTID_TO_CMD_SCQN(rport_index) \
+ (((rport_index) * SPFC_SCQS_PER_SESSION) % SPFC_SESSION_SCQ_NUM)
+#define SPFC_RPORTID_TO_STS_SCQN(rport_index) \
+ ((((rport_index) * SPFC_SCQS_PER_SESSION) + 1) % SPFC_SESSION_SCQ_NUM)
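+
+/*
+ * Mapping illustration: with SPFC_SESSION_SCQ_NUM 16 and two SCQs per
+ * session, rport 0 maps to CMD SCQ 0 / STS SCQ 1, rport 5 to CMD SCQ 10 /
+ * STS SCQ 11, and rport 8 wraps back to CMD SCQ 0 / STS SCQ 1.
+ * SPFC_SCQ_IS_STS() is true for every odd index and for index
+ * SPFC_SESSION_SCQ_NUM (the default SCQ).
+ */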
+
+/*
+ *SRQ defines
+ */
+#define SPFC_SRQE_SIZE (32)
+#define SPFC_SRQ_INIT_LOOP_O (1)
+#define SPFC_QUEUE_RING (1)
+#define SPFC_SRQ_ELS_DATA_NUM (1)
+#define SPFC_SRQ_ELS_SGE_LEN (256)
+#define SPFC_SRQ_ELS_DATA_DEPTH (31750) /* depth must be divisible by 127 */
+
+#define SPFC_IRQ_NAME_MAX (30)
+
+/* Support 2048 sessions(xid) */
+#define SPFC_CQM_XID_MASK (0x7ff)
+
+#define SPFC_QUEUE_FLUSH_DOING (0)
+#define SPFC_QUEUE_FLUSH_DONE (1)
+#define SPFC_QUEUE_FLUSH_WAIT_TIMEOUT_MS (2000)
+#define SPFC_QUEUE_FLUSH_WAIT_MS (2)
+
+/*
+ *RPort defines
+ */
+#define SPFC_RPORT_OFFLOADED(prnt_qinfo) \
+ ((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_OFFLOADED)
+#define SPFC_RPORT_NOT_OFFLOADED(prnt_qinfo) \
+ ((prnt_qinfo)->offload_state != SPFC_QUEUE_STATE_OFFLOADED)
+#define SPFC_RPORT_FLUSH_NOT_NEEDED(prnt_qinfo) \
+ (((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_INITIALIZED) || \
+ ((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_OFFLOADING) || \
+ ((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_FREE))
+#define SPFC_CHECK_XID_MATCHED(sq_xid, sqe_xid) \
+ (((sq_xid) & SPFC_CQM_XID_MASK) == ((sqe_xid) & SPFC_CQM_XID_MASK))
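+
+/*
+ * Example: with SPFC_CQM_XID_MASK 0x7ff, SPFC_CHECK_XID_MATCHED(0x1805, 0x2005)
+ * is true because both xids reduce to 0x005 in the low 11 bits.
+ */
+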
+#define SPFC_PORT_MODE_TGT (0) /* Port mode */
+#define SPFC_PORT_MODE_INI (1)
+#define SPFC_PORT_MODE_BOTH (2)
+
+/*
+ *Hardware Reserved Queue Info defines
+ */
+#define SPFC_HRQI_SEQ_ID_MAX (255)
+#define SPFC_HRQI_SEQ_INDEX_MAX (64)
+#define SPFC_HRQI_SEQ_INDEX_SHIFT (6)
+#define SPFC_HRQI_SEQ_SEPCIAL_ID (3)
+#define SPFC_HRQI_SEQ_INVALID_ID (~0LL)
+
+enum spfc_session_reset_mode {
+ SPFC_SESS_RST_DELETE_IO_ONLY = 1,
+ SPFC_SESS_RST_DELETE_CONN_ONLY = 2,
+ SPFC_SESS_RST_DELETE_IO_CONN_BOTH = 3,
+ SPFC_SESS_RST_MODE_BUTT
+};
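+
+/*
+ * Usage sketch (illustrative only): a session reset is issued through
+ * spfc_send_session_rst_cmd(), declared below, e.g.
+ *   spfc_send_session_rst_cmd(hba, prt_qinfo, SPFC_SESS_RST_DELETE_IO_CONN_BOTH);
+ * with hba a struct spfc_hba_info handle and prt_qinfo the rport's
+ * struct spfc_parent_queue_info.
+ */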
+
+/* linkwqe */
+#define CQM_LINK_WQE_CTRLSL_VALUE 2
+#define CQM_LINK_WQE_LP_VALID 1
+#define CQM_LINK_WQE_LP_INVALID 0
+
+/* bit mask */
+#define SPFC_SCQN_MASK 0xfffff
+#define SPFC_SCQ_CTX_CI_GPA_MASK 0xfffffff
+#define SPFC_SCQ_CTX_C_EQN_MSI_X_MASK 0x7
+#define SPFC_PARITY_MASK 0x1
+#define SPFC_KEYSECTION_XID_H_MASK 0xf
+#define SPFC_KEYSECTION_XID_L_MASK 0xffff
+#define SPFC_SRQ_CTX_rqe_dma_attr_idx_MASK 0xf
+#define SPFC_SSQ_CTX_MASK 0xfffff
+#define SPFC_KEY_WD3_SID_2_MASK 0x00ff0000
+#define SPFC_KEY_WD3_SID_1_MASK 0x00ff00
+#define SPFC_KEY_WD3_SID_0_MASK 0x0000ff
+#define SPFC_KEY_WD4_DID_2_MASK 0x00ff0000
+#define SPFC_KEY_WD4_DID_1_MASK 0x00ff00
+#define SPFC_KEY_WD4_DID_0_MASK 0x0000ff
+#define SPFC_LOCAL_LW_WD1_DUMP_MSN_MASK 0x7fff
+#define SPFC_PMSN_MASK 0xff
+#define SPFC_QOS_LEVEL_MASK 0x3
+#define SPFC_DB_VAL_MASK 0xFFFFFFFF
+#define SPFC_MSNWD_L_MASK 0xffff
+#define SPFC_MSNWD_H_MASK 0x7fff
+#define SPFC_DB_WD0_PI_H_MASK 0xf
+#define SPFC_DB_WD0_PI_L_MASK 0xfff
+
+#define SPFC_DB_C_BIT_DATA_TYPE 0
+#define SPFC_DB_C_BIT_CONTROL_TYPE 1
+
+#define SPFC_OWNER_DRIVER_PRODUCT (1)
+
+#define SPFC_256BWQE_ENABLE (1)
+#define SPFC_DB_ARM_DISABLE (0)
+
+#define SPFC_CNTX_SIZE_T_256B (0)
+#define SPFC_CNTX_SIZE_256B (256)
+
+#define SPFC_SERVICE_TYPE_FC (12)
+#define SPFC_SERVICE_TYPE_FC_SQ (13)
+
+#define SPFC_PACKET_COS_FC_CMD (0)
+#define SPFC_PACKET_COS_FC_DATA (1)
+
+#define SPFC_QUEUE_LINK_STYLE (0)
+#define SPFC_QUEUE_RING_STYLE (1)
+
+#define SPFC_NEED_DO_OFFLOAD (1)
+#define SPFC_QID_SQ (0)
+
+/*
+ *SCQ defines
+ */
+struct spfc_scq_info {
+ struct cqm_queue *cqm_scq_info;
+ u32 wqe_num_per_buf;
+ u32 wqe_size;
+ u32 scqc_cq_depth; /* 0-256, 1-512, 2-1k, 3-2k, 4-4k, 5-8k, 6-16k, 7-32k */
+ u16 scqc_ci_type;
+	u16 valid_wqe_num; /* SCQ depth, including the link wqe */
+ u16 ci;
+ u16 ci_owner;
+ u32 queue_id;
+ u32 scqn;
+ char irq_name[SPFC_IRQ_NAME_MAX];
+ u16 msix_entry_idx;
+ u32 irq_id;
+ struct tasklet_struct tasklet;
+ atomic_t flush_stat;
+ void *hba;
+ u32 reserved;
+ struct task_struct *delay_task;
+ bool task_exit;
+ u32 intr_mode;
+};
+
+struct spfc_srq_ctx {
+ /* DW0 */
+ u64 pcie_template : 6;
+ u64 rsvd0 : 2;
+ u64 parity : 8;
+ u64 cur_rqe_usr_id : 16;
+ u64 cur_rqe_msn : 16;
+ u64 last_rq_pmsn : 16;
+
+ /* DW1 */
+ u64 cur_rqe_gpa;
+
+ /* DW2 */
+ u64 ctrl_sl : 1;
+ u64 cf : 1;
+ u64 csl : 2;
+ u64 cr : 1;
+ u64 bdsl : 4;
+ u64 pmsn_type : 1;
+ u64 cur_wqe_o : 1;
+ u64 consant_sge_len : 17;
+ u64 cur_sge_id : 4;
+ u64 cur_sge_remain_len : 17;
+ u64 ceqn_msix : 11;
+ u64 int_mode : 2;
+ u64 cur_sge_l : 1;
+ u64 cur_sge_v : 1;
+
+ /* DW3 */
+ u64 cur_sge_gpa;
+
+ /* DW4 */
+ u64 cur_pmsn_gpa;
+
+ /* DW5 */
+ u64 rsvd3 : 5;
+ u64 ring : 1;
+ u64 loop_o : 1;
+ u64 rsvd2 : 1;
+ u64 rqe_dma_attr_idx : 6;
+ u64 rq_so_ro : 2;
+ u64 cqe_dma_attr_idx : 6;
+ u64 cq_so_ro : 2;
+ u64 rsvd1 : 7;
+ u64 arm_q : 1;
+ u64 cur_cqe_cnt : 8;
+ u64 cqe_max_cnt : 8;
+ u64 prefetch_max_masn : 16;
+
+ /* DW6~DW7 */
+ u64 rsvd4;
+ u64 rsvd5;
+};
+
+struct spfc_drq_buff_entry {
+ u16 buff_id;
+ void *buff_addr;
+ dma_addr_t buff_dma;
+};
+
+enum spfc_clean_state { SPFC_CLEAN_DONE, SPFC_CLEAN_DOING, SPFC_CLEAN_BUTT };
+enum spfc_srq_type { SPFC_SRQ_ELS = 1, SPFC_SRQ_IMMI, SPFC_SRQ_BUTT };
+
+struct spfc_srq_info {
+ enum spfc_srq_type srq_type;
+
+ struct cqm_queue *cqm_srq_info;
+	u32 wqe_num_per_buf; /* WQE number per buf, doesn't include the link wqe */
+ u32 wqe_size;
+	u32 valid_wqe_num; /* valid wqe number, doesn't include the link wqe */
+ u16 pi;
+ u16 pi_owner;
+ u16 pmsn;
+ u16 ci;
+ u16 cmsn;
+ u32 srqn;
+
+ dma_addr_t first_rqe_recv_dma;
+
+ struct spfc_drq_buff_entry *els_buff_entry_head;
+ struct buf_describe buf_list;
+ spinlock_t srq_spin_lock;
+ bool spin_lock_init;
+ bool enable;
+ enum spfc_clean_state state;
+
+ atomic_t ref;
+
+ struct delayed_work del_work;
+ u32 del_retry_time;
+ void *hba;
+};
+
+/*
+ * The doorbell record keeps the PI of the WQE that will be produced next.
+ * The PI is 15 bits wide with 1 o-bit.
+ */
+struct db_record {
+ u64 pmsn : 16;
+ u64 dump_pmsn : 16;
+ u64 rsvd0 : 32;
+};
+
+/*
+ * The ci record keeps the CI of the WQE that will be consumed next.
+ * The CI is 15 bits wide with 1 o-bit.
+ */
+struct ci_record {
+ u64 cmsn : 16;
+ u64 dump_cmsn : 16;
+ u64 rsvd0 : 32;
+};
+
+/* The accumulate data in WQ header */
+struct accumulate {
+ u64 data_2_uc;
+ u64 data_2_drv;
+};
+
+/* The WQ header structure */
+struct wq_header {
+ struct db_record db_record;
+ struct ci_record ci_record;
+ struct accumulate soft_data;
+};
+
+/* Link list Sq WqePage Pool */
+/* queue header struct */
+struct spfc_queue_header {
+ u64 door_bell_record;
+ u64 ci_record;
+ u64 rsv1;
+ u64 rsv2;
+};
+
+/* WPG-WQEPAGE, LLSQ-LINKED LIST SQ */
+struct spfc_wqe_page {
+ struct list_head entry_wpg;
+
+ /* Wqe Page virtual addr */
+ void *wpg_addr;
+
+ /* Wqe Page physical addr */
+ u64 wpg_phy_addr;
+};
+
+struct spfc_sq_wqepage_pool {
+ u32 wpg_cnt;
+ u32 wpg_size;
+ u32 wqe_per_wpg;
+
+ /* PCI DMA Pool */
+ struct dma_pool *wpg_dma_pool;
+ struct spfc_wqe_page *wpg_pool_addr;
+ struct list_head list_free_wpg_pool;
+ spinlock_t wpg_pool_lock;
+ atomic_t wpg_in_use;
+};
+
+#define SPFC_SQ_DEL_STAGE_TIMEOUT_MS (3 * 1000)
+#define SPFC_SRQ_DEL_STAGE_TIMEOUT_MS (10 * 1000)
+#define SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_MS (10 * 1000)
+#define SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_CNT (3)
+
+#define SPFC_SRQ_PROCESS_DELAY_MS (20)
+
+/* PLOGI parameters */
+struct spfc_plogi_copram {
+ u32 seq_cnt : 1;
+ u32 ed_tov : 1;
+ u32 rsvd : 14;
+ u32 tx_mfs : 16;
+ u32 ed_tov_time;
+};
+
+struct spfc_delay_sqe_ctrl_info {
+ bool valid;
+ u32 rport_index;
+ u32 time_out;
+ u64 start_jiff;
+ u32 sid;
+ u32 did;
+ u32 xid;
+ u16 ssqn;
+ struct spfc_sqe sqe;
+};
+
+struct spfc_suspend_sqe_info {
+ void *hba;
+ u32 magic_num;
+ u8 old_offload_sts;
+ struct unf_frame_pkg pkg;
+ struct spfc_sqe sqe;
+ struct delayed_work timeout_work;
+ struct list_head list_sqe_entry;
+};
+
+struct spfc_delay_destroy_ctrl_info {
+ bool valid;
+ u32 rport_index;
+ u32 time_out;
+ u64 start_jiff;
+ struct unf_port_info rport_info;
+};
+
+/* PARENT SQ Info */
+struct spfc_parent_sq_info {
+ void *hba;
+ spinlock_t parent_sq_enqueue_lock;
+ u32 rport_index;
+ u32 context_id;
+	/* Fixed value, used for the doorbell */
+ u32 sq_queue_id;
+	/* When a session is offloaded, the tile returns the CacheId to the
+	 * driver, which is then used for the doorbell.
+	 */
+ u32 cache_id;
+ /* service type, fc or fc */
+ u32 service_type;
+ /* OQID */
+ u16 oqid_rd;
+ u16 oqid_wr;
+ u32 local_port_id;
+ u32 remote_port_id;
+ u32 sqn_base;
+ bool port_in_flush;
+ bool sq_in_sess_rst;
+ atomic_t sq_valid;
+ /* Used by NPIV QoS */
+ u8 vport_id;
+ /* Used by NPIV QoS */
+ u8 cs_ctrl;
+ struct delayed_work del_work;
+ struct delayed_work flush_done_timeout_work;
+ u64 del_start_jiff;
+ dma_addr_t srq_ctx_addr;
+ atomic_t sq_cached;
+ atomic_t flush_done_wait_cnt;
+ struct spfc_plogi_copram plogi_co_parms;
+ /* dif control info for immi */
+ struct unf_dif_control_info sirt_dif_control;
+ struct spfc_delay_sqe_ctrl_info delay_sqe;
+ struct spfc_delay_destroy_ctrl_info destroy_sqe;
+ struct list_head suspend_sqe_list;
+ atomic_t io_stat[SPFC_MAX_SQ_TASK_TYPE_CNT];
+ u8 need_offloaded;
+};
+
+/* parent context doorbell */
+struct spfc_parent_sq_db {
+ struct {
+ u32 xid : 20;
+ u32 cntx_size : 2;
+ u32 arm : 1;
+ u32 c : 1;
+ u32 cos : 3;
+ u32 service_type : 5;
+ } wd0;
+
+ struct {
+ u32 pi_hi : 8;
+ u32 sm_data : 20;
+ u32 qid : 4;
+ } wd1;
+};
+
+#define IWARP_FC_DDB_TYPE 3
+
+/* direct wqe doorbell */
+struct spfc_direct_wqe_db {
+ struct {
+ u32 xid : 20;
+ u32 cntx_size : 2;
+ u32 pi_hi : 4;
+ u32 c : 1;
+ u32 cos : 3;
+ u32 ddb : 2;
+ } wd0;
+
+ struct {
+ u32 pi_lo : 12;
+ u32 sm_data : 20;
+ } wd1;
+};
+
+struct spfc_parent_cmd_scq_info {
+ u32 cqm_queue_id;
+ u32 local_queue_id;
+};
+
+struct spfc_parent_st_scq_info {
+ u32 cqm_queue_id;
+ u32 local_queue_id;
+};
+
+struct spfc_parent_els_srq_info {
+ u32 cqm_queue_id;
+ u32 local_queue_id;
+};
+
+enum spfc_parent_queue_state {
+ SPFC_QUEUE_STATE_INITIALIZED = 0,
+ SPFC_QUEUE_STATE_OFFLOADING = 1,
+ SPFC_QUEUE_STATE_OFFLOADED = 2,
+ SPFC_QUEUE_STATE_DESTROYING = 3,
+ SPFC_QUEUE_STATE_FREE = 4,
+ SPFC_QUEUE_STATE_BUTT
+};
+
+struct spfc_parent_ctx {
+ dma_addr_t parent_ctx_addr;
+ void *parent_ctx;
+ struct cqm_qpc_mpt *cqm_parent_ctx_obj;
+};
+
+struct spfc_parent_queue_info {
+ spinlock_t parent_queue_state_lock;
+ struct spfc_parent_ctx parent_ctx;
+ enum spfc_parent_queue_state offload_state;
+ struct spfc_parent_sq_info parent_sq_info;
+ struct spfc_parent_cmd_scq_info parent_cmd_scq_info;
+ struct spfc_parent_st_scq_info
+ parent_sts_scq_info;
+ struct spfc_parent_els_srq_info parent_els_srq_info;
+ u8 queue_vport_id;
+ u8 queue_data_cos;
+};
+
+struct spfc_parent_ssq_info {
+ void *hba;
+ spinlock_t parent_sq_enqueue_lock;
+ atomic_t wqe_page_cnt;
+ u32 context_id;
+ u32 cache_id;
+ u32 sq_queue_id;
+ u32 sqn;
+ u32 service_type;
+ u32 max_sqe_num; /* SQ depth */
+ u32 wqe_num_per_buf;
+ u32 wqe_size;
+ u32 accum_wqe_cnt;
+ u32 wqe_offset;
+ u16 head_start_cmsn;
+ u16 head_end_cmsn;
+ u16 last_cmsn;
+ u16 last_pi_owner;
+ u32 queue_style;
+ atomic_t sq_valid;
+ void *queue_head_original;
+ struct spfc_queue_header *queue_header;
+ dma_addr_t queue_hdr_phy_addr_original;
+ dma_addr_t queue_hdr_phy_addr;
+ struct list_head list_linked_list_sq;
+ atomic_t sq_db_cnt;
+ atomic_t sq_wqe_cnt;
+ atomic_t sq_cqe_cnt;
+ atomic_t sqe_minus_cqe_cnt;
+ atomic_t io_stat[SPFC_MAX_SQ_TASK_TYPE_CNT];
+};
+
+struct spfc_parent_shared_queue_info {
+ struct spfc_parent_ctx parent_ctx;
+ struct spfc_parent_ssq_info parent_ssq_info;
+};
+
+struct spfc_parent_queue_mgr {
+ struct spfc_parent_queue_info parent_queue[UNF_SPFC_MAXRPORT_NUM];
+ struct spfc_parent_shared_queue_info shared_queue[SPFC_MAX_SSQ_NUM];
+ struct buf_describe parent_sq_buf_list;
+};
+
+#define SPFC_SRQC_BUS_ROW 8
+#define SPFC_SRQC_BUS_COL 19
+#define SPFC_SQC_BUS_ROW 8
+#define SPFC_SQC_BUS_COL 13
+#define SPFC_HW_SCQC_BUS_ROW 6
+#define SPFC_HW_SCQC_BUS_COL 10
+#define SPFC_HW_SRQC_BUS_ROW 4
+#define SPFC_HW_SRQC_BUS_COL 15
+#define SPFC_SCQC_BUS_ROW 3
+#define SPFC_SCQC_BUS_COL 29
+
+#define SPFC_QUEUE_INFO_BUS_NUM 4
+struct spfc_queue_info_bus {
+ u64 bus[SPFC_QUEUE_INFO_BUS_NUM];
+};
+
+u32 spfc_free_parent_resource(void *handle, struct unf_port_info *rport_info);
+u32 spfc_alloc_parent_resource(void *handle, struct unf_port_info *rport_info);
+u32 spfc_alloc_parent_queue_mgr(void *handle);
+void spfc_free_parent_queue_mgr(void *handle);
+u32 spfc_create_common_share_queues(void *handle);
+u32 spfc_create_ssq(void *handle);
+void spfc_destroy_common_share_queues(void *v_pstHba);
+u32 spfc_alloc_parent_sq_wqe_page_pool(void *handle);
+void spfc_free_parent_sq_wqe_page_pool(void *handle);
+struct spfc_parent_queue_info *
+spfc_find_parent_queue_info_by_pkg(void *handle, struct unf_frame_pkg *pkg);
+struct spfc_parent_sq_info *
+spfc_find_parent_sq_by_pkg(void *handle, struct unf_frame_pkg *pkg);
+u32 spfc_root_cmdq_enqueue(void *handle, union spfc_cmdqe *cmdqe, u16 cmd_len);
+void spfc_process_scq_cqe(ulong scq_info);
+u32 spfc_process_scq_cqe_entity(ulong scq_info, u32 proc_cnt);
+void spfc_post_els_srq_wqe(struct spfc_srq_info *srq_info, u16 buf_id);
+void spfc_process_aeqe(void *handle, u8 event_type, u8 *event_val);
+u32 spfc_parent_sq_enqueue(struct spfc_parent_sq_info *sq, struct spfc_sqe *io_sqe,
+ u16 ssqn);
+u32 spfc_parent_ssq_enqueue(struct spfc_parent_ssq_info *ssq,
+ struct spfc_sqe *io_sqe, u8 wqe_type);
+void spfc_free_sq_wqe_page(struct spfc_parent_ssq_info *ssq, u32 cur_cmsn);
+u32 spfc_reclaim_sq_wqe_page(void *handle, union spfc_scqe *scqe);
+void spfc_set_rport_flush_state(void *handle, bool in_flush);
+u32 spfc_clear_fetched_sq_wqe(void *handle);
+u32 spfc_clear_pending_sq_wqe(void *handle);
+void spfc_free_parent_queues(void *handle);
+void spfc_free_ssq(void *handle, u32 free_sq_num);
+void spfc_enalbe_queues_dispatch(void *handle);
+void spfc_queue_pre_process(void *handle, bool clean);
+void spfc_queue_post_process(void *handle);
+void spfc_free_parent_queue_info(void *handle, struct spfc_parent_queue_info *parent_queue_info);
+u32 spfc_send_session_rst_cmd(void *handle,
+ struct spfc_parent_queue_info *parent_queue_info,
+ enum spfc_session_reset_mode mode);
+u32 spfc_send_nop_cmd(void *handle, struct spfc_parent_sq_info *parent_sq_info,
+ u32 magic_num, u16 sqn);
+void spfc_build_session_rst_wqe(void *handle, struct spfc_parent_sq_info *sq,
+ struct spfc_sqe *sqe,
+ enum spfc_session_reset_mode mode, u32 scqn);
+void spfc_wq_destroy_els_srq(struct work_struct *work);
+void spfc_destroy_els_srq(void *handle);
+u32 spfc_push_delay_sqe(void *hba,
+ struct spfc_parent_queue_info *offload_parent_queue,
+ struct spfc_sqe *sqe, struct unf_frame_pkg *pkg);
+void spfc_push_destroy_parent_queue_sqe(void *hba,
+ struct spfc_parent_queue_info *offloading_parent_queue,
+ struct unf_port_info *rport_info);
+void spfc_pop_destroy_parent_queue_sqe(void *handle,
+ struct spfc_delay_destroy_ctrl_info *destroy_sqe_info);
+struct spfc_parent_queue_info *spfc_find_offload_parent_queue(void *handle,
+ u32 local_id,
+ u32 remote_id,
+ u32 rport_index);
+u32 spfc_flush_ini_resp_queue(void *handle);
+void spfc_rcvd_els_from_srq_timeout(struct work_struct *work);
+u32 spfc_send_aeq_info_via_cmdq(void *hba, u32 aeq_error_type);
+u32 spfc_parent_sq_ring_doorbell(struct spfc_parent_ssq_info *sq, u8 qos_level,
+ u32 c);
+void spfc_sess_resource_free_sync(void *handle,
+ struct unf_port_info *rport_info);
+u32 spfc_suspend_sqe_and_send_nop(void *handle,
+ struct spfc_parent_queue_info *parent_queue,
+ struct spfc_sqe *sqe, struct unf_frame_pkg *pkg);
+u32 spfc_pop_suspend_sqe(void *handle,
+ struct spfc_parent_queue_info *parent_queue,
+ struct spfc_suspend_sqe_info *suspen_sqe);
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_service.c b/drivers/scsi/spfc/hw/spfc_service.c
new file mode 100644
index 000000000000..fa3958357de3
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_service.c
@@ -0,0 +1,2169 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_service.h"
+#include "unf_log.h"
+#include "spfc_io.h"
+#include "spfc_chipitf.h"
+
+#define SPFC_ELS_SRQ_BUF_NUM (0x9)
+#define SPFC_LS_GS_USERID_LEN ((FC_LS_GS_USERID_CNT_MAX + 1) / 2)
+
+struct unf_scqe_handle_table {
+ u32 scqe_type; /* ELS type */
+ bool reclaim_sq_wpg;
+ u32 (*scqe_handle_func)(struct spfc_hba_info *hba, union spfc_scqe *scqe);
+};
+
+static u32 spfc_get_els_rsp_pld_len(u16 els_type, u16 els_cmnd,
+ u32 *els_acc_pld_len)
+{
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(els_acc_pld_len, UNF_RETURN_ERROR);
+
+ /* RJT */
+ if (els_type == ELS_RJT) {
+ *els_acc_pld_len = UNF_ELS_ACC_RJT_LEN;
+ return RETURN_OK;
+ }
+
+ /* ACC */
+ switch (els_cmnd) {
+ /* uses the same PAYLOAD length as PLOGI. */
+ case ELS_FLOGI:
+ case ELS_PDISC:
+ case ELS_PLOGI:
+ *els_acc_pld_len = UNF_PLOGI_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_PRLI:
+		/* If SIRT is enabled, the PRLI ACC payload is extended by 12 bytes */
+ *els_acc_pld_len = (UNF_PRLI_ACC_PAYLOAD_LEN - UNF_PRLI_SIRT_EXTRA_SIZE);
+
+ break;
+
+ case ELS_LOGO:
+ *els_acc_pld_len = UNF_LOGO_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_PRLO:
+ *els_acc_pld_len = UNF_PRLO_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_RSCN:
+ *els_acc_pld_len = UNF_RSCN_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_ADISC:
+ *els_acc_pld_len = UNF_ADISC_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_RRQ:
+ *els_acc_pld_len = UNF_RRQ_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_SCR:
+ *els_acc_pld_len = UNF_SCR_RSP_PAYLOAD_LEN;
+ break;
+
+ case ELS_ECHO:
+ *els_acc_pld_len = UNF_ECHO_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_REC:
+ *els_acc_pld_len = UNF_REC_ACC_PAYLOAD_LEN;
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Unknown ELS command(0x%x)",
+ els_cmnd);
+ ret = UNF_RETURN_ERROR;
+ break;
+ }
+
+ return ret;
+}
+
+struct unf_els_cmd_paylod_table {
+ u16 els_cmnd; /* ELS type */
+ u32 els_req_pld_len;
+ u32 els_rsp_pld_len;
+};
+
+static const struct unf_els_cmd_paylod_table els_pld_table_map[] = {
+ {ELS_FDISC, UNF_FDISC_PAYLOAD_LEN, UNF_FDISC_ACC_PAYLOAD_LEN},
+ {ELS_FLOGI, UNF_FLOGI_PAYLOAD_LEN, UNF_FLOGI_ACC_PAYLOAD_LEN},
+ {ELS_PLOGI, UNF_PLOGI_PAYLOAD_LEN, UNF_PLOGI_ACC_PAYLOAD_LEN},
+ {ELS_SCR, UNF_SCR_PAYLOAD_LEN, UNF_SCR_RSP_PAYLOAD_LEN},
+ {ELS_PDISC, UNF_PDISC_PAYLOAD_LEN, UNF_PDISC_ACC_PAYLOAD_LEN},
+ {ELS_LOGO, UNF_LOGO_PAYLOAD_LEN, UNF_LOGO_ACC_PAYLOAD_LEN},
+ {ELS_PRLO, UNF_PRLO_PAYLOAD_LEN, UNF_PRLO_ACC_PAYLOAD_LEN},
+ {ELS_ADISC, UNF_ADISC_PAYLOAD_LEN, UNF_ADISC_ACC_PAYLOAD_LEN},
+ {ELS_RRQ, UNF_RRQ_PAYLOAD_LEN, UNF_RRQ_ACC_PAYLOAD_LEN},
+ {ELS_RSCN, 0, UNF_RSCN_ACC_PAYLOAD_LEN},
+ {ELS_ECHO, UNF_ECHO_PAYLOAD_LEN, UNF_ECHO_ACC_PAYLOAD_LEN},
+ {ELS_REC, UNF_REC_PAYLOAD_LEN, UNF_REC_ACC_PAYLOAD_LEN}
+};
+
+static u32 spfc_get_els_req_acc_pld_len(u16 els_cmnd, u32 *req_pld_len, u32 *rsp_pld_len)
+{
+ u32 ret = RETURN_OK;
+ u32 i;
+
+ FC_CHECK_RETURN_VALUE(req_pld_len, UNF_RETURN_ERROR);
+
+ for (i = 0; i < (sizeof(els_pld_table_map) /
+ sizeof(struct unf_els_cmd_paylod_table));
+ i++) {
+ if (els_pld_table_map[i].els_cmnd == els_cmnd) {
+ *req_pld_len = els_pld_table_map[i].els_req_pld_len;
+ *rsp_pld_len = els_pld_table_map[i].els_rsp_pld_len;
+ return ret;
+ }
+ }
+
+ switch (els_cmnd) {
+ case ELS_PRLI:
+		/* If SIRT is enabled, the PRLI ACC payload is extended by 12 bytes */
+ *req_pld_len = SPFC_GET_PRLI_PAYLOAD_LEN;
+ *rsp_pld_len = SPFC_GET_PRLI_PAYLOAD_LEN;
+
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Unknown ELS_CMD(0x%x)", els_cmnd);
+ ret = UNF_RETURN_ERROR;
+ break;
+ }
+
+ return ret;
+}
+
+static u32 spfc_check_parent_qinfo_valid(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
+ struct spfc_parent_queue_info **prt_qinfo)
+{
+ if (!*prt_qinfo) {
+ if (pkg->type == UNF_PKG_ELS_REQ || pkg->type == UNF_PKG_ELS_REPLY) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send LS SID(0x%x) DID(0x%x) with null prtqinfo",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = SPFC_DEFAULT_RPORT_INDEX;
+ *prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
+ if (!*prt_qinfo)
+ return UNF_RETURN_ERROR;
+ } else {
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ if (pkg->type == UNF_PKG_GS_REQ && SPFC_RPORT_NOT_OFFLOADED(*prt_qinfo)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) send GS SID(0x%x) DID(0x%x), send GS Request before PLOGI",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+ return UNF_RETURN_ERROR;
+ }
+ return RETURN_OK;
+}
+
+static void spfc_get_pkt_cmnd_type_code(struct unf_frame_pkg *pkg,
+ u16 *ls_gs_cmnd_code,
+ u16 *ls_gs_cmnd_type)
+{
+ *ls_gs_cmnd_type = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
+ if (SPFC_PKG_IS_ELS_RSP(*ls_gs_cmnd_type)) {
+ *ls_gs_cmnd_code = SPFC_GET_ELS_RSP_CODE(pkg->cmnd);
+ } else if (pkg->type == UNF_PKG_GS_REQ) {
+ *ls_gs_cmnd_code = *ls_gs_cmnd_type;
+ } else {
+ *ls_gs_cmnd_code = *ls_gs_cmnd_type;
+ *ls_gs_cmnd_type = ELS_CMND;
+ }
+}
+
+static u32 spfc_get_gs_req_rsp_pld_len(u16 cmnd_code, u32 *gs_pld_len, u32 *gs_rsp_pld_len)
+{
+ FC_CHECK_RETURN_VALUE(gs_pld_len, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(gs_rsp_pld_len, UNF_RETURN_ERROR);
+
+ switch (cmnd_code) {
+ case NS_GPN_ID:
+ *gs_pld_len = UNF_GPNID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GPNID_RSP_PAYLOAD_LEN;
+ break;
+
+ case NS_GNN_ID:
+ *gs_pld_len = UNF_GNNID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GNNID_RSP_PAYLOAD_LEN;
+ break;
+
+ case NS_GFF_ID:
+ *gs_pld_len = UNF_GFFID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GFFID_RSP_PAYLOAD_LEN;
+ break;
+
+ case NS_GID_FT:
+ case NS_GID_PT:
+ *gs_pld_len = UNF_GID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GID_ACC_PAYLOAD_LEN;
+ break;
+
+ case NS_RFT_ID:
+ *gs_pld_len = UNF_RFTID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_RFTID_RSP_PAYLOAD_LEN;
+ break;
+
+ case NS_RFF_ID:
+ *gs_pld_len = UNF_RFFID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_RFFID_RSP_PAYLOAD_LEN;
+ break;
+ case NS_GA_NXT:
+ *gs_pld_len = UNF_GID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GID_ACC_PAYLOAD_LEN;
+ break;
+
+ case NS_GIEL:
+ *gs_pld_len = UNF_RFTID_RSP_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GID_ACC_PAYLOAD_LEN;
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+			     "[warn]Unknown GS command type(0x%x)", cmnd_code);
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+static void *spfc_get_els_frame_addr(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg,
+ u16 els_cmnd_code, u16 els_cmnd_type,
+ u64 *phy_addr)
+{
+ void *frame_pld_addr = NULL;
+ dma_addr_t els_frame_addr = 0;
+
+ if (els_cmnd_code == ELS_ECHO) {
+ frame_pld_addr = (void *)UNF_GET_ECHO_PAYLOAD(pkg);
+ els_frame_addr = UNF_GET_ECHO_PAYLOAD_PHYADDR(pkg);
+ } else if (els_cmnd_code == ELS_RSCN) {
+ if (els_cmnd_type == ELS_CMND) {
+			/* Not supported */
+ frame_pld_addr = NULL;
+ els_frame_addr = 0;
+ } else {
+ frame_pld_addr = (void *)UNF_GET_RSCN_ACC_PAYLOAD(pkg);
+ els_frame_addr = pkg->unf_cmnd_pload_bl.buf_dma_addr +
+ sizeof(struct unf_fc_head);
+ }
+ } else {
+ frame_pld_addr = (void *)SPFC_GET_CMND_PAYLOAD_ADDR(pkg);
+ els_frame_addr = pkg->unf_cmnd_pload_bl.buf_dma_addr +
+ sizeof(struct unf_fc_head);
+ }
+ *phy_addr = els_frame_addr;
+ return frame_pld_addr;
+}
+
+static u32 spfc_get_frame_info(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, void **frame_pld_addr,
+ u32 *frame_pld_len, u64 *frame_phy_addr,
+ u32 *acc_pld_len)
+{
+ u32 ret = RETURN_OK;
+ u16 ls_gs_cmnd_code = SPFC_ZERO;
+ u16 ls_gs_cmnd_type = SPFC_ZERO;
+
+ spfc_get_pkt_cmnd_type_code(pkg, &ls_gs_cmnd_code, &ls_gs_cmnd_type);
+
+ if (pkg->type == UNF_PKG_GS_REQ) {
+ ret = spfc_get_gs_req_rsp_pld_len(ls_gs_cmnd_code,
+ frame_pld_len, acc_pld_len);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) send GS SID(0x%x) DID(0x%x), get error GS request and response payload length",
+ hba->port_cfg.port_id,
+ pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return ret;
+ }
+ *frame_pld_addr = (void *)(SPFC_GET_CMND_PAYLOAD_ADDR(pkg));
+ *frame_phy_addr = pkg->unf_cmnd_pload_bl.buf_dma_addr + sizeof(struct unf_fc_head);
+ if (ls_gs_cmnd_code == NS_GID_FT || ls_gs_cmnd_code == NS_GID_PT)
+ *frame_pld_addr = (void *)(UNF_GET_GID_PAYLOAD(pkg));
+ } else {
+ *frame_pld_addr = spfc_get_els_frame_addr(hba, pkg, ls_gs_cmnd_code,
+ ls_gs_cmnd_type, frame_phy_addr);
+ if (SPFC_PKG_IS_ELS_RSP(ls_gs_cmnd_type)) {
+ ret = spfc_get_els_rsp_pld_len(ls_gs_cmnd_type, ls_gs_cmnd_code,
+ frame_pld_len);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) get els cmd (0x%x) rsp len failed.",
+ hba->port_cfg.port_id,
+ ls_gs_cmnd_code);
+ return ret;
+ }
+ } else {
+ ret = spfc_get_els_req_acc_pld_len(ls_gs_cmnd_code, frame_pld_len,
+ acc_pld_len);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) get els cmd (0x%x) req and acc len failed.",
+ hba->port_cfg.port_id,
+ ls_gs_cmnd_code);
+ return ret;
+ }
+ }
+ }
+ return ret;
+}
+
+static u32
+spfc_send_ls_gs_via_parent(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
+ struct spfc_parent_queue_info *prt_queue_info)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ls_gs_cmnd_code = SPFC_ZERO;
+ u16 ls_gs_cmnd_type = SPFC_ZERO;
+ u16 remote_exid = 0;
+ u16 hot_tag = 0;
+ struct spfc_parent_sq_info *parent_sq_info = NULL;
+ struct spfc_sqe tmp_sqe;
+ struct spfc_sqe *sqe = NULL;
+ void *frame_pld_addr = NULL;
+ u32 frame_pld_len = 0;
+ u32 acc_pld_len = 0;
+ u64 frame_pa = 0;
+ ulong flags = 0;
+ u16 ssqn = 0;
+ spinlock_t *prtq_state_lock = NULL;
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ sqe = &tmp_sqe;
+ memset(sqe, 0, sizeof(struct spfc_sqe));
+
+ parent_sq_info = &prt_queue_info->parent_sq_info;
+ hot_tag = (u16)UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base;
+
+ spfc_get_pkt_cmnd_type_code(pkg, &ls_gs_cmnd_code, &ls_gs_cmnd_type);
+
+ ret = spfc_get_frame_info(hba, pkg, &frame_pld_addr, &frame_pld_len,
+ &frame_pa, &acc_pld_len);
+ if (ret != RETURN_OK)
+ return ret;
+
+ if (SPFC_PKG_IS_ELS_RSP(ls_gs_cmnd_type)) {
+ remote_exid = UNF_GET_OXID(pkg);
+ spfc_build_els_wqe_ts_rsp(sqe, prt_queue_info, pkg,
+ frame_pld_addr, ls_gs_cmnd_type,
+ ls_gs_cmnd_code);
+
+ /* Assemble the SQE Task Section Els Common part */
+ spfc_build_service_wqe_ts_common(&sqe->ts_sl, parent_sq_info->rport_index,
+ UNF_GET_RXID(pkg), remote_exid,
+ SPFC_LSW(frame_pld_len));
+ } else {
+ remote_exid = UNF_GET_RXID(pkg);
+		/* send ELS req, only use local_xid for hotpooltag */
+ spfc_build_els_wqe_ts_req(sqe, parent_sq_info,
+ prt_queue_info->parent_sts_scq_info.cqm_queue_id,
+ frame_pld_addr, pkg);
+ spfc_build_service_wqe_ts_common(&sqe->ts_sl, parent_sq_info->rport_index, hot_tag,
+ remote_exid, SPFC_LSW(frame_pld_len));
+ }
+ /* Assemble the SQE Control Section part */
+ spfc_build_service_wqe_ctrl_section(&sqe->ctrl_sl, SPFC_BYTES_TO_QW_NUM(SPFC_SQE_TS_SIZE),
+ SPFC_BYTES_TO_QW_NUM(sizeof(struct spfc_variable_sge)));
+
+ /* Build SGE */
+ spfc_build_els_gs_wqe_sge(sqe, frame_pld_addr, frame_pa, frame_pld_len,
+ parent_sq_info->context_id, hba);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%x) send ELS/GS Type(0x%x) Code(0x%x) HotTag(0x%x)",
+ hba->port_cfg.port_id, parent_sq_info->rport_index, ls_gs_cmnd_type,
+ ls_gs_cmnd_code, hot_tag);
+ if (ls_gs_cmnd_code == ELS_PLOGI || ls_gs_cmnd_code == ELS_LOGO) {
+ ret = spfc_suspend_sqe_and_send_nop(hba, prt_queue_info, sqe, pkg);
+ return ret;
+ }
+ prtq_state_lock = &prt_queue_info->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flags);
+ if (SPFC_RPORT_NOT_OFFLOADED(prt_queue_info)) {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+		/* Send PLOGI, PLOGI ACC or SCR if the session is not offloaded */
+ ret = spfc_send_els_via_default_session(hba, sqe, pkg, prt_queue_info);
+ } else {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+ ret = spfc_parent_sq_enqueue(parent_sq_info, sqe, ssqn);
+ }
+
+ return ret;
+}
+
+u32 spfc_send_ls_gs_cmnd(void *handle, struct unf_frame_pkg *pkg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+ u16 ls_gs_cmnd_code = SPFC_ZERO;
+ union unf_sfs_u *sfs_entry = NULL;
+ struct unf_rrq *rrq_pld = NULL;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ /* Check Parameters */
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(UNF_GET_SFS_ENTRY(pkg), UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(SPFC_GET_CMND_PAYLOAD_ADDR(pkg), UNF_RETURN_ERROR);
+
+ SPFC_CHECK_PKG_ALLOCTIME(pkg);
+ hba = (struct spfc_hba_info *)handle;
+ ls_gs_cmnd_code = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
+
+ /* If RRQ Req, Special processing */
+ if (ls_gs_cmnd_code == ELS_RRQ) {
+ sfs_entry = UNF_GET_SFS_ENTRY(pkg);
+ rrq_pld = &sfs_entry->rrq;
+ ox_id = (u16)(rrq_pld->oxid_rxid >> UNF_SHIFT_16);
+ rx_id = (u16)(rrq_pld->oxid_rxid & SPFC_RXID_MASK);
+ rrq_pld->oxid_rxid = (u32)ox_id << UNF_SHIFT_16 | rx_id;
+ }
+
+ prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
+ ret = spfc_check_parent_qinfo_valid(hba, pkg, &prt_qinfo);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[error]Port(0x%x) send ELS/GS SID(0x%x) DID(0x%x) check qinfo invalid",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+ return UNF_RETURN_ERROR;
+ }
+
+ ret = spfc_send_ls_gs_via_parent(hba, pkg, prt_qinfo);
+
+ return ret;
+}
+
+void spfc_save_login_parms_in_sq_info(struct spfc_hba_info *hba,
+ struct unf_port_login_parms *login_params)
+{
+ u32 rport_index = login_params->rport_index;
+ struct spfc_parent_sq_info *parent_sq_info = NULL;
+
+ if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) save login parms,but uplevel alloc invalid rport index: 0x%x",
+ hba->port_cfg.port_id, rport_index);
+
+ return;
+ }
+
+ parent_sq_info = &hba->parent_queue_mgr->parent_queue[rport_index].parent_sq_info;
+
+ parent_sq_info->plogi_co_parms.seq_cnt = login_params->seq_cnt;
+ parent_sq_info->plogi_co_parms.ed_tov = login_params->ed_tov;
+ parent_sq_info->plogi_co_parms.tx_mfs = (login_params->tx_mfs <
+ SPFC_DEFAULT_TX_MAX_FREAM_SIZE) ?
+ SPFC_DEFAULT_TX_MAX_FREAM_SIZE :
+ login_params->tx_mfs;
+ parent_sq_info->plogi_co_parms.ed_tov_time = login_params->ed_tov_timer_val;
+}
+
+static void
+spfc_recover_offloading_state(struct spfc_parent_queue_info *prt_queue_info,
+ enum spfc_parent_queue_state offload_state)
+{
+ ulong flags = 0;
+
+ spin_lock_irqsave(&prt_queue_info->parent_queue_state_lock, flags);
+
+ if (prt_queue_info->offload_state == SPFC_QUEUE_STATE_OFFLOADING)
+ prt_queue_info->offload_state = offload_state;
+
+ spin_unlock_irqrestore(&prt_queue_info->parent_queue_state_lock, flags);
+}
+
+static bool spfc_check_need_delay_offload(void *hba, struct unf_frame_pkg *pkg, u32 rport_index,
+ struct spfc_parent_queue_info *cur_prt_queue_info,
+ struct spfc_parent_queue_info **offload_prt_queue_info)
+{
+ ulong flags = 0;
+ struct spfc_parent_queue_info *prt_queue_info = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ prtq_state_lock = &cur_prt_queue_info->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flags);
+
+ if (cur_prt_queue_info->offload_state == SPFC_QUEUE_STATE_OFFLOADING) {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+
+ prt_queue_info = spfc_find_offload_parent_queue(hba, pkg->frame_head.csctl_sid &
+ UNF_NPORTID_MASK,
+ pkg->frame_head.rctl_did &
+ UNF_NPORTID_MASK, rport_index);
+ if (prt_queue_info) {
+ *offload_prt_queue_info = prt_queue_info;
+ return true;
+ }
+ } else {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+ }
+
+ return false;
+}
+
+static u16 spfc_build_wqe_with_offload(struct spfc_hba_info *hba, struct spfc_sqe *sqe,
+ struct spfc_parent_queue_info *prt_queue_info,
+ struct unf_frame_pkg *pkg,
+ enum spfc_parent_queue_state last_offload_state)
+{
+ u32 tx_mfs = 2048;
+ u32 edtov_timer = 2000;
+ dma_addr_t ctx_pa = 0;
+ u16 els_cmnd_type = SPFC_ZERO;
+ u16 els_cmnd_code = SPFC_ZERO;
+ void *ctx_va = NULL;
+ struct spfc_parent_context *parent_ctx_info = NULL;
+ struct spfc_sw_section *sw_setction = NULL;
+ struct spfc_parent_sq_info *parent_sq_info = &prt_queue_info->parent_sq_info;
+ u16 offload_flag = 0;
+
+ els_cmnd_type = SPFC_GET_ELS_RSP_TYPE(pkg->cmnd);
+ if (SPFC_PKG_IS_ELS_RSP(els_cmnd_type)) {
+ els_cmnd_code = SPFC_GET_ELS_RSP_CODE(pkg->cmnd);
+ } else {
+ els_cmnd_code = els_cmnd_type;
+ els_cmnd_type = ELS_CMND;
+ }
+
+ offload_flag = SPFC_CHECK_NEED_OFFLOAD(els_cmnd_code, els_cmnd_type, last_offload_state);
+
+ parent_ctx_info = (struct spfc_parent_context *)(prt_queue_info->parent_ctx.parent_ctx);
+ sw_setction = &parent_ctx_info->sw_section;
+
+ sw_setction->tx_mfs = cpu_to_be16((u16)(tx_mfs));
+ sw_setction->e_d_tov_timer_val = cpu_to_be32(edtov_timer);
+
+ spfc_big_to_cpu32(&sw_setction->sw_ctxt_misc.pctxt_val0,
+ sizeof(sw_setction->sw_ctxt_misc.pctxt_val0));
+ sw_setction->sw_ctxt_misc.dw.port_id = SPFC_GET_NETWORK_PORT_ID(hba);
+ spfc_cpu_to_big32(&sw_setction->sw_ctxt_misc.pctxt_val0,
+ sizeof(sw_setction->sw_ctxt_misc.pctxt_val0));
+
+ spfc_big_to_cpu32(&sw_setction->sw_ctxt_config.pctxt_val1,
+ sizeof(sw_setction->sw_ctxt_config.pctxt_val1));
+ spfc_cpu_to_big32(&sw_setction->sw_ctxt_config.pctxt_val1,
+ sizeof(sw_setction->sw_ctxt_config.pctxt_val1));
+
+	/* Fill in the context for the chip */
+ ctx_pa = prt_queue_info->parent_ctx.cqm_parent_ctx_obj->paddr;
+ ctx_va = prt_queue_info->parent_ctx.cqm_parent_ctx_obj->vaddr;
+
+	/* No need to write the key and no need to do BIG TO CPU32 */
+ memcpy(ctx_va, prt_queue_info->parent_ctx.parent_ctx, sizeof(struct spfc_parent_context));
+
+ if (SPFC_PKG_IS_ELS_RSP(els_cmnd_type)) {
+ sqe->ts_sl.cont.els_rsp.context_gpa_hi = SPFC_HIGH_32_BITS(ctx_pa);
+ sqe->ts_sl.cont.els_rsp.context_gpa_lo = SPFC_LOW_32_BITS(ctx_pa);
+ sqe->ts_sl.cont.els_rsp.wd1.offload_flag = offload_flag;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]sid 0x%x, did 0x%x, GPA HIGH 0x%x,GPA LOW 0x%x, scq 0x%x,offload flag 0x%x",
+ parent_sq_info->local_port_id,
+ parent_sq_info->remote_port_id,
+ sqe->ts_sl.cont.els_rsp.context_gpa_hi,
+ sqe->ts_sl.cont.els_rsp.context_gpa_lo,
+ prt_queue_info->parent_sts_scq_info.cqm_queue_id,
+ offload_flag);
+ } else {
+ sqe->ts_sl.cont.t_els_gs.context_gpa_hi = SPFC_HIGH_32_BITS(ctx_pa);
+ sqe->ts_sl.cont.t_els_gs.context_gpa_lo = SPFC_LOW_32_BITS(ctx_pa);
+ sqe->ts_sl.cont.t_els_gs.wd4.offload_flag = offload_flag;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]sid 0x%x, did 0x%x, GPA HIGH 0x%x,GPA LOW 0x%x, scq 0x%x,offload flag 0x%x",
+ parent_sq_info->local_port_id,
+ parent_sq_info->remote_port_id,
+ sqe->ts_sl.cont.t_els_gs.context_gpa_hi,
+ sqe->ts_sl.cont.t_els_gs.context_gpa_lo,
+ prt_queue_info->parent_sts_scq_info.cqm_queue_id,
+ offload_flag);
+ }
+
+ if (offload_flag) {
+ prt_queue_info->offload_state = SPFC_QUEUE_STATE_OFFLOADING;
+ parent_sq_info->need_offloaded = SPFC_NEED_DO_OFFLOAD;
+ }
+
+ return offload_flag;
+}
+
+u32 spfc_send_els_via_default_session(struct spfc_hba_info *hba, struct spfc_sqe *io_sqe,
+ struct unf_frame_pkg *pkg,
+ struct spfc_parent_queue_info *prt_queue_info)
+{
+ ulong flags = 0;
+ bool sqe_delay = false;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 els_cmnd_code = SPFC_ZERO;
+ u16 els_cmnd_type = SPFC_ZERO;
+ u16 ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ u32 rport_index = pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
+ struct spfc_sqe *sqe = io_sqe;
+ struct spfc_parent_queue_info *default_prt_queue_info = NULL;
+ struct spfc_parent_sq_info *parent_sq_info = &prt_queue_info->parent_sq_info;
+ struct spfc_parent_queue_info *offload_queue_info = NULL;
+ enum spfc_parent_queue_state last_offload_state = SPFC_QUEUE_STATE_INITIALIZED;
+ struct spfc_delay_destroy_ctrl_info delay_ctl_info;
+ u16 offload_flag = 0;
+ u32 default_index = SPFC_DEFAULT_RPORT_INDEX;
+
+ memset(&delay_ctl_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
+ /* Determine the ELS type in pkg */
+ els_cmnd_type = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
+
+ if (SPFC_PKG_IS_ELS_RSP(els_cmnd_type)) {
+ els_cmnd_code = SPFC_GET_ELS_RSP_CODE(pkg->cmnd);
+ } else {
+ els_cmnd_code = els_cmnd_type;
+ els_cmnd_type = ELS_CMND;
+ }
+
+ spin_lock_irqsave(&prt_queue_info->parent_queue_state_lock, flags);
+
+ last_offload_state = prt_queue_info->offload_state;
+
+ offload_flag = spfc_build_wqe_with_offload(hba, sqe, prt_queue_info,
+ pkg, last_offload_state);
+
+ spin_unlock_irqrestore(&prt_queue_info->parent_queue_state_lock, flags);
+
+ if (!offload_flag) {
+ default_prt_queue_info = &hba->parent_queue_mgr->parent_queue[default_index];
+ if (!default_prt_queue_info) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[ERR]cmd(0x%x), type(0x%x) send fail, default session null",
+ els_cmnd_code, els_cmnd_type);
+ return UNF_RETURN_ERROR;
+ }
+ parent_sq_info = &default_prt_queue_info->parent_sq_info;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]cmd(0x%x), type(0x%x) send via default session",
+ els_cmnd_code, els_cmnd_type);
+ } else {
+		/* This xid is needed to judge delayed offload; it will be
+		 * written again when the SQE is enqueued.
+		 */
+ sqe->ts_sl.xid = parent_sq_info->context_id;
+ sqe_delay = spfc_check_need_delay_offload(hba, pkg, rport_index, prt_queue_info,
+ &offload_queue_info);
+
+ if (sqe_delay) {
+ ret = spfc_push_delay_sqe(hba, offload_queue_info, sqe, pkg);
+ if (ret == RETURN_OK) {
+ spfc_recover_offloading_state(prt_queue_info, last_offload_state);
+ return ret;
+ }
+ }
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]cmd(0x%x), type(0x%x) do secretly offload",
+ els_cmnd_code, els_cmnd_type);
+ }
+
+ ret = spfc_parent_sq_enqueue(parent_sq_info, sqe, ssqn);
+
+ if (ret != RETURN_OK) {
+ spfc_recover_offloading_state(prt_queue_info, last_offload_state);
+
+ spin_lock_irqsave(&prt_queue_info->parent_queue_state_lock,
+ flags);
+
+ if (prt_queue_info->parent_sq_info.destroy_sqe.valid) {
+ memcpy(&delay_ctl_info, &prt_queue_info->parent_sq_info.destroy_sqe,
+ sizeof(struct spfc_delay_destroy_ctrl_info));
+
+ prt_queue_info->parent_sq_info.destroy_sqe.valid = false;
+ }
+
+ spin_unlock_irqrestore(&prt_queue_info->parent_queue_state_lock, flags);
+
+ spfc_pop_destroy_parent_queue_sqe((void *)hba, &delay_ctl_info);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) send ELS Type(0x%x) Code(0x%x) fail,recover offloadstatus(%u)",
+ hba->port_cfg.port_id, rport_index, els_cmnd_type,
+ els_cmnd_code, prt_queue_info->offload_state);
+ }
+
+ return ret;
+}
+
+static u32 spfc_rcv_ls_gs_rsp_payload(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag,
+ u8 *els_pld_buf, u32 pld_len)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+ if (pkg->type == UNF_PKG_GS_REQ_DONE)
+ spfc_big_to_cpu32(els_pld_buf, pld_len);
+ else
+ pkg->byte_orders |= SPFC_BIT_2;
+
+ pkg->unf_cmnd_pload_bl.buffer_ptr = els_pld_buf;
+ pkg->unf_cmnd_pload_bl.length = pld_len;
+
+ pkg->last_pkg_flag = UNF_PKG_NOT_LAST_RESPONSE;
+
+ UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_scq_recv_abts_rsp(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ /* Default path, which is sent from SCQ to the driver */
+ u8 status = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 ox_id = INVALID_VALUE32;
+ u32 hot_tag = INVALID_VALUE32;
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_rcv_abts_rsp *abts_rsp = NULL;
+
+ abts_rsp = &scqe->rcv_abts_rsp;
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = abts_rsp->magic_num;
+
+ ox_id = (u32)(abts_rsp->wd0.ox_id);
+
+ hot_tag = abts_rsp->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK;
+ if (unlikely(hot_tag < (u32)hba->exi_base ||
+ hot_tag >= (u32)(hba->exi_base + hba->exi_count))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) has bad HotTag(0x%x) for bls_rsp",
+ hba->port_cfg.port_id, hot_tag);
+
+ status = UNF_IO_FAILED;
+ hot_tag = INVALID_VALUE32;
+ } else {
+ hot_tag -= hba->exi_base;
+ if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) BLS response has error code(0x%x) tag(0x%x)",
+ hba->port_cfg.port_id,
+ SPFC_GET_SCQE_STATUS(scqe), (u32)hot_tag);
+
+ status = UNF_IO_FAILED;
+ } else {
+ pkg.frame_head.rctl_did = abts_rsp->wd3.did;
+ pkg.frame_head.csctl_sid = abts_rsp->wd4.sid;
+ pkg.frame_head.oxid_rxid = (u32)(abts_rsp->wd0.rx_id) | ox_id <<
+ UNF_SHIFT_16;
+
+ /* BLS_ACC/BLS_RJT: IO_succeed */
+ if (abts_rsp->wd2.fh_rctrl == SPFC_RCTL_BLS_ACC) {
+ status = UNF_IO_SUCCESS;
+ } else if (abts_rsp->wd2.fh_rctrl == SPFC_RCTL_BLS_RJT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) ABTS RJT: %08x-%08x-%08x",
+ hba->port_cfg.port_id,
+ abts_rsp->payload[ARRAY_INDEX_0],
+ abts_rsp->payload[ARRAY_INDEX_1],
+ abts_rsp->payload[ARRAY_INDEX_2]);
+
+ status = UNF_IO_SUCCESS;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) BLS response RCTL is error",
+ hba->port_cfg.port_id);
+ SPFC_ERR_IO_STAT(hba, SPFC_SCQE_ABTS_RSP);
+ status = UNF_IO_FAILED;
+ }
+ }
+ }
+
+ /* Set PKG/exchange status & Process BLS_RSP */
+ pkg.status = status;
+ ret = spfc_rcv_bls_rsp(hba, &pkg, hot_tag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) recv ABTS rsp OX_ID(0x%x) RX_ID(0x%x) HotTag(0x%x) SID(0x%x) DID(0x%x) %s",
+ hba->port_cfg.port_id, ox_id, abts_rsp->wd0.rx_id, hot_tag,
+ abts_rsp->wd4.sid, abts_rsp->wd3.did,
+ (ret == RETURN_OK) ? "OK" : "ERROR");
+
+ return ret;
+}
+
+u32 spfc_recv_els_cmnd(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u8 *els_pld, u32 pld_len,
+ bool first)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+	/* Convert the payload to little endian */
+ spfc_big_to_cpu32(els_pld, pld_len);
+
+ pkg->type = UNF_PKG_ELS_REQ;
+
+ pkg->unf_cmnd_pload_bl.buffer_ptr = els_pld;
+
+ /* Payload length */
+ pkg->unf_cmnd_pload_bl.length = pld_len;
+
+	/* Obtain the Cmnd type from the payload. The Cmnd is in little endian */
+ if (first)
+ pkg->cmnd = UNF_GET_FC_PAYLOAD_ELS_CMND(pkg->unf_cmnd_pload_bl.buffer_ptr);
+
+ /* Errors have been processed in SPFC_RecvElsError */
+ pkg->status = UNF_IO_SUCCESS;
+
+ /* Send PKG to the CM layer */
+ UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, hba->lport, pkg);
+
+ if (ret != RETURN_OK) {
+ pkg->rx_or_ox_id = UNF_PKG_FREE_RXID;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE32;
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = INVALID_VALUE32;
+ ret = spfc_free_xid((void *)hba, pkg);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) recv %s ox_id(0x%x) RXID(0x%x) PldLen(0x%x) failed, Free xid %s",
+ hba->port_cfg.port_id,
+ UNF_GET_FC_HEADER_RCTL(&pkg->frame_head) == SPFC_FC_RCTL_ELS_REQ ?
+ "ELS REQ" : "ELS RSP",
+ UNF_GET_OXID(pkg), UNF_GET_RXID(pkg), pld_len,
+ (ret == RETURN_OK) ? "OK" : "ERROR");
+ }
+
+ return ret;
+}
+
+u32 spfc_rcv_ls_gs_rsp(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+ if (pkg->type == UNF_PKG_ELS_REQ_DONE)
+ pkg->byte_orders |= SPFC_BIT_2;
+
+ pkg->last_pkg_flag = UNF_PKG_LAST_RESPONSE;
+
+ UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_rcv_els_rsp_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->type = UNF_PKG_ELS_REPLY_DONE;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+
+ UNF_LOWLEVEL_SEND_ELS_DONE(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_rcv_bls_rsp(const struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
+ u32 hot_tag)
+{
+	/*
+	 * 1. from SCQ (normal)
+	 * 2. from Root RQ (parent does not exist)
+	 *
+	 * single frame, single sequence
+	 */
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->type = UNF_PKG_BLS_REQ_DONE;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+ pkg->last_pkg_flag = UNF_PKG_LAST_RESPONSE;
+
+ UNF_LOWLEVEL_RECEIVE_BLS_PKG(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_rsv_bls_rsp_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 rx_id)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->type = UNF_PKG_BLS_REPLY_DONE;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = rx_id;
+
+ UNF_LOWLEVEL_RECEIVE_BLS_PKG(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_rcv_tmf_marker_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+
+ /* Send PKG info to COM */
+ UNF_LOWLEVEL_RECEIVE_MARKER_STS(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_rcv_abts_marker_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+
+ UNF_LOWLEVEL_RECEIVE_ABTS_MARKER_STS(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+static void spfc_scqe_error_pre_proc(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ /* Currently, only printing and statistics collection are performed */
+ SPFC_ERR_IO_STAT(hba, SPFC_GET_SCQE_TYPE(scqe));
+ SPFC_SCQ_ERR_TYPE_STAT(hba, SPFC_GET_SCQE_STATUS(scqe));
+
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
+ "[warn]Port(0x%x)-Task_type(%u) SCQE contain error code(%u),additional info(0x%x)",
+ hba->port_cfg.port_id, scqe->common.ch.wd0.task_type,
+ scqe->common.ch.wd0.err_code, scqe->common.conn_id);
+}
+
+void *spfc_get_els_buf_by_user_id(struct spfc_hba_info *hba, u16 user_id)
+{
+ struct spfc_drq_buff_entry *srq_buf_entry = NULL;
+ struct spfc_srq_info *srq_info = NULL;
+
+ FC_CHECK_RETURN_VALUE(hba, NULL);
+
+ srq_info = &hba->els_srq_info;
+ FC_CHECK_RETURN_VALUE(user_id < srq_info->valid_wqe_num, NULL);
+
+ srq_buf_entry = &srq_info->els_buff_entry_head[user_id];
+
+ return srq_buf_entry->buff_addr;
+}
+
+static u32 spfc_check_srq_buf_valid(struct spfc_hba_info *hba,
+ u16 *buf_id_array, u32 buf_num)
+{
+ u32 index = 0;
+ u32 buf_id = 0;
+ void *srq_buf = NULL;
+
+ for (index = 0; index < buf_num; index++) {
+ buf_id = buf_id_array[index];
+
+ if (buf_id < hba->els_srq_info.valid_wqe_num)
+ srq_buf = spfc_get_els_buf_by_user_id(hba, (u16)buf_id);
+ else
+ srq_buf = NULL;
+
+ if (!srq_buf) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) get srq buffer user id(0x%x) is null",
+ hba->port_cfg.port_id, buf_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ return RETURN_OK;
+}
+
+static void spfc_reclaim_srq_buf(struct spfc_hba_info *hba, u16 *buf_id_array,
+ u32 buf_num)
+{
+ u32 index = 0;
+ u32 buf_id = 0;
+ void *srq_buf = NULL;
+
+ for (index = 0; index < buf_num; index++) {
+ buf_id = buf_id_array[index];
+ if (buf_id < hba->els_srq_info.valid_wqe_num)
+ srq_buf = spfc_get_els_buf_by_user_id(hba, (u16)buf_id);
+ else
+ srq_buf = NULL;
+
+		/* A NULL buffer means the buffer id is invalid; stop
+		 * reclaiming here.
+		 */
+ if (!srq_buf)
+ break;
+
+ spfc_post_els_srq_wqe(&hba->els_srq_info, (u16)buf_id);
+ }
+}
+
+static u32 spfc_check_ls_gs_valid(struct spfc_hba_info *hba, union spfc_scqe *scqe,
+ struct unf_frame_pkg *pkg, u16 *buf_id_array,
+ u32 buf_num, u32 frame_len)
+{
+ u32 hot_tag;
+
+ hot_tag = UNF_GET_HOTPOOL_TAG(pkg);
+
+ /* The ELS CMD returns an error code and discards it directly */
+ if ((sizeof(struct spfc_fc_frame_header) > frame_len) ||
+ (SPFC_SCQE_HAS_ERRCODE(scqe)) || buf_num > SPFC_ELS_SRQ_BUF_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) get scqe type(0x%x) payload len(0x%x),scq status(0x%x),user id num(0x%x) abnormal",
+ hba->port_cfg.port_id, SPFC_GET_SCQE_TYPE(scqe), frame_len,
+ SPFC_GET_SCQE_STATUS(scqe), buf_num);
+
+ /* ELS RSP Special Processing */
+ if (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_RSP ||
+ SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_GS_RSP) {
+ if (SPFC_SCQE_ERR_TO_CM(scqe)) {
+ pkg->status = UNF_IO_FAILED;
+ (void)spfc_rcv_ls_gs_rsp(hba, pkg, hot_tag);
+ } else {
+ if (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_RSP)
+ SPFC_HBA_STAT(hba, SPFC_STAT_ELS_RSP_EXCH_REUSE);
+ else
+ SPFC_HBA_STAT(hba, SPFC_STAT_GS_RSP_EXCH_REUSE);
+ }
+ }
+
+ /* Reclaim srq */
+ if (buf_num <= SPFC_ELS_SRQ_BUF_NUM)
+ spfc_reclaim_srq_buf(hba, buf_id_array, buf_num);
+
+ return UNF_RETURN_ERROR;
+ }
+
+	/* For an ELS CMD, check the validity of the buffer sent by the ucode */
+ if (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_CMND) {
+ if (spfc_check_srq_buf_valid(hba, buf_id_array, buf_num) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) get els cmnd scqe user id num(0x%x) abnormal, as some srq buff is null",
+ hba->port_cfg.port_id, buf_num);
+
+ spfc_reclaim_srq_buf(hba, buf_id_array, buf_num);
+
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ return RETURN_OK;
+}
+
+u32 spfc_scq_recv_els_cmnd(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 ret = RETURN_OK;
+ u32 pld_len = 0;
+ u32 header_len = 0;
+ u32 frame_len = 0;
+ u32 rcv_data_len = 0;
+ u32 max_buf_num = 0;
+ u16 buf_id = 0;
+ u32 index = 0;
+ u8 *pld_addr = NULL;
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_rcv_els_cmd *els_cmd = NULL;
+ struct spfc_fc_frame_header *els_frame = NULL;
+ struct spfc_fc_frame_header tmp_frame = {0};
+ void *els_buf = NULL;
+ bool first = false;
+
+ els_cmd = &scqe->rcv_els_cmd;
+ frame_len = els_cmd->wd3.data_len;
+ max_buf_num = els_cmd->wd3.user_id_num;
+ spfc_swap_16_in_32((u32 *)els_cmd->user_id, SPFC_LS_GS_USERID_LEN);
+
+ pkg.xchg_contex = NULL;
+ pkg.status = UNF_IO_SUCCESS;
+
+	/* Check the validity of the error code and buffers. If an exception
+	 * occurs, the frame is discarded.
+	 */
+ ret = spfc_check_ls_gs_valid(hba, scqe, &pkg, els_cmd->user_id,
+ max_buf_num, frame_len);
+ if (ret != RETURN_OK) {
+ pkg.rx_or_ox_id = UNF_PKG_FREE_RXID;
+ pkg.frame_head.oxid_rxid =
+ (u32)(els_cmd->wd2.rx_id) | (u32)(els_cmd->wd2.ox_id) << UNF_SHIFT_16;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE32;
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = INVALID_VALUE32;
+ pkg.frame_head.csctl_sid = els_cmd->wd1.sid;
+ pkg.frame_head.rctl_did = els_cmd->wd0.did;
+ spfc_free_xid((void *)hba, &pkg);
+ return RETURN_OK;
+ }
+
+ /* Send data to COM cyclically */
+ for (index = 0; index < max_buf_num; index++) {
+		/* Abnormal case: log only, no further handling for now */
+ if (rcv_data_len >= frame_len) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+				     "[err]Port(0x%x) get els cmd data len(0x%x) is bigger than frame len(0x%x)",
+ hba->port_cfg.port_id, rcv_data_len, frame_len);
+ }
+
+ buf_id = (u16)els_cmd->user_id[index];
+ els_buf = spfc_get_els_buf_by_user_id(hba, buf_id);
+
+		/* Obtain payload address */
+ pld_addr = (u8 *)(els_buf);
+ header_len = 0;
+ first = false;
+ if (index == 0) {
+ els_frame = (struct spfc_fc_frame_header *)els_buf;
+ pld_addr = (u8 *)(els_frame + 1);
+
+ header_len = sizeof(struct spfc_fc_frame_header);
+ first = true;
+
+ memcpy(&tmp_frame, els_frame, sizeof(struct spfc_fc_frame_header));
+ spfc_big_to_cpu32(&tmp_frame, sizeof(struct spfc_fc_frame_header));
+ memcpy(&pkg.frame_head, &tmp_frame, sizeof(pkg.frame_head));
+ pkg.frame_head.oxid_rxid = (u32)((pkg.frame_head.oxid_rxid &
+ SPFC_OXID_MASK) | (els_cmd->wd2.rx_id));
+ }
+
+		/* Calculate the payload length */
+ pkg.last_pkg_flag = 0;
+ pld_len = SPFC_SRQ_ELS_SGE_LEN;
+
+ if ((rcv_data_len + SPFC_SRQ_ELS_SGE_LEN) >= frame_len) {
+ pkg.last_pkg_flag = 1;
+ pld_len = frame_len - rcv_data_len;
+ }
+
+ pkg.class_mode = els_cmd->wd0.class_mode;
+
+ /* Push data to COM */
+ if (ret == RETURN_OK) {
+ ret = spfc_recv_els_cmnd(hba, &pkg, pld_addr,
+ (pld_len - header_len), first);
+ }
+
+ /* Reclaim srq buffer */
+ spfc_post_els_srq_wqe(&hba->els_srq_info, buf_id);
+
+ rcv_data_len += pld_len;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) recv ELS Type(0x%x) Cmnd(0x%x) ox_id(0x%x) RXID(0x%x) SID(0x%x) DID(0x%x) %u",
+ hba->port_cfg.port_id, pkg.type, pkg.cmnd, els_cmd->wd2.ox_id,
+ els_cmd->wd2.rx_id, els_cmd->wd1.sid, els_cmd->wd0.did, ret);
+
+ return ret;
+}
+
+static u32 spfc_get_ls_gs_pld_len(struct spfc_hba_info *hba, u32 rcv_data_len, u32 frame_len)
+{
+ u32 pld_len;
+
+	/* Abnormal case: log only, no further handling for now */
+ if (rcv_data_len >= frame_len) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) get els rsp data len(0x%x) is bigger than frame len(0x%x)",
+ hba->port_cfg.port_id, rcv_data_len, frame_len);
+ }
+
+ pld_len = SPFC_SRQ_ELS_SGE_LEN;
+ if ((rcv_data_len + SPFC_SRQ_ELS_SGE_LEN) >= frame_len)
+ pld_len = frame_len - rcv_data_len;
+
+ return pld_len;
+}
+
+u32 spfc_scq_recv_ls_gs_rsp(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 ret = RETURN_OK;
+ u32 pld_len = 0;
+ u32 header_len = 0;
+ u32 frame_len = 0;
+ u32 rcv_data_len = 0;
+ u32 max_buf_num = 0;
+ u16 buf_id = 0;
+ u32 hot_tag = INVALID_VALUE32;
+ u32 index = 0;
+ u32 ox_id = (~0);
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_rcv_els_gs_rsp *ls_gs_rsp_scqe = NULL;
+ struct spfc_fc_frame_header *els_frame = NULL;
+ void *ls_gs_buf = NULL;
+ u8 *pld_addr = NULL;
+ u8 task_type;
+
+ ls_gs_rsp_scqe = &scqe->rcv_els_gs_rsp;
+ frame_len = ls_gs_rsp_scqe->wd2.data_len;
+ max_buf_num = ls_gs_rsp_scqe->wd4.user_id_num;
+ spfc_swap_16_in_32((u32 *)ls_gs_rsp_scqe->user_id, SPFC_LS_GS_USERID_LEN);
+
+ ox_id = ls_gs_rsp_scqe->wd1.ox_id;
+ hot_tag = ((u16)(ls_gs_rsp_scqe->wd5.hotpooltag) & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ pkg.frame_head.oxid_rxid = (u32)(ls_gs_rsp_scqe->wd1.rx_id) | ox_id << UNF_SHIFT_16;
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = ls_gs_rsp_scqe->magic_num;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+ pkg.frame_head.csctl_sid = ls_gs_rsp_scqe->wd4.sid;
+ pkg.frame_head.rctl_did = ls_gs_rsp_scqe->wd3.did;
+ pkg.status = UNF_IO_SUCCESS;
+ pkg.type = UNF_PKG_ELS_REQ_DONE;
+
+ task_type = SPFC_GET_SCQE_TYPE(scqe);
+ if (task_type == SPFC_SCQE_GS_RSP) {
+ if (ls_gs_rsp_scqe->wd3.end_rsp)
+ SPFC_HBA_STAT(hba, SPFC_STAT_LAST_GS_SCQE);
+ pkg.type = UNF_PKG_GS_REQ_DONE;
+ }
+
+	/* Handle errors first: if the LS/GS RSP carries an error code, only
+	 * the hot tag (ox_id) is reported to the CM layer.
+	 */
+ ret = spfc_check_ls_gs_valid(hba, scqe, &pkg, ls_gs_rsp_scqe->user_id,
+ max_buf_num, frame_len);
+ if (ret != RETURN_OK)
+ return RETURN_OK;
+
+ if (ls_gs_rsp_scqe->wd3.echo_rsp) {
+ pkg.private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME] =
+ ls_gs_rsp_scqe->user_id[ARRAY_INDEX_5];
+ pkg.private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME] =
+ ls_gs_rsp_scqe->user_id[ARRAY_INDEX_6];
+ pkg.private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME] =
+ ls_gs_rsp_scqe->user_id[ARRAY_INDEX_7];
+ pkg.private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME] =
+ ls_gs_rsp_scqe->user_id[ARRAY_INDEX_8];
+ }
+
+ /* Send data to COM cyclically */
+ for (index = 0; index < max_buf_num; index++) {
+ /* Obtain buffer address */
+ ls_gs_buf = NULL;
+ buf_id = (u16)ls_gs_rsp_scqe->user_id[index];
+ ls_gs_buf = spfc_get_els_buf_by_user_id(hba, buf_id);
+
+ if (unlikely(!ls_gs_buf)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) ox_id(0x%x) RXID(0x%x) SID(0x%x) DID(0x%x) Index(0x%x) get els rsp buff user id(0x%x) abnormal",
+ hba->port_cfg.port_id, ox_id,
+ ls_gs_rsp_scqe->wd1.rx_id, ls_gs_rsp_scqe->wd4.sid,
+ ls_gs_rsp_scqe->wd3.did, index, buf_id);
+
+ if (index == 0) {
+ pkg.status = UNF_IO_FAILED;
+ ret = spfc_rcv_ls_gs_rsp(hba, &pkg, hot_tag);
+ }
+
+ return ret;
+ }
+
+ header_len = 0;
+ pld_addr = (u8 *)(ls_gs_buf);
+ if (index == 0) {
+ header_len = sizeof(struct spfc_fc_frame_header);
+ els_frame = (struct spfc_fc_frame_header *)ls_gs_buf;
+ pld_addr = (u8 *)(els_frame + 1);
+ }
+
+		/* Calculate the payload length */
+ pld_len = spfc_get_ls_gs_pld_len(hba, rcv_data_len, frame_len);
+
+ /* Push data to COM */
+ if (ret == RETURN_OK) {
+ ret = spfc_rcv_ls_gs_rsp_payload(hba, &pkg, hot_tag, pld_addr,
+ (pld_len - header_len));
+ }
+
+ /* Reclaim srq buffer */
+ spfc_post_els_srq_wqe(&hba->els_srq_info, buf_id);
+
+ rcv_data_len += pld_len;
+ }
+
+ if (ls_gs_rsp_scqe->wd3.end_rsp && ret == RETURN_OK)
+ ret = spfc_rcv_ls_gs_rsp(hba, &pkg, hot_tag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) receive LS/GS RSP ox_id(0x%x) RXID(0x%x) SID(0x%x) DID(0x%x) end_rsp(0x%x) user_num(0x%x)",
+ hba->port_cfg.port_id, ox_id, ls_gs_rsp_scqe->wd1.rx_id,
+ ls_gs_rsp_scqe->wd4.sid, ls_gs_rsp_scqe->wd3.did,
+ ls_gs_rsp_scqe->wd3.end_rsp,
+ ls_gs_rsp_scqe->wd4.user_id_num);
+
+ return ret;
+}
+
+u32 spfc_scq_recv_els_rsp_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 rx_id = INVALID_VALUE32;
+ u32 hot_tag = INVALID_VALUE32;
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_comm_rsp_sts *els_rsp_sts_scqe = NULL;
+
+ els_rsp_sts_scqe = &scqe->comm_sts;
+ rx_id = (u32)els_rsp_sts_scqe->wd0.rx_id;
+
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ els_rsp_sts_scqe->magic_num;
+ pkg.frame_head.oxid_rxid = rx_id | (u32)(els_rsp_sts_scqe->wd0.ox_id) << UNF_SHIFT_16;
+ hot_tag = (u32)((els_rsp_sts_scqe->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) -
+ hba->exi_base);
+
+ if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
+ pkg.status = UNF_IO_FAILED;
+ else
+ pkg.status = UNF_IO_SUCCESS;
+
+ ret = spfc_rcv_els_rsp_sts(hba, &pkg, hot_tag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) recv ELS RSP STS ox_id(0x%x) RXID(0x%x) HotTag(0x%x) %s",
+ hba->port_cfg.port_id, els_rsp_sts_scqe->wd0.ox_id, rx_id,
+ hot_tag, (ret == RETURN_OK) ? "OK" : "ERROR");
+
+ return ret;
+}
+
+static u32 spfc_check_rport_valid(const struct spfc_parent_queue_info *prt_queue_info, u32 scqe_xid)
+{
+ if (prt_queue_info->parent_ctx.cqm_parent_ctx_obj) {
+ if ((prt_queue_info->parent_sq_info.context_id & SPFC_CQM_XID_MASK) ==
+ (scqe_xid & SPFC_CQM_XID_MASK)) {
+ return RETURN_OK;
+ }
+ }
+
+ return UNF_RETURN_ERROR;
+}
+
+u32 spfc_scq_recv_offload_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 valid = UNF_RETURN_ERROR;
+ u32 rport_index = 0;
+ u32 cid = 0;
+ u32 xid = 0;
+ ulong flags = 0;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+ struct spfc_parent_sq_info *parent_sq_info = NULL;
+ struct spfc_scqe_sess_sts *offload_sts_scqe = NULL;
+ struct spfc_delay_destroy_ctrl_info delay_ctl_info;
+
+ memset(&delay_ctl_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
+ offload_sts_scqe = &scqe->sess_sts;
+ rport_index = offload_sts_scqe->wd1.conn_id;
+ cid = offload_sts_scqe->wd2.cid;
+ xid = offload_sts_scqe->wd0.xid_qpn;
+
+ if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive an error offload status: rport(0x%x) is invalid, cacheid(0x%x)",
+ hba->port_cfg.port_id, rport_index, cid);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (rport_index == SPFC_DEFAULT_RPORT_INDEX &&
+ hba->default_sq_info.default_sq_flag == 0xF) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) default session timeout: rport(0x%x) cacheid(0x%x)",
+ hba->port_cfg.port_id, rport_index, cid);
+ return UNF_RETURN_ERROR;
+ }
+
+ prt_qinfo = &hba->parent_queue_mgr->parent_queue[rport_index];
+ parent_sq_info = &prt_qinfo->parent_sq_info;
+
+ valid = spfc_check_rport_valid(prt_qinfo, xid);
+ if (valid != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive an error offload status: rport(0x%x), context id(0x%x) is invalid",
+ hba->port_cfg.port_id, rport_index, xid);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Offload failed */
+ if (SPFC_GET_SCQE_STATUS(scqe) != SPFC_COMPLETION_STATUS_SUCCESS) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x), rport(0x%x), context id(0x%x), cache id(0x%x), offload failed",
+ hba->port_cfg.port_id, rport_index, xid, cid);
+
+ spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
+ if (prt_qinfo->offload_state != SPFC_QUEUE_STATE_OFFLOADED) {
+ prt_qinfo->offload_state = SPFC_QUEUE_STATE_INITIALIZED;
+ parent_sq_info->need_offloaded = INVALID_VALUE8;
+ }
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock,
+ flags);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
+ prt_qinfo->parent_sq_info.cache_id = cid;
+ prt_qinfo->offload_state = SPFC_QUEUE_STATE_OFFLOADED;
+ parent_sq_info->need_offloaded = SPFC_HAVE_OFFLOAD;
+ atomic_set(&prt_qinfo->parent_sq_info.sq_cached, true);
+
+ if (prt_qinfo->parent_sq_info.destroy_sqe.valid) {
+ delay_ctl_info.valid = prt_qinfo->parent_sq_info.destroy_sqe.valid;
+ delay_ctl_info.rport_index = prt_qinfo->parent_sq_info.destroy_sqe.rport_index;
+ delay_ctl_info.time_out = prt_qinfo->parent_sq_info.destroy_sqe.time_out;
+ delay_ctl_info.start_jiff = prt_qinfo->parent_sq_info.destroy_sqe.start_jiff;
+ delay_ctl_info.rport_info.nport_id =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.nport_id;
+ delay_ctl_info.rport_info.rport_index =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.rport_index;
+ delay_ctl_info.rport_info.port_name =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.port_name;
+ prt_qinfo->parent_sq_info.destroy_sqe.valid = false;
+ }
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
+
+ if (rport_index == SPFC_DEFAULT_RPORT_INDEX) {
+ hba->default_sq_info.sq_cid = cid;
+ hba->default_sq_info.sq_xid = xid;
+ hba->default_sq_info.default_sq_flag = 1;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "[info]Receive default Session info");
+ }
+
+ spfc_pop_destroy_parent_queue_sqe((void *)hba, &delay_ctl_info);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) offload success: rport index(0x%x),rport nportid(0x%x),context id(0x%x),cache id(0x%x).",
+ hba->port_cfg.port_id, rport_index,
+ prt_qinfo->parent_sq_info.remote_port_id, xid, cid);
+
+ return RETURN_OK;
+}
+
+static u32 spfc_send_bls_via_parent(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = INVALID_VALUE16;
+ u16 rx_id = INVALID_VALUE16;
+ struct spfc_sqe tmp_sqe;
+ struct spfc_sqe *sqe = NULL;
+ struct spfc_parent_sq_info *parent_sq_info = NULL;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+ u16 ssqn;
+
+ FC_CHECK_RETURN_VALUE((pkg->type == UNF_PKG_BLS_REQ), UNF_RETURN_ERROR);
+
+ sqe = &tmp_sqe;
+ memset(sqe, 0, sizeof(struct spfc_sqe));
+
+ prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
+ if (!prt_qinfo) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send BLS SID_DID(0x%x_0x%x) with null parent queue information",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return ret;
+ }
+
+ parent_sq_info = spfc_find_parent_sq_by_pkg(hba, pkg);
+ if (!parent_sq_info) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+				  "[warn]Port(0x%x) send ABTS SID_DID(0x%x_0x%x) with null parent SQ information",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return ret;
+ }
+
+ rx_id = UNF_GET_RXID(pkg);
+ ox_id = UNF_GET_OXID(pkg);
+
+ /* Assemble the SQE Control Section part. The ABTS does not have
+ * Payload. bdsl=0
+ */
+ spfc_build_service_wqe_ctrl_section(&sqe->ctrl_sl, SPFC_BYTES_TO_QW_NUM(SPFC_SQE_TS_SIZE),
+ 0);
+
+	/* Assemble the SQE Task Section BLS common part. DW2 of the BLS WQE
+	 * is reserved and set to 0.
+	 */
+ spfc_build_service_wqe_ts_common(&sqe->ts_sl, parent_sq_info->rport_index, ox_id, rx_id, 0);
+
+ /* Assemble the special part of the ABTS */
+ spfc_build_bls_wqe_ts_req(sqe, pkg, hba);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%x) send ABTS_REQ ox_id(0x%x) RXID(0x%x), HotTag(0x%x)",
+ hba->port_cfg.port_id, parent_sq_info->rport_index, ox_id,
+ rx_id, (u16)(UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base));
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ ret = spfc_parent_sq_enqueue(parent_sq_info, sqe, ssqn);
+
+ return ret;
+}
+
+u32 spfc_send_bls_cmnd(void *handle, struct unf_frame_pkg *pkg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_hba_info *hba = NULL;
+ ulong flags = 0;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg->type == UNF_PKG_BLS_REQ || pkg->type == UNF_PKG_BLS_REPLY,
+ UNF_RETURN_ERROR);
+
+ SPFC_CHECK_PKG_ALLOCTIME(pkg);
+ hba = (struct spfc_hba_info *)handle;
+
+ prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
+ if (!prt_qinfo) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send BLS SID_DID(0x%x_0x%x) with null parent queue information",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return ret;
+ }
+
+ spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
+
+ if (SPFC_RPORT_OFFLOADED(prt_qinfo)) {
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
+ ret = spfc_send_bls_via_parent(hba, pkg);
+ } else {
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+			     "[error]Port(0x%x) send BLS SID_DID(0x%x_0x%x) session not offloaded, do nothing",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+ }
+
+ return ret;
+}
+
+static u32 spfc_scq_rcv_flush_sq_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ /*
+ * RCVD sq flush sts
+ * --->>> continue flush or clear done
+ */
+ u32 ret = UNF_RETURN_ERROR;
+
+ if (scqe->flush_sts.wd0.port_id != hba->port_index) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
+ "[err]Port(0x%x) clear_sts_port_idx(0x%x) not match hba_port_idx(0x%x), stage(0x%x)",
+ hba->port_cfg.port_id, scqe->clear_sts.wd0.port_id,
+ hba->port_index, hba->queue_set_stage);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (scqe->flush_sts.wd0.last_flush) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_INFO,
+ "[info]Port(0x%x) flush sq(0x%x) done, stage(0x%x)",
+ hba->port_cfg.port_id, hba->next_clear_sq, hba->queue_set_stage);
+
+		/* If this Flush STS is the last one, send cmd done */
+ ret = spfc_clear_sq_wqe_done(hba);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "[info]Port(0x%x) continue flush sq(0x%x), stage(0x%x)",
+ hba->port_cfg.port_id, hba->next_clear_sq, hba->queue_set_stage);
+
+ ret = spfc_clear_pending_sq_wqe(hba);
+ }
+
+ return ret;
+}
+
+static u32 spfc_scq_rcv_buf_clear_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ /*
+ * clear: fetched sq wqe
+ * ---to--->>> pending sq wqe
+ */
+ u32 ret = UNF_RETURN_ERROR;
+
+ if (scqe->clear_sts.wd0.port_id != hba->port_index) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
+ "[err]Port(0x%x) clear_sts_port_idx(0x%x) not match hba_port_idx(0x%x), stage(0x%x)",
+ hba->port_cfg.port_id, scqe->clear_sts.wd0.port_id,
+ hba->port_index, hba->queue_set_stage);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* set port with I/O cleared state */
+ spfc_set_hba_clear_state(hba, true);
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
+ "[info]Port(0x%x) cleared all fetched wqe, start clear sq pending wqe, stage (0x%x)",
+ hba->port_cfg.port_id, hba->queue_set_stage);
+
+ hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_FLUSHING;
+ ret = spfc_clear_pending_sq_wqe(hba);
+
+ return ret;
+}
+
+u32 spfc_scq_recv_sess_rst_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 rport_index = INVALID_VALUE32;
+ ulong flags = 0;
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ struct spfc_scqe_sess_sts *sess_sts_scqe = (struct spfc_scqe_sess_sts *)(void *)scqe;
+ u32 flush_done;
+ u32 *ctx_array = NULL;
+ int ret;
+ spinlock_t *prtq_state_lock = NULL;
+
+ rport_index = sess_sts_scqe->wd1.conn_id;
+ if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) receive reset session cmd sts failed, invalid rport(0x%x) status_code(0x%x) remain_cnt(0x%x)",
+ hba->port_cfg.port_id, rport_index,
+ sess_sts_scqe->ch.wd0.err_code,
+ sess_sts_scqe->ch.wd0.cqe_remain_cnt);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[rport_index];
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ /*
+ * If only session reset is used, the offload status of sq remains
+ * unchanged. If a link is deleted, the offload status is set to
+ * destroying and is irreversible.
+ */
+ spin_lock_irqsave(prtq_state_lock, flags);
+
+	/*
+	 * For fault tolerance, the delete-connection status may still arrive
+	 * after the deletion has timed out. A non-zero return from
+	 * cancel_delayed_work() means the timer was cancelled successfully;
+	 * zero means the timer handler is already running.
+	 */
+ if (!cancel_delayed_work(&parent_queue_info->parent_sq_info.del_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+			     "[info]Port(0x%x) rport_index(0x%x) delete rport timer may have timed out",
+ hba->port_cfg.port_id, rport_index);
+ }
+
+	/*
+	 * If the session-reset status arrives after the parent queue info
+	 * has already been released, simply return OK.
+	 */
+ if (parent_queue_info->offload_state != SPFC_QUEUE_STATE_DESTROYING) {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[info]Port(0x%x) reset session cmd complete, no need to free parent qinfo, rport(0x%x) status_code(0x%x) remain_cnt(0x%x)",
+ hba->port_cfg.port_id, rport_index,
+ sess_sts_scqe->ch.wd0.err_code,
+ sess_sts_scqe->ch.wd0.cqe_remain_cnt);
+
+ return RETURN_OK;
+ }
+
+ if (parent_queue_info->parent_ctx.cqm_parent_ctx_obj) {
+ ctx_array = (u32 *)((void *)(parent_queue_info->parent_ctx
+ .cqm_parent_ctx_obj->vaddr));
+ flush_done = ctx_array[SPFC_CTXT_FLUSH_DONE_DW_POS] & SPFC_CTXT_FLUSH_DONE_MASK_BE;
+ mb();
+ if (flush_done == 0) {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) rport(0x%x) flushdone is not set, delay to free parent session",
+ hba->port_cfg.port_id, rport_index);
+
+			/* If the flushdone bit is not set, delay freeing the SQ info */
+ ret = queue_delayed_work(hba->work_queue,
+ &(parent_queue_info->parent_sq_info
+ .flush_done_timeout_work),
+ (ulong)msecs_to_jiffies((u32)
+ SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_MS));
+ if (!ret) {
+ SPFC_HBA_STAT(hba, SPFC_STAT_PARENT_SQ_QUEUE_DELAYED_WORK);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) rport(0x%x) queue delayed work failed ret:%d",
+ hba->port_cfg.port_id, rport_index,
+ ret);
+ }
+
+ return RETURN_OK;
+ }
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to free parent session with rport(0x%x)",
+ hba->port_cfg.port_id, rport_index);
+
+ spfc_free_parent_queue_info(hba, parent_queue_info);
+
+ return RETURN_OK;
+}
+
+static u32 spfc_scq_rcv_clear_srq_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ /*
+ * clear ELS/Immi SRQ
+ * ---then--->>> Destroy SRQ
+ */
+ struct spfc_srq_info *srq_info = NULL;
+
+ if (SPFC_GET_SCQE_STATUS(scqe) != 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) clear srq failed, status(0x%x)",
+ hba->port_cfg.port_id, SPFC_GET_SCQE_STATUS(scqe));
+
+ return RETURN_OK;
+ }
+
+ srq_info = &hba->els_srq_info;
+
+	/*
+	 * Non-zero: the timer was cancelled successfully.
+	 * Zero: the timer handler is running; the SRQ is released when the
+	 * timer expires.
+	 */
+ if (cancel_delayed_work(&srq_info->del_work))
+ queue_work(hba->work_queue, &hba->els_srq_clear_work);
+
+ return RETURN_OK;
+}
+
+u32 spfc_scq_recv_marker_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 ox_id = INVALID_VALUE32;
+ u32 rx_id = INVALID_VALUE32;
+ u32 hot_tag = INVALID_VALUE32;
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_itmf_marker_sts *tmf_marker_sts_scqe = NULL;
+
+ tmf_marker_sts_scqe = &scqe->itmf_marker_sts;
+ ox_id = (u32)tmf_marker_sts_scqe->wd1.ox_id;
+ rx_id = (u32)tmf_marker_sts_scqe->wd1.rx_id;
+ hot_tag = (tmf_marker_sts_scqe->wd4.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ pkg.frame_head.oxid_rxid = rx_id | (u32)(ox_id) << UNF_SHIFT_16;
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = tmf_marker_sts_scqe->magic_num;
+ pkg.frame_head.csctl_sid = tmf_marker_sts_scqe->wd3.sid;
+ pkg.frame_head.rctl_did = tmf_marker_sts_scqe->wd2.did;
+
+ /* 1. set pkg status */
+ if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
+ pkg.status = UNF_IO_FAILED;
+ else
+ pkg.status = UNF_IO_SUCCESS;
+
+	/* 2. Process received marker STS: set exchange state */
+ ret = spfc_rcv_tmf_marker_sts(hba, &pkg, hot_tag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) recv marker STS OX_ID(0x%x) RX_ID(0x%x) HotTag(0x%x) result %s",
+ hba->port_cfg.port_id, ox_id, rx_id, hot_tag,
+ (ret == RETURN_OK) ? "succeed" : "failed");
+
+ return ret;
+}
+
+u32 spfc_scq_recv_abts_marker_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 ox_id = INVALID_VALUE32;
+ u32 rx_id = INVALID_VALUE32;
+ u32 hot_tag = INVALID_VALUE32;
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_abts_marker_sts *abts_marker_sts_scqe = NULL;
+
+ abts_marker_sts_scqe = &scqe->abts_marker_sts;
+ if (!abts_marker_sts_scqe) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]ABTS marker STS is NULL");
+ return ret;
+ }
+
+ ox_id = (u32)abts_marker_sts_scqe->wd1.ox_id;
+ rx_id = (u32)abts_marker_sts_scqe->wd1.rx_id;
+ hot_tag = (abts_marker_sts_scqe->wd4.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ pkg.frame_head.oxid_rxid = rx_id | (u32)(ox_id) << UNF_SHIFT_16;
+ pkg.frame_head.csctl_sid = abts_marker_sts_scqe->wd3.sid;
+ pkg.frame_head.rctl_did = abts_marker_sts_scqe->wd2.did;
+ pkg.abts_maker_status = (u32)abts_marker_sts_scqe->wd3.io_state;
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = abts_marker_sts_scqe->magic_num;
+
+ if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
+ pkg.status = UNF_IO_FAILED;
+ else
+ pkg.status = UNF_IO_SUCCESS;
+
+ ret = spfc_rcv_abts_marker_sts(hba, &pkg, hot_tag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) recv abts marker STS ox_id(0x%x) RXID(0x%x) HotTag(0x%x) %s",
+ hba->port_cfg.port_id, ox_id, rx_id, hot_tag,
+ (ret == RETURN_OK) ? "SUCCEED" : "FAILED");
+
+ return ret;
+}
+
+u32 spfc_handle_aeq_off_load_err(struct spfc_hba_info *hba, struct spfc_aqe_data *aeq_msg)
+{
+ u32 ret = RETURN_OK;
+ u32 rport_index = 0;
+ u32 xid = 0;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+ struct spfc_delay_destroy_ctrl_info delay_ctl_info;
+ ulong flags = 0;
+
+ memset(&delay_ctl_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive Offload Err Event, EvtCode(0x%x) Conn_id(0x%x) Xid(0x%x)",
+ hba->port_cfg.port_id, aeq_msg->wd0.evt_code,
+ aeq_msg->wd0.conn_id, aeq_msg->wd1.xid);
+
+	/* Currently only offload failures caused by a shortage of SCQEs are
+	 * handled; other error codes are ignored for now.
+	 */
+ if (unlikely(aeq_msg->wd0.evt_code != FC_ERROR_OFFLOAD_LACKOF_SCQE_FAIL)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive an unsupported error code of AEQ Event,EvtCode(0x%x) Conn_id(0x%x)",
+ hba->port_cfg.port_id, aeq_msg->wd0.evt_code,
+ aeq_msg->wd0.conn_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ SPFC_SCQ_ERR_TYPE_STAT(hba, FC_ERROR_OFFLOAD_LACKOF_SCQE_FAIL);
+
+ rport_index = aeq_msg->wd0.conn_id;
+ xid = aeq_msg->wd1.xid;
+
+ if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive an error offload status: rport(0x%x) is invalid, Xid(0x%x)",
+ hba->port_cfg.port_id, rport_index, aeq_msg->wd1.xid);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ prt_qinfo = &hba->parent_queue_mgr->parent_queue[rport_index];
+ if (spfc_check_rport_valid(prt_qinfo, xid) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive an error offload status: rport(0x%x), context id(0x%x) is invalid",
+ hba->port_cfg.port_id, rport_index, xid);
+
+ return UNF_RETURN_ERROR;
+ }
+
+	/* Restore the offload state only if it is currently OFFLOADING */
+ spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
+ if (prt_qinfo->offload_state == SPFC_QUEUE_STATE_OFFLOADING)
+ prt_qinfo->offload_state = SPFC_QUEUE_STATE_INITIALIZED;
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
+
+ if (prt_qinfo->parent_sq_info.destroy_sqe.valid) {
+ delay_ctl_info.valid = prt_qinfo->parent_sq_info.destroy_sqe.valid;
+ delay_ctl_info.rport_index = prt_qinfo->parent_sq_info.destroy_sqe.rport_index;
+ delay_ctl_info.time_out = prt_qinfo->parent_sq_info.destroy_sqe.time_out;
+ delay_ctl_info.start_jiff = prt_qinfo->parent_sq_info.destroy_sqe.start_jiff;
+ delay_ctl_info.rport_info.nport_id =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.nport_id;
+ delay_ctl_info.rport_info.rport_index =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.rport_index;
+ delay_ctl_info.rport_info.port_name =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.port_name;
+ prt_qinfo->parent_sq_info.destroy_sqe.valid = false;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) pop up delay sqe, start:0x%llx, timeout:0x%x, rport:0x%x, offload state:0x%x",
+ hba->port_cfg.port_id, delay_ctl_info.start_jiff,
+ delay_ctl_info.time_out,
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.rport_index,
+ SPFC_QUEUE_STATE_INITIALIZED);
+
+ ret = spfc_free_parent_resource(hba, &delay_ctl_info.rport_info);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) pop delay destroy parent sq failed, rport(0x%x), rport nport id 0x%x",
+ hba->port_cfg.port_id,
+ delay_ctl_info.rport_info.rport_index,
+ delay_ctl_info.rport_info.nport_id);
+ }
+ }
+
+ return ret;
+}
+
+u32 spfc_free_xid(void *handle, struct unf_frame_pkg *pkg)
+{
+ u32 ret = RETURN_ERROR;
+ u16 rx_id = INVALID_VALUE16;
+ u16 ox_id = INVALID_VALUE16;
+ u16 hot_tag = INVALID_VALUE16;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
+ union spfc_cmdqe tmp_cmd_wqe;
+ union spfc_cmdqe *cmd_wqe = NULL;
+
+ FC_CHECK_RETURN_VALUE(hba, RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, RETURN_ERROR);
+ SPFC_CHECK_PKG_ALLOCTIME(pkg);
+
+ cmd_wqe = &tmp_cmd_wqe;
+ memset(cmd_wqe, 0, sizeof(union spfc_cmdqe));
+
+ rx_id = UNF_GET_RXID(pkg);
+ ox_id = UNF_GET_OXID(pkg);
+ if (UNF_GET_HOTPOOL_TAG(pkg) != INVALID_VALUE32)
+ hot_tag = (u16)UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base;
+
+ spfc_build_cmdqe_common(cmd_wqe, SPFC_TASK_T_EXCH_ID_FREE, rx_id);
+ cmd_wqe->xid_free.wd2.hotpool_tag = hot_tag;
+ cmd_wqe->xid_free.magic_num = UNF_GETXCHGALLOCTIME(pkg);
+ cmd_wqe->xid_free.sid = pkg->frame_head.csctl_sid;
+ cmd_wqe->xid_free.did = pkg->frame_head.rctl_did;
+ cmd_wqe->xid_free.type = pkg->type;
+
+ if (pkg->rx_or_ox_id == UNF_PKG_FREE_OXID)
+ cmd_wqe->xid_free.wd0.task_id = ox_id;
+ else
+ cmd_wqe->xid_free.wd0.task_id = rx_id;
+
+ cmd_wqe->xid_free.wd0.port_id = hba->port_index;
+ cmd_wqe->xid_free.wd2.scqn = hba->default_scqn;
+ ret = spfc_root_cmdq_enqueue(hba, cmd_wqe, sizeof(cmd_wqe->xid_free));
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "[info]Port(0x%x) ox_id(0x%x) RXID(0x%x) hottag(0x%x) magic_num(0x%x) Sid(0x%x) Did(0x%x), send free xid %s",
+ hba->port_cfg.port_id, ox_id, rx_id, hot_tag,
+ cmd_wqe->xid_free.magic_num, cmd_wqe->xid_free.sid,
+ cmd_wqe->xid_free.did,
+ (ret == RETURN_OK) ? "OK" : "ERROR");
+
+ return ret;
+}
+
+u32 spfc_scq_free_xid_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 hot_tag = INVALID_VALUE32;
+ u32 magic_num = INVALID_VALUE32;
+ u32 ox_id = INVALID_VALUE32;
+ u32 rx_id = INVALID_VALUE32;
+ struct spfc_scqe_comm_rsp_sts *free_xid_sts_scqe = NULL;
+
+ free_xid_sts_scqe = &scqe->comm_sts;
+ magic_num = free_xid_sts_scqe->magic_num;
+ ox_id = (u32)free_xid_sts_scqe->wd0.ox_id;
+ rx_id = (u32)free_xid_sts_scqe->wd0.rx_id;
+
+ if (free_xid_sts_scqe->wd1.hotpooltag != INVALID_VALUE16) {
+ hot_tag = (free_xid_sts_scqe->wd1.hotpooltag &
+ UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "Port(0x%x) hottag(0x%x) magicnum(0x%x) ox_id(0x%x) rxid(0x%x) sts(%d)",
+ hba->port_cfg.port_id, hot_tag, magic_num, ox_id, rx_id,
+ SPFC_GET_SCQE_STATUS(scqe));
+
+ return RETURN_OK;
+}
+
+u32 spfc_scq_exchg_timeout_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 hot_tag = INVALID_VALUE32;
+ u32 magic_num = INVALID_VALUE32;
+ u32 ox_id = INVALID_VALUE32;
+ u32 rx_id = INVALID_VALUE32;
+ struct spfc_scqe_comm_rsp_sts *time_out_scqe = NULL;
+
+ time_out_scqe = &scqe->comm_sts;
+ magic_num = time_out_scqe->magic_num;
+ ox_id = (u32)time_out_scqe->wd0.ox_id;
+ rx_id = (u32)time_out_scqe->wd0.rx_id;
+
+ if (time_out_scqe->wd1.hotpooltag != INVALID_VALUE16)
+ hot_tag = (time_out_scqe->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "Port(0x%x) recv timer time out sts hotpooltag(0x%x) magicnum(0x%x) ox_id(0x%x) rxid(0x%x) sts(%d)",
+ hba->port_cfg.port_id, hot_tag, magic_num, ox_id, rx_id,
+ SPFC_GET_SCQE_STATUS(scqe));
+
+ return RETURN_OK;
+}
+
+u32 spfc_scq_rcv_sq_nop_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ struct spfc_scqe_sq_nop_sts *sq_nop_scqe = NULL;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+ struct spfc_parent_sq_info *parent_sq_info = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct spfc_suspend_sqe_info *suspend_sqe = NULL;
+ struct spfc_suspend_sqe_info *sqe = NULL;
+ u32 rport_index = 0;
+ u32 magic_num;
+ u16 sqn;
+ u32 sqn_base;
+ u32 sqn_max;
+ u32 ret = RETURN_OK;
+ ulong flags = 0;
+
+ sq_nop_scqe = &scqe->sq_nop_sts;
+ rport_index = sq_nop_scqe->wd1.conn_id;
+ magic_num = sq_nop_scqe->magic_num;
+ sqn = sq_nop_scqe->wd0.sqn;
+ prt_qinfo = &hba->parent_queue_mgr->parent_queue[rport_index];
+ parent_sq_info = &prt_qinfo->parent_sq_info;
+ sqn_base = parent_sq_info->sqn_base;
+ sqn_max = sqn_base + UNF_SQ_NUM_PER_SESSION - 1;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+		     "[info]Port(0x%x) rport(0x%x), magic_num(0x%x) receive nop sq sts from sq(0x%x)",
+ hba->port_cfg.port_id, rport_index, magic_num, sqn);
+
+ spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
+ list_for_each_safe(node, next_node, &parent_sq_info->suspend_sqe_list) {
+ sqe = list_entry(node, struct spfc_suspend_sqe_info, list_sqe_entry);
+ if (sqe->magic_num != magic_num)
+ continue;
+ suspend_sqe = sqe;
+ if (sqn == sqn_max)
+ list_del(node);
+ break;
+ }
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
+
+ if (suspend_sqe) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) rport_index(0x%x) find suspend sqe.",
+ hba->port_cfg.port_id, rport_index);
+ if (sqn < sqn_max) {
+ ret = spfc_send_nop_cmd(hba, parent_sq_info, magic_num, sqn + 1);
+ } else if (sqn == sqn_max) {
+ if (!cancel_delayed_work(&suspend_sqe->timeout_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+					     "[warn]Port(0x%x) rport(0x%x) reset worker timer may have timed out",
+ hba->port_cfg.port_id, rport_index);
+ }
+ parent_sq_info->need_offloaded = suspend_sqe->old_offload_sts;
+ ret = spfc_pop_suspend_sqe(hba, prt_qinfo, suspend_sqe);
+ kfree(suspend_sqe);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) rport(0x%x) magicnum(0x%x) can't find suspend sqe",
+ hba->port_cfg.port_id, rport_index, magic_num);
+ }
+ return ret;
+}
+
+static const struct unf_scqe_handle_table scqe_handle_table[] = {
+ {/* INI rcvd FCP RSP */
+ SPFC_SCQE_FCP_IRSP, true, spfc_scq_recv_iresp},
+ {/* INI/TGT rcvd ELS_CMND */
+ SPFC_SCQE_ELS_CMND, false, spfc_scq_recv_els_cmnd},
+ {/* INI/TGT rcvd ELS_RSP */
+ SPFC_SCQE_ELS_RSP, true, spfc_scq_recv_ls_gs_rsp},
+ {/* INI/TGT rcvd GS_RSP */
+ SPFC_SCQE_GS_RSP, true, spfc_scq_recv_ls_gs_rsp},
+ {/* INI rcvd BLS_RSP */
+ SPFC_SCQE_ABTS_RSP, true, spfc_scq_recv_abts_rsp},
+ {/* INI/TGT rcvd ELS_RSP STS(Done) */
+ SPFC_SCQE_ELS_RSP_STS, true, spfc_scq_recv_els_rsp_sts},
+ {/* INI or TGT rcvd Session enable STS */
+ SPFC_SCQE_SESS_EN_STS, false, spfc_scq_recv_offload_sts},
+ {/* INI or TGT rcvd flush (pending) SQ STS */
+ SPFC_SCQE_FLUSH_SQ_STS, false, spfc_scq_rcv_flush_sq_sts},
+ {/* INI or TGT rcvd Buffer clear STS */
+ SPFC_SCQE_BUF_CLEAR_STS, false, spfc_scq_rcv_buf_clear_sts},
+ {/* INI or TGT rcvd session reset STS */
+ SPFC_SCQE_SESS_RST_STS, false, spfc_scq_recv_sess_rst_sts},
+ {/* ELS/IMMI SRQ */
+ SPFC_SCQE_CLEAR_SRQ_STS, false, spfc_scq_rcv_clear_srq_sts},
+ {/* INI rcvd TMF RSP */
+ SPFC_SCQE_FCP_ITMF_RSP, true, spfc_scq_recv_iresp},
+ {/* INI rcvd TMF Marker STS */
+ SPFC_SCQE_ITMF_MARKER_STS, false, spfc_scq_recv_marker_sts},
+ {/* INI rcvd ABTS Marker STS */
+ SPFC_SCQE_ABTS_MARKER_STS, false, spfc_scq_recv_abts_marker_sts},
+ {SPFC_SCQE_XID_FREE_ABORT_STS, false, spfc_scq_free_xid_sts},
+ {SPFC_SCQE_EXCHID_TIMEOUT_STS, false, spfc_scq_exchg_timeout_sts},
+ {SPFC_SQE_NOP_STS, true, spfc_scq_rcv_sq_nop_sts},
+
+};
+
+u32 spfc_rcv_scq_entry_from_scq(struct spfc_hba_info *hba, union spfc_scqe *scqe, u32 scqn)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ bool reclaim = false;
+ u32 index = 0;
+ u32 total = 0;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(scqe, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(scqn < SPFC_TOTAL_SCQ_NUM, UNF_RETURN_ERROR);
+
+ SPFC_IO_STAT(hba, SPFC_GET_SCQE_TYPE(scqe));
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) receive scqe type %d from SCQ[%u]",
+ hba->port_cfg.port_id, SPFC_GET_SCQE_TYPE(scqe), scqn);
+
+	/* 1. Error code checking */
+ if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe))) {
+ /* So far, just print & counter */
+ spfc_scqe_error_pre_proc(hba, scqe);
+ }
+
+	/* 2. Process the SCQE with the corresponding handler */
+ total = sizeof(scqe_handle_table) / sizeof(struct unf_scqe_handle_table);
+ while (index < total) {
+ if (SPFC_GET_SCQE_TYPE(scqe) == scqe_handle_table[index].scqe_type) {
+ ret = scqe_handle_table[index].scqe_handle_func(hba, scqe);
+ reclaim = scqe_handle_table[index].reclaim_sq_wpg;
+
+ break;
+ }
+
+ index++;
+ }
+
+ /* 3. SCQE type check */
+ if (unlikely(total == index)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Unknown SCQE type %d",
+ SPFC_GET_SCQE_TYPE(scqe));
+
+ UNF_PRINT_SFS_LIMIT(UNF_ERR, hba->port_cfg.port_id, scqe, sizeof(union spfc_scqe));
+ }
+
+	/* 4. If the SCQE is for an SQ WQE, reclaim the linked-list SQ WQE page */
+ if (reclaim) {
+ if (SPFC_GET_SCQE_SQN(scqe) < SPFC_MAX_SSQ_NUM) {
+ ret = spfc_reclaim_sq_wqe_page(hba, scqe);
+ } else {
+			/* NOTE: for buffer clear the SCQE conn_id is 0xFFFF, counted on the HBA */
+ SPFC_HBA_STAT((struct spfc_hba_info *)hba, SPFC_STAT_SQ_IO_BUFFER_CLEARED);
+ }
+ }
+
+ return ret;
+}
+
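The two receive paths above, spfc_scq_recv_els_cmnd() and spfc_scq_recv_ls_gs_rsp(), walk a frame that the hardware has scattered over several ELS SRQ buffers: the first buffer also carries the FC frame header, every buffer contributes at most SPFC_SRQ_ELS_SGE_LEN payload bytes, and the last buffer holds whatever remains of the frame. The snippet below is only a minimal user-space sketch of that buffer walk, not driver code; SGE_LEN and HDR_LEN are stand-ins for SPFC_SRQ_ELS_SGE_LEN and sizeof(struct spfc_fc_frame_header), and the real values differ.

/* Hypothetical, self-contained model of the LS/GS buffer walk. */
#include <stdio.h>

#define SGE_LEN 256u	/* assumed per-SRQ-buffer capacity */
#define HDR_LEN 24u	/* FC frame header size (first buffer only) */

static void walk_ls_gs_buffers(unsigned int frame_len, unsigned int buf_num)
{
	unsigned int rcvd = 0, index;

	for (index = 0; index < buf_num && rcvd < frame_len; index++) {
		unsigned int pld_len = SGE_LEN;			/* default chunk */
		unsigned int hdr = (index == 0) ? HDR_LEN : 0;	/* header in buf 0 */
		int last = (rcvd + SGE_LEN >= frame_len);

		if (last)
			pld_len = frame_len - rcvd;	/* short, final chunk */

		printf("buf %u: push %u payload bytes, last=%d\n",
		       index, pld_len - hdr, last);
		rcvd += pld_len;
	}
}

int main(void)
{
	walk_ls_gs_buffers(600, 8);	/* e.g. a 600-byte frame over 256-byte SGEs */
	return 0;
}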
diff --git a/drivers/scsi/spfc/hw/spfc_service.h b/drivers/scsi/spfc/hw/spfc_service.h
new file mode 100644
index 000000000000..e2555c55f4d1
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_service.h
@@ -0,0 +1,282 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_SERVICE_H
+#define SPFC_SERVICE_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "unf_scsi_common.h"
+#include "spfc_hba.h"
+
+#define SPFC_HAVE_OFFLOAD (0)
+
+/* FC txmfs */
+#define SPFC_DEFAULT_TX_MAX_FREAM_SIZE (256)
+
+#define SPFC_GET_NETWORK_PORT_ID(hba) \
+ (((hba)->port_index > 1) ? ((hba)->port_index + 2) : (hba)->port_index)
+
+#define SPFC_GET_PRLI_PAYLOAD_LEN \
+ (UNF_PRLI_PAYLOAD_LEN - UNF_PRLI_SIRT_EXTRA_SIZE)
+/* Start addr of the header/payload of the cmnd buffer in the pkg */
+#define SPFC_FC_HEAD_LEN (sizeof(struct unf_fc_head))
+#define SPFC_PAYLOAD_OFFSET (sizeof(struct unf_fc_head))
+#define SPFC_GET_CMND_PAYLOAD_ADDR(pkg) UNF_GET_FLOGI_PAYLOAD(pkg)
+#define SPFC_GET_CMND_HEADER_ADDR(pkg) \
+ ((pkg)->unf_cmnd_pload_bl.buffer_ptr)
+#define SPFC_GET_RSP_HEADER_ADDR(pkg) \
+ ((pkg)->unf_rsp_pload_bl.buffer_ptr)
+#define SPFC_GET_RSP_PAYLOAD_ADDR(pkg) \
+ ((pkg)->unf_rsp_pload_bl.buffer_ptr + SPFC_PAYLOAD_OFFSET)
+#define SPFC_GET_CMND_FC_HEADER(pkg) \
+ (&(UNF_GET_SFS_ENTRY(pkg)->sfs_common.frame_head))
+#define SPFC_PKG_IS_ELS_RSP(cmd_type) \
+ (((cmd_type) == ELS_ACC) || ((cmd_type) == ELS_RJT))
+#define SPFC_XID_IS_VALID(exid, base, exi_count) \
+ (((exid) >= (base)) && ((exid) < ((base) + (exi_count))))
+#define SPFC_CHECK_NEED_OFFLOAD(cmd_code, cmd_type, offload_state) \
+ (((cmd_code) == ELS_PLOGI) && ((cmd_type) != ELS_RJT) && \
+ ((offload_state) == SPFC_QUEUE_STATE_INITIALIZED))
+
+#define UNF_FC_PAYLOAD_ELS_MASK (0xFF000000)
+#define UNF_FC_PAYLOAD_ELS_SHIFT (24)
+#define UNF_FC_PAYLOAD_ELS_DWORD (0)
+
+/* Note: this pfcpayload is little endian */
+#define UNF_GET_FC_PAYLOAD_ELS_CMND(pfcpayload) \
+ UNF_GET_SHIFTMASK(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_ELS_DWORD], \
+ UNF_FC_PAYLOAD_ELS_SHIFT, UNF_FC_PAYLOAD_ELS_MASK)
+
+/* Note: this pfcpayload is big endian */
+#define SPFC_GET_FC_PAYLOAD_ELS_CMND(pfcpayload) \
+ UNF_GET_SHIFTMASK(be32_to_cpu(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_ELS_DWORD]), \
+ UNF_FC_PAYLOAD_ELS_SHIFT, UNF_FC_PAYLOAD_ELS_MASK)
+
+#define UNF_FC_PAYLOAD_RX_SZ_MASK (0x00000FFF)
+#define UNF_FC_PAYLOAD_RX_SZ_SHIFT (16)
+#define UNF_FC_PAYLOAD_RX_SZ_DWORD (2)
+
+/* Note: this pfcpayload is little endian */
+#define UNF_GET_FC_PAYLOAD_RX_SZ(pfcpayload) \
+ ((u16)(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_RX_SZ_DWORD] & \
+ UNF_FC_PAYLOAD_RX_SZ_MASK))
+
+/* Note: this pfcpayload is big endian */
+#define SPFC_GET_FC_PAYLOAD_RX_SZ(pfcpayload) \
+ (be32_to_cpu((u16)(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_RX_SZ_DWORD]) & \
+ UNF_FC_PAYLOAD_RX_SZ_MASK))
+
+#define SPFC_GET_RA_TOV_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.r_a_tov)
+#define SPFC_GET_RT_TOV_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.r_t_tov)
+#define SPFC_GET_E_D_TOV_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.e_d_tov)
+#define SPFC_GET_E_D_TOV_RESOLUTION_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.e_d_tov_resolution)
+#define SPFC_GET_BB_SC_N_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.bbscn)
+#define SPFC_GET_BB_CREDIT_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.bb_credit)
+
+#define SPFC_GET_RA_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_a_tov)
+#define SPFC_GET_RT_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_t_tov)
+#define SPFC_GET_E_D_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov)
+#define SPFC_GET_E_D_TOV_RESOLUTION_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov_resolution)
+#define SPFC_GET_BB_SC_N_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.bbscn)
+#define SPFC_GET_BB_CREDIT_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.bb_credit)
+#define SPFC_CHECK_NPORT_FPORT_BIT(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.nport)
+
+#define UNF_FC_RCTL_BLS_MASK (0x80)
+#define SPFC_UNSOLICITED_FRAME_IS_BLS(hdr) (UNF_GET_FC_HEADER_RCTL(hdr) & UNF_FC_RCTL_BLS_MASK)
+
+#define SPFC_LOW_SEQ_CNT (0)
+#define SPFC_HIGH_SEQ_CNT (0xFFFF)
+
+/* struct unf_frame_pkg.cmnd meaning:
+ * The least significant 16 bits indicate whether to send ELS CMND or ELS RSP
+ * (ACC or RJT). The most significant 16 bits indicate the corresponding ELS
+ * CMND when the lower 16 bits are ELS RSP.
+ */
+#define SPFC_ELS_CMND_MASK (0xffff)
+#define SPFC_ELS_CMND__RELEVANT_SHIFT (16UL)
+#define SPFC_GET_LS_GS_CMND_CODE(cmnd) ((u16)((cmnd) & SPFC_ELS_CMND_MASK))
+#define SPFC_GET_ELS_RSP_TYPE(cmnd) ((u16)((cmnd) & SPFC_ELS_CMND_MASK))
+#define SPFC_GET_ELS_RSP_CODE(cmnd) \
+ ((u16)((cmnd) >> SPFC_ELS_CMND__RELEVANT_SHIFT & SPFC_ELS_CMND_MASK))
+
+/* ELS CMND Request */
+#define ELS_CMND (0)
+
+/* fh_f_ctl - Frame control flags. */
+#define SPFC_FC_EX_CTX BIT(23) /* sent by responder to exchange */
+#define SPFC_FC_SEQ_CTX BIT(22) /* sent by responder to sequence */
+#define SPFC_FC_FIRST_SEQ BIT(21) /* first sequence of this exchange */
+#define SPFC_FC_LAST_SEQ BIT(20) /* last sequence of this exchange */
+#define SPFC_FC_END_SEQ BIT(19) /* last frame of sequence */
+#define SPFC_FC_END_CONN BIT(18) /* end of class 1 connection pending */
+#define SPFC_FC_RES_B17 BIT(17) /* reserved */
+#define SPFC_FC_SEQ_INIT BIT(16) /* transfer of sequence initiative */
+#define SPFC_FC_X_ID_REASS BIT(15) /* exchange ID has been changed */
+#define SPFC_FC_X_ID_INVAL BIT(14) /* exchange ID invalidated */
+#define SPFC_FC_ACK_1 BIT(12) /* 13:12 = 1: ACK_1 expected */
+#define SPFC_FC_ACK_N (2 << 12) /* 13:12 = 2: ACK_N expected */
+#define SPFC_FC_ACK_0 (3 << 12) /* 13:12 = 3: ACK_0 expected */
+#define SPFC_FC_RES_B11 BIT(11) /* reserved */
+#define SPFC_FC_RES_B10 BIT(10) /* reserved */
+#define SPFC_FC_RETX_SEQ BIT(9) /* retransmitted sequence */
+#define SPFC_FC_UNI_TX BIT(8) /* unidirectional transmit (class 1) */
+#define SPFC_FC_CONT_SEQ(i) ((i) << 6)
+#define SPFC_FC_ABT_SEQ(i) ((i) << 4)
+#define SPFC_FC_REL_OFF BIT(3) /* parameter is relative offset */
+#define SPFC_FC_RES2 BIT(2) /* reserved */
+#define SPFC_FC_FILL(i) ((i) & 3) /* 1:0: bytes of trailing fill */
+
+#define SPFC_FCTL_REQ (SPFC_FC_FIRST_SEQ | SPFC_FC_END_SEQ | SPFC_FC_SEQ_INIT)
+#define SPFC_FCTL_RESP \
+ (SPFC_FC_EX_CTX | SPFC_FC_LAST_SEQ | SPFC_FC_END_SEQ | SPFC_FC_SEQ_INIT)
+#define SPFC_RCTL_BLS_REQ (0x81)
+#define SPFC_RCTL_BLS_ACC (0x84)
+#define SPFC_RCTL_BLS_RJT (0x85)
+
+#define PHY_PORT_TYPE_FC 0x1 /* Physical port type of FC */
+#define PHY_PORT_TYPE_FCOE 0x2 /* Physical port type of FCoE */
+#define SPFC_FC_COS_VALUE (0X4)
+
+#define SPFC_CDB16_LBA_MASK 0xffff
+#define SPFC_CDB16_TRANSFERLEN_MASK 0xff
+#define SPFC_RXID_MASK 0xffff
+#define SPFC_OXID_MASK 0xffff0000
+
+enum spfc_fc_fh_type {
+ SPFC_FC_TYPE_BLS = 0x00, /* basic link service */
+ SPFC_FC_TYPE_ELS = 0x01, /* extended link service */
+ SPFC_FC_TYPE_IP = 0x05, /* IP over FC, RFC 4338 */
+ SPFC_FC_TYPE_FCP = 0x08, /* SCSI FCP */
+ SPFC_FC_TYPE_CT = 0x20, /* Fibre Channel Services (FC-CT) */
+ SPFC_FC_TYPE_ILS = 0x22 /* internal link service */
+};
+
+enum spfc_fc_fh_rctl {
+ SPFC_FC_RCTL_DD_UNCAT = 0x00, /* uncategorized information */
+ SPFC_FC_RCTL_DD_SOL_DATA = 0x01, /* solicited data */
+ SPFC_FC_RCTL_DD_UNSOL_CTL = 0x02, /* unsolicited control */
+ SPFC_FC_RCTL_DD_SOL_CTL = 0x03, /* solicited control or reply */
+ SPFC_FC_RCTL_DD_UNSOL_DATA = 0x04, /* unsolicited data */
+ SPFC_FC_RCTL_DD_DATA_DESC = 0x05, /* data descriptor */
+ SPFC_FC_RCTL_DD_UNSOL_CMD = 0x06, /* unsolicited command */
+ SPFC_FC_RCTL_DD_CMD_STATUS = 0x07, /* command status */
+
+#define SPFC_FC_RCTL_ILS_REQ SPFC_FC_RCTL_DD_UNSOL_CTL /* ILS request */
+#define SPFC_FC_RCTL_ILS_REP SPFC_FC_RCTL_DD_SOL_CTL /* ILS reply */
+
+ /*
+ * Extended Link_Data
+ */
+ SPFC_FC_RCTL_ELS_REQ = 0x22, /* extended link services request */
+ SPFC_FC_RCTL_ELS_RSP = 0x23, /* extended link services reply */
+ SPFC_FC_RCTL_ELS4_REQ = 0x32, /* FC-4 ELS request */
+ SPFC_FC_RCTL_ELS4_RSP = 0x33, /* FC-4 ELS reply */
+ /*
+ * Optional Extended Headers
+ */
+ SPFC_FC_RCTL_VFTH = 0x50, /* virtual fabric tagging header */
+ SPFC_FC_RCTL_IFRH = 0x51, /* inter-fabric routing header */
+ SPFC_FC_RCTL_ENCH = 0x52, /* encapsulation header */
+ /*
+ * Basic Link Services fh_r_ctl values.
+ */
+ SPFC_FC_RCTL_BA_NOP = 0x80, /* basic link service NOP */
+ SPFC_FC_RCTL_BA_ABTS = 0x81, /* basic link service abort */
+ SPFC_FC_RCTL_BA_RMC = 0x82, /* remove connection */
+ SPFC_FC_RCTL_BA_ACC = 0x84, /* basic accept */
+ SPFC_FC_RCTL_BA_RJT = 0x85, /* basic reject */
+ SPFC_FC_RCTL_BA_PRMT = 0x86, /* dedicated connection preempted */
+ /*
+ * Link Control Information.
+ */
+ SPFC_FC_RCTL_ACK_1 = 0xc0, /* acknowledge_1 */
+ SPFC_FC_RCTL_ACK_0 = 0xc1, /* acknowledge_0 */
+ SPFC_FC_RCTL_P_RJT = 0xc2, /* port reject */
+ SPFC_FC_RCTL_F_RJT = 0xc3, /* fabric reject */
+ SPFC_FC_RCTL_P_BSY = 0xc4, /* port busy */
+ SPFC_FC_RCTL_F_BSY = 0xc5, /* fabric busy to data frame */
+ SPFC_FC_RCTL_F_BSYL = 0xc6, /* fabric busy to link control frame */
+ SPFC_FC_RCTL_LCR = 0xc7, /* link credit reset */
+ SPFC_FC_RCTL_END = 0xc9 /* end */
+};
+
+struct spfc_fc_frame_header {
+ u8 rctl; /* routing control */
+ u8 did[ARRAY_INDEX_3]; /* Destination ID */
+
+ u8 cs_ctrl; /* class of service control / pri */
+ u8 sid[ARRAY_INDEX_3]; /* Source ID */
+
+ u8 type; /* see enum fc_fh_type below */
+ u8 frame_ctrl[ARRAY_INDEX_3]; /* frame control */
+
+ u8 seq_id; /* sequence ID */
+ u8 df_ctrl; /* data field control */
+ u16 seq_cnt; /* sequence count */
+
+ u16 oxid; /* originator exchange ID */
+ u16 rxid; /* responder exchange ID */
+ u32 param_offset; /* parameter or relative offset */
+};
+
+u32 spfc_recv_els_cmnd(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u8 *els_pld, u32 pld_len,
+ bool first);
+u32 spfc_rcv_ls_gs_rsp(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag);
+u32 spfc_rcv_els_rsp_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag);
+u32 spfc_rcv_bls_rsp(const struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
+ u32 hot_tag);
+u32 spfc_rsv_bls_rsp_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 rx_id);
+void spfc_save_login_parms_in_sq_info(struct spfc_hba_info *hba,
+ struct unf_port_login_parms *login_params);
+u32 spfc_handle_aeq_off_load_err(struct spfc_hba_info *hba,
+ struct spfc_aqe_data *aeq_msg);
+u32 spfc_free_xid(void *handle, struct unf_frame_pkg *pkg);
+u32 spfc_scq_free_xid_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe);
+u32 spfc_scq_exchg_timeout_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe);
+u32 spfc_scq_rcv_sq_nop_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe);
+u32 spfc_send_els_via_default_session(struct spfc_hba_info *hba, struct spfc_sqe *io_sqe,
+ struct unf_frame_pkg *pkg,
+ struct spfc_parent_queue_info *prt_queue_info);
+u32 spfc_send_ls_gs_cmnd(void *handle, struct unf_frame_pkg *pkg);
+u32 spfc_send_bls_cmnd(void *handle, struct unf_frame_pkg *pkg);
+
+/* Receive Frame from SCQ */
+u32 spfc_rcv_scq_entry_from_scq(struct spfc_hba_info *hba,
+ union spfc_scqe *scqe, u32 scqn);
+void *spfc_get_els_buf_by_user_id(struct spfc_hba_info *hba, u16 user_id);
+
+#define SPFC_CHECK_PKG_ALLOCTIME(pkg) \
+ do { \
+ if (unlikely(UNF_GETXCHGALLOCTIME(pkg) == 0)) { \
+ FC_DRV_PRINT(UNF_LOG_NORMAL, \
+ UNF_WARN, \
+ "[warn]Invalid MagicNum,S_ID(0x%x) " \
+ "D_ID(0x%x) OXID(0x%x) " \
+ "RX_ID(0x%x) Pkg type(0x%x) hot " \
+ "pooltag(0x%x)", \
+ UNF_GET_SID(pkg), UNF_GET_DID(pkg), \
+ UNF_GET_OXID(pkg), UNF_GET_RXID(pkg), \
+ ((struct unf_frame_pkg *)(pkg))->type, \
+ UNF_GET_XCHG_TAG(pkg)); \
+ } \
+ } while (0)
+
+#endif
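spfc_service.h above documents how struct unf_frame_pkg.cmnd packs an ELS response: the low 16 bits hold the response type (ACC or RJT) and the high 16 bits hold the ELS command being answered. The fragment below is a standalone, hedged illustration of that encoding; the mask and shift mirror SPFC_ELS_CMND_MASK and SPFC_ELS_CMND__RELEVANT_SHIFT, but the opcode values used are placeholders, not the driver's real ELS_ACC/ELS_PLOGI definitions.

/* Illustrative only: pack and unpack an ELS response code pair. */
#include <stdio.h>

#define ELS_CMND_MASK		0xffffu
#define ELS_RELEVANT_SHIFT	16u

#define GET_ELS_RSP_TYPE(cmnd)	((unsigned short)((cmnd) & ELS_CMND_MASK))
#define GET_ELS_RSP_CODE(cmnd)	\
	((unsigned short)(((cmnd) >> ELS_RELEVANT_SHIFT) & ELS_CMND_MASK))

int main(void)
{
	unsigned int els_acc = 0x02;	/* placeholder for ELS_ACC */
	unsigned int els_plogi = 0x03;	/* placeholder for ELS_PLOGI */
	/* Respond to a PLOGI with ACC: both codes share one 32-bit field. */
	unsigned int cmnd = (els_plogi << ELS_RELEVANT_SHIFT) | els_acc;

	printf("rsp type 0x%x answers cmnd 0x%x\n",
	       GET_ELS_RSP_TYPE(cmnd), GET_ELS_RSP_CODE(cmnd));
	return 0;
}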
diff --git a/drivers/scsi/spfc/hw/spfc_utils.c b/drivers/scsi/spfc/hw/spfc_utils.c
new file mode 100644
index 000000000000..328c388c95fe
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_utils.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_utils.h"
+#include "unf_log.h"
+#include "unf_common.h"
+
+void spfc_cpu_to_big64(void *addr, u32 size)
+{
+ u32 index = 0;
+ u32 cnt = 0;
+ u64 *temp = NULL;
+
+ FC_CHECK_VALID(addr, dump_stack(); return);
+ FC_CHECK_VALID((size % SPFC_QWORD_BYTE) == 0, dump_stack(); return);
+
+ temp = (u64 *)addr;
+ cnt = SPFC_SHIFT_TO_U64(size);
+
+ for (index = 0; index < cnt; index++) {
+ *temp = cpu_to_be64(*temp);
+ temp++;
+ }
+}
+
+void spfc_big_to_cpu64(void *addr, u32 size)
+{
+ u32 index = 0;
+ u32 cnt = 0;
+ u64 *temp = NULL;
+
+ FC_CHECK_VALID(addr, dump_stack(); return);
+ FC_CHECK_VALID((size % SPFC_QWORD_BYTE) == 0, dump_stack(); return);
+
+ temp = (u64 *)addr;
+ cnt = SPFC_SHIFT_TO_U64(size);
+
+ for (index = 0; index < cnt; index++) {
+ *temp = be64_to_cpu(*temp);
+ temp++;
+ }
+}
+
+void spfc_cpu_to_big32(void *addr, u32 size)
+{
+ unf_cpu_to_big_end(addr, size);
+}
+
+void spfc_big_to_cpu32(void *addr, u32 size)
+{
+ if (size % UNF_BYTES_OF_DWORD)
+ dump_stack();
+
+ unf_big_end_to_cpu(addr, size);
+}
+
+void spfc_cpu_to_be24(u8 *data, u32 value)
+{
+ data[ARRAY_INDEX_0] = (value >> UNF_SHIFT_16) & UNF_MASK_BIT_7_0;
+ data[ARRAY_INDEX_1] = (value >> UNF_SHIFT_8) & UNF_MASK_BIT_7_0;
+ data[ARRAY_INDEX_2] = value & UNF_MASK_BIT_7_0;
+}
+
+u32 spfc_big_to_cpu24(u8 *data)
+{
+ return (data[ARRAY_INDEX_0] << UNF_SHIFT_16) |
+ (data[ARRAY_INDEX_1] << UNF_SHIFT_8) | data[ARRAY_INDEX_2];
+}
+
+void spfc_print_buff(u32 dbg_level, void *buff, u32 size)
+{
+ u32 *spfc_buff = NULL;
+ u32 loop = 0;
+ u32 index = 0;
+
+ FC_CHECK_VALID(buff, dump_stack(); return);
+ FC_CHECK_VALID(0 == (size % SPFC_DWORD_BYTE), dump_stack(); return);
+
+ if ((dbg_level) <= unf_dgb_level) {
+ spfc_buff = (u32 *)buff;
+ loop = size / SPFC_DWORD_BYTE;
+
+ for (index = 0; index < loop; index++) {
+ spfc_buff = (u32 *)buff + index;
+ FC_DRV_PRINT(UNF_LOG_NORMAL,
+ UNF_MAJOR, "Buff DW%u 0x%08x.", index, *spfc_buff);
+ }
+ }
+}
+
+u32 spfc_log2n(u32 val)
+{
+ u32 result = 0;
+ u32 logn = (val >> UNF_SHIFT_1);
+
+ while (logn) {
+ logn >>= UNF_SHIFT_1;
+ result++;
+ }
+
+ return result;
+}
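As a quick sanity check of the helpers in spfc_utils.c, the user-space snippet below re-implements spfc_cpu_to_be24()/spfc_big_to_cpu24() and spfc_log2n() with plain literals in place of the UNF_SHIFT_* and ARRAY_INDEX_* macros, then asserts that a 24-bit FC address round-trips and that spfc_log2n() behaves as floor(log2(val)). It is an illustrative sketch only and not part of the patch.

/* Stand-alone re-implementation for illustration; literals replace macros. */
#include <assert.h>
#include <stdio.h>

static void cpu_to_be24(unsigned char *d, unsigned int v)
{
	d[0] = (v >> 16) & 0xff;
	d[1] = (v >> 8) & 0xff;
	d[2] = v & 0xff;
}

static unsigned int be24_to_cpu(const unsigned char *d)
{
	return (d[0] << 16) | (d[1] << 8) | d[2];
}

static unsigned int log2n(unsigned int val)
{
	unsigned int result = 0, logn = val >> 1;

	while (logn) {
		logn >>= 1;
		result++;
	}
	return result;
}

int main(void)
{
	unsigned char did[3];

	cpu_to_be24(did, 0x010203);		/* e.g. an FC D_ID */
	assert(be24_to_cpu(did) == 0x010203);	/* pack/unpack round-trips */
	assert(log2n(1024) == 10);		/* exact power of two */
	assert(log2n(1000) == 9);		/* floor for non-powers */
	printf("24-bit round-trip and log2n checks passed\n");
	return 0;
}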
diff --git a/drivers/scsi/spfc/hw/spfc_utils.h b/drivers/scsi/spfc/hw/spfc_utils.h
new file mode 100644
index 000000000000..6b4330da3f1d
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_utils.h
@@ -0,0 +1,202 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_UTILS_H
+#define SPFC_UTILS_H
+
+#include "unf_type.h"
+#include "unf_log.h"
+
+#define SPFC_ZERO (0)
+
+#define SPFC_BIT(n) (0x1UL << (n))
+#define SPFC_BIT_0 SPFC_BIT(0)
+#define SPFC_BIT_1 SPFC_BIT(1)
+#define SPFC_BIT_2 SPFC_BIT(2)
+#define SPFC_BIT_3 SPFC_BIT(3)
+#define SPFC_BIT_4 SPFC_BIT(4)
+#define SPFC_BIT_5 SPFC_BIT(5)
+#define SPFC_BIT_6 SPFC_BIT(6)
+#define SPFC_BIT_7 SPFC_BIT(7)
+#define SPFC_BIT_8 SPFC_BIT(8)
+#define SPFC_BIT_9 SPFC_BIT(9)
+#define SPFC_BIT_10 SPFC_BIT(10)
+#define SPFC_BIT_11 SPFC_BIT(11)
+#define SPFC_BIT_12 SPFC_BIT(12)
+#define SPFC_BIT_13 SPFC_BIT(13)
+#define SPFC_BIT_14 SPFC_BIT(14)
+#define SPFC_BIT_15 SPFC_BIT(15)
+#define SPFC_BIT_16 SPFC_BIT(16)
+#define SPFC_BIT_17 SPFC_BIT(17)
+#define SPFC_BIT_18 SPFC_BIT(18)
+#define SPFC_BIT_19 SPFC_BIT(19)
+#define SPFC_BIT_20 SPFC_BIT(20)
+#define SPFC_BIT_21 SPFC_BIT(21)
+#define SPFC_BIT_22 SPFC_BIT(22)
+#define SPFC_BIT_23 SPFC_BIT(23)
+#define SPFC_BIT_24 SPFC_BIT(24)
+#define SPFC_BIT_25 SPFC_BIT(25)
+#define SPFC_BIT_26 SPFC_BIT(26)
+#define SPFC_BIT_27 SPFC_BIT(27)
+#define SPFC_BIT_28 SPFC_BIT(28)
+#define SPFC_BIT_29 SPFC_BIT(29)
+#define SPFC_BIT_30 SPFC_BIT(30)
+#define SPFC_BIT_31 SPFC_BIT(31)
+
+#define SPFC_GET_BITS(data, mask) ((data) & (mask)) /* Obtains the bit */
+#define SPFC_SET_BITS(data, mask) ((data) |= (mask)) /* set the bit */
+#define SPFC_CLR_BITS(data, mask) ((data) &= ~(mask)) /* clear the bit */
+
+#define SPFC_LSB(x) ((u8)(x))
+#define SPFC_MSB(x) ((u8)((u16)(x) >> 8))
+
+#define SPFC_LSW(x) ((u16)(x))
+#define SPFC_MSW(x) ((u16)((u32)(x) >> 16))
+
+#define SPFC_LSD(x) ((u32)((u64)(x)))
+#define SPFC_MSD(x) ((u32)((((u64)(x)) >> 16) >> 16))
+
+#define SPFC_BYTES_TO_QW_NUM(x) ((x) >> 3)
+#define SPFC_BYTES_TO_DW_NUM(x) ((x) >> 2)
+
+#define UNF_GET_SHIFTMASK(__src, __shift, __mask) (((__src) & (__mask)) >> (__shift))
+#define UNF_FC_SET_SHIFTMASK(__des, __val, __shift, __mask) \
+ ((__des) = (((__des) & ~(__mask)) | (((__val) << (__shift)) & (__mask))))
+
+/* R_CTL */
+#define UNF_FC_HEADER_RCTL_MASK (0xFF000000)
+#define UNF_FC_HEADER_RCTL_SHIFT (24)
+#define UNF_FC_HEADER_RCTL_DWORD (0)
+#define UNF_GET_FC_HEADER_RCTL(__pfcheader) \
+ UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[UNF_FC_HEADER_RCTL_DWORD], \
+ UNF_FC_HEADER_RCTL_SHIFT, UNF_FC_HEADER_RCTL_MASK)
+
+#define UNF_SET_FC_HEADER_RCTL(__pfcheader, __val) \
+ do { \
+		UNF_FC_SET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[UNF_FC_HEADER_RCTL_DWORD], \
+				     __val, UNF_FC_HEADER_RCTL_SHIFT, UNF_FC_HEADER_RCTL_MASK); \
+ } while (0)
+
+/* PRLI PARAM 3 */
+#define SPFC_PRLI_PARAM_WXFER_ENABLE_MASK (0x00000001)
+#define SPFC_PRLI_PARAM_WXFER_ENABLE_SHIFT (0)
+#define SPFC_PRLI_PARAM_WXFER_DWORD (3)
+#define SPFC_GET_PRLI_PARAM_WXFER(__pfcheader) \
+ UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[SPFC_PRLI_PARAM_WXFER_DWORD], \
+ SPFC_PRLI_PARAM_WXFER_ENABLE_SHIFT, \
+ SPFC_PRLI_PARAM_WXFER_ENABLE_MASK)
+
+#define SPFC_PRLI_PARAM_CONF_ENABLE_MASK (0x00000080)
+#define SPFC_PRLI_PARAM_CONF_ENABLE_SHIFT (7)
+#define SPFC_PRLI_PARAM_CONF_DWORD (3)
+#define SPFC_GET_PRLI_PARAM_CONF(__pfcheader) \
+ UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[SPFC_PRLI_PARAM_CONF_DWORD], \
+ SPFC_PRLI_PARAM_CONF_ENABLE_SHIFT, \
+ SPFC_PRLI_PARAM_CONF_ENABLE_MASK)
+
+#define SPFC_PRLI_PARAM_REC_ENABLE_MASK (0x00000400)
+#define SPFC_PRLI_PARAM_REC_ENABLE_SHIFT (10)
+#define SPFC_PRLI_PARAM_CONF_REC (3)
+#define SPFC_GET_PRLI_PARAM_REC(__pfcheader) \
+ UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[SPFC_PRLI_PARAM_CONF_REC], \
+ SPFC_PRLI_PARAM_REC_ENABLE_SHIFT, SPFC_PRLI_PARAM_REC_ENABLE_MASK)
+
+#define SPFC_FUNCTION_ENTER \
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ALL, \
+ "%s Enter.", __func__)
+#define SPFC_FUNCTION_RETURN \
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ALL, \
+ "%s Return.", __func__)
+
+#define SPFC_SPIN_LOCK_IRQSAVE(interrupt, hw_adapt_lock, flags) \
+ do { \
+ if ((interrupt) == false) { \
+ spin_lock_irqsave(&(hw_adapt_lock), flags); \
+ } \
+ } while (0)
+
+#define SPFC_SPIN_UNLOCK_IRQRESTORE(interrupt, hw_adapt_lock, flags) \
+ do { \
+ if ((interrupt) == false) { \
+ spin_unlock_irqrestore(&(hw_adapt_lock), flags); \
+ } \
+ } while (0)
+
+#define FC_CHECK_VALID(condition, fail_do) \
+ do { \
+ if (unlikely(!(condition))) { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, \
+ UNF_ERR, "Para check(%s) invalid", \
+ #condition); \
+ fail_do; \
+ } \
+ } while (0)
+
+#define RETURN_ERROR_S32 (-1)
+#define UNF_RETURN_ERROR_S32 (-1)
+
+enum SPFC_LOG_CTRL_E {
+ SPFC_LOG_ALL = 0,
+ SPFC_LOG_SCQE_RX,
+ SPFC_LOG_ELS_TX,
+ SPFC_LOG_ELS_RX,
+ SPFC_LOG_GS_TX,
+ SPFC_LOG_GS_RX,
+ SPFC_LOG_BLS_TX,
+ SPFC_LOG_BLS_RX,
+ SPFC_LOG_FCP_TX,
+ SPFC_LOG_FCP_RX,
+ SPFC_LOG_SESS_TX,
+ SPFC_LOG_SESS_RX,
+ SPFC_LOG_DIF_TX,
+ SPFC_LOG_DIF_RX
+};
+
+extern u32 spfc_log_en;
+#define SPFC_LOG_EN(hba, log_ctrl) (spfc_log_en + (log_ctrl))
+
+enum SPFC_HBA_ERR_STAT_E {
+ SPFC_STAT_CTXT_FLUSH_DONE = 0,
+ SPFC_STAT_SQ_WAIT_EMPTY,
+ SPFC_STAT_LAST_GS_SCQE,
+ SPFC_STAT_SQ_POOL_EMPTY,
+ SPFC_STAT_PARENT_IO_FLUSHED,
+ SPFC_STAT_ROOT_IO_FLUSHED, /* 5 */
+ SPFC_STAT_ROOT_SQ_FULL,
+ SPFC_STAT_ELS_RSP_EXCH_REUSE,
+ SPFC_STAT_GS_RSP_EXCH_REUSE,
+ SPFC_STAT_SQ_IO_BUFFER_CLEARED,
+ SPFC_STAT_PARENT_SQ_NOT_OFFLOADED, /* 10 */
+ SPFC_STAT_PARENT_SQ_QUEUE_DELAYED_WORK,
+ SPFC_STAT_PARENT_SQ_INVALID_CACHED_ID,
+ SPFC_HBA_STAT_BUTT
+};
+
+#define SPFC_DWORD_BYTE (4)
+#define SPFC_QWORD_BYTE (8)
+#define SPFC_SHIFT_TO_U64(x) ((x) >> 3)
+#define SPFC_SHIFT_TO_U32(x) ((x) >> 2)
+
+void spfc_cpu_to_big64(void *addr, u32 size);
+void spfc_big_to_cpu64(void *addr, u32 size);
+void spfc_cpu_to_big32(void *addr, u32 size);
+void spfc_big_to_cpu32(void *addr, u32 size);
+void spfc_cpu_to_be24(u8 *data, u32 value);
+u32 spfc_big_to_cpu24(u8 *data);
+
+void spfc_print_buff(u32 dbg_level, void *buff, u32 size);
+
+u32 spfc_log2n(u32 val);
+
+static inline void spfc_swap_16_in_32(u32 *paddr, u32 length)
+{
+ u32 i;
+
+ for (i = 0; i < length; i++) {
+ paddr[i] =
+ ((((paddr[i]) & UNF_MASK_BIT_31_16) >> UNF_SHIFT_16) |
+ (((paddr[i]) & UNF_MASK_BIT_15_0) << UNF_SHIFT_16));
+ }
+}
+
+#endif /* __SPFC_UTILS_H__ */
diff --git a/drivers/scsi/spfc/hw/spfc_wqe.c b/drivers/scsi/spfc/hw/spfc_wqe.c
new file mode 100644
index 000000000000..61909c51bc8c
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_wqe.c
@@ -0,0 +1,646 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_wqe.h"
+#include "spfc_module.h"
+#include "spfc_service.h"
+
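+/* ****************************************************************************
+ * Function Name : spfc_build_tmf_rsp_wqe_ts_header
+ * Function Description : Fill the Task Section header of the TMF response WQE:
+ * task type, connection ID, local exchange ID (offset by the exchange base),
+ * SCQN and magic number.
+ * Input Parameters : struct unf_frame_pkg *pkg struct spfc_sqe_tmf_rsp *sqe
+ * u16 exi_base u32 scqn
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */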
+void spfc_build_tmf_rsp_wqe_ts_header(struct unf_frame_pkg *pkg,
+ struct spfc_sqe_tmf_rsp *sqe, u16 exi_base,
+ u32 scqn)
+{
+ sqe->ts_sl.task_type = SPFC_SQE_FCP_TMF_TRSP;
+ sqe->ts_sl.wd0.conn_id =
+ (u16)(pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX]);
+
+ if (UNF_GET_RXID(pkg) == INVALID_VALUE16)
+ sqe->ts_sl.local_xid = INVALID_VALUE16;
+ else
+ sqe->ts_sl.local_xid = UNF_GET_RXID(pkg) + exi_base;
+
+ sqe->ts_sl.tmf_rsp.wd0.scqn = scqn;
+ sqe->ts_sl.magic_num = UNF_GETXCHGALLOCTIME(pkg);
+}
+
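+/* ****************************************************************************
+ * Function Name : spfc_build_common_wqe_ctrls
+ * Function Description : Fill the Control Section of a common WQE: no BDS,
+ * DrvS or DIFS, Task Section size given by task_len, completion required.
+ * Input Parameters : struct spfc_wqe_ctrl *ctrl_sl u8 task_len
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */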
+void spfc_build_common_wqe_ctrls(struct spfc_wqe_ctrl *ctrl_sl, u8 task_len)
+{
+ /* "BDSL" field of CtrlS - defines the size of BDS, which varies from 0
+ * to 2040 bytes (8 bits of 8 bytes' chunk)
+ */
+ ctrl_sl->ch.wd0.bdsl = 0;
+
+ /* "DrvSL" field of CtrlS - defines the size of DrvS, which varies from
+ * 0 to 24 bytes
+ */
+ ctrl_sl->ch.wd0.drv_sl = 0;
+
+ /* a.
+ * b1 - linking WQE, which will be only used in linked page architecture
+ * instead of ring, it's a special control WQE which does not contain
+ * any buffer or inline data information, and will only be consumed by
+ * hardware. The size is aligned to WQEBB/WQE b0 - normal WQE, either
+ * normal SEG WQE or inline data WQE
+ */
+ ctrl_sl->ch.wd0.wf = 0;
+
+ /*
+ * "CF" field of CtrlS - Completion Format - defines the format of CS.
+ * a. b0 - Status information is embedded inside of Completion Section
+ * b. b1 - Completion Section keeps SGL, where Status information
+	 * should be written. (For the definition of SGLs see section 4.1.)
+ */
+ ctrl_sl->ch.wd0.cf = 0;
+
+ /* "TSL" field of CtrlS - defines the size of TS, which varies from 0 to
+ * 248 bytes
+ */
+ ctrl_sl->ch.wd0.tsl = task_len;
+
+ /*
+ * Variable length SGE (vSGE). The size of SGE is 16 bytes. The vSGE
+ * format is of two types, which are defined by "VA " field of CtrlS.
+ * "VA" stands for Virtual Address: o b0. SGE comprises 64-bits
+ * buffer's pointer and 31-bits Length, each SGE can only support up to
+ * 2G-1B, it can guarantee each single SGE length can not exceed 2GB by
+ * nature, A byte count value of zero means a 0byte data transfer. o b1.
+ * SGE comprises 64-bits buffer's pointer, 31-bits Length and 30-bits
+	 * Key of the Translation table, each SGE can only support up to 2G-1B,
+ * it can guarantee each single SGE length can not exceed 2GB by nature,
+ * A byte count value of zero means a 0byte data transfer
+ */
+ ctrl_sl->ch.wd0.va = 0;
+
+ /*
+ * "DF" field of CtrlS - Data Format - defines the format of BDS
+ * a. b0 - BDS carries the list of SGEs (SGL)
+ * b. b1 - BDS carries the inline data
+ */
+ ctrl_sl->ch.wd0.df = 0;
+
+ /* "CR" - Completion is Required - marks CQE generation request per WQE
+ */
+ ctrl_sl->ch.wd0.cr = 1;
+
+ /* "DIFSL" field of CtrlS - defines the size of DIFS, which varies from
+ * 0 to 56 bytes
+ */
+ ctrl_sl->ch.wd0.dif_sl = 0;
+
+ /* "CSL" field of CtrlS - defines the size of CS, which varies from 0 to
+ * 24 bytes
+ */
+ ctrl_sl->ch.wd0.csl = 0;
+
+ /* CtrlSL describes the size of CtrlS in 8 bytes chunks. The
+ * value Zero is not valid
+ */
+ ctrl_sl->ch.wd0.ctrl_sl = 1;
+
+ /* "O" - Owner - marks ownership of WQE */
+ ctrl_sl->ch.wd0.owner = 0;
+}
+
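+/* ****************************************************************************
+ * Function Name : spfc_build_trd_twr_wqe_ctrls
+ * Function Description : Fill the Control Section of the TRD/TWR (target
+ * read/write) WQE: two SGEs carried in the BDS by default and a Task Section
+ * sized to struct spfc_sqe_ts.
+ * Input Parameters : struct unf_frame_pkg *pkg struct spfc_sqe *sqe
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */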
+void spfc_build_trd_twr_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe)
+{
+ /* "BDSL" field of CtrlS - defines the size of BDS, which varies from 0
+ * to 2040 bytes (8 bits of 8 bytes' chunk)
+ */
+ /* TrdWqe carry 2 SGE defaultly, 4DW per SGE, the value is 4 because
+ * unit is 2DW, in double SGL mode, bdsl is 2
+ */
+ sqe->ctrl_sl.ch.wd0.bdsl = SPFC_T_RD_WR_WQE_CTR_BDSL_SIZE;
+
+ /* "DrvSL" field of CtrlS - defines the size of DrvS, which varies from
+ * 0 to 24 bytes
+ */
+ /* DrvSL = 0 */
+ sqe->ctrl_sl.ch.wd0.drv_sl = 0;
+
+ /* a.
+ * b1 - linking WQE, which will be only used in linked page architecture
+ * instead of ring, it's a special control WQE which does not contain
+ * any buffer or inline data information, and will only be consumed by
+ * hardware. The size is aligned to WQEBB/WQE b0 - normal WQE, either
+ * normal SEG WQE or inline data WQE
+ */
+ /* normal wqe */
+ sqe->ctrl_sl.ch.wd0.wf = 0;
+
+ /*
+ * "CF" field of CtrlS - Completion Format - defines the format of CS.
+ * a. b0 - Status information is embedded inside of Completion Section
+ * b. b1 - Completion Section keeps SGL, where Status information
+	 * should be written. (For the definition of SGLs see section 4.1.)
+ */
+ /* by SCQE mode, the value is ignored */
+ sqe->ctrl_sl.ch.wd0.cf = 0;
+
+ /* "TSL" field of CtrlS - defines the size of TS, which varies from 0 to
+ * 248 bytes
+ */
+ /* TSL is configured by 56 bytes */
+ sqe->ctrl_sl.ch.wd0.tsl =
+ sizeof(struct spfc_sqe_ts) / SPFC_WQE_SECTION_CHUNK_SIZE;
+
+ /*
+ * Variable length SGE (vSGE). The size of SGE is 16 bytes. The vSGE
+ * format is of two types, which are defined by "VA " field of CtrlS.
+ * "VA" stands for Virtual Address: o b0. SGE comprises 64-bits buffer's
+ * pointer and 31-bits Length, each SGE can only support up to 2G-1B, it
+ * can guarantee each single SGE length can not exceed 2GB by nature, A
+ * byte count value of zero means a 0byte data transfer. o b1. SGE
+ * comprises 64-bits buffer's pointer, 31-bits Length and 30-bits Key of
+	 * the Translation table, each SGE can only support up to 2G-1B, it can
+ * guarantee each single SGE length can not exceed 2GB by nature, A byte
+ * count value of zero means a 0byte data transfer
+ */
+ sqe->ctrl_sl.ch.wd0.va = 0;
+
+ /*
+ * "DF" field of CtrlS - Data Format - defines the format of BDS
+ * a. b0 - BDS carries the list of SGEs (SGL)
+ * b. b1 - BDS carries the inline data
+ */
+ sqe->ctrl_sl.ch.wd0.df = 0;
+
+ /* "CR" - Completion is Required - marks CQE generation request per WQE
+ */
+ /* by SCQE mode, this value is ignored */
+ sqe->ctrl_sl.ch.wd0.cr = 1;
+
+ /* "DIFSL" field of CtrlS - defines the size of DIFS, which varies from
+ * 0 to 56 bytes.
+ */
+ sqe->ctrl_sl.ch.wd0.dif_sl = 0;
+
+ /* "CSL" field of CtrlS - defines the size of CS, which varies from 0 to
+ * 24 bytes
+ */
+ sqe->ctrl_sl.ch.wd0.csl = 0;
+
+ /* CtrlSL describes the size of CtrlS in 8 bytes chunks. The
+ * value Zero is not valid.
+ */
+ sqe->ctrl_sl.ch.wd0.ctrl_sl = SPFC_T_RD_WR_WQE_CTR_CTRLSL_SIZE;
+
+ /* "O" - Owner - marks ownership of WQE */
+ sqe->ctrl_sl.ch.wd0.owner = 0;
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_service_wqe_ts_common
+ * Function Description : Construct the DW1~DW3 fields in the Parent SQ WQE
+ * of the ELS and ELS_RSP requests.
+ * Input Parameters : struct spfc_sqe_ts *sqe_ts u32 rport_index u16 local_xid
+ * u16 remote_xid u16 data_len
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_service_wqe_ts_common(struct spfc_sqe_ts *sqe_ts, u32 rport_index,
+ u16 local_xid, u16 remote_xid, u16 data_len)
+{
+ sqe_ts->local_xid = local_xid;
+
+ sqe_ts->wd0.conn_id = (u16)rport_index;
+ sqe_ts->wd0.remote_xid = remote_xid;
+
+ sqe_ts->cont.els_gs_elsrsp_comm.data_len = data_len;
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_els_gs_wqe_sge
+ * Function Description : Construct the SGE field of the ELS and ELS_RSP WQE.
+ * The SGE and the frame payload are converted to big-endian in this
+ * function.
+ * Input Parameters: struct spfc_sqe *sqe void *buf_addr u64 phy_addr
+ * u32 buf_len u32 xid void *handle
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_els_gs_wqe_sge(struct spfc_sqe *sqe, void *buf_addr, u64 phy_addr,
+ u32 buf_len, u32 xid, void *handle)
+{
+ u64 els_rsp_phy_addr;
+ struct spfc_variable_sge *sge = NULL;
+
+ /* Fill in SGE and convert it to big-endian. */
+ sge = &sqe->sge[ARRAY_INDEX_0];
+ els_rsp_phy_addr = phy_addr;
+ sge->buf_addr_hi = SPFC_HIGH_32_BITS(els_rsp_phy_addr);
+ sge->buf_addr_lo = SPFC_LOW_32_BITS(els_rsp_phy_addr);
+ sge->wd0.buf_len = buf_len;
+ sge->wd0.r_flag = 0;
+ sge->wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sge->wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
+ sge->wd1.xid = 0;
+ sge->wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+ spfc_cpu_to_big32(sge, sizeof(*sge));
+
+	/* Convert the payload of the FC frame to big-endian. */
+ if (buf_addr)
+ spfc_cpu_to_big32(buf_addr, buf_len);
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_els_wqe_ts_rsp
+ * Function Description : Construct the DW2~DW6 fields in the Parent SQ WQE
+ * of the ELS_RSP request.
+ * Input Parameters : struct spfc_sqe *sqe void *info struct unf_frame_pkg *pkg
+ * void *frame_pld u16 type u16 cmnd
+ * Output Parameters: N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_els_wqe_ts_rsp(struct spfc_sqe *sqe, void *info,
+ struct unf_frame_pkg *pkg, void *frame_pld,
+ u16 type, u16 cmnd)
+{
+ struct unf_prli_payload *prli_acc_pld = NULL;
+ struct spfc_sqe_els_rsp *els_rsp = NULL;
+ struct spfc_sqe_ts *sqe_ts = NULL;
+ struct spfc_parent_sq_info *sq_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+ struct unf_fc_head *pkg_fc_hdr_info = NULL;
+ struct spfc_parent_queue_info *prnt_q_info = (struct spfc_parent_queue_info *)info;
+
+ FC_CHECK_RETURN_VOID(sqe);
+ FC_CHECK_RETURN_VOID(frame_pld);
+
+ sqe_ts = &sqe->ts_sl;
+ els_rsp = &sqe_ts->cont.els_rsp;
+ sqe_ts->task_type = SPFC_SQE_ELS_RSP;
+
+ /* The default chip does not need to update parameters. */
+ els_rsp->wd1.para_update = 0x0;
+
+ sq_info = &prnt_q_info->parent_sq_info;
+ hba = (struct spfc_hba_info *)sq_info->hba;
+
+ pkg_fc_hdr_info = &pkg->frame_head;
+ els_rsp->sid = pkg_fc_hdr_info->csctl_sid;
+ els_rsp->did = pkg_fc_hdr_info->rctl_did;
+ els_rsp->wd7.hotpooltag = UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base;
+ els_rsp->wd2.class_mode = FC_PROTOCOL_CLASS_3;
+
+ if (type == ELS_RJT)
+ els_rsp->wd2.class_mode = pkg->class_mode;
+
+	/* When a PLOGI or LOGO response is sent, the microcode needs to be
+	 * instructed to clear the I/O related to the link to avoid data
+	 * inconsistency caused by out-of-order I/O.
+	 */
+	if (cmnd == ELS_LOGO || cmnd == ELS_PLOGI) {
+ els_rsp->wd1.clr_io = 1;
+ els_rsp->wd6.reset_exch_start = hba->exi_base;
+ els_rsp->wd6.reset_exch_end =
+ hba->exi_base + (hba->exi_count - 1);
+ els_rsp->wd7.scqn =
+ prnt_q_info->parent_sts_scq_info.cqm_queue_id;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) send cmd(0x%x) to RPort(0x%x),rport index(0x%x), notify clean io start 0x%x, end 0x%x, scqn 0x%x.",
+ sq_info->local_port_id, cmnd, sq_info->remote_port_id,
+ sq_info->rport_index, els_rsp->wd6.reset_exch_start,
+ els_rsp->wd6.reset_exch_end, els_rsp->wd7.scqn);
+
+ return;
+ }
+
+ if (type == ELS_RJT)
+ return;
+
+ /* Enter WQE in the PrliAcc negotiation parameter, and fill in the
+ * Update flag in WQE.
+ */
+ if (cmnd == ELS_PRLI) {
+ /* The chip updates the PLOGI ACC negotiation parameters. */
+ els_rsp->wd2.seq_cnt = sq_info->plogi_co_parms.seq_cnt;
+ els_rsp->wd2.e_d_tov = sq_info->plogi_co_parms.ed_tov;
+ els_rsp->wd2.tx_mfs = sq_info->plogi_co_parms.tx_mfs;
+ els_rsp->e_d_tov_timer_val = sq_info->plogi_co_parms.ed_tov_time;
+
+ /* The chip updates the PRLI ACC parameter. */
+ prli_acc_pld = (struct unf_prli_payload *)frame_pld;
+ els_rsp->wd4.xfer_dis = SPFC_GET_PRLI_PARAM_WXFER(prli_acc_pld->parms);
+ els_rsp->wd4.conf = SPFC_GET_PRLI_PARAM_CONF(prli_acc_pld->parms);
+ els_rsp->wd4.rec = SPFC_GET_PRLI_PARAM_REC(prli_acc_pld->parms);
+
+ els_rsp->wd1.para_update = 0x03;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) save rport index(0x%x) login parms,seqcnt:0x%x,e_d_tov:0x%x,txmfs:0x%x,e_d_tovtimerval:0x%x, xfer_dis:0x%x,conf:0x%x,rec:0x%x.",
+ sq_info->local_port_id, sq_info->rport_index,
+ els_rsp->wd2.seq_cnt, els_rsp->wd2.e_d_tov,
+ els_rsp->wd2.tx_mfs, els_rsp->e_d_tov_timer_val,
+ els_rsp->wd4.xfer_dis, els_rsp->wd4.conf, els_rsp->wd4.rec);
+ }
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_els_wqe_ts_req
+ * Function Description: Construct the DW2~DW4 fields in the Parent SQ WQE
+ * of the ELS/GS request.
+ * Input Parameters: struct spfc_sqe *sqe void *info u32 scqn
+ * void *frame_pld struct unf_frame_pkg *pkg
+ * Output Parameters: N/A
+ * Return Type: void
+ ****************************************************************************
+ */
+void spfc_build_els_wqe_ts_req(struct spfc_sqe *sqe, void *info, u32 scqn,
+ void *frame_pld, struct unf_frame_pkg *pkg)
+{
+ struct spfc_sqe_ts *sqe_ts = NULL;
+ struct spfc_sqe_t_els_gs *els_req = NULL;
+ struct spfc_parent_sq_info *sq_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+ struct unf_fc_head *pkg_fc_hdr_info = NULL;
+ u16 cmnd;
+
+ cmnd = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
+
+ sqe_ts = &sqe->ts_sl;
+ if (pkg->type == UNF_PKG_GS_REQ)
+ sqe_ts->task_type = SPFC_SQE_GS_CMND;
+ else
+ sqe_ts->task_type = SPFC_SQE_ELS_CMND;
+
+ sqe_ts->magic_num = UNF_GETXCHGALLOCTIME(pkg);
+
+ els_req = &sqe_ts->cont.t_els_gs;
+ pkg_fc_hdr_info = &pkg->frame_head;
+
+ sq_info = (struct spfc_parent_sq_info *)info;
+ hba = (struct spfc_hba_info *)sq_info->hba;
+ els_req->sid = pkg_fc_hdr_info->csctl_sid;
+ els_req->did = pkg_fc_hdr_info->rctl_did;
+
+	/* When a PLOGI or LOGO request is sent, the microcode needs to be
+	 * instructed to clear the I/O related to the link to avoid data
+	 * inconsistency caused by out-of-order I/O.
+	 */
+ if ((cmnd == ELS_LOGO || cmnd == ELS_PLOGI) && hba) {
+ els_req->wd4.clr_io = 1;
+ els_req->wd6.reset_exch_start = hba->exi_base;
+ els_req->wd6.reset_exch_end = hba->exi_base + (hba->exi_count - 1);
+ els_req->wd7.scqn = scqn;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) Rport(0x%x) SID(0x%x) send %s to DID(0x%x), notify clean io start 0x%x, end 0x%x, scqn 0x%x.",
+ hba->port_cfg.port_id, sq_info->rport_index,
+ sq_info->local_port_id, (cmnd == ELS_PLOGI) ? "PLOGI" : "LOGO",
+ sq_info->remote_port_id, els_req->wd6.reset_exch_start,
+ els_req->wd6.reset_exch_end, scqn);
+
+ return;
+ }
+
+ /* The chip updates the PLOGI ACC negotiation parameters. */
+ if (cmnd == ELS_PRLI) {
+ els_req->wd5.seq_cnt = sq_info->plogi_co_parms.seq_cnt;
+ els_req->wd5.e_d_tov = sq_info->plogi_co_parms.ed_tov;
+ els_req->wd5.tx_mfs = sq_info->plogi_co_parms.tx_mfs;
+ els_req->e_d_tov_timer_val = sq_info->plogi_co_parms.ed_tov_time;
+
+ els_req->wd4.rec_support = hba->port_cfg.tape_support ? 1 : 0;
+ els_req->wd4.para_update = 0x01;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO,
+ "Port(0x%x) save rport index(0x%x) login parms,seqcnt:0x%x,e_d_tov:0x%x,txmfs:0x%x,e_d_tovtimerval:0x%x.",
+ sq_info->local_port_id, sq_info->rport_index,
+ els_req->wd5.seq_cnt, els_req->wd5.e_d_tov,
+ els_req->wd5.tx_mfs, els_req->e_d_tov_timer_val);
+ }
+
+ if (cmnd == ELS_ECHO)
+ els_req->echo_flag = true;
+
+ if (cmnd == ELS_REC) {
+ els_req->wd4.rec_flag = 1;
+ els_req->wd4.origin_hottag = pkg->origin_hottag + hba->exi_base;
+ els_req->origin_magicnum = pkg->origin_magicnum;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) Rport(0x%x) SID(0x%x) send Rec to DID(0x%x), origin_hottag 0x%x",
+ hba->port_cfg.port_id, sq_info->rport_index,
+ sq_info->local_port_id, sq_info->remote_port_id,
+ els_req->wd4.origin_hottag);
+ }
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_bls_wqe_ts_req
+ * Function Description: Construct the DW2 field in the Parent SQ WQE of
+ * the BLS (ABTS) request, that is, the ABTS parameter.
+ * Input Parameters: struct spfc_sqe *sqe struct unf_frame_pkg *pkg void *handle
+ * Output Parameters: N/A
+ * Return Type: void
+ ****************************************************************************
+ */
+void spfc_build_bls_wqe_ts_req(struct spfc_sqe *sqe, struct unf_frame_pkg *pkg, void *handle)
+{
+ struct spfc_sqe_abts *abts;
+
+ sqe->ts_sl.task_type = SPFC_SQE_BLS_CMND;
+ sqe->ts_sl.magic_num = UNF_GETXCHGALLOCTIME(pkg);
+
+ abts = &sqe->ts_sl.cont.abts;
+ abts->fh_parm_abts = pkg->frame_head.parameter;
+ abts->hotpooltag = UNF_GET_HOTPOOL_TAG(pkg) +
+ ((struct spfc_hba_info *)handle)->exi_base;
+ abts->release_timer = UNF_GET_XID_RELEASE_TIMER(pkg);
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_service_wqe_ctrl_section
+ * Function Description: Fill the Control Section of the Parent SQ WQE and Root SQ WQE.
+ * Input Parameters : struct spfc_wqe_ctrl *wqe_cs u32 ts_size u32 bdsl
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_service_wqe_ctrl_section(struct spfc_wqe_ctrl *wqe_cs, u32 ts_size,
+ u32 bdsl)
+{
+ wqe_cs->ch.wd0.bdsl = bdsl;
+ wqe_cs->ch.wd0.drv_sl = 0;
+ wqe_cs->ch.wd0.rsvd0 = 0;
+ wqe_cs->ch.wd0.wf = 0;
+ wqe_cs->ch.wd0.cf = 0;
+ wqe_cs->ch.wd0.tsl = ts_size;
+ wqe_cs->ch.wd0.va = 0;
+ wqe_cs->ch.wd0.df = 0;
+ wqe_cs->ch.wd0.cr = 1;
+ wqe_cs->ch.wd0.dif_sl = 0;
+ wqe_cs->ch.wd0.csl = 0;
+ wqe_cs->ch.wd0.ctrl_sl = SPFC_BYTES_TO_QW_NUM(sizeof(*wqe_cs)); /* divided by 8 */
+ wqe_cs->ch.wd0.owner = 0;
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_wqe_owner_pmsn
+ * Function Description: Fill the owner and WQE sequence number fields of the
+ * Parent SQ WQE Control Section and mirror them into the extended copy.
+ * Input Parameters: struct spfc_wqe_ctrl *wqe_cs u16 owner u16 pmsn
+ * Output Parameters : N/A
+ * Return Type: void
+ ****************************************************************************
+ */
+void spfc_build_wqe_owner_pmsn(struct spfc_sqe *io_sqe, u16 owner, u16 pmsn)
+{
+ struct spfc_wqe_ctrl *wqe_cs = &io_sqe->ctrl_sl;
+ struct spfc_wqe_ctrl *wqee_cs = &io_sqe->ectrl_sl;
+
+ wqe_cs->qsf.wqe_sn = pmsn;
+ wqe_cs->qsf.dump_wqe_sn = wqe_cs->qsf.wqe_sn;
+ wqe_cs->ch.wd0.owner = (u32)owner;
+ wqee_cs->ch.ctrl_ch_val = wqe_cs->ch.ctrl_ch_val;
+ wqee_cs->qsf.wqe_sn = wqe_cs->qsf.wqe_sn;
+ wqee_cs->qsf.dump_wqe_sn = wqe_cs->qsf.dump_wqe_sn;
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_convert_parent_wqe_to_big_endian
+ * Function Description: Set the Done field of Parent SQ WQE and convert
+ * Control Section and Task Section to big-endian.
+ * Input Parameters:struct spfc_sqe *sqe
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_convert_parent_wqe_to_big_endian(struct spfc_sqe *sqe)
+{
+ if (likely(sqe->ts_sl.task_type != SPFC_TASK_T_TRESP &&
+ sqe->ts_sl.task_type != SPFC_TASK_T_TMF_RESP)) {
+		/* Convert the Control Section and Task Section to big-endian.
+		 * Before the SGE enters the queue, the upper-layer driver
+		 * converts the SGE and Task Section to big-endian.
+		 */
+ spfc_cpu_to_big32(&sqe->ctrl_sl, sizeof(sqe->ctrl_sl));
+ spfc_cpu_to_big32(&sqe->ts_sl, sizeof(sqe->ts_sl));
+ spfc_cpu_to_big32(&sqe->ectrl_sl, sizeof(sqe->ectrl_sl));
+ spfc_cpu_to_big32(&sqe->sid, sizeof(sqe->sid));
+ spfc_cpu_to_big32(&sqe->did, sizeof(sqe->did));
+ spfc_cpu_to_big32(&sqe->wqe_gpa, sizeof(sqe->wqe_gpa));
+ spfc_cpu_to_big32(&sqe->db_val, sizeof(sqe->db_val));
+ } else {
+		/* SPFC_TASK_T_TRESP may use the SGE as part of the Task
+		 * Section, so convert the entire SQE to big-endian.
+		 */
+ spfc_cpu_to_big32(sqe, sizeof(struct spfc_sqe_tresp));
+ }
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_cmdqe_common
+ * Function Description : Assemble the Cmdqe Common part.
+ * Input Parameters: union spfc_cmdqe *cmd_qe enum spfc_task_type task_type u16 rxid
+ * Output Parameters : N/A
+ * Return Type: void
+ ****************************************************************************
+ */
+void spfc_build_cmdqe_common(union spfc_cmdqe *cmd_qe, enum spfc_task_type task_type,
+ u16 rxid)
+{
+ cmd_qe->common.wd0.task_type = task_type;
+ cmd_qe->common.wd0.rx_id = rxid;
+ cmd_qe->common.wd0.rsvd0 = 0;
+}
+
+#define SPFC_STANDARD_SIRT_ENABLE (1)
+#define SPFC_STANDARD_SIRT_DISABLE (0)
+#define SPFC_UNKNOWN_ID (0xFFFF)
+
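+/* ****************************************************************************
+ * Function Name : spfc_build_icmnd_wqe_ts_header
+ * Function Description : Fill the Task Section header of the ICmnd WQE: local
+ * exchange ID (hot-pool tag plus exchange base), task type, connection ID and
+ * magic number; the remote exchange ID is not yet known at this point.
+ * Input Parameters : struct unf_frame_pkg *pkg struct spfc_sqe *sqe
+ * u8 task_type u16 exi_base u8 port_idx
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */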
+void spfc_build_icmnd_wqe_ts_header(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
+ u8 task_type, u16 exi_base, u8 port_idx)
+{
+ sqe->ts_sl.local_xid = (u16)UNF_GET_HOTPOOL_TAG(pkg) + exi_base;
+ sqe->ts_sl.task_type = task_type;
+ sqe->ts_sl.wd0.conn_id =
+ (u16)(pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX]);
+
+ sqe->ts_sl.wd0.remote_xid = SPFC_UNKNOWN_ID;
+ sqe->ts_sl.magic_num = UNF_GETXCHGALLOCTIME(pkg);
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_icmnd_wqe_ts
+ * Function Description : Construct the TS Section of the ICmnd
+ * Input Parameters: void *handle struct unf_frame_pkg *pkg
+ * struct spfc_sqe_ts *sqe_ts union spfc_sqe_ts_ex *sqe_tsex
+ * Output Parameters :N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_icmnd_wqe_ts(void *handle, struct unf_frame_pkg *pkg,
+ struct spfc_sqe_ts *sqe_ts, union spfc_sqe_ts_ex *sqe_tsex)
+{
+ struct spfc_sqe_icmnd *icmnd = &sqe_ts->cont.icmnd;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+
+ sqe_ts->cdb_type = 0;
+ memcpy(icmnd->fcp_cmnd_iu, pkg->fcp_cmnd, sizeof(struct unf_fcp_cmnd));
+
+ if (sqe_ts->task_type == SPFC_SQE_FCP_ITMF) {
+ icmnd->info.tmf.w0.bs.reset_exch_start = hba->exi_base;
+ icmnd->info.tmf.w0.bs.reset_exch_end = hba->exi_base + hba->exi_count - 1;
+
+ icmnd->info.tmf.w1.bs.reset_did = UNF_GET_DID(pkg);
+ /* delivers the marker status flag to the microcode. */
+ icmnd->info.tmf.w1.bs.marker_sts = 1;
+ SPFC_GET_RESET_TYPE(UNF_GET_TASK_MGMT_FLAGS(pkg->fcp_cmnd->control),
+ icmnd->info.tmf.w1.bs.reset_type);
+
+ icmnd->info.tmf.w2.bs.reset_sid = UNF_GET_SID(pkg);
+
+ memcpy(icmnd->info.tmf.reset_lun, pkg->fcp_cmnd->lun,
+ sizeof(icmnd->info.tmf.reset_lun));
+ }
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_icmnd_wqe_ctrls
+ * Function Description : Construct the CtrlS Section of the ICmnd; the layout
+ * is the same as that of the TRD/TWR WQE.
+ * Input Parameters: struct unf_frame_pkg *pkg struct spfc_sqe *sqe
+ * Output Parameters: N/A
+ * Return Type: void
+ ****************************************************************************
+ */
+void spfc_build_icmnd_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe)
+{
+ spfc_build_trd_twr_wqe_ctrls(pkg, sqe);
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_srq_wqe_ctrls
+ * Function Description : Construct the CtrlS Section of the SRQ RQE and fill
+ * in its owner and MSN fields.
+ * Input Parameters : struct spfc_rqe *rqe u16 owner u16 pmsn
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_srq_wqe_ctrls(struct spfc_rqe *rqe, u16 owner, u16 pmsn)
+{
+ struct spfc_wqe_ctrl_ch *wqe_ctrls = NULL;
+
+ wqe_ctrls = &rqe->ctrl_sl.ch;
+ wqe_ctrls->wd0.owner = owner;
+ wqe_ctrls->wd0.ctrl_sl = sizeof(struct spfc_wqe_ctrl) >> UNF_SHIFT_3;
+ wqe_ctrls->wd0.csl = 1;
+ wqe_ctrls->wd0.dif_sl = 0;
+ wqe_ctrls->wd0.cr = 1;
+ wqe_ctrls->wd0.df = 0;
+ wqe_ctrls->wd0.va = 0;
+ wqe_ctrls->wd0.tsl = 0;
+ wqe_ctrls->wd0.cf = 0;
+ wqe_ctrls->wd0.wf = 0;
+ wqe_ctrls->wd0.drv_sl = sizeof(struct spfc_rqe_drv) >> UNF_SHIFT_3;
+ wqe_ctrls->wd0.bdsl = sizeof(struct spfc_constant_sge) >> UNF_SHIFT_3;
+
+ rqe->ctrl_sl.wd0.wqe_msn = pmsn;
+ rqe->ctrl_sl.wd0.dump_wqe_msn = rqe->ctrl_sl.wd0.wqe_msn;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_wqe.h b/drivers/scsi/spfc/hw/spfc_wqe.h
new file mode 100644
index 000000000000..ec6d7bbdf8f9
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_wqe.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_WQE_H
+#define SPFC_WQE_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "spfc_hw_wqe.h"
+#include "spfc_parent_context.h"
+
+/* TGT WQE type */
+/* DRV->uCode via Parent SQ */
+#define SPFC_SQE_FCP_TRD SPFC_TASK_T_TREAD
+#define SPFC_SQE_FCP_TWR SPFC_TASK_T_TWRITE
+#define SPFC_SQE_FCP_TRSP SPFC_TASK_T_TRESP
+#define SPFC_SQE_FCP_TACK SPFC_TASK_T_TACK
+#define SPFC_SQE_ELS_CMND SPFC_TASK_T_ELS
+#define SPFC_SQE_ELS_RSP SPFC_TASK_T_ELS_RSP
+#define SPFC_SQE_GS_CMND SPFC_TASK_T_GS
+#define SPFC_SQE_BLS_CMND SPFC_TASK_T_ABTS
+#define SPFC_SQE_FCP_IREAD SPFC_TASK_T_IREAD
+#define SPFC_SQE_FCP_IWRITE SPFC_TASK_T_IWRITE
+#define SPFC_SQE_FCP_ITMF SPFC_TASK_T_ITMF
+#define SPFC_SQE_SESS_RST SPFC_TASK_T_SESS_RESET
+#define SPFC_SQE_FCP_TMF_TRSP SPFC_TASK_T_TMF_RESP
+#define SPFC_SQE_NOP SPFC_TASK_T_NOP
+/* DRV->uCode Via CMDQ */
+#define SPFC_CMDQE_ABTS_RSP SPFC_TASK_T_ABTS_RSP
+#define SPFC_CMDQE_ABORT SPFC_TASK_T_ABORT
+#define SPFC_CMDQE_SESS_DIS SPFC_TASK_T_SESS_DIS
+#define SPFC_CMDQE_SESS_DEL SPFC_TASK_T_SESS_DEL
+
+/* uCode->Drv Via CMD SCQ */
+#define SPFC_SCQE_FCP_TCMND SPFC_TASK_T_RCV_TCMND
+#define SPFC_SCQE_ELS_CMND SPFC_TASK_T_RCV_ELS_CMD
+#define SPFC_SCQE_ABTS_CMD SPFC_TASK_T_RCV_ABTS_CMD
+#define SPFC_SCQE_FCP_IRSP SPFC_TASK_T_IRESP
+#define SPFC_SCQE_FCP_ITMF_RSP SPFC_TASK_T_ITMF_RESP
+
+/* uCode->Drv Via STS SCQ */
+#define SPFC_SCQE_FCP_TSTS SPFC_TASK_T_TSTS
+#define SPFC_SCQE_GS_RSP SPFC_TASK_T_RCV_GS_RSP
+#define SPFC_SCQE_ELS_RSP SPFC_TASK_T_RCV_ELS_RSP
+#define SPFC_SCQE_ABTS_RSP SPFC_TASK_T_RCV_ABTS_RSP
+#define SPFC_SCQE_ELS_RSP_STS SPFC_TASK_T_ELS_RSP_STS
+#define SPFC_SCQE_ABORT_STS SPFC_TASK_T_ABORT_STS
+#define SPFC_SCQE_SESS_EN_STS SPFC_TASK_T_SESS_EN_STS
+#define SPFC_SCQE_SESS_DIS_STS SPFC_TASK_T_SESS_DIS_STS
+#define SPFC_SCQE_SESS_DEL_STS SPFC_TASK_T_SESS_DEL_STS
+#define SPFC_SCQE_SESS_RST_STS SPFC_TASK_T_SESS_RESET_STS
+#define SPFC_SCQE_ITMF_MARKER_STS SPFC_TASK_T_ITMF_MARKER_STS
+#define SPFC_SCQE_ABTS_MARKER_STS SPFC_TASK_T_ABTS_MARKER_STS
+#define SPFC_SCQE_FLUSH_SQ_STS SPFC_TASK_T_FLUSH_SQ_STS
+#define SPFC_SCQE_BUF_CLEAR_STS SPFC_TASK_T_BUFFER_CLEAR_STS
+#define SPFC_SCQE_CLEAR_SRQ_STS SPFC_TASK_T_CLEAR_SRQ_STS
+#define SPFC_SCQE_DIFX_RESULT_STS SPFC_TASK_T_DIFX_RESULT_STS
+#define SPFC_SCQE_XID_FREE_ABORT_STS SPFC_TASK_T_EXCH_ID_FREE_ABORT_STS
+#define SPFC_SCQE_EXCHID_TIMEOUT_STS SPFC_TASK_T_EXCHID_TIMEOUT_STS
+#define SPFC_SQE_NOP_STS SPFC_TASK_T_NOP_STS
+
+#define SPFC_LOW_32_BITS(__addr) ((u32)((u64)(__addr) & 0xffffffff))
+#define SPFC_HIGH_32_BITS(__addr) ((u32)(((u64)(__addr) >> 32) & 0xffffffff))
+
+/* Error Code from SCQ */
+#define SPFC_COMPLETION_STATUS_SUCCESS FC_CQE_COMPLETED
+#define SPFC_COMPLETION_STATUS_ABORTED_SETUP_FAIL FC_IMMI_CMDPKT_SETUP_FAIL
+
+#define SPFC_COMPLETION_STATUS_TIMEOUT FC_ERROR_CODE_E_D_TIMER_EXPIRE
+#define SPFC_COMPLETION_STATUS_DIF_ERROR FC_ERROR_CODE_DATA_DIFX_FAILED
+#define SPFC_COMPLETION_STATUS_DATA_OOO FC_ERROR_CODE_DATA_OOO_RO
+#define SPFC_COMPLETION_STATUS_DATA_OVERFLOW \
+ FC_ERROR_CODE_DATA_EXCEEDS_DATA2TRNS
+
+#define SPFC_SCQE_INVALID_CONN_ID (0xffff)
+#define SPFC_GET_SCQE_TYPE(scqe) ((scqe)->common.ch.wd0.task_type)
+#define SPFC_GET_SCQE_STATUS(scqe) ((scqe)->common.ch.wd0.err_code)
+#define SPFC_GET_SCQE_REMAIN_CNT(scqe) ((scqe)->common.ch.wd0.cqe_remain_cnt)
+#define SPFC_GET_SCQE_CONN_ID(scqe) ((scqe)->common.conn_id)
+#define SPFC_GET_SCQE_SQN(scqe) ((scqe)->common.ch.wd0.sqn)
+#define SPFC_GET_WQE_TYPE(wqe) ((wqe)->ts_sl.task_type)
+
+#define SPFC_WQE_IS_IO(wqe) \
+ ((SPFC_GET_WQE_TYPE(wqe) != SPFC_SQE_SESS_RST) && \
+ (SPFC_GET_WQE_TYPE(wqe) != SPFC_SQE_NOP))
+#define SPFC_SCQE_HAS_ERRCODE(scqe) \
+ (SPFC_GET_SCQE_STATUS(scqe) != SPFC_COMPLETION_STATUS_SUCCESS)
+#define SPFC_SCQE_ERR_TO_CM(scqe) \
+ (SPFC_GET_SCQE_STATUS(scqe) != FC_ELS_GS_RSP_EXCH_CHECK_FAIL)
+#define SPFC_SCQE_EXCH_ABORTED(scqe) \
+ ((SPFC_GET_SCQE_STATUS(scqe) >= \
+ FC_CQE_BUFFER_CLEAR_IO_COMPLETED) && \
+ (SPFC_GET_SCQE_STATUS(scqe) <= FC_CQE_WQE_FLUSH_IO_COMPLETED))
+#define SPFC_SCQE_CONN_ID_VALID(scqe) \
+ (SPFC_GET_SCQE_CONN_ID(scqe) != SPFC_SCQE_INVALID_CONN_ID)
+
+/*
+ * checksum error bitmap define
+ */
+#define NIC_RX_CSUM_HW_BYPASS_ERR (1)
+#define NIC_RX_CSUM_IP_CSUM_ERR (1 << 1)
+#define NIC_RX_CSUM_TCP_CSUM_ERR (1 << 2)
+#define NIC_RX_CSUM_UDP_CSUM_ERR (1 << 3)
+#define NIC_RX_CSUM_SCTP_CRC_ERR (1 << 4)
+
+#define SPFC_WQE_SECTION_CHUNK_SIZE 8 /* 8 bytes' chunk */
+#define SPFC_T_RESP_WQE_CTR_TSL_SIZE 15 /* 8 bytes' chunk */
+#define SPFC_T_RD_WR_WQE_CTR_TSL_SIZE 9 /* 8 bytes' chunk */
+#define SPFC_T_RD_WR_WQE_CTR_BDSL_SIZE 4 /* 8 bytes' chunk */
+#define SPFC_T_RD_WR_WQE_CTR_CTRLSL_SIZE 1 /* 8 bytes' chunk */
+
+#define SPFC_WQE_MAX_ESGE_NUM 3 /* 3 ESGE In Extended wqe */
+#define SPFC_WQE_SGE_ENTRY_NUM 2 /* BD SGE and DIF SGE count */
+#define SPFC_WQE_SGE_DIF_ENTRY_NUM 1 /* DIF SGE count */
+#define SPFC_WQE_SGE_LAST_FLAG 1
+#define SPFC_WQE_SGE_NOT_LAST_FLAG 0
+#define SPFC_WQE_SGE_EXTEND_FLAG 1
+#define SPFC_WQE_SGE_NOT_EXTEND_FLAG 0
+
+#define SPFC_FCP_TMF_PORT_RESET (0)
+#define SPFC_FCP_TMF_LUN_RESET (1)
+#define SPFC_FCP_TMF_TGT_RESET (2)
+#define SPFC_FCP_TMF_RSVD (3)
+
+#define SPFC_ADJUST_DATA(old_va, new_va) \
+ { \
+ (old_va) = new_va; \
+ }
+
+#define SPFC_GET_RESET_TYPE(tmf_flag, reset_flag) \
+ { \
+ switch (tmf_flag) { \
+ case UNF_FCP_TM_ABORT_TASK_SET: \
+ case UNF_FCP_TM_LOGICAL_UNIT_RESET: \
+ (reset_flag) = SPFC_FCP_TMF_LUN_RESET; \
+ break; \
+ case UNF_FCP_TM_TARGET_RESET: \
+ (reset_flag) = SPFC_FCP_TMF_TGT_RESET; \
+ break; \
+ case UNF_FCP_TM_CLEAR_TASK_SET: \
+ (reset_flag) = SPFC_FCP_TMF_PORT_RESET; \
+ break; \
+ default: \
+ (reset_flag) = SPFC_FCP_TMF_RSVD; \
+ } \
+ }
+
+/* Link WQE structure */
+struct spfc_linkwqe {
+ union {
+ struct {
+ u32 rsv1 : 14;
+ u32 wf : 1;
+ u32 rsv2 : 14;
+ u32 ctrlsl : 2;
+ u32 o : 1;
+ } wd0;
+
+ u32 val_wd0;
+ };
+
+ union {
+ struct {
+ u32 msn : 16;
+ u32 dump_msn : 15;
+ u32 lp : 1; /* lp means whether O bit is overturn */
+ } wd1;
+
+ u32 val_wd1;
+ };
+
+ u32 next_page_addr_hi;
+ u32 next_page_addr_lo;
+};
+
+/* Session Enable */
+struct spfc_host_keys {
+ struct {
+ u32 smac1 : 8;
+ u32 smac0 : 8;
+ u32 rsv : 16;
+ } wd0;
+
+ u8 smac[ARRAY_INDEX_4];
+
+ u8 dmac[ARRAY_INDEX_4];
+ struct {
+ u8 sid_1;
+ u8 sid_2;
+ u8 dmac_rvd[ARRAY_INDEX_2];
+ } wd3;
+ struct {
+ u8 did_0;
+ u8 did_1;
+ u8 did_2;
+ u8 sid_0;
+ } wd4;
+
+ struct {
+ u32 port_id : 3;
+ u32 host_id : 2;
+ u32 rsvd : 27;
+ } wd5;
+ u32 rsvd;
+};
+
+/* Parent SQ WQE Related function */
+void spfc_build_service_wqe_ctrl_section(struct spfc_wqe_ctrl *wqe_cs, u32 ts_size,
+ u32 bdsl);
+void spfc_build_service_wqe_ts_common(struct spfc_sqe_ts *sqe_ts, u32 rport_index,
+ u16 local_xid, u16 remote_xid,
+ u16 data_len);
+void spfc_build_els_gs_wqe_sge(struct spfc_sqe *sqe, void *buf_addr, u64 phy_addr,
+ u32 buf_len, u32 xid, void *handle);
+void spfc_build_els_wqe_ts_req(struct spfc_sqe *sqe, void *info, u32 scqn,
+ void *frame_pld, struct unf_frame_pkg *pkg);
+void spfc_build_els_wqe_ts_rsp(struct spfc_sqe *sqe, void *info,
+ struct unf_frame_pkg *pkg, void *frame_pld,
+ u16 type, u16 cmnd);
+void spfc_build_bls_wqe_ts_req(struct spfc_sqe *sqe, struct unf_frame_pkg *pkg,
+ void *handle);
+void spfc_build_trd_twr_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe);
+void spfc_build_wqe_owner_pmsn(struct spfc_sqe *io_sqe, u16 owner, u16 pmsn);
+void spfc_convert_parent_wqe_to_big_endian(struct spfc_sqe *sqe);
+void spfc_build_icmnd_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe);
+void spfc_build_icmnd_wqe_ts(void *handle, struct unf_frame_pkg *pkg,
+ struct spfc_sqe_ts *sqe_ts, union spfc_sqe_ts_ex *sqe_tsex);
+void spfc_build_icmnd_wqe_ts_header(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
+ u8 task_type, u16 exi_base, u8 port_idx);
+
+void spfc_build_cmdqe_common(union spfc_cmdqe *cmd_qe, enum spfc_task_type task_type,
+ u16 rxid);
+void spfc_build_srq_wqe_ctrls(struct spfc_rqe *rqe, u16 owner, u16 pmsn);
+void spfc_build_common_wqe_ctrls(struct spfc_wqe_ctrl *ctrl_sl, u8 task_len);
+void spfc_build_tmf_rsp_wqe_ts_header(struct unf_frame_pkg *pkg,
+ struct spfc_sqe_tmf_rsp *sqe, u16 exi_base,
+ u32 scqn);
+
+#endif
--
2.30.0
1
0

[PATCH openEuler-21.09 2/2] tools: add a tool to calculate the CPU utilization rate
by Hongyu Li 13 Oct '21
13 Oct '21
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CIJQ
CVE: NA
----------------------------------------------------------------------
This tool can help calculate the CPU utilization rate in higher precision.
Signed-off-by: Hongyu Li <543306408(a)qq.com>
---
tools/accounting/Makefile | 2 +-
tools/accounting/idle_cal.c | 91 +++++++++++++++++++++++++++++++++++++
2 files changed, 92 insertions(+), 1 deletion(-)
create mode 100644 tools/accounting/idle_cal.c
diff --git a/tools/accounting/Makefile b/tools/accounting/Makefile
index 03687f19cbb1..d14151e28173 100644
--- a/tools/accounting/Makefile
+++ b/tools/accounting/Makefile
@@ -2,7 +2,7 @@
CC := $(CROSS_COMPILE)gcc
CFLAGS := -I../../usr/include
-PROGS := getdelays
+PROGS := getdelays idle_cal
all: $(PROGS)
diff --git a/tools/accounting/idle_cal.c b/tools/accounting/idle_cal.c
new file mode 100644
index 000000000000..6d621b37c70e
--- /dev/null
+++ b/tools/accounting/idle_cal.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * idle_cal.c
+ *
+ * Copyright (C) 2021
+ *
+ * CPU idle time accounting
+ */
+
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <fcntl.h>
+#include <string.h>
+#include <unistd.h>
+#include <time.h>
+#include <limits.h>
+#include <sys/time.h>
+
+#define BUFFSIZE 4096
+#define HZ 100
+#define FILE_NAME "/proc/stat2"
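+
+/*
+ * Assumed layout of /proc/stat2, inferred from the parsing below: one
+ * aggregate line followed by one line per online CPU, each of the form
+ * "<name> <idle_ticks>", where ticks advance in units of 1/HZ seconds.
+ */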
+
+struct cpu_info {
+ char name[BUFFSIZE];
+ long long value[1];
+};
+
+int main(void)
+{
+ int cpu_number = sysconf(_SC_NPROCESSORS_ONLN);
+	/* one extra slot for the aggregate line that precedes the per-CPU lines */
+	struct cpu_info *cpus = (struct cpu_info *)malloc(sizeof(struct cpu_info)*(cpu_number+1));
+	struct cpu_info *cpus_2 = (struct cpu_info *)malloc(sizeof(struct cpu_info)*(cpu_number+1));
+
+ char buf[BUFFSIZE];
+ long long sub;
+ double value;
+
+ while (1) {
+ FILE *fp = fopen(FILE_NAME, "r");
+ int i = 0;
+ struct timeval start, end;
+
+
+ while (i < cpu_number+1) {
+ int n = fscanf(fp, "%s %lld\n", cpus[i].name, &cpus[i].value[0]);
+
+ if (n < 0) {
+				printf("failed to parse %s\n", FILE_NAME);
+ return -1;
+ }
+ i += 1;
+ }
+
+ gettimeofday(&start, NULL);
+ fflush(fp);
+ fclose(fp);
+ i = 0;
+
+ sleep(1);
+
+ FILE *fp_2 = fopen(FILE_NAME, "r");
+
+ while (i < cpu_number+1) {
+ int n = fscanf(fp_2, "%s %lld\n", cpus_2[i].name, &cpus_2[i].value[0]);
+
+ if (n < 0) {
+				printf("failed to parse %s\n", FILE_NAME);
+ return -1;
+ }
+ i += 1;
+ }
+
+ gettimeofday(&end, NULL);
+		fflush(fp_2);
+ fclose(fp_2);
+
+ sub = end.tv_sec-start.tv_sec;
+ value = sub*1000000.0+end.tv_usec-start.tv_usec;
+ system("reset");
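+		/*
+		 * Idle ticks advance in units of 1/HZ seconds; scale the tick
+		 * delta to microseconds and divide by the elapsed wall-clock
+		 * time in microseconds to obtain a ratio.
+		 */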
+ printf("CPU idle rate %f\n", 1000000/HZ*(cpus_2[0].value[0]-cpus[0].value[0])
+ /value);
+
+ for (int i = 1; i < cpu_number+1; i++) {
+ printf("CPU%d idle rate %f\n", i-1, 1-1000000/HZ
+ *(cpus_2[i].value[0]-cpus[i].value[0])/value);
+ }
+ }
+ return 0;
+}
+
--
2.17.1
1
0