Kernel
Threads by month
- ----- 2025 -----
- September
- August
- July
- June
- May
- April
- March
- February
- January
- ----- 2024 -----
- December
- November
- October
- September
- August
- July
- June
- May
- April
- March
- February
- January
- ----- 2023 -----
- December
- November
- October
- September
- August
- July
- June
- May
- April
- March
- February
- January
- ----- 2022 -----
- December
- November
- October
- September
- August
- July
- June
- May
- April
- March
- February
- January
- ----- 2021 -----
- December
- November
- October
- September
- August
- July
- June
- May
- April
- March
- February
- January
- ----- 2020 -----
- December
- November
- October
- September
- August
- July
- June
- May
- April
- March
- February
- January
- ----- 2019 -----
- December
- 43 participants
- 20353 discussions
crypto bugfix and fix kabi broken.
Herbert Xu (1):
crypto: api - Use work queue in crypto_destroy_instance
Yi Yang (1):
crypto: fix kabi broken in struct crypto_instance
crypto/algapi.c | 29 +++++++++++++++++++++++++++--
include/crypto/algapi.h | 5 +++++
2 files changed, 32 insertions(+), 2 deletions(-)
--
2.25.1
2
3
fix crypto bugfix and fix kabi broken.
Herbert Xu (1):
crypto: api - Use work queue in crypto_destroy_instance
Yi Yang (1):
crypto: fix kabi broken in struct crypto_instance
crypto/algapi.c | 29 +++++++++++++++++++++++++++--
include/crypto/algapi.h | 6 ++++++
2 files changed, 33 insertions(+), 2 deletions(-)
--
2.25.1
2
3

[PATCH OLK-5.10 0/2] ext4: mitigatin cacheline false sharing in struct ext4_inode_info
by Zeng Heng 23 Nov '23
by Zeng Heng 23 Nov '23
23 Nov '23
Yang Yingliang (2):
ext4: mitigatin cacheline false sharing in struct ext4_inode_info
enable MITIGATION_FALSE_SHARING by default
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
fs/ext4/Kconfig | 9 +++++++++
fs/ext4/ext4.h | 5 +++++
fs/ext4/super.c | 4 ++++
5 files changed, 20 insertions(+)
--
2.25.1
2
4
Backport linux-6.6.2 LTS patches from upstream.
git cherry-pick v6.6.1..v6.6.2~1 -s
No conflicts.
Aananth V (1):
tcp: call tcp_try_undo_recovery when an RTOd TFO SYNACK is ACKed
Aaron Plattner (1):
objtool: Propagate early errors
Abel Vesa (2):
clk: imx: Select MXC_CLK for CLK_IMX8QXP
regulator: qcom-rpmh: Fix smps4 regulator for pm8550ve
Adam Dunlap (1):
x86/sev-es: Allow copy_from_kernel_nofault() in earlier boot
Adam Ford (3):
ARM: dts: am3517-evm: Fix LED3/4 pinmux
arm64: dts: imx8mm: Add sound-dai-cells to micfil node
arm64: dts: imx8mn: Add sound-dai-cells to micfil node
Adam Guerin (1):
crypto: qat - enable dc chaining service
Adam Skladowski (1):
arm64: dts: qcom: msm8976: Fix ipc bit shifts
Aditya Gupta (1):
powerpc/vmcore: Add MMU information to vmcoreinfo
Aditya Kumar Singh (1):
wifi: ath11k: fix Tx power value during active CAC
Alex Deucher (2):
drm/amdgpu/gfx10,11: use memcpy_to/fromio for MQDs
drm/amdgpu: don't put MQDs in VRAM on ARM | ARM64
Alexander Aring (4):
dlm: fix creating multiple node structures
dlm: fix remove member after close call
dlm: be sure we reset all nodes at forced shutdown
dlm: fix no ack after final message
Alexandre Ghiti (2):
libbpf: Fix syscall access arguments on riscv
drivers: perf: Do not broadcast to other cpus when starting a counter
Alison Schofield (5):
x86/numa: Introduce numa_fill_memblks()
ACPI/NUMA: Apply SRAT proximity domain to entire CFMWS window
cxl/region: Prepare the decoder match range helper for reuse
cxl/region: Calculate a target position in a region interleave
cxl/region: Use cxl_calc_interleave_pos() for auto-discovery
Amit Kumar Mahapatra (1):
spi: spi-zynq-qspi: add spi-mem to driver kconfig dependencies
Andre Przywara (1):
clocksource/drivers/arm_arch_timer: limit XGene-1 workaround
Andrea Righi (2):
module/decompress: use vmalloc() for gzip decompression workspace
module/decompress: use kvmalloc() consistently
Andrii Staikov (1):
i40e: fix potential memory leaks in i40e_remove()
Andy Shevchenko (2):
ACPI: property: Allow _DSD buffer data only for byte accessors
interconnect: qcom: osm-l3: Replace custom implementation of
COUNT_ARGS()
AngeloGioacchino Del Regno (1):
drm: mediatek: mtk_dsi: Fix NO_EOT_PACKET settings/handling
Aniruddha Paul (1):
ice: Fix VF-VF filter rules in switchdev mode
Anuj Gupta (1):
nvme: fix error-handling for io_uring nvme-passthrough
Anup Patel (2):
irqchip/sifive-plic: Fix syscore registration for multi-socket systems
RISC-V: Don't fail in riscv_of_parent_hartid() for disabled HARTs
Aradhya Bhatia (1):
arm64: dts: ti: Fix HDMI Audio overlay in Makefile
Armin Wolf (4):
platform/x86: wmi: Fix probe failure when failing to register WMI
devices
platform/x86: wmi: Fix opening of char device
hwmon: (sch5627) Use bit macros when accessing the control register
hwmon: (sch5627) Disallow write access if virtual registers are locked
Arnaldo Carvalho de Melo (1):
perf build: Add missing comment about NO_LIBTRACEEVENT=1
Arnd Bergmann (1):
fbdev: fsl-diu-fb: mark wr_reg_wa() static
Artem Savkov (1):
selftests/bpf: Skip module_fentry_shadow test when bpf_testmod is not
available
Avraham Stern (2):
wifi: iwlwifi: mvm: update station's MFP flag after association
wifi: iwlwifi: mvm: fix removing pasn station for responder
Baochen Qiang (2):
wifi: ath12k: fix DMA unmap warning on NULL DMA address
wifi: ath11k: fix boot failure with one MSI vector
Bard Liao (1):
ASoC: Intel: sof_sdw_rt_sdca_jack_common: add rt713 support
Bartosz Golaszewski (1):
gpio: sim: initialize a managed pointer when declaring it
Basavaraj Natikar (1):
xhci: Loosen RPM as default policy to cover for AMD xHC 1.1
Ben Wolsieffer (2):
futex: Don't include process MM in futex key on no-MMU
regmap: prevent noinc writes from clobbering cache
Benjamin Gaignard (1):
media: verisilicon: Fixes clock list for rk3588 av1 decoder
Benjamin Gray (1):
powerpc/xive: Fix endian conversion size
Biju Das (1):
pinctrl: renesas: rzg2l: Make reverse order of enable() for disable()
Binbin Wu (1):
selftests/x86/lam: Zero out buffer for readlink()
Björn Töpel (2):
selftests/bpf: Define SYS_PREFIX for riscv
selftests/bpf: Define SYS_NANOSLEEP_KPROBE_NAME for riscv
Bo Jiao (1):
wifi: mt76: fix potential memory leak of beacon commands
Brad Griffis (2):
arm64: tegra: Fix P3767 card detect polarity
arm64: tegra: Fix P3767 QSPI speed
Brett Creeley (1):
iavf: Fix promiscuous mode configuration flow messages
Cezary Rojewski (1):
ASoC: Intel: Skylake: Fix mem leak when parsing UUIDs fails
Chancel Liu (1):
ASoC: soc-pcm.c: Make sure DAI parameters cleared if the DAI becomes
inactive
Chao Yu (5):
f2fs: compress: fix deadloop in f2fs_write_cache_pages()
f2fs: compress: fix to avoid use-after-free on dic
f2fs: compress: fix to avoid redundant compress extension
f2fs: fix to drop meta_inode's page cache in f2fs_put_super()
f2fs: fix to initialize map.m_pblk in f2fs_precache_extents()
Charles Keepax (1):
ASoC: intel: sof_sdw: Stop processing CODECs when enough are found
Chen Ni (1):
libnvdimm/of_pmem: Use devm_kstrdup instead of kstrdup and check its
return value
Chen Yu (1):
genirq/matrix: Exclude managed interrupts in irq_matrix_allocated()
Chen-Yu Tsai (2):
regulator: mt6358: Fail probe on unknown chip ID
dt-bindings: mfd: mt6397: Split out compatible for MediaTek MT6366
PMIC
Chengchang Tang (3):
RDMA/hns: Fix printing level of asynchronous events
RDMA/hns: Fix uninitialized ucmd in hns_roce_create_qp_common()
RDMA/hns: Fix signed-unsigned mixed comparisons
Chengming Zhou (1):
sched/fair: Fix cfs_rq_is_decayed() on !SMP
Chris Packham (1):
ARM64: dts: marvell: cn9310: Use appropriate label for spi1 pins
Christophe JAILLET (14):
wifi: ath: dfs_pattern_detector: Fix a memory initialization issue
ACPI: sysfs: Fix create_pnp_modalias() and create_of_modalias()
regmap: debugfs: Fix a erroneous check after snprintf()
clk: imx: imx8: Fix an error handling path in
clk_imx_acm_attach_pm_domains()
clk: imx: imx8: Fix an error handling path if
devm_clk_hw_register_mux_parent_data_table() fails
clk: imx: imx8: Fix an error handling path in imx8_acm_clk_probe()
accel/habanalabs/gaudi2: Fix incorrect string length computation in
gaudi2_psoc_razwi_get_engines()
drm/rockchip: cdn-dp: Fix some error handling paths in cdn_dp_probe()
crypto: hisilicon/hpre - Fix a erroneous check after snprintf()
fs: dlm: Fix the size of a buffer in dlm_create_debug_file()
leds: trigger: ledtrig-cpu:: Fix 'output may be truncated' issue for
'cpu'
dmaengine: pxa_dma: Remove an erroneous BUG_ON() in pxad_free_desc()
media: i2c: max9286: Fix some redundant of_node_put() calls
fs: dlm: Simplify buffer size computation in dlm_create_debug_file()
Christophe Leroy (2):
powerpc: Only define __parse_fpscr() when required
powerpc/40x: Remove stale PTE_ATOMIC_UPDATES macro
Claudiu Beznea (5):
clk: renesas: rzg2l: Wait for status bit of SD mux before continuing
clk: renesas: rzg2l: Lock around writes to mux register
clk: renesas: rzg2l: Trust value returned by hardware
clk: renesas: rzg2l: Use FIELD_GET() for PLL register fields
clk: renesas: rzg2l: Fix computation formula
Clément Léger (1):
scripts/gdb: fix usage of MOD_TEXT not defined when CONFIG_MODULES=n
Colin Ian King (1):
rtla: Fix uninitialized variable found
Cong Liu (1):
drm/amd/display: Fix null pointer dereference in error message
Conor Dooley (1):
riscv: dts: allwinner: remove address-cells from intc node
Cristian Ciocaltea (8):
ASoC: cs35l41: Handle mdsync_down reg write errors
ASoC: cs35l41: Handle mdsync_up reg write errors
ASoC: cs35l41: Initialize completion object before requesting IRQ
ASoC: cs35l41: Fix broken shared boost activation
ASoC: cs35l41: Verify PM runtime resume errors in IRQ handler
ASoC: cs35l41: Undo runtime PM changes at driver exit time
ALSA: hda: cs35l41: Fix unbalanced pm_runtime_get()
ALSA: hda: cs35l41: Undo runtime PM changes at driver exit time
D. Wythe (3):
net/smc: fix dangling sock under state SMC_APPFINCLOSEWAIT
net/smc: allow cdc msg send rather than drop it with NULL sndbuf_desc
net/smc: put sk reference if close work was canceled
Dan Carpenter (13):
thermal: core: prevent potential string overflow
clk: keystone: pll: fix a couple NULL vs IS_ERR() checks
clk: ti: fix double free in of_ti_divider_clk_setup()
clk: mediatek: fix double free in mtk_clk_register_pllfh()
drm/rockchip: Fix type promotion bug in rockchip_gem_iommu_map()
PCI: endpoint: Fix double free in __pci_epc_create()
dmaengine: ti: edma: handle irq_of_parse_and_map() errors
media: ov13b10: Fix some error checking in probe
Input: synaptics-rmi4 - fix use after free in
rmi_unregister_function()
watchdog: marvell_gti_wdt: Fix error code in probe()
hsr: Prevent use after free in prp_create_tagged_frame()
fbdev: imsttfb: fix double free in probe()
fbdev: imsttfb: fix a resource leak in probe
Dan Williams (10):
cxl/pci: Remove unnecessary device reference management in sanitize
work
cxl/pci: Cleanup 'sanitize' to always poll
cxl/pci: Remove inconsistent usage of dev_err_probe()
cxl/pci: Clarify devm host for memdev relative setup
cxl/pci: Fix sanitize notifier setup
cxl/memdev: Fix sanitize vs decoder setup locking
cxl/mem: Fix shutdown order
virt: sevguest: Fix passing a stack buffer as a scatterlist target
cxl/port: Fix @host confusion in cxl_dport_setup_regs()
cxl/hdm: Remove broken error path
Daniel Mentz (1):
scsi: ufs: core: Leave space for '\0' in utf8 desc string
Danila Tikhonov (1):
clk: qcom: gcc-sm8150: Fix gcc_sdcc2_apps_clk_src
Danny Kaehn (2):
hid: cp2112: Fix duplicate workqueue initialization
hid: cp2112: Fix IRQ shutdown stopping polling for all IRQs on chip
Dario Binacchi (1):
ARM: dts: stm32: stm32f7-pinctrl: don't use multiple blank lines
Dave Ertman (1):
ice: Fix SRIOV LAG disable on non-compliant aggregate
David Heidelberg (2):
arm64: dts: qcom: sdm845: Fix PSCI power domain names
arm64: dts: qcom: sdm845: cheza doesn't support LMh node
David Howells (2):
iov_iter, x86: Be consistent about the __user tag on copy_mc_to_user()
rxrpc: Fix two connection reaping bugs
Devi Priya (1):
clk: qcom: clk-rcg2: Fix clock rate overflow for high parent
frequencies
Dhruva Gole (1):
firmware: ti_sci: Mark driver as non removable
Dinghao Liu (2):
mfd: dln2: Fix double put in dln2_probe
i3c: Fix potential refcount leak in i3c_master_register_new_i3c_devs
Diogo Ivo (1):
net: ti: icss-iep: fix setting counter value
Dirk Behme (1):
clk: renesas: rcar-gen3: Extend SDnH divider table
Dmitry Antipov (1):
wifi: rtlwifi: fix EDCA limit set by BT coexistence
Dmitry Baryshkov (8):
drm/bridge: lt9611uxc: fix the race in the error path
drm/msm/dsi: use msm_gem_kernel_put to free TX buffer
drm/msm/dsi: free TX buffer in unbind
arm64: dts: qcom: sc7280: link usb3_phy_wrapper_gcc_usb30_pipe_clk
arm64: dts: qcom: sm8150: add ref clock to PCIe PHYs
arm64: dts: qcom: sm8350: fix pinctrl for UART18
arm64: dts: qcom: sdm845-mtp: fix WiFi configuration
soc: qcom: pmic_glink: fix connector type to be DisplayPort
Dominique Martinet (1):
Revert "mmc: core: Capture correct oemid-bits for eMMC cards"
Doug Berger (1):
rtc: brcmstb-waketimer: support level alarm_irq
Douglas Anderson (1):
drm: Call drm_atomic_helper_shutdown() at shutdown/remove time for
misc drivers
Dragos Bogdan (1):
hwmon: (axi-fan-control) Fix possible NULL pointer dereference
Emmanuel Grumbach (1):
wifi: iwlwifi: honor the enable_ini value
Eric Dumazet (19):
udp: introduce udp->udp_flags
udp: move udp->no_check6_tx to udp->udp_flags
udp: move udp->no_check6_rx to udp->udp_flags
udp: move udp->gro_enabled to udp->udp_flags
udp: add missing WRITE_ONCE() around up->encap_rcv
udp: move udp->accept_udp_{l4|fraglist} to udp->udp_flags
udp: lockless UDP_ENCAP_L2TPINUDP / UDP_GRO
udp: annotate data-races around udp->encap_type
udplite: remove UDPLITE_BIT
udplite: fix various data-races
tcp_metrics: add missing barriers on delete
tcp_metrics: properly set tp->snd_ssthresh in tcp_init_metrics()
tcp_metrics: do not create an entry from tcp_init_metrics()
chtls: fix tp->rcv_tstamp initialization
tcp: fix cookie_init_timestamp() overflows
virtio_net: use u64_stats_t infra to avoid data-races
net: add DEV_STATS_READ() helper
ipvlan: properly track tx_errors
inet: shrink struct flowi_common
Erik Kurzinger (1):
drm/syncobj: fix DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE
Eugen Hristev (1):
ASoC: mediatek: mt8186_mt6366_rt1019_rt5682s: trivial: fix error
messages
Fabio Estevam (2):
arm64: dts: imx8qm-ss-img: Fix jpegenc compatible entry
arm64: dts: imx8mp-debix-model-a: Remove USB hub reset-gpios
Fei Shao (1):
media: mtk-jpegenc: Fix bug in JPEG encode quality selection
Felipe Negrelli Wolter (1):
wifi: wfx: fix case where rates are out of order
Felix Fietkau (4):
wifi: mt76: mt7603: rework/fix rx pse hang check
wifi: mt76: mt7603: improve watchdog reset reliablity
wifi: mt76: mt7603: improve stuck beacon handling
wifi: mt76: remove unused error path in mt76_connac_tx_complete_skb
Fenghua Yu (1):
dmaengine: idxd: Register dsa_bus_type before registering idxd
sub-drivers
Filipe Manana (1):
btrfs: use u64 for buffer sizes in the tree search ioctls
Filippo Storniolo (1):
vsock/virtio: remove socket from connected/bound list on shutdown
Florent Revest (1):
kselftest: vm: fix mdwe's mmap_FIXED test case
Florian Fainelli (1):
pwm: brcmstb: Utilize appropriate clock APIs in suspend/resume
Florian Westphal (1):
netfilter: nat: fix ipv6 nat redirect with mapped and scoped addresses
Francesco Dolcini (1):
arm64: dts: ti: verdin-am62: disable MIPI DSI bridge
Frederic Weisbecker (1):
srcu: Fix callbacks acceleration mishandling
Furong Xu (1):
net: stmmac: xgmac: Enable support for multiple Flexible PPS outputs
Gabriel Krisman Bertazi (2):
io_uring/kbuf: Fix check of BID wrapping in provided buffers
io_uring/kbuf: Allow the full buffer id space for provided buffers
Gao Xiang (1):
erofs: fix erofs_insert_workgroup() lockref usage
Gaurav Jain (2):
crypto: caam/qi2 - fix Chacha20 + Poly1305 self test failure
crypto: caam/jr - fix Chacha20 + Poly1305 self test failure
Gaurav Kohli (2):
arm64: dts: qcom: msm8916: Fix iommu local address range
arm64: dts: qcom: msm8939: Fix iommu local address range
Geert Uytterhoeven (4):
drm/ssd130x: Fix screen clearing
ARM: dts: renesas: blanche: Fix typo in GP_11_2 pin name
sh: bios: Revive earlyprintk support
riscv: boot: Fix creation of loader.bin
Geetha sowjanya (1):
octeontx2-pf: Free pending and dropped SQEs
Geliang Tang (2):
selftests: mptcp: run userspace pm tests slower
selftests: mptcp: fix wait_rm_addr/sf parameters
George Kennedy (1):
IB/mlx5: Fix init stage error handling to avoid double free of same QP
and UAF
George Shuklin (1):
tg3: power down device only on SYSTEM_POWER_OFF
Georgia Garcia (1):
apparmor: fix invalid reference on profile->disconnected
Giovanni Cabiddu (10):
crypto: qat - fix state machines cleanup paths
crypto: qat - ignore subsequent state up commands
crypto: qat - fix unregistration of crypto algorithms
crypto: qat - fix unregistration of compression algorithms
crypto: qat - increase size of buffers
crypto: qat - consolidate services structure
crypto: qat - refactor fw config related functions
crypto: qat - use masks for AE groups
crypto: qat - fix ring to service map for QAT GEN4
crypto: qat - fix deadlock in backlog processing
Gou Hao (1):
ext4: move 'ix' sanity check to corrent position
Guenter Roeck (2):
Revert "hwmon: (sch56xx-common) Add DMI override table"
Revert "hwmon: (sch56xx-common) Add automatic module loading on
supported devices"
Guoniu.zhou (1):
media: ov5640: fix vblank unchange issue when work at dvp mode
Gustavo A. R. Silva (7):
gve: Use size_add() in call to struct_size()
mlxsw: Use size_mul() in call to struct_size()
tls: Use size_add() in call to struct_size()
tipc: Use size_add() in calls to struct_size()
net: spider_net: Use size_add() in call to struct_size()
RDMA/core: Use size_{add,sub,mul}() in calls to struct_size()
ASoC: SOF: ipc4-topology: Use size_add() in call to struct_size()
Han Xu (1):
spi: nxp-fspi: use the correct ioremap function
Hangbin Liu (1):
selftests: pmtu.sh: fix result checking
Hangyu Hua (1):
9p/net: fix possible memory leak in p9_check_errors()
Hans Verkuil (1):
media: dvb-usb-v2: af9035: fix missing unlock
Hans de Goede (4):
HID: logitech-hidpp: Don't restart IO, instead defer hid_connect()
only
HID: logitech-hidpp: Revert "Don't restart communication if not
necessary"
HID: logitech-hidpp: Move get_wireless_feature_index() check to
hidpp_connect_event()
mfd: arizona-spi: Set pdata.hpdet_channel for ACPI enumerated devs
Hao Chen (1):
drivers/perf: hisi: use cpuhp_state_remove_instance_nocalls() for
hisi_hns3_pmu uninit process
Harald Freudenberger (1):
s390/ap: re-init AP queues on config on
Haren Myneni (1):
powerpc/vas: Limit open window failure messages in log bufffer
Harshit Mogalapalli (2):
hte: tegra: Fix missing error code in tegra_hte_test_probe()
drm/loongson: Fix error handling in lsdc_pixel_pll_setup()
Harshitha Prem (1):
wifi: ath12k: fix undefined behavior with __fls in dp
Hayes Wang (1):
r8152: break the loop when the budget is exhausted
Heiner Kallweit (3):
r8169: fix rare issue with broken rx after link-down on RTL8125
r8169: respect userspace disabling IFF_MULTICAST
Revert "PCI/ASPM: Disable only ASPM_STATE_L1 when driver, disables L1"
Heng Qi (4):
virtio-net: fix mismatch of getting tx-frames
virtio-net: consistently save parameters for per-queue
virtio-net: fix per queue coalescing parameter setting
virtio-net: fix the vq coalescing setting for vq resize
Herbert Xu (2):
KEYS: Include linux/errno.h in linux/verification.h
certs: Break circular dependency when selftest is modular
Herve Codina (1):
mfd: core: Ensure disabled devices are skipped without aborting
Hou Tao (1):
bpf: Check map->usercnt after timer->timer is assigned
Howard Hsu (2):
wifi: mt76: mt7996: fix beamform mcu cmd configuration
wifi: mt76: mt7996: fix beamformee ss subfield in EHT PHY cap
Hui Wang (1):
ASoC: fsl-asoc-card: Add comment for mclk in the codec_priv
Ian Rogers (9):
perf stat: Fix aggr mode initialization
perf parse-events: Fix tracepoint name memory leak
perf parse-events: Fix for term values that are raw events
perf mem-events: Avoid uninitialized read
perf machine: Avoid out of bounds LBR memory read
libperf rc_check: Make implicit enabling work for GCC
perf hist: Add missing puts to hist__account_cycles
perf vendor events intel: Fix broadwellde tma_info_system_dram_bw_use
metric
perf vendor events intel: Add broadwellde two metrics
Ilan Peer (4):
wifi: mac80211: Fix setting vif links
wifi: iwlwifi: mvm: Correctly set link configuration
wifi: iwlwifi: mvm: Fix key flags for IGTK on AP interface
wifi: iwlwifi: mvm: Don't always bind/link the P2P Device interface
Ilkka Koskinen (2):
perf vendor events arm64: Fix for AmpereOne metrics
arm64/arm: arm_pmuv3: perf: Don't truncate 64-bit registers
Ilpo Järvinen (2):
selftests/resctrl: Ensure the benchmark commands fits to its array
PCI: vmd: Correct PCI Header Type Register's multi-function check
Irui Wang (1):
media: mediatek: vcodec: Handle invalid encoder vsi
Iulia Tanasescu (1):
Bluetooth: ISO: Pass BIG encryption info through QoS
Ivaylo Dimitrov (1):
drivers/clocksource/timer-ti-dm: Don't call clk_get_rate() in stop
function
Jacob Keller (1):
ice: fix pin assignment for E810-T without SMA control
Jai Luthra (2):
drm: bridge: it66121: Fix invalid connector dereference
arm64: dts: ti: k3-am62a7-sk: Drop i2c-1 to 100Khz
Jason Gunthorpe (1):
iommufd: Add iopt_area_alloc()
Jason-JH.Lin (4):
drm/mediatek: Fix coverity issue with unintentional integer overflow
drm/mediatek: Add mmsys_dev_num to mt8188 vdosys0 driver data
drm/mediatek: Fix iommu fault by swapping FBs after updating plane
state
drm/mediatek: Fix iommu fault during crtc enabling
Javier Carrasco (1):
rtc: pcf85363: fix wrong mask/val parameters in regmap_update_bits
call
Jens Axboe (1):
io_uring/net: ensure socket is marked connected on connect retry
Jernej Skrabec (1):
media: cedrus: Fix clock/reset sequence
Jerome Brunet (2):
ASoC: hdmi-codec: register hpd callback on component probe
ASoC: dapm: fix clock get name
Jia-Ju Bai (1):
usb: dwc2: fix possible NULL pointer dereference caused by driver
concurrency
Jian Shen (1):
net: page_pool: add missing free_percpu when page_pool_init fail
Jiasheng Jiang (9):
pstore/platform: Add check for kstrdup
clk: mediatek: clk-mt6765: Add check for mtk_alloc_clk_data
clk: mediatek: clk-mt6779: Add check for mtk_alloc_clk_data
clk: mediatek: clk-mt6797: Add check for mtk_alloc_clk_data
clk: mediatek: clk-mt7629-eth: Add check for mtk_alloc_clk_data
clk: mediatek: clk-mt7629: Add check for mtk_alloc_clk_data
clk: mediatek: clk-mt2701: Add check for mtk_alloc_clk_data
media: vidtv: psi: Add check for kstrdup
media: vidtv: mux: Add check and kfree for kstrdup
Jingbo Xu (1):
writeback, cgroup: switch inodes with dirty timestamps to release
dying cgwbs
Jinjie Ruan (10):
wifi: rtw88: debug: Fix the NULL vs IS_ERR() bug for
debugfs_create_file()
wifi: rtw88: Remove duplicate NULL check before calling
usb_kill/free_urb()
kunit: Fix missed memory release in kunit_free_suite_set()
kunit: Fix the wrong kfree of copy for kunit_filter_suites()
kunit: Fix possible memory leak in kunit_filter_suites()
kunit: test: Fix the possible memory leak in executor_test
HID: uclogic: Fix user-memory-access bug in
uclogic_params_ugee_v2_init_event_hooks()
HID: uclogic: Fix a work->entry not empty bug in __queue_work()
iio: frequency: adf4350: Use device managed functions and fix power
down issue.
misc: st_core: Do not call kfree_skb() under spin_lock_irqsave()
Johannes Berg (18):
wifi: cfg80211: add flush functions for wiphy work
wifi: mac80211: move radar detect work to wiphy work
wifi: mac80211: move scan work to wiphy work
wifi: mac80211: move offchannel works to wiphy work
wifi: mac80211: move sched-scan stop work to wiphy work
wifi: mac80211: fix RCU usage warning in mesh fast-xmit
wifi: cfg80211: fix off-by-one in element defrag
wifi: mac80211: fix # of MSDU in A-MSDU calculation
wifi: cfg80211: fix kernel-doc for wiphy_delayed_work_flush()
wifi: mac80211: fix check for unusable RX result
wifi: iwlwifi: mvm: use correct sta ID for IGTK/BIGTK
wifi: mac80211: don't recreate driver link debugfs in reconfig
wifi: iwlwifi: mvm: change iwl_mvm_flush_sta() API
wifi: iwlwifi: mvm: fix iwl_mvm_mac_flush_sta()
wifi: iwlwifi: mvm: remove TDLS stations from FW
wifi: iwlwifi: increase number of RX buffers for EHT devices
wifi: iwlwifi: mvm: fix netif csum flags
wifi: iwlwifi: pcie: synchronize IRQs before NAPI
Johnny Liu (1):
gpu: host1x: Correct allocated size for contexts
Jonas Blixt (1):
USB: usbip: fix stub_dev hub disconnect
Jonas Gorski (1):
hwrng: geode - fix accessing registers
Jonas Karlman (4):
drm/rockchip: vop: Fix reset of state in duplicate state crtc funcs
drm/rockchip: vop: Fix call to crtc reset helper
drm/rockchip: vop2: Don't crash for invalid duplicate_state
drm/rockchip: vop2: Add missing call to crtc reset helper
Jonathan Neuschäfer (1):
clk: npcm7xx: Fix incorrect kfree
Josh Poimboeuf (4):
x86/srso: Fix SBPB enablement for (possible) future fixed HW
x86/srso: Print mitigation for retbleed IBPB case
x86/srso: Fix vulnerability reporting for missing microcode
x86/srso: Fix unret validation dependencies
Juergen Gross (1):
xenbus: fix error exit in xenbus_init()
Junhao He (1):
perf: hisi: Fix use-after-free when register pmu fails
Junxian Huang (2):
RDMA/hns: Fix unnecessary port_num transition in HW stats allocation
RDMA/hns: Fix init failure of RoCE VF and HIP08
Kai Huang (1):
x86/tdx: Zero out the missing RSI in TDX_HYPERCALL macro
Kailang Yang (1):
ALSA: hda/realtek: Add support dual speaker for Dell
Kajol Jain (2):
tools/perf: Update call stack check in builtin-lock.c
perf vendor events: Update PMC used in PM_RUN_INST_CMPL event for
power10 platform
Kathiravan Thirumoorthy (3):
clk: qcom: ipq5018: drop the CLK_SET_RATE_PARENT flag from GPLL clocks
clk: qcom: ipq9574: drop the CLK_SET_RATE_PARENT flag from GPLL clocks
clk: qcom: ipq5332: drop the CLK_SET_RATE_PARENT flag from GPLL clocks
Katya Orlova (1):
media: s3c-camif: Avoid inappropriate kfree()
Kees Cook (1):
string: Adjust strtomem() logic to allow for smaller sources
Konrad Dybcio (20):
clk: qcom: gcc-msm8996: Remove RPM bus clocks
clk: qcom: mmcc-msm8998: Don't check halt bit on some branch clks
clk: qcom: mmcc-msm8998: Fix the SMMU GDSC
drm/msm/adreno: Fix SM6375 GPU ID
drm/msm/a6xx: Fix unknown speedbin case
arm64: dts: qcom: sc7280: Add missing LMH interrupts
arm64: dts: qcom: qrb2210-rb1: Swap UART index
arm64: dts: qcom: qrb2210-rb1: Fix regulators
arm64: dts: qcom: sdm670: Fix pdc mapping
interconnect: qcom: qdu1000: Set ACV enable_mask
interconnect: qcom: sc7180: Set ACV enable_mask
interconnect: qcom: sc7280: Set ACV enable_mask
interconnect: qcom: sc8180x: Set ACV enable_mask
interconnect: qcom: sc8280xp: Set ACV enable_mask
interconnect: qcom: sdm670: Set ACV enable_mask
interconnect: qcom: sdm845: Set ACV enable_mask
interconnect: qcom: sm6350: Set ACV enable_mask
interconnect: qcom: sm8150: Set ACV enable_mask
interconnect: qcom: sm8250: Set ACV enable_mask
interconnect: qcom: sm8350: Set ACV enable_mask
Konstantin Meskhidze (1):
drm/radeon: possible buffer overflow
Krzysztof Kozlowski (4):
arm64: dts: qcom: msm8992-libra: drop duplicated reserved memory
arm64: dts: qcom: sc7280: drop incorrect EUD port on SoC side
arm64: dts: qcom: sdx75-idp: align RPMh regulator nodes with bindings
ARM: dts: qcom: mdm9615: populate vsdcc fixed regulator
Kumar Kartikeya Dwivedi (2):
bpf: Fix kfunc callback register type handling
selftests/bpf: Make linked_list failure test more robust
Kuninori Morimoto (2):
ASoC: fsl: mpc5200_dma.c: Fix warning of Function parameter or member
not described
ASoC: ams-delta.c: use component after check
Kuniyuki Iwashima (2):
dccp: Call security_inet_conn_request() after setting IPv4 addresses.
dccp/tcp: Call security_inet_conn_request() after setting IPv6
addresses.
Kunwu.Chan (1):
drm/amd/pm: Fix a memory leak on an error path
Kursad Oney (1):
ARM: 9321/1: memset: cast the constant byte to unsigned char
Lalith Rajendran (1):
platform/chrome: cros_ec_lpc: Separate host command and irq disable
Laurent Pinchart (3):
media: i2c: imx219: Convert to CCI register access helpers
media: i2c: imx219: Replace register addresses with macros
media: i2c: imx219: Drop IMX219_REG_CSI_LANE_MODE from common regs
array
Leon Hwang (2):
selftests/bpf: Correct map_fd to data_fd in tailcalls
bpf, x64: Fix tailcall infinite loop
Leon Romanovsky (1):
RDMA/hfi1: Workaround truncation compilation error
Li Lingfeng (1):
nbd: fix uaf in nbd_open
Li Zhijian (1):
cxl/region: Fix cxl_region_rwsem lock held when returning to user
space
Linus Walleij (1):
watchdog: ixp4xx: Make sure restart always works
Longfang Liu (1):
crypto: hisilicon/qm - fix PF queue parameter issue
Lorenzo Bianconi (1):
net: ethernet: mtk_wed: fix EXT_INT_STATUS_RX_FBUF definitions for
MT7986 SoC
Luoyouming (2):
RDMA/hns: Add check for SL
RDMA/hns: The UD mode can only be configured with DCQCN
Maarten Lankhorst (1):
ASoC: SOF: core: Ensure sof_ops_free() is still called when probe
never ran.
Maciej Wieczor-Retman (1):
selftests/pidfd: Fix ksft print formats
Maciej Żenczykowski (1):
netfilter: xt_recent: fix (increase) ipv6 literal buffer length
Marc Kleine-Budde (3):
can: dev: can_restart(): don't crash kernel if carrier is OK
can: dev: can_restart(): fix race condition between controller restart
and netif_carrier_on()
can: dev: can_put_echo_skb(): don't crash kernel if can_priv::echo_skb
is accessed out of bounds
Marcel Ziswiler (1):
Bluetooth: hci_sync: Fix Opcode prints in bt_dev_dbg/err
Marcin Szycik (1):
ice: Fix VF-VF direction matching in drop rule in switchdev
Marek Behún (1):
leds: turris-omnia: Do not use SMBUS calls
Marek Marczykowski-Górecki (1):
xen-pciback: Consider INTx disabled when MSI/MSI-X is enabled
Marek Szyprowski (2):
drm: bridge: samsung-dsim: Fix waiting for empty cmd transfer FIFO on
older Exynos
media: cec: meson: always include meson sub-directory in Makefile
Marek Vasut (3):
drm: bridge: samsung-dsim: Initialize ULPS EXIT for i.MX8M DSIM
media: hantro: Check whether reset op is defined before use
media: verisilicon: Do not enable G2 postproc downscale if source is
narrower than destination
Marijn Suijten (1):
arm64: dts: qcom: sm6125: Pad APPS IOMMU address to 8 characters
Mario Limonciello (5):
crypto: ccp - Get a free page to use while fetching initial nonce
crypto: ccp - Fix ioctl unit tests
crypto: ccp - Fix DBC sample application error handling
crypto: ccp - Fix sample application signature passing
crypto: ccp - Fix some unfused tests
Mark Rutland (1):
arm64/arm: xen: enlighten: Fix KPTI checks
Markus Schneider-Pargmann (1):
thermal/drivers/mediatek: Fix probe for THERMAL_V2
Masahiro Yamada (2):
modpost: fix tee MODULE_DEVICE_TABLE built on big-endian host
modpost: fix ishtp MODULE_DEVICE_TABLE built on big-endian host
Matti Lehtimäki (1):
ARM: dts: qcom: apq8026-samsung-matisse-wifi: Fix inverted hall sensor
Matti Vaittinen (1):
tools: iio: iio_generic_buffer ensure alignment
Maxime Ripard (1):
drm/vc4: tests: Fix UAF in the mock helpers
MeiChia Chiu (2):
wifi: mt76: update beacon size limitation
wifi: mt76: mt7915: fix beamforming availability check
Michael Ellerman (1):
powerpc: Hide empty pt_regs at base of the stack
Michal Schmidt (1):
ice: lag: in RCU, use atomic allocation
Michał Mirosław (3):
mfd: core: Un-constify mfd_cell.of_reg
usb: chipidea: Fix DMA overwrite for Tegra
usb: chipidea: Simplify Tegra DMA alignment code
Michel Dänzer (3):
drm/amd/display: Check all enabled planes in dm_check_crtc_cursor
drm/amd/display: Refactor dm_get_plane_scale helper
drm/amd/display: Bail from dm_check_crtc_cursor if no relevant change
Mike Tipton (1):
debugfs: Fix __rcu type comparison warning
Ming Qian (3):
media: imx-jpeg: initiate a drain of the capture queue in dynamic
resolution change
media: amphion: handle firmware debug message
media: imx-jpeg: notify source chagne event when the first picture
parsed
Miri Korenblit (2):
wifi: iwlwifi: don't use an uninitialized variable
wifi: iwlwifi: empty overflow queue during flush
Moudy Ho (1):
media: platform: mtk-mdp3: fix uninitialized variable in
mdp_path_config()
Namhyung Kim (2):
perf record: Fix BTF type checks in the off-cpu profiling
perf tools: Do not ignore the default vmlinux.h
Naresh Solanki (1):
hwmon: (pmbus/mp2975) Move PGOOD fix
NeilBrown (1):
Fix termination state for idr_for_each_entry_ul()
Nícolas F. R. A. Prado (1):
thermal: core: Don't update trip points inside the hysteresis range
Ondrej Zary (1):
ACPI: video: Add acpi_backlight=vendor quirk for Toshiba Portégé R100
Paolo Abeni (1):
mptcp: properly account fastopen data
Patrick Thompson (1):
net: r8169: Disable multicast filter for RTL8168H and RTL8107E
Patrisious Haddad (1):
IB/mlx5: Fix rdma counter binding for RAW QP
Paul E. McKenney (1):
x86/nmi: Fix out-of-order NMI nesting checks & false positive warning
Peng Fan (1):
clk: imx: imx8mq: correct error handling path
Peter Chiu (4):
wifi: mt76: mt7996: set correct wcid in txp
wifi: mt76: mt7996: fix wmm queue mapping
wifi: mt76: mt7996: fix rx rate report for CBW320-2
wifi: mt76: mt7996: fix TWT command format
Peter Zijlstra (2):
sched: Fix stop_one_cpu_nowait() vs hotplug
perf: Optimize perf_cgroup_switch()
Phil Sutter (2):
netfilter: nf_tables: Drop pointless memset when dumping rules
net: skb_find_text: Ignore patterns extending past 'to'
Philip Yang (3):
drm/amdgpu: Increase IH soft ring size for GFX v9.4.3 dGPU
drm/amdkfd: Remove svm range validated_once flag
drm/amdkfd: Handle errors from svm validate and map
Pratyush Yadav (1):
media: cadence: csi2rx: Unregister v4l2 async notifier
Qais Yousef (2):
sched/uclamp: Set max_spare_cap_cpu even if max_spare_cap is 0
sched/uclamp: Ignore (util == 0) optimization in feec() when
p_util_max = 0
Qu Wenruo (1):
btrfs: make found_logical_ret parameter mandatory for function
queue_scrub_stripe()
Raag Jadav (2):
PM: sleep: Fix symbol export for _SIMPLE_ variants of _PM_OPS()
pinctrl: baytrail: fix debounce disable case
Rafał Miłecki (1):
ARM: dts: BCM5301X: Explicitly disable unused switch CPU ports
Randy Dunlap (2):
clk: linux/clk-provider.h: fix kernel-doc warnings and typos
drm: bridge: for GENERIC_PHY_MIPI_DPHY also select GENERIC_PHY
Ratheesh Kannoth (2):
octeontx2-pf: Fix error codes
octeontx2-pf: Fix holes in error code
Reinette Chatre (1):
PCI/MSI: Provide stubs for IMS functions
Reuben Hawkins (1):
vfs: fix readahead(2) on block devices
Robert Chiras (1):
clk: imx: imx8qxp: Fix elcdif_pll clock
Robert Richter (1):
cxl/core/regs: Rename @dev to @host in struct cxl_register_map
Robin Murphy (1):
perf/arm-cmn: Fix DTC domain detection
Roman Bacik (1):
i2c: iproc: handle invalid slave state
Rotem Saado (1):
wifi: iwlwifi: yoyo: swap cdb and jacket bits values
Sascha Hauer (1):
PM / devfreq: rockchip-dfi: Make pmu regmap mandatory
Sean Wang (3):
wifi: mt76: move struct ieee80211_chanctx_conf up to struct mt76_vif
wifi: mt76: mt7921: fix the wrong rate pickup for the chanctx driver
wifi: mt76: mt7921: fix the wrong rate selected in fw for the chanctx
driver
Sebastian Andrzej Siewior (1):
powerpc/imc-pmu: Use the correct spinlock initializer.
Sergey Shtylyov (1):
usb: host: xhci-plat: fix possible kernel oops while resuming
Sergio Paracuellos (1):
clk: ralink: mtmips: quiet unused variable warning
Shayne Chen (1):
wifi: mt76: fix per-band IEEE80211_CONF_MONITOR flag comparison
Shigeru Yoshida (2):
tipc: Change nla_policy for bearer-related names to NLA_NUL_STRING
virtio/vsock: Fix uninit-value in virtio_transport_recv_pkt()
Shuming Fan (1):
ASoC: rt712-sdca: fix speaker route missing issue
Siddharth Vadapalli (1):
arm64: dts: ti: k3-j721s2-evm-gesi: Specify base dtb for overlay file
Song Liu (1):
bpf: Fix unnecessary -EBUSY from htab_lock_bucket
Srinivasan Shanmugam (1):
drm/radeon: Remove the references of radeon_gem_ pread & pwrite ioctls
StanleyYP Wang (1):
wifi: mt76: get rid of false alamrs of tx emission issues
Stefan Wahren (1):
hwrng: bcm2835 - Fix hwrng throughput regression
Stephan Gerhold (1):
arm64: dts: qcom: apq8016-sbc: Add missing ADV7533 regulators
Steven Rostedt (Google) (1):
eventfs: Check for NULL ef in eventfs_set_attr()
Sudeep Holla (3):
firmware: arm_ffa: Assign the missing IDR allocation ID to the FFA
device
firmware: arm_ffa: Allow the FF-A drivers to use 32bit mode of
messaging
clk: scmi: Free scmi_clk allocated when the clocks with invalid info
are skipped
Sumit Gupta (2):
cpufreq: tegra194: fix warning due to missing opp_put
firmware: tegra: Add suspend hook and reset BPMP IPC early on resume
Theodore Ts'o (1):
ext4: add missing initialization of call_notify_error in
update_super_work()
Thierry Reding (2):
memory: tegra: Set BPMP msg flags to reset IPC channels
arm64: tegra: Use correct interrupts for Tegra234 TKE
Thomas Gleixner (2):
cpu/SMT: Make SMT control more robust against enumeration failures
x86/apic: Fake primary thread mask for XEN/PV
Thomas Richter (1):
perf trace: Use the right bpf_probe_read(_str) variant for reading
user data
Tomas Glozar (1):
nd_btt: Make BTT lanes preemptible
Tomi Valkeinen (12):
drm/bridge: lt8912b: Fix bridge_detach
drm/bridge: lt8912b: Fix crash on bridge detach
drm/bridge: lt8912b: Manually disable HPD only if it was enabled
drm/bridge: lt8912b: Add missing drm_bridge_attach call
drm/bridge: tc358768: Fix use of uninitialized variable
drm/bridge: tc358768: Fix bit updates
drm/bridge: tc358768: Use struct videomode
drm/bridge: tc358768: Print logical values, not raw register values
drm/bridge: tc358768: Use dev for dbg prints, not priv->dev
drm/bridge: tc358768: Rename dsibclk to hsbyteclk
drm/bridge: tc358768: Clean up clock period code
drm/bridge: tc358768: Fix tc358768_ns_to_cnt()
Trond Myklebust (1):
nfsd: Handle EOPENSTALE correctly in the filecache
Tyrel Datwyler (1):
scsi: ibmvfc: Fix erroneous use of rtas_busy_delay with hcall return
code
Uwe Kleine-König (4):
soc: qcom: llcc: Handle a second device without data corruption
backlight: pwm_bl: Disable PWM on shutdown, suspend and remove
leds: pwm: Don't disable the PWM when the LED should be off
pwm: sti: Reduce number of allocations and drop usage of chip_data
Vaishnav Achath (1):
spi: omap2-mcspi: Fix hardcoded reference clock
Varadarajan Narayanan (5):
clk: qcom: ipq5332: Drop set rate parent from gpll0 dependent clocks
clk: qcom: config IPQ_APSS_6018 should depend on QCOM_SMEM
clk: qcom: clk-alpha-pll: introduce stromer plus ops
clk: qcom: apss-ipq-pll: Use stromer plus ops for stromer plus pll
clk: qcom: apss-ipq-pll: Fix 'l' value for ipq5332_pll_config
Vegard Nossum (1):
cpupower: fix reference to nonexistent document
Vincent Mailhol (2):
can: etas_es58x: rework the version check logic to silence
-Wformat-truncation
can: etas_es58x: add missing a blank line after declaration
Viresh Kumar (2):
xen: Make struct privcmd_irqfd's layout architecture independent
xen: irqfd: Use _IOW instead of the internal _IOC() macro
Vlad Buslov (1):
net/sched: act_ct: Always fill offloading tuple iifidx
Vladimir Oltean (1):
net: enetc: shorten enetc_setup_xdp_prog() error message to fit
NETLINK_MAX_FMTMSG_LEN
Wadim Egorov (1):
arm64: dts: ti: k3-am625-beagleplay: Fix typo in ramoops reg
Waiman Long (1):
cgroup/cpuset: Fix load balance state in update_partition_sd_lb()
Wang Yufen (1):
powerpc/pseries: fix potential memory leak in init_cpu_associativity()
WangJinchao (1):
padata: Fix refcnt handling in padata_free_shell()
Willem de Bruijn (1):
llc: verify mac len before reading mac header
Xiaogang Chen (1):
drm/amdkfd: fix some race conditions in vram buffer alloc/free of svm
code
Xiaolei Wang (1):
media: ov5640: Fix a memory leak when ov5640_probe fails
Yafang Shao (1):
bpf: Fix missed rcu read lock in bpf_task_under_cgroup()
Yan Zhai (1):
ipv6: avoid atomic fragment on GSO packets
Yang Jihong (3):
perf kwork: Fix incorrect and missing free atom in work_push_atom()
perf kwork: Add the supported subcommands to the document
perf kwork: Set ordered_events to true in 'struct perf_tool'
Yang Yingliang (5):
spi: omap2-mcspi: switch to use modern name
interconnect: fix error handling in qnoc_probe()
pcmcia: cs: fix possible hung task and memory leak pccardd()
pcmcia: ds: fix refcount leak in pcmcia_device_add()
pcmcia: ds: fix possible name leak in error path in
pcmcia_device_add()
Yazen Ghannam (1):
x86/amd_nb: Use Family 19h Models 60h-7Fh Function 4 IDs
Yedidya Benshimol (1):
wifi: iwlwifi: mvm: update IGTK in mvmvif upon D3 resume
Yi Yang (1):
tty: tty_jobctrl: fix pid memleak in disassociate_ctty()
Yicong Yang (1):
drivers/perf: hisi_pcie: Check the type first in pmu::event_init()
Yu Kuai (1):
blk-core: use pr_warn_ratelimited() in bio_check_ro()
Yujie Liu (1):
tracing/kprobes: Fix the order of argument descriptions
Yunfei Dong (1):
media: mediatek: vcodec: using encoder device to alloc/free encoder
memory
Yuntao Wang (1):
x86/boot: Fix incorrect startup_gdt_descr.size
Yury Norov (3):
numa: Generalize numa_map_to_online_node()
sched/topology: Fix sched_numa_find_nth_cpu() in CPU-less case
sched/topology: Fix sched_numa_find_nth_cpu() in non-NUMA case
Zev Weiss (1):
hwmon: (nct6775) Fix incorrect variable reuse in fan_div calculation
Zhang Rui (1):
hwmon: (coretemp) Fix potentially truncated sysfs attribute name
Zhang Shurong (2):
spi: tegra: Fix missing IRQ check in tegra_slink_probe()
ASoC: fsl: Fix PM disable depth imbalance in fsl_easrc_probe
Zheng Wang (1):
media: bttv: fix use after free error due to btv->timeout timer
Zheng Yejian (1):
livepatch: Fix missing newline character in klp_resolve_symbols()
Ziyang Xuan (1):
Bluetooth: Make handle of hci_conn be unique
wahrenst (1):
ARM: 9323/1: mm: Fix ARCH_LOW_ADDRESS_LIMIT when CONFIG_ZONE_DMA
Documentation/ABI/testing/sysfs-driver-qat | 2 +
Documentation/admin-guide/hw-vuln/srso.rst | 24 +-
.../devicetree/bindings/mfd/mt6397.txt | 4 +-
.../bcm4708-buffalo-wzr-1166dhp-common.dtsi | 8 +
.../dts/broadcom/bcm4708-luxul-xap-1510.dts | 8 +
.../dts/broadcom/bcm4708-luxul-xwc-1000.dts | 8 +
.../dts/broadcom/bcm4708-netgear-r6250.dts | 8 +
.../dts/broadcom/bcm4708-smartrg-sr400ac.dts | 8 +
.../broadcom/bcm47081-buffalo-wzr-600dhp2.dts | 8 +
.../dts/broadcom/bcm47081-luxul-xap-1410.dts | 8 +
.../dts/broadcom/bcm47081-luxul-xwr-1200.dts | 8 +
.../dts/broadcom/bcm4709-netgear-r8000.dts | 8 +
.../dts/broadcom/bcm47094-dlink-dir-885l.dts | 8 +
.../dts/broadcom/bcm47094-dlink-dir-890l.dts | 8 +
.../dts/broadcom/bcm47094-luxul-abr-4500.dts | 8 +
.../dts/broadcom/bcm47094-luxul-xap-1610.dts | 8 +
.../dts/broadcom/bcm47094-luxul-xbr-4500.dts | 8 +
.../dts/broadcom/bcm47094-luxul-xwc-2000.dts | 8 +
.../dts/broadcom/bcm47094-luxul-xwr-3100.dts | 8 +
.../broadcom/bcm47094-luxul-xwr-3150-v1.dts | 8 +
.../dts/broadcom/bcm53015-meraki-mr26.dts | 8 +
.../dts/broadcom/bcm53016-meraki-mr32.dts | 8 +
arch/arm/boot/dts/broadcom/bcm953012er.dts | 8 +
.../qcom-apq8026-samsung-matisse-wifi.dts | 4 +-
arch/arm/boot/dts/qcom/qcom-mdm9615.dtsi | 14 +-
arch/arm/boot/dts/renesas/r8a7792-blanche.dts | 2 +-
arch/arm/boot/dts/st/stm32f7-pinctrl.dtsi | 1 -
arch/arm/boot/dts/ti/omap/am3517-evm.dts | 16 +-
arch/arm/include/asm/arm_pmuv3.h | 48 +-
arch/arm/include/asm/dma.h | 3 +
arch/arm/lib/memset.S | 1 +
arch/arm/xen/enlighten.c | 25 +-
arch/arm64/boot/dts/freescale/imx8mm.dtsi | 1 +
arch/arm64/boot/dts/freescale/imx8mn.dtsi | 1 +
.../dts/freescale/imx8mp-debix-model-a.dts | 3 -
.../boot/dts/freescale/imx8qm-ss-img.dtsi | 2 +-
arch/arm64/boot/dts/marvell/cn9130-crb.dtsi | 4 +-
arch/arm64/boot/dts/marvell/cn9130-db.dtsi | 4 +-
.../arm64/boot/dts/nvidia/tegra234-p3767.dtsi | 4 +-
arch/arm64/boot/dts/nvidia/tegra234.dtsi | 12 +-
arch/arm64/boot/dts/qcom/apq8016-sbc.dts | 3 +
arch/arm64/boot/dts/qcom/msm8916.dtsi | 2 +-
arch/arm64/boot/dts/qcom/msm8939.dtsi | 2 +-
arch/arm64/boot/dts/qcom/msm8976.dtsi | 8 +-
.../boot/dts/qcom/msm8992-xiaomi-libra.dts | 5 -
arch/arm64/boot/dts/qcom/qrb2210-rb1.dts | 90 ++--
arch/arm64/boot/dts/qcom/sc7280.dtsi | 32 +-
arch/arm64/boot/dts/qcom/sdm670.dtsi | 3 +-
arch/arm64/boot/dts/qcom/sdm845-cheza.dtsi | 32 +-
arch/arm64/boot/dts/qcom/sdm845-mtp.dts | 2 +
arch/arm64/boot/dts/qcom/sdx75-idp.dts | 2 +-
arch/arm64/boot/dts/qcom/sm6125.dtsi | 2 +-
arch/arm64/boot/dts/qcom/sm8150.dtsi | 12 +-
arch/arm64/boot/dts/qcom/sm8350.dtsi | 2 +-
arch/arm64/boot/dts/ti/Makefile | 7 +-
arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi | 1 +
.../arm64/boot/dts/ti/k3-am625-beagleplay.dts | 2 +-
arch/arm64/boot/dts/ti/k3-am62a7-sk.dts | 2 +-
arch/arm64/include/asm/arm_pmuv3.h | 25 +-
arch/arm64/include/asm/cputype.h | 3 +-
arch/arm64/kvm/guest.c | 2 +-
arch/powerpc/include/asm/nohash/32/pte-40x.h | 3 -
arch/powerpc/kernel/process.c | 26 +-
arch/powerpc/kernel/traps.c | 2 +
arch/powerpc/kexec/core.c | 3 +
arch/powerpc/perf/imc-pmu.c | 2 +-
arch/powerpc/platforms/book3s/vas-api.c | 34 +-
arch/powerpc/platforms/pseries/lpar.c | 4 +-
arch/powerpc/platforms/pseries/vas.c | 4 +-
arch/powerpc/sysdev/xive/native.c | 2 +-
arch/riscv/boot/Makefile | 1 +
arch/riscv/boot/dts/allwinner/sun20i-d1s.dtsi | 1 -
arch/riscv/kernel/cpu.c | 11 +-
arch/sh/Kconfig.debug | 11 +
arch/x86/coco/tdx/tdcall.S | 1 +
arch/x86/include/asm/nospec-branch.h | 4 +-
arch/x86/include/asm/sparsemem.h | 2 +
arch/x86/include/asm/uaccess.h | 2 +-
arch/x86/kernel/amd_nb.c | 3 +
arch/x86/kernel/apic/apic.c | 11 +
arch/x86/kernel/cpu/bugs.c | 42 +-
arch/x86/kernel/head64.c | 2 +-
arch/x86/kernel/nmi.c | 13 +-
arch/x86/lib/copy_mc.c | 8 +-
arch/x86/mm/maccess.c | 19 +-
arch/x86/mm/numa.c | 80 +++
arch/x86/net/bpf_jit_comp.c | 28 +-
block/blk-core.c | 4 +-
crypto/asymmetric_keys/Kconfig | 3 +-
crypto/asymmetric_keys/Makefile | 3 +-
crypto/asymmetric_keys/selftest.c | 13 +-
crypto/asymmetric_keys/x509_parser.h | 9 -
crypto/asymmetric_keys/x509_public_key.c | 8 +-
drivers/accel/habanalabs/gaudi2/gaudi2.c | 4 +-
drivers/acpi/device_sysfs.c | 10 +-
drivers/acpi/numa/srat.c | 11 +-
drivers/acpi/property.c | 19 +-
drivers/acpi/video_detect.c | 26 +
drivers/base/regmap/regmap-debugfs.c | 2 +-
drivers/base/regmap/regmap.c | 16 +-
drivers/block/nbd.c | 11 +-
drivers/char/hw_random/bcm2835-rng.c | 2 +-
drivers/char/hw_random/core.c | 6 +
drivers/char/hw_random/geode-rng.c | 6 +-
drivers/clk/clk-npcm7xx.c | 2 +-
drivers/clk/clk-scmi.c | 1 +
drivers/clk/imx/Kconfig | 1 +
drivers/clk/imx/clk-imx8-acm.c | 16 +-
drivers/clk/imx/clk-imx8mq.c | 17 +-
drivers/clk/imx/clk-imx8qxp.c | 2 +-
drivers/clk/keystone/pll.c | 15 +-
drivers/clk/mediatek/clk-mt2701.c | 8 +
drivers/clk/mediatek/clk-mt6765.c | 6 +
drivers/clk/mediatek/clk-mt6779.c | 4 +
drivers/clk/mediatek/clk-mt6797.c | 6 +
drivers/clk/mediatek/clk-mt7629-eth.c | 4 +
drivers/clk/mediatek/clk-mt7629.c | 6 +
drivers/clk/mediatek/clk-pll.c | 6 +-
drivers/clk/qcom/Kconfig | 1 +
drivers/clk/qcom/apss-ipq-pll.c | 4 +-
drivers/clk/qcom/clk-alpha-pll.c | 63 +++
drivers/clk/qcom/clk-alpha-pll.h | 1 +
drivers/clk/qcom/clk-rcg2.c | 14 +-
drivers/clk/qcom/gcc-ipq5018.c | 3 -
drivers/clk/qcom/gcc-ipq5332.c | 4 -
drivers/clk/qcom/gcc-ipq9574.c | 4 -
drivers/clk/qcom/gcc-msm8996.c | 237 +--------
drivers/clk/qcom/gcc-sm8150.c | 2 +-
drivers/clk/qcom/mmcc-msm8998.c | 7 +-
drivers/clk/ralink/clk-mtmips.c | 20 +-
drivers/clk/renesas/rcar-cpg-lib.c | 15 +-
drivers/clk/renesas/rzg2l-cpg.c | 62 +--
drivers/clk/renesas/rzg2l-cpg.h | 2 +-
drivers/clk/ti/divider.c | 8 +-
drivers/clocksource/arm_arch_timer.c | 5 +-
drivers/clocksource/timer-ti-dm.c | 36 +-
drivers/cpufreq/tegra194-cpufreq.c | 2 +
drivers/crypto/caam/caamalg.c | 3 +-
drivers/crypto/caam/caamalg_qi2.c | 3 +-
drivers/crypto/ccp/dbc.c | 2 +-
drivers/crypto/hisilicon/hpre/hpre_main.c | 7 +-
drivers/crypto/hisilicon/qm.c | 18 +-
drivers/crypto/hisilicon/qm_common.h | 1 -
drivers/crypto/hisilicon/sec2/sec_main.c | 5 +
drivers/crypto/hisilicon/zip/zip_main.c | 5 +
.../intel/qat/qat_4xxx/adf_4xxx_hw_data.c | 211 +++++---
drivers/crypto/intel/qat/qat_4xxx/adf_drv.c | 34 +-
.../intel/qat/qat_common/adf_accel_devices.h | 3 +-
.../crypto/intel/qat/qat_common/adf_admin.c | 39 +-
.../intel/qat/qat_common/adf_cfg_services.h | 34 ++
.../intel/qat/qat_common/adf_cfg_strings.h | 1 +
.../intel/qat/qat_common/adf_common_drv.h | 2 +
.../crypto/intel/qat/qat_common/adf_init.c | 20 +-
.../crypto/intel/qat/qat_common/adf_sysfs.c | 28 +-
.../qat/qat_common/adf_transport_debug.c | 4 +-
.../qat/qat_common/icp_qat_fw_init_admin.h | 1 +
.../intel/qat/qat_common/qat_algs_send.c | 46 +-
drivers/cxl/core/core.h | 1 +
drivers/cxl/core/hdm.c | 40 +-
drivers/cxl/core/mbox.c | 55 +-
drivers/cxl/core/memdev.c | 157 +++---
drivers/cxl/core/port.c | 53 +-
drivers/cxl/core/region.c | 241 +++++----
drivers/cxl/core/regs.c | 28 +-
drivers/cxl/cxl.h | 4 +-
drivers/cxl/cxlmem.h | 13 +-
drivers/cxl/pci.c | 80 ++-
drivers/devfreq/event/rockchip-dfi.c | 15 +-
drivers/dma/idxd/Makefile | 6 +-
drivers/dma/pxa_dma.c | 1 -
drivers/dma/ti/edma.c | 4 +-
drivers/firmware/arm_ffa/bus.c | 1 +
drivers/firmware/arm_ffa/driver.c | 12 +-
drivers/firmware/tegra/bpmp.c | 30 ++
drivers/firmware/ti_sci.c | 46 +-
drivers/gpio/gpio-sim.c | 4 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h | 2 +-
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 12 +-
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 12 +-
drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 92 ++--
drivers/gpu/drm/amd/amdkfd/kfd_svm.h | 2 -
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 73 ++-
.../drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c | 4 +-
drivers/gpu/drm/aspeed/aspeed_gfx_drv.c | 7 +
drivers/gpu/drm/bridge/Kconfig | 2 +
drivers/gpu/drm/bridge/cadence/Kconfig | 1 +
drivers/gpu/drm/bridge/ite-it66121.c | 12 +-
drivers/gpu/drm/bridge/lontium-lt8912b.c | 22 +-
drivers/gpu/drm/bridge/lontium-lt9611uxc.c | 10 +-
drivers/gpu/drm/bridge/samsung-dsim.c | 20 +-
drivers/gpu/drm/bridge/tc358768.c | 166 +++---
drivers/gpu/drm/drm_syncobj.c | 3 +-
drivers/gpu/drm/loongson/lsdc_pixpll.c | 6 +-
drivers/gpu/drm/mediatek/mtk_drm_crtc.c | 3 +
drivers/gpu/drm/mediatek/mtk_drm_drv.c | 1 +
drivers/gpu/drm/mediatek/mtk_drm_gem.c | 9 +-
drivers/gpu/drm/mediatek/mtk_drm_plane.c | 41 +-
drivers/gpu/drm/mediatek/mtk_dsi.c | 4 +-
drivers/gpu/drm/mgag200/mgag200_drv.c | 8 +
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
drivers/gpu/drm/msm/adreno/adreno_device.c | 2 +-
drivers/gpu/drm/msm/dsi/dsi.c | 1 +
drivers/gpu/drm/msm/dsi/dsi.h | 1 +
drivers/gpu/drm/msm/dsi/dsi_host.c | 16 +-
drivers/gpu/drm/pl111/pl111_drv.c | 7 +
drivers/gpu/drm/radeon/evergreen.c | 7 +-
drivers/gpu/drm/radeon/radeon.h | 4 -
drivers/gpu/drm/radeon/radeon_drv.c | 2 -
drivers/gpu/drm/radeon/radeon_gem.c | 16 -
drivers/gpu/drm/rockchip/cdn-dp-core.c | 15 +-
drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 2 +-
drivers/gpu/drm/rockchip/rockchip_drm_vop.c | 8 +-
drivers/gpu/drm/rockchip/rockchip_drm_vop2.c | 39 +-
drivers/gpu/drm/solomon/ssd130x.c | 47 +-
drivers/gpu/drm/stm/drv.c | 7 +
drivers/gpu/drm/tilcdc/tilcdc_drv.c | 11 +-
drivers/gpu/drm/tve200/tve200_drv.c | 7 +
drivers/gpu/drm/vboxvideo/vbox_drv.c | 10 +
drivers/gpu/drm/vc4/tests/vc4_mock_crtc.c | 2 +-
drivers/gpu/drm/vc4/tests/vc4_mock_output.c | 2 +-
drivers/gpu/host1x/context.c | 4 +-
drivers/hid/hid-cp2112.c | 10 +-
drivers/hid/hid-logitech-hidpp.c | 60 +--
drivers/hid/hid-uclogic-core-test.c | 7 +
drivers/hid/hid-uclogic-params-test.c | 16 +-
drivers/hte/hte-tegra194-test.c | 4 +-
drivers/hwmon/axi-fan-control.c | 29 +-
drivers/hwmon/coretemp.c | 2 +-
drivers/hwmon/nct6775-core.c | 12 +-
drivers/hwmon/pmbus/mp2975.c | 10 +-
drivers/hwmon/sch5627.c | 21 +-
drivers/hwmon/sch56xx-common.c | 64 +--
drivers/i2c/busses/i2c-bcm-iproc.c | 133 +++--
drivers/i3c/master.c | 4 +-
drivers/iio/frequency/adf4350.c | 75 +--
drivers/infiniband/core/device.c | 2 +-
drivers/infiniband/core/sa_query.c | 4 +-
drivers/infiniband/core/sysfs.c | 10 +-
drivers/infiniband/core/user_mad.c | 4 +-
drivers/infiniband/hw/hfi1/efivar.c | 2 +-
drivers/infiniband/hw/hns/hns_roce_ah.c | 13 +-
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 34 +-
drivers/infiniband/hw/hns/hns_roce_main.c | 22 +-
drivers/infiniband/hw/hns/hns_roce_qp.c | 2 +-
drivers/infiniband/hw/mlx5/main.c | 4 +-
drivers/infiniband/hw/mlx5/qp.c | 27 +
drivers/input/rmi4/rmi_bus.c | 2 +-
drivers/interconnect/qcom/icc-rpm.c | 14 +-
drivers/interconnect/qcom/osm-l3.c | 3 +-
drivers/interconnect/qcom/qdu1000.c | 1 +
drivers/interconnect/qcom/sc7180.c | 1 +
drivers/interconnect/qcom/sc7280.c | 1 +
drivers/interconnect/qcom/sc8180x.c | 1 +
drivers/interconnect/qcom/sc8280xp.c | 1 +
drivers/interconnect/qcom/sdm670.c | 1 +
drivers/interconnect/qcom/sdm845.c | 1 +
drivers/interconnect/qcom/sm6350.c | 1 +
drivers/interconnect/qcom/sm8150.c | 1 +
drivers/interconnect/qcom/sm8250.c | 1 +
drivers/interconnect/qcom/sm8350.c | 1 +
drivers/iommu/iommufd/io_pagetable.c | 18 +-
drivers/iommu/iommufd/pages.c | 2 +
drivers/irqchip/irq-sifive-plic.c | 7 +-
drivers/leds/leds-pwm.c | 2 +-
drivers/leds/leds-turris-omnia.c | 54 +-
drivers/leds/trigger/ledtrig-cpu.c | 4 +-
drivers/media/cec/platform/Makefile | 2 +-
drivers/media/i2c/Kconfig | 1 +
drivers/media/i2c/imx219.c | 503 ++++++++----------
drivers/media/i2c/max9286.c | 2 -
drivers/media/i2c/ov13b10.c | 2 +-
drivers/media/i2c/ov5640.c | 24 +-
drivers/media/pci/bt8xx/bttv-driver.c | 1 +
drivers/media/platform/amphion/vpu_defs.h | 1 +
drivers/media/platform/amphion/vpu_helpers.c | 1 +
drivers/media/platform/amphion/vpu_malone.c | 1 +
drivers/media/platform/amphion/vpu_msgs.c | 31 +-
drivers/media/platform/cadence/cdns-csi2rx.c | 7 +-
.../platform/mediatek/jpeg/mtk_jpeg_enc_hw.c | 5 +-
.../platform/mediatek/mdp3/mtk-mdp3-cmdq.c | 2 +-
.../mediatek/vcodec/common/mtk_vcodec_util.c | 56 +-
.../mediatek/vcodec/encoder/venc_vpu_if.c | 5 +
.../media/platform/nxp/imx-jpeg/mxc-jpeg.c | 34 +-
.../media/platform/nxp/imx-jpeg/mxc-jpeg.h | 1 +
.../samsung/s3c-camif/camif-capture.c | 6 +-
.../media/platform/verisilicon/hantro_drv.c | 3 +-
.../platform/verisilicon/hantro_postproc.c | 2 +-
.../platform/verisilicon/rockchip_vpu_hw.c | 2 +-
drivers/media/test-drivers/vidtv/vidtv_mux.c | 7 +-
drivers/media/test-drivers/vidtv/vidtv_psi.c | 45 +-
drivers/media/usb/dvb-usb-v2/af9035.c | 13 +-
drivers/memory/tegra/tegra234.c | 4 +
drivers/mfd/arizona-spi.c | 3 +
drivers/mfd/dln2.c | 1 -
drivers/mfd/mfd-core.c | 17 +-
drivers/misc/ti-st/st_core.c | 7 +-
drivers/mmc/core/mmc.c | 2 +-
drivers/net/can/dev/dev.c | 10 +-
drivers/net/can/dev/skb.c | 6 +-
drivers/net/can/usb/etas_es58x/es58x_core.c | 1 +
drivers/net/can/usb/etas_es58x/es58x_core.h | 6 +-
.../net/can/usb/etas_es58x/es58x_devlink.c | 57 +-
drivers/net/ethernet/broadcom/tg3.c | 3 +-
.../chelsio/inline_crypto/chtls/chtls_cm.c | 2 +-
drivers/net/ethernet/freescale/enetc/enetc.c | 2 +-
drivers/net/ethernet/google/gve/gve_main.c | 2 +-
drivers/net/ethernet/intel/i40e/i40e_main.c | 10 +-
drivers/net/ethernet/intel/iavf/iavf.h | 16 +-
drivers/net/ethernet/intel/iavf/iavf_main.c | 43 +-
.../net/ethernet/intel/iavf/iavf_virtchnl.c | 75 ++-
drivers/net/ethernet/intel/ice/ice_lag.c | 18 +-
drivers/net/ethernet/intel/ice/ice_ptp.c | 12 +-
drivers/net/ethernet/intel/ice/ice_tc_lib.c | 114 +++-
.../marvell/octeontx2/nic/otx2_common.c | 15 +-
.../marvell/octeontx2/nic/otx2_common.h | 1 +
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 81 +--
.../marvell/octeontx2/nic/otx2_struct.h | 34 +-
.../marvell/octeontx2/nic/otx2_txrx.c | 42 ++
drivers/net/ethernet/mediatek/mtk_wed_regs.h | 4 +-
.../mlxsw/spectrum_acl_bloom_filter.c | 2 +-
drivers/net/ethernet/realtek/r8169_main.c | 10 +-
.../net/ethernet/stmicro/stmmac/dwxgmac2.h | 2 +-
.../ethernet/stmicro/stmmac/dwxgmac2_core.c | 14 +-
drivers/net/ethernet/ti/icssg/icss_iep.c | 2 +-
drivers/net/ethernet/toshiba/spider_net.c | 2 +-
drivers/net/gtp.c | 4 +-
drivers/net/ipvlan/ipvlan_core.c | 8 +-
drivers/net/ipvlan/ipvlan_main.c | 1 +
drivers/net/macsec.c | 6 +-
drivers/net/usb/r8152.c | 18 +-
drivers/net/virtio_net.c | 198 ++++---
drivers/net/wireless/ath/ath11k/mac.c | 8 +
drivers/net/wireless/ath/ath11k/pci.c | 24 +-
drivers/net/wireless/ath/ath12k/dp_rx.c | 2 +-
drivers/net/wireless/ath/ath12k/dp_tx.c | 7 +-
.../net/wireless/ath/dfs_pattern_detector.c | 2 +-
drivers/net/wireless/intel/iwlwifi/cfg/bz.c | 14 +-
drivers/net/wireless/intel/iwlwifi/cfg/sc.c | 10 +-
drivers/net/wireless/intel/iwlwifi/dvm/tx.c | 5 +-
.../wireless/intel/iwlwifi/fw/api/dbg-tlv.h | 1 +
.../net/wireless/intel/iwlwifi/iwl-config.h | 5 +-
.../net/wireless/intel/iwlwifi/iwl-dbg-tlv.h | 5 +-
drivers/net/wireless/intel/iwlwifi/iwl-drv.c | 51 +-
drivers/net/wireless/intel/iwlwifi/iwl-prph.h | 4 +-
.../net/wireless/intel/iwlwifi/iwl-trans.h | 11 +-
drivers/net/wireless/intel/iwlwifi/mvm/d3.c | 10 +
.../intel/iwlwifi/mvm/ftm-responder.c | 9 +-
drivers/net/wireless/intel/iwlwifi/mvm/link.c | 22 +-
.../net/wireless/intel/iwlwifi/mvm/mac-ctxt.c | 8 +-
.../net/wireless/intel/iwlwifi/mvm/mac80211.c | 182 +++----
.../net/wireless/intel/iwlwifi/mvm/mld-key.c | 16 +-
.../wireless/intel/iwlwifi/mvm/mld-mac80211.c | 105 ++--
.../net/wireless/intel/iwlwifi/mvm/mld-sta.c | 2 +-
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | 9 +-
drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 12 +-
.../wireless/intel/iwlwifi/mvm/time-event.c | 31 +-
drivers/net/wireless/intel/iwlwifi/mvm/tx.c | 22 +-
.../wireless/intel/iwlwifi/pcie/trans-gen2.c | 1 +
.../net/wireless/intel/iwlwifi/pcie/trans.c | 1 +
drivers/net/wireless/intel/iwlwifi/queue/tx.c | 9 +-
drivers/net/wireless/intel/iwlwifi/queue/tx.h | 2 +-
drivers/net/wireless/mediatek/mt76/dma.c | 3 -
drivers/net/wireless/mediatek/mt76/mac80211.c | 9 +-
drivers/net/wireless/mediatek/mt76/mt76.h | 4 +-
.../wireless/mediatek/mt76/mt7603/beacon.c | 76 ++-
.../net/wireless/mediatek/mt76/mt7603/core.c | 2 +
.../net/wireless/mediatek/mt76/mt7603/mac.c | 52 +-
.../net/wireless/mediatek/mt76/mt7603/regs.h | 5 +
.../net/wireless/mediatek/mt76/mt7615/mcu.c | 2 +-
.../wireless/mediatek/mt76/mt7615/pci_mac.c | 2 +-
.../wireless/mediatek/mt76/mt76_connac3_mac.h | 2 +
.../wireless/mediatek/mt76/mt76_connac_mac.c | 24 +-
.../wireless/mediatek/mt76/mt76_connac_mcu.c | 9 +-
.../net/wireless/mediatek/mt76/mt7915/mac.c | 2 +-
.../net/wireless/mediatek/mt76/mt7915/main.c | 8 +-
.../net/wireless/mediatek/mt76/mt7915/mcu.c | 69 ++-
.../net/wireless/mediatek/mt76/mt7915/mcu.h | 18 +-
.../wireless/mediatek/mt76/mt7915/mt7915.h | 2 +
.../net/wireless/mediatek/mt76/mt7921/main.c | 12 +-
.../wireless/mediatek/mt76/mt7921/pci_mac.c | 2 +-
drivers/net/wireless/mediatek/mt76/mt792x.h | 1 -
.../net/wireless/mediatek/mt76/mt792x_core.c | 4 +-
.../net/wireless/mediatek/mt76/mt7996/init.c | 9 +-
.../net/wireless/mediatek/mt76/mt7996/mac.c | 12 +-
.../net/wireless/mediatek/mt76/mt7996/main.c | 16 +-
.../net/wireless/mediatek/mt76/mt7996/mcu.c | 67 ++-
.../net/wireless/mediatek/mt76/mt7996/mcu.h | 11 +-
.../wireless/realtek/rtlwifi/rtl8188ee/dm.c | 2 +-
.../realtek/rtlwifi/rtl8192c/dm_common.c | 2 +-
.../wireless/realtek/rtlwifi/rtl8723ae/dm.c | 2 +-
drivers/net/wireless/realtek/rtw88/debug.c | 4 +-
drivers/net/wireless/realtek/rtw88/usb.c | 9 +-
drivers/net/wireless/silabs/wfx/data_tx.c | 71 +--
drivers/nvdimm/of_pmem.c | 8 +-
drivers/nvdimm/region_devs.c | 8 +-
drivers/nvme/host/ioctl.c | 7 +-
drivers/pci/controller/vmd.c | 3 +-
drivers/pci/endpoint/pci-epc-core.c | 1 -
drivers/pci/pcie/aspm.c | 3 +-
drivers/pcmcia/cs.c | 1 +
drivers/pcmcia/ds.c | 14 +-
drivers/perf/arm-cmn.c | 16 +-
drivers/perf/arm_pmuv3.c | 6 +-
drivers/perf/hisilicon/hisi_pcie_pmu.c | 7 +-
drivers/perf/hisilicon/hisi_uncore_pa_pmu.c | 4 +-
drivers/perf/hisilicon/hisi_uncore_sllc_pmu.c | 4 +-
drivers/perf/hisilicon/hns3_pmu.c | 8 +-
drivers/perf/riscv_pmu_sbi.c | 6 +-
drivers/pinctrl/intel/pinctrl-baytrail.c | 11 +-
drivers/pinctrl/renesas/pinctrl-rzg2l.c | 3 +-
drivers/platform/chrome/cros_ec.c | 116 +++-
drivers/platform/chrome/cros_ec.h | 4 +
drivers/platform/chrome/cros_ec_lpc.c | 22 +-
drivers/platform/x86/wmi.c | 36 +-
drivers/pwm/pwm-brcmstb.c | 4 +-
drivers/pwm/pwm-sti.c | 29 +-
drivers/regulator/mt6358-regulator.c | 14 +-
drivers/regulator/qcom-rpmh-regulator.c | 2 +-
drivers/rtc/rtc-brcmstb-waketimer.c | 47 +-
drivers/rtc/rtc-pcf85363.c | 2 +-
drivers/s390/crypto/ap_bus.c | 21 +-
drivers/s390/crypto/ap_bus.h | 1 +
drivers/s390/crypto/ap_queue.c | 9 +-
drivers/scsi/ibmvscsi/ibmvfc.c | 3 +-
drivers/soc/qcom/llcc-qcom.c | 3 +
drivers/soc/qcom/pmic_glink_altmode.c | 2 +-
drivers/spi/Kconfig | 1 +
drivers/spi/spi-nxp-fspi.c | 2 +-
drivers/spi/spi-omap2-mcspi.c | 263 ++++-----
drivers/spi/spi-tegra20-slink.c | 2 +
.../staging/media/sunxi/cedrus/cedrus_hw.c | 24 +-
drivers/thermal/mediatek/auxadc_thermal.c | 2 +-
drivers/thermal/thermal_core.c | 6 +-
drivers/thermal/thermal_trip.c | 19 +-
drivers/tty/tty_jobctrl.c | 17 +-
drivers/ufs/core/ufshcd.c | 2 +-
drivers/usb/chipidea/host.c | 48 +-
drivers/usb/dwc2/hcd.c | 2 +-
drivers/usb/host/xhci-pci.c | 2 +
drivers/usb/host/xhci-plat.c | 23 +-
drivers/usb/usbip/stub_dev.c | 9 +-
drivers/video/backlight/pwm_bl.c | 22 +
drivers/video/fbdev/fsl-diu-fb.c | 2 +-
drivers/video/fbdev/imsttfb.c | 35 +-
drivers/virt/coco/sev-guest/sev-guest.c | 45 +-
drivers/watchdog/ixp4xx_wdt.c | 28 +-
drivers/watchdog/marvell_gti_wdt.c | 2 +-
drivers/xen/privcmd.c | 2 +-
drivers/xen/xen-pciback/conf_space.c | 19 +-
.../xen/xen-pciback/conf_space_capability.c | 8 +-
drivers/xen/xen-pciback/conf_space_header.c | 21 +-
drivers/xen/xenbus/xenbus_probe.c | 2 +-
fs/btrfs/ioctl.c | 10 +-
fs/btrfs/scrub.c | 10 +-
fs/debugfs/file.c | 2 +-
fs/dlm/debug_fs.c | 13 +-
fs/dlm/midcomms.c | 39 +-
fs/erofs/utils.c | 8 +-
fs/erofs/zdata.c | 1 +
fs/ext4/extents.c | 10 +-
fs/ext4/super.c | 3 +-
fs/f2fs/data.c | 24 +-
fs/f2fs/file.c | 1 +
fs/f2fs/super.c | 35 +-
fs/fs-writeback.c | 41 +-
fs/nfsd/filecache.c | 27 +-
fs/nfsd/vfs.c | 28 +-
fs/nfsd/vfs.h | 4 +-
fs/pstore/platform.c | 9 +-
fs/tracefs/event_inode.c | 4 +-
include/drm/bridge/samsung-dsim.h | 1 +
include/linux/bpf.h | 5 +
include/linux/clk-provider.h | 15 +-
include/linux/cpuhotplug.h | 1 +
include/linux/hisi_acc_qm.h | 7 +
include/linux/hw_random.h | 1 +
include/linux/idr.h | 6 +-
include/linux/mfd/core.h | 2 +-
include/linux/netdevice.h | 1 +
include/linux/numa.h | 14 +-
include/linux/objtool.h | 3 +-
include/linux/pci.h | 34 +-
include/linux/perf_event.h | 1 +
include/linux/pm.h | 43 +-
include/linux/string.h | 7 +-
include/linux/topology.h | 2 +-
include/linux/udp.h | 66 ++-
include/linux/verification.h | 1 +
include/net/bluetooth/hci.h | 3 +
include/net/bluetooth/hci_core.h | 31 +-
include/net/cfg80211.h | 21 +
include/net/flow.h | 2 +-
include/net/netfilter/nf_conntrack_act_ct.h | 30 +-
include/net/tcp.h | 2 +-
include/net/udp_tunnel.h | 9 +-
include/net/udplite.h | 14 +-
include/soc/tegra/bpmp.h | 6 +
include/sound/cs35l41.h | 4 +-
include/uapi/xen/privcmd.h | 4 +-
io_uring/kbuf.c | 11 +-
io_uring/net.c | 24 +-
kernel/bpf/hashtab.c | 7 +-
kernel/bpf/helpers.c | 32 +-
kernel/bpf/trampoline.c | 4 +-
kernel/bpf/verifier.c | 7 +
kernel/cgroup/cpuset.c | 12 +-
kernel/cpu.c | 18 +-
kernel/events/core.c | 115 ++--
kernel/futex/core.c | 12 +-
kernel/irq/matrix.c | 6 +-
kernel/livepatch/core.c | 2 +-
kernel/module/decompress.c | 8 +-
kernel/padata.c | 6 +-
kernel/rcu/srcutree.c | 31 +-
kernel/sched/core.c | 10 +-
kernel/sched/deadline.c | 2 +
kernel/sched/fair.c | 35 +-
kernel/sched/rt.c | 4 +
kernel/sched/topology.c | 6 +-
kernel/trace/trace_kprobe.c | 2 +-
lib/kunit/executor.c | 23 +-
lib/kunit/executor_test.c | 36 +-
mm/mempolicy.c | 18 +-
mm/readahead.c | 3 +-
net/9p/client.c | 6 +-
net/bluetooth/amp.c | 3 +-
net/bluetooth/hci_conn.c | 58 +-
net/bluetooth/hci_core.c | 3 +
net/bluetooth/hci_event.c | 84 +--
net/bluetooth/hci_sync.c | 4 +-
net/bluetooth/iso.c | 19 +-
net/core/page_pool.c | 6 +-
net/core/skbuff.c | 3 +-
net/dccp/ipv4.c | 6 +-
net/dccp/ipv6.c | 6 +-
net/hsr/hsr_forward.c | 4 +-
net/ipv4/syncookies.c | 20 +-
net/ipv4/tcp_input.c | 9 +-
net/ipv4/tcp_metrics.c | 15 +-
net/ipv4/udp.c | 74 +--
net/ipv4/udp_offload.c | 4 +-
net/ipv4/udp_tunnel_core.c | 2 +-
net/ipv4/udplite.c | 1 -
net/ipv4/xfrm4_input.c | 4 +-
net/ipv6/ip6_output.c | 8 +-
net/ipv6/syncookies.c | 7 +-
net/ipv6/udp.c | 34 +-
net/ipv6/udplite.c | 1 -
net/ipv6/xfrm6_input.c | 4 +-
net/l2tp/l2tp_core.c | 6 +-
net/llc/llc_input.c | 10 +-
net/llc/llc_s_ac.c | 3 +
net/llc/llc_station.c | 3 +
net/mac80211/driver-ops.c | 9 +-
net/mac80211/drop.h | 3 +
net/mac80211/ieee80211_i.h | 18 +-
net/mac80211/iface.c | 2 +-
net/mac80211/link.c | 2 +-
net/mac80211/main.c | 25 +-
net/mac80211/mesh_pathtbl.c | 2 +-
net/mac80211/offchannel.c | 36 +-
net/mac80211/rx.c | 2 +-
net/mac80211/scan.c | 36 +-
net/mac80211/sta_info.c | 2 +-
net/mac80211/util.c | 11 +-
net/mptcp/fastopen.c | 1 +
net/netfilter/nf_nat_redirect.c | 27 +-
net/netfilter/nf_tables_api.c | 4 -
net/netfilter/xt_recent.c | 2 +-
net/openvswitch/conntrack.c | 2 +-
net/rxrpc/conn_object.c | 2 +-
net/rxrpc/local_object.c | 2 +-
net/sched/act_ct.c | 15 +-
net/smc/af_smc.c | 4 +-
net/smc/smc.h | 5 +
net/smc/smc_cdc.c | 11 +-
net/smc/smc_close.c | 5 +-
net/tipc/link.c | 4 +-
net/tipc/netlink.c | 4 +-
net/tls/tls_sw.c | 2 +-
net/vmw_vsock/virtio_transport_common.c | 18 +-
net/wireless/core.c | 34 +-
net/wireless/core.h | 3 +-
net/wireless/scan.c | 4 +-
net/wireless/sysfs.c | 4 +-
scripts/Makefile.vmlinux_o | 3 +-
scripts/gdb/linux/constants.py.in | 9 +-
scripts/mod/file2alias.c | 14 +-
security/apparmor/policy.c | 1 +
security/apparmor/policy_unpack.c | 5 +-
sound/pci/hda/cs35l41_hda.c | 9 +-
sound/pci/hda/patch_realtek.c | 40 +-
sound/soc/codecs/cs35l41-lib.c | 60 ++-
sound/soc/codecs/cs35l41.c | 38 +-
sound/soc/codecs/cs35l41.h | 1 -
sound/soc/codecs/hdmi-codec.c | 27 +-
sound/soc/codecs/rt712-sdca.c | 14 +-
sound/soc/fsl/fsl-asoc-card.c | 1 +
sound/soc/fsl/fsl_easrc.c | 8 +-
sound/soc/fsl/mpc5200_dma.c | 3 +
sound/soc/intel/boards/sof_sdw.c | 2 +-
.../boards/sof_sdw_rt_sdca_jack_common.c | 8 +
sound/soc/intel/skylake/skl-sst-utils.c | 1 +
.../mt8186/mt8186-mt6366-rt1019-rt5682s.c | 4 +-
sound/soc/soc-dapm.c | 2 +-
sound/soc/soc-pcm.c | 21 +-
sound/soc/sof/core.c | 6 +-
sound/soc/sof/ipc4-topology.c | 3 +-
sound/soc/ti/ams-delta.c | 4 +-
tools/crypto/ccp/dbc.c | 17 +-
tools/crypto/ccp/dbc.py | 8 +-
tools/crypto/ccp/test_dbc.py | 45 +-
tools/iio/iio_generic_buffer.c | 13 +-
tools/lib/bpf/bpf_tracing.h | 2 -
tools/lib/perf/include/internal/rc_check.h | 6 +-
tools/objtool/objtool.c | 4 +-
tools/perf/Documentation/perf-kwork.txt | 2 +-
tools/perf/Makefile.perf | 4 +
tools/perf/builtin-kwork.c | 13 +-
tools/perf/builtin-lock.c | 17 +-
tools/perf/builtin-stat.c | 2 +-
.../arch/arm64/ampere/ampereone/metrics.json | 418 ++++++++-------
.../pmu-events/arch/powerpc/power10/pmc.json | 2 +-
.../arch/x86/broadwellde/bdwde-metrics.json | 14 +-
tools/perf/util/bpf_off_cpu.c | 5 +-
.../bpf_skel/augmented_raw_syscalls.bpf.c | 16 +-
tools/perf/util/bpf_skel/vmlinux/.gitignore | 1 +
tools/perf/util/hist.c | 10 +-
tools/perf/util/machine.c | 22 +-
tools/perf/util/mem-events.c | 3 +-
tools/perf/util/parse-events.y | 9 +-
.../cpupower/man/cpupower-powercap-info.1 | 2 +-
tools/testing/cxl/test/mem.c | 4 +-
.../selftests/bpf/prog_tests/linked_list.c | 10 +-
.../bpf/prog_tests/module_fentry_shadow.c | 5 +
.../selftests/bpf/prog_tests/tailcalls.c | 32 +-
tools/testing/selftests/bpf/progs/bpf_misc.h | 3 +
.../selftests/bpf/progs/linked_list_fail.c | 4 +-
tools/testing/selftests/bpf/test_progs.h | 2 +
tools/testing/selftests/mm/mdwe_test.c | 9 +-
.../testing/selftests/net/mptcp/mptcp_join.sh | 18 +-
tools/testing/selftests/net/pmtu.sh | 2 +-
tools/testing/selftests/netfilter/Makefile | 2 +-
.../testing/selftests/netfilter/xt_string.sh | 128 +++++
.../selftests/pidfd/pidfd_fdinfo_test.c | 2 +-
tools/testing/selftests/pidfd/pidfd_test.c | 12 +-
.../testing/selftests/resctrl/resctrl_tests.c | 5 +
tools/testing/selftests/x86/lam.c | 6 +-
tools/tracing/rtla/src/utils.c | 2 +-
650 files changed, 6536 insertions(+), 4426 deletions(-)
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_cfg_services.h
create mode 100644 tools/perf/util/bpf_skel/vmlinux/.gitignore
create mode 100755 tools/testing/selftests/netfilter/xt_string.sh
--
2.20.1
1
602
From: Namjae Jeon <linkinjeon(a)kernel.org>
mainline inclusion
from mainline-v6.3-rc6
commit 3a9b557f44ea8f216aab515a7db20e23f0eb51b9
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I6KEWO?from=project-issue
CVE: CVE-2023-1193
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
When smb2_lock request is canceled by smb2_cancel or smb2_close(),
ksmbd is missing deleting async_request_entry async_requests list.
Because calling init_smb2_rsp_hdr() in smb2_lock() mark ->synchronous
as true and then it will not be deleted in
ksmbd_conn_try_dequeue_request(). This patch add release_async_work() to
release the ones allocated for async work.
Cc: stable(a)vger.kernel.org
Signed-off-by: Namjae Jeon <linkinjeon(a)kernel.org>
Signed-off-by: Steve French <stfrench(a)microsoft.com>
Signed-off-by: Zizhi Wo <wozizhi(a)huawei.com>
---
fs/ksmbd/connection.c | 12 +++++-------
fs/ksmbd/ksmbd_work.h | 2 +-
fs/ksmbd/smb2pdu.c | 28 +++++++++++++++++++++-------
fs/ksmbd/smb2pdu.h | 1 +
4 files changed, 28 insertions(+), 15 deletions(-)
diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
index 96d6653dabc1..2a0e472f0826 100644
--- a/fs/ksmbd/connection.c
+++ b/fs/ksmbd/connection.c
@@ -106,10 +106,8 @@ void ksmbd_conn_enqueue_request(struct ksmbd_work *work)
struct ksmbd_conn *conn = work->conn;
struct list_head *requests_queue = NULL;
- if (conn->ops->get_cmd_val(work) != SMB2_CANCEL_HE) {
+ if (conn->ops->get_cmd_val(work) != SMB2_CANCEL_HE)
requests_queue = &conn->requests;
- work->syncronous = true;
- }
if (requests_queue) {
atomic_inc(&conn->req_running);
@@ -130,14 +128,14 @@ int ksmbd_conn_try_dequeue_request(struct ksmbd_work *work)
if (!work->multiRsp)
atomic_dec(&conn->req_running);
- spin_lock(&conn->request_lock);
if (!work->multiRsp) {
+ spin_lock(&conn->request_lock);
list_del_init(&work->request_entry);
- if (work->syncronous == false)
- list_del_init(&work->async_request_entry);
+ spin_unlock(&conn->request_lock);
+ if (work->asyncronous)
+ release_async_work(work);
ret = 0;
}
- spin_unlock(&conn->request_lock);
wake_up_all(&conn->req_running_q);
return ret;
diff --git a/fs/ksmbd/ksmbd_work.h b/fs/ksmbd/ksmbd_work.h
index 5ece58e40c97..3b1fc8fbf7e1 100644
--- a/fs/ksmbd/ksmbd_work.h
+++ b/fs/ksmbd/ksmbd_work.h
@@ -68,7 +68,7 @@ struct ksmbd_work {
/* Request is encrypted */
bool encrypted:1;
/* Is this SYNC or ASYNC ksmbd_work */
- bool syncronous:1;
+ bool asyncronous:1;
bool need_invalidate_rkey:1;
unsigned int remote_key;
diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
index 29e9c94b2f7b..3d6302853b69 100644
--- a/fs/ksmbd/smb2pdu.c
+++ b/fs/ksmbd/smb2pdu.c
@@ -519,12 +519,6 @@ int init_smb2_rsp_hdr(struct ksmbd_work *work)
rsp_hdr->SessionId = rcv_hdr->SessionId;
memcpy(rsp_hdr->Signature, rcv_hdr->Signature, 16);
- work->syncronous = true;
- if (work->async_id) {
- ksmbd_release_id(&conn->async_ida, work->async_id);
- work->async_id = 0;
- }
-
return 0;
}
@@ -682,7 +676,7 @@ int setup_async_work(struct ksmbd_work *work, void (*fn)(void **), void **arg)
pr_err("Failed to alloc async message id\n");
return id;
}
- work->syncronous = false;
+ work->asyncronous = true;
work->async_id = id;
rsp_hdr->Id.AsyncId = cpu_to_le64(id);
@@ -702,6 +696,24 @@ int setup_async_work(struct ksmbd_work *work, void (*fn)(void **), void **arg)
return 0;
}
+void release_async_work(struct ksmbd_work *work)
+{
+ struct ksmbd_conn *conn = work->conn;
+
+ spin_lock(&conn->request_lock);
+ list_del_init(&work->async_request_entry);
+ spin_unlock(&conn->request_lock);
+
+ work->asyncronous = 0;
+ work->cancel_fn = NULL;
+ kfree(work->cancel_argv);
+ work->cancel_argv = NULL;
+ if (work->async_id) {
+ ksmbd_release_id(&conn->async_ida, work->async_id);
+ work->async_id = 0;
+ }
+}
+
void smb2_send_interim_resp(struct ksmbd_work *work, __le32 status)
{
struct smb2_hdr *rsp_hdr;
@@ -7091,6 +7103,7 @@ int smb2_lock(struct ksmbd_work *work)
work->send_no_response = 1;
goto out;
}
+
init_smb2_rsp_hdr(work);
smb2_set_err_rsp(work);
rsp->hdr.Status =
@@ -7107,6 +7120,7 @@ int smb2_lock(struct ksmbd_work *work)
spin_lock(&fp->f_lock);
list_del(&work->fp_entry);
spin_unlock(&fp->f_lock);
+ release_async_work(work);
goto retry;
} else if (!rc) {
spin_lock(&work->conn->llist_lock);
diff --git a/fs/ksmbd/smb2pdu.h b/fs/ksmbd/smb2pdu.h
index ba255fd2f77c..089eb93a166f 100644
--- a/fs/ksmbd/smb2pdu.h
+++ b/fs/ksmbd/smb2pdu.h
@@ -1664,6 +1664,7 @@ int find_matching_smb2_dialect(int start_index, __le16 *cli_dialects,
struct file_lock *smb_flock_init(struct file *f);
int setup_async_work(struct ksmbd_work *work, void (*fn)(void **),
void **arg);
+void release_async_work(struct ksmbd_work *work);
void smb2_send_interim_resp(struct ksmbd_work *work, __le32 status);
struct channel *lookup_chann_list(struct ksmbd_session *sess,
struct ksmbd_conn *conn);
--
2.39.2
2
1
From: Namjae Jeon <linkinjeon(a)kernel.org>
mainline inclusion
from mainline-v6.3-rc6
commit 3a9b557f44ea8f216aab515a7db20e23f0eb51b9
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I6KEWO?from=project-issue
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
When smb2_lock request is canceled by smb2_cancel or smb2_close(),
ksmbd is missing deleting async_request_entry async_requests list.
Because calling init_smb2_rsp_hdr() in smb2_lock() mark ->synchronous
as true and then it will not be deleted in
ksmbd_conn_try_dequeue_request(). This patch add release_async_work() to
release the ones allocated for async work.
Cc: stable(a)vger.kernel.org
Signed-off-by: Namjae Jeon <linkinjeon(a)kernel.org>
Signed-off-by: Steve French <stfrench(a)microsoft.com>
Signed-off-by: Zizhi Wo <wozizhi(a)huawei.com>
---
fs/ksmbd/connection.c | 12 +++++-------
fs/ksmbd/ksmbd_work.h | 2 +-
fs/ksmbd/smb2pdu.c | 28 +++++++++++++++++++++-------
fs/ksmbd/smb2pdu.h | 1 +
4 files changed, 28 insertions(+), 15 deletions(-)
diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
index 96d6653dabc1..2a0e472f0826 100644
--- a/fs/ksmbd/connection.c
+++ b/fs/ksmbd/connection.c
@@ -106,10 +106,8 @@ void ksmbd_conn_enqueue_request(struct ksmbd_work *work)
struct ksmbd_conn *conn = work->conn;
struct list_head *requests_queue = NULL;
- if (conn->ops->get_cmd_val(work) != SMB2_CANCEL_HE) {
+ if (conn->ops->get_cmd_val(work) != SMB2_CANCEL_HE)
requests_queue = &conn->requests;
- work->syncronous = true;
- }
if (requests_queue) {
atomic_inc(&conn->req_running);
@@ -130,14 +128,14 @@ int ksmbd_conn_try_dequeue_request(struct ksmbd_work *work)
if (!work->multiRsp)
atomic_dec(&conn->req_running);
- spin_lock(&conn->request_lock);
if (!work->multiRsp) {
+ spin_lock(&conn->request_lock);
list_del_init(&work->request_entry);
- if (work->syncronous == false)
- list_del_init(&work->async_request_entry);
+ spin_unlock(&conn->request_lock);
+ if (work->asyncronous)
+ release_async_work(work);
ret = 0;
}
- spin_unlock(&conn->request_lock);
wake_up_all(&conn->req_running_q);
return ret;
diff --git a/fs/ksmbd/ksmbd_work.h b/fs/ksmbd/ksmbd_work.h
index 5ece58e40c97..3b1fc8fbf7e1 100644
--- a/fs/ksmbd/ksmbd_work.h
+++ b/fs/ksmbd/ksmbd_work.h
@@ -68,7 +68,7 @@ struct ksmbd_work {
/* Request is encrypted */
bool encrypted:1;
/* Is this SYNC or ASYNC ksmbd_work */
- bool syncronous:1;
+ bool asyncronous:1;
bool need_invalidate_rkey:1;
unsigned int remote_key;
diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
index 29e9c94b2f7b..3d6302853b69 100644
--- a/fs/ksmbd/smb2pdu.c
+++ b/fs/ksmbd/smb2pdu.c
@@ -519,12 +519,6 @@ int init_smb2_rsp_hdr(struct ksmbd_work *work)
rsp_hdr->SessionId = rcv_hdr->SessionId;
memcpy(rsp_hdr->Signature, rcv_hdr->Signature, 16);
- work->syncronous = true;
- if (work->async_id) {
- ksmbd_release_id(&conn->async_ida, work->async_id);
- work->async_id = 0;
- }
-
return 0;
}
@@ -682,7 +676,7 @@ int setup_async_work(struct ksmbd_work *work, void (*fn)(void **), void **arg)
pr_err("Failed to alloc async message id\n");
return id;
}
- work->syncronous = false;
+ work->asyncronous = true;
work->async_id = id;
rsp_hdr->Id.AsyncId = cpu_to_le64(id);
@@ -702,6 +696,24 @@ int setup_async_work(struct ksmbd_work *work, void (*fn)(void **), void **arg)
return 0;
}
+void release_async_work(struct ksmbd_work *work)
+{
+ struct ksmbd_conn *conn = work->conn;
+
+ spin_lock(&conn->request_lock);
+ list_del_init(&work->async_request_entry);
+ spin_unlock(&conn->request_lock);
+
+ work->asyncronous = 0;
+ work->cancel_fn = NULL;
+ kfree(work->cancel_argv);
+ work->cancel_argv = NULL;
+ if (work->async_id) {
+ ksmbd_release_id(&conn->async_ida, work->async_id);
+ work->async_id = 0;
+ }
+}
+
void smb2_send_interim_resp(struct ksmbd_work *work, __le32 status)
{
struct smb2_hdr *rsp_hdr;
@@ -7091,6 +7103,7 @@ int smb2_lock(struct ksmbd_work *work)
work->send_no_response = 1;
goto out;
}
+
init_smb2_rsp_hdr(work);
smb2_set_err_rsp(work);
rsp->hdr.Status =
@@ -7107,6 +7120,7 @@ int smb2_lock(struct ksmbd_work *work)
spin_lock(&fp->f_lock);
list_del(&work->fp_entry);
spin_unlock(&fp->f_lock);
+ release_async_work(work);
goto retry;
} else if (!rc) {
spin_lock(&work->conn->llist_lock);
diff --git a/fs/ksmbd/smb2pdu.h b/fs/ksmbd/smb2pdu.h
index ba255fd2f77c..089eb93a166f 100644
--- a/fs/ksmbd/smb2pdu.h
+++ b/fs/ksmbd/smb2pdu.h
@@ -1664,6 +1664,7 @@ int find_matching_smb2_dialect(int start_index, __le16 *cli_dialects,
struct file_lock *smb_flock_init(struct file *f);
int setup_async_work(struct ksmbd_work *work, void (*fn)(void **),
void **arg);
+void release_async_work(struct ksmbd_work *work);
void smb2_send_interim_resp(struct ksmbd_work *work, __le32 status);
struct channel *lookup_chann_list(struct ksmbd_session *sess,
struct ksmbd_conn *conn);
--
2.39.2
2
1

[PATCH OLK-5.10 0/2] ext4: mitigatin cacheline false sharing in struct ext4_inode_info
by Zeng Heng 23 Nov '23
by Zeng Heng 23 Nov '23
23 Nov '23
Yang Yingliang (2):
ext4: mitigatin cacheline false sharing in struct ext4_inode_info
enable MITIGATION_FALSE_SHARING by default
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
fs/ext4/Kconfig | 9 +++++++++
fs/ext4/ext4.h | 5 +++++
fs/ext4/super.c | 4 ++++
5 files changed, 20 insertions(+)
--
2.25.1
2
7

23 Nov '23
LoongArch inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8IRUM
------------------------------------------
Signed-off-by: Hongchen Zhang <zhanghongchen(a)loongson.cn>
---
arch/loongarch/configs/loongson3_defconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
index d2beed31ab74..2f917ae20f76 100644
--- a/arch/loongarch/configs/loongson3_defconfig
+++ b/arch/loongarch/configs/loongson3_defconfig
@@ -44,6 +44,7 @@ CONFIG_CPU_HAS_LSX=y
CONFIG_CPU_HAS_LASX=y
CONFIG_NR_CPUS=256
CONFIG_NUMA=y
+# CONFIG_VA_BITS_40 is not set
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
--
2.33.0
2
5

23 Nov '23
LoongArch inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8IRUM
------------------------------------------
Signed-off-by: Hongchen Zhang <zhanghongchen(a)loongson.cn>
---
arch/loongarch/configs/loongson3_defconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
index d2beed31ab74..2f917ae20f76 100644
--- a/arch/loongarch/configs/loongson3_defconfig
+++ b/arch/loongarch/configs/loongson3_defconfig
@@ -44,6 +44,7 @@ CONFIG_CPU_HAS_LSX=y
CONFIG_CPU_HAS_LASX=y
CONFIG_NR_CPUS=256
CONFIG_NUMA=y
+# CONFIG_VA_BITS_40 is not set
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
--
2.33.0
2
5
Backport linux-6.6.1 LTS patches from upstream.
git cherry-pick v6.6..v6.6.1~1 -s
No conflicts.
Andrey Konovalov (1):
usb: raw-gadget: properly handle interrupted requests
Badhri Jagan Sridharan (1):
usb: typec: tcpm: Add additional checks for contaminant
Cameron Williams (9):
tty: 8250: Remove UC-257 and UC-431
tty: 8250: Add support for additional Brainboxes UC cards
tty: 8250: Add support for Brainboxes UP cards
tty: 8250: Add support for Intashield IS-100
tty: 8250: Fix port count of PX-257
tty: 8250: Fix up PX-803/PX-857
tty: 8250: Add support for additional Brainboxes PX cards
tty: 8250: Add support for Intashield IX cards
tty: 8250: Add Brainboxes Oxford Semiconductor-based quirks
Daniel Starke (1):
tty: n_gsm: fix race condition in status line change on dead
connections
Francesco Dolcini (1):
dt-bindings: serial: rs485: Add rs485-rts-active-high
Ian Rogers (1):
perf evlist: Avoid frequency mode for the dummy event
Janne Grunau (1):
Bluetooth: hci_bcm4377: Mark bcm4378/bcm4387 as BROKEN_LE_CODED
Jimmy Hu (1):
usb: typec: tcpm: Fix NULL pointer dereference in tcpm_pd_svdm()
Kai-Heng Feng (1):
power: supply: core: Use blocking_notifier_call_chain to avoid RCU
complaint
LihaSika (1):
usb: storage: set 1.50 as the lower bcdDevice for older "Super Top"
compatibility
Mark Hasemeyer (2):
ALSA: hda: intel-dsp-config: Fix JSL Chromebook quirk detection
ASoC: SOF: sof-pci-dev: Fix community key quirk detection
Max McCarthy (1):
ALSA: usb-audio: add quirk flag to enable native DSD for McIntosh
devices
Nicholas Kazlauskas (1):
drm/amd/display: Don't use fsleep for PSR exit waits
Siddharth Vadapalli (1):
misc: pci_endpoint_test: Add deviceID for J721S2 PCIe EP device
support
Steven Rostedt (Google) (5):
tracing: Have trace_event_file have ref counters
eventfs: Remove "is_freed" union with rcu head
eventfs: Save ownership and mode
eventfs: Delete eventfs_inode when the last dentry is freed
eventfs: Use simple_recursive_removal() to clean up dentries
Tony Lindgren (1):
serial: core: Fix runtime PM handling for pending tx
Vicki Pfau (1):
PCI: Prevent xHCI driver from claiming AMD VanGogh USB3 DRD device
.../devicetree/bindings/serial/rs485.yaml | 4 +
drivers/bluetooth/hci_bcm4377.c | 5 +
drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c | 3 +-
drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c | 3 +-
drivers/misc/pci_endpoint_test.c | 4 +
drivers/pci/quirks.c | 8 +-
drivers/power/supply/power_supply_core.c | 8 +-
drivers/tty/n_gsm.c | 2 +
drivers/tty/serial/8250/8250_pci.c | 327 +++++++++++++++++-
drivers/tty/serial/serial_core.c | 2 +-
drivers/usb/gadget/legacy/raw_gadget.c | 26 +-
drivers/usb/storage/unusual_cypress.h | 2 +-
drivers/usb/typec/tcpm/tcpm.c | 5 +
fs/tracefs/event_inode.c | 288 +++++++++------
include/linux/pci_ids.h | 1 +
include/linux/power_supply.h | 2 +-
include/linux/trace_events.h | 4 +
kernel/trace/trace.c | 15 +
kernel/trace/trace.h | 3 +
kernel/trace/trace_events.c | 31 +-
kernel/trace/trace_events_filter.c | 3 +
sound/hda/intel-dsp-config.c | 6 +
sound/soc/sof/sof-pci-dev.c | 7 +
sound/usb/quirks.c | 2 +
tools/perf/util/evlist.c | 5 +-
25 files changed, 622 insertions(+), 144 deletions(-)
--
2.20.1
1
30

22 Nov '23
诚邀您参加openEuler Kernel SIG 双周例会,欢迎您自主申报例会议题~
-----原始约会-----
发件人: openEuler conference <public(a)openeuler.org>
发送时间: 2023年11月10日 14:57
收件人: dev@openeuler.org,kernel-discuss@openeuler.org,kernel@openeuler.org
主题: openEuler Kernel SIG 双周例会
时间: 2023年11月24日星期五 14:00-15:45(UTC+08:00) 北京,重庆,香港特别行政区,乌鲁木齐。
地点:
您好!
Kernel SIG 邀请您参加 2023-11-24 14:00 召开的WeLink会议(自动录制)
会议主题:openEuler Kernel SIG 双周例会
会议内容:
1. 进展update
2. 议题征集中
(新增议题可回复邮件申请,也可直接填至会议看板)
会议链接:https://bmeeting.huaweicloud.com:36443/#/j/963265866
会议纪要:https://etherpad.openeuler.org/p/Kernel-meetings
温馨提醒:建议接入会议后修改参会人的姓名,也可以使用您在gitee.com的ID
更多资讯尽在:https://openeuler.org/zh/
Hello!
openEuler Kernel SIG invites you to attend the WeLink conference(auto recording) will be held at 2023-11-24 14:00,
The subject of the conference is openEuler Kernel SIG 双周例会,
Summary:
1. 进展update
2. 议题征集中
(新增议题可回复邮件申请,也可直接填至会议看板)
You can join the meeting at https://bmeeting.huaweicloud.com:36443/#/j/963265866.
Add topics at https://etherpad.openeuler.org/p/Kernel-meetings.
Note: You are advised to change the participant name after joining the conference or use your ID at gitee.com.
More information: https://openeuler.org/en/
1
0
Chen Wandun (3):
mm: disable psi cgroup v1 by default
mm: add config isolation for psi under cgroup v1
psi: dont alloc memory for psi by default
Chengming Zhou (3):
sched/psi: Fix periodic aggregation shut off
sched/psi: Optimize task switch inside shared cgroups again
sched/psi: Add PSI_IRQ to track IRQ/SOFTIRQ pressure
Hao Jia (1):
sched/psi: Zero the memory of struct psi_group
Johannes Weiner (1):
sched/psi: Remove NR_ONCPU task accounting
Lu Jialin (11):
psi: support irq.pressure under cgroup v1
psi: enable CONFIG_PSI_CGROUP_V1 in openeuler_defconfig
psi: update psi irqtime when the irq delta is nozero
memcg: split async memcg reclaim from reclaim_high
psi: add struct psi_group_ext
PSI: Introduce fine grained stall time collect for cgroup reclaim
PSI: Introduce avgs and total calculation for cgroup reclaim
PSI: Introduce pressure.stat in psi
PSI: add more memory fine grained stall tracking in pressure.stat
add cpu fine grained stall tracking in pressure.stat
PSI: enable CONFIG_PSI_FINE_GRAINED in openeuler_defconfig
Suren Baghdasaryan (1):
psi: Fix "defined but not used" warnings when CONFIG_PROC_FS=n
Documentation/admin-guide/cgroup-v2.rst | 6 +
arch/arm64/configs/openeuler_defconfig | 2 +
arch/x86/configs/openeuler_defconfig | 2 +
block/blk-cgroup.c | 2 +-
block/blk-core.c | 2 +-
include/linux/cgroup-defs.h | 6 +-
include/linux/cgroup.h | 2 +-
include/linux/psi.h | 9 +-
include/linux/psi_types.h | 99 +++-
include/linux/sched.h | 4 +
init/Kconfig | 20 +
kernel/cgroup/cgroup.c | 64 ++-
kernel/sched/core.c | 2 +
kernel/sched/cpuacct.c | 6 +-
kernel/sched/fair.c | 6 -
kernel/sched/psi.c | 690 ++++++++++++++++++++----
kernel/sched/stats.h | 9 +
mm/compaction.c | 2 +-
mm/filemap.c | 4 +-
mm/memcontrol.c | 92 ++--
mm/page_alloc.c | 6 +
mm/page_io.c | 3 +
mm/vmscan.c | 5 +-
23 files changed, 872 insertions(+), 171 deletions(-)
--
2.34.1
1
20

22 Nov '23
From: Junxian Huang <huangjunxian6(a)hisilicon.com>
driver inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8HZ7W
--------------------------------------------------------------------------
As dmac is already resolved in kernel while creating AH, there is no
need to repeat the resolving in userspace. Prioritizes getting dmac
from kernel driver, unless kernel driver didn't response one.
Signed-off-by: Junxian Huang <huangjunxian6(a)hisilicon.com>
---
kernel-headers/rdma/hns-abi.h | 2 +-
providers/hns/hns_roce_u_verbs.c | 10 +++++++---
2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/kernel-headers/rdma/hns-abi.h b/kernel-headers/rdma/hns-abi.h
index 785c4e1..8581df9 100644
--- a/kernel-headers/rdma/hns-abi.h
+++ b/kernel-headers/rdma/hns-abi.h
@@ -135,7 +135,7 @@ struct hns_roce_ib_create_qp_resp {
struct hns_roce_ib_create_ah_resp {
__u8 priority;
__u8 tc_mode;
- __u8 reserved[6];
+ __u8 dmac[6];
};
struct hns_roce_ib_modify_qp_resp {
diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index 5e46f89..c906632 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -2210,9 +2210,13 @@ struct ibv_ah *hns_roce_u_create_ah(struct ibv_pd *pd, struct ibv_ah_attr *attr)
if (ibv_cmd_create_ah(pd, &ah->ibv_ah, attr, &resp.ibv_resp, sizeof(resp)))
goto err;
- if (hr_dev->link_type != HNS_DEV_LINK_TYPE_UB &&
- ibv_resolve_eth_l2_from_gid(pd->context, attr, ah->av.mac, NULL))
- goto err;
+ if (hr_dev->link_type != HNS_DEV_LINK_TYPE_UB) {
+ if (memcmp(ah->av.mac, resp.dmac, ETH_ALEN))
+ memcpy(ah->av.mac, resp.dmac, ETH_ALEN);
+ else if (ibv_resolve_eth_l2_from_gid(pd->context, attr,
+ ah->av.mac, NULL))
+ goto err;
+ }
if (resp.tc_mode == HNS_ROCE_TC_MAP_MODE_DSCP)
ah->av.sl = resp.priority;
--
2.30.0
1
0
From: Juan Zhou <zhoujuan51(a)h-partners.com>
Junxian Huang (5):
RDMA/hns: Fix incorrect congest type configuration
RDMA/hns: Fix a missing default value for invalid congest type
RDMA/hns: Cleanup of RoCE Bonding driver
RDMA/hns: Add a max length of gid table
RDMA/hns: Response dmac to userspace
Luoyouming (2):
RDMA/hns: Fix congestions control algorithm type for UD
RDMA/hns: Fix a missing validation check for sl
wenglianfa (1):
RDMA/hns: Fix simultaneous reset and resource deregistration
drivers/infiniband/hw/hns/hns_roce_ah.c | 10 +++
drivers/infiniband/hw/hns/hns_roce_bond.c | 10 +--
drivers/infiniband/hw/hns/hns_roce_cq.c | 26 +++++-
drivers/infiniband/hw/hns/hns_roce_db.c | 30 +++++--
drivers/infiniband/hw/hns/hns_roce_device.h | 34 +++++++-
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 27 ++++--
drivers/infiniband/hw/hns/hns_roce_main.c | 8 ++
drivers/infiniband/hw/hns/hns_roce_mr.c | 92 ++++++++++++++++++++-
drivers/infiniband/hw/hns/hns_roce_qp.c | 61 ++++++++++----
drivers/infiniband/hw/hns/hns_roce_srq.c | 39 +++++++--
include/uapi/rdma/hns-abi.h | 2 +-
11 files changed, 292 insertions(+), 47 deletions(-)
--
2.30.0
1
8
Chen Wandun (3):
mm: disable psi cgroup v1 by default
mm: add config isolation for psi under cgroup v1
psi: dont alloc memory for psi by default
Chengming Zhou (3):
sched/psi: Fix periodic aggregation shut off
sched/psi: Optimize task switch inside shared cgroups again
sched/psi: Add PSI_IRQ to track IRQ/SOFTIRQ pressure
Hao Jia (1):
sched/psi: Zero the memory of struct psi_group
Johannes Weiner (1):
sched/psi: Remove NR_ONCPU task accounting
Lu Jialin (11):
psi: support irq.pressure under cgroup v1
psi: enable CONFIG_PSI_CGROUP_V1 in openeuler_defconfig
psi: update psi irqtime when the irq delta is nozero
memcg: split async memcg reclaim from reclaim_high
psi: add struct psi_group_ext
PSI: Introduce fine grained stall time collect for cgroup reclaim
PSI: Introduce avgs and total calculation for cgroup reclaim
PSI: Introduce pressure.stat in psi
PSI: add more memory fine grained stall tracking in pressure.stat
add cpu fine grained stall tracking in pressure.stat
PSI: enable CONFIG_PSI_FINE_GRAINED in openeuler_defconfig
Suren Baghdasaryan (1):
psi: Fix "defined but not used" warnings when CONFIG_PROC_FS=n
Documentation/admin-guide/cgroup-v2.rst | 6 +
arch/arm64/configs/openeuler_defconfig | 2 +
arch/x86/configs/openeuler_defconfig | 2 +
block/blk-cgroup.c | 2 +-
block/blk-core.c | 2 +-
include/linux/cgroup-defs.h | 6 +-
include/linux/cgroup.h | 2 +-
include/linux/psi.h | 9 +-
include/linux/psi_types.h | 99 +++-
include/linux/sched.h | 4 +
init/Kconfig | 20 +
kernel/cgroup/cgroup.c | 64 ++-
kernel/sched/core.c | 2 +
kernel/sched/cpuacct.c | 6 +-
kernel/sched/fair.c | 6 -
kernel/sched/psi.c | 690 ++++++++++++++++++++----
kernel/sched/stats.h | 9 +
mm/compaction.c | 2 +-
mm/filemap.c | 4 +-
mm/memcontrol.c | 92 ++--
mm/page_alloc.c | 6 +
mm/page_io.c | 3 +
mm/vmscan.c | 5 +-
23 files changed, 872 insertions(+), 171 deletions(-)
--
2.34.1
1
20
Christoph Hellwig (5):
block: fold register_disk into device_add_disk
block: call blk_integrity_add earlier in device_add_disk
block: add the events* attributes to disk_attrs
block: fix error unwinding in device_add_disk
block: clear ->slave_dir when dropping the main slave_dir reference
Luis Chamberlain (4):
block: return errors from blk_integrity_add
block: return errors from disk_alloc_events
block: add error handling for device_add_disk / add_disk
block: fix device_add_disk() kobject_create_and_add() error handling
Tetsuo Handa (1):
block: check minor range in device_add_disk()
Yu Kuai (1):
block: fix memory leak for elevator on add_disk failure
Zhong Jinghua (6):
block: return errors from blk_register_region
block: Fix the kabi change in device_add_disk
block: Fix the kabi change on blk_register_region
block: call blk_get_queue earlier in __device_add_disk
block: Fix minor range check in device_add_disk()
block: Set memalloc_noio to false in the error path
block/blk.h | 5 +-
include/linux/genhd.h | 13 +++
block/blk-integrity.c | 12 ++-
block/genhd.c | 246 +++++++++++++++++++++++++-----------------
4 files changed, 173 insertions(+), 103 deletions(-)
--
2.39.2
1
17

[PATCH OLK-6.6 v2] dm ioctl: add DMINFO() to track dm device create/remove
by Li Lingfeng 21 Nov '23
by Li Lingfeng 21 Nov '23
21 Nov '23
From: Luo Meng <luomeng12(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8HVC5
CVE: NA
--------------------------------
Add DMINFO() to help tracking device creation/removal success.
Signed-off-by: Luo Meng <luomeng12(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
drivers/md/dm-ioctl.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 21ebb6c39394..5efe0193b2e8 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -327,6 +327,9 @@ static struct dm_table *__hash_remove(struct hash_cell *hc)
table = NULL;
if (hc->new_map)
table = hc->new_map;
+
+ DMINFO("%s[%i]: %s (%s) is removed successfully",
+ current->comm, current->pid, hc->md->disk->disk_name, hc->name);
dm_put(hc->md);
free_cell(hc);
@@ -880,6 +883,7 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
{
int r, m = DM_ANY_MINOR;
struct mapped_device *md;
+ struct hash_cell *hc;
r = check_name(param->name);
if (r)
@@ -903,6 +907,13 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
__dev_status(md, param);
+ mutex_lock(&dm_hash_cells_mutex);
+ hc = dm_get_mdptr(md);
+ if (hc)
+ DMINFO("%s[%i]: %s (%s) is created successfully",
+ current->comm, current->pid, md->disk->disk_name, hc->name);
+
+ mutex_unlock(&dm_hash_cells_mutex);
dm_put(md);
return 0;
--
2.31.1
2
5

[PATCH V2 OLK-5.10] vhost-vdpa: allow set feature VHOST_F_LOG_ALL when been negotiated.
by Jiang Dongxu 21 Nov '23
by Jiang Dongxu 21 Nov '23
21 Nov '23
From: jiangdongxu <jiangdongxu1(a)huawei.com>
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I86ITO
----------------------------------------------------------------------
It's not allowed to change the features after vhost-vdpa devices have
been negotiated. But log start/end is allowed. Add exception to feature
VHOST_F_LOG_ALL.
Signed-off-by: jiangdongxu <jiangdongxu1(a)huawei.com>
---
drivers/vhost/vdpa.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 8e3bf64123ae..eec8027dfc4f 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -425,16 +425,19 @@ static long vhost_vdpa_set_features(struct vhost_vdpa *v, u64 __user *featurep)
u64 features;
int i;
+ if (copy_from_user(&features, featurep, sizeof(features)))
+ return -EFAULT;
+
+ actual_features = ops->get_driver_features(vdpa);
+
/*
* It's not allowed to change the features after they have
- * been negotiated.
+ * been negotiated. But log start/end is allowed.
*/
- if (ops->get_status(vdpa) & VIRTIO_CONFIG_S_FEATURES_OK)
+ if ((ops->get_status(vdpa) & VIRTIO_CONFIG_S_FEATURES_OK) &&
+ (features & ~(BIT_ULL(VHOST_F_LOG_ALL))) != actual_features)
return -EBUSY;
- if (copy_from_user(&features, featurep, sizeof(features)))
- return -EFAULT;
-
if (vdpa_set_features(vdpa, features))
return -EINVAL;
--
2.27.0
2
5

21 Nov '23
From: Luo Meng <luomeng12(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8HVC5
CVE: NA
--------------------------------
Add DMINFO() to help tracking device creation/removal success.
Signed-off-by: Luo Meng <luomeng12(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
drivers/md/dm-ioctl.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 21ebb6c39394..5efe0193b2e8 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -327,6 +327,9 @@ static struct dm_table *__hash_remove(struct hash_cell *hc)
table = NULL;
if (hc->new_map)
table = hc->new_map;
+
+ DMINFO("%s[%i]: %s (%s) is removed successfully",
+ current->comm, current->pid, hc->md->disk->disk_name, hc->name);
dm_put(hc->md);
free_cell(hc);
@@ -880,6 +883,7 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
{
int r, m = DM_ANY_MINOR;
struct mapped_device *md;
+ struct hash_cell *hc;
r = check_name(param->name);
if (r)
@@ -903,6 +907,13 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
__dev_status(md, param);
+ mutex_lock(&dm_hash_cells_mutex);
+ hc = dm_get_mdptr(md);
+ if (hc)
+ DMINFO("%s[%i]: %s (%s) is created successfully",
+ current->comm, current->pid, md->disk->disk_name, hc->name);
+
+ mutex_unlock(&dm_hash_cells_mutex);
dm_put(md);
return 0;
--
2.31.1
1
0

[PATCH OLK-5.10] vhost-vdpa: allow set feature VHOST_F_LOG_ALL when been negotiated.
by Jiang Dongxu 21 Nov '23
by Jiang Dongxu 21 Nov '23
21 Nov '23
From: jiangdongxu <jiangdongxu1(a)huawei.com>
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I86ITO
----------------------------------------------------------------------
It's not allowed to change the features after vhost-vdpa devices have
been negotiated. But log start/end is allowed. Add exception to feature
VHOST_F_LOG_ALL.
Signed-off-by: jiangdongxu <jiangdongxu1(a)huawei.com>
---
drivers/vhost/vdpa.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 8e3bf64123ae..eec8027dfc4f 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -425,16 +425,19 @@ static long vhost_vdpa_set_features(struct vhost_vdpa *v, u64 __user *featurep)
u64 features;
int i;
+ if (copy_from_user(&features, featurep, sizeof(features)))
+ return -EFAULT;
+
+ actual_features = ops->get_driver_features(vdpa);
+
/*
* It's not allowed to change the features after they have
- * been negotiated.
+ * been negotiated. But log start/end is allowed.
*/
- if (ops->get_status(vdpa) & VIRTIO_CONFIG_S_FEATURES_OK)
+ if ((ops->get_status(vdpa) & VIRTIO_CONFIG_S_FEATURES_OK) &&
+ (features & ~(BIT_ULL(VHOST_F_LOG_ALL))) != actual_features)
return -EBUSY;
- if (copy_from_user(&features, featurep, sizeof(features)))
- return -EFAULT;
-
if (vdpa_set_features(vdpa, features))
return -EINVAL;
--
2.27.0
2
5

21 Nov '23
From: Luo Meng <luomeng12(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8HVC5
CVE: NA
--------------------------------
Add DMINFO() to help tracking device creation/removal success.
Signed-off-by: Luo Meng <luomeng12(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
drivers/md/dm-ioctl.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 21ebb6c39394..5efe0193b2e8 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -327,6 +327,9 @@ static struct dm_table *__hash_remove(struct hash_cell *hc)
table = NULL;
if (hc->new_map)
table = hc->new_map;
+
+ DMINFO("%s[%i]: %s (%s) is removed successfully",
+ current->comm, current->pid, hc->md->disk->disk_name, hc->name);
dm_put(hc->md);
free_cell(hc);
@@ -880,6 +883,7 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
{
int r, m = DM_ANY_MINOR;
struct mapped_device *md;
+ struct hash_cell *hc;
r = check_name(param->name);
if (r)
@@ -903,6 +907,13 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
__dev_status(md, param);
+ mutex_lock(&dm_hash_cells_mutex);
+ hc = dm_get_mdptr(md);
+ if (hc)
+ DMINFO("%s[%i]: %s (%s) is created successfully",
+ current->comm, current->pid, md->disk->disk_name, hc->name);
+
+ mutex_unlock(&dm_hash_cells_mutex);
dm_put(md);
return 0;
--
2.31.1
2
1

[PATCH OLK-5.10] vhost_vdpa: add reset state params to indicate reset level
by Jiang Dongxu 21 Nov '23
by Jiang Dongxu 21 Nov '23
21 Nov '23
From: jiangdongxu1 <jiangdongxu1(a)huawei.com>
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I86ITO
----------------------------------------------------------------------
Add the interface parameter state to the vdpa reset interface, which
respectively identifies the reset when the device is turned on/off
and the virtio reset issued by the virtual machine.
Signed-off-by: jiangdongxu1 <jiangdongxu1(a)huawei.com>
---
drivers/vhost/vdpa.c | 10 +++++-----
include/linux/vdpa.h | 16 +++++++++++++---
2 files changed, 18 insertions(+), 8 deletions(-)
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index eec8027dfc4f..eed51d004531 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -212,13 +212,13 @@ static void vhost_vdpa_unsetup_vq_irq(struct vhost_vdpa *v, u16 qid)
irq_bypass_unregister_producer(&vq->call_ctx.producer);
}
-static int vhost_vdpa_reset(struct vhost_vdpa *v)
+static int vhost_vdpa_reset(struct vhost_vdpa *v, int state)
{
struct vdpa_device *vdpa = v->vdpa;
v->in_batch = 0;
- return vdpa_reset(vdpa);
+ return vdpa_reset(vdpa, state);
}
static long vhost_vdpa_bind_mm(struct vhost_vdpa *v)
@@ -297,7 +297,7 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 __user *statusp)
vhost_vdpa_unsetup_vq_irq(v, i);
if (status == 0) {
- ret = vdpa_reset(vdpa);
+ ret = vdpa_reset(vdpa, VDPA_DEV_RESET_VIRTIO);
if (ret)
return ret;
} else
@@ -1496,7 +1496,7 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep)
return r;
nvqs = v->nvqs;
- r = vhost_vdpa_reset(v);
+ r = vhost_vdpa_reset(v, VDPA_DEV_RESET_OPEN);
if (r)
goto err;
@@ -1542,7 +1542,7 @@ static int vhost_vdpa_release(struct inode *inode, struct file *filep)
mutex_lock(&d->mutex);
filep->private_data = NULL;
vhost_vdpa_clean_irq(v);
- vhost_vdpa_reset(v);
+ vhost_vdpa_reset(v, VDPA_DEV_RESET_CLOSE);
vhost_dev_stop(&v->vdev);
vhost_vdpa_unbind_mm(v);
vhost_vdpa_config_put(v);
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 83df2380eeb9..ef53829c7a7a 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -120,6 +120,12 @@ struct vdpa_map_file {
u64 offset;
};
+enum vdpa_reset_state {
+ VDPA_DEV_RESET_VIRTIO = 0,
+ VDPA_DEV_RESET_OPEN = 1,
+ VDPA_DEV_RESET_CLOSE = 2,
+};
+
/**
* struct vdpa_config_ops - operations for configuring a vDPA device.
* Note: vDPA device drivers are required to implement all of the
@@ -218,6 +224,10 @@ struct vdpa_map_file {
* @status: virtio device status
* @reset: Reset device
* @vdev: vdpa device
+ * @state: state for reset
+ * VDPA_DEV_RESET_VIRTIO for virtio reset
+ * VDPA_DEV_RESET_OPEN for vhost-vdpa device open
+ * VDPA_DEV_RESET_CLOSE for vhost-vdpa device close
* Returns integer: success (0) or error (< 0)
* @suspend: Suspend the device (optional)
* @vdev: vdpa device
@@ -359,7 +369,7 @@ struct vdpa_config_ops {
u32 (*get_vendor_id)(struct vdpa_device *vdev);
u8 (*get_status)(struct vdpa_device *vdev);
void (*set_status)(struct vdpa_device *vdev, u8 status);
- int (*reset)(struct vdpa_device *vdev);
+ int (*reset)(struct vdpa_device *vdev, int state);
int (*suspend)(struct vdpa_device *vdev);
int (*resume)(struct vdpa_device *vdev);
size_t (*get_config_size)(struct vdpa_device *vdev);
@@ -482,14 +492,14 @@ static inline struct device *vdpa_get_dma_dev(struct vdpa_device *vdev)
return vdev->dma_dev;
}
-static inline int vdpa_reset(struct vdpa_device *vdev)
+static inline int vdpa_reset(struct vdpa_device *vdev, int state)
{
const struct vdpa_config_ops *ops = vdev->config;
int ret;
down_write(&vdev->cf_lock);
vdev->features_valid = false;
- ret = ops->reset(vdev);
+ ret = ops->reset(vdev, state);
up_write(&vdev->cf_lock);
return ret;
}
--
2.27.0
2
4

[PATCH OLK-5.10] vhost-vdpa: allow set feature VHOST_F_LOG_ALL when been negotiated.
by Jiang Dongxu 21 Nov '23
by Jiang Dongxu 21 Nov '23
21 Nov '23
From: jiangdongxu1 <jiangdongxu1(a)huawei.com>
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I86ITO
----------------------------------------------------------------------
It's not allowed to change the features after vhost-vdpa devices have
been negotiated. But log start/end is allowed. Add exception to feature
VHOST_F_LOG_ALL.
---
drivers/vhost/vdpa.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 8e3bf64123ae..eec8027dfc4f 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -425,16 +425,19 @@ static long vhost_vdpa_set_features(struct vhost_vdpa *v, u64 __user *featurep)
u64 features;
int i;
+ if (copy_from_user(&features, featurep, sizeof(features)))
+ return -EFAULT;
+
+ actual_features = ops->get_driver_features(vdpa);
+
/*
* It's not allowed to change the features after they have
- * been negotiated.
+ * been negotiated. But log start/end is allowed.
*/
- if (ops->get_status(vdpa) & VIRTIO_CONFIG_S_FEATURES_OK)
+ if ((ops->get_status(vdpa) & VIRTIO_CONFIG_S_FEATURES_OK) &&
+ (features & ~(BIT_ULL(VHOST_F_LOG_ALL))) != actual_features)
return -EBUSY;
- if (copy_from_user(&features, featurep, sizeof(features)))
- return -EFAULT;
-
if (vdpa_set_features(vdpa, features))
return -EINVAL;
--
2.27.0
2
5

21 Nov '23
If new options are introduced, but openeuler_defconfig is not explicitly
configured, the actual compiled version may be configured according to
the default settings, which may be different from the author's expectation.
Therefore, some commands/scripts are added to help developers to check and update
the defconfig. It is also convenient for continuous integration tools to
check the consistency of defconfig.
Based on OLK-5.10 openeuler_defconfig, use the commands mentioned above to
generate the initial openeuler_defconfig for OLK-6.6 kernel build.
Xie XiuQi (1):
kconfig: Add script to check & update openeuler_defconfig
Zheng Zengkai (2):
config: add initial openeuler_defconfig for arm64
config: add initial openeuler_defconfig for x86
arch/arm64/configs/openeuler_defconfig | 7942 ++++++++++++++++++++
arch/x86/configs/openeuler_defconfig | 9158 ++++++++++++++++++++++++
scripts/kconfig/Makefile | 22 +
scripts/kconfig/makeconfig.sh | 24 +
4 files changed, 17146 insertions(+)
create mode 100644 arch/arm64/configs/openeuler_defconfig
create mode 100644 arch/x86/configs/openeuler_defconfig
create mode 100644 scripts/kconfig/makeconfig.sh
--
2.20.1
2
5
v2 -> v3:
- fix compilation failure on arm32
- enable CPU inspect for arm64 by default
This patches series introduce CPU-inspect feature. CPU-inspect is designed
to provide a framework for early detection of SDC by proactively executing
CPU inspection test cases.
Yu Liao (3):
cpuinspect: add CPU-inspect infrastructure
cpuinspect: add ATF inspector
openeuler_defconfig: enable CPU inspect for arm64 by default
arch/arm64/configs/openeuler_defconfig | 7 +
drivers/Kconfig | 2 +
drivers/Makefile | 1 +
drivers/cpuinspect/Kconfig | 24 +++
drivers/cpuinspect/Makefile | 8 +
drivers/cpuinspect/cpuinspect.c | 165 ++++++++++++++++
drivers/cpuinspect/cpuinspect.h | 46 +++++
drivers/cpuinspect/inspector-atf.c | 70 +++++++
drivers/cpuinspect/inspector.c | 124 ++++++++++++
drivers/cpuinspect/sysfs.c | 257 +++++++++++++++++++++++++
include/linux/cpuinspect.h | 40 ++++
11 files changed, 744 insertions(+)
create mode 100644 drivers/cpuinspect/Kconfig
create mode 100644 drivers/cpuinspect/Makefile
create mode 100644 drivers/cpuinspect/cpuinspect.c
create mode 100644 drivers/cpuinspect/cpuinspect.h
create mode 100644 drivers/cpuinspect/inspector-atf.c
create mode 100644 drivers/cpuinspect/inspector.c
create mode 100644 drivers/cpuinspect/sysfs.c
create mode 100644 include/linux/cpuinspect.h
--
2.33.0
1
3

20 Nov '23
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8I0NL
-------------------------------
Add config isolation for SMMU BTM (Broadcast TLB Maintenance), because
VMIDS are all shared and will be invalid all together, which might cause
performance issues. Besides, If the hardware fail to invalid TLB, it
can cause unexpected errors, you need to do some tests to make sure BTM
work as expect.
Signed-off-by: Zhang Zekun <zhangzekun11(a)huawei.com>
---
drivers/iommu/Kconfig | 10 ++++++++++
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 2 ++
2 files changed, 12 insertions(+)
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index b630e58c49b6..f768fbc60dd3 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -438,6 +438,16 @@ config SMMU_BYPASS_DEV
This feature will be replaced by ACPI IORT RMR node, which will be
upstreamed in mainline.
+config SMMU_BTM_SUPPORT
+ bool "SMMU BTM (Broadcast TLB Maintenance) support"
+ depends on ARM_SMMU_V3
+ help
+ ARM SMMU BTM feature can support for receiving broadcast TLBI
+ operations issued by Arm PEs in the system, which can help to
+ speed up the tlb invalidation speed in SVA scenarios.
+
+ This feature need support for BTM features, if not sure, say NO.
+
endif # IOMMU_SUPPORT
config IOVA_MAX_GLOBAL_MAGS
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index d93ce123df49..09777b05b089 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -4954,8 +4954,10 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
* broadcasted TLB invalidations that target EL2-E2H world. Don't enable
* BTM in that case.
*/
+#ifdef CONFIG_SMMU_BTM_SUPPORT
if (reg & IDR0_BTM && (!vhe || reg & IDR0_HYP))
smmu->features |= ARM_SMMU_FEAT_BTM;
+#endif
/*
* The coherency feature as set by FW is used in preference to the ID
--
2.17.1
2
1
v2 -> v3:
- fix compilation failure on arm32
- enable CPU inspect for arm64 by default
This patches series introduce CPU-inspect feature. CPU-inspect is designed
to provide a framework for early detection of SDC by proactively executing
CPU inspection test cases.
Yu Liao (3):
cpuinspect: add CPU-inspect infrastructure
cpuinspect: add ATF inspector
openeuler_defconfig: enable CPU inspect for arm64 by default
arch/arm64/configs/openeuler_defconfig | 7 +
drivers/Kconfig | 2 +
drivers/Makefile | 1 +
drivers/cpuinspect/Kconfig | 24 +++
drivers/cpuinspect/Makefile | 8 +
drivers/cpuinspect/cpuinspect.c | 165 ++++++++++++++++
drivers/cpuinspect/cpuinspect.h | 46 +++++
drivers/cpuinspect/inspector-atf.c | 70 +++++++
drivers/cpuinspect/inspector.c | 124 ++++++++++++
drivers/cpuinspect/sysfs.c | 257 +++++++++++++++++++++++++
include/linux/cpuinspect.h | 40 ++++
11 files changed, 744 insertions(+)
create mode 100644 drivers/cpuinspect/Kconfig
create mode 100644 drivers/cpuinspect/Makefile
create mode 100644 drivers/cpuinspect/cpuinspect.c
create mode 100644 drivers/cpuinspect/cpuinspect.h
create mode 100644 drivers/cpuinspect/inspector-atf.c
create mode 100644 drivers/cpuinspect/inspector.c
create mode 100644 drivers/cpuinspect/sysfs.c
create mode 100644 include/linux/cpuinspect.h
--
2.33.0
1
3
This patches series introduce CPU-inspect feature. CPU-inspect is designed
to provide a framework for early detection of SDC by proactively executing
CPU inspection test cases.
v1 -> v2: fix some issues
Yu Liao (2):
cpuinspect: add CPU-inspect infrastructure
cpuinspect: add ATF inspector
drivers/Kconfig | 2 +
drivers/Makefile | 1 +
drivers/cpuinspect/Kconfig | 24 +++
drivers/cpuinspect/Makefile | 8 +
drivers/cpuinspect/cpuinspect.c | 165 ++++++++++++++++++
drivers/cpuinspect/cpuinspect.h | 46 ++++++
drivers/cpuinspect/inspector-atf.c | 70 ++++++++
drivers/cpuinspect/inspector.c | 124 ++++++++++++++
drivers/cpuinspect/sysfs.c | 257 +++++++++++++++++++++++++++++
include/linux/cpuinspect.h | 40 +++++
10 files changed, 737 insertions(+)
create mode 100644 drivers/cpuinspect/Kconfig
create mode 100644 drivers/cpuinspect/Makefile
create mode 100644 drivers/cpuinspect/cpuinspect.c
create mode 100644 drivers/cpuinspect/cpuinspect.h
create mode 100644 drivers/cpuinspect/inspector-atf.c
create mode 100644 drivers/cpuinspect/inspector.c
create mode 100644 drivers/cpuinspect/sysfs.c
create mode 100644 include/linux/cpuinspect.h
--
2.33.0
2
3

20 Nov '23
From: Luo Meng <luomeng12(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8HVC5
CVE: NA
--------------------------------
Add DMINFO() to help tracking device creation/removal success.
Signed-off-by: Luo Meng <luomeng12(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
drivers/md/dm-ioctl.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 21ebb6c39394..5efe0193b2e8 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -327,6 +327,9 @@ static struct dm_table *__hash_remove(struct hash_cell *hc)
table = NULL;
if (hc->new_map)
table = hc->new_map;
+
+ DMINFO("%s[%i]: %s (%s) is removed successfully",
+ current->comm, current->pid, hc->md->disk->disk_name, hc->name);
dm_put(hc->md);
free_cell(hc);
@@ -880,6 +883,7 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
{
int r, m = DM_ANY_MINOR;
struct mapped_device *md;
+ struct hash_cell *hc;
r = check_name(param->name);
if (r)
@@ -903,6 +907,13 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
__dev_status(md, param);
+ mutex_lock(&dm_hash_cells_mutex);
+ hc = dm_get_mdptr(md);
+ if (hc)
+ DMINFO("%s[%i]: %s (%s) is created successfully",
+ current->comm, current->pid, md->disk->disk_name, hc->name);
+
+ mutex_unlock(&dm_hash_cells_mutex);
dm_put(md);
return 0;
--
2.31.1
1
0
Liu Yuntao (1):
add "enable_swap" ability
Ni Cunshu (1):
mm: add preferred_swap ability
liubo (1):
preferred_swap: share memory can specify swap device
fs/proc/base.c | 188 +++++++++++++++++++++++++++++++++
include/linux/mm_types.h | 4 +
include/linux/sched/coredump.h | 1 +
include/linux/swap.h | 13 +++
kernel/fork.c | 4 +
mm/Kconfig | 7 ++
mm/swap_slots.c | 64 ++++++++++-
mm/swap_state.c | 1 +
mm/swapfile.c | 86 ++++++++++++++-
mm/vmscan.c | 38 +++++++
10 files changed, 403 insertions(+), 3 deletions(-)
--
2.33.0
2
4

20 Nov '23
Fix syntax issues in comments and print statements
v1->v2: Fix syntax issues in comments of the code section that opening
an exclusive opened block device for write.
Li Lingfeng (2):
fs: Fix syntax issues in comments and print statements.
fs: Fix syntax issues in comments
fs/block_dev.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--
2.31.1
2
3

[PATCH OLK-5.10 RESEND 0/2] block: warn once for each partition in bio_check_ro()
by Li Nan 20 Nov '23
by Li Nan 20 Nov '23
20 Nov '23
From: Yu Kuai <yukuai3(a)huawei.com>
Yu Kuai (2):
block: warn once for each partition in bio_check_ro()
block: fix kabi broken in struct hd_part
block/blk-core.c | 5 +++++
include/linux/genhd.h | 1 +
2 files changed, 6 insertions(+)
--
2.39.2
1
0

20 Nov '23
hulk inclusion
category: cleanup
bugzilla: https://gitee.com/openeuler/kernel/issues/I8HWD1
--------------------------------
There are syntax errors in the comments and print statements of the code
section that detects opening write opened block devices exclusively,
which need to be fixed.
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
fs/block_dev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index a0e4d3ec300e..6389551aa29e 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1669,11 +1669,11 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
#ifdef CONFIG_BLK_DEV_DUMPINFO
/*
* Open an write opened block device exclusively, the
- * writing process may probability corrupt the device,
+ * writing process may probably corrupt the device,
* such as a mounted file system, give a hint here.
*/
if (is_conflict_excl_open(bdev, claiming, mode))
- blkdev_dump_conflict_opener(bdev, "VFS: Open an write opened "
+ blkdev_dump_conflict_opener(bdev, "VFS: Open a write opened "
"block device exclusively");
#endif
bd_finish_claiming(bdev, claiming, holder);
--
2.31.1
2
1

[PATCH openEuler-1.0-LTS] mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page()
by Tong Tiangen 20 Nov '23
by Tong Tiangen 20 Nov '23
20 Nov '23
From: Miaohe Lin <linmiaohe(a)huawei.com>
mainline inclusion
from mainline-v5.13-rc1
commit 34f5e9b9d1990d286199084efa752530ee3d8297
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8HVSL
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
If the zone device page does not belong to un-addressable device memory,
the variable entry will be uninitialized and lead to indeterminate pte
entry ultimately. Fix this unexpected case and warn about it.
Link: https://lkml.kernel.org/r/20210325131524.48181-4-linmiaohe@huawei.com
Fixes: df6ad69838fc ("mm/device-public-memory: device memory cache coherent with CPU")
Signed-off-by: Miaohe Lin <linmiaohe(a)huawei.com>
Reviewed-by: David Hildenbrand <david(a)redhat.com>
Cc: Alistair Popple <apopple(a)nvidia.com>
Cc: Jerome Glisse <jglisse(a)redhat.com>
Cc: Rafael Aquini <aquini(a)redhat.com>
Cc: Yang Shi <shy828301(a)gmail.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
conflict:
mm/migrate.c
Signed-off-by: Tong Tiangen <tongtiangen(a)huawei.com>
---
mm/migrate.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/mm/migrate.c b/mm/migrate.c
index dc6416ccef44..56a2033d443c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2688,6 +2688,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
if (vma->vm_flags & VM_WRITE)
entry = pte_mkwrite(pte_mkdirty(entry));
entry = pte_mkdevmap(entry);
+ } else {
+ /*
+ * For now we not support migrating to MEMORY_DEVICE_FS_DAX
+ * and MEMORY_DEVICE_PCI_P2PDMA device memory.
+ */
+ pr_warn_once("Unsupported ZONE_DEVICE page type.\n");
+ goto abort;
}
} else {
entry = mk_pte(page, vma->vm_page_prot);
--
2.25.1
2
1

20 Nov '23
From: Yu Kuai <yukuai3(a)huawei.com>
Yu Kuai (2):
block: warn once for each partition in bio_check_ro()
block: fix kabi broken in struct hd_part
block/blk-core.c | 5 +++++
include/linux/genhd.h | 1 +
2 files changed, 6 insertions(+)
--
2.39.2
2
3

[PATCH OLK-5.10] net/tls: do not free tls_rec on async operation in bpf_exec_tx_verdict()
by Liu Jian 18 Nov '23
by Liu Jian 18 Nov '23
18 Nov '23
stable inclusion
from stable-v5.10.195
commit a5096cc6e7836711541b7cd2d6da48d36fe420e9
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I8H5DK
CVE: CVE-2023-6176
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
---------------------------
[ Upstream commit cfaa80c91f6f99b9342b6557f0f0e1143e434066 ]
I got the below warning when do fuzzing test:
BUG: KASAN: null-ptr-deref in scatterwalk_copychunks+0x320/0x470
Read of size 4 at addr 0000000000000008 by task kworker/u8:1/9
CPU: 0 PID: 9 Comm: kworker/u8:1 Tainted: G OE
Hardware name: linux,dummy-virt (DT)
Workqueue: pencrypt_parallel padata_parallel_worker
Call trace:
dump_backtrace+0x0/0x420
show_stack+0x34/0x44
dump_stack+0x1d0/0x248
__kasan_report+0x138/0x140
kasan_report+0x44/0x6c
__asan_load4+0x94/0xd0
scatterwalk_copychunks+0x320/0x470
skcipher_next_slow+0x14c/0x290
skcipher_walk_next+0x2fc/0x480
skcipher_walk_first+0x9c/0x110
skcipher_walk_aead_common+0x380/0x440
skcipher_walk_aead_encrypt+0x54/0x70
ccm_encrypt+0x13c/0x4d0
crypto_aead_encrypt+0x7c/0xfc
pcrypt_aead_enc+0x28/0x84
padata_parallel_worker+0xd0/0x2dc
process_one_work+0x49c/0xbdc
worker_thread+0x124/0x880
kthread+0x210/0x260
ret_from_fork+0x10/0x18
This is because the value of rec_seq of tls_crypto_info configured by the
user program is too large, for example, 0xffffffffffffff. In addition, TLS
is asynchronously accelerated. When tls_do_encryption() returns
-EINPROGRESS and sk->sk_err is set to EBADMSG due to rec_seq overflow,
skmsg is released before the asynchronous encryption process ends. As a
result, the UAF problem occurs during the asynchronous processing of the
encryption module.
If the operation is asynchronous and the encryption module returns
EINPROGRESS, do not free the record information.
Fixes: 635d93981786 ("net/tls: free record only on encryption error")
Signed-off-by: Liu Jian <liujian56(a)huawei.com>
Reviewed-by: Sabrina Dubroca <sd(a)queasysnail.net>
Link: https://lore.kernel.org/r/20230909081434.2324940-1-liujian56@huawei.com
Signed-off-by: Paolo Abeni <pabeni(a)redhat.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Liu Jian <liujian56(a)huawei.com>
---
net/tls/tls_sw.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 96c95ea728ac..84e1f2af1a83 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -815,7 +815,7 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
psock = sk_psock_get(sk);
if (!psock || !policy) {
err = tls_push_record(sk, flags, record_type);
- if (err && sk->sk_err == EBADMSG) {
+ if (err && err != -EINPROGRESS && sk->sk_err == EBADMSG) {
*copied -= sk_msg_free(sk, msg);
tls_free_open_rec(sk);
err = -sk->sk_err;
@@ -844,7 +844,7 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
switch (psock->eval) {
case __SK_PASS:
err = tls_push_record(sk, flags, record_type);
- if (err && sk->sk_err == EBADMSG) {
+ if (err && err != -EINPROGRESS && sk->sk_err == EBADMSG) {
*copied -= sk_msg_free(sk, msg);
tls_free_open_rec(sk);
err = -sk->sk_err;
--
2.34.1
2
1

[PATCH openEuler-22.03-LTS-SP2] mm/hugetlb: fix parameter passed to allocate bootmem memory
by Wupeng Ma 18 Nov '23
by Wupeng Ma 18 Nov '23
18 Nov '23
From: Jason Zeng <jason.zeng(a)intel.com>
Intel inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7HUWC
CVE: NA
-----------------------------------------------
Intel-SIG: hugetlb: fix parameter passed to allocate bootmem memory
__alloc_bootmem_huge_page_inner() should use 'min_addr" as the
3rd parameter to invoke memblock_alloc_try_nid_raw() and
memblock_alloc_try_nid_raw_flags.
Fixes: 74bfdf157f1f ("mm/hugetlb: Hugetlb use non-mirrored memory if memory reliable is enabled")
Signed-off-by: Jason Zeng <jason.zeng(a)intel.com>
[Wupeng: cherry-pick from a78d9e8141261369833de3cd2aa4e7e7880bddaf]
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
---
mm/hugetlb.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7aca3351b9b9..8cf3d5bb4881 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2713,10 +2713,10 @@ static void *__init __alloc_bootmem_huge_page_inner(phys_addr_t size,
int nid)
{
if (!mem_reliable_is_enabled())
- return memblock_alloc_try_nid_raw(size, align, max_addr,
+ return memblock_alloc_try_nid_raw(size, align, min_addr,
max_addr, nid);
- return memblock_alloc_try_nid_raw_flags(size, align, max_addr, max_addr,
+ return memblock_alloc_try_nid_raw_flags(size, align, min_addr, max_addr,
nid, MEMBLOCK_NOMIRROR);
}
--
2.25.1
2
1

17 Nov '23
If new options are introduced, but openeuler_defconfig is not explicitly
configured, the actual compiled version may be configured according to
the default settings, which may be different from the author's expectation.
Therefore, some commands/scripts are added to help developers to check and update
the defconfig. It is also convenient for continuous integration tools to
check the consistency of defconfig.
Based on OLK-5.10 openeuler_defconfig, use the commands mentioned above to
generate the initial openeuler_defconfig for OLK-6.6 kernel build.
Xie XiuQi (1):
kconfig: Add script to check & update openeuler_defconfig
Zheng Zengkai (2):
config: add initial openeuler_defconfig for arm64
config: add initial openeuler_defconfig for x86
arch/arm64/configs/openeuler_defconfig | 7942 ++++++++++++++++++++
arch/x86/configs/openeuler_defconfig | 9158 ++++++++++++++++++++++++
scripts/kconfig/Makefile | 22 +
scripts/kconfig/makeconfig.sh | 24 +
4 files changed, 17146 insertions(+)
create mode 100644 arch/arm64/configs/openeuler_defconfig
create mode 100644 arch/x86/configs/openeuler_defconfig
create mode 100644 scripts/kconfig/makeconfig.sh
--
2.20.1
1
3

17 Nov '23
If new options are introduced, but openeuler_defconfig is not explicitly
configured, the actual compiled version may be configured according to
the default settings, which may be different from the author's expectation.
Therefore, some commands/scripts are added to help developers to check and update
the defconfig. It is also convenient for continuous integration tools to
check the consistency of defconfig.
Based on OLK-5.10 openeuler_defconfig, use the commands mentioned above to
generate the initial openeuler_defconfig for OLK-6.6 kernel build.
Xie XiuQi (1):
kconfig: Add script to check & update openeuler_defconfig
Zheng Zengkai (2):
config: add initial openeuler_defconfig for arm64
config: add initial openeuler_defconfig for x86
arch/arm64/configs/openeuler_defconfig | 7942 ++++++++++++++++++++
arch/x86/configs/openeuler_defconfig | 9158 ++++++++++++++++++++++++
scripts/kconfig/Makefile | 22 +
scripts/kconfig/makeconfig.sh | 24 +
4 files changed, 17146 insertions(+)
create mode 100644 arch/arm64/configs/openeuler_defconfig
create mode 100644 arch/x86/configs/openeuler_defconfig
create mode 100644 scripts/kconfig/makeconfig.sh
--
2.20.1
1
3

17 Nov '23
If new options are introduced, but openeuler_defconfig is not explicitly
configured, the actual compiled version may be configured according to
the default settings, which may be different from the author's expectation.
Therefore, some commands/scripts are added to help developers to check and update
the defconfig. It is also convenient for continuous integration tools to
check the consistency of defconfig.
Based on OLK-5.10 openeuler_defconfig, use the commands mentioned above to
generate the initial openeuler_defconfig for OLK-6.6 kernel build.
Xie XiuQi (1):
kconfig: Add script to check & update openeuler_defconfig
Zheng Zengkai (2):
config: add initial openeuler_defconfig for arm64
config: add initial openeuler_defconfig for x86
arch/arm64/configs/openeuler_defconfig | 7942 ++++++++++++++++++++
arch/x86/configs/openeuler_defconfig | 9158 ++++++++++++++++++++++++
scripts/kconfig/Makefile | 22 +
scripts/kconfig/makeconfig.sh | 24 +
4 files changed, 17146 insertions(+)
create mode 100644 arch/arm64/configs/openeuler_defconfig
create mode 100644 arch/x86/configs/openeuler_defconfig
create mode 100644 scripts/kconfig/makeconfig.sh
--
2.20.1
1
3

[PATCH openEuler-1.0-LTS v3] netfilter: conntrack: dccp: copy entire header to stack buffer, not just basic one
by Zhengchao Shao 17 Nov '23
by Zhengchao Shao 17 Nov '23
17 Nov '23
From: Florian Westphal <fw(a)strlen.de>
mainline inclusion
from mainline-v6.5-rc1
commit ff0a3a7d52ff7282dbd183e7fc29a1fe386b0c30
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I8EZSO
CVE: CVE-2023-39197
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
Eric Dumazet says:
nf_conntrack_dccp_packet() has an unique:
dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh);
And nothing more is 'pulled' from the packet, depending on the content.
dh->dccph_doff, and/or dh->dccph_x ...)
So dccp_ack_seq() is happily reading stuff past the _dh buffer.
BUG: KASAN: stack-out-of-bounds in nf_conntrack_dccp_packet+0x1134/0x11c0
Read of size 4 at addr ffff000128f66e0c by task syz-executor.2/29371
[..]
Fix this by increasing the stack buffer to also include room for
the extra sequence numbers and all the known dccp packet type headers,
then pull again after the initial validation of the basic header.
While at it, mark packets invalid that lack 48bit sequence bit but
where RFC says the type MUST use them.
Compile tested only.
v2: first skb_header_pointer() now needs to adjust the size to
only pull the generic header. (Eric)
Heads-up: I intend to remove dccp conntrack support later this year.
Fixes: 2bc780499aa3 ("[NETFILTER]: nf_conntrack: add DCCP protocol support")
Reported-by: Eric Dumazet <edumazet(a)google.com>
Signed-off-by: Florian Westphal <fw(a)strlen.de>
Reviewed-by: Eric Dumazet <edumazet(a)google.com>
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
Conflicts:
net/netfilter/nf_conntrack_proto_dccp.c
Signed-off-by: Zhengchao Shao <shaozhengchao(a)huawei.com>
---
net/netfilter/nf_conntrack_proto_dccp.c | 52 +++++++++++++++++++++++--
1 file changed, 49 insertions(+), 3 deletions(-)
diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c
index 3ba1f4d9934f..fe17c2ddb6c8 100644
--- a/net/netfilter/nf_conntrack_proto_dccp.c
+++ b/net/netfilter/nf_conntrack_proto_dccp.c
@@ -433,17 +433,47 @@ static u64 dccp_ack_seq(const struct dccp_hdr *dh)
ntohl(dhack->dccph_ack_nr_low);
}
+struct nf_conntrack_dccp_buf {
+ struct dccp_hdr dh; /* generic header part */
+ struct dccp_hdr_ext ext; /* optional depending dh->dccph_x */
+ union { /* depends on header type */
+ struct dccp_hdr_ack_bits ack;
+ struct dccp_hdr_request req;
+ struct dccp_hdr_response response;
+ struct dccp_hdr_reset rst;
+ } u;
+};
+
+static struct dccp_hdr *
+dccp_header_pointer(const struct sk_buff *skb, int offset, const struct dccp_hdr *dh,
+ struct nf_conntrack_dccp_buf *buf)
+{
+ unsigned int hdrlen = __dccp_hdr_len(dh);
+
+ if (hdrlen > sizeof(*buf))
+ return NULL;
+
+ return skb_header_pointer(skb, offset, hdrlen, buf);
+}
+
static int dccp_packet(struct nf_conn *ct, const struct sk_buff *skb,
unsigned int dataoff, enum ip_conntrack_info ctinfo)
{
enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo);
- struct dccp_hdr _dh, *dh;
+ struct nf_conntrack_dccp_buf _dh;
u_int8_t type, old_state, new_state;
enum ct_dccp_roles role;
unsigned int *timeouts;
+ struct dccp_hdr *dh;
- dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh);
+ dh = skb_header_pointer(skb, dataoff, sizeof(*dh), &_dh.dh);
BUG_ON(dh == NULL);
+
+ /* pull again, including possible 48 bit sequences and subtype header */
+ dh = dccp_header_pointer(skb, dataoff, dh, &_dh);
+ if (!dh)
+ return NF_DROP;
+
type = dh->dccph_type;
if (type == DCCP_PKT_RESET &&
@@ -526,10 +556,20 @@ static int dccp_error(struct net *net, struct nf_conn *tmpl,
struct sk_buff *skb, unsigned int dataoff,
u_int8_t pf, unsigned int hooknum)
{
+ static const unsigned long require_seq48 = 1 << DCCP_PKT_REQUEST |
+ 1 << DCCP_PKT_RESPONSE |
+ 1 << DCCP_PKT_CLOSEREQ |
+ 1 << DCCP_PKT_CLOSE |
+ 1 << DCCP_PKT_RESET |
+ 1 << DCCP_PKT_SYNC |
+ 1 << DCCP_PKT_SYNCACK;
struct dccp_hdr _dh, *dh;
unsigned int dccp_len = skb->len - dataoff;
unsigned int cscov;
const char *msg;
+ u8 type;
+
+ BUILD_BUG_ON(DCCP_PKT_INVALID >= BITS_PER_LONG);
dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh);
if (dh == NULL) {
@@ -559,11 +599,17 @@ static int dccp_error(struct net *net, struct nf_conn *tmpl,
goto out_invalid;
}
- if (dh->dccph_type >= DCCP_PKT_INVALID) {
+ type = dh->dccph_type;
+ if (type >= DCCP_PKT_INVALID) {
msg = "nf_ct_dccp: reserved packet type ";
goto out_invalid;
}
+ if (test_bit(type, &require_seq48) && !dh->dccph_x) {
+ msg = "nf_ct_dccp: type lacks 48bit sequence numbers";
+ goto out_invalid;
+ }
+
return NF_ACCEPT;
out_invalid:
--
2.34.1
2
1
From: Bibo Mao <maobibo(a)loongson.cn>
LoongArch inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8H1QC
------------------------------------------
1. When cpu is hotplug out, cpu is in idle state and function
arch_cpu_idle_dead is called. Timer interrupt for this processor should
be disabled, else there will be timer interrupt for the dead cpu. Also
this prevents vcpu to schedule out during halt-polling flow when system
is running in vm mode, since there is pending timer interrupt.
This patch adds detailed implementation for timer shutdown interface, so
that timer will be disabled when cpu is plug-out.
2. for woken-up cpu, entry address is 8 bytes, we should check first low 4
bytes and then high 4 bytes.
Signed-off-by: Bibo Mao <maobibo(a)loongson.cn>
Signed-off-by: Hongchen Zhang <zhanghongchen(a)loongson.cn>
---
arch/loongarch/kernel/smp.c | 17 +++++++++++++++--
arch/loongarch/kernel/time.c | 25 +++++++++----------------
2 files changed, 24 insertions(+), 18 deletions(-)
diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
index f5a94559d441..abf6484fac70 100644
--- a/arch/loongarch/kernel/smp.c
+++ b/arch/loongarch/kernel/smp.c
@@ -310,16 +310,29 @@ void play_dead(void)
register void (*init_fn)(void);
idle_task_exit();
- local_irq_enable();
+ /*
+ * vcpu can be woken up from idle emulation in vm if irq is disabled
+ */
+ if (!cpu_has_hypervisor)
+ local_irq_enable();
set_csr_ecfg(ECFGF_IPI);
__this_cpu_write(cpu_state, CPU_DEAD);
__smp_mb();
do {
__asm__ __volatile__("idle 0\n\t");
- addr = iocsr_read64(LOONGARCH_IOCSR_MBUF0);
+ /*
+ * mailbox info is wroten from other CPU with IPI send method
+ * in function csr_mail_send, only 4 bytes can be wroten with
+ * IPI send method in one time.
+ *
+ * High 4 bytes is sent and then low 4 bytes for 8 bytes mail
+ * sending method. Here low 4 bytes is read by the first.
+ */
+ addr = iocsr_read32(LOONGARCH_IOCSR_MBUF0);
} while (addr == 0);
+ addr = iocsr_read64(LOONGARCH_IOCSR_MBUF0);
init_fn = (void *)TO_CACHE(addr);
iocsr_write32(0xffffffff, LOONGARCH_IOCSR_IPI_CLEAR);
diff --git a/arch/loongarch/kernel/time.c b/arch/loongarch/kernel/time.c
index 18fa38705da7..b2e8108bee10 100644
--- a/arch/loongarch/kernel/time.c
+++ b/arch/loongarch/kernel/time.c
@@ -59,21 +59,6 @@ static int constant_set_state_oneshot(struct clock_event_device *evt)
return 0;
}
-static int constant_set_state_oneshot_stopped(struct clock_event_device *evt)
-{
- unsigned long timer_config;
-
- raw_spin_lock(&state_lock);
-
- timer_config = csr_read64(LOONGARCH_CSR_TCFG);
- timer_config &= ~CSR_TCFG_EN;
- csr_write64(timer_config, LOONGARCH_CSR_TCFG);
-
- raw_spin_unlock(&state_lock);
-
- return 0;
-}
-
static int constant_set_state_periodic(struct clock_event_device *evt)
{
unsigned long period;
@@ -93,6 +78,14 @@ static int constant_set_state_periodic(struct clock_event_device *evt)
static int constant_set_state_shutdown(struct clock_event_device *evt)
{
+ unsigned long timer_config;
+
+ raw_spin_lock(&state_lock);
+ timer_config = csr_read64(LOONGARCH_CSR_TCFG);
+ timer_config &= ~CSR_TCFG_EN;
+ csr_write64(timer_config, LOONGARCH_CSR_TCFG);
+ raw_spin_unlock(&state_lock);
+
return 0;
}
@@ -161,7 +154,7 @@ int constant_clockevent_init(void)
cd->rating = 320;
cd->cpumask = cpumask_of(cpu);
cd->set_state_oneshot = constant_set_state_oneshot;
- cd->set_state_oneshot_stopped = constant_set_state_oneshot_stopped;
+ cd->set_state_oneshot_stopped = constant_set_state_shutdown;
cd->set_state_periodic = constant_set_state_periodic;
cd->set_state_shutdown = constant_set_state_shutdown;
cd->set_next_event = constant_timer_next_event;
--
2.33.0
2
1
To make applying the mainline patch easier, reorder the nbd commit about
the first_minor check here.
Christoph Hellwig (5):
block: fold register_disk into device_add_disk
block: call blk_integrity_add earlier in device_add_disk
block: add the events* attributes to disk_attrs
block: fix error unwinding in device_add_disk
block: clear ->slave_dir when dropping the main slave_dir reference
Luis Chamberlain (4):
block: return errors from blk_integrity_add
block: return errors from disk_alloc_events
block: add error handling for device_add_disk / add_disk
block: fix device_add_disk() kobject_create_and_add() error handling
Tetsuo Handa (1):
block: check minor range in device_add_disk()
Wen Yang (1):
Revert "Revert "block: nbd: add sanity check for first_minor""
Yu Kuai (3):
nbd: fix max value for 'first_minor'
nbd: fix possible overflow for 'first_minor' in nbd_dev_add()
block: fix memory leak for elevator on add_disk failure
Zhang Wensheng (1):
nbd: fix possible overflow on 'first_minor' in nbd_dev_add()
Zhong Jinghua (7):
nbd: Reorganize the messy commit log about the first_minor check
block: return errors from blk_register_region
block: Fix the kabi change in device_add_disk
block: Fix the kabi change on blk_register_region
block: call blk_get_queue earlier in __device_add_disk
block: Fix minor range check in device_add_disk()
block: Set memalloc_noio to false in the error path
block/blk.h | 5 +-
include/linux/genhd.h | 13 +++
block/blk-integrity.c | 12 ++-
block/genhd.c | 246 +++++++++++++++++++++++++-----------------
drivers/block/nbd.c | 5 +-
5 files changed, 175 insertions(+), 106 deletions(-)
--
2.39.2
2
23
This patches series introduce CPU-inspect feature. CPU-inspect is designed
to provide a framework for early detection of SDC by proactively executing
CPU inspection test cases.
Yu Liao (2):
cpuinspect: add CPU-inspect infrastructure
cpuinspect: add ATF inspector
drivers/Kconfig | 2 +
drivers/Makefile | 1 +
drivers/cpuinspect/Kconfig | 24 +++
drivers/cpuinspect/Makefile | 8 +
drivers/cpuinspect/cpuinspect.c | 166 ++++++++++++++++++++
drivers/cpuinspect/cpuinspect.h | 46 ++++++
drivers/cpuinspect/inspector-atf.c | 70 +++++++++
drivers/cpuinspect/inspector.c | 124 +++++++++++++++
drivers/cpuinspect/sysfs.c | 236 +++++++++++++++++++++++++++++
include/linux/cpuinspect.h | 40 +++++
10 files changed, 717 insertions(+)
create mode 100644 drivers/cpuinspect/Kconfig
create mode 100644 drivers/cpuinspect/Makefile
create mode 100644 drivers/cpuinspect/cpuinspect.c
create mode 100644 drivers/cpuinspect/cpuinspect.h
create mode 100644 drivers/cpuinspect/inspector-atf.c
create mode 100644 drivers/cpuinspect/inspector.c
create mode 100644 drivers/cpuinspect/sysfs.c
create mode 100644 include/linux/cpuinspect.h
--
2.33.0
2
3

[PATCH openEuler-1.0-LTS] sched/rt: Fix double enqueue caused by rt_effective_prio
by Xia Fukun 16 Nov '23
by Xia Fukun 16 Nov '23
16 Nov '23
From: Peter Zijlstra <peterz(a)infradead.org>
mainline inclusion
from mainline-v6.5-rc7
commit f558c2b834ec27e75d37b1c860c139e7b7c3a8e4
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8H4SN
CVE: NA
--------------------------------
Double enqueues in rt runqueues (list) have been reported while running
a simple test that spawns a number of threads doing a short sleep/run
pattern while being concurrently setscheduled between rt and fair class.
WARNING: CPU: 3 PID: 2825 at kernel/sched/rt.c:1294 enqueue_task_rt+0x355/0x360
CPU: 3 PID: 2825 Comm: setsched__13
RIP: 0010:enqueue_task_rt+0x355/0x360
Call Trace:
__sched_setscheduler+0x581/0x9d0
_sched_setscheduler+0x63/0xa0
do_sched_setscheduler+0xa0/0x150
__x64_sys_sched_setscheduler+0x1a/0x30
do_syscall_64+0x33/0x40
entry_SYSCALL_64_after_hwframe+0x44/0xae
list_add double add: new=ffff9867cb629b40, prev=ffff9867cb629b40,
next=ffff98679fc67ca0.
kernel BUG at lib/list_debug.c:31!
invalid opcode: 0000 [#1] PREEMPT_RT SMP PTI
CPU: 3 PID: 2825 Comm: setsched__13
RIP: 0010:__list_add_valid+0x41/0x50
Call Trace:
enqueue_task_rt+0x291/0x360
__sched_setscheduler+0x581/0x9d0
_sched_setscheduler+0x63/0xa0
do_sched_setscheduler+0xa0/0x150
__x64_sys_sched_setscheduler+0x1a/0x30
do_syscall_64+0x33/0x40
entry_SYSCALL_64_after_hwframe+0x44/0xae
__sched_setscheduler() uses rt_effective_prio() to handle proper queuing
of priority boosted tasks that are setscheduled while being boosted.
rt_effective_prio() is however called twice per each
__sched_setscheduler() call: first directly by __sched_setscheduler()
before dequeuing the task and then by __setscheduler() to actually do
the priority change. If the priority of the pi_top_task is concurrently
being changed however, it might happen that the two calls return
different results. If, for example, the first call returned the same rt
priority the task was running at and the second one a fair priority, the
task won't be removed by the rt list (on_list still set) and then
enqueued in the fair runqueue. When eventually setscheduled back to rt
it will be seen as enqueued already and the WARNING/BUG be issued.
Fix this by calling rt_effective_prio() only once and then reusing the
return value. While at it refactor code as well for clarity. Concurrent
priority inheritance handling is still safe and will eventually converge
to a new state by following the inheritance chain(s).
Fixes: 0782e63bc6fe ("sched: Handle priority boosted tasks proper in setscheduler()")
[squashed Peterz changes; added changelog]
Reported-by: Mark Simmons <msimmons(a)redhat.com>
Signed-off-by: Juri Lelli <juri.lelli(a)redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Link: https://lkml.kernel.org/r/20210803104501.38333-1-juri.lelli@redhat.com
Signed-off-by: Xia Fukun <xiafukun(a)huawei.com>
---
kernel/sched/core.c | 82 +++++++++++++++++++--------------------------
1 file changed, 35 insertions(+), 47 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7825ceaae0c4..ce0a9026450d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -784,12 +784,18 @@ void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
dequeue_task(rq, p, flags);
}
-/*
- * __normal_prio - return the priority that is based on the static prio
- */
-static inline int __normal_prio(struct task_struct *p)
+static inline int __normal_prio(int policy, int rt_prio, int nice)
{
- return p->static_prio;
+ int prio;
+
+ if (dl_policy(policy))
+ prio = MAX_DL_PRIO - 1;
+ else if (rt_policy(policy))
+ prio = MAX_RT_PRIO - 1 - rt_prio;
+ else
+ prio = NICE_TO_PRIO(nice);
+
+ return prio;
}
/*
@@ -801,15 +807,7 @@ static inline int __normal_prio(struct task_struct *p)
*/
static inline int normal_prio(struct task_struct *p)
{
- int prio;
-
- if (task_has_dl_policy(p))
- prio = MAX_DL_PRIO-1;
- else if (task_has_rt_policy(p))
- prio = MAX_RT_PRIO-1 - p->rt_priority;
- else
- prio = __normal_prio(p);
- return prio;
+ return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
}
/*
@@ -2327,7 +2325,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
} else if (PRIO_TO_NICE(p->static_prio) < 0)
p->static_prio = NICE_TO_PRIO(0);
- p->prio = p->normal_prio = __normal_prio(p);
+ p->prio = p->normal_prio = p->static_prio;
set_load_weight(p, false);
/*
@@ -3795,6 +3793,18 @@ int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flag
}
EXPORT_SYMBOL(default_wake_function);
+static void __setscheduler_prio(struct task_struct *p, int prio)
+{
+ if (dl_prio(prio))
+ p->sched_class = &dl_sched_class;
+ else if (rt_prio(prio))
+ p->sched_class = &rt_sched_class;
+ else
+ p->sched_class = &fair_sched_class;
+
+ p->prio = prio;
+}
+
#ifdef CONFIG_RT_MUTEXES
static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
@@ -3909,22 +3919,19 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
queue_flag |= ENQUEUE_REPLENISH;
} else
p->dl.dl_boosted = 0;
- p->sched_class = &dl_sched_class;
} else if (rt_prio(prio)) {
if (dl_prio(oldprio))
p->dl.dl_boosted = 0;
if (oldprio < prio)
queue_flag |= ENQUEUE_HEAD;
- p->sched_class = &rt_sched_class;
} else {
if (dl_prio(oldprio))
p->dl.dl_boosted = 0;
if (rt_prio(oldprio))
p->rt.timeout = 0;
- p->sched_class = &fair_sched_class;
}
- p->prio = prio;
+ __setscheduler_prio(p, prio);
if (queued)
enqueue_task(rq, p, queue_flag);
@@ -4158,27 +4165,6 @@ static void __setscheduler_params(struct task_struct *p,
set_load_weight(p, true);
}
-/* Actually do priority change: must hold pi & rq lock. */
-static void __setscheduler(struct rq *rq, struct task_struct *p,
- const struct sched_attr *attr, bool keep_boost)
-{
- __setscheduler_params(p, attr);
-
- /*
- * Keep a potential priority boosting if called from
- * sched_setscheduler().
- */
- p->prio = normal_prio(p);
- if (keep_boost)
- p->prio = rt_effective_prio(p, p->prio);
-
- if (dl_prio(p->prio))
- p->sched_class = &dl_sched_class;
- else if (rt_prio(p->prio))
- p->sched_class = &rt_sched_class;
- else
- p->sched_class = &fair_sched_class;
-}
/*
* Check the target process has a UID that matches the current process's:
@@ -4200,10 +4186,8 @@ static int __sched_setscheduler(struct task_struct *p,
const struct sched_attr *attr,
bool user, bool pi)
{
- int newprio = dl_policy(attr->sched_policy) ? MAX_DL_PRIO - 1 :
- MAX_RT_PRIO - 1 - attr->sched_priority;
- int retval, oldprio, oldpolicy = -1, queued, running;
- int new_effective_prio, policy = attr->sched_policy;
+ int oldpolicy = -1, policy = attr->sched_policy;
+ int retval, oldprio, newprio, queued, running;
const struct sched_class *prev_class;
struct rq_flags rf;
int reset_on_fork;
@@ -4399,6 +4383,7 @@ static int __sched_setscheduler(struct task_struct *p,
p->sched_reset_on_fork = reset_on_fork;
oldprio = p->prio;
+ newprio = __normal_prio(policy, attr->sched_priority, attr->sched_nice);
if (pi) {
/*
* Take priority boosted tasks into account. If the new
@@ -4407,8 +4392,8 @@ static int __sched_setscheduler(struct task_struct *p,
* the runqueue. This will be done when the task deboost
* itself.
*/
- new_effective_prio = rt_effective_prio(p, newprio);
- if (new_effective_prio == oldprio)
+ newprio = rt_effective_prio(p, newprio);
+ if (newprio == oldprio)
queue_flags &= ~DEQUEUE_MOVE;
}
@@ -4420,7 +4405,10 @@ static int __sched_setscheduler(struct task_struct *p,
put_prev_task(rq, p);
prev_class = p->sched_class;
- __setscheduler(rq, p, attr, pi);
+ if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
+ __setscheduler_params(p, attr);
+ __setscheduler_prio(p, newprio);
+ }
if (queued) {
/*
--
2.34.1
2
1
From: Bibo Mao <maobibo(a)loongson.cn>
LoongArch inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8H1QC
------------------------------------------
1. When cpu is hotplug out, cpu is in idle state and function
arch_cpu_idle_dead is called. Timer interrupt for this processor should
be disabled, else there will be timer interrupt for the dead cpu. Also
this prevents vcpu to schedule out during halt-polling flow when system
is running in vm mode, since there is pending timer interrupt.
This patch adds detailed implementation for timer shutdown interface, so
that timer will be disabled when cpu is plug-out.
2. for woken-up cpu, entry address is 8 bytes, we should check first low 4
bytes and then high 4 bytes.
Signed-off-by: Bibo Mao <maobibo(a)loongson.cn>
Signed-off-by: Hongchen Zhang <zhanghongchen(a)loongson.cn>
---
arch/loongarch/kernel/smp.c | 17 +++++++++++++++--
arch/loongarch/kernel/time.c | 25 +++++++++----------------
2 files changed, 24 insertions(+), 18 deletions(-)
diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
index f5a94559d441..abf6484fac70 100644
--- a/arch/loongarch/kernel/smp.c
+++ b/arch/loongarch/kernel/smp.c
@@ -310,16 +310,29 @@ void play_dead(void)
register void (*init_fn)(void);
idle_task_exit();
- local_irq_enable();
+ /*
+ * vcpu can be woken up from idle emulation in vm if irq is disabled
+ */
+ if (!cpu_has_hypervisor)
+ local_irq_enable();
set_csr_ecfg(ECFGF_IPI);
__this_cpu_write(cpu_state, CPU_DEAD);
__smp_mb();
do {
__asm__ __volatile__("idle 0\n\t");
- addr = iocsr_read64(LOONGARCH_IOCSR_MBUF0);
+ /*
+ * mailbox info is wroten from other CPU with IPI send method
+ * in function csr_mail_send, only 4 bytes can be wroten with
+ * IPI send method in one time.
+ *
+ * High 4 bytes is sent and then low 4 bytes for 8 bytes mail
+ * sending method. Here low 4 bytes is read by the first.
+ */
+ addr = iocsr_read32(LOONGARCH_IOCSR_MBUF0);
} while (addr == 0);
+ addr = iocsr_read64(LOONGARCH_IOCSR_MBUF0);
init_fn = (void *)TO_CACHE(addr);
iocsr_write32(0xffffffff, LOONGARCH_IOCSR_IPI_CLEAR);
diff --git a/arch/loongarch/kernel/time.c b/arch/loongarch/kernel/time.c
index 18fa38705da7..b2e8108bee10 100644
--- a/arch/loongarch/kernel/time.c
+++ b/arch/loongarch/kernel/time.c
@@ -59,21 +59,6 @@ static int constant_set_state_oneshot(struct clock_event_device *evt)
return 0;
}
-static int constant_set_state_oneshot_stopped(struct clock_event_device *evt)
-{
- unsigned long timer_config;
-
- raw_spin_lock(&state_lock);
-
- timer_config = csr_read64(LOONGARCH_CSR_TCFG);
- timer_config &= ~CSR_TCFG_EN;
- csr_write64(timer_config, LOONGARCH_CSR_TCFG);
-
- raw_spin_unlock(&state_lock);
-
- return 0;
-}
-
static int constant_set_state_periodic(struct clock_event_device *evt)
{
unsigned long period;
@@ -93,6 +78,14 @@ static int constant_set_state_periodic(struct clock_event_device *evt)
static int constant_set_state_shutdown(struct clock_event_device *evt)
{
+ unsigned long timer_config;
+
+ raw_spin_lock(&state_lock);
+ timer_config = csr_read64(LOONGARCH_CSR_TCFG);
+ timer_config &= ~CSR_TCFG_EN;
+ csr_write64(timer_config, LOONGARCH_CSR_TCFG);
+ raw_spin_unlock(&state_lock);
+
return 0;
}
@@ -161,7 +154,7 @@ int constant_clockevent_init(void)
cd->rating = 320;
cd->cpumask = cpumask_of(cpu);
cd->set_state_oneshot = constant_set_state_oneshot;
- cd->set_state_oneshot_stopped = constant_set_state_oneshot_stopped;
+ cd->set_state_oneshot_stopped = constant_set_state_shutdown;
cd->set_state_periodic = constant_set_state_periodic;
cd->set_state_shutdown = constant_set_state_shutdown;
cd->set_next_event = constant_timer_next_event;
--
2.33.0
2
1
The patch sets include two parts:
1. patch 1~15: Rebase smart_grid from openeuler-1.0-LTS to OLK-5.10
2. patch 16~19: introduce smart_grid zone qos and cpufreq
Since v4:
1. Place the highest level task in current domain level itself in
sched_grid_prefer_cpus
Since v3:
1. fix CI warning
Since v2:
1. static alloc sg_zone cpumask.
2. fix some warning
Hui Tang (13):
sched: Introduce smart grid scheduling strategy for cfs
sched: fix smart grid usage count
sched: fix WARN found by deadlock detect
sched: Fix possible deadlock in tg_set_dynamic_affinity_mode
sched: Fix negative count for jump label
sched: Fix timer storm for smart grid
sched: fix dereference NULL pointers
sched: Fix memory leak on error branch
sched: clear credit count in error branch
sched: Adjust few parameters range for smart grid
sched: Delete redundant updates to p->prefer_cpus
sched: Fix memory leak for smart grid
sched: Fix null pointer derefrence for sd->span
Wang ShaoBo (2):
sched: smart grid: init sched_grid_qos structure on QOS purpose
config: enable CONFIG_QOS_SCHED_SMART_GRID by default
Yipeng Zou (4):
sched: introduce smart grid qos zone
smart_grid: introduce /proc/pid/smart_grid_level
smart_grid: introduce smart_grid_strategy_ctrl sysctl
smart_grid: cpufreq: introduce smart_grid cpufreq control
arch/arm64/configs/openeuler_defconfig | 1 +
drivers/cpufreq/cpufreq.c | 234 ++++++++++++
fs/proc/array.c | 13 +
fs/proc/base.c | 76 ++++
include/linux/cpufreq.h | 11 +
include/linux/sched.h | 22 ++
include/linux/sched/grid_qos.h | 135 +++++++
include/linux/sched/sysctl.h | 5 +
init/Kconfig | 13 +
kernel/fork.c | 15 +-
kernel/sched/Makefile | 1 +
kernel/sched/core.c | 160 +++++++-
kernel/sched/fair.c | 496 ++++++++++++++++++++++++-
kernel/sched/grid/Makefile | 2 +
kernel/sched/grid/internal.h | 6 +
kernel/sched/grid/power.c | 27 ++
kernel/sched/grid/qos.c | 273 ++++++++++++++
kernel/sched/grid/stat.c | 47 +++
kernel/sched/sched.h | 48 +++
kernel/sysctl.c | 22 +-
mm/mempolicy.c | 12 +-
21 files changed, 1601 insertions(+), 18 deletions(-)
create mode 100644 include/linux/sched/grid_qos.h
create mode 100644 kernel/sched/grid/Makefile
create mode 100644 kernel/sched/grid/internal.h
create mode 100644 kernel/sched/grid/power.c
create mode 100644 kernel/sched/grid/qos.c
create mode 100644 kernel/sched/grid/stat.c
--
2.34.1
2
20

16 Nov '23
driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I89D3P
CVE: NA
------------------------------------------
This commit is to support SPxxx RAID/HBA controllers.
RAID controllers support RAID 0/1/5/6/10/50/60 modes.
HBA controlllers support RAID 0/1/10 modes.
RAID/HBA support SAS/SATA HDD/SSD.
Signed-off-by: zhanglei <zhanglei48(a)huawei.com>
---
Documentation/scsi/hisi_raid.rst | 84 +
MAINTAINERS | 7 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
drivers/scsi/Kconfig | 1 +
drivers/scsi/Makefile | 1 +
drivers/scsi/hisi_raid/Kconfig | 14 +
drivers/scsi/hisi_raid/Makefile | 7 +
drivers/scsi/hisi_raid/hiraid.h | 760 +++++
drivers/scsi/hisi_raid/hiraid_main.c | 3982 ++++++++++++++++++++++++
10 files changed, 4858 insertions(+)
create mode 100644 Documentation/scsi/hisi_raid.rst
create mode 100644 drivers/scsi/hisi_raid/Kconfig
create mode 100644 drivers/scsi/hisi_raid/Makefile
create mode 100644 drivers/scsi/hisi_raid/hiraid.h
create mode 100644 drivers/scsi/hisi_raid/hiraid_main.c
diff --git a/Documentation/scsi/hisi_raid.rst b/Documentation/scsi/hisi_raid.rst
new file mode 100644
index 000000000000..523a6763a7fd
--- /dev/null
+++ b/Documentation/scsi/hisi_raid.rst
@@ -0,0 +1,84 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==============================================
+hisi_raid - HUAWEI SCSI RAID Controller driver
+==============================================
+
+This file describes the hisi_raid SCSI driver for HUAWEI
+RAID controllers. The hisi_raid driver is the first
+generation RAID driver.
+
+For hisi_raid controller support, enable the hisi_raid driver
+when configuring the kernel.
+
+hisi_raid specific entries in /sys
+=================================
+
+hisi_raid host attributes
+------------------------
+ - /sys/class/scsi_host/host*/csts_pp
+ - /sys/class/scsi_host/host*/csts_shst
+ - /sys/class/scsi_host/host*/csts_cfs
+ - /sys/class/scsi_host/host*/csts_rdy
+ - /sys/class/scsi_host/host*/fw_version
+
+ The host csts_pp attribute is a read only attribute. This attribute
+ indicates whether the controller is processing commands. If this attribute
+ is set to ‘1’, then the controller is processing commands normally. If
+ this attribute is cleared to ‘0’, then the controller has temporarily stopped
+ processing commands in order to handle an event (e.g., firmware activation).
+
+ The host csts_shst attribute is a read only attribute. This attribute
+ indicates status of shutdown processing.The shutdown status values are defined
+ as:
+ ====== ==============================
+ Value Definition
+ ====== ==============================
+ 00b Normal operation
+ 01b Shutdown processing occurring
+ 10b Shutdown processing complete
+ 11b Reserved
+ ====== ==============================
+ The host csts_cfs attribute is a read only attribute. This attribute is set to
+ ’1’ when a fatal controller error occurred that could not be communicated in the
+ appropriate Completion Queue. This bit is cleared to ‘0’ when a fatal controller
+ error has not occurred.
+
+ The host csts_rdy attribute is a read only attribute. This attribute is set to
+ ‘1’ when the controller is ready to process submission queue entries.
+
+ The fw_version attribute is read-only and will return the driver version and the
+ controller firmware version.
+
+hisi_raid scsi device attributes
+------------------------------
+ - /sys/class/scsi_device/X\:X\:X\:X/device/raid_level
+ - /sys/class/scsi_device/X\:X\:X\:X/device/raid_state
+ - /sys/class/scsi_device/X\:X\:X\:X/device/raid_resync
+
+ The device raid_level attribute is a read only attribute. This attribute indicates
+ RAID level of scsi device(will dispaly "NA" if scsi device is not virtual disk type).
+
+ The device raid_state attribute is read-only and indicates RAID status of scsi
+ device(will dispaly "NA" if scsi device is not virtual disk type).
+
+ The device raid_resync attribute is read-only and indicates RAID rebuild processing
+ of scsi device(will dispaly "NA" if scsi device is not virtual disk type).
+
+Supported devices
+=================
+
+ =================== ======= =======================================
+ PCI ID (pci.ids) OEM Product
+ =================== ======= =======================================
+ 19E5:3858 HUAWEI SP186-M-8i(HBA:8Ports)
+ 19E5:3858 HUAWEI SP186-M-16i(HBA:16Ports)
+ 19E5:3858 HUAWEI SP186-M-32i(HBA:32Ports)
+ 19E5:3858 HUAWEI SP186-M-40i(HBA:40Ports)
+ 19E5:3758 HUAWEI SP686C-M-16i(RAID:16Ports,2G cache)
+ 19E5:3758 HUAWEI SP686C-M-16i(RAID:16Ports,4G cache)
+ 19E5:3758 HUAWEI SP686C-MH-32i(RAID:32Ports,4G cache)
+ 19E5:3758 HUAWEI SP686C-M-40i(RAID:40Ports,2G cache)
+ 19E5:3758 HUAWEI SP686C-M-40i(RAID:40Ports,4G cache)
+ =================== ======= =======================================
+
diff --git a/MAINTAINERS b/MAINTAINERS
index a7815fd1072f..8324f56a2096 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8070,6 +8070,13 @@ M: Yonglong Liu <liuyonglong(a)huawei.com>
S: Supported
F: drivers/ptp/ptp_hisi.c
+HISI_RAID SCSI RAID DRIVERS
+M: Zhang Lei <zhanglei48(a)huawei.com>
+L: linux-scsi(a)vger.kernel.org
+S: Maintained
+F: Documentation/scsi/hisi_raid.rst
+F: drivers/scsi/hisi_raid/
+
HMM - Heterogeneous Memory Management
M: Jérôme Glisse <jglisse(a)redhat.com>
L: linux-mm(a)kvack.org
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index ec758f0530c1..b9a50ef6d768 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -2413,6 +2413,7 @@ CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_SMARTPQI=m
+CONFIG_SCSI_HISI_RAID=m
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_MYRB is not set
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 5171aa50736b..43b5294326e6 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -2369,6 +2369,7 @@ CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_SMARTPQI=m
+CONFIG_SCSI_HISI_RAID=m
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index a9da1b2dec4a..41ef664cf0ed 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -473,6 +473,7 @@ source "drivers/scsi/megaraid/Kconfig.megaraid"
source "drivers/scsi/sssraid/Kconfig"
source "drivers/scsi/mpt3sas/Kconfig"
source "drivers/scsi/smartpqi/Kconfig"
+source "drivers/scsi/hisi_raid/Kconfig"
source "drivers/scsi/ufs/Kconfig"
config SCSI_HPTIOP
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index c2a1efa16912..8f26dbb5ee37 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -101,6 +101,7 @@ obj-$(CONFIG_MEGARAID_LEGACY) += megaraid.o
obj-$(CONFIG_MEGARAID_NEWGEN) += megaraid/
obj-$(CONFIG_MEGARAID_SAS) += megaraid/
obj-$(CONFIG_SCSI_MPT3SAS) += mpt3sas/
+obj-$(CONFIG_SCSI_HISI_RAID) += hisi_raid/
obj-$(CONFIG_SCSI_UFSHCD) += ufs/
obj-$(CONFIG_SCSI_ACARD) += atp870u.o
obj-$(CONFIG_SCSI_SUNESP) += esp_scsi.o sun_esp.o
diff --git a/drivers/scsi/hisi_raid/Kconfig b/drivers/scsi/hisi_raid/Kconfig
new file mode 100644
index 000000000000..d402dc45a7c1
--- /dev/null
+++ b/drivers/scsi/hisi_raid/Kconfig
@@ -0,0 +1,14 @@
+#
+# Kernel configuration file for the hisi_raid
+#
+
+config SCSI_HISI_RAID
+ tristate "Huawei Hisi_Raid Adapter"
+ depends on PCI && SCSI
+ select BLK_DEV_BSGLIB
+ depends on ARM64 || X86_64
+ help
+ This driver supports hisi_raid SPxxx serial RAID controller, which has
+ PCI Express Gen4 interface with host and supports SAS/SATA HDD/SSD.
+ To compile this driver as a module, choose M here: the module will
+ be called hisi_raid.
diff --git a/drivers/scsi/hisi_raid/Makefile b/drivers/scsi/hisi_raid/Makefile
new file mode 100644
index 000000000000..b71a675f4190
--- /dev/null
+++ b/drivers/scsi/hisi_raid/Makefile
@@ -0,0 +1,7 @@
+#
+# Makefile for the hisi_raid drivers.
+#
+
+obj-$(CONFIG_SCSI_HISI_RAID) += hiraid.o
+
+hiraid-objs := hiraid_main.o
diff --git a/drivers/scsi/hisi_raid/hiraid.h b/drivers/scsi/hisi_raid/hiraid.h
new file mode 100644
index 000000000000..1ebc3dd3f2ec
--- /dev/null
+++ b/drivers/scsi/hisi_raid/hiraid.h
@@ -0,0 +1,760 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2022 Huawei Technologies Co., Ltd */
+
+#ifndef __HIRAID_H_
+#define __HIRAID_H_
+
+#define HIRAID_HDD_PD_QD 64
+#define HIRAID_HDD_VD_QD 256
+#define HIRAID_SSD_PD_QD 64
+#define HIRAID_SSD_VD_QD 256
+
+#define BGTASK_TYPE_REBUILD 4
+#define USR_CMD_READ 0xc2
+#define USR_CMD_RDLEN 0x1000
+#define USR_CMD_VDINFO 0x704
+#define USR_CMD_BGTASK 0x504
+#define VDINFO_PARAM_LEN 0x04
+
+#define HIRAID_DEFAULT_MAX_CHANNEL 4
+#define HIRAID_DEFAULT_MAX_ID 240
+#define HIRAID_DEFAULT_MAX_LUN_PER_HOST 8
+
+#define FUA_MASK 0x08
+
+#define HIRAID_IO_SQES 7
+#define HIRAID_IO_CQES 4
+#define PRP_ENTRY_SIZE 8
+
+#define EXTRA_POOL_SIZE 256
+#define MAX_EXTRA_POOL_NUM 16
+#define MAX_CMD_PER_DEV 64
+#define MAX_CDB_LEN 16
+
+#define HIRAID_AQ_DEPTH 128
+#define HIRAID_ASYN_COMMANDS 16
+#define HIRAID_AQ_BLK_MQ_DEPTH (HIRAID_AQ_DEPTH - HIRAID_ASYN_COMMANDS)
+#define HIRAID_AQ_MQ_TAG_DEPTH (HIRAID_AQ_BLK_MQ_DEPTH - 1)
+
+#define HIRAID_ADMIN_QUEUE_NUM 1
+#define HIRAID_PTHRU_CMDS_PERQ 1
+#define HIRAID_TOTAL_PTCMDS(qn) (HIRAID_PTHRU_CMDS_PERQ * (qn))
+
+#define HIRAID_DEV_INFO_ATTR_BOOT(attr) ((attr) & 0x01)
+#define HIRAID_DEV_INFO_ATTR_VD(attr) (((attr) & 0x02) == 0x0)
+#define HIRAID_DEV_INFO_ATTR_PT(attr) (((attr) & 0x22) == 0x02)
+#define HIRAID_DEV_INFO_ATTR_RAWDISK(attr) ((attr) & 0x20)
+#define HIRAID_DEV_DISK_TYPE(attr) ((attr) & 0x1e)
+
+#define HIRAID_DEV_INFO_FLAG_VALID(flag) ((flag) & 0x01)
+#define HIRAID_DEV_INFO_FLAG_CHANGE(flag) ((flag) & 0x02)
+
+#define HIRAID_CAP_MQES(cap) ((cap) & 0xffff)
+#define HIRAID_CAP_STRIDE(cap) (((cap) >> 32) & 0xf)
+#define HIRAID_CAP_MPSMIN(cap) (((cap) >> 48) & 0xf)
+#define HIRAID_CAP_MPSMAX(cap) (((cap) >> 52) & 0xf)
+#define HIRAID_CAP_TIMEOUT(cap) (((cap) >> 24) & 0xff)
+#define HIRAID_CAP_DMAMASK(cap) (((cap) >> 37) & 0xff)
+
+#define IO_SQE_SIZE sizeof(struct hiraid_scsi_io_cmd)
+#define ADMIN_SQE_SIZE sizeof(struct hiraid_admin_command)
+#define SQE_SIZE(qid) (((qid) > 0) ? IO_SQE_SIZE : ADMIN_SQE_SIZE)
+#define CQ_SIZE(depth) ((depth) * sizeof(struct hiraid_completion))
+#define SQ_SIZE(qid, depth) ((depth) * SQE_SIZE(qid))
+
+#define SENSE_SIZE(depth) ((depth) * SCSI_SENSE_BUFFERSIZE)
+
+#define IO_6_DEFAULT_TX_LEN 256
+
+#define MAX_DEV_ENTRY_PER_PAGE_4K 340
+
+#define MAX_REALTIME_BGTASK_NUM 32
+
+#define PCI_VENDOR_ID_HUAWEI_LOGIC 0x19E5
+#define HIRAID_SERVER_DEVICE_HBA_DID 0x3858
+#define HIRAID_SERVER_DEVICE_RAID_DID 0x3758
+
+enum {
+ HIRAID_SC_SUCCESS = 0x0,
+ HIRAID_SC_INVALID_OPCODE = 0x1,
+ HIRAID_SC_INVALID_FIELD = 0x2,
+
+ HIRAID_SC_ABORT_LIMIT = 0x103,
+ HIRAID_SC_ABORT_MISSING = 0x104,
+ HIRAID_SC_ASYNC_LIMIT = 0x105,
+
+ HIRAID_SC_DNR = 0x4000,
+};
+
+enum {
+ HIRAID_REG_CAP = 0x0000,
+ HIRAID_REG_CC = 0x0014,
+ HIRAID_REG_CSTS = 0x001c,
+ HIRAID_REG_AQA = 0x0024,
+ HIRAID_REG_ASQ = 0x0028,
+ HIRAID_REG_ACQ = 0x0030,
+ HIRAID_REG_DBS = 0x1000,
+};
+
+enum {
+ HIRAID_CC_ENABLE = 1 << 0,
+ HIRAID_CC_CSS_NVM = 0 << 4,
+ HIRAID_CC_MPS_SHIFT = 7,
+ HIRAID_CC_AMS_SHIFT = 11,
+ HIRAID_CC_SHN_SHIFT = 14,
+ HIRAID_CC_IOSQES_SHIFT = 16,
+ HIRAID_CC_IOCQES_SHIFT = 20,
+ HIRAID_CC_AMS_RR = 0 << HIRAID_CC_AMS_SHIFT,
+ HIRAID_CC_SHN_NONE = 0 << HIRAID_CC_SHN_SHIFT,
+ HIRAID_CC_IOSQES = HIRAID_IO_SQES << HIRAID_CC_IOSQES_SHIFT,
+ HIRAID_CC_IOCQES = HIRAID_IO_CQES << HIRAID_CC_IOCQES_SHIFT,
+ HIRAID_CC_SHN_NORMAL = 1 << HIRAID_CC_SHN_SHIFT,
+ HIRAID_CC_SHN_MASK = 3 << HIRAID_CC_SHN_SHIFT,
+ HIRAID_CSTS_CFS_SHIFT = 1,
+ HIRAID_CSTS_SHST_SHIFT = 2,
+ HIRAID_CSTS_PP_SHIFT = 5,
+ HIRAID_CSTS_RDY = 1 << 0,
+ HIRAID_CSTS_SHST_CMPLT = 2 << 2,
+ HIRAID_CSTS_SHST_MASK = 3 << 2,
+ HIRAID_CSTS_CFS_MASK = 1 << HIRAID_CSTS_CFS_SHIFT,
+ HIRAID_CSTS_PP_MASK = 1 << HIRAID_CSTS_PP_SHIFT,
+};
+
+enum {
+ HIRAID_ADMIN_DELETE_SQ = 0x00,
+ HIRAID_ADMIN_CREATE_SQ = 0x01,
+ HIRAID_ADMIN_DELETE_CQ = 0x04,
+ HIRAID_ADMIN_CREATE_CQ = 0x05,
+ HIRAID_ADMIN_ABORT_CMD = 0x08,
+ HIRAID_ADMIN_SET_FEATURES = 0x09,
+ HIRAID_ADMIN_ASYNC_EVENT = 0x0c,
+ HIRAID_ADMIN_GET_INFO = 0xc6,
+ HIRAID_ADMIN_RESET = 0xc8,
+};
+
+enum {
+ HIRAID_GET_CTRL_INFO = 0,
+ HIRAID_GET_DEVLIST_INFO = 1,
+};
+
+enum hiraid_rst_type {
+ HIRAID_RESET_TARGET = 0,
+ HIRAID_RESET_BUS = 1,
+};
+
+enum {
+ HIRAID_ASYN_EVENT_ERROR = 0,
+ HIRAID_ASYN_EVENT_NOTICE = 2,
+ HIRAID_ASYN_EVENT_VS = 7,
+};
+
+enum {
+ HIRAID_ASYN_DEV_CHANGED = 0x00,
+ HIRAID_ASYN_FW_ACT_START = 0x01,
+ HIRAID_ASYN_HOST_PROBING = 0x10,
+};
+
+enum {
+ HIRAID_ASYN_TIMESYN = 0x00,
+ HIRAID_ASYN_FW_ACT_FINISH = 0x02,
+ HIRAID_ASYN_EVENT_MIN = 0x80,
+ HIRAID_ASYN_EVENT_MAX = 0xff,
+};
+
+enum {
+ HIRAID_CMD_WRITE = 0x01,
+ HIRAID_CMD_READ = 0x02,
+
+ HIRAID_CMD_NONRW_NONE = 0x80,
+ HIRAID_CMD_NONRW_TODEV = 0x81,
+ HIRAID_CMD_NONRW_FROMDEV = 0x82,
+};
+
+enum {
+ HIRAID_QUEUE_PHYS_CONTIG = (1 << 0),
+ HIRAID_CQ_IRQ_ENABLED = (1 << 1),
+
+ HIRAID_FEATURE_NUM_QUEUES = 0x07,
+ HIRAID_FEATURE_ASYNC_EVENT = 0x0b,
+ HIRAID_FEATURE_TIMESTAMP = 0x0e,
+};
+
+enum hiraid_dev_state {
+ DEV_NEW,
+ DEV_LIVE,
+ DEV_RESETTING,
+ DEV_DELETING,
+ DEV_DEAD,
+};
+
+enum {
+ HIRAID_CARD_HBA,
+ HIRAID_CARD_RAID,
+};
+
+enum hiraid_cmd_type {
+ HIRAID_CMD_ADMIN,
+ HIRAID_CMD_PTHRU,
+};
+
+enum {
+ SQE_FLAG_SGL_METABUF = (1 << 6),
+ SQE_FLAG_SGL_METASEG = (1 << 7),
+ SQE_FLAG_SGL_ALL = SQE_FLAG_SGL_METABUF | SQE_FLAG_SGL_METASEG,
+};
+
+enum hiraid_cmd_state {
+ CMD_IDLE = 0,
+ CMD_FLIGHT = 1,
+ CMD_COMPLETE = 2,
+ CMD_TIMEOUT = 3,
+ CMD_TMO_COMPLETE = 4,
+};
+
+enum {
+ HIRAID_BSG_ADMIN,
+ HIRAID_BSG_IOPTHRU,
+};
+
+enum {
+ HIRAID_SAS_HDD_VD = 0x04,
+ HIRAID_SATA_HDD_VD = 0x08,
+ HIRAID_SAS_SSD_VD = 0x0c,
+ HIRAID_SATA_SSD_VD = 0x10,
+ HIRAID_NVME_SSD_VD = 0x14,
+ HIRAID_SAS_HDD_PD = 0x06,
+ HIRAID_SATA_HDD_PD = 0x0a,
+ HIRAID_SAS_SSD_PD = 0x0e,
+ HIRAID_SATA_SSD_PD = 0x12,
+ HIRAID_NVME_SSD_PD = 0x16,
+};
+
+enum {
+ DISPATCH_BY_CPU,
+ DISPATCH_BY_DISK,
+};
+
+struct hiraid_completion {
+ __le32 result;
+ union {
+ struct {
+ __u8 sense_len;
+ __u8 resv[3];
+ };
+ __le32 result1;
+ };
+ __le16 sq_head;
+ __le16 sq_id;
+ __le16 cmd_id;
+ __le16 status;
+};
+
+struct hiraid_ctrl_info {
+ __le32 nd;
+ __le16 max_cmds;
+ __le16 max_channel;
+ __le32 max_tgt_id;
+ __le16 max_lun;
+ __le16 max_num_sge;
+ __le16 lun_num_boot;
+ __u8 mdts;
+ __u8 acl;
+ __u8 asynevent;
+ __u8 card_type;
+ __u8 pt_use_sgl;
+ __u8 rsvd;
+ __le32 rtd3e;
+ __u8 sn[32];
+ __u8 fw_version[16];
+ __u8 rsvd1[4020];
+};
+
+struct hiraid_dev {
+ struct pci_dev *pdev;
+ struct device *dev;
+ struct Scsi_Host *shost;
+ struct hiraid_queue *queues;
+ struct dma_pool *prp_page_pool;
+ struct dma_pool *prp_extra_pool[MAX_EXTRA_POOL_NUM];
+ void __iomem *bar;
+ u32 max_qid;
+ u32 num_vecs;
+ u32 queue_count;
+ u32 ioq_depth;
+ u32 db_stride;
+ u32 __iomem *dbs;
+ struct rw_semaphore dev_rwsem;
+ int numa_node;
+ u32 page_size;
+ u32 ctrl_config;
+ u32 online_queues;
+ u64 cap;
+ u32 scsi_qd;
+ u32 instance;
+ struct hiraid_ctrl_info *ctrl_info;
+ struct hiraid_dev_info *dev_info;
+
+ struct hiraid_cmd *adm_cmds;
+ struct list_head adm_cmd_list;
+ spinlock_t adm_cmd_lock;
+
+ struct hiraid_cmd *io_ptcmds;
+ struct list_head io_pt_list;
+ spinlock_t io_pt_lock;
+
+ struct work_struct scan_work;
+ struct work_struct timesyn_work;
+ struct work_struct reset_work;
+ struct work_struct fwact_work;
+
+ enum hiraid_dev_state state;
+ spinlock_t state_lock;
+
+ void *sense_buffer_virt;
+ dma_addr_t sense_buffer_phy;
+ u32 last_qcnt;
+ u8 hdd_dispatch;
+
+ struct request_queue *bsg_queue;
+};
+
+struct hiraid_sgl_desc {
+ __le64 addr;
+ __le32 length;
+ __u8 rsvd[3];
+ __u8 type;
+};
+
+union hiraid_data_ptr {
+ struct {
+ __le64 prp1;
+ __le64 prp2;
+ };
+ struct hiraid_sgl_desc sgl;
+};
+
+struct hiraid_admin_com_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __le32 cdw2[4];
+ union hiraid_data_ptr dptr;
+ __le32 cdw10;
+ __le32 cdw11;
+ __le32 cdw12;
+ __le32 cdw13;
+ __le32 cdw14;
+ __le32 cdw15;
+};
+
+struct hiraid_features {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __u64 rsvd2[2];
+ union hiraid_data_ptr dptr;
+ __le32 fid;
+ __le32 dword11;
+ __le32 dword12;
+ __le32 dword13;
+ __le32 dword14;
+ __le32 dword15;
+};
+
+struct hiraid_create_cq {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __u32 rsvd1[5];
+ __le64 prp1;
+ __u64 rsvd8;
+ __le16 cqid;
+ __le16 qsize;
+ __le16 cq_flags;
+ __le16 irq_vector;
+ __u32 rsvd12[4];
+};
+
+struct hiraid_create_sq {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __u32 rsvd1[5];
+ __le64 prp1;
+ __u64 rsvd8;
+ __le16 sqid;
+ __le16 qsize;
+ __le16 sq_flags;
+ __le16 cqid;
+ __u32 rsvd12[4];
+};
+
+struct hiraid_delete_queue {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __u32 rsvd1[9];
+ __le16 qid;
+ __u16 rsvd10;
+ __u32 rsvd11[5];
+};
+
+struct hiraid_get_info {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __u32 rsvd2[4];
+ union hiraid_data_ptr dptr;
+ __u8 type;
+ __u8 rsvd10[3];
+ __le32 cdw11;
+ __u32 rsvd12[4];
+};
+
+struct hiraid_usr_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ union {
+ struct {
+ __le16 subopcode;
+ __le16 rsvd1;
+ } info_0;
+ __le32 cdw2;
+ };
+ union {
+ struct {
+ __le16 data_len;
+ __le16 param_len;
+ } info_1;
+ __le32 cdw3;
+ };
+ __u64 metadata;
+ union hiraid_data_ptr dptr;
+ __le32 cdw10;
+ __le32 cdw11;
+ __le32 cdw12;
+ __le32 cdw13;
+ __le32 cdw14;
+ __le32 cdw15;
+};
+
+struct hiraid_abort_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __u64 rsvd2[4];
+ __le16 sqid;
+ __le16 cid;
+ __u32 rsvd11[5];
+};
+
+struct hiraid_reset_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __u64 rsvd2[4];
+ __u8 type;
+ __u8 rsvd10[3];
+ __u32 rsvd11[5];
+};
+
+struct hiraid_admin_command {
+ union {
+ struct hiraid_admin_com_cmd common;
+ struct hiraid_features features;
+ struct hiraid_create_cq create_cq;
+ struct hiraid_create_sq create_sq;
+ struct hiraid_delete_queue delete_queue;
+ struct hiraid_get_info get_info;
+ struct hiraid_abort_cmd abort;
+ struct hiraid_reset_cmd reset;
+ struct hiraid_usr_cmd usr_cmd;
+ };
+};
+
+struct hiraid_scsi_io_com_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __le16 sense_len;
+ __u8 cdb_len;
+ __u8 rsvd2;
+ __le32 cdw3[3];
+ union hiraid_data_ptr dptr;
+ __le32 cdw10[6];
+ __u8 cdb[32];
+ __le64 sense_addr;
+ __le32 cdw26[6];
+};
+
+struct hiraid_scsi_rw_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __le16 sense_len;
+ __u8 cdb_len;
+ __u8 rsvd2;
+ __u32 rsvd3[3];
+ union hiraid_data_ptr dptr;
+ __le64 slba;
+ __le16 nlb;
+ __le16 control;
+ __u32 rsvd13[3];
+ __u8 cdb[32];
+ __le64 sense_addr;
+ __u32 rsvd26[6];
+};
+
+struct hiraid_scsi_nonrw_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __le16 sense_len;
+ __u8 cdb_length;
+ __u8 rsvd2;
+ __u32 rsvd3[3];
+ union hiraid_data_ptr dptr;
+ __u32 rsvd10[5];
+ __le32 buf_len;
+ __u8 cdb[32];
+ __le64 sense_addr;
+ __u32 rsvd26[6];
+};
+
+struct hiraid_scsi_io_cmd {
+ union {
+ struct hiraid_scsi_io_com_cmd common;
+ struct hiraid_scsi_rw_cmd rw;
+ struct hiraid_scsi_nonrw_cmd nonrw;
+ };
+};
+
+struct hiraid_passthru_common_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __u16 rsvd0;
+ __u32 nsid;
+ union {
+ struct {
+ __u16 subopcode;
+ __u16 rsvd1;
+ } info_0;
+ __u32 cdw2;
+ };
+ union {
+ struct {
+ __u16 data_len;
+ __u16 param_len;
+ } info_1;
+ __u32 cdw3;
+ };
+ __u64 metadata;
+
+ __u64 addr;
+ __u64 prp2;
+
+ __u32 cdw10;
+ __u32 cdw11;
+ __u32 cdw12;
+ __u32 cdw13;
+ __u32 cdw14;
+ __u32 cdw15;
+ __u32 timeout_ms;
+ __u32 result0;
+ __u32 result1;
+};
+
+struct hiraid_passthru_io_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __u16 rsvd0;
+ __u32 nsid;
+ union {
+ struct {
+ __u16 res_sense_len;
+ __u8 cdb_len;
+ __u8 rsvd0;
+ } info_0;
+ __u32 cdw2;
+ };
+ union {
+ struct {
+ __u16 subopcode;
+ __u16 rsvd1;
+ } info_1;
+ __u32 cdw3;
+ };
+ union {
+ struct {
+ __u16 rsvd;
+ __u16 param_len;
+ } info_2;
+ __u32 cdw4;
+ };
+ __u32 cdw5;
+ __u64 addr;
+ __u64 prp2;
+ union {
+ struct {
+ __u16 eid;
+ __u16 sid;
+ } info_3;
+ __u32 cdw10;
+ };
+ union {
+ struct {
+ __u16 did;
+ __u8 did_flag;
+ __u8 rsvd2;
+ } info_4;
+ __u32 cdw11;
+ };
+ __u32 cdw12;
+ __u32 cdw13;
+ __u32 cdw14;
+ __u32 data_len;
+ __u32 cdw16;
+ __u32 cdw17;
+ __u32 cdw18;
+ __u32 cdw19;
+ __u32 cdw20;
+ __u32 cdw21;
+ __u32 cdw22;
+ __u32 cdw23;
+ __u64 sense_addr;
+ __u32 cdw26[4];
+ __u32 timeout_ms;
+ __u32 result0;
+ __u32 result1;
+};
+
+struct hiraid_bsg_request {
+ u32 msgcode;
+ u32 control;
+ union {
+ struct hiraid_passthru_common_cmd admcmd;
+ struct hiraid_passthru_io_cmd pthrucmd;
+ };
+};
+
+struct hiraid_cmd {
+ u16 qid;
+ u16 cid;
+ u32 result0;
+ u32 result1;
+ u16 status;
+ void *priv;
+ enum hiraid_cmd_state state;
+ struct completion cmd_done;
+ struct list_head list;
+};
+
+struct hiraid_queue {
+ struct hiraid_dev *hdev;
+ spinlock_t sq_lock;
+
+ spinlock_t cq_lock ____cacheline_aligned_in_smp;
+
+ void *sq_cmds;
+
+ struct hiraid_completion *cqes;
+
+ dma_addr_t sq_buffer_phy;
+ dma_addr_t cq_buffer_phy;
+ u32 __iomem *q_db;
+ u8 cq_phase;
+ u8 sqes;
+ u16 qid;
+ u16 sq_tail;
+ u16 cq_head;
+ u16 last_cq_head;
+ u16 q_depth;
+ s16 cq_vector;
+ atomic_t inflight;
+ void *sense_buffer_virt;
+ dma_addr_t sense_buffer_phy;
+ struct dma_pool *prp_small_pool;
+};
+
+struct hiraid_mapmange {
+ struct hiraid_queue *hiraidq;
+ enum hiraid_cmd_state state;
+ u16 cid;
+ int page_cnt;
+ u32 sge_cnt;
+ u32 len;
+ bool use_sgl;
+ dma_addr_t first_dma;
+ void *sense_buffer_virt;
+ dma_addr_t sense_buffer_phy;
+ struct scatterlist *sgl;
+ void *list[0];
+};
+
+struct hiraid_vd_info {
+ __u8 name[32];
+ __le16 id;
+ __u8 rg_id;
+ __u8 rg_level;
+ __u8 sg_num;
+ __u8 sg_disk_num;
+ __u8 vd_status;
+ __u8 vd_type;
+ __u8 rsvd1[4056];
+};
+
+struct bgtask_info {
+ __u8 type;
+ __u8 progress;
+ __u8 rate;
+ __u8 rsvd0;
+ __le16 vd_id;
+ __le16 time_left;
+ __u8 rsvd1[4];
+};
+
+struct hiraid_bgtask {
+ __u8 sw;
+ __u8 task_num;
+ __u8 rsvd[6];
+ struct bgtask_info bgtask[MAX_REALTIME_BGTASK_NUM];
+};
+
+struct hiraid_dev_info {
+ __le32 hdid;
+ __le16 target;
+ __u8 channel;
+ __u8 lun;
+ __u8 attr;
+ __u8 flag;
+ __le16 max_io_kb;
+};
+
+struct hiraid_dev_list {
+ __le32 dev_num;
+ __u32 rsvd0[3];
+ struct hiraid_dev_info devinfo[MAX_DEV_ENTRY_PER_PAGE_4K];
+};
+
+struct hiraid_sdev_hostdata {
+ u32 hdid;
+ u16 max_io_kb;
+ u8 attr;
+ u8 flag;
+ u8 rg_id;
+ u8 hwq;
+ u16 pend_count;
+};
+
+#endif
+
diff --git a/drivers/scsi/hisi_raid/hiraid_main.c b/drivers/scsi/hisi_raid/hiraid_main.c
new file mode 100644
index 000000000000..b9ffa642479c
--- /dev/null
+++ b/drivers/scsi/hisi_raid/hiraid_main.c
@@ -0,0 +1,3982 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2022 Huawei Technologies Co., Ltd */
+
+/* Huawei Raid Series Linux Driver */
+
+#define pr_fmt(fmt) "hiraid: " fmt
+
+#include <linux/sched/signal.h>
+#include <linux/version.h>
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/module.h>
+#include <linux/ioport.h>
+#include <linux/device.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/cdev.h>
+#include <linux/sysfs.h>
+#include <linux/gfp.h>
+#include <linux/types.h>
+#include <linux/ratelimit.h>
+#include <linux/once.h>
+#include <linux/debugfs.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/blkdev.h>
+#include <linux/bsg-lib.h>
+#include <asm/unaligned.h>
+#include <linux/sort.h>
+#include <target/target_core_backend.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport.h>
+#include <scsi/scsi_dbg.h>
+#include <scsi/sg.h>
+
+#include "hiraid.h"
+
+static u32 admin_tmout = 60;
+module_param(admin_tmout, uint, 0644);
+MODULE_PARM_DESC(admin_tmout, "admin commands timeout (seconds)");
+
+static u32 scmd_tmout_rawdisk = 180;
+module_param(scmd_tmout_rawdisk, uint, 0644);
+MODULE_PARM_DESC(scmd_tmout_rawdisk, "scsi commands timeout for rawdisk(seconds)");
+
+static u32 scmd_tmout_vd = 180;
+module_param(scmd_tmout_vd, uint, 0644);
+MODULE_PARM_DESC(scmd_tmout_vd, "scsi commands timeout for vd(seconds)");
+
+static bool max_io_force;
+module_param(max_io_force, bool, 0644);
+MODULE_PARM_DESC(max_io_force, "force max_hw_sectors_kb = 1024, default false(performance first)");
+
+static bool work_mode;
+module_param(work_mode, bool, 0444);
+MODULE_PARM_DESC(work_mode, "work mode switch, default false for multi hw queues");
+
+#define MAX_IO_QUEUES 128
+#define MIN_IO_QUEUES 1
+
+static int ioq_num_set(const char *val, const struct kernel_param *kp)
+{
+ int n = 0;
+ int ret;
+
+ ret = kstrtoint(val, 10, &n);
+ if (ret != 0 || n < MIN_IO_QUEUES || n > MAX_IO_QUEUES)
+ return -EINVAL;
+
+ return param_set_int(val, kp);
+}
+
+static const struct kernel_param_ops max_hwq_num_ops = {
+ .set = ioq_num_set,
+ .get = param_get_uint,
+};
+
+static u32 max_hwq_num = 128;
+module_param_cb(max_hwq_num, &max_hwq_num_ops, &max_hwq_num, 0444);
+MODULE_PARM_DESC(max_hwq_num, "max num of hw io queues, should >= 1, default 128");
+
+static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
+{
+ int n = 0;
+ int ret;
+
+ ret = kstrtoint(val, 10, &n);
+ if (ret != 0 || n < 2)
+ return -EINVAL;
+
+ return param_set_int(val, kp);
+}
+
+static const struct kernel_param_ops io_queue_depth_ops = {
+ .set = io_queue_depth_set,
+ .get = param_get_uint,
+};
+
+static u32 io_queue_depth = 1024;
+module_param_cb(io_queue_depth, &io_queue_depth_ops, &io_queue_depth, 0644);
+MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2");
+
+static u32 log_debug_switch;
+module_param(log_debug_switch, uint, 0644);
+MODULE_PARM_DESC(log_debug_switch, "set log state, default zero for switch off");
+
+static int extra_pool_num_set(const char *val, const struct kernel_param *kp)
+{
+ u8 n = 0;
+ int ret;
+
+ ret = kstrtou8(val, 10, &n);
+ if (ret != 0)
+ return -EINVAL;
+ if (n > MAX_EXTRA_POOL_NUM)
+ n = MAX_EXTRA_POOL_NUM;
+ if (n < 1)
+ n = 1;
+ *((u8 *)kp->arg) = n;
+
+ return 0;
+}
+
+static const struct kernel_param_ops small_pool_num_ops = {
+ .set = extra_pool_num_set,
+ .get = param_get_byte,
+};
+
+/* It was found that the spindlock of a single pool conflicts
+ * a lot with multiple CPUs.So multiple pools are introduced
+ * to reduce the conflictions.
+ */
+static unsigned char extra_pool_num = 4;
+module_param_cb(extra_pool_num, &small_pool_num_ops, &extra_pool_num, 0644);
+MODULE_PARM_DESC(extra_pool_num, "set prp extra pool num, default 4, MAX 16");
+
+static void hiraid_handle_async_notice(struct hiraid_dev *hdev, u32 result);
+static void hiraid_handle_async_vs(struct hiraid_dev *hdev, u32 result, u32 result1);
+
+static struct class *hiraid_class;
+
+#define HIRAID_CAP_TIMEOUT_UNIT_MS (HZ / 2)
+
+static struct workqueue_struct *work_queue;
+
+#define dev_log_dbg(dev, fmt, ...) do { \
+ if (unlikely(log_debug_switch)) \
+ dev_info(dev, "[%s] " fmt, \
+ __func__, ##__VA_ARGS__); \
+} while (0)
+
+#define HIRAID_DRV_VERSION "1.1.0.0"
+
+#define ADMIN_TIMEOUT (admin_tmout * HZ)
+#define USRCMD_TIMEOUT (180 * HZ)
+#define CTL_RST_TIME (600 * HZ)
+
+#define HIRAID_WAIT_ABNL_CMD_TIMEOUT 6
+#define HIRAID_WAIT_RST_IO_TIMEOUT 10
+
+#define HIRAID_DMA_MSK_BIT_MAX 64
+
+#define IOQ_PT_DATA_LEN 4096
+#define IOQ_PT_SGL_DATA_LEN (1024 * 1024)
+
+#define MAX_CAN_QUEUE (4096 - 1)
+#define MIN_CAN_QUEUE (1024 - 1)
+
+enum SENSE_STATE_CODE {
+ SENSE_STATE_OK = 0,
+ SENSE_STATE_NEED_CHECK,
+ SENSE_STATE_ERROR,
+ SENSE_STATE_EP_PCIE_ERROR,
+ SENSE_STATE_NAC_DMA_ERROR,
+ SENSE_STATE_ABORTED,
+ SENSE_STATE_NEED_RETRY
+};
+
+enum {
+ FW_EH_OK = 0,
+ FW_EH_DEV_NONE = 0x701
+};
+
+static const char * const raid_levels[] = {"0", "1", "5", "6", "10", "50", "60", "NA"};
+
+static const char * const raid_states[] = {
+ "NA", "NORMAL", "FAULT", "DEGRADE", "NOT_FORMATTED", "FORMATTING", "SANITIZING",
+ "INITIALIZING", "INITIALIZE_FAIL", "DELETING", "DELETE_FAIL", "WRITE_PROTECT"
+};
+
+static int hiraid_remap_bar(struct hiraid_dev *hdev, u32 size)
+{
+ struct pci_dev *pdev = hdev->pdev;
+
+ if (size > pci_resource_len(pdev, 0)) {
+ dev_err(hdev->dev, "input size[%u] exceed bar0 length[%llu]\n",
+ size, pci_resource_len(pdev, 0));
+ return -ENOMEM;
+ }
+
+ if (hdev->bar)
+ iounmap(hdev->bar);
+
+ hdev->bar = ioremap(pci_resource_start(pdev, 0), size);
+ if (!hdev->bar) {
+ dev_err(hdev->dev, "ioremap for bar0 failed\n");
+ return -ENOMEM;
+ }
+ hdev->dbs = hdev->bar + HIRAID_REG_DBS;
+
+ return 0;
+}
+
+static int hiraid_dev_map(struct hiraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ int ret;
+
+ ret = pci_request_mem_regions(pdev, "hiraid");
+ if (ret) {
+ dev_err(hdev->dev, "fail to request memory regions\n");
+ return ret;
+ }
+
+ ret = hiraid_remap_bar(hdev, HIRAID_REG_DBS + 4096);
+ if (ret) {
+ pci_release_mem_regions(pdev);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void hiraid_dev_unmap(struct hiraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+
+ if (hdev->bar) {
+ iounmap(hdev->bar);
+ hdev->bar = NULL;
+ }
+ pci_release_mem_regions(pdev);
+}
+
+static int hiraid_pci_enable(struct hiraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ int ret = -ENOMEM;
+ u64 maskbit = HIRAID_DMA_MSK_BIT_MAX;
+
+ if (pci_enable_device_mem(pdev)) {
+ dev_err(hdev->dev, "enable pci device memory resources failed\n");
+ return ret;
+ }
+ pci_set_master(pdev);
+
+ if (readl(hdev->bar + HIRAID_REG_CSTS) == U32_MAX) {
+ ret = -ENODEV;
+ dev_err(hdev->dev, "read CSTS register failed\n");
+ goto disable;
+ }
+
+ hdev->cap = lo_hi_readq(hdev->bar + HIRAID_REG_CAP);
+ hdev->ioq_depth = min_t(u32, HIRAID_CAP_MQES(hdev->cap) + 1, io_queue_depth);
+ hdev->db_stride = 1 << HIRAID_CAP_STRIDE(hdev->cap);
+
+ maskbit = HIRAID_CAP_DMAMASK(hdev->cap);
+ if (maskbit < 32 || maskbit > HIRAID_DMA_MSK_BIT_MAX) {
+ dev_err(hdev->dev, "err, dma mask invalid[%llu], set to default\n", maskbit);
+ maskbit = HIRAID_DMA_MSK_BIT_MAX;
+ }
+
+ if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(maskbit))) {
+ if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
+ dev_err(hdev->dev, "set dma mask[32] and coherent failed\n");
+ goto disable;
+ }
+ dev_info(hdev->dev, "set dma mask[32] success\n");
+ } else {
+ dev_info(hdev->dev, "set dma mask[%llu] success\n", maskbit);
+ }
+
+ ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
+ if (ret < 0) {
+ dev_err(hdev->dev, "allocate one IRQ for setup admin queue failed\n");
+ goto disable;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+ pci_save_state(pdev);
+
+ return 0;
+
+disable:
+ pci_disable_device(pdev);
+ return ret;
+}
+
+
+/*
+ * It is fact that first prp and last prp may be not full page.
+ * The size to count total nprps for the io equal to size + page_size,
+ * it may be a slightly overestimate.
+ *
+ * 8B per prp address. It may be there is one prp_list address per page,
+ * prp_list address does not count in io data prps. So divisor equal to
+ * PAGE_SIZE - 8, it may be a slightly overestimate.
+ */
+static int hiraid_prp_pagenum(struct hiraid_dev *hdev)
+{
+ u32 size = 1U << ((hdev->ctrl_info->mdts) * 1U) << 12;
+ u32 nprps = DIV_ROUND_UP(size + hdev->page_size, hdev->page_size);
+
+ return DIV_ROUND_UP(PRP_ENTRY_SIZE * nprps, hdev->page_size - PRP_ENTRY_SIZE);
+}
+
+/*
+ * Calculates the number of pages needed for the SGL segments. For example a 4k
+ * page can accommodate 256 SGL descriptors.
+ */
+static int hiraid_sgl_pagenum(struct hiraid_dev *hdev)
+{
+ u32 nsge = le16_to_cpu(hdev->ctrl_info->max_num_sge);
+
+ return DIV_ROUND_UP(nsge * sizeof(struct hiraid_sgl_desc), hdev->page_size);
+}
+
+static inline void **hiraid_mapbuf_list(struct hiraid_mapmange *mapbuf)
+{
+ return mapbuf->list;
+}
+
+static u32 hiraid_get_max_cmd_size(struct hiraid_dev *hdev)
+{
+ u32 alloc_size = sizeof(__le64 *) * max(hiraid_prp_pagenum(hdev), hiraid_sgl_pagenum(hdev));
+
+ dev_info(hdev->dev, "mapbuf size[%lu], alloc_size[%u]\n",
+ sizeof(struct hiraid_mapmange), alloc_size);
+
+ return sizeof(struct hiraid_mapmange) + alloc_size;
+}
+
+static int hiraid_build_passthru_prp(struct hiraid_dev *hdev, struct hiraid_mapmange *mapbuf)
+{
+ struct scatterlist *sg = mapbuf->sgl;
+ __le64 *phy_regpage, *prior_list;
+ u64 buf_addr = sg_dma_address(sg);
+ int buf_length = sg_dma_len(sg);
+ u32 page_size = hdev->page_size;
+ int offset = buf_addr & (page_size - 1);
+ void **list = hiraid_mapbuf_list(mapbuf);
+ int maplen = mapbuf->len;
+ struct dma_pool *pool;
+ dma_addr_t buffer_phy;
+ int i;
+
+ maplen -= (page_size - offset);
+ if (maplen <= 0) {
+ mapbuf->first_dma = 0;
+ return 0;
+ }
+
+ buf_length -= (page_size - offset);
+ if (buf_length) {
+ buf_addr += (page_size - offset);
+ } else {
+ sg = sg_next(sg);
+ buf_addr = sg_dma_address(sg);
+ buf_length = sg_dma_len(sg);
+ }
+
+ if (maplen <= page_size) {
+ mapbuf->first_dma = buf_addr;
+ return 0;
+ }
+
+ pool = hdev->prp_page_pool;
+ mapbuf->page_cnt = 1;
+
+ phy_regpage = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!phy_regpage) {
+ dev_err_ratelimited(hdev->dev, "allocate first admin prp_list memory failed\n");
+ mapbuf->first_dma = buf_addr;
+ mapbuf->page_cnt = -1;
+ return -ENOMEM;
+ }
+ list[0] = phy_regpage;
+ mapbuf->first_dma = buffer_phy;
+ i = 0;
+ for (;;) {
+ if (i == page_size / PRP_ENTRY_SIZE) {
+ prior_list = phy_regpage;
+
+ phy_regpage = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!phy_regpage) {
+ dev_err_ratelimited(hdev->dev, "allocate [%d]th admin prp list memory failed\n",
+ mapbuf->page_cnt + 1);
+ return -ENOMEM;
+ }
+ list[mapbuf->page_cnt++] = phy_regpage;
+ phy_regpage[0] = prior_list[i - 1];
+ prior_list[i - 1] = cpu_to_le64(buffer_phy);
+ i = 1;
+ }
+ phy_regpage[i++] = cpu_to_le64(buf_addr);
+ buf_addr += page_size;
+ buf_length -= page_size;
+ maplen -= page_size;
+ if (maplen <= 0)
+ break;
+ if (buf_length > 0)
+ continue;
+ if (unlikely(buf_length < 0))
+ goto bad_admin_sgl;
+ sg = sg_next(sg);
+ buf_addr = sg_dma_address(sg);
+ buf_length = sg_dma_len(sg);
+ }
+
+ return 0;
+
+bad_admin_sgl:
+ dev_err(hdev->dev, "setup prps, invalid admin SGL for payload[%d] nents[%d]\n",
+ mapbuf->len, mapbuf->sge_cnt);
+ return -EIO;
+}
+
+static int hiraid_build_prp(struct hiraid_dev *hdev, struct hiraid_mapmange *mapbuf)
+{
+ struct scatterlist *sg = mapbuf->sgl;
+ __le64 *phy_regpage, *prior_list;
+ u64 buf_addr = sg_dma_address(sg);
+ int buf_length = sg_dma_len(sg);
+ u32 page_size = hdev->page_size;
+ int offset = buf_addr & (page_size - 1);
+ void **list = hiraid_mapbuf_list(mapbuf);
+ int maplen = mapbuf->len;
+ struct dma_pool *pool;
+ dma_addr_t buffer_phy;
+ int nprps, i;
+
+ maplen -= (page_size - offset);
+ if (maplen <= 0) {
+ mapbuf->first_dma = 0;
+ return 0;
+ }
+
+ buf_length -= (page_size - offset);
+ if (buf_length) {
+ buf_addr += (page_size - offset);
+ } else {
+ sg = sg_next(sg);
+ buf_addr = sg_dma_address(sg);
+ buf_length = sg_dma_len(sg);
+ }
+
+ if (maplen <= page_size) {
+ mapbuf->first_dma = buf_addr;
+ return 0;
+ }
+
+ nprps = DIV_ROUND_UP(maplen, page_size);
+ if (nprps <= (EXTRA_POOL_SIZE / PRP_ENTRY_SIZE)) {
+ pool = mapbuf->hiraidq->prp_small_pool;
+ mapbuf->page_cnt = 0;
+ } else {
+ pool = hdev->prp_page_pool;
+ mapbuf->page_cnt = 1;
+ }
+
+ phy_regpage = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!phy_regpage) {
+ dev_err_ratelimited(hdev->dev, "allocate first prp_list memory failed\n");
+ mapbuf->first_dma = buf_addr;
+ mapbuf->page_cnt = -1;
+ return -ENOMEM;
+ }
+ list[0] = phy_regpage;
+ mapbuf->first_dma = buffer_phy;
+ i = 0;
+ for (;;) {
+ if (i == page_size / PRP_ENTRY_SIZE) {
+ prior_list = phy_regpage;
+
+ phy_regpage = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!phy_regpage) {
+ dev_err_ratelimited(hdev->dev, "allocate [%d]th prp list memory failed\n",
+ mapbuf->page_cnt + 1);
+ return -ENOMEM;
+ }
+ list[mapbuf->page_cnt++] = phy_regpage;
+ phy_regpage[0] = prior_list[i - 1];
+ prior_list[i - 1] = cpu_to_le64(buffer_phy);
+ i = 1;
+ }
+ phy_regpage[i++] = cpu_to_le64(buf_addr);
+ buf_addr += page_size;
+ buf_length -= page_size;
+ maplen -= page_size;
+ if (maplen <= 0)
+ break;
+ if (buf_length > 0)
+ continue;
+ if (unlikely(buf_length < 0))
+ goto bad_sgl;
+ sg = sg_next(sg);
+ buf_addr = sg_dma_address(sg);
+ buf_length = sg_dma_len(sg);
+ }
+
+ return 0;
+
+bad_sgl:
+ dev_err(hdev->dev, "setup prps, invalid SGL for payload[%d] nents[%d]\n",
+ mapbuf->len, mapbuf->sge_cnt);
+ return -EIO;
+}
+
+#define SGES_PER_PAGE (PAGE_SIZE / sizeof(struct hiraid_sgl_desc))
+
+static void hiraid_submit_cmd(struct hiraid_queue *hiraidq, const void *cmd)
+{
+ u32 sqes = SQE_SIZE(hiraidq->qid);
+ unsigned long flags;
+ struct hiraid_admin_com_cmd *acd = (struct hiraid_admin_com_cmd *)cmd;
+
+ spin_lock_irqsave(&hiraidq->sq_lock, flags);
+ memcpy((hiraidq->sq_cmds + sqes * hiraidq->sq_tail), cmd, sqes);
+ if (++hiraidq->sq_tail == hiraidq->q_depth)
+ hiraidq->sq_tail = 0;
+
+ writel(hiraidq->sq_tail, hiraidq->q_db);
+ spin_unlock_irqrestore(&hiraidq->sq_lock, flags);
+
+ dev_log_dbg(hiraidq->hdev->dev, "cid[%d] qid[%d] opcode[0x%x] flags[0x%x] hdid[%u]\n",
+ le16_to_cpu(acd->cmd_id), hiraidq->qid, acd->opcode, acd->flags,
+ le32_to_cpu(acd->hdid));
+}
+
+static inline bool hiraid_is_rw_scmd(struct scsi_cmnd *scmd)
+{
+ switch (scmd->cmnd[0]) {
+ case READ_6:
+ case READ_10:
+ case READ_12:
+ case READ_16:
+ case WRITE_6:
+ case WRITE_10:
+ case WRITE_12:
+ case WRITE_16:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/*
+ * checks if prps can be built for the IO cmd
+ */
+static bool hiraid_is_prp(struct hiraid_dev *hdev, struct scatterlist *sgl, u32 nsge)
+{
+ struct scatterlist *sg = sgl;
+ u32 page_mask = hdev->page_size - 1;
+ bool is_prp = true;
+ u32 i = 0;
+
+ for_each_sg(sgl, sg, nsge, i) {
+ /*
+ * Data length of the middle sge multiple of page_size,
+ * address page_size aligned.
+ */
+ if (i != 0 && i != nsge - 1) {
+ if ((sg_dma_len(sg) & page_mask) ||
+ (sg_dma_address(sg) & page_mask)) {
+ is_prp = false;
+ break;
+ }
+ }
+
+ /*
+ * The first sge addr plus the data length meets
+ * the page_size alignment.
+ */
+ if (nsge > 1 && i == 0) {
+ if ((sg_dma_address(sg) + sg_dma_len(sg)) & page_mask) {
+ is_prp = false;
+ break;
+ }
+ }
+
+ /* The last sge addr meets the page_size alignment. */
+ if (nsge > 1 && i == (nsge - 1)) {
+ if (sg_dma_address(sg) & page_mask) {
+ is_prp = false;
+ break;
+ }
+ }
+ }
+
+ return is_prp;
+}
+
+enum {
+ HIRAID_SGL_FMT_DATA_DESC = 0x00,
+ HIRAID_SGL_FMT_SEG_DESC = 0x02,
+ HIRAID_SGL_FMT_LAST_SEG_DESC = 0x03,
+ HIRAID_KEY_SGL_FMT_DATA_DESC = 0x04,
+ HIRAID_TRANSPORT_SGL_DATA_DESC = 0x05
+};
+
+static void hiraid_sgl_set_data(struct hiraid_sgl_desc *sge, struct scatterlist *sg)
+{
+ sge->addr = cpu_to_le64(sg_dma_address(sg));
+ sge->length = cpu_to_le32(sg_dma_len(sg));
+ sge->type = HIRAID_SGL_FMT_DATA_DESC << 4;
+}
+
+static void hiraid_sgl_set_seg(struct hiraid_sgl_desc *sge, dma_addr_t buffer_phy, int entries)
+{
+ sge->addr = cpu_to_le64(buffer_phy);
+ if (entries <= SGES_PER_PAGE) {
+ sge->length = cpu_to_le32(entries * sizeof(*sge));
+ sge->type = HIRAID_SGL_FMT_LAST_SEG_DESC << 4;
+ } else {
+ sge->length = cpu_to_le32(PAGE_SIZE);
+ sge->type = HIRAID_SGL_FMT_SEG_DESC << 4;
+ }
+}
+
+static int hiraid_build_passthru_sgl(struct hiraid_dev *hdev,
+ struct hiraid_admin_command *admin_cmd,
+ struct hiraid_mapmange *mapbuf)
+{
+ struct hiraid_sgl_desc *sg_list, *link, *old_sg_list;
+ struct scatterlist *sg = mapbuf->sgl;
+ void **list = hiraid_mapbuf_list(mapbuf);
+ struct dma_pool *pool;
+ int nsge = mapbuf->sge_cnt;
+ dma_addr_t buffer_phy;
+ int i = 0;
+
+ admin_cmd->common.flags |= SQE_FLAG_SGL_METABUF;
+
+ if (nsge == 1) {
+ hiraid_sgl_set_data(&admin_cmd->common.dptr.sgl, sg);
+ return 0;
+ }
+
+ pool = hdev->prp_page_pool;
+ mapbuf->page_cnt = 1;
+
+ sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!sg_list) {
+ dev_err_ratelimited(hdev->dev, "allocate first admin sgl_list failed\n");
+ mapbuf->page_cnt = -1;
+ return -ENOMEM;
+ }
+
+ list[0] = sg_list;
+ mapbuf->first_dma = buffer_phy;
+ hiraid_sgl_set_seg(&admin_cmd->common.dptr.sgl, buffer_phy, nsge);
+ do {
+ if (i == SGES_PER_PAGE) {
+ old_sg_list = sg_list;
+ link = &old_sg_list[SGES_PER_PAGE - 1];
+
+ sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!sg_list) {
+ dev_err_ratelimited(hdev->dev, "allocate [%d]th admin sgl_list failed\n",
+ mapbuf->page_cnt + 1);
+ return -ENOMEM;
+ }
+ list[mapbuf->page_cnt++] = sg_list;
+
+ i = 0;
+ memcpy(&sg_list[i++], link, sizeof(*link));
+ hiraid_sgl_set_seg(link, buffer_phy, nsge);
+ }
+
+ hiraid_sgl_set_data(&sg_list[i++], sg);
+ sg = sg_next(sg);
+ } while (--nsge > 0);
+
+ return 0;
+}
+
+
+static int hiraid_build_sgl(struct hiraid_dev *hdev, struct hiraid_scsi_io_cmd *io_cmd,
+ struct hiraid_mapmange *mapbuf)
+{
+ struct hiraid_sgl_desc *sg_list, *link, *old_sg_list;
+ struct scatterlist *sg = mapbuf->sgl;
+ void **list = hiraid_mapbuf_list(mapbuf);
+ struct dma_pool *pool;
+ int nsge = mapbuf->sge_cnt;
+ dma_addr_t buffer_phy;
+ int i = 0;
+
+ io_cmd->common.flags |= SQE_FLAG_SGL_METABUF;
+
+ if (nsge == 1) {
+ hiraid_sgl_set_data(&io_cmd->common.dptr.sgl, sg);
+ return 0;
+ }
+
+ if (nsge <= (EXTRA_POOL_SIZE / sizeof(struct hiraid_sgl_desc))) {
+ pool = mapbuf->hiraidq->prp_small_pool;
+ mapbuf->page_cnt = 0;
+ } else {
+ pool = hdev->prp_page_pool;
+ mapbuf->page_cnt = 1;
+ }
+
+ sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!sg_list) {
+ dev_err_ratelimited(hdev->dev, "allocate first sgl_list failed\n");
+ mapbuf->page_cnt = -1;
+ return -ENOMEM;
+ }
+
+ list[0] = sg_list;
+ mapbuf->first_dma = buffer_phy;
+ hiraid_sgl_set_seg(&io_cmd->common.dptr.sgl, buffer_phy, nsge);
+ do {
+ if (i == SGES_PER_PAGE) {
+ old_sg_list = sg_list;
+ link = &old_sg_list[SGES_PER_PAGE - 1];
+
+ sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!sg_list) {
+ dev_err_ratelimited(hdev->dev, "allocate [%d]th sgl_list failed\n",
+ mapbuf->page_cnt + 1);
+ return -ENOMEM;
+ }
+ list[mapbuf->page_cnt++] = sg_list;
+
+ i = 0;
+ memcpy(&sg_list[i++], link, sizeof(*link));
+ hiraid_sgl_set_seg(link, buffer_phy, nsge);
+ }
+
+ hiraid_sgl_set_data(&sg_list[i++], sg);
+ sg = sg_next(sg);
+ } while (--nsge > 0);
+
+ return 0;
+}
+
+#define HIRAID_RW_FUA BIT(14)
+
+static int hiraid_setup_rw_cmd(struct hiraid_dev *hdev,
+ struct hiraid_scsi_rw_cmd *io_cmd,
+ struct scsi_cmnd *scmd)
+{
+ u32 start_lba_lo, start_lba_hi;
+ u32 datalength = 0;
+ u16 control = 0;
+
+ start_lba_lo = 0;
+ start_lba_hi = 0;
+
+ if (scmd->sc_data_direction == DMA_TO_DEVICE) {
+ io_cmd->opcode = HIRAID_CMD_WRITE;
+ } else if (scmd->sc_data_direction == DMA_FROM_DEVICE) {
+ io_cmd->opcode = HIRAID_CMD_READ;
+ } else {
+ dev_err(hdev->dev, "invalid RW_IO for unsupported data direction[%d]\n",
+ scmd->sc_data_direction);
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ /* 6-byte READ(0x08) or WRITE(0x0A) cdb */
+ if (scmd->cmd_len == 6) {
+ datalength = (u32)(scmd->cmnd[4] == 0 ?
+ IO_6_DEFAULT_TX_LEN : scmd->cmnd[4]);
+ start_lba_lo = (u32)get_unaligned_be24(&scmd->cmnd[1]);
+
+ start_lba_lo &= 0x1FFFFF;
+ }
+
+ /* 10-byte READ(0x28) or WRITE(0x2A) cdb */
+ else if (scmd->cmd_len == 10) {
+ datalength = (u32)get_unaligned_be16(&scmd->cmnd[7]);
+ start_lba_lo = get_unaligned_be32(&scmd->cmnd[2]);
+
+ if (scmd->cmnd[1] & FUA_MASK)
+ control |= HIRAID_RW_FUA;
+ }
+
+ /* 12-byte READ(0xA8) or WRITE(0xAA) cdb */
+ else if (scmd->cmd_len == 12) {
+ datalength = get_unaligned_be32(&scmd->cmnd[6]);
+ start_lba_lo = get_unaligned_be32(&scmd->cmnd[2]);
+
+ if (scmd->cmnd[1] & FUA_MASK)
+ control |= HIRAID_RW_FUA;
+ }
+ /* 16-byte READ(0x88) or WRITE(0x8A) cdb */
+ else if (scmd->cmd_len == 16) {
+ datalength = get_unaligned_be32(&scmd->cmnd[10]);
+ start_lba_lo = get_unaligned_be32(&scmd->cmnd[6]);
+ start_lba_hi = get_unaligned_be32(&scmd->cmnd[2]);
+
+ if (scmd->cmnd[1] & FUA_MASK)
+ control |= HIRAID_RW_FUA;
+ }
+
+ if (unlikely(datalength > U16_MAX || datalength == 0)) {
+ dev_err(hdev->dev, "invalid IO for illegal transfer data length[%u]\n", datalength);
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ io_cmd->slba = cpu_to_le64(((u64)start_lba_hi << 32) | start_lba_lo);
+ /* 0base for nlb */
+ io_cmd->nlb = cpu_to_le16((u16)(datalength - 1));
+ io_cmd->control = cpu_to_le16(control);
+
+ return 0;
+}
+
+static int hiraid_setup_nonrw_cmd(struct hiraid_dev *hdev,
+ struct hiraid_scsi_nonrw_cmd *io_cmd, struct scsi_cmnd *scmd)
+{
+ io_cmd->buf_len = cpu_to_le32(scsi_bufflen(scmd));
+
+ switch (scmd->sc_data_direction) {
+ case DMA_NONE:
+ io_cmd->opcode = HIRAID_CMD_NONRW_NONE;
+ break;
+ case DMA_TO_DEVICE:
+ io_cmd->opcode = HIRAID_CMD_NONRW_TODEV;
+ break;
+ case DMA_FROM_DEVICE:
+ io_cmd->opcode = HIRAID_CMD_NONRW_FROMDEV;
+ break;
+ default:
+ dev_err(hdev->dev, "invalid NON_IO for unsupported data direction[%d]\n",
+ scmd->sc_data_direction);
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hiraid_setup_io_cmd(struct hiraid_dev *hdev,
+ struct hiraid_scsi_io_cmd *io_cmd, struct scsi_cmnd *scmd)
+{
+ memcpy(io_cmd->common.cdb, scmd->cmnd, scmd->cmd_len);
+ io_cmd->common.cdb_len = scmd->cmd_len;
+
+ if (hiraid_is_rw_scmd(scmd))
+ return hiraid_setup_rw_cmd(hdev, &io_cmd->rw, scmd);
+ else
+ return hiraid_setup_nonrw_cmd(hdev, &io_cmd->nonrw, scmd);
+}
+
+static inline void hiraid_init_mapbuff(struct hiraid_mapmange *mapbuf)
+{
+ mapbuf->sge_cnt = 0;
+ mapbuf->page_cnt = -1;
+ mapbuf->use_sgl = false;
+ WRITE_ONCE(mapbuf->state, CMD_IDLE);
+}
+
+static void hiraid_free_mapbuf(struct hiraid_dev *hdev, struct hiraid_mapmange *mapbuf)
+{
+ const int last_prp = hdev->page_size / sizeof(__le64) - 1;
+ dma_addr_t buffer_phy, next_buffer_phy;
+ struct hiraid_sgl_desc *sg_list;
+ __le64 *prp_list;
+ void *addr;
+ int i;
+
+ buffer_phy = mapbuf->first_dma;
+ if (mapbuf->page_cnt == 0)
+ dma_pool_free(mapbuf->hiraidq->prp_small_pool,
+ hiraid_mapbuf_list(mapbuf)[0], buffer_phy);
+
+ for (i = 0; i < mapbuf->page_cnt; i++) {
+ addr = hiraid_mapbuf_list(mapbuf)[i];
+
+ if (mapbuf->use_sgl) {
+ sg_list = addr;
+ next_buffer_phy =
+ le64_to_cpu((sg_list[SGES_PER_PAGE - 1]).addr);
+ } else {
+ prp_list = addr;
+ next_buffer_phy = le64_to_cpu(prp_list[last_prp]);
+ }
+
+ dma_pool_free(hdev->prp_page_pool, addr, buffer_phy);
+ buffer_phy = next_buffer_phy;
+ }
+
+ mapbuf->sense_buffer_virt = NULL;
+ mapbuf->page_cnt = -1;
+}
+
+static int hiraid_io_map_data(struct hiraid_dev *hdev, struct hiraid_mapmange *mapbuf,
+ struct scsi_cmnd *scmd, struct hiraid_scsi_io_cmd *io_cmd)
+{
+ int ret;
+
+ ret = scsi_dma_map(scmd);
+ if (unlikely(ret < 0))
+ return ret;
+ mapbuf->sge_cnt = ret;
+
+ /* No data to DMA, it may be scsi no-rw command */
+ if (unlikely(mapbuf->sge_cnt == 0))
+ return 0;
+
+ mapbuf->len = scsi_bufflen(scmd);
+ mapbuf->sgl = scsi_sglist(scmd);
+ mapbuf->use_sgl = !hiraid_is_prp(hdev, mapbuf->sgl, mapbuf->sge_cnt);
+
+ if (mapbuf->use_sgl) {
+ ret = hiraid_build_sgl(hdev, io_cmd, mapbuf);
+ } else {
+ ret = hiraid_build_prp(hdev, mapbuf);
+ io_cmd->common.dptr.prp1 =
+ cpu_to_le64(sg_dma_address(mapbuf->sgl));
+ io_cmd->common.dptr.prp2 = cpu_to_le64(mapbuf->first_dma);
+ }
+
+ if (ret)
+ scsi_dma_unmap(scmd);
+
+ return ret;
+}
+
+static void hiraid_check_status(struct hiraid_mapmange *mapbuf, struct scsi_cmnd *scmd,
+ struct hiraid_completion *cqe)
+{
+ scsi_set_resid(scmd, 0);
+
+ switch ((le16_to_cpu(cqe->status) >> 1) & 0x7f) {
+ case SENSE_STATE_OK:
+ set_host_byte(scmd, DID_OK);
+ break;
+ case SENSE_STATE_NEED_CHECK:
+ set_host_byte(scmd, DID_OK);
+ scmd->result |= le16_to_cpu(cqe->status) >> 8;
+ if (scmd->result & SAM_STAT_CHECK_CONDITION) {
+ memset(scmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ memcpy(scmd->sense_buffer,
+ mapbuf->sense_buffer_virt, SCSI_SENSE_BUFFERSIZE);
+ scmd->result = (scmd->result & 0x00ffffff) | (DRIVER_SENSE << 24);
+ }
+ break;
+ case SENSE_STATE_ABORTED:
+ set_host_byte(scmd, DID_ABORT);
+ break;
+ case SENSE_STATE_NEED_RETRY:
+ set_host_byte(scmd, DID_REQUEUE);
+ break;
+ default:
+ set_host_byte(scmd, DID_BAD_TARGET);
+ dev_warn_ratelimited(mapbuf->hiraidq->hdev->dev, "cid[%d] qid[%d] sdev[%d:%d] opcode[%.2x] bad status[0x%x]\n",
+ le16_to_cpu(cqe->cmd_id), le16_to_cpu(cqe->sq_id), scmd->device->channel,
+ scmd->device->id, scmd->cmnd[0], le16_to_cpu(cqe->status));
+ break;
+ }
+}
+
+static inline void hiraid_query_scmd_tag(struct scsi_cmnd *scmd, u16 *qid, u16 *cid,
+ struct hiraid_dev *hdev, struct hiraid_sdev_hostdata *hostdata)
+{
+ u32 tag = blk_mq_unique_tag(blk_mq_rq_from_pdu((void *)scmd));
+
+ if (work_mode) {
+ if ((hdev->hdd_dispatch == DISPATCH_BY_DISK) && (hostdata->hwq != 0))
+ *qid = hostdata->hwq;
+ else
+ *qid = raw_smp_processor_id() % (hdev->online_queues - 1) + 1;
+ } else {
+ *qid = blk_mq_unique_tag_to_hwq(tag) + 1;
+ }
+ *cid = blk_mq_unique_tag_to_tag(tag);
+}
+
+static int hiraid_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+{
+ struct hiraid_mapmange *mapbuf = scsi_cmd_priv(scmd);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ struct scsi_device *sdev = scmd->device;
+ struct hiraid_sdev_hostdata *hostdata;
+ struct hiraid_scsi_io_cmd io_cmd;
+ struct hiraid_queue *ioq;
+ u16 hwq, cid;
+ int ret;
+
+ if (unlikely(hdev->state == DEV_RESETTING))
+ return SCSI_MLQUEUE_HOST_BUSY;
+
+ if (unlikely(hdev->state != DEV_LIVE)) {
+ set_host_byte(scmd, DID_NO_CONNECT);
+ scmd->scsi_done(scmd);
+ return 0;
+ }
+
+ if (log_debug_switch)
+ scsi_print_command(scmd);
+
+ hostdata = sdev->hostdata;
+ hiraid_query_scmd_tag(scmd, &hwq, &cid, hdev, hostdata);
+ ioq = &hdev->queues[hwq];
+
+ if (unlikely(atomic_inc_return(&ioq->inflight) >
+ (hdev->ioq_depth - HIRAID_PTHRU_CMDS_PERQ))) {
+ atomic_dec(&ioq->inflight);
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+
+ memset(&io_cmd, 0, sizeof(io_cmd));
+ io_cmd.rw.hdid = cpu_to_le32(hostdata->hdid);
+ io_cmd.rw.cmd_id = cpu_to_le16(cid);
+
+ ret = hiraid_setup_io_cmd(hdev, &io_cmd, scmd);
+ if (unlikely(ret)) {
+ set_host_byte(scmd, DID_ERROR);
+ scmd->scsi_done(scmd);
+ atomic_dec(&ioq->inflight);
+ return 0;
+ }
+
+ ret = cid * SCSI_SENSE_BUFFERSIZE;
+ if (work_mode) {
+ mapbuf->sense_buffer_virt = hdev->sense_buffer_virt + ret;
+ mapbuf->sense_buffer_phy = hdev->sense_buffer_phy + ret;
+ } else {
+ mapbuf->sense_buffer_virt = ioq->sense_buffer_virt + ret;
+ mapbuf->sense_buffer_phy = ioq->sense_buffer_phy + ret;
+ }
+ io_cmd.common.sense_addr = cpu_to_le64(mapbuf->sense_buffer_phy);
+ io_cmd.common.sense_len = cpu_to_le16(SCSI_SENSE_BUFFERSIZE);
+
+ hiraid_init_mapbuff(mapbuf);
+
+ mapbuf->hiraidq = ioq;
+ mapbuf->cid = cid;
+ ret = hiraid_io_map_data(hdev, mapbuf, scmd, &io_cmd);
+ if (unlikely(ret)) {
+ dev_err(hdev->dev, "io map data err\n");
+ set_host_byte(scmd, DID_ERROR);
+ scmd->scsi_done(scmd);
+ ret = 0;
+ goto deinit_iobuf;
+ }
+
+ WRITE_ONCE(mapbuf->state, CMD_FLIGHT);
+ hiraid_submit_cmd(ioq, &io_cmd);
+
+ return 0;
+
+deinit_iobuf:
+ atomic_dec(&ioq->inflight);
+ hiraid_free_mapbuf(hdev, mapbuf);
+ return ret;
+}
+
+static int hiraid_match_dev(struct hiraid_dev *hdev, u16 idx, struct scsi_device *sdev)
+{
+ if (HIRAID_DEV_INFO_FLAG_VALID(hdev->dev_info[idx].flag)) {
+ if (sdev->channel == hdev->dev_info[idx].channel &&
+ sdev->id == le16_to_cpu(hdev->dev_info[idx].target) &&
+ sdev->lun < hdev->dev_info[idx].lun) {
+ dev_info(hdev->dev, "match device success, channel:target:lun[%d:%d:%d]\n",
+ hdev->dev_info[idx].channel,
+ hdev->dev_info[idx].target,
+ hdev->dev_info[idx].lun);
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static int hiraid_disk_qd(u8 attr)
+{
+ switch (HIRAID_DEV_DISK_TYPE(attr)) {
+ case HIRAID_SAS_HDD_VD:
+ case HIRAID_SATA_HDD_VD:
+ return HIRAID_HDD_VD_QD;
+ case HIRAID_SAS_SSD_VD:
+ case HIRAID_SATA_SSD_VD:
+ case HIRAID_NVME_SSD_VD:
+ return HIRAID_SSD_VD_QD;
+ case HIRAID_SAS_HDD_PD:
+ case HIRAID_SATA_HDD_PD:
+ return HIRAID_HDD_PD_QD;
+ case HIRAID_SAS_SSD_PD:
+ case HIRAID_SATA_SSD_PD:
+ case HIRAID_NVME_SSD_PD:
+ return HIRAID_SSD_PD_QD;
+ default:
+ return MAX_CMD_PER_DEV;
+ }
+}
+
+static bool hiraid_disk_is_hdd(u8 attr)
+{
+ switch (HIRAID_DEV_DISK_TYPE(attr)) {
+ case HIRAID_SAS_HDD_VD:
+ case HIRAID_SATA_HDD_VD:
+ case HIRAID_SAS_HDD_PD:
+ case HIRAID_SATA_HDD_PD:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static int hiraid_slave_alloc(struct scsi_device *sdev)
+{
+ struct hiraid_sdev_hostdata *hostdata;
+ struct hiraid_dev *hdev;
+ u16 idx;
+
+ hdev = shost_priv(sdev->host);
+ hostdata = kzalloc(sizeof(*hostdata), GFP_KERNEL);
+ if (!hostdata) {
+ dev_err(hdev->dev, "alloc scsi host data memory failed\n");
+ return -ENOMEM;
+ }
+
+ down_read(&hdev->dev_rwsem);
+ for (idx = 0; idx < le32_to_cpu(hdev->ctrl_info->nd); idx++) {
+ if (hiraid_match_dev(hdev, idx, sdev))
+ goto scan_host;
+ }
+ up_read(&hdev->dev_rwsem);
+
+ kfree(hostdata);
+ return -ENXIO;
+
+scan_host:
+ hostdata->hdid = le32_to_cpu(hdev->dev_info[idx].hdid);
+ hostdata->max_io_kb = le16_to_cpu(hdev->dev_info[idx].max_io_kb);
+ hostdata->attr = hdev->dev_info[idx].attr;
+ hostdata->flag = hdev->dev_info[idx].flag;
+ hostdata->rg_id = 0xff;
+ sdev->hostdata = hostdata;
+ up_read(&hdev->dev_rwsem);
+ return 0;
+}
+
+static void hiraid_slave_destroy(struct scsi_device *sdev)
+{
+ kfree(sdev->hostdata);
+ sdev->hostdata = NULL;
+}
+
+static int hiraid_slave_configure(struct scsi_device *sdev)
+{
+ unsigned int timeout = scmd_tmout_rawdisk * HZ;
+ struct hiraid_dev *hdev = shost_priv(sdev->host);
+ struct hiraid_sdev_hostdata *hostdata = sdev->hostdata;
+ u32 max_sec = sdev->host->max_sectors;
+ int qd = MAX_CMD_PER_DEV;
+
+ if (hostdata) {
+ if (HIRAID_DEV_INFO_ATTR_VD(hostdata->attr))
+ timeout = scmd_tmout_vd * HZ;
+ else if (HIRAID_DEV_INFO_ATTR_RAWDISK(hostdata->attr))
+ timeout = scmd_tmout_rawdisk * HZ;
+ max_sec = hostdata->max_io_kb << 1;
+ qd = hiraid_disk_qd(hostdata->attr);
+
+ if (hiraid_disk_is_hdd(hostdata->attr))
+ hostdata->hwq = hostdata->hdid % (hdev->online_queues - 1) + 1;
+ else
+ hostdata->hwq = 0;
+ } else {
+ dev_err(hdev->dev, "err, sdev->hostdata is null\n");
+ }
+
+ blk_queue_rq_timeout(sdev->request_queue, timeout);
+ sdev->eh_timeout = timeout;
+ scsi_change_queue_depth(sdev, qd);
+
+ if ((max_sec == 0) || (max_sec > sdev->host->max_sectors))
+ max_sec = sdev->host->max_sectors;
+
+ if (!max_io_force)
+ blk_queue_max_hw_sectors(sdev->request_queue, max_sec);
+
+ dev_info(hdev->dev, "sdev->channel:id:lun[%d:%d:%lld] scmd_timeout[%d]s maxsec[%d]\n",
+ sdev->channel, sdev->id, sdev->lun, timeout / HZ, max_sec);
+
+ return 0;
+}
+
+static void hiraid_shost_init(struct hiraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ u8 domain, bus;
+ u32 dev_func;
+
+ domain = pci_domain_nr(pdev->bus);
+ bus = pdev->bus->number;
+ dev_func = pdev->devfn;
+
+ hdev->shost->nr_hw_queues = work_mode ? 1 : hdev->online_queues - 1;
+ hdev->shost->can_queue = hdev->scsi_qd;
+
+ hdev->shost->sg_tablesize = le16_to_cpu(hdev->ctrl_info->max_num_sge);
+ /* 512B per sector */
+ hdev->shost->max_sectors = (1U << ((hdev->ctrl_info->mdts) * 1U) << 12) / 512;
+ hdev->shost->cmd_per_lun = MAX_CMD_PER_DEV;
+ hdev->shost->max_channel = le16_to_cpu(hdev->ctrl_info->max_channel) - 1;
+ hdev->shost->max_id = le32_to_cpu(hdev->ctrl_info->max_tgt_id);
+ hdev->shost->max_lun = le16_to_cpu(hdev->ctrl_info->max_lun);
+
+ hdev->shost->this_id = -1;
+ hdev->shost->unique_id = (domain << 16) | (bus << 8) | dev_func;
+ hdev->shost->max_cmd_len = MAX_CDB_LEN;
+ hdev->shost->hostt->cmd_size = hiraid_get_max_cmd_size(hdev);
+}
+
+static int hiraid_alloc_queue(struct hiraid_dev *hdev, u16 qid, u16 depth)
+{
+ struct hiraid_queue *hiraidq = &hdev->queues[qid];
+ int ret = 0;
+
+ if (hdev->queue_count > qid) {
+ dev_info(hdev->dev, "warn: queue[%d] is exist\n", qid);
+ return 0;
+ }
+
+ hiraidq->cqes = dma_alloc_coherent(hdev->dev, CQ_SIZE(depth),
+ &hiraidq->cq_buffer_phy, GFP_KERNEL | __GFP_ZERO);
+ if (!hiraidq->cqes)
+ return -ENOMEM;
+
+ hiraidq->sq_cmds = dma_alloc_coherent(hdev->dev, SQ_SIZE(qid, depth),
+ &hiraidq->sq_buffer_phy, GFP_KERNEL);
+ if (!hiraidq->sq_cmds) {
+ ret = -ENOMEM;
+ goto free_cqes;
+ }
+
+ /*
+ * if single hw queue, we do not need to alloc sense buffer for every queue,
+ * we have alloced all on hiraid_alloc_resources.
+ */
+ if (work_mode)
+ goto initq;
+
+ /* alloc sense buffer */
+ hiraidq->sense_buffer_virt = dma_alloc_coherent(hdev->dev, SENSE_SIZE(depth),
+ &hiraidq->sense_buffer_phy, GFP_KERNEL | __GFP_ZERO);
+ if (!hiraidq->sense_buffer_virt) {
+ ret = -ENOMEM;
+ goto free_sq_cmds;
+ }
+
+initq:
+ spin_lock_init(&hiraidq->sq_lock);
+ spin_lock_init(&hiraidq->cq_lock);
+ hiraidq->hdev = hdev;
+ hiraidq->q_depth = depth;
+ hiraidq->qid = qid;
+ hiraidq->cq_vector = -1;
+ hdev->queue_count++;
+
+ return 0;
+
+free_sq_cmds:
+ dma_free_coherent(hdev->dev, SQ_SIZE(qid, depth), (void *)hiraidq->sq_cmds,
+ hiraidq->sq_buffer_phy);
+free_cqes:
+ dma_free_coherent(hdev->dev, CQ_SIZE(depth), (void *)hiraidq->cqes,
+ hiraidq->cq_buffer_phy);
+ return ret;
+}
+
+static int hiraid_wait_control_ready(struct hiraid_dev *hdev, u64 cap, bool enabled)
+{
+ unsigned long timeout =
+ ((HIRAID_CAP_TIMEOUT(cap) + 1) * HIRAID_CAP_TIMEOUT_UNIT_MS) + jiffies;
+ u32 bit = enabled ? HIRAID_CSTS_RDY : 0;
+
+ while ((readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_RDY) != bit) {
+ usleep_range(1000, 2000);
+ if (fatal_signal_pending(current))
+ return -EINTR;
+
+ if (time_after(jiffies, timeout)) {
+ dev_err(hdev->dev, "device not ready; aborting %s\n",
+ enabled ? "initialisation" : "reset");
+ return -ENODEV;
+ }
+ }
+ return 0;
+}
+
+static int hiraid_shutdown_control(struct hiraid_dev *hdev)
+{
+ unsigned long timeout = le32_to_cpu(hdev->ctrl_info->rtd3e) / 1000000 * HZ + jiffies;
+
+ hdev->ctrl_config &= ~HIRAID_CC_SHN_MASK;
+ hdev->ctrl_config |= HIRAID_CC_SHN_NORMAL;
+ writel(hdev->ctrl_config, hdev->bar + HIRAID_REG_CC);
+
+ while ((readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_SHST_MASK) !=
+ HIRAID_CSTS_SHST_CMPLT) {
+ msleep(100);
+ if (fatal_signal_pending(current))
+ return -EINTR;
+ if (time_after(jiffies, timeout)) {
+ dev_err(hdev->dev, "device shutdown incomplete, abort shutdown\n");
+ return -ENODEV;
+ }
+ }
+ return 0;
+}
+
+static int hiraid_disable_control(struct hiraid_dev *hdev)
+{
+ hdev->ctrl_config &= ~HIRAID_CC_SHN_MASK;
+ hdev->ctrl_config &= ~HIRAID_CC_ENABLE;
+ writel(hdev->ctrl_config, hdev->bar + HIRAID_REG_CC);
+
+ return hiraid_wait_control_ready(hdev, hdev->cap, false);
+}
+
+static int hiraid_enable_control(struct hiraid_dev *hdev)
+{
+ u64 cap = hdev->cap;
+ u32 dev_page_min = HIRAID_CAP_MPSMIN(cap) + 12;
+ u32 page_shift = PAGE_SHIFT;
+
+ if (page_shift < dev_page_min) {
+ dev_err(hdev->dev, "minimum device page size[%u], too large for host[%u]\n",
+ 1U << dev_page_min, 1U << page_shift);
+ return -ENODEV;
+ }
+
+ page_shift = min_t(unsigned int, HIRAID_CAP_MPSMAX(cap) + 12, PAGE_SHIFT);
+ hdev->page_size = 1U << page_shift;
+
+ hdev->ctrl_config = HIRAID_CC_CSS_NVM;
+ hdev->ctrl_config |= (page_shift - 12) << HIRAID_CC_MPS_SHIFT;
+ hdev->ctrl_config |= HIRAID_CC_AMS_RR | HIRAID_CC_SHN_NONE;
+ hdev->ctrl_config |= HIRAID_CC_IOSQES | HIRAID_CC_IOCQES;
+ hdev->ctrl_config |= HIRAID_CC_ENABLE;
+ writel(hdev->ctrl_config, hdev->bar + HIRAID_REG_CC);
+
+ return hiraid_wait_control_ready(hdev, cap, true);
+}
+
+static void hiraid_init_queue(struct hiraid_queue *hiraidq, u16 qid)
+{
+ struct hiraid_dev *hdev = hiraidq->hdev;
+
+ memset((void *)hiraidq->cqes, 0, CQ_SIZE(hiraidq->q_depth));
+
+ hiraidq->sq_tail = 0;
+ hiraidq->cq_head = 0;
+ hiraidq->cq_phase = 1;
+ hiraidq->q_db = &hdev->dbs[qid * 2 * hdev->db_stride];
+ hiraidq->prp_small_pool = hdev->prp_extra_pool[qid % extra_pool_num];
+ hdev->online_queues++;
+ atomic_set(&hiraidq->inflight, 0);
+}
+
+static inline bool hiraid_cqe_pending(struct hiraid_queue *hiraidq)
+{
+ return (le16_to_cpu(hiraidq->cqes[hiraidq->cq_head].status) & 1) ==
+ hiraidq->cq_phase;
+}
+
+static void hiraid_complete_io_cmnd(struct hiraid_queue *ioq, struct hiraid_completion *cqe)
+{
+ struct hiraid_dev *hdev = ioq->hdev;
+ struct blk_mq_tags *tags;
+ struct scsi_cmnd *scmd;
+ struct hiraid_mapmange *mapbuf;
+ struct request *req;
+ unsigned long elapsed;
+
+ atomic_dec(&ioq->inflight);
+
+ if (work_mode)
+ tags = hdev->shost->tag_set.tags[0];
+ else
+ tags = hdev->shost->tag_set.tags[ioq->qid - 1];
+ req = blk_mq_tag_to_rq(tags, le16_to_cpu(cqe->cmd_id));
+ if (unlikely(!req || !blk_mq_request_started(req))) {
+ dev_warn(hdev->dev, "invalid id[%d] completed on queue[%d]\n",
+ le16_to_cpu(cqe->cmd_id), ioq->qid);
+ return;
+ }
+
+ scmd = blk_mq_rq_to_pdu(req);
+ mapbuf = scsi_cmd_priv(scmd);
+
+ elapsed = jiffies - scmd->jiffies_at_alloc;
+ dev_log_dbg(hdev->dev, "cid[%d] qid[%d] finish IO cost %3ld.%3ld seconds\n",
+ le16_to_cpu(cqe->cmd_id), ioq->qid, elapsed / HZ, elapsed % HZ);
+
+ if (cmpxchg(&mapbuf->state, CMD_FLIGHT, CMD_COMPLETE) != CMD_FLIGHT) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d] enters abnormal handler, cost %3ld.%3ld seconds\n",
+ le16_to_cpu(cqe->cmd_id), ioq->qid, elapsed / HZ, elapsed % HZ);
+ WRITE_ONCE(mapbuf->state, CMD_TMO_COMPLETE);
+
+ if (mapbuf->sge_cnt) {
+ mapbuf->sge_cnt = 0;
+ scsi_dma_unmap(scmd);
+ }
+ hiraid_free_mapbuf(hdev, mapbuf);
+
+ return;
+ }
+
+ hiraid_check_status(mapbuf, scmd, cqe);
+ if (mapbuf->sge_cnt) {
+ mapbuf->sge_cnt = 0;
+ scsi_dma_unmap(scmd);
+ }
+ hiraid_free_mapbuf(hdev, mapbuf);
+ scmd->scsi_done(scmd);
+}
+
+static void hiraid_complete_admin_cmnd(struct hiraid_queue *adminq, struct hiraid_completion *cqe)
+{
+ struct hiraid_dev *hdev = adminq->hdev;
+ struct hiraid_cmd *adm_cmd;
+
+ adm_cmd = hdev->adm_cmds + le16_to_cpu(cqe->cmd_id);
+ if (unlikely(adm_cmd->state == CMD_IDLE)) {
+ dev_warn(adminq->hdev->dev, "invalid id[%d] completed on queue[%d]\n",
+ le16_to_cpu(cqe->cmd_id), le16_to_cpu(cqe->sq_id));
+ return;
+ }
+
+ adm_cmd->status = le16_to_cpu(cqe->status) >> 1;
+ adm_cmd->result0 = le32_to_cpu(cqe->result);
+ adm_cmd->result1 = le32_to_cpu(cqe->result1);
+
+ complete(&adm_cmd->cmd_done);
+}
+
+static void hiraid_send_async_event(struct hiraid_dev *hdev, u16 cid);
+
+static void hiraid_complete_async_event(struct hiraid_queue *hiraidq, struct hiraid_completion *cqe)
+{
+ struct hiraid_dev *hdev = hiraidq->hdev;
+ u32 result = le32_to_cpu(cqe->result);
+
+ dev_info(hdev->dev, "recv async event, cid[%d] status[0x%x] result[0x%x]\n",
+ le16_to_cpu(cqe->cmd_id), le16_to_cpu(cqe->status) >> 1, result);
+
+ hiraid_send_async_event(hdev, le16_to_cpu(cqe->cmd_id));
+
+ if ((le16_to_cpu(cqe->status) >> 1) != HIRAID_SC_SUCCESS)
+ return;
+ switch (result & 0x7) {
+ case HIRAID_ASYN_EVENT_NOTICE:
+ hiraid_handle_async_notice(hdev, result);
+ break;
+ case HIRAID_ASYN_EVENT_VS:
+ hiraid_handle_async_vs(hdev, result, le32_to_cpu(cqe->result1));
+ break;
+ default:
+ dev_warn(hdev->dev, "unsupported async event type[%u]\n", result & 0x7);
+ break;
+ }
+}
+
+static void hiraid_complete_pthru_cmnd(struct hiraid_queue *ioq, struct hiraid_completion *cqe)
+{
+ struct hiraid_dev *hdev = ioq->hdev;
+ struct hiraid_cmd *ptcmd;
+
+ ptcmd = hdev->io_ptcmds + (ioq->qid - 1) * HIRAID_PTHRU_CMDS_PERQ +
+ le16_to_cpu(cqe->cmd_id) - hdev->scsi_qd;
+
+ ptcmd->status = le16_to_cpu(cqe->status) >> 1;
+ ptcmd->result0 = le32_to_cpu(cqe->result);
+ ptcmd->result1 = le32_to_cpu(cqe->result1);
+
+ complete(&ptcmd->cmd_done);
+}
+
+static inline void hiraid_handle_cqe(struct hiraid_queue *hiraidq, u16 idx)
+{
+ struct hiraid_completion *cqe = &hiraidq->cqes[idx];
+ struct hiraid_dev *hdev = hiraidq->hdev;
+ u16 cid = le16_to_cpu(cqe->cmd_id);
+
+ if (unlikely(!work_mode && (cid >= hiraidq->q_depth))) {
+ dev_err(hdev->dev, "invalid command id[%d] completed on queue[%d]\n",
+ cid, cqe->sq_id);
+ return;
+ }
+
+ dev_log_dbg(hdev->dev, "cid[%d] qid[%d] result[0x%x] sqid[%d] status[0x%x]\n",
+ cid, hiraidq->qid, le32_to_cpu(cqe->result),
+ le16_to_cpu(cqe->sq_id), le16_to_cpu(cqe->status));
+
+ if (unlikely(hiraidq->qid == 0 && cid >= HIRAID_AQ_BLK_MQ_DEPTH)) {
+ hiraid_complete_async_event(hiraidq, cqe);
+ return;
+ }
+
+ if (unlikely(hiraidq->qid && cid >= hdev->scsi_qd)) {
+ hiraid_complete_pthru_cmnd(hiraidq, cqe);
+ return;
+ }
+
+ if (hiraidq->qid)
+ hiraid_complete_io_cmnd(hiraidq, cqe);
+ else
+ hiraid_complete_admin_cmnd(hiraidq, cqe);
+}
+
+static void hiraid_complete_cqes(struct hiraid_queue *hiraidq, u16 start, u16 end)
+{
+ while (start != end) {
+ hiraid_handle_cqe(hiraidq, start);
+ if (++start == hiraidq->q_depth)
+ start = 0;
+ }
+}
+
+static inline void hiraid_update_cq_head(struct hiraid_queue *hiraidq)
+{
+ if (++hiraidq->cq_head == hiraidq->q_depth) {
+ hiraidq->cq_head = 0;
+ hiraidq->cq_phase = !hiraidq->cq_phase;
+ }
+}
+
+static inline bool hiraid_process_cq(struct hiraid_queue *hiraidq, u16 *start, u16 *end, int tag)
+{
+ bool found = false;
+
+ *start = hiraidq->cq_head;
+ while (!found && hiraid_cqe_pending(hiraidq)) {
+ if (le16_to_cpu(hiraidq->cqes[hiraidq->cq_head].cmd_id) == tag)
+ found = true;
+ hiraid_update_cq_head(hiraidq);
+ }
+ *end = hiraidq->cq_head;
+
+ if (*start != *end)
+ writel(hiraidq->cq_head, hiraidq->q_db + hiraidq->hdev->db_stride);
+
+ return found;
+}
+
+static bool hiraid_poll_cq(struct hiraid_queue *hiraidq, int cid)
+{
+ u16 start, end;
+ bool found;
+
+ if (!hiraid_cqe_pending(hiraidq))
+ return 0;
+
+ spin_lock_irq(&hiraidq->cq_lock);
+ found = hiraid_process_cq(hiraidq, &start, &end, cid);
+ spin_unlock_irq(&hiraidq->cq_lock);
+
+ hiraid_complete_cqes(hiraidq, start, end);
+ return found;
+}
+
+static irqreturn_t hiraid_handle_irq(int irq, void *data)
+{
+ struct hiraid_queue *hiraidq = data;
+ irqreturn_t ret = IRQ_NONE;
+ u16 start, end;
+
+ spin_lock(&hiraidq->cq_lock);
+ if (hiraidq->cq_head != hiraidq->last_cq_head)
+ ret = IRQ_HANDLED;
+
+ hiraid_process_cq(hiraidq, &start, &end, -1);
+ hiraidq->last_cq_head = hiraidq->cq_head;
+ spin_unlock(&hiraidq->cq_lock);
+
+ if (start != end) {
+ hiraid_complete_cqes(hiraidq, start, end);
+ ret = IRQ_HANDLED;
+ }
+ return ret;
+}
+
+static int hiraid_setup_admin_queue(struct hiraid_dev *hdev)
+{
+ struct hiraid_queue *adminq = &hdev->queues[0];
+ u32 aqa;
+ int ret;
+
+ dev_info(hdev->dev, "start disable controller\n");
+
+ ret = hiraid_disable_control(hdev);
+ if (ret)
+ return ret;
+
+ ret = hiraid_alloc_queue(hdev, 0, HIRAID_AQ_DEPTH);
+ if (ret)
+ return ret;
+
+ aqa = adminq->q_depth - 1;
+ aqa |= aqa << 16;
+ writel(aqa, hdev->bar + HIRAID_REG_AQA);
+ lo_hi_writeq(adminq->sq_buffer_phy, hdev->bar + HIRAID_REG_ASQ);
+ lo_hi_writeq(adminq->cq_buffer_phy, hdev->bar + HIRAID_REG_ACQ);
+
+ dev_info(hdev->dev, "start enable controller\n");
+
+ ret = hiraid_enable_control(hdev);
+ if (ret) {
+ ret = -ENODEV;
+ return ret;
+ }
+
+ adminq->cq_vector = 0;
+ ret = pci_request_irq(hdev->pdev, adminq->cq_vector, hiraid_handle_irq, NULL,
+ adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
+ if (ret) {
+ adminq->cq_vector = -1;
+ return ret;
+ }
+
+ hiraid_init_queue(adminq, 0);
+
+ dev_info(hdev->dev, "setup admin queue success, queuecount[%d] online[%d] pagesize[%d]\n",
+ hdev->queue_count, hdev->online_queues, hdev->page_size);
+
+ return 0;
+}
+
+static u32 hiraid_get_bar_size(struct hiraid_dev *hdev, u32 nr_ioqs)
+{
+ return (HIRAID_REG_DBS + ((nr_ioqs + 1) * 8 * hdev->db_stride));
+}
+
+static int hiraid_create_admin_cmds(struct hiraid_dev *hdev)
+{
+ u16 i;
+
+ INIT_LIST_HEAD(&hdev->adm_cmd_list);
+ spin_lock_init(&hdev->adm_cmd_lock);
+
+ hdev->adm_cmds = kcalloc_node(HIRAID_AQ_BLK_MQ_DEPTH, sizeof(struct hiraid_cmd),
+ GFP_KERNEL, hdev->numa_node);
+
+ if (!hdev->adm_cmds) {
+ dev_err(hdev->dev, "alloc admin cmds failed\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < HIRAID_AQ_BLK_MQ_DEPTH; i++) {
+ hdev->adm_cmds[i].qid = 0;
+ hdev->adm_cmds[i].cid = i;
+ list_add_tail(&(hdev->adm_cmds[i].list), &hdev->adm_cmd_list);
+ }
+
+ dev_info(hdev->dev, "alloc admin cmds success, num[%d]\n", HIRAID_AQ_BLK_MQ_DEPTH);
+
+ return 0;
+}
+
+static void hiraid_free_admin_cmds(struct hiraid_dev *hdev)
+{
+ kfree(hdev->adm_cmds);
+ hdev->adm_cmds = NULL;
+ INIT_LIST_HEAD(&hdev->adm_cmd_list);
+}
+
+static struct hiraid_cmd *hiraid_get_cmd(struct hiraid_dev *hdev, enum hiraid_cmd_type type)
+{
+ struct hiraid_cmd *cmd = NULL;
+ unsigned long flags;
+ struct list_head *head = &hdev->adm_cmd_list;
+ spinlock_t *slock = &hdev->adm_cmd_lock;
+
+ if (type == HIRAID_CMD_PTHRU) {
+ head = &hdev->io_pt_list;
+ slock = &hdev->io_pt_lock;
+ }
+
+ spin_lock_irqsave(slock, flags);
+ if (list_empty(head)) {
+ spin_unlock_irqrestore(slock, flags);
+ dev_err(hdev->dev, "err, cmd[%d] list empty\n", type);
+ return NULL;
+ }
+ cmd = list_entry(head->next, struct hiraid_cmd, list);
+ list_del_init(&cmd->list);
+ spin_unlock_irqrestore(slock, flags);
+
+ WRITE_ONCE(cmd->state, CMD_FLIGHT);
+
+ return cmd;
+}
+
+static void hiraid_put_cmd(struct hiraid_dev *hdev, struct hiraid_cmd *cmd,
+ enum hiraid_cmd_type type)
+{
+ unsigned long flags;
+ struct list_head *head = &hdev->adm_cmd_list;
+ spinlock_t *slock = &hdev->adm_cmd_lock;
+
+ if (type == HIRAID_CMD_PTHRU) {
+ head = &hdev->io_pt_list;
+ slock = &hdev->io_pt_lock;
+ }
+
+ spin_lock_irqsave(slock, flags);
+ WRITE_ONCE(cmd->state, CMD_IDLE);
+ list_add_tail(&cmd->list, head);
+ spin_unlock_irqrestore(slock, flags);
+}
+
+static bool hiraid_admin_need_reset(struct hiraid_admin_command *cmd)
+{
+ switch (cmd->common.opcode) {
+ case HIRAID_ADMIN_DELETE_SQ:
+ case HIRAID_ADMIN_CREATE_SQ:
+ case HIRAID_ADMIN_DELETE_CQ:
+ case HIRAID_ADMIN_CREATE_CQ:
+ case HIRAID_ADMIN_SET_FEATURES:
+ return false;
+ default:
+ return true;
+ }
+}
+
+static int hiraid_reset_work_sync(struct hiraid_dev *hdev);
+static inline void hiraid_admin_timeout(struct hiraid_dev *hdev, struct hiraid_cmd *cmd)
+{
+ /* command may be returned because controller reset */
+ if (READ_ONCE(cmd->state) == CMD_COMPLETE)
+ return;
+ if (hiraid_reset_work_sync(hdev) == -EBUSY)
+ flush_work(&hdev->reset_work);
+}
+
+static int hiraid_put_admin_sync_request(struct hiraid_dev *hdev, struct hiraid_admin_command *cmd,
+ u32 *result0, u32 *result1, u32 timeout)
+{
+ struct hiraid_cmd *adm_cmd = hiraid_get_cmd(hdev, HIRAID_CMD_ADMIN);
+
+ if (!adm_cmd) {
+ dev_err(hdev->dev, "err, get admin cmd failed\n");
+ return -EFAULT;
+ }
+
+ timeout = timeout ? timeout : ADMIN_TIMEOUT;
+
+ init_completion(&adm_cmd->cmd_done);
+
+ cmd->common.cmd_id = cpu_to_le16(adm_cmd->cid);
+ hiraid_submit_cmd(&hdev->queues[0], cmd);
+
+ if (!wait_for_completion_timeout(&adm_cmd->cmd_done, timeout)) {
+ dev_err(hdev->dev, "cid[%d] qid[%d] timeout, opcode[0x%x] subopcode[0x%x]\n",
+ adm_cmd->cid, adm_cmd->qid, cmd->usr_cmd.opcode,
+ cmd->usr_cmd.info_0.subopcode);
+
+ /* reset controller if admin timeout */
+ if (hiraid_admin_need_reset(cmd))
+ hiraid_admin_timeout(hdev, adm_cmd);
+
+ hiraid_put_cmd(hdev, adm_cmd, HIRAID_CMD_ADMIN);
+ return -ETIME;
+ }
+
+ if (result0)
+ *result0 = adm_cmd->result0;
+ if (result1)
+ *result1 = adm_cmd->result1;
+
+ hiraid_put_cmd(hdev, adm_cmd, HIRAID_CMD_ADMIN);
+
+ return adm_cmd->status;
+}
+
+/**
+ * hiraid_create_cq - send cmd to controller for create controller cq
+ */
+static int hiraid_create_complete_queue(struct hiraid_dev *hdev, u16 qid,
+ struct hiraid_queue *hiraidq, u16 cq_vector)
+{
+ struct hiraid_admin_command admin_cmd;
+ int flags = HIRAID_QUEUE_PHYS_CONTIG | HIRAID_CQ_IRQ_ENABLED;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.create_cq.opcode = HIRAID_ADMIN_CREATE_CQ;
+ admin_cmd.create_cq.prp1 = cpu_to_le64(hiraidq->cq_buffer_phy);
+ admin_cmd.create_cq.cqid = cpu_to_le16(qid);
+ admin_cmd.create_cq.qsize = cpu_to_le16(hiraidq->q_depth - 1);
+ admin_cmd.create_cq.cq_flags = cpu_to_le16(flags);
+ admin_cmd.create_cq.irq_vector = cpu_to_le16(cq_vector);
+
+ return hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+}
+
+/**
+ * hiraid_create_sq - send cmd to controller for create controller sq
+ */
+static int hiraid_create_send_queue(struct hiraid_dev *hdev, u16 qid,
+ struct hiraid_queue *hiraidq)
+{
+ struct hiraid_admin_command admin_cmd;
+ int flags = HIRAID_QUEUE_PHYS_CONTIG;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.create_sq.opcode = HIRAID_ADMIN_CREATE_SQ;
+ admin_cmd.create_sq.prp1 = cpu_to_le64(hiraidq->sq_buffer_phy);
+ admin_cmd.create_sq.sqid = cpu_to_le16(qid);
+ admin_cmd.create_sq.qsize = cpu_to_le16(hiraidq->q_depth - 1);
+ admin_cmd.create_sq.sq_flags = cpu_to_le16(flags);
+ admin_cmd.create_sq.cqid = cpu_to_le16(qid);
+
+ return hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+}
+
+static void hiraid_free_all_queues(struct hiraid_dev *hdev)
+{
+ int i;
+ struct hiraid_queue *hq;
+
+ for (i = 0; i < hdev->queue_count; i++) {
+ hq = &hdev->queues[i];
+ dma_free_coherent(hdev->dev, CQ_SIZE(hq->q_depth),
+ (void *)hq->cqes, hq->cq_buffer_phy);
+ dma_free_coherent(hdev->dev, SQ_SIZE(hq->qid, hq->q_depth),
+ hq->sq_cmds, hq->sq_buffer_phy);
+ if (!work_mode)
+ dma_free_coherent(hdev->dev, SENSE_SIZE(hq->q_depth),
+ hq->sense_buffer_virt, hq->sense_buffer_phy);
+ }
+
+ hdev->queue_count = 0;
+}
+
+static void hiraid_free_sense_buffer(struct hiraid_dev *hdev)
+{
+ if (hdev->sense_buffer_virt) {
+ dma_free_coherent(hdev->dev,
+ SENSE_SIZE(hdev->scsi_qd + max_hwq_num * HIRAID_PTHRU_CMDS_PERQ),
+ hdev->sense_buffer_virt, hdev->sense_buffer_phy);
+ hdev->sense_buffer_virt = NULL;
+ }
+}
+
+static int hiraid_delete_queue(struct hiraid_dev *hdev, u8 opcode, u16 qid)
+{
+ struct hiraid_admin_command admin_cmd;
+ int ret;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.delete_queue.opcode = opcode;
+ admin_cmd.delete_queue.qid = cpu_to_le16(qid);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+
+ if (ret)
+ dev_err(hdev->dev, "delete %s:[%d] failed\n",
+ (opcode == HIRAID_ADMIN_DELETE_CQ) ? "cq" : "sq", qid);
+
+ return ret;
+}
+
+static int hiraid_delete_complete_queue(struct hiraid_dev *hdev, u16 cqid)
+{
+ return hiraid_delete_queue(hdev, HIRAID_ADMIN_DELETE_CQ, cqid);
+}
+
+static int hiraid_delete_send_queue(struct hiraid_dev *hdev, u16 sqid)
+{
+ return hiraid_delete_queue(hdev, HIRAID_ADMIN_DELETE_SQ, sqid);
+}
+
+static int hiraid_create_queue(struct hiraid_queue *hiraidq, u16 qid)
+{
+ struct hiraid_dev *hdev = hiraidq->hdev;
+ u16 cq_vector;
+ int ret;
+
+ cq_vector = (hdev->num_vecs == 1) ? 0 : qid;
+ ret = hiraid_create_complete_queue(hdev, qid, hiraidq, cq_vector);
+ if (ret)
+ return ret;
+
+ ret = hiraid_create_send_queue(hdev, qid, hiraidq);
+ if (ret)
+ goto delete_cq;
+
+ hiraidq->cq_vector = cq_vector;
+ ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_handle_irq, NULL,
+ hiraidq, "hiraid%d_q%d", hdev->instance, qid);
+ if (ret) {
+ hiraidq->cq_vector = -1;
+ dev_err(hdev->dev, "request queue[%d] irq failed\n", qid);
+ goto delete_sq;
+ }
+
+ hiraid_init_queue(hiraidq, qid);
+
+ return 0;
+
+delete_sq:
+ hiraid_delete_send_queue(hdev, qid);
+delete_cq:
+ hiraid_delete_complete_queue(hdev, qid);
+
+ return ret;
+}
+
+static int hiraid_create_io_queues(struct hiraid_dev *hdev)
+{
+ u32 i, max;
+ int ret = 0;
+
+ max = min(hdev->max_qid, hdev->queue_count - 1);
+ for (i = hdev->online_queues; i <= max; i++) {
+ ret = hiraid_create_queue(&hdev->queues[i], i);
+ if (ret) {
+ dev_err(hdev->dev, "create queue[%d] failed\n", i);
+ break;
+ }
+ }
+
+ if (!hdev->last_qcnt)
+ hdev->last_qcnt = hdev->online_queues;
+
+ dev_info(hdev->dev, "queue_count[%d] online_queue[%d] last_online[%d]",
+ hdev->queue_count, hdev->online_queues, hdev->last_qcnt);
+
+ return ret >= 0 ? 0 : ret;
+}
+
+static int hiraid_set_features(struct hiraid_dev *hdev, u32 fid, u32 dword11, void *buffer,
+ size_t buflen, u32 *result)
+{
+ struct hiraid_admin_command admin_cmd;
+ int ret;
+ u8 *data_ptr = NULL;
+ dma_addr_t buffer_phy = 0;
+
+ if (buffer && buflen) {
+ data_ptr = dma_alloc_coherent(hdev->dev, buflen, &buffer_phy, GFP_KERNEL);
+ if (!data_ptr)
+ return -ENOMEM;
+
+ memcpy(data_ptr, buffer, buflen);
+ }
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.features.opcode = HIRAID_ADMIN_SET_FEATURES;
+ admin_cmd.features.fid = cpu_to_le32(fid);
+ admin_cmd.features.dword11 = cpu_to_le32(dword11);
+ admin_cmd.common.dptr.prp1 = cpu_to_le64(buffer_phy);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, result, NULL, 0);
+
+ if (data_ptr)
+ dma_free_coherent(hdev->dev, buflen, data_ptr, buffer_phy);
+
+ return ret;
+}
+
+static int hiraid_configure_timestamp(struct hiraid_dev *hdev)
+{
+ __le64 timestamp;
+ int ret;
+
+ timestamp = cpu_to_le64(ktime_to_ms(ktime_get_real()));
+ ret = hiraid_set_features(hdev, HIRAID_FEATURE_TIMESTAMP, 0,
+ ×tamp, sizeof(timestamp), NULL);
+
+ if (ret)
+ dev_err(hdev->dev, "set timestamp failed[%d]\n", ret);
+ return ret;
+}
+
+static int hiraid_get_queue_cnt(struct hiraid_dev *hdev, u32 *cnt)
+{
+ u32 q_cnt = (*cnt - 1) | ((*cnt - 1) << 16);
+ u32 nr_ioqs, result;
+ int status;
+
+ status = hiraid_set_features(hdev, HIRAID_FEATURE_NUM_QUEUES, q_cnt, NULL, 0, &result);
+ if (status) {
+ dev_err(hdev->dev, "set queue count failed, status[%d]\n",
+ status);
+ return -EIO;
+ }
+
+ nr_ioqs = min(result & 0xffff, result >> 16) + 1;
+ *cnt = min(*cnt, nr_ioqs);
+ if (*cnt == 0) {
+ dev_err(hdev->dev, "illegal qcount: zero, nr_ioqs[%d], cnt[%d]\n", nr_ioqs, *cnt);
+ return -EIO;
+ }
+ return 0;
+}
+
+static int hiraid_setup_io_queues(struct hiraid_dev *hdev)
+{
+ struct hiraid_queue *adminq = &hdev->queues[0];
+ struct pci_dev *pdev = hdev->pdev;
+ u32 i, size, nr_ioqs;
+ int ret;
+
+ struct irq_affinity affd = {
+ .pre_vectors = 1
+ };
+
+ /* alloc IO sense buffer for single hw queue mode */
+ if (work_mode && !hdev->sense_buffer_virt) {
+ hdev->sense_buffer_virt = dma_alloc_coherent(hdev->dev,
+ SENSE_SIZE(hdev->scsi_qd + max_hwq_num * HIRAID_PTHRU_CMDS_PERQ),
+ &hdev->sense_buffer_phy, GFP_KERNEL | __GFP_ZERO);
+ if (!hdev->sense_buffer_virt)
+ return -ENOMEM;
+ }
+
+ nr_ioqs = min(num_online_cpus(), max_hwq_num);
+ ret = hiraid_get_queue_cnt(hdev, &nr_ioqs);
+ if (ret < 0)
+ return ret;
+
+ size = hiraid_get_bar_size(hdev, nr_ioqs);
+ ret = hiraid_remap_bar(hdev, size);
+ if (ret)
+ return -ENOMEM;
+
+ adminq->q_db = hdev->dbs;
+
+ pci_free_irq(pdev, 0, adminq);
+ pci_free_irq_vectors(pdev);
+ hdev->online_queues--;
+
+ ret = pci_alloc_irq_vectors_affinity(pdev, 1, (nr_ioqs + 1),
+ PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
+ if (ret <= 0)
+ return -EIO;
+
+ hdev->num_vecs = ret;
+ hdev->max_qid = max(ret - 1, 1);
+
+ ret = pci_request_irq(pdev, adminq->cq_vector, hiraid_handle_irq, NULL,
+ adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
+ if (ret) {
+ dev_err(hdev->dev, "request admin irq failed\n");
+ adminq->cq_vector = -1;
+ return ret;
+ }
+
+ hdev->online_queues++;
+
+ for (i = hdev->queue_count; i <= hdev->max_qid; i++) {
+ ret = hiraid_alloc_queue(hdev, i, hdev->ioq_depth);
+ if (ret)
+ break;
+ }
+ dev_info(hdev->dev, "max_qid[%d] queuecount[%d] onlinequeue[%d] ioqdepth[%d]\n",
+ hdev->max_qid, hdev->queue_count, hdev->online_queues, hdev->ioq_depth);
+
+ return hiraid_create_io_queues(hdev);
+}
+
+static void hiraid_delete_io_queues(struct hiraid_dev *hdev)
+{
+ u16 queues = hdev->online_queues - 1;
+ u8 opcode = HIRAID_ADMIN_DELETE_SQ;
+ u16 i, pass;
+
+ if (!pci_device_is_present(hdev->pdev)) {
+ dev_err(hdev->dev, "pci_device is not present, skip disable io queues\n");
+ return;
+ }
+
+ if (hdev->online_queues < 2) {
+ dev_err(hdev->dev, "err, io queue has been delete\n");
+ return;
+ }
+
+ for (pass = 0; pass < 2; pass++) {
+ for (i = queues; i > 0; i--)
+ if (hiraid_delete_queue(hdev, opcode, i))
+ break;
+
+ opcode = HIRAID_ADMIN_DELETE_CQ;
+ }
+}
+
+static void hiraid_pci_disable(struct hiraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ u32 i;
+
+ for (i = 0; i < hdev->online_queues; i++)
+ pci_free_irq(pdev, hdev->queues[i].cq_vector, &hdev->queues[i]);
+ pci_free_irq_vectors(pdev);
+ if (pci_is_enabled(pdev)) {
+ pci_disable_pcie_error_reporting(pdev);
+ pci_disable_device(pdev);
+ }
+ hdev->online_queues = 0;
+}
+
+static void hiraid_disable_admin_queue(struct hiraid_dev *hdev, bool shutdown)
+{
+ struct hiraid_queue *adminq = &hdev->queues[0];
+ u16 start, end;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ if (shutdown)
+ hiraid_shutdown_control(hdev);
+ else
+ hiraid_disable_control(hdev);
+ }
+
+ if (hdev->queue_count == 0) {
+ dev_err(hdev->dev, "err, admin queue has been delete\n");
+ return;
+ }
+
+ spin_lock_irq(&adminq->cq_lock);
+ hiraid_process_cq(adminq, &start, &end, -1);
+ spin_unlock_irq(&adminq->cq_lock);
+ hiraid_complete_cqes(adminq, start, end);
+}
+
+static int hiraid_create_prp_pools(struct hiraid_dev *hdev)
+{
+ int i;
+ char poolname[20] = { 0 };
+
+ hdev->prp_page_pool = dma_pool_create("prp list page", hdev->dev,
+ PAGE_SIZE, PAGE_SIZE, 0);
+
+ if (!hdev->prp_page_pool) {
+ dev_err(hdev->dev, "create prp_page_pool failed\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < extra_pool_num; i++) {
+ sprintf(poolname, "prp_list_256_%d", i);
+ hdev->prp_extra_pool[i] = dma_pool_create(poolname, hdev->dev, EXTRA_POOL_SIZE,
+ EXTRA_POOL_SIZE, 0);
+
+ if (!hdev->prp_extra_pool[i]) {
+ dev_err(hdev->dev, "create prp extra pool[%d] failed\n", i);
+ goto destroy_prp_extra_pool;
+ }
+ }
+
+ return 0;
+
+destroy_prp_extra_pool:
+ while (i > 0)
+ dma_pool_destroy(hdev->prp_extra_pool[--i]);
+ dma_pool_destroy(hdev->prp_page_pool);
+
+ return -ENOMEM;
+}
+
+static void hiraid_free_prp_pools(struct hiraid_dev *hdev)
+{
+ int i;
+
+ for (i = 0; i < extra_pool_num; i++)
+ dma_pool_destroy(hdev->prp_extra_pool[i]);
+ dma_pool_destroy(hdev->prp_page_pool);
+}
+
+static int hiraid_request_devices(struct hiraid_dev *hdev, struct hiraid_dev_info *dev)
+{
+ u32 nd = le32_to_cpu(hdev->ctrl_info->nd);
+ struct hiraid_admin_command admin_cmd;
+ struct hiraid_dev_list *list_buf;
+ dma_addr_t buffer_phy = 0;
+ u32 i, idx, hdid, ndev;
+ int ret = 0;
+
+ list_buf = dma_alloc_coherent(hdev->dev, PAGE_SIZE, &buffer_phy, GFP_KERNEL);
+ if (!list_buf)
+ return -ENOMEM;
+
+ for (idx = 0; idx < nd;) {
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.get_info.opcode = HIRAID_ADMIN_GET_INFO;
+ admin_cmd.get_info.type = HIRAID_GET_DEVLIST_INFO;
+ admin_cmd.get_info.cdw11 = cpu_to_le32(idx);
+ admin_cmd.common.dptr.prp1 = cpu_to_le64(buffer_phy);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+
+ if (ret) {
+ dev_err(hdev->dev, "get device list failed, nd[%u] idx[%u] ret[%d]\n",
+ nd, idx, ret);
+ goto out;
+ }
+ ndev = le32_to_cpu(list_buf->dev_num);
+
+ dev_info(hdev->dev, "get dev list ndev num[%u]\n", ndev);
+
+ for (i = 0; i < ndev; i++) {
+ hdid = le32_to_cpu(list_buf->devinfo[i].hdid);
+ dev_info(hdev->dev, "devices[%d], hdid[%u] target[%d] channel[%d] lun[%d] attr[0x%x]\n",
+ i, hdid, le16_to_cpu(list_buf->devinfo[i].target),
+ list_buf->devinfo[i].channel,
+ list_buf->devinfo[i].lun,
+ list_buf->devinfo[i].attr);
+ if (hdid > nd || hdid == 0) {
+ dev_err(hdev->dev, "err, hdid[%d] invalid\n", hdid);
+ continue;
+ }
+ memcpy(&dev[hdid - 1], &list_buf->devinfo[i],
+ sizeof(struct hiraid_dev_info));
+ }
+ idx += ndev;
+
+ if (ndev < MAX_DEV_ENTRY_PER_PAGE_4K)
+ break;
+ }
+
+out:
+ dma_free_coherent(hdev->dev, PAGE_SIZE, list_buf, buffer_phy);
+ return ret;
+}
+
+static void hiraid_send_async_event(struct hiraid_dev *hdev, u16 cid)
+{
+ struct hiraid_queue *adminq = &hdev->queues[0];
+ struct hiraid_admin_command admin_cmd;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.common.opcode = HIRAID_ADMIN_ASYNC_EVENT;
+ admin_cmd.common.cmd_id = cpu_to_le16(cid);
+
+ hiraid_submit_cmd(adminq, &admin_cmd);
+ dev_info(hdev->dev, "send async event to controller, cid[%d]\n", cid);
+}
+
+static inline void hiraid_init_async_event(struct hiraid_dev *hdev)
+{
+ u16 i;
+
+ for (i = 0; i < hdev->ctrl_info->asynevent; i++)
+ hiraid_send_async_event(hdev, i + HIRAID_AQ_BLK_MQ_DEPTH);
+}
+
+static int hiraid_add_device(struct hiraid_dev *hdev, struct hiraid_dev_info *devinfo)
+{
+ struct Scsi_Host *shost = hdev->shost;
+ struct scsi_device *sdev;
+
+ dev_info(hdev->dev, "add device, hdid[%u] target[%d] channel[%d] lun[%d] attr[0x%x]\n",
+ le32_to_cpu(devinfo->hdid), le16_to_cpu(devinfo->target),
+ devinfo->channel, devinfo->lun, devinfo->attr);
+
+ sdev = scsi_device_lookup(shost, devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ if (sdev) {
+ dev_warn(hdev->dev, "device is already exist, channel[%d] targetid[%d] lun[%d]\n",
+ devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ scsi_device_put(sdev);
+ return -EEXIST;
+ }
+ scsi_add_device(shost, devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ return 0;
+}
+
+static int hiraid_rescan_device(struct hiraid_dev *hdev, struct hiraid_dev_info *devinfo)
+{
+ struct Scsi_Host *shost = hdev->shost;
+ struct scsi_device *sdev;
+
+ dev_info(hdev->dev, "rescan device, hdid[%u] target[%d] channel[%d] lun[%d] attr[0x%x]\n",
+ le32_to_cpu(devinfo->hdid), le16_to_cpu(devinfo->target),
+ devinfo->channel, devinfo->lun, devinfo->attr);
+
+ sdev = scsi_device_lookup(shost, devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ if (!sdev) {
+ dev_warn(hdev->dev, "device is not exit rescan it, channel[%d] target_id[%d] lun[%d]\n",
+ devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ return -ENODEV;
+ }
+
+ scsi_rescan_device(&sdev->sdev_gendev);
+ scsi_device_put(sdev);
+ return 0;
+}
+
+static int hiraid_delete_device(struct hiraid_dev *hdev, struct hiraid_dev_info *devinfo)
+{
+ struct Scsi_Host *shost = hdev->shost;
+ struct scsi_device *sdev;
+
+ dev_info(hdev->dev, "remove device, hdid[%u] target[%d] channel[%d] lun[%d] attr[0x%x]\n",
+ le32_to_cpu(devinfo->hdid), le16_to_cpu(devinfo->target),
+ devinfo->channel, devinfo->lun, devinfo->attr);
+
+ sdev = scsi_device_lookup(shost, devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ if (!sdev) {
+ dev_warn(hdev->dev, "device is not exit remove it, channel[%d] target_id[%d] lun[%d]\n",
+ devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ return -ENODEV;
+ }
+
+ scsi_remove_device(sdev);
+ scsi_device_put(sdev);
+ return 0;
+}
+
+static int hiraid_dev_list_init(struct hiraid_dev *hdev)
+{
+ u32 nd = le32_to_cpu(hdev->ctrl_info->nd);
+
+ hdev->dev_info = kzalloc_node(nd * sizeof(struct hiraid_dev_info),
+ GFP_KERNEL, hdev->numa_node);
+ if (!hdev->dev_info)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int hiraid_luntarget_sort(const void *l, const void *r)
+{
+ const struct hiraid_dev_info *ln = l;
+ const struct hiraid_dev_info *rn = r;
+ int l_attr = HIRAID_DEV_INFO_ATTR_BOOT(ln->attr);
+ int r_attr = HIRAID_DEV_INFO_ATTR_BOOT(rn->attr);
+
+ /* boot first */
+ if (l_attr != r_attr)
+ return (r_attr - l_attr);
+
+ if (ln->channel == rn->channel)
+ return le16_to_cpu(ln->target) - le16_to_cpu(rn->target);
+
+ return ln->channel - rn->channel;
+}
+
+static void hiraid_scan_work(struct work_struct *work)
+{
+ struct hiraid_dev *hdev =
+ container_of(work, struct hiraid_dev, scan_work);
+ struct hiraid_dev_info *dev, *old_dev, *new_dev;
+ u32 nd = le32_to_cpu(hdev->ctrl_info->nd);
+ u8 flag, org_flag;
+ int i, ret;
+ int count = 0;
+
+ dev = kcalloc(nd, sizeof(struct hiraid_dev_info), GFP_KERNEL);
+ if (!dev)
+ return;
+
+ new_dev = kcalloc(nd, sizeof(struct hiraid_dev_info), GFP_KERNEL);
+ if (!new_dev)
+ goto free_list;
+
+ ret = hiraid_request_devices(hdev, dev);
+ if (ret)
+ goto free_all;
+ old_dev = hdev->dev_info;
+ for (i = 0; i < nd; i++) {
+ org_flag = old_dev[i].flag;
+ flag = dev[i].flag;
+
+ dev_log_dbg(hdev->dev, "i[%d] org_flag[0x%x] flag[0x%x]\n", i, org_flag, flag);
+
+ if (HIRAID_DEV_INFO_FLAG_VALID(flag)) {
+ if (!HIRAID_DEV_INFO_FLAG_VALID(org_flag)) {
+ down_write(&hdev->dev_rwsem);
+ memcpy(&old_dev[i], &dev[i],
+ sizeof(struct hiraid_dev_info));
+ memcpy(&new_dev[count++], &dev[i],
+ sizeof(struct hiraid_dev_info));
+ up_write(&hdev->dev_rwsem);
+ } else if (HIRAID_DEV_INFO_FLAG_CHANGE(flag)) {
+ hiraid_rescan_device(hdev, &dev[i]);
+ }
+ } else {
+ if (HIRAID_DEV_INFO_FLAG_VALID(org_flag)) {
+ down_write(&hdev->dev_rwsem);
+ old_dev[i].flag &= 0xfe;
+ up_write(&hdev->dev_rwsem);
+ hiraid_delete_device(hdev, &old_dev[i]);
+ }
+ }
+ }
+
+ dev_info(hdev->dev, "scan work add device num[%d]\n", count);
+
+ sort(new_dev, count, sizeof(new_dev[0]), hiraid_luntarget_sort, NULL);
+
+ for (i = 0; i < count; i++)
+ hiraid_add_device(hdev, &new_dev[i]);
+
+free_all:
+ kfree(new_dev);
+free_list:
+ kfree(dev);
+}
+
+static void hiraid_timesyn_work(struct work_struct *work)
+{
+ struct hiraid_dev *hdev =
+ container_of(work, struct hiraid_dev, timesyn_work);
+
+ hiraid_configure_timestamp(hdev);
+}
+
+static int hiraid_init_control_info(struct hiraid_dev *hdev);
+static void hiraid_fwactive_work(struct work_struct *work)
+{
+ struct hiraid_dev *hdev = container_of(work, struct hiraid_dev, fwact_work);
+
+ if (hiraid_init_control_info(hdev))
+ dev_err(hdev->dev, "get controller info failed after fw activation\n");
+}
+
+static void hiraid_queue_scan(struct hiraid_dev *hdev)
+{
+ queue_work(work_queue, &hdev->scan_work);
+}
+
+static void hiraid_handle_async_notice(struct hiraid_dev *hdev, u32 result)
+{
+ switch ((result & 0xff00) >> 8) {
+ case HIRAID_ASYN_DEV_CHANGED:
+ hiraid_queue_scan(hdev);
+ break;
+ case HIRAID_ASYN_FW_ACT_START:
+ dev_info(hdev->dev, "fw activation starting\n");
+ break;
+ case HIRAID_ASYN_HOST_PROBING:
+ break;
+ default:
+ dev_warn(hdev->dev, "async event result[%08x]\n", result);
+ }
+}
+
+static void hiraid_handle_async_vs(struct hiraid_dev *hdev, u32 result, u32 result1)
+{
+ switch ((result & 0xff00) >> 8) {
+ case HIRAID_ASYN_TIMESYN:
+ queue_work(work_queue, &hdev->timesyn_work);
+ break;
+ case HIRAID_ASYN_FW_ACT_FINISH:
+ dev_info(hdev->dev, "fw activation finish\n");
+ queue_work(work_queue, &hdev->fwact_work);
+ break;
+ case HIRAID_ASYN_EVENT_MIN ... HIRAID_ASYN_EVENT_MAX:
+ dev_info(hdev->dev, "recv card event[%d] param1[0x%x] param2[0x%x]\n",
+ (result & 0xff00) >> 8, result, result1);
+ break;
+ default:
+ dev_warn(hdev->dev, "async event result[0x%x]\n", result);
+ }
+}
+
+static int hiraid_alloc_resources(struct hiraid_dev *hdev)
+{
+ int ret, nqueue;
+
+ hdev->ctrl_info = kzalloc_node(sizeof(*hdev->ctrl_info), GFP_KERNEL, hdev->numa_node);
+ if (!hdev->ctrl_info)
+ return -ENOMEM;
+
+ ret = hiraid_create_prp_pools(hdev);
+ if (ret)
+ goto free_ctrl_info;
+ nqueue = min(num_possible_cpus(), max_hwq_num) + 1;
+ hdev->queues = kcalloc_node(nqueue, sizeof(struct hiraid_queue),
+ GFP_KERNEL, hdev->numa_node);
+ if (!hdev->queues) {
+ ret = -ENOMEM;
+ goto destroy_dma_pools;
+ }
+
+ ret = hiraid_create_admin_cmds(hdev);
+ if (ret)
+ goto free_queues;
+
+ dev_info(hdev->dev, "total queues num[%d]\n", nqueue);
+
+ return 0;
+
+free_queues:
+ kfree(hdev->queues);
+destroy_dma_pools:
+ hiraid_free_prp_pools(hdev);
+free_ctrl_info:
+ kfree(hdev->ctrl_info);
+
+ return ret;
+}
+
+static void hiraid_free_resources(struct hiraid_dev *hdev)
+{
+ hiraid_free_admin_cmds(hdev);
+ kfree(hdev->queues);
+ hiraid_free_prp_pools(hdev);
+ kfree(hdev->ctrl_info);
+}
+
+static void hiraid_bsg_buf_unmap(struct hiraid_dev *hdev, struct bsg_job *job)
+{
+ struct request *rq = blk_mq_rq_from_pdu(job);
+ struct hiraid_mapmange *mapbuf = job->dd_data;
+ enum dma_data_direction dma_dir = rq_data_dir(rq) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+ if (mapbuf->sge_cnt)
+ dma_unmap_sg(hdev->dev, mapbuf->sgl, mapbuf->sge_cnt, dma_dir);
+
+ hiraid_free_mapbuf(hdev, mapbuf);
+}
+
+static int hiraid_bsg_buf_map(struct hiraid_dev *hdev, struct bsg_job *job,
+ struct hiraid_admin_command *cmd)
+{
+ struct hiraid_bsg_request *bsg_req = job->request;
+ struct request *rq = blk_mq_rq_from_pdu(job);
+ struct hiraid_mapmange *mapbuf = job->dd_data;
+ enum dma_data_direction dma_dir = rq_data_dir(rq) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+ int ret = 0;
+
+ /* No data to DMA, it may be scsi no-rw command */
+ mapbuf->sge_cnt = job->request_payload.sg_cnt;
+ mapbuf->sgl = job->request_payload.sg_list;
+ mapbuf->len = job->request_payload.payload_len;
+ mapbuf->page_cnt = -1;
+ if (unlikely(mapbuf->sge_cnt == 0))
+ goto out;
+
+ mapbuf->use_sgl = !hiraid_is_prp(hdev, mapbuf->sgl, mapbuf->sge_cnt);
+
+ ret = dma_map_sg_attrs(hdev->dev, mapbuf->sgl, mapbuf->sge_cnt, dma_dir, DMA_ATTR_NO_WARN);
+ if (!ret)
+ goto out;
+
+ if ((mapbuf->use_sgl == (bool)true) && (bsg_req->msgcode == HIRAID_BSG_IOPTHRU) &&
+ (hdev->ctrl_info->pt_use_sgl != (bool)false)) {
+ ret = hiraid_build_passthru_sgl(hdev, cmd, mapbuf);
+ } else {
+ mapbuf->use_sgl = false;
+
+ ret = hiraid_build_passthru_prp(hdev, mapbuf);
+ cmd->common.dptr.prp1 = cpu_to_le64(sg_dma_address(mapbuf->sgl));
+ cmd->common.dptr.prp2 = cpu_to_le64(mapbuf->first_dma);
+ }
+
+ if (ret)
+ goto unmap;
+
+ return 0;
+
+unmap:
+ dma_unmap_sg(hdev->dev, mapbuf->sgl, mapbuf->sge_cnt, dma_dir);
+out:
+ return ret;
+}
+
+static int hiraid_get_control_info(struct hiraid_dev *hdev, struct hiraid_ctrl_info *ctrl_info)
+{
+ struct hiraid_admin_command admin_cmd;
+ u8 *data_ptr = NULL;
+ dma_addr_t buffer_phy = 0;
+ int ret;
+
+ data_ptr = dma_alloc_coherent(hdev->dev, PAGE_SIZE, &buffer_phy, GFP_KERNEL);
+ if (!data_ptr)
+ return -ENOMEM;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.get_info.opcode = HIRAID_ADMIN_GET_INFO;
+ admin_cmd.get_info.type = HIRAID_GET_CTRL_INFO;
+ admin_cmd.common.dptr.prp1 = cpu_to_le64(buffer_phy);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+ if (!ret)
+ memcpy(ctrl_info, data_ptr, sizeof(struct hiraid_ctrl_info));
+
+ dma_free_coherent(hdev->dev, PAGE_SIZE, data_ptr, buffer_phy);
+
+ return ret;
+}
+
+static int hiraid_init_control_info(struct hiraid_dev *hdev)
+{
+ int ret;
+
+ hdev->ctrl_info->nd = cpu_to_le32(240);
+ hdev->ctrl_info->mdts = 8;
+ hdev->ctrl_info->max_cmds = cpu_to_le16(4096);
+ hdev->ctrl_info->max_num_sge = cpu_to_le16(128);
+ hdev->ctrl_info->max_channel = cpu_to_le16(4);
+ hdev->ctrl_info->max_tgt_id = cpu_to_le32(3239);
+ hdev->ctrl_info->max_lun = cpu_to_le16(2);
+
+ ret = hiraid_get_control_info(hdev, hdev->ctrl_info);
+ if (ret)
+ dev_err(hdev->dev, "get controller info failed[%d]\n", ret);
+
+ dev_info(hdev->dev, "device_num = %d\n", hdev->ctrl_info->nd);
+ dev_info(hdev->dev, "max_cmd = %d\n", hdev->ctrl_info->max_cmds);
+ dev_info(hdev->dev, "max_channel = %d\n", hdev->ctrl_info->max_channel);
+ dev_info(hdev->dev, "max_tgt_id = %d\n", hdev->ctrl_info->max_tgt_id);
+ dev_info(hdev->dev, "max_lun = %d\n", hdev->ctrl_info->max_lun);
+ dev_info(hdev->dev, "max_num_sge = %d\n", hdev->ctrl_info->max_num_sge);
+ dev_info(hdev->dev, "lun_num_boot = %d\n", hdev->ctrl_info->lun_num_boot);
+ dev_info(hdev->dev, "max_data_transfer_size = %d\n", hdev->ctrl_info->mdts);
+ dev_info(hdev->dev, "abort_cmd_limit = %d\n", hdev->ctrl_info->acl);
+ dev_info(hdev->dev, "asyn_event_num = %d\n", hdev->ctrl_info->asynevent);
+ dev_info(hdev->dev, "card_type = %d\n", hdev->ctrl_info->card_type);
+ dev_info(hdev->dev, "pt_use_sgl = %d\n", hdev->ctrl_info->pt_use_sgl);
+ dev_info(hdev->dev, "rtd3e = %d\n", hdev->ctrl_info->rtd3e);
+ dev_info(hdev->dev, "serial_num = %s\n", hdev->ctrl_info->sn);
+ dev_info(hdev->dev, "fw_verion = %s\n", hdev->ctrl_info->fw_version);
+
+ if (!hdev->ctrl_info->asynevent)
+ hdev->ctrl_info->asynevent = 1;
+ if (hdev->ctrl_info->asynevent > HIRAID_ASYN_COMMANDS)
+ hdev->ctrl_info->asynevent = HIRAID_ASYN_COMMANDS;
+
+ hdev->scsi_qd = work_mode ?
+ le16_to_cpu(hdev->ctrl_info->max_cmds) : (hdev->ioq_depth - HIRAID_PTHRU_CMDS_PERQ);
+
+ return 0;
+}
+
+static int hiraid_user_send_admcmd(struct hiraid_dev *hdev, struct bsg_job *job)
+{
+ struct hiraid_bsg_request *bsg_req = job->request;
+ struct hiraid_passthru_common_cmd *ptcmd = &(bsg_req->admcmd);
+ struct hiraid_admin_command admin_cmd;
+ u32 timeout = msecs_to_jiffies(ptcmd->timeout_ms);
+ u32 result[2] = {0};
+ int status;
+
+ if (hdev->state >= DEV_RESETTING) {
+ dev_err(hdev->dev, "err, host state[%d] is not right\n",
+ hdev->state);
+ return -EBUSY;
+ }
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.common.opcode = ptcmd->opcode;
+ admin_cmd.common.flags = ptcmd->flags;
+ admin_cmd.common.hdid = cpu_to_le32(ptcmd->nsid);
+ admin_cmd.common.cdw2[0] = cpu_to_le32(ptcmd->cdw2);
+ admin_cmd.common.cdw2[1] = cpu_to_le32(ptcmd->cdw3);
+ admin_cmd.common.cdw10 = cpu_to_le32(ptcmd->cdw10);
+ admin_cmd.common.cdw11 = cpu_to_le32(ptcmd->cdw11);
+ admin_cmd.common.cdw12 = cpu_to_le32(ptcmd->cdw12);
+ admin_cmd.common.cdw13 = cpu_to_le32(ptcmd->cdw13);
+ admin_cmd.common.cdw14 = cpu_to_le32(ptcmd->cdw14);
+ admin_cmd.common.cdw15 = cpu_to_le32(ptcmd->cdw15);
+
+ status = hiraid_bsg_buf_map(hdev, job, &admin_cmd);
+ if (status) {
+ dev_err(hdev->dev, "err, map data failed\n");
+ return status;
+ }
+
+ status = hiraid_put_admin_sync_request(hdev, &admin_cmd, &result[0], &result[1], timeout);
+ if (status >= 0) {
+ job->reply_len = sizeof(result);
+ memcpy(job->reply, result, sizeof(result));
+ }
+ if (status)
+ dev_info(hdev->dev, "opcode[0x%x] subopcode[0x%x] status[0x%x] result0[0x%x];"
+ "result1[0x%x]\n", ptcmd->opcode, ptcmd->info_0.subopcode, status,
+ result[0], result[1]);
+
+ hiraid_bsg_buf_unmap(hdev, job);
+
+ return status;
+}
+
+static int hiraid_alloc_io_ptcmds(struct hiraid_dev *hdev)
+{
+ u32 i;
+ u32 ptnum = HIRAID_TOTAL_PTCMDS(hdev->online_queues - 1);
+
+ INIT_LIST_HEAD(&hdev->io_pt_list);
+ spin_lock_init(&hdev->io_pt_lock);
+
+ hdev->io_ptcmds = kcalloc_node(ptnum, sizeof(struct hiraid_cmd),
+ GFP_KERNEL, hdev->numa_node);
+
+ if (!hdev->io_ptcmds) {
+ dev_err(hdev->dev, "alloc io pthrunum failed\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < ptnum; i++) {
+ hdev->io_ptcmds[i].qid = i / HIRAID_PTHRU_CMDS_PERQ + 1;
+ hdev->io_ptcmds[i].cid = i % HIRAID_PTHRU_CMDS_PERQ + hdev->scsi_qd;
+ list_add_tail(&(hdev->io_ptcmds[i].list), &hdev->io_pt_list);
+ }
+
+ dev_info(hdev->dev, "alloc io pthru cmd success, pthrunum[%d]\n", ptnum);
+
+ return 0;
+}
+
+static void hiraid_free_io_ptcmds(struct hiraid_dev *hdev)
+{
+ kfree(hdev->io_ptcmds);
+ hdev->io_ptcmds = NULL;
+
+ INIT_LIST_HEAD(&hdev->io_pt_list);
+}
+
+static int hiraid_put_io_sync_request(struct hiraid_dev *hdev, struct hiraid_scsi_io_cmd *io_cmd,
+ u32 *result, u32 *reslen, u32 timeout)
+{
+ int ret;
+ dma_addr_t buffer_phy;
+ struct hiraid_queue *ioq;
+ void *sense_addr = NULL;
+ struct hiraid_cmd *pt_cmd = hiraid_get_cmd(hdev, HIRAID_CMD_PTHRU);
+
+ if (!pt_cmd) {
+ dev_err(hdev->dev, "err, get ioq cmd failed\n");
+ return -EFAULT;
+ }
+
+ timeout = timeout ? timeout : ADMIN_TIMEOUT;
+
+ init_completion(&pt_cmd->cmd_done);
+
+ ioq = &hdev->queues[pt_cmd->qid];
+ if (work_mode) {
+ ret = ((pt_cmd->qid - 1) * HIRAID_PTHRU_CMDS_PERQ + pt_cmd->cid) *
+ SCSI_SENSE_BUFFERSIZE;
+ sense_addr = hdev->sense_buffer_virt + ret;
+ buffer_phy = hdev->sense_buffer_phy + ret;
+ } else {
+ ret = pt_cmd->cid * SCSI_SENSE_BUFFERSIZE;
+ sense_addr = ioq->sense_buffer_virt + ret;
+ buffer_phy = ioq->sense_buffer_phy + ret;
+ }
+
+ io_cmd->common.sense_addr = cpu_to_le64(buffer_phy);
+ io_cmd->common.sense_len = cpu_to_le16(SCSI_SENSE_BUFFERSIZE);
+ io_cmd->common.cmd_id = cpu_to_le16(pt_cmd->cid);
+
+ hiraid_submit_cmd(ioq, io_cmd);
+
+ if (!wait_for_completion_timeout(&pt_cmd->cmd_done, timeout)) {
+ dev_err(hdev->dev, "cid[%d] qid[%d] timeout, opcode[0x%x] subopcode[0x%x]\n",
+ pt_cmd->cid, pt_cmd->qid, io_cmd->common.opcode,
+ (le32_to_cpu(io_cmd->common.cdw3[0]) & 0xffff));
+
+ hiraid_admin_timeout(hdev, pt_cmd);
+
+ hiraid_put_cmd(hdev, pt_cmd, HIRAID_CMD_PTHRU);
+ return -ETIME;
+ }
+
+ if (result && reslen) {
+ if ((pt_cmd->status & 0x17f) == 0x101) {
+ memcpy(result, sense_addr, SCSI_SENSE_BUFFERSIZE);
+ *reslen = SCSI_SENSE_BUFFERSIZE;
+ }
+ }
+
+ hiraid_put_cmd(hdev, pt_cmd, HIRAID_CMD_PTHRU);
+
+ return pt_cmd->status;
+}
+
+static int hiraid_user_send_ptcmd(struct hiraid_dev *hdev, struct bsg_job *job)
+{
+ struct hiraid_bsg_request *bsg_req = (struct hiraid_bsg_request *)(job->request);
+ struct hiraid_passthru_io_cmd *cmd = &(bsg_req->pthrucmd);
+ struct hiraid_scsi_io_cmd pthru_cmd;
+ int status = 0;
+ u32 timeout = msecs_to_jiffies(cmd->timeout_ms);
+ // data len is 4k before use sgl, now len is 1M
+ u32 io_pt_data_len = (hdev->ctrl_info->pt_use_sgl == (bool)true) ?
+ IOQ_PT_SGL_DATA_LEN : IOQ_PT_DATA_LEN;
+
+ if (cmd->data_len > io_pt_data_len) {
+ dev_err(hdev->dev, "data len bigger than %d\n", io_pt_data_len);
+ return -EFAULT;
+ }
+
+ if (hdev->state != DEV_LIVE) {
+ dev_err(hdev->dev, "err, host state[%d] is not live\n", hdev->state);
+ return -EBUSY;
+ }
+
+ memset(&pthru_cmd, 0, sizeof(pthru_cmd));
+ pthru_cmd.common.opcode = cmd->opcode;
+ pthru_cmd.common.flags = cmd->flags;
+ pthru_cmd.common.hdid = cpu_to_le32(cmd->nsid);
+ pthru_cmd.common.sense_len = cpu_to_le16(cmd->info_0.res_sense_len);
+ pthru_cmd.common.cdb_len = cmd->info_0.cdb_len;
+ pthru_cmd.common.rsvd2 = cmd->info_0.rsvd0;
+ pthru_cmd.common.cdw3[0] = cpu_to_le32(cmd->cdw3);
+ pthru_cmd.common.cdw3[1] = cpu_to_le32(cmd->cdw4);
+ pthru_cmd.common.cdw3[2] = cpu_to_le32(cmd->cdw5);
+
+ pthru_cmd.common.cdw10[0] = cpu_to_le32(cmd->cdw10);
+ pthru_cmd.common.cdw10[1] = cpu_to_le32(cmd->cdw11);
+ pthru_cmd.common.cdw10[2] = cpu_to_le32(cmd->cdw12);
+ pthru_cmd.common.cdw10[3] = cpu_to_le32(cmd->cdw13);
+ pthru_cmd.common.cdw10[4] = cpu_to_le32(cmd->cdw14);
+ pthru_cmd.common.cdw10[5] = cpu_to_le32(cmd->data_len);
+
+ memcpy(pthru_cmd.common.cdb, &cmd->cdw16, cmd->info_0.cdb_len);
+
+ pthru_cmd.common.cdw26[0] = cpu_to_le32(cmd->cdw26[0]);
+ pthru_cmd.common.cdw26[1] = cpu_to_le32(cmd->cdw26[1]);
+ pthru_cmd.common.cdw26[2] = cpu_to_le32(cmd->cdw26[2]);
+ pthru_cmd.common.cdw26[3] = cpu_to_le32(cmd->cdw26[3]);
+
+ status = hiraid_bsg_buf_map(hdev, job, (struct hiraid_admin_command *)&pthru_cmd);
+ if (status) {
+ dev_err(hdev->dev, "err, map data failed\n");
+ return status;
+ }
+
+ status = hiraid_put_io_sync_request(hdev, &pthru_cmd, job->reply, &job->reply_len, timeout);
+
+ if (status)
+ dev_info(hdev->dev, "opcode[0x%x] subopcode[0x%x] status[0x%x] replylen[%d]\n",
+ cmd->opcode, cmd->info_1.subopcode, status, job->reply_len);
+
+ hiraid_bsg_buf_unmap(hdev, job);
+
+ return status;
+}
+
+static bool hiraid_check_scmd_finished(struct scsi_cmnd *scmd)
+{
+ struct hiraid_dev *hdev = shost_priv(scmd->device->host);
+ struct hiraid_mapmange *mapbuf = scsi_cmd_priv(scmd);
+ struct hiraid_queue *hiraidq;
+
+ hiraidq = mapbuf->hiraidq;
+ if (!hiraidq)
+ return false;
+ if (READ_ONCE(mapbuf->state) == CMD_COMPLETE || hiraid_poll_cq(hiraidq, mapbuf->cid)) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d] has been completed\n",
+ mapbuf->cid, hiraidq->qid);
+ return true;
+ }
+ return false;
+}
+
+static enum blk_eh_timer_return hiraid_timed_out(struct scsi_cmnd *scmd)
+{
+ struct hiraid_mapmange *mapbuf = scsi_cmd_priv(scmd);
+ unsigned int timeout = scmd->device->request_queue->rq_timeout;
+
+ if (hiraid_check_scmd_finished(scmd))
+ goto out;
+
+ if (time_after(jiffies, scmd->jiffies_at_alloc + timeout)) {
+ if (cmpxchg(&mapbuf->state, CMD_FLIGHT, CMD_TIMEOUT) == CMD_FLIGHT)
+ return BLK_EH_DONE;
+ }
+out:
+ return BLK_EH_RESET_TIMER;
+}
+
+/* send abort command by admin queue temporary */
+static int hiraid_send_abort_cmd(struct hiraid_dev *hdev, u32 hdid, u16 qid, u16 cid)
+{
+ struct hiraid_admin_command admin_cmd;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.abort.opcode = HIRAID_ADMIN_ABORT_CMD;
+ admin_cmd.abort.hdid = cpu_to_le32(hdid);
+ admin_cmd.abort.sqid = cpu_to_le16(qid);
+ admin_cmd.abort.cid = cpu_to_le16(cid);
+
+ return hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+}
+
+/* send reset command by admin quueue temporary */
+static int hiraid_send_reset_cmd(struct hiraid_dev *hdev, u8 type, u32 hdid)
+{
+ struct hiraid_admin_command admin_cmd;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.reset.opcode = HIRAID_ADMIN_RESET;
+ admin_cmd.reset.hdid = cpu_to_le32(hdid);
+ admin_cmd.reset.type = type;
+
+ return hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+}
+
+static bool hiraid_dev_state_trans(struct hiraid_dev *hdev, enum hiraid_dev_state new_state)
+{
+ unsigned long flags;
+ enum hiraid_dev_state old_state;
+ bool change = false;
+
+ spin_lock_irqsave(&hdev->state_lock, flags);
+
+ old_state = hdev->state;
+ switch (new_state) {
+ case DEV_LIVE:
+ switch (old_state) {
+ case DEV_NEW:
+ case DEV_RESETTING:
+ change = true;
+ break;
+ default:
+ break;
+ }
+ break;
+ case DEV_RESETTING:
+ switch (old_state) {
+ case DEV_LIVE:
+ change = true;
+ break;
+ default:
+ break;
+ }
+ break;
+ case DEV_DELETING:
+ if (old_state != DEV_DELETING)
+ change = true;
+ break;
+ case DEV_DEAD:
+ switch (old_state) {
+ case DEV_NEW:
+ case DEV_LIVE:
+ case DEV_RESETTING:
+ change = true;
+ break;
+ default:
+ break;
+ }
+ break;
+ default:
+ break;
+ }
+ if (change)
+ hdev->state = new_state;
+ spin_unlock_irqrestore(&hdev->state_lock, flags);
+
+ dev_info(hdev->dev, "oldstate[%d]->newstate[%d], change[%d]\n",
+ old_state, new_state, change);
+
+ return change;
+}
+
+static void hiraid_drain_pending_ios(struct hiraid_dev *hdev);
+
+static void hiraid_flush_running_cmds(struct hiraid_dev *hdev)
+{
+ int i, j;
+
+ scsi_block_requests(hdev->shost);
+ hiraid_drain_pending_ios(hdev);
+ scsi_unblock_requests(hdev->shost);
+
+ j = HIRAID_AQ_BLK_MQ_DEPTH;
+ for (i = 0; i < j; i++) {
+ if (READ_ONCE(hdev->adm_cmds[i].state) == CMD_FLIGHT) {
+ dev_info(hdev->dev, "flush admin, cid[%d]\n", i);
+ hdev->adm_cmds[i].status = 0xFFFF;
+ WRITE_ONCE(hdev->adm_cmds[i].state, CMD_COMPLETE);
+ complete(&(hdev->adm_cmds[i].cmd_done));
+ }
+ }
+
+ j = HIRAID_TOTAL_PTCMDS(hdev->online_queues - 1);
+ for (i = 0; i < j; i++) {
+ if (READ_ONCE(hdev->io_ptcmds[i].state) == CMD_FLIGHT) {
+ hdev->io_ptcmds[i].status = 0xFFFF;
+ WRITE_ONCE(hdev->io_ptcmds[i].state, CMD_COMPLETE);
+ complete(&(hdev->io_ptcmds[i].cmd_done));
+ }
+ }
+}
+
+static int hiraid_dev_disable(struct hiraid_dev *hdev, bool shutdown)
+{
+ int ret = -ENODEV;
+ struct hiraid_queue *adminq = &hdev->queues[0];
+ u16 start, end;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ if (shutdown)
+ hiraid_shutdown_control(hdev);
+ else
+ ret = hiraid_disable_control(hdev);
+ }
+
+ if (hdev->queue_count == 0) {
+ dev_err(hdev->dev, "warn: queue has been delete\n");
+ return ret;
+ }
+
+ spin_lock_irq(&adminq->cq_lock);
+ hiraid_process_cq(adminq, &start, &end, -1);
+ spin_unlock_irq(&adminq->cq_lock);
+ hiraid_complete_cqes(adminq, start, end);
+
+ hiraid_pci_disable(hdev);
+
+ hiraid_flush_running_cmds(hdev);
+
+ return ret;
+}
+
+static void hiraid_reset_work(struct work_struct *work)
+{
+ int ret = 0;
+ struct hiraid_dev *hdev = container_of(work, struct hiraid_dev, reset_work);
+
+ if (hdev->state != DEV_RESETTING) {
+ dev_err(hdev->dev, "err, host is not reset state\n");
+ return;
+ }
+
+ dev_info(hdev->dev, "enter host reset\n");
+
+ if (hdev->ctrl_config & HIRAID_CC_ENABLE) {
+ dev_info(hdev->dev, "start dev_disable\n");
+ ret = hiraid_dev_disable(hdev, false);
+ }
+
+ if (ret)
+ goto out;
+
+ ret = hiraid_pci_enable(hdev);
+ if (ret)
+ goto out;
+
+ ret = hiraid_setup_admin_queue(hdev);
+ if (ret)
+ goto pci_disable;
+
+ ret = hiraid_setup_io_queues(hdev);
+ if (ret || hdev->online_queues != hdev->last_qcnt)
+ goto pci_disable;
+
+ hiraid_dev_state_trans(hdev, DEV_LIVE);
+
+ hiraid_init_async_event(hdev);
+
+ hiraid_queue_scan(hdev);
+
+ return;
+
+pci_disable:
+ hiraid_pci_disable(hdev);
+out:
+ hiraid_dev_state_trans(hdev, DEV_DEAD);
+ dev_err(hdev->dev, "err, host reset failed\n");
+}
+
+static int hiraid_reset_work_sync(struct hiraid_dev *hdev)
+{
+ if (!hiraid_dev_state_trans(hdev, DEV_RESETTING)) {
+ dev_info(hdev->dev, "can't change to reset state\n");
+ return -EBUSY;
+ }
+
+ if (!queue_work(work_queue, &hdev->reset_work)) {
+ dev_err(hdev->dev, "err, host is already in reset state\n");
+ return -EBUSY;
+ }
+
+ flush_work(&hdev->reset_work);
+ if (hdev->state != DEV_LIVE)
+ return -ENODEV;
+
+ return 0;
+}
+
+static int hiraid_wait_io_completion(struct hiraid_mapmange *mapbuf)
+{
+ u16 times = 0;
+
+ do {
+ if (READ_ONCE(mapbuf->state) == CMD_TMO_COMPLETE)
+ break;
+ msleep(500);
+ times++;
+ } while (times <= HIRAID_WAIT_ABNL_CMD_TIMEOUT);
+
+ /* wait command completion timeout after abort/reset success */
+ if (times >= HIRAID_WAIT_ABNL_CMD_TIMEOUT)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+static bool hiraid_tgt_rst_pending_io_count(struct request *rq, void *data, bool reserved)
+{
+ unsigned int id = *(unsigned int *)data;
+ struct scsi_cmnd *scmd = blk_mq_rq_to_pdu(rq);
+ struct hiraid_mapmange *mapbuf;
+ struct hiraid_sdev_hostdata *hostdata;
+
+ if (scmd) {
+ mapbuf = scsi_cmd_priv(scmd);
+ if ((mapbuf->state == CMD_FLIGHT) || (mapbuf->state == CMD_TIMEOUT)) {
+ if ((scmd->device) && (scmd->device->id == id)) {
+ hostdata = scmd->device->hostdata;
+ hostdata->pend_count++;
+ }
+ }
+ }
+ return true;
+}
+static bool hiraid_clean_pending_io(struct request *rq, void *data, bool reserved)
+{
+ struct hiraid_dev *hdev = data;
+ struct scsi_cmnd *scmd;
+ struct hiraid_mapmange *mapbuf;
+
+ if (unlikely(!rq || !blk_mq_request_started(rq)))
+ return true;
+
+ scmd = blk_mq_rq_to_pdu(rq);
+ mapbuf = scsi_cmd_priv(scmd);
+
+ if ((cmpxchg(&mapbuf->state, CMD_FLIGHT, CMD_COMPLETE) != CMD_FLIGHT) &&
+ (cmpxchg(&mapbuf->state, CMD_TIMEOUT, CMD_COMPLETE) != CMD_TIMEOUT))
+ return true;
+
+ set_host_byte(scmd, DID_NO_CONNECT);
+ if (mapbuf->sge_cnt)
+ scsi_dma_unmap(scmd);
+ hiraid_free_mapbuf(hdev, mapbuf);
+ dev_warn_ratelimited(hdev->dev, "back unfinished CQE, cid[%d] qid[%d]\n",
+ mapbuf->cid, mapbuf->hiraidq->qid);
+ scmd->scsi_done(scmd);
+
+ return true;
+}
+
+static void hiraid_drain_pending_ios(struct hiraid_dev *hdev)
+{
+ blk_mq_tagset_busy_iter(&hdev->shost->tag_set, hiraid_clean_pending_io, (void *)(hdev));
+}
+
+static int wait_tgt_reset_io_done(struct scsi_cmnd *scmd)
+{
+ u16 timeout = 0;
+ struct hiraid_sdev_hostdata *hostdata;
+ struct hiraid_dev *hdev = shost_priv(scmd->device->host);
+
+ hostdata = scmd->device->hostdata;
+
+ do {
+ hostdata->pend_count = 0;
+ blk_mq_tagset_busy_iter(&hdev->shost->tag_set, hiraid_tgt_rst_pending_io_count,
+ (void *)(&scmd->device->id));
+
+ if (!hostdata->pend_count)
+ return 0;
+
+ msleep(500);
+ timeout++;
+ } while (timeout <= HIRAID_WAIT_RST_IO_TIMEOUT);
+
+ return -ETIMEDOUT;
+}
+
+static int hiraid_abort(struct scsi_cmnd *scmd)
+{
+ struct hiraid_dev *hdev = shost_priv(scmd->device->host);
+ struct hiraid_mapmange *mapbuf = scsi_cmd_priv(scmd);
+ struct hiraid_sdev_hostdata *hostdata;
+ u16 hwq, cid;
+ int ret;
+
+ scsi_print_command(scmd);
+
+ if (hdev->state != DEV_LIVE || !hiraid_wait_io_completion(mapbuf) ||
+ hiraid_check_scmd_finished(scmd))
+ return SUCCESS;
+
+ hostdata = scmd->device->hostdata;
+ cid = mapbuf->cid;
+ hwq = mapbuf->hiraidq->qid;
+
+ dev_warn(hdev->dev, "cid[%d] qid[%d] timeout, send abort\n", cid, hwq);
+ ret = hiraid_send_abort_cmd(hdev, hostdata->hdid, hwq, cid);
+ if (ret != -ETIME) {
+ ret = hiraid_wait_io_completion(mapbuf);
+ if (ret) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d] abort failed, not found\n", cid, hwq);
+ return FAILED;
+ }
+ dev_warn(hdev->dev, "cid[%d] qid[%d] abort succ\n", cid, hwq);
+ return SUCCESS;
+ }
+ dev_warn(hdev->dev, "cid[%d] qid[%d] abort failed, timeout\n", cid, hwq);
+ return FAILED;
+}
+
+static int hiraid_scsi_reset(struct scsi_cmnd *scmd, enum hiraid_rst_type rst)
+{
+ struct hiraid_dev *hdev = shost_priv(scmd->device->host);
+ struct hiraid_sdev_hostdata *hostdata;
+ int ret;
+
+ if (hdev->state != DEV_LIVE)
+ return SUCCESS;
+
+ hostdata = scmd->device->hostdata;
+
+ dev_warn(hdev->dev, "sdev[%d:%d] send %s reset\n", scmd->device->channel, scmd->device->id,
+ rst ? "bus" : "target");
+ ret = hiraid_send_reset_cmd(hdev, rst, hostdata->hdid);
+ if ((ret == 0) || (ret == FW_EH_DEV_NONE && rst == HIRAID_RESET_TARGET)) {
+ if (rst == HIRAID_RESET_TARGET) {
+ ret = wait_tgt_reset_io_done(scmd);
+ if (ret) {
+ dev_warn(hdev->dev, "sdev[%d:%d] target has %d peding cmd, target reset failed\n",
+ scmd->device->channel, scmd->device->id,
+ hostdata->pend_count);
+ return FAILED;
+ }
+ }
+ dev_warn(hdev->dev, "sdev[%d:%d] %s reset success\n",
+ scmd->device->channel, scmd->device->id, rst ? "bus" : "target");
+ return SUCCESS;
+ }
+
+ dev_warn(hdev->dev, "sdev[%d:%d] %s reset failed\n",
+ scmd->device->channel, scmd->device->id, rst ? "bus" : "target");
+ return FAILED;
+}
+
+static int hiraid_target_reset(struct scsi_cmnd *scmd)
+{
+ return hiraid_scsi_reset(scmd, HIRAID_RESET_TARGET);
+}
+
+static int hiraid_bus_reset(struct scsi_cmnd *scmd)
+{
+ return hiraid_scsi_reset(scmd, HIRAID_RESET_BUS);
+}
+
+static int hiraid_host_reset(struct scsi_cmnd *scmd)
+{
+ struct hiraid_dev *hdev = shost_priv(scmd->device->host);
+
+ if (hdev->state != DEV_LIVE)
+ return SUCCESS;
+
+ dev_warn(hdev->dev, "sdev[%d:%d] send host reset\n",
+ scmd->device->channel, scmd->device->id);
+ if (hiraid_reset_work_sync(hdev) == -EBUSY)
+ flush_work(&hdev->reset_work);
+
+ if (hdev->state != DEV_LIVE) {
+ dev_warn(hdev->dev, "sdev[%d:%d] host reset failed\n",
+ scmd->device->channel, scmd->device->id);
+ return FAILED;
+ }
+
+ dev_warn(hdev->dev, "sdev[%d:%d] host reset success\n",
+ scmd->device->channel, scmd->device->id);
+
+ return SUCCESS;
+}
+
+static pci_ers_result_t hiraid_pci_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct hiraid_dev *hdev = pci_get_drvdata(pdev);
+
+ dev_info(hdev->dev, "pci error detected, state[%d]\n", state);
+
+ switch (state) {
+ case pci_channel_io_normal:
+ dev_warn(hdev->dev, "channel is normal, do nothing\n");
+
+ return PCI_ERS_RESULT_CAN_RECOVER;
+ case pci_channel_io_frozen:
+ dev_warn(hdev->dev, "channel io frozen, need reset controller\n");
+
+ scsi_block_requests(hdev->shost);
+
+ hiraid_dev_state_trans(hdev, DEV_RESETTING);
+
+ return PCI_ERS_RESULT_NEED_RESET;
+ case pci_channel_io_perm_failure:
+ dev_warn(hdev->dev, "channel io failure, disconnect\n");
+
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ return PCI_ERS_RESULT_NEED_RESET;
+}
+
+static pci_ers_result_t hiraid_pci_slot_reset(struct pci_dev *pdev)
+{
+ struct hiraid_dev *hdev = pci_get_drvdata(pdev);
+
+ dev_info(hdev->dev, "restart after slot reset\n");
+
+ pci_restore_state(pdev);
+
+ if (!queue_work(work_queue, &hdev->reset_work)) {
+ dev_err(hdev->dev, "err, the device is resetting state\n");
+ return PCI_ERS_RESULT_NONE;
+ }
+
+ flush_work(&hdev->reset_work);
+
+ scsi_unblock_requests(hdev->shost);
+
+ return PCI_ERS_RESULT_RECOVERED;
+}
+
+static void hiraid_reset_pci_finish(struct pci_dev *pdev)
+{
+ struct hiraid_dev *hdev = pci_get_drvdata(pdev);
+
+ dev_info(hdev->dev, "enter hiraid reset finish\n");
+}
+
+static ssize_t csts_pp_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ ret = (readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_PP_MASK);
+ ret >>= HIRAID_CSTS_PP_SHIFT;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t csts_shst_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ ret = (readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_SHST_MASK);
+ ret >>= HIRAID_CSTS_SHST_SHIFT;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t csts_cfs_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ ret = (readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_CFS_MASK);
+ ret >>= HIRAID_CSTS_CFS_SHIFT;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t csts_rdy_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev))
+ ret = (readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_RDY);
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t fw_version_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+
+ return snprintf(buf, PAGE_SIZE, "%s\n", hdev->ctrl_info->fw_version);
+}
+
+static ssize_t hdd_dispatch_store(struct device *cdev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ int val = 0;
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+
+ if (kstrtoint(buf, 0, &val) != 0)
+ return -EINVAL;
+ if (val < DISPATCH_BY_CPU || val > DISPATCH_BY_DISK)
+ return -EINVAL;
+ hdev->hdd_dispatch = val;
+
+ return strlen(buf);
+}
+static ssize_t hdd_dispatch_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", hdev->hdd_dispatch);
+}
+
+static DEVICE_ATTR_RO(csts_pp);
+static DEVICE_ATTR_RO(csts_shst);
+static DEVICE_ATTR_RO(csts_cfs);
+static DEVICE_ATTR_RO(csts_rdy);
+static DEVICE_ATTR_RO(fw_version);
+static DEVICE_ATTR_RW(hdd_dispatch);
+
+static struct device_attribute *hiraid_host_attrs[] = {
+ &dev_attr_csts_rdy,
+ &dev_attr_csts_pp,
+ &dev_attr_csts_cfs,
+ &dev_attr_fw_version,
+ &dev_attr_csts_shst,
+ &dev_attr_hdd_dispatch,
+ NULL,
+};
+
+static int hiraid_get_vd_info(struct hiraid_dev *hdev, struct hiraid_vd_info *vd_info, u16 vid)
+{
+ struct hiraid_admin_command admin_cmd;
+ u8 *data_ptr = NULL;
+ dma_addr_t buffer_phy = 0;
+ int ret;
+
+ if (hdev->state >= DEV_RESETTING) {
+ dev_err(hdev->dev, "err, host state[%d] is not right\n", hdev->state);
+ return -EBUSY;
+ }
+
+ data_ptr = dma_alloc_coherent(hdev->dev, PAGE_SIZE, &buffer_phy, GFP_KERNEL);
+ if (!data_ptr)
+ return -ENOMEM;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.usr_cmd.opcode = USR_CMD_READ;
+ admin_cmd.usr_cmd.info_0.subopcode = cpu_to_le16(USR_CMD_VDINFO);
+ admin_cmd.usr_cmd.info_1.data_len = cpu_to_le16(USR_CMD_RDLEN);
+ admin_cmd.usr_cmd.info_1.param_len = cpu_to_le16(VDINFO_PARAM_LEN);
+ admin_cmd.usr_cmd.cdw10 = cpu_to_le32(vid);
+ admin_cmd.common.dptr.prp1 = cpu_to_le64(buffer_phy);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, USRCMD_TIMEOUT);
+ if (!ret)
+ memcpy(vd_info, data_ptr, sizeof(struct hiraid_vd_info));
+
+ dma_free_coherent(hdev->dev, PAGE_SIZE, data_ptr, buffer_phy);
+
+ return ret;
+}
+
+static int hiraid_get_bgtask(struct hiraid_dev *hdev, struct hiraid_bgtask *bgtask)
+{
+ struct hiraid_admin_command admin_cmd;
+ u8 *data_ptr = NULL;
+ dma_addr_t buffer_phy = 0;
+ int ret;
+
+ if (hdev->state >= DEV_RESETTING) {
+ dev_err(hdev->dev, "err, host state[%d] is not right\n", hdev->state);
+ return -EBUSY;
+ }
+
+ data_ptr = dma_alloc_coherent(hdev->dev, PAGE_SIZE, &buffer_phy, GFP_KERNEL);
+ if (!data_ptr)
+ return -ENOMEM;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.usr_cmd.opcode = USR_CMD_READ;
+ admin_cmd.usr_cmd.info_0.subopcode = cpu_to_le16(USR_CMD_BGTASK);
+ admin_cmd.usr_cmd.info_1.data_len = cpu_to_le16(USR_CMD_RDLEN);
+ admin_cmd.common.dptr.prp1 = cpu_to_le64(buffer_phy);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, USRCMD_TIMEOUT);
+ if (!ret)
+ memcpy(bgtask, data_ptr, sizeof(struct hiraid_bgtask));
+
+ dma_free_coherent(hdev->dev, PAGE_SIZE, data_ptr, buffer_phy);
+
+ return ret;
+}
+
+static ssize_t raid_level_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct scsi_device *sdev;
+ struct hiraid_dev *hdev;
+ struct hiraid_vd_info *vd_info;
+ struct hiraid_sdev_hostdata *hostdata;
+ int ret;
+
+ sdev = to_scsi_device(dev);
+ hdev = shost_priv(sdev->host);
+ hostdata = sdev->hostdata;
+
+ vd_info = kmalloc(sizeof(*vd_info), GFP_KERNEL);
+ if (!vd_info || !HIRAID_DEV_INFO_ATTR_VD(hostdata->attr))
+ return snprintf(buf, PAGE_SIZE, "NA\n");
+
+ ret = hiraid_get_vd_info(hdev, vd_info, sdev->id);
+ if (ret)
+ vd_info->rg_level = ARRAY_SIZE(raid_levels) - 1;
+
+ ret = (vd_info->rg_level < ARRAY_SIZE(raid_levels)) ?
+ vd_info->rg_level : (ARRAY_SIZE(raid_levels) - 1);
+
+ kfree(vd_info);
+
+ return snprintf(buf, PAGE_SIZE, "RAID-%s\n", raid_levels[ret]);
+}
+
+static ssize_t raid_state_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct scsi_device *sdev;
+ struct hiraid_dev *hdev;
+ struct hiraid_vd_info *vd_info;
+ struct hiraid_sdev_hostdata *hostdata;
+ int ret;
+
+ sdev = to_scsi_device(dev);
+ hdev = shost_priv(sdev->host);
+ hostdata = sdev->hostdata;
+
+ vd_info = kmalloc(sizeof(*vd_info), GFP_KERNEL);
+ if (!vd_info || !HIRAID_DEV_INFO_ATTR_VD(hostdata->attr))
+ return snprintf(buf, PAGE_SIZE, "NA\n");
+
+ ret = hiraid_get_vd_info(hdev, vd_info, sdev->id);
+ if (ret) {
+ vd_info->vd_status = 0;
+ vd_info->rg_id = 0xff;
+ }
+
+ ret = (vd_info->vd_status < ARRAY_SIZE(raid_states)) ? vd_info->vd_status : 0;
+
+ kfree(vd_info);
+
+ return snprintf(buf, PAGE_SIZE, "%s\n", raid_states[ret]);
+}
+
+static ssize_t raid_resync_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct scsi_device *sdev;
+ struct hiraid_dev *hdev;
+ struct hiraid_vd_info *vd_info;
+ struct hiraid_bgtask *bgtask;
+ struct hiraid_sdev_hostdata *hostdata;
+ u8 rg_id, i, progress = 0;
+ int ret;
+
+ sdev = to_scsi_device(dev);
+ hdev = shost_priv(sdev->host);
+ hostdata = sdev->hostdata;
+
+ vd_info = kmalloc(sizeof(*vd_info), GFP_KERNEL);
+ if (!vd_info || !HIRAID_DEV_INFO_ATTR_VD(hostdata->attr))
+ return snprintf(buf, PAGE_SIZE, "NA\n");
+
+ ret = hiraid_get_vd_info(hdev, vd_info, sdev->id);
+ if (ret)
+ goto out;
+
+ rg_id = vd_info->rg_id;
+
+ bgtask = (struct hiraid_bgtask *)vd_info;
+ ret = hiraid_get_bgtask(hdev, bgtask);
+ if (ret)
+ goto out;
+ for (i = 0; i < bgtask->task_num; i++) {
+ if ((bgtask->bgtask[i].type == BGTASK_TYPE_REBUILD) &&
+ (le16_to_cpu(bgtask->bgtask[i].vd_id) == rg_id))
+ progress = bgtask->bgtask[i].progress;
+ }
+
+out:
+ kfree(vd_info);
+ return snprintf(buf, PAGE_SIZE, "%d\n", progress);
+}
+
+static ssize_t dispatch_hwq_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct hiraid_sdev_hostdata *hostdata;
+
+ hostdata = to_scsi_device(dev)->hostdata;
+ return snprintf(buf, PAGE_SIZE, "%d\n", hostdata->hwq);
+}
+
+static ssize_t dispatch_hwq_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ int val;
+ struct hiraid_dev *hdev;
+ struct scsi_device *sdev;
+ struct hiraid_sdev_hostdata *hostdata;
+
+ sdev = to_scsi_device(dev);
+ hdev = shost_priv(sdev->host);
+ hostdata = sdev->hostdata;
+
+ if (kstrtoint(buf, 0, &val) != 0)
+ return -EINVAL;
+ if (val <= 0 || val >= hdev->online_queues)
+ return -EINVAL;
+ if (!hiraid_disk_is_hdd(hostdata->attr))
+ return -EINVAL;
+
+ hostdata->hwq = val;
+ return strlen(buf);
+}
+
+static DEVICE_ATTR_RO(raid_level);
+static DEVICE_ATTR_RO(raid_state);
+static DEVICE_ATTR_RO(raid_resync);
+static DEVICE_ATTR_RW(dispatch_hwq);
+
+static struct device_attribute *hiraid_dev_attrs[] = {
+ &dev_attr_raid_state,
+ &dev_attr_raid_level,
+ &dev_attr_raid_resync,
+ &dev_attr_dispatch_hwq,
+ NULL,
+};
+
+static struct pci_error_handlers hiraid_err_handler = {
+ .error_detected = hiraid_pci_error_detected,
+ .slot_reset = hiraid_pci_slot_reset,
+ .reset_done = hiraid_reset_pci_finish,
+};
+
+static int hiraid_sysfs_host_reset(struct Scsi_Host *shost, int reset_type)
+{
+ int ret;
+ struct hiraid_dev *hdev = shost_priv(shost);
+
+ dev_info(hdev->dev, "start sysfs host reset cmd\n");
+ ret = hiraid_reset_work_sync(hdev);
+ dev_info(hdev->dev, "stop sysfs host reset cmd[%d]\n", ret);
+
+ return ret;
+}
+
+static int hiraid_scan_finished(struct Scsi_Host *shost, unsigned long time)
+{
+ struct hiraid_dev *hdev = shost_priv(shost);
+
+ hiraid_scan_work(&hdev->scan_work);
+
+ return 1;
+}
+
+static struct scsi_host_template hiraid_driver_template = {
+ .module = THIS_MODULE,
+ .name = "hiraid",
+ .proc_name = "hiraid",
+ .queuecommand = hiraid_queue_command,
+ .slave_alloc = hiraid_slave_alloc,
+ .slave_destroy = hiraid_slave_destroy,
+ .slave_configure = hiraid_slave_configure,
+ .scan_finished = hiraid_scan_finished,
+ .eh_timed_out = hiraid_timed_out,
+ .eh_abort_handler = hiraid_abort,
+ .eh_target_reset_handler = hiraid_target_reset,
+ .eh_bus_reset_handler = hiraid_bus_reset,
+ .eh_host_reset_handler = hiraid_host_reset,
+ .change_queue_depth = scsi_change_queue_depth,
+ .this_id = -1,
+ .unchecked_isa_dma = 0,
+ .shost_attrs = hiraid_host_attrs,
+ .sdev_attrs = hiraid_dev_attrs,
+ .host_reset = hiraid_sysfs_host_reset,
+};
+
+static void hiraid_shutdown(struct pci_dev *pdev)
+{
+ struct hiraid_dev *hdev = pci_get_drvdata(pdev);
+
+ hiraid_delete_io_queues(hdev);
+ hiraid_disable_admin_queue(hdev, true);
+}
+
+static bool hiraid_bsg_is_valid(struct bsg_job *job)
+{
+ u64 timeout = 0;
+ struct request *rq = blk_mq_rq_from_pdu(job);
+ struct hiraid_bsg_request *bsg_req = job->request;
+ struct hiraid_dev *hdev = shost_priv(dev_to_shost(job->dev));
+
+ if (bsg_req == NULL || job->request_len != sizeof(struct hiraid_bsg_request))
+ return false;
+
+ switch (bsg_req->msgcode) {
+ case HIRAID_BSG_ADMIN:
+ timeout = msecs_to_jiffies(bsg_req->admcmd.timeout_ms);
+ break;
+ case HIRAID_BSG_IOPTHRU:
+ timeout = msecs_to_jiffies(bsg_req->pthrucmd.timeout_ms);
+ break;
+ default:
+ dev_info(hdev->dev, "bsg unsupport msgcode[%d]\n", bsg_req->msgcode);
+ return false;
+ }
+
+ if ((timeout + CTL_RST_TIME) > rq->timeout) {
+ dev_err(hdev->dev, "bsg invalid time\n");
+ return false;
+ }
+
+ return true;
+}
+
+/* bsg dispatch user command */
+static int hiraid_bsg_dispatch(struct bsg_job *job)
+{
+ struct Scsi_Host *shost = dev_to_shost(job->dev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ struct request *rq = blk_mq_rq_from_pdu(job);
+ struct hiraid_bsg_request *bsg_req = job->request;
+ int ret = -ENOMSG;
+
+ job->reply_len = 0;
+
+ if (!hiraid_bsg_is_valid(job)) {
+ bsg_job_done(job, ret, 0);
+ return 0;
+ }
+
+ dev_log_dbg(hdev->dev, "bsg msgcode[%d] msglen[%d] timeout[%d];"
+ "reqnsge[%d], reqlen[%d]\n",
+ bsg_req->msgcode, job->request_len, rq->timeout,
+ job->request_payload.sg_cnt, job->request_payload.payload_len);
+
+ switch (bsg_req->msgcode) {
+ case HIRAID_BSG_ADMIN:
+ ret = hiraid_user_send_admcmd(hdev, job);
+ break;
+ case HIRAID_BSG_IOPTHRU:
+ ret = hiraid_user_send_ptcmd(hdev, job);
+ break;
+ default:
+ break;
+ }
+
+ if (ret > 0)
+ ret = ret | (ret << 8);
+
+ bsg_job_done(job, ret, 0);
+ return 0;
+}
+
+static inline void hiraid_unregist_bsg(struct hiraid_dev *hdev)
+{
+ if (hdev->bsg_queue) {
+ bsg_unregister_queue(hdev->bsg_queue);
+ blk_cleanup_queue(hdev->bsg_queue);
+ }
+}
+static int hiraid_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct hiraid_dev *hdev;
+ struct Scsi_Host *shost;
+ int node, ret;
+ char bsg_name[15];
+
+ shost = scsi_host_alloc(&hiraid_driver_template, sizeof(*hdev));
+ if (!shost) {
+ dev_err(&pdev->dev, "failed to allocate scsi host\n");
+ return -ENOMEM;
+ }
+ hdev = shost_priv(shost);
+ hdev->pdev = pdev;
+ hdev->dev = get_device(&pdev->dev);
+
+ node = dev_to_node(hdev->dev);
+ if (node == NUMA_NO_NODE) {
+ node = first_memory_node;
+ set_dev_node(hdev->dev, node);
+ }
+ hdev->numa_node = node;
+ hdev->shost = shost;
+ hdev->instance = shost->host_no;
+ pci_set_drvdata(pdev, hdev);
+
+ ret = hiraid_dev_map(hdev);
+ if (ret)
+ goto put_dev;
+
+ init_rwsem(&hdev->dev_rwsem);
+ INIT_WORK(&hdev->scan_work, hiraid_scan_work);
+ INIT_WORK(&hdev->timesyn_work, hiraid_timesyn_work);
+ INIT_WORK(&hdev->reset_work, hiraid_reset_work);
+ INIT_WORK(&hdev->fwact_work, hiraid_fwactive_work);
+ spin_lock_init(&hdev->state_lock);
+
+ ret = hiraid_alloc_resources(hdev);
+ if (ret)
+ goto dev_unmap;
+
+ ret = hiraid_pci_enable(hdev);
+ if (ret)
+ goto resources_free;
+
+ ret = hiraid_setup_admin_queue(hdev);
+ if (ret)
+ goto pci_disable;
+
+ ret = hiraid_init_control_info(hdev);
+ if (ret)
+ goto disable_admin_q;
+
+ ret = hiraid_setup_io_queues(hdev);
+ if (ret)
+ goto disable_admin_q;
+
+ hiraid_shost_init(hdev);
+
+ ret = scsi_add_host(hdev->shost, hdev->dev);
+ if (ret) {
+ dev_err(hdev->dev, "add shost to system failed, ret[%d]\n", ret);
+ goto remove_io_queues;
+ }
+
+ snprintf(bsg_name, sizeof(bsg_name), "hiraid%d", shost->host_no);
+ hdev->bsg_queue = bsg_setup_queue(&shost->shost_gendev, bsg_name, hiraid_bsg_dispatch,
+ NULL, hiraid_get_max_cmd_size(hdev));
+ if (IS_ERR(hdev->bsg_queue)) {
+ dev_err(hdev->dev, "err, setup bsg failed\n");
+ hdev->bsg_queue = NULL;
+ goto remove_io_queues;
+ }
+
+ if (hdev->online_queues == HIRAID_ADMIN_QUEUE_NUM) {
+ dev_warn(hdev->dev, "warn: only admin queue can be used\n");
+ return 0;
+ }
+
+ hdev->state = DEV_LIVE;
+
+ hiraid_init_async_event(hdev);
+
+ ret = hiraid_dev_list_init(hdev);
+ if (ret)
+ goto unregist_bsg;
+
+ ret = hiraid_configure_timestamp(hdev);
+ if (ret)
+ dev_warn(hdev->dev, "time synchronization failed\n");
+
+ ret = hiraid_alloc_io_ptcmds(hdev);
+ if (ret)
+ goto unregist_bsg;
+
+ scsi_scan_host(hdev->shost);
+
+ return 0;
+
+unregist_bsg:
+ hiraid_unregist_bsg(hdev);
+remove_io_queues:
+ hiraid_delete_io_queues(hdev);
+disable_admin_q:
+ hiraid_free_sense_buffer(hdev);
+ hiraid_disable_admin_queue(hdev, false);
+pci_disable:
+ hiraid_free_all_queues(hdev);
+ hiraid_pci_disable(hdev);
+resources_free:
+ hiraid_free_resources(hdev);
+dev_unmap:
+ hiraid_dev_unmap(hdev);
+put_dev:
+ put_device(hdev->dev);
+ scsi_host_put(shost);
+
+ return -ENODEV;
+}
+
+static void hiraid_remove(struct pci_dev *pdev)
+{
+ struct hiraid_dev *hdev = pci_get_drvdata(pdev);
+ struct Scsi_Host *shost = hdev->shost;
+
+ dev_info(hdev->dev, "enter hiraid remove\n");
+
+ hiraid_dev_state_trans(hdev, DEV_DELETING);
+ flush_work(&hdev->reset_work);
+
+ if (!pci_device_is_present(pdev))
+ hiraid_flush_running_cmds(hdev);
+
+ hiraid_unregist_bsg(hdev);
+ scsi_remove_host(shost);
+ hiraid_free_io_ptcmds(hdev);
+ kfree(hdev->dev_info);
+ hiraid_delete_io_queues(hdev);
+ hiraid_free_sense_buffer(hdev);
+ hiraid_disable_admin_queue(hdev, false);
+ hiraid_free_all_queues(hdev);
+ hiraid_pci_disable(hdev);
+ hiraid_free_resources(hdev);
+ hiraid_dev_unmap(hdev);
+ put_device(hdev->dev);
+ scsi_host_put(shost);
+
+ dev_info(hdev->dev, "exit hiraid remove\n");
+}
+
+static const struct pci_device_id hiraid_hw_card_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_HUAWEI_LOGIC, HIRAID_SERVER_DEVICE_HBA_DID) },
+ { PCI_DEVICE(PCI_VENDOR_ID_HUAWEI_LOGIC, HIRAID_SERVER_DEVICE_RAID_DID) },
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, hiraid_hw_card_ids);
+
+static struct pci_driver hiraid_driver = {
+ .name = "hiraid",
+ .id_table = hiraid_hw_card_ids,
+ .probe = hiraid_probe,
+ .remove = hiraid_remove,
+ .shutdown = hiraid_shutdown,
+ .err_handler = &hiraid_err_handler,
+};
+
+static int __init hiraid_init(void)
+{
+ int ret;
+
+ work_queue = alloc_workqueue("hiraid-wq", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_SYSFS, 0);
+ if (!work_queue)
+ return -ENOMEM;
+
+ hiraid_class = class_create(THIS_MODULE, "hiraid");
+ if (IS_ERR(hiraid_class)) {
+ ret = PTR_ERR(hiraid_class);
+ goto destroy_wq;
+ }
+
+ ret = pci_register_driver(&hiraid_driver);
+ if (ret < 0)
+ goto destroy_class;
+
+ return 0;
+
+destroy_class:
+ class_destroy(hiraid_class);
+destroy_wq:
+ destroy_workqueue(work_queue);
+
+ return ret;
+}
+
+static void __exit hiraid_exit(void)
+{
+ pci_unregister_driver(&hiraid_driver);
+ class_destroy(hiraid_class);
+ destroy_workqueue(work_queue);
+}
+
+MODULE_AUTHOR("Huawei Technologies CO., Ltd");
+MODULE_DESCRIPTION("Huawei RAID driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(HIRAID_DRV_VERSION);
+module_init(hiraid_init);
+module_exit(hiraid_exit);
--
2.22.0.windows.1
2
1

[PATCH openEuler-1.0-LTS] Revert "tcp: fix delayed ACKs for MSS boundary condition"
by Dong Chenchen 16 Nov '23
by Dong Chenchen 16 Nov '23
16 Nov '23
From: dongchenchen <dongchenchen2(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8GYWB
CVE: NA
--------------------------------
This reverts commit 389055ab28760dd7b25c6996c6647b0a37e0a34e.
Signed-off-by: dongchenchen <dongchenchen2(a)huawei.com>
---
net/ipv4/tcp_input.c | 13 -------------
1 file changed, 13 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index f8b1ace50f7a..a12598dabb80 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -172,19 +172,6 @@ static void tcp_measure_rcv_mss(struct sock *sk, const struct sk_buff *skb)
if (unlikely(len > icsk->icsk_ack.rcv_mss +
MAX_TCP_OPTION_SPACE))
tcp_gro_dev_warn(sk, skb, len);
- /* If the skb has a len of exactly 1*MSS and has the PSH bit
- * set then it is likely the end of an application write. So
- * more data may not be arriving soon, and yet the data sender
- * may be waiting for an ACK if cwnd-bound or using TX zero
- * copy. So we set ICSK_ACK_PUSHED here so that
- * tcp_cleanup_rbuf() will send an ACK immediately if the app
- * reads all of the data and is not ping-pong. If len > MSS
- * then this logic does not matter (and does not hurt) because
- * tcp_cleanup_rbuf() will always ACK immediately if the app
- * reads data and there is more than an MSS of unACKed data.
- */
- if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_PSH)
- icsk->icsk_ack.pending |= ICSK_ACK_PUSHED;
} else {
/* Otherwise, we make more careful check taking into account,
* that SACKs block is variable.
--
2.25.1
2
1

16 Nov '23
driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I89D3P
CVE: NA
------------------------------------------
This commit is to support SPxxx RAID/HBA controllers.
RAID controllers support RAID 0/1/5/6/10/50/60 modes.
HBA controlllers support RAID 0/1/10 modes.
RAID/HBA support SAS/SATA HDD/SSD.
Signed-off-by: z00848923 <zhanglei48(a)huawei.com>
---
Documentation/scsi/hisi_raid.rst | 84 +
MAINTAINERS | 7 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
drivers/scsi/Kconfig | 1 +
drivers/scsi/Makefile | 1 +
drivers/scsi/hisi_raid/Kconfig | 14 +
drivers/scsi/hisi_raid/Makefile | 7 +
drivers/scsi/hisi_raid/hiraid.h | 760 +++++
drivers/scsi/hisi_raid/hiraid_main.c | 3982 ++++++++++++++++++++++++
10 files changed, 4858 insertions(+)
create mode 100644 Documentation/scsi/hisi_raid.rst
create mode 100644 drivers/scsi/hisi_raid/Kconfig
create mode 100644 drivers/scsi/hisi_raid/Makefile
create mode 100644 drivers/scsi/hisi_raid/hiraid.h
create mode 100644 drivers/scsi/hisi_raid/hiraid_main.c
diff --git a/Documentation/scsi/hisi_raid.rst b/Documentation/scsi/hisi_raid.rst
new file mode 100644
index 000000000000..523a6763a7fd
--- /dev/null
+++ b/Documentation/scsi/hisi_raid.rst
@@ -0,0 +1,84 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==============================================
+hisi_raid - HUAWEI SCSI RAID Controller driver
+==============================================
+
+This file describes the hisi_raid SCSI driver for HUAWEI
+RAID controllers. The hisi_raid driver is the first
+generation RAID driver.
+
+For hisi_raid controller support, enable the hisi_raid driver
+when configuring the kernel.
+
+hisi_raid specific entries in /sys
+=================================
+
+hisi_raid host attributes
+------------------------
+ - /sys/class/scsi_host/host*/csts_pp
+ - /sys/class/scsi_host/host*/csts_shst
+ - /sys/class/scsi_host/host*/csts_cfs
+ - /sys/class/scsi_host/host*/csts_rdy
+ - /sys/class/scsi_host/host*/fw_version
+
+ The host csts_pp attribute is a read only attribute. This attribute
+ indicates whether the controller is processing commands. If this attribute
+ is set to ‘1’, then the controller is processing commands normally. If
+ this attribute is cleared to ‘0’, then the controller has temporarily stopped
+ processing commands in order to handle an event (e.g., firmware activation).
+
+ The host csts_shst attribute is a read only attribute. This attribute
+ indicates status of shutdown processing.The shutdown status values are defined
+ as:
+ ====== ==============================
+ Value Definition
+ ====== ==============================
+ 00b Normal operation
+ 01b Shutdown processing occurring
+ 10b Shutdown processing complete
+ 11b Reserved
+ ====== ==============================
+ The host csts_cfs attribute is a read only attribute. This attribute is set to
+ ’1’ when a fatal controller error occurred that could not be communicated in the
+ appropriate Completion Queue. This bit is cleared to ‘0’ when a fatal controller
+ error has not occurred.
+
+ The host csts_rdy attribute is a read only attribute. This attribute is set to
+ ‘1’ when the controller is ready to process submission queue entries.
+
+ The fw_version attribute is read-only and will return the driver version and the
+ controller firmware version.
+
+hisi_raid scsi device attributes
+------------------------------
+ - /sys/class/scsi_device/X\:X\:X\:X/device/raid_level
+ - /sys/class/scsi_device/X\:X\:X\:X/device/raid_state
+ - /sys/class/scsi_device/X\:X\:X\:X/device/raid_resync
+
+ The device raid_level attribute is a read only attribute. This attribute indicates
+ RAID level of scsi device(will dispaly "NA" if scsi device is not virtual disk type).
+
+ The device raid_state attribute is read-only and indicates RAID status of scsi
+ device(will dispaly "NA" if scsi device is not virtual disk type).
+
+ The device raid_resync attribute is read-only and indicates RAID rebuild processing
+ of scsi device(will dispaly "NA" if scsi device is not virtual disk type).
+
+Supported devices
+=================
+
+ =================== ======= =======================================
+ PCI ID (pci.ids) OEM Product
+ =================== ======= =======================================
+ 19E5:3858 HUAWEI SP186-M-8i(HBA:8Ports)
+ 19E5:3858 HUAWEI SP186-M-16i(HBA:16Ports)
+ 19E5:3858 HUAWEI SP186-M-32i(HBA:32Ports)
+ 19E5:3858 HUAWEI SP186-M-40i(HBA:40Ports)
+ 19E5:3758 HUAWEI SP686C-M-16i(RAID:16Ports,2G cache)
+ 19E5:3758 HUAWEI SP686C-M-16i(RAID:16Ports,4G cache)
+ 19E5:3758 HUAWEI SP686C-MH-32i(RAID:32Ports,4G cache)
+ 19E5:3758 HUAWEI SP686C-M-40i(RAID:40Ports,2G cache)
+ 19E5:3758 HUAWEI SP686C-M-40i(RAID:40Ports,4G cache)
+ =================== ======= =======================================
+
diff --git a/MAINTAINERS b/MAINTAINERS
index a7815fd1072f..8324f56a2096 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8070,6 +8070,13 @@ M: Yonglong Liu <liuyonglong(a)huawei.com>
S: Supported
F: drivers/ptp/ptp_hisi.c
+HISI_RAID SCSI RAID DRIVERS
+M: Zhang Lei <zhanglei48(a)huawei.com>
+L: linux-scsi(a)vger.kernel.org
+S: Maintained
+F: Documentation/scsi/hisi_raid.rst
+F: drivers/scsi/hisi_raid/
+
HMM - Heterogeneous Memory Management
M: Jérôme Glisse <jglisse(a)redhat.com>
L: linux-mm(a)kvack.org
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index ec758f0530c1..b9a50ef6d768 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -2413,6 +2413,7 @@ CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_SMARTPQI=m
+CONFIG_SCSI_HISI_RAID=m
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_MYRB is not set
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 5171aa50736b..43b5294326e6 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -2369,6 +2369,7 @@ CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_SMARTPQI=m
+CONFIG_SCSI_HISI_RAID=m
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index a9da1b2dec4a..41ef664cf0ed 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -473,6 +473,7 @@ source "drivers/scsi/megaraid/Kconfig.megaraid"
source "drivers/scsi/sssraid/Kconfig"
source "drivers/scsi/mpt3sas/Kconfig"
source "drivers/scsi/smartpqi/Kconfig"
+source "drivers/scsi/hisi_raid/Kconfig"
source "drivers/scsi/ufs/Kconfig"
config SCSI_HPTIOP
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index c2a1efa16912..8f26dbb5ee37 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -101,6 +101,7 @@ obj-$(CONFIG_MEGARAID_LEGACY) += megaraid.o
obj-$(CONFIG_MEGARAID_NEWGEN) += megaraid/
obj-$(CONFIG_MEGARAID_SAS) += megaraid/
obj-$(CONFIG_SCSI_MPT3SAS) += mpt3sas/
+obj-$(CONFIG_SCSI_HISI_RAID) += hisi_raid/
obj-$(CONFIG_SCSI_UFSHCD) += ufs/
obj-$(CONFIG_SCSI_ACARD) += atp870u.o
obj-$(CONFIG_SCSI_SUNESP) += esp_scsi.o sun_esp.o
diff --git a/drivers/scsi/hisi_raid/Kconfig b/drivers/scsi/hisi_raid/Kconfig
new file mode 100644
index 000000000000..d402dc45a7c1
--- /dev/null
+++ b/drivers/scsi/hisi_raid/Kconfig
@@ -0,0 +1,14 @@
+#
+# Kernel configuration file for the hisi_raid
+#
+
+config SCSI_HISI_RAID
+ tristate "Huawei Hisi_Raid Adapter"
+ depends on PCI && SCSI
+ select BLK_DEV_BSGLIB
+ depends on ARM64 || X86_64
+ help
+ This driver supports hisi_raid SPxxx serial RAID controller, which has
+ PCI Express Gen4 interface with host and supports SAS/SATA HDD/SSD.
+ To compile this driver as a module, choose M here: the module will
+ be called hisi_raid.
diff --git a/drivers/scsi/hisi_raid/Makefile b/drivers/scsi/hisi_raid/Makefile
new file mode 100644
index 000000000000..b71a675f4190
--- /dev/null
+++ b/drivers/scsi/hisi_raid/Makefile
@@ -0,0 +1,7 @@
+#
+# Makefile for the hisi_raid drivers.
+#
+
+obj-$(CONFIG_SCSI_HISI_RAID) += hiraid.o
+
+hiraid-objs := hiraid_main.o
diff --git a/drivers/scsi/hisi_raid/hiraid.h b/drivers/scsi/hisi_raid/hiraid.h
new file mode 100644
index 000000000000..1ebc3dd3f2ec
--- /dev/null
+++ b/drivers/scsi/hisi_raid/hiraid.h
@@ -0,0 +1,760 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2022 Huawei Technologies Co., Ltd */
+
+#ifndef __HIRAID_H_
+#define __HIRAID_H_
+
+#define HIRAID_HDD_PD_QD 64
+#define HIRAID_HDD_VD_QD 256
+#define HIRAID_SSD_PD_QD 64
+#define HIRAID_SSD_VD_QD 256
+
+#define BGTASK_TYPE_REBUILD 4
+#define USR_CMD_READ 0xc2
+#define USR_CMD_RDLEN 0x1000
+#define USR_CMD_VDINFO 0x704
+#define USR_CMD_BGTASK 0x504
+#define VDINFO_PARAM_LEN 0x04
+
+#define HIRAID_DEFAULT_MAX_CHANNEL 4
+#define HIRAID_DEFAULT_MAX_ID 240
+#define HIRAID_DEFAULT_MAX_LUN_PER_HOST 8
+
+#define FUA_MASK 0x08
+
+#define HIRAID_IO_SQES 7
+#define HIRAID_IO_CQES 4
+#define PRP_ENTRY_SIZE 8
+
+#define EXTRA_POOL_SIZE 256
+#define MAX_EXTRA_POOL_NUM 16
+#define MAX_CMD_PER_DEV 64
+#define MAX_CDB_LEN 16
+
+#define HIRAID_AQ_DEPTH 128
+#define HIRAID_ASYN_COMMANDS 16
+#define HIRAID_AQ_BLK_MQ_DEPTH (HIRAID_AQ_DEPTH - HIRAID_ASYN_COMMANDS)
+#define HIRAID_AQ_MQ_TAG_DEPTH (HIRAID_AQ_BLK_MQ_DEPTH - 1)
+
+#define HIRAID_ADMIN_QUEUE_NUM 1
+#define HIRAID_PTHRU_CMDS_PERQ 1
+#define HIRAID_TOTAL_PTCMDS(qn) (HIRAID_PTHRU_CMDS_PERQ * (qn))
+
+#define HIRAID_DEV_INFO_ATTR_BOOT(attr) ((attr) & 0x01)
+#define HIRAID_DEV_INFO_ATTR_VD(attr) (((attr) & 0x02) == 0x0)
+#define HIRAID_DEV_INFO_ATTR_PT(attr) (((attr) & 0x22) == 0x02)
+#define HIRAID_DEV_INFO_ATTR_RAWDISK(attr) ((attr) & 0x20)
+#define HIRAID_DEV_DISK_TYPE(attr) ((attr) & 0x1e)
+
+#define HIRAID_DEV_INFO_FLAG_VALID(flag) ((flag) & 0x01)
+#define HIRAID_DEV_INFO_FLAG_CHANGE(flag) ((flag) & 0x02)
+
+#define HIRAID_CAP_MQES(cap) ((cap) & 0xffff)
+#define HIRAID_CAP_STRIDE(cap) (((cap) >> 32) & 0xf)
+#define HIRAID_CAP_MPSMIN(cap) (((cap) >> 48) & 0xf)
+#define HIRAID_CAP_MPSMAX(cap) (((cap) >> 52) & 0xf)
+#define HIRAID_CAP_TIMEOUT(cap) (((cap) >> 24) & 0xff)
+#define HIRAID_CAP_DMAMASK(cap) (((cap) >> 37) & 0xff)
+
+#define IO_SQE_SIZE sizeof(struct hiraid_scsi_io_cmd)
+#define ADMIN_SQE_SIZE sizeof(struct hiraid_admin_command)
+#define SQE_SIZE(qid) (((qid) > 0) ? IO_SQE_SIZE : ADMIN_SQE_SIZE)
+#define CQ_SIZE(depth) ((depth) * sizeof(struct hiraid_completion))
+#define SQ_SIZE(qid, depth) ((depth) * SQE_SIZE(qid))
+
+#define SENSE_SIZE(depth) ((depth) * SCSI_SENSE_BUFFERSIZE)
+
+#define IO_6_DEFAULT_TX_LEN 256
+
+#define MAX_DEV_ENTRY_PER_PAGE_4K 340
+
+#define MAX_REALTIME_BGTASK_NUM 32
+
+#define PCI_VENDOR_ID_HUAWEI_LOGIC 0x19E5
+#define HIRAID_SERVER_DEVICE_HBA_DID 0x3858
+#define HIRAID_SERVER_DEVICE_RAID_DID 0x3758
+
+enum {
+ HIRAID_SC_SUCCESS = 0x0,
+ HIRAID_SC_INVALID_OPCODE = 0x1,
+ HIRAID_SC_INVALID_FIELD = 0x2,
+
+ HIRAID_SC_ABORT_LIMIT = 0x103,
+ HIRAID_SC_ABORT_MISSING = 0x104,
+ HIRAID_SC_ASYNC_LIMIT = 0x105,
+
+ HIRAID_SC_DNR = 0x4000,
+};
+
+enum {
+ HIRAID_REG_CAP = 0x0000,
+ HIRAID_REG_CC = 0x0014,
+ HIRAID_REG_CSTS = 0x001c,
+ HIRAID_REG_AQA = 0x0024,
+ HIRAID_REG_ASQ = 0x0028,
+ HIRAID_REG_ACQ = 0x0030,
+ HIRAID_REG_DBS = 0x1000,
+};
+
+enum {
+ HIRAID_CC_ENABLE = 1 << 0,
+ HIRAID_CC_CSS_NVM = 0 << 4,
+ HIRAID_CC_MPS_SHIFT = 7,
+ HIRAID_CC_AMS_SHIFT = 11,
+ HIRAID_CC_SHN_SHIFT = 14,
+ HIRAID_CC_IOSQES_SHIFT = 16,
+ HIRAID_CC_IOCQES_SHIFT = 20,
+ HIRAID_CC_AMS_RR = 0 << HIRAID_CC_AMS_SHIFT,
+ HIRAID_CC_SHN_NONE = 0 << HIRAID_CC_SHN_SHIFT,
+ HIRAID_CC_IOSQES = HIRAID_IO_SQES << HIRAID_CC_IOSQES_SHIFT,
+ HIRAID_CC_IOCQES = HIRAID_IO_CQES << HIRAID_CC_IOCQES_SHIFT,
+ HIRAID_CC_SHN_NORMAL = 1 << HIRAID_CC_SHN_SHIFT,
+ HIRAID_CC_SHN_MASK = 3 << HIRAID_CC_SHN_SHIFT,
+ HIRAID_CSTS_CFS_SHIFT = 1,
+ HIRAID_CSTS_SHST_SHIFT = 2,
+ HIRAID_CSTS_PP_SHIFT = 5,
+ HIRAID_CSTS_RDY = 1 << 0,
+ HIRAID_CSTS_SHST_CMPLT = 2 << 2,
+ HIRAID_CSTS_SHST_MASK = 3 << 2,
+ HIRAID_CSTS_CFS_MASK = 1 << HIRAID_CSTS_CFS_SHIFT,
+ HIRAID_CSTS_PP_MASK = 1 << HIRAID_CSTS_PP_SHIFT,
+};
+
+enum {
+ HIRAID_ADMIN_DELETE_SQ = 0x00,
+ HIRAID_ADMIN_CREATE_SQ = 0x01,
+ HIRAID_ADMIN_DELETE_CQ = 0x04,
+ HIRAID_ADMIN_CREATE_CQ = 0x05,
+ HIRAID_ADMIN_ABORT_CMD = 0x08,
+ HIRAID_ADMIN_SET_FEATURES = 0x09,
+ HIRAID_ADMIN_ASYNC_EVENT = 0x0c,
+ HIRAID_ADMIN_GET_INFO = 0xc6,
+ HIRAID_ADMIN_RESET = 0xc8,
+};
+
+enum {
+ HIRAID_GET_CTRL_INFO = 0,
+ HIRAID_GET_DEVLIST_INFO = 1,
+};
+
+enum hiraid_rst_type {
+ HIRAID_RESET_TARGET = 0,
+ HIRAID_RESET_BUS = 1,
+};
+
+enum {
+ HIRAID_ASYN_EVENT_ERROR = 0,
+ HIRAID_ASYN_EVENT_NOTICE = 2,
+ HIRAID_ASYN_EVENT_VS = 7,
+};
+
+enum {
+ HIRAID_ASYN_DEV_CHANGED = 0x00,
+ HIRAID_ASYN_FW_ACT_START = 0x01,
+ HIRAID_ASYN_HOST_PROBING = 0x10,
+};
+
+enum {
+ HIRAID_ASYN_TIMESYN = 0x00,
+ HIRAID_ASYN_FW_ACT_FINISH = 0x02,
+ HIRAID_ASYN_EVENT_MIN = 0x80,
+ HIRAID_ASYN_EVENT_MAX = 0xff,
+};
+
+enum {
+ HIRAID_CMD_WRITE = 0x01,
+ HIRAID_CMD_READ = 0x02,
+
+ HIRAID_CMD_NONRW_NONE = 0x80,
+ HIRAID_CMD_NONRW_TODEV = 0x81,
+ HIRAID_CMD_NONRW_FROMDEV = 0x82,
+};
+
+enum {
+ HIRAID_QUEUE_PHYS_CONTIG = (1 << 0),
+ HIRAID_CQ_IRQ_ENABLED = (1 << 1),
+
+ HIRAID_FEATURE_NUM_QUEUES = 0x07,
+ HIRAID_FEATURE_ASYNC_EVENT = 0x0b,
+ HIRAID_FEATURE_TIMESTAMP = 0x0e,
+};
+
+enum hiraid_dev_state {
+ DEV_NEW,
+ DEV_LIVE,
+ DEV_RESETTING,
+ DEV_DELETING,
+ DEV_DEAD,
+};
+
+enum {
+ HIRAID_CARD_HBA,
+ HIRAID_CARD_RAID,
+};
+
+enum hiraid_cmd_type {
+ HIRAID_CMD_ADMIN,
+ HIRAID_CMD_PTHRU,
+};
+
+enum {
+ SQE_FLAG_SGL_METABUF = (1 << 6),
+ SQE_FLAG_SGL_METASEG = (1 << 7),
+ SQE_FLAG_SGL_ALL = SQE_FLAG_SGL_METABUF | SQE_FLAG_SGL_METASEG,
+};
+
+enum hiraid_cmd_state {
+ CMD_IDLE = 0,
+ CMD_FLIGHT = 1,
+ CMD_COMPLETE = 2,
+ CMD_TIMEOUT = 3,
+ CMD_TMO_COMPLETE = 4,
+};
+
+enum {
+ HIRAID_BSG_ADMIN,
+ HIRAID_BSG_IOPTHRU,
+};
+
+enum {
+ HIRAID_SAS_HDD_VD = 0x04,
+ HIRAID_SATA_HDD_VD = 0x08,
+ HIRAID_SAS_SSD_VD = 0x0c,
+ HIRAID_SATA_SSD_VD = 0x10,
+ HIRAID_NVME_SSD_VD = 0x14,
+ HIRAID_SAS_HDD_PD = 0x06,
+ HIRAID_SATA_HDD_PD = 0x0a,
+ HIRAID_SAS_SSD_PD = 0x0e,
+ HIRAID_SATA_SSD_PD = 0x12,
+ HIRAID_NVME_SSD_PD = 0x16,
+};
+
+enum {
+ DISPATCH_BY_CPU,
+ DISPATCH_BY_DISK,
+};
+
+struct hiraid_completion {
+ __le32 result;
+ union {
+ struct {
+ __u8 sense_len;
+ __u8 resv[3];
+ };
+ __le32 result1;
+ };
+ __le16 sq_head;
+ __le16 sq_id;
+ __le16 cmd_id;
+ __le16 status;
+};
+
+struct hiraid_ctrl_info {
+ __le32 nd;
+ __le16 max_cmds;
+ __le16 max_channel;
+ __le32 max_tgt_id;
+ __le16 max_lun;
+ __le16 max_num_sge;
+ __le16 lun_num_boot;
+ __u8 mdts;
+ __u8 acl;
+ __u8 asynevent;
+ __u8 card_type;
+ __u8 pt_use_sgl;
+ __u8 rsvd;
+ __le32 rtd3e;
+ __u8 sn[32];
+ __u8 fw_version[16];
+ __u8 rsvd1[4020];
+};
+
+struct hiraid_dev {
+ struct pci_dev *pdev;
+ struct device *dev;
+ struct Scsi_Host *shost;
+ struct hiraid_queue *queues;
+ struct dma_pool *prp_page_pool;
+ struct dma_pool *prp_extra_pool[MAX_EXTRA_POOL_NUM];
+ void __iomem *bar;
+ u32 max_qid;
+ u32 num_vecs;
+ u32 queue_count;
+ u32 ioq_depth;
+ u32 db_stride;
+ u32 __iomem *dbs;
+ struct rw_semaphore dev_rwsem;
+ int numa_node;
+ u32 page_size;
+ u32 ctrl_config;
+ u32 online_queues;
+ u64 cap;
+ u32 scsi_qd;
+ u32 instance;
+ struct hiraid_ctrl_info *ctrl_info;
+ struct hiraid_dev_info *dev_info;
+
+ struct hiraid_cmd *adm_cmds;
+ struct list_head adm_cmd_list;
+ spinlock_t adm_cmd_lock;
+
+ struct hiraid_cmd *io_ptcmds;
+ struct list_head io_pt_list;
+ spinlock_t io_pt_lock;
+
+ struct work_struct scan_work;
+ struct work_struct timesyn_work;
+ struct work_struct reset_work;
+ struct work_struct fwact_work;
+
+ enum hiraid_dev_state state;
+ spinlock_t state_lock;
+
+ void *sense_buffer_virt;
+ dma_addr_t sense_buffer_phy;
+ u32 last_qcnt;
+ u8 hdd_dispatch;
+
+ struct request_queue *bsg_queue;
+};
+
+struct hiraid_sgl_desc {
+ __le64 addr;
+ __le32 length;
+ __u8 rsvd[3];
+ __u8 type;
+};
+
+union hiraid_data_ptr {
+ struct {
+ __le64 prp1;
+ __le64 prp2;
+ };
+ struct hiraid_sgl_desc sgl;
+};
+
+struct hiraid_admin_com_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __le32 cdw2[4];
+ union hiraid_data_ptr dptr;
+ __le32 cdw10;
+ __le32 cdw11;
+ __le32 cdw12;
+ __le32 cdw13;
+ __le32 cdw14;
+ __le32 cdw15;
+};
+
+struct hiraid_features {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __u64 rsvd2[2];
+ union hiraid_data_ptr dptr;
+ __le32 fid;
+ __le32 dword11;
+ __le32 dword12;
+ __le32 dword13;
+ __le32 dword14;
+ __le32 dword15;
+};
+
+struct hiraid_create_cq {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __u32 rsvd1[5];
+ __le64 prp1;
+ __u64 rsvd8;
+ __le16 cqid;
+ __le16 qsize;
+ __le16 cq_flags;
+ __le16 irq_vector;
+ __u32 rsvd12[4];
+};
+
+struct hiraid_create_sq {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __u32 rsvd1[5];
+ __le64 prp1;
+ __u64 rsvd8;
+ __le16 sqid;
+ __le16 qsize;
+ __le16 sq_flags;
+ __le16 cqid;
+ __u32 rsvd12[4];
+};
+
+struct hiraid_delete_queue {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __u32 rsvd1[9];
+ __le16 qid;
+ __u16 rsvd10;
+ __u32 rsvd11[5];
+};
+
+struct hiraid_get_info {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __u32 rsvd2[4];
+ union hiraid_data_ptr dptr;
+ __u8 type;
+ __u8 rsvd10[3];
+ __le32 cdw11;
+ __u32 rsvd12[4];
+};
+
+struct hiraid_usr_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ union {
+ struct {
+ __le16 subopcode;
+ __le16 rsvd1;
+ } info_0;
+ __le32 cdw2;
+ };
+ union {
+ struct {
+ __le16 data_len;
+ __le16 param_len;
+ } info_1;
+ __le32 cdw3;
+ };
+ __u64 metadata;
+ union hiraid_data_ptr dptr;
+ __le32 cdw10;
+ __le32 cdw11;
+ __le32 cdw12;
+ __le32 cdw13;
+ __le32 cdw14;
+ __le32 cdw15;
+};
+
+struct hiraid_abort_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __u64 rsvd2[4];
+ __le16 sqid;
+ __le16 cid;
+ __u32 rsvd11[5];
+};
+
+struct hiraid_reset_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __u64 rsvd2[4];
+ __u8 type;
+ __u8 rsvd10[3];
+ __u32 rsvd11[5];
+};
+
+struct hiraid_admin_command {
+ union {
+ struct hiraid_admin_com_cmd common;
+ struct hiraid_features features;
+ struct hiraid_create_cq create_cq;
+ struct hiraid_create_sq create_sq;
+ struct hiraid_delete_queue delete_queue;
+ struct hiraid_get_info get_info;
+ struct hiraid_abort_cmd abort;
+ struct hiraid_reset_cmd reset;
+ struct hiraid_usr_cmd usr_cmd;
+ };
+};
+
+struct hiraid_scsi_io_com_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __le16 sense_len;
+ __u8 cdb_len;
+ __u8 rsvd2;
+ __le32 cdw3[3];
+ union hiraid_data_ptr dptr;
+ __le32 cdw10[6];
+ __u8 cdb[32];
+ __le64 sense_addr;
+ __le32 cdw26[6];
+};
+
+struct hiraid_scsi_rw_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __le16 sense_len;
+ __u8 cdb_len;
+ __u8 rsvd2;
+ __u32 rsvd3[3];
+ union hiraid_data_ptr dptr;
+ __le64 slba;
+ __le16 nlb;
+ __le16 control;
+ __u32 rsvd13[3];
+ __u8 cdb[32];
+ __le64 sense_addr;
+ __u32 rsvd26[6];
+};
+
+struct hiraid_scsi_nonrw_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __le16 cmd_id;
+ __le32 hdid;
+ __le16 sense_len;
+ __u8 cdb_length;
+ __u8 rsvd2;
+ __u32 rsvd3[3];
+ union hiraid_data_ptr dptr;
+ __u32 rsvd10[5];
+ __le32 buf_len;
+ __u8 cdb[32];
+ __le64 sense_addr;
+ __u32 rsvd26[6];
+};
+
+struct hiraid_scsi_io_cmd {
+ union {
+ struct hiraid_scsi_io_com_cmd common;
+ struct hiraid_scsi_rw_cmd rw;
+ struct hiraid_scsi_nonrw_cmd nonrw;
+ };
+};
+
+struct hiraid_passthru_common_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __u16 rsvd0;
+ __u32 nsid;
+ union {
+ struct {
+ __u16 subopcode;
+ __u16 rsvd1;
+ } info_0;
+ __u32 cdw2;
+ };
+ union {
+ struct {
+ __u16 data_len;
+ __u16 param_len;
+ } info_1;
+ __u32 cdw3;
+ };
+ __u64 metadata;
+
+ __u64 addr;
+ __u64 prp2;
+
+ __u32 cdw10;
+ __u32 cdw11;
+ __u32 cdw12;
+ __u32 cdw13;
+ __u32 cdw14;
+ __u32 cdw15;
+ __u32 timeout_ms;
+ __u32 result0;
+ __u32 result1;
+};
+
+struct hiraid_passthru_io_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __u16 rsvd0;
+ __u32 nsid;
+ union {
+ struct {
+ __u16 res_sense_len;
+ __u8 cdb_len;
+ __u8 rsvd0;
+ } info_0;
+ __u32 cdw2;
+ };
+ union {
+ struct {
+ __u16 subopcode;
+ __u16 rsvd1;
+ } info_1;
+ __u32 cdw3;
+ };
+ union {
+ struct {
+ __u16 rsvd;
+ __u16 param_len;
+ } info_2;
+ __u32 cdw4;
+ };
+ __u32 cdw5;
+ __u64 addr;
+ __u64 prp2;
+ union {
+ struct {
+ __u16 eid;
+ __u16 sid;
+ } info_3;
+ __u32 cdw10;
+ };
+ union {
+ struct {
+ __u16 did;
+ __u8 did_flag;
+ __u8 rsvd2;
+ } info_4;
+ __u32 cdw11;
+ };
+ __u32 cdw12;
+ __u32 cdw13;
+ __u32 cdw14;
+ __u32 data_len;
+ __u32 cdw16;
+ __u32 cdw17;
+ __u32 cdw18;
+ __u32 cdw19;
+ __u32 cdw20;
+ __u32 cdw21;
+ __u32 cdw22;
+ __u32 cdw23;
+ __u64 sense_addr;
+ __u32 cdw26[4];
+ __u32 timeout_ms;
+ __u32 result0;
+ __u32 result1;
+};
+
+struct hiraid_bsg_request {
+ u32 msgcode;
+ u32 control;
+ union {
+ struct hiraid_passthru_common_cmd admcmd;
+ struct hiraid_passthru_io_cmd pthrucmd;
+ };
+};
+
+struct hiraid_cmd {
+ u16 qid;
+ u16 cid;
+ u32 result0;
+ u32 result1;
+ u16 status;
+ void *priv;
+ enum hiraid_cmd_state state;
+ struct completion cmd_done;
+ struct list_head list;
+};
+
+struct hiraid_queue {
+ struct hiraid_dev *hdev;
+ spinlock_t sq_lock;
+
+ spinlock_t cq_lock ____cacheline_aligned_in_smp;
+
+ void *sq_cmds;
+
+ struct hiraid_completion *cqes;
+
+ dma_addr_t sq_buffer_phy;
+ dma_addr_t cq_buffer_phy;
+ u32 __iomem *q_db;
+ u8 cq_phase;
+ u8 sqes;
+ u16 qid;
+ u16 sq_tail;
+ u16 cq_head;
+ u16 last_cq_head;
+ u16 q_depth;
+ s16 cq_vector;
+ atomic_t inflight;
+ void *sense_buffer_virt;
+ dma_addr_t sense_buffer_phy;
+ struct dma_pool *prp_small_pool;
+};
+
+struct hiraid_mapmange {
+ struct hiraid_queue *hiraidq;
+ enum hiraid_cmd_state state;
+ u16 cid;
+ int page_cnt;
+ u32 sge_cnt;
+ u32 len;
+ bool use_sgl;
+ dma_addr_t first_dma;
+ void *sense_buffer_virt;
+ dma_addr_t sense_buffer_phy;
+ struct scatterlist *sgl;
+ void *list[0];
+};
+
+struct hiraid_vd_info {
+ __u8 name[32];
+ __le16 id;
+ __u8 rg_id;
+ __u8 rg_level;
+ __u8 sg_num;
+ __u8 sg_disk_num;
+ __u8 vd_status;
+ __u8 vd_type;
+ __u8 rsvd1[4056];
+};
+
+struct bgtask_info {
+ __u8 type;
+ __u8 progress;
+ __u8 rate;
+ __u8 rsvd0;
+ __le16 vd_id;
+ __le16 time_left;
+ __u8 rsvd1[4];
+};
+
+struct hiraid_bgtask {
+ __u8 sw;
+ __u8 task_num;
+ __u8 rsvd[6];
+ struct bgtask_info bgtask[MAX_REALTIME_BGTASK_NUM];
+};
+
+struct hiraid_dev_info {
+ __le32 hdid;
+ __le16 target;
+ __u8 channel;
+ __u8 lun;
+ __u8 attr;
+ __u8 flag;
+ __le16 max_io_kb;
+};
+
+struct hiraid_dev_list {
+ __le32 dev_num;
+ __u32 rsvd0[3];
+ struct hiraid_dev_info devinfo[MAX_DEV_ENTRY_PER_PAGE_4K];
+};
+
+struct hiraid_sdev_hostdata {
+ u32 hdid;
+ u16 max_io_kb;
+ u8 attr;
+ u8 flag;
+ u8 rg_id;
+ u8 hwq;
+ u16 pend_count;
+};
+
+#endif
+
diff --git a/drivers/scsi/hisi_raid/hiraid_main.c b/drivers/scsi/hisi_raid/hiraid_main.c
new file mode 100644
index 000000000000..b9ffa642479c
--- /dev/null
+++ b/drivers/scsi/hisi_raid/hiraid_main.c
@@ -0,0 +1,3982 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2022 Huawei Technologies Co., Ltd */
+
+/* Huawei Raid Series Linux Driver */
+
+#define pr_fmt(fmt) "hiraid: " fmt
+
+#include <linux/sched/signal.h>
+#include <linux/version.h>
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/module.h>
+#include <linux/ioport.h>
+#include <linux/device.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/cdev.h>
+#include <linux/sysfs.h>
+#include <linux/gfp.h>
+#include <linux/types.h>
+#include <linux/ratelimit.h>
+#include <linux/once.h>
+#include <linux/debugfs.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/blkdev.h>
+#include <linux/bsg-lib.h>
+#include <asm/unaligned.h>
+#include <linux/sort.h>
+#include <target/target_core_backend.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport.h>
+#include <scsi/scsi_dbg.h>
+#include <scsi/sg.h>
+
+#include "hiraid.h"
+
+static u32 admin_tmout = 60;
+module_param(admin_tmout, uint, 0644);
+MODULE_PARM_DESC(admin_tmout, "admin commands timeout (seconds)");
+
+static u32 scmd_tmout_rawdisk = 180;
+module_param(scmd_tmout_rawdisk, uint, 0644);
+MODULE_PARM_DESC(scmd_tmout_rawdisk, "scsi commands timeout for rawdisk(seconds)");
+
+static u32 scmd_tmout_vd = 180;
+module_param(scmd_tmout_vd, uint, 0644);
+MODULE_PARM_DESC(scmd_tmout_vd, "scsi commands timeout for vd(seconds)");
+
+static bool max_io_force;
+module_param(max_io_force, bool, 0644);
+MODULE_PARM_DESC(max_io_force, "force max_hw_sectors_kb = 1024, default false(performance first)");
+
+static bool work_mode;
+module_param(work_mode, bool, 0444);
+MODULE_PARM_DESC(work_mode, "work mode switch, default false for multi hw queues");
+
+#define MAX_IO_QUEUES 128
+#define MIN_IO_QUEUES 1
+
+static int ioq_num_set(const char *val, const struct kernel_param *kp)
+{
+ int n = 0;
+ int ret;
+
+ ret = kstrtoint(val, 10, &n);
+ if (ret != 0 || n < MIN_IO_QUEUES || n > MAX_IO_QUEUES)
+ return -EINVAL;
+
+ return param_set_int(val, kp);
+}
+
+static const struct kernel_param_ops max_hwq_num_ops = {
+ .set = ioq_num_set,
+ .get = param_get_uint,
+};
+
+static u32 max_hwq_num = 128;
+module_param_cb(max_hwq_num, &max_hwq_num_ops, &max_hwq_num, 0444);
+MODULE_PARM_DESC(max_hwq_num, "max num of hw io queues, should >= 1, default 128");
+
+static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
+{
+ int n = 0;
+ int ret;
+
+ ret = kstrtoint(val, 10, &n);
+ if (ret != 0 || n < 2)
+ return -EINVAL;
+
+ return param_set_int(val, kp);
+}
+
+static const struct kernel_param_ops io_queue_depth_ops = {
+ .set = io_queue_depth_set,
+ .get = param_get_uint,
+};
+
+static u32 io_queue_depth = 1024;
+module_param_cb(io_queue_depth, &io_queue_depth_ops, &io_queue_depth, 0644);
+MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2");
+
+static u32 log_debug_switch;
+module_param(log_debug_switch, uint, 0644);
+MODULE_PARM_DESC(log_debug_switch, "set log state, default zero for switch off");
+
+static int extra_pool_num_set(const char *val, const struct kernel_param *kp)
+{
+ u8 n = 0;
+ int ret;
+
+ ret = kstrtou8(val, 10, &n);
+ if (ret != 0)
+ return -EINVAL;
+ if (n > MAX_EXTRA_POOL_NUM)
+ n = MAX_EXTRA_POOL_NUM;
+ if (n < 1)
+ n = 1;
+ *((u8 *)kp->arg) = n;
+
+ return 0;
+}
+
+static const struct kernel_param_ops small_pool_num_ops = {
+ .set = extra_pool_num_set,
+ .get = param_get_byte,
+};
+
+/* It was found that the spindlock of a single pool conflicts
+ * a lot with multiple CPUs.So multiple pools are introduced
+ * to reduce the conflictions.
+ */
+static unsigned char extra_pool_num = 4;
+module_param_cb(extra_pool_num, &small_pool_num_ops, &extra_pool_num, 0644);
+MODULE_PARM_DESC(extra_pool_num, "set prp extra pool num, default 4, MAX 16");
+
+static void hiraid_handle_async_notice(struct hiraid_dev *hdev, u32 result);
+static void hiraid_handle_async_vs(struct hiraid_dev *hdev, u32 result, u32 result1);
+
+static struct class *hiraid_class;
+
+#define HIRAID_CAP_TIMEOUT_UNIT_MS (HZ / 2)
+
+static struct workqueue_struct *work_queue;
+
+#define dev_log_dbg(dev, fmt, ...) do { \
+ if (unlikely(log_debug_switch)) \
+ dev_info(dev, "[%s] " fmt, \
+ __func__, ##__VA_ARGS__); \
+} while (0)
+
+#define HIRAID_DRV_VERSION "1.1.0.0"
+
+#define ADMIN_TIMEOUT (admin_tmout * HZ)
+#define USRCMD_TIMEOUT (180 * HZ)
+#define CTL_RST_TIME (600 * HZ)
+
+#define HIRAID_WAIT_ABNL_CMD_TIMEOUT 6
+#define HIRAID_WAIT_RST_IO_TIMEOUT 10
+
+#define HIRAID_DMA_MSK_BIT_MAX 64
+
+#define IOQ_PT_DATA_LEN 4096
+#define IOQ_PT_SGL_DATA_LEN (1024 * 1024)
+
+#define MAX_CAN_QUEUE (4096 - 1)
+#define MIN_CAN_QUEUE (1024 - 1)
+
+enum SENSE_STATE_CODE {
+ SENSE_STATE_OK = 0,
+ SENSE_STATE_NEED_CHECK,
+ SENSE_STATE_ERROR,
+ SENSE_STATE_EP_PCIE_ERROR,
+ SENSE_STATE_NAC_DMA_ERROR,
+ SENSE_STATE_ABORTED,
+ SENSE_STATE_NEED_RETRY
+};
+
+enum {
+ FW_EH_OK = 0,
+ FW_EH_DEV_NONE = 0x701
+};
+
+static const char * const raid_levels[] = {"0", "1", "5", "6", "10", "50", "60", "NA"};
+
+static const char * const raid_states[] = {
+ "NA", "NORMAL", "FAULT", "DEGRADE", "NOT_FORMATTED", "FORMATTING", "SANITIZING",
+ "INITIALIZING", "INITIALIZE_FAIL", "DELETING", "DELETE_FAIL", "WRITE_PROTECT"
+};
+
+static int hiraid_remap_bar(struct hiraid_dev *hdev, u32 size)
+{
+ struct pci_dev *pdev = hdev->pdev;
+
+ if (size > pci_resource_len(pdev, 0)) {
+ dev_err(hdev->dev, "input size[%u] exceed bar0 length[%llu]\n",
+ size, pci_resource_len(pdev, 0));
+ return -ENOMEM;
+ }
+
+ if (hdev->bar)
+ iounmap(hdev->bar);
+
+ hdev->bar = ioremap(pci_resource_start(pdev, 0), size);
+ if (!hdev->bar) {
+ dev_err(hdev->dev, "ioremap for bar0 failed\n");
+ return -ENOMEM;
+ }
+ hdev->dbs = hdev->bar + HIRAID_REG_DBS;
+
+ return 0;
+}
+
+static int hiraid_dev_map(struct hiraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ int ret;
+
+ ret = pci_request_mem_regions(pdev, "hiraid");
+ if (ret) {
+ dev_err(hdev->dev, "fail to request memory regions\n");
+ return ret;
+ }
+
+ ret = hiraid_remap_bar(hdev, HIRAID_REG_DBS + 4096);
+ if (ret) {
+ pci_release_mem_regions(pdev);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void hiraid_dev_unmap(struct hiraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+
+ if (hdev->bar) {
+ iounmap(hdev->bar);
+ hdev->bar = NULL;
+ }
+ pci_release_mem_regions(pdev);
+}
+
+static int hiraid_pci_enable(struct hiraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ int ret = -ENOMEM;
+ u64 maskbit = HIRAID_DMA_MSK_BIT_MAX;
+
+ if (pci_enable_device_mem(pdev)) {
+ dev_err(hdev->dev, "enable pci device memory resources failed\n");
+ return ret;
+ }
+ pci_set_master(pdev);
+
+ if (readl(hdev->bar + HIRAID_REG_CSTS) == U32_MAX) {
+ ret = -ENODEV;
+ dev_err(hdev->dev, "read CSTS register failed\n");
+ goto disable;
+ }
+
+ hdev->cap = lo_hi_readq(hdev->bar + HIRAID_REG_CAP);
+ hdev->ioq_depth = min_t(u32, HIRAID_CAP_MQES(hdev->cap) + 1, io_queue_depth);
+ hdev->db_stride = 1 << HIRAID_CAP_STRIDE(hdev->cap);
+
+ maskbit = HIRAID_CAP_DMAMASK(hdev->cap);
+ if (maskbit < 32 || maskbit > HIRAID_DMA_MSK_BIT_MAX) {
+ dev_err(hdev->dev, "err, dma mask invalid[%llu], set to default\n", maskbit);
+ maskbit = HIRAID_DMA_MSK_BIT_MAX;
+ }
+
+ if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(maskbit))) {
+ if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
+ dev_err(hdev->dev, "set dma mask[32] and coherent failed\n");
+ goto disable;
+ }
+ dev_info(hdev->dev, "set dma mask[32] success\n");
+ } else {
+ dev_info(hdev->dev, "set dma mask[%llu] success\n", maskbit);
+ }
+
+ ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
+ if (ret < 0) {
+ dev_err(hdev->dev, "allocate one IRQ for setup admin queue failed\n");
+ goto disable;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+ pci_save_state(pdev);
+
+ return 0;
+
+disable:
+ pci_disable_device(pdev);
+ return ret;
+}
+
+
+/*
+ * It is fact that first prp and last prp may be not full page.
+ * The size to count total nprps for the io equal to size + page_size,
+ * it may be a slightly overestimate.
+ *
+ * 8B per prp address. It may be there is one prp_list address per page,
+ * prp_list address does not count in io data prps. So divisor equal to
+ * PAGE_SIZE - 8, it may be a slightly overestimate.
+ */
+static int hiraid_prp_pagenum(struct hiraid_dev *hdev)
+{
+ u32 size = 1U << ((hdev->ctrl_info->mdts) * 1U) << 12;
+ u32 nprps = DIV_ROUND_UP(size + hdev->page_size, hdev->page_size);
+
+ return DIV_ROUND_UP(PRP_ENTRY_SIZE * nprps, hdev->page_size - PRP_ENTRY_SIZE);
+}
+
+/*
+ * Calculates the number of pages needed for the SGL segments. For example a 4k
+ * page can accommodate 256 SGL descriptors.
+ */
+static int hiraid_sgl_pagenum(struct hiraid_dev *hdev)
+{
+ u32 nsge = le16_to_cpu(hdev->ctrl_info->max_num_sge);
+
+ return DIV_ROUND_UP(nsge * sizeof(struct hiraid_sgl_desc), hdev->page_size);
+}
+
+static inline void **hiraid_mapbuf_list(struct hiraid_mapmange *mapbuf)
+{
+ return mapbuf->list;
+}
+
+static u32 hiraid_get_max_cmd_size(struct hiraid_dev *hdev)
+{
+ u32 alloc_size = sizeof(__le64 *) * max(hiraid_prp_pagenum(hdev), hiraid_sgl_pagenum(hdev));
+
+ dev_info(hdev->dev, "mapbuf size[%lu], alloc_size[%u]\n",
+ sizeof(struct hiraid_mapmange), alloc_size);
+
+ return sizeof(struct hiraid_mapmange) + alloc_size;
+}
+
+static int hiraid_build_passthru_prp(struct hiraid_dev *hdev, struct hiraid_mapmange *mapbuf)
+{
+ struct scatterlist *sg = mapbuf->sgl;
+ __le64 *phy_regpage, *prior_list;
+ u64 buf_addr = sg_dma_address(sg);
+ int buf_length = sg_dma_len(sg);
+ u32 page_size = hdev->page_size;
+ int offset = buf_addr & (page_size - 1);
+ void **list = hiraid_mapbuf_list(mapbuf);
+ int maplen = mapbuf->len;
+ struct dma_pool *pool;
+ dma_addr_t buffer_phy;
+ int i;
+
+ maplen -= (page_size - offset);
+ if (maplen <= 0) {
+ mapbuf->first_dma = 0;
+ return 0;
+ }
+
+ buf_length -= (page_size - offset);
+ if (buf_length) {
+ buf_addr += (page_size - offset);
+ } else {
+ sg = sg_next(sg);
+ buf_addr = sg_dma_address(sg);
+ buf_length = sg_dma_len(sg);
+ }
+
+ if (maplen <= page_size) {
+ mapbuf->first_dma = buf_addr;
+ return 0;
+ }
+
+ pool = hdev->prp_page_pool;
+ mapbuf->page_cnt = 1;
+
+ phy_regpage = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!phy_regpage) {
+ dev_err_ratelimited(hdev->dev, "allocate first admin prp_list memory failed\n");
+ mapbuf->first_dma = buf_addr;
+ mapbuf->page_cnt = -1;
+ return -ENOMEM;
+ }
+ list[0] = phy_regpage;
+ mapbuf->first_dma = buffer_phy;
+ i = 0;
+ for (;;) {
+ if (i == page_size / PRP_ENTRY_SIZE) {
+ prior_list = phy_regpage;
+
+ phy_regpage = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!phy_regpage) {
+ dev_err_ratelimited(hdev->dev, "allocate [%d]th admin prp list memory failed\n",
+ mapbuf->page_cnt + 1);
+ return -ENOMEM;
+ }
+ list[mapbuf->page_cnt++] = phy_regpage;
+ phy_regpage[0] = prior_list[i - 1];
+ prior_list[i - 1] = cpu_to_le64(buffer_phy);
+ i = 1;
+ }
+ phy_regpage[i++] = cpu_to_le64(buf_addr);
+ buf_addr += page_size;
+ buf_length -= page_size;
+ maplen -= page_size;
+ if (maplen <= 0)
+ break;
+ if (buf_length > 0)
+ continue;
+ if (unlikely(buf_length < 0))
+ goto bad_admin_sgl;
+ sg = sg_next(sg);
+ buf_addr = sg_dma_address(sg);
+ buf_length = sg_dma_len(sg);
+ }
+
+ return 0;
+
+bad_admin_sgl:
+ dev_err(hdev->dev, "setup prps, invalid admin SGL for payload[%d] nents[%d]\n",
+ mapbuf->len, mapbuf->sge_cnt);
+ return -EIO;
+}
+
+static int hiraid_build_prp(struct hiraid_dev *hdev, struct hiraid_mapmange *mapbuf)
+{
+ struct scatterlist *sg = mapbuf->sgl;
+ __le64 *phy_regpage, *prior_list;
+ u64 buf_addr = sg_dma_address(sg);
+ int buf_length = sg_dma_len(sg);
+ u32 page_size = hdev->page_size;
+ int offset = buf_addr & (page_size - 1);
+ void **list = hiraid_mapbuf_list(mapbuf);
+ int maplen = mapbuf->len;
+ struct dma_pool *pool;
+ dma_addr_t buffer_phy;
+ int nprps, i;
+
+ maplen -= (page_size - offset);
+ if (maplen <= 0) {
+ mapbuf->first_dma = 0;
+ return 0;
+ }
+
+ buf_length -= (page_size - offset);
+ if (buf_length) {
+ buf_addr += (page_size - offset);
+ } else {
+ sg = sg_next(sg);
+ buf_addr = sg_dma_address(sg);
+ buf_length = sg_dma_len(sg);
+ }
+
+ if (maplen <= page_size) {
+ mapbuf->first_dma = buf_addr;
+ return 0;
+ }
+
+ nprps = DIV_ROUND_UP(maplen, page_size);
+ if (nprps <= (EXTRA_POOL_SIZE / PRP_ENTRY_SIZE)) {
+ pool = mapbuf->hiraidq->prp_small_pool;
+ mapbuf->page_cnt = 0;
+ } else {
+ pool = hdev->prp_page_pool;
+ mapbuf->page_cnt = 1;
+ }
+
+ phy_regpage = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!phy_regpage) {
+ dev_err_ratelimited(hdev->dev, "allocate first prp_list memory failed\n");
+ mapbuf->first_dma = buf_addr;
+ mapbuf->page_cnt = -1;
+ return -ENOMEM;
+ }
+ list[0] = phy_regpage;
+ mapbuf->first_dma = buffer_phy;
+ i = 0;
+ for (;;) {
+ if (i == page_size / PRP_ENTRY_SIZE) {
+ prior_list = phy_regpage;
+
+ phy_regpage = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!phy_regpage) {
+ dev_err_ratelimited(hdev->dev, "allocate [%d]th prp list memory failed\n",
+ mapbuf->page_cnt + 1);
+ return -ENOMEM;
+ }
+ list[mapbuf->page_cnt++] = phy_regpage;
+ phy_regpage[0] = prior_list[i - 1];
+ prior_list[i - 1] = cpu_to_le64(buffer_phy);
+ i = 1;
+ }
+ phy_regpage[i++] = cpu_to_le64(buf_addr);
+ buf_addr += page_size;
+ buf_length -= page_size;
+ maplen -= page_size;
+ if (maplen <= 0)
+ break;
+ if (buf_length > 0)
+ continue;
+ if (unlikely(buf_length < 0))
+ goto bad_sgl;
+ sg = sg_next(sg);
+ buf_addr = sg_dma_address(sg);
+ buf_length = sg_dma_len(sg);
+ }
+
+ return 0;
+
+bad_sgl:
+ dev_err(hdev->dev, "setup prps, invalid SGL for payload[%d] nents[%d]\n",
+ mapbuf->len, mapbuf->sge_cnt);
+ return -EIO;
+}
+
+#define SGES_PER_PAGE (PAGE_SIZE / sizeof(struct hiraid_sgl_desc))
+
+static void hiraid_submit_cmd(struct hiraid_queue *hiraidq, const void *cmd)
+{
+ u32 sqes = SQE_SIZE(hiraidq->qid);
+ unsigned long flags;
+ struct hiraid_admin_com_cmd *acd = (struct hiraid_admin_com_cmd *)cmd;
+
+ spin_lock_irqsave(&hiraidq->sq_lock, flags);
+ memcpy((hiraidq->sq_cmds + sqes * hiraidq->sq_tail), cmd, sqes);
+ if (++hiraidq->sq_tail == hiraidq->q_depth)
+ hiraidq->sq_tail = 0;
+
+ writel(hiraidq->sq_tail, hiraidq->q_db);
+ spin_unlock_irqrestore(&hiraidq->sq_lock, flags);
+
+ dev_log_dbg(hiraidq->hdev->dev, "cid[%d] qid[%d] opcode[0x%x] flags[0x%x] hdid[%u]\n",
+ le16_to_cpu(acd->cmd_id), hiraidq->qid, acd->opcode, acd->flags,
+ le32_to_cpu(acd->hdid));
+}
+
+static inline bool hiraid_is_rw_scmd(struct scsi_cmnd *scmd)
+{
+ switch (scmd->cmnd[0]) {
+ case READ_6:
+ case READ_10:
+ case READ_12:
+ case READ_16:
+ case WRITE_6:
+ case WRITE_10:
+ case WRITE_12:
+ case WRITE_16:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/*
+ * checks if prps can be built for the IO cmd
+ */
+static bool hiraid_is_prp(struct hiraid_dev *hdev, struct scatterlist *sgl, u32 nsge)
+{
+ struct scatterlist *sg = sgl;
+ u32 page_mask = hdev->page_size - 1;
+ bool is_prp = true;
+ u32 i = 0;
+
+ for_each_sg(sgl, sg, nsge, i) {
+ /*
+ * Data length of the middle sge multiple of page_size,
+ * address page_size aligned.
+ */
+ if (i != 0 && i != nsge - 1) {
+ if ((sg_dma_len(sg) & page_mask) ||
+ (sg_dma_address(sg) & page_mask)) {
+ is_prp = false;
+ break;
+ }
+ }
+
+ /*
+ * The first sge addr plus the data length meets
+ * the page_size alignment.
+ */
+ if (nsge > 1 && i == 0) {
+ if ((sg_dma_address(sg) + sg_dma_len(sg)) & page_mask) {
+ is_prp = false;
+ break;
+ }
+ }
+
+ /* The last sge addr meets the page_size alignment. */
+ if (nsge > 1 && i == (nsge - 1)) {
+ if (sg_dma_address(sg) & page_mask) {
+ is_prp = false;
+ break;
+ }
+ }
+ }
+
+ return is_prp;
+}
+
+enum {
+ HIRAID_SGL_FMT_DATA_DESC = 0x00,
+ HIRAID_SGL_FMT_SEG_DESC = 0x02,
+ HIRAID_SGL_FMT_LAST_SEG_DESC = 0x03,
+ HIRAID_KEY_SGL_FMT_DATA_DESC = 0x04,
+ HIRAID_TRANSPORT_SGL_DATA_DESC = 0x05
+};
+
+static void hiraid_sgl_set_data(struct hiraid_sgl_desc *sge, struct scatterlist *sg)
+{
+ sge->addr = cpu_to_le64(sg_dma_address(sg));
+ sge->length = cpu_to_le32(sg_dma_len(sg));
+ sge->type = HIRAID_SGL_FMT_DATA_DESC << 4;
+}
+
+static void hiraid_sgl_set_seg(struct hiraid_sgl_desc *sge, dma_addr_t buffer_phy, int entries)
+{
+ sge->addr = cpu_to_le64(buffer_phy);
+ if (entries <= SGES_PER_PAGE) {
+ sge->length = cpu_to_le32(entries * sizeof(*sge));
+ sge->type = HIRAID_SGL_FMT_LAST_SEG_DESC << 4;
+ } else {
+ sge->length = cpu_to_le32(PAGE_SIZE);
+ sge->type = HIRAID_SGL_FMT_SEG_DESC << 4;
+ }
+}
+
+static int hiraid_build_passthru_sgl(struct hiraid_dev *hdev,
+ struct hiraid_admin_command *admin_cmd,
+ struct hiraid_mapmange *mapbuf)
+{
+ struct hiraid_sgl_desc *sg_list, *link, *old_sg_list;
+ struct scatterlist *sg = mapbuf->sgl;
+ void **list = hiraid_mapbuf_list(mapbuf);
+ struct dma_pool *pool;
+ int nsge = mapbuf->sge_cnt;
+ dma_addr_t buffer_phy;
+ int i = 0;
+
+ admin_cmd->common.flags |= SQE_FLAG_SGL_METABUF;
+
+ if (nsge == 1) {
+ hiraid_sgl_set_data(&admin_cmd->common.dptr.sgl, sg);
+ return 0;
+ }
+
+ pool = hdev->prp_page_pool;
+ mapbuf->page_cnt = 1;
+
+ sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!sg_list) {
+ dev_err_ratelimited(hdev->dev, "allocate first admin sgl_list failed\n");
+ mapbuf->page_cnt = -1;
+ return -ENOMEM;
+ }
+
+ list[0] = sg_list;
+ mapbuf->first_dma = buffer_phy;
+ hiraid_sgl_set_seg(&admin_cmd->common.dptr.sgl, buffer_phy, nsge);
+ do {
+ if (i == SGES_PER_PAGE) {
+ old_sg_list = sg_list;
+ link = &old_sg_list[SGES_PER_PAGE - 1];
+
+ sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!sg_list) {
+ dev_err_ratelimited(hdev->dev, "allocate [%d]th admin sgl_list failed\n",
+ mapbuf->page_cnt + 1);
+ return -ENOMEM;
+ }
+ list[mapbuf->page_cnt++] = sg_list;
+
+ i = 0;
+ memcpy(&sg_list[i++], link, sizeof(*link));
+ hiraid_sgl_set_seg(link, buffer_phy, nsge);
+ }
+
+ hiraid_sgl_set_data(&sg_list[i++], sg);
+ sg = sg_next(sg);
+ } while (--nsge > 0);
+
+ return 0;
+}
+
+
+static int hiraid_build_sgl(struct hiraid_dev *hdev, struct hiraid_scsi_io_cmd *io_cmd,
+ struct hiraid_mapmange *mapbuf)
+{
+ struct hiraid_sgl_desc *sg_list, *link, *old_sg_list;
+ struct scatterlist *sg = mapbuf->sgl;
+ void **list = hiraid_mapbuf_list(mapbuf);
+ struct dma_pool *pool;
+ int nsge = mapbuf->sge_cnt;
+ dma_addr_t buffer_phy;
+ int i = 0;
+
+ io_cmd->common.flags |= SQE_FLAG_SGL_METABUF;
+
+ if (nsge == 1) {
+ hiraid_sgl_set_data(&io_cmd->common.dptr.sgl, sg);
+ return 0;
+ }
+
+ if (nsge <= (EXTRA_POOL_SIZE / sizeof(struct hiraid_sgl_desc))) {
+ pool = mapbuf->hiraidq->prp_small_pool;
+ mapbuf->page_cnt = 0;
+ } else {
+ pool = hdev->prp_page_pool;
+ mapbuf->page_cnt = 1;
+ }
+
+ sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!sg_list) {
+ dev_err_ratelimited(hdev->dev, "allocate first sgl_list failed\n");
+ mapbuf->page_cnt = -1;
+ return -ENOMEM;
+ }
+
+ list[0] = sg_list;
+ mapbuf->first_dma = buffer_phy;
+ hiraid_sgl_set_seg(&io_cmd->common.dptr.sgl, buffer_phy, nsge);
+ do {
+ if (i == SGES_PER_PAGE) {
+ old_sg_list = sg_list;
+ link = &old_sg_list[SGES_PER_PAGE - 1];
+
+ sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &buffer_phy);
+ if (!sg_list) {
+ dev_err_ratelimited(hdev->dev, "allocate [%d]th sgl_list failed\n",
+ mapbuf->page_cnt + 1);
+ return -ENOMEM;
+ }
+ list[mapbuf->page_cnt++] = sg_list;
+
+ i = 0;
+ memcpy(&sg_list[i++], link, sizeof(*link));
+ hiraid_sgl_set_seg(link, buffer_phy, nsge);
+ }
+
+ hiraid_sgl_set_data(&sg_list[i++], sg);
+ sg = sg_next(sg);
+ } while (--nsge > 0);
+
+ return 0;
+}
+
+#define HIRAID_RW_FUA BIT(14)
+
+static int hiraid_setup_rw_cmd(struct hiraid_dev *hdev,
+ struct hiraid_scsi_rw_cmd *io_cmd,
+ struct scsi_cmnd *scmd)
+{
+ u32 start_lba_lo, start_lba_hi;
+ u32 datalength = 0;
+ u16 control = 0;
+
+ start_lba_lo = 0;
+ start_lba_hi = 0;
+
+ if (scmd->sc_data_direction == DMA_TO_DEVICE) {
+ io_cmd->opcode = HIRAID_CMD_WRITE;
+ } else if (scmd->sc_data_direction == DMA_FROM_DEVICE) {
+ io_cmd->opcode = HIRAID_CMD_READ;
+ } else {
+ dev_err(hdev->dev, "invalid RW_IO for unsupported data direction[%d]\n",
+ scmd->sc_data_direction);
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ /* 6-byte READ(0x08) or WRITE(0x0A) cdb */
+ if (scmd->cmd_len == 6) {
+ datalength = (u32)(scmd->cmnd[4] == 0 ?
+ IO_6_DEFAULT_TX_LEN : scmd->cmnd[4]);
+ start_lba_lo = (u32)get_unaligned_be24(&scmd->cmnd[1]);
+
+ start_lba_lo &= 0x1FFFFF;
+ }
+
+ /* 10-byte READ(0x28) or WRITE(0x2A) cdb */
+ else if (scmd->cmd_len == 10) {
+ datalength = (u32)get_unaligned_be16(&scmd->cmnd[7]);
+ start_lba_lo = get_unaligned_be32(&scmd->cmnd[2]);
+
+ if (scmd->cmnd[1] & FUA_MASK)
+ control |= HIRAID_RW_FUA;
+ }
+
+ /* 12-byte READ(0xA8) or WRITE(0xAA) cdb */
+ else if (scmd->cmd_len == 12) {
+ datalength = get_unaligned_be32(&scmd->cmnd[6]);
+ start_lba_lo = get_unaligned_be32(&scmd->cmnd[2]);
+
+ if (scmd->cmnd[1] & FUA_MASK)
+ control |= HIRAID_RW_FUA;
+ }
+ /* 16-byte READ(0x88) or WRITE(0x8A) cdb */
+ else if (scmd->cmd_len == 16) {
+ datalength = get_unaligned_be32(&scmd->cmnd[10]);
+ start_lba_lo = get_unaligned_be32(&scmd->cmnd[6]);
+ start_lba_hi = get_unaligned_be32(&scmd->cmnd[2]);
+
+ if (scmd->cmnd[1] & FUA_MASK)
+ control |= HIRAID_RW_FUA;
+ }
+
+ if (unlikely(datalength > U16_MAX || datalength == 0)) {
+ dev_err(hdev->dev, "invalid IO for illegal transfer data length[%u]\n", datalength);
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ io_cmd->slba = cpu_to_le64(((u64)start_lba_hi << 32) | start_lba_lo);
+ /* 0base for nlb */
+ io_cmd->nlb = cpu_to_le16((u16)(datalength - 1));
+ io_cmd->control = cpu_to_le16(control);
+
+ return 0;
+}
+
+static int hiraid_setup_nonrw_cmd(struct hiraid_dev *hdev,
+ struct hiraid_scsi_nonrw_cmd *io_cmd, struct scsi_cmnd *scmd)
+{
+ io_cmd->buf_len = cpu_to_le32(scsi_bufflen(scmd));
+
+ switch (scmd->sc_data_direction) {
+ case DMA_NONE:
+ io_cmd->opcode = HIRAID_CMD_NONRW_NONE;
+ break;
+ case DMA_TO_DEVICE:
+ io_cmd->opcode = HIRAID_CMD_NONRW_TODEV;
+ break;
+ case DMA_FROM_DEVICE:
+ io_cmd->opcode = HIRAID_CMD_NONRW_FROMDEV;
+ break;
+ default:
+ dev_err(hdev->dev, "invalid NON_IO for unsupported data direction[%d]\n",
+ scmd->sc_data_direction);
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hiraid_setup_io_cmd(struct hiraid_dev *hdev,
+ struct hiraid_scsi_io_cmd *io_cmd, struct scsi_cmnd *scmd)
+{
+ memcpy(io_cmd->common.cdb, scmd->cmnd, scmd->cmd_len);
+ io_cmd->common.cdb_len = scmd->cmd_len;
+
+ if (hiraid_is_rw_scmd(scmd))
+ return hiraid_setup_rw_cmd(hdev, &io_cmd->rw, scmd);
+ else
+ return hiraid_setup_nonrw_cmd(hdev, &io_cmd->nonrw, scmd);
+}
+
+static inline void hiraid_init_mapbuff(struct hiraid_mapmange *mapbuf)
+{
+ mapbuf->sge_cnt = 0;
+ mapbuf->page_cnt = -1;
+ mapbuf->use_sgl = false;
+ WRITE_ONCE(mapbuf->state, CMD_IDLE);
+}
+
+static void hiraid_free_mapbuf(struct hiraid_dev *hdev, struct hiraid_mapmange *mapbuf)
+{
+ const int last_prp = hdev->page_size / sizeof(__le64) - 1;
+ dma_addr_t buffer_phy, next_buffer_phy;
+ struct hiraid_sgl_desc *sg_list;
+ __le64 *prp_list;
+ void *addr;
+ int i;
+
+ buffer_phy = mapbuf->first_dma;
+ if (mapbuf->page_cnt == 0)
+ dma_pool_free(mapbuf->hiraidq->prp_small_pool,
+ hiraid_mapbuf_list(mapbuf)[0], buffer_phy);
+
+ for (i = 0; i < mapbuf->page_cnt; i++) {
+ addr = hiraid_mapbuf_list(mapbuf)[i];
+
+ if (mapbuf->use_sgl) {
+ sg_list = addr;
+ next_buffer_phy =
+ le64_to_cpu((sg_list[SGES_PER_PAGE - 1]).addr);
+ } else {
+ prp_list = addr;
+ next_buffer_phy = le64_to_cpu(prp_list[last_prp]);
+ }
+
+ dma_pool_free(hdev->prp_page_pool, addr, buffer_phy);
+ buffer_phy = next_buffer_phy;
+ }
+
+ mapbuf->sense_buffer_virt = NULL;
+ mapbuf->page_cnt = -1;
+}
+
+static int hiraid_io_map_data(struct hiraid_dev *hdev, struct hiraid_mapmange *mapbuf,
+ struct scsi_cmnd *scmd, struct hiraid_scsi_io_cmd *io_cmd)
+{
+ int ret;
+
+ ret = scsi_dma_map(scmd);
+ if (unlikely(ret < 0))
+ return ret;
+ mapbuf->sge_cnt = ret;
+
+ /* No data to DMA, it may be scsi no-rw command */
+ if (unlikely(mapbuf->sge_cnt == 0))
+ return 0;
+
+ mapbuf->len = scsi_bufflen(scmd);
+ mapbuf->sgl = scsi_sglist(scmd);
+ mapbuf->use_sgl = !hiraid_is_prp(hdev, mapbuf->sgl, mapbuf->sge_cnt);
+
+ if (mapbuf->use_sgl) {
+ ret = hiraid_build_sgl(hdev, io_cmd, mapbuf);
+ } else {
+ ret = hiraid_build_prp(hdev, mapbuf);
+ io_cmd->common.dptr.prp1 =
+ cpu_to_le64(sg_dma_address(mapbuf->sgl));
+ io_cmd->common.dptr.prp2 = cpu_to_le64(mapbuf->first_dma);
+ }
+
+ if (ret)
+ scsi_dma_unmap(scmd);
+
+ return ret;
+}
+
+static void hiraid_check_status(struct hiraid_mapmange *mapbuf, struct scsi_cmnd *scmd,
+ struct hiraid_completion *cqe)
+{
+ scsi_set_resid(scmd, 0);
+
+ switch ((le16_to_cpu(cqe->status) >> 1) & 0x7f) {
+ case SENSE_STATE_OK:
+ set_host_byte(scmd, DID_OK);
+ break;
+ case SENSE_STATE_NEED_CHECK:
+ set_host_byte(scmd, DID_OK);
+ scmd->result |= le16_to_cpu(cqe->status) >> 8;
+ if (scmd->result & SAM_STAT_CHECK_CONDITION) {
+ memset(scmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ memcpy(scmd->sense_buffer,
+ mapbuf->sense_buffer_virt, SCSI_SENSE_BUFFERSIZE);
+ scmd->result = (scmd->result & 0x00ffffff) | (DRIVER_SENSE << 24);
+ }
+ break;
+ case SENSE_STATE_ABORTED:
+ set_host_byte(scmd, DID_ABORT);
+ break;
+ case SENSE_STATE_NEED_RETRY:
+ set_host_byte(scmd, DID_REQUEUE);
+ break;
+ default:
+ set_host_byte(scmd, DID_BAD_TARGET);
+ dev_warn_ratelimited(mapbuf->hiraidq->hdev->dev, "cid[%d] qid[%d] sdev[%d:%d] opcode[%.2x] bad status[0x%x]\n",
+ le16_to_cpu(cqe->cmd_id), le16_to_cpu(cqe->sq_id), scmd->device->channel,
+ scmd->device->id, scmd->cmnd[0], le16_to_cpu(cqe->status));
+ break;
+ }
+}
+
+static inline void hiraid_query_scmd_tag(struct scsi_cmnd *scmd, u16 *qid, u16 *cid,
+ struct hiraid_dev *hdev, struct hiraid_sdev_hostdata *hostdata)
+{
+ u32 tag = blk_mq_unique_tag(blk_mq_rq_from_pdu((void *)scmd));
+
+ if (work_mode) {
+ if ((hdev->hdd_dispatch == DISPATCH_BY_DISK) && (hostdata->hwq != 0))
+ *qid = hostdata->hwq;
+ else
+ *qid = raw_smp_processor_id() % (hdev->online_queues - 1) + 1;
+ } else {
+ *qid = blk_mq_unique_tag_to_hwq(tag) + 1;
+ }
+ *cid = blk_mq_unique_tag_to_tag(tag);
+}
+
+static int hiraid_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+{
+ struct hiraid_mapmange *mapbuf = scsi_cmd_priv(scmd);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ struct scsi_device *sdev = scmd->device;
+ struct hiraid_sdev_hostdata *hostdata;
+ struct hiraid_scsi_io_cmd io_cmd;
+ struct hiraid_queue *ioq;
+ u16 hwq, cid;
+ int ret;
+
+ if (unlikely(hdev->state == DEV_RESETTING))
+ return SCSI_MLQUEUE_HOST_BUSY;
+
+ if (unlikely(hdev->state != DEV_LIVE)) {
+ set_host_byte(scmd, DID_NO_CONNECT);
+ scmd->scsi_done(scmd);
+ return 0;
+ }
+
+ if (log_debug_switch)
+ scsi_print_command(scmd);
+
+ hostdata = sdev->hostdata;
+ hiraid_query_scmd_tag(scmd, &hwq, &cid, hdev, hostdata);
+ ioq = &hdev->queues[hwq];
+
+ if (unlikely(atomic_inc_return(&ioq->inflight) >
+ (hdev->ioq_depth - HIRAID_PTHRU_CMDS_PERQ))) {
+ atomic_dec(&ioq->inflight);
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+
+ memset(&io_cmd, 0, sizeof(io_cmd));
+ io_cmd.rw.hdid = cpu_to_le32(hostdata->hdid);
+ io_cmd.rw.cmd_id = cpu_to_le16(cid);
+
+ ret = hiraid_setup_io_cmd(hdev, &io_cmd, scmd);
+ if (unlikely(ret)) {
+ set_host_byte(scmd, DID_ERROR);
+ scmd->scsi_done(scmd);
+ atomic_dec(&ioq->inflight);
+ return 0;
+ }
+
+ ret = cid * SCSI_SENSE_BUFFERSIZE;
+ if (work_mode) {
+ mapbuf->sense_buffer_virt = hdev->sense_buffer_virt + ret;
+ mapbuf->sense_buffer_phy = hdev->sense_buffer_phy + ret;
+ } else {
+ mapbuf->sense_buffer_virt = ioq->sense_buffer_virt + ret;
+ mapbuf->sense_buffer_phy = ioq->sense_buffer_phy + ret;
+ }
+ io_cmd.common.sense_addr = cpu_to_le64(mapbuf->sense_buffer_phy);
+ io_cmd.common.sense_len = cpu_to_le16(SCSI_SENSE_BUFFERSIZE);
+
+ hiraid_init_mapbuff(mapbuf);
+
+ mapbuf->hiraidq = ioq;
+ mapbuf->cid = cid;
+ ret = hiraid_io_map_data(hdev, mapbuf, scmd, &io_cmd);
+ if (unlikely(ret)) {
+ dev_err(hdev->dev, "io map data err\n");
+ set_host_byte(scmd, DID_ERROR);
+ scmd->scsi_done(scmd);
+ ret = 0;
+ goto deinit_iobuf;
+ }
+
+ WRITE_ONCE(mapbuf->state, CMD_FLIGHT);
+ hiraid_submit_cmd(ioq, &io_cmd);
+
+ return 0;
+
+deinit_iobuf:
+ atomic_dec(&ioq->inflight);
+ hiraid_free_mapbuf(hdev, mapbuf);
+ return ret;
+}
+
+static int hiraid_match_dev(struct hiraid_dev *hdev, u16 idx, struct scsi_device *sdev)
+{
+ if (HIRAID_DEV_INFO_FLAG_VALID(hdev->dev_info[idx].flag)) {
+ if (sdev->channel == hdev->dev_info[idx].channel &&
+ sdev->id == le16_to_cpu(hdev->dev_info[idx].target) &&
+ sdev->lun < hdev->dev_info[idx].lun) {
+ dev_info(hdev->dev, "match device success, channel:target:lun[%d:%d:%d]\n",
+ hdev->dev_info[idx].channel,
+ hdev->dev_info[idx].target,
+ hdev->dev_info[idx].lun);
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static int hiraid_disk_qd(u8 attr)
+{
+ switch (HIRAID_DEV_DISK_TYPE(attr)) {
+ case HIRAID_SAS_HDD_VD:
+ case HIRAID_SATA_HDD_VD:
+ return HIRAID_HDD_VD_QD;
+ case HIRAID_SAS_SSD_VD:
+ case HIRAID_SATA_SSD_VD:
+ case HIRAID_NVME_SSD_VD:
+ return HIRAID_SSD_VD_QD;
+ case HIRAID_SAS_HDD_PD:
+ case HIRAID_SATA_HDD_PD:
+ return HIRAID_HDD_PD_QD;
+ case HIRAID_SAS_SSD_PD:
+ case HIRAID_SATA_SSD_PD:
+ case HIRAID_NVME_SSD_PD:
+ return HIRAID_SSD_PD_QD;
+ default:
+ return MAX_CMD_PER_DEV;
+ }
+}
+
+static bool hiraid_disk_is_hdd(u8 attr)
+{
+ switch (HIRAID_DEV_DISK_TYPE(attr)) {
+ case HIRAID_SAS_HDD_VD:
+ case HIRAID_SATA_HDD_VD:
+ case HIRAID_SAS_HDD_PD:
+ case HIRAID_SATA_HDD_PD:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static int hiraid_slave_alloc(struct scsi_device *sdev)
+{
+ struct hiraid_sdev_hostdata *hostdata;
+ struct hiraid_dev *hdev;
+ u16 idx;
+
+ hdev = shost_priv(sdev->host);
+ hostdata = kzalloc(sizeof(*hostdata), GFP_KERNEL);
+ if (!hostdata) {
+ dev_err(hdev->dev, "alloc scsi host data memory failed\n");
+ return -ENOMEM;
+ }
+
+ down_read(&hdev->dev_rwsem);
+ for (idx = 0; idx < le32_to_cpu(hdev->ctrl_info->nd); idx++) {
+ if (hiraid_match_dev(hdev, idx, sdev))
+ goto scan_host;
+ }
+ up_read(&hdev->dev_rwsem);
+
+ kfree(hostdata);
+ return -ENXIO;
+
+scan_host:
+ hostdata->hdid = le32_to_cpu(hdev->dev_info[idx].hdid);
+ hostdata->max_io_kb = le16_to_cpu(hdev->dev_info[idx].max_io_kb);
+ hostdata->attr = hdev->dev_info[idx].attr;
+ hostdata->flag = hdev->dev_info[idx].flag;
+ hostdata->rg_id = 0xff;
+ sdev->hostdata = hostdata;
+ up_read(&hdev->dev_rwsem);
+ return 0;
+}
+
+static void hiraid_slave_destroy(struct scsi_device *sdev)
+{
+ kfree(sdev->hostdata);
+ sdev->hostdata = NULL;
+}
+
+static int hiraid_slave_configure(struct scsi_device *sdev)
+{
+ unsigned int timeout = scmd_tmout_rawdisk * HZ;
+ struct hiraid_dev *hdev = shost_priv(sdev->host);
+ struct hiraid_sdev_hostdata *hostdata = sdev->hostdata;
+ u32 max_sec = sdev->host->max_sectors;
+ int qd = MAX_CMD_PER_DEV;
+
+ if (hostdata) {
+ if (HIRAID_DEV_INFO_ATTR_VD(hostdata->attr))
+ timeout = scmd_tmout_vd * HZ;
+ else if (HIRAID_DEV_INFO_ATTR_RAWDISK(hostdata->attr))
+ timeout = scmd_tmout_rawdisk * HZ;
+ max_sec = hostdata->max_io_kb << 1;
+ qd = hiraid_disk_qd(hostdata->attr);
+
+ if (hiraid_disk_is_hdd(hostdata->attr))
+ hostdata->hwq = hostdata->hdid % (hdev->online_queues - 1) + 1;
+ else
+ hostdata->hwq = 0;
+ } else {
+ dev_err(hdev->dev, "err, sdev->hostdata is null\n");
+ }
+
+ blk_queue_rq_timeout(sdev->request_queue, timeout);
+ sdev->eh_timeout = timeout;
+ scsi_change_queue_depth(sdev, qd);
+
+ if ((max_sec == 0) || (max_sec > sdev->host->max_sectors))
+ max_sec = sdev->host->max_sectors;
+
+ if (!max_io_force)
+ blk_queue_max_hw_sectors(sdev->request_queue, max_sec);
+
+ dev_info(hdev->dev, "sdev->channel:id:lun[%d:%d:%lld] scmd_timeout[%d]s maxsec[%d]\n",
+ sdev->channel, sdev->id, sdev->lun, timeout / HZ, max_sec);
+
+ return 0;
+}
+
+static void hiraid_shost_init(struct hiraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ u8 domain, bus;
+ u32 dev_func;
+
+ domain = pci_domain_nr(pdev->bus);
+ bus = pdev->bus->number;
+ dev_func = pdev->devfn;
+
+ hdev->shost->nr_hw_queues = work_mode ? 1 : hdev->online_queues - 1;
+ hdev->shost->can_queue = hdev->scsi_qd;
+
+ hdev->shost->sg_tablesize = le16_to_cpu(hdev->ctrl_info->max_num_sge);
+ /* 512B per sector */
+ hdev->shost->max_sectors = (1U << ((hdev->ctrl_info->mdts) * 1U) << 12) / 512;
+ hdev->shost->cmd_per_lun = MAX_CMD_PER_DEV;
+ hdev->shost->max_channel = le16_to_cpu(hdev->ctrl_info->max_channel) - 1;
+ hdev->shost->max_id = le32_to_cpu(hdev->ctrl_info->max_tgt_id);
+ hdev->shost->max_lun = le16_to_cpu(hdev->ctrl_info->max_lun);
+
+ hdev->shost->this_id = -1;
+ hdev->shost->unique_id = (domain << 16) | (bus << 8) | dev_func;
+ hdev->shost->max_cmd_len = MAX_CDB_LEN;
+ hdev->shost->hostt->cmd_size = hiraid_get_max_cmd_size(hdev);
+}
+
+static int hiraid_alloc_queue(struct hiraid_dev *hdev, u16 qid, u16 depth)
+{
+ struct hiraid_queue *hiraidq = &hdev->queues[qid];
+ int ret = 0;
+
+ if (hdev->queue_count > qid) {
+ dev_info(hdev->dev, "warn: queue[%d] is exist\n", qid);
+ return 0;
+ }
+
+ hiraidq->cqes = dma_alloc_coherent(hdev->dev, CQ_SIZE(depth),
+ &hiraidq->cq_buffer_phy, GFP_KERNEL | __GFP_ZERO);
+ if (!hiraidq->cqes)
+ return -ENOMEM;
+
+ hiraidq->sq_cmds = dma_alloc_coherent(hdev->dev, SQ_SIZE(qid, depth),
+ &hiraidq->sq_buffer_phy, GFP_KERNEL);
+ if (!hiraidq->sq_cmds) {
+ ret = -ENOMEM;
+ goto free_cqes;
+ }
+
+ /*
+ * if single hw queue, we do not need to alloc sense buffer for every queue,
+ * we have alloced all on hiraid_alloc_resources.
+ */
+ if (work_mode)
+ goto initq;
+
+ /* alloc sense buffer */
+ hiraidq->sense_buffer_virt = dma_alloc_coherent(hdev->dev, SENSE_SIZE(depth),
+ &hiraidq->sense_buffer_phy, GFP_KERNEL | __GFP_ZERO);
+ if (!hiraidq->sense_buffer_virt) {
+ ret = -ENOMEM;
+ goto free_sq_cmds;
+ }
+
+initq:
+ spin_lock_init(&hiraidq->sq_lock);
+ spin_lock_init(&hiraidq->cq_lock);
+ hiraidq->hdev = hdev;
+ hiraidq->q_depth = depth;
+ hiraidq->qid = qid;
+ hiraidq->cq_vector = -1;
+ hdev->queue_count++;
+
+ return 0;
+
+free_sq_cmds:
+ dma_free_coherent(hdev->dev, SQ_SIZE(qid, depth), (void *)hiraidq->sq_cmds,
+ hiraidq->sq_buffer_phy);
+free_cqes:
+ dma_free_coherent(hdev->dev, CQ_SIZE(depth), (void *)hiraidq->cqes,
+ hiraidq->cq_buffer_phy);
+ return ret;
+}
+
+static int hiraid_wait_control_ready(struct hiraid_dev *hdev, u64 cap, bool enabled)
+{
+ unsigned long timeout =
+ ((HIRAID_CAP_TIMEOUT(cap) + 1) * HIRAID_CAP_TIMEOUT_UNIT_MS) + jiffies;
+ u32 bit = enabled ? HIRAID_CSTS_RDY : 0;
+
+ while ((readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_RDY) != bit) {
+ usleep_range(1000, 2000);
+ if (fatal_signal_pending(current))
+ return -EINTR;
+
+ if (time_after(jiffies, timeout)) {
+ dev_err(hdev->dev, "device not ready; aborting %s\n",
+ enabled ? "initialisation" : "reset");
+ return -ENODEV;
+ }
+ }
+ return 0;
+}
+
+static int hiraid_shutdown_control(struct hiraid_dev *hdev)
+{
+ unsigned long timeout = le32_to_cpu(hdev->ctrl_info->rtd3e) / 1000000 * HZ + jiffies;
+
+ hdev->ctrl_config &= ~HIRAID_CC_SHN_MASK;
+ hdev->ctrl_config |= HIRAID_CC_SHN_NORMAL;
+ writel(hdev->ctrl_config, hdev->bar + HIRAID_REG_CC);
+
+ while ((readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_SHST_MASK) !=
+ HIRAID_CSTS_SHST_CMPLT) {
+ msleep(100);
+ if (fatal_signal_pending(current))
+ return -EINTR;
+ if (time_after(jiffies, timeout)) {
+ dev_err(hdev->dev, "device shutdown incomplete, abort shutdown\n");
+ return -ENODEV;
+ }
+ }
+ return 0;
+}
+
+static int hiraid_disable_control(struct hiraid_dev *hdev)
+{
+ hdev->ctrl_config &= ~HIRAID_CC_SHN_MASK;
+ hdev->ctrl_config &= ~HIRAID_CC_ENABLE;
+ writel(hdev->ctrl_config, hdev->bar + HIRAID_REG_CC);
+
+ return hiraid_wait_control_ready(hdev, hdev->cap, false);
+}
+
+static int hiraid_enable_control(struct hiraid_dev *hdev)
+{
+ u64 cap = hdev->cap;
+ u32 dev_page_min = HIRAID_CAP_MPSMIN(cap) + 12;
+ u32 page_shift = PAGE_SHIFT;
+
+ if (page_shift < dev_page_min) {
+ dev_err(hdev->dev, "minimum device page size[%u], too large for host[%u]\n",
+ 1U << dev_page_min, 1U << page_shift);
+ return -ENODEV;
+ }
+
+ page_shift = min_t(unsigned int, HIRAID_CAP_MPSMAX(cap) + 12, PAGE_SHIFT);
+ hdev->page_size = 1U << page_shift;
+
+ hdev->ctrl_config = HIRAID_CC_CSS_NVM;
+ hdev->ctrl_config |= (page_shift - 12) << HIRAID_CC_MPS_SHIFT;
+ hdev->ctrl_config |= HIRAID_CC_AMS_RR | HIRAID_CC_SHN_NONE;
+ hdev->ctrl_config |= HIRAID_CC_IOSQES | HIRAID_CC_IOCQES;
+ hdev->ctrl_config |= HIRAID_CC_ENABLE;
+ writel(hdev->ctrl_config, hdev->bar + HIRAID_REG_CC);
+
+ return hiraid_wait_control_ready(hdev, cap, true);
+}
+
+static void hiraid_init_queue(struct hiraid_queue *hiraidq, u16 qid)
+{
+ struct hiraid_dev *hdev = hiraidq->hdev;
+
+ memset((void *)hiraidq->cqes, 0, CQ_SIZE(hiraidq->q_depth));
+
+ hiraidq->sq_tail = 0;
+ hiraidq->cq_head = 0;
+ hiraidq->cq_phase = 1;
+ hiraidq->q_db = &hdev->dbs[qid * 2 * hdev->db_stride];
+ hiraidq->prp_small_pool = hdev->prp_extra_pool[qid % extra_pool_num];
+ hdev->online_queues++;
+ atomic_set(&hiraidq->inflight, 0);
+}
+
+static inline bool hiraid_cqe_pending(struct hiraid_queue *hiraidq)
+{
+ return (le16_to_cpu(hiraidq->cqes[hiraidq->cq_head].status) & 1) ==
+ hiraidq->cq_phase;
+}
+
+static void hiraid_complete_io_cmnd(struct hiraid_queue *ioq, struct hiraid_completion *cqe)
+{
+ struct hiraid_dev *hdev = ioq->hdev;
+ struct blk_mq_tags *tags;
+ struct scsi_cmnd *scmd;
+ struct hiraid_mapmange *mapbuf;
+ struct request *req;
+ unsigned long elapsed;
+
+ atomic_dec(&ioq->inflight);
+
+ if (work_mode)
+ tags = hdev->shost->tag_set.tags[0];
+ else
+ tags = hdev->shost->tag_set.tags[ioq->qid - 1];
+ req = blk_mq_tag_to_rq(tags, le16_to_cpu(cqe->cmd_id));
+ if (unlikely(!req || !blk_mq_request_started(req))) {
+ dev_warn(hdev->dev, "invalid id[%d] completed on queue[%d]\n",
+ le16_to_cpu(cqe->cmd_id), ioq->qid);
+ return;
+ }
+
+ scmd = blk_mq_rq_to_pdu(req);
+ mapbuf = scsi_cmd_priv(scmd);
+
+ elapsed = jiffies - scmd->jiffies_at_alloc;
+ dev_log_dbg(hdev->dev, "cid[%d] qid[%d] finish IO cost %3ld.%3ld seconds\n",
+ le16_to_cpu(cqe->cmd_id), ioq->qid, elapsed / HZ, elapsed % HZ);
+
+ if (cmpxchg(&mapbuf->state, CMD_FLIGHT, CMD_COMPLETE) != CMD_FLIGHT) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d] enters abnormal handler, cost %3ld.%3ld seconds\n",
+ le16_to_cpu(cqe->cmd_id), ioq->qid, elapsed / HZ, elapsed % HZ);
+ WRITE_ONCE(mapbuf->state, CMD_TMO_COMPLETE);
+
+ if (mapbuf->sge_cnt) {
+ mapbuf->sge_cnt = 0;
+ scsi_dma_unmap(scmd);
+ }
+ hiraid_free_mapbuf(hdev, mapbuf);
+
+ return;
+ }
+
+ hiraid_check_status(mapbuf, scmd, cqe);
+ if (mapbuf->sge_cnt) {
+ mapbuf->sge_cnt = 0;
+ scsi_dma_unmap(scmd);
+ }
+ hiraid_free_mapbuf(hdev, mapbuf);
+ scmd->scsi_done(scmd);
+}
+
+static void hiraid_complete_admin_cmnd(struct hiraid_queue *adminq, struct hiraid_completion *cqe)
+{
+ struct hiraid_dev *hdev = adminq->hdev;
+ struct hiraid_cmd *adm_cmd;
+
+ adm_cmd = hdev->adm_cmds + le16_to_cpu(cqe->cmd_id);
+ if (unlikely(adm_cmd->state == CMD_IDLE)) {
+ dev_warn(adminq->hdev->dev, "invalid id[%d] completed on queue[%d]\n",
+ le16_to_cpu(cqe->cmd_id), le16_to_cpu(cqe->sq_id));
+ return;
+ }
+
+ adm_cmd->status = le16_to_cpu(cqe->status) >> 1;
+ adm_cmd->result0 = le32_to_cpu(cqe->result);
+ adm_cmd->result1 = le32_to_cpu(cqe->result1);
+
+ complete(&adm_cmd->cmd_done);
+}
+
+static void hiraid_send_async_event(struct hiraid_dev *hdev, u16 cid);
+
+static void hiraid_complete_async_event(struct hiraid_queue *hiraidq, struct hiraid_completion *cqe)
+{
+ struct hiraid_dev *hdev = hiraidq->hdev;
+ u32 result = le32_to_cpu(cqe->result);
+
+ dev_info(hdev->dev, "recv async event, cid[%d] status[0x%x] result[0x%x]\n",
+ le16_to_cpu(cqe->cmd_id), le16_to_cpu(cqe->status) >> 1, result);
+
+ hiraid_send_async_event(hdev, le16_to_cpu(cqe->cmd_id));
+
+ if ((le16_to_cpu(cqe->status) >> 1) != HIRAID_SC_SUCCESS)
+ return;
+ switch (result & 0x7) {
+ case HIRAID_ASYN_EVENT_NOTICE:
+ hiraid_handle_async_notice(hdev, result);
+ break;
+ case HIRAID_ASYN_EVENT_VS:
+ hiraid_handle_async_vs(hdev, result, le32_to_cpu(cqe->result1));
+ break;
+ default:
+ dev_warn(hdev->dev, "unsupported async event type[%u]\n", result & 0x7);
+ break;
+ }
+}
+
+static void hiraid_complete_pthru_cmnd(struct hiraid_queue *ioq, struct hiraid_completion *cqe)
+{
+ struct hiraid_dev *hdev = ioq->hdev;
+ struct hiraid_cmd *ptcmd;
+
+ ptcmd = hdev->io_ptcmds + (ioq->qid - 1) * HIRAID_PTHRU_CMDS_PERQ +
+ le16_to_cpu(cqe->cmd_id) - hdev->scsi_qd;
+
+ ptcmd->status = le16_to_cpu(cqe->status) >> 1;
+ ptcmd->result0 = le32_to_cpu(cqe->result);
+ ptcmd->result1 = le32_to_cpu(cqe->result1);
+
+ complete(&ptcmd->cmd_done);
+}
+
+static inline void hiraid_handle_cqe(struct hiraid_queue *hiraidq, u16 idx)
+{
+ struct hiraid_completion *cqe = &hiraidq->cqes[idx];
+ struct hiraid_dev *hdev = hiraidq->hdev;
+ u16 cid = le16_to_cpu(cqe->cmd_id);
+
+ if (unlikely(!work_mode && (cid >= hiraidq->q_depth))) {
+ dev_err(hdev->dev, "invalid command id[%d] completed on queue[%d]\n",
+ cid, cqe->sq_id);
+ return;
+ }
+
+ dev_log_dbg(hdev->dev, "cid[%d] qid[%d] result[0x%x] sqid[%d] status[0x%x]\n",
+ cid, hiraidq->qid, le32_to_cpu(cqe->result),
+ le16_to_cpu(cqe->sq_id), le16_to_cpu(cqe->status));
+
+ if (unlikely(hiraidq->qid == 0 && cid >= HIRAID_AQ_BLK_MQ_DEPTH)) {
+ hiraid_complete_async_event(hiraidq, cqe);
+ return;
+ }
+
+ if (unlikely(hiraidq->qid && cid >= hdev->scsi_qd)) {
+ hiraid_complete_pthru_cmnd(hiraidq, cqe);
+ return;
+ }
+
+ if (hiraidq->qid)
+ hiraid_complete_io_cmnd(hiraidq, cqe);
+ else
+ hiraid_complete_admin_cmnd(hiraidq, cqe);
+}
+
+static void hiraid_complete_cqes(struct hiraid_queue *hiraidq, u16 start, u16 end)
+{
+ while (start != end) {
+ hiraid_handle_cqe(hiraidq, start);
+ if (++start == hiraidq->q_depth)
+ start = 0;
+ }
+}
+
+static inline void hiraid_update_cq_head(struct hiraid_queue *hiraidq)
+{
+ if (++hiraidq->cq_head == hiraidq->q_depth) {
+ hiraidq->cq_head = 0;
+ hiraidq->cq_phase = !hiraidq->cq_phase;
+ }
+}
+
+static inline bool hiraid_process_cq(struct hiraid_queue *hiraidq, u16 *start, u16 *end, int tag)
+{
+ bool found = false;
+
+ *start = hiraidq->cq_head;
+ while (!found && hiraid_cqe_pending(hiraidq)) {
+ if (le16_to_cpu(hiraidq->cqes[hiraidq->cq_head].cmd_id) == tag)
+ found = true;
+ hiraid_update_cq_head(hiraidq);
+ }
+ *end = hiraidq->cq_head;
+
+ if (*start != *end)
+ writel(hiraidq->cq_head, hiraidq->q_db + hiraidq->hdev->db_stride);
+
+ return found;
+}
+
+static bool hiraid_poll_cq(struct hiraid_queue *hiraidq, int cid)
+{
+ u16 start, end;
+ bool found;
+
+ if (!hiraid_cqe_pending(hiraidq))
+ return 0;
+
+ spin_lock_irq(&hiraidq->cq_lock);
+ found = hiraid_process_cq(hiraidq, &start, &end, cid);
+ spin_unlock_irq(&hiraidq->cq_lock);
+
+ hiraid_complete_cqes(hiraidq, start, end);
+ return found;
+}
+
+static irqreturn_t hiraid_handle_irq(int irq, void *data)
+{
+ struct hiraid_queue *hiraidq = data;
+ irqreturn_t ret = IRQ_NONE;
+ u16 start, end;
+
+ spin_lock(&hiraidq->cq_lock);
+ if (hiraidq->cq_head != hiraidq->last_cq_head)
+ ret = IRQ_HANDLED;
+
+ hiraid_process_cq(hiraidq, &start, &end, -1);
+ hiraidq->last_cq_head = hiraidq->cq_head;
+ spin_unlock(&hiraidq->cq_lock);
+
+ if (start != end) {
+ hiraid_complete_cqes(hiraidq, start, end);
+ ret = IRQ_HANDLED;
+ }
+ return ret;
+}
+
+static int hiraid_setup_admin_queue(struct hiraid_dev *hdev)
+{
+ struct hiraid_queue *adminq = &hdev->queues[0];
+ u32 aqa;
+ int ret;
+
+ dev_info(hdev->dev, "start disable controller\n");
+
+ ret = hiraid_disable_control(hdev);
+ if (ret)
+ return ret;
+
+ ret = hiraid_alloc_queue(hdev, 0, HIRAID_AQ_DEPTH);
+ if (ret)
+ return ret;
+
+ aqa = adminq->q_depth - 1;
+ aqa |= aqa << 16;
+ writel(aqa, hdev->bar + HIRAID_REG_AQA);
+ lo_hi_writeq(adminq->sq_buffer_phy, hdev->bar + HIRAID_REG_ASQ);
+ lo_hi_writeq(adminq->cq_buffer_phy, hdev->bar + HIRAID_REG_ACQ);
+
+ dev_info(hdev->dev, "start enable controller\n");
+
+ ret = hiraid_enable_control(hdev);
+ if (ret) {
+ ret = -ENODEV;
+ return ret;
+ }
+
+ adminq->cq_vector = 0;
+ ret = pci_request_irq(hdev->pdev, adminq->cq_vector, hiraid_handle_irq, NULL,
+ adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
+ if (ret) {
+ adminq->cq_vector = -1;
+ return ret;
+ }
+
+ hiraid_init_queue(adminq, 0);
+
+ dev_info(hdev->dev, "setup admin queue success, queuecount[%d] online[%d] pagesize[%d]\n",
+ hdev->queue_count, hdev->online_queues, hdev->page_size);
+
+ return 0;
+}
+
+static u32 hiraid_get_bar_size(struct hiraid_dev *hdev, u32 nr_ioqs)
+{
+ return (HIRAID_REG_DBS + ((nr_ioqs + 1) * 8 * hdev->db_stride));
+}
+
+static int hiraid_create_admin_cmds(struct hiraid_dev *hdev)
+{
+ u16 i;
+
+ INIT_LIST_HEAD(&hdev->adm_cmd_list);
+ spin_lock_init(&hdev->adm_cmd_lock);
+
+ hdev->adm_cmds = kcalloc_node(HIRAID_AQ_BLK_MQ_DEPTH, sizeof(struct hiraid_cmd),
+ GFP_KERNEL, hdev->numa_node);
+
+ if (!hdev->adm_cmds) {
+ dev_err(hdev->dev, "alloc admin cmds failed\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < HIRAID_AQ_BLK_MQ_DEPTH; i++) {
+ hdev->adm_cmds[i].qid = 0;
+ hdev->adm_cmds[i].cid = i;
+ list_add_tail(&(hdev->adm_cmds[i].list), &hdev->adm_cmd_list);
+ }
+
+ dev_info(hdev->dev, "alloc admin cmds success, num[%d]\n", HIRAID_AQ_BLK_MQ_DEPTH);
+
+ return 0;
+}
+
+static void hiraid_free_admin_cmds(struct hiraid_dev *hdev)
+{
+ kfree(hdev->adm_cmds);
+ hdev->adm_cmds = NULL;
+ INIT_LIST_HEAD(&hdev->adm_cmd_list);
+}
+
+static struct hiraid_cmd *hiraid_get_cmd(struct hiraid_dev *hdev, enum hiraid_cmd_type type)
+{
+ struct hiraid_cmd *cmd = NULL;
+ unsigned long flags;
+ struct list_head *head = &hdev->adm_cmd_list;
+ spinlock_t *slock = &hdev->adm_cmd_lock;
+
+ if (type == HIRAID_CMD_PTHRU) {
+ head = &hdev->io_pt_list;
+ slock = &hdev->io_pt_lock;
+ }
+
+ spin_lock_irqsave(slock, flags);
+ if (list_empty(head)) {
+ spin_unlock_irqrestore(slock, flags);
+ dev_err(hdev->dev, "err, cmd[%d] list empty\n", type);
+ return NULL;
+ }
+ cmd = list_entry(head->next, struct hiraid_cmd, list);
+ list_del_init(&cmd->list);
+ spin_unlock_irqrestore(slock, flags);
+
+ WRITE_ONCE(cmd->state, CMD_FLIGHT);
+
+ return cmd;
+}
+
+static void hiraid_put_cmd(struct hiraid_dev *hdev, struct hiraid_cmd *cmd,
+ enum hiraid_cmd_type type)
+{
+ unsigned long flags;
+ struct list_head *head = &hdev->adm_cmd_list;
+ spinlock_t *slock = &hdev->adm_cmd_lock;
+
+ if (type == HIRAID_CMD_PTHRU) {
+ head = &hdev->io_pt_list;
+ slock = &hdev->io_pt_lock;
+ }
+
+ spin_lock_irqsave(slock, flags);
+ WRITE_ONCE(cmd->state, CMD_IDLE);
+ list_add_tail(&cmd->list, head);
+ spin_unlock_irqrestore(slock, flags);
+}
+
+static bool hiraid_admin_need_reset(struct hiraid_admin_command *cmd)
+{
+ switch (cmd->common.opcode) {
+ case HIRAID_ADMIN_DELETE_SQ:
+ case HIRAID_ADMIN_CREATE_SQ:
+ case HIRAID_ADMIN_DELETE_CQ:
+ case HIRAID_ADMIN_CREATE_CQ:
+ case HIRAID_ADMIN_SET_FEATURES:
+ return false;
+ default:
+ return true;
+ }
+}
+
+static int hiraid_reset_work_sync(struct hiraid_dev *hdev);
+static inline void hiraid_admin_timeout(struct hiraid_dev *hdev, struct hiraid_cmd *cmd)
+{
+ /* command may be returned because controller reset */
+ if (READ_ONCE(cmd->state) == CMD_COMPLETE)
+ return;
+ if (hiraid_reset_work_sync(hdev) == -EBUSY)
+ flush_work(&hdev->reset_work);
+}
+
+static int hiraid_put_admin_sync_request(struct hiraid_dev *hdev, struct hiraid_admin_command *cmd,
+ u32 *result0, u32 *result1, u32 timeout)
+{
+ struct hiraid_cmd *adm_cmd = hiraid_get_cmd(hdev, HIRAID_CMD_ADMIN);
+
+ if (!adm_cmd) {
+ dev_err(hdev->dev, "err, get admin cmd failed\n");
+ return -EFAULT;
+ }
+
+ timeout = timeout ? timeout : ADMIN_TIMEOUT;
+
+ init_completion(&adm_cmd->cmd_done);
+
+ cmd->common.cmd_id = cpu_to_le16(adm_cmd->cid);
+ hiraid_submit_cmd(&hdev->queues[0], cmd);
+
+ if (!wait_for_completion_timeout(&adm_cmd->cmd_done, timeout)) {
+ dev_err(hdev->dev, "cid[%d] qid[%d] timeout, opcode[0x%x] subopcode[0x%x]\n",
+ adm_cmd->cid, adm_cmd->qid, cmd->usr_cmd.opcode,
+ cmd->usr_cmd.info_0.subopcode);
+
+ /* reset controller if admin timeout */
+ if (hiraid_admin_need_reset(cmd))
+ hiraid_admin_timeout(hdev, adm_cmd);
+
+ hiraid_put_cmd(hdev, adm_cmd, HIRAID_CMD_ADMIN);
+ return -ETIME;
+ }
+
+ if (result0)
+ *result0 = adm_cmd->result0;
+ if (result1)
+ *result1 = adm_cmd->result1;
+
+ hiraid_put_cmd(hdev, adm_cmd, HIRAID_CMD_ADMIN);
+
+ return adm_cmd->status;
+}
+
+/**
+ * hiraid_create_cq - send cmd to controller for create controller cq
+ */
+static int hiraid_create_complete_queue(struct hiraid_dev *hdev, u16 qid,
+ struct hiraid_queue *hiraidq, u16 cq_vector)
+{
+ struct hiraid_admin_command admin_cmd;
+ int flags = HIRAID_QUEUE_PHYS_CONTIG | HIRAID_CQ_IRQ_ENABLED;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.create_cq.opcode = HIRAID_ADMIN_CREATE_CQ;
+ admin_cmd.create_cq.prp1 = cpu_to_le64(hiraidq->cq_buffer_phy);
+ admin_cmd.create_cq.cqid = cpu_to_le16(qid);
+ admin_cmd.create_cq.qsize = cpu_to_le16(hiraidq->q_depth - 1);
+ admin_cmd.create_cq.cq_flags = cpu_to_le16(flags);
+ admin_cmd.create_cq.irq_vector = cpu_to_le16(cq_vector);
+
+ return hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+}
+
+/**
+ * hiraid_create_sq - send cmd to controller for create controller sq
+ */
+static int hiraid_create_send_queue(struct hiraid_dev *hdev, u16 qid,
+ struct hiraid_queue *hiraidq)
+{
+ struct hiraid_admin_command admin_cmd;
+ int flags = HIRAID_QUEUE_PHYS_CONTIG;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.create_sq.opcode = HIRAID_ADMIN_CREATE_SQ;
+ admin_cmd.create_sq.prp1 = cpu_to_le64(hiraidq->sq_buffer_phy);
+ admin_cmd.create_sq.sqid = cpu_to_le16(qid);
+ admin_cmd.create_sq.qsize = cpu_to_le16(hiraidq->q_depth - 1);
+ admin_cmd.create_sq.sq_flags = cpu_to_le16(flags);
+ admin_cmd.create_sq.cqid = cpu_to_le16(qid);
+
+ return hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+}
+
+static void hiraid_free_all_queues(struct hiraid_dev *hdev)
+{
+ int i;
+ struct hiraid_queue *hq;
+
+ for (i = 0; i < hdev->queue_count; i++) {
+ hq = &hdev->queues[i];
+ dma_free_coherent(hdev->dev, CQ_SIZE(hq->q_depth),
+ (void *)hq->cqes, hq->cq_buffer_phy);
+ dma_free_coherent(hdev->dev, SQ_SIZE(hq->qid, hq->q_depth),
+ hq->sq_cmds, hq->sq_buffer_phy);
+ if (!work_mode)
+ dma_free_coherent(hdev->dev, SENSE_SIZE(hq->q_depth),
+ hq->sense_buffer_virt, hq->sense_buffer_phy);
+ }
+
+ hdev->queue_count = 0;
+}
+
+static void hiraid_free_sense_buffer(struct hiraid_dev *hdev)
+{
+ if (hdev->sense_buffer_virt) {
+ dma_free_coherent(hdev->dev,
+ SENSE_SIZE(hdev->scsi_qd + max_hwq_num * HIRAID_PTHRU_CMDS_PERQ),
+ hdev->sense_buffer_virt, hdev->sense_buffer_phy);
+ hdev->sense_buffer_virt = NULL;
+ }
+}
+
+static int hiraid_delete_queue(struct hiraid_dev *hdev, u8 opcode, u16 qid)
+{
+ struct hiraid_admin_command admin_cmd;
+ int ret;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.delete_queue.opcode = opcode;
+ admin_cmd.delete_queue.qid = cpu_to_le16(qid);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+
+ if (ret)
+ dev_err(hdev->dev, "delete %s:[%d] failed\n",
+ (opcode == HIRAID_ADMIN_DELETE_CQ) ? "cq" : "sq", qid);
+
+ return ret;
+}
+
+static int hiraid_delete_complete_queue(struct hiraid_dev *hdev, u16 cqid)
+{
+ return hiraid_delete_queue(hdev, HIRAID_ADMIN_DELETE_CQ, cqid);
+}
+
+static int hiraid_delete_send_queue(struct hiraid_dev *hdev, u16 sqid)
+{
+ return hiraid_delete_queue(hdev, HIRAID_ADMIN_DELETE_SQ, sqid);
+}
+
+static int hiraid_create_queue(struct hiraid_queue *hiraidq, u16 qid)
+{
+ struct hiraid_dev *hdev = hiraidq->hdev;
+ u16 cq_vector;
+ int ret;
+
+ cq_vector = (hdev->num_vecs == 1) ? 0 : qid;
+ ret = hiraid_create_complete_queue(hdev, qid, hiraidq, cq_vector);
+ if (ret)
+ return ret;
+
+ ret = hiraid_create_send_queue(hdev, qid, hiraidq);
+ if (ret)
+ goto delete_cq;
+
+ hiraidq->cq_vector = cq_vector;
+ ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_handle_irq, NULL,
+ hiraidq, "hiraid%d_q%d", hdev->instance, qid);
+ if (ret) {
+ hiraidq->cq_vector = -1;
+ dev_err(hdev->dev, "request queue[%d] irq failed\n", qid);
+ goto delete_sq;
+ }
+
+ hiraid_init_queue(hiraidq, qid);
+
+ return 0;
+
+delete_sq:
+ hiraid_delete_send_queue(hdev, qid);
+delete_cq:
+ hiraid_delete_complete_queue(hdev, qid);
+
+ return ret;
+}
+
+static int hiraid_create_io_queues(struct hiraid_dev *hdev)
+{
+ u32 i, max;
+ int ret = 0;
+
+ max = min(hdev->max_qid, hdev->queue_count - 1);
+ for (i = hdev->online_queues; i <= max; i++) {
+ ret = hiraid_create_queue(&hdev->queues[i], i);
+ if (ret) {
+ dev_err(hdev->dev, "create queue[%d] failed\n", i);
+ break;
+ }
+ }
+
+ if (!hdev->last_qcnt)
+ hdev->last_qcnt = hdev->online_queues;
+
+ dev_info(hdev->dev, "queue_count[%d] online_queue[%d] last_online[%d]",
+ hdev->queue_count, hdev->online_queues, hdev->last_qcnt);
+
+ return ret >= 0 ? 0 : ret;
+}
+
+static int hiraid_set_features(struct hiraid_dev *hdev, u32 fid, u32 dword11, void *buffer,
+ size_t buflen, u32 *result)
+{
+ struct hiraid_admin_command admin_cmd;
+ int ret;
+ u8 *data_ptr = NULL;
+ dma_addr_t buffer_phy = 0;
+
+ if (buffer && buflen) {
+ data_ptr = dma_alloc_coherent(hdev->dev, buflen, &buffer_phy, GFP_KERNEL);
+ if (!data_ptr)
+ return -ENOMEM;
+
+ memcpy(data_ptr, buffer, buflen);
+ }
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.features.opcode = HIRAID_ADMIN_SET_FEATURES;
+ admin_cmd.features.fid = cpu_to_le32(fid);
+ admin_cmd.features.dword11 = cpu_to_le32(dword11);
+ admin_cmd.common.dptr.prp1 = cpu_to_le64(buffer_phy);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, result, NULL, 0);
+
+ if (data_ptr)
+ dma_free_coherent(hdev->dev, buflen, data_ptr, buffer_phy);
+
+ return ret;
+}
+
+static int hiraid_configure_timestamp(struct hiraid_dev *hdev)
+{
+ __le64 timestamp;
+ int ret;
+
+ timestamp = cpu_to_le64(ktime_to_ms(ktime_get_real()));
+ ret = hiraid_set_features(hdev, HIRAID_FEATURE_TIMESTAMP, 0,
+ ×tamp, sizeof(timestamp), NULL);
+
+ if (ret)
+ dev_err(hdev->dev, "set timestamp failed[%d]\n", ret);
+ return ret;
+}
+
+static int hiraid_get_queue_cnt(struct hiraid_dev *hdev, u32 *cnt)
+{
+ u32 q_cnt = (*cnt - 1) | ((*cnt - 1) << 16);
+ u32 nr_ioqs, result;
+ int status;
+
+ status = hiraid_set_features(hdev, HIRAID_FEATURE_NUM_QUEUES, q_cnt, NULL, 0, &result);
+ if (status) {
+ dev_err(hdev->dev, "set queue count failed, status[%d]\n",
+ status);
+ return -EIO;
+ }
+
+ nr_ioqs = min(result & 0xffff, result >> 16) + 1;
+ *cnt = min(*cnt, nr_ioqs);
+ if (*cnt == 0) {
+ dev_err(hdev->dev, "illegal qcount: zero, nr_ioqs[%d], cnt[%d]\n", nr_ioqs, *cnt);
+ return -EIO;
+ }
+ return 0;
+}
+
+static int hiraid_setup_io_queues(struct hiraid_dev *hdev)
+{
+ struct hiraid_queue *adminq = &hdev->queues[0];
+ struct pci_dev *pdev = hdev->pdev;
+ u32 i, size, nr_ioqs;
+ int ret;
+
+ struct irq_affinity affd = {
+ .pre_vectors = 1
+ };
+
+ /* alloc IO sense buffer for single hw queue mode */
+ if (work_mode && !hdev->sense_buffer_virt) {
+ hdev->sense_buffer_virt = dma_alloc_coherent(hdev->dev,
+ SENSE_SIZE(hdev->scsi_qd + max_hwq_num * HIRAID_PTHRU_CMDS_PERQ),
+ &hdev->sense_buffer_phy, GFP_KERNEL | __GFP_ZERO);
+ if (!hdev->sense_buffer_virt)
+ return -ENOMEM;
+ }
+
+ nr_ioqs = min(num_online_cpus(), max_hwq_num);
+ ret = hiraid_get_queue_cnt(hdev, &nr_ioqs);
+ if (ret < 0)
+ return ret;
+
+ size = hiraid_get_bar_size(hdev, nr_ioqs);
+ ret = hiraid_remap_bar(hdev, size);
+ if (ret)
+ return -ENOMEM;
+
+ adminq->q_db = hdev->dbs;
+
+ pci_free_irq(pdev, 0, adminq);
+ pci_free_irq_vectors(pdev);
+ hdev->online_queues--;
+
+ ret = pci_alloc_irq_vectors_affinity(pdev, 1, (nr_ioqs + 1),
+ PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
+ if (ret <= 0)
+ return -EIO;
+
+ hdev->num_vecs = ret;
+ hdev->max_qid = max(ret - 1, 1);
+
+ ret = pci_request_irq(pdev, adminq->cq_vector, hiraid_handle_irq, NULL,
+ adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
+ if (ret) {
+ dev_err(hdev->dev, "request admin irq failed\n");
+ adminq->cq_vector = -1;
+ return ret;
+ }
+
+ hdev->online_queues++;
+
+ for (i = hdev->queue_count; i <= hdev->max_qid; i++) {
+ ret = hiraid_alloc_queue(hdev, i, hdev->ioq_depth);
+ if (ret)
+ break;
+ }
+ dev_info(hdev->dev, "max_qid[%d] queuecount[%d] onlinequeue[%d] ioqdepth[%d]\n",
+ hdev->max_qid, hdev->queue_count, hdev->online_queues, hdev->ioq_depth);
+
+ return hiraid_create_io_queues(hdev);
+}
+
+static void hiraid_delete_io_queues(struct hiraid_dev *hdev)
+{
+ u16 queues = hdev->online_queues - 1;
+ u8 opcode = HIRAID_ADMIN_DELETE_SQ;
+ u16 i, pass;
+
+ if (!pci_device_is_present(hdev->pdev)) {
+ dev_err(hdev->dev, "pci_device is not present, skip disable io queues\n");
+ return;
+ }
+
+ if (hdev->online_queues < 2) {
+ dev_err(hdev->dev, "err, io queue has been delete\n");
+ return;
+ }
+
+ for (pass = 0; pass < 2; pass++) {
+ for (i = queues; i > 0; i--)
+ if (hiraid_delete_queue(hdev, opcode, i))
+ break;
+
+ opcode = HIRAID_ADMIN_DELETE_CQ;
+ }
+}
+
+static void hiraid_pci_disable(struct hiraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ u32 i;
+
+ for (i = 0; i < hdev->online_queues; i++)
+ pci_free_irq(pdev, hdev->queues[i].cq_vector, &hdev->queues[i]);
+ pci_free_irq_vectors(pdev);
+ if (pci_is_enabled(pdev)) {
+ pci_disable_pcie_error_reporting(pdev);
+ pci_disable_device(pdev);
+ }
+ hdev->online_queues = 0;
+}
+
+static void hiraid_disable_admin_queue(struct hiraid_dev *hdev, bool shutdown)
+{
+ struct hiraid_queue *adminq = &hdev->queues[0];
+ u16 start, end;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ if (shutdown)
+ hiraid_shutdown_control(hdev);
+ else
+ hiraid_disable_control(hdev);
+ }
+
+ if (hdev->queue_count == 0) {
+ dev_err(hdev->dev, "err, admin queue has been delete\n");
+ return;
+ }
+
+ spin_lock_irq(&adminq->cq_lock);
+ hiraid_process_cq(adminq, &start, &end, -1);
+ spin_unlock_irq(&adminq->cq_lock);
+ hiraid_complete_cqes(adminq, start, end);
+}
+
+static int hiraid_create_prp_pools(struct hiraid_dev *hdev)
+{
+ int i;
+ char poolname[20] = { 0 };
+
+ hdev->prp_page_pool = dma_pool_create("prp list page", hdev->dev,
+ PAGE_SIZE, PAGE_SIZE, 0);
+
+ if (!hdev->prp_page_pool) {
+ dev_err(hdev->dev, "create prp_page_pool failed\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < extra_pool_num; i++) {
+ sprintf(poolname, "prp_list_256_%d", i);
+ hdev->prp_extra_pool[i] = dma_pool_create(poolname, hdev->dev, EXTRA_POOL_SIZE,
+ EXTRA_POOL_SIZE, 0);
+
+ if (!hdev->prp_extra_pool[i]) {
+ dev_err(hdev->dev, "create prp extra pool[%d] failed\n", i);
+ goto destroy_prp_extra_pool;
+ }
+ }
+
+ return 0;
+
+destroy_prp_extra_pool:
+ while (i > 0)
+ dma_pool_destroy(hdev->prp_extra_pool[--i]);
+ dma_pool_destroy(hdev->prp_page_pool);
+
+ return -ENOMEM;
+}
+
+static void hiraid_free_prp_pools(struct hiraid_dev *hdev)
+{
+ int i;
+
+ for (i = 0; i < extra_pool_num; i++)
+ dma_pool_destroy(hdev->prp_extra_pool[i]);
+ dma_pool_destroy(hdev->prp_page_pool);
+}
+
+static int hiraid_request_devices(struct hiraid_dev *hdev, struct hiraid_dev_info *dev)
+{
+ u32 nd = le32_to_cpu(hdev->ctrl_info->nd);
+ struct hiraid_admin_command admin_cmd;
+ struct hiraid_dev_list *list_buf;
+ dma_addr_t buffer_phy = 0;
+ u32 i, idx, hdid, ndev;
+ int ret = 0;
+
+ list_buf = dma_alloc_coherent(hdev->dev, PAGE_SIZE, &buffer_phy, GFP_KERNEL);
+ if (!list_buf)
+ return -ENOMEM;
+
+ for (idx = 0; idx < nd;) {
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.get_info.opcode = HIRAID_ADMIN_GET_INFO;
+ admin_cmd.get_info.type = HIRAID_GET_DEVLIST_INFO;
+ admin_cmd.get_info.cdw11 = cpu_to_le32(idx);
+ admin_cmd.common.dptr.prp1 = cpu_to_le64(buffer_phy);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+
+ if (ret) {
+ dev_err(hdev->dev, "get device list failed, nd[%u] idx[%u] ret[%d]\n",
+ nd, idx, ret);
+ goto out;
+ }
+ ndev = le32_to_cpu(list_buf->dev_num);
+
+ dev_info(hdev->dev, "get dev list ndev num[%u]\n", ndev);
+
+ for (i = 0; i < ndev; i++) {
+ hdid = le32_to_cpu(list_buf->devinfo[i].hdid);
+ dev_info(hdev->dev, "devices[%d], hdid[%u] target[%d] channel[%d] lun[%d] attr[0x%x]\n",
+ i, hdid, le16_to_cpu(list_buf->devinfo[i].target),
+ list_buf->devinfo[i].channel,
+ list_buf->devinfo[i].lun,
+ list_buf->devinfo[i].attr);
+ if (hdid > nd || hdid == 0) {
+ dev_err(hdev->dev, "err, hdid[%d] invalid\n", hdid);
+ continue;
+ }
+ memcpy(&dev[hdid - 1], &list_buf->devinfo[i],
+ sizeof(struct hiraid_dev_info));
+ }
+ idx += ndev;
+
+ if (ndev < MAX_DEV_ENTRY_PER_PAGE_4K)
+ break;
+ }
+
+out:
+ dma_free_coherent(hdev->dev, PAGE_SIZE, list_buf, buffer_phy);
+ return ret;
+}
+
+static void hiraid_send_async_event(struct hiraid_dev *hdev, u16 cid)
+{
+ struct hiraid_queue *adminq = &hdev->queues[0];
+ struct hiraid_admin_command admin_cmd;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.common.opcode = HIRAID_ADMIN_ASYNC_EVENT;
+ admin_cmd.common.cmd_id = cpu_to_le16(cid);
+
+ hiraid_submit_cmd(adminq, &admin_cmd);
+ dev_info(hdev->dev, "send async event to controller, cid[%d]\n", cid);
+}
+
+static inline void hiraid_init_async_event(struct hiraid_dev *hdev)
+{
+ u16 i;
+
+ for (i = 0; i < hdev->ctrl_info->asynevent; i++)
+ hiraid_send_async_event(hdev, i + HIRAID_AQ_BLK_MQ_DEPTH);
+}
+
+static int hiraid_add_device(struct hiraid_dev *hdev, struct hiraid_dev_info *devinfo)
+{
+ struct Scsi_Host *shost = hdev->shost;
+ struct scsi_device *sdev;
+
+ dev_info(hdev->dev, "add device, hdid[%u] target[%d] channel[%d] lun[%d] attr[0x%x]\n",
+ le32_to_cpu(devinfo->hdid), le16_to_cpu(devinfo->target),
+ devinfo->channel, devinfo->lun, devinfo->attr);
+
+ sdev = scsi_device_lookup(shost, devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ if (sdev) {
+ dev_warn(hdev->dev, "device is already exist, channel[%d] targetid[%d] lun[%d]\n",
+ devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ scsi_device_put(sdev);
+ return -EEXIST;
+ }
+ scsi_add_device(shost, devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ return 0;
+}
+
+static int hiraid_rescan_device(struct hiraid_dev *hdev, struct hiraid_dev_info *devinfo)
+{
+ struct Scsi_Host *shost = hdev->shost;
+ struct scsi_device *sdev;
+
+ dev_info(hdev->dev, "rescan device, hdid[%u] target[%d] channel[%d] lun[%d] attr[0x%x]\n",
+ le32_to_cpu(devinfo->hdid), le16_to_cpu(devinfo->target),
+ devinfo->channel, devinfo->lun, devinfo->attr);
+
+ sdev = scsi_device_lookup(shost, devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ if (!sdev) {
+ dev_warn(hdev->dev, "device is not exit rescan it, channel[%d] target_id[%d] lun[%d]\n",
+ devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ return -ENODEV;
+ }
+
+ scsi_rescan_device(&sdev->sdev_gendev);
+ scsi_device_put(sdev);
+ return 0;
+}
+
+static int hiraid_delete_device(struct hiraid_dev *hdev, struct hiraid_dev_info *devinfo)
+{
+ struct Scsi_Host *shost = hdev->shost;
+ struct scsi_device *sdev;
+
+ dev_info(hdev->dev, "remove device, hdid[%u] target[%d] channel[%d] lun[%d] attr[0x%x]\n",
+ le32_to_cpu(devinfo->hdid), le16_to_cpu(devinfo->target),
+ devinfo->channel, devinfo->lun, devinfo->attr);
+
+ sdev = scsi_device_lookup(shost, devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ if (!sdev) {
+ dev_warn(hdev->dev, "device is not exit remove it, channel[%d] target_id[%d] lun[%d]\n",
+ devinfo->channel, le16_to_cpu(devinfo->target), 0);
+ return -ENODEV;
+ }
+
+ scsi_remove_device(sdev);
+ scsi_device_put(sdev);
+ return 0;
+}
+
+static int hiraid_dev_list_init(struct hiraid_dev *hdev)
+{
+ u32 nd = le32_to_cpu(hdev->ctrl_info->nd);
+
+ hdev->dev_info = kzalloc_node(nd * sizeof(struct hiraid_dev_info),
+ GFP_KERNEL, hdev->numa_node);
+ if (!hdev->dev_info)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int hiraid_luntarget_sort(const void *l, const void *r)
+{
+ const struct hiraid_dev_info *ln = l;
+ const struct hiraid_dev_info *rn = r;
+ int l_attr = HIRAID_DEV_INFO_ATTR_BOOT(ln->attr);
+ int r_attr = HIRAID_DEV_INFO_ATTR_BOOT(rn->attr);
+
+ /* boot first */
+ if (l_attr != r_attr)
+ return (r_attr - l_attr);
+
+ if (ln->channel == rn->channel)
+ return le16_to_cpu(ln->target) - le16_to_cpu(rn->target);
+
+ return ln->channel - rn->channel;
+}
+
+static void hiraid_scan_work(struct work_struct *work)
+{
+ struct hiraid_dev *hdev =
+ container_of(work, struct hiraid_dev, scan_work);
+ struct hiraid_dev_info *dev, *old_dev, *new_dev;
+ u32 nd = le32_to_cpu(hdev->ctrl_info->nd);
+ u8 flag, org_flag;
+ int i, ret;
+ int count = 0;
+
+ dev = kcalloc(nd, sizeof(struct hiraid_dev_info), GFP_KERNEL);
+ if (!dev)
+ return;
+
+ new_dev = kcalloc(nd, sizeof(struct hiraid_dev_info), GFP_KERNEL);
+ if (!new_dev)
+ goto free_list;
+
+ ret = hiraid_request_devices(hdev, dev);
+ if (ret)
+ goto free_all;
+ old_dev = hdev->dev_info;
+ for (i = 0; i < nd; i++) {
+ org_flag = old_dev[i].flag;
+ flag = dev[i].flag;
+
+ dev_log_dbg(hdev->dev, "i[%d] org_flag[0x%x] flag[0x%x]\n", i, org_flag, flag);
+
+ if (HIRAID_DEV_INFO_FLAG_VALID(flag)) {
+ if (!HIRAID_DEV_INFO_FLAG_VALID(org_flag)) {
+ down_write(&hdev->dev_rwsem);
+ memcpy(&old_dev[i], &dev[i],
+ sizeof(struct hiraid_dev_info));
+ memcpy(&new_dev[count++], &dev[i],
+ sizeof(struct hiraid_dev_info));
+ up_write(&hdev->dev_rwsem);
+ } else if (HIRAID_DEV_INFO_FLAG_CHANGE(flag)) {
+ hiraid_rescan_device(hdev, &dev[i]);
+ }
+ } else {
+ if (HIRAID_DEV_INFO_FLAG_VALID(org_flag)) {
+ down_write(&hdev->dev_rwsem);
+ old_dev[i].flag &= 0xfe;
+ up_write(&hdev->dev_rwsem);
+ hiraid_delete_device(hdev, &old_dev[i]);
+ }
+ }
+ }
+
+ dev_info(hdev->dev, "scan work add device num[%d]\n", count);
+
+ sort(new_dev, count, sizeof(new_dev[0]), hiraid_luntarget_sort, NULL);
+
+ for (i = 0; i < count; i++)
+ hiraid_add_device(hdev, &new_dev[i]);
+
+free_all:
+ kfree(new_dev);
+free_list:
+ kfree(dev);
+}
+
+static void hiraid_timesyn_work(struct work_struct *work)
+{
+ struct hiraid_dev *hdev =
+ container_of(work, struct hiraid_dev, timesyn_work);
+
+ hiraid_configure_timestamp(hdev);
+}
+
+static int hiraid_init_control_info(struct hiraid_dev *hdev);
+static void hiraid_fwactive_work(struct work_struct *work)
+{
+ struct hiraid_dev *hdev = container_of(work, struct hiraid_dev, fwact_work);
+
+ if (hiraid_init_control_info(hdev))
+ dev_err(hdev->dev, "get controller info failed after fw activation\n");
+}
+
+static void hiraid_queue_scan(struct hiraid_dev *hdev)
+{
+ queue_work(work_queue, &hdev->scan_work);
+}
+
+static void hiraid_handle_async_notice(struct hiraid_dev *hdev, u32 result)
+{
+ switch ((result & 0xff00) >> 8) {
+ case HIRAID_ASYN_DEV_CHANGED:
+ hiraid_queue_scan(hdev);
+ break;
+ case HIRAID_ASYN_FW_ACT_START:
+ dev_info(hdev->dev, "fw activation starting\n");
+ break;
+ case HIRAID_ASYN_HOST_PROBING:
+ break;
+ default:
+ dev_warn(hdev->dev, "async event result[%08x]\n", result);
+ }
+}
+
+static void hiraid_handle_async_vs(struct hiraid_dev *hdev, u32 result, u32 result1)
+{
+ switch ((result & 0xff00) >> 8) {
+ case HIRAID_ASYN_TIMESYN:
+ queue_work(work_queue, &hdev->timesyn_work);
+ break;
+ case HIRAID_ASYN_FW_ACT_FINISH:
+ dev_info(hdev->dev, "fw activation finish\n");
+ queue_work(work_queue, &hdev->fwact_work);
+ break;
+ case HIRAID_ASYN_EVENT_MIN ... HIRAID_ASYN_EVENT_MAX:
+ dev_info(hdev->dev, "recv card event[%d] param1[0x%x] param2[0x%x]\n",
+ (result & 0xff00) >> 8, result, result1);
+ break;
+ default:
+ dev_warn(hdev->dev, "async event result[0x%x]\n", result);
+ }
+}
+
+static int hiraid_alloc_resources(struct hiraid_dev *hdev)
+{
+ int ret, nqueue;
+
+ hdev->ctrl_info = kzalloc_node(sizeof(*hdev->ctrl_info), GFP_KERNEL, hdev->numa_node);
+ if (!hdev->ctrl_info)
+ return -ENOMEM;
+
+ ret = hiraid_create_prp_pools(hdev);
+ if (ret)
+ goto free_ctrl_info;
+ nqueue = min(num_possible_cpus(), max_hwq_num) + 1;
+ hdev->queues = kcalloc_node(nqueue, sizeof(struct hiraid_queue),
+ GFP_KERNEL, hdev->numa_node);
+ if (!hdev->queues) {
+ ret = -ENOMEM;
+ goto destroy_dma_pools;
+ }
+
+ ret = hiraid_create_admin_cmds(hdev);
+ if (ret)
+ goto free_queues;
+
+ dev_info(hdev->dev, "total queues num[%d]\n", nqueue);
+
+ return 0;
+
+free_queues:
+ kfree(hdev->queues);
+destroy_dma_pools:
+ hiraid_free_prp_pools(hdev);
+free_ctrl_info:
+ kfree(hdev->ctrl_info);
+
+ return ret;
+}
+
+static void hiraid_free_resources(struct hiraid_dev *hdev)
+{
+ hiraid_free_admin_cmds(hdev);
+ kfree(hdev->queues);
+ hiraid_free_prp_pools(hdev);
+ kfree(hdev->ctrl_info);
+}
+
+static void hiraid_bsg_buf_unmap(struct hiraid_dev *hdev, struct bsg_job *job)
+{
+ struct request *rq = blk_mq_rq_from_pdu(job);
+ struct hiraid_mapmange *mapbuf = job->dd_data;
+ enum dma_data_direction dma_dir = rq_data_dir(rq) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+ if (mapbuf->sge_cnt)
+ dma_unmap_sg(hdev->dev, mapbuf->sgl, mapbuf->sge_cnt, dma_dir);
+
+ hiraid_free_mapbuf(hdev, mapbuf);
+}
+
+static int hiraid_bsg_buf_map(struct hiraid_dev *hdev, struct bsg_job *job,
+ struct hiraid_admin_command *cmd)
+{
+ struct hiraid_bsg_request *bsg_req = job->request;
+ struct request *rq = blk_mq_rq_from_pdu(job);
+ struct hiraid_mapmange *mapbuf = job->dd_data;
+ enum dma_data_direction dma_dir = rq_data_dir(rq) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+ int ret = 0;
+
+ /* No data to DMA, it may be scsi no-rw command */
+ mapbuf->sge_cnt = job->request_payload.sg_cnt;
+ mapbuf->sgl = job->request_payload.sg_list;
+ mapbuf->len = job->request_payload.payload_len;
+ mapbuf->page_cnt = -1;
+ if (unlikely(mapbuf->sge_cnt == 0))
+ goto out;
+
+ mapbuf->use_sgl = !hiraid_is_prp(hdev, mapbuf->sgl, mapbuf->sge_cnt);
+
+ ret = dma_map_sg_attrs(hdev->dev, mapbuf->sgl, mapbuf->sge_cnt, dma_dir, DMA_ATTR_NO_WARN);
+ if (!ret)
+ goto out;
+
+ if ((mapbuf->use_sgl == (bool)true) && (bsg_req->msgcode == HIRAID_BSG_IOPTHRU) &&
+ (hdev->ctrl_info->pt_use_sgl != (bool)false)) {
+ ret = hiraid_build_passthru_sgl(hdev, cmd, mapbuf);
+ } else {
+ mapbuf->use_sgl = false;
+
+ ret = hiraid_build_passthru_prp(hdev, mapbuf);
+ cmd->common.dptr.prp1 = cpu_to_le64(sg_dma_address(mapbuf->sgl));
+ cmd->common.dptr.prp2 = cpu_to_le64(mapbuf->first_dma);
+ }
+
+ if (ret)
+ goto unmap;
+
+ return 0;
+
+unmap:
+ dma_unmap_sg(hdev->dev, mapbuf->sgl, mapbuf->sge_cnt, dma_dir);
+out:
+ return ret;
+}
+
+static int hiraid_get_control_info(struct hiraid_dev *hdev, struct hiraid_ctrl_info *ctrl_info)
+{
+ struct hiraid_admin_command admin_cmd;
+ u8 *data_ptr = NULL;
+ dma_addr_t buffer_phy = 0;
+ int ret;
+
+ data_ptr = dma_alloc_coherent(hdev->dev, PAGE_SIZE, &buffer_phy, GFP_KERNEL);
+ if (!data_ptr)
+ return -ENOMEM;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.get_info.opcode = HIRAID_ADMIN_GET_INFO;
+ admin_cmd.get_info.type = HIRAID_GET_CTRL_INFO;
+ admin_cmd.common.dptr.prp1 = cpu_to_le64(buffer_phy);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+ if (!ret)
+ memcpy(ctrl_info, data_ptr, sizeof(struct hiraid_ctrl_info));
+
+ dma_free_coherent(hdev->dev, PAGE_SIZE, data_ptr, buffer_phy);
+
+ return ret;
+}
+
+static int hiraid_init_control_info(struct hiraid_dev *hdev)
+{
+ int ret;
+
+ hdev->ctrl_info->nd = cpu_to_le32(240);
+ hdev->ctrl_info->mdts = 8;
+ hdev->ctrl_info->max_cmds = cpu_to_le16(4096);
+ hdev->ctrl_info->max_num_sge = cpu_to_le16(128);
+ hdev->ctrl_info->max_channel = cpu_to_le16(4);
+ hdev->ctrl_info->max_tgt_id = cpu_to_le32(3239);
+ hdev->ctrl_info->max_lun = cpu_to_le16(2);
+
+ ret = hiraid_get_control_info(hdev, hdev->ctrl_info);
+ if (ret)
+ dev_err(hdev->dev, "get controller info failed[%d]\n", ret);
+
+ dev_info(hdev->dev, "device_num = %d\n", hdev->ctrl_info->nd);
+ dev_info(hdev->dev, "max_cmd = %d\n", hdev->ctrl_info->max_cmds);
+ dev_info(hdev->dev, "max_channel = %d\n", hdev->ctrl_info->max_channel);
+ dev_info(hdev->dev, "max_tgt_id = %d\n", hdev->ctrl_info->max_tgt_id);
+ dev_info(hdev->dev, "max_lun = %d\n", hdev->ctrl_info->max_lun);
+ dev_info(hdev->dev, "max_num_sge = %d\n", hdev->ctrl_info->max_num_sge);
+ dev_info(hdev->dev, "lun_num_boot = %d\n", hdev->ctrl_info->lun_num_boot);
+ dev_info(hdev->dev, "max_data_transfer_size = %d\n", hdev->ctrl_info->mdts);
+ dev_info(hdev->dev, "abort_cmd_limit = %d\n", hdev->ctrl_info->acl);
+ dev_info(hdev->dev, "asyn_event_num = %d\n", hdev->ctrl_info->asynevent);
+ dev_info(hdev->dev, "card_type = %d\n", hdev->ctrl_info->card_type);
+ dev_info(hdev->dev, "pt_use_sgl = %d\n", hdev->ctrl_info->pt_use_sgl);
+ dev_info(hdev->dev, "rtd3e = %d\n", hdev->ctrl_info->rtd3e);
+ dev_info(hdev->dev, "serial_num = %s\n", hdev->ctrl_info->sn);
+ dev_info(hdev->dev, "fw_verion = %s\n", hdev->ctrl_info->fw_version);
+
+ if (!hdev->ctrl_info->asynevent)
+ hdev->ctrl_info->asynevent = 1;
+ if (hdev->ctrl_info->asynevent > HIRAID_ASYN_COMMANDS)
+ hdev->ctrl_info->asynevent = HIRAID_ASYN_COMMANDS;
+
+ hdev->scsi_qd = work_mode ?
+ le16_to_cpu(hdev->ctrl_info->max_cmds) : (hdev->ioq_depth - HIRAID_PTHRU_CMDS_PERQ);
+
+ return 0;
+}
+
+static int hiraid_user_send_admcmd(struct hiraid_dev *hdev, struct bsg_job *job)
+{
+ struct hiraid_bsg_request *bsg_req = job->request;
+ struct hiraid_passthru_common_cmd *ptcmd = &(bsg_req->admcmd);
+ struct hiraid_admin_command admin_cmd;
+ u32 timeout = msecs_to_jiffies(ptcmd->timeout_ms);
+ u32 result[2] = {0};
+ int status;
+
+ if (hdev->state >= DEV_RESETTING) {
+ dev_err(hdev->dev, "err, host state[%d] is not right\n",
+ hdev->state);
+ return -EBUSY;
+ }
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.common.opcode = ptcmd->opcode;
+ admin_cmd.common.flags = ptcmd->flags;
+ admin_cmd.common.hdid = cpu_to_le32(ptcmd->nsid);
+ admin_cmd.common.cdw2[0] = cpu_to_le32(ptcmd->cdw2);
+ admin_cmd.common.cdw2[1] = cpu_to_le32(ptcmd->cdw3);
+ admin_cmd.common.cdw10 = cpu_to_le32(ptcmd->cdw10);
+ admin_cmd.common.cdw11 = cpu_to_le32(ptcmd->cdw11);
+ admin_cmd.common.cdw12 = cpu_to_le32(ptcmd->cdw12);
+ admin_cmd.common.cdw13 = cpu_to_le32(ptcmd->cdw13);
+ admin_cmd.common.cdw14 = cpu_to_le32(ptcmd->cdw14);
+ admin_cmd.common.cdw15 = cpu_to_le32(ptcmd->cdw15);
+
+ status = hiraid_bsg_buf_map(hdev, job, &admin_cmd);
+ if (status) {
+ dev_err(hdev->dev, "err, map data failed\n");
+ return status;
+ }
+
+ status = hiraid_put_admin_sync_request(hdev, &admin_cmd, &result[0], &result[1], timeout);
+ if (status >= 0) {
+ job->reply_len = sizeof(result);
+ memcpy(job->reply, result, sizeof(result));
+ }
+ if (status)
+ dev_info(hdev->dev, "opcode[0x%x] subopcode[0x%x] status[0x%x] result0[0x%x];"
+ "result1[0x%x]\n", ptcmd->opcode, ptcmd->info_0.subopcode, status,
+ result[0], result[1]);
+
+ hiraid_bsg_buf_unmap(hdev, job);
+
+ return status;
+}
+
+static int hiraid_alloc_io_ptcmds(struct hiraid_dev *hdev)
+{
+ u32 i;
+ u32 ptnum = HIRAID_TOTAL_PTCMDS(hdev->online_queues - 1);
+
+ INIT_LIST_HEAD(&hdev->io_pt_list);
+ spin_lock_init(&hdev->io_pt_lock);
+
+ hdev->io_ptcmds = kcalloc_node(ptnum, sizeof(struct hiraid_cmd),
+ GFP_KERNEL, hdev->numa_node);
+
+ if (!hdev->io_ptcmds) {
+ dev_err(hdev->dev, "alloc io pthrunum failed\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < ptnum; i++) {
+ hdev->io_ptcmds[i].qid = i / HIRAID_PTHRU_CMDS_PERQ + 1;
+ hdev->io_ptcmds[i].cid = i % HIRAID_PTHRU_CMDS_PERQ + hdev->scsi_qd;
+ list_add_tail(&(hdev->io_ptcmds[i].list), &hdev->io_pt_list);
+ }
+
+ dev_info(hdev->dev, "alloc io pthru cmd success, pthrunum[%d]\n", ptnum);
+
+ return 0;
+}
+
+static void hiraid_free_io_ptcmds(struct hiraid_dev *hdev)
+{
+ kfree(hdev->io_ptcmds);
+ hdev->io_ptcmds = NULL;
+
+ INIT_LIST_HEAD(&hdev->io_pt_list);
+}
+
+static int hiraid_put_io_sync_request(struct hiraid_dev *hdev, struct hiraid_scsi_io_cmd *io_cmd,
+ u32 *result, u32 *reslen, u32 timeout)
+{
+ int ret;
+ dma_addr_t buffer_phy;
+ struct hiraid_queue *ioq;
+ void *sense_addr = NULL;
+ struct hiraid_cmd *pt_cmd = hiraid_get_cmd(hdev, HIRAID_CMD_PTHRU);
+
+ if (!pt_cmd) {
+ dev_err(hdev->dev, "err, get ioq cmd failed\n");
+ return -EFAULT;
+ }
+
+ timeout = timeout ? timeout : ADMIN_TIMEOUT;
+
+ init_completion(&pt_cmd->cmd_done);
+
+ ioq = &hdev->queues[pt_cmd->qid];
+ if (work_mode) {
+ ret = ((pt_cmd->qid - 1) * HIRAID_PTHRU_CMDS_PERQ + pt_cmd->cid) *
+ SCSI_SENSE_BUFFERSIZE;
+ sense_addr = hdev->sense_buffer_virt + ret;
+ buffer_phy = hdev->sense_buffer_phy + ret;
+ } else {
+ ret = pt_cmd->cid * SCSI_SENSE_BUFFERSIZE;
+ sense_addr = ioq->sense_buffer_virt + ret;
+ buffer_phy = ioq->sense_buffer_phy + ret;
+ }
+
+ io_cmd->common.sense_addr = cpu_to_le64(buffer_phy);
+ io_cmd->common.sense_len = cpu_to_le16(SCSI_SENSE_BUFFERSIZE);
+ io_cmd->common.cmd_id = cpu_to_le16(pt_cmd->cid);
+
+ hiraid_submit_cmd(ioq, io_cmd);
+
+ if (!wait_for_completion_timeout(&pt_cmd->cmd_done, timeout)) {
+ dev_err(hdev->dev, "cid[%d] qid[%d] timeout, opcode[0x%x] subopcode[0x%x]\n",
+ pt_cmd->cid, pt_cmd->qid, io_cmd->common.opcode,
+ (le32_to_cpu(io_cmd->common.cdw3[0]) & 0xffff));
+
+ hiraid_admin_timeout(hdev, pt_cmd);
+
+ hiraid_put_cmd(hdev, pt_cmd, HIRAID_CMD_PTHRU);
+ return -ETIME;
+ }
+
+ if (result && reslen) {
+ if ((pt_cmd->status & 0x17f) == 0x101) {
+ memcpy(result, sense_addr, SCSI_SENSE_BUFFERSIZE);
+ *reslen = SCSI_SENSE_BUFFERSIZE;
+ }
+ }
+
+ hiraid_put_cmd(hdev, pt_cmd, HIRAID_CMD_PTHRU);
+
+ return pt_cmd->status;
+}
+
+static int hiraid_user_send_ptcmd(struct hiraid_dev *hdev, struct bsg_job *job)
+{
+ struct hiraid_bsg_request *bsg_req = (struct hiraid_bsg_request *)(job->request);
+ struct hiraid_passthru_io_cmd *cmd = &(bsg_req->pthrucmd);
+ struct hiraid_scsi_io_cmd pthru_cmd;
+ int status = 0;
+ u32 timeout = msecs_to_jiffies(cmd->timeout_ms);
+ // data len is 4k before use sgl, now len is 1M
+ u32 io_pt_data_len = (hdev->ctrl_info->pt_use_sgl == (bool)true) ?
+ IOQ_PT_SGL_DATA_LEN : IOQ_PT_DATA_LEN;
+
+ if (cmd->data_len > io_pt_data_len) {
+ dev_err(hdev->dev, "data len bigger than %d\n", io_pt_data_len);
+ return -EFAULT;
+ }
+
+ if (hdev->state != DEV_LIVE) {
+ dev_err(hdev->dev, "err, host state[%d] is not live\n", hdev->state);
+ return -EBUSY;
+ }
+
+ memset(&pthru_cmd, 0, sizeof(pthru_cmd));
+ pthru_cmd.common.opcode = cmd->opcode;
+ pthru_cmd.common.flags = cmd->flags;
+ pthru_cmd.common.hdid = cpu_to_le32(cmd->nsid);
+ pthru_cmd.common.sense_len = cpu_to_le16(cmd->info_0.res_sense_len);
+ pthru_cmd.common.cdb_len = cmd->info_0.cdb_len;
+ pthru_cmd.common.rsvd2 = cmd->info_0.rsvd0;
+ pthru_cmd.common.cdw3[0] = cpu_to_le32(cmd->cdw3);
+ pthru_cmd.common.cdw3[1] = cpu_to_le32(cmd->cdw4);
+ pthru_cmd.common.cdw3[2] = cpu_to_le32(cmd->cdw5);
+
+ pthru_cmd.common.cdw10[0] = cpu_to_le32(cmd->cdw10);
+ pthru_cmd.common.cdw10[1] = cpu_to_le32(cmd->cdw11);
+ pthru_cmd.common.cdw10[2] = cpu_to_le32(cmd->cdw12);
+ pthru_cmd.common.cdw10[3] = cpu_to_le32(cmd->cdw13);
+ pthru_cmd.common.cdw10[4] = cpu_to_le32(cmd->cdw14);
+ pthru_cmd.common.cdw10[5] = cpu_to_le32(cmd->data_len);
+
+ memcpy(pthru_cmd.common.cdb, &cmd->cdw16, cmd->info_0.cdb_len);
+
+ pthru_cmd.common.cdw26[0] = cpu_to_le32(cmd->cdw26[0]);
+ pthru_cmd.common.cdw26[1] = cpu_to_le32(cmd->cdw26[1]);
+ pthru_cmd.common.cdw26[2] = cpu_to_le32(cmd->cdw26[2]);
+ pthru_cmd.common.cdw26[3] = cpu_to_le32(cmd->cdw26[3]);
+
+ status = hiraid_bsg_buf_map(hdev, job, (struct hiraid_admin_command *)&pthru_cmd);
+ if (status) {
+ dev_err(hdev->dev, "err, map data failed\n");
+ return status;
+ }
+
+ status = hiraid_put_io_sync_request(hdev, &pthru_cmd, job->reply, &job->reply_len, timeout);
+
+ if (status)
+ dev_info(hdev->dev, "opcode[0x%x] subopcode[0x%x] status[0x%x] replylen[%d]\n",
+ cmd->opcode, cmd->info_1.subopcode, status, job->reply_len);
+
+ hiraid_bsg_buf_unmap(hdev, job);
+
+ return status;
+}
+
+static bool hiraid_check_scmd_finished(struct scsi_cmnd *scmd)
+{
+ struct hiraid_dev *hdev = shost_priv(scmd->device->host);
+ struct hiraid_mapmange *mapbuf = scsi_cmd_priv(scmd);
+ struct hiraid_queue *hiraidq;
+
+ hiraidq = mapbuf->hiraidq;
+ if (!hiraidq)
+ return false;
+ if (READ_ONCE(mapbuf->state) == CMD_COMPLETE || hiraid_poll_cq(hiraidq, mapbuf->cid)) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d] has been completed\n",
+ mapbuf->cid, hiraidq->qid);
+ return true;
+ }
+ return false;
+}
+
+static enum blk_eh_timer_return hiraid_timed_out(struct scsi_cmnd *scmd)
+{
+ struct hiraid_mapmange *mapbuf = scsi_cmd_priv(scmd);
+ unsigned int timeout = scmd->device->request_queue->rq_timeout;
+
+ if (hiraid_check_scmd_finished(scmd))
+ goto out;
+
+ if (time_after(jiffies, scmd->jiffies_at_alloc + timeout)) {
+ if (cmpxchg(&mapbuf->state, CMD_FLIGHT, CMD_TIMEOUT) == CMD_FLIGHT)
+ return BLK_EH_DONE;
+ }
+out:
+ return BLK_EH_RESET_TIMER;
+}
+
+/* send abort command by admin queue temporary */
+static int hiraid_send_abort_cmd(struct hiraid_dev *hdev, u32 hdid, u16 qid, u16 cid)
+{
+ struct hiraid_admin_command admin_cmd;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.abort.opcode = HIRAID_ADMIN_ABORT_CMD;
+ admin_cmd.abort.hdid = cpu_to_le32(hdid);
+ admin_cmd.abort.sqid = cpu_to_le16(qid);
+ admin_cmd.abort.cid = cpu_to_le16(cid);
+
+ return hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+}
+
+/* send reset command by admin quueue temporary */
+static int hiraid_send_reset_cmd(struct hiraid_dev *hdev, u8 type, u32 hdid)
+{
+ struct hiraid_admin_command admin_cmd;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.reset.opcode = HIRAID_ADMIN_RESET;
+ admin_cmd.reset.hdid = cpu_to_le32(hdid);
+ admin_cmd.reset.type = type;
+
+ return hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, 0);
+}
+
+static bool hiraid_dev_state_trans(struct hiraid_dev *hdev, enum hiraid_dev_state new_state)
+{
+ unsigned long flags;
+ enum hiraid_dev_state old_state;
+ bool change = false;
+
+ spin_lock_irqsave(&hdev->state_lock, flags);
+
+ old_state = hdev->state;
+ switch (new_state) {
+ case DEV_LIVE:
+ switch (old_state) {
+ case DEV_NEW:
+ case DEV_RESETTING:
+ change = true;
+ break;
+ default:
+ break;
+ }
+ break;
+ case DEV_RESETTING:
+ switch (old_state) {
+ case DEV_LIVE:
+ change = true;
+ break;
+ default:
+ break;
+ }
+ break;
+ case DEV_DELETING:
+ if (old_state != DEV_DELETING)
+ change = true;
+ break;
+ case DEV_DEAD:
+ switch (old_state) {
+ case DEV_NEW:
+ case DEV_LIVE:
+ case DEV_RESETTING:
+ change = true;
+ break;
+ default:
+ break;
+ }
+ break;
+ default:
+ break;
+ }
+ if (change)
+ hdev->state = new_state;
+ spin_unlock_irqrestore(&hdev->state_lock, flags);
+
+ dev_info(hdev->dev, "oldstate[%d]->newstate[%d], change[%d]\n",
+ old_state, new_state, change);
+
+ return change;
+}
+
+static void hiraid_drain_pending_ios(struct hiraid_dev *hdev);
+
+static void hiraid_flush_running_cmds(struct hiraid_dev *hdev)
+{
+ int i, j;
+
+ scsi_block_requests(hdev->shost);
+ hiraid_drain_pending_ios(hdev);
+ scsi_unblock_requests(hdev->shost);
+
+ j = HIRAID_AQ_BLK_MQ_DEPTH;
+ for (i = 0; i < j; i++) {
+ if (READ_ONCE(hdev->adm_cmds[i].state) == CMD_FLIGHT) {
+ dev_info(hdev->dev, "flush admin, cid[%d]\n", i);
+ hdev->adm_cmds[i].status = 0xFFFF;
+ WRITE_ONCE(hdev->adm_cmds[i].state, CMD_COMPLETE);
+ complete(&(hdev->adm_cmds[i].cmd_done));
+ }
+ }
+
+ j = HIRAID_TOTAL_PTCMDS(hdev->online_queues - 1);
+ for (i = 0; i < j; i++) {
+ if (READ_ONCE(hdev->io_ptcmds[i].state) == CMD_FLIGHT) {
+ hdev->io_ptcmds[i].status = 0xFFFF;
+ WRITE_ONCE(hdev->io_ptcmds[i].state, CMD_COMPLETE);
+ complete(&(hdev->io_ptcmds[i].cmd_done));
+ }
+ }
+}
+
+static int hiraid_dev_disable(struct hiraid_dev *hdev, bool shutdown)
+{
+ int ret = -ENODEV;
+ struct hiraid_queue *adminq = &hdev->queues[0];
+ u16 start, end;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ if (shutdown)
+ hiraid_shutdown_control(hdev);
+ else
+ ret = hiraid_disable_control(hdev);
+ }
+
+ if (hdev->queue_count == 0) {
+ dev_err(hdev->dev, "warn: queue has been delete\n");
+ return ret;
+ }
+
+ spin_lock_irq(&adminq->cq_lock);
+ hiraid_process_cq(adminq, &start, &end, -1);
+ spin_unlock_irq(&adminq->cq_lock);
+ hiraid_complete_cqes(adminq, start, end);
+
+ hiraid_pci_disable(hdev);
+
+ hiraid_flush_running_cmds(hdev);
+
+ return ret;
+}
+
+static void hiraid_reset_work(struct work_struct *work)
+{
+ int ret = 0;
+ struct hiraid_dev *hdev = container_of(work, struct hiraid_dev, reset_work);
+
+ if (hdev->state != DEV_RESETTING) {
+ dev_err(hdev->dev, "err, host is not reset state\n");
+ return;
+ }
+
+ dev_info(hdev->dev, "enter host reset\n");
+
+ if (hdev->ctrl_config & HIRAID_CC_ENABLE) {
+ dev_info(hdev->dev, "start dev_disable\n");
+ ret = hiraid_dev_disable(hdev, false);
+ }
+
+ if (ret)
+ goto out;
+
+ ret = hiraid_pci_enable(hdev);
+ if (ret)
+ goto out;
+
+ ret = hiraid_setup_admin_queue(hdev);
+ if (ret)
+ goto pci_disable;
+
+ ret = hiraid_setup_io_queues(hdev);
+ if (ret || hdev->online_queues != hdev->last_qcnt)
+ goto pci_disable;
+
+ hiraid_dev_state_trans(hdev, DEV_LIVE);
+
+ hiraid_init_async_event(hdev);
+
+ hiraid_queue_scan(hdev);
+
+ return;
+
+pci_disable:
+ hiraid_pci_disable(hdev);
+out:
+ hiraid_dev_state_trans(hdev, DEV_DEAD);
+ dev_err(hdev->dev, "err, host reset failed\n");
+}
+
+static int hiraid_reset_work_sync(struct hiraid_dev *hdev)
+{
+ if (!hiraid_dev_state_trans(hdev, DEV_RESETTING)) {
+ dev_info(hdev->dev, "can't change to reset state\n");
+ return -EBUSY;
+ }
+
+ if (!queue_work(work_queue, &hdev->reset_work)) {
+ dev_err(hdev->dev, "err, host is already in reset state\n");
+ return -EBUSY;
+ }
+
+ flush_work(&hdev->reset_work);
+ if (hdev->state != DEV_LIVE)
+ return -ENODEV;
+
+ return 0;
+}
+
+static int hiraid_wait_io_completion(struct hiraid_mapmange *mapbuf)
+{
+ u16 times = 0;
+
+ do {
+ if (READ_ONCE(mapbuf->state) == CMD_TMO_COMPLETE)
+ break;
+ msleep(500);
+ times++;
+ } while (times <= HIRAID_WAIT_ABNL_CMD_TIMEOUT);
+
+ /* wait command completion timeout after abort/reset success */
+ if (times >= HIRAID_WAIT_ABNL_CMD_TIMEOUT)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+static bool hiraid_tgt_rst_pending_io_count(struct request *rq, void *data, bool reserved)
+{
+ unsigned int id = *(unsigned int *)data;
+ struct scsi_cmnd *scmd = blk_mq_rq_to_pdu(rq);
+ struct hiraid_mapmange *mapbuf;
+ struct hiraid_sdev_hostdata *hostdata;
+
+ if (scmd) {
+ mapbuf = scsi_cmd_priv(scmd);
+ if ((mapbuf->state == CMD_FLIGHT) || (mapbuf->state == CMD_TIMEOUT)) {
+ if ((scmd->device) && (scmd->device->id == id)) {
+ hostdata = scmd->device->hostdata;
+ hostdata->pend_count++;
+ }
+ }
+ }
+ return true;
+}
+static bool hiraid_clean_pending_io(struct request *rq, void *data, bool reserved)
+{
+ struct hiraid_dev *hdev = data;
+ struct scsi_cmnd *scmd;
+ struct hiraid_mapmange *mapbuf;
+
+ if (unlikely(!rq || !blk_mq_request_started(rq)))
+ return true;
+
+ scmd = blk_mq_rq_to_pdu(rq);
+ mapbuf = scsi_cmd_priv(scmd);
+
+ if ((cmpxchg(&mapbuf->state, CMD_FLIGHT, CMD_COMPLETE) != CMD_FLIGHT) &&
+ (cmpxchg(&mapbuf->state, CMD_TIMEOUT, CMD_COMPLETE) != CMD_TIMEOUT))
+ return true;
+
+ set_host_byte(scmd, DID_NO_CONNECT);
+ if (mapbuf->sge_cnt)
+ scsi_dma_unmap(scmd);
+ hiraid_free_mapbuf(hdev, mapbuf);
+ dev_warn_ratelimited(hdev->dev, "back unfinished CQE, cid[%d] qid[%d]\n",
+ mapbuf->cid, mapbuf->hiraidq->qid);
+ scmd->scsi_done(scmd);
+
+ return true;
+}
+
+static void hiraid_drain_pending_ios(struct hiraid_dev *hdev)
+{
+ blk_mq_tagset_busy_iter(&hdev->shost->tag_set, hiraid_clean_pending_io, (void *)(hdev));
+}
+
+static int wait_tgt_reset_io_done(struct scsi_cmnd *scmd)
+{
+ u16 timeout = 0;
+ struct hiraid_sdev_hostdata *hostdata;
+ struct hiraid_dev *hdev = shost_priv(scmd->device->host);
+
+ hostdata = scmd->device->hostdata;
+
+ do {
+ hostdata->pend_count = 0;
+ blk_mq_tagset_busy_iter(&hdev->shost->tag_set, hiraid_tgt_rst_pending_io_count,
+ (void *)(&scmd->device->id));
+
+ if (!hostdata->pend_count)
+ return 0;
+
+ msleep(500);
+ timeout++;
+ } while (timeout <= HIRAID_WAIT_RST_IO_TIMEOUT);
+
+ return -ETIMEDOUT;
+}
+
+static int hiraid_abort(struct scsi_cmnd *scmd)
+{
+ struct hiraid_dev *hdev = shost_priv(scmd->device->host);
+ struct hiraid_mapmange *mapbuf = scsi_cmd_priv(scmd);
+ struct hiraid_sdev_hostdata *hostdata;
+ u16 hwq, cid;
+ int ret;
+
+ scsi_print_command(scmd);
+
+ if (hdev->state != DEV_LIVE || !hiraid_wait_io_completion(mapbuf) ||
+ hiraid_check_scmd_finished(scmd))
+ return SUCCESS;
+
+ hostdata = scmd->device->hostdata;
+ cid = mapbuf->cid;
+ hwq = mapbuf->hiraidq->qid;
+
+ dev_warn(hdev->dev, "cid[%d] qid[%d] timeout, send abort\n", cid, hwq);
+ ret = hiraid_send_abort_cmd(hdev, hostdata->hdid, hwq, cid);
+ if (ret != -ETIME) {
+ ret = hiraid_wait_io_completion(mapbuf);
+ if (ret) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d] abort failed, not found\n", cid, hwq);
+ return FAILED;
+ }
+ dev_warn(hdev->dev, "cid[%d] qid[%d] abort succ\n", cid, hwq);
+ return SUCCESS;
+ }
+ dev_warn(hdev->dev, "cid[%d] qid[%d] abort failed, timeout\n", cid, hwq);
+ return FAILED;
+}
+
+static int hiraid_scsi_reset(struct scsi_cmnd *scmd, enum hiraid_rst_type rst)
+{
+ struct hiraid_dev *hdev = shost_priv(scmd->device->host);
+ struct hiraid_sdev_hostdata *hostdata;
+ int ret;
+
+ if (hdev->state != DEV_LIVE)
+ return SUCCESS;
+
+ hostdata = scmd->device->hostdata;
+
+ dev_warn(hdev->dev, "sdev[%d:%d] send %s reset\n", scmd->device->channel, scmd->device->id,
+ rst ? "bus" : "target");
+ ret = hiraid_send_reset_cmd(hdev, rst, hostdata->hdid);
+ if ((ret == 0) || (ret == FW_EH_DEV_NONE && rst == HIRAID_RESET_TARGET)) {
+ if (rst == HIRAID_RESET_TARGET) {
+ ret = wait_tgt_reset_io_done(scmd);
+ if (ret) {
+ dev_warn(hdev->dev, "sdev[%d:%d] target has %d peding cmd, target reset failed\n",
+ scmd->device->channel, scmd->device->id,
+ hostdata->pend_count);
+ return FAILED;
+ }
+ }
+ dev_warn(hdev->dev, "sdev[%d:%d] %s reset success\n",
+ scmd->device->channel, scmd->device->id, rst ? "bus" : "target");
+ return SUCCESS;
+ }
+
+ dev_warn(hdev->dev, "sdev[%d:%d] %s reset failed\n",
+ scmd->device->channel, scmd->device->id, rst ? "bus" : "target");
+ return FAILED;
+}
+
+static int hiraid_target_reset(struct scsi_cmnd *scmd)
+{
+ return hiraid_scsi_reset(scmd, HIRAID_RESET_TARGET);
+}
+
+static int hiraid_bus_reset(struct scsi_cmnd *scmd)
+{
+ return hiraid_scsi_reset(scmd, HIRAID_RESET_BUS);
+}
+
+static int hiraid_host_reset(struct scsi_cmnd *scmd)
+{
+ struct hiraid_dev *hdev = shost_priv(scmd->device->host);
+
+ if (hdev->state != DEV_LIVE)
+ return SUCCESS;
+
+ dev_warn(hdev->dev, "sdev[%d:%d] send host reset\n",
+ scmd->device->channel, scmd->device->id);
+ if (hiraid_reset_work_sync(hdev) == -EBUSY)
+ flush_work(&hdev->reset_work);
+
+ if (hdev->state != DEV_LIVE) {
+ dev_warn(hdev->dev, "sdev[%d:%d] host reset failed\n",
+ scmd->device->channel, scmd->device->id);
+ return FAILED;
+ }
+
+ dev_warn(hdev->dev, "sdev[%d:%d] host reset success\n",
+ scmd->device->channel, scmd->device->id);
+
+ return SUCCESS;
+}
+
+static pci_ers_result_t hiraid_pci_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct hiraid_dev *hdev = pci_get_drvdata(pdev);
+
+ dev_info(hdev->dev, "pci error detected, state[%d]\n", state);
+
+ switch (state) {
+ case pci_channel_io_normal:
+ dev_warn(hdev->dev, "channel is normal, do nothing\n");
+
+ return PCI_ERS_RESULT_CAN_RECOVER;
+ case pci_channel_io_frozen:
+ dev_warn(hdev->dev, "channel io frozen, need reset controller\n");
+
+ scsi_block_requests(hdev->shost);
+
+ hiraid_dev_state_trans(hdev, DEV_RESETTING);
+
+ return PCI_ERS_RESULT_NEED_RESET;
+ case pci_channel_io_perm_failure:
+ dev_warn(hdev->dev, "channel io failure, disconnect\n");
+
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ return PCI_ERS_RESULT_NEED_RESET;
+}
+
+static pci_ers_result_t hiraid_pci_slot_reset(struct pci_dev *pdev)
+{
+ struct hiraid_dev *hdev = pci_get_drvdata(pdev);
+
+ dev_info(hdev->dev, "restart after slot reset\n");
+
+ pci_restore_state(pdev);
+
+ if (!queue_work(work_queue, &hdev->reset_work)) {
+ dev_err(hdev->dev, "err, the device is resetting state\n");
+ return PCI_ERS_RESULT_NONE;
+ }
+
+ flush_work(&hdev->reset_work);
+
+ scsi_unblock_requests(hdev->shost);
+
+ return PCI_ERS_RESULT_RECOVERED;
+}
+
+static void hiraid_reset_pci_finish(struct pci_dev *pdev)
+{
+ struct hiraid_dev *hdev = pci_get_drvdata(pdev);
+
+ dev_info(hdev->dev, "enter hiraid reset finish\n");
+}
+
+static ssize_t csts_pp_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ ret = (readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_PP_MASK);
+ ret >>= HIRAID_CSTS_PP_SHIFT;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t csts_shst_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ ret = (readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_SHST_MASK);
+ ret >>= HIRAID_CSTS_SHST_SHIFT;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t csts_cfs_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ ret = (readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_CFS_MASK);
+ ret >>= HIRAID_CSTS_CFS_SHIFT;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t csts_rdy_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev))
+ ret = (readl(hdev->bar + HIRAID_REG_CSTS) & HIRAID_CSTS_RDY);
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t fw_version_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+
+ return snprintf(buf, PAGE_SIZE, "%s\n", hdev->ctrl_info->fw_version);
+}
+
+static ssize_t hdd_dispatch_store(struct device *cdev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ int val = 0;
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+
+ if (kstrtoint(buf, 0, &val) != 0)
+ return -EINVAL;
+ if (val < DISPATCH_BY_CPU || val > DISPATCH_BY_DISK)
+ return -EINVAL;
+ hdev->hdd_dispatch = val;
+
+ return strlen(buf);
+}
+static ssize_t hdd_dispatch_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", hdev->hdd_dispatch);
+}
+
+static DEVICE_ATTR_RO(csts_pp);
+static DEVICE_ATTR_RO(csts_shst);
+static DEVICE_ATTR_RO(csts_cfs);
+static DEVICE_ATTR_RO(csts_rdy);
+static DEVICE_ATTR_RO(fw_version);
+static DEVICE_ATTR_RW(hdd_dispatch);
+
+static struct device_attribute *hiraid_host_attrs[] = {
+ &dev_attr_csts_rdy,
+ &dev_attr_csts_pp,
+ &dev_attr_csts_cfs,
+ &dev_attr_fw_version,
+ &dev_attr_csts_shst,
+ &dev_attr_hdd_dispatch,
+ NULL,
+};
+
+static int hiraid_get_vd_info(struct hiraid_dev *hdev, struct hiraid_vd_info *vd_info, u16 vid)
+{
+ struct hiraid_admin_command admin_cmd;
+ u8 *data_ptr = NULL;
+ dma_addr_t buffer_phy = 0;
+ int ret;
+
+ if (hdev->state >= DEV_RESETTING) {
+ dev_err(hdev->dev, "err, host state[%d] is not right\n", hdev->state);
+ return -EBUSY;
+ }
+
+ data_ptr = dma_alloc_coherent(hdev->dev, PAGE_SIZE, &buffer_phy, GFP_KERNEL);
+ if (!data_ptr)
+ return -ENOMEM;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.usr_cmd.opcode = USR_CMD_READ;
+ admin_cmd.usr_cmd.info_0.subopcode = cpu_to_le16(USR_CMD_VDINFO);
+ admin_cmd.usr_cmd.info_1.data_len = cpu_to_le16(USR_CMD_RDLEN);
+ admin_cmd.usr_cmd.info_1.param_len = cpu_to_le16(VDINFO_PARAM_LEN);
+ admin_cmd.usr_cmd.cdw10 = cpu_to_le32(vid);
+ admin_cmd.common.dptr.prp1 = cpu_to_le64(buffer_phy);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, USRCMD_TIMEOUT);
+ if (!ret)
+ memcpy(vd_info, data_ptr, sizeof(struct hiraid_vd_info));
+
+ dma_free_coherent(hdev->dev, PAGE_SIZE, data_ptr, buffer_phy);
+
+ return ret;
+}
+
+static int hiraid_get_bgtask(struct hiraid_dev *hdev, struct hiraid_bgtask *bgtask)
+{
+ struct hiraid_admin_command admin_cmd;
+ u8 *data_ptr = NULL;
+ dma_addr_t buffer_phy = 0;
+ int ret;
+
+ if (hdev->state >= DEV_RESETTING) {
+ dev_err(hdev->dev, "err, host state[%d] is not right\n", hdev->state);
+ return -EBUSY;
+ }
+
+ data_ptr = dma_alloc_coherent(hdev->dev, PAGE_SIZE, &buffer_phy, GFP_KERNEL);
+ if (!data_ptr)
+ return -ENOMEM;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.usr_cmd.opcode = USR_CMD_READ;
+ admin_cmd.usr_cmd.info_0.subopcode = cpu_to_le16(USR_CMD_BGTASK);
+ admin_cmd.usr_cmd.info_1.data_len = cpu_to_le16(USR_CMD_RDLEN);
+ admin_cmd.common.dptr.prp1 = cpu_to_le64(buffer_phy);
+
+ ret = hiraid_put_admin_sync_request(hdev, &admin_cmd, NULL, NULL, USRCMD_TIMEOUT);
+ if (!ret)
+ memcpy(bgtask, data_ptr, sizeof(struct hiraid_bgtask));
+
+ dma_free_coherent(hdev->dev, PAGE_SIZE, data_ptr, buffer_phy);
+
+ return ret;
+}
+
+static ssize_t raid_level_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct scsi_device *sdev;
+ struct hiraid_dev *hdev;
+ struct hiraid_vd_info *vd_info;
+ struct hiraid_sdev_hostdata *hostdata;
+ int ret;
+
+ sdev = to_scsi_device(dev);
+ hdev = shost_priv(sdev->host);
+ hostdata = sdev->hostdata;
+
+ vd_info = kmalloc(sizeof(*vd_info), GFP_KERNEL);
+ if (!vd_info || !HIRAID_DEV_INFO_ATTR_VD(hostdata->attr))
+ return snprintf(buf, PAGE_SIZE, "NA\n");
+
+ ret = hiraid_get_vd_info(hdev, vd_info, sdev->id);
+ if (ret)
+ vd_info->rg_level = ARRAY_SIZE(raid_levels) - 1;
+
+ ret = (vd_info->rg_level < ARRAY_SIZE(raid_levels)) ?
+ vd_info->rg_level : (ARRAY_SIZE(raid_levels) - 1);
+
+ kfree(vd_info);
+
+ return snprintf(buf, PAGE_SIZE, "RAID-%s\n", raid_levels[ret]);
+}
+
+static ssize_t raid_state_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct scsi_device *sdev;
+ struct hiraid_dev *hdev;
+ struct hiraid_vd_info *vd_info;
+ struct hiraid_sdev_hostdata *hostdata;
+ int ret;
+
+ sdev = to_scsi_device(dev);
+ hdev = shost_priv(sdev->host);
+ hostdata = sdev->hostdata;
+
+ vd_info = kmalloc(sizeof(*vd_info), GFP_KERNEL);
+ if (!vd_info || !HIRAID_DEV_INFO_ATTR_VD(hostdata->attr))
+ return snprintf(buf, PAGE_SIZE, "NA\n");
+
+ ret = hiraid_get_vd_info(hdev, vd_info, sdev->id);
+ if (ret) {
+ vd_info->vd_status = 0;
+ vd_info->rg_id = 0xff;
+ }
+
+ ret = (vd_info->vd_status < ARRAY_SIZE(raid_states)) ? vd_info->vd_status : 0;
+
+ kfree(vd_info);
+
+ return snprintf(buf, PAGE_SIZE, "%s\n", raid_states[ret]);
+}
+
+static ssize_t raid_resync_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct scsi_device *sdev;
+ struct hiraid_dev *hdev;
+ struct hiraid_vd_info *vd_info;
+ struct hiraid_bgtask *bgtask;
+ struct hiraid_sdev_hostdata *hostdata;
+ u8 rg_id, i, progress = 0;
+ int ret;
+
+ sdev = to_scsi_device(dev);
+ hdev = shost_priv(sdev->host);
+ hostdata = sdev->hostdata;
+
+ vd_info = kmalloc(sizeof(*vd_info), GFP_KERNEL);
+ if (!vd_info || !HIRAID_DEV_INFO_ATTR_VD(hostdata->attr))
+ return snprintf(buf, PAGE_SIZE, "NA\n");
+
+ ret = hiraid_get_vd_info(hdev, vd_info, sdev->id);
+ if (ret)
+ goto out;
+
+ rg_id = vd_info->rg_id;
+
+ bgtask = (struct hiraid_bgtask *)vd_info;
+ ret = hiraid_get_bgtask(hdev, bgtask);
+ if (ret)
+ goto out;
+ for (i = 0; i < bgtask->task_num; i++) {
+ if ((bgtask->bgtask[i].type == BGTASK_TYPE_REBUILD) &&
+ (le16_to_cpu(bgtask->bgtask[i].vd_id) == rg_id))
+ progress = bgtask->bgtask[i].progress;
+ }
+
+out:
+ kfree(vd_info);
+ return snprintf(buf, PAGE_SIZE, "%d\n", progress);
+}
+
+static ssize_t dispatch_hwq_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct hiraid_sdev_hostdata *hostdata;
+
+ hostdata = to_scsi_device(dev)->hostdata;
+ return snprintf(buf, PAGE_SIZE, "%d\n", hostdata->hwq);
+}
+
+static ssize_t dispatch_hwq_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ int val;
+ struct hiraid_dev *hdev;
+ struct scsi_device *sdev;
+ struct hiraid_sdev_hostdata *hostdata;
+
+ sdev = to_scsi_device(dev);
+ hdev = shost_priv(sdev->host);
+ hostdata = sdev->hostdata;
+
+ if (kstrtoint(buf, 0, &val) != 0)
+ return -EINVAL;
+ if (val <= 0 || val >= hdev->online_queues)
+ return -EINVAL;
+ if (!hiraid_disk_is_hdd(hostdata->attr))
+ return -EINVAL;
+
+ hostdata->hwq = val;
+ return strlen(buf);
+}
+
+static DEVICE_ATTR_RO(raid_level);
+static DEVICE_ATTR_RO(raid_state);
+static DEVICE_ATTR_RO(raid_resync);
+static DEVICE_ATTR_RW(dispatch_hwq);
+
+static struct device_attribute *hiraid_dev_attrs[] = {
+ &dev_attr_raid_state,
+ &dev_attr_raid_level,
+ &dev_attr_raid_resync,
+ &dev_attr_dispatch_hwq,
+ NULL,
+};
+
+static struct pci_error_handlers hiraid_err_handler = {
+ .error_detected = hiraid_pci_error_detected,
+ .slot_reset = hiraid_pci_slot_reset,
+ .reset_done = hiraid_reset_pci_finish,
+};
+
+static int hiraid_sysfs_host_reset(struct Scsi_Host *shost, int reset_type)
+{
+ int ret;
+ struct hiraid_dev *hdev = shost_priv(shost);
+
+ dev_info(hdev->dev, "start sysfs host reset cmd\n");
+ ret = hiraid_reset_work_sync(hdev);
+ dev_info(hdev->dev, "stop sysfs host reset cmd[%d]\n", ret);
+
+ return ret;
+}
+
+static int hiraid_scan_finished(struct Scsi_Host *shost, unsigned long time)
+{
+ struct hiraid_dev *hdev = shost_priv(shost);
+
+ hiraid_scan_work(&hdev->scan_work);
+
+ return 1;
+}
+
+static struct scsi_host_template hiraid_driver_template = {
+ .module = THIS_MODULE,
+ .name = "hiraid",
+ .proc_name = "hiraid",
+ .queuecommand = hiraid_queue_command,
+ .slave_alloc = hiraid_slave_alloc,
+ .slave_destroy = hiraid_slave_destroy,
+ .slave_configure = hiraid_slave_configure,
+ .scan_finished = hiraid_scan_finished,
+ .eh_timed_out = hiraid_timed_out,
+ .eh_abort_handler = hiraid_abort,
+ .eh_target_reset_handler = hiraid_target_reset,
+ .eh_bus_reset_handler = hiraid_bus_reset,
+ .eh_host_reset_handler = hiraid_host_reset,
+ .change_queue_depth = scsi_change_queue_depth,
+ .this_id = -1,
+ .unchecked_isa_dma = 0,
+ .shost_attrs = hiraid_host_attrs,
+ .sdev_attrs = hiraid_dev_attrs,
+ .host_reset = hiraid_sysfs_host_reset,
+};
+
+static void hiraid_shutdown(struct pci_dev *pdev)
+{
+ struct hiraid_dev *hdev = pci_get_drvdata(pdev);
+
+ hiraid_delete_io_queues(hdev);
+ hiraid_disable_admin_queue(hdev, true);
+}
+
+static bool hiraid_bsg_is_valid(struct bsg_job *job)
+{
+ u64 timeout = 0;
+ struct request *rq = blk_mq_rq_from_pdu(job);
+ struct hiraid_bsg_request *bsg_req = job->request;
+ struct hiraid_dev *hdev = shost_priv(dev_to_shost(job->dev));
+
+ if (bsg_req == NULL || job->request_len != sizeof(struct hiraid_bsg_request))
+ return false;
+
+ switch (bsg_req->msgcode) {
+ case HIRAID_BSG_ADMIN:
+ timeout = msecs_to_jiffies(bsg_req->admcmd.timeout_ms);
+ break;
+ case HIRAID_BSG_IOPTHRU:
+ timeout = msecs_to_jiffies(bsg_req->pthrucmd.timeout_ms);
+ break;
+ default:
+ dev_info(hdev->dev, "bsg unsupport msgcode[%d]\n", bsg_req->msgcode);
+ return false;
+ }
+
+ if ((timeout + CTL_RST_TIME) > rq->timeout) {
+ dev_err(hdev->dev, "bsg invalid time\n");
+ return false;
+ }
+
+ return true;
+}
+
+/* bsg dispatch user command */
+static int hiraid_bsg_dispatch(struct bsg_job *job)
+{
+ struct Scsi_Host *shost = dev_to_shost(job->dev);
+ struct hiraid_dev *hdev = shost_priv(shost);
+ struct request *rq = blk_mq_rq_from_pdu(job);
+ struct hiraid_bsg_request *bsg_req = job->request;
+ int ret = -ENOMSG;
+
+ job->reply_len = 0;
+
+ if (!hiraid_bsg_is_valid(job)) {
+ bsg_job_done(job, ret, 0);
+ return 0;
+ }
+
+ dev_log_dbg(hdev->dev, "bsg msgcode[%d] msglen[%d] timeout[%d];"
+ "reqnsge[%d], reqlen[%d]\n",
+ bsg_req->msgcode, job->request_len, rq->timeout,
+ job->request_payload.sg_cnt, job->request_payload.payload_len);
+
+ switch (bsg_req->msgcode) {
+ case HIRAID_BSG_ADMIN:
+ ret = hiraid_user_send_admcmd(hdev, job);
+ break;
+ case HIRAID_BSG_IOPTHRU:
+ ret = hiraid_user_send_ptcmd(hdev, job);
+ break;
+ default:
+ break;
+ }
+
+ if (ret > 0)
+ ret = ret | (ret << 8);
+
+ bsg_job_done(job, ret, 0);
+ return 0;
+}
+
+static inline void hiraid_unregist_bsg(struct hiraid_dev *hdev)
+{
+ if (hdev->bsg_queue) {
+ bsg_unregister_queue(hdev->bsg_queue);
+ blk_cleanup_queue(hdev->bsg_queue);
+ }
+}
+static int hiraid_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct hiraid_dev *hdev;
+ struct Scsi_Host *shost;
+ int node, ret;
+ char bsg_name[15];
+
+ shost = scsi_host_alloc(&hiraid_driver_template, sizeof(*hdev));
+ if (!shost) {
+ dev_err(&pdev->dev, "failed to allocate scsi host\n");
+ return -ENOMEM;
+ }
+ hdev = shost_priv(shost);
+ hdev->pdev = pdev;
+ hdev->dev = get_device(&pdev->dev);
+
+ node = dev_to_node(hdev->dev);
+ if (node == NUMA_NO_NODE) {
+ node = first_memory_node;
+ set_dev_node(hdev->dev, node);
+ }
+ hdev->numa_node = node;
+ hdev->shost = shost;
+ hdev->instance = shost->host_no;
+ pci_set_drvdata(pdev, hdev);
+
+ ret = hiraid_dev_map(hdev);
+ if (ret)
+ goto put_dev;
+
+ init_rwsem(&hdev->dev_rwsem);
+ INIT_WORK(&hdev->scan_work, hiraid_scan_work);
+ INIT_WORK(&hdev->timesyn_work, hiraid_timesyn_work);
+ INIT_WORK(&hdev->reset_work, hiraid_reset_work);
+ INIT_WORK(&hdev->fwact_work, hiraid_fwactive_work);
+ spin_lock_init(&hdev->state_lock);
+
+ ret = hiraid_alloc_resources(hdev);
+ if (ret)
+ goto dev_unmap;
+
+ ret = hiraid_pci_enable(hdev);
+ if (ret)
+ goto resources_free;
+
+ ret = hiraid_setup_admin_queue(hdev);
+ if (ret)
+ goto pci_disable;
+
+ ret = hiraid_init_control_info(hdev);
+ if (ret)
+ goto disable_admin_q;
+
+ ret = hiraid_setup_io_queues(hdev);
+ if (ret)
+ goto disable_admin_q;
+
+ hiraid_shost_init(hdev);
+
+ ret = scsi_add_host(hdev->shost, hdev->dev);
+ if (ret) {
+ dev_err(hdev->dev, "add shost to system failed, ret[%d]\n", ret);
+ goto remove_io_queues;
+ }
+
+ snprintf(bsg_name, sizeof(bsg_name), "hiraid%d", shost->host_no);
+ hdev->bsg_queue = bsg_setup_queue(&shost->shost_gendev, bsg_name, hiraid_bsg_dispatch,
+ NULL, hiraid_get_max_cmd_size(hdev));
+ if (IS_ERR(hdev->bsg_queue)) {
+ dev_err(hdev->dev, "err, setup bsg failed\n");
+ hdev->bsg_queue = NULL;
+ goto remove_io_queues;
+ }
+
+ if (hdev->online_queues == HIRAID_ADMIN_QUEUE_NUM) {
+ dev_warn(hdev->dev, "warn: only admin queue can be used\n");
+ return 0;
+ }
+
+ hdev->state = DEV_LIVE;
+
+ hiraid_init_async_event(hdev);
+
+ ret = hiraid_dev_list_init(hdev);
+ if (ret)
+ goto unregist_bsg;
+
+ ret = hiraid_configure_timestamp(hdev);
+ if (ret)
+ dev_warn(hdev->dev, "time synchronization failed\n");
+
+ ret = hiraid_alloc_io_ptcmds(hdev);
+ if (ret)
+ goto unregist_bsg;
+
+ scsi_scan_host(hdev->shost);
+
+ return 0;
+
+unregist_bsg:
+ hiraid_unregist_bsg(hdev);
+remove_io_queues:
+ hiraid_delete_io_queues(hdev);
+disable_admin_q:
+ hiraid_free_sense_buffer(hdev);
+ hiraid_disable_admin_queue(hdev, false);
+pci_disable:
+ hiraid_free_all_queues(hdev);
+ hiraid_pci_disable(hdev);
+resources_free:
+ hiraid_free_resources(hdev);
+dev_unmap:
+ hiraid_dev_unmap(hdev);
+put_dev:
+ put_device(hdev->dev);
+ scsi_host_put(shost);
+
+ return -ENODEV;
+}
+
+static void hiraid_remove(struct pci_dev *pdev)
+{
+ struct hiraid_dev *hdev = pci_get_drvdata(pdev);
+ struct Scsi_Host *shost = hdev->shost;
+
+ dev_info(hdev->dev, "enter hiraid remove\n");
+
+ hiraid_dev_state_trans(hdev, DEV_DELETING);
+ flush_work(&hdev->reset_work);
+
+ if (!pci_device_is_present(pdev))
+ hiraid_flush_running_cmds(hdev);
+
+ hiraid_unregist_bsg(hdev);
+ scsi_remove_host(shost);
+ hiraid_free_io_ptcmds(hdev);
+ kfree(hdev->dev_info);
+ hiraid_delete_io_queues(hdev);
+ hiraid_free_sense_buffer(hdev);
+ hiraid_disable_admin_queue(hdev, false);
+ hiraid_free_all_queues(hdev);
+ hiraid_pci_disable(hdev);
+ hiraid_free_resources(hdev);
+ hiraid_dev_unmap(hdev);
+ put_device(hdev->dev);
+ scsi_host_put(shost);
+
+ dev_info(hdev->dev, "exit hiraid remove\n");
+}
+
+static const struct pci_device_id hiraid_hw_card_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_HUAWEI_LOGIC, HIRAID_SERVER_DEVICE_HBA_DID) },
+ { PCI_DEVICE(PCI_VENDOR_ID_HUAWEI_LOGIC, HIRAID_SERVER_DEVICE_RAID_DID) },
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, hiraid_hw_card_ids);
+
+static struct pci_driver hiraid_driver = {
+ .name = "hiraid",
+ .id_table = hiraid_hw_card_ids,
+ .probe = hiraid_probe,
+ .remove = hiraid_remove,
+ .shutdown = hiraid_shutdown,
+ .err_handler = &hiraid_err_handler,
+};
+
+static int __init hiraid_init(void)
+{
+ int ret;
+
+ work_queue = alloc_workqueue("hiraid-wq", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_SYSFS, 0);
+ if (!work_queue)
+ return -ENOMEM;
+
+ hiraid_class = class_create(THIS_MODULE, "hiraid");
+ if (IS_ERR(hiraid_class)) {
+ ret = PTR_ERR(hiraid_class);
+ goto destroy_wq;
+ }
+
+ ret = pci_register_driver(&hiraid_driver);
+ if (ret < 0)
+ goto destroy_class;
+
+ return 0;
+
+destroy_class:
+ class_destroy(hiraid_class);
+destroy_wq:
+ destroy_workqueue(work_queue);
+
+ return ret;
+}
+
+static void __exit hiraid_exit(void)
+{
+ pci_unregister_driver(&hiraid_driver);
+ class_destroy(hiraid_class);
+ destroy_workqueue(work_queue);
+}
+
+MODULE_AUTHOR("Huawei Technologies CO., Ltd");
+MODULE_DESCRIPTION("Huawei RAID driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(HIRAID_DRV_VERSION);
+module_init(hiraid_init);
+module_exit(hiraid_exit);
--
2.22.0.windows.1
2
1