Firstly, this series improves function return address protection for the arm64
kernel by compiling the kernel with ARMv8.3 Pointer Authentication instructions
(referred to as ptrauth hereafter). This should help protect the kernel against
attacks using return-oriented programming. Secondly, Suzuki's patches optimize
the checking and enabling of CPU capabilities. Thirdly, three patches on the
kernel config enable ptrauth compilation and fix compile warnings. Finally, two
patches from Mark Rutland and one from Marc Zyngier provide necessary
simplifications and cleanups.
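For context (this sketch is ours, not the series' code): with ptrauth enabled,
kernel return addresses carry a Pointer Authentication Code (PAC) in their
upper bits, and the unwinder must strip it before an address can be compared or
symbolized. A minimal illustration of the masking idea, assuming a 48-bit
kernel VA space (the constant and helper names below are illustrative):

#include <stdint.h>

/* Assumed kernel VA width; real kernels derive this from
 * CONFIG_ARM64_VA_BITS. */
#define VA_BITS 48

/* For kernel (TTBR1) pointers, the PAC occupies bits [63:VA_BITS]. */
static inline uint64_t kernel_pac_mask(void)
{
	return ~((UINT64_C(1) << VA_BITS) - 1);
}

/* Kernel addresses have all-ones upper bits, so stripping the PAC
 * means forcing those bits back to ones. */
static inline uint64_t strip_kernel_pac(uint64_t lr)
{
	return lr | kernel_pac_mask();
}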
Amit Daniel Kachhap (9):
arm64: cpufeature: Fix meta-capability cpufeature check
arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
arm64: cpufeature: Move cpu capability helpers inside C file
arm64: initialize ptrauth keys for kernel booting task
arm64: mask PAC bits of __builtin_return_address
arm64: __show_regs: strip PAC from lr in printk
arm64: suspend: restore the kernel ptrauth keys
lkdtm: arm64: test kernel pointer authentication
arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch
Catalin Marinas (1):
kbuild: Add support for 'as-instr' to be used in Kconfig files
Kristina Martsenko (7):
arm64: cpufeature: add pointer auth meta-capabilities
arm64: rename ptrauth key structures to be user-specific
arm64: install user ptrauth keys at kernel exit time
arm64: cpufeature: handle conflicts based on capability
arm64: enable ptrauth earlier
arm64: initialize and switch ptrauth kernel keys
arm64: compile the kernel with ptrauth return address signing
Marc Zyngier (1):
arm64: Drop unnecessary include from asm/smp.h
Mark Rutland (3):
arm64: unwind: strip PAC from kernel addresses
arm64: remove ptrauth_keys_install_kernel sync arg
arm64: simplify ptrauth initialization
Nick Desaulniers (1):
arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
Suzuki K Poulose (4):
arm64: capabilities: Speed up capability lookup
arm64: capabilities: Optimize this_cpu_has_cap
arm64: capabilities: Use linear array for detection and verification
arm64: capabilities: Batch cpu_enable callbacks
Vincenzo Frascino (1):
kconfig: Add support for 'as-option'
Zheng Zengkai (1):
config: enable ARM64 pointer authentication configs by default
arch/arm64/Kconfig | 38 +++-
arch/arm64/Makefile | 11 +
arch/arm64/configs/hulk_defconfig | 7 +
arch/arm64/configs/openeuler_defconfig | 7 +
arch/arm64/include/asm/asm_pointer_auth.h | 98 ++++++++
arch/arm64/include/asm/compiler.h | 19 ++
arch/arm64/include/asm/cpucaps.h | 4 +-
arch/arm64/include/asm/cpufeature.h | 42 ++--
arch/arm64/include/asm/pointer_auth.h | 50 ++---
arch/arm64/include/asm/processor.h | 3 +-
arch/arm64/include/asm/stackprotector.h | 5 +
arch/arm64/kernel/asm-offsets.c | 13 ++
arch/arm64/kernel/cpufeature.c | 258 ++++++++++++++--------
arch/arm64/kernel/entry.S | 6 +
arch/arm64/kernel/head.S | 10 +
arch/arm64/kernel/pointer_auth.c | 7 +-
arch/arm64/kernel/process.c | 5 +-
arch/arm64/kernel/ptrace.c | 16 +-
arch/arm64/kernel/sleep.S | 1 +
arch/arm64/kernel/stacktrace.c | 5 +-
arch/arm64/mm/proc.S | 27 ++-
drivers/misc/lkdtm/bugs.c | 36 +++
drivers/misc/lkdtm/core.c | 1 +
drivers/misc/lkdtm/lkdtm.h | 1 +
include/linux/stackprotector.h | 2 +-
scripts/Kconfig.include | 10 +
26 files changed, 513 insertions(+), 169 deletions(-)
create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h
--
2.25.1
Adrian Hunter (1):
perf intel-pt: Fix FUP packet state
Ahmad Fatoum (3):
watchdog: f71808e_wdt: indicate WDIOF_CARDRESET support in
watchdog_info.options
watchdog: f71808e_wdt: remove use of wrong watchdog_info option
watchdog: f71808e_wdt: clear watchdog timeout occurred flag
Alexandru Ardelean (1):
iio: dac: ad5592r: fix unbalanced mutex unlocks in ad5592r_read_raw()
Anand Jain (1):
btrfs: don't traverse into the seed devices in show_devname
Andy Shevchenko (1):
mfd: dln2: Run event handler loop under spinlock
Aneesh Kumar K.V (3):
selftests/powerpc: ptrace-pkey: Rename variables to make it easier to
follow code
selftests/powerpc: ptrace-pkey: Update the test to mark an invalid
pkey correctly
selftests/powerpc: ptrace-pkey: Don't update expected UAMOR value
Ansuel Smith (2):
PCI: qcom: Define some PARF params needed for ipq8064 SoC
PCI: qcom: Add support for tx term offset for rev 2.1.0
Anton Blanchard (1):
pseries: Fix 64 bit logical memory block panic
ChangSyun Peng (1):
md/raid5: Fix Force reconstruct-write io stuck in degraded raid5
Charles Keepax (1):
mfd: arizona: Ensure 32k clock is put on driver unbind and error
Chengming Zhou (1):
ftrace: Setup correct FTRACE_FL_REGS flags for module
Christian Eggers (1):
dt-bindings: iio: io-channel-mux: Fix compatible string in example
code
Colin Ian King (3):
iommu/omap: Check for failure of a call to omap_iommu_dump_ctx
Input: sentelic - fix error return when fsp_reg_write fails
fs/ufs: avoid potential u32 multiplication overflow
Coly Li (2):
bcache: allocate meta data pages as compound pages
bcache: fix overflow in offset_to_stripe()
Dan Carpenter (2):
drm/vmwgfx: Use correct vmw_legacy_display_unit pointer
drm/vmwgfx: Fix two list_for_each loop exit tests
Daniel Díaz (1):
tools build feature: Quote CC and CXX for their arguments
David Sterba (1):
btrfs: fix messages after changing compression level by remount
Dinghao Liu (1):
ALSA: echoaudio: Fix potential Oops in snd_echo_resume()
Eric Biggers (3):
fs/minix: set s_maxbytes correctly
fs/minix: fix block limit check for V1 filesystems
fs/minix: remove expected error message in block_to_path()
Eugeniu Rosca (1):
media: vsp1: dl: Fix NULL pointer dereference on unbind
Ewan D. Milne (1):
scsi: lpfc: nvmet: Avoid hang / use-after-free again when destroying
targetport
Filipe Manana (1):
btrfs: fix memory leaks after failure to lookup checksums during inode
logging
Geert Uytterhoeven (1):
sh: landisk: Add missing initialization of sh_io_port_base
Greg Kroah-Hartman (1):
Linux 4.19.141
Huacai Chen (1):
MIPS: CPU#0 is not hotpluggable
Hugh Dickins (1):
khugepaged: retract_page_tables() remember to test exit
Jason Gunthorpe (1):
RDMA/ipoib: Fix ABBA deadlock with ipoib_reap_ah()
Jeffrey Mitchell (1):
nfs: Fix getxattr kernel panic and memory overflow
Johan Hovold (2):
USB: serial: ftdi_sio: make process-packet buffer unsigned
USB: serial: ftdi_sio: clean up receive processing
Johannes Berg (1):
mac80211: fix misplaced while instead of if
Jonathan McDowell (2):
net: ethernet: stmmac: Disable hardware multicast filter
net: stmmac: dwmac1000: provide multicast filter fallback
Josef Bacik (2):
btrfs: open device without device_list_mutex
btrfs: only search for left_info if there is no right_info in
try_merge_free_space
Junxiao Bi (1):
ocfs2: change slot number type s16 to u16
Kai-Heng Feng (1):
PCI: Mark AMD Navi10 GPU rev 0x00 ATS as broken
Kamal Heib (1):
RDMA/ipoib: Return void from ipoib_ib_dev_stop()
Kees Cook (2):
net/compat: Add missing sock updates for SCM_RIGHTS
module: Correctly truncate sysfs sections output
Kevin Hao (1):
tracing/hwlat: Honor the tracing_cpumask
Krzysztof Sobota (1):
watchdog: initialize device before misc_register
Liu Yi L (1):
iommu/vt-d: Enforce PASID devTLB field mask
Liu Ying (1):
drm/imx: imx-ldb: Disable both channels for split mode in
enc->disable()
Lukas Wunner (1):
driver core: Avoid binding drivers to dead devices
Marius Iacob (1):
drm: Added orientation quirk for ASUS tablet model T103HAF
Max Filippov (1):
xtensa: fix xtensa_pmu_setup prototype
Michael Ellerman (2):
powerpc: Allow 4224 bytes of stack expansion for the signal frame
powerpc: Fix circular dependency between percpu.h and mmu.h
Michal Koutný (1):
mm/page_counter.c: fix protection usage propagation
Mikulas Patocka (1):
ext2: fix missing percpu_counter_inc
Ming Lei (1):
dm rq: don't call blk_mq_queue_stopped() in dm_stop_queue()
Muchun Song (1):
kprobes: Fix NULL pointer dereference at kprobe_ftrace_handler
Paul Aurich (1):
cifs: Fix leak when handling lease break for cached root fid
Paul Kocialkowski (2):
media: rockchip: rga: Introduce color fmt macros and refactor CSC mode
logic
media: rockchip: rga: Only set output CSC mode for RGB input
Pavel Machek (1):
btrfs: fix return value mixup in btrfs_get_extent
Qu Wenruo (2):
btrfs: free anon block device right after subvolume deletion
btrfs: don't allocate anonymous block device for user invisible roots
Rafael J. Wysocki (1):
PCI: hotplug: ACPI: Fix context refcounting in acpiphp_grab_context()
Rajat Jain (1):
PCI: Add device even if driver attach failed
Rayagonda Kokatanur (1):
pwm: bcm-iproc: handle clk_get_rate() return
Sandeep Raghuraman (1):
drm/amdgpu: Fix bug where DPM is not enabled after hibernate and
resume
Sibi Sankar (1):
remoteproc: qcom: q6v5: Update running state before requesting stop
Stafford Horne (1):
openrisc: Fix oops caused when dumping stack
Steve French (1):
smb3: warn on confusing error scenario with sec=krb5
Steve Longerbeam (1):
gpu: ipu-v3: image-convert: Combine rotate/no-rotate irq handlers
Steven Rostedt (VMware) (1):
tracing: Use trace_sched_process_free() instead of exit() for pid
tracing
Thomas Gleixner (1):
genirq/affinity: Make affinity setting if activated opt-in
Thomas Hebb (1):
tools build feature: Use CC and CXX from parent
Tiezhu Yang (1):
test_kmod: avoid potential double free in trigger_config_run_type()
Tom Rix (1):
btrfs: ref-verify: fix memory leak in add_block_entry
Tomasz Maciej Nowak (1):
arm64: dts: marvell: espressobin: add ethernet alias
Vincent Whitchurch (1):
perf bench mem: Always memset source before memcpy
Wang Hai (1):
net: qcom/emac: add missed clk_disable_unprepare in error path of
emac_clks_phase1_init
Wolfram Sang (2):
i2c: rcar: slave: only send STOP event when we have been addressed
i2c: rcar: avoid race when unregistering slave
Xu Wang (1):
clk: clk-atlas6: fix return value check in atlas6_clk_init()
Yoshihiro Shimoda (1):
mmc: renesas_sdhi_internal_dmac: clean up the code for dma complete
.../iio/multiplexer/io-channel-mux.txt | 2 +-
Makefile | 2 +-
.../dts/marvell/armada-3720-espressobin.dts | 6 ++
arch/mips/kernel/topology.c | 2 +-
arch/openrisc/kernel/stacktrace.c | 18 ++++-
arch/powerpc/include/asm/percpu.h | 4 +-
arch/powerpc/mm/fault.c | 7 +-
.../platforms/pseries/hotplug-memory.c | 2 +-
arch/sh/boards/mach-landisk/setup.c | 3 +
arch/x86/kernel/apic/vector.c | 4 +
arch/xtensa/kernel/perf_event.c | 2 +-
drivers/base/dd.c | 4 +-
drivers/clk/sirf/clk-atlas6.c | 2 +-
.../gpu/drm/amd/powerplay/smumgr/ci_smumgr.c | 5 +-
.../gpu/drm/drm_panel_orientation_quirks.c | 6 ++
drivers/gpu/drm/imx/imx-ldb.c | 7 +-
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 8 +-
drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c | 5 +-
drivers/gpu/ipu-v3/ipu-image-convert.c | 58 +++++----------
drivers/i2c/busses/i2c-rcar.c | 15 ++--
drivers/iio/dac/ad5592r-base.c | 4 +-
drivers/infiniband/ulp/ipoib/ipoib.h | 2 +-
drivers/infiniband/ulp/ipoib/ipoib_ib.c | 67 ++++++++---------
drivers/infiniband/ulp/ipoib/ipoib_main.c | 2 +
drivers/input/mouse/sentelic.c | 2 +-
drivers/iommu/omap-iommu-debug.c | 3 +
drivers/irqchip/irq-gic-v3-its.c | 5 +-
drivers/md/bcache/bcache.h | 2 +-
drivers/md/bcache/bset.c | 2 +-
drivers/md/bcache/btree.c | 2 +-
drivers/md/bcache/journal.c | 4 +-
drivers/md/bcache/super.c | 2 +-
drivers/md/bcache/writeback.c | 14 ++--
drivers/md/bcache/writeback.h | 19 ++++-
drivers/md/dm-rq.c | 3 -
drivers/md/raid5.c | 3 +-
drivers/media/platform/rockchip/rga/rga-hw.c | 29 ++++----
drivers/media/platform/rockchip/rga/rga-hw.h | 5 ++
drivers/media/platform/vsp1/vsp1_dl.c | 2 +
drivers/mfd/arizona-core.c | 18 +++++
drivers/mfd/dln2.c | 4 +
drivers/mmc/host/renesas_sdhi_internal_dmac.c | 18 +++--
drivers/net/ethernet/qualcomm/emac/emac.c | 17 ++++-
.../ethernet/stmicro/stmmac/dwmac-ipq806x.c | 1 +
.../ethernet/stmicro/stmmac/dwmac1000_core.c | 3 +
drivers/pci/bus.c | 6 +-
drivers/pci/controller/dwc/pcie-qcom.c | 41 ++++++++++-
drivers/pci/hotplug/acpiphp_glue.c | 14 +++-
drivers/pci/quirks.c | 5 +-
drivers/pwm/pwm-bcm-iproc.c | 9 ++-
drivers/remoteproc/qcom_q6v5.c | 2 +
drivers/scsi/lpfc/lpfc_nvmet.c | 2 +-
drivers/usb/serial/ftdi_sio.c | 37 +++++-----
drivers/watchdog/f71808e_wdt.c | 13 +++-
drivers/watchdog/watchdog_dev.c | 18 ++---
fs/btrfs/disk-io.c | 13 +++-
fs/btrfs/free-space-cache.c | 4 +-
fs/btrfs/inode.c | 4 +-
fs/btrfs/ref-verify.c | 2 +
fs/btrfs/super.c | 35 ++++-----
fs/btrfs/tree-log.c | 8 +-
fs/btrfs/volumes.c | 21 +++++-
fs/cifs/smb2misc.c | 73 +++++++++++++------
fs/cifs/smb2pdu.c | 2 +
fs/ext2/ialloc.c | 3 +-
fs/minix/inode.c | 12 +--
fs/minix/itree_v1.c | 12 +--
fs/minix/itree_v2.c | 13 ++--
fs/minix/minix.h | 1 -
fs/nfs/nfs4proc.c | 2 -
fs/nfs/nfs4xdr.c | 6 +-
fs/ocfs2/ocfs2.h | 4 +-
fs/ocfs2/suballoc.c | 4 +-
fs/ocfs2/super.c | 4 +-
fs/ufs/super.c | 2 +-
include/linux/intel-iommu.h | 4 +-
include/linux/irq.h | 13 ++++
include/net/sock.h | 4 +
kernel/irq/manage.c | 6 +-
kernel/kprobes.c | 7 ++
kernel/module.c | 22 +++++-
kernel/trace/ftrace.c | 15 ++--
kernel/trace/trace_events.c | 4 +-
kernel/trace/trace_hwlat.c | 5 +-
lib/test_kmod.c | 2 +-
mm/khugepaged.c | 22 +++---
mm/page_counter.c | 6 +-
net/compat.c | 1 +
net/core/sock.c | 21 ++++++
net/mac80211/sta_info.c | 2 +-
sound/pci/echoaudio/echoaudio.c | 2 -
tools/build/Makefile.feature | 2 +-
tools/build/feature/Makefile | 2 -
tools/perf/bench/mem-functions.c | 21 +++---
.../util/intel-pt-decoder/intel-pt-decoder.c | 21 ++----
.../selftests/powerpc/ptrace/ptrace-pkey.c | 55 +++++++-------
96 files changed, 639 insertions(+), 365 deletions(-)
--
2.25.1
Python2 is no longer supported by the upstream community. The dependency
on python2 should be removed from the kernel code.
Signed-off-by: Zhipeng Xie <xiezhipeng1(a)huawei.com>
---
tools/perf/scripts/python/call-graph-from-sql.py | 2 +-
tools/perf/scripts/python/export-to-postgresql.py | 2 +-
tools/power/pm-graph/bootgraph.py | 2 +-
tools/power/pm-graph/sleepgraph.py | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/perf/scripts/python/call-graph-from-sql.py b/tools/perf/scripts/python/call-graph-from-sql.py
index b494a67a1c67..099b472df4a2 100644
--- a/tools/perf/scripts/python/call-graph-from-sql.py
+++ b/tools/perf/scripts/python/call-graph-from-sql.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python2
+#!/usr/bin/python
# call-graph-from-sql.py: create call-graph from sql database
# Copyright (c) 2014-2017, Intel Corporation.
#
diff --git a/tools/perf/scripts/python/export-to-postgresql.py b/tools/perf/scripts/python/export-to-postgresql.py
index e46f51b17513..e97a03697aa2 100644
--- a/tools/perf/scripts/python/export-to-postgresql.py
+++ b/tools/perf/scripts/python/export-to-postgresql.py
@@ -171,7 +171,7 @@ import datetime
# SELECT * FROM samples_view WHERE event = 'transactions' AND branch_type_name = 'transaction abort';
#
# To print a call stack requires walking the call_paths table. For example this python script:
-# #!/usr/bin/python2
+# #!/usr/bin/python
#
# import sys
# from PySide.QtSql import *
diff --git a/tools/power/pm-graph/bootgraph.py b/tools/power/pm-graph/bootgraph.py
index 8ee626c0f6a5..abb4c38f029b 100755
--- a/tools/power/pm-graph/bootgraph.py
+++ b/tools/power/pm-graph/bootgraph.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python2
+#!/usr/bin/python
#
# Tool for analyzing boot timing
# Copyright (c) 2013, Intel Corporation.
diff --git a/tools/power/pm-graph/sleepgraph.py b/tools/power/pm-graph/sleepgraph.py
index 0c760478f7d7..420102e2cd08 100755
--- a/tools/power/pm-graph/sleepgraph.py
+++ b/tools/power/pm-graph/sleepgraph.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python2
+#!/usr/bin/python
#
# Tool for analyzing suspend/resume timing
# Copyright (c) 2013, Intel Corporation.
--
2.18.1

[PATCH kernel-4.19, openEuler-1.0-LTS, openEuler-20.09] arm64/config: enable TIPC module for openEuler
by Xie XiuQi 31 Aug '20
hulk inclusion
category: feature
feature: TIPC
bugzilla: NA
CVE: NA
Link: https://gitee.com/openeuler/kernel/issues/I1TDS3
The Transparent Inter Process Communication (TIPC) protocol is
specially designed for intra-cluster communication. This protocol
originates from Ericsson, where it has been used in carrier-grade
cluster applications for many years.
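For background (not part of this patch): with CONFIG_TIPC=m loaded,
applications use the AF_TIPC socket family directly. A minimal sketch of
binding a TIPC service, where the service type 18888 is an arbitrary example
value:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/tipc.h>

int main(void)
{
	/* SOCK_RDM gives reliable datagram semantics on TIPC. */
	int sd = socket(AF_TIPC, SOCK_RDM, 0);
	struct sockaddr_tipc addr;

	if (sd < 0) {
		perror("socket(AF_TIPC)");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.family = AF_TIPC;
	addr.addrtype = TIPC_ADDR_NAME;		/* bind by service name */
	addr.addr.name.name.type = 18888;	/* example service type */
	addr.addr.name.name.instance = 1;

	if (bind(sd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}
	printf("TIPC service 18888 bound\n");
	return 0;
}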
Signed-off-by: Xie XiuQi <xiexiuqi(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 44a7d1661576..0ca7fd9cfeee 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -1459,7 +1459,10 @@ CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_INET_SCTP_DIAG=m
# CONFIG_RDS is not set
-# CONFIG_TIPC is not set
+CONFIG_TIPC=m
+CONFIG_TIPC_MEDIA_IB=y
+CONFIG_TIPC_MEDIA_UDP=y
+CONFIG_TIPC_DIAG=m
CONFIG_ATM=m
CONFIG_ATM_CLIP=m
# CONFIG_ATM_CLIP_NO_ICMP is not set
--
2.20.1
From: Peng Liang <liangpeng10(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
linux/kvm.h should be self-contained, so uint64_t should be replaced
with __u64.
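The rule being applied (our illustration, not part of the patch): UAPI headers
are consumed by userspace that is only guaranteed to have <linux/types.h>, so
the kernel's fixed-width types such as __u64 must be used instead of C99
uint64_t, which would quietly require <stdint.h>:

/* uapi-style fragment: self-contained with only linux/types.h */
#include <linux/types.h>

struct example_reg_info {
	__u64 sys_id;	/* fine: __u64 comes from linux/types.h */
	__u64 sys_val;
};

/* By contrast, a uint64_t field here would only compile if the
 * *including* file happened to pull in <stdint.h> first. */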
Signed-off-by: Peng Liang <liangpeng10(a)huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/uapi/linux/kvm.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 4bd8e8bcc78e1..6b6eb5b00a385 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1201,13 +1201,13 @@ struct kvm_vfio_spapr_tce {
#define ID_REG_MAX_NUMS 64
struct id_reg_info {
- uint64_t sys_id;
- uint64_t sys_val;
+ __u64 sys_id;
+ __u64 sys_val;
};
struct id_registers {
struct id_reg_info regs[ID_REG_MAX_NUMS];
- uint64_t num;
+ __u64 num;
};
/*
--
2.25.1

[PATCH 1/8] Revert "block: rename 'q->debugfs_dir' and 'q->blk_trace->dir' in blk_unregister_queue()"
by Yang Yingliang 29 Aug '20
From: Yu Kuai <yukuai3(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 30213
CVE: CVE-2019-19770
---------------------------
The UAF in blktrace was fixed by a different approach from mainline, thus
revert our solution and backport the related patches.
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Reviewed-by: Yufen Yu <yuyufen(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-sysfs.c | 49 -----------------------------------------------
1 file changed, 49 deletions(-)
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index ce84526ed51dd..2d905a8b14730 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -11,7 +11,6 @@
#include <linux/blktrace_api.h>
#include <linux/blk-mq.h>
#include <linux/blk-cgroup.h>
-#include <linux/debugfs.h>
#include <linux/atomic.h>
#include "blk.h"
@@ -960,53 +959,6 @@ int blk_register_queue(struct gendisk *disk)
}
EXPORT_SYMBOL_GPL(blk_register_queue);
-#ifdef CONFIG_DEBUG_FS
-void blk_rename_debugfs_dir(struct dentry **old)
-{
- static atomic_t i = ATOMIC_INIT(0);
- struct dentry *new;
- char name[DNAME_INLINE_LEN];
- u32 index = atomic_fetch_inc(&i);
-
- snprintf(name, sizeof(name), "ready_to_remove_%u", index);
- new = debugfs_lookup(name, blk_debugfs_root);
- if (WARN_ON(new)) {
- dput(new);
- return;
- }
- new = debugfs_rename(blk_debugfs_root, *old, blk_debugfs_root, name);
- if (WARN_ON(!new))
- return;
- *old = new;
-}
-
-/*
- * blk_prepare_release_queue - rename q->debugfs_dir and q->blk_trace->dir
- * @q: request_queue of which the dir to be renamed belong to.
- *
- * Because the final release of request_queue is in a workqueue, the
- * cleanup might not been finished yet while the same device start to
- * create. It's not correct if q->debugfs_dir still exist while trying
- * to create a new one.
- */
-static void blk_prepare_release_queue(struct request_queue *q)
-{
-#ifdef CONFIG_BLK_DEBUG_FS
- if (!IS_ERR_OR_NULL(q->debugfs_dir))
- blk_rename_debugfs_dir(&q->debugfs_dir);
-
-#endif
-#ifdef CONFIG_BLK_DEV_IO_TRACE
- mutex_lock(&q->blk_trace_mutex);
- if (q->blk_trace && !IS_ERR_OR_NULL(q->blk_trace->dir))
- blk_rename_debugfs_dir(&q->blk_trace->dir);
- mutex_unlock(&q->blk_trace_mutex);
-#endif
-}
-#else
-#define blk_prepare_release_queue(q) do { } while (0)
-#endif
-
/**
* blk_unregister_queue - counterpart of blk_register_queue()
* @disk: Disk of which the request queue should be unregistered from sysfs.
@@ -1031,7 +983,6 @@ void blk_unregister_queue(struct gendisk *disk)
* concurrent elv_iosched_store() calls.
*/
mutex_lock(&q->sysfs_lock);
- blk_prepare_release_queue(q);
blk_queue_flag_clear(QUEUE_FLAG_REGISTERED, q);
--
2.25.1
This series adds support for the ARMv8.3 pointer authentication extension,
enabling userspace return address protection with GCC 7 and above.
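As an illustration of the userspace side (this example is ours, not from the
cover letter): with the kernel managing the keys, a program opts in at build
time, and GCC brackets each non-leaf frame with PAC instructions. The flag
below is the GCC 7 spelling; the assembly in the comment is the typical
emitted pattern:

/*
 * Build with return-address signing (GCC 7+ on AArch64):
 *
 *   gcc -march=armv8.3-a -msign-return-address=non-leaf -O2 demo.c
 *
 * A non-leaf function then looks roughly like:
 *
 *   paciasp                   // sign LR against SP with the IA key
 *   stp  x29, x30, [sp, -16]!
 *   ...
 *   ldp  x29, x30, [sp], 16
 *   autiasp                   // authenticate LR; a forged LR faults
 *   ret
 */
void callee(void);

void demo(void)
{
	callee();	/* makes demo() non-leaf, so its LR is signed */
}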
Arnaldo Carvalho de Melo (2):
tools beauty: Make the prctl option table generator catch all PR_
options
tools headers uapi: Sync prctl.h with the kernel sources
Kristina Martsenko (3):
arm64: add comments about EC exception levels
arm64: add prctl control for resetting ptrauth keys
arm64: add ptrace regsets for ptrauth key management
Mark Rutland (8):
arm64: add pointer authentication register bits
arm64/kvm: hide ptrauth from guests
arm64/cpufeature: detect pointer authentication
arm64: add basic pointer authentication support
arm64: expose user PAC bit positions via ptrace
arm64: perf: strip PAC when unwinding userspace
arm64: enable pointer authentication
arm64: docs: document pointer authentication
Will Deacon (3):
arm64: ptr auth: Move per-thread keys from thread_info to
thread_struct
arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4
arm64: cpufeature: Rework ptr auth hwcaps using
multi_entry_cap_matches
Documentation/arm64/booting.txt | 8 +
Documentation/arm64/cpu-feature-registers.txt | 8 +
Documentation/arm64/elf_hwcaps.txt | 12 ++
.../arm64/pointer-authentication.txt | 93 +++++++++
arch/arm64/Kconfig | 23 +++
arch/arm64/include/asm/cpucaps.h | 6 +-
arch/arm64/include/asm/cpufeature.h | 73 +++++--
arch/arm64/include/asm/esr.h | 17 +-
arch/arm64/include/asm/pointer_auth.h | 97 +++++++++
arch/arm64/include/asm/processor.h | 7 +
arch/arm64/include/asm/sysreg.h | 30 +++
arch/arm64/include/uapi/asm/hwcap.h | 2 +
arch/arm64/include/uapi/asm/ptrace.h | 20 ++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/cpu_errata.c | 32 ---
arch/arm64/kernel/cpufeature.c | 138 +++++++++++--
arch/arm64/kernel/cpuinfo.c | 2 +
arch/arm64/kernel/perf_callchain.c | 6 +-
arch/arm64/kernel/pointer_auth.c | 47 +++++
arch/arm64/kernel/process.c | 4 +
arch/arm64/kernel/ptrace.c | 185 ++++++++++++++++++
arch/arm64/kvm/handle_exit.c | 18 ++
arch/arm64/kvm/sys_regs.c | 8 +
include/uapi/linux/elf.h | 3 +
include/uapi/linux/prctl.h | 8 +
kernel/sys.c | 8 +
tools/include/uapi/linux/prctl.h | 8 +
tools/perf/trace/beauty/prctl_option.sh | 2 +-
28 files changed, 789 insertions(+), 77 deletions(-)
create mode 100644 Documentation/arm64/pointer-authentication.txt
create mode 100644 arch/arm64/include/asm/pointer_auth.h
create mode 100644 arch/arm64/kernel/pointer_auth.c
--
2.25.1
euleros inclusion
category: feature
bugzilla: NA
CVE: NA
This series supports dirty logging for migration in RISC-V KVM. We have
implemented VM migration in QEMU, so these patches have been tested and
are valid.
Link: https://gitee.com/openeuler/kernel/issues/I1SWY2
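For reference (a generic sketch of the standard KVM API these patches
implement for RISC-V; vm_fd and slot are assumed to come from the usual
KVM_CREATE_VM / KVM_SET_USER_MEMORY_REGION setup):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* bitmap must hold one bit per page of the memslot. */
static int get_dirty_log(int vm_fd, unsigned int slot, void *bitmap)
{
	struct kvm_dirty_log log;

	memset(&log, 0, sizeof(log));
	log.slot = slot;
	log.dirty_bitmap = bitmap;

	/* Returns the set of pages dirtied since the previous call. */
	return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}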
Yifei Jiang (4):
riscv/kvm: Fix use VSIP_VALID_MASK mask HIP register
riscv/kvm: add kvm_arch_mmu_enable_log_dirty_pt_masked
riscv/kvm: implement kvm_vm_ioctl_get_dirty_log
riscv/kvm: Add kvm_vm_ioctl_clear_dirty_log
arch/riscv/configs/defconfig | 1 +
arch/riscv/kvm/Kconfig | 1 +
arch/riscv/kvm/mmu.c | 37 +++++++++++++++++++++++++++
arch/riscv/kvm/vcpu.c | 2 +-
arch/riscv/kvm/vm.c | 49 ++++++++++++++++++++++++++++++++++--
5 files changed, 87 insertions(+), 3 deletions(-)
--
2.19.1
From: Jun Huang <huangjun63(a)huawei.com>
The acpi_execute_simple_method function takes a mutex, so it cannot be called
from timer (atomic) context. Replace the timer with delayed work.
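The pattern behind the change (a generic sketch, not the patch itself): timer
callbacks run in atomic context, where sleeping locks such as mutexes are
forbidden, while delayed work runs in process context, where they are allowed.
The handler and function names below are hypothetical:

#include <linux/workqueue.h>
#include <linux/jiffies.h>

static void my_handler(struct work_struct *work)
{
	/* Process context: calls that take a mutex internally, like
	 * acpi_execute_simple_method(), are legal here. */
}

static DECLARE_DELAYED_WORK(my_work, my_handler);

static void arm_4s_debounce(void)
{
	/* Replaces mod_timer(&t, jiffies + 4 * HZ). */
	schedule_delayed_work(&my_work, 4 * HZ);
}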
Signed-off-by: Jun Huang <huangjun63(a)huawei.com>
---
drivers/acpi/evged.c | 60 ++++++++++++++++++++------------------------
1 file changed, 27 insertions(+), 33 deletions(-)
diff --git a/drivers/acpi/evged.c b/drivers/acpi/evged.c
index 7354d8d..5bf65b1 100644
--- a/drivers/acpi/evged.c
+++ b/drivers/acpi/evged.c
@@ -34,7 +34,7 @@
* Method (_EVT, 1) {
* if (Lequal(123, Arg0))
* {
- * }
+ * }#
* }
* }
*
@@ -46,20 +46,23 @@
#include <linux/list.h>
#include <linux/platform_device.h>
#include <linux/acpi.h>
-#include <linux/timer.h>
#include <linux/jiffies.h>
+#include <linux/workqueue.h>
#define MODULE_NAME "acpi-ged"
+static void gpp_enable_pwd(struct work_struct *private_);
+
+static DECLARE_DELAYED_WORK(gpp_work, gpp_enable_pwd);
+
struct acpi_ged_handle {
- struct timer_list timer; /* For 4s anti-shake of power button */
acpi_handle gpp_handle; /* ACPI Handle: enable shutdown */
acpi_handle gpo_handle; /* ACPI Handle: set sleep flag */
};
struct acpi_ged_device {
struct device *dev;
- struct acpi_ged_handle *wakeup_handle;
+ struct acpi_ged_handle *ged_handle;
struct list_head event_list;
};
@@ -71,6 +74,8 @@ struct acpi_ged_event {
acpi_handle handle;
};
+static struct acpi_ged_handle *wakeup_handle;
+
static irqreturn_t acpi_ged_irq_handler(int irq, void *data)
{
struct acpi_ged_event *event = data;
@@ -142,7 +147,6 @@ static acpi_status acpi_ged_request_interrupt(struct acpi_resource *ares,
#ifdef CONFIG_PM_SLEEP
static void init_ged_handle(struct acpi_ged_device *geddev) {
- struct acpi_ged_handle *wakeup_handle;
acpi_handle gpo_handle = NULL;
acpi_handle gpp_handle = NULL;
acpi_status acpi_ret;
@@ -151,10 +155,7 @@ static void init_ged_handle(struct acpi_ged_device *geddev) {
if (!wakeup_handle)
return;
- geddev->wakeup_handle = wakeup_handle;
-
- /* Initialize wakeup_handle, prepare for ged suspend and resume */
- timer_setup(&wakeup_handle->timer, NULL, 0);
+ geddev->ged_handle = wakeup_handle;
acpi_ret = acpi_get_handle(ACPI_HANDLE(geddev->dev), "_GPO", &gpo_handle);
if (ACPI_FAILURE(acpi_ret))
@@ -204,8 +205,6 @@ static void ged_shutdown(struct platform_device *pdev)
dev_dbg(geddev->dev, "GED releasing GSI %u @ IRQ %u\n",
event->gsi, event->irq);
}
- if (geddev->wakeup_handle)
- del_timer(&geddev->wakeup_handle->timer);
}
static int ged_remove(struct platform_device *pdev)
@@ -220,23 +219,10 @@ static const struct acpi_device_id ged_acpi_ids[] = {
};
#ifdef CONFIG_PM_SLEEP
-static void ged_timer_callback(struct timer_list *t)
-{
- struct acpi_ged_handle *wakeup_handle = from_timer(wakeup_handle, t, timer);
- acpi_status acpi_ret;
-
- /* _GPP method enable power button */
- if (wakeup_handle && wakeup_handle->gpp_handle) {
- acpi_ret = acpi_execute_simple_method(wakeup_handle->gpp_handle, NULL, ACPI_IRQ_MODEL_GIC);
- if (ACPI_FAILURE(acpi_ret))
- pr_warn("_GPP method execution failed\n");
- }
-}
-
static int ged_suspend(struct device *dev)
{
struct acpi_ged_device *geddev = dev_get_drvdata(dev);
- struct acpi_ged_handle *wakeup_handle = geddev->wakeup_handle;
+ //struct acpi_ged_handle *wakeup_handle = geddev->wakeup_handle;
struct acpi_ged_event *event, *next;
acpi_status acpi_ret;
@@ -255,18 +241,26 @@ static int ged_suspend(struct device *dev)
return 0;
}
+static void gpp_enable_pwd(struct work_struct *private_)
+{
+ acpi_status acpi_ret;
+
+ /* _GPP method enable power button */
+ if (wakeup_handle && wakeup_handle->gpp_handle) {
+ acpi_ret = acpi_execute_simple_method(wakeup_handle->gpp_handle, NULL, ACPI_IRQ_MODEL_GIC);
+ if (ACPI_FAILURE(acpi_ret))
+ pr_warn("_GPP method execution failed\n");
+ }
+
+}
+
static int ged_resume(struct device *dev)
{
struct acpi_ged_device *geddev = dev_get_drvdata(dev);
- struct acpi_ged_handle *wakeup_handle = geddev->wakeup_handle;
struct acpi_ged_event *event, *next;
-
- /* use timer to complete 4s anti-shake */
- if (wakeup_handle && wakeup_handle->gpp_handle) {
- wakeup_handle->timer.expires = jiffies + (4 * HZ);
- wakeup_handle->timer.function = ged_timer_callback;
- add_timer(&wakeup_handle->timer);
- }
+
+ /* use delayed_work to complete 4s anti-shake */
+ schedule_delayed_work(&gpp_work, 4 * HZ);
list_for_each_entry_safe(event, next, &geddev->event_list, node)
disable_irq_wake(event->irq);
--
2.23.0.windows.1
euleros inclusion
category: feature
bugzilla: NA
CVE: NA
These two patches enable support for vhost-net on the RISC-V
architecture.
The following two steps are performed to run a VM with
vhost-net:
1. create virbr0 on riscv64 emulation
$ brctl addbr virbr0
$ brctl stp virbr0 on
$ ifconfig virbr0 up
$ ifconfig virbr0 virbr0_ip netmask 255.255.255.0
2. boot a riscv64 guest OS on riscv64 emulation
$ ./qemu-system-riscv64 -M virt,accel=kvm -m 1024M -cpu host -nographic \
-name guest=riscv-guest \
-smp 2 \
-kernel ./Image \
-drive file=./guest.img,format=raw,id=hd0 \
-device virtio-blk,drive=hd0 \
-netdev type=tap,vhost=on,script=./ifup.sh,downscript=./ifdown.sh,id=net0 \
-append "root=/dev/vda rw console=ttyS0 earlycon=sbi"
$ cat ifup.sh
#!/bin/sh
brctl addif virbr0 $1
ifconfig $1 up
$ cat ifdown.sh
#!/bin/sh
ifconfig $1 down
brctl delif virbr0 $1
The netperf benchmark is used to compare the performance of vhost-net
and virtio-net. The results are as follows:
$ ./netperf -H virbr0_ip -l 100 -t TCP_STREAM
vhost-net:
Recv    Send    Send
Socket  Socket  Message  Elapsed
Size    Size    Size     Time     Throughput
bytes   bytes   bytes    secs.    10^6bits/sec
131072  16384   16384    100.06   521.62
virtio-net:
Recv    Send    Send
Socket  Socket  Message  Elapsed
Size    Size    Size     Time     Throughput
bytes   bytes   bytes    secs.    10^6bits/sec
131072  16384   16384    292.86   292.86
Link: https://gitee.com/openeuler/kernel/issues/I1SQJ2
Yifei Jiang (2):
RISC-V: KVM: enable ioeventfd capability
RISC-V: KVM: kernel mmio read/write support
arch/riscv/kvm/Kconfig | 2 ++
arch/riscv/kvm/Makefile | 2 +-
arch/riscv/kvm/vcpu_exit.c | 28 ++++++++++++++++++++++++++--
arch/riscv/kvm/vm.c | 1 +
4 files changed, 30 insertions(+), 3 deletions(-)
--
2.19.1
From: Zhang Changzhong <zhangchangzhong(a)huawei.com>
mainline inclusion
from mainline-v5.9-rc2
commit f4fd77fd87e9b214c26bb2ebd4f90055eaea5ade
category: bugfix
bugzilla: 39990
CVE: NA
---------------------------
Currently j1939_tp_im_involved_anydir() in j1939_tp_recv() checks the previously
set flags J1939_ECU_LOCAL_DST and J1939_ECU_LOCAL_SRC of the incoming skb, thus
multipacket broadcast messages were aborted by the receive side because they may
come from remote ECUs and have no exact dst address. Similarly, j1939_tp_cmd_recv()
and j1939_xtp_rx_dat() didn't process broadcast messages.
So fix it by checking for and processing broadcast messages in j1939_tp_recv(),
j1939_tp_cmd_recv() and j1939_xtp_rx_dat().
Fixes: 9d71dd0c7009 ("can: add support of SAE J1939 protocol")
Signed-off-by: Zhang Changzhong <zhangchangzhong(a)huawei.com>
Link: https://lore.kernel.org/r/1596599425-5534-2-git-send-email-zhangchangzhong@…
Acked-by: Oleksij Rempel <o.rempel(a)pengutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl(a)pengutronix.de>
Signed-off-by: Zhang Changzhong <zhangchangzhong(a)huawei.com>
Reviewed-by: Yue Haibing <yuehaibing(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/can/j1939/transport.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
index 9f99af5b0b11e..e5188acbb1db7 100644
--- a/net/can/j1939/transport.c
+++ b/net/can/j1939/transport.c
@@ -1651,8 +1651,12 @@ static void j1939_xtp_rx_rts(struct j1939_priv *priv, struct sk_buff *skb,
return;
}
session = j1939_xtp_rx_rts_session_new(priv, skb);
- if (!session)
+ if (!session) {
+ if (cmd == J1939_TP_CMD_BAM && j1939_sk_recv_match(priv, skcb))
+ netdev_info(priv->ndev, "%s: failed to create TP BAM session\n",
+ __func__);
return;
+ }
} else {
if (j1939_xtp_rx_rts_session_active(session, skb)) {
j1939_session_put(session);
@@ -1829,6 +1833,13 @@ static void j1939_xtp_rx_dat(struct j1939_priv *priv, struct sk_buff *skb)
else
j1939_xtp_rx_dat_one(session, skb);
}
+
+ if (j1939_cb_is_broadcast(skcb)) {
+ session = j1939_session_get_by_addr(priv, &skcb->addr, false,
+ false);
+ if (session)
+ j1939_xtp_rx_dat_one(session, skb);
+ }
}
/* j1939 main intf */
@@ -1920,7 +1931,7 @@ static void j1939_tp_cmd_recv(struct j1939_priv *priv, struct sk_buff *skb)
if (j1939_tp_im_transmitter(skcb))
j1939_xtp_rx_rts(priv, skb, true);
- if (j1939_tp_im_receiver(skcb))
+ if (j1939_tp_im_receiver(skcb) || j1939_cb_is_broadcast(skcb))
j1939_xtp_rx_rts(priv, skb, false);
break;
@@ -1984,7 +1995,7 @@ int j1939_tp_recv(struct j1939_priv *priv, struct sk_buff *skb)
{
struct j1939_sk_buff_cb *skcb = j1939_skb_to_cb(skb);
- if (!j1939_tp_im_involved_anydir(skcb))
+ if (!j1939_tp_im_involved_anydir(skcb) && !j1939_cb_is_broadcast(skcb))
return 0;
switch (skcb->addr.pgn) {
--
2.25.1
Resending due to conflicts with existing code.
This patch set provides some improvements and fixes. The first three
patches solve a permission problem caused by EVM: EVM denies xattr
operations even for files that are not appraised by IMA. If only
executables are appraised, xattr operations on the other files should be
allowed, even if metadata verification fails (for example due to missing
security.evm).
At the moment, in openEuler we use EVM_ALLOW_METADATA_WRITES to avoid this
problem (EVM does not check metadata integrity), but it would be useful to
do the verification, for example to prevent accidental changes to immutable
metadata.
The fourth patch enables the choice of the algorithm for the HMAC and
ensures that the parameters passed to the functions which handle the HMAC
are consistent with the algorithm chosen.
The last three patches are simple bug fixes.
Roberto Sassu (7):
evm: Move hooks outside LSM infrastructure
evm: Extend API of post hooks to pass the result of pre hooks
evm: Return -EAGAIN to ignore verification failures
evm: Propagate choice of HMAC algorithm in evm_crypto.c
ima: Fix datalen check in ima_write_data()
evm: Fix validation of fake xattr passed by IMA
evm: Initialize saved_evm_status
fs/attr.c | 7 ++-
fs/xattr.c | 65 +++++++++++++++++---------
include/linux/evm.h | 18 +++++---
security/integrity/evm/Kconfig | 32 +++++++++++++
security/integrity/evm/evm.h | 1 +
security/integrity/evm/evm_crypto.c | 15 ++++--
security/integrity/evm/evm_main.c | 71 +++++++++++++++++++++--------
security/integrity/ima/ima_fs.c | 2 +-
security/integrity/integrity.h | 2 +-
security/security.c | 18 ++------
10 files changed, 159 insertions(+), 72 deletions(-)
--
2.27.GIT
Patches 1-31 are from mainline.
Patch 32 adds cgroup v1 support.
Patches 33 and 34 are from ali-cloud-kernel (https://github.com/alibaba/cloud-kernel.git);
they fix bugs found when introducing iocost into an older kernel.
Patch 35 fixes a bug when using iocost on single-queue (sq) devices.
Dan Carpenter (1):
iocost: don't nest spin_lock_irq in ioc_weight_write()
Jiufei Xue (3):
iocost: check active_list of all the ancestors in iocg_activate()
iocost: fix NULL pointer dereference in ioc_rqos_throttle
iocost: fix a deadlock in ioc_rqos_throttle()
Stephen Rothwell (1):
blkcg: blk-iocost: predeclare used structs
Tejun Heo (27):
blk-mq: add optional request->alloc_time_ns
block/rq_qos: add rq_qos_merge()
block/rq_qos: implement rq_qos_ops->queue_depth_changed()
blkcg: separate blkcg_conf_get_disk() out of blkg_conf_prep()
cgroup: add cgroup_parse_float()
cgroup: Move cgroup_parse_float() implementation out of CONFIG_SYSFS
blkcg: pass @q and @blkcg into blkcg_pol_alloc_pd_fn()
blkcg: make ->cpd_init_fn() optional
blkcg: implement blk-iocost
blkcg: add tools/cgroup/iocost_monitor.py
blkcg: add tools/cgroup/iocost_coef_gen.py
blkcg: fix missing free on error path of blk_iocost_init()
blkcg: add missing NULL check in ioc_cpd_alloc()
blk-iocost: Fix incorrect operation order during iocg free
blk-iocost: Account force-charged overage in absolute vtime
blk-iocost: Don't let merges push vtime into the future
iocost_monitor: Always use strings for json values
iocost_monitor: Report more info with higher accuracy
iocost_monitor: Report debt
iocost: better trace vrate changes
iocost: improve nr_lagging handling
iocost: bump up default latency targets for hard disks
iocost: over-budget forced IOs should schedule async delay
iocost: Fix iocost_monitor.py due to helper type mismatch
blk-iocost: fix incorrect vtime comparison in iocg_is_idle()
blkcg: blkcg_activate_policy() should initialize ancestors first
blkcg: Fix multiple bugs in blkcg_activate_policy()
Waiman Long (1):
blk-iocost: Fix error on iocost_ioc_vrate_adj
Yu Kuai (2):
iocost: add cgroup V1 suport
blk-iocost: fix spin_lock won't release in sq
Documentation/admin-guide/cgroup-v2.rst | 103 +
block/Kconfig | 13 +
block/Makefile | 1 +
block/bfq-cgroup.c | 5 +-
block/blk-cgroup.c | 136 +-
block/blk-core.c | 4 +
block/blk-iocost.c | 2547 +++++++++++++++++++++++
block/blk-iolatency.c | 5 +-
block/blk-mq.c | 13 +-
block/blk-rq-qos.c | 21 +
block/blk-rq-qos.h | 5 +
block/blk-settings.c | 2 +-
block/blk-throttle.c | 6 +-
block/blk-wbt.c | 18 +-
block/blk-wbt.h | 4 -
block/cfq-iosched.c | 5 +-
include/linux/blk-cgroup.h | 4 +-
include/linux/blk_types.h | 3 +
include/linux/blkdev.h | 14 +-
include/linux/cgroup.h | 2 +
include/trace/events/iocost.h | 178 ++
kernel/cgroup/cgroup.c | 43 +
tools/cgroup/iocost_coef_gen.py | 178 ++
tools/cgroup/iocost_monitor.py | 277 +++
24 files changed, 3522 insertions(+), 65 deletions(-)
create mode 100644 block/blk-iocost.c
create mode 100644 include/trace/events/iocost.h
create mode 100644 tools/cgroup/iocost_coef_gen.py
create mode 100644 tools/cgroup/iocost_monitor.py
--
2.25.1
Idle-haltpoll is already supported on x86. This series provides it on ARM.
Xiangyou Xie (5):
arm64: Optimize ttwu IPI
cpuidle-haltpoll: Use arch_cpu_idle() to replace default_idle()
arm64: Add some definitions of kvm_para*
ARM: cpuidle: Add support for cpuidle-haltpoll driver for ARM
config: set default value of haltpoll
arch/arm64/Kconfig | 3 +++
arch/arm64/configs/euleros_defconfig | 2 ++
arch/arm64/configs/hulk_defconfig | 2 ++
arch/arm64/configs/openeuler_defconfig | 4 +++-
arch/arm64/include/asm/kvm_para.h | 27 ++++++++++++++++++++++++++
arch/arm64/include/asm/thread_info.h | 3 +++
arch/arm64/kernel/process.c | 4 ++++
arch/x86/kernel/process.c | 1 +
drivers/cpuidle/Kconfig | 4 ++--
drivers/cpuidle/cpuidle-haltpoll.c | 6 +++---
drivers/cpuidle/governors/haltpoll.c | 6 +++++-
drivers/cpuidle/poll_state.c | 3 +++
12 files changed, 58 insertions(+), 7 deletions(-)
create mode 100644 arch/arm64/include/asm/kvm_para.h
--
2.25.1
In AArch64, a guest will read the same ID register values as the host;
both read them from arm64_ftr_regs. This patch series adds support to
emulate and configure the ID registers so that we can control the
values the guest reads.
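For reference (an illustrative sketch, not code from the series): the ID
registers in question are read with MRS; the series intercepts the guest's
reads so KVM can return configured values instead. On the host, a read looks
like:

#include <stdint.h>

/* Read ID_AA64ISAR1_EL1, one of the feature ID registers the series
 * makes configurable for guests (the kernel uses read_sysreg() for
 * this; the raw MRS form is shown here). */
static inline uint64_t read_id_aa64isar1(void)
{
	uint64_t val;

	asm volatile("mrs %0, ID_AA64ISAR1_EL1" : "=r" (val));
	return val;
}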
Peng Liang (4):
arm64: add a helper function to traverse arm64_ftr_regs
kvm: arm64: emulate the ID registers
kvm: arm64: make ID registers configurable
kvm: arm64: add KVM_CAP_ARM_CPU_FEATURE extension
arch/arm64/include/asm/cpufeature.h | 2 +
arch/arm64/include/asm/kvm_host.h | 2 +
arch/arm64/kernel/cpufeature.c | 13 ++++++
arch/arm64/kvm/sys_regs.c | 71 ++++++++++++++++++++++-------
include/uapi/linux/kvm.h | 13 ++++++
virt/kvm/arm/arm.c | 23 ++++++++++
6 files changed, 107 insertions(+), 17 deletions(-)
--
2.25.1
From: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
mainline inclusion
from mainline-v4.20-rc2
commit 26a4676faa1ad5d99317e0cd701e5d6f3e716b77
category: feature
bugzilla: NA
CVE: NA
---------------------------
On arm64, there is no need to add 2 bytes of padding to the start of
each network buffer just to make the IP header appear 32-bit aligned.
Since this might actually adversely affect DMA performance on some
platforms, let's override NET_IP_ALIGN to 0 to get rid of this
padding.
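For context (a generic driver-side sketch, not part of this patch):
NET_IP_ALIGN is consumed when drivers reserve headroom in receive buffers, so
overriding it to 0 simply makes that reservation a no-op on arm64:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *rx_alloc(struct net_device *dev, unsigned int len)
{
	struct sk_buff *skb = netdev_alloc_skb(dev, len + NET_IP_ALIGN);

	if (!skb)
		return NULL;
	/* With NET_IP_ALIGN == 2 this puts the IP header on a 4-byte
	 * boundary; with the arm64 override it reserves nothing. */
	skb_reserve(skb, NET_IP_ALIGN);
	return skb;
}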
Acked-by: Ilias Apalodimas <ilias.apalodimas(a)linaro.org>
Tested-by: Ilias Apalodimas <ilias.apalodimas(a)linaro.org>
Acked-by: Mark Rutland <mark.rutland(a)arm.com>
Acked-by: Will Deacon <will.deacon(a)arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/include/asm/processor.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 98529c8b5d313..7695a5117ff20 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -24,6 +24,14 @@
#define KERNEL_DS UL(-1)
#define USER_DS (TASK_SIZE_64 - 1)
+/*
+ * On arm64 systems, unaligned accesses by the CPU are cheap, and so there is
+ * no point in shifting all network buffers by 2 bytes just to make some IP
+ * header fields appear aligned in memory, potentially sacrificing some DMA
+ * performance on some platforms.
+ */
+#define NET_IP_ALIGN 0
+
#ifndef __ASSEMBLY__
/*
--
2.25.1
Andrew Pinski (3):
arm64: rename COMPAT to AARCH32_EL0
arm64: uapi: set __BITS_PER_LONG correctly for ILP32 and LP64
arm64:ilp32: add ARM64_ILP32 to Kconfig
Dave Martin (1):
arm64: signal: Make parse_user_sigframe() independent of rt_sigframe
layout
James Morse (1):
ptrace: Add compat PTRACE_{G, S}ETSIGMASK handlers
Philipp Tomsich (1):
arm64:ilp32: add vdso-ilp32 and use for signal return
Xiongfeng Wang (2):
arm64: ilp32: fix compile warning cause by 'VA_BITS'
config: add CONFIG_ARM64_ILP32 in defconfigs
Yury Norov (17):
compat ABI: use non-compat openat and open_by_handle_at variants
32-bit userspace ABI: introduce ARCH_32BIT_OFF_T config option
asm-generic: Drop getrlimit and setrlimit syscalls from default list
thread: move thread bits accessors to separated file
arm64: ilp32: add documentation on the ILP32 ABI for ARM64
arm64: rename functions that reference compat term
arm64: introduce is_a32_compat_{task, thread} for AArch32 compat
arm64: ilp32: add is_ilp32_compat_{task, thread} and TIF_32BIT_AARCH64
arm64: introduce binfmt_elf32.c
arm64: change compat_elf_hwcap and compat_elf_hwcap2 prefix to a32
arm64: ilp32: introduce binfmt_ilp32.c
arm64: ilp32: share aarch32 syscall handlers
arm64: ilp32: introduce syscall table for ILP32
arm64: signal: share lp64 signal structures and routines to ilp32
arm64: signal32: move ilp32 and aarch32 common code to separated file
arm64: ilp32: introduce ilp32-specific sigframe and ucontext
arm64: ptrace: handle ptrace_request differently for aarch32 and ilp32
Documentation/arm64/ilp32.txt | 52 +++
arch/Kconfig | 15 +
arch/arc/Kconfig | 1 +
arch/arc/include/uapi/asm/unistd.h | 1 +
arch/arm/Kconfig | 1 +
arch/arm64/Kconfig | 17 +-
arch/arm64/Makefile | 3 +
arch/arm64/configs/euleros_defconfig | 2 +
arch/arm64/configs/hulk_defconfig | 2 +
arch/arm64/configs/openeuler_defconfig | 2 +
arch/arm64/configs/storage_ci_defconfig | 2 +
arch/arm64/configs/syzkaller_defconfig | 2 +
arch/arm64/include/asm/compat.h | 19 +-
arch/arm64/include/asm/elf.h | 36 +-
arch/arm64/include/asm/fpsimd.h | 2 +-
arch/arm64/include/asm/ftrace.h | 2 +-
arch/arm64/include/asm/hwcap.h | 8 +-
arch/arm64/include/asm/is_compat.h | 78 ++++
arch/arm64/include/asm/memory.h | 4 +
arch/arm64/include/asm/processor.h | 13 +-
arch/arm64/include/asm/ptrace.h | 12 +-
arch/arm64/include/asm/seccomp.h | 2 +-
arch/arm64/include/asm/signal32.h | 19 +-
arch/arm64/include/asm/signal32_common.h | 13 +
arch/arm64/include/asm/signal_common.h | 303 +++++++++++++++
arch/arm64/include/asm/signal_ilp32.h | 23 ++
arch/arm64/include/asm/syscall.h | 10 +-
arch/arm64/include/asm/thread_info.h | 4 +-
arch/arm64/include/asm/unistd.h | 6 +-
arch/arm64/include/asm/vdso.h | 6 +
arch/arm64/include/uapi/asm/bitsperlong.h | 9 +-
arch/arm64/include/uapi/asm/unistd.h | 13 +
arch/arm64/kernel/Makefile | 8 +-
arch/arm64/kernel/armv8_deprecated.c | 6 +-
arch/arm64/kernel/asm-offsets.c | 9 +-
arch/arm64/kernel/binfmt_elf32.c | 35 ++
arch/arm64/kernel/binfmt_ilp32.c | 87 +++++
arch/arm64/kernel/cpufeature.c | 28 +-
arch/arm64/kernel/cpuinfo.c | 18 +-
arch/arm64/kernel/debug-monitors.c | 4 +-
arch/arm64/kernel/entry.S | 6 +-
arch/arm64/kernel/head.S | 2 +-
arch/arm64/kernel/hw_breakpoint.c | 8 +-
arch/arm64/kernel/perf_callchain.c | 28 +-
arch/arm64/kernel/perf_regs.c | 4 +-
arch/arm64/kernel/process.c | 13 +-
arch/arm64/kernel/ptrace.c | 38 +-
arch/arm64/kernel/signal.c | 348 ++++--------------
arch/arm64/kernel/signal32.c | 111 +++---
arch/arm64/kernel/signal32_common.c | 37 ++
arch/arm64/kernel/signal_ilp32.c | 67 ++++
arch/arm64/kernel/sys32.c | 104 +-----
arch/arm64/kernel/sys32_common.c | 106 ++++++
arch/arm64/kernel/sys_compat.c | 12 +-
arch/arm64/kernel/sys_ilp32.c | 75 ++++
arch/arm64/kernel/syscall.c | 37 +-
arch/arm64/kernel/traps.c | 3 +-
arch/arm64/kernel/vdso-ilp32/.gitignore | 2 +
arch/arm64/kernel/vdso-ilp32/Makefile | 89 +++++
arch/arm64/kernel/vdso-ilp32/vdso-ilp32.S | 22 ++
arch/arm64/kernel/vdso-ilp32/vdso-ilp32.lds.S | 84 +++++
arch/arm64/kernel/vdso.c | 62 +++-
arch/arm64/kernel/vdso/gettimeofday.c | 6 +
arch/arm64/kernel/vdso/vdso.S | 6 +-
arch/arm64/mm/mmap.c | 2 +-
arch/c6x/include/uapi/asm/unistd.h | 1 +
arch/h8300/Kconfig | 1 +
arch/h8300/include/uapi/asm/unistd.h | 1 +
arch/hexagon/Kconfig | 1 +
arch/hexagon/include/uapi/asm/unistd.h | 1 +
arch/m68k/Kconfig | 1 +
arch/microblaze/Kconfig | 1 +
arch/mips/Kconfig | 1 +
arch/nds32/Kconfig | 1 +
arch/nds32/include/uapi/asm/unistd.h | 1 +
arch/nios2/Kconfig | 1 +
arch/nios2/include/uapi/asm/unistd.h | 1 +
arch/openrisc/Kconfig | 1 +
arch/openrisc/include/uapi/asm/unistd.h | 1 +
arch/parisc/Kconfig | 1 +
arch/powerpc/Kconfig | 1 +
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/unistd.h | 1 +
arch/sh/Kconfig | 1 +
arch/sparc/Kconfig | 1 +
arch/unicore32/Kconfig | 1 +
arch/unicore32/include/uapi/asm/unistd.h | 1 +
arch/x86/Kconfig | 1 +
arch/x86/um/Kconfig | 1 +
arch/xtensa/Kconfig | 1 +
drivers/clocksource/arm_arch_timer.c | 4 +-
include/linux/fcntl.h | 2 +-
include/linux/sched.h | 1 +
include/linux/thread_bits.h | 87 +++++
include/linux/thread_info.h | 75 +---
include/uapi/asm-generic/unistd.h | 10 +-
kernel/ptrace.c | 47 ++-
scripts/checksyscalls.sh | 5 +
98 files changed, 1679 insertions(+), 726 deletions(-)
create mode 100644 Documentation/arm64/ilp32.txt
create mode 100644 arch/arm64/include/asm/is_compat.h
create mode 100644 arch/arm64/include/asm/signal32_common.h
create mode 100644 arch/arm64/include/asm/signal_common.h
create mode 100644 arch/arm64/include/asm/signal_ilp32.h
create mode 100644 arch/arm64/kernel/binfmt_elf32.c
create mode 100644 arch/arm64/kernel/binfmt_ilp32.c
create mode 100644 arch/arm64/kernel/signal32_common.c
create mode 100644 arch/arm64/kernel/signal_ilp32.c
create mode 100644 arch/arm64/kernel/sys32_common.c
create mode 100644 arch/arm64/kernel/sys_ilp32.c
create mode 100644 arch/arm64/kernel/vdso-ilp32/.gitignore
create mode 100644 arch/arm64/kernel/vdso-ilp32/Makefile
create mode 100644 arch/arm64/kernel/vdso-ilp32/vdso-ilp32.S
create mode 100644 arch/arm64/kernel/vdso-ilp32/vdso-ilp32.lds.S
create mode 100644 include/linux/thread_bits.h
--
2.25.1
Aditya Pakki (2):
drm/radeon: Fix reference count leaks caused by pm_runtime_get_sync
drm/nouveau: fix multiple instances of reference count leaks
Alim Akhtar (1):
arm64: dts: exynos: Fix silent hang after boot on Espresso
Andrii Nakryiko (1):
tools, build: Propagate build failures from tools/build/Makefile.build
Aneesh Kumar K.V (1):
powerpc/book3s64/pkeys: Use PVR check instead of cpu feature
Arnd Bergmann (1):
leds: lm355x: avoid enum conversion warning
Bartosz Golaszewski (1):
irqchip/irq-mtk-sysirq: Replace spinlock with raw_spinlock
Bjorn Helgaas (1):
PCI: Fix pci_cfg_wait queue locking problem
Bolarinwa Olayemi Saheed (1):
iwlegacy: Check the return value of pcie_capability_read_*()
Brant Merryman (2):
USB: serial: cp210x: re-enable auto-RTS on open
USB: serial: cp210x: enable usb generic throttle/unthrottle
Chris Packham (1):
net: dsa: mv88e6xxx: MV88E6097 does not support jumbo configuration
Christian Eggers (1):
spi: spidev: Align buffers for DMA
Christian König (1):
drm/radeon: disable AGP by default
Christophe JAILLET (5):
video: pxafb: Fix the function used to balance a
'dma_alloc_coherent()' call
scsi: cumana_2: Fix different dev_id between request_irq() and
free_irq()
scsi: powertec: Fix different dev_id between request_irq() and
free_irq()
scsi: eesox: Fix different dev_id between request_irq() and free_irq()
net: spider_net: Fix the size used in a 'dma_free_coherent()' call
Chuck Lever (1):
svcrdma: Fix page leak in svc_rdma_recv_read_chunk()
Chuhong Yuan (2):
media: omap3isp: Add missed v4l2_ctrl_handler_free() for
preview_init_entities()
media: exynos4-is: Add missed check for pinctrl_lookup_state()
Chunfeng Yun (1):
usb: mtu3: clear dual mode of u3port when disable device
Colin Ian King (3):
drm/arm: fix unintentional integer overflow on left shift
drm/radeon: fix array out-of-bounds read and write issues
staging: rtl8192u: fix a dubious looking mask before a shift
Coly Li (1):
bcache: fix super block seq numbers comparision in
register_cache_set()
Cristian Marussi (1):
firmware: arm_scmi: Fix SCMI genpd domain probing
Dan Carpenter (5):
media: firewire: Using uninitialized values in node_probe()
mwifiex: Prevent memory corruption handling keys
thermal: ti-soc-thermal: Fix reversed condition in
ti_thermal_expose_sensor()
Smack: fix another vsscanf out of bounds
Smack: prevent underflow in smk_set_cipso()
Danesh Petigara (1):
usb: bdc: Halt controller on suspend
Darrick J. Wong (2):
xfs: don't eat an EIO/ENOSPC writeback error when scrubbing data fork
xfs: fix reflink quota reservation accounting error
Dave Airlie (1):
drm/ttm/nouveau: don't call tt destroy callback on alloc failure.
Dejin Zheng (2):
video: fbdev: sm712fb: fix an issue about iounmap for a wrong address
console: newport_con: fix an issue about leak related system resources
Dilip Kota (1):
spi: lantiq: fix: Rx overflow error in full duplex mode
Dmitry Osipenko (1):
gpu: host1x: debug: Fix multiple channels emitting messages
simultaneously
Drew Fustini (1):
pinctrl-single: fix pcs_parse_pinconf() return value
Emil Velikov (1):
drm/mipi: use dcs write for mipi_dsi_dcs_set_tear_scanline
Eric Biggers (3):
fs/minix: check return value of sb_getblk()
fs/minix: don't allow getting deleted inodes
fs/minix: reject too-large maximum file size
Eric Dumazet (1):
x86/fsgsbase/64: Fix NULL deref in 86_fsgsbase_read_task
Erik Kaneda (1):
ACPICA: Do not increment operation_region reference counts for field
units
Evan Green (1):
ath10k: Acquire tx_lock in tx error paths
Evgeny Novikov (2):
video: fbdev: neofb: fix memory leak in neo_scan_monitor()
usb: gadget: net2280: fix memory leak on probe error handling paths
Finn Thain (3):
m68k: mac: Don't send IOP message until channel is idle
m68k: mac: Fix IOP status/control register writes
scsi: mesh: Fix panic after host or bus reset
Florinel Iordache (5):
fsl/fman: use 32-bit unsigned integer
fsl/fman: fix dereference null return value
fsl/fman: fix unreachable code
fsl/fman: check dereferencing null pointer
fsl/fman: fix eth hash table allocation
Gilad Ben-Yossef (1):
crypto: ccree - fix resource leak on error path
Grant Likely (1):
HID: input: Fix devices that return multiple bytes in battery report
Greg Kroah-Hartman (1):
Linux 4.19.140
Hanjun Guo (1):
PCI: Release IVRS table in AMD ACS quirk
Harish (1):
selftests/powerpc: Fix CPU affinity for child process
Hector Martin (3):
ALSA: usb-audio: fix overeager device match for MacroSilicon MS2109
ALSA: usb-audio: work around streaming quirk for MacroSilicon MS2109
ALSA: usb-audio: add quirk for Pioneer DDJ-RB
Heiko Stuebner (3):
arm64: dts: rockchip: fix rk3368-lion gmac reset gpio
arm64: dts: rockchip: fix rk3399-puma vcc5v0-host gpio
arm64: dts: rockchip: fix rk3399-puma gmac reset gpio
Hui Wang (1):
ALSA: hda - fix the micmute led status for Lenovo ThinkCentre AIO
Ira Weiny (1):
net/tls: Fix kmap usage
Ivan Kokshaysky (1):
cpufreq: dt: fix oops on armada37xx
Jack Xiao (1):
drm/amdgpu: avoid dereferencing a NULL pointer
Jakub Kicinski (1):
bitfield.h: don't compile-time validate _val in FIELD_FIT
Jerome Brunet (1):
ASoC: meson: axg-tdm-interface: fix link fmt setup
Jian Cai (1):
crypto: aesni - add compatibility with IAS
Jim Cromie (1):
dyndbg: fix a BUG_ON in ddebug_describe_flags
Johan Hovold (1):
USB: serial: iuu_phoenix: fix led-activity helpers
John Allen (1):
crypto: ccp - Fix use of merged scatterlists
John David Anglin (1):
parisc: Implement __smp_store_release and __smp_load_acquire barriers
John Garry (1):
scsi: scsi_debug: Add check for sdebug_max_queue during module init
John Ogness (1):
af_packet: TPACKET_V3: fix fill status rwlock imbalance
Jon Derrick (1):
irqdomain/treewide: Free firmware node after domain removal
Julian Anastasov (1):
ipvs: allow connection reuse for unconfirmed conntrack
Julian Wiedmann (1):
s390/qeth: don't process empty bridge port events
Kai-Heng Feng (1):
leds: core: Flush scheduled work for system suspend
Kars Mulder (1):
usb: core: fix quirks_param_set() writing to a const pointer
Kishon Vijay Abraham I (1):
PCI: cadence: Fix updating Vendor ID and Subsystem Vendor ID register
Laurent Pinchart (1):
drm: panel: simple: Fix bpc for LG LB070WV8 panel
Li Heng (1):
RDMA/core: Fix return error value in _ib_modify_qp() to negative
Lihong Kou (1):
Bluetooth: add a mutex lock to avoid UAF in do_enale_set
Linus Walleij (2):
net: dsa: rtl8366: Fix VLAN semantics
net: dsa: rtl8366: Fix VLAN set-up
Lu Wei (2):
platform/x86: intel-hid: Fix return value check in check_acpi_dev()
platform/x86: intel-vbtn: Fix return value check in check_acpi_dev()
Lubomir Rintel (1):
drm/etnaviv: Fix error path on failure to enable bus clk
Luis Chamberlain (1):
loop: be paranoid on exit and prevent new additions / removals
Marco Felsch (1):
drm/imx: tve: fix regulator_disable error path
Marek Szyprowski (2):
phy: exynos5-usbdrd: Calibrating makes sense only for USB2.0 PHY
usb: dwc2: Fix error path in gadget registration
Matteo Croce (1):
pstore: Fix linking when crypto API disabled
Maulik Shah (1):
soc: qcom: rpmh-rsc: Set suppress_bind_attrs flag
Miaohe Lin (1):
net: Set fput_needed iff FDPUT_FPUT is set
Michael Ellerman (1):
powerpc/boot: Fix CONFIG_PPC_MPC52XX references
Michael Tretter (1):
drm/debugfs: fix plain echo to connector "force" attribute
Mikhail Malygin (1):
RDMA/rxe: Prevent access to wr->next ptr afrer wr is posted to send
queue
Mikulas Patocka (2):
crypto: hisilicon - don't sleep of CRYPTO_TFM_REQ_MAY_SLEEP was not
specified
crypto: cpt - don't sleep of CRYPTO_TFM_REQ_MAY_SLEEP was not
specified
Milton Miller (1):
powerpc/vdso: Fix vdso cpu truncation
Mirko Dietrich (1):
ALSA: usb-audio: Creative USB X-Fi Pro SB1095 volume knob support
Nathan Huckleberry (1):
ARM: 8992/1: Fix unwind_frame for clang-built kernels
Navid Emamdoost (1):
drm/etnaviv: fix ref count leak via pm_runtime_get_sync
Nick Desaulniers (1):
tracepoint: Mark __tracepoint_string's __used
Nicolas Boichat (2):
Bluetooth: hci_h5: Set HCI_UART_RESET_ON_INIT to correct flags
Bluetooth: hci_serdev: Only unregister device if it was registered
Niklas Söderlund (2):
ARM: dts: gose: Fix ports node name for adv7180
ARM: dts: gose: Fix ports node name for adv7612
Oleksandr Andrushchenko (1):
xen/gntdev: Fix dmabuf import with non-zero sgt offset
Paul E. McKenney (2):
fs/btrfs: Add cond_resched() for try_release_extent_mapping() stalls
mm/mmap.c: Add cond_resched() for exit_mmap() CPU stalls
Pavel Machek (1):
ocfs2: fix unbalanced locking
Peng Liu (1):
sched: correct SD_flags returned by tl->sd_flags()
Pierre-Louis Bossart (1):
ASoC: Intel: bxt_rt298: add missing .owner field
Prasanna Kerekoppa (1):
brcmfmac: To fix Bss Info flag definition Bug
Qingyu Li (1):
net/nfc/rawsock.c: add CAP_NET_RAW check.
Qiushi Wu (2):
EDAC: Fix reference count leaks
agp/intel: Fix a memory leak on module initialisation failure
Ricardo Cañuelo (1):
arm64: dts: hisilicon: hikey: fixes to comply with adi,adv7533 DT
binding
Rob Clark (1):
drm/msm: ratelimit crtc event overflow error
Roger Pau Monne (2):
xen/balloon: fix accounting in alloc_xenballooned_pages error path
xen/balloon: make the balloon wait interruptible
Romain Naour (1):
include/asm-generic/vmlinux.lds.h: align ro_after_init
Sai Prakash Ranjan (1):
coresight: tmc: Fix TMC mode read in tmc_read_unprepare_etb()
Sandipan Das (1):
selftests/powerpc: Fix online CPU selection
Sasi Kumar (1):
bdc: Fix bug causing crash after multiple disconnects
Sedat Dilek (1):
crypto: aesni - Fix build with LLVM_IAS=1
Sivaprakash Murugesan (1):
mtd: rawnand: qcom: avoid write to unavailable register
Stephan Gerhold (1):
arm64: dts: qcom: msm8916: Replace invalid bias-pull-none property
Sudeep Holla (1):
clk: scmi: Fix min and max rate when registering clocks with discrete
rates
Sven Schnelle (1):
parisc: mask out enable and reserved bits from sba imask
Tianjia Zhang (2):
net: ethernet: aquantia: Fix wrong return value
liquidio: Fix wrong return value in cn23xx_get_pf_num()
Tim Froidcoeur (2):
net: refactor bind_bucket fastreuse into helper
net: initialize fastreuse on inet_inherit_port
Tom Rix (3):
drm/bridge: sil_sii8620: initialize return of sii8620_readb
power: supply: check if calc_soc succeeded in pm860x_init_battery
crypto: qat - fix double free in qat_uclo_create_batch_init_list
Tomasz Duszynski (1):
iio: improve IIO_CONCENTRATION channel type description
Tomi Valkeinen (1):
drm/tilcdc: fix leak & null ref in panel_connector_get_modes
Trond Myklebust (2):
NFS: Don't move layouts to plh_return_segs list while in use
NFS: Don't return layout segments that are in use
Vincent Guittot (1):
sched/fair: Fix NOHZ next idle balance
Wang Hai (3):
cxl: Fix kobject memleak
wl1251: fix always return 0 error
dlm: Fix kobject memleak
Wright Feng (2):
brcmfmac: keep SDIO watchdog running when console_interval is non-zero
brcmfmac: set state of hanger slot to FREE when flushing PSQ
Xie He (1):
drivers/net/wan/lapbether: Added needed_headroom and a skb->len check
Xiongfeng Wang (1):
PCI/ASPM: Add missing newline in sysfs 'policy'
Yang Yingliang (1):
Revert "pci: lock the pci_cfg_wait queue for the consistency of data"
Yu Kuai (2):
ARM: socfpga: PM: add missing put_device() call in
socfpga_setup_ocram_self_refresh()
MIPS: OCTEON: add missing put_device() call in
dwc3_octeon_device_init()
Yuval Basson (1):
RDMA/qedr: SRQ's bug fixes
Zhao Heming (1):
md-cluster: fix wild pointer of unlock_all_bitmaps()
Zheng Bin (1):
9p: Fix memory leak in v9fs_mount
Zhenzhong Duan (1):
x86/mce/inject: Fix a wrong assignment of i_mce.status
Zhu Yanjun (1):
RDMA/rxe: Skip dgid check in loopback mode
yu kuai (1):
ARM: at91: pm: add missing put_device() call in at91_pm_sram_init()
Documentation/ABI/testing/sysfs-bus-iio | 3 +-
Makefile | 2 +-
arch/arm/boot/dts/r8a7793-gose.dts | 4 +-
arch/arm/kernel/stacktrace.c | 24 +++++
arch/arm/mach-at91/pm.c | 11 ++-
arch/arm/mach-socfpga/pm.c | 8 +-
.../boot/dts/exynos/exynos7-espresso.dts | 1 +
.../boot/dts/hisilicon/hi3660-hikey960.dts | 11 +++
.../arm64/boot/dts/hisilicon/hi6220-hikey.dts | 2 +-
arch/arm64/boot/dts/qcom/msm8916-pins.dtsi | 10 +-
arch/arm64/boot/dts/rockchip/rk3368-lion.dtsi | 2 +-
arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi | 4 +-
arch/m68k/mac/iop.c | 21 ++--
arch/mips/cavium-octeon/octeon-usb.c | 5 +-
arch/parisc/include/asm/barrier.h | 61 ++++++++++++
arch/powerpc/boot/Makefile | 2 +-
arch/powerpc/boot/serial.c | 2 +-
arch/powerpc/kernel/vdso.c | 2 +-
arch/powerpc/mm/pkeys.c | 16 +--
arch/x86/crypto/aes_ctrby8_avx-x86_64.S | 14 +--
arch/x86/crypto/aesni-intel_asm.S | 6 +-
arch/x86/kernel/apic/io_apic.c | 5 +
arch/x86/kernel/cpu/mce/inject.c | 2 +-
arch/x86/kernel/ptrace.c | 2 +-
drivers/acpi/acpica/exprep.c | 4 -
drivers/acpi/acpica/utdelete.c | 6 +-
drivers/block/loop.c | 4 +
drivers/bluetooth/hci_h5.c | 2 +-
drivers/bluetooth/hci_serdev.c | 3 +-
drivers/char/agp/intel-gtt.c | 4 +-
drivers/clk/clk-scmi.c | 22 ++++-
drivers/cpufreq/armada-37xx-cpufreq.c | 1 +
drivers/crypto/cavium/cpt/cptvf_algs.c | 1 +
drivers/crypto/cavium/cpt/cptvf_reqmanager.c | 12 +--
drivers/crypto/cavium/cpt/request_manager.h | 2 +
drivers/crypto/ccp/ccp-dev.h | 1 +
drivers/crypto/ccp/ccp-ops.c | 37 ++++---
drivers/crypto/ccree/cc_cipher.c | 30 +++---
drivers/crypto/hisilicon/sec/sec_algs.c | 34 ++++---
drivers/crypto/qat/qat_common/qat_uclo.c | 9 +-
drivers/edac/edac_device_sysfs.c | 1 +
drivers/edac/edac_pci_sysfs.c | 2 +-
drivers/firmware/arm_scmi/scmi_pm_domain.c | 12 +--
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 19 ++--
drivers/gpu/drm/arm/malidp_planes.c | 2 +-
drivers/gpu/drm/bridge/sil-sii8620.c | 2 +-
drivers/gpu/drm/drm_debugfs.c | 8 +-
drivers/gpu/drm/drm_mipi_dsi.c | 6 +-
drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 19 +++-
drivers/gpu/drm/imx/imx-tve.c | 20 ++--
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_drm.c | 8 +-
drivers/gpu/drm/nouveau/nouveau_gem.c | 4 +-
drivers/gpu/drm/nouveau/nouveau_sgdma.c | 9 +-
drivers/gpu/drm/panel/panel-simple.c | 2 +-
drivers/gpu/drm/radeon/ci_dpm.c | 2 +-
drivers/gpu/drm/radeon/radeon_display.c | 4 +-
drivers/gpu/drm/radeon/radeon_drv.c | 9 +-
drivers/gpu/drm/radeon/radeon_kms.c | 4 +-
drivers/gpu/drm/tilcdc/tilcdc_panel.c | 6 +-
drivers/gpu/drm/ttm/ttm_tt.c | 3 -
drivers/gpu/host1x/debug.c | 4 +
drivers/hid/hid-input.c | 6 +-
.../hwtracing/coresight/coresight-tmc-etf.c | 13 ++-
drivers/infiniband/core/verbs.c | 2 +-
drivers/infiniband/hw/qedr/qedr.h | 4 +-
drivers/infiniband/hw/qedr/verbs.c | 22 ++---
drivers/infiniband/sw/rxe/rxe_recv.c | 6 +-
drivers/infiniband/sw/rxe/rxe_verbs.c | 5 +-
drivers/iommu/intel_irq_remapping.c | 8 ++
drivers/irqchip/irq-mtk-sysirq.c | 8 +-
drivers/leds/led-class.c | 1 +
drivers/leds/leds-lm355x.c | 7 +-
drivers/md/bcache/super.c | 9 +-
drivers/md/md-cluster.c | 1 +
drivers/media/firewire/firedtv-fw.c | 2 +
drivers/media/platform/exynos4-is/media-dev.c | 3 +
drivers/media/platform/omap3isp/isppreview.c | 4 +-
drivers/misc/cxl/sysfs.c | 2 +-
drivers/mtd/nand/raw/qcom_nandc.c | 7 +-
drivers/net/dsa/mv88e6xxx/chip.c | 1 -
drivers/net/dsa/rtl8366.c | 35 ++++---
.../aquantia/atlantic/hw_atl/hw_atl_a0.c | 2 +-
.../cavium/liquidio/cn23xx_pf_device.c | 2 +-
drivers/net/ethernet/freescale/fman/fman.c | 3 +-
.../net/ethernet/freescale/fman/fman_dtsec.c | 4 +-
.../net/ethernet/freescale/fman/fman_mac.h | 2 +-
.../net/ethernet/freescale/fman/fman_memac.c | 3 +-
.../net/ethernet/freescale/fman/fman_port.c | 9 +-
.../net/ethernet/freescale/fman/fman_tgec.c | 2 +-
drivers/net/ethernet/toshiba/spider_net.c | 4 +-
drivers/net/wan/lapbether.c | 10 +-
drivers/net/wireless/ath/ath10k/htt_tx.c | 4 +
.../broadcom/brcm80211/brcmfmac/fwil_types.h | 2 +-
.../broadcom/brcm80211/brcmfmac/fwsignal.c | 4 +
.../broadcom/brcm80211/brcmfmac/sdio.c | 6 +-
drivers/net/wireless/intel/iwlegacy/common.c | 4 +-
.../wireless/marvell/mwifiex/sta_cmdresp.c | 22 +++--
drivers/net/wireless/ti/wl1251/event.c | 2 +-
drivers/parisc/sba_iommu.c | 2 +-
drivers/pci/access.c | 8 +-
drivers/pci/controller/pcie-cadence-host.c | 9 +-
drivers/pci/controller/vmd.c | 3 +
drivers/pci/pcie/aspm.c | 1 +
drivers/pci/quirks.c | 2 +
drivers/phy/samsung/phy-exynos5-usbdrd.c | 4 +-
drivers/pinctrl/pinctrl-single.c | 11 ++-
drivers/platform/x86/intel-hid.c | 2 +-
drivers/platform/x86/intel-vbtn.c | 2 +-
drivers/power/supply/88pm860x_battery.c | 6 +-
drivers/s390/net/qeth_l2_main.c | 4 +
drivers/scsi/arm/cumana_2.c | 2 +-
drivers/scsi/arm/eesox.c | 2 +-
drivers/scsi/arm/powertec.c | 2 +-
drivers/scsi/mesh.c | 8 +-
drivers/scsi/scsi_debug.c | 6 ++
drivers/soc/qcom/rpmh-rsc.c | 1 +
drivers/spi/spi-lantiq-ssc.c | 10 ++
drivers/spi/spidev.c | 21 ++--
drivers/staging/rtl8192u/r8192U_core.c | 2 +-
.../ti-soc-thermal/ti-thermal-common.c | 2 +-
drivers/usb/core/quirks.c | 16 ++-
drivers/usb/dwc2/platform.c | 4 +-
drivers/usb/gadget/udc/bdc/bdc_core.c | 13 ++-
drivers/usb/gadget/udc/bdc/bdc_ep.c | 16 +--
drivers/usb/gadget/udc/net2280.c | 4 +-
drivers/usb/mtu3/mtu3_core.c | 6 +-
drivers/usb/serial/cp210x.c | 19 ++++
drivers/usb/serial/iuu_phoenix.c | 14 +--
drivers/video/console/newport_con.c | 12 ++-
drivers/video/fbdev/neofb.c | 1 +
drivers/video/fbdev/pxafb.c | 4 +-
drivers/video/fbdev/sm712fb.c | 2 +
drivers/xen/balloon.c | 12 ++-
drivers/xen/gntdev-dmabuf.c | 8 ++
fs/9p/v9fs.c | 5 +-
fs/btrfs/extent_io.c | 2 +
fs/dlm/lockspace.c | 6 +-
fs/minix/inode.c | 36 ++++++-
fs/minix/itree_common.c | 8 +-
fs/nfs/pnfs.c | 46 +++------
fs/ocfs2/dlmglue.c | 8 +-
fs/pstore/platform.c | 5 +-
fs/xfs/scrub/bmap.c | 22 ++++-
fs/xfs/xfs_reflink.c | 21 ++--
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/bitfield.h | 2 +-
include/linux/tracepoint.h | 2 +-
include/net/inet_connection_sock.h | 4 +
include/net/ip_vs.h | 10 +-
kernel/sched/fair.c | 23 +++--
kernel/sched/topology.c | 2 +-
lib/dynamic_debug.c | 23 +++--
mm/mmap.c | 1 +
net/bluetooth/6lowpan.c | 5 +
net/ipv4/inet_connection_sock.c | 97 ++++++++++---------
net/ipv4/inet_hashtables.c | 1 +
net/netfilter/ipvs/ip_vs_core.c | 12 ++-
net/nfc/rawsock.c | 7 +-
net/packet/af_packet.c | 9 +-
net/socket.c | 2 +-
net/sunrpc/xprtrdma/svc_rdma_rw.c | 28 ++++--
net/tls/tls_device.c | 3 +-
security/smack/smackfs.c | 6 +-
sound/pci/hda/patch_realtek.c | 1 +
sound/soc/intel/boards/bxt_rt298.c | 2 +
sound/soc/meson/axg-tdm-interface.c | 26 +++--
sound/usb/card.h | 1 +
sound/usb/mixer_quirks.c | 1 +
sound/usb/pcm.c | 6 ++
sound/usb/quirks-table.h | 64 +++++++++++-
sound/usb/quirks.c | 3 +
sound/usb/stream.c | 1 +
tools/build/Build.include | 3 +-
.../powerpc/benchmarks/context_switch.c | 21 +++-
tools/testing/selftests/powerpc/utils.c | 37 ++++---
176 files changed, 1085 insertions(+), 490 deletions(-)
--
2.25.1
Hi,
On 2020/8/24 21:23, Sugar wrote:
> Hi, everyone.
>
> I have been working on porting openEuler to a Cortex-A7 based board.
> I hit some errors when building the arm32 openEuler 4.19.90 kernel in an *ubuntu-base-16.04-core-armhf.tar.gz* chroot environment emulated by qemu-user-static.
> The errors seemed endless even though I commented out some CONFIG options and fixed several of them.
> Here are some of my questions:
> - Did openEuler test the arm32 kernel build before the LTS-20.03 release? If so, that may help me out.
No. LTS-20.03 does not have an arm32 release and has not been tested on the arm32 platform.
> - Will openEuler provide arm32 support in a future version? If so, is there a planned date?
We do not have a plan to support ARM32 at this point; it would be great if you could help make it happen.
IMHO, the patches merged into the openEuler kernel do not break ARM32 support,
so selecting a proper config should be enough to get a build working on an arm32 platform.
>
> Looking forward to your reply.
>
> _______________________________________________
> Dev mailing list -- dev(a)openeuler.org
> To unsubscribe send an email to dev-leave(a)openeuler.org
>
From: Jun Huang <huangjun63(a)huawei.com>
The acpi_execute_simple_method() function takes a mutex and may sleep, so it must not be called from timer (softirq) context. Replace the timer with delayed work, whose handler runs in process context.
Signed-off-by: Jun Huang <huangjun63(a)huawei.com>
---
drivers/acpi/evged.c | 60 ++++++++++++++++++++------------------------
1 file changed, 27 insertions(+), 33 deletions(-)
diff --git a/drivers/acpi/evged.c b/drivers/acpi/evged.c
index 7354d8d..5bf65b1 100644
--- a/drivers/acpi/evged.c
+++ b/drivers/acpi/evged.c
@@ -34,7 +34,7 @@
* Method (_EVT, 1) {
* if (Lequal(123, Arg0))
* {
- * }
+ * }#
* }
* }
*
@@ -46,20 +46,23 @@
#include <linux/list.h>
#include <linux/platform_device.h>
#include <linux/acpi.h>
-#include <linux/timer.h>
#include <linux/jiffies.h>
+#include <linux/workqueue.h>
#define MODULE_NAME "acpi-ged"
+static void gpp_enable_pwr_btn(struct work_struct *private_);
+
+static DECLARE_DELAYED_WORK(gpp_work, gpp_enable_pwr_btn);
+
struct acpi_ged_handle {
- struct timer_list timer; /* For 4s anti-shake of power button */
acpi_handle gpp_handle; /* ACPI Handle: enable shutdown */
acpi_handle gpo_handle; /* ACPI Handle: set sleep flag */
};
struct acpi_ged_device {
struct device *dev;
- struct acpi_ged_handle *wakeup_handle;
+ struct acpi_ged_handle *ged_handle;
struct list_head event_list;
};
@@ -71,6 +74,8 @@ struct acpi_ged_event {
acpi_handle handle;
};
+static struct acpi_ged_handle *wakeup_handle;
+
static irqreturn_t acpi_ged_irq_handler(int irq, void *data)
{
struct acpi_ged_event *event = data;
@@ -142,7 +147,6 @@ static acpi_status acpi_ged_request_interrupt(struct acpi_resource *ares,
#ifdef CONFIG_PM_SLEEP
static void init_ged_handle(struct acpi_ged_device *geddev) {
- struct acpi_ged_handle *wakeup_handle;
acpi_handle gpo_handle = NULL;
acpi_handle gpp_handle = NULL;
acpi_status acpi_ret;
@@ -151,10 +155,7 @@ static void init_ged_handle(struct acpi_ged_device *geddev) {
if (!wakeup_handle)
return;
- geddev->wakeup_handle = wakeup_handle;
-
- /* Initialize wakeup_handle, prepare for ged suspend and resume */
- timer_setup(&wakeup_handle->timer, NULL, 0);
+ geddev->ged_handle = wakeup_handle;
acpi_ret = acpi_get_handle(ACPI_HANDLE(geddev->dev), "_GPO", &gpo_handle);
if (ACPI_FAILURE(acpi_ret))
@@ -204,8 +205,6 @@ static void ged_shutdown(struct platform_device *pdev)
dev_dbg(geddev->dev, "GED releasing GSI %u @ IRQ %u\n",
event->gsi, event->irq);
}
- if (geddev->wakeup_handle)
- del_timer(&geddev->wakeup_handle->timer);
}
static int ged_remove(struct platform_device *pdev)
@@ -220,23 +219,10 @@ static const struct acpi_device_id ged_acpi_ids[] = {
};
#ifdef CONFIG_PM_SLEEP
-static void ged_timer_callback(struct timer_list *t)
-{
- struct acpi_ged_handle *wakeup_handle = from_timer(wakeup_handle, t, timer);
- acpi_status acpi_ret;
-
- /* _GPP method enable power button */
- if (wakeup_handle && wakeup_handle->gpp_handle) {
- acpi_ret = acpi_execute_simple_method(wakeup_handle->gpp_handle, NULL, ACPI_IRQ_MODEL_GIC);
- if (ACPI_FAILURE(acpi_ret))
- pr_warn("_GPP method execution failed\n");
- }
-}
-
static int ged_suspend(struct device *dev)
{
struct acpi_ged_device *geddev = dev_get_drvdata(dev);
- struct acpi_ged_handle *wakeup_handle = geddev->wakeup_handle;
+ //struct acpi_ged_handle *wakeup_handle = geddev->wakeup_handle;
struct acpi_ged_event *event, *next;
acpi_status acpi_ret;
@@ -255,18 +241,26 @@ static int ged_suspend(struct device *dev)
return 0;
}
+static void gpp_enable_pwr_btn(struct work_struct *private_)
+{
+ acpi_status acpi_ret;
+
+ /* _GPP method enable power button */
+ if (wakeup_handle && wakeup_handle->gpp_handle) {
+ acpi_ret = acpi_execute_simple_method(wakeup_handle->gpp_handle, NULL, ACPI_IRQ_MODEL_GIC);
+ if (ACPI_FAILURE(acpi_ret))
+ pr_warn("_GPP method execution failed\n");
+ }
+
+}
+
static int ged_resume(struct device *dev)
{
struct acpi_ged_device *geddev = dev_get_drvdata(dev);
- struct acpi_ged_handle *wakeup_handle = geddev->wakeup_handle;
struct acpi_ged_event *event, *next;
-
- /* use timer to complete 4s anti-shake */
- if (wakeup_handle && wakeup_handle->gpp_handle) {
- wakeup_handle->timer.expires = jiffies + (4 * HZ);
- wakeup_handle->timer.function = ged_timer_callback;
- add_timer(&wakeup_handle->timer);
- }
+
+ /* use delayed_work to complete 4s anti-shake */
+ schedule_delayed_work(&gpp_work, 4 * HZ);
list_for_each_entry_safe(event, next, &geddev->event_list, node)
disable_irq_wake(event->irq);
--
2.23.0.windows.1
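A side note on the approach above: a timer callback runs in softirq context and must not sleep, while a (delayed) work item runs in process context on a kworker thread and may take mutexes. A minimal standalone sketch of the pattern, with illustrative names rather than the driver's own:

#include <linux/jiffies.h>
#include <linux/workqueue.h>

static void example_work_fn(struct work_struct *work)
{
        /* Runs in process context on a kworker thread, so taking a
         * mutex (as acpi_execute_simple_method() ultimately does)
         * is safe here. */
}

static DECLARE_DELAYED_WORK(example_work, example_work_fn);

static void example_resume(void)
{
        /* Fire the handler about 4 seconds from now (the anti-shake
         * delay used by this patch). */
        schedule_delayed_work(&example_work, 4 * HZ);
}

One trade-off visible in the diff: gpp_work and wakeup_handle become file-scope singletons, which implicitly assumes a single GED device, and a cancel_delayed_work_sync() on the shutdown/remove path would be needed to keep the work from firing after the device is gone.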
23 Aug '20
hulk inclusion
category: doc
bugzilla: NA
CVE: NA
Link: https://gitee.com/openeuler/kernel/issues/I1SF7H
-------------------------------------------------
Provide a simple script to help generate mainline patch info. It is
useful when backporting many patches from the mainline or stable trees.
usage:
1) backport a patch from mainline
./scripts/pick -c <commit> --issue I1SF7H
2) insert patch info into an existing patch file
./scripts/pick -c <commit> --issue I1SF7H -i test.patch
3) backport a patch from stable branch
./scripts/pick -c <commit> -s --issue I1SF7H
4) insert patch info into an out-of-tree patch
./scripts/pick --from driver --issue I1SF7H -i 0001-driver.patch
./scripts/pick --from driver --cate feature --feat "xxx feature" --issue I1SF7H -i 0001-driver.patch
5) if the commit information is not provided as an argument,
try to identify it from the input file.
The following formats can be recognized:
(cherry picked from commit 587a87b9d7e94927edcdea018565bc1939381eb1)
commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream.
./scripts/pick --issue I1SF7H -i 0001-input.patch
Signed-off-by: Xie XiuQi <xiexiuqi(a)huawei.com>
---
scripts/pick | 205 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 205 insertions(+)
create mode 100755 scripts/pick
diff --git a/scripts/pick b/scripts/pick
new file mode 100755
index 000000000000..1166a300f508
--- /dev/null
+++ b/scripts/pick
@@ -0,0 +1,205 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# pick - helpers for pick a patch from upstream
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# (C) Copyright 2015-2020 Huawei Technologies Co., Ltd
+#
+# Authors: Xie XiuQi <xiexiuqi(a)huawei.com>
+
+#
+# usage:
+# 1) backport a patch from mainline
+# ./scripts/pick -c <commit> --issue I1SF7H
+#
+# 2) insert patch info into an existing patch file
+# ./scripts/pick -c <commit> --issue I1SF7H -i test.patch
+#
+# 3) backport a patch from stable branch
+# ./scripts/pick -c <commit> -s --issue I1SF7H
+#
+# 4) insert patch info into an out-of-tree patch
+# ./scripts/pick --from driver --issue I1SF7H -i 0001-driver.patch
+# ./scripts/pick --from driver --cate feature --feat "xxx feature" --issue I1SF7H -i 0001-driver.patch
+#
+# 5) if the commit information is not provided as an argument,
+# try to identify it from the input file.
+# The following formats can be recognized:
+# (cherry picked from commit 587a87b9d7e94927edcdea018565bc1939381eb1)
+# commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream.
+#
+# ./scripts/pick --issue I1SF7H -i 0001-input.patch
+#
+
+# set -x
+
+COMMIT=
+CVE=NA
+BUG=NA
+ISSUE=NA
+WITH_ISSUE=0
+ISSUE_PREFIX=https://gitee.com/openeuler/kernel/issues
+OUTPUT_PREFIX=
+OUTPUT=
+FROM=mainline # default
+CATEGORY=bugfix # default
+FEATURE=""
+SUBJECT_PREFIX=
+WITH_SOB=
+INSERT_LINE=0
+
+usage()
+{
+ echo "usage: $(basename $0) [options]" >&2
+ echo " -h|--help show this help message" >&2
+ echo " -s|--stable from stable (default mainline)" >&2
+ echo " -f|--from where does the patch from (default mainline)" >&2
+ echo " -C|--cve ID of CVE (without CVE prefix)" >&2
+ echo " --cate category(feature/bugfix/...)" >&2
+ echo " --feat feature" >&2
+ echo " --subject-prefix " >&2
+ echo " -b|--bugzilla ID of BUGZILLA" >&2
+ echo " --issue issue id of gitee" >&2
+ echo " -o|--output output filename, use input filename if no specific" >&2
+ echo " -i|--input input patch filename" >&2
+ echo " --output-prefix output filename" >&2
+ echo " --with-sob with signed-off-by" >&2
+ echo " -c|--commit <commit id>" >&2
+}
+
+options=$(getopt -o hsf:C:b:i:o:u:c: -l "help,stable,from:,cve:,cate:,feat:,subject-prefix:,with-sob,bugzilla:,issue:,input:,output:,output-prefix:,utf8,commit:" -- "$@") || { echo "getopt failed" >&2; exit 1; }
+
+eval set -- "$options"
+while [[ $# -gt 0 ]]; do
+ case "$1" in
+ -h|--help)
+ usage
+ exit 0
+ ;;
+ -f|--from)
+ FROM="$2"
+ shift
+ ;;
+ -s|--stable)
+ FROM="stable"
+ ;;
+ -C|--cve)
+ CVE="$2"
+ shift
+ ;;
+ --cate)
+ CATEGORY="$2"
+ shift
+ ;;
+ --feat)
+ FEATURE="$2"
+ shift
+ ;;
+ --subject-prefix)
+ SUBJECT_PREFIX=" $2"
+ shift
+ ;;
+ --with-sob)
+ WITH_SOB="-s"
+ ;;
+ -b|--bugzilla)
+ BUG="$2"
+ shift
+ ;;
+ --issue)
+ ISSUE=$2
+ WITH_ISSUE=1
+ shift
+ ;;
+ -i|--input)
+ INPUT=$2
+ shift
+ ;;
+ -u|--utf8)
+ UTF8=1
+ ;;
+ -c|--commit)
+ COMMIT=$2
+ shift
+ ;;
+ esac
+ shift
+done
+
+if [ -z "$COMMIT" ] && [ -z "$INPUT" ]; then
+ echo 'You must specify at least a commit or an input filename'
+ exit 1
+fi
+
+if [ "$INPUT" != "" ] && [ "$COMMIT" == "" ]; then
+ COMMIT=$(grep "^(cherry picked from commit" $INPUT | sed 's/^(cherry picked from commit \([a-f0-9]*\).*/\1/g')
+
+ if [ "$COMMIT" == "" ]; then
+ COMMIT=$(grep "^commit [a-f0-9]* upstream" $INPUT | sed 's/^commit \([a-f0-9]*\).*/\1/g')
+ fi
+fi
+
+if [ "$PREFIX" != "" ]; then
+ PREFIX="[$PREFIX] "
+fi
+
+if [ $(echo $CVE |grep CVE -c) -eq 0 ] && [ "$CVE" != "NA" ]; then
+ CVE=CVE-$CVE
+fi
+
+TAG=$(git name-rev --tags $COMMIT |sed 's/^[a-f0-9]* tags\/v//g' | sed 's/~.*//' | sed 's/-.*//')
+
+trap 'rm -f "$TMPFILE"' EXIT
+TMPFILE=$(mktemp) || exit 1
+
+printf "%s\n" "$FROM inclusion" >> $TMPFILE
+if [ $FROM == "mainline" ] || [ "$FROM" == "stable" ]; then
+ printf "%s\n" "from $FROM-$TAG" >> $TMPFILE
+ printf "%s\n" "commit $COMMIT" >> $TMPFILE
+fi
+printf "%s\n" "category: $CATEGORY" >> $TMPFILE
+
+if [ "$CATEGORY" == "feature" ]; then
+ if [ "$FEATURE" == "" ]; then
+ echo "please describe this feature"
+ exit 1
+ else
+ printf "%s\n" "feature: $FEATURE" >> $TMPFILE
+ fi
+fi
+
+printf "%s\n" "bugzilla: $BUG" >> $TMPFILE
+printf "%s\n" "CVE: $CVE" >> $TMPFILE
+
+if [ "$BUG" == "NA" ] && [ "$ISSUE" == "NA" ];
+then
+ echo please set bugzilla id or issue id
+ exit 1
+fi
+
+if [ $WITH_ISSUE -eq 1 ] && [ "$ISSUE" != "" ]; then
+ printf "%s\n" "Link: $ISSUE_PREFIX/$ISSUE" >> $TMPFILE
+fi
+
+printf "\n%s\n\n" "-------------------------------------------------" >> $TMPFILE
+
+if [ -z $INPUT ]; then
+ INPUT=$(git format-patch -1 $WITH_SOB --subject-prefix="PATCH$SUBJECT_PREFIX" $COMMIT)
+fi
+
+if [ -z $OUTPUT ]; then
+ OUTPUT=$INPUT
+fi
+
+INSERT_LINE=$(sed -n -e '/^$/=' $INPUT |head -n1)
+sed -i "${INSERT_LINE}r $TMPFILE" "$OUTPUT"
--
2.20.1
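For illustration, running "./scripts/pick -c 587a87b9d7e94927edcdea018565bc1939381eb1 --issue I1SF7H -i 0001-input.patch" would prepend a block like the following to the patch body (the "from mainline-5.9" tag is illustrative; it depends on what git name-rev --tags resolves for the commit):

mainline inclusion
from mainline-5.9
commit 587a87b9d7e94927edcdea018565bc1939381eb1
category: bugfix
bugzilla: NA
CVE: NA
Link: https://gitee.com/openeuler/kernel/issues/I1SF7H

-------------------------------------------------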
20 Aug '20
This patch set aims to support the vCPU preempted check and PV qspinlock
under KVM/arm64.
Christoffer Dall (1):
KVM: arm/arm64: Factor out hypercall handling from PSCI code
Qian Cai (1):
arm64/spinlock: fix a -Wunused-function warning
Steven Price (3):
KVM: Implement kvm_put_guest()
arm/arm64: Provide a wrapper for SMCCC 1.1 calls
arm/arm64: Make use of the SMCCC 1.1 wrapper
Waiman Long (1):
locking/osq: Use optimized spinning loop for arm64
Wanpeng Li (2):
KVM: Boost vCPUs that are delivering interrupts
KVM: Check preempted_in_kernel for involuntary preemption
Zengruan Ye (12):
arm/paravirt: Use a single ops structure
KVM: arm64: Document PV-sched interface
KVM: arm64: Implement PV_SCHED_FEATURES call
KVM: arm64: Support pvsched preempted via shared structure
KVM: arm64: Add interface to support vCPU preempted check
KVM: arm64: Support the vCPU preemption check
KVM: arm64: Add SMCCC PV-sched to kick cpu
KVM: arm64: Implement PV_SCHED_KICK_CPU call
KVM: arm64: Add interface to support PV qspinlock
KVM: arm64: Enable PV qspinlock
KVM: arm64: Add tracepoints for PV qspinlock
arm64: defconfig: add CONFIG_PARAVIRT_SPINLOCKS in default
Documentation/virtual/kvm/arm/pvsched.txt | 72 ++++++++
arch/arm/include/asm/kvm_host.h | 25 +++
arch/arm/include/asm/paravirt.h | 9 +-
arch/arm/kernel/paravirt.c | 4 +-
arch/arm/kvm/Makefile | 2 +-
arch/arm/kvm/handle_exit.c | 2 +-
arch/arm/mm/proc-v7-bugs.c | 13 +-
arch/arm64/Kconfig | 14 ++
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/include/asm/kvm_host.h | 20 +++
arch/arm64/include/asm/paravirt.h | 63 ++++++-
arch/arm64/include/asm/pvsched-abi.h | 16 ++
arch/arm64/include/asm/qspinlock.h | 15 +-
arch/arm64/include/asm/qspinlock_paravirt.h | 12 ++
arch/arm64/include/asm/spinlock.h | 26 +++
arch/arm64/kernel/Makefile | 3 +-
arch/arm64/kernel/alternative.c | 5 +-
arch/arm64/kernel/cpu_errata.c | 82 +++------
arch/arm64/kernel/paravirt-spinlocks.c | 18 ++
arch/arm64/kernel/paravirt.c | 188 +++++++++++++++++++-
arch/arm64/kernel/setup.c | 2 +
arch/arm64/kernel/trace-paravirt.h | 66 +++++++
arch/arm64/kvm/Makefile | 2 +
arch/arm64/kvm/handle_exit.c | 5 +-
arch/s390/kvm/interrupt.c | 2 +-
drivers/xen/time.c | 4 +
include/kvm/arm_hypercalls.h | 43 +++++
include/kvm/arm_psci.h | 2 +-
include/linux/arm-smccc.h | 71 ++++++++
include/linux/cpuhotplug.h | 1 +
include/linux/kvm_host.h | 23 +++
include/linux/kvm_types.h | 2 +
kernel/locking/osq_lock.c | 23 ++-
virt/kvm/arm/arm.c | 12 +-
virt/kvm/arm/hypercalls.c | 72 ++++++++
virt/kvm/arm/psci.c | 76 +-------
virt/kvm/arm/pvsched.c | 83 +++++++++
virt/kvm/arm/trace.h | 18 ++
virt/kvm/kvm_main.c | 17 +-
41 files changed, 939 insertions(+), 177 deletions(-)
create mode 100644 Documentation/virtual/kvm/arm/pvsched.txt
create mode 100644 arch/arm64/include/asm/pvsched-abi.h
create mode 100644 arch/arm64/include/asm/qspinlock_paravirt.h
create mode 100644 arch/arm64/kernel/paravirt-spinlocks.c
create mode 100644 arch/arm64/kernel/trace-paravirt.h
create mode 100644 include/kvm/arm_hypercalls.h
create mode 100644 virt/kvm/arm/hypercalls.c
create mode 100644 virt/kvm/arm/pvsched.c
--
2.25.1
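For readers new to the concept: a vCPU preempted check lets the guest's lock-spinning code ask whether another vCPU (for example the current lock holder) has been scheduled out by the host, so the waiter can yield instead of wasting cycles spinning. In designs like this series, the check reduces to reading a per-vCPU structure shared with the hypervisor. A minimal guest-side sketch, in which the structure and field names are assumptions and not this series' actual pvsched ABI:

#include <linux/percpu.h>
#include <linux/types.h>
#include <asm/byteorder.h>

/* Hypothetical shared state; the real layout lives in the series'
 * pvsched-abi.h, not here. */
struct pvsched_vcpu_state {
        __le32 preempted;   /* set by the host while the vCPU is scheduled out */
};

static DEFINE_PER_CPU(struct pvsched_vcpu_state, pvsched_state);

static bool pv_vcpu_is_preempted(int cpu)
{
        return !!le32_to_cpu(per_cpu(pvsched_state, cpu).preempted);
}

PV qspinlock goes a step further: rather than spinning on a contended lock at all, the waiting vCPU parks itself and is woken by the host via the PV_SCHED_KICK_CPU call listed above when the lock is handed over.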
From: Xu Qiang <xuqiang36(a)huawei.com>
ascend inclusion
category: feature
Bugzilla: N/A
CVE: N/A
-------------------------------------------
Signed-off-by: Xu Qiang <xuqiang36(a)huawei.com>
Acked-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/printk/printk_safe.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
index af5f4e42ca54..63ba880a6df0 100644
--- a/kernel/printk/printk_safe.c
+++ b/kernel/printk/printk_safe.c
@@ -290,6 +290,7 @@ void printk_safe_flush_on_panic(void)
printk_safe_flush();
}
+EXPORT_SYMBOL_GPL(printk_safe_flush_on_panic);
#ifdef CONFIG_PRINTK_NMI
/*
--
2.25.1
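The commit message is terse, but the evident purpose of the export is to let module code flush the per-CPU printk_safe buffers on its own panic path. A hypothetical caller, for illustration only (the notifier below is not part of this patch):

#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/printk.h>

static int example_panic_cb(struct notifier_block *nb,
                            unsigned long event, void *data)
{
        /* Push any printk output buffered in NMI/printk-safe
         * contexts out to the console before the machine stops. */
        printk_safe_flush_on_panic();
        return NOTIFY_DONE;
}

static struct notifier_block example_panic_nb = {
        .notifier_call = example_panic_cb,
};

/* In module init:
 *      atomic_notifier_chain_register(&panic_notifier_list,
 *                                     &example_panic_nb);
 */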
[PATCH 1/7] serial: amba-pl011: Fix dropped interrupts when the serial port's interrupt signal line is connected to mbigen
by Yang Yingliang 19 Aug '20
From: Xu Qiang <xuqiang36(a)huawei.com>
ascend inclusion
category: bugfix
Bugzilla: N/A
CVE: N/A
---------------------------------------
On HiSilicon Ascend chips, the serial port interrupt signal lines
are connected to a mbigen device, which triggers SPI interrupts by
writing the GICD_SETSPI_NSR register. This can result in the serial
port dropping interrupts.
Signed-off-by: Xu Qiang <xuqiang36(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/tty/serial/Kconfig | 18 ++++++++++
drivers/tty/serial/amba-pl011.c | 64 +++++++++++++++++++++++++++++++++
2 files changed, 82 insertions(+)
diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig
index df8bd0c7b97d..66e69c7d10a3 100644
--- a/drivers/tty/serial/Kconfig
+++ b/drivers/tty/serial/Kconfig
@@ -73,6 +73,24 @@ config SERIAL_AMBA_PL011_CONSOLE
your boot loader (lilo or loadlin) about how to pass options to the
kernel at boot time.)
+if ASCEND_FEATURES
+
+config SERIAL_ATTACHED_MBIGEN
+ bool "Serial port interrupt signal lines connected to the mbigen"
+ depends on SERIAL_AMBA_PL011=y
+ default n
+ help
+ Say Y here when the interrupt signal line of the serial port is
+ connected to the mbigen. The mbigen device clears interrupts
+ automatically, but the serial port interrupt handler may process
+ several interrupts in one pass, which the mbigen cannot detect.
+ Without this workaround, the remaining pending interrupts are
+ silently discarded and lost.
+
+ If unsure, say N.
+
+endif
+
config SERIAL_EARLYCON_ARM_SEMIHOST
bool "Early console using ARM semihosting"
depends on ARM64 || ARM
diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
index 1d501154e9f7..781a27eebf0b 100644
--- a/drivers/tty/serial/amba-pl011.c
+++ b/drivers/tty/serial/amba-pl011.c
@@ -1473,6 +1473,63 @@ static void check_apply_cts_event_workaround(struct uart_amba_port *uap)
dummy_read = pl011_read(uap, REG_ICR);
}
+#ifdef CONFIG_SERIAL_ATTACHED_MBIGEN
+struct workaround_oem_info {
+ char oem_id[ACPI_OEM_ID_SIZE + 1];
+ char oem_table_id[ACPI_OEM_TABLE_ID_SIZE + 1];
+ u32 oem_revision;
+};
+
+static bool pl011_enable_hisi_wkrd;
+static struct workaround_oem_info pl011_wkrd_info[] = {
+ {
+ .oem_id = "HISI ",
+ .oem_table_id = "HIP08 ",
+ .oem_revision = 0x300,
+ }, {
+ .oem_id = "HISI ",
+ .oem_table_id = "HIP08 ",
+ .oem_revision = 0x301,
+ }, {
+ .oem_id = "HISI ",
+ .oem_table_id = "HIP08 ",
+ .oem_revision = 0x400,
+ }, {
+ .oem_id = "HISI ",
+ .oem_table_id = "HIP08 ",
+ .oem_revision = 0x401,
+ }, {
+ .oem_id = "HISI ",
+ .oem_table_id = "HIP08 ",
+ .oem_revision = 0x402,
+ }
+};
+
+static void pl011_check_hisi_workaround(void)
+{
+ struct acpi_table_header *tbl;
+ acpi_status status = AE_OK;
+ int i;
+
+ status = acpi_get_table(ACPI_SIG_MADT, 0, &tbl);
+ if (ACPI_FAILURE(status) || !tbl)
+ return;
+
+ for (i = 0; i < ARRAY_SIZE(pl011_wkrd_info); i++) {
+ if (!memcmp(pl011_wkrd_info[i].oem_id, tbl->oem_id, ACPI_OEM_ID_SIZE) &&
+ !memcmp(pl011_wkrd_info[i].oem_table_id, tbl->oem_table_id, ACPI_OEM_TABLE_ID_SIZE) &&
+ pl011_wkrd_info[i].oem_revision == tbl->oem_revision) {
+ pl011_enable_hisi_wkrd = true;
+ break;
+ }
+ }
+}
+
+#else
+#define pl011_enable_hisi_wkrd 0
+static inline void pl011_check_hisi_workaround(void){ }
+#endif
+
static irqreturn_t pl011_int(int irq, void *dev_id)
{
struct uart_amba_port *uap = dev_id;
@@ -1510,6 +1567,11 @@ static irqreturn_t pl011_int(int irq, void *dev_id)
handled = 1;
}
+ if (pl011_enable_hisi_wkrd) {
+ pl011_write(0, uap, REG_IMSC);
+ pl011_write(uap->im, uap, REG_IMSC);
+ }
+
spin_unlock_irqrestore(&uap->port.lock, flags);
return IRQ_RETVAL(handled);
@@ -1687,6 +1749,8 @@ static int pl011_hwinit(struct uart_port *port)
if (plat->init)
plat->init();
}
+
+ pl011_check_hisi_workaround();
return 0;
}
--
2.25.1
19 Aug '20
From: Bixuan Cui <cuibixuan(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
Fix a mistaken check introduced by commit bd6c06e0917d
("iommu: introduce device fault report API")
Fixes: bd6c06e0917d ("iommu: introduce device fault report API")
Signed-off-by: Bixuan Cui <cuibixuan(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/iommu/iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 831c5065f7f8..bee6b8fcfe0e 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -982,7 +982,7 @@ int iommu_unregister_device_fault_handler(struct device *dev)
mutex_lock(&param->lock);
/* we cannot unregister handler if there are pending faults */
- if (list_empty(&param->fault_param->faults)) {
+ if (!list_empty(&param->fault_param->faults)) {
ret = -EBUSY;
goto unlock;
}
--
2.25.1
From: Xie XiuQi <xiexiuqi(a)huawei.com>
hulk inclusion
category: feature
feature: support 1822 on x86 platform
After commit 5d57d1e24808 ("net/hinic: Add support for X86 Arch"),
the hinic driver supports the x86 platform, so enable this config
by default.
Link: https://gitee.com/openeuler/kernel/issues/I1DC1F
Signed-off-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/x86/configs/hulk_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
arch/x86/configs/syzkaller_defconfig | 1 +
3 files changed, 3 insertions(+)
diff --git a/arch/x86/configs/hulk_defconfig b/arch/x86/configs/hulk_defconfig
index ddd209c39445..6a1d3307e34e 100644
--- a/arch/x86/configs/hulk_defconfig
+++ b/arch/x86/configs/hulk_defconfig
@@ -2526,6 +2526,7 @@ CONFIG_NET_VENDOR_INTEL=y
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_E1000E_HWTS=y
+CONFIG_HINIC=m
CONFIG_IGB=m
CONFIG_IGB_HWMON=y
CONFIG_IGB_DCA=y
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 4c9de1198798..747c6bd5967f 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -2523,6 +2523,7 @@ CONFIG_NET_VENDOR_INTEL=y
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_E1000E_HWTS=y
+CONFIG_HINIC=m
CONFIG_IGB=m
CONFIG_IGB_HWMON=y
CONFIG_IGB_DCA=y
diff --git a/arch/x86/configs/syzkaller_defconfig b/arch/x86/configs/syzkaller_defconfig
index 061f2fe87382..ff5f843e09c4 100644
--- a/arch/x86/configs/syzkaller_defconfig
+++ b/arch/x86/configs/syzkaller_defconfig
@@ -1716,6 +1716,7 @@ CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_E1000E_HWTS=y
# CONFIG_IGB is not set
+CONFIG_HINIC=m
# CONFIG_IGBVF is not set
# CONFIG_IXGB is not set
# CONFIG_IXGBE is not set
--
2.25.1
This series adds initial KVM RISC-V support. Currently, we are able to boot
RISC-V 64-bit Linux guests with multiple VCPUs.
This series can be found in the riscv_kvm_v13 branch at:
https://github.com/avpatel/linux.git
Link: https://gitee.com/openeuler/kernel/issues/I1RR1Y
This is backported by Mingwang Li <limingwang(a)huawei.com>.
Alistair Francis (1):
Revert "riscv: Use latest system call ABI"
Anup Patel (19):
RISC-V: Add bitmap reprensenting ISA features common across CPUs
RISC-V: Add fragmented config for debug options
RISC-V: Enable CPU Hotplug in defconfigs
RISC-V: Add hypervisor extension related CSR defines
RISC-V: Add initial skeletal KVM support
RISC-V: KVM: Implement VCPU create, init and destroy functions
RISC-V: KVM: Implement VCPU interrupts and requests handling
RISC-V: KVM: Implement KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls
RISC-V: KVM: Implement VCPU world-switch
RISC-V: KVM: Handle MMIO exits for VCPU
RISC-V: KVM: Handle WFI exits for VCPU
RISC-V: KVM: Implement VMID allocator
RISC-V: KVM: Implement stage2 page table programming
RISC-V: KVM: Implement MMU notifiers
RISC-V: KVM: Document RISC-V specific parts of KVM API.
RISC-V: KVM: Add MAINTAINERS entry
RISC-V: Enable KVM for RV64 and RV32
RISC-V: KVM: Fix stage2_map_page() and kvm_riscv_vcpu_unpriv_read()
RISC-V: KVM: Determine instruction correctly from HTINST CSR
Atish Patra (9):
RISC-V: Mark existing SBI as 0.1 SBI.
RISC-V: Add basic support for SBI v0.2
RISC-V: Add SBI v0.2 extension definitions
RISC-V: Introduce a new config for SBI v0.1
RISC-V: Implement new SBI v0.2 extensions
RISC-V: KVM: Add timer functionality
RISC-V: KVM: FP lazy save/restore
RISC-V: KVM: Implement ONE REG interface for FP registers
RISC-V: KVM: Add SBI v0.1 support
limingwang (1):
RISC-V: KVM: fix some interfaces problems
Documentation/virt/kvm/api.txt | 169 +++-
MAINTAINERS | 11 +
arch/riscv/Kconfig | 9 +
arch/riscv/Makefile | 2 +
arch/riscv/configs/defconfig | 24 +-
arch/riscv/configs/extra_debug.config | 21 +
arch/riscv/configs/rv32_defconfig | 24 +-
arch/riscv/include/asm/csr.h | 90 +-
arch/riscv/include/asm/hwcap.h | 22 +
arch/riscv/include/asm/kvm_host.h | 274 ++++++
arch/riscv/include/asm/kvm_vcpu_timer.h | 44 +
arch/riscv/include/asm/pgtable-bits.h | 1 +
arch/riscv/include/asm/sbi.h | 178 ++--
arch/riscv/include/uapi/asm/kvm.h | 127 +++
arch/riscv/include/uapi/asm/unistd.h | 5 +-
arch/riscv/kernel/asm-offsets.c | 154 ++++
arch/riscv/kernel/cpufeature.c | 83 +-
arch/riscv/kernel/sbi.c | 522 +++++++++++-
arch/riscv/kernel/setup.c | 2 +
arch/riscv/kvm/Kconfig | 34 +
arch/riscv/kvm/Makefile | 14 +
arch/riscv/kvm/main.c | 99 +++
arch/riscv/kvm/mmu.c | 804 ++++++++++++++++++
arch/riscv/kvm/tlb.S | 74 ++
arch/riscv/kvm/vcpu.c | 1037 +++++++++++++++++++++++
arch/riscv/kvm/vcpu_exit.c | 678 +++++++++++++++
arch/riscv/kvm/vcpu_sbi.c | 173 ++++
arch/riscv/kvm/vcpu_switch.S | 391 +++++++++
arch/riscv/kvm/vcpu_timer.c | 225 +++++
arch/riscv/kvm/vm.c | 86 ++
arch/riscv/kvm/vmid.c | 120 +++
drivers/clocksource/timer-riscv.c | 8 +
include/clocksource/timer-riscv.h | 16 +
include/uapi/linux/kvm.h | 8 +
34 files changed, 5402 insertions(+), 127 deletions(-)
create mode 100644 arch/riscv/configs/extra_debug.config
create mode 100644 arch/riscv/include/asm/kvm_host.h
create mode 100644 arch/riscv/include/asm/kvm_vcpu_timer.h
create mode 100644 arch/riscv/include/uapi/asm/kvm.h
create mode 100644 arch/riscv/kvm/Kconfig
create mode 100644 arch/riscv/kvm/Makefile
create mode 100644 arch/riscv/kvm/main.c
create mode 100644 arch/riscv/kvm/mmu.c
create mode 100644 arch/riscv/kvm/tlb.S
create mode 100644 arch/riscv/kvm/vcpu.c
create mode 100644 arch/riscv/kvm/vcpu_exit.c
create mode 100644 arch/riscv/kvm/vcpu_sbi.c
create mode 100644 arch/riscv/kvm/vcpu_switch.S
create mode 100644 arch/riscv/kvm/vcpu_timer.c
create mode 100644 arch/riscv/kvm/vm.c
create mode 100644 arch/riscv/kvm/vmid.c
create mode 100644 include/clocksource/timer-riscv.h
--
2.20.1
18 Aug '20
euleros inclusion
category: feature
bugzilla: NA
issues: #I1RC8Z
CVE: NA
In normal kexec, relocating the kernel may cost 5~10 seconds,
because all segments must be copied from vmalloced memory to
kernel boot memory with the MMU disabled.
We introduce quick kexec to save this copying time, just like
kdump (kexec on crash), by using a reserved memory region named
"Quick Kexec".
The quick kimage is constructed the same way as the crash kernel
image, and all of its segments are then simply copied into the
reserved memory at load time.
We also add this support to the kexec_load syscall via the new
KEXEC_QUICK flag.
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
---
arch/Kconfig | 9 +++++++++
include/linux/ioport.h | 1 +
include/linux/kexec.h | 11 ++++++++++-
include/uapi/linux/kexec.h | 1 +
kernel/kexec.c | 10 ++++++++++
kernel/kexec_core.c | 41 ++++++++++++++++++++++++++++++++---------
6 files changed, 63 insertions(+), 10 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index d3d70369..11580292 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -18,6 +18,15 @@ config KEXEC_CORE
select CRASH_CORE
bool
+config QUICK_KEXEC
+ bool "Support for quick kexec"
+ depends on KEXEC_CORE
+ help
+ Use pre-reserved memory to accelerate kexec, just like crash
+ kexec: load the new kernel and initrd into the reserved memory
+ and boot the new kernel from there. This saves the time spent
+ relocating the kernel.
+
config HAVE_IMA_KEXEC
bool
diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 5330288..1a438ec 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -139,6 +139,7 @@ enum {
IORES_DESC_PERSISTENT_MEMORY_LEGACY = 5,
IORES_DESC_DEVICE_PRIVATE_MEMORY = 6,
IORES_DESC_DEVICE_PUBLIC_MEMORY = 7,
+ IORES_DESC_QUICK_KEXEC = 8,
};
/* helpers to define resources */
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index d6b8d0a..98cf0fb 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -233,9 +233,10 @@ struct kimage {
unsigned long control_page;
/* Flags to indicate special processing */
- unsigned int type : 1;
+ unsigned int type : 2;
#define KEXEC_TYPE_DEFAULT 0
#define KEXEC_TYPE_CRASH 1
+#define KEXEC_TYPE_QUICK 2
unsigned int preserve_context : 1;
/* If set, we are using file mode kexec syscall */
unsigned int file_mode:1;
@@ -296,6 +297,11 @@ extern int kexec_load_disabled;
#define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_PRESERVE_CONTEXT)
#endif
+#ifdef CONFIG_QUICK_KEXEC
+#undef KEXEC_FLAGS
+#define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_QUICK)
+#endif
+
/* List of defined/legal kexec file flags */
#define KEXEC_FILE_FLAGS (KEXEC_FILE_UNLOAD | KEXEC_FILE_ON_CRASH | \
KEXEC_FILE_NO_INITRAMFS)
@@ -305,6 +311,9 @@ extern int kexec_load_disabled;
extern struct resource crashk_res;
extern struct resource crashk_low_res;
extern note_buf_t __percpu *crash_notes;
+#ifdef CONFIG_QUICK_KEXEC
+extern struct resource quick_kexec_res;
+#endif
/* flag to track if kexec reboot is in progress */
extern bool kexec_in_progress;
diff --git a/include/uapi/linux/kexec.h b/include/uapi/linux/kexec.h
index 6d11286..ca3cebe 100644
--- a/include/uapi/linux/kexec.h
+++ b/include/uapi/linux/kexec.h
@@ -12,6 +12,7 @@
/* kexec flags for different usage scenarios */
#define KEXEC_ON_CRASH 0x00000001
#define KEXEC_PRESERVE_CONTEXT 0x00000002
+#define KEXEC_QUICK 0x00000004
#define KEXEC_ARCH_MASK 0xffff0000
/*
diff --git a/kernel/kexec.c b/kernel/kexec.c
index 6855980..47dfad7 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -46,6 +46,9 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
int ret;
struct kimage *image;
bool kexec_on_panic = flags & KEXEC_ON_CRASH;
+#ifdef CONFIG_QUICK_KEXEC
+ bool kexec_on_quick = flags & KEXEC_QUICK;
+#endif
if (kexec_on_panic) {
/* Verify we have a valid entry point */
@@ -71,6 +74,13 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
image->type = KEXEC_TYPE_CRASH;
}
+#ifdef CONFIG_QUICK_KEXEC
+ if (kexec_on_quick) {
+ image->control_page = quick_kexec_res.start;
+ image->type = KEXEC_TYPE_QUICK;
+ }
+#endif
+
ret = sanity_check_segment_list(image);
if (ret)
goto out_free_image;
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index b36c9c4..595a757 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -74,6 +74,16 @@ struct resource crashk_low_res = {
.desc = IORES_DESC_CRASH_KERNEL
};
+#ifdef CONFIG_QUICK_KEXEC
+struct resource quick_kexec_res = {
+ .name = "Quick kexec",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+ .desc = IORES_DESC_QUICK_KEXEC
+};
+#endif
+
int kexec_should_crash(struct task_struct *p)
{
/*
@@ -470,8 +480,10 @@ static struct page *kimage_alloc_normal_control_pages(struct kimage *image,
return pages;
}
-static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
- unsigned int order)
+
+static struct page *kimage_alloc_special_control_pages(struct kimage *image,
+ unsigned int order,
+ unsigned long end)
{
/* Control pages are special, they are the intermediaries
* that are needed while we copy the rest of the pages
@@ -501,7 +513,7 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
size = (1 << order) << PAGE_SHIFT;
hole_start = (image->control_page + (size - 1)) & ~(size - 1);
hole_end = hole_start + size - 1;
- while (hole_end <= crashk_res.end) {
+ while (hole_end <= end) {
unsigned long i;
cond_resched();
@@ -536,7 +548,6 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
return pages;
}
-
struct page *kimage_alloc_control_pages(struct kimage *image,
unsigned int order)
{
@@ -547,8 +558,15 @@ struct page *kimage_alloc_control_pages(struct kimage *image,
pages = kimage_alloc_normal_control_pages(image, order);
break;
case KEXEC_TYPE_CRASH:
- pages = kimage_alloc_crash_control_pages(image, order);
+ pages = kimage_alloc_special_control_pages(image, order,
+ crashk_res.end);
+ break;
+#ifdef CONFIG_QUICK_KEXEC
+ case KEXEC_TYPE_QUICK:
+ pages = kimage_alloc_special_control_pages(image, order,
+ quick_kexec_res.end);
break;
+#endif
}
return pages;
@@ -898,11 +916,11 @@ static int kimage_load_normal_segment(struct kimage *image,
return result;
}
-static int kimage_load_crash_segment(struct kimage *image,
+static int kimage_load_special_segment(struct kimage *image,
struct kexec_segment *segment)
{
- /* For crash dumps kernels we simply copy the data from
- * user space to it's destination.
+ /* For crash dump kernels and quick kexec kernels
+ * we simply copy the data from user space to its destination.
* We do things a page at a time for the sake of kmap.
*/
unsigned long maddr;
@@ -976,8 +994,13 @@ int kimage_load_segment(struct kimage *image,
result = kimage_load_normal_segment(image, segment);
break;
case KEXEC_TYPE_CRASH:
- result = kimage_load_crash_segment(image, segment);
+ result = kimage_load_special_segment(image, segment);
+ break;
+#ifdef CONFIG_QUICK_KEXEC
+ case KEXEC_TYPE_QUICK:
+ result = kimage_load_special_segment(image, segment);
break;
+#endif
}
return result;
--
2.9.5
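To make the new flag concrete, a userspace loader would request a quick kexec roughly as below. This is a sketch assuming the patched uapi header is installed; preparing the entry point and segments is elided, as it is identical to a normal kexec_load() call:

#include <linux/kexec.h>        /* KEXEC_QUICK, KEXEC_ARCH_DEFAULT */
#include <sys/syscall.h>
#include <unistd.h>

static long load_quick(unsigned long entry, unsigned long nr_segments,
                       struct kexec_segment *segments)
{
        /* KEXEC_QUICK directs control pages and the segment copies
         * into the pre-reserved "Quick kexec" region, so nothing
         * needs to be relocated at reboot time. */
        return syscall(SYS_kexec_load, entry, nr_segments, segments,
                       KEXEC_QUICK | KEXEC_ARCH_DEFAULT);
}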
From: Hanjun Guo <guohanjun(a)huawei.com>
hulk inclusion
category: feature
bugzilla: NA
CVE: NA
---------------------------
New features (such as Hygon CPU support) were merged but the
defconfig was left un-updated; update it now.
Signed-off-by: Hanjun Guo <guohanjun(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/x86/configs/hulk_defconfig | 105 ++++++++++++--------------
arch/x86/configs/openeuler_defconfig | 66 ++++++----------
arch/x86/configs/storage_ci_defconfig | 40 ++++++----
arch/x86/configs/syzkaller_defconfig | 56 ++++++++++----
4 files changed, 140 insertions(+), 127 deletions(-)
diff --git a/arch/x86/configs/hulk_defconfig b/arch/x86/configs/hulk_defconfig
index 0a74cfe00137..ddd209c39445 100644
--- a/arch/x86/configs/hulk_defconfig
+++ b/arch/x86/configs/hulk_defconfig
@@ -1,14 +1,15 @@
#
# Automatically generated file; DO NOT EDIT.
-# Linux/x86 4.19.21 Kernel Configuration
+# Linux/x86 4.19.138 Kernel Configuration
#
#
-# Compiler: gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
+# Compiler: gcc 7.5.0
#
CONFIG_CC_IS_GCC=y
-CONFIG_GCC_VERSION=40805
+CONFIG_GCC_VERSION=70500
CONFIG_CLANG_VERSION=0
+CONFIG_CC_HAS_ASM_GOTO=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y
@@ -46,6 +47,7 @@ CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
+CONFIG_KTASK=y
#
# IRQ subsystem
@@ -150,14 +152,15 @@ CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
+# CONFIG_CGROUP_FILES is not set
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
-CONFIG_CHECKPOINT_RESTORE=y
CONFIG_SCHED_STEAL=y
+CONFIG_CHECKPOINT_RESTORE=y
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
@@ -169,6 +172,7 @@ CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
+CONFIG_INITRAMFS_FILE_METADATA=""
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
@@ -307,6 +311,7 @@ CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_XEN_DEBUG_FS is not set
# CONFIG_XEN_PVH is not set
CONFIG_KVM_GUEST=y
+CONFIG_ARCH_CPUIDLE_HALTPOLL=y
# CONFIG_KVM_DEBUG_FS is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
@@ -326,6 +331,7 @@ CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
+CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
@@ -373,8 +379,8 @@ CONFIG_X86_DIRECT_GBPAGES=y
CONFIG_ARCH_HAS_MEM_ENCRYPT=y
CONFIG_AMD_MEM_ENCRYPT=y
# CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT is not set
-CONFIG_ARCH_USE_MEMREMAP_PROT=y
CONFIG_NUMA=y
+CONFIG_NUMA_AWARE_SPINLOCKS=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
@@ -402,6 +408,9 @@ CONFIG_X86_SMAP=y
CONFIG_X86_INTEL_UMIP=y
# CONFIG_X86_INTEL_MPX is not set
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
+CONFIG_X86_INTEL_TSX_MODE_OFF=y
+# CONFIG_X86_INTEL_TSX_MODE_ON is not set
+# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_MIXED=y
@@ -436,6 +445,7 @@ CONFIG_LEGACY_VSYSCALL_EMULATE=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_MODIFY_LDT_SYSCALL=y
CONFIG_HAVE_LIVEPATCH_FTRACE=y
+CONFIG_HAVE_LIVEPATCH_WO_FTRACE=y
#
# Enable Livepatch
@@ -578,6 +588,8 @@ CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
CONFIG_CPU_IDLE_GOV_TEO=y
+# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
+CONFIG_HALTPOLL_CPUIDLE=y
CONFIG_INTEL_IDLE=y
#
@@ -637,6 +649,7 @@ CONFIG_VMD=y
# DesignWare PCI Core Support
#
# CONFIG_PCIE_DW_PLAT_HOST is not set
+CONFIG_HISILICON_PCIE_CAE=m
#
# PCI Endpoint
@@ -669,7 +682,6 @@ CONFIG_YENTA_TOSHIBA=y
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
-# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
@@ -731,6 +743,7 @@ CONFIG_KVM_VFIO=y
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
CONFIG_KVM_COMPAT=y
CONFIG_HAVE_KVM_IRQ_BYPASS=y
+CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
@@ -832,18 +845,16 @@ CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
-CONFIG_ARCH_HAS_REFCOUNT=y
-# CONFIG_REFCOUNT_FULL is not set
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
+CONFIG_ARCH_USE_MEMREMAP_PROT=y
#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
-CONFIG_PLUGIN_HOSTCC="g++"
+CONFIG_PLUGIN_HOSTCC=""
CONFIG_HAVE_GCC_PLUGINS=y
-# CONFIG_GCC_PLUGINS is not set
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
@@ -996,6 +1007,7 @@ CONFIG_THP_SWAP=y
CONFIG_TRANSPARENT_HUGE_PAGECACHE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
+# CONFIG_SHRINK_PAGECACHE is not set
# CONFIG_CMA is not set
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_ZSWAP=y
@@ -1940,6 +1952,7 @@ CONFIG_MTD_UBI_BEB_LIMIT=20
# CONFIG_MTD_UBI_FASTMAP is not set
# CONFIG_MTD_UBI_GLUEBI is not set
# CONFIG_MTD_UBI_BLOCK is not set
+CONFIG_MTD_HISILICON_SFC=m
# CONFIG_OF is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PARPORT=m
@@ -2124,7 +2137,6 @@ CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
# CONFIG_CHR_DEV_OSST is not set
CONFIG_BLK_DEV_SR=m
-CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_ENCLOSURE=m
@@ -2505,6 +2517,9 @@ CONFIG_BE2NET_LANCER=y
CONFIG_BE2NET_SKYHAWK=y
# CONFIG_NET_VENDOR_EZCHIP is not set
# CONFIG_NET_VENDOR_HP is not set
+CONFIG_NET_VENDOR_HUAWEI=y
+# CONFIG_HINIC is not set
+# CONFIG_BMA is not set
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
@@ -2647,7 +2662,7 @@ CONFIG_LED_TRIGGER_PHY=y
#
CONFIG_AMD_PHY=m
CONFIG_AQUANTIA_PHY=m
-CONFIG_ASIX_PHY=m
+# CONFIG_AX88796B_PHY is not set
CONFIG_AT803X_PHY=m
CONFIG_BCM7XXX_PHY=m
CONFIG_BCM87XX_PHY=m
@@ -3286,6 +3301,7 @@ CONFIG_NOZOMI=m
CONFIG_N_HDLC=m
CONFIG_N_GSM=m
# CONFIG_TRACE_SINK is not set
+CONFIG_LDISC_AUTOLOAD=y
CONFIG_DEVMEM=y
# CONFIG_DEVKMEM is not set
@@ -3358,7 +3374,6 @@ CONFIG_HW_RANDOM_AMD=m
CONFIG_HW_RANDOM_VIA=m
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_NVRAM=y
-# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
CONFIG_RAW_DRIVER=y
@@ -3602,6 +3617,7 @@ CONFIG_GPIO_ICH=m
# PCI GPIO expanders
#
# CONFIG_GPIO_AMD8111 is not set
+# CONFIG_GPIO_BT8XX is not set
# CONFIG_GPIO_ML_IOH is not set
# CONFIG_GPIO_PCI_IDIO_16 is not set
# CONFIG_GPIO_PCIE_IDIO_24 is not set
@@ -3964,8 +3980,6 @@ CONFIG_MFD_CORE=y
# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
CONFIG_LPC_ICH=m
CONFIG_LPC_SCH=m
-# CONFIG_INTEL_SOC_PMIC is not set
-# CONFIG_INTEL_SOC_PMIC_CHTWC is not set
# CONFIG_INTEL_SOC_PMIC_CHTDC_TI is not set
CONFIG_MFD_INTEL_LPSS=y
CONFIG_MFD_INTEL_LPSS_ACPI=y
@@ -4010,7 +4024,6 @@ CONFIG_MFD_SM501_GPIO=y
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
# CONFIG_MFD_TPS65090 is not set
-# CONFIG_MFD_TPS68470 is not set
# CONFIG_MFD_TI_LP873X is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS65910 is not set
@@ -4082,9 +4095,6 @@ CONFIG_VIDEO_V4L2=m
# CONFIG_VIDEO_ADV_DEBUG is not set
# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
CONFIG_VIDEO_TUNER=m
-CONFIG_VIDEOBUF_GEN=m
-CONFIG_VIDEOBUF_DMA_SG=m
-CONFIG_VIDEOBUF_VMALLOC=m
CONFIG_DVB_CORE=m
# CONFIG_DVB_MMAP is not set
CONFIG_DVB_NET=y
@@ -4158,7 +4168,6 @@ CONFIG_USB_PWC=m
# CONFIG_USB_PWC_DEBUG is not set
CONFIG_USB_PWC_INPUT_EVDEV=y
# CONFIG_VIDEO_CPIA2 is not set
-CONFIG_USB_ZR364XX=m
CONFIG_USB_STKWEBCAM=m
CONFIG_USB_S2255=m
# CONFIG_VIDEO_USBTV is not set
@@ -4181,13 +4190,6 @@ CONFIG_VIDEO_USBVISION=m
CONFIG_VIDEO_AU0828=m
CONFIG_VIDEO_AU0828_V4L2=y
# CONFIG_VIDEO_AU0828_RC is not set
-CONFIG_VIDEO_CX231XX=m
-CONFIG_VIDEO_CX231XX_RC=y
-CONFIG_VIDEO_CX231XX_ALSA=m
-CONFIG_VIDEO_CX231XX_DVB=m
-CONFIG_VIDEO_TM6000=m
-CONFIG_VIDEO_TM6000_ALSA=m
-CONFIG_VIDEO_TM6000_DVB=m
#
# Digital TV USB devices
@@ -4272,16 +4274,11 @@ CONFIG_VIDEO_IVTV=m
# CONFIG_VIDEO_IVTV_DEPRECATED_IOCTLS is not set
# CONFIG_VIDEO_IVTV_ALSA is not set
CONFIG_VIDEO_FB_IVTV=m
-# CONFIG_VIDEO_HEXIUM_GEMINI is not set
-# CONFIG_VIDEO_HEXIUM_ORION is not set
-# CONFIG_VIDEO_MXB is not set
# CONFIG_VIDEO_DT3155 is not set
#
# Media capture/analog/hybrid TV support
#
-CONFIG_VIDEO_CX18=m
-CONFIG_VIDEO_CX18_ALSA=m
CONFIG_VIDEO_CX23885=m
CONFIG_MEDIA_ALTERA_CI=m
# CONFIG_VIDEO_CX25821 is not set
@@ -4291,8 +4288,6 @@ CONFIG_VIDEO_CX88_BLACKBIRD=m
CONFIG_VIDEO_CX88_DVB=m
# CONFIG_VIDEO_CX88_ENABLE_VP3054 is not set
CONFIG_VIDEO_CX88_MPEG=m
-CONFIG_VIDEO_BT848=m
-CONFIG_DVB_BT8XX=m
CONFIG_VIDEO_SAA7134=m
CONFIG_VIDEO_SAA7134_ALSA=m
CONFIG_VIDEO_SAA7134_RC=y
@@ -4302,14 +4297,9 @@ CONFIG_VIDEO_SAA7164=m
#
# Media digital TV PCI Adapters
#
-CONFIG_DVB_AV7110_IR=y
-CONFIG_DVB_AV7110=m
-CONFIG_DVB_AV7110_OSD=y
CONFIG_DVB_BUDGET_CORE=m
CONFIG_DVB_BUDGET=m
CONFIG_DVB_BUDGET_CI=m
-CONFIG_DVB_BUDGET_AV=m
-CONFIG_DVB_BUDGET_PATCH=m
CONFIG_DVB_B2C2_FLEXCOP_PCI=m
# CONFIG_DVB_B2C2_FLEXCOP_PCI_DEBUG is not set
CONFIG_DVB_PLUTO2=m
@@ -4376,7 +4366,6 @@ CONFIG_VIDEOBUF2_DMA_SG=m
CONFIG_VIDEOBUF2_DVB=m
CONFIG_DVB_B2C2_FLEXCOP=m
CONFIG_VIDEO_SAA7146=m
-CONFIG_VIDEO_SAA7146_VV=m
CONFIG_SMS_SIANO_MDTV=m
CONFIG_SMS_SIANO_RC=y
# CONFIG_SMS_SIANO_DEBUGFS is not set
@@ -4391,11 +4380,8 @@ CONFIG_VIDEO_IR_I2C=m
#
# Audio decoders, processors and mixers
#
-CONFIG_VIDEO_TVAUDIO=m
-CONFIG_VIDEO_TDA7432=m
CONFIG_VIDEO_MSP3400=m
CONFIG_VIDEO_CS3308=m
-CONFIG_VIDEO_CS5345=m
CONFIG_VIDEO_CS53L32A=m
CONFIG_VIDEO_WM8775=m
CONFIG_VIDEO_WM8739=m
@@ -4519,7 +4505,6 @@ CONFIG_DVB_MN88473=m
#
# DVB-S (satellite) frontends
#
-CONFIG_DVB_CX24110=m
CONFIG_DVB_CX24123=m
CONFIG_DVB_MT312=m
CONFIG_DVB_ZL10036=m
@@ -4532,12 +4517,10 @@ CONFIG_DVB_STV6110=m
CONFIG_DVB_STV0900=m
CONFIG_DVB_TDA8083=m
CONFIG_DVB_TDA10086=m
-CONFIG_DVB_TDA8261=m
CONFIG_DVB_VES1X93=m
CONFIG_DVB_TUNER_ITD1000=m
CONFIG_DVB_TUNER_CX24113=m
CONFIG_DVB_TDA826X=m
-CONFIG_DVB_TUA6100=m
CONFIG_DVB_CX24116=m
CONFIG_DVB_CX24117=m
CONFIG_DVB_CX24120=m
@@ -4550,8 +4533,6 @@ CONFIG_DVB_TDA10071=m
#
# DVB-T (terrestrial) frontends
#
-CONFIG_DVB_SP8870=m
-CONFIG_DVB_SP887X=m
CONFIG_DVB_CX22700=m
CONFIG_DVB_CX22702=m
CONFIG_DVB_DRXD=m
@@ -4587,7 +4568,6 @@ CONFIG_DVB_STV0297=m
# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
#
CONFIG_DVB_NXT200X=m
-CONFIG_DVB_OR51211=m
CONFIG_DVB_OR51132=m
CONFIG_DVB_BCM3510=m
CONFIG_DVB_LGDT330X=m
@@ -4707,6 +4687,7 @@ CONFIG_CHASH=m
# CONFIG_CHASH_STATS is not set
# CONFIG_CHASH_SELFTEST is not set
CONFIG_DRM_NOUVEAU=m
+CONFIG_NOUVEAU_LEGACY_CTX_SUPPORT=y
CONFIG_NOUVEAU_DEBUG=5
CONFIG_NOUVEAU_DEBUG_DEFAULT=3
# CONFIG_NOUVEAU_DEBUG_MMU is not set
@@ -4755,10 +4736,10 @@ CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
#
# Frame buffer Devices
#
-CONFIG_FB=y
-# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
+CONFIG_FB=y
+# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
@@ -5559,7 +5540,6 @@ CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m
CONFIG_USB_ADUTUX=m
CONFIG_USB_SEVSEG=m
-# CONFIG_USB_RIO500 is not set
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m
# CONFIG_USB_CYPRESS_CY7C63 is not set
@@ -5948,7 +5928,6 @@ CONFIG_VFIO_PCI_INTX=y
# CONFIG_VFIO_PCI_IGD is not set
CONFIG_VFIO_MDEV=m
CONFIG_VFIO_MDEV_DEVICE=m
-# CONFIG_VFIO_SPIMDEV is not set
CONFIG_IRQ_BYPASS_MANAGER=m
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y
@@ -6065,6 +6044,7 @@ CONFIG_PVPANIC=y
CONFIG_MLX_PLATFORM=m
CONFIG_INTEL_TURBO_MAX_3=y
# CONFIG_I2C_MULTI_INSTANTIATE is not set
+# CONFIG_INTEL_ATOMISP2_PM is not set
CONFIG_PMC_ATOM=y
# CONFIG_CHROME_PLATFORMS is not set
CONFIG_MELLANOX_PLATFORM=y
@@ -6100,9 +6080,14 @@ CONFIG_IOMMU_SUPPORT=y
#
# Generic IOMMU Pagetable Support
#
+
+#
+# Generic PASID table support
+#
# CONFIG_IOMMU_DEBUGFS is not set
CONFIG_IOMMU_DEFAULT_PASSTHROUGH=y
CONFIG_IOMMU_IOVA=y
+# CONFIG_IOMMU_PROCESS is not set
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_V2=m
CONFIG_DMAR_TABLE=y
@@ -6111,6 +6096,7 @@ CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
CONFIG_IRQ_REMAP=y
+# CONFIG_SMMU_BYPASS_DEV is not set
#
# Remoteproc drivers
@@ -6153,6 +6139,8 @@ CONFIG_IRQ_REMAP=y
# Xilinx SoC drivers
#
# CONFIG_XILINX_VCU is not set
+CONFIG_SOC_HISILICON_LBC=m
+CONFIG_SOC_HISILICON_SYSCTL=m
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
@@ -6581,6 +6569,7 @@ CONFIG_NVMEM=y
# CONFIG_FPGA is not set
# CONFIG_UNISYS_VISORBUS is not set
# CONFIG_SIOX is not set
+# CONFIG_UACCE is not set
# CONFIG_SLIMBUS is not set
#
@@ -6676,7 +6665,6 @@ CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_FAT_DEFAULT_UTF8 is not set
-# CONFIG_NTFS_FS is not set
#
# Pseudo filesystems
@@ -6927,6 +6915,7 @@ CONFIG_IMA_APPRAISE_BOOTPARAM=y
CONFIG_IMA_TRUSTED_KEYRING=y
# CONFIG_IMA_BLACKLIST_KEYRING is not set
# CONFIG_IMA_LOAD_X509 is not set
+# CONFIG_IMA_DIGEST_LIST is not set
CONFIG_EVM=y
CONFIG_EVM_ATTR_FSUUID=y
# CONFIG_EVM_ADD_XATTRS is not set
@@ -7142,13 +7131,15 @@ CONFIG_CRYPTO_DEV_CHELSIO=m
CONFIG_CHELSIO_IPSEC_INLINE=y
# CONFIG_CRYPTO_DEV_CHELSIO_TLS is not set
CONFIG_CRYPTO_DEV_VIRTIO=m
-# CONFIG_CRYPTO_DEV_HISILICON is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
CONFIG_X509_CERTIFICATE_PARSER=y
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
CONFIG_SIGNED_PE_FILE_VERIFICATION=y
+# CONFIG_PGP_LIBRARY is not set
+# CONFIG_PGP_KEY_PARSER is not set
+# CONFIG_PGP_PRELOAD is not set
#
# Certificates for signature checking
@@ -7160,6 +7151,7 @@ CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
CONFIG_SYSTEM_BLACKLIST_HASH_LIST=""
+# CONFIG_PGP_PRELOAD_PUBLIC_KEYS is not set
CONFIG_BINARY_PRINTF=y
#
@@ -7191,6 +7183,7 @@ CONFIG_CRC32_SLICEBY8=y
CONFIG_CRC7=m
CONFIG_LIBCRC32C=m
CONFIG_CRC8=m
+CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
@@ -7325,6 +7318,8 @@ CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_HAVE_ARCH_KASAN=y
# CONFIG_KASAN is not set
CONFIG_ARCH_HAS_KCOV=y
+CONFIG_CC_HAS_SANCOV_TRACE_PC=y
+# CONFIG_KCOV is not set
CONFIG_DEBUG_SHIRQ=y
#
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 8a05478477dc..4c9de1198798 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -1,5 +1,13 @@
+#
+# Automatically generated file; DO NOT EDIT.
+# Linux/x86 4.19.138 Kernel Configuration
+#
+
+#
+# Compiler: gcc 7.5.0
+#
CONFIG_CC_IS_GCC=y
-CONFIG_GCC_VERSION=50400
+CONFIG_GCC_VERSION=70500
CONFIG_CLANG_VERSION=0
CONFIG_CC_HAS_ASM_GOTO=y
CONFIG_IRQ_WORK=y
@@ -151,8 +159,8 @@ CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
-CONFIG_CHECKPOINT_RESTORE=y
CONFIG_SCHED_STEAL=y
+CONFIG_CHECKPOINT_RESTORE=y
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
@@ -303,6 +311,7 @@ CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_XEN_DEBUG_FS is not set
# CONFIG_XEN_PVH is not set
CONFIG_KVM_GUEST=y
+CONFIG_ARCH_CPUIDLE_HALTPOLL=y
# CONFIG_KVM_DEBUG_FS is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
@@ -322,6 +331,7 @@ CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
+CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
@@ -700,17 +710,15 @@ CONFIG_FW_CFG_SYSFS=y
#
# EFI (Extensible Firmware Interface) Support
#
-#CONFIG_EFI_VARS is not set
+# CONFIG_EFI_VARS is not set
CONFIG_EFI_ESRT=y
CONFIG_EFI_RUNTIME_MAP=y
# CONFIG_EFI_FAKE_MEMMAP is not set
CONFIG_EFI_RUNTIME_WRAPPERS=y
-# CONFIG_EFI_BOOTLOADER_CONTROL is not set
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
CONFIG_APPLE_PROPERTIES=y
# CONFIG_RESET_ATTACK_MITIGATION is not set
-CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y
CONFIG_EFI_DEV_PATH_PARSER=y
@@ -731,6 +739,7 @@ CONFIG_KVM_VFIO=y
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
CONFIG_KVM_COMPAT=y
CONFIG_HAVE_KVM_IRQ_BYPASS=y
+CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
@@ -832,8 +841,6 @@ CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
-CONFIG_ARCH_HAS_REFCOUNT=y
-# CONFIG_REFCOUNT_FULL is not set
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
CONFIG_ARCH_USE_MEMREMAP_PROT=y
@@ -842,9 +849,8 @@ CONFIG_ARCH_USE_MEMREMAP_PROT=y
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
-CONFIG_PLUGIN_HOSTCC="g++"
+CONFIG_PLUGIN_HOSTCC=""
CONFIG_HAVE_GCC_PLUGINS=y
-# CONFIG_GCC_PLUGINS is not set
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
@@ -2128,7 +2134,6 @@ CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
# CONFIG_CHR_DEV_OSST is not set
CONFIG_BLK_DEV_SR=m
-CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_ENCLOSURE=m
@@ -2509,6 +2514,8 @@ CONFIG_BE2NET_LANCER=y
CONFIG_BE2NET_SKYHAWK=y
# CONFIG_NET_VENDOR_EZCHIP is not set
# CONFIG_NET_VENDOR_HP is not set
+CONFIG_NET_VENDOR_HUAWEI=y
+# CONFIG_HINIC is not set
CONFIG_BMA=m
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
@@ -3609,6 +3616,7 @@ CONFIG_GPIO_ICH=m
# PCI GPIO expanders
#
# CONFIG_GPIO_AMD8111 is not set
+# CONFIG_GPIO_BT8XX is not set
# CONFIG_GPIO_ML_IOH is not set
# CONFIG_GPIO_PCI_IDIO_16 is not set
# CONFIG_GPIO_PCIE_IDIO_24 is not set
@@ -4085,9 +4093,6 @@ CONFIG_VIDEO_V4L2=m
# CONFIG_VIDEO_ADV_DEBUG is not set
# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
CONFIG_VIDEO_TUNER=m
-CONFIG_VIDEOBUF_GEN=m
-CONFIG_VIDEOBUF_DMA_SG=m
-CONFIG_VIDEOBUF_VMALLOC=m
CONFIG_DVB_CORE=m
# CONFIG_DVB_MMAP is not set
CONFIG_DVB_NET=y
@@ -4161,7 +4166,6 @@ CONFIG_USB_PWC=m
# CONFIG_USB_PWC_DEBUG is not set
CONFIG_USB_PWC_INPUT_EVDEV=y
# CONFIG_VIDEO_CPIA2 is not set
-CONFIG_USB_ZR364XX=m
CONFIG_USB_STKWEBCAM=m
CONFIG_USB_S2255=m
# CONFIG_VIDEO_USBTV is not set
@@ -4184,13 +4188,6 @@ CONFIG_VIDEO_USBVISION=m
CONFIG_VIDEO_AU0828=m
CONFIG_VIDEO_AU0828_V4L2=y
# CONFIG_VIDEO_AU0828_RC is not set
-CONFIG_VIDEO_CX231XX=m
-CONFIG_VIDEO_CX231XX_RC=y
-CONFIG_VIDEO_CX231XX_ALSA=m
-CONFIG_VIDEO_CX231XX_DVB=m
-CONFIG_VIDEO_TM6000=m
-CONFIG_VIDEO_TM6000_ALSA=m
-CONFIG_VIDEO_TM6000_DVB=m
#
# Digital TV USB devices
@@ -4275,16 +4272,11 @@ CONFIG_VIDEO_IVTV=m
# CONFIG_VIDEO_IVTV_DEPRECATED_IOCTLS is not set
# CONFIG_VIDEO_IVTV_ALSA is not set
CONFIG_VIDEO_FB_IVTV=m
-# CONFIG_VIDEO_HEXIUM_GEMINI is not set
-# CONFIG_VIDEO_HEXIUM_ORION is not set
-# CONFIG_VIDEO_MXB is not set
# CONFIG_VIDEO_DT3155 is not set
#
# Media capture/analog/hybrid TV support
#
-CONFIG_VIDEO_CX18=m
-CONFIG_VIDEO_CX18_ALSA=m
CONFIG_VIDEO_CX23885=m
CONFIG_MEDIA_ALTERA_CI=m
# CONFIG_VIDEO_CX25821 is not set
@@ -4294,8 +4286,6 @@ CONFIG_VIDEO_CX88_BLACKBIRD=m
CONFIG_VIDEO_CX88_DVB=m
# CONFIG_VIDEO_CX88_ENABLE_VP3054 is not set
CONFIG_VIDEO_CX88_MPEG=m
-CONFIG_VIDEO_BT848=m
-CONFIG_DVB_BT8XX=m
CONFIG_VIDEO_SAA7134=m
CONFIG_VIDEO_SAA7134_ALSA=m
CONFIG_VIDEO_SAA7134_RC=y
@@ -4305,14 +4295,9 @@ CONFIG_VIDEO_SAA7164=m
#
# Media digital TV PCI Adapters
#
-CONFIG_DVB_AV7110_IR=y
-CONFIG_DVB_AV7110=m
-CONFIG_DVB_AV7110_OSD=y
CONFIG_DVB_BUDGET_CORE=m
CONFIG_DVB_BUDGET=m
CONFIG_DVB_BUDGET_CI=m
-CONFIG_DVB_BUDGET_AV=m
-CONFIG_DVB_BUDGET_PATCH=m
CONFIG_DVB_B2C2_FLEXCOP_PCI=m
# CONFIG_DVB_B2C2_FLEXCOP_PCI_DEBUG is not set
CONFIG_DVB_PLUTO2=m
@@ -4379,7 +4364,6 @@ CONFIG_VIDEOBUF2_DMA_SG=m
CONFIG_VIDEOBUF2_DVB=m
CONFIG_DVB_B2C2_FLEXCOP=m
CONFIG_VIDEO_SAA7146=m
-CONFIG_VIDEO_SAA7146_VV=m
CONFIG_SMS_SIANO_MDTV=m
CONFIG_SMS_SIANO_RC=y
# CONFIG_SMS_SIANO_DEBUGFS is not set
@@ -4394,11 +4378,8 @@ CONFIG_VIDEO_IR_I2C=m
#
# Audio decoders, processors and mixers
#
-CONFIG_VIDEO_TVAUDIO=m
-CONFIG_VIDEO_TDA7432=m
CONFIG_VIDEO_MSP3400=m
CONFIG_VIDEO_CS3308=m
-CONFIG_VIDEO_CS5345=m
CONFIG_VIDEO_CS53L32A=m
CONFIG_VIDEO_WM8775=m
CONFIG_VIDEO_WM8739=m
@@ -4522,7 +4503,6 @@ CONFIG_DVB_MN88473=m
#
# DVB-S (satellite) frontends
#
-CONFIG_DVB_CX24110=m
CONFIG_DVB_CX24123=m
CONFIG_DVB_MT312=m
CONFIG_DVB_ZL10036=m
@@ -4535,12 +4515,10 @@ CONFIG_DVB_STV6110=m
CONFIG_DVB_STV0900=m
CONFIG_DVB_TDA8083=m
CONFIG_DVB_TDA10086=m
-CONFIG_DVB_TDA8261=m
CONFIG_DVB_VES1X93=m
CONFIG_DVB_TUNER_ITD1000=m
CONFIG_DVB_TUNER_CX24113=m
CONFIG_DVB_TDA826X=m
-CONFIG_DVB_TUA6100=m
CONFIG_DVB_CX24116=m
CONFIG_DVB_CX24117=m
CONFIG_DVB_CX24120=m
@@ -4553,8 +4531,6 @@ CONFIG_DVB_TDA10071=m
#
# DVB-T (terrestrial) frontends
#
-CONFIG_DVB_SP8870=m
-CONFIG_DVB_SP887X=m
CONFIG_DVB_CX22700=m
CONFIG_DVB_CX22702=m
CONFIG_DVB_DRXD=m
@@ -4590,7 +4566,6 @@ CONFIG_DVB_STV0297=m
# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
#
CONFIG_DVB_NXT200X=m
-CONFIG_DVB_OR51211=m
CONFIG_DVB_OR51132=m
CONFIG_DVB_BCM3510=m
CONFIG_DVB_LGDT330X=m
@@ -6110,6 +6085,7 @@ CONFIG_IOMMU_SUPPORT=y
# CONFIG_IOMMU_DEBUGFS is not set
CONFIG_IOMMU_DEFAULT_PASSTHROUGH=y
CONFIG_IOMMU_IOVA=y
+# CONFIG_IOMMU_PROCESS is not set
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_V2=m
CONFIG_DMAR_TABLE=y
@@ -6894,7 +6870,7 @@ CONFIG_SECURITY_NETWORK=y
CONFIG_PAGE_TABLE_ISOLATION=y
CONFIG_SECURITY_INFINIBAND=y
CONFIG_SECURITY_NETWORK_XFRM=y
-# CONFIG_SECURITY_PATH is not set
+CONFIG_SECURITY_PATH=y
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65535
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
@@ -7349,6 +7325,8 @@ CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_HAVE_ARCH_KASAN=y
# CONFIG_KASAN is not set
CONFIG_ARCH_HAS_KCOV=y
+CONFIG_CC_HAS_SANCOV_TRACE_PC=y
+# CONFIG_KCOV is not set
CONFIG_DEBUG_SHIRQ=y
#
diff --git a/arch/x86/configs/storage_ci_defconfig b/arch/x86/configs/storage_ci_defconfig
index c7344e20d964..889f42e8a968 100644
--- a/arch/x86/configs/storage_ci_defconfig
+++ b/arch/x86/configs/storage_ci_defconfig
@@ -1,14 +1,15 @@
#
# Automatically generated file; DO NOT EDIT.
-# Linux/x86 4.19.44 Kernel Configuration
+# Linux/x86 4.19.138 Kernel Configuration
#
#
-# Compiler: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
+# Compiler: gcc (Ubuntu 7.5.0-6ubuntu2) 7.5.0
#
CONFIG_CC_IS_GCC=y
-CONFIG_GCC_VERSION=50400
+CONFIG_GCC_VERSION=70500
CONFIG_CLANG_VERSION=0
+CONFIG_CC_HAS_ASM_GOTO=y
CONFIG_CONSTRUCTORS=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y
@@ -47,6 +48,7 @@ CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
+CONFIG_KTASK=y
#
# IRQ subsystem
@@ -157,8 +159,8 @@ CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
-# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_SCHED_STEAL=y
+# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
@@ -170,6 +172,7 @@ CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
+CONFIG_INITRAMFS_FILE_METADATA=""
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
@@ -298,6 +301,7 @@ CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_SPINLOCKS is not set
# CONFIG_XEN is not set
CONFIG_KVM_GUEST=y
+CONFIG_ARCH_CPUIDLE_HALTPOLL=y
# CONFIG_KVM_DEBUG_FS is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
@@ -317,6 +321,7 @@ CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
+CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
@@ -391,6 +396,9 @@ CONFIG_X86_SMAP=y
CONFIG_X86_INTEL_UMIP=y
CONFIG_X86_INTEL_MPX=y
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
+CONFIG_X86_INTEL_TSX_MODE_OFF=y
+# CONFIG_X86_INTEL_TSX_MODE_ON is not set
+# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_MIXED=y
@@ -422,6 +430,7 @@ CONFIG_LEGACY_VSYSCALL_EMULATE=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_MODIFY_LDT_SYSCALL=y
CONFIG_HAVE_LIVEPATCH_FTRACE=y
+CONFIG_HAVE_LIVEPATCH_WO_FTRACE=y
#
# Enable Livepatch
@@ -551,6 +560,8 @@ CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
CONFIG_CPU_IDLE_GOV_TEO=y
+# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
+CONFIG_HALTPOLL_CPUIDLE=y
CONFIG_INTEL_IDLE=y
#
@@ -607,6 +618,7 @@ CONFIG_HOTPLUG_PCI_ACPI=y
# DesignWare PCI Core Support
#
# CONFIG_PCIE_DW_PLAT_HOST is not set
+CONFIG_HISILICON_PCIE_CAE=m
#
# PCI Endpoint
@@ -627,7 +639,6 @@ CONFIG_AMD_NB=y
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
-# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
@@ -646,7 +657,6 @@ CONFIG_FIRMWARE_MEMMAP=y
# CONFIG_DMIID is not set
# CONFIG_DMI_SYSFS is not set
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
-CONFIG_ISCSI_IBFT_FIND=y
# CONFIG_ISCSI_IBFT is not set
# CONFIG_FW_CFG_SYSFS is not set
# CONFIG_GOOGLE_FIRMWARE is not set
@@ -663,7 +673,6 @@ CONFIG_EFI_RUNTIME_WRAPPERS=y
# CONFIG_EFI_TEST is not set
# CONFIG_APPLE_PROPERTIES is not set
# CONFIG_RESET_ATTACK_MITIGATION is not set
-CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y
@@ -760,8 +769,6 @@ CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
-CONFIG_ARCH_HAS_REFCOUNT=y
-# CONFIG_REFCOUNT_FULL is not set
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
#
@@ -1348,7 +1355,6 @@ CONFIG_BLK_DEV_SD=y
# CONFIG_CHR_DEV_ST is not set
# CONFIG_CHR_DEV_OSST is not set
CONFIG_BLK_DEV_SR=y
-CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=y
# CONFIG_CHR_DEV_SCH is not set
CONFIG_SCSI_CONSTANTS=y
@@ -1670,6 +1676,8 @@ CONFIG_NET_VENDOR_EMULEX=y
CONFIG_NET_VENDOR_EZCHIP=y
CONFIG_NET_VENDOR_HP=y
# CONFIG_HP100 is not set
+CONFIG_NET_VENDOR_HUAWEI=y
+# CONFIG_BMA is not set
CONFIG_NET_VENDOR_I825XX=y
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
@@ -2049,10 +2057,10 @@ CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
#
# Frame buffer Devices
#
-CONFIG_FB=y
-# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
+CONFIG_FB=y
+# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
@@ -2537,7 +2545,6 @@ CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
# CONFIG_FAT_DEFAULT_UTF8 is not set
-# CONFIG_NTFS_FS is not set
#
# Pseudo filesystems
@@ -2962,6 +2969,9 @@ CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
CONFIG_X509_CERTIFICATE_PARSER=y
CONFIG_PKCS7_MESSAGE_PARSER=y
+# CONFIG_PGP_LIBRARY is not set
+# CONFIG_PGP_KEY_PARSER is not set
+# CONFIG_PGP_PRELOAD is not set
#
# Certificates for signature checking
@@ -2971,6 +2981,7 @@ CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
+# CONFIG_PGP_PRELOAD_PUBLIC_KEYS is not set
CONFIG_BINARY_PRINTF=y
#
@@ -3002,6 +3013,7 @@ CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=y
# CONFIG_CRC8 is not set
+CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
@@ -3128,6 +3140,8 @@ CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_HAVE_ARCH_KASAN=y
# CONFIG_KASAN is not set
CONFIG_ARCH_HAS_KCOV=y
+CONFIG_CC_HAS_SANCOV_TRACE_PC=y
+# CONFIG_KCOV is not set
CONFIG_DEBUG_SHIRQ=y
#
diff --git a/arch/x86/configs/syzkaller_defconfig b/arch/x86/configs/syzkaller_defconfig
index fe45137f8529..061f2fe87382 100644
--- a/arch/x86/configs/syzkaller_defconfig
+++ b/arch/x86/configs/syzkaller_defconfig
@@ -1,14 +1,15 @@
#
# Automatically generated file; DO NOT EDIT.
-# Linux/x86 4.19.32 Kernel Configuration
+# Linux/x86 4.19.138 Kernel Configuration
#
#
-# Compiler: gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
+# Compiler: gcc (Ubuntu 7.5.0-6ubuntu2) 7.5.0
#
CONFIG_CC_IS_GCC=y
-CONFIG_GCC_VERSION=70300
+CONFIG_GCC_VERSION=70500
CONFIG_CLANG_VERSION=0
+CONFIG_CC_HAS_ASM_GOTO=y
CONFIG_CONSTRUCTORS=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y
@@ -47,6 +48,7 @@ CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
+CONFIG_KTASK=y
#
# IRQ subsystem
@@ -158,6 +160,7 @@ CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
+# CONFIG_SCHED_STEAL is not set
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_SYSFS_DEPRECATED is not set
@@ -170,6 +173,7 @@ CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
+CONFIG_INITRAMFS_FILE_METADATA=""
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
@@ -311,6 +315,7 @@ CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_XEN_DEBUG_FS is not set
# CONFIG_XEN_PVH is not set
CONFIG_KVM_GUEST=y
+CONFIG_ARCH_CPUIDLE_HALTPOLL=y
# CONFIG_KVM_DEBUG_FS is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
@@ -330,6 +335,7 @@ CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
+CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
@@ -377,6 +383,7 @@ CONFIG_X86_DIRECT_GBPAGES=y
CONFIG_ARCH_HAS_MEM_ENCRYPT=y
# CONFIG_AMD_MEM_ENCRYPT is not set
CONFIG_NUMA=y
+CONFIG_NUMA_AWARE_SPINLOCKS=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
@@ -403,6 +410,9 @@ CONFIG_X86_SMAP=y
CONFIG_X86_INTEL_UMIP=y
CONFIG_X86_INTEL_MPX=y
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
+CONFIG_X86_INTEL_TSX_MODE_OFF=y
+# CONFIG_X86_INTEL_TSX_MODE_ON is not set
+# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_MIXED=y
@@ -433,6 +443,7 @@ CONFIG_LEGACY_VSYSCALL_EMULATE=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_MODIFY_LDT_SYSCALL=y
CONFIG_HAVE_LIVEPATCH_FTRACE=y
+CONFIG_HAVE_LIVEPATCH_WO_FTRACE=y
#
# Enable Livepatch
@@ -562,6 +573,9 @@ CONFIG_X86_INTEL_PSTATE=y
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
+# CONFIG_CPU_IDLE_GOV_TEO is not set
+# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
+CONFIG_HALTPOLL_CPUIDLE=y
CONFIG_INTEL_IDLE=y
#
@@ -620,6 +634,7 @@ CONFIG_HOTPLUG_PCI_ACPI=y
# DesignWare PCI Core Support
#
# CONFIG_PCIE_DW_PLAT_HOST is not set
+CONFIG_HISILICON_PCIE_CAE=m
#
# PCI Endpoint
@@ -647,7 +662,6 @@ CONFIG_CARDBUS=y
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
-# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
@@ -666,7 +680,6 @@ CONFIG_FIRMWARE_MEMMAP=y
CONFIG_DMIID=y
CONFIG_DMI_SYSFS=y
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
-CONFIG_ISCSI_IBFT_FIND=y
# CONFIG_ISCSI_IBFT is not set
# CONFIG_FW_CFG_SYSFS is not set
# CONFIG_GOOGLE_FIRMWARE is not set
@@ -782,8 +795,6 @@ CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
-CONFIG_ARCH_HAS_REFCOUNT=y
-# CONFIG_REFCOUNT_FULL is not set
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
#
@@ -1610,6 +1621,7 @@ CONFIG_IPVLAN=y
# CONFIG_IPVTAP is not set
# CONFIG_VXLAN is not set
# CONFIG_GENEVE is not set
+# CONFIG_GTP is not set
# CONFIG_MACSEC is not set
# CONFIG_NETCONSOLE is not set
CONFIG_TUN=y
@@ -1694,6 +1706,9 @@ CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EZCHIP=y
# CONFIG_NET_VENDOR_HP is not set
+CONFIG_NET_VENDOR_HUAWEI=y
+# CONFIG_HINIC is not set
+# CONFIG_BMA is not set
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
@@ -1795,7 +1810,7 @@ CONFIG_SWPHY=y
#
# CONFIG_AMD_PHY is not set
# CONFIG_AQUANTIA_PHY is not set
-# CONFIG_ASIX_PHY is not set
+# CONFIG_AX88796B_PHY is not set
# CONFIG_AT803X_PHY is not set
# CONFIG_BCM7XXX_PHY is not set
# CONFIG_BCM87XX_PHY is not set
@@ -2039,6 +2054,7 @@ CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_N_HDLC is not set
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
+CONFIG_LDISC_AUTOLOAD=y
CONFIG_DEVMEM=y
# CONFIG_DEVKMEM is not set
@@ -2098,7 +2114,6 @@ CONFIG_HW_RANDOM_INTEL=y
CONFIG_HW_RANDOM_AMD=y
CONFIG_HW_RANDOM_VIA=y
CONFIG_NVRAM=y
-# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
CONFIG_RAW_DRIVER=y
@@ -2475,10 +2490,10 @@ CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
#
# Frame buffer Devices
#
-CONFIG_FB=y
-# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
+CONFIG_FB=y
+# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
@@ -2825,7 +2840,6 @@ CONFIG_USB_SERIAL_GENERIC=y
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
-# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
@@ -3108,7 +3122,6 @@ CONFIG_STAGING=y
# Gasket devices
#
# CONFIG_STAGING_GASKET_FRAMEWORK is not set
-# CONFIG_XIL_AXIS_FIFO is not set
# CONFIG_EROFS_FS is not set
CONFIG_X86_PLATFORM_DEVICES=y
# CONFIG_ACER_WIRELESS is not set
@@ -3148,6 +3161,7 @@ CONFIG_PVPANIC=y
# CONFIG_SURFACE_PRO3_BUTTON is not set
# CONFIG_INTEL_PUNIT_IPC is not set
# CONFIG_INTEL_TURBO_MAX_3 is not set
+# CONFIG_INTEL_ATOMISP2_PM is not set
CONFIG_PMC_ATOM=y
# CONFIG_CHROME_PLATFORMS is not set
# CONFIG_MELLANOX_PLATFORM is not set
@@ -3176,6 +3190,10 @@ CONFIG_IOMMU_SUPPORT=y
#
# Generic IOMMU Pagetable Support
#
+
+#
+# Generic PASID table support
+#
# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
CONFIG_IOMMU_IOVA=y
@@ -3187,6 +3205,7 @@ CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
CONFIG_IRQ_REMAP=y
+# CONFIG_SMMU_BYPASS_DEV is not set
#
# Remoteproc drivers
@@ -3229,6 +3248,8 @@ CONFIG_IRQ_REMAP=y
# Xilinx SoC drivers
#
# CONFIG_XILINX_VCU is not set
+CONFIG_SOC_HISILICON_LBC=m
+CONFIG_SOC_HISILICON_SYSCTL=m
CONFIG_PM_DEVFREQ=y
#
@@ -3299,6 +3320,7 @@ CONFIG_NVMEM=y
CONFIG_PM_OPP=y
# CONFIG_UNISYS_VISORBUS is not set
# CONFIG_SIOX is not set
+# CONFIG_UACCE is not set
# CONFIG_SLIMBUS is not set
#
@@ -3359,7 +3381,6 @@ CONFIG_AUTOFS_FS=y
#
# CONFIG_MSDOS_FS is not set
# CONFIG_VFAT_FS is not set
-# CONFIG_NTFS_FS is not set
#
# Pseudo filesystems
@@ -3536,6 +3557,7 @@ CONFIG_IMA_APPRAISE_BOOTPARAM=y
CONFIG_IMA_TRUSTED_KEYRING=y
# CONFIG_IMA_BLACKLIST_KEYRING is not set
# CONFIG_IMA_LOAD_X509 is not set
+# CONFIG_IMA_DIGEST_LIST is not set
CONFIG_EVM=y
CONFIG_EVM_ATTR_FSUUID=y
# CONFIG_EVM_ADD_XATTRS is not set
@@ -3726,13 +3748,15 @@ CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_QAT_C3XXXVF is not set
# CONFIG_CRYPTO_DEV_QAT_C62XVF is not set
# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set
-# CONFIG_CRYPTO_DEV_HISILICON is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
CONFIG_X509_CERTIFICATE_PARSER=y
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
CONFIG_SIGNED_PE_FILE_VERIFICATION=y
+# CONFIG_PGP_LIBRARY is not set
+# CONFIG_PGP_KEY_PARSER is not set
+# CONFIG_PGP_PRELOAD is not set
#
# Certificates for signature checking
@@ -3743,6 +3767,7 @@ CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
+# CONFIG_PGP_PRELOAD_PUBLIC_KEYS is not set
CONFIG_BINARY_PRINTF=y
#
@@ -3773,6 +3798,7 @@ CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC7 is not set
# CONFIG_LIBCRC32C is not set
# CONFIG_CRC8 is not set
+CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
--
2.25.1
From: Xie XiuQi <xiexiuqi@huawei.com>
hulk inclusion
category: feature
feature: support 1822 on x86 platform
After commit 5d57d1e24808 ("net/hinic: Add support for X86 Arch"), the
hinic driver supports the x86 platform, so enable this config by
default.
Link: https://gitee.com/openeuler/kernel/issues/I1DC1F
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
arch/x86/configs/hulk_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
arch/x86/configs/syzkaller_defconfig | 1 +
3 files changed, 3 insertions(+)
diff --git a/arch/x86/configs/hulk_defconfig b/arch/x86/configs/hulk_defconfig
index ddd209c39445..6a1d3307e34e 100644
--- a/arch/x86/configs/hulk_defconfig
+++ b/arch/x86/configs/hulk_defconfig
@@ -2526,6 +2526,7 @@ CONFIG_NET_VENDOR_INTEL=y
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_E1000E_HWTS=y
+CONFIG_HINIC=m
CONFIG_IGB=m
CONFIG_IGB_HWMON=y
CONFIG_IGB_DCA=y
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 4c9de1198798..747c6bd5967f 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -2523,6 +2523,7 @@ CONFIG_NET_VENDOR_INTEL=y
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_E1000E_HWTS=y
+CONFIG_HINIC=m
CONFIG_IGB=m
CONFIG_IGB_HWMON=y
CONFIG_IGB_DCA=y
diff --git a/arch/x86/configs/syzkaller_defconfig b/arch/x86/configs/syzkaller_defconfig
index 061f2fe87382..ff5f843e09c4 100644
--- a/arch/x86/configs/syzkaller_defconfig
+++ b/arch/x86/configs/syzkaller_defconfig
@@ -1716,6 +1716,7 @@ CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_E1000E_HWTS=y
# CONFIG_IGB is not set
+CONFIG_HINIC=m
# CONFIG_IGBVF is not set
# CONFIG_IXGB is not set
# CONFIG_IXGBE is not set
--
2.25.1
hulk inclusion
category: bugfix
bugzilla:
CVE: NA
---------------------------
Add the skcd->no_refcnt check that was missed when backporting
ad0f75e5f57c ("cgroup: fix cgroup_sk_alloc() for sk_clone_lock()").
This patch is needed in stable-4.9, stable-4.14 and stable-4.19.
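For illustration, a minimal userspace sketch of the added guard (an
assumption-laden model, not the kernel code itself): in this backport,
bit 1 of the is_data byte is taken to stand in for the no_refcnt flag
from the upstream fix, so "skcd->is_data & 2" makes the clone path skip
the cgroup refcount work. The struct and function names below are
hypothetical stand-ins.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the kernel's sock_cgroup_data; only the
 * fields the check touches are modeled. Bit 1 of is_data is assumed to
 * carry the no_refcnt flag introduced by upstream ad0f75e5f57c. */
struct skcd_sketch {
	uint8_t  is_data;  /* bit 0: inline prio/classid data, bit 1: no_refcnt */
	uint64_t val;      /* non-zero once cgroup data is attached */
};

static void cgroup_sk_clone_sketch(struct skcd_sketch *skcd)
{
	if (skcd->val) {
		if (skcd->is_data & 2) {  /* no_refcnt: clone holds no reference */
			puts("no_refcnt set: skip cgroup refcount");
			return;
		}
		puts("take cgroup reference for the cloned socket");
	}
}

int main(void)
{
	struct skcd_sketch with_flag = { .is_data = 0x2, .val = 1 };
	struct skcd_sketch without_flag = { .is_data = 0x0, .val = 1 };

	cgroup_sk_clone_sketch(&with_flag);     /* hits the new early return */
	cgroup_sk_clone_sketch(&without_flag);  /* takes the reference as before */
	return 0;
}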
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
kernel/cgroup/cgroup.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 12fd72d94b7b..e38da89fad66 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5955,6 +5955,8 @@ void cgroup_sk_clone(struct sock_cgroup_data *skcd)
{
/* Socket clone path */
if (skcd->val) {
+ if (skcd->is_data & 2)
+ return;
/*
* We might be cloning a socket which is left in an empty
* cgroup and the cgroup might have already been rmdir'd.
--
2.25.1
From: Chiqijun <chiqijun@huawei.com>
driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
Fix misspelled words and wrong print formats.
Signed-off-by: Chiqijun <chiqijun@huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
.../net/ethernet/huawei/hinic/hinic_api_cmd.c | 4 +-
drivers/net/ethernet/huawei/hinic/hinic_cfg.c | 20 +++---
.../net/ethernet/huawei/hinic/hinic_cmdq.c | 4 +-
drivers/net/ethernet/huawei/hinic/hinic_dcb.c | 34 +++++-----
drivers/net/ethernet/huawei/hinic/hinic_eqs.c | 4 +-
.../net/ethernet/huawei/hinic/hinic_ethtool.c | 20 +++---
.../net/ethernet/huawei/hinic/hinic_hwdev.c | 40 +++++------
drivers/net/ethernet/huawei/hinic/hinic_lld.c | 20 +++---
.../net/ethernet/huawei/hinic/hinic_main.c | 6 +-
.../net/ethernet/huawei/hinic/hinic_mbox.c | 9 +--
.../net/ethernet/huawei/hinic/hinic_mgmt.c | 20 +++---
.../net/ethernet/huawei/hinic/hinic_nic_cfg.c | 34 +++++-----
.../net/ethernet/huawei/hinic/hinic_nic_dbg.c | 6 +-
.../net/ethernet/huawei/hinic/hinic_nic_io.c | 2 +-
.../net/ethernet/huawei/hinic/hinic_nictool.c | 68 +++++++++----------
.../ethernet/huawei/hinic/hinic_port_cmd.h | 2 +-
.../net/ethernet/huawei/hinic/hinic_sriov.c | 4 +-
drivers/net/ethernet/huawei/hinic/hinic_tx.c | 4 +-
drivers/net/ethernet/huawei/hinic/hinic_wq.c | 2 +-
19 files changed, 154 insertions(+), 149 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_api_cmd.c b/drivers/net/ethernet/huawei/hinic/hinic_api_cmd.c
index 5a1e8655be38..6ce69dc73064 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_api_cmd.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_api_cmd.c
@@ -129,7 +129,7 @@ static void dump_api_chain_reg(struct hinic_api_cmd_chain *chain)
addr = HINIC_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
val = hinic_hwif_read_reg(chain->hwdev->hwif, addr);
- sdk_err(dev, "Chain type: 0x%x, cpld error: 0x%x, check error: 0x%x, current fsm: 0x%x\n",
+ sdk_err(dev, "Chain type: 0x%x, cpld error: 0x%x, check error: 0x%x, current fsm: 0x%x\n",
chain->chain_type, HINIC_API_CMD_STATUS_GET(val, CPLD_ERR),
HINIC_API_CMD_STATUS_GET(val, CHKSUM_ERR),
HINIC_API_CMD_STATUS_GET(val, FSM));
@@ -161,7 +161,7 @@ static int chain_busy(struct hinic_api_cmd_chain *chain)
resp_header = be64_to_cpu(ctxt->resp->header);
if (ctxt->status &&
!HINIC_API_CMD_RESP_HEADER_VALID(resp_header)) {
- sdk_err(dev, "Context(0x%x) busy!, pi: %d, resp_header: 0x%08x%08x\n",
+ sdk_err(dev, "Context(0x%x) busy, pi: %d, resp_header: 0x%08x%08x\n",
ctxt->status, chain->prod_idx,
upper_32_bits(resp_header),
lower_32_bits(resp_header));
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_cfg.c b/drivers/net/ethernet/huawei/hinic/hinic_cfg.c
index 18937d7d9057..303225e5689e 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_cfg.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_cfg.c
@@ -629,7 +629,7 @@ static int get_cap_from_pf(struct hinic_hwdev *dev, enum func_type type)
err = hinic_msg_to_mgmt_sync(dev, HINIC_MOD_CFGM, HINIC_CFG_MBOX_CAP,
&dev_cap, in_len, &dev_cap, &out_len, 0);
if (err || dev_cap.status || !out_len) {
- sdk_err(dev->dev_hdl, "Failed to get capability from PF, err: %d, status: 0x%x, out size: 0x%x\n",
+ sdk_err(dev->dev_hdl, "Failed to get capability from PF, err: %d, status: 0x%x, out size: 0x%x\n",
err, dev_cap.status, out_len);
return -EFAULT;
}
@@ -925,7 +925,7 @@ int hinic_vector_to_eqn(void *hwdev, enum hinic_service_type type, int vector)
if (type != SERVICE_T_ROCE && type != SERVICE_T_IWARP) {
sdk_err(dev->dev_hdl,
- "Service type :%d, only RDMA service could get eqn by vector.\n",
+ "Service type: %d, only RDMA service could get eqn by vector\n",
type);
return -EINVAL;
}
@@ -982,7 +982,7 @@ static int cfg_enable_interrupt(struct hinic_hwdev *dev)
irq_info = cfg_mgmt->irq_param_info.alloc_info;
- sdk_info(dev->dev_hdl, "Interrupt type: %d, irq num: %d.\n",
+ sdk_info(dev->dev_hdl, "Interrupt type: %d, irq num: %d\n",
cfg_mgmt->svc_cap.interrupt_type, nreq);
switch (cfg_mgmt->svc_cap.interrupt_type) {
@@ -1001,7 +1001,7 @@ static int cfg_enable_interrupt(struct hinic_hwdev *dev)
actual_irq = pci_enable_msix_range(pcidev, entry,
VECTOR_THRESHOLD, nreq);
if (actual_irq < 0) {
- sdk_err(dev->dev_hdl, "Alloc msix entries with threshold 2 failed.\n");
+ sdk_err(dev->dev_hdl, "Alloc msix entries with threshold 2 failed\n");
kfree(entry);
return -ENOMEM;
}
@@ -1009,7 +1009,7 @@ static int cfg_enable_interrupt(struct hinic_hwdev *dev)
nreq = (u16)actual_irq;
cfg_mgmt->irq_param_info.num_total = nreq;
cfg_mgmt->irq_param_info.num_irq_remain = nreq;
- sdk_info(dev->dev_hdl, "Request %d msix vector success.\n",
+ sdk_info(dev->dev_hdl, "Request %d msix vector success\n",
nreq);
for (i = 0; i < nreq; ++i) {
@@ -1059,12 +1059,12 @@ int hinic_alloc_irqs(void *hwdev, enum hinic_service_type type, u16 num,
if (num > free_num_irq) {
if (free_num_irq == 0) {
sdk_err(dev->dev_hdl,
- "no free irq resource in cfg mgmt.\n");
+ "no free irq resource in cfg mgmt\n");
mutex_unlock(&irq_info->irq_mutex);
return -ENOMEM;
}
- sdk_warn(dev->dev_hdl, "only %d irq resource in cfg mgmt.\n",
+ sdk_warn(dev->dev_hdl, "only %d irq resource in cfg mgmt\n",
free_num_irq);
num = free_num_irq;
}
@@ -1125,7 +1125,7 @@ void hinic_free_irq(void *hwdev, enum hinic_service_type type, u32 irq_id)
alloc_info[i].free = CFG_FREE;
irq_info->num_irq_remain++;
if (irq_info->num_irq_remain > max_num_irq) {
- sdk_err(dev->dev_hdl, "Find target,but over range\n");
+ sdk_err(dev->dev_hdl, "Find target, but over range\n");
mutex_unlock(&irq_info->irq_mutex);
return;
}
@@ -1135,7 +1135,7 @@ void hinic_free_irq(void *hwdev, enum hinic_service_type type, u32 irq_id)
}
if (i >= max_num_irq)
- sdk_warn(dev->dev_hdl, "Irq %d don`t need to free\n", irq_id);
+ sdk_warn(dev->dev_hdl, "Irq %d don't need to free\n", irq_id);
mutex_unlock(&irq_info->irq_mutex);
}
@@ -1262,7 +1262,7 @@ void hinic_free_ceq(void *hwdev, enum hinic_service_type type, int ceq_id)
}
if (i >= num_ceq)
- sdk_warn(dev->dev_hdl, "ceq %d don`t need to free.\n", ceq_id);
+ sdk_warn(dev->dev_hdl, "ceq %d don't need to free\n", ceq_id);
mutex_unlock(&eq->eq_mutex);
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_cmdq.c
index 7293ff216a7c..6d78d22272f0 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_cmdq.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_cmdq.c
@@ -251,7 +251,7 @@ struct hinic_cmd_buf *hinic_alloc_cmd_buf(void *hwdev)
void *dev;
if (!hwdev) {
- pr_err("Failed to alloc cmd buf, Invalid hwdev\n");
+ pr_err("Failed to alloc cmd buf, invalid hwdev\n");
return NULL;
}
@@ -1516,7 +1516,7 @@ int hinic_cmdqs_init(struct hinic_hwdev *hwdev)
err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev,
&cmdqs->saved_wqs[cmdq_type], cmdq_type);
if (err) {
- sdk_err(hwdev->dev_hdl, "Failed to initialize cmdq type :%d\n",
+ sdk_err(hwdev->dev_hdl, "Failed to initialize cmdq type: %d\n",
cmdq_type);
goto init_cmdq_err;
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_dcb.c b/drivers/net/ethernet/huawei/hinic/hinic_dcb.c
index e17a91f8bd59..0c7ae9133407 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_dcb.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_dcb.c
@@ -338,8 +338,8 @@ u8 hinic_setup_dcb_tool(struct net_device *netdev, u8 *dcb_en, bool wr_flag)
if (wr_flag) {
if (nic_dev->max_qps < nic_dev->dcb_cfg.pg_tcs && *dcb_en) {
- netif_err(nic_dev, drv, netdev,
- "max_qps:%d is less than %d\n",
+ nicif_err(nic_dev, drv, netdev,
+ "max_qps: %d is less than %d\n",
nic_dev->max_qps, nic_dev->dcb_cfg.pg_tcs);
return 1;
}
@@ -357,7 +357,7 @@ u8 hinic_setup_dcb_tool(struct net_device *netdev, u8 *dcb_en, bool wr_flag)
mutex_lock(&nic_dev->nic_mutex);
if (!err)
- netif_info(nic_dev, drv, netdev, "%s DCB\n",
+ nicif_info(nic_dev, drv, netdev, "%s DCB\n",
*dcb_en ? "Enable" : "Disable");
} else {
*dcb_en = (u8)test_bit(HINIC_DCB_ENABLE, &nic_dev->flags);
@@ -386,15 +386,15 @@ static u8 hinic_dcbnl_set_state(struct net_device *netdev, u8 state)
return 0;
if (nic_dev->max_qps < nic_dev->dcb_cfg.pg_tcs && state) {
- netif_err(nic_dev, drv, netdev,
- "max_qps:%d is less than %d\n",
+ nicif_err(nic_dev, drv, netdev,
+ "max_qps: %d is less than %d\n",
nic_dev->max_qps, nic_dev->dcb_cfg.pg_tcs);
return 1;
}
err = hinic_setup_tc(netdev, state ? nic_dev->dcb_cfg.pg_tcs : 0);
if (!err)
- netif_info(nic_dev, drv, netdev, "%s DCB\n",
+ nicif_info(nic_dev, drv, netdev, "%s DCB\n",
state ? "Enable" : "Disable");
return !!err;
@@ -1062,12 +1062,12 @@ static int __set_hw_ets(struct hinic_nic_dev *nic_dev)
err = hinic_dcb_set_ets(nic_dev->hwdev, up_tc, pg_bw, up_pgid,
up_bw, up_strict);
if (err) {
- hinic_err(nic_dev, drv, "Failed to set ets with mode:%d\n",
+ hinic_err(nic_dev, drv, "Failed to set ets with mode: %d\n",
nic_dev->dcbx_cap);
return err;
}
- hinic_info(nic_dev, drv, "Set ets to hw done with mode:%d\n",
+ hinic_info(nic_dev, drv, "Set ets to hw done with mode: %d\n",
nic_dev->dcbx_cap);
return 0;
@@ -1332,8 +1332,8 @@ static int hinic_dcbnl_ieee_set_ets(struct net_device *netdev,
if (max_tc != netdev_get_num_tc(netdev)) {
err = hinic_setup_tc(netdev, max_tc);
if (err) {
- netif_err(nic_dev, drv, netdev,
- "Failed to setup tc with max_tc:%d, err:%d\n",
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to setup tc with max_tc: %d, err: %d\n",
max_tc, err);
memcpy(my_ets, &back_ets, sizeof(struct ieee_ets));
return err;
@@ -1386,7 +1386,7 @@ static int hinic_dcbnl_ieee_set_pfc(struct net_device *netdev,
pfc_map = pfc->pfc_en & nic_dev->up_valid_bitmap;
outof_range_pfc = pfc->pfc_en & (~nic_dev->up_valid_bitmap);
if (outof_range_pfc)
- netif_info(nic_dev, drv, netdev,
+ nicif_info(nic_dev, drv, netdev,
"pfc setting out of range, 0x%x will be ignored\n",
outof_range_pfc);
@@ -1407,7 +1407,7 @@ static int hinic_dcbnl_ieee_set_pfc(struct net_device *netdev,
err = hinic_dcb_set_pfc(nic_dev->hwdev, pfc_en, pfc_map);
if (err) {
hinic_info(nic_dev, drv,
- "Failed to set pfc to hw with pfc_map:0x%x err:%d\n",
+ "Failed to set pfc to hw with pfc_map: 0x%x err: %d\n",
pfc_map, err);
hinic_start_port_traffic_flow(nic_dev);
return err;
@@ -1416,7 +1416,7 @@ static int hinic_dcbnl_ieee_set_pfc(struct net_device *netdev,
hinic_start_port_traffic_flow(nic_dev);
my_pfc->pfc_en = pfc->pfc_en;
hinic_info(nic_dev, drv,
- "Set pfc successfully with pfc_map:0x%x, pfc_en:%d\n",
+ "Set pfc successfully with pfc_map: 0x%x, pfc_en: %d\n",
pfc_map, pfc_en);
return 0;
@@ -1479,7 +1479,7 @@ static u8 hinic_dcbnl_setdcbx(struct net_device *netdev, u8 mode)
((mode & DCB_CAP_DCBX_LLD_MANAGED) &&
(!(mode & DCB_CAP_DCBX_HOST)))) {
nicif_info(nic_dev, drv, netdev,
- "Set dcbx failed with invalid mode:%d\n", mode);
+ "Set dcbx failed with invalid mode: %d\n", mode);
return 1;
}
@@ -1497,7 +1497,7 @@ static u8 hinic_dcbnl_setdcbx(struct net_device *netdev, u8 mode)
err = hinic_setup_tc(netdev, 0);
if (err) {
nicif_err(nic_dev, drv, netdev,
- "Failed to setup tc with mode:%d\n",
+ "Failed to setup tc with mode: %d\n",
mode);
return 1;
}
@@ -1509,7 +1509,7 @@ static u8 hinic_dcbnl_setdcbx(struct net_device *netdev, u8 mode)
err = hinic_setup_tc(netdev, 0);
if (err) {
nicif_err(nic_dev, drv, netdev,
- "Failed to setup tc with mode:%d\n", mode);
+ "Failed to setup tc with mode: %d\n", mode);
return 1;
}
}
@@ -1617,7 +1617,7 @@ int __set_cos_up_map(struct hinic_nic_dev *nic_dev, u8 *cos_up)
return -EFAULT;
}
- nicif_info(nic_dev, drv, netdev, "Set cos2up:%d%d%d%d%d%d%d%d\n",
+ nicif_info(nic_dev, drv, netdev, "Set cos2up: %d%d%d%d%d%d%d%d\n",
cos_up[0], cos_up[1], cos_up[2], cos_up[3],
cos_up[4], cos_up[5], cos_up[6], cos_up[7]);
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_eqs.c
index 22fc49d16462..6c352dff996a 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_eqs.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_eqs.c
@@ -486,7 +486,7 @@ static void ceq_event_handler(struct hinic_ceqs *ceqs, u32 ceqe)
u32 ceqe_data = CEQE_DATA(ceqe);
if (event >= HINIC_MAX_CEQ_EVENTS) {
- sdk_err(hwdev->dev_hdl, "Ceq unknown event:%d, ceqe date: 0x%x\n",
+ sdk_err(hwdev->dev_hdl, "Ceq unknown event: %d, ceqe date: 0x%x\n",
event, ceqe_data);
return;
}
@@ -1126,7 +1126,7 @@ static int init_eq(struct hinic_eq *eq, struct hinic_hwdev *hwdev, u16 q_id,
eq->orig_page_size = eq->page_size;
eq->num_pages = GET_EQ_NUM_PAGES(eq, eq->page_size);
if (eq->num_pages > HINIC_EQ_MAX_PAGES) {
- sdk_err(hwdev->dev_hdl, "Number pages:%d too many pages for eq\n",
+ sdk_err(hwdev->dev_hdl, "Number pages: %d too many pages for eq\n",
eq->num_pages);
return -EINVAL;
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c b/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c
index 98e230d4e3c1..f79594570002 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c
@@ -753,7 +753,7 @@ static int hinic_set_settings_to_hw(struct hinic_nic_dev *nic_dev,
(autoneg ? "autong enable " : "autong disable ") : "");
if (err < 0 || err >= SET_LINK_STR_MAX_LEN) {
nicif_err(nic_dev, drv, netdev,
- "Failed snprintf link state, function return(%d) and dest_len(%d)\n",
+ "Failed to snprintf link state, function return(%d) and dest_len(%d)\n",
err, SET_LINK_STR_MAX_LEN);
return -EFAULT;
}
@@ -763,7 +763,7 @@ static int hinic_set_settings_to_hw(struct hinic_nic_dev *nic_dev,
"%sspeed %d ", set_link_str, speed);
if (err <= 0 || err >= SET_LINK_STR_MAX_LEN) {
nicif_err(nic_dev, drv, netdev,
- "Failed snprintf link speed, function return(%d) and dest_len(%d)\n",
+ "Failed to snprintf link speed, function return(%d) and dest_len(%d)\n",
err, SET_LINK_STR_MAX_LEN);
return -EFAULT;
}
@@ -897,7 +897,7 @@ static void hinic_get_drvinfo(struct net_device *netdev,
"%s", mgmt_ver);
if (err <= 0 || err >= (int)sizeof(info->fw_version))
nicif_err(nic_dev, drv, netdev,
- "Failed snprintf fw_version, function return(%d) and dest_len(%d)\n",
+ "Failed to snprintf fw_version, function return(%d) and dest_len(%d)\n",
err, (int)sizeof(info->fw_version));
}
@@ -997,7 +997,7 @@ static int hinic_set_ringparam(struct net_device *netdev,
ring->rx_pending > HINIC_MAX_QUEUE_DEPTH ||
ring->rx_pending < HINIC_MIN_QUEUE_DEPTH) {
nicif_err(nic_dev, drv, netdev,
- "Queue depth out of rang [%d-%d]\n",
+ "Queue depth out of range [%d-%d]\n",
HINIC_MIN_QUEUE_DEPTH, HINIC_MAX_QUEUE_DEPTH);
return -EINVAL;
}
@@ -1449,7 +1449,7 @@ static int __hinic_set_coalesce(struct net_device *netdev,
err = snprintf(obj_str, sizeof(obj_str), "for netdev");
if (err <= 0 || err >= OBJ_STR_MAX_LEN) {
nicif_err(nic_dev, drv, netdev,
- "Failed snprintf string, function return(%d) and dest_len(%d)\n",
+ "Failed to snprintf string, function return(%d) and dest_len(%d)\n",
err, OBJ_STR_MAX_LEN);
return -EFAULT;
}
@@ -1458,7 +1458,7 @@ static int __hinic_set_coalesce(struct net_device *netdev,
err = snprintf(obj_str, sizeof(obj_str), "for queue %d", queue);
if (err <= 0 || err >= OBJ_STR_MAX_LEN) {
nicif_err(nic_dev, drv, netdev,
- "Failed snprintf string, function return(%d) and dest_len(%d)\n",
+ "Failed to snprintf string, function return(%d) and dest_len(%d)\n",
err, OBJ_STR_MAX_LEN);
return -EFAULT;
}
@@ -1712,7 +1712,7 @@ static void hinic_get_strings(struct net_device *netdev,
return;
default:
nicif_err(nic_dev, drv, netdev,
- "Invalid string set %d.", stringset);
+ "Invalid string set %d", stringset);
return;
}
}
@@ -1913,7 +1913,7 @@ static int hinic_run_lp_test(struct hinic_nic_dev *nic_dev, u32 test_time)
}
dev_kfree_skb_any(skb_tmp);
- nicif_info(nic_dev, drv, netdev, "Loopback test succeed.\n");
+ nicif_info(nic_dev, drv, netdev, "Loopback test succeed\n");
return 0;
}
@@ -1961,7 +1961,7 @@ void hinic_lp_test(struct net_device *netdev, struct ethtool_test *eth_test,
lb_test_rx_buf = vmalloc(LP_PKT_CNT * LP_PKT_LEN);
if (!lb_test_rx_buf) {
nicif_err(nic_dev, drv, netdev,
- "Failed to alloc rx buffer for loopback test.\n");
+ "Failed to alloc rx buffer for loopback test\n");
err = 1;
} else {
nic_dev->lb_test_rx_buf = lb_test_rx_buf;
@@ -1980,7 +1980,7 @@ void hinic_lp_test(struct net_device *netdev, struct ethtool_test *eth_test,
if (!(test.flags & ETH_TEST_FL_EXTERNAL_LB)) {
if (hinic_set_loopback_mode(nic_dev->hwdev, false)) {
nicif_err(nic_dev, drv, netdev,
- "Failed to cancel port loopback mode after loopback test.\n");
+ "Failed to cancel port loopback mode after loopback test\n");
err = 1;
goto resume_link;
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c
index e815eec59b45..d19da7238f9b 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c
@@ -1369,7 +1369,7 @@ int hinic_clean_root_ctxt(void *hwdev)
&root_ctxt, &out_size, 0);
if (err || !out_size || root_ctxt.status) {
sdk_err(((struct hinic_hwdev *)hwdev)->dev_hdl,
- "Failed to set root context, err: %d, status: 0x%x, out_size: 0x%x\n",
+ "Failed to clean root context, err: %d, status: 0x%x, out_size: 0x%x\n",
err, root_ctxt.status, out_size);
return -EFAULT;
}
@@ -1528,7 +1528,7 @@ static int hinic_vf_rx_tx_flush_in_pf(struct hinic_hwdev *hwdev, u16 vf_id)
HINIC_MGMT_CMD_START_FLR, &clr_res,
sizeof(clr_res), &clr_res, &out_size, 0);
if (err || !out_size || clr_res.status) {
- sdk_warn(hwdev->dev_hdl, "Failed to flush doorbell, err: %d, status: 0x%x, out_size: 0x%x\n",
+ sdk_warn(hwdev->dev_hdl, "Failed to start flr, err: %d, status: 0x%x, out_size: 0x%x\n",
err, clr_res.status, out_size);
ret = err ? err : (-EFAULT);
}
@@ -1759,7 +1759,7 @@ static int init_aeqs_msix_attr(struct hinic_hwdev *hwdev)
info.msix_index = eq->eq_irq.msix_entry_idx;
err = hinic_set_interrupt_cfg_direct(hwdev, &info);
if (err) {
- sdk_err(hwdev->dev_hdl, "Set msix attr for aeq %d failed\n",
+ sdk_err(hwdev->dev_hdl, "Failed to set msix attr for aeq %d\n",
q_id);
return -EFAULT;
}
@@ -1789,7 +1789,7 @@ static int init_ceqs_msix_attr(struct hinic_hwdev *hwdev)
info.msix_index = eq->eq_irq.msix_entry_idx;
err = hinic_set_interrupt_cfg(hwdev, info);
if (err) {
- sdk_err(hwdev->dev_hdl, "Set msix attr for ceq %d failed\n",
+ sdk_err(hwdev->dev_hdl, "Failed to set msix attr for ceq %d\n",
q_id);
return -EFAULT;
}
@@ -2032,7 +2032,7 @@ int comm_pf_mbox_handler(void *handle, u16 vf_id, u8 cmd, void *buf_in,
if (!hinic_mbox_check_cmd_valid(handle, hw_cmd_support_vf, vf_id, cmd,
buf_in, in_size, size)) {
sdk_err(((struct hinic_hwdev *)handle)->dev_hdl,
- "PF Receive VF(%d) common cmd(0x%x), mbox len(0x%x) is invalid\n",
+ "PF Receive VF(%d) common cmd(0x%x) or mbox len(0x%x) is invalid\n",
vf_id + hinic_glb_pf_vf_offset(handle), cmd, in_size);
err = HINIC_MBOX_VF_CMD_ERROR;
return err;
@@ -2050,8 +2050,8 @@ int comm_pf_mbox_handler(void *handle, u16 vf_id, u8 cmd, void *buf_in,
if (err && err != HINIC_DEV_BUSY_ACTIVE_FW &&
err != HINIC_MBOX_PF_BUSY_ACTIVE_FW)
sdk_err(((struct hinic_hwdev *)handle)->dev_hdl,
- "PF mbox common callback handler err: %d\n",
- err);
+ "PF mbox common cmd %d callback handler err: %d\n",
+ cmd, err);
}
return err;
@@ -2462,7 +2462,7 @@ int hinic_init_comm_ch(struct hinic_hwdev *hwdev)
err = __get_func_misc_info(hwdev);
if (err) {
- sdk_err(hwdev->dev_hdl, "Failed to get function msic information\n");
+ sdk_err(hwdev->dev_hdl, "Failed to get function misc information\n");
goto get_func_info_err;
}
@@ -3133,7 +3133,7 @@ int mqm_eqm_init(struct hinic_hwdev *hwdev)
&info_eqm_fix, sizeof(info_eqm_fix),
&info_eqm_fix, &len, 0);
if (ret || !len || info_eqm_fix.status) {
- sdk_err(hwdev->dev_hdl, "Get mqm fix info fail,err: %d, status: 0x%x, out_size: 0x%x\n",
+ sdk_err(hwdev->dev_hdl, "Get mqm fix info failed, err: %d, status: 0x%x, out_size: 0x%x\n",
ret, info_eqm_fix.status, len);
return -EFAULT;
}
@@ -3149,25 +3149,25 @@ int mqm_eqm_init(struct hinic_hwdev *hwdev)
kcalloc(hwdev->mqm_att.chunk_num,
sizeof(struct hinic_page_addr), GFP_KERNEL);
if (!(hwdev->mqm_att.brm_srch_page_addr)) {
- sdk_err(hwdev->dev_hdl, "Alloc virtual mem failed\r\n");
+ sdk_err(hwdev->dev_hdl, "Alloc virtual mem failed\n");
return -EFAULT;
}
ret = mqm_eqm_alloc_page_mem(hwdev);
if (ret) {
- sdk_err(hwdev->dev_hdl, "Alloc eqm page mem failed\r\n");
+ sdk_err(hwdev->dev_hdl, "Alloc eqm page mem failed\n");
goto err_page;
}
ret = mqm_eqm_set_page_2_hw(hwdev);
if (ret) {
- sdk_err(hwdev->dev_hdl, "Set page to hw failed\r\n");
+ sdk_err(hwdev->dev_hdl, "Set page to hw failed\n");
goto err_ecmd;
}
ret = mqm_eqm_set_cfg_2_hw(hwdev, 1);
if (ret) {
- sdk_err(hwdev->dev_hdl, "Set page to hw failed\r\n");
+ sdk_err(hwdev->dev_hdl, "Set page to hw failed\n");
goto err_ecmd;
}
@@ -3197,7 +3197,7 @@ void mqm_eqm_deinit(struct hinic_hwdev *hwdev)
ret = mqm_eqm_set_cfg_2_hw(hwdev, 0);
if (ret) {
- sdk_err(hwdev->dev_hdl, "Set mqm eqm cfg to chip fail! err: %d\n",
+ sdk_err(hwdev->dev_hdl, "Set mqm eqm cfg to chip fail, err: %d\n",
ret);
return;
}
@@ -3218,7 +3218,7 @@ int hinic_ppf_ext_db_init(void *dev)
ret = mqm_eqm_init(hwdev);
if (ret) {
- sdk_err(hwdev->dev_hdl, "MQM eqm init fail!\n");
+ sdk_err(hwdev->dev_hdl, "MQM eqm init failed\n");
return -EFAULT;
}
@@ -3425,7 +3425,7 @@ static void fault_report_show(struct hinic_hwdev *hwdev,
struct hinic_fault_event_stats *fault;
u8 node_id;
- sdk_err(hwdev->dev_hdl, "Fault event report received, func_id: %d.\n",
+ sdk_err(hwdev->dev_hdl, "Fault event report received, func_id: %d\n",
hinic_global_func_id(hwdev));
memset(type_str, 0, FAULT_SHOW_STR_LEN + 1);
@@ -3759,7 +3759,7 @@ static void sw_watchdog_timeout_info_show(struct hinic_hwdev *hwdev,
u32 *dump_addr, *reg, stack_len, i, j;
if (in_size != sizeof(*watchdog_info)) {
- sdk_err(hwdev->dev_hdl, "Invalid mgmt watchdog report, length: %d, should be %ld.\n",
+ sdk_err(hwdev->dev_hdl, "Invalid mgmt watchdog report, length: %d, should be %ld\n",
in_size, sizeof(*watchdog_info));
return;
}
@@ -3924,7 +3924,7 @@ static void hinic_fmw_act_ntc_handler(struct hinic_hwdev *hwdev,
struct hinic_fmw_act_ntc *notice_info;
if (in_size != sizeof(*notice_info)) {
- sdk_err(hwdev->dev_hdl, "Invalid mgmt firmware active notice, length: %d, should be %ld.\n",
+ sdk_err(hwdev->dev_hdl, "Invalid mgmt firmware active notice, length: %d, should be %ld\n",
in_size, sizeof(*notice_info));
return;
}
@@ -3957,7 +3957,7 @@ static void hinic_pcie_dfx_event_handler(struct hinic_hwdev *hwdev,
u32 *reg;
if (in_size != sizeof(*notice_info)) {
- sdk_err(hwdev->dev_hdl, "Invalid mgmt firmware active notice, length: %d, should be %ld.\n",
+ sdk_err(hwdev->dev_hdl, "Invalid mgmt firmware active notice, length: %d, should be %ld\n",
in_size, sizeof(*notice_info));
return;
}
@@ -4724,7 +4724,7 @@ u8 hinic_nic_sw_aeqe_handler(void *handle, u8 event, u64 data)
event_level = FAULT_LEVEL_FATAL;
break;
default:
- sdk_err(hwdev->dev_hdl, "Unsupported sw event %d to process.\n",
+ sdk_err(hwdev->dev_hdl, "Unsupported sw event %d to process\n",
event);
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_lld.c b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
index 81461da5e9aa..a19a1f67bc97 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_lld.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
@@ -668,7 +668,7 @@ static void hinic_ignore_minor_version(char *version)
err = snprintf(version, max_ver_len, "%s.%s.%s.0",
ver_split[0], ver_split[1], ver_split[2]);
if (err <= 0 || err >= max_ver_len)
- pr_err("Failed snprintf version, function return(%d) and dest_len(%d)\n",
+ pr_err("Failed to snprintf version, function return(%d) and dest_len(%d)\n",
err, max_ver_len);
}
@@ -1087,7 +1087,7 @@ void *hinic_get_uld_dev_by_ifname(char *ifname, enum hinic_service_type type)
struct hinic_pcidev *dev;
if (type >= SERVICE_T_MAX) {
- pr_err("Service type :%d is error\n", type);
+ pr_err("Service type: %d is error\n", type);
return NULL;
}
@@ -1467,7 +1467,7 @@ struct net_device *hinic_get_netdev_by_lld(struct hinic_lld_dev *lld_dev)
nic_dev = pci_adapter->uld_dev[SERVICE_T_NIC];
if (!nic_dev) {
sdk_err(&pci_adapter->pcidev->dev,
- "There's no net device attached on the pci device");
+ "There's no net device attached on the pci device\n");
return NULL;
}
@@ -1501,7 +1501,7 @@ struct net_device *hinic_get_netdev_by_pcidev(struct pci_dev *pdev)
nic_dev = pci_adapter->uld_dev[SERVICE_T_NIC];
if (!nic_dev) {
sdk_err(&pci_adapter->pcidev->dev,
- "There`s no net device attached on the pci device");
+ "There`s no net device attached on the pci device\n");
return NULL;
}
@@ -1739,7 +1739,7 @@ int hinic_ovs_set_vf_nic_state(struct hinic_lld_dev *lld_dev, u16 vf_func_id,
if (err) {
sdk_err(&des_dev->pcidev->dev,
- "%s driver Set VF max_queue_num failed, err=%d.\n",
+ "%s driver Set VF max_queue_num failed, err=%d\n",
s_uld_name[SERVICE_T_NIC], err);
break;
@@ -2071,7 +2071,7 @@ static int alloc_chip_node(struct hinic_pcidev *pci_adapter)
HINIC_CHIP_NAME, i);
if (err <= 0 || err >= IFNAMSIZ) {
sdk_err(&pci_adapter->pcidev->dev,
- "Failed snprintf chip_name, function return(%d) and dest_len(%d)\n",
+ "Failed to snprintf chip_name, function return(%d) and dest_len(%d)\n",
err, IFNAMSIZ);
goto alloc_dbgtool_attr_file_err;
}
@@ -2080,7 +2080,7 @@ static int alloc_chip_node(struct hinic_pcidev *pci_adapter)
IFNAMSIZ, "%s%d", HINIC_CHIP_NAME, i);
if (err <= 0 || err >= IFNAMSIZ) {
sdk_err(&pci_adapter->pcidev->dev,
- "Failed snprintf dbgtool_attr_file_name, function return(%d) and dest_len(%d)\n",
+ "Failed to snprintf dbgtool_attr_file_name, function return(%d) and dest_len(%d)\n",
err, IFNAMSIZ);
goto alloc_dbgtool_attr_file_err;
}
@@ -2188,13 +2188,13 @@ int hinic_ovs_set_vf_load_state(struct pci_dev *pdev)
{
struct hinic_pcidev *pci_adapter;
if (!pdev) {
- pr_err("pdev is null.\n");
+ pr_err("pdev is null\n");
return -EINVAL;
}
pci_adapter = pci_get_drvdata(pdev);
if (!pci_adapter) {
- pr_err("pci_adapter is null.\n");
+ pr_err("pci_adapter is null\n");
return -EFAULT;
}
@@ -2699,7 +2699,7 @@ static int hinic_probe(struct pci_dev *pdev, const struct pci_device_id *id)
create_singlethread_workqueue(HINIC_SLAVE_NIC_DELAY);
if (!pci_adapter->slave_nic_init_workq) {
sdk_err(&pdev->dev,
- "Failed to create work queue:%s\n",
+ "Failed to create work queue: %s\n",
HINIC_SLAVE_NIC_DELAY);
goto ceate_nic_delay_work_fail;
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
index 1679c24eba9d..56af81d132fd 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
@@ -407,7 +407,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
}
nic_dev->qps_irq_info = kzalloc(irq_size, GFP_KERNEL);
if (!nic_dev->qps_irq_info) {
- nicif_err(nic_dev, drv, netdev, "Failed to alloc msix entries\n");
+ nicif_err(nic_dev, drv, netdev, "Failed to alloc qps_irq_info\n");
return -ENOMEM;
}
@@ -560,7 +560,7 @@ static int hinic_request_irq(struct hinic_irq *irq_cfg, u16 q_id)
err = hinic_set_interrupt_cfg(nic_dev->hwdev, info);
if (err) {
nicif_err(nic_dev, drv, irq_cfg->netdev,
- "Failed to set RX interrupt coalescing attribute.\n");
+ "Failed to set RX interrupt coalescing attribute\n");
qp_del_napi(irq_cfg);
return err;
}
@@ -2977,7 +2977,7 @@ int hinic_enable_func_rss(struct hinic_nic_dev *nic_dev)
if (err) {
if (err == -ENOSPC)
nicif_warn(nic_dev, drv, netdev,
- "Failed to alloc RSS template,table is full\n");
+ "Failed to alloc RSS template, table is full\n");
else
nicif_err(nic_dev, drv, netdev,
"Failed to alloc RSS template\n");
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_mbox.c b/drivers/net/ethernet/huawei/hinic/hinic_mbox.c
index fc91bdffe1eb..29feb9b4a16c 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_mbox.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_mbox.c
@@ -213,8 +213,8 @@ static bool check_func_id(struct hinic_hwdev *hwdev, u16 src_func_idx,
if (in_size < offset + sizeof(func_idx)) {
sdk_warn(hwdev->dev_hdl,
- "Reveice mailbox msg len: %d less than 10 Bytes is invalid\n",
- in_size);
+ "Receive mailbox msg len: %d less than %ld Bytes is invalid\n",
+ in_size, offset + sizeof(func_idx));
return false;
}
@@ -222,7 +222,7 @@ static bool check_func_id(struct hinic_hwdev *hwdev, u16 src_func_idx,
if (src_func_idx != func_idx) {
sdk_warn(hwdev->dev_hdl,
- "Reveice mailbox function id(0x%x) not equal to msg function id(0x%x)\n",
+ "Reveive mailbox function id(0x%x) not equal to msg function id(0x%x)\n",
src_func_idx, func_idx);
return false;
}
@@ -909,7 +909,8 @@ void hinic_mbox_func_aeqe_handler(void *handle, u8 *header, u8 size)
if (src >= HINIC_MAX_FUNCTIONS) {
sdk_err(func_to_func->hwdev->dev_hdl,
- "Mailbox source function id:%u is invalid\n", (u32)src);
+ "Mailbox source function id: %u is invalid\n",
+ (u32)src);
return;
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_mgmt.c
index 38abf8fe0817..3a8362cc7d01 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_mgmt.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_mgmt.c
@@ -680,7 +680,7 @@ static int hinic_read_clp_data(struct hinic_hwdev *hwdev,
err = hinic_read_clp_reg(hwdev, HINIC_CLP_RSP_HOST,
HINIC_CLP_READY_RSP_HOST, &ready);
if (err || delay_cnt > HINIC_CLP_DELAY_CNT_MAX) {
- sdk_err(hwdev->dev_hdl, "timeout with delay_cnt:%d\n",
+ sdk_err(hwdev->dev_hdl, "Timeout with delay_cnt: %d\n",
delay_cnt);
return -EINVAL;
}
@@ -692,7 +692,7 @@ static int hinic_read_clp_data(struct hinic_hwdev *hwdev,
return err;
if (temp_out_size > HINIC_CLP_SRAM_SIZE_REG_MAX || !temp_out_size) {
- sdk_err(hwdev->dev_hdl, "invalid temp_out_size:%d\n",
+ sdk_err(hwdev->dev_hdl, "Invalid temp_out_size: %d\n",
temp_out_size);
return -EINVAL;
}
@@ -757,14 +757,16 @@ static int hinic_check_clp_init_status(struct hinic_hwdev *hwdev)
err = hinic_read_clp_reg(hwdev, HINIC_CLP_REQ_HOST,
HINIC_CLP_BA_HOST, &reg_value);
if (err || !reg_value) {
- sdk_err(hwdev->dev_hdl, "Wrong req ba value:0x%x\n", reg_value);
+ sdk_err(hwdev->dev_hdl, "Wrong req ba value: 0x%x\n",
+ reg_value);
return -EINVAL;
}
err = hinic_read_clp_reg(hwdev, HINIC_CLP_RSP_HOST,
HINIC_CLP_BA_HOST, &reg_value);
if (err || !reg_value) {
- sdk_err(hwdev->dev_hdl, "Wrong rsp ba value:0x%x\n", reg_value);
+ sdk_err(hwdev->dev_hdl, "Wrong rsp ba value: 0x%x\n",
+ reg_value);
return -EINVAL;
}
@@ -822,7 +824,7 @@ int hinic_pf_clp_to_mgmt(void *hwdev, enum hinic_mod_type mod, u8 cmd,
if (real_size >
(HINIC_CLP_INPUT_BUFFER_LEN_HOST / HINIC_CLP_DATA_UNIT_HOST)) {
- sdk_err(dev->dev_hdl, "Invalid real_size:%d\n", real_size);
+ sdk_err(dev->dev_hdl, "Invalid real_size: %d\n", real_size);
return -EINVAL;
}
down(&clp_pf_to_mgmt->clp_msg_lock);
@@ -871,13 +873,13 @@ int hinic_pf_clp_to_mgmt(void *hwdev, enum hinic_mod_type mod, u8 cmd,
real_size = (u16)((real_size * HINIC_CLP_DATA_UNIT_HOST) & 0xffff);
if (real_size <= sizeof(header) ||
real_size > HINIC_CLP_INPUT_BUFFER_LEN_HOST) {
- sdk_err(dev->dev_hdl, "Invalid response size:%d", real_size);
+ sdk_err(dev->dev_hdl, "Invalid response size: %d", real_size);
up(&clp_pf_to_mgmt->clp_msg_lock);
return -EINVAL;
}
real_size = real_size - sizeof(header);
if (real_size != *out_size) {
- sdk_err(dev->dev_hdl, "Invalid real_size:%d, out_size:%d\n",
+ sdk_err(dev->dev_hdl, "Invalid real_size: %d, out_size: %d\n",
real_size, *out_size);
up(&clp_pf_to_mgmt->clp_msg_lock);
return -EINVAL;
@@ -1090,11 +1092,11 @@ static void mgmt_resp_msg_handler(struct hinic_msg_pf_to_mgmt *pf_to_mgmt,
pf_to_mgmt->event_flag == SEND_EVENT_START) {
complete(&recv_msg->recv_done);
} else if (recv_msg->msg_id != pf_to_mgmt->sync_msg_id) {
- sdk_err(dev, "Send msg id(0x%x) recv msg id(0x%x) dismatch, event state=%d\n",
+ sdk_err(dev, "Send msg id(0x%x) recv msg id(0x%x) dismatch, event state: %d\n",
pf_to_mgmt->sync_msg_id, recv_msg->msg_id,
pf_to_mgmt->event_flag);
} else {
- sdk_err(dev, "Wait timeout, send msg id(0x%x) recv msg id(0x%x), event state=%d!\n",
+ sdk_err(dev, "Wait timeout, send msg id(0x%x) recv msg id(0x%x), event state: %d\n",
pf_to_mgmt->sync_msg_id, recv_msg->msg_id,
pf_to_mgmt->event_flag);
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
index 90b16ea9658d..1bac0ea08a52 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
@@ -309,7 +309,7 @@ int hinic_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id)
return -EINVAL;
}
if (mac_info.status == HINIC_PF_SET_VF_ALREADY) {
- nic_warn(nic_hwdev->dev_hdl, "PF has already set VF mac, Ignore delete operation.\n");
+ nic_warn(nic_hwdev->dev_hdl, "PF has already set VF mac, Ignore delete operation\n");
return HINIC_PF_SET_VF_ALREADY;
}
@@ -461,12 +461,14 @@ int hinic_set_port_mtu(void *hwdev, u32 new_mtu)
if (new_mtu < HINIC_MIN_MTU_SIZE) {
nic_err(nic_hwdev->dev_hdl,
- "Invalid mtu size, mtu size < 256bytes");
+ "Invalid mtu size, mtu size < %dbytes\n",
+ HINIC_MIN_MTU_SIZE);
return -EINVAL;
}
if (new_mtu > HINIC_MAX_JUMBO_FRAME_SIZE) {
- nic_err(nic_hwdev->dev_hdl, "Invalid mtu size, mtu size > 9600bytes");
+ nic_err(nic_hwdev->dev_hdl, "Invalid mtu size, mtu size > %dbytes\n",
+ HINIC_MAX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
@@ -2220,7 +2222,7 @@ int hinic_get_mgmt_version(void *hwdev, u8 *mgmt_ver)
err = snprintf(mgmt_ver, HINIC_MGMT_VERSION_MAX_LEN, "%s", up_ver.ver);
if (err <= 0 || err >= HINIC_MGMT_VERSION_MAX_LEN) {
nic_err(dev->dev_hdl,
- "Failed snprintf fw version, function return(%d) and dest_len(%d)\n",
+ "Failed to snprintf fw version, function return(%d) and dest_len(%d)\n",
err, HINIC_MGMT_VERSION_MAX_LEN);
return -EINVAL;
}
@@ -2501,7 +2503,7 @@ static int hinic_del_vf_mac_msg_handler(struct hinic_nic_io *nic_io, u16 vf,
if (vf_info->pf_set_mac && !(vf_info->trust) &&
is_valid_ether_addr(mac_in->mac) &&
!memcmp(vf_info->vf_mac_addr, mac_in->mac, ETH_ALEN)) {
- nic_warn(nic_io->hwdev->dev_hdl, "PF has already set VF mac.\n");
+ nic_warn(nic_io->hwdev->dev_hdl, "PF has already set VF mac\n");
mac_out->status = HINIC_PF_SET_VF_ALREADY;
*out_size = sizeof(*mac_out);
return 0;
@@ -2530,12 +2532,12 @@ static int hinic_update_vf_mac_msg_handler(struct hinic_nic_io *nic_io, u16 vf,
int err;
if (!is_valid_ether_addr(mac_in->new_mac)) {
- nic_err(nic_io->hwdev->dev_hdl, "Update VF MAC is invalid.\n");
+ nic_err(nic_io->hwdev->dev_hdl, "Update VF MAC is invalid\n");
return -EINVAL;
}
if (vf_info->pf_set_mac && !(vf_info->trust)) {
- nic_warn(nic_io->hwdev->dev_hdl, "PF has already set VF mac.\n");
+ nic_warn(nic_io->hwdev->dev_hdl, "PF has already set VF mac\n");
mac_out->status = HINIC_PF_SET_VF_ALREADY;
*out_size = sizeof(*mac_out);
return 0;
@@ -2724,7 +2726,7 @@ int nic_pf_mbox_handler(void *hwdev, u16 vf_id, u8 cmd, void *buf_in,
if (!hinic_mbox_check_cmd_valid(hwdev, nic_cmd_support_vf, vf_id, cmd,
buf_in, in_size, size)) {
nic_err(((struct hinic_hwdev *)hwdev)->dev_hdl,
- "PF Receive VF nic cmd(0x%x), mbox len(0x%x) is invalid\n",
+ "PF Receive VF nic cmd(0x%x) or mbox len(0x%x) is invalid\n",
cmd, in_size);
err = HINIC_MBOX_VF_CMD_ERROR;
return err;
@@ -2793,7 +2795,7 @@ int nic_pf_mbox_handler(void *hwdev, u16 vf_id, u8 cmd, void *buf_in,
if (err && err != HINIC_DEV_BUSY_ACTIVE_FW &&
err != HINIC_MBOX_PF_BUSY_ACTIVE_FW)
- nic_err(nic_io->hwdev->dev_hdl, "PF receive VF L2NIC cmd: %d process error, err:%d\n",
+ nic_err(nic_io->hwdev->dev_hdl, "PF receive VF L2NIC cmd: %d process error, err: %d\n",
cmd, err);
return err;
}
@@ -3522,13 +3524,13 @@ int hinic_set_anti_attack(void *hwdev, bool enable)
&rate, sizeof(rate), &rate,
&out_size);
if (err || !out_size || rate.status) {
- nic_err(nic_hwdev->dev_hdl, "Can`t %s port Anti-Attack rate limit err: %d, status: 0x%x, out size: 0x%x\n",
+ nic_err(nic_hwdev->dev_hdl, "Can't %s port Anti-Attack rate limit err: %d, status: 0x%x, out size: 0x%x\n",
(enable ? "enable" : "disable"), err, rate.status,
out_size);
return -EINVAL;
}
- nic_info(nic_hwdev->dev_hdl, "%s port Anti-Attack rate limit succeed.\n",
+ nic_info(nic_hwdev->dev_hdl, "%s port Anti-Attack rate limit succeed\n",
(enable ? "Enable" : "Disable"));
return 0;
@@ -3616,13 +3618,13 @@ int hinic_set_super_cqe_state(void *hwdev, bool enable)
&super_cqe, sizeof(super_cqe), &super_cqe,
&out_size);
if (err || !out_size || super_cqe.status) {
- nic_err(nic_hwdev->dev_hdl, "Can`t %s surper cqe, err: %d, status: 0x%x, out size: 0x%x\n",
+ nic_err(nic_hwdev->dev_hdl, "Can't %s surper cqe, err: %d, status: 0x%x, out size: 0x%x\n",
(enable ? "enable" : "disable"), err, super_cqe.status,
out_size);
return -EINVAL;
}
- nic_info(nic_hwdev->dev_hdl, "%s super cqe succeed.\n",
+ nic_info(nic_hwdev->dev_hdl, "%s super cqe succeed\n",
(enable ? "Enable" : "Disable"));
return 0;
@@ -3960,16 +3962,16 @@ int hinic_disable_tx_promisc(void *hwdev)
info.cfg = HINIC_TX_PROMISC_DISABLE;
err = hinic_msg_to_mgmt_sync(hwdev, HINIC_MOD_L2NIC,
- HINIC_PORT_CMD_DISABLE_PROMISIC, &info,
+ HINIC_PORT_CMD_DISABLE_PROMISC, &info,
sizeof(info), &info, &out_size, 0);
if (err || !out_size || info.status) {
if (info.status == HINIC_MGMT_CMD_UNSUPPORTED) {
nic_info(((struct hinic_hwdev *)hwdev)->dev_hdl,
- "Unsupported to disable TX promisic\n");
+ "Unsupported to disable TX promisc\n");
return 0;
}
nic_err(((struct hinic_hwdev *)hwdev)->dev_hdl,
- "Failed to disable multihost promisic, err: %d, status: 0x%x, out size: 0x%x\n",
+ "Failed to disable multihost promisc, err: %d, status: 0x%x, out size: 0x%x\n",
err, info.status, out_size);
return -EFAULT;
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_dbg.c b/drivers/net/ethernet/huawei/hinic/hinic_nic_dbg.c
index e49a21fa952e..9bcdcd3ce1c0 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_dbg.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_dbg.c
@@ -101,7 +101,7 @@ u16 hinic_dbg_get_rq_hw_pi(void *hwdev, u16 q_id)
if (qp)
return cpu_to_be16(*qp->rq.pi_virt_addr);
- nic_err(((struct hinic_hwdev *)hwdev)->dev_hdl, "Get rq hw pi failed!\n");
+ nic_err(((struct hinic_hwdev *)hwdev)->dev_hdl, "Get rq hw pi failed\n");
return INVALID_PI;
}
@@ -184,7 +184,7 @@ static int get_wqe_info(struct hinic_wq *wq, u16 idx, u16 wqebb_cnt,
return -EFAULT;
if (*wqe_size != (u16)(wq->wqebb_size * wqebb_cnt)) {
- pr_err("Unexpect out buf size from user :%d, expect: %d\n",
+ pr_err("Unexpect out buf size from user: %d, expect: %d\n",
*wqe_size, (u16)(wq->wqebb_size * wqebb_cnt));
return -EFAULT;
}
@@ -231,7 +231,7 @@ int hinic_dbg_get_rq_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
int hinic_dbg_get_hw_stats(const void *hwdev, u8 *hw_stats, u16 *out_size)
{
if (!hw_stats || *out_size != sizeof(struct hinic_hw_stats)) {
- pr_err("Unexpect out buf size from user :%d, expect: %lu\n",
+ pr_err("Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(struct hinic_hw_stats));
return -EFAULT;
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c b/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c
index f2fb0bc54570..b935c41a4435 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c
@@ -253,7 +253,7 @@ int hinic_create_qps(void *dev, u16 num_qp, u16 sq_depth, u16 rq_depth,
max_qps = hinic_func_max_qnum(hwdev);
if (num_qp > max_qps) {
- nic_err(hwdev->dev_hdl, "Create number of qps: %d > max number of qps:%d\n",
+ nic_err(hwdev->dev_hdl, "Create number of qps: %d > max number of qps: %d\n",
num_qp, max_qps);
return -EINVAL;
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nictool.c b/drivers/net/ethernet/huawei/hinic/hinic_nictool.c
index 3fcb855bd605..8ef008af05e8 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nictool.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nictool.c
@@ -336,7 +336,7 @@ static int get_inter_num(struct hinic_nic_dev *nic_dev, void *buf_in,
if (*out_size != sizeof(u16)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect out buf size from user :%d, expect: %lu\n",
+ "Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(u16));
return -EFAULT;
}
@@ -381,7 +381,7 @@ static int get_num_cos(struct hinic_nic_dev *nic_dev, void *buf_in,
if (*out_size != sizeof(*num_cos)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect out buf size from user :%d, expect: %lu\n",
+ "Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(*num_cos));
return -EFAULT;
}
@@ -399,7 +399,7 @@ static int get_dcb_cos_up_map(struct hinic_nic_dev *nic_dev, void *buf_in,
if (*out_size != sizeof(*map)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect out buf size from user :%d, expect: %lu\n",
+ "Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(*map));
return -EFAULT;
}
@@ -438,7 +438,7 @@ static int get_rx_cqe_info(struct hinic_nic_dev *nic_dev, void *buf_in,
if (*out_size != sizeof(struct hinic_rq_cqe)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect out buf size from user :%d, expect: %lu\n",
+ "Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(struct hinic_rq_cqe));
return -EFAULT;
}
@@ -477,7 +477,7 @@ static int hinic_dbg_get_sq_info(struct hinic_nic_dev *nic_dev, u16 q_id,
if (*msg_size != sizeof(*sq_info)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect out buf size from user :%d, expect: %lu\n",
+ "Unexpect out buf size from user: %d, expect: %lu\n",
*msg_size, sizeof(*sq_info));
return -EFAULT;
}
@@ -576,7 +576,7 @@ static int get_loopback_mode(struct hinic_nic_dev *nic_dev, void *buf_in,
if (*out_size != sizeof(*mode)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect out buf size from user :%d, expect: %lu\n",
+ "Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(*mode));
return -EFAULT;
}
@@ -717,7 +717,7 @@ int set_pfc_control(struct hinic_nic_dev *nic_dev, void *buf_in,
pfc_en = *((u8 *)buf_in);
if (!(test_bit(HINIC_DCB_ENABLE, &nic_dev->flags))) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Need to enable dcb first.\n");
+ "Need to enable dcb first\n");
err = 0xff;
goto exit;
}
@@ -752,7 +752,7 @@ int set_ets(struct hinic_nic_dev *nic_dev, void *buf_in,
if (!(test_bit(HINIC_DCB_ENABLE, &nic_dev->flags))) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Need to enable dcb first.\n");
+ "Need to enable dcb first\n");
err = 0xff;
goto exit;
}
@@ -765,7 +765,7 @@ int set_ets(struct hinic_nic_dev *nic_dev, void *buf_in,
if (!(test_bit(HINIC_ETS_ENABLE, &nic_dev->flags))) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Need to enable ets first.\n");
+ "Need to enable ets first\n");
err = 0xff;
goto exit;
}
@@ -792,7 +792,7 @@ int set_ets(struct hinic_nic_dev *nic_dev, void *buf_in,
err = hinic_dcbnl_set_ets_tool(nic_dev->netdev);
if (err) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Failed to set ets [%d].\n", err);
+ "Failed to set ets [%d]\n", err);
}
exit:
*((u8 *)buf_out) = err;
@@ -838,7 +838,7 @@ int get_support_tc(struct hinic_nic_dev *nic_dev, void *buf_in,
if (*out_size != sizeof(*tc_num)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect out buf size from user :%d, expect: %lu\n",
+ "Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(*tc_num));
return -EFAULT;
}
@@ -881,7 +881,7 @@ int set_pfc_priority(struct hinic_nic_dev *nic_dev, void *buf_in,
if (!((test_bit(HINIC_DCB_ENABLE, &nic_dev->flags)) &&
nic_dev->tmp_dcb_cfg.pfc_state)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Need to enable pfc first.\n");
+ "Need to enable pfc first\n");
err = 0xff;
goto exit;
}
@@ -983,7 +983,7 @@ static int set_poll_weight(struct hinic_nic_dev *nic_dev, void *buf_in,
if (!buf_in || in_size != sizeof(*weight_info)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect in buf size from user :%u, expect: %lu\n",
+ "Unexpect in buf size from user: %u, expect: %lu\n",
*out_size, sizeof(*weight_info));
return -EFAULT;
}
@@ -999,7 +999,7 @@ static int get_homologue(struct hinic_nic_dev *nic_dev, void *buf_in,
struct hinic_homologues *homo = buf_out;
if (!buf_out || *out_size != sizeof(*homo)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect out buf size from user :%d, expect: %lu\n",
+ "Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(*homo));
return -EFAULT;
}
@@ -1020,7 +1020,7 @@ static int set_homologue(struct hinic_nic_dev *nic_dev, void *buf_in,
struct hinic_homologues *homo = buf_in;
if (!buf_in || in_size != sizeof(*homo)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect in buf size from user :%d, expect: %lu\n",
+ "Unexpect in buf size from user: %d, expect: %lu\n",
*out_size, sizeof(*homo));
return -EFAULT;
}
@@ -1030,7 +1030,7 @@ static int set_homologue(struct hinic_nic_dev *nic_dev, void *buf_in,
} else if (homo->homo_state == HINIC_HOMOLOGUES_OFF) {
clear_bit(HINIC_SAME_RXTX, &nic_dev->flags);
} else {
- pr_err("Invalid parameters.\n");
+ pr_err("Invalid parameters\n");
return -EFAULT;
}
@@ -1047,7 +1047,7 @@ static int get_sset_count(struct hinic_nic_dev *nic_dev, void *buf_in,
if (!buf_in || !buf_out || in_size != sizeof(u32) ||
*out_size != sizeof(u32)) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Invalid parameters.\n");
+ "Invalid parameters\n");
return -EINVAL;
}
@@ -1083,7 +1083,7 @@ static int get_sset_stats(struct hinic_nic_dev *nic_dev, void *buf_in,
if (count * sizeof(*items) != *out_size) {
nicif_err(nic_dev, drv, nic_dev->netdev,
- "Unexpect out buf size from user :%d, expect: %lu\n",
+ "Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, count * sizeof(*items));
return -EINVAL;
}
@@ -1112,7 +1112,7 @@ static int get_func_type(void *hwdev, void *buf_in, u32 in_size,
func_typ = hinic_func_type(hwdev);
if (!buf_out || *out_size != sizeof(u16)) {
- pr_err("Unexpect out buf size from user :%d, expect: %lu\n",
+ pr_err("Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(u16));
return -EFAULT;
}
@@ -1126,7 +1126,7 @@ static int get_func_id(void *hwdev, void *buf_in, u32 in_size,
u16 func_id;
if (!buf_out || *out_size != sizeof(u16)) {
- pr_err("Unexpect out buf size from user :%d, expect: %lu\n",
+ pr_err("Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(u16));
return -EFAULT;
}
@@ -1145,7 +1145,7 @@ static int get_chip_faults_stats(void *hwdev, void *buf_in, u32 in_size,
if (!buf_in || !buf_out || *out_size != sizeof(*fault_info) ||
in_size != sizeof(*fault_info)) {
- pr_err("Unexpect out buf size from user :%d, expect: %lu\n",
+ pr_err("Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(*fault_info));
return -EFAULT;
}
@@ -1178,7 +1178,7 @@ static int get_drv_version(void *hwdev, void *buf_in, u32 in_size,
int err;
if (*out_size != sizeof(*ver_info)) {
- pr_err("Unexpect out buf size from user :%d, expect: %lu\n",
+ pr_err("Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(*ver_info));
return -EFAULT;
}
@@ -1212,7 +1212,7 @@ static int get_single_card_info(void *hwdev, void *buf_in, u32 in_size,
{
if (!buf_in || !buf_out || in_size != sizeof(struct card_info) ||
*out_size != sizeof(struct card_info)) {
- pr_err("Unexpect out buf size from user :%d, expect: %lu\n",
+ pr_err("Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(struct card_info));
return -EFAULT;
}
@@ -1230,7 +1230,7 @@ static int get_device_id(void *hwdev, void *buf_in, u32 in_size,
int err;
if (!buf_out || !buf_in || *out_size != sizeof(u16) ||
in_size != sizeof(u16)) {
- pr_err("Unexpect out buf size from user :%d, expect: %lu\n",
+ pr_err("Unexpect out buf size from user: %d, expect: %lu\n",
*out_size, sizeof(u16));
return -EFAULT;
}
@@ -1297,7 +1297,7 @@ static int __get_card_usr_api_chain_mem(int card_idx)
(void *)__get_free_pages(GFP_KERNEL,
DBGTOOL_PAGE_ORDER);
if (!g_card_vir_addr[card_idx]) {
- pr_err("Alloc api chain memory fail for card %d.\n",
+ pr_err("Alloc api chain memory fail for card %d\n",
card_idx);
mutex_unlock(&g_addr_lock);
return -EFAULT;
@@ -1309,7 +1309,7 @@ static int __get_card_usr_api_chain_mem(int card_idx)
g_card_phy_addr[card_idx] =
virt_to_phys(g_card_vir_addr[card_idx]);
if (!g_card_phy_addr[card_idx]) {
- pr_err("phy addr for card %d is 0.\n", card_idx);
+ pr_err("phy addr for card %d is 0\n", card_idx);
free_pages((unsigned long)g_card_vir_addr[card_idx],
DBGTOOL_PAGE_ORDER);
g_card_vir_addr[card_idx] = NULL;
@@ -1589,7 +1589,7 @@ static int send_to_ucode(void *hwdev, struct msg_module *nt_msg,
nt_msg->ucode_cmd.ucode_db.ucode_cmd_type,
buf_in, buf_out, 0);
if (ret)
- pr_err("Send direct cmdq err: %d!\n", ret);
+ pr_err("Send direct cmdq err: %d\n", ret);
} else {
ret = hinic_cmdq_detail_resp
(hwdev, nt_msg->ucode_cmd.ucode_db.cmdq_ack_type,
@@ -1597,7 +1597,7 @@ static int send_to_ucode(void *hwdev, struct msg_module *nt_msg,
nt_msg->ucode_cmd.ucode_db.ucode_cmd_type,
buf_in, buf_out, 0);
if (ret)
- pr_err("Send detail cmdq err: %d!\n", ret);
+ pr_err("Send detail cmdq err: %d\n", ret);
}
return ret;
@@ -1732,7 +1732,7 @@ static int check_useparam_valid(struct msg_module *nt_msg, void *buf_in)
u32 rd_len = csr_write_msg->rd_len;
if (rd_len > TOOL_COUNTER_MAX_LEN) {
- pr_err("Csr read or write len is invalid!\n");
+ pr_err("Csr read or write len is invalid\n");
return -EINVAL;
}
@@ -1810,7 +1810,7 @@ static int sm_rd32(void *hwdev, u32 id, u8 instance,
ret = hinic_sm_ctr_rd32(hwdev, node, instance, id, &val1);
if (ret) {
- pr_err("Get sm ctr information (32 bits)failed!\n");
+ pr_err("Get sm ctr information (32 bits)failed\n");
val1 = 0xffffffff;
}
@@ -1827,7 +1827,7 @@ static int sm_rd64_pair(void *hwdev, u32 id, u8 instance,
ret = hinic_sm_ctr_rd64_pair(hwdev, node, instance, id, &val1, &val2);
if (ret) {
- pr_err("Get sm ctr information (64 bits pair)failed!\n");
+ pr_err("Get sm ctr information (64 bits pair)failed\n");
val1 = 0xffffffff;
}
@@ -1845,7 +1845,7 @@ static int sm_rd64(void *hwdev, u32 id, u8 instance,
ret = hinic_sm_ctr_rd64(hwdev, node, instance, id, &val1);
if (ret) {
- pr_err("Get sm ctr information (64 bits)failed!\n");
+ pr_err("Get sm ctr information (64 bits)failed\n");
val1 = 0xffffffff;
}
buf_out->val1 = val1;
@@ -1890,7 +1890,7 @@ static int send_to_sm(void *hwdev, struct msg_module *nt_msg,
}
if (ret)
- pr_err("Get sm information fail!\n");
+ pr_err("Get sm information fail\n");
*out_size = sizeof(struct sm_out_st);
@@ -2024,7 +2024,7 @@ static int get_self_test_cmd(struct msg_module *nt_msg)
ret = hinic_get_self_test_result(nt_msg->device_name, &res);
if (ret) {
- pr_err("Get self test result failed!\n");
+ pr_err("Get self test result failed\n");
return -EFAULT;
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h b/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h
index 0b13ba9d2f26..1d3c0301ba63 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h
@@ -93,7 +93,7 @@ enum hinic_port_cmd {
HINIC_PORT_CMD_SET_JUMBO_FRAME_SIZE,
/* 0x4c ~ 0x57 have defined in base line */
- HINIC_PORT_CMD_DISABLE_PROMISIC = 0x4c,
+ HINIC_PORT_CMD_DISABLE_PROMISC = 0x4c,
HINIC_PORT_CMD_ENABLE_SPOOFCHK = 0x4e,
HINIC_PORT_CMD_GET_MGMT_VERSION = 0x58,
HINIC_PORT_CMD_GET_BOOT_VERSION,
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_sriov.c b/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
index 047023267ead..1a436c133785 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
@@ -39,7 +39,7 @@ int hinic_pci_sriov_disable(struct pci_dev *dev)
if (test_and_set_bit(HINIC_SRIOV_DISABLE, &sriov_info->state)) {
nic_err(&sriov_info->pdev->dev,
- "SR-IOV disable in process, please wait");
+ "SR-IOV disable in process, please wait\n");
return -EPERM;
}
@@ -190,7 +190,7 @@ int hinic_ndo_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
return err;
nic_info(&sriov_info->pdev->dev, "Setting MAC %pM on VF %d\n", mac, vf);
- nic_info(&sriov_info->pdev->dev, "Reload the VF driver to make this change effective.");
+ nic_info(&sriov_info->pdev->dev, "Reload the VF driver to make this change effective\n");
return 0;
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
index b556132a72af..8d921ad104e8 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
@@ -949,7 +949,7 @@ int hinic_setup_all_tx_resources(struct net_device *netdev)
txq = &nic_dev->txqs[q_id];
tx_info_sz = txq->q_depth * sizeof(*txq->tx_info);
if (!tx_info_sz) {
- nicif_err(nic_dev, drv, netdev, "Cannot allocate zero size tx%d info\n",
+ nicif_err(nic_dev, drv, netdev, "Cannot allocate zero size txq%d info\n",
q_id);
err = -EINVAL;
goto init_txq_err;
@@ -965,7 +965,7 @@ int hinic_setup_all_tx_resources(struct net_device *netdev)
err = hinic_setup_tx_wqe(txq);
if (err != txq->q_depth) {
- nicif_err(nic_dev, drv, netdev, "Failed to setup Tx:%d wqe\n",
+ nicif_err(nic_dev, drv, netdev, "Failed to setup Tx: %d wqe\n",
q_id);
q_id++;
goto init_txq_err;
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_wq.c
index 032b28332a65..fcf98413d2de 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_wq.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_wq.c
@@ -55,7 +55,7 @@ static int queue_alloc_page(void *handle, u64 **vaddr, u64 *paddr,
}
if (!ADDR_4K_ALIGNED(dma_addr)) {
- sdk_err(handle, "Cla is not 4k aligned!\n");
+ sdk_err(handle, "Cla is not 4k aligned\n");
goto shadow_vaddr_err;
}
--
2.25.1
1
5
Adam Ford (1):
omapfb: dss: Fix max fclk divider for omap36xx
Ben Skeggs (2):
drm/nouveau/fbcon: fix module unload when fbcon init has failed for
some reason
drm/nouveau/fbcon: zero-initialise the mode_cmd2 structure
Christoph Hellwig (1):
net/9p: validate fds in p9_fd_open
Cong Wang (1):
ipv6: fix memory leaks on IPV6_ADDRFORM path
David Howells (1):
rxrpc: Fix race between recvmsg and sendmsg on immediate call failure
Dexuan Cui (1):
Drivers: hv: vmbus: Ignore CHANNELMSG_TL_CONNECT_RESULT(23)
Eric Biggers (1):
Smack: fix use-after-free in smk_write_relabel_self()
Erik Ekman (1):
USB: serial: qcserial: add EM7305 QDL product ID
Forest Crossman (2):
usb: xhci: define IDs for various ASMedia host controllers
usb: xhci: Fix ASMedia ASM1142 DMA addressing
Francesco Ruggeri (1):
igb: reinit_locked() should be called with rtnl_lock
Frank van der Linden (1):
xattr: break delegations in {set,remove}xattr
Greg Kroah-Hartman (3):
USB: iowarrior: fix up report size handling for some devices
mtd: properly check all write ioctls for permissions
Linux 4.19.139
Grzegorz Siwik (1):
i40e: Wrong truncation from u16 to u8
Hangbin Liu (1):
Revert "vxlan: fix tos value before xmit"
Hui Wang (1):
Revert "ALSA: hda: call runtime_allow() for all hda controllers"
Ido Schimmel (2):
ipv4: Silence suspicious RCU usage warning
vxlan: Ensure FDB dump is performed under RCU
Jann Horn (1):
binder: Prevent context manager from incrementing ref 0
Johan Hovold (5):
leds: wm831x-status: fix use-after-free on unbind
leds: da903x: fix use-after-free on unbind
leds: lm3533: fix use-after-free on unbind
leds: 88pm860x: fix use-after-free on unbind
net: lan78xx: replace bogus endpoint lookup
Julian Squires (1):
cfg80211: check vendor command doit pointer before use
Landen Chao (1):
net: ethernet: mtk_eth_soc: fix MTU warnings
Lorenzo Bianconi (1):
net: gre: recompute gre csum for sctp over gre tunnels
Martyna Szapar (2):
i40e: Fix of memory leak and integer truncation in i40e_virtchnl.c
i40e: Memory leak in i40e_config_iwarp_qvlist
Peilin Ye (4):
Bluetooth: Fix slab-out-of-bounds read in
hci_extended_inquiry_result_evt()
Bluetooth: Prevent out-of-bounds read in hci_inquiry_result_evt()
Bluetooth: Prevent out-of-bounds read in
hci_inquiry_result_with_rssi_evt()
openvswitch: Prevent kernel-infoleak in ovs_ct_put_key()
Philippe Duplessis-Guindon (1):
tools lib traceevent: Fix memory leak in process_dynamic_array_len
Qiushi Wu (1):
firmware: Fix a reference count leak.
Rustam Kovhaev (1):
usb: hso: check for return value in hso_serial_common_create()
Sergey Nemov (1):
i40e: add num_vectors checker in iwarp handler
Stephen Hemminger (1):
hv_netvsc: do not use VF device if link is down
Suren Baghdasaryan (1):
staging: android: ashmem: Fix lockdep warning for write operation
Takashi Iwai (1):
ALSA: seq: oss: Serialize ioctls
Willem de Bruijn (1):
selftests/net: relax cpu affinity requirement in msg_zerocopy test
Wolfram Sang (2):
i2c: slave: improve sanity check when registering
i2c: slave: add sanity check when unregistering
Xin Long (1):
net: thunderx: use spin_lock_bh in nicvf_set_rx_mode_task()
Xin Xiong (1):
atm: fix atm_dev refcnt leaks in atmtcp_remove_persistent
Makefile | 2 +-
drivers/android/binder.c | 15 ++-
drivers/atm/atmtcp.c | 10 +-
drivers/firmware/qemu_fw_cfg.c | 7 +-
drivers/gpu/drm/nouveau/nouveau_fbcon.c | 3 +-
drivers/hv/channel_mgmt.c | 21 ++--
drivers/hv/vmbus_drv.c | 4 +
drivers/i2c/i2c-core-slave.c | 7 +-
drivers/leds/leds-88pm860x.c | 14 ++-
drivers/leds/leds-da903x.c | 14 ++-
drivers/leds/leds-lm3533.c | 12 +-
drivers/leds/leds-wm831x-status.c | 14 ++-
drivers/mtd/mtdchar.c | 56 +++++++--
.../net/ethernet/cavium/thunder/nicvf_main.c | 4 +-
.../ethernet/intel/i40e/i40e_virtchnl_pf.c | 51 +++++---
drivers/net/ethernet/intel/igb/igb_main.c | 9 ++
drivers/net/ethernet/mediatek/mtk_eth_soc.c | 2 +
drivers/net/hyperv/netvsc_drv.c | 7 +-
drivers/net/usb/hso.c | 5 +-
drivers/net/usb/lan78xx.c | 117 +++++-------------
drivers/net/vxlan.c | 10 +-
drivers/staging/android/ashmem.c | 12 ++
drivers/usb/host/xhci-pci.c | 10 +-
drivers/usb/misc/iowarrior.c | 35 ++++--
drivers/usb/serial/qcserial.c | 1 +
drivers/video/fbdev/omap2/omapfb/dss/dss.c | 2 +-
fs/xattr.c | 84 +++++++++++--
include/linux/hyperv.h | 2 +
include/linux/xattr.h | 2 +
include/net/addrconf.h | 1 +
net/9p/trans_fd.c | 24 ++--
net/bluetooth/hci_event.c | 11 +-
net/ipv4/fib_trie.c | 2 +-
net/ipv4/gre_offload.c | 13 +-
net/ipv6/anycast.c | 17 ++-
net/ipv6/ipv6_sockglue.c | 1 +
net/openvswitch/conntrack.c | 38 +++---
net/rxrpc/call_object.c | 27 ++--
net/rxrpc/conn_object.c | 8 +-
net/rxrpc/recvmsg.c | 2 +-
net/rxrpc/sendmsg.c | 3 +
net/wireless/nl80211.c | 6 +-
security/smack/smackfs.c | 13 +-
sound/core/seq/oss/seq_oss.c | 8 +-
sound/pci/hda/hda_intel.c | 1 -
tools/lib/traceevent/event-parse.c | 1 +
tools/testing/selftests/net/msg_zerocopy.c | 5 +-
47 files changed, 483 insertions(+), 230 deletions(-)
--
2.25.1
1
48

[PATCH 01/26] arm64/ascend: Add new enable_oom_killer interface for oom control
by Yang Yingliang 17 Aug '20
From: Weilong Chen <chenweilong(a)huawei.com>
ascend inclusion
category: feature
bugzilla: NA
CVE: NA
-------------------------------------------------
Support disabling the oom killer, and report oom events to bbox; a
usage sketch of the new notifier interface follows the value list
below.
vm.enable_oom_killer:
0: disable oom killer
1: enable oom killer (default, compatible with mainline)
2: disable oom killer and panic on oom
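As an illustration only (not part of this patch), a bbox-style
consumer could hook these events through the new interface; the
module below is a hypothetical sketch assuming CONFIG_ASCEND_OOM=y:
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/oom.h>
/* Runs on the hisi_oom_notify_list chain; val is one of the
 * HISI_OOM_TYPE_* constants passed to hisi_oom_notifier_call().
 */
static int bbox_oom_event(struct notifier_block *nb,
			  unsigned long val, void *v)
{
	pr_info("bbox: oom event type %lu reported\n", val);
	return NOTIFY_OK;
}
static struct notifier_block bbox_oom_nb = {
	.notifier_call = bbox_oom_event,
};
static int __init bbox_oom_init(void)
{
	return register_hisi_oom_notifier(&bbox_oom_nb);
}
static void __exit bbox_oom_exit(void)
{
	unregister_hisi_oom_notifier(&bbox_oom_nb);
}
module_init(bbox_oom_init);
module_exit(bbox_oom_exit);
MODULE_LICENSE("GPL");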
Signed-off-by: Weilong Chen <chenweilong(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/Kconfig | 11 +++++++++
include/linux/oom.h | 11 +++++++++
kernel/sysctl.c | 12 ++++++++++
mm/memcontrol.c | 6 +++++
mm/oom_kill.c | 56 +++++++++++++++++++++++++++++++++++++++++++++
mm/util.c | 6 +++++
6 files changed, 102 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 10fabb5f633d..4412f14547af 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1351,6 +1351,17 @@ config ASCEND_DVPP_MMAP
special memory for DvPP processor, the new flag is only valid for Ascend
platform.
+config ASCEND_OOM
+ bool "Enable support for disable oom killer"
+ default y
+ help
+ In some cases we hopes that the oom will not kill the process when it occurs,
+ be able to notify the black box to report the event, and be able to trigger
+ the panic to locate the problem.
+ vm.enable_oom_killer:
+ 0: disable oom killer
+ 1: enable oom killer (default,compatible with mainline)
+ 2: disable oom killer and panic on oom
endif
endmenu
diff --git a/include/linux/oom.h b/include/linux/oom.h
index 69864a547663..689d32ab694b 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -117,4 +117,15 @@ extern struct task_struct *find_lock_task_mm(struct task_struct *p);
extern int sysctl_oom_dump_tasks;
extern int sysctl_oom_kill_allocating_task;
extern int sysctl_panic_on_oom;
+
+#ifdef CONFIG_ASCEND_OOM
+#define HISI_OOM_TYPE_NOMEM 0
+#define HISI_OOM_TYPE_OVERCOMMIT 1
+#define HISI_OOM_TYPE_CGROUP 2
+
+extern int sysctl_enable_oom_killer;
+extern int register_hisi_oom_notifier(struct notifier_block *nb);
+extern int hisi_oom_notifier_call(unsigned long val, void *v);
+extern int unregister_hisi_oom_notifier(struct notifier_block *nb);
+#endif
#endif /* _INCLUDE_LINUX_OOM_H */
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 54ae74d3180b..665c9e2a8802 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1264,6 +1264,18 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
.extra2 = &two,
},
+#ifdef CONFIG_ASCEND_OOM
+ {
+ /* 0: disable, 1: enable, 2: disable and panic on oom */
+ .procname = "enable_oom_killer",
+ .data = &sysctl_enable_oom_killer,
+ .maxlen = sizeof(sysctl_enable_oom_killer),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &zero,
+ .extra2 = &two,
+ },
+#endif
{
.procname = "oom_kill_allocating_task",
.data = &sysctl_oom_kill_allocating_task,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e0377bae0bf6..a63bfd73da9a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1729,6 +1729,9 @@ static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int
current->memcg_in_oom = memcg;
current->memcg_oom_gfp_mask = mask;
current->memcg_oom_order = order;
+#ifdef CONFIG_ASCEND_OOM
+ hisi_oom_notifier_call(HISI_OOM_TYPE_CGROUP, NULL);
+#endif
return OOM_ASYNC;
}
@@ -1802,6 +1805,9 @@ bool mem_cgroup_oom_synchronize(bool handle)
mem_cgroup_out_of_memory(memcg, current->memcg_oom_gfp_mask,
current->memcg_oom_order);
} else {
+#ifdef CONFIG_ASCEND_OOM
+ hisi_oom_notifier_call(HISI_OOM_TYPE_CGROUP, NULL);
+#endif
schedule();
mem_cgroup_unmark_under_oom(memcg);
finish_wait(&memcg_oom_waitq, &owait.wait);
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 8a4570c53e83..c08041ecd286 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -52,6 +52,9 @@
int sysctl_panic_on_oom;
int sysctl_oom_kill_allocating_task;
int sysctl_oom_dump_tasks = 1;
+#ifdef CONFIG_ASCEND_OOM
+int sysctl_enable_oom_killer = 1;
+#endif
/*
* Serializes oom killer invocations (out_of_memory()) from all contexts to
@@ -1047,6 +1050,42 @@ int unregister_oom_notifier(struct notifier_block *nb)
}
EXPORT_SYMBOL_GPL(unregister_oom_notifier);
+#ifdef CONFIG_ASCEND_OOM
+static BLOCKING_NOTIFIER_HEAD(hisi_oom_notify_list);
+
+int register_hisi_oom_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_register(&hisi_oom_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(register_hisi_oom_notifier);
+
+static unsigned long last_jiffies;
+int hisi_oom_notifier_call(unsigned long val, void *v)
+{
+ /* when enable oom killer, just return */
+ if (sysctl_enable_oom_killer == 1)
+ return 0;
+
+ /* Print time interval to 10 seconds */
+ if (time_after(jiffies, last_jiffies + 10 * HZ)) {
+ pr_err("OOM_NOTIFIER: oom type %lu\n", val);
+ dump_stack();
+ show_mem(SHOW_MEM_FILTER_NODES, NULL);
+ dump_tasks(NULL, 0);
+ last_jiffies = jiffies;
+ }
+
+ return blocking_notifier_call_chain(&hisi_oom_notify_list, val, v);
+}
+EXPORT_SYMBOL_GPL(hisi_oom_notifier_call);
+
+int unregister_hisi_oom_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_unregister(&hisi_oom_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(unregister_hisi_oom_notifier);
+#endif
+
/**
* out_of_memory - kill the "best" process when we run out of memory
* @oc: pointer to struct oom_control
@@ -1060,10 +1099,27 @@ bool out_of_memory(struct oom_control *oc)
{
unsigned long freed = 0;
enum oom_constraint constraint = CONSTRAINT_NONE;
+#ifdef CONFIG_ASCEND_OOM
+ unsigned long oom_type;
+#endif
if (oom_killer_disabled)
return false;
+#ifdef CONFIG_ASCEND_OOM
+ if (sysctl_enable_oom_killer == 0 || sysctl_enable_oom_killer == 2) {
+ if (is_memcg_oom(oc))
+ oom_type = HISI_OOM_TYPE_CGROUP;
+ else
+ oom_type = HISI_OOM_TYPE_NOMEM;
+
+ hisi_oom_notifier_call(oom_type, NULL);
+ if (unlikely(sysctl_enable_oom_killer == 2))
+ panic("Out of memory, panic by sysctl_enable_oom_killer");
+ return false;
+ }
+#endif
+
if (!is_memcg_oom(oc)) {
blocking_notifier_call_chain(&oom_notify_list, 0, &freed);
if (freed > 0)
diff --git a/mm/util.c b/mm/util.c
index 5515219168e8..ed64ef1f8387 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -17,6 +17,9 @@
#include <asm/sections.h>
#include <linux/uaccess.h>
+#ifdef CONFIG_ASCEND_OOM
+#include <linux/oom.h>
+#endif
#include "internal.h"
@@ -744,6 +747,9 @@ int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
if (percpu_counter_read_positive(&vm_committed_as) < allowed)
return 0;
error:
+#ifdef CONFIG_ASCEND_OOM
+ hisi_oom_notifier_call(HISI_OOM_TYPE_OVERCOMMIT, NULL);
+#endif
vm_unacct_memory(pages);
return -ENOMEM;
--
2.25.1
1
25
Hi all,
[Description]
The kernel branch openEuler-1.0-LTS has imported the patch f6b330acc,
but the branch lacks the no_refcnt member, so the build fails.
[Debug]
The upstream commit ad0f75 introduced the no_refcnt bit to struct
sock_cgroup_data. Maybe we should merge that commit as well, or
revert f6b330acc on the openEuler-1.0-LTS branch; a sketch of the
upstream change is below.
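For context, the upstream change is roughly the following (an
approximate from-memory sketch of commit ad0f75, little-endian half
only, not the exact hunk):
/* include/linux/cgroup-defs.h, upstream (approximate) */
struct sock_cgroup_data {
	union {
#ifdef __LITTLE_ENDIAN
		struct {
			u8	is_data : 1;
			u8	no_refcnt : 1;	/* the bit the LTS branch is missing */
			u8	unused : 6;
			u8	padding;
			u16	prioidx;
			u32	classid;
		} __packed;
#else
		/* mirrored layout for big-endian omitted here */
#endif
		u64	val;
	};
};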
2
1
euleros inclusion
category: feature
bugzilla: NA
CVE: NA
Use reserved memory to create a pmem device to store the process
information that is dumped before a kernel update.
To use this feature, declare "pmemmem=pmem_size:pmem_phystart" on the
kernel cmdline (for example: pmemmem=100M:0x202000000000); usage
notes follow below.
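For instance (addresses here are only illustrative),
pmemmem=100M:0x202000000000 reserves 100MB at physical address
0x202000000000, and pmemmem=1G:0x80000000 reserves 1GB at 0x80000000.
memparse() accepts the usual K/M/G suffixes; the size is page aligned
by reserve_pmem(), and the base address must be 2MB aligned or the
reservation is skipped with a warning.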
Signed-off-by: zhuling <zhuling8(a)huawei.com>
---
arch/arm64/kernel/setup.c | 5 +++
arch/arm64/mm/init.c | 90 ++++++++++++++++++++++++++++++++++++++
drivers/nvdimm/Kconfig | 11 +++++
drivers/nvdimm/Makefile | 3 ++
drivers/nvdimm/kup_pmem.c | 107 ++++++++++++++++++++++++++++++++++++++++++++++
include/linux/ioport.h | 1 +
include/linux/mm.h | 4 ++
lib/Kconfig | 6 +++
8 files changed, 227 insertions(+)
create mode 100644 drivers/nvdimm/kup_pmem.c
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 155b8a6..e96cade 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -251,6 +251,11 @@ static void __init request_standard_resources(void)
if (kernel_data.start >= res->start &&
kernel_data.end <= res->end)
request_resource(res, &kernel_data);
+#ifdef CONFIG_KUP_PMEM_MEMORY
+ if (pmem_res.end)
+ insert_resource(&iomem_resource, &pmem_res);
+#endif
+
#ifdef CONFIG_KEXEC_CORE
/* Userspace will find "Crash kernel" region in /proc/iomem. */
if (crashk_low_res.end && crashk_low_res.start >= res->start &&
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index e43764d..169d663 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -65,6 +65,18 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
struct res_mem res_mem[MAX_RES_REGIONS];
int res_mem_count;
+#ifdef CONFIG_KUP_PMEM_MEMORY
+static unsigned long long pmem_size, pmem_phystart;
+
+struct resource pmem_res = {
+ .name = "Kpmem Dev",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_MEM,
+ .desc = IORES_DESC_KPMEM_DEV
+};
+#endif
+
#ifdef CONFIG_BLK_DEV_INITRD
static int __init early_initrd(char *p)
{
@@ -192,6 +204,80 @@ static void __init kexec_reserve_crashkres_pages(void)
}
#endif /* CONFIG_KEXEC_CORE */
+#ifdef CONFIG_KUP_PMEM_MEMORY
+/*
+ * reserve_pmem() - reserves memory for pmem
+ *
+ * This function reserves memory area given in "pmemmem=" kernel command
+ * line parameter. The memory reserved is used by pmem restore progress
+ * when kernel update.
+ */
+static int __init parse_pmem(char *par)
+{
+ char *cur = par;
+
+ if (!par)
+ return 0;
+
+ pmem_size = 0;
+ pmem_phystart = 0;
+
+ pmem_size = memparse(par, &cur);
+ if (par == cur) {
+ pr_warn("pmem: memory value expected\n");
+ return -EINVAL;
+ }
+
+ if (*cur == ':')
+ pmem_phystart = memparse(cur+1, &cur);
+ else if (*cur != ' ' && *cur != '\0') {
+ pr_warn("pmem: unrecognized char %c\n", *cur);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+early_param("pmemmem", parse_pmem);
+
+static void __init reserve_pmem(void)
+{
+ if (!pmem_size || !pmem_phystart)
+ return;
+
+ pmem_size = PAGE_ALIGN(pmem_size);
+
+ if (!memblock_is_region_memory(pmem_phystart, pmem_size)) {
+ pr_warn("cannot reserve pmem: region is not memory!\n");
+ return;
+ }
+
+ if (memblock_is_region_reserved(pmem_phystart, pmem_size)) {
+ pr_warn("cannot reserve pmem: region overlaps reserved memory!\n");
+ return;
+ }
+
+ if (!IS_ALIGNED(pmem_phystart, SZ_2M)) {
+ pr_warn("cannot reserve pmem: base address is not 2MB aligned\n");
+ return;
+ }
+ memblock_reserve(pmem_phystart, pmem_size);
+ memblock_remove(pmem_phystart, pmem_size);
+ pr_info("pmem reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ pmem_phystart, pmem_phystart + pmem_size, pmem_size >> 20);
+
+ pmem_res.start = pmem_phystart;
+ pmem_res.end = pmem_phystart + pmem_size - 1;
+}
+#else
+static void __init reserve_pmem(void)
+{
+}
+static void __init reserve_pmem_pages(void)
+{
+}
+#endif /*CONFIG_KUP_PMEM_MEMORY*/
+
#ifdef CONFIG_CRASH_DUMP
static int __init early_init_dt_scan_elfcorehdr(unsigned long node,
const char *uname, int depth, void *data)
@@ -584,6 +670,10 @@ void __init arm64_memblock_init(void)
reserve_elfcorehdr();
+#ifdef CONFIG_KUP_PMEM_MEMORY
+ reserve_pmem();
+#endif
+
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
dma_contiguous_reserve(arm64_dma_phys_limit);
diff --git a/drivers/nvdimm/Kconfig b/drivers/nvdimm/Kconfig
index 9d36473..1097a8d 100644
--- a/drivers/nvdimm/Kconfig
+++ b/drivers/nvdimm/Kconfig
@@ -112,4 +112,15 @@ config OF_PMEM
Select Y if unsure.
+config KUP_PMEM
+ tristate "Persistent memory for kernel update"
+ depends on LIBNVDIMM
+ depends on KUP_PMEM_MEMORY
+ default LIBNVDIMM
+ help
+ Creates a pmem device backed by the memory reserved via the
+ "pmemmem=" kernel parameter, used to store process information
+ across a kernel update.
+
+ Select Y if unsure.
+
endif
diff --git a/drivers/nvdimm/Makefile b/drivers/nvdimm/Makefile
index e884704..853cbcb 100644
--- a/drivers/nvdimm/Makefile
+++ b/drivers/nvdimm/Makefile
@@ -5,6 +5,7 @@ obj-$(CONFIG_ND_BTT) += nd_btt.o
obj-$(CONFIG_ND_BLK) += nd_blk.o
obj-$(CONFIG_X86_PMEM_LEGACY) += nd_e820.o
obj-$(CONFIG_OF_PMEM) += of_pmem.o
+obj-$(CONFIG_KUP_PMEM) += nd_kup_pmem.o
nd_pmem-y := pmem.o
@@ -14,6 +15,8 @@ nd_blk-y := blk.o
nd_e820-y := e820.o
+nd_kup_pmem-y := kup_pmem.o
+
libnvdimm-y := core.o
libnvdimm-y += bus.o
libnvdimm-y += dimm_devs.o
diff --git a/drivers/nvdimm/kup_pmem.c b/drivers/nvdimm/kup_pmem.c
new file mode 100644
index 00000000..d9b0633
--- /dev/null
+++ b/drivers/nvdimm/kup_pmem.c
@@ -0,0 +1,107 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
+ *
+ * This source code is licensed under the GNU General Public License,
+ * Version 2. See the file COPYING for more details.
+ *
+ * kup_pmem.c - kernel update support code.
+ * create a pmem device to store the processes information that is dumped
+ * when we want to kernel update.
+ */
+
+#include <linux/platform_device.h>
+#include <linux/memory_hotplug.h>
+#include <linux/libnvdimm.h>
+#include <linux/module.h>
+#include <asm/io.h>
+
+static const struct attribute_group *kup_pmem_attribute_groups[] = {
+ &nvdimm_bus_attribute_group,
+ NULL,
+};
+
+static const struct attribute_group *kup_pmem_region_attribute_groups[] = {
+ &nd_region_attribute_group,
+ &nd_device_attribute_group,
+ NULL,
+};
+
+static int kup_pmem_remove(struct platform_device *pdev)
+{
+ struct nvdimm_bus *nvdimm_bus = platform_get_drvdata(pdev);
+
+ nvdimm_bus_unregister(nvdimm_bus);
+
+ return 0;
+}
+
+static int kup_register_one(struct resource *res, void *data)
+{
+ struct nd_region_desc ndr_desc;
+ struct nvdimm_bus *nvdimm_bus = data;
+
+ memset(&ndr_desc, 0, sizeof(ndr_desc));
+ ndr_desc.res = res;
+ ndr_desc.attr_groups = kup_pmem_region_attribute_groups;
+ ndr_desc.numa_node = NUMA_NO_NODE;
+ set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
+ if (!nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc))
+ return -ENXIO;
+ return 0;
+}
+
+static int kup_pmem_probe(struct platform_device *pdev)
+{
+ static struct nvdimm_bus_descriptor nd_desc;
+ struct device *dev = &pdev->dev;
+ struct nvdimm_bus *nvdimm_bus;
+ int rc = -ENXIO;
+
+ nd_desc.attr_groups = kup_pmem_attribute_groups;
+ nd_desc.provider_name = "kup_pmem";
+ nd_desc.module = THIS_MODULE;
+ nvdimm_bus = nvdimm_bus_register(dev, &nd_desc);
+ if (!nvdimm_bus)
+ goto err;
+ platform_set_drvdata(pdev, nvdimm_bus);
+
+ rc = walk_iomem_res_desc(IORES_DESC_KPMEM_DEV,
+ IORESOURCE_MEM, 0, -1, nvdimm_bus, kup_register_one);
+ if (rc)
+ goto err;
+
+ return 0;
+err:
+ nvdimm_bus_unregister(nvdimm_bus);
+ dev_err(dev, "kup_pmem: failed to register legacy persistent memory ranges\n");
+ return rc;
+}
+
+static struct platform_driver kup_pmem_driver = {
+ .probe = kup_pmem_probe,
+ .remove = kup_pmem_remove,
+ .driver = {
+ .name = "kup_pmem",
+ },
+};
+static struct platform_device *pdev;
+
+static __init int register_kup_pmem(void)
+{
+ platform_driver_register(&kup_pmem_driver);
+ pdev = platform_device_alloc("kup_pmem", -1);
+
+ return platform_device_add(pdev);
+}
+
+static __exit void unregister_kup_pmem(void)
+{
+ platform_device_del(pdev);
+ platform_driver_unregister(&kup_pmem_driver);
+}
+
+module_init(register_kup_pmem);
+module_exit(unregister_kup_pmem);
+MODULE_ALIAS("platform:kup_pmem*");
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Huawei Corporation");
diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 5330288..c5f59b9 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -139,6 +139,7 @@ enum {
IORES_DESC_PERSISTENT_MEMORY_LEGACY = 5,
IORES_DESC_DEVICE_PRIVATE_MEMORY = 6,
IORES_DESC_DEVICE_PUBLIC_MEMORY = 7,
+ IORES_DESC_KPMEM_DEV = 8,
};
/* helpers to define resources */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b985af8..d84b0f0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -38,6 +38,10 @@ struct bdi_writeback;
void init_mm_internals(void);
+#ifdef CONFIG_KUP_PMEM_MEMORY
+extern struct resource pmem_res;
+#endif
+
#ifndef CONFIG_NEED_MULTIPLE_NODES /* Don't use mapnrs, do it properly */
extern unsigned long max_mapnr;
diff --git a/lib/Kconfig b/lib/Kconfig
index a3928d4..cc49a86 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -621,3 +621,9 @@ config GENERIC_LIB_CMPDI2
config GENERIC_LIB_UCMPDI2
bool
+
+config KUP_PMEM_MEMORY
+ bool "reserve memory for kup pmem to store image"
+ default y
+ help
+ Say y here to enable this feature
--
2.9.5
1
0
euleros inclusion
category: feature
bugzilla: NA
issues: #I1RC8Z
CVE: NA
In normal kexec, relocating the kernel may cost 5 ~ 10 seconds,
because all segments must be copied from vmalloced memory to kernel
boot memory with the MMU disabled.
We introduce quick kexec to save this copying time, just like kdump
(kexec on crash), by using a reserved memory region named
"Quick Kexec".
The quick kimage is constructed the same way as the crash kernel
image, and all of its segments are simply copied into the reserved
memory.
We also add this support to the kexec_load syscall via the new
KEXEC_QUICK flag; a userspace sketch follows below.
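As a rough userspace illustration (not part of this patch), loading
with the new flag could look like the sketch below; the empty segment
list is only to show the flag plumbing, and KEXEC_QUICK is defined
locally in case installed headers predate this patch:
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/kexec.h>
#ifndef KEXEC_QUICK
#define KEXEC_QUICK 0x00000004	/* value proposed by this patch */
#endif
int main(void)
{
	/* A real loader fills in kexec_segment entries describing the
	 * new kernel; passing zero segments only exercises the flag
	 * handling in kimage_alloc_init().
	 */
	long ret = syscall(SYS_kexec_load, 0UL, 0UL, NULL,
			   KEXEC_QUICK | KEXEC_ARCH_DEFAULT);
	if (ret)
		perror("kexec_load");
	return ret ? 1 : 0;
}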
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
---
arch/Kconfig | 7 +++++++
include/linux/ioport.h | 3 +++
include/linux/kexec.h | 13 +++++++++++-
include/uapi/linux/kexec.h | 3 +++
kernel/kexec.c | 10 ++++++++++
kernel/kexec_core.c | 41 +++++++++++++++++++++++++++++---------
6 files changed, 67 insertions(+), 10 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index b54e485e47ae..1a9d00a6e122 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -18,6 +18,13 @@ config KEXEC_CORE
select CRASH_CORE
bool
+config QUICK_KEXEC
+ bool "Support for quick kexec"
+ depends on KEXEC_CORE
+ help
+ Say y here to enable this feature.
+ It uses reserved memory to accelerate kexec.
+
config HAVE_IMA_KEXEC
bool
diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 5330288da8db..a42a1800c133 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -139,6 +139,9 @@ enum {
IORES_DESC_PERSISTENT_MEMORY_LEGACY = 5,
IORES_DESC_DEVICE_PRIVATE_MEMORY = 6,
IORES_DESC_DEVICE_PUBLIC_MEMORY = 7,
+#ifdef CONFIG_QUICK_KEXEC
+ IORES_DESC_QUICK_KEXEC = 8,
+#endif
};
/* helpers to define resources */
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index d6b8d0a69720..d5e42a44f2ca 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -233,9 +233,12 @@ struct kimage {
unsigned long control_page;
/* Flags to indicate special processing */
- unsigned int type : 1;
+ unsigned int type : 2;
#define KEXEC_TYPE_DEFAULT 0
#define KEXEC_TYPE_CRASH 1
+#ifdef CONFIG_QUICK_KEXEC
+#define KEXEC_TYPE_QUICK 2
+#endif
unsigned int preserve_context : 1;
/* If set, we are using file mode kexec syscall */
unsigned int file_mode:1;
@@ -296,6 +299,11 @@ extern int kexec_load_disabled;
#define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_PRESERVE_CONTEXT)
#endif
+#ifdef CONFIG_QUICK_KEXEC
+#undef KEXEC_FLAGS
+#define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_QUICK)
+#endif
+
/* List of defined/legal kexec file flags */
#define KEXEC_FILE_FLAGS (KEXEC_FILE_UNLOAD | KEXEC_FILE_ON_CRASH | \
KEXEC_FILE_NO_INITRAMFS)
@@ -305,6 +313,9 @@ extern int kexec_load_disabled;
extern struct resource crashk_res;
extern struct resource crashk_low_res;
extern note_buf_t __percpu *crash_notes;
+#ifdef CONFIG_QUICK_KEXEC
+extern struct resource quick_kexec_res;
+#endif
/* flag to track if kexec reboot is in progress */
extern bool kexec_in_progress;
diff --git a/include/uapi/linux/kexec.h b/include/uapi/linux/kexec.h
index 6d112868272d..dcf9857452da 100644
--- a/include/uapi/linux/kexec.h
+++ b/include/uapi/linux/kexec.h
@@ -12,6 +12,9 @@
/* kexec flags for different usage scenarios */
#define KEXEC_ON_CRASH 0x00000001
#define KEXEC_PRESERVE_CONTEXT 0x00000002
+/* uapi headers are seen by userspace, where CONFIG_* is undefined,
+ * so define the flag unconditionally.
+ */
+#define KEXEC_QUICK 0x00000004
#define KEXEC_ARCH_MASK 0xffff0000
/*
diff --git a/kernel/kexec.c b/kernel/kexec.c
index 68559808fdfa..47dfad722b7c 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -46,6 +46,9 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
int ret;
struct kimage *image;
bool kexec_on_panic = flags & KEXEC_ON_CRASH;
+#ifdef CONFIG_QUICK_KEXEC
+ bool kexec_on_quick = flags & KEXEC_QUICK;
+#endif
if (kexec_on_panic) {
/* Verify we have a valid entry point */
@@ -71,6 +74,13 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
image->type = KEXEC_TYPE_CRASH;
}
+#ifdef CONFIG_QUICK_KEXEC
+ if (kexec_on_quick) {
+ image->control_page = quick_kexec_res.start;
+ image->type = KEXEC_TYPE_QUICK;
+ }
+#endif
+
ret = sanity_check_segment_list(image);
if (ret)
goto out_free_image;
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index b36c9c46cd2c..595a757af656 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -74,6 +74,16 @@ struct resource crashk_low_res = {
.desc = IORES_DESC_CRASH_KERNEL
};
+#ifdef CONFIG_QUICK_KEXEC
+struct resource quick_kexec_res = {
+ .name = "Quick kexec",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+ .desc = IORES_DESC_QUICK_KEXEC
+};
+#endif
+
int kexec_should_crash(struct task_struct *p)
{
/*
@@ -470,8 +480,10 @@ static struct page *kimage_alloc_normal_control_pages(struct kimage *image,
return pages;
}
-static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
- unsigned int order)
+
+static struct page *kimage_alloc_special_control_pages(struct kimage *image,
+ unsigned int order,
+ unsigned long end)
{
/* Control pages are special, they are the intermediaries
* that are needed while we copy the rest of the pages
@@ -501,7 +513,7 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
size = (1 << order) << PAGE_SHIFT;
hole_start = (image->control_page + (size - 1)) & ~(size - 1);
hole_end = hole_start + size - 1;
- while (hole_end <= crashk_res.end) {
+ while (hole_end <= end) {
unsigned long i;
cond_resched();
@@ -536,7 +548,6 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
return pages;
}
-
struct page *kimage_alloc_control_pages(struct kimage *image,
unsigned int order)
{
@@ -547,8 +558,15 @@ struct page *kimage_alloc_control_pages(struct kimage *image,
pages = kimage_alloc_normal_control_pages(image, order);
break;
case KEXEC_TYPE_CRASH:
- pages = kimage_alloc_crash_control_pages(image, order);
+ pages = kimage_alloc_special_control_pages(image, order,
+ crashk_res.end);
+ break;
+#ifdef CONFIG_QUICK_KEXEC
+ case KEXEC_TYPE_QUICK:
+ pages = kimage_alloc_special_control_pages(image, order,
+ quick_kexec_res.end);
break;
+#endif
}
return pages;
@@ -898,11 +916,11 @@ static int kimage_load_normal_segment(struct kimage *image,
return result;
}
-static int kimage_load_crash_segment(struct kimage *image,
+static int kimage_load_special_segment(struct kimage *image,
struct kexec_segment *segment)
{
- /* For crash dumps kernels we simply copy the data from
- * user space to it's destination.
+ /* For crash dumps kernels and quick kexec kernels
+ * we simply copy the data from user space to it's destination.
* We do things a page at a time for the sake of kmap.
*/
unsigned long maddr;
@@ -976,8 +994,13 @@ int kimage_load_segment(struct kimage *image,
result = kimage_load_normal_segment(image, segment);
break;
case KEXEC_TYPE_CRASH:
- result = kimage_load_crash_segment(image, segment);
+ result = kimage_load_special_segment(image, segment);
+ break;
+#ifdef CONFIG_QUICK_KEXEC
+ case KEXEC_TYPE_QUICK:
+ result = kimage_load_special_segment(image, segment);
break;
+#endif
}
return result;
--
2.19.1
1
1
euleros inclusion
category: feature
bugzilla: NA
issues: #I1RC8Z
CVE: NA
In normal kexec, relocating the kernel may take 5 ~ 10 seconds to
copy all segments from vmalloced memory to kernel boot memory,
because the MMU is disabled.
We introduce quick kexec to save this copying time, just like
kdump (kexec on crash), by using the reserved memory region
"Quick Kexec".
A quick kimage is constructed in the same way as a crash kernel
image, and all of its segments are then simply copied into the
reserved memory.
We also add this support to the kexec_load syscall, using the new
KEXEC_QUICK flag.
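For context, user space requests the quick path through the ordinary
kexec_load(2) entry point. Below is a minimal, hypothetical sketch of a
loader using the new flag; it assumes this series is applied, that the
patched uapi header exports KEXEC_QUICK (0x00000004 mirrors the
definition added here), and it elides the segment setup:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/kexec.h>

#ifndef KEXEC_QUICK
#define KEXEC_QUICK 0x00000004 /* assumption: as defined by this series */
#endif

/* Illustrative helper: load an already-prepared image, asking the
 * kernel to place it in the reserved "Quick Kexec" region instead
 * of vmalloced memory. */
static int load_quick_kexec(unsigned long entry,
                            struct kexec_segment *segs,
                            unsigned long nr_segments)
{
        long ret = syscall(SYS_kexec_load, entry, nr_segments, segs,
                           KEXEC_ARCH_DEFAULT | KEXEC_QUICK);

        if (ret)
                perror("kexec_load(KEXEC_QUICK)");
        return (int)ret;
}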
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
---
arch/Kconfig | 7 +++++++
include/linux/ioport.h | 3 +++
include/linux/kexec.h | 13 +++++++++++-
include/uapi/linux/kexec.h | 3 +++
kernel/kexec.c | 10 ++++++++++
kernel/kexec_core.c | 41 +++++++++++++++++++++++++++++---------
6 files changed, 67 insertions(+), 10 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index b54e485e47ae..1a9d00a6e122 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -18,6 +18,13 @@ config KEXEC_CORE
select CRASH_CORE
bool
+config QUICK_KEXEC
+ bool "Support for quick kexec"
+ depends on KEXEC_CORE
+ help
+ Say y here to enable this feature.
+ It uses reserved memory to accelerate kexec.
+
config HAVE_IMA_KEXEC
bool
diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 5330288da8db..a42a1800c133 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -139,6 +139,9 @@ enum {
IORES_DESC_PERSISTENT_MEMORY_LEGACY = 5,
IORES_DESC_DEVICE_PRIVATE_MEMORY = 6,
IORES_DESC_DEVICE_PUBLIC_MEMORY = 7,
+#ifdef CONFIG_QUICK_KEXEC
+ IORES_DESC_QUICK_KEXEC = 8,
+#endif
};
/* helpers to define resources */
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index d6b8d0a69720..d5e42a44f2ca 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -233,9 +233,12 @@ struct kimage {
unsigned long control_page;
/* Flags to indicate special processing */
- unsigned int type : 1;
+ unsigned int type : 2;
#define KEXEC_TYPE_DEFAULT 0
#define KEXEC_TYPE_CRASH 1
+#ifdef CONFIG_QUICK_KEXEC
+#define KEXEC_TYPE_QUICK 2
+#endif
unsigned int preserve_context : 1;
/* If set, we are using file mode kexec syscall */
unsigned int file_mode:1;
@@ -296,6 +299,11 @@ extern int kexec_load_disabled;
#define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_PRESERVE_CONTEXT)
#endif
+#ifdef CONFIG_QUICK_KEXEC
+#undef KEXEC_FLAGS
+#define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_QUICK)
+#endif
+
/* List of defined/legal kexec file flags */
#define KEXEC_FILE_FLAGS (KEXEC_FILE_UNLOAD | KEXEC_FILE_ON_CRASH | \
KEXEC_FILE_NO_INITRAMFS)
@@ -305,6 +313,9 @@ extern int kexec_load_disabled;
extern struct resource crashk_res;
extern struct resource crashk_low_res;
extern note_buf_t __percpu *crash_notes;
+#ifdef CONFIG_QUICK_KEXEC
+extern struct resource quick_kexec_res;
+#endif
/* flag to track if kexec reboot is in progress */
extern bool kexec_in_progress;
diff --git a/include/uapi/linux/kexec.h b/include/uapi/linux/kexec.h
index 6d112868272d..dcf9857452da 100644
--- a/include/uapi/linux/kexec.h
+++ b/include/uapi/linux/kexec.h
@@ -12,6 +12,9 @@
/* kexec flags for different usage scenarios */
#define KEXEC_ON_CRASH 0x00000001
#define KEXEC_PRESERVE_CONTEXT 0x00000002
+#ifdef CONFIG_QUICK_KEXEC
+#define KEXEC_QUICK 0x00000004
+#endif
#define KEXEC_ARCH_MASK 0xffff0000
/*
diff --git a/kernel/kexec.c b/kernel/kexec.c
index 68559808fdfa..47dfad722b7c 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -46,6 +46,9 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
int ret;
struct kimage *image;
bool kexec_on_panic = flags & KEXEC_ON_CRASH;
+#ifdef CONFIG_QUICK_KEXEC
+ bool kexec_on_quick = flags & KEXEC_QUICK;
+#endif
if (kexec_on_panic) {
/* Verify we have a valid entry point */
@@ -71,6 +74,13 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
image->type = KEXEC_TYPE_CRASH;
}
+#ifdef CONFIG_QUICK_KEXEC
+ if (kexec_on_quick) {
+ image->control_page = quick_kexec_res.start;
+ image->type = KEXEC_TYPE_QUICK;
+ }
+#endif
+
ret = sanity_check_segment_list(image);
if (ret)
goto out_free_image;
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index b36c9c46cd2c..595a757af656 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -74,6 +74,16 @@ struct resource crashk_low_res = {
.desc = IORES_DESC_CRASH_KERNEL
};
+#ifdef CONFIG_QUICK_KEXEC
+struct resource quick_kexec_res = {
+ .name = "Quick kexec",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+ .desc = IORES_DESC_QUICK_KEXEC
+};
+#endif
+
int kexec_should_crash(struct task_struct *p)
{
/*
@@ -470,8 +480,10 @@ static struct page *kimage_alloc_normal_control_pages(struct kimage *image,
return pages;
}
-static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
- unsigned int order)
+
+static struct page *kimage_alloc_special_control_pages(struct kimage *image,
+ unsigned int order,
+ unsigned long end)
{
/* Control pages are special, they are the intermediaries
* that are needed while we copy the rest of the pages
@@ -501,7 +513,7 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
size = (1 << order) << PAGE_SHIFT;
hole_start = (image->control_page + (size - 1)) & ~(size - 1);
hole_end = hole_start + size - 1;
- while (hole_end <= crashk_res.end) {
+ while (hole_end <= end) {
unsigned long i;
cond_resched();
@@ -536,7 +548,6 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
return pages;
}
-
struct page *kimage_alloc_control_pages(struct kimage *image,
unsigned int order)
{
@@ -547,8 +558,15 @@ struct page *kimage_alloc_control_pages(struct kimage *image,
pages = kimage_alloc_normal_control_pages(image, order);
break;
case KEXEC_TYPE_CRASH:
- pages = kimage_alloc_crash_control_pages(image, order);
+ pages = kimage_alloc_special_control_pages(image, order,
+ crashk_res.end);
+ break;
+#ifdef CONFIG_QUICK_KEXEC
+ case KEXEC_TYPE_QUICK:
+ pages = kimage_alloc_special_control_pages(image, order,
+ quick_kexec_res.end);
break;
+#endif
}
return pages;
@@ -898,11 +916,11 @@ static int kimage_load_normal_segment(struct kimage *image,
return result;
}
-static int kimage_load_crash_segment(struct kimage *image,
+static int kimage_load_special_segment(struct kimage *image,
struct kexec_segment *segment)
{
- /* For crash dumps kernels we simply copy the data from
- * user space to it's destination.
+ /* For crash dump and quick kexec kernels
+ * we simply copy the data from user space to its destination.
* We do things a page at a time for the sake of kmap.
*/
unsigned long maddr;
@@ -976,8 +994,13 @@ int kimage_load_segment(struct kimage *image,
result = kimage_load_normal_segment(image, segment);
break;
case KEXEC_TYPE_CRASH:
- result = kimage_load_crash_segment(image, segment);
+ result = kimage_load_special_segment(image, segment);
+ break;
+#ifdef CONFIG_QUICK_KEXEC
+ case KEXEC_TYPE_QUICK:
+ result = kimage_load_special_segment(image, segment);
break;
+#endif
}
return result;
--
2.19.1
This patch set provides some improvements and fixes. The first three
patches solve a permission problem caused by EVM: EVM denies xattr
operations even for files that are not appraised by IMA. If only
executables are appraised, xattr operations on the other files should be
allowed, even if metadata verification fails (for example due to a
missing security.evm).
At the moment, openEuler uses EVM_ALLOW_METADATA_WRITES to avoid this
problem (with it, EVM does not check metadata integrity at all), but it
would be useful to perform the verification, for example to prevent
accidental changes to immutable metadata.
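For reference, the workaround above is toggled at runtime through
securityfs; a minimal sketch, assuming the conventional
/sys/kernel/security/evm interface and that EVM_ALLOW_METADATA_WRITES is
the 0x4 bit as in this tree:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Request EVM_ALLOW_METADATA_WRITES (0x4): metadata writes are then
 * permitted and EVM no longer enforces metadata integrity. */
static int evm_allow_metadata_writes(void)
{
        int fd = open("/sys/kernel/security/evm", O_WRONLY);

        if (fd < 0) {
                perror("open /sys/kernel/security/evm");
                return -1;
        }
        if (write(fd, "4", 1) != 1) {
                perror("write");
                close(fd);
                return -1;
        }
        close(fd);
        return 0;
}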
The fourth patch enables the choice of the algorithm for the HMAC and
ensures that the parameters passed to the functions which handle the HMAC
are consistent with the algorithm chosen.
The last three patches are simple bug fixes.
Roberto Sassu (7):
evm: Move hooks outside LSM infrastructure
evm: Extend API of post hooks to pass the result of pre hooks
evm: Return -EAGAIN to ignore verification failures
evm: Propagate choice of HMAC algorithm in evm_crypto.c
ima: Fix datalen check in ima_write_data()
evm: Fix validation of fake xattr passed by IMA
evm: Initialize saved_evm_status
fs/attr.c | 7 ++-
fs/xattr.c | 64 +++++++++++++++++---------
include/linux/evm.h | 18 +++++---
security/integrity/evm/Kconfig | 32 +++++++++++++
security/integrity/evm/evm.h | 1 +
security/integrity/evm/evm_crypto.c | 15 ++++--
security/integrity/evm/evm_main.c | 71 +++++++++++++++++++++--------
security/integrity/ima/ima_fs.c | 2 +-
security/integrity/integrity.h | 2 +-
security/security.c | 18 ++------
10 files changed, 158 insertions(+), 72 deletions(-)
--
2.27.GIT
[PATCH 01/23] ima: Use buffer large enough to store fake IMA xattr for appraisal
by Roberto Sassu, 06 Aug '20
hulk inclusion
category: feature
feature: digest-lists
---------------------------
A fake IMA xattr is created to perform EVM verification even if
security.ima is not present. Appraisal could succeed if EVM status
is unknown and the file digest is found in a digest list.
This patch allocates a larger buffer to store fake IMA xattrs (struct
evm_ima_xattr_data can be used only for SHA1 digests).
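To see why the on-stack struct was too small, here is a rough sketch of
the layout involved (the struct definition follows the usual upstream
one and is reproduced here only for illustration):

#include <stdint.h>
#include <stdio.h>

#define SHA1_DIGEST_SIZE   20
#define SHA512_DIGEST_SIZE 64

/* Approximately the upstream layout: one type byte plus room for a
 * SHA1-sized digest only. */
struct evm_ima_xattr_data {
        uint8_t type;
        uint8_t digest[SHA1_DIGEST_SIZE];
} __attribute__((packed));

int main(void)
{
        /* A fake xattr carries an algorithm byte plus up to a SHA512
         * digest, so the bare struct (21 bytes) cannot hold it; hence
         * the larger buffer in the patch below. */
        printf("struct: %zu bytes, buffer needed: %zu bytes\n",
               sizeof(struct evm_ima_xattr_data),
               sizeof(struct evm_ima_xattr_data) + SHA512_DIGEST_SIZE);
        return 0;
}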
Signed-off-by: Roberto Sassu <roberto.sassu(a)huawei.com>
---
security/integrity/ima/ima_appraise.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
index a11577147022..c6376ec28ccd 100644
--- a/security/integrity/ima/ima_appraise.c
+++ b/security/integrity/ima/ima_appraise.c
@@ -193,19 +193,19 @@ int ima_appraise_measurement(enum ima_hooks func,
struct dentry *dentry = file_dentry(file);
struct inode *inode = d_backing_inode(dentry);
enum integrity_status status = INTEGRITY_UNKNOWN;
- struct evm_ima_xattr_data digest_list_value;
+ char _buf[sizeof(struct evm_ima_xattr_data) + SHA512_DIGEST_SIZE];
int rc = xattr_len, hash_start = 0;
if (!(inode->i_opflags & IOP_XATTR))
return INTEGRITY_UNKNOWN;
if (rc == -ENODATA && found_digest &&
- !(file->f_mode && FMODE_CREATED)) {
- digest_list_value.type = EVM_IMA_XATTR_DIGEST_LIST;
- digest_list_value.digest[0] = found_digest->algo;
- memcpy(&digest_list_value.digest[1], found_digest->digest,
+ !(file->f_mode & FMODE_CREATED)) {
+ xattr_value = (struct evm_ima_xattr_data *)_buf;
+ xattr_value->type = IMA_XATTR_DIGEST_NG;
+ xattr_value->digest[0] = found_digest->algo;
+ memcpy(&xattr_value->digest[1], found_digest->digest,
hash_digest_size[found_digest->algo]);
- xattr_value = &digest_list_value;
rc = hash_digest_size[found_digest->algo] + 2;
}
@@ -283,7 +283,6 @@ int ima_appraise_measurement(enum ima_hooks func,
status = INTEGRITY_PASS;
break;
}
-
if (!ima_appraise_no_metadata) {
cause = "IMA-xattr-untrusted";
status = INTEGRITY_FAIL;
--
2.27.GIT
06 Aug '20
From: Jonathan Lebon <jlebon(a)redhat.com>
mainline inclusion
from mainline-v5.5-rc1
commit 3e3e24b42043eceb97ed834102c2d094dfd7aaa6
category: bugfix
---------------------------
Currently, the SELinux LSM prevents one from setting the
`security.selinux` xattr on an inode without a policy first being
loaded. However, this restriction is problematic: it makes it impossible
to have newly created files with the correct label before actually
loading the policy.
This is relevant in distributions like Fedora, where the policy is
loaded by systemd shortly after pivoting out of the initrd. In such
instances, all files created prior to pivoting will be unlabeled. One
then has to relabel them after pivoting, an operation which inherently
races with other processes trying to access those same files.
Going further, there are use cases for creating the entire root
filesystem on first boot from the initrd (e.g. Container Linux supports
this today[1], and we'd like to support it in Fedora CoreOS as well[2]).
One can imagine doing this in two ways: at the block device level (e.g.
laying down a disk image), or at the filesystem level. In the former,
labeling can simply be part of the image. But even in the latter
scenario, one still really wants to be able to set the right labels when
populating the new filesystem.
This patch enables this by changing behaviour in the following two ways:
1. allow `setxattr` on mounts without `SBLABEL_MNT` (which is all of
them if no policy is loaded yet)
2. don't try to set the in-core inode SID if we're not initialized;
instead leave it as `LABEL_INVALID` so that revalidation may be
attempted at a later time
Note the first hunk of this patch is functionally the same as a
previously discussed one[3], though it was part of a larger series which
wasn't accepted.
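For illustration, with this change an early-boot tool running before
policy load can label files directly; a minimal sketch using
setxattr(2), where the path and context below are hypothetical:

#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

/* Label a freshly created file from the initrd, before the policy is
 * loaded. Previously this failed with EOPNOTSUPP; with this patch the
 * xattr is written and the in-core inode label stays invalid until
 * revalidation after the policy is loaded. */
static int label_early(void)
{
        const char *ctx = "system_u:object_r:etc_t:s0"; /* hypothetical */

        if (setxattr("/sysroot/etc/example.conf", "security.selinux",
                     ctx, strlen(ctx) + 1, 0) != 0) {
                perror("setxattr");
                return -1;
        }
        return 0;
}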
Co-developed-by: Victor Kamensky <kamensky(a)cisco.com>
Signed-off-by: Victor Kamensky <kamensky(a)cisco.com>
Signed-off-by: Jonathan Lebon <jlebon(a)redhat.com>
[1] https://coreos.com/os/docs/latest/root-filesystem-placement.html
[2] https://github.com/coreos/fedora-coreos-tracker/issues/94
[3] https://www.spinics.net/lists/linux-initramfs/msg04593.html
---
security/selinux/hooks.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index 452254fd89f8..931546d80211 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -3305,7 +3305,7 @@ static int selinux_inode_setxattr(struct dentry *dentry, const char *name,
}
sbsec = inode->i_sb->s_security;
- if (!(sbsec->flags & SBLABEL_MNT))
+ if (!(sbsec->flags & SBLABEL_MNT) && selinux_state.initialized)
return -EOPNOTSUPP;
if (!inode_owner_or_capable(inode))
@@ -3387,6 +3387,15 @@ static void selinux_inode_post_setxattr(struct dentry *dentry, const char *name,
return;
}
+ if (!selinux_state.initialized) {
+ /* If we haven't even been initialized, then we can't validate
+ * against a policy, so leave the label as invalid. It may
+ * resolve to a valid label on the next revalidation try if
+ * we've since initialized.
+ */
+ return;
+ }
+
rc = security_context_to_sid_force(&selinux_state, value, size,
&newsid);
if (rc) {
--
2.27.GIT
[PATCH 1/5] cgroup: memcg: net: do not associate sock with unrelated cgroup
by Yang Yingliang, 04 Aug '20
From: Shakeel Butt <shakeelb(a)google.com>
[ Upstream commit e876ecc67db80dfdb8e237f71e5b43bb88ae549c ]
We are testing network memory accounting in our setup and noticed
inconsistent network memory usage and often unrelated cgroups network
usage correlates with testing workload. On further inspection, it
seems like mem_cgroup_sk_alloc() and cgroup_sk_alloc() are broken in
irq context specially for cgroup v1.
mem_cgroup_sk_alloc() and cgroup_sk_alloc() can be called in irq context
and kind of assume that this can only happen from sk_clone_lock(),
where the source sock object already has an associated cgroup. However,
in cgroup v1, where network memory accounting is opt-in, the source sock
can be unassociated with any cgroup, and the new cloned sock can get
associated with the unrelated interrupted task's cgroup.
Cgroup v2 can also suffer if the source sock object was created by
process in the root cgroup or if sk_alloc() is called in irq context.
The fix is to simply do nothing in interrupt context.
WARNING: Please note that about half of the TCP sockets are allocated
from IRQ context, so memory used by such sockets will not be
accounted by the memcg.
The stack trace of mem_cgroup_sk_alloc() from IRQ-context:
CPU: 70 PID: 12720 Comm: ssh Tainted: 5.6.0-smp-DEV #1
Hardware name: ...
Call Trace:
<IRQ>
dump_stack+0x57/0x75
mem_cgroup_sk_alloc+0xe9/0xf0
sk_clone_lock+0x2a7/0x420
inet_csk_clone_lock+0x1b/0x110
tcp_create_openreq_child+0x23/0x3b0
tcp_v6_syn_recv_sock+0x88/0x730
tcp_check_req+0x429/0x560
tcp_v6_rcv+0x72d/0xa40
ip6_protocol_deliver_rcu+0xc9/0x400
ip6_input+0x44/0xd0
? ip6_protocol_deliver_rcu+0x400/0x400
ip6_rcv_finish+0x71/0x80
ipv6_rcv+0x5b/0xe0
? ip6_sublist_rcv+0x2e0/0x2e0
process_backlog+0x108/0x1e0
net_rx_action+0x26b/0x460
__do_softirq+0x104/0x2a6
do_softirq_own_stack+0x2a/0x40
</IRQ>
do_softirq.part.19+0x40/0x50
__local_bh_enable_ip+0x51/0x60
ip6_finish_output2+0x23d/0x520
? ip6table_mangle_hook+0x55/0x160
__ip6_finish_output+0xa1/0x100
ip6_finish_output+0x30/0xd0
ip6_output+0x73/0x120
? __ip6_finish_output+0x100/0x100
ip6_xmit+0x2e3/0x600
? ipv6_anycast_cleanup+0x50/0x50
? inet6_csk_route_socket+0x136/0x1e0
? skb_free_head+0x1e/0x30
inet6_csk_xmit+0x95/0xf0
__tcp_transmit_skb+0x5b4/0xb20
__tcp_send_ack.part.60+0xa3/0x110
tcp_send_ack+0x1d/0x20
tcp_rcv_state_process+0xe64/0xe80
? tcp_v6_connect+0x5d1/0x5f0
tcp_v6_do_rcv+0x1b1/0x3f0
? tcp_v6_do_rcv+0x1b1/0x3f0
__release_sock+0x7f/0xd0
release_sock+0x30/0xa0
__inet_stream_connect+0x1c3/0x3b0
? prepare_to_wait+0xb0/0xb0
inet_stream_connect+0x3b/0x60
__sys_connect+0x101/0x120
? __sys_getsockopt+0x11b/0x140
__x64_sys_connect+0x1a/0x20
do_syscall_64+0x51/0x200
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Fixes: 2d7580738345 ("mm: memcontrol: consolidate cgroup socket tracking")
Fixes: d979a39d7242 ("cgroup: duplicate cgroup reference when cloning sockets")
Signed-off-by: Shakeel Butt <shakeelb(a)google.com>
Reviewed-by: Roman Gushchin <guro(a)fb.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/cgroup/cgroup.c | 4 ++++
mm/memcontrol.c | 4 ++++
2 files changed, 8 insertions(+)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 78ef274b036e..94f4d6a73d70 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5928,6 +5928,10 @@ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
return;
}
+ /* Don't associate the sock with unrelated interrupted task's cgroup. */
+ if (in_interrupt())
+ return;
+
rcu_read_lock();
while (true) {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5d62c1fada01..f90672e7b4e6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6328,6 +6328,10 @@ void mem_cgroup_sk_alloc(struct sock *sk)
return;
}
+ /* Do not associate the sock with unrelated interrupted task's memcg. */
+ if (in_interrupt())
+ return;
+
rcu_read_lock();
memcg = mem_cgroup_from_task(current);
if (memcg == root_mem_cgroup)
--
2.25.1
04 Aug '20
From: Chiqijun <chiqijun(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
Set the fault device link down so that the bond device can fail over
to the backup device.
Signed-off-by: Chiqijun <chiqijun(a)huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic/hinic_hw.h | 4 +-
.../net/ethernet/huawei/hinic/hinic_hwdev.c | 22 +++--
drivers/net/ethernet/huawei/hinic/hinic_lld.c | 83 +++++++++++++++++--
.../net/ethernet/huawei/hinic/hinic_main.c | 8 ++
4 files changed, 100 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw.h b/drivers/net/ethernet/huawei/hinic/hinic_hw.h
index 6afe31c9b80e..c4d48ccf8f2a 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw.h
@@ -565,7 +565,8 @@ union hinic_fault_hw_mgmt {
struct hinic_fault_event {
/* enum hinic_fault_type */
u8 type;
- u8 rsvd0[3];
+ u8 fault_level; /* sdk write fault level for uld event */
+ u8 rsvd0[2];
union hinic_fault_hw_mgmt event;
};
@@ -653,6 +654,7 @@ enum hinic_event_type {
HINIC_EVENT_MCTP_GET_HOST_INFO,
HINIC_EVENT_MULTI_HOST_MGMT,
HINIC_EVENT_INIT_MIGRATE_PF,
+ HINIC_EVENT_MGMT_WATCHDOG_EVENT,
};
struct hinic_event_info {
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c
index 058939d05346..2d6a547c4d34 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c
@@ -3537,9 +3537,10 @@ static void fault_event_handler(struct hinic_hwdev *hwdev, void *buf_in,
struct hinic_cmd_fault_event *fault_event;
struct hinic_event_info event_info;
struct hinic_fault_info_node *fault_node;
+ u8 fault_level;
if (in_size != sizeof(*fault_event)) {
- sdk_err(hwdev->dev_hdl, "Invalid fault event report, length: %d, should be %ld.\n",
+ sdk_err(hwdev->dev_hdl, "Invalid fault event report, length: %d, should be %ld\n",
in_size, sizeof(*fault_event));
return;
}
@@ -3547,11 +3548,16 @@ static void fault_event_handler(struct hinic_hwdev *hwdev, void *buf_in,
fault_event = buf_in;
fault_report_show(hwdev, &fault_event->event);
+ if (fault_event->event.type == HINIC_FAULT_SRC_HW_MGMT_CHIP)
+ fault_level = fault_event->event.event.chip.err_level;
+ else
+ fault_level = FAULT_LEVEL_FATAL;
+
if (hwdev->event_callback) {
event_info.type = HINIC_EVENT_FAULT;
memcpy(&event_info.info, &fault_event->event,
sizeof(event_info.info));
-
+ event_info.info.fault_level = fault_level;
hwdev->event_callback(hwdev->event_pri_handle, &event_info);
}
@@ -3567,11 +3573,7 @@ static void fault_event_handler(struct hinic_hwdev *hwdev, void *buf_in,
else if (fault_event->event.type == FAULT_TYPE_PHY_FAULT)
fault_node->info.fault_src = HINIC_FAULT_SRC_HW_PHY_FAULT;
- if (fault_node->info.fault_src == HINIC_FAULT_SRC_HW_MGMT_CHIP)
- fault_node->info.fault_lev =
- fault_event->event.event.chip.err_level;
- else
- fault_node->info.fault_lev = FAULT_LEVEL_FATAL;
+ fault_node->info.fault_lev = fault_level;
memcpy(&fault_node->info.fault_data.hw_mgmt, &fault_event->event.event,
sizeof(union hinic_fault_hw_mgmt));
@@ -3811,10 +3813,16 @@ static void mgmt_watchdog_timeout_event_handler(struct hinic_hwdev *hwdev,
void *buf_out, u16 *out_size)
{
struct hinic_fault_info_node *fault_node;
+ struct hinic_event_info event_info = { 0 };
sw_watchdog_timeout_info_show(hwdev, buf_in, in_size,
buf_out, out_size);
+ if (hwdev->event_callback) {
+ event_info.type = HINIC_EVENT_MGMT_WATCHDOG_EVENT;
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+ }
+
/* refresh history fault info */
fault_node = kzalloc(sizeof(*fault_node), GFP_KERNEL);
if (!fault_node) {
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_lld.c b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
index 044d0df996e5..412f4c8a93ea 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_lld.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
@@ -1793,18 +1793,11 @@ static void __multi_host_mgmt(struct hinic_pcidev *dev,
}
}
-void hinic_event_process(void *adapter, struct hinic_event_info *event)
+static void send_uld_dev_event(struct hinic_pcidev *dev,
+ struct hinic_event_info *event)
{
- struct hinic_pcidev *dev = adapter;
enum hinic_service_type type;
- if (event->type == HINIC_EVENT_FMW_ACT_NTC)
- return hinic_sync_time_to_fmw(dev);
- else if (event->type == HINIC_EVENT_MCTP_GET_HOST_INFO)
- return __mctp_get_host_info(dev, &event->mctp_info);
- else if (event->type == HINIC_EVENT_MULTI_HOST_MGMT)
- return __multi_host_mgmt(dev, &event->mhost_mgmt);
-
for (type = SERVICE_T_NIC; type < SERVICE_T_MAX; type++) {
if (test_and_set_bit(type, &dev->state)) {
sdk_warn(&dev->pcidev->dev, "Event: 0x%x can't handler, %s is in detach\n",
@@ -1819,6 +1812,78 @@ void hinic_event_process(void *adapter, struct hinic_event_info *event)
}
}
+static void send_event_to_all_pf(struct hinic_pcidev *dev,
+ struct hinic_event_info *event)
+{
+ struct hinic_pcidev *des_dev = NULL;
+
+ lld_dev_hold();
+ list_for_each_entry(des_dev, &dev->chip_node->func_list, node) {
+ if (test_bit(HINIC_FUNC_IN_REMOVE, &des_dev->flag))
+ continue;
+
+ if (hinic_func_type(des_dev->hwdev) == TYPE_VF)
+ continue;
+
+ send_uld_dev_event(des_dev, event);
+ }
+ lld_dev_put();
+}
+
+static void send_event_to_dst_pf(struct hinic_pcidev *dev, u16 func_id,
+ struct hinic_event_info *event)
+{
+ struct hinic_pcidev *des_dev = NULL;
+
+ lld_dev_hold();
+ list_for_each_entry(des_dev, &dev->chip_node->func_list, node) {
+ if (test_bit(HINIC_FUNC_IN_REMOVE, &des_dev->flag))
+ continue;
+
+ if (hinic_func_type(des_dev->hwdev) == TYPE_VF)
+ continue;
+
+ if (hinic_global_func_id(des_dev->hwdev) == func_id) {
+ send_uld_dev_event(des_dev, event);
+ break;
+ }
+ }
+ lld_dev_put();
+}
+
+void hinic_event_process(void *adapter, struct hinic_event_info *event)
+{
+ struct hinic_pcidev *dev = adapter;
+ u16 func_id;
+
+ switch (event->type) {
+ case HINIC_EVENT_FMW_ACT_NTC:
+ hinic_sync_time_to_fmw(dev);
+ break;
+ case HINIC_EVENT_MCTP_GET_HOST_INFO:
+ __mctp_get_host_info(dev, &event->mctp_info);
+ break;
+ case HINIC_EVENT_MULTI_HOST_MGMT:
+ __multi_host_mgmt(dev, &event->mhost_mgmt);
+ break;
+ case HINIC_EVENT_FAULT:
+ if (event->info.fault_level == FAULT_LEVEL_SERIOUS_FLR &&
+ event->info.event.chip.func_id < HINIC_MAX_PF_NUM) {
+ func_id = event->info.event.chip.func_id;
+ send_event_to_dst_pf(adapter, func_id, event);
+ } else {
+ send_uld_dev_event(adapter, event);
+ }
+ break;
+ case HINIC_EVENT_MGMT_WATCHDOG_EVENT:
+ send_event_to_all_pf(adapter, event);
+ break;
+ default:
+ send_uld_dev_event(adapter, event);
+ break;
+ }
+}
+
static int mapping_bar(struct pci_dev *pdev, struct hinic_pcidev *pci_adapter)
{
u32 db_dwqe_size;
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
index 8dc7b555f0bf..e0635287bf05 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
@@ -3026,8 +3026,13 @@ void nic_event(struct hinic_lld_dev *lld_dev, void *adapter,
break;
case HINIC_EVENT_HEART_LOST:
hinic_heart_lost(nic_dev);
+ hinic_link_status_change(nic_dev, false);
break;
case HINIC_EVENT_FAULT:
+ if (event->info.fault_level == FAULT_LEVEL_SERIOUS_FLR &&
+ event->info.event.chip.func_id ==
+ hinic_global_func_id(lld_dev->hwdev))
+ hinic_link_status_change(nic_dev, false);
break;
case HINIC_EVENT_DCB_STATE_CHANGE:
if (nic_dev->default_cos_id == event->dcb_state.default_cos)
@@ -3047,6 +3052,9 @@ void nic_event(struct hinic_lld_dev *lld_dev, void *adapter,
case HINIC_EVENT_PORT_MODULE_EVENT:
hinic_port_module_event_handler(nic_dev, event);
break;
+ case HINIC_EVENT_MGMT_WATCHDOG_EVENT:
+ hinic_link_status_change(nic_dev, false);
+ break;
default:
break;
}
--
2.25.1
Cong Wang (1):
qrtr: orphan socket in qrtr_release()
Dan Carpenter (1):
AX.25: Prevent integer overflows in connect and sendmsg
David Howells (1):
rxrpc: Fix sendmsg() returning EPIPE due to recvmsg() returning
ENODATA
Greg Kroah-Hartman (1):
Linux 4.19.136
Kuniyuki Iwashima (2):
udp: Copy has_conns in reuseport_grow().
udp: Improve load balancing for SO_REUSEPORT.
Miaohe Lin (1):
net: udp: Fix wrong clean up for IS_UDPLITE macro
Peilin Ye (2):
AX.25: Fix out-of-bounds read in ax25_connect()
AX.25: Prevent out-of-bounds read in ax25_sendmsg()
Peng Fan (1):
regmap: debugfs: check count when read regmap file
Subash Abhinov Kasiviswanathan (1):
dev: Defer free of skbs in flush_backlog
Wei Yongjun (1):
ip6_gre: fix null-ptr-deref in ip6gre_init_net()
Weilong Chen (1):
rtnetlink: Fix memory(net_device) leak when ->newlink fails
Xie He (1):
drivers/net/wan/x25_asy: Fix to make it work
Xin Long (2):
sctp: shrink stream outq only when new outcnt < old outcnt
sctp: shrink stream outq when fails to do addstream reconf
Xiongfeng Wang (1):
net-sysfs: add a newline when printing 'tx_timeout' by sysfs
Yuchung Cheng (1):
tcp: allow at most one TLP probe per flight
Makefile | 2 +-
drivers/base/regmap/regmap-debugfs.c | 6 ++++++
drivers/net/wan/x25_asy.c | 21 ++++++++++++++-------
include/linux/tcp.h | 4 +++-
net/ax25/af_ax25.c | 10 ++++++++--
net/core/dev.c | 2 +-
net/core/net-sysfs.c | 2 +-
net/core/rtnetlink.c | 3 ++-
net/core/sock_reuseport.c | 1 +
net/ipv4/tcp_input.c | 11 ++++++-----
net/ipv4/tcp_output.c | 13 ++++++++-----
net/ipv4/udp.c | 17 ++++++++++-------
net/ipv6/ip6_gre.c | 11 ++++++-----
net/ipv6/udp.c | 17 ++++++++++-------
net/qrtr/qrtr.c | 1 +
net/rxrpc/recvmsg.c | 2 +-
net/rxrpc/sendmsg.c | 2 +-
net/sctp/stream.c | 27 ++++++++++++++++++---------
18 files changed, 98 insertions(+), 54 deletions(-)
--
2.25.1
Alexander Lobakin (1):
qed: suppress "don't support RoCE & iWARP" flooding on HW init
Arnd Bergmann (1):
x86: math-emu: Fix up 'cmp' insn for clang ias
Ben Skeggs (1):
drm/nouveau/i2c/g94-: increase NV_PMGR_DP_AUXCTL_TRANSACTREQ timeout
Boris Burkov (1):
btrfs: fix mount failure caused by race with umount
Caiyuan Xie (1):
HID: alps: support devices with report id 2
Chen-Yu Tsai (1):
drm: sun4i: hdmi: Fix inverted HPD result
Christophe JAILLET (1):
hippi: Fix a size used in a 'pci_free_consistent()' in an error
handling path
Chu Lin (1):
hwmon: (adm1275) Make sure we are reading enough data for different
chips
Chunfeng Yun (1):
usb: xhci-mtk: fix the failure of bandwidth allocation
Cong Wang (1):
bonding: check return value of register_netdevice() in bond_newlink()
Cristian Marussi (1):
hwmon: (scmi) Fix potential buffer overflow in scmi_hwmon_probe()
Dinghao Liu (1):
dmaengine: tegra210-adma: Fix runtime PM imbalance on error
Douglas Anderson (1):
soc: qcom: rpmh: Dirt can only make you dirtier, not cleaner
Evgeny Novikov (2):
hwmon: (aspeed-pwm-tacho) Avoid possible buffer overflow
usb: gadget: udc: gr_udc: fix memleak on error handling path in
gr_ep_init()
Fangrui Song (1):
Makefile: Fix GCC_TOOLCHAIN_DIR prefix for Clang cross compilation
Federico Ricchiuto (1):
HID: i2c-hid: add Mediacom FlexBook edge13 to descriptor override
Filipe Manana (1):
btrfs: fix double free on ulist after backref resolution failure
Forest Crossman (1):
usb: xhci: Fix ASM2142/ASM3142 DMA addressing
Gavin Shan (1):
drivers/firmware/psci: Fix memory leakage in alloc_init_cpu_groups()
Geert Uytterhoeven (1):
ASoC: qcom: Drop HAS_DMA dependency to fix link failure
George Kennedy (1):
ax88172a: fix ax88172a_unbind() failures
Greg Kroah-Hartman (1):
Linux 4.19.135
Hans de Goede (3):
ASoC: rt5670: Correct RT5670_LDO_SEL_MASK
HID: apple: Disable Fn-key key-re-mapping on clone keyboards
ASoC: rt5670: Add new gpio1_is_ext_spk_en quirk and enable it on the
Lenovo Miix 2 10
Hugh Dickins (1):
mm/memcg: fix refcount error while moving and swapping
Ian Abbott (4):
staging: comedi: addi_apci_1032: check INSN_CONFIG_DIGITAL_TRIG shift
staging: comedi: ni_6527: fix INSN_CONFIG_DIGITAL_TRIG support
staging: comedi: addi_apci_1500: check INSN_CONFIG_DIGITAL_TRIG shift
staging: comedi: addi_apci_1564: check INSN_CONFIG_DIGITAL_TRIG shift
Ilya Katsnelson (1):
Input: synaptics - enable InterTouch for ThinkPad X1E 1st gen
Jacky Hu (1):
pinctrl: amd: fix npins for uart0 in kerncz_groups
Joerg Roedel (1):
x86, vmlinux.lds: Page-align end of ..page_aligned sections
John David Anglin (1):
parisc: Add atomic64_set_release() define to avoid CPU soft lockups
Jon Maloy (1):
tipc: clean up skb list lock handling on send path
Leonid Ravich (1):
dmaengine: ioat setting ioat timeout as module parameter
Liu Jian (2):
ieee802154: fix one possible memleak in adf7242_probe
mlxsw: destroy workqueue when trap_register in mlxsw_emad_init
Marc Kleine-Budde (1):
regmap: dev_get_regmap_match(): fix string comparison
Mark O'Donovan (1):
ath9k: Fix regression with Atheros 9271
Markus Theil (1):
mac80211: allow rx of mesh eapol frames with default rx key
Matthew Gerlach (1):
fpga: dfl: fix bug in port reset handshake
Matthew Howell (1):
serial: exar: Fix GPIO configuration for Sealevel cards based on
XR17V35X
Max Filippov (2):
xtensa: fix __sync_fetch_and_{and,or}_4 declarations
xtensa: update *pos in cpuinfo_op.next
Merlijn Wajer (1):
Input: add `SW_MACHINE_COVER`
Michael J. Ruhl (1):
io-mapping: indicate mapping failure
Miklos Szeredi (1):
fuse: fix weird page warning
Mikulas Patocka (1):
dm integrity: fix integrity recalculation that is improperly skipped
Muchun Song (1):
mm: memcg/slab: fix memory leak at non-root kmem_cache destroy
Navid Emamdoost (2):
gpio: arizona: handle pm_runtime_get_sync failure case
gpio: arizona: put pm_runtime in case of failure
Oleg Nesterov (1):
uprobes: Change handle_swbp() to send SIGTRAP with si_code=SI_KERNEL,
to fix GDB regression
Olga Kornievskaia (1):
SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct
IO compeletion")
Palmer Dabbelt (1):
RISC-V: Upgrade smp_mb__after_spinlock() to iorw,iorw
Paweł Gronowski (1):
drm/amdgpu: Fix NULL dereference in dpm sysfs handlers
Pi-Hsun Shih (1):
scripts/decode_stacktrace: strip basepath from all paths
Qiu Wenbo (1):
drm/amd/powerplay: fix a crash when overclocking Vega M
Qiujun Huang (1):
ath9k: Fix general protection fault in ath9k_hif_usb_rx_cb
Robbie Ko (1):
btrfs: fix page leaks after failure to lock page for delalloc
Rodrigo Rivas Costa (1):
HID: steam: fixes race in handling device list.
Roman Gushchin (1):
mm: memcg/slab: synchronize access to kmem_cache dying flag using a
spinlock
Rustam Kovhaev (1):
staging: wlan-ng: properly check endpoint types
Serge Semin (1):
serial: 8250_mtk: Fix high-speed baud rates clamping
Sergey Organov (1):
net: dp83640: fix SIOCSHWTSTAMP to update the struct with actual
configuration
Stefano Garzarella (1):
scripts/gdb: fix lx-symbols 'gdb.error' while loading modules
Steve French (1):
Revert "cifs: Fix the target file was deleted when rename failed."
Taehee Yoo (1):
bonding: check error value of register_netdevice() immediately
Takashi Iwai (1):
ALSA: info: Drop WARN_ON() from buffer NULL sanity check
Tetsuo Handa (3):
binder: Don't use mmput() from shrinker function.
fbdev: Detect integer underflow at "struct fbcon_ops"->clear_margins.
vt: Reject zero-sized screen buffer size.
Thomas Gleixner (1):
irqdomain/treewide: Keep firmware node unconditionally allocated
Tom Rix (2):
scsi: scsi_transport_spi: Fix function pointer check
net: sky2: initialize return of gm_phy_read
Vasundhara Volam (1):
bnxt_en: Fix race when modifying pause settings.
Vladimir Oltean (1):
spi: spi-fsl-dspi: Exit the ISR with IRQ_NONE when it's not ours
Wang Hai (2):
net: smc91x: Fix possible memory leak in smc_drv_probe()
net: ethernet: ave: Fix error returns in ave_init
Will Deacon (1):
arm64: Use test_tsk_thread_flag() for checking TIF_SINGLESTEP
Wolfram Sang (1):
i2c: rcar: always clear ICSAR to avoid side effects
Xie He (1):
drivers/net/wan/lapbether: Fixed the value of hard_header_len
Yang Yingliang (2):
IB/umem: fix reference count leak in ib_umem_odp_get()
serial: 8250: fix null-ptr-deref in serial8250_start_tx()
guodeqing (1):
ipvs: fix the connection sync failed in some cases
leilk.liu (1):
spi: mediatek: use correct SPI_CFG2_REG MACRO
Makefile | 4 +-
arch/arm64/kernel/debug-monitors.c | 4 +-
arch/parisc/include/asm/atomic.h | 2 +
arch/riscv/include/asm/barrier.h | 10 ++-
arch/x86/kernel/apic/io_apic.c | 10 +--
arch/x86/kernel/apic/msi.c | 18 +++--
arch/x86/kernel/apic/vector.c | 1 -
arch/x86/kernel/vmlinux.lds.S | 1 +
arch/x86/math-emu/wm_sqrt.S | 2 +-
arch/x86/platform/uv/uv_irq.c | 3 +-
arch/xtensa/kernel/setup.c | 3 +-
arch/xtensa/kernel/xtensa_ksyms.c | 4 +-
drivers/android/binder_alloc.c | 2 +-
drivers/base/regmap/regmap.c | 2 +-
drivers/dma/ioat/dma.c | 12 ++++
drivers/dma/ioat/dma.h | 2 -
drivers/dma/tegra210-adma.c | 5 +-
drivers/firmware/psci_checker.c | 5 +-
drivers/fpga/dfl-afu-main.c | 3 +-
drivers/gpio/gpio-arizona.c | 7 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c | 9 +--
.../drm/amd/powerplay/smumgr/vegam_smumgr.c | 10 +--
.../gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.c | 4 +-
.../drm/nouveau/nvkm/subdev/i2c/auxgm200.c | 4 +-
drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c | 2 +-
drivers/hid/hid-alps.c | 2 +
drivers/hid/hid-apple.c | 18 +++++
drivers/hid/hid-steam.c | 6 +-
drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c | 8 +++
drivers/hwmon/aspeed-pwm-tacho.c | 2 +
drivers/hwmon/pmbus/adm1275.c | 10 ++-
drivers/hwmon/scmi-hwmon.c | 2 +-
drivers/i2c/busses/i2c-rcar.c | 3 +
drivers/infiniband/core/umem_odp.c | 3 +-
drivers/input/mouse/synaptics.c | 1 +
drivers/iommu/amd_iommu.c | 5 +-
drivers/iommu/intel_irq_remapping.c | 2 +-
drivers/md/dm-integrity.c | 4 +-
drivers/md/dm.c | 17 +++++
drivers/net/bonding/bond_main.c | 10 ++-
drivers/net/bonding/bond_netlink.c | 3 +-
.../net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 5 +-
drivers/net/ethernet/marvell/sky2.c | 2 +-
drivers/net/ethernet/mellanox/mlxsw/core.c | 3 +-
drivers/net/ethernet/qlogic/qed/qed_cxt.c | 4 +-
drivers/net/ethernet/smsc/smc91x.c | 4 +-
drivers/net/ethernet/socionext/sni_ave.c | 2 +-
drivers/net/hippi/rrunner.c | 2 +-
drivers/net/ieee802154/adf7242.c | 4 +-
drivers/net/phy/dp83640.c | 4 ++
drivers/net/usb/ax88172a.c | 1 +
drivers/net/wan/lapbether.c | 9 ++-
drivers/net/wireless/ath/ath9k/hif_usb.c | 52 ++++++++++----
drivers/net/wireless/ath/ath9k/hif_usb.h | 5 ++
drivers/pci/controller/vmd.c | 5 +-
drivers/pinctrl/pinctrl-amd.h | 2 +-
drivers/scsi/scsi_transport_spi.c | 2 +-
drivers/soc/qcom/rpmh.c | 8 +--
drivers/spi/spi-fsl-dspi.c | 4 +-
drivers/spi/spi-mt65xx.c | 15 ++--
.../staging/comedi/drivers/addi_apci_1032.c | 20 ++++--
.../staging/comedi/drivers/addi_apci_1500.c | 24 +++++--
.../staging/comedi/drivers/addi_apci_1564.c | 20 ++++--
drivers/staging/comedi/drivers/ni_6527.c | 2 +-
drivers/staging/wlan-ng/prism2usb.c | 16 ++++-
drivers/tty/serial/8250/8250_core.c | 2 +-
drivers/tty/serial/8250/8250_exar.c | 12 +++-
drivers/tty/serial/8250/8250_mtk.c | 18 +++++
drivers/tty/vt/vt.c | 29 +++++---
drivers/usb/gadget/udc/gr_udc.c | 7 +-
drivers/usb/host/xhci-mtk-sch.c | 4 ++
drivers/usb/host/xhci-pci.c | 3 +
drivers/video/fbdev/core/bitblit.c | 4 +-
drivers/video/fbdev/core/fbcon_ccw.c | 4 +-
drivers/video/fbdev/core/fbcon_cw.c | 4 +-
drivers/video/fbdev/core/fbcon_ud.c | 4 +-
fs/btrfs/backref.c | 1 +
fs/btrfs/extent_io.c | 3 +-
fs/btrfs/volumes.c | 8 +++
fs/cifs/inode.c | 10 +--
fs/fuse/dev.c | 3 +-
fs/nfs/direct.c | 13 ++--
fs/nfs/file.c | 1 -
include/asm-generic/vmlinux.lds.h | 5 +-
include/linux/device-mapper.h | 1 +
include/linux/io-mapping.h | 5 +-
include/linux/mod_devicetable.h | 2 +-
include/sound/rt5670.h | 1 +
include/uapi/linux/input-event-codes.h | 3 +-
kernel/events/uprobes.c | 2 +-
mm/memcontrol.c | 4 +-
mm/slab_common.c | 50 ++++++++++---
net/mac80211/rx.c | 26 +++++++
net/netfilter/ipvs/ip_vs_sync.c | 12 ++--
net/tipc/bcast.c | 8 +--
net/tipc/group.c | 4 +-
net/tipc/link.c | 12 ++--
net/tipc/node.c | 7 +-
net/tipc/socket.c | 12 ++--
scripts/decode_stacktrace.sh | 4 +-
scripts/gdb/linux/symbols.py | 2 +-
sound/core/info.c | 4 +-
sound/soc/codecs/rt5670.c | 71 +++++++++++++++----
sound/soc/codecs/rt5670.h | 2 +-
sound/soc/qcom/Kconfig | 2 +-
105 files changed, 584 insertions(+), 225 deletions(-)
--
2.25.1
[PATCH] netfilter: nat: check the bounds of nf_nat_l3protos and nf_nat_l4protos
by Yang Yingliang, 29 Jul '20
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
family can be passed from user space and may be out of range for
nf_nat_l3protos or nf_nat_l4protos, so we need to check the family
when calling __nf_nat_l3proto_find() or __nf_nat_l4proto_find().
nfnetlink_parse_nat_setup() needs to return -EAGAIN if __nf_nat_l3proto_find()
returns NULL, so we return a distinct error number to tell this case apart.
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/ipv4/netfilter/nf_nat_l3proto_ipv4.c | 4 +++-
net/ipv6/netfilter/nf_nat_l3proto_ipv6.c | 4 +++-
net/netfilter/nf_nat_core.c | 21 ++++++++++++++++++++-
3 files changed, 26 insertions(+), 3 deletions(-)
diff --git a/net/ipv4/netfilter/nf_nat_l3proto_ipv4.c b/net/ipv4/netfilter/nf_nat_l3proto_ipv4.c
index 6115bf1ff6f0..52e3f9c7bc69 100644
--- a/net/ipv4/netfilter/nf_nat_l3proto_ipv4.c
+++ b/net/ipv4/netfilter/nf_nat_l3proto_ipv4.c
@@ -218,6 +218,8 @@ int nf_nat_icmp_reply_translation(struct sk_buff *skb,
return 1;
l4proto = __nf_nat_l4proto_find(NFPROTO_IPV4, inside->ip.protocol);
+ if (!l4proto)
+ return 1;
if (!nf_nat_ipv4_manip_pkt(skb, hdrlen + sizeof(inside->icmp),
l4proto, &ct->tuplehash[!dir].tuple, !manip))
return 0;
@@ -234,7 +236,7 @@ int nf_nat_icmp_reply_translation(struct sk_buff *skb,
/* Change outer to look like the reply to an incoming packet */
nf_ct_invert_tuplepr(&target, &ct->tuplehash[!dir].tuple);
l4proto = __nf_nat_l4proto_find(NFPROTO_IPV4, 0);
- if (!nf_nat_ipv4_manip_pkt(skb, 0, l4proto, &target, manip))
+ if (l4proto && !nf_nat_ipv4_manip_pkt(skb, 0, l4proto, &target, manip))
return 0;
return 1;
diff --git a/net/ipv6/netfilter/nf_nat_l3proto_ipv6.c b/net/ipv6/netfilter/nf_nat_l3proto_ipv6.c
index ca6d38698b1a..bf25263b2dcd 100644
--- a/net/ipv6/netfilter/nf_nat_l3proto_ipv6.c
+++ b/net/ipv6/netfilter/nf_nat_l3proto_ipv6.c
@@ -228,6 +228,8 @@ int nf_nat_icmpv6_reply_translation(struct sk_buff *skb,
return 1;
l4proto = __nf_nat_l4proto_find(NFPROTO_IPV6, inside->ip6.nexthdr);
+ if (!l4proto)
+ return 1;
if (!nf_nat_ipv6_manip_pkt(skb, hdrlen + sizeof(inside->icmp6),
l4proto, &ct->tuplehash[!dir].tuple, !manip))
return 0;
@@ -245,7 +247,7 @@ int nf_nat_icmpv6_reply_translation(struct sk_buff *skb,
nf_ct_invert_tuplepr(&target, &ct->tuplehash[!dir].tuple);
l4proto = __nf_nat_l4proto_find(NFPROTO_IPV6, IPPROTO_ICMPV6);
- if (!nf_nat_ipv6_manip_pkt(skb, 0, l4proto, &target, manip))
+ if (l4proto && !nf_nat_ipv6_manip_pkt(skb, 0, l4proto, &target, manip))
return 0;
return 1;
diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
index 2268b10a9dcf..f1576e46ee2c 100644
--- a/net/netfilter/nf_nat_core.c
+++ b/net/netfilter/nf_nat_core.c
@@ -64,12 +64,18 @@ struct nat_net {
inline const struct nf_nat_l3proto *
__nf_nat_l3proto_find(u8 family)
{
+ if (unlikely(family >= NFPROTO_NUMPROTO))
+ return ERR_PTR(-EPROTONOSUPPORT);
+
return rcu_dereference(nf_nat_l3protos[family]);
}
inline const struct nf_nat_l4proto *
__nf_nat_l4proto_find(u8 family, u8 protonum)
{
+ if (unlikely(family >= NFPROTO_NUMPROTO || nf_nat_l4protos[family] == NULL))
+ return NULL;
+
return rcu_dereference(nf_nat_l4protos[family][protonum]);
}
EXPORT_SYMBOL_GPL(__nf_nat_l4proto_find);
@@ -90,7 +96,7 @@ static void __nf_nat_decode_session(struct sk_buff *skb, struct flowi *fl)
family = nf_ct_l3num(ct);
l3proto = __nf_nat_l3proto_find(family);
- if (l3proto == NULL)
+ if (IS_ERR_OR_NULL(l3proto))
return;
dir = CTINFO2DIR(ctinfo);
@@ -333,8 +339,12 @@ get_unique_tuple(struct nf_conntrack_tuple *tuple,
rcu_read_lock();
l3proto = __nf_nat_l3proto_find(orig_tuple->src.l3num);
+ if (IS_ERR_OR_NULL(l3proto))
+ goto out;
l4proto = __nf_nat_l4proto_find(orig_tuple->src.l3num,
orig_tuple->dst.protonum);
+ if (!l4proto)
+ goto out;
/* 1) If this srcip/proto/src-proto-part is currently mapped,
* and that same mapping gives a unique tuple within the given
@@ -509,8 +519,12 @@ static unsigned int nf_nat_manip_pkt(struct sk_buff *skb, struct nf_conn *ct,
nf_ct_invert_tuplepr(&target, &ct->tuplehash[!dir].tuple);
l3proto = __nf_nat_l3proto_find(target.src.l3num);
+ if (IS_ERR_OR_NULL(l3proto))
+ return NF_DROP;
l4proto = __nf_nat_l4proto_find(target.src.l3num,
target.dst.protonum);
+ if (!l4proto)
+ return NF_DROP;
if (!l3proto->manip_pkt(skb, 0, l4proto, &target, mtype))
return NF_DROP;
@@ -816,6 +830,9 @@ static int nfnetlink_parse_nat_proto(struct nlattr *attr,
return err;
l4proto = __nf_nat_l4proto_find(nf_ct_l3num(ct), nf_ct_protonum(ct));
+ if (!l4proto)
+ return -EPROTONOSUPPORT;
+
if (l4proto->nlattr_to_range)
err = l4proto->nlattr_to_range(tb, range);
@@ -876,6 +893,8 @@ nfnetlink_parse_nat_setup(struct nf_conn *ct,
l3proto = __nf_nat_l3proto_find(nf_ct_l3num(ct));
if (l3proto == NULL)
return -EAGAIN;
+ if (IS_ERR(l3proto))
+ return PTR_ERR(l3proto);
/* No NAT information has been passed, allocate the null-binding */
if (attr == NULL)
--
2.25.1
[PATCH 1/6] net/hinic: Delete unused functions and macro definitions in ossl
by Yang Yingliang, 29 Jul '20
From: Chiqijun <chiqijun(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
Delete unused functions and macro definitions in ossl.
Signed-off-by: Chiqijun <chiqijun(a)huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic/ossl_knl.h | 16 --
.../ethernet/huawei/hinic/ossl_knl_linux.c | 85 ------
.../ethernet/huawei/hinic/ossl_knl_linux.h | 255 ------------------
.../net/ethernet/huawei/hinic/ossl_types.h | 41 ---
4 files changed, 397 deletions(-)
delete mode 100644 drivers/net/ethernet/huawei/hinic/ossl_types.h
diff --git a/drivers/net/ethernet/huawei/hinic/ossl_knl.h b/drivers/net/ethernet/huawei/hinic/ossl_knl.h
index 7f19deba7d4c..1dae5ca63d04 100644
--- a/drivers/net/ethernet/huawei/hinic/ossl_knl.h
+++ b/drivers/net/ethernet/huawei/hinic/ossl_knl.h
@@ -18,22 +18,6 @@
#include "ossl_knl_linux.h"
-#if defined(__WIN__) || defined(__VMWARE__)
-#define __WIN_OR_VMWARE__
-#endif
-
-#if defined(__WIN__) || defined(__VMWARE__) || defined(__UEFI__)
-#define __WIN_OR_VMWARE_OR_UEFI__
-#endif
-
-#if (defined(__WIN__) || defined(__VMWARE__)) && !defined(__HIFC__)
-#define __WIN_OR_VMWARE_AND_NONHIFC__
-#endif
-
-#if defined(__WIN__) || defined(__UEFI__)
-#define __WIN_OR_UEFI__
-#endif
-
#define sdk_err(dev, format, ...) \
dev_err(dev, "[COMM]"format, ##__VA_ARGS__)
#define sdk_warn(dev, format, ...) \
diff --git a/drivers/net/ethernet/huawei/hinic/ossl_knl_linux.c b/drivers/net/ethernet/huawei/hinic/ossl_knl_linux.c
index 798faa6f401d..daed2eca32c1 100644
--- a/drivers/net/ethernet/huawei/hinic/ossl_knl_linux.c
+++ b/drivers/net/ethernet/huawei/hinic/ossl_knl_linux.c
@@ -17,91 +17,6 @@
#include "ossl_knl_linux.h"
-#define OSSL_MINUTE_BASE (60)
-
-sdk_file *file_creat(const char *file_name)
-{
- return filp_open(file_name, O_CREAT | O_RDWR | O_APPEND, 0);
-}
-
-sdk_file *file_open(const char *file_name)
-{
- return filp_open(file_name, O_RDONLY, 0);
-}
-
-void file_close(sdk_file *file_handle)
-{
- (void)filp_close(file_handle, NULL);
-}
-
-u32 get_file_size(sdk_file *file_handle)
-{
- struct inode *file_inode;
-
- file_inode = file_handle->f_inode;
-
- return (u32)(file_inode->i_size);
-}
-
-void set_file_position(sdk_file *file_handle, u32 position)
-{
- file_handle->f_pos = position;
-}
-
-int file_read(sdk_file *file_handle, char *log_buffer,
- u32 rd_length, u32 *file_pos)
-{
- return (int)file_handle->f_op->read(file_handle, log_buffer,
- rd_length, &file_handle->f_pos);
-}
-
-u32 file_write(sdk_file *file_handle, char *log_buffer, u32 wr_length)
-{
- return (u32)file_handle->f_op->write(file_handle, log_buffer,
- wr_length, &file_handle->f_pos);
-}
-
-static int _linux_thread_func(void *thread)
-{
- struct sdk_thread_info *info = (struct sdk_thread_info *)thread;
-
- while (!kthread_should_stop())
- info->thread_fn(info->data);
-
- return 0;
-}
-
-int creat_thread(struct sdk_thread_info *thread_info)
-{
- thread_info->thread_obj = kthread_run(_linux_thread_func,
- thread_info, thread_info->name);
- if (!thread_info->thread_obj)
- return -EFAULT;
-
- return 0;
-}
-
-void stop_thread(struct sdk_thread_info *thread_info)
-{
- if (thread_info->thread_obj)
- (void)kthread_stop(thread_info->thread_obj);
-}
-
-void utctime_to_localtime(u64 utctime, u64 *localtime)
-{
- *localtime = utctime - sys_tz.tz_minuteswest * OSSL_MINUTE_BASE;
-}
-
-#ifndef HAVE_TIMER_SETUP
-void initialize_timer(void *adapter_hdl, struct timer_list *timer)
-{
- if (!adapter_hdl || !timer)
- return;
-
- init_timer(timer);
-}
-#endif
-
void add_to_timer(struct timer_list *timer, long period)
{
if (!timer)
diff --git a/drivers/net/ethernet/huawei/hinic/ossl_knl_linux.h b/drivers/net/ethernet/huawei/hinic/ossl_knl_linux.h
index f99e1dbf3fda..dd5d0fc949d7 100644
--- a/drivers/net/ethernet/huawei/hinic/ossl_knl_linux.h
+++ b/drivers/net/ethernet/huawei/hinic/ossl_knl_linux.h
@@ -29,16 +29,6 @@
#include <linux/udp.h>
#include <linux/highmem.h>
-/* UTS_RELEASE is in a different header starting in kernel 2.6.18 */
-#ifndef UTS_RELEASE
-/* utsrelease.h changed locations in 2.6.33 */
-#include <generated/utsrelease.h>
-#endif
-
-#ifndef NETIF_F_SCTP_CSUM
-#define NETIF_F_SCTP_CSUM 0
-#endif
-
#ifndef __GFP_COLD
#define __GFP_COLD 0
#endif
@@ -74,99 +64,15 @@
#define ADVERTISED_25000baseCR_Full 0
#endif
-#ifndef ETHTOOL_GLINKSETTINGS
-enum ethtool_link_mode_bit_indices {
- ETHTOOL_LINK_MODE_1000baseKX_Full_BIT = 17,
- ETHTOOL_LINK_MODE_10000baseKR_Full_BIT = 19,
- ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT = 23,
- ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT = 24,
- ETHTOOL_LINK_MODE_25000baseCR_Full_BIT = 31,
- ETHTOOL_LINK_MODE_25000baseKR_Full_BIT = 32,
- ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT = 36,
- ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT = 38,
-};
-#endif
-
#ifndef RHEL_RELEASE_VERSION
#define RHEL_RELEASE_VERSION(a, b) (((a) << 8) + (b))
#endif
-#ifndef AX_RELEASE_VERSION
-#define AX_RELEASE_VERSION(a, b) (((a) << 8) + (b))
-#endif
-
-#ifndef AX_RELEASE_CODE
-#define AX_RELEASE_CODE 0
-#endif
-
-#if (AX_RELEASE_CODE && AX_RELEASE_CODE == AX_RELEASE_VERSION(3, 0))
-#define RHEL_RELEASE_CODE RHEL_RELEASE_VERSION(5, 0)
-#elif (AX_RELEASE_CODE && AX_RELEASE_CODE == AX_RELEASE_VERSION(3, 1))
-#define RHEL_RELEASE_CODE RHEL_RELEASE_VERSION(5, 1)
-#elif (AX_RELEASE_CODE && AX_RELEASE_CODE == AX_RELEASE_VERSION(3, 2))
-#define RHEL_RELEASE_CODE RHEL_RELEASE_VERSION(5, 3)
-#endif
#ifndef RHEL_RELEASE_CODE
/* NOTE: RHEL_RELEASE_* introduced in RHEL4.5. */
#define RHEL_RELEASE_CODE 0
#endif
-/* RHEL 7 didn't backport the parameter change in
- * create_singlethread_workqueue.
- * If/when RH corrects this we will want to tighten up the version check.
- */
-#if (RHEL_RELEASE_CODE && RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7, 0))
-#undef create_singlethread_workqueue
-#define create_singlethread_workqueue(name) \
- alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, name)
-#endif
-
-/* Ubuntu Release ABI is the 4th digit of their kernel version. You can find
- * it in /usr/src/linux/$(uname -r)/include/generated/utsrelease.h for new
- * enough versions of Ubuntu. Otherwise you can simply see it in the output of
- * uname as the 4th digit of the kernel. The UTS_UBUNTU_RELEASE_ABI is not in
- * the linux-source package, but in the linux-headers package. It begins to
- * appear in later releases of 14.04 and 14.10.
- *
- * Ex:
- * <Ubuntu 14.04.1>
- * $uname -r
- * 3.13.0-45-generic
- * ABI is 45
- *
- * <Ubuntu 14.10>
- * $uname -r
- * 3.16.0-23-generic
- * ABI is 23.
- */
-#ifndef UTS_UBUNTU_RELEASE_ABI
-#define UTS_UBUNTU_RELEASE_ABI 0
-#define UBUNTU_VERSION_CODE 0
-#else
-
-#if UTS_UBUNTU_RELEASE_ABI > 255
-#error UTS_UBUNTU_RELEASE_ABI is too large...
-#endif /* UTS_UBUNTU_RELEASE_ABI > 255 */
-
-#endif
-
-/* Note that the 3rd digit is always zero, and will be ignored. This is
- * because Ubuntu kernels are based on x.y.0-ABI values, and while their linux
- * version codes are 3 digit, this 3rd digit is superseded by the ABI value.
- */
-#define UBUNTU_VERSION(a, b, c, d) ((KERNEL_VERSION(a, b, 0) << 8) + (d))
-
-#ifndef DEEPIN_PRODUCT_VERSION
-#define DEEPIN_PRODUCT_VERSION(a, b, c) (((a) << 16) + ((b) << 8) + (c))
-#endif
-
-#ifdef CONFIG_DEEPIN_KERNEL
-#endif
-
-#ifndef DEEPIN_VERSION_CODE
-#define DEEPIN_VERSION_CODE 0
-#endif
-
/* SuSE version macros are the same as Linux kernel version macro. */
#ifndef SLE_VERSION
#define SLE_VERSION(a, b, c) KERNEL_VERSION(a, b, c)
@@ -199,13 +105,6 @@ enum ethtool_link_mode_bit_indices {
#define SLE_LOCALVERSION_CODE 0
#endif /* SLE_LOCALVERSION_CODE */
-#ifndef ALIGN_DOWN
-#ifndef __ALIGN_KERNEL
-#define __ALIGN_KERNEL(x, a) __ALIGN_MASK(x, (typeof(x))(a) - 1)
-#endif
-#define ALIGN_DOWN(x, a) __ALIGN_KERNEL((x) - ((a) - 1), (a))
-#endif
-
/*****************************************************************************/
#define ETH_TYPE_TRANS_SETS_DEV
#define HAVE_NETDEV_STATS_IN_NETDEV
@@ -224,23 +123,9 @@ enum ethtool_link_mode_bit_indices {
#define HAVE_NDO_SET_FEATURES
#endif /* RHEL >= 6.6 && RHEL < 7.0 */
-/*****************************************************************************/
-
-/*****************************************************************************/
-#ifndef HAVE_SET_RX_MODE
-#define HAVE_SET_RX_MODE
-#endif
-#define HAVE_INET6_IFADDR_LIST
-
-/*****************************************************************************/
-
#define HAVE_NDO_GET_STATS64
/*****************************************************************************/
-
-#ifndef HAVE_MQPRIO
-#define HAVE_MQPRIO
-#endif
#ifndef HAVE_SETUP_TC
#define HAVE_SETUP_TC
#endif
@@ -248,45 +133,21 @@ enum ethtool_link_mode_bit_indices {
#ifndef HAVE_NDO_SET_FEATURES
#define HAVE_NDO_SET_FEATURES
#endif
-#define HAVE_IRQ_AFFINITY_NOTIFY
/*****************************************************************************/
#define HAVE_ETHTOOL_SET_PHYS_ID
/*****************************************************************************/
-#define HAVE_NETDEV_WANTED_FEAUTES
-
-/*****************************************************************************/
-#ifndef HAVE_PCI_DEV_FLAGS_ASSIGNED
-#define HAVE_PCI_DEV_FLAGS_ASSIGNED
#define HAVE_VF_SPOOFCHK_CONFIGURE
-#endif
-#ifndef HAVE_SKB_L4_RXHASH
-#define HAVE_SKB_L4_RXHASH
-#endif
#define HAVE_NDO_SET_VF_TRUST
-/*****************************************************************************/
-#define HAVE_ETHTOOL_GRXFHINDIR_SIZE
-#define HAVE_INT_NDO_VLAN_RX_ADD_VID
-#ifdef ETHTOOL_SRXNTUPLE
-#undef ETHTOOL_SRXNTUPLE
-#endif
-
/*****************************************************************************/
#include <linux/kconfig.h>
#define _kc_kmap_atomic(page) kmap_atomic(page)
#define _kc_kunmap_atomic(addr) kunmap_atomic(addr)
-/*****************************************************************************/
-#include <linux/of_net.h>
-#define HAVE_FDB_OPS
-#define HAVE_ETHTOOL_GET_TS_INFO
-
-/*****************************************************************************/
-
/*****************************************************************************/
#define HAVE_NAPI_GRO_FLUSH_OLD
@@ -295,62 +156,11 @@ enum ethtool_link_mode_bit_indices {
#define HAVE_SRIOV_CONFIGURE
#endif
-/*****************************************************************************/
-#define HAVE_ENCAP_TSO_OFFLOAD
-#define HAVE_SKB_INNER_NETWORK_HEADER
-#if (RHEL_RELEASE_CODE && \
- (RHEL_RELEASE_VERSION(7, 0) <= RHEL_RELEASE_CODE) && \
- (RHEL_RELEASE_VERSION(8, 0) > RHEL_RELEASE_CODE))
-#define HAVE_RHEL7_PCI_DRIVER_RH
-#if (RHEL_RELEASE_VERSION(7, 2) <= RHEL_RELEASE_CODE)
-#define HAVE_RHEL7_PCI_RESET_NOTIFY
-#endif /* RHEL >= 7.2 */
-#if (RHEL_RELEASE_VERSION(7, 3) <= RHEL_RELEASE_CODE)
-#define HAVE_GENEVE_RX_OFFLOAD
-#if !defined(HAVE_UDP_ENC_TUNNEL) && IS_ENABLED(CONFIG_GENEVE)
-#define HAVE_UDP_ENC_TUNNEL
-#endif
-#ifdef ETHTOOL_GLINKSETTINGS
-/* pay attention pangea platform when use this micro */
-#define HAVE_ETHTOOL_25G_BITS
-#endif /* ETHTOOL_GLINKSETTINGS */
-#endif /* RHEL >= 7.3 */
-
-/* new hooks added to net_device_ops_extended in RHEL7.4 */
-#if (RHEL_RELEASE_VERSION(7, 4) <= RHEL_RELEASE_CODE)
-#define HAVE_RHEL7_NETDEV_OPS_EXT_NDO_UDP_TUNNEL
-#define HAVE_UDP_ENC_RX_OFFLOAD
-#endif /* RHEL >= 7.4 */
-
-#if (RHEL_RELEASE_VERSION(7, 5) <= RHEL_RELEASE_CODE)
-#define HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
-#endif /* RHEL > 7.5 */
-
-#endif /* RHEL >= 7.0 && RHEL < 8.0 */
-
/*****************************************************************************/
#define HAVE_NDO_SET_VF_LINK_STATE
-#define HAVE_SKB_INNER_PROTOCOL
-#define HAVE_MPLS_FEATURES
/*****************************************************************************/
-#if (SLE_VERSION_CODE && SLE_VERSION_CODE >= SLE_VERSION(12, 0, 0))
-#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
-#endif
-#define HAVE_NDO_GET_PHYS_PORT_ID
-#define HAVE_NETIF_SET_XPS_QUEUE_CONST_MASK
-
-/*****************************************************************************/
-#define HAVE_VXLAN_CHECKS
-#if (UBUNTU_VERSION_CODE && UBUNTU_VERSION_CODE >= UBUNTU_VERSION(3, 13, 0, 24))
-#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
-#else
#define HAVE_NDO_SELECT_QUEUE_ACCEL
-#endif
-#define HAVE_NET_GET_RANDOM_ONCE
-#define HAVE_HWMON_DEVICE_REGISTER_WITH_GROUPS
-
-/*****************************************************************************/
#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
@@ -364,32 +174,14 @@ enum ethtool_link_mode_bit_indices {
/*****************************************************************************/
#define HAVE_RXFH_HASHFUNC
-/*****************************************************************************/
-
-/****************************************************************/
-
-/****************************************************************/
-
-/****************************************************************/
-
-/****************************************************************/
-
-/****************************************************************/
-
-#define HAVE_IO_MAP_WC_SIZE
-
/*****************************************************************************/
#define HAVE_NETDEVICE_MIN_MAX_MTU
/*****************************************************************************/
#define HAVE_VOID_NDO_GET_STATS64
-#define HAVE_VM_OPS_FAULT_NO_VMA
/*****************************************************************************/
-#define HAVE_HWTSTAMP_FILTER_NTP_ALL
#define HAVE_NDO_SETUP_TC_CHAIN_INDEX
-#define HAVE_PCI_ERROR_HANDLER_RESET_PREPARE
-#define HAVE_PTP_CLOCK_DO_AUX_WORK
/*****************************************************************************/
#define HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
@@ -416,56 +208,9 @@ enum ethtool_link_mode_bit_indices {
#define HAVE_ENCAPSULATION_CSUM
-#ifndef eth_zero_addr
-static inline void __kc_eth_zero_addr(u8 *addr)
-{
- memset(addr, 0x00, ETH_ALEN);
-}
-
-#define eth_zero_addr(_addr) __kc_eth_zero_addr(_addr)
-#endif
-
-#ifndef netdev_hw_addr_list_for_each
-#define netdev_hw_addr_list_for_each(ha, l) \
- list_for_each_entry(ha, &(l)->list, list)
-#endif
-
#define spin_lock_deinit(lock)
-typedef struct file sdk_file;
-
-sdk_file *file_creat(const char *file_name);
-
-sdk_file *file_open(const char *file_name);
-
-void file_close(sdk_file *file_handle);
-
-u32 get_file_size(sdk_file *file_handle);
-
-void set_file_position(sdk_file *file_handle, u32 position);
-
-int file_read(sdk_file *file_handle, char *log_buffer,
- u32 rd_length, u32 *file_pos);
-
-u32 file_write(sdk_file *file_handle, char *log_buffer, u32 wr_length);
-
-struct sdk_thread_info {
- struct task_struct *thread_obj;
- char *name;
- void (*thread_fn)(void *x);
- void *thread_event;
- void *data;
-};
-
-int creat_thread(struct sdk_thread_info *thread_info);
-
-void stop_thread(struct sdk_thread_info *thread_info);
-
#define destroy_work(work)
-void utctime_to_localtime(u64 utctime, u64 *localtime);
-#ifndef HAVE_TIMER_SETUP
-void initialize_timer(void *adapter_hdl, struct timer_list *timer);
-#endif
void add_to_timer(struct timer_list *timer, long period);
void stop_timer(struct timer_list *timer);
void delete_timer(struct timer_list *timer);
diff --git a/drivers/net/ethernet/huawei/hinic/ossl_types.h b/drivers/net/ethernet/huawei/hinic/ossl_types.h
deleted file mode 100644
index b8591541bb7c..000000000000
--- a/drivers/net/ethernet/huawei/hinic/ossl_types.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0*/
-/* Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- * Statement:
- * It must include "ossl_knl.h" or "ossl_user.h" before include "ossl_types.h"
- */
-
-#ifndef _OSSL_TYPES_H
-#define _OSSL_TYPES_H
-
-#undef NULL
-#if defined(__cplusplus)
-#define NULL 0
-#else
-#define NULL ((void *)0)
-#endif
-
-#define uda_handle void *
-
-#define UDA_TRUE 1
-#define UDA_FALSE 0
-
-#ifndef UINT8_MAX
-#define UINT8_MAX (u8)(~((u8)0)) /* 0xFF */
-#define UINT16_MAX (u16)(~((u16)0)) /* 0xFFFF */
-#define UINT32_MAX (u32)(~((u32)0)) /* 0xFFFFFFFF */
-#define UINT64_MAX (u64)(~((u64)0)) /* 0xFFFFFFFFFFFFFFFF */
-#define ASCII_MAX (0x7F)
-#endif
-
-#endif /* OSSL_TYPES_H */
--
2.25.1
[PATCH 1/3] arm64: Add support for SB barrier and patch in over DSB; ISB sequences
by Yang Yingliang 29 Jul '20
From: Will Deacon <will.deacon(a)arm.com>
mainline inclusion
from v5.0-rc1
commit bd4fb6d270bc423a9a4098108784f7f9254c4e6d
category: feature
bugzilla: 30102
CVE: NA
-------------------------------------------------
We currently use a DSB; ISB sequence to inhibit speculation in set_fs().
Whilst this works for current CPUs, future CPUs may implement a new SB
barrier instruction which acts as an architected speculation barrier.
On CPUs that support it, patch in an SB; NOP sequence over the DSB; ISB
sequence and advertise the presence of the new instruction to userspace.
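As a rough illustration (not part of this patch; array, idx and size
are hypothetical), code that must not run ahead of a bounds check
under speculation would use the new barrier like this:

    if (idx < size) {
        /* patched to SB; NOP on CPUs that have it,
         * falls back to DSB NSH; ISB elsewhere */
        spec_bar();
        val = array[idx];
    }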
Conflicts:
arch/arm64/include/asm/barrier.h
arch/arm64/include/asm/cpucaps.h
arch/arm64/include/asm/sysreg.h
arch/arm64/kernel/cpufeature.c
[wangshaobo: adjust context]
Signed-off-by: Will Deacon <will.deacon(a)arm.com>
Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/include/asm/assembler.h | 13 +++++++++++++
arch/arm64/include/asm/barrier.h | 4 ++++
arch/arm64/include/asm/cpucaps.h | 3 ++-
arch/arm64/include/asm/sysreg.h | 6 ++++++
arch/arm64/include/asm/uaccess.h | 3 +--
arch/arm64/include/uapi/asm/hwcap.h | 1 +
arch/arm64/kernel/cpufeature.c | 12 ++++++++++++
arch/arm64/kernel/cpuinfo.c | 1 +
8 files changed, 40 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index becb5454151e..5446b34f4a26 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -118,6 +118,19 @@
hint #20
.endm
+/*
+ * Speculation barrier
+ */
+ .macro sb
+alternative_if_not ARM64_HAS_SB
+ dsb nsh
+ isb
+alternative_else
+ SB_BARRIER_INSN
+ nop
+alternative_endif
+ .endm
+
/*
* Sanitise a 64-bit bounded index wrt speculation, returning zero if out
* of bounds.
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 3cae78c1ce33..1b3e1194b511 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -34,6 +34,10 @@
#define psb_csync() asm volatile("hint #17" : : : "memory")
#define csdb() asm volatile("hint #20" : : : "memory")
+#define spec_bar() asm volatile(ALTERNATIVE("dsb nsh\nisb\n", \
+ SB_BARRIER_INSN"nop\n", \
+ ARM64_HAS_SB))
+
#ifdef CONFIG_ARM64_PSEUDO_NMI
#define pmr_sync() \
do { \
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index d56d815b9960..cd57225af16e 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -58,7 +58,8 @@
#define ARM64_SSBS 37
#define ARM64_CLEARPAGE_STNP 38
#define ARM64_WORKAROUND_1542419 39
+#define ARM64_HAS_SB 40
-#define ARM64_NCAPS 40
+#define ARM64_NCAPS 41
#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 0ba65a14d398..9d4235995ebd 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -97,6 +97,11 @@
#define SET_PSTATE_SSBS(x) __emit_inst(0xd5000000 | REG_PSTATE_SSBS_IMM | \
(!!x)<<8 | 0x1f)
+#define __SYS_BARRIER_INSN(CRm, op2, Rt) \
+ __emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f))
+
+#define SB_BARRIER_INSN __SYS_BARRIER_INSN(0, 7, 31)
+
#define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2)
#define SYS_DC_CSW sys_insn(1, 0, 7, 10, 2)
#define SYS_DC_CISW sys_insn(1, 0, 7, 14, 2)
@@ -524,6 +529,7 @@
#define ID_AA64ISAR0_AES_SHIFT 4
/* id_aa64isar1 */
+#define ID_AA64ISAR1_SB_SHIFT 36
#define ID_AA64ISAR1_LRCPC_SHIFT 20
#define ID_AA64ISAR1_FCMA_SHIFT 16
#define ID_AA64ISAR1_JSCVT_SHIFT 12
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index bde376077167..35482c9076db 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -46,8 +46,7 @@ static inline void set_fs(mm_segment_t fs)
* Prevent a mispredicted conditional call to set_fs from forwarding
* the wrong address limit to access_ok under speculation.
*/
- dsb(nsh);
- isb();
+ spec_bar();
/* On user-mode return, check fs is correct */
set_thread_flag(TIF_FSCHECK);
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 2bcd6e4f3474..7784f7cba16c 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -49,5 +49,6 @@
#define HWCAP_ILRCPC (1 << 26)
#define HWCAP_FLAGM (1 << 27)
#define HWCAP_SSBS (1 << 28)
+#define HWCAP_SB (1 << 29)
#endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 8b84a47d0112..64b85f2015c6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -141,6 +141,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
};
static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SB_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
@@ -1501,6 +1502,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = can_clearpage_use_stnp,
},
+ {
+ .desc = "Speculation barrier (SB)",
+ .capability = ARM64_HAS_SB,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .matches = has_cpuid_feature,
+ .sys_reg = SYS_ID_AA64ISAR1_EL1,
+ .field_pos = ID_AA64ISAR1_SB_SHIFT,
+ .sign = FTR_UNSIGNED,
+ .min_field_value = 1,
+ },
{},
};
@@ -1555,6 +1566,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ILRCPC),
+ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_SB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SB),
HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_USCAT),
#ifdef CONFIG_ARM64_SVE
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index bcc2831399cb..7cb0b08ab0a7 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -82,6 +82,7 @@ static const char *const hwcap_str[] = {
"ilrcpc",
"flagm",
"ssbs",
+ "sb",
NULL
};
--
2.25.1
[PATCH] kernel/notifier.c: intercept duplicate registrations to avoid infinite loops
by Yang Yingliang 29 Jul '20
From: Xiaoming Ni <nixiaoming(a)huawei.com>
mainline inclusion
from mainline-v5.5-rc1
commit 1a50cb80f219c44adb6265f5071b81fc3c1deced
category: bugfix
bugzilla: NA
CVE: NA
---------------------------------------------
[ Upstream commit 1a50cb80f219c44adb6265f5071b81fc3c1deced ]
Registering the same notifier to a hook repeatedly can cause the hook
list to form a ring or lose other members of the list.
case1: An infinite loop in notifier_chain_register() can cause soft lockup
atomic_notifier_chain_register(&test_notifier_list, &test1);
atomic_notifier_chain_register(&test_notifier_list, &test1);
atomic_notifier_chain_register(&test_notifier_list, &test2);
case2: An infinite loop in notifier_chain_register() can cause soft lockup
atomic_notifier_chain_register(&test_notifier_list, &test1);
atomic_notifier_chain_register(&test_notifier_list, &test1);
atomic_notifier_call_chain(&test_notifier_list, 0, NULL);
case3: lose other hook test2
atomic_notifier_chain_register(&test_notifier_list, &test1);
atomic_notifier_chain_register(&test_notifier_list, &test2);
atomic_notifier_chain_register(&test_notifier_list, &test1);
case4: Unregister returns 0, but the hook is still in the linked list
       and is not really unregistered. If notifier_call_chain() is
       called after the module (ko) is unloaded, it triggers an oops.
If the system is configured with softlockup_panic and the same hook is
repeatedly registered on the panic_notifier_list, it will cause a
panic loop.
Add a check in notifier_chain_register() to intercept duplicate
registrations and avoid infinite loops.
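As a minimal sketch of how the ring forms (the callback body is
hypothetical; test_notifier_list and test1 follow the cases above):

    static int test_cb(struct notifier_block *nb, unsigned long action,
                       void *data)
    {
        return NOTIFY_OK;
    }

    static struct notifier_block test1 = { .notifier_call = test_cb };

    atomic_notifier_chain_register(&test_notifier_list, &test1);
    atomic_notifier_chain_register(&test_notifier_list, &test1);
    /* the second insert links test1.next back to test1 itself, so any
     * later walk of the chain spins forever */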
Link: http://lkml.kernel.org/r/1568861888-34045-2-git-send-email-nixiaoming@huawe…
Signed-off-by: Xiaoming Ni <nixiaoming(a)huawei.com>
Reviewed-by: Vasily Averin <vvs(a)virtuozzo.com>
Reviewed-by: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Alexey Dobriyan <adobriyan(a)gmail.com>
Cc: Anna Schumaker <anna.schumaker(a)netapp.com>
Cc: Arjan van de Ven <arjan(a)linux.intel.com>
Cc: J. Bruce Fields <bfields(a)fieldses.org>
Cc: Chuck Lever <chuck.lever(a)oracle.com>
Cc: David S. Miller <davem(a)davemloft.net>
Cc: Jeff Layton <jlayton(a)kernel.org>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Ingo Molnar <mingo(a)kernel.org>
Cc: Nadia Derbey <Nadia.Derbey(a)bull.net>
Cc: "Paul E. McKenney" <paulmck(a)kernel.org>
Cc: Sam Protsenko <semen.protsenko(a)linaro.org>
Cc: Alan Stern <stern(a)rowland.harvard.edu>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Trond Myklebust <trond.myklebust(a)hammerspace.com>
Cc: Viresh Kumar <viresh.kumar(a)linaro.org>
Cc: Xiaoming Ni <nixiaoming(a)huawei.com>
Cc: YueHaibing <yuehaibing(a)huawei.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Ding Tianhong <dingtianhong(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/notifier.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/kernel/notifier.c b/kernel/notifier.c
index 6196af8a8223..c6de38836f50 100644
--- a/kernel/notifier.c
+++ b/kernel/notifier.c
@@ -22,6 +22,11 @@ static int notifier_chain_register(struct notifier_block **nl,
struct notifier_block *n)
{
while ((*nl) != NULL) {
+ if (unlikely((*nl) == n)) {
+ WARN(1, "double register detected");
+ return 0;
+ }
+
if (n->priority > (*nl)->priority)
break;
nl = &((*nl)->next);
--
2.25.1
From: Taehee Yoo <ap420073(a)gmail.com>
commit 2a762e9e8cd1cf1242e4269a2244666ed02eecd1 upstream.
An rmnet lower interface can be one of two types, VND or BRIDGE,
and each lower interface may have only one type.
However, there is a case that uses both lower interface types, and
due to this unexpected behavior a lower interface leak occurs.
Test commands:
ip link add dummy0 type dummy
ip link add dummy1 type dummy
ip link add rmnet0 link dummy0 type rmnet mux_id 1
ip link set dummy1 master rmnet0
ip link add rmnet1 link dummy1 type rmnet mux_id 2
ip link del rmnet0
dummy1 was attached as the BRIDGE interface of rmnet0 and then also
attached as the VND interface of rmnet1. This is unexpected behavior
and no code handles this case, so the splat below occurs when the
rmnet0 interface is deleted.
Splat looks like:
[ 53.254112][ C1] WARNING: CPU: 1 PID: 1192 at net/core/dev.c:8992 rollback_registered_many+0x986/0xcf0
[ 53.254117][ C1] Modules linked in: rmnet dummy openvswitch nsh nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nfx
[ 53.254182][ C1] CPU: 1 PID: 1192 Comm: ip Not tainted 5.8.0-rc1+ #620
[ 53.254188][ C1] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
[ 53.254192][ C1] RIP: 0010:rollback_registered_many+0x986/0xcf0
[ 53.254200][ C1] Code: 41 8b 4e cc 45 31 c0 31 d2 4c 89 ee 48 89 df e8 e0 47 ff ff 85 c0 0f 84 cd fc ff ff 0f 0b e5
[ 53.254205][ C1] RSP: 0018:ffff888050a5f2e0 EFLAGS: 00010287
[ 53.254214][ C1] RAX: ffff88805756d658 RBX: ffff88804d99c000 RCX: ffffffff8329d323
[ 53.254219][ C1] RDX: 1ffffffff0be6410 RSI: 0000000000000008 RDI: ffffffff85f32080
[ 53.254223][ C1] RBP: dffffc0000000000 R08: fffffbfff0be6411 R09: fffffbfff0be6411
[ 53.254228][ C1] R10: ffffffff85f32087 R11: 0000000000000001 R12: ffff888050a5f480
[ 53.254233][ C1] R13: ffff88804d99c0b8 R14: ffff888050a5f400 R15: ffff8880548ebe40
[ 53.254238][ C1] FS: 00007f6b86b370c0(0000) GS:ffff88806c200000(0000) knlGS:0000000000000000
[ 53.254243][ C1] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 53.254248][ C1] CR2: 0000562c62438758 CR3: 000000003f600005 CR4: 00000000000606e0
[ 53.254253][ C1] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 53.254257][ C1] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 53.254261][ C1] Call Trace:
[ 53.254266][ C1] ? lockdep_hardirqs_on_prepare+0x379/0x540
[ 53.254270][ C1] ? netif_set_real_num_tx_queues+0x780/0x780
[ 53.254275][ C1] ? rmnet_unregister_real_device+0x56/0x90 [rmnet]
[ 53.254279][ C1] ? __kasan_slab_free+0x126/0x150
[ 53.254283][ C1] ? kfree+0xdc/0x320
[ 53.254288][ C1] ? rmnet_unregister_real_device+0x56/0x90 [rmnet]
[ 53.254293][ C1] unregister_netdevice_many.part.135+0x13/0x1b0
[ 53.254297][ C1] rtnl_delete_link+0xbc/0x100
[ 53.254301][ C1] ? rtnl_af_register+0xc0/0xc0
[ 53.254305][ C1] rtnl_dellink+0x2dc/0x840
[ 53.254309][ C1] ? find_held_lock+0x39/0x1d0
[ 53.254314][ C1] ? valid_fdb_dump_strict+0x620/0x620
[ 53.254318][ C1] ? rtnetlink_rcv_msg+0x457/0x890
[ 53.254322][ C1] ? lock_contended+0xd20/0xd20
[ 53.254326][ C1] rtnetlink_rcv_msg+0x4a8/0x890
[ ... ]
[ 73.813696][ T1192] unregister_netdevice: waiting for rmnet0 to become free. Usage count = 1
Fixes: 037f9cdf72fb ("net: rmnet: use upper/lower device infrastructure")
Signed-off-by: Taehee Yoo <ap420073(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
.../ethernet/qualcomm/rmnet/rmnet_config.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
index 7389648d0fea..05c438f47ff1 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
@@ -56,15 +56,23 @@ static int rmnet_unregister_real_device(struct net_device *real_dev)
return 0;
}
-static int rmnet_register_real_device(struct net_device *real_dev)
+static int rmnet_register_real_device(struct net_device *real_dev,
+ struct netlink_ext_ack *extack)
{
struct rmnet_port *port;
int rc, entry;
ASSERT_RTNL();
- if (rmnet_is_real_dev_registered(real_dev))
+ if (rmnet_is_real_dev_registered(real_dev)) {
+ port = rmnet_get_port_rtnl(real_dev);
+ if (port->rmnet_mode != RMNET_EPMODE_VND) {
+ NL_SET_ERR_MSG_MOD(extack, "bridge device already exists");
+ return -EINVAL;
+ }
+
return 0;
+ }
port = kzalloc(sizeof(*port), GFP_ATOMIC);
if (!port)
@@ -143,7 +151,7 @@ static int rmnet_newlink(struct net *src_net, struct net_device *dev,
mux_id = nla_get_u16(data[IFLA_RMNET_MUX_ID]);
- err = rmnet_register_real_device(real_dev);
+ err = rmnet_register_real_device(real_dev, extack);
if (err)
goto err0;
@@ -425,13 +433,10 @@ int rmnet_add_bridge(struct net_device *rmnet_dev,
if (port->nr_rmnet_devs > 1)
return -EINVAL;
- if (port->rmnet_mode != RMNET_EPMODE_VND)
- return -EINVAL;
-
if (rmnet_is_real_dev_registered(slave_dev))
return -EBUSY;
- err = rmnet_register_real_device(slave_dev);
+ err = rmnet_register_real_device(slave_dev, extack);
if (err)
return -EBUSY;
--
2.25.1
From: Hanjun Guo <guohanjun(a)huawei.com>
hulk inclusion
category: cleanup
bugzilla: NA
CVE: NA
---------------------------
Some previously merged patches did not update the config file, so
update it all in one pass; no functional change.
Signed-off-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/configs/hulk_defconfig | 39 ++++++++++++++++---------------
1 file changed, 20 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/configs/hulk_defconfig b/arch/arm64/configs/hulk_defconfig
index 4580f4a82635..488ea95474e7 100644
--- a/arch/arm64/configs/hulk_defconfig
+++ b/arch/arm64/configs/hulk_defconfig
@@ -1,14 +1,15 @@
#
# Automatically generated file; DO NOT EDIT.
-# Linux/arm64 4.19.25 Kernel Configuration
+# Linux/arm64 4.19.132 Kernel Configuration
#
#
-# Compiler: gcc (GCC) 6.3.0
+# Compiler: gcc (GCC) 7.3.0
#
CONFIG_CC_IS_GCC=y
-CONFIG_GCC_VERSION=60300
+CONFIG_GCC_VERSION=70300
CONFIG_CLANG_VERSION=0
+CONFIG_CC_HAS_ASM_GOTO=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y
@@ -145,8 +146,8 @@ CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
-CONFIG_CHECKPOINT_RESTORE=y
CONFIG_SCHED_STEAL=y
+CONFIG_CHECKPOINT_RESTORE=y
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
@@ -178,6 +179,7 @@ CONFIG_ELF_CORE=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_FUTEX_PI=y
+CONFIG_HAVE_FUTEX_CMPXCHG=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
@@ -279,8 +281,6 @@ CONFIG_ARCH_XGENE=y
# CONFIG_ARCH_ZX is not set
# CONFIG_ARCH_ZYNQMP is not set
CONFIG_HAVE_LIVEPATCH_WO_FTRACE=y
-CONFIG_SOC_HISILICON_SYSCTL=m
-CONFIG_SOC_HISILICON_LBC=m
#
# Enable Livepatch
@@ -355,10 +355,10 @@ CONFIG_PCIE_DW=y
CONFIG_PCIE_DW_HOST=y
# CONFIG_PCIE_DW_PLAT_HOST is not set
CONFIG_PCI_HISI=y
-CONFIG_HISILICON_PCIE_CAE=m
# CONFIG_PCIE_QCOM is not set
# CONFIG_PCIE_KIRIN is not set
# CONFIG_PCIE_HISI_STB is not set
+CONFIG_HISILICON_PCIE_CAE=m
#
# PCI Endpoint
@@ -420,7 +420,6 @@ CONFIG_NUMA_AWARE_SPINLOCKS=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
-CONFIG_COHERENT_DEVICE=y
CONFIG_HOLES_IN_ZONE=y
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
@@ -557,7 +556,6 @@ CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
#
# CONFIG_CPUFREQ_DT is not set
CONFIG_ACPI_CPPC_CPUFREQ=y
-CONFIG_HISILICON_CPPC_CPUFREQ_WORKAROUND=y
# CONFIG_ARM_BIG_LITTLE_CPUFREQ is not set
CONFIG_ARM_SCPI_CPUFREQ=m
# CONFIG_QORIQ_CPUFREQ is not set
@@ -573,8 +571,8 @@ CONFIG_ARM_SCPI_POWER_DOMAIN=m
CONFIG_ARM_SDE_INTERFACE=y
CONFIG_DMIID=y
CONFIG_DMI_SYSFS=y
-CONFIG_FW_CFG_SYSFS=y
# CONFIG_ISCSI_IBFT is not set
+CONFIG_FW_CFG_SYSFS=y
# CONFIG_FW_CFG_SYSFS_CMDLINE is not set
CONFIG_HAVE_ARM_SMCCC=y
# CONFIG_GOOGLE_FIRMWARE is not set
@@ -643,6 +641,7 @@ CONFIG_ACPI_APEI_EINJ=m
# CONFIG_ACPI_CONFIGFS is not set
CONFIG_ACPI_IORT=y
CONFIG_ACPI_GTDT=y
+CONFIG_ACPI_MPAM=y
CONFIG_ACPI_PPTT=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQFD=y
@@ -678,7 +677,6 @@ CONFIG_CRYPTO_SM3_ARM64_CE=m
CONFIG_CRYPTO_SM4_ARM64_CE=m
CONFIG_CRYPTO_GHASH_ARM64_CE=m
CONFIG_CRYPTO_CRCT10DIF_ARM64_CE=m
-CONFIG_CRYPTO_CRC32_ARM64_CE=m
CONFIG_CRYPTO_AES_ARM64=y
CONFIG_CRYPTO_AES_ARM64_CE=m
CONFIG_CRYPTO_AES_ARM64_CE_CCM=m
@@ -720,6 +718,7 @@ CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_RCU_TABLE_FREE=y
+CONFIG_HAVE_RCU_TABLE_INVALIDATE=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
@@ -932,6 +931,8 @@ CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
+CONFIG_HAVE_MEMBLOCK_PFN_VALID=y
+CONFIG_COHERENT_DEVICE=y
CONFIG_NO_BOOTMEM=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_MEMORY_HOTPLUG=y
@@ -1757,6 +1758,7 @@ CONFIG_ALLOW_DEV_COREDUMP=y
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_GENERIC_CPU_AUTOPROBE=y
+CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=y
CONFIG_REGMAP_SPI=y
@@ -1793,7 +1795,6 @@ CONFIG_MTD=m
# CONFIG_MTD_CMDLINE_PARTS is not set
# CONFIG_MTD_AFS_PARTS is not set
CONFIG_MTD_OF_PARTS=m
-CONFIG_MTD_HISILICON_SFC=m
# CONFIG_MTD_AR7_PARTS is not set
#
@@ -1879,6 +1880,7 @@ CONFIG_MTD_UBI_BEB_LIMIT=20
# CONFIG_MTD_UBI_FASTMAP is not set
# CONFIG_MTD_UBI_GLUEBI is not set
# CONFIG_MTD_UBI_BLOCK is not set
+CONFIG_MTD_HISILICON_SFC=m
CONFIG_DTC=y
CONFIG_OF=y
# CONFIG_OF_UNITTEST is not set
@@ -2914,7 +2916,6 @@ CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_HW_RANDOM_HISI=y
CONFIG_HW_RANDOM_XGENE=y
CONFIG_HW_RANDOM_CAVIUM=y
-# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
#
@@ -2939,6 +2940,7 @@ CONFIG_TCG_CRB=y
# CONFIG_DEVPORT is not set
# CONFIG_XILLYBUS is not set
CONFIG_HISI_SVM=y
+
#
# I2C support
#
@@ -3686,10 +3688,10 @@ CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
#
# Frame buffer Devices
#
-CONFIG_FB=y
-# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
+CONFIG_FB=y
+# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
@@ -4081,7 +4083,6 @@ CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m
CONFIG_USB_ADUTUX=m
CONFIG_USB_SEVSEG=m
-# CONFIG_USB_RIO500 is not set
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m
# CONFIG_USB_CYPRESS_CY7C63 is not set
@@ -4494,7 +4495,6 @@ CONFIG_VFIO_PLATFORM=m
# CONFIG_VFIO_PLATFORM_AMDXGBE_RESET is not set
CONFIG_VFIO_MDEV=m
CONFIG_VFIO_MDEV_DEVICE=m
-CONFIG_VFIO_SPIMDEV=m
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y
CONFIG_VIRTIO_MENU=y
@@ -4650,6 +4650,8 @@ CONFIG_SMMU_BYPASS_DEV=y
# Xilinx SoC drivers
#
# CONFIG_XILINX_VCU is not set
+CONFIG_SOC_HISILICON_LBC=m
+CONFIG_SOC_HISILICON_SYSCTL=m
# CONFIG_PM_DEVFREQ is not set
CONFIG_EXTCON=y
@@ -4862,7 +4864,6 @@ CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_FAT_DEFAULT_UTF8 is not set
-# CONFIG_NTFS_FS is not set
#
# Pseudo filesystems
@@ -5286,7 +5287,6 @@ CONFIG_CRYPTO_DEV_HISI_QM=m
CONFIG_CRYPTO_QM_UACCE=y
CONFIG_CRYPTO_DEV_HISI_ZIP=m
CONFIG_CRYPTO_DEV_HISI_HPRE=m
-CONFIG_CRYPTO_HISI_SGL=m
CONFIG_CRYPTO_DEV_HISI_SEC2=m
CONFIG_CRYPTO_DEV_HISI_RDE=m
CONFIG_ASYMMETRIC_KEY_TYPE=y
@@ -5336,6 +5336,7 @@ CONFIG_CRC32_SLICEBY8=y
CONFIG_CRC7=m
CONFIG_LIBCRC32C=y
CONFIG_CRC8=m
+CONFIG_XXHASH=y
CONFIG_AUDIT_GENERIC=y
CONFIG_AUDIT_ARCH_COMPAT_GENERIC=y
CONFIG_AUDIT_COMPAT_GENERIC=y
--
2.25.1
24 Jul '20
From: Colin Ian King <colin.king(a)canonical.com>
mainline inclusion
from mainline-v5.2-rc3
commit 65b1dc99008de592f7c1c8e5fad446824791b4da
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
Currently the qedf_dbg_* family of functions can overrun the end of
the source string if it is shorter than the destination buffer,
because of the use of a fixed-size memcpy. Remove the memset/memcpy
calls to nfunc and just use func instead, as it is always a
NUL-terminated string.
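A minimal sketch of the overrun (illustrative; any func shorter than
31 bytes triggers it):

    char nfunc[32];

    memset(nfunc, 0, sizeof(nfunc));
    /* always reads 31 bytes from func, even when func points at a
     * much shorter string such as "qedf_probe" */
    memcpy(nfunc, func, sizeof(nfunc) - 1);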
Addresses-Coverity: ("Out-of-bounds access")
Fixes: 61d8658b4a43 ("scsi: qedf: Add QLogic FastLinQ offload FCoE driver framework.")
Signed-off-by: Colin Ian King <colin.king(a)canonical.com>
Acked-by: Saurav Kashyap <skashyap(a)marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/scsi/qedf/qedf_dbg.c | 32 ++++++++------------------------
1 file changed, 8 insertions(+), 24 deletions(-)
diff --git a/drivers/scsi/qedf/qedf_dbg.c b/drivers/scsi/qedf/qedf_dbg.c
index f2397ee9ba69..f7d170bffc82 100644
--- a/drivers/scsi/qedf/qedf_dbg.c
+++ b/drivers/scsi/qedf/qedf_dbg.c
@@ -15,10 +15,6 @@ qedf_dbg_err(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
{
va_list va;
struct va_format vaf;
- char nfunc[32];
-
- memset(nfunc, 0, sizeof(nfunc));
- memcpy(nfunc, func, sizeof(nfunc) - 1);
va_start(va, fmt);
@@ -27,9 +23,9 @@ qedf_dbg_err(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
if (likely(qedf) && likely(qedf->pdev))
pr_err("[%s]:[%s:%d]:%d: %pV", dev_name(&(qedf->pdev->dev)),
- nfunc, line, qedf->host_no, &vaf);
+ func, line, qedf->host_no, &vaf);
else
- pr_err("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+ pr_err("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);
va_end(va);
}
@@ -40,10 +36,6 @@ qedf_dbg_warn(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
{
va_list va;
struct va_format vaf;
- char nfunc[32];
-
- memset(nfunc, 0, sizeof(nfunc));
- memcpy(nfunc, func, sizeof(nfunc) - 1);
va_start(va, fmt);
@@ -55,9 +47,9 @@ qedf_dbg_warn(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
if (likely(qedf) && likely(qedf->pdev))
pr_warn("[%s]:[%s:%d]:%d: %pV", dev_name(&(qedf->pdev->dev)),
- nfunc, line, qedf->host_no, &vaf);
+ func, line, qedf->host_no, &vaf);
else
- pr_warn("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+ pr_warn("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);
ret:
va_end(va);
@@ -69,10 +61,6 @@ qedf_dbg_notice(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
{
va_list va;
struct va_format vaf;
- char nfunc[32];
-
- memset(nfunc, 0, sizeof(nfunc));
- memcpy(nfunc, func, sizeof(nfunc) - 1);
va_start(va, fmt);
@@ -84,10 +72,10 @@ qedf_dbg_notice(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
if (likely(qedf) && likely(qedf->pdev))
pr_notice("[%s]:[%s:%d]:%d: %pV",
- dev_name(&(qedf->pdev->dev)), nfunc, line,
+ dev_name(&(qedf->pdev->dev)), func, line,
qedf->host_no, &vaf);
else
- pr_notice("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+ pr_notice("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);
ret:
va_end(va);
@@ -99,10 +87,6 @@ qedf_dbg_info(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
{
va_list va;
struct va_format vaf;
- char nfunc[32];
-
- memset(nfunc, 0, sizeof(nfunc));
- memcpy(nfunc, func, sizeof(nfunc) - 1);
va_start(va, fmt);
@@ -114,9 +98,9 @@ qedf_dbg_info(struct qedf_dbg_ctx *qedf, const char *func, u32 line,
if (likely(qedf) && likely(qedf->pdev))
pr_info("[%s]:[%s:%d]:%d: %pV", dev_name(&(qedf->pdev->dev)),
- nfunc, line, qedf->host_no, &vaf);
+ func, line, qedf->host_no, &vaf);
else
- pr_info("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
+ pr_info("[0000:00:00.0]:[%s:%d]: %pV", func, line, &vaf);
ret:
va_end(va);
--
2.25.1
24 Jul '20
From: Ye Bin <yebin10(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
Sysmonitor checks whether SB_RDONLY is set in sb->s_flags to decide
whether to report an exception message, so send the netlink
notification only after the filesystem has been remounted read-only
(or the forced panic has been taken).
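A simplified skeleton of the reordered flow (illustrative; the real
function is in the diff below):

    static void ext4_handle_error(struct super_block *sb)
    {
        /* ... abort the journal, remount read-only (which sets
         * SB_RDONLY in sb->s_flags) or panic when errors=panic ... */

        /* notify userspace only once SB_RDONLY is visible */
        ext4_netlink_send_info(sb, 1);
    }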
Fixes: 3cfdac228065 ("ext4: report error to userspace by netlink")
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/super.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 4fdd17055dc5..5a09d52b864c 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -512,8 +512,6 @@ static void ext4_handle_error(struct super_block *sb)
jbd2_journal_abort(journal, -EIO);
}
- ext4_netlink_send_info(sb, 1);
-
/*
* We force ERRORS_RO behavior when system is rebooting. Otherwise we
* could panic during 'reboot -f' as the underlying device got already
@@ -531,6 +529,8 @@ static void ext4_handle_error(struct super_block *sb)
panic("EXT4-fs (device %s): panic forced after error\n",
sb->s_id);
}
+
+ ext4_netlink_send_info(sb, 1);
}
#define ext4_error_ratelimit(sb) \
--
2.25.1
20 Jul '20
When resuming from S3 or S4, the MAC table is lost and the network
breaks. This patch adds a MAC table restore operation to fix the
problem.
Fixes: 3fc746a2967a ("net: hns3: add suspend/resume function for hns3 driver")
Signed-off-by: Yonglong Liu <liuyonglong(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 478a3b5..d7da2c8 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -3864,6 +3864,10 @@ static int hclge_resume(struct hnae3_ae_dev *ae_dev)
if (ret)
goto err_reset_lock;
+ ret = hclge_notify_client(hdev, HNAE3_RESTORE_CLIENT);
+ if (ret)
+ goto err_reset_lock;
+
rtnl_unlock();
ret = hclge_notify_roce_client(hdev, HNAE3_INIT_CLIENT);
--
2.8.1
14 Jul '20
From: Lin Yi <teroincn(a)163.com>
mainline inclusion
from mainline-v5.3-rc1
commit 122f8ec7b78e
category: bugfix
bugzilla: 18462
CVE: NA
-------------------------------------------------
The kobj refcount increased by kobject_get() should be released
before the error return, otherwise it leads to a memory leak.
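The rule being enforced, as a minimal sketch (it mirrors the diff
below):

    kobj = kobject_get(kobj);       /* takes a reference */
    if (!kobj)
        return -EINVAL;             /* nothing taken, nothing to drop */
    if (!kobj->parent) {
        kobject_put(kobj);          /* every later error path must
                                     * drop the reference */
        return -EINVAL;
    }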
Signed-off-by: Lin Yi <teroincn(a)163.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Li Heng <liheng40(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
lib/kobject.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/lib/kobject.c b/lib/kobject.c
index 97d86dc17c42..c82b88b2e860 100644
--- a/lib/kobject.c
+++ b/lib/kobject.c
@@ -478,8 +478,10 @@ int kobject_rename(struct kobject *kobj, const char *new_name)
kobj = kobject_get(kobj);
if (!kobj)
return -EINVAL;
- if (!kobj->parent)
+ if (!kobj->parent) {
+ kobject_put(kobj);
return -EINVAL;
+ }
devpath = kobject_get_path(kobj, GFP_KERNEL);
if (!devpath) {
--
2.25.1
14 Jul '20
hulk inclusion
category: config
bugzilla: NA
CVE: NA
---------------------------
The CONFIG_EFI_CUSTOM_SSDT_OVERLAYS option was introduced by
f5d76c78e496 ("efi: Make it possible to disable efivar_ssdt
entirely"); enable it in the defconfigs.
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/configs/storage_ci_defconfig | 1 +
arch/arm64/configs/syzkaller_defconfig | 1 +
arch/x86/configs/hulk_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
arch/x86/configs/storage_ci_defconfig | 1 +
8 files changed, 8 insertions(+)
diff --git a/arch/arm64/configs/euleros_defconfig b/arch/arm64/configs/euleros_defconfig
index 44c2e03365c5..64558b4f47ab 100644
--- a/arch/arm64/configs/euleros_defconfig
+++ b/arch/arm64/configs/euleros_defconfig
@@ -592,6 +592,7 @@ CONFIG_EFI_ARMSTUB_DTB_LOADER=y
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
# CONFIG_RESET_ATTACK_MITIGATION is not set
+CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_ARM=y
diff --git a/arch/arm64/configs/hulk_defconfig b/arch/arm64/configs/hulk_defconfig
index 3bca0a86fcb6..4580f4a82635 100644
--- a/arch/arm64/configs/hulk_defconfig
+++ b/arch/arm64/configs/hulk_defconfig
@@ -594,6 +594,7 @@ CONFIG_EFI_ARMSTUB_DTB_LOADER=y
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
# CONFIG_RESET_ATTACK_MITIGATION is not set
+CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_ARM=y
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 2548b27b6e2d..b87ad341de6d 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -593,6 +593,7 @@ CONFIG_EFI_ARMSTUB_DTB_LOADER=y
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
# CONFIG_RESET_ATTACK_MITIGATION is not set
+CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_ARM=y
diff --git a/arch/arm64/configs/storage_ci_defconfig b/arch/arm64/configs/storage_ci_defconfig
index ac6067dfc44f..2c41902eb6a1 100644
--- a/arch/arm64/configs/storage_ci_defconfig
+++ b/arch/arm64/configs/storage_ci_defconfig
@@ -538,6 +538,7 @@ CONFIG_EFI_ARMSTUB_DTB_LOADER=y
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
# CONFIG_RESET_ATTACK_MITIGATION is not set
+CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
#
# Tegra firmware driver
diff --git a/arch/arm64/configs/syzkaller_defconfig b/arch/arm64/configs/syzkaller_defconfig
index 73969e93cf79..438a2c74ab73 100644
--- a/arch/arm64/configs/syzkaller_defconfig
+++ b/arch/arm64/configs/syzkaller_defconfig
@@ -590,6 +590,7 @@ CONFIG_EFI_ARMSTUB_DTB_LOADER=y
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
# CONFIG_RESET_ATTACK_MITIGATION is not set
+CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_ARM=y
diff --git a/arch/x86/configs/hulk_defconfig b/arch/x86/configs/hulk_defconfig
index 4ffb3d3fe30d..0a74cfe00137 100644
--- a/arch/x86/configs/hulk_defconfig
+++ b/arch/x86/configs/hulk_defconfig
@@ -710,6 +710,7 @@ CONFIG_EFI_RUNTIME_WRAPPERS=y
# CONFIG_EFI_TEST is not set
CONFIG_APPLE_PROPERTIES=y
# CONFIG_RESET_ATTACK_MITIGATION is not set
+CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y
CONFIG_EFI_DEV_PATH_PARSER=y
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 3876c3b4aa3a..37c7c8893e3b 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -707,6 +707,7 @@ CONFIG_EFI_RUNTIME_WRAPPERS=y
# CONFIG_EFI_TEST is not set
CONFIG_APPLE_PROPERTIES=y
# CONFIG_RESET_ATTACK_MITIGATION is not set
+CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y
CONFIG_EFI_DEV_PATH_PARSER=y
diff --git a/arch/x86/configs/storage_ci_defconfig b/arch/x86/configs/storage_ci_defconfig
index f7de48d12fc0..c7344e20d964 100644
--- a/arch/x86/configs/storage_ci_defconfig
+++ b/arch/x86/configs/storage_ci_defconfig
@@ -663,6 +663,7 @@ CONFIG_EFI_RUNTIME_WRAPPERS=y
# CONFIG_EFI_TEST is not set
# CONFIG_APPLE_PROPERTIES is not set
# CONFIG_RESET_ATTACK_MITIGATION is not set
+CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y
--
2.25.1
14 Jul '20
From: Yonglong Liu <liuyonglong(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------
When add "mac table" info in debugfs, there's some rendudant
codes entered by mistake.
Fixes: 2dbf3632915b ("net: hns3: Add "mac table" information query function")
Fixes: a003346a5ee9 ("net: hns3: drop the WQ_MEM_RECLAIM flag when allocating wq")
Signed-off-by: Yonglong Liu <liuyonglong(a)huawei.com>
Reviewed-by: Weiwei Deng <dengweiwei(a)huawei.com>
Reviewed-by: Zhaohui Zhong <zhongzhaohui(a)huawei.com>
Reviewed-by: Junxin Chen <chenjunxin1(a)huawei.com>
Signed-off-by: Shengzui You <youshengzui(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c | 4 ++--
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
index fd5cfb1e6311..d85aaf78808e 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
@@ -962,9 +962,9 @@ static void hclge_dbg_dump_mac_table(struct hclge_dev *hdev)
memset(printf_buf, 0, HCLGE_DBG_BUF_LEN);
dev_info(&hdev->pdev->dev, "Unicast tab:\n");
- strncat(printf_buf, "|index |mac_addr drivers/net/ethernet/hisilicon/hns3/|vlan_id drivers/net/ethernet/hisilicon/hns3/|VMDq1 |",
+ strncat(printf_buf, "|index |mac_addr |vlan_id |VMDq1 |",
HCLGE_DBG_BUF_LEN - 1);
- strncat(printf_buf, "U_M drivers/net/ethernet/hisilicon/hns3/|mac_en |in_port |E_type |E_Port\n",
+ strncat(printf_buf, "U_M |mac_en |in_port |E_type |E_Port\n",
HCLGE_DBG_BUF_LEN - strlen(printf_buf) - 1);
dev_info(&hdev->pdev->dev, "%s", printf_buf);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
index 819c100abc8f..0f70a7792fba 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
@@ -2293,7 +2293,7 @@ static enum hclgevf_evt_cause hclgevf_check_evt_cause(struct hclgevf_dev *hdev,
/* print other vector0 event source */
dev_info(&hdev->pdev->dev,
- "vector 0 interrupt from unknown source, cmdq_drivers/net/ethernet/hisilicon = %#x\n",
+ "vector 0 interrupt from unknown source, cmdq_src = %#x\n",
cmdq_stat_reg);
return HCLGEVF_VECTOR0_EVENT_OTHER;
}
--
2.25.1
From: Naixin Yu <yunaixin(a)huawei.com>
This patch set contains 5 communication drivers for Huawei BMA software.
The BMA software is a system management software. It supports the status
monitoring, performance monitoring, and event monitoring of various
components, including server CPUs, memory, hard disks, NICs, IB cards,
PCIe cards, RAID controller cards, and optical modules.
These 5 drivers are used to send/receive message through PCIe channel in
different ways by BMA software.
Naixin Yu (5):
Huawei BMA: Adding Huawei BMA driver: host_edma_drv
Huawei BMA: Adding Huawei BMA driver: host_cdev_drv
Huawei BMA: Adding Huawei BMA driver: host_veth_drv
Huawei BMA: Adding Huawei BMA driver: cdev_veth_drv
Huawei BMA: Adding Huawei BMA driver: host_kbox_drv
drivers/net/ethernet/huawei/Kconfig | 1 +
drivers/net/ethernet/huawei/Makefile | 1 +
drivers/net/ethernet/huawei/bma/Kconfig | 10 +
drivers/net/ethernet/huawei/bma/Makefile | 9 +
.../net/ethernet/huawei/bma/cdev_drv/Makefile | 2 +
.../ethernet/huawei/bma/cdev_drv/bma_cdev.c | 369 +++
.../huawei/bma/cdev_veth_drv/Makefile | 2 +
.../bma/cdev_veth_drv/virtual_cdev_eth_net.c | 1862 ++++++++++++
.../bma/cdev_veth_drv/virtual_cdev_eth_net.h | 299 ++
.../net/ethernet/huawei/bma/edma_drv/Makefile | 2 +
.../huawei/bma/edma_drv/bma_devintf.c | 597 ++++
.../huawei/bma/edma_drv/bma_devintf.h | 40 +
.../huawei/bma/edma_drv/bma_include.h | 116 +
.../ethernet/huawei/bma/edma_drv/bma_pci.c | 533 ++++
.../ethernet/huawei/bma/edma_drv/bma_pci.h | 94 +
.../ethernet/huawei/bma/edma_drv/edma_host.c | 1462 ++++++++++
.../ethernet/huawei/bma/edma_drv/edma_host.h | 351 +++
.../huawei/bma/include/bma_ker_intf.h | 94 +
.../net/ethernet/huawei/bma/kbox_drv/Makefile | 5 +
.../ethernet/huawei/bma/kbox_drv/kbox_dump.c | 121 +
.../ethernet/huawei/bma/kbox_drv/kbox_dump.h | 33 +
.../ethernet/huawei/bma/kbox_drv/kbox_hook.c | 101 +
.../ethernet/huawei/bma/kbox_drv/kbox_hook.h | 33 +
.../huawei/bma/kbox_drv/kbox_include.h | 40 +
.../ethernet/huawei/bma/kbox_drv/kbox_main.c | 168 ++
.../ethernet/huawei/bma/kbox_drv/kbox_main.h | 23 +
.../ethernet/huawei/bma/kbox_drv/kbox_mce.c | 264 ++
.../ethernet/huawei/bma/kbox_drv/kbox_mce.h | 23 +
.../ethernet/huawei/bma/kbox_drv/kbox_panic.c | 187 ++
.../ethernet/huawei/bma/kbox_drv/kbox_panic.h | 25 +
.../huawei/bma/kbox_drv/kbox_printk.c | 363 +++
.../huawei/bma/kbox_drv/kbox_printk.h | 33 +
.../huawei/bma/kbox_drv/kbox_ram_drive.c | 188 ++
.../huawei/bma/kbox_drv/kbox_ram_drive.h | 31 +
.../huawei/bma/kbox_drv/kbox_ram_image.c | 135 +
.../huawei/bma/kbox_drv/kbox_ram_image.h | 84 +
.../huawei/bma/kbox_drv/kbox_ram_op.c | 986 +++++++
.../huawei/bma/kbox_drv/kbox_ram_op.h | 77 +
.../net/ethernet/huawei/bma/veth_drv/Makefile | 2 +
.../ethernet/huawei/bma/veth_drv/veth_hb.c | 2502 +++++++++++++++++
.../ethernet/huawei/bma/veth_drv/veth_hb.h | 440 +++
41 files changed, 11708 insertions(+)
create mode 100644 drivers/net/ethernet/huawei/bma/Kconfig
create mode 100644 drivers/net/ethernet/huawei/bma/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_drv/bma_cdev.c
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/virtual_cdev_eth_net.c
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/virtual_cdev_eth_net.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_devintf.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_devintf.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_include.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_pci.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_pci.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/edma_host.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/edma_host.h
create mode 100644 drivers/net/ethernet/huawei/bma/include/bma_ker_intf.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_dump.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_dump.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_hook.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_hook.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_include.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_main.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_main.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_mce.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_mce.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_panic.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_panic.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_printk.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_printk.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_drive.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_drive.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_image.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_image.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_op.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_op.h
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/veth_hb.c
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/veth_hb.h
--
2.26.2.windows.1
10 Jul '20
From: Chiqijun <chiqijun(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
When the last fragment of an mbox/mgmt message exceeds 32 bytes,
the copy goes out of bounds, so the length of the last fragment
must be validated.
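A worked example of the bound, under the assumption that
MBOX_MAX_BUF_SZ is 2048 and MBOX_SEG_LEN is 48 (neither value is
visible in this hunk):

    MBOX_LAST_SEG_MAX_LEN = MBOX_MAX_BUF_SZ - SEQ_ID_MAX_VAL * MBOX_SEG_LEN
                          = 2048 - 42 * 48
                          = 32 bytes

so a fragment with seq_id == SEQ_ID_MAX_VAL claiming a seg_len larger
than 32 would be copied past the end of the receive buffer.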
Signed-off-by: Chiqijun <chiqijun(a)huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic/hinic_mbox.c | 4 ++++
drivers/net/ethernet/huawei/hinic/hinic_mgmt.c | 5 +++++
2 files changed, 9 insertions(+)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_mbox.c b/drivers/net/ethernet/huawei/hinic/hinic_mbox.c
index 32c27a1df78f..add76893055d 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_mbox.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_mbox.c
@@ -142,6 +142,8 @@ enum hinic_mbox_tx_status {
#define SEQ_ID_START_VAL 0
#define SEQ_ID_MAX_VAL 42
+#define MBOX_LAST_SEG_MAX_LEN (MBOX_MAX_BUF_SZ - \
+ SEQ_ID_MAX_VAL * MBOX_SEG_LEN)
#define DST_AEQ_IDX_DEFAULT_VAL 0
#define SRC_AEQ_IDX_DEFAULT_VAL 0
@@ -659,6 +661,8 @@ static bool check_mbox_seq_id_and_seg_len(struct hinic_recv_mbox *recv_mbox,
{
if (seq_id > SEQ_ID_MAX_VAL || seg_len > MBOX_SEG_LEN)
return false;
+ else if (seq_id == SEQ_ID_MAX_VAL && seg_len > MBOX_LAST_SEG_MAX_LEN)
+ return false;
if (seq_id == 0) {
recv_mbox->seq_id = seq_id;
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_mgmt.c
index dadb7cc0588b..ac9988710cad 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_mgmt.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_mgmt.c
@@ -45,6 +45,8 @@
SEGMENT_LEN) / SEGMENT_LEN)
#define MAX_PF_MGMT_BUF_SIZE 2048UL
+#define MGMT_MSG_LAST_SEG_MAX_LEN (MAX_PF_MGMT_BUF_SIZE - \
+ SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID)
#define MGMT_MSG_SIZE_MIN 20
#define MGMT_MSG_SIZE_STEP 16
@@ -1122,6 +1124,9 @@ static bool check_mgmt_seq_id_and_seg_len(struct hinic_recv_msg *recv_msg,
{
if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN)
return false;
+ else if (seq_id == MGMT_MSG_MAX_SEQ_ID &&
+ seg_len > MGMT_MSG_LAST_SEG_MAX_LEN)
+ return false;
if (seq_id == 0) {
recv_msg->seq_id = seq_id;
--
2.25.1
driver inclusion
category: feature
bugzilla: 34535
CVE: NA
Set CONFIG_BMA=m in the defconfigs, except euleros_defconfig where it
is left disabled.
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/configs/syzkaller_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
5 files changed, 5 insertions(+)
diff --git a/arch/arm64/configs/euleros_defconfig b/arch/arm64/configs/euleros_defconfig
index ada5c09408a2..44c2e03365c5 100644
--- a/arch/arm64/configs/euleros_defconfig
+++ b/arch/arm64/configs/euleros_defconfig
@@ -2444,6 +2444,7 @@ CONFIG_HNS3_ENET=m
# CONFIG_NET_VENDOR_HP is not set
CONFIG_NET_VENDOR_HUAWEI=y
CONFIG_HINIC=m
+# CONFIG_BMA is not set
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
diff --git a/arch/arm64/configs/hulk_defconfig b/arch/arm64/configs/hulk_defconfig
index 4dd3d18d99ba..3bca0a86fcb6 100644
--- a/arch/arm64/configs/hulk_defconfig
+++ b/arch/arm64/configs/hulk_defconfig
@@ -2427,6 +2427,7 @@ CONFIG_HNS3_CAE=m
# CONFIG_NET_VENDOR_HP is not set
CONFIG_NET_VENDOR_HUAWEI=y
CONFIG_HINIC=m
+CONFIG_BMA=m
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index c88d11068e11..2548b27b6e2d 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -2447,6 +2447,7 @@ CONFIG_HNS3_CAE=m
# CONFIG_NET_VENDOR_HP is not set
CONFIG_NET_VENDOR_HUAWEI=y
CONFIG_HINIC=m
+CONFIG_BMA=m
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
diff --git a/arch/arm64/configs/syzkaller_defconfig b/arch/arm64/configs/syzkaller_defconfig
index b78bcbb45e8d..73969e93cf79 100644
--- a/arch/arm64/configs/syzkaller_defconfig
+++ b/arch/arm64/configs/syzkaller_defconfig
@@ -2398,6 +2398,7 @@ CONFIG_HNS3_CAE=m
# CONFIG_NET_VENDOR_HP is not set
CONFIG_NET_VENDOR_HUAWEI=y
CONFIG_HINIC=m
+CONFIG_BMA=m
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 1db762928818..3876c3b4aa3a 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -2505,6 +2505,7 @@ CONFIG_BE2NET_LANCER=y
CONFIG_BE2NET_SKYHAWK=y
# CONFIG_NET_VENDOR_EZCHIP is not set
# CONFIG_NET_VENDOR_HP is not set
+CONFIG_BMA=m
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
--
2.25.1
10 Jul '20
Ikjoon Jang (1):
cpuidle: teo: Fix intervals[] array indexing bug
Rafael J. Wysocki (9):
cpuidle: New timer events oriented governor for tickless systems
cpuidle: teo: Allow tick to be stopped if PM QoS is used
cpuidle: teo: Get rid of redundant check in teo_update()
cpuidle: teo: Ignore disabled idle states that are too deep
cpuidle: teo: Rename local variable in teo_select()
cpuidle: teo: Consider hits and misses metrics of disabled states
cpuidle: teo: Fix "early hits" handling for disabled idle states
cpuidle: teo: Exclude cpuidle overhead from computations
cpuidle: teo: Avoid using "early hits" incorrectly
Xiongfeng Wang (2):
Documentation: PM: Add SPDX license tags to cpuidle.rst
config: add CONFIG_CPU_IDLE_GOV_TEO in defconfigs
Documentation/admin-guide/pm/cpuidle.rst | 721 +++++++++++++++++++++++
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/configs/storage_ci_defconfig | 1 +
arch/arm64/configs/syzkaller_defconfig | 1 +
arch/x86/configs/hulk_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
arch/x86/configs/storage_ci_defconfig | 1 +
drivers/cpuidle/Kconfig | 11 +-
drivers/cpuidle/governors/Makefile | 1 +
drivers/cpuidle/governors/teo.c | 496 ++++++++++++++++
12 files changed, 1236 insertions(+), 1 deletion(-)
create mode 100644 Documentation/admin-guide/pm/cpuidle.rst
create mode 100644 drivers/cpuidle/governors/teo.c
--
2.25.1
From: Xie XiuQi <xiexiuqi(a)huawei.com>
hulk inclusion
category: feature
feature: apparmor
bugzilla: 34472
AppArmor is easier for customers to use, so provide both SELinux and
AppArmor capability.
The default security module is still SELinux; use security=apparmor
on the kernel command line to enable AppArmor instead.
Link: https://gitee.com/openeuler/kernel/issues/I1DMG1
Signed-off-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index df560c0a353b..0147907a349f 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -5422,11 +5422,16 @@ CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
-# CONFIG_SECURITY_APPARMOR is not set
+CONFIG_SECURITY_APPARMOR=y
+CONFIG_SECURITY_APPARMOR_BOOTPARAM_VALUE=1
+CONFIG_SECURITY_APPARMOR_HASH=y
+CONFIG_SECURITY_APPARMOR_HASH_DEFAULT=y
+# CONFIG_SECURITY_APPARMOR_DEBUG is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
# CONFIG_INTEGRITY is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
+# CONFIG_DEFAULT_SECURITY_APPARMOR is not set
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_DEFAULT_SECURITY="selinux"
CONFIG_XOR_BLOCKS=m
--
2.25.1
From: Naixin Yu <yunaixin(a)huawei.com>
This patch set contains 5 communication drivers for Huawei BMA software.
The BMA software is a system management software. It supports the status
monitoring, performance monitoring, and event monitoring of various
components, including server CPUs, memory, hard disks, NICs, IB cards,
PCIe cards, RAID controller cards, and optical modules.
These 5 drivers are used to send/receive message through PCIe channel in
different ways by BMA software.
Naixin Yu (5):
Huawei BMA: Adding Huawei BMA driver: host_edma_drv
Huawei BMA: Adding Huawei BMA driver: host_cdev_drv
Huawei BMA: Adding Huawei BMA driver: host_veth_drv
Huawei BMA: Adding Huawei BMA driver: cdev_veth_drv
Huawei BMA: Adding Huawei BMA driver: host_kbox_drv
drivers/net/ethernet/huawei/Kconfig | 1 +
drivers/net/ethernet/huawei/Makefile | 1 +
drivers/net/ethernet/huawei/bma/Kconfig | 10 +
drivers/net/ethernet/huawei/bma/Makefile | 9 +
.../net/ethernet/huawei/bma/cdev_drv/Makefile | 2 +
.../ethernet/huawei/bma/cdev_drv/bma_cdev.c | 369 +++
.../huawei/bma/cdev_veth_drv/Makefile | 2 +
.../bma/cdev_veth_drv/virtual_cdev_eth_net.c | 1862 ++++++++++++
.../bma/cdev_veth_drv/virtual_cdev_eth_net.h | 299 ++
.../net/ethernet/huawei/bma/edma_drv/Makefile | 2 +
.../huawei/bma/edma_drv/bma_devintf.c | 597 ++++
.../huawei/bma/edma_drv/bma_devintf.h | 40 +
.../huawei/bma/edma_drv/bma_include.h | 116 +
.../ethernet/huawei/bma/edma_drv/bma_pci.c | 533 ++++
.../ethernet/huawei/bma/edma_drv/bma_pci.h | 94 +
.../ethernet/huawei/bma/edma_drv/edma_host.c | 1462 ++++++++++
.../ethernet/huawei/bma/edma_drv/edma_host.h | 351 +++
.../huawei/bma/include/bma_ker_intf.h | 94 +
.../net/ethernet/huawei/bma/kbox_drv/Makefile | 5 +
.../ethernet/huawei/bma/kbox_drv/kbox_dump.c | 121 +
.../ethernet/huawei/bma/kbox_drv/kbox_dump.h | 33 +
.../ethernet/huawei/bma/kbox_drv/kbox_hook.c | 101 +
.../ethernet/huawei/bma/kbox_drv/kbox_hook.h | 33 +
.../huawei/bma/kbox_drv/kbox_include.h | 40 +
.../ethernet/huawei/bma/kbox_drv/kbox_main.c | 168 ++
.../ethernet/huawei/bma/kbox_drv/kbox_main.h | 23 +
.../ethernet/huawei/bma/kbox_drv/kbox_mce.c | 264 ++
.../ethernet/huawei/bma/kbox_drv/kbox_mce.h | 23 +
.../ethernet/huawei/bma/kbox_drv/kbox_panic.c | 187 ++
.../ethernet/huawei/bma/kbox_drv/kbox_panic.h | 25 +
.../huawei/bma/kbox_drv/kbox_printk.c | 363 +++
.../huawei/bma/kbox_drv/kbox_printk.h | 33 +
.../huawei/bma/kbox_drv/kbox_ram_drive.c | 188 ++
.../huawei/bma/kbox_drv/kbox_ram_drive.h | 31 +
.../huawei/bma/kbox_drv/kbox_ram_image.c | 135 +
.../huawei/bma/kbox_drv/kbox_ram_image.h | 84 +
.../huawei/bma/kbox_drv/kbox_ram_op.c | 986 +++++++
.../huawei/bma/kbox_drv/kbox_ram_op.h | 77 +
.../net/ethernet/huawei/bma/veth_drv/Makefile | 2 +
.../ethernet/huawei/bma/veth_drv/veth_hb.c | 2502 +++++++++++++++++
.../ethernet/huawei/bma/veth_drv/veth_hb.h | 440 +++
41 files changed, 11708 insertions(+)
create mode 100644 drivers/net/ethernet/huawei/bma/Kconfig
create mode 100644 drivers/net/ethernet/huawei/bma/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_drv/bma_cdev.c
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/virtual_cdev_eth_net.c
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/virtual_cdev_eth_net.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_devintf.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_devintf.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_include.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_pci.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_pci.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/edma_host.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/edma_host.h
create mode 100644 drivers/net/ethernet/huawei/bma/include/bma_ker_intf.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_dump.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_dump.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_hook.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_hook.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_include.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_main.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_main.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_mce.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_mce.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_panic.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_panic.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_printk.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_printk.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_drive.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_drive.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_image.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_image.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_op.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_op.h
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/veth_hb.c
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/veth_hb.h
--
2.26.2.windows.1
From: Hu Chunzhi <huchunzhi(a)huawei.com>
driver inclusion
category: cleanup
bugzilla: NA
CVE: NA
----------------------------------
This patch addresses review comments: hns_roce_query_func_info() now
returns an error code which its caller checks, the record doorbell is
only freed when it was actually enabled (rdb_en), the vf/pf clear loop
is simplified, and stray blank lines are removed.
Reviewed-by: Zhao Weibo <zhaoweibo3(a)huawei.com>
Reviewed-by: Yang Shunfeng <yangshunfeng2(a)huawei.com>
Signed-off-by: Hu Chunzhi <huchunzhi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/infiniband/hw/hns/hns_roce_alloc.c | 3 +--
.../infiniband/hw/hns/hns_roce_hw_sysfs_v2.c | 1 -
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 22 +++++++++++--------
drivers/infiniband/hw/hns/hns_roce_mr.c | 5 -----
4 files changed, 14 insertions(+), 17 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_alloc.c b/drivers/infiniband/hw/hns/hns_roce_alloc.c
index ec8b30b24a3c..89547c8bb82d 100644
--- a/drivers/infiniband/hw/hns/hns_roce_alloc.c
+++ b/drivers/infiniband/hw/hns/hns_roce_alloc.c
@@ -63,6 +63,7 @@ int hns_roce_bitmap_alloc(struct hns_roce_bitmap *bitmap, unsigned long *obj)
return ret;
}
EXPORT_SYMBOL_GPL(hns_roce_bitmap_alloc);
+
void hns_roce_bitmap_free(struct hns_roce_bitmap *bitmap, unsigned long obj,
int rr)
{
@@ -214,7 +215,6 @@ int hns_roce_buf_alloc(struct hns_roce_dev *hr_dev, u32 size, u32 max_direct,
buf->page_shift = page_shift;
buf->page_list = kcalloc(buf->nbufs, sizeof(*buf->page_list),
GFP_KERNEL);
-
if (!buf->page_list)
return -ENOMEM;
@@ -222,7 +222,6 @@ int hns_roce_buf_alloc(struct hns_roce_dev *hr_dev, u32 size, u32 max_direct,
buf->page_list[i].buf = dma_zalloc_coherent(dev,
page_size, &t,
GFP_KERNEL);
-
if (!buf->page_list[i].buf)
goto err_free;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_sysfs_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_sysfs_v2.c
index dfbdfb2192f1..a780a0e10386 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_sysfs_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_sysfs_v2.c
@@ -612,7 +612,6 @@ int hns_roce_v2_modify_eq(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq,
roce_set_field(eqc_mask->byte_12,
HNS_ROCE_EQC_MAX_CNT_M,
HNS_ROCE_EQC_MAX_CNT_S, 0);
-
} else if (type == HNS_ROCE_EQ_PERIOD_MASK) {
roce_set_field(eqc->byte_12,
HNS_ROCE_EQC_PERIOD_M,
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index b77fef65ab1c..e7efd37b7734 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -1366,23 +1366,26 @@ static void hns_roce_func_clr_rst_prc(struct hns_roce_dev *hr_dev, int retval,
}
}
-static void hns_roce_query_func_info(struct hns_roce_dev *hr_dev)
+static int hns_roce_query_func_info(struct hns_roce_dev *hr_dev)
{
struct hns_roce_cmq_desc desc;
struct hns_roce_pf_func_info *resp;
- int ret;
+ int ret = 0;
hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_QUERY_FUNC_INFO,
true);
ret = hns_roce_cmq_send(hr_dev, &desc, 1);
if (ret) {
- dev_err(hr_dev->dev, "Query vf count failed(%d).\n", ret);
- return;
+ dev_err(hr_dev->dev, "Query function info failed(%d).\n",
+ ret);
+ return ret;
}
resp = (struct hns_roce_pf_func_info *)desc.data;
hr_dev->func_num = le32_to_cpu(resp->pf_own_func_num);
hr_dev->mac_id = le32_to_cpu(resp->pf_own_mac_id);
+
+ return ret;
}
static void hns_roce_clear_func(struct hns_roce_dev *hr_dev, int vf_id)
@@ -1438,12 +1441,11 @@ static void hns_roce_clear_func(struct hns_roce_dev *hr_dev, int vf_id)
static void hns_roce_function_clear(struct hns_roce_dev *hr_dev)
{
- int i;
int vf_num = 0;/* should be (hr_dev->func_num-1) when enable ROCE VF */
/* Clear vf first, then clear pf */
- for (i = vf_num; i >= 0; i--)
- hns_roce_clear_func(hr_dev, i);
+ for (; vf_num >= 0; vf_num--)
+ hns_roce_clear_func(hr_dev, vf_num);
}
static void hns_roce_clear_extdb_list_info(struct hns_roce_dev *hr_dev)
@@ -2182,7 +2184,9 @@ static int hns_roce_v2_profile(struct hns_roce_dev *hr_dev)
return ret;
}
- hns_roce_query_func_info(hr_dev);
+ ret = hns_roce_query_func_info(hr_dev);
+ if (ret)
+ return ret;
if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08_B) {
ret = hns_roce_query_pf_timer_resource(hr_dev);
@@ -5488,7 +5492,7 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
kfree(hr_qp->sq.wrid);
kfree(hr_qp->rq.wrid);
hns_roce_buf_free(hr_dev, hr_qp->buff_size, &hr_qp->hr_buf);
- if (hr_qp->rq.wqe_cnt)
+ if (hr_qp->rq.wqe_cnt && (hr_qp->rdb_en == 1))
hns_roce_free_db(hr_dev, &hr_qp->rdb);
}
diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index 0802454b2b96..ce5c0544fc1b 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -552,7 +552,6 @@ static int hns_roce_mhop_alloc(struct hns_roce_dev *hr_dev, int npages,
goto err_kcalloc_l2_dma;
}
-
mr->pbl_size = npages;
mr->pbl_ba = mr->pbl_l0_dma_addr;
mr->pbl_hop_num = hr_dev->caps.pbl_hop_num;
@@ -1289,7 +1288,6 @@ static int rereg_mr_trans(struct ib_mr *ibmr, int flags,
if (ret)
goto release_umem;
-
ret = hns_roce_ib_umem_write_mr(hr_dev, mr, mr->umem);
if (ret) {
if (mr->size != ~0ULL) {
@@ -1307,14 +1305,11 @@ static int rereg_mr_trans(struct ib_mr *ibmr, int flags,
}
return 0;
-
release_umem:
ib_umem_release(mr->umem);
return ret;
-
}
-
int hns_roce_rereg_user_mr(struct ib_mr *ibmr, int flags, u64 start, u64 length,
u64 virt_addr, int mr_access_flags, struct ib_pd *pd,
struct ib_udata *udata)
--
2.25.1
[PATCH] Revert "zram: convert remaining CLASS_ATTR() to CLASS_ATTR_RO()"
by Yang Yingliang 07 Jul '20
From: Wade Mealing <wmealing(a)redhat.com>
mainline inclusion
from mainline-v5.8
commit 853eab68afc80f59f36bbdeb715e5c88c501e680
category: bugfix
bugzilla: NA
CVE: CVE-2020-10781
---------------------------
Turns out that the permissions for 0400 really are what we want here,
otherwise any user can read from this file.
[fixed formatting, added changelog, and made attribute static - gregkh]
Reported-by: Wade Mealing <wmealing(a)redhat.com>
Cc: stable <stable(a)vger.kernel.org>
Fixes: f40609d1591f ("zram: convert remaining CLASS_ATTR() to CLASS_ATTR_RO()")
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1847832
Reviewed-by: Steffen Maier <maier(a)linux.ibm.com>
Acked-by: Minchan Kim <minchan(a)kernel.org>
Link: https://lore.kernel.org/r/20200617114946.GA2131650@kroah.com
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/block/zram/zram_drv.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 76abe40bfa83..52850c10165e 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1802,7 +1802,8 @@ static ssize_t hot_add_show(struct class *class,
return ret;
return scnprintf(buf, PAGE_SIZE, "%d\n", ret);
}
-static CLASS_ATTR_RO(hot_add);
+static struct class_attribute class_attr_hot_add =
+ __ATTR(hot_add, 0400, hot_add_show, NULL);
static ssize_t hot_remove_store(struct class *class,
struct class_attribute *attr,
--
2.25.1
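For readers unfamiliar with the attribute macros, here is a minimal
sketch of the change, assuming the usual definition of CLASS_ATTR_RO()
(which expands via __ATTR_RO to a world-readable mode):

	/*
	 * Before: CLASS_ATTR_RO(hot_add) expands to roughly
	 *
	 *   static struct class_attribute class_attr_hot_add =
	 *           __ATTR(hot_add, 0444, hot_add_show, NULL);
	 *
	 * i.e. readable by any user. After the fix the mode is spelled
	 * out as 0400, because hot_add_show() allocates a new zram
	 * device as a side effect of being read, so only root should
	 * be able to trigger it.
	 */
	static struct class_attribute class_attr_hot_add =
		__ATTR(hot_add, 0400, hot_add_show, NULL);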
[PATCH 01/35] can: introduce CAN midlayer private and allocate it automatically
by Yang Yingliang 07 Jul '20
From: Marc Kleine-Budde <mkl(a)pengutronix.de>
mainline inclusion
from mainline-v5.4-rc1
commit ffd956eef69b212a724b1cc4cdc61828f3ad9104
category: feature
bugzilla: 38684
CVE: NA
---------------------------
This patch introduces the CAN midlayer private structure ("struct
can_ml_priv"), which should be used to hold protocol-specific per-device
data structures. For now its only member is "struct can_dev_rcv_lists".
The CAN midlayer private is allocated via alloc_netdev()'s private and
assigned to "struct net_device::ml_priv" during device creation. This is
done transparently for CAN drivers using alloc_candev(). The slcan, vcan
and vxcan drivers, which do not use alloc_candev(), have been adapted
manually. The memory layout of the netdev_priv allocated via
alloc_candev() will look like this:
+-------------------------+
| driver's priv |
+-------------------------+
| struct can_ml_priv |
+-------------------------+
| array of struct sk_buff |
+-------------------------+
Signed-off-by: Oleksij Rempel <o.rempel(a)pengutronix.de>
Signed-off-by: Oliver Hartkopp <socketcan(a)hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl(a)pengutronix.de>
Signed-off-by: Zhang Changzhong <zhangchangzhong(a)huawei.com>
Reviewed-by: YueHaibing <yuehaibing(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/can/dev.c | 22 ++++++++++---
drivers/net/can/slcan.c | 5 ++-
drivers/net/can/vcan.c | 6 ++--
drivers/net/can/vxcan.c | 3 +-
include/linux/can/can-ml.h | 66 ++++++++++++++++++++++++++++++++++++++
net/can/af_can.c | 1 +
net/can/af_can.h | 15 ---------
net/can/proc.c | 1 +
8 files changed, 96 insertions(+), 23 deletions(-)
create mode 100644 include/linux/can/can-ml.h
diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
index 1545f2b299d0..1723c848d0c4 100644
--- a/drivers/net/can/dev.c
+++ b/drivers/net/can/dev.c
@@ -23,6 +23,7 @@
#include <linux/if_arp.h>
#include <linux/workqueue.h>
#include <linux/can.h>
+#include <linux/can/can-ml.h>
#include <linux/can/dev.h>
#include <linux/can/skb.h>
#include <linux/can/netlink.h>
@@ -729,11 +730,24 @@ struct net_device *alloc_candev_mqs(int sizeof_priv, unsigned int echo_skb_max,
struct can_priv *priv;
int size;
+ /* We put the driver's priv, the CAN mid layer priv and the
+ * echo skb into the netdevice's priv. The memory layout for
+ * the netdev_priv is like this:
+ *
+ * +-------------------------+
+ * | driver's priv |
+ * +-------------------------+
+ * | struct can_ml_priv |
+ * +-------------------------+
+ * | array of struct sk_buff |
+ * +-------------------------+
+ */
+
+ size = ALIGN(sizeof_priv, NETDEV_ALIGN) + sizeof(struct can_ml_priv);
+
if (echo_skb_max)
- size = ALIGN(sizeof_priv, sizeof(struct sk_buff *)) +
+ size = ALIGN(size, sizeof(struct sk_buff *)) +
echo_skb_max * sizeof(struct sk_buff *);
- else
- size = sizeof_priv;
dev = alloc_netdev_mqs(size, "can%d", NET_NAME_UNKNOWN, can_setup,
txqs, rxqs);
@@ -746,7 +760,7 @@ struct net_device *alloc_candev_mqs(int sizeof_priv, unsigned int echo_skb_max,
if (echo_skb_max) {
priv->echo_skb_max = echo_skb_max;
priv->echo_skb = (void *)priv +
- ALIGN(sizeof_priv, sizeof(struct sk_buff *));
+ (size - echo_skb_max * sizeof(struct sk_buff *));
}
priv->state = CAN_STATE_STOPPED;
diff --git a/drivers/net/can/slcan.c b/drivers/net/can/slcan.c
index f99cd94509be..3abce32b7dcf 100644
--- a/drivers/net/can/slcan.c
+++ b/drivers/net/can/slcan.c
@@ -55,6 +55,7 @@
#include <linux/workqueue.h>
#include <linux/can.h>
#include <linux/can/skb.h>
+#include <linux/can/can-ml.h>
MODULE_ALIAS_LDISC(N_SLCAN);
MODULE_DESCRIPTION("serial line CAN interface");
@@ -519,6 +520,7 @@ static struct slcan *slc_alloc(void)
char name[IFNAMSIZ];
struct net_device *dev = NULL;
struct slcan *sl;
+ int size;
for (i = 0; i < maxdev; i++) {
dev = slcan_devs[i];
@@ -532,7 +534,8 @@ static struct slcan *slc_alloc(void)
return NULL;
sprintf(name, "slcan%d", i);
- dev = alloc_netdev(sizeof(*sl), name, NET_NAME_UNKNOWN, slc_setup);
+ size = ALIGN(sizeof(*sl), NETDEV_ALIGN) + sizeof(struct can_ml_priv);
+ dev = alloc_netdev(size, name, NET_NAME_UNKNOWN, slc_setup);
if (!dev)
return NULL;
diff --git a/drivers/net/can/vcan.c b/drivers/net/can/vcan.c
index d200a5b0651c..603f53b81d86 100644
--- a/drivers/net/can/vcan.c
+++ b/drivers/net/can/vcan.c
@@ -45,6 +45,7 @@
#include <linux/if_arp.h>
#include <linux/if_ether.h>
#include <linux/can.h>
+#include <linux/can/can-ml.h>
#include <linux/can/dev.h>
#include <linux/can/skb.h>
#include <linux/slab.h>
@@ -167,8 +168,9 @@ static void vcan_setup(struct net_device *dev)
}
static struct rtnl_link_ops vcan_link_ops __read_mostly = {
- .kind = DRV_NAME,
- .setup = vcan_setup,
+ .kind = DRV_NAME,
+ .priv_size = sizeof(struct can_ml_priv),
+ .setup = vcan_setup,
};
static __init int vcan_init_module(void)
diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c
index ed6828821fbd..3cc957f0a529 100644
--- a/drivers/net/can/vxcan.c
+++ b/drivers/net/can/vxcan.c
@@ -29,6 +29,7 @@
#include <linux/can/dev.h>
#include <linux/can/skb.h>
#include <linux/can/vxcan.h>
+#include <linux/can/can-ml.h>
#include <linux/slab.h>
#include <net/rtnetlink.h>
@@ -292,7 +293,7 @@ static struct net *vxcan_get_link_net(const struct net_device *dev)
static struct rtnl_link_ops vxcan_link_ops = {
.kind = DRV_NAME,
- .priv_size = sizeof(struct vxcan_priv),
+ .priv_size = ALIGN(sizeof(struct vxcan_priv), NETDEV_ALIGN) + sizeof(struct can_ml_priv),
.setup = vxcan_setup,
.newlink = vxcan_newlink,
.dellink = vxcan_dellink,
diff --git a/include/linux/can/can-ml.h b/include/linux/can/can-ml.h
new file mode 100644
index 000000000000..0a9d778de8af
--- /dev/null
+++ b/include/linux/can/can-ml.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
+/* Copyright (c) 2002-2007 Volkswagen Group Electronic Research
+ * Copyright (c) 2017 Pengutronix, Marc Kleine-Budde <kernel(a)pengutronix.de>
+ *
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of Volkswagen nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * Alternatively, provided that this notice is retained in full, this
+ * software may be distributed under the terms of the GNU General
+ * Public License ("GPL") version 2, in which case the provisions of the
+ * GPL apply INSTEAD OF those given above.
+ *
+ * The provided data structures and external interfaces from this code
+ * are not restricted to be used by modules with a GPL compatible license.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ *
+ */
+
+#ifndef CAN_ML_H
+#define CAN_ML_H
+
+#include <linux/can.h>
+#include <linux/list.h>
+
+#define CAN_SFF_RCV_ARRAY_SZ (1 << CAN_SFF_ID_BITS)
+#define CAN_EFF_RCV_HASH_BITS 10
+#define CAN_EFF_RCV_ARRAY_SZ (1 << CAN_EFF_RCV_HASH_BITS)
+
+enum { RX_ERR, RX_ALL, RX_FIL, RX_INV, RX_MAX };
+
+struct can_dev_rcv_lists {
+ struct hlist_head rx[RX_MAX];
+ struct hlist_head rx_sff[CAN_SFF_RCV_ARRAY_SZ];
+ struct hlist_head rx_eff[CAN_EFF_RCV_ARRAY_SZ];
+ int remove_on_zero_entries;
+ int entries;
+};
+
+struct can_ml_priv {
+ struct can_dev_rcv_lists dev_rcv_lists;
+};
+
+#endif /* CAN_ML_H */
diff --git a/net/can/af_can.c b/net/can/af_can.c
index 04132b0b5d36..9b85b3455bc1 100644
--- a/net/can/af_can.c
+++ b/net/can/af_can.c
@@ -58,6 +58,7 @@
#include <linux/can.h>
#include <linux/can/core.h>
#include <linux/can/skb.h>
+#include <linux/can/can-ml.h>
#include <linux/ratelimit.h>
#include <net/net_namespace.h>
#include <net/sock.h>
diff --git a/net/can/af_can.h b/net/can/af_can.h
index 9cb3719632bd..051a83bedab1 100644
--- a/net/can/af_can.h
+++ b/net/can/af_can.h
@@ -60,21 +60,6 @@ struct receiver {
struct rcu_head rcu;
};
-#define CAN_SFF_RCV_ARRAY_SZ (1 << CAN_SFF_ID_BITS)
-#define CAN_EFF_RCV_HASH_BITS 10
-#define CAN_EFF_RCV_ARRAY_SZ (1 << CAN_EFF_RCV_HASH_BITS)
-
-enum { RX_ERR, RX_ALL, RX_FIL, RX_INV, RX_MAX };
-
-/* per device receive filters linked at dev->ml_priv */
-struct can_dev_rcv_lists {
- struct hlist_head rx[RX_MAX];
- struct hlist_head rx_sff[CAN_SFF_RCV_ARRAY_SZ];
- struct hlist_head rx_eff[CAN_EFF_RCV_ARRAY_SZ];
- int remove_on_zero_entries;
- int entries;
-};
-
/* statistic structures */
/* can be reset e.g. by can_init_stats() */
diff --git a/net/can/proc.c b/net/can/proc.c
index 70fea17bb04c..f132ee6282e4 100644
--- a/net/can/proc.c
+++ b/net/can/proc.c
@@ -44,6 +44,7 @@
#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/if_arp.h>
+#include <linux/can/can-ml.h>
#include <linux/can/core.h>
#include "af_can.h"
--
2.25.1
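To make the layout above concrete, here is a hedged sketch (not a hunk
from this patch; the helper name is illustrative) of how alloc_candev()
can publish the midlayer private area behind the driver's priv:

	/* Illustrative sketch: the CAN midlayer priv sits directly behind
	 * the driver's priv, aligned to NETDEV_ALIGN, per the layout drawn
	 * in the commit message above. */
	static void can_set_ml_priv_sketch(struct net_device *dev,
					   int sizeof_priv)
	{
		struct can_priv *priv = netdev_priv(dev);

		dev->ml_priv = (void *)priv + ALIGN(sizeof_priv, NETDEV_ALIGN);
	}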
From: Shengzui You <youshengzui(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
------------------------------
If a second reset interrupt arrives within the 15 second
rate-limit window, re-arm the reset timer so that the reset
is retried once the window expires instead of being dropped
(see the sketch after the patch).
Signed-off-by: Shengzui You <youshengzui(a)huawei.com>
Reviewed-by: Weiwei Deng <dengweiwei(a)huawei.com>
Reviewed-by: Zhaohui Zhong <zhongzhaohui(a)huawei.com>
Reviewed-by: Junxin Chen <chenjunxin1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
.../hisilicon/hns3/hns-customer/hns3pf/hclge_main_it.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns-customer/hns3pf/hclge_main_it.c b/drivers/net/ethernet/hisilicon/hns3/hns-customer/hns3pf/hclge_main_it.c
index b8cab1665385..01f0e19f1e50 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns-customer/hns3pf/hclge_main_it.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns-customer/hns3pf/hclge_main_it.c
@@ -115,13 +115,15 @@ void hclge_reset_event_it(struct pci_dev *pdev, struct hnae3_handle *handle)
* not allow it again before 12*HZ times.
*/
if (time_before(jiffies, (hdev->last_reset_time +
- HCLGE_RESET_INTERVAL)))
+ HCLGE_RESET_INTERVAL))) {
+ mod_timer(&hdev->reset_timer, jiffies + HCLGE_RESET_INTERVAL);
return;
- else if (hdev->default_reset_request)
+ } else if (hdev->default_reset_request) {
hdev->reset_level =
hclge_get_reset_level(ae_dev, &hdev->default_reset_request);
- else if (time_after(jiffies, (hdev->last_reset_time + 4 * 5 * HZ)))
+ } else if (time_after(jiffies, (hdev->last_reset_time + 4 * 5 * HZ))) {
hdev->reset_level = HNAE3_FUNC_RESET;
+ }
dev_info(&hdev->pdev->dev, "IT received reset event, reset type is %d",
hdev->reset_level);
--
2.25.1
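The hunk above is an instance of a generic rate-limit-and-rearm timer
pattern; the following sketch uses illustrative names (struct my_dev and
QUIET_WINDOW are not from the driver):

	/* Sketch: an event arriving inside the quiet window re-arms the
	 * timer instead of being dropped, so the work is still retried
	 * once the window expires. */
	static void handle_event_rate_limited(struct my_dev *dev)
	{
		if (time_before(jiffies, dev->last_event + QUIET_WINDOW)) {
			mod_timer(&dev->event_timer, jiffies + QUIET_WINDOW);
			return;
		}
		dev->last_event = jiffies;
		/* ... perform the real work (here: the reset) ... */
	}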
[PATCH] sdei_watchdog: fix compile error when CONFIG_HARDLOCKUP_DETECTOR is not set
by Yang Yingliang 07 Jul '20
From: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------------------------
sdei_watchdog uses watchdog_hardlockup_check() to check whether a
hardlockup exists. That function is only built when
CONFIG_HARDLOCKUP_DETECTOR is set, so select CONFIG_HARDLOCKUP_DETECTOR
when CONFIG_SDEI_WATCHDOG is set. Otherwise the compile error below is
seen.
arch/arm64/kernel/watchdog_sdei.c: In function ‘watchdog_nmi_enable’:
arch/arm64/kernel/watchdog_sdei.c:38:35: error: ‘watchdog_thresh’ undeclared (first use in this function)
sdei_api_set_secure_timer_period(watchdog_thresh);
^
arch/arm64/kernel/watchdog_sdei.c:38:35: note: each undeclared identifier is reported only once for each function it appears in
arch/arm64/kernel/watchdog_sdei.c: In function ‘watchdog_nmi_probe’:
arch/arm64/kernel/watchdog_sdei.c:119:39: error: ‘watchdog_thresh’ undeclared (first use in this function)
if (sdei_api_set_secure_timer_period(watchdog_thresh)) {
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
lib/Kconfig.debug | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index bc65f3b8a8b5..47dca144782b 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -845,6 +845,7 @@ config SDEI_WATCHDOG
bool "SDEI NMI Watchdog support"
depends on ARM_SDE_INTERFACE && !HARDLOCKUP_CHECK_TIMESTAMP
select HAVE_HARDLOCKUP_DETECTOR_ARCH
+ select HARDLOCKUP_DETECTOR
config PMU_WATCHDOG
bool "PMU NMI Watchdog support"
--
2.25.1
From: Chiqijun <chiqijun(a)huawei.com>
driver inclusion
category: feature
bugzilla: 4472
-----------------------------------------------------------------------
Add support for the x86 architecture, providing tuned default interrupt
aggregation and LRO parameters for the x86 platform.
x86 platform.
Signed-off-by: Chiqijun <chiqijun(a)huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/huawei/Kconfig | 1 -
drivers/net/ethernet/huawei/hinic/hinic_hwif.c | 10 ++++++++++
drivers/net/ethernet/huawei/hinic/hinic_hwif.h | 4 ++++
drivers/net/ethernet/huawei/hinic/hinic_lld.c | 17 ++++++++++++++++-
drivers/net/ethernet/huawei/hinic/hinic_main.c | 6 ++++++
.../net/ethernet/huawei/hinic/hinic_nic_cfg.h | 14 ++++++++++++++
6 files changed, 50 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/huawei/Kconfig b/drivers/net/ethernet/huawei/Kconfig
index dc48e5163531..c1a95ae4058b 100644
--- a/drivers/net/ethernet/huawei/Kconfig
+++ b/drivers/net/ethernet/huawei/Kconfig
@@ -5,7 +5,6 @@
config NET_VENDOR_HUAWEI
bool "Huawei devices"
default y
- depends on ARM64
---help---
If you have a network (Ethernet) card belonging to this class, say Y.
Note that the answer to this question doesn't directly affect the
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwif.c b/drivers/net/ethernet/huawei/hinic/hinic_hwif.c
index 1fc0fa57f313..b267084d3672 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hwif.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hwif.c
@@ -368,7 +368,12 @@ void hinic_free_db_addr(void *hwdev, void __iomem *db_base,
hwif = ((struct hinic_hwdev *)hwdev)->hwif;
idx = DB_IDX(db_base, hwif->db_base);
+#if defined(__aarch64__)
/* No need to unmap */
+#else
+ if (dwqe_base && hwif->chip_mode == CHIP_MODE_NORMAL)
+ io_mapping_unmap(dwqe_base);
+#endif
free_db_idx(hwif, idx);
}
@@ -398,7 +403,12 @@ int hinic_alloc_db_addr(void *hwdev, void __iomem **db_base,
offset = ((u64)idx) << PAGE_SHIFT;
+#if defined(__aarch64__)
*dwqe_base = hwif->dwqe_mapping + offset;
+#else
+ *dwqe_base = io_mapping_map_wc(hwif->dwqe_mapping, offset,
+ HINIC_DB_PAGE_SIZE);
+#endif
if (!(*dwqe_base)) {
hinic_free_db_addr(hwdev, *db_base, NULL);
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwif.h b/drivers/net/ethernet/huawei/hinic/hinic_hwif.h
index 9cd2ad8c5962..3a58a44b6bfe 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hwif.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hwif.h
@@ -58,7 +58,11 @@ struct hinic_hwif {
u64 db_base_phy;
u8 __iomem *db_base;
+#if defined(__aarch64__)
void __iomem *dwqe_mapping;
+#else
+ struct io_mapping *dwqe_mapping;
+#endif
struct hinic_free_db_area free_db_area;
struct hinic_func_attr attr;
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_lld.c b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
index 2ad6d2d23207..20e3b0bde8ef 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_lld.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
@@ -114,7 +114,11 @@ struct hinic_pcidev {
u64 db_base_phy;
void __iomem *db_base;
+#if defined(__aarch64__)
void __iomem *dwqe_mapping;
+#else
+ struct io_mapping *dwqe_mapping;
+#endif
/* lock for attach/detach uld */
struct mutex pdev_mutex;
struct hinic_sriov_info sriov_info;
@@ -1874,9 +1878,15 @@ static int mapping_bar(struct pci_dev *pdev, struct hinic_pcidev *pci_adapter)
dwqe_addr = pci_adapter->db_base_phy + db_dwqe_size;
+#if defined(__aarch64__)
/* arm does not support calling ioremap_wc() */
pci_adapter->dwqe_mapping = __ioremap(dwqe_addr, db_dwqe_size,
__pgprot(PROT_DEVICE_nGnRnE));
+#else
+ pci_adapter->dwqe_mapping = io_mapping_create_wc(dwqe_addr,
+ db_dwqe_size);
+
+#endif
if (!pci_adapter->dwqe_mapping) {
sdk_err(&pci_adapter->pcidev->dev, "Failed to io_mapping_create_wc\n");
goto mapping_dwqe_err;
@@ -1898,8 +1908,13 @@ static int mapping_bar(struct pci_dev *pdev, struct hinic_pcidev *pci_adapter)
static void unmapping_bar(struct hinic_pcidev *pci_adapter)
{
- if (pci_adapter->chip_mode == CHIP_MODE_NORMAL)
+ if (pci_adapter->chip_mode == CHIP_MODE_NORMAL) {
+#if defined(__aarch64__)
iounmap(pci_adapter->dwqe_mapping);
+#else
+ io_mapping_free(pci_adapter->dwqe_mapping);
+#endif
+ }
iounmap(pci_adapter->db_base);
iounmap(pci_adapter->intr_reg_base);
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
index dd67d903f739..d78a5e3678b1 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
@@ -2908,7 +2908,13 @@ static void adaptive_configuration_init(struct hinic_nic_dev *nic_dev)
nic_dev->env_info.os = HINIC_OS_HUAWEI;
+#if defined(__aarch64__)
nic_dev->env_info.cpu = HINIC_CPU_ARM_GENERIC;
+#elif defined(__x86_64__)
+ nic_dev->env_info.cpu = HINIC_CPU_X86_GENERIC;
+#else
+ nic_dev->env_info.cpu = HINIC_CPU_UNKNOWN;
+#endif
nic_info(&nic_dev->pdev->dev,
"Board type %u, OS type %u, CPU type %u\n",
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
index e0979df6e209..243265072710 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
@@ -56,6 +56,7 @@
#define HINIC_LRO_RX_TIMER_DEFAULT_PG_10GE 10
#define HINIC_LRO_RX_TIMER_DEFAULT_PG_100GE 8
+#if defined(__aarch64__)
#define HINIC_LOWEST_LATENCY 1
#define HINIC_RX_RATE_LOW 400000
#define HINIC_RX_COAL_TIME_LOW 20
@@ -67,6 +68,19 @@
#define HINIC_TX_RATE_THRESH 35000
#define HINIC_RX_RATE_LOW_VM 400000
#define HINIC_RX_PENDING_LIMIT_HIGH_VM 50
+#else
+#define HINIC_LOWEST_LATENCY 1
+#define HINIC_RX_RATE_LOW 400000
+#define HINIC_RX_COAL_TIME_LOW 16
+#define HINIC_RX_PENDING_LIMIT_LOW 2
+#define HINIC_RX_RATE_HIGH 1000000
+#define HINIC_RX_COAL_TIME_HIGH 225
+#define HINIC_RX_PENDING_LIMIT_HIGH 8
+#define HINIC_RX_RATE_THRESH 50000
+#define HINIC_TX_RATE_THRESH 50000
+#define HINIC_RX_RATE_LOW_VM 100000
+#define HINIC_RX_PENDING_LIMIT_HIGH_VM 87
+#endif
enum hinic_board_type {
HINIC_BOARD_UNKNOWN = 0,
--
2.25.1
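For reference, a hedged sketch of the write-combining io_mapping
lifecycle the x86 branches above rely on (API from <linux/io-mapping.h>;
error handling trimmed, helper name illustrative). At probe time the
driver reserves the region once with io_mapping_create_wc(dwqe_phys_addr,
db_dwqe_size) and releases it at remove with io_mapping_free(map); single
pages are then mapped on demand:

	static void __iomem *dwqe_map_one(struct io_mapping *map,
					  unsigned long offset)
	{
		/* Map one write-combined doorbell page on demand; must be
		 * balanced with io_mapping_unmap() on release. */
		return io_mapping_map_wc(map, offset, HINIC_DB_PAGE_SIZE);
	}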
Cheng Jian (3):
disable stealing by default
sched/fair: introduce SCHED_STEAL
config: enable CONFIG_SCHED_STEAL by default
Steve Sistare (10):
sched: Provide sparsemask, a reduced contention bitmap
sched/topology: Provide hooks to allocate data shared per LLC
sched/topology: Provide cfs_overload_cpus bitmap
sched/fair: Dynamically update cfs_overload_cpus
sched/fair: Hoist idle_stamp up from idle_balance
sched/fair: Generalize the detach_task interface
sched/fair: Provide can_migrate_task_llc
sched/fair: Steal work from an overloaded CPU when CPU goes idle
sched/fair: disable stealing if too many NUMA nodes
sched/fair: Provide idle search schedstats
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/configs/storage_ci_defconfig | 1 +
arch/arm64/configs/syzkaller_defconfig | 1 +
arch/x86/configs/hulk_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
arch/x86/configs/storage_ci_defconfig | 1 +
include/linux/sched/topology.h | 3 +
init/Kconfig | 15 +
kernel/sched/core.c | 35 ++-
kernel/sched/fair.c | 367 ++++++++++++++++++++++--
kernel/sched/features.h | 8 +
kernel/sched/sched.h | 20 ++
kernel/sched/sparsemask.h | 210 ++++++++++++++
kernel/sched/stats.c | 15 +
kernel/sched/stats.h | 20 ++
kernel/sched/topology.c | 141 ++++++++-
18 files changed, 810 insertions(+), 32 deletions(-)
create mode 100644 kernel/sched/sparsemask.h
--
2.25.1
[PATCH 001/125] net: be more gentle about silly gso requests coming from user
by Yang Yingliang 07 Jul '20
From: Eric Dumazet <edumazet(a)google.com>
commit 7c6d2ecbda83150b2036a2b36b21381ad4667762 upstream.
Recent change in virtio_net_hdr_to_skb() broke some packetdrill tests.
When the --mss=XXX option is set, packetdrill always provides gso_type &
gso_size for its inbound packets, regardless of packet size:

	if (packet->tcp && packet->mss) {
		if (packet->ipv4)
			gso.gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
		else
			gso.gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
		gso.gso_size = packet->mss;
	}
Since many other programs could do the same, relax virtio_net_hdr_to_skb()
to no longer return an error, but instead ignore gso settings.
This keeps Willem's intent to make sure no malicious packet can
reach the gso stack.
Note that TCP stack has a special logic in tcp_set_skb_tso_segs()
to clear gso_size for small packets.
Fixes: 6dd912f82680 ("net: check untrusted gso_size at kernel entry")
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Cc: Willem de Bruijn <willemb(a)google.com>
Acked-by: Willem de Bruijn <willemb(a)google.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Cc: Guenter Roeck <linux(a)roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/virtio_net.h | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index 1c296f370e46..f32fe7080d2e 100644
--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -109,16 +109,17 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
u16 gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size);
+ struct skb_shared_info *shinfo = skb_shinfo(skb);
- if (skb->len - p_off <= gso_size)
- return -EINVAL;
-
- skb_shinfo(skb)->gso_size = gso_size;
- skb_shinfo(skb)->gso_type = gso_type;
+ /* Too small packets are not really GSO ones. */
+ if (skb->len - p_off > gso_size) {
+ shinfo->gso_size = gso_size;
+ shinfo->gso_type = gso_type;
- /* Header must be checked, and gso_segs computed. */
- skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
- skb_shinfo(skb)->gso_segs = 0;
+ /* Header must be checked, and gso_segs computed. */
+ shinfo->gso_type |= SKB_GSO_DODGY;
+ shinfo->gso_segs = 0;
+ }
}
return 0;
--
2.25.1
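The TCP-stack logic the message refers to looks roughly like this
(paraphrased sketch of mainline tcp_set_skb_tso_segs(), not a verbatim
copy): a packet that fits in a single MSS is not treated as GSO.

	static void tcp_set_skb_tso_segs_sketch(struct sk_buff *skb,
						unsigned int mss_now)
	{
		if (skb->len <= mss_now) {
			/* Not GSO: one segment, gso_size cleared. */
			tcp_skb_pcount_set(skb, 1);
			TCP_SKB_CB(skb)->tcp_gso_size = 0;
		} else {
			tcp_skb_pcount_set(skb, DIV_ROUND_UP(skb->len, mss_now));
			TCP_SKB_CB(skb)->tcp_gso_size = mss_now;
		}
	}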
From: Jonathan Lebon <jlebon(a)redhat.com>
mainline inclusion
from mainline-v5.5-rc1
commit 3e3e24b42043eceb97ed834102c2d094dfd7aaa6
category: bugfix
---------------------------
Currently, the SELinux LSM prevents one from setting the
`security.selinux` xattr on an inode without a policy first being
loaded. However, this restriction is problematic: it makes it impossible
to have newly created files with the correct label before actually
loading the policy.
This is relevant in distributions like Fedora, where the policy is
loaded by systemd shortly after pivoting out of the initrd. In such
instances, all files created prior to pivoting will be unlabeled. One
then has to relabel them after pivoting, an operation which inherently
races with other processes trying to access those same files.
Going further, there are use cases for creating the entire root
filesystem on first boot from the initrd (e.g. Container Linux supports
this today[1], and we'd like to support it in Fedora CoreOS as well[2]).
One can imagine doing this in two ways: at the block device level (e.g.
laying down a disk image), or at the filesystem level. In the former,
labeling can simply be part of the image. But even in the latter
scenario, one still really wants to be able to set the right labels when
populating the new filesystem.
This patch enables this by changing behaviour in the following two ways:
1. allow `setxattr` on mounts without `SBLABEL_MNT` (which is all of
them if no policy is loaded yet)
2. don't try to set the in-core inode SID if we're not initialized;
instead leave it as `LABEL_INVALID` so that revalidation may be
attempted at a later time
Note the first hunk of this patch is functionally the same as a
previously discussed one[3], though it was part of a larger series which
wasn't accepted.
Co-developed-by: Victor Kamensky <kamensky(a)cisco.com>
Signed-off-by: Victor Kamensky <kamensky(a)cisco.com>
Signed-off-by: Jonathan Lebon <jlebon(a)redhat.com>
[1] https://coreos.com/os/docs/latest/root-filesystem-placement.html
[2] https://github.com/coreos/fedora-coreos-tracker/issues/94
[3] https://www.spinics.net/lists/linux-initramfs/msg04593.html
---
security/selinux/hooks.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index 452254fd89f8..931546d80211 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -3305,7 +3305,7 @@ static int selinux_inode_setxattr(struct dentry *dentry, const char *name,
}
sbsec = inode->i_sb->s_security;
- if (!(sbsec->flags & SBLABEL_MNT))
+ if (!(sbsec->flags & SBLABEL_MNT) && selinux_state.initialized)
return -EOPNOTSUPP;
if (!inode_owner_or_capable(inode))
@@ -3387,6 +3387,15 @@ static void selinux_inode_post_setxattr(struct dentry *dentry, const char *name,
return;
}
+ if (!selinux_state.initialized) {
+ /* If we haven't even been initialized, then we can't validate
+ * against a policy, so leave the label as invalid. It may
+ * resolve to a valid label on the next revalidation try if
+ * we've since initialized.
+ */
+ return;
+ }
+
rc = security_context_to_sid_force(&selinux_state, value, size,
&newsid);
if (rc) {
--
2.27.GIT
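To illustrate the user-visible effect, a small userspace sketch (the
path and context below are hypothetical, chosen for the example):
before this patch, running this from the initrd prior to policy load
fails with EOPNOTSUPP; with it, the xattr is written and the in-core
label is revalidated once the policy is loaded.

	#include <string.h>
	#include <sys/xattr.h>

	int main(void)
	{
		/* Hypothetical path and context, for illustration only. */
		const char *ctx = "system_u:object_r:etc_t:s0";

		return setxattr("/sysroot/etc/myfile", "security.selinux",
				ctx, strlen(ctx), 0) ? 1 : 0;
	}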
[PATCH] fs/filescontrol: add a switch to enable / disable accounting of open fds
by Yang Yingliang 02 Jul '20
From: Yu Kuai <yukuai3(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 38268
CVE: NA
---------------------------
This switch can only turn accounting of open fds in filescontrol from
enabled to disabled; once disabled, it cannot be re-enabled.
The counter is enabled by default and can be disabled by:
a. echo 1 > /sys/fs/cgroup/files/files.no_acct
b. add "filescontrol.no_acct=1" to boot cmd
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/filescontrol.c | 30 +++++++++++++++++++++++++++---
1 file changed, 27 insertions(+), 3 deletions(-)
diff --git a/fs/filescontrol.c b/fs/filescontrol.c
index 5ec5096383c3..32f3a127579c 100644
--- a/fs/filescontrol.c
+++ b/fs/filescontrol.c
@@ -25,14 +25,17 @@
#include <linux/seq_file.h>
#include <linux/fdtable.h>
#include <linux/sched/signal.h>
+#include <linux/module.h>
#define FILES_MAX D_COUNT_MAX
#define FILES_MAX_STR "max"
-
+static bool no_acct;
struct cgroup_subsys files_cgrp_subsys __read_mostly;
EXPORT_SYMBOL(files_cgrp_subsys);
+module_param(no_acct, bool, 0444);
+
struct files_cgroup {
struct cgroup_subsys_state css;
struct page_counter open_handles;
@@ -204,7 +207,7 @@ int files_cgroup_alloc_fd(struct files_struct *files, u64 n)
* we don't charge their fds, only issue is that files.usage
* won't be accurate in root files cgroup.
*/
- if (files != &init_files) {
+ if (!no_acct && files != &init_files) {
struct page_counter *fail_res;
struct files_cgroup *files_cgroup =
files_cgroup_from_files(files);
@@ -222,7 +225,7 @@ void files_cgroup_unalloc_fd(struct files_struct *files, u64 n)
* It's not charged so no need to uncharge, see comments in
* files_cgroup_alloc_fd.
*/
- if (files != &init_files) {
+ if (!no_acct && files != &init_files) {
struct files_cgroup *files_cgroup =
files_cgroup_from_files(files);
page_counter_uncharge(&files_cgroup->open_handles, n);
@@ -230,6 +233,21 @@ void files_cgroup_unalloc_fd(struct files_struct *files, u64 n)
}
EXPORT_SYMBOL(files_cgroup_unalloc_fd);
+static u64 files_disabled_read(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
+ return no_acct;
+}
+
+static int files_disabled_write(struct cgroup_subsys_state *css,
+ struct cftype *cft, u64 val)
+{
+ if (!val)
+ return -EINVAL;
+ no_acct = true;
+
+ return 0;
+}
static int files_limit_read(struct seq_file *sf, void *v)
{
@@ -291,6 +309,12 @@ static struct cftype files[] = {
.name = "usage",
.read_u64 = files_usage_read,
},
+ {
+ .name = "no_acct",
+ .flags = CFTYPE_ONLY_ON_ROOT,
+ .read_u64 = files_disabled_read,
+ .write_u64 = files_disabled_write,
+ },
{ }
};
--
2.25.1
[PATCH] usb: usbtest: fix missing kfree(dev->buf) in usbtest_disconnect
by Yang Yingliang 01 Jul '20
From: Zqiang <qiang.zhang(a)windriver.com>
mainline inclusion
from mainline-v5.8-rc3
commit 28ebeb8db77035e058a510ce9bd17c2b9a009dba
category: bugfix
bugzilla: NA
CVE: CVE-2020-15393
---------------------------
BUG: memory leak
unreferenced object 0xffff888055046e00 (size 256):
comm "kworker/2:9", pid 2570, jiffies 4294942129 (age 1095.500s)
hex dump (first 32 bytes):
00 70 04 55 80 88 ff ff 18 bb 5a 81 ff ff ff ff .p.U......Z.....
f5 96 78 81 ff ff ff ff 37 de 8e 81 ff ff ff ff ..x.....7.......
backtrace:
[<00000000d121dccf>] kmemleak_alloc_recursive
include/linux/kmemleak.h:43 [inline]
[<00000000d121dccf>] slab_post_alloc_hook mm/slab.h:586 [inline]
[<00000000d121dccf>] slab_alloc_node mm/slub.c:2786 [inline]
[<00000000d121dccf>] slab_alloc mm/slub.c:2794 [inline]
[<00000000d121dccf>] kmem_cache_alloc_trace+0x15e/0x2d0 mm/slub.c:2811
[<000000005c3c3381>] kmalloc include/linux/slab.h:555 [inline]
[<000000005c3c3381>] usbtest_probe+0x286/0x19d0
drivers/usb/misc/usbtest.c:2790
[<000000001cec6910>] usb_probe_interface+0x2bd/0x870
drivers/usb/core/driver.c:361
[<000000007806c118>] really_probe+0x48d/0x8f0 drivers/base/dd.c:551
[<00000000a3308c3e>] driver_probe_device+0xfc/0x2a0 drivers/base/dd.c:724
[<000000003ef66004>] __device_attach_driver+0x1b6/0x240
drivers/base/dd.c:831
[<00000000eee53e97>] bus_for_each_drv+0x14e/0x1e0 drivers/base/bus.c:431
[<00000000bb0648d0>] __device_attach+0x1f9/0x350 drivers/base/dd.c:897
[<00000000838b324a>] device_initial_probe+0x1a/0x20 drivers/base/dd.c:944
[<0000000030d501c1>] bus_probe_device+0x1e1/0x280 drivers/base/bus.c:491
[<000000005bd7adef>] device_add+0x131d/0x1c40 drivers/base/core.c:2504
[<00000000a0937814>] usb_set_configuration+0xe84/0x1ab0
drivers/usb/core/message.c:2030
[<00000000e3934741>] generic_probe+0x6a/0xe0 drivers/usb/core/generic.c:210
[<0000000098ade0f1>] usb_probe_device+0x90/0xd0
drivers/usb/core/driver.c:266
[<000000007806c118>] really_probe+0x48d/0x8f0 drivers/base/dd.c:551
[<00000000a3308c3e>] driver_probe_device+0xfc/0x2a0 drivers/base/dd.c:724
Acked-by: Alan Stern <stern(a)rowland.harvard.edu>
Reported-by: Kyungtae Kim <kt0755(a)gmail.com>
Signed-off-by: Zqiang <qiang.zhang(a)windriver.com>
Link: https://lore.kernel.org/r/20200612035210.20494-1-qiang.zhang@windriver.com
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/usb/misc/usbtest.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/usb/misc/usbtest.c b/drivers/usb/misc/usbtest.c
index c7f82310e73e..fc3fc9d48a55 100644
--- a/drivers/usb/misc/usbtest.c
+++ b/drivers/usb/misc/usbtest.c
@@ -2853,6 +2853,7 @@ static void usbtest_disconnect(struct usb_interface *intf)
usb_set_intfdata(intf, NULL);
dev_dbg(&intf->dev, "disconnect\n");
+ kfree(dev->buf);
kfree(dev);
}
--
2.25.1
[PATCH] genirq/debugfs: Add missing sanity checks to interrupt injection
by Yang Yingliang 01 Jul '20
From: Thomas Gleixner <tglx(a)linutronix.de>
mainline inclusion
from mainline-v5.6
commit a740a423c36932695b01a3e920f697bc55b05fec
category: bugfix
bugzilla: 32425
CVE: NA
-------------------
Interrupts cannot be injected when the interrupt is not activated and when
a replay is already in progress.
Fixes: 536e2e34bd00 ("genirq/debugfs: Triggering of interrupts from userspace")
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: Marc Zyngier <maz(a)kernel.org>
Link: https://lkml.kernel.org/r/20200306130623.500019114@linutronix.de
Signed-off-by: Yuan Can <yuancan(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/irq/debugfs.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/kernel/irq/debugfs.c b/kernel/irq/debugfs.c
index 0a7365456e72..4eb5c62991fe 100644
--- a/kernel/irq/debugfs.c
+++ b/kernel/irq/debugfs.c
@@ -206,8 +206,15 @@ static ssize_t irq_debug_write(struct file *file, const char __user *user_buf,
chip_bus_lock(desc);
raw_spin_lock_irqsave(&desc->lock, flags);
- if (irq_settings_is_level(desc) || desc->istate & IRQS_NMI) {
- /* Can't do level nor NMIs, sorry */
+ /*
+ * Don't allow injection when the interrupt is:
+ * - Level or NMI type
+ * - not activated
+ * - replaying already
+ */
+ if (irq_settings_is_level(desc) ||
+ !irqd_is_activated(&desc->irq_data) ||
+ (desc->istate & (IRQS_NMI | IRQS_REPLAY))) {
err = -EINVAL;
} else {
desc->istate |= IRQS_PENDING;
--
2.25.1
From: Tommi Rantala <tommi.t.rantala(a)nokia.com>
mainline inclusion
from mainline-5.6-rc6
commit 29b4f5f18857
category: bugfix
bugzilla: 31774
CVE: NA
-------------------------------------------------
Since glibc 2.28 when running 'perf top --stdio', input handling no
longer works, but hitting any key always just prints the "Mapped keys"
help text.
To fix it, call clearerr() in the display_thread() loop to clear any EOF
sticky errors, as instructed in the glibc NEWS file
(https://sourceware.org/git/?p=glibc.git;a=blob;f=NEWS)
* All stdio functions now treat end-of-file as a sticky condition. If you
read from a file until EOF, and then the file is enlarged by another
process, you must call clearerr or another function with the same effect
(e.g. fseek, rewind) before you can read the additional data. This
corrects a longstanding C99 conformance bug. It is most likely to affect
programs that use stdio to read interactive input from a terminal.
(Bug #1190.)
Signed-off-by: Tommi Rantala <tommi.t.rantala(a)nokia.com>
Tested-by: Arnaldo Carvalho de Melo <acme(a)redhat.com>
Cc: Alexander Shishkin <alexander.shishkin(a)linux.intel.com>
Cc: Jiri Olsa <jolsa(a)redhat.com>
Cc: Mark Rutland <mark.rutland(a)arm.com>
Cc: Namhyung Kim <namhyung(a)kernel.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Link: http://lore.kernel.org/lkml/20200305083714.9381-2-tommi.t.rantala@nokia.com
Signed-off-by: Arnaldo Carvalho de Melo <acme(a)redhat.com>
Signed-off-by: Wei Li <liwei391(a)huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
tools/perf/builtin-top.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 781f6cf1cda6..265c074cbe3b 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -661,7 +661,9 @@ static void *display_thread(void *arg)
delay_msecs = top->delay_secs * MSEC_PER_SEC;
set_term_quiet_input(&save);
/* trash return*/
- getc(stdin);
+ clearerr(stdin);
+ if (poll(&stdin_poll, 1, 0) > 0)
+ getc(stdin);
while (!done) {
perf_top__print_sym_table(top);
--
2.25.1
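The same workaround applies to any interactive stdio reader built
against glibc >= 2.28; a standalone sketch of the pattern in the hunk:

	#include <poll.h>
	#include <stdio.h>

	/* Clear the sticky EOF condition (glibc >= 2.28 behaviour), then
	 * consume a pending byte only if one is really there, so the call
	 * never blocks. */
	static void drain_stdin_nonblocking(void)
	{
		struct pollfd stdin_poll = { .fd = 0, .events = POLLIN };

		clearerr(stdin);
		if (poll(&stdin_poll, 1, 0) > 0)
			getc(stdin);
	}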
From: Alex Williamson <alex.williamson(a)redhat.com>
mainline inclusion
from mainline-v5.8-rc3
commit ebfa440ce38b7e2e04c3124aa89c8a9f4094cf21
category: bugfix
bugzilla: NA
CVE: CVE-2020-12888
---------------------------
SR-IOV VFs do not implement the memory enable bit of the command
register, therefore this bit is not set in config space after
pci_enable_device(). This leads to an unintended difference
between PF and VF in hand-off state to the user. We can correct
this by setting the initial value of the memory enable bit in our
virtualized config space. There's really no need, however, to ever
fault a user on a VF, as this would only indicate an
error in the user's management of the enable bit, versus a PF
where the same access could trigger hardware faults.
Fixes: abafbc551fdd ("vfio-pci: Invalidate mmaps and block MMIO access on disabled memory")
Signed-off-by: Alex Williamson <alex.williamson(a)redhat.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/vfio/pci/vfio_pci_config.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
index 4fe71fbce194..a1a26465d224 100644
--- a/drivers/vfio/pci/vfio_pci_config.c
+++ b/drivers/vfio/pci/vfio_pci_config.c
@@ -401,9 +401,15 @@ static inline void p_setd(struct perm_bits *p, int off, u32 virt, u32 write)
/* Caller should hold memory_lock semaphore */
bool __vfio_pci_memory_enabled(struct vfio_pci_device *vdev)
{
+ struct pci_dev *pdev = vdev->pdev;
u16 cmd = le16_to_cpu(*(__le16 *)&vdev->vconfig[PCI_COMMAND]);
- return cmd & PCI_COMMAND_MEMORY;
+ /*
+ * SR-IOV VF memory enable is handled by the MSE bit in the
+ * PF SR-IOV capability, there's therefore no need to trigger
+ * faults based on the virtual value.
+ */
+ return pdev->is_virtfn || (cmd & PCI_COMMAND_MEMORY);
}
/*
@@ -1732,6 +1738,15 @@ int vfio_config_init(struct vfio_pci_device *vdev)
vconfig[PCI_INTERRUPT_PIN]);
vconfig[PCI_INTERRUPT_PIN] = 0; /* Gratuitous for good VFs */
+
+ /*
+ * VFs do not implement the memory enable bit of the COMMAND
+ * register therefore we'll not have it set in our initial
+ * copy of config space after pci_enable_device(). For
+ * consistency with PFs, set the virtual enable bit here.
+ */
+ *(__le16 *)&vconfig[PCI_COMMAND] |=
+ cpu_to_le16(PCI_COMMAND_MEMORY);
}
if (!IS_ENABLED(CONFIG_VFIO_PCI_INTX) || vdev->nointx)
--
2.25.1
From: Viresh Kumar <viresh.kumar(a)linaro.org>
mainline inclusion
from mainline-v5.0-rc1
commit 1da1843f9f0334e2428308945d396ffecc2acfe1
category: feature
bugzilla: 38260
CVE: NA
---------------------------
We already have task_has_rt_policy() and task_has_dl_policy() helpers;
create task_has_idle_policy() as well and update the sched core to start
using it.
While at it, use task_has_dl_policy() at one more place.
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Acked-by: Daniel Lezcano <daniel.lezcano(a)linaro.org>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Vincent Guittot <vincent.guittot(a)linaro.org>
Link: http://lkml.kernel.org/r/ce3915d5b490fc81af926a3b6bfb775e7188e005.154141689…
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/sched/core.c | 4 ++--
kernel/sched/debug.c | 2 +-
kernel/sched/fair.c | 10 +++++-----
kernel/sched/sched.h | 5 +++++
4 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0fbdc620697b..c1513905c0de 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -724,7 +724,7 @@ static void set_load_weight(struct task_struct *p, bool update_load)
/*
* SCHED_IDLE tasks get minimal weight:
*/
- if (idle_policy(p->policy)) {
+ if (task_has_idle_policy(p)) {
load->weight = scale_load(WEIGHT_IDLEPRIO);
load->inv_weight = WMULT_IDLEPRIO;
return;
@@ -4270,7 +4270,7 @@ static int __sched_setscheduler(struct task_struct *p,
* Treat SCHED_IDLE as nice 20. Only allow a switch to
* SCHED_NORMAL if the RLIMIT_NICE would normally permit it.
*/
- if (idle_policy(p->policy) && !idle_policy(policy)) {
+ if (task_has_idle_policy(p) && !idle_policy(policy)) {
if (!can_nice(p, task_nice(p)))
return -EPERM;
}
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index c542d84dafce..f12382877390 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1012,7 +1012,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
#endif
P(policy);
P(prio);
- if (p->policy == SCHED_DEADLINE) {
+ if (task_has_dl_policy(p)) {
P(dl.runtime);
P(dl.deadline);
}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 09b0d83d5bbc..807c7fb78b6f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6609,7 +6609,7 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
static void set_last_buddy(struct sched_entity *se)
{
- if (entity_is_task(se) && unlikely(task_of(se)->policy == SCHED_IDLE))
+ if (entity_is_task(se) && unlikely(task_has_idle_policy(task_of(se))))
return;
for_each_sched_entity(se) {
@@ -6621,7 +6621,7 @@ static void set_last_buddy(struct sched_entity *se)
static void set_next_buddy(struct sched_entity *se)
{
- if (entity_is_task(se) && unlikely(task_of(se)->policy == SCHED_IDLE))
+ if (entity_is_task(se) && unlikely(task_has_idle_policy(task_of(se))))
return;
for_each_sched_entity(se) {
@@ -6679,8 +6679,8 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
return;
/* Idle tasks are by definition preempted by non-idle tasks. */
- if (unlikely(curr->policy == SCHED_IDLE) &&
- likely(p->policy != SCHED_IDLE))
+ if (unlikely(task_has_idle_policy(curr)) &&
+ likely(!task_has_idle_policy(p)))
goto preempt;
/*
@@ -7090,7 +7090,7 @@ static int task_hot(struct task_struct *p, struct lb_env *env)
if (p->sched_class != &fair_sched_class)
return 0;
- if (unlikely(p->policy == SCHED_IDLE))
+ if (unlikely(task_has_idle_policy(p)))
return 0;
/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 955abd645ff9..fbe3b3b8a19f 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -184,6 +184,11 @@ static inline bool valid_policy(int policy)
rt_policy(policy) || dl_policy(policy);
}
+static inline int task_has_idle_policy(struct task_struct *p)
+{
+ return idle_policy(p->policy);
+}
+
static inline int task_has_rt_policy(struct task_struct *p)
{
return rt_policy(p->policy);
--
2.25.1
[PATCH 001/204] power: supply: bq24257_charger: Replace depends on REGMAP_I2C with select
by Yang Yingliang 29 Jun '20
From: Enric Balletbo i Serra <enric.balletbo(a)collabora.com>
[ Upstream commit 87c3d579c8ed0eaea6b1567d529a8daa85a2bc6c ]
regmap is a library function that gets selected by drivers that need
it. No driver modules should depend on it. Depending on REGMAP_I2C makes
this driver only build if another driver already selected REGMAP_I2C,
as the symbol can't be selected through the menu kernel configuration.
Fixes: 2219a935963e ("power_supply: Add TI BQ24257 charger driver")
Signed-off-by: Enric Balletbo i Serra <enric.balletbo(a)collabora.com>
Signed-off-by: Sebastian Reichel <sebastian.reichel(a)collabora.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/power/supply/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig
index ff6dab0bf0dd..76c699b5abda 100644
--- a/drivers/power/supply/Kconfig
+++ b/drivers/power/supply/Kconfig
@@ -555,7 +555,7 @@ config CHARGER_BQ24257
tristate "TI BQ24250/24251/24257 battery charger driver"
depends on I2C
depends on GPIOLIB || COMPILE_TEST
- depends on REGMAP_I2C
+ select REGMAP_I2C
help
Say Y to enable support for the TI BQ24250, BQ24251, and BQ24257 battery
chargers.
--
2.25.1
[PATCH 1/2] Revert "ipmi: fix sleep-in-atomic in free_user at cleanup SRCU user->release_barrier"
by Yang Yingliang 29 Jun '20
hulk inclusion
category: bugfix
bugzilla: 5499
CVE: NA
---------------------------
cleanup_srcu_struct_quiesced() won't free memory if work_pending()
is true, which causes a memory leak, so revert this and use the
mainline patch instead.
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/char/ipmi/ipmi_msghandler.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
index 0085d32cef46..7510013f28f8 100644
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -1190,12 +1190,7 @@ EXPORT_SYMBOL(ipmi_get_smi_info);
static void free_user(struct kref *ref)
{
struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);
-
- /*
- * Cleanup without waiting. This could be called in atomic context.
- * Refcount is zero: all read-sections should have been ended.
- */
- cleanup_srcu_struct_quiesced(&user->release_barrier);
+ cleanup_srcu_struct(&user->release_barrier);
kfree(user);
}
--
2.25.1
From: Hangbin Liu <liuhangbin(a)gmail.com>
[ Upstream commit 79a1f0ccdbb4ad700590f61b00525b390cb53905 ]
Socket option IPV6_ADDRFORM supports UDP/UDPLITE and TCP at present.
Previously the checking logic looks like:
if (sk->sk_protocol == IPPROTO_UDP || sk->sk_protocol == IPPROTO_UDPLITE)
do_some_check;
else if (sk->sk_protocol != IPPROTO_TCP)
break;
After commit b6f6118901d1 ("ipv6: restrict IPV6_ADDRFORM operation"), TCP
was blocked as the logic changed to:
if (sk->sk_protocol == IPPROTO_UDP || sk->sk_protocol == IPPROTO_UDPLITE)
do_some_check;
else if (sk->sk_protocol == IPPROTO_TCP)
do_some_check;
break;
else
break;
Then after commit 82c9ae440857 ("ipv6: fix restrict IPV6_ADDRFORM operation")
UDP/UDPLITE were blocked as the logic changed to:
if (sk->sk_protocol == IPPROTO_UDP || sk->sk_protocol == IPPROTO_UDPLITE)
do_some_check;
if (sk->sk_protocol == IPPROTO_TCP)
do_some_check;
if (sk->sk_protocol != IPPROTO_TCP)
break;
Fix it by using Eric's code and simply removing the break in the TCP
check, so that the logic looks like:
if (sk->sk_protocol == IPPROTO_UDP || sk->sk_protocol == IPPROTO_UDPLITE)
do_some_check;
else if (sk->sk_protocol == IPPROTO_TCP)
do_some_check;
else
break;
Fixes: 82c9ae440857 ("ipv6: fix restrict IPV6_ADDRFORM operation")
Signed-off-by: Hangbin Liu <liuhangbin(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/ipv6/ipv6_sockglue.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
index f255eaaf0136..9ca4e8657534 100644
--- a/net/ipv6/ipv6_sockglue.c
+++ b/net/ipv6/ipv6_sockglue.c
@@ -187,14 +187,15 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
retv = -EBUSY;
break;
}
- }
- if (sk->sk_protocol == IPPROTO_TCP &&
- sk->sk_prot != &tcpv6_prot) {
- retv = -EBUSY;
+ } else if (sk->sk_protocol == IPPROTO_TCP) {
+ if (sk->sk_prot != &tcpv6_prot) {
+ retv = -EBUSY;
+ break;
+ }
+ } else {
break;
}
- if (sk->sk_protocol != IPPROTO_TCP)
- break;
+
if (sk->sk_state != TCP_ESTABLISHED) {
retv = -ENOTCONN;
break;
--
2.25.1
From: Alex Williamson <alex.williamson(a)redhat.com>
mainline inclusion
from mainline-v5.8-rc1
commit 41311242221e3482b20bfed10fa4d9db98d87016
category: bugfix
bugzilla: NA
CVE: CVE-2020-12888
---------------------------
With conversion to follow_pfn(), DMA mapping a PFNMAP range depends on
the range being faulted into the vma. Add support to manually provide
that, in the same way as done on KVM with hva_to_pfn_remapped().
Reviewed-by: Peter Xu <peterx(a)redhat.com>
Signed-off-by: Alex Williamson <alex.williamson(a)redhat.com>
Conflicts:
drivers/vfio/vfio_iommu_type1.c
[yyl: adjust context]
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/vfio/vfio_iommu_type1.c | 36 ++++++++++++++++++++++++++++++---
1 file changed, 33 insertions(+), 3 deletions(-)
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 4bfc79635c6a..01be1496307b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -346,6 +346,32 @@ static int put_pfn(unsigned long pfn, int prot)
return 0;
}
+static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
+ unsigned long vaddr, unsigned long *pfn,
+ bool write_fault)
+{
+ int ret;
+
+ ret = follow_pfn(vma, vaddr, pfn);
+ if (ret) {
+ bool unlocked = false;
+
+ ret = fixup_user_fault(NULL, mm, vaddr,
+ FAULT_FLAG_REMOTE |
+ (write_fault ? FAULT_FLAG_WRITE : 0),
+ &unlocked);
+ if (unlocked)
+ return -EAGAIN;
+
+ if (ret)
+ return ret;
+
+ ret = follow_pfn(vma, vaddr, pfn);
+ }
+
+ return ret;
+}
+
static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
int prot, unsigned long *pfn)
{
@@ -385,12 +411,16 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
down_read(&mm->mmap_sem);
+retry:
vma = find_vma_intersection(mm, vaddr, vaddr + 1);
if (vma && vma->vm_flags & VM_PFNMAP) {
- if (!follow_pfn(vma, vaddr, pfn) &&
- is_invalid_reserved_pfn(*pfn))
- ret = 0;
+ ret = follow_fault_pfn(vma, mm, vaddr, pfn, prot & IOMMU_WRITE);
+ if (ret == -EAGAIN)
+ goto retry;
+
+ if (!ret && !is_invalid_reserved_pfn(*pfn))
+ ret = -EFAULT;
}
up_read(&mm->mmap_sem);
--
2.25.1

[PATCH] Disable cache readunique on 1620 which can cause performance decrease
by shenkai (D) 28 Jun '20
From 85f6f7c3bd13b3d6fd064899e2818ee70bdf29e2 Mon Sep 17 00:00:00 2001
euleros inclusion
commit 85f6f7c3bd13b3d6fd064899e2818ee70bdf29e2
category: bugfix
Random performance decreases appear in Hackbench cases that test
pipe or socket communication among multiple threads on the HiSilicon
1620 SoC.
Cache sharing caused by changes in data layout and the cache
readunique mechanism both lead to this problem.
The readunique mechanism, which can be triggered by store operations,
invalidates cachelines on other cores during the data fetching stage,
so cacheline invalidation happens frequently when data is shared
across cores.
Disabling cache readunique tackles this problem. RU can be left
enabled by passing 'readunique_enable' on the kernel command line
(see the __setup handler below).
Test cases look like:
./hackbench -pipe 20 thread 1000
On a 128-core 1620 machine, the above test cases take about 0.3s
when RU is off, while with RU on the time cost can exceed 1s.
What's more, we disable readunique only in EL2, because disabling
readunique in EL1 may cause a panic due to lack of the required
privilege, which is usually set in BIOS.
Signed-off-by: Kai Shen <shenkai8(a)huawei.com>
Reviewed-by: Wenliang He <hewenliang4(a)huawei.com>
Reviewed-by: Jinxian He <hejingxian(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Reviewed-by: Wei Li <liwei391(a)huawei.com>
---
arch/arm64/Kconfig | 9 +++++++
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/include/asm/cpucaps.h | 3 ++-
arch/arm64/kernel/cpu_errata.c | 37 ++++++++++++++++++++++++++++
4 files changed, 49 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b818273ef..33e185af5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -642,6 +642,15 @@ config QCOM_FALKOR_ERRATUM_E1041
If unsure, say Y.
+config HISILICON_ERRATUM_1620_RU
+ bool "Hisi 1620 cache readunique might compromise performance"
+ default y
+ help
+ The HiSilicon 1620 cache readunique might compromise performance;
+ pass 'readunique_enable' on the kernel command line to keep RU on.
+
+ If unsure, say Y.
+
endmenu
diff --git a/arch/arm64/configs/euleros_defconfig b/arch/arm64/configs/euleros_defconfig
index 360e69291..29403bfe4 100644
--- a/arch/arm64/configs/euleros_defconfig
+++ b/arch/arm64/configs/euleros_defconfig
@@ -5650,3 +5650,4 @@ CONFIG_IO_STRICT_DEVMEM=y
# CONFIG_DEBUG_EFI is not set
# CONFIG_ARM64_RELOC_TEST is not set
# CONFIG_CORESIGHT is not set
+CONFIG_HISILICON_ERRATUM_1620_RU=y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index d56d815b9..112c866c1 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -58,7 +58,8 @@
#define ARM64_SSBS 37
#define ARM64_CLEARPAGE_STNP 38
#define ARM64_WORKAROUND_1542419 39
+#define ARM64_HISI_1620_RU 40
-#define ARM64_NCAPS 40
+#define ARM64_NCAPS 41
#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 7522163c1..fa751abed 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -25,6 +25,11 @@
#include <asm/cpufeature.h>
#include <asm/smp_plat.h>
+#ifdef CONFIG_HISILICON_ERRATUM_1620_RU
+#include <asm/ptrace.h>
+#include <asm/sysreg.h>
+#endif
+
static bool __maybe_unused
is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
{
@@ -491,6 +496,30 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCI, 0);
}
+#ifdef CONFIG_HISILICON_ERRATUM_1620_RU
+static bool readunique_enabled = false;
+
+static int __init enable_readunique(char *data)
+{
+ readunique_enabled = true;
+ return 0;
+}
+__setup("readunique_enable", enable_readunique);
+
+#define CTLR_HISI_1620_RU (1L << 40)
+static void __maybe_unused
+hisi_1620_ru_disable(const struct arm64_cpu_capabilities *__unused)
+{
+ u64 el;
+ if (readunique_enabled)
+ return;
+
+ el = read_sysreg(CurrentEL);
+ if (el == CurrentEL_EL2)
+ sysreg_clear_set(S3_1_c15_c6_4, 0, CTLR_HISI_1620_RU);
+}
+#endif
+
/* known invulnerable cores */
static const struct midr_range arm64_ssb_cpus[] = {
MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
@@ -884,6 +913,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
.matches = has_neoverse_n1_erratum_1542419,
.cpu_enable = cpu_enable_trap_ctr_access,
},
+#endif
+#ifdef CONFIG_HISILICON_ERRATUM_1620_RU
+ {
+ .desc = "Hisi 1620 Cache Readunique Disable",
+ .capability = ARM64_HISI_1620_RU,
+ ERRATA_MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
+ .cpu_enable = hisi_1620_ru_disable,
+ },
#endif
{
}
--
2.19.1
From: yunaixin <yunaixin03610(a)163.com>
This patch set contains 5 communication drivers for the Huawei BMA software.
The BMA software is system management software. It supports status
monitoring, performance monitoring, and event monitoring of various
components, including server CPUs, memory, hard disks, NICs, IB cards,
PCIe cards, RAID controller cards, and optical modules.
These 5 drivers are used by the BMA software to send and receive messages
over a PCIe channel in different ways.
yunaixin (6):
Huawei BMA: Adding Huawei BMA driver: host_edma_drv
Huawei BMA: Adding Huawei BMA driver: host_cdev_drv
Huawei BMA: Adding Huawei BMA driver: host_veth_drv
Huawei BMA: Adding Huawei BMA driver: cdev_veth_drv
Huawei BMA: Adding Huawei BMA driver: host_kbox_drv
Huawei BMA: Add BMA driver config options
arch/arm64/configs/euleros_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
drivers/net/ethernet/huawei/Kconfig | 1 +
drivers/net/ethernet/huawei/Makefile | 3 +-
drivers/net/ethernet/huawei/bma/Kconfig | 5 +
drivers/net/ethernet/huawei/bma/Makefile | 9 +
.../net/ethernet/huawei/bma/cdev_drv/Kconfig | 11 +
.../net/ethernet/huawei/bma/cdev_drv/Makefile | 2 +
.../ethernet/huawei/bma/cdev_drv/bma_cdev.c | 369 +++
.../ethernet/huawei/bma/cdev_veth_drv/Kconfig | 11 +
.../huawei/bma/cdev_veth_drv/Makefile | 2 +
.../bma/cdev_veth_drv/virtual_cdev_eth_net.c | 1839 ++++++++++++
.../bma/cdev_veth_drv/virtual_cdev_eth_net.h | 300 ++
.../net/ethernet/huawei/bma/edma_drv/Kconfig | 11 +
.../net/ethernet/huawei/bma/edma_drv/Makefile | 2 +
.../huawei/bma/edma_drv/bma_devintf.c | 597 ++++
.../huawei/bma/edma_drv/bma_devintf.h | 40 +
.../huawei/bma/edma_drv/bma_include.h | 116 +
.../ethernet/huawei/bma/edma_drv/bma_pci.c | 533 ++++
.../ethernet/huawei/bma/edma_drv/bma_pci.h | 94 +
.../ethernet/huawei/bma/edma_drv/edma_host.c | 1462 ++++++++++
.../ethernet/huawei/bma/edma_drv/edma_host.h | 351 +++
.../huawei/bma/include/bma_ker_intf.h | 94 +
.../net/ethernet/huawei/bma/kbox_drv/Kconfig | 11 +
.../net/ethernet/huawei/bma/kbox_drv/Makefile | 2 +
.../ethernet/huawei/bma/kbox_drv/kbox_dump.c | 121 +
.../ethernet/huawei/bma/kbox_drv/kbox_dump.h | 33 +
.../ethernet/huawei/bma/kbox_drv/kbox_hook.c | 101 +
.../ethernet/huawei/bma/kbox_drv/kbox_hook.h | 33 +
.../huawei/bma/kbox_drv/kbox_include.h | 40 +
.../ethernet/huawei/bma/kbox_drv/kbox_main.c | 168 ++
.../ethernet/huawei/bma/kbox_drv/kbox_main.h | 23 +
.../ethernet/huawei/bma/kbox_drv/kbox_mce.c | 264 ++
.../ethernet/huawei/bma/kbox_drv/kbox_mce.h | 23 +
.../ethernet/huawei/bma/kbox_drv/kbox_panic.c | 187 ++
.../ethernet/huawei/bma/kbox_drv/kbox_panic.h | 25 +
.../huawei/bma/kbox_drv/kbox_printk.c | 363 +++
.../huawei/bma/kbox_drv/kbox_printk.h | 33 +
.../huawei/bma/kbox_drv/kbox_ram_drive.c | 188 ++
.../huawei/bma/kbox_drv/kbox_ram_drive.h | 31 +
.../huawei/bma/kbox_drv/kbox_ram_image.c | 135 +
.../huawei/bma/kbox_drv/kbox_ram_image.h | 84 +
.../huawei/bma/kbox_drv/kbox_ram_op.c | 986 +++++++
.../huawei/bma/kbox_drv/kbox_ram_op.h | 77 +
.../net/ethernet/huawei/bma/veth_drv/Kconfig | 11 +
.../net/ethernet/huawei/bma/veth_drv/Makefile | 2 +
.../ethernet/huawei/bma/veth_drv/veth_hb.c | 2502 +++++++++++++++++
.../ethernet/huawei/bma/veth_drv/veth_hb.h | 440 +++
48 files changed, 11736 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/huawei/bma/Kconfig
create mode 100644 drivers/net/ethernet/huawei/bma/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_drv/Kconfig
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_drv/bma_cdev.c
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/Kconfig
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/virtual_cdev_eth_net.c
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/virtual_cdev_eth_net.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/Kconfig
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_devintf.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_devintf.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_include.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_pci.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_pci.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/edma_host.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/edma_host.h
create mode 100644 drivers/net/ethernet/huawei/bma/include/bma_ker_intf.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/Kconfig
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_dump.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_dump.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_hook.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_hook.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_include.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_main.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_main.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_mce.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_mce.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_panic.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_panic.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_printk.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_printk.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_drive.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_drive.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_image.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_image.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_op.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_op.h
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/Kconfig
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/veth_hb.c
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/veth_hb.h
--
2.26.2.windows.1
From: Amir Goldstein <amir73il(a)gmail.com>
mainline inclusion
from mainline-5.5-rc2
commit 7e63c87fc2dcf3be9d3aab82d4a0ea085880bdca
category: bugfix
bugzilla: 37632
CVE: NA
---------------------------
In the past, overlayfs required that lower fs have non null uuid in
order to support nfs export and decode copy up origin file handles.
Commit 9df085f3c9a2 ("ovl: relax requirement for non null uuid of
lower fs") relaxed this requirement for nfs export support, as long
as uuid (even if null) is unique among all lower fs.
However, said commit unintentionally also relaxed the non null uuid
requirement for decoding copy up origin file handles, regardless of
the unique uuid requirement.
Amend this mistake by disabling decoding of copy up origin file handle
from lower fs with a conflicting uuid.
We still encode copy up origin file handles from those fs, because
file handles like those already exist in the wild and because they
might provide useful information in the future.
There is an unhandled corner case described by Miklos this way:
- two filesystems, A and B, both have null uuid
- upper layer is on A
- lower layer 1 is also on A
- lower layer 2 is on B
In this case bad_uuid won't be set for B, because the check only
involves the list of lower fs. Hence we'll try to decode a layer 2
origin on layer 1 and fail.
We will deal with this corner case later.
Reported-by: Colin Ian King <colin.king(a)canonical.com>
Tested-by: Colin Ian King <colin.king(a)canonical.com>
Link: https://lore.kernel.org/lkml/20191106234301.283006-1-colin.king@canonical.c…
Fixes: 9df085f3c9a2 ("ovl: relax requirement for non null uuid ...")
Cc: stable(a)vger.kernel.org # v4.20+
Signed-off-by: Amir Goldstein <amir73il(a)gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi(a)redhat.com>
Signed-off-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/overlayfs/namei.c | 8 ++++++++
fs/overlayfs/ovl_entry.h | 2 ++
fs/overlayfs/super.c | 24 +++++++++++++++++-------
3 files changed, 27 insertions(+), 7 deletions(-)
diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
index badf039267a2..62515ec25481 100644
--- a/fs/overlayfs/namei.c
+++ b/fs/overlayfs/namei.c
@@ -328,6 +328,14 @@ int ovl_check_origin_fh(struct ovl_fs *ofs, struct ovl_fh *fh, bool connected,
int i;
for (i = 0; i < ofs->numlower; i++) {
+ /*
+ * If lower fs uuid is not unique among lower fs we cannot match
+ * fh->uuid to layer.
+ */
+ if (ofs->lower_layers[i].fsid &&
+ ofs->lower_layers[i].fs->bad_uuid)
+ continue;
+
origin = ovl_decode_real_fh(fh, ofs->lower_layers[i].mnt,
connected);
if (origin)
diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h
index 1a1adc697c55..787fe4be835e 100644
--- a/fs/overlayfs/ovl_entry.h
+++ b/fs/overlayfs/ovl_entry.h
@@ -25,6 +25,8 @@ struct ovl_config {
struct ovl_sb {
struct super_block *sb;
dev_t pseudo_dev;
+ /* Unusable (conflicting) uuid */
+ bool bad_uuid;
};
struct ovl_layer {
diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
index ddcf8d3a83bf..3881f07a2434 100644
--- a/fs/overlayfs/super.c
+++ b/fs/overlayfs/super.c
@@ -1259,7 +1259,7 @@ static bool ovl_lower_uuid_ok(struct ovl_fs *ofs, const uuid_t *uuid)
{
unsigned int i;
- if (!ofs->config.nfs_export && !(ofs->config.index && ofs->upper_mnt))
+ if (!ofs->config.nfs_export && !ofs->upper_mnt)
return true;
for (i = 0; i < ofs->numlowerfs; i++) {
@@ -1267,9 +1267,13 @@ static bool ovl_lower_uuid_ok(struct ovl_fs *ofs, const uuid_t *uuid)
* We use uuid to associate an overlay lower file handle with a
* lower layer, so we can accept lower fs with null uuid as long
* as all lower layers with null uuid are on the same fs.
+ * if we detect multiple lower fs with the same uuid, we
+ * disable lower file handle decoding on all of them.
*/
- if (uuid_equal(&ofs->lower_fs[i].sb->s_uuid, uuid))
+ if (uuid_equal(&ofs->lower_fs[i].sb->s_uuid, uuid)) {
+ ofs->lower_fs[i].bad_uuid = true;
return false;
+ }
}
return true;
}
@@ -1281,6 +1285,7 @@ static int ovl_get_fsid(struct ovl_fs *ofs, const struct path *path)
unsigned int i;
dev_t dev;
int err;
+ bool bad_uuid = false;
/* fsid 0 is reserved for upper fs even with non upper overlay */
if (ofs->upper_mnt && ofs->upper_mnt->mnt_sb == sb)
@@ -1292,11 +1297,15 @@ static int ovl_get_fsid(struct ovl_fs *ofs, const struct path *path)
}
if (!ovl_lower_uuid_ok(ofs, &sb->s_uuid)) {
- ofs->config.index = false;
- ofs->config.nfs_export = false;
- pr_warn("overlayfs: %s uuid detected in lower fs '%pd2', falling back to index=off,nfs_export=off.\n",
- uuid_is_null(&sb->s_uuid) ? "null" : "conflicting",
- path->dentry);
+ bad_uuid = true;
+ if (ofs->config.index || ofs->config.nfs_export) {
+ ofs->config.index = false;
+ ofs->config.nfs_export = false;
+ pr_warn("overlayfs: %s uuid detected in lower fs '%pd2', falling back to index=off,nfs_export=off.\n",
+ uuid_is_null(&sb->s_uuid) ? "null" :
+ "conflicting",
+ path->dentry);
+ }
}
err = get_anon_bdev(&dev);
@@ -1307,6 +1316,7 @@ static int ovl_get_fsid(struct ovl_fs *ofs, const struct path *path)
ofs->lower_fs[ofs->numlowerfs].sb = sb;
ofs->lower_fs[ofs->numlowerfs].pseudo_dev = dev;
+ ofs->lower_fs[ofs->numlowerfs].bad_uuid = bad_uuid;
ofs->numlowerfs++;
return ofs->numlowerfs;
--
2.25.1

From: He Kuang <hekuang(a)huawei.com>
mainline inclusion
from mainline-5.0-rc5
commit da06d5683868
category: bugfix
bugzilla: 38073
CVE: NA
-------------------------------------------------
The annotation line percentage is compared and inserted into the rbtree,
but the percent field of 'struct annotation_data' is an array, so
comparing two such fields directly only yields the difference of their
addresses.
This patch compares the right slot of the percent array according to
opts->percent_type and makes things right.
The problem can be reproduced by pressing 'H' in perf top annotation view.
It should highlight the instruction line which has the highest sampling
percentage.
Signed-off-by: He Kuang <hekuang(a)huawei.com>
Reviewed-by: Jiri Olsa <jolsa(a)kernel.org>
Cc: Alexander Shishkin <alexander.shishkin(a)linux.intel.com>
Cc: Namhyung Kim <namhyung(a)kernel.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Link: http://lkml.kernel.org/r/20190120160523.4391-1-hekuang@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme(a)redhat.com>
Signed-off-by: Wei Li <liwei391(a)huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
tools/perf/ui/browsers/annotate.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
index a3c255228d62..131f51131b52 100644
--- a/tools/perf/ui/browsers/annotate.c
+++ b/tools/perf/ui/browsers/annotate.c
@@ -225,20 +225,24 @@ static unsigned int annotate_browser__refresh(struct ui_browser *browser)
return ret;
}
-static int disasm__cmp(struct annotation_line *a, struct annotation_line *b)
+static double disasm__cmp(struct annotation_line *a, struct annotation_line *b,
+ int percent_type)
{
int i;
for (i = 0; i < a->data_nr; i++) {
- if (a->data[i].percent == b->data[i].percent)
+ if (a->data[i].percent[percent_type] == b->data[i].percent[percent_type])
continue;
- return a->data[i].percent < b->data[i].percent;
+ return a->data[i].percent[percent_type] -
+ b->data[i].percent[percent_type];
}
return 0;
}
-static void disasm_rb_tree__insert(struct rb_root *root, struct annotation_line *al)
+static void disasm_rb_tree__insert(struct annotate_browser *browser,
+ struct annotation_line *al)
{
+ struct rb_root *root = &browser->entries;
struct rb_node **p = &root->rb_node;
struct rb_node *parent = NULL;
struct annotation_line *l;
@@ -247,7 +251,7 @@ static void disasm_rb_tree__insert(struct rb_root *root, struct annotation_line
parent = *p;
l = rb_entry(parent, struct annotation_line, rb_node);
- if (disasm__cmp(al, l))
+ if (disasm__cmp(al, l, browser->opts->percent_type) < 0)
p = &(*p)->rb_left;
else
p = &(*p)->rb_right;
@@ -330,7 +334,7 @@ static void annotate_browser__calc_percent(struct annotate_browser *browser,
RB_CLEAR_NODE(&pos->al.rb_node);
continue;
}
- disasm_rb_tree__insert(&browser->entries, &pos->al);
+ disasm_rb_tree__insert(browser, &pos->al);
}
pthread_mutex_unlock(&notes->lock);
--
2.25.1

[PATCH 1/2] xfs: Use scnprintf() for avoiding potential buffer overflow
by Yang Yingliang 17 Jun '20
From: Takashi Iwai <tiwai(a)suse.de>
mainline inclusion
from mainline-v5.7-rc1
commit 17bb60b74124e9491d593e2601e3afe14daa2f57
category: bugfix
bugzilla: 33322
CVE: NA
---------------------------
Since snprintf() returns the would-be-output size instead of the
actual output size, the succeeding calls may go beyond the given
buffer limit. Fix it by replacing with scnprintf().
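For illustration, here is a minimal user-space sketch of the difference
(my_scnprintf is a hypothetical stand-in for the kernel helper, which
has no libc equivalent):

#include <stdarg.h>
#include <stdio.h>

/* Return the number of characters actually written, never the would-be
 * length, so a running 'len' can never pass the end of the buffer. */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);
	if (i >= (int)size)
		i = size ? (int)size - 1 : 0;
	return i;
}

int main(void)
{
	char buf[8];

	/* snprintf() reports 10 even though only 7 chars plus NUL fit: */
	printf("%d\n", snprintf(buf, sizeof(buf), "%s", "0123456789"));
	/* the scnprintf()-style return value stays within the buffer: */
	printf("%d\n", my_scnprintf(buf, sizeof(buf), "%s", "0123456789"));
	return 0;
}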
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Reviewed-by: Darrick J. Wong <darrick.wong(a)oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong(a)oracle.com>
Signed-off-by: yu kuai <yukuai3(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/xfs/xfs_stats.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/xfs/xfs_stats.c b/fs/xfs/xfs_stats.c
index 740ac9674848..0373d9ea7334 100644
--- a/fs/xfs/xfs_stats.c
+++ b/fs/xfs/xfs_stats.c
@@ -58,13 +58,13 @@ int xfs_stats_format(struct xfsstats __percpu *stats, char *buf)
/* Loop over all stats groups */
for (i = j = 0; i < ARRAY_SIZE(xstats); i++) {
- len += snprintf(buf + len, PATH_MAX - len, "%s",
+ len += scnprintf(buf + len, PATH_MAX - len, "%s",
xstats[i].desc);
/* inner loop does each group */
for (; j < xstats[i].endpoint; j++)
- len += snprintf(buf + len, PATH_MAX - len, " %u",
+ len += scnprintf(buf + len, PATH_MAX - len, " %u",
counter_val(stats, j));
- len += snprintf(buf + len, PATH_MAX - len, "\n");
+ len += scnprintf(buf + len, PATH_MAX - len, "\n");
}
/* extra precision counters */
for_each_possible_cpu(i) {
@@ -73,9 +73,9 @@ int xfs_stats_format(struct xfsstats __percpu *stats, char *buf)
xs_read_bytes += per_cpu_ptr(stats, i)->s.xs_read_bytes;
}
- len += snprintf(buf + len, PATH_MAX-len, "xpc %Lu %Lu %Lu\n",
+ len += scnprintf(buf + len, PATH_MAX-len, "xpc %Lu %Lu %Lu\n",
xs_xstrat_bytes, xs_write_bytes, xs_read_bytes);
- len += snprintf(buf + len, PATH_MAX-len, "debug %u\n",
+ len += scnprintf(buf + len, PATH_MAX-len, "debug %u\n",
#if defined(DEBUG)
1);
#else
--
2.25.1
From: Zheng Bin <zhengbin13(a)huawei.com>
mainline inclusion
from mainline-bc5603c314bd
commit bc5603c314bd1d2a931ad3088fcea52f20cfbac5
category: bugfix
bugzilla: 34591
CVE: NA
---------------------------
Use the following commands to test nfsv4 (the size of file1M is 1MB):
mount -t nfs -o vers=4.0,actimeo=60 127.0.0.1:/dir1 /mnt
cp file1M /mnt
du -h /mnt/file1M -->0 within 60s, then 1M
When write is done(cp file1M /mnt), will call this:
nfs_writeback_done
nfs4_write_done
nfs4_write_done_cb
nfs_writeback_update_inode
nfs_post_op_update_inode_force_wcc_locked(change, ctime, mtime
nfs_post_op_update_inode_force_wcc_locked
nfs_set_cache_invalid
nfs_refresh_inode_locked
nfs_update_inode
The nfsd write response contains change, ctime and mtime, so the flag is
cleared after nfs_update_inode. However, the write response does not
contain space_used; the previous open response contained space_used with
value 0, so inode->i_blocks is still 0.
nfs_getattr -->called by "du -h"
do_update |= force_sync || nfs_attribute_cache_expired -->false in 60s
cache_validity = READ_ONCE(NFS_I(inode)->cache_validity)
do_update |= cache_validity & (NFS_INO_INVALID_ATTR -->false
if (do_update) {
__nfs_revalidate_inode
}
Within 60s, does not send getattr request to nfsd, thus "du -h /mnt/file1M"
is 0.
Add a NFS_INO_INVALID_BLOCKS flag, set it when nfsv4 write is done.
[zb: although commit 1c341b777501 ("NFS: Add deferred cache invalidation
for close-to-open consistency violations") not merged, still define
NFS_INO_INVALID_BLOCKS BIT(14)]
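Condensed, the fix adds one validity bit and wires it into two decision
points (a sketch assembled from the hunks below):

#define NFS_INO_INVALID_BLOCKS	BIT(14)	/* cached blocks are invalid */

/* nfs_getattr(): force an attribute update when the blocks are stale */
if (request_mask & STATX_BLOCKS)
	do_update |= cache_validity & NFS_INO_INVALID_BLOCKS;

/* nfs_update_inode(): keep the bit set when the reply lacks space_used */
nfsi->cache_validity |= save_cache_validity &
			(NFS_INO_INVALID_BLOCKS | NFS_INO_REVAL_FORCED);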
Fixes: 16e143751727 ("NFS: More fine grained attribute tracking")
Signed-off-by: Zheng Bin <zhengbin13(a)huawei.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker(a)Netapp.com>
Signed-off-by: Zheng Bin <zhengbin13(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/nfs/inode.c | 14 +++++++++++---
include/linux/nfs_fs.h | 1 +
2 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
index e4cd3a2fe698..5d8e0eb0d663 100644
--- a/fs/nfs/inode.c
+++ b/fs/nfs/inode.c
@@ -822,6 +822,8 @@ int nfs_getattr(const struct path *path, struct kstat *stat,
do_update |= cache_validity & NFS_INO_INVALID_ATIME;
if (request_mask & (STATX_CTIME|STATX_MTIME))
do_update |= cache_validity & NFS_INO_REVAL_PAGECACHE;
+ if (request_mask & STATX_BLOCKS)
+ do_update |= cache_validity & NFS_INO_INVALID_BLOCKS;
if (do_update) {
/* Update the attribute cache */
if (!(server->flags & NFS_MOUNT_NOAC))
@@ -1741,7 +1743,8 @@ int nfs_post_op_update_inode_force_wcc_locked(struct inode *inode, struct nfs_fa
status = nfs_post_op_update_inode_locked(inode, fattr,
NFS_INO_INVALID_CHANGE
| NFS_INO_INVALID_CTIME
- | NFS_INO_INVALID_MTIME);
+ | NFS_INO_INVALID_MTIME
+ | NFS_INO_INVALID_BLOCKS);
return status;
}
@@ -1851,7 +1854,8 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
nfsi->cache_validity &= ~(NFS_INO_INVALID_ATTR
| NFS_INO_INVALID_ATIME
| NFS_INO_REVAL_FORCED
- | NFS_INO_REVAL_PAGECACHE);
+ | NFS_INO_REVAL_PAGECACHE
+ | NFS_INO_INVALID_BLOCKS);
/* Do atomic weak cache consistency updates */
nfs_wcc_update_inode(inode, fattr);
@@ -2012,8 +2016,12 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
inode->i_blocks = nfs_calc_block_size(fattr->du.nfs3.used);
} else if (fattr->valid & NFS_ATTR_FATTR_BLOCKS_USED)
inode->i_blocks = fattr->du.nfs2.blocks;
- else
+ else {
+ nfsi->cache_validity |= save_cache_validity &
+ (NFS_INO_INVALID_BLOCKS
+ | NFS_INO_REVAL_FORCED);
cache_revalidated = false;
+ }
/* Update attrtimeo value if we're out of the unstable period */
if (attr_changed) {
diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index a0831e9d19c9..620306b1604d 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -221,6 +221,7 @@ struct nfs4_copy_state {
#define NFS_INO_INVALID_MTIME BIT(10) /* cached mtime is invalid */
#define NFS_INO_INVALID_SIZE BIT(11) /* cached size is invalid */
#define NFS_INO_INVALID_OTHER BIT(12) /* other attrs are invalid */
+#define NFS_INO_INVALID_BLOCKS BIT(14) /* cached blocks are invalid */
#define NFS_INO_INVALID_ATTR (NFS_INO_INVALID_CHANGE \
| NFS_INO_INVALID_CTIME \
--
2.25.1
From: Dave Chinner <dchinner(a)redhat.com>
mainline inclusion
from mainline-5.5-rc1
commit 3f8a4f1d876d3e3e49e50b0396eaffcc4ba71b08
category: bugfix
bugzilla: 26723
CVE: NA
---------------------------
[commit message is verbose for discussion purposes - will trim it
down later. Some questions about implementation details at the end.]
Zorro Lang recently ran a new test to stress single inode extent
counts now that they are no longer limited by memory allocation.
The test was simply:
This test uncovered a problem where the hole punching operation
appeared to finish with no error, but apparently only created 268M
extents instead of the 10 billion it was supposed to.
Further, trying to punch out extents that should have been present
resulted in success, but no change in the extent count. It looked
like a silent failure.
While running the test and observing the behaviour in real time,
I observed the extent count growing at ~2M extents/minute, and saw
this after about an hour:
> sleep 60 ; \
> xfs_io -f -c "stat" /mnt/scratch/big-file |grep next
fsxattr.nextents = 127657993
fsxattr.nextents = 129683339
And a few minutes later this:
fsxattr.nextents = 4177861124
Ah, what? Where did that 4 billion extra extents suddenly come from?
Stop the workload, unmount, mount:
fsxattr.nextents = 166044375
And it's back at the expected number. i.e. the extent count is
correct on disk, but it's screwed up in memory. I loaded up the
extent list, and immediately:
fsxattr.nextents = 4192576215
It's bad again. So, where does that number come from?
xfs_fill_fsxattr():
if (ip->i_df.if_flags & XFS_IFEXTENTS)
fa->fsx_nextents = xfs_iext_count(&ip->i_df);
else
fa->fsx_nextents = ip->i_d.di_nextents;
And that's the behaviour I just saw in a nutshell. The on disk count
is correct, but once the tree is loaded into memory, it goes whacky.
Clearly there's something wrong with xfs_iext_count():
inline xfs_extnum_t xfs_iext_count(struct xfs_ifork *ifp)
{
return ifp->if_bytes / sizeof(struct xfs_iext_rec);
}
Simple enough, but 134M extents is 2**27, and that's right about
where things went wrong. A struct xfs_iext_rec is 16 bytes in size,
which means 2**27 * 2**4 = 2**31 and we're right on target for an
integer overflow. And, sure enough:
struct xfs_ifork {
int if_bytes; /* bytes in if_u1 */
....
Once we get 2**27 extents in a file, we overflow if_bytes and the
in-core extent count goes wrong. And when we reach 2**28 extents,
if_bytes wraps back to zero and things really start to go wrong
there. This is where the silent failure comes from - only the first
2**28 extents can be looked up directly due to the overflow, all the
extents above this index wrap back to somewhere in the first 2**28
extents. Hence with a regular pattern, trying to punch a hole in the
range that didn't have holes mapped to a hole in the first 2**28
extents and so "succeeded" without changing anything. Hence "silent
failure"...
Fix this by converting if_bytes to an int64_t and converting all the
index variables and size calculations to use int64_t types to avoid
overflows in future. Signed integers are still used to enable easy
detection of extent count underflows. This enables scalability of
extent counts to the limits of the on-disk format - MAXEXTNUM
(2**31) extents.
Current testing is at over 500M extents and still going:
fsxattr.nextents = 517310478
Reported-by: Zorro Lang <zlang(a)redhat.com>
Signed-off-by: Dave Chinner <dchinner(a)redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong(a)oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong(a)oracle.com>
Conflict: fs/xfs/libxfs/xfs_inode_fork.h
Signed-off-by: yu kuai <yukuai3(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/xfs/libxfs/xfs_attr_leaf.c | 18 ++++++++++--------
fs/xfs/libxfs/xfs_dir2_sf.c | 2 +-
fs/xfs/libxfs/xfs_iext_tree.c | 2 +-
fs/xfs/libxfs/xfs_inode_fork.c | 8 ++++----
fs/xfs/libxfs/xfs_inode_fork.h | 14 ++++++++------
5 files changed, 24 insertions(+), 20 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c
index 2652d00842d6..ceb9b0045725 100644
--- a/fs/xfs/libxfs/xfs_attr_leaf.c
+++ b/fs/xfs/libxfs/xfs_attr_leaf.c
@@ -421,13 +421,15 @@ xfs_attr_namesp_match(int arg_flags, int ondisk_flags)
* special case for dev/uuid inodes, they have fixed size data forks.
*/
int
-xfs_attr_shortform_bytesfit(xfs_inode_t *dp, int bytes)
+xfs_attr_shortform_bytesfit(
+ struct xfs_inode *dp,
+ int bytes)
{
- int offset;
- int minforkoff; /* lower limit on valid forkoff locations */
- int maxforkoff; /* upper limit on valid forkoff locations */
- int dsize;
- xfs_mount_t *mp = dp->i_mount;
+ struct xfs_mount *mp = dp->i_mount;
+ int64_t dsize;
+ int minforkoff;
+ int maxforkoff;
+ int offset;
/* rounded down */
offset = (XFS_LITINO(mp, dp->i_d.di_version) - bytes) >> 3;
@@ -493,7 +495,7 @@ xfs_attr_shortform_bytesfit(xfs_inode_t *dp, int bytes)
* A data fork btree root must have space for at least
* MINDBTPTRS key/ptr pairs if the data fork is small or empty.
*/
- minforkoff = max(dsize, XFS_BMDR_SPACE_CALC(MINDBTPTRS));
+ minforkoff = max_t(int64_t, dsize, XFS_BMDR_SPACE_CALC(MINDBTPTRS));
minforkoff = roundup(minforkoff, 8) >> 3;
/* attr fork btree root can have at least this many key/ptr pairs */
@@ -913,7 +915,7 @@ xfs_attr_shortform_verify(
char *endp;
struct xfs_ifork *ifp;
int i;
- int size;
+ int64_t size;
ASSERT(ip->i_d.di_aformat == XFS_DINODE_FMT_LOCAL);
ifp = XFS_IFORK_PTR(ip, XFS_ATTR_FORK);
diff --git a/fs/xfs/libxfs/xfs_dir2_sf.c b/fs/xfs/libxfs/xfs_dir2_sf.c
index 716c8c50f8bc..e126446a9484 100644
--- a/fs/xfs/libxfs/xfs_dir2_sf.c
+++ b/fs/xfs/libxfs/xfs_dir2_sf.c
@@ -653,7 +653,7 @@ xfs_dir2_sf_verify(
int i;
int i8count;
int offset;
- int size;
+ int64_t size;
int error;
uint8_t filetype;
diff --git a/fs/xfs/libxfs/xfs_iext_tree.c b/fs/xfs/libxfs/xfs_iext_tree.c
index 771dd072015d..22beaab8f9b4 100644
--- a/fs/xfs/libxfs/xfs_iext_tree.c
+++ b/fs/xfs/libxfs/xfs_iext_tree.c
@@ -600,7 +600,7 @@ xfs_iext_realloc_root(
struct xfs_ifork *ifp,
struct xfs_iext_cursor *cur)
{
- size_t new_size = ifp->if_bytes + sizeof(struct xfs_iext_rec);
+ int64_t new_size = ifp->if_bytes + sizeof(struct xfs_iext_rec);
void *new;
/* account for the prev/next pointers */
diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c
index f9acf1d436f6..80aa0f1a04dd 100644
--- a/fs/xfs/libxfs/xfs_inode_fork.c
+++ b/fs/xfs/libxfs/xfs_inode_fork.c
@@ -131,7 +131,7 @@ xfs_init_local_fork(
struct xfs_inode *ip,
int whichfork,
const void *data,
- int size)
+ int64_t size)
{
struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork);
int mem_size = size, real_size = 0;
@@ -469,11 +469,11 @@ xfs_iroot_realloc(
void
xfs_idata_realloc(
struct xfs_inode *ip,
- int byte_diff,
+ int64_t byte_diff,
int whichfork)
{
struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork);
- int new_size = (int)ifp->if_bytes + byte_diff;
+ int64_t new_size = ifp->if_bytes + byte_diff;
ASSERT(new_size >= 0);
ASSERT(new_size <= XFS_IFORK_SIZE(ip, whichfork));
@@ -554,7 +554,7 @@ xfs_iextents_copy(
struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork);
struct xfs_iext_cursor icur;
struct xfs_bmbt_irec rec;
- int copied = 0;
+ int64_t copied = 0;
ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL | XFS_ILOCK_SHARED));
ASSERT(ifp->if_bytes > 0);
diff --git a/fs/xfs/libxfs/xfs_inode_fork.h b/fs/xfs/libxfs/xfs_inode_fork.h
index 60361d2d74a1..09e3548afdde 100644
--- a/fs/xfs/libxfs/xfs_inode_fork.h
+++ b/fs/xfs/libxfs/xfs_inode_fork.h
@@ -13,16 +13,16 @@ struct xfs_dinode;
* File incore extent information, present for each of data & attr forks.
*/
struct xfs_ifork {
- int if_bytes; /* bytes in if_u1 */
- unsigned int if_seq; /* cow fork mod counter */
+ int64_t if_bytes; /* bytes in if_u1 */
struct xfs_btree_block *if_broot; /* file's incore btree root */
- short if_broot_bytes; /* bytes allocated for root */
- unsigned char if_flags; /* per-fork flags */
+ unsigned int if_seq; /* cow fork mod counter */
int if_height; /* height of the extent tree */
union {
void *if_root; /* extent tree root */
char *if_data; /* inline file data */
} if_u1;
+ short if_broot_bytes; /* bytes allocated for root */
+ unsigned char if_flags; /* per-fork flags */
};
/*
@@ -93,12 +93,14 @@ int xfs_iformat_fork(struct xfs_inode *, struct xfs_dinode *);
void xfs_iflush_fork(struct xfs_inode *, struct xfs_dinode *,
struct xfs_inode_log_item *, int);
void xfs_idestroy_fork(struct xfs_inode *, int);
-void xfs_idata_realloc(struct xfs_inode *, int, int);
+void xfs_idata_realloc(struct xfs_inode *ip, int64_t byte_diff,
+ int whichfork);
void xfs_iroot_realloc(struct xfs_inode *, int, int);
int xfs_iread_extents(struct xfs_trans *, struct xfs_inode *, int);
int xfs_iextents_copy(struct xfs_inode *, struct xfs_bmbt_rec *,
int);
-void xfs_init_local_fork(struct xfs_inode *, int, const void *, int);
+void xfs_init_local_fork(struct xfs_inode *ip, int whichfork,
+ const void *data, int64_t size);
xfs_extnum_t xfs_iext_count(struct xfs_ifork *ifp);
void xfs_iext_insert(struct xfs_inode *, struct xfs_iext_cursor *cur,
--
2.25.1
From: Tang Yizhou <tangyizhou(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 37631
CVE: NA
-------------------------------------------------
To support signal monitoring, we need to export the defined tracepoint
signal_generate so that it can be used in kernel modules.
Signed-off-by: Tang Yizhou <tangyizhou(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/signal.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/signal.c b/kernel/signal.c
index 718f22da0305..091a3a313410 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -52,6 +52,8 @@
#include <asm/cacheflush.h>
#include "audit.h" /* audit_signal_info() */
+EXPORT_TRACEPOINT_SYMBOL(signal_generate);
+
/*
* SLAB caches for signal bits.
*/
--
2.25.1

[PATCH 1/2] perf/x86: Make perf callchains work without CONFIG_FRAME_POINTER
by Yang Yingliang 15 Jun '20
From: Kairui Song <kasong(a)redhat.com>
mainline inclusion
from mainline-5.2-rc1
commit d15d356887e7
category: bugfix
bugzilla: 35738
CVE: NA
-------------------------------------------------
Currently perf callchain doesn't work well with ORC unwinder
when sampling from trace point. We'll get useless in kernel callchain
like this:
perf 6429 [000] 22.498450: kmem:mm_page_alloc: page=0x176a17 pfn=1534487 order=0 migratetype=0 gfp_flags=GFP_KERNEL
ffffffffbe23e32e __alloc_pages_nodemask+0x22e (/lib/modules/5.1.0-rc3+/build/vmlinux)
7efdf7f7d3e8 __poll+0x18 (/usr/lib64/libc-2.28.so)
5651468729c1 [unknown] (/usr/bin/perf)
5651467ee82a main+0x69a (/usr/bin/perf)
7efdf7eaf413 __libc_start_main+0xf3 (/usr/lib64/libc-2.28.so)
5541f689495641d7 [unknown] ([unknown])
The root cause is that, for trace point events, perf doesn't get a
real snapshot of the hardware registers. Instead it tries to fetch the
required caller's registers and composes a fake register snapshot
which is supposed to contain enough information to start unwinding.
However, without CONFIG_FRAME_POINTER, when the caller's BP cannot be
obtained as the frame pointer, the current frame pointer is returned
instead. We get an invalid register combination which confuses the
unwinder and ends the stacktrace early.
So in such a case just don't try to dump BP, and let the unwinder start
directly when the register set is not a real snapshot. Use SP
as the skip mark; the unwinder will skip all the frames until it meets
the frame of the trace point caller.
Tested with frame pointer unwinder and ORC unwinder, this makes perf
callchain get the full kernel space stacktrace again like this:
perf 6503 [000] 1567.570191: kmem:mm_page_alloc: page=0x16c904 pfn=1493252 order=0 migratetype=0 gfp_flags=GFP_KERNEL
ffffffffb523e2ae __alloc_pages_nodemask+0x22e (/lib/modules/5.1.0-rc3+/build/vmlinux)
ffffffffb52383bd __get_free_pages+0xd (/lib/modules/5.1.0-rc3+/build/vmlinux)
ffffffffb52fd28a __pollwait+0x8a (/lib/modules/5.1.0-rc3+/build/vmlinux)
ffffffffb521426f perf_poll+0x2f (/lib/modules/5.1.0-rc3+/build/vmlinux)
ffffffffb52fe3e2 do_sys_poll+0x252 (/lib/modules/5.1.0-rc3+/build/vmlinux)
ffffffffb52ff027 __x64_sys_poll+0x37 (/lib/modules/5.1.0-rc3+/build/vmlinux)
ffffffffb500418b do_syscall_64+0x5b (/lib/modules/5.1.0-rc3+/build/vmlinux)
ffffffffb5a0008c entry_SYSCALL_64_after_hwframe+0x44 (/lib/modules/5.1.0-rc3+/build/vmlinux)
7f71e92d03e8 __poll+0x18 (/usr/lib64/libc-2.28.so)
55a22960d9c1 [unknown] (/usr/bin/perf)
55a22958982a main+0x69a (/usr/bin/perf)
7f71e9202413 __libc_start_main+0xf3 (/usr/lib64/libc-2.28.so)
5541f689495641d7 [unknown] ([unknown])
Co-developed-by: Josh Poimboeuf <jpoimboe(a)redhat.com>
Signed-off-by: Kairui Song <kasong(a)redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Cc: Alexander Shishkin <alexander.shishkin(a)linux.intel.com>
Cc: Alexei Starovoitov <alexei.starovoitov(a)gmail.com>
Cc: Arnaldo Carvalho de Melo <acme(a)kernel.org>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Dave Young <dyoung(a)redhat.com>
Cc: Jiri Olsa <jolsa(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Namhyung Kim <namhyung(a)kernel.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Link: https://lkml.kernel.org/r/20190422162652.15483-1-kasong@redhat.com
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Wei Li <liwei391(a)huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/x86/events/core.c | 21 +++++++++++++++++----
arch/x86/include/asm/perf_event.h | 7 +------
arch/x86/include/asm/stacktrace.h | 13 -------------
include/linux/perf_event.h | 14 ++++++++++----
4 files changed, 28 insertions(+), 27 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index c89e6c6e26f6..0a6b48afeb05 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2332,6 +2332,15 @@ void arch_perf_update_userpage(struct perf_event *event,
cyc2ns_read_end();
}
+/*
+ * Determine whether the regs were taken from an irq/exception handler rather
+ * than from perf_arch_fetch_caller_regs().
+ */
+static bool perf_hw_regs(struct pt_regs *regs)
+{
+ return regs->flags & X86_EFLAGS_FIXED;
+}
+
void
perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
@@ -2343,11 +2352,15 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
return;
}
- if (perf_callchain_store(entry, regs->ip))
- return;
+ if (perf_hw_regs(regs)) {
+ if (perf_callchain_store(entry, regs->ip))
+ return;
+ unwind_start(&state, current, regs, NULL);
+ } else {
+ unwind_start(&state, current, NULL, (void *)regs->sp);
+ }
- for (unwind_start(&state, current, regs, NULL); !unwind_done(&state);
- unwind_next_frame(&state)) {
+ for (; !unwind_done(&state); unwind_next_frame(&state)) {
addr = unwind_get_return_address(&state);
if (!addr || perf_callchain_store(entry, addr))
return;
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index f6c4915a863e..620c3e3564be 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -264,14 +264,9 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs);
*/
#define perf_arch_fetch_caller_regs(regs, __ip) { \
(regs)->ip = (__ip); \
- (regs)->bp = caller_frame_pointer(); \
+ (regs)->sp = (unsigned long)__builtin_frame_address(0); \
(regs)->cs = __KERNEL_CS; \
regs->flags = 0; \
- asm volatile( \
- _ASM_MOV "%%"_ASM_SP ", %0\n" \
- : "=m" ((regs)->sp) \
- :: "memory" \
- ); \
}
struct perf_guest_switch_msr {
diff --git a/arch/x86/include/asm/stacktrace.h b/arch/x86/include/asm/stacktrace.h
index f335aad404a4..beef7ad9e43a 100644
--- a/arch/x86/include/asm/stacktrace.h
+++ b/arch/x86/include/asm/stacktrace.h
@@ -98,19 +98,6 @@ struct stack_frame_ia32 {
u32 return_address;
};
-static inline unsigned long caller_frame_pointer(void)
-{
- struct stack_frame *frame;
-
- frame = __builtin_frame_address(0);
-
-#ifdef CONFIG_FRAME_POINTER
- frame = frame->next_frame;
-#endif
-
- return (unsigned long)frame;
-}
-
void show_opcodes(struct pt_regs *regs, const char *loglvl);
void show_ip(struct pt_regs *regs, const char *loglvl);
#endif /* _ASM_X86_STACKTRACE_H */
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index d8b4d31acd18..cf3da70056f7 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1050,12 +1050,18 @@ static inline void perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned lo
#endif
/*
- * Take a snapshot of the regs. Skip ip and frame pointer to
- * the nth caller. We only need a few of the regs:
+ * When generating a perf sample in-line, instead of from an interrupt /
+ * exception, we lack a pt_regs. This is typically used from software events
+ * like: SW_CONTEXT_SWITCHES, SW_MIGRATIONS and the tie-in with tracepoints.
+ *
+ * We typically don't need a full set, but (for x86) do require:
* - ip for PERF_SAMPLE_IP
* - cs for user_mode() tests
- * - bp for callchains
- * - eflags, for future purposes, just in case
+ * - sp for PERF_SAMPLE_CALLCHAIN
+ * - eflags for MISC bits and CALLCHAIN (see: perf_hw_regs())
+ *
+ * NOTE: assumes @regs is otherwise already 0 filled; this is important for
+ * things like PERF_SAMPLE_REGS_INTR.
*/
static inline void perf_fetch_caller_regs(struct pt_regs *regs)
{
--
2.25.1

From: yangerkun <yangerkun(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 35487
CVE: NA
-----------------------------------------------
Now the error code from ext4_commit_super() will overwrite the EROFS
that was set in ext4_setup_super(). Actually, there is no need to call
ext4_commit_super() since we will return EROFS. Fix it by jumping to
'done' directly.
Fixes: c89128a00838 ("ext4: handle errors on ext4_commit_super")
Signed-off-by: yangerkun <yangerkun(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/super.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index cc057001961e..b10f96860dd8 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -2305,6 +2305,7 @@ static int ext4_setup_super(struct super_block *sb, struct ext4_super_block *es,
ext4_msg(sb, KERN_ERR, "revision level too high, "
"forcing read-only mode");
err = -EROFS;
+ goto done;
}
if (read_only)
goto done;
--
2.25.1
From: Cheng Jian <cj.chengjian(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 34546
CVE: NA
----------------------------------------
There are two problems with the implementation and use of
zap_locks().
Firstly, console_sem does not require reinit in zap_locks(),
because:
1). printk() itself does try_lock() and skips console handling
when the semaphore is not available.
2). panic() tries to push the messages later in console_flush_on_panic().
It ignores the semaphore. Also, most console drivers ignore their
internal locks because oops_in_progress is set by bust_spinlocks().
Secondly, the situation is more complicated when NMI is not used.
1). Non-stopped CPUs are in an unknown state, most likely in a busy loop.
Nobody knows whether printk() is repeatedly called in the loop.
If it were called, re-initializing any lock would cause a double
unlock and deadlock.
2). It would be possible to add some more hacks. One problem is that
there are two groups of users. Some prefer to risk a deadlock and
have a chance to see the messages. Others prefer to always
reach emergency_restart() and reboot the machine.
Fixes: b15f5676d00e ("printk/panic: Avoid deadlock in printk()")
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/panic.c | 25 +++++++++++++++++++++++++
kernel/printk/printk.c | 3 ---
2 files changed, 25 insertions(+), 3 deletions(-)
diff --git a/kernel/panic.c b/kernel/panic.c
index 9434f6c2da9e..ebdb58e761c0 100644
--- a/kernel/panic.c
+++ b/kernel/panic.c
@@ -238,7 +238,32 @@ void panic(const char *fmt, ...)
crash_smp_send_stop();
}
+ /*
+ * Zap console-related locks on NMI broadcast. If a crash is occurring,
+ * make sure we can't deadlock. And make sure that we print immediately.
+ *
+ * A deadlock caused by logbuf_lock can occur during panic:
+ * a) Panic CPU is running in non-NMI context;
+ * b) Panic CPU sends out shutdown IPI via NMI vector;
+ * c) One of the CPUs that we bring down via NMI vector held logbuf_lock;
+ * d) Panic CPU tries to hold logbuf_lock, then deadlock occurs.
+ *
+ * At present, we only try to solve this problem for arches with NMI,
+ * by reinitializing the lock; the situation is more complicated when
+ * NMI is not used.
+ * 1). Non-stopped CPUs are in an unknown state, most likely in a busy loop.
+ * Nobody knows whether printk() is repeatedly called in the loop.
+ * If it were called, re-initializing any lock would cause a double
+ * unlock and deadlock.
+ *
+ * 2). It would be possible to add some more hacks. One problem is that
+ * there are two groups of users. Some prefer to risk a deadlock and
+ * have a chance to see the messages. Others prefer to always
+ * reach emergency_restart() and reboot the machine.
+ */
+#ifdef CONFIG_X86
zap_locks();
+#endif
/*
* Run any panic handlers, including those that might need to
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 7225f5ef0f54..f67b6a1c8f74 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1592,9 +1592,6 @@ void zap_locks(void)
if (raw_spin_is_locked(&logbuf_lock)) {
debug_locks_off();
raw_spin_lock_init(&logbuf_lock);
-
- console_suspended = 1;
- sema_init(&console_sem, 1);
}
if (raw_spin_is_locked(&console_owner_lock)) {
--
2.25.1
From: Junxin Chen <chenjunxin1(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
----------------------------------
The device name is already checked in the user-space tool, and its
length is limited in the kernel driver. But when get_netdev_by_ifname()
returns NULL, the driver prints the device name without checking that it
is terminated.
This patch fixes the bug: when the device name's last character is not
'\0', the kernel driver returns -EINVAL to user space.
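The failure mode in miniature (a hypothetical user-space sketch, not the
hns3 code):

#include <stdio.h>
#include <string.h>

#define IFNAMSIZ 16

int main(void)
{
	char name[IFNAMSIZ];

	/* a name that fills the whole buffer leaves no room for '\0' */
	memset(name, 'A', sizeof(name));

	/* validate before any use, as the fix does */
	if (name[IFNAMSIZ - 1] != '\0') {
		fprintf(stderr, "the device name is invalid\n");
		return 1;
	}
	printf("%s\n", name);
	return 0;
}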
Signed-off-by: Junxin Chen <chenjunxin1(a)huawei.com>
Reviewed-by: Weiwei Deng <dengweiwei(a)huawei.com>
Reviewed-by: Shengzui You <youshengzui(a)huawei.com>
Reviewed-by: Zhaohui Zhong <zhongzhaohui(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_init.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_init.c b/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_init.c
index 7c084a6fbe18..1e012917e010 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_init.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_init.c
@@ -335,6 +335,11 @@ static long hns3_cae_k_unlocked_ioctl(struct file *pfile, unsigned int cmd,
* code yet, so we don't need lock.
*/
rtnl_lock();
+ if (nt_msg.device_name[IFNAMSIZ - 1] != '\0') {
+ pr_err("the device name is invalid.\n");
+ ret = -EINVAL;
+ goto out_invalid;
+ }
ret = hns3_cae_k_get_netdev_by_ifname(nt_msg.device_name, &nic_dev);
if (ret) {
pr_err("can not get the netdevice correctly\n");
--
2.25.1

From: YueHaibing <yuehaibing(a)huawei.com>
mainline inclusion
from mainline-5.4
commit 33902b4a4227877896dd9368ac10f4ca0d100de5
category: bugfix
issue: I1JZHT
bugzilla: NA
------------------------------
In nsim_fib_init(), if register_fib_notifier() fails, nsim_fib_net_ops
should be unregistered before returning.
In nsim_fib_exit(), unregister_fib_notifier() should be called before
nsim_fib_net_ops is unregistered, otherwise it may cause a use-after-free:
BUG: KASAN: use-after-free in nsim_fib_event_nb+0x342/0x570 [netdevsim]
Read of size 8 at addr ffff8881daaf4388 by task kworker/0:3/3499
CPU: 0 PID: 3499 Comm: kworker/0:3 Not tainted 5.3.0-rc7+ #30
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
Workqueue: ipv6_addrconf addrconf_dad_work [ipv6]
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0xa9/0x10e lib/dump_stack.c:113
print_address_description+0x65/0x380 mm/kasan/report.c:351
__kasan_report+0x149/0x18d mm/kasan/report.c:482
kasan_report+0xe/0x20 mm/kasan/common.c:618
nsim_fib_event_nb+0x342/0x570 [netdevsim]
notifier_call_chain+0x52/0xf0 kernel/notifier.c:95
__atomic_notifier_call_chain+0x78/0x140 kernel/notifier.c:185
call_fib_notifiers+0x30/0x60 net/core/fib_notifier.c:30
call_fib6_entry_notifiers+0xc1/0x100 [ipv6]
fib6_add+0x92e/0x1b10 [ipv6]
__ip6_ins_rt+0x40/0x60 [ipv6]
ip6_ins_rt+0x84/0xb0 [ipv6]
__ipv6_ifa_notify+0x4b6/0x550 [ipv6]
ipv6_ifa_notify+0xa5/0x180 [ipv6]
addrconf_dad_completed+0xca/0x640 [ipv6]
addrconf_dad_work+0x296/0x960 [ipv6]
process_one_work+0x5c0/0xc00 kernel/workqueue.c:2269
worker_thread+0x5c/0x670 kernel/workqueue.c:2415
kthread+0x1d7/0x200 kernel/kthread.c:255
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352
Allocated by task 3388:
save_stack+0x19/0x80 mm/kasan/common.c:69
set_track mm/kasan/common.c:77 [inline]
__kasan_kmalloc.constprop.3+0xa0/0xd0 mm/kasan/common.c:493
kmalloc include/linux/slab.h:557 [inline]
kzalloc include/linux/slab.h:748 [inline]
ops_init+0xa9/0x220 net/core/net_namespace.c:127
__register_pernet_operations net/core/net_namespace.c:1135 [inline]
register_pernet_operations+0x1d4/0x420 net/core/net_namespace.c:1212
register_pernet_subsys+0x24/0x40 net/core/net_namespace.c:1253
nsim_fib_init+0x12/0x70 [netdevsim]
veth_get_link_ksettings+0x2b/0x50 [veth]
do_one_initcall+0xd4/0x454 init/main.c:939
do_init_module+0xe0/0x330 kernel/module.c:3490
load_module+0x3c2f/0x4620 kernel/module.c:3841
__do_sys_finit_module+0x163/0x190 kernel/module.c:3931
do_syscall_64+0x72/0x2e0 arch/x86/entry/common.c:296
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Freed by task 3534:
save_stack+0x19/0x80 mm/kasan/common.c:69
set_track mm/kasan/common.c:77 [inline]
__kasan_slab_free+0x130/0x180 mm/kasan/common.c:455
slab_free_hook mm/slub.c:1423 [inline]
slab_free_freelist_hook mm/slub.c:1474 [inline]
slab_free mm/slub.c:3016 [inline]
kfree+0xe9/0x2d0 mm/slub.c:3957
ops_free net/core/net_namespace.c:151 [inline]
ops_free_list.part.7+0x156/0x220 net/core/net_namespace.c:184
ops_free_list net/core/net_namespace.c:182 [inline]
__unregister_pernet_operations net/core/net_namespace.c:1165 [inline]
unregister_pernet_operations+0x221/0x2a0 net/core/net_namespace.c:1224
unregister_pernet_subsys+0x1d/0x30 net/core/net_namespace.c:1271
nsim_fib_exit+0x11/0x20 [netdevsim]
nsim_module_exit+0x16/0x21 [netdevsim]
__do_sys_delete_module kernel/module.c:1015 [inline]
__se_sys_delete_module kernel/module.c:958 [inline]
__x64_sys_delete_module+0x244/0x330 kernel/module.c:958
do_syscall_64+0x72/0x2e0 arch/x86/entry/common.c:296
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Fixes: 59c84b9fcf42 ("netdevsim: Restore per-network namespace accounting for fib entries")
Signed-off-by: YueHaibing <yuehaibing(a)huawei.com>
Acked-by: Jakub Kicinski <jakub.kicinski(a)netronome.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: LiAichun <liaichun(a)huawei.com>
---
drivers/net/netdevsim/fib.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
index f61d094746c0..1a251f76d09b 100644
--- a/drivers/net/netdevsim/fib.c
+++ b/drivers/net/netdevsim/fib.c
@@ -241,8 +241,8 @@ static struct pernet_operations nsim_fib_net_ops = {
void nsim_fib_exit(void)
{
- unregister_pernet_subsys(&nsim_fib_net_ops);
unregister_fib_notifier(&nsim_fib_nb);
+ unregister_pernet_subsys(&nsim_fib_net_ops);
}
int nsim_fib_init(void)
@@ -258,6 +258,7 @@ int nsim_fib_init(void)
err = register_fib_notifier(&nsim_fib_nb, nsim_fib_dump_inconsistent);
if (err < 0) {
pr_err("Failed to register fib notifier\n");
+ unregister_pernet_subsys(&nsim_fib_net_ops);
goto err_out;
}
--
2.20.1

[PATCH] scsi: Fix kabi change due to adding offline_already member in struct scsi_device
by Yang Yingliang 12 Jun '20
From: Ye Bin <yebin10(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 34604
CVE: NA
-----------------------------------------------
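The fix below keeps the kernel ABI stable while still adding the flag: the
bool member is dropped from its old location, and an unsigned long takes
over one of the reserved slots, hidden from the kABI checker. A minimal
sketch of the pattern, assuming KABI_RESERVE(n) expands to an unsigned
long placeholder (which is what the hunks below rely on):

struct example {
	long a;
#ifndef __GENKSYMS__
	/* real builds: the new member reuses the reserved storage */
	unsigned long offline_already;
#else
	/* genksyms runs with __GENKSYMS__ defined and still sees the old
	 * layout, so the computed symbol CRCs are unchanged */
	KABI_RESERVE(1)
#endif
	KABI_RESERVE(2)
};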
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/scsi/scsi_lib.c | 4 ++--
include/scsi/scsi_device.h | 7 ++++---
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index a8043039f53e..bdb90f5c9eeb 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1438,7 +1438,7 @@ scsi_prep_state_check(struct scsi_device *sdev, struct request *req)
* before trying any recovery commands.
*/
if (!sdev->offline_already) {
- sdev->offline_already = true;
+ sdev->offline_already = 1;
sdev_printk(KERN_ERR, sdev,
"rejecting I/O to offline device\n");
}
@@ -2859,7 +2859,7 @@ scsi_device_set_state(struct scsi_device *sdev, enum scsi_device_state state)
break;
}
- sdev->offline_already = false;
+ sdev->offline_already = 0;
sdev->sdev_state = state;
return 0;
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index 52b255e868a9..550739a5ea96 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -201,8 +201,6 @@ struct scsi_device {
unsigned lun_in_cdb:1; /* Store LUN bits in CDB[1] */
unsigned unmap_limit_for_ws:1; /* Use the UNMAP limit for WRITE SAME */
- bool offline_already; /* Device offline message logged */
-
atomic_t disk_events_disable_depth; /* disable depth for disk events */
DECLARE_BITMAP(supported_events, SDEV_EVT_MAXBITS); /* supported events */
@@ -230,8 +228,11 @@ struct scsi_device {
struct mutex state_mutex;
enum scsi_device_state sdev_state;
struct task_struct *quiesced_by;
-
+#ifndef __GENKSYMS__
+ unsigned long offline_already; /* Device offline message logged */
+#else
KABI_RESERVE(1)
+#endif
KABI_RESERVE(2)
KABI_RESERVE(3)
KABI_RESERVE(4)
--
2.25.1

12 Jun '20
From: Takashi Iwai <tiwai(a)suse.de>
mainline inclusion
from mainline-v5.1-rc1
commit 3a55437141a1d287dead685b37fe240185144f15
category: bugfix
bugzilla: 34614
CVE: NA
-----------------------------------------------
This patch changes the parent pointer assignment of the snd_info_entry
object to be always non-NULL. More specifically, check the parent
argument in snd_info_create_module_entry() & co, and assign
snd_proc_root if NULL is passed there.
This assures that the proc object is always freed when the root is
freed, which avoids possible memory leaks. For example, some error paths
(e.g. an snd_info_register() error at snd_minor_info_init()) may leave
the snd_info_entry object behind although the proc file itself is freed.
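As a hypothetical usage sketch (the entry name "my-status" is made up;
the register/free helpers are the standard ALSA info API), passing a NULL
parent now attaches the entry under snd_proc_root, so it is released
together with the root even if an error path forgets it:

	struct snd_info_entry *entry;

	/* NULL parent is remapped to snd_proc_root by the constructor */
	entry = snd_info_create_module_entry(THIS_MODULE, "my-status", NULL);
	if (entry && snd_info_register(entry) < 0)
		snd_info_free_entry(entry);	/* normal error cleanup */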
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang(a)huawei.com>
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
sound/core/info.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/sound/core/info.c b/sound/core/info.c
index 679136fba730..3411bea54d66 100644
--- a/sound/core/info.c
+++ b/sound/core/info.c
@@ -744,7 +744,11 @@ struct snd_info_entry *snd_info_create_module_entry(struct module * module,
const char *name,
struct snd_info_entry *parent)
{
- struct snd_info_entry *entry = snd_info_create_entry(name, parent);
+ struct snd_info_entry *entry;
+
+ if (!parent)
+ parent = snd_proc_root;
+ entry = snd_info_create_entry(name, parent);
if (entry)
entry->module = module;
return entry;
@@ -765,7 +769,11 @@ struct snd_info_entry *snd_info_create_card_entry(struct snd_card *card,
const char *name,
struct snd_info_entry * parent)
{
- struct snd_info_entry *entry = snd_info_create_entry(name, parent);
+ struct snd_info_entry *entry;
+
+ if (!parent)
+ parent = card->proc_root;
+ entry = snd_info_create_entry(name, parent);
if (entry) {
entry->module = card->module;
entry->card = card;
--
2.25.1

[PATCH 01/12] jbd2: clean __jbd2_journal_abort_hard() and __journal_abort_soft()
by Yang Yingliang 12 Jun '20
From: "zhangyi (F)" <yi.zhang(a)huawei.com>
mainline inclusion
from mainline-5.6-rc1
commit 7f6225e446cc8dfa4c3c7959a4de3dd03ec277bf
category: bugfix
bugzilla: 34619
CVE: NA
---------------------------
__jbd2_journal_abort_hard() is no longer used, so we can now merge the
two functions __jbd2_journal_abort_hard() and __journal_abort_soft()
into jbd2_journal_abort() and remove them.
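A brief usage sketch (not from the patch itself) of the merged entry point
and the -ESHUTDOWN precedence implemented in the hunks below:

	jbd2_journal_abort(journal, -EIO);	/* sets JBD2_ABORT, records -EIO */
	jbd2_journal_abort(journal, -ESHUTDOWN);/* already aborted, but j_errno
						 * is upgraded to -ESHUTDOWN and
						 * written to the superblock */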
Signed-off-by: zhangyi (F) <yi.zhang(a)huawei.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Link: https://lore.kernel.org/r/20191204124614.45424-5-yi.zhang@huawei.com
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Reviewed-by: Zhang Xiaoxu <zhangxiaoxu5(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/jbd2/journal.c | 103 ++++++++++++++++++-------------------------
include/linux/jbd2.h | 1 -
2 files changed, 42 insertions(+), 62 deletions(-)
diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
index 1f085f0bf908..7301bb766172 100644
--- a/fs/jbd2/journal.c
+++ b/fs/jbd2/journal.c
@@ -101,7 +101,6 @@ EXPORT_SYMBOL(jbd2_journal_release_jbd_inode);
EXPORT_SYMBOL(jbd2_journal_begin_ordered_truncate);
EXPORT_SYMBOL(jbd2_inode_cache);
-static void __journal_abort_soft (journal_t *journal, int errno);
static int jbd2_journal_create_slab(size_t slab_size);
#ifdef CONFIG_JBD2_DEBUG
@@ -810,7 +809,7 @@ int jbd2_journal_bmap(journal_t *journal, unsigned long blocknr,
"at offset %lu on %s\n",
__func__, blocknr, journal->j_devname);
err = -EIO;
- __journal_abort_soft(journal, err);
+ jbd2_journal_abort(journal, err);
}
} else {
*retp = blocknr; /* +journal->j_blk_offset */
@@ -2108,64 +2107,6 @@ int jbd2_journal_wipe(journal_t *journal, int write)
return err;
}
-/*
- * Journal abort has very specific semantics, which we describe
- * for journal abort.
- *
- * Two internal functions, which provide abort to the jbd layer
- * itself are here.
- */
-
-/*
- * Quick version for internal journal use (doesn't lock the journal).
- * Aborts hard --- we mark the abort as occurred, but do _nothing_ else,
- * and don't attempt to make any other journal updates.
- */
-void __jbd2_journal_abort_hard(journal_t *journal)
-{
- transaction_t *transaction;
-
- if (journal->j_flags & JBD2_ABORT)
- return;
-
- printk(KERN_ERR "Aborting journal on device %s.\n",
- journal->j_devname);
-
- write_lock(&journal->j_state_lock);
- journal->j_flags |= JBD2_ABORT;
- transaction = journal->j_running_transaction;
- if (transaction)
- __jbd2_log_start_commit(journal, transaction->t_tid);
- write_unlock(&journal->j_state_lock);
-}
-
-/* Soft abort: record the abort error status in the journal superblock,
- * but don't do any other IO. */
-static void __journal_abort_soft (journal_t *journal, int errno)
-{
- int old_errno;
-
- write_lock(&journal->j_state_lock);
- old_errno = journal->j_errno;
- if (!journal->j_errno || errno == -ESHUTDOWN)
- journal->j_errno = errno;
-
- if (journal->j_flags & JBD2_ABORT) {
- write_unlock(&journal->j_state_lock);
- if (old_errno != -ESHUTDOWN && errno == -ESHUTDOWN)
- jbd2_journal_update_sb_errno(journal);
- return;
- }
- write_unlock(&journal->j_state_lock);
-
- __jbd2_journal_abort_hard(journal);
-
- jbd2_journal_update_sb_errno(journal);
- write_lock(&journal->j_state_lock);
- journal->j_flags |= JBD2_REC_ERR;
- write_unlock(&journal->j_state_lock);
-}
-
/**
* void jbd2_journal_abort () - Shutdown the journal immediately.
* @journal: the journal to shutdown.
@@ -2209,7 +2150,47 @@ static void __journal_abort_soft (journal_t *journal, int errno)
void jbd2_journal_abort(journal_t *journal, int errno)
{
- __journal_abort_soft(journal, errno);
+ transaction_t *transaction;
+
+ /*
+ * ESHUTDOWN always takes precedence because a file system check
+ * caused by any other journal abort error is not required after
+ * a shutdown triggered.
+ */
+ write_lock(&journal->j_state_lock);
+ if (journal->j_flags & JBD2_ABORT) {
+ int old_errno = journal->j_errno;
+
+ write_unlock(&journal->j_state_lock);
+ if (old_errno != -ESHUTDOWN && errno == -ESHUTDOWN) {
+ journal->j_errno = errno;
+ jbd2_journal_update_sb_errno(journal);
+ }
+ return;
+ }
+
+ /*
+ * Mark the abort as occurred and start current running transaction
+ * to release all journaled buffer.
+ */
+ pr_err("Aborting journal on device %s.\n", journal->j_devname);
+
+ journal->j_flags |= JBD2_ABORT;
+ journal->j_errno = errno;
+ transaction = journal->j_running_transaction;
+ if (transaction)
+ __jbd2_log_start_commit(journal, transaction->t_tid);
+ write_unlock(&journal->j_state_lock);
+
+ /*
+ * Record errno to the journal super block, so that fsck and jbd2
+ * layer could realise that a filesystem check is needed.
+ */
+ jbd2_journal_update_sb_errno(journal);
+
+ write_lock(&journal->j_state_lock);
+ journal->j_flags |= JBD2_REC_ERR;
+ write_unlock(&journal->j_state_lock);
}
/**
diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
index 7dd9d25a9cbb..12b935c9ec1e 100644
--- a/include/linux/jbd2.h
+++ b/include/linux/jbd2.h
@@ -1427,7 +1427,6 @@ extern int jbd2_journal_skip_recovery (journal_t *);
extern void jbd2_journal_update_sb_errno(journal_t *);
extern int jbd2_journal_update_sb_log_tail (journal_t *, tid_t,
unsigned long, int);
-extern void __jbd2_journal_abort_hard (journal_t *);
extern void jbd2_journal_abort (journal_t *, int);
extern int jbd2_journal_errno (journal_t *);
extern void jbd2_journal_ack_err (journal_t *);
--
2.25.1

Minutes of the meeting: 【Meeting Notice】Kernel & testing sync between Linaro and openEuler Time: 2020-06-10 16:00-17:30
by Guohanjun (Hanjun Guo) 11 Jun '20
Attendees (I saw more people join later, but can't recall them all now):
(attendee list attached as an image)
Action items:
* Vincent to share the JIRA link for thermal pressure
* Vincent to share the JIRA link for scheduler improvements
* Anmar to work on the opening of Power and Performance Project
* Hanjun/Jonathan to share some plan for openEuler kernel in the coming 6 or 12 months
* Anmar to create the open mailing list for LKQ project, so that Fengguang and the team can participate
And see the attached Linaro slides, thanks!
From: Meeting Book via Kernel [mailto:kernel@openeuler.org]
Sent: June 3, 2020 16:01
To: kernel(a)openeuler.org
Subject: 【Meeting Notice】Kernel & testing sync between Linaro and openEuler Time: 2020-06-10 16:00-17:30
Topic
Kernel & testing sync between Linaro and openEuler
Time
2020-06-10 16:00-17:30((UTC+08:00)Beijing)
Join Conference
Join (External): https://welink-meeting.zoom.us/j/834726983
Meeting ID
834 726 983
Convener
Hanjun Guo (郭寒军)
Agenda
* Linux Kernel Update by Linaro - 15 mins
* Linux Kernel Functional Testing (LKFT) introduction - 15mins
* Brief introduction for openEuler and the kernel - 15mins
* Brief introduction for Crystal testing system - 15 mins
* Open discussions - about 30 mins
Tips:
Dial the access number to join the conference: http://app.huawei.com/eshare/voiceMeetings
Note: Only some countries and regions are supported.
Please click the conference link and download the Zoom client as prompted to join the conference, and upgrade to the latest version of Zoom.
Commit dba0bc7b76dcf ("irqchip/gic-v3-its: add ability to save/restore ITS
state") added support for saving/restoring the ITS suspend state.
However, ITS_FLAGS_SAVE_SUSPEND_STATE needs to be set when we want to
resend MAPC. On GIC500, GITS_TYPER.HCC is non-zero, so it can be used to
judge whether we can resend MAPC after the firmware restores the state;
but when GITS_TYPER.HCC is zero, the flag never gets set, which leads to
a failure to power off the GIC logic in suspend.
Signed-off-by: Hongbo Yao <yaohongbo(a)huawei.com>
---
drivers/irqchip/irq-gic-v3-its.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 881466163c99..6ea2809b8680 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -3733,8 +3733,7 @@ static int __init its_probe_one(struct resource *res,
ctlr |= GITS_CTLR_ImDe;
writel_relaxed(ctlr, its->base + GITS_CTLR);
- if (GITS_TYPER_HCC(typer))
- its->flags |= ITS_FLAGS_SAVE_SUSPEND_STATE;
+ its->flags |= ITS_FLAGS_SAVE_SUSPEND_STATE;
err = its_init_domain(handle, its);
if (err)
--
2.20.1
From: LiAichun <liaichun(a)huawei.com>
euleros inclusion
commit NA
category: bugfix
bugzilla: NA
CVE: NA
issue: I1JZHT
-------------------------------------------------
From: YueHaibing <>
Subject: [PATCH] netdevsim: Fix error handling in nsim_fib_init and nsim_fib_exit
Date: Mon, 8 Jun 2020 15:13:40 +0800
In nsim_fib_init(), if register_fib_notifier() fails, nsim_fib_net_ops
should be unregistered before returning.
In nsim_fib_exit(), unregister_fib_notifier() should be called before
nsim_fib_net_ops is unregistered; otherwise a use-after-free may occur.
More detailed information can be found at:
https://lkml.org/lkml/2019/10/11/216
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Fixes: 59c84b9fcf42 ("netdevsim: Restore per-network namespace accounting for fib entries")
Signed-off-by: YueHaibing <yuehaibing(a)huawei.com>
Signed-off-by: LiAichun <liaichun(a)huawei.com>
---
drivers/net/netdevsim/fib.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
index f61d094..d5c01b8 100644
--- a/drivers/net/netdevsim/fib.c
+++ b/drivers/net/netdevsim/fib.c
@@ -241,8 +241,8 @@ static int __net_init nsim_fib_netns_init(struct net *net)
void nsim_fib_exit(void)
{
- unregister_pernet_subsys(&nsim_fib_net_ops);
unregister_fib_notifier(&nsim_fib_nb);
+ unregister_pernet_subsys(&nsim_fib_net_ops);
}
int nsim_fib_init(void)
@@ -257,6 +257,7 @@ int nsim_fib_init(void)
err = register_fib_notifier(&nsim_fib_nb, nsim_fib_dump_inconsistent);
if (err < 0) {
+ unregister_pernet_subsys(&nsim_fib_net_ops);
pr_err("Failed to register fib notifier\n");
goto err_out;
}
--
1.8.3.1
From: LiAichun <liaichun(a)huawei.com>
euleros inclusion
commit NA
category: bugfix
bugzilla: NA
CVE: NA
Fix issue: gitee.com/openeuler/kernel/issues/I1JZHT
-------------------------------------------------
From: YueHaibing <>
Subject: [PATCH] netdevsim: Fix error handling in nsim_fib_init and nsim_fib_exit
Date: Mon, 8 Jun 2020 15:13:40 +0800
In nsim_fib_init(), if register_fib_notifier() fails, nsim_fib_net_ops
should be unregistered before returning.
In nsim_fib_exit(), unregister_fib_notifier() should be called before
nsim_fib_net_ops is unregistered; otherwise a use-after-free may occur.
More detailed information can be found at:
https://lkml.org/lkml/2019/10/11/216
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Fixes: 59c84b9fcf42 ("netdevsim: Restore per-network namespace accounting for fib entries")
Signed-off-by: YueHaibing <yuehaibing(a)huawei.com>
Signed-off-by: LiAichun <liaichun(a)huawei.com>
---
drivers/net/netdevsim/fib.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
index f61d094..d5c01b8 100644
--- a/drivers/net/netdevsim/fib.c
+++ b/drivers/net/netdevsim/fib.c
@@ -241,8 +241,8 @@ static int __net_init nsim_fib_netns_init(struct net *net)
void nsim_fib_exit(void)
{
- unregister_pernet_subsys(&nsim_fib_net_ops);
unregister_fib_notifier(&nsim_fib_nb);
+ unregister_pernet_subsys(&nsim_fib_net_ops);
}
int nsim_fib_init(void)
@@ -257,6 +257,7 @@ int nsim_fib_init(void)
err = register_fib_notifier(&nsim_fib_nb, nsim_fib_dump_inconsistent);
if (err < 0) {
+ unregister_pernet_subsys(&nsim_fib_net_ops);
pr_err("Failed to register fib notifier\n");
goto err_out;
}
--
1.8.3.1
From: LiAichun <liaichun(a)huawei.com>
mainline inclusion
from mainline-4.19.90-2003.4.0
commit a91b7a28e76a702bd81357bf41a45dab6486716b
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
From: YueHaibing <>
Subject: [PATCH] netdevsim: Fix error handling in nsim_fib_init and nsim_fib_exit
Date: Mon, 8 Jun 2020 15:13:40 +0800
In nsim_fib_init(), if register_fib_notifier() fails, nsim_fib_net_ops
should be unregistered before returning.
In nsim_fib_exit(), unregister_fib_notifier() should be called before
nsim_fib_net_ops is unregistered; otherwise a use-after-free may occur.
More detailed information can be found at:
https://lkml.org/lkml/2019/10/11/216
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Fixes: 59c84b9fcf42 ("netdevsim: Restore per-network namespace accounting for fib entries")
Signed-off-by: YueHaibing <yuehaibing(a)huawei.com>
Signed-off-by: LiAichun <liaichun(a)huawei.com>
---
drivers/net/netdevsim/fib.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
index f61d094..d5c01b8 100644
--- a/drivers/net/netdevsim/fib.c
+++ b/drivers/net/netdevsim/fib.c
@@ -241,8 +241,8 @@ static int __net_init nsim_fib_netns_init(struct net *net)
void nsim_fib_exit(void)
{
- unregister_pernet_subsys(&nsim_fib_net_ops);
unregister_fib_notifier(&nsim_fib_nb);
+ unregister_pernet_subsys(&nsim_fib_net_ops);
}
int nsim_fib_init(void)
@@ -257,6 +257,7 @@ int nsim_fib_init(void)
err = register_fib_notifier(&nsim_fib_nb, nsim_fib_dump_inconsistent);
if (err < 0) {
+ unregister_pernet_subsys(&nsim_fib_net_ops);
pr_err("Failed to register fib notifier\n");
goto err_out;
}
--
1.8.3.1
In nsim_fib_init(), if register_fib_notifier() fails, nsim_fib_net_ops
should be unregistered before returning.
In nsim_fib_exit(), unregister_fib_notifier() should be called before
nsim_fib_net_ops is unregistered; otherwise a use-after-free may occur.
More detailed information can be found at:
https://lkml.org/lkml/2019/10/11/216
kernel inclusion
from: kernel-4.19
commit: 2dca76fa95bf03c15bdcacd7cfa8f974c1a799f6
category: bugfix
bugzilla: NA
CVE: NA
Signed-off-by: liaichun <513565428(a)qq.com>
DESC: fix netdevsim resource leak
---
drivers/net/netdevsim/fib.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
index f61d094..d5c01b8 100644
--- a/drivers/net/netdevsim/fib.c
+++ b/drivers/net/netdevsim/fib.c
@@ -241,8 +241,8 @@ static int __net_init nsim_fib_netns_init(struct net *net)
void nsim_fib_exit(void)
{
- unregister_pernet_subsys(&nsim_fib_net_ops);
unregister_fib_notifier(&nsim_fib_nb);
+ unregister_pernet_subsys(&nsim_fib_net_ops);
}
int nsim_fib_init(void)
@@ -257,6 +257,7 @@ int nsim_fib_init(void)
err = register_fib_notifier(&nsim_fib_nb, nsim_fib_dump_inconsistent);
if (err < 0) {
+ unregister_pernet_subsys(&nsim_fib_net_ops);
pr_err("Failed to register fib notifier\n");
goto err_out;
}
--
1.8.3.1
In nsim_fib_init(), if register_fib_notifier() fails, nsim_fib_net_ops
should be unregistered before returning.
In nsim_fib_exit(), unregister_fib_notifier() should be called before
nsim_fib_net_ops is unregistered; otherwise a use-after-free may occur.
More detailed information can be found at:
https://lkml.org/lkml/2019/10/11/216
Signed-off-by: liaichun <513565428(a)qq.com>
Tested-by: liaichun <513565428(a)qq.com>
Type: bugfix
ID: NA
SUG: restart
DESC: fix netdevsim resource leak
---
drivers/net/netdevsim/fib.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
index f61d094..d5c01b8 100644
--- a/drivers/net/netdevsim/fib.c
+++ b/drivers/net/netdevsim/fib.c
@@ -241,8 +241,8 @@ static int __net_init nsim_fib_netns_init(struct net *net)
void nsim_fib_exit(void)
{
- unregister_pernet_subsys(&nsim_fib_net_ops);
unregister_fib_notifier(&nsim_fib_nb);
+ unregister_pernet_subsys(&nsim_fib_net_ops);
}
int nsim_fib_init(void)
@@ -257,6 +257,7 @@ int nsim_fib_init(void)
err = register_fib_notifier(&nsim_fib_nb, nsim_fib_dump_inconsistent);
if (err < 0) {
+ unregister_pernet_subsys(&nsim_fib_net_ops);
pr_err("Failed to register fib notifier\n");
goto err_out;
}
--
1.8.3.1
diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
index f61d094..d5c01b8 100644
--- a/drivers/net/netdevsim/fib.c
+++ b/drivers/net/netdevsim/fib.c
@@ -241,8 +241,8 @@ static int __net_init nsim_fib_netns_init(struct net *net)
void nsim_fib_exit(void)
{
- unregister_pernet_subsys(&nsim_fib_net_ops);
unregister_fib_notifier(&nsim_fib_nb);
+ unregister_pernet_subsys(&nsim_fib_net_ops);
}
int nsim_fib_init(void)
@@ -257,6 +257,7 @@ int nsim_fib_init(void)
err = register_fib_notifier(&nsim_fib_nb, nsim_fib_dump_inconsistent);
if (err < 0) {
+ unregister_pernet_subsys(&nsim_fib_net_ops);
pr_err("Failed to register fib notifier\n");
goto err_out;
}
--
1.8.3.1
From: liaichun <liaichun(a)huawei.com>
diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c
index f61d094..d5c01b8 100644
--- a/drivers/net/netdevsim/fib.c
+++ b/drivers/net/netdevsim/fib.c
@@ -241,8 +241,8 @@ static int __net_init nsim_fib_netns_init(struct net *net)
void nsim_fib_exit(void)
{
- unregister_pernet_subsys(&nsim_fib_net_ops);
unregister_fib_notifier(&nsim_fib_nb);
+ unregister_pernet_subsys(&nsim_fib_net_ops);
}
int nsim_fib_init(void)
@@ -257,6 +257,7 @@ int nsim_fib_init(void)
err = register_fib_notifier(&nsim_fib_nb, nsim_fib_dump_inconsistent);
if (err < 0) {
+ unregister_pernet_subsys(&nsim_fib_net_ops);
pr_err("Failed to register fib notifier\n");
goto err_out;
}
--
1.8.3.1

[PATCH v2] scsi: hisi_sas: directly trigger error handling of scsi instead of waiting for timeout
by Luo Jiaxing 05 Jun '20
We used the timeout mechanism of the SCSI layer to trigger error handling
for some I/Os; this type of abnormal I/O requires the driver to enter
error handling to clear the residue in the hardware.
But the timeout mechanism makes error handling take longer: some threads
need to wait for tens of seconds after an I/O fails before error handling
starts, so we now trigger error handling directly.
Signed-off-by: Luo Jiaxing <luojiaxing(a)huawei.com>
---
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c | 4 +++-
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c | 4 +++-
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 4 +++-
3 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
index 031091e..49ea3b7 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
@@ -1263,8 +1263,10 @@ static int slot_complete_v1_hw(struct hisi_hba *hisi_hba,
!(cmplt_hdr_data & CMPLT_HDR_RSPNS_XFRD_MSK)) {
slot_err_v1_hw(hisi_hba, task, slot);
- if (unlikely(slot->abort))
+ if (unlikely(slot->abort)) {
+ sas_task_abort(task);
return ts->stat;
+ }
goto out;
}
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
index 6568e20..8ebddf5 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
@@ -2406,8 +2406,10 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot)
error_info[0], error_info[1],
error_info[2], error_info[3]);
- if (unlikely(slot->abort))
+ if (unlikely(slot->abort)) {
+ sas_task_abort(task);
return ts->stat;
+ }
goto out;
}
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index 5b80856..7fd067c 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -2408,8 +2408,10 @@ slot_complete_v3_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot)
dev_info(dev, "data underflow without sense, rsp_code:0x%x, rc:%d.\n",
iu->resp_data[0], rc);
}
- if (unlikely(slot->abort))
+ if (unlikely(slot->abort)) {
+ sas_task_abort(task);
return ts->stat;
+ }
goto out;
}
--
2.7.4

[PATCH v1 hulk-4.19-next] scsi: hisi_sas: directly trigger error handling of scsi instead of waiting for timeout
by Luo Jiaxing 05 Jun '20
driver inclusion
category: bugfix
bugzilla: NA
DTS: NA
-----------------------------------------------------------------------
We used the timeout mechanism of the SCSI layer to trigger error handling
for some I/Os; this type of abnormal I/O requires the driver to enter
error handling to clear the residue in the hardware.
But the timeout mechanism makes error handling take longer: some threads
need to wait for tens of seconds after an I/O fails before error handling
starts, so we now trigger error handling directly.
Signed-off-by: Luo Jiaxing <luojiaxing(a)huawei.com>
---
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c | 4 +++-
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c | 4 +++-
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 4 +++-
3 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
index 031091e..49ea3b7 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
@@ -1263,8 +1263,10 @@ static int slot_complete_v1_hw(struct hisi_hba *hisi_hba,
!(cmplt_hdr_data & CMPLT_HDR_RSPNS_XFRD_MSK)) {
slot_err_v1_hw(hisi_hba, task, slot);
- if (unlikely(slot->abort))
+ if (unlikely(slot->abort)) {
+ sas_task_abort(task);
return ts->stat;
+ }
goto out;
}
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
index 6568e20..8ebddf5 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
@@ -2406,8 +2406,10 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot)
error_info[0], error_info[1],
error_info[2], error_info[3]);
- if (unlikely(slot->abort))
+ if (unlikely(slot->abort)) {
+ sas_task_abort(task);
return ts->stat;
+ }
goto out;
}
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index 5b80856..7fd067c 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -2408,8 +2408,10 @@ slot_complete_v3_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot)
dev_info(dev, "data underflow without sense, rsp_code:0x%x, rc:%d.\n",
iu->resp_data[0], rc);
}
- if (unlikely(slot->abort))
+ if (unlikely(slot->abort)) {
+ sas_task_abort(task);
return ts->stat;
+ }
goto out;
}
--
2.7.4

【Meeting Notice】Kernel & testing sync between Linaro and openEuler Time: 2020-06-10 16:00-17:30
by Meeting Book 03 Jun '20

【Meeting Notice】Kernel & testing sync between Linaro and openEuler Time: 2020-06-10 16:00-17:30
by Meeting Book 03 Jun '20
From: Florian Fainelli <f.fainelli(a)gmail.com>
commit 86f8b1c01a0a537a73d2996615133be63cdf75db upstream.
Prior to 1d27732f411d ("net: dsa: setup and teardown ports"), we would
not treat failures to set up a user port as fatal, but after this
commit we would, which is a regression for some systems where interfaces
may be declared in the Device Tree but the underlying hardware may not
be present (pluggable daughter cards, for instance).
Fixes: 1d27732f411d ("net: dsa: setup and teardown ports")
Signed-off-by: Florian Fainelli <f.fainelli(a)gmail.com>
Reviewed-by: Andrew Lunn <andrew(a)lunn.ch>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/dsa/dsa2.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
index 46ae4de..b036c55 100644
--- a/net/dsa/dsa2.c
+++ b/net/dsa/dsa2.c
@@ -412,7 +412,7 @@ static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
err = dsa_switch_setup(ds);
if (err)
- return err;
+ continue;
for (port = 0; port < ds->num_ports; port++) {
dp = &ds->ports[port];
--
1.8.3

[PATCH 1/9] net/hinic: Fix the firmware compatibility bug in the MAC reuse scenario
by Yang Yingliang 01 Jun '20
From: Chiqijun <chiqijun(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
Drivers and firmware 2.3.0.1 and later add the MAC reuse feature. When
the driver works with firmware version 2.3.0.0 or earlier, using the
ip link command to configure the MAC and VLAN for a VF causes VF
network anomalies.
To solve this problem, the driver obtains the function capability from
the firmware when loading and determines whether MAC reuse is supported,
handling the two cases differently.
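A condensed sketch of the gating pattern, pieced together from the hunks
below (error handling trimmed):

	/* PF load path: query and cache the firmware capability word */
	err = hinic_get_fw_support_func(hwdev);
	if (err)
		return err;	/* fw_support_func_flag now cached in hwdev */

	/* VF MAC programming: encode the VLAN into the MAC entry only when
	 * the firmware advertises MAC reuse; otherwise set a plain MAC */
	if (FW_SUPPORT_MAC_REUSE_FUNC(hwdev)) {
		vlan_id = vf_info->pf_vlan;
		if (vlan_id)
			vlan_id |= HINIC_ADD_VLAN_IN_MAC;
	} else {
		vlan_id = 0;
	}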
Signed-off-by: Chiqijun <chiqijun(a)huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic/hinic_hwdev.h | 1 +
.../ethernet/huawei/hinic/hinic_mgmt_interface.h | 11 ++++
drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c | 45 +++++++++++++++--
drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h | 4 ++
drivers/net/ethernet/huawei/hinic/hinic_nic_io.c | 10 +++-
drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h | 2 +
drivers/net/ethernet/huawei/hinic/hinic_sriov.c | 59 ++++++++++++++--------
7 files changed, 106 insertions(+), 26 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h
index a096f91..2a81867 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.h
@@ -319,6 +319,7 @@ struct hinic_hwdev {
struct hinic_board_info board_info;
#define MGMT_VERSION_MAX_LEN 32
u8 mgmt_ver[MGMT_VERSION_MAX_LEN];
+ u64 fw_support_func_flag;
};
int hinic_init_comm_ch(struct hinic_hwdev *hwdev);
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_mgmt_interface.h b/drivers/net/ethernet/huawei/hinic/hinic_mgmt_interface.h
index 3c3be6e..8673cf5 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_mgmt_interface.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_mgmt_interface.h
@@ -690,6 +690,15 @@ struct hinic_port_rt_cmd {
u8 rsvd1[6];
};
+struct fw_support_func {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u64 flag;
+ u64 rsvd;
+};
+
struct hinic_vf_dcb_state {
u8 status;
u8 version;
@@ -741,6 +750,8 @@ struct hinic_set_link_follow {
int hinic_get_base_qpn(void *hwdev, u16 *global_qpn);
+int hinic_get_fw_support_func(void *hwdev);
+
int hinic_vf_func_init(struct hinic_hwdev *hwdev);
void hinic_vf_func_free(struct hinic_hwdev *hwdev);
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
index f9e16ec..7dfedbd 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
@@ -201,6 +201,34 @@ int hinic_get_base_qpn(void *hwdev, u16 *global_qpn)
return 0;
}
+int hinic_get_fw_support_func(void *hwdev)
+{
+ struct fw_support_func support_flag = {0};
+ struct hinic_hwdev *dev = hwdev;
+ u16 out_size = sizeof(support_flag);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ err = hinic_msg_to_mgmt_sync(hwdev, HINIC_MOD_L2NIC,
+ HINIC_PORT_CMD_GET_FW_SUPPORT_FLAG,
+ &support_flag, sizeof(support_flag),
+ &support_flag, &out_size, 0);
+ if (support_flag.status == HINIC_MGMT_CMD_UNSUPPORTED) {
+ nic_info(dev->dev_hdl, "Current firmware doesn't support to get function capability\n");
+ support_flag.flag = 0;
+ } else if (support_flag.status || err || !out_size) {
+ nic_err(dev->dev_hdl, "Failed to get function capability, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, support_flag.status, out_size);
+ return -EFAULT;
+ }
+
+ dev->fw_support_func_flag = support_flag.flag;
+
+ return 0;
+}
+
#define HINIC_ADD_VLAN_IN_MAC 0x8000
#define HINIC_VLAN_ID_MASK 0x7FFF
@@ -352,6 +380,11 @@ int hinic_update_mac_vlan(void *hwdev, u16 old_vlan, u16 new_vlan, int vf_id)
if (!vf_info->pf_set_mac)
return 0;
+ if (!FW_SUPPORT_MAC_REUSE_FUNC(dev)) {
+ nic_info(dev->dev_hdl, "Current firmware doesn't support mac reuse\n");
+ return 0;
+ }
+
func_id = hinic_glb_pf_vf_offset(dev) + (u16)vf_id;
vlan_id = old_vlan;
if (vlan_id)
@@ -2562,9 +2595,14 @@ static int hinic_init_vf_config(struct hinic_hwdev *hwdev, u16 vf_id)
vf_info = hwdev->nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
if (vf_info->pf_set_mac) {
func_id = hinic_glb_pf_vf_offset(hwdev) + vf_id;
- vlan_id = vf_info->pf_vlan;
- if (vlan_id)
- vlan_id |= HINIC_ADD_VLAN_IN_MAC;
+ if (FW_SUPPORT_MAC_REUSE_FUNC(hwdev)) {
+ vlan_id = vf_info->pf_vlan;
+ if (vlan_id)
+ vlan_id |= HINIC_ADD_VLAN_IN_MAC;
+ } else {
+ vlan_id = 0;
+ }
+
err = hinic_set_mac(hwdev, vf_info->vf_mac_addr, vlan_id,
func_id);
if (err) {
@@ -4065,3 +4103,4 @@ int hinic_get_sfp_type(void *hwdev, u8 *data0, u8 *data1)
return 0;
}
+
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
index 427d469..e0979df 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
@@ -19,6 +19,10 @@
#define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1)
#define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1)
+#define FW_SUPPORT_MAC_REUSE 0x1
+#define FW_SUPPORT_MAC_REUSE_FUNC(hwdev) \
+ ((hwdev)->fw_support_func_flag & FW_SUPPORT_MAC_REUSE)
+
#define HINIC_VLAN_PRIORITY_SHIFT 13
#define HINIC_RSS_INDIR_SIZE 256
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c b/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c
index 1cb4e26..f2fb0bc 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c
@@ -831,9 +831,17 @@ int hinic_init_nic_hwdev(void *hwdev, u16 rx_buff_len)
}
/* VFs don't set port routine command report */
- if (hinic_func_type(dev) != TYPE_VF)
+ if (hinic_func_type(dev) != TYPE_VF) {
+ /* Get the fw support mac reuse flag */
+ err = hinic_get_fw_support_func(hwdev);
+ if (err) {
+ nic_err(dev->dev_hdl, "Failed to get function capability\n");
+ return err;
+ }
+
/* Inform mgmt to send sfp's information to driver */
err = hinic_set_port_routine_cmd_report(hwdev, true);
+ }
return err;
}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h b/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h
index 46464b1f..d428f48 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h
@@ -146,6 +146,8 @@ enum hinic_port_cmd {
HINIC_PORT_CMD_SET_PORT_LINK_STATUS = 0x76,
HINIC_PORT_CMD_SET_CGE_PAUSE_TIME_CFG = 0x77,
+ HINIC_PORT_CMD_GET_FW_SUPPORT_FLAG = 0x79,
+
HINIC_PORT_CMD_SET_PORT_REPORT = 0x7B,
HINIC_PORT_CMD_LINK_STATUS_REPORT = 0xa0,
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_sriov.c b/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
index 37d0116..73ab2d7 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_sriov.c
@@ -266,6 +266,42 @@ int hinic_ndo_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
/*lint -save -e574 -e734*/
#ifdef IFLA_VF_MAX
+static int set_hw_vf_vlan(struct hinic_sriov_info *sriov_info,
+ u16 cur_vlanprio, int vf, u16 vlan, u8 qos)
+{
+ int err = 0;
+ u16 old_vlan = cur_vlanprio & VLAN_VID_MASK;
+
+ if (vlan || qos) {
+ if (cur_vlanprio) {
+ err = hinic_kill_vf_vlan(sriov_info->hwdev,
+ OS_VF_ID_TO_HW(vf));
+ if (err) {
+ nic_err(&sriov_info->pdev->dev, "Failed to delete vf %d old vlan %d\n",
+ vf, old_vlan);
+ return err;
+ }
+ }
+ err = hinic_add_vf_vlan(sriov_info->hwdev,
+ OS_VF_ID_TO_HW(vf), vlan, qos);
+ if (err) {
+ nic_err(&sriov_info->pdev->dev, "Failed to add vf %d new vlan %d\n",
+ vf, vlan);
+ return err;
+ }
+ } else {
+ err = hinic_kill_vf_vlan(sriov_info->hwdev, OS_VF_ID_TO_HW(vf));
+ if (err) {
+ nic_err(&sriov_info->pdev->dev, "Failed to delete vf %d vlan %d\n",
+ vf, old_vlan);
+ return err;
+ }
+ }
+
+ return hinic_update_mac_vlan(sriov_info->hwdev, old_vlan, vlan,
+ OS_VF_ID_TO_HW(vf));
+}
+
#ifdef IFLA_VF_VLAN_INFO_MAX
int hinic_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos,
__be16 vlan_proto)
@@ -276,7 +312,6 @@ int hinic_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos)
struct hinic_nic_dev *adapter = netdev_priv(netdev);
struct hinic_sriov_info *sriov_info;
u16 vlanprio, cur_vlanprio;
- int err = 0;
if (!FUNC_SUPPORT_SET_VF_MAC_VLAN(adapter->hwdev)) {
nicif_err(adapter, drv, netdev,
@@ -298,27 +333,7 @@ int hinic_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos)
if (vlanprio == cur_vlanprio)
return 0;
- if (vlan || qos) {
- if (cur_vlanprio)
- err = hinic_kill_vf_vlan(sriov_info->hwdev,
- OS_VF_ID_TO_HW(vf));
- if (err)
- goto out;
- err = hinic_add_vf_vlan(sriov_info->hwdev, OS_VF_ID_TO_HW(vf),
- vlan, qos);
- } else {
- err = hinic_kill_vf_vlan(sriov_info->hwdev, OS_VF_ID_TO_HW(vf));
- }
-
- if (err)
- return err;
-
- err = hinic_update_mac_vlan(sriov_info->hwdev,
- cur_vlanprio & VLAN_VID_MASK, vlan,
- OS_VF_ID_TO_HW(vf));
-
-out:
- return err;
+ return set_hw_vf_vlan(sriov_info, cur_vlanprio, vf, vlan, qos);
}
#endif
--
1.8.3

[PATCH] irqchip/gic-v3-its: Probe ITS page size for all GITS_BASERn registers
by Yang Yingliang 01 Jun '20
From: Marc Zyngier <maz(a)kernel.org>
mainline inclusion
from mainline-v5.6-rc4
commit d5df9dc96eb7423d3f742b13d5e1e479ff795eaa
category: bugfix
bugzilla: NA
CVE: NA
---------------------------
The GICv3 ITS driver assumes that once it has latched on a page size for
a given BASER register, it can use the same page size as the maximum
page size for all subsequent BASER registers.
Although it worked so far, nothing in the architecture guarantees this,
and Nianyao Tang hit this problem on some undisclosed implementation.
Let's bite the bullet and probe the supported page size on all BASER
registers before starting to populate the tables. This simplifies the
setup a bit, at the expense of a few additional MMIO accesses.
Signed-off-by: Marc Zyngier <maz(a)kernel.org>
Reported-by: Nianyao Tang <tangnianyao(a)huawei.com>
Tested-by: Nianyao Tang <tangnianyao(a)huawei.com>
Link: https://lore.kernel.org/r/1584089195-63897-1-git-send-email-zhangshaokun@hi…
Signed-off-by: Hongbo Yao <yaohongbo(a)huawei.com>
conflicts:
drivers/irqchip/irq-gic-v3-its.c
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/irqchip/irq-gic-v3-its.c | 101 ++++++++++++++++++++++++++-------------
1 file changed, 67 insertions(+), 34 deletions(-)
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 59c6c0d..bfedab2 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -17,6 +17,7 @@
#include <linux/acpi.h>
#include <linux/acpi_iort.h>
+#include <linux/bitfield.h>
#include <linux/bitmap.h>
#include <linux/cpu.h>
#include <linux/crash_dump.h>
@@ -1768,17 +1769,16 @@ static void its_write_baser(struct its_node *its, struct its_baser *baser,
}
static int its_setup_baser(struct its_node *its, struct its_baser *baser,
- u64 cache, u64 shr, u32 psz, u32 order,
- bool indirect)
+ u64 cache, u64 shr, u32 order, bool indirect)
{
u64 val = its_read_baser(its, baser);
u64 esz = GITS_BASER_ENTRY_SIZE(val);
u64 type = GITS_BASER_TYPE(val);
u64 baser_phys, tmp;
- u32 alloc_pages;
+ u32 alloc_pages, psz;
void *base;
-retry_alloc_baser:
+ psz = baser->psz;
alloc_pages = (PAGE_ORDER_TO_SIZE(order) / psz);
if (alloc_pages > GITS_BASER_PAGES_MAX) {
pr_warn("ITS@%pa: %s too large, reduce ITS pages %u->%u\n",
@@ -1851,25 +1851,6 @@ static int its_setup_baser(struct its_node *its, struct its_baser *baser,
goto retry_baser;
}
- if ((val ^ tmp) & GITS_BASER_PAGE_SIZE_MASK) {
- /*
- * Page size didn't stick. Let's try a smaller
- * size and retry. If we reach 4K, then
- * something is horribly wrong...
- */
- free_pages((unsigned long)base, order);
- baser->base = NULL;
-
- switch (psz) {
- case SZ_16K:
- psz = SZ_4K;
- goto retry_alloc_baser;
- case SZ_64K:
- psz = SZ_16K;
- goto retry_alloc_baser;
- }
- }
-
if (val != tmp) {
pr_err("ITS@%pa: %s doesn't stick: %llx %llx\n",
&its->phys_base, its_base_type_string[type],
@@ -1895,13 +1876,14 @@ static int its_setup_baser(struct its_node *its, struct its_baser *baser,
static bool its_parse_indirect_baser(struct its_node *its,
struct its_baser *baser,
- u32 psz, u32 *order, u32 ids)
+ u32 *order, u32 ids)
{
u64 tmp = its_read_baser(its, baser);
u64 type = GITS_BASER_TYPE(tmp);
u64 esz = GITS_BASER_ENTRY_SIZE(tmp);
u64 val = GITS_BASER_InnerShareable | GITS_BASER_RaWaWb;
u32 new_order = *order;
+ u32 psz = baser->psz;
bool indirect = false;
/* No need to enable Indirection if memory requirement < (psz*2)bytes */
@@ -1960,11 +1942,58 @@ static void its_free_tables(struct its_node *its)
}
}
+static int its_probe_baser_psz(struct its_node *its, struct its_baser *baser)
+{
+ u64 psz = SZ_64K;
+
+ while (psz) {
+ u64 val, gpsz;
+
+ val = its_read_baser(its, baser);
+ val &= ~GITS_BASER_PAGE_SIZE_MASK;
+
+ switch (psz) {
+ case SZ_64K:
+ gpsz = GITS_BASER_PAGE_SIZE_64K;
+ break;
+ case SZ_16K:
+ gpsz = GITS_BASER_PAGE_SIZE_16K;
+ break;
+ case SZ_4K:
+ default:
+ gpsz = GITS_BASER_PAGE_SIZE_4K;
+ break;
+ }
+
+ gpsz >>= GITS_BASER_PAGE_SIZE_SHIFT;
+
+ val |= FIELD_PREP(GITS_BASER_PAGE_SIZE_MASK, gpsz);
+ its_write_baser(its, baser, val);
+
+ if (FIELD_GET(GITS_BASER_PAGE_SIZE_MASK, baser->val) == gpsz)
+ break;
+
+ switch (psz) {
+ case SZ_64K:
+ psz = SZ_16K;
+ break;
+ case SZ_16K:
+ psz = SZ_4K;
+ break;
+ case SZ_4K:
+ default:
+ return -1;
+ }
+ }
+
+ baser->psz = psz;
+ return 0;
+}
+
static int its_alloc_tables(struct its_node *its)
{
u64 shr = GITS_BASER_InnerShareable;
u64 cache = GITS_BASER_RaWaWb;
- u32 psz = SZ_64K;
int err, i;
if (its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_22375)
@@ -1975,34 +2004,38 @@ static int its_alloc_tables(struct its_node *its)
struct its_baser *baser = its->tables + i;
u64 val = its_read_baser(its, baser);
u64 type = GITS_BASER_TYPE(val);
- u32 order = get_order(psz);
bool indirect = false;
+ u32 order;
- switch (type) {
- case GITS_BASER_TYPE_NONE:
+ if (type == GITS_BASER_TYPE_NONE)
continue;
+ if (its_probe_baser_psz(its, baser)) {
+ its_free_tables(its);
+ return -ENXIO;
+ }
+
+ order = get_order(baser->psz);
+
+ switch (type) {
case GITS_BASER_TYPE_DEVICE:
- indirect = its_parse_indirect_baser(its, baser,
- psz, &order,
+ indirect = its_parse_indirect_baser(its, baser, &order,
its->device_ids);
break;
case GITS_BASER_TYPE_VCPU:
- indirect = its_parse_indirect_baser(its, baser,
- psz, &order,
+ indirect = its_parse_indirect_baser(its, baser, &order,
ITS_MAX_VPEID_BITS);
break;
}
- err = its_setup_baser(its, baser, cache, shr, psz, order, indirect);
+ err = its_setup_baser(its, baser, cache, shr, order, indirect);
if (err < 0) {
its_free_tables(its);
return err;
}
/* Update settings which will be used for next BASERn */
- psz = baser->psz;
cache = baser->val & GITS_BASER_CACHEABILITY_MASK;
shr = baser->val & GITS_BASER_SHAREABILITY_MASK;
}
--
1.8.3
From: Parav Pandit <parav(a)mellanox.com>
mainline inclusion
from mainline-5.2
commit 522ecce08ab20b57342d65b05601818e0f95fb2c
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
This patch addresses the two issues below and prepares the code to
address the third issue listed below.
1. mdev device is placed on the mdev bus before it is created in the
vendor driver. Once a device is placed on the mdev bus without creating
its supporting underlying vendor device, the mdev driver's probe() gets
triggered. However, there isn't a stable mdev available to work on.
create_store()
mdev_create_device()
device_register()
...
vfio_mdev_probe()
[...]
parent->ops->create()
vfio_ap_mdev_create()
mdev_set_drvdata(mdev, matrix_mdev);
/* Valid pointer set above */
Due to this way of initialization, an mdev driver that wants to use the
mdev doesn't have a valid mdev to work on.
2. Current creation sequence is,
parent->ops_create()
groups_register()
Remove sequence is,
parent->ops->remove()
groups_unregister()
However, the remove sequence should be an exact mirror of the creation
sequence. Once this is achieved, all users of the mdev will be terminated
first, before the underlying vendor device is removed
(following the standard Linux driver model).
At that point the vendor's remove() ops shouldn't fail, because taking
the device off the bus should terminate any usage.
3. When the remove operation fails, mdev sysfs removal attempts to add the
file back on an already-removed device. The following call trace [1] is observed.
[1] call trace:
kernel: WARNING: CPU: 2 PID: 9348
at fs/sysfs/file.c:327 sysfs_create_file_ns+0x7f/0x90
kernel: CPU: 2 PID: 9348 Comm: bash Kdump:
loaded Not tainted 5.1.0-rc6-vdevbus+ #6
kernel: Hardware name:
Supermicro SYS-6028U-TR4+/X10DRU-i+, BIOS 2.0b 08/09/2016
kernel: RIP: 0010:sysfs_create_file_ns+0x7f/0x90
kernel: Call Trace:
kernel: remove_store+0xdc/0x100 [mdev]
kernel: kernfs_fop_write+0x113/0x1a0
kernel: vfs_write+0xad/0x1b0
kernel: ksys_write+0x5a/0xe0
kernel: do_syscall_64+0x5a/0x210
kernel: entry_SYSCALL_64_after_hwframe+0x49/0xbe
Therefore, the mdev core is improved in the following ways.
1. Split the device registration/deregistration sequence so that some
things can be done between initialization of the device and hooking it
up to the bus respectively after deregistering it from the bus but
before giving up our final reference.
In particular, this means invoking the ->create() and ->remove()
callbacks in those new windows. This gives the vendor driver an
initialized mdev device to work with during creation.
At the same time, a bus driver that wishes to bind to the mdev driver
also gets an initialized mdev device.
This follows the standard Linux kernel bus and device model (see the
sketch after this list).
2. During remove flow, first remove the device from the bus. This
ensures that any bus specific devices are removed.
Once device is taken off the mdev bus, invoke remove() of mdev
from the vendor driver.
3. The driver core device model provides way to register and auto
unregister the device sysfs attribute groups at dev->groups.
Make use of dev->groups to let core create the groups and eliminate
code to avoid explicit groups creation and removal.
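A condensed sketch of the split sequence described above, assembled from
the hunks below (error unwinding trimmed):

	/* create: device_register() is split into initialize + add, opening
	 * a window for the vendor ->create() on an initialized device */
	device_initialize(&mdev->dev);		/* refcounted, not on the bus yet */
	ret = parent->ops->create(kobj, mdev);
	if (!ret)
		ret = device_add(&mdev->dev);	/* now visible on the mdev bus */

	/* remove: exact mirror of creation */
	device_del(&mdev->dev);			/* off the bus, usage terminated */
	parent->ops->remove(mdev);		/* vendor cleanup */
	put_device(&mdev->dev);			/* balances device_initialize() */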
To ensure that the new sequence is solid, the stack dump below was taken
of a process attempting to remove the device while the device is in use
by the vfio driver and a user application.
This stack dump validates that the vfio driver guards against such device
removal while the device is in use.
cat /proc/21962/stack
[<0>] vfio_del_group_dev+0x216/0x3c0 [vfio]
[<0>] mdev_remove+0x21/0x40 [mdev]
[<0>] device_release_driver_internal+0xe8/0x1b0
[<0>] bus_remove_device+0xf9/0x170
[<0>] device_del+0x168/0x350
[<0>] mdev_device_remove_common+0x1d/0x50 [mdev]
[<0>] mdev_device_remove+0x8c/0xd0 [mdev]
[<0>] remove_store+0x71/0x90 [mdev]
[<0>] kernfs_fop_write+0x113/0x1a0
[<0>] vfs_write+0xad/0x1b0
[<0>] ksys_write+0x5a/0xe0
[<0>] do_syscall_64+0x5a/0x210
[<0>] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[<0>] 0xffffffffffffffff
This prepares the code to eliminate calling device_create_file() in a
subsequent patch.
Reviewed-by: Cornelia Huck <cohuck(a)redhat.com>
Signed-off-by: Parav Pandit <parav(a)mellanox.com>
Signed-off-by: Alex Williamson <alex.williamson(a)redhat.com>
Conflicts:
drivers/vfio/mdev/mdev_core.c
mdev_device_remove_ops() and mdev_device_remove_cb() are removed anyway,
although they are not identical to those in the original patch.
The type of uuid is still uuid_le; it has not been changed to guid_t.
drivers/vfio/mdev/mdev_private.h
The type of uuid is still uuid_le; it has not been changed to guid_t.
Signed-off-by: Xiaoyang Xu <xuxiaoyang2(a)huawei.com>
Reviewed-by: Yingtai Xie <xieyingtai(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/vfio/mdev/mdev_core.c | 94 +++++++++++-----------------------------
drivers/vfio/mdev/mdev_private.h | 2 +-
drivers/vfio/mdev/mdev_sysfs.c | 2 +-
3 files changed, 27 insertions(+), 71 deletions(-)
diff --git a/drivers/vfio/mdev/mdev_core.c b/drivers/vfio/mdev/mdev_core.c
index e052f62..9705a4a 100644
--- a/drivers/vfio/mdev/mdev_core.c
+++ b/drivers/vfio/mdev/mdev_core.c
@@ -103,55 +103,10 @@ static inline void mdev_put_parent(struct mdev_parent *parent)
kref_put(&parent->ref, mdev_release_parent);
}
-static int mdev_device_create_ops(struct kobject *kobj,
- struct mdev_device *mdev)
-{
- struct mdev_parent *parent = mdev->parent;
- int ret;
-
- ret = parent->ops->create(kobj, mdev);
- if (ret)
- return ret;
-
- ret = sysfs_create_groups(&mdev->dev.kobj,
- parent->ops->mdev_attr_groups);
- if (ret)
- parent->ops->remove(mdev);
-
- return ret;
-}
-
-/*
- * mdev_device_remove_ops gets called from sysfs's 'remove' and when parent
- * device is being unregistered from mdev device framework.
- * - 'force_remove' is set to 'false' when called from sysfs's 'remove' which
- * indicates that if the mdev device is active, used by VMM or userspace
- * application, vendor driver could return error then don't remove the device.
- * - 'force_remove' is set to 'true' when called from mdev_unregister_device()
- * which indicate that parent device is being removed from mdev device
- * framework so remove mdev device forcefully.
- */
-static int mdev_device_remove_ops(struct mdev_device *mdev, bool force_remove)
-{
- struct mdev_parent *parent = mdev->parent;
- int ret;
-
- /*
- * Vendor driver can return error if VMM or userspace application is
- * using this mdev device.
- */
- ret = parent->ops->remove(mdev);
- if (ret && !force_remove)
- return -EBUSY;
-
- sysfs_remove_groups(&mdev->dev.kobj, parent->ops->mdev_attr_groups);
- return 0;
-}
-
static int mdev_device_remove_cb(struct device *dev, void *data)
{
if (dev_is_mdev(dev))
- mdev_device_remove(dev, true);
+ mdev_device_remove(dev);
return 0;
}
@@ -311,41 +266,43 @@ int mdev_device_create(struct kobject *kobj, struct device *dev, uuid_le uuid)
mdev->parent = parent;
kref_init(&mdev->ref);
+ device_initialize(&mdev->dev);
mdev->dev.parent = dev;
mdev->dev.bus = &mdev_bus_type;
mdev->dev.release = mdev_device_release;
dev_set_name(&mdev->dev, "%pUl", uuid.b);
+ mdev->dev.groups = parent->ops->mdev_attr_groups;
+ mdev->type_kobj = kobj;
- ret = device_register(&mdev->dev);
- if (ret) {
- put_device(&mdev->dev);
- goto mdev_fail;
- }
+ ret = parent->ops->create(kobj, mdev);
+ if (ret)
+ goto ops_create_fail;
- ret = mdev_device_create_ops(kobj, mdev);
+ ret = device_add(&mdev->dev);
if (ret)
- goto create_fail;
+ goto add_fail;
ret = mdev_create_sysfs_files(&mdev->dev, type);
- if (ret) {
- mdev_device_remove_ops(mdev, true);
- goto create_fail;
- }
+ if (ret)
+ goto sysfs_fail;
- mdev->type_kobj = kobj;
mdev->active = true;
dev_dbg(&mdev->dev, "MDEV: created\n");
return 0;
-create_fail:
- device_unregister(&mdev->dev);
+sysfs_fail:
+ device_del(&mdev->dev);
+add_fail:
+ parent->ops->remove(mdev);
+ops_create_fail:
+ put_device(&mdev->dev);
mdev_fail:
mdev_put_parent(parent);
return ret;
}
-int mdev_device_remove(struct device *dev, bool force_remove)
+int mdev_device_remove(struct device *dev)
{
struct mdev_device *mdev, *tmp;
struct mdev_parent *parent;
@@ -374,16 +331,15 @@ int mdev_device_remove(struct device *dev, bool force_remove)
mutex_unlock(&mdev_list_lock);
type = to_mdev_type(mdev->type_kobj);
+ mdev_remove_sysfs_files(dev, type);
+ device_del(&mdev->dev);
parent = mdev->parent;
+ ret = parent->ops->remove(mdev);
+ if (ret)
+ dev_err(&mdev->dev, "Remove failed: err=%d\n", ret);
- ret = mdev_device_remove_ops(mdev, force_remove);
- if (ret) {
- mdev->active = true;
- return ret;
- }
-
- mdev_remove_sysfs_files(dev, type);
- device_unregister(dev);
+ /* Balances with device_initialize() */
+ put_device(&mdev->dev);
mdev_put_parent(parent);
return 0;
diff --git a/drivers/vfio/mdev/mdev_private.h b/drivers/vfio/mdev/mdev_private.h
index b5819b7..9c65d8e 100644
--- a/drivers/vfio/mdev/mdev_private.h
+++ b/drivers/vfio/mdev/mdev_private.h
@@ -59,6 +59,6 @@ struct mdev_type {
void mdev_remove_sysfs_files(struct device *dev, struct mdev_type *type);
int mdev_device_create(struct kobject *kobj, struct device *dev, uuid_le uuid);
-int mdev_device_remove(struct device *dev, bool force_remove);
+int mdev_device_remove(struct device *dev);
#endif /* MDEV_PRIVATE_H */
diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c
index e7770b5..1286366 100644
--- a/drivers/vfio/mdev/mdev_sysfs.c
+++ b/drivers/vfio/mdev/mdev_sysfs.c
@@ -236,7 +236,7 @@ static ssize_t remove_store(struct device *dev, struct device_attribute *attr,
if (val && device_remove_file_self(dev, attr)) {
int ret;
- ret = mdev_device_remove(dev, false);
+ ret = mdev_device_remove(dev);
if (ret) {
device_create_file(dev, attr);
return ret;
--
1.8.3

[PATCH 1/2] arm64/mpam: Fix unreset resources when mkdir ctrl group or umount resctrl
by Yang Yingliang 01 Jun '20
From: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 34588
CVE: NA
-----------------------------------------------
There are two problems related to schemata:
1) When rmdir a group and then mkdir a new group under resctrl
root directory, the new group still inherits the schemata
configuration from old.
e.g.
> mount -t resctrl resctrl /sys/fs/resctrl
> cd /sys/fs/resctrl
> mkdir p1 && cd p1
> echo 'L3:0=7f' > schemata
> cd .. && rmdir p1 && mkdir p1 && cd p1
> cat schemata
L3:0=7f;1=7fff;2=7fff;3=7fff
MB:0=100;1=100;2=100;3=100
2) It still exists when umount /sys/fs/resctrl and remount.
e.g.
> mount -t resctrl resctrl /sys/fs/resctrl
> cd /sys/fs/resctrl
> echo 'L3:0=7f' > schemata
> umount /sys/fs/resctrl
> mount -t resctrl resctrl /sys/fs/resctrl
> cat schemata
L3:0=7f;1=7fff;2=7fff;3=7fff
MB:0=100;1=100;2=100;3=100
Firstly we make each resctrl resource obtain its corresponding
default configuration. NOTE we use zero to initialize the L3 default
value instead of the max cpbm bits, as a zero configuration is
equivalent to the maximum configuration for L3 MSCs. And we use the
max-percentage masks of max bandwidth to generate the maximum
configuration for MB.
Then we reset the resources' configuration settings to the default
value and bring the MSCs back to their default state when mkdir or
umount happens. A quick way to verify this is sketched below.
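As a sanity check (the exact default masks depend on the platform's
cpbm width and bandwidth granularity, so the values here are
illustrative), a freshly created group should show the default
schemata again:
> mount -t resctrl resctrl /sys/fs/resctrl
> mkdir /sys/fs/resctrl/p1 && cat /sys/fs/resctrl/p1/schemata
L3:0=7fff;1=7fff;2=7fff;3=7fff
MB:0=100;1=100;2=100;3=100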
Fixes: caf75b6b2540 ("resctrlfs: mpam: init struct for mpam")
Fixes: 916dd9321e3c ("resctrlfs: init support resctrlfs")
Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/include/asm/mpam_resource.h | 4 ++++
arch/arm64/include/asm/resctrl.h | 2 ++
arch/arm64/kernel/mpam.c | 17 +++++++++++++++++
arch/arm64/kernel/mpam_ctrlmon.c | 28 ++++++++++++++++++++++++++++
fs/resctrlfs.c | 5 +++++
5 files changed, 56 insertions(+)
diff --git a/arch/arm64/include/asm/mpam_resource.h b/arch/arm64/include/asm/mpam_resource.h
index ab90596..1a6904c 100644
--- a/arch/arm64/include/asm/mpam_resource.h
+++ b/arch/arm64/include/asm/mpam_resource.h
@@ -89,6 +89,10 @@
/* [FIXME] hard code for hardlim */
#define MBW_MAX_SET(v) (MBW_MAX_HARDLIM|((v) << (16 - BWA_WD)))
#define MBW_MAX_GET(v) (((v) & MBW_MAX_MASK) >> (16 - BWA_WD))
+
+/* hard code for mbw_max max-percentage's corresponding masks */
+#define MBA_MAX_WD 63u
+
/*
* emulate the mpam nodes
* These should be reported by ACPI MPAM Table.
diff --git a/arch/arm64/include/asm/resctrl.h b/arch/arm64/include/asm/resctrl.h
index 0a0a12b..fb5fa6c 100644
--- a/arch/arm64/include/asm/resctrl.h
+++ b/arch/arm64/include/asm/resctrl.h
@@ -65,4 +65,6 @@ int mkdir_mondata_all(struct kernfs_node *parent_kn,
mongroup_create_dir(struct kernfs_node *parent_kn, struct resctrl_group *prgrp,
char *name, struct kernfs_node **dest_kn);
+int rdtgroup_init_alloc(struct rdtgroup *rdtgrp);
+
#endif /* _ASM_ARM64_RESCTRL_H */
diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
index 202e49a..120795e 100644
--- a/arch/arm64/kernel/mpam.c
+++ b/arch/arm64/kernel/mpam.c
@@ -86,6 +86,9 @@ static inline void mpam_node_assign_val(struct mpam_node *n,
n->addr = hwpage_address;
n->component_id = component_id;
n->cpus_list = "0";
+
+ if (n->type == MPAM_RESOURCE_MC)
+ n->default_ctrl = MBA_MAX_WD;
}
#define MPAM_NODE_NAME_SIZE (10)
@@ -544,6 +547,20 @@ void post_resctrl_mount(void)
static int reset_all_ctrls(struct resctrl_resource *r)
{
+ struct raw_resctrl_resource *rr;
+ struct rdt_domain *d;
+ int partid;
+
+ rr = (struct raw_resctrl_resource *)r->res;
+ for (partid = 0; partid < rr->num_partid; partid++) {
+ list_for_each_entry(d, &r->domains, list) {
+ d->new_ctrl = rr->default_ctrl;
+ d->ctrl_val[partid] = rr->default_ctrl;
+ d->have_new_ctrl = true;
+ rr->msr_update(d, partid);
+ }
+ }
+
return 0;
}
diff --git a/arch/arm64/kernel/mpam_ctrlmon.c b/arch/arm64/kernel/mpam_ctrlmon.c
index b9f9495..2b94efc 100644
--- a/arch/arm64/kernel/mpam_ctrlmon.c
+++ b/arch/arm64/kernel/mpam_ctrlmon.c
@@ -585,3 +585,31 @@ int resctrl_mkdir_ctrlmon_mondata(struct kernfs_node *parent_kn,
}
return ret;
}
+
+/* Initialize the RDT group's allocations. */
+int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
+{
+ struct resctrl_resource *r;
+ struct raw_resctrl_resource *rr;
+ struct rdt_domain *d;
+ int ret;
+
+ for_each_resctrl_resource(r) {
+ if (!r->alloc_enabled)
+ continue;
+
+ rr = (struct raw_resctrl_resource *)r->res;
+ list_for_each_entry(d, &r->domains, list) {
+ d->new_ctrl = rr->default_ctrl;
+ d->have_new_ctrl = true;
+ }
+
+ ret = update_domains(r, rdtgrp);
+ if (ret < 0) {
+ rdt_last_cmd_puts("Failed to initialize allocations\n");
+ return ret;
+ }
+ }
+
+ return 0;
+}
diff --git a/fs/resctrlfs.c b/fs/resctrlfs.c
index 09e56384..8d463b8 100644
--- a/fs/resctrlfs.c
+++ b/fs/resctrlfs.c
@@ -670,6 +670,11 @@ static int resctrl_group_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
ret = 0;
rdtgrp->closid = closid;
+
+ ret = rdtgroup_init_alloc(rdtgrp);
+ if (ret < 0)
+ goto out_id_free;
+
list_add(&rdtgrp->resctrl_group_list, &resctrl_all_groups);
if (resctrl_mon_capable) {
--
1.8.3
From: Zhao Minmin <zhaominmin1(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 34592
CVE: NA
-------------------------------------------------
Implement the ext3/ext4 file system error report.
This patch implements an abnormal-event alarm for the ext3/ext4
filesystem. You can achieve this by setting "FILESYSTEM_MONITOR" or
"FILESYSTEM_ALARM" on in the configuration file. With this setting, an
alarm will be raised when an ext3/ext4 file system exception occurs.
A minimal userspace listener is sketched below.
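A sketch of such a listener (assumptions: the constants and struct
mirror the kernel definitions added in this patch; error handling is
trimmed and the printed output is illustrative):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define NETLINK_FILESYSTEM	28		/* as added to uapi below */
#define NL_EXT4_ERROR_GROUP	1
#define EXT4_ERROR_MAGIC	0xAE32014U

struct ext4_err_msg {			/* mirrors fs/ext4/ext4.h below */
	int magic;
	char s_id[32];
	unsigned long s_flags;
	int ext4_errno;
};

int main(void)
{
	struct sockaddr_nl addr = {
		.nl_family = AF_NETLINK,
		.nl_groups = NL_EXT4_ERROR_GROUP, /* join broadcast group 1 */
	};
	char buf[4096];
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_FILESYSTEM);

	if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	for (;;) {
		ssize_t n = recv(fd, buf, sizeof(buf), 0);
		struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
		struct ext4_err_msg *msg;

		if (n < (ssize_t)NLMSG_SPACE(sizeof(*msg)))
			continue;
		msg = NLMSG_DATA(nlh);
		if (msg->magic != EXT4_ERROR_MAGIC)
			continue;
		printf("ext4 error on %s: errno=%d\n",
		       msg->s_id, msg->ext4_errno);
	}
}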
Signed-off-by: Zhao Minmin <zhaominmin1(a)huawei.com>
Reviewed-by: Yi Zhang <yi.zhang(a)huawei.com>
Link: http://hulk.huawei.com/pipermail/kernel.openeuler/2016-March/009711.html
Signed-off-by: Wang Hui <john.wanghui(a)huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
[yebin: cherry-pick this patch from redhat-7.5, commit 13848b1856c1]
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: Wenan Mao <maowenan(a)huawei.com>
Reviewed-by: Yi Zhang <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/ext4.h | 9 ++++++++
fs/ext4/super.c | 51 ++++++++++++++++++++++++++++++++++++++++++++
include/uapi/linux/netlink.h | 1 +
3 files changed, 61 insertions(+)
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index b1bcc6e..83bec07 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -54,6 +54,15 @@
#endif
#endif
+#define NL_EXT4_ERROR_GROUP 1
+#define EXT4_ERROR_MAGIC 0xAE32014U
+struct ext4_err_msg {
+ int magic;
+ char s_id[32];
+ unsigned long s_flags;
+ int ext4_errno;
+};
+
/*
* The fourth extended filesystem constants/structures
*/
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index bd6c1cf..33f1e7c 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -54,6 +54,10 @@
#include "mballoc.h"
#include "fsmap.h"
+#include <uapi/linux/netlink.h>
+#include <net/sock.h>
+#include <net/net_namespace.h>
+
#define CREATE_TRACE_POINTS
#include <trace/events/ext4.h>
@@ -84,6 +88,8 @@ static struct dentry *ext4_mount(struct file_system_type *fs_type, int flags,
static void ext4_clear_request_list(void);
static struct inode *ext4_get_journal_inode(struct super_block *sb,
unsigned int journal_inum);
+static void ext4_netlink_send_info(struct super_block *sb, int ext4_errno);
+static struct sock *ext4nl;
/*
* Lock ordering
@@ -439,6 +445,42 @@ static bool system_going_down(void)
|| system_state == SYSTEM_RESTART;
}
+static void ext4_netlink_send_info(struct super_block *sb, int ext4_errno)
+{
+ int size;
+ sk_buff_data_t old_tail;
+ struct sk_buff *skb;
+ struct nlmsghdr *nlh;
+ struct ext4_err_msg *msg;
+
+ if (ext4nl) {
+ size = NLMSG_SPACE(sizeof(struct ext4_err_msg));
+ skb = alloc_skb(size, GFP_ATOMIC);
+ if (!skb) {
+ printk(KERN_ERR "Cannot alloc skb!");
+ return;
+ }
+ old_tail = skb->tail;
+ nlh = nlmsg_put(skb, 0, 0, NLMSG_ERROR, size - sizeof(*nlh), 0);
+ if (!nlh)
+ goto nlmsg_failure;
+ msg = (struct ext4_err_msg *)NLMSG_DATA(nlh);
+ msg->magic = EXT4_ERROR_MAGIC;
+ memcpy(msg->s_id, sb->s_id, sizeof(sb->s_id));
+ msg->s_flags = sb->s_flags;
+ msg->ext4_errno = ext4_errno;
+ nlh->nlmsg_len = skb->tail - old_tail;
+ NETLINK_CB(skb).portid = 0;
+ NETLINK_CB(skb).dst_group = NL_EXT4_ERROR_GROUP;
+ netlink_broadcast(ext4nl, skb, 0, NL_EXT4_ERROR_GROUP,
+ GFP_ATOMIC);
+ return;
+nlmsg_failure:
+ if (skb)
+ kfree_skb(skb);
+ }
+}
+
/* Deal with the reporting of failure conditions on a filesystem such as
* inconsistencies detected or read IO failures.
*
@@ -469,6 +511,9 @@ static void ext4_handle_error(struct super_block *sb)
if (journal)
jbd2_journal_abort(journal, -EIO);
}
+
+ ext4_netlink_send_info(sb, 1);
+
/*
* We force ERRORS_RO behavior when system is rebooting. Otherwise we
* could panic during 'reboot -f' as the underlying device got already
@@ -700,6 +745,7 @@ void __ext4_abort(struct super_block *sb, const char *function,
if (EXT4_SB(sb)->s_journal)
jbd2_journal_abort(EXT4_SB(sb)->s_journal, -EIO);
save_error_info(sb, function, line);
+ ext4_netlink_send_info(sb, 2);
}
if (test_opt(sb, ERRORS_PANIC) && !system_going_down()) {
if (EXT4_SB(sb)->s_journal &&
@@ -6063,6 +6109,7 @@ static inline int ext3_feature_set_ok(struct super_block *sb)
static int __init ext4_init_fs(void)
{
int i, err;
+ struct netlink_kernel_cfg cfg = {.groups = NL_EXT4_ERROR_GROUP,};
ratelimit_state_init(&ext4_mount_msg_ratelimit, 30 * HZ, 64);
ext4_li_info = NULL;
@@ -6102,6 +6149,9 @@ static int __init ext4_init_fs(void)
if (err)
goto out;
+ ext4nl = netlink_kernel_create(&init_net, NETLINK_FILESYSTEM, &cfg);
+ if (!ext4nl)
+ printk(KERN_ERR "EXT4-fs: Cannot create netlink socket.\n");
return 0;
out:
unregister_as_ext2();
@@ -6133,6 +6183,7 @@ static void __exit ext4_exit_fs(void)
ext4_exit_system_zone();
ext4_exit_pageio();
ext4_exit_es();
+ netlink_kernel_release(ext4nl);
}
MODULE_AUTHOR("Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others");
diff --git a/include/uapi/linux/netlink.h b/include/uapi/linux/netlink.h
index 776bc92..ecaefee 100644
--- a/include/uapi/linux/netlink.h
+++ b/include/uapi/linux/netlink.h
@@ -29,6 +29,7 @@
#define NETLINK_RDMA 20
#define NETLINK_CRYPTO 21 /* Crypto layer */
#define NETLINK_SMC 22 /* SMC monitoring */
+#define NETLINK_FILESYSTEM 28 /* filesystem alarm*/
#define NETLINK_INET_DIAG NETLINK_SOCK_DIAG
--
1.8.3

[PATCH 1/3] Added ethtool_ops interface to query optical module information
by Yang Yingliang 01 Jun '20
From: Chiqijun <chiqijun(a)huawei.com>
driver inclusion
category: feature
bugzilla: 4472
-----------------------------------------------------------------------
Obtaining optical module information is time-consuming. If the user
queries it frequently, it prevents the firmware from processing other
commands. To avoid this, the firmware proactively reports the optical
module information to the driver when it is idle, and the driver
caches it. When the user then requests the information through a
command, it is served directly from the data cached by the driver.
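For context, this cached path is what a standard userspace query ends
up hitting; a typical usage example (the interface name here is just
an illustration):
> ethtool -m eth2    # dumps the module EEPROM via get_module_eeprom()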
Signed-off-by: Chiqijun <chiqijun(a)huawei.com>
Reviewed-by: Luoshaokai <luoshaokai(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic/hinic_ethtool.c | 77 ++++++++++
drivers/net/ethernet/huawei/hinic/hinic_hw.h | 54 +++++++
drivers/net/ethernet/huawei/hinic/hinic_hwdev.c | 168 +++++++++++++++++++++
drivers/net/ethernet/huawei/hinic/hinic_lld.c | 2 +
.../ethernet/huawei/hinic/hinic_mgmt_interface.h | 12 ++
drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c | 139 ++++++++++++++++-
drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h | 2 +
drivers/net/ethernet/huawei/hinic/hinic_nic_io.c | 11 +-
drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h | 3 +
9 files changed, 465 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c b/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c
index f321365..78d32ea 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c
@@ -2039,6 +2039,74 @@ static void hinic_diag_test(struct net_device *netdev,
hinic_lp_test(netdev, eth_test, data, 0);
}
+#ifdef ETHTOOL_GMODULEEEPROM
+static int hinic_get_module_info(struct net_device *netdev,
+ struct ethtool_modinfo *modinfo)
+{
+ struct hinic_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 sfp_type;
+ u8 sfp_type_ext;
+ int err;
+
+ err = hinic_get_sfp_type(nic_dev->hwdev, &sfp_type, &sfp_type_ext);
+ if (err)
+ return err;
+
+ switch (sfp_type) {
+ case MODULE_TYPE_SFP:
+ modinfo->type = ETH_MODULE_SFF_8472;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+ break;
+ case MODULE_TYPE_QSFP:
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = STD_SFP_INFO_MAX_SIZE;
+ break;
+ case MODULE_TYPE_QSFP_PLUS:
+ if (sfp_type_ext >= 0x3) {
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = STD_SFP_INFO_MAX_SIZE;
+
+ } else {
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = STD_SFP_INFO_MAX_SIZE;
+ }
+ break;
+ case MODULE_TYPE_QSFP28:
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = STD_SFP_INFO_MAX_SIZE;
+ break;
+ default:
+ nicif_warn(nic_dev, drv, netdev,
+ "Optical module unknown: 0x%x\n", sfp_type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic_get_module_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *ee, u8 *data)
+{
+ struct hinic_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 sfp_data[STD_SFP_INFO_MAX_SIZE];
+ u16 len;
+ int err;
+
+ if (!ee->len || ((ee->len + ee->offset) > STD_SFP_INFO_MAX_SIZE))
+ return -EINVAL;
+
+ memset(data, 0, ee->len);
+
+ err = hinic_get_sfp_eeprom(nic_dev->hwdev, sfp_data, &len);
+ if (err)
+ return err;
+
+ memcpy(data, sfp_data + ee->offset, ee->len);
+
+ return 0;
+}
+#endif /* ETHTOOL_GMODULEEEPROM */
+
static int set_l4_rss_hash_ops(struct ethtool_rxnfc *cmd,
struct nic_rss_type *rss_type)
{
@@ -2532,6 +2600,10 @@ static int hinic_set_rxfh_indir(struct net_device *netdev, const u32 *indir)
#ifndef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
.get_channels = hinic_get_channels,
.set_channels = hinic_set_channels,
+#ifdef ETHTOOL_GMODULEEEPROM
+ .get_module_info = hinic_get_module_info,
+ .get_module_eeprom = hinic_get_module_eeprom,
+#endif
#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
.get_rxfh_indir_size = hinic_get_rxfh_indir_size,
#endif
@@ -2552,6 +2624,11 @@ static int hinic_set_rxfh_indir(struct net_device *netdev, const u32 *indir)
.set_phys_id = hinic_set_phys_id,
.get_channels = hinic_get_channels,
.set_channels = hinic_set_channels,
+#ifdef ETHTOOL_GMODULEEEPROM
+ .get_module_info = hinic_get_module_info,
+ .get_module_eeprom = hinic_get_module_eeprom,
+#endif
+
#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
.get_rxfh_indir_size = hinic_get_rxfh_indir_size,
#endif
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw.h b/drivers/net/ethernet/huawei/hinic/hinic_hw.h
index e489a00..d4becb6 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw.h
@@ -264,6 +264,56 @@ struct hinic_init_para {
#define INIT_SUCCESS 1
#define MAX_DRV_BUF_SIZE 4096
+struct hinic_cmd_get_light_module_abs {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 port_id;
+ u8 abs_status; /* 0:present, 1:absent */
+ u8 rsv[2];
+};
+
+#define MODULE_TYPE_SFP 0x3
+#define MODULE_TYPE_QSFP28 0x11
+#define MODULE_TYPE_QSFP 0x0C
+#define MODULE_TYPE_QSFP_PLUS 0x0D
+
+#define SFP_INFO_MAX_SIZE 512
+struct hinic_cmd_get_sfp_qsfp_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 port_id;
+ u8 wire_type;
+ u16 out_len;
+ u8 sfp_qsfp_info[SFP_INFO_MAX_SIZE];
+};
+
+#define STD_SFP_INFO_MAX_SIZE 640
+struct hinic_cmd_get_std_sfp_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 port_id;
+ u8 wire_type;
+ u16 eeprom_len;
+ u32 rsvd;
+ u8 sfp_info[STD_SFP_INFO_MAX_SIZE];
+};
+
+#define HINIC_MAX_PORT_ID 4
+
+struct hinic_port_routine_cmd {
+ int up_send_sfp_info;
+ int up_send_sfp_abs;
+
+ struct hinic_cmd_get_sfp_qsfp_info sfp_info;
+ struct hinic_cmd_get_light_module_abs abs;
+};
+
struct card_node {
struct list_head node;
struct list_head func_list;
@@ -282,6 +332,10 @@ struct card_node {
bool disable_vf_load[HINIC_MAX_PF_NUM];
u32 vf_mbx_old_rand_id[MAX_FUNCTION_NUM];
u32 vf_mbx_rand_id[MAX_FUNCTION_NUM];
+ struct hinic_port_routine_cmd rt_cmd[HINIC_MAX_PORT_ID];
+
+ /* mutex used for copy sfp info */
+ struct mutex sfp_mutex;
};
enum hinic_hwdev_init_state {
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c
index 101b671..172be6c 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hwdev.c
@@ -59,6 +59,8 @@
#define HINIC_OK_FLAG_FAILED 1
+#define HINIC_GET_SFP_INFO_REAL_TIME 0x1
+
#define HINIC_GLB_SO_RO_CFG_SHIFT 0x0
#define HINIC_GLB_SO_RO_CFG_MASK 0x1
#define HINIC_DISABLE_ORDER 0
@@ -913,6 +915,80 @@ int hinic_pf_msg_to_mgmt_sync(void *hwdev, enum hinic_mod_type mod, u8 cmd,
return err;
}
+static bool is_sfp_info_cmd_cached(struct hinic_hwdev *hwdev,
+ enum hinic_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic_cmd_get_sfp_qsfp_info *sfp_info = NULL;
+ struct hinic_port_routine_cmd *rt_cmd = NULL;
+ struct card_node *chip_node = hwdev->chip_node;
+
+ sfp_info = buf_in;
+ if (sfp_info->port_id >= HINIC_MAX_PORT_ID ||
+ *out_size < sizeof(*sfp_info))
+ return false;
+
+ if (sfp_info->version == HINIC_GET_SFP_INFO_REAL_TIME)
+ return false;
+
+ rt_cmd = &chip_node->rt_cmd[sfp_info->port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ memcpy(buf_out, &rt_cmd->sfp_info, sizeof(*sfp_info));
+ mutex_unlock(&chip_node->sfp_mutex);
+
+ return true;
+}
+
+static bool is_sfp_abs_cmd_cached(struct hinic_hwdev *hwdev,
+ enum hinic_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic_cmd_get_light_module_abs *abs = NULL;
+ struct hinic_port_routine_cmd *rt_cmd = NULL;
+ struct card_node *chip_node = hwdev->chip_node;
+
+ abs = buf_in;
+ if (abs->port_id >= HINIC_MAX_PORT_ID ||
+ *out_size < sizeof(*abs))
+ return false;
+
+ if (abs->version == HINIC_GET_SFP_INFO_REAL_TIME)
+ return false;
+
+ rt_cmd = &chip_node->rt_cmd[abs->port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ memcpy(buf_out, &rt_cmd->abs, sizeof(*abs));
+ mutex_unlock(&chip_node->sfp_mutex);
+
+ return true;
+}
+
+static bool driver_processed_cmd(struct hinic_hwdev *hwdev,
+ enum hinic_mod_type mod, u8 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct card_node *chip_node = hwdev->chip_node;
+
+ if (mod == HINIC_MOD_L2NIC) {
+ if (cmd == HINIC_PORT_CMD_GET_SFP_INFO &&
+ chip_node->rt_cmd->up_send_sfp_info) {
+ return is_sfp_info_cmd_cached(hwdev, mod, cmd, buf_in,
+ in_size, buf_out,
+ out_size);
+ } else if (cmd == HINIC_PORT_CMD_GET_SFP_ABS &&
+ chip_node->rt_cmd->up_send_sfp_abs) {
+ return is_sfp_abs_cmd_cached(hwdev, mod, cmd, buf_in,
+ in_size, buf_out,
+ out_size);
+ }
+ }
+
+ return false;
+}
+
int hinic_msg_to_mgmt_sync(void *hwdev, enum hinic_mod_type mod, u8 cmd,
void *buf_in, u16 in_size,
void *buf_out, u16 *out_size, u32 timeout)
@@ -949,6 +1025,10 @@ int hinic_msg_to_mgmt_sync(void *hwdev, enum hinic_mod_type mod, u8 cmd,
err = __func_send_mbox(hwdev, mod, cmd, buf_in, in_size,
buf_out, out_size, timeout);
} else {
+ if (driver_processed_cmd(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size))
+ return 0;
+
do {
if (!hinic_get_mgmt_channel_status(hwdev) ||
!hinic_get_chip_present_flag(hwdev))
@@ -3202,6 +3282,8 @@ enum hinic_event_cmd {
HINIC_EVENT_MGMT_PCIE_DFX,
HINIC_EVENT_MCTP_HOST_INFO,
HINIC_EVENT_MGMT_HEARTBEAT_EHD,
+ HINIC_EVENT_SFP_INFO_REPORT,
+ HINIC_EVENT_SFP_ABS_REPORT,
HINIC_EVENT_MAX_TYPE,
};
@@ -3284,6 +3366,16 @@ struct hinic_event_convert {
.cmd = HINIC_MGMT_CMD_HEARTBEAT_EVENT,
.event = HINIC_EVENT_MGMT_HEARTBEAT_EHD,
},
+ {
+ .mod = HINIC_MOD_L2NIC,
+ .cmd = HINIC_PORT_CMD_GET_SFP_INFO,
+ .event = HINIC_EVENT_SFP_INFO_REPORT,
+ },
+ {
+ .mod = HINIC_MOD_L2NIC,
+ .cmd = HINIC_PORT_CMD_GET_SFP_ABS,
+ .event = HINIC_EVENT_SFP_ABS_REPORT,
+ },
};
static enum hinic_event_cmd __get_event_type(u8 mod, u8 cmd)
@@ -3587,12 +3679,22 @@ static void module_status_event(struct hinic_hwdev *hwdev,
struct hinic_cable_plug_event *plug_event;
struct hinic_link_err_event *link_err;
struct hinic_event_info event_info = {0};
+ struct hinic_port_routine_cmd *rt_cmd;
+ struct card_node *chip_node = hwdev->chip_node;
event_info.type = HINIC_EVENT_PORT_MODULE_EVENT;
if (cmd == HINIC_EVENT_CABLE_PLUG) {
plug_event = buf_in;
+ if (plug_event->port_id < HINIC_MAX_PORT_ID) {
+ rt_cmd = &chip_node->rt_cmd[plug_event->port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ rt_cmd->up_send_sfp_abs = false;
+ rt_cmd->up_send_sfp_info = false;
+ mutex_unlock(&chip_node->sfp_mutex);
+ }
+
event_info.module_event.type = plug_event->plugged ?
HINIC_PORT_MODULE_CABLE_PLUGGED :
HINIC_PORT_MODULE_CABLE_UNPLUGGED;
@@ -3733,6 +3835,64 @@ static void mgmt_watchdog_timeout_event_handler(struct hinic_hwdev *hwdev,
queue_work(hwdev->workq, &hwdev->fault_work);
}
+static void port_sfp_info_event(struct hinic_hwdev *hwdev, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic_cmd_get_sfp_qsfp_info *sfp_info = buf_in;
+ struct hinic_port_routine_cmd *rt_cmd;
+ struct card_node *chip_node = hwdev->chip_node;
+
+ if (in_size != sizeof(*sfp_info)) {
+ sdk_err(hwdev->dev_hdl, "Invalid sfp info cmd, length: %d, should be %ld\n",
+ in_size, sizeof(*sfp_info));
+ return;
+ }
+
+ if (sfp_info->port_id >= HINIC_MAX_PORT_ID) {
+ sdk_err(hwdev->dev_hdl, "Invalid sfp port id: %d, max port is %d\n",
+ sfp_info->port_id, HINIC_MAX_PORT_ID - 1);
+ return;
+ }
+
+ if (!chip_node->rt_cmd)
+ return;
+
+ rt_cmd = &chip_node->rt_cmd[sfp_info->port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ memcpy(&rt_cmd->sfp_info, sfp_info, sizeof(rt_cmd->sfp_info));
+ rt_cmd->up_send_sfp_info = true;
+ mutex_unlock(&chip_node->sfp_mutex);
+}
+
+static void port_sfp_abs_event(struct hinic_hwdev *hwdev, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic_cmd_get_light_module_abs *sfp_abs = buf_in;
+ struct hinic_port_routine_cmd *rt_cmd;
+ struct card_node *chip_node = hwdev->chip_node;
+
+ if (in_size != sizeof(*sfp_abs)) {
+ sdk_err(hwdev->dev_hdl, "Invalid sfp absent cmd, length: %d, should be %ld\n",
+ in_size, sizeof(*sfp_abs));
+ return;
+ }
+
+ if (sfp_abs->port_id >= HINIC_MAX_PORT_ID) {
+ sdk_err(hwdev->dev_hdl, "Invalid sfp port id: %d, max port is %d\n",
+ sfp_abs->port_id, HINIC_MAX_PORT_ID - 1);
+ return;
+ }
+
+ if (!chip_node->rt_cmd)
+ return;
+
+ rt_cmd = &chip_node->rt_cmd[sfp_abs->port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ memcpy(&rt_cmd->abs, sfp_abs, sizeof(rt_cmd->abs));
+ rt_cmd->up_send_sfp_abs = true;
+ mutex_unlock(&chip_node->sfp_mutex);
+}
+
static void mgmt_reset_event_handler(struct hinic_hwdev *hwdev)
{
sdk_info(hwdev->dev_hdl, "Mgmt is reset\n");
@@ -4241,6 +4401,14 @@ static void _event_handler(struct hinic_hwdev *hwdev, enum hinic_event_cmd cmd,
buf_out, out_size);
break;
+ case HINIC_EVENT_SFP_INFO_REPORT:
+ port_sfp_info_event(hwdev, buf_in, in_size, buf_out, out_size);
+ break;
+
+ case HINIC_EVENT_SFP_ABS_REPORT:
+ port_sfp_abs_event(hwdev, buf_in, in_size, buf_out, out_size);
+ break;
+
default:
sdk_warn(hwdev->dev_hdl, "Unsupported event %d to process\n",
cmd);
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_lld.c b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
index e8e446a..3bf47ac 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_lld.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
@@ -1990,6 +1990,8 @@ static int alloc_chip_node(struct hinic_pcidev *pci_adapter)
INIT_LIST_HEAD(&chip_node->func_list);
pci_adapter->chip_node = chip_node;
+ mutex_init(&chip_node->sfp_mutex);
+
return 0;
alloc_dbgtool_attr_file_err:
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_mgmt_interface.h b/drivers/net/ethernet/huawei/hinic/hinic_mgmt_interface.h
index 13bc351..3c3be6e 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_mgmt_interface.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_mgmt_interface.h
@@ -680,6 +680,16 @@ struct hinic_capture_info {
u32 data_vlan;
};
+struct hinic_port_rt_cmd {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 pf_id;
+ u8 enable;
+ u8 rsvd1[6];
+};
+
struct hinic_vf_dcb_state {
u8 status;
u8 version;
@@ -737,6 +747,8 @@ struct hinic_set_link_follow {
void hinic_unregister_vf_msg_handler(void *hwdev, u16 vf_id);
+int hinic_set_port_routine_cmd_report(void *hwdev, bool enable);
+
int hinic_refresh_nic_cfg(void *hwdev, struct nic_port_info *port_info);
int hinic_save_dcb_state(struct hinic_hwdev *hwdev,
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
index 07858be..6ddae36 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.c
@@ -35,7 +35,6 @@
#include "hinic_nic.h"
#include "hinic_mgmt_interface.h"
#include "hinic_hwif.h"
-#include "hinic_eqs.h"
static unsigned char set_vf_link_state;
module_param(set_vf_link_state, byte, 0444);
@@ -3571,6 +3570,35 @@ int hinic_set_super_cqe_state(void *hwdev, bool enable)
return 0;
}
+int hinic_set_port_routine_cmd_report(void *hwdev, bool enable)
+{
+ struct hinic_port_rt_cmd rt_cmd = { 0 };
+ struct hinic_hwdev *dev = hwdev;
+ u16 out_size = sizeof(rt_cmd);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ rt_cmd.pf_id = (u8)hinic_global_func_id(hwdev);
+ rt_cmd.enable = enable;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev,
+ HINIC_PORT_CMD_SET_PORT_REPORT,
+ &rt_cmd, sizeof(rt_cmd), &rt_cmd,
+ &out_size);
+ if (rt_cmd.status == HINIC_MGMT_CMD_UNSUPPORTED) {
+ nic_info(dev->dev_hdl, "Current firmware doesn't support to set port routine command report\n");
+ } else if (rt_cmd.status || err || !out_size) {
+ nic_err(dev->dev_hdl,
+ "Failed to set port routine command report, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rt_cmd.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
int hinic_set_func_capture_en(void *hwdev, u16 func_id, bool cap_en)
{
struct hinic_hwdev *dev = hwdev;
@@ -3889,3 +3917,112 @@ int hinic_disable_tx_promisc(void *hwdev)
}
return 0;
}
+
+static bool hinic_if_sfp_absent(void *hwdev)
+{
+ struct card_node *chip_node = ((struct hinic_hwdev *)hwdev)->chip_node;
+ struct hinic_port_routine_cmd *rt_cmd;
+ struct hinic_cmd_get_light_module_abs sfp_abs = {0};
+ u8 port_id = hinic_physical_port_id(hwdev);
+ u16 out_size = sizeof(sfp_abs);
+ int err;
+ bool sfp_abs_valid;
+ bool sfp_abs_status;
+
+ rt_cmd = &chip_node->rt_cmd[port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ sfp_abs_valid = rt_cmd->up_send_sfp_abs;
+ sfp_abs_status = (bool)rt_cmd->abs.abs_status;
+ if (sfp_abs_valid) {
+ mutex_unlock(&chip_node->sfp_mutex);
+ return sfp_abs_status;
+ }
+ mutex_unlock(&chip_node->sfp_mutex);
+
+ sfp_abs.port_id = port_id;
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC_PORT_CMD_GET_SFP_ABS,
+ &sfp_abs, sizeof(sfp_abs), &sfp_abs,
+ &out_size);
+ if (sfp_abs.status || err || !out_size) {
+ nic_err(((struct hinic_hwdev *)hwdev)->dev_hdl,
+ "Failed to get port%d sfp absent status, err: %d, status: 0x%x, out size: 0x%x\n",
+ port_id, err, sfp_abs.status, out_size);
+ return true;
+ }
+
+ return ((sfp_abs.abs_status == 0) ? false : true);
+}
+
+int hinic_get_sfp_eeprom(void *hwdev, u8 *data, u16 *len)
+{
+ struct hinic_cmd_get_std_sfp_info sfp_info = {0};
+ u8 port_id;
+ u16 out_size = sizeof(sfp_info);
+ int err;
+
+ if (!hwdev || !data || !len)
+ return -EINVAL;
+
+ port_id = hinic_physical_port_id(hwdev);
+ if (port_id >= HINIC_MAX_PORT_ID)
+ return -EINVAL;
+
+ if (hinic_if_sfp_absent(hwdev))
+ return -ENXIO;
+
+ sfp_info.port_id = port_id;
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC_PORT_CMD_GET_STD_SFP_INFO,
+ &sfp_info, sizeof(sfp_info), &sfp_info,
+ &out_size);
+ if (sfp_info.status || err || !out_size) {
+ nic_err(((struct hinic_hwdev *)hwdev)->dev_hdl,
+ "Failed to get port%d sfp eeprom information, err: %d, status: 0x%x, out size: 0x%x\n",
+ port_id, err, sfp_info.status, out_size);
+ return -EIO;
+ }
+
+ *len = min_t(u16, sfp_info.eeprom_len, STD_SFP_INFO_MAX_SIZE);
+ memcpy(data, sfp_info.sfp_info, STD_SFP_INFO_MAX_SIZE);
+
+ return 0;
+}
+
+int hinic_get_sfp_type(void *hwdev, u8 *data0, u8 *data1)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic_port_routine_cmd *rt_cmd;
+ u8 sfp_data[STD_SFP_INFO_MAX_SIZE];
+ u16 len;
+ u8 port_id;
+ int err;
+
+ if (!hwdev || !data0 || !data1)
+ return -EINVAL;
+
+ port_id = hinic_physical_port_id(hwdev);
+ if (port_id >= HINIC_MAX_PORT_ID)
+ return -EINVAL;
+
+ if (hinic_if_sfp_absent(hwdev))
+ return -ENXIO;
+
+ chip_node = ((struct hinic_hwdev *)hwdev)->chip_node;
+ rt_cmd = &chip_node->rt_cmd[port_id];
+ mutex_lock(&chip_node->sfp_mutex);
+ if (rt_cmd->up_send_sfp_info) {
+ *data0 = rt_cmd->sfp_info.sfp_qsfp_info[0];
+ *data1 = rt_cmd->sfp_info.sfp_qsfp_info[1];
+ mutex_unlock(&chip_node->sfp_mutex);
+ return 0;
+ }
+ mutex_unlock(&chip_node->sfp_mutex);
+
+ err = hinic_get_sfp_eeprom(hwdev, sfp_data, &len);
+ if (err)
+ return err;
+
+ *data0 = sfp_data[0];
+ *data1 = sfp_data[1];
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
index 395f6d8..eb060fb 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_cfg.h
@@ -620,5 +620,7 @@ int hinic_add_hw_rqfilter(void *hwdev,
struct hinic_rq_filter_info *filter_info);
int hinic_del_hw_rqfilter(void *hwdev,
struct hinic_rq_filter_info *filter_info);
+int hinic_get_sfp_eeprom(void *hwdev, u8 *data, u16 *len);
+int hinic_get_sfp_type(void *hwdev, u8 *data0, u8 *data1);
#endif
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c b/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c
index 8d13105..1cb4e26 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nic_io.c
@@ -829,13 +829,20 @@ int hinic_init_nic_hwdev(void *hwdev, u16 rx_buff_len)
}
}
}
- return 0;
+
+ /* VFs don't set port routine command report */
+ if (hinic_func_type(dev) != TYPE_VF)
+ /* Inform mgmt to send sfp's information to driver */
+ err = hinic_set_port_routine_cmd_report(hwdev, true);
+
+ return err;
}
EXPORT_SYMBOL(hinic_init_nic_hwdev);
void hinic_free_nic_hwdev(void *hwdev)
{
- /* nothing to do for now */
+ if (hinic_func_type(hwdev) != TYPE_VF)
+ hinic_set_port_routine_cmd_report(hwdev, false);
}
EXPORT_SYMBOL(hinic_free_nic_hwdev);
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h b/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h
index 8730565..46464b1f 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_port_cmd.h
@@ -146,6 +146,8 @@ enum hinic_port_cmd {
HINIC_PORT_CMD_SET_PORT_LINK_STATUS = 0x76,
HINIC_PORT_CMD_SET_CGE_PAUSE_TIME_CFG = 0x77,
+ HINIC_PORT_CMD_SET_PORT_REPORT = 0x7B,
+
HINIC_PORT_CMD_LINK_STATUS_REPORT = 0xa0,
HINIC_PORT_CMD_SET_LOSSLESS_ETH = 0xa3,
@@ -200,6 +202,7 @@ enum hinic_port_cmd {
HINIC_PORT_CMD_SET_LINK_FOLLOW = 0xF8,
HINIC_PORT_CMD_SET_VF_MAX_MIN_RATE = 0xF9,
HINIC_PORT_CMD_SET_RXQ_LRO_ADPT = 0xFA,
+ HINIC_PORT_CMD_GET_SFP_ABS = 0xFB,
HINIC_PORT_CMD_Q_FILTER = 0xFC,
HINIC_PORT_CMD_TCAM_FILTER = 0xFE,
HINIC_PORT_CMD_SET_VLAN_FILTER = 0xFF,
--
1.8.3

[PATCH] Revert "consolemap: Fix a memory leaking bug in drivers/tty/vt/consolemap.c"
by Yang Yingliang 01 Jun '20
From: Ben Hutchings <ben(a)decadent.org.uk>
mainline inclusion
from mainline-v5.3-rc1
commit 15b3cd8ef46ad1b100e0d3c7e38774f330726820
category: bugfix
bugzilla: 34617
CVE: CVE-2019-12379
---------------------------
This reverts commit 84ecc2f6eb1cb12e6d44818f94fa49b50f06e6ac.
con_insert_unipair() is working with a sparse 3-dimensional array:
- p->uni_pgdir[] is the top layer
- p1 points to a middle layer
- p2 points to a bottom layer
If it needs to allocate a new middle layer, and then fails to allocate
a new bottom layer, it would previously free only p2, and now it frees
both p1 and p2. But since the new middle layer was already registered
in the top layer, it was not leaked.
However, if it looks up an *existing* middle layer and then fails to
allocate a bottom layer, it now frees both p1 and p2 but does *not*
free any other bottom layers under p1. So it *introduces* a memory
leak.
The error path also cleared the wrong index in p->uni_pgdir[],
introducing a use-after-free.
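For illustration, a userspace sketch of the three-level sparse table
and of the error path the revert restores (names follow the commit
message; the sizes are illustrative, not the driver's exact ones):

#include <stdlib.h>

static unsigned short **top[32];	/* stands in for p->uni_pgdir[] */

static int insert(unsigned int unicode)
{
	unsigned short **p1, *p2;
	int n;

	p1 = top[n = unicode >> 11];		/* middle layer */
	if (!p1) {
		p1 = top[n] = calloc(32, sizeof(*p1));
		if (!p1)
			return -1;
	}
	p2 = p1[n = (unicode >> 6) & 0x1f];	/* bottom layer */
	if (!p2) {
		p2 = p1[n] = malloc(64 * sizeof(*p2));
		if (!p2)
			return -1;	/* p1 stays registered in top[]:
					 * freeing it here would leak its
					 * other bottom layers and leave a
					 * dangling top-level pointer */
	}
	return 0;
}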
Signed-off-by: Ben Hutchings <ben(a)decadent.org.uk>
Fixes: 84ecc2f6eb1c ("consolemap: Fix a memory leaking bug in drivers/tty/vt/consolemap.c")
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Zheng Bin <zhengbin13(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/tty/vt/consolemap.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/tty/vt/consolemap.c b/drivers/tty/vt/consolemap.c
index 814d1b7..7c7ada0 100644
--- a/drivers/tty/vt/consolemap.c
+++ b/drivers/tty/vt/consolemap.c
@@ -489,11 +489,7 @@ static int con_unify_unimap(struct vc_data *conp, struct uni_pagedir *p)
p2 = p1[n = (unicode >> 6) & 0x1f];
if (!p2) {
p2 = p1[n] = kmalloc_array(64, sizeof(u16), GFP_KERNEL);
- if (!p2) {
- kfree(p1);
- p->uni_pgdir[n] = NULL;
- return -ENOMEM;
- }
+ if (!p2) return -ENOMEM;
memset(p2, 0xff, 64*sizeof(u16)); /* No glyphs for the characters (yet) */
}
--
1.8.3

26 May '20
From: NeilBrown <neilb(a)suse.de>
hulk inclusion
category: bugfix
bugzilla: NA
CVE: CVE-2020-12656
---------------------------
The domain table should be empty at module unload. If it isn't there is
a bug somewhere. So check and report.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206651
Signed-off-by: NeilBrown <neilb(a)suse.de>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Wenan Mao <maowenan(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/sunrpc/sunrpc.h | 1 +
net/sunrpc/sunrpc_syms.c | 2 ++
net/sunrpc/svcauth.c | 27 +++++++++++++++++++++++++++
3 files changed, 30 insertions(+)
diff --git a/net/sunrpc/sunrpc.h b/net/sunrpc/sunrpc.h
index c9bacb3..82035fa 100644
--- a/net/sunrpc/sunrpc.h
+++ b/net/sunrpc/sunrpc.h
@@ -56,4 +56,5 @@ int svc_send_common(struct socket *sock, struct xdr_buf *xdr,
int rpc_clients_notifier_register(void);
void rpc_clients_notifier_unregister(void);
+void auth_domain_cleanup(void);
#endif /* _NET_SUNRPC_SUNRPC_H */
diff --git a/net/sunrpc/sunrpc_syms.c b/net/sunrpc/sunrpc_syms.c
index 56f9eff..7b4c6ee 100644
--- a/net/sunrpc/sunrpc_syms.c
+++ b/net/sunrpc/sunrpc_syms.c
@@ -22,6 +22,7 @@
#include <linux/sunrpc/rpc_pipe_fs.h>
#include <linux/sunrpc/xprtsock.h>
+#include "sunrpc.h"
#include "netns.h"
unsigned int sunrpc_net_id;
@@ -130,6 +131,7 @@ static __net_exit void sunrpc_exit_net(struct net *net)
unregister_rpc_pipefs();
rpc_destroy_mempool();
unregister_pernet_subsys(&sunrpc_net_ops);
+ auth_domain_cleanup();
#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
rpc_unregister_sysctl();
#endif
diff --git a/net/sunrpc/svcauth.c b/net/sunrpc/svcauth.c
index bb8db3c..0107350 100644
--- a/net/sunrpc/svcauth.c
+++ b/net/sunrpc/svcauth.c
@@ -18,6 +18,10 @@
#include <linux/err.h>
#include <linux/hash.h>
+#include <trace/events/sunrpc.h>
+
+#include "sunrpc.h"
+
#define RPCDBG_FACILITY RPCDBG_AUTH
@@ -170,3 +174,26 @@ struct auth_domain *auth_domain_find(char *name)
return auth_domain_lookup(name, NULL);
}
EXPORT_SYMBOL_GPL(auth_domain_find);
+
+/**
+ * auth_domain_cleanup - check that the auth_domain table is empty
+ *
+ * On module unload the auth_domain_table must be empty. To make it
+ * easier to catch bugs which don't clean up domains properly, we
+ * warn if anything remains in the table at cleanup time.
+ *
+ * Note that we cannot proactively remove the domains at this stage.
+ * The ->release() function might be in a module that has already been
+ * unloaded.
+ */
+
+void auth_domain_cleanup(void)
+{
+ int h;
+ struct auth_domain *hp;
+
+ for (h = 0; h < DN_HASHMAX; h++)
+ hlist_for_each_entry(hp, &auth_domain_table[h], hash)
+ pr_warn("svc: domain %s still present at module unload.\n",
+ hp->name);
+}
--
1.8.3

26 May '20
From: Kyungtae Kim <kt0755(a)gmail.com>
mainline inclusion
from mainline-v5.7-rc6
commit 15753588bcd4bbffae1cca33c8ced5722477fe1f
category: bugfix
bugzilla: 13690
CVE: CVE-2020-13143
-------------------------------------------------
FuzzUSB (a variant of syzkaller) found an illegal array access
using an incorrect index while binding a gadget with UDC.
Reference: https://www.spinics.net/lists/linux-usb/msg194331.html
This bug occurs when a size variable used for a buffer
is misused to access its strcpy-ed buffer.
Given a buffer along with its size variable (taken from user input),
a new buffer is created from it using kstrdup().
Because the original buffer contains a 0 value in the middle,
the size of the kstrdup-ed buffer becomes smaller than that of the original.
So accessing the kstrdup-ed buffer with the same size variable
triggers a memory access violation.
The fix makes sure there is no zero value in the buffer,
by comparing the strlen() of the original buffer with the size variable,
so that the access to the kstrdup-ed buffer is safe.
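A userspace analog of the bug class, as a sketch (strdup() mirrors
kstrdup() in stopping at the first NUL; names and sizes here are
illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* user-supplied buffer with an embedded NUL, reported length 6 */
	char page[8] = { 'u', 'd', 'c', '\0', 'X', 'X', '\0', '\0' };
	size_t len = 6;
	char *name;

	if (strlen(page) < len) {	/* the added check */
		printf("embedded NUL: reject with -EOVERFLOW\n");
		return 1;
	}
	name = strdup(page);	/* would copy only "udc" (4 bytes) */
	/* without the check, name[len - 1] would read past the copy */
	free(name);
	return 0;
}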
BUG: KASAN: slab-out-of-bounds in gadget_dev_desc_UDC_store+0x1ba/0x200
drivers/usb/gadget/configfs.c:266
Read of size 1 at addr ffff88806a55dd7e by task syz-executor.0/17208
CPU: 2 PID: 17208 Comm: syz-executor.0 Not tainted 5.6.8 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0xce/0x128 lib/dump_stack.c:118
print_address_description.constprop.4+0x21/0x3c0 mm/kasan/report.c:374
__kasan_report+0x131/0x1b0 mm/kasan/report.c:506
kasan_report+0x12/0x20 mm/kasan/common.c:641
__asan_report_load1_noabort+0x14/0x20 mm/kasan/generic_report.c:132
gadget_dev_desc_UDC_store+0x1ba/0x200 drivers/usb/gadget/configfs.c:266
flush_write_buffer fs/configfs/file.c:251 [inline]
configfs_write_file+0x2f1/0x4c0 fs/configfs/file.c:283
__vfs_write+0x85/0x110 fs/read_write.c:494
vfs_write+0x1cd/0x510 fs/read_write.c:558
ksys_write+0x18a/0x220 fs/read_write.c:611
__do_sys_write fs/read_write.c:623 [inline]
__se_sys_write fs/read_write.c:620 [inline]
__x64_sys_write+0x73/0xb0 fs/read_write.c:620
do_syscall_64+0x9e/0x510 arch/x86/entry/common.c:294
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Signed-off-by: Kyungtae Kim <kt0755(a)gmail.com>
Reported-and-tested-by: Kyungtae Kim <kt0755(a)gmail.com>
Cc: Felipe Balbi <balbi(a)kernel.org>
Cc: stable <stable(a)vger.kernel.org>
Link: https://lore.kernel.org/r/20200510054326.GA19198@pizza01
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/usb/gadget/configfs.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
index ab9ac48..a7709d1 100644
--- a/drivers/usb/gadget/configfs.c
+++ b/drivers/usb/gadget/configfs.c
@@ -260,6 +260,9 @@ static ssize_t gadget_dev_desc_UDC_store(struct config_item *item,
char *name;
int ret;
+ if (strlen(page) < len)
+ return -EOVERFLOW;
+
name = kstrdup(page, GFP_KERNEL);
if (!name)
return -ENOMEM;
--
1.8.3
hulk inclusion
category: kabi
bugzilla: NA
CVE: CVE-2020-10741, CVE-2020-12826
---------------------------
Commit d1e7fd6462ca ("signal: Extend exec_id to 64bits") fixes
CVE-2020-10741 and CVE-2020-12826, but it introduces a kABI change
in struct task_struct. Fix this kABI breakage by using two new
64-bit variables, parent_exec_id_u64 and self_exec_id_u64, instead.
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/exec.c | 1 +
include/linux/sched.h | 9 +++++++--
kernel/fork.c | 2 ++
kernel/signal.c | 2 +-
4 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/fs/exec.c b/fs/exec.c
index 15d9974..19c0700 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1380,6 +1380,7 @@ void setup_new_exec(struct linux_binprm * bprm)
/* An exec changes our domain. We are no longer part of the thread
group */
WRITE_ONCE(current->self_exec_id, current->self_exec_id + 1);
+ WRITE_ONCE(current->self_exec_id_u64, current->self_exec_id_u64 + 1);
flush_signal_handlers(current, 0);
}
EXPORT_SYMBOL(setup_new_exec);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 1d15ab4..302fa00 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -892,8 +892,8 @@ struct task_struct {
struct seccomp seccomp;
/* Thread group tracking: */
- u64 parent_exec_id;
- u64 self_exec_id;
+ u32 parent_exec_id;
+ u32 self_exec_id;
/* Protection against (de-)allocation: mm, files, fs, tty, keyrings, mems_allowed, mempolicy: */
spinlock_t alloc_lock;
@@ -1212,8 +1212,13 @@ struct task_struct {
*/
randomized_struct_fields_end
+#ifndef __GENKSYMS__
+ u64 parent_exec_id_u64;
+ u64 self_exec_id_u64;
+#else
KABI_RESERVE(1)
KABI_RESERVE(2)
+#endif
KABI_RESERVE(3)
KABI_RESERVE(4)
KABI_RESERVE(5)
diff --git a/kernel/fork.c b/kernel/fork.c
index 2839961..951aa6f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2055,9 +2055,11 @@ static __latent_entropy struct task_struct *copy_process(
if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) {
p->real_parent = current->real_parent;
p->parent_exec_id = current->parent_exec_id;
+ p->parent_exec_id_u64 = current->parent_exec_id_u64;
} else {
p->real_parent = current;
p->parent_exec_id = current->self_exec_id;
+ p->parent_exec_id_u64 = current->self_exec_id_u64;
}
klp_copy_process(p);
diff --git a/kernel/signal.c b/kernel/signal.c
index 60ea2ee..a58af7d 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -1827,7 +1827,7 @@ bool do_notify_parent(struct task_struct *tsk, int sig)
* This is only possible if parent == real_parent.
* Check if it has changed security domain.
*/
- if (tsk->parent_exec_id != READ_ONCE(tsk->parent->self_exec_id))
+ if (tsk->parent_exec_id_u64 != READ_ONCE(tsk->parent->self_exec_id_u64))
sig = SIGCHLD;
}
--
1.8.3

[PATCH 01/14] net: hns3: solve the unlock 2 times when rocee init fault
by Yang Yingliang 20 May '20
From: Hao Shen <shenhao21(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
--------------------------------------------------------
When RoCE initialization fails, rtnl_unlock() is executed twice.
Signed-off-by: Hao Shen <shenhao21(a)huawei.com>
Reviewed-by: Zhong Zhaohui <zhongzhaohui(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 0b7e159..3e0afc4 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -3865,7 +3865,7 @@ static int hclge_resume(struct hnae3_ae_dev *ae_dev)
ret = hclge_notify_roce_client(hdev, HNAE3_INIT_CLIENT);
if (ret)
- goto err_reset_lock;
+ return ret;
rtnl_lock();
--
1.8.3

13 May '20
From: Miaohe Lin <linmiaohe(a)huawei.com>
mainline inclusion
from mainline-v5.6-rc4
commit d80b64ff297e40c2b6f7d7abc1b3eba70d22a068
category: bugfix
bugzilla: 13690
CVE: CVE-2020-12768
-------------------------------------------------
When kmalloc for sd->sev_vmcbs fails, we forget to free the page
held by sd->save_area. Also get rid of the variable r, as '-ENOMEM'
is actually the only possible outcome here.
Reviewed-by: Liran Alon <liran.alon(a)oracle.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets(a)redhat.com>
Signed-off-by: Miaohe Lin <linmiaohe(a)huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini(a)redhat.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/x86/kvm/svm.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index df22744..226db3dc 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -998,33 +998,32 @@ static void svm_cpu_uninit(int cpu)
static int svm_cpu_init(int cpu)
{
struct svm_cpu_data *sd;
- int r;
sd = kzalloc(sizeof(struct svm_cpu_data), GFP_KERNEL);
if (!sd)
return -ENOMEM;
sd->cpu = cpu;
- r = -ENOMEM;
sd->save_area = alloc_page(GFP_KERNEL);
if (!sd->save_area)
- goto err_1;
+ goto free_cpu_data;
if (svm_sev_enabled()) {
- r = -ENOMEM;
sd->sev_vmcbs = kmalloc_array(max_sev_asid + 1,
sizeof(void *),
GFP_KERNEL);
if (!sd->sev_vmcbs)
- goto err_1;
+ goto free_save_area;
}
per_cpu(svm_data, cpu) = sd;
return 0;
-err_1:
+free_save_area:
+ __free_page(sd->save_area);
+free_cpu_data:
kfree(sd);
- return r;
+ return -ENOMEM;
}
--
1.8.3

13 May '20
hulk inclusion
category: config
bugzilla: NA
CVE: NA
-------------------------------------------------
CONFIG_ARM64_ERRATUM_1542419 was introduced by commit 05460849c3b5
("arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1
#1542419"). Disable it by default.
Reviewed-by: Xuefeng Wang <wxf.wang(a)hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/configs/storage_ci_defconfig | 1 +
arch/arm64/configs/syzkaller_defconfig | 1 +
5 files changed, 5 insertions(+)
diff --git a/arch/arm64/configs/euleros_defconfig b/arch/arm64/configs/euleros_defconfig
index 04456f3..360e692 100644
--- a/arch/arm64/configs/euleros_defconfig
+++ b/arch/arm64/configs/euleros_defconfig
@@ -387,6 +387,7 @@ CONFIG_ARM64_ERRATUM_845719=y
CONFIG_ARM64_ERRATUM_843419=y
CONFIG_ARM64_ERRATUM_1024718=y
# CONFIG_ARM64_ERRATUM_1463225 is not set
+# CONFIG_ARM64_ERRATUM_1542419 is not set
CONFIG_CAVIUM_ERRATUM_22375=y
CONFIG_CAVIUM_ERRATUM_23144=y
CONFIG_CAVIUM_ERRATUM_23154=y
diff --git a/arch/arm64/configs/hulk_defconfig b/arch/arm64/configs/hulk_defconfig
index 6ee2472..bb020a1 100644
--- a/arch/arm64/configs/hulk_defconfig
+++ b/arch/arm64/configs/hulk_defconfig
@@ -386,6 +386,7 @@ CONFIG_ARM64_ERRATUM_845719=y
# CONFIG_ARM64_ERRATUM_843419 is not set
# CONFIG_ARM64_ERRATUM_1024718 is not set
# CONFIG_ARM64_ERRATUM_1463225 is not set
+# CONFIG_ARM64_ERRATUM_1542419 is not set
# CONFIG_CAVIUM_ERRATUM_22375 is not set
# CONFIG_CAVIUM_ERRATUM_23144 is not set
# CONFIG_CAVIUM_ERRATUM_23154 is not set
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 9e6f560..b943729 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -389,6 +389,7 @@ CONFIG_ARM64_ERRATUM_845719=y
CONFIG_ARM64_ERRATUM_843419=y
CONFIG_ARM64_ERRATUM_1024718=y
# CONFIG_ARM64_ERRATUM_1463225 is not set
+# CONFIG_ARM64_ERRATUM_1542419 is not set
CONFIG_CAVIUM_ERRATUM_22375=y
CONFIG_CAVIUM_ERRATUM_23144=y
CONFIG_CAVIUM_ERRATUM_23154=y
diff --git a/arch/arm64/configs/storage_ci_defconfig b/arch/arm64/configs/storage_ci_defconfig
index 2ad533d..b7f23ca 100644
--- a/arch/arm64/configs/storage_ci_defconfig
+++ b/arch/arm64/configs/storage_ci_defconfig
@@ -373,6 +373,7 @@ CONFIG_ARM64_ERRATUM_845719=y
CONFIG_ARM64_ERRATUM_843419=y
CONFIG_ARM64_ERRATUM_1024718=y
# CONFIG_ARM64_ERRATUM_1463225 is not set
+# CONFIG_ARM64_ERRATUM_1542419 is not set
CONFIG_CAVIUM_ERRATUM_22375=y
CONFIG_CAVIUM_ERRATUM_23144=y
CONFIG_CAVIUM_ERRATUM_23154=y
diff --git a/arch/arm64/configs/syzkaller_defconfig b/arch/arm64/configs/syzkaller_defconfig
index 509713a..eaf5e32 100644
--- a/arch/arm64/configs/syzkaller_defconfig
+++ b/arch/arm64/configs/syzkaller_defconfig
@@ -385,6 +385,7 @@ CONFIG_ARM64_ERRATUM_845719=y
# CONFIG_ARM64_ERRATUM_843419 is not set
# CONFIG_ARM64_ERRATUM_1024718 is not set
# CONFIG_ARM64_ERRATUM_1463225 is not set
+# CONFIG_ARM64_ERRATUM_1542419 is not set
# CONFIG_CAVIUM_ERRATUM_22375 is not set
# CONFIG_CAVIUM_ERRATUM_23144 is not set
# CONFIG_CAVIUM_ERRATUM_23154 is not set
--
1.8.3

[PATCH 01/33] vhost: vsock: kick send_pkt worker once device is started
by Yang Yingliang 13 May '20
From: Jia He <justin.he(a)arm.com>
commit 0b841030625cde5f784dd62aec72d6a766faae70 upstream.
Ning Bo reported an abnormal 2-second gap when booting a Kata container [1].
The unconditional timeout was caused by the VSOCK_DEFAULT_CONNECT_TIMEOUT
on the connecting (client) side. The vhost vsock client tries to connect
to an initializing virtio vsock server.
The abnormal flow looks like:
host-userspace           vhost vsock                       guest vsock
==============           ===========                       ============
connect() ------------>  vhost_transport_send_pkt_work()   initializing
    |                    vq->private_data==NULL
    |                    will not be queued
    V
schedule_timeout(2s)
                         vhost_vsock_start() <------------ device ready
                         set vq->private_data
wait for 2s and failed
connect() again          vq->private_data!=NULL            recv connecting pkt
Details:
1. Host userspace sends a connect pkt; at that time, guest vsock is still
initializing, hence vhost_vsock_start has not been called. So
vq->private_data==NULL, and the pkt is not queued to be sent to the guest
2. Then it sleeps for 2s
3. After guest vsock finishes initializing, vq->private_data is set
4. When host userspace wakes up after 2s and sends the connecting pkt again,
everything is fine.
As suggested by Stefano Garzarella, this fixes it by additionally kicking
the send_pkt worker in vhost_vsock_start() once the virtio device is
started. This makes the pending pkt be sent again.
After this patch, kata-runtime (with vsock enabled) boot time is reduced
from 3s to 1s on a ThunderX2 arm64 server.
[1] https://github.com/kata-containers/runtime/issues/1917
Reported-by: Ning Bo <n.b(a)live.com>
Suggested-by: Stefano Garzarella <sgarzare(a)redhat.com>
Signed-off-by: Jia He <justin.he(a)arm.com>
Link: https://lore.kernel.org/r/20200501043840.186557-1-justin.he@arm.com
Signed-off-by: Michael S. Tsirkin <mst(a)redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/vhost/vsock.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 5f5c5de..bac1365 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -499,6 +499,11 @@ static int vhost_vsock_start(struct vhost_vsock *vsock)
mutex_unlock(&vq->mutex);
}
+ /* Some packets may have been queued before the device was started,
+ * let's kick the send worker to send them.
+ */
+ vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);
+
mutex_unlock(&vsock->dev.mutex);
return 0;
--
1.8.3

12 May '20
From: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
commit 6292b8efe32e6be408af364132f09572aed14382 upstream.
The DispID DTD pixel clock is documented as:
"00 00 00 h → FF FF FF h | Pixel clock ÷ 10,000 0.01 → 167,772.16 Mega Pixels per Sec"
Which seems to imply that we to add one to the raw value.
Reality seems to agree as there are tiled displays in the wild
which currently show a 10kHz difference in the pixel clock
between the tiles (one tile gets its mode from the base EDID,
the other from the DispID block).
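A sketch of the resulting decode (the 3-byte field layout matches the
timings->pixel_clock[] usage in the diff below; the +1 is the point of
this fix):

/* returns the DispID DTD pixel clock in kHz */
static unsigned int displayid_pixel_clock_khz(const unsigned char pc[3])
{
	unsigned int raw = pc[0] | (pc[1] << 8) | (pc[2] << 16);

	/* raw 0x000000 -> 10 kHz, raw 0xFFFFFF -> 167,772.16 MHz */
	return (raw + 1) * 10;
}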
Cc: stable(a)vger.kernel.org
References: https://gitlab.freedesktop.org/drm/intel/-/issues/27
Signed-off-by: Ville Syrjälä <ville.syrjala(a)linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200423151743.18767-1-ville.…
Reviewed-by: Manasi Navare <manasi.d.navare(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/gpu/drm/drm_edid.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index f5926bf5..d5dcee7 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -4706,7 +4706,7 @@ static struct drm_display_mode *drm_mode_displayid_detailed(struct drm_device *d
struct drm_display_mode *mode;
unsigned pixel_clock = (timings->pixel_clock[0] |
(timings->pixel_clock[1] << 8) |
- (timings->pixel_clock[2] << 16));
+ (timings->pixel_clock[2] << 16)) + 1;
unsigned hactive = (timings->hactive[0] | timings->hactive[1] << 8) + 1;
unsigned hblank = (timings->hblank[0] | timings->hblank[1] << 8) + 1;
unsigned hsync = (timings->hsync[0] | (timings->hsync[1] & 0x7f) << 8) + 1;
--
1.8.3

12 May '20
From: Clement Leger <cleger(a)kalray.eu>
commit 00a0eec59ddbb1ce966b19097d8a8d2f777e726a upstream.
Index of rvring is computed using pointer arithmetic. However, since
rvring->rvdev->vring is the base of the vring array, the computation
of the rvring idx should be reversed. It previously led to writing at
negative indices in the resource table.
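A minimal standalone illustration of the pointer arithmetic (types
simplified):

#include <stdio.h>

struct vring_stub { int dummy; };

int main(void)
{
	struct vring_stub vring[2];		/* base of the array */
	struct vring_stub *rvring = &vring[1];	/* some element */

	printf("wrong: %td\n", vring - rvring);	/* -1: negative index */
	printf("right: %td\n", rvring - vring);	/*  1: the real index */
	return 0;
}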
Signed-off-by: Clement Leger <cleger(a)kalray.eu>
Link: https://lore.kernel.org/r/20191004073736.8327-1-cleger@kalray.eu
Signed-off-by: Bjorn Andersson <bjorn.andersson(a)linaro.org>
Cc: Doug Anderson <dianders(a)chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/remoteproc/remoteproc_core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
index abbef17..d5ff272 100644
--- a/drivers/remoteproc/remoteproc_core.c
+++ b/drivers/remoteproc/remoteproc_core.c
@@ -289,7 +289,7 @@ void rproc_free_vring(struct rproc_vring *rvring)
{
int size = PAGE_ALIGN(vring_size(rvring->len, rvring->align));
struct rproc *rproc = rvring->rvdev->rproc;
- int idx = rvring->rvdev->vring - rvring;
+ int idx = rvring - rvring->rvdev->vring;
struct fw_rsc_vdev *rsc;
dma_free_coherent(rproc->dev.parent, size, rvring->va, rvring->dma);
--
1.8.3

12 May '20
From: Dmitry Monakhov <dmonakhov(a)gmail.com>
[ Upstream commit 4068664e3cd2312610ceac05b74c4cf1853b8325 ]
Extents are cached in read_extent_tree_block(); as a result, extents
are not cached for inodes with depth == 0 when we try to find the
extent using ext4_find_extent(). The result of the lookup is cached
in ext4_map_blocks() but is only a subset of the extent on disk. As a
result, the contents of extents status cache can get very badly
fragmented for certain workloads, such as a random 4k read workload.
File size of /mnt/test is 33554432 (8192 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 8191: 40960.. 49151: 8192: last,eof
$ perf record -e 'ext4:ext4_es_*' /root/bin/fio --name=t --direct=0 --rw=randread --bs=4k --filesize=32M --size=32M --filename=/mnt/test
$ perf script | grep ext4_es_insert_extent | head -n 10
fio 131 [000] 13.975421: ext4:ext4_es_insert_extent: dev 253,0 ino 12 es [494/1) mapped 41454 status W
fio 131 [000] 13.975939: ext4:ext4_es_insert_extent: dev 253,0 ino 12 es [6064/1) mapped 47024 status W
fio 131 [000] 13.976467: ext4:ext4_es_insert_extent: dev 253,0 ino 12 es [6907/1) mapped 47867 status W
fio 131 [000] 13.976937: ext4:ext4_es_insert_extent: dev 253,0 ino 12 es [3850/1) mapped 44810 status W
fio 131 [000] 13.977440: ext4:ext4_es_insert_extent: dev 253,0 ino 12 es [3292/1) mapped 44252 status W
fio 131 [000] 13.977931: ext4:ext4_es_insert_extent: dev 253,0 ino 12 es [6882/1) mapped 47842 status W
fio 131 [000] 13.978376: ext4:ext4_es_insert_extent: dev 253,0 ino 12 es [3117/1) mapped 44077 status W
fio 131 [000] 13.978957: ext4:ext4_es_insert_extent: dev 253,0 ino 12 es [2896/1) mapped 43856 status W
fio 131 [000] 13.979474: ext4:ext4_es_insert_extent: dev 253,0 ino 12 es [7479/1) mapped 48439 status W
Fix this by caching the extents for inodes with depth == 0 in
ext4_find_extent().
[ Renamed ext4_es_cache_extents() to ext4_cache_extents() since this
newly added function is not in extents_cache.c, and to avoid
potential visual confusion with ext4_es_cache_extent(). -TYT ]
Signed-off-by: Dmitry Monakhov <dmonakhov(a)gmail.com>
Link: https://lore.kernel.org/r/20191106122502.19986-1-dmonakhov@gmail.com
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
fs/ext4/extents.c | 47 +++++++++++++++++++++++++++--------------------
1 file changed, 27 insertions(+), 20 deletions(-)
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 90862ee..0f17c54 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -510,6 +510,30 @@ int ext4_ext_check_inode(struct inode *inode)
return ext4_ext_check(inode, ext_inode_hdr(inode), ext_depth(inode), 0);
}
+static void ext4_cache_extents(struct inode *inode,
+ struct ext4_extent_header *eh)
+{
+ struct ext4_extent *ex = EXT_FIRST_EXTENT(eh);
+ ext4_lblk_t prev = 0;
+ int i;
+
+ for (i = le16_to_cpu(eh->eh_entries); i > 0; i--, ex++) {
+ unsigned int status = EXTENT_STATUS_WRITTEN;
+ ext4_lblk_t lblk = le32_to_cpu(ex->ee_block);
+ int len = ext4_ext_get_actual_len(ex);
+
+ if (prev && (prev != lblk))
+ ext4_es_cache_extent(inode, prev, lblk - prev, ~0,
+ EXTENT_STATUS_HOLE);
+
+ if (ext4_ext_is_unwritten(ex))
+ status = EXTENT_STATUS_UNWRITTEN;
+ ext4_es_cache_extent(inode, lblk, len,
+ ext4_ext_pblock(ex), status);
+ prev = lblk + len;
+ }
+}
+
static struct buffer_head *
__read_extent_tree_block(const char *function, unsigned int line,
struct inode *inode, ext4_fsblk_t pblk, int depth,
@@ -544,26 +568,7 @@ int ext4_ext_check_inode(struct inode *inode)
*/
if (!(flags & EXT4_EX_NOCACHE) && depth == 0) {
struct ext4_extent_header *eh = ext_block_hdr(bh);
- struct ext4_extent *ex = EXT_FIRST_EXTENT(eh);
- ext4_lblk_t prev = 0;
- int i;
-
- for (i = le16_to_cpu(eh->eh_entries); i > 0; i--, ex++) {
- unsigned int status = EXTENT_STATUS_WRITTEN;
- ext4_lblk_t lblk = le32_to_cpu(ex->ee_block);
- int len = ext4_ext_get_actual_len(ex);
-
- if (prev && (prev != lblk))
- ext4_es_cache_extent(inode, prev,
- lblk - prev, ~0,
- EXTENT_STATUS_HOLE);
-
- if (ext4_ext_is_unwritten(ex))
- status = EXTENT_STATUS_UNWRITTEN;
- ext4_es_cache_extent(inode, lblk, len,
- ext4_ext_pblock(ex), status);
- prev = lblk + len;
- }
+ ext4_cache_extents(inode, eh);
}
return bh;
errout:
@@ -911,6 +916,8 @@ struct ext4_ext_path *
path[0].p_bh = NULL;
i = depth;
+ if (!(flags & EXT4_EX_NOCACHE) && depth == 0)
+ ext4_cache_extents(inode, eh);
/* walk through the tree */
while (i) {
ext_debug("depth %d: num %d, max %d\n",
--
1.8.3
11 May '20
From: fengsheng <fengsheng5(a)huawei.com>
driver inclusion
category: feature
bugzilla: NA
CVE: NA
1. add interface: sysctl_pmbus_write_common
2. add interface: sysctl_pmbus_read_common
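As a rough usage sketch (illustrative only: the chip id, slave address
and command bytes below are assumptions; the function signatures and
struct pmbus_read_op come from this patch):
	u8 cmd = 0x8b;				/* hypothetical PMBus command */
	u8 vout[2];
	u8 wbuf[3] = { 0x21, 0x00, 0x02 };	/* hypothetical cmd + payload */
	struct pmbus_read_op op = {
		.slave_addr = 0x5a,		/* hypothetical slave address */
		.cmd_len = 1,
		.data_len = sizeof(vout),
		.cmd = &cmd,
		.data = vout,
	};
	int ret;

	ret = sysctl_pmbus_read_common(0, &op);
	if (ret != SYSCTL_ERR_OK)
		return ret;

	ret = sysctl_pmbus_write_common(0, 0x5a, sizeof(wbuf), wbuf);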
Signed-off-by: fengsheng <fengsheng5(a)huawei.com>
Reviewed-by: wuyang <wuyang7(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/soc/hisilicon/sysctl/sysctl_drv.c | 2 +-
drivers/soc/hisilicon/sysctl/sysctl_pmbus.c | 113 ++++++++++++++++++++++------
drivers/soc/hisilicon/sysctl/sysctl_pmbus.h | 16 ++++
3 files changed, 108 insertions(+), 23 deletions(-)
diff --git a/drivers/soc/hisilicon/sysctl/sysctl_drv.c b/drivers/soc/hisilicon/sysctl/sysctl_drv.c
index 3899fac..c4cb157 100644
--- a/drivers/soc/hisilicon/sysctl/sysctl_drv.c
+++ b/drivers/soc/hisilicon/sysctl/sysctl_drv.c
@@ -48,7 +48,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#define DEBUG
-#define SYSCTL_DRIVER_VERSION "1.9.39.1"
+#define SYSCTL_DRIVER_VERSION "1.9.60.0"
unsigned int g_sysctrl_debug;
diff --git a/drivers/soc/hisilicon/sysctl/sysctl_pmbus.c b/drivers/soc/hisilicon/sysctl/sysctl_pmbus.c
index 74d06a8..92eeba6 100644
--- a/drivers/soc/hisilicon/sysctl/sysctl_pmbus.c
+++ b/drivers/soc/hisilicon/sysctl/sysctl_pmbus.c
@@ -190,6 +190,9 @@ int InitPmbus(u8 chip_id)
his_sysctrl_reg_wr(base, I2C_SS_SCL_HCNT_OFFSET, I2C_SS_SCLHCNT);
/* ulSclLow > 1.5us */
his_sysctrl_reg_wr(base, I2C_SS_SCL_LCNT_OFFSET, I2C_SS_SCLLCNT);
+ /* set sda_hold_fs 1us > 250ns */
+ his_sysctrl_reg_wr(base, I2C_SDA_HOLD_OFFSET, I2C_SS_SDA_HOLD_FS);
+
his_sysctrl_reg_wr(base, I2C_ENABLE_OFFSET, 0x1);
debug_sysctrl_print("Initialize Pmbus end\n");
@@ -233,35 +236,36 @@ int sysctl_pmbus_cfg(u8 chip_id, u8 addr, u8 page, u32 slave_addr)
return 0;
}
-int sysctl_pmbus_write(u8 chip_id, u8 addr, u32 slave_addr, u32 data_len, u32 buf)
+int sysctl_pmbus_write_common(u8 chip_id, u32 slave_addr, u32 data_len, u8 *buf)
{
u32 i = 0;
u32 temp = 0;
u32 loop = 0x1000;
- u32 temp_data = addr;
+ u32 temp_data;
void __iomem *base = NULL;
if ((chip_id >= CHIP_ID_NUM_MAX) ||
- (data_len > DATA_NUM_MAX) ||
- (slave_addr >= SLAVE_ADDR_MAX)) {
+ (slave_addr >= SLAVE_ADDR_MAX) ||
+ (!data_len) || (data_len > PMBUS_WRITE_LEN_MAX) ||
+ (!buf)) {
pr_err("[sysctl pmbus] write param err,chipid=0x%x,data_len=0x%x,slave_addr=0x%x!\n",
chip_id, data_len, slave_addr);
return SYSCTL_ERR_PARAM;
}
+ /* clear all interrupt */
base = g_sysctl_pmbus_base[chip_id];
-
his_sysctrl_reg_wr(base, I2C_INTR_RAW_OFFSET, 0x3ffff);
+ /* send: slave_addr[7bit] + write[1bit] */
his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, (0x2 << 0x8) | slave_addr);
- if (data_len != 0) {
- his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, addr);
- for (i = 0; i < data_len - 1; i++)
- his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, 0xff & (buf >> (i * 0x8)));
+ /* write data */
+ for (i = 0; i < data_len - 1; i++)
+ his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, buf[i]);
- temp_data = (0xff & (buf >> (i * 0x8)));
- }
+ /* last data should send stop */
+ temp_data = buf[i];
his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, (0x4 << 0x8) | temp_data);
/* poll until send done */
@@ -273,14 +277,15 @@ int sysctl_pmbus_write(u8 chip_id, u8 addr, u32 slave_addr, u32 data_len, u32 bu
/* send data failed */
if (temp & I2C_TX_ABRT) {
his_sysctrl_reg_rd(base, I2C_TX_ABRT_SRC_REG, &temp);
- pr_err("[sysctl pmbus]write data fail, chip_id:0x%x,slave_addr:0x%x, addr:0x%x!\r\n",
- chip_id, slave_addr, addr);
+ pr_err("[sysctl pmbus]write data fail, chip_id:0x%x,slave_addr:0x%x\r\n",
+ chip_id, slave_addr);
his_sysctrl_reg_rd(base, I2C_CLR_TX_ABRT_REG, &temp);
return SYSCTL_ERR_FAILED;
}
his_sysctrl_reg_rd(base, I2C_STATUS_REG, &temp);
+ /* send done */
if (temp & I2C_TX_FIFO_EMPTY) {
his_sysctrl_reg_rd(base, I2C_TX_FIFO_DATA_NUM_REG, &temp);
if (temp == 0)
@@ -289,8 +294,8 @@ int sysctl_pmbus_write(u8 chip_id, u8 addr, u32 slave_addr, u32 data_len, u32 bu
loop--;
if (loop == 0) {
- pr_err("[sysctl pmbus]write data retry fail, chip_id:0x%x,slave_addr:0x%x, addr:0x%x!\r\n",
- chip_id, slave_addr, addr);
+ pr_err("[sysctl pmbus]write data retry fail, chip_id:0x%x,slave_addr:0x%x\r\n",
+ chip_id, slave_addr);
return SYSCTL_ERR_FAILED;
}
}
@@ -298,7 +303,25 @@ int sysctl_pmbus_write(u8 chip_id, u8 addr, u32 slave_addr, u32 data_len, u32 bu
return SYSCTL_ERR_OK;
}
-static int sysctl_pmbus_read_pre(void __iomem *base, u8 addr, u32 slave_addr, u32 data_len)
+int sysctl_pmbus_write(u8 chip_id, u8 addr, u32 slave_addr, u32 data_len, u32 buf)
+{
+#define TMP_LEN_MAX 5
+ u8 i;
+ u8 tmp[TMP_LEN_MAX] = {0};
+
+ if (data_len > DATA_NUM_MAX) {
+ pr_err("[sysctl pmbus] write param err,data_len=0x%x!\n", data_len);
+ return SYSCTL_ERR_PARAM;
+ }
+
+ tmp[0] = addr;
+ for (i = 0; i < data_len; i++)
+ tmp[i + 1] = (buf >> (i * 0x8)) & 0xff;
+
+ return sysctl_pmbus_write_common(chip_id, slave_addr, data_len + sizeof(addr), &tmp[0]);
+}
+
+static int sysctl_pmbus_read_pre(void __iomem *base, u32 cmd_len, u8 *cmd, u32 slave_addr, u32 data_len)
{
u32 i = 0;
u32 fifo_num = 0;
@@ -309,22 +332,29 @@ static int sysctl_pmbus_read_pre(void __iomem *base, u8 addr, u32 slave_addr, u3
return SYSCTL_ERR_PARAM;
}
+ /* clear all interrupt */
his_sysctrl_reg_wr(base, I2C_INTR_RAW_OFFSET, 0x3ffff);
+ /* clear rx fifo */
his_sysctrl_reg_rd(base, I2C_RXFLR_OFFSET, &fifo_num);
- debug_sysctrl_print("[sysctl_pmbus_read_byte]read pmbus , read empty rx fifo num:%d\r\n", fifo_num);
for (i = 0; i < fifo_num; i++)
his_sysctrl_reg_rd(base, I2C_DATA_CMD_OFFSET, &temp_byte);
- his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, (0x2 << 0x8) | slave_addr);
- his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, addr);
- his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, (0x3 << 0x8) | slave_addr);
+ /* send cmd */
+ if (cmd_len) {
+ his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, (0x2 << 0x8) | slave_addr);
+ for (i = 0; i < cmd_len; i++)
+ his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, cmd[i]);
+ }
+ /* read data */
+ his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, (0x3 << 0x8) | slave_addr);
i = data_len;
while ((i - 1) > 0) {
his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, 0x100);
i--;
}
+ /* last data should send stop */
his_sysctrl_reg_wr(base, I2C_DATA_CMD_OFFSET, 0x500);
return 0;
@@ -366,6 +396,43 @@ static int sysctl_pmbus_wait_data(void __iomem *base, u32 data_len)
return SYSCTL_ERR_OK;
}
+int sysctl_pmbus_read_common(u8 chip_id, struct pmbus_read_op *op)
+{
+ u32 ret;
+ u32 i = 0;
+ u32 temp_byte = 0;
+ void __iomem *base = NULL;
+
+ if ((chip_id >= CHIP_ID_NUM_MAX) || (!op)) {
+ pr_err("[sysctl pmbus]read param err,chipid=0x%x!\n", chip_id);
+ return SYSCTL_ERR_PARAM;
+ }
+
+ if ((op->slave_addr >= SLAVE_ADDR_MAX) ||
+ (!op->data_len) || ((op->cmd_len + op->data_len) > PMBUS_READ_LEN_MAX) ||
+ (!op->data) || ((op->cmd_len) && (!op->cmd))) {
+ pr_err("[sysctl pmbus]read param err,data_len=0x%x,cmd_len=0x%x,slave_addr=0x%x\n",
+ op->data_len, op->cmd_len, op->slave_addr);
+ return SYSCTL_ERR_PARAM;
+ }
+
+ base = g_sysctl_pmbus_base[chip_id];
+ ret = sysctl_pmbus_read_pre(base, op->cmd_len, op->cmd, op->slave_addr, op->data_len);
+ if (ret != SYSCTL_ERR_OK)
+ return ret;
+
+ ret = sysctl_pmbus_wait_data(base, op->data_len);
+ if (ret != SYSCTL_ERR_OK)
+ return ret;
+
+ for (i = 0; i < op->data_len; i++) {
+ his_sysctrl_reg_rd(base, I2C_DATA_CMD_OFFSET, &temp_byte);
+ op->data[i] = temp_byte & 0xff;
+ }
+
+ return SYSCTL_ERR_OK;
+}
+
int sysctl_pmbus_read(u8 chip_id, u8 addr, u32 slave_addr, u32 data_len, u32 *buf)
{
u32 ret;
@@ -385,7 +452,7 @@ int sysctl_pmbus_read(u8 chip_id, u8 addr, u32 slave_addr, u32 data_len, u32 *bu
base = g_sysctl_pmbus_base[chip_id];
- ret = sysctl_pmbus_read_pre(base, addr, slave_addr, data_len);
+ ret = sysctl_pmbus_read_pre(base, sizeof(addr), &addr, slave_addr, data_len);
if (ret != SYSCTL_ERR_OK)
return ret;
@@ -566,7 +633,7 @@ static int sysctl_cpu_convert_vol_to_vid(u32 vid_table, u32 value, u32 *vid)
return SYSCTL_ERR_OK;
}
-int sysctl_cpu_voltage_adjust (u8 chip_id, u8 loop, u32 slave_addr, u32 value)
+int sysctl_cpu_voltage_adjust(u8 chip_id, u8 loop, u32 slave_addr, u32 value)
{
u32 ret;
u32 vid;
@@ -647,5 +714,7 @@ void hip_sysctl_pmbus_exit(void)
EXPORT_SYMBOL(sysctl_cpu_voltage_adjust);
EXPORT_SYMBOL(sysctl_pmbus_write);
EXPORT_SYMBOL(sysctl_pmbus_read);
+EXPORT_SYMBOL(sysctl_pmbus_write_common);
+EXPORT_SYMBOL(sysctl_pmbus_read_common);
EXPORT_SYMBOL(InitPmbus);
EXPORT_SYMBOL(DeInitPmbus);
diff --git a/drivers/soc/hisilicon/sysctl/sysctl_pmbus.h b/drivers/soc/hisilicon/sysctl/sysctl_pmbus.h
index e1a4742..ce1d83d 100644
--- a/drivers/soc/hisilicon/sysctl/sysctl_pmbus.h
+++ b/drivers/soc/hisilicon/sysctl/sysctl_pmbus.h
@@ -20,6 +20,13 @@
#define PAGE_NUM_MAX (0x6f)
#define DATA_NUM_MAX (0x4)
+#define I2C_FIFO_DEPTH (256)
+
+/* slave_addr use 1 fifo */
+#define PMBUS_READ_LEN_MAX (I2C_FIFO_DEPTH - 1)
+/* slave_addr use 2 fifo */
+#define PMBUS_WRITE_LEN_MAX (I2C_FIFO_DEPTH - 2)
+
#define I2C_TX_ABRT (0x040)
#define I2C_TX_ABRT_SRC_REG (0x0880)
#define I2C_CLR_TX_ABRT_REG (0x0854)
@@ -29,6 +36,7 @@
#define I2C_SS_SCLHCNT 0x3db
#define I2C_SS_SCLLCNT 0x3e6
+#define I2C_SS_SDA_HOLD_FS 0xfa
/* AVS_REG_GEN */
#define AVS_WR_OPEN_OFFSET 0x0004
@@ -77,6 +85,14 @@
#define STATUS_RPT_OFFSET 0x0AA4
#define STATUS_ERR_RPT_OFFSET 0x0AA8
+struct pmbus_read_op {
+ u32 slave_addr;
+ u32 cmd_len;
+ u32 data_len;
+ u8 *cmd;
+ u8 *data;
+};
+
/* Define the union pmbus_vout_mode */
typedef union {
/* Define the struct bits */
--
1.8.3
11 May '20
From: Luke Nelson <lukenels(a)cs.washington.edu>
commit 4178417cc5359c329790a4a8f4a6604612338cca upstream.
This patch fixes an incorrect check in how immediate memory offsets are
computed for BPF_DW on arm.
For BPF_LDX/ST/STX + BPF_DW, the 32-bit arm JIT breaks down an 8-byte
access into two separate 4-byte accesses using off+0 and off+4. If off
fits in imm12, the JIT emits a ldr/str instruction with the immediate
and avoids the use of a temporary register. While the current check off
<= 0xfff ensures that the first immediate off+0 doesn't overflow imm12,
it's not sufficient for the second immediate off+4, which may cause the
second access of BPF_DW to read/write the wrong address. For example,
off = 0xffc passes the old check, yet off + 4 = 0x1000 no longer fits
in imm12.
This patch fixes the problem by changing the check to
off <= 0xfff - 4 for BPF_DW, ensuring off+4 will never overflow.
A side effect of simplifying the check is that it now allows using
negative immediate offsets in ldr/str. This means that small negative
offsets can also avoid the use of a temporary register.
This patch introduces no new failures in test_verifier or test_bpf.c.
Fixes: c5eae692571d6 ("ARM: net: bpf: improve 64-bit store implementation")
Fixes: ec19e02b343db ("ARM: net: bpf: fix LDX instructions")
Co-developed-by: Xi Wang <xi.wang(a)gmail.com>
Signed-off-by: Xi Wang <xi.wang(a)gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels(a)gmail.com>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Link: https://lore.kernel.org/bpf/20200409221752.28448-1-luke.r.nels@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/arm/net/bpf_jit_32.c | 40 ++++++++++++++++++++++++----------------
1 file changed, 24 insertions(+), 16 deletions(-)
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 25b3ee8..d06293a 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -993,21 +993,35 @@ static inline void emit_a32_mul_r64(const s8 dst[], const s8 src[],
arm_bpf_put_reg32(dst_hi, rd[0], ctx);
}
+static bool is_ldst_imm(s16 off, const u8 size)
+{
+ s16 off_max = 0;
+
+ switch (size) {
+ case BPF_B:
+ case BPF_W:
+ off_max = 0xfff;
+ break;
+ case BPF_H:
+ off_max = 0xff;
+ break;
+ case BPF_DW:
+ /* Need to make sure off+4 does not overflow. */
+ off_max = 0xfff - 4;
+ break;
+ }
+ return -off_max <= off && off <= off_max;
+}
+
/* *(size *)(dst + off) = src */
static inline void emit_str_r(const s8 dst, const s8 src[],
- s32 off, struct jit_ctx *ctx, const u8 sz){
+ s16 off, struct jit_ctx *ctx, const u8 sz){
const s8 *tmp = bpf2a32[TMP_REG_1];
- s32 off_max;
s8 rd;
rd = arm_bpf_get_reg32(dst, tmp[1], ctx);
- if (sz == BPF_H)
- off_max = 0xff;
- else
- off_max = 0xfff;
-
- if (off < 0 || off > off_max) {
+ if (!is_ldst_imm(off, sz)) {
emit_a32_mov_i(tmp[0], off, ctx);
emit(ARM_ADD_R(tmp[0], tmp[0], rd), ctx);
rd = tmp[0];
@@ -1036,18 +1050,12 @@ static inline void emit_str_r(const s8 dst, const s8 src[],
/* dst = *(size*)(src + off) */
static inline void emit_ldx_r(const s8 dst[], const s8 src,
- s32 off, struct jit_ctx *ctx, const u8 sz){
+ s16 off, struct jit_ctx *ctx, const u8 sz){
const s8 *tmp = bpf2a32[TMP_REG_1];
const s8 *rd = is_stacked(dst_lo) ? tmp : dst;
s8 rm = src;
- s32 off_max;
-
- if (sz == BPF_H)
- off_max = 0xff;
- else
- off_max = 0xfff;
- if (off < 0 || off > off_max) {
+ if (!is_ldst_imm(off, sz)) {
emit_a32_mov_i(tmp[0], off, ctx);
emit(ARM_ADD_R(tmp[0], tmp[0], src), ctx);
rm = tmp[0];
--
1.8.3
openEuler Kernel SIG: Hello!
The kernel community (including Linus) suspects that kfifo has a potential bug; see
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/l…
At the end of last year I submitted an analysis report to the community: I used
litmus tests to verify that the problem is possible, went through all the
kfifo-related functions and macros, and offered suggestions. But I have heard
nothing since; I don't know whether it was lost, or simply not taken seriously
because no bug has been observed in practice.
ARM is a typical weakly-ordered architecture, so I thought of forwarding this
to you for reference. Please ignore it if you are not interested :-)
Regards,
laokz
From: Chiqijun <chiqijun(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
The current driver does not support parsing ipip tunnel packets. When
checksum offload is enabled on the device, the driver gets the wrong
inner headers, causing a device error.
Signed-off-by: Chiqijun <chiqijun(a)huawei.com>
Reviewed-by: Luoshaokai <luoshaokai(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic/hinic_tx.c | 31 ++++++++++++++++++++++------
1 file changed, 25 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
index 05291a5..c8492ae 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
@@ -373,26 +373,45 @@ static int hinic_tx_csum(struct hinic_sq_task *task, u32 *queue_info,
if (ip.v4->version == 4) {
l3_type = IPV4_PKT_NO_CHKSUM_OFFLOAD;
+ l4_proto = ip.v4->protocol;
} else if (ip.v4->version == 6) {
+ unsigned char *exthdr;
+ __be16 frag_off;
+
l3_type = IPV6_PKT;
#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
tunnel_type = TUNNEL_UDP_CSUM;
#endif
+ exthdr = ip.hdr + sizeof(*ip.v6);
+ l4_proto = ip.v6->nexthdr;
+ l4.hdr = skb_transport_header(skb);
+ if (l4.hdr != exthdr)
+ ipv6_skip_exthdr(skb, exthdr - skb->data,
+ &l4_proto, &frag_off);
} else {
l3_type = UNKNOWN_L3TYPE;
+ l4_proto = IPPROTO_RAW;
}
hinic_task_set_outter_l3(task, l3_type,
skb_network_header_len(skb));
- l4_tunnel_len = skb_inner_network_offset(skb) -
- skb_transport_offset(skb);
+ if (l4_proto == IPPROTO_UDP || l4_proto == IPPROTO_GRE) {
+ l4_tunnel_len = skb_inner_network_offset(skb) -
+ skb_transport_offset(skb);
+ ip.hdr = skb_inner_network_header(skb);
+ l4.hdr = skb_inner_transport_header(skb);
+ network_hdr_len = skb_inner_network_header_len(skb);
+ } else {
+ tunnel_type = NOT_TUNNEL;
+ l4_tunnel_len = 0;
- hinic_task_set_tunnel_l4(task, tunnel_type, l4_tunnel_len);
+ ip.hdr = skb_inner_network_header(skb);
+ l4.hdr = skb_transport_header(skb);
+ network_hdr_len = skb_network_header_len(skb);
+ }
- ip.hdr = skb_inner_network_header(skb);
- l4.hdr = skb_inner_transport_header(skb);
- network_hdr_len = skb_inner_network_header_len(skb);
+ hinic_task_set_tunnel_l4(task, tunnel_type, l4_tunnel_len);
} else {
ip.hdr = skb_network_header(skb);
l4.hdr = skb_transport_header(skb);
--
1.8.3
09 May '20
From: Alan Stern <stern(a)rowland.harvard.edu>
mainline inclusion
from mainline-v5.7-rc3
commit 056ad39ee9253873522f6469c3364964a322912b
category: bugfix
bugzilla: 13690
CVE: CVE-2020-12464
-------------------------------------------------
FuzzUSB (a variant of syzkaller) found a free-while-still-in-use bug
in the USB scatter-gather library:
BUG: KASAN: use-after-free in atomic_read
include/asm-generic/atomic-instrumented.h:26 [inline]
BUG: KASAN: use-after-free in usb_hcd_unlink_urb+0x5f/0x170
drivers/usb/core/hcd.c:1607
Read of size 4 at addr ffff888065379610 by task kworker/u4:1/27
CPU: 1 PID: 27 Comm: kworker/u4:1 Not tainted 5.5.11 #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
1.10.2-1ubuntu1 04/01/2014
Workqueue: scsi_tmf_2 scmd_eh_abort_handler
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0xce/0x128 lib/dump_stack.c:118
print_address_description.constprop.4+0x21/0x3c0 mm/kasan/report.c:374
__kasan_report+0x153/0x1cb mm/kasan/report.c:506
kasan_report+0x12/0x20 mm/kasan/common.c:639
check_memory_region_inline mm/kasan/generic.c:185 [inline]
check_memory_region+0x152/0x1b0 mm/kasan/generic.c:192
__kasan_check_read+0x11/0x20 mm/kasan/common.c:95
atomic_read include/asm-generic/atomic-instrumented.h:26 [inline]
usb_hcd_unlink_urb+0x5f/0x170 drivers/usb/core/hcd.c:1607
usb_unlink_urb+0x72/0xb0 drivers/usb/core/urb.c:657
usb_sg_cancel+0x14e/0x290 drivers/usb/core/message.c:602
usb_stor_stop_transport+0x5e/0xa0 drivers/usb/storage/transport.c:937
This bug occurs when cancellation of the S-G transfer races with
transfer completion. When that happens, usb_sg_cancel() may continue
to access the transfer's URBs after usb_sg_wait() has freed them.
The bug is caused by the fact that usb_sg_cancel() does not take any
sort of reference to the transfer, and so there is nothing to prevent
the URBs from being deallocated while the routine is trying to use
them. The fix is to take such a reference by incrementing the
transfer's io->count field while the cancellation is in progress and
decrementing it afterward. The transfer's URBs are not deallocated
until io->complete is triggered, which happens when io->count reaches
zero.
Signed-off-by: Alan Stern <stern(a)rowland.harvard.edu>
Reported-and-tested-by: Kyungtae Kim <kt0755(a)gmail.com>
CC: <stable(a)vger.kernel.org>
Link: https://lore.kernel.org/r/Pine.LNX.4.44L0.2003281615140.14837-100000@netrid…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/usb/core/message.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
index 0d3fd20..fcf84bf 100644
--- a/drivers/usb/core/message.c
+++ b/drivers/usb/core/message.c
@@ -588,12 +588,13 @@ void usb_sg_cancel(struct usb_sg_request *io)
int i, retval;
spin_lock_irqsave(&io->lock, flags);
- if (io->status) {
+ if (io->status || io->count == 0) {
spin_unlock_irqrestore(&io->lock, flags);
return;
}
/* shut everything down */
io->status = -ECONNRESET;
+ io->count++; /* Keep the request alive until we're done */
spin_unlock_irqrestore(&io->lock, flags);
for (i = io->entries - 1; i >= 0; --i) {
@@ -607,6 +608,12 @@ void usb_sg_cancel(struct usb_sg_request *io)
dev_warn(&io->dev->dev, "%s, unlink --> %d\n",
__func__, retval);
}
+
+ spin_lock_irqsave(&io->lock, flags);
+ io->count--;
+ if (!io->count)
+ complete(&io->complete);
+ spin_unlock_irqrestore(&io->lock, flags);
}
EXPORT_SYMBOL_GPL(usb_sg_cancel);
--
1.8.3
From: Shengzui You <youshengzui(a)huawei.com>
driver inclusion
category: other
bugzilla: NA
CVE: NA
---------------------------------
Update the hns3 driver version to 1.9.37.9.
Signed-off-by: Shengzui You <youshengzui(a)huawei.com>
Reviewed-by: Weiwei Deng <dengweiwei(a)huawei.com>
Reviewed-by: Zhaohui Zhong <zhongzhaohui(a)huawei.com>
Reviewed-by: Junxin Chen <chenjunxin1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hnae3.h | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h | 2 +-
5 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index 5253374..14b3991 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -30,7 +30,7 @@
#include <linux/pci.h>
#include <linux/types.h>
-#define HNAE3_MOD_VERSION "1.9.37.8"
+#define HNAE3_MOD_VERSION "1.9.37.9"
#define HNAE3_MIN_VECTOR_NUM 2 /* first one for misc, another for IO */
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h b/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h
index 34ff097..b89b110 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h
@@ -4,7 +4,7 @@
#ifndef __HNS3_CAE_VERSION_H__
#define __HNS3_CAE_VERSION_H__
-#define HNS3_CAE_MOD_VERSION "1.9.37.8"
+#define HNS3_CAE_MOD_VERSION "1.9.37.9"
#define CMT_ID_LEN 8
#define RESV_LEN 3
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
index 4567c2c..62f34e9 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
@@ -8,7 +8,7 @@
#include "hnae3.h"
-#define HNS3_MOD_VERSION "1.9.37.8"
+#define HNS3_MOD_VERSION "1.9.37.9"
extern char hns3_driver_version[];
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index 80270e4..15af034 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -12,7 +12,7 @@
#include "hclge_cmd.h"
#include "hnae3.h"
-#define HCLGE_MOD_VERSION "1.9.37.8"
+#define HCLGE_MOD_VERSION "1.9.37.9"
#define HCLGE_DRIVER_NAME "hclge"
#define HCLGE_MAX_PF_NUM 8
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
index bd24595..f89ee1b 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
@@ -10,7 +10,7 @@
#include "hclgevf_cmd.h"
#include "hnae3.h"
-#define HCLGEVF_MOD_VERSION "1.9.37.8"
+#define HCLGEVF_MOD_VERSION "1.9.37.9"
#define HCLGEVF_DRIVER_NAME "hclgevf"
#define HCLGEVF_MAX_VLAN_ID 4095
--
1.8.3
From: Ye Bin <yebin10(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 34271
CVE: NA
--------------------------------
This reverts commit fd72360bc94ad304136beb56e8ff2ec089113bb8.
Test steps:
...
rmmod hisi_sas_v3_hw
lsmod
fdisk -l
insmod hisi_sas_v3_hw.ko
lsmod
fdisk -l
....
We get the following error when running the above test steps.
[ 3660.259153] [ffff00000116f000] pgd=00002027ffffe003, pud=00002027ffffd003,
pmd=00002027cdf28003, pte=0000000000000000
[ 3660.269719] Internal error: Oops: 96000007 [#1] PREEMPT SMP
[ 3660.275266] Modules linked in: hisi_sas_v3_hw(+) hisi_sas_main hns_roce_hw_v2(O)
hns_roce(O) rpcrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi
ib_ipoib ib_umad realtek hns3(O) hclge(O) hnae3(O) crc32_ce crct10dif_ce hisi_hpre
hisi_zip hisi_qm uacce hisi_trng_v2 rng_core sfc lbc ip_tables x_tables libsas
scsi_transport_sas [last unloaded: hisi_sas_main]
[ 3660.308227] Process smartd (pid: 19570, stack limit = 0x000000001103634d)
[ 3660.314985] CPU: 31 PID: 19570 Comm: smartd Kdump: loaded Tainted: G O 4.19.36-g32894fc #1
[ 3660.324504] Hardware name: Huawei TaiShan 200 (Model 2280)/BC82AMDD,
BIOS 2280-V2 CS V3.B220.02 03/27/2020
[ 3660.334110] pstate: 60400009 (nZCv daif +PAN -UAO)
[ 3660.338882] pc : scsi_device_put+0x18/0x38
[ 3660.342961] lr : scsi_disk_put+0x3c/0x58
[ 3660.346865] sp : ffff0000158a3cb0
[ 3660.350164] x29: ffff0000158a3cb0 x28: ffff8027b8111000
[ 3660.355451] x27: 00000000080a005d x26: 0000000000000000
[ 3660.360738] x25: ffff8027c6310398 x24: ffff8027cd2ec410
[ 3660.366025] x23: ffff000009811000 x22: ffff80276d274750
[ 3660.371312] x21: ffff8027abdd5000 x20: ffff8027b8110800
[ 3660.376599] x19: ffff8027abdd5000 x18: 0000000000000000
[ 3660.381886] x17: 0000000000000000 x16: 0000000000000000
[ 3660.387172] x15: 0000000000000000 x14: 0000000000000000
[ 3660.392459] x13: ffff000009996cd0 x12: ffffffffffffffff
[ 3660.397746] x11: ffff000009996cc8 x10: 0000000000000000
[ 3660.403033] x9 : 0000000000000000 x8 : 0000000040000000
[ 3660.408320] x7 : ffff0000098116c8 x6 : 0000000000000000
[ 3660.413607] x5 : ffff00000820ebbc x4 : ffff7e009eb8fb20
[ 3660.418894] x3 : 0000000080400009 x2 : ffff8027ae3ec600
[ 3660.424180] x1 : 71b6030ca20bb300 x0 : ffff00000116f000
[ 3660.429467] Call trace:
[ 3660.431904] scsi_device_put+0x18/0x38
[ 3660.435636] scsi_disk_put+0x3c/0x58
[ 3660.439195] sd_release+0x50/0xc0
[ 3660.442496] __blkdev_put+0x20c/0x220
[ 3660.446141] blkdev_put+0x4c/0x110
[ 3660.449527] blkdev_close+0x1c/0x28
[ 3660.453000] __fput+0x88/0x1b8
[ 3660.456042] ____fput+0xc/0x18
[ 3660.459085] task_work_run+0x94/0xb0
[ 3660.462646] do_notify_resume+0x17c/0x180
[ 3660.466637] work_pending+0x8/0x10
[ 3660.470022] Code: f9000bf3 aa0003f3 f9400000 f9404c00 (f9400000)
[ 3660.476089] ---[ end trace ca1d0144f9241f71 ]---
void scsi_device_put(struct scsi_device *sdev)
{
module_put(sdev->host->hostt->module); ---> error code
put_device(&sdev->sdev_gendev);
}
Accessing "sdev->host->hostt" triggers an exception, because
"sdev->host->hostt" points into the address space of a module that has
already been removed. delete_module first checks the module reference
count and then calls the module exit function, so after passing the
reference count check and before module exit is called, scsi_device_get
can still succeed.
Since "scsi: fix failing unload of a LLDD module" allows scsi_device_get
to succeed while the module is being removed, we revert it; "scsi: fixup
kernel warning during rmmod()" already fixed the earlier error.
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/scsi/scsi.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index 7d472c2..fc1356d 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -544,6 +544,9 @@ int scsi_report_opcode(struct scsi_device *sdev, unsigned char *buffer,
* Description: Gets a reference to the scsi_device and increments the use count
* of the underlying LLDD module. You must hold host_lock of the
* parent Scsi_Host or already have a reference when calling this.
+ *
+ * This will fail if a device is deleted or cancelled, or when the LLD module
+ * is in the process of being unloaded.
*/
int scsi_device_get(struct scsi_device *sdev)
{
@@ -551,12 +554,12 @@ int scsi_device_get(struct scsi_device *sdev)
goto fail;
if (!get_device(&sdev->sdev_gendev))
goto fail;
- /* We can fail try_module_get if we're doing SCSI operations
- * from module exit (like cache flush)
- */
- __module_get(sdev->host->hostt->module);
+ if (!try_module_get(sdev->host->hostt->module))
+ goto fail_put_device;
return 0;
+fail_put_device:
+ put_device(&sdev->sdev_gendev);
fail:
return -ENXIO;
}
--
1.8.3
[PATCH] s390/mm: fix page table upgrade vs 2ndary address mode accesses
by Yang Yingliang 09 May '20
From: Christian Borntraeger <borntraeger(a)de.ibm.com>
mainline inclusion
from mainline-v5.7-rc4
commit 316ec154810960052d4586b634156c54d0778f74
category: bugfix
bugzilla: 13690
CVE: CVE-2020-11884
-------------------------------------------------
A page table upgrade in a kernel section that uses secondary address
mode will mess up the kernel instructions as follows:
Consider the following scenario: two threads are sharing memory.
On CPU1 thread 1 does e.g. strnlen_user(). That gets to
old_fs = enable_sacf_uaccess();
len = strnlen_user_srst(src, size);
and
" la %2,0(%1)\n"
" la %3,0(%0,%1)\n"
" slgr %0,%0\n"
" sacf 256\n"
"0: srst %3,%2\n"
in strnlen_user_srst(). At that point we are in secondary space mode,
control register 1 points to kernel page table and instruction fetching
happens via c1, rather than usual c13. Interrupts are not disabled, for
obvious reasons.
On CPU2 thread 2 does MAP_FIXED mmap(), forcing the upgrade of page table
from 3-level to e.g. 4-level one. We'd allocated new top-level table,
set it up and now we hit this:
notify = 1;
spin_unlock_bh(&mm->page_table_lock);
}
if (notify)
on_each_cpu(__crst_table_upgrade, mm, 0);
OK, we need to actually change over to use of new page table and we
need that to happen in all threads that are currently running. Which
happens to include the thread 1. IPI is delivered and we have
static void __crst_table_upgrade(void *arg)
{
struct mm_struct *mm = arg;
if (current->active_mm == mm)
set_user_asce(mm);
__tlb_flush_local();
}
run on CPU1. That does
static inline void set_user_asce(struct mm_struct *mm)
{
S390_lowcore.user_asce = mm->context.asce;
OK, user page table address updated...
__ctl_load(S390_lowcore.user_asce, 1, 1);
... and control register 1 set to it.
clear_cpu_flag(CIF_ASCE_PRIMARY);
}
IPI is run in home space mode, so it's fine - insns are fetched
using c13, which always points to kernel page table. But as soon
as we return from the interrupt, previous PSW is restored, putting
CPU1 back into secondary space mode, at which point we no longer
get the kernel instructions from the kernel mapping.
The fix is to only fixup the control registers that are currently in use
for user processes during the page table update. We must also disable
interrupts in enable_sacf_uaccess to synchronize the cr and
thread.mm_segment updates against the on_each-cpu.
Fixes: 0aaba41b58bc ("s390: remove all code using the access register mode")
Cc: stable(a)vger.kernel.org # 4.15+
Reported-by: Al Viro <viro(a)zeniv.linux.org.uk>
Reviewed-by: Gerald Schaefer <gerald.schaefer(a)de.ibm.com>
References: CVE-2020-11884
Signed-off-by: Christian Borntraeger <borntraeger(a)de.ibm.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/s390/lib/uaccess.c | 4 ++++
arch/s390/mm/pgalloc.c | 16 ++++++++++++++--
2 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index c4f8039..0267405 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -64,10 +64,13 @@ mm_segment_t enable_sacf_uaccess(void)
{
mm_segment_t old_fs;
unsigned long asce, cr;
+ unsigned long flags;
old_fs = current->thread.mm_segment;
if (old_fs & 1)
return old_fs;
+ /* protect against a concurrent page table upgrade */
+ local_irq_save(flags);
current->thread.mm_segment |= 1;
asce = S390_lowcore.kernel_asce;
if (likely(old_fs == USER_DS)) {
@@ -83,6 +86,7 @@ mm_segment_t enable_sacf_uaccess(void)
__ctl_load(asce, 7, 7);
set_cpu_flag(CIF_ASCE_SECONDARY);
}
+ local_irq_restore(flags);
return old_fs;
}
EXPORT_SYMBOL(enable_sacf_uaccess);
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 814f265..f3bc9c9 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -72,8 +72,20 @@ static void __crst_table_upgrade(void *arg)
{
struct mm_struct *mm = arg;
- if (current->active_mm == mm)
- set_user_asce(mm);
+ /* we must change all active ASCEs to avoid the creation of new TLBs */
+ if (current->active_mm == mm) {
+ S390_lowcore.user_asce = mm->context.asce;
+ if (current->thread.mm_segment == USER_DS) {
+ __ctl_load(S390_lowcore.user_asce, 1, 1);
+ /* Mark user-ASCE present in CR1 */
+ clear_cpu_flag(CIF_ASCE_PRIMARY);
+ }
+ if (current->thread.mm_segment == USER_DS_SACF) {
+ __ctl_load(S390_lowcore.user_asce, 7, 7);
+ /* enable_sacf_uaccess does all or nothing */
+ WARN_ON(!test_cpu_flag(CIF_ASCE_SECONDARY));
+ }
+ }
__tlb_flush_local();
}
--
1.8.3
just for testing openeuler kernel ci.
Signed-off-by: Xie XiuQi <xiexiuqi(a)huawei.com>
---
init/version.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/init/version.c b/init/version.c
index ef4012ec4375..b8871517a6b2 100644
--- a/init/version.c
+++ b/init/version.c
@@ -43,7 +43,7 @@ EXPORT_SYMBOL_GPL(init_uts_ns);
/* FIXED STRINGS! Don't touch! */
const char linux_banner[] =
- "Linux version " UTS_RELEASE " (" LINUX_COMPILE_BY "@"
+ "Linux Version " UTS_RELEASE " (" LINUX_COMPILE_BY "@"
LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION "\n";
const char linux_proc_banner[] =
--
2.20.1
06 May '20
From: yu kuai <yukuai3(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 34280
CVE: NA
---------------------------
tags->rqs[] is not cleaned when a driver tag is freed, to avoid
an extra store on a shared area in the per-io path. But there
is a window between getting a driver tag and writing tags->rqs[],
so we may see a stale rq in tags->rqs[] which may have been freed,
as in the following case:
blk_mq_get_request blk_mq_queue_tag_busy_iter
-> blk_mq_get_tag
-> bt_for_each
-> bt_iter
-> rq = tags->rqs[]
-> rq->q
-> blk_mq_rq_ctx_init
-> data->hctx->tags->rqs[rq->tag] = rq;
In addition, tags->rqs[] only contains the requests that got a
driver tag. It is not accurate for the io-scheduler case when
accounting busy tags in part_in_flight.
To fix both problems, this patch changes blk_mq_queue_tag_busy_iter
to use tags->static_rqs[] instead of tags->rqs[].
We have to identify whether an io scheduler is attached to decide
whether to use hctx->tags or hctx->sched_tags, and we try to get a
non-zero q_usage_counter before that, which avoids racing with
nr_hw_queues updates, io-scheduler switches and even queue cleanup.
Add an 'inflight' parameter to choose between iterating in-flight
requests and iterating all busy tags, and add a new helper interface,
blk_mq_queue_tag_inflight_iter, to iterate all in-flight tags; this
interface is exported for drivers.
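As a rough illustration of how a driver might use the new interface
(the callback and wrapper below are assumptions, not part of this
patch; only blk_mq_queue_tag_inflight_iter and the busy_iter_fn
signature come from it):
static void my_count_inflight(struct blk_mq_hw_ctx *hctx,
			      struct request *rq, void *priv, bool reserved)
{
	unsigned int *count = priv;

	(*count)++;	/* invoked once per in-flight request */
}

static unsigned int my_driver_count_inflight(struct request_queue *q)
{
	unsigned int count = 0;

	blk_mq_queue_tag_inflight_iter(q, my_count_inflight, &count);
	return count;
}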
Signed-off-by: yu kuai <yukuai3(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-mq-tag.c | 77 ++++++++++++++++++++++++++++++++++++++++----------
block/blk-mq.c | 6 ++--
include/linux/blk-mq.h | 3 +-
3 files changed, 67 insertions(+), 19 deletions(-)
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 41317c5..323bbca 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -216,37 +216,51 @@ struct bt_iter_data {
busy_iter_fn *fn;
void *data;
bool reserved;
+ bool inflight;
};
static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
{
struct bt_iter_data *iter_data = data;
struct blk_mq_hw_ctx *hctx = iter_data->hctx;
- struct blk_mq_tags *tags = hctx->tags;
bool reserved = iter_data->reserved;
+ struct blk_mq_tags *tags;
struct request *rq;
+ tags = hctx->sched_tags ? hctx->sched_tags : hctx->tags;
+
if (!reserved)
bitnr += tags->nr_reserved_tags;
- rq = tags->rqs[bitnr];
/*
- * We can hit rq == NULL here, because the tagging functions
- * test and set the bit before assining ->rqs[].
+ * Because tags->rqs[] will not been cleaned when free driver tag
+ * and there is a window between get driver tag and write tags->rqs[],
+ * so we may see stale rq in tags->rqs[] which may have been freed.
+ * Using static_rqs[] is safer.
*/
- if (rq && rq->q == hctx->queue)
+ rq = tags->static_rqs[bitnr];
+
+ /*
+ * There is a small window between get tag and blk_mq_rq_ctx_init,
+ * so rq->q and rq->mq_hctx maybe different.
+ */
+ if (rq && rq->q == hctx->queue &&
+ (!iter_data->inflight ||
+ blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT))
iter_data->fn(hctx, rq, iter_data->data, reserved);
return true;
}
-static void bt_for_each(struct blk_mq_hw_ctx *hctx, struct sbitmap_queue *bt,
- busy_iter_fn *fn, void *data, bool reserved)
+static void bt_for_each(struct blk_mq_hw_ctx *hctx,
+ struct sbitmap_queue *bt, busy_iter_fn *fn,
+ void *data, bool reserved, bool inflight)
{
struct bt_iter_data iter_data = {
.hctx = hctx,
.fn = fn,
.data = data,
.reserved = reserved,
+ .inflight = inflight,
};
sbitmap_for_each_set(&bt->sb, bt_iter, &iter_data);
@@ -314,22 +328,23 @@ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
}
EXPORT_SYMBOL(blk_mq_tagset_busy_iter);
-void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
- void *priv)
+static void __blk_mq_queue_tag_busy_iter(struct request_queue *q,
+ busy_iter_fn *fn, void *priv, bool inflight)
{
struct blk_mq_hw_ctx *hctx;
int i;
/*
- * __blk_mq_update_nr_hw_queues will update the nr_hw_queues and
- * queue_hw_ctx after freeze the queue, so we use q_usage_counter
- * to avoid race with it.
+ * Get a reference of the queue unless it has been zero. We use this
+ * to avoid the race with the code that would modify the hctxs after
+ * freeze and drain the queue, including updating nr_hw_queues, io
+ * scheduler switching and queue clean up.
*/
if (!percpu_ref_tryget(&q->q_usage_counter))
return;
queue_for_each_hw_ctx(q, hctx, i) {
- struct blk_mq_tags *tags = hctx->tags;
+ struct blk_mq_tags *tags;
/*
* If not software queues are currently mapped to this
@@ -338,13 +353,45 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
if (!blk_mq_hw_queue_mapped(hctx))
continue;
+ tags = hctx->sched_tags ? hctx->sched_tags : hctx->tags;
+
if (tags->nr_reserved_tags)
- bt_for_each(hctx, &tags->breserved_tags, fn, priv, true);
- bt_for_each(hctx, &tags->bitmap_tags, fn, priv, false);
+ bt_for_each(hctx, &tags->breserved_tags,
+ fn, priv, true, inflight);
+ bt_for_each(hctx, &tags->bitmap_tags,
+ fn, priv, false, inflight);
+ /*
+ * flush_rq represents the rq with REQ_PREFLUSH and REQ_FUA
+ * (if FUA is not supported by device) to be issued to
+ * device. So we need to consider it when iterate inflight
+ * rqs, but needn't to count it when iterate busy tags.
+ */
+ if (inflight &&
+ blk_mq_rq_state(hctx->fq->flush_rq) == MQ_RQ_IN_FLIGHT)
+ fn(hctx, hctx->fq->flush_rq, priv, false);
}
blk_queue_exit(q);
}
+/*
+ * Iterate all the busy tags including pending and in-flight ones.
+ */
+void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
+ void *priv)
+{
+ __blk_mq_queue_tag_busy_iter(q, fn, priv, false);
+}
+
+/*
+ * Iterate all the inflight tags.
+ */
+void blk_mq_queue_tag_inflight_iter(struct request_queue *q,
+ busy_iter_fn *fn, void *priv)
+{
+ __blk_mq_queue_tag_busy_iter(q, fn, priv, true);
+}
+EXPORT_SYMBOL(blk_mq_queue_tag_inflight_iter);
+
static int bt_alloc(struct sbitmap_queue *bt, unsigned int depth,
bool round_robin, int node)
{
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8a7c3d8..ee07575 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -112,7 +112,7 @@ void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part,
struct mq_inflight mi = { .part = part, .inflight = inflight, };
inflight[0] = inflight[1] = 0;
- blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight, &mi);
+ blk_mq_queue_tag_inflight_iter(q, blk_mq_check_inflight, &mi);
}
static void blk_mq_check_inflight_rw(struct blk_mq_hw_ctx *hctx,
@@ -131,7 +131,7 @@ void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
struct mq_inflight mi = { .part = part, .inflight = inflight, };
inflight[0] = inflight[1] = 0;
- blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight_rw, &mi);
+ blk_mq_queue_tag_inflight_iter(q, blk_mq_check_inflight_rw, &mi);
}
void blk_freeze_queue_start(struct request_queue *q)
@@ -875,7 +875,7 @@ static void blk_mq_timeout_work(struct work_struct *work)
if (!percpu_ref_tryget(&q->q_usage_counter))
return;
- blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &next);
+ blk_mq_queue_tag_inflight_iter(q, blk_mq_check_expired, &next);
if (next != 0) {
mod_timer(&q->timeout, next);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 6578070..149d411 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -320,7 +320,8 @@ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
void blk_mq_freeze_queue_wait(struct request_queue *q);
int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
unsigned long timeout);
-
+void blk_mq_queue_tag_inflight_iter(struct request_queue *q, busy_iter_fn *fn,
+ void *priv);
int blk_mq_map_queues(struct blk_mq_tag_set *set);
void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
--
1.8.3
From: Changbin Du <changbin.du(a)gmail.com>
mainline inclusion
from mainline-5.6
commit 0ada120c883d4f1f6aafd01cf0fbb10d8bbba015
category: bugfix
bugzilla: 34555
CVE: NA
-------------------------------------------------
libbfd has changed the bfd_section_* macros to inline functions
bfd_section_<field> since 2019-09-18. See below two commits:
o http://www.sourceware.org/ml/gdb-cvs/2019-09/msg00064.html
o https://www.sourceware.org/ml/gdb-cvs/2019-09/msg00072.html
This fix makes perf build with both old and new libbfd.
Signed-off-by: Changbin Du <changbin.du(a)gmail.com>
Acked-by: Jiri Olsa <jolsa(a)redhat.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Link: http://lore.kernel.org/lkml/20200128152938.31413-1-changbin.du@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme(a)redhat.com>
Signed-off-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
tools/perf/util/srcline.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
index af3f9b9..b8e7761 100644
--- a/tools/perf/util/srcline.c
+++ b/tools/perf/util/srcline.c
@@ -191,16 +191,30 @@ static void find_address_in_section(bfd *abfd, asection *section, void *data)
bfd_vma pc, vma;
bfd_size_type size;
struct a2l_data *a2l = data;
+ flagword flags;
if (a2l->found)
return;
- if ((bfd_get_section_flags(abfd, section) & SEC_ALLOC) == 0)
+#ifdef bfd_get_section_flags
+ flags = bfd_get_section_flags(abfd, section);
+#else
+ flags = bfd_section_flags(section);
+#endif
+ if ((flags & SEC_ALLOC) == 0)
return;
pc = a2l->addr;
+#ifdef bfd_get_section_vma
vma = bfd_get_section_vma(abfd, section);
+#else
+ vma = bfd_section_vma(section);
+#endif
+#ifdef bfd_get_section_size
size = bfd_get_section_size(section);
+#else
+ size = bfd_section_size(section);
+#endif
if (pc < vma || pc >= vma + size)
return;
--
1.8.3
subscribe linux-kernel
From: Liu Yanshi <liuyanshi(a)huawei.com>
driver inclusion
category: feature
bugzilla: NA
CVE: NA
pcie_cae: add an interface to get the number of chips in the current system.
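A minimal userspace sketch of querying the new command (hedged: the
/dev node name is inferred from DEVICE_NAME in the driver, and the
error handling is an assumption):
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define PCIE_CMD_GET_CHIPNUMS 0x01	/* must match the driver */

int main(void)
{
	unsigned int chips = 0;
	int fd = open("/dev/pcie_reg_dev", O_RDONLY);	/* assumed node name */

	if (fd < 0)
		return 1;
	if (ioctl(fd, PCIE_CMD_GET_CHIPNUMS, &chips) == 0)
		printf("chip count: %u\n", chips);
	close(fd);
	return 0;
}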
Signed-off-by: Liu Yanshi <liuyanshi(a)huawei.com>
Reviewed-by: Zhu Xiongxiong <zhuxiongxiong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
.../controller/hisi-pcie-customer/hisi_pcie_cae.c | 37 ++++++++++++++++++++--
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/drivers/pci/controller/hisi-pcie-customer/hisi_pcie_cae.c b/drivers/pci/controller/hisi-pcie-customer/hisi_pcie_cae.c
index 8d0c801..6229a54 100644
--- a/drivers/pci/controller/hisi-pcie-customer/hisi_pcie_cae.c
+++ b/drivers/pci/controller/hisi-pcie-customer/hisi_pcie_cae.c
@@ -3,6 +3,7 @@
#include <linux/mm.h>
#include <linux/miscdevice.h>
+#include <linux/uaccess.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/io.h>
@@ -21,6 +22,7 @@
#define CHIP_INFO_REG_SIZE 4
#define TYPE_SHIFT 4
#define BIT_SHIFT_8 8
+#define PCIE_CMD_GET_CHIPNUMS 0x01
#define DEVICE_NAME "pcie_reg_dev"
@@ -37,7 +39,7 @@ enum {
MMAP_TYPE_VIRTIO
};
-static int current_chip_nums;
+static u32 current_chip_nums;
static const struct vm_operations_struct mmap_pcie_mem_ops = {
#ifdef CONFIG_HAVE_IOREMAP_PROT
@@ -100,10 +102,10 @@ static int pcie_reg_mmap(struct file *filep, struct vm_area_struct *vma)
return 0;
}
-int pcie_get_chipnums(u32 cpu_info)
+u32 pcie_get_chipnums(u32 cpu_info)
{
int i;
- int chip_count = 0;
+ u32 chip_count = 0;
u32 chip_i_info;
for (i = 0; i < MAX_CHIP_NUM; i++) {
@@ -144,12 +146,41 @@ static int pcie_release(struct inode *inode, struct file *f)
return 0;
}
+static long pcie_reg_ioctl(struct file *pfile, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret = 0;
+
+ switch (cmd) {
+ case PCIE_CMD_GET_CHIPNUMS:
+ if ((void *)arg == NULL) {
+ pr_info("[PCIe Base] invalid arg address\n");
+ ret = -EINVAL;
+ break;
+ }
+
+ if (copy_to_user((void *)arg, (void *)&current_chip_nums,
+ sizeof(int))) {
+ pr_info("[PCIe Base] copy chip_nums to usr failed\n");
+ ret = -EINVAL;
+ }
+ break;
+
+ default:
+ pr_info("[PCIe Base] invalid pcie ioctl cmd:%u\n", cmd);
+ break;
+ }
+
+ return ret;
+}
+
static const struct file_operations pcie_dfx_fops = {
.owner = THIS_MODULE,
.open = pcie_open,
.release = pcie_release,
.llseek = noop_llseek,
.mmap = pcie_reg_mmap,
+ .unlocked_ioctl = pcie_reg_ioctl,
};
static struct miscdevice pcie_dfx_misc = {
--
1.8.3
category: other
bugzilla: NA
CVE: NA
----------------------------------
The ccflags removed in this patch shouldn't be in the open kernel tree.
In particular, -fstack-protector-strong causes a compile error when
CONFIG_STACKPROTECTOR_STRONG isn't enabled.
Signed-off-by: Zhengyuan Liu <liuzhengyuan(a)tj.kylinos.cn>
---
drivers/net/ethernet/hisilicon/hns3/Makefile | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/Makefile b/drivers/net/ethernet/hisilicon/hns3/Makefile
index 6653e81..0365f1c 100644
--- a/drivers/net/ethernet/hisilicon/hns3/Makefile
+++ b/drivers/net/ethernet/hisilicon/hns3/Makefile
@@ -3,15 +3,6 @@
# Makefile for the HISILICON network device drivers.
#
-# Add security options
-ccflags-y += -fstack-protector-strong
-ccflags-y += -Wl,-z,relro,-z,now
-ccflags-y += -Wl,-z,noexecstack
-ccflags-y += -D_FORTIFY_SOURCE=2 -O2
-ccflags-y += -fvisibility=hidden
-ccflags-y += -Wformat=2 -Wfloat-equal
-ccflags-y += -fsigned-char
-
ccflags-y += -DCONFIG_IT_VALIDATION
ccflags-y += -DCONFIG_HNS3_TEST
--
2.7.4
cc kernel(a)openeuler.org.
Just for members who may not know, dwadm(a)pcl.ac.cn is from Pengcheng
Laboratory (鹏程实验室) and will help on issues related to VMs there.
On Sun, Apr 26, 2020 at 10:08 AM Xie XiuQi via Infra <infra(a)openeuler.org>
wrote:
> Thanks for your enthusiastic participation. We are working with colleagues at Pengcheng Laboratory to locate the issue.
> On 2020/4/25 13:30, Qichen Zhang wrote:
>
>
> Qichen Zhang
> Email: 17852657226(a)163.com
>
> --------- Forwarded message ---------
> From: Qichen Zhang <17852657226(a)163.com>
> Date: 2020-04-25 13:26
> To: Xiehong (Cynthia) <xiehong(a)huawei.com>
> Cc:
> Subject: Re: Issue updated on Gitee, please check
> Dear engineers,
> Hello!
>
> The bug still exists; it still reports "Message
> kernel watchdog Bug soft lockup".
> It has been posted on the issue. I waited from about 7 pm last night until 1 am this morning, and the bug is still there.
>
> Could you please look into how to resolve it?
> This bug was found last Friday; we can't just sit on it.
> If you need anything from me, just tell me directly.
> My name is Zhang Qichen; my phone number, which is also my WeChat ID, is 17852657226.
> I hope we can all work together to resolve this bug as soon as possible.
>
> Looking forward to your reply.
>
>
> Qichen Zhang
> Email: 17852657226(a)163.com
>
> On 2020-04-21 14:17, Xiehong (Cynthia) <xiehong(a)huawei.com> wrote:
>
> HI
>
>
>
> Thanks!
>
>
>
> We will keep tracking this in the community issue system.
>
>
>
>
>
> *From:* Qichen Zhang [mailto:17852657226@163.com]
> *Sent:* 2020-04-21 13:27
> *To:* Xiehong (Cynthia) <xiehong(a)huawei.com>
> *Subject:* Issue updated on Gitee, please check
>
>
>
> Dear engineers,
> Hello!
> 您好!
>
>
>
> I have updated the issue on Gitee; please check it:
>
> https://gitee.com/openeuler/community/issues/I1F13I?from=project-issue
>
> Happy to work with everyone to solve this bug.
>
> — —
>
> Qichen Zhang
>
>
> _______________________________________________
> Community mailing list -- community(a)openeuler.org
> To unsubscribe send an email to community-leave(a)openeuler.org
>
> _______________________________________________
> Infra mailing list -- infra(a)openeuler.org
> To unsubscribe send an email to infra-leave(a)openeuler.org
>
--
Regards
Fred Li (李永乐)
[PATCH 01/25] net: hns3: merge mac state HCLGE_MAC_TO_DEL and HCLGE_MAC_DEL_FAIL
by Yang Yingliang 26 Apr '20
From: shenhao <shenhao21(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
--------------------------------------------
HCLGE_MAC_DEL_FAIL is an intermediate state for mac address handling;
it can be merged with HCLGE_MAC_TO_DEL.
This patch also renames the enum from HCLGE_MAC_ADDR_STATE to
HCLGE_MAC_NODE_STATE, since it is used to indicate the state of a
mac node.
Signed-off-by: Jian Shen <shenjian15(a)huawei.com>
Signed-off-by: shenhao <shenhao21(a)huawei.com>
Reviewed-by: Zhong Zhaohui <zhongzhaohui(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 46 ++++++++++------------
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 9 ++---
2 files changed, 25 insertions(+), 30 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 478a3b5..d985c68 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -7422,14 +7422,13 @@ static void hclge_update_umv_space(struct hclge_vport *vport, bool is_free)
return NULL;
}
-static void hclge_mac_node_convert(struct hclge_vport_mac_addr_cfg *mac_node,
- enum HCLGE_MAC_ADDR_STATE state)
+static void hclge_update_mac_node(struct hclge_vport_mac_addr_cfg *mac_node,
+ enum HCLGE_MAC_NODE_STATE state)
{
switch (state) {
/* from set_rx_mode or tmp_add_list */
case HCLGE_MAC_TO_ADD:
- if (mac_node->state == HCLGE_MAC_TO_DEL ||
- mac_node->state == HCLGE_MAC_DEL_FAIL)
+ if (mac_node->state == HCLGE_MAC_TO_DEL)
mac_node->state = HCLGE_MAC_ACTIVE;
break;
/* only from set_rx_mode */
@@ -7442,14 +7441,7 @@ static void hclge_mac_node_convert(struct hclge_vport_mac_addr_cfg *mac_node,
}
break;
/* only from tmp_add_list, the mac_node->state won't be
- * HCLGE_MAC_ACTIVE/HCLGE_MAC_DEL_FAIL/HCLGE_MAC_ADD_FAIL
- */
- case HCLGE_MAC_DEL_FAIL:
- if (mac_node->state == HCLGE_MAC_TO_ADD)
- mac_node->state = HCLGE_MAC_ACTIVE;
- break;
- /* only from tmp_add_list, the mac_node->state won't be
- * HCLGE_MAC_ACTIVE/HCLGE_MAC_DEL_FAIL/HCLGE_MAC_ADD_FAIL
+ * ACTIVE.
*/
case HCLGE_MAC_ACTIVE:
if (mac_node->state == HCLGE_MAC_TO_ADD)
@@ -7460,7 +7452,7 @@ static void hclge_mac_node_convert(struct hclge_vport_mac_addr_cfg *mac_node,
}
int hclge_update_mac_list(struct hclge_vport *vport,
- enum HCLGE_MAC_ADDR_STATE state,
+ enum HCLGE_MAC_NODE_STATE state,
enum HCLGE_MAC_ADDR_TYPE mac_type,
const unsigned char *addr)
{
@@ -7480,7 +7472,7 @@ int hclge_update_mac_list(struct hclge_vport *vport,
*/
mac_node = hclge_find_mac_node(list, addr);
if (mac_node) {
- hclge_mac_node_convert(mac_node, state);
+ hclge_update_mac_node(mac_node, state);
spin_unlock_bh(&vport->mac_list_lock);
return 0;
}
@@ -7731,7 +7723,6 @@ static void hclge_unsync_mac_list(struct hclge_vport *vport,
list_del(&mac_node->node);
kfree(mac_node);
} else {
- mac_node->state = HCLGE_MAC_DEL_FAIL;
set_bit(HCLGE_VPORT_STATE_MAC_TBL_CHANGE,
&vport->state);
break;
@@ -7753,13 +7744,13 @@ static bool hclge_sync_from_add_list(struct list_head *add_list,
* uc/mc_mac_list, it means have received a TO_DEL request
* during the time window of adding the mac address into mac
* table. if mac_node state is ACTIVE, then change it to TO_DEL,
- * then it will be removed at next time. else it must be TO_ADD
- * or ADD_FAIL, this address hasn't been added into mac table,
+ * then it will be removed at next time. else it must be TO_ADD,
+ * this address hasn't been added into mac table,
* so just remove the mac node.
*/
new_node = hclge_find_mac_node(mac_list, mac_node->mac_addr);
if (new_node) {
- hclge_mac_node_convert(new_node, mac_node->state);
+ hclge_update_mac_node(new_node, mac_node->state);
list_del(&mac_node->node);
kfree(mac_node);
} else if (mac_node->state == HCLGE_MAC_ACTIVE) {
@@ -7782,7 +7773,16 @@ static void hclge_sync_from_del_list(struct list_head *del_list,
list_for_each_entry_safe(mac_node, tmp, del_list, node) {
new_node = hclge_find_mac_node(mac_list, mac_node->mac_addr);
if (new_node) {
- hclge_mac_node_convert(new_node, mac_node->state);
+ /* If the mac addr exists in the mac list, it means
+ * received a new TO_ADD request during the time window
+ * of configuring the mac address. For the mac node
+ * state is TO_ADD, and the address is already in the
+ * in the hardware(due to delete fail), so we just need
+ * to change the mac node state to ACTIVE.
+ */
+ new_node->state = HCLGE_MAC_ACTIVE;
+ list_del(&mac_node->node);
+ kfree(mac_node);
} else {
list_del(&mac_node->node);
list_add_tail(&mac_node->node, mac_list);
@@ -7850,7 +7850,6 @@ static void hclge_sync_vport_mac_table(struct hclge_vport *vport,
list_for_each_entry_safe(mac_node, tmp, list, node) {
switch (mac_node->state) {
case HCLGE_MAC_TO_DEL:
- case HCLGE_MAC_DEL_FAIL:
list_del(&mac_node->node);
list_add_tail(&mac_node->node, &tmp_del_list);
break;
@@ -7962,7 +7961,6 @@ void hclge_rm_vport_all_mac_table(struct hclge_vport *vport, bool is_del_list,
list_for_each_entry_safe(mac_cfg, tmp, list, node) {
switch (mac_cfg->state) {
case HCLGE_MAC_TO_DEL:
- case HCLGE_MAC_DEL_FAIL:
case HCLGE_MAC_ACTIVE:
list_del(&mac_cfg->node);
list_add_tail(&mac_cfg->node, &tmp_del_list);
@@ -8021,7 +8019,6 @@ static void hclge_uninit_mac_list(struct hclge_vport *vport,
list_for_each_entry_safe(mac_node, tmp, list, node) {
switch (mac_node->state) {
case HCLGE_MAC_TO_DEL:
- case HCLGE_MAC_DEL_FAIL:
case HCLGE_MAC_ACTIVE:
list_del(&mac_node->node);
list_add_tail(&mac_node->node, &tmp_del_list);
@@ -8221,7 +8218,7 @@ void hclge_replace_mac_node(struct list_head *list, const u8 *old_addr,
}
void hclge_modify_mac_node_state(struct list_head *list, const u8 *addr,
- enum HCLGE_MAC_ADDR_STATE state)
+ enum HCLGE_MAC_NODE_STATE state)
{
struct hclge_vport_mac_addr_cfg *mac_node;
@@ -8989,8 +8986,7 @@ static void hclge_mac_node_convert_for_reset(struct list_head *list)
list_for_each_entry_safe(mac_node, tmp, list, node) {
if (mac_node->state == HCLGE_MAC_ACTIVE) {
mac_node->state = HCLGE_MAC_TO_ADD;
- } else if (mac_node->state == HCLGE_MAC_TO_DEL ||
- mac_node->state == HCLGE_MAC_DEL_FAIL) {
+ } else if (mac_node->state == HCLGE_MAC_TO_DEL) {
list_del(&mac_node->node);
kfree(mac_node);
}
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index 5e462cf..a1fa782 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -640,16 +640,15 @@ struct hclge_fd_ad_data {
u16 rule_id;
};
-enum HCLGE_MAC_ADDR_STATE {
+enum HCLGE_MAC_NODE_STATE {
HCLGE_MAC_TO_ADD,
HCLGE_MAC_TO_DEL,
- HCLGE_MAC_DEL_FAIL,
HCLGE_MAC_ACTIVE
};
struct hclge_vport_mac_addr_cfg {
struct list_head node;
- enum HCLGE_MAC_ADDR_STATE state;
+ enum HCLGE_MAC_NODE_STATE state;
u8 mac_addr[ETH_ALEN];
};
@@ -1016,13 +1015,13 @@ int hclge_set_vlan_filter(struct hnae3_handle *handle, __be16 proto,
int hclge_notify_client(struct hclge_dev *hdev,
enum hnae3_reset_notify_type type);
int hclge_update_mac_list(struct hclge_vport *vport,
- enum HCLGE_MAC_ADDR_STATE state,
+ enum HCLGE_MAC_NODE_STATE state,
enum HCLGE_MAC_ADDR_TYPE mac_type,
const unsigned char *addr);
void hclge_replace_mac_node(struct list_head *list, const u8 *old_addr,
const u8 *new_addr, bool keep_old);
void hclge_modify_mac_node_state(struct list_head *list, const u8 *addr,
- enum HCLGE_MAC_ADDR_STATE state);
+ enum HCLGE_MAC_NODE_STATE state);
void hclge_rm_vport_all_mac_table(struct hclge_vport *vport, bool is_del_list,
enum HCLGE_MAC_ADDR_TYPE mac_type);
void hclge_rm_vport_all_vlan_table(struct hclge_vport *vport, bool is_del_list);
--
1.8.3
From: Huang jun <huangjun61(a)huawei.com>
If we want ACPI GED to support wakeup from freeze, we need to implement
the suspend/resume callbacks. In these two callbacks, the ACPI _GPO and
_GPP methods are called to set the sleep flag and to debounce
("anti-shake") the power button.
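The "anti-shake" here is a debounce: after resume, a one-shot 4 second
timer defers re-enabling the power button via _GPP. As a minimal
standalone sketch of that timer pattern (symbol names are hypothetical,
not the driver's actual symbols):

#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list debounce_timer;

/* Fires once, 4 s after resume; this is where _GPP would be
 * executed to re-enable the power button. */
static void debounce_expired(struct timer_list *t)
{
}

static void debounce_init(void)
{
	timer_setup(&debounce_timer, debounce_expired, 0);
}

static void debounce_on_resume(void)
{
	/* One-shot: (re)arm the timer to fire 4 s from now. */
	mod_timer(&debounce_timer, jiffies + 4 * HZ);
}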
Signed-off-by: Huangjun <huangjun63(a)huawei.com>
---
drivers/acpi/evged.c | 108 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 108 insertions(+)
diff --git a/drivers/acpi/evged.c b/drivers/acpi/evged.c
index f13ba2c07667..dde8fbff8d19 100644
--- a/drivers/acpi/evged.c
+++ b/drivers/acpi/evged.c
@@ -46,11 +46,20 @@
#include <linux/list.h>
#include <linux/platform_device.h>
#include <linux/acpi.h>
+#include <linux/timer.h>
+#include <linux/jiffies.h>
#define MODULE_NAME "acpi-ged"
+struct acpi_ged_handle {
+ struct timer_list timer; /* 4 s power-button debounce (anti-shake) */
+ acpi_handle gpp_handle; /* ACPI Handle: enable shutdown */
+ acpi_handle gpo_handle; /* ACPI Handle: set sleep flag */
+};
+
struct acpi_ged_device {
struct device *dev;
+ struct acpi_ged_handle *wakeup_handle;
struct list_head event_list;
};
@@ -131,6 +140,34 @@ static acpi_status acpi_ged_request_interrupt(struct acpi_resource *ares,
return AE_OK;
}
+#ifdef CONFIG_PM_SLEEP
+static void init_ged_handle(struct acpi_ged_device *geddev)
+{
+ struct acpi_ged_handle *wakeup_handle;
+ acpi_handle gpo_handle = NULL;
+ acpi_handle gpp_handle = NULL;
+ acpi_status acpi_ret;
+
+ wakeup_handle = devm_kzalloc(geddev->dev, sizeof(*wakeup_handle), GFP_KERNEL);
+ if (!wakeup_handle)
+ return;
+
+ geddev->wakeup_handle = wakeup_handle;
+
+ /* Initialize wakeup_handle, prepare for ged suspend and resume */
+ timer_setup(&wakeup_handle->timer, NULL, 0);
+
+ acpi_ret = acpi_get_handle(ACPI_HANDLE(geddev->dev), "_GPO", &gpo_handle);
+ if (ACPI_FAILURE(acpi_ret))
+ dev_info(geddev->dev, "cannot locate _GPO method\n");
+ wakeup_handle->gpo_handle = gpo_handle;
+
+ acpi_ret = acpi_get_handle(ACPI_HANDLE(geddev->dev), "_GPP", &gpp_handle);
+ if (ACPI_FAILURE(acpi_ret))
+ dev_info(geddev->dev, "cannot locate _GPP method\n");
+ wakeup_handle->gpp_handle = gpp_handle;
+}
+#endif
+
static int ged_probe(struct platform_device *pdev)
{
struct acpi_ged_device *geddev;
@@ -149,6 +186,9 @@ static int ged_probe(struct platform_device *pdev)
return -EINVAL;
}
platform_set_drvdata(pdev, geddev);
+#ifdef CONFIG_PM_SLEEP
+ init_ged_handle(geddev);
+#endif
return 0;
}
@@ -164,6 +204,10 @@ static void ged_shutdown(struct platform_device *pdev)
dev_dbg(geddev->dev, "GED releasing GSI %u @ IRQ %u\n",
event->gsi, event->irq);
}
+
+ if (geddev->wakeup_handle)
+ del_timer(&geddev->wakeup_handle->timer);
+
}
static int ged_remove(struct platform_device *pdev)
@@ -172,6 +216,67 @@ static int ged_remove(struct platform_device *pdev)
return 0;
}
+#ifdef CONFIG_PM_SLEEP
+static void ged_timer_callback(struct timer_list *t)
+{
+ struct acpi_ged_handle *wakeup_handle = from_timer(wakeup_handle, t, timer);
+ acpi_status acpi_ret;
+
+ /* The _GPP method re-enables the power button */
+ if (wakeup_handle && wakeup_handle->gpp_handle) {
+ acpi_ret = acpi_execute_simple_method(wakeup_handle->gpp_handle, NULL, ACPI_IRQ_MODEL_GIC);
+ if (ACPI_FAILURE(acpi_ret))
+ pr_warn("_GPP method execution failed\n");
+ }
+}
+
+static int ged_suspend(struct device *dev)
+{
+ struct acpi_ged_device *geddev = dev_get_drvdata(dev);
+ struct acpi_ged_handle *wakeup_handle = geddev->wakeup_handle;
+ struct acpi_ged_event *event, *next;
+ acpi_status acpi_ret;
+
+ /* The _GPO method sets the sleep flag */
+ if (wakeup_handle && wakeup_handle->gpo_handle) {
+ acpi_ret = acpi_execute_simple_method(wakeup_handle->gpo_handle, NULL, ACPI_IRQ_MODEL_GIC);
+ if (ACPI_FAILURE(acpi_ret)) {
+ pr_warn("_GPO method execution failed\n");
+ return -EINVAL;
+ }
+ }
+
+ list_for_each_entry_safe(event, next, &geddev->event_list, node)
+ enable_irq_wake(event->irq);
+
+ return 0;
+}
+
+static int ged_resume(struct device *dev)
+{
+ struct acpi_ged_device *geddev = dev_get_drvdata(dev);
+ struct acpi_ged_handle *wakeup_handle = geddev->wakeup_handle;
+ struct acpi_ged_event *event, *next;
+
+ /* use timer to complete 4s anti-shake */
+ if (wakeup_handle && wakeup_handle->gpp_handle) {
+ wakeup_handle->timer.expires = jiffies + (4 * HZ);
+ wakeup_handle->timer.function = ged_timer_callback;
+ add_timer(&wakeup_handle->timer);
+ }
+
+ list_for_each_entry_safe(event, next, &geddev->event_list, node)
+ disable_irq_wake(event->irq);
+
+ return 0;
+}
+
+static const struct dev_pm_ops ged_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(ged_suspend, ged_resume)
+};
+#endif
+
+
static const struct acpi_device_id ged_acpi_ids[] = {
{"ACPI0013"},
{},
@@ -184,6 +289,9 @@ static struct platform_driver ged_driver = {
.driver = {
.name = MODULE_NAME,
.acpi_match_table = ACPI_PTR(ged_acpi_ids),
+#ifdef CONFIG_PM_SLEEP
+ .pm = &ged_pm_ops,
+#endif
},
};
builtin_platform_driver(ged_driver);
--
2.20.1
Alexei Avshalom Lazar (1):
wil6210: add general initialization/size checks
Amir Goldstein (1):
ovl: fix value of i_ino for lower hardlink corner case
Austin Kim (1):
mm/vmalloc.c: move 'area->pages' after if statement
Can Guo (1):
scsi: ufs: Fix ufshcd_hold() caused scheduling while atomic
Colin Ian King (2):
ASoC: Intel: mrfld: fix incorrect check on p->sink
ASoC: Intel: mrfld: return error codes when an error occurs
DENG Qingfang (1):
net: dsa: mt7530: fix tagged frames pass-through in VLAN-unaware mode
Dedy Lansky (2):
wil6210: check rx_buff_mgmt before accessing it
wil6210: make sure Rx ring sizes are correlated
Florian Fainelli (1):
net: stmmac: dwmac-sunxi: Provide TX and RX fifo sizes
Greg Kroah-Hartman (1):
Linux 4.19.117
James Morse (1):
x86/resctrl: Preserve CDP enable over CPU hotplug
Jan Kara (1):
ext4: do not zeroout extents beyond i_disksize
Jim Mattson (1):
kvm: x86: Host feature SSBD doesn't imply guest feature SPEC_CTRL_SSBD
John Allen (1):
x86/microcode/AMD: Increase microcode PATCH_MAX_SIZE
Josef Bacik (1):
btrfs: check commit root generation in should_ignore_root
Josh Triplett (2):
ext4: fix incorrect group count in ext4_fill_super error message
ext4: fix incorrect inodes per group in error message
Karthick Gopalasubramanian (1):
wil6210: remove reset file from debugfs
Konstantin Khlebnikov (1):
net: revert default NAPI poll timeout to 2 jiffies
Maurizio Lombardi (2):
scsi: target: remove boilerplate code
scsi: target: fix hang when multiple threads try to destroy the same
iscsi session
Maya Erez (1):
wil6210: ignore HALP ICR if already handled
Reinette Chatre (1):
x86/resctrl: Fix invalid attempt at removing the default resource
group
Sasha Levin (1):
usb: dwc3: gadget: don't enable interrupt when disabling endpoint
Sebastian Andrzej Siewior (1):
amd-xgbe: Use __napi_schedule() in BH context
Sergei Lopatin (1):
drm/amd/powerplay: force the trim of the mclk dpm_levels if OD is
enabled
Sven Van Asbroeck (1):
pwm: pca9685: Fix PWM/GPIO inter-operation
Taehee Yoo (1):
hsr: check protocol version in hsr_newlink()
Takashi Iwai (4):
ALSA: usb-audio: Filter error from connector kctl ops, too
ALSA: usb-audio: Don't override ignore_ctl_error value from the map
ALSA: usb-audio: Don't create jack controls for PCM terminals
ALSA: usb-audio: Check mapping at creating connector controls, too
Taras Chornyi (1):
net: ipv4: devinet: Fix crash when add/del multicast IP with autojoin
Thinh Nguyen (1):
usb: dwc3: gadget: Don't clear flags before transfer ended
Tim Stallard (1):
net: ipv6: do not consider routes via gateways for anycast address
check
Tuomas Tynkkynen (1):
mac80211_hwsim: Use kstrndup() in place of kasprintf()
Vasily Averin (1):
keys: Fix proc_keys_next to increase position index
Wang Wenhu (1):
net: qrtr: send msgs from local of same id as broadcast
Xiao Yang (1):
tracing: Fix the race between registering 'snapshot' event trigger and
triggering 'snapshot' operation
zhangyi (F) (1):
jbd2: improve comments about freeing data buffers whose page mapping
is NULL
Makefile | 2 +-
arch/x86/include/asm/microcode_amd.h | 2 +-
arch/x86/kernel/cpu/intel_rdt.c | 2 +
arch/x86/kernel/cpu/intel_rdt.h | 1 +
arch/x86/kernel/cpu/intel_rdt_rdtgroup.c | 16 ++++-
arch/x86/kvm/cpuid.c | 3 +-
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 5 +-
drivers/net/dsa/mt7530.c | 18 +++--
drivers/net/dsa/mt7530.h | 7 ++
drivers/net/ethernet/amd/xgbe/xgbe-drv.c | 2 +-
drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c | 2 +
drivers/net/wireless/ath/wil6210/debugfs.c | 29 +-------
drivers/net/wireless/ath/wil6210/interrupt.c | 12 ++--
drivers/net/wireless/ath/wil6210/main.c | 5 +-
drivers/net/wireless/ath/wil6210/txrx.c | 4 +-
drivers/net/wireless/ath/wil6210/txrx_edma.c | 14 +++-
drivers/net/wireless/ath/wil6210/wil6210.h | 3 +-
drivers/net/wireless/ath/wil6210/wmi.c | 2 +-
drivers/net/wireless/mac80211_hwsim.c | 12 ++--
drivers/pwm/pwm-pca9685.c | 85 +++++++++++++----------
drivers/scsi/ufs/ufshcd.c | 5 ++
drivers/target/iscsi/iscsi_target.c | 79 ++++++---------------
drivers/target/iscsi/iscsi_target.h | 1 -
drivers/target/iscsi/iscsi_target_configfs.c | 5 +-
drivers/target/iscsi/iscsi_target_login.c | 5 +-
drivers/usb/dwc3/gadget.c | 18 ++---
fs/btrfs/relocation.c | 4 +-
fs/ext4/extents.c | 8 +--
fs/ext4/super.c | 6 +-
fs/jbd2/commit.c | 7 +-
fs/overlayfs/inode.c | 4 +-
include/net/ip6_route.h | 1 +
include/target/iscsi/iscsi_target_core.h | 2 +-
kernel/trace/trace_events_trigger.c | 10 +--
mm/vmalloc.c | 8 ++-
net/core/dev.c | 3 +-
net/hsr/hsr_netlink.c | 10 ++-
net/ipv4/devinet.c | 13 ++--
net/qrtr/qrtr.c | 7 +-
security/keys/proc.c | 2 +
sound/soc/intel/atom/sst-atom-controls.c | 2 +-
sound/soc/intel/atom/sst/sst_pci.c | 2 +-
sound/usb/mixer.c | 31 +++++----
sound/usb/mixer_maps.c | 4 +-
44 files changed, 251 insertions(+), 212 deletions(-)
--
1.8.3

[PATCH] btrfs: tree-checker: Enhance chunk checker to validate chunk profile
by Yang Yingliang 22 Apr '20
From: Qu Wenruo <wqu(a)suse.com>
mainline inclusion
from mainline-v5.2-rc1
commit 80e46cf22ba0bcb57b39c7c3b52961ab3a0fd5f2
category: bugfix
bugzilla: 13690
CVE: CVE-2019-19378
-------------------------------------------------
Btrfs-progs already has a comprehensive type checker to ensure there
is either 0 (SINGLE profile) or 1 (DUP/RAID0/1/5/6/10) bit set in the
chunk profile bits.
Do the same work in the kernel.
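For reference, the bit trick the new check relies on: a profile field
is valid exactly when it is zero or has a single bit set, which is what
is_power_of_2() tests. A minimal standalone sketch:

#include <stdbool.h>
#include <stdint.h>

/* Mirrors the kernel's is_power_of_2(): true iff exactly one bit set. */
static bool one_bit_set(uint64_t x)
{
	return x != 0 && (x & (x - 1)) == 0;
}

/* A chunk profile is valid iff zero (SINGLE) or one profile bit is set. */
static bool chunk_profile_valid(uint64_t type, uint64_t profile_mask)
{
	uint64_t profile = type & profile_mask;

	return profile == 0 || one_bit_set(profile);
}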
Reported-by: Yoon Jungyeon <jungyeon(a)gatech.edu>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=202765
Reviewed-by: Nikolay Borisov <nborisov(a)suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn(a)suse.de>
Signed-off-by: Qu Wenruo <wqu(a)suse.com>
Reviewed-by: David Sterba <dsterba(a)suse.com>
Signed-off-by: David Sterba <dsterba(a)suse.com>
Conflicts:
fs/btrfs/volumes.c
[yyl: btrfs_check_chunk_valid() is defined in fs/btrfs/volumes.c]
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/btrfs/volumes.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 99260d2..110cdfd 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -6391,6 +6391,14 @@ static int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info,
return -EIO;
}
+ if (!is_power_of_2(type & BTRFS_BLOCK_GROUP_PROFILE_MASK) &&
+ (type & BTRFS_BLOCK_GROUP_PROFILE_MASK) != 0) {
+ btrfs_err(fs_info,
+ "invalid chunk profile flag: 0x%llx, expect 0 or 1 bit set",
+ type & BTRFS_BLOCK_GROUP_PROFILE_MASK);
+ return -EIO;
+ }
+
if ((type & BTRFS_BLOCK_GROUP_TYPE_MASK) == 0) {
btrfs_err(fs_info, "missing chunk type flag: 0x%llx", type);
return -EIO;
--
1.8.3
From: Cheng Jian <cj.chengjian(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 5391/28338/24634
CVE: NA
-----------------------------------------------
The previous patch added a klp_rel_state field to the
module structure, which changed the kernel ABI (KABI). Fix
this by overlaying the field on a reserved slot via an
anonymous union, so the structure layout is unchanged.
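The idea, as a minimal sketch (struct and member names here are
illustrative): an anonymous union overlays the new enum on a long-sized
placeholder, so the struct size and the offsets of all later members
stay exactly where KABI_RESERVE had them:

#include <stddef.h>

enum MODULE_KLP_REL_STATE { MODULE_KLP_REL_UNDO, MODULE_KLP_REL_DONE };

struct kabi_demo {
	/* The union is as large as its widest member (long), i.e.
	 * exactly the size of the reserved slot it replaces. */
	union {
		enum MODULE_KLP_REL_STATE klp_rel_state;
		long klp_rel_state_KABI;
	};
	long next_reserved;
};

/* On LP64, the member after the union still sits at offset 8. */
_Static_assert(offsetof(struct kabi_demo, next_reserved) == sizeof(long),
	       "union must not change the layout");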
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/module.h | 33 ++++++++++++++++++++-------------
1 file changed, 20 insertions(+), 13 deletions(-)
diff --git a/include/linux/module.h b/include/linux/module.h
index 4994243..e1f3418 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -469,19 +469,6 @@ struct module {
/* Elf information */
struct klp_modinfo *klp_info;
- /*
- * livepatch should relocate the key of jump_label by
- * using klp_write_module_reloc. So it's necessary to
- * do jump_label_apply_nops() and jump_label_add_module()
- * later after livepatch relocation finised.
- *
- * for normal module :
- * always MODULE_KLP_REL_DONE.
- * for livepatch module :
- * init as MODULE_KLP_REL_UNDO,
- * set to MODULE_KLP_REL_DONE when relocate completed.
- */
- enum MODULE_KLP_REL_STATE klp_rel_state;
#endif
#ifdef CONFIG_MODULE_UNLOAD
@@ -507,7 +494,27 @@ struct module {
unsigned int num_ei_funcs;
#endif
+#if defined(CONFIG_LIVEPATCH) && !defined(__GENKSYMS__)
+ union {
+ /*
+ * livepatch should relocate the key of jump_label by
+ * using klp_write_module_reloc. So it's necessary to
+ * do jump_label_apply_nops() and jump_label_add_module()
+ * later after livepatch relocation finished.
+ *
+ * for normal module :
+ * always MODULE_KLP_REL_DONE.
+ * for livepatch module :
+ * init as MODULE_KLP_REL_UNDO,
+ * set to MODULE_KLP_REL_DONE when relocate completed.
+ */
+ enum MODULE_KLP_REL_STATE klp_rel_state;
+ long klp_rel_state_KABI;
+ };
+#else
KABI_RESERVE(1)
+#endif
+
KABI_RESERVE(2)
KABI_RESERVE(3)
KABI_RESERVE(4)
--
1.8.3

[PATCH v1] scsi: hisi_sas: do not reset the timer to wait for phyup when phy already up
by Luo Jiaxing 22 Apr '20
We found that after phy up, the hardware may report another OOB
interrupt that is not followed by a phy-up interrupt, like:
oob ready -> phy up -> DEV found -> oob ready -> wait phy up -> timeout
We run a link reset when the wait for phy up times out, which drags a
perfectly normal disk into reset processing. Work around this in the
code: when the phy is already attached, the spurious OOB interrupt no
longer starts the timer that waits for phy up.
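The guard boils down to: an "OOB ready" event for a phy that is already
attached must not (re)arm the phy-up timeout. A minimal sketch of that
logic (helper and parameter names are hypothetical):

#include <linux/timer.h>
#include <linux/jiffies.h>

/* Assumes the timer was set up earlier with timer_setup(). */
static void oob_ready_event(bool phy_attached, struct timer_list *timer,
			    void (*timeout_fn)(struct timer_list *),
			    unsigned long timeout_secs)
{
	if (phy_attached)
		return;		/* spurious event: the phy is already up */

	if (!timer_pending(timer)) {
		timer->function = timeout_fn;
		timer->expires = jiffies + timeout_secs * HZ;
		add_timer(timer);
	}
}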
Signed-off-by: Luo Jiaxing <luojiaxing(a)huawei.com>
Signed-off-by: John Garry <john.garry(a)huawei.com>
---
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index acf2fc6..5b80856 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -1898,8 +1898,11 @@ static void handle_chl_int0_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
if (irq_value0 & CHL_INT0_PHY_RDY_MSK) {
struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+ dev_dbg(dev, "phy%d OOB ready\n", phy_no);
+ if (phy->phy_attached)
+ return;
+
if (!timer_pending(&phy->timer)) {
- dev_dbg(dev, "phy%d OOB ready\n", phy_no);
phy->timer.function = wait_phyup_timedout_v3_hw;
phy->timer.expires = jiffies +
WAIT_PHYUP_TIMEOUT_V3_HW * HZ;
--
2.7.4
Hi all,
We are frequently asked which patches this kernel carries compared with
kernel 4.19. [1] shows how to find them; we hope it helps. If more
information is needed, please leave a message in issue [1].
[1] https://gitee.com/openeuler/kernel/issues/I1F430
--
Regards
Fred Li (李永乐)

[PATCH hulk-4.19-next] net: hns3: add suspend/resume function for hns3 driver
by Jian Shen 21 Apr '20
From: Hao Shen <shenhao21(a)huawei.com>
The suspend and resume feature is useful when you want to
save the current state of your machine and continue working
later from the same state. This patch makes the hns3 net
device work from the same state after S3 and S4.
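On the PCI side this follows the classic legacy suspend/resume pattern:
quiesce the device, save config space and drop to D3hot on the way
down; return to D0, restore config space and re-initialize on the way
up. A minimal sketch of just that skeleton (callback names are
hypothetical):

#include <linux/pci.h>

static int demo_suspend(struct pci_dev *pdev, pm_message_t state)
{
	/* Quiesce the device first (the hclge hooks below do this). */
	pci_save_state(pdev);
	pci_set_power_state(pdev, PCI_D3hot);
	return 0;
}

static int demo_resume(struct pci_dev *pdev)
{
	pci_set_power_state(pdev, PCI_D0);
	pci_restore_state(pdev);
	/* ...then bring the device back up (the hclge hooks below). */
	return 0;
}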
Signed-off-by: Weiwei Deng <dengweiwei(a)huawei.com>
Signed-off-by: Hao Shen <shenhao21(a)huawei.com>
Signed-off-by: Jian Shen <shenjian15(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hnae3.h | 2 +
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 32 ++++++++++
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 70 ++++++++++++++++++++++
3 files changed, 104 insertions(+)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index bb29b46..8b1e690 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -561,6 +561,8 @@ struct hnae3_ae_ops {
int (*set_vf_mac)(struct hnae3_handle *handle, int vf, u8 *p);
int (*get_module_eeprom)(struct hnae3_handle *handle, u32 offset,
u32 len, u8 *data);
+ int (*suspend)(struct hnae3_ae_dev *ae_dev);
+ int (*resume)(struct hnae3_ae_dev *ae_dev);
};
struct hnae3_dcb_ops {
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 4eca8cf..de3d37c3 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -2226,6 +2226,34 @@ static void hns3_shutdown(struct pci_dev *pdev)
pci_set_power_state(pdev, PCI_D3hot);
}
+#ifdef CONFIG_PM
+static int hns3_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
+
+ if (ae_dev->ops->suspend)
+ ae_dev->ops->suspend(ae_dev);
+
+ pci_save_state(pdev);
+ pci_set_power_state(pdev, PCI_D3hot);
+
+ return 0;
+}
+
+static int hns3_resume(struct pci_dev *pdev)
+{
+ struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+
+ if (ae_dev->ops->resume)
+ return ae_dev->ops->resume(ae_dev);
+
+ return 0;
+}
+#endif
+
static pci_ers_result_t hns3_error_detected(struct pci_dev *pdev,
pci_channel_state_t state)
{
@@ -2310,6 +2338,10 @@ struct pci_driver hns3_driver = {
.probe = hns3_probe,
.remove = hns3_remove,
.shutdown = hns3_shutdown,
+#ifdef CONFIG_PM
+ .suspend = hns3_suspend,
+ .resume = hns3_resume,
+#endif
.sriov_configure = hns3_pci_sriov_configure,
.err_handler = &hns3_err_handler,
};
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index c2ae05f..478a3b5 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -3820,6 +3820,72 @@ static void hclge_reset(struct hclge_dev *hdev)
hclge_reset_task_schedule(hdev);
}
+#ifdef CONFIG_PM
+static int hclge_suspend(struct hnae3_ae_dev *ae_dev)
+{
+ struct hclge_dev *hdev = ae_dev->priv;
+ int ret;
+
+ ret = hclge_notify_roce_client(hdev, HNAE3_DOWN_CLIENT);
+ if (ret)
+ return ret;
+
+ rtnl_lock();
+
+ ret = hclge_notify_client(hdev, HNAE3_DOWN_CLIENT);
+ if (ret)
+ goto err_reset_lock;
+
+ ret = hclge_notify_client(hdev, HNAE3_UNINIT_CLIENT);
+ if (ret)
+ goto err_reset_lock;
+
+ rtnl_unlock();
+
+ return hclge_notify_roce_client(hdev, HNAE3_UNINIT_CLIENT);
+
+err_reset_lock:
+ rtnl_unlock();
+ return ret;
+}
+
+static int hclge_resume(struct hnae3_ae_dev *ae_dev)
+{
+ struct hclge_dev *hdev = ae_dev->priv;
+ int ret;
+
+ rtnl_lock();
+
+ ret = hclge_reset_ae_dev(hdev->ae_dev);
+ if (ret)
+ goto err_reset_lock;
+
+ ret = hclge_notify_client(hdev, HNAE3_INIT_CLIENT);
+ if (ret)
+ goto err_reset_lock;
+
+ rtnl_unlock();
+
+ ret = hclge_notify_roce_client(hdev, HNAE3_INIT_CLIENT);
+ if (ret)
+ goto err_reset_lock;
+
+ rtnl_lock();
+
+ ret = hclge_notify_client(hdev, HNAE3_UP_CLIENT);
+ if (ret)
+ goto err_reset_lock;
+
+ rtnl_unlock();
+
+ return hclge_notify_roce_client(hdev, HNAE3_UP_CLIENT);
+
+err_reset_lock:
+ rtnl_unlock();
+ return ret;
+}
+#endif
+
static void hclge_reset_event(struct pci_dev *pdev, struct hnae3_handle *handle)
{
struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
@@ -11525,6 +11591,10 @@ struct hnae3_ae_ops hclge_ops = {
.set_vf_rate = hclge_set_vf_rate,
.set_vf_mac = hclge_set_vf_mac,
.get_module_eeprom = hclge_get_module_eeprom,
+#ifdef CONFIG_PM
+ .suspend = hclge_suspend,
+ .resume = hclge_resume,
+#endif
};
static struct hnae3_ae_algo ae_algo = {
--
1.9.1
Alain Volmat (1):
i2c: st: fix missing struct parameter description
Alex Vesker (1):
IB/mlx5: Replace tunnel mpls capability bits for tunnel_offloads
Alexander Duyck (1):
mm: Use fixed constant in page_frag_alloc instead of size + 1
Alexander Sverdlin (1):
genirq/irqdomain: Check pointer in irq_domain_alloc_irqs_hierarchy()
Alexey Dobriyan (1):
null_blk: fix spurious IO errors after failed past-wp access
Andrei Botila (1):
crypto: caam - update xts sector size for large input length
Andy Lutomirski (1):
selftests/x86/ptrace_syscall_32: Fix no-vDSO segfault
Andy Shevchenko (1):
mfd: dln2: Fix sanity checking for endpoints
Aneesh Kumar K.V (1):
powerpc/hash64/devmap: Use H_PAGE_THP_HUGE when setting up huge devmap
PTE entries
Anssi Hannula (1):
tools: gpio: Fix out-of-tree build regression
Ard Biesheuvel (1):
efi/x86: Ignore the memory attributes table on i386
Arvind Sankar (1):
x86/boot: Use unsigned comparison for addresses
Bart Van Assche (2):
null_blk: Fix the null_add_dev() error path
null_blk: Handle null_add_dev() failures properly
Benoit Parrot (1):
media: ti-vpe: cal: fix disable_irqs to only the intended target
Bob Liu (1):
dm zoned: remove duplicate nr_rnd_zones increase in dmz_init_zone()
Bob Peterson (1):
gfs2: Don't demote a glock until its revokes are written
Boqun Feng (1):
locking/lockdep: Avoid recursion in
lockdep_count_{for,back}ward_deps()
Changwei Ge (1):
ocfs2: no need try to truncate file beyond i_size
Chris Wilson (1):
drm: Remove PageReserved manipulation from drm_pci_alloc
Christian Gmeiner (2):
drm/etnaviv: rework perfmon query infrastructure
etnaviv: perfmon: fix total and idle HI cyleces readout
Christoph Niedermaier (1):
cpufreq: imx6q: Fixes unwanted cpu overclocking on i.MX6ULL
Christophe Leroy (1):
powerpc/kprobes: Ignore traps that happened in real mode
Clement Courbet (1):
powerpc: Make setjmp/longjmp signature standard
Cédric Le Goater (1):
powerpc/xive: Use XIVE_BAD_IRQ instead of zero to catch non configured
IPIs
David Hildenbrand (2):
KVM: s390: vsie: Fix region 1 ASCE sanity shadow address checks
KVM: s390: vsie: Fix delivery of addressing exceptions
Dongchun Zhu (1):
media: i2c: ov5695: Fix power on and off sequences
Eric Biggers (2):
fs/filesystems.c: downgrade user-reachable WARN_ONCE() to
pr_warn_once()
kmod: make request_module() return an error when autoloading is
disabled
Eric W. Biederman (1):
signal: Extend exec_id to 64bits
Filipe Manana (2):
Btrfs: fix crash during unmount due to race with delayed inode workers
btrfs: fix missing file extent item for hole after ranged fsync
Fredrik Strupe (1):
arm64: armv8_deprecated: Fix undef_hook mask for thumb setend
Frieder Schrempf (2):
mtd: spinand: Stop using spinand->oobbuf for buffering bad block
markers
mtd: spinand: Do not erase the block before writing a bad block marker
Gao Xiang (1):
erofs: correct the remaining shrink objects
Gary Lin (1):
efi/x86: Fix the deletion of variables in mixed mode
Gilad Ben-Yossef (4):
crypto: ccree - zero out internal struct before use
crypto: ccree - don't mangle the request assoclen
crypto: ccree - dec auth tag size from cryptlen map
crypto: ccree - only try to map auth tag if needed
Greg Kroah-Hartman (1):
Linux 4.19.116
Guoqing Jiang (1):
md: check arrays is suspended in mddev_detach before call quiesce
operations
Gustavo A. R. Silva (1):
MIPS: OCTEON: irq: Fix potential NULL pointer dereference
Hadar Gat (1):
crypto: ccree - improve error handling
Hans de Goede (1):
Input: i8042 - add Acer Aspire 5738z to nomux list
Huacai Chen (1):
MIPS/tlbex: Fix LDDIR usage in setup_pw() for Loongson-3
James Morse (1):
firmware: arm_sdei: fix double-lock on hibernate with shared events
James Smart (2):
nvme-fc: Revert "add module to ops template to allow module
references"
nvme: Treat discovery subsystems as unique subsystems
Jan Engelhardt (1):
acpi/x86: ignore unspecified bit positions in the ACPI global lock
field
John Garry (1):
libata: Remove extra scsi_host_put() in ata_scsi_add_hosts()
Josef Bacik (5):
btrfs: remove a BUG_ON() from merge_reloc_roots()
btrfs: track reloc roots based on their commit root bytenr
btrfs: set update the uuid generation as soon as possible
btrfs: drop block from cache on error in relocation
btrfs: use nofs allocations for running delayed items
Juergen Gross (1):
xen/blkfront: fix memory allocation flags in blkfront_setup_indirect()
Junyong Sun (1):
firmware: fix a double abort case with fw_load_sysfs_fallback
Kai-Heng Feng (1):
libata: Return correct status in sata_pmp_eh_recover_pm() when
ATA_DFLAG_DETACH is set
Kees Cook (1):
slub: improve bit diffusion for freelist ptr obfuscation
Kishon Vijay Abraham I (1):
PCI: endpoint: Fix for concurrent memory allocation in OB address
region
Konstantin Khlebnikov (1):
block: keep bdi->io_pages in sync with max_sectors_kb for stacked
devices
Laurentiu Tudor (1):
powerpc/fsl_booke: Avoid creating duplicate tlb1 entry
Libor Pechacek (1):
powerpc/pseries: Avoid NULL pointer dereference when drmem is
unavailable
Logan Gunthorpe (1):
PCI/switchtec: Fix init_completion race condition with poll_wait()
Lukas Wunner (1):
PCI: pciehp: Fix indefinite wait on sysfs requests
Lyude Paul (1):
drm/dp_mst: Fix clearing payload state on topology disable
Marc Zyngier (1):
irqchip/gic-v4: Provide irq_retrigger to avoid circular locking
dependency
Martin Blumenstingl (1):
thermal: devfreq_cooling: inline all stubs for
CONFIG_DEVFREQ_THERMAL=n
Masami Hiramatsu (1):
ftrace/kprobe: Show the maxactive number on kprobe_events
Mathias Nyman (1):
xhci: bail out early if driver can't accress host in resume
Matt Ranostay (1):
media: i2c: video-i2c: fix build errors due to 'imply hwmon'
Matthew Garrett (1):
tpm: Don't make log failures fatal
Maxime Ripard (1):
arm64: dts: allwinner: h6: Fix PMU compatible
Michael Ellerman (1):
powerpc/64/tm: Don't let userspace set regs->trap via sigreturn
Michael Mueller (1):
s390/diag: fix display of diagnose call statistics
Michael Wang (1):
sched: Avoid scale real weight down to zero
Michal Hocko (1):
selftests: vm: drop dependencies on page flags from mlock2 tests
Mikulas Patocka (1):
dm writecache: add cond_resched to avoid CPU hangs
Nathan Chancellor (2):
rtc: omap: Use define directive for PIN_CONFIG_ACTIVE_HIGH
misc: echo: Remove unnecessary parentheses and simplify check for zero
Neil Armstrong (1):
usb: dwc3: core: add support for disabling SS instances in park mode
Oliver O'Halloran (1):
cpufreq: powernv: Fix use-after-free
Ondrej Jirman (2):
ARM: dts: sun8i-a83t-tbs-a711: HM5065 doesn't like such a high voltage
bus: sunxi-rsb: Return correct data when mixing 16-bit and 8-bit reads
Paul Cercueil (1):
clk: ingenic/jz4770: Exit with error if CGU init failed
Qian Cai (1):
ext4: fix a data race at inode->i_blocks
Qu Wenruo (1):
btrfs: qgroup: ensure qgroup_rescan_running is only set when the
worker is at least queued
Raju Rangoju (1):
cxgb4/ptp: pass the sign of offset delta in FW CMD
Remi Pommarel (1):
ath9k: Handle txpower changes even when TPC is disabled
Robbie Ko (1):
btrfs: fix missing semaphore unlock in btrfs_sync_file
Rosioru Dragos (1):
crypto: mxs-dcp - fix scatterlist linearization for hash
Sahitya Tummala (1):
block: Fix use-after-free issue accessing struct io_cq
Sam Lunt (1):
perf tools: Support Python 3.8+ in Makefile
Sasha Levin (1):
Revert "drm/dp_mst: Remove VCPI while disabling topology mgr"
Sean Christopherson (4):
KVM: nVMX: Properly handle userspace interrupt window request
KVM: x86: Allocate new rmap and large page tracking when moving
memslot
KVM: VMX: Always VMCLEAR in-use VMCSes during crash with kexec support
KVM: x86: Gracefully handle __vmalloc() failure during VM allocation
Sean V Kelley (1):
PCI: Add boot interrupt quirk mechanism for Xeon chipsets
Segher Boessenkool (1):
powerpc: Add attributes for setjmp/longjmp
Shetty, Harshini X (EXT-Sony Mobile) (1):
dm verity fec: fix memory leak in verity_fec_dtr
Simon Gander (1):
hfsplus: fix crash and filesystem corruption when deleting files
Sreekanth Reddy (1):
scsi: mpt3sas: Fix kernel panic observed on soft HBA unplug
Sriharsha Allenki (1):
usb: gadget: f_fs: Fix use after free issue as part of queue failure
Steffen Maier (1):
scsi: zfcp: fix missing erp_lock in port recovery trigger for
point-to-point
Stephan Gerhold (1):
media: venus: hfi_parser: Ignore HEVC encoding for V1
Subash Abhinov Kasiviswanathan (1):
net: qualcomm: rmnet: Allow configuration updates to existing devices
Sungbo Eo (2):
irqchip/versatile-fpga: Handle chained IRQs properly
irqchip/versatile-fpga: Apply clear-mask earlier
Takashi Iwai (6):
ALSA: usb-audio: Add mixer workaround for TRX40 and co
ALSA: hda: Add driver blacklist
ALSA: hda: Fix potential access overflow in beep helper
ALSA: ice1724: Fix invalid access for enumerated ctl items
ALSA: pcm: oss: Fix regression by buffer overflow fix
ALSA: hda/realtek - Add quirk for MSI GL63
Thinh Nguyen (1):
usb: gadget: composite: Inform controller driver of self-powered
Thomas Gleixner (1):
x86/entry/32: Add missing ASM_CLAC to general_protection entry
Thomas Hebb (3):
ALSA: doc: Document PC Beep Hidden Register on Realtek ALC256
ALSA: hda/realtek - Set principled PC Beep configuration for ALC256
ALSA: hda/realtek - Remove now-unnecessary XPS 13 headphone noise
fixups
Thomas Hellstrom (1):
x86: Don't let pgprot_modify() change the page encryption bit
Trond Myklebust (1):
NFS: Fix a page leak in nfs_destroy_unlinked_subrequests()
Vasily Averin (3):
tpm: tpm1_bios_measurements_next should increase position index
tpm: tpm2_bios_measurements_next should increase position index
pstore: pstore_ftrace_seq_next should increase position index
Vitaly Kuznetsov (1):
KVM: VMX: fix crash cleanup when KVM wasn't used
Wen Yang (1):
ipmi: fix hung processes in __get_guid()
Xu Wang (1):
qlcnic: Fix bad kzalloc null test
Yang Xu (1):
KEYS: reaching the keys quotas correctly
Yicong Yang (1):
PCI/ASPM: Clear the correct bits when enabling L1 substates
YueHaibing (1):
powerpc/pseries: Drop pointless static qualifier in vpa_debugfs_init()
Yury Norov (1):
uapi: rename ext2_swab() to swab() and share globally in swab.h
Zheng Wei (1):
net: vxge: fix wrong __VA_ARGS__ usage
Zhenzhong Duan (1):
x86/speculation: Remove redundant arch_smt_update() invocation
Zhiqiang Liu (1):
block, bfq: fix use-after-free in bfq_idle_slice_timer_body
chenqiwu (1):
pstore/platform: fix potential mem leak if pstore_init_fs failed
이경택 (4):
ASoC: fix regwmask
ASoC: dapm: connect virtual mux with default value
ASoC: dpcm: allow start or stop during pause for backend
ASoC: topology: use name_prefix for new kcontrol
Documentation/sound/hd-audio/index.rst | 1 +
Documentation/sound/hd-audio/models.rst | 2 -
Documentation/sound/hd-audio/realtek-pc-beep.rst | 129 ++++++++++++
Makefile | 2 +-
arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts | 4 +-
arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi | 3 +-
arch/arm64/kernel/armv8_deprecated.c | 2 +-
arch/mips/cavium-octeon/octeon-irq.c | 3 +
arch/mips/mm/tlbex.c | 5 +-
arch/powerpc/include/asm/book3s/64/hash-4k.h | 6 +
arch/powerpc/include/asm/book3s/64/hash-64k.h | 8 +-
arch/powerpc/include/asm/book3s/64/pgtable.h | 4 +-
arch/powerpc/include/asm/book3s/64/radix.h | 5 +
arch/powerpc/include/asm/drmem.h | 4 +-
arch/powerpc/include/asm/setjmp.h | 6 +-
arch/powerpc/kernel/Makefile | 3 -
arch/powerpc/kernel/kprobes.c | 3 +
arch/powerpc/kernel/signal_64.c | 4 +-
arch/powerpc/mm/tlb_nohash_low.S | 12 +-
arch/powerpc/platforms/pseries/hotplug-memory.c | 8 +-
arch/powerpc/platforms/pseries/lpar.c | 2 +-
arch/powerpc/sysdev/xive/common.c | 12 +-
arch/powerpc/sysdev/xive/native.c | 4 +-
arch/powerpc/sysdev/xive/spapr.c | 4 +-
arch/powerpc/sysdev/xive/xive-internal.h | 7 +
arch/powerpc/xmon/Makefile | 3 -
arch/s390/kernel/diag.c | 2 +-
arch/s390/kvm/vsie.c | 1 +
arch/s390/mm/gmap.c | 6 +-
arch/x86/boot/compressed/head_32.S | 2 +-
arch/x86/boot/compressed/head_64.S | 4 +-
arch/x86/entry/entry_32.S | 1 +
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/include/asm/pgtable.h | 7 +-
arch/x86/include/asm/pgtable_types.h | 2 +-
arch/x86/kernel/acpi/boot.c | 2 +-
arch/x86/kvm/svm.c | 4 +
arch/x86/kvm/vmx.c | 110 ++++------
arch/x86/kvm/x86.c | 21 +-
arch/x86/platform/efi/efi_64.c | 4 +-
block/bfq-iosched.c | 16 +-
block/blk-ioc.c | 7 +
block/blk-settings.c | 3 +
drivers/ata/libata-pmp.c | 1 +
drivers/ata/libata-scsi.c | 9 +-
drivers/base/firmware_loader/fallback.c | 2 +-
drivers/block/null_blk_main.c | 10 +-
drivers/block/xen-blkfront.c | 17 +-
drivers/bus/sunxi-rsb.c | 2 +-
drivers/char/ipmi/ipmi_msghandler.c | 4 +-
drivers/char/tpm/eventlog/common.c | 12 +-
drivers/char/tpm/eventlog/tpm1.c | 2 +-
drivers/char/tpm/eventlog/tpm2.c | 2 +-
drivers/char/tpm/tpm-chip.c | 4 +-
drivers/char/tpm/tpm.h | 2 +-
drivers/clk/ingenic/jz4770-cgu.c | 4 +-
drivers/cpufreq/imx6q-cpufreq.c | 3 +
drivers/cpufreq/powernv-cpufreq.c | 6 +
drivers/crypto/caam/caamalg_desc.c | 16 +-
drivers/crypto/ccree/cc_aead.c | 56 +++--
drivers/crypto/ccree/cc_aead.h | 1 +
drivers/crypto/ccree/cc_buffer_mgr.c | 108 +++++-----
drivers/crypto/mxs-dcp.c | 58 +++--
drivers/firmware/arm_sdei.c | 32 ++-
drivers/firmware/efi/efi.c | 2 +-
drivers/gpu/drm/drm_dp_mst_topology.c | 19 +-
drivers/gpu/drm/drm_pci.c | 25 +--
drivers/gpu/drm/etnaviv/etnaviv_perfmon.c | 103 +++++++--
drivers/i2c/busses/i2c-st.c | 1 +
drivers/infiniband/hw/mlx5/main.c | 6 +-
drivers/input/serio/i8042-x86ia64io.h | 11 +
drivers/irqchip/irq-gic-v3-its.c | 6 +
drivers/irqchip/irq-versatile-fpga.c | 18 +-
drivers/md/dm-verity-fec.c | 1 +
drivers/md/dm-writecache.c | 6 +-
drivers/md/dm-zoned-metadata.c | 1 -
drivers/md/md.c | 2 +-
drivers/media/i2c/ov5695.c | 49 +++--
drivers/media/i2c/video-i2c.c | 2 +-
drivers/media/platform/qcom/venus/hfi_parser.c | 1 +
drivers/media/platform/ti-vpe/cal.c | 16 +-
drivers/mfd/dln2.c | 9 +-
drivers/misc/echo/echo.c | 2 +-
drivers/mtd/nand/spi/core.c | 17 +-
drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c | 3 +
drivers/net/ethernet/neterion/vxge/vxge-config.h | 2 +-
drivers/net/ethernet/neterion/vxge/vxge-main.h | 14 +-
.../net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c | 2 +-
drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c | 31 +--
drivers/net/wireless/ath/ath9k/main.c | 3 +
drivers/nvme/host/core.c | 11 +
drivers/nvme/host/fc.c | 14 +-
drivers/nvme/target/fcloop.c | 1 -
drivers/pci/endpoint/pci-epc-mem.c | 10 +-
drivers/pci/hotplug/pciehp_hpc.c | 14 +-
drivers/pci/pcie/aspm.c | 4 +-
drivers/pci/quirks.c | 80 ++++++-
drivers/pci/switch/switchtec.c | 2 +-
drivers/rtc/rtc-omap.c | 4 +-
drivers/s390/scsi/zfcp_erp.c | 2 +-
drivers/scsi/lpfc/lpfc_nvme.c | 2 -
drivers/scsi/mpt3sas/mpt3sas_scsih.c | 8 +-
drivers/scsi/qla2xxx/qla_nvme.c | 1 -
drivers/staging/erofs/utils.c | 2 +-
drivers/usb/dwc3/core.c | 5 +
drivers/usb/dwc3/core.h | 4 +
drivers/usb/gadget/composite.c | 9 +
drivers/usb/gadget/function/f_fs.c | 1 +
drivers/usb/host/xhci.c | 4 +-
fs/btrfs/async-thread.c | 8 +
fs/btrfs/async-thread.h | 1 +
fs/btrfs/delayed-inode.c | 13 ++
fs/btrfs/disk-io.c | 27 ++-
fs/btrfs/file.c | 11 +
fs/btrfs/qgroup.c | 11 +-
fs/btrfs/relocation.c | 35 ++--
fs/exec.c | 2 +-
fs/ext4/inode.c | 2 +-
fs/filesystems.c | 4 +-
fs/gfs2/glock.c | 3 +
fs/hfsplus/attributes.c | 4 +
fs/nfs/write.c | 1 +
fs/ocfs2/alloc.c | 4 +
fs/pstore/inode.c | 5 +-
fs/pstore/platform.c | 4 +-
include/linux/devfreq_cooling.h | 2 +-
include/linux/iocontext.h | 1 +
include/linux/mlx5/mlx5_ifc.h | 9 +-
include/linux/nvme-fc-driver.h | 4 -
include/linux/pci-epc.h | 3 +
include/linux/sched.h | 4 +-
include/linux/swab.h | 1 +
include/uapi/linux/swab.h | 10 +
kernel/cpu.c | 5 +-
kernel/irq/irqdomain.c | 10 +-
kernel/kmod.c | 4 +-
kernel/locking/lockdep.c | 4 +
kernel/sched/sched.h | 8 +-
kernel/signal.c | 2 +-
kernel/trace/trace_kprobe.c | 2 +
lib/find_bit.c | 16 +-
mm/page_alloc.c | 8 +-
mm/slub.c | 2 +-
security/keys/key.c | 2 +-
security/keys/keyctl.c | 4 +-
sound/core/oss/pcm_plugin.c | 32 ++-
sound/pci/hda/hda_beep.c | 6 +-
sound/pci/hda/hda_intel.c | 16 ++
sound/pci/hda/patch_realtek.c | 50 +----
sound/pci/ice1712/prodigy_hifi.c | 4 +-
sound/soc/soc-dapm.c | 8 +-
sound/soc/soc-ops.c | 4 +-
sound/soc/soc-pcm.c | 6 +-
sound/soc/soc-topology.c | 2 +-
sound/usb/mixer_maps.c | 28 +++
tools/gpio/Makefile | 2 +-
tools/perf/Makefile.config | 11 +-
tools/testing/selftests/vm/mlock2-tests.c | 233 ++++-----------------
tools/testing/selftests/x86/ptrace_syscall.c | 8 +-
159 files changed, 1190 insertions(+), 777 deletions(-)
create mode 100644 Documentation/sound/hd-audio/realtek-pc-beep.rst
--
1.8.3
From: huangjun <huangjun63(a)huawei.com>
If we want ACPI GED to support wakeup from freeze, we need to implement
the suspend/resume callbacks. In these two callbacks, the ACPI _GPO and
_GPP methods are called to set the sleep flag and to debounce
("anti-shake") the power button.
Signed-off-by: Huangjun <huangjun63(a)huawei.com>
---
drivers/acpi/evged.c | 82 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 82 insertions(+)
diff --git a/drivers/acpi/evged.c b/drivers/acpi/evged.c
index f13ba2c07667..84656fc09d15 100644
--- a/drivers/acpi/evged.c
+++ b/drivers/acpi/evged.c
@@ -46,11 +46,14 @@
#include <linux/list.h>
#include <linux/platform_device.h>
#include <linux/acpi.h>
+#include <linux/timer.h>
+#include <linux/jiffies.h>
#define MODULE_NAME "acpi-ged"
struct acpi_ged_device {
struct device *dev;
+ struct timer_list timer;
struct list_head event_list;
};
@@ -148,6 +151,8 @@ static int ged_probe(struct platform_device *pdev)
dev_err(&pdev->dev, "unable to parse the _CRS record\n");
return -EINVAL;
}
+
+ timer_setup(&geddev->timer, NULL, 0);
platform_set_drvdata(pdev, geddev);
return 0;
@@ -164,6 +169,7 @@ static void ged_shutdown(struct platform_device *pdev)
dev_dbg(geddev->dev, "GED releasing GSI %u @ IRQ %u\n",
event->gsi, event->irq);
}
+ del_timer(&geddev->timer);
}
static int ged_remove(struct platform_device *pdev)
@@ -177,6 +183,78 @@ static const struct acpi_device_id ged_acpi_ids[] = {
{},
};
+#ifdef CONFIG_PM_SLEEP
+static acpi_status ged_acpi_execute(struct device *dev, char *method, u64 arg)
+{
+ acpi_status acpi_ret;
+ acpi_handle method_handle;
+
+ acpi_ret = acpi_get_handle(ACPI_HANDLE(dev), method, &method_handle);
+
+ if (ACPI_FAILURE(acpi_ret)) {
+ dev_err(dev, "cannot locate %s method\n", method);
+ return AE_NOT_FOUND;
+ }
+
+ acpi_ret = acpi_execute_simple_method(method_handle, NULL, arg);
+ if (ACPI_FAILURE(acpi_ret)) {
+ dev_err(dev, "%s method execution failed\n", method);
+ return AE_ERROR;
+ }
+
+ return AE_OK;
+}
+
+static void ged_timer_callback(struct timer_list *t)
+{
+ struct acpi_ged_device *geddev = from_timer(geddev, t, timer);
+ struct acpi_ged_event *event, *next;
+
+ list_for_each_entry_safe(event, next, &geddev->event_list, node) {
+ ged_acpi_execute(geddev->dev, "_GPP", event->gsi);
+ }
+}
+
+static int ged_suspend(struct device *dev)
+{
+ struct acpi_ged_device *geddev = dev_get_drvdata(dev);
+ struct acpi_ged_event *event, *next;
+ acpi_status acpi_ret;
+
+ list_for_each_entry_safe(event, next, &geddev->event_list, node) {
+ acpi_ret = ged_acpi_execute(dev, "_GPO", event->gsi);
+
+ if (acpi_ret == AE_ERROR)
+ return -EINVAL;
+
+ enable_irq_wake(event->irq);
+ }
+ return 0;
+}
+
+static int ged_resume(struct device *dev)
+{
+ struct acpi_ged_device *geddev = dev_get_drvdata(dev);
+ struct acpi_ged_event *event, *next;
+
+ list_for_each_entry_safe(event, next, &geddev->event_list, node) {
+ disable_irq_wake(event->irq);
+ }
+
+ /* use timer to complete 4s anti-shake */
+ geddev->timer.expires = jiffies + (4 * HZ);
+ geddev->timer.function = ged_timer_callback;
+ add_timer(&geddev->timer);
+
+ return 0;
+}
+
+static const struct dev_pm_ops ged_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(ged_suspend, ged_resume)
+};
+#endif
+
+
static struct platform_driver ged_driver = {
.probe = ged_probe,
.remove = ged_remove,
@@ -184,6 +262,10 @@ static struct platform_driver ged_driver = {
.driver = {
.name = MODULE_NAME,
.acpi_match_table = ACPI_PTR(ged_acpi_ids),
+#ifdef CONFIG_PM_SLEEP
+ .pm = &ged_pm_ops,
+#endif
+
},
};
builtin_platform_driver(ged_driver);
--
2.20.1

From: Sunnanyong <sunnanyong(a)huawei.com>
If we want to support S4 (Suspend-to-Disk), it is necessary to guarantee
that the ITS tables are at the same address in the booting kernel and
the resumed kernel. That covers all the ITS tables as well as the
redistributors'.
To support this, allocate the ITT memory from a static memory pool instead.
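The pool scheme, as a minimal sketch (constants mirror the patch; names
are illustrative): a fixed set of pools is allocated once at ITS init,
and each device ID maps by pure arithmetic to a fixed 512-byte slot, so
a given device always gets the same ITT address for a given pool layout:

#include <stddef.h>

#define POOL_SIZE	(4UL << 20)		/* 4 MiB per pool */
#define SLOT_SIZE	512UL			/* one ITT slot */
#define SLOTS_PER_POOL	(POOL_SIZE / SLOT_SIZE)	/* 8192 slots */
#define NR_POOLS	16UL

static void *pools[NR_POOLS];	/* allocated once at ITS init */

/* dev_id -> slot is deterministic, which is what keeps the ITT
 * at a stable address across suspend-to-disk. */
static void *itt_slot(unsigned long dev_id)
{
	unsigned long pool = dev_id / SLOTS_PER_POOL;
	unsigned long idx  = dev_id % SLOTS_PER_POOL;

	if (pool >= NR_POOLS || !pools[pool])
		return NULL;

	return (char *)pools[pool] + idx * SLOT_SIZE;
}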
Signed-off-by: Sunnanyong <sunnanyong(a)huawei.com>
---
drivers/irqchip/irq-gic-v3-its.c | 72 ++++++++++++++++++++++++++++++--
1 file changed, 69 insertions(+), 3 deletions(-)
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 860f3ef2969e..9de585fe74fb 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -189,6 +189,14 @@ static DEFINE_IDA(its_vpeid_ida);
#define gic_data_rdist_rd_base() (gic_data_rdist()->rd_base)
#define gic_data_rdist_vlpi_base() (gic_data_rdist_rd_base() + SZ_128K)
+static void *its_mem_pool_alloc(int dev_id);
+#define ITS_MEM_POOL_SIZE (SZ_4M)
+#define ITS_MEM_POOL_MAX (16)
+#define GITS_OTHER_OFFSET 0x20000
+#define GITS_OTHER_REG_SIZE 0x100
+#define GITS_FUNC_REG_OFFSET 0x80
+static void *its_mem_pool[ITS_MEM_POOL_MAX] = {0};
+
static u16 get_its_list(struct its_vm *vm)
{
struct its_node *its;
@@ -2436,7 +2444,11 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id,
nr_ites = max(2, nvecs);
sz = nr_ites * its->ite_size;
sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1;
- itt = kzalloc_node(sz, GFP_KERNEL, its->numa_node);
+
+ itt = its_mem_pool_alloc(dev_id);
+ if (!itt)
+ itt = kzalloc_node(sz, GFP_KERNEL, its->numa_node);
+
if (alloc_lpis) {
lpi_map = its_lpi_alloc(nvecs, &lpi_base, &nr_lpis);
if (lpi_map)
@@ -2450,7 +2462,6 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id,
if (!dev || !itt || !col_map || (!lpi_map && alloc_lpis)) {
kfree(dev);
- kfree(itt);
kfree(lpi_map);
kfree(col_map);
return NULL;
@@ -2486,7 +2497,6 @@ static void its_free_device(struct its_device *its_dev)
raw_spin_lock_irqsave(&its_dev->its->lock, flags);
list_del(&its_dev->entry);
raw_spin_unlock_irqrestore(&its_dev->its->lock, flags);
- kfree(its_dev->itt);
kfree(its_dev);
}
@@ -3798,11 +3808,41 @@ static int redist_disable_lpis(void)
return 0;
}
+static void its_cpu_clear_cache(void)
+{
+ struct its_node *its;
+ u64 val = 0;
+ void __iomem *func_base;
+
+ raw_spin_lock(&its_lock);
+
+ list_for_each_entry(its, &its_nodes, entry) {
+ func_base = ioremap(its->phys_base + GITS_OTHER_OFFSET,
+ GITS_OTHER_REG_SIZE);
+ if (!func_base) {
+ pr_err("ITS@%p : Unable to map ITS OTHER registers\n",
+ (void *)(its->phys_base + GITS_OTHER_OFFSET));
+ raw_spin_unlock(&its_lock);
+ return;
+ }
+
+ val = readl_relaxed(func_base + GITS_FUNC_REG_OFFSET);
+ val = val | (0x7 << 16);
+ writel_relaxed(val, func_base + GITS_FUNC_REG_OFFSET);
+ dsb(sy);
+ iounmap(func_base);
+ }
+
+ raw_spin_unlock(&its_lock);
+}
+
+
int its_cpu_init(void)
{
if (!list_empty(&its_nodes)) {
int ret;
+ its_cpu_clear_cache();
ret = redist_disable_lpis();
if (ret)
return ret;
@@ -4001,6 +4041,7 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists,
struct its_node *its;
bool has_v4 = false;
int err;
+ int i;
its_parent = parent_domain;
of_node = to_of_node(handle);
@@ -4014,6 +4055,16 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists,
return -ENXIO;
}
+ for (i = 0; i < ITS_MEM_POOL_MAX; i++) {
+ if (!its_mem_pool[i]) {
+ its_mem_pool[i] = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ get_order(ITS_MEM_POOL_SIZE));
+ if (!its_mem_pool[i])
+ pr_err("err:[its mem[%d]] has no memory\n", i);
+ }
+ }
+
+
gic_rdists = rdists;
err = allocate_lpi_tables();
@@ -4035,3 +4086,18 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists,
return 0;
}
+
+void *its_mem_pool_alloc(int dev_id)
+{
+ int pool_num = dev_id / (ITS_MEM_POOL_SIZE / SZ_512);
+ int idx = dev_id % (ITS_MEM_POOL_SIZE / SZ_512);
+ void *addr = NULL;
+
+ if (pool_num >= ITS_MEM_POOL_MAX || !its_mem_pool[pool_num]) {
+ pr_err("[its mem[%d]] alloc error\n", pool_num);
+ return NULL;
+ }
+
+ addr = its_mem_pool[pool_num] + idx * SZ_512;
+ return addr;
+}
--
2.20.1
As a new x86 CPU vendor, Chengdu Haiguang IC Design Co., Ltd (Hygon)
is a joint venture between AMD and Haiguang Information Technology Co.,
Ltd., and aims at providing high-performance x86 processors for the
China server market.
The first-generation Hygon processor (Dhyana) originates from AMD
technology and shares most of its architecture with AMD's family 17h,
but has a different CPU vendor ID ("HygonGenuine"), PCIe device vendor
ID (0x1D94) and family series number (family 18h).
To enable Linux kernel support for Hygon's CPU, we added a new
vendor type (X86_VENDOR_HYGON, with a value of 9) in arch/x86/include/
asm/processor.h, and shared most of the kernel support code with AMD
family 17h.
Since Hygon will negotiate with AMD to make sure that only Hygon uses
family 18h, we try to minimize code modification and share most code
with AMD under this consideration.
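The sharing pattern this describes, as a hypothetical sketch (the
helper is illustrative, not a function from the series): code paths
keyed to AMD family 17h simply accept Hygon family 18h as well:

/* Kernel vendor IDs: X86_VENDOR_AMD is 2; this series adds
 * X86_VENDOR_HYGON with the value 9. */
#define X86_VENDOR_AMD		2
#define X86_VENDOR_HYGON	9

static int uses_zen_code_path(unsigned int vendor, unsigned int family)
{
	return (vendor == X86_VENDOR_AMD && family == 0x17) ||
	       (vendor == X86_VENDOR_HYGON && family == 0x18);
}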
This patch series has been applied and tested successfully on Hygon
Dhyana SoC silicon. It was also tested on an AMD EPYC (family 17h)
processor; it works fine and does no harm to the existing code.
This patch series is created for the current branch openEuler-1.0-LTS.
References:
[1] Linux kernel patches for Hygon Dhyana, merged in 4.20:
https://git.kernel.org/tip/c9661c1e80b609cd038db7c908e061f0535804ef
[2] MSR and CPUID definition:
https://www.amd.com/system/files/TechDocs/54945_PPR_Family_17h_Models_00h-0…
Pu Wen (22):
x86/cpu: Create Hygon Dhyana architecture support file
x86/cpu: Get cache info and setup cache cpumap for Hygon Dhyana
x86/cpu/mtrr: Support TOP_MEM2 and get MTRR number
x86/smpboot: Do not use BSP INIT delay and MWAIT to idle on Dhyana
x86/events: Add Hygon Dhyana support to PMU infrastructure
x86/alternative: Init ideal_nops for Hygon Dhyana
x86/amd_nb: Check vendor in AMD-only functions
x86/pci, x86/amd_nb: Add Hygon Dhyana support to PCI and northbridge
x86/apic: Add Hygon Dhyana support
x86/bugs: Add Hygon Dhyana to the respective mitigation machinery
x86/mce: Add Hygon Dhyana support to the MCA infrastructure
x86/kvm: Add Hygon Dhyana support to KVM
x86/xen: Add Hygon Dhyana support to Xen
ACPI: Add Hygon Dhyana support
cpufreq: Add Hygon Dhyana support
EDAC, amd64: Add Hygon Dhyana support
tools/cpupower: Add Hygon Dhyana support
hwmon: (k10temp) Add Hygon Dhyana support
x86/CPU/hygon: Fix phys_proc_id calculation logic for multi-die
processors
i2c-piix4: Add Hygon Dhyana SMBus support
x86/amd_nb: Make hygon_nb_misc_ids static
NTB: Add Hygon Device ID
Documentation/i2c/busses/i2c-piix4 | 2 +
MAINTAINERS | 6 +
arch/x86/Kconfig.cpu | 14 +
arch/x86/events/amd/core.c | 4 +
arch/x86/events/amd/uncore.c | 20 +-
arch/x86/events/core.c | 4 +
arch/x86/include/asm/amd_nb.h | 3 +
arch/x86/include/asm/cacheinfo.h | 1 +
arch/x86/include/asm/kvm_emulate.h | 4 +
arch/x86/include/asm/mce.h | 2 +
arch/x86/include/asm/processor.h | 3 +-
arch/x86/include/asm/virtext.h | 5 +-
arch/x86/kernel/alternative.c | 4 +
arch/x86/kernel/amd_nb.c | 49 ++-
arch/x86/kernel/apic/apic.c | 7 +
arch/x86/kernel/apic/probe_32.c | 1 +
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/bugs.c | 4 +-
arch/x86/kernel/cpu/cacheinfo.c | 31 +-
arch/x86/kernel/cpu/common.c | 4 +
arch/x86/kernel/cpu/cpu.h | 1 +
arch/x86/kernel/cpu/hygon.c | 413 ++++++++++++++++++
arch/x86/kernel/cpu/mce/core.c | 20 +-
arch/x86/kernel/cpu/mce/severity.c | 3 +-
arch/x86/kernel/cpu/mtrr/cleanup.c | 3 +-
arch/x86/kernel/cpu/mtrr/mtrr.c | 2 +-
arch/x86/kernel/cpu/perfctr-watchdog.c | 2 +
arch/x86/kernel/smpboot.c | 4 +-
arch/x86/kvm/emulate.c | 11 +-
arch/x86/pci/amd_bus.c | 6 +-
arch/x86/xen/pmu.c | 12 +-
drivers/acpi/acpi_pad.c | 1 +
drivers/acpi/processor_idle.c | 1 +
drivers/cpufreq/acpi-cpufreq.c | 5 +
drivers/cpufreq/amd_freq_sensitivity.c | 9 +-
drivers/edac/amd64_edac.c | 10 +-
drivers/edac/mce_amd.c | 4 +-
drivers/hwmon/k10temp.c | 3 +-
drivers/i2c/busses/Kconfig | 1 +
drivers/i2c/busses/i2c-piix4.c | 15 +-
drivers/ntb/hw/amd/ntb_hw_amd.c | 1 +
include/linux/pci_ids.h | 2 +
tools/power/cpupower/utils/cpufreq-info.c | 6 +-
tools/power/cpupower/utils/helpers/amd.c | 4 +-
tools/power/cpupower/utils/helpers/cpuid.c | 8 +-
tools/power/cpupower/utils/helpers/helpers.h | 2 +-
tools/power/cpupower/utils/helpers/misc.c | 2 +-
.../utils/idle_monitor/mperf_monitor.c | 3 +-
48 files changed, 668 insertions(+), 55 deletions(-)
create mode 100644 arch/x86/kernel/cpu/hygon.c
--
2.23.0
Alexander Usyskin (1):
mei: me: add cedar fork device ids
Amritha Nambiar (1):
net: Fix Tx hash bound checking
Arun KS (1):
arm64: Fix size of __early_cpu_boot_status
Avihai Horon (1):
RDMA/cm: Update num_paths in cma_resolve_iboe_route error flow
Chris Lew (1):
rpmsg: glink: Remove chunk size word align warning
Daniel Jordan (1):
padata: always acquire cpu_hotplug_lock before pinst->lock
David Ahern (1):
tools/accounting/getdelays.c: fix netlink attribute length
David Howells (1):
rxrpc: Fix sendmsg(MSG_WAITALL) handling
Eugene Syromiatnikov (1):
coresight: do not use the BIT() macro in the UAPI header
Eugeniy Paltsev (1):
initramfs: restore default compression behavior
Florian Fainelli (2):
net: dsa: bcm_sf2: Do not register slave MDIO bus with OF
net: dsa: bcm_sf2: Ensure correct sub-node is parsed
Geoffrey Allott (1):
ALSA: hda/ca0132 - Add Recon3Di quirk to handle integrated sound on
EVGA X99 Classified motherboard
Gerd Hoffmann (1):
drm/bochs: downgrade pci_request_region failure from error to warning
Greg Kroah-Hartman (1):
Linux 4.19.115
Hans Verkuil (1):
drm_dp_mst_topology: fix broken
drm_dp_sideband_parse_remote_dpcd_read()
Hans de Goede (2):
extcon: axp288: Add wakeup support
power: supply: axp288_charger: Add special handling for HP Pavilion x2
10
Ilya Dryomov (1):
ceph: canonicalize server path in place
James Zhu (1):
drm/amdgpu: fix typo for vcn1 idle check
Jarod Wilson (1):
ipv6: don't auto-add link-local address to lag ports
Jason A. Donenfeld (1):
random: always use batched entropy for get_random_u{32, 64}
Jason Gunthorpe (2):
RDMA/ucma: Put a lock around every call to the rdma_cm layer
RDMA/cma: Teach lockdep about the order of rtnl and lock
Jisheng Zhang (1):
net: stmmac: dwmac1000: fix out-of-bounds mac address reg setting
Kaike Wan (2):
IB/hfi1: Call kobject_put() when kobject_init_and_add() fails
IB/hfi1: Fix memory leaks in sysfs registration and unregistration
Kishon Vijay Abraham I (2):
misc: pci_endpoint_test: Fix to support > 10 pci-endpoint-test devices
misc: pci_endpoint_test: Avoid using module parameter to determine
irqtype
Len Brown (2):
tools/power turbostat: Fix gcc build warnings
tools/power turbostat: Fix missing SYS_LPI counter on some Chromebooks
Lucas Stach (1):
drm/etnaviv: replace MMU flush marker with flush sequence
Marcelo Ricardo Leitner (1):
sctp: fix possibly using a bad saddr with a given dst
Mario Kleiner (1):
drm/amd/display: Add link_rate quirk for Apple 15" MBP 2017
Martin Kaiser (1):
hwrng: imx-rngc - fix an error path
Oleksij Rempel (1):
net: phy: micrel: kszphy_resume(): add delay after genphy_resume()
before accessing PHY registers
Paul Cercueil (1):
ASoC: jz4740-i2s: Fix divider written at incorrect offset in register
Petr Machata (1):
mlxsw: spectrum_flower: Do not stop at FLOW_ACTION_VLAN_MANGLE
Prabhath Sajeepa (1):
nvme-rdma: Avoid double freeing of async event data
Qian Cai (1):
ipv4: fix a RCU-list lock in fib_triestat_seq_show
Qiujun Huang (3):
sctp: fix refcount bug in sctp_wfree
Bluetooth: RFCOMM: fix ODEBUG bug in rfcomm_dev_ioctl
fbcon: fix null-ptr-deref in fbcon_switch
Rob Clark (2):
drm/msm: stop abusing dma_map/unmap for cache
drm/msm: Use the correct dma_sync calls in msm_gem
Roger Quadros (1):
usb: dwc3: don't set gadget->is_otg flag
Sean Young (1):
media: rc: IR signal for Panasonic air conditioner too long
Taniya Das (1):
clk: qcom: rcg: Return failure for RCG update
Thinh Nguyen (1):
usb: dwc3: gadget: Wrap around when skip TRBs
William Dauchy (1):
net, ip_tunnel: fix interface lookup with no key
Xiubo Li (1):
ceph: remove the extra slashes in the server path
YueHaibing (1):
misc: rtsx: set correct pcr_ops for rts522A
Makefile | 2 +-
arch/arm64/kernel/head.S | 2 +-
drivers/char/hw_random/imx-rngc.c | 4 +-
drivers/char/random.c | 20 ++------
drivers/clk/qcom/clk-rcg2.c | 2 +-
drivers/extcon/extcon-axp288.c | 32 ++++++++++++
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 2 +-
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 11 +++++
drivers/gpu/drm/bochs/bochs_hw.c | 6 +--
drivers/gpu/drm/drm_dp_mst_topology.c | 1 +
drivers/gpu/drm/etnaviv/etnaviv_buffer.c | 10 ++--
drivers/gpu/drm/etnaviv/etnaviv_gpu.h | 1 +
drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 6 +--
drivers/gpu/drm/etnaviv/etnaviv_mmu.h | 2 +-
drivers/gpu/drm/msm/msm_gem.c | 47 ++++++++++++++++--
drivers/infiniband/core/cma.c | 14 ++++++
drivers/infiniband/core/ucma.c | 49 ++++++++++++++++++-
drivers/infiniband/hw/hfi1/sysfs.c | 26 +++++++---
drivers/media/rc/lirc_dev.c | 2 +-
drivers/misc/cardreader/rts5227.c | 1 +
drivers/misc/mei/hw-me-regs.h | 2 +
drivers/misc/mei/pci-me.c | 2 +
drivers/misc/pci_endpoint_test.c | 14 ++++--
drivers/net/dsa/bcm_sf2.c | 9 +++-
.../net/ethernet/mellanox/mlxsw/spectrum_flower.c | 8 +--
.../net/ethernet/stmicro/stmmac/dwmac1000_core.c | 2 +-
drivers/net/phy/micrel.c | 7 +++
drivers/nvme/host/rdma.c | 8 +--
drivers/power/supply/axp288_charger.c | 57 +++++++++++++++++++++-
drivers/rpmsg/qcom_glink_native.c | 3 --
drivers/usb/dwc3/gadget.c | 3 +-
drivers/video/fbdev/core/fbcon.c | 3 ++
fs/ceph/super.c | 56 +++++++++++++--------
fs/ceph/super.h | 2 +-
include/uapi/linux/coresight-stm.h | 6 ++-
kernel/padata.c | 4 +-
net/bluetooth/rfcomm/tty.c | 4 +-
net/core/dev.c | 2 +
net/ipv4/fib_trie.c | 3 ++
net/ipv4/ip_tunnel.c | 6 +--
net/ipv6/addrconf.c | 4 ++
net/rxrpc/sendmsg.c | 4 +-
net/sctp/ipv6.c | 20 +++++---
net/sctp/protocol.c | 28 +++++++----
net/sctp/socket.c | 31 +++++++++---
sound/pci/hda/patch_ca0132.c | 1 +
sound/soc/jz4740/jz4740-i2s.c | 2 +-
tools/accounting/getdelays.c | 2 +-
tools/power/x86/turbostat/turbostat.c | 27 +++++-----
usr/Kconfig | 22 ++++-----
50 files changed, 434 insertions(+), 148 deletions(-)
--
1.8.3
From: "Paul E. McKenney" <paulmck(a)kernel.org>
mainline inclusion
from mainline-v5.6-rc1
commit 844a378de3372c923909681706d62336d702531e
category: bugfix
bugzilla: 28851
CVE: NA
-------------------------------------------------------------------------
The ->srcu_last_gp_end field is accessed from any CPU at any time
by synchronize_srcu(), so non-initialization references need to use
READ_ONCE() and WRITE_ONCE(). This commit therefore makes that change.
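(Editorial note, not part of the patch: the pattern being applied is
that the single writer publishes with WRITE_ONCE() and every reader
snapshots with READ_ONCE(), so the compiler can neither tear nor
refetch the shared field. Field names below are taken from the diff.)

	/* writer, at grace-period end */
	WRITE_ONCE(sp->srcu_last_gp_end, ktime_get_mono_fast_ns());

	/* reader, on any CPU at any time */
	unsigned long tlast = READ_ONCE(sp->srcu_last_gp_end);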
Reported-by: syzbot+08f3e9d26e5541e1ecf2(a)syzkaller.appspotmail.com
Acked-by: Marco Elver <elver(a)google.com>
Signed-off-by: Paul E. McKenney <paulmck(a)kernel.org>
Conflicts:
kernel/rcu/srcutree.c
Signed-off-by: Zhen Lei <thunder.leizhen(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/rcu/srcutree.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 4b0a6e3..7bd0204 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -552,7 +552,7 @@ static void srcu_gp_end(struct srcu_struct *sp)
idx = rcu_seq_state(sp->srcu_gp_seq);
WARN_ON_ONCE(idx != SRCU_STATE_SCAN2);
cbdelay = srcu_get_delay(sp);
- sp->srcu_last_gp_end = ktime_get_mono_fast_ns();
+ WRITE_ONCE(sp->srcu_last_gp_end, ktime_get_mono_fast_ns());
rcu_seq_end(&sp->srcu_gp_seq);
gpseq = rcu_seq_current(&sp->srcu_gp_seq);
if (ULONG_CMP_LT(sp->srcu_gp_seq_needed_exp, gpseq))
@@ -780,6 +780,7 @@ static bool srcu_might_be_idle(struct srcu_struct *sp)
unsigned long flags;
struct srcu_data *sdp;
unsigned long t;
+ unsigned long tlast;
/* If the local srcu_data structure has callbacks, not idle. */
local_irq_save(flags);
@@ -798,9 +799,9 @@ static bool srcu_might_be_idle(struct srcu_struct *sp)
/* First, see if enough time has passed since the last GP. */
t = ktime_get_mono_fast_ns();
+ tlast = READ_ONCE(sp->srcu_last_gp_end);
if (exp_holdoff == 0 ||
- time_in_range_open(t, sp->srcu_last_gp_end,
- sp->srcu_last_gp_end + exp_holdoff))
+ time_in_range_open(t, tlast, tlast + exp_holdoff))
return false; /* Too soon after last GP. */
/* Next, check for probable idleness. */
--
1.8.3
From: Yunsheng Lin <linyunsheng(a)huawei.com>
mainline inclusion
from mainline-v5.4-rc1
commit 6b0c54e7f2715997c366e8374209bc74259b0a59
category: bugfix
bugzilla: 21318
CVE: NA
-------------------------------------------------------------------------
In iommu_dma_init_domain(), the cookie is dereferenced before the NULL
check.
This patch moves the dereference after the NULL check.
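(Editorial sketch of the resulting order, matching the diff below: the
member address is taken only after the pointer has been validated.)

	struct iova_domain *iovad;

	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
		return -EINVAL;

	iovad = &cookie->iovad;	/* safe: cookie checked above */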
Fixes: fdbe574eb693 ("iommu/dma: Allow MSI-only cookies")
Signed-off-by: Yunsheng Lin <linyunsheng(a)huawei.com>
Signed-off-by: Joerg Roedel <jroedel(a)suse.de>
Conflicts:
drivers/iommu/dma-iommu.c
Signed-off-by: Zhen Lei <thunder.leizhen(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/iommu/dma-iommu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 64ae17e8b..b68d9fd 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -290,13 +290,15 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
u64 size, struct device *dev)
{
struct iommu_dma_cookie *cookie = domain->iova_cookie;
- struct iova_domain *iovad = &cookie->iovad;
unsigned long order, base_pfn, end_pfn;
+ struct iova_domain *iovad;
int attr;
if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
return -EINVAL;
+ iovad = &cookie->iovad;
+
/* Use the smallest supported page size for IOVA granularity */
order = __ffs(domain->pgsize_bitmap);
base_pfn = max_t(unsigned long, 1, base >> order);
--
1.8.3
From: Shaozhengchao <shaozhengchao(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: 4472
-----------------------------------------------------------------------
Fix an out-of-bounds access caused by unvalidated user input.
To solve the problem, the kernel driver now imposes restrictions on
each input.
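(Editorial sketch of the check the patch adds: a user-controlled
length is bounded before it is used for any copy or indexing.)

	if (csr_write_msg->rd_len > TOOL_COUNTER_MAX_LEN) {	/* user input */
		pr_err("Csr read or write len is invalid!\n");
		return -EINVAL;
	}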
Signed-off-by: Shaozhengchao <shaozhengchao(a)huawei.com>
Reviewed-by: Luoshaokai <luoshaokai(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic/hinic_nictool.c | 18 ++++++++++++++++++
drivers/net/ethernet/huawei/hinic/hinic_nictool.h | 2 ++
drivers/net/ethernet/huawei/hinic/hinic_sml_counter.c | 16 +++++++++++++---
3 files changed, 33 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nictool.c b/drivers/net/ethernet/huawei/hinic/hinic_nictool.c
index 46dd9ec..df01088 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nictool.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nictool.c
@@ -1712,6 +1712,19 @@ static u32 get_up_timeout_val(enum hinic_mod_type mod, u8 cmd)
return UP_COMP_TIME_OUT_VAL;
}
+static int check_useparam_valid(struct msg_module *nt_msg, void *buf_in)
+{
+ struct csr_write_st *csr_write_msg = (struct csr_write_st *)buf_in;
+ u32 rd_len = csr_write_msg->rd_len;
+
+ if (rd_len > TOOL_COUNTER_MAX_LEN) {
+ pr_err("Csr read or write len is invalid!\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static int send_to_up(void *hwdev, struct msg_module *nt_msg,
void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
{
@@ -1744,6 +1757,9 @@ static int send_to_up(void *hwdev, struct msg_module *nt_msg,
}
} else if (nt_msg->up_cmd.up_db.up_api_type == API_CHAIN) {
+ if (check_useparam_valid(nt_msg, buf_in))
+ return -EINVAL;
+
if (nt_msg->up_cmd.up_db.chipif_cmd == API_CSR_WRITE) {
ret = api_csr_write(hwdev, nt_msg, buf_in,
in_size, buf_out, out_size);
@@ -1994,6 +2010,8 @@ static int get_all_chip_id_cmd(struct msg_module *nt_msg)
{
struct nic_card_id card_id;
+ memset(&card_id, 0, sizeof(card_id));
+
hinic_get_all_chip_id((void *)&card_id);
if (copy_to_user(nt_msg->out_buf, &card_id, sizeof(card_id))) {
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_nictool.h b/drivers/net/ethernet/huawei/hinic/hinic_nictool.h
index cfbe435..e8eccaf 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_nictool.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_nictool.h
@@ -285,4 +285,6 @@ struct hinic_pf_info {
extern void hinic_get_io_stats(struct hinic_nic_dev *nic_dev,
struct hinic_show_item *items);
+#define TOOL_COUNTER_MAX_LEN 512
+
#endif
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_sml_counter.c b/drivers/net/ethernet/huawei/hinic/hinic_sml_counter.c
index 9536adf..eb35df6 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_sml_counter.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_sml_counter.c
@@ -253,9 +253,19 @@ int hinic_sm_ctr_rd64_pair(void *hwdev, u8 node, u8 instance,
ctr_rd_rsp_u rsp;
int ret;
- if (!hwdev || (0 != (ctr_id & 0x1)) || !value1 || !value2) {
- pr_err("Hwdev(0x%p) or value1(0x%p) or value2(0x%p) is NULL or ctr_id(%d) is odd number\n",
- hwdev, value1, value2, ctr_id);
+ if (!value1) {
+ pr_err("value1 is NULL for read 64 bit pair\n");
+ return -EFAULT;
+ }
+
+ if (!value2) {
+ pr_err("value2 is NULL for read 64 bit pair\n");
+ return -EFAULT;
+ }
+
+ if (!hwdev || (0 != (ctr_id & 0x1))) {
+ pr_err("Hwdev is NULL or ctr_id(%d) is odd number for read 64 bit pair\n",
+ ctr_id);
return -EFAULT;
}
--
1.8.3
From: Li Bin <huawei.libin(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 34234
CVE: NA
--------------------------------
If dxfer_len is greater than 256M, the request is invalid, and
sg_common_write() should call sg_remove_request() before returning.
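(Editorial sketch of the fixed error path: the request object set up
earlier in sg_common_write() must be released on the early return,
otherwise it leaks.)

	if (hp->dxfer_len >= SZ_256M) {
		sg_remove_request(sfp, srp);	/* undo the earlier setup */
		return -EINVAL;
	}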
Fixes: f930c7043663 ("scsi: sg: only check for dxfer_len greater than 256M")
Signed-off-by: Li Bin <huawei.libin(a)huawei.com>
Acked-by: Douglas Gilbert <dgilbert(a)interlog.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/scsi/sg.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index c75324a..9c4b71e 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -808,8 +808,10 @@ static int sg_allow_access(struct file *filp, unsigned char *cmd)
"sg_common_write: scsi opcode=0x%02x, cmd_size=%d\n",
(int) cmnd[0], (int) hp->cmd_len));
- if (hp->dxfer_len >= SZ_256M)
+ if (hp->dxfer_len >= SZ_256M) {
+ sg_remove_request(sfp, srp);
return -EINVAL;
+ }
k = sg_start_req(srp, cmnd);
if (k) {
--
1.8.3
[PATCH 1/2] btrfs: extent_io: Handle errors better in extent_write_full_page()
From: Qu Wenruo <wqu(a)suse.com>
mainline inclusion
from mainline-v5.2-rc2
commit 3065976b045f77a910809fa7699f99a1e7c0dbbb
category: bugfix
bugzilla: 13690
CVE: CVE-2019-19377
Introduce end_write_bio() for CVE-2019-19377.
-------------------------------------------------
Since flush_write_bio() can now return an error, kill the BUG_ON() first.
Then, instead of calling flush_write_bio() unconditionally, check the
return value from __extent_writepage() first.
If __extent_writepage() fails, clean up and return the error without
submitting the possibly corrupted or half-baked bio.
If __extent_writepage() succeeds, call flush_write_bio() and return the
result.
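(Editorial sketch of the resulting control flow in
extent_write_full_page(), per the diff below.)

	ret = __extent_writepage(page, wbc, &epd);
	if (ret < 0) {
		end_write_bio(&epd, ret);	/* fail the pending bio, do not submit */
		return ret;
	}
	flush_write_bio(&epd);
	return ret;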
Signed-off-by: Qu Wenruo <wqu(a)suse.com>
Reviewed-by: David Sterba <dsterba(a)suse.com>
Signed-off-by: David Sterba <dsterba(a)suse.com>
Conflicts:
fs/btrfs/extent_io.c
[yyl: adjust context]
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/btrfs/extent_io.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 11efb4f..7f2990f 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2731,6 +2731,16 @@ static int __must_check submit_one_bio(struct bio *bio, int mirror_num,
return blk_status_to_errno(ret);
}
+/* Cleanup unsubmitted bios */
+static void end_write_bio(struct extent_page_data *epd, int ret)
+{
+ if (epd->bio) {
+ epd->bio->bi_status = errno_to_blk_status(ret);
+ bio_endio(epd->bio);
+ epd->bio = NULL;
+ }
+}
+
/*
* @opf: bio REQ_OP_* and REQ_* flags as one value
* @tree: tree so we can call our merge_bio hook
@@ -3438,6 +3448,9 @@ static noinline_for_stack int __extent_writepage_io(struct inode *inode,
* records are inserted to lock ranges in the tree, and as dirty areas
* are found, they are marked writeback. Then the lock bits are removed
* and the end_io handler clears the writeback ranges
+ *
+ * Return 0 if everything goes well.
+ * Return <0 for error.
*/
static int __extent_writepage(struct page *page, struct writeback_control *wbc,
struct extent_page_data *epd)
@@ -3505,6 +3518,7 @@ static int __extent_writepage(struct page *page, struct writeback_control *wbc,
end_extent_writepage(page, ret, start, page_end);
}
unlock_page(page);
+ ASSERT(ret <= 0);
return ret;
done_unlocked:
@@ -4054,6 +4068,11 @@ int extent_write_full_page(struct page *page, struct writeback_control *wbc)
};
ret = __extent_writepage(page, wbc, &epd);
+ ASSERT(ret <= 0);
+ if (ret < 0) {
+ end_write_bio(&epd, ret);
+ return ret;
+ }
flush_write_bio(&epd);
return ret;
--
1.8.3
From: Shaozhengchao <shaozhengchao(a)huawei.com>
driver inclusion
category: cleanup
bugzilla: 4472
-----------------------------------------------------------------------
Delete unused source and header files
Signed-off-by: Shaozhengchao <shaozhengchao(a)huawei.com>
Reviewed-by: Luoshaokai <luoshaokai(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic/hinic_common.c | 80 -
drivers/net/ethernet/huawei/hinic/hinic_common.h | 38 -
drivers/net/ethernet/huawei/hinic/hinic_dev.h | 64 -
.../net/ethernet/huawei/hinic/hinic_hw_api_cmd.c | 978 -------
.../net/ethernet/huawei/hinic/hinic_hw_api_cmd.h | 208 --
drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 947 -------
drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h | 187 --
drivers/net/ethernet/huawei/hinic/hinic_hw_csr.h | 149 --
drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c | 1010 --------
drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h | 239 --
drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c | 886 -------
drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h | 265 --
drivers/net/ethernet/huawei/hinic/hinic_hw_if.c | 351 ---
drivers/net/ethernet/huawei/hinic/hinic_hw_if.h | 272 --
drivers/net/ethernet/huawei/hinic/hinic_hw_io.c | 533 ----
drivers/net/ethernet/huawei/hinic/hinic_hw_io.h | 97 -
drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c | 597 -----
drivers/net/ethernet/huawei/hinic/hinic_hw_qp.c | 907 -------
drivers/net/ethernet/huawei/hinic/hinic_hw_qp.h | 205 --
.../net/ethernet/huawei/hinic/hinic_hw_qp_ctxt.h | 214 --
drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c | 878 -------
drivers/net/ethernet/huawei/hinic/hinic_hw_wq.h | 117 -
drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h | 368 ---
drivers/net/ethernet/huawei/hinic/hinic_port.c | 379 ---
drivers/net/ethernet/huawei/hinic/hinic_port.h | 198 --
.../net/ethernet/huawei/hinic/hinic_sml_table.h | 2728 --------------------
.../ethernet/huawei/hinic/hinic_sml_table_pub.h | 277 --
27 files changed, 13172 deletions(-)
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_common.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_common.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_dev.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_csr.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_if.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_if.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_io.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_io.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_qp.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_qp.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_qp_ctxt.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_wq.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_port.c
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_port.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_sml_table.h
delete mode 100644 drivers/net/ethernet/huawei/hinic/hinic_sml_table_pub.h
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_common.c b/drivers/net/ethernet/huawei/hinic/hinic_common.c
deleted file mode 100644
index 02c74fd..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_common.c
+++ /dev/null
@@ -1,80 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <asm/byteorder.h>
-
-#include "hinic_common.h"
-
-/**
- * hinic_cpu_to_be32 - convert data to big endian 32 bit format
- * @data: the data to convert
- * @len: length of data to convert
- **/
-void hinic_cpu_to_be32(void *data, int len)
-{
- u32 *mem = data;
- int i;
-
- len = len / sizeof(u32);
-
- for (i = 0; i < len; i++) {
- *mem = cpu_to_be32(*mem);
- mem++;
- }
-}
-
-/**
- * hinic_be32_to_cpu - convert data from big endian 32 bit format
- * @data: the data to convert
- * @len: length of data to convert
- **/
-void hinic_be32_to_cpu(void *data, int len)
-{
- u32 *mem = data;
- int i;
-
- len = len / sizeof(u32);
-
- for (i = 0; i < len; i++) {
- *mem = be32_to_cpu(*mem);
- mem++;
- }
-}
-
-/**
- * hinic_set_sge - set dma area in scatter gather entry
- * @sge: scatter gather entry
- * @addr: dma address
- * @len: length of relevant data in the dma address
- **/
-void hinic_set_sge(struct hinic_sge *sge, dma_addr_t addr, int len)
-{
- sge->hi_addr = upper_32_bits(addr);
- sge->lo_addr = lower_32_bits(addr);
- sge->len = len;
-}
-
-/**
- * hinic_sge_to_dma - get dma address from scatter gather entry
- * @sge: scatter gather entry
- *
- * Return dma address of sg entry
- **/
-dma_addr_t hinic_sge_to_dma(struct hinic_sge *sge)
-{
- return (dma_addr_t)((((u64)sge->hi_addr) << 32) | sge->lo_addr);
-}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_common.h b/drivers/net/ethernet/huawei/hinic/hinic_common.h
deleted file mode 100644
index 2c06b76..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_common.h
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_COMMON_H
-#define HINIC_COMMON_H
-
-#include <linux/types.h>
-
-#define UPPER_8_BITS(data) (((data) >> 8) & 0xFF)
-#define LOWER_8_BITS(data) ((data) & 0xFF)
-
-struct hinic_sge {
- u32 hi_addr;
- u32 lo_addr;
- u32 len;
-};
-
-void hinic_cpu_to_be32(void *data, int len);
-
-void hinic_be32_to_cpu(void *data, int len);
-
-void hinic_set_sge(struct hinic_sge *sge, dma_addr_t addr, int len);
-
-dma_addr_t hinic_sge_to_dma(struct hinic_sge *sge);
-
-#endif
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_dev.h b/drivers/net/ethernet/huawei/hinic/hinic_dev.h
deleted file mode 100644
index 5186cc9..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_dev.h
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_DEV_H
-#define HINIC_DEV_H
-
-#include <linux/netdevice.h>
-#include <linux/types.h>
-#include <linux/semaphore.h>
-#include <linux/workqueue.h>
-#include <linux/bitops.h>
-
-#include "hinic_hw_dev.h"
-#include "hinic_tx.h"
-#include "hinic_rx.h"
-
-#define HINIC_DRV_NAME "hinic"
-
-enum hinic_flags {
- HINIC_LINK_UP = BIT(0),
- HINIC_INTF_UP = BIT(1),
-};
-
-struct hinic_rx_mode_work {
- struct work_struct work;
- u32 rx_mode;
-};
-
-struct hinic_dev {
- struct net_device *netdev;
- struct hinic_hwdev *hwdev;
-
- u32 msg_enable;
- unsigned int tx_weight;
- unsigned int rx_weight;
-
- unsigned int flags;
-
- struct semaphore mgmt_lock;
- unsigned long *vlan_bitmap;
-
- struct hinic_rx_mode_work rx_mode_work;
- struct workqueue_struct *workq;
-
- struct hinic_txq *txqs;
- struct hinic_rxq *rxqs;
-
- struct hinic_txq_stats tx_stats;
- struct hinic_rxq_stats rx_stats;
-};
-
-#endif
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c
deleted file mode 100644
index c40603a..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c
+++ /dev/null
@@ -1,978 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/errno.h>
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/slab.h>
-#include <linux/dma-mapping.h>
-#include <linux/bitops.h>
-#include <linux/err.h>
-#include <linux/jiffies.h>
-#include <linux/delay.h>
-#include <linux/log2.h>
-#include <linux/semaphore.h>
-#include <asm/byteorder.h>
-#include <asm/barrier.h>
-
-#include "hinic_hw_csr.h"
-#include "hinic_hw_if.h"
-#include "hinic_hw_api_cmd.h"
-
-#define API_CHAIN_NUM_CELLS 32
-
-#define API_CMD_CELL_SIZE_SHIFT 6
-#define API_CMD_CELL_SIZE_MIN (BIT(API_CMD_CELL_SIZE_SHIFT))
-
-#define API_CMD_CELL_SIZE(cell_size) \
- (((cell_size) >= API_CMD_CELL_SIZE_MIN) ? \
- (1 << (fls(cell_size - 1))) : API_CMD_CELL_SIZE_MIN)
-
-#define API_CMD_CELL_SIZE_VAL(size) \
- ilog2((size) >> API_CMD_CELL_SIZE_SHIFT)
-
-#define API_CMD_BUF_SIZE 2048
-
-/* Sizes of the members in hinic_api_cmd_cell */
-#define API_CMD_CELL_DESC_SIZE 8
-#define API_CMD_CELL_DATA_ADDR_SIZE 8
-
-#define API_CMD_CELL_ALIGNMENT 8
-
-#define API_CMD_TIMEOUT 1000
-
-#define MASKED_IDX(chain, idx) ((idx) & ((chain)->num_cells - 1))
-
-#define SIZE_8BYTES(size) (ALIGN((size), 8) >> 3)
-#define SIZE_4BYTES(size) (ALIGN((size), 4) >> 2)
-
-#define RD_DMA_ATTR_DEFAULT 0
-#define WR_DMA_ATTR_DEFAULT 0
-
-enum api_cmd_data_format {
- SGE_DATA = 1, /* cell data is passed by hw address */
-};
-
-enum api_cmd_type {
- API_CMD_WRITE = 0,
-};
-
-enum api_cmd_bypass {
- NO_BYPASS = 0,
- BYPASS = 1,
-};
-
-enum api_cmd_xor_chk_level {
- XOR_CHK_DIS = 0,
-
- XOR_CHK_ALL = 3,
-};
-
-static u8 xor_chksum_set(void *data)
-{
- int idx;
- u8 *val, checksum = 0;
-
- val = data;
-
- for (idx = 0; idx < 7; idx++)
- checksum ^= val[idx];
-
- return checksum;
-}
-
-static void set_prod_idx(struct hinic_api_cmd_chain *chain)
-{
- enum hinic_api_cmd_chain_type chain_type = chain->chain_type;
- struct hinic_hwif *hwif = chain->hwif;
- u32 addr, prod_idx;
-
- addr = HINIC_CSR_API_CMD_CHAIN_PI_ADDR(chain_type);
- prod_idx = hinic_hwif_read_reg(hwif, addr);
-
- prod_idx = HINIC_API_CMD_PI_CLEAR(prod_idx, IDX);
-
- prod_idx |= HINIC_API_CMD_PI_SET(chain->prod_idx, IDX);
-
- hinic_hwif_write_reg(hwif, addr, prod_idx);
-}
-
-static u32 get_hw_cons_idx(struct hinic_api_cmd_chain *chain)
-{
- u32 addr, val;
-
- addr = HINIC_CSR_API_CMD_STATUS_ADDR(chain->chain_type);
- val = hinic_hwif_read_reg(chain->hwif, addr);
-
- return HINIC_API_CMD_STATUS_GET(val, CONS_IDX);
-}
-
-/**
- * chain_busy - check if the chain is still processing last requests
- * @chain: chain to check
- *
- * Return 0 - Success, negative - Failure
- **/
-static int chain_busy(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
- u32 prod_idx;
-
- switch (chain->chain_type) {
- case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
- chain->cons_idx = get_hw_cons_idx(chain);
- prod_idx = chain->prod_idx;
-
- /* check for a space for a new command */
- if (chain->cons_idx == MASKED_IDX(chain, prod_idx + 1)) {
- dev_err(&pdev->dev, "API CMD chain %d is busy\n",
- chain->chain_type);
- return -EBUSY;
- }
- break;
-
- default:
- dev_err(&pdev->dev, "Unknown API CMD Chain type\n");
- break;
- }
-
- return 0;
-}
-
-/**
- * get_cell_data_size - get the data size of a specific cell type
- * @type: chain type
- *
- * Return the data(Desc + Address) size in the cell
- **/
-static u8 get_cell_data_size(enum hinic_api_cmd_chain_type type)
-{
- u8 cell_data_size = 0;
-
- switch (type) {
- case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
- cell_data_size = ALIGN(API_CMD_CELL_DESC_SIZE +
- API_CMD_CELL_DATA_ADDR_SIZE,
- API_CMD_CELL_ALIGNMENT);
- break;
- default:
- break;
- }
-
- return cell_data_size;
-}
-
-/**
- * prepare_cell_ctrl - prepare the ctrl of the cell for the command
- * @cell_ctrl: the control of the cell to set the control value into it
- * @data_size: the size of the data in the cell
- **/
-static void prepare_cell_ctrl(u64 *cell_ctrl, u16 data_size)
-{
- u8 chksum;
- u64 ctrl;
-
- ctrl = HINIC_API_CMD_CELL_CTRL_SET(SIZE_8BYTES(data_size), DATA_SZ) |
- HINIC_API_CMD_CELL_CTRL_SET(RD_DMA_ATTR_DEFAULT, RD_DMA_ATTR) |
- HINIC_API_CMD_CELL_CTRL_SET(WR_DMA_ATTR_DEFAULT, WR_DMA_ATTR);
-
- chksum = xor_chksum_set(&ctrl);
-
- ctrl |= HINIC_API_CMD_CELL_CTRL_SET(chksum, XOR_CHKSUM);
-
- /* The data in the HW should be in Big Endian Format */
- *cell_ctrl = cpu_to_be64(ctrl);
-}
-
-/**
- * prepare_api_cmd - prepare API CMD command
- * @chain: chain for the command
- * @dest: destination node on the card that will receive the command
- * @cmd: command data
- * @cmd_size: the command size
- **/
-static void prepare_api_cmd(struct hinic_api_cmd_chain *chain,
- enum hinic_node_id dest,
- void *cmd, u16 cmd_size)
-{
- struct hinic_api_cmd_cell *cell = chain->curr_node;
- struct hinic_api_cmd_cell_ctxt *cell_ctxt;
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- cell_ctxt = &chain->cell_ctxt[chain->prod_idx];
-
- switch (chain->chain_type) {
- case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
- cell->desc = HINIC_API_CMD_DESC_SET(SGE_DATA, API_TYPE) |
- HINIC_API_CMD_DESC_SET(API_CMD_WRITE, RD_WR) |
- HINIC_API_CMD_DESC_SET(NO_BYPASS, MGMT_BYPASS);
- break;
-
- default:
- dev_err(&pdev->dev, "unknown Chain type\n");
- return;
- }
-
- cell->desc |= HINIC_API_CMD_DESC_SET(dest, DEST) |
- HINIC_API_CMD_DESC_SET(SIZE_4BYTES(cmd_size), SIZE);
-
- cell->desc |= HINIC_API_CMD_DESC_SET(xor_chksum_set(&cell->desc),
- XOR_CHKSUM);
-
- /* The data in the HW should be in Big Endian Format */
- cell->desc = cpu_to_be64(cell->desc);
-
- memcpy(cell_ctxt->api_cmd_vaddr, cmd, cmd_size);
-}
-
-/**
- * prepare_cell - prepare cell ctrl and cmd in the current cell
- * @chain: chain for the command
- * @dest: destination node on the card that will receive the command
- * @cmd: command data
- * @cmd_size: the command size
- *
- * Return 0 - Success, negative - Failure
- **/
-static void prepare_cell(struct hinic_api_cmd_chain *chain,
- enum hinic_node_id dest,
- void *cmd, u16 cmd_size)
-{
- struct hinic_api_cmd_cell *curr_node = chain->curr_node;
- u16 data_size = get_cell_data_size(chain->chain_type);
-
- prepare_cell_ctrl(&curr_node->ctrl, data_size);
- prepare_api_cmd(chain, dest, cmd, cmd_size);
-}
-
-static inline void cmd_chain_prod_idx_inc(struct hinic_api_cmd_chain *chain)
-{
- chain->prod_idx = MASKED_IDX(chain, chain->prod_idx + 1);
-}
-
-/**
- * api_cmd_status_update - update the status in the chain struct
- * @chain: chain to update
- **/
-static void api_cmd_status_update(struct hinic_api_cmd_chain *chain)
-{
- enum hinic_api_cmd_chain_type chain_type;
- struct hinic_api_cmd_status *wb_status;
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
- u64 status_header;
- u32 status;
-
- wb_status = chain->wb_status;
- status_header = be64_to_cpu(wb_status->header);
-
- status = be32_to_cpu(wb_status->status);
- if (HINIC_API_CMD_STATUS_GET(status, CHKSUM_ERR)) {
- dev_err(&pdev->dev, "API CMD status: Xor check error\n");
- return;
- }
-
- chain_type = HINIC_API_CMD_STATUS_HEADER_GET(status_header, CHAIN_ID);
- if (chain_type >= HINIC_API_CMD_MAX) {
- dev_err(&pdev->dev, "unknown API CMD Chain %d\n", chain_type);
- return;
- }
-
- chain->cons_idx = HINIC_API_CMD_STATUS_GET(status, CONS_IDX);
-}
-
-/**
- * wait_for_status_poll - wait for write to api cmd command to complete
- * @chain: the chain of the command
- *
- * Return 0 - Success, negative - Failure
- **/
-static int wait_for_status_poll(struct hinic_api_cmd_chain *chain)
-{
- int err = -ETIMEDOUT;
- unsigned long end;
-
- end = jiffies + msecs_to_jiffies(API_CMD_TIMEOUT);
- do {
- api_cmd_status_update(chain);
-
- /* wait for CI to be updated - sign for completion */
- if (chain->cons_idx == chain->prod_idx) {
- err = 0;
- break;
- }
-
- msleep(20);
- } while (time_before(jiffies, end));
-
- return err;
-}
-
-/**
- * wait_for_api_cmd_completion - wait for command to complete
- * @chain: chain for the command
- *
- * Return 0 - Success, negative - Failure
- **/
-static int wait_for_api_cmd_completion(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int err;
-
- switch (chain->chain_type) {
- case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
- err = wait_for_status_poll(chain);
- if (err) {
- dev_err(&pdev->dev, "API CMD Poll status timeout\n");
- break;
- }
- break;
-
- default:
- dev_err(&pdev->dev, "unknown API CMD Chain type\n");
- err = -EINVAL;
- break;
- }
-
- return err;
-}
-
-/**
- * api_cmd - API CMD command
- * @chain: chain for the command
- * @dest: destination node on the card that will receive the command
- * @cmd: command data
- * @size: the command size
- *
- * Return 0 - Success, negative - Failure
- **/
-static int api_cmd(struct hinic_api_cmd_chain *chain,
- enum hinic_node_id dest, u8 *cmd, u16 cmd_size)
-{
- struct hinic_api_cmd_cell_ctxt *ctxt;
- int err;
-
- down(&chain->sem);
- if (chain_busy(chain)) {
- up(&chain->sem);
- return -EBUSY;
- }
-
- prepare_cell(chain, dest, cmd, cmd_size);
- cmd_chain_prod_idx_inc(chain);
-
- wmb(); /* inc pi before issue the command */
-
- set_prod_idx(chain); /* issue the command */
-
- ctxt = &chain->cell_ctxt[chain->prod_idx];
-
- chain->curr_node = ctxt->cell_vaddr;
-
- err = wait_for_api_cmd_completion(chain);
-
- up(&chain->sem);
- return err;
-}
-
-/**
- * hinic_api_cmd_write - Write API CMD command
- * @chain: chain for write command
- * @dest: destination node on the card that will receive the command
- * @cmd: command data
- * @size: the command size
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_api_cmd_write(struct hinic_api_cmd_chain *chain,
- enum hinic_node_id dest, u8 *cmd, u16 size)
-{
- /* Verify the chain type */
- if (chain->chain_type == HINIC_API_CMD_WRITE_TO_MGMT_CPU)
- return api_cmd(chain, dest, cmd, size);
-
- return -EINVAL;
-}
-
-/**
- * api_cmd_hw_restart - restart the chain in the HW
- * @chain: the API CMD specific chain to restart
- *
- * Return 0 - Success, negative - Failure
- **/
-static int api_cmd_hw_restart(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_hwif *hwif = chain->hwif;
- int err = -ETIMEDOUT;
- unsigned long end;
- u32 reg_addr, val;
-
- /* Read Modify Write */
- reg_addr = HINIC_CSR_API_CMD_CHAIN_REQ_ADDR(chain->chain_type);
- val = hinic_hwif_read_reg(hwif, reg_addr);
-
- val = HINIC_API_CMD_CHAIN_REQ_CLEAR(val, RESTART);
- val |= HINIC_API_CMD_CHAIN_REQ_SET(1, RESTART);
-
- hinic_hwif_write_reg(hwif, reg_addr, val);
-
- end = jiffies + msecs_to_jiffies(API_CMD_TIMEOUT);
- do {
- val = hinic_hwif_read_reg(hwif, reg_addr);
-
- if (!HINIC_API_CMD_CHAIN_REQ_GET(val, RESTART)) {
- err = 0;
- break;
- }
-
- msleep(20);
- } while (time_before(jiffies, end));
-
- return err;
-}
-
-/**
- * api_cmd_ctrl_init - set the control register of a chain
- * @chain: the API CMD specific chain to set control register for
- **/
-static void api_cmd_ctrl_init(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_hwif *hwif = chain->hwif;
- u32 addr, ctrl;
- u16 cell_size;
-
- /* Read Modify Write */
- addr = HINIC_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
-
- cell_size = API_CMD_CELL_SIZE_VAL(chain->cell_size);
-
- ctrl = hinic_hwif_read_reg(hwif, addr);
-
- ctrl = HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, RESTART_WB_STAT) &
- HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_ERR) &
- HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
- HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_CHK_EN) &
- HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
-
- ctrl |= HINIC_API_CMD_CHAIN_CTRL_SET(1, XOR_ERR) |
- HINIC_API_CMD_CHAIN_CTRL_SET(XOR_CHK_ALL, XOR_CHK_EN) |
- HINIC_API_CMD_CHAIN_CTRL_SET(cell_size, CELL_SIZE);
-
- hinic_hwif_write_reg(hwif, addr, ctrl);
-}
-
-/**
- * api_cmd_set_status_addr - set the status address of a chain in the HW
- * @chain: the API CMD specific chain to set in HW status address for
- **/
-static void api_cmd_set_status_addr(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_hwif *hwif = chain->hwif;
- u32 addr, val;
-
- addr = HINIC_CSR_API_CMD_STATUS_HI_ADDR(chain->chain_type);
- val = upper_32_bits(chain->wb_status_paddr);
- hinic_hwif_write_reg(hwif, addr, val);
-
- addr = HINIC_CSR_API_CMD_STATUS_LO_ADDR(chain->chain_type);
- val = lower_32_bits(chain->wb_status_paddr);
- hinic_hwif_write_reg(hwif, addr, val);
-}
-
-/**
- * api_cmd_set_num_cells - set the number cells of a chain in the HW
- * @chain: the API CMD specific chain to set in HW the number of cells for
- **/
-static void api_cmd_set_num_cells(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_hwif *hwif = chain->hwif;
- u32 addr, val;
-
- addr = HINIC_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(chain->chain_type);
- val = chain->num_cells;
- hinic_hwif_write_reg(hwif, addr, val);
-}
-
-/**
- * api_cmd_head_init - set the head of a chain in the HW
- * @chain: the API CMD specific chain to set in HW the head for
- **/
-static void api_cmd_head_init(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_hwif *hwif = chain->hwif;
- u32 addr, val;
-
- addr = HINIC_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(chain->chain_type);
- val = upper_32_bits(chain->head_cell_paddr);
- hinic_hwif_write_reg(hwif, addr, val);
-
- addr = HINIC_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(chain->chain_type);
- val = lower_32_bits(chain->head_cell_paddr);
- hinic_hwif_write_reg(hwif, addr, val);
-}
-
-/**
- * api_cmd_chain_hw_clean - clean the HW
- * @chain: the API CMD specific chain
- **/
-static void api_cmd_chain_hw_clean(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_hwif *hwif = chain->hwif;
- u32 addr, ctrl;
-
- addr = HINIC_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
-
- ctrl = hinic_hwif_read_reg(hwif, addr);
- ctrl = HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, RESTART_WB_STAT) &
- HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_ERR) &
- HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
- HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_CHK_EN) &
- HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
-
- hinic_hwif_write_reg(hwif, addr, ctrl);
-}
-
-/**
- * api_cmd_chain_hw_init - initialize the chain in the HW
- * @chain: the API CMD specific chain to initialize in HW
- *
- * Return 0 - Success, negative - Failure
- **/
-static int api_cmd_chain_hw_init(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int err;
-
- api_cmd_chain_hw_clean(chain);
-
- api_cmd_set_status_addr(chain);
-
- err = api_cmd_hw_restart(chain);
- if (err) {
- dev_err(&pdev->dev, "Failed to restart API CMD HW\n");
- return err;
- }
-
- api_cmd_ctrl_init(chain);
- api_cmd_set_num_cells(chain);
- api_cmd_head_init(chain);
- return 0;
-}
-
-/**
- * free_cmd_buf - free the dma buffer of API CMD command
- * @chain: the API CMD specific chain of the cmd
- * @cell_idx: the cell index of the cmd
- **/
-static void free_cmd_buf(struct hinic_api_cmd_chain *chain, int cell_idx)
-{
- struct hinic_api_cmd_cell_ctxt *cell_ctxt;
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- cell_ctxt = &chain->cell_ctxt[cell_idx];
-
- dma_free_coherent(&pdev->dev, API_CMD_BUF_SIZE,
- cell_ctxt->api_cmd_vaddr,
- cell_ctxt->api_cmd_paddr);
-}
-
-/**
- * alloc_cmd_buf - allocate a dma buffer for API CMD command
- * @chain: the API CMD specific chain for the cmd
- * @cell: the cell in the HW for the cmd
- * @cell_idx: the index of the cell
- *
- * Return 0 - Success, negative - Failure
- **/
-static int alloc_cmd_buf(struct hinic_api_cmd_chain *chain,
- struct hinic_api_cmd_cell *cell, int cell_idx)
-{
- struct hinic_api_cmd_cell_ctxt *cell_ctxt;
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
- dma_addr_t cmd_paddr;
- u8 *cmd_vaddr;
- int err = 0;
-
- cmd_vaddr = dma_zalloc_coherent(&pdev->dev, API_CMD_BUF_SIZE,
- &cmd_paddr, GFP_KERNEL);
- if (!cmd_vaddr) {
- dev_err(&pdev->dev, "Failed to allocate API CMD DMA memory\n");
- return -ENOMEM;
- }
-
- cell_ctxt = &chain->cell_ctxt[cell_idx];
-
- cell_ctxt->api_cmd_vaddr = cmd_vaddr;
- cell_ctxt->api_cmd_paddr = cmd_paddr;
-
- /* set the cmd DMA address in the cell */
- switch (chain->chain_type) {
- case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
- /* The data in the HW should be in Big Endian Format */
- cell->write.hw_cmd_paddr = cpu_to_be64(cmd_paddr);
- break;
-
- default:
- dev_err(&pdev->dev, "Unsupported API CMD chain type\n");
- free_cmd_buf(chain, cell_idx);
- err = -EINVAL;
- break;
- }
-
- return err;
-}
-
-/**
- * api_cmd_create_cell - create API CMD cell for specific chain
- * @chain: the API CMD specific chain to create its cell
- * @cell_idx: the index of the cell to create
- * @pre_node: previous cell
- * @node_vaddr: the returned virt addr of the cell
- *
- * Return 0 - Success, negative - Failure
- **/
-static int api_cmd_create_cell(struct hinic_api_cmd_chain *chain,
- int cell_idx,
- struct hinic_api_cmd_cell *pre_node,
- struct hinic_api_cmd_cell **node_vaddr)
-{
- struct hinic_api_cmd_cell_ctxt *cell_ctxt;
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_api_cmd_cell *node;
- dma_addr_t node_paddr;
- int err;
-
- node = dma_zalloc_coherent(&pdev->dev, chain->cell_size,
- &node_paddr, GFP_KERNEL);
- if (!node) {
- dev_err(&pdev->dev, "Failed to allocate dma API CMD cell\n");
- return -ENOMEM;
- }
-
- node->read.hw_wb_resp_paddr = 0;
-
- cell_ctxt = &chain->cell_ctxt[cell_idx];
- cell_ctxt->cell_vaddr = node;
- cell_ctxt->cell_paddr = node_paddr;
-
- if (!pre_node) {
- chain->head_cell_paddr = node_paddr;
- chain->head_node = node;
- } else {
- /* The data in the HW should be in Big Endian Format */
- pre_node->next_cell_paddr = cpu_to_be64(node_paddr);
- }
-
- switch (chain->chain_type) {
- case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
- err = alloc_cmd_buf(chain, node, cell_idx);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate cmd buffer\n");
- goto err_alloc_cmd_buf;
- }
- break;
-
- default:
- dev_err(&pdev->dev, "Unsupported API CMD chain type\n");
- err = -EINVAL;
- goto err_alloc_cmd_buf;
- }
-
- *node_vaddr = node;
- return 0;
-
-err_alloc_cmd_buf:
- dma_free_coherent(&pdev->dev, chain->cell_size, node, node_paddr);
- return err;
-}
-
-/**
- * api_cmd_destroy_cell - destroy API CMD cell of specific chain
- * @chain: the API CMD specific chain to destroy its cell
- * @cell_idx: the cell to destroy
- **/
-static void api_cmd_destroy_cell(struct hinic_api_cmd_chain *chain,
- int cell_idx)
-{
- struct hinic_api_cmd_cell_ctxt *cell_ctxt;
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_api_cmd_cell *node;
- dma_addr_t node_paddr;
- size_t node_size;
-
- cell_ctxt = &chain->cell_ctxt[cell_idx];
-
- node = cell_ctxt->cell_vaddr;
- node_paddr = cell_ctxt->cell_paddr;
- node_size = chain->cell_size;
-
- if (cell_ctxt->api_cmd_vaddr) {
- switch (chain->chain_type) {
- case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
- free_cmd_buf(chain, cell_idx);
- break;
- default:
- dev_err(&pdev->dev, "Unsupported API CMD chain type\n");
- break;
- }
-
- dma_free_coherent(&pdev->dev, node_size, node,
- node_paddr);
- }
-}
-
-/**
- * api_cmd_destroy_cells - destroy API CMD cells of specific chain
- * @chain: the API CMD specific chain to destroy its cells
- * @num_cells: number of cells to destroy
- **/
-static void api_cmd_destroy_cells(struct hinic_api_cmd_chain *chain,
- int num_cells)
-{
- int cell_idx;
-
- for (cell_idx = 0; cell_idx < num_cells; cell_idx++)
- api_cmd_destroy_cell(chain, cell_idx);
-}
-
-/**
- * api_cmd_create_cells - create API CMD cells for specific chain
- * @chain: the API CMD specific chain
- *
- * Return 0 - Success, negative - Failure
- **/
-static int api_cmd_create_cells(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_api_cmd_cell *node = NULL, *pre_node = NULL;
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int err, cell_idx;
-
- for (cell_idx = 0; cell_idx < chain->num_cells; cell_idx++) {
- err = api_cmd_create_cell(chain, cell_idx, pre_node, &node);
- if (err) {
- dev_err(&pdev->dev, "Failed to create API CMD cell\n");
- goto err_create_cell;
- }
-
- pre_node = node;
- }
-
- /* set the Final node to point on the start */
- node->next_cell_paddr = cpu_to_be64(chain->head_cell_paddr);
-
- /* set the current node to be the head */
- chain->curr_node = chain->head_node;
- return 0;
-
-err_create_cell:
- api_cmd_destroy_cells(chain, cell_idx);
- return err;
-}
-
-/**
- * api_chain_init - initialize API CMD specific chain
- * @chain: the API CMD specific chain to initialize
- * @attr: attributes to set in the chain
- *
- * Return 0 - Success, negative - Failure
- **/
-static int api_chain_init(struct hinic_api_cmd_chain *chain,
- struct hinic_api_cmd_chain_attr *attr)
-{
- struct hinic_hwif *hwif = attr->hwif;
- struct pci_dev *pdev = hwif->pdev;
- size_t cell_ctxt_size;
-
- chain->hwif = hwif;
- chain->chain_type = attr->chain_type;
- chain->num_cells = attr->num_cells;
- chain->cell_size = attr->cell_size;
-
- chain->prod_idx = 0;
- chain->cons_idx = 0;
-
- sema_init(&chain->sem, 1);
-
- cell_ctxt_size = chain->num_cells * sizeof(*chain->cell_ctxt);
- chain->cell_ctxt = devm_kzalloc(&pdev->dev, cell_ctxt_size, GFP_KERNEL);
- if (!chain->cell_ctxt)
- return -ENOMEM;
-
- chain->wb_status = dma_zalloc_coherent(&pdev->dev,
- sizeof(*chain->wb_status),
- &chain->wb_status_paddr,
- GFP_KERNEL);
- if (!chain->wb_status) {
- dev_err(&pdev->dev, "Failed to allocate DMA wb status\n");
- return -ENOMEM;
- }
-
- return 0;
-}
-
-/**
- * api_chain_free - free API CMD specific chain
- * @chain: the API CMD specific chain to free
- **/
-static void api_chain_free(struct hinic_api_cmd_chain *chain)
-{
- struct hinic_hwif *hwif = chain->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- dma_free_coherent(&pdev->dev, sizeof(*chain->wb_status),
- chain->wb_status, chain->wb_status_paddr);
-}
-
-/**
- * api_cmd_create_chain - create API CMD specific chain
- * @attr: attributes to set the chain
- *
- * Return the created chain
- **/
-static struct hinic_api_cmd_chain *
- api_cmd_create_chain(struct hinic_api_cmd_chain_attr *attr)
-{
- struct hinic_hwif *hwif = attr->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_api_cmd_chain *chain;
- int err;
-
- if (attr->num_cells & (attr->num_cells - 1)) {
- dev_err(&pdev->dev, "Invalid number of cells, must be power of 2\n");
- return ERR_PTR(-EINVAL);
- }
-
- chain = devm_kzalloc(&pdev->dev, sizeof(*chain), GFP_KERNEL);
- if (!chain)
- return ERR_PTR(-ENOMEM);
-
- err = api_chain_init(chain, attr);
- if (err) {
- dev_err(&pdev->dev, "Failed to initialize chain\n");
- return ERR_PTR(err);
- }
-
- err = api_cmd_create_cells(chain);
- if (err) {
- dev_err(&pdev->dev, "Failed to create cells for API CMD chain\n");
- goto err_create_cells;
- }
-
- err = api_cmd_chain_hw_init(chain);
- if (err) {
- dev_err(&pdev->dev, "Failed to initialize chain HW\n");
- goto err_chain_hw_init;
- }
-
- return chain;
-
-err_chain_hw_init:
- api_cmd_destroy_cells(chain, chain->num_cells);
-
-err_create_cells:
- api_chain_free(chain);
- return ERR_PTR(err);
-}
-
-/**
- * api_cmd_destroy_chain - destroy API CMD specific chain
- * @chain: the API CMD specific chain to destroy
- **/
-static void api_cmd_destroy_chain(struct hinic_api_cmd_chain *chain)
-{
- api_cmd_chain_hw_clean(chain);
- api_cmd_destroy_cells(chain, chain->num_cells);
- api_chain_free(chain);
-}
-
-/**
- * hinic_api_cmd_init - Initialize all the API CMD chains
- * @chain: the API CMD chains that are initialized
- * @hwif: the hardware interface of a pci function device
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_api_cmd_init(struct hinic_api_cmd_chain **chain,
- struct hinic_hwif *hwif)
-{
- enum hinic_api_cmd_chain_type type, chain_type;
- struct hinic_api_cmd_chain_attr attr;
- struct pci_dev *pdev = hwif->pdev;
- size_t hw_cell_sz;
- int err;
-
- hw_cell_sz = sizeof(struct hinic_api_cmd_cell);
-
- attr.hwif = hwif;
- attr.num_cells = API_CHAIN_NUM_CELLS;
- attr.cell_size = API_CMD_CELL_SIZE(hw_cell_sz);
-
- chain_type = HINIC_API_CMD_WRITE_TO_MGMT_CPU;
- for ( ; chain_type < HINIC_API_CMD_MAX; chain_type++) {
- attr.chain_type = chain_type;
-
- if (chain_type != HINIC_API_CMD_WRITE_TO_MGMT_CPU)
- continue;
-
- chain[chain_type] = api_cmd_create_chain(&attr);
- if (IS_ERR(chain[chain_type])) {
- dev_err(&pdev->dev, "Failed to create chain %d\n",
- chain_type);
- err = PTR_ERR(chain[chain_type]);
- goto err_create_chain;
- }
- }
-
- return 0;
-
-err_create_chain:
- type = HINIC_API_CMD_WRITE_TO_MGMT_CPU;
- for ( ; type < chain_type; type++) {
- if (type != HINIC_API_CMD_WRITE_TO_MGMT_CPU)
- continue;
-
- api_cmd_destroy_chain(chain[type]);
- }
-
- return err;
-}
-
-/**
- * hinic_api_cmd_free - free the API CMD chains
- * @chain: the API CMD chains that are freed
- **/
-void hinic_api_cmd_free(struct hinic_api_cmd_chain **chain)
-{
- enum hinic_api_cmd_chain_type chain_type;
-
- chain_type = HINIC_API_CMD_WRITE_TO_MGMT_CPU;
- for ( ; chain_type < HINIC_API_CMD_MAX; chain_type++) {
- if (chain_type != HINIC_API_CMD_WRITE_TO_MGMT_CPU)
- continue;
-
- api_cmd_destroy_chain(chain[chain_type]);
- }
-}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.h
deleted file mode 100644
index 31b94d5..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.h
+++ /dev/null
@@ -1,208 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_HW_API_CMD_H
-#define HINIC_HW_API_CMD_H
-
-#include <linux/types.h>
-#include <linux/semaphore.h>
-
-#include "hinic_hw_if.h"
-
-#define HINIC_API_CMD_PI_IDX_SHIFT 0
-
-#define HINIC_API_CMD_PI_IDX_MASK 0xFFFFFF
-
-#define HINIC_API_CMD_PI_SET(val, member) \
- (((u32)(val) & HINIC_API_CMD_PI_##member##_MASK) << \
- HINIC_API_CMD_PI_##member##_SHIFT)
-
-#define HINIC_API_CMD_PI_CLEAR(val, member) \
- ((val) & (~(HINIC_API_CMD_PI_##member##_MASK \
- << HINIC_API_CMD_PI_##member##_SHIFT)))
-
-#define HINIC_API_CMD_CHAIN_REQ_RESTART_SHIFT 1
-
-#define HINIC_API_CMD_CHAIN_REQ_RESTART_MASK 0x1
-
-#define HINIC_API_CMD_CHAIN_REQ_SET(val, member) \
- (((u32)(val) & HINIC_API_CMD_CHAIN_REQ_##member##_MASK) << \
- HINIC_API_CMD_CHAIN_REQ_##member##_SHIFT)
-
-#define HINIC_API_CMD_CHAIN_REQ_GET(val, member) \
- (((val) >> HINIC_API_CMD_CHAIN_REQ_##member##_SHIFT) & \
- HINIC_API_CMD_CHAIN_REQ_##member##_MASK)
-
-#define HINIC_API_CMD_CHAIN_REQ_CLEAR(val, member) \
- ((val) & (~(HINIC_API_CMD_CHAIN_REQ_##member##_MASK \
- << HINIC_API_CMD_CHAIN_REQ_##member##_SHIFT)))
-
-#define HINIC_API_CMD_CHAIN_CTRL_RESTART_WB_STAT_SHIFT 1
-#define HINIC_API_CMD_CHAIN_CTRL_XOR_ERR_SHIFT 2
-#define HINIC_API_CMD_CHAIN_CTRL_AEQE_EN_SHIFT 4
-#define HINIC_API_CMD_CHAIN_CTRL_AEQ_ID_SHIFT 8
-#define HINIC_API_CMD_CHAIN_CTRL_XOR_CHK_EN_SHIFT 28
-#define HINIC_API_CMD_CHAIN_CTRL_CELL_SIZE_SHIFT 30
-
-#define HINIC_API_CMD_CHAIN_CTRL_RESTART_WB_STAT_MASK 0x1
-#define HINIC_API_CMD_CHAIN_CTRL_XOR_ERR_MASK 0x1
-#define HINIC_API_CMD_CHAIN_CTRL_AEQE_EN_MASK 0x1
-#define HINIC_API_CMD_CHAIN_CTRL_AEQ_ID_MASK 0x3
-#define HINIC_API_CMD_CHAIN_CTRL_XOR_CHK_EN_MASK 0x3
-#define HINIC_API_CMD_CHAIN_CTRL_CELL_SIZE_MASK 0x3
-
-#define HINIC_API_CMD_CHAIN_CTRL_SET(val, member) \
- (((u32)(val) & HINIC_API_CMD_CHAIN_CTRL_##member##_MASK) << \
- HINIC_API_CMD_CHAIN_CTRL_##member##_SHIFT)
-
-#define HINIC_API_CMD_CHAIN_CTRL_CLEAR(val, member) \
- ((val) & (~(HINIC_API_CMD_CHAIN_CTRL_##member##_MASK \
- << HINIC_API_CMD_CHAIN_CTRL_##member##_SHIFT)))
-
-#define HINIC_API_CMD_CELL_CTRL_DATA_SZ_SHIFT 0
-#define HINIC_API_CMD_CELL_CTRL_RD_DMA_ATTR_SHIFT 16
-#define HINIC_API_CMD_CELL_CTRL_WR_DMA_ATTR_SHIFT 24
-#define HINIC_API_CMD_CELL_CTRL_XOR_CHKSUM_SHIFT 56
-
-#define HINIC_API_CMD_CELL_CTRL_DATA_SZ_MASK 0x3F
-#define HINIC_API_CMD_CELL_CTRL_RD_DMA_ATTR_MASK 0x3F
-#define HINIC_API_CMD_CELL_CTRL_WR_DMA_ATTR_MASK 0x3F
-#define HINIC_API_CMD_CELL_CTRL_XOR_CHKSUM_MASK 0xFF
-
-#define HINIC_API_CMD_CELL_CTRL_SET(val, member) \
- ((((u64)val) & HINIC_API_CMD_CELL_CTRL_##member##_MASK) << \
- HINIC_API_CMD_CELL_CTRL_##member##_SHIFT)
-
-#define HINIC_API_CMD_DESC_API_TYPE_SHIFT 0
-#define HINIC_API_CMD_DESC_RD_WR_SHIFT 1
-#define HINIC_API_CMD_DESC_MGMT_BYPASS_SHIFT 2
-#define HINIC_API_CMD_DESC_DEST_SHIFT 32
-#define HINIC_API_CMD_DESC_SIZE_SHIFT 40
-#define HINIC_API_CMD_DESC_XOR_CHKSUM_SHIFT 56
-
-#define HINIC_API_CMD_DESC_API_TYPE_MASK 0x1
-#define HINIC_API_CMD_DESC_RD_WR_MASK 0x1
-#define HINIC_API_CMD_DESC_MGMT_BYPASS_MASK 0x1
-#define HINIC_API_CMD_DESC_DEST_MASK 0x1F
-#define HINIC_API_CMD_DESC_SIZE_MASK 0x7FF
-#define HINIC_API_CMD_DESC_XOR_CHKSUM_MASK 0xFF
-
-#define HINIC_API_CMD_DESC_SET(val, member) \
- ((((u64)val) & HINIC_API_CMD_DESC_##member##_MASK) << \
- HINIC_API_CMD_DESC_##member##_SHIFT)
-
-#define HINIC_API_CMD_STATUS_HEADER_CHAIN_ID_SHIFT 16
-
-#define HINIC_API_CMD_STATUS_HEADER_CHAIN_ID_MASK 0xFF
-
-#define HINIC_API_CMD_STATUS_HEADER_GET(val, member) \
- (((val) >> HINIC_API_CMD_STATUS_HEADER_##member##_SHIFT) & \
- HINIC_API_CMD_STATUS_HEADER_##member##_MASK)
-
-#define HINIC_API_CMD_STATUS_CONS_IDX_SHIFT 0
-#define HINIC_API_CMD_STATUS_CHKSUM_ERR_SHIFT 28
-
-#define HINIC_API_CMD_STATUS_CONS_IDX_MASK 0xFFFFFF
-#define HINIC_API_CMD_STATUS_CHKSUM_ERR_MASK 0x3
-
-#define HINIC_API_CMD_STATUS_GET(val, member) \
- (((val) >> HINIC_API_CMD_STATUS_##member##_SHIFT) & \
- HINIC_API_CMD_STATUS_##member##_MASK)
-
-enum hinic_api_cmd_chain_type {
- HINIC_API_CMD_WRITE_TO_MGMT_CPU = 2,
-
- HINIC_API_CMD_MAX,
-};
-
-struct hinic_api_cmd_chain_attr {
- struct hinic_hwif *hwif;
- enum hinic_api_cmd_chain_type chain_type;
-
- u32 num_cells;
- u16 cell_size;
-};
-
-struct hinic_api_cmd_status {
- u64 header;
- u32 status;
- u32 rsvd0;
- u32 rsvd1;
- u32 rsvd2;
- u64 rsvd3;
-};
-
-/* HW struct */
-struct hinic_api_cmd_cell {
- u64 ctrl;
-
- /* address is 64 bit in HW struct */
- u64 next_cell_paddr;
-
- u64 desc;
-
- /* HW struct */
- union {
- struct {
- u64 hw_cmd_paddr;
- } write;
-
- struct {
- u64 hw_wb_resp_paddr;
- u64 hw_cmd_paddr;
- } read;
- };
-};
-
-struct hinic_api_cmd_cell_ctxt {
- dma_addr_t cell_paddr;
- struct hinic_api_cmd_cell *cell_vaddr;
-
- dma_addr_t api_cmd_paddr;
- u8 *api_cmd_vaddr;
-};
-
-struct hinic_api_cmd_chain {
- struct hinic_hwif *hwif;
- enum hinic_api_cmd_chain_type chain_type;
-
- u32 num_cells;
- u16 cell_size;
-
- /* HW members in 24 bit format */
- u32 prod_idx;
- u32 cons_idx;
-
- struct semaphore sem;
-
- struct hinic_api_cmd_cell_ctxt *cell_ctxt;
-
- dma_addr_t wb_status_paddr;
- struct hinic_api_cmd_status *wb_status;
-
- dma_addr_t head_cell_paddr;
- struct hinic_api_cmd_cell *head_node;
- struct hinic_api_cmd_cell *curr_node;
-};
-
-int hinic_api_cmd_write(struct hinic_api_cmd_chain *chain,
- enum hinic_node_id dest, u8 *cmd, u16 size);
-
-int hinic_api_cmd_init(struct hinic_api_cmd_chain **chain,
- struct hinic_hwif *hwif);
-
-void hinic_api_cmd_free(struct hinic_api_cmd_chain **chain);
-
-#endif
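
Every accessor in this header follows the same token-pasting SHIFT/MASK pattern, so it is worth seeing once in isolation. A minimal userspace sketch, with illustrative DEMO_* names rather than anything from the driver:

    #include <stdint.h>
    #include <stdio.h>

    #define DEMO_AEQ_ID_SHIFT 8
    #define DEMO_AEQ_ID_MASK  0x3

    #define DEMO_SET(val, member) \
            (((uint32_t)(val) & DEMO_##member##_MASK) << DEMO_##member##_SHIFT)
    #define DEMO_GET(val, member) \
            (((val) >> DEMO_##member##_SHIFT) & DEMO_##member##_MASK)
    #define DEMO_CLEAR(val, member) \
            ((val) & ~(DEMO_##member##_MASK << DEMO_##member##_SHIFT))

    int main(void)
    {
            uint32_t ctrl = 0xFFFFFFFF;

            /* clear the old field contents, then merge in the new value */
            ctrl = DEMO_CLEAR(ctrl, AEQ_ID) | DEMO_SET(2, AEQ_ID);
            printf("field = %u\n", (unsigned)DEMO_GET(ctrl, AEQ_ID)); /* 2 */
            return 0;
    }
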
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
deleted file mode 100644
index 4d09ea7..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
+++ /dev/null
@@ -1,947 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/errno.h>
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/slab.h>
-#include <linux/vmalloc.h>
-#include <linux/spinlock.h>
-#include <linux/sizes.h>
-#include <linux/atomic.h>
-#include <linux/log2.h>
-#include <linux/io.h>
-#include <linux/completion.h>
-#include <linux/err.h>
-#include <asm/byteorder.h>
-#include <asm/barrier.h>
-
-#include "hinic_common.h"
-#include "hinic_hw_if.h"
-#include "hinic_hw_eqs.h"
-#include "hinic_hw_mgmt.h"
-#include "hinic_hw_wqe.h"
-#include "hinic_hw_wq.h"
-#include "hinic_hw_cmdq.h"
-#include "hinic_hw_io.h"
-#include "hinic_hw_dev.h"
-
-#define CMDQ_CEQE_TYPE_SHIFT 0
-
-#define CMDQ_CEQE_TYPE_MASK 0x7
-
-#define CMDQ_CEQE_GET(val, member) \
- (((val) >> CMDQ_CEQE_##member##_SHIFT) \
- & CMDQ_CEQE_##member##_MASK)
-
-#define CMDQ_WQE_ERRCODE_VAL_SHIFT 20
-
-#define CMDQ_WQE_ERRCODE_VAL_MASK 0xF
-
-#define CMDQ_WQE_ERRCODE_GET(val, member) \
- (((val) >> CMDQ_WQE_ERRCODE_##member##_SHIFT) \
- & CMDQ_WQE_ERRCODE_##member##_MASK)
-
-#define CMDQ_DB_PI_OFF(pi) (((u16)LOWER_8_BITS(pi)) << 3)
-
-#define CMDQ_DB_ADDR(db_base, pi) ((db_base) + CMDQ_DB_PI_OFF(pi))
-
-#define CMDQ_WQE_HEADER(wqe) ((struct hinic_cmdq_header *)(wqe))
-
-#define CMDQ_WQE_COMPLETED(ctrl_info) \
- HINIC_CMDQ_CTRL_GET(ctrl_info, HW_BUSY_BIT)
-
-#define FIRST_DATA_TO_WRITE_LAST sizeof(u64)
-
-#define CMDQ_DB_OFF SZ_2K
-
-#define CMDQ_WQEBB_SIZE 64
-#define CMDQ_WQE_SIZE 64
-#define CMDQ_DEPTH SZ_4K
-
-#define CMDQ_WQ_PAGE_SIZE SZ_4K
-
-#define WQE_LCMD_SIZE 64
-#define WQE_SCMD_SIZE 64
-
-#define COMPLETE_LEN 3
-
-#define CMDQ_TIMEOUT 1000
-
-#define CMDQ_PFN(addr, page_size) ((addr) >> (ilog2(page_size)))
-
-#define cmdq_to_cmdqs(cmdq) container_of((cmdq) - (cmdq)->cmdq_type, \
- struct hinic_cmdqs, cmdq[0])
-
-#define cmdqs_to_func_to_io(cmdqs) container_of(cmdqs, \
- struct hinic_func_to_io, \
- cmdqs)
-
-enum cmdq_wqe_type {
- WQE_LCMD_TYPE = 0,
- WQE_SCMD_TYPE = 1,
-};
-
-enum completion_format {
- COMPLETE_DIRECT = 0,
- COMPLETE_SGE = 1,
-};
-
-enum data_format {
- DATA_SGE = 0,
- DATA_DIRECT = 1,
-};
-
-enum bufdesc_len {
-	BUFDESC_LCMD_LEN = 2, /* 16 bytes = 2 8-byte units */
-	BUFDESC_SCMD_LEN = 3, /* 24 bytes = 3 8-byte units */
-};
-
-enum ctrl_sect_len {
-	CTRL_SECT_LEN = 1, /* 4 bytes (ctrl), rounded up to 1 8-byte unit */
-	CTRL_DIRECT_SECT_LEN = 2, /* 12 bytes (ctrl + rsvd), rounded up to 2 8-byte units */
-};
-
-enum cmdq_scmd_type {
- CMDQ_SET_ARM_CMD = 2,
-};
-
-enum cmdq_cmd_type {
- CMDQ_CMD_SYNC_DIRECT_RESP = 0,
- CMDQ_CMD_SYNC_SGE_RESP = 1,
-};
-
-enum completion_request {
- NO_CEQ = 0,
- CEQ_SET = 1,
-};
-
-/**
- * hinic_alloc_cmdq_buf - alloc buffer for sending command
- * @cmdqs: the cmdqs
- * @cmdq_buf: the buffer returned in this struct
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_alloc_cmdq_buf(struct hinic_cmdqs *cmdqs,
- struct hinic_cmdq_buf *cmdq_buf)
-{
- struct hinic_hwif *hwif = cmdqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- cmdq_buf->buf = dma_pool_alloc(cmdqs->cmdq_buf_pool, GFP_KERNEL,
- &cmdq_buf->dma_addr);
- if (!cmdq_buf->buf) {
- dev_err(&pdev->dev, "Failed to allocate cmd from the pool\n");
- return -ENOMEM;
- }
-
- return 0;
-}
-
-/**
- * hinic_free_cmdq_buf - free buffer
- * @cmdqs: the cmdqs
- * @cmdq_buf: the buffer to free that is in this struct
- **/
-void hinic_free_cmdq_buf(struct hinic_cmdqs *cmdqs,
- struct hinic_cmdq_buf *cmdq_buf)
-{
- dma_pool_free(cmdqs->cmdq_buf_pool, cmdq_buf->buf, cmdq_buf->dma_addr);
-}
-
-static unsigned int cmdq_wqe_size_from_bdlen(enum bufdesc_len len)
-{
- unsigned int wqe_size = 0;
-
- switch (len) {
- case BUFDESC_LCMD_LEN:
- wqe_size = WQE_LCMD_SIZE;
- break;
- case BUFDESC_SCMD_LEN:
- wqe_size = WQE_SCMD_SIZE;
- break;
- }
-
- return wqe_size;
-}
-
-static void cmdq_set_sge_completion(struct hinic_cmdq_completion *completion,
- struct hinic_cmdq_buf *buf_out)
-{
- struct hinic_sge_resp *sge_resp = &completion->sge_resp;
-
- hinic_set_sge(&sge_resp->sge, buf_out->dma_addr, buf_out->size);
-}
-
-static void cmdq_prepare_wqe_ctrl(struct hinic_cmdq_wqe *wqe, int wrapped,
- enum hinic_cmd_ack_type ack_type,
- enum hinic_mod_type mod, u8 cmd, u16 prod_idx,
- enum completion_format complete_format,
- enum data_format data_format,
- enum bufdesc_len buf_len)
-{
- struct hinic_cmdq_wqe_lcmd *wqe_lcmd;
- struct hinic_cmdq_wqe_scmd *wqe_scmd;
- enum ctrl_sect_len ctrl_len;
- struct hinic_ctrl *ctrl;
- u32 saved_data;
-
- if (data_format == DATA_SGE) {
- wqe_lcmd = &wqe->wqe_lcmd;
-
- wqe_lcmd->status.status_info = 0;
- ctrl = &wqe_lcmd->ctrl;
- ctrl_len = CTRL_SECT_LEN;
- } else {
- wqe_scmd = &wqe->direct_wqe.wqe_scmd;
-
- wqe_scmd->status.status_info = 0;
- ctrl = &wqe_scmd->ctrl;
- ctrl_len = CTRL_DIRECT_SECT_LEN;
- }
-
- ctrl->ctrl_info = HINIC_CMDQ_CTRL_SET(prod_idx, PI) |
- HINIC_CMDQ_CTRL_SET(cmd, CMD) |
- HINIC_CMDQ_CTRL_SET(mod, MOD) |
- HINIC_CMDQ_CTRL_SET(ack_type, ACK_TYPE);
-
- CMDQ_WQE_HEADER(wqe)->header_info =
- HINIC_CMDQ_WQE_HEADER_SET(buf_len, BUFDESC_LEN) |
- HINIC_CMDQ_WQE_HEADER_SET(complete_format, COMPLETE_FMT) |
- HINIC_CMDQ_WQE_HEADER_SET(data_format, DATA_FMT) |
- HINIC_CMDQ_WQE_HEADER_SET(CEQ_SET, COMPLETE_REQ) |
- HINIC_CMDQ_WQE_HEADER_SET(COMPLETE_LEN, COMPLETE_SECT_LEN) |
- HINIC_CMDQ_WQE_HEADER_SET(ctrl_len, CTRL_LEN) |
- HINIC_CMDQ_WQE_HEADER_SET(wrapped, TOGGLED_WRAPPED);
-
- saved_data = CMDQ_WQE_HEADER(wqe)->saved_data;
- saved_data = HINIC_SAVED_DATA_CLEAR(saved_data, ARM);
-
- if ((cmd == CMDQ_SET_ARM_CMD) && (mod == HINIC_MOD_COMM))
- CMDQ_WQE_HEADER(wqe)->saved_data |=
- HINIC_SAVED_DATA_SET(1, ARM);
- else
- CMDQ_WQE_HEADER(wqe)->saved_data = saved_data;
-}
-
-static void cmdq_set_lcmd_bufdesc(struct hinic_cmdq_wqe_lcmd *wqe_lcmd,
- struct hinic_cmdq_buf *buf_in)
-{
- hinic_set_sge(&wqe_lcmd->buf_desc.sge, buf_in->dma_addr, buf_in->size);
-}
-
-static void cmdq_set_direct_wqe_data(struct hinic_cmdq_direct_wqe *wqe,
- void *buf_in, u32 in_size)
-{
- struct hinic_cmdq_wqe_scmd *wqe_scmd = &wqe->wqe_scmd;
-
- wqe_scmd->buf_desc.buf_len = in_size;
- memcpy(wqe_scmd->buf_desc.data, buf_in, in_size);
-}
-
-static void cmdq_set_lcmd_wqe(struct hinic_cmdq_wqe *wqe,
- enum cmdq_cmd_type cmd_type,
- struct hinic_cmdq_buf *buf_in,
- struct hinic_cmdq_buf *buf_out, int wrapped,
- enum hinic_cmd_ack_type ack_type,
- enum hinic_mod_type mod, u8 cmd, u16 prod_idx)
-{
- struct hinic_cmdq_wqe_lcmd *wqe_lcmd = &wqe->wqe_lcmd;
- enum completion_format complete_format;
-
- switch (cmd_type) {
- case CMDQ_CMD_SYNC_SGE_RESP:
- complete_format = COMPLETE_SGE;
- cmdq_set_sge_completion(&wqe_lcmd->completion, buf_out);
- break;
- case CMDQ_CMD_SYNC_DIRECT_RESP:
- complete_format = COMPLETE_DIRECT;
- wqe_lcmd->completion.direct_resp = 0;
- break;
- }
-
- cmdq_prepare_wqe_ctrl(wqe, wrapped, ack_type, mod, cmd,
- prod_idx, complete_format, DATA_SGE,
- BUFDESC_LCMD_LEN);
-
- cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in);
-}
-
-static void cmdq_set_direct_wqe(struct hinic_cmdq_wqe *wqe,
- enum cmdq_cmd_type cmd_type,
- void *buf_in, u16 in_size,
- struct hinic_cmdq_buf *buf_out, int wrapped,
- enum hinic_cmd_ack_type ack_type,
- enum hinic_mod_type mod, u8 cmd, u16 prod_idx)
-{
- struct hinic_cmdq_direct_wqe *direct_wqe = &wqe->direct_wqe;
- enum completion_format complete_format;
- struct hinic_cmdq_wqe_scmd *wqe_scmd;
-
- wqe_scmd = &direct_wqe->wqe_scmd;
-
- switch (cmd_type) {
- case CMDQ_CMD_SYNC_SGE_RESP:
- complete_format = COMPLETE_SGE;
- cmdq_set_sge_completion(&wqe_scmd->completion, buf_out);
- break;
- case CMDQ_CMD_SYNC_DIRECT_RESP:
- complete_format = COMPLETE_DIRECT;
- wqe_scmd->completion.direct_resp = 0;
- break;
- }
-
- cmdq_prepare_wqe_ctrl(wqe, wrapped, ack_type, mod, cmd, prod_idx,
- complete_format, DATA_DIRECT, BUFDESC_SCMD_LEN);
-
- cmdq_set_direct_wqe_data(direct_wqe, buf_in, in_size);
-}
-
-static void cmdq_wqe_fill(void *dst, void *src)
-{
- memcpy(dst + FIRST_DATA_TO_WRITE_LAST, src + FIRST_DATA_TO_WRITE_LAST,
- CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST);
-
- wmb(); /* The first 8 bytes should be written last */
-
- *(u64 *)dst = *(u64 *)src;
-}
-
-static void cmdq_fill_db(u32 *db_info,
- enum hinic_cmdq_type cmdq_type, u16 prod_idx)
-{
- *db_info = HINIC_CMDQ_DB_INFO_SET(UPPER_8_BITS(prod_idx), HI_PROD_IDX) |
- HINIC_CMDQ_DB_INFO_SET(HINIC_CTRL_PATH, PATH) |
- HINIC_CMDQ_DB_INFO_SET(cmdq_type, CMDQ_TYPE) |
- HINIC_CMDQ_DB_INFO_SET(HINIC_DB_CMDQ_TYPE, DB_TYPE);
-}
-
-static void cmdq_set_db(struct hinic_cmdq *cmdq,
- enum hinic_cmdq_type cmdq_type, u16 prod_idx)
-{
- u32 db_info;
-
- cmdq_fill_db(&db_info, cmdq_type, prod_idx);
-
- /* The data that is written to HW should be in Big Endian Format */
- db_info = cpu_to_be32(db_info);
-
- wmb(); /* write all before the doorbell */
-
- writel(db_info, CMDQ_DB_ADDR(cmdq->db_base, prod_idx));
-}
-
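
cmdq_wqe_fill() above and cmdq_set_db() below both order their stores so that the hardware-visible trigger is written last. A compilable userspace analogue of the WQE publish step, using a C11 release store where the kernel uses wmb(); it assumes 8-byte-aligned buffers, and the names are illustrative:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define WQE_SIZE 64

    static void publish_wqe(uint8_t *dst, const uint8_t *src)
    {
            /* copy the body (everything after the first 8 bytes) first */
            memcpy(dst + 8, src + 8, WQE_SIZE - 8);
            /* header last: it carries the wrap bit the consumer polls */
            atomic_store_explicit((_Atomic uint64_t *)dst,
                                  *(const uint64_t *)src,
                                  memory_order_release);
    }
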
-static int cmdq_sync_cmd_direct_resp(struct hinic_cmdq *cmdq,
- enum hinic_mod_type mod, u8 cmd,
- struct hinic_cmdq_buf *buf_in,
- u64 *resp)
-{
- struct hinic_cmdq_wqe *curr_cmdq_wqe, cmdq_wqe;
- u16 curr_prod_idx, next_prod_idx;
- int errcode, wrapped, num_wqebbs;
- struct hinic_wq *wq = cmdq->wq;
- struct hinic_hw_wqe *hw_wqe;
- struct completion done;
-
-	/* Keep the doorbell index correct; _bh because the CEQ tasklet also takes this lock */
- spin_lock_bh(&cmdq->cmdq_lock);
-
-	/* WQE_SIZE == WQEBB_SIZE, so we get the wq element itself, not a shadow copy */
- hw_wqe = hinic_get_wqe(wq, WQE_LCMD_SIZE, &curr_prod_idx);
- if (IS_ERR(hw_wqe)) {
- spin_unlock_bh(&cmdq->cmdq_lock);
- return -EBUSY;
- }
-
- curr_cmdq_wqe = &hw_wqe->cmdq_wqe;
-
- wrapped = cmdq->wrapped;
-
- num_wqebbs = ALIGN(WQE_LCMD_SIZE, wq->wqebb_size) / wq->wqebb_size;
- next_prod_idx = curr_prod_idx + num_wqebbs;
- if (next_prod_idx >= wq->q_depth) {
- cmdq->wrapped = !cmdq->wrapped;
- next_prod_idx -= wq->q_depth;
- }
-
- cmdq->errcode[curr_prod_idx] = &errcode;
-
- init_completion(&done);
- cmdq->done[curr_prod_idx] = &done;
-
- cmdq_set_lcmd_wqe(&cmdq_wqe, CMDQ_CMD_SYNC_DIRECT_RESP, buf_in, NULL,
- wrapped, HINIC_CMD_ACK_TYPE_CMDQ, mod, cmd,
- curr_prod_idx);
-
- /* The data that is written to HW should be in Big Endian Format */
- hinic_cpu_to_be32(&cmdq_wqe, WQE_LCMD_SIZE);
-
-	/* the CMDQ WQE is not a shadow copy, so write it directly to the wq */
- cmdq_wqe_fill(curr_cmdq_wqe, &cmdq_wqe);
-
- cmdq_set_db(cmdq, HINIC_CMDQ_SYNC, next_prod_idx);
-
- spin_unlock_bh(&cmdq->cmdq_lock);
-
- if (!wait_for_completion_timeout(&done, CMDQ_TIMEOUT)) {
- spin_lock_bh(&cmdq->cmdq_lock);
-
- if (cmdq->errcode[curr_prod_idx] == &errcode)
- cmdq->errcode[curr_prod_idx] = NULL;
-
- if (cmdq->done[curr_prod_idx] == &done)
- cmdq->done[curr_prod_idx] = NULL;
-
- spin_unlock_bh(&cmdq->cmdq_lock);
-
- return -ETIMEDOUT;
- }
-
- smp_rmb(); /* read error code after completion */
-
- if (resp) {
- struct hinic_cmdq_wqe_lcmd *wqe_lcmd = &curr_cmdq_wqe->wqe_lcmd;
-
- *resp = cpu_to_be64(wqe_lcmd->completion.direct_resp);
- }
-
- if (errcode != 0)
- return -EFAULT;
-
- return 0;
-}
-
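
The timeout path in cmdq_sync_cmd_direct_resp() has a subtle requirement: the waiter published pointers to its on-stack errcode and done variables, so before returning it must unpublish them under the same lock the CEQ handler takes, or the handler could write through dangling pointers. A small pthread sketch of that hand-off (struct and names are illustrative):

    #include <pthread.h>
    #include <stddef.h>

    struct waiter_table {
            pthread_mutex_t lock;
            int *errcode[16];       /* slots indexed by producer index */
    };

    /* on timeout, unpublish the on-stack slot under the handler's lock */
    static void timeout_cleanup(struct waiter_table *t, unsigned int pi,
                                int *my_errcode)
    {
            pthread_mutex_lock(&t->lock);
            if (t->errcode[pi] == my_errcode)
                    t->errcode[pi] = NULL;
            pthread_mutex_unlock(&t->lock);
    }
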
-static int cmdq_set_arm_bit(struct hinic_cmdq *cmdq, void *buf_in,
- u16 in_size)
-{
- struct hinic_cmdq_wqe *curr_cmdq_wqe, cmdq_wqe;
- u16 curr_prod_idx, next_prod_idx;
- struct hinic_wq *wq = cmdq->wq;
- struct hinic_hw_wqe *hw_wqe;
- int wrapped, num_wqebbs;
-
- /* Keep doorbell index correct */
- spin_lock(&cmdq->cmdq_lock);
-
-	/* WQE_SIZE == WQEBB_SIZE, so we get the wq element itself, not a shadow copy */
- hw_wqe = hinic_get_wqe(wq, WQE_SCMD_SIZE, &curr_prod_idx);
- if (IS_ERR(hw_wqe)) {
- spin_unlock(&cmdq->cmdq_lock);
- return -EBUSY;
- }
-
- curr_cmdq_wqe = &hw_wqe->cmdq_wqe;
-
- wrapped = cmdq->wrapped;
-
- num_wqebbs = ALIGN(WQE_SCMD_SIZE, wq->wqebb_size) / wq->wqebb_size;
- next_prod_idx = curr_prod_idx + num_wqebbs;
- if (next_prod_idx >= wq->q_depth) {
- cmdq->wrapped = !cmdq->wrapped;
- next_prod_idx -= wq->q_depth;
- }
-
- cmdq_set_direct_wqe(&cmdq_wqe, CMDQ_CMD_SYNC_DIRECT_RESP, buf_in,
- in_size, NULL, wrapped, HINIC_CMD_ACK_TYPE_CMDQ,
- HINIC_MOD_COMM, CMDQ_SET_ARM_CMD, curr_prod_idx);
-
- /* The data that is written to HW should be in Big Endian Format */
- hinic_cpu_to_be32(&cmdq_wqe, WQE_SCMD_SIZE);
-
-	/* the cmdq wqe is not a shadow copy, so write it directly to the wq */
- cmdq_wqe_fill(curr_cmdq_wqe, &cmdq_wqe);
-
- cmdq_set_db(cmdq, HINIC_CMDQ_SYNC, next_prod_idx);
-
- spin_unlock(&cmdq->cmdq_lock);
- return 0;
-}
-
-static int cmdq_params_valid(struct hinic_cmdq_buf *buf_in)
-{
- if (buf_in->size > HINIC_CMDQ_MAX_DATA_SIZE)
- return -EINVAL;
-
- return 0;
-}
-
-/**
- * hinic_cmdq_direct_resp - send command with direct data as resp
- * @cmdqs: the cmdqs
- * @mod: module on the card that will handle the command
- * @cmd: the command
- * @buf_in: the buffer for the command
- * @resp: the response to return
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_cmdq_direct_resp(struct hinic_cmdqs *cmdqs,
- enum hinic_mod_type mod, u8 cmd,
- struct hinic_cmdq_buf *buf_in, u64 *resp)
-{
- struct hinic_hwif *hwif = cmdqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int err;
-
- err = cmdq_params_valid(buf_in);
- if (err) {
- dev_err(&pdev->dev, "Invalid CMDQ parameters\n");
- return err;
- }
-
- return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC_CMDQ_SYNC],
- mod, cmd, buf_in, resp);
-}
-
-/**
- * hinic_set_arm_bit - set the arm bit to re-enable the interrupt
- * @cmdqs: the cmdqs
- * @q_type: type of queue to set the arm bit for
- * @q_id: the queue number
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_set_arm_bit(struct hinic_cmdqs *cmdqs,
- enum hinic_set_arm_qtype q_type, u32 q_id)
-{
- struct hinic_cmdq *cmdq = &cmdqs->cmdq[HINIC_CMDQ_SYNC];
- struct hinic_hwif *hwif = cmdqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_cmdq_arm_bit arm_bit;
- int err;
-
- arm_bit.q_type = q_type;
- arm_bit.q_id = q_id;
-
- err = cmdq_set_arm_bit(cmdq, &arm_bit, sizeof(arm_bit));
- if (err) {
- dev_err(&pdev->dev, "Failed to set arm for qid %d\n", q_id);
- return err;
- }
-
- return 0;
-}
-
-static void clear_wqe_complete_bit(struct hinic_cmdq *cmdq,
- struct hinic_cmdq_wqe *wqe)
-{
- u32 header_info = be32_to_cpu(CMDQ_WQE_HEADER(wqe)->header_info);
- unsigned int bufdesc_len, wqe_size;
- struct hinic_ctrl *ctrl;
-
- bufdesc_len = HINIC_CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN);
- wqe_size = cmdq_wqe_size_from_bdlen(bufdesc_len);
- if (wqe_size == WQE_LCMD_SIZE) {
- struct hinic_cmdq_wqe_lcmd *wqe_lcmd = &wqe->wqe_lcmd;
-
- ctrl = &wqe_lcmd->ctrl;
- } else {
- struct hinic_cmdq_direct_wqe *direct_wqe = &wqe->direct_wqe;
- struct hinic_cmdq_wqe_scmd *wqe_scmd;
-
- wqe_scmd = &direct_wqe->wqe_scmd;
- ctrl = &wqe_scmd->ctrl;
- }
-
- /* clear HW busy bit */
- ctrl->ctrl_info = 0;
-
-	wmb(); /* make the cleared wqe visible */
-}
-
-/**
- * cmdq_arm_ceq_handler - cmdq completion event handler for arm command
- * @cmdq: the cmdq of the arm command
- * @wqe: the wqe of the arm command
- *
- * Return 0 - Success, negative - Failure
- **/
-static int cmdq_arm_ceq_handler(struct hinic_cmdq *cmdq,
- struct hinic_cmdq_wqe *wqe)
-{
- struct hinic_cmdq_direct_wqe *direct_wqe = &wqe->direct_wqe;
- struct hinic_cmdq_wqe_scmd *wqe_scmd;
- struct hinic_ctrl *ctrl;
- u32 ctrl_info;
-
- wqe_scmd = &direct_wqe->wqe_scmd;
- ctrl = &wqe_scmd->ctrl;
- ctrl_info = be32_to_cpu(ctrl->ctrl_info);
-
- /* HW should toggle the HW BUSY BIT */
- if (!CMDQ_WQE_COMPLETED(ctrl_info))
- return -EBUSY;
-
- clear_wqe_complete_bit(cmdq, wqe);
-
- hinic_put_wqe(cmdq->wq, WQE_SCMD_SIZE);
- return 0;
-}
-
-static void cmdq_update_errcode(struct hinic_cmdq *cmdq, u16 prod_idx,
- int errcode)
-{
- if (cmdq->errcode[prod_idx])
- *cmdq->errcode[prod_idx] = errcode;
-}
-
-/**
- * cmdq_sync_cmd_handler - cmdq completion event handler for sync command
- * @cmdq: the cmdq of the command
- * @cons_idx: the consumer index to update the error code for
- * @errcode: the error code
- **/
-static void cmdq_sync_cmd_handler(struct hinic_cmdq *cmdq, u16 cons_idx,
- int errcode)
-{
- u16 prod_idx = cons_idx;
-
- spin_lock(&cmdq->cmdq_lock);
- cmdq_update_errcode(cmdq, prod_idx, errcode);
-
- wmb(); /* write all before update for the command request */
-
- if (cmdq->done[prod_idx])
- complete(cmdq->done[prod_idx]);
- spin_unlock(&cmdq->cmdq_lock);
-}
-
-static int cmdq_cmd_ceq_handler(struct hinic_cmdq *cmdq, u16 ci,
- struct hinic_cmdq_wqe *cmdq_wqe)
-{
- struct hinic_cmdq_wqe_lcmd *wqe_lcmd = &cmdq_wqe->wqe_lcmd;
- struct hinic_status *status = &wqe_lcmd->status;
- struct hinic_ctrl *ctrl = &wqe_lcmd->ctrl;
- int errcode;
-
- if (!CMDQ_WQE_COMPLETED(be32_to_cpu(ctrl->ctrl_info)))
- return -EBUSY;
-
- errcode = CMDQ_WQE_ERRCODE_GET(be32_to_cpu(status->status_info), VAL);
-
- cmdq_sync_cmd_handler(cmdq, ci, errcode);
-
- clear_wqe_complete_bit(cmdq, cmdq_wqe);
- hinic_put_wqe(cmdq->wq, WQE_LCMD_SIZE);
- return 0;
-}
-
-/**
- * cmdq_ceq_handler - cmdq completion event handler
- * @handle: private data for the handler(cmdqs)
- * @ceqe_data: ceq element data
- **/
-static void cmdq_ceq_handler(void *handle, u32 ceqe_data)
-{
- enum hinic_cmdq_type cmdq_type = CMDQ_CEQE_GET(ceqe_data, TYPE);
- struct hinic_cmdqs *cmdqs = (struct hinic_cmdqs *)handle;
- struct hinic_cmdq *cmdq = &cmdqs->cmdq[cmdq_type];
- struct hinic_cmdq_header *header;
- struct hinic_hw_wqe *hw_wqe;
- int err, set_arm = 0;
- u32 saved_data;
- u16 ci;
-
-	/* read with the smallest wqe size first to learn the actual wqe size */
- while ((hw_wqe = hinic_read_wqe(cmdq->wq, WQE_SCMD_SIZE, &ci))) {
- if (IS_ERR(hw_wqe))
- break;
-
- header = CMDQ_WQE_HEADER(&hw_wqe->cmdq_wqe);
- saved_data = be32_to_cpu(header->saved_data);
-
- if (HINIC_SAVED_DATA_GET(saved_data, ARM)) {
-			/* this wqe is the arm command itself */
- set_arm = 0;
-
- if (cmdq_arm_ceq_handler(cmdq, &hw_wqe->cmdq_wqe))
- break;
- } else {
- set_arm = 1;
-
- hw_wqe = hinic_read_wqe(cmdq->wq, WQE_LCMD_SIZE, &ci);
- if (IS_ERR(hw_wqe))
- break;
-
- if (cmdq_cmd_ceq_handler(cmdq, ci, &hw_wqe->cmdq_wqe))
- break;
- }
- }
-
- if (set_arm) {
- struct hinic_hwif *hwif = cmdqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- err = hinic_set_arm_bit(cmdqs, HINIC_SET_ARM_CMDQ, cmdq_type);
- if (err)
- dev_err(&pdev->dev, "Failed to set arm for CMDQ\n");
- }
-}
-
-/**
- * cmdq_init_queue_ctxt - init the queue ctxt of a cmdq
- * @cmdq_ctxt: cmdq ctxt to initialize
- * @cmdq: the cmdq
- * @cmdq_pages: the memory of the queue
- **/
-static void cmdq_init_queue_ctxt(struct hinic_cmdq_ctxt *cmdq_ctxt,
- struct hinic_cmdq *cmdq,
- struct hinic_cmdq_pages *cmdq_pages)
-{
- struct hinic_cmdq_ctxt_info *ctxt_info = &cmdq_ctxt->ctxt_info;
- u64 wq_first_page_paddr, cmdq_first_block_paddr, pfn;
- struct hinic_cmdqs *cmdqs = cmdq_to_cmdqs(cmdq);
- struct hinic_wq *wq = cmdq->wq;
-
- /* The data in the HW is in Big Endian Format */
- wq_first_page_paddr = be64_to_cpu(*wq->block_vaddr);
-
- pfn = CMDQ_PFN(wq_first_page_paddr, wq->wq_page_size);
-
- ctxt_info->curr_wqe_page_pfn =
- HINIC_CMDQ_CTXT_PAGE_INFO_SET(pfn, CURR_WQE_PAGE_PFN) |
- HINIC_CMDQ_CTXT_PAGE_INFO_SET(HINIC_CEQ_ID_CMDQ, EQ_ID) |
- HINIC_CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_ARM) |
- HINIC_CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_EN) |
- HINIC_CMDQ_CTXT_PAGE_INFO_SET(cmdq->wrapped, WRAPPED);
-
- /* block PFN - Read Modify Write */
- cmdq_first_block_paddr = cmdq_pages->page_paddr;
-
- pfn = CMDQ_PFN(cmdq_first_block_paddr, wq->wq_page_size);
-
- ctxt_info->wq_block_pfn =
- HINIC_CMDQ_CTXT_BLOCK_INFO_SET(pfn, WQ_BLOCK_PFN) |
- HINIC_CMDQ_CTXT_BLOCK_INFO_SET(atomic_read(&wq->cons_idx), CI);
-
- cmdq_ctxt->func_idx = HINIC_HWIF_FUNC_IDX(cmdqs->hwif);
- cmdq_ctxt->cmdq_type = cmdq->cmdq_type;
-}
-
-/**
- * init_cmdq - initialize cmdq
- * @cmdq: the cmdq
- * @wq: the wq attached to the cmdq
- * @q_type: the cmdq type of the cmdq
- * @db_area: doorbell area for the cmdq
- *
- * Return 0 - Success, negative - Failure
- **/
-static int init_cmdq(struct hinic_cmdq *cmdq, struct hinic_wq *wq,
- enum hinic_cmdq_type q_type, void __iomem *db_area)
-{
- int err;
-
- cmdq->wq = wq;
- cmdq->cmdq_type = q_type;
- cmdq->wrapped = 1;
-
- spin_lock_init(&cmdq->cmdq_lock);
-
- cmdq->done = vzalloc(array_size(sizeof(*cmdq->done), wq->q_depth));
- if (!cmdq->done)
- return -ENOMEM;
-
- cmdq->errcode = vzalloc(array_size(sizeof(*cmdq->errcode),
- wq->q_depth));
- if (!cmdq->errcode) {
- err = -ENOMEM;
- goto err_errcode;
- }
-
- cmdq->db_base = db_area + CMDQ_DB_OFF;
- return 0;
-
-err_errcode:
- vfree(cmdq->done);
- return err;
-}
-
-/**
- * free_cmdq - Free cmdq
- * @cmdq: the cmdq to free
- **/
-static void free_cmdq(struct hinic_cmdq *cmdq)
-{
- vfree(cmdq->errcode);
- vfree(cmdq->done);
-}
-
-/**
- * init_cmdqs_ctxt - write the cmdq ctxts to HW after initializing all cmdqs
- * @hwdev: the NIC HW device
- * @cmdqs: cmdqs to write the ctxts for
- * @db_area: doorbell areas for all the cmdqs
- *
- * Return 0 - Success, negative - Failure
- **/
-static int init_cmdqs_ctxt(struct hinic_hwdev *hwdev,
- struct hinic_cmdqs *cmdqs, void __iomem **db_area)
-{
- struct hinic_hwif *hwif = hwdev->hwif;
- enum hinic_cmdq_type type, cmdq_type;
- struct hinic_cmdq_ctxt *cmdq_ctxts;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_pfhwdev *pfhwdev;
- size_t cmdq_ctxts_size;
- int err;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "Unsupported PCI function type\n");
- return -EINVAL;
- }
-
- cmdq_ctxts_size = HINIC_MAX_CMDQ_TYPES * sizeof(*cmdq_ctxts);
- cmdq_ctxts = devm_kzalloc(&pdev->dev, cmdq_ctxts_size, GFP_KERNEL);
- if (!cmdq_ctxts)
- return -ENOMEM;
-
- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
-
- cmdq_type = HINIC_CMDQ_SYNC;
- for (; cmdq_type < HINIC_MAX_CMDQ_TYPES; cmdq_type++) {
- err = init_cmdq(&cmdqs->cmdq[cmdq_type],
- &cmdqs->saved_wqs[cmdq_type], cmdq_type,
- db_area[cmdq_type]);
- if (err) {
- dev_err(&pdev->dev, "Failed to initialize cmdq\n");
- goto err_init_cmdq;
- }
-
- cmdq_init_queue_ctxt(&cmdq_ctxts[cmdq_type],
- &cmdqs->cmdq[cmdq_type],
- &cmdqs->cmdq_pages);
- }
-
- /* Write the CMDQ ctxts */
- cmdq_type = HINIC_CMDQ_SYNC;
- for (; cmdq_type < HINIC_MAX_CMDQ_TYPES; cmdq_type++) {
- err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
- HINIC_COMM_CMD_CMDQ_CTXT_SET,
- &cmdq_ctxts[cmdq_type],
- sizeof(cmdq_ctxts[cmdq_type]),
- NULL, NULL, HINIC_MGMT_MSG_SYNC);
- if (err) {
- dev_err(&pdev->dev, "Failed to set CMDQ CTXT type = %d\n",
- cmdq_type);
- goto err_write_cmdq_ctxt;
- }
- }
-
- devm_kfree(&pdev->dev, cmdq_ctxts);
- return 0;
-
-err_write_cmdq_ctxt:
- cmdq_type = HINIC_MAX_CMDQ_TYPES;
-
-err_init_cmdq:
- for (type = HINIC_CMDQ_SYNC; type < cmdq_type; type++)
- free_cmdq(&cmdqs->cmdq[type]);
-
- devm_kfree(&pdev->dev, cmdq_ctxts);
- return err;
-}
-
-/**
- * hinic_init_cmdqs - init all cmdqs
- * @cmdqs: cmdqs to init
- * @hwif: HW interface for accessing cmdqs
- * @db_area: doorbell areas for all the cmdqs
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
- void __iomem **db_area)
-{
- struct hinic_func_to_io *func_to_io = cmdqs_to_func_to_io(cmdqs);
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_hwdev *hwdev;
- size_t saved_wqs_size;
- u16 max_wqe_size;
- int err;
-
- cmdqs->hwif = hwif;
- cmdqs->cmdq_buf_pool = dma_pool_create("hinic_cmdq", &pdev->dev,
- HINIC_CMDQ_BUF_SIZE,
- HINIC_CMDQ_BUF_SIZE, 0);
- if (!cmdqs->cmdq_buf_pool)
- return -ENOMEM;
-
- saved_wqs_size = HINIC_MAX_CMDQ_TYPES * sizeof(struct hinic_wq);
- cmdqs->saved_wqs = devm_kzalloc(&pdev->dev, saved_wqs_size, GFP_KERNEL);
- if (!cmdqs->saved_wqs) {
- err = -ENOMEM;
- goto err_saved_wqs;
- }
-
- max_wqe_size = WQE_LCMD_SIZE;
- err = hinic_wqs_cmdq_alloc(&cmdqs->cmdq_pages, cmdqs->saved_wqs, hwif,
- HINIC_MAX_CMDQ_TYPES, CMDQ_WQEBB_SIZE,
- CMDQ_WQ_PAGE_SIZE, CMDQ_DEPTH, max_wqe_size);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate CMDQ wqs\n");
- goto err_cmdq_wqs;
- }
-
- hwdev = container_of(func_to_io, struct hinic_hwdev, func_to_io);
- err = init_cmdqs_ctxt(hwdev, cmdqs, db_area);
- if (err) {
- dev_err(&pdev->dev, "Failed to write cmdq ctxt\n");
- goto err_cmdq_ctxt;
- }
-
- hinic_ceq_register_cb(&func_to_io->ceqs, HINIC_CEQ_CMDQ, cmdqs,
- cmdq_ceq_handler);
- return 0;
-
-err_cmdq_ctxt:
- hinic_wqs_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
- HINIC_MAX_CMDQ_TYPES);
-
-err_cmdq_wqs:
- devm_kfree(&pdev->dev, cmdqs->saved_wqs);
-
-err_saved_wqs:
- dma_pool_destroy(cmdqs->cmdq_buf_pool);
- return err;
-}
-
-/**
- * hinic_free_cmdqs - free all cmdqs
- * @cmdqs: cmdqs to free
- **/
-void hinic_free_cmdqs(struct hinic_cmdqs *cmdqs)
-{
- struct hinic_func_to_io *func_to_io = cmdqs_to_func_to_io(cmdqs);
- struct hinic_hwif *hwif = cmdqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
- enum hinic_cmdq_type cmdq_type;
-
- hinic_ceq_unregister_cb(&func_to_io->ceqs, HINIC_CEQ_CMDQ);
-
- cmdq_type = HINIC_CMDQ_SYNC;
- for (; cmdq_type < HINIC_MAX_CMDQ_TYPES; cmdq_type++)
- free_cmdq(&cmdqs->cmdq[cmdq_type]);
-
- hinic_wqs_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
- HINIC_MAX_CMDQ_TYPES);
-
- devm_kfree(&pdev->dev, cmdqs->saved_wqs);
-
- dma_pool_destroy(cmdqs->cmdq_buf_pool);
-}
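
Both init_cmdqs_ctxt() and hinic_init_cmdqs() above use the kernel's unwind-in-reverse goto ladder: each error label releases exactly what was acquired before the failing step, in opposite order. A stripped-down illustration, with malloc standing in for the real allocations:

    #include <stdlib.h>

    static int setup(void **out_a, void **out_b)
    {
            *out_a = malloc(64);
            if (!*out_a)
                    return -1;

            *out_b = malloc(64);
            if (!*out_b)
                    goto err_b;

            return 0;

    err_b:
            free(*out_a);   /* unwind only what succeeded so far */
            return -1;
    }
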
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h
deleted file mode 100644
index 23f8d39..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h
+++ /dev/null
@@ -1,187 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_CMDQ_H
-#define HINIC_CMDQ_H
-
-#include <linux/types.h>
-#include <linux/spinlock.h>
-#include <linux/completion.h>
-#include <linux/pci.h>
-
-#include "hinic_hw_if.h"
-#include "hinic_hw_wq.h"
-
-#define HINIC_CMDQ_CTXT_CURR_WQE_PAGE_PFN_SHIFT 0
-#define HINIC_CMDQ_CTXT_EQ_ID_SHIFT 56
-#define HINIC_CMDQ_CTXT_CEQ_ARM_SHIFT 61
-#define HINIC_CMDQ_CTXT_CEQ_EN_SHIFT 62
-#define HINIC_CMDQ_CTXT_WRAPPED_SHIFT 63
-
-#define HINIC_CMDQ_CTXT_CURR_WQE_PAGE_PFN_MASK 0xFFFFFFFFFFFFF
-#define HINIC_CMDQ_CTXT_EQ_ID_MASK 0x1F
-#define HINIC_CMDQ_CTXT_CEQ_ARM_MASK 0x1
-#define HINIC_CMDQ_CTXT_CEQ_EN_MASK 0x1
-#define HINIC_CMDQ_CTXT_WRAPPED_MASK 0x1
-
-#define HINIC_CMDQ_CTXT_PAGE_INFO_SET(val, member) \
- (((u64)(val) & HINIC_CMDQ_CTXT_##member##_MASK) \
- << HINIC_CMDQ_CTXT_##member##_SHIFT)
-
-#define HINIC_CMDQ_CTXT_PAGE_INFO_CLEAR(val, member) \
- ((val) & (~((u64)HINIC_CMDQ_CTXT_##member##_MASK \
- << HINIC_CMDQ_CTXT_##member##_SHIFT)))
-
-#define HINIC_CMDQ_CTXT_WQ_BLOCK_PFN_SHIFT 0
-#define HINIC_CMDQ_CTXT_CI_SHIFT 52
-
-#define HINIC_CMDQ_CTXT_WQ_BLOCK_PFN_MASK 0xFFFFFFFFFFFFF
-#define HINIC_CMDQ_CTXT_CI_MASK 0xFFF
-
-#define HINIC_CMDQ_CTXT_BLOCK_INFO_SET(val, member) \
- (((u64)(val) & HINIC_CMDQ_CTXT_##member##_MASK) \
- << HINIC_CMDQ_CTXT_##member##_SHIFT)
-
-#define HINIC_CMDQ_CTXT_BLOCK_INFO_CLEAR(val, member) \
- ((val) & (~((u64)HINIC_CMDQ_CTXT_##member##_MASK \
- << HINIC_CMDQ_CTXT_##member##_SHIFT)))
-
-#define HINIC_SAVED_DATA_ARM_SHIFT 31
-
-#define HINIC_SAVED_DATA_ARM_MASK 0x1
-
-#define HINIC_SAVED_DATA_SET(val, member) \
- (((u32)(val) & HINIC_SAVED_DATA_##member##_MASK) \
- << HINIC_SAVED_DATA_##member##_SHIFT)
-
-#define HINIC_SAVED_DATA_GET(val, member) \
- (((val) >> HINIC_SAVED_DATA_##member##_SHIFT) \
- & HINIC_SAVED_DATA_##member##_MASK)
-
-#define HINIC_SAVED_DATA_CLEAR(val, member) \
- ((val) & (~(HINIC_SAVED_DATA_##member##_MASK \
- << HINIC_SAVED_DATA_##member##_SHIFT)))
-
-#define HINIC_CMDQ_DB_INFO_HI_PROD_IDX_SHIFT 0
-#define HINIC_CMDQ_DB_INFO_PATH_SHIFT 23
-#define HINIC_CMDQ_DB_INFO_CMDQ_TYPE_SHIFT 24
-#define HINIC_CMDQ_DB_INFO_DB_TYPE_SHIFT 27
-
-#define HINIC_CMDQ_DB_INFO_HI_PROD_IDX_MASK 0xFF
-#define HINIC_CMDQ_DB_INFO_PATH_MASK 0x1
-#define HINIC_CMDQ_DB_INFO_CMDQ_TYPE_MASK 0x7
-#define HINIC_CMDQ_DB_INFO_DB_TYPE_MASK 0x1F
-
-#define HINIC_CMDQ_DB_INFO_SET(val, member) \
- (((u32)(val) & HINIC_CMDQ_DB_INFO_##member##_MASK) \
- << HINIC_CMDQ_DB_INFO_##member##_SHIFT)
-
-#define HINIC_CMDQ_BUF_SIZE 2048
-
-#define HINIC_CMDQ_BUF_HW_RSVD 8
-#define HINIC_CMDQ_MAX_DATA_SIZE (HINIC_CMDQ_BUF_SIZE - \
- HINIC_CMDQ_BUF_HW_RSVD)
-
-enum hinic_cmdq_type {
- HINIC_CMDQ_SYNC,
-
- HINIC_MAX_CMDQ_TYPES,
-};
-
-enum hinic_set_arm_qtype {
- HINIC_SET_ARM_CMDQ,
-};
-
-enum hinic_cmd_ack_type {
- HINIC_CMD_ACK_TYPE_CMDQ,
-};
-
-struct hinic_cmdq_buf {
- void *buf;
- dma_addr_t dma_addr;
- size_t size;
-};
-
-struct hinic_cmdq_arm_bit {
- u32 q_type;
- u32 q_id;
-};
-
-struct hinic_cmdq_ctxt_info {
- u64 curr_wqe_page_pfn;
- u64 wq_block_pfn;
-};
-
-struct hinic_cmdq_ctxt {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u8 cmdq_type;
- u8 rsvd1[1];
-
- u8 rsvd2[4];
-
- struct hinic_cmdq_ctxt_info ctxt_info;
-};
-
-struct hinic_cmdq {
- struct hinic_wq *wq;
-
- enum hinic_cmdq_type cmdq_type;
- int wrapped;
-
- /* Lock for keeping the doorbell order */
- spinlock_t cmdq_lock;
-
- struct completion **done;
- int **errcode;
-
- /* doorbell area */
- void __iomem *db_base;
-};
-
-struct hinic_cmdqs {
- struct hinic_hwif *hwif;
-
- struct dma_pool *cmdq_buf_pool;
-
- struct hinic_wq *saved_wqs;
-
- struct hinic_cmdq_pages cmdq_pages;
-
- struct hinic_cmdq cmdq[HINIC_MAX_CMDQ_TYPES];
-};
-
-int hinic_alloc_cmdq_buf(struct hinic_cmdqs *cmdqs,
- struct hinic_cmdq_buf *cmdq_buf);
-
-void hinic_free_cmdq_buf(struct hinic_cmdqs *cmdqs,
- struct hinic_cmdq_buf *cmdq_buf);
-
-int hinic_cmdq_direct_resp(struct hinic_cmdqs *cmdqs,
- enum hinic_mod_type mod, u8 cmd,
- struct hinic_cmdq_buf *buf_in, u64 *out_param);
-
-int hinic_set_arm_bit(struct hinic_cmdqs *cmdqs,
- enum hinic_set_arm_qtype q_type, u32 q_id);
-
-int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
- void __iomem **db_area);
-
-void hinic_free_cmdqs(struct hinic_cmdqs *cmdqs);
-
-#endif
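
The CTXT macros above pack a 52-bit page frame number together with control bits into a single u64, which is what cmdq_init_queue_ctxt() computes. A compilable sketch of that packing; the 52-bit mask and bit 63 come from this header, while the helper name and 4K page size are illustrative:

    #include <stdint.h>

    #define PAGE_SHIFT_4K 12

    static uint64_t pack_page_info(uint64_t page_paddr, unsigned int wrapped)
    {
            uint64_t pfn = page_paddr >> PAGE_SHIFT_4K;   /* like CMDQ_PFN() */

            /* PFN in bits 0..51, wrapped bit at 63, as in the ctxt macros */
            return (pfn & 0xFFFFFFFFFFFFFULL) |
                   ((uint64_t)(wrapped & 1) << 63);
    }
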
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_csr.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_csr.h
deleted file mode 100644
index f39b184..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_csr.h
+++ /dev/null
@@ -1,149 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_HW_CSR_H
-#define HINIC_HW_CSR_H
-
-/* HW interface registers */
-#define HINIC_CSR_FUNC_ATTR0_ADDR 0x0
-#define HINIC_CSR_FUNC_ATTR1_ADDR 0x4
-
-#define HINIC_CSR_FUNC_ATTR4_ADDR 0x10
-#define HINIC_CSR_FUNC_ATTR5_ADDR 0x14
-
-#define HINIC_DMA_ATTR_BASE 0xC80
-#define HINIC_ELECTION_BASE 0x4200
-
-#define HINIC_DMA_ATTR_STRIDE 0x4
-#define HINIC_CSR_DMA_ATTR_ADDR(idx) \
- (HINIC_DMA_ATTR_BASE + (idx) * HINIC_DMA_ATTR_STRIDE)
-
-#define HINIC_PPF_ELECTION_STRIDE 0x4
-#define HINIC_CSR_MAX_PORTS 4
-
-#define HINIC_CSR_PPF_ELECTION_ADDR(idx) \
- (HINIC_ELECTION_BASE + (idx) * HINIC_PPF_ELECTION_STRIDE)
-
-/* API CMD registers */
-#define HINIC_CSR_API_CMD_BASE 0xF000
-
-#define HINIC_CSR_API_CMD_STRIDE 0x100
-
-#define HINIC_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(idx) \
- (HINIC_CSR_API_CMD_BASE + 0x0 + (idx) * HINIC_CSR_API_CMD_STRIDE)
-
-#define HINIC_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(idx) \
- (HINIC_CSR_API_CMD_BASE + 0x4 + (idx) * HINIC_CSR_API_CMD_STRIDE)
-
-#define HINIC_CSR_API_CMD_STATUS_HI_ADDR(idx) \
- (HINIC_CSR_API_CMD_BASE + 0x8 + (idx) * HINIC_CSR_API_CMD_STRIDE)
-
-#define HINIC_CSR_API_CMD_STATUS_LO_ADDR(idx) \
- (HINIC_CSR_API_CMD_BASE + 0xC + (idx) * HINIC_CSR_API_CMD_STRIDE)
-
-#define HINIC_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(idx) \
- (HINIC_CSR_API_CMD_BASE + 0x10 + (idx) * HINIC_CSR_API_CMD_STRIDE)
-
-#define HINIC_CSR_API_CMD_CHAIN_CTRL_ADDR(idx) \
- (HINIC_CSR_API_CMD_BASE + 0x14 + (idx) * HINIC_CSR_API_CMD_STRIDE)
-
-#define HINIC_CSR_API_CMD_CHAIN_PI_ADDR(idx) \
- (HINIC_CSR_API_CMD_BASE + 0x1C + (idx) * HINIC_CSR_API_CMD_STRIDE)
-
-#define HINIC_CSR_API_CMD_CHAIN_REQ_ADDR(idx) \
- (HINIC_CSR_API_CMD_BASE + 0x20 + (idx) * HINIC_CSR_API_CMD_STRIDE)
-
-#define HINIC_CSR_API_CMD_STATUS_ADDR(idx) \
- (HINIC_CSR_API_CMD_BASE + 0x30 + (idx) * HINIC_CSR_API_CMD_STRIDE)
-
-/* MSI-X registers */
-#define HINIC_CSR_MSIX_CTRL_BASE 0x2000
-#define HINIC_CSR_MSIX_CNT_BASE 0x2004
-
-#define HINIC_CSR_MSIX_STRIDE 0x8
-
-#define HINIC_CSR_MSIX_CTRL_ADDR(idx) \
- (HINIC_CSR_MSIX_CTRL_BASE + (idx) * HINIC_CSR_MSIX_STRIDE)
-
-#define HINIC_CSR_MSIX_CNT_ADDR(idx) \
- (HINIC_CSR_MSIX_CNT_BASE + (idx) * HINIC_CSR_MSIX_STRIDE)
-
-/* EQ registers */
-#define HINIC_AEQ_MTT_OFF_BASE_ADDR 0x200
-#define HINIC_CEQ_MTT_OFF_BASE_ADDR 0x400
-
-#define HINIC_EQ_MTT_OFF_STRIDE 0x40
-
-#define HINIC_CSR_AEQ_MTT_OFF(id) \
- (HINIC_AEQ_MTT_OFF_BASE_ADDR + (id) * HINIC_EQ_MTT_OFF_STRIDE)
-
-#define HINIC_CSR_CEQ_MTT_OFF(id) \
- (HINIC_CEQ_MTT_OFF_BASE_ADDR + (id) * HINIC_EQ_MTT_OFF_STRIDE)
-
-#define HINIC_CSR_EQ_PAGE_OFF_STRIDE 8
-
-#define HINIC_CSR_AEQ_HI_PHYS_ADDR_REG(q_id, pg_num) \
- (HINIC_CSR_AEQ_MTT_OFF(q_id) + \
- (pg_num) * HINIC_CSR_EQ_PAGE_OFF_STRIDE)
-
-#define HINIC_CSR_CEQ_HI_PHYS_ADDR_REG(q_id, pg_num) \
- (HINIC_CSR_CEQ_MTT_OFF(q_id) + \
- (pg_num) * HINIC_CSR_EQ_PAGE_OFF_STRIDE)
-
-#define HINIC_CSR_AEQ_LO_PHYS_ADDR_REG(q_id, pg_num) \
- (HINIC_CSR_AEQ_MTT_OFF(q_id) + \
- (pg_num) * HINIC_CSR_EQ_PAGE_OFF_STRIDE + 4)
-
-#define HINIC_CSR_CEQ_LO_PHYS_ADDR_REG(q_id, pg_num) \
- (HINIC_CSR_CEQ_MTT_OFF(q_id) + \
- (pg_num) * HINIC_CSR_EQ_PAGE_OFF_STRIDE + 4)
-
-#define HINIC_AEQ_CTRL_0_ADDR_BASE 0xE00
-#define HINIC_AEQ_CTRL_1_ADDR_BASE 0xE04
-#define HINIC_AEQ_CONS_IDX_ADDR_BASE 0xE08
-#define HINIC_AEQ_PROD_IDX_ADDR_BASE 0xE0C
-
-#define HINIC_CEQ_CTRL_0_ADDR_BASE 0x1000
-#define HINIC_CEQ_CTRL_1_ADDR_BASE 0x1004
-#define HINIC_CEQ_CONS_IDX_ADDR_BASE 0x1008
-#define HINIC_CEQ_PROD_IDX_ADDR_BASE 0x100C
-
-#define HINIC_EQ_OFF_STRIDE 0x80
-
-#define HINIC_CSR_AEQ_CTRL_0_ADDR(idx) \
- (HINIC_AEQ_CTRL_0_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
-
-#define HINIC_CSR_AEQ_CTRL_1_ADDR(idx) \
- (HINIC_AEQ_CTRL_1_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
-
-#define HINIC_CSR_AEQ_CONS_IDX_ADDR(idx) \
- (HINIC_AEQ_CONS_IDX_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
-
-#define HINIC_CSR_AEQ_PROD_IDX_ADDR(idx) \
- (HINIC_AEQ_PROD_IDX_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
-
-#define HINIC_CSR_CEQ_CTRL_0_ADDR(idx) \
- (HINIC_CEQ_CTRL_0_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
-
-#define HINIC_CSR_CEQ_CTRL_1_ADDR(idx) \
- (HINIC_CEQ_CTRL_1_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
-
-#define HINIC_CSR_CEQ_CONS_IDX_ADDR(idx) \
- (HINIC_CEQ_CONS_IDX_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
-
-#define HINIC_CSR_CEQ_PROD_IDX_ADDR(idx) \
- (HINIC_CEQ_PROD_IDX_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
-
-#endif
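
Every address macro in this header is base + index * stride arithmetic. For example, the MSI-X control register lookup expands to the equivalent of the following (helper name is illustrative; the constants are copied from this header):

    #include <stdint.h>

    static inline uint32_t csr_msix_ctrl_addr(uint32_t idx)
    {
            return 0x2000 + idx * 0x8;  /* MSIX_CTRL_BASE + idx * MSIX_STRIDE */
    }
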
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
deleted file mode 100644
index 6b19607..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
+++ /dev/null
@@ -1,1010 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/bitops.h>
-#include <linux/delay.h>
-#include <linux/jiffies.h>
-#include <linux/log2.h>
-#include <linux/err.h>
-
-#include "hinic_hw_if.h"
-#include "hinic_hw_eqs.h"
-#include "hinic_hw_mgmt.h"
-#include "hinic_hw_qp_ctxt.h"
-#include "hinic_hw_qp.h"
-#include "hinic_hw_io.h"
-#include "hinic_hw_dev.h"
-
-#define IO_STATUS_TIMEOUT 100
-#define OUTBOUND_STATE_TIMEOUT 100
-#define DB_STATE_TIMEOUT 100
-
-#define MAX_IRQS(max_qps, num_aeqs, num_ceqs) \
- (2 * (max_qps) + (num_aeqs) + (num_ceqs))
-
-#define ADDR_IN_4BYTES(addr) ((addr) >> 2)
-
-enum intr_type {
- INTR_MSIX_TYPE,
-};
-
-enum io_status {
- IO_STOPPED = 0,
- IO_RUNNING = 1,
-};
-
-enum hw_ioctxt_set_cmdq_depth {
- HW_IOCTXT_SET_CMDQ_DEPTH_DEFAULT,
-};
-
-/* HW struct */
-struct hinic_dev_cap {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u8 rsvd1[5];
- u8 intr_type;
- u8 rsvd2[66];
- u16 max_sqs;
- u16 max_rqs;
- u8 rsvd3[208];
-};
-
-/**
- * get_capability - convert device capabilities to NIC capabilities
- * @hwdev: the HW device to store the converted capabilities in
- * @dev_cap: device capabilities from FW
- *
- * Return 0 - Success, negative - Failure
- **/
-static int get_capability(struct hinic_hwdev *hwdev,
- struct hinic_dev_cap *dev_cap)
-{
- struct hinic_cap *nic_cap = &hwdev->nic_cap;
- int num_aeqs, num_ceqs, num_irqs;
-
- if (!HINIC_IS_PF(hwdev->hwif) && !HINIC_IS_PPF(hwdev->hwif))
- return -EINVAL;
-
- if (dev_cap->intr_type != INTR_MSIX_TYPE)
- return -EFAULT;
-
- num_aeqs = HINIC_HWIF_NUM_AEQS(hwdev->hwif);
- num_ceqs = HINIC_HWIF_NUM_CEQS(hwdev->hwif);
- num_irqs = HINIC_HWIF_NUM_IRQS(hwdev->hwif);
-
- /* Each QP has its own (SQ + RQ) interrupts */
- nic_cap->num_qps = (num_irqs - (num_aeqs + num_ceqs)) / 2;
-
- if (nic_cap->num_qps > HINIC_Q_CTXT_MAX)
- nic_cap->num_qps = HINIC_Q_CTXT_MAX;
-
-	/* num_qps must be a power of 2 */
- nic_cap->num_qps = BIT(fls(nic_cap->num_qps) - 1);
-
- nic_cap->max_qps = dev_cap->max_sqs + 1;
- if (nic_cap->max_qps != (dev_cap->max_rqs + 1))
- return -EFAULT;
-
- if (nic_cap->num_qps > nic_cap->max_qps)
- nic_cap->num_qps = nic_cap->max_qps;
-
- return 0;
-}
-
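
The capability math above reserves one IRQ per event queue and two per QP (SQ + RQ), then rounds down to a power of 2. The same arithmetic as a standalone sketch; the helper name is illustrative, and the kernel's BIT(fls(x) - 1) is replaced by a portable loop:

    static unsigned int derive_num_qps(unsigned int num_irqs,
                                       unsigned int num_aeqs,
                                       unsigned int num_ceqs)
    {
            unsigned int qps = (num_irqs - (num_aeqs + num_ceqs)) / 2;

            /* round down to a power of 2 by clearing low set bits */
            while (qps & (qps - 1))
                    qps &= qps - 1;
            return qps;
    }
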
-/**
- * get_cap_from_fw - get device capabilities from FW
- * @pfhwdev: the PF HW device to get capabilities for
- *
- * Return 0 - Success, negative - Failure
- **/
-static int get_cap_from_fw(struct hinic_pfhwdev *pfhwdev)
-{
- struct hinic_hwdev *hwdev = &pfhwdev->hwdev;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_dev_cap dev_cap;
- u16 in_len, out_len;
- int err;
-
- in_len = 0;
- out_len = sizeof(dev_cap);
-
- err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_CFGM,
- HINIC_CFG_NIC_CAP, &dev_cap, in_len, &dev_cap,
- &out_len, HINIC_MGMT_MSG_SYNC);
- if (err) {
- dev_err(&pdev->dev, "Failed to get capability from FW\n");
- return err;
- }
-
- return get_capability(hwdev, &dev_cap);
-}
-
-/**
- * get_dev_cap - get device capabilities
- * @hwdev: the NIC HW device to get capabilities for
- *
- * Return 0 - Success, negative - Failure
- **/
-static int get_dev_cap(struct hinic_hwdev *hwdev)
-{
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_pfhwdev *pfhwdev;
- int err;
-
- switch (HINIC_FUNC_TYPE(hwif)) {
- case HINIC_PPF:
- case HINIC_PF:
- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
-
- err = get_cap_from_fw(pfhwdev);
- if (err) {
- dev_err(&pdev->dev, "Failed to get capability from FW\n");
- return err;
- }
- break;
-
- default:
- dev_err(&pdev->dev, "Unsupported PCI Function type\n");
- return -EINVAL;
- }
-
- return 0;
-}
-
-/**
- * init_msix - enable the msix and save the entries
- * @hwdev: the NIC HW device
- *
- * Return 0 - Success, negative - Failure
- **/
-static int init_msix(struct hinic_hwdev *hwdev)
-{
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int nr_irqs, num_aeqs, num_ceqs;
- size_t msix_entries_size;
- int i, err;
-
- num_aeqs = HINIC_HWIF_NUM_AEQS(hwif);
- num_ceqs = HINIC_HWIF_NUM_CEQS(hwif);
- nr_irqs = MAX_IRQS(HINIC_MAX_QPS, num_aeqs, num_ceqs);
- if (nr_irqs > HINIC_HWIF_NUM_IRQS(hwif))
- nr_irqs = HINIC_HWIF_NUM_IRQS(hwif);
-
- msix_entries_size = nr_irqs * sizeof(*hwdev->msix_entries);
- hwdev->msix_entries = devm_kzalloc(&pdev->dev, msix_entries_size,
- GFP_KERNEL);
- if (!hwdev->msix_entries)
- return -ENOMEM;
-
- for (i = 0; i < nr_irqs; i++)
- hwdev->msix_entries[i].entry = i;
-
- err = pci_enable_msix_exact(pdev, hwdev->msix_entries, nr_irqs);
- if (err) {
- dev_err(&pdev->dev, "Failed to enable pci msix\n");
- return err;
- }
-
- return 0;
-}
-
-/**
- * disable_msix - disable the msix
- * @hwdev: the NIC HW device
- **/
-static void disable_msix(struct hinic_hwdev *hwdev)
-{
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- pci_disable_msix(pdev);
-}
-
-/**
- * hinic_port_msg_cmd - send port msg to mgmt
- * @hwdev: the NIC HW device
- * @cmd: the port command
- * @buf_in: input buffer
- * @in_size: input size
- * @buf_out: output buffer
- * @out_size: returned output size
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_msg_cmd(struct hinic_hwdev *hwdev, enum hinic_port_cmd cmd,
- void *buf_in, u16 in_size, void *buf_out, u16 *out_size)
-{
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_pfhwdev *pfhwdev;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "unsupported PCI Function type\n");
- return -EINVAL;
- }
-
- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
-
- return hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_L2NIC, cmd,
- buf_in, in_size, buf_out, out_size,
- HINIC_MGMT_MSG_SYNC);
-}
-
-/**
- * init_fw_ctxt - init firmware tables before network mgmt and IO operations
- * @hwdev: the NIC HW device
- *
- * Return 0 - Success, negative - Failure
- **/
-static int init_fw_ctxt(struct hinic_hwdev *hwdev)
-{
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_cmd_fw_ctxt fw_ctxt;
- u16 out_size;
- int err;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "Unsupported PCI Function type\n");
- return -EINVAL;
- }
-
- fw_ctxt.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
- fw_ctxt.rx_buf_sz = HINIC_RX_BUF_SZ;
-
- err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_FWCTXT_INIT,
- &fw_ctxt, sizeof(fw_ctxt),
- &fw_ctxt, &out_size);
- if (err || (out_size != sizeof(fw_ctxt)) || fw_ctxt.status) {
- dev_err(&pdev->dev, "Failed to init FW ctxt, ret = %d\n",
- fw_ctxt.status);
- return -EFAULT;
- }
-
- return 0;
-}
-
-/**
- * set_hw_ioctxt - set the shape of the IO queues in FW
- * @hwdev: the NIC HW device
- * @rq_depth: rq depth
- * @sq_depth: sq depth
- *
- * Return 0 - Success, negative - Failure
- **/
-static int set_hw_ioctxt(struct hinic_hwdev *hwdev, unsigned int rq_depth,
- unsigned int sq_depth)
-{
- struct hinic_hwif *hwif = hwdev->hwif;
- struct hinic_cmd_hw_ioctxt hw_ioctxt;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_pfhwdev *pfhwdev;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "Unsupported PCI Function type\n");
- return -EINVAL;
- }
-
- hw_ioctxt.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
-
- hw_ioctxt.set_cmdq_depth = HW_IOCTXT_SET_CMDQ_DEPTH_DEFAULT;
- hw_ioctxt.cmdq_depth = 0;
-
- hw_ioctxt.rq_depth = ilog2(rq_depth);
-
- hw_ioctxt.rx_buf_sz_idx = HINIC_RX_BUF_SZ_IDX;
-
- hw_ioctxt.sq_depth = ilog2(sq_depth);
-
- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
-
- return hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
- HINIC_COMM_CMD_HWCTXT_SET,
- &hw_ioctxt, sizeof(hw_ioctxt), NULL,
- NULL, HINIC_MGMT_MSG_SYNC);
-}
-
-static int wait_for_outbound_state(struct hinic_hwdev *hwdev)
-{
- enum hinic_outbound_state outbound_state;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- unsigned long end;
-
- end = jiffies + msecs_to_jiffies(OUTBOUND_STATE_TIMEOUT);
- do {
- outbound_state = hinic_outbound_state_get(hwif);
-
- if (outbound_state == HINIC_OUTBOUND_ENABLE)
- return 0;
-
- msleep(20);
- } while (time_before(jiffies, end));
-
- dev_err(&pdev->dev, "Wait for OUTBOUND - Timeout\n");
- return -EFAULT;
-}
-
-static int wait_for_db_state(struct hinic_hwdev *hwdev)
-{
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- enum hinic_db_state db_state;
- unsigned long end;
-
- end = jiffies + msecs_to_jiffies(DB_STATE_TIMEOUT);
- do {
- db_state = hinic_db_state_get(hwif);
-
- if (db_state == HINIC_DB_ENABLE)
- return 0;
-
- msleep(20);
- } while (time_before(jiffies, end));
-
- dev_err(&pdev->dev, "Wait for DB - Timeout\n");
- return -EFAULT;
-}
-
-static int wait_for_io_stopped(struct hinic_hwdev *hwdev)
-{
- struct hinic_cmd_io_status cmd_io_status;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_pfhwdev *pfhwdev;
- unsigned long end;
- u16 out_size;
- int err;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "Unsupported PCI Function type\n");
- return -EINVAL;
- }
-
- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
-
- cmd_io_status.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
-
- end = jiffies + msecs_to_jiffies(IO_STATUS_TIMEOUT);
- do {
- err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
- HINIC_COMM_CMD_IO_STATUS_GET,
- &cmd_io_status, sizeof(cmd_io_status),
- &cmd_io_status, &out_size,
- HINIC_MGMT_MSG_SYNC);
- if ((err) || (out_size != sizeof(cmd_io_status))) {
- dev_err(&pdev->dev, "Failed to get IO status, ret = %d\n",
- err);
- return err;
- }
-
- if (cmd_io_status.status == IO_STOPPED) {
- dev_info(&pdev->dev, "IO stopped\n");
- return 0;
- }
-
- msleep(20);
- } while (time_before(jiffies, end));
-
- dev_err(&pdev->dev, "Wait for IO stopped - Timeout\n");
- return -ETIMEDOUT;
-}
-
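
wait_for_outbound_state(), wait_for_db_state() and wait_for_io_stopped() all share the poll-sleep-deadline shape. A userspace analogue using CLOCK_MONOTONIC where the kernel uses jiffies and time_before() (names illustrative):

    #include <stdbool.h>
    #include <time.h>
    #include <unistd.h>

    static int poll_until(bool (*cond)(void), unsigned int timeout_ms)
    {
            struct timespec ts;
            long long now_ms, deadline;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            deadline = ts.tv_sec * 1000LL + ts.tv_nsec / 1000000 + timeout_ms;

            do {
                    if (cond())
                            return 0;
                    usleep(20 * 1000);      /* matches the driver's msleep(20) */
                    clock_gettime(CLOCK_MONOTONIC, &ts);
                    now_ms = ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
            } while (now_ms < deadline);

            return -1;      /* timed out */
    }
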
-/**
- * clear_io_resources - set the IO resources as not active in the NIC
- * @hwdev: the NIC HW device
- *
- * Return 0 - Success, negative - Failure
- **/
-static int clear_io_resources(struct hinic_hwdev *hwdev)
-{
- struct hinic_cmd_clear_io_res cmd_clear_io_res;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_pfhwdev *pfhwdev;
- int err;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "Unsupported PCI Function type\n");
- return -EINVAL;
- }
-
- err = wait_for_io_stopped(hwdev);
- if (err) {
- dev_err(&pdev->dev, "IO has not stopped yet\n");
- return err;
- }
-
- cmd_clear_io_res.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
-
- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
-
- err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
- HINIC_COMM_CMD_IO_RES_CLEAR, &cmd_clear_io_res,
- sizeof(cmd_clear_io_res), NULL, NULL,
- HINIC_MGMT_MSG_SYNC);
- if (err) {
- dev_err(&pdev->dev, "Failed to clear IO resources\n");
- return err;
- }
-
- return 0;
-}
-
-/**
- * set_resources_state - set the state of the resources in the NIC
- * @hwdev: the NIC HW device
- * @state: the state to set
- *
- * Return 0 - Success, negative - Failure
- **/
-static int set_resources_state(struct hinic_hwdev *hwdev,
- enum hinic_res_state state)
-{
- struct hinic_cmd_set_res_state res_state;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_pfhwdev *pfhwdev;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "Unsupported PCI Function type\n");
- return -EINVAL;
- }
-
- res_state.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
- res_state.state = state;
-
- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
-
- return hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt,
- HINIC_MOD_COMM,
- HINIC_COMM_CMD_RES_STATE_SET,
- &res_state, sizeof(res_state), NULL,
- NULL, HINIC_MGMT_MSG_SYNC);
-}
-
-/**
- * get_base_qpn - get the first qp number
- * @hwdev: the NIC HW device
- * @base_qpn: returned qp number
- *
- * Return 0 - Success, negative - Failure
- **/
-static int get_base_qpn(struct hinic_hwdev *hwdev, u16 *base_qpn)
-{
- struct hinic_cmd_base_qpn cmd_base_qpn;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- u16 out_size;
- int err;
-
- cmd_base_qpn.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
-
- err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_GLOBAL_QPN,
- &cmd_base_qpn, sizeof(cmd_base_qpn),
- &cmd_base_qpn, &out_size);
- if (err || (out_size != sizeof(cmd_base_qpn)) || cmd_base_qpn.status) {
- dev_err(&pdev->dev, "Failed to get base qpn, status = %d\n",
- cmd_base_qpn.status);
- return -EFAULT;
- }
-
- *base_qpn = cmd_base_qpn.qpn;
- return 0;
-}
-
-/**
- * hinic_hwdev_ifup - prepare the HW for passing IO
- * @hwdev: the NIC HW device
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_hwdev_ifup(struct hinic_hwdev *hwdev)
-{
- struct hinic_func_to_io *func_to_io = &hwdev->func_to_io;
- struct hinic_cap *nic_cap = &hwdev->nic_cap;
- struct hinic_hwif *hwif = hwdev->hwif;
- int err, num_aeqs, num_ceqs, num_qps;
- struct msix_entry *ceq_msix_entries;
- struct msix_entry *sq_msix_entries;
- struct msix_entry *rq_msix_entries;
- struct pci_dev *pdev = hwif->pdev;
- u16 base_qpn;
-
- err = get_base_qpn(hwdev, &base_qpn);
- if (err) {
- dev_err(&pdev->dev, "Failed to get global base qp number\n");
- return err;
- }
-
- num_aeqs = HINIC_HWIF_NUM_AEQS(hwif);
- num_ceqs = HINIC_HWIF_NUM_CEQS(hwif);
-
- ceq_msix_entries = &hwdev->msix_entries[num_aeqs];
-
- err = hinic_io_init(func_to_io, hwif, nic_cap->max_qps, num_ceqs,
- ceq_msix_entries);
- if (err) {
- dev_err(&pdev->dev, "Failed to init IO channel\n");
- return err;
- }
-
- num_qps = nic_cap->num_qps;
- sq_msix_entries = &hwdev->msix_entries[num_aeqs + num_ceqs];
- rq_msix_entries = &hwdev->msix_entries[num_aeqs + num_ceqs + num_qps];
-
- err = hinic_io_create_qps(func_to_io, base_qpn, num_qps,
- sq_msix_entries, rq_msix_entries);
- if (err) {
- dev_err(&pdev->dev, "Failed to create QPs\n");
- goto err_create_qps;
- }
-
- err = wait_for_db_state(hwdev);
- if (err) {
- dev_warn(&pdev->dev, "db - disabled, try again\n");
- hinic_db_state_set(hwif, HINIC_DB_ENABLE);
- }
-
- err = set_hw_ioctxt(hwdev, HINIC_SQ_DEPTH, HINIC_RQ_DEPTH);
- if (err) {
- dev_err(&pdev->dev, "Failed to set HW IO ctxt\n");
- goto err_hw_ioctxt;
- }
-
- return 0;
-
-err_hw_ioctxt:
- hinic_io_destroy_qps(func_to_io, num_qps);
-
-err_create_qps:
- hinic_io_free(func_to_io);
- return err;
-}
-
-/**
- * hinic_hwdev_ifdown - close the HW for passing IO
- * @hwdev: the NIC HW device
- *
- **/
-void hinic_hwdev_ifdown(struct hinic_hwdev *hwdev)
-{
- struct hinic_func_to_io *func_to_io = &hwdev->func_to_io;
- struct hinic_cap *nic_cap = &hwdev->nic_cap;
-
- clear_io_resources(hwdev);
-
- hinic_io_destroy_qps(func_to_io, nic_cap->num_qps);
- hinic_io_free(func_to_io);
-}
-
-/**
- * hinic_hwdev_cb_register - register callback handler for MGMT events
- * @hwdev: the NIC HW device
- * @cmd: the mgmt event
- * @handle: private data for the handler
- * @handler: event handler
- **/
-void hinic_hwdev_cb_register(struct hinic_hwdev *hwdev,
- enum hinic_mgmt_msg_cmd cmd, void *handle,
- void (*handler)(void *handle, void *buf_in,
- u16 in_size, void *buf_out,
- u16 *out_size))
-{
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_pfhwdev *pfhwdev;
- struct hinic_nic_cb *nic_cb;
- u8 cmd_cb;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "unsupported PCI Function type\n");
- return;
- }
-
- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
-
- cmd_cb = cmd - HINIC_MGMT_MSG_CMD_BASE;
- nic_cb = &pfhwdev->nic_cb[cmd_cb];
-
- nic_cb->handler = handler;
- nic_cb->handle = handle;
- nic_cb->cb_state = HINIC_CB_ENABLED;
-}
-
-/**
- * hinic_hwdev_cb_unregister - unregister callback handler for MGMT events
- * @hwdev: the NIC HW device
- * @cmd: the mgmt event
- **/
-void hinic_hwdev_cb_unregister(struct hinic_hwdev *hwdev,
- enum hinic_mgmt_msg_cmd cmd)
-{
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_pfhwdev *pfhwdev;
- struct hinic_nic_cb *nic_cb;
- u8 cmd_cb;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "unsupported PCI Function type\n");
- return;
- }
-
- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
-
- cmd_cb = cmd - HINIC_MGMT_MSG_CMD_BASE;
- nic_cb = &pfhwdev->nic_cb[cmd_cb];
-
- nic_cb->cb_state &= ~HINIC_CB_ENABLED;
-
- while (nic_cb->cb_state & HINIC_CB_RUNNING)
- schedule();
-
- nic_cb->handler = NULL;
-}
-
-/**
- * nic_mgmt_msg_handler - nic mgmt event handler
- * @handle: private data for the handler
- * @buf_in: input buffer
- * @in_size: input size
- * @buf_out: output buffer
- * @out_size: returned output size
- **/
-static void nic_mgmt_msg_handler(void *handle, u8 cmd, void *buf_in,
- u16 in_size, void *buf_out, u16 *out_size)
-{
- struct hinic_pfhwdev *pfhwdev = handle;
- enum hinic_cb_state cb_state;
- struct hinic_nic_cb *nic_cb;
- struct hinic_hwdev *hwdev;
- struct hinic_hwif *hwif;
- struct pci_dev *pdev;
- u8 cmd_cb;
-
- hwdev = &pfhwdev->hwdev;
- hwif = hwdev->hwif;
- pdev = hwif->pdev;
-
- if ((cmd < HINIC_MGMT_MSG_CMD_BASE) ||
- (cmd >= HINIC_MGMT_MSG_CMD_MAX)) {
- dev_err(&pdev->dev, "unknown L2NIC event, cmd = %d\n", cmd);
- return;
- }
-
- cmd_cb = cmd - HINIC_MGMT_MSG_CMD_BASE;
-
- nic_cb = &pfhwdev->nic_cb[cmd_cb];
-
- cb_state = cmpxchg(&nic_cb->cb_state,
- HINIC_CB_ENABLED,
- HINIC_CB_ENABLED | HINIC_CB_RUNNING);
-
- if ((cb_state == HINIC_CB_ENABLED) && (nic_cb->handler))
- nic_cb->handler(nic_cb->handle, buf_in,
- in_size, buf_out, out_size);
- else
- dev_err(&pdev->dev, "Unhandled NIC Event %d\n", cmd);
-
- nic_cb->cb_state &= ~HINIC_CB_RUNNING;
-}
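The cb_state bits above form a small lock-free handshake: the dispatcher claims the callback by moving ENABLED to ENABLED|RUNNING with cmpxchg, and unregister drops ENABLED and then spins until RUNNING clears before the handler pointer goes away. A minimal userspace analogue of the same pattern, with C11 atomics standing in for the kernel's cmpxchg()/schedule() (names illustrative, not from the driver):

#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define CB_ENABLED	(1u << 0)
#define CB_RUNNING	(1u << 1)

struct cb {
	_Atomic unsigned int state;
	void (*handler)(void);
};

static void dispatch(struct cb *cb)
{
	unsigned int old = CB_ENABLED;

	/* claim the callback only if it is enabled and idle */
	if (!atomic_compare_exchange_strong(&cb->state, &old,
					    CB_ENABLED | CB_RUNNING))
		return;

	if (cb->handler)
		cb->handler();

	atomic_fetch_and(&cb->state, ~CB_RUNNING);
}

static void unregister_cb(struct cb *cb)
{
	atomic_fetch_and(&cb->state, ~CB_ENABLED);

	/* drain any in-flight invocation, like the schedule() loop */
	while (atomic_load(&cb->state) & CB_RUNNING)
		sched_yield();

	cb->handler = NULL;
}

static void hello(void)
{
	puts("event handled");
}

int main(void)
{
	struct cb cb = { .handler = hello };

	atomic_store(&cb.state, CB_ENABLED);
	dispatch(&cb);		/* runs hello() */
	unregister_cb(&cb);
	dispatch(&cb);		/* disabled: does nothing */
	return 0;
}
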
-
-/**
- * init_pfhwdev - Initialize the extended components of PF
- * @pfhwdev: the HW device for PF
- *
- * Return 0 - success, negative - failure
- **/
-static int init_pfhwdev(struct hinic_pfhwdev *pfhwdev)
-{
- struct hinic_hwdev *hwdev = &pfhwdev->hwdev;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int err;
-
- err = hinic_pf_to_mgmt_init(&pfhwdev->pf_to_mgmt, hwif);
- if (err) {
- dev_err(&pdev->dev, "Failed to initialize PF to MGMT channel\n");
- return err;
- }
-
- hinic_register_mgmt_msg_cb(&pfhwdev->pf_to_mgmt, HINIC_MOD_L2NIC,
- pfhwdev, nic_mgmt_msg_handler);
-
- hinic_set_pf_action(hwif, HINIC_PF_MGMT_ACTIVE);
- return 0;
-}
-
-/**
- * free_pfhwdev - Free the extended components of PF
- * @pfhwdev: the HW device for PF
- **/
-static void free_pfhwdev(struct hinic_pfhwdev *pfhwdev)
-{
- struct hinic_hwdev *hwdev = &pfhwdev->hwdev;
-
- hinic_set_pf_action(hwdev->hwif, HINIC_PF_MGMT_INIT);
-
- hinic_unregister_mgmt_msg_cb(&pfhwdev->pf_to_mgmt, HINIC_MOD_L2NIC);
-
- hinic_pf_to_mgmt_free(&pfhwdev->pf_to_mgmt);
-}
-
-/**
- * hinic_init_hwdev - Initialize the NIC HW
- * @pdev: the NIC pci device
- *
- * Return initialized NIC HW device
- *
- * Initialize the NIC HW device and return a pointer to it
- **/
-struct hinic_hwdev *hinic_init_hwdev(struct pci_dev *pdev)
-{
- struct hinic_pfhwdev *pfhwdev;
- struct hinic_hwdev *hwdev;
- struct hinic_hwif *hwif;
- int err, num_aeqs;
-
- hwif = devm_kzalloc(&pdev->dev, sizeof(*hwif), GFP_KERNEL);
- if (!hwif)
- return ERR_PTR(-ENOMEM);
-
- err = hinic_init_hwif(hwif, pdev);
- if (err) {
- dev_err(&pdev->dev, "Failed to init HW interface\n");
- return ERR_PTR(err);
- }
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "Unsupported PCI Function type\n");
- err = -EFAULT;
- goto err_func_type;
- }
-
- pfhwdev = devm_kzalloc(&pdev->dev, sizeof(*pfhwdev), GFP_KERNEL);
- if (!pfhwdev) {
- err = -ENOMEM;
- goto err_pfhwdev_alloc;
- }
-
- hwdev = &pfhwdev->hwdev;
- hwdev->hwif = hwif;
-
- err = init_msix(hwdev);
- if (err) {
- dev_err(&pdev->dev, "Failed to init msix\n");
- goto err_init_msix;
- }
-
- err = wait_for_outbound_state(hwdev);
- if (err) {
- dev_warn(&pdev->dev, "outbound - disabled, try again\n");
- hinic_outbound_state_set(hwif, HINIC_OUTBOUND_ENABLE);
- }
-
- num_aeqs = HINIC_HWIF_NUM_AEQS(hwif);
-
- err = hinic_aeqs_init(&hwdev->aeqs, hwif, num_aeqs,
- HINIC_DEFAULT_AEQ_LEN, HINIC_EQ_PAGE_SIZE,
- hwdev->msix_entries);
- if (err) {
- dev_err(&pdev->dev, "Failed to init async event queues\n");
- goto err_aeqs_init;
- }
-
- err = init_pfhwdev(pfhwdev);
- if (err) {
- dev_err(&pdev->dev, "Failed to init PF HW device\n");
- goto err_init_pfhwdev;
- }
-
- err = get_dev_cap(hwdev);
- if (err) {
- dev_err(&pdev->dev, "Failed to get device capabilities\n");
- goto err_dev_cap;
- }
-
- err = init_fw_ctxt(hwdev);
- if (err) {
- dev_err(&pdev->dev, "Failed to init function table\n");
- goto err_init_fw_ctxt;
- }
-
- err = set_resources_state(hwdev, HINIC_RES_ACTIVE);
- if (err) {
- dev_err(&pdev->dev, "Failed to set resources state\n");
- goto err_resources_state;
- }
-
- return hwdev;
-
-err_resources_state:
-err_init_fw_ctxt:
-err_dev_cap:
- free_pfhwdev(pfhwdev);
-
-err_init_pfhwdev:
- hinic_aeqs_free(&hwdev->aeqs);
-
-err_aeqs_init:
- disable_msix(hwdev);
-
-err_init_msix:
-err_pfhwdev_alloc:
-err_func_type:
- hinic_free_hwif(hwif);
- return ERR_PTR(err);
-}
-
-/**
- * hinic_free_hwdev - Free the NIC HW device
- * @hwdev: the NIC HW device
- **/
-void hinic_free_hwdev(struct hinic_hwdev *hwdev)
-{
- struct hinic_pfhwdev *pfhwdev = container_of(hwdev,
- struct hinic_pfhwdev,
- hwdev);
-
- set_resources_state(hwdev, HINIC_RES_CLEAN);
-
- free_pfhwdev(pfhwdev);
-
- hinic_aeqs_free(&hwdev->aeqs);
-
- disable_msix(hwdev);
-
- hinic_free_hwif(hwdev->hwif);
-}
-
-/**
- * hinic_hwdev_num_qps - return the number QPs available for use
- * @hwdev: the NIC HW device
- *
- * Return number QPs available for use
- **/
-int hinic_hwdev_num_qps(struct hinic_hwdev *hwdev)
-{
- struct hinic_cap *nic_cap = &hwdev->nic_cap;
-
- return nic_cap->num_qps;
-}
-
-/**
- * hinic_hwdev_get_sq - get SQ
- * @hwdev: the NIC HW device
- * @i: the position of the SQ
- *
- * Return: the SQ in the i position
- **/
-struct hinic_sq *hinic_hwdev_get_sq(struct hinic_hwdev *hwdev, int i)
-{
- struct hinic_func_to_io *func_to_io = &hwdev->func_to_io;
-	struct hinic_qp *qp;
-
-	if (i >= hinic_hwdev_num_qps(hwdev))
-		return NULL;
-
-	qp = &func_to_io->qps[i];
-	return &qp->sq;
-}
-
-/**
- * hinic_hwdev_get_rq - get RQ
- * @hwdev: the NIC HW device
- * @i: the position of the RQ
- *
- * Return: the RQ in the i position
- **/
-struct hinic_rq *hinic_hwdev_get_rq(struct hinic_hwdev *hwdev, int i)
-{
- struct hinic_func_to_io *func_to_io = &hwdev->func_to_io;
-	struct hinic_qp *qp;
-
-	if (i >= hinic_hwdev_num_qps(hwdev))
-		return NULL;
-
-	qp = &func_to_io->qps[i];
-	return &qp->rq;
-}
-
-/**
- * hinic_hwdev_msix_cnt_set - clear message attribute counters for msix entry
- * @hwdev: the NIC HW device
- * @msix_index: msix_index
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_hwdev_msix_cnt_set(struct hinic_hwdev *hwdev, u16 msix_index)
-{
- return hinic_msix_attr_cnt_clear(hwdev->hwif, msix_index);
-}
-
-/**
- * hinic_hwdev_msix_set - set message attribute for msix entry
- * @hwdev: the NIC HW device
- * @msix_index: msix_index
- * @pending_limit: the maximum pending interrupt events (unit 8)
- * @coalesc_timer: coalesc period for interrupt (unit 8 us)
- * @lli_timer_cfg: replenishing period for low latency credit (unit 8 us)
- * @lli_credit_limit: maximum credits for low latency msix messages (unit 8)
- * @resend_timer: maximum wait for resending msix (unit coalesc period)
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_hwdev_msix_set(struct hinic_hwdev *hwdev, u16 msix_index,
- u8 pending_limit, u8 coalesc_timer,
- u8 lli_timer_cfg, u8 lli_credit_limit,
- u8 resend_timer)
-{
- return hinic_msix_attr_set(hwdev->hwif, msix_index,
- pending_limit, coalesc_timer,
- lli_timer_cfg, lli_credit_limit,
- resend_timer);
-}
-
-/**
- * hinic_hwdev_hw_ci_addr_set - set cons idx addr and attributes in HW for sq
- * @hwdev: the NIC HW device
- * @sq: send queue
- * @pending_limit: the maximum pending update ci events (unit 8)
- * @coalesc_timer: coalesc period for update ci (unit 8 us)
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_hwdev_hw_ci_addr_set(struct hinic_hwdev *hwdev, struct hinic_sq *sq,
- u8 pending_limit, u8 coalesc_timer)
-{
- struct hinic_qp *qp = container_of(sq, struct hinic_qp, sq);
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_pfhwdev *pfhwdev;
- struct hinic_cmd_hw_ci hw_ci;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "Unsupported PCI Function type\n");
- return -EINVAL;
- }
-
- hw_ci.dma_attr_off = 0;
- hw_ci.pending_limit = pending_limit;
- hw_ci.coalesc_timer = coalesc_timer;
-
- hw_ci.msix_en = 1;
- hw_ci.msix_entry_idx = sq->msix_entry;
-
- hw_ci.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
-
- hw_ci.sq_id = qp->q_id;
-
- hw_ci.ci_addr = ADDR_IN_4BYTES(sq->hw_ci_dma_addr);
-
- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
- return hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt,
- HINIC_MOD_COMM,
- HINIC_COMM_CMD_SQ_HI_CI_SET,
- &hw_ci, sizeof(hw_ci), NULL,
- NULL, HINIC_MGMT_MSG_SYNC);
-}
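For orientation, a hypothetical caller of the lifecycle API this file exported; the probe context and everything outside the calls shown are assumed, not part of the patch:

#include <linux/err.h>
#include <linux/pci.h>

#include "hinic_hw_dev.h"

static int example_bringup(struct pci_dev *pdev)
{
	struct hinic_hwdev *hwdev;
	int err;

	hwdev = hinic_init_hwdev(pdev);
	if (IS_ERR(hwdev))
		return PTR_ERR(hwdev);

	dev_info(&pdev->dev, "%d QPs available\n",
		 hinic_hwdev_num_qps(hwdev));

	err = hinic_hwdev_ifup(hwdev);
	if (err)
		goto err_ifup;

	/* ... run traffic via hinic_hwdev_get_sq()/_get_rq() ... */

	hinic_hwdev_ifdown(hwdev);
err_ifup:
	hinic_free_hwdev(hwdev);
	return err;
}
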
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h
deleted file mode 100644
index 0f5563f..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h
+++ /dev/null
@@ -1,239 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_HW_DEV_H
-#define HINIC_HW_DEV_H
-
-#include <linux/pci.h>
-#include <linux/types.h>
-#include <linux/bitops.h>
-
-#include "hinic_hw_if.h"
-#include "hinic_hw_eqs.h"
-#include "hinic_hw_mgmt.h"
-#include "hinic_hw_qp.h"
-#include "hinic_hw_io.h"
-
-#define HINIC_MAX_QPS 32
-
-#define HINIC_MGMT_NUM_MSG_CMD (HINIC_MGMT_MSG_CMD_MAX - \
- HINIC_MGMT_MSG_CMD_BASE)
-
-struct hinic_cap {
- u16 max_qps;
- u16 num_qps;
-};
-
-enum hinic_port_cmd {
- HINIC_PORT_CMD_CHANGE_MTU = 2,
-
- HINIC_PORT_CMD_ADD_VLAN = 3,
- HINIC_PORT_CMD_DEL_VLAN = 4,
-
- HINIC_PORT_CMD_SET_MAC = 9,
- HINIC_PORT_CMD_GET_MAC = 10,
- HINIC_PORT_CMD_DEL_MAC = 11,
-
- HINIC_PORT_CMD_SET_RX_MODE = 12,
-
- HINIC_PORT_CMD_GET_LINK_STATE = 24,
-
- HINIC_PORT_CMD_SET_PORT_STATE = 41,
-
- HINIC_PORT_CMD_FWCTXT_INIT = 69,
-
- HINIC_PORT_CMD_SET_FUNC_STATE = 93,
-
- HINIC_PORT_CMD_GET_GLOBAL_QPN = 102,
-
- HINIC_PORT_CMD_GET_CAP = 170,
-};
-
-enum hinic_mgmt_msg_cmd {
- HINIC_MGMT_MSG_CMD_BASE = 160,
-
- HINIC_MGMT_MSG_CMD_LINK_STATUS = 160,
-
- HINIC_MGMT_MSG_CMD_MAX,
-};
-
-enum hinic_cb_state {
- HINIC_CB_ENABLED = BIT(0),
- HINIC_CB_RUNNING = BIT(1),
-};
-
-enum hinic_res_state {
- HINIC_RES_CLEAN = 0,
- HINIC_RES_ACTIVE = 1,
-};
-
-struct hinic_cmd_fw_ctxt {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u16 rx_buf_sz;
-
- u32 rsvd1;
-};
-
-struct hinic_cmd_hw_ioctxt {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
-
- u16 rsvd1;
-
- u8 set_cmdq_depth;
- u8 cmdq_depth;
-
- u8 rsvd2;
- u8 rsvd3;
- u8 rsvd4;
- u8 rsvd5;
-
- u16 rq_depth;
- u16 rx_buf_sz_idx;
- u16 sq_depth;
-};
-
-struct hinic_cmd_io_status {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u8 rsvd1;
- u8 rsvd2;
- u32 io_status;
-};
-
-struct hinic_cmd_clear_io_res {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u8 rsvd1;
- u8 rsvd2;
-};
-
-struct hinic_cmd_set_res_state {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u8 state;
- u8 rsvd1;
- u32 rsvd2;
-};
-
-struct hinic_cmd_base_qpn {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u16 qpn;
-};
-
-struct hinic_cmd_hw_ci {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
-
- u8 dma_attr_off;
- u8 pending_limit;
- u8 coalesc_timer;
-
- u8 msix_en;
- u16 msix_entry_idx;
-
- u32 sq_id;
- u32 rsvd1;
- u64 ci_addr;
-};
-
-struct hinic_hwdev {
- struct hinic_hwif *hwif;
- struct msix_entry *msix_entries;
-
- struct hinic_aeqs aeqs;
- struct hinic_func_to_io func_to_io;
-
- struct hinic_cap nic_cap;
-};
-
-struct hinic_nic_cb {
- void (*handler)(void *handle, void *buf_in,
- u16 in_size, void *buf_out,
- u16 *out_size);
-
- void *handle;
- unsigned long cb_state;
-};
-
-struct hinic_pfhwdev {
- struct hinic_hwdev hwdev;
-
- struct hinic_pf_to_mgmt pf_to_mgmt;
-
- struct hinic_nic_cb nic_cb[HINIC_MGMT_NUM_MSG_CMD];
-};
-
-void hinic_hwdev_cb_register(struct hinic_hwdev *hwdev,
- enum hinic_mgmt_msg_cmd cmd, void *handle,
- void (*handler)(void *handle, void *buf_in,
- u16 in_size, void *buf_out,
- u16 *out_size));
-
-void hinic_hwdev_cb_unregister(struct hinic_hwdev *hwdev,
- enum hinic_mgmt_msg_cmd cmd);
-
-int hinic_port_msg_cmd(struct hinic_hwdev *hwdev, enum hinic_port_cmd cmd,
- void *buf_in, u16 in_size, void *buf_out,
- u16 *out_size);
-
-int hinic_hwdev_ifup(struct hinic_hwdev *hwdev);
-
-void hinic_hwdev_ifdown(struct hinic_hwdev *hwdev);
-
-struct hinic_hwdev *hinic_init_hwdev(struct pci_dev *pdev);
-
-void hinic_free_hwdev(struct hinic_hwdev *hwdev);
-
-int hinic_hwdev_num_qps(struct hinic_hwdev *hwdev);
-
-struct hinic_sq *hinic_hwdev_get_sq(struct hinic_hwdev *hwdev, int i);
-
-struct hinic_rq *hinic_hwdev_get_rq(struct hinic_hwdev *hwdev, int i);
-
-int hinic_hwdev_msix_cnt_set(struct hinic_hwdev *hwdev, u16 msix_index);
-
-int hinic_hwdev_msix_set(struct hinic_hwdev *hwdev, u16 msix_index,
- u8 pending_limit, u8 coalesc_timer,
- u8 lli_timer_cfg, u8 lli_credit_limit,
- u8 resend_timer);
-
-int hinic_hwdev_hw_ci_addr_set(struct hinic_hwdev *hwdev, struct hinic_sq *sq,
- u8 pending_limit, u8 coalesc_timer);
-
-#endif
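A sketch of consuming these declarations to watch link state; the handler body and the nic_dev cookie are illustrative, not taken from the driver:

#include <linux/types.h>

#include "hinic_hw_dev.h"

static void link_status_handler(void *handle, void *buf_in, u16 in_size,
				void *buf_out, u16 *out_size)
{
	/* decode buf_in and update the carrier state of `handle` here */
}

static void example_watch_link(struct hinic_hwdev *hwdev, void *nic_dev)
{
	hinic_hwdev_cb_register(hwdev, HINIC_MGMT_MSG_CMD_LINK_STATUS,
				nic_dev, link_status_handler);
}

static void example_unwatch_link(struct hinic_hwdev *hwdev)
{
	/* returns only after any in-flight invocation has drained */
	hinic_hwdev_cb_unregister(hwdev, HINIC_MGMT_MSG_CMD_LINK_STATUS);
}
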
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
deleted file mode 100644
index 7cb8b9b9..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
+++ /dev/null
@@ -1,886 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/errno.h>
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/workqueue.h>
-#include <linux/interrupt.h>
-#include <linux/slab.h>
-#include <linux/dma-mapping.h>
-#include <linux/log2.h>
-#include <asm/byteorder.h>
-#include <asm/barrier.h>
-
-#include "hinic_hw_csr.h"
-#include "hinic_hw_if.h"
-#include "hinic_hw_eqs.h"
-
-#define HINIC_EQS_WQ_NAME "hinic_eqs"
-
-#define GET_EQ_NUM_PAGES(eq, pg_size) \
- (ALIGN((eq)->q_len * (eq)->elem_size, pg_size) / (pg_size))
-
-#define GET_EQ_NUM_ELEMS_IN_PG(eq, pg_size) ((pg_size) / (eq)->elem_size)
-
-#define EQ_CONS_IDX_REG_ADDR(eq) (((eq)->type == HINIC_AEQ) ? \
- HINIC_CSR_AEQ_CONS_IDX_ADDR((eq)->q_id) : \
- HINIC_CSR_CEQ_CONS_IDX_ADDR((eq)->q_id))
-
-#define EQ_PROD_IDX_REG_ADDR(eq) (((eq)->type == HINIC_AEQ) ? \
- HINIC_CSR_AEQ_PROD_IDX_ADDR((eq)->q_id) : \
- HINIC_CSR_CEQ_PROD_IDX_ADDR((eq)->q_id))
-
-#define EQ_HI_PHYS_ADDR_REG(eq, pg_num) (((eq)->type == HINIC_AEQ) ? \
- HINIC_CSR_AEQ_HI_PHYS_ADDR_REG((eq)->q_id, pg_num) : \
- HINIC_CSR_CEQ_HI_PHYS_ADDR_REG((eq)->q_id, pg_num))
-
-#define EQ_LO_PHYS_ADDR_REG(eq, pg_num) (((eq)->type == HINIC_AEQ) ? \
- HINIC_CSR_AEQ_LO_PHYS_ADDR_REG((eq)->q_id, pg_num) : \
- HINIC_CSR_CEQ_LO_PHYS_ADDR_REG((eq)->q_id, pg_num))
-
-#define GET_EQ_ELEMENT(eq, idx) \
- ((eq)->virt_addr[(idx) / (eq)->num_elem_in_pg] + \
- (((idx) & ((eq)->num_elem_in_pg - 1)) * (eq)->elem_size))
-
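A worked example of GET_EQ_ELEMENT with the driver's defaults (4 KiB pages, 64-byte AEQ elements): num_elem_in_pg is 4096 / 64 = 64, so element 70 resolves to virt_addr[70 / 64] = page 1 at byte offset (70 & 63) * 64 = 384. The power-of-two check in init_eq() below is what makes the mask form of the modulo valid.
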
-#define GET_AEQ_ELEM(eq, idx) ((struct hinic_aeq_elem *) \
- GET_EQ_ELEMENT(eq, idx))
-
-#define GET_CEQ_ELEM(eq, idx) ((u32 *) \
- GET_EQ_ELEMENT(eq, idx))
-
-#define GET_CURR_AEQ_ELEM(eq) GET_AEQ_ELEM(eq, (eq)->cons_idx)
-
-#define GET_CURR_CEQ_ELEM(eq) GET_CEQ_ELEM(eq, (eq)->cons_idx)
-
-#define PAGE_IN_4K(page_size) ((page_size) >> 12)
-#define EQ_SET_HW_PAGE_SIZE_VAL(eq) (ilog2(PAGE_IN_4K((eq)->page_size)))
-
-#define ELEMENT_SIZE_IN_32B(eq) (((eq)->elem_size) >> 5)
-#define EQ_SET_HW_ELEM_SIZE_VAL(eq) (ilog2(ELEMENT_SIZE_IN_32B(eq)))
-
-#define EQ_MAX_PAGES 8
-
-#define CEQE_TYPE_SHIFT 23
-#define CEQE_TYPE_MASK 0x7
-
-#define CEQE_TYPE(ceqe) (((ceqe) >> CEQE_TYPE_SHIFT) & \
- CEQE_TYPE_MASK)
-
-#define CEQE_DATA_MASK 0x3FFFFFF
-#define CEQE_DATA(ceqe) ((ceqe) & CEQE_DATA_MASK)
-
-#define aeq_to_aeqs(eq) \
- container_of((eq) - (eq)->q_id, struct hinic_aeqs, aeq[0])
-
-#define ceq_to_ceqs(eq) \
- container_of((eq) - (eq)->q_id, struct hinic_ceqs, ceq[0])
-
-#define work_to_aeq_work(work) \
- container_of(work, struct hinic_eq_work, work)
-
-#define DMA_ATTR_AEQ_DEFAULT 0
-#define DMA_ATTR_CEQ_DEFAULT 0
-
-/* No coalescence */
-#define THRESH_CEQ_DEFAULT 0
-
-enum eq_int_mode {
- EQ_INT_MODE_ARMED,
- EQ_INT_MODE_ALWAYS
-};
-
-enum eq_arm_state {
- EQ_NOT_ARMED,
- EQ_ARMED
-};
-
-/**
- * hinic_aeq_register_hw_cb - register AEQ callback for specific event
- * @aeqs: pointer to Async eqs of the chip
- * @event: aeq event to register the callback for
- * @handle: private data that will be passed to the callback
- * @hwe_handler: callback function
- **/
-void hinic_aeq_register_hw_cb(struct hinic_aeqs *aeqs,
- enum hinic_aeq_type event, void *handle,
- void (*hwe_handler)(void *handle, void *data,
- u8 size))
-{
- struct hinic_hw_event_cb *hwe_cb = &aeqs->hwe_cb[event];
-
- hwe_cb->hwe_handler = hwe_handler;
- hwe_cb->handle = handle;
- hwe_cb->hwe_state = HINIC_EQE_ENABLED;
-}
-
-/**
- * hinic_aeq_unregister_hw_cb - unregister the AEQ callback for specific event
- * @aeqs: pointer to Async eqs of the chip
- * @event: aeq event to unregister the callback for
- **/
-void hinic_aeq_unregister_hw_cb(struct hinic_aeqs *aeqs,
- enum hinic_aeq_type event)
-{
- struct hinic_hw_event_cb *hwe_cb = &aeqs->hwe_cb[event];
-
- hwe_cb->hwe_state &= ~HINIC_EQE_ENABLED;
-
- while (hwe_cb->hwe_state & HINIC_EQE_RUNNING)
- schedule();
-
- hwe_cb->hwe_handler = NULL;
-}
-
-/**
- * hinic_ceq_register_cb - register CEQ callback for specific event
- * @ceqs: pointer to Completion eqs part of the chip
- * @event: ceq event to register the callback for
- * @handle: private data that will be passed to the callback
- * @handler: callback function
- **/
-void hinic_ceq_register_cb(struct hinic_ceqs *ceqs,
- enum hinic_ceq_type event, void *handle,
- void (*handler)(void *handle, u32 ceqe_data))
-{
- struct hinic_ceq_cb *ceq_cb = &ceqs->ceq_cb[event];
-
- ceq_cb->handler = handler;
- ceq_cb->handle = handle;
- ceq_cb->ceqe_state = HINIC_EQE_ENABLED;
-}
-
-/**
- * hinic_ceq_unregister_cb - unregister the CEQ callback for specific event
- * @ceqs: pointer to Completion eqs part of the chip
- * @event: ceq event to unregister the callback for
- **/
-void hinic_ceq_unregister_cb(struct hinic_ceqs *ceqs,
- enum hinic_ceq_type event)
-{
- struct hinic_ceq_cb *ceq_cb = &ceqs->ceq_cb[event];
-
- ceq_cb->ceqe_state &= ~HINIC_EQE_ENABLED;
-
- while (ceq_cb->ceqe_state & HINIC_EQE_RUNNING)
- schedule();
-
- ceq_cb->handler = NULL;
-}
-
-static u8 eq_cons_idx_checksum_set(u32 val)
-{
- u8 checksum = 0;
- int idx;
-
- for (idx = 0; idx < 32; idx += 4)
- checksum ^= ((val >> idx) & 0xF);
-
- return (checksum & 0xF);
-}
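The consumer-index register protects itself with a 4-bit XOR over all eight nibbles of the value. A standalone re-implementation to see the folding (userspace, illustrative only):

#include <stdint.h>
#include <stdio.h>

static uint8_t xor_chksum(uint32_t val)
{
	uint8_t checksum = 0;
	int idx;

	/* fold each 4-bit group of the 32-bit value together */
	for (idx = 0; idx < 32; idx += 4)
		checksum ^= (val >> idx) & 0xF;

	return checksum & 0xF;
}

int main(void)
{
	/* 1^2^3^4^5^6^7^8 = 0x8 */
	printf("0x%X\n", xor_chksum(0x12345678));
	return 0;
}
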
-
-/**
- * eq_update_ci - update the HW cons idx of event queue
- * @eq: the event queue to update the cons idx for
- **/
-static void eq_update_ci(struct hinic_eq *eq)
-{
- u32 val, addr = EQ_CONS_IDX_REG_ADDR(eq);
-
- /* Read Modify Write */
- val = hinic_hwif_read_reg(eq->hwif, addr);
-
- val = HINIC_EQ_CI_CLEAR(val, IDX) &
- HINIC_EQ_CI_CLEAR(val, WRAPPED) &
- HINIC_EQ_CI_CLEAR(val, INT_ARMED) &
- HINIC_EQ_CI_CLEAR(val, XOR_CHKSUM);
-
- val |= HINIC_EQ_CI_SET(eq->cons_idx, IDX) |
- HINIC_EQ_CI_SET(eq->wrapped, WRAPPED) |
- HINIC_EQ_CI_SET(EQ_ARMED, INT_ARMED);
-
- val |= HINIC_EQ_CI_SET(eq_cons_idx_checksum_set(val), XOR_CHKSUM);
-
- hinic_hwif_write_reg(eq->hwif, addr, val);
-}
-
-/**
- * aeq_irq_handler - handler for the AEQ event
- * @eq: the Async Event Queue that received the event
- **/
-static void aeq_irq_handler(struct hinic_eq *eq)
-{
- struct hinic_aeqs *aeqs = aeq_to_aeqs(eq);
- struct hinic_hwif *hwif = aeqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_aeq_elem *aeqe_curr;
- struct hinic_hw_event_cb *hwe_cb;
- enum hinic_aeq_type event;
- unsigned long eqe_state;
- u32 aeqe_desc;
- int i, size;
-
- for (i = 0; i < eq->q_len; i++) {
- aeqe_curr = GET_CURR_AEQ_ELEM(eq);
-
- /* Data in HW is in Big endian Format */
- aeqe_desc = be32_to_cpu(aeqe_curr->desc);
-
-		/* HW toggles the wrapped bit when it adds an eq element */
- if (HINIC_EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED) == eq->wrapped)
- break;
-
- event = HINIC_EQ_ELEM_DESC_GET(aeqe_desc, TYPE);
- if (event >= HINIC_MAX_AEQ_EVENTS) {
- dev_err(&pdev->dev, "Unknown AEQ Event %d\n", event);
- return;
- }
-
- if (!HINIC_EQ_ELEM_DESC_GET(aeqe_desc, SRC)) {
- hwe_cb = &aeqs->hwe_cb[event];
-
- size = HINIC_EQ_ELEM_DESC_GET(aeqe_desc, SIZE);
-
- eqe_state = cmpxchg(&hwe_cb->hwe_state,
- HINIC_EQE_ENABLED,
- HINIC_EQE_ENABLED |
- HINIC_EQE_RUNNING);
- if ((eqe_state == HINIC_EQE_ENABLED) &&
- (hwe_cb->hwe_handler))
- hwe_cb->hwe_handler(hwe_cb->handle,
- aeqe_curr->data, size);
- else
- dev_err(&pdev->dev, "Unhandled AEQ Event %d\n",
- event);
-
- hwe_cb->hwe_state &= ~HINIC_EQE_RUNNING;
- }
-
- eq->cons_idx++;
-
- if (eq->cons_idx == eq->q_len) {
- eq->cons_idx = 0;
- eq->wrapped = !eq->wrapped;
- }
- }
-}
-
-/**
- * ceq_event_handler - handler for the ceq events
- * @ceqs: ceqs part of the chip
- * @ceqe: ceq element that describes the event
- **/
-static void ceq_event_handler(struct hinic_ceqs *ceqs, u32 ceqe)
-{
- struct hinic_hwif *hwif = ceqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_ceq_cb *ceq_cb;
- enum hinic_ceq_type event;
- unsigned long eqe_state;
-
- event = CEQE_TYPE(ceqe);
- if (event >= HINIC_MAX_CEQ_EVENTS) {
- dev_err(&pdev->dev, "Unknown CEQ event, event = %d\n", event);
- return;
- }
-
- ceq_cb = &ceqs->ceq_cb[event];
-
- eqe_state = cmpxchg(&ceq_cb->ceqe_state,
- HINIC_EQE_ENABLED,
- HINIC_EQE_ENABLED | HINIC_EQE_RUNNING);
-
- if ((eqe_state == HINIC_EQE_ENABLED) && (ceq_cb->handler))
- ceq_cb->handler(ceq_cb->handle, CEQE_DATA(ceqe));
- else
- dev_err(&pdev->dev, "Unhandled CEQ Event %d\n", event);
-
- ceq_cb->ceqe_state &= ~HINIC_EQE_RUNNING;
-}
-
-/**
- * ceq_irq_handler - handler for the CEQ event
- * @eq: the Completion Event Queue that received the event
- **/
-static void ceq_irq_handler(struct hinic_eq *eq)
-{
- struct hinic_ceqs *ceqs = ceq_to_ceqs(eq);
- u32 ceqe;
- int i;
-
- for (i = 0; i < eq->q_len; i++) {
- ceqe = *(GET_CURR_CEQ_ELEM(eq));
-
- /* Data in HW is in Big endian Format */
- ceqe = be32_to_cpu(ceqe);
-
- /* HW toggles the wrapped bit, when it adds eq element event */
-		/* HW toggles the wrapped bit when it adds an eq element */
- break;
-
- ceq_event_handler(ceqs, ceqe);
-
- eq->cons_idx++;
-
- if (eq->cons_idx == eq->q_len) {
- eq->cons_idx = 0;
- eq->wrapped = !eq->wrapped;
- }
- }
-}
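Both IRQ handlers poll the ring with the same phase-bit discipline: every element starts out carrying the consumer's wrapped value, the producer flips the bit each lap, so a bit equal to the expected phase means "not yet written". A minimal userspace model of that test (descriptor layout reduced to a single phase bit, purely illustrative):

#include <stdbool.h>
#include <stdint.h>

struct ring {
	const uint32_t *elems;	/* bit 31 is the producer's phase bit */
	int q_len;
	int cons_idx;
	int wrapped;		/* phase that still means "unwritten" */
};

static bool consume_one(struct ring *r, uint32_t *out)
{
	uint32_t e = r->elems[r->cons_idx];

	/* the same test as the break in the handlers above */
	if ((int)(e >> 31) == r->wrapped)
		return false;

	*out = e & 0x7FFFFFFF;

	if (++r->cons_idx == r->q_len) {
		r->cons_idx = 0;
		r->wrapped = !r->wrapped;
	}
	return true;
}
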
-
-/**
- * eq_irq_handler - handler for the EQ event
- * @data: the Event Queue that received the event
- **/
-static void eq_irq_handler(void *data)
-{
- struct hinic_eq *eq = data;
-
- if (eq->type == HINIC_AEQ)
- aeq_irq_handler(eq);
- else if (eq->type == HINIC_CEQ)
- ceq_irq_handler(eq);
-
- eq_update_ci(eq);
-}
-
-/**
- * eq_irq_work - the work of the EQ that received the event
- * @work: the work struct that is associated with the EQ
- **/
-static void eq_irq_work(struct work_struct *work)
-{
- struct hinic_eq_work *aeq_work = work_to_aeq_work(work);
- struct hinic_eq *aeq;
-
- aeq = aeq_work->data;
- eq_irq_handler(aeq);
-}
-
-/**
- * ceq_tasklet - the tasklet of the EQ that received the event
- * @ceq_data: the completion event queue, passed as unsigned long
- **/
-static void ceq_tasklet(unsigned long ceq_data)
-{
- struct hinic_eq *ceq = (struct hinic_eq *)ceq_data;
-
- eq_irq_handler(ceq);
-}
-
-/**
- * aeq_interrupt - aeq interrupt handler
- * @irq: irq number
- * @data: the Async Event Queue that collected the event
- **/
-static irqreturn_t aeq_interrupt(int irq, void *data)
-{
- struct hinic_eq_work *aeq_work;
- struct hinic_eq *aeq = data;
- struct hinic_aeqs *aeqs;
-
- /* clear resend timer cnt register */
- hinic_msix_attr_cnt_clear(aeq->hwif, aeq->msix_entry.entry);
-
- aeq_work = &aeq->aeq_work;
- aeq_work->data = aeq;
-
- aeqs = aeq_to_aeqs(aeq);
- queue_work(aeqs->workq, &aeq_work->work);
-
- return IRQ_HANDLED;
-}
-
-/**
- * ceq_interrupt - ceq interrupt handler
- * @irq: irq number
- * @data: the Completion Event Queue that collected the event
- **/
-static irqreturn_t ceq_interrupt(int irq, void *data)
-{
- struct hinic_eq *ceq = data;
-
- /* clear resend timer cnt register */
- hinic_msix_attr_cnt_clear(ceq->hwif, ceq->msix_entry.entry);
-
- tasklet_schedule(&ceq->ceq_tasklet);
-
- return IRQ_HANDLED;
-}
-
-static void set_ctrl0(struct hinic_eq *eq)
-{
- struct msix_entry *msix_entry = &eq->msix_entry;
- enum hinic_eq_type type = eq->type;
- u32 addr, val, ctrl0;
-
- if (type == HINIC_AEQ) {
- /* RMW Ctrl0 */
- addr = HINIC_CSR_AEQ_CTRL_0_ADDR(eq->q_id);
-
- val = hinic_hwif_read_reg(eq->hwif, addr);
-
- val = HINIC_AEQ_CTRL_0_CLEAR(val, INT_IDX) &
- HINIC_AEQ_CTRL_0_CLEAR(val, DMA_ATTR) &
- HINIC_AEQ_CTRL_0_CLEAR(val, PCI_INTF_IDX) &
- HINIC_AEQ_CTRL_0_CLEAR(val, INT_MODE);
-
- ctrl0 = HINIC_AEQ_CTRL_0_SET(msix_entry->entry, INT_IDX) |
- HINIC_AEQ_CTRL_0_SET(DMA_ATTR_AEQ_DEFAULT, DMA_ATTR) |
- HINIC_AEQ_CTRL_0_SET(HINIC_HWIF_PCI_INTF(eq->hwif),
- PCI_INTF_IDX) |
- HINIC_AEQ_CTRL_0_SET(EQ_INT_MODE_ARMED, INT_MODE);
-
- val |= ctrl0;
-
- hinic_hwif_write_reg(eq->hwif, addr, val);
- } else if (type == HINIC_CEQ) {
- /* RMW Ctrl0 */
- addr = HINIC_CSR_CEQ_CTRL_0_ADDR(eq->q_id);
-
- val = hinic_hwif_read_reg(eq->hwif, addr);
-
- val = HINIC_CEQ_CTRL_0_CLEAR(val, INTR_IDX) &
- HINIC_CEQ_CTRL_0_CLEAR(val, DMA_ATTR) &
- HINIC_CEQ_CTRL_0_CLEAR(val, KICK_THRESH) &
- HINIC_CEQ_CTRL_0_CLEAR(val, PCI_INTF_IDX) &
- HINIC_CEQ_CTRL_0_CLEAR(val, INTR_MODE);
-
- ctrl0 = HINIC_CEQ_CTRL_0_SET(msix_entry->entry, INTR_IDX) |
- HINIC_CEQ_CTRL_0_SET(DMA_ATTR_CEQ_DEFAULT, DMA_ATTR) |
- HINIC_CEQ_CTRL_0_SET(THRESH_CEQ_DEFAULT, KICK_THRESH) |
- HINIC_CEQ_CTRL_0_SET(HINIC_HWIF_PCI_INTF(eq->hwif),
- PCI_INTF_IDX) |
- HINIC_CEQ_CTRL_0_SET(EQ_INT_MODE_ARMED, INTR_MODE);
-
- val |= ctrl0;
-
- hinic_hwif_write_reg(eq->hwif, addr, val);
- }
-}
-
-static void set_ctrl1(struct hinic_eq *eq)
-{
- enum hinic_eq_type type = eq->type;
- u32 page_size_val, elem_size;
- u32 addr, val, ctrl1;
-
- if (type == HINIC_AEQ) {
- /* RMW Ctrl1 */
- addr = HINIC_CSR_AEQ_CTRL_1_ADDR(eq->q_id);
-
- page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
- elem_size = EQ_SET_HW_ELEM_SIZE_VAL(eq);
-
- val = hinic_hwif_read_reg(eq->hwif, addr);
-
- val = HINIC_AEQ_CTRL_1_CLEAR(val, LEN) &
- HINIC_AEQ_CTRL_1_CLEAR(val, ELEM_SIZE) &
- HINIC_AEQ_CTRL_1_CLEAR(val, PAGE_SIZE);
-
- ctrl1 = HINIC_AEQ_CTRL_1_SET(eq->q_len, LEN) |
- HINIC_AEQ_CTRL_1_SET(elem_size, ELEM_SIZE) |
- HINIC_AEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
-
- val |= ctrl1;
-
- hinic_hwif_write_reg(eq->hwif, addr, val);
- } else if (type == HINIC_CEQ) {
- /* RMW Ctrl1 */
- addr = HINIC_CSR_CEQ_CTRL_1_ADDR(eq->q_id);
-
- page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
-
- val = hinic_hwif_read_reg(eq->hwif, addr);
-
- val = HINIC_CEQ_CTRL_1_CLEAR(val, LEN) &
- HINIC_CEQ_CTRL_1_CLEAR(val, PAGE_SIZE);
-
- ctrl1 = HINIC_CEQ_CTRL_1_SET(eq->q_len, LEN) |
- HINIC_CEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
-
- val |= ctrl1;
-
- hinic_hwif_write_reg(eq->hwif, addr, val);
- }
-}
-
-/**
- * set_eq_ctrls - set the ctrl registers of the event queue
- * @eq: the Event Queue for setting
- **/
-static void set_eq_ctrls(struct hinic_eq *eq)
-{
- set_ctrl0(eq);
- set_ctrl1(eq);
-}
-
-/**
- * aeq_elements_init - initialize all the elements in the aeq
- * @eq: the Async Event Queue
- * @init_val: value to initialize the elements with
- **/
-static void aeq_elements_init(struct hinic_eq *eq, u32 init_val)
-{
- struct hinic_aeq_elem *aeqe;
- int i;
-
- for (i = 0; i < eq->q_len; i++) {
- aeqe = GET_AEQ_ELEM(eq, i);
- aeqe->desc = cpu_to_be32(init_val);
- }
-
-	wmb(); /* Write the initialization values */
-}
-
-/**
- * ceq_elements_init - Initialize all the elements in the ceq
- * @eq: the event queue
- * @init_val: value to initialize the elements with
- **/
-static void ceq_elements_init(struct hinic_eq *eq, u32 init_val)
-{
- u32 *ceqe;
- int i;
-
- for (i = 0; i < eq->q_len; i++) {
- ceqe = GET_CEQ_ELEM(eq, i);
- *(ceqe) = cpu_to_be32(init_val);
- }
-
-	wmb(); /* Write the initialization values */
-}
-
-/**
- * alloc_eq_pages - allocate the pages for the queue
- * @eq: the event queue
- *
- * Return 0 - Success, Negative - Failure
- **/
-static int alloc_eq_pages(struct hinic_eq *eq)
-{
- struct hinic_hwif *hwif = eq->hwif;
- struct pci_dev *pdev = hwif->pdev;
- u32 init_val, addr, val;
- size_t addr_size;
- int err, pg;
-
- addr_size = eq->num_pages * sizeof(*eq->dma_addr);
- eq->dma_addr = devm_kzalloc(&pdev->dev, addr_size, GFP_KERNEL);
- if (!eq->dma_addr)
- return -ENOMEM;
-
- addr_size = eq->num_pages * sizeof(*eq->virt_addr);
- eq->virt_addr = devm_kzalloc(&pdev->dev, addr_size, GFP_KERNEL);
- if (!eq->virt_addr) {
- err = -ENOMEM;
- goto err_virt_addr_alloc;
- }
-
- for (pg = 0; pg < eq->num_pages; pg++) {
- eq->virt_addr[pg] = dma_zalloc_coherent(&pdev->dev,
- eq->page_size,
- &eq->dma_addr[pg],
- GFP_KERNEL);
- if (!eq->virt_addr[pg]) {
- err = -ENOMEM;
- goto err_dma_alloc;
- }
-
- addr = EQ_HI_PHYS_ADDR_REG(eq, pg);
- val = upper_32_bits(eq->dma_addr[pg]);
-
- hinic_hwif_write_reg(hwif, addr, val);
-
- addr = EQ_LO_PHYS_ADDR_REG(eq, pg);
- val = lower_32_bits(eq->dma_addr[pg]);
-
- hinic_hwif_write_reg(hwif, addr, val);
- }
-
- init_val = HINIC_EQ_ELEM_DESC_SET(eq->wrapped, WRAPPED);
-
- if (eq->type == HINIC_AEQ)
- aeq_elements_init(eq, init_val);
- else if (eq->type == HINIC_CEQ)
- ceq_elements_init(eq, init_val);
-
- return 0;
-
-err_dma_alloc:
- while (--pg >= 0)
- dma_free_coherent(&pdev->dev, eq->page_size,
- eq->virt_addr[pg],
- eq->dma_addr[pg]);
-
- devm_kfree(&pdev->dev, eq->virt_addr);
-
-err_virt_addr_alloc:
- devm_kfree(&pdev->dev, eq->dma_addr);
- return err;
-}
-
-/**
- * free_eq_pages - free the pages of the queue
- * @eq: the Event Queue
- **/
-static void free_eq_pages(struct hinic_eq *eq)
-{
- struct hinic_hwif *hwif = eq->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int pg;
-
- for (pg = 0; pg < eq->num_pages; pg++)
- dma_free_coherent(&pdev->dev, eq->page_size,
- eq->virt_addr[pg],
- eq->dma_addr[pg]);
-
- devm_kfree(&pdev->dev, eq->virt_addr);
- devm_kfree(&pdev->dev, eq->dma_addr);
-}
-
-/**
- * init_eq - initialize Event Queue
- * @eq: the event queue
- * @hwif: the HW interface of a PCI function device
- * @type: the type of the event queue, aeq or ceq
- * @q_id: Queue id number
- * @q_len: the number of EQ elements
- * @page_size: the page size of the pages in the event queue
- * @entry: msix entry associated with the event queue
- *
- * Return 0 - Success, Negative - Failure
- **/
-static int init_eq(struct hinic_eq *eq, struct hinic_hwif *hwif,
- enum hinic_eq_type type, int q_id, u32 q_len, u32 page_size,
- struct msix_entry entry)
-{
- struct pci_dev *pdev = hwif->pdev;
- int err;
-
- eq->hwif = hwif;
- eq->type = type;
- eq->q_id = q_id;
- eq->q_len = q_len;
- eq->page_size = page_size;
-
- /* Clear PI and CI, also clear the ARM bit */
- hinic_hwif_write_reg(eq->hwif, EQ_CONS_IDX_REG_ADDR(eq), 0);
- hinic_hwif_write_reg(eq->hwif, EQ_PROD_IDX_REG_ADDR(eq), 0);
-
- eq->cons_idx = 0;
- eq->wrapped = 0;
-
- if (type == HINIC_AEQ) {
- eq->elem_size = HINIC_AEQE_SIZE;
- } else if (type == HINIC_CEQ) {
- eq->elem_size = HINIC_CEQE_SIZE;
- } else {
- dev_err(&pdev->dev, "Invalid EQ type\n");
- return -EINVAL;
- }
-
- eq->num_pages = GET_EQ_NUM_PAGES(eq, page_size);
- eq->num_elem_in_pg = GET_EQ_NUM_ELEMS_IN_PG(eq, page_size);
-
- eq->msix_entry = entry;
-
- if (eq->num_elem_in_pg & (eq->num_elem_in_pg - 1)) {
- dev_err(&pdev->dev, "num elements in eq page != power of 2\n");
- return -EINVAL;
- }
-
- if (eq->num_pages > EQ_MAX_PAGES) {
- dev_err(&pdev->dev, "too many pages for eq\n");
- return -EINVAL;
- }
-
- set_eq_ctrls(eq);
- eq_update_ci(eq);
-
- err = alloc_eq_pages(eq);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate pages for eq\n");
- return err;
- }
-
- if (type == HINIC_AEQ) {
- struct hinic_eq_work *aeq_work = &eq->aeq_work;
-
- INIT_WORK(&aeq_work->work, eq_irq_work);
- } else if (type == HINIC_CEQ) {
- tasklet_init(&eq->ceq_tasklet, ceq_tasklet,
- (unsigned long)eq);
- }
-
- /* set the attributes of the msix entry */
- hinic_msix_attr_set(eq->hwif, eq->msix_entry.entry,
- HINIC_EQ_MSIX_PENDING_LIMIT_DEFAULT,
- HINIC_EQ_MSIX_COALESC_TIMER_DEFAULT,
- HINIC_EQ_MSIX_LLI_TIMER_DEFAULT,
- HINIC_EQ_MSIX_LLI_CREDIT_LIMIT_DEFAULT,
- HINIC_EQ_MSIX_RESEND_TIMER_DEFAULT);
-
- if (type == HINIC_AEQ)
- err = request_irq(entry.vector, aeq_interrupt, 0,
- "hinic_aeq", eq);
- else if (type == HINIC_CEQ)
- err = request_irq(entry.vector, ceq_interrupt, 0,
- "hinic_ceq", eq);
-
- if (err) {
- dev_err(&pdev->dev, "Failed to request irq for the EQ\n");
- goto err_req_irq;
- }
-
- return 0;
-
-err_req_irq:
- free_eq_pages(eq);
- return err;
-}
-
-/**
- * remove_eq - remove Event Queue
- * @eq: the event queue
- **/
-static void remove_eq(struct hinic_eq *eq)
-{
- struct msix_entry *entry = &eq->msix_entry;
-
- free_irq(entry->vector, eq);
-
- if (eq->type == HINIC_AEQ) {
- struct hinic_eq_work *aeq_work = &eq->aeq_work;
-
- cancel_work_sync(&aeq_work->work);
- } else if (eq->type == HINIC_CEQ) {
- tasklet_kill(&eq->ceq_tasklet);
- }
-
- free_eq_pages(eq);
-}
-
-/**
- * hinic_aeqs_init - initialize all the aeqs
- * @aeqs: pointer to Async eqs of the chip
- * @hwif: the HW interface of a PCI function device
- * @num_aeqs: number of AEQs
- * @q_len: number of EQ elements
- * @page_size: the page size of the pages in the event queue
- * @msix_entries: msix entries associated with the event queues
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_aeqs_init(struct hinic_aeqs *aeqs, struct hinic_hwif *hwif,
- int num_aeqs, u32 q_len, u32 page_size,
- struct msix_entry *msix_entries)
-{
- struct pci_dev *pdev = hwif->pdev;
- int err, i, q_id;
-
- aeqs->workq = create_singlethread_workqueue(HINIC_EQS_WQ_NAME);
- if (!aeqs->workq)
- return -ENOMEM;
-
- aeqs->hwif = hwif;
- aeqs->num_aeqs = num_aeqs;
-
- for (q_id = 0; q_id < num_aeqs; q_id++) {
- err = init_eq(&aeqs->aeq[q_id], hwif, HINIC_AEQ, q_id, q_len,
- page_size, msix_entries[q_id]);
- if (err) {
- dev_err(&pdev->dev, "Failed to init aeq %d\n", q_id);
- goto err_init_aeq;
- }
- }
-
- return 0;
-
-err_init_aeq:
- for (i = 0; i < q_id; i++)
- remove_eq(&aeqs->aeq[i]);
-
- destroy_workqueue(aeqs->workq);
- return err;
-}
-
-/**
- * hinic_aeqs_free - free all the aeqs
- * @aeqs: pointer to Async eqs of the chip
- **/
-void hinic_aeqs_free(struct hinic_aeqs *aeqs)
-{
- int q_id;
-
-	for (q_id = 0; q_id < aeqs->num_aeqs; q_id++)
- remove_eq(&aeqs->aeq[q_id]);
-
- destroy_workqueue(aeqs->workq);
-}
-
-/**
- * hinic_ceqs_init - init all the ceqs
- * @ceqs: ceqs part of the chip
- * @hwif: the hardware interface of a pci function device
- * @num_ceqs: number of CEQs
- * @q_len: number of EQ elements
- * @page_size: the page size of the event queue
- * @msix_entries: msix entries associated with the event queues
- *
- * Return 0 - Success, Negative - Failure
- **/
-int hinic_ceqs_init(struct hinic_ceqs *ceqs, struct hinic_hwif *hwif,
- int num_ceqs, u32 q_len, u32 page_size,
- struct msix_entry *msix_entries)
-{
- struct pci_dev *pdev = hwif->pdev;
- int i, q_id, err;
-
- ceqs->hwif = hwif;
- ceqs->num_ceqs = num_ceqs;
-
- for (q_id = 0; q_id < num_ceqs; q_id++) {
- err = init_eq(&ceqs->ceq[q_id], hwif, HINIC_CEQ, q_id, q_len,
- page_size, msix_entries[q_id]);
- if (err) {
- dev_err(&pdev->dev, "Failed to init ceq %d\n", q_id);
- goto err_init_ceq;
- }
- }
-
- return 0;
-
-err_init_ceq:
- for (i = 0; i < q_id; i++)
- remove_eq(&ceqs->ceq[i]);
-
- return err;
-}
-
-/**
- * hinic_ceqs_free - free all the ceqs
- * @ceqs: ceqs part of the chip
- **/
-void hinic_ceqs_free(struct hinic_ceqs *ceqs)
-{
- int q_id;
-
- for (q_id = 0; q_id < ceqs->num_ceqs; q_id++)
- remove_eq(&ceqs->ceq[q_id]);
-}
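A sketch pairing init/register/free from this API, as hinic_init_hwdev() does earlier in the patch; mgmt_aeqe_handler and the cookie argument are illustrative:

#include "hinic_hw_dev.h"
#include "hinic_hw_eqs.h"

static void mgmt_aeqe_handler(void *handle, void *data, u8 size)
{
	/* decode the management message in `data` here */
}

static int example_aeqs_bringup(struct hinic_hwdev *hwdev, void *cookie)
{
	struct hinic_hwif *hwif = hwdev->hwif;
	int err;

	err = hinic_aeqs_init(&hwdev->aeqs, hwif,
			      HINIC_HWIF_NUM_AEQS(hwif),
			      HINIC_DEFAULT_AEQ_LEN, HINIC_EQ_PAGE_SIZE,
			      hwdev->msix_entries);
	if (err)
		return err;

	hinic_aeq_register_hw_cb(&hwdev->aeqs, HINIC_MSG_FROM_MGMT_CPU,
				 cookie, mgmt_aeqe_handler);
	return 0;
}

static void example_aeqs_teardown(struct hinic_hwdev *hwdev)
{
	hinic_aeq_unregister_hw_cb(&hwdev->aeqs, HINIC_MSG_FROM_MGMT_CPU);
	hinic_aeqs_free(&hwdev->aeqs);
}
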
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h
deleted file mode 100644
index ecb9c2b..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.h
+++ /dev/null
@@ -1,265 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_HW_EQS_H
-#define HINIC_HW_EQS_H
-
-#include <linux/types.h>
-#include <linux/workqueue.h>
-#include <linux/pci.h>
-#include <linux/sizes.h>
-#include <linux/bitops.h>
-#include <linux/interrupt.h>
-
-#include "hinic_hw_if.h"
-
-#define HINIC_AEQ_CTRL_0_INT_IDX_SHIFT 0
-#define HINIC_AEQ_CTRL_0_DMA_ATTR_SHIFT 12
-#define HINIC_AEQ_CTRL_0_PCI_INTF_IDX_SHIFT 20
-#define HINIC_AEQ_CTRL_0_INT_MODE_SHIFT 31
-
-#define HINIC_AEQ_CTRL_0_INT_IDX_MASK 0x3FF
-#define HINIC_AEQ_CTRL_0_DMA_ATTR_MASK 0x3F
-#define HINIC_AEQ_CTRL_0_PCI_INTF_IDX_MASK 0x3
-#define HINIC_AEQ_CTRL_0_INT_MODE_MASK 0x1
-
-#define HINIC_AEQ_CTRL_0_SET(val, member) \
- (((u32)(val) & HINIC_AEQ_CTRL_0_##member##_MASK) << \
- HINIC_AEQ_CTRL_0_##member##_SHIFT)
-
-#define HINIC_AEQ_CTRL_0_CLEAR(val, member) \
- ((val) & (~(HINIC_AEQ_CTRL_0_##member##_MASK \
- << HINIC_AEQ_CTRL_0_##member##_SHIFT)))
-
-#define HINIC_AEQ_CTRL_1_LEN_SHIFT 0
-#define HINIC_AEQ_CTRL_1_ELEM_SIZE_SHIFT 24
-#define HINIC_AEQ_CTRL_1_PAGE_SIZE_SHIFT 28
-
-#define HINIC_AEQ_CTRL_1_LEN_MASK 0x1FFFFF
-#define HINIC_AEQ_CTRL_1_ELEM_SIZE_MASK 0x3
-#define HINIC_AEQ_CTRL_1_PAGE_SIZE_MASK 0xF
-
-#define HINIC_AEQ_CTRL_1_SET(val, member) \
- (((u32)(val) & HINIC_AEQ_CTRL_1_##member##_MASK) << \
- HINIC_AEQ_CTRL_1_##member##_SHIFT)
-
-#define HINIC_AEQ_CTRL_1_CLEAR(val, member) \
- ((val) & (~(HINIC_AEQ_CTRL_1_##member##_MASK \
- << HINIC_AEQ_CTRL_1_##member##_SHIFT)))
-
-#define HINIC_CEQ_CTRL_0_INTR_IDX_SHIFT 0
-#define HINIC_CEQ_CTRL_0_DMA_ATTR_SHIFT 12
-#define HINIC_CEQ_CTRL_0_KICK_THRESH_SHIFT 20
-#define HINIC_CEQ_CTRL_0_PCI_INTF_IDX_SHIFT 24
-#define HINIC_CEQ_CTRL_0_INTR_MODE_SHIFT 31
-
-#define HINIC_CEQ_CTRL_0_INTR_IDX_MASK 0x3FF
-#define HINIC_CEQ_CTRL_0_DMA_ATTR_MASK 0x3F
-#define HINIC_CEQ_CTRL_0_KICK_THRESH_MASK 0xF
-#define HINIC_CEQ_CTRL_0_PCI_INTF_IDX_MASK 0x3
-#define HINIC_CEQ_CTRL_0_INTR_MODE_MASK 0x1
-
-#define HINIC_CEQ_CTRL_0_SET(val, member) \
- (((u32)(val) & HINIC_CEQ_CTRL_0_##member##_MASK) << \
- HINIC_CEQ_CTRL_0_##member##_SHIFT)
-
-#define HINIC_CEQ_CTRL_0_CLEAR(val, member) \
- ((val) & (~(HINIC_CEQ_CTRL_0_##member##_MASK \
- << HINIC_CEQ_CTRL_0_##member##_SHIFT)))
-
-#define HINIC_CEQ_CTRL_1_LEN_SHIFT 0
-#define HINIC_CEQ_CTRL_1_PAGE_SIZE_SHIFT 28
-
-#define HINIC_CEQ_CTRL_1_LEN_MASK 0x1FFFFF
-#define HINIC_CEQ_CTRL_1_PAGE_SIZE_MASK 0xF
-
-#define HINIC_CEQ_CTRL_1_SET(val, member) \
- (((u32)(val) & HINIC_CEQ_CTRL_1_##member##_MASK) << \
- HINIC_CEQ_CTRL_1_##member##_SHIFT)
-
-#define HINIC_CEQ_CTRL_1_CLEAR(val, member) \
- ((val) & (~(HINIC_CEQ_CTRL_1_##member##_MASK \
- << HINIC_CEQ_CTRL_1_##member##_SHIFT)))
-
-#define HINIC_EQ_ELEM_DESC_TYPE_SHIFT 0
-#define HINIC_EQ_ELEM_DESC_SRC_SHIFT 7
-#define HINIC_EQ_ELEM_DESC_SIZE_SHIFT 8
-#define HINIC_EQ_ELEM_DESC_WRAPPED_SHIFT 31
-
-#define HINIC_EQ_ELEM_DESC_TYPE_MASK 0x7F
-#define HINIC_EQ_ELEM_DESC_SRC_MASK 0x1
-#define HINIC_EQ_ELEM_DESC_SIZE_MASK 0xFF
-#define HINIC_EQ_ELEM_DESC_WRAPPED_MASK 0x1
-
-#define HINIC_EQ_ELEM_DESC_SET(val, member) \
- (((u32)(val) & HINIC_EQ_ELEM_DESC_##member##_MASK) << \
- HINIC_EQ_ELEM_DESC_##member##_SHIFT)
-
-#define HINIC_EQ_ELEM_DESC_GET(val, member) \
- (((val) >> HINIC_EQ_ELEM_DESC_##member##_SHIFT) & \
- HINIC_EQ_ELEM_DESC_##member##_MASK)
-
-#define HINIC_EQ_CI_IDX_SHIFT 0
-#define HINIC_EQ_CI_WRAPPED_SHIFT 20
-#define HINIC_EQ_CI_XOR_CHKSUM_SHIFT 24
-#define HINIC_EQ_CI_INT_ARMED_SHIFT 31
-
-#define HINIC_EQ_CI_IDX_MASK 0xFFFFF
-#define HINIC_EQ_CI_WRAPPED_MASK 0x1
-#define HINIC_EQ_CI_XOR_CHKSUM_MASK 0xF
-#define HINIC_EQ_CI_INT_ARMED_MASK 0x1
-
-#define HINIC_EQ_CI_SET(val, member) \
- (((u32)(val) & HINIC_EQ_CI_##member##_MASK) << \
- HINIC_EQ_CI_##member##_SHIFT)
-
-#define HINIC_EQ_CI_CLEAR(val, member) \
- ((val) & (~(HINIC_EQ_CI_##member##_MASK \
- << HINIC_EQ_CI_##member##_SHIFT)))
-
-#define HINIC_MAX_AEQS 4
-#define HINIC_MAX_CEQS 32
-
-#define HINIC_AEQE_SIZE 64
-#define HINIC_CEQE_SIZE 4
-
-#define HINIC_AEQE_DESC_SIZE 4
-#define HINIC_AEQE_DATA_SIZE \
- (HINIC_AEQE_SIZE - HINIC_AEQE_DESC_SIZE)
-
-#define HINIC_DEFAULT_AEQ_LEN 64
-#define HINIC_DEFAULT_CEQ_LEN 1024
-
-#define HINIC_EQ_PAGE_SIZE SZ_4K
-
-#define HINIC_CEQ_ID_CMDQ 0
-
-enum hinic_eq_type {
- HINIC_AEQ,
- HINIC_CEQ,
-};
-
-enum hinic_aeq_type {
- HINIC_MSG_FROM_MGMT_CPU = 2,
-
- HINIC_MAX_AEQ_EVENTS,
-};
-
-enum hinic_ceq_type {
- HINIC_CEQ_CMDQ = 3,
-
- HINIC_MAX_CEQ_EVENTS,
-};
-
-enum hinic_eqe_state {
- HINIC_EQE_ENABLED = BIT(0),
- HINIC_EQE_RUNNING = BIT(1),
-};
-
-struct hinic_aeq_elem {
- u8 data[HINIC_AEQE_DATA_SIZE];
- u32 desc;
-};
-
-struct hinic_eq_work {
- struct work_struct work;
- void *data;
-};
-
-struct hinic_eq {
- struct hinic_hwif *hwif;
-
- enum hinic_eq_type type;
- int q_id;
- u32 q_len;
- u32 page_size;
-
- u32 cons_idx;
- int wrapped;
-
- size_t elem_size;
- int num_pages;
- int num_elem_in_pg;
-
- struct msix_entry msix_entry;
-
- dma_addr_t *dma_addr;
- void **virt_addr;
-
- struct hinic_eq_work aeq_work;
-
- struct tasklet_struct ceq_tasklet;
-};
-
-struct hinic_hw_event_cb {
- void (*hwe_handler)(void *handle, void *data, u8 size);
- void *handle;
- unsigned long hwe_state;
-};
-
-struct hinic_aeqs {
- struct hinic_hwif *hwif;
-
- struct hinic_eq aeq[HINIC_MAX_AEQS];
- int num_aeqs;
-
- struct hinic_hw_event_cb hwe_cb[HINIC_MAX_AEQ_EVENTS];
-
- struct workqueue_struct *workq;
-};
-
-struct hinic_ceq_cb {
- void (*handler)(void *handle, u32 ceqe_data);
- void *handle;
- enum hinic_eqe_state ceqe_state;
-};
-
-struct hinic_ceqs {
- struct hinic_hwif *hwif;
-
- struct hinic_eq ceq[HINIC_MAX_CEQS];
- int num_ceqs;
-
- struct hinic_ceq_cb ceq_cb[HINIC_MAX_CEQ_EVENTS];
-};
-
-void hinic_aeq_register_hw_cb(struct hinic_aeqs *aeqs,
- enum hinic_aeq_type event, void *handle,
- void (*hwe_handler)(void *handle, void *data,
- u8 size));
-
-void hinic_aeq_unregister_hw_cb(struct hinic_aeqs *aeqs,
- enum hinic_aeq_type event);
-
-void hinic_ceq_register_cb(struct hinic_ceqs *ceqs,
- enum hinic_ceq_type event, void *handle,
- void (*ceq_cb)(void *handle, u32 ceqe_data));
-
-void hinic_ceq_unregister_cb(struct hinic_ceqs *ceqs,
- enum hinic_ceq_type event);
-
-int hinic_aeqs_init(struct hinic_aeqs *aeqs, struct hinic_hwif *hwif,
- int num_aeqs, u32 q_len, u32 page_size,
- struct msix_entry *msix_entries);
-
-void hinic_aeqs_free(struct hinic_aeqs *aeqs);
-
-int hinic_ceqs_init(struct hinic_ceqs *ceqs, struct hinic_hwif *hwif,
- int num_ceqs, u32 q_len, u32 page_size,
- struct msix_entry *msix_entries);
-
-void hinic_ceqs_free(struct hinic_ceqs *ceqs);
-
-#endif
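Every register layout in this header uses the same mask-and-shift SET/CLEAR/GET scheme. A tiny standalone rendition using the HINIC_EQ_CI IDX and WRAPPED fields (masks copied from above, macro names simplified):

#include <stdio.h>

#define IDX_SHIFT	0
#define IDX_MASK	0xFFFFF
#define WRAPPED_SHIFT	20
#define WRAPPED_MASK	0x1

#define FIELD_SET(v, mask, shift) (((unsigned int)(v) & (mask)) << (shift))
#define FIELD_GET(v, mask, shift) (((v) >> (shift)) & (mask))

int main(void)
{
	unsigned int ci = FIELD_SET(1234, IDX_MASK, IDX_SHIFT) |
			  FIELD_SET(1, WRAPPED_MASK, WRAPPED_SHIFT);

	/* prints: idx=1234 wrapped=1 */
	printf("idx=%u wrapped=%u\n",
	       FIELD_GET(ci, IDX_MASK, IDX_SHIFT),
	       FIELD_GET(ci, WRAPPED_MASK, WRAPPED_SHIFT));
	return 0;
}
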
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_if.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_if.c
deleted file mode 100644
index 823a170..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_if.c
+++ /dev/null
@@ -1,351 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/errno.h>
-#include <linux/io.h>
-#include <linux/types.h>
-#include <linux/bitops.h>
-
-#include "hinic_hw_csr.h"
-#include "hinic_hw_if.h"
-
-#define PCIE_ATTR_ENTRY 0
-
-#define VALID_MSIX_IDX(attr, msix_index) ((msix_index) < (attr)->num_irqs)
-
-/**
- * hinic_msix_attr_set - set message attribute for msix entry
- * @hwif: the HW interface of a pci function device
- * @msix_index: msix_index
- * @pending_limit: the maximum pending interrupt events (unit 8)
- * @coalesc_timer: coalesc period for interrupt (unit 8 us)
- * @lli_timer: replenishing period for low latency credit (unit 8 us)
- * @lli_credit_limit: maximum credits for low latency msix messages (unit 8)
- * @resend_timer: maximum wait for resending msix (unit coalesc period)
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_msix_attr_set(struct hinic_hwif *hwif, u16 msix_index,
- u8 pending_limit, u8 coalesc_timer,
- u8 lli_timer, u8 lli_credit_limit,
- u8 resend_timer)
-{
- u32 msix_ctrl, addr;
-
- if (!VALID_MSIX_IDX(&hwif->attr, msix_index))
- return -EINVAL;
-
- msix_ctrl = HINIC_MSIX_ATTR_SET(pending_limit, PENDING_LIMIT) |
- HINIC_MSIX_ATTR_SET(coalesc_timer, COALESC_TIMER) |
- HINIC_MSIX_ATTR_SET(lli_timer, LLI_TIMER) |
- HINIC_MSIX_ATTR_SET(lli_credit_limit, LLI_CREDIT) |
- HINIC_MSIX_ATTR_SET(resend_timer, RESEND_TIMER);
-
- addr = HINIC_CSR_MSIX_CTRL_ADDR(msix_index);
-
- hinic_hwif_write_reg(hwif, addr, msix_ctrl);
- return 0;
-}
-
-/**
- * hinic_msix_attr_get - get message attribute of msix entry
- * @hwif: the HW interface of a pci function device
- * @msix_index: msix_index
- * @pending_limit: the maximum pending interrupt events (unit 8)
- * @coalesc_timer: coalesc period for interrupt (unit 8 us)
- * @lli_timer: replenishing period for low latency credit (unit 8 us)
- * @lli_credit_limit: maximum credits for low latency msix messages (unit 8)
- * @resend_timer: maximum wait for resending msix (unit coalesc period)
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_msix_attr_get(struct hinic_hwif *hwif, u16 msix_index,
- u8 *pending_limit, u8 *coalesc_timer,
- u8 *lli_timer, u8 *lli_credit_limit,
- u8 *resend_timer)
-{
- u32 addr, val;
-
- if (!VALID_MSIX_IDX(&hwif->attr, msix_index))
- return -EINVAL;
-
- addr = HINIC_CSR_MSIX_CTRL_ADDR(msix_index);
- val = hinic_hwif_read_reg(hwif, addr);
-
- *pending_limit = HINIC_MSIX_ATTR_GET(val, PENDING_LIMIT);
- *coalesc_timer = HINIC_MSIX_ATTR_GET(val, COALESC_TIMER);
- *lli_timer = HINIC_MSIX_ATTR_GET(val, LLI_TIMER);
- *lli_credit_limit = HINIC_MSIX_ATTR_GET(val, LLI_CREDIT);
- *resend_timer = HINIC_MSIX_ATTR_GET(val, RESEND_TIMER);
- return 0;
-}
-
-/**
- * hinic_msix_attr_cnt_clear - clear message attribute counters for msix entry
- * @hwif: the HW interface of a pci function device
- * @msix_index: msix_index
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_msix_attr_cnt_clear(struct hinic_hwif *hwif, u16 msix_index)
-{
- u32 msix_ctrl, addr;
-
- if (!VALID_MSIX_IDX(&hwif->attr, msix_index))
- return -EINVAL;
-
- msix_ctrl = HINIC_MSIX_CNT_SET(1, RESEND_TIMER);
- addr = HINIC_CSR_MSIX_CNT_ADDR(msix_index);
-
- hinic_hwif_write_reg(hwif, addr, msix_ctrl);
- return 0;
-}
-
-/**
- * hinic_set_pf_action - set action on pf channel
- * @hwif: the HW interface of a pci function device
- * @action: action on pf channel
- **/
-void hinic_set_pf_action(struct hinic_hwif *hwif, enum hinic_pf_action action)
-{
- u32 attr5 = hinic_hwif_read_reg(hwif, HINIC_CSR_FUNC_ATTR5_ADDR);
-
- attr5 = HINIC_FA5_CLEAR(attr5, PF_ACTION);
- attr5 |= HINIC_FA5_SET(action, PF_ACTION);
-
- hinic_hwif_write_reg(hwif, HINIC_CSR_FUNC_ATTR5_ADDR, attr5);
-}
-
-enum hinic_outbound_state hinic_outbound_state_get(struct hinic_hwif *hwif)
-{
- u32 attr4 = hinic_hwif_read_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR);
-
- return HINIC_FA4_GET(attr4, OUTBOUND_STATE);
-}
-
-void hinic_outbound_state_set(struct hinic_hwif *hwif,
- enum hinic_outbound_state outbound_state)
-{
- u32 attr4 = hinic_hwif_read_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR);
-
- attr4 = HINIC_FA4_CLEAR(attr4, OUTBOUND_STATE);
- attr4 |= HINIC_FA4_SET(outbound_state, OUTBOUND_STATE);
-
- hinic_hwif_write_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR, attr4);
-}
-
-enum hinic_db_state hinic_db_state_get(struct hinic_hwif *hwif)
-{
- u32 attr4 = hinic_hwif_read_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR);
-
- return HINIC_FA4_GET(attr4, DB_STATE);
-}
-
-void hinic_db_state_set(struct hinic_hwif *hwif,
- enum hinic_db_state db_state)
-{
- u32 attr4 = hinic_hwif_read_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR);
-
- attr4 = HINIC_FA4_CLEAR(attr4, DB_STATE);
- attr4 |= HINIC_FA4_SET(db_state, DB_STATE);
-
- hinic_hwif_write_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR, attr4);
-}
-
-/**
- * hwif_ready - test if the HW is ready for use
- * @hwif: the HW interface of a pci function device
- *
- * Return 0 - Success, negative - Failure
- **/
-static int hwif_ready(struct hinic_hwif *hwif)
-{
- struct pci_dev *pdev = hwif->pdev;
- u32 addr, attr1;
-
- addr = HINIC_CSR_FUNC_ATTR1_ADDR;
- attr1 = hinic_hwif_read_reg(hwif, addr);
-
- if (!HINIC_FA1_GET(attr1, INIT_STATUS)) {
- dev_err(&pdev->dev, "hwif status is not ready\n");
- return -EFAULT;
- }
-
- return 0;
-}
-
-/**
- * set_hwif_attr - set the relevant members of hwif from the attribute registers
- * @hwif: the HW interface of a pci function device
- * @attr0: the first attribute that was read from the hw
- * @attr1: the second attribute that was read from the hw
- **/
-static void set_hwif_attr(struct hinic_hwif *hwif, u32 attr0, u32 attr1)
-{
- hwif->attr.func_idx = HINIC_FA0_GET(attr0, FUNC_IDX);
- hwif->attr.pf_idx = HINIC_FA0_GET(attr0, PF_IDX);
- hwif->attr.pci_intf_idx = HINIC_FA0_GET(attr0, PCI_INTF_IDX);
- hwif->attr.func_type = HINIC_FA0_GET(attr0, FUNC_TYPE);
-
- hwif->attr.num_aeqs = BIT(HINIC_FA1_GET(attr1, AEQS_PER_FUNC));
- hwif->attr.num_ceqs = BIT(HINIC_FA1_GET(attr1, CEQS_PER_FUNC));
- hwif->attr.num_irqs = BIT(HINIC_FA1_GET(attr1, IRQS_PER_FUNC));
- hwif->attr.num_dma_attr = BIT(HINIC_FA1_GET(attr1, DMA_ATTR_PER_FUNC));
-}
-
-/**
- * read_hwif_attr - read the attributes and set members in hwif
- * @hwif: the HW interface of a pci function device
- **/
-static void read_hwif_attr(struct hinic_hwif *hwif)
-{
- u32 addr, attr0, attr1;
-
- addr = HINIC_CSR_FUNC_ATTR0_ADDR;
- attr0 = hinic_hwif_read_reg(hwif, addr);
-
- addr = HINIC_CSR_FUNC_ATTR1_ADDR;
- attr1 = hinic_hwif_read_reg(hwif, addr);
-
- set_hwif_attr(hwif, attr0, attr1);
-}
-
-/**
- * set_ppf - try to elect this function as PPF and update the hwif type if elected
- * @hwif: the HW interface of a pci function device
- **/
-static void set_ppf(struct hinic_hwif *hwif)
-{
- struct hinic_func_attr *attr = &hwif->attr;
- u32 addr, val, ppf_election;
-
- /* Read Modify Write */
- addr = HINIC_CSR_PPF_ELECTION_ADDR(HINIC_HWIF_PCI_INTF(hwif));
-
- val = hinic_hwif_read_reg(hwif, addr);
- val = HINIC_PPF_ELECTION_CLEAR(val, IDX);
-
- ppf_election = HINIC_PPF_ELECTION_SET(HINIC_HWIF_FUNC_IDX(hwif), IDX);
-
- val |= ppf_election;
- hinic_hwif_write_reg(hwif, addr, val);
-
- /* check PPF */
- val = hinic_hwif_read_reg(hwif, addr);
-
- attr->ppf_idx = HINIC_PPF_ELECTION_GET(val, IDX);
- if (attr->ppf_idx == HINIC_HWIF_FUNC_IDX(hwif))
- attr->func_type = HINIC_PPF;
-}
-
-/**
- * set_dma_attr - set the dma attributes in the HW
- * @hwif: the HW interface of a pci function device
- * @entry_idx: the entry index in the dma table
- * @st: PCIE TLP steering tag
- * @at: PCIE TLP AT field
- * @ph: PCIE TLP Processing Hint field
- * @no_snooping: PCIE TLP No snooping
- * @tph_en: PCIE TLP Processing Hint Enable
- **/
-static void set_dma_attr(struct hinic_hwif *hwif, u32 entry_idx,
- u8 st, u8 at, u8 ph,
- enum hinic_pcie_nosnoop no_snooping,
- enum hinic_pcie_tph tph_en)
-{
- u32 addr, val, dma_attr_entry;
-
- /* Read Modify Write */
- addr = HINIC_CSR_DMA_ATTR_ADDR(entry_idx);
-
- val = hinic_hwif_read_reg(hwif, addr);
- val = HINIC_DMA_ATTR_CLEAR(val, ST) &
- HINIC_DMA_ATTR_CLEAR(val, AT) &
- HINIC_DMA_ATTR_CLEAR(val, PH) &
- HINIC_DMA_ATTR_CLEAR(val, NO_SNOOPING) &
- HINIC_DMA_ATTR_CLEAR(val, TPH_EN);
-
- dma_attr_entry = HINIC_DMA_ATTR_SET(st, ST) |
- HINIC_DMA_ATTR_SET(at, AT) |
- HINIC_DMA_ATTR_SET(ph, PH) |
- HINIC_DMA_ATTR_SET(no_snooping, NO_SNOOPING) |
- HINIC_DMA_ATTR_SET(tph_en, TPH_EN);
-
- val |= dma_attr_entry;
- hinic_hwif_write_reg(hwif, addr, val);
-}
-
-/**
- * dma_attr_init - initialize the default dma attributes
- * @hwif: the HW interface of a pci function device
- **/
-static void dma_attr_init(struct hinic_hwif *hwif)
-{
- set_dma_attr(hwif, PCIE_ATTR_ENTRY, HINIC_PCIE_ST_DISABLE,
- HINIC_PCIE_AT_DISABLE, HINIC_PCIE_PH_DISABLE,
- HINIC_PCIE_SNOOP, HINIC_PCIE_TPH_DISABLE);
-}
-
-/**
- * hinic_init_hwif - initialize the hw interface
- * @hwif: the HW interface of a pci function device
- * @pdev: the pci device for accessing PCI resources
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_init_hwif(struct hinic_hwif *hwif, struct pci_dev *pdev)
-{
- int err;
-
- hwif->pdev = pdev;
-
- hwif->cfg_regs_bar = pci_ioremap_bar(pdev, HINIC_PCI_CFG_REGS_BAR);
- if (!hwif->cfg_regs_bar) {
- dev_err(&pdev->dev, "Failed to map configuration regs\n");
- return -ENOMEM;
- }
-
- err = hwif_ready(hwif);
- if (err) {
- dev_err(&pdev->dev, "HW interface is not ready\n");
- goto err_hwif_ready;
- }
-
- read_hwif_attr(hwif);
-
- if (HINIC_IS_PF(hwif))
- set_ppf(hwif);
-
-	/* No transactions before DMA is initialized */
- dma_attr_init(hwif);
- return 0;
-
-err_hwif_ready:
- iounmap(hwif->cfg_regs_bar);
- return err;
-}
-
-/**
- * hinic_free_hwif - free the HW interface
- * @hwif: the HW interface of a pci function device
- **/
-void hinic_free_hwif(struct hinic_hwif *hwif)
-{
- iounmap(hwif->cfg_regs_bar);
-}
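A hypothetical standalone pairing of the interface init/teardown, mirroring what hinic_init_hwdev() does earlier in this patch; the probe context is assumed:

#include <linux/pci.h>

#include "hinic_hw_if.h"

static int example_hwif_probe(struct pci_dev *pdev)
{
	struct hinic_hwif *hwif;
	int err;

	hwif = devm_kzalloc(&pdev->dev, sizeof(*hwif), GFP_KERNEL);
	if (!hwif)
		return -ENOMEM;

	err = hinic_init_hwif(hwif, pdev);
	if (err)
		return err;

	dev_info(&pdev->dev, "func %u: %u AEQs, %u IRQs\n",
		 HINIC_HWIF_FUNC_IDX(hwif), HINIC_HWIF_NUM_AEQS(hwif),
		 HINIC_HWIF_NUM_IRQS(hwif));

	hinic_free_hwif(hwif);
	return 0;
}
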
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_if.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_if.h
deleted file mode 100644
index 5b4760c..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_if.h
+++ /dev/null
@@ -1,272 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_HW_IF_H
-#define HINIC_HW_IF_H
-
-#include <linux/pci.h>
-#include <linux/io.h>
-#include <linux/types.h>
-#include <asm/byteorder.h>
-
-#define HINIC_DMA_ATTR_ST_SHIFT 0
-#define HINIC_DMA_ATTR_AT_SHIFT 8
-#define HINIC_DMA_ATTR_PH_SHIFT 10
-#define HINIC_DMA_ATTR_NO_SNOOPING_SHIFT 12
-#define HINIC_DMA_ATTR_TPH_EN_SHIFT 13
-
-#define HINIC_DMA_ATTR_ST_MASK 0xFF
-#define HINIC_DMA_ATTR_AT_MASK 0x3
-#define HINIC_DMA_ATTR_PH_MASK 0x3
-#define HINIC_DMA_ATTR_NO_SNOOPING_MASK 0x1
-#define HINIC_DMA_ATTR_TPH_EN_MASK 0x1
-
-#define HINIC_DMA_ATTR_SET(val, member) \
- (((u32)(val) & HINIC_DMA_ATTR_##member##_MASK) << \
- HINIC_DMA_ATTR_##member##_SHIFT)
-
-#define HINIC_DMA_ATTR_CLEAR(val, member) \
- ((val) & (~(HINIC_DMA_ATTR_##member##_MASK \
- << HINIC_DMA_ATTR_##member##_SHIFT)))
-
-#define HINIC_FA0_FUNC_IDX_SHIFT 0
-#define HINIC_FA0_PF_IDX_SHIFT 10
-#define HINIC_FA0_PCI_INTF_IDX_SHIFT 14
-/* reserved members - off 16 */
-#define HINIC_FA0_FUNC_TYPE_SHIFT 24
-
-#define HINIC_FA0_FUNC_IDX_MASK 0x3FF
-#define HINIC_FA0_PF_IDX_MASK 0xF
-#define HINIC_FA0_PCI_INTF_IDX_MASK 0x3
-#define HINIC_FA0_FUNC_TYPE_MASK 0x1
-
-#define HINIC_FA0_GET(val, member) \
- (((val) >> HINIC_FA0_##member##_SHIFT) & HINIC_FA0_##member##_MASK)
-
-#define HINIC_FA1_AEQS_PER_FUNC_SHIFT 8
-/* reserved members - off 10 */
-#define HINIC_FA1_CEQS_PER_FUNC_SHIFT 12
-/* reserved members - off 15 */
-#define HINIC_FA1_IRQS_PER_FUNC_SHIFT 20
-#define HINIC_FA1_DMA_ATTR_PER_FUNC_SHIFT 24
-/* reserved members - off 27 */
-#define HINIC_FA1_INIT_STATUS_SHIFT 30
-
-#define HINIC_FA1_AEQS_PER_FUNC_MASK 0x3
-#define HINIC_FA1_CEQS_PER_FUNC_MASK 0x7
-#define HINIC_FA1_IRQS_PER_FUNC_MASK 0xF
-#define HINIC_FA1_DMA_ATTR_PER_FUNC_MASK 0x7
-#define HINIC_FA1_INIT_STATUS_MASK 0x1
-
-#define HINIC_FA1_GET(val, member) \
- (((val) >> HINIC_FA1_##member##_SHIFT) & HINIC_FA1_##member##_MASK)
-
-#define HINIC_FA4_OUTBOUND_STATE_SHIFT 0
-#define HINIC_FA4_DB_STATE_SHIFT 1
-
-#define HINIC_FA4_OUTBOUND_STATE_MASK 0x1
-#define HINIC_FA4_DB_STATE_MASK 0x1
-
-#define HINIC_FA4_GET(val, member) \
- (((val) >> HINIC_FA4_##member##_SHIFT) & HINIC_FA4_##member##_MASK)
-
-#define HINIC_FA4_SET(val, member) \
-	(((u32)(val) & HINIC_FA4_##member##_MASK) << HINIC_FA4_##member##_SHIFT)
-
-#define HINIC_FA4_CLEAR(val, member) \
- ((val) & (~(HINIC_FA4_##member##_MASK << HINIC_FA4_##member##_SHIFT)))
-
-#define HINIC_FA5_PF_ACTION_SHIFT 0
-#define HINIC_FA5_PF_ACTION_MASK 0xFFFF
-
-#define HINIC_FA5_SET(val, member) \
- (((u32)(val) & HINIC_FA5_##member##_MASK) << HINIC_FA5_##member##_SHIFT)
-
-#define HINIC_FA5_CLEAR(val, member) \
- ((val) & (~(HINIC_FA5_##member##_MASK << HINIC_FA5_##member##_SHIFT)))
-
-#define HINIC_PPF_ELECTION_IDX_SHIFT 0
-#define HINIC_PPF_ELECTION_IDX_MASK 0x1F
-
-#define HINIC_PPF_ELECTION_SET(val, member) \
- (((u32)(val) & HINIC_PPF_ELECTION_##member##_MASK) << \
- HINIC_PPF_ELECTION_##member##_SHIFT)
-
-#define HINIC_PPF_ELECTION_GET(val, member) \
- (((val) >> HINIC_PPF_ELECTION_##member##_SHIFT) & \
- HINIC_PPF_ELECTION_##member##_MASK)
-
-#define HINIC_PPF_ELECTION_CLEAR(val, member) \
- ((val) & (~(HINIC_PPF_ELECTION_##member##_MASK \
- << HINIC_PPF_ELECTION_##member##_SHIFT)))
-
-#define HINIC_MSIX_PENDING_LIMIT_SHIFT 0
-#define HINIC_MSIX_COALESC_TIMER_SHIFT 8
-#define HINIC_MSIX_LLI_TIMER_SHIFT 16
-#define HINIC_MSIX_LLI_CREDIT_SHIFT 24
-#define HINIC_MSIX_RESEND_TIMER_SHIFT 29
-
-#define HINIC_MSIX_PENDING_LIMIT_MASK 0xFF
-#define HINIC_MSIX_COALESC_TIMER_MASK 0xFF
-#define HINIC_MSIX_LLI_TIMER_MASK 0xFF
-#define HINIC_MSIX_LLI_CREDIT_MASK 0x1F
-#define HINIC_MSIX_RESEND_TIMER_MASK 0x7
-
-#define HINIC_MSIX_ATTR_SET(val, member) \
- (((u32)(val) & HINIC_MSIX_##member##_MASK) << \
- HINIC_MSIX_##member##_SHIFT)
-
-#define HINIC_MSIX_ATTR_GET(val, member) \
- (((val) >> HINIC_MSIX_##member##_SHIFT) & \
- HINIC_MSIX_##member##_MASK)
-
-#define HINIC_MSIX_CNT_RESEND_TIMER_SHIFT 29
-
-#define HINIC_MSIX_CNT_RESEND_TIMER_MASK 0x1
-
-#define HINIC_MSIX_CNT_SET(val, member) \
- (((u32)(val) & HINIC_MSIX_CNT_##member##_MASK) << \
- HINIC_MSIX_CNT_##member##_SHIFT)
-
-#define HINIC_HWIF_NUM_AEQS(hwif) ((hwif)->attr.num_aeqs)
-#define HINIC_HWIF_NUM_CEQS(hwif) ((hwif)->attr.num_ceqs)
-#define HINIC_HWIF_NUM_IRQS(hwif) ((hwif)->attr.num_irqs)
-#define HINIC_HWIF_FUNC_IDX(hwif) ((hwif)->attr.func_idx)
-#define HINIC_HWIF_PCI_INTF(hwif) ((hwif)->attr.pci_intf_idx)
-#define HINIC_HWIF_PF_IDX(hwif) ((hwif)->attr.pf_idx)
-
-#define HINIC_FUNC_TYPE(hwif) ((hwif)->attr.func_type)
-#define HINIC_IS_PF(hwif) (HINIC_FUNC_TYPE(hwif) == HINIC_PF)
-#define HINIC_IS_PPF(hwif) (HINIC_FUNC_TYPE(hwif) == HINIC_PPF)
-
-#define HINIC_PCI_CFG_REGS_BAR 0
-#define HINIC_PCI_DB_BAR 4
-
-#define HINIC_PCIE_ST_DISABLE 0
-#define HINIC_PCIE_AT_DISABLE 0
-#define HINIC_PCIE_PH_DISABLE 0
-
-#define HINIC_EQ_MSIX_PENDING_LIMIT_DEFAULT 0 /* Disabled */
-#define HINIC_EQ_MSIX_COALESC_TIMER_DEFAULT 0xFF /* max */
-#define HINIC_EQ_MSIX_LLI_TIMER_DEFAULT 0 /* Disabled */
-#define HINIC_EQ_MSIX_LLI_CREDIT_LIMIT_DEFAULT 0 /* Disabled */
-#define HINIC_EQ_MSIX_RESEND_TIMER_DEFAULT 7 /* max */
-
-enum hinic_pcie_nosnoop {
- HINIC_PCIE_SNOOP = 0,
- HINIC_PCIE_NO_SNOOP = 1,
-};
-
-enum hinic_pcie_tph {
- HINIC_PCIE_TPH_DISABLE = 0,
- HINIC_PCIE_TPH_ENABLE = 1,
-};
-
-enum hinic_func_type {
- HINIC_PF = 0,
- HINIC_PPF = 2,
-};
-
-enum hinic_mod_type {
- HINIC_MOD_COMM = 0, /* HW communication module */
- HINIC_MOD_L2NIC = 1, /* L2NIC module */
- HINIC_MOD_CFGM = 7, /* Configuration module */
-
- HINIC_MOD_MAX = 15
-};
-
-enum hinic_node_id {
- HINIC_NODE_ID_MGMT = 21,
-};
-
-enum hinic_pf_action {
- HINIC_PF_MGMT_INIT = 0x0,
-
- HINIC_PF_MGMT_ACTIVE = 0x11,
-};
-
-enum hinic_outbound_state {
- HINIC_OUTBOUND_ENABLE = 0,
- HINIC_OUTBOUND_DISABLE = 1,
-};
-
-enum hinic_db_state {
- HINIC_DB_ENABLE = 0,
- HINIC_DB_DISABLE = 1,
-};
-
-struct hinic_func_attr {
- u16 func_idx;
- u8 pf_idx;
- u8 pci_intf_idx;
-
- enum hinic_func_type func_type;
-
- u8 ppf_idx;
-
- u16 num_irqs;
- u8 num_aeqs;
- u8 num_ceqs;
-
- u8 num_dma_attr;
-};
-
-struct hinic_hwif {
- struct pci_dev *pdev;
- void __iomem *cfg_regs_bar;
-
- struct hinic_func_attr attr;
-};
-
-static inline u32 hinic_hwif_read_reg(struct hinic_hwif *hwif, u32 reg)
-{
- return be32_to_cpu(readl(hwif->cfg_regs_bar + reg));
-}
-
-static inline void hinic_hwif_write_reg(struct hinic_hwif *hwif, u32 reg,
- u32 val)
-{
- writel(cpu_to_be32(val), hwif->cfg_regs_bar + reg);
-}
-
-int hinic_msix_attr_set(struct hinic_hwif *hwif, u16 msix_index,
- u8 pending_limit, u8 coalesc_timer,
- u8 lli_timer_cfg, u8 lli_credit_limit,
- u8 resend_timer);
-
-int hinic_msix_attr_get(struct hinic_hwif *hwif, u16 msix_index,
- u8 *pending_limit, u8 *coalesc_timer_cfg,
- u8 *lli_timer, u8 *lli_credit_limit,
- u8 *resend_timer);
-
-int hinic_msix_attr_cnt_clear(struct hinic_hwif *hwif, u16 msix_index);
-
-void hinic_set_pf_action(struct hinic_hwif *hwif, enum hinic_pf_action action);
-
-enum hinic_outbound_state hinic_outbound_state_get(struct hinic_hwif *hwif);
-
-void hinic_outbound_state_set(struct hinic_hwif *hwif,
- enum hinic_outbound_state outbound_state);
-
-enum hinic_db_state hinic_db_state_get(struct hinic_hwif *hwif);
-
-void hinic_db_state_set(struct hinic_hwif *hwif,
- enum hinic_db_state db_state);
-
-int hinic_init_hwif(struct hinic_hwif *hwif, struct pci_dev *pdev);
-
-void hinic_free_hwif(struct hinic_hwif *hwif);
-
-#endif
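
The *_SET/*_GET/*_CLEAR macros removed above all share one token-pasting pattern: each register field is described by a SHIFT and a MASK, and values are packed into or extracted from a 32-bit attribute word. A minimal standalone sketch of that pattern (the FIELD_A/FIELD_B names are illustrative, not taken from the hardware spec):

#include <assert.h>
#include <stdint.h>

#define FIELD_A_SHIFT	0
#define FIELD_A_MASK	0xFF
#define FIELD_B_SHIFT	8
#define FIELD_B_MASK	0x3

/* pack a value into its field position */
#define FIELD_SET(val, name) \
	(((uint32_t)(val) & FIELD_##name##_MASK) << FIELD_##name##_SHIFT)
/* extract a field from a packed word */
#define FIELD_GET(reg, name) \
	(((reg) >> FIELD_##name##_SHIFT) & FIELD_##name##_MASK)
/* zero one field, leaving the others intact */
#define FIELD_CLEAR(reg, name) \
	((reg) & ~(FIELD_##name##_MASK << FIELD_##name##_SHIFT))

int main(void)
{
	uint32_t reg = FIELD_SET(0x5A, A) | FIELD_SET(2, B);

	assert(FIELD_GET(reg, A) == 0x5A);
	assert(FIELD_GET(reg, B) == 2);
	reg = FIELD_CLEAR(reg, B);
	assert(FIELD_GET(reg, B) == 0 && FIELD_GET(reg, A) == 0x5A);
	return 0;
}

Masking the value before shifting in the SET form is what keeps an out-of-range argument from corrupting neighbouring fields, which is why every SET macro above applies the MASK first.
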
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_io.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_io.c
deleted file mode 100644
index 8e58976..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_io.c
+++ /dev/null
@@ -1,533 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/semaphore.h>
-#include <linux/dma-mapping.h>
-#include <linux/io.h>
-#include <linux/err.h>
-
-#include "hinic_hw_if.h"
-#include "hinic_hw_eqs.h"
-#include "hinic_hw_wqe.h"
-#include "hinic_hw_wq.h"
-#include "hinic_hw_cmdq.h"
-#include "hinic_hw_qp_ctxt.h"
-#include "hinic_hw_qp.h"
-#include "hinic_hw_io.h"
-
-#define CI_Q_ADDR_SIZE sizeof(u32)
-
-#define CI_ADDR(base_addr, q_id) ((base_addr) + \
- (q_id) * CI_Q_ADDR_SIZE)
-
-#define CI_TABLE_SIZE(num_qps) ((num_qps) * CI_Q_ADDR_SIZE)
-
-#define DB_IDX(db, db_base) \
- (((unsigned long)(db) - (unsigned long)(db_base)) / HINIC_DB_PAGE_SIZE)
-
-enum io_cmd {
- IO_CMD_MODIFY_QUEUE_CTXT = 0,
-};
-
-static void init_db_area_idx(struct hinic_free_db_area *free_db_area)
-{
- int i;
-
- for (i = 0; i < HINIC_DB_MAX_AREAS; i++)
- free_db_area->db_idx[i] = i;
-
- free_db_area->alloc_pos = 0;
- free_db_area->return_pos = HINIC_DB_MAX_AREAS;
-
- free_db_area->num_free = HINIC_DB_MAX_AREAS;
-
- sema_init(&free_db_area->idx_lock, 1);
-}
-
-static void __iomem *get_db_area(struct hinic_func_to_io *func_to_io)
-{
- struct hinic_free_db_area *free_db_area = &func_to_io->free_db_area;
- int pos, idx;
-
- down(&free_db_area->idx_lock);
-
- free_db_area->num_free--;
-
- if (free_db_area->num_free < 0) {
- free_db_area->num_free++;
- up(&free_db_area->idx_lock);
- return ERR_PTR(-ENOMEM);
- }
-
- pos = free_db_area->alloc_pos++;
- pos &= HINIC_DB_MAX_AREAS - 1;
-
- idx = free_db_area->db_idx[pos];
-
- free_db_area->db_idx[pos] = -1;
-
- up(&free_db_area->idx_lock);
-
- return func_to_io->db_base + idx * HINIC_DB_PAGE_SIZE;
-}
-
-static void return_db_area(struct hinic_func_to_io *func_to_io,
- void __iomem *db_base)
-{
- struct hinic_free_db_area *free_db_area = &func_to_io->free_db_area;
- int pos, idx = DB_IDX(db_base, func_to_io->db_base);
-
- down(&free_db_area->idx_lock);
-
- pos = free_db_area->return_pos++;
- pos &= HINIC_DB_MAX_AREAS - 1;
-
- free_db_area->db_idx[pos] = idx;
-
- free_db_area->num_free++;
-
- up(&free_db_area->idx_lock);
-}
-
-static int write_sq_ctxts(struct hinic_func_to_io *func_to_io, u16 base_qpn,
- u16 num_sqs)
-{
- struct hinic_hwif *hwif = func_to_io->hwif;
- struct hinic_sq_ctxt_block *sq_ctxt_block;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_cmdq_buf cmdq_buf;
- struct hinic_sq_ctxt *sq_ctxt;
- struct hinic_qp *qp;
- u64 out_param;
- int err, i;
-
- err = hinic_alloc_cmdq_buf(&func_to_io->cmdqs, &cmdq_buf);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate cmdq buf\n");
- return err;
- }
-
- sq_ctxt_block = cmdq_buf.buf;
- sq_ctxt = sq_ctxt_block->sq_ctxt;
-
- hinic_qp_prepare_header(&sq_ctxt_block->hdr, HINIC_QP_CTXT_TYPE_SQ,
- num_sqs, func_to_io->max_qps);
- for (i = 0; i < num_sqs; i++) {
- qp = &func_to_io->qps[i];
-
- hinic_sq_prepare_ctxt(&sq_ctxt[i], &qp->sq,
- base_qpn + qp->q_id);
- }
-
- cmdq_buf.size = HINIC_SQ_CTXT_SIZE(num_sqs);
-
- err = hinic_cmdq_direct_resp(&func_to_io->cmdqs, HINIC_MOD_L2NIC,
- IO_CMD_MODIFY_QUEUE_CTXT, &cmdq_buf,
- &out_param);
- if ((err) || (out_param != 0)) {
- dev_err(&pdev->dev, "Failed to set SQ ctxts\n");
- err = -EFAULT;
- }
-
- hinic_free_cmdq_buf(&func_to_io->cmdqs, &cmdq_buf);
- return err;
-}
-
-static int write_rq_ctxts(struct hinic_func_to_io *func_to_io, u16 base_qpn,
- u16 num_rqs)
-{
- struct hinic_hwif *hwif = func_to_io->hwif;
- struct hinic_rq_ctxt_block *rq_ctxt_block;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_cmdq_buf cmdq_buf;
- struct hinic_rq_ctxt *rq_ctxt;
- struct hinic_qp *qp;
- u64 out_param;
- int err, i;
-
- err = hinic_alloc_cmdq_buf(&func_to_io->cmdqs, &cmdq_buf);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate cmdq buf\n");
- return err;
- }
-
- rq_ctxt_block = cmdq_buf.buf;
- rq_ctxt = rq_ctxt_block->rq_ctxt;
-
- hinic_qp_prepare_header(&rq_ctxt_block->hdr, HINIC_QP_CTXT_TYPE_RQ,
- num_rqs, func_to_io->max_qps);
- for (i = 0; i < num_rqs; i++) {
- qp = &func_to_io->qps[i];
-
- hinic_rq_prepare_ctxt(&rq_ctxt[i], &qp->rq,
- base_qpn + qp->q_id);
- }
-
- cmdq_buf.size = HINIC_RQ_CTXT_SIZE(num_rqs);
-
- err = hinic_cmdq_direct_resp(&func_to_io->cmdqs, HINIC_MOD_L2NIC,
- IO_CMD_MODIFY_QUEUE_CTXT, &cmdq_buf,
- &out_param);
- if ((err) || (out_param != 0)) {
- dev_err(&pdev->dev, "Failed to set RQ ctxts\n");
- err = -EFAULT;
- }
-
- hinic_free_cmdq_buf(&func_to_io->cmdqs, &cmdq_buf);
- return err;
-}
-
-/**
- * write_qp_ctxts - write the qp ctxt to HW
- * @func_to_io: func to io channel that holds the IO components
- * @base_qpn: first qp number
- * @num_qps: number of qps to write
- *
- * Return 0 - Success, negative - Failure
- **/
-static int write_qp_ctxts(struct hinic_func_to_io *func_to_io, u16 base_qpn,
- u16 num_qps)
-{
- return (write_sq_ctxts(func_to_io, base_qpn, num_qps) ||
- write_rq_ctxts(func_to_io, base_qpn, num_qps));
-}
-
-/**
- * init_qp - Initialize a Queue Pair
- * @func_to_io: func to io channel that holds the IO components
- * @qp: pointer to the qp to initialize
- * @q_id: the id of the qp
- * @sq_msix_entry: msix entry for sq
- * @rq_msix_entry: msix entry for rq
- *
- * Return 0 - Success, negative - Failure
- **/
-static int init_qp(struct hinic_func_to_io *func_to_io,
- struct hinic_qp *qp, int q_id,
- struct msix_entry *sq_msix_entry,
- struct msix_entry *rq_msix_entry)
-{
- struct hinic_hwif *hwif = func_to_io->hwif;
- struct pci_dev *pdev = hwif->pdev;
- void __iomem *db_base;
- int err;
-
- qp->q_id = q_id;
-
- err = hinic_wq_allocate(&func_to_io->wqs, &func_to_io->sq_wq[q_id],
- HINIC_SQ_WQEBB_SIZE, HINIC_SQ_PAGE_SIZE,
- HINIC_SQ_DEPTH, HINIC_SQ_WQE_MAX_SIZE);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate WQ for SQ\n");
- return err;
- }
-
- err = hinic_wq_allocate(&func_to_io->wqs, &func_to_io->rq_wq[q_id],
- HINIC_RQ_WQEBB_SIZE, HINIC_RQ_PAGE_SIZE,
- HINIC_RQ_DEPTH, HINIC_RQ_WQE_SIZE);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate WQ for RQ\n");
- goto err_rq_alloc;
- }
-
- db_base = get_db_area(func_to_io);
- if (IS_ERR(db_base)) {
- dev_err(&pdev->dev, "Failed to get DB area for SQ\n");
- err = PTR_ERR(db_base);
- goto err_get_db;
- }
-
- func_to_io->sq_db[q_id] = db_base;
-
- err = hinic_init_sq(&qp->sq, hwif, &func_to_io->sq_wq[q_id],
- sq_msix_entry,
- CI_ADDR(func_to_io->ci_addr_base, q_id),
- CI_ADDR(func_to_io->ci_dma_base, q_id), db_base);
- if (err) {
- dev_err(&pdev->dev, "Failed to init SQ\n");
- goto err_sq_init;
- }
-
- err = hinic_init_rq(&qp->rq, hwif, &func_to_io->rq_wq[q_id],
- rq_msix_entry);
- if (err) {
- dev_err(&pdev->dev, "Failed to init RQ\n");
- goto err_rq_init;
- }
-
- return 0;
-
-err_rq_init:
- hinic_clean_sq(&qp->sq);
-
-err_sq_init:
- return_db_area(func_to_io, db_base);
-
-err_get_db:
- hinic_wq_free(&func_to_io->wqs, &func_to_io->rq_wq[q_id]);
-
-err_rq_alloc:
- hinic_wq_free(&func_to_io->wqs, &func_to_io->sq_wq[q_id]);
- return err;
-}
-
-/**
- * destroy_qp - Clean the resources of a Queue Pair
- * @func_to_io: func to io channel that holds the IO components
- * @qp: pointer to the qp to clean
- **/
-static void destroy_qp(struct hinic_func_to_io *func_to_io,
- struct hinic_qp *qp)
-{
- int q_id = qp->q_id;
-
- hinic_clean_rq(&qp->rq);
- hinic_clean_sq(&qp->sq);
-
- return_db_area(func_to_io, func_to_io->sq_db[q_id]);
-
- hinic_wq_free(&func_to_io->wqs, &func_to_io->rq_wq[q_id]);
- hinic_wq_free(&func_to_io->wqs, &func_to_io->sq_wq[q_id]);
-}
-
-/**
- * hinic_io_create_qps - Create Queue Pairs
- * @func_to_io: func to io channel that holds the IO components
- * @base_qpn: base qp number
- * @num_qps: number of queue pairs to create
- * @sq_msix_entries: msix entries for sqs
- * @rq_msix_entries: msix entries for rqs
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_io_create_qps(struct hinic_func_to_io *func_to_io,
- u16 base_qpn, int num_qps,
- struct msix_entry *sq_msix_entries,
- struct msix_entry *rq_msix_entries)
-{
- struct hinic_hwif *hwif = func_to_io->hwif;
- struct pci_dev *pdev = hwif->pdev;
- size_t qps_size, wq_size, db_size;
- void *ci_addr_base;
- int i, j, err;
-
- qps_size = num_qps * sizeof(*func_to_io->qps);
- func_to_io->qps = devm_kzalloc(&pdev->dev, qps_size, GFP_KERNEL);
- if (!func_to_io->qps)
- return -ENOMEM;
-
- wq_size = num_qps * sizeof(*func_to_io->sq_wq);
- func_to_io->sq_wq = devm_kzalloc(&pdev->dev, wq_size, GFP_KERNEL);
- if (!func_to_io->sq_wq) {
- err = -ENOMEM;
- goto err_sq_wq;
- }
-
- wq_size = num_qps * sizeof(*func_to_io->rq_wq);
- func_to_io->rq_wq = devm_kzalloc(&pdev->dev, wq_size, GFP_KERNEL);
- if (!func_to_io->rq_wq) {
- err = -ENOMEM;
- goto err_rq_wq;
- }
-
- db_size = num_qps * sizeof(*func_to_io->sq_db);
- func_to_io->sq_db = devm_kzalloc(&pdev->dev, db_size, GFP_KERNEL);
- if (!func_to_io->sq_db) {
- err = -ENOMEM;
- goto err_sq_db;
- }
-
- ci_addr_base = dma_zalloc_coherent(&pdev->dev, CI_TABLE_SIZE(num_qps),
- &func_to_io->ci_dma_base,
- GFP_KERNEL);
- if (!ci_addr_base) {
- dev_err(&pdev->dev, "Failed to allocate CI area\n");
- err = -ENOMEM;
- goto err_ci_base;
- }
-
- func_to_io->ci_addr_base = ci_addr_base;
-
- for (i = 0; i < num_qps; i++) {
- err = init_qp(func_to_io, &func_to_io->qps[i], i,
- &sq_msix_entries[i], &rq_msix_entries[i]);
- if (err) {
- dev_err(&pdev->dev, "Failed to create QP %d\n", i);
- goto err_init_qp;
- }
- }
-
- err = write_qp_ctxts(func_to_io, base_qpn, num_qps);
- if (err) {
- dev_err(&pdev->dev, "Failed to init QP ctxts\n");
- goto err_write_qp_ctxts;
- }
-
- return 0;
-
-err_write_qp_ctxts:
-err_init_qp:
- for (j = 0; j < i; j++)
- destroy_qp(func_to_io, &func_to_io->qps[j]);
-
- dma_free_coherent(&pdev->dev, CI_TABLE_SIZE(num_qps),
- func_to_io->ci_addr_base, func_to_io->ci_dma_base);
-
-err_ci_base:
- devm_kfree(&pdev->dev, func_to_io->sq_db);
-
-err_sq_db:
- devm_kfree(&pdev->dev, func_to_io->rq_wq);
-
-err_rq_wq:
- devm_kfree(&pdev->dev, func_to_io->sq_wq);
-
-err_sq_wq:
- devm_kfree(&pdev->dev, func_to_io->qps);
- return err;
-}
-
-/**
- * hinic_io_destroy_qps - Destroy the IO Queue Pairs
- * @func_to_io: func to io channel that holds the IO components
- * @num_qps: number of queue pairs to destroy
- **/
-void hinic_io_destroy_qps(struct hinic_func_to_io *func_to_io, int num_qps)
-{
- struct hinic_hwif *hwif = func_to_io->hwif;
- struct pci_dev *pdev = hwif->pdev;
- size_t ci_table_size;
- int i;
-
- ci_table_size = CI_TABLE_SIZE(num_qps);
-
- for (i = 0; i < num_qps; i++)
- destroy_qp(func_to_io, &func_to_io->qps[i]);
-
- dma_free_coherent(&pdev->dev, ci_table_size, func_to_io->ci_addr_base,
- func_to_io->ci_dma_base);
-
- devm_kfree(&pdev->dev, func_to_io->sq_db);
-
- devm_kfree(&pdev->dev, func_to_io->rq_wq);
- devm_kfree(&pdev->dev, func_to_io->sq_wq);
-
- devm_kfree(&pdev->dev, func_to_io->qps);
-}
-
-/**
- * hinic_io_init - Initialize the IO components
- * @func_to_io: func to io channel that holds the IO components
- * @hwif: HW interface for accessing IO
- * @max_qps: maximum QPs in HW
- * @num_ceqs: number of completion event queues
- * @ceq_msix_entries: msix entries for ceqs
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_io_init(struct hinic_func_to_io *func_to_io,
- struct hinic_hwif *hwif, u16 max_qps, int num_ceqs,
- struct msix_entry *ceq_msix_entries)
-{
- struct pci_dev *pdev = hwif->pdev;
- enum hinic_cmdq_type cmdq, type;
- void __iomem *db_area;
- int err;
-
- func_to_io->hwif = hwif;
- func_to_io->qps = NULL;
- func_to_io->max_qps = max_qps;
-
- err = hinic_ceqs_init(&func_to_io->ceqs, hwif, num_ceqs,
- HINIC_DEFAULT_CEQ_LEN, HINIC_EQ_PAGE_SIZE,
- ceq_msix_entries);
- if (err) {
- dev_err(&pdev->dev, "Failed to init CEQs\n");
- return err;
- }
-
- err = hinic_wqs_alloc(&func_to_io->wqs, 2 * max_qps, hwif);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate WQS for IO\n");
- goto err_wqs_alloc;
- }
-
- func_to_io->db_base = pci_ioremap_bar(pdev, HINIC_PCI_DB_BAR);
- if (!func_to_io->db_base) {
- dev_err(&pdev->dev, "Failed to remap IO DB area\n");
- err = -ENOMEM;
- goto err_db_ioremap;
- }
-
- init_db_area_idx(&func_to_io->free_db_area);
-
- for (cmdq = HINIC_CMDQ_SYNC; cmdq < HINIC_MAX_CMDQ_TYPES; cmdq++) {
- db_area = get_db_area(func_to_io);
- if (IS_ERR(db_area)) {
- dev_err(&pdev->dev, "Failed to get cmdq db area\n");
- err = PTR_ERR(db_area);
- goto err_db_area;
- }
-
- func_to_io->cmdq_db_area[cmdq] = db_area;
- }
-
- err = hinic_init_cmdqs(&func_to_io->cmdqs, hwif,
- func_to_io->cmdq_db_area);
- if (err) {
- dev_err(&pdev->dev, "Failed to initialize cmdqs\n");
- goto err_init_cmdqs;
- }
-
- return 0;
-
-err_init_cmdqs:
-err_db_area:
- for (type = HINIC_CMDQ_SYNC; type < cmdq; type++)
- return_db_area(func_to_io, func_to_io->cmdq_db_area[type]);
-
- iounmap(func_to_io->db_base);
-
-err_db_ioremap:
- hinic_wqs_free(&func_to_io->wqs);
-
-err_wqs_alloc:
- hinic_ceqs_free(&func_to_io->ceqs);
- return err;
-}
-
-/**
- * hinic_io_free - Free the IO components
- * @func_to_io: func to io channel that holds the IO components
- **/
-void hinic_io_free(struct hinic_func_to_io *func_to_io)
-{
- enum hinic_cmdq_type cmdq;
-
- hinic_free_cmdqs(&func_to_io->cmdqs);
-
- for (cmdq = HINIC_CMDQ_SYNC; cmdq < HINIC_MAX_CMDQ_TYPES; cmdq++)
- return_db_area(func_to_io, func_to_io->cmdq_db_area[cmdq]);
-
- iounmap(func_to_io->db_base);
- hinic_wqs_free(&func_to_io->wqs);
- hinic_ceqs_free(&func_to_io->ceqs);
-}
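
get_db_area()/return_db_area() above implement a fixed pool of doorbell page indices as a ring: alloc_pos and return_pos only ever move forward and are wrapped with a power-of-two mask, while a semaphore serialises both paths. A user-space sketch of the same scheme, assuming a pthread mutex in place of the kernel semaphore and a small illustrative pool size (function names are paraphrased, not the driver's):

#include <assert.h>
#include <pthread.h>

#define DB_MAX_AREAS	8	/* power of two; the driver uses 1024 */

struct free_db_area {
	int db_idx[DB_MAX_AREAS];
	int alloc_pos;
	int return_pos;
	int num_free;
	pthread_mutex_t lock;	/* stands in for the kernel semaphore */
};

static void init_db_area(struct free_db_area *a)
{
	for (int i = 0; i < DB_MAX_AREAS; i++)
		a->db_idx[i] = i;
	a->alloc_pos = 0;
	a->return_pos = DB_MAX_AREAS;
	a->num_free = DB_MAX_AREAS;
	pthread_mutex_init(&a->lock, NULL);
}

static int get_db_idx(struct free_db_area *a)
{
	int idx = -1;

	pthread_mutex_lock(&a->lock);
	if (a->num_free > 0) {
		int pos = a->alloc_pos++ & (DB_MAX_AREAS - 1);

		idx = a->db_idx[pos];
		a->db_idx[pos] = -1;	/* mark the slot consumed */
		a->num_free--;
	}
	pthread_mutex_unlock(&a->lock);
	return idx;			/* -1 when the pool is exhausted */
}

static void return_db_idx(struct free_db_area *a, int idx)
{
	pthread_mutex_lock(&a->lock);
	int pos = a->return_pos++ & (DB_MAX_AREAS - 1);

	a->db_idx[pos] = idx;
	a->num_free++;
	pthread_mutex_unlock(&a->lock);
}

int main(void)
{
	struct free_db_area a;

	init_db_area(&a);
	int first = get_db_idx(&a);	/* takes index 0 from slot 0 */
	return_db_idx(&a, first);	/* lands back in slot 0 */
	assert(a.num_free == DB_MAX_AREAS);
	return 0;
}

Because returns always land in the slot just vacated by the oldest allocation, no free bitmap is needed; num_free alone guards against exhaustion.
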
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_io.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_io.h
deleted file mode 100644
index adb6417..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_io.h
+++ /dev/null
@@ -1,97 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_HW_IO_H
-#define HINIC_HW_IO_H
-
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/semaphore.h>
-#include <linux/sizes.h>
-
-#include "hinic_hw_if.h"
-#include "hinic_hw_eqs.h"
-#include "hinic_hw_wq.h"
-#include "hinic_hw_cmdq.h"
-#include "hinic_hw_qp.h"
-
-#define HINIC_DB_PAGE_SIZE SZ_4K
-#define HINIC_DB_SIZE SZ_4M
-
-#define HINIC_DB_MAX_AREAS (HINIC_DB_SIZE / HINIC_DB_PAGE_SIZE)
-
-enum hinic_db_type {
- HINIC_DB_CMDQ_TYPE,
- HINIC_DB_SQ_TYPE,
-};
-
-enum hinic_io_path {
- HINIC_CTRL_PATH,
- HINIC_DATA_PATH,
-};
-
-struct hinic_free_db_area {
- int db_idx[HINIC_DB_MAX_AREAS];
-
- int alloc_pos;
- int return_pos;
-
- int num_free;
-
- /* Lock for getting db area */
- struct semaphore idx_lock;
-};
-
-struct hinic_func_to_io {
- struct hinic_hwif *hwif;
-
- struct hinic_ceqs ceqs;
-
- struct hinic_wqs wqs;
-
- struct hinic_wq *sq_wq;
- struct hinic_wq *rq_wq;
-
- struct hinic_qp *qps;
- u16 max_qps;
-
- void __iomem **sq_db;
- void __iomem *db_base;
-
- void *ci_addr_base;
- dma_addr_t ci_dma_base;
-
- struct hinic_free_db_area free_db_area;
-
- void __iomem *cmdq_db_area[HINIC_MAX_CMDQ_TYPES];
-
- struct hinic_cmdqs cmdqs;
-};
-
-int hinic_io_create_qps(struct hinic_func_to_io *func_to_io,
- u16 base_qpn, int num_qps,
- struct msix_entry *sq_msix_entries,
- struct msix_entry *rq_msix_entries);
-
-void hinic_io_destroy_qps(struct hinic_func_to_io *func_to_io,
- int num_qps);
-
-int hinic_io_init(struct hinic_func_to_io *func_to_io,
- struct hinic_hwif *hwif, u16 max_qps, int num_ceqs,
- struct msix_entry *ceq_msix_entries);
-
-void hinic_io_free(struct hinic_func_to_io *func_to_io);
-
-#endif
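
The doorbell sizing above is worth spelling out: a 4 MB doorbell window (HINIC_DB_SIZE) split into 4 KB pages (HINIC_DB_PAGE_SIZE) yields 1024 independent areas, and the DB_IDX() macro in hinic_hw_io.c simply inverts the page-index-to-pointer mapping. A quick standalone check of both, using plain integers for the addresses (the base value is illustrative only):

#include <assert.h>

#define DB_PAGE_SIZE	4096UL
#define DB_SIZE		(4UL * 1024 * 1024)
#define DB_MAX_AREAS	(DB_SIZE / DB_PAGE_SIZE)

int main(void)
{
	unsigned long db_base = 0x100000UL;	/* illustrative base address */
	unsigned long db = db_base + 37 * DB_PAGE_SIZE;

	assert(DB_MAX_AREAS == 1024);
	assert((db - db_base) / DB_PAGE_SIZE == 37);	/* what DB_IDX() computes */
	return 0;
}
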
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
deleted file mode 100644
index 278dc13..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+++ /dev/null
@@ -1,597 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/errno.h>
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/semaphore.h>
-#include <linux/completion.h>
-#include <linux/slab.h>
-#include <asm/barrier.h>
-
-#include "hinic_hw_if.h"
-#include "hinic_hw_eqs.h"
-#include "hinic_hw_api_cmd.h"
-#include "hinic_hw_mgmt.h"
-#include "hinic_hw_dev.h"
-
-#define SYNC_MSG_ID_MASK 0x1FF
-
-#define SYNC_MSG_ID(pf_to_mgmt) ((pf_to_mgmt)->sync_msg_id)
-
-#define SYNC_MSG_ID_INC(pf_to_mgmt) (SYNC_MSG_ID(pf_to_mgmt) = \
- ((SYNC_MSG_ID(pf_to_mgmt) + 1) & \
- SYNC_MSG_ID_MASK))
-
-#define MSG_SZ_IS_VALID(in_size) ((in_size) <= MAX_MSG_LEN)
-
-#define MGMT_MSG_LEN_MIN 20
-#define MGMT_MSG_LEN_STEP 16
-#define MGMT_MSG_RSVD_FOR_DEV 8
-
-#define SEGMENT_LEN 48
-
-#define MAX_PF_MGMT_BUF_SIZE 2048
-
-/* Data should be SEG LEN size aligned */
-#define MAX_MSG_LEN 2016
-
-#define MSG_NOT_RESP 0xFFFF
-
-#define MGMT_MSG_TIMEOUT 1000
-
-#define mgmt_to_pfhwdev(pf_mgmt) \
- container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt)
-
-enum msg_segment_type {
- NOT_LAST_SEGMENT = 0,
- LAST_SEGMENT = 1,
-};
-
-enum mgmt_direction_type {
- MGMT_DIRECT_SEND = 0,
- MGMT_RESP = 1,
-};
-
-enum msg_ack_type {
- MSG_ACK = 0,
- MSG_NO_ACK = 1,
-};
-
-/**
- * hinic_register_mgmt_msg_cb - register msg handler for a msg from a module
- * @pf_to_mgmt: PF to MGMT channel
- * @mod: module in the chip that this handler will handle its messages
- * @handle: private data for the callback
- * @callback: the handler that will handle messages
- **/
-void hinic_register_mgmt_msg_cb(struct hinic_pf_to_mgmt *pf_to_mgmt,
- enum hinic_mod_type mod,
- void *handle,
- void (*callback)(void *handle,
- u8 cmd, void *buf_in,
- u16 in_size, void *buf_out,
- u16 *out_size))
-{
- struct hinic_mgmt_cb *mgmt_cb = &pf_to_mgmt->mgmt_cb[mod];
-
- mgmt_cb->cb = callback;
- mgmt_cb->handle = handle;
- mgmt_cb->state = HINIC_MGMT_CB_ENABLED;
-}
-
-/**
- * hinic_unregister_mgmt_msg_cb - unregister msg handler for a msg from a module
- * @pf_to_mgmt: PF to MGMT channel
- * @mod: module in the chip that this handler handles its messages
- **/
-void hinic_unregister_mgmt_msg_cb(struct hinic_pf_to_mgmt *pf_to_mgmt,
- enum hinic_mod_type mod)
-{
- struct hinic_mgmt_cb *mgmt_cb = &pf_to_mgmt->mgmt_cb[mod];
-
- mgmt_cb->state &= ~HINIC_MGMT_CB_ENABLED;
-
- while (mgmt_cb->state & HINIC_MGMT_CB_RUNNING)
- schedule();
-
- mgmt_cb->cb = NULL;
-}
-
-/**
- * prepare_header - prepare the header of the message
- * @pf_to_mgmt: PF to MGMT channel
- * @msg_len: the length of the message
- * @mod: module in the chip that will get the message
- * @ack_type: whether to ask for a response
- * @direction: the direction of the message
- * @cmd: command of the message
- * @msg_id: message id
- *
- * Return the prepared header value
- **/
-static u64 prepare_header(struct hinic_pf_to_mgmt *pf_to_mgmt,
- u16 msg_len, enum hinic_mod_type mod,
- enum msg_ack_type ack_type,
- enum mgmt_direction_type direction,
- u16 cmd, u16 msg_id)
-{
- struct hinic_hwif *hwif = pf_to_mgmt->hwif;
-
- return HINIC_MSG_HEADER_SET(msg_len, MSG_LEN) |
- HINIC_MSG_HEADER_SET(mod, MODULE) |
- HINIC_MSG_HEADER_SET(SEGMENT_LEN, SEG_LEN) |
- HINIC_MSG_HEADER_SET(ack_type, NO_ACK) |
- HINIC_MSG_HEADER_SET(0, ASYNC_MGMT_TO_PF) |
- HINIC_MSG_HEADER_SET(0, SEQID) |
- HINIC_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
- HINIC_MSG_HEADER_SET(direction, DIRECTION) |
- HINIC_MSG_HEADER_SET(cmd, CMD) |
- HINIC_MSG_HEADER_SET(HINIC_HWIF_PCI_INTF(hwif), PCI_INTF) |
- HINIC_MSG_HEADER_SET(HINIC_HWIF_PF_IDX(hwif), PF_IDX) |
- HINIC_MSG_HEADER_SET(msg_id, MSG_ID);
-}
-
-/**
- * prepare_mgmt_cmd - prepare the mgmt command
- * @mgmt_cmd: pointer to the command to prepare
- * @header: pointer of the header for the message
- * @msg: the data of the message
- * @msg_len: the length of the message
- **/
-static void prepare_mgmt_cmd(u8 *mgmt_cmd, u64 *header, u8 *msg, u16 msg_len)
-{
- memset(mgmt_cmd, 0, MGMT_MSG_RSVD_FOR_DEV);
-
- mgmt_cmd += MGMT_MSG_RSVD_FOR_DEV;
- memcpy(mgmt_cmd, header, sizeof(*header));
-
- mgmt_cmd += sizeof(*header);
- memcpy(mgmt_cmd, msg, msg_len);
-}
-
-/**
- * mgmt_msg_len - calculate the total message length
- * @msg_data_len: the length of the message data
- *
- * Return the total message length
- **/
-static u16 mgmt_msg_len(u16 msg_data_len)
-{
- /* RSVD + HEADER_SIZE + DATA_LEN */
- u16 msg_len = MGMT_MSG_RSVD_FOR_DEV + sizeof(u64) + msg_data_len;
-
- if (msg_len > MGMT_MSG_LEN_MIN)
- msg_len = MGMT_MSG_LEN_MIN +
- ALIGN((msg_len - MGMT_MSG_LEN_MIN),
- MGMT_MSG_LEN_STEP);
- else
- msg_len = MGMT_MSG_LEN_MIN;
-
- return msg_len;
-}
-
-/**
- * send_msg_to_mgmt - send message to mgmt by API CMD
- * @pf_to_mgmt: PF to MGMT channel
- * @mod: module in the chip that will get the message
- * @cmd: command of the message
- * @data: the msg data
- * @data_len: the msg data length
- * @ack_type: whether to ask for a response
- * @direction: the direction of the original message
- * @resp_msg_id: msg id to response for
- *
- * Return 0 - Success, negative - Failure
- **/
-static int send_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
- enum hinic_mod_type mod, u8 cmd,
- u8 *data, u16 data_len,
- enum msg_ack_type ack_type,
- enum mgmt_direction_type direction,
- u16 resp_msg_id)
-{
- struct hinic_api_cmd_chain *chain;
- u64 header;
- u16 msg_id;
-
- msg_id = SYNC_MSG_ID(pf_to_mgmt);
-
- if (direction == MGMT_RESP) {
- header = prepare_header(pf_to_mgmt, data_len, mod, ack_type,
- direction, cmd, resp_msg_id);
- } else {
- SYNC_MSG_ID_INC(pf_to_mgmt);
- header = prepare_header(pf_to_mgmt, data_len, mod, ack_type,
- direction, cmd, msg_id);
- }
-
- prepare_mgmt_cmd(pf_to_mgmt->sync_msg_buf, &header, data, data_len);
-
- chain = pf_to_mgmt->cmd_chain[HINIC_API_CMD_WRITE_TO_MGMT_CPU];
- return hinic_api_cmd_write(chain, HINIC_NODE_ID_MGMT,
- pf_to_mgmt->sync_msg_buf,
- mgmt_msg_len(data_len));
-}
-
-/**
- * msg_to_mgmt_sync - send sync message to mgmt
- * @pf_to_mgmt: PF to MGMT channel
- * @mod: module in the chip that will get the message
- * @cmd: command of the message
- * @buf_in: the msg data
- * @in_size: the msg data length
- * @buf_out: response
- * @out_size: response length
- * @direction: the direction of the original message
- * @resp_msg_id: msg id to response for
- *
- * Return 0 - Success, negative - Failure
- **/
-static int msg_to_mgmt_sync(struct hinic_pf_to_mgmt *pf_to_mgmt,
- enum hinic_mod_type mod, u8 cmd,
- u8 *buf_in, u16 in_size,
- u8 *buf_out, u16 *out_size,
- enum mgmt_direction_type direction,
- u16 resp_msg_id)
-{
- struct hinic_hwif *hwif = pf_to_mgmt->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_recv_msg *recv_msg;
- struct completion *recv_done;
- u16 msg_id;
- int err;
-
- /* Lock the sync_msg_buf */
- down(&pf_to_mgmt->sync_msg_lock);
-
- recv_msg = &pf_to_mgmt->recv_resp_msg_from_mgmt;
- recv_done = &recv_msg->recv_done;
-
- if (resp_msg_id == MSG_NOT_RESP)
- msg_id = SYNC_MSG_ID(pf_to_mgmt);
- else
- msg_id = resp_msg_id;
-
- init_completion(recv_done);
-
- err = send_msg_to_mgmt(pf_to_mgmt, mod, cmd, buf_in, in_size,
- MSG_ACK, direction, resp_msg_id);
- if (err) {
- dev_err(&pdev->dev, "Failed to send sync msg to mgmt\n");
- goto unlock_sync_msg;
- }
-
- if (!wait_for_completion_timeout(recv_done, MGMT_MSG_TIMEOUT)) {
- dev_err(&pdev->dev, "MGMT timeout, MSG id = %d\n", msg_id);
- err = -ETIMEDOUT;
- goto unlock_sync_msg;
- }
-
- smp_rmb(); /* verify reading after completion */
-
- if (recv_msg->msg_id != msg_id) {
- dev_err(&pdev->dev, "incorrect MSG for id = %d\n", msg_id);
- err = -EFAULT;
- goto unlock_sync_msg;
- }
-
- if ((buf_out) && (recv_msg->msg_len <= MAX_PF_MGMT_BUF_SIZE)) {
- memcpy(buf_out, recv_msg->msg, recv_msg->msg_len);
- *out_size = recv_msg->msg_len;
- }
-
-unlock_sync_msg:
- up(&pf_to_mgmt->sync_msg_lock);
- return err;
-}
-
-/**
- * msg_to_mgmt_async - send message to mgmt without response
- * @pf_to_mgmt: PF to MGMT channel
- * @mod: module in the chip that will get the message
- * @cmd: command of the message
- * @buf_in: the msg data
- * @in_size: the msg data length
- * @direction: the direction of the original message
- * @resp_msg_id: msg id to response for
- *
- * Return 0 - Success, negative - Failure
- **/
-static int msg_to_mgmt_async(struct hinic_pf_to_mgmt *pf_to_mgmt,
- enum hinic_mod_type mod, u8 cmd,
- u8 *buf_in, u16 in_size,
- enum mgmt_direction_type direction,
- u16 resp_msg_id)
-{
- int err;
-
- /* Lock the sync_msg_buf */
- down(&pf_to_mgmt->sync_msg_lock);
-
- err = send_msg_to_mgmt(pf_to_mgmt, mod, cmd, buf_in, in_size,
- MSG_NO_ACK, direction, resp_msg_id);
-
- up(&pf_to_mgmt->sync_msg_lock);
- return err;
-}
-
-/**
- * hinic_msg_to_mgmt - send message to mgmt
- * @pf_to_mgmt: PF to MGMT channel
- * @mod: module in the chip that will get the message
- * @cmd: command of the message
- * @buf_in: the msg data
- * @in_size: the msg data length
- * @buf_out: response
- * @out_size: returned response length
- * @sync: sync msg or async msg
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
- enum hinic_mod_type mod, u8 cmd,
- void *buf_in, u16 in_size, void *buf_out, u16 *out_size,
- enum hinic_mgmt_msg_type sync)
-{
- struct hinic_hwif *hwif = pf_to_mgmt->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- if (sync != HINIC_MGMT_MSG_SYNC) {
- dev_err(&pdev->dev, "Invalid MGMT msg type\n");
- return -EINVAL;
- }
-
- if (!MSG_SZ_IS_VALID(in_size)) {
- dev_err(&pdev->dev, "Invalid MGMT msg buffer size\n");
- return -EINVAL;
- }
-
- return msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
- buf_out, out_size, MGMT_DIRECT_SEND,
- MSG_NOT_RESP);
-}
-
-/**
- * mgmt_recv_msg_handler - handler for a message from mgmt cpu
- * @pf_to_mgmt: PF to MGMT channel
- * @recv_msg: received message details
- **/
-static void mgmt_recv_msg_handler(struct hinic_pf_to_mgmt *pf_to_mgmt,
- struct hinic_recv_msg *recv_msg)
-{
- struct hinic_hwif *hwif = pf_to_mgmt->hwif;
- struct pci_dev *pdev = hwif->pdev;
- u8 *buf_out = recv_msg->buf_out;
- struct hinic_mgmt_cb *mgmt_cb;
- unsigned long cb_state;
- u16 out_size = 0;
-
- if (recv_msg->mod >= HINIC_MOD_MAX) {
- dev_err(&pdev->dev, "Unknown MGMT MSG module = %d\n",
- recv_msg->mod);
- return;
- }
-
- mgmt_cb = &pf_to_mgmt->mgmt_cb[recv_msg->mod];
-
- cb_state = cmpxchg(&mgmt_cb->state,
- HINIC_MGMT_CB_ENABLED,
- HINIC_MGMT_CB_ENABLED | HINIC_MGMT_CB_RUNNING);
-
- if ((cb_state == HINIC_MGMT_CB_ENABLED) && (mgmt_cb->cb))
- mgmt_cb->cb(mgmt_cb->handle, recv_msg->cmd,
- recv_msg->msg, recv_msg->msg_len,
- buf_out, &out_size);
- else
- dev_err(&pdev->dev, "No MGMT msg handler, mod = %d\n",
- recv_msg->mod);
-
- mgmt_cb->state &= ~HINIC_MGMT_CB_RUNNING;
-
- if (!recv_msg->async_mgmt_to_pf)
- /* MGMT sent sync msg, send the response */
- msg_to_mgmt_async(pf_to_mgmt, recv_msg->mod, recv_msg->cmd,
- buf_out, out_size, MGMT_RESP,
- recv_msg->msg_id);
-}
-
-/**
- * mgmt_resp_msg_handler - handler for a response message from mgmt cpu
- * @pf_to_mgmt: PF to MGMT channel
- * @recv_msg: received message details
- **/
-static void mgmt_resp_msg_handler(struct hinic_pf_to_mgmt *pf_to_mgmt,
- struct hinic_recv_msg *recv_msg)
-{
- wmb(); /* verify writing all, before reading */
-
- complete(&recv_msg->recv_done);
-}
-
-/**
- * recv_mgmt_msg_handler - handler for a message from mgmt cpu
- * @pf_to_mgmt: PF to MGMT channel
- * @header: the header of the message
- * @recv_msg: received message details
- **/
-static void recv_mgmt_msg_handler(struct hinic_pf_to_mgmt *pf_to_mgmt,
- u64 *header, struct hinic_recv_msg *recv_msg)
-{
- struct hinic_hwif *hwif = pf_to_mgmt->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int seq_id, seg_len;
- u8 *msg_body;
-
- seq_id = HINIC_MSG_HEADER_GET(*header, SEQID);
- seg_len = HINIC_MSG_HEADER_GET(*header, SEG_LEN);
-
- if (seq_id >= (MAX_MSG_LEN / SEGMENT_LEN)) {
- dev_err(&pdev->dev, "recv big mgmt msg\n");
- return;
- }
-
- msg_body = (u8 *)header + sizeof(*header);
- memcpy(recv_msg->msg + seq_id * SEGMENT_LEN, msg_body, seg_len);
-
- if (!HINIC_MSG_HEADER_GET(*header, LAST))
- return;
-
- recv_msg->cmd = HINIC_MSG_HEADER_GET(*header, CMD);
- recv_msg->mod = HINIC_MSG_HEADER_GET(*header, MODULE);
- recv_msg->async_mgmt_to_pf = HINIC_MSG_HEADER_GET(*header,
- ASYNC_MGMT_TO_PF);
- recv_msg->msg_len = HINIC_MSG_HEADER_GET(*header, MSG_LEN);
- recv_msg->msg_id = HINIC_MSG_HEADER_GET(*header, MSG_ID);
-
- if (HINIC_MSG_HEADER_GET(*header, DIRECTION) == MGMT_RESP)
- mgmt_resp_msg_handler(pf_to_mgmt, recv_msg);
- else
- mgmt_recv_msg_handler(pf_to_mgmt, recv_msg);
-}
-
-/**
- * mgmt_msg_aeqe_handler - handler for a mgmt message event
- * @handle: PF to MGMT channel
- * @data: the header of the message
- * @size: unused
- **/
-static void mgmt_msg_aeqe_handler(void *handle, void *data, u8 size)
-{
- struct hinic_pf_to_mgmt *pf_to_mgmt = handle;
- struct hinic_recv_msg *recv_msg;
- u64 *header = (u64 *)data;
-
- recv_msg = HINIC_MSG_HEADER_GET(*header, DIRECTION) ==
- MGMT_DIRECT_SEND ?
- &pf_to_mgmt->recv_msg_from_mgmt :
- &pf_to_mgmt->recv_resp_msg_from_mgmt;
-
- recv_mgmt_msg_handler(pf_to_mgmt, header, recv_msg);
-}
-
-/**
- * alloc_recv_msg - allocate receive message memory
- * @pf_to_mgmt: PF to MGMT channel
- * @recv_msg: pointer that will hold the allocated data
- *
- * Return 0 - Success, negative - Failure
- **/
-static int alloc_recv_msg(struct hinic_pf_to_mgmt *pf_to_mgmt,
- struct hinic_recv_msg *recv_msg)
-{
- struct hinic_hwif *hwif = pf_to_mgmt->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- recv_msg->msg = devm_kzalloc(&pdev->dev, MAX_PF_MGMT_BUF_SIZE,
- GFP_KERNEL);
- if (!recv_msg->msg)
- return -ENOMEM;
-
- recv_msg->buf_out = devm_kzalloc(&pdev->dev, MAX_PF_MGMT_BUF_SIZE,
- GFP_KERNEL);
- if (!recv_msg->buf_out)
- return -ENOMEM;
-
- return 0;
-}
-
-/**
- * alloc_msg_buf - allocate all the message buffers of PF to MGMT channel
- * @pf_to_mgmt: PF to MGMT channel
- *
- * Return 0 - Success, negative - Failure
- **/
-static int alloc_msg_buf(struct hinic_pf_to_mgmt *pf_to_mgmt)
-{
- struct hinic_hwif *hwif = pf_to_mgmt->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int err;
-
- err = alloc_recv_msg(pf_to_mgmt,
- &pf_to_mgmt->recv_msg_from_mgmt);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate recv msg\n");
- return err;
- }
-
- err = alloc_recv_msg(pf_to_mgmt,
- &pf_to_mgmt->recv_resp_msg_from_mgmt);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate resp recv msg\n");
- return err;
- }
-
- pf_to_mgmt->sync_msg_buf = devm_kzalloc(&pdev->dev,
- MAX_PF_MGMT_BUF_SIZE,
- GFP_KERNEL);
- if (!pf_to_mgmt->sync_msg_buf)
- return -ENOMEM;
-
- return 0;
-}
-
-/**
- * hinic_pf_to_mgmt_init - initialize PF to MGMT channel
- * @pf_to_mgmt: PF to MGMT channel
- * @hwif: HW interface the PF to MGMT will use for accessing HW
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt,
- struct hinic_hwif *hwif)
-{
- struct hinic_pfhwdev *pfhwdev = mgmt_to_pfhwdev(pf_to_mgmt);
- struct hinic_hwdev *hwdev = &pfhwdev->hwdev;
- struct pci_dev *pdev = hwif->pdev;
- int err;
-
- pf_to_mgmt->hwif = hwif;
-
- sema_init(&pf_to_mgmt->sync_msg_lock, 1);
- pf_to_mgmt->sync_msg_id = 0;
-
- err = alloc_msg_buf(pf_to_mgmt);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate msg buffers\n");
- return err;
- }
-
- err = hinic_api_cmd_init(pf_to_mgmt->cmd_chain, hwif);
- if (err) {
- dev_err(&pdev->dev, "Failed to initialize cmd chains\n");
- return err;
- }
-
- hinic_aeq_register_hw_cb(&hwdev->aeqs, HINIC_MSG_FROM_MGMT_CPU,
- pf_to_mgmt,
- mgmt_msg_aeqe_handler);
- return 0;
-}
-
-/**
- * hinic_pf_to_mgmt_free - free PF to MGMT channel
- * @pf_to_mgmt: PF to MGMT channel
- **/
-void hinic_pf_to_mgmt_free(struct hinic_pf_to_mgmt *pf_to_mgmt)
-{
- struct hinic_pfhwdev *pfhwdev = mgmt_to_pfhwdev(pf_to_mgmt);
- struct hinic_hwdev *hwdev = &pfhwdev->hwdev;
-
- hinic_aeq_unregister_hw_cb(&hwdev->aeqs, HINIC_MSG_FROM_MGMT_CPU);
- hinic_api_cmd_free(pf_to_mgmt->cmd_chain);
-}
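
The message-length rule in mgmt_msg_len() above deserves a worked example: the wire length is 8 reserved bytes plus an 8-byte header plus the payload, and anything over the 20-byte minimum grows in 16-byte steps. A standalone restatement of that arithmetic, with the constants renamed locally and ALIGN_UP as a stand-in for the kernel's ALIGN():

#include <assert.h>
#include <stdint.h>

#define MSG_RSVD	8	/* MGMT_MSG_RSVD_FOR_DEV */
#define MSG_HDR		8	/* sizeof(u64) message header */
#define MSG_LEN_MIN	20	/* MGMT_MSG_LEN_MIN */
#define MSG_LEN_STEP	16	/* MGMT_MSG_LEN_STEP */

#define ALIGN_UP(x, a)	(((x) + (a) - 1) / (a) * (a))

static uint16_t total_msg_len(uint16_t data_len)
{
	uint16_t len = MSG_RSVD + MSG_HDR + data_len;

	if (len > MSG_LEN_MIN)
		len = MSG_LEN_MIN + ALIGN_UP(len - MSG_LEN_MIN, MSG_LEN_STEP);
	else
		len = MSG_LEN_MIN;
	return len;
}

int main(void)
{
	assert(total_msg_len(0) == 20);		/* 16 bytes of overhead, padded up */
	assert(total_msg_len(4) == 20);		/* exactly the minimum */
	assert(total_msg_len(5) == 36);		/* 21 -> 20 + ALIGN_UP(1, 16) */
	assert(total_msg_len(30) == 52);	/* 46 -> 20 + ALIGN_UP(26, 16) */
	return 0;
}
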
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_qp.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_qp.c
deleted file mode 100644
index cb23962..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_qp.c
+++ /dev/null
@@ -1,907 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/dma-mapping.h>
-#include <linux/vmalloc.h>
-#include <linux/errno.h>
-#include <linux/sizes.h>
-#include <linux/atomic.h>
-#include <linux/skbuff.h>
-#include <linux/io.h>
-#include <asm/barrier.h>
-#include <asm/byteorder.h>
-
-#include "hinic_common.h"
-#include "hinic_hw_if.h"
-#include "hinic_hw_wqe.h"
-#include "hinic_hw_wq.h"
-#include "hinic_hw_qp_ctxt.h"
-#include "hinic_hw_qp.h"
-#include "hinic_hw_io.h"
-
-#define SQ_DB_OFF SZ_2K
-
-/* The number of cache lines to prefetch until the threshold state */
-#define WQ_PREFETCH_MAX 2
-/* The number of cache lines to prefetch after the threshold state */
-#define WQ_PREFETCH_MIN 1
-/* Threshold state */
-#define WQ_PREFETCH_THRESHOLD 256
-
-/* sizes of the SQ/RQ ctxt */
-#define Q_CTXT_SIZE 48
-#define CTXT_RSVD 240
-
-#define SQ_CTXT_OFFSET(max_sqs, max_rqs, q_id) \
- (((max_rqs) + (max_sqs)) * CTXT_RSVD + (q_id) * Q_CTXT_SIZE)
-
-#define RQ_CTXT_OFFSET(max_sqs, max_rqs, q_id) \
- (((max_rqs) + (max_sqs)) * CTXT_RSVD + \
- (max_sqs + (q_id)) * Q_CTXT_SIZE)
-
-#define SIZE_16BYTES(size) (ALIGN(size, 16) >> 4)
-#define SIZE_8BYTES(size) (ALIGN(size, 8) >> 3)
-#define SECT_SIZE_FROM_8BYTES(size) ((size) << 3)
-
-#define SQ_DB_PI_HI_SHIFT 8
-#define SQ_DB_PI_HI(prod_idx) ((prod_idx) >> SQ_DB_PI_HI_SHIFT)
-
-#define SQ_DB_PI_LOW_MASK 0xFF
-#define SQ_DB_PI_LOW(prod_idx) ((prod_idx) & SQ_DB_PI_LOW_MASK)
-
-#define SQ_DB_ADDR(sq, pi) ((u64 *)((sq)->db_base) + SQ_DB_PI_LOW(pi))
-
-#define SQ_MASKED_IDX(sq, idx) ((idx) & (sq)->wq->mask)
-#define RQ_MASKED_IDX(rq, idx) ((idx) & (rq)->wq->mask)
-
-#define TX_MAX_MSS_DEFAULT 0x3E00
-
-enum sq_wqe_type {
- SQ_NORMAL_WQE = 0,
-};
-
-enum rq_completion_fmt {
- RQ_COMPLETE_SGE = 1
-};
-
-void hinic_qp_prepare_header(struct hinic_qp_ctxt_header *qp_ctxt_hdr,
- enum hinic_qp_ctxt_type ctxt_type,
- u16 num_queues, u16 max_queues)
-{
- u16 max_sqs = max_queues;
- u16 max_rqs = max_queues;
-
- qp_ctxt_hdr->num_queues = num_queues;
- qp_ctxt_hdr->queue_type = ctxt_type;
-
- if (ctxt_type == HINIC_QP_CTXT_TYPE_SQ)
- qp_ctxt_hdr->addr_offset = SQ_CTXT_OFFSET(max_sqs, max_rqs, 0);
- else
- qp_ctxt_hdr->addr_offset = RQ_CTXT_OFFSET(max_sqs, max_rqs, 0);
-
- qp_ctxt_hdr->addr_offset = SIZE_16BYTES(qp_ctxt_hdr->addr_offset);
-
- hinic_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr));
-}
-
-void hinic_sq_prepare_ctxt(struct hinic_sq_ctxt *sq_ctxt,
- struct hinic_sq *sq, u16 global_qid)
-{
- u32 wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo;
- u64 wq_page_addr, wq_page_pfn, wq_block_pfn;
- u16 pi_start, ci_start;
- struct hinic_wq *wq;
-
- wq = sq->wq;
- ci_start = atomic_read(&wq->cons_idx);
- pi_start = atomic_read(&wq->prod_idx);
-
- /* Read the first page paddr from the WQ page paddr ptrs */
- wq_page_addr = be64_to_cpu(*wq->block_vaddr);
-
- wq_page_pfn = HINIC_WQ_PAGE_PFN(wq_page_addr);
- wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
- wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
-
- wq_block_pfn = HINIC_WQ_BLOCK_PFN(wq->block_paddr);
- wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
- wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
-
- sq_ctxt->ceq_attr = HINIC_SQ_CTXT_CEQ_ATTR_SET(global_qid,
- GLOBAL_SQ_ID) |
- HINIC_SQ_CTXT_CEQ_ATTR_SET(0, EN);
-
- sq_ctxt->ci_wrapped = HINIC_SQ_CTXT_CI_SET(ci_start, IDX) |
- HINIC_SQ_CTXT_CI_SET(1, WRAPPED);
-
- sq_ctxt->wq_hi_pfn_pi =
- HINIC_SQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
- HINIC_SQ_CTXT_WQ_PAGE_SET(pi_start, PI);
-
- sq_ctxt->wq_lo_pfn = wq_page_pfn_lo;
-
- sq_ctxt->pref_cache =
- HINIC_SQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
- HINIC_SQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
- HINIC_SQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
-
- sq_ctxt->pref_wrapped = 1;
-
- sq_ctxt->pref_wq_hi_pfn_ci =
- HINIC_SQ_CTXT_PREF_SET(ci_start, CI) |
- HINIC_SQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_HI_PFN);
-
- sq_ctxt->pref_wq_lo_pfn = wq_page_pfn_lo;
-
- sq_ctxt->wq_block_hi_pfn =
- HINIC_SQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, HI_PFN);
-
- sq_ctxt->wq_block_lo_pfn = wq_block_pfn_lo;
-
- hinic_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt));
-}
-
-void hinic_rq_prepare_ctxt(struct hinic_rq_ctxt *rq_ctxt,
- struct hinic_rq *rq, u16 global_qid)
-{
- u32 wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo;
- u64 wq_page_addr, wq_page_pfn, wq_block_pfn;
- u16 pi_start, ci_start;
- struct hinic_wq *wq;
-
- wq = rq->wq;
- ci_start = atomic_read(&wq->cons_idx);
- pi_start = atomic_read(&wq->prod_idx);
-
- /* Read the first page paddr from the WQ page paddr ptrs */
- wq_page_addr = be64_to_cpu(*wq->block_vaddr);
-
- wq_page_pfn = HINIC_WQ_PAGE_PFN(wq_page_addr);
- wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
- wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
-
- wq_block_pfn = HINIC_WQ_BLOCK_PFN(wq->block_paddr);
- wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
- wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
-
- rq_ctxt->ceq_attr = HINIC_RQ_CTXT_CEQ_ATTR_SET(0, EN) |
- HINIC_RQ_CTXT_CEQ_ATTR_SET(1, WRAPPED);
-
- rq_ctxt->pi_intr_attr = HINIC_RQ_CTXT_PI_SET(pi_start, IDX) |
- HINIC_RQ_CTXT_PI_SET(rq->msix_entry, INTR);
-
- rq_ctxt->wq_hi_pfn_ci = HINIC_RQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi,
- HI_PFN) |
- HINIC_RQ_CTXT_WQ_PAGE_SET(ci_start, CI);
-
- rq_ctxt->wq_lo_pfn = wq_page_pfn_lo;
-
- rq_ctxt->pref_cache =
- HINIC_RQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
- HINIC_RQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
- HINIC_RQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
-
- rq_ctxt->pref_wrapped = 1;
-
- rq_ctxt->pref_wq_hi_pfn_ci =
- HINIC_RQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_HI_PFN) |
- HINIC_RQ_CTXT_PREF_SET(ci_start, CI);
-
- rq_ctxt->pref_wq_lo_pfn = wq_page_pfn_lo;
-
- rq_ctxt->pi_paddr_hi = upper_32_bits(rq->pi_dma_addr);
- rq_ctxt->pi_paddr_lo = lower_32_bits(rq->pi_dma_addr);
-
- rq_ctxt->wq_block_hi_pfn =
- HINIC_RQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, HI_PFN);
-
- rq_ctxt->wq_block_lo_pfn = wq_block_pfn_lo;
-
- hinic_cpu_to_be32(rq_ctxt, sizeof(*rq_ctxt));
-}
-
-/**
- * alloc_sq_skb_arr - allocate sq array for saved skb
- * @sq: HW Send Queue
- *
- * Return 0 - Success, negative - Failure
- **/
-static int alloc_sq_skb_arr(struct hinic_sq *sq)
-{
- struct hinic_wq *wq = sq->wq;
- size_t skb_arr_size;
-
- skb_arr_size = wq->q_depth * sizeof(*sq->saved_skb);
- sq->saved_skb = vzalloc(skb_arr_size);
- if (!sq->saved_skb)
- return -ENOMEM;
-
- return 0;
-}
-
-/**
- * free_sq_skb_arr - free sq array for saved skb
- * @sq: HW Send Queue
- **/
-static void free_sq_skb_arr(struct hinic_sq *sq)
-{
- vfree(sq->saved_skb);
-}
-
-/**
- * alloc_rq_skb_arr - allocate rq array for saved skb
- * @rq: HW Receive Queue
- *
- * Return 0 - Success, negative - Failure
- **/
-static int alloc_rq_skb_arr(struct hinic_rq *rq)
-{
- struct hinic_wq *wq = rq->wq;
- size_t skb_arr_size;
-
- skb_arr_size = wq->q_depth * sizeof(*rq->saved_skb);
- rq->saved_skb = vzalloc(skb_arr_size);
- if (!rq->saved_skb)
- return -ENOMEM;
-
- return 0;
-}
-
-/**
- * free_rq_skb_arr - free rq array for saved skb
- * @rq: HW Receive Queue
- **/
-static void free_rq_skb_arr(struct hinic_rq *rq)
-{
- vfree(rq->saved_skb);
-}
-
-/**
- * hinic_init_sq - Initialize HW Send Queue
- * @sq: HW Send Queue
- * @hwif: HW Interface for accessing HW
- * @wq: Work Queue for the data of the SQ
- * @entry: msix entry for sq
- * @ci_addr: address for reading the current HW consumer index
- * @ci_dma_addr: dma address for reading the current HW consumer index
- * @db_base: doorbell base address
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_init_sq(struct hinic_sq *sq, struct hinic_hwif *hwif,
- struct hinic_wq *wq, struct msix_entry *entry,
- void *ci_addr, dma_addr_t ci_dma_addr,
- void __iomem *db_base)
-{
- sq->hwif = hwif;
-
- sq->wq = wq;
-
- sq->irq = entry->vector;
- sq->msix_entry = entry->entry;
-
- sq->hw_ci_addr = ci_addr;
- sq->hw_ci_dma_addr = ci_dma_addr;
-
- sq->db_base = db_base + SQ_DB_OFF;
-
- return alloc_sq_skb_arr(sq);
-}
-
-/**
- * hinic_clean_sq - Clean HW Send Queue's Resources
- * @sq: Send Queue
- **/
-void hinic_clean_sq(struct hinic_sq *sq)
-{
- free_sq_skb_arr(sq);
-}
-
-/**
- * alloc_rq_cqe - allocate rq completion queue elements
- * @rq: HW Receive Queue
- *
- * Return 0 - Success, negative - Failure
- **/
-static int alloc_rq_cqe(struct hinic_rq *rq)
-{
- struct hinic_hwif *hwif = rq->hwif;
- struct pci_dev *pdev = hwif->pdev;
- size_t cqe_dma_size, cqe_size;
- struct hinic_wq *wq = rq->wq;
- int j, i;
-
- cqe_size = wq->q_depth * sizeof(*rq->cqe);
- rq->cqe = vzalloc(cqe_size);
- if (!rq->cqe)
- return -ENOMEM;
-
- cqe_dma_size = wq->q_depth * sizeof(*rq->cqe_dma);
- rq->cqe_dma = vzalloc(cqe_dma_size);
- if (!rq->cqe_dma)
- goto err_cqe_dma_arr_alloc;
-
- for (i = 0; i < wq->q_depth; i++) {
- rq->cqe[i] = dma_zalloc_coherent(&pdev->dev,
- sizeof(*rq->cqe[i]),
- &rq->cqe_dma[i], GFP_KERNEL);
- if (!rq->cqe[i])
- goto err_cqe_alloc;
- }
-
- return 0;
-
-err_cqe_alloc:
- for (j = 0; j < i; j++)
- dma_free_coherent(&pdev->dev, sizeof(*rq->cqe[j]), rq->cqe[j],
- rq->cqe_dma[j]);
-
- vfree(rq->cqe_dma);
-
-err_cqe_dma_arr_alloc:
- vfree(rq->cqe);
- return -ENOMEM;
-}
-
-/**
- * free_rq_cqe - free rq completion queue elements
- * @rq: HW Receive Queue
- **/
-static void free_rq_cqe(struct hinic_rq *rq)
-{
- struct hinic_hwif *hwif = rq->hwif;
- struct pci_dev *pdev = hwif->pdev;
- struct hinic_wq *wq = rq->wq;
- int i;
-
- for (i = 0; i < wq->q_depth; i++)
- dma_free_coherent(&pdev->dev, sizeof(*rq->cqe[i]), rq->cqe[i],
- rq->cqe_dma[i]);
-
- vfree(rq->cqe_dma);
- vfree(rq->cqe);
-}
-
-/**
- * hinic_init_rq - Initialize HW Receive Queue
- * @rq: HW Receive Queue
- * @hwif: HW Interface for accessing HW
- * @wq: Work Queue for the data of the RQ
- * @entry: msix entry for rq
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_init_rq(struct hinic_rq *rq, struct hinic_hwif *hwif,
- struct hinic_wq *wq, struct msix_entry *entry)
-{
- struct pci_dev *pdev = hwif->pdev;
- size_t pi_size;
- int err;
-
- rq->hwif = hwif;
-
- rq->wq = wq;
-
- rq->irq = entry->vector;
- rq->msix_entry = entry->entry;
-
- rq->buf_sz = HINIC_RX_BUF_SZ;
-
- err = alloc_rq_skb_arr(rq);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate rq priv data\n");
- return err;
- }
-
- err = alloc_rq_cqe(rq);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate rq cqe\n");
- goto err_alloc_rq_cqe;
- }
-
- /* HW requirements: Must be at least 32 bit */
- pi_size = ALIGN(sizeof(*rq->pi_virt_addr), sizeof(u32));
- rq->pi_virt_addr = dma_zalloc_coherent(&pdev->dev, pi_size,
- &rq->pi_dma_addr, GFP_KERNEL);
- if (!rq->pi_virt_addr) {
- dev_err(&pdev->dev, "Failed to allocate PI address\n");
- err = -ENOMEM;
- goto err_pi_virt;
- }
-
- return 0;
-
-err_pi_virt:
- free_rq_cqe(rq);
-
-err_alloc_rq_cqe:
- free_rq_skb_arr(rq);
- return err;
-}
-
-/**
- * hinic_clean_rq - Clean HW Receive Queue's Resources
- * @rq: HW Receive Queue
- **/
-void hinic_clean_rq(struct hinic_rq *rq)
-{
- struct hinic_hwif *hwif = rq->hwif;
- struct pci_dev *pdev = hwif->pdev;
- size_t pi_size;
-
- pi_size = ALIGN(sizeof(*rq->pi_virt_addr), sizeof(u32));
- dma_free_coherent(&pdev->dev, pi_size, rq->pi_virt_addr,
- rq->pi_dma_addr);
-
- free_rq_cqe(rq);
- free_rq_skb_arr(rq);
-}
-
-/**
- * hinic_get_sq_free_wqebbs - return number of free wqebbs for use
- * @sq: send queue
- *
- * Return number of free wqebbs
- **/
-int hinic_get_sq_free_wqebbs(struct hinic_sq *sq)
-{
- struct hinic_wq *wq = sq->wq;
-
- return atomic_read(&wq->delta) - 1;
-}
-
-/**
- * hinic_get_rq_free_wqebbs - return number of free wqebbs for use
- * @rq: recv queue
- *
- * Return number of free wqebbs
- **/
-int hinic_get_rq_free_wqebbs(struct hinic_rq *rq)
-{
- struct hinic_wq *wq = rq->wq;
-
- return atomic_read(&wq->delta) - 1;
-}
-
-static void sq_prepare_ctrl(struct hinic_sq_ctrl *ctrl, u16 prod_idx,
- int nr_descs)
-{
- u32 ctrl_size, task_size, bufdesc_size;
-
- ctrl_size = SIZE_8BYTES(sizeof(struct hinic_sq_ctrl));
- task_size = SIZE_8BYTES(sizeof(struct hinic_sq_task));
- bufdesc_size = nr_descs * sizeof(struct hinic_sq_bufdesc);
- bufdesc_size = SIZE_8BYTES(bufdesc_size);
-
- ctrl->ctrl_info = HINIC_SQ_CTRL_SET(bufdesc_size, BUFDESC_SECT_LEN) |
- HINIC_SQ_CTRL_SET(task_size, TASKSECT_LEN) |
- HINIC_SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
- HINIC_SQ_CTRL_SET(ctrl_size, LEN);
-
- ctrl->queue_info = HINIC_SQ_CTRL_SET(TX_MAX_MSS_DEFAULT,
- QUEUE_INFO_MSS);
-}
-
-static void sq_prepare_task(struct hinic_sq_task *task)
-{
- task->pkt_info0 =
- HINIC_SQ_TASK_INFO0_SET(0, L2HDR_LEN) |
- HINIC_SQ_TASK_INFO0_SET(HINIC_L4_OFF_DISABLE, L4_OFFLOAD) |
- HINIC_SQ_TASK_INFO0_SET(HINIC_OUTER_L3TYPE_UNKNOWN,
- INNER_L3TYPE) |
- HINIC_SQ_TASK_INFO0_SET(HINIC_VLAN_OFF_DISABLE,
- VLAN_OFFLOAD) |
- HINIC_SQ_TASK_INFO0_SET(HINIC_PKT_NOT_PARSED, PARSE_FLAG);
-
- task->pkt_info1 =
- HINIC_SQ_TASK_INFO1_SET(HINIC_MEDIA_UNKNOWN, MEDIA_TYPE) |
- HINIC_SQ_TASK_INFO1_SET(0, INNER_L4_LEN) |
- HINIC_SQ_TASK_INFO1_SET(0, INNER_L3_LEN);
-
- task->pkt_info2 =
- HINIC_SQ_TASK_INFO2_SET(0, TUNNEL_L4_LEN) |
- HINIC_SQ_TASK_INFO2_SET(0, OUTER_L3_LEN) |
- HINIC_SQ_TASK_INFO2_SET(HINIC_TUNNEL_L4TYPE_UNKNOWN,
- TUNNEL_L4TYPE) |
- HINIC_SQ_TASK_INFO2_SET(HINIC_OUTER_L3TYPE_UNKNOWN,
- OUTER_L3TYPE);
-
- task->ufo_v6_identify = 0;
-
- task->pkt_info4 = HINIC_SQ_TASK_INFO4_SET(HINIC_L2TYPE_ETH, L2TYPE);
-
- task->zero_pad = 0;
-}
-
-/**
- * hinic_sq_prepare_wqe - prepare wqe before insert to the queue
- * @sq: send queue
- * @prod_idx: pi value
- * @sq_wqe: wqe to prepare
- * @sges: sges for use by the wqe for send for buf addresses
- * @nr_sges: number of sges
- **/
-void hinic_sq_prepare_wqe(struct hinic_sq *sq, u16 prod_idx,
- struct hinic_sq_wqe *sq_wqe, struct hinic_sge *sges,
- int nr_sges)
-{
- int i;
-
- sq_prepare_ctrl(&sq_wqe->ctrl, prod_idx, nr_sges);
-
- sq_prepare_task(&sq_wqe->task);
-
- for (i = 0; i < nr_sges; i++)
- sq_wqe->buf_descs[i].sge = sges[i];
-}
-
-/**
- * sq_prepare_db - prepare doorbell to write
- * @sq: send queue
- * @prod_idx: pi value for the doorbell
- * @cos: cos of the doorbell
- *
- * Return db value
- **/
-static u32 sq_prepare_db(struct hinic_sq *sq, u16 prod_idx, unsigned int cos)
-{
- struct hinic_qp *qp = container_of(sq, struct hinic_qp, sq);
- u8 hi_prod_idx = SQ_DB_PI_HI(SQ_MASKED_IDX(sq, prod_idx));
-
- /* Data should be written to HW in Big Endian Format */
- return cpu_to_be32(HINIC_SQ_DB_INFO_SET(hi_prod_idx, PI_HI) |
- HINIC_SQ_DB_INFO_SET(HINIC_DB_SQ_TYPE, TYPE) |
- HINIC_SQ_DB_INFO_SET(HINIC_DATA_PATH, PATH) |
- HINIC_SQ_DB_INFO_SET(cos, COS) |
- HINIC_SQ_DB_INFO_SET(qp->q_id, QID));
-}
-
-/**
- * hinic_sq_write_db - write doorbell
- * @sq: send queue
- * @prod_idx: pi value for the doorbell
- * @wqe_size: wqe size
- * @cos: cos of the wqe
- **/
-void hinic_sq_write_db(struct hinic_sq *sq, u16 prod_idx, unsigned int wqe_size,
- unsigned int cos)
-{
- struct hinic_wq *wq = sq->wq;
-
- /* increment prod_idx to the next */
- prod_idx += ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size;
-
- wmb(); /* Write all before the doorbell */
-
- writel(sq_prepare_db(sq, prod_idx, cos), SQ_DB_ADDR(sq, prod_idx));
-}
-
-/**
- * hinic_sq_get_wqe - get wqe ptr in the current pi and update the pi
- * @sq: sq to get wqe from
- * @wqe_size: wqe size
- * @prod_idx: returned pi
- *
- * Return wqe pointer
- **/
-struct hinic_sq_wqe *hinic_sq_get_wqe(struct hinic_sq *sq,
- unsigned int wqe_size, u16 *prod_idx)
-{
- struct hinic_hw_wqe *hw_wqe = hinic_get_wqe(sq->wq, wqe_size,
- prod_idx);
-
- if (IS_ERR(hw_wqe))
- return NULL;
-
- return &hw_wqe->sq_wqe;
-}
-
-/**
- * hinic_sq_write_wqe - write the wqe to the sq
- * @sq: send queue
- * @prod_idx: pi of the wqe
- * @sq_wqe: the wqe to write
- * @skb: skb to save
- * @wqe_size: the size of the wqe
- **/
-void hinic_sq_write_wqe(struct hinic_sq *sq, u16 prod_idx,
- struct hinic_sq_wqe *sq_wqe,
- struct sk_buff *skb, unsigned int wqe_size)
-{
- struct hinic_hw_wqe *hw_wqe = (struct hinic_hw_wqe *)sq_wqe;
-
- sq->saved_skb[prod_idx] = skb;
-
- /* The data in the HW should be in Big Endian Format */
- hinic_cpu_to_be32(sq_wqe, wqe_size);
-
- hinic_write_wqe(sq->wq, hw_wqe, wqe_size);
-}
-
-/**
- * hinic_sq_read_wqebb - read wqe ptr in the current ci and update the ci, the
- * wqe only has one wqebb
- * @sq: send queue
- * @skb: return skb that was saved
- * @wqe_size: the wqe size ptr
- * @cons_idx: consumer index of the wqe
- *
- * Return wqe in ci position
- **/
-struct hinic_sq_wqe *hinic_sq_read_wqebb(struct hinic_sq *sq,
- struct sk_buff **skb,
- unsigned int *wqe_size, u16 *cons_idx)
-{
- struct hinic_hw_wqe *hw_wqe;
- struct hinic_sq_wqe *sq_wqe;
- struct hinic_sq_ctrl *ctrl;
- unsigned int buf_sect_len;
- u32 ctrl_info;
-
- /* read the ctrl section for getting wqe size */
- hw_wqe = hinic_read_wqe(sq->wq, sizeof(*ctrl), cons_idx);
- if (IS_ERR(hw_wqe))
- return NULL;
-
- *skb = sq->saved_skb[*cons_idx];
-
- sq_wqe = &hw_wqe->sq_wqe;
- ctrl = &sq_wqe->ctrl;
- ctrl_info = be32_to_cpu(ctrl->ctrl_info);
- buf_sect_len = HINIC_SQ_CTRL_GET(ctrl_info, BUFDESC_SECT_LEN);
-
- *wqe_size = sizeof(*ctrl) + sizeof(sq_wqe->task);
- *wqe_size += SECT_SIZE_FROM_8BYTES(buf_sect_len);
- *wqe_size = ALIGN(*wqe_size, sq->wq->wqebb_size);
-
- return &hw_wqe->sq_wqe;
-}
-
-/**
- * hinic_sq_read_wqe - read wqe ptr in the current ci and update the ci
- * @sq: send queue
- * @skb: return skb that was saved
- * @wqe_size: the size of the wqe
- * @cons_idx: consumer index of the wqe
- *
- * Return wqe in ci position
- **/
-struct hinic_sq_wqe *hinic_sq_read_wqe(struct hinic_sq *sq,
- struct sk_buff **skb,
- unsigned int wqe_size, u16 *cons_idx)
-{
- struct hinic_hw_wqe *hw_wqe;
-
- hw_wqe = hinic_read_wqe(sq->wq, wqe_size, cons_idx);
- *skb = sq->saved_skb[*cons_idx];
-
- return &hw_wqe->sq_wqe;
-}
-
-/**
- * hinic_sq_put_wqe - release the ci for new wqes
- * @sq: send queue
- * @wqe_size: the size of the wqe
- **/
-void hinic_sq_put_wqe(struct hinic_sq *sq, unsigned int wqe_size)
-{
- hinic_put_wqe(sq->wq, wqe_size);
-}
-
-/**
- * hinic_sq_get_sges - get sges from the wqe
- * @sq_wqe: wqe to read the sges from
- * @sges: returned sges
- * @nr_sges: number of sges to return
- **/
-void hinic_sq_get_sges(struct hinic_sq_wqe *sq_wqe, struct hinic_sge *sges,
- int nr_sges)
-{
- int i;
-
- for (i = 0; i < nr_sges && i < HINIC_MAX_SQ_BUFDESCS; i++) {
- sges[i] = sq_wqe->buf_descs[i].sge;
- hinic_be32_to_cpu(&sges[i], sizeof(sges[i]));
- }
-}
-
-/**
- * hinic_rq_get_wqe - get wqe ptr in the current pi and update the pi
- * @rq: rq to get wqe from
- * @wqe_size: wqe size
- * @prod_idx: returned pi
- *
- * Return wqe pointer
- **/
-struct hinic_rq_wqe *hinic_rq_get_wqe(struct hinic_rq *rq,
- unsigned int wqe_size, u16 *prod_idx)
-{
- struct hinic_hw_wqe *hw_wqe = hinic_get_wqe(rq->wq, wqe_size,
- prod_idx);
-
- if (IS_ERR(hw_wqe))
- return NULL;
-
- return &hw_wqe->rq_wqe;
-}
-
-/**
- * hinic_rq_write_wqe - write the wqe to the rq
- * @rq: recv queue
- * @prod_idx: pi of the wqe
- * @rq_wqe: the wqe to write
- * @skb: skb to save
- **/
-void hinic_rq_write_wqe(struct hinic_rq *rq, u16 prod_idx,
- struct hinic_rq_wqe *rq_wqe, struct sk_buff *skb)
-{
- struct hinic_hw_wqe *hw_wqe = (struct hinic_hw_wqe *)rq_wqe;
-
- rq->saved_skb[prod_idx] = skb;
-
- /* The data in the HW should be in Big Endian Format */
- hinic_cpu_to_be32(rq_wqe, sizeof(*rq_wqe));
-
- hinic_write_wqe(rq->wq, hw_wqe, sizeof(*rq_wqe));
-}
-
-/**
- * hinic_rq_read_wqe - read wqe ptr in the current ci and update the ci
- * @rq: recv queue
- * @wqe_size: the size of the wqe
- * @skb: return saved skb
- * @cons_idx: consumer index of the wqe
- *
- * Return wqe in ci position
- **/
-struct hinic_rq_wqe *hinic_rq_read_wqe(struct hinic_rq *rq,
- unsigned int wqe_size,
- struct sk_buff **skb, u16 *cons_idx)
-{
- struct hinic_hw_wqe *hw_wqe;
- struct hinic_rq_cqe *cqe;
- int rx_done;
- u32 status;
-
- hw_wqe = hinic_read_wqe(rq->wq, wqe_size, cons_idx);
- if (IS_ERR(hw_wqe))
- return NULL;
-
- cqe = rq->cqe[*cons_idx];
-
- status = be32_to_cpu(cqe->status);
-
- rx_done = HINIC_RQ_CQE_STATUS_GET(status, RXDONE);
- if (!rx_done)
- return NULL;
-
- *skb = rq->saved_skb[*cons_idx];
-
- return &hw_wqe->rq_wqe;
-}
-
-/**
- * hinic_rq_read_next_wqe - increment ci and read the wqe in ci position
- * @rq: recv queue
- * @wqe_size: the size of the wqe
- * @skb: return saved skb
- * @cons_idx: consumer index in the wq
- *
- * Return wqe in incremented ci position
- **/
-struct hinic_rq_wqe *hinic_rq_read_next_wqe(struct hinic_rq *rq,
- unsigned int wqe_size,
- struct sk_buff **skb,
- u16 *cons_idx)
-{
- struct hinic_wq *wq = rq->wq;
- struct hinic_hw_wqe *hw_wqe;
- unsigned int num_wqebbs;
-
- wqe_size = ALIGN(wqe_size, wq->wqebb_size);
- num_wqebbs = wqe_size / wq->wqebb_size;
-
- *cons_idx = RQ_MASKED_IDX(rq, *cons_idx + num_wqebbs);
-
- *skb = rq->saved_skb[*cons_idx];
-
- hw_wqe = hinic_read_wqe_direct(wq, *cons_idx);
-
- return &hw_wqe->rq_wqe;
-}
-
-/**
- * hinic_rq_put_wqe - release the ci for new wqes
- * @rq: recv queue
- * @cons_idx: consumer index of the wqe
- * @wqe_size: the size of the wqe
- **/
-void hinic_rq_put_wqe(struct hinic_rq *rq, u16 cons_idx,
- unsigned int wqe_size)
-{
- struct hinic_rq_cqe *cqe = rq->cqe[cons_idx];
- u32 status = be32_to_cpu(cqe->status);
-
- status = HINIC_RQ_CQE_STATUS_CLEAR(status, RXDONE);
-
- /* Rx WQE size is 1 WQEBB, no wq shadow */
- cqe->status = cpu_to_be32(status);
-
- wmb(); /* clear done flag */
-
- hinic_put_wqe(rq->wq, wqe_size);
-}
-
-/**
- * hinic_rq_get_sge - get sge from the wqe
- * @rq: recv queue
- * @rq_wqe: wqe to get the sge from its buf address
- * @cons_idx: consumer index
- * @sge: returned sge
- **/
-void hinic_rq_get_sge(struct hinic_rq *rq, struct hinic_rq_wqe *rq_wqe,
- u16 cons_idx, struct hinic_sge *sge)
-{
- struct hinic_rq_cqe *cqe = rq->cqe[cons_idx];
- u32 len = be32_to_cpu(cqe->len);
-
- sge->hi_addr = be32_to_cpu(rq_wqe->buf_desc.hi_addr);
- sge->lo_addr = be32_to_cpu(rq_wqe->buf_desc.lo_addr);
- sge->len = HINIC_RQ_CQE_SGE_GET(len, LEN);
-}
-
-/**
- * hinic_rq_prepare_wqe - prepare wqe before insert to the queue
- * @rq: recv queue
- * @prod_idx: pi value
- * @rq_wqe: the wqe
- * @sge: sge for use by the wqe for recv buf address
- **/
-void hinic_rq_prepare_wqe(struct hinic_rq *rq, u16 prod_idx,
- struct hinic_rq_wqe *rq_wqe, struct hinic_sge *sge)
-{
- struct hinic_rq_cqe_sect *cqe_sect = &rq_wqe->cqe_sect;
- struct hinic_rq_bufdesc *buf_desc = &rq_wqe->buf_desc;
- struct hinic_rq_cqe *cqe = rq->cqe[prod_idx];
- struct hinic_rq_ctrl *ctrl = &rq_wqe->ctrl;
- dma_addr_t cqe_dma = rq->cqe_dma[prod_idx];
-
- ctrl->ctrl_info =
- HINIC_RQ_CTRL_SET(SIZE_8BYTES(sizeof(*ctrl)), LEN) |
- HINIC_RQ_CTRL_SET(SIZE_8BYTES(sizeof(*cqe_sect)),
- COMPLETE_LEN) |
- HINIC_RQ_CTRL_SET(SIZE_8BYTES(sizeof(*buf_desc)),
- BUFDESC_SECT_LEN) |
- HINIC_RQ_CTRL_SET(RQ_COMPLETE_SGE, COMPLETE_FORMAT);
-
- hinic_set_sge(&cqe_sect->sge, cqe_dma, sizeof(*cqe));
-
- buf_desc->hi_addr = sge->hi_addr;
- buf_desc->lo_addr = sge->lo_addr;
-}
-
-/**
- * hinic_rq_update - update pi of the rq
- * @rq: recv queue
- * @prod_idx: pi value
- **/
-void hinic_rq_update(struct hinic_rq *rq, u16 prod_idx)
-{
- *rq->pi_virt_addr = cpu_to_be16(RQ_MASKED_IDX(rq, prod_idx + 1));
-}
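
For reference while reviewing the removal: the doorbell value built by
sq_prepare_db() above is a plain 32-bit bitfield pack driven by the
HINIC_SQ_DB_INFO_* shift/mask pairs (declared in hinic_hw_qp.h further
down). A minimal userspace sketch of that encoding, assuming SQ_DB_PI_HI
takes the top 8 bits of the masked producer index (consistent with the
0xFF mask; that macro itself is defined outside this excerpt) and using
made-up sample values:

#include <stdio.h>
#include <stdint.h>

#define DB_PI_HI_SHIFT	0
#define DB_QID_SHIFT	8
#define DB_PATH_SHIFT	23
#define DB_COS_SHIFT	24
#define DB_TYPE_SHIFT	27

#define DB_PI_HI_MASK	0xFFu
#define DB_QID_MASK	0x3FFu
#define DB_PATH_MASK	0x1u
#define DB_COS_MASK	0x7u
#define DB_TYPE_MASK	0x1Fu

/* same token-pasting pattern as HINIC_SQ_DB_INFO_SET() */
#define DB_SET(val, member) \
	(((uint32_t)(val) & DB_##member##_MASK) << DB_##member##_SHIFT)

int main(void)
{
	uint16_t prod_idx = 0x1234;	/* already-masked queue PI */
	uint32_t db;

	db = DB_SET(prod_idx >> 8, PI_HI) |	/* high 8 bits of the PI */
	     DB_SET(3, QID) |			/* sample queue id */
	     DB_SET(0, PATH) |			/* data path selector */
	     DB_SET(0, COS) |
	     DB_SET(1, TYPE);			/* doorbell type, e.g. SQ */

	printf("db info = 0x%08x\n", (unsigned int)db);
	return 0;
}

The kernel additionally byte-swaps the result with cpu_to_be32() before
the writel(), since the hardware consumes doorbell data big-endian.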
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_qp.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_qp.h
deleted file mode 100644
index 6c84f83..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_qp.h
+++ /dev/null
@@ -1,205 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_HW_QP_H
-#define HINIC_HW_QP_H
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/sizes.h>
-#include <linux/pci.h>
-#include <linux/skbuff.h>
-
-#include "hinic_common.h"
-#include "hinic_hw_if.h"
-#include "hinic_hw_wqe.h"
-#include "hinic_hw_wq.h"
-#include "hinic_hw_qp_ctxt.h"
-
-#define HINIC_SQ_DB_INFO_PI_HI_SHIFT 0
-#define HINIC_SQ_DB_INFO_QID_SHIFT 8
-#define HINIC_SQ_DB_INFO_PATH_SHIFT 23
-#define HINIC_SQ_DB_INFO_COS_SHIFT 24
-#define HINIC_SQ_DB_INFO_TYPE_SHIFT 27
-
-#define HINIC_SQ_DB_INFO_PI_HI_MASK 0xFF
-#define HINIC_SQ_DB_INFO_QID_MASK 0x3FF
-#define HINIC_SQ_DB_INFO_PATH_MASK 0x1
-#define HINIC_SQ_DB_INFO_COS_MASK 0x7
-#define HINIC_SQ_DB_INFO_TYPE_MASK 0x1F
-
-#define HINIC_SQ_DB_INFO_SET(val, member) \
- (((u32)(val) & HINIC_SQ_DB_INFO_##member##_MASK) \
- << HINIC_SQ_DB_INFO_##member##_SHIFT)
-
-#define HINIC_SQ_WQEBB_SIZE 64
-#define HINIC_RQ_WQEBB_SIZE 32
-
-#define HINIC_SQ_PAGE_SIZE SZ_4K
-#define HINIC_RQ_PAGE_SIZE SZ_4K
-
-#define HINIC_SQ_DEPTH SZ_4K
-#define HINIC_RQ_DEPTH SZ_4K
-
-/* If HINIC_RX_BUF_SZ is changed, HINIC_RX_BUF_SZ_IDX must be changed too */
-#define HINIC_RX_BUF_SZ 2048
-#define HINIC_RX_BUF_SZ_IDX HINIC_RX_BUF_SZ_2048_IDX
-
-#define HINIC_MIN_TX_WQE_SIZE(wq) \
- ALIGN(HINIC_SQ_WQE_SIZE(1), (wq)->wqebb_size)
-
-#define HINIC_MIN_TX_NUM_WQEBBS(sq) \
- (HINIC_MIN_TX_WQE_SIZE((sq)->wq) / (sq)->wq->wqebb_size)
-
-enum hinic_rx_buf_sz_idx {
- HINIC_RX_BUF_SZ_32_IDX,
- HINIC_RX_BUF_SZ_64_IDX,
- HINIC_RX_BUF_SZ_96_IDX,
- HINIC_RX_BUF_SZ_128_IDX,
- HINIC_RX_BUF_SZ_192_IDX,
- HINIC_RX_BUF_SZ_256_IDX,
- HINIC_RX_BUF_SZ_384_IDX,
- HINIC_RX_BUF_SZ_512_IDX,
- HINIC_RX_BUF_SZ_768_IDX,
- HINIC_RX_BUF_SZ_1024_IDX,
- HINIC_RX_BUF_SZ_1536_IDX,
- HINIC_RX_BUF_SZ_2048_IDX,
- HINIC_RX_BUF_SZ_3072_IDX,
- HINIC_RX_BUF_SZ_4096_IDX,
- HINIC_RX_BUF_SZ_8192_IDX,
- HINIC_RX_BUF_SZ_16384_IDX,
-};
-
-struct hinic_sq {
- struct hinic_hwif *hwif;
-
- struct hinic_wq *wq;
-
- u32 irq;
- u16 msix_entry;
-
- void *hw_ci_addr;
- dma_addr_t hw_ci_dma_addr;
-
- void __iomem *db_base;
-
- struct sk_buff **saved_skb;
-};
-
-struct hinic_rq {
- struct hinic_hwif *hwif;
-
- struct hinic_wq *wq;
-
- u32 irq;
- u16 msix_entry;
-
- size_t buf_sz;
-
- struct sk_buff **saved_skb;
-
- struct hinic_rq_cqe **cqe;
- dma_addr_t *cqe_dma;
-
- u16 *pi_virt_addr;
- dma_addr_t pi_dma_addr;
-};
-
-struct hinic_qp {
- struct hinic_sq sq;
- struct hinic_rq rq;
-
- u16 q_id;
-};
-
-void hinic_qp_prepare_header(struct hinic_qp_ctxt_header *qp_ctxt_hdr,
- enum hinic_qp_ctxt_type ctxt_type,
- u16 num_queues, u16 max_queues);
-
-void hinic_sq_prepare_ctxt(struct hinic_sq_ctxt *sq_ctxt,
- struct hinic_sq *sq, u16 global_qid);
-
-void hinic_rq_prepare_ctxt(struct hinic_rq_ctxt *rq_ctxt,
- struct hinic_rq *rq, u16 global_qid);
-
-int hinic_init_sq(struct hinic_sq *sq, struct hinic_hwif *hwif,
- struct hinic_wq *wq, struct msix_entry *entry, void *ci_addr,
- dma_addr_t ci_dma_addr, void __iomem *db_base);
-
-void hinic_clean_sq(struct hinic_sq *sq);
-
-int hinic_init_rq(struct hinic_rq *rq, struct hinic_hwif *hwif,
- struct hinic_wq *wq, struct msix_entry *entry);
-
-void hinic_clean_rq(struct hinic_rq *rq);
-
-int hinic_get_sq_free_wqebbs(struct hinic_sq *sq);
-
-int hinic_get_rq_free_wqebbs(struct hinic_rq *rq);
-
-void hinic_sq_prepare_wqe(struct hinic_sq *sq, u16 prod_idx,
- struct hinic_sq_wqe *wqe, struct hinic_sge *sges,
- int nr_sges);
-
-void hinic_sq_write_db(struct hinic_sq *sq, u16 prod_idx, unsigned int wqe_size,
- unsigned int cos);
-
-struct hinic_sq_wqe *hinic_sq_get_wqe(struct hinic_sq *sq,
- unsigned int wqe_size, u16 *prod_idx);
-
-void hinic_sq_write_wqe(struct hinic_sq *sq, u16 prod_idx,
- struct hinic_sq_wqe *wqe, struct sk_buff *skb,
- unsigned int wqe_size);
-
-struct hinic_sq_wqe *hinic_sq_read_wqe(struct hinic_sq *sq,
- struct sk_buff **skb,
- unsigned int wqe_size, u16 *cons_idx);
-
-struct hinic_sq_wqe *hinic_sq_read_wqebb(struct hinic_sq *sq,
- struct sk_buff **skb,
- unsigned int *wqe_size, u16 *cons_idx);
-
-void hinic_sq_put_wqe(struct hinic_sq *sq, unsigned int wqe_size);
-
-void hinic_sq_get_sges(struct hinic_sq_wqe *wqe, struct hinic_sge *sges,
- int nr_sges);
-
-struct hinic_rq_wqe *hinic_rq_get_wqe(struct hinic_rq *rq,
- unsigned int wqe_size, u16 *prod_idx);
-
-void hinic_rq_write_wqe(struct hinic_rq *rq, u16 prod_idx,
- struct hinic_rq_wqe *wqe, struct sk_buff *skb);
-
-struct hinic_rq_wqe *hinic_rq_read_wqe(struct hinic_rq *rq,
- unsigned int wqe_size,
- struct sk_buff **skb, u16 *cons_idx);
-
-struct hinic_rq_wqe *hinic_rq_read_next_wqe(struct hinic_rq *rq,
- unsigned int wqe_size,
- struct sk_buff **skb,
- u16 *cons_idx);
-
-void hinic_rq_put_wqe(struct hinic_rq *rq, u16 cons_idx,
- unsigned int wqe_size);
-
-void hinic_rq_get_sge(struct hinic_rq *rq, struct hinic_rq_wqe *wqe,
- u16 cons_idx, struct hinic_sge *sge);
-
-void hinic_rq_prepare_wqe(struct hinic_rq *rq, u16 prod_idx,
- struct hinic_rq_wqe *wqe, struct hinic_sge *sge);
-
-void hinic_rq_update(struct hinic_rq *rq, u16 prod_idx);
-
-#endif
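
A recurring idiom across these queue files is converting a WQE byte size
into whole WQEBBs with ALIGN(); per the constants above an SQ WQEBB is
64 bytes and an RQ WQEBB 32 bytes. A small sketch of the rounding with an
illustrative 144-byte WQE:

#include <stdio.h>

/* same rounding ALIGN() performs for a power-of-2 alignment */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned int wqe_size = 144;	/* illustrative only */
	unsigned int sq_wqebb = 64, rq_wqebb = 32;

	/* 144 B needs 3 SQ wqebbs (192 B) or 5 RQ wqebbs (160 B) */
	printf("SQ wqebbs: %u\n", ALIGN_UP(wqe_size, sq_wqebb) / sq_wqebb);
	printf("RQ wqebbs: %u\n", ALIGN_UP(wqe_size, rq_wqebb) / rq_wqebb);
	return 0;
}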
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_qp_ctxt.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_qp_ctxt.h
deleted file mode 100644
index 376abf0..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_qp_ctxt.h
+++ /dev/null
@@ -1,214 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_HW_QP_CTXT_H
-#define HINIC_HW_QP_CTXT_H
-
-#include <linux/types.h>
-
-#include "hinic_hw_cmdq.h"
-
-#define HINIC_SQ_CTXT_CEQ_ATTR_GLOBAL_SQ_ID_SHIFT 13
-#define HINIC_SQ_CTXT_CEQ_ATTR_EN_SHIFT 23
-
-#define HINIC_SQ_CTXT_CEQ_ATTR_GLOBAL_SQ_ID_MASK 0x3FF
-#define HINIC_SQ_CTXT_CEQ_ATTR_EN_MASK 0x1
-
-#define HINIC_SQ_CTXT_CEQ_ATTR_SET(val, member) \
- (((u32)(val) & HINIC_SQ_CTXT_CEQ_ATTR_##member##_MASK) \
- << HINIC_SQ_CTXT_CEQ_ATTR_##member##_SHIFT)
-
-#define HINIC_SQ_CTXT_CI_IDX_SHIFT 11
-#define HINIC_SQ_CTXT_CI_WRAPPED_SHIFT 23
-
-#define HINIC_SQ_CTXT_CI_IDX_MASK 0xFFF
-#define HINIC_SQ_CTXT_CI_WRAPPED_MASK 0x1
-
-#define HINIC_SQ_CTXT_CI_SET(val, member) \
- (((u32)(val) & HINIC_SQ_CTXT_CI_##member##_MASK) \
- << HINIC_SQ_CTXT_CI_##member##_SHIFT)
-
-#define HINIC_SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
-#define HINIC_SQ_CTXT_WQ_PAGE_PI_SHIFT 20
-
-#define HINIC_SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFF
-#define HINIC_SQ_CTXT_WQ_PAGE_PI_MASK 0xFFF
-
-#define HINIC_SQ_CTXT_WQ_PAGE_SET(val, member) \
- (((u32)(val) & HINIC_SQ_CTXT_WQ_PAGE_##member##_MASK) \
- << HINIC_SQ_CTXT_WQ_PAGE_##member##_SHIFT)
-
-#define HINIC_SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
-#define HINIC_SQ_CTXT_PREF_CACHE_MAX_SHIFT 14
-#define HINIC_SQ_CTXT_PREF_CACHE_MIN_SHIFT 25
-
-#define HINIC_SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFF
-#define HINIC_SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FF
-#define HINIC_SQ_CTXT_PREF_CACHE_MIN_MASK 0x7F
-
-#define HINIC_SQ_CTXT_PREF_WQ_HI_PFN_SHIFT 0
-#define HINIC_SQ_CTXT_PREF_CI_SHIFT 20
-
-#define HINIC_SQ_CTXT_PREF_WQ_HI_PFN_MASK 0xFFFFF
-#define HINIC_SQ_CTXT_PREF_CI_MASK 0xFFF
-
-#define HINIC_SQ_CTXT_PREF_SET(val, member) \
- (((u32)(val) & HINIC_SQ_CTXT_PREF_##member##_MASK) \
- << HINIC_SQ_CTXT_PREF_##member##_SHIFT)
-
-#define HINIC_SQ_CTXT_WQ_BLOCK_HI_PFN_SHIFT 0
-
-#define HINIC_SQ_CTXT_WQ_BLOCK_HI_PFN_MASK 0x7FFFFF
-
-#define HINIC_SQ_CTXT_WQ_BLOCK_SET(val, member) \
- (((u32)(val) & HINIC_SQ_CTXT_WQ_BLOCK_##member##_MASK) \
- << HINIC_SQ_CTXT_WQ_BLOCK_##member##_SHIFT)
-
-#define HINIC_RQ_CTXT_CEQ_ATTR_EN_SHIFT 0
-#define HINIC_RQ_CTXT_CEQ_ATTR_WRAPPED_SHIFT 1
-
-#define HINIC_RQ_CTXT_CEQ_ATTR_EN_MASK 0x1
-#define HINIC_RQ_CTXT_CEQ_ATTR_WRAPPED_MASK 0x1
-
-#define HINIC_RQ_CTXT_CEQ_ATTR_SET(val, member) \
- (((u32)(val) & HINIC_RQ_CTXT_CEQ_ATTR_##member##_MASK) \
- << HINIC_RQ_CTXT_CEQ_ATTR_##member##_SHIFT)
-
-#define HINIC_RQ_CTXT_PI_IDX_SHIFT 0
-#define HINIC_RQ_CTXT_PI_INTR_SHIFT 22
-
-#define HINIC_RQ_CTXT_PI_IDX_MASK 0xFFF
-#define HINIC_RQ_CTXT_PI_INTR_MASK 0x3FF
-
-#define HINIC_RQ_CTXT_PI_SET(val, member) \
- (((u32)(val) & HINIC_RQ_CTXT_PI_##member##_MASK) << \
- HINIC_RQ_CTXT_PI_##member##_SHIFT)
-
-#define HINIC_RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
-#define HINIC_RQ_CTXT_WQ_PAGE_CI_SHIFT 20
-
-#define HINIC_RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFF
-#define HINIC_RQ_CTXT_WQ_PAGE_CI_MASK 0xFFF
-
-#define HINIC_RQ_CTXT_WQ_PAGE_SET(val, member) \
- (((u32)(val) & HINIC_RQ_CTXT_WQ_PAGE_##member##_MASK) << \
- HINIC_RQ_CTXT_WQ_PAGE_##member##_SHIFT)
-
-#define HINIC_RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
-#define HINIC_RQ_CTXT_PREF_CACHE_MAX_SHIFT 14
-#define HINIC_RQ_CTXT_PREF_CACHE_MIN_SHIFT 25
-
-#define HINIC_RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFF
-#define HINIC_RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FF
-#define HINIC_RQ_CTXT_PREF_CACHE_MIN_MASK 0x7F
-
-#define HINIC_RQ_CTXT_PREF_WQ_HI_PFN_SHIFT 0
-#define HINIC_RQ_CTXT_PREF_CI_SHIFT 20
-
-#define HINIC_RQ_CTXT_PREF_WQ_HI_PFN_MASK 0xFFFFF
-#define HINIC_RQ_CTXT_PREF_CI_MASK 0xFFF
-
-#define HINIC_RQ_CTXT_PREF_SET(val, member) \
- (((u32)(val) & HINIC_RQ_CTXT_PREF_##member##_MASK) << \
- HINIC_RQ_CTXT_PREF_##member##_SHIFT)
-
-#define HINIC_RQ_CTXT_WQ_BLOCK_HI_PFN_SHIFT 0
-
-#define HINIC_RQ_CTXT_WQ_BLOCK_HI_PFN_MASK 0x7FFFFF
-
-#define HINIC_RQ_CTXT_WQ_BLOCK_SET(val, member) \
- (((u32)(val) & HINIC_RQ_CTXT_WQ_BLOCK_##member##_MASK) << \
- HINIC_RQ_CTXT_WQ_BLOCK_##member##_SHIFT)
-
-#define HINIC_SQ_CTXT_SIZE(num_sqs) (sizeof(struct hinic_qp_ctxt_header) \
- + (num_sqs) * sizeof(struct hinic_sq_ctxt))
-
-#define HINIC_RQ_CTXT_SIZE(num_rqs) (sizeof(struct hinic_qp_ctxt_header) \
- + (num_rqs) * sizeof(struct hinic_rq_ctxt))
-
-#define HINIC_WQ_PAGE_PFN_SHIFT 12
-#define HINIC_WQ_BLOCK_PFN_SHIFT 9
-
-#define HINIC_WQ_PAGE_PFN(page_addr) ((page_addr) >> HINIC_WQ_PAGE_PFN_SHIFT)
-#define HINIC_WQ_BLOCK_PFN(page_addr) ((page_addr) >> \
- HINIC_WQ_BLOCK_PFN_SHIFT)
-
-#define HINIC_Q_CTXT_MAX \
- ((HINIC_CMDQ_BUF_SIZE - sizeof(struct hinic_qp_ctxt_header)) \
- / sizeof(struct hinic_sq_ctxt))
-
-enum hinic_qp_ctxt_type {
- HINIC_QP_CTXT_TYPE_SQ,
- HINIC_QP_CTXT_TYPE_RQ
-};
-
-struct hinic_qp_ctxt_header {
- u16 num_queues;
- u16 queue_type;
- u32 addr_offset;
-};
-
-struct hinic_sq_ctxt {
- u32 ceq_attr;
-
- u32 ci_wrapped;
-
- u32 wq_hi_pfn_pi;
- u32 wq_lo_pfn;
-
- u32 pref_cache;
- u32 pref_wrapped;
- u32 pref_wq_hi_pfn_ci;
- u32 pref_wq_lo_pfn;
-
- u32 rsvd0;
- u32 rsvd1;
-
- u32 wq_block_hi_pfn;
- u32 wq_block_lo_pfn;
-};
-
-struct hinic_rq_ctxt {
- u32 ceq_attr;
-
- u32 pi_intr_attr;
-
- u32 wq_hi_pfn_ci;
- u32 wq_lo_pfn;
-
- u32 pref_cache;
- u32 pref_wrapped;
-
- u32 pref_wq_hi_pfn_ci;
- u32 pref_wq_lo_pfn;
-
- u32 pi_paddr_hi;
- u32 pi_paddr_lo;
-
- u32 wq_block_hi_pfn;
- u32 wq_block_lo_pfn;
-};
-
-struct hinic_sq_ctxt_block {
- struct hinic_qp_ctxt_header hdr;
- struct hinic_sq_ctxt sq_ctxt[HINIC_Q_CTXT_MAX];
-};
-
-struct hinic_rq_ctxt_block {
- struct hinic_qp_ctxt_header hdr;
- struct hinic_rq_ctxt rq_ctxt[HINIC_Q_CTXT_MAX];
-};
-
-#endif
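
Two size relationships in this header deserve a note: WQ pages are handed
to hardware by PFN (the address shifted right by 12 for 4 KB pages, or by
9 for 512-byte blocks), and HINIC_Q_CTXT_MAX is simply how many queue
contexts fit into one command buffer after its header. A sketch of both
computations; HINIC_CMDQ_BUF_SIZE lives in hinic_hw_cmdq.h, so the 2 KB
figure below is only an assumption for illustration:

#include <stdio.h>
#include <stdint.h>

#define WQ_PAGE_PFN_SHIFT	12	/* 4 KB WQ pages */
#define WQ_BLOCK_PFN_SHIFT	9	/* 512 B WQ blocks */

int main(void)
{
	uint64_t page_addr = 0x12345678000ULL;	/* sample DMA address */
	size_t cmdq_buf = 2048;		/* assumed HINIC_CMDQ_BUF_SIZE */
	size_t hdr = 8;			/* sizeof hinic_qp_ctxt_header */
	size_t sq_ctxt = 48;		/* sizeof hinic_sq_ctxt: 12 u32s */

	printf("page pfn:  0x%llx\n",
	       (unsigned long long)(page_addr >> WQ_PAGE_PFN_SHIFT));
	printf("block pfn: 0x%llx\n",
	       (unsigned long long)(page_addr >> WQ_BLOCK_PFN_SHIFT));
	printf("queue ctxts per cmdq buffer: %zu\n",
	       (cmdq_buf - hdr) / sq_ctxt);
	return 0;
}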
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
deleted file mode 100644
index 3e3181c08..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+++ /dev/null
@@ -1,878 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/dma-mapping.h>
-#include <linux/slab.h>
-#include <linux/atomic.h>
-#include <linux/semaphore.h>
-#include <linux/errno.h>
-#include <linux/vmalloc.h>
-#include <linux/err.h>
-#include <asm/byteorder.h>
-
-#include "hinic_hw_if.h"
-#include "hinic_hw_wqe.h"
-#include "hinic_hw_wq.h"
-#include "hinic_hw_cmdq.h"
-
-#define WQS_BLOCKS_PER_PAGE 4
-
-#define WQ_BLOCK_SIZE 4096
-#define WQS_PAGE_SIZE (WQS_BLOCKS_PER_PAGE * WQ_BLOCK_SIZE)
-
-#define WQS_MAX_NUM_BLOCKS 128
-#define WQS_FREE_BLOCKS_SIZE(wqs) (WQS_MAX_NUM_BLOCKS * \
- sizeof((wqs)->free_blocks[0]))
-
-#define WQ_SIZE(wq) ((wq)->q_depth * (wq)->wqebb_size)
-
-#define WQ_PAGE_ADDR_SIZE sizeof(u64)
-#define WQ_MAX_PAGES (WQ_BLOCK_SIZE / WQ_PAGE_ADDR_SIZE)
-
-#define CMDQ_BLOCK_SIZE 512
-#define CMDQ_PAGE_SIZE 4096
-
-#define CMDQ_WQ_MAX_PAGES (CMDQ_BLOCK_SIZE / WQ_PAGE_ADDR_SIZE)
-
-#define WQ_BASE_VADDR(wqs, wq) \
- ((void *)((wqs)->page_vaddr[(wq)->page_idx]) \
- + (wq)->block_idx * WQ_BLOCK_SIZE)
-
-#define WQ_BASE_PADDR(wqs, wq) \
- ((wqs)->page_paddr[(wq)->page_idx] \
- + (wq)->block_idx * WQ_BLOCK_SIZE)
-
-#define WQ_BASE_ADDR(wqs, wq) \
- ((void *)((wqs)->shadow_page_vaddr[(wq)->page_idx]) \
- + (wq)->block_idx * WQ_BLOCK_SIZE)
-
-#define CMDQ_BASE_VADDR(cmdq_pages, wq) \
- ((void *)((cmdq_pages)->page_vaddr) \
- + (wq)->block_idx * CMDQ_BLOCK_SIZE)
-
-#define CMDQ_BASE_PADDR(cmdq_pages, wq) \
- ((cmdq_pages)->page_paddr \
- + (wq)->block_idx * CMDQ_BLOCK_SIZE)
-
-#define CMDQ_BASE_ADDR(cmdq_pages, wq) \
- ((void *)((cmdq_pages)->shadow_page_vaddr) \
- + (wq)->block_idx * CMDQ_BLOCK_SIZE)
-
-#define WQE_PAGE_OFF(wq, idx) (((idx) & ((wq)->num_wqebbs_per_page - 1)) * \
- (wq)->wqebb_size)
-
-#define WQE_PAGE_NUM(wq, idx) (((idx) / ((wq)->num_wqebbs_per_page)) \
- & ((wq)->num_q_pages - 1))
-
-#define WQ_PAGE_ADDR(wq, idx) \
- ((wq)->shadow_block_vaddr[WQE_PAGE_NUM(wq, idx)])
-
-#define MASKED_WQE_IDX(wq, idx) ((idx) & (wq)->mask)
-
-#define WQE_IN_RANGE(wqe, start, end) \
- (((unsigned long)(wqe) >= (unsigned long)(start)) && \
- ((unsigned long)(wqe) < (unsigned long)(end)))
-
-#define WQE_SHADOW_PAGE(wq, wqe) \
- (((unsigned long)(wqe) - (unsigned long)(wq)->shadow_wqe) \
- / (wq)->max_wqe_size)
-
-/**
- * queue_alloc_page - allocate page for Queue
- * @hwif: HW interface for allocating DMA
- * @vaddr: returned virtual address of the page
- * @paddr: returned physical address of the page
- * @shadow_vaddr: returned VM area for holding WQ page addresses
- * @page_sz: page size of each WQ page
- *
- * Return 0 - Success, negative - Failure
- **/
-static int queue_alloc_page(struct hinic_hwif *hwif, u64 **vaddr, u64 *paddr,
- void ***shadow_vaddr, size_t page_sz)
-{
- struct pci_dev *pdev = hwif->pdev;
- dma_addr_t dma_addr;
-
- *vaddr = dma_zalloc_coherent(&pdev->dev, page_sz, &dma_addr,
- GFP_KERNEL);
- if (!*vaddr) {
- dev_err(&pdev->dev, "Failed to allocate dma for wqs page\n");
- return -ENOMEM;
- }
-
- *paddr = (u64)dma_addr;
-
- /* use vzalloc for big mem */
- *shadow_vaddr = vzalloc(page_sz);
- if (!*shadow_vaddr)
- goto err_shadow_vaddr;
-
- return 0;
-
-err_shadow_vaddr:
- dma_free_coherent(&pdev->dev, page_sz, *vaddr, dma_addr);
- return -ENOMEM;
-}
-
-/**
- * wqs_allocate_page - allocate page for WQ set
- * @wqs: Work Queue Set
- * @page_idx: the index of the page to be allocated
- *
- * Return 0 - Success, negative - Failure
- **/
-static int wqs_allocate_page(struct hinic_wqs *wqs, int page_idx)
-{
- return queue_alloc_page(wqs->hwif, &wqs->page_vaddr[page_idx],
- &wqs->page_paddr[page_idx],
- &wqs->shadow_page_vaddr[page_idx],
- WQS_PAGE_SIZE);
-}
-
-/**
- * wqs_free_page - free page of WQ set
- * @wqs: Work Queue Set
- * @page_idx: the index of the page to be freed
- **/
-static void wqs_free_page(struct hinic_wqs *wqs, int page_idx)
-{
- struct hinic_hwif *hwif = wqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- dma_free_coherent(&pdev->dev, WQS_PAGE_SIZE,
- wqs->page_vaddr[page_idx],
- (dma_addr_t)wqs->page_paddr[page_idx]);
- vfree(wqs->shadow_page_vaddr[page_idx]);
-}
-
-/**
- * cmdq_allocate_page - allocate page for cmdq
- * @cmdq_pages: the pages of the cmdq queue struct to hold the page
- *
- * Return 0 - Success, negative - Failure
- **/
-static int cmdq_allocate_page(struct hinic_cmdq_pages *cmdq_pages)
-{
- return queue_alloc_page(cmdq_pages->hwif, &cmdq_pages->page_vaddr,
- &cmdq_pages->page_paddr,
- &cmdq_pages->shadow_page_vaddr,
- CMDQ_PAGE_SIZE);
-}
-
-/**
- * cmdq_free_page - free page from cmdq
- * @cmdq_pages: the pages of the cmdq queue struct that hold the page
- **/
-static void cmdq_free_page(struct hinic_cmdq_pages *cmdq_pages)
-{
- struct hinic_hwif *hwif = cmdq_pages->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- dma_free_coherent(&pdev->dev, CMDQ_PAGE_SIZE,
- cmdq_pages->page_vaddr,
- (dma_addr_t)cmdq_pages->page_paddr);
- vfree(cmdq_pages->shadow_page_vaddr);
-}
-
-static int alloc_page_arrays(struct hinic_wqs *wqs)
-{
- struct hinic_hwif *hwif = wqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
- size_t size;
-
- size = wqs->num_pages * sizeof(*wqs->page_paddr);
- wqs->page_paddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
- if (!wqs->page_paddr)
- return -ENOMEM;
-
- size = wqs->num_pages * sizeof(*wqs->page_vaddr);
- wqs->page_vaddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
- if (!wqs->page_vaddr)
- goto err_page_vaddr;
-
- size = wqs->num_pages * sizeof(*wqs->shadow_page_vaddr);
- wqs->shadow_page_vaddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
- if (!wqs->shadow_page_vaddr)
- goto err_page_shadow_vaddr;
-
- return 0;
-
-err_page_shadow_vaddr:
- devm_kfree(&pdev->dev, wqs->page_vaddr);
-
-err_page_vaddr:
- devm_kfree(&pdev->dev, wqs->page_paddr);
- return -ENOMEM;
-}
-
-static void free_page_arrays(struct hinic_wqs *wqs)
-{
- struct hinic_hwif *hwif = wqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- devm_kfree(&pdev->dev, wqs->shadow_page_vaddr);
- devm_kfree(&pdev->dev, wqs->page_vaddr);
- devm_kfree(&pdev->dev, wqs->page_paddr);
-}
-
-static int wqs_next_block(struct hinic_wqs *wqs, int *page_idx,
- int *block_idx)
-{
- int pos;
-
- down(&wqs->alloc_blocks_lock);
-
- wqs->num_free_blks--;
-
- if (wqs->num_free_blks < 0) {
- wqs->num_free_blks++;
- up(&wqs->alloc_blocks_lock);
- return -ENOMEM;
- }
-
- pos = wqs->alloc_blk_pos++;
- pos &= WQS_MAX_NUM_BLOCKS - 1;
-
- *page_idx = wqs->free_blocks[pos].page_idx;
- *block_idx = wqs->free_blocks[pos].block_idx;
-
- wqs->free_blocks[pos].page_idx = -1;
- wqs->free_blocks[pos].block_idx = -1;
-
- up(&wqs->alloc_blocks_lock);
- return 0;
-}
-
-static void wqs_return_block(struct hinic_wqs *wqs, int page_idx,
- int block_idx)
-{
- int pos;
-
- down(&wqs->alloc_blocks_lock);
-
- pos = wqs->return_blk_pos++;
- pos &= WQS_MAX_NUM_BLOCKS - 1;
-
- wqs->free_blocks[pos].page_idx = page_idx;
- wqs->free_blocks[pos].block_idx = block_idx;
-
- wqs->num_free_blks++;
-
- up(&wqs->alloc_blocks_lock);
-}
-
-static void init_wqs_blocks_arr(struct hinic_wqs *wqs)
-{
- int page_idx, blk_idx, pos = 0;
-
- for (page_idx = 0; page_idx < wqs->num_pages; page_idx++) {
- for (blk_idx = 0; blk_idx < WQS_BLOCKS_PER_PAGE; blk_idx++) {
- wqs->free_blocks[pos].page_idx = page_idx;
- wqs->free_blocks[pos].block_idx = blk_idx;
- pos++;
- }
- }
-
- wqs->alloc_blk_pos = 0;
- wqs->return_blk_pos = pos;
- wqs->num_free_blks = pos;
-
- sema_init(&wqs->alloc_blocks_lock, 1);
-}
-
-/**
- * hinic_wqs_alloc - allocate Work Queues set
- * @wqs: Work Queue Set
- * @max_wqs: maximum wqs to allocate
- * @hwif: HW interface to use for the allocation
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_wqs_alloc(struct hinic_wqs *wqs, int max_wqs,
- struct hinic_hwif *hwif)
-{
- struct pci_dev *pdev = hwif->pdev;
- int err, i, page_idx;
-
- max_wqs = ALIGN(max_wqs, WQS_BLOCKS_PER_PAGE);
- if (max_wqs > WQS_MAX_NUM_BLOCKS) {
- dev_err(&pdev->dev, "Invalid max_wqs = %d\n", max_wqs);
- return -EINVAL;
- }
-
- wqs->hwif = hwif;
- wqs->num_pages = max_wqs / WQS_BLOCKS_PER_PAGE;
-
- if (alloc_page_arrays(wqs)) {
- dev_err(&pdev->dev,
- "Failed to allocate mem for page addresses\n");
- return -ENOMEM;
- }
-
- for (page_idx = 0; page_idx < wqs->num_pages; page_idx++) {
- err = wqs_allocate_page(wqs, page_idx);
- if (err) {
- dev_err(&pdev->dev, "Failed wq page allocation\n");
- goto err_wq_allocate_page;
- }
- }
-
- wqs->free_blocks = devm_kzalloc(&pdev->dev, WQS_FREE_BLOCKS_SIZE(wqs),
- GFP_KERNEL);
- if (!wqs->free_blocks) {
- err = -ENOMEM;
- goto err_alloc_blocks;
- }
-
- init_wqs_blocks_arr(wqs);
- return 0;
-
-err_alloc_blocks:
-err_wq_allocate_page:
- for (i = 0; i < page_idx; i++)
- wqs_free_page(wqs, i);
-
- free_page_arrays(wqs);
- return err;
-}
-
-/**
- * hinic_wqs_free - free Work Queues set
- * @wqs: Work Queue Set
- **/
-void hinic_wqs_free(struct hinic_wqs *wqs)
-{
- struct hinic_hwif *hwif = wqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int page_idx;
-
- devm_kfree(&pdev->dev, wqs->free_blocks);
-
- for (page_idx = 0; page_idx < wqs->num_pages; page_idx++)
- wqs_free_page(wqs, page_idx);
-
- free_page_arrays(wqs);
-}
-
-/**
- * alloc_wqes_shadow - allocate WQE shadows for WQ
- * @wq: WQ to allocate shadows for
- *
- * Return 0 - Success, negative - Failure
- **/
-static int alloc_wqes_shadow(struct hinic_wq *wq)
-{
- struct hinic_hwif *hwif = wq->hwif;
- struct pci_dev *pdev = hwif->pdev;
- size_t size;
-
- size = wq->num_q_pages * wq->max_wqe_size;
- wq->shadow_wqe = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
- if (!wq->shadow_wqe)
- return -ENOMEM;
-
- size = wq->num_q_pages * sizeof(wq->prod_idx);
- wq->shadow_idx = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
- if (!wq->shadow_idx)
- goto err_shadow_idx;
-
- return 0;
-
-err_shadow_idx:
- devm_kfree(&pdev->dev, wq->shadow_wqe);
- return -ENOMEM;
-}
-
-/**
- * free_wqes_shadow - free WQE shadows of WQ
- * @wq: WQ to free shadows from
- **/
-static void free_wqes_shadow(struct hinic_wq *wq)
-{
- struct hinic_hwif *hwif = wq->hwif;
- struct pci_dev *pdev = hwif->pdev;
-
- devm_kfree(&pdev->dev, wq->shadow_idx);
- devm_kfree(&pdev->dev, wq->shadow_wqe);
-}
-
-/**
- * free_wq_pages - free pages of WQ
- * @hwif: HW interface for releasing dma addresses
- * @wq: WQ to free pages from
- * @num_q_pages: number of pages to free
- **/
-static void free_wq_pages(struct hinic_wq *wq, struct hinic_hwif *hwif,
- int num_q_pages)
-{
- struct pci_dev *pdev = hwif->pdev;
- int i;
-
- for (i = 0; i < num_q_pages; i++) {
- void **vaddr = &wq->shadow_block_vaddr[i];
- u64 *paddr = &wq->block_vaddr[i];
- dma_addr_t dma_addr;
-
- dma_addr = (dma_addr_t)be64_to_cpu(*paddr);
- dma_free_coherent(&pdev->dev, wq->wq_page_size, *vaddr,
- dma_addr);
- }
-
- free_wqes_shadow(wq);
-}
-
-/**
- * alloc_wq_pages - alloc pages for WQ
- * @hwif: HW interface for allocating dma addresses
- * @wq: WQ to allocate pages for
- * @max_pages: maximum pages allowed
- *
- * Return 0 - Success, negative - Failure
- **/
-static int alloc_wq_pages(struct hinic_wq *wq, struct hinic_hwif *hwif,
- int max_pages)
-{
- struct pci_dev *pdev = hwif->pdev;
- int i, err, num_q_pages;
-
- num_q_pages = ALIGN(WQ_SIZE(wq), wq->wq_page_size) / wq->wq_page_size;
- if (num_q_pages > max_pages) {
- dev_err(&pdev->dev, "Number wq pages exceeds the limit\n");
- return -EINVAL;
- }
-
- if (num_q_pages & (num_q_pages - 1)) {
- dev_err(&pdev->dev, "Number wq pages must be power of 2\n");
- return -EINVAL;
- }
-
- wq->num_q_pages = num_q_pages;
-
- err = alloc_wqes_shadow(wq);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate wqe shadow\n");
- return err;
- }
-
- for (i = 0; i < num_q_pages; i++) {
- void **vaddr = &wq->shadow_block_vaddr[i];
- u64 *paddr = &wq->block_vaddr[i];
- dma_addr_t dma_addr;
-
- *vaddr = dma_zalloc_coherent(&pdev->dev, wq->wq_page_size,
- &dma_addr, GFP_KERNEL);
- if (!*vaddr) {
- dev_err(&pdev->dev, "Failed to allocate wq page\n");
- goto err_alloc_wq_pages;
- }
-
- /* HW uses Big Endian Format */
- *paddr = cpu_to_be64(dma_addr);
- }
-
- return 0;
-
-err_alloc_wq_pages:
- free_wq_pages(wq, hwif, i);
- return -ENOMEM;
-}
-
-/**
- * hinic_wq_allocate - Allocate the WQ resources from the WQS
- * @wqs: WQ set from which to allocate the WQ resources
- * @wq: WQ to allocate resources for it from the WQ set
- * @wqebb_size: Work Queue Block Byte Size
- * @wq_page_size: the page size in the Work Queue
- * @q_depth: number of wqebbs in WQ
- * @max_wqe_size: maximum WQE size that will be used in the WQ
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_wq_allocate(struct hinic_wqs *wqs, struct hinic_wq *wq,
- u16 wqebb_size, u16 wq_page_size, u16 q_depth,
- u16 max_wqe_size)
-{
- struct hinic_hwif *hwif = wqs->hwif;
- struct pci_dev *pdev = hwif->pdev;
- u16 num_wqebbs_per_page;
- int err;
-
- if (wqebb_size == 0) {
- dev_err(&pdev->dev, "wqebb_size must be > 0\n");
- return -EINVAL;
- }
-
- if (wq_page_size == 0) {
- dev_err(&pdev->dev, "wq_page_size must be > 0\n");
- return -EINVAL;
- }
-
- if (q_depth & (q_depth - 1)) {
- dev_err(&pdev->dev, "WQ q_depth must be power of 2\n");
- return -EINVAL;
- }
-
- num_wqebbs_per_page = ALIGN(wq_page_size, wqebb_size) / wqebb_size;
-
- if (num_wqebbs_per_page & (num_wqebbs_per_page - 1)) {
- dev_err(&pdev->dev, "num wqebbs per page must be power of 2\n");
- return -EINVAL;
- }
-
- wq->hwif = hwif;
-
- err = wqs_next_block(wqs, &wq->page_idx, &wq->block_idx);
- if (err) {
- dev_err(&pdev->dev, "Failed to get free wqs next block\n");
- return err;
- }
-
- wq->wqebb_size = wqebb_size;
- wq->wq_page_size = wq_page_size;
- wq->q_depth = q_depth;
- wq->max_wqe_size = max_wqe_size;
- wq->num_wqebbs_per_page = num_wqebbs_per_page;
-
- wq->block_vaddr = WQ_BASE_VADDR(wqs, wq);
- wq->shadow_block_vaddr = WQ_BASE_ADDR(wqs, wq);
- wq->block_paddr = WQ_BASE_PADDR(wqs, wq);
-
- err = alloc_wq_pages(wq, wqs->hwif, WQ_MAX_PAGES);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate wq pages\n");
- goto err_alloc_wq_pages;
- }
-
- atomic_set(&wq->cons_idx, 0);
- atomic_set(&wq->prod_idx, 0);
- atomic_set(&wq->delta, q_depth);
- wq->mask = q_depth - 1;
-
- return 0;
-
-err_alloc_wq_pages:
- wqs_return_block(wqs, wq->page_idx, wq->block_idx);
- return err;
-}
-
-/**
- * hinic_wq_free - Return the WQ resources to the WQS
- * @wqs: WQ set that the WQ resources are returned to
- * @wq: WQ whose resources are returned to the WQ set
- **/
-void hinic_wq_free(struct hinic_wqs *wqs, struct hinic_wq *wq)
-{
- free_wq_pages(wq, wqs->hwif, wq->num_q_pages);
-
- wqs_return_block(wqs, wq->page_idx, wq->block_idx);
-}
-
-/**
- * hinic_wqs_cmdq_alloc - Allocate wqs for cmdqs
- * @cmdq_pages: will hold the pages of the cmdq
- * @wq: returned wqs
- * @hwif: HW interface
- * @cmdq_blocks: number of cmdq blocks/wq to allocate
- * @wqebb_size: Work Queue Block Byte Size
- * @wq_page_size: the page size in the Work Queue
- * @q_depth: number of wqebbs in WQ
- * @max_wqe_size: maximum WQE size that will be used in the WQ
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_wqs_cmdq_alloc(struct hinic_cmdq_pages *cmdq_pages,
- struct hinic_wq *wq, struct hinic_hwif *hwif,
- int cmdq_blocks, u16 wqebb_size, u16 wq_page_size,
- u16 q_depth, u16 max_wqe_size)
-{
- struct pci_dev *pdev = hwif->pdev;
- u16 num_wqebbs_per_page;
- int i, j, err = -ENOMEM;
-
- if (wqebb_size == 0) {
- dev_err(&pdev->dev, "wqebb_size must be > 0\n");
- return -EINVAL;
- }
-
- if (wq_page_size == 0) {
- dev_err(&pdev->dev, "wq_page_size must be > 0\n");
- return -EINVAL;
- }
-
- if (q_depth & (q_depth - 1)) {
- dev_err(&pdev->dev, "WQ q_depth must be power of 2\n");
- return -EINVAL;
- }
-
- num_wqebbs_per_page = ALIGN(wq_page_size, wqebb_size) / wqebb_size;
-
- if (num_wqebbs_per_page & (num_wqebbs_per_page - 1)) {
- dev_err(&pdev->dev, "num wqebbs per page must be power of 2\n");
- return -EINVAL;
- }
-
- cmdq_pages->hwif = hwif;
-
- err = cmdq_allocate_page(cmdq_pages);
- if (err) {
- dev_err(&pdev->dev, "Failed to allocate CMDQ page\n");
- return err;
- }
-
- for (i = 0; i < cmdq_blocks; i++) {
- wq[i].hwif = hwif;
- wq[i].page_idx = 0;
- wq[i].block_idx = i;
-
- wq[i].wqebb_size = wqebb_size;
- wq[i].wq_page_size = wq_page_size;
- wq[i].q_depth = q_depth;
- wq[i].max_wqe_size = max_wqe_size;
- wq[i].num_wqebbs_per_page = num_wqebbs_per_page;
-
- wq[i].block_vaddr = CMDQ_BASE_VADDR(cmdq_pages, &wq[i]);
- wq[i].shadow_block_vaddr = CMDQ_BASE_ADDR(cmdq_pages, &wq[i]);
- wq[i].block_paddr = CMDQ_BASE_PADDR(cmdq_pages, &wq[i]);
-
- err = alloc_wq_pages(&wq[i], cmdq_pages->hwif,
- CMDQ_WQ_MAX_PAGES);
- if (err) {
- dev_err(&pdev->dev, "Failed to alloc CMDQ blocks\n");
- goto err_cmdq_block;
- }
-
- atomic_set(&wq[i].cons_idx, 0);
- atomic_set(&wq[i].prod_idx, 0);
- atomic_set(&wq[i].delta, q_depth);
- wq[i].mask = q_depth - 1;
- }
-
- return 0;
-
-err_cmdq_block:
- for (j = 0; j < i; j++)
- free_wq_pages(&wq[j], cmdq_pages->hwif, wq[j].num_q_pages);
-
- cmdq_free_page(cmdq_pages);
- return err;
-}
-
-/**
- * hinic_wqs_cmdq_free - Free wqs from cmdqs
- * @cmdq_pages: hold the pages of the cmdq
- * @wq: wqs to free
- * @cmdq_blocks: number of wqs to free
- **/
-void hinic_wqs_cmdq_free(struct hinic_cmdq_pages *cmdq_pages,
- struct hinic_wq *wq, int cmdq_blocks)
-{
- int i;
-
- for (i = 0; i < cmdq_blocks; i++)
- free_wq_pages(&wq[i], cmdq_pages->hwif, wq[i].num_q_pages);
-
- cmdq_free_page(cmdq_pages);
-}
-
-static void copy_wqe_to_shadow(struct hinic_wq *wq, void *shadow_addr,
- int num_wqebbs, u16 idx)
-{
- void *wqebb_addr;
- int i;
-
- for (i = 0; i < num_wqebbs; i++, idx++) {
- idx = MASKED_WQE_IDX(wq, idx);
- wqebb_addr = WQ_PAGE_ADDR(wq, idx) +
- WQE_PAGE_OFF(wq, idx);
-
- memcpy(shadow_addr, wqebb_addr, wq->wqebb_size);
-
- shadow_addr += wq->wqebb_size;
- }
-}
-
-static void copy_wqe_from_shadow(struct hinic_wq *wq, void *shadow_addr,
- int num_wqebbs, u16 idx)
-{
- void *wqebb_addr;
- int i;
-
- for (i = 0; i < num_wqebbs; i++, idx++) {
- idx = MASKED_WQE_IDX(wq, idx);
- wqebb_addr = WQ_PAGE_ADDR(wq, idx) +
- WQE_PAGE_OFF(wq, idx);
-
- memcpy(wqebb_addr, shadow_addr, wq->wqebb_size);
- shadow_addr += wq->wqebb_size;
- }
-}
-
-/**
- * hinic_get_wqe - get wqe ptr in the current pi and update the pi
- * @wq: wq to get wqe from
- * @wqe_size: wqe size
- * @prod_idx: returned pi
- *
- * Return wqe pointer
- **/
-struct hinic_hw_wqe *hinic_get_wqe(struct hinic_wq *wq, unsigned int wqe_size,
- u16 *prod_idx)
-{
- int curr_pg, end_pg, num_wqebbs;
- u16 curr_prod_idx, end_prod_idx;
-
- *prod_idx = MASKED_WQE_IDX(wq, atomic_read(&wq->prod_idx));
-
- num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size;
-
- if (atomic_sub_return(num_wqebbs, &wq->delta) <= 0) {
- atomic_add(num_wqebbs, &wq->delta);
- return ERR_PTR(-EBUSY);
- }
-
- end_prod_idx = atomic_add_return(num_wqebbs, &wq->prod_idx);
-
- end_prod_idx = MASKED_WQE_IDX(wq, end_prod_idx);
- curr_prod_idx = end_prod_idx - num_wqebbs;
- curr_prod_idx = MASKED_WQE_IDX(wq, curr_prod_idx);
-
- /* end prod index points to the next wqebb, therefore minus 1 */
- end_prod_idx = MASKED_WQE_IDX(wq, end_prod_idx - 1);
-
- curr_pg = WQE_PAGE_NUM(wq, curr_prod_idx);
- end_pg = WQE_PAGE_NUM(wq, end_prod_idx);
-
- *prod_idx = curr_prod_idx;
-
- if (curr_pg != end_pg) {
- void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
-
- copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *prod_idx);
-
- wq->shadow_idx[curr_pg] = *prod_idx;
- return shadow_addr;
- }
-
- return WQ_PAGE_ADDR(wq, *prod_idx) + WQE_PAGE_OFF(wq, *prod_idx);
-}
-
-/**
- * hinic_put_wqe - release the wqe's wqebbs for reuse by a new wqe
- * @wq: wq to return wqe
- * @wqe_size: wqe size
- **/
-void hinic_put_wqe(struct hinic_wq *wq, unsigned int wqe_size)
-{
- int num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size;
-
- atomic_add(num_wqebbs, &wq->cons_idx);
-
- atomic_add(num_wqebbs, &wq->delta);
-}
-
-/**
- * hinic_read_wqe - read wqe ptr in the current ci
- * @wq: wq to read from
- * @wqe_size: wqe size
- * @cons_idx: returned ci
- *
- * Return wqe pointer
- **/
-struct hinic_hw_wqe *hinic_read_wqe(struct hinic_wq *wq, unsigned int wqe_size,
- u16 *cons_idx)
-{
- int num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size;
- u16 curr_cons_idx, end_cons_idx;
- int curr_pg, end_pg;
-
- if ((atomic_read(&wq->delta) + num_wqebbs) > wq->q_depth)
- return ERR_PTR(-EBUSY);
-
- curr_cons_idx = atomic_read(&wq->cons_idx);
-
- curr_cons_idx = MASKED_WQE_IDX(wq, curr_cons_idx);
- end_cons_idx = MASKED_WQE_IDX(wq, curr_cons_idx + num_wqebbs - 1);
-
- curr_pg = WQE_PAGE_NUM(wq, curr_cons_idx);
- end_pg = WQE_PAGE_NUM(wq, end_cons_idx);
-
- *cons_idx = curr_cons_idx;
-
- if (curr_pg != end_pg) {
- void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
-
- copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *cons_idx);
- return shadow_addr;
- }
-
- return WQ_PAGE_ADDR(wq, *cons_idx) + WQE_PAGE_OFF(wq, *cons_idx);
-}
-
-/**
- * hinic_read_wqe_direct - read wqe directly from ci position
- * @wq: wq
- * @cons_idx: ci position
- *
- * Return wqe
- **/
-struct hinic_hw_wqe *hinic_read_wqe_direct(struct hinic_wq *wq, u16 cons_idx)
-{
- return WQ_PAGE_ADDR(wq, cons_idx) + WQE_PAGE_OFF(wq, cons_idx);
-}
-
-/**
- * wqe_shadow - check if a wqe is a shadow wqe
- * @wq: wq of the wqe
- * @wqe: the wqe for shadow checking
- *
- * Return true - shadow, false - Not shadow
- **/
-static inline bool wqe_shadow(struct hinic_wq *wq, struct hinic_hw_wqe *wqe)
-{
- size_t wqe_shadow_size = wq->num_q_pages * wq->max_wqe_size;
-
- return WQE_IN_RANGE(wqe, wq->shadow_wqe,
- &wq->shadow_wqe[wqe_shadow_size]);
-}
-
-/**
- * hinic_write_wqe - write the wqe to the wq
- * @wq: wq to write wqe to
- * @wqe: wqe to write
- * @wqe_size: wqe size
- **/
-void hinic_write_wqe(struct hinic_wq *wq, struct hinic_hw_wqe *wqe,
- unsigned int wqe_size)
-{
- int curr_pg, num_wqebbs;
- void *shadow_addr;
- u16 prod_idx;
-
- if (wqe_shadow(wq, wqe)) {
- curr_pg = WQE_SHADOW_PAGE(wq, wqe);
-
- prod_idx = wq->shadow_idx[curr_pg];
- num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size;
- shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
-
- copy_wqe_from_shadow(wq, shadow_addr, num_wqebbs, prod_idx);
- }
-}
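
Worth noting before the file goes: the WQ ring above tracks occupancy
with a single delta counter of free wqebbs instead of comparing pi and
ci directly. A simplified, single-threaded model of the accounting in
hinic_get_wqe()/hinic_put_wqe() (the kernel uses atomics, and the
shadow-wqe page-crossing path is omitted here):

#include <stdio.h>

struct wq {
	unsigned int prod_idx, cons_idx;
	int delta;		/* free wqebbs, starts at q_depth */
	unsigned int mask;	/* q_depth - 1, q_depth a power of 2 */
};

/* reserve num_wqebbs at the PI; 0 on success with *prod_idx set */
static int wq_get(struct wq *wq, int num_wqebbs, unsigned int *prod_idx)
{
	if (wq->delta - num_wqebbs <= 0)
		return -1;	/* full: one wqebb is kept in reserve */
	wq->delta -= num_wqebbs;
	*prod_idx = wq->prod_idx & wq->mask;
	wq->prod_idx += num_wqebbs;
	return 0;
}

/* release num_wqebbs at the CI */
static void wq_put(struct wq *wq, int num_wqebbs)
{
	wq->cons_idx += num_wqebbs;
	wq->delta += num_wqebbs;
}

int main(void)
{
	struct wq wq = { .delta = 8, .mask = 7 };	/* q_depth = 8 */
	unsigned int pi;

	while (!wq_get(&wq, 2, &pi))
		printf("got wqe at pi %u, %d wqebbs free\n", pi, wq.delta);
	wq_put(&wq, 2);
	printf("after put: %d wqebbs free\n", wq.delta);
	return 0;
}

The <= 0 test mirrors the atomic_sub_return() check in hinic_get_wqe()
and keeps one wqebb permanently in reserve, so a full ring never
presents the same pi == ci picture as an empty one.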
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.h
deleted file mode 100644
index 9c030a0..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.h
+++ /dev/null
@@ -1,117 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_HW_WQ_H
-#define HINIC_HW_WQ_H
-
-#include <linux/types.h>
-#include <linux/semaphore.h>
-#include <linux/atomic.h>
-
-#include "hinic_hw_if.h"
-#include "hinic_hw_wqe.h"
-
-struct hinic_free_block {
- int page_idx;
- int block_idx;
-};
-
-struct hinic_wq {
- struct hinic_hwif *hwif;
-
- int page_idx;
- int block_idx;
-
- u16 wqebb_size;
- u16 wq_page_size;
- u16 q_depth;
- u16 max_wqe_size;
- u16 num_wqebbs_per_page;
-
- /* The addresses are 64 bit in the HW */
- u64 block_paddr;
- void **shadow_block_vaddr;
- u64 *block_vaddr;
-
- int num_q_pages;
- u8 *shadow_wqe;
- u16 *shadow_idx;
-
- atomic_t cons_idx;
- atomic_t prod_idx;
- atomic_t delta;
- u16 mask;
-};
-
-struct hinic_wqs {
- struct hinic_hwif *hwif;
- int num_pages;
-
- /* The addresses are 64 bit in the HW */
- u64 *page_paddr;
- u64 **page_vaddr;
- void ***shadow_page_vaddr;
-
- struct hinic_free_block *free_blocks;
- int alloc_blk_pos;
- int return_blk_pos;
- int num_free_blks;
-
- /* Lock for getting a free block from the WQ set */
- struct semaphore alloc_blocks_lock;
-};
-
-struct hinic_cmdq_pages {
- /* The addresses are 64 bit in the HW */
- u64 page_paddr;
- u64 *page_vaddr;
- void **shadow_page_vaddr;
-
- struct hinic_hwif *hwif;
-};
-
-int hinic_wqs_cmdq_alloc(struct hinic_cmdq_pages *cmdq_pages,
- struct hinic_wq *wq, struct hinic_hwif *hwif,
- int cmdq_blocks, u16 wqebb_size, u16 wq_page_size,
- u16 q_depth, u16 max_wqe_size);
-
-void hinic_wqs_cmdq_free(struct hinic_cmdq_pages *cmdq_pages,
- struct hinic_wq *wq, int cmdq_blocks);
-
-int hinic_wqs_alloc(struct hinic_wqs *wqs, int num_wqs,
- struct hinic_hwif *hwif);
-
-void hinic_wqs_free(struct hinic_wqs *wqs);
-
-int hinic_wq_allocate(struct hinic_wqs *wqs, struct hinic_wq *wq,
- u16 wqebb_size, u16 wq_page_size, u16 q_depth,
- u16 max_wqe_size);
-
-void hinic_wq_free(struct hinic_wqs *wqs, struct hinic_wq *wq);
-
-struct hinic_hw_wqe *hinic_get_wqe(struct hinic_wq *wq, unsigned int wqe_size,
- u16 *prod_idx);
-
-void hinic_put_wqe(struct hinic_wq *wq, unsigned int wqe_size);
-
-struct hinic_hw_wqe *hinic_read_wqe(struct hinic_wq *wq, unsigned int wqe_size,
- u16 *cons_idx);
-
-struct hinic_hw_wqe *hinic_read_wqe_direct(struct hinic_wq *wq, u16 cons_idx);
-
-void hinic_write_wqe(struct hinic_wq *wq, struct hinic_hw_wqe *wqe,
- unsigned int wqe_size);
-
-#endif
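
The WQ-set allocator in hinic_hw_wq.c hands out fixed 4 KB blocks through
a circular free list: wqs_next_block() consumes entries at alloc_blk_pos
while wqs_return_block() refills them at return_blk_pos, both positions
wrapping modulo WQS_MAX_NUM_BLOCKS. A minimal model of the scheme
(locking omitted; the kernel serializes with alloc_blocks_lock):

#include <stdio.h>

#define MAX_BLOCKS 8	/* kernel uses WQS_MAX_NUM_BLOCKS = 128 */

struct free_block { int page_idx, block_idx; };

static struct free_block blocks[MAX_BLOCKS];
static int alloc_pos, return_pos, num_free;

static int next_block(int *page_idx, int *block_idx)
{
	int pos;

	if (num_free-- <= 0) {	/* same decrement-then-check as the kernel */
		num_free++;
		return -1;
	}
	pos = alloc_pos++ & (MAX_BLOCKS - 1);
	*page_idx = blocks[pos].page_idx;
	*block_idx = blocks[pos].block_idx;
	return 0;
}

static void return_block(int page_idx, int block_idx)
{
	int pos = return_pos++ & (MAX_BLOCKS - 1);

	blocks[pos] = (struct free_block){ page_idx, block_idx };
	num_free++;
}

int main(void)
{
	int p, b, i;

	for (i = 0; i < MAX_BLOCKS; i++)	/* seed: 2 pages x 4 blocks */
		return_block(i / 4, i % 4);

	while (!next_block(&p, &b))
		printf("allocated page %d block %d\n", p, b);
	return 0;
}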
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h
deleted file mode 100644
index bc73485..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h
+++ /dev/null
@@ -1,368 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_HW_WQE_H
-#define HINIC_HW_WQE_H
-
-#include "hinic_common.h"
-
-#define HINIC_CMDQ_CTRL_PI_SHIFT 0
-#define HINIC_CMDQ_CTRL_CMD_SHIFT 16
-#define HINIC_CMDQ_CTRL_MOD_SHIFT 24
-#define HINIC_CMDQ_CTRL_ACK_TYPE_SHIFT 29
-#define HINIC_CMDQ_CTRL_HW_BUSY_BIT_SHIFT 31
-
-#define HINIC_CMDQ_CTRL_PI_MASK 0xFFFF
-#define HINIC_CMDQ_CTRL_CMD_MASK 0xFF
-#define HINIC_CMDQ_CTRL_MOD_MASK 0x1F
-#define HINIC_CMDQ_CTRL_ACK_TYPE_MASK 0x3
-#define HINIC_CMDQ_CTRL_HW_BUSY_BIT_MASK 0x1
-
-#define HINIC_CMDQ_CTRL_SET(val, member) \
- (((u32)(val) & HINIC_CMDQ_CTRL_##member##_MASK) \
- << HINIC_CMDQ_CTRL_##member##_SHIFT)
-
-#define HINIC_CMDQ_CTRL_GET(val, member) \
- (((val) >> HINIC_CMDQ_CTRL_##member##_SHIFT) \
- & HINIC_CMDQ_CTRL_##member##_MASK)
-
-#define HINIC_CMDQ_WQE_HEADER_BUFDESC_LEN_SHIFT 0
-#define HINIC_CMDQ_WQE_HEADER_COMPLETE_FMT_SHIFT 15
-#define HINIC_CMDQ_WQE_HEADER_DATA_FMT_SHIFT 22
-#define HINIC_CMDQ_WQE_HEADER_COMPLETE_REQ_SHIFT 23
-#define HINIC_CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_SHIFT 27
-#define HINIC_CMDQ_WQE_HEADER_CTRL_LEN_SHIFT 29
-#define HINIC_CMDQ_WQE_HEADER_TOGGLED_WRAPPED_SHIFT 31
-
-#define HINIC_CMDQ_WQE_HEADER_BUFDESC_LEN_MASK 0xFF
-#define HINIC_CMDQ_WQE_HEADER_COMPLETE_FMT_MASK 0x1
-#define HINIC_CMDQ_WQE_HEADER_DATA_FMT_MASK 0x1
-#define HINIC_CMDQ_WQE_HEADER_COMPLETE_REQ_MASK 0x1
-#define HINIC_CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_MASK 0x3
-#define HINIC_CMDQ_WQE_HEADER_CTRL_LEN_MASK 0x3
-#define HINIC_CMDQ_WQE_HEADER_TOGGLED_WRAPPED_MASK 0x1
-
-#define HINIC_CMDQ_WQE_HEADER_SET(val, member) \
- (((u32)(val) & HINIC_CMDQ_WQE_HEADER_##member##_MASK) \
- << HINIC_CMDQ_WQE_HEADER_##member##_SHIFT)
-
-#define HINIC_CMDQ_WQE_HEADER_GET(val, member) \
- (((val) >> HINIC_CMDQ_WQE_HEADER_##member##_SHIFT) \
- & HINIC_CMDQ_WQE_HEADER_##member##_MASK)
-
-#define HINIC_SQ_CTRL_BUFDESC_SECT_LEN_SHIFT 0
-#define HINIC_SQ_CTRL_TASKSECT_LEN_SHIFT 16
-#define HINIC_SQ_CTRL_DATA_FORMAT_SHIFT 22
-#define HINIC_SQ_CTRL_LEN_SHIFT 29
-
-#define HINIC_SQ_CTRL_BUFDESC_SECT_LEN_MASK 0xFF
-#define HINIC_SQ_CTRL_TASKSECT_LEN_MASK 0x1F
-#define HINIC_SQ_CTRL_DATA_FORMAT_MASK 0x1
-#define HINIC_SQ_CTRL_LEN_MASK 0x3
-
-#define HINIC_SQ_CTRL_QUEUE_INFO_MSS_SHIFT 13
-
-#define HINIC_SQ_CTRL_QUEUE_INFO_MSS_MASK 0x3FFF
-
-#define HINIC_SQ_CTRL_SET(val, member) \
- (((u32)(val) & HINIC_SQ_CTRL_##member##_MASK) \
- << HINIC_SQ_CTRL_##member##_SHIFT)
-
-#define HINIC_SQ_CTRL_GET(val, member) \
- (((val) >> HINIC_SQ_CTRL_##member##_SHIFT) \
- & HINIC_SQ_CTRL_##member##_MASK)
-
-#define HINIC_SQ_TASK_INFO0_L2HDR_LEN_SHIFT 0
-#define HINIC_SQ_TASK_INFO0_L4_OFFLOAD_SHIFT 8
-#define HINIC_SQ_TASK_INFO0_INNER_L3TYPE_SHIFT 10
-#define HINIC_SQ_TASK_INFO0_VLAN_OFFLOAD_SHIFT 12
-#define HINIC_SQ_TASK_INFO0_PARSE_FLAG_SHIFT 13
-/* 1 bit reserved */
-#define HINIC_SQ_TASK_INFO0_TSO_FLAG_SHIFT 15
-#define HINIC_SQ_TASK_INFO0_VLAN_TAG_SHIFT 16
-
-#define HINIC_SQ_TASK_INFO0_L2HDR_LEN_MASK 0xFF
-#define HINIC_SQ_TASK_INFO0_L4_OFFLOAD_MASK 0x3
-#define HINIC_SQ_TASK_INFO0_INNER_L3TYPE_MASK 0x3
-#define HINIC_SQ_TASK_INFO0_VLAN_OFFLOAD_MASK 0x1
-#define HINIC_SQ_TASK_INFO0_PARSE_FLAG_MASK 0x1
-/* 1 bit reserved */
-#define HINIC_SQ_TASK_INFO0_TSO_FLAG_MASK 0x1
-#define HINIC_SQ_TASK_INFO0_VLAN_TAG_MASK 0xFFFF
-
-#define HINIC_SQ_TASK_INFO0_SET(val, member) \
- (((u32)(val) & HINIC_SQ_TASK_INFO0_##member##_MASK) << \
- HINIC_SQ_TASK_INFO0_##member##_SHIFT)
-
-/* 8 bits reserved */
-#define HINIC_SQ_TASK_INFO1_MEDIA_TYPE_SHIFT 8
-#define HINIC_SQ_TASK_INFO1_INNER_L4_LEN_SHIFT 16
-#define HINIC_SQ_TASK_INFO1_INNER_L3_LEN_SHIFT 24
-
-/* 8 bits reserved */
-#define HINIC_SQ_TASK_INFO1_MEDIA_TYPE_MASK 0xFF
-#define HINIC_SQ_TASK_INFO1_INNER_L4_LEN_MASK 0xFF
-#define HINIC_SQ_TASK_INFO1_INNER_L3_LEN_MASK 0xFF
-
-#define HINIC_SQ_TASK_INFO1_SET(val, member) \
- (((u32)(val) & HINIC_SQ_TASK_INFO1_##member##_MASK) << \
- HINIC_SQ_TASK_INFO1_##member##_SHIFT)
-
-#define HINIC_SQ_TASK_INFO2_TUNNEL_L4_LEN_SHIFT 0
-#define HINIC_SQ_TASK_INFO2_OUTER_L3_LEN_SHIFT 12
-#define HINIC_SQ_TASK_INFO2_TUNNEL_L4TYPE_SHIFT 19
-/* 1 bit reserved */
-#define HINIC_SQ_TASK_INFO2_OUTER_L3TYPE_SHIFT 22
-/* 8 bits reserved */
-
-#define HINIC_SQ_TASK_INFO2_TUNNEL_L4_LEN_MASK 0xFFF
-#define HINIC_SQ_TASK_INFO2_OUTER_L3_LEN_MASK 0x7F
-#define HINIC_SQ_TASK_INFO2_TUNNEL_L4TYPE_MASK 0x3
-/* 1 bit reserved */
-#define HINIC_SQ_TASK_INFO2_OUTER_L3TYPE_MASK 0x3
-/* 8 bits reserved */
-
-#define HINIC_SQ_TASK_INFO2_SET(val, member) \
- (((u32)(val) & HINIC_SQ_TASK_INFO2_##member##_MASK) << \
- HINIC_SQ_TASK_INFO2_##member##_SHIFT)
-
-/* 31 bits reserved */
-#define HINIC_SQ_TASK_INFO4_L2TYPE_SHIFT 31
-
-/* 31 bits reserved */
-#define HINIC_SQ_TASK_INFO4_L2TYPE_MASK 0x1
-
-#define HINIC_SQ_TASK_INFO4_SET(val, member) \
- (((u32)(val) & HINIC_SQ_TASK_INFO4_##member##_MASK) << \
- HINIC_SQ_TASK_INFO4_##member##_SHIFT)
-
-#define HINIC_RQ_CQE_STATUS_RXDONE_SHIFT 31
-
-#define HINIC_RQ_CQE_STATUS_RXDONE_MASK 0x1
-
-#define HINIC_RQ_CQE_STATUS_GET(val, member) \
- (((val) >> HINIC_RQ_CQE_STATUS_##member##_SHIFT) & \
- HINIC_RQ_CQE_STATUS_##member##_MASK)
-
-#define HINIC_RQ_CQE_STATUS_CLEAR(val, member) \
- ((val) & (~(HINIC_RQ_CQE_STATUS_##member##_MASK << \
- HINIC_RQ_CQE_STATUS_##member##_SHIFT)))
-
-#define HINIC_RQ_CQE_SGE_LEN_SHIFT 16
-
-#define HINIC_RQ_CQE_SGE_LEN_MASK 0xFFFF
-
-#define HINIC_RQ_CQE_SGE_GET(val, member) \
- (((val) >> HINIC_RQ_CQE_SGE_##member##_SHIFT) & \
- HINIC_RQ_CQE_SGE_##member##_MASK)
-
-#define HINIC_RQ_CTRL_BUFDESC_SECT_LEN_SHIFT 0
-#define HINIC_RQ_CTRL_COMPLETE_FORMAT_SHIFT 15
-#define HINIC_RQ_CTRL_COMPLETE_LEN_SHIFT 27
-#define HINIC_RQ_CTRL_LEN_SHIFT 29
-
-#define HINIC_RQ_CTRL_BUFDESC_SECT_LEN_MASK 0xFF
-#define HINIC_RQ_CTRL_COMPLETE_FORMAT_MASK 0x1
-#define HINIC_RQ_CTRL_COMPLETE_LEN_MASK 0x3
-#define HINIC_RQ_CTRL_LEN_MASK 0x3
-
-#define HINIC_RQ_CTRL_SET(val, member) \
- (((u32)(val) & HINIC_RQ_CTRL_##member##_MASK) << \
- HINIC_RQ_CTRL_##member##_SHIFT)
-
-#define HINIC_SQ_WQE_SIZE(nr_sges) \
- (sizeof(struct hinic_sq_ctrl) + \
- sizeof(struct hinic_sq_task) + \
- (nr_sges) * sizeof(struct hinic_sq_bufdesc))
-
-#define HINIC_SCMD_DATA_LEN 16
-
-#define HINIC_MAX_SQ_BUFDESCS 17
-
-#define HINIC_SQ_WQE_MAX_SIZE 320
-#define HINIC_RQ_WQE_SIZE 32
-
-enum hinic_l4offload_type {
- HINIC_L4_OFF_DISABLE = 0,
- HINIC_TCP_OFFLOAD_ENABLE = 1,
- HINIC_SCTP_OFFLOAD_ENABLE = 2,
- HINIC_UDP_OFFLOAD_ENABLE = 3,
-};
-
-enum hinic_vlan_offload {
- HINIC_VLAN_OFF_DISABLE = 0,
- HINIC_VLAN_OFF_ENABLE = 1,
-};
-
-enum hinic_pkt_parsed {
- HINIC_PKT_NOT_PARSED = 0,
- HINIC_PKT_PARSED = 1,
-};
-
-enum hinic_outer_l3type {
- HINIC_OUTER_L3TYPE_UNKNOWN = 0,
- HINIC_OUTER_L3TYPE_IPV6 = 1,
- HINIC_OUTER_L3TYPE_IPV4_NO_CHKSUM = 2,
- HINIC_OUTER_L3TYPE_IPV4_CHKSUM = 3,
-};
-
-enum hinic_media_type {
- HINIC_MEDIA_UNKNOWN = 0,
-};
-
-enum hinic_l2type {
- HINIC_L2TYPE_ETH = 0,
-};
-
-enum hinc_tunnel_l4type {
- HINIC_TUNNEL_L4TYPE_UNKNOWN = 0,
-};
-
-struct hinic_cmdq_header {
- u32 header_info;
- u32 saved_data;
-};
-
-struct hinic_status {
- u32 status_info;
-};
-
-struct hinic_ctrl {
- u32 ctrl_info;
-};
-
-struct hinic_sge_resp {
- struct hinic_sge sge;
- u32 rsvd;
-};
-
-struct hinic_cmdq_completion {
- /* HW Format */
- union {
- struct hinic_sge_resp sge_resp;
- u64 direct_resp;
- };
-};
-
-struct hinic_scmd_bufdesc {
- u32 buf_len;
- u32 rsvd;
- u8 data[HINIC_SCMD_DATA_LEN];
-};
-
-struct hinic_lcmd_bufdesc {
- struct hinic_sge sge;
- u32 rsvd1;
- u64 rsvd2;
- u64 rsvd3;
-};
-
-struct hinic_cmdq_wqe_scmd {
- struct hinic_cmdq_header header;
- u64 rsvd;
- struct hinic_status status;
- struct hinic_ctrl ctrl;
- struct hinic_cmdq_completion completion;
- struct hinic_scmd_bufdesc buf_desc;
-};
-
-struct hinic_cmdq_wqe_lcmd {
- struct hinic_cmdq_header header;
- struct hinic_status status;
- struct hinic_ctrl ctrl;
- struct hinic_cmdq_completion completion;
- struct hinic_lcmd_bufdesc buf_desc;
-};
-
-struct hinic_cmdq_direct_wqe {
- struct hinic_cmdq_wqe_scmd wqe_scmd;
-};
-
-struct hinic_cmdq_wqe {
- /* HW Format */
- union {
- struct hinic_cmdq_direct_wqe direct_wqe;
- struct hinic_cmdq_wqe_lcmd wqe_lcmd;
- };
-};
-
-struct hinic_sq_ctrl {
- u32 ctrl_info;
- u32 queue_info;
-};
-
-struct hinic_sq_task {
- u32 pkt_info0;
- u32 pkt_info1;
- u32 pkt_info2;
- u32 ufo_v6_identify;
- u32 pkt_info4;
- u32 zero_pad;
-};
-
-struct hinic_sq_bufdesc {
- struct hinic_sge sge;
- u32 rsvd;
-};
-
-struct hinic_sq_wqe {
- struct hinic_sq_ctrl ctrl;
- struct hinic_sq_task task;
- struct hinic_sq_bufdesc buf_descs[HINIC_MAX_SQ_BUFDESCS];
-};
-
-struct hinic_rq_cqe {
- u32 status;
- u32 len;
-
- u32 rsvd2;
- u32 rsvd3;
- u32 rsvd4;
- u32 rsvd5;
- u32 rsvd6;
- u32 rsvd7;
-};
-
-struct hinic_rq_ctrl {
- u32 ctrl_info;
-};
-
-struct hinic_rq_cqe_sect {
- struct hinic_sge sge;
- u32 rsvd;
-};
-
-struct hinic_rq_bufdesc {
- u32 hi_addr;
- u32 lo_addr;
-};
-
-struct hinic_rq_wqe {
- struct hinic_rq_ctrl ctrl;
- u32 rsvd;
- struct hinic_rq_cqe_sect cqe_sect;
- struct hinic_rq_bufdesc buf_desc;
-};
-
-struct hinic_hw_wqe {
- /* HW Format */
- union {
- struct hinic_cmdq_wqe cmdq_wqe;
- struct hinic_sq_wqe sq_wqe;
- struct hinic_rq_wqe rq_wqe;
- };
-};
-
-#endif
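
The SQ WQE is variable-sized: a fixed ctrl and task section followed by
up to HINIC_MAX_SQ_BUFDESCS scatter-gather descriptors. A quick check of
HINIC_SQ_WQE_SIZE() against HINIC_SQ_WQE_MAX_SIZE, assuming struct
hinic_sge from hinic_common.h is three u32s (hi addr, lo addr, len),
which matches how it is consumed elsewhere in the driver:

#include <stdio.h>
#include <stdint.h>

struct sge { uint32_t hi_addr, lo_addr, len; };	/* assumed hinic_sge */
struct sq_ctrl { uint32_t ctrl_info, queue_info; };
struct sq_task { uint32_t info[6]; };
struct sq_bufdesc { struct sge sge; uint32_t rsvd; };

#define SQ_WQE_SIZE(nr_sges) \
	(sizeof(struct sq_ctrl) + sizeof(struct sq_task) + \
	 (nr_sges) * sizeof(struct sq_bufdesc))

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((size_t)(a) - 1))

int main(void)
{
	size_t max = SQ_WQE_SIZE(17);	/* HINIC_MAX_SQ_BUFDESCS */

	printf("wqe size with 17 sges: %zu\n", max);	/* 304 bytes */
	printf("rounded to 64 B wqebbs: %zu\n", ALIGN_UP(max, 64));
	return 0;
}

Rounded up to 64-byte SQ wqebbs the 304 bytes become exactly the 320 of
HINIC_SQ_WQE_MAX_SIZE, i.e. five wqebbs.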
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port.c b/drivers/net/ethernet/huawei/hinic/hinic_port.c
deleted file mode 100644
index 4d4e3f0..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_port.c
+++ /dev/null
@@ -1,379 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#include <linux/types.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/if_vlan.h>
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/errno.h>
-
-#include "hinic_hw_if.h"
-#include "hinic_hw_dev.h"
-#include "hinic_port.h"
-#include "hinic_dev.h"
-
-#define HINIC_MIN_MTU_SIZE 256
-#define HINIC_MAX_JUMBO_FRAME_SIZE 15872
-
-enum mac_op {
- MAC_DEL,
- MAC_SET,
-};
-
-/**
- * change_mac - change (add or delete) mac address
- * @nic_dev: nic device
- * @addr: mac address
- * @vlan_id: vlan number to set with the mac
- * @op: add or delete the mac
- *
- * Return 0 - Success, negative - Failure
- **/
-static int change_mac(struct hinic_dev *nic_dev, const u8 *addr,
- u16 vlan_id, enum mac_op op)
-{
- struct net_device *netdev = nic_dev->netdev;
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_port_mac_cmd port_mac_cmd;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- enum hinic_port_cmd cmd;
- u16 out_size;
- int err;
-
- if (vlan_id >= VLAN_N_VID) {
- netif_err(nic_dev, drv, netdev, "Invalid VLAN number\n");
- return -EINVAL;
- }
-
- if (op == MAC_SET)
- cmd = HINIC_PORT_CMD_SET_MAC;
- else
- cmd = HINIC_PORT_CMD_DEL_MAC;
-
- port_mac_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
- port_mac_cmd.vlan_id = vlan_id;
- memcpy(port_mac_cmd.mac, addr, ETH_ALEN);
-
- err = hinic_port_msg_cmd(hwdev, cmd, &port_mac_cmd,
- sizeof(port_mac_cmd),
- &port_mac_cmd, &out_size);
- if (err || (out_size != sizeof(port_mac_cmd)) || port_mac_cmd.status) {
- dev_err(&pdev->dev, "Failed to change MAC, ret = %d\n",
- port_mac_cmd.status);
- return -EFAULT;
- }
-
- return 0;
-}
-
-/**
- * hinic_port_add_mac - add mac address
- * @nic_dev: nic device
- * @addr: mac address
- * @vlan_id: vlan number to set with the mac
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_add_mac(struct hinic_dev *nic_dev,
- const u8 *addr, u16 vlan_id)
-{
- return change_mac(nic_dev, addr, vlan_id, MAC_SET);
-}
-
-/**
- * hinic_port_del_mac - remove mac address
- * @nic_dev: nic device
- * @addr: mac address
- * @vlan_id: vlan number that is connected to the mac
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_del_mac(struct hinic_dev *nic_dev, const u8 *addr,
- u16 vlan_id)
-{
- return change_mac(nic_dev, addr, vlan_id, MAC_DEL);
-}
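
Both wrappers above funnel into change_mac(); vlan_id 0 covers the untagged case. A minimal caller sketch (illustrative only, not part of this patch; example_set_untagged_mac is a hypothetical name):

	/* Hypothetical usage: install a MAC for untagged traffic.
	 * change_mac() rejects vlan ids >= VLAN_N_VID.
	 */
	static int example_set_untagged_mac(struct hinic_dev *nic_dev,
					    const u8 *mac)
	{
		return hinic_port_add_mac(nic_dev, mac, 0);
	}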
-
-/**
- * hinic_port_get_mac - get the mac address of the nic device
- * @nic_dev: nic device
- * @addr: returned mac address
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_get_mac(struct hinic_dev *nic_dev, u8 *addr)
-{
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_port_mac_cmd port_mac_cmd;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- u16 out_size;
- int err;
-
- port_mac_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
-
- err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_MAC,
- &port_mac_cmd, sizeof(port_mac_cmd),
- &port_mac_cmd, &out_size);
- if (err || (out_size != sizeof(port_mac_cmd)) || port_mac_cmd.status) {
- dev_err(&pdev->dev, "Failed to get mac, ret = %d\n",
- port_mac_cmd.status);
- return -EFAULT;
- }
-
- memcpy(addr, port_mac_cmd.mac, ETH_ALEN);
- return 0;
-}
-
-/**
- * hinic_port_set_mtu - set mtu
- * @nic_dev: nic device
- * @new_mtu: new mtu
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_set_mtu(struct hinic_dev *nic_dev, int new_mtu)
-{
- struct net_device *netdev = nic_dev->netdev;
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_port_mtu_cmd port_mtu_cmd;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- int err, max_frame;
- u16 out_size;
-
- if (new_mtu < HINIC_MIN_MTU_SIZE) {
- netif_err(nic_dev, drv, netdev, "mtu < MIN MTU size");
- return -EINVAL;
- }
-
- max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;
- if (max_frame > HINIC_MAX_JUMBO_FRAME_SIZE) {
- netif_err(nic_dev, drv, netdev, "mtu > MAX MTU size");
- return -EINVAL;
- }
-
- port_mtu_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
- port_mtu_cmd.mtu = new_mtu;
-
- err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_CHANGE_MTU,
- &port_mtu_cmd, sizeof(port_mtu_cmd),
- &port_mtu_cmd, &out_size);
- if (err || (out_size != sizeof(port_mtu_cmd)) || port_mtu_cmd.status) {
- dev_err(&pdev->dev, "Failed to set mtu, ret = %d\n",
- port_mtu_cmd.status);
- return -EFAULT;
- }
-
- return 0;
-}
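
Worked out, the two checks above pin new_mtu to [256, 15854]: the upper bound is HINIC_MAX_JUMBO_FRAME_SIZE minus the Ethernet header and FCS. A one-line sketch of the derived constant (illustrative; this define does not exist in the driver):

	/* 15872 - 14 (ETH_HLEN) - 4 (ETH_FCS_LEN) = 15854 bytes */
	#define EXAMPLE_HINIC_MAX_MTU \
		(HINIC_MAX_JUMBO_FRAME_SIZE - ETH_HLEN - ETH_FCS_LEN)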
-
-/**
- * hinic_port_add_vlan - add vlan to the nic device
- * @nic_dev: nic device
- * @vlan_id: the vlan number to add
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_add_vlan(struct hinic_dev *nic_dev, u16 vlan_id)
-{
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_port_vlan_cmd port_vlan_cmd;
-
- port_vlan_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwdev->hwif);
- port_vlan_cmd.vlan_id = vlan_id;
-
- return hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_ADD_VLAN,
- &port_vlan_cmd, sizeof(port_vlan_cmd),
- NULL, NULL);
-}
-
-/**
- * hinic_port_del_vlan - delete vlan from the nic device
- * @nic_dev: nic device
- * @vlan_id: the vlan number to delete
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_del_vlan(struct hinic_dev *nic_dev, u16 vlan_id)
-{
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_port_vlan_cmd port_vlan_cmd;
-
- port_vlan_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwdev->hwif);
- port_vlan_cmd.vlan_id = vlan_id;
-
- return hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_DEL_VLAN,
- &port_vlan_cmd, sizeof(port_vlan_cmd),
- NULL, NULL);
-}
-
-/**
- * hinic_port_set_rx_mode - set rx mode in the nic device
- * @nic_dev: nic device
- * @rx_mode: the rx mode to set
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_set_rx_mode(struct hinic_dev *nic_dev, u32 rx_mode)
-{
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_port_rx_mode_cmd rx_mode_cmd;
-
- rx_mode_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwdev->hwif);
- rx_mode_cmd.rx_mode = rx_mode;
-
- return hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_RX_MODE,
- &rx_mode_cmd, sizeof(rx_mode_cmd),
- NULL, NULL);
-}
-
-/**
- * hinic_port_link_state - get the link state
- * @nic_dev: nic device
- * @link_state: the returned link state
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_link_state(struct hinic_dev *nic_dev,
- enum hinic_port_link_state *link_state)
-{
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct hinic_port_link_cmd link_cmd;
- struct pci_dev *pdev = hwif->pdev;
- u16 out_size;
- int err;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "unsupported PCI Function type\n");
- return -EINVAL;
- }
-
- link_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
-
- err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_LINK_STATE,
- &link_cmd, sizeof(link_cmd),
- &link_cmd, &out_size);
- if (err || (out_size != sizeof(link_cmd)) || link_cmd.status) {
- dev_err(&pdev->dev, "Failed to get link state, ret = %d\n",
- link_cmd.status);
- return -EINVAL;
- }
-
- *link_state = link_cmd.state;
- return 0;
-}
-
-/**
- * hinic_port_set_state - set port state
- * @nic_dev: nic device
- * @state: the state to set
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_set_state(struct hinic_dev *nic_dev, enum hinic_port_state state)
-{
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_port_state_cmd port_state;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- u16 out_size;
- int err;
-
- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
- dev_err(&pdev->dev, "unsupported PCI Function type\n");
- return -EINVAL;
- }
-
- port_state.state = state;
-
- err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_PORT_STATE,
- &port_state, sizeof(port_state),
- &port_state, &out_size);
- if (err || (out_size != sizeof(port_state)) || port_state.status) {
- dev_err(&pdev->dev, "Failed to set port state, ret = %d\n",
- port_state.status);
- return -EFAULT;
- }
-
- return 0;
-}
-
-/**
- * hinic_port_set_func_state - set func device state
- * @nic_dev: nic device
- * @state: the state to set
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_set_func_state(struct hinic_dev *nic_dev,
- enum hinic_func_port_state state)
-{
- struct hinic_port_func_state_cmd func_state;
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- u16 out_size;
- int err;
-
- func_state.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
- func_state.state = state;
-
- err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_FUNC_STATE,
- &func_state, sizeof(func_state),
- &func_state, &out_size);
- if (err || (out_size != sizeof(func_state)) || func_state.status) {
- dev_err(&pdev->dev, "Failed to set port func state, ret = %d\n",
- func_state.status);
- return -EFAULT;
- }
-
- return 0;
-}
-
-/**
- * hinic_port_get_cap - get port capabilities
- * @nic_dev: nic device
- * @port_cap: returned port capabilities
- *
- * Return 0 - Success, negative - Failure
- **/
-int hinic_port_get_cap(struct hinic_dev *nic_dev,
- struct hinic_port_cap *port_cap)
-{
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_hwif *hwif = hwdev->hwif;
- struct pci_dev *pdev = hwif->pdev;
- u16 out_size;
- int err;
-
- port_cap->func_idx = HINIC_HWIF_FUNC_IDX(hwif);
-
- err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_CAP,
- port_cap, sizeof(*port_cap),
- port_cap, &out_size);
- if (err || (out_size != sizeof(*port_cap)) || port_cap->status) {
- dev_err(&pdev->dev,
- "Failed to get port capabilities, ret = %d\n",
- port_cap->status);
- return -EINVAL;
- }
-
- return 0;
-}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port.h b/drivers/net/ethernet/huawei/hinic/hinic_port.h
deleted file mode 100644
index 9404365..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_port.h
+++ /dev/null
@@ -1,198 +0,0 @@
-/*
- * Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef HINIC_PORT_H
-#define HINIC_PORT_H
-
-#include <linux/types.h>
-#include <linux/etherdevice.h>
-#include <linux/bitops.h>
-
-#include "hinic_dev.h"
-
-enum hinic_rx_mode {
- HINIC_RX_MODE_UC = BIT(0),
- HINIC_RX_MODE_MC = BIT(1),
- HINIC_RX_MODE_BC = BIT(2),
- HINIC_RX_MODE_MC_ALL = BIT(3),
- HINIC_RX_MODE_PROMISC = BIT(4),
-};
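
These are single-bit flags, so callers compose them with OR before handing the result to hinic_port_set_rx_mode() (declared below). A sketch of a typical non-promiscuous default, assuming a probed nic_dev:

	/* Accept unicast, multicast and broadcast frames. */
	u32 rx_mode = HINIC_RX_MODE_UC | HINIC_RX_MODE_MC | HINIC_RX_MODE_BC;
	/* err = hinic_port_set_rx_mode(nic_dev, rx_mode); */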
-
-enum hinic_port_link_state {
- HINIC_LINK_STATE_DOWN,
- HINIC_LINK_STATE_UP,
-};
-
-enum hinic_port_state {
- HINIC_PORT_DISABLE = 0,
- HINIC_PORT_ENABLE = 3,
-};
-
-enum hinic_func_port_state {
- HINIC_FUNC_PORT_DISABLE = 0,
- HINIC_FUNC_PORT_ENABLE = 2,
-};
-
-enum hinic_autoneg_cap {
- HINIC_AUTONEG_UNSUPPORTED,
- HINIC_AUTONEG_SUPPORTED,
-};
-
-enum hinic_autoneg_state {
- HINIC_AUTONEG_DISABLED,
- HINIC_AUTONEG_ACTIVE,
-};
-
-enum hinic_duplex {
- HINIC_DUPLEX_HALF,
- HINIC_DUPLEX_FULL,
-};
-
-enum hinic_speed {
- HINIC_SPEED_10MB_LINK = 0,
- HINIC_SPEED_100MB_LINK,
- HINIC_SPEED_1000MB_LINK,
- HINIC_SPEED_10GB_LINK,
- HINIC_SPEED_25GB_LINK,
- HINIC_SPEED_40GB_LINK,
- HINIC_SPEED_100GB_LINK,
-
- HINIC_SPEED_UNKNOWN = 0xFF,
-};
-
-struct hinic_port_mac_cmd {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u16 vlan_id;
- u16 rsvd1;
- unsigned char mac[ETH_ALEN];
-};
-
-struct hinic_port_mtu_cmd {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u16 rsvd1;
- u32 mtu;
-};
-
-struct hinic_port_vlan_cmd {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u16 vlan_id;
-};
-
-struct hinic_port_rx_mode_cmd {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u16 rsvd;
- u32 rx_mode;
-};
-
-struct hinic_port_link_cmd {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u8 state;
- u8 rsvd1;
-};
-
-struct hinic_port_state_cmd {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u8 state;
- u8 rsvd1[3];
-};
-
-struct hinic_port_link_status {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 rsvd1;
- u8 link;
- u8 rsvd2;
-};
-
-struct hinic_port_func_state_cmd {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u16 rsvd1;
- u8 state;
- u8 rsvd2[3];
-};
-
-struct hinic_port_cap {
- u8 status;
- u8 version;
- u8 rsvd0[6];
-
- u16 func_idx;
- u16 rsvd1;
- u8 port_type;
- u8 autoneg_cap;
- u8 autoneg_state;
- u8 duplex;
- u8 speed;
- u8 rsvd2[3];
-};
-
-int hinic_port_add_mac(struct hinic_dev *nic_dev, const u8 *addr,
- u16 vlan_id);
-
-int hinic_port_del_mac(struct hinic_dev *nic_dev, const u8 *addr,
- u16 vlan_id);
-
-int hinic_port_get_mac(struct hinic_dev *nic_dev, u8 *addr);
-
-int hinic_port_set_mtu(struct hinic_dev *nic_dev, int new_mtu);
-
-int hinic_port_add_vlan(struct hinic_dev *nic_dev, u16 vlan_id);
-
-int hinic_port_del_vlan(struct hinic_dev *nic_dev, u16 vlan_id);
-
-int hinic_port_set_rx_mode(struct hinic_dev *nic_dev, u32 rx_mode);
-
-int hinic_port_link_state(struct hinic_dev *nic_dev,
- enum hinic_port_link_state *link_state);
-
-int hinic_port_set_state(struct hinic_dev *nic_dev,
- enum hinic_port_state state);
-
-int hinic_port_set_func_state(struct hinic_dev *nic_dev,
- enum hinic_func_port_state state);
-
-int hinic_port_get_cap(struct hinic_dev *nic_dev,
- struct hinic_port_cap *port_cap);
-
-#endif
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_sml_table.h b/drivers/net/ethernet/huawei/hinic/hinic_sml_table.h
deleted file mode 100644
index b837dab..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_sml_table.h
+++ /dev/null
@@ -1,2728 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef __SML_TABLE_H__
-#define __SML_TABLE_H__
-
-#include "hinic_sml_table_pub.h"
-
-#ifdef __cplusplus
-#if __cplusplus
-extern "C" {
-#endif
-#endif /* __cplusplus */
-
-#define TBL_ID_CTR_DFX_S32_SM_NODE 11
-#define TBL_ID_CTR_DFX_S32_SM_INST 20
-
-#define TBL_ID_CTR_DFX_PAIR_SM_NODE 10
-#define TBL_ID_CTR_DFX_PAIR_SM_INST 24
-
-#define TBL_ID_CTR_DFX_S64_SM_NODE 11
-#define TBL_ID_CTR_DFX_S64_SM_INST 21
-
-#if (!defined(__UP_FPGA__) && (!defined(HI1822_MODE_FPGA)) && \
- (!defined(__FPGA__)))
-
-#define TBL_ID_GLOBAL_SM_NODE 10
-#define TBL_ID_GLOBAL_SM_INST 1
-
-#define TBL_ID_PORT_CFG_SM_NODE 10
-#define TBL_ID_PORT_CFG_SM_INST 2
-
-#define TBL_ID_VLAN_SM_NODE 10
-#define TBL_ID_VLAN_SM_INST 3
-
-#define TBL_ID_MULTICAST_SM_NODE 10
-#define TBL_ID_MULTICAST_SM_INST 4
-
-#define TBL_ID_MISC_RSS_HASH0_SM_NODE 10
-#define TBL_ID_MISC_RSS_HASH0_SM_INST 5
-
-#define TBL_ID_FIC_VOQ_MAP_SM_NODE 10
-#define TBL_ID_FIC_VOQ_MAP_SM_INST 6
-
-#define TBL_ID_CAR_SM_NODE 10
-#define TBL_ID_CAR_SM_INST 7
-
-#define TBL_ID_IPMAC_FILTER_SM_NODE 10
-#define TBL_ID_IPMAC_FILTER_SM_INST 8
-
-#define TBL_ID_GLOBAL_QUE_MAP_SM_NODE 10
-#define TBL_ID_GLOBAL_QUE_MAP_SM_INST 9
-
-#define TBL_ID_CTR_VSW_FUNC_MIB_SM_NODE 10
-#define TBL_ID_CTR_VSW_FUNC_MIB_SM_INST 10
-
-#define TBL_ID_UCODE_EXEC_INFO_SM_NODE 10
-#define TBL_ID_UCODE_EXEC_INFO_SM_INST 11
-
-#define TBL_ID_RQ_IQ_MAPPING_SM_NODE 10
-#define TBL_ID_RQ_IQ_MAPPING_SM_INST 12
-
-#define TBL_ID_MAC_SM_NODE 10
-#define TBL_ID_MAC_SM_INST 21
-
-#define TBL_ID_MAC_BHEAP_SM_NODE 10
-#define TBL_ID_MAC_BHEAP_SM_INST 22
-
-#define TBL_ID_MAC_MISC_SM_NODE 10
-#define TBL_ID_MAC_MISC_SM_INST 23
-
-#define TBL_ID_FUNC_CFG_SM_NODE 11
-#define TBL_ID_FUNC_CFG_SM_INST 1
-
-#define TBL_ID_TRUNK_FWD_SM_NODE 11
-#define TBL_ID_TRUNK_FWD_SM_INST 2
-
-#define TBL_ID_VLAN_FILTER_SM_NODE 11
-#define TBL_ID_VLAN_FILTER_SM_INST 3
-
-#define TBL_ID_ELB_SM_NODE 11
-#define TBL_ID_ELB_SM_INST 4
-
-#define TBL_ID_MISC_RSS_HASH1_SM_NODE 11
-#define TBL_ID_MISC_RSS_HASH1_SM_INST 5
-
-#define TBL_ID_RSS_CONTEXT_SM_NODE 11
-#define TBL_ID_RSS_CONTEXT_SM_INST 6
-
-#define TBL_ID_ETHERTYPE_FILTER_SM_NODE 11
-#define TBL_ID_ETHERTYPE_FILTER_SM_INST 7
-
-#define TBL_ID_VTEP_IP_SM_NODE 11
-#define TBL_ID_VTEP_IP_SM_INST 8
-
-#define TBL_ID_NAT_SM_NODE 11
-#define TBL_ID_NAT_SM_INST 9
-
-#define TBL_ID_BHEAP_LRO_AGING_SM_NODE 11
-#define TBL_ID_BHEAP_LRO_AGING_SM_INST 10
-
-#define TBL_ID_MISC_LRO_AGING_SM_NODE 11
-#define TBL_ID_MISC_LRO_AGING_SM_INST 11
-
-#define TBL_ID_BHEAP_CQE_AGING_SM_NODE 11
-#define TBL_ID_BHEAP_CQE_AGING_SM_INST 12
-
-#define TBL_ID_MISC_CQE_AGING_SM_NODE 11
-#define TBL_ID_MISC_CQE_AGING_SM_INST 13
-
-#define TBL_ID_DFX_LOG_POINTER_SM_NODE 11
-#define TBL_ID_DFX_LOG_POINTER_SM_INST 14
-
-#define TBL_ID_CTR_VSW_FUNC_S32_DROP_ERR_SM_NODE 11
-#define TBL_ID_CTR_VSW_FUNC_S32_DROP_ERR_SM_INST 15
-
-#define TBL_ID_CTR_VSW_FUNC_S32_DFX_SM_NODE 11
-#define TBL_ID_CTR_VSW_FUNC_S32_DFX_SM_INST 16
-
-#define TBL_ID_CTR_COMM_FUNC_S32_SM_NODE 11
-#define TBL_ID_CTR_COMM_FUNC_S32_SM_INST 17
-
-#define TBL_ID_CTR_SRIOV_FUNC_PAIR_SM_NODE 11
-#define TBL_ID_CTR_SRIOV_FUNC_PAIR_SM_INST 41
-
-#define TBL_ID_CTR_SRIOV_FUNC_S32_SM_NODE 11
-#define TBL_ID_CTR_SRIOV_FUNC_S32_SM_INST 42
-
-#define TBL_ID_CTR_OVS_FUNC_S64_SM_NODE 11
-#define TBL_ID_CTR_OVS_FUNC_S64_SM_INST 43
-
-#define TBL_ID_CTR_XOE_FUNC_PAIR_SM_NODE 11
-#define TBL_ID_CTR_XOE_FUNC_PAIR_SM_INST 44
-
-#define TBL_ID_CTR_XOE_FUNC_S32_SM_NODE 11
-#define TBL_ID_CTR_XOE_FUNC_S32_SM_INST 45
-
-#define TBL_ID_CTR_SYS_GLB_S32_SM_NODE 11
-#define TBL_ID_CTR_SYS_GLB_S32_SM_INST 46
-
-#define TBL_ID_CTR_VSW_GLB_S32_SM_NODE 11
-#define TBL_ID_CTR_VSW_GLB_S32_SM_INST 47
-
-#define TBL_ID_CTR_ROCE_GLB_S32_SM_NODE 11
-#define TBL_ID_CTR_ROCE_GLB_S32_SM_INST 48
-
-#define TBL_ID_CTR_COMM_GLB_S32_SM_NODE 11
-#define TBL_ID_CTR_COMM_GLB_S32_SM_INST 49
-
-#define TBL_ID_CTR_XOE_GLB_S32_SM_NODE 11
-#define TBL_ID_CTR_XOE_GLB_S32_SM_INST 50
-
-#define TBL_ID_CTR_OVS_GLB_S64_SM_NODE 11
-#define TBL_ID_CTR_OVS_GLB_S64_SM_INST 51
-
-#define TBL_ID_RWLOCK_ROCE_SM_NODE 11
-#define TBL_ID_RWLOCK_ROCE_SM_INST 30
-
-#define TBL_ID_CQE_ADDR_SM_NODE 11
-#define TBL_ID_CQE_ADDR_SM_INST 31
-
-#else
-
-#define TBL_ID_GLOBAL_SM_NODE 10
-#define TBL_ID_GLOBAL_SM_INST 1
-
-#define TBL_ID_PORT_CFG_SM_NODE 10
-#define TBL_ID_PORT_CFG_SM_INST 2
-
-#define TBL_ID_VLAN_SM_NODE 10
-#define TBL_ID_VLAN_SM_INST 3
-
-#define TBL_ID_MULTICAST_SM_NODE 10
-#define TBL_ID_MULTICAST_SM_INST 4
-
-#define TBL_ID_MISC_RSS_HASH0_SM_NODE 10
-#define TBL_ID_MISC_RSS_HASH0_SM_INST 5
-
-#define TBL_ID_FIC_VOQ_MAP_SM_NODE 10
-#define TBL_ID_FIC_VOQ_MAP_SM_INST 6
-
-#define TBL_ID_CAR_SM_NODE 10
-#define TBL_ID_CAR_SM_INST 7
-
-#define TBL_ID_IPMAC_FILTER_SM_NODE 10
-#define TBL_ID_IPMAC_FILTER_SM_INST 8
-
-#define TBL_ID_GLOBAL_QUE_MAP_SM_NODE 10
-#define TBL_ID_GLOBAL_QUE_MAP_SM_INST 9
-
-#define TBL_ID_CTR_VSW_FUNC_MIB_SM_NODE 10
-#define TBL_ID_CTR_VSW_FUNC_MIB_SM_INST 10
-
-#define TBL_ID_UCODE_EXEC_INFO_SM_NODE 10
-#define TBL_ID_UCODE_EXEC_INFO_SM_INST 11
-
-#define TBL_ID_RQ_IQ_MAPPING_SM_NODE 10
-#define TBL_ID_RQ_IQ_MAPPING_SM_INST 12
-
-#define TBL_ID_MAC_SM_NODE 10
-#define TBL_ID_MAC_SM_INST 13
-
-#define TBL_ID_MAC_BHEAP_SM_NODE 10
-#define TBL_ID_MAC_BHEAP_SM_INST 14
-
-#define TBL_ID_MAC_MISC_SM_NODE 10
-#define TBL_ID_MAC_MISC_SM_INST 15
-
-#define TBL_ID_FUNC_CFG_SM_NODE 10
-#define TBL_ID_FUNC_CFG_SM_INST 16
-
-#define TBL_ID_TRUNK_FWD_SM_NODE 10
-#define TBL_ID_TRUNK_FWD_SM_INST 17
-
-#define TBL_ID_VLAN_FILTER_SM_NODE 10
-#define TBL_ID_VLAN_FILTER_SM_INST 18
-
-#define TBL_ID_ELB_SM_NODE 10
-#define TBL_ID_ELB_SM_INST 19
-
-#define TBL_ID_MISC_RSS_HASH1_SM_NODE 10
-#define TBL_ID_MISC_RSS_HASH1_SM_INST 20
-
-#define TBL_ID_RSS_CONTEXT_SM_NODE 10
-#define TBL_ID_RSS_CONTEXT_SM_INST 21
-
-#define TBL_ID_ETHERTYPE_FILTER_SM_NODE 10
-#define TBL_ID_ETHERTYPE_FILTER_SM_INST 22
-
-#define TBL_ID_VTEP_IP_SM_NODE 10
-#define TBL_ID_VTEP_IP_SM_INST 23
-
-#define TBL_ID_NAT_SM_NODE 10
-#define TBL_ID_NAT_SM_INST 24
-
-#define TBL_ID_BHEAP_LRO_AGING_SM_NODE 10
-#define TBL_ID_BHEAP_LRO_AGING_SM_INST 25
-
-#define TBL_ID_MISC_LRO_AGING_SM_NODE 10
-#define TBL_ID_MISC_LRO_AGING_SM_INST 26
-
-#define TBL_ID_BHEAP_CQE_AGING_SM_NODE 10
-#define TBL_ID_BHEAP_CQE_AGING_SM_INST 27
-
-#define TBL_ID_MISC_CQE_AGING_SM_NODE 10
-#define TBL_ID_MISC_CQE_AGING_SM_INST 28
-
-#define TBL_ID_DFX_LOG_POINTER_SM_NODE 10
-#define TBL_ID_DFX_LOG_POINTER_SM_INST 29
-
-#define TBL_ID_CTR_VSW_FUNC_S32_DROP_ERR_SM_NODE 10
-#define TBL_ID_CTR_VSW_FUNC_S32_DROP_ERR_SM_INST 40
-
-#define TBL_ID_CTR_VSW_FUNC_S32_DFX_SM_NODE 10
-#define TBL_ID_CTR_VSW_FUNC_S32_DFX_SM_INST 41
-
-#define TBL_ID_CTR_COMM_FUNC_S32_SM_NODE 10
-#define TBL_ID_CTR_COMM_FUNC_S32_SM_INST 42
-
-#define TBL_ID_CTR_SRIOV_FUNC_PAIR_SM_NODE 10
-#define TBL_ID_CTR_SRIOV_FUNC_PAIR_SM_INST 43
-
-#define TBL_ID_CTR_SRIOV_FUNC_S32_SM_NODE 10
-#define TBL_ID_CTR_SRIOV_FUNC_S32_SM_INST 44
-
-#define TBL_ID_CTR_OVS_FUNC_S64_SM_NODE 10
-#define TBL_ID_CTR_OVS_FUNC_S64_SM_INST 45
-
-#define TBL_ID_CTR_XOE_FUNC_PAIR_SM_NODE 10
-#define TBL_ID_CTR_XOE_FUNC_PAIR_SM_INST 46
-
-#define TBL_ID_CTR_XOE_FUNC_S32_SM_NODE 10
-#define TBL_ID_CTR_XOE_FUNC_S32_SM_INST 47
-
-#define TBL_ID_CTR_SYS_GLB_S32_SM_NODE 10
-#define TBL_ID_CTR_SYS_GLB_S32_SM_INST 48
-
-#define TBL_ID_CTR_VSW_GLB_S32_SM_NODE 10
-#define TBL_ID_CTR_VSW_GLB_S32_SM_INST 49
-
-#define TBL_ID_CTR_ROCE_GLB_S32_SM_NODE 10
-#define TBL_ID_CTR_ROCE_GLB_S32_SM_INST 50
-
-#define TBL_ID_CTR_COMM_GLB_S32_SM_NODE 10
-#define TBL_ID_CTR_COMM_GLB_S32_SM_INST 51
-
-#define TBL_ID_CTR_XOE_GLB_S32_SM_NODE 10
-#define TBL_ID_CTR_XOE_GLB_S32_SM_INST 52
-
-#define TBL_ID_CTR_OVS_GLB_S64_SM_NODE 10
-#define TBL_ID_CTR_OVS_GLB_S64_SM_INST 53
-
-#define TBL_ID_RWLOCK_ROCE_SM_NODE 10
-#define TBL_ID_RWLOCK_ROCE_SM_INST 30
-
-#define TBL_ID_CQE_ADDR_SM_NODE 10
-#define TBL_ID_CQE_ADDR_SM_INST 31
-
-#endif
-
-#define TBL_ID_MISC_RSS_HASH_SM_NODE TBL_ID_MISC_RSS_HASH0_SM_NODE
-#define TBL_ID_MISC_RSS_HASH_SM_INST TBL_ID_MISC_RSS_HASH0_SM_INST
-
-/* rx cqe checksum err */
-#define NIC_RX_CSUM_IP_CSUM_ERR BIT(0)
-#define NIC_RX_CSUM_TCP_CSUM_ERR BIT(1)
-#define NIC_RX_CSUM_UDP_CSUM_ERR BIT(2)
-#define NIC_RX_CSUM_IGMP_CSUM_ERR BIT(3)
-#define NIC_RX_CSUM_ICMPV4_CSUM_ERR BIT(4)
-#define NIC_RX_CSUM_ICMPV6_CSUM_ERR BIT(5)
-#define NIC_RX_CSUM_SCTP_CRC_ERR BIT(6)
-#define NIC_RX_CSUM_HW_BYPASS_ERR BIT(7)
-
-typedef struct tag_log_ctrl {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 mod_name:4;
- u32 log_level:4;
- u32 rsvd:8;
- u32 line_num:16;
-#else
- u32 line_num:16;
- u32 rsvd:8;
- u32 log_level:4;
- u32 mod_name:4;
-#endif
-} log_ctrl;
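
The mirrored #if/#else layout used here (and throughout this header) keeps each field at the same absolute bit position in the 32-bit word on both byte orders. As an illustration only, the little-endian branch of log_ctrl is equivalent to these mask/shift accessors (hypothetical names):

	/* line_num: bits 0-15, rsvd: 16-23, log_level: 24-27, mod_name: 28-31 */
	#define LOG_CTRL_LINE_NUM(v)	((v) & 0xffff)
	#define LOG_CTRL_LOG_LEVEL(v)	(((v) >> 24) & 0xf)
	#define LOG_CTRL_MOD_NAME(v)	(((v) >> 28) & 0xf)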
-
-/**
- * 1. Bank GPA addresses are HOST-based; every host has 4 bank GPAs,
- *	for a total size of 4*32B across the 4 hosts.
- * 2. Allocated storage:
- *	Two global-table entries, index 5 and index 6, are allocated to
- *	store the bank GPAs (note that indexing starts at 0).
- *	The top 32B of index 5 store host 0's bank GPAs; the remaining
- *	32B store host 1's. The top 32B of index 6 store host 2's bank
- *	GPAs; the remaining 32B store host 3's.
- *	Each host's bank GPAs follow the format below.
- */
-typedef struct tag_sml_global_bank_gpa {
- u32 bank0_gpa_h32;
- u32 bank0_gpa_l32;
-
- u32 bank1_gpa_h32;
- u32 bank1_gpa_l32;
-
- u32 bank2_gpa_h32;
- u32 bank2_gpa_l32;
-
- u32 bank3_gpa_h32;
- u32 bank3_gpa_l32;
-} global_bank_gpa_s;
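
Following the layout described above, host h's 32B block is located by a global-table entry index plus a byte offset. A hypothetical helper pair, assuming 64B entries and hosts 0-3 (a sketch, not driver code):

	/* Entry 5 holds hosts 0-1, entry 6 holds hosts 2-3; each host's
	 * four GPAs (4 * 8B = 32B) fill half of a 64B entry.
	 */
	static inline u32 bank_gpa_entry_index(u32 host_id)
	{
		return 5 + host_id / 2;
	}

	static inline u32 bank_gpa_entry_offset(u32 host_id)
	{
		return (host_id % 2) * 32;
	}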
-
-/**
- * Struct name: sml_global_table_s
- * @brief: global_table structure
- * Description: global configuration table
- */
-typedef struct tag_sml_global_table {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
-			u32 port_mode:1; /* port mode: 0-eth; 1-fic */
-			/* dual plane enable: 0-disable; 1-enable */
-			u32 dual_plane_en:1;
-			/* four route enable: 0-disable; 1-enable */
-			u32 four_route_en:1;
-			/* fic work mode: 0-fabric; 1-fullmesh */
-			u32 fic_work_mode:1;
-			/* unicast/multicast mode: 0-drop;
-			 * 1-broadcast in vlan domain
-			 */
-			u32 un_mc_mode:1;
-			/* mac learn enable: 1-enable */
-			u32 mac_learn_en:1;
-			u32 qcn_en:1;
-			u32 esl_run_flag:1;
-			/* 1-special protocol pkt to up; 0-to x86 */
-			u32 special_pro_to_up_flag:1;
-			u32 vf_mask:4;
-			u32 dif_ser_type:2;
-			u32 rsvd0:1;
-			u32 board_num:16; /* board number */
-#else
-			u32 board_num:16; /* board number */
-			u32 rsvd0:1;
-			u32 dif_ser_type:2;
-			u32 vf_mask:4;
-			/* 1-special protocol pkt to up; 0-to x86 */
-			u32 special_pro_to_up_flag:1;
-			u32 esl_run_flag:1;
-			u32 qcn_en:1;
-			u32 mac_learn_en:1; /* mac learn enable: 1-enable */
-			/* unicast/multicast mode: 0-drop;
-			 * 1-broadcast in vlan domain
-			 */
-			u32 un_mc_mode:1;
-			/* fic work mode: 0-fabric; 1-fullmesh */
-			u32 fic_work_mode:1;
-			/* four route enable: 0-disable; 1-enable */
-			u32 four_route_en:1;
-			/* dual plane enable: 0-disable; 1-enable */
-			u32 dual_plane_en:1;
-			u32 port_mode:1; /* port mode: 0-eth; 1-fic */
-#endif
- } bs;
- u32 value;
- } dw0;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 bc_offset:16; /*broadcastoffset */
- u32 mc_offset:16; /*multicastoffset */
-#else
- u32 mc_offset:16; /*multicastoffset */
- u32 bc_offset:16; /*broadcastoffset */
-#endif
- } bs;
- u32 value;
- } dw1;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 net_src_type:8; /* eth-FWD_PORT, fic-FWD_FIC */
- u32 xrc_pl_dec:1;
- u32 sq_cqn:20;
- u32 qpc_stg:1;
- u32 qpc_state_err:1;
- u32 qpc_wb_flag:1;
-#else
- u32 qpc_wb_flag:1;
- u32 qpc_state_err:1;
- u32 qpc_stg:1;
- u32 sq_cqn:20;
- u32 xrc_pl_dec:1;
- u32 net_src_type:8; /* eth-FWD_PORT, fic-FWD_FIC */
-#endif
- } bs;
-
- u32 value;
- } dw2;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 drop_cause_id:16;
- u32 pkt_len:16;
-#else
- u32 pkt_len:16;
- u32 drop_cause_id:16;
-#endif
- } bs;
-
- u32 value;
- } dw3;
-
- u8 fcoe_vf_table[12];
-
- union {
- struct {
- /* [31:30]Pipeline number mode. */
- u32 cfg_mode_pn:2;
- /* [29:28]initial default fq mode for traffic
- * from rx side
- */
- u32 cfg_mode_init_def_fq:2;
-			 * (for packets from rx side only).
- * (for packest from rx side only).
- */
- u32 cfg_base_init_def_fq:12;
- /* [15:15]push doorbell as new packet to tile
- * via command path enable.
- */
- u32 cfg_psh_msg_en:1;
- /* [14:14]1,enable asc for scanning
- * active fq.0,disable.
- */
- u32 enable_asc:1;
- /* [13:13]1,enable pro for commands process.0,disable.*/
- u32 enable_pro:1;
- /* [12:12]1,ngsf mode.0,ethernet mode. */
- u32 cfg_ngsf_mod:1;
- /* [11:11]Stateful process enable. */
- u32 enable_stf:1;
- /* [10:9]initial default fq mode for
- * traffic from tx side.
- */
- u32 cfg_mode_init_def_fq_tx:2;
- /* [8:0]maximum allocable oeid configuration. */
- u32 cfg_max_oeid:9;
- } bs;
- u32 value;
- } fq_mode;
-
- u32 rsvd2[8];
-} sml_global_table_s;
-
-/**
- * Struct name: sml_fic_config_table_s
- * @brief: fic_config_table structure
- * Description: FIC configuration table
- */
-typedef struct tag_sml_fic_config_table {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
-			/* dual plane enable: 0-disable; 1-enable */
-			u32 dual_plane_en:1;
-			/* four route enable: 0-disable; 1-enable */
-			u32 four_route_en:1;
-			/* fic work mode: 0-fabric; 1-fullmesh */
-			u32 fic_work_mode:1;
-			u32 mac_learn_en:1; /* mac learn enable: 1-enable */
-			u32 rsvd:12;
-			u32 board_num:16; /* board number */
-#else
-			u32 board_num:16; /* board number */
-			u32 rsvd:12;
-			u32 mac_learn_en:1;
-			/* fic work mode: 0-fabric; 1-fullmesh */
-			u32 fic_work_mode:1;
-			/* four route enable: 0-disable; 1-enable */
-			u32 four_route_en:1;
-			/* dual plane enable: 0-disable; 1-enable */
-			u32 dual_plane_en:1;
-#endif
- } bs;
- u32 value;
- } dw0;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 bc_offset:16; /*broadcastoffset */
- u32 mc_offset:16; /*multicastoffset */
-#else
- u32 mc_offset:16; /*multicastoffset */
- u32 bc_offset:16; /*broadcastoffset */
-#endif
- } bs;
- u32 value;
- } dw1;
-
- u32 rsvd2[14];
-} sml_fic_config_table_s;
-
-/**
- * Struct name: sml_ucode_version_info_table_s
- * @brief: microcode version information structure
- * Description: global configuration table entry data structure of index 1
- */
-typedef struct tag_sml_ucode_version_info_table {
- u32 ucode_version[4];
- u32 ucode_compile_time[5];
- u32 rsvd[7];
-} sml_ucode_version_info_table_s;
-
-/**
- * Struct name: sml_funcfg_tbl_s
- * @brief: Function Configuration Table
- * Description: Function Configuration attribute table
- */
-typedef struct tag_sml_funcfg_tbl {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- /* function valid: 0-invalid; 1-valid */
- u32 valid:1;
- /* mac learn enable: 0-disable; 1-enable */
- u32 learn_en:1;
- /* lli enable: 0-disable; 1-enable */
- u32 lli_en:1;
- /* rss enable: 0-disable; 1-enable */
- u32 rss_en:1;
- /* rx vlan offload enable: 0-disable; 1-enable */
- u32 rxvlan_offload_en:1;
- /* tso local coalesce enable: 0-disable; 1-enable */
- u32 tso_local_coalesce:1;
- u32 rsvd1:1;
- u32 rsvd2:1;
- /* qos rx car enable: 0-disable; 1-enable */
- u32 qos_rx_car_en:1;
- /* mac filter enable: 0-disable; 1-enable */
- u32 mac_filter_en:1;
- /* ipmac filter enable: 0-disable; 1-enable */
- u32 ipmac_filter_en:1;
- /* ethtype filter enable: 0-disable; 1-enable */
- u32 ethtype_filter_en:1;
- /* mc bc limit enable: 0-disable; 1-enable */
- u32 mc_bc_limit_en:1;
- /* acl tx enable: 0-disable; 1-enable */
- u32 acl_tx_en:1;
- /* acl rx enable: 0-disable; 1-enable */
- u32 acl_rx_en:1;
- /* ovs function enable: 0-disable; 1-enable */
- u32 ovs_func_en:1;
- /* ucode capture enable: 0-disable; 1-enable */
- u32 ucapture_en:1;
- /* fic car enable: 0-disable; 1-enable */
- u32 fic_car_en:1;
- u32 tso_en:1;
- u32 nic_rx_mode:5; /* nic_rx_mode:
- * 0b00001: unicast mode
- * 0b00010: multicast mode
- * 0b00100: broadcast mode
- * 0b01000: all multicast mode
-					 * 0b10000: promisc mode
- */
- u32 rsvd4:3;
- u32 def_pri:3; /* default priority */
- /* host id: [0~3]. support up to 4 Host. */
- u32 host_id:2;
-#else
- u32 host_id:2;
- u32 def_pri:3;
- u32 rsvd4:3;
- u32 nic_rx_mode:5;
- u32 tso_en:1;
- u32 fic_car_en:1;
- /* ucode capture enable: 0-disable; 1-enable */
- u32 ucapture_en:1;
- u32 ovs_func_en:1;
- u32 acl_rx_en:1;
- u32 acl_tx_en:1;
- u32 mc_bc_limit_en:1;
- u32 ethtype_filter_en:1;
- u32 ipmac_filter_en:1;
- u32 mac_filter_en:1;
- u32 qos_rx_car_en:1;
- u32 rsvd2:1;
- u32 rsvd1:1;
- u32 tso_local_coalesce:1;
- u32 rxvlan_offload_en:1;
- u32 rss_en:1;
- u32 lli_en:1;
- u32 learn_en:1;
- u32 valid:1;
-#endif
- } bs;
-
- u32 value;
- } dw0;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 mtu:16; /* mtu value: [64-15500] */
- u32 rsvd1:1;
-			/* vlan mode: 0-all; 1-access; 2-trunk;
-			 * 3-hybrid (unsupported); 4-qinq port;
-			 */
- u32 vlan_mode:3;
- u32 vlan_id:12; /* vlan id: [0~4095] */
-#else
- u32 vlan_id:12;
- u32 vlan_mode:3;
- u32 rsvd1:1;
- u32 mtu:16;
-#endif
- } bs;
-
- u32 value;
- } dw1;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 lli_mode:1; /* lli mode */
- /* er forward trunk type: 0-ethernet type, 1-fic type */
- u32 er_fwd_trunk_type:1;
- /* er forward trunk mode:
- * 0-standby; 1-smac; 2-dmac; 3-smacdmac; 4-sip; 5-dip;
- * 6-sipdip; 7-5tuples; 8-lacp
- */
- u32 er_fwd_trunk_mode:4;
-			/* edge relay mode: 0-VEB; 1-VEPA (unsupported);
-			 * 2-Multi-Channel (unsupported)
-			 */
-			u32 er_mode:2;
-			/* edge relay id: [0~15]. supports up to 16 ers. */
-			u32 er_id:4;
-			/* er forward type: 2-port; 3-fic;
-			 * 4-trunk; others unsupported
-			 */
- u32 er_fwd_type:4;
- /* er forward id:
- * fwd_type=2: forward ethernet port id
- * fwd_type=3: forward fic id(tb+tp)
- * fwd_type=4: forward trunk id
- */
- u32 er_fwd_id:16;
-#else
- u32 er_fwd_id:16;
- u32 er_fwd_type:4;
- u32 er_id:4;
- u32 er_mode:2;
- u32 er_fwd_trunk_mode:4;
- u32 er_fwd_trunk_type:1;
- u32 lli_mode:1;
-#endif
- } bs;
-
- u32 value;
- } dw2;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 pfc_en:1;
- u32 rsvd1:7;
- u32 ovs_invld_tcp_action:1;
- u32 ovs_ip_frag_action:1;
- u32 rsvd2:2;
- u32 roce_en:1;
- u32 iwarp_en:1;
- u32 fcoe_en:1;
- u32 toe_en:1;
- u32 rsvd3:8;
- u32 ethtype_group_id:8;
-#else
- u32 ethtype_group_id:8;
- u32 rsvd3:8;
- u32 toe_en:1;
- u32 fcoe_en:1;
- u32 iwarp_en:1;
- u32 roce_en:1;
- u32 rsvd2:2;
- u32 ovs_ip_frag_action:1;
- u32 ovs_invld_tcp_action:1;
- u32 rsvd1:7;
- u32 pfc_en:1;
-#endif
- } bs;
-
- u32 value;
- } dw3;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd1:8;
- u32 vni:24;
-#else
- u32 vni:24;
- u32 rsvd1:8;
-#endif
- } bs;
-
- u32 value;
- } dw4;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd1;
-#else
- u32 rsvd1;
-#endif
- } bs;
-
- u32 value;
- } dw5;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd1:8;
- u32 rq_thd:13;
- u32 host_car_id:11; /* host car id */
-#else
- u32 host_car_id:11;
- u32 rq_thd:13;
- u32 rsvd1:8;
-#endif
- } bs;
-
- u32 value;
- } dw6;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd1:5;
- u32 fic_uc_car_id:11; /* fic unicast car id */
- u32 rsvd2:5;
- u32 fic_mc_car_id:11; /* fic multicast car id */
-#else
- u32 fic_mc_car_id:11;
- u32 rsvd2:5;
- u32 fic_uc_car_id:11;
- u32 rsvd1:5;
-#endif
- } fic_bs;
-
- u32 value;
- } dw7;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- /* safe group identifier valid: 0-invalid; 1-valid */
- u32 sg_id_valid:1;
- u32 sg_id:10; /* safe group identifier */
- u32 rsvd9:1;
- /* rq priority enable: 0-disable; 1-enable */
- u32 rq_pri_en:1;
- /* rq priority num: 0-1pri; 1-2pri; 2-4pri; 3-8pri */
- u32 rq_pri_num:3;
- /* one wqe buffer size, default is 2K bytes */
- u32 rx_wqe_buffer_size:16;
-#else
- u32 rx_wqe_buffer_size:16;
- u32 rq_pri_num:3;
- u32 rq_pri_en:1;
- u32 rsvd9:1;
- u32 sg_id:10;
- u32 sg_id_valid:1;
-#endif
- } bs;
-
- u32 value;
- } dw8;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- /* IPv4 LRO enable: 0-disable; 1-enable; */
- u32 lro_ipv4_en:1;
- /* IPv6 LRO enable: 0-disable; 1-enable; */
- u32 lro_ipv6_en:1;
- /* LRO pkt max wqe buffer number */
- u32 lro_max_wqe_num:6;
-			/* Each group occupies 3 bits; the 8 groups share
-			 * the 24-bit allocation, with group 0 in the low
-			 * 3 bits (see the lookup sketch after this
-			 * structure)
-			 */
- u32 vlan_pri_map_group:24;
-#else
- u32 vlan_pri_map_group:24;
- u32 lro_max_wqe_num:6;
- u32 lro_ipv6_en:1;
- u32 lro_ipv4_en:1;
-#endif
- } bs;
-
- u32 value;
- } dw9;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rss_group_id:4;
- u32 lli_frame_size:12;
- u32 smac_h16:16;
-#else
- u32 smac_h16:16;
- u32 lli_frame_size:12;
- u32 rss_group_id:4;
-#endif
- } bs;
-
- u32 value;
- } dw10;
-
- u32 smac_l32;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 oqid:16;
- u32 vf_map_pf_id:4;
- /*lro change; 0:changing 1:change done */
- u32 lro_change_flag:1;
- u32 rsvd11:1;
- u32 base_qid:10;
-#else
- u32 base_qid:10;
- u32 rsvd11:1;
- u32 lro_change_flag:1;
- u32 vf_map_pf_id:4;
- u32 oqid:16;
-#endif
- } bs;
-
- u32 value;
- } dw12;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd1:2;
- u32 cfg_rq_depth:6;
- u32 cfg_q_num:6;
- u32 fc_port_id:4;
- u32 rsvd2:14;
-#else
- u32 rsvd2:14;
- u32 fc_port_id:4;
- u32 cfg_q_num:6;
- u32 cfg_rq_depth:6;
- u32 rsvd1:2;
-#endif
- } bs;
-
- u32 value;
- } dw13;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd1;
-#else
- u32 rsvd1;
-#endif
- } bs;
-
- u32 value;
-
- } dw14;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd3:2;
- u32 bond3_hash_policy:3;
- u32 bond3_mode:3;
- u32 rsvd2:2;
- u32 bond2_hash_policy:3;
- u32 bond2_mode:3;
- u32 rsvd1:2;
- u32 bond1_hash_policy:3;
- u32 bond1_mode:3;
- u32 rsvd0:2;
- u32 bond0_hash_policy:3;
- u32 bond0_mode:3;
-#else
- u32 bond0_mode:3;
- u32 bond0_hash_policy:3;
- u32 rsvd0:2;
- u32 bond1_mode:3;
- u32 bond1_hash_policy:3;
- u32 rsvd1:2;
- u32 bond2_mode:3;
- u32 bond2_hash_policy:3;
- u32 rsvd2:2;
- u32 bond3_mode:3;
- u32 bond3_hash_policy:3;
- u32 rsvd3:2;
-#endif
- } bs;
-
- u32 value;
-
- } dw15;
-} sml_funcfg_tbl_s;
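
As referenced in the dw9 comment above, each VLAN priority (0-7) maps through its own 3-bit group. A lookup sketch (hypothetical helper, assuming a populated table):

	/* Extract the 3-bit mapping for VLAN priority pri; group 0 sits
	 * in the low 3 bits of vlan_pri_map_group.
	 */
	static inline u32 funcfg_vlan_pri_map(const sml_funcfg_tbl_s *cfg,
					      u32 pri)
	{
		return (cfg->dw9.bs.vlan_pri_map_group >> (pri * 3)) & 0x7;
	}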
-
-/**
- * Struct name: sml_portcfg_tbl_s
- * @brief: Port Configuration Table
- * Description: Port Configuration attribute table
- */
-typedef struct tag_sml_portcfg_tbl {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 valid:1; /* valid:0-invalid; 1-valid */
- /* mac learn enable: 0-disable; 1-enable */
- u32 learn_en:1;
- u32 trunk_en:1; /* trunk enable: 0-disable; 1-enable */
- /* broadcast suppression enable: 0-disable; 1-enable */
- u32 bc_sups_en:1;
- /* unknown multicast suppression enable:
- * 0-disable; 1-enable
- */
- u32 un_mc_sups_en:1;
- /* unknown unicast suppression enable:
- * 0-disable; 1-enable
- */
- u32 un_uc_sups_en:1;
- u32 ovs_mirror_tx_en:1;
- /* ovs port enable: 0-disable; 1-enable */
- u32 ovs_port_en:1;
- u32 ovs_mirror_rx_en:1;
- u32 qcn_en:1; /* qcn enable: 0-disable; 1-enable */
- /* ucode capture enable: 0-disable; 1-enable */
- u32 ucapture_en:1;
- u32 ovs_invld_tcp_action:1;
- u32 ovs_ip_frag_action:1;
- u32 def_pri:3; /* default priority */
- u32 rsvd3:2;
-			/* edge relay mode: 0-VEB; 1-VEPA (unsupported);
-			 * 2-Multi-Channel (unsupported)
-			 */
- u32 er_mode:2;
- /* edge relay identifier: [0~15]. support up to 16 er */
- u32 er_id:4;
- u32 trunk_id:8; /* trunk identifier: [0~255] */
-#else
- u32 trunk_id:8;
- u32 er_id:4;
- u32 er_mode:2;
- u32 rsvd3:2;
- u32 def_pri:3;
- u32 ovs_ip_frag_action:1;
- u32 ovs_invld_tcp_action:1;
- u32 ucapture_en:1;
- u32 qcn_en:1;
- u32 ovs_mirror_rx_en:1;
- u32 ovs_port_en:1;
- u32 ovs_mirror_tx_en:1;
- u32 un_uc_sups_en:1;
- u32 un_mc_sups_en:1;
- u32 bc_sups_en:1;
- u32 trunk_en:1;
- u32 learn_en:1;
- u32 valid:1;
-#endif
- } bs;
- u32 value;
- } dw0;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd2:2;
- u32 mtu:14;
- u32 rsvd3:1;
- u32 vlan_mode:3;
- u32 vlan_id:12;
-#else
- u32 vlan_id:12;
- u32 vlan_mode:3;
- u32 rsvd3:1;
- u32 mtu:14;
- u32 rsvd2:2;
-#endif
- } bs;
- u32 value;
- } dw1;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- /* q7_cos : ... : q0_cos = 4bits : ... : 4bits */
- u32 ovs_queue_cos;
-#else
- u32 ovs_queue_cos;
-#endif
- } bs;
- u32 value;
- } dw2;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd1:10;
- u32 un_mc_car_id:11;
- u32 un_uc_car_id:11;
-#else
- u32 un_uc_car_id:11;
- u32 un_mc_car_id:11;
- u32 rsvd1:10;
-#endif
- } bs;
- u32 value;
- } dw3;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd6:5;
- u32 bc_car_id:11;
- u32 pf_promiscuous_bitmap:16;
-#else
- u32 pf_promiscuous_bitmap:16;
- u32 bc_car_id:11;
- u32 rsvd6:5;
-#endif
- } bs;
- u32 value;
- } dw4;
-
- union {
- struct {
- u32 fc_map;
-
- } fcoe_bs;
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 start_queue:8;
- u32 queue_size:8;
- u32 mirror_func_id:16;
-#else
- u32 mirror_func_id:16;
- u32 queue_size:8;
- u32 start_queue:8;
-#endif
- } ovs_mirror_bs;
- u32 value;
- } dw5;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u16 vlan;
- u16 dmac_h16;
-#else
- u16 dmac_h16;
- u16 vlan;
-#endif
- } fcoe_bs;
- u32 value;
- } dw6;
-
- union {
- struct {
- u32 dmac_l32;
-
- } fcoe_bs;
- u32 value;
- } dw7;
-
-} sml_portcfg_tbl_s;
-
-/**
- * Struct name: sml_taggedlist_tbl_s
- * @brief: Tagged List Table
- * Description: VLAN filtering Trunk/Hybrid type tagged list table
- */
-typedef struct tag_sml_taggedlist_tbl {
- u32 bitmap[TBL_ID_TAGGEDLIST_BITMAP32_NUM];
-} sml_taggedlist_tbl_s;
-
-/**
- * Struct name: sml_untaggedlist_tbl_s
- * @brief: Untagged List Table
- * Description: VLAN filtering Hybrid type Untagged list table
- */
-typedef struct tag_sml_untaggedlist_tbl {
- u32 bitmap[TBL_ID_UNTAGGEDLIST_BITMAP32_NUM];
-} sml_untaggedlist_tbl_s;
-
-/**
- * Struct name: sml_trunkfwd_tbl_s
- * @brief: Trunk Forward Table
- * Description: port aggregation Eth-Trunk forwarding table
- */
-typedef struct tag_sml_trunkfwd_tbl {
- u16 fwd_id[TBL_ID_TRUNKFWD_ENTRY_ELEM_NUM]; /* dw0-dw15 */
-} sml_trunkfwd_tbl_s;
-
-/**
- * Struct name: sml_mac_tbl_head_u
- * @brief: Mac table request/response head
- * Description: MAC table, Hash API header
- */
-typedef union tag_sml_mac_tbl_head {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 src:5;
- u32 instance_id:6;
- u32 opid:5;
- u32 A:1;
- u32 S:1;
- u32 rsvd:14;
-#elif (__BYTE_ORDER__ == __LITTLE_ENDIAN__)
- u32 rsvd:14;
- u32 S:1;
- u32 A:1;
- u32 opid:5;
- u32 instance_id:6;
- u32 src:5;
-#endif
- } req_bs;
-
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 code:2;
- u32 subcode:2;
- u32 node_index:28;
-#elif (__BYTE_ORDER__ == __LITTLE_ENDIAN__)
- u32 node_index:28;
- u32 subcode:2;
- u32 code:2;
-#endif
- } rsp_bs;
-
- u32 value;
-} sml_mac_tbl_head_u;
-
-/**
- * Struct name: sml_mac_tbl_8_4_key_u
- * @brief: Mac Table Key
- * Description: MAC table key
- */
-typedef union tag_sml_mac_tbl_8_4_key {
- struct {
- u32 val0;
- u32 val1;
- } value;
-
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 er_id:4;
- u32 vlan_id:12;
- u32 mac_h16:16;
-
- u32 mac_m16:16;
- u32 mac_l16:16;
-#elif (__BYTE_ORDER__ == __LITTLE_ENDIAN__)
- u32 mac_h16:16;
- u32 vlan_id:12;
- u32 er_id:4;
-
- u32 mac_l16:16;
- u32 mac_m16:16;
-#endif
- } bs;
-
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 er_id:4;
- u32 vlan_id:12;
- u32 mac0:8;
- u32 mac1:8;
-
- u32 mac2:8;
- u32 mac3:8;
- u32 mac4:8;
- u32 mac5:8;
-#elif (__BYTE_ORDER__ == __LITTLE_ENDIAN__)
- u32 mac1:8;
- u32 mac0:8;
- u32 vlan_id:12;
- u32 er_id:4;
-
- u32 mac5:8;
- u32 mac4:8;
- u32 mac3:8;
- u32 mac2:8;
-#endif
- } mac_bs;
-} sml_mac_tbl_8_4_key_u;
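
The 8-byte key packs er_id, vlan_id and the 48-bit MAC into two 32-bit words; because the bitfield order flips with byte order, the union itself does the packing. A fill sketch (hypothetical helper, assuming mac0 is the first octet on the wire):

	static inline void mac_tbl_key_fill(sml_mac_tbl_8_4_key_u *key,
					    u32 er_id, u32 vlan_id,
					    const u8 *mac)
	{
		key->bs.er_id = er_id;
		key->bs.vlan_id = vlan_id;
		key->bs.mac_h16 = ((u32)mac[0] << 8) | mac[1];
		key->bs.mac_m16 = ((u32)mac[2] << 8) | mac[3];
		key->bs.mac_l16 = ((u32)mac[4] << 8) | mac[5];
	}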
-
-/**
- * Struct name: sml_mac_tbl_8_4_item_u
- * @brief: Mac Table Item
- * Description: xxxxxxxxxxxxxxx
- */
-typedef union tag_sml_mac_tbl_8_4_item {
- u32 value;
-
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd:10;
- u32 host_id:2;
- u32 fwd_type:4;
- u32 fwd_id:16;
-#elif (__BYTE_ORDER__ == __LITTLE_ENDIAN__)
- u32 fwd_id:16;
- u32 fwd_type:4;
- u32 host_id:2;
- u32 rsvd:10;
-#endif
- } bs;
-} sml_mac_tbl_8_4_item_u;
-
-/**
- * Struct name: sml_mac_tbl_key_item_s
- * @brief: Mac Table( 8 + 4 )
- * Description: MAC table Key + Item
- */
-typedef struct tag_sml_mac_tbl_8_4 {
- sml_mac_tbl_head_u head;
- sml_mac_tbl_8_4_key_u key;
- sml_mac_tbl_8_4_item_u item;
-} sml_mac_tbl_8_4_s;
-
-/**
- * Struct name: sml_vtep_tbl_8_20_key_s
- * @brief: Vtep Table Key
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sml_vtep_tbl_8_20_key {
- u32 vtep_remote_ip;
- u32 rsvd;
-} sml_vtep_tbl_8_20_key_s;
-
-/**
- * Struct name: dmac_smac_u
- * @brief: Dmac & Smac for VxLAN encapsulation
- * Description: xxxxxxxxxxxxxxx
- */
-typedef union tag_dmac_smac {
- u16 mac_addr[6];
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u16 d_mac0:8;
- u16 d_mac1:8;
- u16 d_mac2:8;
- u16 d_mac3:8;
-
- u16 d_mac4:8;
- u16 d_mac5:8;
- u16 s_mac0:8;
- u16 s_mac1:8;
-
- u16 s_mac2:8;
- u16 s_mac3:8;
- u16 s_mac4:8;
- u16 s_mac5:8;
-#elif (__BYTE_ORDER__ == __LITTLE_ENDIAN__)
- u16 d_mac1:8;
- u16 d_mac0:8;
- u16 d_mac3:8;
- u16 d_mac2:8;
-
- u16 d_mac5:8;
- u16 d_mac4:8;
- u16 s_mac1:8;
- u16 s_mac0:8;
-
- u16 s_mac3:8;
- u16 s_mac2:8;
- u16 s_mac5:8;
- u16 s_mac4:8;
-#endif
- } bs;
-} dmac_smac_u;
-
-/**
- * Struct name: sml_vtep_tbl_8_20_item_u
- * @brief: Vtep Table Item
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sml_vtep_tbl_8_20_item {
- dmac_smac_u dmac_smac;
- u32 source_ip;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 er_id:4;
- u32 rsvd:12;
- u32 vlan:16; /* The PRI*/
-#else
- u32 vlan:16; /* The PRI*/
- u32 rsvd:12;
- u32 er_id:4;
-#endif
- } bs;
-
- u32 value;
- } misc;
-} sml_vtep_tbl_8_20_item_s;
-
-/**
- * Struct name: sml_vtep_tbl_8_20_s
- * @brief: Vtep Table( 8 + 20)
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sml_vtep_tbl_8_20 {
-	sml_mac_tbl_head_u head; /* first 4 bytes, the same as mac tbl */
- sml_vtep_tbl_8_20_key_s key;
- sml_vtep_tbl_8_20_item_s item;
-} sml_vtep_tbl_8_20_s;
-
-/**
- * Struct name: sml_vxlan_udp_portcfg_4_8_key_s
- * @brief: Vxlan Udp Portcfg Table Key
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sml_vxlan_udp_portcfg_4_8_key {
- u32 udp_dest_port;
- u32 rsvd;
-} sml_vxlan_udp_portcfg_4_8_key_s;
-
-/**
- * Struct name: sml_vxlan_udp_portcfg_4_8_item_s
- * @brief: Vxlan Udp Portcfg Table Item
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sml_vxlan_udp_portcfg_4_8_item {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 odp_port:12;
- u32 dp_id:2;
- u32 resvd:20;
-#else
- u32 resvd:20;
- u32 dp_id:2;
- u32 odp_port:12;
-#endif
- } bs;
-
- u32 value;
- } dw0;
-} sml_vxlan_udp_portcfg_4_8_item_s;
-
-/**
- * Struct name: sml_vxlan_udp_portcfg_4_8_s
- * @brief: Vxlan Dest Udp Port Table (4 + 8)
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sml_vxlan_udp_portcfg_4_8 {
-	sml_mac_tbl_head_u head; /* first 4 bytes, the same as mac tbl */
- sml_vxlan_udp_portcfg_4_8_key_s key;
- sml_vxlan_udp_portcfg_4_8_item_s item;
-} sml_vxlan_udp_portcfg_4_8_s;
-
-/**
- * Struct name: sml_vtep_er_info_s
- * @brief: Vtep Er Info Table
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sml_vtep_er_info {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 lli_mode:1;
- /* ER bound to the outbound port is Eth-Trunk,
- * type (FIC/Port)
- */
- u32 er_fwd_trunk_type:1;
- /* ER bound to the outbound port is Eth-Trunk,
- * port aggregation mode (Standby/LoadBalance/LACP)
- */
- u32 er_fwd_trunk_mode:4;
- u32 er_mode:2; /* ER mode (VEB/VEPA)*/
-			/* er_id serves as the LT index and is also
-			 * stored in the entry, for the service's
-			 * convenience
-			 */
- u32 er_id:4;
- /* Type of the ER bound to the outbound port
- * (Port/FIC/Eth-Trunk)
- */
- u32 er_fwd_type:4;
- /* ER bound egress ID(PortID/FICID/TrunkID)*/
- u32 er_fwd_id:16;
-#else
- u32 er_fwd_id:16;
- u32 er_fwd_type:4;
- u32 er_id:4;
- u32 er_mode:2;
- u32 er_fwd_trunk_mode:4;
- u32 er_fwd_trunk_type:1;
- u32 lli_mode:1;
-#endif
- } bs;
-
- u32 value;
- } dw0;
-} sml_vtep_er_info_s;
-
-/**
- * Struct name: sml_logic_port_cfg_tbl_s
- * @brief: Logic Port Cfg Table
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sm_logic_port_cfg {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- /* Input switch port (or DP_MAX_PORTS). */
- u32 odp_port:12;
- u32 dp_id:2; /* datapath id */
- u32 er_id:4;
- /* logic port MAC Learning enable or disable */
- u32 learn_en:1;
- u32 resvd:13;
-#else
- u32 resvd:13;
- /* logic port MAC Learning enable or disable */
- u32 learn_en:1;
- u32 er_id:4;
- u32 dp_id:2; /* datapath id */
- /* Input switch port (or DP_MAX_PORTS). */
- u32 odp_port:12;
-#endif
- } bs;
-
- u32 value;
- } dw0;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd4:1;
- u32 er_fwd_trunk_type:1;
- u32 er_fwd_trunk_mode:4;
- u32 er_mode:2;
- u32 er_id:4;
- u32 er_fwd_type:4;
- u32 er_fwd_id:16;
-#else
- u32 er_fwd_id:16;
- u32 er_fwd_type:4;
- u32 er_id:4;
- u32 er_mode:2;
- u32 er_fwd_trunk_mode:4;
- u32 er_fwd_trunk_type:1;
- u32 rsvd4:1;
-#endif
- } bs;
-
- u32 value;
- } dw1;
-} sml_logic_port_cfg_tbl_s;
-
-/* vport stats counter */
-typedef struct tag_vport_stats_ctr {
- u16 rx_packets; /* total packets received */
- u16 tx_packets; /* total packets transmitted */
- u16 rx_bytes; /* total bytes received */
- u16 tx_bytes; /* total bytes transmitted */
- u16 rx_errors; /* bad packets received */
- u16 tx_errors; /* packet transmit problems */
- u16 rx_dropped; /* no space in linux buffers */
- u16 tx_dropped; /* no space available in linux */
-} vport_stats_ctr_s;
-
-/**
- * Struct name: vport_s
- * @brief: Datapath Cfg Table
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_vport {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- /* dw0 */
- u32 valid:1;
- u32 learn_en:1;
- u32 type:4;
- u32 dp_id:2;
- /* The type of Vport mapping port, 0:VF, 1:Logic Port */
- u32 mapping_type:4;
- u32 mapping_port:12; /* odp_port mapping on VF or ER Logic Port */
- u32 rsvd:8;
-
- /* dw1 */
- u32 srctagl:12; /* the function used by parent context */
- /* parent context XID used to upcall missed packet to ovs-vswitchd */
- u32 xid:20;
-
- /* dw2 */
- u32 odp_port:12; /* on datapath port id */
- /* parent context CID used to upcall missed packet to ovs-vswitchd */
- u32 cid:20;
-#else
- /* dw0 */
- u32 rsvd:8;
- u32 mapping_port:12; /* odp_port mapping on VF or ER Logic Port */
- /* The type of Vport mapping port, 0:VF, 1:Logic Port */
- u32 mapping_type:4;
- u32 dp_id:2;
- u32 type:4;
- u32 learn_en:1;
- u32 valid:1;
-
- /* dw1 */
- /* parent context XID used to upcall missed packet to ovs-vswitchd */
- u32 xid:20;
- u32 srctagl:12; /* the function used by parent context */
-
- /* dw2 */
- /* parent context CID used to upcall missed packet to ovs-vswitchd */
- u32 cid:20;
- u32 odp_port:12; /* on datapath port id */
-#endif
-
- /* dw3 is er information and it is valid only
- * when mapping_type=1(logic port)
- */
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 lli_mode:1;
- /* ER bound to the outbound port is Eth-Trunk,
- * type (FIC/Port)
- */
- u32 er_fwd_trunk_type:1;
- /* ER bound to the outbound port is Eth-Trunk,
- * port aggregation mode (Standby/LoadBalance/LACP)
- */
- u32 er_fwd_trunk_mode:4;
- u32 er_mode:2; /* ER mode (VEB/VEPA)*/
- u32 er_id:4; /* ERID */
- /* Type of the ER bound to the outbound port
- * (Port/FIC/Eth-Trunk)
- */
- u32 er_fwd_type:4;
- /*ER bound egress ID(PortID/FICID/TrunkID)*/
- u32 er_fwd_id:16;
-#else
- u32 er_fwd_id:16;
- u32 er_fwd_type:4;
- u32 er_id:4;
- u32 er_mode:2;
- u32 er_fwd_trunk_mode:4;
- /* ER bound to the outbound port is Eth-Trunk,
- * type (FIC/Port)
- */
- u32 er_fwd_trunk_type:1;
- u32 lli_mode:1;
-#endif
- } bs;
- u32 value;
- } dw3;
-
- /* dw4~dw7 */
- vport_stats_ctr_s stats; /* vport stats counters */
-
-} vport_s;
-
-/**
- * Struct name: sml_elb_tbl_elem_u
- * @brief: ELB Table Elem
- * Description: ELB leaf table members
- */
-typedef union tag_sml_elb_tbl_elem {
- struct {
- u32 fwd_val;
- u32 next_val;
- } value;
-
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd0:12;
- u32 fwd_type:4;
- u32 fwd_id:16;
-
- u32 rsvd1:17;
- u32 elb_index_next:15;
-#elif (__BYTE_ORDER__ == __LITTLE_ENDIAN__)
- u32 fwd_id:16;
- u32 fwd_type:4;
- u32 rsvd0:12;
-
- u32 elb_index_next:15;
- u32 rsvd1:17;
-#endif
- } bs;
-} sml_elb_tbl_elem_u;
-
-/**
- * Struct name: sml_elb_tbl_s
- * @brief ELB Table
- * Description: ELB leaf table Entry
- */
-typedef struct tag_sml_elb_tbl {
- sml_elb_tbl_elem_u elem[TBL_ID_ELB_ENTRY_ELEM_NUM];
-} sml_elb_tbl_s;
-
-/**
- * Struct name: sml_vlan_tbl_elem_u
- * @brief: VLAN Table Elem
- * Description: VLAN broadcast table members
- */
-typedef union tag_sml_vlan_tbl_elem {
- u16 value;
-
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u16 learn_en:1;
- u16 elb_index:15;
-#elif (__BYTE_ORDER__ == __LITTLE_ENDIAN__)
- u16 elb_index:15;
- u16 learn_en:1;
-#endif
- } bs;
-} sml_vlan_tbl_elem_u;
-
-/**
- * Struct name: sml_vlan_tbl_s
- * @brief: VLAN Table
- * Entry Description: VLAN broadcast table
- */
-typedef struct tag_sml_vlan_tbl {
- sml_vlan_tbl_elem_u elem[TBL_ID_VLAN_ENTRY_ELEM_NUM];
-} sml_vlan_tbl_s;
-
-/**
- * Struct name: sml_multicast_tbl_array_u
- * @brief: Multicast Table Elem
- * Description: multicast table members
- */
-typedef union tag_sml_multicast_tbl_elem {
- struct {
- u32 route_val;
- u32 next_val;
- } value;
-
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd0:12;
- u32 route_fwd_type:4;
- u32 route_fwd_id:16;
-
- u32 rsvd1:17;
- u32 elb_index:15;
-#elif (__BYTE_ORDER__ == __LITTLE_ENDIAN__)
- u32 route_fwd_id:16;
- u32 route_fwd_type:4;
- u32 rsvd0:12;
-
- u32 elb_index:15;
- u32 rsvd1:17;
-#endif
- } bs;
-} sml_multicast_tbl_elem_u;
-
-/* Struct name: sml_multicast_tbl_s
- * @brief: Multicast Table
- * Entry Description: multicast table
- */
-typedef struct tag_sml_multicast_tbl {
- sml_multicast_tbl_elem_u elem[TBL_ID_MULTICAST_ENTRY_ELEM_NUM];
-} sml_multicast_tbl_s;
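
Both the VLAN and multicast entries above carry an elb_index into the ELB leaf table, whose elements chain through elb_index_next to form a replication list. A traversal sketch (hypothetical helper; the 0-terminated convention and the index-to-entry split are assumptions):

	static void elb_chain_walk(const sml_elb_tbl_s *tbl, u32 index,
				   void (*visit)(u32 fwd_type, u32 fwd_id))
	{
		while (index) {
			const sml_elb_tbl_elem_u *e =
				&tbl[index / TBL_ID_ELB_ENTRY_ELEM_NUM]
				 .elem[index % TBL_ID_ELB_ENTRY_ELEM_NUM];

			visit(e->bs.fwd_type, e->bs.fwd_id);
			index = e->bs.elb_index_next;
		}
	}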
-
-/* Struct name: sml_observe_port_s
- * @brief: Observe Port Table
- * Description: observing port entries defined
- */
-typedef struct tag_sml_observe_port {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 valid:1;
- u32 rsvd0:11;
- u32 dst_type:4;
- u32 dst_id:16;
-#else
- u32 dst_id:16;
- u32 dst_type:4;
- u32 rsvd0:11;
- u32 valid:1;
-#endif
- } bs;
- u32 value;
- } dw0;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd1:4;
- u32 vlan_id:12;
- u32 rsvd2:2;
- u32 cut_len:14;
-#else
- u32 cut_len:14;
- u32 rsvd2:2;
- u32 vlan_id:12;
- u32 rsvd1:4;
-#endif
- } bs;
- u32 value;
- } dw1;
-
- u32 rsvd_pad[2];
-} sml_observe_port_s;
-
-/* Struct name: sml_ipmac_tbl_16_12_key_s
- * @brief ipmac filter table key
- * Description: ipmac filter key define
- */
-typedef struct tag_sml_ipmac_tbl_16_12_key {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 func_id:16;
- u32 mac_h16:16;
-#else
- u32 mac_h16:16;
- u32 func_id:16;
-#endif
-
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 mac_m16:16;
- u32 mac_l16:16;
-#else
- u32 mac_l16:16;
- u32 mac_m16:16;
-#endif
-
- u32 ip;
- u32 rsvd;
-} sml_ipmac_tbl_16_12_key_s;
-
-/* Struct name: sml_ipmac_tbl_16_12_item_s
- * @brief ipmac filter table item
- * Description: ipmac filter item define
- */
-typedef struct tag_sml_ipmac_tbl_16_12_item {
- u32 rsvd[3];
-} sml_ipmac_tbl_16_12_item_s;
-
-/* Struct name: sml_ethtype_tbl_8_4_key_s
- * @brief: ethtype filter table key
- * Description: ethtype filter key define
- */
-typedef struct tag_sml_ethtype_tbl_8_4_key {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 group_id:16;
- u32 ethtype:16;
-#else
- u32 ethtype:16;
- u32 group_id:16;
-#endif
-
- u32 rsvd;
-} sml_ethtype_tbl_8_4_key_s;
-
-/* Struct name: sml_ethtype_tbl_8_4_item_s
- * @brief ethtype filter table item
- * Description: ethtype filter item define
- */
-typedef struct tag_sml_ethtype_tbl_8_4_item {
- u32 rsvd;
-} sml_ethtype_tbl_8_4_item_s;
-
-/* ACL to dfx record packets*/
-typedef enum {
- ACL_PKT_TX = 0,
- ACL_PKT_RX = 1,
-} sml_acl_pkt_dir_e;
-
-/* ACL policy table item*/
-typedef struct tag_sml_acl_policy_tbl {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 drop:1;
- u32 car_en:1;
- u32 car_id:12;
- u32 counter_type:2;
- u32 counter_id:16;
-#else
- u32 counter_id:16;
- u32 counter_type:2;
- u32 car_id:12;
- u32 car_en:1;
- u32 drop:1;
-#endif
- } bs;
-
- u32 value;
- } dw0;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd1:7;
- u32 mirrior_en:1;
- u32 observer_port:10;
- u32 change_dscp:1;
- u32 new_dscp:6;
- u32 change_pkt_pri:1;
- u32 new_pkt_pri:3;
- u32 redirect_en:3;
-#else
- u32 redirect_en:3;
- u32 new_pkt_pri:3;
- u32 change_pkt_pri:1;
- u32 new_dscp:6;
- u32 change_dscp:1;
- u32 observer_port:10;
- u32 mirrior_en:1;
- u32 rsvd1:7;
-#endif
- } bs;
-
- u32 value;
- } dw1;
-
- u32 redirect_data;
- u32 rsvd2;
-} sml_acl_policy_tbl_s;
-
-typedef struct tag_sml_acl_ipv4_key {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
-			/* padding for alignment; match_key_type and the
-			 * later fields are the KEY value
-			 */
- u32 padding:16;
- u32 tid0:2;
- u32 match_key_type:3; /* Matching type*/
- u32 rsvd:11; /* Reserved field*/
-#else
- u32 rsvd:11;
- u32 match_key_type:3;
- u32 tid0:2;
- u32 padding:16;
-#endif
- } bs;
- u32 value;
- } dw0;
-
- /* dw1&dw2 */
- u32 sipv4;
- u32 dipv4;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 l4_sport:16;
- u32 l4_dport:16;
-#else
- u32 l4_dport:16;
- u32 l4_sport:16;
-#endif
- } bs;
- u32 value;
- } dw3;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 l4_protocol:8;
- u32 rsvd0:8;
- u32 seg_id:10;
- u32 rsvd1:6;
-#else
- u32 rsvd1:6;
- u32 seg_id:10;
- u32 rsvd0:8;
- u32 l4_protocol:8;
-#endif
- } bs;
- u32 value;
- } dw4;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 tid1:2;
- u32 rsvd:14;
- u32 padding:16;
-#else
- u32 padding:16;
- u32 rsvd:14;
- u32 tid1:2;
-#endif
- } bs;
- u32 value;
- } dw5;
-} sml_acl_ipv4_key_s;
-
-typedef struct tag_sml_acl_ipv6_key {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
-			/* padding for alignment; match_key_type and the
-			 * later fields are the KEY value
-			 */
- u32 padding:16;
- u32 tid0:2;
- u32 match_key_type:3; /* Matching type*/
- u32 rsvd:11; /* Reserved field*/
-#else
- u32 rsvd:11;
- u32 match_key_type:3;
- u32 tid0:2;
- u32 padding:16;
-#endif
- } bs;
- u32 value;
- } dw0;
-
- /*dw1~dw4 */
- u32 sipv6[4];
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 tid1:2;
- u32 rsvd1:14;
- u32 tid2:2;
- u32 rsvd2:14;
-#else
- u32 rsvd2:14;
- u32 tid2:2;
- u32 rsvd1:14;
- u32 tid1:2;
-#endif
- } bs;
- u32 value;
- } dw5;
-
- /*dw6~dw9 */
- u32 dipv6[4];
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 tid3:2;
- u32 rsvd3:14;
- u32 tid4:2;
- u32 rsvd4:14;
-#else
- u32 rsvd4:14;
- u32 tid4:2;
- u32 rsvd3:14;
- u32 tid3:2;
-#endif
- } bs;
- u32 value;
- } dw10;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 l4_sport:16;
- u32 l4_dport:16;
-#else
- u32 l4_dport:16;
- u32 l4_sport:16;
-#endif
- } bs;
- u32 value;
- } dw11;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 l4_protocol:8;
- u32 rsvd0:8;
- u32 seg_id:10;
- u32 rsvd1:6;
-#else
- u32 rsvd1:6;
- u32 seg_id:10;
- u32 rsvd0:8;
- u32 l4_protocol:8;
-#endif
- } bs;
- u32 value;
- } dw12;
-
- u32 dw13;
- u32 dw14;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 tid5:2;
- u32 rsvd5:14;
- u32 tid6:2;
- u32 rsvd6:14;
-#else
- u32 rsvd6:14;
- u32 tid6:2;
- u32 rsvd5:14;
- u32 tid5:2;
-#endif
- } bs;
- u32 value;
- } dw15;
-
- u32 dw16;
- u32 dw17;
- u32 dw18;
- u32 dw19;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 tid7:2;
- u32 rsvd7:30;
-#else
- u32 rsvd7:30;
- u32 tid7:2;
-#endif
- } bs;
- u32 value;
- } dw20;
-} sml_acl_ipv6_key_s;
-
-/**
- * Struct name: sml_voq_map_table_s
- * @brief: voq_map_table
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sml_voq_map_table {
- u16 voq_base[8];
-} sml_voq_map_table_s;
-
-/**
- * Struct name: sml_rss_context_u
- * @brief: rss_context
- * Description: xxxxxxxxxxxxxxx
- */
-typedef union tag_sml_rss_context {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 udp_ipv4:1;
- u32 udp_ipv6:1;
- u32 ipv4:1;
- u32 tcp_ipv4:1;
- u32 ipv6:1;
- u32 tcp_ipv6:1;
- u32 ipv6_ext:1;
- u32 tcp_ipv6_ext:1;
- u32 valid:1;
- u32 rsvd1:13;
- u32 def_qpn:10;
-#else
- u32 def_qpn:10;
- u32 rsvd1:13;
- u32 valid:1;
- u32 tcp_ipv6_ext:1;
- u32 ipv6_ext:1;
- u32 tcp_ipv6:1;
- u32 ipv6:1;
- u32 tcp_ipv4:1;
- u32 ipv4:1;
- u32 udp_ipv6:1;
- u32 udp_ipv4:1;
-#endif
- } bs;
-
- u32 value;
-} sml_rss_context_u;
-
-typedef struct tag_sml_rss_context_tbl {
- sml_rss_context_u element[TBL_ID_RSS_CONTEXT_NUM];
-} sml_rss_context_tbl_s;
-
-/**
- * Struct name: sml_rss_hash_u
- * @brief: rss_hash
- * Description: xxxxxxxxxxxxxxx
- */
-typedef union tag_sml_rss_hash {
- u8 rq_index[256];
-} sml_rss_hash_u;
-
-typedef struct tag_sml_rss_hash_tbl {
- sml_rss_hash_u element[TBL_ID_RSS_HASH_NUM];
-} sml_rss_hash_tbl_s;
-
-/**
- * Struct name: sml_lli_5tuple_key_s
- * @brief: lli_5tuple_key
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sml_lli_5tuple_key {
- union {
- struct {
-/** Define the struct bits */
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 src:5;
- /* The tile need fill the Dest */
- u32 rt:1;
- u32 key_size:2;
- /* determines which action that engine will take */
- u32 profile_id:3;
- /* indicates that requestor expect
- * to receive a response data
- */
- u32 op_id:5;
- u32 a:1;
- u32 rsvd:12;
- u32 vld:1;
- u32 xy:1;
- u32 at:1;
-#else
- u32 at:1;
- u32 xy:1;
- u32 vld:1;
- /* indicates that requestor expect to
- * receive a response data
- */
- u32 rsvd:12;
- /* determines which action that engine will take*/
- u32 a:1;
- u32 op_id:5;
- u32 profile_id:3;
- u32 key_size:2;
- u32 rt:1;
- u32 src:5;
-#endif
- } bs;
-
-/* Define an unsigned member */
- u32 value;
- } dw0;
- union {
- struct {
- u32 rsvd:1;
- /* The tile need fill the Dest */
- u32 address:15;
-
- u32 table_type:5;
- u32 ip_type:1;
- u32 func_id:10;
- } bs;
-
- u32 value;
- } misc;
-
- u32 src_ip[4];
- u32 dst_ip[4];
-
- u16 src_port;
- u16 dst_port;
-
- u8 protocol;
- u8 tcp_flag;
- u8 fcoe_rctl;
- u8 fcoe_type;
- u16 eth_type;
-} sml_lli_5tuple_key_s;
-
-/**
- * Struct name: sml_lli_5tuple_rsp_s
- * @brief: lli_5tuple_rsp
- * Description: xxxxxxxxxxxxxxx
- */
-typedef struct tag_sml_lli_5tuple_rsp {
- union {
- struct {
- u32 state:4;
- u32 rsvd:28;
- } bs;
-
- u32 value;
- } dw0;
-
- u32 dw1;
-
- union {
- struct {
- u32 frame_size:16;
- u32 lli_en:8;
- u32 rsvd:8;
- } bs;
-
- u32 value;
- } dw2;
-
- u32 dw3;
-} sml_lli_5tuple_rsp_s;
-
-/**
- * Struct name: l2nic_rx_cqe_s.
- * @brief: l2nic_rx_cqe_s data structure.
- * Description:
- */
-typedef struct tag_l2nic_rx_cqe {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rx_done:1;
- u32 bp_en:1;
- u32 rsvd1:6;
- u32 lro_num:8;
- u32 checksum_err:16;
-#else
- u32 checksum_err:16;
- u32 lro_num:8;
- u32 rsvd1:6;
- u32 bp_en:1;
- u32 rx_done:1;
-#endif
- } bs;
- u32 value;
- } dw0;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 length:16;
- u32 vlan:16;
-#else
- u32 vlan:16;
- u32 length:16;
-#endif
- } bs;
- u32 value;
- } dw1;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rss_type:8;
- u32 rsvd0:2;
- u32 vlan_offload_en:1;
- u32 umbcast:2;
- u32 rsvd1:7;
- u32 pkt_types:12;
-#else
- u32 pkt_types:12;
- u32 rsvd1:7;
- u32 umbcast:2;
- u32 vlan_offload_en:1;
- u32 rsvd0:2;
- u32 rss_type:8;
-#endif
- } bs;
- u32 value;
- } dw2;
-
- union {
- struct {
- u32 rss_hash_value;
- } bs;
- u32 value;
- } dw3;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 if_1588:1;
- u32 if_tx_ts:1;
- u32 if_rx_ts:1;
- u32 rsvd:1;
- u32 msg_1588_type:4;
- u32 msg_1588_offset:8;
- u32 tx_ts_seq:16;
-#else
- u32 tx_ts_seq:16;
- u32 msg_1588_offset:8;
- u32 msg_1588_type:4;
- u32 rsvd:1;
- u32 if_rx_ts:1;
- u32 if_tx_ts:1;
- u32 if_1588:1;
-#endif
- } bs;
- u32 value;
- } dw4;
-
- union {
- struct {
- u32 msg_1588_ts;
- } bs;
-
- struct {
- u32 rsvd0:12;
- /* for ovs. traffic type: 0-default l2nic pkt,
- * 1-fallback traffic, 2-miss upcall traffic,
- * 2-command
- */
- u32 traffic_type:4;
- /* for ovs. traffic from: vf_id,
- * only support traffic_type=0(default l2nic)
- * or 2(miss upcall)
- */
- u32 traffic_from:16;
- } ovs_bs;
-
- u32 value;
- } dw5;
-
- union {
- struct {
- u32 lro_ts;
- } bs;
- u32 value;
- } dw6;
-
- union {
- struct {
- u32 rsvd0;
- } bs;
-
- u32 localtag; /* for ovs */
-
- u32 value;
- } dw7;
-} l2nic_rx_cqe_s;
-
-typedef union tag_sml_global_queue_tbl_elem {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 src_tag_l:16;
- u32 local_qid:8;
- u32 rsvd:8;
-#elif (__BYTE_ORDER__ == __LITTLE_ENDIAN__)
- u32 rsvd:8;
- u32 local_qid:8;
- u32 src_tag_l:16;
-#endif
- } bs;
-
- u32 value;
-} sml_global_queue_tbl_elem_u;
-
-typedef struct tag_sml_global_queue_tbl {
- sml_global_queue_tbl_elem_u element[TBL_ID_GLOBAL_QUEUE_NUM];
-} sml_global_queue_tbl_s;
-
-typedef struct tag_sml_dfx_log_tbl {
- u32 wr_init_pc_h32; /* Initial value of write_pc*/
- u32 wr_init_pc_l32;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 state:8;
- u32 func_en:1;
- u32 srctag:12;
- u32 max_num:11; /* Data block highest value*/
-#else
- u32 max_num:11;
- u32 srctag:12;
- u32 func_en:1;
- u32 state:8;
-#endif
- } bs;
- u32 value;
- } dw2;
-
- u32 ci_index;
-} sml_dfx_log_tbl_s;
-
-typedef struct tag_sml_glb_capture_tbl {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 valid:1;
- u32 max_num:15;
- u32 rsvd:16;
-#else
- u32 rsvd:16;
- u32 max_num:15;
- u32 valid:1;
-#endif
- } bs;
- u32 value;
- } dw0;
-
- u32 discard_addr_h32;
- u32 discard_addr_l32;
-
- u32 rsvd0;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 valid:1;
- u32 mode:5;
- u32 direct:2;
- u32 offset:8;
- u32 cos:3;
- u32 max_num:13;
-#else
- u32 max_num:13;
- u32 cos:3;
- u32 offset:8;
- u32 direct:2;
- u32 mode:5;
- u32 valid:1;
-#endif
- } bs;
- u32 value;
- } dw4;
-
- u32 data_vlan;
-
- u32 condition_addr_h32;
- u32 condition_addr_l32;
-
-} sml_glb_capture_tbl_s;
-
-typedef struct tag_sml_cqe_addr_tbl {
- u32 cqe_first_addr_h32;
- u32 cqe_first_addr_l32;
- u32 cqe_last_addr_h32;
- u32 cqe_last_addr_l32;
-
-} sml_cqe_addr_tbl_s;
-
-/**
- * Struct name: sml_ucode_exec_info_tbl_s
- * @brief: ucode execption info Table
- * Description: microcode exception information table
- */
-typedef struct tag_ucode_exec_info_tbl {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 wptr_cpb_ack_str:4;
- u32 mem_cpb_ack_cnums_dma:4;
- u32 mem_cpb_ack_cmd_mode:2;
- u32 pr_ret_vld:1;
- u32 oeid_pd_pkt:1;
- u32 rptr_cmd:4;
- u32 wptr_cmd:4;
- u32 src_tag_l:12;
-#else
- u32 src_tag_l:12;
- u32 wptr_cmd:4;
- u32 rptr_cmd:4;
- u32 oeid_pd_pkt:1;
- u32 pr_ret_vld:1;
- u32 mem_cpb_ack_cmd_mode:2;
- u32 mem_cpb_ack_cnums_dma:4;
- u32 wptr_cpb_ack_str:4;
-#endif
- } bs;
-
- u32 value;
- } dw0;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 fq:16;
- u32 exception_type:4;
- u32 rptr_cpb_ack_str:4;
- u32 header_oeid:8;
-#else
- u32 header_oeid:8;
- u32 rptr_cpb_ack_str:4;
- u32 exception_type:4;
- u32 fq:16;
-#endif
- } bs;
-
- u32 value;
- } dw1;
-
- u32 oeid_pd_data_l32;
- u32 oeid_pd_data_m32;
-} sml_ucode_exec_info_s;
-
-typedef struct rq_iq_mapping_tbl {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rqid:16;
- u32 iqid:8;
- u32 rsvd:8;
-#else
- u32 rsvd:8;
- u32 iqid:8;
- u32 rqid:16;
-#endif
- } bs;
- u32 value;
- } dw[4];
-} sml_rq_iq_mapping_tbl_s;
-
-/* nic_ucode_rq_ctx table define
- */
-typedef struct nic_ucode_rq_ctx {
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 max_count:10;
- u32 cqe_tmpl:6;
- u32 pkt_tmpl:6;
- u32 wqe_tmpl:6;
- u32 psge_valid:1;
- u32 rsvd1:1;
- u32 owner:1;
- u32 ceq_en:1;
-#else
- u32 ceq_en:1;
- u32 owner:1;
- u32 rsvd1:1;
- u32 psge_valid:1;
- u32 wqe_tmpl:6;
- u32 pkt_tmpl:6;
- u32 cqe_tmpl:6;
- u32 max_count:10;
-#endif
- } bs;
- u32 dw0;
- };
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- /* Interrupt number that L2NIC engine tell SW
- * if generate int instead of CEQ
- */
- u32 int_num:10;
- u32 ceq_count:10;
- /* product index */
- u32 pi:12;
-#else
- /* product index */
- u32 pi:12;
- u32 ceq_count:10;
- /* Interrupt number that L2NIC engine tell SW
- * if generate int instead of CEQ
- */
- u32 int_num:10;
-#endif
- } bs0;
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- /* CEQ arm, L2NIC engine will clear it after send ceq,
- * driver should set it by CMD Q after receive all pkt.
- */
- u32 ceq_arm:1;
- u32 eq_id:5;
- u32 rsvd2:4;
- u32 ceq_count:10;
- /* product index */
- u32 pi:12;
-#else
- /* product index */
- u32 pi:12;
- u32 ceq_count:10;
- u32 rsvd2:4;
- u32 eq_id:5;
- /* CEQ arm, L2NIC engine will clear it after send ceq,
- * driver should set it by CMD Q after receive all pkt.
- */
- u32 ceq_arm:1;
-#endif
- } bs1;
- u32 dw1;
- };
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- /* consumer index */
- u32 ci:12;
- /* WQE page address of current CI point to, high part */
- u32 ci_wqe_page_addr_hi:20;
-#else
- /* WQE page address of current CI point to, high part */
- u32 ci_wqe_page_addr_hi:20;
- /* consumer index */
- u32 ci:12;
-#endif
- } bs2;
- u32 dw2;
- };
-
- /* WQE page address of current CI point to, low part */
- u32 ci_wqe_page_addr_lo;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 prefetch_min:7;
- u32 prefetch_max:11;
- u32 prefetch_cache_threshold:14;
-#else
- u32 prefetch_cache_threshold:14;
- u32 prefetch_max:11;
- u32 prefetch_min:7;
-#endif
- } bs3;
- u32 dw3;
- };
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd3:31;
- /* ownership of WQE */
- u32 prefetch_owner:1;
-#else
- /* ownership of WQE */
- u32 prefetch_owner:1;
- u32 rsvd3:31;
-#endif
- } bs4;
- u32 dw4;
- };
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 prefetch_ci:12;
- /* high part */
- u32 prefetch_ci_wqe_page_addr_hi:20;
-#else
- /* high part */
- u32 prefetch_ci_wqe_page_addr_hi:20;
- u32 prefetch_ci:12;
-#endif
- } bs5;
- u32 dw5;
- };
-
- /* low part */
- u32 prefetch_ci_wqe_page_addr_lo;
- /* host mem GPA, high part */
- u32 pi_gpa_hi;
- /* host mem GPA, low part */
- u32 pi_gpa_lo;
-
- union {
- struct {
-#if (__BYTE_ORDER__ == __BIG_ENDIAN__)
- u32 rsvd4:9;
- u32 ci_cla_tbl_addr_hi:23;
-#else
- u32 ci_cla_tbl_addr_hi:23;
- u32 rsvd4:9;
-#endif
- } bs6;
- u32 dw6;
- };
-
- u32 ci_cla_tbl_addr_lo;
-
-} nic_ucode_rq_ctx_s;
-
-#define LRO_TSO_SPACE_SIZE (240) /* (15 * 16) */
-#define RQ_CTX_SIZE (48)
-
-#ifdef __cplusplus
-#if __cplusplus
-}
-#endif
-#endif /* __cplusplus */
-#endif /* __L2_TABLE_H__ */
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_sml_table_pub.h b/drivers/net/ethernet/huawei/hinic/hinic_sml_table_pub.h
deleted file mode 100644
index 39d0516c..00000000
--- a/drivers/net/ethernet/huawei/hinic/hinic_sml_table_pub.h
+++ /dev/null
@@ -1,277 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0*/
-/* Huawei HiNIC PCI Express Linux driver
- * Copyright(c) 2017 Huawei Technologies Co., Ltd
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * for more details.
- *
- */
-
-#ifndef __SML_TABLE_PUB_H__
-#define __SML_TABLE_PUB_H__
-
-#ifdef __cplusplus
-#if __cplusplus
-extern "C" {
-#endif
-#endif /* __cplusplus */
-
-/* Un-FPGA(ESL/EMU/EDA) specification */
-#if (!defined(__UP_FPGA__) && (!defined(HI1822_MODE_FPGA)))
-/* ER specification*/
-#define L2_ER_SPEC (16)
-
-/* Entry specification*/
-#define TBL_ID_FUNC_CFG_SPEC (512)
-#define TBL_ID_PORT_CFG_SPEC (16)
-#define TBL_ID_MAC_SPEC (4096)
-#define TBL_ID_MULTICAST_SPEC (1024)
-#define TBL_ID_TRUNK_SPEC (256)
-#define TBL_ID_ELB_SPEC (18432)
-#define TBL_ID_TAGGEDLIST_SPEC (80)
-#define TBL_ID_UNTAGGEDLIST_SPEC (16)
-
-/* VLAN specification*/
-#define VSW_VLAN_SPEC (4096)
-
-#else /* FPGA scenario specifications */
-
-/* ER specification*/
-#define L2_ER_SPEC (4)
-
-/* Entry specification*/
-#define TBL_ID_FUNC_CFG_SPEC (64)
-#define TBL_ID_PORT_CFG_SPEC (16)
-#define TBL_ID_MAC_SPEC (256)
-#define TBL_ID_MULTICAST_SPEC (32)
-#define TBL_ID_TRUNK_SPEC (16)
-#define TBL_ID_ELB_SPEC (1152)
-#define TBL_ID_TAGGEDLIST_SPEC (20)
-#define TBL_ID_UNTAGGEDLIST_SPEC (4)
-
-/* VLAN specification*/
-#define VSW_VLAN_SPEC (1024)
-#endif
-
-/**
- * Number of entries elements defined
- */
-#define TBL_ID_ELB_ENTRY_ELEM_NUM 2
-#define TBL_ID_VLAN_ENTRY_ELEM_NUM 8
-#define TBL_ID_MULTICAST_ENTRY_ELEM_NUM 2
-#define TBL_ID_TRUNKFWD_ENTRY_ELEM_NUM 32
-#define TBL_ID_TAGGEDLIST_BITMAP32_NUM 4
-#define TBL_ID_UNTAGGEDLIST_BITMAP32_NUM 4
-#define TBL_ID_GLOBAL_QUEUE_NUM 4
-#define TBL_ID_RSS_CONTEXT_NUM 4
-#define TBL_ID_RSS_HASH_NUM 4
-
-/**
- * NIC receiving mode defined
- */
-#define NIC_RX_MODE_UC 0x01 /* 0b00001 */
-#define NIC_RX_MODE_MC 0x02 /* 0b00010 */
-#define NIC_RX_MODE_BC 0x04 /* 0b00100 */
-#define NIC_RX_MODE_MC_ALL 0x08 /* 0b01000 */
-#define NIC_RX_MODE_PROMISC 0x10 /* 0b10000 */
-
-/**
- * Maximum number of HCAR
- */
-#define QOS_MAX_HCAR_NUM (12)
-
-/**
- * VLAN Table, Multicast Table, ELB Table Definitions
- * The Table index and sub id index
- */
-#define VSW_DEFAULT_VLAN0 (0)
-#define INVALID_ELB_INDEX (0)
-
-#if (!defined(__UP_FPGA__) && (!defined(HI1822_MODE_FPGA)))
-/* Supports ESL/EMU/EDA 16ER * 4K VLAN, 1 entry stored 8 vlan*/
-#define GET_VLAN_TABLE_INDEX(er_id, vlan_id) \
- ((((er_id) & 0xF) << 9) | (((vlan_id) & 0xFFF) >> 3))
-#else
-/*FPGA supports only 4ER * 1K VLAN, 1 entry stored 8 vlan*/
-#define GET_VLAN_TABLE_INDEX(er_id, vlan_id) \
- ((((er_id) & 0x3) << 7) | (((vlan_id) & 0x3FF) >> 3))
-#endif
-#define GET_VLAN_ENTRY_SUBID(vlan_id) ((vlan_id) & 0x7)
-
-#define GET_MULTICAST_TABLE_INDEX(mc_id) ((mc_id) >> 1)
-#define GET_MULTICAST_ENTRY_SUBID(mc_id) ((mc_id) & 0x1)
-
-#define GET_ELB_TABLE_INDEX(elb_id) ((elb_id) >> 1)
-#define GET_ELB_ENTRY_SUBID(elb_id) ((elb_id) & 0x1)
-
-/**
- * taggedlist_table and untaggedlist_table access offset calculation
- */
-#define GET_TAGLIST_TABLE_INDEX(list_id, vlan_id) \
- (((list_id) << 5) | (((vlan_id) & 0xFFF) >> 7))
-#define GET_TAGLIST_TABLE_BITMAP_IDX(vlan_id) (((vlan_id) >> 5) & 0x3)
-#define GET_TAGLIST_TABLE_VLAN_BIT(vlan_id) \
- (0x1UL << ((vlan_id) & 0x1F))
-
-#define TRUNK_FWDID_NOPORT 0xFFFF
-
-/**
- * MAC type definition
- */
-typedef enum {
- MAC_TYPE_UC = 0,
- MAC_TYPE_BC,
- MAC_TYPE_MC,
- MAC_TYPE_RSV,
-} mac_type_e;
-
-/**
- * Ethernet port definition
- */
-typedef enum {
- MAG_ETH_PORT0 = 0,
- MAG_ETH_PORT1,
- MAG_ETH_PORT2,
- MAG_ETH_PORT3,
- MAG_ETH_PORT4,
- MAG_ETH_PORT5,
- MAG_ETH_PORT6,
- MAG_ETH_PORT7,
- MAG_ETH_PORT8,
- MAG_ETH_PORT9,
-} mag_eth_port_e;
-
-/**
- * vlan filter type defined
- */
-typedef enum {
- VSW_VLAN_MODE_ALL = 0,
- VSW_VLAN_MODE_ACCESS,
- VSW_VLAN_MODE_TRUNK,
- VSW_VLAN_MODE_HYBRID,
- VSW_VLAN_MODE_QINQ,
- VSW_VLAN_MODE_MAX,
-} vsw_vlan_mode_e;
-
-/**
- * MAC table query forwarding port type definition
- */
-typedef enum {
- VSW_FWD_TYPE_FUNCTION = 0, /* forward type function */
- VSW_FWD_TYPE_VMDQ, /* forward type function-queue(vmdq) */
- VSW_FWD_TYPE_PORT, /* forward type port */
- VSW_FWD_TYPE_FIC, /* forward type fic */
- VSW_FWD_TYPE_TRUNK, /* forward type trunk */
- VSW_FWD_TYPE_DP, /* forward type DP */
- VSW_FWD_TYPE_MC, /* forward type multicast */
-
- /* START: is not used and has to be removed */
- VSW_FWD_TYPE_BC, /* forward type broadcast */
- VSW_FWD_TYPE_PF, /* forward type pf */
- /* END: is not used and has to be removed */
-
- VSW_FWD_TYPE_NULL, /* forward type null */
-} vsw_fwd_type_e;
-
-/**
- * Eth-Trunk port aggregation mode
- */
-typedef enum {
- VSW_ETRK_MODE_STANDBY,
- VSW_ETRK_MODE_SMAC,
- VSW_ETRK_MODE_DMAC,
- VSW_ETRK_MODE_SMACDMAC,
- VSW_ETRK_MODE_SIP,
- VSW_ETRK_MODE_DIP,
- VSW_ETRK_MODE_SIPDIP,
- VSW_ETRK_MODE_5TUPLES,
- VSW_ETRK_MODE_LACP,
- VSW_ETRK_MODE_MAX,
-} vsw_etrk_mode_e;
-
-/**
- * Eth-Trunk port aggregation mode
- */
-typedef enum {
- TRUNK_MODE_STANDBY,
- TRUNK_MODE_SMAC,
- TRUNK_MODE_DMAC,
- TRUNK_MODE_SMACDMAC,
- TRUNK_MODE_SIP,
- TRUNK_MODE_DIP,
- TRUNK_MODE_SIPDIP,
- TRUNK_MODE_5TUPLES,
- TRUNK_MODE_SIPV6,
- TRUNK_MODE_DIPV6,
- TRUNK_MODE_SIPDIPV6,
- TRUNK_MODE_5TUPLESV6,
- TRUNK_MODE_LACP,
-} trunk_mode_s;
-
-/* ACL key type */
-enum {
- ACL_KEY_IPV4 = 0,
- ACL_KEY_IPV6
-};
-
-/* ACL filter action */
-enum {
- ACL_ACTION_PERMIT = 0,
- ACL_ACTION_DENY
-};
-
-/* ACL action button*/
-enum {
- ACL_ACTION_OFF = 0,
- ACL_ACTION_ON,
-};
-
-/* ACL statistic action*/
-enum {
- ACL_ACTION_NO_COUNTER = 0,
- ACL_ACTION_COUNT_PKT,
- ACL_ACTION_COUNT_PKT_LEN,
-};
-
-/* ACL redirect action*/
-enum {
- ACL_ACTION_FORWAR_UP = 1,
- ACL_ACTION_FORWAR_PORT,
- ACL_ACTION_FORWAR_NEXT_HOP,
- ACL_ACTION_FORWAR_OTHER,
-};
-
-enum {
- CEQ_TIMER_STOP = 0,
- CEQ_TIMER_START,
-};
-
-enum {
- CEQ_API_DISPATCH = 0,
- CEQ_API_NOT_DISPATCH,
-};
-
-enum {
- CEQ_MODE = 1,
- INT_MODE,
-};
-
-enum {
- ER_MODE_VEB,
- ER_MODE_VEPA,
- ER_MODE_MULTI,
- ER_MODE_NULL,
-};
-
-#ifdef __cplusplus
-#if __cplusplus
-}
-#endif
-#endif /* __cplusplus */
-#endif /* __L2_TABLE_PUB_H__ */
--
1.8.3

[PATCH 1/4] btrfs: delayed-inode: Kill the BUG_ON() in btrfs_delete_delayed_dir_index()
by Yang Yingliang 17 Apr '20
From: Qu Wenruo <wqu(a)suse.com>
mainline inclusion
from mainline-v5.4-rc1
commit 933c22a7512c5c09b1fdc46b557384efe8d03233
category: bugfix
bugzilla: 13690
CVE: CVE-2019-19813
-------------------------------------------------
There is one report of a fuzzed image which leads to BUG_ON() in
btrfs_delete_delayed_dir_index().
Although that fuzzed image can already be addressed by the enhanced
extent-tree error handler, it's still better to hunt down more BUG_ON()s.
This patch will hunt down two BUG_ON()s in
btrfs_delete_delayed_dir_index():
- One for an error from btrfs_delayed_item_reserve_metadata()
Instead of BUG_ON(), we output an error message, free the item and
return the error.
All callers of this function handle the error by aborting the current
transaction.
- One for a possible EEXIST from __btrfs_add_delayed_deletion_item()
That function can return -EEXIST.
We already have a good enough error message for that; we only need to
clean up the reserved metadata space and the allocated item.
To help with the above cleanup, also modify __btrfs_remove_delayed_item(),
called in btrfs_release_delayed_item(), to skip unassociated items.
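For reference, the caller-side pattern referred to above looks roughly
like this (a schematic fragment, not code from this patch;
btrfs_abort_transaction() is btrfs's existing transaction-abort helper):
	ret = btrfs_delete_delayed_dir_index(trans, dir, index);
	if (ret) {
		/* any error now aborts the transaction instead of BUG() */
		btrfs_abort_transaction(trans, ret);
		return ret;
	}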
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=203253
Signed-off-by: Qu Wenruo <wqu(a)suse.com>
Reviewed-by: David Sterba <dsterba(a)suse.com>
Signed-off-by: David Sterba <dsterba(a)suse.com>
Conflicts:
fs/btrfs/delayed-inode.c
[yyl: adjust context]
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/btrfs/delayed-inode.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index e9522f2..5dc6141 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -471,6 +471,9 @@ static void __btrfs_remove_delayed_item(struct btrfs_delayed_item *delayed_item)
struct rb_root *root;
struct btrfs_delayed_root *delayed_root;
+ /* Not associated with any delayed_node */
+ if (!delayed_item->delayed_node)
+ return;
delayed_root = delayed_item->delayed_node->root->fs_info->delayed_root;
BUG_ON(!delayed_root);
@@ -1526,7 +1529,12 @@ int btrfs_delete_delayed_dir_index(struct btrfs_trans_handle *trans,
* we have reserved enough space when we start a new transaction,
* so reserving metadata failure is impossible.
*/
- BUG_ON(ret);
+ if (ret < 0) {
+ btrfs_err(trans->fs_info,
+"metadata reservation failed for delayed dir item deltiona, should have been reserved");
+ btrfs_release_delayed_item(item);
+ goto end;
+ }
mutex_lock(&node->mutex);
ret = __btrfs_add_delayed_deletion_item(node, item);
@@ -1534,7 +1542,8 @@ int btrfs_delete_delayed_dir_index(struct btrfs_trans_handle *trans,
btrfs_err(trans->fs_info,
"err add delayed dir index item(index: %llu) into the deletion tree of the delayed node(root id: %llu, inode id: %llu, errno: %d)",
index, node->root->objectid, node->inode_id, ret);
- BUG();
+ btrfs_delayed_item_release_metadata(dir->root, item);
+ btrfs_release_delayed_item(item);
}
mutex_unlock(&node->mutex);
end:
--
1.8.3

[PATCH 01/30] net: hns3: add one printing information in hnae3_unregister_client() function
by Yang Yingliang 17 Apr '20
From: shenhao <shenhao21(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------------------------------
This patch adds an error message to let the user know that the client
does not exist when unregistering a nonexistent client.
Signed-off-by: Guangbin Huang <huangguangbin2(a)huawei.com>
Signed-off-by: shenhao <shenhao21(a)huawei.com>
Reviewed-by: Zhong Zhaohui <zhongzhaohui(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hnae3.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
index 4afa509..53a87b3 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
@@ -156,6 +156,7 @@ void hnae3_unregister_client(struct hnae3_client *client)
if (!existed) {
mutex_unlock(&hnae3_common_lock);
+ pr_err("client %s does not exist!\n", client->name);
return;
}
--
1.8.3
From: youshengzui <youshengzui(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
--------------------------
This patch updates the hns3 driver version to 1.9.37.4.
Signed-off-by: youshengzui <youshengzui(a)huawei.com>
Reviewed-by: Weiwei Deng <dengweiwei(a)huawei.com>
Reviewed-by: Zhaohui Zhong <zhongzhaohui(a)huawei.com>
Reviewed-by: Junxin Chen <chenjunxin1(a)huawei.com>
Reviewed-by: Zhong Zhaohui <zhongzhaohui(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hnae3.h | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h | 2 +-
5 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index f795bfd..98dfa7c 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -30,7 +30,7 @@
#include <linux/pci.h>
#include <linux/types.h>
-#define HNAE3_MOD_VERSION "1.9.37.3"
+#define HNAE3_MOD_VERSION "1.9.37.4"
#define HNAE3_MIN_VECTOR_NUM 2 /* one for msi-x, another for IO */
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h b/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h
index 3977883..630f642 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_version.h
@@ -4,7 +4,7 @@
#ifndef __HNS3_CAE_VERSION_H__
#define __HNS3_CAE_VERSION_H__
-#define HNS3_CAE_MOD_VERSION "1.9.37.3"
+#define HNS3_CAE_MOD_VERSION "1.9.37.4"
#define CMT_ID_LEN 8
#define RESV_LEN 3
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
index 5f1d5a3..9e11ec3 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
@@ -8,7 +8,7 @@
#include "hnae3.h"
-#define HNS3_MOD_VERSION "1.9.37.3"
+#define HNS3_MOD_VERSION "1.9.37.4"
extern char hns3_driver_version[];
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index 0146470..5e64d2a 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -12,7 +12,7 @@
#include "hclge_cmd.h"
#include "hnae3.h"
-#define HCLGE_MOD_VERSION "1.9.37.3"
+#define HCLGE_MOD_VERSION "1.9.37.4"
#define HCLGE_DRIVER_NAME "hclge"
#define HCLGE_MAX_PF_NUM 8
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
index 51af1050..596618e 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
@@ -10,7 +10,7 @@
#include "hclgevf_cmd.h"
#include "hnae3.h"
-#define HCLGEVF_MOD_VERSION "1.9.37.3"
+#define HCLGEVF_MOD_VERSION "1.9.37.4"
#define HCLGEVF_DRIVER_NAME "hclgevf"
#define HCLGEVF_MAX_VLAN_ID 4095
--
1.8.3

[PATCH 01/40] perf/amd/uncore: Replace manual sampling check with CAP_NO_INTERRUPT flag
by Yang Yingliang 17 Apr '20
From: Kim Phillips <kim.phillips(a)amd.com>
[ Upstream commit f967140dfb7442e2db0868b03b961f9c59418a1b ]
Enable the sampling check in kernel/events/core.c::perf_event_open(),
which returns the more appropriate -EOPNOTSUPP.
BEFORE:
$ sudo perf record -a -e instructions,l3_request_g1.caching_l3_cache_accesses true
Error:
The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (l3_request_g1.caching_l3_cache_accesses).
/bin/dmesg | grep -i perf may provide additional information.
With nothing relevant in dmesg.
AFTER:
$ sudo perf record -a -e instructions,l3_request_g1.caching_l3_cache_accesses true
Error:
l3_request_g1.caching_l3_cache_accesses: PMU Hardware doesn't support sampling/overflow-interrupts. Try 'perf stat'
Fixes: c43ca5091a37 ("perf/x86/amd: Add support for AMD NB and L2I "uncore" counters")
Signed-off-by: Kim Phillips <kim.phillips(a)amd.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Acked-by: Peter Zijlstra <peterz(a)infradead.org>
Cc: stable(a)vger.kernel.org
Link: https://lkml.kernel.org/r/20200311191323.13124-1-kim.phillips@amd.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/x86/events/amd/uncore.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index baa7e36..604a855 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -193,20 +193,18 @@ static int amd_uncore_event_init(struct perf_event *event)
/*
* NB and Last level cache counters (MSRs) are shared across all cores
- * that share the same NB / Last level cache. Interrupts can be directed
- * to a single target core, however, event counts generated by processes
- * running on other cores cannot be masked out. So we do not support
- * sampling and per-thread events.
+ * that share the same NB / Last level cache. On family 16h and below,
+ * Interrupts can be directed to a single target core, however, event
+ * counts generated by processes running on other cores cannot be masked
+ * out. So we do not support sampling and per-thread events via
+ * CAP_NO_INTERRUPT, and we do not enable counter overflow interrupts:
*/
- if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
- return -EINVAL;
/* NB and Last level cache counters do not have usr/os/guest/host bits */
if (event->attr.exclude_user || event->attr.exclude_kernel ||
event->attr.exclude_host || event->attr.exclude_guest)
return -EINVAL;
- /* and we do not enable counter overflow interrupts */
hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB;
hwc->idx = -1;
@@ -314,6 +312,7 @@ static ssize_t amd_uncore_attr_show_cpumask(struct device *dev,
.start = amd_uncore_start,
.stop = amd_uncore_stop,
.read = amd_uncore_read,
+ .capabilities = PERF_PMU_CAP_NO_INTERRUPT,
};
static struct pmu amd_llc_pmu = {
@@ -324,6 +323,7 @@ static ssize_t amd_uncore_attr_show_cpumask(struct device *dev,
.start = amd_uncore_start,
.stop = amd_uncore_stop,
.read = amd_uncore_read,
+ .capabilities = PERF_PMU_CAP_NO_INTERRUPT,
};
static struct amd_uncore *amd_uncore_alloc(unsigned int cpu)
--
1.8.3
From: Yu'an Wang <wangyuan46(a)huawei.com>
driver inclusion
category: feature
bugzilla: NA
CVE: NA
This patch adds DFX statistics for I/O operations, including send/
recv/send_fail/send_busy counters. It also lets users define an
overtime_threshold to flag tasks that exceed it, as sketched below.
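As a rough userspace analogue of the timeout judgement (an illustrative
sketch only: it uses POSIX clock_gettime() instead of the driver's
ktime_get_ts64(), and the names here are invented):
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define SEC_TO_US 1000000
#define NS_TO_US 1000

/* True if more than overtime_thrhld microseconds passed between the
 * request timestamp and the reply timestamp.
 */
static bool is_over_threshold(struct timespec req, struct timespec reply,
			      int64_t overtime_thrhld)
{
	int64_t us = (int64_t)(reply.tv_sec - req.tv_sec) * SEC_TO_US +
		     (reply.tv_nsec - req.tv_nsec) / NS_TO_US;

	return us > overtime_thrhld;
}

int main(void)
{
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	/* ... the request would be processed here ... */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("over 100 us: %d\n", is_over_threshold(t0, t1, 100));
	return 0;
}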
Signed-off-by: Yu'an Wang <wangyuan46(a)huawei.com>
Reviewed-by: Mingqiang Ling <lingmingqiang(a)huawei.com>
Reviewed-by: Guangwei Zhou <zhouguangwei5(a)huawei.com>
Reviewed-by: Ye Kai <yekai13(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/crypto/hisilicon/hpre/hpre.h | 16 ++++++
drivers/crypto/hisilicon/hpre/hpre_crypto.c | 89 ++++++++++++++++++++++++-----
drivers/crypto/hisilicon/hpre/hpre_main.c | 55 ++++++++++++++++++
3 files changed, 146 insertions(+), 14 deletions(-)
diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h
index 42b2f2a..203eb2a 100644
--- a/drivers/crypto/hisilicon/hpre/hpre.h
+++ b/drivers/crypto/hisilicon/hpre/hpre.h
@@ -25,6 +25,16 @@ enum hpre_ctrl_dbgfs_file {
HPRE_DEBUG_FILE_NUM,
};
+enum hpre_dfx_dbgfs_file {
+ HPRE_SEND_CNT,
+ HPRE_RECV_CNT,
+ HPRE_SEND_FAIL_CNT,
+ HPRE_SEND_BUSY_CNT,
+ HPRE_OVER_THRHLD_CNT,
+ HPRE_OVERTIME_THRHLD,
+ HPRE_DFX_FILE_NUM
+};
+
#define HPRE_DEBUGFS_FILE_NUM (HPRE_DEBUG_FILE_NUM + HPRE_CLUSTERS_NUM - 1)
struct hpre_debugfs_file {
@@ -34,12 +44,18 @@ struct hpre_debugfs_file {
struct hpre_debug *debug;
};
+struct hpre_dfx {
+ atomic64_t value;
+ enum hpre_dfx_dbgfs_file type;
+};
+
/*
* One HPRE controller has one PF and multiple VFs, some global configurations
* which PF has need this structure.
* Just relevant for PF.
*/
struct hpre_debug {
+ struct hpre_dfx dfx[HPRE_DFX_FILE_NUM];
struct hpre_debugfs_file files[HPRE_DEBUGFS_FILE_NUM];
};
diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
index 7610e13..b68b30c 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
@@ -10,6 +10,7 @@
#include <linux/dma-mapping.h>
#include <linux/fips.h>
#include <linux/module.h>
+#include <linux/time.h>
#include "hpre.h"
struct hpre_ctx;
@@ -68,6 +69,7 @@ struct hpre_dh_ctx {
struct hpre_ctx {
struct hisi_qp *qp;
struct hpre_asym_request **req_list;
+ struct hpre *hpre;
spinlock_t req_lock;
unsigned int key_sz;
bool crt_g2_mode;
@@ -90,6 +92,7 @@ struct hpre_asym_request {
int err;
int req_id;
hpre_cb cb;
+ struct timespec64 req_time;
};
static DEFINE_MUTEX(hpre_alg_lock);
@@ -119,6 +122,7 @@ static void hpre_free_req_id(struct hpre_ctx *ctx, int req_id)
static int hpre_add_req_to_ctx(struct hpre_asym_request *hpre_req)
{
struct hpre_ctx *ctx;
+ struct hpre_dfx *dfx;
int id;
ctx = hpre_req->ctx;
@@ -129,6 +133,10 @@ static int hpre_add_req_to_ctx(struct hpre_asym_request *hpre_req)
ctx->req_list[id] = hpre_req;
hpre_req->req_id = id;
+ dfx = ctx->hpre->debug.dfx;
+ if (atomic64_read(&dfx[HPRE_OVERTIME_THRHLD].value))
+ ktime_get_ts64(&hpre_req->req_time);
+
return id;
}
@@ -308,12 +316,16 @@ static int hpre_alg_res_post_hf(struct hpre_ctx *ctx, struct hpre_sqe *sqe,
static int hpre_ctx_set(struct hpre_ctx *ctx, struct hisi_qp *qp, int qlen)
{
+ struct hpre *hpre;
+
if (!ctx || !qp || qlen < 0)
return -EINVAL;
spin_lock_init(&ctx->req_lock);
ctx->qp = qp;
+ hpre = container_of(ctx->qp->qm, struct hpre, qm);
+ ctx->hpre = hpre;
ctx->req_list = kcalloc(qlen, sizeof(void *), GFP_KERNEL);
if (!ctx->req_list)
return -ENOMEM;
@@ -336,30 +348,67 @@ static void hpre_ctx_clear(struct hpre_ctx *ctx, bool is_clear_all)
ctx->key_sz = 0;
}
+static bool hpre_is_bd_timeout(struct hpre_asym_request *req,
+ u64 overtime_thrhld)
+{
+ struct timespec64 reply_time;
+ u64 time_use_us;
+
+#define HPRE_DFX_SEC_TO_US 1000000
+#define HPRE_DFX_US_TO_NS 1000
+
+ ktime_get_ts64(&reply_time);
+ time_use_us = (reply_time.tv_sec - req->req_time.tv_sec) *
+ HPRE_DFX_SEC_TO_US +
+ (reply_time.tv_nsec - req->req_time.tv_nsec) /
+ HPRE_DFX_US_TO_NS;
+
+ if (time_use_us <= overtime_thrhld)
+ return false;
+
+ return true;
+}
+
static void hpre_dh_cb(struct hpre_ctx *ctx, void *resp)
{
+ struct hpre_dfx *dfx = ctx->hpre->debug.dfx;
struct hpre_asym_request *req;
struct kpp_request *areq;
+ u64 overtime_thrhld;
int ret;
ret = hpre_alg_res_post_hf(ctx, resp, (void **)&req);
areq = req->areq.dh;
areq->dst_len = ctx->key_sz;
+
+ overtime_thrhld = atomic64_read(&dfx[HPRE_OVERTIME_THRHLD].value);
+ if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld))
+ atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value);
+
hpre_hw_data_clr_all(ctx, req, areq->dst, areq->src);
kpp_request_complete(areq, ret);
+ atomic64_inc(&dfx[HPRE_RECV_CNT].value);
}
static void hpre_rsa_cb(struct hpre_ctx *ctx, void *resp)
{
+ struct hpre_dfx *dfx = ctx->hpre->debug.dfx;
struct hpre_asym_request *req;
struct akcipher_request *areq;
+ u64 overtime_thrhld;
int ret;
ret = hpre_alg_res_post_hf(ctx, resp, (void **)&req);
+
+ overtime_thrhld = atomic64_read(&dfx[HPRE_OVERTIME_THRHLD].value);
+ if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld))
+ atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value);
+
areq = req->areq.rsa;
areq->dst_len = ctx->key_sz;
hpre_hw_data_clr_all(ctx, req, areq->dst, areq->src);
akcipher_request_complete(areq, ret);
+ atomic64_inc(&dfx[HPRE_RECV_CNT].value);
}
static void hpre_alg_cb(struct hisi_qp *qp, void *resp)
@@ -435,6 +484,29 @@ static int hpre_msg_request_set(struct hpre_ctx *ctx, void *req, bool is_rsa)
return 0;
}
+static int hpre_send(struct hpre_ctx *ctx, struct hpre_sqe *msg)
+{
+ struct hpre_dfx *dfx = ctx->hpre->debug.dfx;
+ int ctr = 0;
+ int ret;
+
+ do {
+ atomic64_inc(&dfx[HPRE_SEND_CNT].value);
+ ret = hisi_qp_send(ctx->qp, msg);
+ if (ret != -EBUSY)
+ break;
+ atomic64_inc(&dfx[HPRE_SEND_BUSY_CNT].value);
+ } while (ctr++ < HPRE_TRY_SEND_TIMES);
+
+ if (likely(!ret))
+ return ret;
+
+ if (ret != -EBUSY)
+ atomic64_inc(&dfx[HPRE_SEND_FAIL_CNT].value);
+
+ return ret;
+}
+
#ifdef CONFIG_CRYPTO_DH
static int hpre_dh_compute_value(struct kpp_request *req)
{
@@ -443,7 +515,6 @@ static int hpre_dh_compute_value(struct kpp_request *req)
void *tmp = kpp_request_ctx(req);
struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ);
struct hpre_sqe *msg = &hpre_req->req;
- int ctr = 0;
int ret;
ret = hpre_msg_request_set(ctx, req, false);
@@ -464,11 +535,9 @@ static int hpre_dh_compute_value(struct kpp_request *req)
msg->dw0 = cpu_to_le32(le32_to_cpu(msg->dw0) | HPRE_ALG_DH_G2);
else
msg->dw0 = cpu_to_le32(le32_to_cpu(msg->dw0) | HPRE_ALG_DH);
- do {
- ret = hisi_qp_send(ctx->qp, msg);
- } while (ret == -EBUSY && ctr++ < HPRE_TRY_SEND_TIMES);
/* success */
+ ret = hpre_send(ctx, msg);
if (likely(!ret))
return -EINPROGRESS;
@@ -646,7 +715,6 @@ static int hpre_rsa_enc(struct akcipher_request *req)
void *tmp = akcipher_request_ctx(req);
struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ);
struct hpre_sqe *msg = &hpre_req->req;
- int ctr = 0;
int ret;
/* For 512 and 1536 bits key size, use soft tfm instead */
@@ -676,11 +744,8 @@ static int hpre_rsa_enc(struct akcipher_request *req)
if (unlikely(ret))
goto clear_all;
- do {
- ret = hisi_qp_send(ctx->qp, msg);
- } while (ret == -EBUSY && ctr++ < HPRE_TRY_SEND_TIMES);
-
/* success */
+ ret = hpre_send(ctx, msg);
if (likely(!ret))
return -EINPROGRESS;
@@ -698,7 +763,6 @@ static int hpre_rsa_dec(struct akcipher_request *req)
void *tmp = akcipher_request_ctx(req);
struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ);
struct hpre_sqe *msg = &hpre_req->req;
- int ctr = 0;
int ret;
/* For 512 and 1536 bits key size, use soft tfm instead */
@@ -735,11 +799,8 @@ static int hpre_rsa_dec(struct akcipher_request *req)
if (unlikely(ret))
goto clear_all;
- do {
- ret = hisi_qp_send(ctx->qp, msg);
- } while (ret == -EBUSY && ctr++ < HPRE_TRY_SEND_TIMES);
-
/* success */
+ ret = hpre_send(ctx, msg);
if (likely(!ret))
return -EINPROGRESS;
diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
index f727158..2ede8d78 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
@@ -169,6 +169,15 @@ struct hpre_hw_error {
{"INT_STATUS ", HPRE_INT_STATUS},
};
+static const char *hpre_dfx_files[HPRE_DFX_FILE_NUM] = {
+ "send_cnt",
+ "recv_cnt",
+ "send_fail_cnt",
+ "send_busy_cnt",
+ "over_thrhld_cnt",
+ "overtime_thrhld"
+};
+
#ifdef CONFIG_CRYPTO_QM_UACCE
static int uacce_mode_set(const char *val, const struct kernel_param *kp)
{
@@ -588,6 +597,33 @@ static ssize_t hpre_ctrl_debug_write(struct file *filp, const char __user *buf,
.write = hpre_ctrl_debug_write,
};
+static int hpre_debugfs_atomic64_get(void *data, u64 *val)
+{
+ struct hpre_dfx *dfx_item = data;
+
+ *val = atomic64_read(&dfx_item->value);
+ return 0;
+}
+
+static int hpre_debugfs_atomic64_set(void *data, u64 val)
+{
+ struct hpre_dfx *dfx_item = data;
+
+ if (dfx_item->type == HPRE_OVERTIME_THRHLD) {
+ struct hpre_dfx *hpre_dfx = dfx_item - HPRE_OVERTIME_THRHLD;
+
+ atomic64_set(&hpre_dfx[HPRE_OVER_THRHLD_CNT].value, 0);
+ } else if (val) {
+ return -EINVAL;
+ }
+
+ atomic64_set(&dfx_item->value, val);
+ return 0;
+}
+
+DEFINE_DEBUGFS_ATTRIBUTE(hpre_atomic64_ops, hpre_debugfs_atomic64_get,
+ hpre_debugfs_atomic64_set, "%llu\n");
+
static int hpre_create_debugfs_file(struct hpre_debug *dbg, struct dentry *dir,
enum hpre_ctrl_dbgfs_file type, int indx)
{
@@ -691,6 +727,22 @@ static int hpre_ctrl_debug_init(struct hpre_debug *debug)
return hpre_cluster_debugfs_init(debug);
}
+static void hpre_dfx_debug_init(struct hpre_debug *debug)
+{
+ struct hpre *hpre = container_of(debug, struct hpre, debug);
+ struct hpre_dfx *dfx = hpre->debug.dfx;
+ struct hisi_qm *qm = &hpre->qm;
+ struct dentry *parent;
+ int i;
+
+ parent = debugfs_create_dir("hpre_dfx", qm->debug.debug_root);
+ for (i = 0; i < HPRE_DFX_FILE_NUM; i++) {
+ dfx[i].type = i;
+ debugfs_create_file(hpre_dfx_files[i], 0644, parent, &dfx[i],
+ &hpre_atomic64_ops);
+ }
+}
+
static int hpre_debugfs_init(struct hisi_qm *qm)
{
struct hpre *hpre = container_of(qm, struct hpre, qm);
@@ -709,6 +761,9 @@ static int hpre_debugfs_init(struct hisi_qm *qm)
if (ret)
goto failed_to_create;
}
+
+ hpre_dfx_debug_init(&hpre->debug);
+
return 0;
failed_to_create:
--
1.8.3
From: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
When I backported 52918ed5fcf0 ("KVM: SVM: Override default MMIO mask if
memory encryption is enabled") to 4.19 (which resulted in commit
a4e761c9f63a ("KVM: SVM: Override default MMIO mask if memory encryption
is enabled")), I messed up the call to kvm_mmu_set_mmio_spte_mask()
Fix that here now.
Reported-by: Tom Lendacky <thomas.lendacky(a)amd.com>
Cc: Sean Christopherson <sean.j.christopherson(a)intel.com>
Cc: Paolo Bonzini <pbonzini(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/x86/kvm/svm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 3f0565e..cc8f3b41 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1336,7 +1336,7 @@ static __init void svm_adjust_mmio_mask(void)
*/
mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
- kvm_mmu_set_mmio_spte_mask(mask, PT_WRITABLE_MASK | PT_USER_MASK);
+ kvm_mmu_set_mmio_spte_mask(mask, mask);
}
static __init int svm_hardware_setup(void)
--
1.8.3

[PATCH 1/5] Revert "dm crypt: fix benbi IV constructor crash if used in authenticated mode"
by Yang Yingliang 17 Apr '20
From: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 31797
CVE: NA
-------------------------
The next two patches to be reverted conflict with this patch, so revert
this patch first and then merge the original patch.
This reverts commit 808f3768deec5e1a5c9d2a4a2d8593fbbaf3e4cc.
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/md/dm-crypt.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index d451f98..aa7f741 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -563,14 +563,8 @@ static int crypt_iv_essiv_gen(struct geniv_ctx *ctx,
static int crypt_iv_benbi_ctr(struct geniv_ctx *ctx)
{
- unsigned bs;
- int log;
-
- if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &ctx->cipher_flags))
- bs = crypto_aead_blocksize(ctx->tfms.tfms_aead[0]);
- else
- bs = crypto_skcipher_blocksize(ctx->tfms.tfms[0]);
- log = ilog2(bs);
+ unsigned int bs = crypto_skcipher_blocksize(ctx->tfms.tfms[0]);
+ int log = ilog2(bs);
/* we need to calculate how far we must shift the sector count
* to get the cipher block count, we use this shift in _gen */
--
1.8.3

[PATCH] arm64: clear_page: Add new implementation of clear_page() by STNP
by Yang Yingliang 17 Apr '20
From: Wei Li <liwei391(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 31400
CVE: NA
---------------------------
Currently, clear_page() clears the page through 'dc zva', but since the
page is usually not used immediately, filling the cache is wasted work.
Add an optimized implementation of clear_page() based on 'stnp'
(non-temporal store pair) for better performance. It can be enabled via
the boot cmdline parameter 'mm.use_clearpage_stnp'.
In the hugetlb clear test, we gained about 53.7% performance improvement:
Set mm.use_clearpage_stnp = 0 | Set mm.use_clearpage_stnp = 1
[root@localhost liwei]# ./a.out 50 20 | [root@localhost liwei]# ./a.out 50 20
size is 50 Gib, test times is 20 | size is 50 Gib, test times is 20
test_time[0] : use 8.438046 sec | test_time[0] : use 3.722682 sec
test_time[1] : use 8.028493 sec | test_time[1] : use 3.640274 sec
test_time[2] : use 8.646547 sec | test_time[2] : use 4.095052 sec
test_time[3] : use 8.122490 sec | test_time[3] : use 3.998446 sec
test_time[4] : use 8.053038 sec | test_time[4] : use 4.084259 sec
test_time[5] : use 8.843512 sec | test_time[5] : use 3.933871 sec
test_time[6] : use 8.308906 sec | test_time[6] : use 3.934334 sec
test_time[7] : use 8.093817 sec | test_time[7] : use 3.869142 sec
test_time[8] : use 8.303504 sec | test_time[8] : use 3.902916 sec
test_time[9] : use 8.178336 sec | test_time[9] : use 3.541885 sec
test_time[10] : use 8.003625 sec | test_time[10] : use 3.595554 sec
test_time[11] : use 8.163807 sec | test_time[11] : use 3.583813 sec
test_time[12] : use 8.267464 sec | test_time[12] : use 3.863033 sec
test_time[13] : use 8.055326 sec | test_time[13] : use 3.770953 sec
test_time[14] : use 8.246986 sec | test_time[14] : use 3.808006 sec
test_time[15] : use 8.546992 sec | test_time[15] : use 3.653194 sec
test_time[16] : use 8.727256 sec | test_time[16] : use 3.722395 sec
test_time[17] : use 8.288951 sec | test_time[17] : use 3.683508 sec
test_time[18] : use 8.019322 sec | test_time[18] : use 4.253087 sec
test_time[19] : use 8.250685 sec | test_time[19] : use 4.082845 sec
hugetlb test end! | hugetlb test end!
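As a quick sanity check of the quoted 53.7% figure: averaging the 20
runs above gives roughly 8.279 sec without STNP and 3.837 sec with it,
and (8.279 - 3.837) / 8.279 ~= 0.537, i.e. about 53.7% less time per
clear.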
Signed-off-by: Wei Li <liwei391(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/include/asm/cpucaps.h | 3 ++-
arch/arm64/kernel/cpufeature.c | 34 ++++++++++++++++++++++++++++++++++
arch/arm64/lib/clear_page.S | 21 +++++++++++++++++++++
3 files changed, 57 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index a9090f2..3cd169f 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -56,7 +56,8 @@
#define ARM64_WORKAROUND_1463225 35
#define ARM64_HAS_CRC32 36
#define ARM64_SSBS 37
+#define ARM64_CLEARPAGE_STNP 38
-#define ARM64_NCAPS 38
+#define ARM64_NCAPS 39
#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b1f621c..8b84a47 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1232,6 +1232,34 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
}
#endif
+static bool use_clearpage_stnp;
+
+static int __init early_use_clearpage_stnp(char *p)
+{
+ return strtobool(p, &use_clearpage_stnp);
+}
+early_param("mm.use_clearpage_stnp", early_use_clearpage_stnp);
+
+static bool has_mor_nontemporal(const struct arm64_cpu_capabilities *entry)
+{
+ /*
+ * List of CPUs which have memory ordering ruled non-temporal
+ * load and store.
+ */
+ static const struct midr_range cpus[] = {
+ MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
+ {},
+ };
+
+ return is_midr_in_range_list(read_cpuid_id(), cpus);
+}
+
+static bool can_clearpage_use_stnp(const struct arm64_cpu_capabilities *entry,
+ int scope)
+{
+ return use_clearpage_stnp && has_mor_nontemporal(entry);
+}
+
static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "GIC system register CPU interface",
@@ -1467,6 +1495,12 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
.cpu_enable = cpu_enable_ssbs,
},
#endif
+ {
+ .desc = "Clear Page by STNP",
+ .capability = ARM64_CLEARPAGE_STNP,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .matches = can_clearpage_use_stnp,
+ },
{},
};
diff --git a/arch/arm64/lib/clear_page.S b/arch/arm64/lib/clear_page.S
index ef08e90..9aa1de1 100644
--- a/arch/arm64/lib/clear_page.S
+++ b/arch/arm64/lib/clear_page.S
@@ -18,6 +18,25 @@
#include <linux/const.h>
#include <asm/assembler.h>
#include <asm/page.h>
+#include <asm/alternative.h>
+
+/*
+ * Clear page @dest
+ *
+ * Parameters:
+ * x0 - dest
+ */
+ENTRY(clear_page_stnp)
+ .align 6
+1: stnp xzr, xzr, [x0]
+ stnp xzr, xzr, [x0, #0x10]
+ stnp xzr, xzr, [x0, #0x20]
+ stnp xzr, xzr, [x0, #0x30]
+ add x0, x0, #0x40
+ tst x0, #(PAGE_SIZE - 1)
+ b.ne 1b
+ ret
+ENDPROC(clear_page_stnp)
/*
* Clear page @dest
@@ -26,6 +45,8 @@
* x0 - dest
*/
ENTRY(clear_page)
+ ALTERNATIVE("nop", "b clear_page_stnp", ARM64_CLEARPAGE_STNP)
+
mrs x1, dczid_el0
and w1, w1, #0xf
mov x2, #4
--
1.8.3

17 Apr '20
From: Tao Jihua <taojihua4(a)huawei.com>
driver inclusion
category: Bugfix
bugzilla: NA
This modification mainly optimizes mtr management and fixes an mtr
addressing bug: when mtt_ba_pg_sz = 0, hem->start / step = 1, which
eventually results in an additional BA_BYTE_LEN being added to the
offset.
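To illustrate with hypothetical numbers: if r->offset == hem->start and
hem->start / step == 1, the old expression
hem->start / step * BA_BYTE_LEN computes a BA_BYTE_LEN offset even for
the first entry of the region, while the corrected
((hem->start - r->offset) / step) * BA_BYTE_LEN yields the expected 0.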
Signed-off-by: Tao Jihua <taojihua4(a)huawei.com>
Reviewed-by: Hu Chunzhi <huchunzhi(a)huawei.com>
Reviewed-by: Wang Lin <wanglin137(a)huawei.com>
Reviewed-by: Zhao Weibo <zhaoweibo3(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/infiniband/hw/hns/hns_roce_hem.c | 43 ++++++++++++++++----------------
1 file changed, 21 insertions(+), 22 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
index 510c008..1911470 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
@@ -1165,13 +1165,6 @@ struct roce_hem_item {
int end; /* end buf offset in this hem */
};
-#define hem_list_for_each(pos, n, head) \
- list_for_each_entry_safe(pos, n, head, list)
-
-#define hem_list_del_item(hem) list_del(&hem->list)
-#define hem_list_add_item(hem, head) list_add(&hem->list, head)
-#define hem_list_link_item(hem, head) list_add(&hem->sibling, head)
-
static struct roce_hem_item *hem_list_alloc_item(struct hns_roce_dev *hr_dev,
int start, int end,
int count, bool exist_bt,
@@ -1216,8 +1209,8 @@ static void hem_list_free_all(struct hns_roce_dev *hr_dev,
{
struct roce_hem_item *hem, *temp_hem;
- hem_list_for_each(hem, temp_hem, head) {
- hem_list_del_item(hem);
+ list_for_each_entry_safe(hem, temp_hem, head, list) {
+ list_del(&hem->list);
hem_list_free_item(hr_dev, hem, exist_bt);
}
}
@@ -1249,7 +1242,7 @@ static struct roce_hem_item *hem_list_search_item(struct list_head *ba_list,
struct roce_hem_item *hem, *temp_hem;
struct roce_hem_item *found = NULL;
- hem_list_for_each(hem, temp_hem, ba_list) {
+ list_for_each_entry_safe(hem, temp_hem, ba_list, list) {
if (hem_list_page_is_in_range(hem, page_offset)) {
found = hem;
break;
@@ -1391,9 +1384,9 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
goto err_exit;
}
hem_ptrs[level] = cur;
- hem_list_add_item(cur, &temp_list[level]);
+ list_add(&cur->list, &temp_list[level]);
if (hem_list_is_bottom_bt(hopnum, level))
- hem_list_link_item(cur, &temp_list[0]);
+ list_add(&cur->sibling, &temp_list[0]);
/* link bt to parent bt */
if (level > 1) {
@@ -1430,6 +1423,7 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
void *cpu_base;
u64 phy_base;
int ret = 0;
+ int ba_num;
int offset;
int total;
int step;
@@ -1440,15 +1434,19 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
if (root_hem)
return 0;
+ ba_num = hns_roce_hem_list_calc_root_ba(regions, region_cnt, unit);
+ if (ba_num < 1)
+ return -ENOMEM;
+
INIT_LIST_HEAD(&temp_root);
- total = r->offset;
+ offset = r->offset;
/* indicate to last region */
r = &regions[region_cnt - 1];
- root_hem = hem_list_alloc_item(hr_dev, total, r->offset + r->count - 1,
- unit, true, 0);
+ root_hem = hem_list_alloc_item(hr_dev, offset, r->offset + r->count - 1,
+ ba_num, true, 0);
if (!root_hem)
return -ENOMEM;
- hem_list_add_item(root_hem, &temp_root);
+ list_add(&root_hem->list, &temp_root);
hem_list->root_ba = root_hem->dma_addr;
@@ -1457,7 +1455,7 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
INIT_LIST_HEAD(&temp_list[i]);
total = 0;
- for (i = 0; i < region_cnt && total < unit; i++) {
+ for (i = 0; i < region_cnt && total < ba_num; i++) {
r = &regions[i];
if (!r->count)
continue;
@@ -1478,8 +1476,8 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
goto err_exit;
}
hem_list_assign_bt(hr_dev, hem, cpu_base, phy_base);
- hem_list_add_item(hem, &temp_list[i]);
- hem_list_link_item(hem, &temp_btm);
+ list_add(&hem->list, &temp_list[i]);
+ list_add(&hem->sibling, &temp_btm);
total += r->count;
} else {
step = hem_list_calc_ba_range(r->hopnum, 1, unit);
@@ -1488,9 +1486,10 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
goto err_exit;
}
/* if exist mid bt, link L1 to L0 */
- hem_list_for_each(hem, temp_hem,
- &hem_list->mid_bt[i][1]) {
- offset = hem->start / step * BA_BYTE_LEN;
+ list_for_each_entry_safe(hem, temp_hem,
+ &hem_list->mid_bt[i][1], list) {
+ offset = ((hem->start - r->offset) / step) *
+ BA_BYTE_LEN;
hem_list_link_bt(hr_dev, cpu_base + offset,
hem->dma_addr);
total++;
--
1.8.3

17 Apr '20
From: Joe Perches <joe(a)perches.com>
[ Upstream commit 20faba848752901de23a4d45a1174d64d2069dde ]
Arguments are supposed to be ordered high then low.
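A standalone C sketch of why the order matters (the macro body mirrors
the kernel's GENMASK_ULL() from include/linux/bits.h, assuming a 64-bit
long long):
#include <stdio.h>

#define BITS_PER_LONG_LONG 64
#define GENMASK_ULL(h, l) \
	(((~0ULL) - (1ULL << (l)) + 1) & \
	 (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))

int main(void)
{
	/* correct (high, low) order: bits 15..0 */
	printf("GENMASK_ULL(15, 0) = 0x%llx\n", GENMASK_ULL(15, 0)); /* 0xffff */
	/* swapped order, as in the old code: evaluates to 0, so the
	 * WARN_ON_ONCE() check could never trigger
	 */
	printf("GENMASK_ULL(0, 15) = 0x%llx\n", GENMASK_ULL(0, 15)); /* 0x0 */
	return 0;
}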
Signed-off-by: Joe Perches <joe(a)perches.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: Marc Zyngier <marc.zyngier(a)arm.com>
Link: https://lkml.kernel.org/r/ab5deb4fc3cd604cb620054770b7d00016d736bc.15627348…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/irqchip/irq-gic-v3-its.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 4a9c14f..860f3ef 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -215,7 +215,7 @@ static struct its_collection *dev_event_to_col(struct its_device *its_dev,
static struct its_collection *valid_col(struct its_collection *col)
{
- if (WARN_ON_ONCE(col->target_address & GENMASK_ULL(0, 15)))
+ if (WARN_ON_ONCE(col->target_address & GENMASK_ULL(15, 0)))
return NULL;
return col;
--
1.8.3
From: Yu'an Wang <wangyuan46(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
1. We delete sec_usr_if.h, move the SEC hardware structure definitions
into sec_crypto.h and normalize two structure types.
2. In sec_main.c, we remove fusion_limit/fusion_time, because this part
of the logic ended up unused. We also simplify the debugfs logic by not
checking some return codes, since they do not affect driver loading.
The probe flow is also optimized: sec_iommu_used_check is added,
sec_probe_init is modified, sec_qm_pre_init is implemented, and so on.
3. In sec.h, we define the sec_ctx structure, which holds the queue/
cipher/request related state.
4. In sec_crypto.c, we encapsulate independent interfaces, such as
init/uninit/map/unmap/callback/alloc resource/free resource/encrypt/
decrypt/fill hardware descriptor/set key, which removes the fusion
logic and makes algorithms easy to extend (see the sketch after this
list). Meanwhile, we remove DES algorithm support because of its weak
keys.
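A minimal sketch of the operation-table dispatch described in point 4
(the sec_process() name and the exact flow here are simplifications for
illustration, not the driver's actual code):
/* Schematic request path: every step goes through the per-algorithm
 * callbacks collected in struct sec_req_op (see sec.h in this patch).
 */
static int sec_process(struct sec_ctx *ctx, struct sec_req *req)
{
	int ret;

	ret = ctx->req_op->buf_map(ctx, req);	/* DMA-map the SGLs */
	if (ret)
		return ret;

	ctx->req_op->do_transfer(ctx, req);	/* e.g. copy the IV in */

	ret = ctx->req_op->bd_fill(ctx, req);	/* build the hardware BD */
	if (ret)
		goto err_unmap;

	ret = ctx->req_op->bd_send(ctx, req);	/* queue it to hardware */
	if (ret)
		goto err_unmap;

	return -EINPROGRESS;	/* async completion via callback */

err_unmap:
	ctx->req_op->buf_unmap(ctx, req);
	return ret;
}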
Signed-off-by: Yu'an Wang <wangyuan46(a)huawei.com>
Reviewed-by: Cheng Hu <hucheng.hu(a)huawei.com>
Reviewed-by: Guangwei Zhou <zhouguangwei5(a)huawei.com>
Reviewed-by: Ye Kai <yekai13(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/crypto/hisilicon/qm.c | 3 +-
drivers/crypto/hisilicon/sec2/sec.h | 166 ++-
drivers/crypto/hisilicon/sec2/sec_crypto.c | 1770 +++++++++++-----------------
drivers/crypto/hisilicon/sec2/sec_crypto.h | 237 +++-
drivers/crypto/hisilicon/sec2/sec_main.c | 541 ++++-----
drivers/crypto/hisilicon/sec2/sec_usr_if.h | 179 ---
6 files changed, 1246 insertions(+), 1650 deletions(-)
delete mode 100644 drivers/crypto/hisilicon/sec2/sec_usr_if.h
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index d3429e7..8b49902 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -1389,6 +1389,7 @@ static int qm_sq_ctx_cfg(struct hisi_qp *qp, int qp_id, int pasid)
sqc->w13 = cpu_to_le16(QM_MK_SQC_W13(0, 1, qp->alg_type));
ret = qm_mb(qm, QM_MB_CMD_SQC, sqc_dma, qp_id, 0);
+
dma_unmap_single(dev, sqc_dma, sizeof(struct qm_sqc), DMA_TO_DEVICE);
kfree(sqc);
@@ -1598,7 +1599,7 @@ static int hisi_qm_stop_qp_nolock(struct hisi_qp *qp)
else
flush_work(&qp->qm->work);
- /* wait for increase used count in qp send and last poll qp finish */
+ /* waiting for increase used count in qp send and last poll qp finish */
udelay(WAIT_PERIOD);
if (unlikely(qp->is_resetting && atomic_read(&qp->qp_status.used)))
qp_stop_fail_cb(qp);
diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
index f85dd06..e3b581a 100644
--- a/drivers/crypto/hisilicon/sec2/sec.h
+++ b/drivers/crypto/hisilicon/sec2/sec.h
@@ -1,19 +1,124 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/* Copyright (c) 2018-2019 HiSilicon Limited. */
-#ifndef HISI_SEC_H
-#define HISI_SEC_H
+#ifndef __HISI_SEC_V2_H
+#define __HISI_SEC_V2_H
#include <linux/list.h>
+
#include "../qm.h"
-#include "sec_usr_if.h"
+#include "sec_crypto.h"
+
+/* Algorithm resource per hardware SEC queue */
+struct sec_alg_res {
+ u8 *pbuf;
+ dma_addr_t pbuf_dma;
+ u8 *c_ivin;
+ dma_addr_t c_ivin_dma;
+ u8 *out_mac;
+ dma_addr_t out_mac_dma;
+};
+
+/* Cipher request of SEC private */
+struct sec_cipher_req {
+ struct hisi_acc_hw_sgl *c_in;
+ dma_addr_t c_in_dma;
+ struct hisi_acc_hw_sgl *c_out;
+ dma_addr_t c_out_dma;
+ u8 *c_ivin;
+ dma_addr_t c_ivin_dma;
+ struct skcipher_request *sk_req;
+ u32 c_len;
+ bool encrypt;
+};
+
+/* SEC request of Crypto */
+struct sec_req {
+ struct sec_sqe sec_sqe;
+ struct sec_ctx *ctx;
+ struct sec_qp_ctx *qp_ctx;
+
+ struct sec_cipher_req c_req;
+
+ int err_type;
+ int req_id;
+
+ /* Status of the SEC request */
+ bool fake_busy;
+};
+
+/**
+ * struct sec_req_op - Operations for SEC request
+ * @buf_map: DMA map the SGL buffers of the request
+ * @buf_unmap: DMA unmap the SGL buffers of the request
+ * @bd_fill: Fill the SEC queue BD
+ * @bd_send: Send the SEC BD into the hardware queue
+ * @callback: Call back for the request
+ * @process: Main processing logic of Skcipher
+ */
+struct sec_req_op {
+ int (*buf_map)(struct sec_ctx *ctx, struct sec_req *req);
+ void (*buf_unmap)(struct sec_ctx *ctx, struct sec_req *req);
+ void (*do_transfer)(struct sec_ctx *ctx, struct sec_req *req);
+ int (*bd_fill)(struct sec_ctx *ctx, struct sec_req *req);
+ int (*bd_send)(struct sec_ctx *ctx, struct sec_req *req);
+ void (*callback)(struct sec_ctx *ctx, struct sec_req *req, int err);
+ int (*process)(struct sec_ctx *ctx, struct sec_req *req);
+};
+
+/* SEC cipher context which cipher's relatives */
+struct sec_cipher_ctx {
+ u8 *c_key;
+ dma_addr_t c_key_dma;
+ sector_t iv_offset;
+ u32 c_gran_size;
+ u32 ivsize;
+ u8 c_mode;
+ u8 c_alg;
+ u8 c_key_len;
+};
-#undef pr_fmt
-#define pr_fmt(fmt) "hisi_sec: " fmt
+/* SEC queue context which defines queue's relatives */
+struct sec_qp_ctx {
+ struct hisi_qp *qp;
+ struct sec_req *req_list[QM_Q_DEPTH];
+ struct idr req_idr;
+ struct sec_alg_res res[QM_Q_DEPTH];
+ struct sec_ctx *ctx;
+ struct mutex req_lock;
+ struct hisi_acc_sgl_pool *c_in_pool;
+ struct hisi_acc_sgl_pool *c_out_pool;
+ atomic_t pending_reqs;
+};
+
+enum sec_alg_type {
+ SEC_SKCIPHER,
+ SEC_AEAD
+};
+
+/* SEC Crypto TFM context holding queue, cipher, etc. related data */
+struct sec_ctx {
+ struct sec_qp_ctx *qp_ctx;
+ struct sec_dev *sec;
+ const struct sec_req_op *req_op;
+ struct hisi_qp **qps;
+
+ /* Half of the queues for encryption, and half for decryption */
+ u32 hlf_q_num;
+
+ /* Threshold for fake busy: above it, -EBUSY is returned to the user */
+ u32 fake_req_limit;
+
+ /* Current cyclic index used to select a queue for encryption */
+ atomic_t enc_qcyclic;
+
+ /* Current cyclic index used to select a queue for decryption */
+ atomic_t dec_qcyclic;
-#define FUSION_LIMIT_DEF 1
-#define FUSION_LIMIT_MAX 64
-#define FUSION_TMOUT_NSEC_DEF (400 * 1000)
+ enum sec_alg_type alg_type;
+ bool pbuf_supported;
+ bool use_pbuf;
+ struct sec_cipher_ctx c_ctx;
+};
enum sec_endian {
SEC_LE = 0,
@@ -21,32 +126,37 @@ enum sec_endian {
SEC_64BE
};
-struct hisi_sec_ctrl;
+enum sec_debug_file_index {
+ SEC_CURRENT_QM,
+ SEC_CLEAR_ENABLE,
+ SEC_DEBUG_FILE_NUM,
+};
+
+struct sec_debug_file {
+ enum sec_debug_file_index index;
+ spinlock_t lock;
+ struct hisi_qm *qm;
+};
-struct hisi_sec_dfx {
- u64 send_cnt;
- u64 send_by_tmout;
- u64 send_by_full;
- u64 recv_cnt;
- u64 get_task_cnt;
- u64 put_task_cnt;
- u64 gran_task_cnt;
- u64 thread_cnt;
- u64 fake_busy_cnt;
- u64 busy_comp_cnt;
+struct sec_dfx {
+ atomic64_t send_cnt;
+ atomic64_t recv_cnt;
};
-struct hisi_sec {
+struct sec_debug {
+ struct sec_dfx dfx;
+ struct sec_debug_file files[SEC_DEBUG_FILE_NUM];
+};
+
+struct sec_dev {
struct hisi_qm qm;
- struct hisi_sec_dfx sec_dfx;
- struct hisi_sec_ctrl *ctrl;
- int ctx_q_num;
- int fusion_limit;
- int fusion_tmout_nsec;
+ struct sec_debug debug;
+ u32 ctx_q_num;
+ bool iommu_used;
};
void sec_destroy_qps(struct hisi_qp **qps, int qp_num);
struct hisi_qp **sec_create_qps(void);
-struct hisi_sec *find_sec_device(int node);
-
+int sec_register_to_crypto(void);
+void sec_unregister_from_crypto(void);
#endif
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
index 0643955..52448d0 100644
--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
+++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
@@ -1,384 +1,329 @@
// SPDX-License-Identifier: GPL-2.0+
/* Copyright (c) 2018-2019 HiSilicon Limited. */
-#include <linux/crypto.h>
-#include <linux/hrtimer.h>
-#include <linux/dma-mapping.h>
-#include <linux/ktime.h>
-
#include <crypto/aes.h>
#include <crypto/algapi.h>
#include <crypto/des.h>
#include <crypto/skcipher.h>
#include <crypto/xts.h>
-#include <crypto/internal/skcipher.h>
+#include <linux/crypto.h>
+#include <linux/dma-mapping.h>
+#include <linux/idr.h>
#include "sec.h"
#include "sec_crypto.h"
-static atomic_t sec_active_devs;
-
-#define SEC_ASYNC
-
-#define SEC_INVLD_REQ_ID (-1)
-#define SEC_PRIORITY 4001
-#define SEC_XTS_MIN_KEY_SIZE (2 * AES_MIN_KEY_SIZE)
-#define SEC_XTS_MAX_KEY_SIZE (2 * AES_MAX_KEY_SIZE)
-#define SEC_DES3_2KEY_SIZE (2 * DES_KEY_SIZE)
-#define SEC_DES3_3KEY_SIZE (3 * DES_KEY_SIZE)
-
-#define BUF_MAP_PER_SGL 64
-#define SEC_FUSION_BD
-
-enum C_ALG {
- C_ALG_DES = 0x0,
- C_ALG_3DES = 0x1,
- C_ALG_AES = 0x2,
- C_ALG_SM4 = 0x3,
-};
-
-enum C_MODE {
- C_MODE_ECB = 0x0,
- C_MODE_CBC = 0x1,
- C_MODE_CTR = 0x4,
- C_MODE_CCM = 0x5,
- C_MODE_GCM = 0x6,
- C_MODE_XTS = 0x7,
- C_MODE_CBC_CS = 0x9,
-};
-
-enum CKEY_LEN {
- CKEY_LEN_128_BIT = 0x0,
- CKEY_LEN_192_BIT = 0x1,
- CKEY_LEN_256_BIT = 0x2,
- CKEY_LEN_DES = 0x1,
- CKEY_LEN_3DES_3KEY = 0x1,
- CKEY_LEN_3DES_2KEY = 0x3,
-};
-
-enum SEC_BD_TYPE {
- BD_TYPE1 = 0x1,
- BD_TYPE2 = 0x2,
-};
-
-enum SEC_CIPHER_TYPE {
- SEC_CIPHER_ENC = 0x1,
- SEC_CIPHER_DEC = 0x2,
-};
-
-enum SEC_ADDR_TYPE {
- PBUF = 0x0,
- SGL = 0x1,
- PRP = 0x2,
-};
-
-enum SEC_CI_GEN {
- CI_GEN_BY_ADDR = 0x0,
- CI_GEN_BY_LBA = 0X3,
-};
-
-enum SEC_SCENE {
- SCENE_IPSEC = 0x0,
- SCENE_STORAGE = 0x5,
-};
-
-enum {
- SEC_NO_FUSION = 0x0,
- SEC_IV_FUSION = 0x1,
- SEC_FUSION_BUTT
-};
-
-enum SEC_REQ_OPS_TYPE {
- SEC_OPS_SKCIPHER_ALG = 0x0,
- SEC_OPS_MULTI_IV = 0x1,
- SEC_OPS_BUTT
-};
-
-struct cipher_res {
- struct skcipher_request_ctx **sk_reqs;
- u8 *c_ivin;
- dma_addr_t c_ivin_dma;
- struct scatterlist *src;
- struct scatterlist *dst;
-};
-
-struct hisi_sec_cipher_req {
- struct hisi_acc_hw_sgl *c_in;
- dma_addr_t c_in_dma;
- struct hisi_acc_hw_sgl *c_out;
- dma_addr_t c_out_dma;
- u8 *c_ivin;
- dma_addr_t c_ivin_dma;
- struct skcipher_request *sk_req;
- struct scatterlist *src;
- struct scatterlist *dst;
- u32 c_len;
- u32 gran_num;
- u64 lba;
- bool encrypt;
-};
-
-struct hisi_sec_ctx;
-struct hisi_sec_qp_ctx;
-
-struct hisi_sec_req {
- struct hisi_sec_sqe sec_sqe;
- struct hisi_sec_ctx *ctx;
- struct hisi_sec_qp_ctx *qp_ctx;
- void **priv;
- struct hisi_sec_cipher_req c_req;
- ktime_t st_time;
- int err_type;
- int req_id;
- int req_cnt;
- int fusion_num;
- int fake_busy;
-};
+#define SEC_PRIORITY 4001
+#define SEC_XTS_MIN_KEY_SIZE (2 * AES_MIN_KEY_SIZE)
+#define SEC_XTS_MAX_KEY_SIZE (2 * AES_MAX_KEY_SIZE)
+#define SEC_DES3_2KEY_SIZE (2 * DES_KEY_SIZE)
+#define SEC_DES3_3KEY_SIZE (3 * DES_KEY_SIZE)
+
+/* Bit-field offsets and masks for the SEC sqe (BD) */
+#define SEC_DE_OFFSET 1
+#define SEC_CI_GEN_OFFSET 6
+#define SEC_CIPHER_OFFSET 4
+#define SEC_SCENE_OFFSET 3
+#define SEC_DST_SGL_OFFSET 2
+#define SEC_SRC_SGL_OFFSET 7
+#define SEC_CKEY_OFFSET 9
+#define SEC_CMODE_OFFSET 12
+#define SEC_AKEY_OFFSET 5
+#define SEC_AEAD_ALG_OFFSET 11
+#define SEC_AUTH_OFFSET 6
+
+#define SEC_FLAG_OFFSET 7
+#define SEC_FLAG_MASK 0x0780
+#define SEC_TYPE_MASK 0x0F
+#define SEC_DONE_MASK 0x0001
+
+#define SEC_TOTAL_IV_SZ (SEC_IV_SIZE * QM_Q_DEPTH)
+#define SEC_SGL_SGE_NR 128
+#define SEC_CTX_DEV(ctx) (&(ctx)->sec->qm.pdev->dev)
+#define SEC_CIPHER_AUTH 0xfe
+#define SEC_AUTH_CIPHER 0x1
+#define SEC_MAX_MAC_LEN 64
+#define SEC_MAX_AAD_LEN 65535
+#define SEC_TOTAL_MAC_SZ (SEC_MAX_MAC_LEN * QM_Q_DEPTH)
+
+#define SEC_PBUF_SZ 512
+#define SEC_PBUF_IV_OFFSET SEC_PBUF_SZ
+#define SEC_PBUF_MAC_OFFSET (SEC_PBUF_SZ + SEC_IV_SIZE)
+#define SEC_PBUF_PKG (SEC_PBUF_SZ + SEC_IV_SIZE + \
+ SEC_MAX_MAC_LEN * 2)
+#define SEC_PBUF_NUM (PAGE_SIZE / SEC_PBUF_PKG)
+#define SEC_PBUF_PAGE_NUM (QM_Q_DEPTH / SEC_PBUF_NUM)
+#define SEC_PBUF_LEFT_SZ (SEC_PBUF_PKG * (QM_Q_DEPTH - \
+ SEC_PBUF_PAGE_NUM * SEC_PBUF_NUM))
+#define SEC_TOTAL_PBUF_SZ (PAGE_SIZE * SEC_PBUF_PAGE_NUM + \
+ SEC_PBUF_LEFT_SZ)
+
+#define SEC_SQE_LEN_RATE 4
+#define SEC_SQE_CFLAG 2
+#define SEC_SQE_AEAD_FLAG 3
+#define SEC_SQE_DONE 0x1
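
The type2 BD is built entirely from these shifted fields: type_cipher_auth
packs the BD type into its low nibble and the cipher direction at
SEC_CIPHER_OFFSET, while the hardware completion word is parsed with
SEC_DONE_MASK and SEC_FLAG_MASK. A standalone sketch of the same packing,
assuming SEC_BD_TYPE2 = 0x2 and SEC_CIPHER_ENC = 0x1 as defined in
sec_crypto.h:

  #include <stdint.h>
  #include <stdio.h>

  #define SEC_CIPHER_OFFSET	4
  #define SEC_FLAG_OFFSET	7
  #define SEC_FLAG_MASK		0x0780
  #define SEC_TYPE_MASK		0x0F
  #define SEC_DONE_MASK		0x0001
  #define SEC_BD_TYPE2		0x2	/* assumed, from sec_crypto.h */
  #define SEC_CIPHER_ENC	0x1	/* assumed, from sec_crypto.h */

  int main(void)
  {
  	uint8_t type_cipher_auth = SEC_BD_TYPE2 |
  				   (SEC_CIPHER_ENC << SEC_CIPHER_OFFSET);
  	uint16_t done_flag = 0x0101;	/* sample completion word */

  	printf("bd type: %u\n",
  	       (unsigned)(type_cipher_auth & SEC_TYPE_MASK));
  	printf("done: %u, flag: %u\n",
  	       (unsigned)(done_flag & SEC_DONE_MASK),
  	       (unsigned)((done_flag & SEC_FLAG_MASK) >> SEC_FLAG_OFFSET));
  	return 0;
  }
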
-struct hisi_sec_req_op {
- int fusion_type;
- int (*get_res)(struct hisi_sec_ctx *ctx, struct hisi_sec_req *req);
- int (*queue_alloc)(struct hisi_sec_ctx *ctx,
- struct hisi_sec_qp_ctx *qp_ctx);
- int (*queue_free)(struct hisi_sec_ctx *ctx,
- struct hisi_sec_qp_ctx *qp_ctx);
- int (*buf_map)(struct hisi_sec_ctx *ctx, struct hisi_sec_req *req);
- int (*buf_unmap)(struct hisi_sec_ctx *ctx, struct hisi_sec_req *req);
- int (*do_transfer)(struct hisi_sec_ctx *ctx, struct hisi_sec_req *req);
- int (*bd_fill)(struct hisi_sec_ctx *ctx, struct hisi_sec_req *req);
- int (*bd_send)(struct hisi_sec_ctx *ctx, struct hisi_sec_req *req);
- int (*callback)(struct hisi_sec_ctx *ctx, struct hisi_sec_req *req);
-};
-
-struct hisi_sec_cipher_ctx {
- u8 *c_key;
- dma_addr_t c_key_dma;
- sector_t iv_offset;
- u32 c_gran_size;
- u8 c_mode;
- u8 c_alg;
- u8 c_key_len;
-};
-
-struct hisi_sec_qp_ctx {
- struct hisi_qp *qp;
- struct hisi_sec_req **req_list;
- struct hisi_sec_req *fusion_req;
- unsigned long *req_bitmap;
- void *priv_req_res;
- struct hisi_sec_ctx *ctx;
- struct mutex req_lock;
- atomic_t req_cnt;
- struct hisi_sec_sqe *sqe_list;
- struct hisi_acc_sgl_pool *c_in_pool;
- struct hisi_acc_sgl_pool *c_out_pool;
- int fusion_num;
- int fusion_limit;
-};
+static atomic_t sec_active_devs;
-struct hisi_sec_ctx {
- struct hisi_sec_qp_ctx *qp_ctx;
- struct hisi_sec *sec;
- struct device *dev;
- struct hisi_sec_req_op *req_op;
- struct hisi_qp **qps;
- struct hrtimer timer;
- struct work_struct work;
- atomic_t thread_cnt;
- int req_fake_limit;
- int req_limit;
- int q_num;
- int enc_q_num;
- atomic_t enc_qid;
- atomic_t dec_qid;
- struct hisi_sec_cipher_ctx c_ctx;
- int fusion_tmout_nsec;
- int fusion_limit;
- u64 enc_fusion_num;
- u64 dec_fusion_num;
- bool is_fusion;
-};
+/* Cyclically pick an en/de-cipher queue to balance the load across the TFM's queues */
+static inline int sec_alloc_queue_id(struct sec_ctx *ctx, struct sec_req *req)
+{
+ if (req->c_req.encrypt)
+ return atomic_inc_return(&ctx->enc_qcyclic) % ctx->hlf_q_num;
-#define DES_WEAK_KEY_NUM 4
-u64 des_weak_key[DES_WEAK_KEY_NUM] = {0x0101010101010101, 0xFEFEFEFEFEFEFEFE,
- 0xE0E0E0E0F1F1F1F1, 0x1F1F1F1F0E0E0E0E};
+ return atomic_inc_return(&ctx->dec_qcyclic) % ctx->hlf_q_num +
+ ctx->hlf_q_num;
+}
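
With the queues of a TFM split in half, the first hlf_q_num serve
encryption and the rest decryption; the atomic counters simply round-robin
within each half. A userspace illustration of the same arithmetic,
assuming hlf_q_num = 4 (i.e. ctx_q_num = 8):

  #include <stdio.h>

  int main(void)
  {
  	int enc_qcyclic = 0, dec_qcyclic = 0, hlf_q_num = 4, i;

  	for (i = 0; i < 6; i++)
  		printf("enc req %d -> queue %d\n", i,
  		       ++enc_qcyclic % hlf_q_num);
  	for (i = 0; i < 6; i++)
  		printf("dec req %d -> queue %d\n", i,
  		       ++dec_qcyclic % hlf_q_num + hlf_q_num);
  	return 0;
  }
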
-static void hisi_sec_req_cb(struct hisi_qp *qp, void *);
+static inline void sec_free_queue_id(struct sec_ctx *ctx, struct sec_req *req)
+{
+ if (req->c_req.encrypt)
+ atomic_dec(&ctx->enc_qcyclic);
+ else
+ atomic_dec(&ctx->dec_qcyclic);
+}
-static int hisi_sec_alloc_req_id(struct hisi_sec_req *req,
- struct hisi_sec_qp_ctx *qp_ctx)
+static int sec_alloc_req_id(struct sec_req *req, struct sec_qp_ctx *qp_ctx)
{
- struct hisi_sec_ctx *ctx = req->ctx;
int req_id;
- req_id = find_first_zero_bit(qp_ctx->req_bitmap, ctx->req_limit);
- if (req_id >= ctx->req_limit || req_id < 0) {
- dev_err(ctx->dev, "no free req id\n");
- return -ENOBUFS;
+ mutex_lock(&qp_ctx->req_lock);
+
+ req_id = idr_alloc_cyclic(&qp_ctx->req_idr, NULL,
+ 0, QM_Q_DEPTH, GFP_ATOMIC);
+ mutex_unlock(&qp_ctx->req_lock);
+ if (unlikely(req_id < 0)) {
+ dev_err(SEC_CTX_DEV(req->ctx), "alloc req id fail!\n");
+ return req_id;
}
- set_bit(req_id, qp_ctx->req_bitmap);
- qp_ctx->req_list[req_id] = req;
- req->req_id = req_id;
req->qp_ctx = qp_ctx;
-
- return 0;
+ qp_ctx->req_list[req_id] = req;
+ return req_id;
}
-static void hisi_sec_free_req_id(struct hisi_sec_qp_ctx *qp_ctx, int req_id)
+static void sec_free_req_id(struct sec_req *req)
{
- if (req_id < 0 || req_id >= qp_ctx->ctx->req_limit) {
- pr_err("invalid req_id[%d]\n", req_id);
+ struct sec_qp_ctx *qp_ctx = req->qp_ctx;
+ int req_id = req->req_id;
+
+ if (unlikely(req_id < 0 || req_id >= QM_Q_DEPTH)) {
+ dev_err(SEC_CTX_DEV(req->ctx), "free request id invalid!\n");
return;
}
qp_ctx->req_list[req_id] = NULL;
+ req->qp_ctx = NULL;
mutex_lock(&qp_ctx->req_lock);
- clear_bit(req_id, qp_ctx->req_bitmap);
- atomic_dec(&qp_ctx->req_cnt);
+ idr_remove(&qp_ctx->req_idr, req_id);
mutex_unlock(&qp_ctx->req_lock);
}
-static int sec_request_transfer(struct hisi_sec_ctx *, struct hisi_sec_req *);
-static int sec_request_send(struct hisi_sec_ctx *, struct hisi_sec_req *);
-
-void qp_ctx_work_process(struct hisi_sec_qp_ctx *qp_ctx)
+static void sec_req_cb(struct hisi_qp *qp, void *resp)
{
- struct hisi_sec_req *req;
- struct hisi_sec_ctx *ctx;
- ktime_t cur_time = ktime_get();
- int ret;
-
- mutex_lock(&qp_ctx->req_lock);
-
- req = qp_ctx->fusion_req;
- if (req == NULL) {
- mutex_unlock(&qp_ctx->req_lock);
+ struct sec_qp_ctx *qp_ctx = qp->qp_ctx;
+ struct sec_sqe *bd = resp;
+ struct sec_ctx *ctx;
+ struct sec_req *req;
+ u16 done, flag;
+ int err = 0;
+ u8 type;
+
+ type = bd->type_cipher_auth & SEC_TYPE_MASK;
+ if (unlikely(type != SEC_BD_TYPE2)) {
+ pr_err("err bd type [%d]\n", type);
return;
}
- ctx = req->ctx;
- if (ctx == NULL || req->fusion_num == qp_ctx->fusion_limit) {
- mutex_unlock(&qp_ctx->req_lock);
+ req = qp_ctx->req_list[le16_to_cpu(bd->type2.tag)];
+ if (unlikely(!req)) {
+ atomic_inc(&qp->qp_status.used);
return;
}
- if (cur_time - qp_ctx->fusion_req->st_time < ctx->fusion_tmout_nsec) {
- mutex_unlock(&qp_ctx->req_lock);
- return;
+ req->err_type = bd->type2.error_type;
+ ctx = req->ctx;
+ done = le16_to_cpu(bd->type2.done_flag) & SEC_DONE_MASK;
+ flag = (le16_to_cpu(bd->type2.done_flag) &
+ SEC_FLAG_MASK) >> SEC_FLAG_OFFSET;
+ if (unlikely(req->err_type || done != SEC_SQE_DONE ||
+ (ctx->alg_type == SEC_SKCIPHER && flag != SEC_SQE_CFLAG))) {
+ dev_err_ratelimited(SEC_CTX_DEV(ctx),
+ "err_type[%d],done[%d],flag[%d]\n",
+ req->err_type, done, flag);
+ err = -EIO;
}
- qp_ctx->fusion_req = NULL;
+ atomic64_inc(&ctx->sec->debug.dfx.recv_cnt);
+
+ ctx->req_op->buf_unmap(ctx, req);
+
+ ctx->req_op->callback(ctx, req, err);
+}
+
+static int sec_bd_send(struct sec_ctx *ctx, struct sec_req *req)
+{
+ struct sec_qp_ctx *qp_ctx = req->qp_ctx;
+ int ret;
+ mutex_lock(&qp_ctx->req_lock);
+ ret = hisi_qp_send(qp_ctx->qp, &req->sec_sqe);
mutex_unlock(&qp_ctx->req_lock);
+ atomic64_inc(&ctx->sec->debug.dfx.send_cnt);
- ret = sec_request_transfer(ctx, req);
- if (ret)
- goto err_free_req;
-
- ret = sec_request_send(ctx, req);
- __sync_add_and_fetch(&ctx->sec->sec_dfx.send_by_tmout, 1);
- if (ret != -EBUSY && ret != -EINPROGRESS) {
- dev_err(ctx->dev, "[%s][%d] ret[%d]\n", __func__,
- __LINE__, ret);
- goto err_unmap_req;
- }
+ if (unlikely(ret == -EBUSY))
+ return -ENOBUFS;
- return;
+ if (!ret) {
+ if (req->fake_busy)
+ ret = -EBUSY;
+ else
+ ret = -EINPROGRESS;
+ }
-err_unmap_req:
- ctx->req_op->buf_unmap(ctx, req);
-err_free_req:
- hisi_sec_free_req_id(qp_ctx, req->req_id);
- atomic_dec(&ctx->thread_cnt);
+ return ret;
}
-void ctx_work_process(struct work_struct *work)
+/* Get DMA memory resources */
+static int sec_alloc_civ_resource(struct device *dev, struct sec_alg_res *res)
{
- struct hisi_sec_ctx *ctx;
int i;
- ctx = container_of(work, struct hisi_sec_ctx, work);
- for (i = 0; i < ctx->q_num; i++)
- qp_ctx_work_process(&ctx->qp_ctx[i]);
+ res->c_ivin = dma_alloc_coherent(dev, SEC_TOTAL_IV_SZ,
+ &res->c_ivin_dma, GFP_KERNEL);
+ if (!res->c_ivin)
+ return -ENOMEM;
+
+ for (i = 1; i < QM_Q_DEPTH; i++) {
+ res[i].c_ivin_dma = res->c_ivin_dma + i * SEC_IV_SIZE;
+ res[i].c_ivin = res->c_ivin + i * SEC_IV_SIZE;
+ }
+
+ return 0;
}
-static enum hrtimer_restart hrtimer_handler(struct hrtimer *timer)
+static void sec_free_civ_resource(struct device *dev, struct sec_alg_res *res)
{
- struct hisi_sec_ctx *ctx;
- ktime_t tim;
+ if (res->c_ivin)
+ dma_free_coherent(dev, SEC_TOTAL_IV_SZ,
+ res->c_ivin, res->c_ivin_dma);
+}
- ctx = container_of(timer, struct hisi_sec_ctx, timer);
- tim = ktime_set(0, ctx->fusion_tmout_nsec);
+static void sec_free_pbuf_resource(struct device *dev, struct sec_alg_res *res)
+{
+ if (res->pbuf)
+ dma_free_coherent(dev, SEC_TOTAL_PBUF_SZ,
+ res->pbuf, res->pbuf_dma);
+}
- if (ctx->sec->qm.wq)
- queue_work(ctx->sec->qm.wq, &ctx->work);
- else
- schedule_work(&ctx->work);
+/*
+ * To improve performance, a pbuffer is used for
+ * small packets (< 576 bytes) when the IOMMU is in use.
+ */
+static int sec_alloc_pbuf_resource(struct device *dev, struct sec_alg_res *res)
+{
+ int pbuf_page_offset;
+ int i, j, k;
- hrtimer_forward(timer, timer->base->get_time(), tim);
+ res->pbuf = dma_alloc_coherent(dev, SEC_TOTAL_PBUF_SZ, &res->pbuf_dma,
+ GFP_KERNEL);
+ if (!res->pbuf)
+ return -ENOMEM;
- return HRTIMER_RESTART;
+ /*
+ * A SEC_PBUF_PKG holds the data pbuf, iv and
+ * out_mac: <SEC_PBUF|SEC_IV|SEC_MAC>
+ * Each PAGE holds SEC_PBUF_NUM of these packages and the
+ * sec_qp_ctx needs QM_Q_DEPTH of them, so SEC_PBUF_PAGE_NUM
+ * full pages plus SEC_PBUF_LEFT_SZ leftover bytes make up
+ * SEC_TOTAL_PBUF_SZ.
+ */
+ for (i = 0; i <= SEC_PBUF_PAGE_NUM; i++) {
+ pbuf_page_offset = PAGE_SIZE * i;
+ for (j = 0; j < SEC_PBUF_NUM; j++) {
+ k = i * SEC_PBUF_NUM + j;
+ if (k == QM_Q_DEPTH)
+ break;
+ res[k].pbuf = res->pbuf +
+ j * SEC_PBUF_PKG + pbuf_page_offset;
+ res[k].pbuf_dma = res->pbuf_dma +
+ j * SEC_PBUF_PKG + pbuf_page_offset;
+ }
+ }
+ return 0;
}
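
Plugging concrete values into the sizing macros makes the layout easier to
follow. A standalone check, assuming PAGE_SIZE = 4096, SEC_IV_SIZE = 24 and
QM_Q_DEPTH = 1024 (the latter two are defined elsewhere in the driver
sources):

  #include <stdio.h>

  int main(void)
  {
  	int page = 4096, iv = 24, mac = 64, depth = 1024;
  	int pkg = 512 + iv + mac * 2;		/* SEC_PBUF_PKG = 664 */
  	int num = page / pkg;			/* SEC_PBUF_NUM = 6 pkgs/page */
  	int pages = depth / num;		/* SEC_PBUF_PAGE_NUM = 170 */
  	int left = pkg * (depth - pages * num);	/* SEC_PBUF_LEFT_SZ = 2656 */

  	printf("SEC_TOTAL_PBUF_SZ = %d bytes\n", page * pages + left);
  	return 0;
  }
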
-static int hisi_sec_create_qp_ctx(struct hisi_sec_ctx *ctx,
- int qp_ctx_id, int req_type)
+static int sec_alg_resource_alloc(struct sec_ctx *ctx,
+ struct sec_qp_ctx *qp_ctx)
{
- struct hisi_sec_qp_ctx *qp_ctx;
- struct device *dev = ctx->dev;
- struct hisi_qp *qp;
+ struct device *dev = SEC_CTX_DEV(ctx);
+ struct sec_alg_res *res = qp_ctx->res;
int ret;
+ ret = sec_alloc_civ_resource(dev, res);
+ if (ret)
+ return ret;
+
+ if (ctx->pbuf_supported) {
+ ret = sec_alloc_pbuf_resource(dev, res);
+ if (ret) {
+ dev_err(dev, "fail to alloc pbuf dma resource!\n");
+ goto alloc_fail;
+ }
+ }
+ return 0;
+alloc_fail:
+ sec_free_civ_resource(dev, res);
+
+ return ret;
+}
+
+static void sec_alg_resource_free(struct sec_ctx *ctx,
+ struct sec_qp_ctx *qp_ctx)
+{
+ struct device *dev = SEC_CTX_DEV(ctx);
+
+ sec_free_civ_resource(dev, qp_ctx->res);
+
+ if (ctx->pbuf_supported)
+ sec_free_pbuf_resource(dev, qp_ctx->res);
+}
+
+static int sec_create_qp_ctx(struct sec_ctx *ctx, int qp_ctx_id, int alg_type)
+{
+ struct device *dev = SEC_CTX_DEV(ctx);
+ struct sec_qp_ctx *qp_ctx;
+ struct hisi_qp *qp;
+ int ret = -ENOMEM;
+
qp_ctx = &ctx->qp_ctx[qp_ctx_id];
qp = ctx->qps[qp_ctx_id];
- qp->req_type = req_type;
+ qp->req_type = 0;
qp->qp_ctx = qp_ctx;
-#ifdef SEC_ASYNC
- qp->req_cb = hisi_sec_req_cb;
-#endif
+ qp->req_cb = sec_req_cb;
qp_ctx->qp = qp;
- qp_ctx->fusion_num = 0;
- qp_ctx->fusion_req = NULL;
- qp_ctx->fusion_limit = ctx->fusion_limit;
qp_ctx->ctx = ctx;
mutex_init(&qp_ctx->req_lock);
- atomic_set(&qp_ctx->req_cnt, 0);
-
- qp_ctx->req_bitmap = kcalloc(BITS_TO_LONGS(QM_Q_DEPTH), sizeof(long),
- GFP_ATOMIC);
- if (!qp_ctx->req_bitmap)
- return -ENOMEM;
-
- qp_ctx->req_list = kcalloc(QM_Q_DEPTH, sizeof(void *), GFP_ATOMIC);
- if (!qp_ctx->req_list) {
- ret = -ENOMEM;
- goto err_free_req_bitmap;
- }
-
- qp_ctx->sqe_list = kcalloc(ctx->fusion_limit,
- sizeof(struct hisi_sec_sqe), GFP_KERNEL);
- if (!qp_ctx->sqe_list) {
- ret = -ENOMEM;
- goto err_free_req_list;
- }
+ atomic_set(&qp_ctx->pending_reqs, 0);
+ idr_init(&qp_ctx->req_idr);
qp_ctx->c_in_pool = hisi_acc_create_sgl_pool(dev, QM_Q_DEPTH,
- FUSION_LIMIT_MAX);
+ SEC_SGL_SGE_NR);
if (IS_ERR(qp_ctx->c_in_pool)) {
- ret = PTR_ERR(qp_ctx->c_in_pool);
- goto err_free_sqe_list;
+ dev_err(dev, "fail to create sgl pool for input!\n");
+ goto err_destroy_idr;
}
qp_ctx->c_out_pool = hisi_acc_create_sgl_pool(dev, QM_Q_DEPTH,
- FUSION_LIMIT_MAX);
+ SEC_SGL_SGE_NR);
if (IS_ERR(qp_ctx->c_out_pool)) {
- ret = PTR_ERR(qp_ctx->c_out_pool);
+ dev_err(dev, "fail to create sgl pool for output!\n");
goto err_free_c_in_pool;
}
- ret = ctx->req_op->queue_alloc(ctx, qp_ctx);
+ ret = sec_alg_resource_alloc(ctx, qp_ctx);
if (ret)
goto err_free_c_out_pool;
@@ -389,304 +334,153 @@ static int hisi_sec_create_qp_ctx(struct hisi_sec_ctx *ctx,
return 0;
err_queue_free:
- ctx->req_op->queue_free(ctx, qp_ctx);
+ sec_alg_resource_free(ctx, qp_ctx);
err_free_c_out_pool:
hisi_acc_free_sgl_pool(dev, qp_ctx->c_out_pool);
err_free_c_in_pool:
hisi_acc_free_sgl_pool(dev, qp_ctx->c_in_pool);
-err_free_sqe_list:
- kfree(qp_ctx->sqe_list);
-err_free_req_list:
- kfree(qp_ctx->req_list);
-err_free_req_bitmap:
- kfree(qp_ctx->req_bitmap);
+err_destroy_idr:
+ idr_destroy(&qp_ctx->req_idr);
return ret;
}
-static void hisi_sec_release_qp_ctx(struct hisi_sec_ctx *ctx,
- struct hisi_sec_qp_ctx *qp_ctx)
+static void sec_release_qp_ctx(struct sec_ctx *ctx,
+ struct sec_qp_ctx *qp_ctx)
{
- struct device *dev = ctx->dev;
+ struct device *dev = SEC_CTX_DEV(ctx);
hisi_qm_stop_qp(qp_ctx->qp);
- ctx->req_op->queue_free(ctx, qp_ctx);
+ sec_alg_resource_free(ctx, qp_ctx);
+
hisi_acc_free_sgl_pool(dev, qp_ctx->c_out_pool);
hisi_acc_free_sgl_pool(dev, qp_ctx->c_in_pool);
- kfree(qp_ctx->req_bitmap);
- kfree(qp_ctx->req_list);
- kfree(qp_ctx->sqe_list);
-}
-
-static int __hisi_sec_ctx_init(struct hisi_sec_ctx *ctx, int qlen)
-{
- if (!ctx || qlen < 0)
- return -EINVAL;
-
- ctx->req_limit = qlen;
- ctx->req_fake_limit = qlen / 2;
- atomic_set(&ctx->thread_cnt, 0);
- atomic_set(&ctx->enc_qid, 0);
- atomic_set(&ctx->dec_qid, ctx->enc_q_num);
- if (ctx->fusion_limit > 1 && ctx->fusion_tmout_nsec > 0) {
- ktime_t tim = ktime_set(0, ctx->fusion_tmout_nsec);
-
- hrtimer_init(&ctx->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
- ctx->timer.function = hrtimer_handler;
- hrtimer_start(&ctx->timer, tim, HRTIMER_MODE_REL);
- INIT_WORK(&ctx->work, ctx_work_process);
- }
-
- return 0;
-}
-
-static void hisi_sec_get_fusion_param(struct hisi_sec_ctx *ctx,
- struct hisi_sec *sec)
-{
- if (ctx->is_fusion) {
- ctx->fusion_tmout_nsec = sec->fusion_tmout_nsec;
- ctx->fusion_limit = sec->fusion_limit;
- } else {
- ctx->fusion_tmout_nsec = 0;
- ctx->fusion_limit = 1;
- }
+ idr_destroy(&qp_ctx->req_idr);
}
-static int hisi_sec_cipher_ctx_init(struct crypto_skcipher *tfm)
+static int sec_ctx_base_init(struct sec_ctx *ctx)
{
- struct hisi_sec_ctx *ctx = crypto_skcipher_ctx(tfm);
- struct hisi_sec_cipher_ctx *c_ctx;
- struct hisi_sec *sec;
+ struct sec_dev *sec;
int i, ret;
- crypto_skcipher_set_reqsize(tfm, sizeof(struct hisi_sec_req));
-
ctx->qps = sec_create_qps();
if (!ctx->qps) {
pr_err("Can not create sec qps!\n");
return -ENODEV;
}
- sec = container_of(ctx->qps[0]->qm, struct hisi_sec, qm);
+ sec = container_of(ctx->qps[0]->qm, struct sec_dev, qm);
ctx->sec = sec;
+ ctx->hlf_q_num = sec->ctx_q_num >> 1;
- ctx->dev = &sec->qm.pdev->dev;
-
- ctx->q_num = sec->ctx_q_num;
-
- ctx->enc_q_num = ctx->q_num / 2;
- ctx->qp_ctx = kcalloc(ctx->q_num, sizeof(struct hisi_sec_qp_ctx),
- GFP_KERNEL);
- if (!ctx->qp_ctx) {
- dev_err(ctx->dev, "failed to alloc qp_ctx");
+ if (ctx->sec->iommu_used)
+ ctx->pbuf_supported = true;
+ else
+ ctx->pbuf_supported = false;
+ ctx->use_pbuf = false;
+
+ /* Half of the queue depth serves as the fake-busy request limit. */
+ ctx->fake_req_limit = QM_Q_DEPTH >> 1;
+ ctx->qp_ctx = kcalloc(sec->ctx_q_num, sizeof(struct sec_qp_ctx),
+ GFP_KERNEL);
+ if (!ctx->qp_ctx)
return -ENOMEM;
- }
-
- hisi_sec_get_fusion_param(ctx, sec);
- for (i = 0; i < ctx->q_num; i++) {
- ret = hisi_sec_create_qp_ctx(ctx, i, 0);
+ for (i = 0; i < sec->ctx_q_num; i++) {
+ ret = sec_create_qp_ctx(ctx, i, 0);
if (ret)
goto err_sec_release_qp_ctx;
}
-
- c_ctx = &ctx->c_ctx;
- c_ctx->c_key = dma_alloc_coherent(ctx->dev,
- SEC_MAX_KEY_SIZE, &c_ctx->c_key_dma, GFP_KERNEL);
-
- if (!ctx->c_ctx.c_key) {
- ret = -ENOMEM;
- goto err_sec_release_qp_ctx;
- }
-
- return __hisi_sec_ctx_init(ctx, QM_Q_DEPTH);
-
+ return 0;
err_sec_release_qp_ctx:
for (i = i - 1; i >= 0; i--)
- hisi_sec_release_qp_ctx(ctx, &ctx->qp_ctx[i]);
+ sec_release_qp_ctx(ctx, &ctx->qp_ctx[i]);
sec_destroy_qps(ctx->qps, sec->ctx_q_num);
kfree(ctx->qp_ctx);
+
return ret;
}
-static void hisi_sec_cipher_ctx_exit(struct crypto_skcipher *tfm)
+static void sec_ctx_base_uninit(struct sec_ctx *ctx)
{
- struct hisi_sec_ctx *ctx = crypto_skcipher_ctx(tfm);
- struct hisi_sec_cipher_ctx *c_ctx;
- int i = 0;
-
- c_ctx = &ctx->c_ctx;
-
- if (ctx->fusion_limit > 1 && ctx->fusion_tmout_nsec > 0)
- hrtimer_cancel(&ctx->timer);
-
- if (c_ctx->c_key) {
- memzero_explicit(c_ctx->c_key, SEC_MAX_KEY_SIZE);
- dma_free_coherent(ctx->dev, SEC_MAX_KEY_SIZE, c_ctx->c_key,
- c_ctx->c_key_dma);
- c_ctx->c_key = NULL;
- }
+ int i;
- for (i = 0; i < ctx->q_num; i++)
- hisi_sec_release_qp_ctx(ctx, &ctx->qp_ctx[i]);
+ for (i = 0; i < ctx->sec->ctx_q_num; i++)
+ sec_release_qp_ctx(ctx, &ctx->qp_ctx[i]);
- sec_destroy_qps(ctx->qps, ctx->q_num);
+ sec_destroy_qps(ctx->qps, ctx->sec->ctx_q_num);
kfree(ctx->qp_ctx);
}
-static int hisi_sec_skcipher_get_res(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req);
-static int hisi_sec_skcipher_queue_alloc(struct hisi_sec_ctx *ctx,
- struct hisi_sec_qp_ctx *qp_ctx);
-static int hisi_sec_skcipher_queue_free(struct hisi_sec_ctx *ctx,
- struct hisi_sec_qp_ctx *qp_ctx);
-static int hisi_sec_skcipher_buf_map(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req);
-static int hisi_sec_skcipher_buf_unmap(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req);
-static int hisi_sec_skcipher_copy_iv(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req);
-static int hisi_sec_skcipher_bd_fill_base(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req);
-static int hisi_sec_skcipher_bd_fill_storage(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req);
-static int hisi_sec_skcipher_bd_fill_multi_iv(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req);
-static int hisi_sec_bd_send_asyn(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req);
-static int hisi_sec_skcipher_callback(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req);
-
-struct hisi_sec_req_op sec_req_ops_tbl[] = {
- {
- .fusion_type = SEC_NO_FUSION,
- .get_res = hisi_sec_skcipher_get_res,
- .queue_alloc = hisi_sec_skcipher_queue_alloc,
- .queue_free = hisi_sec_skcipher_queue_free,
- .buf_map = hisi_sec_skcipher_buf_map,
- .buf_unmap = hisi_sec_skcipher_buf_unmap,
- .do_transfer = hisi_sec_skcipher_copy_iv,
- .bd_fill = hisi_sec_skcipher_bd_fill_base,
- .bd_send = hisi_sec_bd_send_asyn,
- .callback = hisi_sec_skcipher_callback,
- }, {
- .fusion_type = SEC_IV_FUSION,
- .get_res = hisi_sec_skcipher_get_res,
- .queue_alloc = hisi_sec_skcipher_queue_alloc,
- .queue_free = hisi_sec_skcipher_queue_free,
- .buf_map = hisi_sec_skcipher_buf_map,
- .buf_unmap = hisi_sec_skcipher_buf_unmap,
- .do_transfer = hisi_sec_skcipher_copy_iv,
- .bd_fill = hisi_sec_skcipher_bd_fill_multi_iv,
- .bd_send = hisi_sec_bd_send_asyn,
- .callback = hisi_sec_skcipher_callback,
- }
-};
-
-static int hisi_sec_cipher_ctx_init_alg(struct crypto_skcipher *tfm)
+static int sec_cipher_init(struct sec_ctx *ctx)
{
- struct hisi_sec_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;
- ctx->req_op = &sec_req_ops_tbl[SEC_OPS_SKCIPHER_ALG];
- ctx->is_fusion = ctx->req_op->fusion_type;
+ c_ctx->c_key = dma_alloc_coherent(SEC_CTX_DEV(ctx), SEC_MAX_KEY_SIZE,
+ &c_ctx->c_key_dma, GFP_KERNEL);
+ if (!c_ctx->c_key)
+ return -ENOMEM;
- return hisi_sec_cipher_ctx_init(tfm);
+ return 0;
}
-static int hisi_sec_cipher_ctx_init_multi_iv(struct crypto_skcipher *tfm)
+static void sec_cipher_uninit(struct sec_ctx *ctx)
{
- struct hisi_sec_ctx *ctx = crypto_skcipher_ctx(tfm);
-
- ctx->req_op = &sec_req_ops_tbl[SEC_OPS_MULTI_IV];
- ctx->is_fusion = ctx->req_op->fusion_type;
+ struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;
- return hisi_sec_cipher_ctx_init(tfm);
+ memzero_explicit(c_ctx->c_key, SEC_MAX_KEY_SIZE);
+ dma_free_coherent(SEC_CTX_DEV(ctx), SEC_MAX_KEY_SIZE,
+ c_ctx->c_key, c_ctx->c_key_dma);
}
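
The key wipe uses memzero_explicit() rather than memset() so the compiler
cannot elide the clear as a dead store just before the buffer is freed.
A minimal userspace analogue using explicit_bzero() (glibc 2.25+):

  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
  	unsigned char *key = malloc(32);

  	if (!key)
  		return 1;
  	/* ... key material is used here ... */
  	explicit_bzero(key, 32);	/* kernel: memzero_explicit() */
  	free(key);
  	return 0;
  }
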
-static void hisi_sec_req_cb(struct hisi_qp *qp, void *resp)
+static int sec_skcipher_init(struct crypto_skcipher *tfm)
{
- struct hisi_sec_sqe *sec_sqe = (struct hisi_sec_sqe *)resp;
- struct hisi_sec_qp_ctx *qp_ctx = qp->qp_ctx;
- struct device *dev = &qp->qm->pdev->dev;
- struct hisi_sec_req *req;
- struct hisi_sec_dfx *dfx;
- u32 req_id;
-
- if (sec_sqe->type == 1) {
- req_id = sec_sqe->type1.tag;
- req = qp_ctx->req_list[req_id];
-
- req->err_type = sec_sqe->type1.error_type;
- if (req->err_type || sec_sqe->type1.done != 0x1 ||
- sec_sqe->type1.flag != 0x2) {
- dev_err_ratelimited(dev,
- "err_type[%d] done[%d] flag[%d]\n",
- req->err_type,
- sec_sqe->type1.done,
- sec_sqe->type1.flag);
- }
- } else if (sec_sqe->type == 2) {
- req_id = sec_sqe->type2.tag;
- req = qp_ctx->req_list[req_id];
-
- req->err_type = sec_sqe->type2.error_type;
- if (req->err_type || sec_sqe->type2.done != 0x1 ||
- sec_sqe->type2.flag != 0x2) {
- dev_err_ratelimited(dev,
- "err_type[%d] done[%d] flag[%d]\n",
- req->err_type,
- sec_sqe->type2.done,
- sec_sqe->type2.flag);
- }
- } else {
- dev_err_ratelimited(dev, "err bd type [%d]\n", sec_sqe->type);
- return;
- }
-
- dfx = &req->ctx->sec->sec_dfx;
-
- req->ctx->req_op->buf_unmap(req->ctx, req);
- req->ctx->req_op->callback(req->ctx, req);
+ struct sec_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int ret;
- __sync_add_and_fetch(&dfx->recv_cnt, 1);
-}
+ ctx->alg_type = SEC_SKCIPHER;
+ crypto_skcipher_set_reqsize(tfm, sizeof(struct sec_req));
+ ctx->c_ctx.ivsize = crypto_skcipher_ivsize(tfm);
+ if (ctx->c_ctx.ivsize > SEC_IV_SIZE) {
+ dev_err(SEC_CTX_DEV(ctx), "get error skcipher iv size!\n");
+ return -EINVAL;
+ }
-static int sec_des_weak_key(const u64 *key, const u32 keylen)
-{
- int i;
+ ret = sec_ctx_base_init(ctx);
+ if (ret)
+ return ret;
- for (i = 0; i < DES_WEAK_KEY_NUM; i++)
- if (*key == des_weak_key[i])
- return 1;
+ ret = sec_cipher_init(ctx);
+ if (ret)
+ goto err_cipher_init;
return 0;
+err_cipher_init:
+ sec_ctx_base_uninit(ctx);
+
+ return ret;
}
-static int sec_skcipher_des_setkey(struct hisi_sec_cipher_ctx *c_ctx,
- const u32 keylen, const u8 *key)
+static void sec_skcipher_uninit(struct crypto_skcipher *tfm)
{
- if (keylen != DES_KEY_SIZE)
- return -EINVAL;
-
- if (sec_des_weak_key((const u64 *)key, keylen))
- return -EKEYREJECTED;
-
- c_ctx->c_key_len = CKEY_LEN_DES;
+ struct sec_ctx *ctx = crypto_skcipher_ctx(tfm);
- return 0;
+ sec_cipher_uninit(ctx);
+ sec_ctx_base_uninit(ctx);
}
-static int sec_skcipher_3des_setkey(struct hisi_sec_cipher_ctx *c_ctx,
- const u32 keylen, const enum C_MODE c_mode)
+static int sec_skcipher_3des_setkey(struct sec_cipher_ctx *c_ctx,
+ const u32 keylen,
+ const enum sec_cmode c_mode)
{
switch (keylen) {
case SEC_DES3_2KEY_SIZE:
- c_ctx->c_key_len = CKEY_LEN_3DES_2KEY;
+ c_ctx->c_key_len = SEC_CKEY_3DES_2KEY;
break;
case SEC_DES3_3KEY_SIZE:
- c_ctx->c_key_len = CKEY_LEN_3DES_3KEY;
+ c_ctx->c_key_len = SEC_CKEY_3DES_3KEY;
break;
default:
return -EINVAL;
@@ -695,32 +489,35 @@ static int sec_skcipher_3des_setkey(struct hisi_sec_cipher_ctx *c_ctx,
return 0;
}
-static int sec_skcipher_aes_sm4_setkey(struct hisi_sec_cipher_ctx *c_ctx,
- const u32 keylen, const enum C_MODE c_mode)
+static int sec_skcipher_aes_sm4_setkey(struct sec_cipher_ctx *c_ctx,
+ const u32 keylen,
+ const enum sec_cmode c_mode)
{
- if (c_mode == C_MODE_XTS) {
+ if (c_mode == SEC_CMODE_XTS) {
switch (keylen) {
case SEC_XTS_MIN_KEY_SIZE:
- c_ctx->c_key_len = CKEY_LEN_128_BIT;
+ c_ctx->c_key_len = SEC_CKEY_128BIT;
break;
case SEC_XTS_MAX_KEY_SIZE:
- c_ctx->c_key_len = CKEY_LEN_256_BIT;
+ c_ctx->c_key_len = SEC_CKEY_256BIT;
break;
default:
+ pr_err("hisi_sec2: xts mode key error!\n");
return -EINVAL;
}
} else {
switch (keylen) {
case AES_KEYSIZE_128:
- c_ctx->c_key_len = CKEY_LEN_128_BIT;
+ c_ctx->c_key_len = SEC_CKEY_128BIT;
break;
case AES_KEYSIZE_192:
- c_ctx->c_key_len = CKEY_LEN_192_BIT;
+ c_ctx->c_key_len = SEC_CKEY_192BIT;
break;
case AES_KEYSIZE_256:
- c_ctx->c_key_len = CKEY_LEN_256_BIT;
+ c_ctx->c_key_len = SEC_CKEY_256BIT;
break;
default:
+ pr_err("hisi_sec2: aes key error!\n");
return -EINVAL;
}
}
@@ -729,38 +526,40 @@ static int sec_skcipher_aes_sm4_setkey(struct hisi_sec_cipher_ctx *c_ctx,
}
static int sec_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
- const u32 keylen, const enum C_ALG c_alg, const enum C_MODE c_mode)
+ const u32 keylen, const enum sec_calg c_alg,
+ const enum sec_cmode c_mode)
{
- struct hisi_sec_ctx *ctx = crypto_skcipher_ctx(tfm);
- struct hisi_sec_cipher_ctx *c_ctx = &ctx->c_ctx;
+ struct sec_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;
int ret;
- if (c_mode == C_MODE_XTS) {
+ if (c_mode == SEC_CMODE_XTS) {
ret = xts_verify_key(tfm, key, keylen);
- if (ret)
+ if (ret) {
+ dev_err(SEC_CTX_DEV(ctx), "xts mode key err!\n");
return ret;
+ }
}
c_ctx->c_alg = c_alg;
c_ctx->c_mode = c_mode;
switch (c_alg) {
- case C_ALG_DES:
- ret = sec_skcipher_des_setkey(c_ctx, keylen, key);
- break;
- case C_ALG_3DES:
+ case SEC_CALG_3DES:
ret = sec_skcipher_3des_setkey(c_ctx, keylen, c_mode);
break;
- case C_ALG_AES:
- case C_ALG_SM4:
+ case SEC_CALG_AES:
+ case SEC_CALG_SM4:
ret = sec_skcipher_aes_sm4_setkey(c_ctx, keylen, c_mode);
break;
default:
return -EINVAL;
}
- if (ret)
+ if (ret) {
+ dev_err(SEC_CTX_DEV(ctx), "set sec key err!\n");
return ret;
+ }
memcpy(c_ctx->c_key, key, keylen);
@@ -769,639 +568,423 @@ static int sec_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
#define GEN_SEC_SETKEY_FUNC(name, c_alg, c_mode) \
static int sec_setkey_##name(struct crypto_skcipher *tfm, const u8 *key,\
- u32 keylen)\
+ u32 keylen) \
{ \
return sec_skcipher_setkey(tfm, key, keylen, c_alg, c_mode); \
}
-GEN_SEC_SETKEY_FUNC(aes_ecb, C_ALG_AES, C_MODE_ECB)
-GEN_SEC_SETKEY_FUNC(aes_cbc, C_ALG_AES, C_MODE_CBC)
-GEN_SEC_SETKEY_FUNC(sm4_cbc, C_ALG_SM4, C_MODE_CBC)
-
-GEN_SEC_SETKEY_FUNC(des_ecb, C_ALG_DES, C_MODE_ECB)
-GEN_SEC_SETKEY_FUNC(des_cbc, C_ALG_DES, C_MODE_CBC)
-GEN_SEC_SETKEY_FUNC(3des_ecb, C_ALG_3DES, C_MODE_ECB)
-GEN_SEC_SETKEY_FUNC(3des_cbc, C_ALG_3DES, C_MODE_CBC)
-
-GEN_SEC_SETKEY_FUNC(aes_xts, C_ALG_AES, C_MODE_XTS)
-GEN_SEC_SETKEY_FUNC(sm4_xts, C_ALG_SM4, C_MODE_XTS)
+GEN_SEC_SETKEY_FUNC(aes_ecb, SEC_CALG_AES, SEC_CMODE_ECB)
+GEN_SEC_SETKEY_FUNC(aes_cbc, SEC_CALG_AES, SEC_CMODE_CBC)
+GEN_SEC_SETKEY_FUNC(aes_xts, SEC_CALG_AES, SEC_CMODE_XTS)
-static int hisi_sec_get_async_ret(int ret, int req_cnt, int req_fake_limit)
-{
- if (ret == 0) {
- if (req_cnt >= req_fake_limit)
- ret = -EBUSY;
- else
- ret = -EINPROGRESS;
- } else {
- if (ret == -EBUSY)
- ret = -ENOBUFS;
- }
+GEN_SEC_SETKEY_FUNC(3des_ecb, SEC_CALG_3DES, SEC_CMODE_ECB)
+GEN_SEC_SETKEY_FUNC(3des_cbc, SEC_CALG_3DES, SEC_CMODE_CBC)
- return ret;
-}
+GEN_SEC_SETKEY_FUNC(sm4_xts, SEC_CALG_SM4, SEC_CMODE_XTS)
+GEN_SEC_SETKEY_FUNC(sm4_cbc, SEC_CALG_SM4, SEC_CMODE_CBC)
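
For reference, each GEN_SEC_SETKEY_FUNC() invocation above generates a
thin wrapper; GEN_SEC_SETKEY_FUNC(aes_cbc, SEC_CALG_AES, SEC_CMODE_CBC),
for example, expands to:

  static int sec_setkey_aes_cbc(struct crypto_skcipher *tfm, const u8 *key,
  			      u32 keylen)
  {
  	return sec_skcipher_setkey(tfm, key, keylen, SEC_CALG_AES,
  				   SEC_CMODE_CBC);
  }
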
-static int hisi_sec_skcipher_get_res(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
+static int sec_cipher_pbuf_map(struct sec_ctx *ctx, struct sec_req *req,
+ struct scatterlist *src)
{
- struct hisi_sec_cipher_req *c_req = &req->c_req;
- struct hisi_sec_qp_ctx *qp_ctx = req->qp_ctx;
- struct cipher_res *c_res = (struct cipher_res *)qp_ctx->priv_req_res;
+ struct sec_cipher_req *c_req = &req->c_req;
+ struct sec_qp_ctx *qp_ctx = req->qp_ctx;
+ struct device *dev = SEC_CTX_DEV(ctx);
+ int copy_size, pbuf_length;
int req_id = req->req_id;
- c_req->c_ivin = c_res[req_id].c_ivin;
- c_req->c_ivin_dma = c_res[req_id].c_ivin_dma;
- req->priv = (void **)c_res[req_id].sk_reqs;
- c_req->src = c_res[req_id].src;
- c_req->dst = c_res[req_id].dst;
-
- return 0;
-}
-
-static int hisi_sec_skcipher_queue_alloc(struct hisi_sec_ctx *ctx,
- struct hisi_sec_qp_ctx *qp_ctx)
-{
- struct cipher_res *c_res;
- int req_num = ctx->fusion_limit;
- int alloc_num = QM_Q_DEPTH * ctx->fusion_limit;
- int buf_map_num = QM_Q_DEPTH * ctx->fusion_limit;
- struct device *dev = ctx->dev;
- int i, ret;
-
- c_res = kcalloc(QM_Q_DEPTH, sizeof(struct cipher_res), GFP_KERNEL);
- if (!c_res)
- return -ENOMEM;
-
- qp_ctx->priv_req_res = (void *)c_res;
-
- c_res[0].sk_reqs = kcalloc(alloc_num,
- sizeof(struct skcipher_request_ctx *), GFP_KERNEL);
- if (!c_res[0].sk_reqs) {
- ret = -ENOMEM;
- goto err_free_c_res;
- }
-
- c_res[0].c_ivin = dma_alloc_coherent(dev,
- SEC_IV_SIZE * alloc_num, &c_res[0].c_ivin_dma, GFP_KERNEL);
- if (!c_res[0].c_ivin) {
- ret = -ENOMEM;
- goto err_free_sk_reqs;
- }
+ copy_size = c_req->c_len;
- c_res[0].src = kcalloc(buf_map_num, sizeof(struct scatterlist),
- GFP_KERNEL);
- if (!c_res[0].src) {
- ret = -ENOMEM;
- goto err_free_c_ivin;
+ pbuf_length = sg_copy_to_buffer(src, sg_nents(src),
+ qp_ctx->res[req_id].pbuf, copy_size);
+ if (unlikely(pbuf_length != copy_size)) {
+ dev_err(dev, "copy src data to pbuf error!\n");
+ return -EINVAL;
}
- c_res[0].dst = kcalloc(buf_map_num, sizeof(struct scatterlist),
- GFP_KERNEL);
- if (!c_res[0].dst) {
- ret = -ENOMEM;
- goto err_free_src;
+ c_req->c_in_dma = qp_ctx->res[req_id].pbuf_dma;
+ if (!c_req->c_in_dma) {
+ dev_err(dev, "fail to set pbuffer address!\n");
+ return -ENOMEM;
}
- for (i = 1; i < QM_Q_DEPTH; i++) {
- c_res[i].sk_reqs = c_res[0].sk_reqs + i * req_num;
- c_res[i].c_ivin = c_res[0].c_ivin
- + i * req_num * SEC_IV_SIZE;
- c_res[i].c_ivin_dma = c_res[0].c_ivin_dma
- + i * req_num * SEC_IV_SIZE;
- c_res[i].src = c_res[0].src + i * req_num;
- c_res[i].dst = c_res[0].dst + i * req_num;
- }
+ c_req->c_out_dma = c_req->c_in_dma;
return 0;
-
-err_free_src:
- kfree(c_res[0].src);
-err_free_c_ivin:
- dma_free_coherent(dev, SEC_IV_SIZE * alloc_num, c_res[0].c_ivin,
- c_res[0].c_ivin_dma);
-err_free_sk_reqs:
- kfree(c_res[0].sk_reqs);
-err_free_c_res:
- kfree(c_res);
-
- return ret;
}
-static int hisi_sec_skcipher_queue_free(struct hisi_sec_ctx *ctx,
- struct hisi_sec_qp_ctx *qp_ctx)
+static void sec_cipher_pbuf_unmap(struct sec_ctx *ctx, struct sec_req *req,
+ struct scatterlist *dst)
{
- struct cipher_res *c_res = (struct cipher_res *)qp_ctx->priv_req_res;
- struct device *dev = ctx->dev;
- int alloc_num = QM_Q_DEPTH * ctx->fusion_limit;
+ struct sec_cipher_req *c_req = &req->c_req;
+ struct sec_qp_ctx *qp_ctx = req->qp_ctx;
+ struct device *dev = SEC_CTX_DEV(ctx);
+ int copy_size, pbuf_length;
+ int req_id = req->req_id;
- kfree(c_res[0].dst);
- kfree(c_res[0].src);
- dma_free_coherent(dev, SEC_IV_SIZE * alloc_num, c_res[0].c_ivin,
- c_res[0].c_ivin_dma);
- kfree(c_res[0].sk_reqs);
- kfree(c_res);
+ copy_size = c_req->c_len;
- return 0;
+ pbuf_length = sg_copy_from_buffer(dst, sg_nents(dst),
+ qp_ctx->res[req_id].pbuf, copy_size);
+ if (unlikely(pbuf_length != copy_size))
+ dev_err(dev, "copy pbuf data to dst error!\n");
}
-static int hisi_sec_skcipher_buf_map(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
+static int sec_cipher_map(struct sec_ctx *ctx, struct sec_req *req,
+ struct scatterlist *src, struct scatterlist *dst)
{
- struct hisi_sec_cipher_req *c_req = &req->c_req;
- struct device *dev = ctx->dev;
- struct skcipher_request *sk_next;
- struct hisi_sec_qp_ctx *qp_ctx = req->qp_ctx;
- int src_nents, src_nents_sum, copyed_src_nents;
- int dst_nents, dst_nents_sum, copyed_dst_nents;
- int i, ret, buf_map_limit;
-
- src_nents_sum = 0;
- dst_nents_sum = 0;
- for (i = 0; i < req->fusion_num; i++) {
- sk_next = (struct skcipher_request *)req->priv[i];
- if (sk_next == NULL) {
- dev_err(ctx->dev, "nullptr at [%d]\n", i);
- return -EFAULT;
- }
- src_nents_sum += sg_nents(sk_next->src);
- dst_nents_sum += sg_nents(sk_next->dst);
- if (sk_next->src == sk_next->dst && i > 0) {
- dev_err(ctx->dev, "err: src == dst\n");
- return -EFAULT;
- }
- }
+ struct sec_cipher_req *c_req = &req->c_req;
+ struct sec_qp_ctx *qp_ctx = req->qp_ctx;
+ struct device *dev = SEC_CTX_DEV(ctx);
- buf_map_limit = FUSION_LIMIT_MAX;
- if (src_nents_sum > buf_map_limit || dst_nents_sum > buf_map_limit) {
- dev_err(ctx->dev, "src[%d] or dst[%d] bigger than %d\n",
- src_nents_sum, dst_nents_sum, buf_map_limit);
- return -ENOBUFS;
- }
-
- copyed_src_nents = 0;
- copyed_dst_nents = 0;
- for (i = 0; i < req->fusion_num; i++) {
- sk_next = (struct skcipher_request *)req->priv[i];
- src_nents = sg_nents(sk_next->src);
- dst_nents = sg_nents(sk_next->dst);
-
- if (i != req->fusion_num - 1) {
- sg_unmark_end(&sk_next->src[src_nents - 1]);
- sg_unmark_end(&sk_next->dst[dst_nents - 1]);
- }
-
- memcpy(c_req->src + copyed_src_nents, sk_next->src,
- src_nents * sizeof(struct scatterlist));
- memcpy(c_req->dst + copyed_dst_nents, sk_next->dst,
- dst_nents * sizeof(struct scatterlist));
+ if (ctx->use_pbuf)
+ return sec_cipher_pbuf_map(ctx, req, src);
- copyed_src_nents += src_nents;
- copyed_dst_nents += dst_nents;
- }
-
- c_req->c_in = hisi_acc_sg_buf_map_to_hw_sgl(dev, c_req->src,
- qp_ctx->c_in_pool, req->req_id, &c_req->c_in_dma);
+ c_req->c_in = hisi_acc_sg_buf_map_to_hw_sgl(dev, src,
+ qp_ctx->c_in_pool,
+ req->req_id,
+ &c_req->c_in_dma);
- if (IS_ERR(c_req->c_in))
+ if (IS_ERR(c_req->c_in)) {
+ dev_err(dev, "fail to dma map input sgl buffers!\n");
return PTR_ERR(c_req->c_in);
+ }
- if (c_req->dst == c_req->src) {
+ if (dst == src) {
c_req->c_out = c_req->c_in;
c_req->c_out_dma = c_req->c_in_dma;
} else {
- c_req->c_out = hisi_acc_sg_buf_map_to_hw_sgl(dev, c_req->dst,
- qp_ctx->c_out_pool, req->req_id, &c_req->c_out_dma);
+ c_req->c_out = hisi_acc_sg_buf_map_to_hw_sgl(dev, dst,
+ qp_ctx->c_out_pool,
+ req->req_id,
+ &c_req->c_out_dma);
+
if (IS_ERR(c_req->c_out)) {
- ret = PTR_ERR(c_req->c_out);
- goto err_unmap_src;
+ dev_err(dev, "fail to dma map output sgl buffers!\n");
+ hisi_acc_sg_buf_unmap(dev, src, c_req->c_in);
+ return PTR_ERR(c_req->c_out);
}
}
return 0;
-
-err_unmap_src:
- hisi_acc_sg_buf_unmap(dev, c_req->src, c_req->c_in);
-
- return ret;
}
-static int hisi_sec_skcipher_buf_unmap(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
+static void sec_cipher_unmap(struct sec_ctx *ctx, struct sec_req *req,
+ struct scatterlist *src, struct scatterlist *dst)
{
- struct hisi_sec_cipher_req *c_req = &req->c_req;
- struct device *dev = ctx->dev;
-
- if (c_req->dst != c_req->src)
- hisi_acc_sg_buf_unmap(dev, c_req->src, c_req->c_in);
+ struct sec_cipher_req *c_req = &req->c_req;
+ struct device *dev = SEC_CTX_DEV(ctx);
- hisi_acc_sg_buf_unmap(dev, c_req->dst, c_req->c_out);
+ if (ctx->use_pbuf) {
+ sec_cipher_pbuf_unmap(ctx, req, dst);
+ } else {
+ if (dst != src)
+ hisi_acc_sg_buf_unmap(dev, src, c_req->c_in);
- return 0;
+ hisi_acc_sg_buf_unmap(dev, dst, c_req->c_out);
+ }
}
-static int hisi_sec_skcipher_copy_iv(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
+static int sec_skcipher_sgl_map(struct sec_ctx *ctx, struct sec_req *req)
{
- struct hisi_sec_cipher_ctx *c_ctx = &ctx->c_ctx;
- struct hisi_sec_cipher_req *c_req = &req->c_req;
- struct skcipher_request *sk_req =
- (struct skcipher_request *)req->priv[0];
- struct crypto_skcipher *atfm = crypto_skcipher_reqtfm(sk_req);
- struct skcipher_request *sk_next;
- int i, iv_size;
-
- c_req->c_len = sk_req->cryptlen;
-
- iv_size = crypto_skcipher_ivsize(atfm);
- if (iv_size > SEC_IV_SIZE)
- return -EINVAL;
+ struct skcipher_request *sq = req->c_req.sk_req;
- memcpy(c_req->c_ivin, sk_req->iv, iv_size);
-
- if (ctx->is_fusion) {
- for (i = 1; i < req->fusion_num; i++) {
- sk_next = (struct skcipher_request *)req->priv[i];
- memcpy(c_req->c_ivin + i * iv_size, sk_next->iv,
- iv_size);
- }
-
- c_req->gran_num = req->fusion_num;
- c_ctx->c_gran_size = sk_req->cryptlen;
- }
-
- return 0;
+ return sec_cipher_map(ctx, req, sq->src, sq->dst);
}
-static int hisi_sec_skcipher_bd_fill_storage(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
+static void sec_skcipher_sgl_unmap(struct sec_ctx *ctx, struct sec_req *req)
{
- struct hisi_sec_cipher_ctx *c_ctx = &ctx->c_ctx;
- struct hisi_sec_cipher_req *c_req = &req->c_req;
- struct hisi_sec_sqe *sec_sqe = &req->sec_sqe;
+ struct skcipher_request *sq = req->c_req.sk_req;
- if (!c_req->c_len)
- return -EINVAL;
-
- sec_sqe->type1.c_key_addr_l = lower_32_bits(c_ctx->c_key_dma);
- sec_sqe->type1.c_key_addr_h = upper_32_bits(c_ctx->c_key_dma);
- sec_sqe->type1.c_ivin_addr_l = lower_32_bits(c_req->c_ivin_dma);
- sec_sqe->type1.c_ivin_addr_h = upper_32_bits(c_req->c_ivin_dma);
- sec_sqe->type1.data_src_addr_l = lower_32_bits(c_req->c_in_dma);
- sec_sqe->type1.data_src_addr_h = upper_32_bits(c_req->c_in_dma);
- sec_sqe->type1.data_dst_addr_l = lower_32_bits(c_req->c_out_dma);
- sec_sqe->type1.data_dst_addr_h = upper_32_bits(c_req->c_out_dma);
-
- sec_sqe->type1.c_mode = c_ctx->c_mode;
- sec_sqe->type1.c_alg = c_ctx->c_alg;
- sec_sqe->type1.c_key_len = c_ctx->c_key_len;
-
- sec_sqe->src_addr_type = SGL;
- sec_sqe->dst_addr_type = SGL;
- sec_sqe->type = BD_TYPE1;
- sec_sqe->scene = SCENE_STORAGE;
- sec_sqe->de = c_req->c_in_dma != c_req->c_out_dma;
-
- if (c_req->encrypt)
- sec_sqe->cipher = SEC_CIPHER_ENC;
- else
- sec_sqe->cipher = SEC_CIPHER_DEC;
-
- if (c_ctx->c_mode == C_MODE_XTS)
- sec_sqe->type1.ci_gen = CI_GEN_BY_LBA;
-
- sec_sqe->type1.cipher_gran_size = c_ctx->c_gran_size;
- sec_sqe->type1.gran_num = c_req->gran_num;
- __sync_fetch_and_add(&ctx->sec->sec_dfx.gran_task_cnt, c_req->gran_num);
- sec_sqe->type1.block_size = c_req->c_len;
-
- sec_sqe->type1.lba_l = lower_32_bits(c_req->lba);
- sec_sqe->type1.lba_h = upper_32_bits(c_req->lba);
-
- sec_sqe->type1.tag = req->req_id;
-
- return 0;
+ sec_cipher_unmap(ctx, req, sq->src, sq->dst);
}
-static int hisi_sec_skcipher_bd_fill_multi_iv(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
+static int sec_request_transfer(struct sec_ctx *ctx, struct sec_req *req)
{
int ret;
- ret = hisi_sec_skcipher_bd_fill_storage(ctx, req);
- if (ret)
+ ret = ctx->req_op->buf_map(ctx, req);
+ if (unlikely(ret))
return ret;
- req->sec_sqe.type1.ci_gen = CI_GEN_BY_ADDR;
-
- return 0;
-}
-
-static int hisi_sec_skcipher_bd_fill_base(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
-{
- struct hisi_sec_cipher_ctx *c_ctx = &ctx->c_ctx;
- struct hisi_sec_cipher_req *c_req = &req->c_req;
- struct hisi_sec_sqe *sec_sqe = &req->sec_sqe;
-
- if (!c_req->c_len)
- return -EINVAL;
-
- sec_sqe->type2.c_key_addr_l = lower_32_bits(c_ctx->c_key_dma);
- sec_sqe->type2.c_key_addr_h = upper_32_bits(c_ctx->c_key_dma);
- sec_sqe->type2.c_ivin_addr_l = lower_32_bits(c_req->c_ivin_dma);
- sec_sqe->type2.c_ivin_addr_h = upper_32_bits(c_req->c_ivin_dma);
- sec_sqe->type2.data_src_addr_l = lower_32_bits(c_req->c_in_dma);
- sec_sqe->type2.data_src_addr_h = upper_32_bits(c_req->c_in_dma);
- sec_sqe->type2.data_dst_addr_l = lower_32_bits(c_req->c_out_dma);
- sec_sqe->type2.data_dst_addr_h = upper_32_bits(c_req->c_out_dma);
+ ctx->req_op->do_transfer(ctx, req);
- sec_sqe->type2.c_mode = c_ctx->c_mode;
- sec_sqe->type2.c_alg = c_ctx->c_alg;
- sec_sqe->type2.c_key_len = c_ctx->c_key_len;
-
- sec_sqe->src_addr_type = SGL;
- sec_sqe->dst_addr_type = SGL;
- sec_sqe->type = BD_TYPE2;
- sec_sqe->scene = SCENE_IPSEC;
- sec_sqe->de = c_req->c_in_dma != c_req->c_out_dma;
+ ret = ctx->req_op->bd_fill(ctx, req);
+ if (unlikely(ret))
+ goto unmap_req_buf;
- __sync_fetch_and_add(&ctx->sec->sec_dfx.gran_task_cnt, 1);
+ return ret;
- if (c_req->encrypt)
- sec_sqe->cipher = SEC_CIPHER_ENC;
- else
- sec_sqe->cipher = SEC_CIPHER_DEC;
+unmap_req_buf:
+ ctx->req_op->buf_unmap(ctx, req);
- sec_sqe->type2.c_len = c_req->c_len;
- sec_sqe->type2.tag = req->req_id;
+ return ret;
+}
- return 0;
+static void sec_request_untransfer(struct sec_ctx *ctx, struct sec_req *req)
+{
+ ctx->req_op->buf_unmap(ctx, req);
}
-static int hisi_sec_bd_send_asyn(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
+static void sec_skcipher_copy_iv(struct sec_ctx *ctx, struct sec_req *req)
{
- struct hisi_sec_qp_ctx *qp_ctx = req->qp_ctx;
- int req_cnt = req->req_cnt;
- int ret;
+ struct skcipher_request *sk_req = req->c_req.sk_req;
+ struct sec_cipher_req *c_req = &req->c_req;
+ struct sec_alg_res *res = &req->qp_ctx->res[req->req_id];
- mutex_lock(&qp_ctx->req_lock);
- ret = hisi_qp_send(qp_ctx->qp, &req->sec_sqe);
- if (ret == 0)
- ctx->sec->sec_dfx.send_cnt++;
- mutex_unlock(&qp_ctx->req_lock);
+ if (ctx->use_pbuf) {
+ c_req->c_ivin = res->pbuf + SEC_PBUF_IV_OFFSET;
+ c_req->c_ivin_dma = res->pbuf_dma + SEC_PBUF_IV_OFFSET;
+ } else {
+ c_req->c_ivin = res->c_ivin;
+ c_req->c_ivin_dma = res->c_ivin_dma;
+ }
- return hisi_sec_get_async_ret(ret, req_cnt, ctx->req_fake_limit);
+ memcpy(c_req->c_ivin, sk_req->iv, ctx->c_ctx.ivsize);
}
-static void hisi_sec_skcipher_complete(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req, int err_code)
+static int sec_skcipher_bd_fill(struct sec_ctx *ctx, struct sec_req *req)
{
- struct skcipher_request **sk_reqs =
- (struct skcipher_request **)req->priv;
- int i, req_fusion_num;
-
- if (ctx->is_fusion == SEC_NO_FUSION)
- req_fusion_num = 1;
+ struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;
+ struct sec_cipher_req *c_req = &req->c_req;
+ struct sec_sqe *sec_sqe = &req->sec_sqe;
+ u8 scene, sa_type, da_type;
+ u8 bd_type, cipher;
+ u8 de = 0;
+
+ memset(sec_sqe, 0, sizeof(struct sec_sqe));
+
+ sec_sqe->type2.c_key_addr = cpu_to_le64(c_ctx->c_key_dma);
+ sec_sqe->type2.c_ivin_addr = cpu_to_le64(c_req->c_ivin_dma);
+ sec_sqe->type2.data_src_addr = cpu_to_le64(c_req->c_in_dma);
+ sec_sqe->type2.data_dst_addr = cpu_to_le64(c_req->c_out_dma);
+
+ sec_sqe->type2.icvw_kmode |= cpu_to_le16(((u16)c_ctx->c_mode) <<
+ SEC_CMODE_OFFSET);
+ sec_sqe->type2.c_alg = c_ctx->c_alg;
+ sec_sqe->type2.icvw_kmode |= cpu_to_le16(((u16)c_ctx->c_key_len) <<
+ SEC_CKEY_OFFSET);
+
+ bd_type = SEC_BD_TYPE2;
+ if (c_req->encrypt)
+ cipher = SEC_CIPHER_ENC << SEC_CIPHER_OFFSET;
else
- req_fusion_num = req->fusion_num;
+ cipher = SEC_CIPHER_DEC << SEC_CIPHER_OFFSET;
+ sec_sqe->type_cipher_auth = bd_type | cipher;
- for (i = 0; i < req_fusion_num; i++)
- sk_reqs[i]->base.complete(&sk_reqs[i]->base, err_code);
-
- /* free sk_reqs if this request is completed */
- if (err_code != -EINPROGRESS)
- __sync_add_and_fetch(&ctx->sec->sec_dfx.put_task_cnt,
- req_fusion_num);
+ if (ctx->use_pbuf)
+ sa_type = SEC_PBUF << SEC_SRC_SGL_OFFSET;
else
- __sync_add_and_fetch(&ctx->sec->sec_dfx.busy_comp_cnt,
- req_fusion_num);
-}
-
-static int hisi_sec_skcipher_callback(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
-{
- struct hisi_sec_qp_ctx *qp_ctx = req->qp_ctx;
- int req_id = req->req_id;
+ sa_type = SEC_SGL << SEC_SRC_SGL_OFFSET;
+ scene = SEC_COMM_SCENE << SEC_SCENE_OFFSET;
+ if (c_req->c_in_dma != c_req->c_out_dma)
+ de = 0x1 << SEC_DE_OFFSET;
- if (__sync_bool_compare_and_swap(&req->fake_busy, 1, 0))
- hisi_sec_skcipher_complete(ctx, req, -EINPROGRESS);
+ sec_sqe->sds_sa_type = (de | scene | sa_type);
- hisi_sec_skcipher_complete(ctx, req, req->err_type);
+ /* Just set DST address type */
+ if (ctx->use_pbuf)
+ da_type = SEC_PBUF << SEC_DST_SGL_OFFSET;
+ else
+ da_type = SEC_SGL << SEC_DST_SGL_OFFSET;
+ sec_sqe->sdm_addr_type |= da_type;
- hisi_sec_free_req_id(qp_ctx, req_id);
+ sec_sqe->type2.clen_ivhlen |= cpu_to_le32(c_req->c_len);
+ sec_sqe->type2.tag = cpu_to_le16((u16)req->req_id);
return 0;
}
-static int sec_get_issue_id_range(atomic_t *qid, int start, int end)
+static void sec_update_iv(struct sec_req *req, enum sec_alg_type alg_type)
{
- int issue_id;
- int issue_len = end - start;
+ struct skcipher_request *sk_req = req->c_req.sk_req;
+ u32 iv_size = req->ctx->c_ctx.ivsize;
+ struct scatterlist *sgl;
+ unsigned int cryptlen;
+ size_t sz;
+ u8 *iv;
+
+ if (req->c_req.encrypt)
+ sgl = sk_req->dst;
+ else
+ sgl = sk_req->src;
- issue_id = (atomic_inc_return(qid) - start) % issue_len + start;
- if (issue_id % issue_len == 0 && atomic_read(qid) > issue_len)
- atomic_sub(issue_len, qid);
+ iv = sk_req->iv;
+ cryptlen = sk_req->cryptlen;
- return issue_id;
+ sz = sg_pcopy_to_buffer(sgl, sg_nents(sgl), iv, iv_size,
+ cryptlen - iv_size);
+ if (unlikely(sz != iv_size))
+ dev_err(SEC_CTX_DEV(req->ctx), "copy output iv error!\n");
}
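
sec_update_iv() implements standard CBC chaining: the IV for the next
request is the last cipher block of this request's data, i.e. the iv_size
bytes at offset cryptlen - iv_size. A userspace analogue of that copy:

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
  	unsigned char out[32] = "0123456789abcdef0123456789ABCDEF";
  	unsigned char iv[16];
  	size_t cryptlen = sizeof(out), iv_size = sizeof(iv);

  	memcpy(iv, out + cryptlen - iv_size, iv_size);
  	printf("next iv: %.16s\n", iv);	/* prints 0123456789ABCDEF */
  	return 0;
  }
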
-static inline int sec_get_issue_id(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
+static void sec_skcipher_callback(struct sec_ctx *ctx, struct sec_req *req,
+ int err)
{
- int issue_id;
+ struct skcipher_request *sk_req = req->c_req.sk_req;
+ struct sec_qp_ctx *qp_ctx = req->qp_ctx;
- if (req->c_req.encrypt == 1)
- issue_id = sec_get_issue_id_range(&ctx->enc_qid, 0,
- ctx->enc_q_num);
- else
- issue_id = sec_get_issue_id_range(&ctx->dec_qid, ctx->enc_q_num,
- ctx->q_num);
+ atomic_dec(&qp_ctx->pending_reqs);
+ sec_free_req_id(req);
- return issue_id;
-}
+ /* Output the IV after encryption in CBC mode */
+ if (!err && ctx->c_ctx.c_mode == SEC_CMODE_CBC && req->c_req.encrypt)
+ sec_update_iv(req, SEC_SKCIPHER);
-static inline void hisi_sec_inc_thread_cnt(struct hisi_sec_ctx *ctx)
-{
- int thread_cnt = atomic_inc_return(&ctx->thread_cnt);
+ if (req->fake_busy)
+ sk_req->base.complete(&sk_req->base, -EINPROGRESS);
- if (thread_cnt > ctx->sec->sec_dfx.thread_cnt)
- ctx->sec->sec_dfx.thread_cnt = thread_cnt;
+ sk_req->base.complete(&sk_req->base, err);
}
-static struct hisi_sec_req *sec_request_alloc(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *in_req, int *fusion_send, int *fake_busy)
+static void sec_request_uninit(struct sec_ctx *ctx, struct sec_req *req)
{
- struct hisi_sec_qp_ctx *qp_ctx;
- struct hisi_sec_req *req;
- int issue_id, ret;
-
- __sync_add_and_fetch(&ctx->sec->sec_dfx.get_task_cnt, 1);
-
- issue_id = sec_get_issue_id(ctx, in_req);
- hisi_sec_inc_thread_cnt(ctx);
-
- qp_ctx = &ctx->qp_ctx[issue_id];
-
- mutex_lock(&qp_ctx->req_lock);
-
- if (in_req->c_req.sk_req->src == in_req->c_req.sk_req->dst) {
- *fusion_send = 1;
- } else if (qp_ctx->fusion_req &&
- qp_ctx->fusion_req->fusion_num < qp_ctx->fusion_limit) {
- req = qp_ctx->fusion_req;
-
- *fake_busy = req->fake_busy;
- __sync_add_and_fetch(&ctx->sec->sec_dfx.fake_busy_cnt,
- *fake_busy);
-
- req->priv[req->fusion_num] = in_req->c_req.sk_req;
- req->fusion_num++;
- in_req->fusion_num = req->fusion_num;
- if (req->fusion_num == qp_ctx->fusion_limit) {
- *fusion_send = 1;
- qp_ctx->fusion_req = NULL;
- }
- mutex_unlock(&qp_ctx->req_lock);
- return req;
- }
+ struct sec_qp_ctx *qp_ctx = req->qp_ctx;
- req = in_req;
-
- if (hisi_sec_alloc_req_id(req, qp_ctx)) {
- mutex_unlock(&qp_ctx->req_lock);
- return NULL;
- }
+ atomic_dec(&qp_ctx->pending_reqs);
+ sec_free_req_id(req);
+ sec_free_queue_id(ctx, req);
+}
- req->fake_busy = 0;
+static int sec_request_init(struct sec_ctx *ctx, struct sec_req *req)
+{
+ struct sec_qp_ctx *qp_ctx;
+ int queue_id;
- req->req_cnt = atomic_inc_return(&qp_ctx->req_cnt);
- if (req->req_cnt >= ctx->req_fake_limit) {
- req->fake_busy = 1;
- *fake_busy = 1;
- __sync_add_and_fetch(&ctx->sec->sec_dfx.fake_busy_cnt, 1);
- }
+ /* Pick a queue for load balancing */
+ queue_id = sec_alloc_queue_id(ctx, req);
+ qp_ctx = &ctx->qp_ctx[queue_id];
- ret = ctx->req_op->get_res(ctx, req);
- if (ret) {
- dev_err(ctx->dev, "req_op get_res failed\n");
- mutex_unlock(&qp_ctx->req_lock);
- goto err_free_req_id;
+ req->req_id = sec_alloc_req_id(req, qp_ctx);
+ if (unlikely(req->req_id < 0)) {
+ sec_free_queue_id(ctx, req);
+ return req->req_id;
}
- if (ctx->fusion_limit <= 1 || ctx->fusion_tmout_nsec == 0)
- *fusion_send = 1;
-
- if (ctx->is_fusion && *fusion_send == 0)
- qp_ctx->fusion_req = req;
-
- req->fusion_num = 1;
-
- req->priv[0] = in_req->c_req.sk_req;
- req->st_time = ktime_get();
-
- mutex_unlock(&qp_ctx->req_lock);
-
- return req;
+ if (ctx->fake_req_limit <= atomic_inc_return(&qp_ctx->pending_reqs))
+ req->fake_busy = true;
+ else
+ req->fake_busy = false;
-err_free_req_id:
- hisi_sec_free_req_id(qp_ctx, req->req_id);
- return NULL;
+ return 0;
}
-static int sec_request_transfer(struct hisi_sec_ctx *ctx,
- struct hisi_sec_req *req)
+static int sec_process(struct sec_ctx *ctx, struct sec_req *req)
{
+ struct sec_cipher_req *c_req = &req->c_req;
int ret;
- ret = ctx->req_op->buf_map(ctx, req);
- if (ret)
+ ret = sec_request_init(ctx, req);
+ if (unlikely(ret))
return ret;
- ret = ctx->req_op->do_transfer(ctx, req);
- if (ret)
- goto unmap_req_buf;
+ ret = sec_request_transfer(ctx, req);
+ if (unlikely(ret))
+ goto err_uninit_req;
- memset(&req->sec_sqe, 0, sizeof(struct hisi_sec_sqe));
- ret = ctx->req_op->bd_fill(ctx, req);
- if (ret)
- goto unmap_req_buf;
+ /* Output the IV before decryption in CBC mode */
+ if (ctx->c_ctx.c_mode == SEC_CMODE_CBC && !req->c_req.encrypt)
+ sec_update_iv(req, ctx->alg_type);
- return 0;
+ ret = ctx->req_op->bd_send(ctx, req);
+ if (unlikely(ret != -EBUSY && ret != -EINPROGRESS)) {
+ dev_err_ratelimited(SEC_CTX_DEV(ctx),
+ "send sec request failed!\n");
+ goto err_send_req;
+ }
-unmap_req_buf:
- ctx->req_op->buf_unmap(ctx, req);
return ret;
-}
-
-static int sec_request_send(struct hisi_sec_ctx *ctx, struct hisi_sec_req *req)
-{
- int ret;
- ret = ctx->req_op->bd_send(ctx, req);
+err_send_req:
+ /* On failure, restore the original IV for the user */
+ if (ctx->c_ctx.c_mode == SEC_CMODE_CBC && !req->c_req.encrypt) {
+ if (ctx->alg_type == SEC_SKCIPHER)
+ memcpy(req->c_req.sk_req->iv, c_req->c_ivin,
+ ctx->c_ctx.ivsize);
+ }
- if (ret == 0 || ret == -EBUSY || ret == -EINPROGRESS)
- atomic_dec(&ctx->thread_cnt);
+ sec_request_untransfer(ctx, req);
+err_uninit_req:
+ sec_request_uninit(ctx, req);
return ret;
}
-static int sec_io_proc(struct hisi_sec_ctx *ctx, struct hisi_sec_req *in_req)
+static const struct sec_req_op sec_skcipher_req_ops = {
+ .buf_map = sec_skcipher_sgl_map,
+ .buf_unmap = sec_skcipher_sgl_unmap,
+ .do_transfer = sec_skcipher_copy_iv,
+ .bd_fill = sec_skcipher_bd_fill,
+ .bd_send = sec_bd_send,
+ .callback = sec_skcipher_callback,
+ .process = sec_process,
+};
+
+static int sec_skcipher_ctx_init(struct crypto_skcipher *tfm)
{
- struct hisi_sec_req *req;
- int fusion_send = 0;
- int fake_busy = 0;
- int ret;
+ struct sec_ctx *ctx = crypto_skcipher_ctx(tfm);
- in_req->fusion_num = 1;
+ ctx->req_op = &sec_skcipher_req_ops;
- req = sec_request_alloc(ctx, in_req, &fusion_send, &fake_busy);
+ return sec_skcipher_init(tfm);
+}
- if (!req) {
- dev_err_ratelimited(ctx->dev, "sec_request_alloc failed\n");
- return -ENOMEM;
- }
+static void sec_skcipher_ctx_exit(struct crypto_skcipher *tfm)
+{
+ sec_skcipher_uninit(tfm);
+}
- if (ctx->is_fusion && fusion_send == 0)
- return fake_busy ? -EBUSY : -EINPROGRESS;
+static int sec_skcipher_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+{
+ struct skcipher_request *sk_req = sreq->c_req.sk_req;
+ struct device *dev = SEC_CTX_DEV(ctx);
+ u8 c_alg = ctx->c_ctx.c_alg;
- ret = sec_request_transfer(ctx, req);
- if (ret) {
- dev_err_ratelimited(ctx->dev, "sec_transfer ret[%d]\n", ret);
- goto err_free_req;
+ if (unlikely(!sk_req->src || !sk_req->dst)) {
+ dev_err(dev, "skcipher input param error!\n");
+ return -EINVAL;
}
+ sreq->c_req.c_len = sk_req->cryptlen;
- ret = sec_request_send(ctx, req);
- __sync_add_and_fetch(&ctx->sec->sec_dfx.send_by_full, 1);
- if (ret != -EBUSY && ret != -EINPROGRESS) {
- dev_err_ratelimited(ctx->dev, "sec_send ret[%d]\n", ret);
- goto err_unmap_req;
- }
+ if (ctx->pbuf_supported && sk_req->cryptlen <= SEC_PBUF_SZ)
+ ctx->use_pbuf = true;
- return ret;
+ if (c_alg == SEC_CALG_3DES) {
+ if (unlikely(sk_req->cryptlen & (DES3_EDE_BLOCK_SIZE - 1))) {
+ dev_err(dev, "skcipher 3des input length error!\n");
+ return -EINVAL;
+ }
+ return 0;
+ } else if (c_alg == SEC_CALG_AES || c_alg == SEC_CALG_SM4) {
+ if (unlikely(sk_req->cryptlen & (AES_BLOCK_SIZE - 1))) {
+ dev_err(dev, "skcipher aes input length error!\n");
+ return -EINVAL;
+ }
+ return 0;
+ }
-err_unmap_req:
- ctx->req_op->buf_unmap(ctx, req);
-err_free_req:
- hisi_sec_free_req_id(req->qp_ctx, req->req_id);
- atomic_dec(&ctx->thread_cnt);
- return ret;
+ dev_err(dev, "skcipher algorithm error!\n");
+ return -EINVAL;
}
static int sec_skcipher_crypto(struct skcipher_request *sk_req, bool encrypt)
{
- struct crypto_skcipher *atfm = crypto_skcipher_reqtfm(sk_req);
- struct hisi_sec_ctx *ctx = crypto_skcipher_ctx(atfm);
- struct hisi_sec_req *req = skcipher_request_ctx(sk_req);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(sk_req);
+ struct sec_req *req = skcipher_request_ctx(sk_req);
+ struct sec_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int ret;
- if (!sk_req->src || !sk_req->dst || !sk_req->cryptlen)
- return -EINVAL;
+ if (!sk_req->cryptlen)
+ return 0;
- req->c_req.sk_req = sk_req;
+ req->c_req.sk_req = sk_req;
req->c_req.encrypt = encrypt;
- req->ctx = ctx;
+ req->ctx = ctx;
+
+ ret = sec_skcipher_param_check(ctx, req);
+ if (unlikely(ret))
+ return -EINVAL;
- return sec_io_proc(ctx, req);
+ return ctx->req_op->process(ctx, req);
}
static int sec_skcipher_encrypt(struct skcipher_request *sk_req)
@@ -1415,7 +998,7 @@ static int sec_skcipher_decrypt(struct skcipher_request *sk_req)
}
#define SEC_SKCIPHER_GEN_ALG(sec_cra_name, sec_set_key, sec_min_key_size, \
- sec_max_key_size, hisi_sec_cipher_ctx_init_func, blk_size, iv_size)\
+ sec_max_key_size, ctx_init, ctx_exit, blk_size, iv_size)\
{\
.base = {\
.cra_name = sec_cra_name,\
@@ -1423,12 +1006,11 @@ static int sec_skcipher_decrypt(struct skcipher_request *sk_req)
.cra_priority = SEC_PRIORITY,\
.cra_flags = CRYPTO_ALG_ASYNC,\
.cra_blocksize = blk_size,\
- .cra_ctxsize = sizeof(struct hisi_sec_ctx),\
- .cra_alignmask = 0,\
+ .cra_ctxsize = sizeof(struct sec_ctx),\
.cra_module = THIS_MODULE,\
},\
- .init = hisi_sec_cipher_ctx_init_func,\
- .exit = hisi_sec_cipher_ctx_exit,\
+ .init = ctx_init,\
+ .exit = ctx_exit,\
.setkey = sec_set_key,\
.decrypt = sec_skcipher_decrypt,\
.encrypt = sec_skcipher_encrypt,\
@@ -1437,75 +1019,55 @@ static int sec_skcipher_decrypt(struct skcipher_request *sk_req)
.ivsize = iv_size,\
},
-#define SEC_SKCIPHER_NORMAL_ALG(name, key_func, min_key_size, \
+#define SEC_SKCIPHER_ALG(name, key_func, min_key_size, \
max_key_size, blk_size, iv_size) \
SEC_SKCIPHER_GEN_ALG(name, key_func, min_key_size, max_key_size, \
- hisi_sec_cipher_ctx_init_alg, blk_size, iv_size)
+ sec_skcipher_ctx_init, sec_skcipher_ctx_exit, blk_size, iv_size)
-#define SEC_SKCIPHER_FUSION_ALG(name, key_func, min_key_size, \
- max_key_size, blk_size, iv_size) \
- SEC_SKCIPHER_GEN_ALG(name, key_func, min_key_size, max_key_size, \
- hisi_sec_cipher_ctx_init_multi_iv, blk_size, iv_size)
-
-static struct skcipher_alg sec_normal_algs[] = {
- SEC_SKCIPHER_NORMAL_ALG("ecb(aes)", sec_setkey_aes_ecb,
- AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, AES_BLOCK_SIZE, 0)
- SEC_SKCIPHER_NORMAL_ALG("cbc(aes)", sec_setkey_aes_cbc,
- AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE, AES_BLOCK_SIZE,
- AES_BLOCK_SIZE)
- SEC_SKCIPHER_NORMAL_ALG("xts(aes)", sec_setkey_aes_xts,
- SEC_XTS_MIN_KEY_SIZE, SEC_XTS_MAX_KEY_SIZE, AES_BLOCK_SIZE,
- AES_BLOCK_SIZE)
- SEC_SKCIPHER_NORMAL_ALG("ecb(des)", sec_setkey_des_ecb,
- DES_KEY_SIZE, DES_KEY_SIZE, DES_BLOCK_SIZE, 0)
- SEC_SKCIPHER_NORMAL_ALG("cbc(des)", sec_setkey_des_cbc,
- DES_KEY_SIZE, DES_KEY_SIZE, DES_BLOCK_SIZE, DES_BLOCK_SIZE)
- SEC_SKCIPHER_NORMAL_ALG("ecb(des3_ede)", sec_setkey_3des_ecb,
- SEC_DES3_2KEY_SIZE, SEC_DES3_3KEY_SIZE, DES3_EDE_BLOCK_SIZE, 0)
- SEC_SKCIPHER_NORMAL_ALG("cbc(des3_ede)", sec_setkey_3des_cbc,
- SEC_DES3_2KEY_SIZE, SEC_DES3_3KEY_SIZE, DES3_EDE_BLOCK_SIZE,
- DES3_EDE_BLOCK_SIZE)
- SEC_SKCIPHER_NORMAL_ALG("xts(sm4)", sec_setkey_sm4_xts,
- SEC_XTS_MIN_KEY_SIZE, SEC_XTS_MIN_KEY_SIZE, AES_BLOCK_SIZE,
- AES_BLOCK_SIZE)
- SEC_SKCIPHER_NORMAL_ALG("cbc(sm4)", sec_setkey_sm4_cbc,
- AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE, AES_BLOCK_SIZE,
- AES_BLOCK_SIZE)
-};
+static struct skcipher_alg sec_skciphers[] = {
+ SEC_SKCIPHER_ALG("ecb(aes)", sec_setkey_aes_ecb,
+ AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE,
+ AES_BLOCK_SIZE, 0)
+
+ SEC_SKCIPHER_ALG("cbc(aes)", sec_setkey_aes_cbc,
+ AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE,
+ AES_BLOCK_SIZE, AES_BLOCK_SIZE)
+
+ SEC_SKCIPHER_ALG("xts(aes)", sec_setkey_aes_xts,
+ SEC_XTS_MIN_KEY_SIZE, SEC_XTS_MAX_KEY_SIZE,
+ AES_BLOCK_SIZE, AES_BLOCK_SIZE)
+
+ SEC_SKCIPHER_ALG("ecb(des3_ede)", sec_setkey_3des_ecb,
+ SEC_DES3_2KEY_SIZE, SEC_DES3_3KEY_SIZE,
+ DES3_EDE_BLOCK_SIZE, 0)
+
+ SEC_SKCIPHER_ALG("cbc(des3_ede)", sec_setkey_3des_cbc,
+ SEC_DES3_2KEY_SIZE, SEC_DES3_3KEY_SIZE,
+ DES3_EDE_BLOCK_SIZE, DES3_EDE_BLOCK_SIZE)
+
+ SEC_SKCIPHER_ALG("xts(sm4)", sec_setkey_sm4_xts,
+ SEC_XTS_MIN_KEY_SIZE, SEC_XTS_MIN_KEY_SIZE,
+ AES_BLOCK_SIZE, AES_BLOCK_SIZE)
+
+ SEC_SKCIPHER_ALG("cbc(sm4)", sec_setkey_sm4_cbc,
+ AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE,
+ AES_BLOCK_SIZE, AES_BLOCK_SIZE)
-static struct skcipher_alg sec_fusion_algs[] = {
- SEC_SKCIPHER_FUSION_ALG("xts(sm4)", sec_setkey_sm4_xts,
- SEC_XTS_MIN_KEY_SIZE, SEC_XTS_MIN_KEY_SIZE, AES_BLOCK_SIZE,
- AES_BLOCK_SIZE)
- SEC_SKCIPHER_FUSION_ALG("cbc(sm4)", sec_setkey_sm4_cbc,
- AES_MIN_KEY_SIZE, AES_MIN_KEY_SIZE, AES_BLOCK_SIZE,
- AES_BLOCK_SIZE)
};
-int hisi_sec_register_to_crypto(int fusion_limit)
+int sec_register_to_crypto(void)
{
/* To avoid repeat register */
- if (atomic_add_return(1, &sec_active_devs) == 1) {
- if (fusion_limit == 1)
- return crypto_register_skciphers(sec_normal_algs,
- ARRAY_SIZE(sec_normal_algs));
- else
- return crypto_register_skciphers(sec_fusion_algs,
- ARRAY_SIZE(sec_fusion_algs));
- }
+ if (atomic_add_return(1, &sec_active_devs) == 1)
+ return crypto_register_skciphers(sec_skciphers,
+ ARRAY_SIZE(sec_skciphers));
return 0;
}
-void hisi_sec_unregister_from_crypto(int fusion_limit)
+void sec_unregister_from_crypto(void)
{
- if (atomic_sub_return(1, &sec_active_devs) == 0) {
- if (fusion_limit == 1)
- crypto_unregister_skciphers(sec_normal_algs,
- ARRAY_SIZE(sec_normal_algs));
- else
- crypto_unregister_skciphers(sec_fusion_algs,
- ARRAY_SIZE(sec_fusion_algs));
- }
+ if (atomic_sub_return(1, &sec_active_devs) == 0)
+ crypto_unregister_skciphers(sec_skciphers,
+ ARRAY_SIZE(sec_skciphers));
}
-
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.h b/drivers/crypto/hisilicon/sec2/sec_crypto.h
index bffbeba..221257e 100644
--- a/drivers/crypto/hisilicon/sec2/sec_crypto.h
+++ b/drivers/crypto/hisilicon/sec2/sec_crypto.h
@@ -1,13 +1,238 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/* Copyright (c) 2018-2019 HiSilicon Limited. */
-#ifndef HISI_SEC_CRYPTO_H
-#define HISI_SEC_CRYPTO_H
+#ifndef __HISI_SEC_V2_CRYPTO_H
+#define __HISI_SEC_V2_CRYPTO_H
-#define SEC_IV_SIZE 24
-#define SEC_MAX_KEY_SIZE 64
+#define SEC_IV_SIZE 24
+#define SEC_MAX_KEY_SIZE 64
+#define SEC_MAX_AUTH_KEY_SIZE 64
-int hisi_sec_register_to_crypto(int fusion_limit);
-void hisi_sec_unregister_from_crypto(int fusion_limit);
+#define SEC_COMM_SCENE 0
+enum sec_calg {
+ SEC_CALG_3DES = 0x1,
+ SEC_CALG_AES = 0x2,
+ SEC_CALG_SM4 = 0x3,
+};
+
+enum sec_hash_alg {
+ SEC_A_HMAC_SHA1 = 0x10,
+ SEC_A_HMAC_SHA256 = 0x11,
+ SEC_A_HMAC_SHA512 = 0x15,
+};
+
+enum sec_mac_len {
+ SEC_HMAC_SHA1_MAC = 20,
+ SEC_HMAC_SHA256_MAC = 32,
+ SEC_HMAC_SHA512_MAC = 64,
+};
+
+enum sec_cmode {
+ SEC_CMODE_ECB = 0x0,
+ SEC_CMODE_CBC = 0x1,
+ SEC_CMODE_CTR = 0x4,
+ SEC_CMODE_XTS = 0x7,
+};
+
+enum sec_ckey_type {
+ SEC_CKEY_128BIT = 0x0,
+ SEC_CKEY_192BIT = 0x1,
+ SEC_CKEY_256BIT = 0x2,
+ SEC_CKEY_3DES_3KEY = 0x1,
+ SEC_CKEY_3DES_2KEY = 0x3,
+};
+
+enum sec_bd_type {
+ SEC_BD_TYPE1 = 0x1,
+ SEC_BD_TYPE2 = 0x2,
+};
+
+enum sec_auth {
+ SEC_NO_AUTH = 0x0,
+ SEC_AUTH_TYPE1 = 0x1,
+ SEC_AUTH_TYPE2 = 0x2,
+};
+
+enum sec_cipher_dir {
+ SEC_CIPHER_ENC = 0x1,
+ SEC_CIPHER_DEC = 0x2,
+};
+
+enum sec_addr_type {
+ SEC_PBUF = 0x0,
+ SEC_SGL = 0x1,
+ SEC_PRP = 0x2,
+};
+
+enum sec_ci_gen {
+ SEC_CI_GEN_BY_ADDR = 0x0,
+ SEC_CI_GEN_BY_LBA = 0X3,
+};
+
+enum sec_scene {
+ SEC_SCENE_IPSEC = 0x1,
+ SEC_SCENE_STORAGE = 0x5,
+};
+
+enum sec_work_mode {
+ SEC_NO_FUSION = 0x0,
+ SEC_IV_FUSION = 0x1,
+ SEC_FUSION_BUTT
+};
+
+enum sec_req_ops_type {
+ SEC_OPS_SKCIPHER_ALG = 0x0,
+ SEC_OPS_DMCRYPT = 0x1,
+ SEC_OPS_MULTI_IV = 0x2,
+ SEC_OPS_BUTT
+};
+
+struct sec_sqe_type2 {
+ /*
+ * mac_len: 0~4 bits
+ * a_key_len: 5~10 bits
+ * a_alg: 11~16 bits
+ */
+ __le32 mac_key_alg;
+
+ /*
+ * c_icv_len: 0~5 bits
+ * c_width: 6~8 bits
+ * c_key_len: 9~11 bits
+ * c_mode: 12~15 bits
+ */
+ __le16 icvw_kmode;
+
+ /* c_alg: 0~3 bits */
+ __u8 c_alg;
+ __u8 rsvd4;
+
+ /*
+ * a_len: 0~23 bits
+ * iv_offset_l: 24~31 bits
+ */
+ __le32 alen_ivllen;
+
+ /*
+ * c_len: 0~23 bits
+ * iv_offset_h: 24~31 bits
+ */
+ __le32 clen_ivhlen;
+
+ __le16 auth_src_offset;
+ __le16 cipher_src_offset;
+ __le16 cs_ip_header_offset;
+ __le16 cs_udp_header_offset;
+ __le16 pass_word_len;
+ __le16 dk_len;
+ __u8 salt3;
+ __u8 salt2;
+ __u8 salt1;
+ __u8 salt0;
+
+ __le16 tag;
+ __le16 rsvd5;
+
+ /*
+ * c_pad_type: 0~3 bits
+ * c_pad_len: 4~11 bits
+ * c_pad_data_type: 12~15 bits
+ */
+ __le16 cph_pad;
+
+ /* c_pad_len_field: 0~1 bits */
+ __le16 c_pad_len_field;
+
+ __le64 long_a_data_len;
+ __le64 a_ivin_addr;
+ __le64 a_key_addr;
+ __le64 mac_addr;
+ __le64 c_ivin_addr;
+ __le64 c_key_addr;
+
+ __le64 data_src_addr;
+ __le64 data_dst_addr;
+
+ /*
+ * done: 0 bit
+ * icv: 1~3 bits
+ * csc: 4~6 bits
+ * flag: 7-10 bits
+ * dif_check: 11~13 bits
+ */
+ __le16 done_flag;
+
+ __u8 error_type;
+ __u8 warning_type;
+ __u8 mac_i3;
+ __u8 mac_i2;
+ __u8 mac_i1;
+ __u8 mac_i0;
+ __le16 check_sum_i;
+ __u8 tls_pad_len_i;
+ __u8 rsvd12;
+ __le32 counter;
+};
+
+struct sec_sqe {
+ /*
+ * type: 0~3 bits
+ * cipher: 4~5 bits
+	 * auth: 6~7 bits
+ */
+ __u8 type_cipher_auth;
+
+ /*
+ * seq: 0 bit
+ * de: 1~2 bits
+ * scene: 3~6 bits
+ * src_addr_type: ~7 bit, with sdm_addr_type 0-1 bits
+ */
+ __u8 sds_sa_type;
+
+ /*
+	 * src_addr_type: 0~1 bits; not used for now.
+	 * If PRP is supported, set this field; otherwise set it to zero.
+ * dst_addr_type: 2~4 bits
+ * mac_addr_type: 5~7 bits
+ */
+ __u8 sdm_addr_type;
+ __u8 rsvd0;
+
+ /*
+ * nonce_len(type2): 0~3 bits
+ * huk(type2): 4 bit
+ * key_s(type2): 5 bit
+ * ci_gen: 6~7 bits
+ */
+ __u8 huk_key_ci;
+
+ /*
+ * ai_gen: 0~1 bits
+ * a_pad(type2): 2~3 bits
+ * c_s(type2): 4~5 bits
+ */
+ __u8 ai_apd_cs;
+
+ /*
+ * rhf(type2): 0 bit
+ * c_key_type: 1~2 bits
+ * a_key_type: 3~4 bits
+ * write_frame_len(type2): 5~7 bits
+ */
+ __u8 rca_key_frm;
+
+ /*
+ * cal_iv_addr_en(type2): 0 bit
+ * tls_up(type2): 1 bit
+ * inveld: 7 bit
+ */
+ __u8 iv_tls_ld;
+
+ struct sec_sqe_type2 type2;
+};
+
+int sec_register_to_crypto(void);
+void sec_unregister_from_crypto(void);
#endif
diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
index b4e5d57f..b3340c0 100644
--- a/drivers/crypto/hisilicon/sec2/sec_main.c
+++ b/drivers/crypto/hisilicon/sec2/sec_main.c
@@ -1,33 +1,30 @@
// SPDX-License-Identifier: GPL-2.0+
-/*
- * Copyright (c) 2018-2019 HiSilicon Limited.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
+/* Copyright (c) 2018-2019 HiSilicon Limited. */
#include <linux/acpi.h>
#include <linux/aer.h>
#include <linux/bitops.h>
#include <linux/debugfs.h>
#include <linux/init.h>
+#include <linux/iommu.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/seq_file.h>
#include <linux/topology.h>
-#include <linux/uacce.h>
+
#include "sec.h"
-#include "sec_crypto.h"
#define SEC_QUEUE_NUM_V1 4096
#define SEC_QUEUE_NUM_V2 1024
#define SEC_PF_PCI_DEVICE_ID 0xa255
#define SEC_VF_PCI_DEVICE_ID 0xa256
+#define SEC_BD_ERR_CHK_EN0 0xEFFFFFFF
+#define SEC_BD_ERR_CHK_EN1 0x7ffff7fd
+#define SEC_BD_ERR_CHK_EN3 0xffffbfff
+
#define SEC_SQE_SIZE 128
#define SEC_SQ_SIZE (SEC_SQE_SIZE * QM_Q_DEPTH)
#define SEC_PF_DEF_Q_NUM 64
@@ -35,8 +32,6 @@
#define SEC_CTX_Q_NUM_DEF 24
#define SEC_CTX_Q_NUM_MAX 32
-#define SEC_AM_CFG_SIG_PORT_MAX_TRANS 0x300014
-#define SEC_SINGLE_PORT_MAX_TRANS 0x2060
#define SEC_CTRL_CNT_CLR_CE 0x301120
#define SEC_CTRL_CNT_CLR_CE_BIT BIT(0)
#define SEC_ENGINE_PF_CFG_OFF 0x300000
@@ -44,13 +39,13 @@
#define SEC_CORE_INT_SOURCE 0x301010
#define SEC_CORE_INT_MASK 0x301000
#define SEC_CORE_INT_STATUS 0x301008
-#define SEC_CORE_INT_STATUS_M_ECC BIT(2)
-#define SEC_CORE_ECC_INFO 0x301C14
-#define SEC_ECC_NUM(err_val) (((err_val) >> 16) & 0xFFFF)
-#define SEC_ECC_ADDR(err_val) ((err_val) & 0xFFFF)
+#define SEC_CORE_SRAM_ECC_ERR_INFO 0x301C14
+#define SEC_ECC_NUM(err) (((err) >> 16) & 0xFFFF)
+#define SEC_ECC_ADDR(err) ((err) >> 0)
#define SEC_CORE_INT_DISABLE 0x0
#define SEC_CORE_INT_ENABLE 0x1ff
#define SEC_CORE_INT_CLEAR 0x1ff
+#define SEC_SAA_ENABLE 0x17f
#define SEC_RAS_CE_REG 0x301050
#define SEC_RAS_FE_REG 0x301054
@@ -64,6 +59,7 @@
#define SEC_CONTROL_REG 0x0200
#define SEC_TRNG_EN_SHIFT 8
+#define SEC_CLK_GATE_ENABLE BIT(3)
#define SEC_CLK_GATE_DISABLE (~BIT(3))
#define SEC_AXI_SHUTDOWN_ENABLE BIT(12)
#define SEC_AXI_SHUTDOWN_DISABLE 0xFFFFEFFF
@@ -71,26 +67,24 @@
#define SEC_INTERFACE_USER_CTRL0_REG 0x0220
#define SEC_INTERFACE_USER_CTRL1_REG 0x0224
-#define SEC_SAA_EN_REG 0x270
-#define SEC_SAA_EN 0x17F
+#define SEC_SAA_EN_REG 0x0270
#define SEC_BD_ERR_CHK_EN_REG0 0x0380
#define SEC_BD_ERR_CHK_EN_REG1 0x0384
#define SEC_BD_ERR_CHK_EN_REG3 0x038c
-#define SEC_BD_ERR_CHK_EN0 0xEFFFFFFF
-#define SEC_BD_ERR_CHK_EN1 0x7FFFF7FD
-#define SEC_BD_ERR_CHK_EN3 0xFFFFBFFF
#define SEC_USER0_SMMU_NORMAL (BIT(23) | BIT(15))
#define SEC_USER1_SMMU_NORMAL (BIT(31) | BIT(23) | BIT(15) | BIT(7))
+#define SEC_CORE_INT_STATUS_M_ECC BIT(2)
#define SEC_DELAY_10_US 10
#define SEC_POLL_TIMEOUT_US 1000
#define SEC_DBGFS_VAL_MAX_LEN 20
+#define SEC_SINGLE_PORT_MAX_TRANS 0x2060
#define SEC_ADDR(qm, offset) ((qm)->io_base + (offset) + \
SEC_ENGINE_PF_CFG_OFF + SEC_ACC_COMMON_REG_OFF)
-struct hisi_sec_hw_error {
+struct sec_hw_error {
u32 int_msk;
const char *msg;
};
@@ -98,9 +92,8 @@ struct hisi_sec_hw_error {
static const char sec_name[] = "hisi_sec2";
static struct dentry *sec_debugfs_root;
static struct hisi_qm_list sec_devices;
-static struct workqueue_struct *sec_wq;
-static const struct hisi_sec_hw_error sec_hw_error[] = {
+static const struct sec_hw_error sec_hw_errors[] = {
{.int_msk = BIT(0), .msg = "sec_axi_rresp_err_rint"},
{.int_msk = BIT(1), .msg = "sec_axi_bresp_err_rint"},
{.int_msk = BIT(2), .msg = "sec_ecc_2bit_err_rint"},
@@ -113,36 +106,13 @@ struct hisi_sec_hw_error {
{ /* sentinel */ }
};
-enum ctrl_debug_file_index {
- SEC_CURRENT_QM,
- SEC_CLEAR_ENABLE,
- SEC_DEBUG_FILE_NUM,
-};
-
-static const char * const ctrl_debug_file_name[] = {
+static const char * const sec_dbg_file_name[] = {
[SEC_CURRENT_QM] = "current_qm",
[SEC_CLEAR_ENABLE] = "clear_enable",
};
-struct ctrl_debug_file {
- enum ctrl_debug_file_index index;
- spinlock_t lock;
- struct hisi_sec_ctrl *ctrl;
-};
-
-/*
- * One SEC controller has one PF and multiple VFs, some global configurations
- * which PF has need this structure.
- *
- * Just relevant for PF.
- */
-struct hisi_sec_ctrl {
- struct hisi_sec *hisi_sec;
- struct ctrl_debug_file files[SEC_DEBUG_FILE_NUM];
-};
-
static struct debugfs_reg32 sec_dfx_regs[] = {
- {"SEC_PF_ABNORMAL_INT_SOURCE ", 0x301010},
+ {"SEC_PF_ABNORMAL_INT_SOURCE ", 0x301010},
{"SEC_SAA_EN ", 0x301270},
{"SEC_BD_LATENCY_MIN ", 0x301600},
{"SEC_BD_LATENCY_MAX ", 0x301608},
@@ -262,71 +232,12 @@ static int vfs_num_set(const char *val, const struct kernel_param *kp)
module_param_cb(vfs_num, &vfs_num_ops, &vfs_num, 0444);
MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)");
-static int sec_fusion_limit_set(const char *val, const struct kernel_param *kp)
-{
- u32 fusion_limit;
- int ret;
-
- if (!val)
- return -EINVAL;
-
- ret = kstrtou32(val, 10, &fusion_limit);
- if (ret)
- return ret;
-
- if (!fusion_limit || fusion_limit > FUSION_LIMIT_MAX) {
- pr_err("fusion_limit[%u] is't at range(0, %d)", fusion_limit,
- FUSION_LIMIT_MAX);
- return -EINVAL;
- }
-
- return param_set_int(val, kp);
-}
-
-static const struct kernel_param_ops sec_fusion_limit_ops = {
- .set = sec_fusion_limit_set,
- .get = param_get_int,
-};
-static u32 fusion_limit = FUSION_LIMIT_DEF;
-
-module_param_cb(fusion_limit, &sec_fusion_limit_ops, &fusion_limit, 0444);
-MODULE_PARM_DESC(fusion_limit, "(1, acc_sgl_sge_nr of hisilicon QM)");
-
-static int sec_fusion_tmout_ns_set(const char *val,
- const struct kernel_param *kp)
-{
- u32 fusion_tmout_nsec;
- int ret;
-
- if (!val)
- return -EINVAL;
-
- ret = kstrtou32(val, 10, &fusion_tmout_nsec);
- if (ret)
- return ret;
-
- if (fusion_tmout_nsec > NSEC_PER_SEC) {
- pr_err("fusion_tmout_nsec[%u] is too large", fusion_tmout_nsec);
- return -EINVAL;
- }
-
- return param_set_int(val, kp);
-}
-
-static const struct kernel_param_ops sec_fusion_time_ops = {
- .set = sec_fusion_tmout_ns_set,
- .get = param_get_int,
-};
-static u32 fusion_time = FUSION_TMOUT_NSEC_DEF; /* ns */
-module_param_cb(fusion_time, &sec_fusion_time_ops, &fusion_time, 0444);
-MODULE_PARM_DESC(fusion_time, "(0, NSEC_PER_SEC)");
-
-static const struct pci_device_id hisi_sec_dev_ids[] = {
+static const struct pci_device_id sec_dev_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, SEC_PF_PCI_DEVICE_ID) },
{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, SEC_VF_PCI_DEVICE_ID) },
{ 0, }
};
-MODULE_DEVICE_TABLE(pci, hisi_sec_dev_ids);
+MODULE_DEVICE_TABLE(pci, sec_dev_ids);
static u8 sec_get_endian(struct hisi_qm *qm)
{
@@ -390,9 +301,9 @@ static int sec_engine_init(struct hisi_qm *qm)
writel_relaxed(reg, SEC_ADDR(qm, SEC_INTERFACE_USER_CTRL1_REG));
writel(SEC_SINGLE_PORT_MAX_TRANS,
- qm->io_base + SEC_AM_CFG_SIG_PORT_MAX_TRANS);
+ qm->io_base + AM_CFG_SINGLE_PORT_MAX_TRANS);
- writel(SEC_SAA_EN, SEC_ADDR(qm, SEC_SAA_EN_REG));
+ writel(SEC_SAA_ENABLE, SEC_ADDR(qm, SEC_SAA_EN_REG));
/* Enable sm4 extra mode, as ctr/ecb */
writel_relaxed(SEC_BD_ERR_CHK_EN0,
@@ -436,6 +347,7 @@ static int sec_set_user_domain_and_cache(struct hisi_qm *qm)
return sec_engine_init(qm);
}
+/* sec_debug_regs_clear() - clear the sec debug regs */
static void sec_debug_regs_clear(struct hisi_qm *qm)
{
/* clear current_qm */
@@ -497,23 +409,16 @@ static void sec_hw_error_disable(struct hisi_qm *qm)
writel(val, SEC_ADDR(qm, SEC_CONTROL_REG));
}
-static inline struct hisi_qm *file_to_qm(struct ctrl_debug_file *file)
+static u32 sec_current_qm_read(struct sec_debug_file *file)
{
- struct hisi_sec *hisi_sec = file->ctrl->hisi_sec;
-
- return &hisi_sec->qm;
-}
-
-static u32 current_qm_read(struct ctrl_debug_file *file)
-{
- struct hisi_qm *qm = file_to_qm(file);
+ struct hisi_qm *qm = file->qm;
return readl(qm->io_base + QM_DFX_MB_CNT_VF);
}
-static int current_qm_write(struct ctrl_debug_file *file, u32 val)
+static int sec_current_qm_write(struct sec_debug_file *file, u32 val)
{
- struct hisi_qm *qm = file_to_qm(file);
+ struct hisi_qm *qm = file->qm;
u32 vfq_num;
u32 tmp;
@@ -521,17 +426,17 @@ static int current_qm_write(struct ctrl_debug_file *file, u32 val)
return -EINVAL;
/* According PF or VF Dev ID to calculation curr_qm_qp_num and store */
- if (val == 0) {
+ if (!val) {
qm->debug.curr_qm_qp_num = qm->qp_num;
} else {
vfq_num = (qm->ctrl_q_num - qm->qp_num) / qm->vfs_num;
- if (val == qm->vfs_num) {
+
+ if (val == qm->vfs_num)
qm->debug.curr_qm_qp_num =
qm->ctrl_q_num - qm->qp_num -
(qm->vfs_num - 1) * vfq_num;
- } else {
+ else
qm->debug.curr_qm_qp_num = vfq_num;
- }
}
writel(val, qm->io_base + QM_DFX_MB_CNT_VF);
@@ -548,33 +453,33 @@ static int current_qm_write(struct ctrl_debug_file *file, u32 val)
return 0;
}
-static u32 clear_enable_read(struct ctrl_debug_file *file)
+static u32 sec_clear_enable_read(struct sec_debug_file *file)
{
- struct hisi_qm *qm = file_to_qm(file);
+ struct hisi_qm *qm = file->qm;
return readl(qm->io_base + SEC_CTRL_CNT_CLR_CE) &
SEC_CTRL_CNT_CLR_CE_BIT;
}
-static int clear_enable_write(struct ctrl_debug_file *file, u32 val)
+static int sec_clear_enable_write(struct sec_debug_file *file, u32 val)
{
- struct hisi_qm *qm = file_to_qm(file);
+ struct hisi_qm *qm = file->qm;
u32 tmp;
if (val != 1 && val)
return -EINVAL;
tmp = (readl(qm->io_base + SEC_CTRL_CNT_CLR_CE) &
- ~SEC_CTRL_CNT_CLR_CE_BIT) | val;
+ ~SEC_CTRL_CNT_CLR_CE_BIT) | val;
writel(tmp, qm->io_base + SEC_CTRL_CNT_CLR_CE);
return 0;
}
-static ssize_t ctrl_debug_read(struct file *filp, char __user *buf,
+static ssize_t sec_debug_read(struct file *filp, char __user *buf,
size_t count, loff_t *pos)
{
- struct ctrl_debug_file *file = filp->private_data;
+ struct sec_debug_file *file = filp->private_data;
char tbuf[SEC_DBGFS_VAL_MAX_LEN];
u32 val;
int ret;
@@ -583,10 +488,10 @@ static ssize_t ctrl_debug_read(struct file *filp, char __user *buf,
switch (file->index) {
case SEC_CURRENT_QM:
- val = current_qm_read(file);
+ val = sec_current_qm_read(file);
break;
case SEC_CLEAR_ENABLE:
- val = clear_enable_read(file);
+ val = sec_clear_enable_read(file);
break;
default:
spin_unlock_irq(&file->lock);
@@ -599,10 +504,10 @@ static ssize_t ctrl_debug_read(struct file *filp, char __user *buf,
return simple_read_from_buffer(buf, count, pos, tbuf, ret);
}
-static ssize_t ctrl_debug_write(struct file *filp, const char __user *buf,
- size_t count, loff_t *pos)
+static ssize_t sec_debug_write(struct file *filp, const char __user *buf,
+ size_t count, loff_t *pos)
{
- struct ctrl_debug_file *file = filp->private_data;
+ struct sec_debug_file *file = filp->private_data;
char tbuf[SEC_DBGFS_VAL_MAX_LEN];
unsigned long val;
int len, ret;
@@ -626,12 +531,12 @@ static ssize_t ctrl_debug_write(struct file *filp, const char __user *buf,
switch (file->index) {
case SEC_CURRENT_QM:
- ret = current_qm_write(file, val);
+ ret = sec_current_qm_write(file, val);
if (ret)
goto err_input;
break;
case SEC_CLEAR_ENABLE:
- ret = clear_enable_write(file, val);
+ ret = sec_clear_enable_write(file, val);
if (ret)
goto err_input;
break;
@@ -649,30 +554,30 @@ static ssize_t ctrl_debug_write(struct file *filp, const char __user *buf,
return ret;
}
-static const struct file_operations ctrl_debug_fops = {
+static const struct file_operations sec_dbg_fops = {
.owner = THIS_MODULE,
.open = simple_open,
- .read = ctrl_debug_read,
- .write = ctrl_debug_write,
+ .read = sec_debug_read,
+ .write = sec_debug_write,
};
-static int hisi_sec_core_debug_init(struct hisi_qm *qm)
+static int sec_debugfs_atomic64_get(void *data, u64 *val)
{
- struct hisi_sec *sec = container_of(qm, struct hisi_sec, qm);
+ *val = atomic64_read((atomic64_t *)data);
+ return 0;
+}
+DEFINE_DEBUGFS_ATTRIBUTE(sec_atomic64_ops, sec_debugfs_atomic64_get,
+ NULL, "%lld\n");
+
+static int sec_core_debug_init(struct hisi_qm *qm)
+{
+ struct sec_dev *sec = container_of(qm, struct sec_dev, qm);
struct device *dev = &qm->pdev->dev;
- struct hisi_sec_dfx *dfx = &sec->sec_dfx;
+ struct sec_dfx *dfx = &sec->debug.dfx;
struct debugfs_regset32 *regset;
- struct dentry *tmp_d, *tmp;
- char buf[SEC_DBGFS_VAL_MAX_LEN];
- int ret;
+ struct dentry *tmp_d;
- ret = snprintf(buf, SEC_DBGFS_VAL_MAX_LEN, "sec_dfx");
- if (ret < 0)
- return -ENOENT;
-
- tmp_d = debugfs_create_dir(buf, qm->debug.debug_root);
- if (!tmp_d)
- return -ENOENT;
+ tmp_d = debugfs_create_dir("sec_dfx", qm->debug.debug_root);
regset = devm_kzalloc(dev, sizeof(*regset), GFP_KERNEL);
if (!regset)
@@ -682,123 +587,69 @@ static int hisi_sec_core_debug_init(struct hisi_qm *qm)
regset->nregs = ARRAY_SIZE(sec_dfx_regs);
regset->base = qm->io_base;
- tmp = debugfs_create_regset32("regs", 0444, tmp_d, regset);
- if (!tmp)
- return -ENOENT;
-
- tmp = debugfs_create_u64("send_cnt", 0444, tmp_d, &dfx->send_cnt);
- if (!tmp)
- return -ENOENT;
+ debugfs_create_regset32("regs", 0444, tmp_d, regset);
- tmp = debugfs_create_u64("send_by_tmout", 0444, tmp_d,
- &dfx->send_by_tmout);
- if (!tmp)
- return -ENOENT;
-
- tmp = debugfs_create_u64("send_by_full", 0444, tmp_d,
- &dfx->send_by_full);
- if (!tmp)
- return -ENOENT;
-
- tmp = debugfs_create_u64("recv_cnt", 0444, tmp_d, &dfx->recv_cnt);
- if (!tmp)
- return -ENOENT;
-
- tmp = debugfs_create_u64("get_task_cnt", 0444, tmp_d,
- &dfx->get_task_cnt);
- if (!tmp)
- return -ENOENT;
-
- tmp = debugfs_create_u64("put_task_cnt", 0444, tmp_d,
- &dfx->put_task_cnt);
- if (!tmp)
- return -ENOENT;
-
- tmp = debugfs_create_u64("gran_task_cnt", 0444, tmp_d,
- &dfx->gran_task_cnt);
- if (!tmp)
- return -ENOENT;
-
- tmp = debugfs_create_u64("thread_cnt", 0444, tmp_d, &dfx->thread_cnt);
- if (!tmp)
- return -ENOENT;
-
- tmp = debugfs_create_u64("fake_busy_cnt", 0444,
- tmp_d, &dfx->fake_busy_cnt);
- if (!tmp)
- return -ENOENT;
+ debugfs_create_file("send_cnt", 0444, tmp_d,
+ &dfx->send_cnt, &sec_atomic64_ops);
- tmp = debugfs_create_u64("busy_comp_cnt", 0444, tmp_d,
- &dfx->busy_comp_cnt);
- if (!tmp)
- return -ENOENT;
+ debugfs_create_file("recv_cnt", 0444, tmp_d,
+ &dfx->recv_cnt, &sec_atomic64_ops);
return 0;
}
-static int hisi_sec_ctrl_debug_init(struct hisi_qm *qm)
+static int sec_debug_init(struct hisi_qm *qm)
{
- struct hisi_sec *sec = container_of(qm, struct hisi_sec, qm);
- struct dentry *tmp;
+ struct sec_dev *sec = container_of(qm, struct sec_dev, qm);
int i;
for (i = SEC_CURRENT_QM; i < SEC_DEBUG_FILE_NUM; i++) {
- spin_lock_init(&sec->ctrl->files[i].lock);
- sec->ctrl->files[i].ctrl = sec->ctrl;
- sec->ctrl->files[i].index = i;
+ spin_lock_init(&sec->debug.files[i].lock);
+ sec->debug.files[i].index = i;
+ sec->debug.files[i].qm = qm;
- tmp = debugfs_create_file(ctrl_debug_file_name[i], 0600,
+ debugfs_create_file(sec_dbg_file_name[i], 0600,
qm->debug.debug_root,
- sec->ctrl->files + i,
- &ctrl_debug_fops);
- if (!tmp)
- return -ENOENT;
+ sec->debug.files + i,
+ &sec_dbg_fops);
}
- return hisi_sec_core_debug_init(qm);
+ return sec_core_debug_init(qm);
}
-static int hisi_sec_debugfs_init(struct hisi_qm *qm)
+static int sec_debugfs_init(struct hisi_qm *qm)
{
struct device *dev = &qm->pdev->dev;
- struct dentry *dev_d;
int ret;
- dev_d = debugfs_create_dir(dev_name(dev), sec_debugfs_root);
- if (!dev_d)
- return -ENOENT;
-
- qm->debug.debug_root = dev_d;
+ qm->debug.debug_root = debugfs_create_dir(dev_name(dev),
+ sec_debugfs_root);
ret = hisi_qm_debug_init(qm);
if (ret)
goto failed_to_create;
if (qm->pdev->device == SEC_PF_PCI_DEVICE_ID) {
- ret = hisi_sec_ctrl_debug_init(qm);
+ ret = sec_debug_init(qm);
if (ret)
goto failed_to_create;
}
return 0;
- failed_to_create:
+failed_to_create:
debugfs_remove_recursive(sec_debugfs_root);
+
return ret;
}
-static void hisi_sec_debugfs_exit(struct hisi_qm *qm)
+static void sec_debugfs_exit(struct hisi_qm *qm)
{
debugfs_remove_recursive(qm->debug.debug_root);
-
- if (qm->fun_type == QM_HW_PF) {
- sec_debug_regs_clear(qm);
- qm->debug.curr_qm_qp_num = 0;
- }
}
static void sec_log_hw_error(struct hisi_qm *qm, u32 err_sts)
{
- const struct hisi_sec_hw_error *errs = sec_hw_error;
+ const struct sec_hw_error *errs = sec_hw_errors;
struct device *dev = &qm->pdev->dev;
u32 err_val;
@@ -809,7 +660,7 @@ static void sec_log_hw_error(struct hisi_qm *qm, u32 err_sts)
if (SEC_CORE_INT_STATUS_M_ECC & errs->int_msk) {
err_val = readl(qm->io_base +
- SEC_CORE_ECC_INFO);
+ SEC_CORE_SRAM_ECC_ERR_INFO);
dev_err(dev, "multi ecc sram num=0x%x\n",
SEC_ECC_NUM(err_val));
}
@@ -837,19 +688,10 @@ static void sec_open_axi_master_ooo(struct hisi_qm *qm)
writel(val | SEC_AXI_SHUTDOWN_ENABLE, SEC_ADDR(qm, SEC_CONTROL_REG));
}
-static int hisi_sec_pf_probe_init(struct hisi_qm *qm)
+static int sec_pf_probe_init(struct hisi_qm *qm)
{
- struct hisi_sec *hisi_sec = container_of(qm, struct hisi_sec, qm);
- struct hisi_sec_ctrl *ctrl;
int ret;
- ctrl = devm_kzalloc(&qm->pdev->dev, sizeof(*ctrl), GFP_KERNEL);
- if (!ctrl)
- return -ENOMEM;
-
- hisi_sec->ctrl = ctrl;
- ctrl->hisi_sec = hisi_sec;
-
switch (qm->ver) {
case QM_HW_V1:
qm->ctrl_q_num = SEC_QUEUE_NUM_V1;
@@ -868,7 +710,7 @@ static int hisi_sec_pf_probe_init(struct hisi_qm *qm)
qm->err_ini.err_info.ecc_2bits_mask = SEC_CORE_INT_STATUS_M_ECC;
qm->err_ini.err_info.ce = QM_BASE_CE;
qm->err_ini.err_info.nfe = QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT |
- QM_ACC_WB_NOT_READY_TIMEOUT;
+ QM_ACC_WB_NOT_READY_TIMEOUT;
qm->err_ini.err_info.fe = 0;
qm->err_ini.err_info.msi = QM_DB_RANDOM_INVALID;
qm->err_ini.err_info.acpi_rst = "SRST";
@@ -884,42 +726,32 @@ static int hisi_sec_pf_probe_init(struct hisi_qm *qm)
return ret;
hisi_qm_dev_err_init(qm);
- qm->err_ini.open_axi_master_ooo(qm);
sec_debug_regs_clear(qm);
return 0;
}
-static int hisi_sec_qm_pre_init(struct hisi_qm *qm, struct pci_dev *pdev)
+static int sec_probe_init(struct hisi_qm *qm)
{
int ret;
-#ifdef CONFIG_CRYPTO_QM_UACCE
- qm->algs = "sec\ncipher\ndigest\n";
- qm->uacce_mode = uacce_mode;
-#endif
- qm->pdev = pdev;
- ret = hisi_qm_pre_init(qm, pf_q_num, SEC_PF_DEF_Q_BASE);
- if (ret)
- return ret;
- qm->sqe_size = SEC_SQE_SIZE;
- qm->dev_name = sec_name;
- qm->qm_list = &sec_devices;
- qm->wq = sec_wq;
-
- return 0;
-}
+ qm->wq = alloc_workqueue("%s", WQ_HIGHPRI | WQ_CPU_INTENSIVE |
+ WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus(),
+ pci_name(qm->pdev));
+ if (!qm->wq) {
+ pci_err(qm->pdev, "fail to alloc workqueue\n");
+ return -ENOMEM;
+ }
-static int hisi_sec_probe_init(struct hisi_qm *qm)
-{
if (qm->fun_type == QM_HW_PF) {
- return hisi_sec_pf_probe_init(qm);
+ ret = sec_pf_probe_init(qm);
+ if (ret)
+ goto err_probe_uninit;
} else if (qm->fun_type == QM_HW_VF) {
/*
* have no way to get qm configure in VM in v1 hardware,
* so currently force PF to uses SEC_PF_DEF_Q_NUM, and force
* to trigger only one VF in v1 hardware.
- *
* v2 hardware has no such problem.
*/
if (qm->ver == QM_HW_V1) {
@@ -927,41 +759,92 @@ static int hisi_sec_probe_init(struct hisi_qm *qm)
qm->qp_num = SEC_QUEUE_NUM_V1 - SEC_PF_DEF_Q_NUM;
} else if (qm->ver == QM_HW_V2) {
/* v2 starts to support get vft by mailbox */
- return hisi_qm_get_vft(qm, &qm->qp_base, &qm->qp_num);
+ ret = hisi_qm_get_vft(qm, &qm->qp_base, &qm->qp_num);
+ if (ret)
+ goto err_probe_uninit;
}
+ } else {
+ ret = -ENODEV;
+ goto err_probe_uninit;
}
return 0;
+
+err_probe_uninit:
+ destroy_workqueue(qm->wq);
+ return ret;
+}
+
+static void sec_probe_uninit(struct hisi_qm *qm)
+{
+ if (qm->fun_type == QM_HW_PF)
+ hisi_qm_dev_err_uninit(qm);
+ destroy_workqueue(qm->wq);
+}
+
+static int sec_qm_pre_init(struct hisi_qm *qm, struct pci_dev *pdev)
+{
+ int ret;
+
+#ifdef CONFIG_CRYPTO_QM_UACCE
+ qm->algs = "sec\ncipher\ndigest\n";
+ qm->uacce_mode = uacce_mode;
+#endif
+ qm->pdev = pdev;
+ ret = hisi_qm_pre_init(qm, pf_q_num, SEC_PF_DEF_Q_BASE);
+ if (ret)
+ return ret;
+
+ qm->qm_list = &sec_devices;
+ qm->sqe_size = SEC_SQE_SIZE;
+ qm->dev_name = sec_name;
+
+ return 0;
+}
+
+static void sec_iommu_used_check(struct sec_dev *sec)
+{
+ struct iommu_domain *domain;
+ struct device *dev = &sec->qm.pdev->dev;
+
+ domain = iommu_get_domain_for_dev(dev);
+
+ /* Check if iommu is used */
+ sec->iommu_used = false;
+ if (domain) {
+ if (domain->type & __IOMMU_DOMAIN_PAGING)
+ sec->iommu_used = true;
+ dev_info(dev, "SMMU Opened! the iommu type:= %d!\n",
+ domain->type);
+ }
}
-static int hisi_sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+static int sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
- struct hisi_sec *hisi_sec;
+ struct sec_dev *sec;
struct hisi_qm *qm;
int ret;
- hisi_sec = devm_kzalloc(&pdev->dev, sizeof(*hisi_sec), GFP_KERNEL);
- if (!hisi_sec)
+ sec = devm_kzalloc(&pdev->dev, sizeof(*sec), GFP_KERNEL);
+ if (!sec)
return -ENOMEM;
- qm = &hisi_sec->qm;
+ qm = &sec->qm;
qm->fun_type = pdev->is_physfn ? QM_HW_PF : QM_HW_VF;
- ret = hisi_sec_qm_pre_init(qm, pdev);
+ ret = sec_qm_pre_init(qm, pdev);
if (ret)
return ret;
- hisi_sec->ctx_q_num = ctx_q_num;
- hisi_sec->fusion_limit = fusion_limit;
- hisi_sec->fusion_tmout_nsec = fusion_time;
-
+ sec->ctx_q_num = ctx_q_num;
+ sec_iommu_used_check(sec);
ret = hisi_qm_init(qm);
if (ret) {
pci_err(pdev, "Failed to init qm (%d)!\n", ret);
return ret;
}
- ret = hisi_sec_probe_init(qm);
+ ret = sec_probe_init(qm);
if (ret) {
pci_err(pdev, "Failed to probe init (%d)!\n", ret);
goto err_qm_uninit;
@@ -970,18 +853,18 @@ static int hisi_sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
ret = hisi_qm_start(qm);
if (ret) {
pci_err(pdev, "Failed to start qm (%d)!\n", ret);
- goto err_qm_uninit;
+ goto err_probe_uninit;
}
- ret = hisi_sec_debugfs_init(qm);
+ ret = sec_debugfs_init(qm);
if (ret)
pci_warn(pdev, "Failed to init debugfs (%d)!\n", ret);
hisi_qm_add_to_list(qm, &sec_devices);
- ret = hisi_sec_register_to_crypto(fusion_limit);
+ ret = sec_register_to_crypto();
if (ret < 0) {
- pci_err(pdev, "Failed to register driver to crypto!\n");
+ pr_err("Failed to register driver to crypto!\n");
goto err_remove_from_list;
}
@@ -994,121 +877,115 @@ static int hisi_sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return 0;
err_crypto_unregister:
- hisi_sec_unregister_from_crypto(fusion_limit);
+ sec_unregister_from_crypto();
err_remove_from_list:
hisi_qm_del_from_list(qm, &sec_devices);
- hisi_sec_debugfs_exit(qm);
+ sec_debugfs_exit(qm);
hisi_qm_stop(qm, QM_NORMAL);
+err_probe_uninit:
+ sec_probe_uninit(qm);
+
err_qm_uninit:
hisi_qm_uninit(qm);
return ret;
}
-static int hisi_sec_sriov_configure(struct pci_dev *pdev, int num_vfs)
+static int sec_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
- if (num_vfs == 0)
- return hisi_qm_sriov_disable(pdev, &sec_devices);
- else
+ if (num_vfs)
return hisi_qm_sriov_enable(pdev, num_vfs);
+ else
+ return hisi_qm_sriov_disable(pdev, &sec_devices);
}
-static void hisi_sec_remove(struct pci_dev *pdev)
+static void sec_remove(struct pci_dev *pdev)
{
struct hisi_qm *qm = pci_get_drvdata(pdev);
- if (uacce_mode != UACCE_MODE_NOUACCE)
- hisi_qm_remove_wait_delay(qm, &sec_devices);
+ hisi_qm_remove_wait_delay(qm, &sec_devices);
+
+ sec_unregister_from_crypto();
+
+ hisi_qm_del_from_list(qm, &sec_devices);
if (qm->fun_type == QM_HW_PF && qm->vfs_num)
(void)hisi_qm_sriov_disable(pdev, NULL);
- hisi_sec_unregister_from_crypto(fusion_limit);
+ sec_debugfs_exit(qm);
- hisi_qm_del_from_list(qm, &sec_devices);
- hisi_sec_debugfs_exit(qm);
(void)hisi_qm_stop(qm, QM_NORMAL);
if (qm->fun_type == QM_HW_PF)
- hisi_qm_dev_err_uninit(qm);
+ sec_debug_regs_clear(qm);
+
+ sec_probe_uninit(qm);
hisi_qm_uninit(qm);
}
-static const struct pci_error_handlers hisi_sec_err_handler = {
+static const struct pci_error_handlers sec_err_handler = {
.error_detected = hisi_qm_dev_err_detected,
- .slot_reset = hisi_qm_dev_slot_reset,
- .reset_prepare = hisi_qm_reset_prepare,
- .reset_done = hisi_qm_reset_done,
+ .slot_reset = hisi_qm_dev_slot_reset,
+ .reset_prepare = hisi_qm_reset_prepare,
+ .reset_done = hisi_qm_reset_done,
};
-static struct pci_driver hisi_sec_pci_driver = {
+static struct pci_driver sec_pci_driver = {
.name = "hisi_sec2",
- .id_table = hisi_sec_dev_ids,
- .probe = hisi_sec_probe,
- .remove = hisi_sec_remove,
- .sriov_configure = hisi_sec_sriov_configure,
- .err_handler = &hisi_sec_err_handler,
+ .id_table = sec_dev_ids,
+ .probe = sec_probe,
+ .remove = sec_remove,
+ .err_handler = &sec_err_handler,
+ .sriov_configure = sec_sriov_configure,
.shutdown = hisi_qm_dev_shutdown,
};
-static void hisi_sec_register_debugfs(void)
+static void sec_register_debugfs(void)
{
if (!debugfs_initialized())
return;
sec_debugfs_root = debugfs_create_dir("hisi_sec2", NULL);
- if (IS_ERR_OR_NULL(sec_debugfs_root))
- sec_debugfs_root = NULL;
}
-static void hisi_sec_unregister_debugfs(void)
+static void sec_unregister_debugfs(void)
{
debugfs_remove_recursive(sec_debugfs_root);
}
-static int __init hisi_sec_init(void)
+static int __init sec_init(void)
{
int ret;
- sec_wq = alloc_workqueue("hisi_sec2", WQ_HIGHPRI | WQ_CPU_INTENSIVE |
- WQ_MEM_RECLAIM | WQ_UNBOUND, num_online_cpus());
-
- if (!sec_wq) {
- pr_err("Fallied to alloc workqueue\n");
- return -ENOMEM;
- }
-
INIT_LIST_HEAD(&sec_devices.list);
mutex_init(&sec_devices.lock);
sec_devices.check = NULL;
+ sec_register_debugfs();
- hisi_sec_register_debugfs();
-
- ret = pci_register_driver(&hisi_sec_pci_driver);
+ ret = pci_register_driver(&sec_pci_driver);
if (ret < 0) {
- hisi_sec_unregister_debugfs();
- if (sec_wq)
- destroy_workqueue(sec_wq);
+ sec_unregister_debugfs();
pr_err("Failed to register pci driver.\n");
+ return ret;
}
- return ret;
+ return 0;
}
-static void __exit hisi_sec_exit(void)
+static void __exit sec_exit(void)
{
- pci_unregister_driver(&hisi_sec_pci_driver);
- hisi_sec_unregister_debugfs();
- if (sec_wq)
- destroy_workqueue(sec_wq);
+ pci_unregister_driver(&sec_pci_driver);
+ sec_unregister_debugfs();
}
-module_init(hisi_sec_init);
-module_exit(hisi_sec_exit);
+module_init(sec_init);
+module_exit(sec_exit);
MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Zaibo Xu <xuzaibo(a)huawei.com>");
+MODULE_AUTHOR("Longfang Liu <liulongfang(a)huawei.com>");
MODULE_AUTHOR("Zhang Wei <zhangwei375(a)huawei.com>");
MODULE_DESCRIPTION("Driver for HiSilicon SEC accelerator");
diff --git a/drivers/crypto/hisilicon/sec2/sec_usr_if.h b/drivers/crypto/hisilicon/sec2/sec_usr_if.h
deleted file mode 100644
index 7c76e19..00000000
--- a/drivers/crypto/hisilicon/sec2/sec_usr_if.h
+++ /dev/null
@@ -1,179 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0+ */
-/* Copyright (c) 2018-2019 HiSilicon Limited. */
-
-#ifndef HISI_SEC_USR_IF_H
-#define HISI_SEC_USR_IF_H
-
-struct hisi_sec_sqe_type1 {
- __u32 rsvd2:6;
- __u32 ci_gen:2;
- __u32 ai_gen:2;
- __u32 rsvd1:7;
- __u32 c_key_type:2;
- __u32 a_key_type:2;
- __u32 rsvd0:10;
- __u32 inveld:1;
-
- __u32 mac_len:6;
- __u32 a_key_len:5;
- __u32 a_alg:6;
- __u32 rsvd3:15;
- __u32 c_icv_len:6;
- __u32 c_width:3;
- __u32 c_key_len:3;
- __u32 c_mode:4;
- __u32 c_alg:4;
- __u32 rsvd4:12;
- __u32 auth_gran_size:24;
- __u32:8;
- __u32 cipher_gran_size:24;
- __u32:8;
- __u32 auth_src_offset:16;
- __u32 cipher_src_offset:16;
- __u32 gran_num:16;
- __u32 rsvd5:16;
- __u32 src_skip_data_len:24;
- __u32 rsvd6:8;
- __u32 dst_skip_data_len:24;
- __u32 rsvd7:8;
- __u32 tag:16;
- __u32 rsvd8:16;
- __u32 gen_page_pad_ctrl:4;
- __u32 gen_grd_ctrl:4;
- __u32 gen_ver_ctrl:4;
- __u32 gen_app_ctrl:4;
- __u32 gen_ver_val:8;
- __u32 gen_app_val:8;
- __u32 private_info;
- __u32 gen_ref_ctrl:4;
- __u32 page_pad_type:2;
- __u32 rsvd9:2;
- __u32 chk_grd_ctrl:4;
- __u32 chk_ref_ctrl:4;
- __u32 block_size:16;
- __u32 lba_l;
- __u32 lba_h;
- __u32 a_key_addr_l;
- __u32 a_key_addr_h;
- __u32 mac_addr_l;
- __u32 mac_addr_h;
- __u32 c_ivin_addr_l;
- __u32 c_ivin_addr_h;
- __u32 c_key_addr_l;
- __u32 c_key_addr_h;
- __u32 data_src_addr_l;
- __u32 data_src_addr_h;
- __u32 data_dst_addr_l;
- __u32 data_dst_addr_h;
- __u32 done:1;
- __u32 icv:3;
- __u32 rsvd11:3;
- __u32 flag:4;
- __u32 dif_check:3;
- __u32 rsvd10:2;
- __u32 error_type:8;
- __u32 warning_type:8;
- __u32 dw29;
- __u32 dw30;
- __u32 dw31;
-};
-
-struct hisi_sec_sqe_type2 {
- __u32 nonce_len:4;
- __u32 huk:1;
- __u32 key_s:1;
- __u32 ci_gen:2;
- __u32 ai_gen:2;
- __u32 a_pad:2;
- __u32 c_s:2;
- __u32 rsvd1:2;
- __u32 rhf:1;
- __u32 c_key_type:2;
- __u32 a_key_type:2;
- __u32 write_frame_len:3;
- __u32 cal_iv_addr_en:1;
- __u32 tls_up:1;
- __u32 rsvd0:5;
- __u32 inveld:1;
- __u32 mac_len:5;
- __u32 a_key_len:6;
- __u32 a_alg:6;
- __u32 rsvd3:15;
- __u32 c_icv_len:6;
- __u32 c_width:3;
- __u32 c_key_len:3;
- __u32 c_mode:4;
- __u32 c_alg:4;
- __u32 rsvd4:12;
- __u32 a_len:24;
- __u32 iv_offset_l:8;
- __u32 c_len:24;
- __u32 iv_offset_h:8;
- __u32 auth_src_offset:16;
- __u32 cipher_src_offset:16;
- __u32 cs_ip_header_offset:16;
- __u32 cs_udp_header_offset:16;
- __u32 pass_word_len:16;
- __u32 dk_len:16;
- __u32 salt3:8;
- __u32 salt2:8;
- __u32 salt1:8;
- __u32 salt0:8;
- __u32 tag:16;
- __u32 rsvd5:16;
- __u32 c_pad_type:4;
- __u32 c_pad_len:8;
- __u32 c_pad_data_type:4;
- __u32 c_pad_len_field:2;
- __u32 rsvd6:14;
- __u32 long_a_data_len_l;
- __u32 long_a_data_len_h;
- __u32 a_ivin_addr_l;
- __u32 a_ivin_addr_h;
- __u32 a_key_addr_l;
- __u32 a_key_addr_h;
- __u32 mac_addr_l;
- __u32 mac_addr_h;
- __u32 c_ivin_addr_l;
- __u32 c_ivin_addr_h;
- __u32 c_key_addr_l;
- __u32 c_key_addr_h;
- __u32 data_src_addr_l;
- __u32 data_src_addr_h;
- __u32 data_dst_addr_l;
- __u32 data_dst_addr_h;
- __u32 done:1;
- __u32 icv:3;
- __u32 rsvd11:3;
- __u32 flag:4;
- __u32 rsvd10:5;
- __u32 error_type:8;
- __u32 warning_type:8;
- __u32 mac_i3:8;
- __u32 mac_i2:8;
- __u32 mac_i1:8;
- __u32 mac_i0:8;
- __u32 check_sum_i:16;
- __u32 tls_pad_len_i:8;
- __u32 rsvd12:8;
- __u32 counter;
-};
-
-struct hisi_sec_sqe {
- __u32 type:4;
- __u32 cipher:2;
- __u32 auth:2;
- __u32 seq:1;
- __u32 de:2;
- __u32 scene:4;
- __u32 src_addr_type:3;
- __u32 dst_addr_type:3;
- __u32 mac_addr_type:3;
- __u32 rsvd0:8;
- union {
- struct hisi_sec_sqe_type1 type1;
- struct hisi_sec_sqe_type2 type2;
- };
-};
-
-#endif
--
1.8.3
[PATCH] arm64: kprobes: Recover pstate.D in single-step exception handler
by Yang Yingliang 17 Apr '20
From: Masami Hiramatsu <mhiramat(a)kernel.org>
mainline inclusion
from mainline-5.3-rc3
commit b3980e48528c
category: bugfix
bugzilla: 20080
CVE: NA
-------------------------------------------------
kprobes manipulates the interrupted PSTATE for single-step and
does not restore it. Thus, if we put a kprobe where pstate.D
(debug) is masked, the mask is cleared after the kprobe hits.
Moreover, in the most complicated case, this can lead to a kernel
crash with the message below when a nested kprobe hits.
[ 152.118921] Unexpected kernel single-step exception at EL1
When the 1st kprobe hits, do_debug_exception() is called.
At this point, the debug exception (= pstate.D) must be masked (= 1).
But if another kprobe hits before the single-step of the first kprobe
(e.g. inside a user pre_handler), it unmasks the debug exception
(pstate.D = 0) and returns.
Then, when the 1st kprobe sets up single-step, it saves the current
DAIF, masks DAIF, enables single-step, and restores DAIF.
However, since the "D" flag in DAIF was cleared by the 2nd kprobe, a
single-step exception fires soon after DAIF is restored.
This was introduced by commit 7419333fa15e ("arm64: kprobe:
Always clear pstate.D in breakpoint exception handler").
To solve this issue, this patch stores all DAIF bits and restores
them after single-stepping.
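For readers, the save/restore logic can be modelled outside the kernel.
The following is a minimal standalone sketch (hypothetical demo code,
not the kernel implementation; only the PSR_*_BIT and DAIF_MASK values
mirror arm64's definitions) showing why restoring only the I bit loses
the caller's PSTATE.D, while saving and restoring the full DAIF set
preserves it:

#include <assert.h>
#include <stdio.h>

#define PSR_F_BIT (1u << 6)
#define PSR_I_BIT (1u << 7)
#define PSR_A_BIT (1u << 8)
#define PSR_D_BIT (1u << 9)
#define DAIF_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)

/* Old behaviour: only the I bit was saved and put back. */
static void restore_i_only(unsigned int *pstate, unsigned int saved)
{
	if (saved & PSR_I_BIT)
		*pstate |= PSR_I_BIT;
	else
		*pstate &= ~PSR_I_BIT;
}

/* New behaviour: every DAIF bit is saved and put back. */
static void restore_daif(unsigned int *pstate, unsigned int saved)
{
	*pstate &= ~DAIF_MASK;
	*pstate |= saved & DAIF_MASK;
}

int main(void)
{
	/* Entered from a debug exception, so PSTATE.D starts masked. */
	unsigned int pstate = PSR_D_BIT | PSR_I_BIT;
	unsigned int saved = pstate & DAIF_MASK;

	/* Single-step setup: mask IRQs, unmask debug exceptions. */
	pstate |= PSR_I_BIT;
	pstate &= ~PSR_D_BIT;

	unsigned int old_way = pstate, new_way = pstate;

	restore_i_only(&old_way, saved);
	restore_daif(&new_way, saved);

	assert(!(old_way & PSR_D_BIT));	/* D bit lost: spurious step */
	assert(new_way & PSR_D_BIT);	/* D bit recovered by the fix */
	printf("old=%#x new=%#x\n", old_way, new_way);
	return 0;
}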
Reported-by: Naresh Kamboju <naresh.kamboju(a)linaro.org>
Fixes: 7419333fa15e ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler")
Reviewed-by: James Morse <james.morse(a)arm.com>
Tested-by: James Morse <james.morse(a)arm.com>
Signed-off-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Will Deacon <will(a)kernel.org>
Signed-off-by: Wei Li <liwei391(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/include/asm/daifflags.h | 1 +
arch/arm64/kernel/probes/kprobes.c | 40 ++++++--------------------------------
2 files changed, 7 insertions(+), 34 deletions(-)
diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 3441ca0..1230923 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -24,6 +24,7 @@
#define DAIF_PROCCTX 0
#define DAIF_PROCCTX_NOIRQ PSR_I_BIT
+#define DAIF_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
/* mask/save/unmask/restore all exceptions, including interrupts. */
static inline void local_daif_mask(void)
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index 2d63df1..fe9d207 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -30,6 +30,7 @@
#include <asm/ptrace.h>
#include <asm/cacheflush.h>
#include <asm/debug-monitors.h>
+#include <asm/daifflags.h>
#include <asm/system_misc.h>
#include <asm/insn.h>
#include <linux/uaccess.h>
@@ -180,33 +181,6 @@ static void __kprobes set_current_kprobe(struct kprobe *p)
}
/*
- * When PSTATE.D is set (masked), then software step exceptions can not be
- * generated.
- * SPSR's D bit shows the value of PSTATE.D immediately before the
- * exception was taken. PSTATE.D is set while entering into any exception
- * mode, however software clears it for any normal (none-debug-exception)
- * mode in the exception entry. Therefore, when we are entering into kprobe
- * breakpoint handler from any normal mode then SPSR.D bit is already
- * cleared, however it is set when we are entering from any debug exception
- * mode.
- * Since we always need to generate single step exception after a kprobe
- * breakpoint exception therefore we need to clear it unconditionally, when
- * we become sure that the current breakpoint exception is for kprobe.
- */
-static void __kprobes
-spsr_set_debug_flag(struct pt_regs *regs, int mask)
-{
- unsigned long spsr = regs->pstate;
-
- if (mask)
- spsr |= PSR_D_BIT;
- else
- spsr &= ~PSR_D_BIT;
-
- regs->pstate = spsr;
-}
-
-/*
* Interrupts need to be disabled before single-step mode is set, and not
* reenabled until after single-step mode ends.
* Without disabling interrupt on local CPU, there is a chance of
@@ -217,17 +191,17 @@ static void __kprobes set_current_kprobe(struct kprobe *p)
static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
struct pt_regs *regs)
{
- kcb->saved_irqflag = regs->pstate;
+ kcb->saved_irqflag = regs->pstate & DAIF_MASK;
regs->pstate |= PSR_I_BIT;
+ /* Unmask PSTATE.D for enabling software step exceptions. */
+ regs->pstate &= ~PSR_D_BIT;
}
static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
struct pt_regs *regs)
{
- if (kcb->saved_irqflag & PSR_I_BIT)
- regs->pstate |= PSR_I_BIT;
- else
- regs->pstate &= ~PSR_I_BIT;
+ regs->pstate &= ~DAIF_MASK;
+ regs->pstate |= kcb->saved_irqflag;
}
static void __kprobes
@@ -264,8 +238,6 @@ static void __kprobes setup_singlestep(struct kprobe *p,
set_ss_context(kcb, slot); /* mark pending ss */
- spsr_set_debug_flag(regs, 0);
-
/* IRQs and single stepping do not mix well. */
kprobes_save_local_irqflag(kcb, regs);
kernel_enable_single_step(regs);
--
1.8.3
[PATCH 1/2] qm: optimize the maximum number of VF and delete invalid addr
by Yang Yingliang 17 Apr '20
From: Yu'an Wang <wangyuan46(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
In this patch, we optimize the way the maximum number of VFs is set,
which is designed for compatibility with the next hardware standards.
We also remove an invalid address parameter definition and assignment.
Meanwhile, the return-code checks of the debugfs-related functions are
deleted, because they do not affect the main functionality of the
driver.
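To make the VF-count change concrete, here is a minimal sketch of the
new clamping logic (example_sriov_enable is a made-up name, and the
real driver also assigns queues to each VF before enabling them):
instead of comparing against a fixed QM_MAX_VFS_NUM constant, the PCI
core is asked how many VFs the SR-IOV capability actually advertises.

#include <linux/kernel.h>
#include <linux/pci.h>

static int example_sriov_enable(struct pci_dev *pdev, int max_vfs)
{
	/* Read TotalVFs from the device's SR-IOV capability. */
	int total_vfs = pci_sriov_get_totalvfs(pdev);
	int num_vfs = min_t(int, max_vfs, total_vfs);

	return pci_enable_sriov(pdev, num_vfs);
}

This way the limit tracks whatever the hardware reports, so a future
device with a different VF count needs no driver change.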
Signed-off-by: Yu'an Wang <wangyuan46(a)huawei.com>
Reviewed-by: Cheng Hu <hucheng.hu(a)huawei.com>
Reviewed-by: Guangwei Zhou <zhouguangwei5(a)huawei.com>
Reviewed-by: Junxian Liu <liujunxian3(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/crypto/hisilicon/hpre/hpre.h | 1 -
drivers/crypto/hisilicon/hpre/hpre_main.c | 54 ++++++++++++++-----------------
drivers/crypto/hisilicon/qm.c | 22 ++++---------
drivers/crypto/hisilicon/qm.h | 6 ++--
drivers/crypto/hisilicon/rde/rde_main.c | 2 ++
drivers/crypto/hisilicon/zip/zip_main.c | 2 ++
6 files changed, 38 insertions(+), 49 deletions(-)
diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h
index 3ac02ef..42b2f2a 100644
--- a/drivers/crypto/hisilicon/hpre/hpre.h
+++ b/drivers/crypto/hisilicon/hpre/hpre.h
@@ -18,7 +18,6 @@ enum {
HPRE_CLUSTERS_NUM,
};
-
enum hpre_ctrl_dbgfs_file {
HPRE_CURRENT_QM,
HPRE_CLEAR_ENABLE,
diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
index 4dc0d3e..f727158 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
@@ -435,8 +435,7 @@ static int hpre_current_qm_write(struct hpre_debugfs_file *file, u32 val)
vfq_num = (qm->ctrl_q_num - qm->qp_num) / num_vfs;
if (val == num_vfs) {
qm->debug.curr_qm_qp_num =
- qm->ctrl_q_num - qm->qp_num -
- (num_vfs - 1) * vfq_num;
+ qm->ctrl_q_num - qm->qp_num - (num_vfs - 1) * vfq_num;
} else {
qm->debug.curr_qm_qp_num = vfq_num;
}
@@ -592,7 +591,7 @@ static ssize_t hpre_ctrl_debug_write(struct file *filp, const char __user *buf,
static int hpre_create_debugfs_file(struct hpre_debug *dbg, struct dentry *dir,
enum hpre_ctrl_dbgfs_file type, int indx)
{
- struct dentry *tmp, *file_dir;
+ struct dentry *file_dir;
struct hpre *hpre;
if (dir) {
@@ -609,10 +608,9 @@ static int hpre_create_debugfs_file(struct hpre_debug *dbg, struct dentry *dir,
dbg->files[indx].debug = dbg;
dbg->files[indx].type = type;
dbg->files[indx].index = indx;
- tmp = debugfs_create_file(hpre_debug_file_name[type], 0600, file_dir,
- dbg->files + indx, &hpre_ctrl_debug_fops);
- if (!tmp)
- return -ENOENT;
+
+ debugfs_create_file(hpre_debug_file_name[type], 0600, file_dir,
+ dbg->files + indx, &hpre_ctrl_debug_fops);
return 0;
}
@@ -623,7 +621,6 @@ static int hpre_pf_comm_regs_debugfs_init(struct hpre_debug *debug)
struct hisi_qm *qm = &hpre->qm;
struct device *dev = &qm->pdev->dev;
struct debugfs_regset32 *regset;
- struct dentry *tmp;
regset = devm_kzalloc(dev, sizeof(*regset), GFP_KERNEL);
if (!regset)
@@ -633,11 +630,7 @@ static int hpre_pf_comm_regs_debugfs_init(struct hpre_debug *debug)
regset->nregs = ARRAY_SIZE(hpre_com_dfx_regs);
regset->base = qm->io_base;
- tmp = debugfs_create_regset32("regs", 0444, qm->debug.debug_root,
- regset);
- if (!tmp)
- return -ENOENT;
-
+ debugfs_create_regset32("regs", 0444, qm->debug.debug_root, regset);
return 0;
}
@@ -648,7 +641,7 @@ static int hpre_cluster_debugfs_init(struct hpre_debug *debug)
struct device *dev = &qm->pdev->dev;
char buf[HPRE_DBGFS_VAL_MAX_LEN];
struct debugfs_regset32 *regset;
- struct dentry *tmp_d, *tmp;
+ struct dentry *tmp_d;
int i, ret;
for (i = 0; i < HPRE_CLUSTERS_NUM; i++) {
@@ -657,8 +650,6 @@ static int hpre_cluster_debugfs_init(struct hpre_debug *debug)
return -EINVAL;
tmp_d = debugfs_create_dir(buf, qm->debug.debug_root);
- if (!tmp_d)
- return -ENOENT;
regset = devm_kzalloc(dev, sizeof(*regset), GFP_KERNEL);
if (!regset)
@@ -668,9 +659,8 @@ static int hpre_cluster_debugfs_init(struct hpre_debug *debug)
regset->nregs = ARRAY_SIZE(hpre_cluster_dfx_regs);
regset->base = qm->io_base + hpre_cluster_offsets[i];
- tmp = debugfs_create_regset32("regs", 0444, tmp_d, regset);
- if (!tmp)
- return -ENOENT;
+ debugfs_create_regset32("regs", 0444, tmp_d, regset);
+
ret = hpre_create_debugfs_file(debug, tmp_d, HPRE_CLUSTER_CTRL,
i + HPRE_CLUSTER_CTRL);
if (ret)
@@ -705,14 +695,10 @@ static int hpre_debugfs_init(struct hisi_qm *qm)
{
struct hpre *hpre = container_of(qm, struct hpre, qm);
struct device *dev = &qm->pdev->dev;
- struct dentry *dir;
int ret;
- dir = debugfs_create_dir(dev_name(dev), hpre_debugfs_root);
- if (!dir)
- return -ENOENT;
-
- qm->debug.debug_root = dir;
+ qm->debug.debug_root = debugfs_create_dir(dev_name(dev),
+ hpre_debugfs_root);
ret = hisi_qm_debug_init(qm);
if (ret)
@@ -730,6 +716,11 @@ static int hpre_debugfs_init(struct hisi_qm *qm)
return ret;
}
+static void hpre_debugfs_exit(struct hisi_qm *qm)
+{
+ debugfs_remove_recursive(qm->debug.debug_root);
+}
+
static int hpre_qm_pre_init(struct hisi_qm *qm, struct pci_dev *pdev)
{
int ret;
@@ -929,7 +920,8 @@ static void hpre_remove(struct pci_dev *pdev)
hpre_cnt_regs_clear(qm);
qm->debug.curr_qm_qp_num = 0;
}
- debugfs_remove_recursive(qm->debug.debug_root);
+
+ hpre_debugfs_exit(qm);
hisi_qm_stop(qm, QM_NORMAL);
if (qm->fun_type == QM_HW_PF)
@@ -967,19 +959,23 @@ static void hpre_register_debugfs(void)
hpre_debugfs_root = NULL;
}
+static void hpre_unregister_debugfs(void)
+{
+ debugfs_remove_recursive(hpre_debugfs_root);
+}
+
static int __init hpre_init(void)
{
int ret;
INIT_LIST_HEAD(&hpre_devices.list);
mutex_init(&hpre_devices.lock);
- hpre_devices.check = NULL;
hpre_register_debugfs();
ret = pci_register_driver(&hpre_pci_driver);
if (ret) {
- debugfs_remove_recursive(hpre_debugfs_root);
+ hpre_unregister_debugfs();
pr_err("hpre: can't register hisi hpre driver.\n");
}
@@ -989,7 +985,7 @@ static int __init hpre_init(void)
static void __exit hpre_exit(void)
{
pci_unregister_driver(&hpre_pci_driver);
- debugfs_remove_recursive(hpre_debugfs_root);
+ hpre_unregister_debugfs();
}
module_init(hpre_init);
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index 6a5337a..d3429e7 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -1428,6 +1428,7 @@ static int qm_cq_ctx_cfg(struct hisi_qp *qp, int qp_id, int pasid)
qp->c_flag << QM_CQ_FLAG_SHIFT);
ret = qm_mb(qm, QM_MB_CMD_CQC, cqc_dma, qp_id, 0);
+
dma_unmap_single(dev, cqc_dma, sizeof(struct qm_cqc), DMA_TO_DEVICE);
kfree(cqc);
@@ -2330,6 +2331,7 @@ static int qm_aeq_ctx_cfg(struct hisi_qm *qm)
aeqc->base_h = cpu_to_le32(upper_32_bits(qm->aeqe_dma));
aeqc->dw6 = cpu_to_le32((QM_Q_DEPTH - 1) | (1 << QM_EQC_PHASE_SHIFT));
ret = qm_mb(qm, QM_MB_CMD_AEQC, aeqc_dma, 0, 0);
+
dma_unmap_single(dev, aeqc_dma, sizeof(struct qm_aeqc), DMA_TO_DEVICE);
kfree(aeqc);
@@ -2384,13 +2386,6 @@ static int __hisi_qm_start(struct hisi_qm *qm)
QM_INIT_BUF(qm, sqc, qm->qp_num);
QM_INIT_BUF(qm, cqc, qm->qp_num);
-#ifdef CONFIG_CRYPTO_QM_UACCE
- /* get reserved dma memory */
- qm->reserve = qm->qdma.va + off;
- qm->reserve_dma = qm->qdma.dma + off;
- off += PAGE_SIZE;
-#endif
-
ret = qm_eq_aeq_ctx_cfg(qm);
if (ret)
return ret;
@@ -2681,7 +2676,7 @@ void hisi_qm_debug_regs_clear(struct hisi_qm *qm)
*/
int hisi_qm_debug_init(struct hisi_qm *qm)
{
- struct dentry *qm_d, *qm_regs;
+ struct dentry *qm_d;
int i, ret;
qm_d = debugfs_create_dir("qm", qm->debug.debug_root);
@@ -2697,12 +2692,7 @@ int hisi_qm_debug_init(struct hisi_qm *qm)
goto failed_to_create;
}
- qm_regs = debugfs_create_file("regs", 0444, qm->debug.qm_d, qm,
- &qm_regs_fops);
- if (IS_ERR(qm_regs)) {
- ret = -ENOENT;
- goto failed_to_create;
- }
+ debugfs_create_file("regs", 0444, qm->debug.qm_d, qm, &qm_regs_fops);
return 0;
@@ -3038,7 +3028,9 @@ int hisi_qm_sriov_enable(struct pci_dev *pdev, int max_vfs)
{
struct hisi_qm *qm = pci_get_drvdata(pdev);
int pre_existing_vfs, num_vfs, ret;
+ int total_vfs;
+ total_vfs = pci_sriov_get_totalvfs(pdev);
pre_existing_vfs = pci_num_vf(pdev);
if (pre_existing_vfs) {
pci_err(pdev,
@@ -3046,7 +3038,7 @@ int hisi_qm_sriov_enable(struct pci_dev *pdev, int max_vfs)
return 0;
}
- num_vfs = min_t(int, max_vfs, QM_MAX_VFS_NUM);
+ num_vfs = min_t(int, max_vfs, total_vfs);
ret = qm_vf_q_assign(qm, num_vfs);
if (ret) {
pci_err(pdev, "Can't assign queues for VF!\n");
diff --git a/drivers/crypto/hisilicon/qm.h b/drivers/crypto/hisilicon/qm.h
index 36e888f..79e29ee 100644
--- a/drivers/crypto/hisilicon/qm.h
+++ b/drivers/crypto/hisilicon/qm.h
@@ -19,7 +19,7 @@
#define QNUM_V1 4096
#define QNUM_V2 1024
-#define QM_MAX_VFS_NUM 63
+#define QM_MAX_VFS_NUM_V2 63
/* qm user domain */
#define QM_ARUSER_M_CFG_1 0x100088
#define AXUSER_SNOOP_ENABLE BIT(30)
@@ -322,9 +322,7 @@ struct hisi_qm {
resource_size_t size;
struct uacce uacce;
const char *algs;
- void *reserve;
int uacce_mode;
- dma_addr_t reserve_dma;
#endif
struct workqueue_struct *wq;
struct work_struct work;
@@ -423,7 +421,7 @@ static inline int vf_num_set(const char *val, const struct kernel_param *kp)
if (ret < 0)
return ret;
- if (n > QM_MAX_VFS_NUM)
+ if (n > QM_MAX_VFS_NUM_V2)
return -ERANGE;
return param_set_int(val, kp);
diff --git a/drivers/crypto/hisilicon/rde/rde_main.c b/drivers/crypto/hisilicon/rde/rde_main.c
index 318d4a0..946532f 100644
--- a/drivers/crypto/hisilicon/rde/rde_main.c
+++ b/drivers/crypto/hisilicon/rde/rde_main.c
@@ -13,6 +13,7 @@
#include <linux/bitops.h>
#include <linux/debugfs.h>
#include <linux/init.h>
+#include <linux/iommu.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
@@ -20,6 +21,7 @@
#include <linux/seq_file.h>
#include <linux/topology.h>
#include <linux/uacce.h>
+
#include "rde.h"
#define HRDE_QUEUE_NUM_V1 4096
diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
index 54681dc..83e2869 100644
--- a/drivers/crypto/hisilicon/zip/zip_main.c
+++ b/drivers/crypto/hisilicon/zip/zip_main.c
@@ -858,8 +858,10 @@ static void hisi_zip_remove(struct pci_dev *pdev)
{
struct hisi_qm *qm = pci_get_drvdata(pdev);
+#ifdef CONFIG_CRYPTO_QM_UACCE
if (uacce_mode != UACCE_MODE_NOUACCE)
hisi_qm_remove_wait_delay(qm, &zip_devices);
+#endif
if (qm->fun_type == QM_HW_PF && qm->vfs_num)
hisi_qm_sriov_disable(pdev, NULL);
--
1.8.3

[PATCH 1/4] Revert "dm crypt: use WQ_HIGHPRI for the IO and crypt workqueues"
by Yang Yingliang 17 Apr '20
From: Mike Snitzer <snitzer(a)redhat.com>
mainline inclusion
from mainline-5.5-rc1
commit f612b2132db529feac4f965f28a1b9258ea7c22b
category: bugfix
bugzilla: 25149
CVE: NA
---------------------------
This reverts commit a1b89132dc4f61071bdeaab92ea958e0953380a1.
Revert required hand-patching due to subsequent changes that were
applied since commit a1b89132dc4f61071bdeaab92ea958e0953380a1.
Requires: ed0302e83098d ("dm crypt: make workqueue names device-specific")
Cc: stable(a)vger.kernel.org
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=199857
Reported-by: Vito Caputo <vcaputo(a)pengaru.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Sun Ke <sunke32(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/md/dm-crypt.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index f68b9bd..d451f98 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -3996,17 +3996,16 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
}
ret = -ENOMEM;
- cc->io_queue = alloc_workqueue("kcryptd_io", WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);
+ cc->io_queue = alloc_workqueue("kcryptd_io", WQ_MEM_RECLAIM, 1);
if (!cc->io_queue) {
ti->error = "Couldn't create kcryptd io queue";
goto bad;
}
if (test_bit(DM_CRYPT_SAME_CPU, &cc->flags))
- cc->crypt_queue = alloc_workqueue("kcryptd", WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);
+ cc->crypt_queue = alloc_workqueue("kcryptd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);
else
- cc->crypt_queue = alloc_workqueue("kcryptd",
- WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND,
+ cc->crypt_queue = alloc_workqueue("kcryptd", WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND,
num_online_cpus());
if (!cc->crypt_queue) {
ti->error = "Couldn't create kcryptd queue";
--
1.8.3

17 Apr '20
From: Bob Moore <robert.moore(a)intel.com>
mainline inclusion
from mainline-v5.5-rc1
commit 197aba2090e3
category: bugfix
bugzilla: 25975
CVE: NA
-------------------------------------------------
ACPICA commit 7bc16c650317001bc82d4bae227b888a49c51f5e
Avoid possible overflow from get_tick_count. Also, cast math
using ACPI_100NSEC_PER_MSEC to uint64.
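As a standalone illustration (hypothetical values, not ACPICA code): the cast
must be applied to an operand before the multiply, since a 32-bit product
wraps before it can be widened to 64 bits:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t iterations = 0xFFFF;	/* hypothetical loop limit */
		uint32_t per_sec = 10000000;	/* 100ns units per second */

		/* cast after the 32-bit multiply: product already wrapped */
		uint64_t wrong = (uint64_t)(iterations * per_sec);
		/* cast one operand first: multiply is done in 64 bits */
		uint64_t right = (uint64_t)iterations * per_sec;

		printf("wrong=%llu right=%llu\n",
		       (unsigned long long)wrong, (unsigned long long)right);
		return 0;
	}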
Link: https://github.com/acpica/acpica/commit/7bc16c65
Signed-off-by: Bob Moore <robert.moore(a)intel.com>
Signed-off-by: Erik Schmauss <erik.schmauss(a)intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/acpi/acpica/dscontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/acpi/acpica/dscontrol.c b/drivers/acpi/acpica/dscontrol.c
index 0da9626..ebaa74d 100644
--- a/drivers/acpi/acpica/dscontrol.c
+++ b/drivers/acpi/acpica/dscontrol.c
@@ -85,7 +85,7 @@
walk_state->parser_state.pkg_end;
control_state->control.opcode = op->common.aml_opcode;
control_state->control.loop_timeout = acpi_os_get_timer() +
- (u64)(acpi_gbl_max_loop_iterations * ACPI_100NSEC_PER_SEC);
+ ((u64)acpi_gbl_max_loop_iterations * ACPI_100NSEC_PER_SEC);
/* Push the control state on this walk's control stack */
--
1.8.3

[PATCH 001/276] core: Don't skip generic XDP program execution for cloned SKBs
by Yang Yingliang 17 Apr '20
From: Toke Høiland-Jørgensen <toke(a)redhat.com>
[ Upstream commit ad1e03b2b3d4430baaa109b77bc308dc73050de3 ]
The current generic XDP handler skips execution of XDP programs entirely if
an SKB is marked as cloned. This leads to some surprising behaviour, as
packets can end up being cloned in various ways, which will make an XDP
program not see all the traffic on an interface.
This was discovered by a simple test case where an XDP program that always
returns XDP_DROP is installed on a veth device. When combining this with
the Scapy packet sniffer (which uses an AF_PACKET socket) on the sending
side, SKBs reliably end up in the cloned state, causing them to be passed
through to the receiving interface instead of being dropped. A minimal
reproducer script for this is included below.
This patch fixes the issue by simply triggering the existing linearisation
code for cloned SKBs instead of skipping the XDP program execution. This
behaviour is in line with the behaviour of the native XDP implementation
for the veth driver, which will reallocate and copy the SKB data if the SKB
is marked as shared.
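As a minimal sketch of the control-flow change (paraphrasing the diff at the
end of this mail), the skb_cloned() test moves from the early-bail path into
the linearisation condition:

	/* before: cloned SKBs bypassed generic XDP entirely */
	if (skb_cloned(skb) || skb_is_tc_redirected(skb))
		return XDP_PASS;

	/* after: only tc-redirected SKBs bypass; cloned SKBs fall through
	 * to the existing linearisation path, which reallocates and copies
	 * the SKB data before the XDP program runs
	 */
	if (skb_is_tc_redirected(skb))
		return XDP_PASS;
	if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
		/* ...linearise/copy, then run the attached program... */
	}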
Reproducer Python script (requires BCC and Scapy):
from scapy.all import TCP, IP, Ether, sendp, sniff, AsyncSniffer, Raw, UDP
from bcc import BPF
import time, sys, subprocess, shlex

SKB_MODE = (1 << 1)
DRV_MODE = (1 << 2)
PYTHON = sys.executable

def client():
    time.sleep(2)
    # Sniffing on the sender causes skb_cloned() to be set
    s = AsyncSniffer()
    s.start()
    for p in range(10):
        sendp(Ether(dst="aa:aa:aa:aa:aa:aa", src="cc:cc:cc:cc:cc:cc")/IP()/UDP()/Raw("Test"),
              verbose=False)
        time.sleep(0.1)
    s.stop()
    return 0

def server(mode):
    prog = BPF(text="int dummy_drop(struct xdp_md *ctx) {return XDP_DROP;}")
    func = prog.load_func("dummy_drop", BPF.XDP)
    prog.attach_xdp("a_to_b", func, mode)
    time.sleep(1)
    s = sniff(iface="a_to_b", count=10, timeout=15)
    if len(s):
        print(f"Got {len(s)} packets - should have gotten 0")
        return 1
    else:
        print("Got no packets - as expected")
        return 0

if len(sys.argv) < 2:
    print(f"Usage: {sys.argv[0]} <skb|drv>")
    sys.exit(1)

if sys.argv[1] == "client":
    sys.exit(client())
elif sys.argv[1] == "server":
    mode = SKB_MODE if sys.argv[2] == 'skb' else DRV_MODE
    sys.exit(server(mode))
else:
    try:
        mode = sys.argv[1]
        if mode not in ('skb', 'drv'):
            print(f"Usage: {sys.argv[0]} <skb|drv>")
            sys.exit(1)
        print(f"Running in {mode} mode")
        for cmd in ['ip netns add netns_a',
                    'ip netns add netns_b',
                    'ip -n netns_a link add a_to_b type veth peer name b_to_a netns netns_b',
                    # Disable ipv6 to make sure there's no address autoconf traffic
                    'ip netns exec netns_a sysctl -qw net.ipv6.conf.a_to_b.disable_ipv6=1',
                    'ip netns exec netns_b sysctl -qw net.ipv6.conf.b_to_a.disable_ipv6=1',
                    'ip -n netns_a link set dev a_to_b address aa:aa:aa:aa:aa:aa',
                    'ip -n netns_b link set dev b_to_a address cc:cc:cc:cc:cc:cc',
                    'ip -n netns_a link set dev a_to_b up',
                    'ip -n netns_b link set dev b_to_a up']:
            subprocess.check_call(shlex.split(cmd))
        server = subprocess.Popen(shlex.split(f"ip netns exec netns_a {PYTHON} {sys.argv[0]} server {mode}"))
        client = subprocess.Popen(shlex.split(f"ip netns exec netns_b {PYTHON} {sys.argv[0]} client"))
        client.wait()
        server.wait()
        sys.exit(server.returncode)
    finally:
        subprocess.run(shlex.split("ip netns delete netns_a"))
        subprocess.run(shlex.split("ip netns delete netns_b"))
Fixes: d445516966dc ("net: xdp: support xdp generic on virtual devices")
Reported-by: Stepan Horacek <shoracek(a)redhat.com>
Suggested-by: Paolo Abeni <pabeni(a)redhat.com>
Signed-off-by: Toke Høiland-Jørgensen <toke(a)redhat.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/core/dev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index 1c0224e..c1a3baf 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4306,14 +4306,14 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
/* Reinjected packets coming from act_mirred or similar should
* not get XDP generic processing.
*/
- if (skb_cloned(skb) || skb_is_tc_redirected(skb))
+ if (skb_is_tc_redirected(skb))
return XDP_PASS;
/* XDP packets must be linear and must have sufficient headroom
* of XDP_PACKET_HEADROOM bytes. This is the guarantee that also
* native XDP provides, thus we need to do it here as well.
*/
- if (skb_is_nonlinear(skb) ||
+ if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
skb_headroom(skb) < XDP_PACKET_HEADROOM) {
int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);
int troom = skb->tail + skb->data_len - skb->end;
--
1.8.3
From: Al Viro <viro(a)zeniv.linux.org.uk>
mainline inclusion
from mainline-5.1-rc1
commit 35ac1184244f1329783e1d897f74926d8bb1103a
category: bugfix
bugzilla: 30217
CVE: NA
---------------------------
* make the reference from superblock to cgroup_root counting -
do cgroup_put() in cgroup_kill_sb() whether we'd done
percpu_ref_kill() or not; matching grab is done when we allocate
a new root. That gives the same refcounting rules for all callers
of cgroup_do_mount() - a reference to cgroup_root has been grabbed
by caller and it either is transferred to new superblock or dropped.
* have cgroup_kill_sb() treat an already killed refcount as "just
don't bother killing it, then".
* after successful cgroup_do_mount() have cgroup1_mount() recheck
if we'd raced with mount/umount from somebody else and cgroup_root
got killed. In that case we drop the superblock and bugger off
with -ERESTARTSYS, same as if we'd found it in the list already
dying.
* don't bother with delayed initialization of refcount - it's
unreliable and not needed. No need to prevent attempts to bump
the refcount if we find cgroup_root of another mount in progress -
sget will reuse an existing superblock just fine and if the
other sb manages to die before we get there, we'll catch
that immediately after cgroup_do_mount().
* don't bother with kernfs_pin_sb() - no need for doing that
either.
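A minimal sketch (informal pseudocode, not the cgroup sources; the real hunks
follow below) of the resulting refcount rule:

	/*
	 * mount:   caller grabs a reference on cgroup_root; on success the
	 *          new superblock inherits it, on failure the caller drops it.
	 *
	 * kill_sb: percpu_ref_kill() only if the root is still alive, has no
	 *          children and is not the default root; then do
	 *          cgroup_put(&root->cgrp) unconditionally, matching the
	 *          reference grabbed when the root was allocated.
	 */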
Signed-off-by: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: yangerkun <yangerkun(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/cgroup/cgroup-internal.h | 2 +-
kernel/cgroup/cgroup-v1.c | 58 +++++++++--------------------------------
kernel/cgroup/cgroup.c | 16 +++++-------
3 files changed, 21 insertions(+), 55 deletions(-)
diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h
index 75568fc..6f02be1 100644
--- a/kernel/cgroup/cgroup-internal.h
+++ b/kernel/cgroup/cgroup-internal.h
@@ -196,7 +196,7 @@ int cgroup_path_ns_locked(struct cgroup *cgrp, char *buf, size_t buflen,
void cgroup_free_root(struct cgroup_root *root);
void init_cgroup_root(struct cgroup_root *root, struct cgroup_sb_opts *opts);
-int cgroup_setup_root(struct cgroup_root *root, u16 ss_mask, int ref_flags);
+int cgroup_setup_root(struct cgroup_root *root, u16 ss_mask);
int rebind_subsystems(struct cgroup_root *dst_root, u16 ss_mask);
struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
struct cgroup_root *root, unsigned long magic,
diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
index ff3e1aa..e66bb45 100644
--- a/kernel/cgroup/cgroup-v1.c
+++ b/kernel/cgroup/cgroup-v1.c
@@ -1112,13 +1112,11 @@ struct dentry *cgroup1_mount(struct file_system_type *fs_type, int flags,
void *data, unsigned long magic,
struct cgroup_namespace *ns)
{
- struct super_block *pinned_sb = NULL;
struct cgroup_sb_opts opts;
struct cgroup_root *root;
struct cgroup_subsys *ss;
struct dentry *dentry;
int i, ret;
- bool new_root = false;
cgroup_lock_and_drain_offline(&cgrp_dfl_root.cgrp);
@@ -1180,29 +1178,6 @@ struct dentry *cgroup1_mount(struct file_system_type *fs_type, int flags,
if (root->flags ^ opts.flags)
pr_warn("new mount options do not match the existing superblock, will be ignored\n");
- /*
- * We want to reuse @root whose lifetime is governed by its
- * ->cgrp. Let's check whether @root is alive and keep it
- * that way. As cgroup_kill_sb() can happen anytime, we
- * want to block it by pinning the sb so that @root doesn't
- * get killed before mount is complete.
- *
- * With the sb pinned, tryget_live can reliably indicate
- * whether @root can be reused. If it's being killed,
- * drain it. We can use wait_queue for the wait but this
- * path is super cold. Let's just sleep a bit and retry.
- */
- pinned_sb = kernfs_pin_sb(root->kf_root, NULL);
- if (IS_ERR(pinned_sb) ||
- !percpu_ref_tryget_live(&root->cgrp.self.refcnt)) {
- mutex_unlock(&cgroup_mutex);
- if (!IS_ERR_OR_NULL(pinned_sb))
- deactivate_super(pinned_sb);
- msleep(10);
- ret = restart_syscall();
- goto out_free;
- }
-
ret = 0;
goto out_unlock;
}
@@ -1228,15 +1203,20 @@ struct dentry *cgroup1_mount(struct file_system_type *fs_type, int flags,
ret = -ENOMEM;
goto out_unlock;
}
- new_root = true;
init_cgroup_root(root, &opts);
- ret = cgroup_setup_root(root, opts.subsys_mask, PERCPU_REF_INIT_DEAD);
+ ret = cgroup_setup_root(root, opts.subsys_mask);
if (ret)
cgroup_free_root(root);
out_unlock:
+ if (!ret && !percpu_ref_tryget_live(&root->cgrp.self.refcnt)) {
+ mutex_unlock(&cgroup_mutex);
+ msleep(10);
+ ret = restart_syscall();
+ goto out_free;
+ }
mutex_unlock(&cgroup_mutex);
out_free:
kfree(opts.release_agent);
@@ -1248,25 +1228,13 @@ struct dentry *cgroup1_mount(struct file_system_type *fs_type, int flags,
dentry = cgroup_do_mount(&cgroup_fs_type, flags, root,
CGROUP_SUPER_MAGIC, ns);
- /*
- * There's a race window after we release cgroup_mutex and before
- * allocating a superblock. Make sure a concurrent process won't
- * be able to re-use the root during this window by delaying the
- * initialization of root refcnt.
- */
- if (new_root) {
- mutex_lock(&cgroup_mutex);
- percpu_ref_reinit(&root->cgrp.self.refcnt);
- mutex_unlock(&cgroup_mutex);
+ if (!IS_ERR(dentry) && percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
+ struct super_block *sb = dentry->d_sb;
+ dput(dentry);
+ deactivate_locked_super(sb);
+ msleep(10);
+ dentry = ERR_PTR(restart_syscall());
}
-
- /*
- * If @pinned_sb, we're reusing an existing root and holding an
- * extra ref on its sb. Mount is complete. Put the extra ref.
- */
- if (pinned_sb)
- deactivate_super(pinned_sb);
-
return dentry;
}
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 08bd40d..eaee21d 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -1897,7 +1897,7 @@ void init_cgroup_root(struct cgroup_root *root, struct cgroup_sb_opts *opts)
set_bit(CGRP_CPUSET_CLONE_CHILDREN, &root->cgrp.flags);
}
-int cgroup_setup_root(struct cgroup_root *root, u16 ss_mask, int ref_flags)
+int cgroup_setup_root(struct cgroup_root *root, u16 ss_mask)
{
LIST_HEAD(tmp_links);
struct cgroup *root_cgrp = &root->cgrp;
@@ -1914,7 +1914,7 @@ int cgroup_setup_root(struct cgroup_root *root, u16 ss_mask, int ref_flags)
root_cgrp->ancestor_ids[0] = ret;
ret = percpu_ref_init(&root_cgrp->self.refcnt, css_release,
- ref_flags, GFP_KERNEL);
+ 0, GFP_KERNEL);
if (ret)
goto out;
@@ -2091,18 +2091,16 @@ static void cgroup_kill_sb(struct super_block *sb)
struct cgroup_root *root = cgroup_root_from_kf(kf_root);
/*
- * If @root doesn't have any mounts or children, start killing it.
+ * If @root doesn't have any children, start killing it.
* This prevents new mounts by disabling percpu_ref_tryget_live().
* cgroup_mount() may wait for @root's release.
*
* And don't kill the default root.
*/
- if (!list_empty(&root->cgrp.self.children) ||
- root == &cgrp_dfl_root)
- cgroup_put(&root->cgrp);
- else
+ if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
+ !percpu_ref_is_dying(&root->cgrp.self.refcnt))
percpu_ref_kill(&root->cgrp.self.refcnt);
-
+ cgroup_put(&root->cgrp);
kernfs_kill_sb(sb);
}
@@ -5371,7 +5369,7 @@ int __init cgroup_init(void)
hash_add(css_set_table, &init_css_set.hlist,
css_set_hash(init_css_set.subsys));
- BUG_ON(cgroup_setup_root(&cgrp_dfl_root, 0, 0));
+ BUG_ON(cgroup_setup_root(&cgrp_dfl_root, 0));
mutex_unlock(&cgroup_mutex);
--
1.8.3

17 Apr '20
From: Yu'an Wang <wangyuan46(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
In this patch, we fix a wrong judgement of the 'used' counter. When the
accelerator driver registers with crypto, the self-test program sends
tasks to the hardware and the 'used' count is decremented in the interrupt
thread, while the crypto exit flow calls hisi_qm_stop_qp_nolock() to stop
the queue, which reads 'used'. In this scenario the value may be read
before it is decremented, which leads to a NULL pointer dereference. So we
should distinguish the fault handling process from the normal stop process.
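The interleaving this guards against, as an informal sketch (not driver
code; see the diff below for the actual change):

	/*
	 *   send/irq side                     crypto exit flow
	 *   -------------                     ----------------
	 *   send task: atomic_inc(&used)
	 *                                     hisi_qm_stop_qp_nolock()
	 *                                       reads used != 0
	 *                                       -> qp_stop_fail_cb(qp) on a
	 *                                          queue already being torn
	 *                                          down: NULL pointer deref
	 *   irq thread: atomic_dec(&used)
	 *
	 * The fix runs the failure callback only on the fault path:
	 *
	 *   if (unlikely(qp->is_resetting && atomic_read(&qp->qp_status.used)))
	 *           qp_stop_fail_cb(qp);
	 */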
Signed-off-by: Yu'an Wang <wangyuan46(a)huawei.com>
Reviewed-by: Cheng Hu <hucheng.hu(a)huawei.com>
Reviewed-by: Guangwei Zhou <zhouguangwei5(a)huawei.com>
Reviewed-by: Junxian Liu <liujunxian3(a)huawei.com>
Reviewed-by: Shukun Tan <tanshukun1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/crypto/hisilicon/qm.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index e89a770..86f4a12 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -1591,10 +1591,12 @@ static int hisi_qm_stop_qp_nolock(struct hisi_qp *qp)
if (qp->qm->wq)
flush_workqueue(qp->qm->wq);
+ else
+ flush_work(&qp->qm->work);
/* wait for increase used count in qp send and last poll qp finish */
udelay(WAIT_PERIOD);
- if (atomic_read(&qp->qp_status.used))
+ if (unlikely(qp->is_resetting && atomic_read(&qp->qp_status.used)))
qp_stop_fail_cb(qp);
dev_dbg(dev, "stop queue %u!", qp->qp_id);
--
1.8.3

[PATCH 1/9] nfsd: Ensure CLONE persists data and metadata changes to the target file
by Yang Yingliang 17 Apr '20
From: Trond Myklebust <trondmy(a)gmail.com>
mainline inclusion
from mainline-v5.5-rc1
commit a25e3726b32c746c0098125d4c7463bb84df72bb
category: bugfix
bugzilla: 27346
CVE: NA
-------------------------------------------------
The NFSv4.2 CLONE operation has implicit persistence requirements on the
target file, since there is no protocol requirement that the client issue
a separate operation to persist data.
For that reason, we should call vfs_fsync_range() on the destination file
after a successful call to vfs_clone_file_range().
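As a minimal sketch of the persistence step this adds (assuming, as in the
hunk below, that a zero count means "clone to end of file"):

	/* After a successful vfs_clone_file_range(), persist the destination
	 * range when the export requires sync semantics (EX_ISSYNC):
	 *
	 *   loff_t dst_end = count ? dst_pos + count - 1 : LLONG_MAX;
	 *   vfs_fsync_range(dst, dst_pos, dst_end, 0);
	 */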
Fixes: ffa0160a1039 ("nfsd: implement the NFSv4.2 CLONE operation")
Signed-off-by: Trond Myklebust <trond.myklebust(a)hammerspace.com>
Cc: stable(a)vger.kernel.org # v4.5+
Signed-off-by: J. Bruce Fields <bfields(a)redhat.com>
Conflicts:
fs/nfsd/nfs4proc.c
fs/nfsd/vfs.c
42ec3d4c0218 ("vfs: make remap_file_range functions take and return bytes
completed")
2e5dfc99f2e6 ("vfs: combine the clone and dedupe into a single
remap_file_range")
Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/nfsd/nfs4proc.c | 3 ++-
fs/nfsd/vfs.c | 16 +++++++++++++---
fs/nfsd/vfs.h | 2 +-
3 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index f35aa9f..1c3e6de 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1082,7 +1082,8 @@ static __be32 nfsd4_do_lookupp(struct svc_rqst *rqstp, struct svc_fh *fh)
goto out;
status = nfsd4_clone_file_range(src, clone->cl_src_pos,
- dst, clone->cl_dst_pos, clone->cl_count);
+ dst, clone->cl_dst_pos, clone->cl_count,
+ EX_ISSYNC(cstate->current_fh.fh_export));
fput(dst);
fput(src);
diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index 80ceded..90e97c8 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -552,10 +552,20 @@ __be32 nfsd4_set_nfs4_label(struct svc_rqst *rqstp, struct svc_fh *fhp,
#endif
__be32 nfsd4_clone_file_range(struct file *src, u64 src_pos, struct file *dst,
- u64 dst_pos, u64 count)
+ u64 dst_pos, u64 count, bool sync)
{
- return nfserrno(vfs_clone_file_range(src, src_pos, dst, dst_pos,
- count));
+ int cloned;
+
+ cloned = vfs_clone_file_range(src, src_pos, dst, dst_pos, count);
+ if (cloned < 0)
+ return nfserrno(cloned);
+ if (sync) {
+ loff_t dst_end = count ? dst_pos + count - 1 : LLONG_MAX;
+ int status = vfs_fsync_range(dst, dst_pos, dst_end, 0);
+ if (status < 0)
+ return nfserrno(status);
+ }
+ return 0;
}
ssize_t nfsd_copy_file_range(struct file *src, u64 src_pos, struct file *dst,
diff --git a/fs/nfsd/vfs.h b/fs/nfsd/vfs.h
index db35124..02b0a14 100644
--- a/fs/nfsd/vfs.h
+++ b/fs/nfsd/vfs.h
@@ -58,7 +58,7 @@ __be32 nfsd4_set_nfs4_label(struct svc_rqst *, struct svc_fh *,
__be32 nfsd4_vfs_fallocate(struct svc_rqst *, struct svc_fh *,
struct file *, loff_t, loff_t, int);
__be32 nfsd4_clone_file_range(struct file *, u64, struct file *,
- u64, u64);
+ u64, u64, bool);
#endif /* CONFIG_NFSD_V4 */
__be32 nfsd_create_locked(struct svc_rqst *, struct svc_fh *,
char *name, int len, struct iattr *attrs,
--
1.8.3

[PATCH] qm: Move all the same logic functions of hisilicon crypto to qm
by Yang Yingliang 16 Apr '20
From: Yu'an Wang <wangyuan46(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
In this patch, we move the common logic of the accelerator drivers into
the qm module to simplify the code, covering RAS/FLR/SRIOV handling and
the uacce_mode/pf_q_num/vfs_num module parameter setting.
In qm.h we add mode_set/q_num_set/vf_num_set so that each accelerator can
implement its uacce_mode/pf_q_num/vfs_num module parameters on top of
them.
In qm.c, hisi_qm_add_to_list and hisi_qm_del_from_list can be called to
manage accelerators through hisi_qm_list. We additionally add
hisi_qm_alloc_qps_node to fix the problem that a device is found but the
queue request fails. Because the RAS, FLR and SRIOV processing flows are
consistent across the different accelerator drivers, we add the
corresponding shared interfaces.
Meanwhile, the zip/hpre/sec/rde accelerator drivers are adapted to these
qm changes, including RAS/FLR/SRIOV processing, module parameter setting
and queue allocation.
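As a minimal sketch of the resulting pattern (mirroring the hpre hunks
below; the other drivers follow the same shape), each driver's module
parameter now delegates validation to the shared qm helper:

	static int pf_q_num_set(const char *val, const struct kernel_param *kp)
	{
		/* shared helper validates against this device's queue limit */
		return q_num_set(val, kp, HPRE_PCI_DEVICE_ID);
	}

	static const struct kernel_param_ops hpre_pf_q_num_ops = {
		.set = pf_q_num_set,
		.get = param_get_int,
	};

	module_param_cb(pf_q_num, &hpre_pf_q_num_ops, &pf_q_num, 0444);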
Signed-off-by: Yu'an Wang <wangyuan46(a)huawei.com>
Reviewed-by: Cheng Hu <hucheng.hu(a)huawei.com>
Reviewed-by: Wei Zhang <zhangwei375(a)huawei.com>
Reviewed-by: Guangwei Zhang <zhouguangwei5(a)huawei.com>
Reviewed-by: Junxian Liu <liujunxian3(a)huawei.com>
Reviewed-by: Shukun Tan <tanshukun1(a)huawei.com>
Reviewed-by: Hao Fang <fanghao11(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/crypto/hisilicon/hpre/hpre.h | 9 +-
drivers/crypto/hisilicon/hpre/hpre_crypto.c | 20 +-
drivers/crypto/hisilicon/hpre/hpre_main.c | 944 +++---------------
drivers/crypto/hisilicon/qm.c | 1093 +++++++++++++++++----
drivers/crypto/hisilicon/qm.h | 209 +++-
drivers/crypto/hisilicon/rde/rde.h | 11 +-
drivers/crypto/hisilicon/rde/rde_api.c | 29 +-
drivers/crypto/hisilicon/rde/rde_api.h | 2 +-
drivers/crypto/hisilicon/rde/rde_main.c | 717 ++++----------
drivers/crypto/hisilicon/sec2/sec.h | 13 +-
drivers/crypto/hisilicon/sec2/sec_crypto.c | 83 +-
drivers/crypto/hisilicon/sec2/sec_main.c | 1364 +++++++--------------------
drivers/crypto/hisilicon/zip/zip.h | 9 +-
drivers/crypto/hisilicon/zip/zip_crypto.c | 30 +-
drivers/crypto/hisilicon/zip/zip_main.c | 1152 +++++-----------------
15 files changed, 2053 insertions(+), 3632 deletions(-)
diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h
index ba7c88e..3ac02ef 100644
--- a/drivers/crypto/hisilicon/hpre/hpre.h
+++ b/drivers/crypto/hisilicon/hpre/hpre.h
@@ -35,25 +35,18 @@ struct hpre_debugfs_file {
struct hpre_debug *debug;
};
-#define HPRE_RESET 0
-#define HPRE_WAIT_DELAY 1000
-
/*
* One HPRE controller has one PF and multiple VFs, some global configurations
* which PF has need this structure.
* Just relevant for PF.
*/
struct hpre_debug {
- struct dentry *debug_root;
struct hpre_debugfs_file files[HPRE_DEBUGFS_FILE_NUM];
};
struct hpre {
struct hisi_qm qm;
- struct list_head list;
struct hpre_debug debug;
- u32 num_vfs;
- unsigned long status;
};
enum hpre_alg_type {
@@ -80,7 +73,7 @@ struct hpre_sqe {
__le32 rsvd1[_HPRE_SQE_ALIGN_EXT];
};
-struct hpre *hpre_find_device(int node);
+struct hisi_qp *hpre_create_qp(void);
int hpre_algs_register(void);
void hpre_algs_unregister(void);
diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
index aadc975..7610e13 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
@@ -147,26 +147,18 @@ static void hpre_rm_req_from_ctx(struct hpre_asym_request *hpre_req)
static struct hisi_qp *hpre_get_qp_and_start(void)
{
struct hisi_qp *qp;
- struct hpre *hpre;
int ret;
- /* find the proper hpre device, which is near the current CPU core */
- hpre = hpre_find_device(cpu_to_node(smp_processor_id()));
- if (!hpre) {
- pr_err("Can not find proper hpre device!\n");
- return ERR_PTR(-ENODEV);
- }
-
- qp = hisi_qm_create_qp(&hpre->qm, 0);
- if (IS_ERR(qp)) {
- pci_err(hpre->qm.pdev, "Can not create qp!\n");
+ qp = hpre_create_qp();
+ if (!qp) {
+ pr_err("Can not create hpre qp!\n");
return ERR_PTR(-ENODEV);
}
ret = hisi_qm_start_qp(qp, 0);
if (ret < 0) {
- hisi_qm_release_qp(qp);
- pci_err(hpre->qm.pdev, "Can not start qp!\n");
+ hisi_qm_free_qps(&qp, 1);
+ pci_err(qp->qm->pdev, "Can not start qp!\n");
return ERR_PTR(-EINVAL);
}
@@ -337,7 +329,7 @@ static void hpre_ctx_clear(struct hpre_ctx *ctx, bool is_clear_all)
if (is_clear_all) {
idr_destroy(&ctx->req_idr);
kfree(ctx->req_list);
- hisi_qm_release_qp(ctx->qp);
+ hisi_qm_free_qps(&ctx->qp, 1);
}
ctx->crt_g2_mode = false;
diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
index 6a3bce2..4dc0d3e 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
@@ -13,9 +13,6 @@
#include <linux/uacce.h>
#include "hpre.h"
-#define HPRE_ENABLE 1
-#define HPRE_DISABLE 0
-#define HPRE_VF_NUM 63
#define HPRE_QUEUE_NUM_V2 1024
#define HPRE_QUEUE_NUM_V1 4096
#define HPRE_QM_ABNML_INT_MASK 0x100004
@@ -63,10 +60,6 @@
#define HPRE_HAC_ECC2_CNT 0x301a08
#define HPRE_HAC_INT_STATUS 0x301800
#define HPRE_HAC_SOURCE_INT 0x301600
-#define MASTER_GLOBAL_CTRL_SHUTDOWN 1
-#define MASTER_TRANS_RETURN_RW 3
-#define HPRE_MASTER_TRANS_RETURN 0x300150
-#define HPRE_MASTER_GLOBAL_CTRL 0x300000
#define HPRE_CLSTR_ADDR_INTRVL 0x1000
#define HPRE_CLUSTER_INQURY 0x100
#define HPRE_CLSTR_ADDR_INQRY_RSLT 0x104
@@ -83,24 +76,18 @@
#define HPRE_QM_VFG_AX_MASK 0xff
#define HPRE_BD_USR_MASK 0x3
#define HPRE_CLUSTER_CORE_MASK 0xf
-#define HPRE_RESET_WAIT_TIMEOUT 400
#define HPRE_AM_OOO_SHUTDOWN_ENB 0x301044
#define AM_OOO_SHUTDOWN_ENABLE BIT(0)
#define AM_OOO_SHUTDOWN_DISABLE 0xFFFFFFFE
-#define HPRE_WR_MSI_PORT 0xFFFB
+#define HPRE_WR_MSI_PORT BIT(2)
-#define HPRE_HW_ERROR_IRQ_ENABLE 1
-#define HPRE_HW_ERROR_IRQ_DISABLE 0
-#define HPRE_PCI_COMMAND_INVALID 0xFFFFFFFF
#define HPRE_CORE_ECC_2BIT_ERR BIT(1)
#define HPRE_OOO_ECC_2BIT_ERR BIT(5)
-#define HPRE_QM_BME_FLR BIT(7)
-#define HPRE_QM_PM_FLR BIT(11)
-#define HPRE_QM_SRIOV_FLR BIT(12)
-
-#define HPRE_USLEEP 10
+#define HPRE_QM_BME_FLR BIT(7)
+#define HPRE_QM_PM_FLR BIT(11)
+#define HPRE_QM_SRIOV_FLR BIT(12)
/* function index:
* 1 for hpre bypass mode,
@@ -108,8 +95,7 @@
*/
#define HPRE_VIA_MSI_DSM 1
-static LIST_HEAD(hpre_list);
-static DEFINE_MUTEX(hpre_list_lock);
+static struct hisi_qm_list hpre_devices;
static const char hpre_name[] = "hisi_hpre";
static struct dentry *hpre_debugfs_root;
static const struct pci_device_id hpre_dev_ids[] = {
@@ -183,59 +169,29 @@ struct hpre_hw_error {
{"INT_STATUS ", HPRE_INT_STATUS},
};
-static int hpre_pf_q_num_set(const char *val, const struct kernel_param *kp)
+#ifdef CONFIG_CRYPTO_QM_UACCE
+static int uacce_mode_set(const char *val, const struct kernel_param *kp)
{
- struct pci_dev *pdev;
- u32 q_num;
- u32 n = 0;
- u8 rev_id;
- int ret;
-
- if (!val)
- return -EINVAL;
-
- pdev = pci_get_device(PCI_VENDOR_ID_HUAWEI, HPRE_PCI_DEVICE_ID, NULL);
- if (!pdev) {
- q_num = HPRE_QUEUE_NUM_V2;
- pr_info("No device found currently, suppose queue number is %d\n",
- q_num);
- } else {
- rev_id = pdev->revision;
- if (rev_id != QM_HW_V2)
- return -EINVAL;
-
- q_num = HPRE_QUEUE_NUM_V2;
- }
-
- ret = kstrtou32(val, 10, &n);
- if (ret != 0 || n == 0 || n > q_num)
- return -EINVAL;
-
- return param_set_int(val, kp);
+ return mode_set(val, kp);
}
-static const struct kernel_param_ops hpre_pf_q_num_ops = {
- .set = hpre_pf_q_num_set,
+static const struct kernel_param_ops uacce_mode_ops = {
+ .set = uacce_mode_set,
.get = param_get_int,
};
-static int uacce_mode_set(const char *val, const struct kernel_param *kp)
-{
- u32 n;
- int ret;
-
- if (!val)
- return -EINVAL;
-
- ret = kstrtou32(val, 10, &n);
- if (ret != 0 || (n != UACCE_MODE_NOIOMMU && n != UACCE_MODE_NOUACCE))
- return -EINVAL;
+static int uacce_mode = UACCE_MODE_NOUACCE;
+module_param_cb(uacce_mode, &uacce_mode_ops, &uacce_mode, 0444);
+MODULE_PARM_DESC(uacce_mode, "Mode of UACCE can be 0(default), 2");
+#endif
- return param_set_int(val, kp);
+static int pf_q_num_set(const char *val, const struct kernel_param *kp)
+{
+ return q_num_set(val, kp, HPRE_PCI_DEVICE_ID);
}
-static const struct kernel_param_ops uacce_mode_ops = {
- .set = uacce_mode_set,
+static const struct kernel_param_ops hpre_pf_q_num_ops = {
+ .set = pf_q_num_set,
.get = param_get_int,
};
@@ -243,46 +199,31 @@ static int uacce_mode_set(const char *val, const struct kernel_param *kp)
module_param_cb(pf_q_num, &hpre_pf_q_num_ops, &pf_q_num, 0444);
MODULE_PARM_DESC(pf_q_num, "Number of queues in PF of CS(1-1024)");
-static int uacce_mode = UACCE_MODE_NOUACCE;
-module_param_cb(uacce_mode, &uacce_mode_ops, &uacce_mode, 0444);
-MODULE_PARM_DESC(uacce_mode, "Mode of UACCE can be 0(default), 2");
-static inline void hpre_add_to_list(struct hpre *hpre)
+static int vfs_num_set(const char *val, const struct kernel_param *kp)
{
- mutex_lock(&hpre_list_lock);
- list_add_tail(&hpre->list, &hpre_list);
- mutex_unlock(&hpre_list_lock);
+ return vf_num_set(val, kp);
}
-static inline void hpre_remove_from_list(struct hpre *hpre)
-{
- mutex_lock(&hpre_list_lock);
- list_del(&hpre->list);
- mutex_unlock(&hpre_list_lock);
-}
+static const struct kernel_param_ops vfs_num_ops = {
+ .set = vfs_num_set,
+ .get = param_get_int,
+};
-struct hpre *hpre_find_device(int node)
+static u32 vfs_num;
+module_param_cb(vfs_num, &vfs_num_ops, &vfs_num, 0444);
+MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)");
+
+struct hisi_qp *hpre_create_qp(void)
{
- struct hpre *hpre, *ret = NULL;
- int min_distance = INT_MAX;
- struct device *dev;
- int dev_node = 0;
-
- mutex_lock(&hpre_list_lock);
- list_for_each_entry(hpre, &hpre_list, list) {
- dev = &hpre->qm.pdev->dev;
-#ifdef CONFIG_NUMA
- dev_node = dev->numa_node;
- if (dev_node < 0)
- dev_node = 0;
-#endif
- if (node_distance(dev_node, node) < min_distance) {
- ret = hpre;
- min_distance = node_distance(dev_node, node);
- }
- }
- mutex_unlock(&hpre_list_lock);
+ int node = cpu_to_node(smp_processor_id());
+ struct hisi_qp *qp = NULL;
+ int ret;
- return ret;
+ ret = hisi_qm_alloc_qps_node(node, &hpre_devices, &qp, 1, 0);
+ if (!ret)
+ return qp;
+
+ return NULL;
}
static void hpre_pasid_enable(struct hisi_qm *qm)
@@ -351,9 +292,8 @@ static int hpre_set_cluster(struct hisi_qm *qm)
return 0;
}
-static int hpre_set_user_domain_and_cache(struct hpre *hpre)
+static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hpre->qm;
struct pci_dev *pdev = qm->pdev;
u32 val;
int ret;
@@ -403,7 +343,7 @@ static int hpre_set_user_domain_and_cache(struct hpre *hpre)
pci_err(pdev, "acpi_evaluate_dsm err.\n");
/* disable FLR triggered by BME(bus master enable) */
- val = readl(hpre->qm.io_base + QM_PEH_AXUSER_CFG);
+ val = readl(HPRE_ADDR(qm, QM_PEH_AXUSER_CFG));
val &= ~(HPRE_QM_BME_FLR | HPRE_QM_SRIOV_FLR);
val |= HPRE_QM_PM_FLR;
writel(val, HPRE_ADDR(qm, QM_PEH_AXUSER_CFG));
@@ -433,23 +373,21 @@ static void hpre_cnt_regs_clear(struct hisi_qm *qm)
hisi_qm_debug_regs_clear(qm);
}
-static void hpre_hw_error_disable(struct hpre *hpre)
+static void hpre_hw_error_disable(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hpre->qm;
u32 val;
/* disable hpre hw error interrupts */
writel(HPRE_CORE_INT_DISABLE, qm->io_base + HPRE_INT_MASK);
/* disable HPRE block master OOO when m-bit error occur */
- val = readl(hpre->qm.io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
+ val = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
val &= AM_OOO_SHUTDOWN_DISABLE;
- writel(val, hpre->qm.io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
+ writel(val, qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
}
-static void hpre_hw_error_enable(struct hpre *hpre)
+static void hpre_hw_error_enable(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hpre->qm;
u32 val;
/* clear HPRE hw error source if having */
@@ -462,9 +400,9 @@ static void hpre_hw_error_enable(struct hpre *hpre)
writel(HPRE_HAC_RAS_FE_ENABLE, qm->io_base + HPRE_RAS_FE_ENB);
/* enable HPRE block master OOO when m-bit error occur */
- val = readl(hpre->qm.io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
+ val = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
val |= AM_OOO_SHUTDOWN_ENABLE;
- writel(val, hpre->qm.io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
+ writel(val, qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
}
static inline struct hisi_qm *hpre_file_to_qm(struct hpre_debugfs_file *file)
@@ -484,9 +422,7 @@ static u32 hpre_current_qm_read(struct hpre_debugfs_file *file)
static int hpre_current_qm_write(struct hpre_debugfs_file *file, u32 val)
{
struct hisi_qm *qm = hpre_file_to_qm(file);
- struct hpre_debug *debug = file->debug;
- struct hpre *hpre = container_of(debug, struct hpre, debug);
- u32 num_vfs = hpre->num_vfs;
+ u32 num_vfs = qm->vfs_num;
u32 vfq_num, tmp;
if (val > num_vfs)
@@ -657,11 +593,14 @@ static int hpre_create_debugfs_file(struct hpre_debug *dbg, struct dentry *dir,
enum hpre_ctrl_dbgfs_file type, int indx)
{
struct dentry *tmp, *file_dir;
+ struct hpre *hpre;
- if (dir)
+ if (dir) {
file_dir = dir;
- else
- file_dir = dbg->debug_root;
+ } else {
+ hpre = container_of(dbg, struct hpre, debug);
+ file_dir = hpre->qm.debug.debug_root;
+ }
if (type >= HPRE_DEBUG_FILE_NUM)
return -EINVAL;
@@ -694,7 +633,8 @@ static int hpre_pf_comm_regs_debugfs_init(struct hpre_debug *debug)
regset->nregs = ARRAY_SIZE(hpre_com_dfx_regs);
regset->base = qm->io_base;
- tmp = debugfs_create_regset32("regs", 0444, debug->debug_root, regset);
+ tmp = debugfs_create_regset32("regs", 0444, qm->debug.debug_root,
+ regset);
if (!tmp)
return -ENOENT;
@@ -716,7 +656,7 @@ static int hpre_cluster_debugfs_init(struct hpre_debug *debug)
if (ret < 0)
return -EINVAL;
- tmp_d = debugfs_create_dir(buf, debug->debug_root);
+ tmp_d = debugfs_create_dir(buf, qm->debug.debug_root);
if (!tmp_d)
return -ENOENT;
@@ -761,9 +701,9 @@ static int hpre_ctrl_debug_init(struct hpre_debug *debug)
return hpre_cluster_debugfs_init(debug);
}
-static int hpre_debugfs_init(struct hpre *hpre)
+static int hpre_debugfs_init(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hpre->qm;
+ struct hpre *hpre = container_of(qm, struct hpre, qm);
struct device *dev = &qm->pdev->dev;
struct dentry *dir;
int ret;
@@ -779,7 +719,6 @@ static int hpre_debugfs_init(struct hpre *hpre)
goto failed_to_create;
if (qm->pdev->device == HPRE_PCI_DEVICE_ID) {
- hpre->debug.debug_root = dir;
ret = hpre_ctrl_debug_init(&hpre->debug);
if (ret)
goto failed_to_create;
@@ -791,69 +730,41 @@ static int hpre_debugfs_init(struct hpre *hpre)
return ret;
}
-static void hpre_debugfs_exit(struct hpre *hpre)
-{
- struct hisi_qm *qm = &hpre->qm;
-
- debugfs_remove_recursive(qm->debug.debug_root);
-}
-
static int hpre_qm_pre_init(struct hisi_qm *qm, struct pci_dev *pdev)
{
- enum qm_hw_ver rev_id;
-
- rev_id = hisi_qm_get_hw_version(pdev);
- if (rev_id < 0)
- return -ENODEV;
+ int ret;
- if (rev_id == QM_HW_V1) {
+#ifdef CONFIG_CRYPTO_QM_UACCE
+ qm->algs = "rsa\ndh\n";
+ qm->uacce_mode = uacce_mode;
+#endif
+ qm->pdev = pdev;
+ ret = hisi_qm_pre_init(qm, pf_q_num, HPRE_PF_DEF_Q_BASE);
+ if (ret)
+ return ret;
+ if (qm->ver == QM_HW_V1) {
pci_warn(pdev, "HPRE version 1 is not supported!\n");
return -EINVAL;
}
- qm->pdev = pdev;
- qm->ver = rev_id;
+ qm->qm_list = &hpre_devices;
qm->sqe_size = HPRE_SQE_SIZE;
qm->dev_name = hpre_name;
- qm->fun_type = (pdev->device == HPRE_PCI_DEVICE_ID) ?
- QM_HW_PF : QM_HW_VF;
- qm->algs = "rsa\ndh\n";
- switch (uacce_mode) {
- case UACCE_MODE_NOUACCE:
- qm->use_uacce = false;
- break;
- case UACCE_MODE_NOIOMMU:
- qm->use_uacce = true;
- break;
- default:
- return -EINVAL;
- }
- if (pdev->is_physfn) {
- qm->qp_base = HPRE_PF_DEF_Q_BASE;
- qm->qp_num = pf_q_num;
- qm->debug.curr_qm_qp_num = pf_q_num;
- }
return 0;
}
-static void hpre_hw_err_init(struct hpre *hpre)
-{
- hisi_qm_hw_error_init(&hpre->qm, QM_BASE_CE,
- QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT,
- 0, QM_DB_RANDOM_INVALID);
- hpre_hw_error_enable(hpre);
-}
-
-static void hpre_open_master_ooo(struct hisi_qm *qm)
+static void hpre_log_hw_error(struct hisi_qm *qm, u32 err_sts)
{
- u32 val;
+ const struct hpre_hw_error *err = hpre_hw_errors;
+ struct device *dev = &qm->pdev->dev;
- val = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
- writel(val & AM_OOO_SHUTDOWN_DISABLE,
- HPRE_ADDR(qm, HPRE_AM_OOO_SHUTDOWN_ENB));
- writel(val | AM_OOO_SHUTDOWN_ENABLE,
- HPRE_ADDR(qm, HPRE_AM_OOO_SHUTDOWN_ENB));
+ while (err->msg) {
+ if (err->int_msk & err_sts)
+ dev_warn(dev, "%s [error status=0x%x] found\n",
+ err->msg, err->int_msk);
+ err++;
+ }
}
static u32 hpre_get_hw_err_status(struct hisi_qm *qm)
@@ -866,41 +777,47 @@ static void hpre_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
writel(err_sts, qm->io_base + HPRE_HAC_SOURCE_INT);
}
-static void hpre_log_hw_error(struct hisi_qm *qm, u32 err_sts)
+static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
{
- const struct hpre_hw_error *err = hpre_hw_errors;
- struct device *dev = &qm->pdev->dev;
+ u32 value;
- while (err->msg) {
- if (err->int_msk & err_sts)
- dev_warn(dev, "%s [error status=0x%x] found\n",
- err->msg, err->int_msk);
- err++;
- }
+ value = readl(qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
+ writel(value & AM_OOO_SHUTDOWN_DISABLE,
+ HPRE_ADDR(qm, HPRE_AM_OOO_SHUTDOWN_ENB));
+ writel(value | AM_OOO_SHUTDOWN_ENABLE,
+ HPRE_ADDR(qm, HPRE_AM_OOO_SHUTDOWN_ENB));
}
-static int hpre_pf_probe_init(struct hpre *hpre)
+static int hpre_pf_probe_init(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hpre->qm;
int ret;
if (qm->ver != QM_HW_V2)
return -EINVAL;
qm->ctrl_q_num = HPRE_QUEUE_NUM_V2;
- qm->err_ini.qm_wr_port = HPRE_WR_MSI_PORT;
- qm->err_ini.ecc_2bits_mask = (HPRE_CORE_ECC_2BIT_ERR |
- HPRE_OOO_ECC_2BIT_ERR);
- qm->err_ini.open_axi_master_ooo = hpre_open_master_ooo;
qm->err_ini.get_dev_hw_err_status = hpre_get_hw_err_status;
qm->err_ini.clear_dev_hw_err_status = hpre_clear_hw_err_status;
+ qm->err_ini.err_info.ecc_2bits_mask = HPRE_CORE_ECC_2BIT_ERR |
+ HPRE_OOO_ECC_2BIT_ERR;
+ qm->err_ini.err_info.ce = QM_BASE_CE;
+ qm->err_ini.err_info.nfe = QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT;
+ qm->err_ini.err_info.fe = 0;
+ qm->err_ini.err_info.msi = QM_DB_RANDOM_INVALID;
+ qm->err_ini.err_info.acpi_rst = "HRST";
+
+ qm->err_ini.hw_err_disable = hpre_hw_error_disable;
+ qm->err_ini.hw_err_enable = hpre_hw_error_enable;
+ qm->err_ini.set_usr_domain_cache = hpre_set_user_domain_and_cache;
qm->err_ini.log_dev_hw_err = hpre_log_hw_error;
+ qm->err_ini.open_axi_master_ooo = hpre_open_axi_master_ooo;
+ qm->err_ini.err_info.msi_wr_port = HPRE_WR_MSI_PORT;
- ret = hpre_set_user_domain_and_cache(hpre);
+ ret = qm->err_ini.set_usr_domain_cache(qm);
if (ret)
return ret;
- hpre_hw_err_init(hpre);
+ hisi_qm_dev_err_init(qm);
return 0;
}
@@ -914,10 +831,9 @@ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
hpre = devm_kzalloc(&pdev->dev, sizeof(*hpre), GFP_KERNEL);
if (!hpre)
return -ENOMEM;
-
- pci_set_drvdata(pdev, hpre);
-
qm = &hpre->qm;
+ qm->fun_type = pdev->is_physfn ? QM_HW_PF : QM_HW_VF;
+
ret = hpre_qm_pre_init(qm, pdev);
if (ret)
return ret;
@@ -929,7 +845,7 @@ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
}
if (pdev->is_physfn) {
- ret = hpre_pf_probe_init(hpre);
+ ret = hpre_pf_probe_init(qm);
if (ret) {
pci_err(pdev, "Failed to init pf probe (%d)!\n", ret);
goto err_with_qm_init;
@@ -947,26 +863,35 @@ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto err_with_err_init;
}
- ret = hpre_debugfs_init(hpre);
+ ret = hpre_debugfs_init(qm);
if (ret)
pci_warn(pdev, "init debugfs fail!\n");
- hpre_add_to_list(hpre);
+ hisi_qm_add_to_list(qm, &hpre_devices);
ret = hpre_algs_register();
if (ret < 0) {
- hpre_remove_from_list(hpre);
pci_err(pdev, "fail to register algs to crypto!\n");
goto err_with_qm_start;
}
+
+ if (qm->fun_type == QM_HW_PF && vfs_num > 0) {
+ ret = hisi_qm_sriov_enable(pdev, vfs_num);
+ if (ret < 0)
+ goto err_with_crypto_register;
+ }
+
return 0;
+err_with_crypto_register:
+ hpre_algs_unregister();
+
err_with_qm_start:
+ hisi_qm_del_from_list(qm, &hpre_devices);
hisi_qm_stop(qm, QM_NORMAL);
err_with_err_init:
- if (pdev->is_physfn)
- hpre_hw_error_disable(hpre);
+ hisi_qm_dev_err_uninit(qm);
err_with_qm_init:
hisi_qm_uninit(qm);
@@ -974,627 +899,51 @@ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return ret;
}
-static int hpre_vf_q_assign(struct hpre *hpre, int num_vfs)
-{
- struct hisi_qm *qm = &hpre->qm;
- u32 qp_num = qm->qp_num;
- int q_num, remain_q_num, i;
- u32 q_base = qp_num;
- int ret;
-
- if (!num_vfs)
- return -EINVAL;
-
- remain_q_num = qm->ctrl_q_num - qp_num;
- /* If remain queues not enough, return error. */
- if (remain_q_num < num_vfs)
- return -EINVAL;
-
- q_num = remain_q_num / num_vfs;
- for (i = 1; i <= num_vfs; i++) {
- if (i == num_vfs)
- q_num += remain_q_num % num_vfs;
- ret = hisi_qm_set_vft(qm, i, q_base, (u32)q_num);
- if (ret)
- return ret;
- q_base += q_num;
- }
-
- return 0;
-}
-
-static int hpre_clear_vft_config(struct hpre *hpre)
-{
- struct hisi_qm *qm = &hpre->qm;
- u32 num_vfs = hpre->num_vfs;
- int ret;
- u32 i;
-
- for (i = 1; i <= num_vfs; i++) {
- ret = hisi_qm_set_vft(qm, i, 0, 0);
- if (ret)
- return ret;
- }
- hpre->num_vfs = 0;
-
- return 0;
-}
-
-static int hpre_sriov_enable(struct pci_dev *pdev, int max_vfs)
-{
- struct hpre *hpre = pci_get_drvdata(pdev);
- int pre_existing_vfs, num_vfs, ret;
-
- pre_existing_vfs = pci_num_vf(pdev);
- if (pre_existing_vfs) {
- pci_err(pdev,
- "Can't enable VF. Please disable pre-enabled VFs!\n");
- return 0;
- }
-
- num_vfs = min_t(int, max_vfs, HPRE_VF_NUM);
- ret = hpre_vf_q_assign(hpre, num_vfs);
- if (ret) {
- pci_err(pdev, "Can't assign queues for VF!\n");
- return ret;
- }
-
- hpre->num_vfs = num_vfs;
-
- ret = pci_enable_sriov(pdev, num_vfs);
- if (ret) {
- pci_err(pdev, "Can't enable VF!\n");
- hpre_clear_vft_config(hpre);
- return ret;
- }
- return num_vfs;
-}
-
-static int hpre_try_frozen_vfs(struct pci_dev *pdev)
-{
- int ret = 0;
- struct hpre *hpre, *vf_hpre;
- struct pci_dev *dev;
-
- /* Try to frozen all the VFs as disable SRIOV */
- mutex_lock(&hpre_list_lock);
- list_for_each_entry(hpre, &hpre_list, list) {
- dev = hpre->qm.pdev;
- if (dev == pdev)
- continue;
- if (pci_physfn(dev) == pdev) {
- vf_hpre = pci_get_drvdata(dev);
- ret = hisi_qm_frozen(&vf_hpre->qm);
- if (ret)
- goto frozen_fail;
- }
- }
-
-frozen_fail:
- mutex_unlock(&hpre_list_lock);
- return ret;
-}
-
-static int hpre_sriov_disable(struct pci_dev *pdev)
-{
- struct hpre *hpre = pci_get_drvdata(pdev);
-
- if (pci_vfs_assigned(pdev)) {
- pci_err(pdev, "Failed to disable VFs while VFs are assigned!\n");
- return -EPERM;
- }
-
- /* While VF is in used, SRIOV cannot be disabled.
- * However, there is a risk that the behavior is uncertain if the
- * device is in hardware resetting.
- */
- if (hpre_try_frozen_vfs(pdev)) {
- dev_err(&pdev->dev,
- "Uacce user space task is using its VF!\n");
- return -EBUSY;
- }
-
- /* remove in hpre_pci_driver will be called to free VF resources */
- pci_disable_sriov(pdev);
- return hpre_clear_vft_config(hpre);
-}
-
static int hpre_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
if (num_vfs)
- return hpre_sriov_enable(pdev, num_vfs);
+ return hisi_qm_sriov_enable(pdev, num_vfs);
else
- return hpre_sriov_disable(pdev);
-}
-
-static void hpre_remove_wait_delay(struct hpre *hpre)
-{
- struct hisi_qm *qm = &hpre->qm;
-
- while (hisi_qm_frozen(&hpre->qm) ||
- ((qm->fun_type == QM_HW_PF) &&
- hpre_try_frozen_vfs(hpre->qm.pdev)))
- usleep_range(HPRE_USLEEP, HPRE_USLEEP);
- udelay(HPRE_WAIT_DELAY);
+ return hisi_qm_sriov_disable(pdev, &hpre_devices);
}
static void hpre_remove(struct pci_dev *pdev)
{
- struct hpre *hpre = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hpre->qm;
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
+ int ret;
+#ifdef CONFIG_CRYPTO_QM_UACCE
if (uacce_mode != UACCE_MODE_NOUACCE)
- hpre_remove_wait_delay(hpre);
-
+ hisi_qm_remove_wait_delay(qm, &hpre_devices);
+#endif
hpre_algs_unregister();
- hpre_remove_from_list(hpre);
- if (qm->fun_type == QM_HW_PF && hpre->num_vfs != 0)
- hpre_sriov_disable(pdev);
-
+ hisi_qm_del_from_list(qm, &hpre_devices);
+ if (qm->fun_type == QM_HW_PF && qm->vfs_num) {
+ ret = hisi_qm_sriov_disable(pdev, NULL);
+ if (ret) {
+ pci_err(pdev, "Disable SRIOV fail!\n");
+ return;
+ }
+ }
if (qm->fun_type == QM_HW_PF) {
hpre_cnt_regs_clear(qm);
qm->debug.curr_qm_qp_num = 0;
}
-
- hpre_debugfs_exit(hpre);
+ debugfs_remove_recursive(qm->debug.debug_root);
hisi_qm_stop(qm, QM_NORMAL);
if (qm->fun_type == QM_HW_PF)
- hpre_hw_error_disable(hpre);
+ hisi_qm_dev_err_uninit(qm);
hisi_qm_uninit(qm);
}
-static void hpre_shutdown(struct pci_dev *pdev)
-{
- struct hpre *hpre = pci_get_drvdata(pdev);
-
- hisi_qm_stop(&hpre->qm, QM_NORMAL);
-}
-
-static pci_ers_result_t hpre_error_detected(struct pci_dev *pdev,
- pci_channel_state_t state)
-{
- if (pdev->is_virtfn)
- return PCI_ERS_RESULT_NONE;
-
- pci_info(pdev, "PCI error detected, state(=%d)!!\n", state);
- if (state == pci_channel_io_perm_failure)
- return PCI_ERS_RESULT_DISCONNECT;
-
- return hisi_qm_process_dev_error(pdev);
-}
-
-static int hpre_vf_reset_prepare(struct pci_dev *pdev,
- enum qm_stop_reason stop_reason)
-{
- struct pci_dev *dev;
- struct hisi_qm *qm;
- struct hpre *hpre;
- int ret = 0;
-
- mutex_lock(&hpre_list_lock);
- if (pdev->is_physfn) {
- list_for_each_entry(hpre, &hpre_list, list) {
- dev = hpre->qm.pdev;
- if (dev == pdev)
- continue;
-
- if (pci_physfn(dev) == pdev) {
- qm = &hpre->qm;
-
- ret = hisi_qm_stop(qm, stop_reason);
- if (ret)
- goto prepare_fail;
- }
- }
- }
-
-prepare_fail:
- mutex_unlock(&hpre_list_lock);
- return ret;
-}
-
-static int hpre_reset_prepare_rdy(struct hpre *hpre)
-{
- struct pci_dev *pdev = hpre->qm.pdev;
- struct hpre *hisi_hpre = pci_get_drvdata(pci_physfn(pdev));
- int delay = 0;
-
- while (test_and_set_bit(HPRE_RESET, &hisi_hpre->status)) {
- msleep(++delay);
- if (delay > HPRE_RESET_WAIT_TIMEOUT)
- return -EBUSY;
- }
-
- return 0;
-}
-
-static int hpre_controller_reset_prepare(struct hpre *hpre)
-{
- struct hisi_qm *qm = &hpre->qm;
- struct pci_dev *pdev = qm->pdev;
- int ret;
-
- ret = hpre_reset_prepare_rdy(hpre);
- if (ret) {
- dev_err(&pdev->dev, "Controller reset not ready!\n");
- return ret;
- }
-
- ret = hpre_vf_reset_prepare(pdev, QM_SOFT_RESET);
- if (ret) {
- dev_err(&pdev->dev, "Fails to stop VFs!\n");
- return ret;
- }
-
- ret = hisi_qm_stop(qm, QM_SOFT_RESET);
- if (ret) {
- dev_err(&pdev->dev, "Fails to stop QM!\n");
- return ret;
- }
-
-#ifdef CONFIG_CRYPTO_QM_UACCE
- if (qm->use_uacce) {
- ret = uacce_hw_err_isolate(&qm->uacce);
- if (ret) {
- dev_err(&pdev->dev, "Fails to isolate hw err!\n");
- return ret;
- }
- }
-#endif
-
- return 0;
-}
-
-static int hpre_soft_reset(struct hpre *hpre)
-{
- struct hisi_qm *qm = &hpre->qm;
- struct device *dev = &qm->pdev->dev;
- unsigned long long value = 0;
- int ret;
- u32 val;
-
- ret = hisi_qm_reg_test(qm);
- if (ret)
- return ret;
-
- ret = hisi_qm_set_vf_mse(qm, HPRE_DISABLE);
- if (ret) {
- dev_err(dev, "Fails to disable vf mse bit.\n");
- return ret;
- }
-
- ret = hisi_qm_set_msi(qm, HPRE_DISABLE);
- if (ret) {
- dev_err(dev, "Fails to disable peh msi bit.\n");
- return ret;
- }
-
- /* Set qm ecc if dev ecc happened to hold on ooo */
- hisi_qm_set_ecc(qm);
-
- /* OOO register set and check */
- writel(MASTER_GLOBAL_CTRL_SHUTDOWN,
- hpre->qm.io_base + HPRE_MASTER_GLOBAL_CTRL);
-
- /* If bus lock, reset chip */
- ret = readl_relaxed_poll_timeout(hpre->qm.io_base +
- HPRE_MASTER_TRANS_RETURN, val,
- (val == MASTER_TRANS_RETURN_RW),
- HPRE_REG_RD_INTVRL_US,
- HPRE_REG_RD_TMOUT_US);
- if (ret) {
- dev_emerg(dev, "Bus lock! Please reset system.\n");
- return ret;
- }
-
- ret = hisi_qm_set_pf_mse(qm, HPRE_DISABLE);
- if (ret) {
- dev_err(dev, "Fails to disable pf mse bit.\n");
- return ret;
- }
-
- /* The reset related sub-control registers are not in PCI BAR */
- if (ACPI_HANDLE(dev)) {
- acpi_status s;
-
- s = acpi_evaluate_integer(ACPI_HANDLE(dev), "HRST",
- NULL, &value);
- if (ACPI_FAILURE(s)) {
- dev_err(dev, "NO controller reset method!\n");
- return -EIO;
- }
-
- if (value) {
- dev_err(dev, "Reset step %llu failed!\n", value);
- return -EIO;
- }
- } else {
- dev_err(dev, "No reset method!\n");
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int hpre_vf_reset_done(struct pci_dev *pdev)
-{
- struct pci_dev *dev;
- struct hisi_qm *qm;
- struct hpre *hpre;
- int ret = 0;
-
- mutex_lock(&hpre_list_lock);
- list_for_each_entry(hpre, &hpre_list, list) {
- dev = hpre->qm.pdev;
- if (dev == pdev)
- continue;
-
- if (pci_physfn(dev) == pdev) {
- qm = &hpre->qm;
-
- ret = hisi_qm_restart(qm);
- if (ret)
- goto reset_fail;
- }
- }
-
-reset_fail:
- mutex_unlock(&hpre_list_lock);
- return ret;
-}
-
-static int hpre_controller_reset_done(struct hpre *hpre)
-{
- struct hisi_qm *qm = &hpre->qm;
- struct pci_dev *pdev = qm->pdev;
- int ret;
-
- ret = hisi_qm_set_msi(qm, HPRE_ENABLE);
- if (ret) {
- dev_err(&pdev->dev, "Fails to enable peh msi bit!\n");
- return ret;
- }
-
- ret = hisi_qm_set_pf_mse(qm, HPRE_ENABLE);
- if (ret) {
- dev_err(&pdev->dev, "Fails to enable pf mse bit!\n");
- return ret;
- }
-
- ret = hisi_qm_set_vf_mse(qm, HPRE_ENABLE);
- if (ret) {
- dev_err(&pdev->dev, "Fails to enable vf mse bit!\n");
- return ret;
- }
-
- ret = hpre_set_user_domain_and_cache(hpre);
- if (ret)
- return ret;
-
- hisi_qm_restart_prepare(qm);
-
- ret = hisi_qm_restart(qm);
- if (ret) {
- dev_err(&pdev->dev, "Failed to start QM!\n");
- return ret;
- }
-
- if (hpre->num_vfs)
- hpre_vf_q_assign(hpre, hpre->num_vfs);
-
- ret = hpre_vf_reset_done(pdev);
- if (ret) {
- dev_err(&pdev->dev, "Failed to start VFs!\n");
- return -EPERM;
- }
-
- hisi_qm_restart_done(qm);
- hpre_hw_err_init(hpre);
-
- return 0;
-}
-
-static int hpre_controller_reset(struct hpre *hpre)
-{
- struct device *dev = &hpre->qm.pdev->dev;
- int ret;
-
- dev_info(dev, "Controller resetting...\n");
-
- ret = hpre_controller_reset_prepare(hpre);
- if (ret)
- return ret;
-
- ret = hpre_soft_reset(hpre);
- if (ret) {
- dev_err(dev, "Controller reset failed (%d)\n", ret);
- return ret;
- }
-
- ret = hpre_controller_reset_done(hpre);
- if (ret)
- return ret;
-
- clear_bit(HPRE_RESET, &hpre->status);
- dev_info(dev, "Controller reset complete\n");
-
- return 0;
-}
-
-static pci_ers_result_t hpre_slot_reset(struct pci_dev *pdev)
-{
- struct hpre *hpre = pci_get_drvdata(pdev);
- int ret;
-
- if (pdev->is_virtfn)
- return PCI_ERS_RESULT_RECOVERED;
-
- dev_info(&pdev->dev, "Requesting reset due to PCI error\n");
- pci_cleanup_aer_uncorrect_error_status(pdev);
-
- /* reset hpre controller */
- ret = hpre_controller_reset(hpre);
- if (ret) {
- dev_err(&pdev->dev, "hpre controller reset failed (%d)\n",
- ret);
- return PCI_ERS_RESULT_DISCONNECT;
- }
-
- return PCI_ERS_RESULT_RECOVERED;
-}
-
-static void hpre_set_hw_error(struct hpre *hisi_hpre, bool enable)
-{
- struct pci_dev *pdev = hisi_hpre->qm.pdev;
- struct hpre *hpre = pci_get_drvdata(pci_physfn(pdev));
- struct hisi_qm *qm = &hpre->qm;
-
- if (qm->fun_type == QM_HW_VF)
- return;
-
- if (enable) {
- hisi_qm_hw_error_init(qm, QM_BASE_CE,
- QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT,
- 0, QM_DB_RANDOM_INVALID);
- hpre_hw_error_enable(hpre);
- } else {
- hisi_qm_hw_error_uninit(qm);
- hpre_hw_error_disable(hpre);
- }
-}
-
-static int hpre_get_hw_error_status(struct hpre *hpre)
-{
- u32 err_sts;
-
- err_sts = readl(hpre->qm.io_base + HPRE_HAC_INT_STATUS) &
- (HPRE_CORE_ECC_2BIT_ERR | HPRE_OOO_ECC_2BIT_ERR);
- if (err_sts)
- return err_sts;
-
- return 0;
-}
-
-/* check the interrupt is ecc-zbit error or not */
-static int hpre_check_hw_error(struct hpre *hisi_hpre)
-{
- struct pci_dev *pdev = hisi_hpre->qm.pdev;
- struct hpre *hpre = pci_get_drvdata(pci_physfn(pdev));
- struct hisi_qm *qm = &hpre->qm;
- int ret;
-
- if (qm->fun_type == QM_HW_VF)
- return 0;
-
- ret = hisi_qm_get_hw_error_status(qm);
- if (ret)
- return ret;
-
- /* Now the ecc-2bit is ce_err, so this func is always return 0 */
- return hpre_get_hw_error_status(hpre);
-}
-
-static void hpre_reset_prepare(struct pci_dev *pdev)
-{
- struct hpre *hpre = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hpre->qm;
- struct device *dev = &pdev->dev;
- u32 delay = 0;
- int ret;
-
- hpre_set_hw_error(hpre, HPRE_HW_ERROR_IRQ_DISABLE);
-
- while (hpre_check_hw_error(hpre)) {
- msleep(++delay);
- if (delay > HPRE_RESET_WAIT_TIMEOUT)
- return;
- }
-
- ret = hpre_reset_prepare_rdy(hpre);
- if (ret) {
- dev_err(dev, "FLR not ready!\n");
- return;
- }
-
- ret = hpre_vf_reset_prepare(pdev, QM_FLR);
- if (ret) {
- dev_err(&pdev->dev, "Fails to prepare reset!\n");
- return;
- }
-
- ret = hisi_qm_stop(qm, QM_FLR);
- if (ret) {
- dev_err(&pdev->dev, "Fails to stop QM!\n");
- return;
- }
-
- dev_info(dev, "FLR resetting...\n");
-}
-
-static bool hpre_flr_reset_complete(struct pci_dev *pdev)
-{
- struct pci_dev *pf_pdev = pci_physfn(pdev);
- struct hpre *hpre = pci_get_drvdata(pf_pdev);
- struct device *dev = &hpre->qm.pdev->dev;
- u32 id;
-
- pci_read_config_dword(hpre->qm.pdev, PCI_COMMAND, &id);
- if (id == HPRE_PCI_COMMAND_INVALID) {
- dev_err(dev, "Device HPRE can not be used!\n");
- return false;
- }
-
- clear_bit(HPRE_RESET, &hpre->status);
- return true;
-}
-
-static void hpre_reset_done(struct pci_dev *pdev)
-{
- struct hpre *hpre = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hpre->qm;
- struct device *dev = &pdev->dev;
- int ret;
-
- hpre_set_hw_error(hpre, HPRE_HW_ERROR_IRQ_ENABLE);
-
- ret = hisi_qm_restart(qm);
- if (ret) {
- dev_err(dev, "Failed to start QM!\n");
- return;
- }
-
- if (pdev->is_physfn) {
- ret = hpre_set_user_domain_and_cache(hpre);
- if (ret) {
- dev_err(dev, "Failed to start QM!\n");
- goto flr_done;
- }
-
- hpre_hw_err_init(hpre);
-
- if (hpre->num_vfs)
- hpre_vf_q_assign(hpre, hpre->num_vfs);
-
- ret = hpre_vf_reset_done(pdev);
- if (ret) {
- dev_err(&pdev->dev, "Failed to start VFs!\n");
- return;
- }
- }
-
-flr_done:
- if (hpre_flr_reset_complete(pdev))
- dev_info(dev, "FLR reset complete\n");
-}
-
static const struct pci_error_handlers hpre_err_handler = {
- .error_detected = hpre_error_detected,
- .slot_reset = hpre_slot_reset,
+ .error_detected = hisi_qm_dev_err_detected,
+ .slot_reset = hisi_qm_dev_slot_reset,
#ifdef CONFIG_CRYPTO_QM_UACCE
- .reset_prepare = hpre_reset_prepare,
- .reset_done = hpre_reset_done,
+ .reset_prepare = hisi_qm_reset_prepare,
+ .reset_done = hisi_qm_reset_done,
#endif
};
@@ -1605,7 +954,7 @@ static void hpre_reset_done(struct pci_dev *pdev)
.remove = hpre_remove,
.sriov_configure = hpre_sriov_configure,
.err_handler = &hpre_err_handler,
- .shutdown = hpre_shutdown,
+ .shutdown = hisi_qm_dev_shutdown,
};
static void hpre_register_debugfs(void)
@@ -1618,20 +967,19 @@ static void hpre_register_debugfs(void)
hpre_debugfs_root = NULL;
}
-static void hpre_unregister_debugfs(void)
-{
- debugfs_remove_recursive(hpre_debugfs_root);
-}
-
static int __init hpre_init(void)
{
int ret;
+ INIT_LIST_HEAD(&hpre_devices.list);
+ mutex_init(&hpre_devices.lock);
+ hpre_devices.check = NULL;
+
hpre_register_debugfs();
ret = pci_register_driver(&hpre_pci_driver);
if (ret) {
- hpre_unregister_debugfs();
+ debugfs_remove_recursive(hpre_debugfs_root);
pr_err("hpre: can't register hisi hpre driver.\n");
}
@@ -1641,7 +989,7 @@ static int __init hpre_init(void)
static void __exit hpre_exit(void)
{
pci_unregister_driver(&hpre_pci_driver);
- hpre_unregister_debugfs();
+ debugfs_remove_recursive(hpre_debugfs_root);
}
module_init(hpre_init);
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index 4bd7739..e89a770 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018-2019 HiSilicon Limited. */
#include <asm/page.h>
+#include <linux/acpi.h>
+#include <linux/aer.h>
#include <linux/bitmap.h>
#include <linux/debugfs.h>
#include <linux/dma-mapping.h>
@@ -117,7 +119,7 @@
#define QM_ABNORMAL_INT_MASK 0x100004
#define QM_HW_ERROR_IRQ_DISABLE GENMASK(12, 0)
#define QM_ABNORMAL_INT_STATUS 0x100008
-#define QM_ABNORMAL_INT_SET 0x10000c
+#define QM_PF_ABNORMAL_INT_SET 0x10000c
#define QM_ABNORMAL_INF00 0x100010
#define QM_FIFO_OVERFLOW_TYPE 0xc0
#define QM_FIFO_OVERFLOW_VF 0x3f
@@ -167,17 +169,30 @@
#define TASK_TIMEOUT 10000
#define WAIT_PERIOD 20
-#define MAX_WAIT_COUNTS 1000
#define WAIT_PERIOD_US_MAX 200
#define WAIT_PERIOD_US_MIN 100
-#define MAX_WAIT_TASK_COUNTS 10
-
-#define QM_RAS_NFE_MBIT_DISABLE ~QM_ECC_MBIT
+#define REMOVE_WAIT_DELAY 10
+#define MAX_WAIT_COUNTS 1000
+#define DELAY_PERIOD_MS 100
+#define QM_DEV_RESET_STATUS 0
+#define QM_RESET_WAIT_TIMEOUT 400
+#define QM_PCI_COMMAND_INVALID 0xFFFFFFFF
+#define MASTER_GLOBAL_CTRL_SHUTDOWN 0x1
+#define MASTER_TRANS_RETURN_RW 3
+#define MASTER_TRANS_RETURN 0x300150
+#define MASTER_GLOBAL_CTRL 0x300000
+#define QM_REG_RD_INTVRL_US 10
+#define QM_REG_RD_TMOUT_US 1000
+#define AM_CFG_PORT_RD_EN 0x300018
#define AM_CFG_PORT_WR_EN 0x30001C
-#define AM_CFG_PORT_WR_EN_VALUE 0xFFFF
+#define QM_RAS_NFE_MBIT_DISABLE ~QM_ECC_MBIT
#define AM_ROB_ECC_INT_STS 0x300104
#define ROB_ECC_ERR_MULTPL BIT(1)
+#define QM_DBG_READ_LEN 256
+#define QM_DBG_WRITE_LEN 1024
+#define QM_DBG_SHOW_SHIFT 16
+
#define QM_MK_CQC_DW3_V1(hop_num, pg_sz, buf_sz, cqe_sz) \
(((hop_num) << QM_CQ_HOP_NUM_SHIFT) | \
((pg_sz) << QM_CQ_PAGE_SIZE_SHIFT) | \
@@ -219,6 +234,12 @@ enum vft_type {
CQC_VFT,
};
+struct hisi_qm_resource {
+ struct hisi_qm *qm;
+ int distance;
+ struct list_head list;
+};
+
struct hisi_qm_hw_ops {
int (*get_vft)(struct hisi_qm *qm, u32 *base, u32 *number);
void (*qm_db)(struct hisi_qm *qm, u16 qn,
@@ -237,11 +258,6 @@ struct hisi_qm_hw_ops {
[QM_STATE] = "qm_state",
};
-struct hisi_qm_hw_error {
- u32 int_msk;
- const char *msg;
-};
-
static const struct hisi_qm_hw_error qm_hw_error[] = {
{ .int_msk = BIT(0), .msg = "qm_axi_rresp" },
{ .int_msk = BIT(1), .msg = "qm_axi_bresp" },
@@ -1115,13 +1131,20 @@ static void qm_hw_error_init_v2(struct hisi_qm *qm, u32 ce, u32 nfe, u32 fe,
{
u32 irq_enable = ce | nfe | fe | msi;
u32 irq_unmask = ~irq_enable;
+ u32 error_status;
qm->error_mask = ce | nfe | fe;
qm->msi_mask = msi;
/* clear QM hw residual error source */
- writel(QM_ABNORMAL_INT_SOURCE_CLR, qm->io_base +
- QM_ABNORMAL_INT_SOURCE);
+ error_status = readl(qm->io_base + QM_ABNORMAL_INT_STATUS);
+ if (!(qm->hw_status & BIT(QM_DEV_RESET_STATUS))
+ || !error_status)
+ error_status = QM_ABNORMAL_INT_SOURCE_CLR;
+ else
+ error_status &= qm->error_mask;
+
+ writel(error_status, qm->io_base + QM_ABNORMAL_INT_SOURCE);
/* configure error type */
writel(ce, qm->io_base + QM_RAS_CE_ENABLE);
@@ -1190,9 +1213,7 @@ static pci_ers_result_t qm_hw_error_handle_v2(struct hisi_qm *qm)
error_status = qm->error_mask & tmp;
if (error_status) {
if (error_status & QM_ECC_MBIT)
- qm->err_ini.is_qm_ecc_mbit = 1;
- else
- qm->err_ini.is_qm_ecc_mbit = 0;
+ qm->err_ini.err_info.is_qm_ecc_mbit = true;
qm_log_hw_error(qm, error_status);
return PCI_ERS_RESULT_NEED_RESET;
@@ -1513,7 +1534,8 @@ static void qm_qp_has_no_task(struct hisi_qp *qp)
int i = 0;
int ret;
- if (qp->qm->err_ini.is_qm_ecc_mbit || qp->qm->err_ini.is_dev_ecc_mbit)
+ if (qp->qm->err_ini.err_info.is_qm_ecc_mbit ||
+ qp->qm->err_ini.err_info.is_dev_ecc_mbit)
return;
addr = qm_ctx_alloc(qp->qm, size, &dma_addr);
@@ -1967,6 +1989,74 @@ static int qm_unregister_uacce(struct hisi_qm *qm)
#endif
/**
+ * hisi_qm_frozen() - Try to freeze the QM to cut off continuous queue
+ * requests. If there is a user on the QM, return failure without doing
+ * anything.
+ * @qm: The qm to be frozen.
+ *
+ * This function freezes the QM, after which SRIOV can be disabled.
+ */
+static int hisi_qm_frozen(struct hisi_qm *qm)
+{
+ int count, i;
+
+ down_write(&qm->qps_lock);
+ for (i = 0, count = 0; i < qm->qp_num; i++)
+ if (!qm->qp_array[i])
+ count++;
+
+ if (count == qm->qp_num) {
+ bitmap_set(qm->qp_bitmap, 0, qm->qp_num);
+ } else {
+ up_write(&qm->qps_lock);
+ return -EBUSY;
+ }
+ up_write(&qm->qps_lock);
+
+ return 0;
+}
+
+static int qm_try_frozen_vfs(struct pci_dev *pdev,
+ struct hisi_qm_list *qm_list)
+{
+ struct hisi_qm *qm, *vf_qm;
+ struct pci_dev *dev;
+ int ret = 0;
+
+ if (!qm_list || !pdev)
+ return -EINVAL;
+
+ /* Try to freeze all the VFs before disabling SRIOV */
+ mutex_lock(&qm_list->lock);
+ list_for_each_entry(qm, &qm_list->list, list) {
+ dev = qm->pdev;
+ if (dev == pdev)
+ continue;
+ if (pci_physfn(dev) == pdev) {
+ vf_qm = pci_get_drvdata(dev);
+ ret = hisi_qm_frozen(vf_qm);
+ if (ret)
+ goto frozen_fail;
+ }
+ }
+
+frozen_fail:
+ mutex_unlock(&qm_list->lock);
+ return ret;
+}
+
+void hisi_qm_remove_wait_delay(struct hisi_qm *qm,
+ struct hisi_qm_list *qm_list)
+{
+ while (hisi_qm_frozen(qm) ||
+ ((qm->fun_type == QM_HW_PF) &&
+ qm_try_frozen_vfs(qm->pdev, qm_list))) {
+ msleep(WAIT_PERIOD);
+ }
+ udelay(REMOVE_WAIT_DELAY);
+}
+EXPORT_SYMBOL_GPL(hisi_qm_remove_wait_delay);
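
For context, a minimal sketch of how a driver's remove path might use this
helper, assuming drvdata holds the qm directly (as hisi_qm_pre_init()
arranges) and the hpre_devices list initialized in hpre_init() above; the
teardown order is illustrative, not the literal hpre_remove() from this patch:

	static void example_remove(struct pci_dev *pdev)
	{
		struct hisi_qm *qm = pci_get_drvdata(pdev);

		/* block until no user task is left on this device or its VFs */
		hisi_qm_remove_wait_delay(qm, &hpre_devices);

		if (qm->fun_type == QM_HW_PF && qm->vfs_num)
			hisi_qm_sriov_disable(pdev, NULL);

		hisi_qm_dev_err_uninit(qm);
		hisi_qm_stop(qm, QM_NORMAL);
		hisi_qm_uninit(qm);
	}
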
+
+/**
* hisi_qm_init() - Initialize configures about qm.
* @qm: The qm needed init.
*
@@ -2107,32 +2197,21 @@ void hisi_qm_uninit(struct hisi_qm *qm)
EXPORT_SYMBOL_GPL(hisi_qm_uninit);
/**
- * hisi_qm_frozen() - Try to froze QM to cut continuous queue request. If
- * there is user on the QM, return failure without doing anything.
- * @qm: The qm needed to be fronzen.
+ * hisi_qm_dev_shutdown() - Shut down the device.
+ * @pdev: The device to be shut down.
*
- * This function frozes QM, then we can do SRIOV disabling.
+ * This function stops the qm when the OS shuts down or reboots.
*/
-int hisi_qm_frozen(struct hisi_qm *qm)
+void hisi_qm_dev_shutdown(struct pci_dev *pdev)
{
- int count, i;
-
- down_write(&qm->qps_lock);
- for (i = 0, count = 0; i < qm->qp_num; i++)
- if (!qm->qp_array[i])
- count++;
-
- if (count == qm->qp_num) {
- bitmap_set(qm->qp_bitmap, 0, qm->qp_num);
- } else {
- up_write(&qm->qps_lock);
- return -EBUSY;
- }
- up_write(&qm->qps_lock);
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
+ int ret;
- return 0;
+ ret = hisi_qm_stop(qm, QM_NORMAL);
+ if (ret)
+ dev_err(&pdev->dev, "Fail to stop qm in shutdown!\n");
}
-EXPORT_SYMBOL_GPL(hisi_qm_frozen);
+EXPORT_SYMBOL_GPL(hisi_qm_dev_shutdown);
/**
* hisi_qm_get_vft() - Get vft from a qm.
@@ -2174,7 +2253,7 @@ int hisi_qm_get_vft(struct hisi_qm *qm, u32 *base, u32 *number)
* Assign queues A~B to VF: hisi_qm_set_vft(qm, 2, A, B - A + 1)
* (VF function number 0x2)
*/
-int hisi_qm_set_vft(struct hisi_qm *qm, u32 fun_num, u32 base,
+static int hisi_qm_set_vft(struct hisi_qm *qm, u32 fun_num, u32 base,
u32 number)
{
u32 max_q_num = qm->ctrl_q_num;
@@ -2185,7 +2264,6 @@ int hisi_qm_set_vft(struct hisi_qm *qm, u32 fun_num, u32 base,
return qm_set_sqc_cqc_vft(qm, fun_num, base, number);
}
-EXPORT_SYMBOL_GPL(hisi_qm_set_vft);
static void qm_init_eq_aeq_status(struct hisi_qm *qm)
{
@@ -2483,6 +2561,28 @@ static int qm_stop_started_qp(struct hisi_qm *qm)
}
/**
+ * qm_clear_queues() - Clear the queue memory in a qm.
+ * @qm: The qm whose queue memory needs clearing.
+ *
+ * This function clears the memory of all queues in a qm. An accelerator
+ * reset can use this to clear the queues.
+ */
+static void qm_clear_queues(struct hisi_qm *qm)
+{
+ struct hisi_qp *qp;
+ int i;
+
+ for (i = 0; i < qm->qp_num; i++) {
+ qp = qm->qp_array[i];
+ if (qp)
+ /* the device state uses the last page */
+ memset(qp->qdma.va, 0, qp->qdma.size - PAGE_SIZE);
+ }
+
+ memset(qm->qdma.va, 0, qm->qdma.size);
+}
+
+/**
* hisi_qm_stop() - Stop a qm.
* @qm: The qm which will be stopped.
* @r: The reason to stop qm.
@@ -2528,7 +2628,7 @@ int hisi_qm_stop(struct hisi_qm *qm, enum qm_stop_reason r)
}
}
- hisi_qm_clear_queues(qm);
+ qm_clear_queues(qm);
atomic_set(&qm->status.flags, QM_STOP);
err_unlock:
@@ -2589,7 +2689,7 @@ int hisi_qm_debug_init(struct hisi_qm *qm)
goto failed_to_create;
}
- qm_regs = debugfs_create_file("qm_regs", 0444, qm->debug.qm_d, qm,
+ qm_regs = debugfs_create_file("regs", 0444, qm->debug.qm_d, qm,
&qm_regs_fops);
if (IS_ERR(qm_regs)) {
ret = -ENOENT;
@@ -2605,7 +2705,7 @@ int hisi_qm_debug_init(struct hisi_qm *qm)
EXPORT_SYMBOL_GPL(hisi_qm_debug_init);
/**
- * hisi_qm_hw_error_init() - Configure qm hardware error report method.
+ * qm_hw_error_init() - Configure qm hardware error report method.
* @qm: The qm which we want to configure.
* @ce: Correctable error configure.
* @nfe: Non-fatal error configure.
@@ -2622,9 +2722,13 @@ int hisi_qm_debug_init(struct hisi_qm *qm)
* related report methods. Error report will be masked if related error bit
* does not configure.
*/
-void hisi_qm_hw_error_init(struct hisi_qm *qm, u32 ce, u32 nfe, u32 fe,
- u32 msi)
+static void qm_hw_error_init(struct hisi_qm *qm)
{
+ u32 nfe = qm->err_ini.err_info.nfe;
+ u32 msi = qm->err_ini.err_info.msi;
+ u32 ce = qm->err_ini.err_info.ce;
+ u32 fe = qm->err_ini.err_info.fe;
+
if (!qm->ops->hw_error_init) {
dev_err(&qm->pdev->dev,
"QM version %d doesn't support hw error handling!\n",
@@ -2634,9 +2738,8 @@ void hisi_qm_hw_error_init(struct hisi_qm *qm, u32 ce, u32 nfe, u32 fe,
qm->ops->hw_error_init(qm, ce, nfe, fe, msi);
}
-EXPORT_SYMBOL_GPL(hisi_qm_hw_error_init);
-void hisi_qm_hw_error_uninit(struct hisi_qm *qm)
+static void qm_hw_error_uninit(struct hisi_qm *qm)
{
if (!qm->ops->hw_error_uninit) {
dev_err(&qm->pdev->dev,
@@ -2647,15 +2750,14 @@ void hisi_qm_hw_error_uninit(struct hisi_qm *qm)
qm->ops->hw_error_uninit(qm);
}
-EXPORT_SYMBOL_GPL(hisi_qm_hw_error_uninit);
/**
- * hisi_qm_hw_error_handle() - Handle qm non-fatal hardware errors.
+ * qm_hw_error_handle() - Handle qm non-fatal hardware errors.
* @qm: The qm which has non-fatal hardware errors.
*
* Accelerators use this function to handle qm non-fatal hardware errors.
*/
-pci_ers_result_t hisi_qm_hw_error_handle(struct hisi_qm *qm)
+static pci_ers_result_t qm_hw_error_handle(struct hisi_qm *qm)
{
if (!qm->ops->hw_error_handle) {
dev_err(&qm->pdev->dev,
@@ -2666,104 +2768,19 @@ pci_ers_result_t hisi_qm_hw_error_handle(struct hisi_qm *qm)
return qm->ops->hw_error_handle(qm);
}
-EXPORT_SYMBOL_GPL(hisi_qm_hw_error_handle);
-
-/**
- * hisi_qm_clear_queues() - Clear memory of queues in a qm.
- * @qm: The qm which memory needs clear.
- *
- * This function clears all queues memory in a qm. Reset of accelerator can
- * use this to clear queues.
- */
-void hisi_qm_clear_queues(struct hisi_qm *qm)
-{
- struct hisi_qp *qp;
- int i;
-
- for (i = 0; i < qm->qp_num; i++) {
- qp = qm->qp_array[i];
- if (qp)
- /* device state use the last page */
- memset(qp->qdma.va, 0, qp->qdma.size - PAGE_SIZE);
- }
-
- memset(qm->qdma.va, 0, qm->qdma.size);
-}
-EXPORT_SYMBOL_GPL(hisi_qm_clear_queues);
-
-/**
- * hisi_qm_get_hw_version() - Get hardware version of a qm.
- * @pdev: The device which hardware version we want to get.
- *
- * This function gets the hardware version of a qm. Return QM_HW_UNKNOWN
- * if the hardware version is not supported.
- */
-enum qm_hw_ver hisi_qm_get_hw_version(struct pci_dev *pdev)
-{
- switch (pdev->revision) {
- case QM_HW_V1:
- case QM_HW_V2:
- return pdev->revision;
- default:
- return QM_HW_UNKNOWN;
- }
-}
-EXPORT_SYMBOL_GPL(hisi_qm_get_hw_version);
-int hisi_qm_get_hw_error_status(struct hisi_qm *qm)
+static int qm_get_hw_error_status(struct hisi_qm *qm)
{
u32 err_sts;
- err_sts = readl(qm->io_base + QM_ABNORMAL_INT_STATUS) &
- QM_ECC_MBIT;
+ err_sts = readl(qm->io_base + QM_ABNORMAL_INT_STATUS) & QM_ECC_MBIT;
if (err_sts)
return err_sts;
return 0;
}
-EXPORT_SYMBOL_GPL(hisi_qm_get_hw_error_status);
-
-static pci_ers_result_t hisi_qm_dev_err_handle(struct hisi_qm *qm)
-{
- u32 err_sts;
-
- if (!qm->err_ini.get_dev_hw_err_status ||
- !qm->err_ini.log_dev_hw_err)
- return PCI_ERS_RESULT_RECOVERED;
-
- /* read err sts */
- err_sts = qm->err_ini.get_dev_hw_err_status(qm);
- if (err_sts) {
- if (err_sts & qm->err_ini.ecc_2bits_mask)
- qm->err_ini.is_dev_ecc_mbit = 1;
- else
- qm->err_ini.is_dev_ecc_mbit = 0;
-
- qm->err_ini.log_dev_hw_err(qm, err_sts);
- return PCI_ERS_RESULT_NEED_RESET;
- }
-
- return PCI_ERS_RESULT_RECOVERED;
-}
-
-pci_ers_result_t hisi_qm_process_dev_error(struct pci_dev *pdev)
-{
- struct hisi_qm *qm = pci_get_drvdata(pdev);
- pci_ers_result_t qm_ret, dev_ret;
-
- /* log qm error */
- qm_ret = hisi_qm_hw_error_handle(qm);
-
- /* log device error */
- dev_ret = hisi_qm_dev_err_handle(qm);
-
- return (qm_ret == PCI_ERS_RESULT_NEED_RESET ||
- dev_ret == PCI_ERS_RESULT_NEED_RESET) ?
- PCI_ERS_RESULT_NEED_RESET : PCI_ERS_RESULT_RECOVERED;
-}
-EXPORT_SYMBOL_GPL(hisi_qm_process_dev_error);
-int hisi_qm_reg_test(struct hisi_qm *qm)
+static int qm_reg_test(struct hisi_qm *qm)
{
struct pci_dev *pdev = qm->pdev;
int ret;
@@ -2782,16 +2799,13 @@ int hisi_qm_reg_test(struct hisi_qm *qm)
ret = readl_relaxed_poll_timeout(qm->io_base + QM_PEH_VENDOR_ID, val,
(val == PCI_VENDOR_ID_HUAWEI),
POLL_PERIOD, POLL_TIMEOUT);
- if (ret) {
+ if (ret)
dev_err(&pdev->dev, "Fails to read QM reg in the second time!\n");
- return ret;
- }
return ret;
}
-EXPORT_SYMBOL_GPL(hisi_qm_reg_test);
-int hisi_qm_set_pf_mse(struct hisi_qm *qm, bool set)
+static int qm_set_pf_mse(struct hisi_qm *qm, bool set)
{
struct pci_dev *pdev = qm->pdev;
u16 cmd;
@@ -2814,9 +2828,8 @@ int hisi_qm_set_pf_mse(struct hisi_qm *qm, bool set)
return -ETIMEDOUT;
}
-EXPORT_SYMBOL_GPL(hisi_qm_set_pf_mse);
-int hisi_qm_set_vf_mse(struct hisi_qm *qm, bool set)
+static int qm_set_vf_mse(struct hisi_qm *qm, bool set)
{
struct pci_dev *pdev = qm->pdev;
u16 sriov_ctrl;
@@ -2833,8 +2846,8 @@ int hisi_qm_set_vf_mse(struct hisi_qm *qm, bool set)
for (i = 0; i < MAX_WAIT_COUNTS; i++) {
pci_read_config_word(pdev, pos + PCI_SRIOV_CTRL, &sriov_ctrl);
- if (set == ((sriov_ctrl & PCI_SRIOV_CTRL_MSE) >>
- PEH_SRIOV_CTRL_VF_MSE_SHIFT))
+ if (set == (sriov_ctrl & PCI_SRIOV_CTRL_MSE) >>
+ PEH_SRIOV_CTRL_VF_MSE_SHIFT)
return 0;
udelay(1);
@@ -2842,9 +2855,8 @@ int hisi_qm_set_vf_mse(struct hisi_qm *qm, bool set)
return -ETIMEDOUT;
}
-EXPORT_SYMBOL_GPL(hisi_qm_set_vf_mse);
-int hisi_qm_set_msi(struct hisi_qm *qm, bool set)
+static int qm_set_msi(struct hisi_qm *qm, bool set)
{
struct pci_dev *pdev = qm->pdev;
@@ -2854,7 +2866,8 @@ int hisi_qm_set_msi(struct hisi_qm *qm, bool set)
} else {
pci_write_config_dword(pdev, pdev->msi_cap +
PCI_MSI_MASK_64, PEH_MSI_DISABLE);
- if (qm->err_ini.is_qm_ecc_mbit || qm->err_ini.is_dev_ecc_mbit)
+ if (qm->err_ini.err_info.is_qm_ecc_mbit ||
+ qm->err_ini.err_info.is_dev_ecc_mbit)
return 0;
mdelay(1);
@@ -2864,64 +2877,768 @@ int hisi_qm_set_msi(struct hisi_qm *qm, bool set)
return 0;
}
-EXPORT_SYMBOL_GPL(hisi_qm_set_msi);
-void hisi_qm_set_ecc(struct hisi_qm *qm)
+void hisi_qm_free_qps(struct hisi_qp **qps, int qp_num)
{
- u32 nfe_enb;
+ int i;
- if ((!qm->err_ini.is_qm_ecc_mbit && !qm->err_ini.is_dev_ecc_mbit) ||
- (qm->err_ini.is_qm_ecc_mbit && !qm->err_ini.inject_dev_hw_err) ||
- (qm->err_ini.is_dev_ecc_mbit && qm->err_ini.inject_dev_hw_err))
+ if (!qps || qp_num < 0)
return;
- if (qm->err_ini.inject_dev_hw_err)
- qm->err_ini.inject_dev_hw_err(qm);
- else {
- nfe_enb = readl(qm->io_base + QM_RAS_NFE_ENABLE);
- writel(nfe_enb & QM_RAS_NFE_MBIT_DISABLE,
- qm->io_base + QM_RAS_NFE_ENABLE);
- writel(QM_ECC_MBIT, qm->io_base + QM_ABNORMAL_INT_SET);
- qm->err_ini.is_qm_ecc_mbit = 1;
+ for (i = qp_num - 1; i >= 0; i--)
+ hisi_qm_release_qp(qps[i]);
+}
+EXPORT_SYMBOL_GPL(hisi_qm_free_qps);
+
+static void free_list(struct list_head *head)
+{
+ struct hisi_qm_resource *res, *tmp;
+
+ list_for_each_entry_safe(res, tmp, head, list) {
+ list_del(&res->list);
+ kfree(res);
}
}
-EXPORT_SYMBOL_GPL(hisi_qm_set_ecc);
-void hisi_qm_restart_prepare(struct hisi_qm *qm)
+static int hisi_qm_sort_devices(int node, struct list_head *head,
+ struct hisi_qm_list *qm_list)
{
- if (!qm->err_ini.is_qm_ecc_mbit && !qm->err_ini.is_dev_ecc_mbit)
- return;
+ struct hisi_qm_resource *res, *tmp;
+ struct hisi_qm *qm;
+ struct list_head *n;
+ struct device *dev;
+ int dev_node = 0;
+
+ list_for_each_entry(qm, &qm_list->list, list) {
+ dev = &qm->pdev->dev;
+
+ if (IS_ENABLED(CONFIG_NUMA)) {
+ dev_node = dev->numa_node;
+ if (dev_node < 0)
+ dev_node = 0;
+ }
- /* close AM wr msi port */
- writel(qm->err_ini.qm_wr_port, qm->io_base + AM_CFG_PORT_WR_EN);
+ if (qm_list->check && !qm_list->check(qm))
+ continue;
+
+ res = kzalloc(sizeof(*res), GFP_KERNEL);
+ if (!res)
+ return -ENOMEM;
- /* clear dev ecc 2bit error source */
- if (qm->err_ini.clear_dev_hw_err_status) {
- qm->err_ini.clear_dev_hw_err_status(qm,
- qm->err_ini.ecc_2bits_mask);
+ res->qm = qm;
+ res->distance = node_distance(dev_node, node);
+ n = head;
+ list_for_each_entry(tmp, head, list) {
+ if (res->distance < tmp->distance) {
+ n = &tmp->list;
+ break;
+ }
+ }
+ list_add_tail(&res->list, n);
}
- /* clear QM ecc mbit error source */
- writel(QM_ECC_MBIT, qm->io_base + QM_ABNORMAL_INT_SOURCE);
+ return 0;
+}
- /* clear AM Reorder Buffer ecc mbit source */
- writel(ROB_ECC_ERR_MULTPL, qm->io_base + AM_ROB_ECC_INT_STS);
+int hisi_qm_alloc_qps_node(int node, struct hisi_qm_list *qm_list,
+ struct hisi_qp **qps, int qp_num, u8 alg_type)
+{
+ struct hisi_qm_resource *tmp;
+ int ret = -ENODEV;
+ LIST_HEAD(head);
+ int i;
- if (qm->err_ini.open_axi_master_ooo)
- qm->err_ini.open_axi_master_ooo(qm);
+ if (!qps || !qm_list || qp_num <= 0)
+ return -EINVAL;
+
+ mutex_lock(&qm_list->lock);
+ if (hisi_qm_sort_devices(node, &head, qm_list)) {
+ mutex_unlock(&qm_list->lock);
+ goto err;
+ }
+
+ list_for_each_entry(tmp, &head, list) {
+ for (i = 0; i < qp_num; i++) {
+ qps[i] = hisi_qm_create_qp(tmp->qm, alg_type);
+ if (IS_ERR(qps[i])) {
+ hisi_qm_free_qps(qps, i);
+ break;
+ }
+ }
+
+ if (i == qp_num) {
+ ret = 0;
+ break;
+ }
+ }
+
+ mutex_unlock(&qm_list->lock);
+ if (ret)
+ pr_info("Failed to create qps, node[%d], alg[%d], qp[%d]!\n",
+ node, alg_type, qp_num);
+
+err:
+ free_list(&head);
+ return ret;
}
-EXPORT_SYMBOL_GPL(hisi_qm_restart_prepare);
+EXPORT_SYMBOL_GPL(hisi_qm_alloc_qps_node);
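
The new allocator tries the least-distant device first and releases any
partially created queues itself, so callers see all-or-nothing results. A
minimal sketch, assuming the rde_devices list from the RDE hunk below and two
queues of alg_type 0:

	struct hisi_qp *qps[2];
	int node = cpu_to_node(smp_processor_id());
	int ret;

	ret = hisi_qm_alloc_qps_node(node, &rde_devices, qps, 2, 0);
	if (ret)
		return ret;
	/* ... use qps[0] and qps[1] ... */
	hisi_qm_free_qps(qps, 2);
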
-void hisi_qm_restart_done(struct hisi_qm *qm)
+static int qm_vf_q_assign(struct hisi_qm *qm, u32 num_vfs)
{
- if (!qm->err_ini.is_qm_ecc_mbit && !qm->err_ini.is_dev_ecc_mbit)
- return;
+ u32 q_num, i, remain_q_num;
+ u32 q_base = qm->qp_num;
+ int ret;
+
+ if (!num_vfs)
+ return -EINVAL;
+
+ remain_q_num = qm->ctrl_q_num - qm->qp_num;
+
+ /* If the remaining queues are not enough, return an error. */
+ if (qm->ctrl_q_num < qm->qp_num || remain_q_num < num_vfs)
+ return -EINVAL;
+
+ q_num = remain_q_num / num_vfs;
+ for (i = 1; i <= num_vfs; i++) {
+ if (i == num_vfs)
+ q_num += remain_q_num % num_vfs;
+ ret = hisi_qm_set_vft(qm, i, q_base, q_num);
+ if (ret)
+ return ret;
+ q_base += q_num;
+ }
+
+ return 0;
+}
+
+static int qm_clear_vft_config(struct hisi_qm *qm)
+{
+ int ret;
+ u32 i;
+
+ for (i = 1; i <= qm->vfs_num; i++) {
+ ret = hisi_qm_set_vft(qm, i, 0, 0);
+ if (ret)
+ return ret;
+ }
+ qm->vfs_num = 0;
+
+ return 0;
+}
+
+int hisi_qm_sriov_enable(struct pci_dev *pdev, int max_vfs)
+{
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
+ int pre_existing_vfs, num_vfs, ret;
+
+ pre_existing_vfs = pci_num_vf(pdev);
+ if (pre_existing_vfs) {
+ pci_err(pdev,
+ "Can't enable VF. Please disable pre-enabled VFs!\n");
+ return 0;
+ }
+
+ num_vfs = min_t(int, max_vfs, QM_MAX_VFS_NUM);
+ ret = qm_vf_q_assign(qm, num_vfs);
+ if (ret) {
+ pci_err(pdev, "Can't assign queues for VF!\n");
+ return ret;
+ }
+
+ qm->vfs_num = num_vfs;
+
+ ret = pci_enable_sriov(pdev, num_vfs);
+ if (ret) {
+ pci_err(pdev, "Can't enable VF!\n");
+ qm_clear_vft_config(qm);
+ return ret;
+ }
+
+ pci_info(pdev, "VF enabled, vfs_num(=%d)!\n", num_vfs);
+
+ return num_vfs;
+}
+EXPORT_SYMBOL_GPL(hisi_qm_sriov_enable);
+
+int hisi_qm_sriov_disable(struct pci_dev *pdev, struct hisi_qm_list *qm_list)
+{
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
+
+ if (pci_vfs_assigned(pdev)) {
+ pci_err(pdev, "Failed to disable VFs as VFs are assigned!\n");
+ return -EPERM;
+ }
+
+ /* While a VF is in use, SRIOV cannot be disabled.
+ * However, the behavior is uncertain if the device is undergoing
+ * a hardware reset at the same time.
+ */
+ if (qm_list && qm_try_frozen_vfs(pdev, qm_list)) {
+ pci_err(pdev, "Uacce user space task is using its VF!\n");
+ return -EBUSY;
+ }
+
+ /* pci_disable_sriov() triggers each VF driver's remove() to free VF resources */
+ pci_disable_sriov(pdev);
+ return qm_clear_vft_config(qm);
+}
+EXPORT_SYMBOL_GPL(hisi_qm_sriov_disable);
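
With these two helpers, a driver's .sriov_configure callback collapses to a
thin dispatcher; a sketch, again assuming the hpre_devices list above:

	static int example_sriov_configure(struct pci_dev *pdev, int num_vfs)
	{
		if (num_vfs == 0)
			return hisi_qm_sriov_disable(pdev, &hpre_devices);
	
		return hisi_qm_sriov_enable(pdev, num_vfs);
	}
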
+
+void hisi_qm_dev_err_init(struct hisi_qm *qm)
+{
+ struct pci_dev *pdev = qm->pdev;
+ struct hisi_qm *pf_qm = pci_get_drvdata(pci_physfn(pdev));
+
+ if (pf_qm->fun_type == QM_HW_VF)
+ return;
+
+ qm_hw_error_init(pf_qm);
+ pf_qm->err_ini.hw_err_enable(pf_qm);
+}
+EXPORT_SYMBOL_GPL(hisi_qm_dev_err_init);
+
+/**
+ * hisi_qm_dev_err_uninit() - Uninitialize device error configuration.
+ * @qm: The qm for which we want to do error uninitialization.
+ *
+ * Uninitialize the QM and device error related configuration. It may be
+ * called by PF or VF; the caller should handle each case explicitly.
+ */
+void hisi_qm_dev_err_uninit(struct hisi_qm *qm)
+{
+ struct pci_dev *pdev = qm->pdev;
+ struct hisi_qm *pf_qm = pci_get_drvdata(pci_physfn(pdev));
+
+ if (pf_qm->fun_type == QM_HW_VF)
+ return;
+
+ qm_hw_error_uninit(pf_qm);
+ pf_qm->err_ini.hw_err_disable(pf_qm);
+}
+EXPORT_SYMBOL_GPL(hisi_qm_dev_err_uninit);
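
Both helpers resolve the physical function's qm via pci_physfn() and return
early when that qm is itself a VF (for example, a VF passed through to a
guest with no PF on the same host), so PF and VF probe paths can call them
unconditionally. A sketch of the intended pairing:

	/* in probe, once hisi_qm_init()/hisi_qm_start() have succeeded */
	hisi_qm_dev_err_init(qm);

	/* mirrored in the error path and in remove() */
	hisi_qm_dev_err_uninit(qm);
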
+
+static pci_ers_result_t qm_dev_err_handle(struct hisi_qm *qm)
+{
+ u32 err_sts;
+
+ /* read err sts */
+ err_sts = qm->err_ini.get_dev_hw_err_status(qm);
+ if (err_sts) {
+ if (err_sts & qm->err_ini.err_info.ecc_2bits_mask)
+ qm->err_ini.err_info.is_dev_ecc_mbit = true;
+
+ qm->err_ini.log_dev_hw_err(qm, err_sts);
+ return PCI_ERS_RESULT_NEED_RESET;
+ }
+
+ return PCI_ERS_RESULT_RECOVERED;
+}
+
+pci_ers_result_t hisi_qm_process_dev_error(struct pci_dev *pdev)
+{
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
+ pci_ers_result_t qm_ret, dev_ret;
+
+ /* log qm error */
+ qm_ret = qm_hw_error_handle(qm);
+
+ /* log device error */
+ dev_ret = qm_dev_err_handle(qm);
+
+ return (qm_ret == PCI_ERS_RESULT_NEED_RESET ||
+ dev_ret == PCI_ERS_RESULT_NEED_RESET) ?
+ PCI_ERS_RESULT_NEED_RESET : PCI_ERS_RESULT_RECOVERED;
+}
+EXPORT_SYMBOL_GPL(hisi_qm_process_dev_error);
+
+pci_ers_result_t hisi_qm_dev_err_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ if (pdev->is_virtfn)
+ return PCI_ERS_RESULT_NONE;
+
+ pci_info(pdev, "PCI error detected, state(=%d)!!\n", state);
+ if (state == pci_channel_io_perm_failure)
+ return PCI_ERS_RESULT_DISCONNECT;
+
+ return hisi_qm_process_dev_error(pdev);
+}
+EXPORT_SYMBOL_GPL(hisi_qm_dev_err_detected);
+
+static int qm_vf_reset_prepare(struct pci_dev *pdev,
+ struct hisi_qm_list *qm_list,
+ enum qm_stop_reason stop_reason)
+{
+ struct pci_dev *dev;
+ struct hisi_qm *qm;
+ int ret = 0;
+
+ mutex_lock(&qm_list->lock);
+ list_for_each_entry(qm, &qm_list->list, list) {
+ dev = qm->pdev;
+ if (dev == pdev)
+ continue;
+
+ if (pci_physfn(dev) == pdev) {
+ /* save VFs PCIE BAR configuration */
+ pci_save_state(dev);
+
+ ret = hisi_qm_stop(qm, stop_reason);
+ if (ret)
+ goto prepare_fail;
+ }
+ }
+
+prepare_fail:
+ mutex_unlock(&qm_list->lock);
+ return ret;
+}
+
+static int qm_reset_prepare_ready(struct hisi_qm *qm)
+{
+ struct pci_dev *pdev = qm->pdev;
+ struct hisi_qm *pf_qm = pci_get_drvdata(pci_physfn(pdev));
+ int delay = 0;
+
+ while (test_and_set_bit(QM_DEV_RESET_STATUS, &pf_qm->hw_status)) {
+ msleep(++delay);
+ if (delay > QM_RESET_WAIT_TIMEOUT)
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+static int qm_controller_reset_prepare(struct hisi_qm *qm)
+{
+ struct pci_dev *pdev = qm->pdev;
+ int ret;
+
+ ret = qm_reset_prepare_ready(qm);
+ if (ret) {
+ pci_err(pdev, "Controller reset not ready!\n");
+ return ret;
+ }
+
+ if (qm->vfs_num) {
+ ret = qm_vf_reset_prepare(pdev, qm->qm_list, QM_SOFT_RESET);
+ if (ret) {
+ pci_err(pdev, "Fails to stop VFs!\n");
+ return ret;
+ }
+ }
+
+ ret = hisi_qm_stop(qm, QM_SOFT_RESET);
+ if (ret) {
+ pci_err(pdev, "Fails to stop QM!\n");
+ return ret;
+ }
+
+#ifdef CONFIG_CRYPTO_QM_UACCE
+ if (qm->use_uacce) {
+ ret = uacce_hw_err_isolate(&qm->uacce);
+ if (ret) {
+ pci_err(pdev, "Fails to isolate hw err!\n");
+ return ret;
+ }
+ }
+#endif
+
+ return 0;
+}
+
+static void qm_dev_ecc_mbit_handle(struct hisi_qm *qm)
+{
+ u32 nfe_enb = 0;
+
+ if (!qm->err_ini.err_info.is_dev_ecc_mbit &&
+ qm->err_ini.err_info.is_qm_ecc_mbit &&
+ qm->err_ini.close_axi_master_ooo) {
+
+ qm->err_ini.close_axi_master_ooo(qm);
+
+ } else if (qm->err_ini.err_info.is_dev_ecc_mbit &&
+ !qm->err_ini.err_info.is_qm_ecc_mbit &&
+ !qm->err_ini.close_axi_master_ooo) {
+
+ nfe_enb = readl(qm->io_base + QM_RAS_NFE_ENABLE);
+ writel(nfe_enb & QM_RAS_NFE_MBIT_DISABLE,
+ qm->io_base + QM_RAS_NFE_ENABLE);
+ writel(QM_ECC_MBIT, qm->io_base + QM_PF_ABNORMAL_INT_SET);
+ }
+}
+
+static int qm_soft_reset(struct hisi_qm *qm)
+{
+ struct pci_dev *pdev = qm->pdev;
+ int ret;
+ u32 val;
+
+ ret = qm_reg_test(qm);
+ if (ret)
+ return ret;
+
+ if (qm->vfs_num) {
+ ret = qm_set_vf_mse(qm, false);
+ if (ret) {
+ pci_err(pdev, "Fails to disable vf mse bit.\n");
+ return ret;
+ }
+ }
+
+ ret = qm_set_msi(qm, false);
+ if (ret) {
+ pci_err(pdev, "Fails to disable peh msi bit.\n");
+ return ret;
+ }
+
+ qm_dev_ecc_mbit_handle(qm);
+
+ mdelay(DELAY_PERIOD_MS);
+
+ /* OOO register set and check */
+ writel(MASTER_GLOBAL_CTRL_SHUTDOWN, qm->io_base + MASTER_GLOBAL_CTRL);
+
+ /* If bus lock, reset chip */
+ ret = readl_relaxed_poll_timeout(qm->io_base + MASTER_TRANS_RETURN,
+ val, (val == MASTER_TRANS_RETURN_RW),
+ QM_REG_RD_INTVRL_US,
+ QM_REG_RD_TMOUT_US);
+ if (ret) {
+ pci_emerg(pdev, "Bus lock! Please reset system.\n");
+ return ret;
+ }
+
+ ret = qm_set_pf_mse(qm, false);
+ if (ret) {
+ pci_err(pdev, "Fails to disable pf mse bit.\n");
+ return ret;
+ }
+
+ /* The reset related sub-control registers are not in PCI BAR */
+ if (ACPI_HANDLE(&pdev->dev)) {
+ unsigned long long value = 0;
+ acpi_status s;
+
+ s = acpi_evaluate_integer(ACPI_HANDLE(&pdev->dev),
+ qm->err_ini.err_info.acpi_rst,
+ NULL, &value);
+ if (ACPI_FAILURE(s)) {
+ pci_err(pdev, "NO controller reset method!\n");
+ return -EIO;
+ }
+
+ if (value) {
+ pci_err(pdev, "Reset step %llu failed!\n", value);
+ return -EIO;
+ }
+ } else {
+ pci_err(pdev, "No reset method!\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int qm_vf_reset_done(struct pci_dev *pdev,
+ struct hisi_qm_list *qm_list)
+{
+ struct pci_dev *dev;
+ struct hisi_qm *qm;
+ int ret = 0;
+
+ mutex_lock(&qm_list->lock);
+ list_for_each_entry(qm, &qm_list->list, list) {
+ dev = qm->pdev;
+ if (dev == pdev)
+ continue;
+
+ if (pci_physfn(dev) == pdev) {
+ /* restore VFs PCIE BAR configuration */
+ pci_restore_state(dev);
+
+ ret = hisi_qm_restart(qm);
+ if (ret)
+ goto reset_fail;
+ }
+ }
+
+reset_fail:
+ mutex_unlock(&qm_list->lock);
+ return ret;
+}
+
+static int qm_get_dev_err_status(struct hisi_qm *qm)
+{
+ u32 err_sts;
+
+ err_sts = qm->err_ini.get_dev_hw_err_status(qm) &
+ qm->err_ini.err_info.ecc_2bits_mask;
+ if (err_sts)
+ return err_sts;
+
+ return 0;
+}
+
+static void hisi_qm_restart_prepare(struct hisi_qm *qm)
+{
+ u32 value;
+
+ if (!qm->err_ini.err_info.is_qm_ecc_mbit &&
+ !qm->err_ini.err_info.is_dev_ecc_mbit)
+ return;
+
+ value = readl(qm->io_base + AM_CFG_PORT_WR_EN);
+ writel(value & ~qm->err_ini.err_info.msi_wr_port,
+ qm->io_base + AM_CFG_PORT_WR_EN);
+
+ /* clear dev ecc 2bit error source, if any */
+ value = qm_get_dev_err_status(qm);
+ if (value && qm->err_ini.clear_dev_hw_err_status)
+ qm->err_ini.clear_dev_hw_err_status(qm, value);
+
+ /* clear QM ecc mbit error source */
+ writel(QM_ECC_MBIT, qm->io_base +
+ QM_ABNORMAL_INT_SOURCE);
+
+ /* clear AM Reorder Buffer ecc mbit source */
+ writel(ROB_ECC_ERR_MULTPL, qm->io_base +
+ AM_ROB_ECC_INT_STS);
+
+ if (qm->err_ini.open_axi_master_ooo)
+ qm->err_ini.open_axi_master_ooo(qm);
+}
+
+static void hisi_qm_restart_done(struct hisi_qm *qm)
+{
+ u32 value;
+
+ if (!qm->err_ini.err_info.is_qm_ecc_mbit &&
+ !qm->err_ini.err_info.is_dev_ecc_mbit)
+ return;
+
+ value = readl(qm->io_base + AM_CFG_PORT_WR_EN);
+ value |= qm->err_ini.err_info.msi_wr_port;
+
+ writel(value, qm->io_base + AM_CFG_PORT_WR_EN);
+ qm->err_ini.err_info.is_qm_ecc_mbit = false;
+ qm->err_ini.err_info.is_dev_ecc_mbit = false;
+}
+
+static int qm_controller_reset_done(struct hisi_qm *qm)
+{
+ struct pci_dev *pdev = qm->pdev;
+ int ret;
+
+ ret = qm_set_msi(qm, true);
+ if (ret) {
+ pci_err(pdev, "Fails to enable peh msi bit!\n");
+ return ret;
+ }
+
+ ret = qm_set_pf_mse(qm, true);
+ if (ret) {
+ pci_err(pdev, "Fails to enable pf mse bit!\n");
+ return ret;
+ }
+
+ if (qm->vfs_num) {
+ ret = qm_set_vf_mse(qm, true);
+ if (ret) {
+ pci_err(pdev, "Fails to enable vf mse bit!\n");
+ return ret;
+ }
+ }
+
+ ret = qm->err_ini.set_usr_domain_cache(qm);
+ if (ret)
+ return ret;
+
+ hisi_qm_restart_prepare(qm);
+
+ ret = hisi_qm_restart(qm);
+ if (ret) {
+ pci_err(pdev, "Failed to start QM!\n");
+ return ret;
+ }
+
+ if (qm->vfs_num) {
+ ret = qm_vf_q_assign(qm, qm->vfs_num);
+ if (ret) {
+ pci_err(pdev, "Failed to assign queue!\n");
+ return ret;
+ }
+ }
+
+ ret = qm_vf_reset_done(pdev, qm->qm_list);
+ if (ret) {
+ pci_err(pdev, "Failed to start VFs!\n");
+ return -EPERM;
+ }
+
+ hisi_qm_dev_err_init(qm);
+
+ hisi_qm_restart_done(qm);
+
+ return 0;
+}
+
+int hisi_qm_controller_reset(struct hisi_qm *qm)
+{
+ struct pci_dev *pdev = qm->pdev;
+ int ret;
+
+ pci_info(pdev, "Controller resetting...\n");
+
+ ret = qm_controller_reset_prepare(qm);
+ if (ret)
+ return ret;
+
+ ret = qm_soft_reset(qm);
+ if (ret) {
+ pci_err(pdev, "Controller reset failed (%d)\n", ret);
+ return ret;
+ }
+
+ ret = qm_controller_reset_done(qm);
+ if (ret)
+ return ret;
+
+ clear_bit(QM_DEV_RESET_STATUS, &qm->hw_status);
+ pci_info(pdev, "Controller reset complete\n");
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(hisi_qm_controller_reset);
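
A likely consumer is the abnormal-IRQ path: the hard IRQ only logs and
schedules work, and the work item drives the full reset. A sketch along the
lines of the RDE driver's reset_work declared below, with error handling
abbreviated:

	static void example_ras_proc(struct work_struct *work)
	{
		struct hisi_rde *hisi_rde = container_of(work, struct hisi_rde,
							 reset_work);
		struct pci_dev *pdev = hisi_rde->qm.pdev;

		if (!pdev)
			return;

		if (hisi_qm_process_dev_error(pdev) == PCI_ERS_RESULT_NEED_RESET &&
		    hisi_qm_controller_reset(&hisi_rde->qm))
			dev_err(&pdev->dev, "controller reset failed\n");
	}
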
+
+pci_ers_result_t hisi_qm_dev_slot_reset(struct pci_dev *pdev)
+{
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
+ int ret;
+
+ if (pdev->is_virtfn)
+ return PCI_ERS_RESULT_RECOVERED;
+
+ pci_info(pdev, "Requesting reset due to PCI error\n");
+ pci_cleanup_aer_uncorrect_error_status(pdev);
+
+ /* reset pcie device controller */
+ ret = hisi_qm_controller_reset(qm);
+ if (ret) {
+ pci_err(pdev, "controller reset failed (%d)\n", ret);
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ return PCI_ERS_RESULT_RECOVERED;
+}
+EXPORT_SYMBOL_GPL(hisi_qm_dev_slot_reset);
+
+/* check whether the interrupt is an ecc-mbit error */
+static int qm_check_dev_error(struct hisi_qm *qm)
+{
+ struct pci_dev *pdev = qm->pdev;
+ struct hisi_qm *pf_qm = pci_get_drvdata(pci_physfn(pdev));
+ int ret;
+
+ if (pf_qm->fun_type == QM_HW_VF)
+ return 0;
+
+ ret = qm_get_hw_error_status(pf_qm);
+ if (ret)
+ return ret;
+
+ return qm_get_dev_err_status(pf_qm);
+}
+
+void hisi_qm_reset_prepare(struct pci_dev *pdev)
+{
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
+ u32 delay = 0;
+ int ret;
+
+ hisi_qm_dev_err_uninit(qm);
+
+ while (qm_check_dev_error(qm)) {
+ msleep(++delay);
+ if (delay > QM_RESET_WAIT_TIMEOUT)
+ return;
+ }
+
+ ret = qm_reset_prepare_ready(qm);
+ if (ret) {
+ pci_err(pdev, "FLR not ready!\n");
+ return;
+ }
+
+ if (qm->vfs_num) {
+ ret = qm_vf_reset_prepare(pdev, qm->qm_list, QM_FLR);
+ if (ret) {
+ pci_err(pdev, "Fails to prepare reset!\n");
+ return;
+ }
+ }
+
+ ret = hisi_qm_stop(qm, QM_FLR);
+ if (ret) {
+ pci_err(pdev, "Fails to stop QM!\n");
+ return;
+ }
+
+ pci_info(pdev, "FLR resetting...\n");
+}
+EXPORT_SYMBOL_GPL(hisi_qm_reset_prepare);
+
+static bool qm_flr_reset_complete(struct pci_dev *pdev)
+{
+ struct pci_dev *pf_pdev = pci_physfn(pdev);
+ struct hisi_qm *qm = pci_get_drvdata(pf_pdev);
+ u32 id;
+
+ pci_read_config_dword(qm->pdev, PCI_COMMAND, &id);
+ if (id == QM_PCI_COMMAND_INVALID) {
+ pci_err(pdev, "Device can not be used!\n");
+ return false;
+ }
+
+ clear_bit(QM_DEV_RESET_STATUS, &qm->hw_status);
+ return true;
+}
+
+void hisi_qm_reset_done(struct pci_dev *pdev)
+{
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
+ int ret;
+
+ hisi_qm_dev_err_init(qm);
+
+ ret = hisi_qm_restart(qm);
+ if (ret) {
+ pci_err(pdev, "Failed to start QM!\n");
+ goto flr_done;
+ }
+
+ if (qm->fun_type == QM_HW_PF) {
+ ret = qm->err_ini.set_usr_domain_cache(qm);
+ if (ret) {
+ pci_err(pdev, "Failed to start QM!\n");
+ goto flr_done;
+ }
+
+ if (qm->vfs_num)
+ qm_vf_q_assign(qm, qm->vfs_num);
+
+ ret = qm_vf_reset_done(pdev, qm->qm_list);
+ if (ret) {
+ pci_err(pdev, "Failed to start VFs!\n");
+ goto flr_done;
+ }
+ }
- writel(AM_CFG_PORT_WR_EN_VALUE, qm->io_base + AM_CFG_PORT_WR_EN);
- qm->err_ini.is_qm_ecc_mbit = 0;
- qm->err_ini.is_dev_ecc_mbit = 0;
+flr_done:
+ if (qm_flr_reset_complete(pdev))
+ pci_info(pdev, "FLR reset complete\n");
}
-EXPORT_SYMBOL_GPL(hisi_qm_restart_done);
+EXPORT_SYMBOL_GPL(hisi_qm_reset_done);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Zhou Wang <wangzhou1(a)hisilicon.com>");
diff --git a/drivers/crypto/hisilicon/qm.h b/drivers/crypto/hisilicon/qm.h
index 24b3609..36e888f 100644
--- a/drivers/crypto/hisilicon/qm.h
+++ b/drivers/crypto/hisilicon/qm.h
@@ -17,6 +17,9 @@
#include "qm_usr_if.h"
+#define QNUM_V1 4096
+#define QNUM_V2 1024
+#define QM_MAX_VFS_NUM 63
/* qm user domain */
#define QM_ARUSER_M_CFG_1 0x100088
#define AXUSER_SNOOP_ENABLE BIT(30)
@@ -49,6 +52,7 @@
#define QM_AXI_M_CFG 0x1000ac
#define AXI_M_CFG 0xffff
#define QM_AXI_M_CFG_ENABLE 0x1000b0
+#define AM_CFG_SINGLE_PORT_MAX_TRANS 0x300014
#define AXI_M_CFG_ENABLE 0xffffffff
#define QM_PEH_AXUSER_CFG 0x1000cc
#define QM_PEH_AXUSER_CFG_ENABLE 0x1000d0
@@ -235,19 +239,41 @@ struct hisi_qm_status {
int stop_reason;
};
+struct hisi_qm_hw_error {
+ u32 int_msk;
+ const char *msg;
+};
+
struct hisi_qm;
-struct hisi_qm_err_ini {
- u32 qm_wr_port;
+struct hisi_qm_err_info {
+ char *acpi_rst;
+ u32 msi_wr_port;
+ u32 ecc_2bits_mask;
u32 is_qm_ecc_mbit;
u32 is_dev_ecc_mbit;
- u32 ecc_2bits_mask;
- void (*open_axi_master_ooo)(struct hisi_qm *qm);
+ u32 ce;
+ u32 nfe;
+ u32 fe;
+ u32 msi;
+};
+
+struct hisi_qm_err_ini {
u32 (*get_dev_hw_err_status)(struct hisi_qm *qm);
void (*clear_dev_hw_err_status)(struct hisi_qm *qm, u32 err_sts);
+ void (*hw_err_enable)(struct hisi_qm *qm);
+ void (*hw_err_disable)(struct hisi_qm *qm);
+ int (*set_usr_domain_cache)(struct hisi_qm *qm);
void (*log_dev_hw_err)(struct hisi_qm *qm, u32 err_sts);
- /* design for module can not hold on ooo through qm, such as zip */
- void (*inject_dev_hw_err)(struct hisi_qm *qm);
+ void (*open_axi_master_ooo)(struct hisi_qm *qm);
+ void (*close_axi_master_ooo)(struct hisi_qm *qm);
+ struct hisi_qm_err_info err_info;
+};
+
+struct hisi_qm_list {
+ struct mutex lock;
+ struct list_head list;
+ bool (*check)(struct hisi_qm *qm);
};
struct hisi_qm {
@@ -260,7 +286,9 @@ struct hisi_qm {
u32 qp_base;
u32 qp_num;
u32 ctrl_q_num;
-
+ u32 vfs_num;
+ struct list_head list;
+ struct hisi_qm_list *qm_list;
struct qm_dma qdma;
struct qm_sqc *sqc;
struct qm_cqc *cqc;
@@ -285,8 +313,7 @@ struct hisi_qm {
u32 error_mask;
u32 msi_mask;
-
- const char *algs;
+ unsigned long hw_status;
bool use_uacce; /* register to uacce */
bool use_sva;
@@ -294,7 +321,9 @@ struct hisi_qm {
resource_size_t phys_base;
resource_size_t size;
struct uacce uacce;
+ const char *algs;
void *reserve;
+ int uacce_mode;
dma_addr_t reserve_dma;
#endif
struct workqueue_struct *wq;
@@ -345,9 +374,144 @@ struct hisi_qp {
#endif
};
+static inline int q_num_set(const char *val, const struct kernel_param *kp,
+ unsigned int device)
+{
+ struct pci_dev *pdev = pci_get_device(PCI_VENDOR_ID_HUAWEI,
+ device, NULL);
+ u32 n, q_num;
+ u8 rev_id;
+ int ret;
+
+ if (!val)
+ return -EINVAL;
+
+ if (!pdev) {
+ q_num = min_t(u32, QNUM_V1, QNUM_V2);
+ pr_info("No device found currently, suppose queue number is %d\n",
+ q_num);
+ } else {
+ rev_id = pdev->revision;
+ switch (rev_id) {
+ case QM_HW_V1:
+ q_num = QNUM_V1;
+ break;
+ case QM_HW_V2:
+ q_num = QNUM_V2;
+ break;
+ default:
+ return -EINVAL;
+ }
+ }
+
+ ret = kstrtou32(val, 10, &n);
+ if (ret || !n || n > q_num)
+ return -EINVAL;
+
+ return param_set_int(val, kp);
+}
+
+static inline int vf_num_set(const char *val, const struct kernel_param *kp)
+{
+ u32 n;
+ int ret;
+
+ if (!val)
+ return -EINVAL;
+
+ ret = kstrtou32(val, 10, &n);
+ if (ret < 0)
+ return ret;
+
+ if (n > QM_MAX_VFS_NUM)
+ return -ERANGE;
+
+ return param_set_int(val, kp);
+}
+
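+
A sketch of the expected wiring in a driver, with the parameter name vfs_num
assumed for illustration:

	static const struct kernel_param_ops vfs_num_ops = {
		.set = vf_num_set,
		.get = param_get_int,
	};

	static u32 vfs_num;
	module_param_cb(vfs_num, &vfs_num_ops, &vfs_num, 0444);
	MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)");
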
+#ifdef CONFIG_CRYPTO_QM_UACCE
+static inline int mode_set(const char *val, const struct kernel_param *kp)
+{
+ u32 n;
+ int ret;
+
+ if (!val)
+ return -EINVAL;
+
+ ret = kstrtou32(val, 10, &n);
+ if (ret != 0 || (n != UACCE_MODE_NOIOMMU &&
+ n != UACCE_MODE_NOUACCE))
+ return -EINVAL;
+
+ return param_set_int(val, kp);
+}
+#endif
+
+static inline void hisi_qm_add_to_list(struct hisi_qm *qm,
+ struct hisi_qm_list *qm_list)
+{
+ mutex_lock(&qm_list->lock);
+ list_add_tail(&qm->list, &qm_list->list);
+ mutex_unlock(&qm_list->lock);
+}
+
+static inline void hisi_qm_del_from_list(struct hisi_qm *qm,
+ struct hisi_qm_list *qm_list)
+{
+ mutex_lock(&qm_list->lock);
+ list_del(&qm->list);
+ mutex_unlock(&qm_list->lock);
+}
+
+static inline int hisi_qm_pre_init(struct hisi_qm *qm,
+ u32 pf_q_num, u32 def_q_num)
+{
+ struct pci_dev *pdev = qm->pdev;
+
+ switch (pdev->revision) {
+ case QM_HW_V1:
+ case QM_HW_V2:
+ qm->ver = pdev->revision;
+ break;
+ default:
+ pci_err(pdev, "hardware version err!\n");
+ return -ENODEV;
+ }
+
+ pci_set_drvdata(pdev, qm);
+
+#ifdef CONFIG_CRYPTO_QM_UACCE
+ switch (qm->uacce_mode) {
+ case UACCE_MODE_NOUACCE:
+ qm->use_uacce = false;
+ break;
+ case UACCE_MODE_NOIOMMU:
+ qm->use_uacce = true;
+ break;
+ default:
+ pci_err(pdev, "uacce mode error!\n");
+ return -EINVAL;
+ }
+#else
+ qm->use_uacce = false;
+#endif
+ if (qm->fun_type == QM_HW_PF) {
+ qm->qp_base = def_q_num;
+ qm->qp_num = pf_q_num;
+ qm->debug.curr_qm_qp_num = pf_q_num;
+ }
+
+ return 0;
+}
+
+void hisi_qm_free_qps(struct hisi_qp **qps, int qp_num);
+int hisi_qm_alloc_qps_node(int node, struct hisi_qm_list *qm_list,
+ struct hisi_qp **qps, int qp_num, u8 alg_type);
int hisi_qm_init(struct hisi_qm *qm);
void hisi_qm_uninit(struct hisi_qm *qm);
-int hisi_qm_frozen(struct hisi_qm *qm);
+void hisi_qm_dev_shutdown(struct pci_dev *pdev);
+void hisi_qm_remove_wait_delay(struct hisi_qm *qm,
+ struct hisi_qm_list *qm_list);
int hisi_qm_start(struct hisi_qm *qm);
int hisi_qm_stop(struct hisi_qm *qm, enum qm_stop_reason r);
struct hisi_qp *hisi_qm_create_qp(struct hisi_qm *qm, u8 alg_type);
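
Taken together, a PF/VF probe built on these helpers might look like the
sketch below; EXAMPLE_PF_DEVICE_ID, EXAMPLE_PF_DEF_Q_BASE, example_devices,
pf_q_num and uacce_mode are illustrative names, and note that
hisi_qm_pre_init() uses its def_q_num argument as the PF queue base:

	static int example_probe(struct pci_dev *pdev,
				 const struct pci_device_id *id)
	{
		struct hisi_qm *qm;
		int ret;

		qm = devm_kzalloc(&pdev->dev, sizeof(*qm), GFP_KERNEL);
		if (!qm)
			return -ENOMEM;

		qm->pdev = pdev;
		qm->fun_type = pdev->device == EXAMPLE_PF_DEVICE_ID ?
			       QM_HW_PF : QM_HW_VF;
	#ifdef CONFIG_CRYPTO_QM_UACCE
		qm->uacce_mode = uacce_mode;
	#endif
		qm->qm_list = &example_devices;

		/* def_q_num below is consumed as the PF queue base */
		ret = hisi_qm_pre_init(qm, pf_q_num, EXAMPLE_PF_DEF_Q_BASE);
		if (ret)
			return ret;

		ret = hisi_qm_init(qm);
		if (ret)
			return ret;

		ret = hisi_qm_start(qm);
		if (ret) {
			hisi_qm_uninit(qm);
			return ret;
		}

		hisi_qm_add_to_list(qm, &example_devices);
		return 0;
	}
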
@@ -358,25 +522,20 @@ struct hisi_qp {
int hisi_qp_wait(struct hisi_qp *qp);
int hisi_qm_get_free_qp_num(struct hisi_qm *qm);
int hisi_qm_get_vft(struct hisi_qm *qm, u32 *base, u32 *number);
-int hisi_qm_set_vft(struct hisi_qm *qm, u32 fun_num, u32 base, u32 number);
void hisi_qm_debug_regs_clear(struct hisi_qm *qm);
int hisi_qm_debug_init(struct hisi_qm *qm);
-void hisi_qm_hw_error_init(struct hisi_qm *qm, u32 ce, u32 nfe, u32 fe,
- u32 msi);
-void hisi_qm_hw_error_uninit(struct hisi_qm *qm);
-pci_ers_result_t hisi_qm_hw_error_handle(struct hisi_qm *qm);
-void hisi_qm_clear_queues(struct hisi_qm *qm);
-enum qm_hw_ver hisi_qm_get_hw_version(struct pci_dev *pdev);
int hisi_qm_restart(struct hisi_qm *qm);
-int hisi_qm_get_hw_error_status(struct hisi_qm *qm);
+int hisi_qm_sriov_enable(struct pci_dev *pdev, int max_vfs);
+int hisi_qm_sriov_disable(struct pci_dev *pdev, struct hisi_qm_list *qm_list);
+void hisi_qm_dev_err_init(struct hisi_qm *qm);
+void hisi_qm_dev_err_uninit(struct hisi_qm *qm);
+pci_ers_result_t hisi_qm_dev_err_detected(struct pci_dev *pdev,
+ pci_channel_state_t state);
+pci_ers_result_t hisi_qm_dev_slot_reset(struct pci_dev *pdev);
+void hisi_qm_reset_prepare(struct pci_dev *pdev);
+void hisi_qm_reset_done(struct pci_dev *pdev);
pci_ers_result_t hisi_qm_process_dev_error(struct pci_dev *pdev);
-int hisi_qm_reg_test(struct hisi_qm *qm);
-int hisi_qm_set_pf_mse(struct hisi_qm *qm, bool set);
-int hisi_qm_set_vf_mse(struct hisi_qm *qm, bool set);
-int hisi_qm_set_msi(struct hisi_qm *qm, bool set);
-void hisi_qm_set_ecc(struct hisi_qm *qm);
-void hisi_qm_restart_prepare(struct hisi_qm *qm);
-void hisi_qm_restart_done(struct hisi_qm *qm);
+int hisi_qm_controller_reset(struct hisi_qm *qm);
struct hisi_acc_sgl_pool;
struct hisi_acc_hw_sgl *hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev,
diff --git a/drivers/crypto/hisilicon/rde/rde.h b/drivers/crypto/hisilicon/rde/rde.h
index aa7887a..e06efc7 100644
--- a/drivers/crypto/hisilicon/rde/rde.h
+++ b/drivers/crypto/hisilicon/rde/rde.h
@@ -22,19 +22,11 @@
struct hisi_rde_ctrl;
-enum hisi_rde_status {
- HISI_RDE_RESET,
-};
-
struct hisi_rde {
struct hisi_qm qm;
- struct list_head list;
struct hisi_rde_ctrl *ctrl;
struct work_struct reset_work;
- struct mutex *rde_list_lock;
- unsigned long status;
u32 smmu_state;
- int q_ref;
};
#define RDE_CM_LOAD_ENABLE 1
@@ -134,7 +126,6 @@ struct hisi_rde_msg {
struct hisi_rde_ctx {
struct device *dev;
struct hisi_qp *qp;
- struct hisi_rde *rde_dev;
struct hisi_rde_msg *req_list;
unsigned long *req_bitmap;
spinlock_t req_lock;
@@ -323,7 +314,7 @@ static inline void rde_table_dump(const struct hisi_rde_msg *req)
}
}
-struct hisi_rde *find_rde_device(int node);
+struct hisi_qp *rde_create_qp(void);
int hisi_rde_abnormal_fix(struct hisi_qm *qm);
#endif
diff --git a/drivers/crypto/hisilicon/rde/rde_api.c b/drivers/crypto/hisilicon/rde/rde_api.c
index 1be468a..f1330f1 100644
--- a/drivers/crypto/hisilicon/rde/rde_api.c
+++ b/drivers/crypto/hisilicon/rde/rde_api.c
@@ -835,17 +835,12 @@ int hisi_rde_io_proc(struct acc_ctx *ctx, struct raid_ec_ctrl *ctrl,
return ret;
}
-static int hisi_rde_create_qp(struct hisi_qm *qm, struct acc_ctx *ctx,
- int alg_type, int req_type)
+static int hisi_rde_start_qp(struct hisi_qp *qp, struct acc_ctx *ctx,
+ int req_type)
{
- struct hisi_qp *qp;
struct hisi_rde_ctx *rde_ctx;
int ret;
- qp = hisi_qm_create_qp(qm, alg_type);
- if (IS_ERR(qp))
- return PTR_ERR(qp);
-
qp->req_type = req_type;
qp->qp_ctx = ctx;
@@ -994,9 +989,10 @@ static int hisi_rde_ctx_init(struct hisi_rde_ctx *rde_ctx, int qlen)
int acc_init(struct acc_ctx *ctx)
{
+ struct hisi_rde_ctx *rde_ctx;
struct hisi_rde *hisi_rde;
+ struct hisi_qp *qp;
struct hisi_qm *qm;
- struct hisi_rde_ctx *rde_ctx;
int ret;
if (unlikely(!ctx)) {
@@ -1004,9 +1000,9 @@ int acc_init(struct acc_ctx *ctx)
return -EINVAL;
}
- hisi_rde = find_rde_device(cpu_to_node(smp_processor_id()));
- if (unlikely(!hisi_rde)) {
- pr_err("[%s]Can not find proper RDE device.\n", __func__);
+ qp = rde_create_qp();
+ if (unlikely(!qp)) {
+ pr_err("[%s]Can not create RDE qp.\n", __func__);
return -ENODEV;
}
/* alloc inner private struct */
@@ -1017,20 +1013,20 @@ int acc_init(struct acc_ctx *ctx)
}
ctx->inner = (void *)rde_ctx;
- qm = &hisi_rde->qm;
+ qm = qp->qm;
if (unlikely(!qm->pdev)) {
pr_err("[%s] Pdev is NULL.\n", __func__);
return -ENODEV;
}
rde_ctx->dev = &qm->pdev->dev;
- ret = hisi_rde_create_qp(qm, ctx, 0, 0);
+ ret = hisi_rde_start_qp(qp, ctx, 0);
if (ret) {
- dev_err(rde_ctx->dev, "[%s] Create qp failed.\n", __func__);
+ dev_err(rde_ctx->dev, "[%s] start qp failed.\n", __func__);
goto qp_err;
}
- rde_ctx->rde_dev = hisi_rde;
+ hisi_rde = container_of(qm, struct hisi_rde, qm);
rde_ctx->smmu_state = hisi_rde->smmu_state;
rde_ctx->addr_type = ctx->addr_type;
hisi_rde_session_init(rde_ctx);
@@ -1081,9 +1077,6 @@ int acc_clear(struct acc_ctx *ctx)
rde_ctx->req_list = NULL;
hisi_rde_release_qp(rde_ctx);
- mutex_lock(rde_ctx->rde_dev->rde_list_lock);
- rde_ctx->rde_dev->q_ref = rde_ctx->rde_dev->q_ref - 1;
- mutex_unlock(rde_ctx->rde_dev->rde_list_lock);
kfree(rde_ctx);
ctx->inner = NULL;
diff --git a/drivers/crypto/hisilicon/rde/rde_api.h b/drivers/crypto/hisilicon/rde/rde_api.h
index 0f9021b..167607e 100644
--- a/drivers/crypto/hisilicon/rde/rde_api.h
+++ b/drivers/crypto/hisilicon/rde/rde_api.h
@@ -308,7 +308,7 @@ struct acc_dif {
* @input_block: number of sector
* @data_len: data len of per disk, block_size (with dif)* input_block
* @buf_type: denoted by ACC_BUF_TYPE_E
- * @src_dif��dif information of source disks
+ * @src_dif: dif information of source disks
* @dst_dif: dif information of dest disks
* @cm_load: coe_matrix reload control, 0: do not load, 1: load
* @cm_len: length of loaded coe_matrix, equal to src_num
diff --git a/drivers/crypto/hisilicon/rde/rde_main.c b/drivers/crypto/hisilicon/rde/rde_main.c
index 453657a..318d4a0 100644
--- a/drivers/crypto/hisilicon/rde/rde_main.c
+++ b/drivers/crypto/hisilicon/rde/rde_main.c
@@ -22,7 +22,6 @@
#include <linux/uacce.h>
#include "rde.h"
-#define HRDE_VF_NUM 63
#define HRDE_QUEUE_NUM_V1 4096
#define HRDE_QUEUE_NUM_V2 1024
#define HRDE_PCI_DEVICE_ID 0xa25a
@@ -32,7 +31,6 @@
#define HRDE_PF_DEF_Q_BASE 0
#define HRDE_RD_INTVRL_US 10
#define HRDE_RD_TMOUT_US 1000
-#define FORMAT_DECIMAL 10
#define HRDE_RST_TMOUT_MS 400
#define HRDE_ENABLE 1
#define HRDE_DISABLE 0
@@ -68,7 +66,7 @@
#define CHN_CFG 0x5010101
#define HRDE_AXI_SHUTDOWN_EN BIT(26)
#define HRDE_AXI_SHUTDOWN_DIS 0xFBFFFFFF
-#define HRDE_WR_MSI_PORT 0xFFFE
+#define HRDE_WR_MSI_PORT BIT(0)
#define HRDE_AWUSER_BD_1 0x310104
#define HRDE_ARUSER_BD_1 0x310114
#define HRDE_ARUSER_SGL_1 0x310124
@@ -87,9 +85,6 @@
#define HRDE_QM_IDEL_STATUS 0x1040e4
#define HRDE_QM_PEH_DFX_INFO0 0x1000fc
#define PEH_MSI_MASK_SHIFT 0x90
-#define HRDE_MASTER_GLOBAL_CTRL 0x300000
-#define MASTER_GLOBAL_CTRL_SHUTDOWN 0x1
-#define MASTER_TRANS_RETURN_RW 0x3
#define CACHE_CTL 0x1833
#define HRDE_DBGFS_VAL_MAX_LEN 20
#define HRDE_PROBE_ADDR 0x31025c
@@ -100,16 +95,9 @@
static const char hisi_rde_name[] = "hisi_rde";
static struct dentry *hrde_debugfs_root;
-LIST_HEAD(hisi_rde_list);
-DEFINE_MUTEX(hisi_rde_list_lock);
+static struct hisi_qm_list rde_devices;
static void hisi_rde_ras_proc(struct work_struct *work);
-struct hisi_rde_resource {
- struct hisi_rde *hrde;
- int distance;
- struct list_head list;
-};
-
static const struct hisi_rde_hw_error rde_hw_error[] = {
{.int_msk = BIT(0), .msg = "Rde_ecc_1bitt_err"},
{.int_msk = BIT(1), .msg = "Rde_ecc_2bit_err"},
@@ -157,7 +145,6 @@ struct ctrl_debug_file {
*/
struct hisi_rde_ctrl {
struct hisi_rde *hisi_rde;
- struct dentry *debug_root;
struct ctrl_debug_file files[HRDE_DEBUG_FILE_NUM];
};
@@ -199,78 +186,36 @@ struct hisi_rde_ctrl {
{"HRDE_AM_CURR_WR_TXID_STS_2", 0x300178ull},
};
-static int pf_q_num_set(const char *val, const struct kernel_param *kp)
+#ifdef CONFIG_CRYPTO_QM_UACCE
+static int uacce_mode_set(const char *val, const struct kernel_param *kp)
{
- struct pci_dev *pdev;
- u32 n;
- u32 q_num;
- u8 rev_id;
- int ret;
-
- if (!val)
- return -EINVAL;
-
- pdev = pci_get_device(PCI_VENDOR_ID_HUAWEI, HRDE_PCI_DEVICE_ID, NULL);
- if (unlikely(!pdev)) {
- q_num = min_t(u32, HRDE_QUEUE_NUM_V1, HRDE_QUEUE_NUM_V2);
- pr_info
- ("No device found currently, suppose queue number is %d.\n",
- q_num);
- } else {
- rev_id = pdev->revision;
- switch (rev_id) {
- case QM_HW_V1:
- q_num = HRDE_QUEUE_NUM_V1;
- break;
- case QM_HW_V2:
- q_num = HRDE_QUEUE_NUM_V2;
- break;
- default:
- return -EINVAL;
- }
- }
-
- ret = kstrtou32(val, 10, &n);
- if (ret != 0 || n > q_num)
- return -EINVAL;
-
- return param_set_int(val, kp);
+ return mode_set(val, kp);
}
-static const struct kernel_param_ops pf_q_num_ops = {
- .set = pf_q_num_set,
+static const struct kernel_param_ops uacce_mode_ops = {
+ .set = uacce_mode_set,
.get = param_get_int,
};
-static int uacce_mode_set(const char *val, const struct kernel_param *kp)
-{
- u32 n;
- int ret;
-
- if (!val)
- return -EINVAL;
-
- ret = kstrtou32(val, FORMAT_DECIMAL, &n);
- if (ret != 0 || (n != UACCE_MODE_NOIOMMU && n != UACCE_MODE_NOUACCE))
- return -EINVAL;
+static int uacce_mode = UACCE_MODE_NOUACCE;
+module_param_cb(uacce_mode, &uacce_mode_ops, &uacce_mode, 0444);
+MODULE_PARM_DESC(uacce_mode, "Mode of UACCE can be 0(default), 2");
+#endif
- return param_set_int(val, kp);
+static int pf_q_num_set(const char *val, const struct kernel_param *kp)
+{
+ return q_num_set(val, kp, HRDE_PCI_DEVICE_ID);
}
-static const struct kernel_param_ops uacce_mode_ops = {
- .set = uacce_mode_set,
+static const struct kernel_param_ops pf_q_num_ops = {
+ .set = pf_q_num_set,
.get = param_get_int,
};
-
static u32 pf_q_num = HRDE_PF_DEF_Q_NUM;
module_param_cb(pf_q_num, &pf_q_num_ops, &pf_q_num, 0444);
MODULE_PARM_DESC(pf_q_num, "Number of queues in PF(v1 0-4096, v2 0-1024)");
-static int uacce_mode = UACCE_MODE_NOUACCE;
-module_param_cb(uacce_mode, &uacce_mode_ops, &uacce_mode, 0444);
-MODULE_PARM_DESC(uacce_mode, "Mode of UACCE can be 0(default), 2");
-
static const struct pci_device_id hisi_rde_dev_ids[] = {
{PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HRDE_PCI_DEVICE_ID)},
{0,}
@@ -278,125 +223,59 @@ static int uacce_mode_set(const char *val, const struct kernel_param *kp)
MODULE_DEVICE_TABLE(pci, hisi_rde_dev_ids);
-static void free_list(struct list_head *head)
-{
- struct hisi_rde_resource *res;
- struct hisi_rde_resource *tmp;
-
- list_for_each_entry_safe(res, tmp, head, list) {
- list_del(&res->list);
- kfree(res);
- }
-}
-
-struct hisi_rde *find_rde_device(int node)
+struct hisi_qp *rde_create_qp(void)
{
- struct hisi_rde *ret = NULL;
-#ifdef CONFIG_NUMA
- struct hisi_rde_resource *res, *tmp;
- struct hisi_rde *hisi_rde;
- struct list_head *n;
- struct device *dev;
- LIST_HEAD(head);
-
- mutex_lock(&hisi_rde_list_lock);
-
- list_for_each_entry(hisi_rde, &hisi_rde_list, list) {
- res = kzalloc(sizeof(*res), GFP_KERNEL);
- if (!res)
- goto err;
-
- dev = &hisi_rde->qm.pdev->dev;
- res->hrde = hisi_rde;
- res->distance = node_distance(dev->numa_node, node);
- n = &head;
- list_for_each_entry(tmp, &head, list) {
- if (res->distance < tmp->distance) {
- n = &tmp->list;
- break;
- }
- }
- list_add_tail(&res->list, n);
- }
-
- list_for_each_entry(tmp, &head, list) {
- if (tmp->hrde->q_ref + 1 <= pf_q_num) {
- tmp->hrde->q_ref = tmp->hrde->q_ref + 1;
- ret = tmp->hrde;
- break;
- }
- }
+ int node = cpu_to_node(smp_processor_id());
+ struct hisi_qp *qp;
+ int ret;
- free_list(&head);
-#else
- mutex_lock(&hisi_rde_list_lock);
- ret = list_first_entry(&hisi_rde_list, struct hisi_rde, list);
-#endif
- mutex_unlock(&hisi_rde_list_lock);
- return ret;
+ ret = hisi_qm_alloc_qps_node(node, &rde_devices, &qp, 1, 0);
+ if (!ret)
+ return qp;
-err:
- free_list(&head);
- mutex_unlock(&hisi_rde_list_lock);
return NULL;
}
-static inline void hisi_rde_add_to_list(struct hisi_rde *hisi_rde)
-{
- mutex_lock(&hisi_rde_list_lock);
- list_add_tail(&hisi_rde->list, &hisi_rde_list);
- mutex_unlock(&hisi_rde_list_lock);
-}
-
-static inline void hisi_rde_remove_from_list(struct hisi_rde *hisi_rde)
-{
- mutex_lock(&hisi_rde_list_lock);
- list_del(&hisi_rde->list);
- mutex_unlock(&hisi_rde_list_lock);
-}
-
-static void hisi_rde_engine_init(struct hisi_rde *hisi_rde)
+static int hisi_rde_engine_init(struct hisi_qm *qm)
{
- writel(DFX_CTRL0, hisi_rde->qm.io_base + HRDE_DFX_CTRL_0);
+ writel(DFX_CTRL0, qm->io_base + HRDE_DFX_CTRL_0);
/* usr domain */
- writel(HRDE_USER_SMMU, hisi_rde->qm.io_base + HRDE_AWUSER_BD_1);
- writel(HRDE_USER_SMMU, hisi_rde->qm.io_base + HRDE_ARUSER_BD_1);
- writel(HRDE_USER_SMMU, hisi_rde->qm.io_base + HRDE_AWUSER_DAT_1);
- writel(HRDE_USER_SMMU, hisi_rde->qm.io_base + HRDE_ARUSER_DAT_1);
- writel(HRDE_USER_SMMU, hisi_rde->qm.io_base + HRDE_ARUSER_SGL_1);
+ writel(HRDE_USER_SMMU, qm->io_base + HRDE_AWUSER_BD_1);
+ writel(HRDE_USER_SMMU, qm->io_base + HRDE_ARUSER_BD_1);
+ writel(HRDE_USER_SMMU, qm->io_base + HRDE_AWUSER_DAT_1);
+ writel(HRDE_USER_SMMU, qm->io_base + HRDE_ARUSER_DAT_1);
+ writel(HRDE_USER_SMMU, qm->io_base + HRDE_ARUSER_SGL_1);
/* rde cache */
- writel(AWCACHE, hisi_rde->qm.io_base + HRDE_AWCACHE);
- writel(ARCACHE, hisi_rde->qm.io_base + HRDE_ARCACHE);
+ writel(AWCACHE, qm->io_base + HRDE_AWCACHE);
+ writel(ARCACHE, qm->io_base + HRDE_ARCACHE);
/* rde chn enable + outstangding config */
- writel(CHN_CFG, hisi_rde->qm.io_base + HRDE_CFG);
+ writel(CHN_CFG, qm->io_base + HRDE_CFG);
+
+ return 0;
}
-static void hisi_rde_set_user_domain_and_cache(struct hisi_rde *hisi_rde)
+static int hisi_rde_set_user_domain_and_cache(struct hisi_qm *qm)
{
/* qm user domain */
- writel(AXUSER_BASE, hisi_rde->qm.io_base + QM_ARUSER_M_CFG_1);
- writel(ARUSER_M_CFG_ENABLE, hisi_rde->qm.io_base +
- QM_ARUSER_M_CFG_ENABLE);
- writel(AXUSER_BASE, hisi_rde->qm.io_base + QM_AWUSER_M_CFG_1);
- writel(AWUSER_M_CFG_ENABLE, hisi_rde->qm.io_base +
- QM_AWUSER_M_CFG_ENABLE);
- writel(WUSER_M_CFG_ENABLE, hisi_rde->qm.io_base +
- QM_WUSER_M_CFG_ENABLE);
+ writel(AXUSER_BASE, qm->io_base + QM_ARUSER_M_CFG_1);
+ writel(ARUSER_M_CFG_ENABLE, qm->io_base + QM_ARUSER_M_CFG_ENABLE);
+ writel(AXUSER_BASE, qm->io_base + QM_AWUSER_M_CFG_1);
+ writel(AWUSER_M_CFG_ENABLE, qm->io_base + QM_AWUSER_M_CFG_ENABLE);
+ writel(WUSER_M_CFG_ENABLE, qm->io_base + QM_WUSER_M_CFG_ENABLE);
/* qm cache */
- writel(AXI_M_CFG, hisi_rde->qm.io_base + QM_AXI_M_CFG);
- writel(AXI_M_CFG_ENABLE, hisi_rde->qm.io_base + QM_AXI_M_CFG_ENABLE);
+ writel(AXI_M_CFG, qm->io_base + QM_AXI_M_CFG);
+ writel(AXI_M_CFG_ENABLE, qm->io_base + QM_AXI_M_CFG_ENABLE);
/* disable BME/PM/SRIOV FLR*/
- writel(PEH_AXUSER_CFG, hisi_rde->qm.io_base + QM_PEH_AXUSER_CFG);
- writel(PEH_AXUSER_CFG_ENABLE, hisi_rde->qm.io_base +
- QM_PEH_AXUSER_CFG_ENABLE);
+ writel(PEH_AXUSER_CFG, qm->io_base + QM_PEH_AXUSER_CFG);
+ writel(PEH_AXUSER_CFG_ENABLE, qm->io_base + QM_PEH_AXUSER_CFG_ENABLE);
- writel(CACHE_CTL, hisi_rde->qm.io_base + QM_CACHE_CTL);
+ writel(CACHE_CTL, qm->io_base + QM_CACHE_CTL);
- hisi_rde_engine_init(hisi_rde);
+ return hisi_rde_engine_init(qm);
}
static void hisi_rde_debug_regs_clear(struct hisi_qm *qm)
@@ -418,30 +297,38 @@ static void hisi_rde_debug_regs_clear(struct hisi_qm *qm)
hisi_qm_debug_regs_clear(qm);
}
-static void hisi_rde_hw_error_set_state(struct hisi_rde *hisi_rde, bool state)
+static void hisi_rde_hw_error_enable(struct hisi_qm *qm)
{
- u32 ras_msk = (HRDE_RAS_CE_MSK | HRDE_RAS_NFE_MSK);
u32 val;
- val = readl(hisi_rde->qm.io_base + HRDE_CFG);
- if (state) {
- writel(HRDE_INT_SOURCE_CLEAR,
- hisi_rde->qm.io_base + HRDE_INT_SOURCE);
- writel(HRDE_RAS_ENABLE,
- hisi_rde->qm.io_base + HRDE_RAS_INT_MSK);
- /* bd prefetch should bd masked to prevent misreport */
- writel((HRDE_INT_ENABLE | HRDE_BD_PREFETCH),
- hisi_rde->qm.io_base + HRDE_INT_MSK);
- /* make master ooo close, when m-bits error happens*/
- val = val | HRDE_AXI_SHUTDOWN_EN;
- } else {
- writel(ras_msk, hisi_rde->qm.io_base + HRDE_RAS_INT_MSK);
- writel(HRDE_INT_DISABLE, hisi_rde->qm.io_base + HRDE_INT_MSK);
- /* make master ooo open, when m-bits error happens*/
- val = val & HRDE_AXI_SHUTDOWN_DIS;
- }
+ val = readl(qm->io_base + HRDE_CFG);
+
+ /* clear RDE hw error source, if any */
+ writel(HRDE_INT_SOURCE_CLEAR, qm->io_base + HRDE_INT_SOURCE);
+ writel(HRDE_RAS_ENABLE, qm->io_base + HRDE_RAS_INT_MSK);
+
+ /* bd prefetch should be masked to prevent misreporting */
+ writel((HRDE_INT_ENABLE | HRDE_BD_PREFETCH),
+ qm->io_base + HRDE_INT_MSK);
- writel(val, hisi_rde->qm.io_base + HRDE_CFG);
+ /* when an m-bit error occurs, master ooo will be closed */
+ val = val | HRDE_AXI_SHUTDOWN_EN;
+ writel(val, qm->io_base + HRDE_CFG);
+}
+
+static void hisi_rde_hw_error_disable(struct hisi_qm *qm)
+{
+ u32 ras_msk = HRDE_RAS_CE_MSK | HRDE_RAS_NFE_MSK;
+ u32 val;
+
+ val = readl(qm->io_base + HRDE_CFG);
+
+ writel(ras_msk, qm->io_base + HRDE_RAS_INT_MSK);
+ writel(HRDE_INT_DISABLE, qm->io_base + HRDE_INT_MSK);
+
+ /* when an m-bit error occurs, master ooo will not be closed */
+ val = val & HRDE_AXI_SHUTDOWN_DIS;
+ writel(val, qm->io_base + HRDE_CFG);
}
static inline struct hisi_qm *file_to_qm(struct ctrl_debug_file *file)
@@ -587,10 +474,8 @@ static ssize_t ctrl_debug_write(struct file *filp, const char __user *buf,
.write = ctrl_debug_write,
};
-static int hisi_rde_chn_debug_init(struct hisi_rde_ctrl *ctrl)
+static int hisi_rde_chn_debug_init(struct hisi_qm *qm)
{
- struct hisi_rde *hisi_rde = ctrl->hisi_rde;
- struct hisi_qm *qm = &hisi_rde->qm;
struct device *dev = &qm->pdev->dev;
struct debugfs_regset32 *regset, *regset_ooo;
struct dentry *tmp_d, *tmp;
@@ -601,7 +486,7 @@ static int hisi_rde_chn_debug_init(struct hisi_rde_ctrl *ctrl)
if (ret < 0)
return -ENOENT;
- tmp_d = debugfs_create_dir(buf, ctrl->debug_root);
+ tmp_d = debugfs_create_dir(buf, qm->debug.debug_root);
if (!tmp_d)
return -ENOENT;
@@ -628,29 +513,30 @@ static int hisi_rde_chn_debug_init(struct hisi_rde_ctrl *ctrl)
return 0;
}
-static int hisi_rde_ctrl_debug_init(struct hisi_rde_ctrl *ctrl)
+static int hisi_rde_ctrl_debug_init(struct hisi_qm *qm)
{
+ struct hisi_rde *hisi_rde = container_of(qm, struct hisi_rde, qm);
struct dentry *tmp;
int i;
for (i = HRDE_CURRENT_FUNCTION; i < HRDE_DEBUG_FILE_NUM; i++) {
- spin_lock_init(&ctrl->files[i].lock);
- ctrl->files[i].ctrl = ctrl;
- ctrl->files[i].index = i;
+ spin_lock_init(&hisi_rde->ctrl->files[i].lock);
+ hisi_rde->ctrl->files[i].ctrl = hisi_rde->ctrl;
+ hisi_rde->ctrl->files[i].index = i;
tmp = debugfs_create_file(ctrl_debug_file_name[i], 0600,
- ctrl->debug_root, ctrl->files + i,
+ qm->debug.debug_root,
+ hisi_rde->ctrl->files + i,
&ctrl_debug_fops);
if (!tmp)
return -ENOENT;
}
- return hisi_rde_chn_debug_init(ctrl);
+ return hisi_rde_chn_debug_init(qm);
}
-static int hisi_rde_debugfs_init(struct hisi_rde *hisi_rde)
+static int hisi_rde_debugfs_init(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_rde->qm;
struct device *dev = &qm->pdev->dev;
struct dentry *dev_d;
int ret;
@@ -665,8 +551,7 @@ static int hisi_rde_debugfs_init(struct hisi_rde *hisi_rde)
goto failed_to_create;
if (qm->pdev->device == HRDE_PCI_DEVICE_ID) {
- hisi_rde->ctrl->debug_root = dev_d;
- ret = hisi_rde_ctrl_debug_init(hisi_rde->ctrl);
+ ret = hisi_rde_ctrl_debug_init(qm);
if (ret)
goto failed_to_create;
}
@@ -678,49 +563,17 @@ static int hisi_rde_debugfs_init(struct hisi_rde *hisi_rde)
return ret;
}
-static void hisi_rde_debugfs_exit(struct hisi_rde *hisi_rde)
+static void hisi_rde_debugfs_exit(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_rde->qm;
-
debugfs_remove_recursive(qm->debug.debug_root);
+
if (qm->fun_type == QM_HW_PF) {
hisi_rde_debug_regs_clear(qm);
qm->debug.curr_qm_qp_num = 0;
}
}
-static void hisi_rde_set_hw_error(struct hisi_rde *hisi_rde, bool state)
-{
- if (state)
- hisi_qm_hw_error_init(&hisi_rde->qm, QM_BASE_CE,
- QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT,
- 0, 0);
- else
- hisi_qm_hw_error_uninit(&hisi_rde->qm);
-
- hisi_rde_hw_error_set_state(hisi_rde, state);
-}
-
-static void hisi_rde_open_master_ooo(struct hisi_qm *qm)
-{
- u32 val;
-
- val = readl(qm->io_base + HRDE_CFG);
- writel(val & HRDE_AXI_SHUTDOWN_DIS, qm->io_base + HRDE_CFG);
- writel(val | HRDE_AXI_SHUTDOWN_EN, qm->io_base + HRDE_CFG);
-}
-
-static u32 hisi_rde_get_hw_err_status(struct hisi_qm *qm)
-{
- return readl(qm->io_base + HRDE_INT_STATUS);
-}
-
-static void hisi_rde_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
-{
- writel(err_sts, qm->io_base + HRDE_INT_SOURCE);
-}
-
-static void hisi_rde_hw_error_log(struct hisi_qm *qm, u32 err_sts)
+void hisi_rde_hw_error_log(struct hisi_qm *qm, u32 err_sts)
{
const struct hisi_rde_hw_error *err = rde_hw_error;
struct device *dev = &qm->pdev->dev;
@@ -751,10 +604,30 @@ static void hisi_rde_hw_error_log(struct hisi_qm *qm, u32 err_sts)
}
}
-static int hisi_rde_pf_probe_init(struct hisi_rde *hisi_rde)
+u32 hisi_rde_get_hw_err_status(struct hisi_qm *qm)
+{
+ return readl(qm->io_base + HRDE_INT_STATUS);
+}
+
+void hisi_rde_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
{
- struct hisi_qm *qm = &hisi_rde->qm;
+ writel(err_sts, qm->io_base + HRDE_INT_SOURCE);
+}
+
+static void hisi_rde_open_master_ooo(struct hisi_qm *qm)
+{
+ u32 val;
+
+ val = readl(qm->io_base + HRDE_CFG);
+ writel(val & HRDE_AXI_SHUTDOWN_DIS, qm->io_base + HRDE_CFG);
+ writel(val | HRDE_AXI_SHUTDOWN_EN, qm->io_base + HRDE_CFG);
+}
+
+static int hisi_rde_pf_probe_init(struct hisi_qm *qm)
+{
+ struct hisi_rde *hisi_rde = container_of(qm, struct hisi_rde, qm);
struct hisi_rde_ctrl *ctrl;
+ int ret;
ctrl = devm_kzalloc(&qm->pdev->dev, sizeof(*ctrl), GFP_KERNEL);
if (!ctrl)
@@ -776,14 +649,26 @@ static int hisi_rde_pf_probe_init(struct hisi_rde *hisi_rde)
return -EINVAL;
}
- qm->err_ini.qm_wr_port = HRDE_WR_MSI_PORT;
- qm->err_ini.ecc_2bits_mask = HRDE_ECC_2BIT_ERR;
- qm->err_ini.open_axi_master_ooo = hisi_rde_open_master_ooo;
qm->err_ini.get_dev_hw_err_status = hisi_rde_get_hw_err_status;
qm->err_ini.clear_dev_hw_err_status = hisi_rde_clear_hw_err_status;
+ qm->err_ini.err_info.ecc_2bits_mask = HRDE_ECC_2BIT_ERR;
+ qm->err_ini.err_info.ce = QM_BASE_CE;
+ qm->err_ini.err_info.nfe = QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT;
+ qm->err_ini.err_info.fe = 0;
+ qm->err_ini.err_info.msi = 0;
+ qm->err_ini.err_info.acpi_rst = "RRST";
+ qm->err_ini.hw_err_disable = hisi_rde_hw_error_disable;
+ qm->err_ini.hw_err_enable = hisi_rde_hw_error_enable;
+ qm->err_ini.set_usr_domain_cache = hisi_rde_set_user_domain_and_cache;
qm->err_ini.log_dev_hw_err = hisi_rde_hw_error_log;
- hisi_rde_set_user_domain_and_cache(hisi_rde);
- hisi_rde_set_hw_error(hisi_rde, true);
+ qm->err_ini.open_axi_master_ooo = hisi_rde_open_master_ooo;
+ qm->err_ini.err_info.msi_wr_port = HRDE_WR_MSI_PORT;
+
+ ret = qm->err_ini.set_usr_domain_cache(qm);
+ if (ret)
+ return ret;
+
+ hisi_qm_dev_err_init(qm);
qm->err_ini.open_axi_master_ooo(qm);
hisi_rde_debug_regs_clear(qm);
@@ -792,33 +677,21 @@ static int hisi_rde_pf_probe_init(struct hisi_rde *hisi_rde)
static int hisi_rde_qm_pre_init(struct hisi_qm *qm, struct pci_dev *pdev)
{
- enum qm_hw_ver rev_id;
+ int ret;
- rev_id = hisi_qm_get_hw_version(pdev);
- if (rev_id == QM_HW_UNKNOWN)
- return -EINVAL;
+#ifdef CONFIG_CRYPTO_QM_UACCE
+ qm->algs = "ec\n";
+ qm->uacce_mode = uacce_mode;
+#endif
qm->pdev = pdev;
- qm->ver = rev_id;
+ ret = hisi_qm_pre_init(qm, pf_q_num, HRDE_PF_DEF_Q_BASE);
+ if (ret)
+ return ret;
+
+ qm->qm_list = &rde_devices;
qm->sqe_size = HRDE_SQE_SIZE;
qm->dev_name = hisi_rde_name;
- qm->fun_type = QM_HW_PF;
- qm->algs = "ec\n";
-
- switch (uacce_mode) {
- case UACCE_MODE_NOUACCE:
- qm->use_uacce = false;
- break;
- case UACCE_MODE_NOIOMMU:
- qm->use_uacce = true;
- break;
- default:
- return -EINVAL;
- }
-
- qm->qp_base = HRDE_PF_DEF_Q_BASE;
- qm->qp_num = pf_q_num;
- qm->debug.curr_qm_qp_num = pf_q_num;
qm->abnormal_fix = hisi_rde_abnormal_fix;
return 0;
@@ -849,11 +722,12 @@ static int hisi_rde_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (!hisi_rde)
return -ENOMEM;
- pci_set_drvdata(pdev, hisi_rde);
INIT_WORK(&hisi_rde->reset_work, hisi_rde_ras_proc);
hisi_rde->smmu_state = hisi_rde_smmu_state(&pdev->dev);
qm = &hisi_rde->qm;
+ qm->fun_type = QM_HW_PF;
+
ret = hisi_rde_qm_pre_init(qm, pdev);
if (ret) {
pci_err(pdev, "Pre init qm failed!\n");
@@ -866,7 +740,7 @@ static int hisi_rde_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return ret;
}
- ret = hisi_rde_pf_probe_init(hisi_rde);
+ ret = hisi_rde_pf_probe_init(qm);
if (ret) {
pci_err(pdev, "Init pf failed!\n");
goto err_qm_uninit;
@@ -878,16 +752,15 @@ static int hisi_rde_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto err_qm_uninit;
}
- ret = hisi_rde_debugfs_init(hisi_rde);
+ ret = hisi_rde_debugfs_init(qm);
if (ret)
pci_warn(pdev, "Init debugfs failed!\n");
- hisi_rde_add_to_list(hisi_rde);
- hisi_rde->rde_list_lock = &hisi_rde_list_lock;
+ hisi_qm_add_to_list(qm, &rde_devices);
return 0;
- err_qm_uninit:
+err_qm_uninit:
hisi_qm_uninit(qm);
return ret;
@@ -895,198 +768,20 @@ static int hisi_rde_probe(struct pci_dev *pdev, const struct pci_device_id *id)
static void hisi_rde_remove(struct pci_dev *pdev)
{
- struct hisi_rde *hisi_rde = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hisi_rde->qm;
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
+ struct hisi_rde *hisi_rde = container_of(qm, struct hisi_rde, qm);
+
+ hisi_qm_remove_wait_delay(qm, &rde_devices);
qm->abnormal_fix = NULL;
- hisi_rde_hw_error_set_state(hisi_rde, false);
+ hisi_qm_dev_err_uninit(qm);
cancel_work_sync(&hisi_rde->reset_work);
- hisi_rde_remove_from_list(hisi_rde);
- hisi_rde_debugfs_exit(hisi_rde);
+ hisi_qm_del_from_list(qm, &rde_devices);
+ hisi_rde_debugfs_exit(qm);
hisi_qm_stop(qm, QM_NORMAL);
hisi_qm_uninit(qm);
}
-static void hisi_rde_shutdown(struct pci_dev *pdev)
-{
- struct hisi_rde *hisi_rde = pci_get_drvdata(pdev);
-
- hisi_qm_stop(&hisi_rde->qm, QM_NORMAL);
-}
-
-static int hisi_rde_reset_prepare_rdy(struct hisi_rde *hisi_rde)
-{
- int delay = 0;
-
- while (test_and_set_bit(HISI_RDE_RESET, &hisi_rde->status)) {
- msleep(++delay);
- if (delay > HRDE_RST_TMOUT_MS)
- return -EBUSY;
- }
-
- return 0;
-}
-
-static int hisi_rde_controller_reset_prepare(struct hisi_rde *hisi_rde)
-{
- struct hisi_qm *qm = &hisi_rde->qm;
- struct pci_dev *pdev = qm->pdev;
- int ret;
-
- ret = hisi_rde_reset_prepare_rdy(hisi_rde);
- if (ret) {
- dev_err(&pdev->dev, "Controller reset not ready!\n");
- return ret;
- }
-
- ret = hisi_qm_stop(qm, QM_SOFT_RESET);
- if (ret) {
- dev_err(&pdev->dev, "Stop QM failed!\n");
- return ret;
- }
-
-#ifdef CONFIG_CRYPTO_QM_UACCE
- if (qm->use_uacce) {
- ret = uacce_hw_err_isolate(&qm->uacce);
- if (ret) {
- dev_err(&pdev->dev, "Isolate hw err failed!\n");
- return ret;
- }
- }
-#endif
-
- return 0;
-}
-
-static int hisi_rde_soft_reset(struct hisi_rde *hisi_rde)
-{
- struct hisi_qm *qm = &hisi_rde->qm;
- struct device *dev = &qm->pdev->dev;
- unsigned long long value;
- int ret;
- u32 val;
-
- /* Check PF stream stop */
- ret = hisi_qm_reg_test(qm);
- if (ret)
- return ret;
-
- /* Disable PEH MSI */
- ret = hisi_qm_set_msi(qm, HRDE_DISABLE);
- if (ret) {
- dev_err(dev, "Disable peh msi bit failed.\n");
- return ret;
- }
-
- /* Set qm ecc if dev ecc happened to hold on ooo */
- hisi_qm_set_ecc(qm);
-
- /* OOO register set and check */
- writel(MASTER_GLOBAL_CTRL_SHUTDOWN,
- hisi_rde->qm.io_base + HRDE_MASTER_GLOBAL_CTRL);
-
- /* If bus lock, reset chip */
- ret = readl_relaxed_poll_timeout(hisi_rde->qm.io_base +
- HRDE_MASTER_TRANS_RET, val,
- (val == MASTER_TRANS_RETURN_RW),
- HRDE_RD_INTVRL_US, HRDE_RD_TMOUT_US);
- if (ret) {
- dev_emerg(dev, "Bus lock! Please reset system.\n");
- return ret;
- }
-
- /* Disable PF MSE bit */
- ret = hisi_qm_set_pf_mse(qm, HRDE_DISABLE);
- if (ret) {
- dev_err(dev, "Disable pf mse bit failed.\n");
- return ret;
- }
-
- /* The reset related sub-control registers are not in PCI BAR */
- if (ACPI_HANDLE(dev)) {
- acpi_status s;
-
- s = acpi_evaluate_integer(ACPI_HANDLE(dev), "RRST",
- NULL, &value);
- if (ACPI_FAILURE(s)) {
- dev_err(dev, "No controller reset method.\n");
- return -EIO;
- }
-
- if (value) {
- dev_err(dev, "Reset step %llu failed.\n", value);
- return -EIO;
- }
- } else {
- dev_err(dev, "No reset method!\n");
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int hisi_rde_controller_reset_done(struct hisi_rde *hisi_rde)
-{
- struct hisi_qm *qm = &hisi_rde->qm;
- struct pci_dev *pdev = qm->pdev;
- int ret;
-
- /* Enable PEH MSI */
- ret = hisi_qm_set_msi(qm, HRDE_ENABLE);
- if (ret) {
- dev_err(&pdev->dev, "Enable peh msi bit failed!\n");
- return ret;
- }
-
- /* Enable PF MSE bit */
- ret = hisi_qm_set_pf_mse(qm, HRDE_ENABLE);
- if (ret) {
- dev_err(&pdev->dev, "Enable pf mse bit failed!\n");
- return ret;
- }
-
- hisi_rde_set_user_domain_and_cache(hisi_rde);
- hisi_qm_restart_prepare(qm);
-
- ret = hisi_qm_restart(qm);
- if (ret) {
- dev_err(&pdev->dev, "Start QM failed!\n");
- return -EPERM;
- }
-
- hisi_qm_restart_done(qm);
- hisi_rde_set_hw_error(hisi_rde, true);
-
- return 0;
-}
-
-static int hisi_rde_controller_reset(struct hisi_rde *hisi_rde)
-{
- struct device *dev = &hisi_rde->qm.pdev->dev;
- int ret;
-
- dev_info_ratelimited(dev, "Controller resetting...\n");
-
- ret = hisi_rde_controller_reset_prepare(hisi_rde);
- if (ret)
- return ret;
-
- ret = hisi_rde_soft_reset(hisi_rde);
- if (ret) {
- dev_err(dev, "Controller reset failed (%d).\n", ret);
- return ret;
- }
-
- ret = hisi_rde_controller_reset_done(hisi_rde);
- if (ret)
- return ret;
-
- clear_bit(HISI_RDE_RESET, &hisi_rde->status);
- dev_info_ratelimited(dev, "Controller reset complete.\n");
-
- return 0;
-}
-
static void hisi_rde_ras_proc(struct work_struct *work)
{
struct pci_dev *pdev;
@@ -1100,121 +795,26 @@ static void hisi_rde_ras_proc(struct work_struct *work)
ret = hisi_qm_process_dev_error(pdev);
if (ret == PCI_ERS_RESULT_NEED_RESET)
- if (hisi_rde_controller_reset(hisi_rde))
+ if (hisi_qm_controller_reset(&hisi_rde->qm))
dev_err(&pdev->dev, "Hisi_rde reset fail.\n");
}
int hisi_rde_abnormal_fix(struct hisi_qm *qm)
{
- struct pci_dev *pdev;
struct hisi_rde *hisi_rde;
if (!qm)
return -EINVAL;
- pdev = qm->pdev;
- if (!pdev)
- return -EINVAL;
-
- hisi_rde = pci_get_drvdata(pdev);
- if (!hisi_rde) {
- dev_err(&pdev->dev, "Hisi_rde is NULL.\n");
- return -EINVAL;
- }
+ hisi_rde = container_of(qm, struct hisi_rde, qm);
return schedule_work(&hisi_rde->reset_work);
}
-static int hisi_rde_get_hw_error_status(struct hisi_rde *hisi_rde)
-{
- u32 err_sts;
-
- err_sts = readl(hisi_rde->qm.io_base + HRDE_INT_STATUS) &
- HRDE_ECC_2BIT_ERR;
- if (err_sts)
- return err_sts;
-
- return 0;
-}
-
-static int hisi_rde_check_hw_error(struct hisi_rde *hisi_rde)
-{
- int ret;
-
- ret = hisi_qm_get_hw_error_status(&hisi_rde->qm);
- if (ret)
- return ret;
-
- return hisi_rde_get_hw_error_status(hisi_rde);
-}
-
-static void hisi_rde_reset_prepare(struct pci_dev *pdev)
-{
- struct hisi_rde *hisi_rde = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hisi_rde->qm;
- u32 delay = 0;
- int ret;
-
- hisi_rde_set_hw_error(hisi_rde, false);
-
- while (hisi_rde_check_hw_error(hisi_rde)) {
- msleep(++delay);
- if (delay > HRDE_RST_TMOUT_MS)
- return;
- }
-
- ret = hisi_rde_reset_prepare_rdy(hisi_rde);
- if (ret) {
- dev_err(&pdev->dev, "FLR not ready!\n");
- return;
- }
-
- ret = hisi_qm_stop(qm, QM_FLR);
- if (ret) {
- dev_err(&pdev->dev, "Stop QM failed!\n");
- return;
- }
-
- dev_info(&pdev->dev, "FLR resetting...\n");
-}
-
-static void hisi_rde_flr_reset_complete(struct pci_dev *pdev,
- struct hisi_rde *hisi_rde)
-{
- u32 id;
-
- pci_read_config_dword(pdev, PCI_COMMAND, &id);
- if (id == HRDE_PCI_COMMAND_INVALID)
- dev_err(&pdev->dev, "Device can not be used!\n");
-
- clear_bit(HISI_RDE_RESET, &hisi_rde->status);
-}
-
-static void hisi_rde_reset_done(struct pci_dev *pdev)
-{
- struct hisi_rde *hisi_rde = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hisi_rde->qm;
- int ret;
-
- hisi_rde_set_hw_error(hisi_rde, true);
-
- ret = hisi_qm_restart(qm);
- if (ret) {
- dev_err(&pdev->dev, "Start QM failed!\n");
- goto flr_done;
- }
-
- hisi_rde_set_user_domain_and_cache(hisi_rde);
-
-flr_done:
- hisi_rde_flr_reset_complete(pdev, hisi_rde);
- dev_info(&pdev->dev, "FLR reset complete.\n");
-}
-
static const struct pci_error_handlers hisi_rde_err_handler = {
- .reset_prepare = hisi_rde_reset_prepare,
- .reset_done = hisi_rde_reset_done,
+ .reset_prepare = hisi_qm_reset_prepare,
+ .reset_done = hisi_qm_reset_done,
};
static struct pci_driver hisi_rde_pci_driver = {
@@ -1223,7 +823,7 @@ static void hisi_rde_reset_done(struct pci_dev *pdev)
.probe = hisi_rde_probe,
.remove = hisi_rde_remove,
.err_handler = &hisi_rde_err_handler,
- .shutdown = hisi_rde_shutdown,
+ .shutdown = hisi_qm_dev_shutdown,
};
static void hisi_rde_register_debugfs(void)
@@ -1245,6 +845,9 @@ static int __init hisi_rde_init(void)
{
int ret;
+ INIT_LIST_HEAD(&rde_devices.list);
+ mutex_init(&rde_devices.lock);
+ rde_devices.check = NULL;
hisi_rde_register_debugfs();
ret = pci_register_driver(&hisi_rde_pci_driver);
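Most of the RDE conversion above drops struct hisi_rde * parameters in favour of struct hisi_qm * and recovers the wrapper with container_of(), which only works because qm is embedded in struct hisi_rde by value. A minimal user-space sketch of that pattern, with stand-in types rather than the real kernel structs:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct hisi_qm { int id; };          /* stand-in for the QM core type */

struct hisi_rde {
	unsigned long status;
	struct hisi_qm qm;           /* embedded by value, not a pointer */
};

static struct hisi_rde *rde_from_qm(struct hisi_qm *qm)
{
	return container_of(qm, struct hisi_rde, qm);
}

int main(void)
{
	struct hisi_rde rde = { .status = 1, .qm = { .id = 7 } };

	/* QM callbacks only ever see &rde.qm; the pointer math above
	 * hands the driver its private state back. */
	printf("%lu\n", rde_from_qm(&rde.qm)->status);   /* prints 1 */
	return 0;
}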
diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
index 0e16452..f85dd06 100644
--- a/drivers/crypto/hisilicon/sec2/sec.h
+++ b/drivers/crypto/hisilicon/sec2/sec.h
@@ -11,7 +11,6 @@
#undef pr_fmt
#define pr_fmt(fmt) "hisi_sec: " fmt
-#define CTX_Q_NUM_DEF 24
#define FUSION_LIMIT_DEF 1
#define FUSION_LIMIT_MAX 64
#define FUSION_TMOUT_NSEC_DEF (400 * 1000)
@@ -24,10 +23,6 @@ enum sec_endian {
struct hisi_sec_ctrl;
-enum hisi_sec_status {
- HISI_SEC_RESET,
-};
-
struct hisi_sec_dfx {
u64 send_cnt;
u64 send_by_tmout;
@@ -39,21 +34,19 @@ struct hisi_sec_dfx {
u64 thread_cnt;
u64 fake_busy_cnt;
u64 busy_comp_cnt;
- u64 sec_ctrl;
};
struct hisi_sec {
struct hisi_qm qm;
- struct list_head list;
struct hisi_sec_dfx sec_dfx;
struct hisi_sec_ctrl *ctrl;
- struct mutex *hisi_sec_list_lock;
- int q_ref;
int ctx_q_num;
int fusion_limit;
int fusion_tmout_nsec;
- unsigned long status;
};
+void sec_destroy_qps(struct hisi_qp **qps, int qp_num);
+struct hisi_qp **sec_create_qps(void);
struct hisi_sec *find_sec_device(int node);
+
#endif
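The sec.h hunk above removes the per-driver list head, list lock and q_ref counter because device tracking moves into the shared hisi_qm_list managed by the QM core. A toy model of that consolidation, with pthread primitives standing in for kernel mutexes and illustrative names only:

#include <pthread.h>
#include <stdio.h>

struct qm_node {                     /* stand-in for struct hisi_qm */
	int id;
	struct qm_node *next;
};

struct qm_list {                     /* stand-in for struct hisi_qm_list */
	struct qm_node *head;
	pthread_mutex_t lock;
};

static void qm_list_add(struct qm_list *l, struct qm_node *n)
{
	pthread_mutex_lock(&l->lock);
	n->next = l->head;           /* every accelerator shares this path */
	l->head = n;
	pthread_mutex_unlock(&l->lock);
}

static void qm_list_del(struct qm_list *l, struct qm_node *n)
{
	struct qm_node **pp;

	pthread_mutex_lock(&l->lock);
	for (pp = &l->head; *pp; pp = &(*pp)->next)
		if (*pp == n) {
			*pp = n->next;
			break;
		}
	pthread_mutex_unlock(&l->lock);
}

int main(void)
{
	struct qm_list devs = { NULL, PTHREAD_MUTEX_INITIALIZER };
	struct qm_node a = { .id = 1 }, b = { .id = 2 };

	qm_list_add(&devs, &a);
	qm_list_add(&devs, &b);
	qm_list_del(&devs, &a);
	printf("%d\n", devs.head->id);   /* prints 2 */
	return 0;
}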
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
index 3a362ce..0643955 100644
--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
+++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
@@ -16,6 +16,8 @@
#include "sec.h"
#include "sec_crypto.h"
+static atomic_t sec_active_devs;
+
#define SEC_ASYNC
#define SEC_INVLD_REQ_ID (-1)
@@ -179,6 +181,7 @@ struct hisi_sec_ctx {
struct hisi_sec *sec;
struct device *dev;
struct hisi_sec_req_op *req_op;
+ struct hisi_qp **qps;
struct hrtimer timer;
struct work_struct work;
atomic_t thread_cnt;
@@ -200,11 +203,6 @@ struct hisi_sec_ctx {
u64 des_weak_key[DES_WEAK_KEY_NUM] = {0x0101010101010101, 0xFEFEFEFEFEFEFEFE,
0xE0E0E0E0F1F1F1F1, 0x1F1F1F1F0E0E0E0E};
-static void sec_update_iv(struct hisi_sec_req *req, u8 *iv)
-{
- // todo: update iv by cbc/ctr mode
-}
-
static void hisi_sec_req_cb(struct hisi_qp *qp, void *);
static int hisi_sec_alloc_req_id(struct hisi_sec_req *req,
@@ -324,19 +322,16 @@ static enum hrtimer_restart hrtimer_handler(struct hrtimer *timer)
return HRTIMER_RESTART;
}
-static int hisi_sec_create_qp_ctx(struct hisi_qm *qm, struct hisi_sec_ctx *ctx,
- int qp_ctx_id, int alg_type, int req_type)
+static int hisi_sec_create_qp_ctx(struct hisi_sec_ctx *ctx,
+ int qp_ctx_id, int req_type)
{
- struct hisi_qp *qp;
struct hisi_sec_qp_ctx *qp_ctx;
struct device *dev = ctx->dev;
+ struct hisi_qp *qp;
int ret;
- qp = hisi_qm_create_qp(qm, alg_type);
- if (IS_ERR(qp))
- return PTR_ERR(qp);
-
qp_ctx = &ctx->qp_ctx[qp_ctx_id];
+ qp = ctx->qps[qp_ctx_id];
qp->req_type = req_type;
qp->qp_ctx = qp_ctx;
#ifdef SEC_ASYNC
@@ -353,10 +348,8 @@ static int hisi_sec_create_qp_ctx(struct hisi_qm *qm, struct hisi_sec_ctx *ctx,
qp_ctx->req_bitmap = kcalloc(BITS_TO_LONGS(QM_Q_DEPTH), sizeof(long),
GFP_ATOMIC);
- if (!qp_ctx->req_bitmap) {
- ret = -ENOMEM;
- goto err_qm_release_qp;
- }
+ if (!qp_ctx->req_bitmap)
+ return -ENOMEM;
qp_ctx->req_list = kcalloc(QM_Q_DEPTH, sizeof(void *), GFP_ATOMIC);
if (!qp_ctx->req_list) {
@@ -407,8 +400,7 @@ static int hisi_sec_create_qp_ctx(struct hisi_qm *qm, struct hisi_sec_ctx *ctx,
kfree(qp_ctx->req_list);
err_free_req_bitmap:
kfree(qp_ctx->req_bitmap);
-err_qm_release_qp:
- hisi_qm_release_qp(qp);
+
return ret;
}
@@ -424,7 +416,6 @@ static void hisi_sec_release_qp_ctx(struct hisi_sec_ctx *ctx,
kfree(qp_ctx->req_bitmap);
kfree(qp_ctx->req_list);
kfree(qp_ctx->sqe_list);
- hisi_qm_release_qp(qp_ctx->qp);
}
static int __hisi_sec_ctx_init(struct hisi_sec_ctx *ctx, int qlen)
@@ -465,22 +456,22 @@ static void hisi_sec_get_fusion_param(struct hisi_sec_ctx *ctx,
static int hisi_sec_cipher_ctx_init(struct crypto_skcipher *tfm)
{
struct hisi_sec_ctx *ctx = crypto_skcipher_ctx(tfm);
- struct hisi_qm *qm;
struct hisi_sec_cipher_ctx *c_ctx;
struct hisi_sec *sec;
int i, ret;
crypto_skcipher_set_reqsize(tfm, sizeof(struct hisi_sec_req));
- sec = find_sec_device(cpu_to_node(smp_processor_id()));
- if (!sec) {
- pr_err("failed to find a proper sec device!\n");
+ ctx->qps = sec_create_qps();
+ if (!ctx->qps) {
+ pr_err("Can not create sec qps!\n");
return -ENODEV;
}
+
+ sec = container_of(ctx->qps[0]->qm, struct hisi_sec, qm);
ctx->sec = sec;
- qm = &sec->qm;
- ctx->dev = &qm->pdev->dev;
+ ctx->dev = &sec->qm.pdev->dev;
ctx->q_num = sec->ctx_q_num;
@@ -495,7 +486,7 @@ static int hisi_sec_cipher_ctx_init(struct crypto_skcipher *tfm)
hisi_sec_get_fusion_param(ctx, sec);
for (i = 0; i < ctx->q_num; i++) {
- ret = hisi_sec_create_qp_ctx(qm, ctx, i, 0, 0);
+ ret = hisi_sec_create_qp_ctx(ctx, i, 0);
if (ret)
goto err_sec_release_qp_ctx;
}
@@ -515,6 +506,7 @@ static int hisi_sec_cipher_ctx_init(struct crypto_skcipher *tfm)
for (i = i - 1; i >= 0; i--)
hisi_sec_release_qp_ctx(ctx, &ctx->qp_ctx[i]);
+ sec_destroy_qps(ctx->qps, sec->ctx_q_num);
kfree(ctx->qp_ctx);
return ret;
}
@@ -540,11 +532,8 @@ static void hisi_sec_cipher_ctx_exit(struct crypto_skcipher *tfm)
for (i = 0; i < ctx->q_num; i++)
hisi_sec_release_qp_ctx(ctx, &ctx->qp_ctx[i]);
+ sec_destroy_qps(ctx->qps, ctx->q_num);
kfree(ctx->qp_ctx);
-
- mutex_lock(ctx->sec->hisi_sec_list_lock);
- ctx->sec->q_ref -= ctx->sec->ctx_q_num;
- mutex_unlock(ctx->sec->hisi_sec_list_lock);
}
static int hisi_sec_skcipher_get_res(struct hisi_sec_ctx *ctx,
@@ -658,8 +647,6 @@ static void hisi_sec_req_cb(struct hisi_qp *qp, void *resp)
dfx = &req->ctx->sec->sec_dfx;
- sec_update_iv(req, req->c_req.sk_req->iv);
-
req->ctx->req_op->buf_unmap(req->ctx, req);
req->ctx->req_op->callback(req->ctx, req);
@@ -1497,20 +1484,28 @@ static int sec_skcipher_decrypt(struct skcipher_request *sk_req)
int hisi_sec_register_to_crypto(int fusion_limit)
{
- if (fusion_limit == 1)
- return crypto_register_skciphers(sec_normal_algs,
- ARRAY_SIZE(sec_normal_algs));
- else
- return crypto_register_skciphers(sec_fusion_algs,
- ARRAY_SIZE(sec_fusion_algs));
+ /* To avoid repeated registration */
+ if (atomic_add_return(1, &sec_active_devs) == 1) {
+ if (fusion_limit == 1)
+ return crypto_register_skciphers(sec_normal_algs,
+ ARRAY_SIZE(sec_normal_algs));
+ else
+ return crypto_register_skciphers(sec_fusion_algs,
+ ARRAY_SIZE(sec_fusion_algs));
+ }
+
+ return 0;
}
void hisi_sec_unregister_from_crypto(int fusion_limit)
{
- if (fusion_limit == 1)
- crypto_unregister_skciphers(sec_normal_algs,
- ARRAY_SIZE(sec_normal_algs));
- else
- crypto_unregister_skciphers(sec_fusion_algs,
- ARRAY_SIZE(sec_fusion_algs));
+ if (atomic_sub_return(1, &sec_active_devs) == 0) {
+ if (fusion_limit == 1)
+ crypto_unregister_skciphers(sec_normal_algs,
+ ARRAY_SIZE(sec_normal_algs));
+ else
+ crypto_unregister_skciphers(sec_fusion_algs,
+ ARRAY_SIZE(sec_fusion_algs));
+ }
}
+
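With several SEC devices probing the same driver, the atomic counter added above makes the skcipher registration happen exactly once: the first probe registers, the last remove unregisters. A compact sketch of that guard using C11 atomics in place of the kernel's atomic_t (register_algs()/unregister_algs() are placeholders, not kernel APIs):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int active_devs;

static int register_algs(void)    { puts("register");   return 0; }
static void unregister_algs(void) { puts("unregister"); }

static int on_probe(void)
{
	/* atomic_fetch_add returns the old value, so old + 1 mirrors
	 * the kernel's atomic_add_return() */
	if (atomic_fetch_add(&active_devs, 1) + 1 == 1)
		return register_algs();      /* first device only */
	return 0;
}

static void on_remove(void)
{
	if (atomic_fetch_sub(&active_devs, 1) - 1 == 0)
		unregister_algs();           /* last device only */
}

int main(void)
{
	on_probe();  on_probe();             /* registers once */
	on_remove(); on_remove();            /* unregisters once */
	return 0;
}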
diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
index ba5c478..b4e5d57f 100644
--- a/drivers/crypto/hisilicon/sec2/sec_main.c
+++ b/drivers/crypto/hisilicon/sec2/sec_main.c
@@ -23,21 +23,24 @@
#include "sec.h"
#include "sec_crypto.h"
-#define SEC_VF_NUM 63
#define SEC_QUEUE_NUM_V1 4096
#define SEC_QUEUE_NUM_V2 1024
-#define SEC_PCI_DEVICE_ID_PF 0xa255
-#define SEC_PCI_DEVICE_ID_VF 0xa256
+#define SEC_PF_PCI_DEVICE_ID 0xa255
+#define SEC_VF_PCI_DEVICE_ID 0xa256
-#define SEC_COMMON_REG_OFF 0x1000
+#define SEC_SQE_SIZE 128
+#define SEC_SQ_SIZE (SEC_SQE_SIZE * QM_Q_DEPTH)
+#define SEC_PF_DEF_Q_NUM 64
+#define SEC_PF_DEF_Q_BASE 0
+#define SEC_CTX_Q_NUM_DEF 24
+#define SEC_CTX_Q_NUM_MAX 32
-#define SEC_MASTER_GLOBAL_CTRL 0x300000
-#define SEC_MASTER_GLOBAL_CTRL_SHUTDOWN 0x1
-#define SEC_MASTER_TRANS_RETURN 0x300150
-#define SEC_MASTER_TRANS_RETURN_RW 0x3
#define SEC_AM_CFG_SIG_PORT_MAX_TRANS 0x300014
#define SEC_SINGLE_PORT_MAX_TRANS 0x2060
-
+#define SEC_CTRL_CNT_CLR_CE 0x301120
+#define SEC_CTRL_CNT_CLR_CE_BIT BIT(0)
+#define SEC_ENGINE_PF_CFG_OFF 0x300000
+#define SEC_ACC_COMMON_REG_OFF 0x1000
#define SEC_CORE_INT_SOURCE 0x301010
#define SEC_CORE_INT_MASK 0x301000
#define SEC_CORE_INT_STATUS 0x301008
@@ -45,41 +48,17 @@
#define SEC_CORE_ECC_INFO 0x301C14
#define SEC_ECC_NUM(err_val) (((err_val) >> 16) & 0xFFFF)
#define SEC_ECC_ADDR(err_val) ((err_val) & 0xFFFF)
-
#define SEC_CORE_INT_DISABLE 0x0
#define SEC_CORE_INT_ENABLE 0x1ff
-#define SEC_HW_ERROR_IRQ_ENABLE 1
-#define SEC_HW_ERROR_IRQ_DISABLE 0
-
-#define SEC_BD_ERR_CHK_EN0 0xEFFFFFFF
-#define SEC_BD_ERR_CHK_EN1 0x7FFFF7FD
-#define SEC_BD_ERR_CHK_EN3 0xFFFFBFFF
-#define SEC_BD_ERR_CHK_EN_REG0 0x0380
-#define SEC_BD_ERR_CHK_EN_REG1 0x0384
-#define SEC_BD_ERR_CHK_EN_REG3 0x038c
-
-#define SEC_SQE_SIZE 128
-#define SEC_SQ_SIZE (SEC_SQE_SIZE * QM_Q_DEPTH)
-#define SEC_PF_DEF_Q_NUM 64
-#define SEC_PF_DEF_Q_BASE 0
-
-#define SEC_CTRL_CNT_CLR_CE 0x301120
-#define SEC_CTRL_CNT_CLR_CE_BIT BIT(0)
-
-#define SEC_ENGINE_PF_CFG_OFF 0x300000
-#define SEC_ACC_COMMON_REG_OFF 0x1000
+#define SEC_CORE_INT_CLEAR 0x1ff
-#define SEC_RAS_CE_REG 0x50
-#define SEC_RAS_FE_REG 0x54
-#define SEC_RAS_NFE_REG 0x58
+#define SEC_RAS_CE_REG 0x301050
+#define SEC_RAS_FE_REG 0x301054
+#define SEC_RAS_NFE_REG 0x301058
#define SEC_RAS_CE_ENB_MSK 0x88
#define SEC_RAS_FE_ENB_MSK 0x0
#define SEC_RAS_NFE_ENB_MSK 0x177
#define SEC_RAS_DISABLE 0x0
-
-#define SEC_SAA_EN_REG 0x270
-#define SEC_SAA_EN 0x17F
-
#define SEC_MEM_START_INIT_REG 0x0100
#define SEC_MEM_INIT_DONE_REG 0x0104
@@ -88,114 +67,39 @@
#define SEC_CLK_GATE_DISABLE (~BIT(3))
#define SEC_AXI_SHUTDOWN_ENABLE BIT(12)
#define SEC_AXI_SHUTDOWN_DISABLE 0xFFFFEFFF
-#define SEC_WR_MSI_PORT 0xFFFE
+#define SEC_WR_MSI_PORT BIT(0)
#define SEC_INTERFACE_USER_CTRL0_REG 0x0220
#define SEC_INTERFACE_USER_CTRL1_REG 0x0224
+#define SEC_SAA_EN_REG 0x270
+#define SEC_SAA_EN 0x17F
+#define SEC_BD_ERR_CHK_EN_REG0 0x0380
+#define SEC_BD_ERR_CHK_EN_REG1 0x0384
+#define SEC_BD_ERR_CHK_EN_REG3 0x038c
+#define SEC_BD_ERR_CHK_EN0 0xEFFFFFFF
+#define SEC_BD_ERR_CHK_EN1 0x7FFFF7FD
+#define SEC_BD_ERR_CHK_EN3 0xFFFFBFFF
#define SEC_USER0_SMMU_NORMAL (BIT(23) | BIT(15))
#define SEC_USER1_SMMU_NORMAL (BIT(31) | BIT(23) | BIT(15) | BIT(7))
#define SEC_DELAY_10_US 10
#define SEC_POLL_TIMEOUT_US 1000
-#define SEC_WAIT_DELAY 1000
-
#define SEC_DBGFS_VAL_MAX_LEN 20
-#define SEC_CHAIN_ABN_LEN 128UL
-#define SEC_ENABLE 1
-#define SEC_DISABLE 0
-#define SEC_RESET_WAIT_TIMEOUT 400
-#define SEC_PCI_COMMAND_INVALID 0xFFFFFFFF
-
-#define FORMAT_DECIMAL 10
-#define FROZEN_RANGE_MIN 10
-#define FROZEN_RANGE_MAX 20
-
-static const char sec_name[] = "hisi_sec2";
-static struct dentry *sec_debugfs_root;
-static u32 pf_q_num = SEC_PF_DEF_Q_NUM;
-static struct workqueue_struct *sec_wq;
-
-LIST_HEAD(hisi_sec_list);
-DEFINE_MUTEX(hisi_sec_list_lock);
-
-struct hisi_sec_resource {
- struct hisi_sec *sec;
- int distance;
- struct list_head list;
-};
-
-static void free_list(struct list_head *head)
-{
- struct hisi_sec_resource *res, *tmp;
-
- list_for_each_entry_safe(res, tmp, head, list) {
- list_del(&res->list);
- kfree(res);
- }
-}
-
-struct hisi_sec *find_sec_device(int node)
-{
- struct hisi_sec *ret = NULL;
-#ifdef CONFIG_NUMA
- struct hisi_sec_resource *res, *tmp;
- struct hisi_sec *hisi_sec;
- struct list_head *n;
- struct device *dev;
- LIST_HEAD(head);
-
- mutex_lock(&hisi_sec_list_lock);
-
- list_for_each_entry(hisi_sec, &hisi_sec_list, list) {
- res = kzalloc(sizeof(*res), GFP_KERNEL);
- if (!res)
- goto err;
-
- dev = &hisi_sec->qm.pdev->dev;
- res->sec = hisi_sec;
- res->distance = node_distance(dev->numa_node, node);
-
- n = &head;
- list_for_each_entry(tmp, &head, list) {
- if (res->distance < tmp->distance) {
- n = &tmp->list;
- break;
- }
- }
- list_add_tail(&res->list, n);
- }
-
- list_for_each_entry(tmp, &head, list) {
- if (tmp->sec->q_ref + tmp->sec->ctx_q_num <= pf_q_num) {
- tmp->sec->q_ref += tmp->sec->ctx_q_num;
- ret = tmp->sec;
- break;
- }
- }
-
- free_list(&head);
-#else
- mutex_lock(&hisi_sec_list_lock);
-
- ret = list_first_entry(&hisi_sec_list, struct hisi_sec, list);
-#endif
- mutex_unlock(&hisi_sec_list_lock);
-
- return ret;
-
-err:
- free_list(&head);
- mutex_unlock(&hisi_sec_list_lock);
- return NULL;
-}
+#define SEC_ADDR(qm, offset) ((qm)->io_base + (offset) + \
+ SEC_ENGINE_PF_CFG_OFF + SEC_ACC_COMMON_REG_OFF)
struct hisi_sec_hw_error {
u32 int_msk;
const char *msg;
};
+static const char sec_name[] = "hisi_sec2";
+static struct dentry *sec_debugfs_root;
+static struct hisi_qm_list sec_devices;
+static struct workqueue_struct *sec_wq;
+
static const struct hisi_sec_hw_error sec_hw_error[] = {
{.int_msk = BIT(0), .msg = "sec_axi_rresp_err_rint"},
{.int_msk = BIT(1), .msg = "sec_axi_bresp_err_rint"},
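The SEC_ADDR() helper defined above folds the two fixed engine offsets into every register access instead of carrying a precomputed base pointer around. A toy version of the address computation (the 0x0200 control-register offset is made up for the example; only the two offsets are taken from the diff):

#include <stdint.h>
#include <stdio.h>

#define ENGINE_PF_CFG_OFF   0x300000u
#define ACC_COMMON_REG_OFF  0x1000u
#define SEC_ADDR(base, off) ((base) + ENGINE_PF_CFG_OFF + \
			     ACC_COMMON_REG_OFF + (off))

int main(void)
{
	uintptr_t io_base = 0x80000000u;     /* pretend BAR mapping */

	/* 0x80000000 + 0x300000 + 0x1000 + 0x200 = 0x80301200 */
	printf("0x%lx\n", (unsigned long)SEC_ADDR(io_base, 0x0200u));
	return 0;
}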
@@ -233,9 +137,7 @@ struct ctrl_debug_file {
* Just relevant for PF.
*/
struct hisi_sec_ctrl {
- u32 num_vfs;
struct hisi_sec *hisi_sec;
- struct dentry *debug_root;
struct ctrl_debug_file files[SEC_DEBUG_FILE_NUM];
};
@@ -263,94 +165,104 @@ struct hisi_sec_ctrl {
{"SEC_BD_SAA8 ", 0x301C40},
};
-static int pf_q_num_set(const char *val, const struct kernel_param *kp)
+static int sec_ctx_q_num_set(const char *val, const struct kernel_param *kp)
{
- struct pci_dev *pdev = pci_get_device(PCI_VENDOR_ID_HUAWEI,
- SEC_PCI_DEVICE_ID_PF, NULL);
- u32 n, q_num;
- u8 rev_id;
+ u32 ctx_q_num;
int ret;
if (!val)
return -EINVAL;
- if (unlikely(!pdev)) {
- q_num = min_t(u32, SEC_QUEUE_NUM_V1, SEC_QUEUE_NUM_V2);
- pr_info
- ("No device found currently, suppose queue number is %d\n",
- q_num);
- } else {
- rev_id = pdev->revision;
- switch (rev_id) {
- case QM_HW_V1:
- q_num = SEC_QUEUE_NUM_V1;
- break;
- case QM_HW_V2:
- q_num = SEC_QUEUE_NUM_V2;
- break;
- default:
- return -EINVAL;
- }
- }
+ ret = kstrtou32(val, 10, &ctx_q_num);
+ if (ret)
+ return -EINVAL;
- ret = kstrtou32(val, 10, &n);
- if (ret != 0 || n > q_num)
+ if (!ctx_q_num || ctx_q_num > SEC_CTX_Q_NUM_MAX || ctx_q_num & 0x1) {
+ pr_err("ctx queue num[%u] is invalid!\n", ctx_q_num);
return -EINVAL;
+ }
return param_set_int(val, kp);
}
-static const struct kernel_param_ops pf_q_num_ops = {
- .set = pf_q_num_set,
+static const struct kernel_param_ops sec_ctx_q_num_ops = {
+ .set = sec_ctx_q_num_set,
.get = param_get_int,
};
+static u32 ctx_q_num = SEC_CTX_Q_NUM_DEF;
+module_param_cb(ctx_q_num, &sec_ctx_q_num_ops, &ctx_q_num, 0444);
+MODULE_PARM_DESC(ctx_q_num, "Number of queues in ctx (2, 4, ..., 32, default 24)");
-static int uacce_mode_set(const char *val, const struct kernel_param *kp)
+void sec_destroy_qps(struct hisi_qp **qps, int qp_num)
+{
+ hisi_qm_free_qps(qps, qp_num);
+ kfree(qps);
+}
+
+struct hisi_qp **sec_create_qps(void)
{
- u32 n;
+ int node = cpu_to_node(smp_processor_id());
+ u32 ctx_num = ctx_q_num;
+ struct hisi_qp **qps;
int ret;
- if (!val)
- return -EINVAL;
+ qps = kcalloc(ctx_num, sizeof(struct hisi_qp *), GFP_KERNEL);
+ if (!qps)
+ return NULL;
- ret = kstrtou32(val, FORMAT_DECIMAL, &n);
- if (ret != 0 || (n != UACCE_MODE_NOIOMMU && n != UACCE_MODE_NOUACCE))
- return -EINVAL;
+ ret = hisi_qm_alloc_qps_node(node, &sec_devices, qps, ctx_num, 0);
+ if (!ret)
+ return qps;
- return param_set_int(val, kp);
+ kfree(qps);
+ return NULL;
+}
+
+#ifdef CONFIG_CRYPTO_QM_UACCE
+static int uacce_mode_set(const char *val, const struct kernel_param *kp)
+{
+ return mode_set(val, kp);
}
-static const struct kernel_param_ops uacce_mode_ops = {
+static const struct kernel_param_ops sec_uacce_mode_ops = {
.set = uacce_mode_set,
.get = param_get_int,
};
-static int ctx_q_num_set(const char *val, const struct kernel_param *kp)
-{
- u32 ctx_q_num;
- int ret;
+static u32 uacce_mode = UACCE_MODE_NOUACCE;
+module_param_cb(uacce_mode, &sec_uacce_mode_ops, &uacce_mode, 0444);
+MODULE_PARM_DESC(uacce_mode, "Mode of UACCE can be 0(default), 2");
+#endif
- if (!val)
- return -EINVAL;
+static int pf_q_num_set(const char *val, const struct kernel_param *kp)
+{
+ return q_num_set(val, kp, SEC_PF_PCI_DEVICE_ID);
+}
- ret = kstrtou32(val, FORMAT_DECIMAL, &ctx_q_num);
- if (ret)
- return -EINVAL;
+static const struct kernel_param_ops sec_pf_q_num_ops = {
+ .set = pf_q_num_set,
+ .get = param_get_int,
+};
- if (ctx_q_num == 0 || ctx_q_num > QM_Q_DEPTH || ctx_q_num % 2 == 1) {
- pr_err("ctx_q_num[%u] is invalid\n", ctx_q_num);
- return -EINVAL;
- }
+static u32 pf_q_num = SEC_PF_DEF_Q_NUM;
+module_param_cb(pf_q_num, &sec_pf_q_num_ops, &pf_q_num, 0444);
+MODULE_PARM_DESC(pf_q_num, "Number of queues in PF(v1 1-4096, v2 1-1024)");
- return param_set_int(val, kp);
+static int vfs_num_set(const char *val, const struct kernel_param *kp)
+{
+ return vf_num_set(val, kp);
}
-static const struct kernel_param_ops ctx_q_num_ops = {
- .set = ctx_q_num_set,
+static const struct kernel_param_ops vfs_num_ops = {
+ .set = vfs_num_set,
.get = param_get_int,
};
-static int fusion_limit_set(const char *val, const struct kernel_param *kp)
+static u32 vfs_num;
+module_param_cb(vfs_num, &vfs_num_ops, &vfs_num, 0444);
+MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)");
+
+static int sec_fusion_limit_set(const char *val, const struct kernel_param *kp)
{
u32 fusion_limit;
int ret;
@@ -358,11 +270,11 @@ static int fusion_limit_set(const char *val, const struct kernel_param *kp)
if (!val)
return -EINVAL;
- ret = kstrtou32(val, FORMAT_DECIMAL, &fusion_limit);
+ ret = kstrtou32(val, 10, &fusion_limit);
if (ret)
return ret;
- if (fusion_limit == 0 || fusion_limit > FUSION_LIMIT_MAX) {
+ if (!fusion_limit || fusion_limit > FUSION_LIMIT_MAX) {
pr_err("fusion_limit[%u] is't at range(0, %d)", fusion_limit,
FUSION_LIMIT_MAX);
return -EINVAL;
@@ -371,12 +283,17 @@ static int fusion_limit_set(const char *val, const struct kernel_param *kp)
return param_set_int(val, kp);
}
-static const struct kernel_param_ops fusion_limit_ops = {
- .set = fusion_limit_set,
+static const struct kernel_param_ops sec_fusion_limit_ops = {
+ .set = sec_fusion_limit_set,
.get = param_get_int,
};
+static u32 fusion_limit = FUSION_LIMIT_DEF;
-static int fusion_tmout_nsec_set(const char *val, const struct kernel_param *kp)
+module_param_cb(fusion_limit, &sec_fusion_limit_ops, &fusion_limit, 0444);
+MODULE_PARM_DESC(fusion_limit, "(1, acc_sgl_sge_nr of hisilicon QM)");
+
+static int sec_fusion_tmout_ns_set(const char *val,
+ const struct kernel_param *kp)
{
u32 fusion_tmout_nsec;
int ret;
@@ -384,7 +301,7 @@ static int fusion_tmout_nsec_set(const char *val, const struct kernel_param *kp)
if (!val)
return -EINVAL;
- ret = kstrtou32(val, FORMAT_DECIMAL, &fusion_tmout_nsec);
+ ret = kstrtou32(val, 10, &fusion_tmout_nsec);
if (ret)
return ret;
@@ -396,53 +313,22 @@ static int fusion_tmout_nsec_set(const char *val, const struct kernel_param *kp)
return param_set_int(val, kp);
}
-static const struct kernel_param_ops fusion_tmout_nsec_ops = {
- .set = fusion_tmout_nsec_set,
+static const struct kernel_param_ops sec_fusion_time_ops = {
+ .set = sec_fusion_tmout_ns_set,
.get = param_get_int,
};
-
-module_param_cb(pf_q_num, &pf_q_num_ops, &pf_q_num, 0444);
-MODULE_PARM_DESC(pf_q_num, "Number of queues in PF(v1 0-4096, v2 0-1024)");
-
-static int uacce_mode = UACCE_MODE_NOUACCE;
-module_param_cb(uacce_mode, &uacce_mode_ops, &uacce_mode, 0444);
-MODULE_PARM_DESC(uacce_mode, "Mode of UACCE can be 0(default), 2");
-
-static int ctx_q_num = CTX_Q_NUM_DEF;
-module_param_cb(ctx_q_num, &ctx_q_num_ops, &ctx_q_num, 0444);
-MODULE_PARM_DESC(ctx_q_num, "Number of queue in ctx (2, 4, 6, ..., 1024)");
-
-static int fusion_limit = FUSION_LIMIT_DEF;
-module_param_cb(fusion_limit, &fusion_limit_ops, &fusion_limit, 0444);
-MODULE_PARM_DESC(fusion_limit, "(1, acc_sgl_sge_nr)");
-
-static int fusion_tmout_nsec = FUSION_TMOUT_NSEC_DEF;
-module_param_cb(fusion_tmout_nsec, &fusion_tmout_nsec_ops, &fusion_tmout_nsec,
- 0444);
-MODULE_PARM_DESC(fusion_tmout_nsec, "(0, NSEC_PER_SEC)");
+static u32 fusion_time = FUSION_TMOUT_NSEC_DEF; /* ns */
+module_param_cb(fusion_time, &sec_fusion_time_ops, &fusion_time, 0444);
+MODULE_PARM_DESC(fusion_time, "(0, NSEC_PER_SEC)");
static const struct pci_device_id hisi_sec_dev_ids[] = {
- { PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, SEC_PCI_DEVICE_ID_PF) },
- { PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, SEC_PCI_DEVICE_ID_VF) },
+ { PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, SEC_PF_PCI_DEVICE_ID) },
+ { PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, SEC_VF_PCI_DEVICE_ID) },
{ 0, }
};
MODULE_DEVICE_TABLE(pci, hisi_sec_dev_ids);
-static inline void hisi_sec_add_to_list(struct hisi_sec *hisi_sec)
-{
- mutex_lock(&hisi_sec_list_lock);
- list_add_tail(&hisi_sec->list, &hisi_sec_list);
- mutex_unlock(&hisi_sec_list_lock);
-}
-
-static inline void hisi_sec_remove_from_list(struct hisi_sec *hisi_sec)
-{
- mutex_lock(&hisi_sec_list_lock);
- list_del(&hisi_sec->list);
- mutex_unlock(&hisi_sec_list_lock);
-}
-
-u8 sec_get_endian(struct hisi_sec *hisi_sec)
+static u8 sec_get_endian(struct hisi_qm *qm)
{
u32 reg;
@@ -450,83 +336,83 @@ u8 sec_get_endian(struct hisi_sec *hisi_sec)
* As for VF, it is a wrong way to get endian setting by
* reading a register of the engine
*/
- if (hisi_sec->qm.pdev->is_virtfn) {
- dev_err_ratelimited(&hisi_sec->qm.pdev->dev,
- "error! shouldn't access a register in VF\n");
+ if (qm->pdev->is_virtfn) {
+ dev_err_ratelimited(&qm->pdev->dev,
+ "cannot access a register in VF!\n");
return SEC_LE;
}
- reg = readl_relaxed(hisi_sec->qm.io_base + SEC_ENGINE_PF_CFG_OFF +
+ reg = readl_relaxed(qm->io_base + SEC_ENGINE_PF_CFG_OFF +
SEC_ACC_COMMON_REG_OFF + SEC_CONTROL_REG);
+
/* BD little endian mode */
if (!(reg & BIT(0)))
return SEC_LE;
+
/* BD 32-bits big endian mode */
else if (!(reg & BIT(1)))
return SEC_32BE;
+
/* BD 64-bits big endian mode */
else
return SEC_64BE;
}
-static int sec_engine_init(struct hisi_sec *hisi_sec)
+static int sec_engine_init(struct hisi_qm *qm)
{
int ret;
u32 reg;
- struct hisi_qm *qm = &hisi_sec->qm;
- void *base = qm->io_base + SEC_ENGINE_PF_CFG_OFF +
- SEC_ACC_COMMON_REG_OFF;
-
- /* config sec single port max outstanding */
- writel(SEC_SINGLE_PORT_MAX_TRANS,
- qm->io_base + SEC_AM_CFG_SIG_PORT_MAX_TRANS);
-
- /* config sec saa enable */
- writel(SEC_SAA_EN, base + SEC_SAA_EN_REG);
/* disable clock gate control */
- reg = readl_relaxed(base + SEC_CONTROL_REG);
+ reg = readl_relaxed(SEC_ADDR(qm, SEC_CONTROL_REG));
reg &= SEC_CLK_GATE_DISABLE;
- writel(reg, base + SEC_CONTROL_REG);
+ writel_relaxed(reg, SEC_ADDR(qm, SEC_CONTROL_REG));
- writel(0x1, base + SEC_MEM_START_INIT_REG);
- ret = readl_relaxed_poll_timeout(base +
- SEC_MEM_INIT_DONE_REG, reg, reg & 0x1,
- SEC_DELAY_10_US, SEC_POLL_TIMEOUT_US);
+ writel_relaxed(0x1, SEC_ADDR(qm, SEC_MEM_START_INIT_REG));
+
+ ret = readl_relaxed_poll_timeout(SEC_ADDR(qm, SEC_MEM_INIT_DONE_REG),
+ reg, reg & 0x1, SEC_DELAY_10_US,
+ SEC_POLL_TIMEOUT_US);
if (ret) {
- dev_err(&qm->pdev->dev, "fail to init sec mem\n");
+ pci_err(qm->pdev, "fail to init sec mem\n");
return ret;
}
- reg = readl_relaxed(base + SEC_CONTROL_REG);
+ reg = readl_relaxed(SEC_ADDR(qm, SEC_CONTROL_REG));
reg |= (0x1 << SEC_TRNG_EN_SHIFT);
- writel(reg, base + SEC_CONTROL_REG);
+ writel_relaxed(reg, SEC_ADDR(qm, SEC_CONTROL_REG));
- reg = readl_relaxed(base + SEC_INTERFACE_USER_CTRL0_REG);
+ reg = readl_relaxed(SEC_ADDR(qm, SEC_INTERFACE_USER_CTRL0_REG));
reg |= SEC_USER0_SMMU_NORMAL;
- writel(reg, base + SEC_INTERFACE_USER_CTRL0_REG);
+ writel_relaxed(reg, SEC_ADDR(qm, SEC_INTERFACE_USER_CTRL0_REG));
- reg = readl_relaxed(base + SEC_INTERFACE_USER_CTRL1_REG);
+ reg = readl_relaxed(SEC_ADDR(qm, SEC_INTERFACE_USER_CTRL1_REG));
reg |= SEC_USER1_SMMU_NORMAL;
- writel(reg, base + SEC_INTERFACE_USER_CTRL1_REG);
+ writel_relaxed(reg, SEC_ADDR(qm, SEC_INTERFACE_USER_CTRL1_REG));
+
+ writel(SEC_SINGLE_PORT_MAX_TRANS,
+ qm->io_base + SEC_AM_CFG_SIG_PORT_MAX_TRANS);
+
+ writel(SEC_SAA_EN, SEC_ADDR(qm, SEC_SAA_EN_REG));
/* Enable sm4 extra mode, as ctr/ecb */
- writel(SEC_BD_ERR_CHK_EN0, base + SEC_BD_ERR_CHK_EN_REG0);
+ writel_relaxed(SEC_BD_ERR_CHK_EN0,
+ SEC_ADDR(qm, SEC_BD_ERR_CHK_EN_REG0));
/* Enable sm4 xts mode multiple iv */
- writel(SEC_BD_ERR_CHK_EN1, base + SEC_BD_ERR_CHK_EN_REG1);
- writel(SEC_BD_ERR_CHK_EN3, base + SEC_BD_ERR_CHK_EN_REG3);
+ writel_relaxed(SEC_BD_ERR_CHK_EN1,
+ SEC_ADDR(qm, SEC_BD_ERR_CHK_EN_REG1));
+ writel_relaxed(SEC_BD_ERR_CHK_EN3,
+ SEC_ADDR(qm, SEC_BD_ERR_CHK_EN_REG3));
/* config endian */
- reg = readl_relaxed(base + SEC_CONTROL_REG);
- reg |= sec_get_endian(hisi_sec);
- writel(reg, base + SEC_CONTROL_REG);
+ reg = readl_relaxed(SEC_ADDR(qm, SEC_CONTROL_REG));
+ reg |= sec_get_endian(qm);
+ writel_relaxed(reg, SEC_ADDR(qm, SEC_CONTROL_REG));
return 0;
}
-static void hisi_sec_set_user_domain_and_cache(struct hisi_sec *hisi_sec)
+static int sec_set_user_domain_and_cache(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_sec->qm;
-
/* qm user domain */
writel(AXUSER_BASE, qm->io_base + QM_ARUSER_M_CFG_1);
writel(ARUSER_M_CFG_ENABLE, qm->io_base + QM_ARUSER_M_CFG_ENABLE);
@@ -540,22 +426,18 @@ static void hisi_sec_set_user_domain_and_cache(struct hisi_sec *hisi_sec)
/* disable FLR triggered by BME(bus master enable) */
writel(PEH_AXUSER_CFG, qm->io_base + QM_PEH_AXUSER_CFG);
- writel(PEH_AXUSER_CFG_ENABLE, qm->io_base +
- QM_PEH_AXUSER_CFG_ENABLE);
+ writel(PEH_AXUSER_CFG_ENABLE, qm->io_base + QM_PEH_AXUSER_CFG_ENABLE);
/* enable sqc,cqc writeback */
writel(SQC_CACHE_ENABLE | CQC_CACHE_ENABLE | SQC_CACHE_WB_ENABLE |
CQC_CACHE_WB_ENABLE | FIELD_PREP(SQC_CACHE_WB_THRD, 1) |
FIELD_PREP(CQC_CACHE_WB_THRD, 1), qm->io_base + QM_CACHE_CTL);
- if (sec_engine_init(hisi_sec))
- dev_err(&qm->pdev->dev, "sec_engine_init failed");
+ return sec_engine_init(qm);
}
-static void hisi_sec_debug_regs_clear(struct hisi_sec *hisi_sec)
+static void sec_debug_regs_clear(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_sec->qm;
-
/* clear current_qm */
writel(0x0, qm->io_base + QM_DFX_MB_CNT_VF);
writel(0x0, qm->io_base + QM_DFX_DB_CNT_VF);
@@ -566,50 +448,53 @@ static void hisi_sec_debug_regs_clear(struct hisi_sec *hisi_sec)
hisi_qm_debug_regs_clear(qm);
}
-static void hisi_sec_hw_error_set_state(struct hisi_sec *hisi_sec, bool state)
+static void sec_hw_error_enable(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_sec->qm;
- void *base = qm->io_base + SEC_ENGINE_PF_CFG_OFF +
- SEC_ACC_COMMON_REG_OFF;
u32 val;
if (qm->ver == QM_HW_V1) {
writel(SEC_CORE_INT_DISABLE, qm->io_base + SEC_CORE_INT_MASK);
- dev_info(&qm->pdev->dev, "v%d don't support hw error handle\n",
- qm->ver);
+ pci_info(qm->pdev, "V1 does not support hw error handling\n");
return;
}
- val = readl(base + SEC_CONTROL_REG);
- if (state) {
- /* clear SEC hw error source if having */
- writel(SEC_CORE_INT_ENABLE,
- hisi_sec->qm.io_base + SEC_CORE_INT_SOURCE);
+ val = readl(SEC_ADDR(qm, SEC_CONTROL_REG));
- /* enable SEC hw error interrupts */
- writel(SEC_CORE_INT_ENABLE, qm->io_base + SEC_CORE_INT_MASK);
+ /* clear any pending SEC hw error source */
+ writel(SEC_CORE_INT_CLEAR, qm->io_base + SEC_CORE_INT_SOURCE);
- /* enable RAS int */
- writel(SEC_RAS_CE_ENB_MSK, base + SEC_RAS_CE_REG);
- writel(SEC_RAS_FE_ENB_MSK, base + SEC_RAS_FE_REG);
- writel(SEC_RAS_NFE_ENB_MSK, base + SEC_RAS_NFE_REG);
+ /* enable SEC hw error interrupts */
+ writel(SEC_CORE_INT_ENABLE, qm->io_base + SEC_CORE_INT_MASK);
- /* enable SEC block master OOO when m-bit error occur */
- val = val | SEC_AXI_SHUTDOWN_ENABLE;
- } else {
- /* disable RAS int */
- writel(SEC_RAS_DISABLE, base + SEC_RAS_CE_REG);
- writel(SEC_RAS_DISABLE, base + SEC_RAS_FE_REG);
- writel(SEC_RAS_DISABLE, base + SEC_RAS_NFE_REG);
+ /* enable RAS int */
+ writel(SEC_RAS_CE_ENB_MSK, qm->io_base + SEC_RAS_CE_REG);
+ writel(SEC_RAS_FE_ENB_MSK, qm->io_base + SEC_RAS_FE_REG);
+ writel(SEC_RAS_NFE_ENB_MSK, qm->io_base + SEC_RAS_NFE_REG);
- /* disable SEC hw error interrupts */
- writel(SEC_CORE_INT_DISABLE, qm->io_base + SEC_CORE_INT_MASK);
+ /* enable SEC block master OOO when m-bit error occur */
+ val = val | SEC_AXI_SHUTDOWN_ENABLE;
- /* disable SEC block master OOO when m-bit error occur */
- val = val & SEC_AXI_SHUTDOWN_DISABLE;
- }
+ writel(val, SEC_ADDR(qm, SEC_CONTROL_REG));
+}
- writel(val, base + SEC_CONTROL_REG);
+static void sec_hw_error_disable(struct hisi_qm *qm)
+{
+ u32 val;
+
+ val = readl(SEC_ADDR(qm, SEC_CONTROL_REG));
+
+ /* disable RAS int */
+ writel(SEC_RAS_DISABLE, qm->io_base + SEC_RAS_CE_REG);
+ writel(SEC_RAS_DISABLE, qm->io_base + SEC_RAS_FE_REG);
+ writel(SEC_RAS_DISABLE, qm->io_base + SEC_RAS_NFE_REG);
+
+ /* disable SEC hw error interrupts */
+ writel(SEC_CORE_INT_DISABLE, qm->io_base + SEC_CORE_INT_MASK);
+
+ /* disable SEC block master OOO when m-bit error occur */
+ val = val & SEC_AXI_SHUTDOWN_DISABLE;
+
+ writel(val, SEC_ADDR(qm, SEC_CONTROL_REG));
}
static inline struct hisi_qm *file_to_qm(struct ctrl_debug_file *file)
@@ -629,21 +514,21 @@ static u32 current_qm_read(struct ctrl_debug_file *file)
static int current_qm_write(struct ctrl_debug_file *file, u32 val)
{
struct hisi_qm *qm = file_to_qm(file);
- struct hisi_sec_ctrl *ctrl = file->ctrl;
- u32 tmp, vfq_num;
+ u32 vfq_num;
+ u32 tmp;
- if (val > ctrl->num_vfs)
+ if (val > qm->vfs_num)
return -EINVAL;
/* According PF or VF Dev ID to calculation curr_qm_qp_num and store */
if (val == 0) {
qm->debug.curr_qm_qp_num = qm->qp_num;
} else {
- vfq_num = (qm->ctrl_q_num - qm->qp_num) / ctrl->num_vfs;
- if (val == ctrl->num_vfs) {
+ vfq_num = (qm->ctrl_q_num - qm->qp_num) / qm->vfs_num;
+ if (val == qm->vfs_num) {
qm->debug.curr_qm_qp_num =
qm->ctrl_q_num - qm->qp_num -
- (ctrl->num_vfs - 1) * vfq_num;
+ (qm->vfs_num - 1) * vfq_num;
} else {
qm->debug.curr_qm_qp_num = vfq_num;
}
@@ -668,7 +553,7 @@ static u32 clear_enable_read(struct ctrl_debug_file *file)
struct hisi_qm *qm = file_to_qm(file);
return readl(qm->io_base + SEC_CTRL_CNT_CLR_CE) &
- SEC_CTRL_CNT_CLR_CE_BIT;
+ SEC_CTRL_CNT_CLR_CE_BIT;
}
static int clear_enable_write(struct ctrl_debug_file *file, u32 val)
@@ -676,11 +561,11 @@ static int clear_enable_write(struct ctrl_debug_file *file, u32 val)
struct hisi_qm *qm = file_to_qm(file);
u32 tmp;
- if (val != 1 && val != 0)
+ if (val != 1 && val)
return -EINVAL;
tmp = (readl(qm->io_base + SEC_CTRL_CNT_CLR_CE) &
- ~SEC_CTRL_CNT_CLR_CE_BIT) | val;
+ ~SEC_CTRL_CNT_CLR_CE_BIT) | val;
writel(tmp, qm->io_base + SEC_CTRL_CNT_CLR_CE);
return 0;
@@ -695,6 +580,7 @@ static ssize_t ctrl_debug_read(struct file *filp, char __user *buf,
int ret;
spin_lock_irq(&file->lock);
+
switch (file->index) {
case SEC_CURRENT_QM:
val = current_qm_read(file);
@@ -706,8 +592,10 @@ static ssize_t ctrl_debug_read(struct file *filp, char __user *buf,
spin_unlock_irq(&file->lock);
return -EINVAL;
}
+
spin_unlock_irq(&file->lock);
ret = snprintf(tbuf, SEC_DBGFS_VAL_MAX_LEN, "%u\n", val);
+
return simple_read_from_buffer(buf, count, pos, tbuf, ret);
}
@@ -726,7 +614,7 @@ static ssize_t ctrl_debug_write(struct file *filp, const char __user *buf,
return -ENOSPC;
len = simple_write_to_buffer(tbuf, SEC_DBGFS_VAL_MAX_LEN - 1,
- pos, buf, count);
+ pos, buf, count);
if (len < 0)
return len;
@@ -735,6 +623,7 @@ static ssize_t ctrl_debug_write(struct file *filp, const char __user *buf,
return -EFAULT;
spin_lock_irq(&file->lock);
+
switch (file->index) {
case SEC_CURRENT_QM:
ret = current_qm_write(file, val);
@@ -750,6 +639,7 @@ static ssize_t ctrl_debug_write(struct file *filp, const char __user *buf,
ret = -EINVAL;
goto err_input;
}
+
spin_unlock_irq(&file->lock);
return count;
@@ -766,12 +656,11 @@ static ssize_t ctrl_debug_write(struct file *filp, const char __user *buf,
.write = ctrl_debug_write,
};
-static int hisi_sec_core_debug_init(struct hisi_sec_ctrl *ctrl)
+static int hisi_sec_core_debug_init(struct hisi_qm *qm)
{
- struct hisi_sec *hisi_sec = ctrl->hisi_sec;
- struct hisi_qm *qm = &hisi_sec->qm;
+ struct hisi_sec *sec = container_of(qm, struct hisi_sec, qm);
struct device *dev = &qm->pdev->dev;
- struct hisi_sec_dfx *dfx = &hisi_sec->sec_dfx;
+ struct hisi_sec_dfx *dfx = &sec->sec_dfx;
struct debugfs_regset32 *regset;
struct dentry *tmp_d, *tmp;
char buf[SEC_DBGFS_VAL_MAX_LEN];
@@ -781,7 +670,7 @@ static int hisi_sec_core_debug_init(struct hisi_sec_ctrl *ctrl)
if (ret < 0)
return -ENOENT;
- tmp_d = debugfs_create_dir(buf, ctrl->debug_root);
+ tmp_d = debugfs_create_dir(buf, qm->debug.debug_root);
if (!tmp_d)
return -ENOENT;
@@ -847,29 +736,30 @@ static int hisi_sec_core_debug_init(struct hisi_sec_ctrl *ctrl)
return 0;
}
-static int hisi_sec_ctrl_debug_init(struct hisi_sec_ctrl *ctrl)
+static int hisi_sec_ctrl_debug_init(struct hisi_qm *qm)
{
+ struct hisi_sec *sec = container_of(qm, struct hisi_sec, qm);
struct dentry *tmp;
int i;
for (i = SEC_CURRENT_QM; i < SEC_DEBUG_FILE_NUM; i++) {
- spin_lock_init(&ctrl->files[i].lock);
- ctrl->files[i].ctrl = ctrl;
- ctrl->files[i].index = i;
+ spin_lock_init(&sec->ctrl->files[i].lock);
+ sec->ctrl->files[i].ctrl = sec->ctrl;
+ sec->ctrl->files[i].index = i;
tmp = debugfs_create_file(ctrl_debug_file_name[i], 0600,
- ctrl->debug_root, ctrl->files + i,
+ qm->debug.debug_root,
+ sec->ctrl->files + i,
&ctrl_debug_fops);
if (!tmp)
return -ENOENT;
}
- return hisi_sec_core_debug_init(ctrl);
+ return hisi_sec_core_debug_init(qm);
}
-static int hisi_sec_debugfs_init(struct hisi_sec *hisi_sec)
+static int hisi_sec_debugfs_init(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_sec->qm;
struct device *dev = &qm->pdev->dev;
struct dentry *dev_d;
int ret;
@@ -883,9 +773,8 @@ static int hisi_sec_debugfs_init(struct hisi_sec *hisi_sec)
if (ret)
goto failed_to_create;
- if (qm->pdev->device == SEC_PCI_DEVICE_ID_PF) {
- hisi_sec->ctrl->debug_root = dev_d;
- ret = hisi_sec_ctrl_debug_init(hisi_sec->ctrl);
+ if (qm->pdev->device == SEC_PF_PCI_DEVICE_ID) {
+ ret = hisi_sec_ctrl_debug_init(qm);
if (ret)
goto failed_to_create;
}
@@ -897,71 +786,62 @@ static int hisi_sec_debugfs_init(struct hisi_sec *hisi_sec)
return ret;
}
-static void hisi_sec_debugfs_exit(struct hisi_sec *hisi_sec)
+static void hisi_sec_debugfs_exit(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_sec->qm;
-
debugfs_remove_recursive(qm->debug.debug_root);
+
if (qm->fun_type == QM_HW_PF) {
- hisi_sec_debug_regs_clear(hisi_sec);
+ sec_debug_regs_clear(qm);
qm->debug.curr_qm_qp_num = 0;
}
}
-static void hisi_sec_hw_error_init(struct hisi_sec *hisi_sec)
+static void sec_log_hw_error(struct hisi_qm *qm, u32 err_sts)
{
- hisi_qm_hw_error_init(&hisi_sec->qm, QM_BASE_CE,
- QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT
- | QM_ACC_WB_NOT_READY_TIMEOUT, 0,
- QM_DB_RANDOM_INVALID);
- hisi_sec_hw_error_set_state(hisi_sec, true);
-}
+ const struct hisi_sec_hw_error *errs = sec_hw_error;
+ struct device *dev = &qm->pdev->dev;
+ u32 err_val;
-static void hisi_sec_open_master_ooo(struct hisi_qm *qm)
-{
- u32 val;
- void *base = qm->io_base + SEC_ENGINE_PF_CFG_OFF +
- SEC_ACC_COMMON_REG_OFF;
+ while (errs->msg) {
+ if (errs->int_msk & err_sts) {
+ dev_err(dev, "%s [error status=0x%x] found\n",
+ errs->msg, errs->int_msk);
- val = readl(base + SEC_CONTROL_REG);
- writel(val & SEC_AXI_SHUTDOWN_DISABLE, base + SEC_CONTROL_REG);
- writel(val | SEC_AXI_SHUTDOWN_ENABLE, base + SEC_CONTROL_REG);
+ if (SEC_CORE_INT_STATUS_M_ECC & errs->int_msk) {
+ err_val = readl(qm->io_base +
+ SEC_CORE_ECC_INFO);
+ dev_err(dev, "multi ecc sram num=0x%x\n",
+ SEC_ECC_NUM(err_val));
+ }
+ }
+ errs++;
+ }
}
-static u32 hisi_sec_get_hw_err_status(struct hisi_qm *qm)
+static u32 sec_get_hw_err_status(struct hisi_qm *qm)
{
return readl(qm->io_base + SEC_CORE_INT_STATUS);
}
-static void hisi_sec_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+static void sec_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
{
writel(err_sts, qm->io_base + SEC_CORE_INT_SOURCE);
}
-static void hisi_sec_log_hw_error(struct hisi_qm *qm, u32 err_sts)
+static void sec_open_axi_master_ooo(struct hisi_qm *qm)
{
- const struct hisi_sec_hw_error *err = sec_hw_error;
- struct device *dev = &qm->pdev->dev;
- u32 err_val;
-
- while (err->msg) {
- if (err->int_msk & err_sts)
- dev_err(dev, "%s [error status=0x%x] found\n",
- err->msg, err->int_msk);
- err++;
- }
+ u32 val;
- if (SEC_CORE_INT_STATUS_M_ECC & err_sts) {
- err_val = readl(qm->io_base + SEC_CORE_ECC_INFO);
- dev_err(dev, "hisi-sec multi ecc sram num=0x%x\n",
- SEC_ECC_NUM(err_val));
- }
+ val = readl(SEC_ADDR(qm, SEC_CONTROL_REG));
+ writel(val & SEC_AXI_SHUTDOWN_DISABLE, SEC_ADDR(qm, SEC_CONTROL_REG));
+ writel(val | SEC_AXI_SHUTDOWN_ENABLE, SEC_ADDR(qm, SEC_CONTROL_REG));
}
-static int hisi_sec_pf_probe_init(struct hisi_sec *hisi_sec)
+static int hisi_sec_pf_probe_init(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_sec->qm;
+ struct hisi_sec *hisi_sec = container_of(qm, struct hisi_sec, qm);
struct hisi_sec_ctrl *ctrl;
+ int ret;
ctrl = devm_kzalloc(&qm->pdev->dev, sizeof(*ctrl), GFP_KERNEL);
if (!ctrl)
@@ -983,59 +863,57 @@ static int hisi_sec_pf_probe_init(struct hisi_sec *hisi_sec)
return -EINVAL;
}
- qm->err_ini.qm_wr_port = SEC_WR_MSI_PORT;
- qm->err_ini.ecc_2bits_mask = SEC_CORE_INT_STATUS_M_ECC;
- qm->err_ini.open_axi_master_ooo = hisi_sec_open_master_ooo;
- qm->err_ini.get_dev_hw_err_status = hisi_sec_get_hw_err_status;
- qm->err_ini.clear_dev_hw_err_status = hisi_sec_clear_hw_err_status;
- qm->err_ini.log_dev_hw_err = hisi_sec_log_hw_error;
- hisi_sec_set_user_domain_and_cache(hisi_sec);
- hisi_sec_hw_error_init(hisi_sec);
+ qm->err_ini.get_dev_hw_err_status = sec_get_hw_err_status;
+ qm->err_ini.clear_dev_hw_err_status = sec_clear_hw_err_status;
+ qm->err_ini.err_info.ecc_2bits_mask = SEC_CORE_INT_STATUS_M_ECC;
+ qm->err_ini.err_info.ce = QM_BASE_CE;
+ qm->err_ini.err_info.nfe = QM_BASE_NFE | QM_ACC_DO_TASK_TIMEOUT |
+ QM_ACC_WB_NOT_READY_TIMEOUT;
+ qm->err_ini.err_info.fe = 0;
+ qm->err_ini.err_info.msi = QM_DB_RANDOM_INVALID;
+ qm->err_ini.err_info.acpi_rst = "SRST";
+ qm->err_ini.hw_err_disable = sec_hw_error_disable;
+ qm->err_ini.hw_err_enable = sec_hw_error_enable;
+ qm->err_ini.set_usr_domain_cache = sec_set_user_domain_and_cache;
+ qm->err_ini.log_dev_hw_err = sec_log_hw_error;
+ qm->err_ini.open_axi_master_ooo = sec_open_axi_master_ooo;
+ qm->err_ini.err_info.msi_wr_port = SEC_WR_MSI_PORT;
+
+ ret = qm->err_ini.set_usr_domain_cache(qm);
+ if (ret)
+ return ret;
+
+ hisi_qm_dev_err_init(qm);
qm->err_ini.open_axi_master_ooo(qm);
- hisi_sec_debug_regs_clear(hisi_sec);
+ sec_debug_regs_clear(qm);
return 0;
}
-static int hisi_sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+static int hisi_sec_qm_pre_init(struct hisi_qm *qm, struct pci_dev *pdev)
{
- enum qm_hw_ver rev_id;
-
- rev_id = hisi_qm_get_hw_version(pdev);
- if (rev_id == QM_HW_UNKNOWN)
- return -ENODEV;
+ int ret;
+#ifdef CONFIG_CRYPTO_QM_UACCE
+ qm->algs = "sec\ncipher\ndigest\n";
+ qm->uacce_mode = uacce_mode;
+#endif
qm->pdev = pdev;
- qm->ver = rev_id;
-
+ ret = hisi_qm_pre_init(qm, pf_q_num, SEC_PF_DEF_Q_BASE);
+ if (ret)
+ return ret;
qm->sqe_size = SEC_SQE_SIZE;
qm->dev_name = sec_name;
- qm->fun_type = (pdev->device == SEC_PCI_DEVICE_ID_PF) ?
- QM_HW_PF : QM_HW_VF;
- qm->algs = "sec\ncipher\ndigest\n";
+ qm->qm_list = &sec_devices;
qm->wq = sec_wq;
- switch (uacce_mode) {
- case UACCE_MODE_NOUACCE:
- qm->use_uacce = false;
- break;
- case UACCE_MODE_NOIOMMU:
- qm->use_uacce = true;
- break;
- default:
- return -EINVAL;
- }
-
- return hisi_qm_init(qm);
+ return 0;
}
-static int hisi_sec_probe_init(struct hisi_qm *qm, struct hisi_sec *hisi_sec)
+static int hisi_sec_probe_init(struct hisi_qm *qm)
{
if (qm->fun_type == QM_HW_PF) {
- qm->qp_base = SEC_PF_DEF_Q_BASE;
- qm->qp_num = pf_q_num;
- qm->debug.curr_qm_qp_num = pf_q_num;
- return hisi_sec_pf_probe_init(hisi_sec);
+ return hisi_sec_pf_probe_init(qm);
} else if (qm->fun_type == QM_HW_VF) {
/*
* have no way to get qm configure in VM in v1 hardware,
@@ -1066,660 +944,104 @@ static int hisi_sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (!hisi_sec)
return -ENOMEM;
- pci_set_drvdata(pdev, hisi_sec);
-
- hisi_sec_add_to_list(hisi_sec);
+ qm = &hisi_sec->qm;
+ qm->fun_type = pdev->is_physfn ? QM_HW_PF : QM_HW_VF;
- hisi_sec->hisi_sec_list_lock = &hisi_sec_list_lock;
+ ret = hisi_sec_qm_pre_init(qm, pdev);
+ if (ret)
+ return ret;
hisi_sec->ctx_q_num = ctx_q_num;
hisi_sec->fusion_limit = fusion_limit;
+ hisi_sec->fusion_tmout_nsec = fusion_time;
- hisi_sec->fusion_tmout_nsec = fusion_tmout_nsec;
-
- qm = &hisi_sec->qm;
-
- ret = hisi_sec_qm_init(qm, pdev);
+ ret = hisi_qm_init(qm);
if (ret) {
- dev_err(&pdev->dev, "Failed to pre init qm!\n");
- goto err_remove_from_list;
+ pci_err(pdev, "Failed to init qm (%d)!\n", ret);
+ return ret;
}
- ret = hisi_sec_probe_init(qm, hisi_sec);
+ ret = hisi_sec_probe_init(qm);
if (ret) {
- dev_err(&pdev->dev, "Failed to probe!\n");
+ pci_err(pdev, "Failed to probe init (%d)!\n", ret);
goto err_qm_uninit;
}
ret = hisi_qm_start(qm);
- if (ret)
+ if (ret) {
+ pci_err(pdev, "Failed to start qm (%d)!\n", ret);
goto err_qm_uninit;
+ }
- ret = hisi_sec_debugfs_init(hisi_sec);
+ ret = hisi_sec_debugfs_init(qm);
if (ret)
- dev_err(&pdev->dev, "Failed to init debugfs (%d)!\n", ret);
-
- return 0;
-
- err_qm_uninit:
- hisi_qm_uninit(qm);
- err_remove_from_list:
- hisi_sec_remove_from_list(hisi_sec);
- return ret;
-}
-
-/* now we only support equal assignment */
-static int hisi_sec_vf_q_assign(struct hisi_sec *hisi_sec, u32 num_vfs)
-{
- struct hisi_qm *qm = &hisi_sec->qm;
- u32 qp_num = qm->qp_num;
- u32 q_base = qp_num;
- u32 q_num, remain_q_num, i;
- int ret;
+ pci_warn(pdev, "Failed to init debugfs (%d)!\n", ret);
- if (!num_vfs)
- return -EINVAL;
-
- remain_q_num = qm->ctrl_q_num - qp_num;
- q_num = remain_q_num / num_vfs;
+ hisi_qm_add_to_list(qm, &sec_devices);
- for (i = 1; i <= num_vfs; i++) {
- if (i == num_vfs)
- q_num += remain_q_num % num_vfs;
- ret = hisi_qm_set_vft(qm, i, q_base, q_num);
- if (ret)
- return ret;
- q_base += q_num;
+ ret = hisi_sec_register_to_crypto(fusion_limit);
+ if (ret < 0) {
+ pci_err(pdev, "Failed to register driver to crypto!\n");
+ goto err_remove_from_list;
}
- return 0;
-}
-
-static int hisi_sec_clear_vft_config(struct hisi_sec *hisi_sec)
-{
- struct hisi_sec_ctrl *ctrl = hisi_sec->ctrl;
- struct hisi_qm *qm = &hisi_sec->qm;
- u32 num_vfs = ctrl->num_vfs;
- int ret;
- u32 i;
-
- for (i = 1; i <= num_vfs; i++) {
- ret = hisi_qm_set_vft(qm, i, 0, 0);
- if (ret)
- return ret;
+ if (qm->fun_type == QM_HW_PF && vfs_num > 0) {
+ ret = hisi_qm_sriov_enable(pdev, vfs_num);
+ if (ret < 0)
+ goto err_crypto_unregister;
}
- ctrl->num_vfs = 0;
-
return 0;
-}
-
-static int hisi_sec_sriov_enable(struct pci_dev *pdev, int max_vfs)
-{
-#ifdef CONFIG_PCI_IOV
- struct hisi_sec *hisi_sec = pci_get_drvdata(pdev);
- u32 num_vfs;
- int pre_existing_vfs, ret;
-
- pre_existing_vfs = pci_num_vf(pdev);
-
- if (pre_existing_vfs) {
- dev_err(&pdev->dev,
- "Can't enable VF. Please disable pre-enabled VFs!\n");
- return 0;
- }
-
- num_vfs = min_t(u32, max_vfs, SEC_VF_NUM);
-
- ret = hisi_sec_vf_q_assign(hisi_sec, num_vfs);
- if (ret) {
- dev_err(&pdev->dev, "Can't assign queues for VF!\n");
- return ret;
- }
- hisi_sec->ctrl->num_vfs = num_vfs;
+err_crypto_unregister:
+ hisi_sec_unregister_from_crypto(fusion_limit);
- ret = pci_enable_sriov(pdev, num_vfs);
- if (ret) {
- dev_err(&pdev->dev, "Can't enable VF!\n");
- hisi_sec_clear_vft_config(hisi_sec);
- return ret;
- }
+err_remove_from_list:
+ hisi_qm_del_from_list(qm, &sec_devices);
+ hisi_sec_debugfs_exit(qm);
+ hisi_qm_stop(qm, QM_NORMAL);
- return num_vfs;
-#else
- return 0;
-#endif
-}
-
-static int hisi_sec_try_frozen_vfs(struct pci_dev *pdev)
-{
- struct hisi_sec *sec, *vf_sec;
- struct pci_dev *dev;
- int ret = 0;
-
- /* Try to frozen all the VFs as disable SRIOV */
- mutex_lock(&hisi_sec_list_lock);
- list_for_each_entry(sec, &hisi_sec_list, list) {
- dev = sec->qm.pdev;
- if (dev == pdev)
- continue;
- if (pci_physfn(dev) == pdev) {
- vf_sec = pci_get_drvdata(dev);
- ret = hisi_qm_frozen(&vf_sec->qm);
- if (ret)
- goto frozen_fail;
- }
- }
+err_qm_uninit:
+ hisi_qm_uninit(qm);
-frozen_fail:
- mutex_unlock(&hisi_sec_list_lock);
return ret;
}
-static int hisi_sec_sriov_disable(struct pci_dev *pdev)
-{
- struct hisi_sec *hisi_sec = pci_get_drvdata(pdev);
-
- if (pci_vfs_assigned(pdev)) {
- dev_err(&pdev->dev,
- "Can't disable VFs while VFs are assigned!\n");
- return -EPERM;
- }
-
- if (hisi_sec_try_frozen_vfs(pdev)) {
- dev_err(&pdev->dev, "try frozen VFs failed!\n");
- return -EBUSY;
- }
-
- /* remove in hisi_sec_pci_driver will be called to free VF resources */
- pci_disable_sriov(pdev);
- return hisi_sec_clear_vft_config(hisi_sec);
-}
-
static int hisi_sec_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
if (num_vfs == 0)
- return hisi_sec_sriov_disable(pdev);
+ return hisi_qm_sriov_disable(pdev, &sec_devices);
else
- return hisi_sec_sriov_enable(pdev, num_vfs);
-}
-
-static void hisi_sec_remove_wait_delay(struct hisi_sec *hisi_sec)
-{
- struct hisi_qm *qm = &hisi_sec->qm;
-
- while (hisi_qm_frozen(qm) || ((qm->fun_type == QM_HW_PF) &&
- hisi_sec_try_frozen_vfs(qm->pdev)))
- usleep_range(FROZEN_RANGE_MIN, FROZEN_RANGE_MAX);
-
- udelay(SEC_WAIT_DELAY);
+ return hisi_qm_sriov_enable(pdev, num_vfs);
}
static void hisi_sec_remove(struct pci_dev *pdev)
{
- struct hisi_sec *hisi_sec = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hisi_sec->qm;
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
if (uacce_mode != UACCE_MODE_NOUACCE)
- hisi_sec_remove_wait_delay(hisi_sec);
+ hisi_qm_remove_wait_delay(qm, &sec_devices);
+
+ if (qm->fun_type == QM_HW_PF && qm->vfs_num)
+ (void)hisi_qm_sriov_disable(pdev, NULL);
- if (qm->fun_type == QM_HW_PF && hisi_sec->ctrl->num_vfs != 0)
- (void)hisi_sec_sriov_disable(pdev);
+ hisi_sec_unregister_from_crypto(fusion_limit);
- hisi_sec_debugfs_exit(hisi_sec);
+ hisi_qm_del_from_list(qm, &sec_devices);
+ hisi_sec_debugfs_exit(qm);
(void)hisi_qm_stop(qm, QM_NORMAL);
if (qm->fun_type == QM_HW_PF)
- hisi_sec_hw_error_set_state(hisi_sec, false);
+ hisi_qm_dev_err_uninit(qm);
hisi_qm_uninit(qm);
- hisi_sec_remove_from_list(hisi_sec);
-}
-
-static void hisi_sec_shutdown(struct pci_dev *pdev)
-{
- struct hisi_sec *hisi_sec = pci_get_drvdata(pdev);
-
- hisi_qm_stop(&hisi_sec->qm, QM_NORMAL);
-}
-
-static pci_ers_result_t hisi_sec_error_detected(struct pci_dev *pdev,
- pci_channel_state_t state)
-{
- if (pdev->is_virtfn)
- return PCI_ERS_RESULT_NONE;
-
- dev_info(&pdev->dev, "PCI error detected, state(=%d)!!\n", state);
- if (state == pci_channel_io_perm_failure)
- return PCI_ERS_RESULT_DISCONNECT;
-
- return hisi_qm_process_dev_error(pdev);
-}
-
-static int hisi_sec_reset_prepare_ready(struct hisi_sec *hisi_sec)
-{
- struct pci_dev *pdev = hisi_sec->qm.pdev;
- struct hisi_sec *sec = pci_get_drvdata(pci_physfn(pdev));
- int delay = 0;
-
- while (test_and_set_bit(HISI_SEC_RESET, &sec->status)) {
- msleep(++delay);
- if (delay > SEC_RESET_WAIT_TIMEOUT)
- return -EBUSY;
- }
-
- return 0;
-}
-
-static int hisi_sec_vf_reset_prepare(struct pci_dev *pdev,
- enum qm_stop_reason stop_reason)
-{
- struct hisi_sec *hisi_sec;
- struct pci_dev *dev;
- struct hisi_qm *qm;
- int ret = 0;
-
- mutex_lock(&hisi_sec_list_lock);
- if (pdev->is_physfn) {
- list_for_each_entry(hisi_sec, &hisi_sec_list, list) {
- dev = hisi_sec->qm.pdev;
- if (dev == pdev)
- continue;
-
- if (pci_physfn(dev) == pdev) {
- qm = &hisi_sec->qm;
-
- ret = hisi_qm_stop(qm, stop_reason);
- if (ret)
- goto prepare_fail;
- }
- }
- }
-
-prepare_fail:
- mutex_unlock(&hisi_sec_list_lock);
- return ret;
-}
-
-static int hisi_sec_controller_reset_prepare(struct hisi_sec *hisi_sec)
-{
- struct hisi_qm *qm = &hisi_sec->qm;
- struct pci_dev *pdev = qm->pdev;
- int ret;
-
- ret = hisi_sec_reset_prepare_ready(hisi_sec);
- if (ret) {
- dev_err(&pdev->dev, "Controller reset not ready!\n");
- return ret;
- }
-
- ret = hisi_sec_vf_reset_prepare(pdev, QM_SOFT_RESET);
- if (ret) {
- dev_err(&pdev->dev, "Fails to stop VFs!\n");
- return ret;
- }
-
- ret = hisi_qm_stop(qm, QM_SOFT_RESET);
- if (ret) {
- dev_err(&pdev->dev, "Fails to stop QM!\n");
- return ret;
- }
-
-#ifdef CONFIG_CRYPTO_QM_UACCE
- if (qm->use_uacce) {
- ret = uacce_hw_err_isolate(&qm->uacce);
- if (ret) {
- dev_err(&pdev->dev, "Fails to isolate hw err!\n");
- return ret;
- }
- }
-#endif
-
- return 0;
-}
-
-static int hisi_sec_soft_reset(struct hisi_sec *hisi_sec)
-{
- struct hisi_qm *qm = &hisi_sec->qm;
- struct device *dev = &qm->pdev->dev;
- unsigned long long value;
- int ret;
- u32 val;
-
- ret = hisi_qm_reg_test(qm);
- if (ret)
- return ret;
-
- ret = hisi_qm_set_vf_mse(qm, SEC_DISABLE);
- if (ret) {
- dev_err(dev, "Fails to disable vf mse bit.\n");
- return ret;
- }
-
- ret = hisi_qm_set_msi(qm, SEC_DISABLE);
- if (ret) {
- dev_err(dev, "Fails to disable peh msi bit.\n");
- return ret;
- }
-
- /* Set qm ecc if dev ecc happened to hold on ooo */
- hisi_qm_set_ecc(qm);
-
- /* OOO register set and check */
- writel(SEC_MASTER_GLOBAL_CTRL_SHUTDOWN,
- hisi_sec->qm.io_base + SEC_MASTER_GLOBAL_CTRL);
-
- /* If bus lock, reset chip */
- ret = readl_relaxed_poll_timeout(hisi_sec->qm.io_base +
- SEC_MASTER_TRANS_RETURN,
- val,
- (val == SEC_MASTER_TRANS_RETURN_RW),
- SEC_DELAY_10_US,
- SEC_POLL_TIMEOUT_US);
- if (ret) {
- dev_emerg(dev, "Bus lock! Please reset system.\n");
- return ret;
- }
-
- ret = hisi_qm_set_pf_mse(qm, SEC_DISABLE);
- if (ret) {
- dev_err(dev, "Fails to disable pf mse bit.\n");
- return ret;
- }
-
- /* The reset related sub-control registers are not in PCI BAR */
- if (ACPI_HANDLE(dev)) {
- acpi_status s;
-
- s = acpi_evaluate_integer(ACPI_HANDLE(dev), "SRST",
- NULL, &value);
- if (ACPI_FAILURE(s) || value) {
- dev_err(dev, "Controller reset fails %lld\n", value);
- return -EIO;
- }
- } else {
- dev_err(dev, "No reset method!\n");
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int hisi_sec_vf_reset_done(struct pci_dev *pdev)
-{
- struct hisi_sec *hisi_sec;
- struct pci_dev *dev;
- struct hisi_qm *qm;
- int ret = 0;
-
- mutex_lock(&hisi_sec_list_lock);
- list_for_each_entry(hisi_sec, &hisi_sec_list, list) {
- dev = hisi_sec->qm.pdev;
- if (dev == pdev)
- continue;
-
- if (pci_physfn(dev) == pdev) {
- qm = &hisi_sec->qm;
-
- ret = hisi_qm_restart(qm);
- if (ret)
- goto reset_fail;
- }
- }
-
-reset_fail:
- mutex_unlock(&hisi_sec_list_lock);
- return ret;
-}
-
-static int hisi_sec_controller_reset_done(struct hisi_sec *hisi_sec)
-{
- struct hisi_qm *qm = &hisi_sec->qm;
- struct pci_dev *pdev = qm->pdev;
- struct device *dev = &pdev->dev;
- int ret;
-
- ret = hisi_qm_set_msi(qm, SEC_ENABLE);
- if (ret) {
- dev_err(dev, "Fails to enable peh msi bit!\n");
- return ret;
- }
-
- ret = hisi_qm_set_pf_mse(qm, SEC_ENABLE);
- if (ret) {
- dev_err(dev, "Fails to enable pf mse bit!\n");
- return ret;
- }
-
- ret = hisi_qm_set_vf_mse(qm, SEC_ENABLE);
- if (ret) {
- dev_err(dev, "Fails to enable vf mse bit!\n");
- return ret;
- }
-
- hisi_sec_set_user_domain_and_cache(hisi_sec);
- hisi_qm_restart_prepare(qm);
-
- ret = hisi_qm_restart(qm);
- if (ret) {
- dev_err(dev, "Failed to start QM!\n");
- return -EPERM;
- }
-
- if (hisi_sec->ctrl->num_vfs) {
- ret = hisi_sec_vf_q_assign(hisi_sec, hisi_sec->ctrl->num_vfs);
- if (ret) {
- dev_err(dev, "Failed to assign vf queues!\n");
- return ret;
- }
- }
-
- ret = hisi_sec_vf_reset_done(pdev);
- if (ret) {
- dev_err(dev, "Failed to start VFs!\n");
- return -EPERM;
- }
-
- hisi_qm_restart_done(qm);
- hisi_sec_hw_error_init(hisi_sec);
-
- return 0;
-}
-
-static int hisi_sec_controller_reset(struct hisi_sec *hisi_sec)
-{
- struct device *dev = &hisi_sec->qm.pdev->dev;
- int ret;
-
- dev_info(dev, "Controller resetting...\n");
-
- ret = hisi_sec_controller_reset_prepare(hisi_sec);
- if (ret)
- return ret;
-
- ret = hisi_sec_soft_reset(hisi_sec);
- if (ret) {
- dev_err(dev, "Controller reset failed (%d)\n", ret);
- return ret;
- }
-
- ret = hisi_sec_controller_reset_done(hisi_sec);
- if (ret)
- return ret;
-
- clear_bit(HISI_SEC_RESET, &hisi_sec->status);
- dev_info(dev, "Controller reset complete\n");
-
- return 0;
-}
-
-static pci_ers_result_t hisi_sec_slot_reset(struct pci_dev *pdev)
-{
- struct hisi_sec *hisi_sec = pci_get_drvdata(pdev);
- int ret;
-
- if (pdev->is_virtfn)
- return PCI_ERS_RESULT_RECOVERED;
-
- dev_info(&pdev->dev, "Requesting reset due to PCI error\n");
-
- pci_cleanup_aer_uncorrect_error_status(pdev);
-
- /* reset sec controller */
- ret = hisi_sec_controller_reset(hisi_sec);
- if (ret) {
- dev_warn(&pdev->dev, "hisi_sec controller reset failed (%d)\n",
- ret);
- return PCI_ERS_RESULT_DISCONNECT;
- }
-
- return PCI_ERS_RESULT_RECOVERED;
-}
-
-static void hisi_sec_set_hw_error(struct hisi_sec *hisi_sec, bool state)
-{
- struct pci_dev *pdev = hisi_sec->qm.pdev;
- struct hisi_sec *sec = pci_get_drvdata(pci_physfn(pdev));
- struct hisi_qm *qm = &sec->qm;
-
- if (qm->fun_type == QM_HW_VF)
- return;
-
- if (state)
- hisi_qm_hw_error_init(qm, QM_BASE_CE,
- QM_BASE_NFE | QM_ACC_WB_NOT_READY_TIMEOUT,
- 0, QM_DB_RANDOM_INVALID);
- else
- hisi_qm_hw_error_uninit(qm);
-
- hisi_sec_hw_error_set_state(sec, state);
-}
-
-static int hisi_sec_get_hw_error_status(struct hisi_sec *hisi_sec)
-{
- u32 err_sts;
-
- err_sts = readl(hisi_sec->qm.io_base + SEC_CORE_INT_STATUS) &
- SEC_CORE_INT_STATUS_M_ECC;
- if (err_sts)
- return err_sts;
-
- return 0;
-}
-
-static int hisi_sec_check_hw_error(struct hisi_sec *hisi_sec)
-{
- struct pci_dev *pdev = hisi_sec->qm.pdev;
- struct hisi_sec *sec = pci_get_drvdata(pci_physfn(pdev));
- struct hisi_qm *qm = &sec->qm;
- int ret;
-
- if (qm->fun_type == QM_HW_VF)
- return 0;
-
- ret = hisi_qm_get_hw_error_status(qm);
- if (ret)
- return ret;
-
- return hisi_sec_get_hw_error_status(sec);
-}
-
-static void hisi_sec_reset_prepare(struct pci_dev *pdev)
-{
- struct hisi_sec *hisi_sec = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hisi_sec->qm;
- struct device *dev = &pdev->dev;
- u32 delay = 0;
- int ret;
-
- hisi_sec_set_hw_error(hisi_sec, SEC_HW_ERROR_IRQ_DISABLE);
-
- while (hisi_sec_check_hw_error(hisi_sec)) {
- msleep(++delay);
- if (delay > SEC_RESET_WAIT_TIMEOUT)
- return;
- }
-
- ret = hisi_sec_reset_prepare_ready(hisi_sec);
- if (ret) {
- dev_err(dev, "FLR not ready!\n");
- return;
- }
-
- ret = hisi_sec_vf_reset_prepare(pdev, QM_FLR);
- if (ret) {
- dev_err(dev, "Fails to prepare reset!\n");
- return;
- }
-
- ret = hisi_qm_stop(qm, QM_FLR);
- if (ret) {
- dev_err(dev, "Fails to stop QM!\n");
- return;
- }
-
- dev_info(dev, "FLR resetting...\n");
-}
-
-static void hisi_sec_flr_reset_complete(struct pci_dev *pdev)
-{
- struct pci_dev *pf_pdev = pci_physfn(pdev);
- struct hisi_sec *hisi_sec = pci_get_drvdata(pf_pdev);
- struct device *dev = &hisi_sec->qm.pdev->dev;
- u32 id;
-
- pci_read_config_dword(hisi_sec->qm.pdev, PCI_COMMAND, &id);
- if (id == SEC_PCI_COMMAND_INVALID)
- dev_err(dev, "Device can not be used!\n");
-
- clear_bit(HISI_SEC_RESET, &hisi_sec->status);
-}
-
-static void hisi_sec_reset_done(struct pci_dev *pdev)
-{
- struct hisi_sec *hisi_sec = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hisi_sec->qm;
- struct device *dev = &pdev->dev;
- int ret;
-
- hisi_sec_set_hw_error(hisi_sec, SEC_HW_ERROR_IRQ_ENABLE);
-
- ret = hisi_qm_restart(qm);
- if (ret) {
- dev_err(dev, "Failed to start QM!\n");
- goto flr_done;
- }
-
- if (pdev->is_physfn) {
- hisi_sec_set_user_domain_and_cache(hisi_sec);
- if (hisi_sec->ctrl->num_vfs) {
- ret = hisi_sec_vf_q_assign(hisi_sec,
- hisi_sec->ctrl->num_vfs);
- if (ret) {
- dev_err(dev, "Failed to assign vf queue\n");
- goto flr_done;
- }
- }
-
- ret = hisi_sec_vf_reset_done(pdev);
- if (ret) {
- dev_err(dev, "Failed to reset vf\n");
- goto flr_done;
- }
- }
-
-flr_done:
- hisi_sec_flr_reset_complete(pdev);
-
- dev_info(dev, "FLR reset complete\n");
}
static const struct pci_error_handlers hisi_sec_err_handler = {
- .error_detected = hisi_sec_error_detected,
- .slot_reset = hisi_sec_slot_reset,
- .reset_prepare = hisi_sec_reset_prepare,
- .reset_done = hisi_sec_reset_done,
+ .error_detected = hisi_qm_dev_err_detected,
+ .slot_reset = hisi_qm_dev_slot_reset,
+ .reset_prepare = hisi_qm_reset_prepare,
+ .reset_done = hisi_qm_reset_done,
};
static struct pci_driver hisi_sec_pci_driver = {
@@ -1729,7 +1051,7 @@ static void hisi_sec_reset_done(struct pci_dev *pdev)
.remove = hisi_sec_remove,
.sriov_configure = hisi_sec_sriov_configure,
.err_handler = &hisi_sec_err_handler,
- .shutdown = hisi_sec_shutdown,
+ .shutdown = hisi_qm_dev_shutdown,
};
static void hisi_sec_register_debugfs(void)
@@ -1759,35 +1081,25 @@ static int __init hisi_sec_init(void)
return -ENOMEM;
}
+ INIT_LIST_HEAD(&sec_devices.list);
+ mutex_init(&sec_devices.lock);
+ sec_devices.check = NULL;
+
hisi_sec_register_debugfs();
ret = pci_register_driver(&hisi_sec_pci_driver);
if (ret < 0) {
+ hisi_sec_unregister_debugfs();
+ if (sec_wq)
+ destroy_workqueue(sec_wq);
pr_err("Failed to register pci driver.\n");
- goto err_pci;
- }
-
- pr_info("hisi_sec: register to crypto\n");
- ret = hisi_sec_register_to_crypto(fusion_limit);
- if (ret < 0) {
- pr_err("Failed to register driver to crypto.\n");
- goto err_probe_device;
}
- return 0;
-
- err_probe_device:
- pci_unregister_driver(&hisi_sec_pci_driver);
- err_pci:
- hisi_sec_unregister_debugfs();
- if (sec_wq)
- destroy_workqueue(sec_wq);
return ret;
}
static void __exit hisi_sec_exit(void)
{
- hisi_sec_unregister_from_crypto(fusion_limit);
pci_unregister_driver(&hisi_sec_pci_driver);
hisi_sec_unregister_debugfs();
if (sec_wq)
diff --git a/drivers/crypto/hisilicon/zip/zip.h b/drivers/crypto/hisilicon/zip/zip.h
index 560751a..ddd5924 100644
--- a/drivers/crypto/hisilicon/zip/zip.h
+++ b/drivers/crypto/hisilicon/zip/zip.h
@@ -18,19 +18,12 @@ enum hisi_zip_error_type {
};
struct hisi_zip_ctrl;
-
-enum hisi_zip_status {
- HISI_ZIP_RESET,
-};
-
struct hisi_zip {
struct hisi_qm qm;
- struct list_head list;
struct hisi_zip_ctrl *ctrl;
- unsigned long status;
};
-struct hisi_zip *find_zip_device(int node);
+int zip_create_qps(struct hisi_qp **qps, int ctx_num);
int hisi_zip_register_to_crypto(void);
void hisi_zip_unregister_from_crypto(void);
#endif
diff --git a/drivers/crypto/hisilicon/zip/zip_crypto.c b/drivers/crypto/hisilicon/zip/zip_crypto.c
index b2965ba..b247021 100644
--- a/drivers/crypto/hisilicon/zip/zip_crypto.c
+++ b/drivers/crypto/hisilicon/zip/zip_crypto.c
@@ -153,26 +153,19 @@ static void hisi_zip_fill_sqe(struct hisi_zip_sqe *sqe, u8 req_type,
sqe->dest_addr_h = upper_32_bits(d_addr);
}
-static int hisi_zip_create_qp(struct hisi_qm *qm, struct hisi_zip_qp_ctx *ctx,
+static int hisi_zip_start_qp(struct hisi_qp *qp, struct hisi_zip_qp_ctx *ctx,
int alg_type, int req_type)
{
- struct device *dev = &qm->pdev->dev;
- struct hisi_qp *qp;
+ struct device *dev = &qp->qm->pdev->dev;
int ret;
- qp = hisi_qm_create_qp(qm, alg_type);
- if (IS_ERR(qp)) {
- dev_err(dev, "create qp failed!\n");
- return PTR_ERR(qp);
- }
-
qp->req_type = req_type;
+ qp->alg_type = alg_type;
qp->qp_ctx = ctx;
ret = hisi_qm_start_qp(qp, 0);
if (ret < 0) {
dev_err(dev, "start qp failed!\n");
- hisi_qm_release_qp(qp);
return ret;
}
@@ -188,26 +181,27 @@ static void hisi_zip_release_qp(struct hisi_zip_qp_ctx *ctx)
static int hisi_zip_ctx_init(struct hisi_zip_ctx *hisi_zip_ctx, u8 req_type)
{
+ struct hisi_qp *qps[HZIP_CTX_Q_NUM] = { NULL };
struct hisi_zip *hisi_zip;
- struct hisi_qm *qm;
int ret, i, j;
- /* find the proper zip device */
- hisi_zip = find_zip_device(cpu_to_node(smp_processor_id()));
- if (!hisi_zip) {
- pr_err("Failed to find a proper ZIP device!\n");
+ ret = zip_create_qps(qps, HZIP_CTX_Q_NUM);
+ if (ret) {
+ pr_err("Can not create zip qps!\n");
return -ENODEV;
}
- qm = &hisi_zip->qm;
+
+ hisi_zip = container_of(qps[0]->qm, struct hisi_zip, qm);
for (i = 0; i < HZIP_CTX_Q_NUM; i++) {
/* alg_type = 0 for compress, 1 for decompress in hw sqe */
- ret = hisi_zip_create_qp(qm, &hisi_zip_ctx->qp_ctx[i], i,
+ ret = hisi_zip_start_qp(qps[i], &hisi_zip_ctx->qp_ctx[i], i,
req_type);
if (ret) {
for (j = i - 1; j >= 0; j--)
- hisi_zip_release_qp(&hisi_zip_ctx->qp_ctx[j]);
+ hisi_qm_stop_qp(hisi_zip_ctx->qp_ctx[j].qp);
+ hisi_qm_free_qps(qps, HZIP_CTX_Q_NUM);
return ret;
}
diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
index 5e40fbf..54681dc 100644
--- a/drivers/crypto/hisilicon/zip/zip_main.c
+++ b/drivers/crypto/hisilicon/zip/zip_main.c
@@ -15,7 +15,6 @@
#include <linux/uacce.h>
#include "zip.h"
-#define HZIP_VF_NUM 63
#define HZIP_QUEUE_NUM_V1 4096
#define HZIP_QUEUE_NUM_V2 1024
@@ -75,7 +74,6 @@
#define HZIP_CORE_SRAM_ECC_ERR_INFO 0x301148
#define HZIP_CORE_INT_RAS_CE_ENB 0x301160
#define HZIP_CORE_INT_RAS_NFE_ENB 0x301164
-#define HZIP_RAS_NFE_MBIT_DISABLE ~HZIP_CORE_INT_STATUS_M_ECC
#define HZIP_CORE_INT_RAS_FE_ENB 0x301168
#define HZIP_CORE_INT_RAS_NFE_ENABLE 0x7FE
#define HZIP_SRAM_ECC_ERR_NUM_SHIFT 16
@@ -95,95 +93,14 @@
#define HZIP_SOFT_CTRL_ZIP_CONTROL 0x30100C
#define HZIP_AXI_SHUTDOWN_ENABLE BIT(14)
#define HZIP_AXI_SHUTDOWN_DISABLE 0xFFFFBFFF
-#define HZIP_WR_MSI_PORT 0xF7FF
+#define HZIP_WR_PORT BIT(11)
-#define HZIP_ENABLE 1
-#define HZIP_DISABLE 0
-#define HZIP_NUMA_DISTANCE 100
#define HZIP_BUF_SIZE 22
#define FORMAT_DECIMAL 10
-#define HZIP_REG_RD_INTVRL_US 10
-#define HZIP_REG_RD_TMOUT_US 1000
-#define HZIP_RESET_WAIT_TIMEOUT 400
-#define HZIP_PCI_COMMAND_INVALID 0xFFFFFFFF
-
-#define FROZEN_RANGE_MIN 10
-#define FROZEN_RANGE_MAX 20
static const char hisi_zip_name[] = "hisi_zip";
static struct dentry *hzip_debugfs_root;
-static LIST_HEAD(hisi_zip_list);
-static DEFINE_MUTEX(hisi_zip_list_lock);
-
-struct hisi_zip_resource {
- struct hisi_zip *hzip;
- int distance;
- struct list_head list;
-};
-
-static void free_list(struct list_head *head)
-{
- struct hisi_zip_resource *res, *tmp;
-
- list_for_each_entry_safe(res, tmp, head, list) {
- list_del(&res->list);
- kfree(res);
- }
-}
-
-struct hisi_zip *find_zip_device(int node)
-{
- struct hisi_zip *ret = NULL;
-#ifdef CONFIG_NUMA
- struct hisi_zip_resource *res, *tmp;
- struct hisi_zip *hisi_zip;
- struct list_head *n;
- struct device *dev;
- LIST_HEAD(head);
-
- mutex_lock(&hisi_zip_list_lock);
-
- list_for_each_entry(hisi_zip, &hisi_zip_list, list) {
- res = kzalloc(sizeof(*res), GFP_KERNEL);
- if (!res)
- goto err;
-
- dev = &hisi_zip->qm.pdev->dev;
- res->hzip = hisi_zip;
- res->distance = node_distance(dev->numa_node, node);
-
- n = &head;
- list_for_each_entry(tmp, &head, list) {
- if (res->distance < tmp->distance) {
- n = &tmp->list;
- break;
- }
- }
- list_add_tail(&res->list, n);
- }
-
- list_for_each_entry(tmp, &head, list) {
- if (hisi_qm_get_free_qp_num(&tmp->hzip->qm)) {
- ret = tmp->hzip;
- break;
- }
- }
-
- free_list(&head);
-#else
- mutex_lock(&hisi_zip_list_lock);
-
- ret = list_first_entry(&hisi_zip_list, struct hisi_zip, list);
-#endif
- mutex_unlock(&hisi_zip_list_lock);
-
- return ret;
-
-err:
- free_list(&head);
- mutex_unlock(&hisi_zip_list_lock);
- return NULL;
-}
+static struct hisi_qm_list zip_devices;
struct hisi_zip_hw_error {
u32 int_msk;
@@ -229,9 +146,7 @@ struct ctrl_debug_file {
* Just relevant for PF.
*/
struct hisi_zip_ctrl {
- u32 num_vfs;
struct hisi_zip *hisi_zip;
- struct dentry *debug_root;
struct ctrl_debug_file files[HZIP_DEBUG_FILE_NUM];
};
@@ -282,73 +197,49 @@ enum {
{"HZIP_DECOMP_LZ77_CURR_ST ", 0x9cull},
};
-static int pf_q_num_set(const char *val, const struct kernel_param *kp)
+#ifdef CONFIG_CRYPTO_QM_UACCE
+static int uacce_mode_set(const char *val, const struct kernel_param *kp)
{
- struct pci_dev *pdev = pci_get_device(PCI_VENDOR_ID_HUAWEI,
- PCI_DEVICE_ID_ZIP_PF, NULL);
- u32 n, q_num;
- u8 rev_id;
- int ret;
-
- if (!val)
- return -EINVAL;
+ return mode_set(val, kp);
+}
- if (!pdev) {
- q_num = min_t(u32, HZIP_QUEUE_NUM_V1, HZIP_QUEUE_NUM_V2);
- pr_info("No device found currently, suppose queue number is %d\n",
- q_num);
- } else {
- rev_id = pdev->revision;
- switch (rev_id) {
- case QM_HW_V1:
- q_num = HZIP_QUEUE_NUM_V1;
- break;
- case QM_HW_V2:
- q_num = HZIP_QUEUE_NUM_V2;
- break;
- default:
- return -EINVAL;
- }
- }
+static const struct kernel_param_ops uacce_mode_ops = {
+ .set = uacce_mode_set,
+ .get = param_get_int,
+};
- ret = kstrtou32(val, 10, &n);
- if (ret != 0 || n == 0 || n > q_num)
- return -EINVAL;
+static int uacce_mode = UACCE_MODE_NOUACCE;
+module_param_cb(uacce_mode, &uacce_mode_ops, &uacce_mode, 0444);
+MODULE_PARM_DESC(uacce_mode, "Mode of UACCE can be 0(default), 2");
+#endif
- return param_set_int(val, kp);
+static int pf_q_num_set(const char *val, const struct kernel_param *kp)
+{
+ return q_num_set(val, kp, PCI_DEVICE_ID_ZIP_PF);
}
static const struct kernel_param_ops pf_q_num_ops = {
.set = pf_q_num_set,
.get = param_get_int,
};
-static int uacce_mode_set(const char *val, const struct kernel_param *kp)
-{
- u32 n;
- int ret;
-
- if (!val)
- return -EINVAL;
- ret = kstrtou32(val, FORMAT_DECIMAL, &n);
- if (ret != 0 || (n != UACCE_MODE_NOIOMMU && n != UACCE_MODE_NOUACCE))
- return -EINVAL;
+static u32 pf_q_num = HZIP_PF_DEF_Q_NUM;
+module_param_cb(pf_q_num, &pf_q_num_ops, &pf_q_num, 0444);
+MODULE_PARM_DESC(pf_q_num, "Number of queues in PF(v1 1-4096, v2 1-1024)");
- return param_set_int(val, kp);
+static int vfs_num_set(const char *val, const struct kernel_param *kp)
+{
+ return vf_num_set(val, kp);
}
-static const struct kernel_param_ops uacce_mode_ops = {
- .set = uacce_mode_set,
+static const struct kernel_param_ops vfs_num_ops = {
+ .set = vfs_num_set,
.get = param_get_int,
};
-static u32 pf_q_num = HZIP_PF_DEF_Q_NUM;
-module_param_cb(pf_q_num, &pf_q_num_ops, &pf_q_num, 0444);
-MODULE_PARM_DESC(pf_q_num, "Number of queues in PF(v1 1-4096, v2 1-1024)");
-
-static int uacce_mode = UACCE_MODE_NOUACCE;
-module_param_cb(uacce_mode, &uacce_mode_ops, &uacce_mode, 0444);
-MODULE_PARM_DESC(uacce_mode, "Mode of UACCE can be 0(default), 2");
+static u32 vfs_num;
+module_param_cb(vfs_num, &vfs_num_ops, &vfs_num, 0444);
+MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)");
static const struct pci_device_id hisi_zip_dev_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_ZIP_PF) },
@@ -357,81 +248,67 @@ static int uacce_mode_set(const char *val, const struct kernel_param *kp)
};
MODULE_DEVICE_TABLE(pci, hisi_zip_dev_ids);
-static inline void hisi_zip_add_to_list(struct hisi_zip *hisi_zip)
+int zip_create_qps(struct hisi_qp **qps, int ctx_num)
{
- mutex_lock(&hisi_zip_list_lock);
- list_add_tail(&hisi_zip->list, &hisi_zip_list);
- mutex_unlock(&hisi_zip_list_lock);
-}
+ int node = cpu_to_node(smp_processor_id());
-static inline void hisi_zip_remove_from_list(struct hisi_zip *hisi_zip)
-{
- mutex_lock(&hisi_zip_list_lock);
- list_del(&hisi_zip->list);
- mutex_unlock(&hisi_zip_list_lock);
+ return hisi_qm_alloc_qps_node(node, &zip_devices,
+ qps, ctx_num, 0);
}
-static void hisi_zip_set_user_domain_and_cache(struct hisi_zip *hisi_zip)
+static int hisi_zip_set_user_domain_and_cache(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_zip->qm;
+ void __iomem *base = qm->io_base;
/* qm user domain */
- writel(AXUSER_BASE, hisi_zip->qm.io_base + QM_ARUSER_M_CFG_1);
- writel(ARUSER_M_CFG_ENABLE, hisi_zip->qm.io_base +
- QM_ARUSER_M_CFG_ENABLE);
- writel(AXUSER_BASE, hisi_zip->qm.io_base + QM_AWUSER_M_CFG_1);
- writel(AWUSER_M_CFG_ENABLE, hisi_zip->qm.io_base +
- QM_AWUSER_M_CFG_ENABLE);
- writel(WUSER_M_CFG_ENABLE, hisi_zip->qm.io_base +
- QM_WUSER_M_CFG_ENABLE);
+ writel(AXUSER_BASE, base + QM_ARUSER_M_CFG_1);
+ writel(ARUSER_M_CFG_ENABLE, base + QM_ARUSER_M_CFG_ENABLE);
+ writel(AXUSER_BASE, base + QM_AWUSER_M_CFG_1);
+ writel(AWUSER_M_CFG_ENABLE, base + QM_AWUSER_M_CFG_ENABLE);
+ writel(WUSER_M_CFG_ENABLE, base + QM_WUSER_M_CFG_ENABLE);
/* qm cache */
- writel(AXI_M_CFG, hisi_zip->qm.io_base + QM_AXI_M_CFG);
- writel(AXI_M_CFG_ENABLE, hisi_zip->qm.io_base + QM_AXI_M_CFG_ENABLE);
+ writel(AXI_M_CFG, base + QM_AXI_M_CFG);
+ writel(AXI_M_CFG_ENABLE, base + QM_AXI_M_CFG_ENABLE);
+
/* disable FLR triggered by BME(bus master enable) */
- writel(PEH_AXUSER_CFG, hisi_zip->qm.io_base + QM_PEH_AXUSER_CFG);
- writel(PEH_AXUSER_CFG_ENABLE, hisi_zip->qm.io_base +
- QM_PEH_AXUSER_CFG_ENABLE);
+ writel(PEH_AXUSER_CFG, base + QM_PEH_AXUSER_CFG);
+ writel(PEH_AXUSER_CFG_ENABLE, base + QM_PEH_AXUSER_CFG_ENABLE);
/* cache */
- writel(HZIP_CACHE_ALL_EN, hisi_zip->qm.io_base + HZIP_PORT_ARCA_CHE_0);
- writel(HZIP_CACHE_ALL_EN, hisi_zip->qm.io_base + HZIP_PORT_ARCA_CHE_1);
- writel(HZIP_CACHE_ALL_EN, hisi_zip->qm.io_base + HZIP_PORT_AWCA_CHE_0);
- writel(HZIP_CACHE_ALL_EN, hisi_zip->qm.io_base + HZIP_PORT_AWCA_CHE_1);
+ writel(HZIP_CACHE_ALL_EN, base + HZIP_PORT_ARCA_CHE_0);
+ writel(HZIP_CACHE_ALL_EN, base + HZIP_PORT_ARCA_CHE_1);
+ writel(HZIP_CACHE_ALL_EN, base + HZIP_PORT_AWCA_CHE_0);
+ writel(HZIP_CACHE_ALL_EN, base + HZIP_PORT_AWCA_CHE_1);
/* user domain configurations */
- writel(AXUSER_BASE, hisi_zip->qm.io_base + HZIP_BD_RUSER_32_63);
- writel(AXUSER_BASE, hisi_zip->qm.io_base + HZIP_SGL_RUSER_32_63);
- writel(AXUSER_BASE, hisi_zip->qm.io_base + HZIP_BD_WUSER_32_63);
+ writel(AXUSER_BASE, base + HZIP_BD_RUSER_32_63);
+ writel(AXUSER_BASE, base + HZIP_SGL_RUSER_32_63);
+ writel(AXUSER_BASE, base + HZIP_BD_WUSER_32_63);
if (qm->use_sva) {
- writel(AXUSER_BASE | AXUSER_SSV, hisi_zip->qm.io_base +
- HZIP_DATA_RUSER_32_63);
- writel(AXUSER_BASE | AXUSER_SSV, hisi_zip->qm.io_base +
- HZIP_DATA_WUSER_32_63);
+ writel(AXUSER_BASE | AXUSER_SSV, base + HZIP_DATA_RUSER_32_63);
+ writel(AXUSER_BASE | AXUSER_SSV, base + HZIP_DATA_WUSER_32_63);
} else {
- writel(AXUSER_BASE, hisi_zip->qm.io_base +
- HZIP_DATA_RUSER_32_63);
- writel(AXUSER_BASE, hisi_zip->qm.io_base +
- HZIP_DATA_WUSER_32_63);
+ writel(AXUSER_BASE, base + HZIP_DATA_RUSER_32_63);
+ writel(AXUSER_BASE, base + HZIP_DATA_WUSER_32_63);
}
/* let's open all compression/decompression cores */
writel(HZIP_DECOMP_CHECK_ENABLE | HZIP_ALL_COMP_DECOMP_EN,
- hisi_zip->qm.io_base + HZIP_CLOCK_GATE_CTRL);
+ base + HZIP_CLOCK_GATE_CTRL);
/* enable sqc,cqc writeback */
writel(SQC_CACHE_ENABLE | CQC_CACHE_ENABLE | SQC_CACHE_WB_ENABLE |
CQC_CACHE_WB_ENABLE | FIELD_PREP(SQC_CACHE_WB_THRD, 1) |
- FIELD_PREP(CQC_CACHE_WB_THRD, 1),
- hisi_zip->qm.io_base + QM_CACHE_CTL);
+ FIELD_PREP(CQC_CACHE_WB_THRD, 1), base + QM_CACHE_CTL);
+
+ return 0;
}
/* hisi_zip_debug_regs_clear() - clear the zip debug regs */
-static void hisi_zip_debug_regs_clear(struct hisi_zip *hisi_zip)
+static void hisi_zip_debug_regs_clear(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_zip->qm;
-
/* clear current_qm */
writel(0x0, qm->io_base + QM_DFX_MB_CNT_VF);
writel(0x0, qm->io_base + QM_DFX_DB_CNT_VF);
@@ -442,52 +319,70 @@ static void hisi_zip_debug_regs_clear(struct hisi_zip *hisi_zip)
hisi_qm_debug_regs_clear(qm);
}
-
-static void hisi_zip_hw_error_set_state(struct hisi_zip *hisi_zip, bool state)
+static int hisi_zip_hw_err_pre_set(struct hisi_qm *qm, u32 *val)
{
- struct hisi_qm *qm = &hisi_zip->qm;
- u32 val;
-
if (qm->ver == QM_HW_V1) {
writel(HZIP_CORE_INT_DISABLE, qm->io_base + HZIP_CORE_INT_MASK);
pci_info(qm->pdev, "ZIP v%d cannot support hw error handle!\n",
qm->ver);
- return;
+ return -EINVAL;
}
/* configure error type */
- writel(0x1, hisi_zip->qm.io_base + HZIP_CORE_INT_RAS_CE_ENB);
- writel(0x0, hisi_zip->qm.io_base + HZIP_CORE_INT_RAS_FE_ENB);
+ writel(0x1, qm->io_base + HZIP_CORE_INT_RAS_CE_ENB);
+ writel(0x0, qm->io_base + HZIP_CORE_INT_RAS_FE_ENB);
writel(HZIP_CORE_INT_RAS_NFE_ENABLE,
- hisi_zip->qm.io_base + HZIP_CORE_INT_RAS_NFE_ENB);
-
- val = readl(hisi_zip->qm.io_base + HZIP_SOFT_CTRL_ZIP_CONTROL);
- if (state) {
- /* clear ZIP hw error source if having */
- writel(HZIP_CORE_INT_DISABLE, hisi_zip->qm.io_base +
- HZIP_CORE_INT_SOURCE);
- /* enable ZIP hw error interrupts */
- writel(0, hisi_zip->qm.io_base + HZIP_CORE_INT_MASK);
-
- /* enable ZIP block master OOO when m-bit error occur */
- val = val | HZIP_AXI_SHUTDOWN_ENABLE;
- } else {
- /* disable ZIP hw error interrupts */
- writel(HZIP_CORE_INT_DISABLE,
- hisi_zip->qm.io_base + HZIP_CORE_INT_MASK);
+ qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
- /* disable ZIP block master OOO when m-bit error occur */
- val = val & HZIP_AXI_SHUTDOWN_DISABLE;
- }
+ *val = readl(qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL);
+
+ return 0;
+}
+
+static void hisi_zip_hw_error_enable(struct hisi_qm *qm)
+{
+ u32 val;
+ int ret;
+
+ ret = hisi_zip_hw_err_pre_set(qm, &val);
+ if (ret)
+ return;
+
+ /* clear ZIP hw error source if having */
+ writel(HZIP_CORE_INT_DISABLE, qm->io_base + HZIP_CORE_INT_SOURCE);
+
+ /* enable ZIP hw error interrupts */
+ writel(0, qm->io_base + HZIP_CORE_INT_MASK);
+
+ /* enable ZIP block master OOO when m-bit error occur */
+ val = val | HZIP_AXI_SHUTDOWN_ENABLE;
- writel(val, hisi_zip->qm.io_base + HZIP_SOFT_CTRL_ZIP_CONTROL);
+ writel(val, qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL);
+}
+
+static void hisi_zip_hw_error_disable(struct hisi_qm *qm)
+{
+ u32 val;
+ int ret;
+
+ ret = hisi_zip_hw_err_pre_set(qm, &val);
+ if (ret)
+ return;
+
+ /* disable ZIP hw error interrupts */
+ writel(HZIP_CORE_INT_DISABLE, qm->io_base + HZIP_CORE_INT_MASK);
+
+ /* disable ZIP block master OOO when m-bit error occur */
+ val = val & HZIP_AXI_SHUTDOWN_DISABLE;
+
+ writel(val, qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL);
}
static inline struct hisi_qm *file_to_qm(struct ctrl_debug_file *file)
{
- struct hisi_zip *hisi_zip = file->ctrl->hisi_zip;
+ struct hisi_zip *zip = file->ctrl->hisi_zip;
- return &hisi_zip->qm;
+ return &zip->qm;
}
static u32 current_qm_read(struct ctrl_debug_file *file)
@@ -500,22 +395,21 @@ static u32 current_qm_read(struct ctrl_debug_file *file)
static int current_qm_write(struct ctrl_debug_file *file, u32 val)
{
struct hisi_qm *qm = file_to_qm(file);
- struct hisi_zip_ctrl *ctrl = file->ctrl;
u32 vfq_num;
u32 tmp;
- if (val > ctrl->num_vfs)
+ if (val > qm->vfs_num)
return -EINVAL;
/* According PF or VF Dev ID to calculation curr_qm_qp_num and store */
if (val == 0) {
qm->debug.curr_qm_qp_num = qm->qp_num;
} else {
- vfq_num = (qm->ctrl_q_num - qm->qp_num) / ctrl->num_vfs;
- if (val == ctrl->num_vfs) {
+ vfq_num = (qm->ctrl_q_num - qm->qp_num) / qm->vfs_num;
+ if (val == qm->vfs_num) {
qm->debug.curr_qm_qp_num =
qm->ctrl_q_num - qm->qp_num -
- (ctrl->num_vfs - 1) * vfq_num;
+ (qm->vfs_num - 1) * vfq_num;
} else {
qm->debug.curr_qm_qp_num = vfq_num;
}
@@ -638,10 +532,8 @@ static ssize_t hisi_zip_ctrl_debug_write(struct file *filp,
.write = hisi_zip_ctrl_debug_write,
};
-static int hisi_zip_core_debug_init(struct hisi_zip_ctrl *ctrl)
+static int hisi_zip_core_debug_init(struct hisi_qm *qm)
{
- struct hisi_zip *hisi_zip = ctrl->hisi_zip;
- struct hisi_qm *qm = &hisi_zip->qm;
struct device *dev = &qm->pdev->dev;
struct debugfs_regset32 *regset;
struct dentry *tmp_d, *tmp;
@@ -657,7 +549,7 @@ static int hisi_zip_core_debug_init(struct hisi_zip_ctrl *ctrl)
if (ret < 0)
return -EINVAL;
- tmp_d = debugfs_create_dir(buf, ctrl->debug_root);
+ tmp_d = debugfs_create_dir(buf, qm->debug.debug_root);
if (!tmp_d)
return -ENOENT;
@@ -677,29 +569,29 @@ static int hisi_zip_core_debug_init(struct hisi_zip_ctrl *ctrl)
return 0;
}
-static int hisi_zip_ctrl_debug_init(struct hisi_zip_ctrl *ctrl)
+static int hisi_zip_ctrl_debug_init(struct hisi_qm *qm)
{
+ struct hisi_zip *zip = container_of(qm, struct hisi_zip, qm);
struct dentry *tmp;
int i;
for (i = HZIP_CURRENT_QM; i < HZIP_DEBUG_FILE_NUM; i++) {
- spin_lock_init(&ctrl->files[i].lock);
- ctrl->files[i].ctrl = ctrl;
- ctrl->files[i].index = i;
+ spin_lock_init(&zip->ctrl->files[i].lock);
+ zip->ctrl->files[i].ctrl = zip->ctrl;
+ zip->ctrl->files[i].index = i;
tmp = debugfs_create_file(ctrl_debug_file_name[i], 0600,
- ctrl->debug_root, ctrl->files + i,
+ qm->debug.debug_root, zip->ctrl->files + i,
&ctrl_debug_fops);
if (!tmp)
return -ENOENT;
}
- return hisi_zip_core_debug_init(ctrl);
+ return hisi_zip_core_debug_init(qm);
}
-static int hisi_zip_debugfs_init(struct hisi_zip *hisi_zip)
+static int hisi_zip_debugfs_init(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_zip->qm;
struct device *dev = &qm->pdev->dev;
struct dentry *dev_d;
int ret;
@@ -714,8 +606,7 @@ static int hisi_zip_debugfs_init(struct hisi_zip *hisi_zip)
goto failed_to_create;
if (qm->fun_type == QM_HW_PF) {
- hisi_zip->ctrl->debug_root = dev_d;
- ret = hisi_zip_ctrl_debug_init(hisi_zip->ctrl);
+ ret = hisi_zip_ctrl_debug_init(qm);
if (ret)
goto failed_to_create;
}
@@ -727,47 +618,16 @@ static int hisi_zip_debugfs_init(struct hisi_zip *hisi_zip)
return ret;
}
-static void hisi_zip_debugfs_exit(struct hisi_zip *hisi_zip)
+static void hisi_zip_debugfs_exit(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_zip->qm;
-
debugfs_remove_recursive(qm->debug.debug_root);
if (qm->fun_type == QM_HW_PF) {
- hisi_zip_debug_regs_clear(hisi_zip);
+ hisi_zip_debug_regs_clear(qm);
qm->debug.curr_qm_qp_num = 0;
}
}
-static void hisi_zip_hw_error_init(struct hisi_zip *hisi_zip)
-{
- hisi_qm_hw_error_init(&hisi_zip->qm, QM_BASE_CE,
- QM_BASE_NFE | QM_ACC_WB_NOT_READY_TIMEOUT, 0,
- QM_DB_RANDOM_INVALID);
- hisi_zip_hw_error_set_state(hisi_zip, true);
-}
-
-static u32 hisi_zip_get_hw_err_status(struct hisi_qm *qm)
-{
- return readl(qm->io_base + HZIP_CORE_INT_STATUS);
-}
-
-static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
-{
- writel(err_sts, qm->io_base + HZIP_CORE_INT_SOURCE);
-}
-
-static void hisi_zip_set_ecc(struct hisi_qm *qm)
-{
- u32 nfe_enb;
-
- nfe_enb = readl(qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
- writel(nfe_enb & HZIP_RAS_NFE_MBIT_DISABLE,
- qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
- writel(HZIP_CORE_INT_STATUS_M_ECC, qm->io_base + HZIP_CORE_INT_SET);
- qm->err_ini.is_dev_ecc_mbit = 1;
-}
-
static void hisi_zip_log_hw_error(struct hisi_qm *qm, u32 err_sts)
{
const struct hisi_zip_hw_error *err = zip_hw_error;
@@ -792,17 +652,53 @@ static void hisi_zip_log_hw_error(struct hisi_qm *qm, u32 err_sts)
}
}
-static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
+static u32 hisi_zip_get_hw_err_status(struct hisi_qm *qm)
+{
+ return readl(qm->io_base + HZIP_CORE_INT_STATUS);
+}
+
+static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+{
+ writel(err_sts, qm->io_base + HZIP_CORE_INT_SOURCE);
+}
+
+static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm)
{
- struct hisi_qm *qm = &hisi_zip->qm;
+ u32 val;
+
+ val = readl(qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL);
+
+ writel(val & HZIP_AXI_SHUTDOWN_DISABLE,
+ qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL);
+
+ writel(val | HZIP_AXI_SHUTDOWN_ENABLE,
+ qm->io_base + HZIP_SOFT_CTRL_ZIP_CONTROL);
+}
+
+static void hisi_zip_close_axi_master_ooo(struct hisi_qm *qm)
+{
+ u32 nfe_enb;
+
+ nfe_enb = readl(qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
+ writel(nfe_enb & ~HZIP_CORE_INT_STATUS_M_ECC,
+ qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
+
+ writel(HZIP_CORE_INT_STATUS_M_ECC,
+ qm->io_base + HZIP_CORE_INT_SET);
+}
+
+static int hisi_zip_pf_probe_init(struct hisi_qm *qm)
+{
+ struct hisi_zip *zip = container_of(qm, struct hisi_zip, qm);
struct hisi_zip_ctrl *ctrl;
+ int ret;
ctrl = devm_kzalloc(&qm->pdev->dev, sizeof(*ctrl), GFP_KERNEL);
if (!ctrl)
return -ENOMEM;
- hisi_zip->ctrl = ctrl;
- ctrl->hisi_zip = hisi_zip;
+ zip->ctrl = ctrl;
+ ctrl->hisi_zip = zip;
switch (qm->ver) {
case QM_HW_V1:
@@ -817,61 +713,71 @@ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
return -EINVAL;
}
- qm->err_ini.qm_wr_port = HZIP_WR_MSI_PORT;
- qm->err_ini.ecc_2bits_mask = HZIP_CORE_INT_STATUS_M_ECC;
qm->err_ini.get_dev_hw_err_status = hisi_zip_get_hw_err_status;
qm->err_ini.clear_dev_hw_err_status = hisi_zip_clear_hw_err_status;
+ qm->err_ini.err_info.ecc_2bits_mask = HZIP_CORE_INT_STATUS_M_ECC;
+ qm->err_ini.err_info.ce = QM_BASE_CE;
+ qm->err_ini.err_info.nfe = QM_BASE_NFE | QM_ACC_WB_NOT_READY_TIMEOUT;
+ qm->err_ini.err_info.fe = 0;
+ qm->err_ini.err_info.msi = QM_DB_RANDOM_INVALID;
+ qm->err_ini.err_info.acpi_rst = "ZRST";
+ qm->err_ini.hw_err_disable = hisi_zip_hw_error_disable;
+ qm->err_ini.hw_err_enable = hisi_zip_hw_error_enable;
+ qm->err_ini.set_usr_domain_cache = hisi_zip_set_user_domain_and_cache;
qm->err_ini.log_dev_hw_err = hisi_zip_log_hw_error;
- qm->err_ini.inject_dev_hw_err = hisi_zip_set_ecc;
+ qm->err_ini.open_axi_master_ooo = hisi_zip_open_axi_master_ooo;
+ qm->err_ini.close_axi_master_ooo = hisi_zip_close_axi_master_ooo;
- hisi_zip_set_user_domain_and_cache(hisi_zip);
- hisi_zip_hw_error_init(hisi_zip);
- hisi_zip_debug_regs_clear(hisi_zip);
+ qm->err_ini.err_info.msi_wr_port = HZIP_WR_PORT;
+
+ ret = qm->err_ini.set_usr_domain_cache(qm);
+ if (ret)
+ return ret;
+
+ hisi_qm_dev_err_init(qm);
+
+ hisi_zip_debug_regs_clear(qm);
+
+ return 0;
+}
+
+static int hisi_zip_qm_pre_init(struct hisi_qm *qm, struct pci_dev *pdev)
+{
+ int ret;
+
+#ifdef CONFIG_CRYPTO_QM_UACCE
+ qm->algs = "zlib\ngzip\nxts(sm4)\nxts(aes)\n";
+ qm->uacce_mode = uacce_mode;
+#endif
+ qm->pdev = pdev;
+ ret = hisi_qm_pre_init(qm, pf_q_num, HZIP_PF_DEF_Q_BASE);
+ if (ret)
+ return ret;
+ qm->sqe_size = HZIP_SQE_SIZE;
+ qm->dev_name = hisi_zip_name;
+ qm->qm_list = &zip_devices;
return 0;
}
static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
- struct hisi_zip *hisi_zip;
- enum qm_hw_ver rev_id;
+ struct hisi_zip *zip;
struct hisi_qm *qm;
int ret;
- rev_id = hisi_qm_get_hw_version(pdev);
- if (rev_id == QM_HW_UNKNOWN)
- return -EINVAL;
-
- hisi_zip = devm_kzalloc(&pdev->dev, sizeof(*hisi_zip), GFP_KERNEL);
- if (!hisi_zip)
+ zip = devm_kzalloc(&pdev->dev, sizeof(*zip), GFP_KERNEL);
+ if (!zip)
return -ENOMEM;
- pci_set_drvdata(pdev, hisi_zip);
-
- hisi_zip_add_to_list(hisi_zip);
+ qm = &zip->qm;
+ qm->fun_type = pdev->is_physfn ? QM_HW_PF : QM_HW_VF;
- hisi_zip->status = 0;
- qm = &hisi_zip->qm;
- qm->pdev = pdev;
- qm->ver = rev_id;
+ ret = hisi_zip_qm_pre_init(qm, pdev);
+ if (ret)
+ return ret;
- qm->sqe_size = HZIP_SQE_SIZE;
- qm->dev_name = hisi_zip_name;
- qm->fun_type = (pdev->device == PCI_DEVICE_ID_ZIP_PF) ? QM_HW_PF :
- QM_HW_VF;
- qm->algs = "zlib\ngzip\nxts(sm4)\nxts(aes)\n";
-
- switch (uacce_mode) {
- case UACCE_MODE_NOUACCE:
- qm->use_uacce = false;
- break;
- case UACCE_MODE_NOIOMMU:
- qm->use_uacce = true;
- break;
- default:
- ret = -EINVAL;
- goto err_remove_from_list;
- }
+ hisi_qm_add_to_list(qm, &zip_devices);
ret = hisi_qm_init(qm);
if (ret) {
@@ -880,15 +786,11 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
}
if (qm->fun_type == QM_HW_PF) {
- ret = hisi_zip_pf_probe_init(hisi_zip);
+ ret = hisi_zip_pf_probe_init(qm);
if (ret) {
pci_err(pdev, "Failed to init pf probe (%d)!\n", ret);
goto err_remove_from_list;
}
-
- qm->qp_base = HZIP_PF_DEF_Q_BASE;
- qm->qp_num = pf_q_num;
- qm->debug.curr_qm_qp_num = pf_q_num;
} else if (qm->fun_type == QM_HW_VF) {
/*
* have no way to get qm configure in VM in v1 hardware,
@@ -914,7 +816,7 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto err_qm_uninit;
}
- ret = hisi_zip_debugfs_init(hisi_zip);
+ ret = hisi_zip_debugfs_init(qm);
if (ret)
pci_err(pdev, "Failed to init debugfs (%d)!\n", ret);
@@ -923,630 +825,62 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
pci_err(pdev, "Failed to register driver to crypto!\n");
goto err_qm_stop;
}
+
+ if (qm->fun_type == QM_HW_PF && vfs_num > 0) {
+ ret = hisi_qm_sriov_enable(pdev, vfs_num);
+ if (ret < 0)
+ goto err_crypto_unregister;
+ }
+
return 0;
+err_crypto_unregister:
+ hisi_zip_unregister_from_crypto();
err_qm_stop:
- hisi_zip_debugfs_exit(hisi_zip);
+ hisi_zip_debugfs_exit(qm);
hisi_qm_stop(qm, QM_NORMAL);
err_qm_uninit:
hisi_qm_uninit(qm);
err_remove_from_list:
- hisi_zip_remove_from_list(hisi_zip);
- return ret;
-}
-
-/* now we only support equal assignment */
-static int hisi_zip_vf_q_assign(struct hisi_zip *hisi_zip, u32 num_vfs)
-{
- struct hisi_qm *qm = &hisi_zip->qm;
- u32 qp_num = qm->qp_num;
- u32 q_base = qp_num;
- u32 q_num, remain_q_num, i;
- int ret;
-
- if (!num_vfs)
- return -EINVAL;
-
- remain_q_num = qm->ctrl_q_num - qp_num;
- /* If remain queues not enough, return error. */
- if (remain_q_num < num_vfs)
- return -EINVAL;
-
- q_num = remain_q_num / num_vfs;
- for (i = 1; i <= num_vfs; i++) {
- if (i == num_vfs)
- q_num += remain_q_num % num_vfs;
- ret = hisi_qm_set_vft(qm, i, q_base, q_num);
- if (ret)
- return ret;
- q_base += q_num;
- }
-
- return 0;
-}
-
-static int hisi_zip_clear_vft_config(struct hisi_zip *hisi_zip)
-{
- struct hisi_zip_ctrl *ctrl = hisi_zip->ctrl;
- struct hisi_qm *qm = &hisi_zip->qm;
- u32 i, num_vfs = ctrl->num_vfs;
- int ret;
-
- for (i = 1; i <= num_vfs; i++) {
- ret = hisi_qm_set_vft(qm, i, 0, 0);
- if (ret)
- return ret;
- }
-
- ctrl->num_vfs = 0;
-
- return 0;
-}
-
-static int hisi_zip_sriov_enable(struct pci_dev *pdev, int max_vfs)
-{
-#ifdef CONFIG_PCI_IOV
- struct hisi_zip *hisi_zip = pci_get_drvdata(pdev);
- int pre_existing_vfs, num_vfs, ret;
-
- pre_existing_vfs = pci_num_vf(pdev);
- if (pre_existing_vfs) {
- dev_err(&pdev->dev,
- "Can't enable VF. Please disable pre-enabled VFs!\n");
- return 0;
- }
-
- num_vfs = min_t(int, max_vfs, HZIP_VF_NUM);
-
- ret = hisi_zip_vf_q_assign(hisi_zip, num_vfs);
- if (ret) {
- dev_err(&pdev->dev, "Can't assign queues for VF!\n");
- return ret;
- }
-
- hisi_zip->ctrl->num_vfs = num_vfs;
-
- ret = pci_enable_sriov(pdev, num_vfs);
- if (ret) {
- dev_err(&pdev->dev, "Can't enable VF!\n");
- hisi_zip_clear_vft_config(hisi_zip);
- return ret;
- }
-
- return num_vfs;
-#else
- return 0;
-#endif
-}
-
-static int hisi_zip_try_frozen_vfs(struct pci_dev *pdev)
-{
- struct hisi_zip *zip, *vf_zip;
- struct pci_dev *dev;
- int ret = 0;
-
- /* Try to frozen all the VFs as disable SRIOV */
- mutex_lock(&hisi_zip_list_lock);
- list_for_each_entry(zip, &hisi_zip_list, list) {
- dev = zip->qm.pdev;
- if (dev == pdev)
- continue;
- if (pci_physfn(dev) == pdev) {
- vf_zip = pci_get_drvdata(dev);
- ret = hisi_qm_frozen(&vf_zip->qm);
- if (ret)
- goto frozen_fail;
- }
- }
-
-frozen_fail:
- mutex_unlock(&hisi_zip_list_lock);
+ hisi_qm_del_from_list(qm, &zip_devices);
return ret;
}
-static int hisi_zip_sriov_disable(struct pci_dev *pdev)
-{
- struct hisi_zip *hisi_zip = pci_get_drvdata(pdev);
-
- if (pci_vfs_assigned(pdev)) {
- dev_err(&pdev->dev,
- "Can't disable VFs while VFs are assigned!\n");
- return -EPERM;
- }
-
- if (hisi_zip_try_frozen_vfs(pdev)) {
- dev_err(&pdev->dev, "try frozen VFs failed!\n");
- return -EBUSY;
- }
-
- /* remove in hisi_zip_pci_driver will be called to free VF resources */
- pci_disable_sriov(pdev);
-
- return hisi_zip_clear_vft_config(hisi_zip);
-}
-
static int hisi_zip_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
if (num_vfs == 0)
- return hisi_zip_sriov_disable(pdev);
+ return hisi_qm_sriov_disable(pdev, &zip_devices);
else
- return hisi_zip_sriov_enable(pdev, num_vfs);
-}
-
-static void hisi_zip_remove_wait_delay(struct hisi_zip *hisi_zip)
-{
- struct hisi_qm *qm = &hisi_zip->qm;
-
- while (hisi_qm_frozen(qm) || ((qm->fun_type == QM_HW_PF) &&
- hisi_zip_try_frozen_vfs(qm->pdev)))
- usleep_range(FROZEN_RANGE_MIN, FROZEN_RANGE_MAX);
-
- udelay(ZIP_WAIT_DELAY);
+ return hisi_qm_sriov_enable(pdev, num_vfs);
}
static void hisi_zip_remove(struct pci_dev *pdev)
{
- struct hisi_zip *hisi_zip = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hisi_zip->qm;
+ struct hisi_qm *qm = pci_get_drvdata(pdev);
if (uacce_mode != UACCE_MODE_NOUACCE)
- hisi_zip_remove_wait_delay(hisi_zip);
+ hisi_qm_remove_wait_delay(qm, &zip_devices);
- if (qm->fun_type == QM_HW_PF && hisi_zip->ctrl->num_vfs != 0)
- (void)hisi_zip_sriov_disable(pdev);
+ if (qm->fun_type == QM_HW_PF && qm->vfs_num)
+ hisi_qm_sriov_disable(pdev, NULL);
hisi_zip_unregister_from_crypto();
- hisi_zip_debugfs_exit(hisi_zip);
+
+ hisi_zip_debugfs_exit(qm);
hisi_qm_stop(qm, QM_NORMAL);
if (qm->fun_type == QM_HW_PF)
- hisi_zip_hw_error_set_state(hisi_zip, false);
+ hisi_qm_dev_err_uninit(qm);
hisi_qm_uninit(qm);
- hisi_zip_remove_from_list(hisi_zip);
-}
-
-static void hisi_zip_shutdown(struct pci_dev *pdev)
-{
- struct hisi_zip *hisi_zip = pci_get_drvdata(pdev);
-
- hisi_qm_stop(&hisi_zip->qm, QM_NORMAL);
-}
-
-static pci_ers_result_t hisi_zip_error_detected(struct pci_dev *pdev,
- pci_channel_state_t state)
-{
- if (pdev->is_virtfn)
- return PCI_ERS_RESULT_NONE;
-
- dev_info(&pdev->dev, "PCI error detected, state(=%d)!!\n", state);
- if (state == pci_channel_io_perm_failure)
- return PCI_ERS_RESULT_DISCONNECT;
-
- return hisi_qm_process_dev_error(pdev);
-}
-
-static int hisi_zip_reset_prepare_ready(struct hisi_zip *hisi_zip)
-{
- struct pci_dev *pdev = hisi_zip->qm.pdev;
- struct hisi_zip *zip = pci_get_drvdata(pci_physfn(pdev));
- int delay = 0;
-
- while (test_and_set_bit(HISI_ZIP_RESET, &zip->status)) {
- msleep(++delay);
- if (delay > HZIP_RESET_WAIT_TIMEOUT)
- return -EBUSY;
- }
-
- return 0;
-}
-
-static int hisi_zip_vf_reset_prepare(struct hisi_zip *hisi_zip,
- enum qm_stop_reason stop_reason)
-{
- struct pci_dev *pdev = hisi_zip->qm.pdev;
- struct pci_dev *dev;
- struct hisi_qm *qm;
- int ret = 0;
-
- mutex_lock(&hisi_zip_list_lock);
- if (pdev->is_physfn) {
- list_for_each_entry(hisi_zip, &hisi_zip_list, list) {
- dev = hisi_zip->qm.pdev;
- if (dev == pdev)
- continue;
-
- if (pci_physfn(dev) == pdev) {
- qm = &hisi_zip->qm;
-
- ret = hisi_qm_stop(qm, stop_reason);
- if (ret)
- goto prepare_fail;
- }
- }
- }
-
-prepare_fail:
- mutex_unlock(&hisi_zip_list_lock);
- return ret;
-}
-
-static int hisi_zip_controller_reset_prepare(struct hisi_zip *hisi_zip)
-{
- struct hisi_qm *qm = &hisi_zip->qm;
- struct device *dev = &qm->pdev->dev;
- int ret;
-
- ret = hisi_zip_reset_prepare_ready(hisi_zip);
- if (ret) {
- dev_err(dev, "Controller reset not ready!\n");
- return ret;
- }
-
- ret = hisi_zip_vf_reset_prepare(hisi_zip, QM_SOFT_RESET);
- if (ret) {
- dev_err(dev, "Fails to stop VFs!\n");
- return ret;
- }
-
- ret = hisi_qm_stop(qm, QM_SOFT_RESET);
- if (ret) {
- dev_err(dev, "Fails to stop QM!\n");
- return ret;
- }
-
-#ifdef CONFIG_CRYPTO_QM_UACCE
- if (qm->use_uacce) {
- ret = uacce_hw_err_isolate(&qm->uacce);
- if (ret) {
- dev_err(dev, "Fails to isolate hw err!\n");
- return ret;
- }
- }
-#endif
-
- return 0;
-}
-
-static int hisi_zip_soft_reset(struct hisi_zip *hisi_zip)
-{
- struct hisi_qm *qm = &hisi_zip->qm;
- struct device *dev = &qm->pdev->dev;
- unsigned long long value;
- int ret;
- u32 val;
-
- ret = hisi_qm_reg_test(qm);
- if (ret)
- return ret;
-
- ret = hisi_qm_set_vf_mse(qm, HZIP_DISABLE);
- if (ret) {
- dev_err(dev, "Fails to disable vf mse bit.\n");
- return ret;
- }
-
- ret = hisi_qm_set_msi(qm, HZIP_DISABLE);
- if (ret) {
- dev_err(dev, "Fails to disable peh msi bit.\n");
- return ret;
- }
-
- /* Set qm ecc if dev ecc happened to hold on ooo */
- hisi_qm_set_ecc(qm);
-
- /* OOO register set and check */
- writel(HZIP_MASTER_GLOBAL_CTRL_SHUTDOWN,
- hisi_zip->qm.io_base + HZIP_MASTER_GLOBAL_CTRL);
-
- /* If bus lock, reset chip */
- ret = readl_relaxed_poll_timeout(hisi_zip->qm.io_base +
- HZIP_MASTER_TRANS_RETURN, val,
- (val == HZIP_MASTER_TRANS_RETURN_RW),
- HZIP_REG_RD_INTVRL_US,
- HZIP_REG_RD_TMOUT_US);
- if (ret) {
- dev_emerg(dev, "Bus lock! Please reset system.\n");
- return ret;
- }
-
- ret = hisi_qm_set_pf_mse(qm, HZIP_DISABLE);
- if (ret) {
- dev_err(dev, "Fails to disable pf mse bit.\n");
- return ret;
- }
-
- /* The reset related sub-control registers are not in PCI BAR */
- if (ACPI_HANDLE(dev)) {
- acpi_status s;
-
- s = acpi_evaluate_integer(ACPI_HANDLE(dev), "ZRST",
- NULL, &value);
- if (ACPI_FAILURE(s)) {
- dev_err(dev, "NO controller reset method!\n");
- return -EIO;
- }
-
- if (value) {
- dev_err(dev, "Reset step %llu failed!\n", value);
- return -EIO;
- }
- } else {
- dev_err(dev, "No reset method!\n");
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int hisi_zip_vf_reset_done(struct hisi_zip *hisi_zip)
-{
- struct pci_dev *pdev = hisi_zip->qm.pdev;
- struct pci_dev *dev;
- struct hisi_qm *qm;
- int ret = 0;
-
- mutex_lock(&hisi_zip_list_lock);
- list_for_each_entry(hisi_zip, &hisi_zip_list, list) {
- dev = hisi_zip->qm.pdev;
- if (dev == pdev)
- continue;
-
- if (pci_physfn(dev) == pdev) {
- qm = &hisi_zip->qm;
-
- ret = hisi_qm_restart(qm);
- if (ret)
- goto reset_fail;
- }
- }
-
-reset_fail:
- mutex_unlock(&hisi_zip_list_lock);
- return ret;
-}
-
-static int hisi_zip_controller_reset_done(struct hisi_zip *hisi_zip)
-{
- struct hisi_qm *qm = &hisi_zip->qm;
- struct device *dev = &qm->pdev->dev;
- int ret;
-
- ret = hisi_qm_set_msi(qm, HZIP_ENABLE);
- if (ret) {
- dev_err(dev, "Fails to enable peh msi bit!\n");
- return ret;
- }
-
- ret = hisi_qm_set_pf_mse(qm, HZIP_ENABLE);
- if (ret) {
- dev_err(dev, "Fails to enable pf mse bit!\n");
- return ret;
- }
-
- ret = hisi_qm_set_vf_mse(qm, HZIP_ENABLE);
- if (ret) {
- dev_err(dev, "Fails to enable vf mse bit!\n");
- return ret;
- }
-
- hisi_zip_set_user_domain_and_cache(hisi_zip);
- hisi_qm_restart_prepare(qm);
-
- ret = hisi_qm_restart(qm);
- if (ret) {
- dev_err(dev, "Failed to start QM!\n");
- return -EPERM;
- }
-
- if (hisi_zip->ctrl->num_vfs) {
- ret = hisi_zip_vf_q_assign(hisi_zip, hisi_zip->ctrl->num_vfs);
- if (ret) {
- dev_err(dev, "Failed to assign vf queues!\n");
- return ret;
- }
- }
-
- ret = hisi_zip_vf_reset_done(hisi_zip);
- if (ret) {
- dev_err(dev, "Failed to start VFs!\n");
- return -EPERM;
- }
-
- hisi_qm_restart_done(qm);
- hisi_zip_hw_error_init(hisi_zip);
-
- return 0;
-}
-
-static int hisi_zip_controller_reset(struct hisi_zip *hisi_zip)
-{
- struct hisi_qm *qm = &hisi_zip->qm;
- struct device *dev = &qm->pdev->dev;
- int ret;
-
- dev_info(dev, "Controller resetting...\n");
-
- ret = hisi_zip_controller_reset_prepare(hisi_zip);
- if (ret)
- return ret;
-
- ret = hisi_zip_soft_reset(hisi_zip);
- if (ret) {
- dev_err(dev, "Controller reset failed (%d)\n", ret);
- return ret;
- }
-
- ret = hisi_zip_controller_reset_done(hisi_zip);
- if (ret)
- return ret;
-
- clear_bit(HISI_ZIP_RESET, &hisi_zip->status);
-
- dev_info(dev, "Controller reset complete\n");
-
- return ret;
-}
-
-static pci_ers_result_t hisi_zip_slot_reset(struct pci_dev *pdev)
-{
- struct hisi_zip *hisi_zip = pci_get_drvdata(pdev);
- int ret;
-
- if (pdev->is_virtfn)
- return PCI_ERS_RESULT_RECOVERED;
-
- dev_info(&pdev->dev, "Requesting reset due to PCI error\n");
-
- pci_cleanup_aer_uncorrect_error_status(pdev);
-
- ret = hisi_zip_controller_reset(hisi_zip);
- if (ret) {
- dev_err(&pdev->dev, "hisi_zip controller reset failed (%d)\n",
- ret);
- return PCI_ERS_RESULT_DISCONNECT;
- }
-
- return PCI_ERS_RESULT_RECOVERED;
-}
-
-static void hisi_zip_set_hw_error(struct hisi_zip *hisi_zip, bool state)
-{
- struct pci_dev *pdev = hisi_zip->qm.pdev;
- struct hisi_zip *zip = pci_get_drvdata(pci_physfn(pdev));
- struct hisi_qm *qm = &zip->qm;
-
- if (qm->fun_type == QM_HW_VF)
- return;
-
- if (state)
- hisi_qm_hw_error_init(qm, QM_BASE_CE,
- QM_BASE_NFE | QM_ACC_WB_NOT_READY_TIMEOUT,
- 0, QM_DB_RANDOM_INVALID);
- else
- hisi_qm_hw_error_uninit(qm);
-
- hisi_zip_hw_error_set_state(zip, state);
-}
-
-static int hisi_zip_get_hw_error_status(struct hisi_zip *hisi_zip)
-{
- u32 err_sts;
-
- err_sts = readl(hisi_zip->qm.io_base + HZIP_CORE_INT_STATUS) &
- HZIP_CORE_INT_STATUS_M_ECC;
- if (err_sts)
- return err_sts;
-
- return 0;
-}
-
-static int hisi_zip_check_hw_error(struct hisi_zip *hisi_zip)
-{
- struct pci_dev *pdev = hisi_zip->qm.pdev;
- struct hisi_zip *zip = pci_get_drvdata(pci_physfn(pdev));
- struct hisi_qm *qm = &zip->qm;
- int ret;
-
- if (qm->fun_type == QM_HW_VF)
- return 0;
-
- ret = hisi_qm_get_hw_error_status(qm);
- if (ret)
- return ret;
-
- return hisi_zip_get_hw_error_status(zip);
-}
-
-static void hisi_zip_reset_prepare(struct pci_dev *pdev)
-{
- struct hisi_zip *hisi_zip = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hisi_zip->qm;
- struct device *dev = &pdev->dev;
- u32 delay = 0;
- int ret;
-
- hisi_zip_set_hw_error(hisi_zip, HZIP_HW_ERROR_IRQ_DISABLE);
-
- while (hisi_zip_check_hw_error(hisi_zip)) {
- msleep(++delay);
- if (delay > HZIP_RESET_WAIT_TIMEOUT)
- return;
- }
-
- ret = hisi_zip_reset_prepare_ready(hisi_zip);
- if (ret) {
- dev_err(dev, "FLR not ready!\n");
- return;
- }
-
- ret = hisi_zip_vf_reset_prepare(hisi_zip, QM_FLR);
- if (ret) {
- dev_err(dev, "Fails to prepare reset!\n");
- return;
- }
-
- ret = hisi_qm_stop(qm, QM_FLR);
- if (ret) {
- dev_err(dev, "Fails to stop QM!\n");
- return;
- }
-
- dev_info(dev, "FLR resetting...\n");
-}
-
-static void hisi_zip_flr_reset_complete(struct hisi_zip *hisi_zip)
-{
- struct pci_dev *pdev = hisi_zip->qm.pdev;
- struct hisi_zip *zip = pci_get_drvdata(pci_physfn(pdev));
- struct device *dev = &zip->qm.pdev->dev;
- u32 id;
-
- pci_read_config_dword(zip->qm.pdev, PCI_COMMAND, &id);
- if (id == HZIP_PCI_COMMAND_INVALID)
- dev_err(dev, "Device can not be used!\n");
-
- clear_bit(HISI_ZIP_RESET, &zip->status);
-}
-
-static void hisi_zip_reset_done(struct pci_dev *pdev)
-{
- struct hisi_zip *hisi_zip = pci_get_drvdata(pdev);
- struct hisi_qm *qm = &hisi_zip->qm;
- struct device *dev = &pdev->dev;
- int ret;
-
- hisi_zip_set_hw_error(hisi_zip, HZIP_HW_ERROR_IRQ_ENABLE);
-
- ret = hisi_qm_restart(qm);
- if (ret) {
- dev_err(dev, "Failed to start QM!\n");
- goto flr_done;
- }
-
- if (pdev->is_physfn) {
- hisi_zip_set_user_domain_and_cache(hisi_zip);
- if (hisi_zip->ctrl->num_vfs)
- hisi_zip_vf_q_assign(hisi_zip,
- hisi_zip->ctrl->num_vfs);
- ret = hisi_zip_vf_reset_done(hisi_zip);
- if (ret) {
- dev_err(dev, "Failed to start VFs!\n");
- goto flr_done;
- }
- }
-
-flr_done:
- hisi_zip_flr_reset_complete(hisi_zip);
-
- dev_info(dev, "FLR reset complete\n");
+ hisi_qm_del_from_list(qm, &zip_devices);
}
static const struct pci_error_handlers hisi_zip_err_handler = {
- .error_detected = hisi_zip_error_detected,
- .slot_reset = hisi_zip_slot_reset,
- .reset_prepare = hisi_zip_reset_prepare,
- .reset_done = hisi_zip_reset_done,
+ .error_detected = hisi_qm_dev_err_detected,
+ .slot_reset = hisi_qm_dev_slot_reset,
+ .reset_prepare = hisi_qm_reset_prepare,
+ .reset_done = hisi_qm_reset_done,
};
static struct pci_driver hisi_zip_pci_driver = {
@@ -1556,7 +890,7 @@ static void hisi_zip_reset_done(struct pci_dev *pdev)
.remove = hisi_zip_remove,
.sriov_configure = hisi_zip_sriov_configure,
.err_handler = &hisi_zip_err_handler,
- .shutdown = hisi_zip_shutdown,
+ .shutdown = hisi_qm_dev_shutdown,
};
static void hisi_zip_register_debugfs(void)
@@ -1578,6 +912,10 @@ static int __init hisi_zip_init(void)
{
int ret;
+ INIT_LIST_HEAD(&zip_devices.list);
+ mutex_init(&zip_devices.lock);
+ zip_devices.check = NULL;
+
hisi_zip_register_debugfs();
ret = pci_register_driver(&hisi_zip_pci_driver);
--
1.8.3

From: Eric Auger <eric.auger(a)redhat.com>
mainline inclusion
from mainline-5.3
commit b9a7f9816483b193
category: bugfix
bugzilla: 17401
CVE: NA
-------------------------------------------------
Several call sites are about to check whether a device belongs
to the PCI sub-hierarchy of a candidate PCI-PCI bridge.
Introduce a helper to perform that check.
Signed-off-by: Eric Auger <eric.auger(a)redhat.com>
Reviewed-by: Lu Baolu <baolu.lu(a)linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel(a)suse.de>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
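For reference, the containment test reduces to a bus-number range check
against the window the bridge forwards. A minimal stand-alone sketch of
that logic (bus_window and bus_below_bridge are illustrative names here,
not kernel API):

#include <stdbool.h>

/* Bus-number window a PCI-PCI bridge forwards: [secondary, subordinate].
 * In the patch these come from pbridge->subordinate->number and
 * pbridge->subordinate->busn_res.end respectively. */
struct bus_window {
	int secondary;
	int subordinate;
};

/* True iff a device whose bus number is dev_busnr sits below the bridge. */
static bool bus_below_bridge(int dev_busnr, struct bus_window w)
{
	return w.secondary <= dev_busnr && dev_busnr <= w.subordinate;
}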
drivers/iommu/intel-iommu.c | 37 +++++++++++++++++++++++++++++--------
1 file changed, 29 insertions(+), 8 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index f6d7955..2f52ea8 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -826,12 +826,39 @@ static int iommu_dummy(struct device *dev)
return dev->archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO;
}
+/**
+ * is_downstream_to_pci_bridge - test if a device belongs to the PCI
+ * sub-hierarchy of a candidate PCI-PCI bridge
+ * @dev: candidate PCI device belonging to @bridge PCI sub-hierarchy
+ * @bridge: the candidate PCI-PCI bridge
+ *
+ * Return: true if @dev belongs to @bridge PCI sub-hierarchy, else false.
+ */
+static bool
+is_downstream_to_pci_bridge(struct device *dev, struct device *bridge)
+{
+ struct pci_dev *pdev, *pbridge;
+
+ if (!dev_is_pci(dev) || !dev_is_pci(bridge))
+ return false;
+
+ pdev = to_pci_dev(dev);
+ pbridge = to_pci_dev(bridge);
+
+ if (pbridge->subordinate &&
+ pbridge->subordinate->number <= pdev->bus->number &&
+ pbridge->subordinate->busn_res.end >= pdev->bus->number)
+ return true;
+
+ return false;
+}
+
static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn)
{
struct dmar_drhd_unit *drhd = NULL;
struct intel_iommu *iommu;
struct device *tmp;
- struct pci_dev *ptmp, *pdev = NULL;
+ struct pci_dev *pdev = NULL;
u16 segment = 0;
int i;
@@ -877,13 +904,7 @@ static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devf
goto out;
}
- if (!pdev || !dev_is_pci(tmp))
- continue;
-
- ptmp = to_pci_dev(tmp);
- if (ptmp->subordinate &&
- ptmp->subordinate->number <= pdev->bus->number &&
- ptmp->subordinate->busn_res.end >= pdev->bus->number)
+ if (is_downstream_to_pci_bridge(dev, tmp))
goto got_pdev;
}
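
The refactor pays off at call sites like the one above: the open-coded subordinate-bus-range test collapses to a single predicate. A hedged sketch of another hypothetical caller; only is_downstream_to_pci_bridge comes from the patch, the surrounding loop and names are illustrative:

	/* Illustration only: test @dev against every bridge in a scope array. */
	static bool dev_in_any_bridge_scope(struct device *dev,
					    struct device **bridges, int count)
	{
		int i;

		for (i = 0; i < count; i++)
			if (is_downstream_to_pci_bridge(dev, bridges[i]))
				return true;

		return false;
	}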
--
1.8.3
From: fengsheng <fengsheng5(a)huawei.com>
driver inclusion
category: cleanup
bugzilla: NA
CVE: NA
1. sfc code cleanup
Signed-off-by: fengsheng <fengsheng5(a)huawei.com>
Reviewed-by: zhangmu <zhangmu1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/mtd/hisilicon/sfc/hrd_common.h | 54 +++++++++++++--------------
drivers/mtd/hisilicon/sfc/hrd_sfc_driver.c | 4 +-
drivers/mtd/hisilicon/sfc/hrd_sflash_core.c | 12 ++----
drivers/mtd/hisilicon/sfc/hrd_sflash_core.h | 2 +
drivers/mtd/hisilicon/sfc/hrd_sflash_driver.h | 6 +--
drivers/mtd/hisilicon/sfc/hrd_sflash_hal.c | 5 +--
drivers/mtd/hisilicon/sfc/hrd_sflash_hal.h | 1 +
drivers/mtd/hisilicon/sfc/hrd_sflash_spec.h | 4 +-
8 files changed, 42 insertions(+), 46 deletions(-)
diff --git a/drivers/mtd/hisilicon/sfc/hrd_common.h b/drivers/mtd/hisilicon/sfc/hrd_common.h
index d36a7a3..71dcaa9 100644
--- a/drivers/mtd/hisilicon/sfc/hrd_common.h
+++ b/drivers/mtd/hisilicon/sfc/hrd_common.h
@@ -40,39 +40,39 @@
#define HRD_COMMON_ERR_RES_NOT_EXIST (int)(HRD_COMMON_ERR_BASE - 13)
/* 16 bit nibble swap. example 0x1234 -> 0x2143 */
-#define HRD_NIBBLE_SWAP_16BIT(X) ((((X)&0xf) << 4) | \
- (((X)&0xF0) >> 4) | \
- (((X)&0xF00) << 4) | \
- (((X)&0xF000) >> 4))
+#define HRD_NIBBLE_SWAP_16BIT(X) ((((X) & 0xf) << 4) | \
+ (((X) & 0xF0) >> 4) | \
+ (((X) & 0xF00) << 4) | \
+ (((X) & 0xF000) >> 4))
/* 32 bit nibble swap. example 0x12345678 -> 0x21436587 */
-#define HRD_NIBBLE_SWAP_32BIT(X) (((X&0xF) << 4) | \
- (((X)&0xF0) >> 4) | \
- (((X)&0xF00) << 4) | \
- (((X)&0xF000) >> 4) | \
- (((X)&0xF0000) << 4) | \
- (((X)&0xF00000) >> 4) | \
- (((X)&0xF000000) << 4) | \
- (((X)&0xF0000000) >> 4))
+#define HRD_NIBBLE_SWAP_32BIT(X) ((((X) & 0xF) << 4) | \
+ (((X) & 0xF0) >> 4) | \
+ (((X) & 0xF00) << 4) | \
+ (((X) & 0xF000) >> 4) | \
+ (((X) & 0xF0000) << 4) | \
+ (((X) & 0xF00000) >> 4) | \
+ (((X) & 0xF000000) << 4) | \
+ (((X) & 0xF0000000) >> 4))
/* 16 bit byte swap. example 0x1234->0x3412 */
-#define HRD_BYTE_SWAP_16BIT(X) ((((X)&0xFF)<<8) | (((X)&0xFF00)>>8))
+#define HRD_BYTE_SWAP_16BIT(X) ((((X) & 0xFF) << 8) | (((X) & 0xFF00) >> 8))
/* 32 bit byte swap. example 0x12345678->0x78563412 */
-#define HRD_BYTE_SWAP_32BIT(X) ((((X)&0xFF)<<24) | \
- (((X)&0xFF00)<<8) | \
- (((X)&0xFF0000)>>8) | \
- (((X)&0xFF000000)>>24))
+#define HRD_BYTE_SWAP_32BIT(X) ((((X) & 0xFF) << 24) | \
+ (((X) & 0xFF00) << 8) | \
+ (((X) & 0xFF0000) >> 8) | \
+ (((X) & 0xFF000000) >> 24))
/* 64 bit byte swap. example 0x11223344.55667788 -> 0x88776655.44332211 */
-#define HRD_BYTE_SWAP_64BIT(X) ((l64) ((((X)&0xFFULL)<<56) | \
- (((X)&0xFF00ULL)<<40) | \
- (((X)&0xFF0000ULL)<<24) | \
- (((X)&0xFF000000ULL)<<8) | \
- (((X)&0xFF00000000ULL)>>8) | \
- (((X)&0xFF0000000000ULL)>>24) | \
- (((X)&0xFF000000000000ULL)>>40) | \
- (((X)&0xFF00000000000000ULL)>>56)))
+#define HRD_BYTE_SWAP_64BIT(X) ((l64) ((((X) & 0xFFULL) << 56) | \
+ (((X) & 0xFF00ULL) << 40) | \
+ (((X) & 0xFF0000ULL) << 24) | \
+ (((X) & 0xFF000000ULL) << 8) | \
+ (((X) & 0xFF00000000ULL) >> 8) | \
+ (((X) & 0xFF0000000000ULL) >> 24) | \
+ (((X) & 0xFF000000000000ULL) >> 40) | \
+ (((X) & 0xFF00000000000000ULL) >> 56)))
/* -- Endianess macros. */
#ifdef HRD_ENDNESS_BIGEND
@@ -91,10 +91,8 @@
#define HRD_64BIT_BE(X) HRD_BYTE_SWAP_64BIT(X)
#endif
-#define VOID void
-
#ifndef NULL
-#define NULL ((VOID *)0)
+#define NULL ((void *)0)
#endif
#define MTD_FLASH_MAP_DEBUG
diff --git a/drivers/mtd/hisilicon/sfc/hrd_sfc_driver.c b/drivers/mtd/hisilicon/sfc/hrd_sfc_driver.c
index 6a10106..8935263 100644
--- a/drivers/mtd/hisilicon/sfc/hrd_sfc_driver.c
+++ b/drivers/mtd/hisilicon/sfc/hrd_sfc_driver.c
@@ -26,9 +26,9 @@
#include "hrd_common.h"
#include "hrd_sflash_driver.h"
-#define SFC_DRIVER_VERSION "1.8.15.0"
+#define SFC_DRIVER_VERSION "1.9.39.0"
-static const char *g_sflashMtdList[] = { "sflash", NULL };
+static const char *g_sflashMtdList[] = {"sflash", NULL};
static unsigned int hrd_flash_info_fill(struct maps_init_info *maps,
struct resource *flash_iores, struct platform_device *pdev)
diff --git a/drivers/mtd/hisilicon/sfc/hrd_sflash_core.c b/drivers/mtd/hisilicon/sfc/hrd_sflash_core.c
index 7341f9e..68547d8 100644
--- a/drivers/mtd/hisilicon/sfc/hrd_sflash_core.c
+++ b/drivers/mtd/hisilicon/sfc/hrd_sflash_core.c
@@ -28,7 +28,6 @@
#include <linux/signal.h>
#include <linux/types.h>
#include "hrd_common.h"
-#include "hrd_sflash_driver.h"
#include "hrd_sflash_spec.h"
#include "hrd_sflash_core.h"
@@ -205,7 +204,6 @@ s32 SFC_ClearStatus(struct SFC_SFLASH_INFO *sflash)
(void)SFC_ClearInt(sflash->sfc_reg_base);
if (sflash->manufacturerId == HISI_SPANSION_MANF_ID) {
-
/* 30 for spansion , clear status */
SFC_RegisterWrite(sflash->sfc_reg_base + CMD_INS, 0x30);
@@ -234,7 +232,6 @@ void SFC_CheckErr(struct SFC_SFLASH_INFO *sflash)
unsigned long delay_us = 50; /* delay 50us */
if (sflash->manufacturerId == HISI_SPANSION_MANF_ID) {
-
ulRegValue = SFC_ReadStatus(sflash);
if (ulRegValue == WAIT_TIME_OUT) {
pr_err("[SFC] [%s %d]: SFC_ReadStatus time out\n",
@@ -362,7 +359,7 @@ s32 SFC_RegWordAlignRead(struct SFC_SFLASH_INFO *sflash,
u32 ulRegValue;
s32 ulRet;
- if (!ulReadLen || ulReadLen > SFC_HARD_BUF_LEN || (ulReadLen&0x3)) {
+ if (!ulReadLen || ulReadLen > SFC_HARD_BUF_LEN || (ulReadLen & 0x3)) {
pr_err("[SFC] [%s %d]: len=%u err\n", __func__, __LINE__, ulReadLen);
return HRD_ERR;
}
@@ -379,7 +376,7 @@ s32 SFC_RegWordAlignRead(struct SFC_SFLASH_INFO *sflash,
ulRegValue = SFC_RegisterRead(sflash->sfc_reg_base + CMD_CONFIG);
ulRegValue &= (~(0xff << DATA_CNT) & (~(1 << SEL_CS)));
ulRegValue |=
- ((ulReadLen-1) << DATA_CNT) | (1 << ADDR_EN) | (1 << DATA_EN) | (1 << RW_DATA)
+ ((ulReadLen - 1) << DATA_CNT) | (1 << ADDR_EN) | (1 << DATA_EN) | (1 << RW_DATA)
| (SFC_CHIP_CS << SEL_CS) | (0x1 << START);
wmb();
@@ -398,7 +395,6 @@ s32 SFC_RegWordAlignRead(struct SFC_SFLASH_INFO *sflash,
pulData[i] = SFC_RegisterRead(sflash->sfc_reg_base + DATABUFFER1 + (u32)(0x4 * i));
return ulRet;
-
}
s32 SFC_RegByteRead(struct SFC_SFLASH_INFO *sflash,
@@ -448,7 +444,7 @@ s32 SFC_RegWordAlignWrite(struct SFC_SFLASH_INFO *sflash,
s32 ulRet;
ulRet = SFC_WriteEnable(sflash);
- if (!ulWriteLen || ulWriteLen > SFC_HARD_BUF_LEN || (ulWriteLen&0x3)) {
+ if ((!ulWriteLen) || (ulWriteLen > SFC_HARD_BUF_LEN) || (ulWriteLen & 0x3)) {
pr_err("[SFC] [%s %d]: len=%u err\n", __func__, __LINE__, ulWriteLen);
ulRet = HRD_ERR;
goto rel;
@@ -471,7 +467,7 @@ s32 SFC_RegWordAlignWrite(struct SFC_SFLASH_INFO *sflash,
ulRegValue = SFC_RegisterRead(sflash->sfc_reg_base + CMD_CONFIG);
ulRegValue &=
(~(0xff << DATA_CNT)) & (~(1 << RW_DATA) & (~(1 << SEL_CS)));
- ulRegValue |= ((ulWriteLen-1) << DATA_CNT) | (1 << ADDR_EN) | (1 << DATA_EN)
+ ulRegValue |= ((ulWriteLen - 1) << DATA_CNT) | (1 << ADDR_EN) | (1 << DATA_EN)
| (SFC_CHIP_CS << SEL_CS) | (0x1 << START);
wmb();
diff --git a/drivers/mtd/hisilicon/sfc/hrd_sflash_core.h b/drivers/mtd/hisilicon/sfc/hrd_sflash_core.h
index 9002c3e..56c4417 100644
--- a/drivers/mtd/hisilicon/sfc/hrd_sflash_core.h
+++ b/drivers/mtd/hisilicon/sfc/hrd_sflash_core.h
@@ -17,6 +17,8 @@
#ifndef __HRD_SFLASH_CORE_H__
#define __HRD_SFLASH_CORE_H__
+#include "hrd_sflash_driver.h"
+
#define SFC_HARD_BUF_LEN (256)
#define SPI_CMD_SR_WIP 1 /* Write in Progress bit in status register position */
diff --git a/drivers/mtd/hisilicon/sfc/hrd_sflash_driver.h b/drivers/mtd/hisilicon/sfc/hrd_sflash_driver.h
index 3494787..f659758 100644
--- a/drivers/mtd/hisilicon/sfc/hrd_sflash_driver.h
+++ b/drivers/mtd/hisilicon/sfc/hrd_sflash_driver.h
@@ -14,8 +14,8 @@
*
*/
-#ifndef _HRD_SLASH_DRIVER_H
-#define _HRD_SLASH_DRIVER_H
+#ifndef _HRD_SFLASH_DRIVER_H
+#define _HRD_SFLASH_DRIVER_H
#include <linux/mtd/map.h>
@@ -102,4 +102,4 @@ struct SFC_SFLASH_INFO {
extern struct mtd_info *sflash_probe(struct map_info *map, struct resource *sfc_regres);
extern void sflash_destroy(struct mtd_info *mtd);
-#endif /* _HRD_SLASH_DRIVER_H */
+#endif /* _HRD_SFLASH_DRIVER_H */
diff --git a/drivers/mtd/hisilicon/sfc/hrd_sflash_hal.c b/drivers/mtd/hisilicon/sfc/hrd_sflash_hal.c
index 8f5b387..ec9887a7 100644
--- a/drivers/mtd/hisilicon/sfc/hrd_sflash_hal.c
+++ b/drivers/mtd/hisilicon/sfc/hrd_sflash_hal.c
@@ -700,7 +700,6 @@ s32 SFC_BlockErase(struct SFC_SFLASH_INFO *sflash, u32 ulAddr, u32 ErCmd)
rel:
SFC_FlashUnlock(sflash);
return ulRet;
-
}
static s32 _SFC_RegModeWrite(struct SFC_SFLASH_INFO *sflash,
@@ -727,7 +726,7 @@ static s32 _SFC_RegModeWrite(struct SFC_SFLASH_INFO *sflash,
}
if (ulRemain >= 0x4) {
- slRet = SFC_RegWordAlignWrite(sflash, (const u32 *)(pucSrc + i), offset + i, ulRemain&(~0x3));
+ slRet = SFC_RegWordAlignWrite(sflash, (const u32 *)(pucSrc + i), offset + i, ulRemain & (~0x3));
if (slRet != HRD_OK) {
pr_err("[SFC] [%s %d]: SFC_RegWordAlignWrite fail\n", __func__, __LINE__);
return slRet;
@@ -805,7 +804,7 @@ s32 SFC_RegModeRead(struct SFC_SFLASH_INFO *sflash,
}
if (ulRemain >= 0x4) {
- ret = SFC_RegWordAlignRead(sflash, offset + i, (u32 *) (pucDest + i), ulRemain&(~0x3));
+ ret = SFC_RegWordAlignRead(sflash, offset + i, (u32 *) (pucDest + i), ulRemain & (~0x3));
if (ret != HRD_OK) {
pr_err("[SFC] [%s %d]: SFC_RegWordAlignRead fail\n", __func__, __LINE__);
return ret;
diff --git a/drivers/mtd/hisilicon/sfc/hrd_sflash_hal.h b/drivers/mtd/hisilicon/sfc/hrd_sflash_hal.h
index 78c921c..f612731 100644
--- a/drivers/mtd/hisilicon/sfc/hrd_sflash_hal.h
+++ b/drivers/mtd/hisilicon/sfc/hrd_sflash_hal.h
@@ -16,6 +16,7 @@
#ifndef __HRD_SFLASH_HAL_H__
#define __HRD_SFLASH_HAL_H__
+#include "hrd_sflash_driver.h"
extern void SFC_CheckErr(struct SFC_SFLASH_INFO *sflash);
extern s32 SFC_RegModeRead(struct SFC_SFLASH_INFO *sflash, u32 offset,
diff --git a/drivers/mtd/hisilicon/sfc/hrd_sflash_spec.h b/drivers/mtd/hisilicon/sfc/hrd_sflash_spec.h
index a59965b..151957d 100644
--- a/drivers/mtd/hisilicon/sfc/hrd_sflash_spec.h
+++ b/drivers/mtd/hisilicon/sfc/hrd_sflash_spec.h
@@ -14,8 +14,8 @@
*
*/
-#ifndef __SPI_FLASH_SPEC_H__
-#define __SPI_FLASH_SPEC_H__
+#ifndef __HRD_SFLASH_SPEC_H__
+#define __HRD_SFLASH_SPEC_H__
#define SFLASH_DEFAULT_RDID_OPCD 0x9F /* Default Read ID */
#define SFLASH_DEFAULT_WREN_OPCD 0x06 /* Default Write Enable */
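
Since the macro hunks above only touch spacing, behavior should be unchanged; a quick standalone sanity check against the examples given in the header comments (macros copied from the new versions, built as a plain userspace program):

	#include <assert.h>

	#define HRD_NIBBLE_SWAP_16BIT(X) ((((X) & 0xf) << 4) | \
					  (((X) & 0xF0) >> 4) | \
					  (((X) & 0xF00) << 4) | \
					  (((X) & 0xF000) >> 4))
	#define HRD_BYTE_SWAP_16BIT(X) ((((X) & 0xFF) << 8) | (((X) & 0xFF00) >> 8))
	#define HRD_BYTE_SWAP_32BIT(X) ((((X) & 0xFF) << 24) | \
					(((X) & 0xFF00) << 8) | \
					(((X) & 0xFF0000) >> 8) | \
					(((X) & 0xFF000000) >> 24))

	int main(void)
	{
		/* Expected values come straight from the header comments. */
		assert(HRD_NIBBLE_SWAP_16BIT(0x1234) == 0x2143);
		assert(HRD_BYTE_SWAP_16BIT(0x1234) == 0x3412);
		assert(HRD_BYTE_SWAP_32BIT(0x12345678UL) == 0x78563412UL);
		return 0;
	}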
--
1.8.3
16 Apr '20
From: Lyude Paul <lyude(a)redhat.com>
commit bf502391353b928e63096127e5fd8482080203f5 upstream.
The ThinkPad T470s touchpad supports RMI4 and everything seems to work,
including the touchpad buttons. So, let's enable this by default.
Signed-off-by: Lyude Paul <lyude(a)redhat.com>
Cc: stable(a)vger.kernel.org
Link: https://lore.kernel.org/r/20200204194322.112638-1-lyude@redhat.com
Signed-off-by: Dmitry Torokhov <dmitry.torokhov(a)gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/input/mouse/synaptics.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
index e8d1134..064be84 100644
--- a/drivers/input/mouse/synaptics.c
+++ b/drivers/input/mouse/synaptics.c
@@ -172,6 +172,7 @@ void synaptics_reset(struct psmouse *psmouse)
"LEN004a", /* W541 */
"LEN005b", /* P50 */
"LEN005e", /* T560 */
+ "LEN006c", /* T470s */
"LEN0071", /* T480 */
"LEN0072", /* X1 Carbon Gen 5 (2017) - Elan/ALPS trackpoint */
"LEN0073", /* X1 Carbon G5 (Elantech) */
--
1.8.3
16 Apr '20
From: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
mainline inclusion
from mainline-v4.20-rc1
commit ab8085c130edd65be0d95cc95c28b51c4c6faf9d
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------------------------
As it turns out, the AVX2 multibuffer SHA routines are currently
broken [0], in a way that would have likely been noticed if this
code were in wide use. Since the code is too complicated to be
maintained by anyone except the original authors, and since the
performance benefits for real-world use cases are debatable to
begin with, it is better to drop it entirely for the moment.
[0] https://marc.info/?l=linux-crypto-vger&m=153476243825350&w=2
Suggested-by: Eric Biggers <ebiggers(a)google.com>
Cc: Megha Dey <megha.dey(a)linux.intel.com>
Cc: Tim Chen <tim.c.chen(a)linux.intel.com>
Cc: Geert Uytterhoeven <geert(a)linux-m68k.org>
Cc: Martin Schwidefsky <schwidefsky(a)de.ibm.com>
Cc: Heiko Carstens <heiko.carstens(a)de.ibm.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Ingo Molnar <mingo(a)redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
MAINTAINERS | 8 -
arch/m68k/configs/amiga_defconfig | 1 -
arch/m68k/configs/apollo_defconfig | 1 -
arch/m68k/configs/atari_defconfig | 1 -
arch/m68k/configs/bvme6000_defconfig | 1 -
arch/m68k/configs/hp300_defconfig | 1 -
arch/m68k/configs/mac_defconfig | 1 -
arch/m68k/configs/multi_defconfig | 1 -
arch/m68k/configs/mvme147_defconfig | 1 -
arch/m68k/configs/mvme16x_defconfig | 1 -
arch/m68k/configs/q40_defconfig | 1 -
arch/m68k/configs/sun3_defconfig | 1 -
arch/m68k/configs/sun3x_defconfig | 1 -
arch/s390/configs/debug_defconfig | 1 -
arch/s390/configs/performance_defconfig | 1 -
arch/x86/crypto/Makefile | 3 -
arch/x86/crypto/sha1-mb/Makefile | 14 -
arch/x86/crypto/sha1-mb/sha1_mb.c | 1011 -------------------
arch/x86/crypto/sha1-mb/sha1_mb_ctx.h | 134 ---
arch/x86/crypto/sha1-mb/sha1_mb_mgr.h | 110 --
arch/x86/crypto/sha1-mb/sha1_mb_mgr_datastruct.S | 287 ------
arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S | 304 ------
arch/x86/crypto/sha1-mb/sha1_mb_mgr_init_avx2.c | 64 --
arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S | 209 ----
arch/x86/crypto/sha1-mb/sha1_x8_avx2.S | 492 ---------
arch/x86/crypto/sha256-mb/Makefile | 14 -
arch/x86/crypto/sha256-mb/sha256_mb.c | 1013 -------------------
arch/x86/crypto/sha256-mb/sha256_mb_ctx.h | 134 ---
arch/x86/crypto/sha256-mb/sha256_mb_mgr.h | 108 --
.../crypto/sha256-mb/sha256_mb_mgr_datastruct.S | 304 ------
.../crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S | 307 ------
.../x86/crypto/sha256-mb/sha256_mb_mgr_init_avx2.c | 65 --
.../crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S | 214 ----
arch/x86/crypto/sha256-mb/sha256_x8_avx2.S | 598 -----------
arch/x86/crypto/sha512-mb/Makefile | 12 -
arch/x86/crypto/sha512-mb/sha512_mb.c | 1047 --------------------
arch/x86/crypto/sha512-mb/sha512_mb_ctx.h | 128 ---
arch/x86/crypto/sha512-mb/sha512_mb_mgr.h | 104 --
.../crypto/sha512-mb/sha512_mb_mgr_datastruct.S | 281 ------
.../crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S | 297 ------
.../x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c | 69 --
.../crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S | 224 -----
arch/x86/crypto/sha512-mb/sha512_x4_avx2.S | 531 ----------
crypto/Kconfig | 62 --
crypto/Makefile | 1 -
crypto/mcryptd.c | 675 -------------
include/crypto/mcryptd.h | 114 ---
47 files changed, 8952 deletions(-)
delete mode 100644 arch/x86/crypto/sha1-mb/Makefile
delete mode 100644 arch/x86/crypto/sha1-mb/sha1_mb.c
delete mode 100644 arch/x86/crypto/sha1-mb/sha1_mb_ctx.h
delete mode 100644 arch/x86/crypto/sha1-mb/sha1_mb_mgr.h
delete mode 100644 arch/x86/crypto/sha1-mb/sha1_mb_mgr_datastruct.S
delete mode 100644 arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S
delete mode 100644 arch/x86/crypto/sha1-mb/sha1_mb_mgr_init_avx2.c
delete mode 100644 arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S
delete mode 100644 arch/x86/crypto/sha1-mb/sha1_x8_avx2.S
delete mode 100644 arch/x86/crypto/sha256-mb/Makefile
delete mode 100644 arch/x86/crypto/sha256-mb/sha256_mb.c
delete mode 100644 arch/x86/crypto/sha256-mb/sha256_mb_ctx.h
delete mode 100644 arch/x86/crypto/sha256-mb/sha256_mb_mgr.h
delete mode 100644 arch/x86/crypto/sha256-mb/sha256_mb_mgr_datastruct.S
delete mode 100644 arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
delete mode 100644 arch/x86/crypto/sha256-mb/sha256_mb_mgr_init_avx2.c
delete mode 100644 arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S
delete mode 100644 arch/x86/crypto/sha256-mb/sha256_x8_avx2.S
delete mode 100644 arch/x86/crypto/sha512-mb/Makefile
delete mode 100644 arch/x86/crypto/sha512-mb/sha512_mb.c
delete mode 100644 arch/x86/crypto/sha512-mb/sha512_mb_ctx.h
delete mode 100644 arch/x86/crypto/sha512-mb/sha512_mb_mgr.h
delete mode 100644 arch/x86/crypto/sha512-mb/sha512_mb_mgr_datastruct.S
delete mode 100644 arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S
delete mode 100644 arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c
delete mode 100644 arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S
delete mode 100644 arch/x86/crypto/sha512-mb/sha512_x4_avx2.S
delete mode 100644 crypto/mcryptd.c
delete mode 100644 include/crypto/mcryptd.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 588fc68..b143d31 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7565,14 +7565,6 @@ S: Supported
F: drivers/infiniband/hw/i40iw/
F: include/uapi/rdma/i40iw-abi.h
-INTEL SHA MULTIBUFFER DRIVER
-M: Megha Dey <megha.dey(a)linux.intel.com>
-R: Tim Chen <tim.c.chen(a)linux.intel.com>
-L: linux-crypto(a)vger.kernel.org
-S: Supported
-F: arch/x86/crypto/sha*-mb/
-F: crypto/mcryptd.c
-
INTEL TELEMETRY DRIVER
M: Souvik Kumar Chakravarty <souvik.k.chakravarty(a)intel.com>
L: platform-driver-x86(a)vger.kernel.org
diff --git a/arch/m68k/configs/amiga_defconfig b/arch/m68k/configs/amiga_defconfig
index 93a3c3c..85904b7 100644
--- a/arch/m68k/configs/amiga_defconfig
+++ b/arch/m68k/configs/amiga_defconfig
@@ -621,7 +621,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/apollo_defconfig b/arch/m68k/configs/apollo_defconfig
index e3d0efd..9b3818b 100644
--- a/arch/m68k/configs/apollo_defconfig
+++ b/arch/m68k/configs/apollo_defconfig
@@ -578,7 +578,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/atari_defconfig b/arch/m68k/configs/atari_defconfig
index 75ac0c7..7696778 100644
--- a/arch/m68k/configs/atari_defconfig
+++ b/arch/m68k/configs/atari_defconfig
@@ -599,7 +599,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/bvme6000_defconfig b/arch/m68k/configs/bvme6000_defconfig
index c6e4927..7dd264d 100644
--- a/arch/m68k/configs/bvme6000_defconfig
+++ b/arch/m68k/configs/bvme6000_defconfig
@@ -570,7 +570,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/hp300_defconfig b/arch/m68k/configs/hp300_defconfig
index b00d1c4..515f743 100644
--- a/arch/m68k/configs/hp300_defconfig
+++ b/arch/m68k/configs/hp300_defconfig
@@ -580,7 +580,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/mac_defconfig b/arch/m68k/configs/mac_defconfig
index 85cac37..8e1038c 100644
--- a/arch/m68k/configs/mac_defconfig
+++ b/arch/m68k/configs/mac_defconfig
@@ -602,7 +602,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/multi_defconfig b/arch/m68k/configs/multi_defconfig
index b3a5d1e..62c8aaa 100644
--- a/arch/m68k/configs/multi_defconfig
+++ b/arch/m68k/configs/multi_defconfig
@@ -684,7 +684,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/mvme147_defconfig b/arch/m68k/configs/mvme147_defconfig
index 0ca2260..733973f 100644
--- a/arch/m68k/configs/mvme147_defconfig
+++ b/arch/m68k/configs/mvme147_defconfig
@@ -570,7 +570,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/mvme16x_defconfig b/arch/m68k/configs/mvme16x_defconfig
index 8e3d10d..fee30cc 100644
--- a/arch/m68k/configs/mvme16x_defconfig
+++ b/arch/m68k/configs/mvme16x_defconfig
@@ -570,7 +570,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/q40_defconfig b/arch/m68k/configs/q40_defconfig
index ff7e653..eebf9c9 100644
--- a/arch/m68k/configs/q40_defconfig
+++ b/arch/m68k/configs/q40_defconfig
@@ -593,7 +593,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/sun3_defconfig b/arch/m68k/configs/sun3_defconfig
index 612cf46..dabc543 100644
--- a/arch/m68k/configs/sun3_defconfig
+++ b/arch/m68k/configs/sun3_defconfig
@@ -571,7 +571,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/m68k/configs/sun3x_defconfig b/arch/m68k/configs/sun3x_defconfig
index a6a7bb6..0d9a5c2 100644
--- a/arch/m68k/configs/sun3x_defconfig
+++ b/arch/m68k/configs/sun3x_defconfig
@@ -572,7 +572,6 @@ CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_AEGIS128=m
diff --git a/arch/s390/configs/debug_defconfig b/arch/s390/configs/debug_defconfig
index 941d8cc..259d169 100644
--- a/arch/s390/configs/debug_defconfig
+++ b/arch/s390/configs/debug_defconfig
@@ -668,7 +668,6 @@ CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_LRW=m
diff --git a/arch/s390/configs/performance_defconfig b/arch/s390/configs/performance_defconfig
index eb6f75f..37fd60c 100644
--- a/arch/s390/configs/performance_defconfig
+++ b/arch/s390/configs/performance_defconfig
@@ -610,7 +610,6 @@ CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_MCRYPTD=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_CHACHA20POLY1305=m
CONFIG_CRYPTO_LRW=m
diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index a450ad5..9edfa54 100644
--- a/arch/x86/crypto/Makefile
+++ b/arch/x86/crypto/Makefile
@@ -60,9 +60,6 @@ endif
ifeq ($(avx2_supported),yes)
obj-$(CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64) += camellia-aesni-avx2.o
obj-$(CONFIG_CRYPTO_SERPENT_AVX2_X86_64) += serpent-avx2.o
- obj-$(CONFIG_CRYPTO_SHA1_MB) += sha1-mb/
- obj-$(CONFIG_CRYPTO_SHA256_MB) += sha256-mb/
- obj-$(CONFIG_CRYPTO_SHA512_MB) += sha512-mb/
obj-$(CONFIG_CRYPTO_MORUS1280_AVX2) += morus1280-avx2.o
endif
diff --git a/arch/x86/crypto/sha1-mb/Makefile b/arch/x86/crypto/sha1-mb/Makefile
deleted file mode 100644
index 815ded3..00000000
--- a/arch/x86/crypto/sha1-mb/Makefile
+++ /dev/null
@@ -1,14 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-#
-# Arch-specific CryptoAPI modules.
-#
-
-OBJECT_FILES_NON_STANDARD := y
-
-avx2_supported := $(call as-instr,vpgatherdd %ymm0$(comma)(%eax$(comma)%ymm1\
- $(comma)4)$(comma)%ymm2,yes,no)
-ifeq ($(avx2_supported),yes)
- obj-$(CONFIG_CRYPTO_SHA1_MB) += sha1-mb.o
- sha1-mb-y := sha1_mb.o sha1_mb_mgr_flush_avx2.o \
- sha1_mb_mgr_init_avx2.o sha1_mb_mgr_submit_avx2.o sha1_x8_avx2.o
-endif
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb.c b/arch/x86/crypto/sha1-mb/sha1_mb.c
deleted file mode 100644
index b938056..00000000
--- a/arch/x86/crypto/sha1-mb/sha1_mb.c
+++ /dev/null
@@ -1,1011 +0,0 @@
-/*
- * Multi buffer SHA1 algorithm Glue Code
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Tim Chen <tim.c.chen(a)linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <crypto/internal/hash.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/mm.h>
-#include <linux/cryptohash.h>
-#include <linux/types.h>
-#include <linux/list.h>
-#include <crypto/scatterwalk.h>
-#include <crypto/sha.h>
-#include <crypto/mcryptd.h>
-#include <crypto/crypto_wq.h>
-#include <asm/byteorder.h>
-#include <linux/hardirq.h>
-#include <asm/fpu/api.h>
-#include "sha1_mb_ctx.h"
-
-#define FLUSH_INTERVAL 1000 /* in usec */
-
-static struct mcryptd_alg_state sha1_mb_alg_state;
-
-struct sha1_mb_ctx {
- struct mcryptd_ahash *mcryptd_tfm;
-};
-
-static inline struct mcryptd_hash_request_ctx
- *cast_hash_to_mcryptd_ctx(struct sha1_hash_ctx *hash_ctx)
-{
- struct ahash_request *areq;
-
- areq = container_of((void *) hash_ctx, struct ahash_request, __ctx);
- return container_of(areq, struct mcryptd_hash_request_ctx, areq);
-}
-
-static inline struct ahash_request
- *cast_mcryptd_ctx_to_req(struct mcryptd_hash_request_ctx *ctx)
-{
- return container_of((void *) ctx, struct ahash_request, __ctx);
-}
-
-static void req_ctx_init(struct mcryptd_hash_request_ctx *rctx,
- struct ahash_request *areq)
-{
- rctx->flag = HASH_UPDATE;
-}
-
-static asmlinkage void (*sha1_job_mgr_init)(struct sha1_mb_mgr *state);
-static asmlinkage struct job_sha1* (*sha1_job_mgr_submit)
- (struct sha1_mb_mgr *state, struct job_sha1 *job);
-static asmlinkage struct job_sha1* (*sha1_job_mgr_flush)
- (struct sha1_mb_mgr *state);
-static asmlinkage struct job_sha1* (*sha1_job_mgr_get_comp_job)
- (struct sha1_mb_mgr *state);
-
-static inline uint32_t sha1_pad(uint8_t padblock[SHA1_BLOCK_SIZE * 2],
- uint64_t total_len)
-{
- uint32_t i = total_len & (SHA1_BLOCK_SIZE - 1);
-
- memset(&padblock[i], 0, SHA1_BLOCK_SIZE);
- padblock[i] = 0x80;
-
- i += ((SHA1_BLOCK_SIZE - 1) &
- (0 - (total_len + SHA1_PADLENGTHFIELD_SIZE + 1)))
- + 1 + SHA1_PADLENGTHFIELD_SIZE;
-
-#if SHA1_PADLENGTHFIELD_SIZE == 16
- *((uint64_t *) &padblock[i - 16]) = 0;
-#endif
-
- *((uint64_t *) &padblock[i - 8]) = cpu_to_be64(total_len << 3);
-
- /* Number of extra blocks to hash */
- return i >> SHA1_LOG2_BLOCK_SIZE;
-}
-
-static struct sha1_hash_ctx *sha1_ctx_mgr_resubmit(struct sha1_ctx_mgr *mgr,
- struct sha1_hash_ctx *ctx)
-{
- while (ctx) {
- if (ctx->status & HASH_CTX_STS_COMPLETE) {
- /* Clear PROCESSING bit */
- ctx->status = HASH_CTX_STS_COMPLETE;
- return ctx;
- }
-
- /*
- * If the extra blocks are empty, begin hashing what remains
- * in the user's buffer.
- */
- if (ctx->partial_block_buffer_length == 0 &&
- ctx->incoming_buffer_length) {
-
- const void *buffer = ctx->incoming_buffer;
- uint32_t len = ctx->incoming_buffer_length;
- uint32_t copy_len;
-
- /*
- * Only entire blocks can be hashed.
- * Copy remainder to extra blocks buffer.
- */
- copy_len = len & (SHA1_BLOCK_SIZE-1);
-
- if (copy_len) {
- len -= copy_len;
- memcpy(ctx->partial_block_buffer,
- ((const char *) buffer + len),
- copy_len);
- ctx->partial_block_buffer_length = copy_len;
- }
-
- ctx->incoming_buffer_length = 0;
-
- /* len should be a multiple of the block size now */
- assert((len % SHA1_BLOCK_SIZE) == 0);
-
- /* Set len to the number of blocks to be hashed */
- len >>= SHA1_LOG2_BLOCK_SIZE;
-
- if (len) {
-
- ctx->job.buffer = (uint8_t *) buffer;
- ctx->job.len = len;
- ctx = (struct sha1_hash_ctx *)sha1_job_mgr_submit(&mgr->mgr,
- &ctx->job);
- continue;
- }
- }
-
- /*
- * If the extra blocks are not empty, then we are
- * either on the last block(s) or we need more
- * user input before continuing.
- */
- if (ctx->status & HASH_CTX_STS_LAST) {
-
- uint8_t *buf = ctx->partial_block_buffer;
- uint32_t n_extra_blocks =
- sha1_pad(buf, ctx->total_length);
-
- ctx->status = (HASH_CTX_STS_PROCESSING |
- HASH_CTX_STS_COMPLETE);
- ctx->job.buffer = buf;
- ctx->job.len = (uint32_t) n_extra_blocks;
- ctx = (struct sha1_hash_ctx *)
- sha1_job_mgr_submit(&mgr->mgr, &ctx->job);
- continue;
- }
-
- ctx->status = HASH_CTX_STS_IDLE;
- return ctx;
- }
-
- return NULL;
-}
-
-static struct sha1_hash_ctx
- *sha1_ctx_mgr_get_comp_ctx(struct sha1_ctx_mgr *mgr)
-{
- /*
- * If get_comp_job returns NULL, there are no jobs complete.
- * If get_comp_job returns a job, verify that it is safe to return to
- * the user.
- * If it is not ready, resubmit the job to finish processing.
- * If sha1_ctx_mgr_resubmit returned a job, it is ready to be returned.
- * Otherwise, all jobs currently being managed by the hash_ctx_mgr
- * still need processing.
- */
- struct sha1_hash_ctx *ctx;
-
- ctx = (struct sha1_hash_ctx *) sha1_job_mgr_get_comp_job(&mgr->mgr);
- return sha1_ctx_mgr_resubmit(mgr, ctx);
-}
-
-static void sha1_ctx_mgr_init(struct sha1_ctx_mgr *mgr)
-{
- sha1_job_mgr_init(&mgr->mgr);
-}
-
-static struct sha1_hash_ctx *sha1_ctx_mgr_submit(struct sha1_ctx_mgr *mgr,
- struct sha1_hash_ctx *ctx,
- const void *buffer,
- uint32_t len,
- int flags)
-{
- if (flags & ~(HASH_UPDATE | HASH_LAST)) {
- /* User should not pass anything other than UPDATE or LAST */
- ctx->error = HASH_CTX_ERROR_INVALID_FLAGS;
- return ctx;
- }
-
- if (ctx->status & HASH_CTX_STS_PROCESSING) {
- /* Cannot submit to a currently processing job. */
- ctx->error = HASH_CTX_ERROR_ALREADY_PROCESSING;
- return ctx;
- }
-
- if (ctx->status & HASH_CTX_STS_COMPLETE) {
- /* Cannot update a finished job. */
- ctx->error = HASH_CTX_ERROR_ALREADY_COMPLETED;
- return ctx;
- }
-
- /*
- * If we made it here, there were no errors during this call to
- * submit
- */
- ctx->error = HASH_CTX_ERROR_NONE;
-
- /* Store buffer ptr info from user */
- ctx->incoming_buffer = buffer;
- ctx->incoming_buffer_length = len;
-
- /*
- * Store the user's request flags and mark this ctx as currently
- * being processed.
- */
- ctx->status = (flags & HASH_LAST) ?
- (HASH_CTX_STS_PROCESSING | HASH_CTX_STS_LAST) :
- HASH_CTX_STS_PROCESSING;
-
- /* Advance byte counter */
- ctx->total_length += len;
-
- /*
- * If there is anything currently buffered in the extra blocks,
- * append to it until it contains a whole block.
- * Or if the user's buffer contains less than a whole block,
- * append as much as possible to the extra block.
- */
- if (ctx->partial_block_buffer_length || len < SHA1_BLOCK_SIZE) {
- /*
- * Compute how many bytes to copy from user buffer into
- * extra block
- */
- uint32_t copy_len = SHA1_BLOCK_SIZE -
- ctx->partial_block_buffer_length;
- if (len < copy_len)
- copy_len = len;
-
- if (copy_len) {
- /* Copy and update relevant pointers and counters */
- memcpy(&ctx->partial_block_buffer[ctx->partial_block_buffer_length],
- buffer, copy_len);
-
- ctx->partial_block_buffer_length += copy_len;
- ctx->incoming_buffer = (const void *)
- ((const char *)buffer + copy_len);
- ctx->incoming_buffer_length = len - copy_len;
- }
-
- /*
- * The extra block should never contain more than 1 block
- * here
- */
- assert(ctx->partial_block_buffer_length <= SHA1_BLOCK_SIZE);
-
- /*
- * If the extra block buffer contains exactly 1 block, it can
- * be hashed.
- */
- if (ctx->partial_block_buffer_length >= SHA1_BLOCK_SIZE) {
- ctx->partial_block_buffer_length = 0;
-
- ctx->job.buffer = ctx->partial_block_buffer;
- ctx->job.len = 1;
- ctx = (struct sha1_hash_ctx *)
- sha1_job_mgr_submit(&mgr->mgr, &ctx->job);
- }
- }
-
- return sha1_ctx_mgr_resubmit(mgr, ctx);
-}
-
-static struct sha1_hash_ctx *sha1_ctx_mgr_flush(struct sha1_ctx_mgr *mgr)
-{
- struct sha1_hash_ctx *ctx;
-
- while (1) {
- ctx = (struct sha1_hash_ctx *) sha1_job_mgr_flush(&mgr->mgr);
-
- /* If flush returned 0, there are no more jobs in flight. */
- if (!ctx)
- return NULL;
-
- /*
- * If flush returned a job, resubmit the job to finish
- * processing.
- */
- ctx = sha1_ctx_mgr_resubmit(mgr, ctx);
-
- /*
- * If sha1_ctx_mgr_resubmit returned a job, it is ready to be
- * returned. Otherwise, all jobs currently being managed by the
- * sha1_ctx_mgr still need processing. Loop.
- */
- if (ctx)
- return ctx;
- }
-}
-
-static int sha1_mb_init(struct ahash_request *areq)
-{
- struct sha1_hash_ctx *sctx = ahash_request_ctx(areq);
-
- hash_ctx_init(sctx);
- sctx->job.result_digest[0] = SHA1_H0;
- sctx->job.result_digest[1] = SHA1_H1;
- sctx->job.result_digest[2] = SHA1_H2;
- sctx->job.result_digest[3] = SHA1_H3;
- sctx->job.result_digest[4] = SHA1_H4;
- sctx->total_length = 0;
- sctx->partial_block_buffer_length = 0;
- sctx->status = HASH_CTX_STS_IDLE;
-
- return 0;
-}
-
-static int sha1_mb_set_results(struct mcryptd_hash_request_ctx *rctx)
-{
- int i;
- struct sha1_hash_ctx *sctx = ahash_request_ctx(&rctx->areq);
- __be32 *dst = (__be32 *) rctx->out;
-
- for (i = 0; i < 5; ++i)
- dst[i] = cpu_to_be32(sctx->job.result_digest[i]);
-
- return 0;
-}
-
-static int sha_finish_walk(struct mcryptd_hash_request_ctx **ret_rctx,
- struct mcryptd_alg_cstate *cstate, bool flush)
-{
- int flag = HASH_UPDATE;
- int nbytes, err = 0;
- struct mcryptd_hash_request_ctx *rctx = *ret_rctx;
- struct sha1_hash_ctx *sha_ctx;
-
- /* more work ? */
- while (!(rctx->flag & HASH_DONE)) {
- nbytes = crypto_ahash_walk_done(&rctx->walk, 0);
- if (nbytes < 0) {
- err = nbytes;
- goto out;
- }
- /* check if the walk is done */
- if (crypto_ahash_walk_last(&rctx->walk)) {
- rctx->flag |= HASH_DONE;
- if (rctx->flag & HASH_FINAL)
- flag |= HASH_LAST;
-
- }
- sha_ctx = (struct sha1_hash_ctx *)
- ahash_request_ctx(&rctx->areq);
- kernel_fpu_begin();
- sha_ctx = sha1_ctx_mgr_submit(cstate->mgr, sha_ctx,
- rctx->walk.data, nbytes, flag);
- if (!sha_ctx) {
- if (flush)
- sha_ctx = sha1_ctx_mgr_flush(cstate->mgr);
- }
- kernel_fpu_end();
- if (sha_ctx)
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- else {
- rctx = NULL;
- goto out;
- }
- }
-
- /* copy the results */
- if (rctx->flag & HASH_FINAL)
- sha1_mb_set_results(rctx);
-
-out:
- *ret_rctx = rctx;
- return err;
-}
-
-static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
- struct mcryptd_alg_cstate *cstate,
- int err)
-{
- struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
- struct sha1_hash_ctx *sha_ctx;
- struct mcryptd_hash_request_ctx *req_ctx;
- int ret;
-
- /* remove from work list */
- spin_lock(&cstate->work_lock);
- list_del(&rctx->waiter);
- spin_unlock(&cstate->work_lock);
-
- if (irqs_disabled())
- rctx->complete(&req->base, err);
- else {
- local_bh_disable();
- rctx->complete(&req->base, err);
- local_bh_enable();
- }
-
- /* check to see if there are other jobs that are done */
- sha_ctx = sha1_ctx_mgr_get_comp_ctx(cstate->mgr);
- while (sha_ctx) {
- req_ctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&req_ctx, cstate, false);
- if (req_ctx) {
- spin_lock(&cstate->work_lock);
- list_del(&req_ctx->waiter);
- spin_unlock(&cstate->work_lock);
-
- req = cast_mcryptd_ctx_to_req(req_ctx);
- if (irqs_disabled())
- req_ctx->complete(&req->base, ret);
- else {
- local_bh_disable();
- req_ctx->complete(&req->base, ret);
- local_bh_enable();
- }
- }
- sha_ctx = sha1_ctx_mgr_get_comp_ctx(cstate->mgr);
- }
-
- return 0;
-}
-
-static void sha1_mb_add_list(struct mcryptd_hash_request_ctx *rctx,
- struct mcryptd_alg_cstate *cstate)
-{
- unsigned long next_flush;
- unsigned long delay = usecs_to_jiffies(FLUSH_INTERVAL);
-
- /* initialize tag */
- rctx->tag.arrival = jiffies; /* tag the arrival time */
- rctx->tag.seq_num = cstate->next_seq_num++;
- next_flush = rctx->tag.arrival + delay;
- rctx->tag.expire = next_flush;
-
- spin_lock(&cstate->work_lock);
- list_add_tail(&rctx->waiter, &cstate->work_list);
- spin_unlock(&cstate->work_lock);
-
- mcryptd_arm_flusher(cstate, delay);
-}
-
-static int sha1_mb_update(struct ahash_request *areq)
-{
- struct mcryptd_hash_request_ctx *rctx =
- container_of(areq, struct mcryptd_hash_request_ctx, areq);
- struct mcryptd_alg_cstate *cstate =
- this_cpu_ptr(sha1_mb_alg_state.alg_cstate);
-
- struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
- struct sha1_hash_ctx *sha_ctx;
- int ret = 0, nbytes;
-
-
- /* sanity check */
- if (rctx->tag.cpu != smp_processor_id()) {
- pr_err("mcryptd error: cpu clash\n");
- goto done;
- }
-
- /* need to init context */
- req_ctx_init(rctx, areq);
-
- nbytes = crypto_ahash_walk_first(req, &rctx->walk);
-
- if (nbytes < 0) {
- ret = nbytes;
- goto done;
- }
-
- if (crypto_ahash_walk_last(&rctx->walk))
- rctx->flag |= HASH_DONE;
-
- /* submit */
- sha_ctx = (struct sha1_hash_ctx *) ahash_request_ctx(areq);
- sha1_mb_add_list(rctx, cstate);
- kernel_fpu_begin();
- sha_ctx = sha1_ctx_mgr_submit(cstate->mgr, sha_ctx, rctx->walk.data,
- nbytes, HASH_UPDATE);
- kernel_fpu_end();
-
- /* check if anything is returned */
- if (!sha_ctx)
- return -EINPROGRESS;
-
- if (sha_ctx->error) {
- ret = sha_ctx->error;
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- goto done;
- }
-
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&rctx, cstate, false);
-
- if (!rctx)
- return -EINPROGRESS;
-done:
- sha_complete_job(rctx, cstate, ret);
- return ret;
-}
-
-static int sha1_mb_finup(struct ahash_request *areq)
-{
- struct mcryptd_hash_request_ctx *rctx =
- container_of(areq, struct mcryptd_hash_request_ctx, areq);
- struct mcryptd_alg_cstate *cstate =
- this_cpu_ptr(sha1_mb_alg_state.alg_cstate);
-
- struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
- struct sha1_hash_ctx *sha_ctx;
- int ret = 0, flag = HASH_UPDATE, nbytes;
-
- /* sanity check */
- if (rctx->tag.cpu != smp_processor_id()) {
- pr_err("mcryptd error: cpu clash\n");
- goto done;
- }
-
- /* need to init context */
- req_ctx_init(rctx, areq);
-
- nbytes = crypto_ahash_walk_first(req, &rctx->walk);
-
- if (nbytes < 0) {
- ret = nbytes;
- goto done;
- }
-
- if (crypto_ahash_walk_last(&rctx->walk)) {
- rctx->flag |= HASH_DONE;
- flag = HASH_LAST;
- }
-
- /* submit */
- rctx->flag |= HASH_FINAL;
- sha_ctx = (struct sha1_hash_ctx *) ahash_request_ctx(areq);
- sha1_mb_add_list(rctx, cstate);
-
- kernel_fpu_begin();
- sha_ctx = sha1_ctx_mgr_submit(cstate->mgr, sha_ctx, rctx->walk.data,
- nbytes, flag);
- kernel_fpu_end();
-
- /* check if anything is returned */
- if (!sha_ctx)
- return -EINPROGRESS;
-
- if (sha_ctx->error) {
- ret = sha_ctx->error;
- goto done;
- }
-
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&rctx, cstate, false);
- if (!rctx)
- return -EINPROGRESS;
-done:
- sha_complete_job(rctx, cstate, ret);
- return ret;
-}
-
-static int sha1_mb_final(struct ahash_request *areq)
-{
- struct mcryptd_hash_request_ctx *rctx =
- container_of(areq, struct mcryptd_hash_request_ctx, areq);
- struct mcryptd_alg_cstate *cstate =
- this_cpu_ptr(sha1_mb_alg_state.alg_cstate);
-
- struct sha1_hash_ctx *sha_ctx;
- int ret = 0;
- u8 data;
-
- /* sanity check */
- if (rctx->tag.cpu != smp_processor_id()) {
- pr_err("mcryptd error: cpu clash\n");
- goto done;
- }
-
- /* need to init context */
- req_ctx_init(rctx, areq);
-
- rctx->flag |= HASH_DONE | HASH_FINAL;
-
- sha_ctx = (struct sha1_hash_ctx *) ahash_request_ctx(areq);
- /* flag HASH_FINAL and 0 data size */
- sha1_mb_add_list(rctx, cstate);
- kernel_fpu_begin();
- sha_ctx = sha1_ctx_mgr_submit(cstate->mgr, sha_ctx, &data, 0,
- HASH_LAST);
- kernel_fpu_end();
-
- /* check if anything is returned */
- if (!sha_ctx)
- return -EINPROGRESS;
-
- if (sha_ctx->error) {
- ret = sha_ctx->error;
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- goto done;
- }
-
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&rctx, cstate, false);
- if (!rctx)
- return -EINPROGRESS;
-done:
- sha_complete_job(rctx, cstate, ret);
- return ret;
-}
-
-static int sha1_mb_export(struct ahash_request *areq, void *out)
-{
- struct sha1_hash_ctx *sctx = ahash_request_ctx(areq);
-
- memcpy(out, sctx, sizeof(*sctx));
-
- return 0;
-}
-
-static int sha1_mb_import(struct ahash_request *areq, const void *in)
-{
- struct sha1_hash_ctx *sctx = ahash_request_ctx(areq);
-
- memcpy(sctx, in, sizeof(*sctx));
-
- return 0;
-}
-
-static int sha1_mb_async_init_tfm(struct crypto_tfm *tfm)
-{
- struct mcryptd_ahash *mcryptd_tfm;
- struct sha1_mb_ctx *ctx = crypto_tfm_ctx(tfm);
- struct mcryptd_hash_ctx *mctx;
-
- mcryptd_tfm = mcryptd_alloc_ahash("__intel_sha1-mb",
- CRYPTO_ALG_INTERNAL,
- CRYPTO_ALG_INTERNAL);
- if (IS_ERR(mcryptd_tfm))
- return PTR_ERR(mcryptd_tfm);
- mctx = crypto_ahash_ctx(&mcryptd_tfm->base);
- mctx->alg_state = &sha1_mb_alg_state;
- ctx->mcryptd_tfm = mcryptd_tfm;
- crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
- sizeof(struct ahash_request) +
- crypto_ahash_reqsize(&mcryptd_tfm->base));
-
- return 0;
-}
-
-static void sha1_mb_async_exit_tfm(struct crypto_tfm *tfm)
-{
- struct sha1_mb_ctx *ctx = crypto_tfm_ctx(tfm);
-
- mcryptd_free_ahash(ctx->mcryptd_tfm);
-}
-
-static int sha1_mb_areq_init_tfm(struct crypto_tfm *tfm)
-{
- crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
- sizeof(struct ahash_request) +
- sizeof(struct sha1_hash_ctx));
-
- return 0;
-}
-
-static void sha1_mb_areq_exit_tfm(struct crypto_tfm *tfm)
-{
- struct sha1_mb_ctx *ctx = crypto_tfm_ctx(tfm);
-
- mcryptd_free_ahash(ctx->mcryptd_tfm);
-}
-
-static struct ahash_alg sha1_mb_areq_alg = {
- .init = sha1_mb_init,
- .update = sha1_mb_update,
- .final = sha1_mb_final,
- .finup = sha1_mb_finup,
- .export = sha1_mb_export,
- .import = sha1_mb_import,
- .halg = {
- .digestsize = SHA1_DIGEST_SIZE,
- .statesize = sizeof(struct sha1_hash_ctx),
- .base = {
- .cra_name = "__sha1-mb",
- .cra_driver_name = "__intel_sha1-mb",
- .cra_priority = 100,
- /*
- * use ASYNC flag as some buffers in multi-buffer
- * algo may not have completed before hashing thread
- * sleep
- */
- .cra_flags = CRYPTO_ALG_ASYNC |
- CRYPTO_ALG_INTERNAL,
- .cra_blocksize = SHA1_BLOCK_SIZE,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT
- (sha1_mb_areq_alg.halg.base.cra_list),
- .cra_init = sha1_mb_areq_init_tfm,
- .cra_exit = sha1_mb_areq_exit_tfm,
- .cra_ctxsize = sizeof(struct sha1_hash_ctx),
- }
- }
-};
-
-static int sha1_mb_async_init(struct ahash_request *req)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha1_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_init(mcryptd_req);
-}
-
-static int sha1_mb_async_update(struct ahash_request *req)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
-
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha1_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_update(mcryptd_req);
-}
-
-static int sha1_mb_async_finup(struct ahash_request *req)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
-
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha1_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_finup(mcryptd_req);
-}
-
-static int sha1_mb_async_final(struct ahash_request *req)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
-
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha1_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_final(mcryptd_req);
-}
-
-static int sha1_mb_async_digest(struct ahash_request *req)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha1_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_digest(mcryptd_req);
-}
-
-static int sha1_mb_async_export(struct ahash_request *req, void *out)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha1_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_export(mcryptd_req, out);
-}
-
-static int sha1_mb_async_import(struct ahash_request *req, const void *in)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha1_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
- struct crypto_ahash *child = mcryptd_ahash_child(mcryptd_tfm);
- struct mcryptd_hash_request_ctx *rctx;
- struct ahash_request *areq;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- rctx = ahash_request_ctx(mcryptd_req);
- areq = &rctx->areq;
-
- ahash_request_set_tfm(areq, child);
- ahash_request_set_callback(areq, CRYPTO_TFM_REQ_MAY_SLEEP,
- rctx->complete, req);
-
- return crypto_ahash_import(mcryptd_req, in);
-}
-
-static struct ahash_alg sha1_mb_async_alg = {
- .init = sha1_mb_async_init,
- .update = sha1_mb_async_update,
- .final = sha1_mb_async_final,
- .finup = sha1_mb_async_finup,
- .digest = sha1_mb_async_digest,
- .export = sha1_mb_async_export,
- .import = sha1_mb_async_import,
- .halg = {
- .digestsize = SHA1_DIGEST_SIZE,
- .statesize = sizeof(struct sha1_hash_ctx),
- .base = {
- .cra_name = "sha1",
- .cra_driver_name = "sha1_mb",
- /*
- * Low priority, since with few concurrent hash requests
- * this is extremely slow due to the flush delay. Users
- * whose workloads would benefit from this can request
- * it explicitly by driver name, or can increase its
- * priority at runtime using NETLINK_CRYPTO.
- */
- .cra_priority = 50,
- .cra_flags = CRYPTO_ALG_ASYNC,
- .cra_blocksize = SHA1_BLOCK_SIZE,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT(sha1_mb_async_alg.halg.base.cra_list),
- .cra_init = sha1_mb_async_init_tfm,
- .cra_exit = sha1_mb_async_exit_tfm,
- .cra_ctxsize = sizeof(struct sha1_mb_ctx),
- .cra_alignmask = 0,
- },
- },
-};
-
-static unsigned long sha1_mb_flusher(struct mcryptd_alg_cstate *cstate)
-{
- struct mcryptd_hash_request_ctx *rctx;
- unsigned long cur_time;
- unsigned long next_flush = 0;
- struct sha1_hash_ctx *sha_ctx;
-
-
- cur_time = jiffies;
-
- while (!list_empty(&cstate->work_list)) {
- rctx = list_entry(cstate->work_list.next,
- struct mcryptd_hash_request_ctx, waiter);
- if (time_before(cur_time, rctx->tag.expire))
- break;
- kernel_fpu_begin();
- sha_ctx = (struct sha1_hash_ctx *)
- sha1_ctx_mgr_flush(cstate->mgr);
- kernel_fpu_end();
- if (!sha_ctx) {
- pr_err("sha1_mb error: nothing got flushed for non-empty list\n");
- break;
- }
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- sha_finish_walk(&rctx, cstate, true);
- sha_complete_job(rctx, cstate, 0);
- }
-
- if (!list_empty(&cstate->work_list)) {
- rctx = list_entry(cstate->work_list.next,
- struct mcryptd_hash_request_ctx, waiter);
- /* get the hash context and then flush time */
- next_flush = rctx->tag.expire;
- mcryptd_arm_flusher(cstate, get_delay(next_flush));
- }
- return next_flush;
-}
-
-static int __init sha1_mb_mod_init(void)
-{
-
- int cpu;
- int err;
- struct mcryptd_alg_cstate *cpu_state;
-
- /* check for dependent cpu features */
- if (!boot_cpu_has(X86_FEATURE_AVX2) ||
- !boot_cpu_has(X86_FEATURE_BMI2))
- return -ENODEV;
-
- /* initialize multibuffer structures */
- sha1_mb_alg_state.alg_cstate = alloc_percpu(struct mcryptd_alg_cstate);
-
- sha1_job_mgr_init = sha1_mb_mgr_init_avx2;
- sha1_job_mgr_submit = sha1_mb_mgr_submit_avx2;
- sha1_job_mgr_flush = sha1_mb_mgr_flush_avx2;
- sha1_job_mgr_get_comp_job = sha1_mb_mgr_get_comp_job_avx2;
-
- if (!sha1_mb_alg_state.alg_cstate)
- return -ENOMEM;
- for_each_possible_cpu(cpu) {
- cpu_state = per_cpu_ptr(sha1_mb_alg_state.alg_cstate, cpu);
- cpu_state->next_flush = 0;
- cpu_state->next_seq_num = 0;
- cpu_state->flusher_engaged = false;
- INIT_DELAYED_WORK(&cpu_state->flush, mcryptd_flusher);
- cpu_state->cpu = cpu;
- cpu_state->alg_state = &sha1_mb_alg_state;
- cpu_state->mgr = kzalloc(sizeof(struct sha1_ctx_mgr),
- GFP_KERNEL);
- if (!cpu_state->mgr)
- goto err2;
- sha1_ctx_mgr_init(cpu_state->mgr);
- INIT_LIST_HEAD(&cpu_state->work_list);
- spin_lock_init(&cpu_state->work_lock);
- }
- sha1_mb_alg_state.flusher = &sha1_mb_flusher;
-
- err = crypto_register_ahash(&sha1_mb_areq_alg);
- if (err)
- goto err2;
- err = crypto_register_ahash(&sha1_mb_async_alg);
- if (err)
- goto err1;
-
-
- return 0;
-err1:
- crypto_unregister_ahash(&sha1_mb_areq_alg);
-err2:
- for_each_possible_cpu(cpu) {
- cpu_state = per_cpu_ptr(sha1_mb_alg_state.alg_cstate, cpu);
- kfree(cpu_state->mgr);
- }
- free_percpu(sha1_mb_alg_state.alg_cstate);
- return -ENODEV;
-}
-
-static void __exit sha1_mb_mod_fini(void)
-{
- int cpu;
- struct mcryptd_alg_cstate *cpu_state;
-
- crypto_unregister_ahash(&sha1_mb_async_alg);
- crypto_unregister_ahash(&sha1_mb_areq_alg);
- for_each_possible_cpu(cpu) {
- cpu_state = per_cpu_ptr(sha1_mb_alg_state.alg_cstate, cpu);
- kfree(cpu_state->mgr);
- }
- free_percpu(sha1_mb_alg_state.alg_cstate);
-}
-
-module_init(sha1_mb_mod_init);
-module_exit(sha1_mb_mod_fini);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm, multi buffer accelerated");
-
-MODULE_ALIAS_CRYPTO("sha1");
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb_ctx.h b/arch/x86/crypto/sha1-mb/sha1_mb_ctx.h
deleted file mode 100644
index 9454bd1..00000000
--- a/arch/x86/crypto/sha1-mb/sha1_mb_ctx.h
+++ /dev/null
@@ -1,134 +0,0 @@
-/*
- * Header file for multi buffer SHA context
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Tim Chen <tim.c.chen(a)linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _SHA_MB_CTX_INTERNAL_H
-#define _SHA_MB_CTX_INTERNAL_H
-
-#include "sha1_mb_mgr.h"
-
-#define HASH_UPDATE 0x00
-#define HASH_LAST 0x01
-#define HASH_DONE 0x02
-#define HASH_FINAL 0x04
-
-#define HASH_CTX_STS_IDLE 0x00
-#define HASH_CTX_STS_PROCESSING 0x01
-#define HASH_CTX_STS_LAST 0x02
-#define HASH_CTX_STS_COMPLETE 0x04
-
-enum hash_ctx_error {
- HASH_CTX_ERROR_NONE = 0,
- HASH_CTX_ERROR_INVALID_FLAGS = -1,
- HASH_CTX_ERROR_ALREADY_PROCESSING = -2,
- HASH_CTX_ERROR_ALREADY_COMPLETED = -3,
-
-#ifdef HASH_CTX_DEBUG
- HASH_CTX_ERROR_DEBUG_DIGEST_MISMATCH = -4,
-#endif
-};
-
-
-#define hash_ctx_user_data(ctx) ((ctx)->user_data)
-#define hash_ctx_digest(ctx) ((ctx)->job.result_digest)
-#define hash_ctx_processing(ctx) ((ctx)->status & HASH_CTX_STS_PROCESSING)
-#define hash_ctx_complete(ctx) ((ctx)->status == HASH_CTX_STS_COMPLETE)
-#define hash_ctx_status(ctx) ((ctx)->status)
-#define hash_ctx_error(ctx) ((ctx)->error)
-#define hash_ctx_init(ctx) \
- do { \
- (ctx)->error = HASH_CTX_ERROR_NONE; \
- (ctx)->status = HASH_CTX_STS_COMPLETE; \
- } while (0)
-
-
-/* Hash Constants and Typedefs */
-#define SHA1_DIGEST_LENGTH 5
-#define SHA1_LOG2_BLOCK_SIZE 6
-
-#define SHA1_PADLENGTHFIELD_SIZE 8
-
-#ifdef SHA_MB_DEBUG
-#define assert(expr) \
-do { \
- if (unlikely(!(expr))) { \
- printk(KERN_ERR "Assertion failed! %s,%s,%s,line=%d\n", \
- #expr, __FILE__, __func__, __LINE__); \
- } \
-} while (0)
-#else
-#define assert(expr) do {} while (0)
-#endif
-
-struct sha1_ctx_mgr {
- struct sha1_mb_mgr mgr;
-};
-
-/* typedef struct sha1_ctx_mgr sha1_ctx_mgr; */
-
-struct sha1_hash_ctx {
- /* Must be at struct offset 0 */
- struct job_sha1 job;
- /* status flag */
- int status;
- /* error flag */
- int error;
-
- uint64_t total_length;
- const void *incoming_buffer;
- uint32_t incoming_buffer_length;
- uint8_t partial_block_buffer[SHA1_BLOCK_SIZE * 2];
- uint32_t partial_block_buffer_length;
- void *user_data;
-};
-
-#endif
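The status bits above drive a small per-job state machine: a freshly initialized context is IDLE, a submit moves it to PROCESSING (adding the LAST bit when the caller passes HASH_LAST), and once the padding blocks have been hashed it lands at COMPLETE, after which further submits are rejected. A minimal user-space sketch of those submit-time checks and transitions, with a hypothetical toy_submit helper standing in for the real manager:

#include <stdio.h>

#define HASH_UPDATE              0x00
#define HASH_LAST                0x01

#define HASH_CTX_STS_IDLE        0x00
#define HASH_CTX_STS_PROCESSING  0x01
#define HASH_CTX_STS_LAST        0x02
#define HASH_CTX_STS_COMPLETE    0x04

struct toy_ctx { int status; };

/* Hypothetical stand-in for the manager's submit-time checks and the
 * status transition it performs; not the kernel API itself. */
static int toy_submit(struct toy_ctx *c, int flags)
{
	if (c->status & HASH_CTX_STS_PROCESSING)
		return -2;  /* HASH_CTX_ERROR_ALREADY_PROCESSING */
	if (c->status & HASH_CTX_STS_COMPLETE)
		return -3;  /* HASH_CTX_ERROR_ALREADY_COMPLETED */
	c->status = (flags & HASH_LAST) ?
		(HASH_CTX_STS_PROCESSING | HASH_CTX_STS_LAST) :
		HASH_CTX_STS_PROCESSING;
	return 0;
}

int main(void)
{
	struct toy_ctx c = { HASH_CTX_STS_IDLE };

	printf("update: %d\n", toy_submit(&c, HASH_UPDATE));  /* 0 */
	c.status = HASH_CTX_STS_IDLE;                 /* pretend the job drained */
	printf("last:   %d\n", toy_submit(&c, HASH_LAST));    /* 0 */
	c.status = HASH_CTX_STS_COMPLETE;             /* padding hashed */
	printf("again:  %d\n", toy_submit(&c, HASH_UPDATE));  /* -3 */
	return 0;
}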
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb_mgr.h b/arch/x86/crypto/sha1-mb/sha1_mb_mgr.h
deleted file mode 100644
index 08ad1a9..00000000
--- a/arch/x86/crypto/sha1-mb/sha1_mb_mgr.h
+++ /dev/null
@@ -1,110 +0,0 @@
-/*
- * Header file for multi buffer SHA1 algorithm manager
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * James Guilford <james.guilford@intel.com>
- * Tim Chen <tim.c.chen@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-#ifndef __SHA_MB_MGR_H
-#define __SHA_MB_MGR_H
-
-
-#include <linux/types.h>
-
-#define NUM_SHA1_DIGEST_WORDS 5
-
-enum job_sts { STS_UNKNOWN = 0,
- STS_BEING_PROCESSED = 1,
- STS_COMPLETED = 2,
- STS_INTERNAL_ERROR = 3,
- STS_ERROR = 4
-};
-
-struct job_sha1 {
- u8 *buffer;
- u32 len;
- u32 result_digest[NUM_SHA1_DIGEST_WORDS] __aligned(32);
- enum job_sts status;
- void *user_data;
-};
-
-/* SHA1 out-of-order scheduler */
-
-/* typedef uint32_t sha1_digest_array[5][8]; */
-
-struct sha1_args_x8 {
- uint32_t digest[5][8];
- uint8_t *data_ptr[8];
-};
-
-struct sha1_lane_data {
- struct job_sha1 *job_in_lane;
-};
-
-struct sha1_mb_mgr {
- struct sha1_args_x8 args;
-
- uint32_t lens[8];
-
- /* each nibble holds the index (0...7) of an unused lane */
- uint64_t unused_lanes;
- /* the top nibble is set to 0xF as an end-of-list sentinel */
- struct sha1_lane_data ldata[8];
-};
-
-
-#define SHA1_MB_MGR_NUM_LANES_AVX2 8
-
-void sha1_mb_mgr_init_avx2(struct sha1_mb_mgr *state);
-struct job_sha1 *sha1_mb_mgr_submit_avx2(struct sha1_mb_mgr *state,
- struct job_sha1 *job);
-struct job_sha1 *sha1_mb_mgr_flush_avx2(struct sha1_mb_mgr *state);
-struct job_sha1 *sha1_mb_mgr_get_comp_job_avx2(struct sha1_mb_mgr *state);
-
-#endif
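The unused_lanes field above is a stack of 4-bit lane numbers packed into one 64-bit word: the init code seeds it with 0xF76543210, lanes pop off the low end, and the 0xF nibble is left behind as an end-of-stack sentinel (the flush path tests bit 32+3 of it for "all lanes free", while the submit path compares the whole word against 0xF for "all lanes busy"). A plain-C sketch of the pop/push that the assembly does with shr/shl:

#include <stdint.h>
#include <stdio.h>

/* Pop the next free lane index from the packed nibble stack. */
static unsigned pop_lane(uint64_t *unused_lanes)
{
	unsigned lane = *unused_lanes & 0xF;

	*unused_lanes >>= 4;
	return lane;
}

/* Push a finished lane back, exactly as the flush path does with shl/or. */
static void push_lane(uint64_t *unused_lanes, unsigned lane)
{
	*unused_lanes = (*unused_lanes << 4) | lane;
}

int main(void)
{
	uint64_t unused_lanes = 0xF76543210ULL;  /* init value from the manager */
	unsigned a = pop_lane(&unused_lanes);    /* 0 */
	unsigned b = pop_lane(&unused_lanes);    /* 1 */

	push_lane(&unused_lanes, a);
	printf("popped %u and %u, stack now %#llx\n",
	       a, b, (unsigned long long)unused_lanes);
	return 0;
}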
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_datastruct.S b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_datastruct.S
deleted file mode 100644
index 86688c6..00000000
--- a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_datastruct.S
+++ /dev/null
@@ -1,287 +0,0 @@
-/*
- * Header file for multi buffer SHA1 algorithm data structure
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * James Guilford <james.guilford@intel.com>
- * Tim Chen <tim.c.chen@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-# Macros for defining data structures
-
-# Usage example
-
-#START_FIELDS # JOB_AES
-### name size align
-#FIELD _plaintext, 8, 8 # pointer to plaintext
-#FIELD _ciphertext, 8, 8 # pointer to ciphertext
-#FIELD _IV, 16, 8 # IV
-#FIELD _keys, 8, 8 # pointer to keys
-#FIELD _len, 4, 4 # length in bytes
-#FIELD _status, 4, 4 # status enumeration
-#FIELD _user_data, 8, 8 # pointer to user data
-#UNION _union, size1, align1, \
-# size2, align2, \
-# size3, align3, \
-# ...
-#END_FIELDS
-#%assign _JOB_AES_size _FIELD_OFFSET
-#%assign _JOB_AES_align _STRUCT_ALIGN
-
-#########################################################################
-
-# Alternate "struc-like" syntax:
-# STRUCT job_aes2
-# RES_Q .plaintext, 1
-# RES_Q .ciphertext, 1
-# RES_DQ .IV, 1
-# RES_B .nested, _JOB_AES_SIZE, _JOB_AES_ALIGN
-# RES_U .union, size1, align1, \
-# size2, align2, \
-# ...
-# ENDSTRUCT
-# # Following only needed if nesting
-# %assign job_aes2_size _FIELD_OFFSET
-# %assign job_aes2_align _STRUCT_ALIGN
-#
-# RES_* macros take a name, a count and an optional alignment.
-# The count is in terms of the base size of the macro, and the
-# default alignment is the base size.
-# The macros are:
-# Macro Base size
-# RES_B 1
-# RES_W 2
-# RES_D 4
-# RES_Q 8
-# RES_DQ 16
-# RES_Y 32
-# RES_Z 64
-#
-# RES_U defines a union. Its arguments are a name and two or more
-# pairs of "size, alignment"
-#
-# The two assigns are only needed if this structure is being nested
-# within another. Even if the assigns are not done, one can still use
-# STRUCT_NAME_size as the size of the structure.
-#
-# Note that for nesting, you still need to assign to STRUCT_NAME_size.
-#
-# The differences between this and using "struc" directly are that each
-# type is implicitly aligned to its natural length (although this can be
-# over-ridden with an explicit third parameter), and that the structure
-# is padded at the end to its overall alignment.
-#
-
-#########################################################################
-
-#ifndef _SHA1_MB_MGR_DATASTRUCT_ASM_
-#define _SHA1_MB_MGR_DATASTRUCT_ASM_
-
-## START_FIELDS
-.macro START_FIELDS
- _FIELD_OFFSET = 0
- _STRUCT_ALIGN = 0
-.endm
-
-## FIELD name size align
-.macro FIELD name size align
- _FIELD_OFFSET = (_FIELD_OFFSET + (\align) - 1) & (~ ((\align)-1))
- \name = _FIELD_OFFSET
- _FIELD_OFFSET = _FIELD_OFFSET + (\size)
-.if (\align > _STRUCT_ALIGN)
- _STRUCT_ALIGN = \align
-.endif
-.endm
-
-## END_FIELDS
-.macro END_FIELDS
- _FIELD_OFFSET = (_FIELD_OFFSET + _STRUCT_ALIGN-1) & (~ (_STRUCT_ALIGN-1))
-.endm
-
-########################################################################
-
-.macro STRUCT p1
-START_FIELDS
-.struc \p1
-.endm
-
-.macro ENDSTRUCT
- tmp = _FIELD_OFFSET
- END_FIELDS
- tmp = (_FIELD_OFFSET - tmp)
-.if (tmp > 0)
- .lcomm tmp
-.endif
-.endstruc
-.endm
-
-## RES_int name size align
-.macro RES_int p1 p2 p3
- name = \p1
- size = \p2
- align = \p3
-
- _FIELD_OFFSET = (_FIELD_OFFSET + (align) - 1) & (~ ((align)-1))
-.align align
-.lcomm name size
- _FIELD_OFFSET = _FIELD_OFFSET + (size)
-.if (align > _STRUCT_ALIGN)
- _STRUCT_ALIGN = align
-.endif
-.endm
-
-
-
-# macro RES_B name, size [, align]
-.macro RES_B _name, _size, _align=1
-RES_int _name _size _align
-.endm
-
-# macro RES_W name, size [, align]
-.macro RES_W _name, _size, _align=2
-RES_int _name 2*(_size) _align
-.endm
-
-# macro RES_D name, size [, align]
-.macro RES_D _name, _size, _align=4
-RES_int _name 4*(_size) _align
-.endm
-
-# macro RES_Q name, size [, align]
-.macro RES_Q _name, _size, _align=8
-RES_int _name 8*(_size) _align
-.endm
-
-# macro RES_DQ name, size [, align]
-.macro RES_DQ _name, _size, _align=16
-RES_int _name 16*(_size) _align
-.endm
-
-# macro RES_Y name, size [, align]
-.macro RES_Y _name, _size, _align=32
-RES_int _name 32*(_size) _align
-.endm
-
-# macro RES_Z name, size [, align]
-.macro RES_Z _name, _size, _align=64
-RES_int _name 64*(_size) _align
-.endm
-
-
-#endif
-
-########################################################################
-#### Define constants
-########################################################################
-
-########################################################################
-#### Define SHA1 Out Of Order Data Structures
-########################################################################
-
-START_FIELDS # LANE_DATA
-### name size align
-FIELD _job_in_lane, 8, 8 # pointer to job object
-END_FIELDS
-
-_LANE_DATA_size = _FIELD_OFFSET
-_LANE_DATA_align = _STRUCT_ALIGN
-
-########################################################################
-
-START_FIELDS # SHA1_ARGS_X8
-### name size align
-FIELD _digest, 4*5*8, 16 # transposed digest
-FIELD _data_ptr, 8*8, 8 # array of pointers to data
-END_FIELDS
-
-_SHA1_ARGS_X4_size = _FIELD_OFFSET
-_SHA1_ARGS_X4_align = _STRUCT_ALIGN
-_SHA1_ARGS_X8_size = _FIELD_OFFSET
-_SHA1_ARGS_X8_align = _STRUCT_ALIGN
-
-########################################################################
-
-START_FIELDS # MB_MGR
-### name size align
-FIELD _args, _SHA1_ARGS_X4_size, _SHA1_ARGS_X4_align
-FIELD _lens, 4*8, 8
-FIELD _unused_lanes, 8, 8
-FIELD _ldata, _LANE_DATA_size*8, _LANE_DATA_align
-END_FIELDS
-
-_MB_MGR_size = _FIELD_OFFSET
-_MB_MGR_align = _STRUCT_ALIGN
-
-_args_digest = _args + _digest
-_args_data_ptr = _args + _data_ptr
-
-
-########################################################################
-#### Define constants
-########################################################################
-
-#define STS_UNKNOWN 0
-#define STS_BEING_PROCESSED 1
-#define STS_COMPLETED 2
-
-########################################################################
-#### Define JOB_SHA1 structure
-########################################################################
-
-START_FIELDS # JOB_SHA1
-
-### name size align
-FIELD _buffer, 8, 8 # pointer to buffer
-FIELD _len, 4, 4 # length in bytes
-FIELD _result_digest, 5*4, 32 # Digest (output)
-FIELD _status, 4, 4
-FIELD _user_data, 8, 8
-END_FIELDS
-
-_JOB_SHA1_size = _FIELD_OFFSET
-_JOB_SHA1_align = _STRUCT_ALIGN
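The FIELD/END_FIELDS arithmetic above is the usual align-then-advance layout computation: round the running offset up to each field's alignment, record that as the field's offset, advance by the field's size, and remember the largest alignment seen so the total size can be padded at the end. The same calculation in C, over a hypothetical field table mirroring the JOB_SHA1 layout just defined:

#include <stddef.h>
#include <stdio.h>

/* Round x up to the next multiple of a (a power of two), matching
 * (_FIELD_OFFSET + align - 1) & ~(align - 1) in the macros. */
static size_t align_up(size_t x, size_t a)
{
	return (x + a - 1) & ~(a - 1);
}

int main(void)
{
	/* Hypothetical {size, align} table for _buffer, _len,
	 * _result_digest, _status and _user_data. */
	struct { size_t size, align; } fields[] = {
		{ 8, 8 }, { 4, 4 }, { 5 * 4, 32 }, { 4, 4 }, { 8, 8 },
	};
	size_t offset = 0, struct_align = 0;
	size_t i;

	for (i = 0; i < sizeof(fields) / sizeof(fields[0]); i++) {
		offset = align_up(offset, fields[i].align);  /* FIELD */
		printf("field %zu at offset %zu\n", i, offset);
		offset += fields[i].size;
		if (fields[i].align > struct_align)
			struct_align = fields[i].align;
	}
	/* END_FIELDS: pad the total size to the struct alignment. */
	printf("size %zu, align %zu\n",
	       align_up(offset, struct_align), struct_align);
	return 0;
}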
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S
deleted file mode 100644
index 7cfba73..00000000
--- a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S
+++ /dev/null
@@ -1,304 +0,0 @@
-/*
- * Flush routine for SHA1 multibuffer
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * James Guilford <james.guilford@intel.com>
- * Tim Chen <tim.c.chen@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-#include <linux/linkage.h>
-#include <asm/frame.h>
-#include "sha1_mb_mgr_datastruct.S"
-
-
-.extern sha1_x8_avx2
-
-# LINUX register definitions
-#define arg1 %rdi
-#define arg2 %rsi
-
-# Common definitions
-#define state arg1
-#define job arg2
-#define len2 arg2
-
-# idx must be a register not clobbered by sha1_x8_avx2
-#define idx %r8
-#define DWORD_idx %r8d
-
-#define unused_lanes %rbx
-#define lane_data %rbx
-#define tmp2 %rbx
-#define tmp2_w %ebx
-
-#define job_rax %rax
-#define tmp1 %rax
-#define size_offset %rax
-#define tmp %rax
-#define start_offset %rax
-
-#define tmp3 arg1
-
-#define extra_blocks arg2
-#define p arg2
-
-.macro LABEL prefix n
-\prefix\n\():
-.endm
-
-.macro JNE_SKIP i
-jne skip_\i
-.endm
-
-.altmacro
-.macro SET_OFFSET _offset
-offset = \_offset
-.endm
-.noaltmacro
-
-# JOB* sha1_mb_mgr_flush_avx2(MB_MGR *state)
-# arg 1 : rdi : state
-ENTRY(sha1_mb_mgr_flush_avx2)
- FRAME_BEGIN
- push %rbx
-
- # If bit (32+3) is set, then all lanes are empty
- mov _unused_lanes(state), unused_lanes
- bt $32+3, unused_lanes
- jc return_null
-
- # find a lane with a non-null job
- xor idx, idx
- offset = (_ldata + 1 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne one(%rip), idx
- offset = (_ldata + 2 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne two(%rip), idx
- offset = (_ldata + 3 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne three(%rip), idx
- offset = (_ldata + 4 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne four(%rip), idx
- offset = (_ldata + 5 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne five(%rip), idx
- offset = (_ldata + 6 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne six(%rip), idx
- offset = (_ldata + 7 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne seven(%rip), idx
-
- # copy idx to empty lanes
-copy_lane_data:
- offset = (_args + _data_ptr)
- mov offset(state,idx,8), tmp
-
- I = 0
-.rep 8
- offset = (_ldata + I * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
-.altmacro
- JNE_SKIP %I
- offset = (_args + _data_ptr + 8*I)
- mov tmp, offset(state)
- offset = (_lens + 4*I)
- movl $0xFFFFFFFF, offset(state)
-LABEL skip_ %I
- I = (I+1)
-.noaltmacro
-.endr
-
- # Find min length
- vmovdqu _lens+0*16(state), %xmm0
- vmovdqu _lens+1*16(state), %xmm1
-
- vpminud %xmm1, %xmm0, %xmm2 # xmm2 has {D,C,B,A}
- vpalignr $8, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,D,C}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has {x,x,E,F}
- vpalignr $4, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,x,E}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has min value in low dword
-
- vmovd %xmm2, DWORD_idx
- mov idx, len2
- and $0xF, idx
- shr $4, len2
- jz len_is_0
-
- vpand clear_low_nibble(%rip), %xmm2, %xmm2
- vpshufd $0, %xmm2, %xmm2
-
- vpsubd %xmm2, %xmm0, %xmm0
- vpsubd %xmm2, %xmm1, %xmm1
-
- vmovdqu %xmm0, _lens+0*16(state)
- vmovdqu %xmm1, _lens+1*16(state)
-
- # "state" and "args" are the same address, arg1
- # len is arg2
- call sha1_x8_avx2
- # state and idx are intact
-
-
-len_is_0:
- # process completed job "idx"
- imul $_LANE_DATA_size, idx, lane_data
- lea _ldata(state, lane_data), lane_data
-
- mov _job_in_lane(lane_data), job_rax
- movq $0, _job_in_lane(lane_data)
- movl $STS_COMPLETED, _status(job_rax)
- mov _unused_lanes(state), unused_lanes
- shl $4, unused_lanes
- or idx, unused_lanes
- mov unused_lanes, _unused_lanes(state)
-
- movl $0xFFFFFFFF, _lens(state, idx, 4)
-
- vmovd _args_digest(state , idx, 4) , %xmm0
- vpinsrd $1, _args_digest+1*32(state, idx, 4), %xmm0, %xmm0
- vpinsrd $2, _args_digest+2*32(state, idx, 4), %xmm0, %xmm0
- vpinsrd $3, _args_digest+3*32(state, idx, 4), %xmm0, %xmm0
- movl _args_digest+4*32(state, idx, 4), tmp2_w
-
- vmovdqu %xmm0, _result_digest(job_rax)
- offset = (_result_digest + 1*16)
- mov tmp2_w, offset(job_rax)
-
-return:
- pop %rbx
- FRAME_END
- ret
-
-return_null:
- xor job_rax, job_rax
- jmp return
-ENDPROC(sha1_mb_mgr_flush_avx2)
-
-
-#################################################################
-
-.align 16
-ENTRY(sha1_mb_mgr_get_comp_job_avx2)
- push %rbx
-
- ## if bit 32+3 is set, then all lanes are empty
- mov _unused_lanes(state), unused_lanes
- bt $(32+3), unused_lanes
- jc .return_null
-
- # Find min length
- vmovdqu _lens(state), %xmm0
- vmovdqu _lens+1*16(state), %xmm1
-
- vpminud %xmm1, %xmm0, %xmm2 # xmm2 has {D,C,B,A}
- vpalignr $8, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,D,C}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has {x,x,E,F}
- vpalignr $4, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,x,E}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has min value in low dword
-
- vmovd %xmm2, DWORD_idx
- test $~0xF, idx
- jnz .return_null
-
- # process completed job "idx"
- imul $_LANE_DATA_size, idx, lane_data
- lea _ldata(state, lane_data), lane_data
-
- mov _job_in_lane(lane_data), job_rax
- movq $0, _job_in_lane(lane_data)
- movl $STS_COMPLETED, _status(job_rax)
- mov _unused_lanes(state), unused_lanes
- shl $4, unused_lanes
- or idx, unused_lanes
- mov unused_lanes, _unused_lanes(state)
-
- movl $0xFFFFFFFF, _lens(state, idx, 4)
-
- vmovd _args_digest(state, idx, 4), %xmm0
- vpinsrd $1, _args_digest+1*32(state, idx, 4), %xmm0, %xmm0
- vpinsrd $2, _args_digest+2*32(state, idx, 4), %xmm0, %xmm0
- vpinsrd $3, _args_digest+3*32(state, idx, 4), %xmm0, %xmm0
- movl _args_digest+4*32(state, idx, 4), tmp2_w
-
- vmovdqu %xmm0, _result_digest(job_rax)
- movl tmp2_w, _result_digest+1*16(job_rax)
-
- pop %rbx
-
- ret
-
-.return_null:
- xor job_rax, job_rax
- pop %rbx
- ret
-ENDPROC(sha1_mb_mgr_get_comp_job_avx2)
-
-.section .rodata.cst16.clear_low_nibble, "aM", @progbits, 16
-.align 16
-clear_low_nibble:
-.octa 0x000000000000000000000000FFFFFFF0
-
-.section .rodata.cst8, "aM", @progbits, 8
-.align 8
-one:
-.quad 1
-two:
-.quad 2
-three:
-.quad 3
-four:
-.quad 4
-five:
-.quad 5
-six:
-.quad 6
-seven:
-.quad 7
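The vpminud/vpalignr ladder in both routines above leans on the lens[] encoding chosen at submit time: each 32-bit entry is (outstanding blocks << 4) | lane, and idle lanes are parked at 0xFFFFFFFF, so a single unsigned minimum over the eight entries picks the job that will finish first and carries its lane number along in the low nibble. A scalar sketch of that selection and of the clear_low_nibble subtract, with made-up example values:

#include <stdint.h>
#include <stdio.h>

#define EMPTY_LANE 0xFFFFFFFFu  /* sentinel stored for idle lanes */

int main(void)
{
	/* Each entry is (blocks << 4) | lane, as the submit path stores it. */
	uint32_t lens[8] = {
		(3u << 4) | 0, EMPTY_LANE, (1u << 4) | 2, (7u << 4) | 3,
		EMPTY_LANE, (2u << 4) | 5, EMPTY_LANE, EMPTY_LANE,
	};
	uint32_t min = lens[0];
	int i;

	for (i = 1; i < 8; i++)  /* scalar stand-in for the vpminud ladder */
		if (lens[i] < min)
			min = lens[i];

	/* Split the winner, as the asm does with and $0xF / shr $4. */
	printf("lane %u completes after %u block(s)\n", min & 0xF, min >> 4);

	/* clear_low_nibble + vpsubd: charge the minimum to every lane. */
	for (i = 0; i < 8; i++)
		lens[i] -= min & ~0xFu;
	printf("lane 3 now has %u block(s) left\n", lens[3] >> 4);
	return 0;
}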
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_init_avx2.c b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_init_avx2.c
deleted file mode 100644
index d2add0d..00000000
--- a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_init_avx2.c
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Initialization code for multi buffer SHA1 algorithm for AVX2
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Tim Chen <tim.c.chen@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include "sha1_mb_mgr.h"
-
-void sha1_mb_mgr_init_avx2(struct sha1_mb_mgr *state)
-{
- unsigned int j;
- state->unused_lanes = 0xF76543210ULL;
- for (j = 0; j < 8; j++) {
- state->lens[j] = 0xFFFFFFFF;
- state->ldata[j].job_in_lane = NULL;
- }
-}
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S
deleted file mode 100644
index 7a93b1c..00000000
--- a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S
+++ /dev/null
@@ -1,209 +0,0 @@
-/*
- * Buffer submit code for multi buffer SHA1 algorithm
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * James Guilford <james.guilford@intel.com>
- * Tim Chen <tim.c.chen@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <linux/linkage.h>
-#include <asm/frame.h>
-#include "sha1_mb_mgr_datastruct.S"
-
-
-.extern sha1_x8_avx2
-
-# LINUX register definitions
-arg1 = %rdi
-arg2 = %rsi
-size_offset = %rcx
-tmp2 = %rcx
-extra_blocks = %rdx
-
-# Common definitions
-#define state arg1
-#define job %rsi
-#define len2 arg2
-#define p2 arg2
-
-# idx must be a register not clobbered by sha1_x8_avx2
-idx = %r8
-DWORD_idx = %r8d
-last_len = %r8
-
-p = %r11
-start_offset = %r11
-
-unused_lanes = %rbx
-BYTE_unused_lanes = %bl
-
-job_rax = %rax
-len = %rax
-DWORD_len = %eax
-
-lane = %r12
-tmp3 = %r12
-
-tmp = %r9
-DWORD_tmp = %r9d
-
-lane_data = %r10
-
-# JOB* sha1_mb_mgr_submit_avx2(MB_MGR *state, job_sha1 *job)
-# arg 1 : rdi : state
-# arg 2 : rsi : job
-ENTRY(sha1_mb_mgr_submit_avx2)
- FRAME_BEGIN
- push %rbx
- push %r12
-
- mov _unused_lanes(state), unused_lanes
- mov unused_lanes, lane
- and $0xF, lane
- shr $4, unused_lanes
- imul $_LANE_DATA_size, lane, lane_data
- movl $STS_BEING_PROCESSED, _status(job)
- lea _ldata(state, lane_data), lane_data
- mov unused_lanes, _unused_lanes(state)
- movl _len(job), DWORD_len
-
- mov job, _job_in_lane(lane_data)
- shl $4, len
- or lane, len
-
- movl DWORD_len, _lens(state , lane, 4)
-
- # Load digest words from result_digest
- vmovdqu _result_digest(job), %xmm0
- mov _result_digest+1*16(job), DWORD_tmp
- vmovd %xmm0, _args_digest(state, lane, 4)
- vpextrd $1, %xmm0, _args_digest+1*32(state , lane, 4)
- vpextrd $2, %xmm0, _args_digest+2*32(state , lane, 4)
- vpextrd $3, %xmm0, _args_digest+3*32(state , lane, 4)
- movl DWORD_tmp, _args_digest+4*32(state , lane, 4)
-
- mov _buffer(job), p
- mov p, _args_data_ptr(state, lane, 8)
-
- cmp $0xF, unused_lanes
- jne return_null
-
-start_loop:
- # Find min length
- vmovdqa _lens(state), %xmm0
- vmovdqa _lens+1*16(state), %xmm1
-
- vpminud %xmm1, %xmm0, %xmm2 # xmm2 has {D,C,B,A}
- vpalignr $8, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,D,C}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has {x,x,E,F}
- vpalignr $4, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,x,E}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has min value in low dword
-
- vmovd %xmm2, DWORD_idx
- mov idx, len2
- and $0xF, idx
- shr $4, len2
- jz len_is_0
-
- vpand clear_low_nibble(%rip), %xmm2, %xmm2
- vpshufd $0, %xmm2, %xmm2
-
- vpsubd %xmm2, %xmm0, %xmm0
- vpsubd %xmm2, %xmm1, %xmm1
-
- vmovdqa %xmm0, _lens + 0*16(state)
- vmovdqa %xmm1, _lens + 1*16(state)
-
-
- # "state" and "args" are the same address, arg1
- # len is arg2
- call sha1_x8_avx2
-
- # state and idx are intact
-
-len_is_0:
- # process completed job "idx"
- imul $_LANE_DATA_size, idx, lane_data
- lea _ldata(state, lane_data), lane_data
-
- mov _job_in_lane(lane_data), job_rax
- mov _unused_lanes(state), unused_lanes
- movq $0, _job_in_lane(lane_data)
- movl $STS_COMPLETED, _status(job_rax)
- shl $4, unused_lanes
- or idx, unused_lanes
- mov unused_lanes, _unused_lanes(state)
-
- movl $0xFFFFFFFF, _lens(state, idx, 4)
-
- vmovd _args_digest(state, idx, 4), %xmm0
- vpinsrd $1, _args_digest+1*32(state , idx, 4), %xmm0, %xmm0
- vpinsrd $2, _args_digest+2*32(state , idx, 4), %xmm0, %xmm0
- vpinsrd $3, _args_digest+3*32(state , idx, 4), %xmm0, %xmm0
- movl _args_digest+4*32(state, idx, 4), DWORD_tmp
-
- vmovdqu %xmm0, _result_digest(job_rax)
- movl DWORD_tmp, _result_digest+1*16(job_rax)
-
-return:
- pop %r12
- pop %rbx
- FRAME_END
- ret
-
-return_null:
- xor job_rax, job_rax
- jmp return
-
-ENDPROC(sha1_mb_mgr_submit_avx2)
-
-.section .rodata.cst16.clear_low_nibble, "aM", @progbits, 16
-.align 16
-clear_low_nibble:
- .octa 0x000000000000000000000000FFFFFFF0
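The strided vmovd/vpextrd stores above follow the transposed digest layout from sha1_mb_mgr_datastruct.S: word i of all eight lanes is kept contiguous (one 32-byte row per digest word, hence the i*32 offsets), so the hash core can update a whole row with one vpaddd while any single job's digest is a column strided 32 bytes apart. A C sketch of that scatter and the matching gather, using plain stores in place of the SIMD moves (hypothetical helper names):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NUM_LANES    8
#define DIGEST_WORDS 5

/* Transposed layout: digest[word][lane], one 32-byte row per word. */
static uint32_t args_digest[DIGEST_WORDS][NUM_LANES];

/* Mirror of the submit path's scatter into _args_digest. */
static void scatter_digest(const uint32_t d[DIGEST_WORDS], unsigned lane)
{
	int i;

	for (i = 0; i < DIGEST_WORDS; i++)
		args_digest[i][lane] = d[i];  /* byte offset: i*32 + lane*4 */
}

/* Mirror of the flush path's gather back into a job's result_digest. */
static void gather_digest(uint32_t d[DIGEST_WORDS], unsigned lane)
{
	int i;

	for (i = 0; i < DIGEST_WORDS; i++)
		d[i] = args_digest[i][lane];
}

int main(void)
{
	uint32_t in[DIGEST_WORDS] = { 0x67452301, 0xEFCDAB89, 0x98BADCFE,
				      0x10325476, 0xC3D2E1F0 };  /* SHA1 IV */
	uint32_t out[DIGEST_WORDS];

	scatter_digest(in, 3);
	gather_digest(out, 3);
	printf("round-trip ok: %d\n", memcmp(in, out, sizeof(in)) == 0);
	return 0;
}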
diff --git a/arch/x86/crypto/sha1-mb/sha1_x8_avx2.S b/arch/x86/crypto/sha1-mb/sha1_x8_avx2.S
deleted file mode 100644
index 20f77aa..00000000
--- a/arch/x86/crypto/sha1-mb/sha1_x8_avx2.S
+++ /dev/null
@@ -1,492 +0,0 @@
-/*
- * Multi-buffer SHA1 algorithm hash compute routine
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * James Guilford <james.guilford@intel.com>
- * Tim Chen <tim.c.chen@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2014 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <linux/linkage.h>
-#include "sha1_mb_mgr_datastruct.S"
-
-## code to compute eight-lane (oct) SHA1 using 256-bit AVX2 registers
-## outer calling routine takes care of save and restore of XMM registers
-
-## Function clobbers: rax, rcx, rdx, rbx, rsi, rdi, r9-r15; ymm0-15
-##
-## Linux clobbers: rax rbx rcx rdx rsi r9 r10 r11 r12 r13 r14 r15
-## Linux preserves: rdi rbp r8
-##
-## clobbers ymm0-15
-
-
-# TRANSPOSE8 r0, r1, r2, r3, r4, r5, r6, r7, t0, t1
-# "transpose" data in {r0...r7} using temps {t0...t1}
-# Input looks like: {r0 r1 r2 r3 r4 r5 r6 r7}
-# r0 = {a7 a6 a5 a4 a3 a2 a1 a0}
-# r1 = {b7 b6 b5 b4 b3 b2 b1 b0}
-# r2 = {c7 c6 c5 c4 c3 c2 c1 c0}
-# r3 = {d7 d6 d5 d4 d3 d2 d1 d0}
-# r4 = {e7 e6 e5 e4 e3 e2 e1 e0}
-# r5 = {f7 f6 f5 f4 f3 f2 f1 f0}
-# r6 = {g7 g6 g5 g4 g3 g2 g1 g0}
-# r7 = {h7 h6 h5 h4 h3 h2 h1 h0}
-#
-# Output looks like: {r0 r1 r2 r3 r4 r5 r6 r7}
-# r0 = {h0 g0 f0 e0 d0 c0 b0 a0}
-# r1 = {h1 g1 f1 e1 d1 c1 b1 a1}
-# r2 = {h2 g2 f2 e2 d2 c2 b2 a2}
-# r3 = {h3 g3 f3 e3 d3 c3 b3 a3}
-# r4 = {h4 g4 f4 e4 d4 c4 b4 a4}
-# r5 = {h5 g5 f5 e5 d5 c5 b5 a5}
-# r6 = {h6 g6 f6 e6 d6 c6 b6 a6}
-# r7 = {h7 g7 f7 e7 d7 c7 b7 a7}
-#
-
-.macro TRANSPOSE8 r0 r1 r2 r3 r4 r5 r6 r7 t0 t1
- # process top half (r0..r3) {a...d}
- vshufps $0x44, \r1, \r0, \t0 # t0 = {b5 b4 a5 a4 b1 b0 a1 a0}
- vshufps $0xEE, \r1, \r0, \r0 # r0 = {b7 b6 a7 a6 b3 b2 a3 a2}
- vshufps $0x44, \r3, \r2, \t1 # t1 = {d5 d4 c5 c4 d1 d0 c1 c0}
- vshufps $0xEE, \r3, \r2, \r2 # r2 = {d7 d6 c7 c6 d3 d2 c3 c2}
- vshufps $0xDD, \t1, \t0, \r3 # r3 = {d5 c5 b5 a5 d1 c1 b1 a1}
- vshufps $0x88, \r2, \r0, \r1 # r1 = {d6 c6 b6 a6 d2 c2 b2 a2}
- vshufps $0xDD, \r2, \r0, \r0 # r0 = {d7 c7 b7 a7 d3 c3 b3 a3}
- vshufps $0x88, \t1, \t0, \t0 # t0 = {d4 c4 b4 a4 d0 c0 b0 a0}
-
- # use r2 in place of t0
- # process bottom half (r4..r7) {e...h}
- vshufps $0x44, \r5, \r4, \r2 # r2 = {f5 f4 e5 e4 f1 f0 e1 e0}
- vshufps $0xEE, \r5, \r4, \r4 # r4 = {f7 f6 e7 e6 f3 f2 e3 e2}
- vshufps $0x44, \r7, \r6, \t1 # t1 = {h5 h4 g5 g4 h1 h0 g1 g0}
- vshufps $0xEE, \r7, \r6, \r6 # r6 = {h7 h6 g7 g6 h3 h2 g3 g2}
- vshufps $0xDD, \t1, \r2, \r7 # r7 = {h5 g5 f5 e5 h1 g1 f1 e1}
- vshufps $0x88, \r6, \r4, \r5 # r5 = {h6 g6 f6 e6 h2 g2 f2 e2}
- vshufps $0xDD, \r6, \r4, \r4 # r4 = {h7 g7 f7 e7 h3 g3 f3 e3}
- vshufps $0x88, \t1, \r2, \t1 # t1 = {h4 g4 f4 e4 h0 g0 f0 e0}
-
- vperm2f128 $0x13, \r1, \r5, \r6 # h6...a6
- vperm2f128 $0x02, \r1, \r5, \r2 # h2...a2
- vperm2f128 $0x13, \r3, \r7, \r5 # h5...a5
- vperm2f128 $0x02, \r3, \r7, \r1 # h1...a1
- vperm2f128 $0x13, \r0, \r4, \r7 # h7...a7
- vperm2f128 $0x02, \r0, \r4, \r3 # h3...a3
- vperm2f128 $0x13, \t0, \t1, \r4 # h4...a4
- vperm2f128 $0x02, \t0, \t1, \r0 # h0...a0
-
-.endm
-##
-## Magic functions defined in FIPS 180-1
-##
-# macro MAGIC_F0 F,B,C,D,T ## F = (D ^ (B & (C ^ D)))
-.macro MAGIC_F0 regF regB regC regD regT
- vpxor \regD, \regC, \regF
- vpand \regB, \regF, \regF
- vpxor \regD, \regF, \regF
-.endm
-
-# macro MAGIC_F1 F,B,C,D,T ## F = (B ^ C ^ D)
-.macro MAGIC_F1 regF regB regC regD regT
- vpxor \regC, \regD, \regF
- vpxor \regB, \regF, \regF
-.endm
-
-# macro MAGIC_F2 F,B,C,D,T ## F = ((B & C) | (B & D) | (C & D))
-.macro MAGIC_F2 regF regB regC regD regT
- vpor \regC, \regB, \regF
- vpand \regC, \regB, \regT
- vpand \regD, \regF, \regF
- vpor \regT, \regF, \regF
-.endm
-
-# macro MAGIC_F3 F,B,C,D,T ## F = (B ^ C ^ D)
-.macro MAGIC_F3 regF regB regC regD regT
- MAGIC_F1 \regF,\regB,\regC,\regD,\regT
-.endm
-
-# PROLD reg, imm, tmp
-.macro PROLD reg imm tmp
- vpsrld $(32-\imm), \reg, \tmp
- vpslld $\imm, \reg, \reg
- vpor \tmp, \reg, \reg
-.endm
-
-.macro PROLD_nd reg imm tmp src
- vpsrld $(32-\imm), \src, \tmp
- vpslld $\imm, \src, \reg
- vpor \tmp, \reg, \reg
-.endm
-
-.macro SHA1_STEP_00_15 regA regB regC regD regE regT regF memW immCNT MAGIC
- vpaddd \immCNT, \regE, \regE
- vpaddd \memW*32(%rsp), \regE, \regE
- PROLD_nd \regT, 5, \regF, \regA
- vpaddd \regT, \regE, \regE
- \MAGIC \regF, \regB, \regC, \regD, \regT
- PROLD \regB, 30, \regT
- vpaddd \regF, \regE, \regE
-.endm
-
-.macro SHA1_STEP_16_79 regA regB regC regD regE regT regF memW immCNT MAGIC
- vpaddd \immCNT, \regE, \regE
- offset = ((\memW - 14) & 15) * 32
- vmovdqu offset(%rsp), W14
- vpxor W14, W16, W16
- offset = ((\memW - 8) & 15) * 32
- vpxor offset(%rsp), W16, W16
- offset = ((\memW - 3) & 15) * 32
- vpxor offset(%rsp), W16, W16
- vpsrld $(32-1), W16, \regF
- vpslld $1, W16, W16
- vpor W16, \regF, \regF
-
- ROTATE_W
-
- offset = ((\memW - 0) & 15) * 32
- vmovdqu \regF, offset(%rsp)
- vpaddd \regF, \regE, \regE
- PROLD_nd \regT, 5, \regF, \regA
- vpaddd \regT, \regE, \regE
- \MAGIC \regF,\regB,\regC,\regD,\regT ## FUN = MAGIC_Fi(B,C,D)
- PROLD \regB,30, \regT
- vpaddd \regF, \regE, \regE
-.endm
-
-########################################################################
-########################################################################
-########################################################################
-
-## FRAMESZ plus pushes must be an odd multiple of 8
-YMM_SAVE = (15-15)*32
-FRAMESZ = 32*16 + YMM_SAVE
-_YMM = FRAMESZ - YMM_SAVE
-
-#define VMOVPS vmovups
-
-IDX = %rax
-inp0 = %r9
-inp1 = %r10
-inp2 = %r11
-inp3 = %r12
-inp4 = %r13
-inp5 = %r14
-inp6 = %r15
-inp7 = %rcx
-arg1 = %rdi
-arg2 = %rsi
-RSP_SAVE = %rdx
-
-# ymm0 A
-# ymm1 B
-# ymm2 C
-# ymm3 D
-# ymm4 E
-# ymm5 F AA
-# ymm6 T0 BB
-# ymm7 T1 CC
-# ymm8 T2 DD
-# ymm9 T3 EE
-# ymm10 T4 TMP
-# ymm11 T5 FUN
-# ymm12 T6 K
-# ymm13 T7 W14
-# ymm14 T8 W15
-# ymm15 T9 W16
-
-
-A = %ymm0
-B = %ymm1
-C = %ymm2
-D = %ymm3
-E = %ymm4
-F = %ymm5
-T0 = %ymm6
-T1 = %ymm7
-T2 = %ymm8
-T3 = %ymm9
-T4 = %ymm10
-T5 = %ymm11
-T6 = %ymm12
-T7 = %ymm13
-T8 = %ymm14
-T9 = %ymm15
-
-AA = %ymm5
-BB = %ymm6
-CC = %ymm7
-DD = %ymm8
-EE = %ymm9
-TMP = %ymm10
-FUN = %ymm11
-K = %ymm12
-W14 = %ymm13
-W15 = %ymm14
-W16 = %ymm15
-
-.macro ROTATE_ARGS
- TMP_ = E
- E = D
- D = C
- C = B
- B = A
- A = TMP_
-.endm
-
-.macro ROTATE_W
-TMP_ = W16
-W16 = W15
-W15 = W14
-W14 = TMP_
-.endm
-
-# 8 streams x 5 32bit words per digest x 4 bytes per word
-#define DIGEST_SIZE (8*5*4)
-
-.align 32
-
-# void sha1_x8_avx2(void **input_data, UINT128 *digest, UINT32 size)
-# arg 1 : pointer to array[8] of pointers to input data
-# arg 2 : size (in blocks) ;; assumed to be >= 1
-#
-ENTRY(sha1_x8_avx2)
-
- # save callee-saved clobbered registers to comply with C function ABI
- push %r12
- push %r13
- push %r14
- push %r15
-
- #save rsp
- mov %rsp, RSP_SAVE
- sub $FRAMESZ, %rsp
-
- #align rsp to 32 Bytes
- and $~0x1F, %rsp
-
- ## Initialize digests
- vmovdqu 0*32(arg1), A
- vmovdqu 1*32(arg1), B
- vmovdqu 2*32(arg1), C
- vmovdqu 3*32(arg1), D
- vmovdqu 4*32(arg1), E
-
- ## transpose input onto stack
- mov _data_ptr+0*8(arg1),inp0
- mov _data_ptr+1*8(arg1),inp1
- mov _data_ptr+2*8(arg1),inp2
- mov _data_ptr+3*8(arg1),inp3
- mov _data_ptr+4*8(arg1),inp4
- mov _data_ptr+5*8(arg1),inp5
- mov _data_ptr+6*8(arg1),inp6
- mov _data_ptr+7*8(arg1),inp7
-
- xor IDX, IDX
-lloop:
- vmovdqu PSHUFFLE_BYTE_FLIP_MASK(%rip), F
- I=0
-.rep 2
- VMOVPS (inp0, IDX), T0
- VMOVPS (inp1, IDX), T1
- VMOVPS (inp2, IDX), T2
- VMOVPS (inp3, IDX), T3
- VMOVPS (inp4, IDX), T4
- VMOVPS (inp5, IDX), T5
- VMOVPS (inp6, IDX), T6
- VMOVPS (inp7, IDX), T7
-
- TRANSPOSE8 T0, T1, T2, T3, T4, T5, T6, T7, T8, T9
- vpshufb F, T0, T0
- vmovdqu T0, (I*8)*32(%rsp)
- vpshufb F, T1, T1
- vmovdqu T1, (I*8+1)*32(%rsp)
- vpshufb F, T2, T2
- vmovdqu T2, (I*8+2)*32(%rsp)
- vpshufb F, T3, T3
- vmovdqu T3, (I*8+3)*32(%rsp)
- vpshufb F, T4, T4
- vmovdqu T4, (I*8+4)*32(%rsp)
- vpshufb F, T5, T5
- vmovdqu T5, (I*8+5)*32(%rsp)
- vpshufb F, T6, T6
- vmovdqu T6, (I*8+6)*32(%rsp)
- vpshufb F, T7, T7
- vmovdqu T7, (I*8+7)*32(%rsp)
- add $32, IDX
- I = (I+1)
-.endr
- # save old digests
- vmovdqu A,AA
- vmovdqu B,BB
- vmovdqu C,CC
- vmovdqu D,DD
- vmovdqu E,EE
-
-##
-## perform 0-79 steps
-##
- vmovdqu K00_19(%rip), K
-## do rounds 0...15
- I = 0
-.rep 16
- SHA1_STEP_00_15 A,B,C,D,E, TMP,FUN, I, K, MAGIC_F0
- ROTATE_ARGS
- I = (I+1)
-.endr
-
-## do rounds 16...19
- vmovdqu ((16 - 16) & 15) * 32 (%rsp), W16
- vmovdqu ((16 - 15) & 15) * 32 (%rsp), W15
-.rep 4
- SHA1_STEP_16_79 A,B,C,D,E, TMP,FUN, I, K, MAGIC_F0
- ROTATE_ARGS
- I = (I+1)
-.endr
-
-## do rounds 20...39
- vmovdqu K20_39(%rip), K
-.rep 20
- SHA1_STEP_16_79 A,B,C,D,E, TMP,FUN, I, K, MAGIC_F1
- ROTATE_ARGS
- I = (I+1)
-.endr
-
-## do rounds 40...59
- vmovdqu K40_59(%rip), K
-.rep 20
- SHA1_STEP_16_79 A,B,C,D,E, TMP,FUN, I, K, MAGIC_F2
- ROTATE_ARGS
- I = (I+1)
-.endr
-
-## do rounds 60...79
- vmovdqu K60_79(%rip), K
-.rep 20
- SHA1_STEP_16_79 A,B,C,D,E, TMP,FUN, I, K, MAGIC_F3
- ROTATE_ARGS
- I = (I+1)
-.endr
-
- vpaddd AA,A,A
- vpaddd BB,B,B
- vpaddd CC,C,C
- vpaddd DD,D,D
- vpaddd EE,E,E
-
- sub $1, arg2
- jne lloop
-
- # write out digests
- vmovdqu A, 0*32(arg1)
- vmovdqu B, 1*32(arg1)
- vmovdqu C, 2*32(arg1)
- vmovdqu D, 3*32(arg1)
- vmovdqu E, 4*32(arg1)
-
- # update input pointers
- add IDX, inp0
- add IDX, inp1
- add IDX, inp2
- add IDX, inp3
- add IDX, inp4
- add IDX, inp5
- add IDX, inp6
- add IDX, inp7
- mov inp0, _data_ptr (arg1)
- mov inp1, _data_ptr + 1*8(arg1)
- mov inp2, _data_ptr + 2*8(arg1)
- mov inp3, _data_ptr + 3*8(arg1)
- mov inp4, _data_ptr + 4*8(arg1)
- mov inp5, _data_ptr + 5*8(arg1)
- mov inp6, _data_ptr + 6*8(arg1)
- mov inp7, _data_ptr + 7*8(arg1)
-
- ################
- ## Postamble
-
- mov RSP_SAVE, %rsp
-
- # restore callee-saved clobbered registers
- pop %r15
- pop %r14
- pop %r13
- pop %r12
-
- ret
-ENDPROC(sha1_x8_avx2)
-
-
-.section .rodata.cst32.K00_19, "aM", @progbits, 32
-.align 32
-K00_19:
-.octa 0x5A8279995A8279995A8279995A827999
-.octa 0x5A8279995A8279995A8279995A827999
-
-.section .rodata.cst32.K20_39, "aM", @progbits, 32
-.align 32
-K20_39:
-.octa 0x6ED9EBA16ED9EBA16ED9EBA16ED9EBA1
-.octa 0x6ED9EBA16ED9EBA16ED9EBA16ED9EBA1
-
-.section .rodata.cst32.K40_59, "aM", @progbits, 32
-.align 32
-K40_59:
-.octa 0x8F1BBCDC8F1BBCDC8F1BBCDC8F1BBCDC
-.octa 0x8F1BBCDC8F1BBCDC8F1BBCDC8F1BBCDC
-
-.section .rodata.cst32.K60_79, "aM", @progbits, 32
-.align 32
-K60_79:
-.octa 0xCA62C1D6CA62C1D6CA62C1D6CA62C1D6
-.octa 0xCA62C1D6CA62C1D6CA62C1D6CA62C1D6
-
-.section .rodata.cst32.PSHUFFLE_BYTE_FLIP_MASK, "aM", @progbits, 32
-.align 32
-PSHUFFLE_BYTE_FLIP_MASK:
-.octa 0x0c0d0e0f08090a0b0405060700010203
-.octa 0x0c0d0e0f08090a0b0405060700010203
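The MAGIC_F0..MAGIC_F3 macros above are vector forms of the four FIPS 180-1 round functions, paired with the K00_19..K60_79 constants; MAGIC_F0 computes the choose function through the d ^ (b & (c ^ d)) identity, which saves one operation over the textbook (b & c) | (~b & d), and MAGIC_F3 simply reuses parity. Scalar C equivalents, for reference:

#include <stdint.h>
#include <stdio.h>

/* Rounds 0-19: choose, written as in MAGIC_F0 (same truth table as
 * (b & c) | (~b & d), one operation shorter). */
static uint32_t f0(uint32_t b, uint32_t c, uint32_t d)
{
	return d ^ (b & (c ^ d));
}

/* Rounds 20-39 and 60-79: parity (MAGIC_F1 == MAGIC_F3). */
static uint32_t f1(uint32_t b, uint32_t c, uint32_t d)
{
	return b ^ c ^ d;
}

/* Rounds 40-59: majority (MAGIC_F2). */
static uint32_t f2(uint32_t b, uint32_t c, uint32_t d)
{
	return (b & c) | (b & d) | (c & d);
}

int main(void)
{
	/* Round constants, one per 20-round group (K00_19..K60_79). */
	static const uint32_t K[4] = { 0x5A827999, 0x6ED9EBA1,
				       0x8F1BBCDC, 0xCA62C1D6 };
	uint32_t b = 0x12345678, c = 0x9ABCDEF0, d = 0x0F1E2D3C;

	/* Check the choose identity that MAGIC_F0 relies on. */
	printf("choose identity holds: %d\n",
	       f0(b, c, d) == ((b & c) | (~b & d)));
	printf("parity = %08x, majority = %08x, K[0] = %08x\n",
	       f1(b, c, d), f2(b, c, d), K[0]);
	return 0;
}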
diff --git a/arch/x86/crypto/sha256-mb/Makefile b/arch/x86/crypto/sha256-mb/Makefile
deleted file mode 100644
index 53ad6e7..00000000
--- a/arch/x86/crypto/sha256-mb/Makefile
+++ /dev/null
@@ -1,14 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-#
-# Arch-specific CryptoAPI modules.
-#
-
-OBJECT_FILES_NON_STANDARD := y
-
-avx2_supported := $(call as-instr,vpgatherdd %ymm0$(comma)(%eax$(comma)%ymm1\
- $(comma)4)$(comma)%ymm2,yes,no)
-ifeq ($(avx2_supported),yes)
- obj-$(CONFIG_CRYPTO_SHA256_MB) += sha256-mb.o
- sha256-mb-y := sha256_mb.o sha256_mb_mgr_flush_avx2.o \
- sha256_mb_mgr_init_avx2.o sha256_mb_mgr_submit_avx2.o sha256_x8_avx2.o
-endif
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb.c b/arch/x86/crypto/sha256-mb/sha256_mb.c
deleted file mode 100644
index 97c5fc43..00000000
--- a/arch/x86/crypto/sha256-mb/sha256_mb.c
+++ /dev/null
@@ -1,1013 +0,0 @@
-/*
- * Multi buffer SHA256 algorithm Glue Code
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <crypto/internal/hash.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/mm.h>
-#include <linux/cryptohash.h>
-#include <linux/types.h>
-#include <linux/list.h>
-#include <crypto/scatterwalk.h>
-#include <crypto/sha.h>
-#include <crypto/mcryptd.h>
-#include <crypto/crypto_wq.h>
-#include <asm/byteorder.h>
-#include <linux/hardirq.h>
-#include <asm/fpu/api.h>
-#include "sha256_mb_ctx.h"
-
-#define FLUSH_INTERVAL 1000 /* in usec */
-
-static struct mcryptd_alg_state sha256_mb_alg_state;
-
-struct sha256_mb_ctx {
- struct mcryptd_ahash *mcryptd_tfm;
-};
-
-static inline struct mcryptd_hash_request_ctx
- *cast_hash_to_mcryptd_ctx(struct sha256_hash_ctx *hash_ctx)
-{
- struct ahash_request *areq;
-
- areq = container_of((void *) hash_ctx, struct ahash_request, __ctx);
- return container_of(areq, struct mcryptd_hash_request_ctx, areq);
-}
-
-static inline struct ahash_request
- *cast_mcryptd_ctx_to_req(struct mcryptd_hash_request_ctx *ctx)
-{
- return container_of((void *) ctx, struct ahash_request, __ctx);
-}
-
-static void req_ctx_init(struct mcryptd_hash_request_ctx *rctx,
- struct ahash_request *areq)
-{
- rctx->flag = HASH_UPDATE;
-}
-
-static asmlinkage void (*sha256_job_mgr_init)(struct sha256_mb_mgr *state);
-static asmlinkage struct job_sha256* (*sha256_job_mgr_submit)
- (struct sha256_mb_mgr *state, struct job_sha256 *job);
-static asmlinkage struct job_sha256* (*sha256_job_mgr_flush)
- (struct sha256_mb_mgr *state);
-static asmlinkage struct job_sha256* (*sha256_job_mgr_get_comp_job)
- (struct sha256_mb_mgr *state);
-
-inline uint32_t sha256_pad(uint8_t padblock[SHA256_BLOCK_SIZE * 2],
- uint64_t total_len)
-{
- uint32_t i = total_len & (SHA256_BLOCK_SIZE - 1);
-
- memset(&padblock[i], 0, SHA256_BLOCK_SIZE);
- padblock[i] = 0x80;
-
- i += ((SHA256_BLOCK_SIZE - 1) &
- (0 - (total_len + SHA256_PADLENGTHFIELD_SIZE + 1)))
- + 1 + SHA256_PADLENGTHFIELD_SIZE;
-
-#if SHA256_PADLENGTHFIELD_SIZE == 16
- *((uint64_t *) &padblock[i - 16]) = 0;
-#endif
-
- *((uint64_t *) &padblock[i - 8]) = cpu_to_be64(total_len << 3);
-
- /* Number of extra blocks to hash */
- return i >> SHA256_LOG2_BLOCK_SIZE;
-}
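The expression above rounds the message out to the next 64-byte block boundary that still has room for the mandatory 0x80 byte plus the 8-byte length field (SHA-256 appends a 64-bit bit count), and the shift turns that into the number of extra blocks to hash: one when at least 9 bytes are spare in the final block, two otherwise. A small self-contained check of that block count, reduced to just the arithmetic:

#include <stdint.h>
#include <stdio.h>

#define BLK      64  /* SHA256_BLOCK_SIZE */
#define LENFIELD 8   /* 64-bit bit count, per FIPS 180 */

/* Same arithmetic as sha256_pad, reduced to the returned block count. */
static uint32_t pad_blocks(uint64_t total_len)
{
	uint32_t i = total_len & (BLK - 1);

	i += ((BLK - 1) & (0 - (total_len + LENFIELD + 1)))
		+ 1 + LENFIELD;
	return i >> 6;  /* SHA256_LOG2_BLOCK_SIZE */
}

int main(void)
{
	printf("len 0:  %u extra block(s)\n", pad_blocks(0));   /* 1 */
	printf("len 55: %u extra block(s)\n", pad_blocks(55));  /* 1 */
	printf("len 56: %u extra block(s)\n", pad_blocks(56));  /* 2 */
	printf("len 64: %u extra block(s)\n", pad_blocks(64));  /* 1 */
	return 0;
}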
-
-static struct sha256_hash_ctx
- *sha256_ctx_mgr_resubmit(struct sha256_ctx_mgr *mgr,
- struct sha256_hash_ctx *ctx)
-{
- while (ctx) {
- if (ctx->status & HASH_CTX_STS_COMPLETE) {
- /* Clear PROCESSING bit */
- ctx->status = HASH_CTX_STS_COMPLETE;
- return ctx;
- }
-
- /*
- * If the extra blocks are empty, begin hashing what remains
- * in the user's buffer.
- */
- if (ctx->partial_block_buffer_length == 0 &&
- ctx->incoming_buffer_length) {
-
- const void *buffer = ctx->incoming_buffer;
- uint32_t len = ctx->incoming_buffer_length;
- uint32_t copy_len;
-
- /*
- * Only entire blocks can be hashed.
- * Copy remainder to extra blocks buffer.
- */
- copy_len = len & (SHA256_BLOCK_SIZE-1);
-
- if (copy_len) {
- len -= copy_len;
- memcpy(ctx->partial_block_buffer,
- ((const char *) buffer + len),
- copy_len);
- ctx->partial_block_buffer_length = copy_len;
- }
-
- ctx->incoming_buffer_length = 0;
-
- /* len should be a multiple of the block size now */
- assert((len % SHA256_BLOCK_SIZE) == 0);
-
- /* Set len to the number of blocks to be hashed */
- len >>= SHA256_LOG2_BLOCK_SIZE;
-
- if (len) {
-
- ctx->job.buffer = (uint8_t *) buffer;
- ctx->job.len = len;
- ctx = (struct sha256_hash_ctx *)
- sha256_job_mgr_submit(&mgr->mgr, &ctx->job);
- continue;
- }
- }
-
- /*
- * If the extra blocks are not empty, then we are
- * either on the last block(s) or we need more
- * user input before continuing.
- */
- if (ctx->status & HASH_CTX_STS_LAST) {
-
- uint8_t *buf = ctx->partial_block_buffer;
- uint32_t n_extra_blocks =
- sha256_pad(buf, ctx->total_length);
-
- ctx->status = (HASH_CTX_STS_PROCESSING |
- HASH_CTX_STS_COMPLETE);
- ctx->job.buffer = buf;
- ctx->job.len = (uint32_t) n_extra_blocks;
- ctx = (struct sha256_hash_ctx *)
- sha256_job_mgr_submit(&mgr->mgr, &ctx->job);
- continue;
- }
-
- ctx->status = HASH_CTX_STS_IDLE;
- return ctx;
- }
-
- return NULL;
-}
-
-static struct sha256_hash_ctx
- *sha256_ctx_mgr_get_comp_ctx(struct sha256_ctx_mgr *mgr)
-{
- /*
- * If get_comp_job returns NULL, there are no jobs complete.
- * If get_comp_job returns a job, verify that it is safe to return to
- * the user. If it is not ready, resubmit the job to finish processing.
- * If sha256_ctx_mgr_resubmit returned a job, it is ready to be
- * returned. Otherwise, all jobs currently being managed by the
- * hash_ctx_mgr still need processing.
- */
- struct sha256_hash_ctx *ctx;
-
- ctx = (struct sha256_hash_ctx *) sha256_job_mgr_get_comp_job(&mgr->mgr);
- return sha256_ctx_mgr_resubmit(mgr, ctx);
-}
-
-static void sha256_ctx_mgr_init(struct sha256_ctx_mgr *mgr)
-{
- sha256_job_mgr_init(&mgr->mgr);
-}
-
-static struct sha256_hash_ctx *sha256_ctx_mgr_submit(struct sha256_ctx_mgr *mgr,
- struct sha256_hash_ctx *ctx,
- const void *buffer,
- uint32_t len,
- int flags)
-{
- if (flags & ~(HASH_UPDATE | HASH_LAST)) {
- /* User should not pass anything other than UPDATE or LAST */
- ctx->error = HASH_CTX_ERROR_INVALID_FLAGS;
- return ctx;
- }
-
- if (ctx->status & HASH_CTX_STS_PROCESSING) {
- /* Cannot submit to a currently processing job. */
- ctx->error = HASH_CTX_ERROR_ALREADY_PROCESSING;
- return ctx;
- }
-
- if (ctx->status & HASH_CTX_STS_COMPLETE) {
- /* Cannot update a finished job. */
- ctx->error = HASH_CTX_ERROR_ALREADY_COMPLETED;
- return ctx;
- }
-
- /* If we made it here, there was no error during this call to submit */
- ctx->error = HASH_CTX_ERROR_NONE;
-
- /* Store buffer ptr info from user */
- ctx->incoming_buffer = buffer;
- ctx->incoming_buffer_length = len;
-
- /*
- * Store the user's request flags and mark this ctx as currently
- * being processed.
- */
- ctx->status = (flags & HASH_LAST) ?
- (HASH_CTX_STS_PROCESSING | HASH_CTX_STS_LAST) :
- HASH_CTX_STS_PROCESSING;
-
- /* Advance byte counter */
- ctx->total_length += len;
-
- /*
- * If there is anything currently buffered in the extra blocks,
- * append to it until it contains a whole block.
- * Or if the user's buffer contains less than a whole block,
- * append as much as possible to the extra block.
- */
- if (ctx->partial_block_buffer_length || len < SHA256_BLOCK_SIZE) {
- /*
- * Compute how many bytes to copy from user buffer into
- * extra block
- */
- uint32_t copy_len = SHA256_BLOCK_SIZE -
- ctx->partial_block_buffer_length;
- if (len < copy_len)
- copy_len = len;
-
- if (copy_len) {
- /* Copy and update relevant pointers and counters */
- memcpy(
- &ctx->partial_block_buffer[ctx->partial_block_buffer_length],
- buffer, copy_len);
-
- ctx->partial_block_buffer_length += copy_len;
- ctx->incoming_buffer = (const void *)
- ((const char *)buffer + copy_len);
- ctx->incoming_buffer_length = len - copy_len;
- }
-
- /* The extra block should never contain more than 1 block */
- assert(ctx->partial_block_buffer_length <= SHA256_BLOCK_SIZE);
-
- /*
- * If the extra block buffer contains exactly 1 block,
- * it can be hashed.
- */
- if (ctx->partial_block_buffer_length >= SHA256_BLOCK_SIZE) {
- ctx->partial_block_buffer_length = 0;
-
- ctx->job.buffer = ctx->partial_block_buffer;
- ctx->job.len = 1;
- ctx = (struct sha256_hash_ctx *)
- sha256_job_mgr_submit(&mgr->mgr, &ctx->job);
- }
- }
-
- return sha256_ctx_mgr_resubmit(mgr, ctx);
-}
-
-static struct sha256_hash_ctx *sha256_ctx_mgr_flush(struct sha256_ctx_mgr *mgr)
-{
- struct sha256_hash_ctx *ctx;
-
- while (1) {
- ctx = (struct sha256_hash_ctx *)
- sha256_job_mgr_flush(&mgr->mgr);
-
- /* If flush returned 0, there are no more jobs in flight. */
- if (!ctx)
- return NULL;
-
- /*
- * If flush returned a job, resubmit the job to finish
- * processing.
- */
- ctx = sha256_ctx_mgr_resubmit(mgr, ctx);
-
- /*
- * If sha256_ctx_mgr_resubmit returned a job, it is ready to
- * be returned. Otherwise, all jobs currently being managed by
- * the sha256_ctx_mgr still need processing. Loop.
- */
- if (ctx)
- return ctx;
- }
-}
-
-static int sha256_mb_init(struct ahash_request *areq)
-{
- struct sha256_hash_ctx *sctx = ahash_request_ctx(areq);
-
- hash_ctx_init(sctx);
- sctx->job.result_digest[0] = SHA256_H0;
- sctx->job.result_digest[1] = SHA256_H1;
- sctx->job.result_digest[2] = SHA256_H2;
- sctx->job.result_digest[3] = SHA256_H3;
- sctx->job.result_digest[4] = SHA256_H4;
- sctx->job.result_digest[5] = SHA256_H5;
- sctx->job.result_digest[6] = SHA256_H6;
- sctx->job.result_digest[7] = SHA256_H7;
- sctx->total_length = 0;
- sctx->partial_block_buffer_length = 0;
- sctx->status = HASH_CTX_STS_IDLE;
-
- return 0;
-}
-
-static int sha256_mb_set_results(struct mcryptd_hash_request_ctx *rctx)
-{
- int i;
- struct sha256_hash_ctx *sctx = ahash_request_ctx(&rctx->areq);
- __be32 *dst = (__be32 *) rctx->out;
-
- for (i = 0; i < 8; ++i)
- dst[i] = cpu_to_be32(sctx->job.result_digest[i]);
-
- return 0;
-}
-
-static int sha_finish_walk(struct mcryptd_hash_request_ctx **ret_rctx,
- struct mcryptd_alg_cstate *cstate, bool flush)
-{
- int flag = HASH_UPDATE;
- int nbytes, err = 0;
- struct mcryptd_hash_request_ctx *rctx = *ret_rctx;
- struct sha256_hash_ctx *sha_ctx;
-
- /* more work ? */
- while (!(rctx->flag & HASH_DONE)) {
- nbytes = crypto_ahash_walk_done(&rctx->walk, 0);
- if (nbytes < 0) {
- err = nbytes;
- goto out;
- }
- /* check if the walk is done */
- if (crypto_ahash_walk_last(&rctx->walk)) {
- rctx->flag |= HASH_DONE;
- if (rctx->flag & HASH_FINAL)
- flag |= HASH_LAST;
-
- }
- sha_ctx = (struct sha256_hash_ctx *)
- ahash_request_ctx(&rctx->areq);
- kernel_fpu_begin();
- sha_ctx = sha256_ctx_mgr_submit(cstate->mgr, sha_ctx,
- rctx->walk.data, nbytes, flag);
- if (!sha_ctx) {
- if (flush)
- sha_ctx = sha256_ctx_mgr_flush(cstate->mgr);
- }
- kernel_fpu_end();
- if (sha_ctx)
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- else {
- rctx = NULL;
- goto out;
- }
- }
-
- /* copy the results */
- if (rctx->flag & HASH_FINAL)
- sha256_mb_set_results(rctx);
-
-out:
- *ret_rctx = rctx;
- return err;
-}
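
sha_finish_walk() only promotes a submission from HASH_UPDATE to HASH_LAST on the final span of a request that has HASH_FINAL set; HASH_DONE merely records that the scatterlist walk is exhausted. A self-contained sketch of that flag decision, using the flag values from sha256_mb_ctx.h further down:

	#include <stdbool.h>

	#define HASH_UPDATE	0x00
	#define HASH_LAST	0x01
	#define HASH_DONE	0x02
	#define HASH_FINAL	0x04

	/* Pick the submit flag for one walk span. */
	static int submit_flag(int rctx_flag, bool walk_is_last)
	{
		int flag = HASH_UPDATE;

		if (walk_is_last && (rctx_flag & HASH_FINAL))
			flag |= HASH_LAST;	/* pad and emit the digest */
		return flag;
	}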
-
-static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
- struct mcryptd_alg_cstate *cstate,
- int err)
-{
- struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
- struct sha256_hash_ctx *sha_ctx;
- struct mcryptd_hash_request_ctx *req_ctx;
- int ret;
-
- /* remove from work list */
- spin_lock(&cstate->work_lock);
- list_del(&rctx->waiter);
- spin_unlock(&cstate->work_lock);
-
- if (irqs_disabled())
- rctx->complete(&req->base, err);
- else {
- local_bh_disable();
- rctx->complete(&req->base, err);
- local_bh_enable();
- }
-
- /* check to see if there are other jobs that are done */
- sha_ctx = sha256_ctx_mgr_get_comp_ctx(cstate->mgr);
- while (sha_ctx) {
- req_ctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&req_ctx, cstate, false);
- if (req_ctx) {
- spin_lock(&cstate->work_lock);
- list_del(&req_ctx->waiter);
- spin_unlock(&cstate->work_lock);
-
- req = cast_mcryptd_ctx_to_req(req_ctx);
- if (irqs_disabled())
- req_ctx->complete(&req->base, ret);
- else {
- local_bh_disable();
- req_ctx->complete(&req->base, ret);
- local_bh_enable();
- }
- }
- sha_ctx = sha256_ctx_mgr_get_comp_ctx(cstate->mgr);
- }
-
- return 0;
-}
-
-static void sha256_mb_add_list(struct mcryptd_hash_request_ctx *rctx,
- struct mcryptd_alg_cstate *cstate)
-{
- unsigned long next_flush;
- unsigned long delay = usecs_to_jiffies(FLUSH_INTERVAL);
-
- /* initialize tag */
- rctx->tag.arrival = jiffies; /* tag the arrival time */
- rctx->tag.seq_num = cstate->next_seq_num++;
- next_flush = rctx->tag.arrival + delay;
- rctx->tag.expire = next_flush;
-
- spin_lock(&cstate->work_lock);
- list_add_tail(&rctx->waiter, &cstate->work_list);
- spin_unlock(&cstate->work_lock);
-
- mcryptd_arm_flusher(cstate, delay);
-}
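
Every queued request is stamped with an arrival time and a deadline one FLUSH_INTERVAL later; the lazy flusher later compares jiffies against the head-of-list deadline. A user-space model of the tagging, with jiffies replaced by a plain tick counter (FLUSH_INTERVAL_TICKS is a stand-in value):

	#include <stdint.h>

	#define FLUSH_INTERVAL_TICKS 10	/* stand-in for usecs_to_jiffies(FLUSH_INTERVAL) */

	struct tag {
		uint64_t arrival;
		uint64_t expire;
		uint64_t seq_num;
	};

	static void tag_request(struct tag *t, uint64_t now, uint64_t *next_seq)
	{
		t->arrival = now;
		t->seq_num = (*next_seq)++;
		t->expire = now + FLUSH_INTERVAL_TICKS;	/* forced-flush deadline */
	}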
-
-static int sha256_mb_update(struct ahash_request *areq)
-{
- struct mcryptd_hash_request_ctx *rctx =
- container_of(areq, struct mcryptd_hash_request_ctx, areq);
- struct mcryptd_alg_cstate *cstate =
- this_cpu_ptr(sha256_mb_alg_state.alg_cstate);
-
- struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
- struct sha256_hash_ctx *sha_ctx;
- int ret = 0, nbytes;
-
- /* sanity check */
- if (rctx->tag.cpu != smp_processor_id()) {
- pr_err("mcryptd error: cpu clash\n");
- goto done;
- }
-
- /* need to init context */
- req_ctx_init(rctx, areq);
-
- nbytes = crypto_ahash_walk_first(req, &rctx->walk);
-
- if (nbytes < 0) {
- ret = nbytes;
- goto done;
- }
-
- if (crypto_ahash_walk_last(&rctx->walk))
- rctx->flag |= HASH_DONE;
-
- /* submit */
- sha_ctx = (struct sha256_hash_ctx *) ahash_request_ctx(areq);
- sha256_mb_add_list(rctx, cstate);
- kernel_fpu_begin();
- sha_ctx = sha256_ctx_mgr_submit(cstate->mgr, sha_ctx, rctx->walk.data,
- nbytes, HASH_UPDATE);
- kernel_fpu_end();
-
- /* check if anything is returned */
- if (!sha_ctx)
- return -EINPROGRESS;
-
- if (sha_ctx->error) {
- ret = sha_ctx->error;
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- goto done;
- }
-
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&rctx, cstate, false);
-
- if (!rctx)
- return -EINPROGRESS;
-done:
- sha_complete_job(rctx, cstate, ret);
- return ret;
-}
-
-static int sha256_mb_finup(struct ahash_request *areq)
-{
- struct mcryptd_hash_request_ctx *rctx =
- container_of(areq, struct mcryptd_hash_request_ctx, areq);
- struct mcryptd_alg_cstate *cstate =
- this_cpu_ptr(sha256_mb_alg_state.alg_cstate);
-
- struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
- struct sha256_hash_ctx *sha_ctx;
- int ret = 0, flag = HASH_UPDATE, nbytes;
-
- /* sanity check */
- if (rctx->tag.cpu != smp_processor_id()) {
- pr_err("mcryptd error: cpu clash\n");
- goto done;
- }
-
- /* need to init context */
- req_ctx_init(rctx, areq);
-
- nbytes = crypto_ahash_walk_first(req, &rctx->walk);
-
- if (nbytes < 0) {
- ret = nbytes;
- goto done;
- }
-
- if (crypto_ahash_walk_last(&rctx->walk)) {
- rctx->flag |= HASH_DONE;
- flag = HASH_LAST;
- }
-
- /* submit */
- rctx->flag |= HASH_FINAL;
- sha_ctx = (struct sha256_hash_ctx *) ahash_request_ctx(areq);
- sha256_mb_add_list(rctx, cstate);
-
- kernel_fpu_begin();
- sha_ctx = sha256_ctx_mgr_submit(cstate->mgr, sha_ctx, rctx->walk.data,
- nbytes, flag);
- kernel_fpu_end();
-
- /* check if anything is returned */
- if (!sha_ctx)
- return -EINPROGRESS;
-
- if (sha_ctx->error) {
- ret = sha_ctx->error;
- goto done;
- }
-
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&rctx, cstate, false);
- if (!rctx)
- return -EINPROGRESS;
-done:
- sha_complete_job(rctx, cstate, ret);
- return ret;
-}
-
-static int sha256_mb_final(struct ahash_request *areq)
-{
- struct mcryptd_hash_request_ctx *rctx =
- container_of(areq, struct mcryptd_hash_request_ctx,
- areq);
- struct mcryptd_alg_cstate *cstate =
- this_cpu_ptr(sha256_mb_alg_state.alg_cstate);
-
- struct sha256_hash_ctx *sha_ctx;
- int ret = 0;
- u8 data;
-
- /* sanity check */
- if (rctx->tag.cpu != smp_processor_id()) {
- pr_err("mcryptd error: cpu clash\n");
- goto done;
- }
-
- /* need to init context */
- req_ctx_init(rctx, areq);
-
- rctx->flag |= HASH_DONE | HASH_FINAL;
-
- sha_ctx = (struct sha256_hash_ctx *) ahash_request_ctx(areq);
- /* flag HASH_FINAL and 0 data size */
- sha256_mb_add_list(rctx, cstate);
- kernel_fpu_begin();
- sha_ctx = sha256_ctx_mgr_submit(cstate->mgr, sha_ctx, &data, 0,
- HASH_LAST);
- kernel_fpu_end();
-
- /* check if anything is returned */
- if (!sha_ctx)
- return -EINPROGRESS;
-
- if (sha_ctx->error) {
- ret = sha_ctx->error;
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- goto done;
- }
-
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&rctx, cstate, false);
- if (!rctx)
- return -EINPROGRESS;
-done:
- sha_complete_job(rctx, cstate, ret);
- return ret;
-}
-
-static int sha256_mb_export(struct ahash_request *areq, void *out)
-{
- struct sha256_hash_ctx *sctx = ahash_request_ctx(areq);
-
- memcpy(out, sctx, sizeof(*sctx));
-
- return 0;
-}
-
-static int sha256_mb_import(struct ahash_request *areq, const void *in)
-{
- struct sha256_hash_ctx *sctx = ahash_request_ctx(areq);
-
- memcpy(sctx, in, sizeof(*sctx));
-
- return 0;
-}
-
-static int sha256_mb_async_init_tfm(struct crypto_tfm *tfm)
-{
- struct mcryptd_ahash *mcryptd_tfm;
- struct sha256_mb_ctx *ctx = crypto_tfm_ctx(tfm);
- struct mcryptd_hash_ctx *mctx;
-
- mcryptd_tfm = mcryptd_alloc_ahash("__intel_sha256-mb",
- CRYPTO_ALG_INTERNAL,
- CRYPTO_ALG_INTERNAL);
- if (IS_ERR(mcryptd_tfm))
- return PTR_ERR(mcryptd_tfm);
- mctx = crypto_ahash_ctx(&mcryptd_tfm->base);
- mctx->alg_state = &sha256_mb_alg_state;
- ctx->mcryptd_tfm = mcryptd_tfm;
- crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
- sizeof(struct ahash_request) +
- crypto_ahash_reqsize(&mcryptd_tfm->base));
-
- return 0;
-}
-
-static void sha256_mb_async_exit_tfm(struct crypto_tfm *tfm)
-{
- struct sha256_mb_ctx *ctx = crypto_tfm_ctx(tfm);
-
- mcryptd_free_ahash(ctx->mcryptd_tfm);
-}
-
-static int sha256_mb_areq_init_tfm(struct crypto_tfm *tfm)
-{
- crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
- sizeof(struct ahash_request) +
- sizeof(struct sha256_hash_ctx));
-
- return 0;
-}
-
-static void sha256_mb_areq_exit_tfm(struct crypto_tfm *tfm)
-{
- struct sha256_mb_ctx *ctx = crypto_tfm_ctx(tfm);
-
- mcryptd_free_ahash(ctx->mcryptd_tfm);
-}
-
-static struct ahash_alg sha256_mb_areq_alg = {
- .init = sha256_mb_init,
- .update = sha256_mb_update,
- .final = sha256_mb_final,
- .finup = sha256_mb_finup,
- .export = sha256_mb_export,
- .import = sha256_mb_import,
- .halg = {
- .digestsize = SHA256_DIGEST_SIZE,
- .statesize = sizeof(struct sha256_hash_ctx),
- .base = {
- .cra_name = "__sha256-mb",
- .cra_driver_name = "__intel_sha256-mb",
- .cra_priority = 100,
-			/*
-			 * use ASYNC flag as some buffers in multi-buffer
-			 * algo may not have completed before the hashing
-			 * thread sleeps
-			 */
- .cra_flags = CRYPTO_ALG_ASYNC |
- CRYPTO_ALG_INTERNAL,
- .cra_blocksize = SHA256_BLOCK_SIZE,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT
- (sha256_mb_areq_alg.halg.base.cra_list),
- .cra_init = sha256_mb_areq_init_tfm,
- .cra_exit = sha256_mb_areq_exit_tfm,
- .cra_ctxsize = sizeof(struct sha256_hash_ctx),
- }
- }
-};
-
-static int sha256_mb_async_init(struct ahash_request *req)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha256_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_init(mcryptd_req);
-}
-
-static int sha256_mb_async_update(struct ahash_request *req)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
-
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha256_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_update(mcryptd_req);
-}
-
-static int sha256_mb_async_finup(struct ahash_request *req)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
-
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha256_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_finup(mcryptd_req);
-}
-
-static int sha256_mb_async_final(struct ahash_request *req)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
-
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha256_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_final(mcryptd_req);
-}
-
-static int sha256_mb_async_digest(struct ahash_request *req)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha256_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_digest(mcryptd_req);
-}
-
-static int sha256_mb_async_export(struct ahash_request *req, void *out)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha256_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_export(mcryptd_req, out);
-}
-
-static int sha256_mb_async_import(struct ahash_request *req, const void *in)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha256_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
- struct crypto_ahash *child = mcryptd_ahash_child(mcryptd_tfm);
- struct mcryptd_hash_request_ctx *rctx;
- struct ahash_request *areq;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- rctx = ahash_request_ctx(mcryptd_req);
- areq = &rctx->areq;
-
- ahash_request_set_tfm(areq, child);
- ahash_request_set_callback(areq, CRYPTO_TFM_REQ_MAY_SLEEP,
- rctx->complete, req);
-
- return crypto_ahash_import(mcryptd_req, in);
-}
-
-static struct ahash_alg sha256_mb_async_alg = {
- .init = sha256_mb_async_init,
- .update = sha256_mb_async_update,
- .final = sha256_mb_async_final,
- .finup = sha256_mb_async_finup,
- .export = sha256_mb_async_export,
- .import = sha256_mb_async_import,
- .digest = sha256_mb_async_digest,
- .halg = {
- .digestsize = SHA256_DIGEST_SIZE,
- .statesize = sizeof(struct sha256_hash_ctx),
- .base = {
- .cra_name = "sha256",
- .cra_driver_name = "sha256_mb",
- /*
- * Low priority, since with few concurrent hash requests
- * this is extremely slow due to the flush delay. Users
- * whose workloads would benefit from this can request
- * it explicitly by driver name, or can increase its
- * priority at runtime using NETLINK_CRYPTO.
- */
- .cra_priority = 50,
- .cra_flags = CRYPTO_ALG_ASYNC,
- .cra_blocksize = SHA256_BLOCK_SIZE,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT
- (sha256_mb_async_alg.halg.base.cra_list),
- .cra_init = sha256_mb_async_init_tfm,
- .cra_exit = sha256_mb_async_exit_tfm,
- .cra_ctxsize = sizeof(struct sha256_mb_ctx),
- .cra_alignmask = 0,
- },
- },
-};
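
Once both algorithms are registered, the outer "sha256_mb" instance is reachable through the standard ahash API. A minimal in-kernel usage sketch (this is generic ahash boilerplate rather than driver-specific code; the input buffer must live in linear kernel memory, and error handling is trimmed):

	#include <crypto/hash.h>
	#include <linux/scatterlist.h>

	static int sha256_mb_demo(const u8 *data, unsigned int len, u8 out[32])
	{
		struct crypto_ahash *tfm;
		struct ahash_request *req;
		struct scatterlist sg;
		DECLARE_CRYPTO_WAIT(wait);
		int err;

		tfm = crypto_alloc_ahash("sha256_mb", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		req = ahash_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			crypto_free_ahash(tfm);
			return -ENOMEM;
		}

		sg_init_one(&sg, data, len);	/* data must not be on the stack */
		ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
					   crypto_req_done, &wait);
		ahash_request_set_crypt(req, &sg, out, len);

		err = crypto_wait_req(crypto_ahash_digest(req), &wait);

		ahash_request_free(req);
		crypto_free_ahash(tfm);
		return err;
	}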
-
-static unsigned long sha256_mb_flusher(struct mcryptd_alg_cstate *cstate)
-{
- struct mcryptd_hash_request_ctx *rctx;
- unsigned long cur_time;
- unsigned long next_flush = 0;
- struct sha256_hash_ctx *sha_ctx;
-
-
- cur_time = jiffies;
-
- while (!list_empty(&cstate->work_list)) {
- rctx = list_entry(cstate->work_list.next,
- struct mcryptd_hash_request_ctx, waiter);
- if (time_before(cur_time, rctx->tag.expire))
- break;
- kernel_fpu_begin();
- sha_ctx = (struct sha256_hash_ctx *)
- sha256_ctx_mgr_flush(cstate->mgr);
- kernel_fpu_end();
- if (!sha_ctx) {
-			pr_err("sha256_mb error: nothing got flushed for non-empty list\n");
- break;
- }
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- sha_finish_walk(&rctx, cstate, true);
- sha_complete_job(rctx, cstate, 0);
- }
-
- if (!list_empty(&cstate->work_list)) {
- rctx = list_entry(cstate->work_list.next,
- struct mcryptd_hash_request_ctx, waiter);
- /* get the hash context and then flush time */
- next_flush = rctx->tag.expire;
- mcryptd_arm_flusher(cstate, get_delay(next_flush));
- }
- return next_flush;
-}
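
The expiry test above relies on time_before(), which stays correct across jiffies wraparound by doing the comparison in signed arithmetic. An equivalent stand-alone model:

	#include <stdbool.h>
	#include <stdint.h>

	/* time_before(a, b) for a wrapping tick counter. */
	static bool tick_before(uint64_t a, uint64_t b)
	{
		return (int64_t)(a - b) < 0;
	}

	/* The head-of-list check: only flush once the deadline has passed. */
	static bool should_flush(uint64_t now, uint64_t head_expire)
	{
		return !tick_before(now, head_expire);
	}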
-
-static int __init sha256_mb_mod_init(void)
-{
-
- int cpu;
- int err;
- struct mcryptd_alg_cstate *cpu_state;
-
- /* check for dependent cpu features */
- if (!boot_cpu_has(X86_FEATURE_AVX2) ||
- !boot_cpu_has(X86_FEATURE_BMI2))
- return -ENODEV;
-
- /* initialize multibuffer structures */
- sha256_mb_alg_state.alg_cstate = alloc_percpu
- (struct mcryptd_alg_cstate);
-
- sha256_job_mgr_init = sha256_mb_mgr_init_avx2;
- sha256_job_mgr_submit = sha256_mb_mgr_submit_avx2;
- sha256_job_mgr_flush = sha256_mb_mgr_flush_avx2;
- sha256_job_mgr_get_comp_job = sha256_mb_mgr_get_comp_job_avx2;
-
- if (!sha256_mb_alg_state.alg_cstate)
- return -ENOMEM;
- for_each_possible_cpu(cpu) {
- cpu_state = per_cpu_ptr(sha256_mb_alg_state.alg_cstate, cpu);
- cpu_state->next_flush = 0;
- cpu_state->next_seq_num = 0;
- cpu_state->flusher_engaged = false;
- INIT_DELAYED_WORK(&cpu_state->flush, mcryptd_flusher);
- cpu_state->cpu = cpu;
- cpu_state->alg_state = &sha256_mb_alg_state;
- cpu_state->mgr = kzalloc(sizeof(struct sha256_ctx_mgr),
- GFP_KERNEL);
- if (!cpu_state->mgr)
- goto err2;
- sha256_ctx_mgr_init(cpu_state->mgr);
- INIT_LIST_HEAD(&cpu_state->work_list);
- spin_lock_init(&cpu_state->work_lock);
- }
- sha256_mb_alg_state.flusher = &sha256_mb_flusher;
-
- err = crypto_register_ahash(&sha256_mb_areq_alg);
- if (err)
- goto err2;
- err = crypto_register_ahash(&sha256_mb_async_alg);
- if (err)
- goto err1;
-
-
- return 0;
-err1:
- crypto_unregister_ahash(&sha256_mb_areq_alg);
-err2:
- for_each_possible_cpu(cpu) {
- cpu_state = per_cpu_ptr(sha256_mb_alg_state.alg_cstate, cpu);
- kfree(cpu_state->mgr);
- }
- free_percpu(sha256_mb_alg_state.alg_cstate);
- return -ENODEV;
-}
-
-static void __exit sha256_mb_mod_fini(void)
-{
- int cpu;
- struct mcryptd_alg_cstate *cpu_state;
-
- crypto_unregister_ahash(&sha256_mb_async_alg);
- crypto_unregister_ahash(&sha256_mb_areq_alg);
- for_each_possible_cpu(cpu) {
- cpu_state = per_cpu_ptr(sha256_mb_alg_state.alg_cstate, cpu);
- kfree(cpu_state->mgr);
- }
- free_percpu(sha256_mb_alg_state.alg_cstate);
-}
-
-module_init(sha256_mb_mod_init);
-module_exit(sha256_mb_mod_fini);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("SHA256 Secure Hash Algorithm, multi buffer accelerated");
-
-MODULE_ALIAS_CRYPTO("sha256");
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_ctx.h b/arch/x86/crypto/sha256-mb/sha256_mb_ctx.h
deleted file mode 100644
index 7c43254..00000000
--- a/arch/x86/crypto/sha256-mb/sha256_mb_ctx.h
+++ /dev/null
@@ -1,134 +0,0 @@
-/*
- * Header file for multi buffer SHA256 context
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _SHA_MB_CTX_INTERNAL_H
-#define _SHA_MB_CTX_INTERNAL_H
-
-#include "sha256_mb_mgr.h"
-
-#define HASH_UPDATE 0x00
-#define HASH_LAST 0x01
-#define HASH_DONE 0x02
-#define HASH_FINAL 0x04
-
-#define HASH_CTX_STS_IDLE 0x00
-#define HASH_CTX_STS_PROCESSING 0x01
-#define HASH_CTX_STS_LAST 0x02
-#define HASH_CTX_STS_COMPLETE 0x04
-
-enum hash_ctx_error {
- HASH_CTX_ERROR_NONE = 0,
- HASH_CTX_ERROR_INVALID_FLAGS = -1,
- HASH_CTX_ERROR_ALREADY_PROCESSING = -2,
- HASH_CTX_ERROR_ALREADY_COMPLETED = -3,
-
-#ifdef HASH_CTX_DEBUG
- HASH_CTX_ERROR_DEBUG_DIGEST_MISMATCH = -4,
-#endif
-};
-
-
-#define hash_ctx_user_data(ctx) ((ctx)->user_data)
-#define hash_ctx_digest(ctx) ((ctx)->job.result_digest)
-#define hash_ctx_processing(ctx) ((ctx)->status & HASH_CTX_STS_PROCESSING)
-#define hash_ctx_complete(ctx) ((ctx)->status == HASH_CTX_STS_COMPLETE)
-#define hash_ctx_status(ctx) ((ctx)->status)
-#define hash_ctx_error(ctx) ((ctx)->error)
-#define hash_ctx_init(ctx) \
- do { \
- (ctx)->error = HASH_CTX_ERROR_NONE; \
- (ctx)->status = HASH_CTX_STS_COMPLETE; \
- } while (0)
-
-
-/* Hash Constants and Typedefs */
-#define SHA256_DIGEST_LENGTH 8
-#define SHA256_LOG2_BLOCK_SIZE 6
-
-#define SHA256_PADLENGTHFIELD_SIZE 8
-
-#ifdef SHA_MB_DEBUG
-#define assert(expr) \
-do { \
- if (unlikely(!(expr))) { \
- printk(KERN_ERR "Assertion failed! %s,%s,%s,line=%d\n", \
- #expr, __FILE__, __func__, __LINE__); \
- } \
-} while (0)
-#else
-#define assert(expr) do {} while (0)
-#endif
-
-struct sha256_ctx_mgr {
- struct sha256_mb_mgr mgr;
-};
-
-/* typedef struct sha256_ctx_mgr sha256_ctx_mgr; */
-
-struct sha256_hash_ctx {
- /* Must be at struct offset 0 */
- struct job_sha256 job;
- /* status flag */
- int status;
- /* error flag */
- int error;
-
- uint64_t total_length;
- const void *incoming_buffer;
- uint32_t incoming_buffer_length;
- uint8_t partial_block_buffer[SHA256_BLOCK_SIZE * 2];
- uint32_t partial_block_buffer_length;
- void *user_data;
-};
-
-#endif
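
The "Must be at struct offset 0" comment above is load-bearing: the job manager hands back struct job_sha256 pointers, and the context layer casts them straight back to struct sha256_hash_ctx. A compile-and-run check of that invariant with stand-in types:

	#include <assert.h>
	#include <stddef.h>

	struct job {
		int status;
	};

	struct hctx {
		struct job job;		/* must stay at struct offset 0 */
		int error;
	};

	int main(void)
	{
		struct hctx c;
		struct job *j = &c.job;

		assert(offsetof(struct hctx, job) == 0);
		/* Valid only because job sits at offset 0. */
		assert((struct hctx *)j == &c);
		return 0;
	}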
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr.h b/arch/x86/crypto/sha256-mb/sha256_mb_mgr.h
deleted file mode 100644
index b01ae40..00000000
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr.h
+++ /dev/null
@@ -1,108 +0,0 @@
-/*
- * Header file for multi buffer SHA256 algorithm manager
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-#ifndef __SHA_MB_MGR_H
-#define __SHA_MB_MGR_H
-
-#include <linux/types.h>
-
-#define NUM_SHA256_DIGEST_WORDS 8
-
-enum job_sts { STS_UNKNOWN = 0,
- STS_BEING_PROCESSED = 1,
- STS_COMPLETED = 2,
- STS_INTERNAL_ERROR = 3,
- STS_ERROR = 4
-};
-
-struct job_sha256 {
- u8 *buffer;
- u32 len;
- u32 result_digest[NUM_SHA256_DIGEST_WORDS] __aligned(32);
- enum job_sts status;
- void *user_data;
-};
-
-/* SHA256 out-of-order scheduler */
-
-/* typedef uint32_t sha8_digest_array[8][8]; */
-
-struct sha256_args_x8 {
- uint32_t digest[8][8];
- uint8_t *data_ptr[8];
-};
-
-struct sha256_lane_data {
- struct job_sha256 *job_in_lane;
-};
-
-struct sha256_mb_mgr {
- struct sha256_args_x8 args;
-
- uint32_t lens[8];
-
-	/*
-	 * Each nibble is the index (0...7) of an unused lane;
-	 * the top nibble is set to 0xF as a sentinel flag.
-	 */
-	uint64_t unused_lanes;
- struct sha256_lane_data ldata[8];
-};
-
-
-#define SHA256_MB_MGR_NUM_LANES_AVX2 8
-
-void sha256_mb_mgr_init_avx2(struct sha256_mb_mgr *state);
-struct job_sha256 *sha256_mb_mgr_submit_avx2(struct sha256_mb_mgr *state,
- struct job_sha256 *job);
-struct job_sha256 *sha256_mb_mgr_flush_avx2(struct sha256_mb_mgr *state);
-struct job_sha256 *sha256_mb_mgr_get_comp_job_avx2(struct sha256_mb_mgr *state);
-
-#endif
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_datastruct.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_datastruct.S
deleted file mode 100644
index 5c377ba..00000000
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_datastruct.S
+++ /dev/null
@@ -1,304 +0,0 @@
-/*
- * Header file for multi buffer SHA256 algorithm data structure
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-# Macros for defining data structures
-
-# Usage example
-
-#START_FIELDS # JOB_AES
-### name size align
-#FIELD _plaintext, 8, 8 # pointer to plaintext
-#FIELD _ciphertext, 8, 8 # pointer to ciphertext
-#FIELD _IV, 16, 8 # IV
-#FIELD _keys, 8, 8 # pointer to keys
-#FIELD _len, 4, 4 # length in bytes
-#FIELD _status, 4, 4 # status enumeration
-#FIELD _user_data, 8, 8 # pointer to user data
-#UNION _union, size1, align1, \
-# size2, align2, \
-# size3, align3, \
-# ...
-#END_FIELDS
-#%assign _JOB_AES_size _FIELD_OFFSET
-#%assign _JOB_AES_align _STRUCT_ALIGN
-
-#########################################################################
-
-# Alternate "struc-like" syntax:
-# STRUCT job_aes2
-# RES_Q .plaintext, 1
-# RES_Q .ciphertext, 1
-# RES_DQ .IV, 1
-# RES_B .nested, _JOB_AES_SIZE, _JOB_AES_ALIGN
-# RES_U .union, size1, align1, \
-# size2, align2, \
-# ...
-# ENDSTRUCT
-# # Following only needed if nesting
-# %assign job_aes2_size _FIELD_OFFSET
-# %assign job_aes2_align _STRUCT_ALIGN
-#
-# RES_* macros take a name, a count and an optional alignment.
-# The count is in terms of the base size of the macro, and the
-# default alignment is the base size.
-# The macros are:
-# Macro Base size
-# RES_B 1
-# RES_W 2
-# RES_D 4
-# RES_Q 8
-# RES_DQ 16
-# RES_Y 32
-# RES_Z 64
-#
-# RES_U defines a union. Its arguments are a name and two or more
-# pairs of "size, alignment"
-#
-# The two assigns are only needed if this structure is being nested
-# within another. Even if the assigns are not done, one can still use
-# STRUCT_NAME_size as the size of the structure.
-#
-# Note that for nesting, you still need to assign to STRUCT_NAME_size.
-#
-# The differences between this and using "struc" directly are that each
-# type is implicitly aligned to its natural length (although this can be
-# over-ridden with an explicit third parameter), and that the structure
-# is padded at the end to its overall alignment.
-#
-
-#########################################################################
-
-#ifndef _DATASTRUCT_ASM_
-#define _DATASTRUCT_ASM_
-
-#define SZ8 8*SHA256_DIGEST_WORD_SIZE
-#define ROUNDS 64*SZ8
-#define PTR_SZ 8
-#define SHA256_DIGEST_WORD_SIZE 4
-#define MAX_SHA256_LANES 8
-#define SHA256_DIGEST_WORDS 8
-#define SHA256_DIGEST_ROW_SIZE (MAX_SHA256_LANES * SHA256_DIGEST_WORD_SIZE)
-#define SHA256_DIGEST_SIZE (SHA256_DIGEST_ROW_SIZE * SHA256_DIGEST_WORDS)
-#define SHA256_BLK_SZ 64
-
-# START_FIELDS
-.macro START_FIELDS
- _FIELD_OFFSET = 0
- _STRUCT_ALIGN = 0
-.endm
-
-# FIELD name size align
-.macro FIELD name size align
- _FIELD_OFFSET = (_FIELD_OFFSET + (\align) - 1) & (~ ((\align)-1))
- \name = _FIELD_OFFSET
- _FIELD_OFFSET = _FIELD_OFFSET + (\size)
-.if (\align > _STRUCT_ALIGN)
- _STRUCT_ALIGN = \align
-.endif
-.endm
-
-# END_FIELDS
-.macro END_FIELDS
- _FIELD_OFFSET = (_FIELD_OFFSET + _STRUCT_ALIGN-1) & (~ (_STRUCT_ALIGN-1))
-.endm
-
-########################################################################
-
-.macro STRUCT p1
-START_FIELDS
-.struc \p1
-.endm
-
-.macro ENDSTRUCT
- tmp = _FIELD_OFFSET
- END_FIELDS
-	tmp = (_FIELD_OFFSET - tmp)
-.if (tmp > 0)
- .lcomm tmp
-.endif
-.endstruc
-.endm
-
-## RES_int name size align
-.macro RES_int p1 p2 p3
- name = \p1
- size = \p2
- align = .\p3
-
- _FIELD_OFFSET = (_FIELD_OFFSET + (align) - 1) & (~ ((align)-1))
-.align align
-.lcomm name size
- _FIELD_OFFSET = _FIELD_OFFSET + (size)
-.if (align > _STRUCT_ALIGN)
- _STRUCT_ALIGN = align
-.endif
-.endm
-
-# macro RES_B name, size [, align]
-.macro RES_B _name, _size, _align=1
-RES_int _name _size _align
-.endm
-
-# macro RES_W name, size [, align]
-.macro RES_W _name, _size, _align=2
-RES_int _name 2*(_size) _align
-.endm
-
-# macro RES_D name, size [, align]
-.macro RES_D _name, _size, _align=4
-RES_int _name 4*(_size) _align
-.endm
-
-# macro RES_Q name, size [, align]
-.macro RES_Q _name, _size, _align=8
-RES_int _name 8*(_size) _align
-.endm
-
-# macro RES_DQ name, size [, align]
-.macro RES_DQ _name, _size, _align=16
-RES_int _name 16*(_size) _align
-.endm
-
-# macro RES_Y name, size [, align]
-.macro RES_Y _name, _size, _align=32
-RES_int _name 32*(_size) _align
-.endm
-
-# macro RES_Z name, size [, align]
-.macro RES_Z _name, _size, _align=64
-RES_int _name 64*(_size) _align
-.endm
-
-#endif
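
All of the offset bookkeeping in the FIELD/END_FIELDS/RES_* macros is the classic power-of-two align-up, (off + align - 1) & ~(align - 1). A tiny C check of the formula:

	#include <assert.h>
	#include <stdint.h>

	static uint64_t align_up(uint64_t off, uint64_t align)
	{
		/* align must be a power of two */
		return (off + align - 1) & ~(align - 1);
	}

	int main(void)
	{
		assert(align_up(5, 8) == 8);
		assert(align_up(8, 8) == 8);
		assert(align_up(9, 4) == 12);
		return 0;
	}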
-
-
-########################################################################
-#### Define SHA256 Out Of Order Data Structures
-########################################################################
-
-START_FIELDS # LANE_DATA
-### name size align
-FIELD _job_in_lane, 8, 8 # pointer to job object
-END_FIELDS
-
- _LANE_DATA_size = _FIELD_OFFSET
- _LANE_DATA_align = _STRUCT_ALIGN
-
-########################################################################
-
-START_FIELDS # SHA256_ARGS_X4
-### name size align
-FIELD _digest, 4*8*8, 4 # transposed digest
-FIELD _data_ptr, 8*8, 8 # array of pointers to data
-END_FIELDS
-
- _SHA256_ARGS_X4_size = _FIELD_OFFSET
- _SHA256_ARGS_X4_align = _STRUCT_ALIGN
- _SHA256_ARGS_X8_size = _FIELD_OFFSET
- _SHA256_ARGS_X8_align = _STRUCT_ALIGN
-
-#######################################################################
-
-START_FIELDS # MB_MGR
-### name size align
-FIELD _args, _SHA256_ARGS_X4_size, _SHA256_ARGS_X4_align
-FIELD _lens, 4*8, 8
-FIELD _unused_lanes, 8, 8
-FIELD _ldata, _LANE_DATA_size*8, _LANE_DATA_align
-END_FIELDS
-
- _MB_MGR_size = _FIELD_OFFSET
- _MB_MGR_align = _STRUCT_ALIGN
-
-_args_digest = _args + _digest
-_args_data_ptr = _args + _data_ptr
-
-#######################################################################
-
-START_FIELDS #STACK_FRAME
-### name size align
-FIELD _data, 16*SZ8, 1 # message schedule scratch (16 rounds x 8 lanes)
-FIELD _digest, 8*SZ8, 1 # saved transposed digest
-FIELD _ytmp, 4*SZ8, 1
-FIELD _rsp, 8, 1
-END_FIELDS
-
- _STACK_FRAME_size = _FIELD_OFFSET
- _STACK_FRAME_align = _STRUCT_ALIGN
-
-#######################################################################
-
-########################################################################
-#### Define constants
-########################################################################
-
-#define STS_UNKNOWN 0
-#define STS_BEING_PROCESSED 1
-#define STS_COMPLETED 2
-
-########################################################################
-#### Define JOB_SHA256 structure
-########################################################################
-
-START_FIELDS # JOB_SHA256
-
-### name size align
-FIELD _buffer, 8, 8 # pointer to buffer
-FIELD _len, 8, 8 # length in blocks
-FIELD _result_digest, 8*4, 32 # Digest (output)
-FIELD _status, 4, 4
-FIELD _user_data, 8, 8
-END_FIELDS
-
- _JOB_SHA256_size = _FIELD_OFFSET
- _JOB_SHA256_align = _STRUCT_ALIGN
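
The asm offsets defined here have to track the C-side struct job_sha256 in sha256_mb_mgr.h: _buffer at 0, _len at 8, and _result_digest at its 32-byte alignment boundary. A quick offsetof() cross-check against a model of the C struct (assuming the usual LP64 layout):

	#include <assert.h>
	#include <stddef.h>
	#include <stdint.h>

	struct job_sha256_model {
		uint8_t *buffer;
		uint32_t len;
		uint32_t result_digest[8] __attribute__((aligned(32)));
		int status;
		void *user_data;
	};

	int main(void)
	{
		assert(offsetof(struct job_sha256_model, buffer) == 0);		/* _buffer */
		assert(offsetof(struct job_sha256_model, len) == 8);		/* _len */
		assert(offsetof(struct job_sha256_model, result_digest) == 32);	/* _result_digest */
		return 0;
	}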
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
deleted file mode 100644
index d2364c5..00000000
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+++ /dev/null
@@ -1,307 +0,0 @@
-/*
- * Flush routine for SHA256 multibuffer
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-#include <linux/linkage.h>
-#include <asm/frame.h>
-#include "sha256_mb_mgr_datastruct.S"
-
-.extern sha256_x8_avx2
-
-#LINUX register definitions
-#define arg1 %rdi
-#define arg2 %rsi
-
-# Common register definitions
-#define state arg1
-#define job arg2
-#define len2 arg2
-
-# idx must be a register not clobbered by sha256_x8_avx2
-#define idx %r8
-#define DWORD_idx %r8d
-
-#define unused_lanes %rbx
-#define lane_data %rbx
-#define tmp2 %rbx
-#define tmp2_w %ebx
-
-#define job_rax %rax
-#define tmp1 %rax
-#define size_offset %rax
-#define tmp %rax
-#define start_offset %rax
-
-#define tmp3 %arg1
-
-#define extra_blocks %arg2
-#define p %arg2
-
-.macro LABEL prefix n
-\prefix\n\():
-.endm
-
-.macro JNE_SKIP i
-jne skip_\i
-.endm
-
-.altmacro
-.macro SET_OFFSET _offset
-offset = \_offset
-.endm
-.noaltmacro
-
-# JOB_SHA256* sha256_mb_mgr_flush_avx2(MB_MGR *state)
-# arg 1 : rdi : state
-ENTRY(sha256_mb_mgr_flush_avx2)
- FRAME_BEGIN
- push %rbx
-
- # If bit (32+3) is set, then all lanes are empty
- mov _unused_lanes(state), unused_lanes
- bt $32+3, unused_lanes
- jc return_null
-
- # find a lane with a non-null job
- xor idx, idx
- offset = (_ldata + 1 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne one(%rip), idx
- offset = (_ldata + 2 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne two(%rip), idx
- offset = (_ldata + 3 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne three(%rip), idx
- offset = (_ldata + 4 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne four(%rip), idx
- offset = (_ldata + 5 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne five(%rip), idx
- offset = (_ldata + 6 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne six(%rip), idx
- offset = (_ldata + 7 * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne seven(%rip), idx
-
- # copy idx to empty lanes
-copy_lane_data:
- offset = (_args + _data_ptr)
- mov offset(state,idx,8), tmp
-
- I = 0
-.rep 8
- offset = (_ldata + I * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
-.altmacro
- JNE_SKIP %I
- offset = (_args + _data_ptr + 8*I)
- mov tmp, offset(state)
- offset = (_lens + 4*I)
- movl $0xFFFFFFFF, offset(state)
-LABEL skip_ %I
- I = (I+1)
-.noaltmacro
-.endr
-
- # Find min length
- vmovdqu _lens+0*16(state), %xmm0
- vmovdqu _lens+1*16(state), %xmm1
-
- vpminud %xmm1, %xmm0, %xmm2 # xmm2 has {D,C,B,A}
- vpalignr $8, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,D,C}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has {x,x,E,F}
- vpalignr $4, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,x,E}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has min val in low dword
-
- vmovd %xmm2, DWORD_idx
- mov idx, len2
- and $0xF, idx
- shr $4, len2
- jz len_is_0
-
- vpand clear_low_nibble(%rip), %xmm2, %xmm2
- vpshufd $0, %xmm2, %xmm2
-
- vpsubd %xmm2, %xmm0, %xmm0
- vpsubd %xmm2, %xmm1, %xmm1
-
- vmovdqu %xmm0, _lens+0*16(state)
- vmovdqu %xmm1, _lens+1*16(state)
-
- # "state" and "args" are the same address, arg1
- # len is arg2
- call sha256_x8_avx2
- # state and idx are intact
-
-len_is_0:
- # process completed job "idx"
- imul $_LANE_DATA_size, idx, lane_data
- lea _ldata(state, lane_data), lane_data
-
- mov _job_in_lane(lane_data), job_rax
- movq $0, _job_in_lane(lane_data)
- movl $STS_COMPLETED, _status(job_rax)
- mov _unused_lanes(state), unused_lanes
- shl $4, unused_lanes
- or idx, unused_lanes
-
- mov unused_lanes, _unused_lanes(state)
- movl $0xFFFFFFFF, _lens(state,idx,4)
-
- vmovd _args_digest(state , idx, 4) , %xmm0
- vpinsrd $1, _args_digest+1*32(state, idx, 4), %xmm0, %xmm0
- vpinsrd $2, _args_digest+2*32(state, idx, 4), %xmm0, %xmm0
- vpinsrd $3, _args_digest+3*32(state, idx, 4), %xmm0, %xmm0
- vmovd _args_digest+4*32(state, idx, 4), %xmm1
- vpinsrd $1, _args_digest+5*32(state, idx, 4), %xmm1, %xmm1
- vpinsrd $2, _args_digest+6*32(state, idx, 4), %xmm1, %xmm1
- vpinsrd $3, _args_digest+7*32(state, idx, 4), %xmm1, %xmm1
-
- vmovdqu %xmm0, _result_digest(job_rax)
- offset = (_result_digest + 1*16)
- vmovdqu %xmm1, offset(job_rax)
-
-return:
- pop %rbx
- FRAME_END
- ret
-
-return_null:
- xor job_rax, job_rax
- jmp return
-ENDPROC(sha256_mb_mgr_flush_avx2)
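
The vpminud/vpalignr sequence above is a horizontal minimum over the eight lens[] entries; because each entry packs (blocks << 4) | lane and empty lanes hold 0xFFFFFFFF, the minimum simultaneously yields the shortest job and its lane index. A scalar C model of the reduction:

	#include <stdint.h>

	static void find_min_lane(const uint32_t lens[8],
				  uint32_t *lane, uint32_t *blocks)
	{
		uint32_t min = lens[0];

		for (int i = 1; i < 8; i++)
			if (lens[i] < min)
				min = lens[i];
		*lane = min & 0xF;	/* low nibble: lane index */
		*blocks = min >> 4;	/* 64-byte blocks still to hash */
	}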
-
-##############################################################################
-
-.align 16
-ENTRY(sha256_mb_mgr_get_comp_job_avx2)
- push %rbx
-
- ## if bit 32+3 is set, then all lanes are empty
- mov _unused_lanes(state), unused_lanes
- bt $(32+3), unused_lanes
- jc .return_null
-
- # Find min length
- vmovdqu _lens(state), %xmm0
- vmovdqu _lens+1*16(state), %xmm1
-
- vpminud %xmm1, %xmm0, %xmm2 # xmm2 has {D,C,B,A}
- vpalignr $8, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,D,C}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has {x,x,E,F}
- vpalignr $4, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,x,E}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has min val in low dword
-
- vmovd %xmm2, DWORD_idx
- test $~0xF, idx
- jnz .return_null
-
- # process completed job "idx"
- imul $_LANE_DATA_size, idx, lane_data
- lea _ldata(state, lane_data), lane_data
-
- mov _job_in_lane(lane_data), job_rax
- movq $0, _job_in_lane(lane_data)
- movl $STS_COMPLETED, _status(job_rax)
- mov _unused_lanes(state), unused_lanes
- shl $4, unused_lanes
- or idx, unused_lanes
- mov unused_lanes, _unused_lanes(state)
-
- movl $0xFFFFFFFF, _lens(state, idx, 4)
-
- vmovd _args_digest(state, idx, 4), %xmm0
- vpinsrd $1, _args_digest+1*32(state, idx, 4), %xmm0, %xmm0
- vpinsrd $2, _args_digest+2*32(state, idx, 4), %xmm0, %xmm0
- vpinsrd $3, _args_digest+3*32(state, idx, 4), %xmm0, %xmm0
- vmovd _args_digest+4*32(state, idx, 4), %xmm1
- vpinsrd $1, _args_digest+5*32(state, idx, 4), %xmm1, %xmm1
- vpinsrd $2, _args_digest+6*32(state, idx, 4), %xmm1, %xmm1
- vpinsrd $3, _args_digest+7*32(state, idx, 4), %xmm1, %xmm1
-
- vmovdqu %xmm0, _result_digest(job_rax)
- offset = (_result_digest + 1*16)
- vmovdqu %xmm1, offset(job_rax)
-
- pop %rbx
-
- ret
-
-.return_null:
- xor job_rax, job_rax
- pop %rbx
- ret
-ENDPROC(sha256_mb_mgr_get_comp_job_avx2)
-
-.section .rodata.cst16.clear_low_nibble, "aM", @progbits, 16
-.align 16
-clear_low_nibble:
-.octa 0x000000000000000000000000FFFFFFF0
-
-.section .rodata.cst8, "aM", @progbits, 8
-.align 8
-one:
-.quad 1
-two:
-.quad 2
-three:
-.quad 3
-four:
-.quad 4
-five:
-.quad 5
-six:
-.quad 6
-seven:
-.quad 7
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_init_avx2.c b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_init_avx2.c
deleted file mode 100644
index b0c4983..00000000
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_init_avx2.c
+++ /dev/null
@@ -1,65 +0,0 @@
-/*
- * Initialization code for multi buffer SHA256 algorithm for AVX2
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include "sha256_mb_mgr.h"
-
-void sha256_mb_mgr_init_avx2(struct sha256_mb_mgr *state)
-{
- unsigned int j;
-
- state->unused_lanes = 0xF76543210ULL;
- for (j = 0; j < 8; j++) {
- state->lens[j] = 0xFFFFFFFF;
- state->ldata[j].job_in_lane = NULL;
- }
-}
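
unused_lanes is a stack of 4-bit lane indices with 0xF as the top-of-stack sentinel, so the initial value 0xF76543210 means all eight lanes are free. The submit path pops the low nibble, the completion paths push a nibble back, and the "bt $(32+3)" test in the flush code checks whether the sentinel is still at position eight (i.e. nothing is in flight). A runnable C model:

	#include <assert.h>
	#include <stdint.h>

	static unsigned pop_lane(uint64_t *ul)
	{
		unsigned lane = *ul & 0xF;	/* "and $0xF, lane" */

		*ul >>= 4;			/* "shr $4, unused_lanes" */
		return lane;
	}

	static void push_lane(uint64_t *ul, unsigned lane)
	{
		*ul = (*ul << 4) | lane;	/* "shl $4" + "or idx" */
	}

	int main(void)
	{
		uint64_t ul = 0xF76543210ULL;

		assert(ul & (1ULL << 35));	/* sentinel at nibble 8: all lanes free */
		assert(pop_lane(&ul) == 0);
		assert(pop_lane(&ul) == 1);
		push_lane(&ul, 1);
		assert((ul & 0xF) == 1);	/* lane 1 is next to be handed out */
		return 0;
	}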
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S
deleted file mode 100644
index b36ae74..00000000
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S
+++ /dev/null
@@ -1,214 +0,0 @@
-/*
- * Buffer submit code for multi buffer SHA256 algorithm
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <linux/linkage.h>
-#include <asm/frame.h>
-#include "sha256_mb_mgr_datastruct.S"
-
-.extern sha256_x8_avx2
-
-# LINUX register definitions
-arg1 = %rdi
-arg2 = %rsi
-size_offset = %rcx
-tmp2 = %rcx
-extra_blocks = %rdx
-
-# Common definitions
-#define state arg1
-#define job %rsi
-#define len2 arg2
-#define p2 arg2
-
-# idx must be a register not clobbered by sha256_x8_avx2
-idx = %r8
-DWORD_idx = %r8d
-last_len = %r8
-
-p = %r11
-start_offset = %r11
-
-unused_lanes = %rbx
-BYTE_unused_lanes = %bl
-
-job_rax = %rax
-len = %rax
-DWORD_len = %eax
-
-lane = %r12
-tmp3 = %r12
-
-tmp = %r9
-DWORD_tmp = %r9d
-
-lane_data = %r10
-
-# JOB* sha256_mb_mgr_submit_avx2(MB_MGR *state, JOB_SHA256 *job)
-# arg 1 : rdi : state
-# arg 2 : rsi : job
-ENTRY(sha256_mb_mgr_submit_avx2)
- FRAME_BEGIN
- push %rbx
- push %r12
-
- mov _unused_lanes(state), unused_lanes
- mov unused_lanes, lane
- and $0xF, lane
- shr $4, unused_lanes
- imul $_LANE_DATA_size, lane, lane_data
- movl $STS_BEING_PROCESSED, _status(job)
- lea _ldata(state, lane_data), lane_data
- mov unused_lanes, _unused_lanes(state)
- movl _len(job), DWORD_len
-
- mov job, _job_in_lane(lane_data)
- shl $4, len
- or lane, len
-
- movl DWORD_len, _lens(state , lane, 4)
-
- # Load digest words from result_digest
- vmovdqu _result_digest(job), %xmm0
- vmovdqu _result_digest+1*16(job), %xmm1
- vmovd %xmm0, _args_digest(state, lane, 4)
- vpextrd $1, %xmm0, _args_digest+1*32(state , lane, 4)
- vpextrd $2, %xmm0, _args_digest+2*32(state , lane, 4)
- vpextrd $3, %xmm0, _args_digest+3*32(state , lane, 4)
- vmovd %xmm1, _args_digest+4*32(state , lane, 4)
-
- vpextrd $1, %xmm1, _args_digest+5*32(state , lane, 4)
- vpextrd $2, %xmm1, _args_digest+6*32(state , lane, 4)
- vpextrd $3, %xmm1, _args_digest+7*32(state , lane, 4)
-
- mov _buffer(job), p
- mov p, _args_data_ptr(state, lane, 8)
-
- cmp $0xF, unused_lanes
- jne return_null
-
-start_loop:
- # Find min length
- vmovdqa _lens(state), %xmm0
- vmovdqa _lens+1*16(state), %xmm1
-
- vpminud %xmm1, %xmm0, %xmm2 # xmm2 has {D,C,B,A}
- vpalignr $8, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,D,C}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has {x,x,E,F}
- vpalignr $4, %xmm2, %xmm3, %xmm3 # xmm3 has {x,x,x,E}
- vpminud %xmm3, %xmm2, %xmm2 # xmm2 has min val in low dword
-
- vmovd %xmm2, DWORD_idx
- mov idx, len2
- and $0xF, idx
- shr $4, len2
- jz len_is_0
-
- vpand clear_low_nibble(%rip), %xmm2, %xmm2
- vpshufd $0, %xmm2, %xmm2
-
- vpsubd %xmm2, %xmm0, %xmm0
- vpsubd %xmm2, %xmm1, %xmm1
-
- vmovdqa %xmm0, _lens + 0*16(state)
- vmovdqa %xmm1, _lens + 1*16(state)
-
- # "state" and "args" are the same address, arg1
- # len is arg2
- call sha256_x8_avx2
-
- # state and idx are intact
-
-len_is_0:
- # process completed job "idx"
- imul $_LANE_DATA_size, idx, lane_data
- lea _ldata(state, lane_data), lane_data
-
- mov _job_in_lane(lane_data), job_rax
- mov _unused_lanes(state), unused_lanes
- movq $0, _job_in_lane(lane_data)
- movl $STS_COMPLETED, _status(job_rax)
- shl $4, unused_lanes
- or idx, unused_lanes
- mov unused_lanes, _unused_lanes(state)
-
- movl $0xFFFFFFFF, _lens(state,idx,4)
-
- vmovd _args_digest(state, idx, 4), %xmm0
- vpinsrd $1, _args_digest+1*32(state , idx, 4), %xmm0, %xmm0
- vpinsrd $2, _args_digest+2*32(state , idx, 4), %xmm0, %xmm0
- vpinsrd $3, _args_digest+3*32(state , idx, 4), %xmm0, %xmm0
- vmovd _args_digest+4*32(state, idx, 4), %xmm1
-
- vpinsrd $1, _args_digest+5*32(state , idx, 4), %xmm1, %xmm1
- vpinsrd $2, _args_digest+6*32(state , idx, 4), %xmm1, %xmm1
- vpinsrd $3, _args_digest+7*32(state , idx, 4), %xmm1, %xmm1
-
- vmovdqu %xmm0, _result_digest(job_rax)
- vmovdqu %xmm1, _result_digest+1*16(job_rax)
-
-return:
- pop %r12
- pop %rbx
- FRAME_END
- ret
-
-return_null:
- xor job_rax, job_rax
- jmp return
-
-ENDPROC(sha256_mb_mgr_submit_avx2)
-
-.section .rodata.cst16.clear_low_nibble, "aM", @progbits, 16
-.align 16
-clear_low_nibble:
- .octa 0x000000000000000000000000FFFFFFF0
diff --git a/arch/x86/crypto/sha256-mb/sha256_x8_avx2.S b/arch/x86/crypto/sha256-mb/sha256_x8_avx2.S
deleted file mode 100644
index 1687c80..00000000
--- a/arch/x86/crypto/sha256-mb/sha256_x8_avx2.S
+++ /dev/null
@@ -1,598 +0,0 @@
-/*
- * Multi-buffer SHA256 algorithm hash compute routine
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <linux/linkage.h>
-#include "sha256_mb_mgr_datastruct.S"
-
-## code to compute eight-lane ("oct") SHA256 using AVX2
-## outer calling routine takes care of save and restore of XMM registers
-## Logic designed/laid out by JDG
-
-## Function clobbers: rax, rcx, rdx, rbx, rsi, rdi, r9-r15; %ymm0-15
-## Linux clobbers: rax rbx rcx rdx rsi r9 r10 r11 r12 r13 r14 r15
-## Linux preserves: rdi rbp r8
-##
-## clobbers %ymm0-15
-
-arg1 = %rdi
-arg2 = %rsi
-reg3 = %rcx
-reg4 = %rdx
-
-# Common definitions
-STATE = arg1
-INP_SIZE = arg2
-
-IDX = %rax
-ROUND = %rbx
-TBL = reg3
-
-inp0 = %r9
-inp1 = %r10
-inp2 = %r11
-inp3 = %r12
-inp4 = %r13
-inp5 = %r14
-inp6 = %r15
-inp7 = reg4
-
-a = %ymm0
-b = %ymm1
-c = %ymm2
-d = %ymm3
-e = %ymm4
-f = %ymm5
-g = %ymm6
-h = %ymm7
-
-T1 = %ymm8
-
-a0 = %ymm12
-a1 = %ymm13
-a2 = %ymm14
-TMP = %ymm15
-TMP0 = %ymm6
-TMP1 = %ymm7
-
-TT0 = %ymm8
-TT1 = %ymm9
-TT2 = %ymm10
-TT3 = %ymm11
-TT4 = %ymm12
-TT5 = %ymm13
-TT6 = %ymm14
-TT7 = %ymm15
-
-# Define stack usage
-
-# Assume stack aligned to 32 bytes before call
-# Therefore FRAMESZ mod 32 must be 32-8 = 24
-
-#define FRAMESZ 0x388
-
-#define VMOVPS vmovups
-
-# TRANSPOSE8 r0, r1, r2, r3, r4, r5, r6, r7, t0, t1
-# "transpose" data in {r0...r7} using temps {t0...t1}
-# Input looks like: {r0 r1 r2 r3 r4 r5 r6 r7}
-# r0 = {a7 a6 a5 a4 a3 a2 a1 a0}
-# r1 = {b7 b6 b5 b4 b3 b2 b1 b0}
-# r2 = {c7 c6 c5 c4 c3 c2 c1 c0}
-# r3 = {d7 d6 d5 d4 d3 d2 d1 d0}
-# r4 = {e7 e6 e5 e4 e3 e2 e1 e0}
-# r5 = {f7 f6 f5 f4 f3 f2 f1 f0}
-# r6 = {g7 g6 g5 g4 g3 g2 g1 g0}
-# r7 = {h7 h6 h5 h4 h3 h2 h1 h0}
-#
-# Output looks like: {r0 r1 r2 r3 r4 r5 r6 r7}
-# r0 = {h0 g0 f0 e0 d0 c0 b0 a0}
-# r1 = {h1 g1 f1 e1 d1 c1 b1 a1}
-# r2 = {h2 g2 f2 e2 d2 c2 b2 a2}
-# r3 = {h3 g3 f3 e3 d3 c3 b3 a3}
-# r4 = {h4 g4 f4 e4 d4 c4 b4 a4}
-# r5 = {h5 g5 f5 e5 d5 c5 b5 a5}
-# r6 = {h6 g6 f6 e6 d6 c6 b6 a6}
-# r7 = {h7 g7 f7 e7 d7 c7 b7 a7}
-#
-
-.macro TRANSPOSE8 r0 r1 r2 r3 r4 r5 r6 r7 t0 t1
- # process top half (r0..r3) {a...d}
- vshufps $0x44, \r1, \r0, \t0 # t0 = {b5 b4 a5 a4 b1 b0 a1 a0}
- vshufps $0xEE, \r1, \r0, \r0 # r0 = {b7 b6 a7 a6 b3 b2 a3 a2}
- vshufps $0x44, \r3, \r2, \t1 # t1 = {d5 d4 c5 c4 d1 d0 c1 c0}
- vshufps $0xEE, \r3, \r2, \r2 # r2 = {d7 d6 c7 c6 d3 d2 c3 c2}
- vshufps $0xDD, \t1, \t0, \r3 # r3 = {d5 c5 b5 a5 d1 c1 b1 a1}
- vshufps $0x88, \r2, \r0, \r1 # r1 = {d6 c6 b6 a6 d2 c2 b2 a2}
- vshufps $0xDD, \r2, \r0, \r0 # r0 = {d7 c7 b7 a7 d3 c3 b3 a3}
- vshufps $0x88, \t1, \t0, \t0 # t0 = {d4 c4 b4 a4 d0 c0 b0 a0}
-
- # use r2 in place of t0
- # process bottom half (r4..r7) {e...h}
- vshufps $0x44, \r5, \r4, \r2 # r2 = {f5 f4 e5 e4 f1 f0 e1 e0}
- vshufps $0xEE, \r5, \r4, \r4 # r4 = {f7 f6 e7 e6 f3 f2 e3 e2}
- vshufps $0x44, \r7, \r6, \t1 # t1 = {h5 h4 g5 g4 h1 h0 g1 g0}
- vshufps $0xEE, \r7, \r6, \r6 # r6 = {h7 h6 g7 g6 h3 h2 g3 g2}
- vshufps $0xDD, \t1, \r2, \r7 # r7 = {h5 g5 f5 e5 h1 g1 f1 e1}
- vshufps $0x88, \r6, \r4, \r5 # r5 = {h6 g6 f6 e6 h2 g2 f2 e2}
- vshufps $0xDD, \r6, \r4, \r4 # r4 = {h7 g7 f7 e7 h3 g3 f3 e3}
- vshufps $0x88, \t1, \r2, \t1 # t1 = {h4 g4 f4 e4 h0 g0 f0 e0}
-
- vperm2f128 $0x13, \r1, \r5, \r6 # h6...a6
- vperm2f128 $0x02, \r1, \r5, \r2 # h2...a2
- vperm2f128 $0x13, \r3, \r7, \r5 # h5...a5
- vperm2f128 $0x02, \r3, \r7, \r1 # h1...a1
- vperm2f128 $0x13, \r0, \r4, \r7 # h7...a7
- vperm2f128 $0x02, \r0, \r4, \r3 # h3...a3
- vperm2f128 $0x13, \t0, \t1, \r4 # h4...a4
- vperm2f128 $0x02, \t0, \t1, \r0 # h0...a0
-
-.endm
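
For reference, the lane shuffle above is just an 8x8 transpose of 32-bit
words; a minimal scalar C sketch of the same operation (illustrative only,
not part of the patch):

	#include <stdint.h>

	/* TRANSPOSE8 semantics: r[i] holds the eight 32-bit lanes of row i
	 * on entry; on exit out[j] holds element j of every original row. */
	static void transpose8_ref(const uint32_t r[8][8], uint32_t out[8][8])
	{
		int i, j;

		for (i = 0; i < 8; i++)
			for (j = 0; j < 8; j++)
				out[j][i] = r[i][j];
	}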
-
-.macro ROTATE_ARGS
-TMP_ = h
-h = g
-g = f
-f = e
-e = d
-d = c
-c = b
-b = a
-a = TMP_
-.endm
-
-.macro _PRORD reg imm tmp
- vpslld $(32-\imm),\reg,\tmp
- vpsrld $\imm,\reg, \reg
- vpor \tmp,\reg, \reg
-.endm
-
-# PRORD_nd reg, imm, tmp, src
-.macro _PRORD_nd reg imm tmp src
- vpslld $(32-\imm), \src, \tmp
- vpsrld $\imm, \src, \reg
- vpor \tmp, \reg, \reg
-.endm
-
-# PRORD dst/src, amt
-.macro PRORD reg imm
- _PRORD \reg,\imm,TMP
-.endm
-
-# PRORD_nd dst, src, amt
-.macro PRORD_nd reg tmp imm
- _PRORD_nd \reg, \imm, TMP, \tmp
-.endm
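
The _PRORD/_PRORD_nd helpers above build a per-lane rotate right from two
shifts and an OR (AVX2 has no 32-bit vector rotate); the scalar equivalent
of one lane is (a sketch, not part of the patch):

	#include <stdint.h>

	/* Rotate a 32-bit word right by imm bits, 0 < imm < 32. */
	static inline uint32_t prord_ref(uint32_t x, unsigned int imm)
	{
		return (x >> imm) | (x << (32 - imm));
	}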
-
-# arguments passed implicitly in preprocessor symbols i, a...h
-.macro ROUND_00_15 _T1 i
- PRORD_nd a0,e,5 # sig1: a0 = (e >> 5)
-
- vpxor g, f, a2 # ch: a2 = f^g
- vpand e,a2, a2 # ch: a2 = (f^g)&e
- vpxor g, a2, a2 # a2 = ch
-
- PRORD_nd a1,e,25 # sig1: a1 = (e >> 25)
-
- vmovdqu \_T1,(SZ8*(\i & 0xf))(%rsp)
- vpaddd (TBL,ROUND,1), \_T1, \_T1 # T1 = W + K
- vpxor e,a0, a0 # sig1: a0 = e ^ (e >> 5)
- PRORD a0, 6 # sig1: a0 = (e >> 6) ^ (e >> 11)
- vpaddd a2, h, h # h = h + ch
- PRORD_nd a2,a,11 # sig0: a2 = (a >> 11)
- vpaddd \_T1,h, h # h = h + ch + W + K
- vpxor a1, a0, a0 # a0 = sigma1
- PRORD_nd a1,a,22 # sig0: a1 = (a >> 22)
- vpxor c, a, \_T1 # maj: T1 = a^c
- add $SZ8, ROUND # ROUND++
- vpand b, \_T1, \_T1 # maj: T1 = (a^c)&b
- vpaddd a0, h, h
- vpaddd h, d, d
- vpxor a, a2, a2 # sig0: a2 = a ^ (a >> 11)
- PRORD a2,2 # sig0: a2 = (a >> 2) ^ (a >> 13)
- vpxor a1, a2, a2 # a2 = sig0
- vpand c, a, a1 # maj: a1 = a&c
- vpor \_T1, a1, a1 # a1 = maj
- vpaddd a1, h, h # h = h + ch + W + K + maj
- vpaddd a2, h, h # h = h + ch + W + K + maj + sigma0
- ROTATE_ARGS
-.endm
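
Per lane, ROUND_00_15 is the standard SHA-256 round function. A scalar
reference, reusing prord_ref() from the sketch above (illustrative only):

	/* One scalar SHA-256 round over state s[0..7] = a..h, with schedule
	 * word w and round constant k; the macro runs this on eight lanes. */
	static void sha256_round_ref(uint32_t s[8], uint32_t w, uint32_t k)
	{
		uint32_t sigma1 = prord_ref(s[4], 6) ^ prord_ref(s[4], 11) ^
				  prord_ref(s[4], 25);
		uint32_t ch = (s[4] & s[5]) ^ (~s[4] & s[6]);
		uint32_t t1 = s[7] + sigma1 + ch + k + w;
		uint32_t sigma0 = prord_ref(s[0], 2) ^ prord_ref(s[0], 13) ^
				  prord_ref(s[0], 22);
		uint32_t maj = (s[0] & s[1]) ^ (s[0] & s[2]) ^ (s[1] & s[2]);

		s[7] = s[6]; s[6] = s[5]; s[5] = s[4]; s[4] = s[3] + t1;
		s[3] = s[2]; s[2] = s[1]; s[1] = s[0]; s[0] = t1 + sigma0 + maj;
	}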
-
-# arguments passed implicitly in preprocessor symbols i, a...h
-.macro ROUND_16_XX _T1 i
- vmovdqu (SZ8*((\i-15)&0xf))(%rsp), \_T1
- vmovdqu (SZ8*((\i-2)&0xf))(%rsp), a1
- vmovdqu \_T1, a0
- PRORD \_T1,11
- vmovdqu a1, a2
- PRORD a1,2
- vpxor a0, \_T1, \_T1
- PRORD \_T1, 7
- vpxor a2, a1, a1
- PRORD a1, 17
- vpsrld $3, a0, a0
- vpxor a0, \_T1, \_T1
- vpsrld $10, a2, a2
- vpxor a2, a1, a1
- vpaddd (SZ8*((\i-16)&0xf))(%rsp), \_T1, \_T1
- vpaddd (SZ8*((\i-7)&0xf))(%rsp), a1, a1
- vpaddd a1, \_T1, \_T1
-
- ROUND_00_15 \_T1,\i
-.endm
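
The first half of ROUND_16_XX computes the SHA-256 message schedule before
handing the new word to ROUND_00_15; per lane this is (a sketch, again
reusing prord_ref()):

	/* W[i] = sigma1(W[i-2]) + W[i-7] + sigma0(W[i-15]) + W[i-16] */
	static uint32_t sha256_schedule_ref(const uint32_t w[], int i)
	{
		uint32_t s0 = prord_ref(w[i - 15], 7) ^
			      prord_ref(w[i - 15], 18) ^ (w[i - 15] >> 3);
		uint32_t s1 = prord_ref(w[i - 2], 17) ^
			      prord_ref(w[i - 2], 19) ^ (w[i - 2] >> 10);

		return w[i - 16] + s0 + w[i - 7] + s1;
	}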
-
-# SHA256_ARGS:
-#   UINT32 digest[8][8]; // transposed digests: one row per state word,
-#                        // one 32-bit column per lane
-#   UINT8 *data_ptr[8];  // one input pointer per lane
-
-# void sha256_x8_avx2(SHA256_ARGS *args, UINT64 bytes);
-# arg 1 : STATE    : pointer to the SHA256_ARGS structure above
-# arg 2 : INP_SIZE : size of input in blocks
- # general registers preserved in outer calling routine
- # outer calling routine saves all the XMM registers
- # save rsp, allocate 32-byte aligned for local variables
-ENTRY(sha256_x8_avx2)
-
- # save callee-saved clobbered registers to comply with C function ABI
- push %r12
- push %r13
- push %r14
- push %r15
-
- mov %rsp, IDX
- sub $FRAMESZ, %rsp
- and $~0x1F, %rsp
- mov IDX, _rsp(%rsp)
-
- # Load the pre-transposed incoming digest.
- vmovdqu 0*SHA256_DIGEST_ROW_SIZE(STATE),a
- vmovdqu 1*SHA256_DIGEST_ROW_SIZE(STATE),b
- vmovdqu 2*SHA256_DIGEST_ROW_SIZE(STATE),c
- vmovdqu 3*SHA256_DIGEST_ROW_SIZE(STATE),d
- vmovdqu 4*SHA256_DIGEST_ROW_SIZE(STATE),e
- vmovdqu 5*SHA256_DIGEST_ROW_SIZE(STATE),f
- vmovdqu 6*SHA256_DIGEST_ROW_SIZE(STATE),g
- vmovdqu 7*SHA256_DIGEST_ROW_SIZE(STATE),h
-
- lea K256_8(%rip),TBL
-
- # load the address of each of the 4 message lanes
- # getting ready to transpose input onto stack
- mov _args_data_ptr+0*PTR_SZ(STATE),inp0
- mov _args_data_ptr+1*PTR_SZ(STATE),inp1
- mov _args_data_ptr+2*PTR_SZ(STATE),inp2
- mov _args_data_ptr+3*PTR_SZ(STATE),inp3
- mov _args_data_ptr+4*PTR_SZ(STATE),inp4
- mov _args_data_ptr+5*PTR_SZ(STATE),inp5
- mov _args_data_ptr+6*PTR_SZ(STATE),inp6
- mov _args_data_ptr+7*PTR_SZ(STATE),inp7
-
- xor IDX, IDX
-lloop:
- xor ROUND, ROUND
-
- # save old digest
- vmovdqu a, _digest(%rsp)
- vmovdqu b, _digest+1*SZ8(%rsp)
- vmovdqu c, _digest+2*SZ8(%rsp)
- vmovdqu d, _digest+3*SZ8(%rsp)
- vmovdqu e, _digest+4*SZ8(%rsp)
- vmovdqu f, _digest+5*SZ8(%rsp)
- vmovdqu g, _digest+6*SZ8(%rsp)
- vmovdqu h, _digest+7*SZ8(%rsp)
- i = 0
-.rep 2
- VMOVPS i*32(inp0, IDX), TT0
- VMOVPS i*32(inp1, IDX), TT1
- VMOVPS i*32(inp2, IDX), TT2
- VMOVPS i*32(inp3, IDX), TT3
- VMOVPS i*32(inp4, IDX), TT4
- VMOVPS i*32(inp5, IDX), TT5
- VMOVPS i*32(inp6, IDX), TT6
- VMOVPS i*32(inp7, IDX), TT7
- vmovdqu g, _ytmp(%rsp)
- vmovdqu h, _ytmp+1*SZ8(%rsp)
- TRANSPOSE8 TT0, TT1, TT2, TT3, TT4, TT5, TT6, TT7, TMP0, TMP1
- vmovdqu PSHUFFLE_BYTE_FLIP_MASK(%rip), TMP1
- vmovdqu _ytmp(%rsp), g
- vpshufb TMP1, TT0, TT0
- vpshufb TMP1, TT1, TT1
- vpshufb TMP1, TT2, TT2
- vpshufb TMP1, TT3, TT3
- vpshufb TMP1, TT4, TT4
- vpshufb TMP1, TT5, TT5
- vpshufb TMP1, TT6, TT6
- vpshufb TMP1, TT7, TT7
- vmovdqu _ytmp+1*SZ8(%rsp), h
- vmovdqu TT4, _ytmp(%rsp)
- vmovdqu TT5, _ytmp+1*SZ8(%rsp)
- vmovdqu TT6, _ytmp+2*SZ8(%rsp)
- vmovdqu TT7, _ytmp+3*SZ8(%rsp)
- ROUND_00_15 TT0,(i*8+0)
- vmovdqu _ytmp(%rsp), TT0
- ROUND_00_15 TT1,(i*8+1)
- vmovdqu _ytmp+1*SZ8(%rsp), TT1
- ROUND_00_15 TT2,(i*8+2)
- vmovdqu _ytmp+2*SZ8(%rsp), TT2
- ROUND_00_15 TT3,(i*8+3)
- vmovdqu _ytmp+3*SZ8(%rsp), TT3
- ROUND_00_15 TT0,(i*8+4)
- ROUND_00_15 TT1,(i*8+5)
- ROUND_00_15 TT2,(i*8+6)
- ROUND_00_15 TT3,(i*8+7)
- i = (i+1)
-.endr
- add $64, IDX
- i = (i*8)
-
- jmp Lrounds_16_xx
-.align 16
-Lrounds_16_xx:
-.rep 16
- ROUND_16_XX T1, i
- i = (i+1)
-.endr
-
- cmp $ROUNDS,ROUND
- jb Lrounds_16_xx
-
- # add old digest
- vpaddd _digest+0*SZ8(%rsp), a, a
- vpaddd _digest+1*SZ8(%rsp), b, b
- vpaddd _digest+2*SZ8(%rsp), c, c
- vpaddd _digest+3*SZ8(%rsp), d, d
- vpaddd _digest+4*SZ8(%rsp), e, e
- vpaddd _digest+5*SZ8(%rsp), f, f
- vpaddd _digest+6*SZ8(%rsp), g, g
- vpaddd _digest+7*SZ8(%rsp), h, h
-
- sub $1, INP_SIZE # unit is blocks
- jne lloop
-
- # write back to memory (state object) the transposed digest
- vmovdqu a, 0*SHA256_DIGEST_ROW_SIZE(STATE)
- vmovdqu b, 1*SHA256_DIGEST_ROW_SIZE(STATE)
- vmovdqu c, 2*SHA256_DIGEST_ROW_SIZE(STATE)
- vmovdqu d, 3*SHA256_DIGEST_ROW_SIZE(STATE)
- vmovdqu e, 4*SHA256_DIGEST_ROW_SIZE(STATE)
- vmovdqu f, 5*SHA256_DIGEST_ROW_SIZE(STATE)
- vmovdqu g, 6*SHA256_DIGEST_ROW_SIZE(STATE)
- vmovdqu h, 7*SHA256_DIGEST_ROW_SIZE(STATE)
-
- # update input pointers
- add IDX, inp0
- mov inp0, _args_data_ptr+0*8(STATE)
- add IDX, inp1
- mov inp1, _args_data_ptr+1*8(STATE)
- add IDX, inp2
- mov inp2, _args_data_ptr+2*8(STATE)
- add IDX, inp3
- mov inp3, _args_data_ptr+3*8(STATE)
- add IDX, inp4
- mov inp4, _args_data_ptr+4*8(STATE)
- add IDX, inp5
- mov inp5, _args_data_ptr+5*8(STATE)
- add IDX, inp6
- mov inp6, _args_data_ptr+6*8(STATE)
- add IDX, inp7
- mov inp7, _args_data_ptr+7*8(STATE)
-
- # Postamble
- mov _rsp(%rsp), %rsp
-
- # restore callee-saved clobbered registers
- pop %r15
- pop %r14
- pop %r13
- pop %r12
-
- ret
-ENDPROC(sha256_x8_avx2)
-
-.section .rodata.K256_8, "a", @progbits
-.align 64
-K256_8:
- .octa 0x428a2f98428a2f98428a2f98428a2f98
- .octa 0x428a2f98428a2f98428a2f98428a2f98
- .octa 0x71374491713744917137449171374491
- .octa 0x71374491713744917137449171374491
- .octa 0xb5c0fbcfb5c0fbcfb5c0fbcfb5c0fbcf
- .octa 0xb5c0fbcfb5c0fbcfb5c0fbcfb5c0fbcf
- .octa 0xe9b5dba5e9b5dba5e9b5dba5e9b5dba5
- .octa 0xe9b5dba5e9b5dba5e9b5dba5e9b5dba5
- .octa 0x3956c25b3956c25b3956c25b3956c25b
- .octa 0x3956c25b3956c25b3956c25b3956c25b
- .octa 0x59f111f159f111f159f111f159f111f1
- .octa 0x59f111f159f111f159f111f159f111f1
- .octa 0x923f82a4923f82a4923f82a4923f82a4
- .octa 0x923f82a4923f82a4923f82a4923f82a4
- .octa 0xab1c5ed5ab1c5ed5ab1c5ed5ab1c5ed5
- .octa 0xab1c5ed5ab1c5ed5ab1c5ed5ab1c5ed5
- .octa 0xd807aa98d807aa98d807aa98d807aa98
- .octa 0xd807aa98d807aa98d807aa98d807aa98
- .octa 0x12835b0112835b0112835b0112835b01
- .octa 0x12835b0112835b0112835b0112835b01
- .octa 0x243185be243185be243185be243185be
- .octa 0x243185be243185be243185be243185be
- .octa 0x550c7dc3550c7dc3550c7dc3550c7dc3
- .octa 0x550c7dc3550c7dc3550c7dc3550c7dc3
- .octa 0x72be5d7472be5d7472be5d7472be5d74
- .octa 0x72be5d7472be5d7472be5d7472be5d74
- .octa 0x80deb1fe80deb1fe80deb1fe80deb1fe
- .octa 0x80deb1fe80deb1fe80deb1fe80deb1fe
- .octa 0x9bdc06a79bdc06a79bdc06a79bdc06a7
- .octa 0x9bdc06a79bdc06a79bdc06a79bdc06a7
- .octa 0xc19bf174c19bf174c19bf174c19bf174
- .octa 0xc19bf174c19bf174c19bf174c19bf174
- .octa 0xe49b69c1e49b69c1e49b69c1e49b69c1
- .octa 0xe49b69c1e49b69c1e49b69c1e49b69c1
- .octa 0xefbe4786efbe4786efbe4786efbe4786
- .octa 0xefbe4786efbe4786efbe4786efbe4786
- .octa 0x0fc19dc60fc19dc60fc19dc60fc19dc6
- .octa 0x0fc19dc60fc19dc60fc19dc60fc19dc6
- .octa 0x240ca1cc240ca1cc240ca1cc240ca1cc
- .octa 0x240ca1cc240ca1cc240ca1cc240ca1cc
- .octa 0x2de92c6f2de92c6f2de92c6f2de92c6f
- .octa 0x2de92c6f2de92c6f2de92c6f2de92c6f
- .octa 0x4a7484aa4a7484aa4a7484aa4a7484aa
- .octa 0x4a7484aa4a7484aa4a7484aa4a7484aa
- .octa 0x5cb0a9dc5cb0a9dc5cb0a9dc5cb0a9dc
- .octa 0x5cb0a9dc5cb0a9dc5cb0a9dc5cb0a9dc
- .octa 0x76f988da76f988da76f988da76f988da
- .octa 0x76f988da76f988da76f988da76f988da
- .octa 0x983e5152983e5152983e5152983e5152
- .octa 0x983e5152983e5152983e5152983e5152
- .octa 0xa831c66da831c66da831c66da831c66d
- .octa 0xa831c66da831c66da831c66da831c66d
- .octa 0xb00327c8b00327c8b00327c8b00327c8
- .octa 0xb00327c8b00327c8b00327c8b00327c8
- .octa 0xbf597fc7bf597fc7bf597fc7bf597fc7
- .octa 0xbf597fc7bf597fc7bf597fc7bf597fc7
- .octa 0xc6e00bf3c6e00bf3c6e00bf3c6e00bf3
- .octa 0xc6e00bf3c6e00bf3c6e00bf3c6e00bf3
- .octa 0xd5a79147d5a79147d5a79147d5a79147
- .octa 0xd5a79147d5a79147d5a79147d5a79147
- .octa 0x06ca635106ca635106ca635106ca6351
- .octa 0x06ca635106ca635106ca635106ca6351
- .octa 0x14292967142929671429296714292967
- .octa 0x14292967142929671429296714292967
- .octa 0x27b70a8527b70a8527b70a8527b70a85
- .octa 0x27b70a8527b70a8527b70a8527b70a85
- .octa 0x2e1b21382e1b21382e1b21382e1b2138
- .octa 0x2e1b21382e1b21382e1b21382e1b2138
- .octa 0x4d2c6dfc4d2c6dfc4d2c6dfc4d2c6dfc
- .octa 0x4d2c6dfc4d2c6dfc4d2c6dfc4d2c6dfc
- .octa 0x53380d1353380d1353380d1353380d13
- .octa 0x53380d1353380d1353380d1353380d13
- .octa 0x650a7354650a7354650a7354650a7354
- .octa 0x650a7354650a7354650a7354650a7354
- .octa 0x766a0abb766a0abb766a0abb766a0abb
- .octa 0x766a0abb766a0abb766a0abb766a0abb
- .octa 0x81c2c92e81c2c92e81c2c92e81c2c92e
- .octa 0x81c2c92e81c2c92e81c2c92e81c2c92e
- .octa 0x92722c8592722c8592722c8592722c85
- .octa 0x92722c8592722c8592722c8592722c85
- .octa 0xa2bfe8a1a2bfe8a1a2bfe8a1a2bfe8a1
- .octa 0xa2bfe8a1a2bfe8a1a2bfe8a1a2bfe8a1
- .octa 0xa81a664ba81a664ba81a664ba81a664b
- .octa 0xa81a664ba81a664ba81a664ba81a664b
- .octa 0xc24b8b70c24b8b70c24b8b70c24b8b70
- .octa 0xc24b8b70c24b8b70c24b8b70c24b8b70
- .octa 0xc76c51a3c76c51a3c76c51a3c76c51a3
- .octa 0xc76c51a3c76c51a3c76c51a3c76c51a3
- .octa 0xd192e819d192e819d192e819d192e819
- .octa 0xd192e819d192e819d192e819d192e819
- .octa 0xd6990624d6990624d6990624d6990624
- .octa 0xd6990624d6990624d6990624d6990624
- .octa 0xf40e3585f40e3585f40e3585f40e3585
- .octa 0xf40e3585f40e3585f40e3585f40e3585
- .octa 0x106aa070106aa070106aa070106aa070
- .octa 0x106aa070106aa070106aa070106aa070
- .octa 0x19a4c11619a4c11619a4c11619a4c116
- .octa 0x19a4c11619a4c11619a4c11619a4c116
- .octa 0x1e376c081e376c081e376c081e376c08
- .octa 0x1e376c081e376c081e376c081e376c08
- .octa 0x2748774c2748774c2748774c2748774c
- .octa 0x2748774c2748774c2748774c2748774c
- .octa 0x34b0bcb534b0bcb534b0bcb534b0bcb5
- .octa 0x34b0bcb534b0bcb534b0bcb534b0bcb5
- .octa 0x391c0cb3391c0cb3391c0cb3391c0cb3
- .octa 0x391c0cb3391c0cb3391c0cb3391c0cb3
- .octa 0x4ed8aa4a4ed8aa4a4ed8aa4a4ed8aa4a
- .octa 0x4ed8aa4a4ed8aa4a4ed8aa4a4ed8aa4a
- .octa 0x5b9cca4f5b9cca4f5b9cca4f5b9cca4f
- .octa 0x5b9cca4f5b9cca4f5b9cca4f5b9cca4f
- .octa 0x682e6ff3682e6ff3682e6ff3682e6ff3
- .octa 0x682e6ff3682e6ff3682e6ff3682e6ff3
- .octa 0x748f82ee748f82ee748f82ee748f82ee
- .octa 0x748f82ee748f82ee748f82ee748f82ee
- .octa 0x78a5636f78a5636f78a5636f78a5636f
- .octa 0x78a5636f78a5636f78a5636f78a5636f
- .octa 0x84c8781484c8781484c8781484c87814
- .octa 0x84c8781484c8781484c8781484c87814
- .octa 0x8cc702088cc702088cc702088cc70208
- .octa 0x8cc702088cc702088cc702088cc70208
- .octa 0x90befffa90befffa90befffa90befffa
- .octa 0x90befffa90befffa90befffa90befffa
- .octa 0xa4506ceba4506ceba4506ceba4506ceb
- .octa 0xa4506ceba4506ceba4506ceba4506ceb
- .octa 0xbef9a3f7bef9a3f7bef9a3f7bef9a3f7
- .octa 0xbef9a3f7bef9a3f7bef9a3f7bef9a3f7
- .octa 0xc67178f2c67178f2c67178f2c67178f2
- .octa 0xc67178f2c67178f2c67178f2c67178f2
-
-.section .rodata.cst32.PSHUFFLE_BYTE_FLIP_MASK, "aM", @progbits, 32
-.align 32
-PSHUFFLE_BYTE_FLIP_MASK:
-.octa 0x0c0d0e0f08090a0b0405060700010203
-.octa 0x0c0d0e0f08090a0b0405060700010203
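
The mask above byte-swaps each 32-bit word so that little-endian vector
loads yield the big-endian words SHA-256 operates on; per word the vpshufb
is equivalent to (sketch):

	#include <stdint.h>

	/* Load four bytes as a big-endian 32-bit word. */
	static inline uint32_t load_be32_ref(const uint8_t *p)
	{
		return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
		       ((uint32_t)p[2] << 8) | (uint32_t)p[3];
	}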
-
-.section .rodata.cst256.K256, "aM", @progbits, 256
-.align 64
-.global K256
-K256:
- .int 0x428a2f98,0x71374491,0xb5c0fbcf,0xe9b5dba5
- .int 0x3956c25b,0x59f111f1,0x923f82a4,0xab1c5ed5
- .int 0xd807aa98,0x12835b01,0x243185be,0x550c7dc3
- .int 0x72be5d74,0x80deb1fe,0x9bdc06a7,0xc19bf174
- .int 0xe49b69c1,0xefbe4786,0x0fc19dc6,0x240ca1cc
- .int 0x2de92c6f,0x4a7484aa,0x5cb0a9dc,0x76f988da
- .int 0x983e5152,0xa831c66d,0xb00327c8,0xbf597fc7
- .int 0xc6e00bf3,0xd5a79147,0x06ca6351,0x14292967
- .int 0x27b70a85,0x2e1b2138,0x4d2c6dfc,0x53380d13
- .int 0x650a7354,0x766a0abb,0x81c2c92e,0x92722c85
- .int 0xa2bfe8a1,0xa81a664b,0xc24b8b70,0xc76c51a3
- .int 0xd192e819,0xd6990624,0xf40e3585,0x106aa070
- .int 0x19a4c116,0x1e376c08,0x2748774c,0x34b0bcb5
- .int 0x391c0cb3,0x4ed8aa4a,0x5b9cca4f,0x682e6ff3
- .int 0x748f82ee,0x78a5636f,0x84c87814,0x8cc70208
- .int 0x90befffa,0xa4506ceb,0xbef9a3f7,0xc67178f2
diff --git a/arch/x86/crypto/sha512-mb/Makefile b/arch/x86/crypto/sha512-mb/Makefile
deleted file mode 100644
index 90f1ef6..00000000
--- a/arch/x86/crypto/sha512-mb/Makefile
+++ /dev/null
@@ -1,12 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-#
-# Arch-specific CryptoAPI modules.
-#
-
-avx2_supported := $(call as-instr,vpgatherdd %ymm0$(comma)(%eax$(comma)%ymm1\
- $(comma)4)$(comma)%ymm2,yes,no)
-ifeq ($(avx2_supported),yes)
- obj-$(CONFIG_CRYPTO_SHA512_MB) += sha512-mb.o
- sha512-mb-y := sha512_mb.o sha512_mb_mgr_flush_avx2.o \
- sha512_mb_mgr_init_avx2.o sha512_mb_mgr_submit_avx2.o sha512_x4_avx2.o
-endif
diff --git a/arch/x86/crypto/sha512-mb/sha512_mb.c b/arch/x86/crypto/sha512-mb/sha512_mb.c
deleted file mode 100644
index 26b8567..00000000
--- a/arch/x86/crypto/sha512-mb/sha512_mb.c
+++ /dev/null
@@ -1,1047 +0,0 @@
-/*
- * Multi buffer SHA512 algorithm Glue Code
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <crypto/internal/hash.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/mm.h>
-#include <linux/cryptohash.h>
-#include <linux/types.h>
-#include <linux/list.h>
-#include <crypto/scatterwalk.h>
-#include <crypto/sha.h>
-#include <crypto/mcryptd.h>
-#include <crypto/crypto_wq.h>
-#include <asm/byteorder.h>
-#include <linux/hardirq.h>
-#include <asm/fpu/api.h>
-#include "sha512_mb_ctx.h"
-
-#define FLUSH_INTERVAL 1000 /* in usec */
-
-static struct mcryptd_alg_state sha512_mb_alg_state;
-
-struct sha512_mb_ctx {
- struct mcryptd_ahash *mcryptd_tfm;
-};
-
-static inline struct mcryptd_hash_request_ctx
- *cast_hash_to_mcryptd_ctx(struct sha512_hash_ctx *hash_ctx)
-{
- struct ahash_request *areq;
-
- areq = container_of((void *) hash_ctx, struct ahash_request, __ctx);
- return container_of(areq, struct mcryptd_hash_request_ctx, areq);
-}
-
-static inline struct ahash_request
- *cast_mcryptd_ctx_to_req(struct mcryptd_hash_request_ctx *ctx)
-{
- return container_of((void *) ctx, struct ahash_request, __ctx);
-}
-
-static void req_ctx_init(struct mcryptd_hash_request_ctx *rctx,
- struct ahash_request *areq)
-{
- rctx->flag = HASH_UPDATE;
-}
-
-static asmlinkage void (*sha512_job_mgr_init)(struct sha512_mb_mgr *state);
-static asmlinkage struct job_sha512* (*sha512_job_mgr_submit)
- (struct sha512_mb_mgr *state,
- struct job_sha512 *job);
-static asmlinkage struct job_sha512* (*sha512_job_mgr_flush)
- (struct sha512_mb_mgr *state);
-static asmlinkage struct job_sha512* (*sha512_job_mgr_get_comp_job)
- (struct sha512_mb_mgr *state);
-
-inline uint32_t sha512_pad(uint8_t padblock[SHA512_BLOCK_SIZE * 2],
- uint64_t total_len)
-{
- uint32_t i = total_len & (SHA512_BLOCK_SIZE - 1);
-
- memset(&padblock[i], 0, SHA512_BLOCK_SIZE);
- padblock[i] = 0x80;
-
- i += ((SHA512_BLOCK_SIZE - 1) &
- (0 - (total_len + SHA512_PADLENGTHFIELD_SIZE + 1)))
- + 1 + SHA512_PADLENGTHFIELD_SIZE;
-
-#if SHA512_PADLENGTHFIELD_SIZE == 16
- *((uint64_t *) &padblock[i - 16]) = 0;
-#endif
-
- *((uint64_t *) &padblock[i - 8]) = cpu_to_be64(total_len << 3);
-
- /* Number of extra blocks to hash */
- return i >> SHA512_LOG2_BLOCK_SIZE;
-}
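
A worked example of the padding arithmetic (hypothetical input length,
illustrative only):

	uint8_t pad[SHA512_BLOCK_SIZE * 2];
	uint32_t extra;

	/* For a 200-byte message: i starts at 200 & 127 = 72, so
	 * pad[72] == 0x80; i then advances to 128, bytes 120..127 hold
	 * cpu_to_be64(200 << 3), and extra == 128 >> 7 == 1 block. */
	extra = sha512_pad(pad, 200);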
-
-static struct sha512_hash_ctx *sha512_ctx_mgr_resubmit
- (struct sha512_ctx_mgr *mgr, struct sha512_hash_ctx *ctx)
-{
- while (ctx) {
- if (ctx->status & HASH_CTX_STS_COMPLETE) {
- /* Clear PROCESSING bit */
- ctx->status = HASH_CTX_STS_COMPLETE;
- return ctx;
- }
-
- /*
- * If the extra blocks are empty, begin hashing what remains
- * in the user's buffer.
- */
- if (ctx->partial_block_buffer_length == 0 &&
- ctx->incoming_buffer_length) {
-
- const void *buffer = ctx->incoming_buffer;
- uint32_t len = ctx->incoming_buffer_length;
- uint32_t copy_len;
-
- /*
- * Only entire blocks can be hashed.
- * Copy remainder to extra blocks buffer.
- */
- copy_len = len & (SHA512_BLOCK_SIZE-1);
-
- if (copy_len) {
- len -= copy_len;
- memcpy(ctx->partial_block_buffer,
- ((const char *) buffer + len),
- copy_len);
- ctx->partial_block_buffer_length = copy_len;
- }
-
- ctx->incoming_buffer_length = 0;
-
- /* len should be a multiple of the block size now */
- assert((len % SHA512_BLOCK_SIZE) == 0);
-
- /* Set len to the number of blocks to be hashed */
- len >>= SHA512_LOG2_BLOCK_SIZE;
-
- if (len) {
-
- ctx->job.buffer = (uint8_t *) buffer;
- ctx->job.len = len;
- ctx = (struct sha512_hash_ctx *)
- sha512_job_mgr_submit(&mgr->mgr,
- &ctx->job);
- continue;
- }
- }
-
- /*
- * If the extra blocks are not empty, then we are
- * either on the last block(s) or we need more
- * user input before continuing.
- */
- if (ctx->status & HASH_CTX_STS_LAST) {
-
- uint8_t *buf = ctx->partial_block_buffer;
- uint32_t n_extra_blocks =
- sha512_pad(buf, ctx->total_length);
-
- ctx->status = (HASH_CTX_STS_PROCESSING |
- HASH_CTX_STS_COMPLETE);
- ctx->job.buffer = buf;
- ctx->job.len = (uint32_t) n_extra_blocks;
- ctx = (struct sha512_hash_ctx *)
- sha512_job_mgr_submit(&mgr->mgr, &ctx->job);
- continue;
- }
-
- if (ctx)
- ctx->status = HASH_CTX_STS_IDLE;
- return ctx;
- }
-
- return NULL;
-}
-
-static struct sha512_hash_ctx
- *sha512_ctx_mgr_get_comp_ctx(struct mcryptd_alg_cstate *cstate)
-{
- /*
- * If get_comp_job returns NULL, there are no jobs complete.
- * If get_comp_job returns a job, verify that it is safe to return to
- * the user.
- * If it is not ready, resubmit the job to finish processing.
- * If sha512_ctx_mgr_resubmit returned a job, it is ready to be
- * returned.
- * Otherwise, all jobs currently being managed by the hash_ctx_mgr
- * still need processing.
- */
- struct sha512_ctx_mgr *mgr;
- struct sha512_hash_ctx *ctx;
- unsigned long flags;
-
- mgr = cstate->mgr;
- spin_lock_irqsave(&cstate->work_lock, flags);
- ctx = (struct sha512_hash_ctx *)
- sha512_job_mgr_get_comp_job(&mgr->mgr);
- ctx = sha512_ctx_mgr_resubmit(mgr, ctx);
- spin_unlock_irqrestore(&cstate->work_lock, flags);
- return ctx;
-}
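
Callers drain completed jobs in a loop; a minimal sketch of the pattern
used by the completion path further down (consume_finished_job() is a
hypothetical consumer, not part of the patch):

	struct sha512_hash_ctx *ctx;

	/* NULL means every remaining job still needs processing or input. */
	while ((ctx = sha512_ctx_mgr_get_comp_ctx(cstate)) != NULL)
		consume_finished_job(ctx);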
-
-static void sha512_ctx_mgr_init(struct sha512_ctx_mgr *mgr)
-{
- sha512_job_mgr_init(&mgr->mgr);
-}
-
-static struct sha512_hash_ctx
- *sha512_ctx_mgr_submit(struct mcryptd_alg_cstate *cstate,
- struct sha512_hash_ctx *ctx,
- const void *buffer,
- uint32_t len,
- int flags)
-{
- struct sha512_ctx_mgr *mgr;
- unsigned long irqflags;
-
- mgr = cstate->mgr;
- spin_lock_irqsave(&cstate->work_lock, irqflags);
- if (flags & ~(HASH_UPDATE | HASH_LAST)) {
- /* User should not pass anything other than UPDATE or LAST */
- ctx->error = HASH_CTX_ERROR_INVALID_FLAGS;
- goto unlock;
- }
-
- if (ctx->status & HASH_CTX_STS_PROCESSING) {
- /* Cannot submit to a currently processing job. */
- ctx->error = HASH_CTX_ERROR_ALREADY_PROCESSING;
- goto unlock;
- }
-
- if (ctx->status & HASH_CTX_STS_COMPLETE) {
- /* Cannot update a finished job. */
- ctx->error = HASH_CTX_ERROR_ALREADY_COMPLETED;
- goto unlock;
- }
-
- /*
- * If we made it here, there were no errors during this call to
- * submit
- */
- ctx->error = HASH_CTX_ERROR_NONE;
-
- /* Store buffer ptr info from user */
- ctx->incoming_buffer = buffer;
- ctx->incoming_buffer_length = len;
-
- /*
- * Store the user's request flags and mark this ctx as currently being
- * processed.
- */
- ctx->status = (flags & HASH_LAST) ?
- (HASH_CTX_STS_PROCESSING | HASH_CTX_STS_LAST) :
- HASH_CTX_STS_PROCESSING;
-
- /* Advance byte counter */
- ctx->total_length += len;
-
- /*
- * If there is anything currently buffered in the extra blocks,
- * append to it until it contains a whole block.
- * Or if the user's buffer contains less than a whole block,
- * append as much as possible to the extra block.
- */
- if (ctx->partial_block_buffer_length || len < SHA512_BLOCK_SIZE) {
- /* Compute how many bytes to copy from user buffer into extra
- * block
- */
- uint32_t copy_len = SHA512_BLOCK_SIZE -
- ctx->partial_block_buffer_length;
- if (len < copy_len)
- copy_len = len;
-
- if (copy_len) {
- /* Copy and update relevant pointers and counters */
- memcpy
- (&ctx->partial_block_buffer[ctx->partial_block_buffer_length],
- buffer, copy_len);
-
- ctx->partial_block_buffer_length += copy_len;
- ctx->incoming_buffer = (const void *)
- ((const char *)buffer + copy_len);
- ctx->incoming_buffer_length = len - copy_len;
- }
-
- /* The extra block should never contain more than 1 block
- * here
- */
- assert(ctx->partial_block_buffer_length <= SHA512_BLOCK_SIZE);
-
- /* If the extra block buffer contains exactly 1 block, it can
- * be hashed.
- */
- if (ctx->partial_block_buffer_length >= SHA512_BLOCK_SIZE) {
- ctx->partial_block_buffer_length = 0;
-
- ctx->job.buffer = ctx->partial_block_buffer;
- ctx->job.len = 1;
- ctx = (struct sha512_hash_ctx *)
- sha512_job_mgr_submit(&mgr->mgr, &ctx->job);
- }
- }
-
- ctx = sha512_ctx_mgr_resubmit(mgr, ctx);
-unlock:
- spin_unlock_irqrestore(&cstate->work_lock, irqflags);
- return ctx;
-}
-
-static struct sha512_hash_ctx *sha512_ctx_mgr_flush(struct mcryptd_alg_cstate *cstate)
-{
- struct sha512_ctx_mgr *mgr;
- struct sha512_hash_ctx *ctx;
- unsigned long flags;
-
- mgr = cstate->mgr;
- spin_lock_irqsave(&cstate->work_lock, flags);
- while (1) {
- ctx = (struct sha512_hash_ctx *)
- sha512_job_mgr_flush(&mgr->mgr);
-
- /* If flush returned 0, there are no more jobs in flight. */
- if (!ctx)
- break;
-
- /*
- * If flush returned a job, resubmit the job to finish
- * processing.
- */
- ctx = sha512_ctx_mgr_resubmit(mgr, ctx);
-
- /*
- * If sha512_ctx_mgr_resubmit returned a job, it is ready to
- * be returned. Otherwise, all jobs currently being managed by
- * the sha512_ctx_mgr still need processing. Loop.
- */
- if (ctx)
- break;
- }
- spin_unlock_irqrestore(&cstate->work_lock, flags);
- return ctx;
-}
-
-static int sha512_mb_init(struct ahash_request *areq)
-{
- struct sha512_hash_ctx *sctx = ahash_request_ctx(areq);
-
- hash_ctx_init(sctx);
- sctx->job.result_digest[0] = SHA512_H0;
- sctx->job.result_digest[1] = SHA512_H1;
- sctx->job.result_digest[2] = SHA512_H2;
- sctx->job.result_digest[3] = SHA512_H3;
- sctx->job.result_digest[4] = SHA512_H4;
- sctx->job.result_digest[5] = SHA512_H5;
- sctx->job.result_digest[6] = SHA512_H6;
- sctx->job.result_digest[7] = SHA512_H7;
- sctx->total_length = 0;
- sctx->partial_block_buffer_length = 0;
- sctx->status = HASH_CTX_STS_IDLE;
-
- return 0;
-}
-
-static int sha512_mb_set_results(struct mcryptd_hash_request_ctx *rctx)
-{
- int i;
- struct sha512_hash_ctx *sctx = ahash_request_ctx(&rctx->areq);
- __be64 *dst = (__be64 *) rctx->out;
-
- for (i = 0; i < 8; ++i)
- dst[i] = cpu_to_be64(sctx->job.result_digest[i]);
-
- return 0;
-}
-
-static int sha_finish_walk(struct mcryptd_hash_request_ctx **ret_rctx,
- struct mcryptd_alg_cstate *cstate, bool flush)
-{
- int flag = HASH_UPDATE;
- int nbytes, err = 0;
- struct mcryptd_hash_request_ctx *rctx = *ret_rctx;
- struct sha512_hash_ctx *sha_ctx;
-
- /* more work ? */
- while (!(rctx->flag & HASH_DONE)) {
- nbytes = crypto_ahash_walk_done(&rctx->walk, 0);
- if (nbytes < 0) {
- err = nbytes;
- goto out;
- }
- /* check if the walk is done */
- if (crypto_ahash_walk_last(&rctx->walk)) {
- rctx->flag |= HASH_DONE;
- if (rctx->flag & HASH_FINAL)
- flag |= HASH_LAST;
-
- }
- sha_ctx = (struct sha512_hash_ctx *)
- ahash_request_ctx(&rctx->areq);
- kernel_fpu_begin();
- sha_ctx = sha512_ctx_mgr_submit(cstate, sha_ctx,
- rctx->walk.data, nbytes, flag);
- if (!sha_ctx) {
- if (flush)
- sha_ctx = sha512_ctx_mgr_flush(cstate);
- }
- kernel_fpu_end();
- if (sha_ctx)
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- else {
- rctx = NULL;
- goto out;
- }
- }
-
- /* copy the results */
- if (rctx->flag & HASH_FINAL)
- sha512_mb_set_results(rctx);
-
-out:
- *ret_rctx = rctx;
- return err;
-}
-
-static int sha_complete_job(struct mcryptd_hash_request_ctx *rctx,
- struct mcryptd_alg_cstate *cstate,
- int err)
-{
- struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
- struct sha512_hash_ctx *sha_ctx;
- struct mcryptd_hash_request_ctx *req_ctx;
- int ret;
- unsigned long flags;
-
- /* remove from work list */
- spin_lock_irqsave(&cstate->work_lock, flags);
- list_del(&rctx->waiter);
- spin_unlock_irqrestore(&cstate->work_lock, flags);
-
- if (irqs_disabled())
- rctx->complete(&req->base, err);
- else {
- local_bh_disable();
- rctx->complete(&req->base, err);
- local_bh_enable();
- }
-
- /* check to see if there are other jobs that are done */
- sha_ctx = sha512_ctx_mgr_get_comp_ctx(cstate);
- while (sha_ctx) {
- req_ctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&req_ctx, cstate, false);
- if (req_ctx) {
- spin_lock_irqsave(&cstate->work_lock, flags);
- list_del(&req_ctx->waiter);
- spin_unlock_irqrestore(&cstate->work_lock, flags);
-
- req = cast_mcryptd_ctx_to_req(req_ctx);
- if (irqs_disabled())
- req_ctx->complete(&req->base, ret);
- else {
- local_bh_disable();
- req_ctx->complete(&req->base, ret);
- local_bh_enable();
- }
- }
- sha_ctx = sha512_ctx_mgr_get_comp_ctx(cstate);
- }
-
- return 0;
-}
-
-static void sha512_mb_add_list(struct mcryptd_hash_request_ctx *rctx,
- struct mcryptd_alg_cstate *cstate)
-{
- unsigned long next_flush;
- unsigned long delay = usecs_to_jiffies(FLUSH_INTERVAL);
- unsigned long flags;
-
- /* initialize tag */
- rctx->tag.arrival = jiffies; /* tag the arrival time */
- rctx->tag.seq_num = cstate->next_seq_num++;
- next_flush = rctx->tag.arrival + delay;
- rctx->tag.expire = next_flush;
-
- spin_lock_irqsave(&cstate->work_lock, flags);
- list_add_tail(&rctx->waiter, &cstate->work_list);
- spin_unlock_irqrestore(&cstate->work_lock, flags);
-
- mcryptd_arm_flusher(cstate, delay);
-}
-
-static int sha512_mb_update(struct ahash_request *areq)
-{
- struct mcryptd_hash_request_ctx *rctx =
- container_of(areq, struct mcryptd_hash_request_ctx,
- areq);
- struct mcryptd_alg_cstate *cstate =
- this_cpu_ptr(sha512_mb_alg_state.alg_cstate);
-
- struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
- struct sha512_hash_ctx *sha_ctx;
- int ret = 0, nbytes;
-
-
- /* sanity check */
- if (rctx->tag.cpu != smp_processor_id()) {
- pr_err("mcryptd error: cpu clash\n");
- goto done;
- }
-
- /* need to init context */
- req_ctx_init(rctx, areq);
-
- nbytes = crypto_ahash_walk_first(req, &rctx->walk);
-
- if (nbytes < 0) {
- ret = nbytes;
- goto done;
- }
-
- if (crypto_ahash_walk_last(&rctx->walk))
- rctx->flag |= HASH_DONE;
-
- /* submit */
- sha_ctx = (struct sha512_hash_ctx *) ahash_request_ctx(areq);
- sha512_mb_add_list(rctx, cstate);
- kernel_fpu_begin();
- sha_ctx = sha512_ctx_mgr_submit(cstate, sha_ctx, rctx->walk.data,
- nbytes, HASH_UPDATE);
- kernel_fpu_end();
-
- /* check if anything is returned */
- if (!sha_ctx)
- return -EINPROGRESS;
-
- if (sha_ctx->error) {
- ret = sha_ctx->error;
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- goto done;
- }
-
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&rctx, cstate, false);
-
- if (!rctx)
- return -EINPROGRESS;
-done:
- sha_complete_job(rctx, cstate, ret);
- return ret;
-}
-
-static int sha512_mb_finup(struct ahash_request *areq)
-{
- struct mcryptd_hash_request_ctx *rctx =
- container_of(areq, struct mcryptd_hash_request_ctx,
- areq);
- struct mcryptd_alg_cstate *cstate =
- this_cpu_ptr(sha512_mb_alg_state.alg_cstate);
-
- struct ahash_request *req = cast_mcryptd_ctx_to_req(rctx);
- struct sha512_hash_ctx *sha_ctx;
- int ret = 0, flag = HASH_UPDATE, nbytes;
-
- /* sanity check */
- if (rctx->tag.cpu != smp_processor_id()) {
- pr_err("mcryptd error: cpu clash\n");
- goto done;
- }
-
- /* need to init context */
- req_ctx_init(rctx, areq);
-
- nbytes = crypto_ahash_walk_first(req, &rctx->walk);
-
- if (nbytes < 0) {
- ret = nbytes;
- goto done;
- }
-
- if (crypto_ahash_walk_last(&rctx->walk)) {
- rctx->flag |= HASH_DONE;
- flag = HASH_LAST;
- }
-
- /* submit */
- rctx->flag |= HASH_FINAL;
- sha_ctx = (struct sha512_hash_ctx *) ahash_request_ctx(areq);
- sha512_mb_add_list(rctx, cstate);
-
- kernel_fpu_begin();
- sha_ctx = sha512_ctx_mgr_submit(cstate, sha_ctx, rctx->walk.data,
- nbytes, flag);
- kernel_fpu_end();
-
- /* check if anything is returned */
- if (!sha_ctx)
- return -EINPROGRESS;
-
- if (sha_ctx->error) {
- ret = sha_ctx->error;
- goto done;
- }
-
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&rctx, cstate, false);
- if (!rctx)
- return -EINPROGRESS;
-done:
- sha_complete_job(rctx, cstate, ret);
- return ret;
-}
-
-static int sha512_mb_final(struct ahash_request *areq)
-{
- struct mcryptd_hash_request_ctx *rctx =
- container_of(areq, struct mcryptd_hash_request_ctx,
- areq);
- struct mcryptd_alg_cstate *cstate =
- this_cpu_ptr(sha512_mb_alg_state.alg_cstate);
-
- struct sha512_hash_ctx *sha_ctx;
- int ret = 0;
- u8 data;
-
- /* sanity check */
- if (rctx->tag.cpu != smp_processor_id()) {
- pr_err("mcryptd error: cpu clash\n");
- goto done;
- }
-
- /* need to init context */
- req_ctx_init(rctx, areq);
-
- rctx->flag |= HASH_DONE | HASH_FINAL;
-
- sha_ctx = (struct sha512_hash_ctx *) ahash_request_ctx(areq);
- /* flag HASH_FINAL and 0 data size */
- sha512_mb_add_list(rctx, cstate);
- kernel_fpu_begin();
- sha_ctx = sha512_ctx_mgr_submit(cstate, sha_ctx, &data, 0, HASH_LAST);
- kernel_fpu_end();
-
- /* check if anything is returned */
- if (!sha_ctx)
- return -EINPROGRESS;
-
- if (sha_ctx->error) {
- ret = sha_ctx->error;
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- goto done;
- }
-
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- ret = sha_finish_walk(&rctx, cstate, false);
- if (!rctx)
- return -EINPROGRESS;
-done:
- sha_complete_job(rctx, cstate, ret);
- return ret;
-}
-
-static int sha512_mb_export(struct ahash_request *areq, void *out)
-{
- struct sha512_hash_ctx *sctx = ahash_request_ctx(areq);
-
- memcpy(out, sctx, sizeof(*sctx));
-
- return 0;
-}
-
-static int sha512_mb_import(struct ahash_request *areq, const void *in)
-{
- struct sha512_hash_ctx *sctx = ahash_request_ctx(areq);
-
- memcpy(sctx, in, sizeof(*sctx));
-
- return 0;
-}
-
-static int sha512_mb_async_init_tfm(struct crypto_tfm *tfm)
-{
- struct mcryptd_ahash *mcryptd_tfm;
- struct sha512_mb_ctx *ctx = crypto_tfm_ctx(tfm);
- struct mcryptd_hash_ctx *mctx;
-
- mcryptd_tfm = mcryptd_alloc_ahash("__intel_sha512-mb",
- CRYPTO_ALG_INTERNAL,
- CRYPTO_ALG_INTERNAL);
- if (IS_ERR(mcryptd_tfm))
- return PTR_ERR(mcryptd_tfm);
- mctx = crypto_ahash_ctx(&mcryptd_tfm->base);
- mctx->alg_state = &sha512_mb_alg_state;
- ctx->mcryptd_tfm = mcryptd_tfm;
- crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
- sizeof(struct ahash_request) +
- crypto_ahash_reqsize(&mcryptd_tfm->base));
-
- return 0;
-}
-
-static void sha512_mb_async_exit_tfm(struct crypto_tfm *tfm)
-{
- struct sha512_mb_ctx *ctx = crypto_tfm_ctx(tfm);
-
- mcryptd_free_ahash(ctx->mcryptd_tfm);
-}
-
-static int sha512_mb_areq_init_tfm(struct crypto_tfm *tfm)
-{
- crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
- sizeof(struct ahash_request) +
- sizeof(struct sha512_hash_ctx));
-
- return 0;
-}
-
-static void sha512_mb_areq_exit_tfm(struct crypto_tfm *tfm)
-{
- struct sha512_mb_ctx *ctx = crypto_tfm_ctx(tfm);
-
- mcryptd_free_ahash(ctx->mcryptd_tfm);
-}
-
-static struct ahash_alg sha512_mb_areq_alg = {
- .init = sha512_mb_init,
- .update = sha512_mb_update,
- .final = sha512_mb_final,
- .finup = sha512_mb_finup,
- .export = sha512_mb_export,
- .import = sha512_mb_import,
- .halg = {
- .digestsize = SHA512_DIGEST_SIZE,
- .statesize = sizeof(struct sha512_hash_ctx),
- .base = {
- .cra_name = "__sha512-mb",
- .cra_driver_name = "__intel_sha512-mb",
- .cra_priority = 100,
- /*
- * Use the ASYNC flag, as some buffers in the multi-buffer
- * algorithm may not have completed before the hashing
- * thread sleeps.
- */
- .cra_flags = CRYPTO_ALG_ASYNC |
- CRYPTO_ALG_INTERNAL,
- .cra_blocksize = SHA512_BLOCK_SIZE,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT
- (sha512_mb_areq_alg.halg.base.cra_list),
- .cra_init = sha512_mb_areq_init_tfm,
- .cra_exit = sha512_mb_areq_exit_tfm,
- .cra_ctxsize = sizeof(struct sha512_hash_ctx),
- }
- }
-};
-
-static int sha512_mb_async_init(struct ahash_request *req)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha512_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_init(mcryptd_req);
-}
-
-static int sha512_mb_async_update(struct ahash_request *req)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
-
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha512_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_update(mcryptd_req);
-}
-
-static int sha512_mb_async_finup(struct ahash_request *req)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
-
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha512_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_finup(mcryptd_req);
-}
-
-static int sha512_mb_async_final(struct ahash_request *req)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
-
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha512_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_final(mcryptd_req);
-}
-
-static int sha512_mb_async_digest(struct ahash_request *req)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha512_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_digest(mcryptd_req);
-}
-
-static int sha512_mb_async_export(struct ahash_request *req, void *out)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha512_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- return crypto_ahash_export(mcryptd_req, out);
-}
-
-static int sha512_mb_async_import(struct ahash_request *req, const void *in)
-{
- struct ahash_request *mcryptd_req = ahash_request_ctx(req);
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct sha512_mb_ctx *ctx = crypto_ahash_ctx(tfm);
- struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
- struct crypto_ahash *child = mcryptd_ahash_child(mcryptd_tfm);
- struct mcryptd_hash_request_ctx *rctx;
- struct ahash_request *areq;
-
- memcpy(mcryptd_req, req, sizeof(*req));
- ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
- rctx = ahash_request_ctx(mcryptd_req);
-
- areq = &rctx->areq;
-
- ahash_request_set_tfm(areq, child);
- ahash_request_set_callback(areq, CRYPTO_TFM_REQ_MAY_SLEEP,
- rctx->complete, req);
-
- return crypto_ahash_import(mcryptd_req, in);
-}
-
-static struct ahash_alg sha512_mb_async_alg = {
- .init = sha512_mb_async_init,
- .update = sha512_mb_async_update,
- .final = sha512_mb_async_final,
- .finup = sha512_mb_async_finup,
- .digest = sha512_mb_async_digest,
- .export = sha512_mb_async_export,
- .import = sha512_mb_async_import,
- .halg = {
- .digestsize = SHA512_DIGEST_SIZE,
- .statesize = sizeof(struct sha512_hash_ctx),
- .base = {
- .cra_name = "sha512",
- .cra_driver_name = "sha512_mb",
- /*
- * Low priority, since with few concurrent hash requests
- * this is extremely slow due to the flush delay. Users
- * whose workloads would benefit from this can request
- * it explicitly by driver name, or can increase its
- * priority at runtime using NETLINK_CRYPTO.
- */
- .cra_priority = 50,
- .cra_flags = CRYPTO_ALG_ASYNC,
- .cra_blocksize = SHA512_BLOCK_SIZE,
- .cra_module = THIS_MODULE,
- .cra_list = LIST_HEAD_INIT
- (sha512_mb_async_alg.halg.base.cra_list),
- .cra_init = sha512_mb_async_init_tfm,
- .cra_exit = sha512_mb_async_exit_tfm,
- .cra_ctxsize = sizeof(struct sha512_mb_ctx),
- .cra_alignmask = 0,
- },
- },
-};
-
-static unsigned long sha512_mb_flusher(struct mcryptd_alg_cstate *cstate)
-{
- struct mcryptd_hash_request_ctx *rctx;
- unsigned long cur_time;
- unsigned long next_flush = 0;
- struct sha512_hash_ctx *sha_ctx;
-
-
- cur_time = jiffies;
-
- while (!list_empty(&cstate->work_list)) {
- rctx = list_entry(cstate->work_list.next,
- struct mcryptd_hash_request_ctx, waiter);
- if (time_before(cur_time, rctx->tag.expire))
- break;
- kernel_fpu_begin();
- sha_ctx = (struct sha512_hash_ctx *)
- sha512_ctx_mgr_flush(cstate);
- kernel_fpu_end();
- if (!sha_ctx) {
- pr_err("sha512_mb error: nothing got flushed for"
- " non-empty list\n");
- break;
- }
- rctx = cast_hash_to_mcryptd_ctx(sha_ctx);
- sha_finish_walk(&rctx, cstate, true);
- sha_complete_job(rctx, cstate, 0);
- }
-
- if (!list_empty(&cstate->work_list)) {
- rctx = list_entry(cstate->work_list.next,
- struct mcryptd_hash_request_ctx, waiter);
- /* get the hash context and then flush time */
- next_flush = rctx->tag.expire;
- mcryptd_arm_flusher(cstate, get_delay(next_flush));
- }
- return next_flush;
-}
-
-static int __init sha512_mb_mod_init(void)
-{
-
- int cpu;
- int err;
- struct mcryptd_alg_cstate *cpu_state;
-
- /* check for dependent cpu features */
- if (!boot_cpu_has(X86_FEATURE_AVX2) ||
- !boot_cpu_has(X86_FEATURE_BMI2))
- return -ENODEV;
-
- /* initialize multibuffer structures */
- sha512_mb_alg_state.alg_cstate =
- alloc_percpu(struct mcryptd_alg_cstate);
-
- sha512_job_mgr_init = sha512_mb_mgr_init_avx2;
- sha512_job_mgr_submit = sha512_mb_mgr_submit_avx2;
- sha512_job_mgr_flush = sha512_mb_mgr_flush_avx2;
- sha512_job_mgr_get_comp_job = sha512_mb_mgr_get_comp_job_avx2;
-
- if (!sha512_mb_alg_state.alg_cstate)
- return -ENOMEM;
- for_each_possible_cpu(cpu) {
- cpu_state = per_cpu_ptr(sha512_mb_alg_state.alg_cstate, cpu);
- cpu_state->next_flush = 0;
- cpu_state->next_seq_num = 0;
- cpu_state->flusher_engaged = false;
- INIT_DELAYED_WORK(&cpu_state->flush, mcryptd_flusher);
- cpu_state->cpu = cpu;
- cpu_state->alg_state = &sha512_mb_alg_state;
- cpu_state->mgr = kzalloc(sizeof(struct sha512_ctx_mgr),
- GFP_KERNEL);
- if (!cpu_state->mgr)
- goto err2;
- sha512_ctx_mgr_init(cpu_state->mgr);
- INIT_LIST_HEAD(&cpu_state->work_list);
- spin_lock_init(&cpu_state->work_lock);
- }
- sha512_mb_alg_state.flusher = &sha512_mb_flusher;
-
- err = crypto_register_ahash(&sha512_mb_areq_alg);
- if (err)
- goto err2;
- err = crypto_register_ahash(&sha512_mb_async_alg);
- if (err)
- goto err1;
-
-
- return 0;
-err1:
- crypto_unregister_ahash(&sha512_mb_areq_alg);
-err2:
- for_each_possible_cpu(cpu) {
- cpu_state = per_cpu_ptr(sha512_mb_alg_state.alg_cstate, cpu);
- kfree(cpu_state->mgr);
- }
- free_percpu(sha512_mb_alg_state.alg_cstate);
- return -ENODEV;
-}
-
-static void __exit sha512_mb_mod_fini(void)
-{
- int cpu;
- struct mcryptd_alg_cstate *cpu_state;
-
- crypto_unregister_ahash(&sha512_mb_async_alg);
- crypto_unregister_ahash(&sha512_mb_areq_alg);
- for_each_possible_cpu(cpu) {
- cpu_state = per_cpu_ptr(sha512_mb_alg_state.alg_cstate, cpu);
- kfree(cpu_state->mgr);
- }
- free_percpu(sha512_mb_alg_state.alg_cstate);
-}
-
-module_init(sha512_mb_mod_init);
-module_exit(sha512_mb_mod_fini);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("SHA512 Secure Hash Algorithm, multi buffer accelerated");
-
-MODULE_ALIAS("sha512");
diff --git a/arch/x86/crypto/sha512-mb/sha512_mb_ctx.h b/arch/x86/crypto/sha512-mb/sha512_mb_ctx.h
deleted file mode 100644
index e5c465b..00000000
--- a/arch/x86/crypto/sha512-mb/sha512_mb_ctx.h
+++ /dev/null
@@ -1,128 +0,0 @@
-/*
- * Header file for multi buffer SHA512 context
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _SHA_MB_CTX_INTERNAL_H
-#define _SHA_MB_CTX_INTERNAL_H
-
-#include "sha512_mb_mgr.h"
-
-#define HASH_UPDATE 0x00
-#define HASH_LAST 0x01
-#define HASH_DONE 0x02
-#define HASH_FINAL 0x04
-
-#define HASH_CTX_STS_IDLE 0x00
-#define HASH_CTX_STS_PROCESSING 0x01
-#define HASH_CTX_STS_LAST 0x02
-#define HASH_CTX_STS_COMPLETE 0x04
-
-enum hash_ctx_error {
- HASH_CTX_ERROR_NONE = 0,
- HASH_CTX_ERROR_INVALID_FLAGS = -1,
- HASH_CTX_ERROR_ALREADY_PROCESSING = -2,
- HASH_CTX_ERROR_ALREADY_COMPLETED = -3,
-};
-
-#define hash_ctx_user_data(ctx) ((ctx)->user_data)
-#define hash_ctx_digest(ctx) ((ctx)->job.result_digest)
-#define hash_ctx_processing(ctx) ((ctx)->status & HASH_CTX_STS_PROCESSING)
-#define hash_ctx_complete(ctx) ((ctx)->status == HASH_CTX_STS_COMPLETE)
-#define hash_ctx_status(ctx) ((ctx)->status)
-#define hash_ctx_error(ctx) ((ctx)->error)
-#define hash_ctx_init(ctx) \
- do { \
- (ctx)->error = HASH_CTX_ERROR_NONE; \
- (ctx)->status = HASH_CTX_STS_COMPLETE; \
- } while (0)
-
-/* Hash Constants and Typedefs */
-#define SHA512_DIGEST_LENGTH 8
-#define SHA512_LOG2_BLOCK_SIZE 7
-
-#define SHA512_PADLENGTHFIELD_SIZE 16
-
-#ifdef SHA_MB_DEBUG
-#define assert(expr) \
-do { \
- if (unlikely(!(expr))) { \
- printk(KERN_ERR "Assertion failed! %s,%s,%s,line=%d\n", \
- #expr, __FILE__, __func__, __LINE__); \
- } \
-} while (0)
-#else
-#define assert(expr) do {} while (0)
-#endif
-
-struct sha512_ctx_mgr {
- struct sha512_mb_mgr mgr;
-};
-
-/* typedef struct sha512_ctx_mgr sha512_ctx_mgr; */
-
-struct sha512_hash_ctx {
- /* Must be at struct offset 0 */
- struct job_sha512 job;
- /* status flag */
- int status;
- /* error flag */
- int error;
-
- uint64_t total_length;
- const void *incoming_buffer;
- uint32_t incoming_buffer_length;
- uint8_t partial_block_buffer[SHA512_BLOCK_SIZE * 2];
- uint32_t partial_block_buffer_length;
- void *user_data;
-};
-
-#endif
diff --git a/arch/x86/crypto/sha512-mb/sha512_mb_mgr.h b/arch/x86/crypto/sha512-mb/sha512_mb_mgr.h
deleted file mode 100644
index 178f17e..00000000
--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr.h
+++ /dev/null
@@ -1,104 +0,0 @@
-/*
- * Header file for multi buffer SHA512 algorithm manager
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef __SHA_MB_MGR_H
-#define __SHA_MB_MGR_H
-
-#include <linux/types.h>
-
-#define NUM_SHA512_DIGEST_WORDS 8
-
-enum job_sts {STS_UNKNOWN = 0,
- STS_BEING_PROCESSED = 1,
- STS_COMPLETED = 2,
- STS_INTERNAL_ERROR = 3,
- STS_ERROR = 4
-};
-
-struct job_sha512 {
- u8 *buffer;
- u64 len;
- u64 result_digest[NUM_SHA512_DIGEST_WORDS] __aligned(32);
- enum job_sts status;
- void *user_data;
-};
-
-struct sha512_args_x4 {
- uint64_t digest[8][4];
- uint8_t *data_ptr[4];
-};
-
-struct sha512_lane_data {
- struct job_sha512 *job_in_lane;
-};
-
-struct sha512_mb_mgr {
- struct sha512_args_x4 args;
-
- uint64_t lens[4];
-
- /* each byte is index (0...7) of unused lanes */
- uint64_t unused_lanes;
- /* byte 4 is set to FF as a flag */
- struct sha512_lane_data ldata[4];
-};
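
A sketch of the packed free-lane list described above, one lane index per
byte, consumed from the low end (the AVX2 manager is assumed to start from
0xFF03020100 for its four lanes, with 0xFF marking the end of the list):

	#include <stdint.h>

	/* Take the next free lane from the low byte of the packed list. */
	static uint64_t pop_lane(uint64_t *unused_lanes)
	{
		uint64_t lane = *unused_lanes & 0xFF;

		*unused_lanes >>= 8;
		return lane;
	}

	/* Return a lane to the head of the packed free list. */
	static void push_lane(uint64_t *unused_lanes, uint64_t lane)
	{
		*unused_lanes = (*unused_lanes << 8) | lane;
	}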
-
-#define SHA512_MB_MGR_NUM_LANES_AVX2 4
-
-void sha512_mb_mgr_init_avx2(struct sha512_mb_mgr *state);
-struct job_sha512 *sha512_mb_mgr_submit_avx2(struct sha512_mb_mgr *state,
- struct job_sha512 *job);
-struct job_sha512 *sha512_mb_mgr_flush_avx2(struct sha512_mb_mgr *state);
-struct job_sha512 *sha512_mb_mgr_get_comp_job_avx2(struct sha512_mb_mgr *state);
-
-#endif
diff --git a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_datastruct.S b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_datastruct.S
deleted file mode 100644
index cf2636d..00000000
--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_datastruct.S
+++ /dev/null
@@ -1,281 +0,0 @@
-/*
- * Header file for multi buffer SHA512 algorithm data structure
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- * Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-# Macros for defining data structures
-
-# Usage example
-
-#START_FIELDS # JOB_AES
-### name size align
-#FIELD _plaintext, 8, 8 # pointer to plaintext
-#FIELD _ciphertext, 8, 8 # pointer to ciphertext
-#FIELD _IV, 16, 8 # IV
-#FIELD _keys, 8, 8 # pointer to keys
-#FIELD _len, 4, 4 # length in bytes
-#FIELD _status, 4, 4 # status enumeration
-#FIELD _user_data, 8, 8 # pointer to user data
-#UNION _union, size1, align1, \
-# size2, align2, \
-# size3, align3, \
-# ...
-#END_FIELDS
-#%assign _JOB_AES_size _FIELD_OFFSET
-#%assign _JOB_AES_align _STRUCT_ALIGN
-
-#########################################################################
-
-# Alternate "struc-like" syntax:
-# STRUCT job_aes2
-# RES_Q .plaintext, 1
-# RES_Q .ciphertext, 1
-# RES_DQ .IV, 1
-# RES_B .nested, _JOB_AES_SIZE, _JOB_AES_ALIGN
-# RES_U .union, size1, align1, \
-# size2, align2, \
-# ...
-# ENDSTRUCT
-# # Following only needed if nesting
-# %assign job_aes2_size _FIELD_OFFSET
-# %assign job_aes2_align _STRUCT_ALIGN
-#
-# RES_* macros take a name, a count and an optional alignment.
-# The count is in terms of the base size of the macro, and the
-# default alignment is the base size.
-# The macros are:
-# Macro Base size
-# RES_B 1
-# RES_W 2
-# RES_D 4
-# RES_Q 8
-# RES_DQ 16
-# RES_Y 32
-# RES_Z 64
-#
-# RES_U defines a union. Its arguments are a name and two or more
-# pairs of "size, alignment"
-#
-# The two assigns are only needed if this structure is being nested
-# within another. Even if the assigns are not done, one can still use
-# STRUCT_NAME_size as the size of the structure.
-#
-# Note that for nesting, you still need to assign to STRUCT_NAME_size.
-#
-# The differences between this and using "struc" directly are that each
-# type is implicitly aligned to its natural length (although this can be
-# over-ridden with an explicit third parameter), and that the structure
-# is padded at the end to its overall alignment.
-#
-
-#########################################################################
-
-#ifndef _DATASTRUCT_ASM_
-#define _DATASTRUCT_ASM_
-
-#define PTR_SZ 8
-#define SHA512_DIGEST_WORD_SIZE 8
-#define SHA512_MB_MGR_NUM_LANES_AVX2 4
-#define NUM_SHA512_DIGEST_WORDS 8
-#define SZ4 4*SHA512_DIGEST_WORD_SIZE
-#define ROUNDS 80*SZ4
-#define SHA512_DIGEST_ROW_SIZE (SHA512_MB_MGR_NUM_LANES_AVX2 * 8)
-
-# START_FIELDS
-.macro START_FIELDS
- _FIELD_OFFSET = 0
- _STRUCT_ALIGN = 0
-.endm
-
-# FIELD name size align
-.macro FIELD name size align
- _FIELD_OFFSET = (_FIELD_OFFSET + (\align) - 1) & (~ ((\align)-1))
- \name = _FIELD_OFFSET
- _FIELD_OFFSET = _FIELD_OFFSET + (\size)
-.if (\align > _STRUCT_ALIGN)
- _STRUCT_ALIGN = \align
-.endif
-.endm
-
-# END_FIELDS
-.macro END_FIELDS
- _FIELD_OFFSET = (_FIELD_OFFSET + _STRUCT_ALIGN-1) & (~ (_STRUCT_ALIGN-1))
-.endm
-
-.macro STRUCT p1
-START_FIELDS
-.struc \p1
-.endm
-
-.macro ENDSTRUCT
- tmp = _FIELD_OFFSET
- END_FIELDS
- tmp = (_FIELD_OFFSET - ##tmp)
-.if (tmp > 0)
- .lcomm tmp
-.endm
-
-## RES_int name size align
-.macro RES_int p1 p2 p3
- name = \p1
- size = \p2
- align = .\p3
-
- _FIELD_OFFSET = (_FIELD_OFFSET + (align) - 1) & (~ ((align)-1))
-.align align
-.lcomm name size
- _FIELD_OFFSET = _FIELD_OFFSET + (size)
-.if (align > _STRUCT_ALIGN)
- _STRUCT_ALIGN = align
-.endif
-.endm
-
-# macro RES_B name, size [, align]
-.macro RES_B _name, _size, _align=1
-RES_int _name _size _align
-.endm
-
-# macro RES_W name, size [, align]
-.macro RES_W _name, _size, _align=2
-RES_int _name 2*(_size) _align
-.endm
-
-# macro RES_D name, size [, align]
-.macro RES_D _name, _size, _align=4
-RES_int _name 4*(_size) _align
-.endm
-
-# macro RES_Q name, size [, align]
-.macro RES_Q _name, _size, _align=8
-RES_int _name 8*(_size) _align
-.endm
-
-# macro RES_DQ name, size [, align]
-.macro RES_DQ _name, _size, _align=16
-RES_int _name 16*(_size) _align
-.endm
-
-# macro RES_Y name, size [, align]
-.macro RES_Y _name, _size, _align=32
-RES_int _name 32*(_size) _align
-.endm
-
-# macro RES_Z name, size [, align]
-.macro RES_Z _name, _size, _align=64
-RES_int _name 64*(_size) _align
-.endm
-
-#endif
-
-###################################################################
-### Define SHA512 Out Of Order Data Structures
-###################################################################
-
-START_FIELDS # LANE_DATA
-### name size align
-FIELD _job_in_lane, 8, 8 # pointer to job object
-END_FIELDS
-
- _LANE_DATA_size = _FIELD_OFFSET
- _LANE_DATA_align = _STRUCT_ALIGN
-
-####################################################################
-
-START_FIELDS # SHA512_ARGS_X4
-### name size align
-FIELD _digest, 8*8*4, 4 # transposed digest
-FIELD _data_ptr, 8*4, 8 # array of pointers to data
-END_FIELDS
-
- _SHA512_ARGS_X4_size = _FIELD_OFFSET
- _SHA512_ARGS_X4_align = _STRUCT_ALIGN
-
-#####################################################################
-
-START_FIELDS # MB_MGR
-### name size align
-FIELD _args, _SHA512_ARGS_X4_size, _SHA512_ARGS_X4_align
-FIELD _lens, 8*4, 8
-FIELD _unused_lanes, 8, 8
-FIELD _ldata, _LANE_DATA_size*4, _LANE_DATA_align
-END_FIELDS
-
- _MB_MGR_size = _FIELD_OFFSET
- _MB_MGR_align = _STRUCT_ALIGN
-
-_args_digest = _args + _digest
-_args_data_ptr = _args + _data_ptr
-
-#######################################################################
-
-#######################################################################
-#### Define constants
-#######################################################################
-
-#define STS_UNKNOWN 0
-#define STS_BEING_PROCESSED 1
-#define STS_COMPLETED 2
-
-#######################################################################
-#### Define JOB_SHA512 structure
-#######################################################################
-
-START_FIELDS # JOB_SHA512
-### name size align
-FIELD _buffer, 8, 8 # pointer to buffer
-FIELD _len, 8, 8 # length in bytes
-FIELD _result_digest, 8*8, 32 # Digest (output)
-FIELD _status, 4, 4
-FIELD _user_data, 8, 8
-END_FIELDS
-
- _JOB_SHA512_size = _FIELD_OFFSET
- _JOB_SHA512_align = _STRUCT_ALIGN
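
For review convenience: the START_FIELDS/FIELD/END_FIELDS macros deleted above
compute struct offsets with the standard align-up idiom,
(off + align - 1) & ~(align - 1), and END_FIELDS pads the total size to the
largest member alignment seen. A minimal stand-alone C sketch of that
arithmetic, applied to the JOB_SHA512 layout (the function and variable names
here are illustrative, not kernel symbols):

#include <stdio.h>

static unsigned long field_offset; /* mirrors _FIELD_OFFSET */
static unsigned long struct_align; /* mirrors _STRUCT_ALIGN */

/* FIELD: round the running offset up to 'align', place the field there,
 * then advance past it and track the largest alignment seen. */
static unsigned long field(unsigned long size, unsigned long align)
{
	unsigned long off = (field_offset + align - 1) & ~(align - 1);

	field_offset = off + size;
	if (align > struct_align)
		struct_align = align;
	return off;
}

int main(void)
{
	field(8, 8);                             /* _buffer        */
	field(8, 8);                             /* _len           */
	unsigned long digest = field(8 * 8, 32); /* _result_digest */
	field(4, 4);                             /* _status        */
	field(8, 8);                             /* _user_data     */

	/* END_FIELDS: pad the total size up to the overall alignment */
	unsigned long size = (field_offset + struct_align - 1) &
			     ~(struct_align - 1);

	printf("_result_digest = %lu, _JOB_SHA512_size = %lu\n", digest, size);
	return 0; /* prints 32 and 128 */
}
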
diff --git a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S
deleted file mode 100644
index 7c629ca..00000000
--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S
+++ /dev/null
@@ -1,297 +0,0 @@
-/*
- * Flush routine for SHA512 multibuffer
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- *     Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <linux/linkage.h>
-#include <asm/frame.h>
-#include "sha512_mb_mgr_datastruct.S"
-
-.extern sha512_x4_avx2
-
-# LINUX register definitions
-#define arg1 %rdi
-#define arg2 %rsi
-
-# idx needs to be other than arg1, arg2, rbx, r12
-#define idx %rdx
-
-# Common definitions
-#define state arg1
-#define job arg2
-#define len2 arg2
-
-#define unused_lanes %rbx
-#define lane_data %rbx
-#define tmp2 %rbx
-
-#define job_rax %rax
-#define tmp1 %rax
-#define size_offset %rax
-#define tmp %rax
-#define start_offset %rax
-
-#define tmp3 arg1
-
-#define extra_blocks arg2
-#define p arg2
-
-#define tmp4 %r8
-#define lens0 %r8
-
-#define lens1 %r9
-#define lens2 %r10
-#define lens3 %r11
-
-.macro LABEL prefix n
-\prefix\n\():
-.endm
-
-.macro JNE_SKIP i
-jne skip_\i
-.endm
-
-.altmacro
-.macro SET_OFFSET _offset
-offset = \_offset
-.endm
-.noaltmacro
-
-# JOB* sha512_mb_mgr_flush_avx2(MB_MGR *state)
-# arg 1 : rdi : state
-ENTRY(sha512_mb_mgr_flush_avx2)
- FRAME_BEGIN
- push %rbx
-
-	# If bit (32+7) is set, then all lanes are empty
- mov _unused_lanes(state), unused_lanes
- bt $32+7, unused_lanes
- jc return_null
-
- # find a lane with a non-null job
- xor idx, idx
- offset = (_ldata + 1*_LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne one(%rip), idx
- offset = (_ldata + 2*_LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne two(%rip), idx
- offset = (_ldata + 3*_LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
- cmovne three(%rip), idx
-
- # copy idx to empty lanes
-copy_lane_data:
- offset = (_args + _data_ptr)
- mov offset(state,idx,8), tmp
-
- I = 0
-.rep 4
- offset = (_ldata + I * _LANE_DATA_size + _job_in_lane)
- cmpq $0, offset(state)
-.altmacro
- JNE_SKIP %I
- offset = (_args + _data_ptr + 8*I)
- mov tmp, offset(state)
- offset = (_lens + 8*I +4)
- movl $0xFFFFFFFF, offset(state)
-LABEL skip_ %I
- I = (I+1)
-.noaltmacro
-.endr
-
- # Find min length
- mov _lens + 0*8(state),lens0
- mov lens0,idx
- mov _lens + 1*8(state),lens1
- cmp idx,lens1
- cmovb lens1,idx
- mov _lens + 2*8(state),lens2
- cmp idx,lens2
- cmovb lens2,idx
- mov _lens + 3*8(state),lens3
- cmp idx,lens3
- cmovb lens3,idx
- mov idx,len2
- and $0xF,idx
- and $~0xFF,len2
- jz len_is_0
-
- sub len2, lens0
- sub len2, lens1
- sub len2, lens2
- sub len2, lens3
- shr $32,len2
- mov lens0, _lens + 0*8(state)
- mov lens1, _lens + 1*8(state)
- mov lens2, _lens + 2*8(state)
- mov lens3, _lens + 3*8(state)
-
- # "state" and "args" are the same address, arg1
- # len is arg2
- call sha512_x4_avx2
- # state and idx are intact
-
-len_is_0:
- # process completed job "idx"
- imul $_LANE_DATA_size, idx, lane_data
- lea _ldata(state, lane_data), lane_data
-
- mov _job_in_lane(lane_data), job_rax
- movq $0, _job_in_lane(lane_data)
- movl $STS_COMPLETED, _status(job_rax)
- mov _unused_lanes(state), unused_lanes
- shl $8, unused_lanes
- or idx, unused_lanes
- mov unused_lanes, _unused_lanes(state)
-
- movl $0xFFFFFFFF, _lens+4(state, idx, 8)
-
- vmovq _args_digest+0*32(state, idx, 8), %xmm0
- vpinsrq $1, _args_digest+1*32(state, idx, 8), %xmm0, %xmm0
- vmovq _args_digest+2*32(state, idx, 8), %xmm1
- vpinsrq $1, _args_digest+3*32(state, idx, 8), %xmm1, %xmm1
- vmovq _args_digest+4*32(state, idx, 8), %xmm2
- vpinsrq $1, _args_digest+5*32(state, idx, 8), %xmm2, %xmm2
- vmovq _args_digest+6*32(state, idx, 8), %xmm3
- vpinsrq $1, _args_digest+7*32(state, idx, 8), %xmm3, %xmm3
-
- vmovdqu %xmm0, _result_digest(job_rax)
- vmovdqu %xmm1, _result_digest+1*16(job_rax)
- vmovdqu %xmm2, _result_digest+2*16(job_rax)
- vmovdqu %xmm3, _result_digest+3*16(job_rax)
-
-return:
- pop %rbx
- FRAME_END
- ret
-
-return_null:
- xor job_rax, job_rax
- jmp return
-ENDPROC(sha512_mb_mgr_flush_avx2)
-.align 16
-
-ENTRY(sha512_mb_mgr_get_comp_job_avx2)
- push %rbx
-
- mov _unused_lanes(state), unused_lanes
- bt $(32+7), unused_lanes
- jc .return_null
-
- # Find min length
- mov _lens(state),lens0
- mov lens0,idx
- mov _lens+1*8(state),lens1
- cmp idx,lens1
- cmovb lens1,idx
- mov _lens+2*8(state),lens2
- cmp idx,lens2
- cmovb lens2,idx
- mov _lens+3*8(state),lens3
- cmp idx,lens3
- cmovb lens3,idx
- test $~0xF,idx
- jnz .return_null
- and $0xF,idx
-
- #process completed job "idx"
- imul $_LANE_DATA_size, idx, lane_data
- lea _ldata(state, lane_data), lane_data
-
- mov _job_in_lane(lane_data), job_rax
- movq $0, _job_in_lane(lane_data)
- movl $STS_COMPLETED, _status(job_rax)
- mov _unused_lanes(state), unused_lanes
- shl $8, unused_lanes
- or idx, unused_lanes
- mov unused_lanes, _unused_lanes(state)
-
- movl $0xFFFFFFFF, _lens+4(state, idx, 8)
-
- vmovq _args_digest(state, idx, 8), %xmm0
- vpinsrq $1, _args_digest+1*32(state, idx, 8), %xmm0, %xmm0
- vmovq _args_digest+2*32(state, idx, 8), %xmm1
- vpinsrq $1, _args_digest+3*32(state, idx, 8), %xmm1, %xmm1
- vmovq _args_digest+4*32(state, idx, 8), %xmm2
- vpinsrq $1, _args_digest+5*32(state, idx, 8), %xmm2, %xmm2
- vmovq _args_digest+6*32(state, idx, 8), %xmm3
- vpinsrq $1, _args_digest+7*32(state, idx, 8), %xmm3, %xmm3
-
- vmovdqu %xmm0, _result_digest+0*16(job_rax)
- vmovdqu %xmm1, _result_digest+1*16(job_rax)
- vmovdqu %xmm2, _result_digest+2*16(job_rax)
- vmovdqu %xmm3, _result_digest+3*16(job_rax)
-
- pop %rbx
-
- ret
-
-.return_null:
- xor job_rax, job_rax
- pop %rbx
- ret
-ENDPROC(sha512_mb_mgr_get_comp_job_avx2)
-
-.section .rodata.cst8.one, "aM", @progbits, 8
-.align 8
-one:
-.quad 1
-
-.section .rodata.cst8.two, "aM", @progbits, 8
-.align 8
-two:
-.quad 2
-
-.section .rodata.cst8.three, "aM", @progbits, 8
-.align 8
-three:
-.quad 3
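
The "Find min length" runs of cmp/cmovb in the flush and get_comp_job paths
above depend on the lens[] encoding: the remaining length (in pre-shifted
SHA-512 block units) lives in the upper 32 bits and the lane number in the low
nibble, so a single unsigned 64-bit minimum yields both the shortest length
and its lane. A hedged C rendering of that bookkeeping (the helper names are
assumptions, not kernel symbols):

#include <stdint.h>

/* An unused lane is seeded as EMPTY_LANE_LEN | lane, so empty lanes never
 * win the minimum; cf. sha512_mb_mgr_init_avx2() later in this patch. */
#define EMPTY_LANE_LEN 0xFFFFFFFF00000000ULL

/* lens[] slot: length in blocks in the high 32 bits, lane in the low nibble */
static uint64_t pack_lens(uint32_t blocks, unsigned int lane)
{
	return ((uint64_t)blocks << 32) | lane;
}

/* Mirrors the cmp/cmovb chain: one unsigned minimum over all four slots. */
static unsigned int min_lane(const uint64_t lens[4], uint32_t *blocks)
{
	uint64_t min = lens[0];
	int i;

	for (i = 1; i < 4; i++)
		if (lens[i] < min)
			min = lens[i];

	/* A zero block count corresponds to the len_is_0 path above:
	 * the shortest lane's job is already complete. */
	*blocks = min >> 32;	/* shr $32, len2 */
	return min & 0xF;	/* and $0xF, idx */
}
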
diff --git a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c
deleted file mode 100644
index d088050..00000000
--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c
+++ /dev/null
@@ -1,69 +0,0 @@
-/*
- * Initialization code for multi buffer SHA512 algorithm for AVX2
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- *	Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include "sha512_mb_mgr.h"
-
-void sha512_mb_mgr_init_avx2(struct sha512_mb_mgr *state)
-{
- unsigned int j;
-
- /* initially all lanes are unused */
- state->lens[0] = 0xFFFFFFFF00000000;
- state->lens[1] = 0xFFFFFFFF00000001;
- state->lens[2] = 0xFFFFFFFF00000002;
- state->lens[3] = 0xFFFFFFFF00000003;
-
- state->unused_lanes = 0xFF03020100;
- for (j = 0; j < 4; j++)
- state->ldata[j].job_in_lane = NULL;
-}
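
The 0xFF03020100 initialiser above is a byte stack of free lane numbers
sitting on an 0xFF sentinel, which is why the assembly paths pop a lane with
movzb/shr and push one back with shl/or. A small C sketch of the same scheme
(function names are mine, not the kernel's):

#include <stdint.h>

static uint64_t unused_lanes = 0xFF03020100ULL;

/* pop: movzb %bl, lane ; shr $8, unused_lanes */
static unsigned int pop_lane(void)
{
	unsigned int lane = unused_lanes & 0xFF;

	unused_lanes >>= 8;
	return lane;
}

/* push: shl $8, unused_lanes ; or idx, unused_lanes */
static void push_lane(unsigned int lane)
{
	unused_lanes = (unused_lanes << 8) | lane;
}

/*
 * With all four lanes free the sentinel occupies bits 32-39, so the
 * "bt $32+7" test in the flush path reads as "all lanes are empty";
 * after four pops only 0xFF remains, which the submit path compares
 * against to detect that every lane is busy.
 */
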
diff --git a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S
deleted file mode 100644
index 4ba709b..00000000
--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S
+++ /dev/null
@@ -1,224 +0,0 @@
-/*
- * Buffer submit code for multi buffer SHA512 algorithm
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- *     Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <linux/linkage.h>
-#include <asm/frame.h>
-#include "sha512_mb_mgr_datastruct.S"
-
-.extern sha512_x4_avx2
-
-#define arg1 %rdi
-#define arg2 %rsi
-
-#define idx %rdx
-#define last_len %rdx
-
-#define size_offset %rcx
-#define tmp2 %rcx
-
-# Common definitions
-#define state arg1
-#define job arg2
-#define len2 arg2
-#define p2 arg2
-
-#define p %r11
-#define start_offset %r11
-
-#define unused_lanes %rbx
-
-#define job_rax %rax
-#define len %rax
-
-#define lane %r12
-#define tmp3 %r12
-#define lens3 %r12
-
-#define extra_blocks %r8
-#define lens0 %r8
-
-#define tmp %r9
-#define lens1 %r9
-
-#define lane_data %r10
-#define lens2 %r10
-
-#define DWORD_len %eax
-
-# JOB* sha512_mb_mgr_submit_avx2(MB_MGR *state, JOB *job)
-# arg 1 : rdi : state
-# arg 2 : rsi : job
-ENTRY(sha512_mb_mgr_submit_avx2)
- FRAME_BEGIN
- push %rbx
- push %r12
-
- mov _unused_lanes(state), unused_lanes
- movzb %bl,lane
- shr $8, unused_lanes
- imul $_LANE_DATA_size, lane,lane_data
- movl $STS_BEING_PROCESSED, _status(job)
- lea _ldata(state, lane_data), lane_data
- mov unused_lanes, _unused_lanes(state)
- movl _len(job), DWORD_len
-
- mov job, _job_in_lane(lane_data)
- movl DWORD_len,_lens+4(state , lane, 8)
-
- # Load digest words from result_digest
- vmovdqu _result_digest+0*16(job), %xmm0
- vmovdqu _result_digest+1*16(job), %xmm1
- vmovdqu _result_digest+2*16(job), %xmm2
- vmovdqu _result_digest+3*16(job), %xmm3
-
- vmovq %xmm0, _args_digest(state, lane, 8)
- vpextrq $1, %xmm0, _args_digest+1*32(state , lane, 8)
- vmovq %xmm1, _args_digest+2*32(state , lane, 8)
- vpextrq $1, %xmm1, _args_digest+3*32(state , lane, 8)
- vmovq %xmm2, _args_digest+4*32(state , lane, 8)
- vpextrq $1, %xmm2, _args_digest+5*32(state , lane, 8)
- vmovq %xmm3, _args_digest+6*32(state , lane, 8)
- vpextrq $1, %xmm3, _args_digest+7*32(state , lane, 8)
-
- mov _buffer(job), p
- mov p, _args_data_ptr(state, lane, 8)
-
- cmp $0xFF, unused_lanes
- jne return_null
-
-start_loop:
-
- # Find min length
- mov _lens+0*8(state),lens0
- mov lens0,idx
- mov _lens+1*8(state),lens1
- cmp idx,lens1
- cmovb lens1, idx
- mov _lens+2*8(state),lens2
- cmp idx,lens2
- cmovb lens2,idx
- mov _lens+3*8(state),lens3
- cmp idx,lens3
- cmovb lens3,idx
- mov idx,len2
- and $0xF,idx
- and $~0xFF,len2
- jz len_is_0
-
- sub len2,lens0
- sub len2,lens1
- sub len2,lens2
- sub len2,lens3
- shr $32,len2
- mov lens0, _lens + 0*8(state)
- mov lens1, _lens + 1*8(state)
- mov lens2, _lens + 2*8(state)
- mov lens3, _lens + 3*8(state)
-
- # "state" and "args" are the same address, arg1
- # len is arg2
- call sha512_x4_avx2
- # state and idx are intact
-
-len_is_0:
-
- # process completed job "idx"
- imul $_LANE_DATA_size, idx, lane_data
- lea _ldata(state, lane_data), lane_data
-
- mov _job_in_lane(lane_data), job_rax
- mov _unused_lanes(state), unused_lanes
- movq $0, _job_in_lane(lane_data)
- movl $STS_COMPLETED, _status(job_rax)
- shl $8, unused_lanes
- or idx, unused_lanes
- mov unused_lanes, _unused_lanes(state)
-
- movl $0xFFFFFFFF,_lens+4(state,idx,8)
- vmovq _args_digest+0*32(state , idx, 8), %xmm0
- vpinsrq $1, _args_digest+1*32(state , idx, 8), %xmm0, %xmm0
- vmovq _args_digest+2*32(state , idx, 8), %xmm1
- vpinsrq $1, _args_digest+3*32(state , idx, 8), %xmm1, %xmm1
- vmovq _args_digest+4*32(state , idx, 8), %xmm2
- vpinsrq $1, _args_digest+5*32(state , idx, 8), %xmm2, %xmm2
- vmovq _args_digest+6*32(state , idx, 8), %xmm3
- vpinsrq $1, _args_digest+7*32(state , idx, 8), %xmm3, %xmm3
-
- vmovdqu %xmm0, _result_digest + 0*16(job_rax)
- vmovdqu %xmm1, _result_digest + 1*16(job_rax)
- vmovdqu %xmm2, _result_digest + 2*16(job_rax)
- vmovdqu %xmm3, _result_digest + 3*16(job_rax)
-
-return:
- pop %r12
- pop %rbx
- FRAME_END
- ret
-
-return_null:
- xor job_rax, job_rax
- jmp return
-ENDPROC(sha512_mb_mgr_submit_avx2)
-
-/* UNUSED?
-.section .rodata.cst16, "aM", @progbits, 16
-.align 16
-H0: .int 0x6a09e667
-H1: .int 0xbb67ae85
-H2: .int 0x3c6ef372
-H3: .int 0xa54ff53a
-H4: .int 0x510e527f
-H5: .int 0x9b05688c
-H6: .int 0x1f83d9ab
-H7: .int 0x5be0cd19
-*/
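
Taken together, the submit path above only starts hashing once every lane is
occupied (the cmp $0xFF against unused_lanes); until then a submitted job
simply parks in a free lane and NULL is returned. When hashing does run, the
minimum length across the four lanes is processed in lockstep and the lane
that reaches zero is retired. A self-contained toy model of that scheduling
policy, with the hash itself stubbed out and all names hypothetical:

#include <stdint.h>
#include <stdio.h>

#define LANES 4

struct toy_mgr {
	uint64_t unused_lanes;	/* byte stack with 0xFF sentinel */
	uint32_t blocks[LANES];	/* remaining blocks per lane     */
	int	 job[LANES];	/* job id, -1 when lane is empty */
};

/* Returns a completed job id, or -1 if the job merely parked in a lane. */
static int toy_submit(struct toy_mgr *m, int id, uint32_t blocks)
{
	unsigned int lane = m->unused_lanes & 0xFF;	/* pop a free lane */
	uint32_t min = ~0u;
	int i, done = -1;

	m->unused_lanes >>= 8;
	m->job[lane] = id;
	m->blocks[lane] = blocks;

	if (m->unused_lanes != 0xFF)	/* a lane is still free: park */
		return -1;

	for (i = 0; i < LANES; i++)	/* find the minimum length    */
		if (m->blocks[i] < min)
			min = m->blocks[i];
	for (i = 0; i < LANES; i++)	/* "hash" min blocks per lane */
		m->blocks[i] -= min;
	for (i = 0; i < LANES; i++)	/* retire one finished lane   */
		if (m->blocks[i] == 0) {
			done = m->job[i];
			m->job[i] = -1;
			m->unused_lanes = (m->unused_lanes << 8) | i;
			break;
		}
	return done;
}

int main(void)
{
	struct toy_mgr m = { .unused_lanes = 0xFF03020100ULL,
			     .job = { -1, -1, -1, -1 } };
	int id;

	for (id = 0; id < 6; id++)
		printf("submit %d -> completed %d\n",
		       id, toy_submit(&m, id, 2 + id));
	return 0;
}
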
diff --git a/arch/x86/crypto/sha512-mb/sha512_x4_avx2.S b/arch/x86/crypto/sha512-mb/sha512_x4_avx2.S
deleted file mode 100644
index e22e907..00000000
--- a/arch/x86/crypto/sha512-mb/sha512_x4_avx2.S
+++ /dev/null
@@ -1,531 +0,0 @@
-/*
- * Multi-buffer SHA512 algorithm hash compute routine
- *
- * This file is provided under a dual BSD/GPLv2 license. When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * Contact Information:
- *     Megha Dey <megha.dey@linux.intel.com>
- *
- * BSD LICENSE
- *
- * Copyright(c) 2016 Intel Corporation.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-# code to compute quad SHA512 using AVX2
-# use YMMs to tackle the larger digest size
-# outer calling routine takes care of save and restore of XMM registers
-# Logic designed/laid out by JDG
-
-# Function clobbers: rax, rcx, rdx, rbx, rsi, rdi, r9-r15; ymm0-15
-# Stack must be aligned to 32 bytes before call
-# Linux clobbers: rax rbx rsi r8 r9 r10 r11 r12
-# Linux preserves: rcx rdx rdi rbp r13 r14 r15
-# clobbers ymm0-15
-
-#include <linux/linkage.h>
-#include "sha512_mb_mgr_datastruct.S"
-
-arg1 = %rdi
-arg2 = %rsi
-
-# Common definitions
-STATE = arg1
-INP_SIZE = arg2
-
-IDX = %rax
-ROUND = %rbx
-TBL = %r8
-
-inp0 = %r9
-inp1 = %r10
-inp2 = %r11
-inp3 = %r12
-
-a = %ymm0
-b = %ymm1
-c = %ymm2
-d = %ymm3
-e = %ymm4
-f = %ymm5
-g = %ymm6
-h = %ymm7
-
-a0 = %ymm8
-a1 = %ymm9
-a2 = %ymm10
-
-TT0 = %ymm14
-TT1 = %ymm13
-TT2 = %ymm12
-TT3 = %ymm11
-TT4 = %ymm10
-TT5 = %ymm9
-
-T1 = %ymm14
-TMP = %ymm15
-
-# Define stack usage
-STACK_SPACE1 = SZ4*16 + NUM_SHA512_DIGEST_WORDS*SZ4 + 24
-
-#define VMOVPD vmovupd
-_digest = SZ4*16
-
-# transpose r0, r1, r2, r3, t0, t1
-# "transpose" data in {r0..r3} using temps {t0..t3}
-# Input looks like: {r0 r1 r2 r3}
-# r0 = {a7 a6 a5 a4 a3 a2 a1 a0}
-# r1 = {b7 b6 b5 b4 b3 b2 b1 b0}
-# r2 = {c7 c6 c5 c4 c3 c2 c1 c0}
-# r3 = {d7 d6 d5 d4 d3 d2 d1 d0}
-#
-# output looks like: {t0 r1 r0 r3}
-# t0 = {d1 d0 c1 c0 b1 b0 a1 a0}
-# r1 = {d3 d2 c3 c2 b3 b2 a3 a2}
-# r0 = {d5 d4 c5 c4 b5 b4 a5 a4}
-# r3 = {d7 d6 c7 c6 b7 b6 a7 a6}
-
-.macro TRANSPOSE r0 r1 r2 r3 t0 t1
- vshufps $0x44, \r1, \r0, \t0 # t0 = {b5 b4 a5 a4 b1 b0 a1 a0}
- vshufps $0xEE, \r1, \r0, \r0 # r0 = {b7 b6 a7 a6 b3 b2 a3 a2}
- vshufps $0x44, \r3, \r2, \t1 # t1 = {d5 d4 c5 c4 d1 d0 c1 c0}
- vshufps $0xEE, \r3, \r2, \r2 # r2 = {d7 d6 c7 c6 d3 d2 c3 c2}
-
- vperm2f128 $0x20, \r2, \r0, \r1 # h6...a6
- vperm2f128 $0x31, \r2, \r0, \r3 # h2...a2
- vperm2f128 $0x31, \t1, \t0, \r0 # h5...a5
- vperm2f128 $0x20, \t1, \t0, \t0 # h1...a1
-.endm
-
-.macro ROTATE_ARGS
-TMP_ = h
-h = g
-g = f
-f = e
-e = d
-d = c
-c = b
-b = a
-a = TMP_
-.endm
-
-# PRORQ reg, imm, tmp
-# packed-rotate-right-double
-# does a rotate by doing two shifts and an or
-.macro _PRORQ reg imm tmp
- vpsllq $(64-\imm),\reg,\tmp
- vpsrlq $\imm,\reg, \reg
- vpor \tmp,\reg, \reg
-.endm
-
-# non-destructive
-# PRORQ_nd reg, imm, tmp, src
-.macro _PRORQ_nd reg imm tmp src
- vpsllq $(64-\imm), \src, \tmp
- vpsrlq $\imm, \src, \reg
- vpor \tmp, \reg, \reg
-.endm
-
-# PRORQ dst/src, amt
-.macro PRORQ reg imm
- _PRORQ \reg, \imm, TMP
-.endm
-
-# PRORQ_nd dst, src, amt
-.macro PRORQ_nd reg tmp imm
- _PRORQ_nd \reg, \imm, TMP, \tmp
-.endm
-
-#; arguments passed implicitly in preprocessor symbols i, a...h
-.macro ROUND_00_15 _T1 i
- PRORQ_nd a0, e, (18-14) # sig1: a0 = (e >> 4)
-
- vpxor g, f, a2 # ch: a2 = f^g
- vpand e,a2, a2 # ch: a2 = (f^g)&e
- vpxor g, a2, a2 # a2 = ch
-
- PRORQ_nd a1,e,41 # sig1: a1 = (e >> 25)
-
- offset = SZ4*(\i & 0xf)
- vmovdqu \_T1,offset(%rsp)
- vpaddq (TBL,ROUND,1), \_T1, \_T1 # T1 = W + K
- vpxor e,a0, a0 # sig1: a0 = e ^ (e >> 5)
- PRORQ a0, 14 # sig1: a0 = (e >> 6) ^ (e >> 11)
- vpaddq a2, h, h # h = h + ch
- PRORQ_nd a2,a,6 # sig0: a2 = (a >> 11)
- vpaddq \_T1,h, h # h = h + ch + W + K
- vpxor a1, a0, a0 # a0 = sigma1
- vmovdqu a,\_T1
- PRORQ_nd a1,a,39 # sig0: a1 = (a >> 22)
- vpxor c, \_T1, \_T1 # maj: T1 = a^c
- add $SZ4, ROUND # ROUND++
- vpand b, \_T1, \_T1 # maj: T1 = (a^c)&b
- vpaddq a0, h, h
- vpaddq h, d, d
- vpxor a, a2, a2 # sig0: a2 = a ^ (a >> 11)
- PRORQ a2,28 # sig0: a2 = (a >> 2) ^ (a >> 13)
- vpxor a1, a2, a2 # a2 = sig0
- vpand c, a, a1 # maj: a1 = a&c
- vpor \_T1, a1, a1 # a1 = maj
- vpaddq a1, h, h # h = h + ch + W + K + maj
- vpaddq a2, h, h # h = h + ch + W + K + maj + sigma0
- ROTATE_ARGS
-.endm
-
-
-#; arguments passed implicitly in preprocessor symbols i, a...h
-.macro ROUND_16_XX _T1 i
- vmovdqu SZ4*((\i-15)&0xf)(%rsp), \_T1
- vmovdqu SZ4*((\i-2)&0xf)(%rsp), a1
- vmovdqu \_T1, a0
- PRORQ \_T1,7
- vmovdqu a1, a2
- PRORQ a1,42
- vpxor a0, \_T1, \_T1
- PRORQ \_T1, 1
- vpxor a2, a1, a1
- PRORQ a1, 19
- vpsrlq $7, a0, a0
- vpxor a0, \_T1, \_T1
- vpsrlq $6, a2, a2
- vpxor a2, a1, a1
- vpaddq SZ4*((\i-16)&0xf)(%rsp), \_T1, \_T1
- vpaddq SZ4*((\i-7)&0xf)(%rsp), a1, a1
- vpaddq a1, \_T1, \_T1
-
- ROUND_00_15 \_T1,\i
-.endm
-
-
-# void sha512_x4_avx2(void *STATE, const int INP_SIZE)
-# arg 1 : STATE : pointer to args state (transposed digests and data pointers)
-# arg 2 : INP_SIZE : size of data in blocks (assumed >= 1)
-ENTRY(sha512_x4_avx2)
- # general registers preserved in outer calling routine
- # outer calling routine saves all the XMM registers
- # save callee-saved clobbered registers to comply with C function ABI
- push %r12
- push %r13
- push %r14
- push %r15
-
- sub $STACK_SPACE1, %rsp
-
- # Load the pre-transposed incoming digest.
- vmovdqu 0*SHA512_DIGEST_ROW_SIZE(STATE),a
- vmovdqu 1*SHA512_DIGEST_ROW_SIZE(STATE),b
- vmovdqu 2*SHA512_DIGEST_ROW_SIZE(STATE),c
- vmovdqu 3*SHA512_DIGEST_ROW_SIZE(STATE),d
- vmovdqu 4*SHA512_DIGEST_ROW_SIZE(STATE),e
- vmovdqu 5*SHA512_DIGEST_ROW_SIZE(STATE),f
- vmovdqu 6*SHA512_DIGEST_ROW_SIZE(STATE),g
- vmovdqu 7*SHA512_DIGEST_ROW_SIZE(STATE),h
-
- lea K512_4(%rip),TBL
-
- # load the address of each of the 4 message lanes
- # getting ready to transpose input onto stack
- mov _data_ptr+0*PTR_SZ(STATE),inp0
- mov _data_ptr+1*PTR_SZ(STATE),inp1
- mov _data_ptr+2*PTR_SZ(STATE),inp2
- mov _data_ptr+3*PTR_SZ(STATE),inp3
-
- xor IDX, IDX
-lloop:
- xor ROUND, ROUND
-
- # save old digest
- vmovdqu a, _digest(%rsp)
- vmovdqu b, _digest+1*SZ4(%rsp)
- vmovdqu c, _digest+2*SZ4(%rsp)
- vmovdqu d, _digest+3*SZ4(%rsp)
- vmovdqu e, _digest+4*SZ4(%rsp)
- vmovdqu f, _digest+5*SZ4(%rsp)
- vmovdqu g, _digest+6*SZ4(%rsp)
- vmovdqu h, _digest+7*SZ4(%rsp)
- i = 0
-.rep 4
- vmovdqu PSHUFFLE_BYTE_FLIP_MASK(%rip), TMP
- VMOVPD i*32(inp0, IDX), TT2
- VMOVPD i*32(inp1, IDX), TT1
- VMOVPD i*32(inp2, IDX), TT4
- VMOVPD i*32(inp3, IDX), TT3
- TRANSPOSE TT2, TT1, TT4, TT3, TT0, TT5
- vpshufb TMP, TT0, TT0
- vpshufb TMP, TT1, TT1
- vpshufb TMP, TT2, TT2
- vpshufb TMP, TT3, TT3
- ROUND_00_15 TT0,(i*4+0)
- ROUND_00_15 TT1,(i*4+1)
- ROUND_00_15 TT2,(i*4+2)
- ROUND_00_15 TT3,(i*4+3)
- i = (i+1)
-.endr
- add $128, IDX
-
- i = (i*4)
-
- jmp Lrounds_16_xx
-.align 16
-Lrounds_16_xx:
-.rep 16
- ROUND_16_XX T1, i
- i = (i+1)
-.endr
- cmp $0xa00,ROUND
- jb Lrounds_16_xx
-
- # add old digest
- vpaddq _digest(%rsp), a, a
- vpaddq _digest+1*SZ4(%rsp), b, b
- vpaddq _digest+2*SZ4(%rsp), c, c
- vpaddq _digest+3*SZ4(%rsp), d, d
- vpaddq _digest+4*SZ4(%rsp), e, e
- vpaddq _digest+5*SZ4(%rsp), f, f
- vpaddq _digest+6*SZ4(%rsp), g, g
- vpaddq _digest+7*SZ4(%rsp), h, h
-
- sub $1, INP_SIZE # unit is blocks
- jne lloop
-
- # write back to memory (state object) the transposed digest
- vmovdqu a, 0*SHA512_DIGEST_ROW_SIZE(STATE)
- vmovdqu b, 1*SHA512_DIGEST_ROW_SIZE(STATE)
- vmovdqu c, 2*SHA512_DIGEST_ROW_SIZE(STATE)
- vmovdqu d, 3*SHA512_DIGEST_ROW_SIZE(STATE)
- vmovdqu e, 4*SHA512_DIGEST_ROW_SIZE(STATE)
- vmovdqu f, 5*SHA512_DIGEST_ROW_SIZE(STATE)
- vmovdqu g, 6*SHA512_DIGEST_ROW_SIZE(STATE)
- vmovdqu h, 7*SHA512_DIGEST_ROW_SIZE(STATE)
-
- # update input data pointers
- add IDX, inp0
- mov inp0, _data_ptr+0*PTR_SZ(STATE)
- add IDX, inp1
- mov inp1, _data_ptr+1*PTR_SZ(STATE)
- add IDX, inp2
- mov inp2, _data_ptr+2*PTR_SZ(STATE)
- add IDX, inp3
- mov inp3, _data_ptr+3*PTR_SZ(STATE)
-
- #;;;;;;;;;;;;;;;
- #; Postamble
- add $STACK_SPACE1, %rsp
- # restore callee-saved clobbered registers
-
- pop %r15
- pop %r14
- pop %r13
- pop %r12
-
- # outer calling routine restores XMM and other GP registers
- ret
-ENDPROC(sha512_x4_avx2)
-
-.section .rodata.K512_4, "a", @progbits
-.align 64
-K512_4:
- .octa 0x428a2f98d728ae22428a2f98d728ae22,\
- 0x428a2f98d728ae22428a2f98d728ae22
- .octa 0x7137449123ef65cd7137449123ef65cd,\
- 0x7137449123ef65cd7137449123ef65cd
- .octa 0xb5c0fbcfec4d3b2fb5c0fbcfec4d3b2f,\
- 0xb5c0fbcfec4d3b2fb5c0fbcfec4d3b2f
- .octa 0xe9b5dba58189dbbce9b5dba58189dbbc,\
- 0xe9b5dba58189dbbce9b5dba58189dbbc
- .octa 0x3956c25bf348b5383956c25bf348b538,\
- 0x3956c25bf348b5383956c25bf348b538
- .octa 0x59f111f1b605d01959f111f1b605d019,\
- 0x59f111f1b605d01959f111f1b605d019
- .octa 0x923f82a4af194f9b923f82a4af194f9b,\
- 0x923f82a4af194f9b923f82a4af194f9b
- .octa 0xab1c5ed5da6d8118ab1c5ed5da6d8118,\
- 0xab1c5ed5da6d8118ab1c5ed5da6d8118
- .octa 0xd807aa98a3030242d807aa98a3030242,\
- 0xd807aa98a3030242d807aa98a3030242
- .octa 0x12835b0145706fbe12835b0145706fbe,\
- 0x12835b0145706fbe12835b0145706fbe
- .octa 0x243185be4ee4b28c243185be4ee4b28c,\
- 0x243185be4ee4b28c243185be4ee4b28c
- .octa 0x550c7dc3d5ffb4e2550c7dc3d5ffb4e2,\
- 0x550c7dc3d5ffb4e2550c7dc3d5ffb4e2
- .octa 0x72be5d74f27b896f72be5d74f27b896f,\
- 0x72be5d74f27b896f72be5d74f27b896f
- .octa 0x80deb1fe3b1696b180deb1fe3b1696b1,\
- 0x80deb1fe3b1696b180deb1fe3b1696b1
- .octa 0x9bdc06a725c712359bdc06a725c71235,\
- 0x9bdc06a725c712359bdc06a725c71235
- .octa 0xc19bf174cf692694c19bf174cf692694,\
- 0xc19bf174cf692694c19bf174cf692694
- .octa 0xe49b69c19ef14ad2e49b69c19ef14ad2,\
- 0xe49b69c19ef14ad2e49b69c19ef14ad2
- .octa 0xefbe4786384f25e3efbe4786384f25e3,\
- 0xefbe4786384f25e3efbe4786384f25e3
- .octa 0x0fc19dc68b8cd5b50fc19dc68b8cd5b5,\
- 0x0fc19dc68b8cd5b50fc19dc68b8cd5b5
- .octa 0x240ca1cc77ac9c65240ca1cc77ac9c65,\
- 0x240ca1cc77ac9c65240ca1cc77ac9c65
- .octa 0x2de92c6f592b02752de92c6f592b0275,\
- 0x2de92c6f592b02752de92c6f592b0275
- .octa 0x4a7484aa6ea6e4834a7484aa6ea6e483,\
- 0x4a7484aa6ea6e4834a7484aa6ea6e483
- .octa 0x5cb0a9dcbd41fbd45cb0a9dcbd41fbd4,\
- 0x5cb0a9dcbd41fbd45cb0a9dcbd41fbd4
- .octa 0x76f988da831153b576f988da831153b5,\
- 0x76f988da831153b576f988da831153b5
- .octa 0x983e5152ee66dfab983e5152ee66dfab,\
- 0x983e5152ee66dfab983e5152ee66dfab
- .octa 0xa831c66d2db43210a831c66d2db43210,\
- 0xa831c66d2db43210a831c66d2db43210
- .octa 0xb00327c898fb213fb00327c898fb213f,\
- 0xb00327c898fb213fb00327c898fb213f
- .octa 0xbf597fc7beef0ee4bf597fc7beef0ee4,\
- 0xbf597fc7beef0ee4bf597fc7beef0ee4
- .octa 0xc6e00bf33da88fc2c6e00bf33da88fc2,\
- 0xc6e00bf33da88fc2c6e00bf33da88fc2
- .octa 0xd5a79147930aa725d5a79147930aa725,\
- 0xd5a79147930aa725d5a79147930aa725
- .octa 0x06ca6351e003826f06ca6351e003826f,\
- 0x06ca6351e003826f06ca6351e003826f
- .octa 0x142929670a0e6e70142929670a0e6e70,\
- 0x142929670a0e6e70142929670a0e6e70
- .octa 0x27b70a8546d22ffc27b70a8546d22ffc,\
- 0x27b70a8546d22ffc27b70a8546d22ffc
- .octa 0x2e1b21385c26c9262e1b21385c26c926,\
- 0x2e1b21385c26c9262e1b21385c26c926
- .octa 0x4d2c6dfc5ac42aed4d2c6dfc5ac42aed,\
- 0x4d2c6dfc5ac42aed4d2c6dfc5ac42aed
- .octa 0x53380d139d95b3df53380d139d95b3df,\
- 0x53380d139d95b3df53380d139d95b3df
- .octa 0x650a73548baf63de650a73548baf63de,\
- 0x650a73548baf63de650a73548baf63de
- .octa 0x766a0abb3c77b2a8766a0abb3c77b2a8,\
- 0x766a0abb3c77b2a8766a0abb3c77b2a8
- .octa 0x81c2c92e47edaee681c2c92e47edaee6,\
- 0x81c2c92e47edaee681c2c92e47edaee6
- .octa 0x92722c851482353b92722c851482353b,\
- 0x92722c851482353b92722c851482353b
- .octa 0xa2bfe8a14cf10364a2bfe8a14cf10364,\
- 0xa2bfe8a14cf10364a2bfe8a14cf10364
- .octa 0xa81a664bbc423001a81a664bbc423001,\
- 0xa81a664bbc423001a81a664bbc423001
- .octa 0xc24b8b70d0f89791c24b8b70d0f89791,\
- 0xc24b8b70d0f89791c24b8b70d0f89791
- .octa 0xc76c51a30654be30c76c51a30654be30,\
- 0xc76c51a30654be30c76c51a30654be30
- .octa 0xd192e819d6ef5218d192e819d6ef5218,\
- 0xd192e819d6ef5218d192e819d6ef5218
- .octa 0xd69906245565a910d69906245565a910,\
- 0xd69906245565a910d69906245565a910
- .octa 0xf40e35855771202af40e35855771202a,\
- 0xf40e35855771202af40e35855771202a
- .octa 0x106aa07032bbd1b8106aa07032bbd1b8,\
- 0x106aa07032bbd1b8106aa07032bbd1b8
- .octa 0x19a4c116b8d2d0c819a4c116b8d2d0c8,\
- 0x19a4c116b8d2d0c819a4c116b8d2d0c8
- .octa 0x1e376c085141ab531e376c085141ab53,\
- 0x1e376c085141ab531e376c085141ab53
- .octa 0x2748774cdf8eeb992748774cdf8eeb99,\
- 0x2748774cdf8eeb992748774cdf8eeb99
- .octa 0x34b0bcb5e19b48a834b0bcb5e19b48a8,\
- 0x34b0bcb5e19b48a834b0bcb5e19b48a8
- .octa 0x391c0cb3c5c95a63391c0cb3c5c95a63,\
- 0x391c0cb3c5c95a63391c0cb3c5c95a63
- .octa 0x4ed8aa4ae3418acb4ed8aa4ae3418acb,\
- 0x4ed8aa4ae3418acb4ed8aa4ae3418acb
- .octa 0x5b9cca4f7763e3735b9cca4f7763e373,\
- 0x5b9cca4f7763e3735b9cca4f7763e373
- .octa 0x682e6ff3d6b2b8a3682e6ff3d6b2b8a3,\
- 0x682e6ff3d6b2b8a3682e6ff3d6b2b8a3
- .octa 0x748f82ee5defb2fc748f82ee5defb2fc,\
- 0x748f82ee5defb2fc748f82ee5defb2fc
- .octa 0x78a5636f43172f6078a5636f43172f60,\
- 0x78a5636f43172f6078a5636f43172f60
- .octa 0x84c87814a1f0ab7284c87814a1f0ab72,\
- 0x84c87814a1f0ab7284c87814a1f0ab72
- .octa 0x8cc702081a6439ec8cc702081a6439ec,\
- 0x8cc702081a6439ec8cc702081a6439ec
- .octa 0x90befffa23631e2890befffa23631e28,\
- 0x90befffa23631e2890befffa23631e28
- .octa 0xa4506cebde82bde9a4506cebde82bde9,\
- 0xa4506cebde82bde9a4506cebde82bde9
- .octa 0xbef9a3f7b2c67915bef9a3f7b2c67915,\
- 0xbef9a3f7b2c67915bef9a3f7b2c67915
- .octa 0xc67178f2e372532bc67178f2e372532b,\
- 0xc67178f2e372532bc67178f2e372532b
- .octa 0xca273eceea26619cca273eceea26619c,\
- 0xca273eceea26619cca273eceea26619c
- .octa 0xd186b8c721c0c207d186b8c721c0c207,\
- 0xd186b8c721c0c207d186b8c721c0c207
- .octa 0xeada7dd6cde0eb1eeada7dd6cde0eb1e,\
- 0xeada7dd6cde0eb1eeada7dd6cde0eb1e
- .octa 0xf57d4f7fee6ed178f57d4f7fee6ed178,\
- 0xf57d4f7fee6ed178f57d4f7fee6ed178
- .octa 0x06f067aa72176fba06f067aa72176fba,\
- 0x06f067aa72176fba06f067aa72176fba
- .octa 0x0a637dc5a2c898a60a637dc5a2c898a6,\
- 0x0a637dc5a2c898a60a637dc5a2c898a6
- .octa 0x113f9804bef90dae113f9804bef90dae,\
- 0x113f9804bef90dae113f9804bef90dae
- .octa 0x1b710b35131c471b1b710b35131c471b,\
- 0x1b710b35131c471b1b710b35131c471b
- .octa 0x28db77f523047d8428db77f523047d84,\
- 0x28db77f523047d8428db77f523047d84
- .octa 0x32caab7b40c7249332caab7b40c72493,\
- 0x32caab7b40c7249332caab7b40c72493
- .octa 0x3c9ebe0a15c9bebc3c9ebe0a15c9bebc,\
- 0x3c9ebe0a15c9bebc3c9ebe0a15c9bebc
- .octa 0x431d67c49c100d4c431d67c49c100d4c,\
- 0x431d67c49c100d4c431d67c49c100d4c
- .octa 0x4cc5d4becb3e42b64cc5d4becb3e42b6,\
- 0x4cc5d4becb3e42b64cc5d4becb3e42b6
- .octa 0x597f299cfc657e2a597f299cfc657e2a,\
- 0x597f299cfc657e2a597f299cfc657e2a
- .octa 0x5fcb6fab3ad6faec5fcb6fab3ad6faec,\
- 0x5fcb6fab3ad6faec5fcb6fab3ad6faec
- .octa 0x6c44198c4a4758176c44198c4a475817,\
- 0x6c44198c4a4758176c44198c4a475817
-
-.section .rodata.cst32.PSHUFFLE_BYTE_FLIP_MASK, "aM", @progbits, 32
-.align 32
-PSHUFFLE_BYTE_FLIP_MASK: .octa 0x08090a0b0c0d0e0f0001020304050607
- .octa 0x18191a1b1c1d1e1f1011121314151617
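
As the _PRORQ comments above note, AVX2 has no 64-bit vector rotate, so the
macro synthesises a rotate right from two shifts and an or. The scalar C
equivalent, valid for the rotation counts the SHA-512 sigma functions use
(strictly between 0 and 64):

#include <stdint.h>

/* vpsrlq $imm + vpsllq $(64-imm) + vpor, as one scalar expression */
static inline uint64_t prorq(uint64_t x, unsigned int imm)
{
	return (x >> imm) | (x << (64 - imm));
}
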
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 0fb9586..0ec4767 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -213,20 +213,6 @@ config CRYPTO_CRYPTD
converts an arbitrary synchronous software crypto algorithm
into an asynchronous algorithm that executes in a kernel thread.
-config CRYPTO_MCRYPTD
- tristate "Software async multi-buffer crypto daemon"
- select CRYPTO_BLKCIPHER
- select CRYPTO_HASH
- select CRYPTO_MANAGER
- select CRYPTO_WORKQUEUE
- help
- This is a generic software asynchronous crypto daemon that
-	  provides the kernel thread used by multi-buffer crypto
-	  algorithms to submit and flush jobs. Multi-buffer crypto
-	  algorithms are executed in the context of this kernel thread,
-	  and drivers can post their crypto requests asynchronously to
-	  be processed by this daemon.
-
config CRYPTO_AUTHENC
tristate "Authenc support"
select CRYPTO_AEAD
@@ -848,54 +834,6 @@ config CRYPTO_SHA1_PPC_SPE
SHA-1 secure hash standard (DFIPS 180-4) implemented
using powerpc SPE SIMD instruction set.
-config CRYPTO_SHA1_MB
- tristate "SHA1 digest algorithm (x86_64 Multi-Buffer, Experimental)"
- depends on X86 && 64BIT
- select CRYPTO_SHA1
- select CRYPTO_HASH
- select CRYPTO_MCRYPTD
- help
- SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented
- using multi-buffer technique. This algorithm computes on
- multiple data lanes concurrently with SIMD instructions for
- better throughput. It should not be enabled by default but
-	  used when there is a significant amount of work to keep the
-	  data lanes filled to get a performance benefit. If the data
- lanes remain unfilled, a flush operation will be initiated to
- process the crypto jobs, adding a slight latency.
-
-config CRYPTO_SHA256_MB
- tristate "SHA256 digest algorithm (x86_64 Multi-Buffer, Experimental)"
- depends on X86 && 64BIT
- select CRYPTO_SHA256
- select CRYPTO_HASH
- select CRYPTO_MCRYPTD
- help
- SHA-256 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented
- using multi-buffer technique. This algorithm computes on
- multiple data lanes concurrently with SIMD instructions for
- better throughput. It should not be enabled by default but
-	  used when there is a significant amount of work to keep the
-	  data lanes filled to get a performance benefit. If the data
- lanes remain unfilled, a flush operation will be initiated to
- process the crypto jobs, adding a slight latency.
-
-config CRYPTO_SHA512_MB
- tristate "SHA512 digest algorithm (x86_64 Multi-Buffer, Experimental)"
- depends on X86 && 64BIT
- select CRYPTO_SHA512
- select CRYPTO_HASH
- select CRYPTO_MCRYPTD
- help
- SHA-512 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented
- using multi-buffer technique. This algorithm computes on
- multiple data lanes concurrently with SIMD instructions for
- better throughput. It should not be enabled by default but
-	  used when there is a significant amount of work to keep the
-	  data lanes filled to get a performance benefit. If the data
- lanes remain unfilled, a flush operation will be initiated to
- process the crypto jobs, adding a slight latency.
-
config CRYPTO_SHA256
tristate "SHA224 and SHA256 digest algorithm"
select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index f6a234d..d719843 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -93,7 +93,6 @@ obj-$(CONFIG_CRYPTO_MORUS640) += morus640.o
obj-$(CONFIG_CRYPTO_MORUS1280) += morus1280.o
obj-$(CONFIG_CRYPTO_PCRYPT) += pcrypt.o
obj-$(CONFIG_CRYPTO_CRYPTD) += cryptd.o
-obj-$(CONFIG_CRYPTO_MCRYPTD) += mcryptd.o
obj-$(CONFIG_CRYPTO_DES) += des_generic.o
obj-$(CONFIG_CRYPTO_FCRYPT) += fcrypt.o
obj-$(CONFIG_CRYPTO_BLOWFISH) += blowfish_generic.o
diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c
deleted file mode 100644
index f141521..00000000
--- a/crypto/mcryptd.c
+++ /dev/null
@@ -1,675 +0,0 @@
-/*
- * Software multibuffer async crypto daemon.
- *
- * Copyright (c) 2014 Tim Chen <tim.c.chen@linux.intel.com>
- *
- * Adapted from crypto daemon.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the Free
- * Software Foundation; either version 2 of the License, or (at your option)
- * any later version.
- *
- */
-
-#include <crypto/algapi.h>
-#include <crypto/internal/hash.h>
-#include <crypto/internal/aead.h>
-#include <crypto/mcryptd.h>
-#include <crypto/crypto_wq.h>
-#include <linux/err.h>
-#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/list.h>
-#include <linux/module.h>
-#include <linux/scatterlist.h>
-#include <linux/sched.h>
-#include <linux/sched/stat.h>
-#include <linux/slab.h>
-
-#define MCRYPTD_MAX_CPU_QLEN 100
-#define MCRYPTD_BATCH 9
-
-static void *mcryptd_alloc_instance(struct crypto_alg *alg, unsigned int head,
- unsigned int tail);
-
-struct mcryptd_flush_list {
- struct list_head list;
- struct mutex lock;
-};
-
-static struct mcryptd_flush_list __percpu *mcryptd_flist;
-
-struct hashd_instance_ctx {
- struct crypto_ahash_spawn spawn;
- struct mcryptd_queue *queue;
-};
-
-static void mcryptd_queue_worker(struct work_struct *work);
-
-void mcryptd_arm_flusher(struct mcryptd_alg_cstate *cstate, unsigned long delay)
-{
- struct mcryptd_flush_list *flist;
-
- if (!cstate->flusher_engaged) {
- /* put the flusher on the flush list */
- flist = per_cpu_ptr(mcryptd_flist, smp_processor_id());
- mutex_lock(&flist->lock);
- list_add_tail(&cstate->flush_list, &flist->list);
- cstate->flusher_engaged = true;
- cstate->next_flush = jiffies + delay;
- queue_delayed_work_on(smp_processor_id(), kcrypto_wq,
- &cstate->flush, delay);
- mutex_unlock(&flist->lock);
- }
-}
-EXPORT_SYMBOL(mcryptd_arm_flusher);
-
-static int mcryptd_init_queue(struct mcryptd_queue *queue,
- unsigned int max_cpu_qlen)
-{
- int cpu;
- struct mcryptd_cpu_queue *cpu_queue;
-
- queue->cpu_queue = alloc_percpu(struct mcryptd_cpu_queue);
- pr_debug("mqueue:%p mcryptd_cpu_queue %p\n", queue, queue->cpu_queue);
- if (!queue->cpu_queue)
- return -ENOMEM;
- for_each_possible_cpu(cpu) {
- cpu_queue = per_cpu_ptr(queue->cpu_queue, cpu);
- pr_debug("cpu_queue #%d %p\n", cpu, queue->cpu_queue);
- crypto_init_queue(&cpu_queue->queue, max_cpu_qlen);
- INIT_WORK(&cpu_queue->work, mcryptd_queue_worker);
- spin_lock_init(&cpu_queue->q_lock);
- }
- return 0;
-}
-
-static void mcryptd_fini_queue(struct mcryptd_queue *queue)
-{
- int cpu;
- struct mcryptd_cpu_queue *cpu_queue;
-
- for_each_possible_cpu(cpu) {
- cpu_queue = per_cpu_ptr(queue->cpu_queue, cpu);
- BUG_ON(cpu_queue->queue.qlen);
- }
- free_percpu(queue->cpu_queue);
-}
-
-static int mcryptd_enqueue_request(struct mcryptd_queue *queue,
- struct crypto_async_request *request,
- struct mcryptd_hash_request_ctx *rctx)
-{
- int cpu, err;
- struct mcryptd_cpu_queue *cpu_queue;
-
- cpu_queue = raw_cpu_ptr(queue->cpu_queue);
- spin_lock(&cpu_queue->q_lock);
- cpu = smp_processor_id();
- rctx->tag.cpu = smp_processor_id();
-
- err = crypto_enqueue_request(&cpu_queue->queue, request);
- pr_debug("enqueue request: cpu %d cpu_queue %p request %p\n",
- cpu, cpu_queue, request);
- spin_unlock(&cpu_queue->q_lock);
- queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
-
- return err;
-}
-
-/*
- * Try to opportunistically flush the partially completed jobs if
- * crypto daemon is the only task running.
- */
-static void mcryptd_opportunistic_flush(void)
-{
- struct mcryptd_flush_list *flist;
- struct mcryptd_alg_cstate *cstate;
-
- flist = per_cpu_ptr(mcryptd_flist, smp_processor_id());
- while (single_task_running()) {
- mutex_lock(&flist->lock);
- cstate = list_first_entry_or_null(&flist->list,
- struct mcryptd_alg_cstate, flush_list);
- if (!cstate || !cstate->flusher_engaged) {
- mutex_unlock(&flist->lock);
- return;
- }
- list_del(&cstate->flush_list);
- cstate->flusher_engaged = false;
- mutex_unlock(&flist->lock);
- cstate->alg_state->flusher(cstate);
- }
-}
-
-/*
- * Called in workqueue context; do one unit of real crypto work (via
- * req->complete) and reschedule itself if there is more work to
- * do.
- */
-static void mcryptd_queue_worker(struct work_struct *work)
-{
- struct mcryptd_cpu_queue *cpu_queue;
- struct crypto_async_request *req, *backlog;
- int i;
-
- /*
- * Need to loop through more than once for multi-buffer to
- * be effective.
- */
-
- cpu_queue = container_of(work, struct mcryptd_cpu_queue, work);
- i = 0;
- while (i < MCRYPTD_BATCH || single_task_running()) {
-
- spin_lock_bh(&cpu_queue->q_lock);
- backlog = crypto_get_backlog(&cpu_queue->queue);
- req = crypto_dequeue_request(&cpu_queue->queue);
- spin_unlock_bh(&cpu_queue->q_lock);
-
- if (!req) {
- mcryptd_opportunistic_flush();
- return;
- }
-
- if (backlog)
- backlog->complete(backlog, -EINPROGRESS);
- req->complete(req, 0);
- if (!cpu_queue->queue.qlen)
- return;
- ++i;
- }
- if (cpu_queue->queue.qlen)
- queue_work_on(smp_processor_id(), kcrypto_wq, &cpu_queue->work);
-}
-
-void mcryptd_flusher(struct work_struct *__work)
-{
- struct mcryptd_alg_cstate *alg_cpu_state;
- struct mcryptd_alg_state *alg_state;
- struct mcryptd_flush_list *flist;
- int cpu;
-
- cpu = smp_processor_id();
- alg_cpu_state = container_of(to_delayed_work(__work),
- struct mcryptd_alg_cstate, flush);
- alg_state = alg_cpu_state->alg_state;
- if (alg_cpu_state->cpu != cpu)
- pr_debug("mcryptd error: work on cpu %d, should be cpu %d\n",
- cpu, alg_cpu_state->cpu);
-
- if (alg_cpu_state->flusher_engaged) {
- flist = per_cpu_ptr(mcryptd_flist, cpu);
- mutex_lock(&flist->lock);
- list_del(&alg_cpu_state->flush_list);
- alg_cpu_state->flusher_engaged = false;
- mutex_unlock(&flist->lock);
- alg_state->flusher(alg_cpu_state);
- }
-}
-EXPORT_SYMBOL_GPL(mcryptd_flusher);
-
-static inline struct mcryptd_queue *mcryptd_get_queue(struct crypto_tfm *tfm)
-{
- struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
- struct mcryptd_instance_ctx *ictx = crypto_instance_ctx(inst);
-
- return ictx->queue;
-}
-
-static void *mcryptd_alloc_instance(struct crypto_alg *alg, unsigned int head,
- unsigned int tail)
-{
- char *p;
- struct crypto_instance *inst;
- int err;
-
- p = kzalloc(head + sizeof(*inst) + tail, GFP_KERNEL);
- if (!p)
- return ERR_PTR(-ENOMEM);
-
- inst = (void *)(p + head);
-
- err = -ENAMETOOLONG;
- if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
- "mcryptd(%s)", alg->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
- goto out_free_inst;
-
- memcpy(inst->alg.cra_name, alg->cra_name, CRYPTO_MAX_ALG_NAME);
-
- inst->alg.cra_priority = alg->cra_priority + 50;
- inst->alg.cra_blocksize = alg->cra_blocksize;
- inst->alg.cra_alignmask = alg->cra_alignmask;
-
-out:
- return p;
-
-out_free_inst:
- kfree(p);
- p = ERR_PTR(err);
- goto out;
-}
-
-static inline bool mcryptd_check_internal(struct rtattr **tb, u32 *type,
- u32 *mask)
-{
- struct crypto_attr_type *algt;
-
- algt = crypto_get_attr_type(tb);
- if (IS_ERR(algt))
- return false;
-
- *type |= algt->type & CRYPTO_ALG_INTERNAL;
- *mask |= algt->mask & CRYPTO_ALG_INTERNAL;
-
- if (*type & *mask & CRYPTO_ALG_INTERNAL)
- return true;
- else
- return false;
-}
-
-static int mcryptd_hash_init_tfm(struct crypto_tfm *tfm)
-{
- struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
- struct hashd_instance_ctx *ictx = crypto_instance_ctx(inst);
- struct crypto_ahash_spawn *spawn = &ictx->spawn;
- struct mcryptd_hash_ctx *ctx = crypto_tfm_ctx(tfm);
- struct crypto_ahash *hash;
-
- hash = crypto_spawn_ahash(spawn);
- if (IS_ERR(hash))
- return PTR_ERR(hash);
-
- ctx->child = hash;
- crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
- sizeof(struct mcryptd_hash_request_ctx) +
- crypto_ahash_reqsize(hash));
- return 0;
-}
-
-static void mcryptd_hash_exit_tfm(struct crypto_tfm *tfm)
-{
- struct mcryptd_hash_ctx *ctx = crypto_tfm_ctx(tfm);
-
- crypto_free_ahash(ctx->child);
-}
-
-static int mcryptd_hash_setkey(struct crypto_ahash *parent,
- const u8 *key, unsigned int keylen)
-{
- struct mcryptd_hash_ctx *ctx = crypto_ahash_ctx(parent);
- struct crypto_ahash *child = ctx->child;
- int err;
-
- crypto_ahash_clear_flags(child, CRYPTO_TFM_REQ_MASK);
- crypto_ahash_set_flags(child, crypto_ahash_get_flags(parent) &
- CRYPTO_TFM_REQ_MASK);
- err = crypto_ahash_setkey(child, key, keylen);
- crypto_ahash_set_flags(parent, crypto_ahash_get_flags(child) &
- CRYPTO_TFM_RES_MASK);
- return err;
-}
-
-static int mcryptd_hash_enqueue(struct ahash_request *req,
- crypto_completion_t complete)
-{
- int ret;
-
- struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct mcryptd_queue *queue =
- mcryptd_get_queue(crypto_ahash_tfm(tfm));
-
- rctx->complete = req->base.complete;
- req->base.complete = complete;
-
- ret = mcryptd_enqueue_request(queue, &req->base, rctx);
-
- return ret;
-}
-
-static void mcryptd_hash_init(struct crypto_async_request *req_async, int err)
-{
- struct mcryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
- struct crypto_ahash *child = ctx->child;
- struct ahash_request *req = ahash_request_cast(req_async);
- struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
- struct ahash_request *desc = &rctx->areq;
-
- if (unlikely(err == -EINPROGRESS))
- goto out;
-
- ahash_request_set_tfm(desc, child);
- ahash_request_set_callback(desc, CRYPTO_TFM_REQ_MAY_SLEEP,
- rctx->complete, req_async);
-
- rctx->out = req->result;
- err = crypto_ahash_init(desc);
-
-out:
- local_bh_disable();
- rctx->complete(&req->base, err);
- local_bh_enable();
-}
-
-static int mcryptd_hash_init_enqueue(struct ahash_request *req)
-{
- return mcryptd_hash_enqueue(req, mcryptd_hash_init);
-}
-
-static void mcryptd_hash_update(struct crypto_async_request *req_async, int err)
-{
- struct ahash_request *req = ahash_request_cast(req_async);
- struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-
- if (unlikely(err == -EINPROGRESS))
- goto out;
-
- rctx->out = req->result;
- err = crypto_ahash_update(&rctx->areq);
- if (err) {
- req->base.complete = rctx->complete;
- goto out;
- }
-
- return;
-out:
- local_bh_disable();
- rctx->complete(&req->base, err);
- local_bh_enable();
-}
-
-static int mcryptd_hash_update_enqueue(struct ahash_request *req)
-{
- return mcryptd_hash_enqueue(req, mcryptd_hash_update);
-}
-
-static void mcryptd_hash_final(struct crypto_async_request *req_async, int err)
-{
- struct ahash_request *req = ahash_request_cast(req_async);
- struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-
- if (unlikely(err == -EINPROGRESS))
- goto out;
-
- rctx->out = req->result;
- err = crypto_ahash_final(&rctx->areq);
- if (err) {
- req->base.complete = rctx->complete;
- goto out;
- }
-
- return;
-out:
- local_bh_disable();
- rctx->complete(&req->base, err);
- local_bh_enable();
-}
-
-static int mcryptd_hash_final_enqueue(struct ahash_request *req)
-{
- return mcryptd_hash_enqueue(req, mcryptd_hash_final);
-}
-
-static void mcryptd_hash_finup(struct crypto_async_request *req_async, int err)
-{
- struct ahash_request *req = ahash_request_cast(req_async);
- struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-
- if (unlikely(err == -EINPROGRESS))
- goto out;
- rctx->out = req->result;
- err = crypto_ahash_finup(&rctx->areq);
-
- if (err) {
- req->base.complete = rctx->complete;
- goto out;
- }
-
- return;
-out:
- local_bh_disable();
- rctx->complete(&req->base, err);
- local_bh_enable();
-}
-
-static int mcryptd_hash_finup_enqueue(struct ahash_request *req)
-{
- return mcryptd_hash_enqueue(req, mcryptd_hash_finup);
-}
-
-static void mcryptd_hash_digest(struct crypto_async_request *req_async, int err)
-{
- struct mcryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
- struct crypto_ahash *child = ctx->child;
- struct ahash_request *req = ahash_request_cast(req_async);
- struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
- struct ahash_request *desc = &rctx->areq;
-
- if (unlikely(err == -EINPROGRESS))
- goto out;
-
- ahash_request_set_tfm(desc, child);
- ahash_request_set_callback(desc, CRYPTO_TFM_REQ_MAY_SLEEP,
- rctx->complete, req_async);
-
- rctx->out = req->result;
- err = crypto_ahash_init(desc) ?: crypto_ahash_finup(desc);
-
-out:
- local_bh_disable();
- rctx->complete(&req->base, err);
- local_bh_enable();
-}
-
-static int mcryptd_hash_digest_enqueue(struct ahash_request *req)
-{
- return mcryptd_hash_enqueue(req, mcryptd_hash_digest);
-}
-
-static int mcryptd_hash_export(struct ahash_request *req, void *out)
-{
- struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-
- return crypto_ahash_export(&rctx->areq, out);
-}
-
-static int mcryptd_hash_import(struct ahash_request *req, const void *in)
-{
- struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-
- return crypto_ahash_import(&rctx->areq, in);
-}
-
-static int mcryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb,
- struct mcryptd_queue *queue)
-{
- struct hashd_instance_ctx *ctx;
- struct ahash_instance *inst;
- struct hash_alg_common *halg;
- struct crypto_alg *alg;
- u32 type = 0;
- u32 mask = 0;
- int err;
-
- if (!mcryptd_check_internal(tb, &type, &mask))
- return -EINVAL;
-
- halg = ahash_attr_alg(tb[1], type, mask);
- if (IS_ERR(halg))
- return PTR_ERR(halg);
-
- alg = &halg->base;
- pr_debug("crypto: mcryptd hash alg: %s\n", alg->cra_name);
- inst = mcryptd_alloc_instance(alg, ahash_instance_headroom(),
- sizeof(*ctx));
- err = PTR_ERR(inst);
- if (IS_ERR(inst))
- goto out_put_alg;
-
- ctx = ahash_instance_ctx(inst);
- ctx->queue = queue;
-
- err = crypto_init_ahash_spawn(&ctx->spawn, halg,
- ahash_crypto_instance(inst));
- if (err)
- goto out_free_inst;
-
- inst->alg.halg.base.cra_flags = CRYPTO_ALG_ASYNC |
- (alg->cra_flags & (CRYPTO_ALG_INTERNAL |
- CRYPTO_ALG_OPTIONAL_KEY));
-
- inst->alg.halg.digestsize = halg->digestsize;
- inst->alg.halg.statesize = halg->statesize;
- inst->alg.halg.base.cra_ctxsize = sizeof(struct mcryptd_hash_ctx);
-
- inst->alg.halg.base.cra_init = mcryptd_hash_init_tfm;
- inst->alg.halg.base.cra_exit = mcryptd_hash_exit_tfm;
-
- inst->alg.init = mcryptd_hash_init_enqueue;
- inst->alg.update = mcryptd_hash_update_enqueue;
- inst->alg.final = mcryptd_hash_final_enqueue;
- inst->alg.finup = mcryptd_hash_finup_enqueue;
- inst->alg.export = mcryptd_hash_export;
- inst->alg.import = mcryptd_hash_import;
- if (crypto_hash_alg_has_setkey(halg))
- inst->alg.setkey = mcryptd_hash_setkey;
- inst->alg.digest = mcryptd_hash_digest_enqueue;
-
- err = ahash_register_instance(tmpl, inst);
- if (err) {
- crypto_drop_ahash(&ctx->spawn);
-out_free_inst:
- kfree(inst);
- }
-
-out_put_alg:
- crypto_mod_put(alg);
- return err;
-}
-
-static struct mcryptd_queue mqueue;
-
-static int mcryptd_create(struct crypto_template *tmpl, struct rtattr **tb)
-{
- struct crypto_attr_type *algt;
-
- algt = crypto_get_attr_type(tb);
- if (IS_ERR(algt))
- return PTR_ERR(algt);
-
- switch (algt->type & algt->mask & CRYPTO_ALG_TYPE_MASK) {
- case CRYPTO_ALG_TYPE_DIGEST:
- return mcryptd_create_hash(tmpl, tb, &mqueue);
- break;
- }
-
- return -EINVAL;
-}
-
-static void mcryptd_free(struct crypto_instance *inst)
-{
- struct mcryptd_instance_ctx *ctx = crypto_instance_ctx(inst);
- struct hashd_instance_ctx *hctx = crypto_instance_ctx(inst);
-
- switch (inst->alg.cra_flags & CRYPTO_ALG_TYPE_MASK) {
- case CRYPTO_ALG_TYPE_AHASH:
- crypto_drop_ahash(&hctx->spawn);
- kfree(ahash_instance(inst));
- return;
- default:
- crypto_drop_spawn(&ctx->spawn);
- kfree(inst);
- }
-}
-
-static struct crypto_template mcryptd_tmpl = {
- .name = "mcryptd",
- .create = mcryptd_create,
- .free = mcryptd_free,
- .module = THIS_MODULE,
-};
-
-struct mcryptd_ahash *mcryptd_alloc_ahash(const char *alg_name,
- u32 type, u32 mask)
-{
- char mcryptd_alg_name[CRYPTO_MAX_ALG_NAME];
- struct crypto_ahash *tfm;
-
- if (snprintf(mcryptd_alg_name, CRYPTO_MAX_ALG_NAME,
- "mcryptd(%s)", alg_name) >= CRYPTO_MAX_ALG_NAME)
- return ERR_PTR(-EINVAL);
- tfm = crypto_alloc_ahash(mcryptd_alg_name, type, mask);
- if (IS_ERR(tfm))
- return ERR_CAST(tfm);
- if (tfm->base.__crt_alg->cra_module != THIS_MODULE) {
- crypto_free_ahash(tfm);
- return ERR_PTR(-EINVAL);
- }
-
- return __mcryptd_ahash_cast(tfm);
-}
-EXPORT_SYMBOL_GPL(mcryptd_alloc_ahash);
-
-struct crypto_ahash *mcryptd_ahash_child(struct mcryptd_ahash *tfm)
-{
- struct mcryptd_hash_ctx *ctx = crypto_ahash_ctx(&tfm->base);
-
- return ctx->child;
-}
-EXPORT_SYMBOL_GPL(mcryptd_ahash_child);
-
-struct ahash_request *mcryptd_ahash_desc(struct ahash_request *req)
-{
- struct mcryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
- return &rctx->areq;
-}
-EXPORT_SYMBOL_GPL(mcryptd_ahash_desc);
-
-void mcryptd_free_ahash(struct mcryptd_ahash *tfm)
-{
- crypto_free_ahash(&tfm->base);
-}
-EXPORT_SYMBOL_GPL(mcryptd_free_ahash);
-
-static int __init mcryptd_init(void)
-{
- int err, cpu;
- struct mcryptd_flush_list *flist;
-
- mcryptd_flist = alloc_percpu(struct mcryptd_flush_list);
- for_each_possible_cpu(cpu) {
- flist = per_cpu_ptr(mcryptd_flist, cpu);
- INIT_LIST_HEAD(&flist->list);
- mutex_init(&flist->lock);
- }
-
- err = mcryptd_init_queue(&mqueue, MCRYPTD_MAX_CPU_QLEN);
- if (err) {
- free_percpu(mcryptd_flist);
- return err;
- }
-
- err = crypto_register_template(&mcryptd_tmpl);
- if (err) {
- mcryptd_fini_queue(&mqueue);
- free_percpu(mcryptd_flist);
- }
-
- return err;
-}
-
-static void __exit mcryptd_exit(void)
-{
- mcryptd_fini_queue(&mqueue);
- crypto_unregister_template(&mcryptd_tmpl);
- free_percpu(mcryptd_flist);
-}
-
-subsys_initcall(mcryptd_init);
-module_exit(mcryptd_exit);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("Software async multibuffer crypto daemon");
-MODULE_ALIAS_CRYPTO("mcryptd");
diff --git a/include/crypto/mcryptd.h b/include/crypto/mcryptd.h
deleted file mode 100644
index b67404f..00000000
--- a/include/crypto/mcryptd.h
+++ /dev/null
@@ -1,114 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Software async multibuffer crypto daemon headers
- *
- * Author:
- * Tim Chen <tim.c.chen(a)linux.intel.com>
- *
- * Copyright (c) 2014, Intel Corporation.
- */
-
-#ifndef _CRYPTO_MCRYPT_H
-#define _CRYPTO_MCRYPT_H
-
-#include <linux/crypto.h>
-#include <linux/kernel.h>
-#include <crypto/hash.h>
-
-struct mcryptd_ahash {
- struct crypto_ahash base;
-};
-
-static inline struct mcryptd_ahash *__mcryptd_ahash_cast(
- struct crypto_ahash *tfm)
-{
- return (struct mcryptd_ahash *)tfm;
-}
-
-struct mcryptd_cpu_queue {
- struct crypto_queue queue;
- spinlock_t q_lock;
- struct work_struct work;
-};
-
-struct mcryptd_queue {
- struct mcryptd_cpu_queue __percpu *cpu_queue;
-};
-
-struct mcryptd_instance_ctx {
- struct crypto_spawn spawn;
- struct mcryptd_queue *queue;
-};
-
-struct mcryptd_hash_ctx {
- struct crypto_ahash *child;
- struct mcryptd_alg_state *alg_state;
-};
-
-struct mcryptd_tag {
- /* seq number of request */
- unsigned seq_num;
- /* arrival time of request */
- unsigned long arrival;
- unsigned long expire;
- int cpu;
-};
-
-struct mcryptd_hash_request_ctx {
- struct list_head waiter;
- crypto_completion_t complete;
- struct mcryptd_tag tag;
- struct crypto_hash_walk walk;
- u8 *out;
- int flag;
- struct ahash_request areq;
-};
-
-struct mcryptd_ahash *mcryptd_alloc_ahash(const char *alg_name,
- u32 type, u32 mask);
-struct crypto_ahash *mcryptd_ahash_child(struct mcryptd_ahash *tfm);
-struct ahash_request *mcryptd_ahash_desc(struct ahash_request *req);
-void mcryptd_free_ahash(struct mcryptd_ahash *tfm);
-void mcryptd_flusher(struct work_struct *work);
-
-enum mcryptd_req_type {
- MCRYPTD_NONE,
- MCRYPTD_UPDATE,
- MCRYPTD_FINUP,
- MCRYPTD_DIGEST,
- MCRYPTD_FINAL
-};
-
-struct mcryptd_alg_cstate {
- unsigned long next_flush;
- unsigned next_seq_num;
- bool flusher_engaged;
- struct delayed_work flush;
- int cpu;
- struct mcryptd_alg_state *alg_state;
- void *mgr;
- spinlock_t work_lock;
- struct list_head work_list;
- struct list_head flush_list;
-};
-
-struct mcryptd_alg_state {
- struct mcryptd_alg_cstate __percpu *alg_cstate;
- unsigned long (*flusher)(struct mcryptd_alg_cstate *cstate);
-};
-
-/* return delay in jiffies from current time */
-static inline unsigned long get_delay(unsigned long t)
-{
- long delay;
-
- delay = (long) t - (long) jiffies;
- if (delay <= 0)
- return 0;
- else
- return (unsigned long) delay;
-}
-
-void mcryptd_arm_flusher(struct mcryptd_alg_cstate *cstate, unsigned long delay);
-
-#endif
--
1.8.3

[PATCH 01/55] ASoC: pcm: update FE/BE trigger order based on the command
by Yang Yingliang 16 Apr '20
From: Ranjani Sridharan <ranjani.sridharan(a)linux.intel.com>
[ Upstream commit acbf27746ecfa96b290b54cc7f05273482ea128a ]
Currently, the trigger orders SND_SOC_DPCM_TRIGGER_PRE/POST
determine the order in which FE DAI and BE DAI are triggered.
In the case of SND_SOC_DPCM_TRIGGER_PRE, the FE DAI is
triggered before the BE DAI and in the case of
SND_SOC_DPCM_TRIGGER_POST, the BE DAI is triggered before
the FE DAI. And this order remains the same irrespective of the
trigger command.
In the case of the SOF driver, during playback, the FW
expects the BE DAI to be triggered before the FE DAI during
the START trigger. The BE DAI trigger handles the starting of
Link DMA and so it must be started before the FE DAI is started
to prevent xruns during pause/release. This can be addressed
by setting the trigger order for the FE dai link to
SND_SOC_DPCM_TRIGGER_POST. But during the STOP trigger,
the FW expects the FE DAI to be triggered before the BE DAI.
Retaining the same order during the START and STOP commands results in
an FW error, as the DAI component in the FW is still active.
The issue can be fixed by mirroring the trigger order of the FE and BE
DAIs between the START and STOP triggers. So, with the trigger order
set to SND_SOC_DPCM_TRIGGER_PRE, the FE DAI will be triggered first
during the SNDRV_PCM_TRIGGER_START/RESUME/PAUSE_RELEASE commands
and the BE DAI will be triggered first during the
STOP/SUSPEND/PAUSE_PUSH commands. Conversely, with the trigger order
set to SND_SOC_DPCM_TRIGGER_POST, the BE DAI will be triggered
first during the SNDRV_PCM_TRIGGER_START/RESUME/PAUSE_RELEASE commands
and the FE DAI will be triggered first during the
SNDRV_PCM_TRIGGER_STOP/SUSPEND/PAUSE_PUSH commands.
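The mirrored ordering boils down to a single boolean decision. As a
standalone sketch (hypothetical helper name, not the driver code; the
real dispatch is dpcm_dai_trigger_fe_be() in the patch below):

#include <stdbool.h>

/* Sketch: should the FE be triggered before the BE?
 * trigger_pre - dai_link uses SND_SOC_DPCM_TRIGGER_PRE (else TRIGGER_POST)
 * start_cmd   - cmd is START/RESUME/PAUSE_RELEASE (else STOP/SUSPEND/PAUSE_PUSH)
 */
static bool fe_goes_first(bool trigger_pre, bool start_cmd)
{
	/* TRIGGER_PRE:  FE first on start, BE first on stop.
	 * TRIGGER_POST: BE first on start, FE first on stop. */
	return trigger_pre == start_cmd;
}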
Signed-off-by: Ranjani Sridharan <ranjani.sridharan(a)linux.intel.com>
Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart(a)linux.intel.com>
Link: https://lore.kernel.org/r/20191104224812.3393-2-ranjani.sridharan@linux.int…
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
sound/soc/soc-pcm.c | 95 ++++++++++++++++++++++++++++++++++++++---------------
1 file changed, 68 insertions(+), 27 deletions(-)
diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
index 53fefa7..f7d4a77 100644
--- a/sound/soc/soc-pcm.c
+++ b/sound/soc/soc-pcm.c
@@ -2341,42 +2341,81 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
}
EXPORT_SYMBOL_GPL(dpcm_be_dai_trigger);
+static int dpcm_dai_trigger_fe_be(struct snd_pcm_substream *substream,
+ int cmd, bool fe_first)
+{
+ struct snd_soc_pcm_runtime *fe = substream->private_data;
+ int ret;
+
+ /* call trigger on the frontend before the backend. */
+ if (fe_first) {
+ dev_dbg(fe->dev, "ASoC: pre trigger FE %s cmd %d\n",
+ fe->dai_link->name, cmd);
+
+ ret = soc_pcm_trigger(substream, cmd);
+ if (ret < 0)
+ return ret;
+
+ ret = dpcm_be_dai_trigger(fe, substream->stream, cmd);
+ return ret;
+ }
+
+ /* call trigger on the frontend after the backend. */
+ ret = dpcm_be_dai_trigger(fe, substream->stream, cmd);
+ if (ret < 0)
+ return ret;
+
+ dev_dbg(fe->dev, "ASoC: post trigger FE %s cmd %d\n",
+ fe->dai_link->name, cmd);
+
+ ret = soc_pcm_trigger(substream, cmd);
+
+ return ret;
+}
+
static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd)
{
struct snd_soc_pcm_runtime *fe = substream->private_data;
- int stream = substream->stream, ret;
+ int stream = substream->stream;
+ int ret = 0;
enum snd_soc_dpcm_trigger trigger = fe->dai_link->trigger[stream];
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
switch (trigger) {
case SND_SOC_DPCM_TRIGGER_PRE:
- /* call trigger on the frontend before the backend. */
-
- dev_dbg(fe->dev, "ASoC: pre trigger FE %s cmd %d\n",
- fe->dai_link->name, cmd);
-
- ret = soc_pcm_trigger(substream, cmd);
- if (ret < 0) {
- dev_err(fe->dev,"ASoC: trigger FE failed %d\n", ret);
- goto out;
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ ret = dpcm_dai_trigger_fe_be(substream, cmd, true);
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ ret = dpcm_dai_trigger_fe_be(substream, cmd, false);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
}
-
- ret = dpcm_be_dai_trigger(fe, substream->stream, cmd);
break;
case SND_SOC_DPCM_TRIGGER_POST:
- /* call trigger on the frontend after the backend. */
-
- ret = dpcm_be_dai_trigger(fe, substream->stream, cmd);
- if (ret < 0) {
- dev_err(fe->dev,"ASoC: trigger FE failed %d\n", ret);
- goto out;
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ ret = dpcm_dai_trigger_fe_be(substream, cmd, false);
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ ret = dpcm_dai_trigger_fe_be(substream, cmd, true);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
}
-
- dev_dbg(fe->dev, "ASoC: post trigger FE %s cmd %d\n",
- fe->dai_link->name, cmd);
-
- ret = soc_pcm_trigger(substream, cmd);
break;
case SND_SOC_DPCM_TRIGGER_BESPOKE:
/* bespoke trigger() - handles both FE and BEs */
@@ -2385,10 +2424,6 @@ static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd)
fe->dai_link->name, cmd);
ret = soc_pcm_bespoke_trigger(substream, cmd);
- if (ret < 0) {
- dev_err(fe->dev,"ASoC: trigger FE failed %d\n", ret);
- goto out;
- }
break;
default:
dev_err(fe->dev, "ASoC: invalid trigger cmd %d for %s\n", cmd,
@@ -2397,6 +2432,12 @@ static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd)
goto out;
}
+ if (ret < 0) {
+ dev_err(fe->dev, "ASoC: trigger FE cmd: %d failed: %d\n",
+ cmd, ret);
+ goto out;
+ }
+
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
case SNDRV_PCM_TRIGGER_RESUME:
--
1.8.3

[PATCH 001/195] Revert "drm/sun4i: dsi: Change the start delay calculation"
by Yang Yingliang 16 Apr '20
From: Icenowy Zheng <icenowy(a)aosc.io>
[ Upstream commit a00d17e0a71ae2e4fdaac46e1c12785d3346c3f2 ]
This reverts commit da676c6aa6413d59ab0a80c97bbc273025e640b2.
The original commit adds a start parameter to the calculation of the
start delay, following some old BSP versions from Allwinner. However,
there are two ways to add this delay -- in the DSI controller or in the
TCON -- and adding it in both controllers won't work.
The code before this commit is picked from newer versions of the BSP
kernel, which carry a comment for the 1 that says "put start_delay to
tcon". Checking sun4i_tcon0_mode_set_cpu() in the sun4i_tcon driver
shows that it already adds this delay, so we shouldn't add it again in
the DSI controller, otherwise the timing won't match.
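As a worked example with hypothetical mode timings (illustrative only),
the reverted formula delay = vtotal - (vsync_end - vdisplay) + 1 gives:

	/* e.g. a 640x480 mode: vtotal = 525, vdisplay = 480, vsync_end = 492 */
	unsigned int delay = 525 - (492 - 480) + 1;	/* = 514 */

with the start_delay contribution itself supplied on the TCON side, as
noted above.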
Signed-off-by: Icenowy Zheng <icenowy(a)aosc.io>
Reviewed-by: Jagan Teki <jagan(a)amarulasolutions.com>
Signed-off-by: Maxime Ripard <mripard(a)kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001080253.6135-2-icenowy…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
index 97a0573..79eb11c 100644
--- a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+++ b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
@@ -357,8 +357,7 @@ static void sun6i_dsi_inst_init(struct sun6i_dsi *dsi,
static u16 sun6i_dsi_get_video_start_delay(struct sun6i_dsi *dsi,
struct drm_display_mode *mode)
{
- u16 start = clamp(mode->vtotal - mode->vdisplay - 10, 8, 100);
- u16 delay = mode->vtotal - (mode->vsync_end - mode->vdisplay) + start;
+ u16 delay = mode->vtotal - (mode->vsync_end - mode->vdisplay) + 1;
if (delay > mode->vtotal)
delay = delay % mode->vtotal;
--
1.8.3
From: Al Viro <viro(a)zeniv.linux.org.uk>
commit 6404674acd596de41fd3ad5f267b4525494a891a upstream.
Brown paperbag time: fetching ->i_uid/->i_mode really should've been
done from nd->inode. I even suggested that, but the reason for that has
slipped through the cracks and I went for dir->d_inode instead - it
made for a more "obvious" patch.
Analysis:
- at the entry into do_last() and all the way to step_into(): dir (aka
nd->path.dentry) is known not to have been freed; so's nd->inode and
it's equal to dir->d_inode unless we are already doomed to -ECHILD.
inode of the file to get opened is not known.
- after step_into(): inode of the file to get opened is known; dir
might be pointing to freed memory/be negative/etc.
- at the call of may_create_in_sticky(): guaranteed to be out of RCU
mode; inode of the file to get opened is known and pinned; dir might
be garbage.
The last was the reason for the original patch. Except that at the
do_last() entry we can be in RCU mode and it is possible that
nd->path.dentry->d_inode has already changed under us.
In that case we are going to fail with -ECHILD, but we need to be
careful; nd->inode is pointing to valid struct inode and it's the same
as nd->path.dentry->d_inode in "won't fail with -ECHILD" case, so we
should use that.
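In code terms, a minimal sketch of the two fetches (illustrative
fragment only; the actual change is the two-line diff below):

	/* Unsafe under RCU-walk: the dentry's inode may change under us. */
	kuid_t dir_uid_bad = nd->path.dentry->d_inode->i_uid;

	/* Safe: nd->inode always points at a valid struct inode, and equals
	 * nd->path.dentry->d_inode in the "won't fail with -ECHILD" case. */
	kuid_t dir_uid_ok = nd->inode->i_uid;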
Reported-by: "Rantala, Tommi T. (Nokia - FI/Espoo)" <tommi.t.rantala(a)nokia.com>
Reported-by: syzbot+190005201ced78a74ad6(a)syzkaller.appspotmail.com
Wearing-brown-paperbag: Al Viro <viro(a)zeniv.linux.org.uk>
Cc: stable(a)kernel.org
Fixes: d0cb50185ae9 ("do_last(): fetch directory ->i_mode and ->i_uid before it's too late")
Signed-off-by: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/namei.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/namei.c b/fs/namei.c
index 1dd68b3..18ddae1 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -3266,8 +3266,8 @@ static int do_last(struct nameidata *nd,
struct file *file, const struct open_flags *op)
{
struct dentry *dir = nd->path.dentry;
- kuid_t dir_uid = dir->d_inode->i_uid;
- umode_t dir_mode = dir->d_inode->i_mode;
+ kuid_t dir_uid = nd->inode->i_uid;
+ umode_t dir_mode = nd->inode->i_mode;
int open_flag = op->open_flag;
bool will_truncate = (open_flag & O_TRUNC) != 0;
bool got_write = false;
--
1.8.3

[PATCH 001/137] can, slip: Protect tty->disc_data in write_wakeup and close with RCU
by Yang Yingliang 16 Apr '20
From: Richard Palethorpe <rpalethorpe(a)suse.com>
[ Upstream commit 0ace17d56824165c7f4c68785d6b58971db954dd ]
write_wakeup can happen in parallel with close/hangup where tty->disc_data
is set to NULL and the netdevice is freed, thus also freeing
disc_data. write_wakeup accesses disc_data so we must prevent close from
freeing the netdev while write_wakeup has a non-NULL view of
tty->disc_data.
We also need to make sure that accesses to disc_data are atomic, all of
which can be done with RCU.
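The resulting pattern, as a minimal sketch (fragment mirroring the
patch below, not new driver code):

	/* Reader (write_wakeup): pin disc_data while it is used. */
	rcu_read_lock();
	sl = rcu_dereference(tty->disc_data);
	if (sl)
		schedule_work(&sl->tx_work);
	rcu_read_unlock();

	/* Writer (close/hangup): publish NULL, then wait out all readers
	 * before the netdev (and with it sl) can be freed. */
	rcu_assign_pointer(tty->disc_data, NULL);
	synchronize_rcu();
	flush_work(&sl->tx_work);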
This problem was found by Syzkaller on SLCAN, but the same issue is
reproducible with the SLIP line discipline using an LTP test based on the
Syzkaller reproducer.
A fix which didn't use RCU was posted by Hillf Danton.
Fixes: 661f7fda21b1 ("slip: Fix deadlock in write_wakeup")
Fixes: a8e83b17536a ("slcan: Port write_wakeup deadlock fix from slip")
Reported-by: syzbot+017e491ae13c0068598a(a)syzkaller.appspotmail.com
Signed-off-by: Richard Palethorpe <rpalethorpe(a)suse.com>
Cc: Wolfgang Grandegger <wg(a)grandegger.com>
Cc: Marc Kleine-Budde <mkl(a)pengutronix.de>
Cc: "David S. Miller" <davem(a)davemloft.net>
Cc: Tyler Hall <tylerwhall(a)gmail.com>
Cc: linux-can(a)vger.kernel.org
Cc: netdev(a)vger.kernel.org
Cc: linux-kernel(a)vger.kernel.org
Cc: syzkaller(a)googlegroups.com
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/can/slcan.c | 12 ++++++++++--
drivers/net/slip/slip.c | 12 ++++++++++--
2 files changed, 20 insertions(+), 4 deletions(-)
diff --git a/drivers/net/can/slcan.c b/drivers/net/can/slcan.c
index cf0769a..b2e5bca 100644
--- a/drivers/net/can/slcan.c
+++ b/drivers/net/can/slcan.c
@@ -343,9 +343,16 @@ static void slcan_transmit(struct work_struct *work)
*/
static void slcan_write_wakeup(struct tty_struct *tty)
{
- struct slcan *sl = tty->disc_data;
+ struct slcan *sl;
+
+ rcu_read_lock();
+ sl = rcu_dereference(tty->disc_data);
+ if (!sl)
+ goto out;
schedule_work(&sl->tx_work);
+out:
+ rcu_read_unlock();
}
/* Send a can_frame to a TTY queue. */
@@ -640,10 +647,11 @@ static void slcan_close(struct tty_struct *tty)
return;
spin_lock_bh(&sl->lock);
- tty->disc_data = NULL;
+ rcu_assign_pointer(tty->disc_data, NULL);
sl->tty = NULL;
spin_unlock_bh(&sl->lock);
+ synchronize_rcu();
flush_work(&sl->tx_work);
/* Flush network side */
diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c
index 77207f9..93f303e 100644
--- a/drivers/net/slip/slip.c
+++ b/drivers/net/slip/slip.c
@@ -452,9 +452,16 @@ static void slip_transmit(struct work_struct *work)
*/
static void slip_write_wakeup(struct tty_struct *tty)
{
- struct slip *sl = tty->disc_data;
+ struct slip *sl;
+
+ rcu_read_lock();
+ sl = rcu_dereference(tty->disc_data);
+ if (!sl)
+ goto out;
schedule_work(&sl->tx_work);
+out:
+ rcu_read_unlock();
}
static void sl_tx_timeout(struct net_device *dev)
@@ -882,10 +889,11 @@ static void slip_close(struct tty_struct *tty)
return;
spin_lock_bh(&sl->lock);
- tty->disc_data = NULL;
+ rcu_assign_pointer(tty->disc_data, NULL);
sl->tty = NULL;
spin_unlock_bh(&sl->lock);
+ synchronize_rcu();
flush_work(&sl->tx_work);
/* VSV = very important to remove timers */
--
1.8.3

[PATCH 1/3] mm/memory_hotplug: simplify and fix check_hotplug_memory_range()
by Yang Yingliang 16 Apr '20
From: David Hildenbrand <david(a)redhat.com>
mainline inclusion
from mainline-5.3-rc1
commit cec3ebd083d4e8d161d0b18894c78e3311bcd026
category: bugfix
bugzilla: 29418
CVE: NA
-------------------------------------------------
Patch series "mm/memory_hotplug: Factor out memory block device handling", v3.
We only want memory block devices for memory to be onlined/offlined
(add/remove from the buddy). This is required so user space can
online/offline memory and kdump gets notified about newly onlined
memory.
Let's factor out creation/removal of memory block devices. This helps
to further clean up arch_add_memory/arch_remove_memory() and to make
implementation of new features easier - especially sub-section memory
hot add from Dan.
Anshuman Khandual is currently working on arch_remove_memory(). I added
a temporary solution via "arm64/mm: Add temporary arch_remove_memory()
implementation", that is sufficient as a firsts tep in the context of
this series. (we don't cleanup page tables in case anything goes wrong
already)
Did a quick sanity test with DIMM plug/unplug, making sure all devices
and sysfs links properly get added/removed. Compile tested on s390x and
x86-64.
This patch (of 11):
By converting start and size to page granularity, we actually ignore
unaligned parts within a page instead of properly bailing out with an
error.
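A worked example (hypothetical values: 4 KiB pages, 128 MiB memory
block size) showing how a sub-page remainder was silently dropped:

	u64 size = (128ULL << 20) + 2048;	/* 128 MiB + 2 KiB: not block aligned */
	u64 nr_pages = size >> PAGE_SHIFT;	/* = 32768; the 2 KiB are shifted away */
	/* IS_ALIGNED(nr_pages, block_nr_pages) passes (32768 % 32768 == 0),
	 * so the old check accepted the range; comparing start/size in
	 * bytes, as below, rejects it. */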
Link: http://lkml.kernel.org/r/20190527111152.16324-2-david@redhat.com
Signed-off-by: David Hildenbrand <david(a)redhat.com>
Reviewed-by: Dan Williams <dan.j.williams(a)intel.com>
Reviewed-by: Wei Yang <richardw.yang(a)linux.intel.com>
Reviewed-by: Pavel Tatashin <pasha.tatashin(a)soleen.com>
Reviewed-by: Oscar Salvador <osalvador(a)suse.de>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: Qian Cai <cai(a)lca.pw>
Cc: Arun KS <arunks(a)codeaurora.org>
Cc: Mathieu Malaterre <malat(a)debian.org>
Cc: Alex Deucher <alexander.deucher(a)amd.com>
Cc: Andrew Banman <andrew.banman(a)hpe.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Anshuman Khandual <anshuman.khandual(a)arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
Cc: Baoquan He <bhe(a)redhat.com>
Cc: Benjamin Herrenschmidt <benh(a)kernel.crashing.org>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Catalin Marinas <catalin.marinas(a)arm.com>
Cc: Chintan Pandya <cpandya(a)codeaurora.org>
Cc: Christophe Leroy <christophe.leroy(a)c-s.fr>
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: "David S. Miller" <davem(a)davemloft.net>
Cc: Fenghua Yu <fenghua.yu(a)intel.com>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Heiko Carstens <heiko.carstens(a)de.ibm.com>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Ingo Molnar <mingo(a)kernel.org>
Cc: Jonathan Cameron <Jonathan.Cameron(a)huawei.com>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: Jun Yao <yaojun8558363(a)gmail.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Cc: Logan Gunthorpe <logang(a)deltatee.com>
Cc: Mark Brown <broonie(a)kernel.org>
Cc: Mark Rutland <mark.rutland(a)arm.com>
Cc: Masahiro Yamada <yamada.masahiro(a)socionext.com>
Cc: Michael Ellerman <mpe(a)ellerman.id.au>
Cc: Mike Rapoport <rppt(a)linux.vnet.ibm.com>
Cc: "mike.travis(a)hpe.com" <mike.travis(a)hpe.com>
Cc: Nicholas Piggin <npiggin(a)gmail.com>
Cc: Paul Mackerras <paulus(a)samba.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: "Rafael J. Wysocki" <rafael(a)kernel.org>
Cc: Rich Felker <dalias(a)libc.org>
Cc: Rob Herring <robh(a)kernel.org>
Cc: Robin Murphy <robin.murphy(a)arm.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Tony Luck <tony.luck(a)intel.com>
Cc: Vasily Gorbik <gor(a)linux.ibm.com>
Cc: Will Deacon <will.deacon(a)arm.com>
Cc: Yoshinori Sato <ysato(a)users.sourceforge.jp>
Cc: Yu Zhao <yuzhao(a)google.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/memory_hotplug.c | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8a6ad9b..bfd148d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1059,16 +1059,11 @@ int try_online_node(int nid)
static int check_hotplug_memory_range(u64 start, u64 size)
{
- unsigned long block_sz = memory_block_size_bytes();
- u64 block_nr_pages = block_sz >> PAGE_SHIFT;
- u64 nr_pages = size >> PAGE_SHIFT;
- u64 start_pfn = PFN_DOWN(start);
-
/* memory range must be block size aligned */
- if (!nr_pages || !IS_ALIGNED(start_pfn, block_nr_pages) ||
- !IS_ALIGNED(nr_pages, block_nr_pages)) {
+ if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) ||
+ !IS_ALIGNED(size, memory_block_size_bytes())) {
pr_err("Block size [%#lx] unaligned hotplug range: start %#llx, size %#llx",
- block_sz, start, size);
+ memory_block_size_bytes(), start, size);
return -EINVAL;
}
--
1.8.3